Wednesday Seminars

December

Seminar13123
Abstract: A popular model in knowledge updating is one where we receive true information from a trusted source. The work of Jan Plaza and others, and the AGM theory, both use this paradigm. This is also the method used by Geanakoplos and Polemarchakis, and by Parikh and Krasucki, in their "activist" versions of Aumann's result. However, the literature is rife with examples where information is revealed inadvertently, so that the one revealing information is not doing it deliberately but is tricked into it. We give examples from Shakespeare, Saki and Birbal. A second departure from the paradigm arises when the two sources do not share objectives, so that the Gricean assumption of cooperation does not hold. Nonetheless, some information can get across when the interests of the two parties overlap at least in part. We report on some theoretical work in this area, some of it falling under the heading of "cheap talk," and offer a model.
Speaker Profile: Prof. Rohit Parikh is a Distinguished Professor of Computer Science, Mathematics and Philosophy at the Graduate Center, City University of New York. He is also affiliated with the Department of Computer Science at Brooklyn College. Prof. Parikh has made deep contributions to a wide range of areas including formal languages, recursive function theory, proof theory, nonstandard analysis, logic of programs, logic of knowledge, philosophy of language, belief revision and game theory. The theme that concerns most of his recent papers is Social Software, an analysis of social procedures, from elections to cake cutting, using ideas from Computer Science, Game Theory and Logic.
Organization: City University of New York and Brooklyn College
References: To be updated

Seminar13122
Abstract: The aim of program synthesis is to automatically generate a program that satisfies a given specification, in contrast to program verification, for which both the specification and the program are given as input. The underlying goal is to improve program reliability, optimize design constraints like time, reduce human error, and do away with low-level programming tasks by replacing them with the design of high-level specifications. The old dream of automatic synthesis, shared among others by Church, is difficult to realize for general-purpose programming languages. However, in recent years there has been renewed interest in feasible methods for the synthesis of application-specific programs, which have been applied, for instance, to reactive systems, distributed systems, and programs manipulating arithmetic or concurrent data structures. Reactive systems are non-terminating programs that continuously interact with their environment. They arise both as hardware and software, and are usually part of safety-critical systems, for example microprocessors, air traffic controllers, programs to monitor medical devices, or nuclear plants. It is therefore crucial to guarantee their correctness. The temporal logic LTL is a very important abstract formalism to describe properties of reactive systems. As shown by Pnueli and Rosner in 1989, the synthesis of reactive systems from LTL specifications is a 2-EXPTIME-complete problem. In this talk, I will present recent progress in LTL synthesis based on a bounded synthesis approach inspired by bounded model checking, and show that the high worst-case time complexity of LTL synthesis does not preclude its practical feasibility. This is achieved by exploiting the structure underlying the automata constructions used to solve the synthesis problem.
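The last step of a bounded synthesis loop is solving a game of bounded size in which the system player must avoid unsafe states forever. The sketch below illustrates only that final game-solving step (the automata construction from the LTL formula is elided), and the toy game itself is invented for the example:

```python
# Sketch: solving a safety game by computing the attractor of the unsafe
# states for the environment player. Bounded synthesis reduces LTL synthesis
# to a sequence of such games of increasing size; this toy game is only an
# illustration of that final step, not of the automata construction itself.

def solve_safety_game(states, env_states, edges, unsafe):
    """Return the states from which the system player can stay safe forever.
    States in `env_states` move to an arbitrary successor; the remaining
    (system) states may choose their successor."""
    attr = set(unsafe)                  # states the environment can force into `unsafe`
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in attr:
                continue
            succs = edges[s]
            if s in env_states:
                forced = any(t in attr for t in succs)
            else:
                forced = all(t in attr for t in succs)
            if forced:
                attr.add(s)
                changed = True
    return set(states) - attr           # the system player's winning region

# A 4-state toy game: from states 0 and 2 the system can avoid unsafe state 3
# by always choosing the 0 <-> 2 loop, never passing through environment state 1.
states = {0, 1, 2, 3}
env_states = {1}
edges = {0: [1, 2], 1: [0, 3], 2: [0], 3: [3]}
print(sorted(solve_safety_game(states, env_states, edges, {3})))   # [0, 2]
```

A strategy for the system (here: from 0, go to 2) can then be read off the winning region; that extracted strategy is the synthesized implementation.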
Speaker Profile: Emmanuel Filiot (PhD 2008, INRIA Lille, France) has been an FNRS research associate in the computer science department of Brussels University (ULB) since September 2013. His research interests cover the theoretical foundations of data-centric systems and the synthesis of reactive systems from temporal specifications, based on automata, transducers and logics for strings and trees, and computational game theory. He teaches foundations of computer science at Brussels University.
Organization: Brussels University
References: To be updated

Seminar13121
Abstract: Finite state automata are language acceptors: they map strings read on their input tape to {0,1}. Transducers extend automata with an output tape, and can therefore map strings to other strings. This talk will first survey some of the main results of the theory of finite state transducers. Then several major extensions will be presented that either increase the computational power of finite state transducers (e.g. by allowing the input head to move in both directions), or process more general structures, e.g. infinite strings or trees. Finally, an application to streaming transformations and its connection with strategy synthesis in game theory for reactive systems (Church's problem) will be shown.
Speaker Profile: Emmanuel Filiot (PhD 2008, INRIA Lille, France) has been an FNRS research associate in the computer science department of Brussels University (ULB) since September 2013. His research interests cover the theoretical foundations of data-centric systems and the synthesis of reactive systems from temporal specifications, based on automata, transducers and logics for strings and trees, and computational game theory. He teaches foundations of computer science at Brussels University.
Organization: Brussels University
References: To be updated

Seminar13112
Abstract: Computing aggregates such as the sum or the count of solutions is a natural task in databases. Conjunctive queries are a basic class of queries equivalent to select-project-join queries and strongly related to constraint satisfaction problems. Most research on conjunctive queries has focused on the so-called Boolean conjunctive query problem, CQ for short, which is, given a query and a database, to decide if the query has any answers with respect to the database. While CQ is hard in general, many 'islands of tractability' have been found based on the hypergraphs associated to queries. Recently, Pichler and Skritek have shown that these hypergraph-based techniques are not sufficient to guarantee tractability for the associated counting problem #CQ, i.e., given a conjunctive query and a database, determine the number of answers to the query. I will give a short introduction to conjunctive queries and survey known results. I will also present an exact characterization of the tractability frontier of #CQ for well-known classes of conjunctive queries. Joint work with Stefan Mengel (JCSS and ICDT 2013).
References:
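To make #CQ concrete: the naive way to count answers is to try every assignment of database constants to the query variables, which is exponential in the number of variables. The query, relation and variable names below are invented for the illustration; the talk's subject is characterizing when the query's hypergraph lets one do much better than this:

```python
from itertools import product

# Brute-force #CQ: count the answers to a conjunctive query by enumerating
# all assignments of domain values to the query variables.

def count_answers(variables, atoms, relations, domain):
    """atoms: list of (relation_name, (var, ...)) pairs."""
    count = 0
    for values in product(domain, repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(tuple(assignment[v] for v in vs) in relations[r]
               for r, vs in atoms):
            count += 1
    return count

# Q(x, y, z) :- R(x, y), R(y, z)   -- counts directed paths of length 2 in R.
R = {(1, 2), (2, 3), (2, 4)}
print(count_answers(['x', 'y', 'z'],
                    [('R', ('x', 'y')), ('R', ('y', 'z'))],
                    {'R': R},
                    domain={1, 2, 3, 4}))   # 2  (paths 1-2-3 and 1-2-4)
```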
Seminar13111
Abstract: Among the applications of algorithmic game theory (AGT) are sponsored search auctions such as those of Google's AdWords. In online advertisement systems, advertisers are repeatedly involved in auctions to acquire advertisement spaces. Classical models of private value auctions assume that bidders know their own private value for the slot being auctioned. We explore games where players have a private value but can only learn this value through experimentation, a scenario that is typical in ad auctions. We consider this question in a repeated game context, where early participation in the auction can help bidders learn their own value. We consider what makes a good bidding strategy for a player in this game, and show that with low enough competition new bidders will enter and experiment, while with a somewhat higher level of competition, initial credit offered by the platform can encourage experimentation, and hence ultimately increase revenue.
References:
Seminar13101
Abstract: This paper presents a novel approach to the design verification of Software Product Lines (SPL). The proposed approach assumes that the requirements and designs at the feature level are modeled as finite state machines with variability information. The variability information at the requirement and design levels is expressed differently and at different levels of abstraction. The proposed approach also supports verification of SPLs in which new features and variability may be added incrementally. Given the design and requirements of an SPL, the proposed design verification method ensures that every product at the design level behaviourally conforms to a product at the requirement level. The conformance procedure is compositional in the sense that the verification of an entire SPL consisting of multiple features is reduced to the verification of the individual features. The method has been implemented and demonstrated in a prototype tool SPLEnD (SPL Engine for Design Verification) on a couple of fairly large case studies.
References:
Seminar13092
Abstract: This paper presents a novel approach to the design verification of Software Product Lines (SPL). The proposed approach assumes that the requirements and designs at the feature level are modeled as finite state machines with variability information. The variability information at the requirement and design levels is expressed differently and at different levels of abstraction. The proposed approach also supports verification of SPLs in which new features and variability may be added incrementally. Given the design and requirements of an SPL, the proposed design verification method ensures that every product at the design level behaviourally conforms to a product at the requirement level. The conformance procedure is compositional in the sense that the verification of an entire SPL consisting of multiple features is reduced to the verification of the individual features. The method has been implemented and demonstrated in a prototype tool SPLEnD (SPL Engine for Design Verification) on a couple of fairly large case studies.
References:
Seminar13091
Abstract: In a Software Product Line (SPL), the central notion of implementability provides the requisite connection between specifications (feature sets) and their implementations (component sets), leading to the definition of products. While it appears to be a simple extension (to sets) of the traceability relation between components and features, it actually involves several subtle issues which are overlooked in the definitions in the existing literature. In this paper, we give a precise and formal definition of implementability over a fairly expressive traceability relation to resolve these issues. The consequent definition of products in the given SPL naturally entails a set of useful analysis problems that are either refinements of known problems or completely novel. We also propose a new approach to solving these analysis problems by encoding them as Quantified Boolean Formulas (QBF) and solving them through Quantified Satisfiability (QSAT) solvers. The methodology scales much better than the SAT-based solutions hinted at in the literature, and is demonstrated through a prototype tool called SPLANE (SPL Analysis Engine) on a couple of fairly large case studies.
References:
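The paper's encoding hands QBF instances to a QSAT solver. The toy evaluator below just unfolds the quantifier prefix by brute force; real QSAT solvers are far cleverer, but the semantics being decided is exactly this (the formula syntax here is an ad-hoc choice for the sketch, not SPLANE's):

```python
# A brute-force evaluator for Quantified Boolean Formulas in prenex form.

def eval_qbf(prefix, matrix, assignment=None):
    """prefix: list of ('forall' | 'exists', var) pairs;
    matrix: a function from an assignment dict to bool."""
    assignment = dict(assignment or {})
    if not prefix:
        return matrix(assignment)
    quant, var = prefix[0]
    branches = []
    for value in (False, True):
        assignment[var] = value
        branches.append(eval_qbf(prefix[1:], matrix, assignment))
    return all(branches) if quant == 'forall' else any(branches)

# Quantifier order matters:
# forall x exists y. (x <-> y) is true; exists y forall x. (x <-> y) is not.
phi = lambda a: a['x'] == a['y']
print(eval_qbf([('forall', 'x'), ('exists', 'y')], phi))   # True
print(eval_qbf([('exists', 'y'), ('forall', 'x')], phi))   # False
```

The alternation in the second prefix is what makes QSAT PSPACE-complete rather than NP-complete, and it is precisely this alternation that lets the paper's analysis problems quantify over products and implementations in one formula.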
Seminar13083
Abstract: In the $k$-server problem, $k$ servers occupy points in a metric space. An adversary presents a sequence of online requests, each of which is a point in the metric space. To serve the current request, the algorithm has to move one of the servers to the requested point, incurring a cost equal to the distance traveled by the server. The weighted $k$-server problem is a generalization of the $k$-server problem in which the servers have weights, and the cost of moving a server of weight $\beta_i$ through a distance $d$ is $\beta_i \cdot d$. In this talk I will discuss techniques for proving upper bounds for randomized memoryless algorithms for the weighted server problem on the uniform metric space. This models paging where pages may have different write costs. I will exhibit an algorithm that always outputs a solution within roughly $2^{2^k}$ times the optimal. The interesting thing is that the approximation factor is a function of $k$ alone, and not of the number of points in the metric space or the length of the request sequence. Previously such a result was known only for $k=2$. Our result also significantly improves the best known upper bound for deterministic algorithms. I will also present a partial proof hinting at the optimality of these results. This is joint work with Sundar Vishwanathan.
References:
Seminar13082
Abstract: This talk is centered around our recent model of nested words as a representation of data with both a linear ordering and a hierarchically nested matching of items. Such dual structure occurs in diverse corners of computer science, ranging from executions of structured programs, where there is a natural well-nested matching of entries to and exits from functions and procedures, to XML documents, with the well-nested structure given by start-tags matched with end-tags. Finite-state acceptors for nested words define the class of regular languages of nested words, which has all the appealing theoretical properties that the class of classical regular word languages enjoys: deterministic nested word automata are as expressive as their nondeterministic counterparts; the class is closed under union, intersection, complementation, concatenation, and Kleene star; membership, emptiness, language inclusion, and language equivalence are all decidable; and definability in monadic second-order logic corresponds exactly to finite-state recognizability. We argue that for algorithmic linear-time verification of pushdown systems, instead of viewing the program as a context-free language over words, one should view it as a regular language of nested words; this allows model checking of many properties (such as stack inspection and pre/post conditions) that are not expressible in existing specification logics. We also discuss nested trees, obtained by adding nesting edges along paths of trees, and finite-state acceptors over nested trees. This leads to a very expressive fixpoint logic (equivalently, alternating parity nested tree automata) with a decidable model checking problem for pushdown systems.
References:
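A nested word carries both a linear order and a call/return matching. The toy checker below walks such a word with a stack of pending calls (the hierarchical part) and decides a stack-inspection property of the kind mentioned above: "no `write` happens while inside a `critical` call". The ('call'/'ret'/'internal') tagging scheme and the procedure names are assumptions made for this sketch:

```python
# A stack-inspection check over a nested word, in the spirit of a
# deterministic nested word automaton simulation.

def no_write_inside_critical(word):
    stack = []                       # pending calls = the hierarchical structure
    for kind, label in word:
        if kind == 'call':
            stack.append(label)
        elif kind == 'ret':
            if not stack or stack.pop() != label:
                return False         # the word is not well-nested
        elif label == 'write' and 'critical' in stack:
            return False             # stack inspection: write inside critical
    return not stack                 # no unmatched calls may remain

ok  = [('call', 'main'), ('internal', 'write'), ('call', 'critical'),
       ('internal', 'read'), ('ret', 'critical'), ('ret', 'main')]
bad = [('call', 'critical'), ('internal', 'write'), ('ret', 'critical')]
print(no_write_inside_critical(ok), no_write_inside_critical(bad))   # True False
```

Viewed as a word language over the flattened alphabet this property is context-free, but over nested words it is regular; that shift is what makes such properties amenable to the automata-theoretic constructions of the talk.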
Seminar13081
Abstract: In this talk, we will discuss decision problems for linear recurrence sequences (LRS), which are generalizations of the Fibonacci sequence. Starting from the long-standing open Skolem problem (which asks if a given LRS has a zero), we will look at several variants and survey some recent results in this area. Further, we will motivate these highly mathematical problems by showing how they occur rather naturally in the domains of program termination and probabilistic verification.
References:
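To see where the asymmetry in the Skolem problem lies: finding a zero of an LRS is a simple search, but certifying that no zero exists is the hard part. The bounded search below is therefore only a semi-decision procedure:

```python
# A linear recurrence sequence is determined by its coefficients and its
# initial values. We can enumerate terms and look for a zero, but a failed
# bounded search proves nothing -- which is exactly the Skolem problem.

def lrs_terms(coeffs, initial, n):
    """u_{k+d} = coeffs[0]*u_{k+d-1} + ... + coeffs[d-1]*u_k, d = len(coeffs)."""
    u = list(initial)
    while len(u) < n:
        u.append(sum(c * v for c, v in zip(coeffs, reversed(u[-len(coeffs):]))))
    return u[:n]

def first_zero(coeffs, initial, bound):
    for i, term in enumerate(lrs_terms(coeffs, initial, bound)):
        if term == 0:
            return i
    return None      # no zero found -- but none is thereby *proved* absent

# Fibonacci (1, 1, 2, 3, 5, ...) never hits zero, while
# u_{k+2} = u_{k+1} - u_k from 1, 1 cycles 1, 1, 0, -1, -1, 0, ...
print(first_zero([1, 1], [1, 1], 50))    # None
print(first_zero([1, -1], [1, 1], 50))   # 2
```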
Seminar13072
Abstract: Cyber-Physical Systems are the next generation of embedded systems, with tight integration of computing, communication and control of "messy" plants. I will describe our recent efforts in modeling for scheduling and control of closed-loop Cyber-Physical Systems across the domains of medical devices, energy-efficient buildings, wireless control networks and programmable automotive systems. The design of bug-free and safe medical device software is challenging, especially in complex implantable devices that control and actuate organs whose response is not fully understood. Safety recalls of pacemakers and implantable cardioverter defibrillators between 1990 and 2000 affected over 600,000 devices. Of these, 200,000 (41%) were due to firmware (i.e. software) issues, which continue to increase in frequency. There is currently no formal methodology or open experimental platform to test and verify the correct operation of medical device software within the closed-loop context of the patient. I will describe our efforts to develop the foundations of modeling, synthesis and development of verified medical device software and systems from verified closed-loop models of the pacemaker and the heart. With the goal of developing a toolchain for certifiable medical device software, I will walk through (a) formal modeling of the heart and pacemaker in timed automata, (b) verification of the closed-loop system, (c) automatic model translation from UPPAAL to Stateflow for simulation-based testing, and (d) automatic code generation for platform-level testing of the heart and real pacemakers. As time permits, I will describe our investigations in energy-efficient building automation, in which we coordinate scheduling of controllers for peak power minimization across multiple plants. We will also briefly discuss in-vehicle and networked vehicle-to-vehicle programmable automotive architectures for the future.
Speaker Profile: Rahul Mangharam is the Stephen J. Angello Chair and Assistant Professor in the Dept. of Electrical & Systems Engineering and the Dept. of Computer & Information Science at the University of Pennsylvania. He directs the Real-Time and Embedded Systems Lab at Penn. His interests are in real-time scheduling algorithms for networked embedded systems, with applications in automotive systems, medical devices and industrial control networks. He received his Ph.D. in Electrical & Computer Engineering from Carnegie Mellon University, where he also received his M.S. and B.S., in 2007, 2002 and 2000 respectively. In 2002, he was a member of technical staff in the Ultra-Wideband Wireless Group at Intel Labs. He was an international scholar in the Wireless Systems Group at IMEC, Belgium, in 2003. He has worked on ASIC chip design at Marconi Communications (1999) and Gigabit Ethernet at Apple Computer Inc. (2000). Rahul received the 2013 NSF CAREER Award and the 2012 Intel Early Faculty Career Award, and was selected by the National Academy of Engineering for the 2012 US Frontiers of Engineering.
References: To be updated

Seminar13071
Abstract: [This talk was a video screening of a talk by Rajeev Alur.] Modern software engineering relies heavily on clearly specified interfaces for separation of concerns among designers implementing components and programmers using those components. The need for interfaces is evident for assembling complex systems from components, but more so in control applications, where the components are designed by control engineers using mathematical modeling tools and used by software executing on digital computers. However, the notion of an interface for a control component must incorporate some information about timing, and standard programming languages do not provide a way of capturing such resource requirements. This talk will describe how finite automata over infinite words can be used to define interfaces for control components. When the resource is allocated in a time-triggered manner, the allocation from the perspective of an individual component can be described by an infinite word over a suitably chosen alphabet. The control engineer can express the interface of the component as an omega-regular language that contains all schedules that meet the performance requirement. The software must then ensure that the runtime allocation is in this language. The main benefit of this approach is composability: conjoining specifications of two components corresponds to a simple language-theoretic operation on interfaces. We have demonstrated how to automatically compute automata for performance requirements such as exponential stability and settling time for LQG control designs. The framework is supported by a toolkit, RTComposer, implemented on top of Real-Time Java. The benefits of the approach will be demonstrated using applications to wireless sensor/actuator networks based on the WirelessHART protocol and to distributed control systems based on the Controller Area Network (CAN) bus.
References:
Seminar13063
Abstract: [This is a continuation of the talk given by the speaker on June 12.] First Order Logic (FOL) is one of the most natural logics in which a wide variety of mathematical statements can be written. For example, various properties of widely used structures like groups, rings, fields, partial orders and lattices, in fact their very definitions, can be expressed in FOL. In computer science, the flourishing area of descriptive complexity is all about giving logical characterizations of complexity classes that are defined using other mechanisms like Turing machines and circuits, and FOL is again useful in capturing several interesting complexity classes. SQL (Structured Query Language), one of the most widely used database query languages, is exactly FOL when aggregation and other counting constructs are dropped. Ironically, useful as it is in many contexts, FOL is too weak to express many natural properties that are of mathematical interest. In the context of graphs, FOL is incapable of expressing properties like connectedness, treeness and k-colourability, among many others. In fact, as trivial a property as the evenness of the size of a graph is FO-inexpressible! In this talk, we shall look at these and other inexpressibility results and some techniques used in proving them. We shall also try to give an intuitive idea of why FOL must show this weakness in its expressiveness. Time permitting, we shall delve into the inexpressibility results for FOL over arbitrary structures (i.e. infinite structures included) and present, among other techniques, some new techniques (having limited applicability) based on some of our recent results in the area of model theory.
References:
Seminar13062
Abstract: First Order Logic (FOL) is one of the most natural logics in which a wide variety of mathematical statements can be written. For example, various properties of widely used structures like groups, rings, fields, partial orders and lattices, in fact their very definitions, can be expressed in FOL. In computer science, the flourishing area of descriptive complexity is all about giving logical characterizations of complexity classes that are defined using other mechanisms like Turing machines and circuits, and FOL is again useful in capturing several interesting complexity classes. SQL (Structured Query Language), one of the most widely used database query languages, is exactly FOL when aggregation and other counting constructs are dropped. Ironically, useful as it is in many contexts, FOL is too weak to express many natural properties that are of mathematical interest. In the context of graphs, FOL is incapable of expressing properties like connectedness, treeness and k-colourability, among many others. In fact, as trivial a property as the evenness of the size of a graph is FO-inexpressible! In this talk, we shall look at these and other inexpressibility results and some techniques used in proving them. We shall also try to give an intuitive idea of why FOL must show this weakness in its expressiveness. Time permitting, we shall delve into the inexpressibility results for FOL over arbitrary structures (i.e. infinite structures included) and present, among other techniques, some new techniques (having limited applicability) based on some of our recent results in the area of model theory.
References:
Seminar13061
Abstract: [This talk was a video screening of an invited talk at KR 2012 (Intl. Conference on Principles of Knowledge Representation and Reasoning).] One of the surprising developments in the area of program verification in the late 20th century is the emergence of linear temporal logic (LTL), a logic that emerged in philosophical studies of free will, as the canonical language for describing the temporal behavior of computer systems. LTL, however, is not expressive enough for industrial applications. The first decade of the 21st century saw the emergence of industrial temporal logics such as ForSpec, PSL, and SVA. These logics, however, are not clean enough to serve as objects of theoretical study. This talk will describe the rise and fall of LTL, and will propose a new canonical temporal logic: linear dynamic logic (LDL).
References:
Seminar13054
Abstract: [This is a continuation of the talk given by the speaker on May 22.] Metric Temporal Logic, MTL^{pw}[U_I, S_I], is amongst the most studied real-time logics. It exhibits considerable diversity in expressiveness and decidability properties based on the permitted set of modalities and the nature of the time interval constraints I. In this paper, we sharpen the decidability results by showing that the satisfiability of MTL^{pw}[U_I, S_{NS}] (where NS denotes non-singular intervals) is also decidable over finite pointwise strictly monotonic time. We give a satisfiability-preserving reduction from the logic MTL^{pw}[U_I, S_{NS}] to the decidable logic MTL^{pw}[U_I] of Ouaknine and Worrell using the technique of temporal projections. We also investigate the decidability of the unary fragment MTL^{pw}[F_I, P_I] (a question posed by A. Rabinovich) and show that MTL^{c}[F_I] over continuous time as well as MTL^{pw}[F_I, P_I] over finite pointwise time are both undecidable. Moreover, MTL^{pw}[F_I] over finite pointwise models already has a non-primitive-recursive lower bound for satisfiability checking. We also compare the expressive powers of some of these fragments using the technique of EF games for MTL.
References:
Seminar13053
Abstract: Metric Temporal Logic, MTL^{pw}[U_I, S_I], is amongst the most studied real-time logics. It exhibits considerable diversity in expressiveness and decidability properties based on the permitted set of modalities and the nature of the time interval constraints I. In this paper, we sharpen the decidability results by showing that the satisfiability of MTL^{pw}[U_I, S_{NS}] (where NS denotes non-singular intervals) is also decidable over finite pointwise strictly monotonic time. We give a satisfiability-preserving reduction from the logic MTL^{pw}[U_I, S_{NS}] to the decidable logic MTL^{pw}[U_I] of Ouaknine and Worrell using the technique of temporal projections. We also investigate the decidability of the unary fragment MTL^{pw}[F_I, P_I] (a question posed by A. Rabinovich) and show that MTL^{c}[F_I] over continuous time as well as MTL^{pw}[F_I, P_I] over finite pointwise time are both undecidable. Moreover, MTL^{pw}[F_I] over finite pointwise models already has a non-primitive-recursive lower bound for satisfiability checking. We also compare the expressive powers of some of these fragments using the technique of EF games for MTL.
References:
Seminar13052
Abstract: [This is a continuation of the talk given by the speaker on May 1.] I will survey the state of the art in the automated analysis of several classes of probabilistic infinite-state systems which arise naturally when studying probabilistic programs with recursive calls, queueing systems, population dynamics and natural language processing. I will describe the relationships between these different models and recent developments in the computational complexity of their analysis. I will also briefly describe the tool PReMo, which is able to analyse these models, along with some interesting performance results. (This talk is based on several joint papers with T. Brazdil, V. Brozek, K. Etessami, S. Kiefer, A. Kucera, and M. Yannakakis.)
References:
Seminar13051
Abstract: I will survey the state of the art in the automated analysis of several classes of probabilistic infinite-state systems which arise naturally when studying probabilistic programs with recursive calls, queueing systems, population dynamics and natural language processing. I will describe the relationships between these different models and recent developments in the computational complexity of their analysis. I will also briefly describe the tool PReMo, which is able to analyse these models, along with some interesting performance results. (This talk is based on several joint papers with T. Brazdil, V. Brozek, K. Etessami, S. Kiefer, A. Kucera, and M. Yannakakis.)
References:
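A standard textbook instance of such analyses: the termination probability of a recursive probabilistic program that with probability p calls itself twice and otherwise returns is the least fixed point of x = p·x² + (1−p). The Kleene iteration below is the simplest way to compute it numerically (this is only a one-equation toy, not PReMo's actual input format):

```python
# Termination probability as a least fixed point of x = p*x^2 + (1-p),
# computed by iterating from 0 (which converges to the *least* solution).

def termination_probability(p, iterations=10000):
    x = 0.0                          # start below the least fixed point
    for _ in range(iterations):
        x = p * x * x + (1 - p)
    return x

# For p <= 1/2 the process terminates almost surely; for p > 1/2 the least
# fixed point drops below 1, to (1-p)/p.
print(round(termination_probability(0.4), 4))   # 1.0
print(round(termination_probability(0.6), 4))   # 0.6667
```

Real systems of this kind give polynomial equation systems in many variables, and the computational-complexity questions in the talk concern how fast (and to what precision) their least fixed points can be found.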
Seminar13043
Abstract: Online computation involves processing "pieces" of input by taking irrevocable decisions on the arrival of every piece. For instance, the caching problem involves servicing each request in a sequence of requests to pages, without knowledge of future requests. The performance of online algorithms is traditionally expressed by a quantity called the "competitive ratio". I will discuss a few elementary techniques for proving bounds on the competitive ratios of online problems, and briefly mention the $k$-server problem and the best known bounds for it. If time permits, I will introduce the weighted $k$-server problem and discuss our recent result on the competitive ratio of memoryless algorithms for this problem.
References:
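The competitive ratio compares an online algorithm's cost with the offline optimum on the same input. For caching this is easy to see concretely: the online LRU policy versus Belady's offline optimum (evict the page needed farthest in the future), on one sample request sequence chosen for the illustration:

```python
# Fault counts for online LRU versus the offline optimum on one sequence.

def lru_faults(requests, k):
    cache = []                       # least recently used at the front
    faults = 0
    for p in requests:
        if p in cache:
            cache.remove(p)
        else:
            faults += 1
            if len(cache) == k:
                cache.pop(0)         # evict the least recently used page
        cache.append(p)
    return faults

def opt_faults(requests, k):
    cache, faults = set(), 0
    for i, p in enumerate(requests):
        if p not in cache:
            faults += 1
            if len(cache) == k:
                # Belady: evict the page whose next use is farthest (or never)
                def next_use(q):
                    rest = requests[i + 1:]
                    return rest.index(q) if q in rest else float('inf')
                cache.remove(max(cache, key=next_use))
        cache.add(p)
    return faults

seq = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(seq, 3), opt_faults(seq, 3))   # 10 7
```

An adversary can push this gap further: LRU with k frames is exactly k-competitive, and no deterministic algorithm does better, which is the flavor of lower-bound argument the talk discusses.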
Seminar13042
Abstract: Despite many decades of trying, we do not have algorithms that run in time $2^{o(n)}$ for 3-SAT. A reasonable hypothesis, therefore, is that such algorithms do not exist. This is called the Exponential Time Hypothesis, and like $P \neq NP$, it has many implications for other problems. We will see some of them.
References:
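The baseline the hypothesis speaks about is plain exhaustive search over all $2^n$ assignments. A minimal version, using DIMACS-style signed-integer literals:

```python
from itertools import product

# Brute-force 3-SAT in time O(2^n * |clauses|). The Exponential Time
# Hypothesis asserts that no algorithm improves on the 2^n factor to 2^{o(n)}.

def brute_force_3sat(n_vars, clauses):
    """clauses: list of 3-tuples of nonzero ints; literal v means variable
    |v| is true iff v > 0 (DIMACS convention). Returns a satisfying
    assignment as a tuple of booleans, or None if unsatisfiable."""
    for bits in product([False, True], repeat=n_vars):   # 2^n assignments
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits
    return None

# (x1 or x2 or ~x3) and (~x1 or x3 or x3) and (~x2 or ~x3 or x1)
print(brute_force_3sat(3, [(1, 2, -3), (-1, 3, 3), (-2, -3, 1)]))
```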
Seminar13041
Abstract: The talk will address the problem of solving a system of Boolean equations in several variables with coefficients over a finite Boolean algebra. The well-known Boole-Shannon expansion of a Boolean function is the basis for the method of elimination of variables and for the condition of consistency of Boolean equations. On the other hand, it is always possible to carry out the expansion of a Boolean function in a system of orthonormal terms, which vastly generalizes the Boole-Shannon expansion. However, an equivalent condition for consistency generalizing the one-variable Boole-Shannon case has not been available. This talk shows that such a condition is possible with respect to two specialized sets of orthonormal terms when the Boolean functions belong to a restricted class.
References:

Seminar13032
Abstract: (This was a talk given as a part of the Faculty Seminar Series of the Dept. of CSE.) Reasoning about fixed-width bit-vectors (as opposed to reasoning about integers or reals) is important in the formal verification and analysis of software and hardware systems that use fixed-width words in the underlying machine architecture. While restricting the domain to fixed-width bit-vectors renders all interesting problems decidable, the worst-case complexities may still be high, and one needs to exercise care in designing algorithms that scale to problems of practical size. In this talk, we look at the problem of quantifier elimination for linear arithmetic expressions, a key operation in the verification, automated synthesis and analysis of important classes of software and hardware systems. We present a layered algorithm for scalable quantifier elimination from Boolean combinations of linear bit-vector arithmetic expressions. Our algorithm invokes cheaper, relatively less complete heuristics initially, and invokes more expensive and complete techniques only when needed. We present a substitution-based heuristic and a counting-based heuristic that are extremely efficient and succeed in eliminating almost all quantifiers in an extensive set of benchmarks from the public domain. To handle cases that are not amenable to these heuristics, we also present an adaptation of the classical Fourier-Motzkin quantifier elimination algorithm for reals to linear bit-vector arithmetic. While this is an expensive step, our experiments indicate that it is invoked in a very small fraction of cases. Our experiments clearly show that a layered approach with carefully designed heuristics lends both precision and scalability to bit-precise quantifier elimination. We are currently working on extending this work to eliminating quantifiers from non-linear bit-vector inequalities and also to combinations of bit-vector theories. This is joint work with Ajith John.
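To give a flavor of why substitution can eliminate a quantifier cheaply: in the equality a·x + b = 0 (mod 2^w), whenever a is odd it is invertible modulo 2^w, so x = −b·a⁻¹ and the quantifier over x disappears by substitution. This sketches only that one easy case, not the talk's full layered algorithm:

```python
# Substitution-style elimination of x from a*x + b = 0 (mod 2^w).

def solve_linear_bv(a, b, w):
    """Return the unique x with a*x + b == 0 (mod 2**w) when a is odd,
    or None when a is even (this simple heuristic does not apply then)."""
    m = 1 << w
    if a % 2 == 1:
        return (-b * pow(a, -1, m)) % m   # pow(a, -1, m): modular inverse
    return None

# Example over 8-bit words: 5*x + 3 = 0 (mod 256).
x = solve_linear_bv(5, 3, 8)
print(x, (5 * x + 3) % 256)   # 153 0
```

When a is even the equality may have zero, one, or many solutions depending on b, which is where the more expensive layers (counting, Fourier-Motzkin style reasoning) come in. (`pow(a, -1, m)` requires Python 3.8+.)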
References: To be updated

Seminar13031
Abstract: Unambiguous star-free languages (UL) were defined by Schutzenberger in 1976 as the variety DA. Since then, this language class has found diverse characterizations in logics and automata over finite words. Pioneering work of Pin, Weil, Therien and Wilke showed that the logics FO2[<] as well as Delta2[<] characterize this class. Etessami, Vardi and Wilke showed effective equivalence between FO2[<] and the well-known "unary" temporal logic TL[F,P]. Schwentick, Vollmer and Therien showed that UL are precisely the languages recognized by partially ordered 2-way DFA (PO2DFA). The talk will provide a quick overview of these results. We will then move to more recent work on temporal logics for UL. Four widely different temporal logics over finite words, namely TL[F,P], TL[Xa,Ya], recursive TL[Xphi,Yphi] and the interval logic UITL+, can be shown to be equally expressive and to precisely characterize the language class UL; moreover, they all have NP-complete satisfiability. Clearly, an explanation of the robustness and efficient decidability of temporal logics for UL is needed. Towards this, we formulate effective reductions between these logics and also to PO2DFA, showing that these logics are equally expressive. Moreover, the reductions from logics to automata are efficient, and they allow us to establish the NP-complete satisfiability of all four logics. We observe that these reductions rely on some important properties of "deterministic" temporal logics, namely unique parsability, ranker directionality and ranker convexity. These will be explained and motivated. (Joint work with Simoni Shah.)
References:
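The "unary" logic TL[F,P] has only the eventually (F) and eventually-in-the-past (P) modalities. A direct evaluator over finite words makes the semantics concrete; the nested-tuple formula syntax, and the choice of strict (future/past excluding the current position) semantics, are assumptions of this sketch:

```python
# Evaluate a TL[F, P] formula at a position of a finite word.

def holds(phi, word, i):
    """Does formula phi hold at position i of word?"""
    op = phi[0]
    if op == 'letter':
        return word[i] == phi[1]
    if op == 'not':
        return not holds(phi[1], word, i)
    if op == 'and':
        return holds(phi[1], word, i) and holds(phi[2], word, i)
    if op == 'F':                    # at some strictly later position
        return any(holds(phi[1], word, j) for j in range(i + 1, len(word)))
    if op == 'P':                    # at some strictly earlier position
        return any(holds(phi[1], word, j) for j in range(i))
    raise ValueError(op)

# "Some b has an a somewhere before it":  F(b and P(a)), read at position 0.
phi = ('F', ('and', ('letter', 'b'), ('P', ('letter', 'a'))))
print(holds(phi, 'aab', 0), holds(phi, 'bba', 0))   # True False
```

Model checking one position like this is easy; the NP-complete problem in the talk is satisfiability, i.e. deciding whether *some* word and position make the formula true.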
Seminar13022
Abstract (This seminar was the video screening of a lecture delivered at the Isaac Newton Institute, University of Cambridge, UK. Video link: Click here) Why give so much credit to Turing and not to Church? Church was a full professor at Princeton in 1936 when Turing was a mere graduate student. Church had studied Hilbert's papers a decade before Turing and had explained them to his Princeton thesis adviser, Oswald Veblen. Church was working with the Herbrand-Gödel recursive functions defined by Gödel, the most eminent logician of the time. These used the concept of recursion (induction), which had appeared in mathematics since Dedekind [1888]. In contrast, Turing machines were a fanciful new invention without such a concise, mathematical definition in a familiar formalism. By 1934, Church and Kleene had shown that most number-theoretic functions were $\lambda$-definable and therefore recursive, giving clear evidence for Church's Thesis. Church was the first to propose Church's Thesis, when even Gödel did not believe it. Church got it right and he got it first. The effectively calculable functions _are_ the recursive functions. By any purely quantifiable evaluation, Church's contribution was at least as important as Turing's. However, characterizing human computability was _not_ a purely quantifiable process. References To be updated Seminar13021
Abstract This talk presents an approach for solving Boolean equations by expanding the functions involved in terms of special systems of orthonormal (ON) terms. These special systems are of polynomial order, as compared to ON systems such as minterms, which are of exponential order. ON expansions bring a natural parallelism into the computation. This expansion generalizes the Boole-Shannon expansion and the resulting elimination theory of Boolean algebras. References
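For reference, the Boole-Shannon expansion being generalized is f = (x AND f|x=1) OR (NOT x AND f|x=0): an expansion in the orthonormal system {x, NOT x}, where exactly one of the two terms is true under any assignment. The minimal sketch below represents Boolean functions as Python callables over assignment dictionaries; this representation and the helper names are illustrative choices, not the talk's formalism.

```python
from itertools import product

def cofactor(f, var, val):
    """The cofactor f|var=val: fix one variable of f to a constant."""
    return lambda env: f({**env, var: val})

def shannon_expand(f, var):
    """Rebuild f from its two cofactors via the orthonormal terms
    var and NOT var (the Boole-Shannon expansion)."""
    f1, f0 = cofactor(f, var, 1), cofactor(f, var, 0)
    return lambda env: (env[var] and f1(env)) or (not env[var] and f0(env))

def equal_on(f, g, variables):
    """Exhaustively check that f and g agree on every 0/1 assignment."""
    for bits in product([0, 1], repeat=len(variables)):
        env = dict(zip(variables, bits))
        if bool(f(env)) != bool(g(env)):
            return False
    return True
```

An exhaustive check over all assignments confirms that the expansion reproduces the original function, which is exactly the identity the talk's polynomial-size ON systems generalize.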
Seminar13014
Abstract STE (Symbolic Trajectory Evaluation) is a symbolic technique for hardware model checking. It supports three-valued abstraction (0, 1, X) and is a BDD (binary decision diagram) based model checking technique. The abstraction is generated from the user's specifications. The evaluation boils down to checking that, given the antecedent (A) and the circuit specification, the consequent (C) holds, i.e. A => C. References Seminar13013
Abstract If a theory admits effective quantifier elimination, then there is a decision procedure for it. Tarski (1951) proved that the theory of real closed fields in the first-order language of ordered rings (consisting of the binary predicate symbols "=" and "≤", the operations of addition, subtraction and multiplication, and the constant symbols 0 and 1) admits quantifier elimination. We shall look at these and related results in the talk. References Seminar13012
Abstract k-SAT, the canonical NP-complete problem, is an algorithmic question of immense theoretical as well as practical relevance. Consequently, algorithms that significantly improve on brute-force search for this problem are of great interest. In this talk, we survey some exponential-time algorithms for k-SAT along with, time permitting, relations to some questions in Boolean circuit complexity. References
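As one concrete example of the kind of algorithm surveyed, the sketch below implements Schöning's random-walk algorithm, a classic exponential-time k-SAT algorithm whose expected running time for 3-SAT is O((4/3)^n) over many restarts. The abstract does not say which algorithms the talk covers, and the restart count and DIMACS-style clause encoding here are simplifications for the example.

```python
import random

def unsatisfied(clauses, assign):
    """Clauses violated by the assignment; a clause is a list of non-zero
    ints, where literal l means variable abs(l) with sign l > 0."""
    return [c for c in clauses if not any(assign[abs(l)] == (l > 0) for l in c)]

def schoening(clauses, n, tries=200, rng=random):
    """Schoening's algorithm: repeatedly pick a uniformly random assignment
    to variables 1..n, then perform 3n steps, each flipping a random
    variable of a random unsatisfied clause.  Returns a satisfying
    assignment, or None if every restart fails."""
    for _ in range(tries):
        assign = {v: rng.random() < 0.5 for v in range(1, n + 1)}
        for _ in range(3 * n):
            bad = unsatisfied(clauses, assign)
            if not bad:
                return assign
            lit = rng.choice(rng.choice(bad))   # random literal of a bad clause
            assign[abs(lit)] = not assign[abs(lit)]
        if not unsatisfied(clauses, assign):
            return assign
    return None
```

The point of the analysis (not reproduced here) is that each restart succeeds with probability about (3/4)^n, so roughly (4/3)^n restarts suffice, already well below the 2^n of brute-force search.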
Seminar13011
Abstract Cook and Reckhow defined the notion of a proof system for a language: a proof system for a language L is a polynomial-time computable function f whose range is exactly L. In the past, proof systems with varying complexity of the function f have been studied. In this talk we will discuss a proof system in which the function f is computable by a deterministic log-time machine. References


