A few TCS Questions and Answers

Context: The questions were generated by an LLM; I attempted to answer them on 04Jul24. The intention is to evaluate my own understanding of the basics of theoretical computer science and related topics. The answers are hidden by default, but you can press to display them all at once or hide them again.


Can you explain the difference between operational and denotational semantics in programming language theory?

Operational semantics describes the meaning of a programming language by specifying the explicit steps a computation takes, e.g. the reduction steps performed when evaluating an expression of the lambda calculus. Denotational semantics instead describes the “meaning” of a program by mapping it to a mathematical value, e.g. an element of a poset whose bottom element indicates non-termination and whose other elements are the possible results of evaluating the expression. The advantage of the former is that it is intuitive and immediately useful to a person implementing a language, while denotational semantics is compositional and allows more flexibility when implementing a language, as it only constrains the results instead of each step of the computation.
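To make the contrast concrete, here is a minimal Haskell sketch for a tiny expression language; the names `Expr`, `step` and `denote` are only illustrative, and the “denotation” is simply an integer since this toy language is total.

```haskell
-- Tiny expression language: integer literals and addition.
data Expr = Lit Integer | Add Expr Expr

-- Small-step operational semantics: perform one explicit reduction step,
-- or return Nothing if the expression is already a value.
step :: Expr -> Maybe Expr
step (Add (Lit m) (Lit n)) = Just (Lit (m + n))
step (Add l r) = case step l of
  Just l' -> Just (Add l' r)
  Nothing -> fmap (Add l) (step r)
step (Lit _) = Nothing

-- Denotational semantics: map each expression directly and compositionally
-- to a mathematical value.
denote :: Expr -> Integer
denote (Lit n)   = n
denote (Add l r) = denote l + denote r
```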


What is the connection between modal logic and coalgebras?

I am well familiar with coalgebras (a general framework for describing dynamical systems with possibly opaque states that we destruct to obtain observations or successor states; examples include streams, colists, conatural numbers and automata), and I know what modal logic is about (the extension of propositional or predicate logic by operators that classically designate the modalities of necessity and possibility, but can also be used to express epistemic (knows, believes) or deontic (must, may) statements; the modalities can further be annotated, e.g. with programs that must or may terminate), but I don’t understand how the two are tied together.
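As a concrete reminder of what a coalgebra is (not of the connection to modal logic, which is the part I am missing), here is a minimal Haskell sketch of stream coalgebras; the names `StreamCoalg` and `observe` are just illustrative.

```haskell
-- A coalgebra for streams over a: from any state s we can observe a head
-- element and a successor state, i.e. a map s -> (a, s).
type StreamCoalg s a = s -> (a, s)

-- Unfolding a coalgebra from a start state yields the stream of observations.
observe :: StreamCoalg s a -> s -> [a]
observe c s = let (x, s') = c s in x : observe c s'

-- Example: the natural numbers as a stream, with the current number as state.
nats :: [Integer]
nats = observe (\n -> (n, n + 1)) 0   -- take 5 nats == [0,1,2,3,4]
```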


Can you describe the Curry-Howard isomorphism and its significance?

This is a marvellous coincidence between logic and type theory. It states that proving a theorem in constructive (intuitionistic) logic corresponds to inhabiting a type in the simply typed lambda calculus: propositions are types and proofs are programs. The correspondence extends to more powerful logics and type systems. This is significant, as it is the foundation for many proof assistants that take a “proofs as programs” approach, allowing confident and powerful reasoning within a computer system. It can be extended further to category theory, where cartesian closed categories also correspond to intuitionistic logic. This is then called the Lambek-Curry-Howard isomorphism, and is one example of what people call computational trinitarianism, indicating deep links between the notions of computation and provability.
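A minimal Haskell sketch of the propositions-as-types reading (using Haskell as an approximation of the simply typed lambda calculus, and ignoring that Haskell is not total):

```haskell
-- Read types as propositions and total terms as their proofs.

-- Proof that A ∧ B implies A: the first projection.
andElim1 :: (a, b) -> a
andElim1 (x, _) = x

-- Proof that implication is transitive: (A -> B) -> (B -> C) -> (A -> C).
implTrans :: (a -> b) -> (b -> c) -> (a -> c)
implTrans f g = g . f

-- Proof of A -> (B -> A): the K combinator.
weaken :: a -> b -> a
weaken x _ = x
```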


What is the difference between a Turing machine and a finite state automaton in terms of computational power?

An FSA, as the name implies, only has finitely many states and no auxiliary memory, so it can only retain a bounded amount of information about the input it has read, which limits its computational power. A Turing machine, on the other hand, has an unbounded tape it can access as necessary. This gives it strictly greater power, but introduces the question of whether it will halt.
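A minimal Haskell sketch of a finite state automaton, assuming the usual textbook definition (start state, transition function, accepting states); the names are illustrative.

```haskell
import Data.Set (Set, fromList, member)

-- A deterministic finite automaton over state type q and alphabet type a.
data DFA q a = DFA
  { start  :: q
  , step   :: q -> a -> q
  , accept :: Set q
  }

-- Run the automaton over an input word and check acceptance.
runDFA :: Ord q => DFA q a -> [a] -> Bool
runDFA m xs = foldl (step m) (start m) xs `member` accept m

-- Example: accepts strings over {'a','b'} with an even number of 'a's.
evenAs :: DFA Int Char
evenAs = DFA 0 delta (fromList [0])
  where
    delta q 'a' = 1 - q   -- toggle parity on each 'a'
    delta q _   = q       -- ignore other symbols
```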

The Chomsky hierarchy is an ordering of classes of formal languages (regular, context-free, context-sensitive, recursively enumerable) with corresponding automata that recognise them (finite-state machines, non-deterministic pushdown automata, linear bounded automata, Turing machines). Each class of automata can recognise the languages of the classes preceding it in the list, but not those mentioned afterwards. This four-level characterisation is the classical way to talk about the hierarchy as elaborated by Noam Chomsky in the late 1950s, but there are interesting language classes between the aforementioned levels. E.g. compilers prefer grammars in classes like LL(1), whose languages are recognised by deterministic pushdown automata; since LL(1) grammars admit no parsing ambiguity that would have to be tracked via sets of candidate derivations (as in the CYK or Earley algorithm), parsing is faster.
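To illustrate the jump from regular to context-free, here is a small Haskell sketch of a recogniser for $\{a^n b^n \mid n \ge 0\}$, a language a finite automaton cannot recognise but a pushdown automaton can (here the stack is simulated by a counter):

```haskell
-- Recognise { a^n b^n | n >= 0 }: count the leading 'a's, then demand
-- exactly that many 'b's. A DFA cannot do this, since it would need
-- unboundedly many states to remember the count.
anbn :: String -> Bool
anbn = go 0
  where
    go n ('a':rest) = go (n + 1) rest           -- "push" for every 'a'
    go n rest       = rest == replicate n 'b'   -- then expect n times 'b'
```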


Can you explain what a monad is in category theory and give an example of its use in functional programming?

A monad is a monoid in the category of endofunctors. What this means is that we have an endofunctor $T$ along with two natural transformations, the unit $\eta : \mathrm{Id} \Rightarrow T$ from the identity functor to $T$ and the multiplication $\mu : T^2 \Rightarrow T$, that satisfy two laws (colloquially the triangle and square law, requiring different compositions of the natural transformations with the functor to agree): $\mathrm{id}_{TX} = \mu_X \circ T\eta_X = \mu_X \circ \eta_{TX}$ and $\mu_X \circ \mu_{TX} = \mu_X \circ T\mu_X$.

In functional programming, the notion of a monad has established itself as a powerful and universal construct to uniformly describe many different effects. E.g. in Haskell the list monad models non-determinism, the Maybe monad models partiality (possible failure), and the IO monad denotes descriptions of side-effecting computations that must be executed in a specific order.
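A minimal Haskell sketch of a monad in use, relying only on the standard Maybe monad: `return`/`pure` plays the role of the unit $\eta$ and `join` that of the multiplication $\mu$.

```haskell
-- Maybe models computations that may fail.
safeDiv :: Integer -> Integer -> Maybe Integer
safeDiv _ 0 = Nothing            -- failure
safeDiv x y = Just (x `div` y)

-- (x / y) / z, where any division by zero aborts the whole computation;
-- do-notation hides the plumbing of threading the Maybe through.
chain :: Integer -> Integer -> Integer -> Maybe Integer
chain x y z = do
  a <- safeDiv x y
  safeDiv a z
```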


What is the halting problem, and why is it undecidable?

The halting problem asks whether there is a general algorithm that decides if a program will terminate or not. That would mean there is some effective procedure that can, in all cases, give an accurate yes-or-no answer to the question of whether a Turing machine, given by some encoding, will halt. Turing’s proof that this problem is not decidable uses a diagonalisation argument, in the same spirit as Gödel’s incompleteness proof. Assume there were such a machine $D$; then construct a machine $P$ that takes an encoding of a machine as input and uses $D$ to decide whether that machine halts when run on its own encoding. If it does, then $P$ loops forever, while if it does not, then $P$ halts immediately. If we now pass an encoding of $P$ to $P$, we obtain a contradiction either way, which invalidates the assumption that $D$ exists, and hence that the halting problem is decidable.
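The argument can be sketched in Haskell, assuming (impossibly) a total decider `halts`; all names here are hypothetical stand-ins, not real library functions.

```haskell
-- A program is represented by some encoding, e.g. its source text.
type Program = String

-- Hypothetical decider: True iff the encoded program halts on the given input.
-- This is exactly the assumption the argument refutes.
halts :: Program -> String -> Bool
halts = error "cannot exist"

-- The diagonal program P: given an encoding m, do the opposite of what
-- the decider predicts for m run on its own encoding.
p :: Program -> ()
p m =
  if halts m m
    then p m    -- loop forever if the decider says "halts"
    else ()     -- halt immediately if the decider says "loops"

-- Running p on an encoding of p itself yields a contradiction either way,
-- so no such `halts` can exist.
```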


Can you describe the basic principles of structural induction and give an example of its application?

Structural induction is a proof principle that allows claims to be proved about all elements of a recursively defined structure. The constructors of such a structure are either base cases or composite cases built from smaller instances. For the former we prove the property directly, while for the latter we may use the induction hypotheses granted for the recursive sub-instances. Induction over the natural numbers is one instance, but the principle applies analogously to structures like trees in computer science or formulas in logic.
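A small worked example in Haskell: for the binary trees below, the claim “leaves t = nodes t + 1” is proved by structural induction on t.

```haskell
-- Binary trees: a leaf is the base case, a node is the composite case.
data Tree = Leaf | Node Tree Tree

leaves :: Tree -> Int
leaves Leaf       = 1
leaves (Node l r) = leaves l + leaves r

nodes :: Tree -> Int
nodes Leaf       = 0
nodes (Node l r) = 1 + nodes l + nodes r

-- Proof sketch that leaves t == nodes t + 1, by structural induction on t:
--   Base (Leaf):     leaves Leaf = 1 = nodes Leaf + 1.
--   Step (Node l r): leaves (Node l r) = leaves l + leaves r
--                      = (nodes l + 1) + (nodes r + 1)     -- IH for l and r
--                      = (1 + nodes l + nodes r) + 1
--                      = nodes (Node l r) + 1.
```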


What is the significance of the lambda calculus in theoretical computer science?

The lambda calculus serves as a model of computation that, in contrast to the mechanical nature of a Turing machine, focuses on the evaluation of expressions. The study of the lambda calculus has been important for the development of functional programming languages (as pointed out by Peter Landin) and of type systems (the simply typed lambda calculus influenced ML and further developments such as type inference and dependent types), and it serves as a theoretical foundation for proof assistants.
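A minimal Haskell sketch of the untyped lambda calculus with a call-by-value evaluator for closed terms; representing functions as closures over an environment is one common choice, and all names are illustrative.

```haskell
import qualified Data.Map as M

-- Terms: variables, abstractions and applications.
data Term = Var String | Lam String Term | App Term Term

-- Values: closures pairing a binder and body with their environment.
data Value = Closure String Term (M.Map String Value)

-- Call-by-value evaluation; Nothing signals an unbound variable.
eval :: M.Map String Value -> Term -> Maybe Value
eval env (Var x)   = M.lookup x env
eval env (Lam x b) = Just (Closure x b env)
eval env (App f a) = do
  Closure x b env' <- eval env f
  v <- eval env a
  eval (M.insert x v env') b

-- (\x. x) (\y. y) evaluates to the closure for \y. y.
example :: Maybe Value
example = eval M.empty (App (Lam "x" (Var "x")) (Lam "y" (Var "y")))
```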

The Church-Turing thesis is the claim that all reasonable formal systems expressing computation are at most equally strong. It goes back to Church and Turing, who showed that Turing machines and the lambda calculus have the same power and can simulate one another. The observation has reappeared in different forms, e.g. non-determinism does not add power to Turing machines, and other paradigms such as logic programming are equivalent too (unification and backtracking are Turing complete, just as evaluation and substitution are in the lambda calculus). This is a fascinating observation, as it appears to be a general law of the universe, and some argue that computation even occurs in physical nature. As it turns out, you don’t need much to be able to compute, but one hits the same barrier in all cases. The significance of the lambda calculus here is that it was among the first systems to span the space of what forms computation can take.