Context: The questions were generated by an LLM, and I attempted to answer them on 04Jul24. The intention is to evaluate my own understanding of the basics of theoretical computer science and related topics.
Operational semantics describes the meaning of a programming language by specifying the explicit steps a computation takes, e.g. the reduction steps when evaluating an expression in the lambda calculus. Denotational semantics instead describes the “meaning” by mapping a program to a mathematical value, e.g. a poset whose bottom element indicates non-termination and whose other elements are the results of evaluating expressions. The advantage of the former is that it is intuitive and immediately useful to a person implementing a language, while denotational semantics is compositional and allows more flexibility when implementing a language, as it only constrains the results instead of each step of the computation.
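As a minimal illustration (my own toy language, not from any particular text), here is a tiny expression language given both styles of semantics in Haskell: `step` spells out single reduction steps, while `denote` maps expressions directly to values.

```haskell
-- Operational vs. denotational semantics for a toy language (sketch).
data Expr = Lit Int | Add Expr Expr

-- Operational: one explicit reduction step at a time.
step :: Expr -> Maybe Expr
step (Add (Lit a) (Lit b)) = Just (Lit (a + b))
step (Add a b) = case step a of
  Just a' -> Just (Add a' b)        -- reduce the left operand first
  Nothing -> fmap (Add a) (step b)  -- then the right one
step (Lit _) = Nothing              -- values do not reduce further

-- Evaluation is the iteration of steps.
run :: Expr -> Expr
run e = maybe e run (step e)

-- Denotational: map each expression directly to a mathematical value.
denote :: Expr -> Int
denote (Lit n)   = n
denote (Add a b) = denote a + denote b
```

For this total language `run e` always ends in `Lit (denote e)`; a bottom element only becomes necessary once non-terminating programs are expressible.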
I am well familiar with coalgebras (a general framework for describing dynamical systems, with possibly opaque states that we destruct to get observations or successor states; examples include streams, colists, the conatural numbers, and automata), and I know what modal logic is about (the extension of propositional or predicate logic by operators that classically designate the modalities of necessity and possibility, but can also be used to state epistemic (knows, believes) or deontic (must, may) statements; it can also be extended by annotations in the modalities, e.g. involving programs that must or may terminate), but I don’t understand how they are tied together.
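To make the coalgebra part concrete, here is a minimal Haskell sketch (the names `unfold`, `nats` etc. are mine): a coalgebra for the functor F X = A × X is a function `s -> (a, s)`, and unfolding it yields the stream it describes.

```haskell
-- Streams via coalgebras (sketch).
data Stream a = Cons a (Stream a)

-- Unfold a coalgebra observe :: s -> (a, s) into the stream it describes.
unfold :: (s -> (a, s)) -> s -> Stream a
unfold observe s = let (a, s') = observe s in Cons a (unfold observe s')

-- The natural numbers as a stream, obtained by destructing a counter state.
nats :: Stream Integer
nats = unfold (\n -> (n, n + 1)) 0

-- Observe a finite prefix of an (infinite) stream.
takeS :: Int -> Stream a -> [a]
takeS n (Cons a s) = if n <= 0 then [] else a : takeS (n - 1) s
```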
This is a marvellous coincidence between logic and type theory. It states that the act of proving a theorem in constructive logic corresponds to the inhabitation of a type in the simply typed lambda calculus. This can be extended to more powerful logics and type systems. This is significant, as it is the foundation for a lot of proof assistants that take a “programs as proofs” approach, allowing confident and powerful reasoning within a computer system. This can be further extended by a correspondence with category theory, when considering cartesian closed categories, which also correspond to intuitionistic logic. This is then called the Lambek-Curry-Howard isomorphism, and is one example of what people call computational trinitarianism, indicating deep links between the notion of computation and provability.
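A few folklore examples in Haskell (standard ones, not taken from the text): a total term of a type is a proof of the corresponding intuitionistic proposition.

```haskell
-- Curry-Howard in miniature (sketch).

-- “A implies A” is proved by the identity function.
refl :: a -> a
refl x = x

-- Modus ponens: from A -> B and A, conclude B.
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x

-- Currying witnesses (A ∧ B -> C) <-> (A -> B -> C).
curryProof :: ((a, b) -> c) -> (a -> b -> c)
curryProof f x y = f (x, y)
```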
An FSA, as the name implies, only has finitely many states. Any sufficiently long computation must therefore revisit a state (the pigeonhole argument behind the pumping lemma), limiting the general computational power. A Turing machine, on the other hand, has unbounded memory it can access as is necessary. This gives us greater strength, avoiding that limitation, but introduces the question of when it will halt.
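As a small sketch (a toy example of mine), here is a DFA over {0,1} accepting exactly the strings with an even number of 1s; the whole run is a fold of the transition function over the input, and with only two states a long enough input necessarily revisits one.

```haskell
-- A two-state DFA: accept strings with an even number of '1's (sketch).
data Parity = Even | Odd deriving (Eq, Show)

delta :: Parity -> Char -> Parity
delta Even '1' = Odd
delta Odd  '1' = Even
delta s    _   = s   -- '0' (or anything else) leaves the state unchanged

accepts :: String -> Bool
accepts w = foldl delta Even w == Even
```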
The Chomsky hierarchy is an ordering of classes of formal languages (regular, context-free, context-sensitive, recursively enumerable) with corresponding automata that can recognise them (finite-state machines, non-deterministic pushdown automata, linear-bounded automata, Turing machines). Each grammar can express, and each automaton can recognise, the languages preceding it in the list, but not those mentioned afterwards. The four-language characterisation is the classical way to talk about the hierarchy as elaborated by Noam Chomsky in the late 1950s, but there are interesting language classes between the aforementioned levels. E.g. compilers prefer LL(1) grammars, which deterministic pushdown automata can recognise: LL(1) requires that a single token of lookahead unambiguously determines the next parsing rule, so no set of candidate parses has to be tracked with some kind of power-set construction, as is the case in the CYK or Earley algorithm. This accelerates parsing.
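A minimal sketch (toy grammar of mine): a recursive-descent parser for the LL(1) grammar S → 'a' S 'b' | ε, which generates the non-regular language aⁿbⁿ; one token of lookahead always picks the production.

```haskell
-- Recursive-descent parser for S -> 'a' S 'b' | epsilon (sketch).
-- parseS consumes one S and returns the remaining input, if a parse exists.
parseS :: String -> Maybe String
parseS ('a' : rest) =               -- lookahead 'a': must be the rule a S b
  case parseS rest of
    Just ('b' : rest') -> Just rest'
    _                  -> Nothing
parseS rest = Just rest             -- any other lookahead: the epsilon rule

-- A word is in the language iff S consumes it entirely.
recognises :: String -> Bool
recognises w = parseS w == Just ""  -- recognises "aabb" == True, "aab" == False
```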
A monad is a monoid in the category of endofunctors. What this means is that we have an endofunctor $F$ along with two natural transformations, a unit $\eta : \mathrm{Id} \Rightarrow F$ from the identity functor to $F$ and a multiplication $\mu : F \circ F \Rightarrow F$, that satisfy two coherence properties (colloquially the triangle and square laws, requiring different composites of the natural transformations with the functor to agree):

$$\mu \circ \eta F = \mu \circ F \eta = \mathrm{id}_F \qquad \text{and} \qquad \mu \circ F \mu = \mu \circ \mu F.$$

In functional programming, the notion of a monad has established itself as a powerful and universal construct to uniformly describe many different effects. E.g. in Haskell the list monad models non-determinism, the Maybe monad models partiality (the possible absence of a result), and the IO monad denotes descriptions of computations with side effects that must be executed in a specific order.
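To ground this, a small Haskell example using the standard Maybe monad: `return` plays the role of the unit $\eta$ and `join` that of the multiplication $\mu$ (the helper `safeDiv` is mine).

```haskell
import Control.Monad (join)

-- Partiality: division may fail to produce a result.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Sequencing short-circuits on the first failure.
example :: Maybe Int
example = do
  a <- safeDiv 10 2   -- Just 5
  b <- safeDiv a 0    -- Nothing, so the whole computation is Nothing
  return (a + b)

-- join :: Maybe (Maybe a) -> Maybe a collapses one layer, i.e. mu.
flattened :: Maybe Int
flattened = join (Just (Just 3))   -- Just 3
```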
The halting problem raises the question of whether there is a general algorithm to decide if a program will terminate or not. That would mean that there is some effective procedure that can, in all cases, give an accurate yes-or-no answer to the question of whether or not a TM with some encoding will halt. Turing’s proof that this problem is not decidable, which uses a diagonalisation in the same spirit as Gödel’s incompleteness theorems, goes as follows. Assume there were such a machine $H$; then construct a machine $D$ that takes an encoding of a machine $M$ as input and uses $H$ to decide whether $M$ halts on that encoding. If it would halt, then $D$ doesn’t halt, while if it wouldn’t, then $D$ halts immediately. If we now pass an encoding of $D$ to $D$ itself, we have a contradiction, which invalidates the assumption that $H$ exists, hence that the halting problem is decidable.
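The shape of the argument can be sketched in Haskell, with the caveat that `halts` is precisely the total function the proof shows cannot exist (the `Integer` encodings and all names are my own illustration):

```haskell
-- Hypothetical halting decider: does program p halt on input x?
-- ASSUMPTION ONLY: the proof shows no total implementation can exist.
halts :: Integer -> Integer -> Bool
halts = error "assumed decider; Turing showed it cannot be implemented"

-- The diagonal program: do the opposite of what the decider predicts
-- for a program run on its own encoding.
diag :: Integer -> ()
diag p
  | halts p p = diag p   -- predicted to halt: loop forever instead
  | otherwise = ()       -- predicted to loop: halt immediately

-- Feeding diag its own encoding contradicts whatever halts answers.
```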
Structural induction is a proof principle that allows claims to be proved about all elements of some recursive structure. Constituents of a recursive structure can either be simple or complex. For the former, we just need to prove the property directly, while for the latter we make use of an induction hypothesis granted for the recursive instances. A special case is natural induction, where we want to prove a claim for all natural numbers, but the principle can analogously be applied to structures like trees in computer science or formulas in logic.
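A small Haskell example of a structure one typically reasons about this way (the tree type and the stated claim are my own illustration):

```haskell
-- Binary trees: Leaf is the simple case, Node the complex one.
data Tree a = Leaf | Node (Tree a) a (Tree a)

size :: Tree a -> Int
size Leaf         = 0
size (Node l _ r) = size l + 1 + size r

depth :: Tree a -> Int
depth Leaf         = 0
depth (Node l _ r) = 1 + max (depth l) (depth r)

-- Claim provable by structural induction: size t <= 2 ^ depth t - 1.
-- Base case: both sides are 0 for Leaf. Inductive step: assume the
-- bound for l and r (the induction hypotheses) and combine them.
```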
The lambda calculus serves as a different model of computation, compared to the mechanical nature of a Turing machine, by focusing on the evaluation of expressions. The study of the lambda calculus has been important for the development of functional programming languages (as pointed out by Peter Landin), of type systems (the simply typed lambda calculus influenced ML and further developments such as type inference and dependent types), and as a theoretical foundation for proof assistants.
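As a sketch of how small this model is, here is an untyped lambda calculus with de Bruijn indices and single-step beta reduction in Haskell (a standard textbook presentation; the code itself is mine):

```haskell
-- Untyped lambda terms with de Bruijn indices (sketch).
data Term = Var Int | Lam Term | App Term Term deriving Show

-- Shift free variables at or above cutoff c by d.
shift :: Int -> Int -> Term -> Term
shift d c (Var k)   = Var (if k >= c then k + d else k)
shift d c (Lam t)   = Lam (shift d (c + 1) t)
shift d c (App f a) = App (shift d c f) (shift d c a)

-- Substitute s for variable j in t.
subst :: Int -> Term -> Term -> Term
subst j s (Var k)   = if k == j then s else Var k
subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
subst j s (App f a) = App (subst j s f) (subst j s a)

-- One leftmost-outermost beta step: (\. t) a  ~>  t[0 := a].
stepBeta :: Term -> Maybe Term
stepBeta (App (Lam t) a) = Just (shift (-1) 0 (subst 0 (shift 1 0 a) t))
stepBeta (App f a) = case stepBeta f of
  Just f' -> Just (App f' a)
  Nothing -> fmap (App f) (stepBeta a)
stepBeta (Lam t) = fmap Lam (stepBeta t)
stepBeta _       = Nothing
```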
The Church-Turing thesis is the claim that all formal systems that express computation are at best equally strong. It was first stated by Church, and cemented by the proof that Turing machines and the lambda calculus have the same power and can simulate one another. It has also popped up in different forms, such as involving non-determinism in TMs, or in different paradigms such as logic programming (i.e. unification and backtracking are Turing-complete, just as evaluation and substitution are in the lambda calculus). This is a fascinating observation, as it appears to be a general law of the universe; some argue computation even occurs in physical nature. As it turns out, you don’t need much to compute, but one hits the same barrier in all cases. The significance of the lambda calculus here is that it was among the first systems to span the space of what different forms computation could take.