“Meta-logic” and “object-logic” in Isabelle - definition of the terms

What is the formal and complete definition of the terms "meta-logic" and "object-logic" in Isabelle? I see people using them all the time, but I could not find a definition anywhere.

You don't find them because they are specific to Isabelle (as far as I know). "Object-logic" and "meta-logic" are terms introduced by Larry Paulson (as far as I can tell). In general, though not specifically, they are related to the general terms "metalanguage" and "object language", for disciplines like logic and set theory. Do a search on those and you'll get the standard wiki pages, because they're a standard part of logic.
Here I'm looking at page 16, section 2.2.3 "Meta and object language", of Logic and Computation: Interactive Proof with Cambridge LCF by Larry Paulson, published in 1987. At that time he was still using the standard terms, but he later switched. I forget where I read it, but somewhere he made the switch to "meta-logic" and "object-logic" to clarify things for his own purposes. Both terms appear in his papers and in the Isabelle distribution documentation.
Others can give you their expert knowledge, but the meta-logic, specifically, is what you get when you import the theory Pure: in particular, a minimal set of logical connectives, ==> (implication), \<And> (universal quantification), &&& (conjunction), and == (equality). Discussion of these is spread throughout the Isabelle distribution documentation.
I don't know much about intuitionistic logic, other than that it does not provide the law of excluded middle, but you will read that these connectives constitute a minimal, intuitionistic logic.
Don't thank me. I've just read some things here and there, and listened. Others can provide expert knowledge.

My findings with regard to this question are below.
I found this in Clemens Ballarin's slides, slide 20:
Meta logic: The logic used to formalize another logic.
Example: Mathematics used to formalize derivations in formal logic.
and it is put in parallel with:
Meta language: The language used to talk about another language.
Examples: German in a Spanish class, English in an English class.
Wikipedia has an entry on Metalogic; one section is "Metalanguage - Object language":
In metalogic, formal languages are sometimes called object languages. The language used to make statements about an object language is called a metalanguage. This distinction is a key difference between logic and metalogic. While logic deals with proofs in a formal system, expressed in some formal language, metalogic deals with proofs about a formal system which are expressed in a metalanguage about some object language.
(Slide 21 from Ballarin is also relevant, but the image is not reproduced here.)
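Not from any of the sources above, just a hedged illustration of the general metalanguage/object-language distinction (not of Isabelle's Pure specifically): in the Haskell sketch below, a tiny propositional logic is the object language, defined as a datatype, while Haskell itself plays the role of the metalanguage, used to state and check facts about that object language. All names are made up for the example.

    -- The *object language*: a tiny propositional logic, as plain data.
    data Prop
      = Var String
      | Not Prop
      | And Prop Prop
      | Imp Prop Prop
      deriving (Show, Eq)

    -- The *metalanguage* (here: Haskell) is what we use to talk ABOUT the
    -- object language, e.g. by giving it a truth-table semantics.
    eval :: (String -> Bool) -> Prop -> Bool
    eval env (Var x)   = env x
    eval env (Not p)   = not (eval env p)
    eval env (And p q) = eval env p && eval env q
    eval env (Imp p q) = not (eval env p) || eval env q

    -- A meta-level check of the object-level formula "p --> p"
    -- under one particular valuation.
    main :: IO ()
    main = print (eval (const True) (Imp (Var "p") (Var "p")))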

Related

Programming languages that define the problem instead of the solution?

Are there any programming languages designed to define the solution to a given problem instead of defining instructions to solve it? So, one would define what the solution or end result should look like and the language interpreter would determine how to arrive at that result. Looking at the list of programming languages, I'm not sure how to even begin to research this.
The best examples I can currently think of to help illustrate what I'm trying to ask are SQL and MapReduce, although those are both sort of mini-languages designed to retrieve data. But, when writing SQL or MapReduce statements, you're defining the end result, and the DB decides the best course of action to arrive at the end result set.
I could see these types of languages, if they exist, being used in crunching a lot of data or finding solutions to a set of equations. The dream language would be one that could interpret the defined problem, identify which parts are parallelizable, and execute the solution across multiple processes/cores/boxes.
What about Declarative Programming? Excerpt from the Wikipedia article (emphasis added):
In computer science, declarative programming is a programming paradigm that expresses the logic of a computation without describing its control flow. Many languages applying this style attempt to minimize or eliminate side effects by describing what the program should accomplish, rather than describing how to go about accomplishing it. This is in contrast with imperative programming, which requires an explicitly provided algorithm.
The closest you can get to something like this is with a logic language such as Prolog. In these languages you model the problem's logic but again it's not magic.
This sounds like a description of a declarative language (specifically a logic programming language), the most well-known example of which is Prolog. I have no idea whether Prolog is parallelizable, though.
In my experience, Prolog is great for solving constraint-satisfaction problems (ones where there's a set of conditions that must be satisfied) -- you define your input set, define the constraints (e.g., an ordering that must be imposed on the previously unordered inputs) -- but pathological cases are possible, and sometimes the logical deduction process takes a very long time to complete.
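To make the "describe the solution, not the search" idea concrete (a hedged sketch only, and in Haskell rather than Prolog, since Haskell is also a declarative language), a list comprehension lets you state what a solution must satisfy and leave the enumeration to the runtime:

    -- Declarative style: describe what counts as a solution, not the loop
    -- that finds it. Here: Pythagorean triples up to a bound.
    triples :: Int -> [(Int, Int, Int)]
    triples n =
      [ (a, b, c)
      | c <- [1 .. n]
      , b <- [1 .. c]
      , a <- [1 .. b]
      , a * a + b * b == c * c   -- the constraint a solution must satisfy
      ]

    main :: IO ()
    main = print (triples 20)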
If you can define your problem in terms of a Boolean formula you could throw a SAT solver at it, but note that the 3SAT problem (Boolean variable assignment over three-literal clauses) is NP-complete, and its big brother, the quantified Boolean formula problem (which allows universal as well as existential quantification over the variables), is PSPACE-complete.
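As a toy illustration of "define the problem as a Boolean formula" (a brute-force checker, nothing like a real SAT solver; the type names are made up for this sketch):

    import Data.List (nub)

    -- A literal is a variable paired with its polarity; a clause is a
    -- disjunction of literals; a formula (in CNF) is a conjunction of clauses.
    type Literal = (String, Bool)
    type Clause  = [Literal]
    type Formula = [Clause]

    vars :: Formula -> [String]
    vars = nub . map fst . concat

    -- Brute force: try every assignment (exponential, as NP-completeness warns).
    satisfiable :: Formula -> Bool
    satisfiable f = any satisfies assignments
      where
        vs          = vars f
        assignments = mapM (\v -> [(v, True), (v, False)]) vs
        satisfies a = all (any (\(v, pol) -> lookup v a == Just pol)) f

    main :: IO ()
    main = print (satisfiable [[("x", True), ("y", False)], [("x", False)]])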
There are some very good theorem provers written in OCaml and other FP languages; here are a whole bunch of them.
And of course there's always linear programming via the simplex method.
These languages are commonly referred to as 5th generation programming languages. There are a few examples on the Wikipedia entry I have linked to.
Let me try to answer... maybe Prolog could answer your needs.
I would say Objective Caml (OCaml) too...
This may seem flippant, but in a sense that is what Stack Overflow is: you declare a problem and/or an intended result, and the community provides the solution, usually in code.
It seems immensely difficult to model dynamic open systems down to a finite number of solutions. I think there is a reason most programming languages are imperative. Not to mention that massive NP-hard problems lurk in the dark and would make such a system difficult to engineer.
Although what would be interesting is if there were a formal framework that could leverage human input to "crunch the numbers" and provide a solution, perhaps via imperative code generation. The internet and Google's search engine are kind of that tool, but very primitive.
Large problems and software are basically just a collection of smaller problems solved in code. So any system that generated code would require fairly delimited problem sets that can be mapped to more or less atomic solutions.
Lisp. There are so many Lisp systems out there defined in terms of rules not imperative commands. Google ahoy...
There are various Java-based rules engines which allow declarative programming - Drools is one that I've played with and it seems pretty interesting.
A lot of languages define more problems than solutions (don't take this one seriously).
On a serious note: one more vote for Prolog and different kinds of DSLs designed to be declarative.
I remember reading something about computation using DNA back when I was in college. You would put segments of DNA in a solution that represented segments of the problem, and define it in such a way that if the DNA fits together, it's a valid solution. Then you let the properties of chemicals solve the problem for you and look for finished strands that represent a solution. It sounds sort of like what you are referring to.
I don't recall if it was theoretical or had been done, though.
LINQ could also be considered another declarative DSL (eschewing the argument that it's too similar to SQL). Again, you declare what your solution looks like, and LINQ decides how to find it.
The beauty of these kinds of languages is that projects like PLINQ (which I just found) can spring up around them. Check out this video with the PLINQ developers (WMV direct link) on how they parallelize solution finding without modifying the LINQ language (much).
While mathematical proofs don't constitute a programming language, they do form a formal language where you simply define solutions (as long as you allow nonconstructive proofs). Of course, it's not algorithmic, so "math" might not be an acceptable answer.

A language for semantic analysis?

Background:
- there are formal languages for expressing a programming language's valid lexicon and syntax
- such representations (e.g. regular expressions and context-free grammars) can be automatically compiled into lexical/syntax analyzers for a programming language using tools such as LEX and YACC
Questions:
- are there similar formal languages for defining a programming language's semantics?
- are there compilers to automate generating the code of the semantic analyzer from such formal languages?
- any resources to read regarding source-code semantic analysis?
Notes:
- you might find my question unrealistic; that's OK, I'm a newbie to compilers :)
- by semantic analysis I really mean collecting the necessary information from the source code for the next steps of code optimization and generation; this should include type/boundary checking, etc.
I hope that I have made myself clear. Many thanks in advance.
There are many schemes for defining the semantics of programming languages: denotational semantics (which maps syntax into functions that compute program state); operational semantics (which amounts to building interpreters for your language); transformational semantics (which amounts to translating your language into another language for which some other semantics already exists); etc. A tiny sketch of the denotational flavour is given below.
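As a hedged, toy sketch of the denotational idea (map each syntactic construct to a function on program states), and not a description of any real system:

    import qualified Data.Map as M

    -- A toy imperative language; its denotational-style semantics maps
    -- every statement to a function from program state to program state.
    type State = M.Map String Int

    data Expr = Lit Int | VarE String | Add Expr Expr
    data Stmt = Assign String Expr | Seq Stmt Stmt | While Expr Stmt

    evalE :: Expr -> State -> Int
    evalE (Lit n)   _ = n
    evalE (VarE x)  s = M.findWithDefault 0 x s
    evalE (Add a b) s = evalE a s + evalE b s

    -- The denotation of a statement: State -> State.
    denote :: Stmt -> State -> State
    denote (Assign x e) s = M.insert x (evalE e s) s
    denote (Seq a b)    s = denote b (denote a s)
    denote (While c b)  s
      | evalE c s /= 0  = denote (While c b) (denote b s)
      | otherwise       = s

    main :: IO ()
    main = print (denote (Assign "x" (Add (Lit 2) (Lit 3))) M.empty)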
Very few of them presently lead to systems usable for real programming languages. One of the other answers here suggests Centaur as a system that tries to do this.
Action semantics by Peter Mosses is one of the more serious recent attempts.
For real systems, rather more ad hoc methods are presently the most effective. These include systems in which the lexical and grammatical syntax can be effectively defined (as variants of LEX and YACC) and which automatically build trees. Attribute grammars allow the specification of computations over trees, which allow one to define some kinds of analyses, such as symbol-table construction or metrics (technically you could do denotational semantics this way); a toy sketch of computing attributes over a tree follows below. Most conventional languages (C, Java, C#, COBOL, ...) all have relatively similar structures regarding control flow and data flow, and as a consequence one can build generic flow-analysis routines to allow one to reason about such standard languages.
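The sketch below (a rough Haskell illustration only, not a real attribute-grammar system) shows the flavour of specifying computations over trees: two synthesized "attributes" of each subtree, a symbol table and a size metric.

    -- A toy AST and two synthesized attributes computed over it.
    data Ast
      = Decl String Ast        -- declare a name, then continue
      | Use String             -- use a name
      | Block [Ast]            -- a sequence of subtrees

    -- Attribute 1: the symbol table (here just the list of declared names).
    symbols :: Ast -> [String]
    symbols (Decl x rest) = x : symbols rest
    symbols (Use _)       = []
    symbols (Block xs)    = concatMap symbols xs

    -- Attribute 2: a simple size metric (number of nodes).
    size :: Ast -> Int
    size (Decl _ rest) = 1 + size rest
    size (Use _)       = 1
    size (Block xs)    = 1 + sum (map size xs)

    main :: IO ()
    main = do
      let tree = Block [Decl "x" (Use "x"), Use "y"]
      print (symbols tree, size tree)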
Finally, you need a purpose for your semantic analysis: what facts exactly do you want to extract? The available static-analysis systems have hybrid pattern-driven/procedural methods for collecting the syntax, symbol-table, and flow facts of interest to compute specific answers to specific questions. Some systems enable one to use this semantic information to carry out source-code modification.
One system that follows the more ad hoc approach above is our DMS Software Reengineering Toolkit, which also has the generic semantic definitions (syntax, symbol tables, data-flow analysis) completed for real languages such as Java, C, C++ and COBOL. DMS can apply source-to-source transformations to the ASTs, conditioned by various fact-collecting procedures, and this enables mass transformation of code in a reliable way.
There are many formal languages for programming-language semantics; look at the Wikipedia entry for some more terms to google. I did a small amount of Z early in my career (that link to the formal-methods wiki is another possible place to start exploring). I'd suggest asking on the Lambda the Ultimate forum, as the people there use some of these and can suggest what to use for whatever use case you have in mind. This paper seems somewhat related to what you asked for in terms of generating compilers: you need both the semantics of the language and a description of the target machine to create a compiler.
Specifically for static analysis, look at http://rw4.cs.uni-sb.de/~martin/pag/
What you describe is exactly what the Centaur project was about:
http://www-sop.inria.fr/croap/centaur/centaur.html
In fact, you could go very far in describing the semantics of your language, to the point where the system was able to give you an interpreter for that language (that's when you had described the semantics completely). But you didn't have to go all the way: you could do less description work and still obtain a structured editor and typechecker for your efforts.
Although work has ceased on the project (as far as I can tell), you may find interesting articles and downloads in the links.

Intense study of type systems and type theory

I want to understand the actual theory behind types rather than just learning about the latest practical change made in some existing language (e.g., not just how Haskell or Scala's type system works).
What is the best way to pick up this background?
Type theory is a big area. First of all, the term "types" is something of a misnomer in computer science, for a few reasons, even though types are mostly used for the same basic idea. Types have arisen in many contexts: philosophy, computer science, and mathematics, for mostly the same reasons. The origin of types in mathematics comes from trying to formalize set theory and running into nasty paradoxes, and similar paradoxes arise in computer science (for example, I like to point to Girard's paradox on this one).
The way you probably interpret types at the moment is an idea that became popular in computing through the 70s-90s: types are a lightweight, flow-insensitive analysis that allows us to make concise logical guarantees about the programs we write. Types can be used for this, but you can actually take them all the way to encoding higher-order logic, where programs are proofs. Once you go there, you can take a proof, extract its computational component, and turn it into a program which computes a "correct" result (with respect to the theorem you proved).
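To make "programs are proofs" slightly more concrete, here is a hedged, minimal sketch in Haskell (real proof extraction happens in systems like Coq; Haskell is not a sound logic because of non-termination, so treat this purely as an illustration of reading types as propositions):

    -- Curry-Howard in miniature: read a type as a proposition and a total
    -- term of that type as a proof of it.

    -- "A implies (B implies A)"
    proofK :: a -> (b -> a)
    proofK a _ = a

    -- "(A implies B) and (B implies C) together imply (A implies C)"
    proofCompose :: (a -> b) -> (b -> c) -> (a -> c)
    proofCompose f g = g . f

    -- "A and B implies B and A" (pairs standing in for conjunction)
    proofSwap :: (a, b) -> (b, a)
    proofSwap (x, y) = (y, x)

    main :: IO ()
    main = print (proofSwap (1 :: Int, "two"))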
There are a few roads you can take from here:
Grab yourself a copy of Ben Pierce's Types and Programming Languages. This is the book to read, and one of the best books in computer science. It covers the theory for why we need types, and how we can use them to enforce constraints about our programs!
Learn a language where you use types on a regular basis to enforce things about the program semantics. In particular, you can learn OCaml, Haskell, or Agda. Haskell seems to be the best choice at the moment: it's got quite a few extensions that make it really attractive and a really active user community. In fact, it's very common for results published at top conferences to flow into widespread use in the community within only a few years!
Learn to use a theorem prover based on a constructive type theory. In my opinion, this is the only real way to grok the real issues behind type theory. There are a number of good options, though Coq and Isabelle stand out as the real contenders now. Coq has one great advantage: Ben Pierce also has a book that covers it! Software Foundations covers a good amount of programming-languages theory in depth, and you really get to prove things using it.
You might also want to look into a few related fields:
Category theory is the math that tends to underlie this stuff. For example, it's possible to give a categorical interpretation to the (co-)inductive datatypes found in these languages.
Logic. Learning a bit of good old traditional logic can be helpful, though I feel confident that most of what you need can be picked up from reading through Ben Pierce's SF book. However, there are still lots of references to older systems (sequent calculus and natural deduction).
Learn about the Haskell community! They are moving quickly, as I said, and ask deep questions about types in computer science.
Great Works in Programming Languages has a number of great articles!
There are a few other great recommendations I have for learning beyond this, but I'd definitely start with Ben Pierce's books (I've never really gotten into the follow-up book, Advanced Topics in Types and Programming Languages, but that may also interest you). In particular, I remember the Handbook of Automated Reasoning having a great article on higher-order type theory.
P.S. I'm convinced that the concrete answer to this question is "get a PhD in programming languages, philosophy, or logic..." ;-)

How does a Haskell compiler work?

Where can I get some paper/doc/whatever which describes how a Haskell compiler actually works? I read quite a few of the GHC docs, but stopped after getting a headache. So, something which doesn't require a PhD to understand and isn't written in the you're-supposed-to-be-already-familiar-with-it style would be preferable. It's not a problem if it's really long and takes some time to understand, though.
PS: Most interesting would be something about GHC, but anything is ok.
You can get an answer from the horse's mouth! Simon Peyton Jones (GHC wizard) wrote a book explaining how to implement functional programming languages. It's available for free online since it's now out of print: http://research.microsoft.com/en-us/um/people/simonpj/papers/pj-lester-book/
Of course, GHC has moved on since the book was written, but it's still very relevant.
Are you especially looking for details about compiling lazy evaluation? There is Simon Peyton Jones's book mentioned by Max Bolingbroke; the book detailing Clean's implementation is also online:
http://wiki.clean.cs.ru.nl/Functional_Programming_and_Parallel_Graph_Rewriting
If you have a university affiliation and want something smaller you could try to get these books (Henderson & Diller are certainly out of print):
Antoni Diller "Compiling Functional Languages" ISBN 0 471 92027 4
Peter Henderson "Functional Programming: Application and Implementation" ISBN 0-13-331579-7
AJT Davie "An Introduction to Functional Programming Systems using Haskell" ISBN 0 521 27724 8
Diller has a full compiler for a lazy language (implemented in Pascal) via combinator reduction. This was the implementation technique invented by David Turner for SASL. Henderson has many parts of a compiler for LISPkit, a miniature, lazy variant of Lisp. Davie details quite a bit of the machinery for compiling a lazy language; for instance, there's a description of the STG that's much shorter than Simon Peyton Jones's book (the STG is the abstract machine SPJ used for Haskell).
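For a feel of what combinator reduction means, here is a hedged toy sketch in Haskell (nowhere near SASL's actual graph-reduction machinery, just the S/K/I rewrite rules):

    -- S, K and I combinators, with a tiny normal-order reducer.
    data Term = S | K | I | App Term Term
      deriving Show

    -- One reduction step at the head, if a rule applies.
    step :: Term -> Maybe Term
    step (App I x)                 = Just x
    step (App (App K x) _)         = Just x
    step (App (App (App S f) g) x) = Just (App (App f x) (App g x))
    step (App f x)                 = (`App` x) <$> step f
    step _                         = Nothing

    -- Reduce by iterating steps until no rule applies.
    reduce :: Term -> Term
    reduce t = maybe t reduce (step t)

    main :: IO ()
    main = print (reduce (App (App (App S K) K) I))  -- S K K x = x, so: I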
The Clean developers have quite a bit of info on implementing SAPL (a Simple Applicative Language) if you look through their publications list:
http://www.st.cs.ru.nl/Onderzoek/Publicaties/publicaties.html
Finally, there are quite a number of papers documenting aspects of the Utrecht Haskell Compiler UHC (and EHC). I think most of the information is about how the compiler is organized (with attribute grammars and "Shuffle") and how the type systems (there are various levels of type system in EHC) are implemented, rather than how the back-end 'compilation' works.
Unfortunately, I suspect that what you're looking for doesn't exist. Compiler theory and Formal Language theory are reasonably complex topics in Computer Science, and Haskell is by no means a starting point.
First, you should probably get a good grounding in:
Lexical Analysis: http://en.wikipedia.org/wiki/Lexical_analysis
Parsing: http://en.wikipedia.org/wiki/Parsing#Programming_languages
Context Free (and other) Grammar systems (CFG, BNF)
Code Generation: http://en.wikipedia.org/wiki/Code_generation_(compiler)
I would suspect that anything explaining the internals of a Haskell compiler would require a substantially better understanding of the above topics than, say, C would.
I've taken a single course on the subject so far, so I have no formal literature to recommend, but I'm sure there exist many good sources.
Compilers are a huge subject and it would be impossible to explain them in their entirety here. But here's an overview of a general compiler. Hopefully this will give you some background that makes reading material specifically about GHC a little easier to understand.
Compilers generally work by a series of transformations in two parts, the front end and the back end.
The first transformation is turning plain text into something a little easier to traverse. This itself is usually split into two parts:
Lexical Analysis or Tokenization - the act of transforming plain text into small chunks (typically operators, identifiers, literals, etc.).
Syntactic Analysis or Parsing - turning these small chunks into a tree structure (typically an AST, an abstract syntax tree).
The next stage is semantic analysis. In this stage a compiler will usually add information to the AST (like type information) and build a symbol table. That concludes the front-end.
The next transformation turns the AST into an IR, an intermediate representation. Nowadays this is generally an SSA form (static single assignment).
This is then optimized, via constant propagation, dead-code elimination, vectorisation, etc.
The last transformation is code generation: transforming the IR into machine code. This can be very complicated. It is also sometimes referred to as lowering.
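To make the front-end/back-end split concrete, here is a hedged toy example in Haskell (it has nothing to do with GHC's actual internals, and every name is made up): a lexer, a small parser, and "code generation" to a pretend stack machine for arithmetic expressions.

    import Data.Char (isDigit)

    -- Front end, stage 1: lexical analysis (text -> tokens).
    data Token = TNum Int | TPlus | TTimes deriving Show

    tokenize :: String -> [Token]
    tokenize []       = []
    tokenize (c : cs)
      | c == ' '      = tokenize cs
      | c == '+'      = TPlus  : tokenize cs
      | c == '*'      = TTimes : tokenize cs
      | isDigit c     = let (ds, rest) = span isDigit (c : cs)
                        in TNum (read ds) : tokenize rest
      | otherwise     = error ("unexpected character: " ++ [c])

    -- Front end, stage 2: parsing (tokens -> AST), * binding tighter than +.
    data Expr = Num Int | Add Expr Expr | Mul Expr Expr deriving Show

    parseExpr, parseTerm, parseFactor :: [Token] -> (Expr, [Token])
    parseExpr ts =
      let (lhs, rest) = parseTerm ts
      in case rest of
           TPlus : rest' -> let (rhs, rest'') = parseExpr rest'
                            in (Add lhs rhs, rest'')
           _             -> (lhs, rest)
    parseTerm ts =
      let (lhs, rest) = parseFactor ts
      in case rest of
           TTimes : rest' -> let (rhs, rest'') = parseTerm rest'
                             in (Mul lhs rhs, rest'')
           _              -> (lhs, rest)
    parseFactor (TNum n : rest) = (Num n, rest)
    parseFactor ts              = error ("parse error at " ++ show ts)

    -- Back end: "code generation" for a toy stack machine instead of real
    -- machine code (a real compiler would insert IR and optimization here).
    data Instr = Push Int | IAdd | IMul deriving Show

    codegen :: Expr -> [Instr]
    codegen (Num n)   = [Push n]
    codegen (Add a b) = codegen a ++ codegen b ++ [IAdd]
    codegen (Mul a b) = codegen a ++ codegen b ++ [IMul]

    main :: IO ()
    main = print (codegen (fst (parseExpr (tokenize "1 + 2 * 3"))))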
For more information I recommend this wikipedia page.
Wikipedia has a good, readable overview of the internals of GHC (similar to #dan_waterworth's explanation, but specific to Haskell and GHC):
http://en.wikipedia.org/wiki/Glasgow_Haskell_Compiler#Architecture

Meta-relationships on language-oriented programming

In the process of developing a new language, how can "high level" concepts such as "LALR parser", "abstract syntax tree", and "context-free grammar" be related to "low level" concepts like specific grammar rules such as A -> B?
I mean as some kind of metalanguage relationship, or similar. Any ideas or suggestions on where to look for more information on this topic? Please feel free to rephrase my question to gain some clarity.
Any help will be greatly appreciated.
Usually these entities (grammar rules, parsers, ASTs) have conceptual relations as captured by books such as the Dragon compiler book (already mentioned in a comment).
I don't think these relations are very interesting when it comes to designing a new language; all you really care about is the syntax of the language (often expressed as a "context-free" grammar with additional constraints) and the semantics of the language (usually expressed as a very big reference document, sometimes expressed in a formal notation such as denotational semantics that can interpret an abstract parse tree produced by magic).
"Real" relations occur when you have machinery that ties them together: If I give grammar A to parser generator B, and use the result to process source code S, I may get AST T.
At this level, you aren't designing your language so much as implementing it. What you want here is an integrated set of tools for processing your language definition; ideally, it will accept your grammar and semantic notation directly. There aren't any practical tools I know about that do both of these ideally, so you have to choose among those that do exist.
Two tools that can be used for this to varying degrees of effectiveness are ANTLR and my DMS Software Reengineering Toolkit.
DMS provides at least one way in which semantics can be defined, by providing means for writing down "algebraic" laws of equivalence between language forms. In essence, you can say this language form is equivalent to that language form by writing pattern1 = pattern2, just like you do with algebra. You can see how this is done using algebra as an example.
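The sketch below gives a rough feel for the pattern1 = pattern2 idea, expressed as a rewrite rule over a toy expression AST in Haskell; it is only an illustration of the concept, not DMS's actual notation or machinery.

    -- A toy expression AST and one "algebraic law" applied as a rewrite.
    data Expr = Num Int | Var String | Add Expr Expr | Mul Expr Expr
      deriving (Show, Eq)

    -- The law, in the spirit of pattern1 = pattern2:
    --   x * (y + z)  =  x * y + x * z
    law :: Expr -> Maybe Expr
    law (Mul x (Add y z)) = Just (Add (Mul x y) (Mul x z))
    law _                 = Nothing

    -- Apply the law at the outermost match; otherwise descend into subterms.
    rewrite :: Expr -> Expr
    rewrite e = case law e of
      Just e' -> e'
      Nothing -> case e of
        Add a b -> Add (rewrite a) (rewrite b)
        Mul a b -> Mul (rewrite a) (rewrite b)
        _       -> e

    main :: IO ()
    main = print (rewrite (Mul (Var "x") (Add (Var "y") (Num 1))))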

Resources