In response to Stefano's reply to my post about Coq:
> Usually such pieces of software are more seen as academic toys, seeing it promoted as a geeky software is a joy to me.
I really hate to detract from your happiness -- but I definitely had the word "math" implicitly stuck before "geek toy". Most coders I've met are, at best, indifferent to mathematics and outright hostile at worst, so I doubt that Coq will get far outside of small niches where the benefits are irresistible (security software, maybe? -- and of course pure math). I read up on this stuff despite it being utterly useless for anything I'm likely to be paid to do (unless I return to academia), because I'm a hopeless weenie loser who likes math. :P
> Coq is not really a programming language (even if the CoqArt book says so, to attract the developer community), it is mainly a proof assistant; i.e. a software in which you give definitions, state theorems and prove them.
Well, this all depends on how you define programming language, which is one of those never-ending definitional questions. But using my general conception of what a programming language is, I actually have noticed two programming languages inside Coq.
- The more interesting one (IMO) is the tactic language (Ltac), together with the "Vernacular" commands, in which you construct proofs by reducing your goal to a tautology. The goal here seems to be to compute a well-typed term in Coq's core calculus, and you generally construct programs interactively: Coq shows you the current proof state (expressed as one or more goals and a set of local hypotheses) and you manipulate it by (e.g.) applying lemmas to yield new known facts. Viewed as a programming language, I think this one is fairly weak. Tactic scripts are close to unreadable, and the proofs they generate don't correspond to the reasoning process one normally follows when working out a proof. I am less annoyed by it now that I've learned how to do forward reasoning :), but backward reasoning is really pushed, and that just seems .. well .. backwards to me. There's also a strong tendency to use implicitly generated names for things and to just "know" what the state of the system will be after issuing a command; as far as I can tell, the only way to understand a Coq proof is to step through it in CoqIDE. While it's nice to be able to "trust the kernel", it would also be nice to be able to "read a proof" directly. Among other things, their inscrutability makes me think that Coq proofs would be annoying to maintain over time. That's not much of a concern for material that's already settled (the Principia Mathematica kind of stuff), but it seems likely to matter if you're developing a new theory -- in particular, I would shudder (theatrically) at the mere thought of including embedded Coq proofs in my programs. Nick Benton wrote some additional brief comments on this, published on the Web under the title Machine Obstructed Proof.
- The other sense is that Coq directly embeds a strongly normalizing functional language -- Gallina, whose core is the Calculus of Inductive Constructions, a system considerably richer than System F -- with primitive recursion over inductive structures. This isn't Turing-complete (of course!), but I would certainly call it a real programming language. As you note, though, this is less interesting unless you want to prove facts about your programs.
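To make the two-language split concrete, here's a minimal sketch (the function and lemma names are my own illustrations, not from anything above): a small program in Coq's total functional language, followed by a tactic-script proof about it that works backward from the goal.

```coq
Require Import Lia.

(* The programming-language side: a total function. Recursion must be
   structurally decreasing, which is what keeps the language strongly
   normalizing rather than Turing-complete. *)
Fixpoint sum_to (n : nat) : nat :=
  match n with
  | 0 => 0
  | S m => S m + sum_to m
  end.

(* The tactic side: backward reasoning. Each tactic transforms the
   goal state that CoqIDE displays, until no goals remain. *)
Lemma sum_to_ge : forall n : nat, n <= sum_to n.
Proof.
  induction n as [| m IH].
  - (* goal: 0 <= sum_to 0 *) simpl. lia.
  - (* goal: S m <= sum_to (S m), with IH : m <= sum_to m *) simpl. lia.
Qed.
```

Note how little the script itself says about *why* the lemma holds: the actual reasoning lives in the intermediate goal states, which you only see by stepping through the proof -- exactly the readability complaint above.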
> Regarding the synergy of Coq with OCaml: yes, you can prove in Coq the correctness of OCaml program, but actually not more than you can do with programs written in other programming languages.
I was specifically thinking about Concoqtion; I have no idea how other projects compare, and I apologize if I incorrectly summarized it!
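For completeness, the more common route from Coq to OCaml is program extraction, which goes in the opposite direction from Concoqtion (which, as I understand it, embeds Coq terms into OCaml's types). A minimal sketch, assuming a Coq version where the extraction plugin is loaded explicitly; `double` is just an illustrative function:

```coq
Require Import Extraction.

(* A Gallina function you could prove things about... *)
Fixpoint double (n : nat) : nat :=
  match n with
  | 0 => 0
  | S m => S (S (double m))
  end.

(* ...turned into standalone OCaml source. By default, nat is
   extracted as a Peano datatype rather than OCaml's int. *)
Extraction Language OCaml.
Recursive Extraction double.
```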