Why is writing a compiler in a functional language easier?

I've been thinking about this question for a long time, but I couldn't really find an answer on Google, nor a similar question on Stack Overflow. Apologies if this really is a duplicate.

Many people seem to think that writing compilers and other language tools in functional languages such as OCaml and Haskell is much more efficient and easier than in imperative languages.

Is this true? If so, why is it so much more efficient and easier to write them in functional languages rather than in imperative languages such as C? Also, aren't language tools written in functional languages slower than those written in a low-level language like C?


A lot of compiler tasks are pattern matching on tree structures.

Both OCaml and Haskell have powerful and concise pattern matching capabilities.

It's harder to add pattern matching to imperative languages, because whatever value is being evaluated or destructured to match a pattern against must be side-effect free.
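For example, evaluating an expression tree in OCaml is a single recursive pattern match over the node shapes. This is a toy sketch with made-up constructor names, not taken from any real compiler:

```ocaml
(* A tiny expression tree; hypothetical node names for illustration. *)
type expr =
  | Num of int
  | Add of expr * expr
  | Mul of expr * expr

(* Evaluation is one pattern match: each tree shape gets one clause. *)
let rec eval = function
  | Num n -> n
  | Add (a, b) -> eval a + eval b
  | Mul (a, b) -> eval a * eval b

let () =
  (* (1 + 2) * 3 *)
  assert (eval (Mul (Add (Num 1, Num 2), Num 3)) = 9)
```

The compiler also warns you if a match misses a case, which is exactly the kind of corner-case checking a compiler writer wants.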

Often a compiler works a lot with trees. The source code is parsed into a syntax tree. That tree might then be transformed into another tree with type annotations to perform type checking. Now you might convert that tree into a tree containing only core language elements (converting syntactic sugar into an unsugared form). Now you might perform various optimizations that are basically transformations on the tree. After that you would probably put the tree into some normal form and then iterate over it to create the target (assembly) code.

Functional languages have features like pattern matching and good support for efficient recursion, which make it easy to work with trees, so that's why they're generally considered good languages for writing compilers.
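As a sketch of the desugaring step described above, here is a toy tree-to-tree pass in OCaml. The `surface`/`core` types and the rewrite of `a - b` into `a + (-b)` are invented for illustration:

```ocaml
(* A surface language with subtraction as sugar, and a smaller core
   language without it; hypothetical types for illustration. *)
type surface =
  | SNum of int
  | SAdd of surface * surface
  | SSub of surface * surface   (* sugar *)

type core =
  | CNum of int
  | CAdd of core * core
  | CNeg of core

(* Desugaring is a recursive tree-to-tree transformation:
   a - b becomes a + (-b). *)
let rec desugar = function
  | SNum n -> CNum n
  | SAdd (a, b) -> CAdd (desugar a, desugar b)
  | SSub (a, b) -> CAdd (desugar a, CNeg (desugar b))

let () =
  assert (desugar (SSub (SAdd (SNum 1, SNum 2), SNum 3))
          = CAdd (CAdd (CNum 1, CNum 2), CNeg (CNum 3)))
```

Each pass in the pipeline above has this same shape: a recursive function from one tree type to another.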

One important factor to consider is that a big milestone in any compiler project comes when you can self-host the compiler and "eat your own dog food." For this reason, languages like OCaml, which were designed for language research, tend to have great features for compiler-type problems.

In my last compiler-esque job we used OCaml for exactly this reason while manipulating C code; it was just the best tool around for the task. If the folks at INRIA had built OCaml with different priorities, it might not have been such a good fit.

That said, functional languages are the best tool for solving any problem, so it logically follows that they are the best tool for solving this particular problem. QED.

/me: crawls back to my Java tasks a little less joyfully...

See also

F# design pattern

FP groups things 'by operation', whereas OO groups things 'by type', and 'by operation' is more natural for a compiler/interpreter.
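To illustrate the "by operation" grouping: in a functional language each compiler pass is one function that handles every node kind in one place, and adding a new operation touches no existing code. A toy sketch with invented names:

```ocaml
(* One variant type, many operations over it; hypothetical AST. *)
type expr = Num of int | Add of expr * expr

(* Operation 1: evaluation, grouped in a single function. *)
let rec eval = function
  | Num n -> n
  | Add (a, b) -> eval a + eval b

(* Operation 2: pretty-printing. Adding it requires no change to the
   type or to eval, which matches how compilers grow: new passes are
   added over a fixed AST. *)
let rec show = function
  | Num n -> string_of_int n
  | Add (a, b) -> "(" ^ show a ^ " + " ^ show b ^ ")"

let () =
  let e = Add (Num 1, Num 2) in
  assert (eval e = 3);
  assert (show e = "(1 + 2)")
```

In an OO design the same code would be scattered across an `eval` and a `show` method on each node class, so each pass is spread over many files.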

Seems like everyone missed another important reason: it's quite easy to write an embedded domain-specific language (EDSL) for parsers that looks a lot like (E)BNF in normal code. Parser combinators like Parsec are quite easy to write in functional languages using higher-order functions and function composition. Not only is it easier, it's also very elegant.

Basically, you represent the simplest generic parsers as plain functions, and you have special operations (typically higher-order functions) that let you compose these primitive parsers into more complicated, more specific parsers for your grammar.
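A minimal sketch of that idea in OCaml, with invented helper names rather than Parsec's actual API: a parser is a function from input to an optional result plus leftover input, and `>>=` sequences parsers:

```ocaml
(* A parser consumes a char list and yields Some (result, rest) or None. *)
type 'a parser = char list -> ('a * char list) option

(* Primitive parser: match one character satisfying a predicate. *)
let satisfy p : char parser = function
  | c :: rest when p c -> Some (c, rest)
  | _ -> None

(* Combinators: the higher-order glue between primitive parsers. *)
let ( >>= ) (p : 'a parser) (f : 'a -> 'b parser) : 'b parser =
  fun input ->
    match p input with
    | Some (x, rest) -> f x rest
    | None -> None

let return_p x : 'a parser = fun input -> Some (x, input)

(* digit+ in BNF terms: one or more digits. *)
let digit = satisfy (fun c -> c >= '0' && c <= '9')

let rec many1 p =
  p >>= fun x ->
  fun input ->
    match many1 p input with
    | Some (xs, rest) -> Some (x :: xs, rest)
    | None -> Some ([x], input)

let number : int parser =
  many1 digit >>= fun ds ->
  return_p (int_of_string (String.concat "" (List.map (String.make 1) ds)))

let explode s = List.init (String.length s) (String.get s)

let () =
  match number (explode "42+x") with
  | Some (n, rest) -> assert (n = 42 && rest = ['+'; 'x'])
  | None -> assert false
```

Grammar rules then read almost like their BNF: `number` is literally "one or more digits, folded into an int."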

This is not the only way to do parser frameworks, of course.

Basically, a compiler is a transformation from one set of code to another — from source to IR, from IR to optimized IR, from IR to assembly, etc. This is precisely the sort of thing functional languages are designed for — a pure function is just a transformation from one thing to another. Imperative functions don't have this quality. Although you can write this kind of code in an imperative language, functional languages are specialized for it.
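As a sketch of that view, each pass below is a pure tree-to-tree function, and the "compiler" is just their composition. The passes and node names are invented, purely illustrative:

```ocaml
(* Hypothetical AST with variables, so passes have something to leave alone. *)
type expr = Num of int | Var of string | Add of expr * expr | Mul of expr * expr

(* Pass 1: constant folding. *)
let rec fold_consts = function
  | Num n -> Num n
  | Var v -> Var v
  | Add (a, b) ->
      (match fold_consts a, fold_consts b with
       | Num x, Num y -> Num (x + y)
       | a', b' -> Add (a', b'))
  | Mul (a, b) ->
      (match fold_consts a, fold_consts b with
       | Num x, Num y -> Num (x * y)
       | a', b' -> Mul (a', b'))

(* Pass 2: a toy strength reduction, rewriting x * 2 into x + x. *)
let rec reduce = function
  | Mul (a, Num 2) -> let a' = reduce a in Add (a', a')
  | Mul (a, b) -> Mul (reduce a, reduce b)
  | Add (a, b) -> Add (reduce a, reduce b)
  | Num n -> Num n
  | Var v -> Var v

(* The pipeline is just function composition of pure transformations. *)
let optimize e = e |> fold_consts |> reduce

let () =
  (* x * (1 + 1)  ==>  x * 2  ==>  x + x *)
  assert (optimize (Mul (Var "x", Add (Num 1, Num 1)))
          = Add (Var "x", Var "x"))
```

Because every pass is a pure function, passes can be tested in isolation and reordered or composed freely.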

One possibility is that a compiler tends to have to deal very carefully with a whole host of corner cases. Getting the code right is often made easier by using design patterns that structure the implementation in a way that parallels the rules it implements. Often that ends up being a declarative (pattern matching, "where") rather than imperative (sequencing, "when") design and thus easier to implement in a declarative language (and most of them are functional).