Too much, then we go in the other direction


I find it very interesting when a trend generally holds, but then, as we keep going, it reverses. I note such situations here as I come across them.

Performance of higher level languages

The general assumption is that higher-level programming languages are slower than lower-level languages such as C.

But because SQL is so high-level, the "compiler" can substitute totally different algorithms, apply multiple processors or I/O channels or entire servers transparently, and more.

I think of Haskell as being the same. You might think you just asked Haskell to map the input list to a second list, filter the second list into a third list, and then count how many items resulted. But you didn't see GHC apply fusion rewrite rules behind the scenes, transforming the entire thing into a single tight machine-code loop that does the whole job in one pass over the data with no allocation: the kind of thing that would be tedious, error-prone and unmaintainable to write by hand. That's only really possible because of the lack of low-level detail in the code.
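As a concrete sketch (the function name and the numbers are made up for illustration), here is the kind of pipeline GHC can fuse. Compiled with -O2, the fusion rewrite rules in the list library typically collapse the map, the filter and the length into a single counting loop, with no intermediate lists allocated:

    module Main where

    -- Count how many doubled inputs exceed a threshold. Written as three
    -- separate passes (map, filter, length), but with -O2 GHC's fusion
    -- rules can collapse the whole pipeline into one counting loop that
    -- never builds the intermediate lists.
    countBigDoubles :: [Int] -> Int
    countBigDoubles xs = length (filter (> 10) (map (* 2) xs))

    main :: IO ()
    main = print (countBigDoubles [1 .. 1000000])

Whether everything fuses depends on the exact library definitions and optimisation flags, but the point stands: the compiler only has this freedom because the source says nothing about loops, indices or allocation.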

Why is Haskell so fast?

Garbage collection

Why would garbage collection be faster than explicit memory allocation as in C? It's often assumed that calling free costs nothing. In fact free is an expensive operation which involves navigating over the complex data structures used by the memory allocator. If your program calls free intermittently, then all of that code and data needs to be loaded into the cache, displacing your program code and data each time you free a single memory allocation. A collection strategy which frees multiple memory areas in one go ... pays this penalty only once for multiple allocations (thus the cost per allocation is greatly reduced).

GCs also move memory areas around and compact the heap. This makes allocation easier, hence faster: allocating from a contiguous free region is little more than bumping a pointer. A smart GC can also be written to interact well with the L1 and L2 caches.
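One way to see this in practice with GHC (the module name and data sizes below are arbitrary) is to build a program that allocates heavily and then ask the runtime for its garbage collection statistics:

    module Main where

    -- Allocates a few hundred thousand heap objects (map nodes), then exits.
    -- Build:  ghc -O2 -rtsopts GcDemo.hs
    -- Run:    ./GcDemo +RTS -s
    import qualified Data.Map.Strict as Map

    main :: IO ()
    main = print (Map.size (Map.fromList [(i, i * 2) | i <- [1 .. 200000 :: Int]]))

The report printed by +RTS -s lists total bytes allocated against a much smaller number of collections, which is the amortisation described above: many allocations are reclaimed per collection rather than each being freed individually.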

OCaml garbage collection

Conversely, going lower level can potentially reduce performance.

Diseconomies of scale