Name: Jesper Louis Andersen
Member since: 2003-08-10 10:25:32
Last Login: 2007-08-15 23:15:19



Studying Computer Science at the University of Copenhagen. Generally confused and never really gets anything done.

Languages: Fluent in SML, C and Perl; occasionally speaks C++, Python, Ruby, Java, Scheme and Haskell; could probably learn the basics of most languages quickly.

Interests: Compiler theory, Operating system concepts, Semantics

Recent blog entries by jlouis


MLton has been bootstrapped and now runs on NetBSD too. Unfortunately it is IA32 only. The primary work was done by me. In case you might wonder, MLton is a very fast compiler for the language Standard ML.

The next thing I will work on is a pkg for the NetBSD pkgsrc tree, so it will be easy to install. Bootstrapping an SML compiler written in SML poses a little chicken-and-egg problem, so you might have to download a binary version of the compiler in order to build it.

MLton does not have a proper type-inference pass with error reporting; it relies on other compilers/interpreters for this. I intend to look into whether that is hard to add to the compiler. I think it is easy, but the problem is getting the code acceptable on a larger scale and hopefully doing it right the first time.

MLton has a neat trick which I think more open-source projects should pick up: do some work for them and get a t-shirt for your efforts. It is a rather simple thing, but I think the idea is brilliant.


Now raph has opened the can of worms: Judy is interesting, but it also poses problems. The algorithm is complex and lacks formalism. My bet is that Judy is hard to get working correctly in all corner cases. The idea of exploiting the cache for higher speed is neat though, and parts of the code/ideas could probably benefit other projects.

Perfect hashes

I've decided to run the following algorithms against each other: GPerf, a trie structure, a PATRICIA tree (see Knuth, TAOCP vol. 3, for this one), Judy, and a DJB hash (somewhere on DJB's page). The task will be pretty narrow: given a list of C keywords, look them up in a large project and report the number of keywords and identifiers. A ''large project'' could for instance be the Linux kernel, a *BSD /usr/src or similar. For comparison I will have a control program which just reads the data and throws it away.

The result of the above should show whether the measurements in the 13-year-old GPerf paper still hold. If they do, I will recommend that people use a trie instead of a perfect hash, since tries are faster and easier to understand. They use more memory though, so they might not be adequate for large-number-of-keywords problems.
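For reference, the trie contender is only a handful of lines. A minimal sketch (Python used purely for illustration; the keyword list is a small subset of C's):

```python
# A minimal trie sketch for the keyword-lookup task described above.
# Nested dicts stand in for trie nodes; "$" marks end-of-keyword.
# The keyword list is a small illustrative subset of C's keywords.

C_KEYWORDS = ["if", "int", "for", "while", "return", "static", "struct"]

def trie_build(words):
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node["$"] = True  # this node ends a complete keyword
    return root

def trie_lookup(root, word):
    node = root
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return "$" in node

KEYWORD_TRIE = trie_build(C_KEYWORDS)
```

Lookup work is proportional to the length of the word, independent of the number of keywords, which is exactly why the trie is a serious contender.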

Hah, work on MLton support on NetBSD progressed nicely today.

It started with me waking up to find my preliminary patches being merged into the main trunk. Over the next couple of hours a few minor glitches disappeared; most of the regression tests pass and all shines.

It doesn't bootstrap completely yet, though. More concentrated work is needed to kill the odd exception the module functor elaborator spits out 20 seconds into the compile. Hopefully I will get it attacked and fixed.

The perfect hash work has not been discontinued. In fact what I want is a static set rather than a perfect hash, and I am continuing with this plan. The primary problem is that the measurements were done 13 years ago on a mere SPARCstation 20. I think we have seen a few changes since then; cache behaviour comes to mind.

Further specialisation: we want it to work well in situations where the cardinality of the keyword set is small. We do not care about larger keyword sets. I checked CLRS (Introduction to Algorithms) on the subject, but they assume IntInf/libgmp-like numbers, and that is going to cost them. They do at least 2 cache-line fills per lookup, which is rather good; the question is whether the GMP arithmetic costs too much.

GPerf cheats. A lookup table, set up properly, does the work for it. The 13-year-old tests show that it is actually slower than, for instance, a trie, but uses less memory. The question here is whether today's larger caches change the game. More research is needed.
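The lookup-table trick can be illustrated like this. A hand-made sketch in Python, not gperf's actual output; the ASSO table and the choice of hashing on length plus first character are mine for this tiny example:

```python
# Sketch of the gperf idea: hash a word from its length plus a table
# value for one selected character, then confirm with a single string
# compare. The ASSO table below was picked by hand for this tiny
# keyword set; gperf's job is to find such a table automatically.

KEYWORDS = ["if", "int", "for", "while"]

# associated-values table for the first character of each keyword
ASSO = {"i": 0, "f": 1, "w": 3}

def phash(word):
    return len(word) + ASSO.get(word[0], 10)

# phash is collision-free on KEYWORDS: if->2, int->3, for->4, while->8
TABLE = [None] * 9
for kw in KEYWORDS:
    TABLE[phash(kw)] = kw

def is_keyword(word):
    h = phash(word)
    return 0 <= h < len(TABLE) and TABLE[h] == word
```

One table index and one string compare per lookup: that is the whole "cheat".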

Judy is damn interesting. It is a trie, but with a neat trick: it packs each trie node, since most of them can be sparsely represented. This results in very few cache fetches, minimal thrashing and a very fast algorithm, since taking a miss today costs... a lot. With CPUs entering 3 GHz, a miss is so expensive that unpacking the nodes on the fly actually gains speed. Impressive. There is an article at the abovementioned link which is worth reading.
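The packing idea can be shown in isolation, a sketch that assumes nothing about Judy's real (far more elaborate) layout:

```python
# Illustration of node packing (not Judy's actual layout): a 256-way
# trie node is usually sparse, so instead of a 256-pointer array we
# keep only the populated slots in a small sorted list and
# binary-search it. A smaller node means fewer cache-line fills.

import bisect

class PackedNode:
    def __init__(self):
        self.keys = []      # sorted byte values that are present
        self.vals = []      # parallel list of children / values

    def put(self, byte, val):
        i = bisect.bisect_left(self.keys, byte)
        if i < len(self.keys) and self.keys[i] == byte:
            self.vals[i] = val
        else:
            self.keys.insert(i, byte)
            self.vals.insert(i, val)

    def get(self, byte):
        i = bisect.bisect_left(self.keys, byte)
        if i < len(self.keys) and self.keys[i] == byte:
            return self.vals[i]
        return None
```

A node with 3 children costs a few dozen bytes instead of a kilobyte of mostly-NULL pointers, which is where the cache win comes from.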

Work had me porting scripts which hook into GNU Mailman. Perl programming in the large sucks, but at least I got to dance with the dbi-proxy in DBI. I have nailed the problem down to 2 more scripts, and then the Mailman backend can be separated from the main server. Could be a nice modularity gain in the monolithic world.

I read Bram's resume and wondered what he meant by process. Waterfall methodology? What is that? 30 seconds and a couple of cycles spent at the Google search-engine farm later, I had the answer:

It turns out that it is the methodology they try to teach us at the university. 4 phases: analysis, design, implementation and test. But of course it doesn't work. Evolutionary design, in which you continuously repeat the above 4 phases, each iteration reaching another milestone where hopefully more and more works, is far better in my opinion.

The reason is simple: every project has errors. If they arrive in the analysis phase of the waterfall methodology, your next 3 phases are doomed. On the other hand, a review in the evolutionary process should hopefully expose the problem.

No method is totally error-free though. Such is life. So you still have to do the basics:

Think! Strategy before tactics. At some point it is important to stop thinking, though. Detailing everything beyond measure is probably just a silly waste of time.

Divide! Smaller problems are easier to solve than big ones. When the small solutions accumulate, they eventually solve the big problem.

Use modularity. Reusability is mostly a dead fish, but inside a given project reusability can be quite high, so use it.

Measure! There is no reason to turn the performance knobs when you don't know where it hurts. Remember the cache of newer systems; it often plays more than a major role.

Understand! Errors should never be solved by turning the program's knobs until it seems to work. Understand why it fails, and then do the fix correctly.

Problem: GCC uses gperf for computing perfect hash tables. gperf is under the GPL. Thus it cannot be used for TenDRA.

Solution: Create one!

Perfect hashes are not that hard, and making a generator shouldn't be either. I have chosen Standard ML as the implementation language, at least at the start until I have a handle on the code. Rewriting into C does not interest me that much.

How should it be done? I do not know yet. For now I am hunting for articles and researching the area. The aim is to have a fast and effective library which can compete with gperf. It will be interesting to see how this fares.
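Whatever the final design, the core of such a generator is a search: try hash-function parameters until one is collision-free on the fixed keyword set. A naive brute-force sketch (the multiplicative hash and the parameter space are mine for illustration; real tools like gperf search a much smarter space):

```python
# Naive sketch of what a perfect-hash generator must do: search a
# parameter space (here a multiplier and a table size) until some
# hash function is collision-free on the fixed keyword set.
# Real generators such as gperf search far smarter spaces.

def make_hash(mult, size):
    def h(word):
        acc = 0
        for ch in word:
            acc = (acc * mult + ord(ch)) % size
        return acc
    return h

def find_perfect_hash(keywords, max_mult=2000):
    n = len(keywords)
    for size in range(n, 4 * n):          # allow a little table slack
        for mult in range(1, max_mult):
            h = make_hash(mult, size)
            if len({h(w) for w in keywords}) == n:
                return h, size            # perfect: no collisions
    return None, None
```

Because the keyword set is static and small, even this dumb search usually terminates quickly; the interesting engineering is in pruning the space and minimising the resulting table.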

Monopolies are inherently bad. I think most would agree with a comment like that. A monopoly kills free-market economics and makes further development stagnant. Unfortunately there is more to this than bashing a big company all the time.

My target is GCC. GCC has now become the monopoly of the open-source world. The compiler is the de-facto standard, breaks proper standards and is generally hard to use. But people still use it. The main problem is competition: the GCC compiler needs competition.

Adding extensions to a language hurts. It is very hard to fight against such embrace-and-extend tactics, and it only leads to feature bloat instead of well-thought-out additions.

There is the 5% rule: if we can get another compiler to within 5% of the speed of GCC, we have reached the point where fair competition can begin.

At some level I do not care that much about C, which I find a miserable language, though still better than much else. On the other hand, where should the competition to GCC come from?

Not Intel's CC; commercial dependence is bad. Then there are TenDRA, Watcom, LCC and friends. Right now I have decided to throw some time at the TenDRA project and see what happens.


jlouis certified others as follows:

  • jlouis certified asmodai as Journeyer
  • jlouis certified markonen as Apprentice
  • jlouis certified Bram as Master
  • jlouis certified trs80 as Apprentice
  • jlouis certified raph as Master
  • jlouis certified ask as Journeyer
  • jlouis certified jlouis as Journeyer
  • jlouis certified deekayen as Apprentice
  • jlouis certified snim2 as Journeyer
  • jlouis certified sweeks as Master
  • jlouis certified Akira as Journeyer
  • jlouis certified fejj as Journeyer
  • jlouis certified ebizo as Apprentice

Others have certified jlouis as follows:

  • TomHornson certified jlouis as Journeyer
  • shlomif certified jlouis as Apprentice
  • jlouis certified jlouis as Journeyer
  • fejj certified jlouis as Journeyer
  • snim2 certified jlouis as Journeyer
  • holsta certified jlouis as Journeyer

