
Wrapping up “Practical SML#”

Review of chapters 9-11 & overall thoughts

For Context: All the previous posts in this series

Looking back

In July 2023, I’d just received my copy of “Practical Programming with SML#” (SML#で始める実践MLプログラミング), and decided to blog about each chapter as I read it, as a form of making SML# a bit more accessible to the English-speaking blogosphere.

I started out with a pretty good cadence, working through the first three chapters before the end of July. Then, my pace flagged, and chapters 4-8 took me the rest of the year to write up.

It’s now February 2024 and I’m wrapping up this series without having reviewed the meatiest and most intriguing final three chapters. The truth is that my enthusiasm for SML# as a hobby has waned considerably, and most of my (very scarce) recreational programming time is going towards plain SML these days.

The rest of this post will contain a brief synopsis of the last three chapters, and then my overall impressions of the SML# ecosystem, based on my admittedly cursory engagement.

Chapter 9: Cooperating with Databases

This chapter walks us through the process of defining and persisting our application data in a relational database. Like the other chapters in the book, it’s very thorough: it has us do everything from establishing a database connection, through INSERT and UPDATE commands and handling conversions and query results, to finally running an analysis of the COVID data (imported using the JSON conversion features introduced in the previous chapter).

Having felt enough professional pain from tight coupling of application models and logic to the DB layer (ActiveRecord, SQLAlchemy, etc., and to a lesser extent Ecto), I was not exactly super enthused about this chapter and the approach taken by the language designers.

Essentially, as I understand it, SML# uses its built-in reflection and dynamic typing capabilities to build SQL statements based on SML#-level type information. This means that the entities on the SML# side and the entities on the SQL side must align 1:1 in terms of naming and semantics. Perhaps this approach is in line with the KISS philosophy, and forces application developers to keep their application-side code wholly reflective of the reality in the DB.
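To give a flavor of what I mean, here is a sketch of the server declaration. The syntax is reproduced from memory rather than from the book, so consult the SML# manual for the authoritative form; the point is that the annotated record type is the database schema, and the field names have to line up exactly with the SQL tables and columns:

(* Sketch from memory, not the book's code: the annotated type must *)
(* mirror the table and column names one-to-one.                    *)
val server =
    _sqlserver "dbname=covid"
    : {cases: {prefecture: string, patients: int} list}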

Maybe I’m spoiled from having been exposed to the Haskell way of solving these issues with Typeclasses, but to me both the JSON conversions and the SQL mappings are a kind of false economy, making the easy things easier but the hard things impossible. I’d prefer to do a bit more work up-front defining converters or parsers (as the Elm language forces you to), than to acquiesce to 1-to-1 mappings, and therefore dependencies, between my application code and serialized data.

(To be pedantic: it’s possible, of course, to have a layer of SML# code that serves as the ‘parser’ layer, and then create truly pure models from this ‘parser’ layer, but practically speaking no one is going to go to these lengths in the name of elegance or flexibility.)
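For what it’s worth, a minimal sketch of such a layer in plain SML (my own hypothetical types, not the book’s) would look something like this:

(* Hypothetical example: the raw row type mirrors the DB table, *)
(* while the domain model is free to diverge from it.           *)
type rawRow = {prefecture : string, patients : int, deceased : int}

datatype severity = Mild | Severe
type report = {prefecture : string, severity : severity}

fun fromRow ({prefecture, deceased, ...} : rawRow) : report =
    {prefecture = prefecture,
     severity = if deceased > 0 then Severe else Mild}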

This is the chapter that probably “lost me” to the cause. But, being so close to completing the book, I decided to skip implementing the code in this chapter and jump straight to parallel programming.

Chapter 10: Parallel Programming

Having used Haskell for several years, and then Erlang and Elixir for most of my programming career, I came to this chapter with high expectations. Unfortunately, I wasn’t really able to get my hands dirty with the material, because Pthread.Thread.join would keep hanging my smlsharp session, and only a SIGKILL could get it unstuck. Yes, the thread I was trying to join had finished doing its work. Yes, it did print the result after getting SIGKILLed! (You might be thinking that I should have gone and debugged this strange and interesting concurrency bug. That’s true, but I really felt at this point that the juice was not worth the squeeze. Keep reading for more on this topic.)
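For the record, the failing pattern looked roughly like this. The signatures are my recollection rather than a quote from the manual, and doWork stands in for the actual computation, so treat this as a sketch:

(* Assumed shapes: create : (unit -> int) -> thread, join : thread -> int. *)
(* doWork is a hypothetical stand-in for the real computation.             *)
val t = Pthread.Thread.create (fn () => (doWork (); 0))
val r = Pthread.Thread.join t   (* this is the call that never returned *)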

I really wanted to see the pretty ray-traced pictures, so I implemented the single-threaded raytracer in regular SML, and compiled it with Poly/ML. It compiled and ran like a charm!

[Image: half-shaded sphere, rendered in black and white]

At this point I was already disheartened enough that I didn’t proceed with the MassiveThreads parallel raytracer discussed in the rest of the chapter.

But to summarize the rest of the chapter: in contrast with the OS-level threads that regular SML implementations use, SML# offers drop-in support for a green-thread system called MassiveThreads. These threads are much cheaper to initialize and run, enabling finer-grained parallelism and better utilization of CPU resources.

Chapter 11: Techniques of Developing Practical Systems

This chapter is a synthesis of all the previous ones: we get a fully fledged C integration with Cairo (producing PDFs, no less), a database in SQLite, and command-line parsing. The chapter sets out to prove that we can realistically apply SML# to the real-world task of plotting data points from a relational database.

Epilogue

The epilogue drives home the key points made by the authors in the course of the book. There are four fundamentals of SML:

1) Think in types
2) Write functions while keeping recursive structures in mind
3) Express the problem with data definitions
4) Use make and incremental compilation

And four special characteristics offered by SML#:

5) Access software libraries via C
6) Ingest external data safely with dynamic typing
7) Directly program the database
8) Use parallel computation features, harnessing multicore CPUs


My thoughts on the book, the language, and the ecosystem

First of all, a disclaimer: I read the book solely for personal enjoyment, in a hobbyist capacity. Perhaps working through it in either an academic or a professional setting, with experts on hand to talk to, would have been a different experience. I also didn’t implement any larger program apart from the examples and exercises from the book.

With that perspective clear, here are my thoughts.

1. This was one of the best programming books I have read

I would place it right alongside “Common Lisp: A Gentle Introduction to Symbolic Computation”, “Structure and Interpretation of Computer Programs”, and “Erlang Programming”, my three personal favorites.

It’s not just a well-written book; it’s a book that accomplishes what it sets out to do: impart the authors’ knowledge to the reader. There’s never any doubt as to what the authors are trying to convey, and every bit of code comes with an explanation. My Japanese reading skills are nowhere near native level, yet I never felt lost or confused by the grammar or vocabulary choices.

There is a lot of code in the book, interspersed with discussion, so the pace is fast and always feels engaging. The SQL chapter is a bit of an odd one out, sometimes coming stylistically closer to a reference manual than to a tutorial.

Then there are the exercises, which directly reinforce and often expand on the material from the preceding chapter. In my mind, this is the gold standard for a practical programming book. I really enjoyed doing these exercises, and thanks to this I retained much more of the presented information.

2. The language is an unqualified ergonomic improvement over Standard ML, with some raw areas

SML# the language is the reason I bought the book, and I wasn’t let down. It improves on the standard in many subtle but very pleasant ways, from thorough Unicode support to the ability to selectively match on records, really bringing the developer experience into the 21st century when it comes to programming in the small.
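A tiny example of the record-matching improvement (my own code, assuming SML#’s record polymorphism as I understand it): in plain SML the function below is rejected as an unresolved flex record unless the full record type is known from context, while SML# happily infers a polymorphic record type for it.

(* SML# infers something like ['a#{name: 'b}. 'a -> 'b] here, so nameOf *)
(* works on any record that has a name field.                           *)
fun nameOf {name, ...} = name

val a = nameOf {name = "sphere", radius = 1.0}
val b = nameOf {name = "camera", position = (0.0, 0.0, ~5.0)}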

The Dynamic, SQL and Foreign extensions to the language feel a bit raw, as if the authors are sharing with us the internal implementation of features that are not yet complete. These modules are very powerful but somewhat arbitrary, like Go’s special treatment of maps and slices, or Standard ML’s own special “equality types” and math operator overloading. It feels like some parts of the system have been given superpowers, but the user can only plug into these superpowers, not extend them.

I can imagine these features seeing further development: on the one hand, allowing user-level extensions (such as custom Dynamic.fromXYZ converters), and on the other, building higher-level features (such as deriving-style code generators) that use these extensions under the hood. We already know it’s possible, given how smoothly the JSON and SQL integrations work.

3. The tooling is underwhelming given its ambitions

There is a lot to like in the smlsharp tooling. The system compiles cleanly (once you sort out MassiveThreads), includes high-quality SML libraries (smlnj-lib, smlunit) and tools (smlyacc, smllex, smlformat), and the make integration is a very welcome change from endless language-specific build tools. And yet, there are a couple of things that prevent the experience from really taking off.

First: the compiler is just slow. The authors make the case at several points that the main goal for SML# has always been extending the language in a practical, usability-oriented direction, and that this came at the cost of performance work. I appreciate this approach and think it’s the correct one; it’s the approach that gave us Lisp and Erlang and Haskell, and each of these languages is nowadays sufficiently performant for real-world use.

Still, with that understanding, the compilation process is almost unacceptably slow for day-to-day use. Here are the compilation times for the ray-tracer program:

% time mlton main.sml
Warning: main.sml 98.14-98.21.
  Declaration is not exhaustive.
    missing pattern: NONE
    in: val SOME y = Int.fromString (hd args)

real	0m4.188s
user	0m2.459s
sys	0m1.482s
% time smlsharp main.sml
main.sml:98.14(2768)-98.46(2800) Warning: binding not exhaustive
      SOME y => ...

real	0m1.069s
user	0m0.868s
sys	0m0.195s
% time polyc main.sml
main.sml:98: warning: Pattern is not exhaustive. Found near val (SOME y) = Int.fromString (hd args)

real	0m0.156s
user	0m0.112s
sys	0m0.045s

So… not as slow as mlton, but still several times slower than polyc. This really adds up, and works against the ‘practical’ bent of the language. Fast compilation, and therefore fast feedback for the developer, is why Turbo Pascal was so popular back in the day :)

Second: the make-based incremental build system has some warts, and the .smi interface-file scheme feels like a throwback to the early ’90s. I don’t enjoy having to write practically the same code in two places, and that’s effectively what you end up doing. Again, maybe I’m spoiled by Haskell’s module system with explicit imports and exports, but the .smi-file dance feels to me like something the compiler/build system should be doing for me.
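To illustrate the duplication (the file names and contents are my own hypothetical example, and the .smi syntax is from memory, so details may differ): you write the implementation in the .sml file, and then restate its interface by hand in the .smi file.

(* vec.sml *)
structure Vec =
struct
  type vec = real * real * real
  fun dot ((x1, y1, z1) : vec) ((x2, y2, z2) : vec) =
      x1 * x2 + y1 * y2 + z1 * z2
end

(* vec.smi -- written by hand, restating the same names and types *)
_require "basis.smi"
structure Vec =
struct
  type vec = real * real * real
  val dot : vec -> vec -> real
end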

Also, regarding the clunkiness of the make system, here’s a snippet from the last chapter of the book: the chapter which, mind you, demonstrates the full extent of real-world programming with SML#:

$ smlsharp -MMm dbplot.smi > Makefile
$ sed -i.orig -e '/^LIBS/s/$/ -lcairo/' Makefile
$ make

First, we make some edits to our interface file, dbplot.smi. As I noted above, this seems like an unnecessary step, at least for most usage. (I know there is a special provision for multiple files exporting the same interface.) Anyway, okay, we edited the .smi file, so we have to regenerate our Makefile. Okay, let’s say we agree to this step. But the next step is unacceptable: we have to go and rewrite our autogenerated Makefile with sed.

This is a violation of DRY: I don’t want to have to re-run sed on my Makefile every time I change an interface file. I want to specify somewhere that my code depends on -lcairo and not have to continuously re-jigger the build artifacts by hand. I even made a Pull Request with a proposal for a fix… which brings us to the worst aspect of smlsharp, and the real dealbreaker when it comes to real-world adoption: the lack of publicly active users.

4. The community is inactive, to put it politely

For a project of such high quality and high visibility, SML# feels like a ghost town. Since the book came out in 2021, there have been a meager 23 commits to the GitHub master branch, the last one in March 2023. There isn’t any discussion on the PRs that folks (including myself) have put up, and the GitHub forums in both Japanese and English have had zero activity since early 2022.

To compile MassiveThreads on a reasonably modern glibc, you need to find the Debian patch in a developer’s personal fork. This hasn’t been addressed in the smlsharp master branch README.

There was some activity on Japanese programming Twitter around the time the book came out, but sadly no one really ran with it for a longer period of time.

In general, from my seven-month engagement with smlsharp, I got a taste of that famous Japanese feeling of wabi: the presence of something brilliant, deep, meaningful, and yet always distant, empty, and already fading away.

 

So there you have it. For my recreational programming, I’m going to be sticking with the “standard” Standard ML and the blazing-fast Poly/ML compiler. If you’re up for some SuccessorML-style experimentation, you’d do much better to look at LunarML, which is very active and very, very promising. And also Made in Japan!
