The ‘Standard Sequence’ for Prolog

The standard sequence for learning Prolog:

  1. Install SWI-Prolog
  2. Obtain a copy of William Clocksin & Chris Mellish, “Programming in Prolog: Using the ISO Standard” (older editions are titled simply “Programming in Prolog”) and work your way through it.
  3. Obtain a copy of Leon Sterling and Ehud Shapiro’s “The Art of Prolog” and work your way through it.
  4. Spend lots of time browsing around SWI-Prolog and write some modest-sized real programs. At some point do the Web Application and DCG tutorials at Real World Programming in SWI-Prolog (bias warning: I’m the author of the web app one, and coauthor on the DCG one).
  5. Play with any Amzi tutorials that grab your fancy. If you want to play with the expert systems ones you’ll benefit from actually getting a copy of Dennis Merritt’s “Building Expert Systems in Prolog”, which is downloadable as a PDF.
  6. At some point read Richard O’Keefe’s “Craft of Prolog”.
  7. Learn clp(fd) and the other clp(X) systems (I have a tutorial for clp(fd) at that website as well).
  8. Subscribe to the SWI-Prolog mailing list and comp.lang.prolog
  9. Hang out on ##prolog
  10. Make some packs, or an open source project, in Prolog.
  11. Read through the SWI-Prolog library code
  12. Find something you wish to improve. The website would be an appropriate place for a beginning contributor.
  13. Contribute patches to SWI-Prolog
  14. Try several other Prolog implementations, and languages like Logtalk and Mercury
  15. Contribute yet more patches to SWI-Prolog, and at some point Jan will probably get tired of checking them and make you a committer.
  16. Read lots of logic programming theory. Learn CHR. Learn all the dark corners of SWI (Or whatever implementation has become your primary one).
  17. Attend ICLP, the International Conference on Logic Programming


The old Jasik debugger for the classic Mac OS had this cool quote in the help somewhere. Forgive me for paraphrasing:

Any halfway decent programmer can debug the problem if they can display the relevant variables in an understandable way.

Of course for most programmers these days this means an interactive debugger. Which is fine, but tends to be misused for step-step-step boredom debugging.

Functional and logic languages win here. The SWI-Prolog debugger includes ‘r’, retry, which restarts the execution at the enclosing predicate, so you don’t have the common step-step-over-over-oh crap… phenomenon. If you do, you just hit r, step over down to the problem, and this time step in.

Advantages of state-free systems.

Other people favor print statements. There’s definitely a time for print statements. They can focus attention on what’s relevant. But they can often be like sightseeing in a submarine, and it’s perishingly easy to end up sticking debugging statements everywhere. This becomes logging.

Logging as debugging tends to be most useful for figuring out who to blame after production goes down. Needing lots of logging is often a smell that your system has grown beyond your control.

But often Jasik’s right. I’ve frequently found it useful to build a little GUI tool into my program that lets me browse the important data structures. A dynamic, animated tool to display data is often only an hour or two’s work, and gets repaid many times over.

Debuggers single step. You should ask yourself if you actually need to be doing that.

I used to have a collection of short sound clips. I’d insert calls to play these as the program ran in debug mode. This was vastly nifty – you soon learned what the happy sound was, and could step over big hunks of code fairly safely as long as they made the right hooting and burping sound.  Also handy for improving performance.

You can also make a little interface that displays a pixmap. Each frame, shift the image left one pixel, then draw in a new column of pixels whose colors encode variable values. Add a rollover that tells you variable names based on rows.
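As a sketch of that idea, here’s a minimal Python version with a plain 2-D list standing in for the pixmap; the watched-variable names and the value-to-color mapping are invented for illustration:

```python
# A scrolling "strip chart" debug pixmap: one row per watched variable,
# each row a one-pixel-tall history of that variable's value as a color.

class DebugStrip:
    def __init__(self, rows, width):
        self.rows = rows              # variable names, one per row
        self.width = width
        # frame[row][col] holds a color value (here just an int 0-255)
        self.frame = [[0] * width for _ in rows]

    def tick(self, values):
        """Record one frame: shift everything left, draw new rightmost column."""
        for r, name in enumerate(self.rows):
            row = self.frame[r]
            del row[0]                           # shift left by one pixel
            row.append(values.get(name, 0) & 0xFF)  # clamp to a color byte

    def name_at(self, row):
        """Rollover support: which variable does this row display?"""
        return self.rows[row]

strip = DebugStrip(["depth", "queue_len"], width=8)
strip.tick({"depth": 3, "queue_len": 250})
strip.tick({"depth": 5, "queue_len": 251})
```

The point isn’t the rendering (hook the frame buffer up to whatever image widget you have); it’s that each row is a scrolling history of one variable, so the rollover lookup is just a row index.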

Or any other way of displaying data in a way that’s well connected to the underlying model.

Yes, you probably do have a view as part of the final product. But it’s designed to hide abstractions. Your debug UI is designed to help you understand them.

If you can read the stack depth, a moving graph of that can be informative. So can graphs of CPU time by thread. SWI-Prolog provides such graphs as a built-in tool: query ?- prolog_ide(thread_monitor). to open its per-thread monitor.


Back in Burroughs COBOL days we used to be able to tell which pass the compiler was in by leaning on the cabinet of the hard drive – the motion was characteristic.

Some intelligence and imagination is often a better solution for debugging than a death march with the interactive debugger.




Publish Spec First, Think Later


MQTT is the specification for a lightweight messaging protocol for the Internet of Things.

It has this gem in it:

Section 2.3.1

… SUBSCRIBE, UNSUBSCRIBE, and PUBLISH (in cases where QoS > 0) Control Packets MUST contain a non-zero 16-bit Packet Identifier [MQTT-2.3.1-1]. Each time a Client sends a new packet of one of these types it MUST assign it a currently unused Packet Identifier [MQTT-2.3.1-2]

OK, so what happens 65,535 packets later when we run out of unique identifiers?

Uh, hey, guys and gals, you do understand that if this is a 10 packet/sec feed from some instrument (this IS, after all, an IoT messaging spec), in just under 2 hours you’re going to run out of unique identifiers?
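The back-of-envelope arithmetic, with the 10 packets/sec feed rate being the hypothetical above:

```python
# Non-zero 16-bit packet identifiers give 65,535 distinct values.
ID_SPACE = 2**16 - 1
RATE = 10                          # packets per second (hypothetical feed)
seconds_to_exhaust = ID_SPACE / RATE
hours = seconds_to_exhaust / 3600  # a bit over an hour and three quarters
```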

Then there’s this gem from section 3.1

The payload contains one or more encoded fields. They specify a unique Client identifier for the Client, a Will topic, Will Message, User Name and Password. All but the Client identifier are optional and their presence is determined based on flags in the variable header.

Okey dokey… and how, in the name of Ba’al the Soul Eater, Death, Destroyer of Worlds, King Of Kings, is the client supposed to obtain a unique client identifier?

And since this identifier is not the User Name, does this mean any user can pretend to be any client?

::Annie is here, sacrificing a chicken to Bumba, the god who vomits forth the world::




The Snail

I’m building a robotic snail named Pomatia.

They’re a research tool to examine how robots and children interact. I’ll take them (her? him? snails are simultaneous hermaphrodites) around to libraries and schools and interact with children.

I have a long-term project to have one robot for each learner in K-3 classrooms.

She’s about 60 cm long, and has a plush outer covering.

Internally she’s powered by a variety of DC motors and a vacuum pump.

She has a ‘jamming gripper’ underneath to pick things up, and a receipt printer so she can leave behind scraps of paper with notes on them, and a speaker to talk.

She’s equipped with an array of sensors – camera, touch sensors, acoustic ranging sensor, passive infrared sensor, microphone, edge-of-table IR reflective sensor.

She can move various parts of her body. She has bulldozer-like treads that move her.

Powering all of this is a large NiMH battery.  She can recharge by moving atop a charging station.

Much of her computation is off board, on a conventional laptop. The laptop acts as a wireless connection for the robot.

She works in conjunction with some stand-off sensors – notably a Kinect, which does some of the SLAM work. She’s more comfortable in a room with fiducials.

She’s semi-autonomous. High level behaviors are triggered by a human minder using a semi-covert presentation controller of the type used by sales people. More complex control is done through a localhost browser based interface in the laptop.

Yes, this is all crazy.





Actively avoiding knowing what you’re doing

Some years ago I needed 256 channels of analog output for a hack, and was scared to attach my home-brew gizmo to my expensive computer (desktops were expensive then). Trying to keep costs low, I looked outside the box.

My solution was to epoxy photoresistors to the face of an old monitor and trigger them by painting squares on the screen. One power Darlington + one photocell per channel – a low part count.

…. I don’t actually recommend doing this, though it did work just fine ….




Useful Dependencies

libraries, utilities, devops glitter, etc….

I’m not a fan of adding to the tech stack without seriously thinking about whether this trip is necessary.

Suppose we need to write FoobyCalc’s file format, .foob.

Bob the engineer says it’ll take him T days to write code that writes the .foob format. But there’s a utility, NiftyVert, that converts .blarg, which we already write, to .foob.

Of course we all have to install NiftyVert, but that’s no problem. We have e engineers, plus production and test servers, so n = e + 2 machines, and it takes time t to install it on each.

So if T >> nt, we’re saving time….

BUT… we now have to install NiftyVert on every system, including every future hire’s machine, and Nancy’s machine after she spills her latte in the old one. We have to deal with new versions of NiftyVert, and we have to keep fiddling with NiftyVert – a process that takes a small fraction of each engineer’s time, say k, where k is 0.01 or something.

So, if L is the development lifetime of the software, we invest Lnk in NiftyVert, as opposed to T in our own converter. k had better be darn small or Lnk is going to be larger than T.

And it gets worse when we do this over and over. Suppose NiftyVert’s no better or worse than anything else we install, and we install m things like NiftyVert. Now every engineer works at 1 − mk efficiency, and the time lost to sysadmin stuff is Lnmk.
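The tradeoff above can be made concrete with a toy model. All the numbers are invented for illustration; the variable names follow the post:

```python
# Toy model of the build-vs-install dependency tradeoff.

def roll_your_own_cost(T, m):
    """Cost of writing m converters ourselves: m * T engineer-days."""
    return m * T

def dependency_cost(L, n, m, k):
    """Ongoing cost of m NiftyVert-like dependencies over lifetime L:
    each of n machines/engineers loses fraction m*k of its time."""
    return L * n * m * k

# Illustrative numbers: 5 days to write a converter, 8 machines,
# k = 1% of time per dependency, 500-day lifetime, 10 dependencies.
own = roll_your_own_cost(T=5, m=10)
deps = dependency_cost(L=500, n=8, m=10, k=0.01)
```

Note the scaling: when the project doubles, mT merely doubles, but L, n, and m doubling together multiplies Lnmk by eight.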

Now, if we double the size of the project, things get even uglier.

Let’s ignore the mythical man month and say our team doubles in size. A bigger project probably also lives twice as long. n has roughly doubled (you probably now have a demo install for marketing, and one for the graphic artists to use). L has doubled because we’ve doubled the project lifetime. m has doubled, since the density of these things is constant in the size of the codebase. k hopefully goes down a little as you have more resources to make installs and such easier.

But still, at some point this rolling snowball is not only going to far exceed mT, it can grow until it consumes most of the project’s resources. I’ve worked on many projects where it seemed impossible to get all the dependencies working, and mk was > 0.5.

I’ve worked on a couple where mk was approximately 1.

Of course if you roll your own you’re going to find yourself extending it. So you can find yourself recoding Excel. At that point NiftyVert begins to look pretty Nifty. So keep a weather eye on Bob’s little converter he wrote, lest it become a department. If these decisions were easy, anyone could make them. Sadly, it seems no one actually makes them.

OK, everybody in the audience, raise your hands if you’ve worked on a project where it took more than a day to get a dev environment up when you were hired.

Now raise your hands if you’ve spent more than two hours dealing with something not working when it had nothing to do with what you actually were changing.

OK, everybody in the audience raise your right hand and say “I, <your name>, hereby promise not to shove random dependencies into the code.”










Why Clear Standards Are Important

On 1 July 2002 a Tu-154 passenger jet flown by Bashkirian Airlines, flight 2937, and a DHL cargo flight, 611, collided over Germany.

As the planes approached each other at the same altitude, two systems, both designed to prevent exactly such an occurrence, cancelled each other out.

The German air traffic controller observed the imminent collision and ordered 2937 to descend.

Meanwhile, the automated TCAS collision avoidance system aboard both planes had triggered a verbal alarm. TCAS holds an election so that one plane is ordered to climb and the other to descend. It chose 2937 to climb and DHL 611 to descend.

The German flight manual clearly states that in the event of a conflict between TCAS and air traffic control, follow TCAS.

The Russian flight manual states that in the event of a conflict between TCAS and air traffic control, follow air traffic control.

Both crews were professional, thoroughly trained, and carried out the proper action, by the book.

71 dead. No survivors[1].

The description in the accident report of how the conflicting statements got into the manuals should be required reading for anyone on a standards committee.


[1] One more death is attributable to the crash. The air traffic controller was later murdered by the grieving husband and father of a woman and two children who died on 2937.

