The process dictionary vs. Sexual education
The process dictionary is one of those beasts that new Erlangers (and even somewhat knowledgeable ones) are told to avoid at all costs. The idea, of course, is to protect said Erlangers (but mostly the more experienced ones who will have to maintain their stuff) from insidious side effects that are difficult to track down and look after when a bug happens.
This, in my opinion, leads to some form of cargo cult programming (just think about Go To Statement Considered Harmful [PDF]). You get hordes of programmers who just turn a blind eye to the existence of the process dictionary, when it is actually a vital part of the OTP framework. You also get hordes of programmers uneducated about its uses, intents and purposes. This is somewhat similar to the whole debate regarding sexual education in schools. I do not believe teaching abstinence is a good policy when sooner or later, the programmer is going to encounter the process dictionary; you'd better aim for good comprehension and education on its usage to avoid nasty consequences and gross ignorance later on.
So what is it good for?
In OTP, the process dictionary is used for two purposes: storing the process' ancestors and storing the initial function call. You can see them being inserted into your process every time you use proc_lib:spawn/1 or proc_lib:spawn_link/1:

1> proc_lib:spawn(fun() -> io:format("~p~n", [process_info(self())]) end).
[{current_function,{erl_eval,do_apply,5}},
 {initial_call,{proc_lib,init_p,3}},
 ...
 {links,[]},
 {dictionary,[{'$ancestors',[<0.31.0>]},
              {'$initial_call',{erl_eval,'-expr/5-fun-1-',0}}]},
 {trap_exit,false},
 ...
 {suspending,[]}]
<0.33.0>
The ancestors are useful when terminating a higher-level supervisor. This lets OTP know that a given message or exit signal comes from a parent process and might warrant an orderly shutdown rather than a crash. In the case of a process that traps exits, this lets the framework call your terminate functions and avoid forwarding parent signals to handle_info/2 callbacks.
The $initial_call value isn't used that much outside of debugging, but it is usually a good way to figure out where a given process is coming from.
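You can poke at this metadata yourself. Here's a small sketch (any fun will do; the keys are set by proc_lib before your code runs) that reads the '$ancestors' and '$initial_call' keys from inside a spawned child:

```erlang
%% Sketch: reading the OTP metadata that proc_lib stores in the
%% process dictionary of every process it spawns.
Parent = self(),
proc_lib:spawn(fun() ->
    %% Both keys were put there by proc_lib before this fun ran.
    Parent ! {meta, get('$ancestors'), get('$initial_call')}
end),
receive
    {meta, Ancestors, InitialCall} ->
        %% Ancestors is a list starting with the parent's pid (or its
        %% registered name); InitialCall is {Module, Function, Arity}.
        io:format("ancestors: ~p~ninitial call: ~p~n",
                  [Ancestors, InitialCall])
end.
```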
Other uses of the process dictionary include the random module, which stores the random seed in the process. The user module uses it to fetch info from the shell process, which might be waiting for results from somewhere else. The global module/server uses the process dictionary as a key-value cache for remote node configurations. The fprof application makes a similar use of it for its internal state.
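You can watch that hidden seed appear. The sketch below uses rand, which replaced the random module in modern Erlang/OTP and plays the same process-dictionary trick; the rand_seed key is an internal detail, not a documented API, so don't rely on it outside of exploration:

```erlang
%% rand, like the old random module, hides its seed in the calling
%% process' dictionary so users never have to thread it around.
undefined = get(rand_seed),   % no hidden state in this process yet
_ = rand:uniform(100),        % first call seeds itself and stores the state
true = get(rand_seed) =/= undefined.
```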
There are other modules using it, but that covers a good range of them and of the different use cases.
Could these be handled somewhere else?
It's probably possible for some of these, although it might not be ideal.
For the ancestors and initial calls, it might not be nearly as efficient to do this somewhere else: the process dictionary disappears with each process and is automatically garbage collected with it. The calls are local, and thus will never really become part of a bottleneck (as could happen with ETS), which is especially important for calls every process in a system could be making.
For the random module, the process dictionary allows quick mutation and keeps the handling of the random seed away from the viewer's eyes. Technically speaking, it wouldn't be such a big deal if you had to carry the seed around and update the references you hold yourself, but it does make the API simpler, at the cost of sometimes making it more confusing. I must admit to having been bitten a few times by forgetting to set a seed, something that wouldn't happen with a functional interface.
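Such a functional interface does exist in modern Erlang/OTP as the rand *_s functions (this sketch assumes OTP 22+ for the exsss algorithm): the seed is ordinary data you thread through the calls yourself, so forgetting it is a visible error rather than a silent default.

```erlang
%% Explicit-state style: no process dictionary involved. The state is
%% a value you must pass along, and each call returns the next one.
S0 = rand:seed_s(exsss, {1, 2, 3}),   % build an initial state from a seed
{N1, S1} = rand:uniform_s(100, S0),   % draw a number, get the next state
{N2, _S2} = rand:uniform_s(100, S1),
io:format("~p ~p~n", [N1, N2]).
```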
For the user module, it could possibly be the kind of stuff handled by ETS, although you would likely run into a bunch of race conditions. Basically, the shell process has to wait for a bunch of results from functions it tries to run (or ports it listens to), and then quickly make what it's currently doing available and give details about its state to the rest of the VM that may need them. It is simply easier and probably safer (who thought these words would be used to advocate for the process dictionary!) to do things the way they are done right now.
In the case of the global server, I do not believe there is any reason forcing the use of the process dictionary beyond the simplicity of holding a bunch of different values and the quick updates it gives you.
So what should we do?
That's a tough one. As shown above, the process dictionary has all kinds of uses: never-changing configuration metadata, a kind of hidden state, workarounds for concurrency issues, and the good old 'optimising through destructive updates' use case (which a reader aptly compared to using ASM in C).
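For concreteness, the 'cheap destructive update' pattern and its functional counterpart look something like this (a sketch; the fold is what should usually be tried first):

```erlang
%% Destructive-update style: mutate a counter stored in the p-dict.
put(count, 0),
lists:foreach(fun(X) -> put(count, get(count) + X) end, [1, 2, 3, 4]),
10 = get(count),

%% The functional version: same result, simpler to test and reason about.
10 = lists:foldl(fun(X, Acc) -> Acc + X end, 0, [1, 2, 3, 4]).
```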
Does it mean we (the common Erlang programmers) should use it? Hardly. A few things to note about these modules and applications:
- They are generally written by Erlang experts, or at least, experienced users;
- They have been used during years for hundreds of projects that likely acted as very good tests, both within and outside Ericsson;
- They provide performance gains that can influence entire systems, given how widely used they can be (global) or how dangerous they can be to a production system (fprof);
- They are the best tool for the job.
Point 1 alone is hardly an excuse to use the process dictionary. While I don't consider myself an expert, I know I would find it ridiculous to just say "I'm a pro, let's use p-dicts!" It's more of a general statement that says that you know what you're doing, understand most, if not all of the tradeoffs, etc. If you're a newcomer to the language and are considering using the process dictionary, ask around the community (maybe on IRC, #erlang on freenode) to get some advice and suggestions; there might be other solutions you do not know about.
Point 2 pretty much excludes most applications we write ourselves, but there's no doubt good testing can increase our confidence in the product to the point where the dangers of the process dictionary aren't much of a worry anymore. Hopefully the confidence in your code is well deserved.
Point 3 is particularly interesting and requires a judgment call. Should this simply be moved to C? Is it a simple enough compromise, one that basically lets me use the code the same way as if a third-party process were doing the job, with real benefits and little consequence? Am I ready to give up the explicitness of standard key-value data structures for this one?
Point 4 is a rare occurrence. I think the user module might have a valid case. Same for the OTP meta-data; it's inherently tied to the process itself, gains from the concurrency and garbage collection aspects of the deal and never changes.
I believe (without proof) that most people complain about uses of the process dictionary where neither point 1 nor point 4 applies. Points 2 and 3 are rarely a concern for most Erlang programmers (again, a proofless statement). There's also some anger that pops up when people use it for cheap destructive updates when other simpler, easier-to-maintain solutions should have been tried first as optimisations. Make an effort, god damn it!
Back to sex ed
I guess the easiest way to tie this post back to sexual education is that with the process dictionary, you will know when it's the right time. Not because other options are too hard to do right now, but simply because they are not as adequate.
The process dictionary is not inherently evil, but it has many drawbacks that have been mentioned countless times already: it is harder to debug and reason about, it has different semantics than most of the language, it breaks when you try to send state to other processes without realising it's tied to the process' very existence, it is not garbage collected until the process dies, it is hard to replace with a different key-value store, and it angers people (I do consider this to be a negative). On the positive side, you get better update speed. On the neutral side, you get global access to some values within the scope of the process: this can be either useful (static config) or dangerous (global scope!).
Look at the tradeoffs you're ready to make and what your application actually needs. Use the right tool for the right job, and make sure the process-global scope and the speed actually warrant all of the downsides. I hope this helps.