During the spring semester, I supervised three groups of students working on master’s thesis projects that I proposed. Tackling quite different problems, they all did an excellent job, and it was fun and interesting to supervise their work and to follow their results. In this post, I’m giving a brief account of their work now that they are all done. Below, you can also find links to the official thesis reports in Chalmers’s Library. While we’re at it: if you’re a current student looking for a master’s thesis project, make sure to check out the projects I currently offer or drop me a line.

The spec is out there — Extracting contracts from code

Pontus Doverstav and Christoffer Medin developed a tool that extracts partial JML contracts (a kind of formal specification) from Java code. Their approach differs from the most common techniques for specification inference, which ultimately reason about the semantics of code (for example, abstract interpretation techniques abstract the concrete semantics of a program on simpler domains). In contrast, Pontus and Christoffer’s work is syntax-driven, as it is based on a collection of patterns that capture how certain code implicitly suggests simple contracts. For example, a qualified call t.foo() implicitly requires that t != null; this assertion is propagated to the precondition of the method where the call appears.
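
To make the pattern concrete, here is a minimal hypothetical sketch (the class, method, and variable names are mine, not taken from the thesis) of the kind of partial contract such a syntax-driven analysis could extract:

    public class Greeter {
        // The qualified calls name.trim() and out.append(...) in the body
        // implicitly require their targets to be non-null; a syntax-driven
        // tool can propagate these facts into the method's partial
        // JML precondition:
        //@ requires name != null;
        //@ requires out != null;
        public void greet(String name, StringBuilder out) {
            String trimmed = name.trim();        // needs name != null
            out.append("Hello, " + trimmed);     // needs out != null
        }
    }

Both requires clauses follow purely from the syntax of the body, without any semantic reasoning about what trim or append actually do.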

The whole approach is deliberately kept simple and best-effort. For instance, whenever there is a method call, we try to determine its effects based on the callee's (partial) contract; if no callee contract is available, we just assume worst-case behavior to preserve soundness. In the worst case, an analyzed method may be marked as "failure" by the tool, denoting that no meaningful specification could be reliably extracted. An advantage of keeping the analysis so simple is that we can apply it extensively and retain whatever results turn out to be useful. When they applied their tool to thousands of methods in four open-source projects, Christoffer and Pontus found that it could extract more than one JML specification case per method on average. Many extracted contracts are simple, but a few are quite complex — such as one capturing in detail the behavior of a large switch statement covering 256 cases. The thesis also reports some preliminary evidence suggesting that contracts extracted this way can complement those a programmer would write, thus supporting program analysis and other applications driven by specifications.

Comparing programming languages in Google Code Jam

Alexandra Back and Emma Westman empirically analyzed the solutions submitted to Google's Code Jam programming contest, with the overall goal of comparing features of different programming languages. The main motivation for targeting Code Jam was that it provides a collection of programs that all implement the same functionality (whatever each round of GCJ requires) in different programming languages (contestants can use any programming language of their choice), while making it possible to control for quality — the intuition being that a solution's rank in the contest is strongly indicative of its quality in terms of correctness and efficiency.

The study focused on the five programming languages most widely used in the contest — C, C++, C#, Java, and Python — and analyzed over 210'000 solution programs. Since a contestant's performance is based only on his or her ability to produce the correct output for a given, large input to the program, there is no guarantee that the submitted program is the same as the one used to generate the correct solution. This complicated the study, and ultimately required dropping 38% of the solutions from some analyses, because they could not be compiled and executed. Nonetheless, a large number of usable solutions remained that could be fully analyzed, which gives confidence in the study's results. Analyzing them, Alexandra and Emma found that the highest-ranked contestants predominantly, but by no means exclusively, use C++; that Python programs tend to be the most concise, and C# programs the most verbose; that C and C++ dominate the comparison of running-time performance, even though the other languages do not fall that far behind; and that C and C++ also have the smallest memory footprint, while Java has the largest (probably due to how the JVM manages memory). Check out the thesis for many more details and results.

Incremental deductive verification for a subset of the Boogie language

Leo Anttila and Mattias Åkesson developed a tool that verifies Boogie programs incrementally as they are modified in each new iteration. Boogie is a popular intermediate verification language, combining a simple imperative language with an expressive program logic. Boogie is also the name of a tool that verifies Boogie programs. Whenever a procedure changes its implementation or specification, Boogie verifies it from scratch — like most other verifiers; in contrast, Leo and Mattias's tool is able to reuse the results of previous verification runs. Suppose, for example, we have verified a procedure foo; later, we introduce a small change to foo that is still compatible with the procedure's specification. The incremental verifier is aware of what has changed, and only verifies that, intuitively, the difference between the current version of foo and the previous one does not affect correctness.
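
The thesis targets Boogie, but the underlying idea can be illustrated with any contract-equipped procedure. Here is a small hypothetical sketch, reusing the Java/JML notation of the first project (the method and its contract are my own invention), of an edit that remains compatible with the existing specification:

    public class MathUtil {
        // Contract of abs: unchanged across revisions.
        //@ requires x > Integer.MIN_VALUE;
        //@ ensures \result >= 0;
        //@ ensures \result == x || \result == -x;
        public static int abs(int x) {
            // Original body, verified once:
            //   if (x < 0) { return -x; }
            //   return x;
            // Revised body: a small change that still satisfies the contract.
            return x < 0 ? -x : x;
        }
    }

Intuitively, an incremental verifier can keep whatever parts of the previous run are unaffected by the edit and re-check only the changed body against the same contract, instead of starting over from scratch.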

The details of how incremental verification works are quite technical, and rely on a combination of attribute grammars and incremental parsing. My experience working with Boogie (or with program verifiers that use it as a back-end) suggests that this kind of incrementality matches well the way so-called auto-active verification tools are used in practice — namely, as a stream of small changes to the source code, each immediately followed by a call to the verifier, whose responsiveness is essential for a smooth user experience. The thesis describes experiments simulating this mode of interaction, showing that incrementality can often reduce the running time of the verifier by 30% or more.

References

  1. Christoffer Medin and Pontus Doverstav: The spec is out there — Extracting contracts from code, Master’s thesis in computer science. Department of Computer Science and Engineering, Chalmers University of Technology, Sweden, June 2017.
  2. Alexandra Back and Emma Westman: Comparing programming languages in Google Code Jam, Master’s thesis in computer science, algorithms, languages and logic. Department of Computer Science and Engineering, Chalmers University of Technology, Sweden, June 2017.
  3. Leo Anttila and Mattias Åkesson: Incremental deductive verification for a subset of the Boogie language, Master’s thesis in computer science, algorithms, languages and logic. Department of Computer Science and Engineering, Chalmers University of Technology, Sweden, June 2017.

Are you looking for a faculty job? Make sure to read the various guides available online; they're packed with useful information. Their one shortcoming is that they're almost entirely focused on the North American/US job market. If you're interested in knowing how things work on the other side of the pond, read on.

The following notes are mainly based on my experience during the academic year 2014–2015, when I was searching for a faculty position in Europe. Each section describes a particular aspect of the whole search process, and tries to compare it to the US experience, which is generally documented more broadly elsewhere. For whatever aspect I do not comment on, you can assume it's quite similar to the situation in North America. Keep in mind that, besides the desire to remain in Europe, I was a candidate with significant experience: I completed my PhD in 2007, and I had spent several years as a senior researcher — a position comparable to assistant professor without tenure. This means that I was looking for more senior positions, tenured or with a well-defined, not-too-long path to tenure. These requirements turned out not to be overly constraining for the European job market, but they may have skewed my perception of some aspects. Your mileage may vary.

Overall process

The overall process is the usual one. You apply for an advertised position by submitting all required application material by the deadline. After a while, shortlisted candidates are invited for an interview. Following the interviews, selected candidates receive offers. If you’re one of the lucky ones, you get to negotiate your position until you reach an agreement and sign an employment contract (or decline the offer).

The timing and transparency of this process vary wildly between institutions. First, application deadlines are spread out over the whole year. While the majority of deadlines fall between September and the following February, several continue well into the spring, and some may even be during the summer. The other phases also proceed asynchronously: in some cases, I didn't hear back until 8 months after submitting the application; others informed me about every step in the selection process, and even shared with me the evaluations of the hiring committee (sometimes about all candidates) before interviewing me. Shortlisting was sometimes done in stages, with a longer list of candidates who were passed on to the next filtering stage. All in all, you have to be flexible, and be ready to prioritize the positions that are most interesting to you. Checking with the hiring committee about tentative dates does not hurt, if it can reduce the uncertainty.

The application

What goes into the application material is roughly similar everywhere, but different institutions often use different formats or just different terminology. The cover letter may not always be explicitly required, but make sure to include it anyway: it's your chance to customize the application and present an overview of your profile that is short enough to be read in its entirety. The research statement (also called research plan or research highlights) is a combination of presenting your research in a convincing frame and outlining future work. The latter part is normally the shorter of the two, even though some advertisements seem to suggest that it should be organized like a full-fledged research grant application. I took such indications as a suggestion to be concrete in outlining future work; but I would still recommend giving enough space to describing what you have already done, since that can be much more compelling in a hiring context. The teaching statement is not always required; in a few cases, it had to follow an elaborate format used internally by the organization. I always tried to stick to the format (within reason), but I never had the impression that the teaching statement was given much attention. For both research and teaching statements, always stick to the page limit if one is given (if there's no page limit, still don't overdo it; at the very least, length should be commensurate with achievements). The CV is probably what the hiring committee goes through once they get past the cover letter. It's important that it's carefully organized, not just to highlight your achievements but also so that it can be skimmed efficiently. The publication list is another part that may have to stick to a given format (for example, journal and conference papers in different categories). I sometimes took the liberty of ignoring the requirements when I had a good reason to do so; in particular, I listed conference and journal papers in the same category, because that's how we roll in computer science. Make sure publications are numbered in a way that makes it easy to count them — at least until we stop the numbers game (don't hold your breath).

The practice of requiring reference letters is customary in English-speaking countries. While it's catching on in most of Europe as well, there are some countries where referees are not used at all. My suggestion is to include the list of referees in your CV regardless of whether they are required. In places where officially using recommendation letters is not customary or not permitted by regulation, the hiring committee may still decide to get the same information informally by talking to your referees. When I've been involved in hiring decisions (in my case, regarding PhD students), I've always found that recommendation letters can play a big role, especially when the competition among candidates is strong.

Frustratingly, some institutions required applicants to jump through additional hoops, like filling in a Word form (so poorly designed that even copy-pasting into it was a challenge), or using a clunky submission system that asks you to enter publication information through online forms (needless to say, with no BibTeX import option). My policy has generally been to ignore as many of these artificial requirements as possible, unless they were relevant to the specific application or there was no other way of providing the information. In one case, I just wrote something like "See the attached material" in the first input form of the online application, and attached a single PDF with my actual application material. Result: they soon invited me for an interview. Provided you have carefully prepared your standard application material and you have a competitive profile, no reasonable person will desk-reject you just because you didn't use Word. Indeed, sticking to your own format and your own story may even show some independence, and that your time is valuable too (that's my opinion at least). In any case, make sure to leave enough time before the application deadline to go through the submission system and check any unusual requirements.

The interview

The interview may turn out to be interviews (plural): some places do selection in stages, and may organize a presentation and informal discussion with a few shortlisted candidates, and then invite half of them to a formal interview at a later date. To my great surprise, there are a few places that refuse to reschedule an interview if the invited candidate is not available on the suggested date. This strategy is clearly detrimental to getting the best candidates; it's a mystery to me why some places practice it against their own self-interest. Of course you should try to clear your schedule as much as possible during interview season, but complete inflexibility on the other side is not an inviting sign.

The actual interview typically follows a talk by the candidate on their research, teaching, or both. The talk's length varies wildly — from ten minutes to one hour. Whatever your time budget is, stay within it — you do not want to be cut off. For the actual interview, you usually sit around a table with the hiring committee. Unlike what is described about North America, you do not typically get one-on-one meetings with administrative figures; everybody sits around the same table, and they all take turns asking you questions on their specific area of interest or expertise (research, teaching, career, and so on). It's useful to have answers to most "standard" questions rehearsed in advance (but not rigidly memorized), so as to be assertive and to the point.

In most, but not all, places you also have a chance to have some informal contact with a small number of faculty members, typically those who are closest to your research interests and whom you may already know. This step is very important because, if no one in the hiring committee has a direct interest in what you do, the chances that they'll hire you are slim. I also believe that this part of the interviewing process is about you getting to know the place and your potential future colleagues too. If a place does not give you any chance to chat with the faculty over lunch, or to get an idea of how the department works "from below", it's hardly a good sign. If you do know somebody on the faculty, go ahead and contact them proactively in advance to make room for such informal meetings.

The offer

With few exceptions, the decision comes within one or two weeks of the interview. It's unlikely that you'll get a lot of time to decide: this seems in contrast to the US, where offers are often extended without time limits. Still, you should be able to get at least a couple of weeks after the negotiation ends. Thus it's quite likely that, if you applied to numerous places, you will not have a chance to line up all the offers before deciding. To prepare for that, it's useful to rank the places where you applied from best to worst (according to your preferences), as well as according to an estimate of how competitive getting each position is; the rankings will change interview after interview, based on your refined impression of each place. Then, if you have to decide on an offer, you can base your decision on which other options are still open or may become open later on. If you feel creative, you may use an optimal stopping strategy :-).

Negotiation is often not a big deal, because conditions tend to be standardized by age and experience. Most of the negotiation actually consists of learning in detail about the employment conditions. But what is offered is not the same everywhere: for instance, not all places have a starting package or pay relocation costs. Directly comparing salaries across Europe is also difficult, because the cost of living changes significantly from country to country, and so do income taxes. Relatedly, it is important to learn about each country's national research funding system. This too varies considerably from country to country, and it can make a big difference in how much funding you will be able to raise on top of whatever the university grants you.

Competitiveness

In hindsight, I was invited to interview wherever there was a clearly good match between my profile and the advertised position in terms of research area and expected seniority. While you cannot pretend to be younger than you actually are (academic age starts running when you get your PhD), some people try to change the research statement to fit some of the topics highlighted in the call for applications. I never did that, and I advise against it. First, because writing a strong research statement requires time and a lot of polishing; tweaking it at the last minute for each new application risks making it less readable and compelling. Second, because the research statement should present your research according to your vision; feigning expertise in topics you're not really an expert in is unlikely to convince the real experts who are hiring. However, do customize the cover letter with a few strategic sentences showing that you can clearly make a connection to the specific topic, and that you have a genuine interest in making it. Also, don't be shy about applying broadly — within reason — without necessarily waiting for the perfect match: a strong application is always well received, even if it does not get you an interview or an offer (I've had a couple of cases where committees gave positive feedback about my profile, clearly saying that they were not following up because they were looking for a different profile, but that they appreciated my application nonetheless).

Enjoy it!

While the academic job search is a long and stressful process, you should also have a chance to enjoy some moments of it. In fact, interviewing is not that dissimilar from meeting people at conferences and other scientific events. The interviewing panels where you feel most at ease may also provide a strong indication of which places you'd want to work at. In any case, good luck!

In case the spamming — I mean, discreet, surgically targeted advertising — through the other channels miraculously hasn’t reached you yet, you should know about an open PhD student position in my research group. The official ad is online. In this post I’d like to give a more informal description and clarify a few aspects that may not be obvious to every prospective candidate.

There are currently a total of 4 openings in the Software Technology division; you will often see them advertised together. One of the PhD student positions is under my direct supervision, and it's the one I describe here.

There are only loose constraints on the research topic for the to-be-hired PhD student. Obviously, I expect it to be something in the areas I'm interested in, namely software engineering and formal methods. But there's plenty of room within these general denominations for a variety of concrete topics, spanning from the theoretical to the practical. To get an idea, have a look at the description of my past research. If anything there tickles your fancy then… great! We have a match. If nothing does… great! It's a chance to start working on something new and exciting. Seriously, that page describes my past research; it doesn't predict the future. Of course, I still expect a match in the general area of interest. If you want to do research in, say, ornithology, you may want to look elsewhere (or not, if you find a good connection to my research interests :-).

I refrain from giving a list of qualities of the ideal candidate. You can find plenty of good recommendations, and I subscribe to pretty much all of them. The fact is, there's no such thing as the ideal skill set for PhD students. Rather, different great students often have incomparable skills: some are better at analytic reasoning, some are outstanding programmers, others excel at communicating; regardless of what they're less good at, they'll make up for it by cultivating new skills and honing existing ones to perfection. So perhaps the only universally required qualities are meta-qualities (potentialities): the ability to learn and adapt skills, including meta-skills, and a passion for pawing through new problems "like some starving raccoon!"

A few organizational points. A PhD in Sweden normally takes 5 years, including 20% of the time spent on teaching assistantship. As a PhD student you are an employee, not just a student, and you get a salary, not a stipend, which is a pretty decent way of making a living. If you're concerned about not speaking Swedish, don't be: what they say about everybody in Sweden speaking good English is not a myth.

So should you apply? Yes, everybody with a genuine interest should!

As much as I'm deeply interested in technology, and make a living out of it, I'm really mainly in it for the science, and I'm in science ultimately for its own sake. Thus, I welcome a healthy sobriety and a tad of skepticism in our relationship with technology in everyday life. Not out of pessimism, gravity, or self-denial (are you kidding?), but simply because blindly seeing technology as a flawless dispenser of solutions to humanity's problems is ultimately irrational.

With this mindset, I took up reading Evgeny Morozov's To Save Everything, Click Here with the expectation of finding original insights and provocative criticism of certain uncritical attitudes towards the creation and use of technology. Unfortunately, I have been sorely disappointed. The whole book drowns a few noncontroversial, common-sense points against an uncritical attitude towards technology in an ocean of meandering rants — often tinted with intellectual snobbery, and occasionally turning into flat-out nonsense. The book's content could have made a moderately interesting article; 400+ pages are definitely too many.

I will give a couple of examples that illustrate my criticism; the rest of the book is mainly "more of the same".

Chapter 3 takes a stab at criticizing the proponents of openness (of information) at all costs. It starts with a generally reasonable statement that is easy to agree with: "No serious philosopher would ever proclaim that either transparency or openness is an unquestionable good or absolute value to which human societies should aspire." Honestly, the statement is hardly very profound. Pretty much every value, even an "absolute" one, requires critical thinking to be turned into something actionable and valuable. (Also, why the restriction to philosophers?) Anyway, to illustrate this otherwise unproblematic point, the chapter tells the story of Manuel Aristarán, an Argentinian programmer who built a website that aggregates public spending data from his hometown's municipality records. A year after the website was launched, the municipality restricted access to the spending data by adding CAPTCHAs; this change, which made it difficult to automate the updates to Aristarán's website, was criticized by open-government activists.

This episode triggers Morozov's criticism of the idea that more open government is always better for politics. His argument? First, he imagines a "populist group" leveraging the data on Aristarán's website to pressure the government into diverting funds from education to "a nearby rum-making factory". He doesn't bother to explain exactly how this would happen or, more to the point, how the aggregator website would facilitate it, but since "an inefficient democracy is always preferable to a well-run dictatorship", he feels safe concluding that restricting access to the spending data is for the greater good. If this is starting to make little sense, my summary is doing justice to the text! The chapter continues with a detour that includes repeated praise of Bruno Latour's work (of fashionable nonsense fame) and advocates read-only websites — humans can read them, but they cannot be downloaded or reproduced elsewhere (good luck implementing that). It then goes back to the episode of Argentinian local politics, arguing in all seriousness that what the advocates of openness in politics, like Larry Lessig, want ultimately boils down to asking that "the local politics in Bahía Blanca [Aristarán's hometown] make sacrifices so that a 15-year-old in Palo Alto can remix cat videos without going to jail." At this point, the confusion is such that it's no longer clear what Morozov's point is, nor what he considers reasonable and what not. And this is not even halfway through the chapter, during which he even manages to argue for "greater transparency around the financing of political parties" (yes, at some point he seems to briefly argue for it, albeit in passing). The mystery of who the cat-video remixer is remains unsolved to this day.

Fast forward to Chapter 7, which retells a story by one Ivan Illich, a riveting story about one of Illich's students, who once declined his offer of a second glass of cider, not wanting "to get into a sugar high." End of story. For a variety of reasons, none of which I could have anticipated, Illich and Morozov are quite upset by the student's reply. Illich criticizes "the idea that all people have specifiable needs"; in other words, he claims she cannot really know whether she needs more sugar or not. Morozov attributes to the young lady the belief that "her moral compass is exhausted by her easily measurable and quantifiable needs", thinks that "she might have a moral obligation […] to be polite and accept the drink", and that she ignores the possibility that "she might actually derive great sensual pleasure from drinking the cider". Judging by the pertinence and persuasiveness of these arguments, one can tell that Illich and Morozov are not ones to turn down drinks.

While not all passages are as nonsensical as the story of the cider, there's little to save in the whole book, and the little that is reasonable and can be agreed upon is neither very original nor particularly eloquent. On somewhat similar themes, I recommend instead, among others, Lessig, Schneier, and Turkle, for really insightful analyses of the implications of technology and the Internet for modern society. And of course keep reading this blog for time-saving summaries 😉

Do you trust nuclear energy? I know, it’s a complex question — not the kind that easily leads to a “yes” or “no” answer. Let me rephrase it into a possibly less controversial form: do you trust nuclear engineers? Now, the odds are turning, and there’s a good chance Joe or Jane Sixpack would answer in the affirmative, without feeling the need to add qualifications to their answer.

Indeed, for all the negative stereotyping about being boring or uncreative, engineers enjoy a generally solid reputation with the general public as trustworthy and competent professionals. Well, I mean, traditional engineers do. Those who build bridges, buildings, and power plants. Those are the guys you can trust. Ah, if only software development were a solid engineering discipline in the same league as civil, mechanical, and nuclear engineering! We wouldn't have to deal with lousy apps or insecure systems anymore! We could enjoy the same level of reliability and safety as, say, nuclear power, which, in the immortal words of Homer J. Simpson, "has yet to cause a single proven fatality."

Seriously, I don’t want to compare apples and oranges, but it is true that software developers may enjoy a generally poorer reputation than other professionals. The latest outcry follows an all too familiar argument: 1. Software is spectacularly unreliable; 2. Traditional engineering delivers highly reliable products; 3. Ergo, software development should adopt engineering practices (or, as the article goes, stop calling itself software engineering).

I will not pick on that particular article, even though it'd be a particularly easy target given its technical shallowness and shaky arguments (how is Dieselgate an indictment of software's low quality?). Instead, I'm interested in the more general issue of what traditional engineering has to teach software engineering. I also want to avoid going down the path of nominalism — which all too often emerges when debating these kinds of topics — looking for the "essence" of engineering, software development, or the like. That'd be neither useful nor particularly interesting.

First of all, I believe that the reliability gap between the best software systems and the best artifacts produced by traditional engineering is much smaller than common opinion would lead us to believe. In fact, the line between the two is blurry, in that some of the most reliable complex software lies at the core of airplanes, cars, and power plants. If the latter are reliable, it is largely because the former is. As another, historical but still very significant, example, see Feynman's report on the Challenger disaster: among the various engineering divisions, the one responsible for avionics software is the only one whose practices pass with flying colors: "the computer software checking system and attitude is of the highest quality." Of course, most software development does not come even close to complying with the quality standards of avionics software; but neither does the engineering of the majority of traditional consumer products with no mission-critical requirements.

Another feature commonly ascribed to traditional engineering, as opposed to software engineering, is a rigorous, systematic approach that does not leave anything to chance. A nuclear engineer, for example, knows exactly what goes on at every stage of development and deployment of a nuclear power plant, and knows how to ensure that no accidents occur. Except this is not exactly true. It turns out [Mahaffey, 2015] that the riskiest activity related to harnessing nuclear power is fuel (re)processing, where fissile fuel is produced by chemical means. The major risk is that some of the fuel becomes critical, that is, capable of sustaining a spontaneous chain reaction. Criticality depends not only on the amount of fuel but also on its geometric configuration. It is practically impossible to predict every possible configuration the fuel may take while being processed, and in fact several accidents occurred due to unexpected accumulations of material in what turned out to be criticality-inducing configurations. This does not mean that the whole enterprise is left to chance, only that perfect planning is unattainable and one must deal flexibly with uncertainty. The best practice is to ensure that operators are not only thoroughly trained in the procedures but also fully aware of the general issues that may arise, and on the lookout for unpredicted sources of danger. This attitude towards prevention and awareness was first adopted in the fuel-processing plants of the Manhattan Project, when a young Feynman (him again) pointed out to his superiors that the excessive secrecy initially imposed, which prevented workers from knowing exactly what they were doing and what risks they could encounter, was counterproductive and foolishly dangerous. Prevention is the best protection, and prevention requires knowledgeable people, not drones.

Which leads us to the other main point: what can software engineering really learn from traditional engineering — and from nuclear engineering in particular? It's not the scientific foundations: informatics already provides rock-solid ones. It's not so much the institutionalization as a licensed profession either: while that may be useful in certain contexts (for example, for freelance software developers), it'd have little or no relevance in others. The attitude that software engineering can learn from nuclear engineering is what to do in the aftermath of an accident.

When something goes wrong in a nuclear reactor and it gets scrammed, it is imperative that the dynamics of the accident be understood down to the minutest details. This involves understanding the physics of the reaction, any failure of the equipment to behave as intended, how the personnel reacted, and what practices were followed up to the time of the accident that may have altered operating conditions or generally increased risks. Based on an exhaustive post mortem, procedures, technologies, equipment, and training are revised. These activities have top priority, and must be completed to satisfaction before operations can restart. The net effect is that the chances of the same kind of problem occurring twice are minimal, and reliability improves by building on an ever-growing knowledge base. (Bertrand Meyer made similar remarks about aerospace engineering.)

Two final comments. First, I believe such practices are already in place, in some form at least, in the best software development environments. Besides safety-critical software, where we have several well-known case studies, the practice of large-scale software development includes inspections, regression testing, and rigorous analysis. Even Facebook has given up on its "move fast and break things" motto, and has realized the importance of stable infrastructure and rigorous analysis. Second, engineering is ultimately all about trade-offs. If you're developing a silly app, you may just accept that "mediocre" is good enough. And that what you're doing is not quite "engineering".

Reference

  1. James A. Mahaffey: Atomic Accidents: A History of Nuclear Meltdowns and Disasters: From the Ozark Mountains to Fukushima. Pegasus, 2015.