Saturday, December 29, 2012

Does IPv6 mean the end of NAT?

I frequently encounter a casual mention that, with the larger address space in IPv6, Network Address Translation (NAT)--a mainstay of wireless routers everywhere--will go away. I don't think so. There are numerous reasons to embrace path-based routing, and I believe the anti-NAT folks are myopically focusing on just one of them.

As background, what a NAT router does is allow multiplexing multiple private IP addresses behind a single, public IP address. From outside the subnet, it looks like the NAT router is a single machine. From inside the subnet, there are a number of machines, each with its own IP address. The NAT router allows communication between the inside and outside worlds by swizzling IP addresses and ports as connections go through the router. That's why it is a "net address translator" -- it translates between public IPs and private IPs.
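To make the swizzling concrete, here is a minimal sketch, in TypeScript, of the kind of translation table such a router maintains. The names and the port-allocation scheme are illustrative assumptions of mine, not taken from any real router's implementation.

    // Hypothetical sketch of the per-connection state a NAT router keeps.
    interface NatEntry {
      privateAddress: string;  // e.g. "192.168.1.20" inside the subnet
      privatePort: number;     // port chosen by the internal machine
      publicPort: number;      // port exposed on the router's public address
    }

    class NatTable {
      private entries: NatEntry[] = [];
      private nextPublicPort = 40000;

      // Outbound packet: remember the mapping, rewrite the source address and port.
      outbound(privateAddress: string, privatePort: number): number {
        let entry = this.entries.find(
          e => e.privateAddress === privateAddress && e.privatePort === privatePort);
        if (!entry) {
          entry = { privateAddress, privatePort, publicPort: this.nextPublicPort++ };
          this.entries.push(entry);
        }
        return entry.publicPort;  // the packet leaves as (public IP, public port)
      }

      // Inbound packet: look up which internal machine it belongs to.
      inbound(publicPort: number): NatEntry | undefined {
        return this.entries.find(e => e.publicPort === publicPort);
      }
    }

Everything outside the subnet sees only the router's public address and the translated port; the private addresses live only in this table.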

My first encounter with NAT was to connect multiple machines to a residential ISP. It was either a cable company or a phone company; I forget which. The ISP in question wanted to charge extra for each device connected within the residential network. That is, if you connect two computers, you should pay more than if you connect one computer. I felt, and still feel, that this is a poor business arrangement. The ISP should concern itself with where I impose costs on it, which is via bandwidth. If I take a print server from one big box and move it onto its own smaller computer, then I need a new IP address, but that shouldn't matter at all to the ISP. By using NAT--in my case, Linux's "masquerading" support--the ISP doesn't even know.

This example broadens to a concern one could call privacy. What an organization does within its own network is its own business. Its communication with the outside world should be through pre-agreed protocols that, to the extent feasible, do not divulge decisions that are internal to the organization. It shouldn't matter to the general public whether each resident has their own machine, or whether they are sharing, or whether the residents have all bought iPads to augment their other devices.

For larger organizations, privacy leads to security. If you want to break into an organization's computer infrastructure, one of the first things you want to do is to feel out the topology of the network. Unless you use NAT at the boundary between your organization's network and the general internet, you are exposing your internal network topology to the world. You are giving an attacker an unnecessary leg up.

You could also view these concerns from the point of view of modularity. The public network protocol of an organization is an interface. The internal decisions within the organization are an implementation. If you want everything to hook up reliably, then components should depend on interfaces, not implementations.

Given these concerns, I see no reason to expect NAT to go away, even given an Internet with a larger address space. It's just sensible network design. Moreover, I wish that the IETF would put more effort into direct support for NAT. In particular, the NAT of today is unnecessarily weak when it comes to computers behind different NATing routers making direct connections with each other.

It is an understatement to say that not everyone agrees with me. Vint Cerf gave an interview earlier this year where he repeatedly expressed disdain for NAT.

"But people had not run out of IPv4 and NAT boxes [network address translation lets multiple devices share a single IP address] were around (ugh), so the delay is understandable but inexcusable."

Here we see what I presume is Cerf's main viewpoint on NAT: it's an ugly mechanism that is mainly used to avoid address exhaustion.

In the same interview, the interviewer asks: "One of the benefits of IPv6 is a more direct architecture that's not obfuscated by the address-sharing of network address translation (NAT). How will that change the Internet? And how seriously should we take security concerns of those who like to have that NAT as a layer of defense?" Cerf responds: "Machine to machine [communication] will be facilitated by IPv6. Security is important; NAT is not a security measure in any real sense. Strong, end-to-end authentication and encryption are needed. Two-factor passwords also ([which use] one-time passwords)."

I respectfully disagree with the comment about security. I suspect his point of view is that you can just as well use firewall rules to block incoming connections. Speaking as someone who has set up multiple sets of firewall rules, I can attest that they are fiddly and error prone. You get a much more reliable guarantee against incoming connections if you use a NAT router.

In parting, let me note a comment in the same interview:

Might it have been possible to engineer some better forwards compatibility into IPv4 or better backwards compatibility into IPv6 to make this transition easier? We might have used an option field in IPv4 to achieve the desired effect, but at the time options were slow to process, and in any case we would have to touch the code in every host to get the option to be processes... Every IPv4 and IPv6 packet can have fields in the packet that are optional -- but that carry additional information (e.g. for security)... We concluded (perhaps wrongly) that if we were going to touch every host anyway we should design an efficient new protocol that could be executed as the mainline code rather than options.

It is not too late.

Thursday, December 27, 2012

Windows 8 first impressions

An acquaintance of mine got a Windows 8 machine for Christmas, and so I got a chance to take a brief look at it. Here are some first impressions.

Windows 8 uses a tile-based UI that was called "Metro" during development. As a brief overview, the home page on Windows 8 is no longer a desktop, but instead features a number of tiles, one for each of the machine's most prominent applications. Double-clicking on a tile causes it to maximize and take the whole display. There is no visible toolbar, no visible taskbar, no overlapping of windows. In general, the overall approach looks very reasonable to me.

The execution is another story. Let me hit a few highlights.

First, the old non-tiled desktop interface is still present, and Windows drops you into it from time to time. You really can't avoid it, because even the file browser uses the old mode. I suppose Microsoft shipped this way due to concerns about legacy software, but it's really bad for users. They have to learn not only the new tiles-based user interface but also the old desktop one. Thus it's a doubly steep learning curve compared to other operating systems, and it's a jarring user experience as the user goes from one UI to the other.

An additional problem is a complete lack of guide posts. When you switch to an app, you really switch to the app. The tiles-based home page goes away, and the new app fills the entire screen, every single pixel. There is no title bar, no application bar, nothing. You don't know what the current app is except by studying its current screen and trying to recognize it. You have no way at all to know which tile on the home page got you here; you just have to remember. The UI really needs some sort of guide posts to tell you where you are.

The install process is bad. When you start it, it encourages you to create a Microsoft account and log in using that. It's a long process, including an unnecessary CAPTCHA; this process is on the critical path and should be made short and simple. Worse, though, I saw it outright crash at the end. After a hard reboot, it went back into the "create a new account" sequence, but after entering all the information from before, it hit a dead end and said the account is already being used on this computer. This error state is bad in numerous ways. It shouldn't have even jumped into the create-account sequence with an account already present. Worse, the error message indicates that the software knows exactly what the user is trying to do. Why provide an error message rather than simply logging them in?

Aside from those three biggies, there are also a myriad of small UI details that seem pointlessly bad:

  • The UI uses a lot of pullouts, but those pullouts are completely invisible unless you know the magic gesture and the magic place on the screen to do it. Why not include a little grab handle off on the edge? A handle would use a little screen space and add some clutter, but for the main pullouts the user really needs to know they are there.
  • In the web browser, they have moved the navigation bar to the bottom of the screen. This breaks all expectations of anyone that has used another computer or smart phone ever in their life. In exchange for those broken expectations, I can see no benefit; it's the same amount of screen space either way.
  • The "support" tile is right on the home page, which is a nice touch for new users. However, when you click it the first time, it dumps you into the machine registration wizard. Thus, it interrupts your already interrupted work flow with another interruption. It reminds me of an old Microsoft help program that, the first time you ran it, asked you about the settings you wanted to use for the search index.

On the whole, I know I am not saying anything new, but it strikes me that Microsoft would benefit from more time spent on their user interfaces. The problems I'm describing don't require any deep expertise in UIs. All you have to do is try the system out and then fix the more horrific traps that you find. I'm guessing the main issue here is in internal budgeting. There is a temptation to treat "code complete" as the target and to apportion your effort toward getting there. Code complete should not be the final state, though; if you think of it that way, you'll inevitably ship UIs that technically work but are full of land mines.

Okay, I've tried to give a reasonable overview of first impressions. Forgive me if I close with something a little more fun: Windows 8 while drunk.

Saturday, December 15, 2012

Recursive Maven considered harmful

I have been strongly influenced by Peter Miller's article Recursive Make Considered Harmful. Peter showed that if you used the language of make carefully, you could achieve two very useful properties:
  • You never need to run make clean.
  • You can pick any make target, in your entire tree of code, and confidently tell make just to build that one target.

Most people don't use make that way, but they should. More troubling, they're making the same mistakes with newer build tools.

What most people do with Maven, to name one example, is to add a build file for each component of their software. To build the whole code base, you go through each component, build it, and put the resulting artifacts into what is called an artifact repository. Subsequent builds pull their inputs from the artifact repository. My understanding of Ant and Ivy, and of SBT and Ivy, is that those build-tool combinations are typically used in the same way.

This arrangement is just like recursive make, and it leads to just the same problems. Developers rebuild more than they need to, because they can't trust the tool to build just the right stuff, so they waste time waiting on builds they didn't really need to run. Worse, these defensive rebuilds get checked into the build scripts, so as to "help" other programmers, making build times bad for everyone. On top of it all, even for all this defensiveness, developers will sometimes fail to rebuild something they needed to, in which case they'll end up debugging with stale software and wondering what's going on.

On top of the other problems, these manually sequenced builds are impractical to parallelize. You can't run certain parts of the build until certain other parts are finished, but the tool doesn't know what the dependencies are. Thus the tool can't parallelize it for you, not on your local machine, not using a build farm. Using a standard Maven or Ivy build, the best, most expensive development machine will peg just one CPU while the others sit idle.

Fixing the problem

Build tools should use a build cache, emphasis on the cache, for propagating results from one component to another. A cache is an abstraction that allows computing a function more quickly based on partial results computed in the past. The function, in this case, is for turning source code into a binary.

A cache does nothing except speed things up. You could remove a cache entirely and the surrounding system would work the same, just more slowly. A cache has no side effects, either. No matter what you've done with a cache in the past, a given query will give back the same value in the future.

The Maven experience is very different from what I describe! Maven repositories are used like caches, but without having the properties of caches. When you ask for something from a Maven repository, it very much matters what you have done in the past. It returns the most recent thing you put into it. It can even fail, if you ask for something before you put it in.

What you want is a build cache. Whereas a Maven repository is keyed by component name and version number (and maybe a few more things), a build cache is keyed by a hash code over the input files and a command line. If you rebuild the same source code but with a slightly different command, you'll get a different hash code even though the component name and version are the same.
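As a minimal sketch of that keying scheme, assuming a Node-style TypeScript setting, the cache key could be a hash over the exact command line and the contents of the (sorted) input files; the function name and file paths here are placeholders of mine.

    // Sketch: a build-cache key derived from the command line and the contents
    // of every input file. Change either one and the key changes.
    import { createHash } from "crypto";
    import { readFileSync } from "fs";

    function cacheKey(inputFiles: string[], commandLine: string): string {
      const hash = createHash("sha256");
      hash.update(commandLine);
      for (const file of inputFiles.slice().sort()) {  // sort for a stable key
        hash.update(file);
        hash.update(readFileSync(file));
      }
      return hash.digest("hex");
    }

    // Same inputs and same command: same key, so the cached artifact can be
    // reused. A different compiler flag or an edited file misses the cache.
    const key = cacheKey(["src/Main.java", "src/Util.java"],
                         "javac -d out src/Main.java src/Util.java");

Contrast that with a Maven repository key of component name plus version number, which says nothing about what the inputs to the build actually were.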

To make use of such a cache, the build tool needs to be able to deal sensibly with cache misses. To do that, it needs a way to see through the cache and run recursive build commands for things that aren't already present in the cache. There are a variety of ways to implement such a build tool. A simple approach, as a motivating example, is to insist that the organization put all source code into one large repository. This approach easily scales to a few dozen developers. For larger groups, you likely want some form of two-layer scheme, where a local check-out of part of the code is virtually overlaid over a remote repository.

Hall of fame

While the most popular build tools do not have a proper build cache, there are a couple of lesser known ones that do. One such is the internal Google Build System. Google uses a couple of tricks to get the approach working well for themselves. First, they use Perforce, which allows having all code in one repository without all developers having to check the whole thing out. Second, they use a FUSE filesystem that allows quickly computing hash codes over large numbers of input files.

Another build tool that gets this right is the Nix build system. Nix is a fledgling build tool built as a Ph.D. project at the University of Delft. It's available open source, so you can play with it right now. My impression is that it has a good core but that it is not very widely used, and thus that you might well run into sharp corners.

How we got here

Worth pondering is how decades of build tool development have left us all using such impoverished tools. OS kernels have gotten better. C compilers have gotten better. Editors have gotten better. IDEs have gotten worlds better. Build tools? Build tools are still poor.

When I raise that question, a common response I get is that build tools are simply a fundamentally miserable problem. I disagree. I've worked with build tools that don't have these problems, and they simply haven't caught on.

My best guess is that, in large part, developers don't know what they are missing. There's no equivalent, for build tools, of using Linux in college and then spreading the word once you graduate. Since developers don't even know what a build tool can be like, tool authors instead compete on features. Thus you see build tool authors advertising that they support Java and Scala and JUnit and Jenkins and on and on and on with a very long feature list.

Who really cares about features in a build tool, though? What I want in a build tool is what I wrote to begin with, what was described over a decade ago by Peter Miller: never run a clean build, and reliably build any sub-target you like. These are properties, not features, and you don't get properties by accumulating more code.

Saturday, November 24, 2012

Changing views toward recorded music

I frequently encounter the following argument, in this case voiced by Terrence Eden:
Imagine, just for a moment, that your Sony DVD player would only play Sony Movies' films. When you decided to buy a new DVD player from Samsung, none of those media files would work on your new kit without some serious fiddling. That's the walled garden that so many companies are now trying to drag us into. And I think it stinks.

I agree as far as it goes. Many people are involved in walled gardens, and they aren't as good as open versions. I am particularly worried about the rise of Facebook, a site that is openly dismissive of rights such as privacy and pseudonymity.

I am less worried about walled gardens for music because I think about music differently. Let me describe two relevant changes.

First, copies of music are now very easy to replace. Aside from the price being low, the time is now instant: you can click on a song on Amazon or iTunes and have that song right now. As such, the value of a stockpile of music copies is much lower than it used to be; I haven't pulled out my notebook of carefully accumulated and alphabetized CDs in well over a year.

I saw the same thing happen a decade ago in a much smaller media market: academic papers. For most of the 20th century, anyone who followed academic papers kept a shelf full of journals and a filing cabinet full of individual papers. That changed about a decade ago, when I started encountering one person after another who had a box full of papers that they never looked into. Note I said box, not cabinet: they had moved offices more recently than they'd gone fishing for a printed copy, so the papers were all still in a big box from their last move.

The second change is that I have been mulling over how a reasonable IP regime might work for music. While copies of music have been a big part of the music market in our lifetimes, it's a relatively recent development in the history of professional music. We shouldn't feel attached to it in the face of technological change. There are a number of models that work better for music than buying copies, including Pandora and--hypothetically--Netflix for music.

Selling copies has not been particularly good for music in our culture. Yes, it provides a market at all, and for that I am grateful. However, it's a market at odds with how music works. Music is transient, something that exists in time and then goes away. Copies are not: they are enshrined forever in their current form, like a photograph of a cherished moment. On the listeners' side, the copy-based market has led to us listening to the same recordings over and over. On the performers' side, we have a winner-takes-all market where the term "rock star" was born.

We would be better off with a market for music that is more aligned with performance than with recordings. Imagine we switched to something like Pandora and completely discarded digital copyright. Musicians would no longer be able to put out a big hit and then just rake in the money indefinitely. They'd have to keep performing, and they'd have to compete with other performers that are covering their works for free. I expect a similar amount of money would be in the market, just spread more evenly across the producers. Listeners, meanwhile, would have a much more dynamic and vibrant collection of music to listen to--a substantial public good. Yes, such a scenario involves walled gardens, but that's a lesser evil than digital copyright.

Sunday, October 21, 2012

Source Maps with Non-traditional Source Code

I recently explored using JavaScript source maps with a language very different from JavaScript. Source maps let developers debug in a web browser while still looking at original source code, even if that source code is not JavaScript. A lot of programming languages support them nowadays, including Dart, Haxe, and CoffeeScript.

In my case, I found it helpful to use "source" code that is different from what the human programmers typed into a text editor and fed to the compiler. This post explains why, and it gives a few tricks I learned along the way.

Why virtual source?

It might seem obvious that the source map should point back to original source code. That's what the Closure Tools team designed it for, and for goodness' sake, it's called a source map. That's the approach I started with, but I ran into some difficulties that eventually led me to a different approach.

One difficulty is a technical one. When you place a breakpoint in Chrome on a file mapped via a source map, it places one and only one breakpoint in the emitted JavaScript code. That works fine for a JavaScript-to-JavaScript compiler, but I was compiling from Datalog. In Datalog, there are cases where the same line of source code is used in multiple places in the output code. For example, Datalog rules are run in two different modes: once during the initial bootstrapping of a database instance, and later during an Orwellian "truth maintenance" phase. With a conventional source map, it is only possible to breakpoint one of the instances, and the developer doesn't even know which one they are getting.

That problem could be fixed by changes to WebKit, but there is a larger problem: the behavior of the code is different in each of its variants. For example, the truth maintenance code for a Datalog rule has some variants that add facts and some that remove them. A programmer trying to make sense of a single-stepping session needs to know not just which rule they have stopped on, but which mode of evaluation that rule is currently being used in. There's nothing in the original source code that can indicate this difference; in the source code, there's just one rule.

As a final cherry on top of the excrement pie, there is a significant amount of code in a Datalog runtime that doesn't have any source representation at all. For example, data input and data output do not have an equivalent in source code, but they are reasonable places to want to place a breakpoint. For a source map pointing to original source code, I don't see a good way to handle such loose code.

A virtual source file solves all of the above problems. The way it works is as follows. The compiler emits a virtual source file in addition to the generated JavaScript code. The virtual source file is higher-level than the emitted JavaScript code, enough to be human readable. However, it is still low-level enough to be helpful for single-step debugging.

The source map links the two forms of output together. For each character of emitted JavaScript code, the source map maps it to a line in the virtual source file. Under normal execution, web browsers use the generated JavaScript file and ignore the virtual source file. If the browser drops into a debugger--via a breakpoint, for example--then it will show the developer the virtual source file rather than the generated JavaScript code. Thus, the developer has the illusion that the browser is directly running the code in the virtual source file.

Tips and tricks

Here are a few tips and tricks I ran into that were not obvious at first.

Put a pointer to the original source file for any code where such a pointer makes sense. That way, developers can easily go find the original source file if they want to know more context about where the code in question came from. Here's the kind of thing I've been using:

    /* browser.logic, line 28 */

Also, for the sake of your developers' sanity, each character of generated JavaScript code should map to some part of the source code. Any code you don't explicitly map will end up implicitly pointing to the previous line of virtual source that does have a map. If you can't think of anything to put in the virtual source file, then try a blank line. The developer will be able to breakpoint and single-step that blank line, which might initially seem weird. It's less weird, though, than giving the developer incorrect information.

Choose your JavaScript variable names carefully. I switched generated temporaries to start with "z$" instead of "t$" so that they sort down at the bottom of the variables list in the Chrome debugger. That way, when an app developer looks at the list of variables in a debugger, the first thing their eyes encounter are their own variables.

Emit variable names into the virtual source file, even when they seem redundant. It provides an extra cue for developers as they mentally map between what they see in the JavaScript stack trace and what they see in the virtual source file. For example, here is a line of virtual source code for inputting a pair of values to the "new_input" Datalog predicate; the "value0" and "value1" variables are the generated variable names for the pair of values in question.

    INPUT new_input(value0, value1)

Implementation approach

Implementing a virtual source file initially struck me as a cross-cutting concern that was likely to turn the compiler code into a complete mess. However, here is an approach that makes it not so bad.

The compiler already has an "output" stream threaded through all classes that do any code generation. The trick is to augment the class used to implement that stream with a couple of new methods:

  • emitVirtual(String): emit text to the virtual source file
  • startVirtualChunk(): mark the beginning of a new chunk of output

With this extended API, working with a virtual source file is straightforward and non-intrusive. Most compiler code remains unchanged; it just writes to the output stream as normal. Around each human-comprehensible chunk of output, there is a call to startVirtualChunk() followed by a few calls to emitVirtual(). For example, whenever the compiler is about to emit a Datalog rule, it first calls startVirtualChunk() and then pretty prints the code to the emitVirtual() stream. After that, it emits the output JavaScript.
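As a rough illustration, here is what such an extended output stream might look like. Only emitVirtual() and startVirtualChunk() come from the description above; the TypeScript setting, the class name, and the bookkeeping details are assumptions of mine.

    // Sketch of an output stream that collects the generated JavaScript and the
    // virtual source file together, recording a line mapping between the two.
    class MappingOutputStream {
      private js: string[] = [];
      private virtualSource: string[] = [];
      private mappings: { jsLine: number; virtualLine: number }[] = [];
      private currentVirtualLine = 0;

      // Ordinary code generation keeps writing here, exactly as before.
      emit(jsLine: string): void {
        this.js.push(jsLine);
        // Map this JavaScript line back to the current virtual chunk.
        this.mappings.push({
          jsLine: this.js.length - 1,
          virtualLine: this.currentVirtualLine,
        });
      }

      // Mark the beginning of a new human-comprehensible chunk of output.
      startVirtualChunk(): void {
        this.currentVirtualLine = this.virtualSource.length;
      }

      // Emit text to the virtual source file.
      emitVirtual(line: string): void {
        this.virtualSource.push(line);
      }
    }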

With this approach, the extended output stream becomes a single point where the source map can be accumulated. Since this class intercepts writes to both the virtual file and the final generated JavaScript file, it is in a position to maintain a mapping between the two.

The main downside to this approach is that the generated file and the virtual source file must put everything in the same order. In my case, the compiler is emitting code in a reasonable order, so it isn't a big deal.

If your compiler rearranges its output in some wild and crazy order, then you might need to do something different. One approach that looks reasonable is to build a virtual AST while emitting the main source code, and then only convert the virtual AST to text once it is all accumulated. The startVirtualChunk() method would take a virtual AST node as an argument, thus allowing the extended output stream to associate each virtual AST node with one or more ranges of generated JavaScript code.

Monday, August 6, 2012

Deprecation as product lines

I would like to draw a connection between two lines of research: deprecation, and product lines. The punchline is that my personal view on deprecation could be explained by reference to product lines: deprecation is a product line with just two products. To see how that connection works, first take a look at what each of these terms means.

A product line is a collection of products built from a single shared pool of source code. Some examples of a product line would be:

  • The Android, iPhone, Windows, and Macintosh versions of an application.
  • The English, Chinese, and Lojban versions of an application.
  • The trial, normal, and professional versions of an application.
  • The embedded-Java and full-Java versions of a Java library.

There is a rich literature on product lines; an example I am familiar with is the work on CFJ (Colored Featherweight Java). CFJ is Java extended with "color" annotations. You "color" your classes, methods, and fields depending on which product line each part of the program belongs to. A static checker verifies that the colors are consistent with each other, e.g. that the mobile version of your code does not invoke a method that is only present on the desktop version. A build-time tool can build individual products in the product line by extracting just the code that goes with a designated color. To my knowledge, CFJ has not been explicitly used outside of the CIDE tool it was developed with, and CIDE itself does not appear to be widely used. Instead, the widely used tools for product lines don't have a good theoretical grounding.

Deprecation, meanwhile, is the annotation of code that is going away. As with CFJ, deprecation tools are very widely used but not well grounded theoretically. With deprecation, programmers mark chunks of code as deprecated, and a compile-time checker emits warnings whenever non-deprecated code accesses deprecated code. I have previously shown that the deprecation checker in Oracle javac has holes; there are cases where removing the deprecated code results in a program that either does not type check or that does not behave the same.

As much as I enjoyed working on a specific theoretical framework for deprecation, I must now admit that it's really a special case of CFJ. For the simpler version of deprecation checking, choose two colors, non-deprecated and everything, and mark everything with the "everything" color. You then have two products in the product line: one where you leave everything as is, and one where you keep only the non-deprecated code.

There is a lot of potential future work in this area; for this post I just wanted to draw the connection. I believe CFJ would benefit from explicitly claiming that the colored subsets of the program have the same behavior as the full program; I believe it has this property, and I went to the trouble of proving it holds for deprecation checking. Also, I believe there is fruitful work in studying the kinds of colors that are available. With deprecation, there is usually no point in time where you can remove all deprecated code in the entire code base. You want to have a number of colors for the deprecated code, for example different colors for different future versions of the software.

Sunday, July 8, 2012

Evan Farrer Converts Code from Python to Haskell

Evan Farrer has an interesting post up where he converts some code from Python to Haskell. Kudos to Farrer for empirically studying a language design question. Here is his summary:
The results of this experiment indicate that unit testing is not an adequate replacement for static typing for defect detection. While unit testing does catch many errors it is difficult to construct unit tests that will detect the kinds of defects that would be programatically detected by static typing. The application of static type checking to many programs written in dynamically typed programming languages would catch many defects that were not detected with unit testing, and would not require significant redesign of the programs.

I feel better about delivering code in a statically typed language if the code is more than a few thousand lines long. However, my feeling here is not due to the additional error checking you get in a statically typed language. Contra Farrer's analysis, I feel that this additional benefit is so small as to not be a major factor. For me, the advantages are in better code navigation and in locking developers down to using relatively boring solutions. Both of these lead to code that will stay more robust as it undergoes maintenance.

As such, the most interesting piece of evidence Farrer raises is that the four bodies of code he converted were straightforward to rewrite in Haskell. We can conclude, for these four small programs, that the dynamic features of Python were not important for expressiveness.

On the down side, Farrer's main conclusion is as much undermined by his evidence as supported. His main claim is that Haskell's type checker provides substantial additional error checking compared to what you get in Python. My objection is that all programs have bugs, and doing any sort of study of code is going to turn up some of them. The question is in the significance of those bugs. On this criterion the bugs Farrer finds do not look very important.

The disconnect is that practicing programmers don't count bugs by number. The attribute they care about is the overall bugginess of the software. Overall bugginess can be quantified in different ways; one way to do it is to consider the amount of time lost by end users due to bugs in the software. Based on this metric, a bug that loses a day's work for the end user is supremely important, more important than any feature. On the other hand, a bug that merely causes a visual artifact, and not very often, would be highly unimportant.

The bugs Farrer reports mostly have to do with misuse of the software. The API is called in an inappropriate way, or a bad input file is provided. In other words, the "bugs" have to do with the software misbehaving if its preconditions are not met, and the "fix" is to update the software to throw an explicit error message rather than to progress some distance before failing with a dynamic type error.

At this point in the static versus dynamic face off, I would summarize the score board as follows:

  • You can write industry-standard code in either style of language.
  • Static typing does not automatically yield non-buggy software. Netscape Navigator is a shining example in my mind. It's very buggy yet it's written in C++.
  • Static languages win, by quite a lot, for navigating code statically.
  • It's unclear which language gives the more productive debugging experience, but both are quite good with today's tools.
  • Testing appears to be adequate for finding the bulk of the significant errors that a type checker would find.
  • Static languages run faster.
  • Dynamic languages have consistently fast edit-run cycles; static languages at best tie with dynamic languages, and they are much worse if your development setup is off the beaten path.
  • Expressiveness does not align well with staticness. To name a few examples, C is more expressive than BASIC, Python is better than C, and Scala is better than Python.

Monday, July 2, 2012

Saving a file in a web application

I recently did an exploration of how files can be saved in a web application. My specific use case is to save a table of numbers to an Excel-friendly CSV file. The problem applies any time you want to save a file to the user's local machine, however.

There are several Stack Overflow entries on this question, for example Question 2897619. However, none of them have the information organized in a careful, readable way, and I spent more than a day scouting out the tradeoffs of the different available options. Here is what I found.

Data URLs and the download attribute

Data URLs are nowadays supported by every major browser. The first strategy I tried is to stuff the file's data into a data URL, put that URL as the href of an anchor tag, and set the download attribute on the anchor.
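Here is a minimal sketch of that strategy in TypeScript; the file name, the MIME type, and the helper name are placeholders of mine.

    // Sketch: offer csvText as a download by way of a data URL plus the
    // download attribute on a temporary anchor element.
    function saveViaDataUrl(csvText: string, fileName: string): void {
      const anchor = document.createElement("a");
      anchor.href = "data:text/csv;charset=utf-8," + encodeURIComponent(csvText);
      anchor.download = fileName;        // ignored by Firefox, as discussed below
      document.body.appendChild(anchor); // some browsers want it in the document
      anchor.click();
      document.body.removeChild(anchor);
    }

    saveViaDataUrl("a,b,c\n1,2,3\n", "export.csv");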

Unfortunately, multiple problems ensue. The worst of these is that Firefox simply doesn't support the download attribute; see Issue 676619 for a depressingly sluggish discussion of what strikes me as a simple feature to implement. Exacerbating the problem is Firefox Issue 475008. It would be tolerable to use a randomly generated filename if at least the extension were correct. However, Firefox always chooses .part at the time of this writing.

Overall, this technique is Chrome-specific at the time of writing.

File Writer API

The File Writer API is a carefully designed API put together under the W3C processes. It takes account of the browser security model, for example by disallowing file access except for files the user has explicitly chosen via a native file picker dialog.

This API is too good to be true. Some web searching suggests that only Chrome supports or even intends to support it; not even Safari is marked as planning to support it, despite the API being implemented in Webkit and not in Chrome-specific code. I verified that the API is not present in whatever random version of Firefox is currently distributed with Ubuntu.

The one thing I will say in its favor is that if you are going to be Chrome-specific anyway, this is a clean way to do it.

ExecCommand

For completeness, let me mention that Internet Explorer also has an API that can be used to save files. You can use ExecCommand with SaveAs as an argument. I don't know much about this solution and did not explore it very far, because LogicBlox web applications have always, so far, needed to be able to run in non-Microsoft browsers.

For possible amusement, I found that this approach doesn't even reliably work on IE. According to a Stack Overflow post I found, on certain common versions of Windows, you can only use this approach if the file you are saving is a text file.

Flash

Often when you can't solve a problem with pure HTML and JavaScript, you can solve it with Flash. Saving files is no exception. Witness the Downloadify Flash application, which is apparently portable to all major web browsers. Essentially, you embed a small Flash application in an otherwise HTML+JavaScript page, and you use the Flash application to do the file save.

I experimented with Downloadify's approach with some custom ActionScript, and there is an unfortunate limitation to the whole approach: there is a widely implemented web browser security restriction that a file save can only be initiated in response to a click. That restriction is not a problem by itself in my context, but there's a compounding problem: web browsers do not effectively keep track of whether they are in a mouse-click context if you cross the JavaScript-Flash membrane.

Given these restrictions, the only way I see to make it work is to make the button the user clicks on be a Flash button rather than an HTML button, which is what Downloadify does. That's fine for many applications, but it opens up severe styling issues. The normal way to embed a Flash object in a web page involves using a fixed pixel size for the width and height of the button; for that to work, it implies that the button's face will be a PNG file rather than nicely formatted text using the user's preferred font. It seems like too high of a price to pay for any team trying to write a clean HTML+JavaScript web application.

Use an echo server

The most portable solution I am aware of is to set up an echo server and use a form submission against that server. It is the only non-Flash solution I found for Firefox.

In more detail, the approach is to set up an HTML form, stuff the data to be saved into a hidden field of the form, and submit the form. Have your echo server respond with whatever data the client passed to it, and have it set the Content-Disposition HTTP header to indicate that the data should be saved to a file. Here is a typical HTTP header that can be used:

Content-Disposition: attachment; filename=export.csv
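
For what it's worth, here is a sketch of the client side of this approach in TypeScript. The /echo URL and the "content" field name are placeholders; the server is assumed to echo that field back with the Content-Disposition header shown above.

    // Sketch: POST the data to a hypothetical echo endpoint, which responds with
    // the same bytes plus Content-Disposition: attachment, so the browser saves
    // the response as a file instead of navigating to it.
    function saveViaEchoServer(csvText: string): void {
      const form = document.createElement("form");
      form.method = "POST";
      form.action = "/echo";             // placeholder URL for the echo server

      const field = document.createElement("input");
      field.type = "hidden";
      field.name = "content";            // placeholder field name
      field.value = csvText;
      form.appendChild(field);

      document.body.appendChild(form);   // must be in the document to submit
      form.submit();
      document.body.removeChild(form);
    }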

This technique is very portable; later versions of Netscape would probably be new enough. On the down side, it adds significant latency: the content has to travel up to the server and then back down again.

Wednesday, March 28, 2012

Shapiro on compiling away abstraction

Via Lambda the Ultimate, I see that Jonathan Shapiro has a rambling retrospective on BitC and why he thinks it has gotten into a dead end.

One of the several themes is that the following combination of design constraints cause trouble:
  • He wants good performance, comparable to C++.
  • He wants a better set of abstraction facilities than C++.
  • He wants separate compilation to do most of the work, like in C++, rather than have the runtime do most of the real compilation, as in Java.
It's hard to excerpt, but here he is explaining the way this all works in C++:
In C++, the "+" operator can be overloaded. But (1) the bindings for primitive types cannot be replaced, (2) we know, statically, what the bindings and representations *are* for the other types, and (3) we can control, by means of inlining, which of those operations entail a procedure call at run time. I'm not trying to suggest that we want to be forced to control that manually. The key point is that the compiler has enough visibility into the implementation of the operation that it is possible to inline the primitive operators (and many others) at static compile time.
To contrast, BitC has trouble due to its extra level of abstraction:
In BitC, *both* of these things *are* abstracted at static compile time. It isn't until link time that all of the representations are in hand.

He goes on to consider the implications of different points in the design space. One point he brings up is that there is another stage of compilation that can be helpful to exploit: install time. Instead of compile time, run time, or even the link time for an application, you can get a lot of leverage if you apply compilation techniques at the point that a collection of applications and libraries are installed onto a system.

Web toolkits are a different domain than Shapiro is thinking about, but they face this particular question as well. You can greatly improve web applications if the tools do some work before all the source code gets to the web browser in front of the user. Without tools, if you just hack JavaScript files by hand and post them on a static HTTP server, the web browser ends up lazily linking the program, which means the application takes longer to start up. Good toolkits do a lot of work before the code makes it down to the end user, and in particular they really go to town at link time. At link time, the entire program is available, so it's possible to divide the program content--both programmatic code and media resources--into reasonably sized bundles of downloadable content.

Saturday, March 10, 2012

Greg Mankiw on SOPA

Greg Mankiw proposes a productive starting point for discussion about SOPA:
This is an important economic issue for the United States. We are large producers of intellectual property: movies, novels, software, video games, TV shows, and even economics textbooks. If offshore websites find a way to distribute this intellectual property without paying for it, it is as if organized crime were stealing merchandise from a manufacturing firm at the loading dock.

I fully agree. Heck, I make my living in intellectual property.

However, I strongly feel that when there is a conflict, basic liberties take priority. People playing DVDs they own, on DVD players they own, should not be liable for inducing infringement. Teenagers making mix tapes for each other should not be criminals, not even formally. Web sites should not be taken down until the people running them have had their day in court.

We should all find a way to understand that changing technologies mean that some businesses will rise and others decline. However, there are plenty of business models within the parameters of basic freedom. I have suggested controlling performance, and Arnold Kling has suggested controlling aggregates of data. Those are two ideas, and I am sure there are plenty more.

Before we can really try to find the next business models around intellectual property, we must all get used to the idea that the 20th century is neither the beginning nor the end of history. Business models developed during the 20th century made some amount of sense for their time. Technology has significantly improved, especially technology that involves computation and data transmission, and we owe it to ourselves to improve the business models, too.

Wednesday, March 7, 2012

Posner on digital copyright

Richard Posner takes on digital copyright:
The importance of copyright, and hence the negative consequences of piracy for the creation of new works, are, however, often exaggerated. Most of the world’s great literature was written before the first copyright statute, the Statute of Anne, enacted in 1710. [...] Copyright law needs to be adapted to the online revolution in distribution.

Posner has a radical suggestion that I believe would work out just fine:
So, were Google permitted to provide complete online access to all the world’s books, in their entirety, the gain in access might more than offset the loss in authors’ royalties.

Posner justifies his claim by considering the increase in creativity and in creative works that would result.

I would further justify such a policy by considering what it is going to take to protect copyright in its current form. SOPA, PROTECT-IP, ACTA, and the DMCA are all based on controlling copies. I have little doubt that measures like them will succeed over time and grow stronger. The main way to fight them is more fundamental. Stop trying to prevent copies--which is impossible--and focus more on other revenue models. The models don't even have to be designed as a matter of public policy. Simply remove the props on the old-fashioned models, and make room for entrepreneurs to search for new models.

Wednesday, January 25, 2012

The good and bad of type checking, by example

I enjoyed watching Gilad Bracha present Dart to a group of Stanford professors and students. As one might expect, given Bracha's background, he spends considerable time on Dart's type system.

Several members of the audience seemed surprised to find a defender of Dart's approach to typing. They understand untyped languages such as JavaScript, and they understand strictly typed languages such as Java. However, they don't understand why someone would intentionally design a language where types, when present, might still fail to hold up at run time.

One blog post will never convince people one way or another on this question, but perhaps I can show the motivation and dissipate some of the stark mystification around Dart's approach. Let me provide two examples where type checking would complain about a program. Here's the first example:

String fileName = new File("output.txt");

I find examples like this very compelling. The programmer has made an easy mistake. There's no question it is a mistake; this code will always fail when it is run. Furthermore, a compiler can easily detect the mistake simply by assigning a type to each variable and expression and seeing if they line up. Examples like this make type checking look really good.

On the other hand, consider this example:

void drawWidgets(List<Widget> widgets) { ... }
List<LabelWidget> labels = computeLabels();
drawWidgets(labels);

This program is probably fine, but a traditional type checker is forced to reject it. Even though LabelWidget is a subclass of Widget, a List<LabelWidget> is not a subtype of List<Widget>, so the function call in the last line will not type check. The problem is that the compiler has no way of knowing that drawWidgets only reads from its input list. If drawWidgets were to add some more widgets to the list, then there would be a type error.

There are multiple ways to address this problem. In Java, programmers are expected to rewrite the type signature of drawWidgets as follows:

void drawWidgets(List<? extends Widget> widgets) { ... }

In Scala, the answer would be to use an alternate List type that is covariant in its argument.

Whatever the solution, it is clear that this second example has a much different impact on developer productivity than does the first one. First of all, in this second example, the compiler is probably wrong, and it is just emitting an error to be on the safe side. Second, the corrected version of the code is much harder to understand than the original; in addition to parametric types, it also uses a bounded existential type variable. Third, it raises the bar for who can use this programming language. People who could be perfectly productive in a simply typed language will have a terrible time with quantifier-happy generic Java code. For a host of reasons, I feel that on net the type checker makes things worse in this second example. The cases where it prevents a real error are outweighed by all the problems.

Dart's type system is unusual in that it is consistent with both examples. It rejects code like the first example, but is quiet for code like the second one.

Sunday, January 22, 2012

DNS takedowns alive and well

I wrote earlier that PROTECT-IP and SOPA are getting relatively too much attention. Specifically, I mused about this problem:
First, DNS takedowns are already happening under existing law. For example, the American FBI has been taking down DNS names for poker websites in advance of a trial. SOPA and PROTECT-IP merely extend the tendrils rather than starting something new.

Today I read news that indeed, the FBI has taken down the DNS name for Megaupload.com. I'm not sure the American public is in tune with precisely what its federal government is doing.

The news has sad aspects other than the use of DNS takedowns. A few of them leapt out at me:

  • There has not yet been a trial. If I ask most Americans about how their legal system works, I expect one of the first things people would say is that, in America, people are innocent until proven guilty.
  • There is twenty years of jail time associated with the charges. Isn't that a little harsh for copyright violations? I think of jail as how you penalize murderers, arsonists, and others who are going to be a threat to the public if they are left loose. Intellectual property violations somehow seem to not make the cut.
  • It's an American law, but New Zealand police arrested some of the defendants.
  • The overall demeanor of the authorities comes off as rather thuggish. For example, they seized all manner of unrelated assets of the defendants, including their cars.
I am glad SOPA and PROTECT-IP went down. However, much of what protesters complained about is already happening.

Monday, January 2, 2012

DNS takedowns under fire in the U.S.

I get the impression that SOPA, the latest version of a U.S. bill to enable DNS takedowns of non-American web sites, is under a lot of pressure. A major blow to its support is that the major gaming console companies are backing out.

I am certainly heartened. However, the problem is still very real, for at least two reasons.

First, DNS takedowns are already happening under existing law. For example, the American FBI has been taking down DNS names for poker websites in advance of a trial. SOPA and PROTECT-IP merely extend the tendrils rather than starting something new.

Second, this bill won't be the last. So long as the Internet uses DNS, there is a vulnerability built right into the protocols. Secure DNS doesn't make it any better; on the contrary, it hands the keys to the DNS over to national governments.

The only long term way to fix this problem is to adjust the protocols to avoid a single point of vulnerability. It requires a new way to name resources on the Internet.