Wednesday, March 28, 2012

Shapiro on compiling away abstraction

Via Lambda the Ultimate, I see that Jonathan Shapiro has posted a rambling retrospective on BitC and why he thinks it has reached a dead end.

One of the several themes is that the following combination of design constraints causes trouble:
  • He wants good performance, comparable to C++.
  • He wants a better set of abstraction facilities than C++.
  • He wants separate compilation to do most of the work, as in C++, rather than have the runtime do most of the real compilation, as in Java.
It's hard to excerpt, but here he is explaining how this all works in C++:
In C++, the "+" operator can be overloaded. But (1) the bindings for primitive types cannot be replaced, (2) we know, statically, what the bindings and representations *are* for the other types, and (3) we can control, by means of inlining, which of those operations entail a procedure call at run time. I'm not trying to suggest that we want to be forced to control that manually. The key point is that the compiler has enough visibility into the implementation of the operation that it is possible to inline the primitive operators (and many others) at static compile time.
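To make that concrete, here is a minimal C++ sketch of the situation Shapiro describes (the Vec2 type and its operator are my own illustration, not from his post): because the compiler can see the body of the user-defined "+", it can inline it at static compile time, just as it does the built-in "+" on doubles, whose binding cannot be replaced at all.

#include <cstdio>

// A small value type with an overloaded "+".
struct Vec2 {
    double x, y;
};

// The binding and representation of Vec2's "+" are fully visible at
// static compile time, and the primitive "+" on double cannot be rebound.
inline Vec2 operator+(Vec2 a, Vec2 b) {
    return Vec2{a.x + b.x, a.y + b.y};
}

int main() {
    Vec2 p{1.0, 2.0}, q{3.0, 4.0};
    // The compiler can inline operator+ here, so no procedure call needs
    // to survive to run time.
    Vec2 r = p + q;
    std::printf("(%f, %f)\n", r.x, r.y);
    return 0;
}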
By contrast, BitC has trouble due to its extra level of abstraction:
In BitC, *both* of these things *are* abstracted at static compile time. It isn't until link time that all of the representations are in hand.

He goes on to consider the implications of different points in the design space. One point he brings up is that there is another stage of compilation that can be helpful to exploit: install time. Instead of compile time, run time, or even the link time for an application, you can get a lot of leverage if you apply compilation techniques at the point where a collection of applications and libraries is installed onto a system.

Web toolkits are a different domain from the one Shapiro is thinking about, but they face this particular question as well. You can greatly improve web applications if the tools do some work before all the source code gets to the web browser in front of the user. Without tools, if you just hack JavaScript files by hand and post them on a static HTTP server, the web browser ends up lazily linking the program, which means the application takes longer to start up. Good toolkits do a lot of work before the code makes it down to the end user, and in particular they really go to town at link time. At link time, the entire program is available, so it's possible to divide the program content--both programmatic code and media resources--into reasonably sized bundles of downloadable content.

Saturday, March 10, 2012

Greg Mankiw on SOPA

Greg Mankiw proposes a productive starting point for discussion about SOPA:
This is an important economic issue for the United States. We are large producers of intellectual property: movies, novels, software, video games, TV shows, and even economics textbooks. If offshore websites find a way to distribute this intellectual property without paying for it, it is as if organized crime were stealing merchandise from a manufacturing firm at the loading dock.

I fully agree. Heck, I make my living in intellectual property.

However, I strongly feel that when there is a conflict, basic liberties take priority. People playing DVDs they own, on DVD players they own, should not be liable for inducing infringement. Teenagers making mix tapes for each other should not be criminals, not even formally. Web sites should not be taken down until the people running them have had their day in court.

We should all find a way to understand that changing technologies mean that some businesses will rise and others decline. However, there are plenty of business models within the parameters of basic freedom. I have suggested controlling performance, and Arnold Kling has suggested controlling aggregates of data. Those are two ideas, and I am sure there are plenty more.

Before we can really try to find the next business models around intellectual property, we must all get used to the idea that the 20th century is neither the beginning nor the end of history. Business models developed during the 20th century made some amount of sense for their time. Technology has significantly improved, especially technology that involves computation and data transmission, and we owe it to ourselves to improve the business models, too.

Wednesday, March 7, 2012

Posner on digital copyright

Richard Posner takes on digital copyright:
The importance of copyright, and hence the negative consequences of piracy for the creation of new works, are, however, often exaggerated. Most of the world’s great literature was written before the first copyright statute, the Statute of Anne, enacted in 1710. [...] Copyright law needs to be adapted to the online revolution in distribution.

Posner has a radical suggestion that I believe would work out just fine:
So, were Google permitted to provide complete online access to all the world’s books, in their entirety, the gain in access might more than offset the loss in authors’ royalties.

Posner justifies his claim by considering the increase in creativity and in creative works that would result.

I would further justify such a policy by considering what it is going to take to protect copyright in its current form. SOPA, PROTECT-IP, ACTA, and the DMCA are all based on controlling copies. I have little doubt that measures like them will keep succeeding and growing stronger over time. The main way to fight them is more fundamental: stop trying to prevent copies--which is impossible--and focus instead on other revenue models. The models don't even have to be designed as a matter of public policy. Simply remove the props from under the old-fashioned models, and make room for entrepreneurs to search for new ones.