Wednesday, January 21, 2009
Another Scala book
David Pollak just announced that he is writing a Scala book: "I am very excited to announce that I am writing Beginning Scala and it will be published by APress." I'm sure it will be excellent. I'm surprised, though, that APress wasn't more interested in his first idea of a book specifically about Lift.
Monday, January 12, 2009
One event at a time
A design rule I've come to believe in due to painful experience is that it's best to run the code for different purposes independently whenever possible. The Floater bridge software used to break this rule and update the UI in various places while in the middle of serving a network request. Once I changed all of these to run at the next tick of the event loop, it stopped locking up. The Unix Squeak VM used to (and possibly still can) lock up if you resize the window, because at least two different primitives had an internal call to pump the event loop waiting for an event that matches some pattern.
Nowadays I really try to run code from the event loop when possible. It can cause a big decrease in complexity, because each event handler runs in a fresh context with all of the program's invariants established. When implementing code that will run at the top level of the call stack, you don't have to reason about what arbitrary event might already be in progress while trying to run the new one.
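The pattern is simple enough to sketch in a few lines. Here is a toy single-threaded event loop (the class and method names are my own, not from any framework) in which a handler defers its UI update to the next tick instead of running it in the middle of serving a request:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy single-threaded event loop: handlers never run nested inside
// other handlers; follow-up work is deferred to a later tick.
public class EventLoop {
    private final Queue<Runnable> tasks = new ArrayDeque<>();

    // Schedule work for a future tick instead of calling it inline.
    public void post(Runnable task) {
        tasks.add(task);
    }

    // Drain the queue. Each task starts at the top of the call stack,
    // with all of the program's invariants re-established.
    public void run() {
        Runnable task;
        while ((task = tasks.poll()) != null) {
            task.run();
        }
    }

    public static void main(String[] args) {
        EventLoop loop = new EventLoop();
        loop.post(() -> {
            System.out.println("handle network request");
            // Update the UI at the next tick, not mid-request.
            loop.post(() -> System.out.println("update UI"));
        });
        loop.run();
    }
}
```

Because the UI update is posted rather than called, it can never observe the half-finished state of the request handler.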
Unfortunately, AJAX might strike against this principle. Eugene Lazutkin has written an in-depth defense of the event-loop architecture I sketch above, but one of the important tools for using this approach is apparently crippled in every single browser:
"I ran this code on different browsers and different operating systems and it turned out that all of them have the minimal timeout time. Setting the timeout time lower than that value doesn't reduce the timeout."
"Lower than that value" includes a timeout of 0.
I wonder how hard it would be for GWT to work around this and make zero-timeout events really work?
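One conceivable workaround (purely a sketch of mine, not something GWT actually does) is to amortize the clamped timer: keep your own queue of zero-delay tasks, and each time the real, clamped timer fires, drain as many tasks as fit in a small time budget rather than just one:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch: amortizing a clamped timer. The host environment only fires
// a timer callback every ~10ms no matter what delay we ask for, so
// each callback drains as many queued zero-delay tasks as fit in a
// small time budget, then yields to keep the UI responsive.
public class ZeroTimeoutQueue {
    private final Queue<Runnable> tasks = new ArrayDeque<>();
    private final long budgetNanos;

    public ZeroTimeoutQueue(long budgetMillis) {
        this.budgetNanos = budgetMillis * 1_000_000L;
    }

    // Stand-in for setTimeout(task, 0).
    public void setZeroTimeout(Runnable task) {
        tasks.add(task);
    }

    // Called once per (clamped) timer tick; returns how many tasks ran.
    public int onTimerFired() {
        long start = System.nanoTime();
        int ran = 0;
        Runnable task;
        while ((task = tasks.poll()) != null) {
            task.run();
            ran++;
            if (System.nanoTime() - start > budgetNanos) {
                break; // out of budget; yield to the browser
            }
        }
        return ran;
    }

    public boolean hasPending() {
        return !tasks.isEmpty();
    }
}
```

This doesn't make a single zero-timeout event fire sooner, but it keeps a burst of them from each paying the minimum-timeout tax separately.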
Tuesday, January 6, 2009
Just how evil is synchronous XHR?
The web is based on de facto specs, requiring a lot of investigation to find out what exactly the platform does. One question about the de facto behavior is being re-raised by code-splitting efforts: just how bad is synchronous XHR, if used in a place where the application may as well pause anyway? This question comes up because you can't use GWT's code-splitting approach without good static analysis. You can use dynamic analysis, but whenever the dynamic analyzer guesses wrong, the system must fall back on synchronous XHR.
Mark S. Miller sent me a link to Mark Pruett, who did some actual experiments to see what happens in practice. Pruett concludes that all browsers but Firefox 2 are fine.
Kelly Norton is less sanguine. While he likes Opera's behavior reasonably well, he's unsure about IE 6, and he thinks the Safari discussion is inaccurate. It's not clear to what extent browsers are going to mimic Opera.
Overall, I come away thinking that synchronous XHR is reasonable for code splitting so long as the dynamic analyzer is only very infrequently wrong. The app will freeze when the analyzer misses, which is bad, but it should be rare. Further, the development team will need to set up a suite of interaction cases to feed the dynamic analyzer, and that will consume time. I guess extra time is expected, though, if you want better results.
It makes me glad not to be working with raw JavaScript, though. To the extent code-splitting is important, I really think new web applications should not use raw JavaScript. They should use some analyzable subset of JavaScript that has not yet been defined, or they should use a suitable existing language such as Java.
Monday, January 5, 2009
Method calls should be immediate
There's a famous Note on Distributed Computing, by Jim Waldo et al., arguing that the procedure calls of a language should not be made transparently remote. It's a story that has repeated itself over the years: code starts appearing on multiple networked machines, people want to abstract away from the network, and so they add transparent procedure calls of some kind. To my knowledge it has always gone badly, with the possible exception of cluster computing. Waldo's famous Note lists four killer differences between local and remote calls: latency, memory access, partial failure, and concurrency. I'm convinced.
When it comes to code splitting, Waldo's Note is being challenged again. Doloto and other AJAX tools take the approach where every method call potentially calls into non-loaded code. When that happens, the method call blocks until more code is loaded.
This scenario is not exactly the same as a transparent remote call. Memory access is no longer an issue, because all data of the computation is stored on a single computer. Partial failure could arguably be avoided by deciding all failures will cause the app to shut down. What about latency, though?
Latency looks like a very hard problem for this approach. People developing web applications try to make their sites start up with a minimum of round trips; they aim for as few as one or two. To actually achieve such low latencies, programmers must be thoroughly in control of where delays happen, not have those delays happen at any old method call. Further, programmers would like to give some feedback to the user while a download is happening. How can they implement this if any method call might block for more downloading? If execution is blocked, the program can't possibly execute more code to put up a feedback message.
The challenges are severe, so I look forward to seeing how these systems address them.
For the Google Web Toolkit, we are trying a different approach. Regular method calls stay as normal and run immediately. However, wherever the programmer explicitly specifies a split point, the compiler is allowed to arrange for code to download later. A split point looks like this:
public void onComposeMailButtonClicked() {
  GWT.runAsync(new RunAsyncCallback() {
    public void onSuccess() {
      activateComposeMailView();
    }
    public void onFailure(Throwable reason) {
      Window.alert("Server cannot be reached.");
    }
  });
}
There is no mistaking this for a regular method call! Notice that it looks just like passing an event handler into a GUI framework such as Swing. The event handler is specified as an anonymous inner class. In this case there are two methods on the event handler, one called once the code is downloaded, and one called in case there is any network failure. Note that the latter means partial failure is still supported. You can design an application to keep running but with reduced functionality.
With this arrangement, programmers know exactly where a network download can occur and thus can design a loading pattern that will make their application start quickly. Just as importantly, though, regular method calls remain regular method calls, and programmers don't have to worry about extra network activity or failure conditions.
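To show the shape of the pattern outside GWT, here is a self-contained analog (all of the names here are mine, not GWT's): the split point is an explicit call site taking success and failure callbacks, so the application can show feedback around the download and degrade gracefully on failure instead of crashing.

```java
// Self-contained analog of an explicit split point (names are mine,
// not GWT's). Only the marked call site can trigger a download and
// its failure handling; every other method call stays a plain call.
public class SplitPointDemo {
    interface LoadCallback {
        void onSuccess();
        void onFailure(Exception reason);
    }

    static class FragmentLoader {
        private final boolean networkUp;
        FragmentLoader(boolean networkUp) { this.networkUp = networkUp; }

        // Explicit split point: code inside the callback runs only
        // once the fragment has been fetched (or fails cleanly).
        void runAsync(LoadCallback callback) {
            if (networkUp) {
                callback.onSuccess();
            } else {
                callback.onFailure(new Exception("server unreachable"));
            }
        }
    }

    static String composeMail(FragmentLoader loader) {
        StringBuilder status = new StringBuilder("loading..."); // user feedback
        loader.runAsync(new LoadCallback() {
            public void onSuccess() {
                status.replace(0, status.length(), "compose view shown");
            }
            public void onFailure(Exception reason) {
                // Partial failure: keep running with reduced functionality.
                status.replace(0, status.length(), "degraded: " + reason.getMessage());
            }
        });
        return status.toString();
    }
}
```

The feedback question from above has an answer here: because the call site returns control immediately in a real asynchronous setting, the application is free to display "loading..." and keep handling events until one of the two callbacks fires.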