Monday, November 28, 2011

danah boyd on Pseudonyms

I'm late to notice, but danah boyd has a good article up on the case for pseudonymity. She emphasizes the safety issues, which I certainly agree with.

Something I hadn't fully processed is that many people are using Facebook as an example that real names work. Perhaps this argument is so popular because the Zuckerbergs have publicly emphasized it. At any rate, it's a weak argument. For one thing, quite a number of Facebook profiles use pseudonyms; see Lady Gaga, Ghostcrawler, and Anne Rice. If the Zuckerbergs really are trying to shut down pseudonyms, they're doing a terrible job of it. For another, as boyd points out, Facebook is unusual in that it started as a close-knit group of college students; the membership it grew from was relatively uninterested in pseudonyms.

Reading the comments on boyd's post, it appears that the main reason people oppose pseudonyms is the hope that requiring real names will improve the level of conversation in a forum. I continue to be mystified by this perspective, but it does appear to be what is driving most opponents of pseudonyms. I just don't get it. Partly I'm simply used to an Internet full of pseudonyms. Partly it's just too easy to think of perfectly legitimate activities that you wouldn't want popping up when someone does a web search on "Lex Spoon". People interested in that stuff should instead search for Saliacious Leximus. They'll avoid all the nerdy computer talk and get straight to the goods they are looking for.

Overall, pseudonyms appear to be one of those divides where people on each side have a hard time talking across the gulf. Apparently it is perfectly obvious to many people that if Google Plus and Facebook embraced pseudonyms, their members would get overwhelmed by harassment and spam. Personally, I don't even understand the supposed threat. Why would I circle or friend a complete stranger? And if I had, why wouldn't I simply unfriend them?

Saturday, November 19, 2011

The Web version of interface evolution

Nick Bradbury points out an important issue with web APIs:
I created FeedDemon 1.0 in 2003, and it was the first app I wrote that relied on web APIs. Now those APIs no longer exist, and almost every version of FeedDemon since then has required massive changes due to the shifting sands of the web APIs I've relied on.

There is a tangle of issues involved here.

One is that web APIs are certainly not a suitable foundation for a self-contained system that needs to remain internally consistent indefinitely. You couldn't build a space probe using any web APIs you don't control: by the time it got to Jupiter, the protocols might well have changed. Even barring an intentional protocol change, the service might upgrade its software and accidentally break you.
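To make that concrete, here is a minimal sketch of what a client of such an API tends to look like. The endpoint, URL, and field names are invented for illustration; the structural problem, though, is real: nothing in this code changes locally, yet it breaks the day the provider reshapes the payload.

```typescript
// Hypothetical client of a feed-reading web API (URL and response
// shape are made up for illustration).
interface FeedItem {
  title: string;
  link: string;
}

async function fetchUnreadItems(feedId: string): Promise<FeedItem[]> {
  const response = await fetch(`https://api.example.com/v1/feeds/${feedId}/unread`);
  if (!response.ok) {
    throw new Error(`Feed API returned ${response.status}`);
  }
  const body = await response.json();
  // This line encodes an assumption about the provider's payload.
  // If the provider ships a new version that renames `items` or moves
  // the data elsewhere, this client breaks with no local code change.
  return body.items as FeedItem[];
}
```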

Not all software is of this character, however. Most user-facing software is expected to stay compatible with the other software a particular user is taking advantage of. Most user-facing software has a mechanism for being updated after it's deployed.

The thought experiment I use here is to consider why Lisp Machines are no use nowadays. These systems were very impressive; think Emacs, but with a better programming language, and with multiple for-profit companies writing professional software to run on them. Nonetheless, they are useless today (and Emacs is as awesome as ever). That they are is obvious, but what is the precise reason?

My best answer is that the world changed around them. Even without any explicit API break, Lisp Machines just don't have integration with the other kinds of software that their users would find important. That whole Internet thing is a simple example.

In general, most software is only useful if it is under active maintenance. The difference between zero maintenance and a little bit of maintenance is huge. If you are considering using software that isn't under maintenance, run away! If you continue to use it, you will eventually find that you have become its maintainer yourself.

Which brings me back to APIs. APIs are never perfect, and so they evolve just like any other interface. The only way this is different for web APIs is that the clients do not get to choose when to upgrade. One day, the provider updates their software and it's simply on the new API. Gilad Bracha describes this as "versionless software".
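One way this plays out in client code: since you cannot pin the server the way you pin a library version, clients often end up tolerating both the old and the new payload shape during a transition the provider scheduled, not them. A sketch, with invented field names:

```typescript
// Sketch of a client coping with a provider-driven change it did not
// choose: an older response used `items`, a newer one uses `entries`.
// Both field names are hypothetical.
interface FeedItem {
  title: string;
  link: string;
}

function extractItems(body: any): FeedItem[] {
  if (Array.isArray(body.entries)) {
    return body.entries; // new payload shape
  }
  if (Array.isArray(body.items)) {
    return body.items; // old payload shape
  }
  throw new Error("Unrecognized feed payload; the API may have changed again");
}
```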

Ideally, this evolution should involve discussion between the service provider and its clients. Exactly how those discussions work is a rich question, much like any other decision process among multiple stakeholders. For an API like the ones Google offers, individual clients have very little influence, so you have to decide whether to take it or leave it; part of that decision involves your expectation of how well the service provider will treat you. In other cases, there might be a contractual agreement between a user of the service and its provider, and any API changes can be worked out as part of negotiating the contract. In still other cases, a group of software users might meet in standards committees, as happens to some degree with the HTTP protocol.

In short, I really don't think that constant interface change is a fundamental reason to avoid web APIs. Ongoing maintenance is a fundamental part of most software development, whether or not the APIs you program against are accessed over the web. What you should do is be choosy about which specific APIs you depend on. Just as you wouldn't want to depend on an ancient unmaintained hunk of software, you also wouldn't want to use a hyperactively maintained web service whose API is completely different from one week to the next. Think explicitly about the API evolution story for any service you depend on, and use your judgement.
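One practical way to think explicitly about the evolution story is to keep the web API behind a small interface of your own, so that when the provider changes something, the churn is confined to one adapter module. A rough sketch, with a hypothetical service and types:

```typescript
// Your application programs against this interface, which you control.
interface FeedSource {
  unreadItems(feedId: string): Promise<{ title: string; link: string }[]>;
}

// The only module that knows about the provider's wire format.
// When the provider's API evolves, this adapter is the blast radius.
class ExampleFeedApi implements FeedSource {
  constructor(private baseUrl: string) {}

  async unreadItems(feedId: string) {
    const response = await fetch(`${this.baseUrl}/feeds/${feedId}/unread`);
    if (!response.ok) {
      throw new Error(`Feed API returned ${response.status}`);
    }
    const body = await response.json();
    return (body.items ?? []).map((raw: any) => ({
      title: raw.title,
      link: raw.link,
    }));
  }
}
```

The point is not the particular shape of the interface, but that only one module has to track the provider's churn.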

Tuesday, November 8, 2011

Cloud9 is hitting limits with JavaScript

Cloud9 has a well-written call for adding classes to JavaScript:
Adding classes to JavaScript is technically not hard -- yet, its impact can be profound. It lowers the barrier to entry for new JavaScript developers and reduces the incompatibility between libraries. Classes in JavaScript do not betray JavaScript’s roots, but are a pragmatic solution for the developer to more clearly express his or her intent. And in the end, that’s what programming languages are all about.
Their argument about library compatibility seems strong to me. It is reasonable to write a Python or Java library that stands alone and has minimal external dependencies. With JavaScript, however, the temptation is strong to work within a framework like Dojo or jQuery just so that you get basic facilities like, well, classes. If I were working on a large JavaScript code base, though, I'd be strongly tempted to switch over to Dart. It already has the basic language facilities they yearn for, and it's going to move forward much more quickly.
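To make the "basic facilities like classes" point concrete, here is a sketch written in TypeScript, whose class syntax compiles down to exactly the prototype machinery JavaScript exposed in 2011. The Feed example is invented; the contrast is the boilerplate that each framework papered over in its own incompatible way versus what built-in class syntax buys you.

```typescript
// The kind of boilerplate every pre-class JavaScript library reinvented:
// a constructor function plus methods attached to its prototype.
function LegacyFeed(this: any, title: string) {
  this.title = title;
}
LegacyFeed.prototype.describe = function (this: any): string {
  return `Feed: ${this.title}`;
};

// The same thing with class syntax, which is what the Cloud9 post is
// arguing should be part of the language itself.
class Feed {
  constructor(public title: string) {}

  describe(): string {
    return `Feed: ${this.title}`;
  }
}

const feed = new Feed("Lisp Machines weekly");
console.log(feed.describe()); // "Feed: Lisp Machines weekly"
```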