Thursday, May 26, 2011

Google Wallet: Why NFC?

I was excited to read that Google is going to build a payment method based on a smartphone:
Today in our New York City office, along with Citi, MasterCard, First Data and Sprint, we gave a demo of Google Wallet, an app that will make your phone your wallet. You’ll be able to tap, pay and save using your phone and near field communication (NFC). We’re field testing Google Wallet now and plan to release it soon.

I have long wished that payments could be made using an active device in the buyer's possession rather than having the buyer type secret information--a PIN--into a device the seller owns. The latter approach requires that a device the buyer has never seen before be diligent about deleting the PIN after it is used. It also requires that a device the buyer has never seen before make the same request to the bank that it displays on its screen. Security is much higher when using a device the buyer owns.

The main flaw with this approach is that it requires people to carry around these active devices. Google's bright idea is to make that device a smartphone. Brilliant.

The one thing I don't understand is why Google is only supporting it via NFC. I had never heard of NFC until today; for any readers like me, it is basically a really dumb, short-range, low-bandwidth wireless protocol. It sounds well-suited for the application, but no current phones support it.

An alternative approach that should work with current phones is to use barcode-reading software. The seller's hardware would display a barcode that encodes a description of what is being bought, the amount of money, and a transaction ID number. It would simultaneously upload the transaction information to Google. The buyer would scan the barcode with their phone and, if they authorize the payment, the phone would send the authorization to Google. The seller would then receive notification that the payment has been authorized. For larger transactions, further rounds of verification are possible, but for groceries and gas, that could be the end of the story.
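
To make the flow concrete, here is a minimal sketch of the message exchange in Scala. Everything here is hypothetical: the Processor object stands in for Google's servers, and a real system would use signed messages and authenticated channels rather than an in-memory map.

    // Toy model of the barcode checkout flow. All names are made up for
    // illustration; this is not Google Wallet's actual design.
    case class Transaction(id: String, description: String, amountCents: Long)

    // Stands in for the payment processor's servers.
    object Processor {
      private var pending = Map.empty[String, Transaction]
      private var authorized = Set.empty[String]

      // Step 1: the seller uploads the transaction and displays its barcode.
      def upload(t: Transaction): Unit = pending += (t.id -> t)

      // Step 2: the buyer's phone scans the barcode, shows the details, and
      //         sends an authorization if the buyer approves.
      def authorize(id: String): Boolean =
        pending.contains(id) && { authorized += id; true }

      // Step 3: the seller's register polls for the authorization.
      def isAuthorized(id: String): Boolean = authorized(id)
    }

    object BarcodeCheckoutDemo extends App {
      val txn = Transaction("TXN-0001", "groceries", amountCents = 4237)
      Processor.upload(txn)                     // seller
      Processor.authorize(txn.id)               // buyer's phone, after approval
      println(Processor.isAuthorized(txn.id))   // seller sees: true
    }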

Why limit the feature to NFC devices? While NFC solutions look a little more convenient, barcodes don't look bad. Why not offer both?

Wednesday, May 25, 2011

Regehr on bounding the possible benefits of an idea

John Regehr posted a good thought on bounding the possible benefits of an idea before embarking on weeks or months of development:
A hammer I like to use when reviewing papers and PhD proposals is one that (lacking a good name) I call the “squeeze technique” and it applies to research that optimizes something. To squeeze an idea you ask:
  • How much of the benefit can be attained without the new idea?
  • If the new idea succeeds wildly, how much benefit can be attained?
  • How large is the gap between these two?

I am not sure how big a deal this is in academia. If you are happy to work at 2nd-tier or lower schools, then you probably need to execute well rather than choose good ideas. However, it's a very big deal if you want to produce a real improvement to computer science.

The first item is the KISS principle: keep it simple, stupid. Given that human effort is usually the most tightly constrained resource, simple solutions are very valuable. Often doing nothing at all will already work out reasonably well. More subtly, there is often a horribly crude solution to a problem that will work rather effectively. In such a case, be crude. There are better places to spend your time.

The second item is sometimes called a speed of light bound, due to the speed of light being so impressively unbeatable. You ask yourself how much an idea could help even if you expend years of effort and everything goes perfectly. In many cases the maximum benefit is not that high, so you may as well save your effort. A common example is in speeding up a system. Unless you are working on a major bottleneck, any amount of speedup will not help very much.
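
As a concrete illustration (my own, not from Regehr's post), this is just the usual Amdahl-style arithmetic: if the code you can improve accounts for a fraction f of total runtime, then even reducing its cost to zero caps the overall speedup at 1/(1-f).

    // Speed-of-light bound on a speedup project: even if the optimized part
    // becomes free, the overall speedup cannot exceed 1 / (1 - f).
    object SpeedOfLightBound extends App {
      def maxSpeedup(fractionOptimized: Double): Double =
        1.0 / (1.0 - fractionOptimized)

      // If the code you plan to spend months on is 20% of runtime,
      // the best possible overall win is 1.25x.
      println(f"Best case: ${maxSpeedup(0.20)}%.2fx")
    }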

Thursday, May 19, 2011

IRBs under review

There are several interesting blog entries up at blog.bioethics.gov concerning the ongoing presidential review of Institutional Review Boards (IRBs).

I liked this line:
“We pushed for an ethical reform of system, real oversight, and now we are left with this bureaucratic system, really a nitpicking monster,” Arras said, addressing Bayer. “And I am as stupefied as you are.”

I am not sure why this pattern would be stupefying. A great many things that people attempt to do don't work out as intended. IRBs are just one more for the list, albeit one that has lingered for decades.

I am not as sanguine as the reviewers about this conclusion on whether another "Guatemala" could happen:
“Of the many things that happened there, no, it could not happen again because of informed consent,” said Dafna Feinholz, chief of the Bioethics Section, Division of Ethics and Science and Technology, Sector for Social and Human Sciences, United Nations Educational, Scientific and Cultural Organization.

The idea is that since IRBs require informed consent of study participants, the Guatemala experiments could never again happen, because the study participants would know what is going on.

I hope so, but consider the following evil scenarios:
  • A wealthy autodidact negotiates directly with local authorities and runs the experiment on his own dime. No university is involved, so no IRB review even happens.
  • A university researcher learns about a disease outbreak in some part of the world. The researcher waits two years and then applies for a research grant to study the effects of the disease. Since the researcher did nothing for the first two years, there was nothing for the IRB to review.
  • Professor Muckety, holder of the Hubert OldnDusty Chair at BigGiantName University, announces a grand new experiment that he expects will cure cancer. He invites all of the up-and-coming faculty in his area to take part, and there will be numerous papers and great acclaim for all the participants. The IRB at BigGiantName U. is stacked with faculty who are totally brainwashed into thinking the experiment is for the greater good. Will they really take a stand against the project?
I would not be so sure that, despite all the efforts of IRBs, an evil experiment couldn't happen again.

Whenever something goes wrong, there is a natural reaction for everyone to yell, "DO SOMETHING!" IRBs are the result of such an outcry. They are there to protect human subjects, but I don't believe they are very effective at that. I believe that the MucketyMucks largely breeze through the red tape doing whatever they like, and instead we are staffing a bunch of bureaucrats to check that the smaller players filed form T19-B in triplicate, double spaced and typed on a manual typewriter.

Carving out a large exempt category would be a big improvement on the current mess. Surveys, observations, and other experiments with minimal opportunity for harm shouldn't need prior review.

Tuesday, May 17, 2011

It was just getting started...

There are many things wrong with California jumping in to regulate Facebook's privacy policies:
  • Facebook is a world-wide service, not a California service. Why is this up to California?
  • Facebook has over five hundred million users. That's five times the number of people who watch the Super Bowl. Whatever Facebook is doing, it must be pretty reasonable.
  • Social network sites tend to only last about five years before the next new hotness overtakes them. The odds are against Facebook lasting all that long.

All of these matter, but the last one is most peculiar to Internet services. I really want to see what the next social site is like, and the next site after that. I don't relish a long sequence of watered-down Facebook clones with all of their paperwork properly stamped and in order. How dreary.

Monday, May 16, 2011

A package universe for Lisp

Quicklisp is a package manager for Common Lisp that is popular among Lisp programmers. I'm happy to read that one of their secrets to success is quality control in the central distribution:
Quicklisp has a different approach:
  • Quicklisp does not use any external programs like gpg, tar, and gunzip.
  • Quicklisp stores copies of project archives in a central location (hosted on Amazon S3 for reliability and served with Amazon CloudFront for speed).
  • Quicklisp computes a lot of project info in advance. Projects that don't build or don't work nicely with other projects don't get included.
I would quibble with the order of their bullet points, because the last point is overwhelmingly important. It isn't a little side benefit to have a well-defined distribution and to test the members of that distribution against each other. On the contrary, it's a make-or-break property of the system if you want users to have some level of confidence in the code they're downloading.
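
As a toy illustration of what a well-defined distribution buys you (a sketch in Scala, not Quicklisp's actual Common Lisp implementation), the curator can mechanically check that every project's dependencies resolve inside the distribution before publishing it:

    // A distribution is only published if every dependency of every project
    // resolves to another project in the same distribution.
    object DistCheck extends App {
      case class Project(name: String, dependsOn: Set[String])

      def missingDeps(dist: Seq[Project]): Map[String, Set[String]] = {
        val names = dist.map(_.name).toSet
        dist.map(p => p.name -> (p.dependsOn -- names))
            .filter(_._2.nonEmpty)
            .toMap
      }

      val dist = Seq(
        Project("alexandria", Set.empty),
        Project("cl-ppcre", Set.empty),
        Project("hunchentoot", Set("cl-ppcre", "usocket"))  // usocket not included
      )

      // Prints: Map(hunchentoot -> Set(usocket))
      println(missingDeps(dist))
    }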

Wednesday, May 11, 2011

Sven Efftinge on Java-killer languages

I just ran across Sven Efftinge's fun post on what he wants to see in a Java-killer language.

My list would be something like: remove boilerplate for common coding arrangements, make things easier to understand, be compatible with existing Java code, and otherwise leave everything alone.

Sven has a more detailed list. Here are his bullet points and some thoughts on them:

1. Don't make unimportant changes. Gosh yes. Changing = to :=, or changing the keywords, adds a barrier to entry for anyone learning the language. Don't do it without a real benefit.

2. Static typing. Static typing is one of those decisions where the up-front choice is far from obvious and involves many intangibles, but once you choose, many of the follow-on choices are fairly clear to people who know the area. I think it is perfectly reasonable to have untyped languages on the JVM, and I think it's perfectly reasonable to have simply typed languages with generics only used for collections. Note, however, that the choice will strongly influence what sorts of applications the language is good for. Additionally, I would emphasize that today's type systems have gotten more convenient to use, so the niche for untyped languages is smaller than it used to be.

3. Don't touch generics. Java's type system is long in the tooth. While its basic parametric types are fine, there are parts that are simply bad: raw types, wildcards, arrays, and primitive types. If you are developing a Java killer, improving the type system is one of the ways you can improve the language. You'd be crazy not to consider it.
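
As one concrete point in that design space (a Scala sketch, not a claim about what the right fix is), declaration-site variance lets the library author state once that a type is covariant, instead of every caller writing a use-site wildcard such as Box<? extends Fruit>:

    // Box is declared covariant in A, so a Box[Apple] is usable wherever a
    // Box[Fruit] is expected, with no wildcard at the use site.
    class Box[+A](val value: A)

    object VarianceDemo extends App {
      class Fruit
      class Apple extends Fruit

      val apples: Box[Apple] = new Box(new Apple)
      val fruit: Box[Fruit] = apples   // accepted: Box is covariant
      println(fruit.value)
    }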

4. Use type inference. Absolutely. This is a large source of boilerplate in Java.
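
A quick example of the boilerplate at stake, using Scala's local inference (any inference scheme would do): the type is recovered from the right-hand side rather than spelled out twice, as in Java's Map<String, List<Integer>> scores = new HashMap<String, List<Integer>>().

    object InferenceDemo extends App {
      // The type Map[String, List[Int]] is inferred from the initializer.
      val scores = Map("alice" -> List(90, 85), "bob" -> List(72))
      println(scores("alice").sum)   // 175
    }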

5. Care about tool support (IDE). I agree. When I joined the Scala project in 2005, I was glad to see that the core team was working on a number of tools, including: scaladoc, the scala command (repl, script runner, and object runner), scalap, ant tasks, and the Eclipse plugin. Nowadays there are even more tools, including an excellent IntelliJ plugin and integration with a larger number of build tools.

In a nutshell, making programmers productive requires more than a good programming language. There are huge benefits to good tools and rich libraries. The overall productivity of a programmer is something like the product of language, tools, and libraries.

6. Closures. Yes, please. The main historical reason to leave them out was the lack of garbage collection. I don't understand why Java has been so slow to adopt them, and I was terribly saddened to hear Guy Steele at OOPSLA 1998 pronounce that Java didn't look like it really needed closures. It was surreal given the content of the talk he had given just minutes before.
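
For readers who haven't used them, here is the kind of code that closures make trivial (a Scala example; the same would apply to any Java killer):

    object ClosureDemo extends App {
      // makeFilter returns a function that closes over the local threshold.
      def makeFilter(threshold: Int): Int => Boolean =
        n => n > threshold

      val bigEnough = makeFilter(10)
      println(List(3, 12, 8, 40).filter(bigEnough))   // List(12, 40)
    }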

7. Get rid of old unused concepts. Yes, in general. However, this can be hard to do while also maintaining compatibility and generally letting people write things in a Java way if they want. For the specific things Sven lists: totally agreed about fall-through switch; totally agreed about goto, but it's not in Java anyway; not so sure about bit operations. Bit operations are useful on the JVM, and besides, Java's numerics work reasonably already. Better to focus on areas where larger wins are possible.
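
For comparison, here is what a non-fall-through construct can look like (Scala's match, offered only as one example of the alternative): each case is a self-contained expression, and grouping labels is explicit, so the forgotten-break bug cannot happen.

    object MatchDemo extends App {
      def describe(day: String): String = day match {
        case "sat" | "sun" => "weekend"     // explicit grouping, no fall-through
        case "mon"         => "back to work"
        case _             => "an ordinary weekday"
      }
      println(describe("sun"))   // weekend
    }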

Free linking on the web?

Lauren Weinstein has a great article up on the efforts of governments around the world to make Internet material disappear. One tactic for this is to go after search engines:
In Europe, one example of this is the so-called Spanish “right to be forgotten” -- currently taking the form of officials in Spain demanding that Google remove specific search results from their global listings that “offend” (one way or another) particular plaintiffs.

I agree with Weinstein's conclusion:
We are at the crossroads. Now is the time when we must decide if the Internet will continue its role as the most effective tool for freedom of information in human history, or if it will be adulterated into a mechanism for the suppression of knowledge, a means to subjugate populations with a degree of effectiveness that dictators and tyrants past could not even have imagined in their wildest dreams of domination.

The U.S. is in a position to affect that future. Currently, it is gradually inserting censorship backdoors into the Internet at the request of its music and film industries. It's not worth the cost. I freely admit that Hollywood is wonderful, but we should remember that Broadway is pretty cool, too. Unlike Hollywood, Broadway has business models that don't require locking down the Internet.

Tuesday, May 10, 2011

Externally useful computer science results

John Regehr asks what results in computer science would be directly useful outside the field. I particularly like his description of his motivation:
An idea I wanted to explore is that a piece of research is useless precisely when it has no transitive bearing on any externally relevant open problem.
A corollary of this rule is that the likelihood of a research program ever being externally useful is exponentially decreased by the number of fundamental challenges to the approach. Whenever I hear about a project relying on synchronous RPC, my mental estimate of likely external usability goes down tremendously. As well, there is the familiar case of literary deconstruction.

Regehr proceeds from here to speculate on what results in computer science would truly be useful. I like most of Regehr's list--go read it! I would quibble about artificial intelligence being directly useful; it would be better to be more specific. Is Watson an AI? It's not all that much like human intelligence, so perhaps it's not really AI, but it is a real tour de force of externally useful computer science.

One thing not on the list is better productivity for software developers, including tools, programming languages, and operating systems. When software developers get more done, faster and more reliably, anything that includes a computer can be built more quickly and cheaply.