
Wednesday, August 5, 2015

Ken Clark on shame mobs

Ken Clark has posted the top seven things he likes about shame mobs. Here's a taste:

5) Internet shame mobs weigh the evidence carefully and deliberately before attacking, so they only happen to people who deserve them. [...] 3) Internet shame mobs always make sure that the punishment is proportional to the crime.

There's a larger phenomenon here where problematic information spreads faster than the correction to it. If it spreads fast enough, then it can even pass a tipping point where it becomes very hard to get a hearing to object to any part of it. Everyone has already heard the idea from, well, everyone else, so they quickly dismiss anyone who objects without even really considering it.

The key to stopping such memetic chain reactions is to apply some filtering before propagating information that you read. It's still early days for the Internet, though, and we are all still learning to inoculate ourselves against being the wrong kind of carrier.

There is some reason to have hope. Chain emails used to flourish, but are now mostly stamped out. In their heyday, 15-20 years ago, it was fairly common to open your mail program and see numerous messages saying something very exciting, and furthermore that the message should be passed on to everyone you know as fast as possible. Nowadays, the people I interact with simply delete such emails. If an email explicitly says that it should be forwarded to everyone you know, then it triggers something like an antibody response. Such an email starts looking very bogus, and it gets deleted quickly, possibly even flagged by the email provider.

Intriguingly, people would likely not have developed that response had they not gone through the misery of dealing with chain emails earlier on. There are clear parallels to viruses and to building up antibodies!

Shame mobs are one place where this kind of chain reaction still goes on, though. I'm not entirely sure why it happens. In part, people just want to defend an idea, and they are happy to use real people as examples no matter the consequences. In part, people just enjoy being part of a mob. I hope that shame mobs go the same way as the chain email. We shall see.

Wednesday, January 14, 2015

Surveillance states are possible

While clamping down on private encryption is bad policy, both for the economy and for privacy, I don't think it's technically impossible to implement. Let me draw a couple of comparisons to show why.


As background, here is Cory Doctorow explaining, like many other commenters, that the Internet is too wild and wooly for major governments to possibly implement widespread surveillance:

For David Cameron's proposal to work, he will need to stop Britons from installing software that comes from software creators who are out of his jurisdiction. The very best in secure communications are already free/open source projects, maintained by thousands of independent programmers around the world. They are widely available, and thanks to things like cryptographic signing, it is possible to download these packages from any server in the world (not just big ones like Github) and verify, with a very high degree of confidence, that the software you've downloaded hasn't been tampered with.

With cellular phones, any phone that uses the relevant chunks of bandwidth is legally required to use certain protocols that are registered with the government. This has been bad economically, in that the telephone network has developed much more slowly than the relatively unregulated Internet. However, being bad economically has never exactly stopped rules from being put into place.

Yes, you can rig up a wireless network in your garage that breaks the rules. However, as soon as you try to use it over a wide geographic region, you're going to be relatively easy to catch. You will have to either broadcast a strong signal, or make use of the existing telephone backbone, or both.

To draw another comparison, consider the income tax. Income tax is easy to avoid with small operations, because you can just pay cash under the table. However, larger operations have to file a variety of paperwork, and the interlocking paperwork is what will get you. The more you take part in the above-ground economy, the harder it is to spin a big enough web of lies to get out of your taxes.

To get back to Internet protocols, it will certainly always be possible to break the rules on an isolated darknet you assemble in your garage. However, as soon as you send packets across the Internet backbone, any use of unregistered protocols is going to be very easy to detect.
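
To make that concrete, here is a toy sketch of the kind of traffic classification a backbone filter could apply. The protocol signatures and the "flag for audit" policy are my own illustration, not any real system's rules:

    # Toy protocol classifier (Python). Signatures are illustrative only.
    REGISTERED_PREFIXES = [
        b"\x16\x03",        # TLS handshake record
        b"GET ", b"POST ",  # plaintext HTTP
        b"SSH-2.0-",        # SSH banner
    ]

    def looks_registered(payload: bytes) -> bool:
        return any(payload.startswith(p) for p in REGISTERED_PREFIXES)

    def inspect(payload: bytes) -> str:
        # A real filter would track whole flows, not single payloads.
        return "ok" if looks_registered(payload) else "flag for audit"

A classifier this crude would misfire constantly, but it shows why hiding an unregistered protocol in plain sight on the backbone is hard: your packets simply don't look like anything on the approved list.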

To rub the point in further, don't forget that the authorities have no requirement to go after everyone they detect doing something fishy. If they are anything like the American tax service, they'll randomly (or politically...) select people to target, and those people will then be required to undergo an audit at their own expense. If they survive the audit, the tax service just says "I'm sorry" and moves on to the next victim. Thanks to selective enforcement, law enforcement does not need to catch everyone using illegal encryption for the ban to have teeth.

Of course all this is bad for the economy and for humanity's development at large. Don't oppose a cryptography clampdown because it's technically impossible, or you will look just as silly as the people who say DNS takedowns are technically impossible. Rather, oppose a cryptography clampdown because we don't want to live like that. We want to have private communications, and we want to allow innovation on the Internet. It's brand new, and if we clamp down on it, it will ossify in its current state the same way that the telephone network did.

Sunday, November 23, 2014

Is this the right server?

It's nice to see someone else reach the following conclusion:

"For those familiar with SSH, you should realize that public key pinning is nearly identical to SSH's StrictHostKeyChecking option. SSH had it right the entire time, and the rest of the world is beginning to realize the virtues of directly identifying a host or service by its public key."

Verifying a TLS certificate via the CA hierarchy is better than nothing, but it's not really all that reassuring. Approximately, what it tells you is that there is a chain of certification leading back to one or more root authorities, which for some reason we all try not to think about too much are granted the ultimate authority on the legitimacy of web sites. I say "approximately", because fancier TLS verifiers can and do incorporate additional information.

The root authorities are too numerous to really have faith in, and they have been compromised in the past. In general, they and their delegates have little incentive to be careful about what they are certifying, because the entities they certify are also their source of income.

You can get better reliability in key verification if you use information that is based on the interactions of the actual participants, rather than on any form of third-party security databases. Let me describe three examples of that.


Pin the key

For many applications, a remotely installed application only needs to communicate with a handful of servers back at a central site you control. In such a case, it works well to pin the public keys of those servers.

The page quoted above advocates embedding the public key directly in the application. This is an extremely reliable way of obtaining the correct key. You can embed the key in the app's binary as part of your build system, and then ship the whole bundle over the web, the app store, or however else you are transmitting it to the platform it will run on. Given such a high level of reliability, there is little benefit from pulling in the CA hierarchy.

As linked above, you can implement pinning today. It appears to be tricky manual work, though, rather than something that is built into the framework. As well, you don't get to ignore the CA hierarchy by doing this sort of thing. So long as you use standard SSL libraries, you still have to make sure that your key validates in the standard ways required by SSL.
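
To make the idea concrete, here is a minimal sketch in Python of what a pin check can look like, assuming we pin the SHA-256 digest of the server's certificate; real deployments often pin the public key (SPKI) rather than the whole certificate:

    import hashlib
    import socket
    import ssl

    # Hypothetical pin, embedded in the app at build time: the hex SHA-256
    # digest of the server's certificate in DER form.
    PINNED_SHA256 = "0123abcd..."  # placeholder value

    def verify_pinned_server(host, port=443):
        # Standard CA validation still runs; the pin is an extra check on top.
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der_cert = tls.getpeercert(binary_form=True)
        digest = hashlib.sha256(der_cert).hexdigest()
        if digest != PINNED_SHA256:
            raise ssl.SSLError(host + ": certificate does not match embedded pin")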


Associate keys with links

The Y property deserves wider recognition, given how important hyperlinks are in today's world. Put simply, if someone gives you a hyperlink, and you follow that hyperlink, you want to reliably arrive at the same destination that the sender wanted you to get to. That is not what today's URLs give you.

The key to achieving this property is that whenever you transmit a URL, you also transmit a hash of the expected host key. There are many ways to do this, including the ones described at the above hyperlink (assuming you see the same site I am looking at as I write this!). Just to give a very simple example, it could be as simple as using URLs of the following form:

     https://hash-ABC123.foo.bar/sub/dir/foo.html

This particular example is interesting for being backward compatible with software that doesn't know what the hash means.
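
Here is a sketch of how a client could check such a URL; the hash-ABC123 label format and the prefix-match rule are assumptions for illustration, not an established standard:

    import hashlib
    import re
    import socket
    import ssl
    from urllib.parse import urlparse

    def check_yurl(url):
        # Extract the expected hash from the hypothetical hash-XXXX label.
        host = urlparse(url).hostname
        match = re.match(r"hash-([0-9a-f]+)\.", host, re.IGNORECASE)
        if match is None:
            raise ValueError("URL carries no embedded key hash")
        expected_prefix = match.group(1).lower()

        # Fetch the server's actual certificate and compare digests.
        ctx = ssl.create_default_context()
        with socket.create_connection((host, 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der_cert = tls.getpeercert(binary_form=True)
        actual = hashlib.sha256(der_cert).hexdigest()
        if not actual.startswith(expected_prefix):
            raise ssl.SSLError("server key does not match the hash in the URL")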

I don't fully know why this problem is left languishing. Part of it is probably that people are resting too easy on the bad assumption that the CA hierarchy has us covered. There's a funny mental bias where if we know nothing about a subject, and we see smart people working on it, the more optimistic of us just assume that it works well. Another part of the answer is that the core protocols of the world-wide web are implemented in many disparate code bases; SSH benefits from having an authoritative version of both the client and the server, especially in its early days.

As things stand, you can implement "YURLs" for your own software, but they won't work as desired in standard web browsers. Even with custom software, they will only work among organizations that use the same YURL scheme. This approach looks workable to me, but it requires growing the protocols and adopting them in the major browsers.


Repeat visits

One last source of useful information is the user's own previous interactions with a given site. Whenever you visit a site, it's worth caching the key for future reference. If you visit the "same" site again but the key has changed, then you should be extremely suspicious. Either the previous site was wrong, or the new one is. You don't know which one is which, but you know something is wrong.

Think how nice it would be if you tried to log into your bank account and the browser said, "This is a site you've never seen before. Proceed?"

You can get that already if you use pet names, which have been implemented as an experimental browser extension. It would be great if web browsers incorporated functionality like this, for example turning the URL bar and browser frame yellow when a site's certificate has never been seen before. Each browser can add this sort of functionality independently, as a matter of implementation quality.

In your own software, you can implement key memory using the same techniques as for key pinning, as described above.
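
For instance, a trust-on-first-use cache in the style of SSH's known_hosts file might look like the following sketch; the file location and format are my own choices:

    import json
    import os

    STORE = os.path.expanduser("~/.known_sites.json")  # hypothetical cache file

    def check_remembered_key(host, fingerprint):
        # fingerprint: hex SHA-256 of the server's certificate, obtained
        # the same way as in the pinning sketch above.
        known = {}
        if os.path.exists(STORE):
            with open(STORE) as f:
                known = json.load(f)
        if host not in known:
            known[host] = fingerprint          # trust on first use
            with open(STORE, "w") as f:
                json.dump(known, f)
            return "new site -- never seen before; proceed?"
        if known[host] != fingerprint:
            return "KEY CHANGED -- be extremely suspicious"
        return "same key as last visit"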


Key rotation

Any real cryptography system needs to deal with key revocation and with upgrading to new keys. I have intentionally left those topics out to keep the discussion simple, but I do believe they can be worked into the above systems. It's important to have a way to sign an official certificate upgrade, so that browsers can correlate new certificates with old ones during a graceful phase-in period. It's also important to have some kind of channel for revoking a certificate, in case one has been compromised.

For web applications and for mobile phone applications, you can implement key rotation by forcing the application to upgrade itself. Include the new keys in the newly upgraded version.
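
As a sketch of what signing an upgrade can look like, using the pyca/cryptography library and Ed25519 keys; the "rotate-to:" framing is a made-up convention for illustration:

    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    # Server side: the outgoing key signs the incoming key's public half,
    # producing a rotation record that clients can verify.
    old_key = ed25519.Ed25519PrivateKey.generate()
    new_key = ed25519.Ed25519PrivateKey.generate()

    new_pub = new_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    rotation_record = b"rotate-to:" + new_pub
    signature = old_key.sign(rotation_record)

    # Client side: it already trusts old_key's public half (pinned or cached),
    # so it can verify the handover and start trusting new_pub.
    old_key.public_key().verify(signature, rotation_record)  # raises if forged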

Thursday, November 20, 2014

FCC inches away from neutrality

The FCC’s latest proposal for network neutrality rules creates space for broadband carriers to offer “paid prioritization” services.[11] While the sale of such prioritization has been characterized as a stark and simple sorting into “fast” and “slow” traffic lanes,[12] the offering is somewhat more subtle: a paid prioritization service allows broadband carriers to charge content providers for priority when allocating the network’s shared resources, including the potentially scarce bandwidth over the last-mile connection between the Internet and an individual broadband subscriber. Such allocation has historically been determined by detached—or “neutral”—algorithms. The Commission’s newly proposed rules, however, would allow carriers to subject this allocation to a content provider’s ability and willingness to pay.

That's from a note in the Stanford Law Review a few months ago. I think this evolution in the FCC's approach will benefit the public.

It seems important to consider realistic developments of the Internet. Here's a thought experiment I've used for a long time, and that seems to be happening in practice. Try to imagine what goes wrong if a site like YouTube or Netflix pays--with its own money--to install some extra network infrastructure in your neighborhood, but only allows its own packets to go across that infrastructure. Doing so is a flagrant violation of network neutrality, because packets from one site will get to you faster than packets from another site. Yet, I can't see the harm. It seems like a helpful development, and just the sort of thing that might get squashed by an overly idealistic commitment to neutrality.

As a follow-on question, what changes if instead of Netflix building the infrastructure itself, it pays Comcast to do it? It's the same from a consumer's view as before, only now the companies in question are probably saving money. Thus, it's even better for the general public, yet it's an even more flagrant violation of network neutrality. In this scenario, Netflix is straight-up paying for better access.

It seems that the FCC now agrees with that general reasoning. They not only support content delivery networks in general, but now they are going to allow generic ISPs to provide their own prioritized access to sites that pay a higher price for it.

I believe "neutrality" is not the best precise goal to go for. Rather, it's better to think about a more general notion of anti-trust.

Saturday, January 18, 2014

Is Internet access a utility?

I forwarded a link about Network Neutrality to Google Plus, and it got a lot of comments about how Internet access should be treated like a utility. I think that's a reasonable perspective to start with. What we all want, I think, is to have Internet access itself be a baseline service, and that Internet services on top of it have fierce competition.

In addition to considering the commonalities between Internet access and utilities, we should also note the differences.

One difference is that a utility is meant for a natural monopoly, but Internet access is not monopolized. You can only put one road in any physical location, and I will presume for the sake of argument that you don't want multiple power grids in the same locale. Internet access is not a monopoly, though! At least in Atlanta, we have cable, DSL, WiMax, and several cellular providers. We have more high-speed Internet providers than supermarket chains.

Another difference is that utilities lock down technology change to a snail's pace. With roads and power grids, the technology already provides near-maximum service for what is possible, so this doesn't matter. With telephony, progress has been locked down for decades, and I think we all lost out because of that; the telephone network could have been providing Skype-like services a long time ago, but as a utility they kept doing things the same way as always. Meanwhile, the Internet is changing rapidly. It would be really bad to stop progress on Internet access right now, the way we did with telephony several decades ago.

I believe a better model than utilities would be supermarkets. Like Internet providers, supermarkets carry a number of products that are mostly produced by some other company. I think it has gone well for everyone that supermarkets have tremendous freedom in their content selection, pricing, promotional activities, hours, floor layout, buggies, and checkout technology.

In contrast to what some commenters ask, I do not have any strong expectation about what Comcast will or won't try. I would, however, like them to be free to experiment. I've already switched from Comcast and don't even use them right now. If Comcast is locked into their current behavior, then that does nothing for me good or bad. If they can experiment, maybe they will come up with something better.

In principle, I know that smart people disagree on this, but I currently don't see anything fundamentally wrong with traffic shaping. If my neighbor is downloading erotica 24/7, then I think it is reasonable that Comcast give my Game of Thrones episode higher priority. The fact that Comcast has implemented this badly in the past is troubling, but that doesn't mean the next attempt won't work better. I'd like them to be free to try.

Monday, November 11, 2013

It's ad targeting, isn't it?

I see continued assumptions by people that the real names policies of Facebook and Google Plus have actual teeth.

I've posted before on whether real names are truly enforced on Facebook, and it looks like the answer there is no. My impression is that it's not working great on Plus, either, although there have been some famous botched efforts.

The rationale that it improves the level of discussion seems thin and inaccurate. There are too many legitimate reasons to participate in a forum without wanting it to pop up when your boss does a Google search on your name.

As far as I can tell, the main purpose of a real names policy is to appease advertisers. Advertisers feel, probably correctly, that more information about users will improve the accuracy of ad targeting. It's weird, though, because nobody seems to talk about it that way. It's analogous to the exhortations in a hotel room that it's good for the environment to avoid washing so many towels. Ummm, I'm pretty sure it's more about the money.

Thursday, January 31, 2013

The "magic moment" for IPv6

The Internet has undergone many large changes in the protocols it uses. A few examples are: the use of MIME email, the replacement of Gopher by HTTP, and the use of gzip compression within HTTP. In all three of these examples, the designers of the protocol upgrades were careful to provide a transition plan. In two out of the three examples (sorry, Gopher), the old protocol is still practical to use today, if you can live with its limitations.

Things are going differently for IPv6. In thinking about why, I like Dan Bernstein's description of a "magic moment" for IPv6. It goes like this:

The magic moment for IPv6 will be the moment when people can start relying on public IPv6 addresses as replacements for public IPv4 addresses. That's the moment when the Internet will no longer be threatened by the IPv4 address crunch.

Note that Dan focuses on the address crunch. Despite claims to the contrary, I believe most people are interested in IPv6 for its very large address space. While there are other cool things in IPv6, such as built-in encryption and simplified fragmentation, they are not enough that people would continue to lobby for IPv6 after all these years. The address crunch is where it's at.

While I like Dan's concept of a magic moment, I think the above quote asks for too much. There are easier magic moments for individual kinds of nodes on the Internet, and some might well happen before others. Let me focus on two particular kinds of Internet nodes: public web sites and home Internet users.

How close is the magic moment for web sites? Well, web servers can discard their IPv4 addresses just as soon as the bulk of the people connecting to them have IPv6 connectivity. I do not know how to gather data on that, but as a single data point, I have good networking hardware but cannot personally connect to IPv6 sites. My reason is both mundane and common: I am behind a Linksys NATing router, and that router does not support IPv6. Even if it did, it does not support any sort of tunneling that would allow my local computer to connect to an IPv6-only web server. To the extent people are using plain old Linksys routers, we are a long way away from the magic moment for web servers.

How about for home users? Well, it's the other way around for home users: home users can switch once the majority of public web sites have an IPv6 address. This status is easier to gather data on. I just looked up the top ten web sites (according to Alexa's Top 500 Web Sites) and checked them with a publicly available IPv6 validation site (http://ipv6-test.com/validate.php). Of the top ten web sites, only four can be reached from an IPv6-only client: Google, Facebook, YouTube, and Wikipedia. The other six still require IPv4: Yahoo, Baidu, Live.com, Amazon, QQ.com, and Twitter. As things stand, we are also a long way from when home users can switch to IPv6-only.
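
This kind of spot check is easy to script. The sketch below asks the resolver for an AAAA record, which approximates (but does not prove) reachability from an IPv6-only client:

    import socket

    SITES = ["google.com", "facebook.com", "youtube.com", "wikipedia.org",
             "yahoo.com", "amazon.com", "twitter.com"]

    for site in SITES:
        try:
            socket.getaddrinfo(site, 443, socket.AF_INET6)  # IPv6 results only
            print(site, "advertises an IPv6 address")
        except socket.gaierror:
            print(site, "appears to be IPv4-only")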

Overall, this was a cursory analysis, but I think these "magic moments" are a helpful framework for thinking about the IPv6 changeover. Unfortunately, this framework currently indicates that we are nowhere close.

Saturday, December 29, 2012

Does IPv6 mean the end of NAT?

I frequently encounter a casual mention that, with the larger address space in IPv6, Network Address Translation (NAT)--a mainstay of wireless routers everywhere--will go away. I don't think so. There are numerous reasons to embrace path-based routing, and I believe the anti-NAT folks are myopically focusing on just one of them.

As background, what a NAT router does is allow multiplexing multiple private IP addresses behind a single, public IP address. From outside the subnet, it looks like the NAT router is a single machine. From inside the subnet, there are a number of machines, each with its own IP address. The NAT router allows communication between the inside and outside worlds by swizzling IP addresses and ports as connections go through the router. That's why it is a "net address translator" -- it translates between public IPs and private IPs.
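
As a toy illustration of that swizzling (the addresses and the port-allocation policy are made up, and real routers also track protocol state and timeouts):

    ROUTER_PUBLIC_IP = "203.0.113.7"  # made-up public address

    nat_table = {}      # (private_ip, private_port) -> public_port
    reverse_table = {}  # public_port -> (private_ip, private_port)
    next_public_port = 40000

    def translate_outbound(private_ip, private_port):
        """Rewrite an outgoing packet's source to the router's public address."""
        global next_public_port
        key = (private_ip, private_port)
        if key not in nat_table:
            nat_table[key] = next_public_port
            reverse_table[next_public_port] = key
            next_public_port += 1
        return ROUTER_PUBLIC_IP, nat_table[key]

    def translate_inbound(public_port):
        """Route a reply back to whichever private host opened the flow.
        Unsolicited packets miss the table (KeyError) and are dropped."""
        return reverse_table[public_port]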

My first encounter with NAT was to connect multiple machines to a residential ISP. It was either a cable company or a phone company; I forget which. The ISP in question wanted to charge extra for each device connected within the residential network. That is, if you connect two computers, you should pay more than if you connect one computer. I felt, and still feel, that this is a poor business arrangement. The ISP should concern itself with where I impose costs on it, which is via bandwidth. If I take a print server from one big box and move it onto its own smaller computer, then I need a new IP address, but that shouldn't matter at all to the ISP. By using NAT--in my case, Linux's "masquerading" support--the ISP doesn't even know.

This example broadens to a concern one could call privacy. What an organization does within its own network is its own business. Its communication with the outside world should be through pre-agreed protocols that, to the extent feasible, do not divulge decisions that are internal to the organization. It shouldn't matter to the general public whether each resident has their own machine, or whether they are sharing, or whether the residents have all bought iPads to augment their other devices.

For larger organizations, privacy leads to security. If you want to break into an organization's computer infrastructure, one of the first things you want to do is feel out the topology of the network. Unless you use NAT at the boundary between your organization's network and the general Internet, you are exposing your internal network topology to the world. You are giving an attacker an unnecessary leg up.

You could also view these concerns from the point of view of modularity. The public network protocol of an organization is an interface. The internal decisions within the organization are an implementation. If you want everything to hook up reliably, then components should depend on interfaces, not implementations.

Given these concerns, I see no reason to expect NAT to go away, even given an Internet with a larger address space. It's just sensible network design. Moreover, I wish that the IETF would put more effort into direct support for NAT. In particular, the NAT of today is unnecessarily weak when it comes to computers behind different NATing routers making direct connections with each other.

It is an understatement to say that not everyone agrees with me. Vint Cerf gave an interview earlier this year where he repeatedly expressed disdain for NAT.

"But people had not run out of IPv4 and NAT boxes [network address translation lets multiple devices share a single IP address] were around (ugh), so the delay is understandable but inexcusable."

Here we see what I presume is Cerf's main viewpoint on NAT: it's an ugly mechanism that is mainly used to avoid address exhaustion.

One of the benefits of IPv6 is a more direct architecture that's not obfuscated by the address-sharing of network address translation (NAT). How will that change the Internet? And how seriously should we take security concerns of those who like to have that NAT as a layer of defense?

Machine to machine [communication] will be facilitated by IPv6. Security is important; NAT is not a security measure in any real sense. Strong, end-to-end authentication and encryption are needed. Two-factor passwords also ([which use] one-time passwords).

I respectfully disagree with the comment about security. I suspect his point of view is that you can just as well use firewall rules to block incoming connections. Speaking as someone who has set up multiple sets of firewall rules, I can attest that they are fiddly and error prone. You get a much more reliable guarantee against incoming connections if you use a NAT router.

In parting, let me note a comment in the same interview:

Might it have been possible to engineer some better forwards compatibility into IPv4 or better backwards compatibility into IPv6 to make this transition easier? We might have used an option field in IPv4 to achieve the desired effect, but at the time options were slow to process, and in any case we would have to touch the code in every host to get the option to be processes... Every IPv4 and IPv6 packet can have fields in the packet that are optional -- but that carry additional information (e.g. for security)... We concluded (perhaps wrongly) that if we were going to touch every host anyway we should design an efficient new protocol that could be executed as the mainline code rather than options.

It is not too late.

Sunday, January 22, 2012

DNS takedowns alive and well

I wrote earlier that PROTECT-IP and SOPA are getting relatively too much attention. Specifically, I mused about this problem:
First, DNS takedowns are already happening under existing law. For example, the American FBI has been taking down DNS names for poker websites in advance of a trial. SOPA and PROTECT-IP merely extend the tendrils rather than starting something new.

Today I read news that indeed, the FBI has taken down the DNS name for Megaupload.com. I'm not sure the American public is in tune with precisely what its federal government is doing.

The news has sad aspects beyond the use of DNS takedowns. A few of them leapt out at me:

  • There has not yet been a trial. If I ask most Americans how their legal system works, I expect one of the first things people would say is that, in America, people are innocent until proven guilty.
  • There is twenty years of jail time associated with the charges. Isn't that a little harsh for copyright violations? I think of jail as how you penalize murderers, arsonists, and others who are going to be a threat to the public if they are left loose. Intellectual property violations somehow seem to not make the cut.
  • It's an American law, but New Zealand police arrested some of the defendants.
  • The overall demeanor of the authorities comes off as rather thuggish. For example, they seized all manner of unrelated assets of the defendants, including their cars.
I am glad SOPA and PROTECT-IP went down. However, much of what protesters complained about is already happening.

Monday, January 2, 2012

DNS takedowns under fire in the U.S.

I get the impression that SOPA, the latest version of a U.S. bill to enable DNS takedowns of non-American web sites, is under a lot of pressure. A major blow to its support is that the major gaming console companies are backing out.

I am certainly heartened. However, the problem is still very real, for at least two reasons.

First, DNS takedowns are already happening under existing law. For example, the American FBI has been taking down DNS names for poker websites in advance of a trial. SOPA and PROTECT-IP merely extend the tendrils rather than starting something new.

Second, this bill won't be the last. So long as the Internet uses DNS, there is a vulnerability built right into the protocols. Secure DNS doesn't make it any better; on the contrary, it hands the keys to the DNS over to national governments.

The only long term way to fix this problem is to adjust the protocols to avoid a single point of vulnerability. It requires a new way to name resources on the Internet.

Saturday, December 17, 2011

Blizzard embraces pseudonyms

Blizzard's Battle.net service lets you use the same name on multiple games and on multiple servers within the same game. Historically, they required you to use a "real name" (in their case, a name on a credit card). This week they announced that they are deploying a new system without that requirement:
A BattleTag is a unified, player-chosen nickname that will identify you across all of Battle.net – in Blizzard Entertainment games, on our websites, and in our community forums. Similar to Real ID, BattleTags will give players on Battle.net a new way to find and chat with friends they've met in-game, form friendships, form groups, and stay connected across multiple Blizzard Entertainment games. BattleTags will also provide a new option for displaying public profiles.[...] You can use any name you wish, as long as it adheres to the BattleTag Naming Policy.
I am glad they have seen the light. There are all sorts of problems with giving away a real [sic] name within a game.

From a technical perspective, the tradeoffs they choose for the BattleTag names are interesting and strike me as solid:

If my BattleTag isn't unique, what makes me uniquely identifiable? How will I know I'm adding the right friend to my friends list? Each BattleTag is automatically assigned a 4-digit BattleTag code, which combines with your chosen name to create a unique identifier (e.g. AwesomeGnome#3592).
I'll go out on a limb and assume that the user interfaces that use this facility will indicate when you are talking to someone on your friends list. In that case, the system will be much like a pet names system, just with every name including a reasonable default nickname. When working within such UIs, they will achieve all of Zooko's Triangle. When working outside it, the security aspect will be weaker, because attackers can make phony accounts with a victim's nickname but a different numeric code. That's probably not important in practice, so long as all major activities happen within a good UI such as one within one of Blizzard's video games.

Regarding pseudonymity, I have to agree with the commenters on the above post. Why not do it this way to begin with and not bother with RealID? They can still support real [sic] names for people who want them, simply by putting a star next to the names of people whose online handle matches their credit card. Going forward, now that they've done this right, why not simply scrap RealID? It looks like high-level political face-saving. You have to read closely in the announcement even to realize what they are talking about.

Monday, November 28, 2011

danah boyd on pseudonyms

I'm late to notice, but danah boyd has a good article up on the case for pseudonymity. She emphasizes the safety issues, which I certainly agree about.

Something I hadn't fully processed is that many people are using Facebook as an example that real names work. Perhaps this argument is so popular because the Zuckerbergs have publicly emphasized it. At any rate, it's a weak argument. For one thing, quite a number of Facebook profiles are using pseudonyms. See Lady Gaga, Ghostcrawler, and Anne Rice. If the Zuckerbergs really are trying to shut down pseudonyms, they're doing a terrible job of it. Another reason is that, as Boyd points out, Facebook is unusual for starting as a close-knit group of college grads. The membership it grew from is a group of people relatively uninterested in pseudonyms.

Reading the comments on boyd's post, it appears that the main reason people oppose pseudonyms is the hope that banning them will improve the level of conversation in a forum. I continue to be mystified by this perspective, but it does appear to be what is driving most opponents of pseudonyms. I just don't get it. Partially I'm just used to an Internet full of pseudonyms. Partially it's just too easy to think of perfectly legitimate activities that wouldn't be good to pop up when someone does a web search on "Lex Spoon". People interested in that stuff should instead search for Saliacious Leximus. They'll avoid all the nerdy computer talk and get straight to the goods they are looking for.

Overall, pseudonyms appear to be one of those divides where people on each side have a hard time talking across the gulf. Apparently it is perfectly obvious to many people that if Google Plus and Facebook embraced pseudonyms, then their members would get overwhelmed by harassment and spam. Personally, I don't even understand the supposed threat. Why would I circle or friend a complete stranger? If I had, why wouldn't I simply unfriend them?

Friday, October 7, 2011

What every guide says about child safety on the Internet

At the same time that Blizzard and Google are fighting for real names only on the Internet, children's advocacy groups are fighting for exactly the opposite. Take a look at the top hits that come up if you do a web search on "advice to children online".

First there is ChildLine, a site targeted directly at children. Here is the entirety of their guide on how to stay safe:

How do I stay safe when playing games online?
  • Don’t use any personal information that might identify you. This could be your full name, home address, phone number or the name of your school.
  • Use a nickname instead of your real name and chose one that will not attract the wrong type of attention.
  • Look out for your mates. Treat your friend’s personal details like you would your own and do not share their details with others.
Not only do they suggest not using real names, it is pretty much the only advice they give.

Next is Safe Kids, a site targeted at parents. This site has a more detailed guide on things you can do to help a child stay safe. Here is their number one suggestion under "Guidelines for parents":

Never give out identifying information—home address, school name, or telephone number—in a public message such as chat or newsgroups, and be sure you’re dealing with someone both you and your children know and trust before giving out this information via E-mail. Think carefully before revealing any personal information such as age, financial information, or marital status. Do not post photographs of your children in newsgroups or on web sites that are available to the public. Consider using a pseudonym, avoid listing your child’s name and E-mail address in any public directories and profiles, and find out about your ISP’s privacy policies and exercise your options for how your personal information may be used.

Third up is BullyingUK, a site dedicated to bullying in particular rather than child abuse in general. Here are their first two pieces of advice for Internet safety:

  • Never give out your real name
  • Never tell anyone where you go to school

The real names movement is not just out of touch with BBS culture and with norms of publication. It's also out of touch with child safety advocates.

Real names proponents talk about making Internet users accountable. Child advocates, meanwhile, strive for safety. Safety and accountability are in considerable tension. To be safe on a forum, one thing you really want is the ability to exit. You want children to be able to leave a forum that has turned sour and not have ongoing consequences from it. To contrast, real name proponents hope that if someone misbehaves and leaves a forum, there is some outside mechanism to track the person down and retaliate. That might sound good if the person tracked down is really a troll, but it's a chilling prospect if the person being hunted is a child.

Friday, September 30, 2011

Pseudonyms lead to uncivil forums?

I am late to realize, but apparently, Google Plus is requiring real names. They go so far as to shut down accounts that use a name they are suspicious of, and they're doing a lot of collateral damage to people with legal names that happen to sound funny.

The battle for "real names" is one that I have a hard time understanding. Partially this is because it is impossible to indicate which names are "real". Is it ones on legal papers? On a credit card or bank account? Ones people call you all the time? Partially it is that I started using forums at an impressionable age. Online forums are filled with pseudonyms and they work just fine. Hobbit and Ghostcrawler are the real names of real people in my world. It's all so normal and good that I have a hard time understanding why someone would want to shut it down.

Let me take a try at it, though, because I think it's important that pseudonymity thrive on the Internet.

The most common defense I hear for a real-names policy is that it improves the quality of posts in a forum. That's the reason Blizzard used when they announced they would require real names only on their official forums. As far as I can understand, the idea is that a "real name" gives some sort of accountability that a pseudonym does not.

There is much to say on this, but often a simple counter-example is the strongest evidence. Here are the first four Warcraft guilds I could find, by searching around on Google, that have online forums viewable by the public.

Feel free to peruse them and see what a forum is like without real names. At a glance, I don't see a single real name. Everyone is posting using names like Brophy, Porcupinez, and Nytetia. As well, after skimming a few dozen posts, I didn't find a single one that is uncivil. In fact, the overall impression I get is one of friendliness. Camaraderie. Just plain fun.

The tone of these forums is not surprising if you think about the relationship the members of a guild have with each other. This is just the sort of thing you see over and over again if you participate in Internet forums. It is just the kind of thing that will be shut down under a real names policy.

Thursday, July 7, 2011

Professors' letter against PROTECT-IP

A number of professors have signed a letter to the U.S. Congress opposing PROTECT-IP:
The undersigned are 108 professors from 31 states, the District of Columbia, and Puerto Rico who teach and write about intellectual property, Internet law, innovation, and the First Amendment. We strongly urge the members of Congress to reject the PROTECT-IP Act (the "Act"). Although the problems the Act attempts to address--online copyright and trademark infringement--are serious ones presenting new and difficult enforcement challenges, the approach taken in the Act has grave constitutional infirmities, potentially dangerous consequences for the stability and security of the Internet's addressing system, and will undermine United States foreign policy and strong support of free expression on the Internet around the world.

The most important point raised in the letter is that it is a violation of free speech. Forgetting the constitutional issue in the U.S., isn't it a bad way for people to interact online? Shutting down a DNS address is much like cutting a person's phone access, something that is simply not done unless the person is about to be arrested. The authors accurately call it an "Internet death sentence". It's far overboard.

The letter also raises issues with Secure DNS, but I believe this is a counter-productive argument. Secure DNS is a gift to anyone who wants to cut off DNS records. Sure, PROTECT-IP as it stands might not work, but all that means is that Secure DNS version 2 will be updated to have a government back door. The problems of PROTECT-IP are not technical.

Most of all, I really wish people could be more creative about digital copyright. You can copy bits, but you can't copy skill. Thus, we would do better to sell skill than to sell the bits that result from them. We can make that change, but expect Hollywood to fight it.

Friday, June 10, 2011

Peak IPv4?

In its announcement about "IPv6 Day", the Internet Society (ISOC) casually remarked that IPv4 addresses are about to run out:
With IPv4 addresses running out this year, the industry must act quickly to accelerate full IPv6 adoption or risk increased costs and limited functionality online for Internet users everywhere.

This is highly misleading, and the recommended solution is not a good idea. I am sure if pressed the ISOC would respond that by "running out", they mean in the technical sense that some registrar or another has now given away all its addresses. However, the actual verbiage implies something far different. It implies that if you or I try to get an IPv4 address on January 1, 2012, we won't be able to do so. That implication is highly unlikely.

A better way to think about the issue is via price theory. From the price theory point of view, the number of IPv4 addresses at any time is finite, and each address has a specific owner. At the current time, every valid address is owned by some entity or another. (Thus, in some sense they "ran out" a long time ago.)

When a new person wants to get an IPv4 address for their own use, they must obtain the rights from some entity that already has one. While some large organizations can use political mechanisms to gain an IPv4 address, most people must purchase or rent the address from some entity that already owns one. Typically those IP addresses are bundled with a service contract that provides Internet bandwidth, though in some cases addresses can be purchased by themselves.

The price one pays gives us a way to think about the scarcity of addresses. Diamonds are relatively scarce, and their price is correspondingly high. Boeing 747s are even more scarce, and their price is even higher. For IP addresses, the price is surely rising over time as more and more people hook things up to the Internet. Already the price is high enough that, for example, most home users do not assign a separate publicly routable address to every IP device in their home. They make do with a single IP address from their Internet provider.

What is that price right now? The question is crudely phrased, because some addresses are more valuable than others, and all addresses come with some sort of strings attached. However, we can get a ballpark idea by considering a few data points:
  • Linode offers its subscribers an extra IP address for $1/month.
  • Linode offers an IP address along with an Internet hosting service for under $20/month.
  • Broadband providers such as Comcast and AT&T offer an IP address along with Internet connectivity for on the order of $50/month.
From these observations we can infer that the cost of an IP address is at most a few dollars per month. With the cost this low, I can't see any major site going to IPv6-only any time soon. A few dollars per month is a very low price to pay for a great deal of extra accessibility. With the protocols designed as they are right now, the reason to consider IPv6 is that it's a newer, better protocol, not because it has more available addresses.

I wish public communication about IPv6 would make this more clear. The Internet is important, and as such, it is important that the techies get it right. This isn't a minor technical detail.

Thursday, May 26, 2011

Google Wallet: Why NFC?

I was excited to read that Google is going to build a payment method based on a smartphone:
Today in our New York City office, along with Citi, MasterCard, First Data and Sprint, we gave a demo of Google Wallet, an app that will make your phone your wallet. You’ll be able to tap, pay and save using your phone and near field communication (NFC). We’re field testing Google Wallet now and plan to release it soon

I have long wished that payments could be made using an active device in the buyer's possession rather than having the buyer type secret information--a PIN--into a device the seller owns. The latter arrangement requires that a device the buyer has never seen before be diligent about deleting the PIN after it is used. It also requires that a device the buyer has never seen before is making the same request to the bank that it displays on its screen. Security is much higher when using a device the buyer owns.

The main flaw with this approach is that it requires people to carry around these active devices. Google's bright idea is to make that device be a smartphone. Brilliant.

The one thing I don't understand is why Google is only supporting it using NFC. I had never heard of NFC until today, and for any readers like me: it is basically a really dumb, short-range, low-bandwidth wireless protocol. It sounds well-suited for the application, but no current phones support it.

An alternative approach that should work with current phones is to use bar code reading software. The seller's hardware would display a barcode that includes a description of what is being bought, the amount of money, and a transaction ID number. It would simultaneously upload the transaction information to Google. The buyer would scan the bar code, and if the buyer authorizes the payment, the phone would send the authorization to Google. The seller would then receive notification that the payment has been authorized. For larger transactions, further rounds of verification are possible, but for groceries and gas, that could be the end of the story.
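
To sketch the idea (the field names, and the notion that both sides talk to a single payment service, are my assumptions, not anything Google has announced):

    import json
    import uuid

    # Seller side: build the payload the barcode encodes, and upload the
    # same record to the payment service.
    txn = {
        "merchant": "Corner Grocery",   # hypothetical
        "amount_cents": 2374,
        "txid": str(uuid.uuid4()),
    }
    barcode_payload = json.dumps(txn)
    # ...render barcode_payload as a barcode on the register's display...

    # Buyer side: scan, show the user what they are paying for, then
    # authorize over the phone's own authenticated channel.
    scanned = json.loads(barcode_payload)
    print(f"Pay ${scanned['amount_cents'] / 100:.2f} to {scanned['merchant']}?")
    approval = {"txid": scanned["txid"], "approve": True}
    # ...send json.dumps(approval) to the payment service...

The key property is that the PIN or password never touches the seller's hardware; only the transaction ID and an approval travel between the parties.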

Why limit the feature to NFC devices? While NFC solutions look a little more convenient, barcodes don't look bad. Why not offer both?

Tuesday, May 17, 2011

It was just getting started...

There are many things wrong with California jumping in to regulate Facebook's privacy policies:
  • Facebook is a world-wide service, not a California service. Why is this up to California?
  • Facebook has over five hundred million users. That's five times more than the number of people who watch the Super Bowl. Whatever Facebook is doing, it must be pretty reasonable.
  • Social network sites tend to only last about five years before the next new hotness overtakes them. The odds are against Facebook lasting all that long.

All of these matter, but the last one is most peculiar to Internet services. I really want to see what the next social site is like, and the next site after that. I don't relish a long sequence of watered-down Facebook clones with all of their paperwork properly stamped and in order. How dreary.

Wednesday, May 11, 2011

Free linking on the web?

Lauren Weinstein has a great article up on the efforts of governments around the world to make Internet material disappear. One tactic for this is to go after search engines:
In Europe, one example of this is the so-called Spanish “right to be forgotten” -- currently taking the form of officials in Spain demanding that Google remove specific search results from their global listings that “offend” (one way or another) particular plaintiffs.

I agree with Weinstein's conclusion:
We are at the crossroads. Now is the time when we must decide if the Internet will continue its role as the most effective tool for freedom of information in human history, or if it will be adulterated into a mechanism for the suppression of knowledge, a means to subjugate populations with a degree of effectiveness that dictators and tyrants past could not even have imagined in their wildest dreams of domination.

The U.S. is in a position to affect that future. Currently, it is gradually inserting censorship backdoors into the Internet at the request of its music and film industries. It's not worth the cost. I freely admit that Hollywood is wonderful, but we should remember that Broadway is pretty cool, too. Unlike Hollywood, Broadway has business models that don't require locking down the Internet.

Friday, April 1, 2011

Dan Wallach on fixing the certificate authorities

I like the latest ideas from Dan Wallach about building better infrastructure for browsers to detect impostor web sites.

First, there's this:
A straightforward idea is to track the certs you see over time and generate a prominent warning if you see something anomalous. This is available as a fully-functioning Firefox extension, Certificate Patrol. This should be built into every browser.
This is similar to pet names, but is more similar to the way SSH works. Like pet names, this approach will tell you if you visit a site and its certificate has changed. Unlike pet names, it won't say anything when you visit a new site. There's a trade-off there. Either is a big improvement on the current state, though I suspect pet names could lead to a better overall user interface. The reason is that pet names can be integrated with the browser's bookmarks.

Second, there's this more speculative request:
In addition to your first-hand personal observations, why not leverage other resources on the network to make their own observations? For example, while Google is crawling the web, it can easily save SSL/TLS certificates when it sees them, and browsers could use a real-time API much like Google SafeBrowsing.

The Y property would give us this effect. What if, when you got a Google search result, it not only told you the URLs for the hits but also the certificates for those pages? You can then only be attacked if the attacker fools both you and also every Google web crawler.

Let me add one thing. If web protocols used these two tricks, how important would certificate authorities be? These two decentralized techniques strike me as so much more effective that certificate authorities are a waste of time. If you already know that a site is the same one you've visited a dozen times, and you already know it's the same site that Google thinks it is, what do you care about what some Iranian agency thinks of the site?