Tuesday, June 3, 2014

My analysis of the Swift language

Apple has put out Swift, which sounds like a nice language overall. Here is my flagrantly non-humble opinion about how its features line up with what I consider modern, well-established aspects of programming language design.

The good

First off, Swift includes range-checked integer arithmetic! Unless you explicitly ask for wraparound, any overflow will cause a runtime error. I was just commenting yesterday on what a tough problem this is for current programming languages.
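A minimal sketch of the two modes, using Swift's wraparound operator &+ (the variable names are mine):

```swift
let top: UInt8 = 255

// Ordinary arithmetic is range-checked; uncommenting this line
// makes the overflow trap instead of silently wrapping.
// let boom = top + 1

// Explicitly requesting wraparound with the &+ operator:
let wrapped = top &+ 1   // 0
```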

It has function types, nested functions, and closures, and it has several abbreviated syntaxes for writing closures. This is all heartening, and I hope it will stick going forward, much the way lexical variable binding has stuck. Closures are one of those things that are both helpful and have little downside once your language has garbage collection.
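For instance, a sort can be written as a full closure expression, with shorthand argument names, or as a trailing closure. A sketch, using the current standard-library spelling sorted(by:), which has shifted across Swift versions:

```swift
let names = ["Chris", "Ava", "Bo"]

// Full closure expression:
let s1 = names.sorted(by: { (a: String, b: String) -> Bool in
    return a < b
})

// Shorthand argument names:
let s2 = names.sorted(by: { $0 < $1 })

// Trailing-closure syntax:
let s3 = names.sorted { $0 < $1 }
```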

Swift's closures can assign to variables in an outer scope. That's the right way to do things, and I find it painful how much Java's designers struggle with this issue. As a technical detail, I am unclear on what happens if a closure captures a variable but does not modify it. What ought to happen is that any read from it will see the latest value, not the value at the time the capture happened. However, the Closures section of the Language Guide suggests that the compiler will capture just the initial value in this case. I believe this is misguided and will cause as many traps as it fixes; for example, suppose the closure captures a counter but never increments it itself: a read should still see increments made elsewhere. The motto here should be: you don't know what the programmer meant, but you know what they wrote.
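A sketch of the capture semantics I would expect, with a closure that updates an outer counter (the names are mine):

```swift
func makeCounter() -> () -> Int {
    var count = 0
    // The closure captures count itself, not a snapshot:
    // each call reads and writes the current value.
    return {
        count += 1
        return count
    }
}

let next = makeCounter()
next()   // 1
next()   // 2
```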

Type inference is quite welcome. I don't know what more to say than that developers will take advantage of it all the time, especially for local variables.
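A few typical inferences, as a trivial sketch:

```swift
let greeting = "hello"          // inferred as String
let counts = [1, 2, 3]          // inferred as [Int]
let capitals = ["FR": "Paris"]  // inferred as [String: String]
```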

Tuple types are a small touch that comes up in practical programming all the time. How many times have you wanted to return two values from a function, and had to define a class for them or otherwise contort your design?
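A sketch of the tuple version, with a hypothetical divide function:

```swift
// No one-off wrapper class needed to return two values.
func divide(_ a: Int, by b: Int) -> (quotient: Int, remainder: Int) {
    return (a / b, a % b)
}

let result = divide(17, by: 5)
// result.quotient == 3, result.remainder == 2
```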

Enumerations seem good to include in the language. Language designers often seem to think that enums are already handled by other language features, and therefore should not be included. I respect that, but in this case, it's a simple feature that programmers really like to use. Java's enums are baroque, and none of the several Datalog dialects I have worked on include enums at all. I miss having language support for a closed set of named integers. It's easy to support and will be extremely popular.
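A sketch of a Swift enum with an exhaustive switch (lowercase case names are the current convention; the original release capitalized them):

```swift
enum CompassPoint {
    case north, south, east, west
}

var heading = CompassPoint.north
heading = .south   // the type is known, so the prefix can drop

// The switch must be exhaustive over the closed set of cases.
switch heading {
case .north: print("up")
case .south: print("down")
case .east:  print("right")
case .west:  print("left")
}
```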

As an interesting trick, keyword arguments to functions are supported, but you have to opt in. That's probably a good combination. Keyword arguments are quite useful in cases where you have a lot of parameters, and sometimes this legitimately happens. However, it's unfortunate to afflict all functions with keyword arguments, because the keywords become part of the API. By making the feature opt-in, Swift keeps it available for the functions that can use it.
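A sketch of both styles. Note that the default has since flipped: in later Swift versions, labels are on by default and an underscore opts out:

```swift
// The labels width: and height: are part of the call site,
// making the arguments self-describing.
func resize(width: Int, height: Int) { /* ... */ }
resize(width: 800, height: 600)

// An underscore suppresses the label for positional style.
func clamp(_ value: Int, _ limit: Int) -> Int {
    return min(value, limit)
}
let c = clamp(42, 10)
```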

Including both structs and classes looks initially redundant, but it's quite helpful to have a value type that encompasses multiple other values. As an example, the boxed Integer type in Java would be much better as a struct than as a class.
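A sketch of the value semantics that structs provide:

```swift
struct Point {
    var x: Int
    var y: Int
}

var a = Point(x: 1, y: 2)
var b = a    // assignment copies the whole value
b.x = 99
// a.x is still 1: the two variables do not share state
```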

Extensions look valuable for programming in the large. They let you make an existing class fit into a new framework, and they let you add convenience methods to an existing class. Scala uses its implicit conversions for this purpose, but direct support for extensions also makes a lot of sense.
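A sketch of adding a convenience member to a type you don't own:

```swift
// Extending the standard Int type with a convenience property.
extension Int {
    var squared: Int { return self * self }
}

let n = 7.squared   // 49
```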

The way optional chaining works is a nice improvement on Objective-C. In Objective-C, any message sent to nil returns nil. In practice, programmers are likely better off getting an error when they access nil, as a form of design by contract: when something goes wrong, you want the program to stop at that point, not at some indefinite time later. Still, sometimes you want nil propagation, and when you do, Swift lets you just put a "?" after the access.
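A sketch along the lines of the Language Guide's example:

```swift
class Residence { var numberOfRooms = 1 }
class Person { var residence: Residence? }

let john = Person()

// Forcing with ! would trap here, because residence is nil.
// With ?, the whole expression just evaluates to nil instead.
let rooms = john.residence?.numberOfRooms   // nil, and no trap
```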

Weak references are helpful for any language with automatic memory management, but they look especially helpful in a language with reference-counting memory management. I don't follow why there are also "unowned" references, except that the designers didn't want your code to get polluted with ! dereferences. Even so, I would think this is a case of do or do not. If you are worried about ! pollution, which is a legitimate concern, then simply don't require the !.
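A sketch of a weak back-reference breaking a retain cycle (the class names are hypothetical):

```swift
class Owner {
    var pet: Pet?
}

class Pet {
    // weak breaks the cycle; the property must be optional,
    // since it becomes nil once the owner is deallocated.
    weak var owner: Owner?
}

let o = Owner()
let p = Pet()
o.pet = p
p.owner = o   // the back-reference does not keep o alive
```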

As an aside, this is the reason I am not sure pervasive null is as bad as often claimed. In practical code, there are a lot of cases where a value is optional in general but, in a specific context, is known to be present. In such a case, you are just going to dereference it, and possibly suffer a null-pointer error if you were wrong. As such, programmers are guided into a style where they just insert dereferences until the compiler shuts up, which makes the code noisy without increasing practical reliability.

The dubious

Swift looks very practical and workable, but there are some points that I think could have been handled better.

Single inheritance seems like a step backward. The linearization style of multiple inheritance has proven helpful in practice, and it eliminates the need for a separate "interface" or "protocol" feature. Perhaps designers feel that C++'s multiple inheritance went badly, and are avoiding multiple inheritance like the plague? I used to think that way, but it's been multiple decades since C++'s core design. There are better designs for multiple inheritance nowadays.

Swift doesn't appear to include persistent data structures. This is the one feature I miss the most when I don't get to program in Scala, and I don't know why it isn't catching on more widely in other languages. Developers can add their own collection types, but since the new types aren't standard, you end up having to convert to standard types whenever you call into another library.

The automatic immutability of collections assigned to constants looks driven by the lack of persistent collections. It's better to support both features independently: allow variables to be mutable or not, and allow collections to be mutable or not. All four combinations are very useful.
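A sketch of the fused behavior in current Swift (the original beta still let you mutate the elements of a let array, which was later tightened to the behavior shown here):

```swift
var mutable = [1, 2, 3]
mutable.append(4)        // fine: var binding, mutable contents

let frozen = [1, 2, 3]
// frozen.append(4)      // compile error: a let binding freezes
                         // the collection's contents as well
```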

Deinitialization, also known as finalization, looks like a throwback to me. In a system with automatic memory management, you don't want to know precisely when your memory is going to get freed. As such, you can't count on deinitializers running soon enough to be useful. Thus, you always need a fallback plan of deallocating things manually. Once you deallocate manually, though, deinitializers become just a debugging technique. It's better to debug leaks using a tool than with a language feature.
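For reference, a deinitializer is declared like this (a minimal sketch; the class and the resource are hypothetical):

```swift
class LogFile {
    deinit {
        // Runs when the last strong reference goes away. The
        // complaint above is that relying on this timing to
        // release external resources is fragile.
        print("closing underlying file handle")
    }
}
```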

In-out parameters seem like a step backwards. The trouble is that most functions use only in parameters, so when a programmer sees a function call, the default assumption is that the callee will not modify the arguments. It can lead to bad surprises if an argument gets modified at all. Out parameters are so unusual that it's better to be explicit about them, for example by taking a mutable collection as an argument.
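A sketch of an in-out parameter; note the & at the call site, which comes up in the comments below:

```swift
func swapValues(_ a: inout Int, _ b: inout Int) {
    let t = a
    a = b
    b = t
}

var x = 1
var y = 2
swapValues(&x, &y)   // the & flags that x and y may be modified
// x == 2, y == 1
```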

Custom precedence (and associativity) is likely to go badly. We discussed this in detail, over the course of days, for X10, because X10 is a scientific language that really benefits from a rich set of operators. One problem with user-defined precedence is that it's hard to scope: you want to scope the operators themselves, not their implementations, because parsing happens before method lookup. It's also tough on programmers if they have to learn a new precedence table for every file of code they read. All in all, we concluded that Scala had a reasonable set of trade-offs here: have a built-in precedence table with a huge number of available operators, and have library designers simply choose from the existing operators.
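For concreteness, here is what declaring a custom operator looks like, sketched in current Swift syntax, which pins the operator to a named precedence group (the original release used an inline precedence number instead):

```swift
// A custom operator at the same precedence as +.
infix operator ⊕: AdditionPrecedence

func ⊕ (lhs: Int, rhs: Int) -> Int {
    return (lhs + rhs) % 12   // e.g. clock arithmetic
}

let h = 10 ⊕ 5   // 3
```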

I see no exceptions, which is likely to be a nuisance to programmers if they are truly missing. Sometimes you want to tear down a whole chunk of computation without exiting the whole program. In such cases, exceptions work very well. Maybe I just missed it.

Integer types are hard to get right, and I am not sure Swift has chosen a great approach. It's best to avoid unsigned types, and instead to have untyped operations that can apply to typed integers. It's also best to avoid low-precision operations, even if you have low-precision storage. Given all of the above, you don't really need explicit conversions any more. Java's integer design is quite good, with the exception of the unnecessary char type, which is not even good for holding characters. I suspect many people overlook this strength of Java, because it's a case where programmers are better off for the language having fewer features.
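A sketch of the explicit conversions Swift requires, which the approach above would make unnecessary:

```swift
let small: Int8 = 100
let big: Int = 1_000

// Mixed-width arithmetic does not compile in Swift; you must
// convert explicitly, even for a lossless widening.
// let sum = small + big     // compile error
let sum = Int(small) + big   // 1_100
```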

3 comments:

zem said...

One comment: I'm not looking at it right now, but I thought the language guide said you have to prefix variables you pass as inout parameters with an & when you call the function, which seems explicit to me. I might have this wrong, though.

Lex Spoon said...

I see that you're right, zem. I retract my comments about implicit out parameters; it sounds like a great and surprise-free design for out parameters.

I do wonder how often they will get used, given that the language also has multiple return values. Maybe the existing body of Objective-C libraries has a lot of out parameters, and they didn't want to wrap them all for Swift.

Anonymous said...

Agreed, it looks designed for interop; e.g., standard error handling uses references.

Yours was a thoroughly excellent summary! The only gripe I found missing is the lack of implicit return except in single-line closures. I guess I can live with that.

I've been surprised by how often Swift feels more readable than Scala (constructors being the most conspicuous exception). Its enum-as-case-class pattern looks especially terse.

Thanks for the great read :)