I was recently pointed to the Funmath project. Funmath is a mathematical notation designed to make manipulation on paper fast and reliable. It is headed by Raymond Boute.
I don't understand the notation well enough to say how well it works, but I most certainly appreciate the goal. If you learn the notation and rewrite rules of algebra and calculus, you can fearlessly race through rewrites as fast as you can move your pencil. The notation means exactly what it looks like, and the rewrites can be done through simple mental pattern matching. By contrast, many other parts of mathematical notation don't work so well. If a formula involves existential or universal quantifiers, set comprehensions, summations, lists, or even functions, then there is a lot of "it means what I mean" in the notation people use. Each step of a derivation has to be very carefully considered, because you can't just pattern match. You have to do a deep parse and figure out exactly which of the possible meanings each bit of notation really carries.
Boute has an interesting historical explanation of this difference. Algebraic notation developed before the dawn of computers, when people computed on paper all the time. As a result, the notation evolved to support paper computation. Logical notation, on the other hand, never faced the same pressure to support computation on paper. Interest in it has risen in the last few decades, but almost everyone involved enters their formulas directly into computers. There was never a period when logical notation was more heavily used on paper than on computers, so a step was skipped.
In attempting to fill in that gap, Boute has applied design rules that are familiar in programming language design. Particularly interesting to me is that he handles all variable-binding forms by using a single notation for function literals. This is exactly like the function literals that get bandied about in programming-language design circles. In math notation, as in programming languages, having a lightweight syntax for function literals means you can cut out a lot of syntactic forms. In Funmath, summation and quantifiers don't get special syntax. They are library functions that take other functions as arguments. Summation takes a function and maps it to the sum of its range. Quantifiers take a function and map it to a truth value.
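To make that idea concrete, here is a small sketch of it in Python. The names `Sum`, `Forall`, and `Exists` are my own for illustration, not Boute's actual Funmath notation: the point is only that once you have a lightweight function-literal syntax (here, `lambda`), summation and quantification need no special binding syntax of their own; they are ordinary higher-order functions.

```python
# Quantifiers and summation as ordinary library functions that take
# other functions as arguments. The lambda is the single
# variable-binding form; everything else is plain application.
# (Names are illustrative, not Boute's notation.)

def Sum(f, domain):
    """Map a function to the sum of its values over a domain."""
    return sum(f(x) for x in domain)

def Forall(p, domain):
    """Map a predicate to a truth value: true iff it holds everywhere."""
    return all(p(x) for x in domain)

def Exists(p, domain):
    """Map a predicate to a truth value: true iff it holds somewhere."""
    return any(p(x) for x in domain)

# "The sum of i^2 for i from 1 to 10" needs no special sigma syntax:
print(Sum(lambda i: i * i, range(1, 11)))       # 385
print(Forall(lambda n: n % 2 == 0, [2, 4, 6]))  # True
print(Exists(lambda n: n > 5, range(5)))        # False
```

The design payoff is the same one function literals bring to programming languages: a handful of general combinators replaces a zoo of special-purpose syntactic forms, and each one can be manipulated by the same uniform rewrite rules.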
If Funmath is as advertised, then it is as much a step up for logical notation as Arabic numerals were over Roman ones. I wonder if anyone outside the core research group has given it a try.