Programming Languages
Reply #64 – 2005-12-19 06:18:37
Quote:
I've never seen one person on either side of this argument persuaded in hundreds of threads in far more detail than this one, so I guess it's unlikely we'll be the first. Since the standard rebuttal to your standard defense of static typing above answers your points, I'll just roll it out again:

1. you have to unit test everything anyway since static typing can't catch all kinds of important errors

Ok. Here's the difference. It's unlikely you can develop unit tests, just through intuition, that handle all combinations of cases. On the other hand, by making the semantics of failure-liable computations explicit in the types, the compiler can prove (a powerful idea, if you think about it) that in certain ways your code is simply not going to fail.

As for the kinds of "important errors" the type system isn't going to catch: they aren't errors related to failed computations of the sort you've been describing. The errors the type checker won't catch are incorrect algorithms, or things like checking the wrong field (though even there it will sometimes catch the mistake, if the wrong field produces a type mismatch). So far you've asserted that the type checker won't catch many types of errors, but I've not seen you give explicit examples. Since I deal with type checkers all the time, I'll assume you're thinking of the same kinds of problems I am, and as I just pointed out, failure-liable computations (the kind that matter in your O/R example) are exactly the sort of error they do catch.

Quote:
2. your code will work without checking for these maybe | maybe not types all over the place because you've unit tested it and all the extra checking code brings down readability. it's like checking for a null pointer every time you access *anything*

Wrong. Because of the way monads work, the failure checking is not made explicit in the syntax you are using. This is difficult to understand, and difficult for me to explain briefly.
But I'll try. Consider this example:

```haskell
niftyFunction = do
  obj <- retrieveObjFromDbase dbase
  building <- building obj
  if building == "someSpecialPlace"
    then ...
    else ...
```

Now what you don't see is important. Each line of code in that monad represents the result of a computation bound to a variable for another function. Depending on how you design your monad, a failure by any one of those functions (say, if the obj cannot be retrieved, or the building doesn't exist) will (or can, depending on the design) "fold" all the way out and cause the entire computation to fail in a safe way, without you ever having to make checks anywhere except at a few key places (the "boundaries" of the monad). This is because the "binder" function that operates on each line, binding the variables to the functions as I described, is the one that knows how to handle failed computations, so your other functions don't have to. If it detects a failed computation, it triggers a particular process, which might be the "fold" I mentioned. But essentially, the point is that the effect of one computation cascades across the others, yet by using a monad you don't have to interact with this process directly in the explicit syntax. And still, through all this, you've got provably safe handling of failure-liable computations. It's pretty cool, really. If you'd like to learn more, there are some great explanations and tutorials here.

Quote:
3. type inference and static type safety add a layer of semantic complexity to code that a lot of programmers find confusing or at least distracting

Sure. A lot of programmers find databases confusing too. Or memory. Or objects. Or... well, damn near anything non-trivial about programming. But they can learn, usually, and type safety isn't really an exception.
It's just alien to people used to the "C" way, or nowadays maybe the "Python" or "Ruby" way.

Quote:
Like I said, I used to be pretty excited by the idea of static type checking with type inference, but I've personally grown to dislike it after working in languages with it and without it. Maybe we can bet each other a six pack that Haskell, Scala etc. remain interesting research languages and not much more in 2015? My other prediction is that the dynamic languages gradually start to resemble Dylan more, with optional static type declarations as type assertions and aids to the compiler.

You're probably right that Haskell and Scala aren't ever going to be mainstream languages. But that doesn't mean that static languages can't do metaprogramming. It also doesn't mean they aren't great choices for many of the things people think only dynamic languages can do. I really don't know what the future of programming is going to look like, so I wouldn't bet too heavily either way. I do hope there's still a place for what I feel is a good approach that so many people just seem not to understand, for whatever reason, though.
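To close with something concrete: the failure cascade I described in the monad example above can be sketched, runnably, with Haskell's built-in Maybe monad. The toy `dbase` contents, `lookupBuilding`, and the result strings here are made up for illustration (the `retrieveObjFromDbase` and `building` functions in my example above were hypothetical), but the short-circuiting behavior is exactly the standard one:

```haskell
import qualified Data.Map as Map

-- Toy "database": object id -> building name. Map.lookup returns
-- a Maybe, so the possibility of failure is explicit in the type.
dbase :: Map.Map Int String
dbase = Map.fromList [(1, "someSpecialPlace"), (2, "downtown")]

lookupBuilding :: Int -> Maybe String
lookupBuilding objId = Map.lookup objId dbase

-- Note there is no explicit Nothing check anywhere in this do-block:
-- Maybe's bind operator short-circuits, so a failure on any line
-- "folds" all the way out and the whole computation yields Nothing.
niftyFunction :: Int -> Maybe String
niftyFunction objId = do
  building <- lookupBuilding objId
  if building == "someSpecialPlace"
    then return "special handling"
    else return "ordinary handling"

main :: IO ()
main = do
  print (niftyFunction 1)   -- Just "special handling"
  print (niftyFunction 99)  -- Nothing: the failed lookup cascaded safely
```

Here Maybe's `(>>=)` plays the role of the "binder" I described: since `Nothing >>= f` is defined to be `Nothing`, a failed lookup skips every later step, and the caller only checks for failure once, at the boundary.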