HydrogenAudio

Misc. => Off-Topic => Topic started by: kl33per on 2005-11-09 02:03:04

Title: Programming Languages
Post by: kl33per on 2005-11-09 02:03:04
Quote
One other area that comes to mind as an example would be languages.  They have a bunch of smart language researchers working for them, such as Simon Peyton Jones, Simon Marlow and others, but then they release crap like C# and VB.NET.  This while F# and SML.NET basically languish in obscurity, and the Simons' Haskell work remains completely untapped (AFAIK).

*sigh*...

Sorry, but I have to take you up on this.  In the last six months I've been using C# extensively and I have to say it is quite a good language for rapid application development.  I can understand grievances about cross-platform issues to do with .NET (although there is Mono), but what exactly is wrong with the C# language itself?  As for VB.NET, it is a large step away from traditional VB (now it's object orientated), but it is actually a far better language for it.  Again, there are cross-platform issues, but if you use enough features of almost any implementation of a language you're going to run into cross-platform issues.  I can't run Objective-C+Cocoa on Windows either.  So again, what are the specific failures of the C# language?
Title: Programming Languages
Post by: ErikS on 2005-11-09 02:52:12
Quote
So again, what are the specific failures of the C# language?


I've seen this obsession with functional languages take over some people around me, and they sometimes go on and create fantastic things. But all this stays in the academic world - I've never seen any practical applications built on Haskell, for example. I think it's simply because they are not well suited for that. The Haskell / Lisp / ML fanatics think otherwise, though. Time will tell who was right...
Title: Programming Languages
Post by: Dibrom on 2005-11-09 02:54:01
Quote
Quote
One other area that comes to mind as an example would be languages.  They have a bunch of smart language researchers working for them, such as Simon Peyton Jones, Simon Marlow and others, but then they release crap like C# and VB.NET.  This while F# and SML.NET basically languish in obscurity, and the Simons' Haskell work remains completely untapped (AFAIK).

*sigh*...

Sorry, but I have to take you up on this.  In the last six months I've been using C# extensively and I have to say it is quite a good language for rapid application development.  I can understand grievances about cross-platform issues to do with .NET (although there is Mono), but what exactly is wrong with the C# language itself?  As for VB.NET, it is a large step away from traditional VB (now it's object orientated), but it is actually a far better language for it.  Again, there are cross-platform issues, but if you use enough features of almost any implementation of a language you're going to run into cross-platform issues.  I can't run Objective-C+Cocoa on Windows either.  So again, what are the specific failures of the C# language?


I'd actually love to discuss this, but it'd be a long post, and it would be even more off topic than my previous post on this issue.

I'll only say that, in the context of modern language development and cutting-edge research (which was the topic at hand in my reference to MS language researchers and their work on other languages), C# is quite lacking.  This is mostly in regard to design, but also to features.  C# appears to make a better alternative to some common industry choices, such as Java (at least in some regards), but that in itself doesn't make it a good language -- it just makes it a better one than some other popular choices.

I'd suggest that if you really want to know what's "wrong" with it, you spend some time looking into some of the other languages I mentioned and some of the research done by the people I mentioned, among others who have done related work.  If you are still surprised at why I don't think C# is a very good language after that, I'd be quite surprised myself.

If you really must have something more from me, I would probably say that some, but not all, of the criticisms (http://www.hydrogenaudio.org/forums/index.php?showtopic=38229&view=findpost&p=337941) that I made of PHP also hold true for C#.
Title: Programming Languages
Post by: kl33per on 2005-11-09 03:27:35
Quote
I'd actually love to discuss this, but it'd be a long post, and it would be even more off topic than my previous post on this issue.

Indeed, I shouldn't have furthered the debate.  Another day perhaps.

Edit:  Just realised where I'd seen that logo in your avatar before. 
Title: Programming Languages
Post by: ErikS on 2005-11-09 03:32:15
Quote
Indeed, I shouldn't have furthered the debate.  Another day perhaps.


But, don't you both have moderator status at least? Then couldn't you just move some posts to the Off topic forum? I'd love to hear more opinions about this...
Title: Programming Languages
Post by: Dibrom on 2005-11-09 03:40:53
Quote
Quote
So again, what are the specific failures of the C# language?


I've seen this obsession with functional languages take over some people around me, and they sometimes go on and create fantastic things. But all this stays in the academic world - I've never seen any practical applications built on Haskell, for example. I think it's simply because they are not well suited for that. The Haskell / Lisp / ML fanatics think otherwise, though. Time will tell who was right...


I wasn't really planning on debating this in such a thread, but oh well

No practical applications from Haskell?

I guess Darcs (http://abridgegame.org/darcs/) (the best RCS I have probably ever seen) and Pugs (http://www.pugscode.org/) (*the* Perl6 implementation) don't count.  These being only 2 pretty recent examples.  There are many more.

But seriously, people who say that functional languages like Haskell / Lisp / ML are only used in the academic community need to take a closer look at things.  There are plenty of industrial-grade applications written in these languages, and oftentimes they are used in mission-critical places, such as telecom, Wall Street and the banking industry, energy companies, etc.

And as for these languages not being practical for personal programming, well, you could have fooled me.  Just to give an example of some of the stuff I've created quite recently: I created a CPU emulator, complete with an assembler, for a homebrew CPU I designed, in about a day of work, just a few weeks ago (yes, in Haskell, yes, without assignments or side effects, and no, it wasn't hard).  I also created a Scheme implementation in Haskell, with a REPL, file interpreter, etc., in about 2 days -- this from having never implemented a programming language before.  Some months back I won a programming contest at school here using OCaml with OpenGL to render pretty fractal designs (I actually used Lindenmayer systems).  Not only did it run faster than the competition (who programmed mostly in C and C#), but it also produced the prettiest output.  I've written dozens of other things in these languages (usually Haskell), from distributed concurrent cellular automata simulators, to genetic algorithms, to physics simulations, linguistic analysis, etc.  Currently I'm working on turning the Scheme interpreter into a full-fledged optimizing compiler with an LLVM backend, and am also working on implementing a subset of Fortress (http://research.sun.com/projects/plrg/fortress0707.pdf).  Just about all of these projects are non-trivial for the most part, and would not be possible in languages that were incapable of producing anything non-academic.  And furthermore, most of these projects would be significantly more difficult in your typical run-of-the-mill industry-favorite language.
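For the curious, the Lindenmayer-system idea mentioned above is tiny at its core: rewrite every symbol of a string in parallel, generation by generation. Here is an illustrative sketch in Python (the contest entry itself was OCaml + OpenGL; the names here are invented for the example):

```python
# Minimal Lindenmayer-system expansion (illustrative sketch only).
def expand(axiom, rules, generations):
    """Rewrite every symbol in parallel, `generations` times."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Koch-curve rules: F -> F+F-F-F+F, with + and - read as turtle turns.
koch = {"F": "F+F-F-F+F"}
print(expand("F", koch, 1))  # F+F-F-F+F
```

The expanded string is then typically fed to a turtle-graphics interpreter (move forward on `F`, turn on `+`/`-`) to produce the fractal drawing.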

Really, I would like to see someone come up with an example of something that couldn't be done in one of these well-established functional (or symbolic, in the case of LISP) languages that doesn't deal with direct memory management (something that most modern procedural languages aside from C/C++ also don't have control over).  The truth is that they probably won't or can't.  They've just heard that functional languages are no good, or maybe they tried one once but were too set in their ways to figure out how to do something the non-C or non-Java way.  This isn't to say, by the way, that some of these languages are the best tools for the job at all times -- of course this isn't true.  You wouldn't use Haskell, for example, if you needed to do realtime DSP, but you might use OCaml.  Even if you used C or even C++ (gaining Fortran-like performance with expression templates), that would be fine.  All would probably make a better choice than Java or C#.  You probably wouldn't use a purely functional language as a first choice for your average run-of-the-mill GUI desktop application either, but this is mostly because of API issues and the host language for these.  That's not to say you can't, though; Haskell has bindings for Cocoa, GTK, wx, etc., and most other notable similar languages have something in that vein also.

But anyway, it's not just about functional languages.  They tend to be my favorite, and they tend to offer massive benefits for certain areas of programming such as reasoning about program correctness and preventing a huge portion of the most common programming errors, but aside from this I also happen to like procedural languages, symbolic languages (like LISP, which in truth isn't really functional through and through -- it can be used functionally, but Common LISP is usually not used in this sense the same way ML or Scheme is), declarative languages, etc.

I program in C, Objective-C, C++ (w/ advanced template and macro metaprogramming, functional style, a la Boost), Python, Ruby, Scheme, SML, OCaml, Alice ML, Oz, Haskell, and APL, just to name the ones I'm mostly active with.  I don't consider myself obsessed with functional languages, but I do consider myself to be able to know a good language when I see one.  And I also consider it to be the case that the majority of languages in popular use are most of the time either simply not very good, or not the right tool for the job.

As a final note, it's worth saying that whether or not something is popular is not an indication of quality, technical proficiency, performance, etc., etc.  This should be a rather obvious point by now.  Most languages that are currently popular are not so because they are the best, but because they have had the most exposure.  They are usually the ones that are taught, because they are the ones previous generations know well.  Also, because the less technically proficient segments of industry tend to be conservative in their choices (tending to make certain languages outlive their usefulness), and because most "software engineers" could give a shit about programming for the love of it rather than just "as a job", you tend to see little variation in what is popular, and little adoption of radically new paradigms.  This is rather unfortunate, and many people see this and seem to mistakenly assume that these other approaches must be no good.  Of course, they could "see for themselves," in much the same way HA encourages people to perform their own listening tests, but it appears that for the most part, this is too much to ask...
Title: Programming Languages
Post by: ErikS on 2005-11-09 04:47:24
Thank you for your very extensive answer. I'll answer some parts of it below...

Quote
I guess Darcs (http://abridgegame.org/darcs/) (the best RCS I have probably ever seen) and Pugs (http://www.pugscode.org/) (*the* Perl6 implementation) don't count.  These being only 2 pretty recent examples.  There are many more.

Oh, they most certainly count. Only they seem to be rather obscure programs that not many people use. Perhaps I'm wrong, but I thought Perl5 was the version people actually use, and likewise I thought CVS and SVN were the programs mainly used in the open source world (Rational Rose, StarTeam, etc. in the commercial world).

Quote
But seriously, people who say that functional languages like Haskell / Lisp / ML are only used in the academic community need to take a closer look at things.  There are plenty of industrial-grade applications written in these languages, and oftentimes they are used in mission-critical places, such as telecom, Wall Street and the banking industry, energy companies, etc.

I only know of the local company Ericsson, which uses a homebrewed functional language in their telecom switches. And from what I've heard, new engineers who are introduced to that system are not too happy when they have to make changes in the existing code... Not to say that functional programming must lead to messy code (quite the opposite), but in this case I guess it has.

Do you have better examples where they use functional programming in commercial applications?

Quote
[...]from distributed concurrent cellular automata simulators, to genetic algorithms, to physics simulations, linguistic analysis, etc.  Currently I'm working on turning the Scheme interpreter into a full-fledged optimizing compiler with an LLVM backend, and am also working on implementing a subset of Fortress (http://research.sun.com/projects/plrg/fortress0707.pdf).

Impressive list! And they all seem to be typical academic projects 

Quote
Really, I would like to see someone come up with an example of something that couldn't be done in one of these well-established functional (or symbolic, in the case of LISP) languages that doesn't deal with direct memory management (something that most modern procedural languages aside from C/C++ also don't have control over).  The truth is that they probably won't or can't.  They've just heard that functional languages are no good, or maybe they tried one once but were too set in their ways to figure out how to do something the non-C or non-Java way.  This isn't to say, by the way, that some of these languages are the best tools for the job at all times -- of course this isn't true.  You wouldn't use Haskell, for example, if you needed to do realtime DSP, but you might use OCaml.  Even if you used C or even C++ (gaining Fortran-like performance with expression templates), that would be fine.  All would probably make a better choice than Java or C#.  You probably wouldn't use a purely functional language as a first choice for your average run-of-the-mill GUI desktop application either, but this is mostly because of API issues and the host language for these.  That's not to say you can't, though; Haskell has bindings for Cocoa, GTK, wx, etc., and most other notable similar languages have something in that vein also.

I didn't say functional programming is useless. But it's less useful for some things than it is for others.

The areas where it is really useful are where the problems are of a recursive nature. Compilers, lexical analysis, etc. are very much so, as far as I have understood it. Also, the mathematical part of the fractal thing you mentioned is easily described with recursion.

Also when you need to prove correctness of your program I think it's better to use functional languages.

But this is a very small and fairly uninteresting area if you want to make money on programs. Yes, for research it's very interesting, and if you happen to be a company which makes compilers then it is also, but otherwise not so much.

Look at the very successful office suite from Microsoft. Do you think a functional language would have been useful in the development there?

Then take a look at computationally intensive applications. There you simply can't compete with the speeds that Fortran provides, nor with the ease of prototyping that MATLAB or Mathematica offers.

In hardware construction I can see the usefulness of fp once designers start to realize that the designs have to be correct in the first revision -- there is little possibility of patching an ASIC once it's produced. Then the property that you can prove correctness can be useful. But still, I haven't seen that many (commercially useful) tools that can do this yet... Not any, in fact, but I haven't looked very hard.

Quote
As a final note, it's worth saying that whether or not something is popular is not an indication of quality, technical proficiency, performance, etc., etc.  This should be a rather obvious point by now. 


Well, the obvious answer to that is that whether something is popular or not is an indication of whether you can make money out of it.

Also, if it's popular it will be easier to draw on other people's work for your own. I mean, there are heaps of discussion groups etc. where you can look for answers when you run into a problem in a popular language, and also lots of code already written which you in many cases are free to use in your own program.



My own exposure to functional programming is not nearly as extensive as yours - only an introductory course and one or two after that - so there are many things about it which I don't know, but I will try to keep an open mind. And I think your posts here are quite insightful and interesting, so I don't mind if you try to convert me...
Title: Programming Languages
Post by: kl33per on 2005-11-09 05:16:48
Well, I've got nothing against functional languages. They do have advantages and are great for the problems you describe. Any problem that uses heavy mathematical functions and relies heavily on recursion is naturally suited to a functional programming language. And you're correct in saying that any problem that can be solved by an object-orientated language can be solved by a functional language, and vice-versa. However, that doesn't always mean it will be easy.

The advantage of an object-orientated language is obviously the ability to break any problem down into the smallest of components that each work individually to perform a specific task. In functional programming, you have functions that depend on other functions that depend on other functions. Change one and you could break ten others. It is possible to build programs in a functional language with OOP principles in mind, but whether it is more difficult to do this in a functional language... that's debatable.

I looked at your criticisms of PHP. Now PHP is not a language I have a strong background in. In fact, I've only ever used it on one project and only in a very small way, so I do not really have a good understanding of the language. However, you said most of these criticisms apply to C#, so I will try to address them.

Quote
1. The single biggest reason is that the language is poorly designed from the get go. It has all the marks of being a toy language that has been extended in a kludge-like fashion over time, but without any serious thought going into long term design.

I do not believe this holds true for C#. Now admittedly, C# is only in its second iteration so it's difficult to see how the language will evolve over time. However, I find the language to be consistent. I'm not sure what you specifically meant in regards to PHP as again, I have a poor understanding of that language.

Quote
2. It has no formal semantic. This isn't always necessary for a powerful language (c and c++ didn't have one either), but it is usually necessary for a powerful, well-designed language that is easy to understand and to extend in a clean fashion (witness the ugliness of C++ mandated by adhering to previous C syntax, when C wasn't really designed for the type of things C++ does).

Well, I'll agree that C++ is ugly in this situation. C was definitely not designed for the types of things C++ does. As such, I see C# as what C++ should have been: an object-orientated language syntactically similar to C.

Quote
3. The syntax is overly verbose and often superfluous. Part of this comes from copying some of C's inferior syntax, and part of it comes from just adding more unnecessary stuff on top of that. Stuff like prefixing variables with $ is totally unnecessary, and is just one of many examples. In a language like Perl, where symbolic prefixes actually mean something, it's one thing, but in PHP it just reflects poor design once again.

Again, I don't believe C# suffers from this problem. In fact, many things that were changed in C# were designed to reduce the amount of superfluous syntax.

Quote
4. Related to 3, PHP is pretty horribly inexpressive. There's very little in the way of syntactic sugar, and there's little support for functional programming. You have a few basic things like map, filter, and reduce defined for arrays, and a really ugly and hackish anonymous function facility (string based, rather than code based!), but they're setup in such a way that they wouldn't be very useful for anything really advanced. Beyond that, it doesn't have closures, currying, pattern matching, array/list comprehension, laziness, etc. This isn't even to mention other much more advanced features that most modern advanced languages might feature such as higher-order parametric polymorphism with quantified types, type classes, monads, arrows, uniqueness typing, implicit parameters, continuations, concurrency, lazy streams, constraints, etc., etc., etc.

The lack of functional programming features is true for C# as well, but this is not its purpose. If you want functional programming, use a functional language.
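For readers unfamiliar with the feature list in the quote above, here is a rough illustration of the basics in Python (a stand-in language chosen only because it shows these ideas compactly; the names are invented for the example):

```python
from functools import reduce, partial

nums = [1, 2, 3, 4, 5]

# map / filter / reduce: the basic higher-order functions mentioned.
squares = list(map(lambda x: x * x, nums))          # [1, 4, 9, 16, 25]
evens   = list(filter(lambda x: x % 2 == 0, nums))  # [2, 4]
total   = reduce(lambda a, b: a + b, nums)          # 15

# A closure: make_adder captures n inside the returned function.
def make_adder(n):
    return lambda x: x + n
add3 = make_adder(3)  # add3(4) == 7

# Currying, approximated here with functools.partial.
def power(base, exp):
    return base ** exp
square = partial(power, exp=2)  # square(4) == 16

# A list comprehension, another feature from the list.
comp = [x * x for x in nums if x % 2]  # [1, 9, 25]
```

The more advanced items in the quote (type classes, monads, laziness, pattern matching) have no such one-liner analogues, which is much of the point being argued.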

Quote
5. Not scalable. PHP only recently even got OOP AFAIU. And on top of that, it's support for classes is very weak (not to mention quite slow) compared to something actually useful like in C++. OOP is by far not the only way (and possibly not the best even) to handle scalability, but what other non-OOP languages use to deal with this problem, PHP does not have either.

Clearly not applicable to C# as it's an OOP language with strong typing.

Quote
6. No metaprogramming. Nothing like C++ templates, LISP or Scheme Macros, or templates in Haskell, etc. This is a real shame because for the kinds of tasks PHP is used for, metaprogramming can tremendously cut down on boilerplate code, as well as improving efficiency, reducing errors, and improving maintainability.

C# 2.0 has generics that are similar to C++ Templates, but support a few additional features.
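Roughly, parametric generics of the kind C# 2.0 added (a `Stack<T>`, say) can be sketched with Python's typing module as a stand-in; this is illustrative only, not C# syntax or semantics:

```python
# A container parameterized over its element type T, in the spirit of
# C#'s generic collections (illustrative Python sketch, invented names).
from typing import Generic, TypeVar

T = TypeVar("T")

class Stack(Generic[T]):
    """A stack whose element type is the type parameter T."""
    def __init__(self) -> None:
        self._items: list[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

s: Stack[int] = Stack()
s.push(1)
s.push(2)
print(s.pop())  # 2
```

One relevant difference: C# generics are checked and reified by the runtime, whereas Python's annotations here are checked only by external tools.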

Quote
7. Poorly designed integration with webpage content. PHP's cut and paste style integration with web pages might have been a good idea years ago when non-standard HTML code abounded and people weren't concerned so much with structural validity of code, or the ease of maintenance of such code and integration of it with other tools, but that is no longer true today. Something like Zope's Page Templates, which allow integration of python code into completely valid XML, is a much more elegant solution. Not only does it make page maintenance way easy, and allows for separation of concerns, but it also allows people to use the same tools for dynamic pages that they use for static pages. Yes, there is a sort of Page Template implementation in the 3rd party PHPTAL library, which basically copies the syntax and semantics of the Zope version, but since PHP was not built with this style of implementation in mind, it's a lot more limited than what you can get with Zope and Python.

Again not applicable as C# is not designed as a web language.

Quote
8. Relatively weak development tools and environment. Due to the nature of the language, and the lack of a coherent runtime environment like you get with many alternatives to PHP, PHP just isn't very strong on this front. A lot of this has to do with the fact that PHP has no real non-webpage centric basis. Maybe there's something really good out there that I don't know about, but I've certainly never seen anything free or fairly cheap that is nearly as powerful as the tools I use for any other of the dozen or so languages I program with.

After you get used to the visual complexity of Visual Studio, you realise it is a very powerful IDE offering a number of very useful features. Furthermore, the Express editions of VS 2005 are now free if you get them within the next year (by which time the next version of VS will be in full development). The license is not restricted to that year period either, so you can use them as long as you want.

Quote
9. Slow. PHP just isn't very fast compared to most real languages. I don't think there's much else to say there.

I point you to this (http://www.tommti-systems.de/go.html?http://www.tommti-systems.de/main-Dateien/reviews/languages/benchmarks.html) (rather limited) performance comparison. Now again, I'll admit this test was fairly limited (ok, very), but I think it accurately depicts C#'s average performance. There is another (limited) performance comparison here (http://www.osnews.com/story.php?news_id=5602).  The consensus seems to be that C# is slower than C++, but faster than Java. So it's relatively quick compared to its main competition (Java).

OK, it seems ErikS got in before me. Anyway, like Erik I'm relatively new to functional languages, so don't destroy my credibility too badly.

As a final note, C# and Haskell are two completely different languages. They target different markets with different problem-solving needs. One is designed around the rapid and easy building of applications. The other is designed around the speedy evaluation of functions and recursion. There are problems best solved by OOP languages, and problems best solved by functional languages, but in terms of OOP languages, I think C# has to be one of the better ones.
Title: Programming Languages
Post by: Dibrom on 2005-11-09 05:48:07
Quote
Thank you for your very extensive answer. I'll answer some parts of it below...

Quote
I guess Darcs (http://abridgegame.org/darcs/) (the best RCS I have probably ever seen) and Pugs (http://www.pugscode.org/) (*the* Perl6 implementation) don't count.  These being only 2 pretty recent examples.  There are many more.

Oh, they most certainly count. Only they seem to be rather obscure programs that not many people use. Perhaps I'm wrong, but I thought Perl5 was the version people actually use, and likewise I thought CVS and SVN were the programs mainly used in the open source world (Rational Rose, StarTeam, etc. in the commercial world).


It is true that they are somewhat obscure.  Darcs is a newcomer to RCS and has not been around as long as most of the other well-established systems.  Perl6 isn't even finished yet -- the specifications are still being worked on.  But both programs are surely shining examples of Real World applications, whether people actually use them much or not, I think.  I also happen to think that both of these programs have the potential to become heavily used in non-fp circles.  They are certainly useful enough to do so, but it depends on a variety of issues.  For one thing, I certainly hope Perl6 replaces Perl5, because Perl5 is quite ugly.

Quote
Quote
But seriously, people who say that functional languages like Haskell / Lisp / ML are only used in the academic community need to take a closer look at things.  There are plenty of industrial-grade applications written in these languages, and oftentimes they are used in mission-critical places, such as telecom, Wall Street and the banking industry, energy companies, etc.

I only know of the local company Ericsson, which uses a homebrewed functional language in their telecom switches. And from what I've heard, new engineers who are introduced to that system are not too happy when they have to make changes in the existing code... Not to say that functional programming must lead to messy code (quite the opposite), but in this case I guess it has.

Do you have better examples where they use functional programming in commercial applications?


That's interesting, but what I have heard about Erlang (the language of which you speak) is quite the opposite.  From what I'd heard, Ericsson claimed programming-efficiency improvements after switching to fp through Erlang versus other alternatives, and that engineers found themselves more comfortable with the latter approach.  Some of this is discussed here (http://www.haskell.org/aboutHaskell.html), and in some papers linked to from that page.  I suspect information to that effect is also available on the Erlang page itself, although I'm too busy to dig around for it right now.

Erlang is regarded as one of the greatest success stories for "recent" implementation of modern functional programming as far as I'm aware though.

As for applications, aside from what is discussed on the page I just linked to, there is also stuff here (http://www.haskell.org/practice.html) and here (http://homepages.inf.ed.ac.uk/wadler/realworld/index.html).  There's a lot more that's available without too much digging.  Most of that list is Haskell / ML centric, but if you include LISP, Erlang, and others, it should grow quite considerably.

Quote
Quote
[...]from distributed concurrent cellular automata simulators, to genetic algorithms, to physics simulations, linguistic analysis, etc.  Currently I'm working on turning the Scheme interpreter into a full-fledged optimizing compiler with an LLVM backend, and am also working on implementing a subset of Fortress (http://research.sun.com/projects/plrg/fortress0707.pdf).

Impressive list! And they all seem to be typical academic projects


Well, I think maybe we have a different definition of "academic"

To me, something purely academic is something that is implemented purely for the sake of research with regards to the implementation itself.  If something is task centric (which most of my examples were), meaning if it was designed to perform some task beyond the sake of research alone, then it is a Real World application.

It so happens then that a lot of Real World applications can be academic in a certain sense, yes.  But I think this makes sense -- Real World shouldn't be limited to office suites or media players or video games, or simply some typical desktop application.  If we only allowed that definition, huge chunks of Unix userland (or the OS itself -- any OS that is), for example, would not be considered Real World.

Quote
I didn't say functional programming is useless. But it's less useful for some things than it is for others.


Sure.  This is definitely true.  I just think I disagree with you on where exactly it might be useful.

Quote
The areas where it is really useful are where the problems are of a recursive nature. Compilers, lexical analysis, etc. are very much so, as far as I have understood it. Also, the mathematical part of the fractal thing you mentioned is easily described with recursion.


This is true.  But it's also useful in other areas, provided the language is powerful enough.  For example, my CPU emulator is not a problem that is naturally specified recursively.  It's naturally specified as a state machine, which is practically by definition full of side effects.  Yet, even in this case, a language like Haskell was perfect (for rapid development, that is -- it would not be perfect for speed).  Admittedly this is because I used monads, which are a clever way to handle things like side effects in a purely functional manner while still allowing an imperative facade over the top.  In my case, I threaded the CPU state through a series of monadic combinators, but someone who did not understand the theory behind it and did not know the inner workings of the language would have assumed it was all imperative simply by looking at the code.
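The state-threading idea can be illustrated outside Haskell too. In this rough Python sketch (hypothetical names throughout, not the actual emulator's code), each "instruction" is a pure function from the machine state to a new state, and a program is just their composition -- no variable is ever mutated in place:

```python
# Pure state threading: every step maps an old state dict to a new one.
def set_reg(name, value):
    def step(state):
        new = dict(state)       # copy, never mutate the old state
        new[name] = value
        return new
    return step

def add_reg(dst, src):
    def step(state):
        new = dict(state)
        new[dst] = state[dst] + state[src]
        return new
    return step

def run(program, state):
    """Thread the state through each step in order."""
    for step in program:
        state = step(state)
    return state

program = [set_reg("a", 2), set_reg("b", 3), add_reg("a", "b")]
final = run(program, {"a": 0, "b": 0})
print(final["a"])  # 5
```

A state monad packages exactly this threading behind combinators, so the program reads imperatively while staying purely functional underneath.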

Quote
Also when you need to prove correctness of your program I think it's better to use functional languages.

But this is a very small and fairly uninteresting area if you want to make money on programs. Yes, for research it's very interesting, and if you happen to be a company which makes compilers then it is also, but otherwise not so much.


I disagree.  Whether you choose a recursive or imperative style is really not important most of the time.  What is important is which language can most closely match the problem domain, and which language offers the best tools to do this, and which language is the most flexible in this way, etc.

I believe there is a large category of commercial programs which could be made more quickly, more efficiently, more error-free, and more flexibly by using a better approach at the language level, which may include fp or not, but which would probably include expanding the possibility of implementation beyond the couple of choices usually considered.

In Haskell, when I need to solve a problem, I usually implement a domain specific language that suits the problem best, and then use that.  It sounds like a strange approach, but it usually results in 1/4th the code it would have taken in a typical language, and it often simply works the first time around.  Usually, it's incredibly modular too, and in a way much less restrictive than relying on the OO approach.
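As a sketch of the DSL approach (the log-filtering domain and all the names here are invented for illustration): you define a small vocabulary as ordinary data, give it meaning with an interpreter, and then solve problems *in* that vocabulary.

```haskell
import Data.List (isInfixOf)

-- A miniature embedded DSL (invented example): a little language
-- of log-line rules, plus an interpreter that gives it meaning.
data Rule = Contains String   -- line mentions this substring
          | Not Rule          -- negation
          | And Rule Rule     -- both must hold

matches :: Rule -> String -> Bool
matches (Contains s) line = s `isInfixOf` line
matches (Not r)      line = not (matches r line)
matches (And r q)    line = matches r line && matches q line

-- Problems are then expressed in the DSL itself:
grep :: Rule -> [String] -> [String]
grep r = filter (matches r)

main :: IO ()
main = mapM_ putStrLn
  (grep (And (Contains "error") (Not (Contains "ignored")))
        ["error: disk full", "error: ignored", "all ok"])
```

The interpreter is swappable (a pretty-printer or optimizer over the same Rule type is just another function), which is where the modularity comes from.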

Quote
Look at the very successful office suite from microsoft. Do you think a functional language would have been useful in the development there?


Actually, yes.

There are many areas where an fp language could have been good for an office suite.  For one, there's a lot of parsing type stuff going on, which as you just mentioned is an area fp is good at.  There's a lot of stream processing type stuff going on as well, which is another area they are good at.  Processing of large documents can be done pretty efficiently and naturally with a lazy language too, which is something not possible with conventional approaches.
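The lazy-processing point can be made concrete with a small invented example: the "document" below is an infinite lazy list of lines, yet searching it terminates, because only the demanded prefix is ever evaluated (a real program would get the same behavior from lazily read file contents).

```haskell
-- Sketch of lazy document processing (invented example): find the
-- first heading line. The comprehension is evaluated on demand, so
-- even an unbounded document works if a heading appears early.
firstHeading :: [String] -> Maybe String
firstHeading doc = case [l | l <- doc, take 1 l == "#"] of
  (h:_) -> Just h
  []    -> Nothing

main :: IO ()
main = print (firstHeading ("intro" : "# Title" : repeat "body text"))
```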

For XML processing, which is increasingly important in recent office suite implementations, Haskell can be incredibly powerful.  You might want to take a look at this (http://www.cs.vu.nl/boilerplate/) as an incredible example of the power available.  That particular example shows something which I have seen very few other languages able to accomplish -- perhaps LISP can somewhat approach that power, but often only by resorting to dynamic typing, which I see as a disadvantage for that type of problem.

For spreadsheet applications like Excel, fp should be a no brainer.  Being able to embed small snippets of fp code into cells would be a natural usage, and is completely practical in the case of a concise expressive language like Haskell or ML.  Look here (http://www.cs.kent.ac.uk/projects/pivotal/graphics.html) to see a great example of these possibilities.  This type of thing would not be practical with most procedural languages.
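A hypothetical sketch of the fp-in-cells idea (the Sheet type and cell names are invented): a "formula" is just a Haskell expression over the sheet, so cells get the full language, list comprehensions included.

```haskell
import qualified Data.Map as M

-- Invented sketch: a sheet is a map from cell references to values,
-- and a cell formula is an ordinary function over the sheet.
type Sheet = M.Map String Double

cellVal :: Sheet -> String -> Double
cellVal sheet ref = M.findWithDefault 0 ref sheet

-- What a cell containing "=SUM(A1:A3)" might denote:
sumA1A3 :: Sheet -> Double
sumA1A3 s = sum [cellVal s r | r <- ["A1", "A2", "A3"]]

main :: IO ()
main = print (sumA1A3 (M.fromList [("A1", 1), ("A2", 2), ("A3", 3)]))
```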
Title: Programming Languages
Post by: Dibrom on 2005-11-09 05:48:31
Quote
Then take a look at computationally intensive applications. There you simply can't compete with the speeds that fortran provides. Neither with the simplicity to prototype programs as matlab or mathematica does.


Ocaml and SML (with MLton) are fast enough.  Often faster than C++ (without crazy template expression optimizations), and approaching the speed of C.  See here (http://shootout.alioth.debian.org/benchmark.php?test=all?=all&sort=fullcpu).

Of course they aren't as fast as Fortran for purely numeric code, but then they also are a lot easier to program in than Fortran, and if you really need speed you can use Foreign Function Interfaces and write your inner loops in some SIMD instruction set to make things really fly.

As a side note, hopefully Fortress (which I mentioned earlier) will provide something nice which approximates parts of ML, while providing very good facilities for replacing Fortran on the other side of things.  It is, after all, supposedly a Fortran replacement.  Hopefully Sun will get this one right.

Quote
In hardware construction I can see the usefulness of fp when the constructors start to realize that the designs have to be correct in the first revision - small possibilities to patch an asic once it's produced. Then the property that you can prove correctness can be useful. But still I haven't seen that many (commercially useful) tools that can do this yet... Not any in fact, but I haven't looked very hard.


I've seen a lot of these, but I'm not very good at keeping tabs on this sort of thing since hardware isn't typically my primary interest.  FP has been seeing increasing usage in hardware design and verification all over the map though.  One interesting application that I do know of right off the top of my head (because I was looking at it a few days ago), is Lava (http://www.cs.chalmers.se/~koen/Papers/lava.ps).  I've seen many similar projects to this.

Quote
Quote
As a final note, it's worth saying that whether or not something is popular is not an indication of quality, technical proficiency, performance, etc., etc.  This should be a rather obvious point by now. 


Well, the obvious answer to that is that whether something is popular or not is an indication if you  can make money out of it.


Absolutely.  But whether you can make money out of it or not is also not a good indication of worth.  You can make boatloads of money selling snake-oil audiophile toys like $6000 power cables, for example.

Quote
Also if it's popular it will be easier to draw on other people's work for your own. I mean, there are heaps of discussion groups etc where you can look for answers when you run into a problem in a popular language, and also lots of code already written which you in many cases are free to use in your own program.


Yes.  And many people may suspect that this is not the case with a less popular language like Haskell or LISP or ML or even Erlang.  But this isn't really so.  The community is usually much smaller, but arguably those communities are more knowledgeable, more enthusiastic, more willing to help, and generally have a higher signal-to-noise ratio than the communities around very popular languages.  You may find more raw code examples in a popular language, but you will probably find fewer good ones, and you will probably have a harder time finding the level of expert advice you would get by going to, say, #haskell or something like that.

Quote
My own exposure to functional programming is not nearly as big as yours - only an introduction course and one or two after that - so there are many things about it which I don't know, but I will try to keep an open mind. And I think your posts here are quite insightful and interesting, so I don't mind if you try to convert me...


If there's one thing I've learned, it's that most undergrad courses in fp are an atrocious experience.  I think this is partly because of preconceived notions of the student before taking the course, but also a big failure on the part of the education system for CS as a whole.  And sometimes it's a fault of the professor for not realizing all of this and finding a better way to teach things.

My own experience with CS has been incredibly negative, in large part because I don't fit in at all with the curriculum (I should be taking grad courses, but can't even take certain undergrad courses I need yet, which is totally frustrating), but also because most professors don't give a shit, and most students don't give a shit.

I guess the bottom line is that your average CS course is probably not going to leave you with a great impression of fp unless you have an awesome professor or unless you are going to a school which is very heavy on theory and has a strong fp basis from which to draw.  On the other hand, if you haven't programmed at all before, and are mathematically inclined, then maybe things don't have to be so perfect to have a good experience.

Haskell is a strange language, teaching-wise.  It can be very natural and simple, which you can see here (http://conal.net/fran/tutorial.htm).  It can also be incredibly difficult and complicated, and nearly require an understanding of category theory, which you can see here (http://www.haskell.org/arrows/).  This variance is a testament to the flexibility of the language: it can encompass such a broad scope and still remain, at base, a relatively minimal language (no OO or things like that, few keywords, a very compact and consistent library, etc.).  So in the end, it requires a very carefully thought out course to get people on the right path early and to get them interested.

For what it's worth, I don't think that this would be as much of a problem if CS was the way it used to be -- more of an offshoot of discrete math and logic than an engineering enterprise.  I still believe this is the way CS should be taught, but it's not usually the way things actually happen nowadays.

So anyway, I'm not too surprised if you aren't really enthusiastic about fp.  Most people aren't.  Most of them completely hate it by the time they finish their first CS course on it.  I never took a CS course on fp (or any language for that matter -- even in my C related courses, I already knew C beforehand).  I've learned fp along with the rest of programming through self-study, and as a result, I've probably not had some of the negative experiences many people might have, and usually what I know, I've learned because I was interested in learning it.

It is rather interesting though... to this day, I probably would not have learned much beyond C, C++, Java, and Python, if it weren't for one of my best friends trying to explain this strange Haskell thing to me one day and showing me the magical quicksort in 4 lines (which can actually be made into a one-liner without too much ugliness):

Code: [Select]
qsort []     = []
qsort (x:xs) = qsort elts_lt_x ++ [x] ++ qsort elts_greq_x
   where elts_lt_x   = [y | y <- xs, y <  x]
         elts_greq_x = [y | y <- xs, y >= x]
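
For what it's worth, the near-one-liner version mentioned above just inlines the where bindings (shown here with a type signature and a small driver so it stands alone):

```haskell
-- The four-line quicksort with the where bindings inlined.
qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (x:xs) = qsort [y | y <- xs, y < x] ++ x : qsort [y | y <- xs, y >= x]

main :: IO ()
main = print (qsort [3, 1, 4, 1, 5, 9, 2, 6 :: Int])
```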


When I first saw it, I just couldn't understand how it worked, after being used to this:

Code: [Select]
void qsort( int a[], int lo, int hi )
{
 int h, l, p, t;

 if (lo < hi) {
   l = lo;
   h = hi;
   p = a[hi];

   do {
     while ((l < h) && (a[l] <= p))
         l = l+1;
     while ((h > l) && (a[h] >= p))
         h = h-1;
     if (l < h) {
         t = a[l];
         a[l] = a[h];
         a[h] = t;
     }
   } while (l < h);

   t = a[l];
   a[l] = a[hi];
   a[hi] = t;

   qsort( a, lo, l-1 );
   qsort( a, l+1, hi );
 }
}


I'm glad I took the time to figure it out.
Title: Programming Languages
Post by: ErikS on 2005-11-09 06:18:28
Quote
If there's one thing I've learned, it's that most undergrad courses in fp are an atrocious experience.  I think this is partly because of preconceived notions of the student before taking the course, but also a big failure on the part of the education system for CS as a whole.  And sometimes it's a fault of the professor for not realizing all of this and finding a better way to teach things.

My own experience with CS has been incredibly negative, in large part because I don't fit in at all with the curriculum (I should be taking grad courses, but can't even take certain undergrad courses I need yet, which is totally frustrating), but also because most professors don't give a shit, and most students don't give a shit.

I guess the bottom line is that your average CS course is probably not going to leave you with a great impression of fp unless you have an awesome professor or unless you are going to a school which is very heavy on theory and has a strong fp basis from which to draw.  On the other hand, if you haven't programmed at all before, and are mathematically inclined, then maybe things don't have to be so perfect to have a good experience.



I can't blame bad professors for our courses; in fact I think we had an extraordinarily good teacher (John Hughes) in our first computer science class, which introduced fp right away, before any imperative language.  And most faculty members here are very partial towards Haskell.  At that time I was also completely convinced that fp was the way to go, but that has changed over time.

Maybe you should come over here and do your master's.  You can choose courses freely then...

http://www.cs.chalmers.se/Cs/Education/Courses/index-en.html (http://www.cs.chalmers.se/Cs/Education/Courses/index-en.html)
Title: Programming Languages
Post by: Dibrom on 2005-11-09 06:29:40
Quote
Well I've got nothing against functional languages. They do have advantages and are great for the problems you describe. Any problem that uses heavy mathematical functions and relies heavily on recursion is natively suited to a functional programming language. And you're correct when saying that any problem that can be solved by an object-orientated language can be solved by a functional language, and vice-versa. However that doesn't always mean it will be easy.


Hrmm.  Maybe.  I suggest that any problem easily solved with OO can just as easily be solved with a sufficiently powerful fp language.  If you can come up with a good example where this is not true, I would like to know about it, because I suspect I could provide you with an easy counterexample.

Quote
The advantage of an object-orientated language is obviously the ability to break any problem down to the smallest of components that each work individually to perform a specific task. In functional programming, you have functions that depend on other functions that depend on other functions. Change one and you could break ten others. It is possible to build programs in a functional language with OOP principles in mind, but is it more difficult to do this in a functional language... that's debatable.


You are thinking of procedural languages, not functional languages.

Functional languages likewise have you break the problem down into the smallest components, except that usually those components are even more abstract and general than what you see in OO.  You do not run into the dependency problem, because you use abstract data types, parametric polymorphism (which in fp you essentially get for free thanks to the ease of functional abstraction and automatic type inference -- the same isn't usually true of OO, where the polymorphism has to be designed in with a lot more effort), and higher-order functions (which you don't even have in procedural languages).  If you need more flexibility, you use higher-order polymorphism, type classes, and quantified types.

Through type classes you get something quite analogous to overloaded functions and groups of types (something like class interfaces, but with a functional flavor), which you can compose through (multiple) inheritance in a way very similar to OO.  Unlike OO, you are not forced into the data-encapsulation, message-passing paradigm.  You can be more general, which is quite a good thing.  But if data encapsulation and message passing is your game, you have closures and records, and because you have first-class functions in an fp language, you can perfectly emulate OO if you need it.

Taking it further, with the ability to define arbitrary operators (infix functions of symbolic characters like +, ->, =, etc.) and through the use of type classes and monads, you can define a domain specific language that looks practically *exactly* the same, syntax-wise, as your favorite OO language.
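A small sketch of the type-classes-as-interfaces point (the shapes are an invented example): behavior is grouped by a class, instances supply it per type, and inference plus higher-order functions do the rest.

```haskell
-- Type class playing the role of an interface (invented example).
class HasArea a where
  area :: a -> Double

newtype Circle = Circle Double   -- radius
newtype Square = Square Double   -- side length

instance HasArea Circle where
  area (Circle r) = pi * r * r

instance HasArea Square where
  area (Square s) = s * s

-- Parametric polymorphism plus higher-order functions: works for
-- any list of any HasArea instance, no extra design effort.
totalArea :: HasArea a => [a] -> Double
totalArea = sum . map area

main :: IO ()
main = print (totalArea [Square 2, Square 3])
```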

FP is a lot more sophisticated than you give it credit for, especially if you are considering a modern language.  If you are talking about a procedural language like C, without OO and without any of the mechanisms which allow you to easily achieve the same ends as OO, then I agree with you that there is a problem.  But the way you just said it is a misunderstanding, I believe.

Quote
I looked at your criticisms of PHP. Now PHP is not a language I have a strong background in. In fact, I've only ever used it on one project and only in a very small way, so I do not really have a good understanding of the language. However, you said most of these criticisms apply to C#, so I will try to address them.


I don't think I'd say most, but I'd say many.  Obviously there are many that do not apply.

Quote
I do not believe this holds true for C#. Now admittedly, C# is only in its second iteration so it's difficult to see how the language will evolve over time. However, I find the language to be consistent. I'm not sure what you specifically meant in regards to PHP as again, I have a poor understanding of that language.


I will agree that C# is much, MUCH, better designed than PHP.  There's no comparison.  But I don't agree with many of the design decisions.

Quote
Well I'll agree that C++ is ugly is this situation. C was definitely not designed for the types of things C++ does. As such I see C# as what C++ should have been, an object-orientated language syntactically similar to C.


C# isn't significantly less ugly than C++.  It is in certain ways, but it makes up for it, like Java, in other ways by being more verbose.  OO doesn't have to be so cluttered and verbose.  Smalltalk or even LISP proved that a long time ago.

Quote
Again, I don't believe C# suffers from this problem. In fact, many things that were changed in C# were designed to reduce the amount of superfluous syntax.


It's better, but it's still too much IMO.  I suppose it's very hard to understand my complaint here though unless you have programmed in ultra-concise languages.  Using something like Haskell, ML, or APL, and then going to something like C# or Java, is a HUGE difference.  The former show that a language can be extremely compact and, at the same time, more expressive than what you see in recent popular languages.

My beef with this goes beyond aesthetic concerns, though.  In my opinion, there's so much syntactical cruft in the latter languages that it becomes more difficult to reason about program operation, simply because you can't absorb as much of the code at once -- you have to consciously filter out a lot of stuff that doesn't need to be there, in terms of brackets (FWIW, I hate this about LISP and Scheme too, and this is why I don't program much in those languages even though I recognize their power aside from the verbosity -- I often have long arguments about language terseness and expressivity with my LISP weeny friends), keywords, etc.

Quote
The lack of functional programming features is true for C# as well, but this is not its purpose. If you want functional programming, use a functional language.


Sure.  But why not have some of both?  Most OO languages use iterators these days (something like the STL for C++), which to anyone having used fp, is obviously trying to emulate higher order functions.  Why not do something like Python and actually provide lambda and list comprehension and some of the rest of it?  Why not do something like OCaml and provide a thoroughly functional ML dialect with powerful native OO at the very same time?

Languages which can offer both styles are better than languages which can only offer one style.  Oz, even more than the previous two, is an excellent example of this.  If you'd like to explore this idea more, I suggest reading this (http://www.amazon.com/exec/obidos/tg/detail/-/0262220695/qid=1131522291/sr=1-1/ref=sr_1_1/102-3701972-1572120?v=glance&s=books) book.

IMO it's stupid to constrain a language solely to one style if that style cannot at least emulate the others, when necessary, with sufficient transparency.  Most of these OO languages, when trying to do fp-type stuff with iterators, are not sufficiently transparent.
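What iterator pipelines are emulating is just higher-order functions.  In a language that has them natively, the whole "loop" is three composed library functions (an illustrative example; the function name is invented):

```haskell
-- Filter, transform, and accumulate in one transparent pipeline:
-- no iterator objects, no explicit loop state.
sumOfSquaredEvens :: [Int] -> Int
sumOfSquaredEvens = sum . map (^ 2) . filter even

main :: IO ()
main = print (sumOfSquaredEvens [1 .. 10])
```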

And FWIW, some of the features I mentioned (pattern matching, list comprehensions, laziness, constraints, continuations, etc.) do not even necessarily need to be confined to fp languages.  Oftentimes, things like this really have nothing to do with fp per se at all.  It's just that fp-type languages that follow in the footsteps of ML often provide them.  There is absolutely no good reason to leave them out of other languages like C#.
Title: Programming Languages
Post by: Dibrom on 2005-11-09 06:30:03
Quote
Quote
6. No metaprogramming. Nothing like C++ templates, LISP or Scheme Macros, or templates in Haskell, etc. This is a real shame because for the kinds of tasks PHP is used for, metaprogramming can tremendously cut down on boilerplate code, as well as improving efficiency, reducing errors, and improving maintainability.


C# 2.0 has generics that are similar to C++ Templates, but support a few additional features.


C# 2.0 has generics, but those generics are not capable of metaprogramming like C++ templates are.  Java's similar solution is likewise not capable of it.

C++ templates are quite a strange exception in this regard, because they were not designed with metaprogramming in mind, and in fact it's more of a side effect that you can actually use them as such.  Some may be surprised to learn that most of this power is due to C++'s template implementation being Turing complete.  Indeed, when it was first discovered that one can use C++ templates for metaprogramming, someone implemented LISP inside the C++ template system.  No, not as a C++ program using templates, but as a template program running inside of the C++ compiler.

It may be unclear what exactly this means if you are unfamiliar with the powerful macros of other languages like those of LISP, Scheme, and Haskell, but trust me, C#'s generics are not even in the same class.  They will provide a nice solution to some problems, but not this one.

Quote
After you get used to the visual complexity Visual Studio, you realise it is a very powerful IDE offering a number of very useful features. Furthermore, the express editions of VS.2005 are now free if you get them within the next year (by which time the next version of VS will be in full development). The license is not restricted to that year period either, so you can use them as long as you want.


I've no argument with regards to C# development tools.  I know there are good ones.

Quote
The consensus seems to be that C# is slower than C++, but faster than Java. So it's relatively quick compared to its main competition (Java).


Yes, this is the impression I've had before too.  I actually like C# a lot better than Java from what I have seen of it.

Quote
As a final note, C# and Haskell are two completely different languages. They target different markets with different problem-solving needs. One is designed around the rapid and easy building of applications. The other is designed around the speedy evaluation of functions and recursion.


Hrmm, I don't really agree with such a simple depiction of the situation.

Both C# and Haskell are full fledged, general purpose programming languages.  It is true that as far as marketing is concerned, they target different markets.  But as far as capability to solve problems is concerned, this isn't all that important of an issue.

C# may be designed with RAD in mind, but I think you would find that a seasoned Haskell programmer (or even possibly a moderately successful one) could prototype a solution to a problem in a fraction of the time it would take you in C#.  Likewise, the codebase would only be a fraction of that of C#.  I would argue, that if it were the seasoned Haskell programmer, it would also be a more general solution, and probably more modular too (despite being functional).

As some evidence to substantiate this claim, you might want to take a look at this (http://icfpc.plt-scheme.org/).  The ICFP contest is called a "functional" contest, but you can really use any language you want.  People use C fairly often, and I believe people have probably used Java and C#.  The problems for the contest are not at all what you expect for a "functional" contest.  The last one, I believe, was a contest to design an ant simulation where people designed ant models that competed against each other (in other words, IIRC, the idea was to design the best ant model and behavior for competing against opponents).  Not surprisingly, in my opinion, Haskell and OCaml usually dominate this.  I believe this is because they are better at fundamental problem solving.  Given time constraints, the "RAD" aspect of C# shows itself in these situations to be somewhat of a facade, unless you are dealing with very specific types of problems.

(Edit: Actually, this (http://www.cis.upenn.edu/~plclub/contest/index.php) one from 2004 is the ant one)

This is, of course, assuming that you are working on a problem for which there is a similar level of library support.  I think there is a reasonable correspondence for most situations.

Perhaps the one exception where this may not hold true is with GUI programming.  A lot of this is going to depend upon your tool usage outside of the language.  Haskell can make use of Glade through Gtk2hs, but I have a feeling that C# via Visual Studio will be a better solution there.  It may be the case, though, that in the end the ease of implementing the rest of the solution in Haskell makes up for inferior GUI construction tools.

Quote
There are problems best solved by OOP languages, and problems best solved by functional languages, but in terms of OOP languages, I think C# has to be one of the better ones.


Strictly speaking, there is some truth to this.  However, a powerful enough functional language can emulate all of what an OO language can, especially once you recognize that closures are equivalent to objects.  With algebraic data types, as I described above, you have everything OO has.  But this is not true with OO -- you can't nearly as easily emulate the same stuff the other way around.  And also strictly speaking, the cases where OO is *clearly* a good choice (stuff like network transparency, agent-based simulations, etc.) are not the domains most people recognize OO to be superior in -- instead they usually think it is best for "almost everything."  This simply isn't true.
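The closures-are-objects point can be sketched in a few lines (the Counter type and its "methods" are invented for illustration): a record of actions closing over hidden state behaves like an object with a private field.

```haskell
import Data.IORef

-- An "object" as a record of closures over a private mutable cell.
data Counter = Counter { increment :: IO ()
                       , current   :: IO Int }

newCounter :: IO Counter
newCounter = do
  ref <- newIORef 0                        -- the "private field"
  return Counter { increment = modifyIORef ref (+ 1)
                 , current   = readIORef ref }

main :: IO ()
main = do
  c <- newCounter          -- "instantiate"
  increment c              -- "method calls"
  increment c
  current c >>= print
```

Nothing outside newCounter can touch ref except through the two "methods", which is exactly the encapsulation OO provides.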
Title: Programming Languages
Post by: Dibrom on 2005-11-09 06:39:34
Quote
I can't blame bad professors in our courses, in fact I think we had an extraordinary good teacher (John Hughes) in our first computer science class which introduced fp right away before any imperative language.


Oh, wow.  I am jealous of you now  I would have loved to have taken a class from someone actually interested in fp to the degree Hughes is.  I've been reading some of his work lately (about Arrows specifically), and it is very, very interesting to me.

Quote
And most faculty members here are very partial towards Haskell. At that time I was also completely convinced that fp was the way to go, but that has changed over time.

Maybe you should come over here and do your master  You can choose courses freely then...

http://www.cs.chalmers.se/Cs/Education/Courses/index-en.html (http://www.cs.chalmers.se/Cs/Education/Courses/index-en.html)


Yeah, I've heard that there's a bit of support for Haskell there.

I in fact may try to go to school in Europe somewhere once I am finished with the first part of my education here.  Right now, I do not know for sure what I will do, but I am thinking quite a bit about language research as a serious possibility.  Part of why I am so partial to Haskell is that it has sort of served as a launching pad for me to begin to learn some very difficult aspects of theoretical computer science and in some cases math (stuff like category theory, theory of types, denotational semantics, etc.) that I would not have otherwise had exposure to.  To find out that I like this stuff as much as I do, and to have it all fit together so nicely, is part of my enthusiasm over fp and related concepts (though I find the idea of multi-paradigm kernel languages like Oz to be quite fascinating as well).
Title: Programming Languages
Post by: kl33per on 2005-11-09 08:16:15
Quote
It may be unclear what exactly this means if you are unfamiliar with the powerful macros of other languages like those of LISP, Scheme, and Haskell, but trust me, C#'s generics are not even in the same class. They will provide a nice solution to some problems, but not this one.

Indeed, I just spent some time looking into metaprogramming, and it's a much more powerful tool than C#/Java generics.  They're not even the same type of tool, really.

However, C#2.0 does provide a number of other FP-style programming paradigms.  For example, C# has anonymous delegates (closures, but not quite closures).  This allows for currying, and a whole lot of other nice FP-style things.  See here (http://blogs.msdn.com/sriram/archive/2005/08/07/448722.aspx).  C#3.0 will also add lambda expressions like:
Quote
listOfFoo.Where(delegate(Foo x) { return x.size>10;}) becomes listOfFoo.Where(x => x.size>10);


Quote
Yes, this is the impression I've had before too. I actually like C# a lot better than Java from what I have seen of it.

I did 1.5 years of Java programming before learning C#, and let me tell you C# is a much nicer language.  It's not just the language though, but the whole system.  The libraries, core language features, the IDEs, the documentation... C# and .NET have seemingly one-upped everything from Sun.  This is, however, typical Microsoft style: take the core ideas of someone else's product and re-envision them into a wholly better product (or at least try to).

Quote
C# may be designed with RAD in mind, but I think you would find that a seasoned Haskell programmer (or even possibly a moderately successful one) could prototype a solution to a problem in a fraction of the time it would take you in C#.

Perhaps, but the beginner programmer certainly couldn't.  You can't just pick up a functional language and start using it unless you have a solid understanding of functional languages in general and mathematics.  The concept of objects is much easier to grasp, and learning most object-orientated languages is a relatively easy affair.  Does it pay to learn a functional language through-and-through... certainly, but it takes more time and effort to learn a functional language than an OOP language.  That being said, my University starts with LISP (although I started with Java at university, and VB6 in high school).

Quote
C# isn't significantly less ugly than C++. It is in certain ways, but it makes up for it, like Java, in other ways by being more verbose. OO doesn't have to be so cluttered and verbose. Smalltalk or even LISP proved that a long time ago.

I think ugly is the wrong word.  I think C# is more consistent than C++.  Being more verbose isn't necessarily a bad thing either.  The whole point of high-level languages is to:
a. make it easier to program, and
b. make it easier to read.
Verbose languages are certainly easier to read and understand than something like LISP, particularly for someone who is new to the language.  Of course verbose languages mean more typing, and that means more time, but IDEs do half the typing for you now anyway.  BTW, LISP's brackets annoy me too.

Quote
I will agree that C# is much, MUCH, better designed than PHP. There's no comparison. But I don't agree with many of the design decisions.

Fair enough.  I know plenty of people who don't agree with the design decisions in C#.  There are things that annoy me too.

Quote
Perhaps the one exception where this may not hold true is with GUI programming. A lot of this is going to depend upon your tool usage outside of the language. Haskell can make use of Glade through Gtk2hs, but I have a feeling that C# via Visual Studio will be a better solution there. It may be the case though that in the end, the ease of implementing the rest of the solution to the problem in Haskell may make up for inferior GUI construction tools however.

C# certainly does significantly improve on C++ in terms of controlling object behaviour.  That said, I'm looking forward to the next version of VS.  It will provide fully integrated support for WinFX, with WPF for graphics and XAML to build forms.  I've watched a few Channel 9 videos of late, and XAML is certainly going to make form design and connecting program logic to the GUI significantly easier than in C++.  But like you say, this is down to tools.

Quote
Sure. But why not have some of both?...

Is that like building the car/house/insert-your-favourite-inanimate-object-here that has everything?  The whole strength of .NET is that one minute you can be writing in C#, the next in Perl, then in J#, then in C++.  When you can so readily switch between languages, why build a language that can do absolutely everything?  Why not target your languages more closely to specific problems?

Quote
Hrmm. Maybe. I suggest that any problem easily solved with OO can just as easily be solved with a sufficiently powerful fp language. If you can come up with a good example where this is not true, I would like to know about it, because I suspect I could provide you with an easy counterexample.

That's like saying I could drive faster than any car if I had a powerful enough motorcycle.  Of course you can.  That doesn't mean FP/OOP languages are good/bad; it just means they're different.  I will, however, look for an example where FP languages are not richly suited to the task.  And I am sure you could provide an easy counterexample; there are many situations not suited to OOP languages.
Title: Programming Languages
Post by: Dibrom on 2005-11-09 08:25:11
Quote
Quote
Sure. But why not have some of both?...

Is that like building the car/house/insert-your-favourite-inanimate-object-here that has everything.  The whole strength of .NET is that one minute you can be writing in C#, and the next in Perl, then in J#, then in C++.  When you can so readily switch between languages, why build a language that can do absolutely everything?  Why not more closely target your languages to specific problems.
[a href="index.php?act=findpost&pid=340662"][{POST_SNAPBACK}][/a]


Hrmm, I don't think that's what it's like.  Oz, for example, is a much less complex language than C#, yet it is significantly more general.  You should look at that book I posted a link to if you ever get a chance.

Designing a language general enough to encompass a wide variety of computational styles doesn't necessarily imply you have to go the kitchen-sink route -- it just means you need a very clever design.

As for why you'd want all of that in a single language: it comes in incredibly handy when you want to keep things clean, concise, and maintainable.  You can break things into very small components, yet use different styles.  Sometimes these programmatic units are too small to be worth implementing in an entirely different language, and language interfaces are often not as nice as you'd want, adding too much overhead to make that approach viable otherwise.

If you ever use a language that is suited to domain-specific language implementation, you'll see why having this sort of flexibility is so incredibly useful.  Sure, you can get some of that with .NET, but it comes at a high price, both conceptually and tool-wise.
Title: Programming Languages
Post by: kl33per on 2005-11-09 08:38:17
Quote
You should look at that book I posted a link to if you ever get a chance.
[a href="index.php?act=findpost&pid=340664"][{POST_SNAPBACK}][/a]

Bookmarked.  That's now two books I have to read over my end-of-year break from university.  The other one is that C book you recommended a while ago.
Title: Programming Languages
Post by: Dibrom on 2005-11-09 08:42:46
Quote
Quote
You should look at that book I posted a link to if you ever get a chance.
[a href="index.php?act=findpost&pid=340664"][{POST_SNAPBACK}][/a]

Bookmarked.  That's now two books I have to read over my end-of-year break from university.  The other one is that C book you recommended a while ago.
[a href="index.php?act=findpost&pid=340669"][{POST_SNAPBACK}][/a]


Nice.  Both are good, but this one should probably prove more worthwhile if you don't plan on doing much plain C programming.

In fact, the authors were giving away PDF revisions of the book while it was being written some time back.  There is probably a late-but-not-final revision still floating around somewhere if you look.

Another good book in the same spirit as Concepts... is the SICP (http://mitpress.mit.edu/sicp/full-text/book/book.html) (although Scheme-based rather than Oz-based), which is available for free online.  Some view Concepts... as being a sort of modern SICP.

At any rate, regarding C#, I should clarify that I don't think it's crap -- that was just strong language in the context of a general frustration I was expressing.  I think that for what it is, it's probably a pretty good iteration.  If I ever have to use that type of language, I'd much rather use it than Java.
Title: Programming Languages
Post by: kl33per on 2005-11-09 09:06:25
Quote
Another good book in the same spirit as Concepts... is the SICP (although Scheme based rather than Oz based), which is available for free online. Some view Concepts... as being a sort of modern SICP.

Aaah, another bookmark, oh well.


Quote
At any rate, regarding C#, I should clarify that I don't think it's crap -- that was just strong language in the context of a general frustration I was expressing. I think that for what it is, it's a probably a pretty good iteration. If I ever have to use that type of language, I'd much rather use it than Java.

Well, I can settle for that.

Seeing as we're talking about programming languages, what have you been building your new music player in?  Is it all Objective-C + Cocoa, or is there some FP in there too (or can you fake FP with Obj-C)?
Title: Programming Languages
Post by: Lyx on 2005-11-09 15:44:48
About functional vs. OO languages:

Maybe the question is stupid, but why is it actually necessary to choose between the two? Take Ruby, for example: technically, it has no functions, but methods are very similar to functions. So Ruby only has methods and objects, but the trick is that you can use Ruby as a functional language as well and ignore the OO part if you want to. Why? Because you start out with a "root object": the root object is basically the environment in which your code is executed. This means you can add methods seemingly without attaching them to any object and use them just like functions; behind your back, these "unassigned" methods are just bound to the root object.

I do find this approach very interesting, because you can code however you want. You don't need to care about OOP vs. functional programming: if you want to program with functions only, you can do that; if you want to code with objects and classes, you can do that; and you can also do both at once, as well as start with the functional approach and later add OO aspects to your code without having to rewrite much.

I definitely don't consider Ruby to be a pretty and good language, but I like the above aspect of it very much: you can just code in your own style, without being forced to decide on paradigms beforehand.
Title: Programming Languages
Post by: kuniklo on 2005-11-09 17:09:56
I was very taken with the whole idea of functional languages for many years but I've gradually grown more suspicious of them.  Advocates of functional languages make all kinds of claims for their superiority but offer very little proof.  There's this unstated assumption that academic programmers are inherently more intelligent and sophisticated than their industrial counterparts and the only thing holding back the success of functional languages is the poor taste of the working programmer.

It seems more likely to me that people working in academia enjoy a life free of the constraints imposed on the working programmer and that many of the claimed advantages of those languages don't translate to typical software development projects.  Take Haskell, for instance.  A very elegant language, but lazy evaluation as the *default*?  What a nightmare to debug memory problems.  All the real Haskell code I've seen is littered with strictness annotations.  OCaml seems more practical at first but the type system can make C++ templates seem straightforward.  Lisp is much more pragmatic and, IMO, the most genuinely useful of the whole family but it scares off too many people to form the kind of community a language needs to thrive without the backing of a Sun or a Microsoft.

These days I'm much more taken with the "scripting" languages, particularly Ruby.  Very practical, cleanly organized, none of the complexity of static typing etc.  I've learned a lot from studying functional languages and I think it's made me a better programmer but I also have to confess I'm glad I don't have to work in one.  I turned down an Erlang job a few years ago and with hindsight I think it was the right move.
Title: Programming Languages
Post by: Dibrom on 2005-11-09 17:11:37
@Lyx:

What you describe isn't quite functional programming.

Ruby is capable of emulating some important bits of fp with its block/closure mechanism, but that alone doesn't give you fp.  And fp itself isn't just about programming with functions.

Probably the two most defining features of fp are higher-order functions (which Ruby can kinda sorta do), and referential transparency (which Ruby doesn't have).
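To make that distinction concrete, here is a small Ruby sketch (illustrative code, not from the post): the first half shows the higher-order-function side Ruby handles reasonably well; the second half shows why Ruby is not referentially transparent -- the same expression can change value when state is mutated underneath it.

```ruby
# Higher-order functions: lambdas can be passed around and returned.
adder = lambda { |n| lambda { |x| x + n } }  # a function returning a function
add5  = adder.call(5)
puts add5.call(3)                       # 5 + 3

doubled = [1, 2, 3].map { |x| x * 2 }   # map takes a block (a function)
puts doubled.inspect

# No referential transparency: `list.sum` is not a fixed value for a
# fixed expression, because the state it depends on can be mutated.
list = [1, 2, 3]
first_sum = list.sum
list << 4                               # in-place mutation
second_sum = list.sum
puts first_sum == second_sum            # same expression, different value
```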
Title: Programming Languages
Post by: Dibrom on 2005-11-09 17:20:22
Quote
Take Haskell, for instance.  A very elegant language, but lazy evaluation as the *default*?  What a nightmare to debug memory problems.


There are definitely some trade-offs to going lazy by default.  Simon Peyton Jones had some interesting observations on this here (http://research.microsoft.com/users/simonpj/papers/haskell-retrospective/HaskellRetrospective.pdf).  Sure, basically, it's not all roses.  But it has many pluses.  I personally tend to like the approach in Alice ML, where you can get laziness quite easily if you need it, but everything is strict by default.  However, Alice ML is not as flexible as Haskell in many regards, unless you are doing constraint-based programming or something with distribution and concurrency.
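The strict-by-default, opt-in-lazy style can be illustrated even in Ruby (2.0+), whose `Enumerator::Lazy` makes laziness something you explicitly ask for -- roughly the inverse of Haskell's lazy-by-default design (an analogy only, not Alice ML syntax):

```ruby
# Strict evaluation of this pipeline would loop forever over the
# infinite range; with .lazy, each stage produces values on demand.
multiples = (1..Float::INFINITY).lazy
              .map    { |x| x * 2 }       # 2, 4, 6, 8, ...
              .select { |x| x % 3 == 0 }  # keep multiples of 3
puts multiples.first(3).inspect           # forces just enough work
```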

Quote
OCaml seems more practical at first but the type system can make C++ templates seem straightforward.


I really have to disagree completely here.  Have you seen advanced C++ template programming?  Take a look at Boost.  Then come back and tell me OCaml's rather simple type system is less straightforward.  Strong, static typing via some derivative of Hindley-Milner, with the kind of annotations that OCaml uses, is in my opinion not much more difficult to understand than the type system in much less advanced languages -- that is, unless you're not used to thinking about static typing.  Most of the LISP users I know find ML very difficult because of this.

Quote
Lisp is much more pragmatic

Quote
and, IMO, the most genuinely useful of the whole family but it scares off too many people to form the kind of community a language needs to thrive without the backing of a Sun or a Microsoft.


For reference, are you talking about Scheme or Common LISP?  Sure, both are good languages.  I'd argue as to whether they're more genuinely useful than ML though, but that's a debate that probably won't see any resolution.

Quote
These days I'm much more taken with the "scripting" languages, particularly Ruby.  Very practical, cleanly organized, none of the complexity of static typing etc.[a href="index.php?act=findpost&pid=340753"][{POST_SNAPBACK}][/a]


Ruby is nice.  But when strong static typing goes, and algebraic data types with it, you lose a lot of flexibility overall.  It might make the rest of the language seem easier to use for some things, though.
Title: Programming Languages
Post by: kuniklo on 2005-11-09 17:42:01
Quote
I really have to disagree completely here.  Have you seen advanced C++ template programming?  Take a look at Boost.  Then come back and tell me OCaml's rather simple type system is less straightforward.


I don't think I'd call OCaml's type system "simple".  Have you actually used it much?  When people start nesting functors and using polymorphic variants, the type inferencer can start spitting out some really scary stuff.  Then it becomes a game of finding all the right type declarations to keep it happy.  A bunch of the real-world OCaml code I've seen is littered with type prefixes like string_ on fields, etc., to help keep this stuff straight.  The object system exists as almost an alternate type system alongside the underlying ML core, and things can get really weird when you start mixing them.

Quote
For reference, are you talking about Scheme or Common LISP?  Sure, both are good languages.  I'd argue as to whether they're more genuinely useful than ML though, but that's a debate that probably won't see any resolution.


CL by far.  Scheme is a pretty language for research and teaching, but it's *far* less useful than CL for real work.  I can think of quite a few extremely complex and large CL apps out in the world now, but pretty much nothing for Scheme.  It's not obvious at first, but CL is an extremely practical language.  Much of what's in there is there because it makes writing real code easier, unlike Scheme, which always chooses the elegant over the practical.  People point to the conciseness of the Scheme spec, but it's not hard to define a language that simple when you leave so much out.

Quote
Ruby is nice.  But as strong static typing leaves, and with it algebraic data structures, you lose a lot of flexibility overall.  It might make the rest of the language seem easier to use for some things though.


People always say this but I'm not sure I believe it.  Surely, in an absolute sense, duck typing is far more *flexible* than any static typing system could be.
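A quick Ruby illustration of the duck-typing flexibility being claimed (hypothetical classes): no shared superclass or declared interface is needed, only the right method.

```ruby
class Duck
  def speak; "quack"; end
end

class Robot
  def speak; "beep"; end
end

# Works for anything that responds to #speak -- no type declarations,
# no common ancestor, nothing checked until the call actually happens.
def make_speak(things)
  things.map { |t| t.speak }
end

puts make_speak([Duck.new, Robot.new]).inspect
```

The flip side, as argued elsewhere in the thread, is that nothing stops you from passing in an object that *doesn't* respond to the method until runtime.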

I think in all language debates it's important to keep in mind that no language is the "best" in any absolute sense.  OCaml and Haskell are very nice for writing compilers but I don't think it's an accident that nobody's written a web browser or photoshop or itunes or a vorbis encoder in them.  The research languages are valuable in that they provide a testing ground for new ideas and the best ideas tend to percolate down to the mainstream eventually but I think there are good reasons why they tend to stay in the laboratory.
Title: Programming Languages
Post by: Dibrom on 2005-11-09 18:00:05
Quote
Quote
I really have to disagree completely here.  Have you seen advanced C++ template programming?  Take a look at Boost.  Then come back and tell me OCaml's rather simple type system is less straightforward.


I don't think I'd call OCaml's type system "simple".  Have you actually used it much?  When people start nesting functors and using polymorphic variants the type inferencer can start spitting out some really scary stuff.  Then it becomes a game of finding all the right type declarations to keep it happy.  A bunch of the real-world OCaml code I've seen is littered with type prefixes like string_ on fields etc to help keep this stuff straight.  The objects system exists as almost an alternate type system alongside the underlying ML core, and things can get really weird when you start mixing them.


Yes, I've used it quite extensively.  You are right, there are parts of its type system that are non-trivial.  But polymorphic variants, functors, and similar features (which I have used) do not need to be used for most typical programs either.  I'd expect an advanced programmer in a functional language to be likewise advanced in dealing with the type system.  For many tasks, however, you rarely have to deal directly with the type system and can rely mostly on type inference.  As the complexity of your task grows, the complexity of the tools (the type system) grows along with it to some extent -- no real surprise there.

And at the end of the day, I still find OCaml's type system significantly less difficult than Haskell's when you really push them to the edge of their capabilities.  This should also be no surprise, because Haskell's type system is simply more advanced through and through (rank-n polymorphism, fundeps, GADTs, existential types, type classes, etc.).  Again, though, most of these features won't be needed for simple programming tasks.

Quote
Quote
For reference, are you talking about Scheme or Common LISP?  Sure, both are good languages.  I'd argue as to whether they're more genuinely useful than ML though, but that's a debate that probably won't see any resolution.


CL by far.  Scheme is a pretty research and teaching language but it's *far* less useful than CL for real work.  I can think of quite a few extremely complex and large CL apps out in the world now but pretty much nothing for Scheme.  It's not obvious at first, but CL is an extremely practical language.  Much of what's in there is there because it makes writing real code easier, unlike Scheme which always chooses the elegant over the practical.  People point to the conciseness of the Scheme spec but it's not hard to define a language that simple when you leave so much out.


That's pretty much what I expected.  Yes, Scheme is less practical than CL, if only for the library support.  I admit I tend to like the design of Scheme a hell of a lot better than CL on a number of points, though.  I don't anticipate using either for large projects -- they remain a little more like a novelty to me, or something to experiment in, than anything else.  For me, giving up strong static typing and most of the other features I've grown used to in ML-family languages is too much, unless I'm going to program in a wholly different language paradigm to begin with.

But it is interesting that you consider CL functional -- most of my LISP-using friends tell me (and I infer from many other places) that CL is rarely used functionally.  Although you can do it, most people don't.  Scheme is often used more functionally (obviously), but on the whole its usage is still less pure than ML's, and the same goes for ML vs. Haskell or other purely functional languages.

Quote
Quote
Ruby is nice.  But as strong static typing leaves, and with it algebraic data structures, you lose a lot of flexibility overall.  It might make the rest of the language seem easier to use for some things though.


People always say this but I'm not sure I believe it.  Surely in an absolute sense duck typing is far more *flexible* than any static typing system could be.


It's flexible in the sense that "anything goes", sure.  But in the sense of doing something complex in a safe way that's easy to reason about (usually important for complex things!) and easy to maintain, it's not.

Quote
I think in all language debates it's important to keep in mind that no language is the "best" in any absolute sense.  OCaml and Haskell are very nice for writing compilers but I don't think it's an accident that nobody's written a web browser or photoshop or itunes or a vorbis encoder in them.  The research languages are valuable in that they provide a testing ground for new ideas and the best ideas tend to percolate down to the mainstream eventually but I think there are good reasons why they tend to stay in the laboratory.
[a href="index.php?act=findpost&pid=340762"][{POST_SNAPBACK}][/a]


I never said that they were the best languages.  I never even said functional programming was best.  I explicitly stated many times that I tend to use many non-functional languages -- but for some reason, people latched on to the functional parts of my comments more than others, and the discussion has slowly gone off in that direction.

Someone has written a web browser in Haskell, FWIW, and probably in OCaml.  The latter you are probably correct about.  But I believe this is not so much due to the languages as to API issues.  The applications you list typically depend heavily on OS-specific APIs for network code, image code, audio code, and the GUI toolkit.  Most functional languages have bindings for large parts (or even all) of those, but they do not match what is provided as the "native" implementation language for the OS, and this is the reason I believe you don't often see those types of programs in fp.  Again, this isn't really a mark against fp, because aside from Java, most of the world is still stuck in C or C++ mode in that regard.  The situation is pretty similar for most of the "scripting" languages as well, although not quite as bad, since there is a little more interest in supporting them (I believe because they are on the whole quite similar to languages most people are already familiar with).
Title: Programming Languages
Post by: Lyx on 2005-11-09 18:09:15
Quote
These days I'm much more taken with the "scripting" languages, particularly Ruby.  Very practical, cleanly organized, none of the complexity of static typing etc.

While I too am mostly interested in this kind of language, I disagree that Ruby is cleanly organized.  Sure, it has achieved a lot without a single revamp, but even Matz now agrees that it is suffering from growing pains, that some of its behaviour (like local vars) is weird, and that some of its syntax for advanced features like hashes, keyword arguments, etc. is plainly unnecessarily complicated.  I'm very interested in how RITE/Ruby 2 will turn out -- but currently they seem to lack the necessary manpower to do it, and I fear Ruby 2 may become something like Duke Nukem Forever ;)

What I'm more afraid of, however, is that in all those plans for Ruby 2 and its syntax redesign, not a single word was said on the topic of nested namespaces.  I consider the :: notation the ugliest piece of Ruby's syntax.  And don't get me started on the limitations and the unnecessarily complicated creation and handling of nested namespaces.

So, I do like Ruby a lot -- coding with it is, overall, just "fun"; it feels intuitive.  I completely agree with the intro of the Pragmatic Programmers' guide to Ruby: it brings the fun back into coding.  Unfortunately, however, I don't consider Ruby to be ready yet.  The overall concept and philosophy behind it are great, but it needs a revamp and polishing.  Take the roadmap features of Ruby 2, RITE, and namespace/module handling as it is done in Python, and it would be my dream language.
Title: Programming Languages
Post by: kuniklo on 2005-11-09 18:26:01
Quote
That's pretty much what I expected.  Yes, Scheme is less practical than CL, if only for the library support.  I admit I tend to like the design of Scheme a hell of a lot better than CL on a number of points though.


Scheme certainly has fewer warts, but Lisp was pretty simple in the beginning too.  Languages tend to become less elegant as they evolve.

Quote
But it is interesting that you consider CL functional -- most of my LISP using friends tell me(and from what I infer from many other places) CL is rarely used functionally.  Although you can do it, most people don't.  Scheme is often used more functionally (obviously), but on the whole its usage is still less pure than ML, and the same for ML vs. Haskell or other purely functional languages.


CL's syntactic flexibility makes it a truly multi-paradigm language.  You can write OO, functional, imperative, or logic code with CL.  I think it's probably true that most CL code isn't particularly functional, but I'd argue that's because the functional paradigm isn't really all that practical for a lot of things.  Mutable state is usually just more straightforward than a more elegant but less tractable functional implementation.  Have you read Okasaki's "Purely Functional Data Structures"?  Some very clever stuff in there, but I found myself shaking my head halfway through at the convolutions he has to go through to do relatively simple things.  I like the functional, expression-oriented style in the small, but I think it breaks down when you start trying to treat big complex aggregates in a functional way.
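The core trick in Okasaki's book -- persistent structures that "update" by building a new cell while sharing the untouched remainder -- can be sketched in a few lines of Ruby (illustrative names, not from the book):

```ruby
# An immutable cons cell; freezing enforces the no-mutation discipline.
Cons = Struct.new(:head, :tail)

def cons(h, t)
  Cons.new(h, t).freeze
end

def list_to_a(list)
  list.nil? ? [] : [list.head] + list_to_a(list.tail)
end

old     = cons(1, cons(2, cons(3, nil)))
updated = cons(99, old.tail)        # "replace" the head: one new cell

puts list_to_a(old).inspect         # the original is untouched
puts list_to_a(updated).inspect
puts updated.tail.equal?(old.tail)  # the tail is shared, not copied
```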

Quote
Quote

People always say this but I'm not sure I believe it.  Surely in an absolute sense duck typing is far more *flexible* than any static typing system could be.


It's flexible in the sense that "anything goes", sure.  But in the sense of doing something complex in a safe way that's easy to reason about (usually important for complex things!) and easy to maintain, it's not.


It seems that this point always gets argued in a vacuum.  Large and complex programs have been written in statically and dynamically typed languages, so I think it's not really clear which one has the advantage.  Perhaps other factors are more important?

Quote
Someone has written a web browser in Haskell FWIW, and probably OCaml.  The latter you are probably correct about.  But this I believe is not so much due to the languages as it is due to API issues.  The applications you list are quite typically heavily dependent upon OS specific API's for network code, image code, audio code, and GUI toolkit.  Most functional languages have bindings for large parts (or even all) of those, but they do not match what is often provided as the "native" implementation language for the OS, and this is the reason I believe you don't see those types of programs in fp often.  Again, this isn't really a mark against fp though, because aside from Java, most of the world is still stuck in C or C++ mode on that regard.  The situation is pretty similar for most of the "scripting" languages as well, although not quite as bad since there is a little more interest in supporting them (I believe this is due to the fact that they are on the whole quite similar to languages most people are already familiar with).


People have written toy browsers in OCaml, Haskell, and even Emacs Lisp, but I'm talking about something to compete with Firefox.  Your point about the C bias of the underlying OS is valid, but I don't think it's the biggest issue.  Honestly, I think people put too much emphasis on programming languages.  I think the biggest predictor of a project's success is the skill and motivation of the programmers.  I think the biggest problem for ML/Haskell/Lisp is that they're too weird and counter-intuitive to attract the kind of grassroots support a language needs to survive without the backing of a big company.  Somebody's got to write all the essential but unglamorous libraries and documentation, and I think it's instructive that Ruby has made more progress in two years than OCaml or Haskell has in toto.
Title: Programming Languages
Post by: snookerdoodle on 2005-11-09 18:42:37
I wrote my first "program" in RPG II in 1974.  In college (TAMU '80), all of my assignments were in Fortran, except process control stuff in some obscure assembler.  Since then, I've programmed professionally in several assembly languages, Basic variants, scripting variants, lisp, C, Objective C, C++ (starting before they had templates), C (starting with original K&R pre ANSI), Java... Blah blah blah blah.

I've read a couple of books on C# and like it, but haven't actually tried to use it. I've done a couple of things (not professionally) in Smalltalk and think it would be the Only Language in a Just World.

(Oh, I can't believe none of you mentioned Just About Every Function in emacs as a use of lisp.  Sort of.  Not to start a vi vs. emacs war.)

After all of that blah blah blah, the bottom line is this: you are going to write your best, most creative *and* most reliable code in the language you know best.  You're communicating instructions to a dumb machine, for heaven's sake.  Yes, new languages are Good (Java and C# make some aspects of reliability *easier* than, say, assembly language), but it's what you know that counts.  Yes, some problem spaces just should not be attempted in certain languages, and sometimes it's time to learn a new language.

A semi parallel: I know folks who are fluent (really!) in multiple spoken languages. They don't just "get around", they know the grammars. But they're an aberration. The Rest Of Us are wise to stick with our first language when possible. We make mistakes when we try to communicate in any other language.

Why it's not a good parallel: programming languages are vastly simpler than spoken ones. Sometimes, it's worth learning a new one when the new one makes it easier to express your desires to the computer.

Here's another rub: if you're going to argue about which language is better, you need some criteria.  Ease of expression is a Big One.  You may think your language is The Best, but I'm here to tell you that the only way to measure that is to do some random sampling and see how easily folks pick it up.  If 100 random people all have a harder time picking it up than, say, C#, you may be tempted to conclude you found 100 idiots, but the truth really will be that your new language sucks.  If you don't have the tests to prove your language is easier and more expressive, well, see Ye Olde Rule 8.

And, after all of that, I'll admit that just about every language I program in (I currently stick to C++, Java, and Javascript) sucks.

Second, wide acceptance IS a fair criterion. Why don't I waste my time learning Latin when it's Such A Beautiful Language ™? Because nobody speaks it (unless, of course, you have alternative motivations). For most folks and for this reason, Latin sucks and so does your New Obscure Programming Language. And no, I don't have one in mind. And yes, I know that the ease of expression issue (which, again, can only be addressed by testing with large numbers of people) can turn today's obscure language into in the Lingua Franca of tomorrow.

Oh, FLAC R0X0RZ!

Mark
Title: Programming Languages
Post by: Lyx on 2005-11-09 19:07:28
Quote
I've read a couple of books on C# and like it, but haven't actually tried to use it. I've done a couple of things (not professionally) in Smalltalk and think it would be the Only Language in a Just World.

You should check out "Io" in that case -- you'd probably like it.  Personally, while I've been fascinated by the Smalltalk approach, I always found myself doing too much micromanagement and losing track of the big picture when using Smalltalk-like languages.
Title: Programming Languages
Post by: kuniklo on 2005-11-09 19:07:53
Quote
While i as well am mostly interested in this kind of languages, i disagree that ruby is cleanly organized - sure, it has achieved alot without a single revamp, but even matt now agrees that it is suffering from growing pains, that some of its behaviour (like local vars) is weird and that some of its syntax for advanced features like hashes, keyword-arguments, etc. are plainly unnecessary complicated. I'm very interested in how RITE/ruby2 will turn out - but then again, currently they seem to lack the necessary manpower to do it.... and i fear ruby2 may become something like duke nukem forever


I don't think I'd call a hash an advanced feature.  Ruby's hash syntax is essentially the same as Perl's or Python's so I'm not sure I understand what you object to about it.  The keyword syntax for functions is a hack, and I do hope they do something about it for Ruby 2.

I think Ruby gets the important things right though:

1. deep and pervasive object orientation
2. consistent method naming
3. simple but flexible object system
4. dead simple C api
5. consistently expression, not statement, oriented
6. abundant syntactic sugar without perl's excesses
7. good documentation and third-party library support

It's not perfect but I very strongly prefer it to the alternatives.  Matz set out to make a language that was elegant and enjoyable to use and I think he succeeded.
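Point 5 of the list is easy to demonstrate (plain Ruby): control structures are expressions that yield values, so they can sit directly on the right-hand side of an assignment.

```ruby
# `if` and `case` are expressions, not statements: each evaluates to
# the value of its chosen branch.
status = if 2 + 2 == 4
           "sane"
         else
           "broken"
         end

label = case 7
        when 1..5 then "small"
        else "big"
        end

puts status
puts label
```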
Title: Programming Languages
Post by: Lyx on 2005-11-09 19:20:40
Quote
The keyword syntax for functions is a hack, and I do hope they do something about it for Ruby 2.

That's the plan.
http://www.rubyist.net/~matz/slides/rc2003/ (http://www.rubyist.net/~matz/slides/rc2003/)
Title: Programming Languages
Post by: Dibrom on 2005-11-09 19:21:26
Quote
CL's syntactic flexibility makes it a truly multi-paradigm language.  You can write OO, functional, imperative or logic code with CL.  I think it's probably true that most CL code isn't particularly functional, but I'd argue that's because the functional paradigm isn't really all that practical for a lot of things.  Mutable state is usually more straightforward than a more elegant but less tractable functional implementation.  Have you read Okasaki's "Purely Functional Data Structures"?  Some very clever stuff in there, but I found myself shaking my head halfway through it at the convolutions he has to go through to do relatively simple things.  I like the functional, expression-oriented style in the small but I think it breaks down when you start trying to treat big complex aggregates in a functional way.


I agree that LISP is quite multiparadigm.  I feel that Haskell is as well, although it all boils down to a functional base.  With monadic combinators, and many of its other more advanced features, you can emulate basically any style of computation you want.  This (http://www.willamette.edu/~fruehr/haskell/evolution.html) is meant as a joke, and it's a bit twisted, but it's actually also quite illustrative.  And many of those examples only touch the surface of Haskell, not really using many of the recent extensions to the language.  Oz, I think, is another great example of a truly awesome multiparadigm language.  Sadly, it will probably remain rather obscure.

As for Okasaki's book, I actually own that one.  You're right about some of the convolutions.  But you have to remember that he took a rather extreme approach with that book -- going purely functional with a fairly minimal set of features.  In OCaml, most (all?) of the "extra" standard data structures provided by the library use side effects.  In Haskell, in most cases, when dealing with the sorts of problems you might see in his book, you'll usually end up using a variety of techniques to make the whole process more tractable.  Most of them can be seen on the Haskell wiki through various links from this (http://haskell.org/hawiki/CommonHaskellIdioms) page.

If you want to see an extreme example of Haskell's flexibility when dealing with big complex aggregate data types, you should check out the "Scrap Your Boilerplate" papers on generic programming with Haskell.  I posted a link to them earlier, but for reference, you can find them here (http://www.cs.vu.nl/boilerplate/).

Quote
It seems that this point always gets argued in a vacuum.  Large and complex programs have been written in both statically and dynamically typed languages, so I think it's not really clear which one has the advantage.  Perhaps other factors are more important?


To me, it's very advantageous to be able to look at the type signature of a function and tell almost immediately what the function, overall, is supposed to do.  If there is no type signature, I just ask the interpreter to tell me what it is.  In LISP, other dynamically typed languages, or weakly typed static languages, you often have to read the entire function to get a really good idea of what it is supposed to do.  And if you misread something (or, say, if you don't know the behavior of certain library functions and can't look them up immediately), you can make some rather nasty mistakes.

Strong static typing forces you to reason more thoroughly about your code *before* you implement it, and this can only be a good thing from a reliability and maintenance point of view.  Strong compile time error finding through this approach is a true advantage when deploying code as well.

Dynamic typing makes for easier rapid prototyping, but it also makes for more (runtime) errors (and in some cases even lower performance).  It can make complex problems easier by allowing people to solve them in ways that are not really optimal, but maybe in some cases that is what is needed.

Both approaches obviously have advantages and disadvantages, but I feel that in most cases, strong static typing with good type inference is a better choice than dynamic typing, even when (or perhaps especially when) dealing with complex aggregate data structures.

Typing, however, will remain a rather religious point, and I don't expect there will be much agreement on strong static vs. dynamic anytime soon.
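The runtime-errors point above can be made concrete with a small Ruby sketch (hypothetical code of my own, chosen because Ruby is the dynamic language most discussed in this thread): nothing stops an ill-typed value from reaching a function, and the mistake only surfaces when the offending line actually executes, whereas a static type checker would have rejected the program before it ever ran.

```ruby
# A type mistake in a dynamically typed language surfaces only at runtime.
def total(prices)
  prices.reduce(0) { |sum, p| sum + p }
end

puts total([1.50, 2.25])       # 3.75 -- works fine

begin
  total([1.50, "2.25"])        # a String slipped in; no compiler to object
rescue TypeError => e
  puts "caught only at runtime: #{e.message}"
end
```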

Quote
Somebody's got to write all the essential but unglamorous libraries and documentation, and I think it's instructive that Ruby has made more progress in two years than OCaml or Haskell have in toto.


What do you mean by progress?  Are you talking about userbase adoption?  Well, then yes, Ruby has made more progress.  Are you talking about language maturity and library maturity?  If so, I'd have to disagree that Ruby has made more progress.
Title: Programming Languages
Post by: kl33per on 2005-11-09 23:26:09
Typing can confuse new programmers too.  Coming from VB6 to Java, it took me a while to get my head around typing, implicit and explicit type conversion, and other such matters, because in VB6 it wasn't necessary.  When I started learning Java, I found the whole concept of casting extremely difficult to comprehend, because no one had explained strong typing vs. weak typing to me.

The very idea that a variable could have a specific type was almost foreign to me.  Of course you can specify Integers and Strings in VB, but most of the time you end up using Variants.  This made learning Java quite painful (although Java itself did not help).

To quote Edsger Dijkstra, "It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration". 
Title: Programming Languages
Post by: kuniklo on 2005-11-10 06:15:33
Quote
If you want to see an extreme example of Haskell flexibility when dealing with big complex aggregate data types, you should check out the "Scrap your Boilerplate" papers on generic programming with Haskell.  I posted a link to it ealier, but for reference, you can find them here (http://www.cs.vu.nl/boilerplate/).


Interesting.  I'll check it out.  Thanks for the link.

Quote
Typing, however, will remain a rather religious point, and I don't expect there will be much agreement on strong static vs. dynamic anytime soon


I think this one gets argued monthly on comp.lang.functional, and no consensus has yet emerged.  Lispers would argue that a well-chosen name will tell you more about what a function does than any type signature ever could, but these days I wonder if there might just be two different kinds of programmers.

Quote
What do you mean by progress?  Are you talking about userbase adoption?  Well, then yes, Ruby has made more progress.  Are you talking about language maturity and library maturity?  If so, I'd have to disagree that Ruby has made more progress.


By progress I mean viability as a commercial development language.  I'd feel pretty comfortable starting a new company based on Ruby.  Less so on CL and I'm pretty sure I wouldn't try it with ML at all.  Ruby has excellent documentation, a large and active user base, pretty comprehensive library support, and a design that's straightforward enough that any decent java/perl/python programmer should be productive in it within a week or two.  It's less novel from a language design point of view than Haskell or ML but these days I find myself a lot more interested in technology as a means to an end rather than as an end in itself.
Title: Programming Languages
Post by: AlexanderTG on 2005-12-03 13:22:54
This thread is well beyond me but I'm still going to stick my leg in! Probably going to regret it! 

I am a .NET programmer, so you would think that my opinion is going to be biased, but it's not.

Good points about C++.
If you want to make something truly powerful like a new operating system, it's not possible with .NET! If I remember correctly from one of Microsoft's videos, they said that Windows Vista was written in C++, and so were .NET and Visual Studio! They also said that Office 12 is written in C++.
The base technology for Microsoft products (Windows Vista, Office 12, Visual Studio, and even the .NET Framework and SDKs) is all written in C++!

Bad points about C++.
Not suited to RAD.
New applications must be installed on each and every PC.
There are other bad points but they don’t really concern me.

Good points about .NET.
Cannot be beaten for rapid application development.
Suited for most office environments.
Once .NET is installed on all PCs, all your applications can be made available to every PC with ClickOnce technology. No installation is required at all.
Very simple and very fast to build applications.

Bad points about .NET.
You must pay to get more functionality! We had to pay for VS.NET 2003 just to make it easier to use the functions in .NET 1.1 which were not available in .NET 1.0. Now we are going to have to pay more for VS.NET 2005 just to make it easier to use the features in .NET 2.0 which are not available in .NET 1.1.
Applications must run within an environment, i.e. the .NET Framework.
You have to go around to each PC to install the next .NET version.  Still not as bad as doing that with every application written in C++ each and every time.

In my environment I will never need C++. So I guess it all comes down to the developer's needs!
Title: Programming Languages
Post by: zima on 2005-12-04 08:23:22
I wonder what you think about Lisp, especially the opinions closely and loosely related to it that are posted on Paul Graham's site...

Also I'm curious about views on Objective C and Cocoa/Gnustep.


BTW, do you think Python, perhaps even with PythonCard, is good for a start?
Mostly "toying" activity - that's why there's no rush - apps that could be described as interfaces to functionality that already exists/glue for ready-made libraries. Well, perhaps in the long run something in the area of cognitive science, hence my slight interest in Lisp. Anyway, still not very serious...
My only experience is very short, basic contact with C, and also some C++, though used like C - I don't even have an understanding of the concept of object-oriented programming
Title: Programming Languages
Post by: Dibrom on 2005-12-04 08:43:37
Quote
I wonder what you think about Lisp, especially the opinions closely and loosely related to it that are posted on Paul Graham's site...


Who is the question directed at?
Title: Programming Languages
Post by: zima on 2005-12-04 11:03:05
Anyone who might be competent to answer it
Title: Programming Languages
Post by: Dibrom on 2005-12-05 08:29:58
Quote
I wonder what you think about Lisp, especially the opinions closely and loosely related to it that are posted on Paul Graham's site...


I haven't read too much from Paul Graham, but what I have read I've generally found myself in overall agreement with.  I used his simple LISP implementation example based on McCarthy's paper to implement my first simple LISP in Haskell, going on only that short snippet and an hour-long conversation with a LISPer I know (which was helpful, because prior to that I'd had about 15 minutes of experience programming in LISP).

As for his upcoming LISP implementation, Arc I believe it's called, I can't say a whole lot.  I've heard both some grumblings and some praise about it from various LISP users.  Personally, I'm not sure I see much need for a new LISP implementation though.  I could see utility in a new language that borrowed certain concepts from LISP, but LISP in itself I think has some problems that, at least for me, don't make it an ideal language:

1.  Dynamic typing -- A lot of people seem to love dynamic typing, but I don't agree with many of its supposed advantages, which mostly center on programming flexibility and seem to come at the cost of both program efficiency and program safety.  There is also a question about the expressivity of dynamic typing in a strictly technical sense, which may have interesting consequences for certain types of problems.

Static typing, as opposed to dynamic typing, is strictly more expressive, and the proof is rather trivial:  every untyped (dynamic) programming language corresponds to a typed (static) programming language with a single universal type.  However, there exist typed programming languages with more than a single universal type (the most fundamental example being the simply typed lambda calculus).  Therefore, typed programming languages are strictly more expressive than untyped ones.

Whether this particular technical point manifests itself in practice or not, LISP does not appear to me to make a good target language for certain types of problems where type information is prevalent in the data in a complex, nested fashion.  The lack of something like [G]ADTs and proper pattern matching is a big part of this.  For example, one area of a project I am currently working on involves writing a rather complex type checker and inference algorithm.  This sort of problem would be much more difficult (though not impossible) in a language like LISP.  It's not really limited to that sort of niche area, though -- I think LISP may be similarly unsuitable for tasks like complex XML parsing, or anything where polytypic programming with a large emphasis on safety and performance is desirable.

Dynamic typing has the advantage of loosening up the evaluation of the program in a way that can sometimes make rapid application development easier -- but ultimately I don't think it is worth the trade-offs.  Others, of course, disagree.  However, for the most part, the advantages of "duck typing" mentioned earlier in this thread are easily captured by a rich static type system featuring something like parametric polymorphism and type classes.
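For reference, the "duck typing" style being alluded to looks like this in Ruby (an illustrative sketch; the classes here are made up): any object responding to the right method is accepted, with no shared interface declared anywhere -- which is exactly the flexibility a type-class constraint can recover statically.

```ruby
# Duck typing: no common ancestor or declared interface is needed --
# anything that responds to #quack will do.
class Duck
  def quack; "quack"; end
end

class RobotParrot
  def quack; "beep-quack"; end
end

def provoke(thing)
  thing.quack   # fails at runtime if `thing` has no #quack method
end

puts provoke(Duck.new)         # quack
puts provoke(RobotParrot.new)  # beep-quack
```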

2. Lack of many modern features -- LISP lacks a lot of the features I criticized other languages in this thread for not having.  A couple of big ones for me are: currying (this has always been a strange omission IMO, given that LISP is supposedly modeled after the lambda calculus), pattern matching, laziness (optional or otherwise), concurrency, etc.

3. Too much reliance on macros -- When I talk to LISP people and mention a feature lacking in the language, they are quick to point out that most of these things can be added with macros.  True, but I believe the LISP community has come to rely on macros far too much, simply to make up for the language's lack of expressivity, which is enforced by the syntax being such a simple and direct representation of a linked-list data type.

One problem with relying heavily on macros is that there's little consistency in how certain types of problems get solved -- this makes code maintenance harder and code less idiomatic, which hurts the scalability of projects written in the language when they are deployed in collaborative contexts.  The other major problem is that many of the features people "add" to the language via macros do not benefit from compiler optimization the way they would if they were directly represented in the semantics of the language and accounted for in its intermediate representation and optimization passes.

Despite all of this, I find LISP users tend to think that metaprogramming is a necessary step on the way to solving problems much more often than is actually the case.  In a language like Haskell, for example, I have solved many problems that I've seen solved in LISP code making heavy use of metaprogramming via macros.  I have only used Haskell's extremely capable Template facility (which is capable of full-blown metaprogramming) twice: once when using a library that employed metaprogramming to create bindings to Objective-C at compile time (generating classes and doing all sorts of magic under the hood), and once simply out of curiosity about the Template system itself.
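For comparison, the runtime metaprogramming available in a dynamic language like Ruby covers many of the same use cases as compile-time macros (a hypothetical sketch; `Config` and its fields are made up): methods are generated programmatically at class-definition time rather than expanded by the compiler.

```ruby
# Generate reader and writer methods for each field at class-definition
# time -- runtime metaprogramming standing in for a compile-time macro.
class Config
  [:host, :port, :user].each do |field|
    define_method(field) { instance_variable_get("@#{field}") }
    define_method("#{field}=") { |value| instance_variable_set("@#{field}", value) }
  end
end

c = Config.new
c.host = "example.org"
c.port = 8080
puts "#{c.host}:#{c.port}"   # example.org:8080
```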

4. Somewhat bloated -- This will probably draw some annoyance from LISP fans, but I think Common LISP is simply bloated (library- and extension-wise) from what I've seen.  I cringe at using that word to describe LISP, which, despite what I've said about it so far, I do have a lot of respect for, at least in a historical context if nothing else.  If you like the "kitchen sink" approach, then maybe it's for you... I tend to prefer a language which is more concise in this regard, and where complex libraries and extensions are implemented in a highly consistent fashion that encourages idiomatic programming styles.  I don't think LISP does this, both because of what Common LISP contains and because of the macro issue I mentioned above.

I'm not sure if Arc is going to follow Common LISP or not, so whether it will suffer this same criticism I can't be entirely sure.  Maybe this one won't be a problem.

Quote
Also I'm curious about views on Objective C and Cocoa/Gnustep.


Objective-C is a nice, simple language.  It's basically C with an easier, and in some ways better, object system than C++.  It doesn't have many of the more advanced features that C++ has, though, with the most glaring omission being the lack of templates.  You don't need templates nearly as much in Objective-C, however, because all objects inherit from a common base and are dynamically typed (in C++ they are weakly statically typed).

Objective-C on its own is pretty bare, and I wouldn't use it for most tasks.  With Cocoa, which is a fantastic and fairly complete sort of "standard library" for Objective-C on OS X (GNUstep being the much less complete and less well maintained GNU alternative), Objective-C is a pretty decent platform for application programming.  I don't think I'd use it for much else, though.  As with C, I don't think it's the ideal language for most tasks.  C is usually a good choice for DSP-type work, and in that regard Objective-C is just as suitable, but the catch is that you won't be using Objective-C's object system for that in most cases.  If you need really fast and efficient OO, you would be much better off using C++ with static polymorphism through template expressions.

Quote
BTW, do you think Python, perhaps even with Pythoncard is good for a start?


Python is an OK language to learn on.  I don't know anything about PythonCard.  If I were to recommend a language for someone to learn on today, though, Python wouldn't be my first choice.  I would instead recommend one of: 1) Ruby, 2) Scheme, or 3) Oz.  Which one would depend on exactly why the person is learning to program.  I would break it down like this:

1. Ruby -- I would recommend Ruby if the person is interested in learning programming in order to eventually do typical industry work: application programming, web application development, scripting small programs to serve as glue between other components, etc.  I think Ruby is going to become a lot more prevalent in the industry as time progresses.  It's got a lot of momentum right now and I think that will hold for a while.  In terms of language design, it's cleaner than Python, and I like the direction it is going in much more than the direction Python seems to be headed in...

2. Scheme -- I would recommend Scheme for people with little to no exposure to computational ideas in general (one of my friends taught a programming class to kids and used Scheme quite successfully).  A nice thing about Scheme is that it's consistent and concise, and the syntax is basically dead simple (though, as I noted above, I think this comes at a price).  Despite being simple, Scheme is powerful enough to express some nice theoretical qualities (see Structure and Interpretation of Computer Programs (http://mitpress.mit.edu/sicp/full-text/book/book.html)), which I think too many programmers these days know next to nothing about and would do well to learn.

3. Oz -- I would recommend Oz (http://www.mozart-oz.org) as a language to learn on for people truly interested in theoretical computer science.  People who enjoy solving difficult academic-style problems, or who want to know how languages work deep down and might enjoy implementing their own -- these are the people I think should learn Oz.  Oz is perhaps the most multiparadigm language I have seen, and one can learn functional programming, OO, logic and constraint-based programming, distributed and concurrent programming, and strict vs. lazy evaluation from it, all in one nice, simple, and consistent package.  It has a lot of unique features that make its theoretical qualities easy to learn about and reason about, such as the inspector (http://www.mozart-oz.org/documentation/inspector/index.html), a graphical tool that shows the realtime evaluation of code, and the explorer (http://www.mozart-oz.org/documentation/explorer/index.html), a tool that shows a graphical representation of search trees and the like in a constraint computation, along with other tools such as realtime statistics on program concurrency.  I don't think there is any other language (except possibly Haskell, which is much harder to learn) that can expose someone to so many interesting theoretical aspects of computer science from a single source.  Someone learning this language should read Concepts, Techniques, and Models of Computer Programming (http://www2.info.ucl.ac.be/people/PVR/book.html) as an aid -- it's like Structure and Interpretation of Computer Programs but uses Oz instead of Scheme, covers a much broader set of topics, and is much more up to date, having only come out last year.  It is by far one of the best books of its type, maybe THE best.

Quote
Mostly "toying" activity - that's why there's no rush - apps that could be described as interfaces to functionality that already exists/glue for ready-made libraries.


For that, I would go with Ruby.

Quote
Well, perhaps in the long run something in the area of cognitive science, hence my slight interest in Lisp.


I've never quite understood why LISP still retains its reputation as an AI language.  Prolog is more obvious, but as for LISP, I could understand it having this reputation decades ago, when there were few other languages as "high level" as LISP and capable of decent symbolic processing.  However, I think languages like ML or Oz are better suited to this task these days.  "Cognitive science," or AI, work falls into the category I mentioned earlier of richly typed data.  ML (or Haskell) excels at this kind of task.  Other aspects of AI-style programming deal with 1) knowledge sets and 2) autonomous agents -- both of which Oz handles perfectly well with 1) its rich logic and constraint-based facilities and 2) its rich OO and distributed/concurrent computational abilities.  Alice ML (http://www.ps.uni-sb.de/alice/) -- basically the ML answer to Oz (written by largely the same team as the Oz guys) -- offers essentially the same capabilities as Oz (right down to the same graphical evaluation tools), but within a traditional ML dialect with the expected extensions.  Haskell also has some excellent extensions for parallel computation.

Quote
My only experience is very short, basic contact with C, and also some C++, though used like C - I don't even have an understanding of the concept of object-oriented programming


If I were you, I would try to read a book with a little theory before trying to learn complex programming ideas straight up.  It helps a lot to have a solid foundation in what various concepts mean (like OO, for example), and why someone would or would not want to use them, before actually getting to work.  Most people skip this step and never look back, and I think it is ultimately to their detriment. YMMV, of course...
Title: Programming Languages
Post by: Lyx on 2005-12-05 23:27:11
I just read the slides and presentations from this year's Ruby conference, and it seems that Ruby's short-term future is not as dark as it seemed. The new VM has progressed rather well (from my understanding, most of the work, excluding multi-CPU optimization, is already finished). The benchmarks look rather nice and put Ruby *ahead* of Python in terms of performance on complex mathematical calculations.

As for syntax changes, it seems that the two most important ones not yet in the current CVS are keyword arguments and block-local scope. It was quite amusing, reading the slides, to see proposals for taking some aspects from Common LISP on more than one occasion.

For those who are interested in where Ruby is headed:
Ruby 2.0 slides: http://www.rubyist.net/~matz/slides/rc2005/index.html (http://www.rubyist.net/~matz/slides/rc2005/index.html)
New VM progress-report: http://rubyforge.org/pipermail/yarv-devel/...ber/000372.html (http://rubyforge.org/pipermail/yarv-devel/2005-October/000372.html)

- Lyx
Title: Programming Languages
Post by: kuniklo on 2005-12-06 03:52:42
Quote
I wonder what you think about Lisp, especially the opinions closely and loosely related to it that are posted on Paul Graham's site...



There are a lot of brilliant ideas in Lisp and you can learn quite a bit from it but it's really not practical for writing real-world code for a variety of reasons I won't go into here.

If you want to start writing real code that actually does something interesting soon, I'd strongly recommend ruby.  If you're more ambitious and willing to work a lot harder, get a copy of Structure and Interpretation of Computer Programs and work through it.  It'll make your brain explode but you'll have a much better understanding of programming.
Title: Programming Languages
Post by: Dibrom on 2005-12-08 22:15:53
Thought I'd add a little something more to this thread...

I just finished writing my first real program in Scala (http://scala.epfl.ch/), and all I can say is: wow, what an incredibly cool language!  I guess I should have checked it out sooner, given that it was supposedly a big inspiration for the current Fortress language spec (a quick examination of Scala confirms this -- Fortress's syntax and feature set are very close to Scala's, similar to how heavily Java was influenced by C++).  At any rate, I think I've found my new favorite OO language, easily displacing Ruby.  To understand why, just take a look at some of the features it supports... and those are just the ones I know of after a day looking at it.  More can be seen here (http://scala.epfl.ch/index.html) and here (http://scala.epfl.ch/docu/index.html).

The language is pretty new, having only been around since 2002, and has really only taken off recently.  Previously, I was focusing on Ruby as the OO language to watch, but I take that back after having seen Scala.  They seem to have gotten pretty much everything right (that I've seen so far), having managed to marry the functional and OO styles in a very unique and well-thought-out way.  I think it illustrates that a lot of the features I have been faulting other languages in this thread for not supporting can, and do, have a place in a more "traditional" OO paradigm. The design, the documentation, and the examples they give make it rather clear why both approaches combined are much better than pure OO with little or no functional support.  They've managed to make OO much more "lightweight" and usable by implementing so many features that provide opportunities for higher-order programming too.

It's also refreshing that they went with static typing and type inference rather than taking the easy route and doing it all dynamically.  Type inference when both parametric polymorphism and ad-hoc polymorphism with subtyping are simultaneously in the equation is not an easy thing to implement, but they've done it, and done it well.  This will, I think, make the language incredibly scalable and suitable for very complex industrial tasks.  No surprise to see that they designed the language for essentially those purposes.  The XML manipulation power and structural regular expression support are just the tip of the iceberg with regard to what I think is possible there...

Oh, the program I wrote was a simple untyped lambda calculus evaluator (sans parser, since I ran out of time).  I wrote an almost purely functional version and then an almost purely OO version (just to test the language's flexibility).  The curious can find them here (http://static.morbo.org/dibrom/LambdaCalculus.zip) (note that the files are encoded as UTF-8 and use some Unicode characters).
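For readers curious about the shape of such a program, here is a minimal normal-order evaluator for the untyped lambda calculus sketched in Ruby (an illustrative sketch of my own, not Dibrom's Scala code; the array-based term representation is made up):

```ruby
# Terms: [:var, name] | [:lam, param, body] | [:app, fun, arg]

$fresh = 0
def fresh_name
  $fresh += 1
  :"v#{$fresh}"
end

# Capture-avoiding substitution of `value` for `name` in `term`.
def subst(term, name, value)
  case term[0]
  when :var then term[1] == name ? value : term
  when :app then [:app, subst(term[1], name, value), subst(term[2], name, value)]
  when :lam
    param, body = term[1], term[2]
    return term if param == name          # `name` is shadowed inside this lambda
    f = fresh_name                        # rename the binder to avoid capture
    [:lam, f, subst(subst(body, param, [:var, f]), name, value)]
  end
end

# Normal-order (leftmost-outermost) reduction to normal form.
def eval_term(term)
  case term[0]
  when :app
    fun = eval_term(term[1])
    if fun[0] == :lam
      eval_term(subst(fun[2], fun[1], term[2]))   # beta-reduce
    else
      [:app, fun, eval_term(term[2])]
    end
  when :lam then [:lam, term[1], eval_term(term[2])]
  else term
  end
end

# (\x. \y. x) a b  reduces to  a
k_comb = [:lam, :x, [:lam, :y, [:var, :x]]]
p eval_term([:app, [:app, k_comb, [:var, :a]], [:var, :b]])   # [:var, :a]
```

A parser and printer would round it out, but even this skeleton shows why pattern matching and algebraic data types (which Scala has and Ruby, at the time, did not) make this kind of program so pleasant to write.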
Title: Programming Languages
Post by: kuniklo on 2005-12-08 22:30:57
Quote
Thought I'd add a little something more to this thread...


There was a time when I would have looked at that list of features and thought "Wow!  What a cool language".  Now I read it and think "Wow!  Another research language I'll never get to use at work!".

That stuff might all sound great on paper, but I'm not convinced it has that much real-world programming value.  Ruby is simple, mostly fits in my head all at once, and is straightforward enough that I have some chance of persuading my coworkers to try it.  We could have round 7999 of the static vs. dynamic typing debate, but it's impossible to argue that large complex systems *can't* be built in dynamic languages, and I think the burden of proof really lies with the strong typing advocates.

It's kind of like 20th century orchestral music.  Schoenberg's and Xenakis's music sounds great from a theoretical point of view, but nobody listens to it, and their most persuasive ideas have been cherry-picked and reworked into something more palatable to the general public.  So maybe the functional programmers are the 12-tone serialists of the software world?
Title: Programming Languages
Post by: Dibrom on 2005-12-08 22:43:53
Quote
There was a time when I would have looked at that list of features and thought "Wow!  What a cool language".  Now I read it and think "Wow!  Another research language I'll never get to use at work!".


Some people might feel that way, but there are a couple of things that make the situation with Scala different I think.

The first big point is that it runs on the JVM or .NET.  Not even Ruby does that (Edit: there does seem to be some sort of .NET bridge for Ruby here (http://www.saltypickle.com/rubydotnet), but that is not the same as making it a native facility of the language itself).  This means that Java programmers (of which there are many) are going to be much more comfortable trying it, because they don't have to rely on another VM and they can use all of the libraries they've grown accustomed to over the years.  Using a Java library from Scala is transparent: you simply import it just like a Scala library and use it from there.  That is an advantage that is hard to overstate for a new language on the scene.

The second big advantage is that, from reading the Scala documentation, it's very clear that they are targeting the language at people from traditional OO backgrounds.  Almost every example is explained the way an OO programmer would understand it, rather than how a functional programmer would look at the situation.  The great thing, though, is that the designers were not so narrow-minded as to throw out the huge advantages that functional programming offers simply because many people may not be familiar with them.

Quote
That stuff might all sound great on paper, but I'm not convinced it has that much real-world programming value.  Ruby is simple, fits almost entirely in my head at once, and is straightforward enough that I have some chance of persuading my coworkers to try it.


I might agree with you regarding Haskell.  In Haskell, any serious programming requires monads, and most of the time complex nested monads at that.  Honestly, those can get very hard.

Scala, on the other hand, can be used almost exactly the way someone would use Java.  It just happens to offer incredible power when someone wants to dig deeper.  And amazingly, they managed to offer all of it in a way that is not convoluted or terribly complex.  They've managed to fit more flexibility into the language than Java has, while keeping the specification, documentation, and learning curve at a mere fraction of the size.

I definitely think Scala has real world value, because most of its features are so clearly geared towards that, and it works now, with a huge wealth of existing libraries already available at its disposal.

Quote
We could have round 7999 of the static vs. dynamic typing debate, but it's impossible to argue that large complex systems *can't* be built in dynamic languages, and I think the burden of proof really lies with the strong-typing advocates.


Complex systems can be built with dynamic typing; I never said otherwise.  But I find it strange that people don't seem to see the advantage in having the compiler do the heavy lifting with regards to safety, rather than having to sprinkle type-checking predicates or exceptions all throughout their code and cross their fingers after they've deployed the product.
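A contrived sketch of that contrast (the `area`/`areaDynamic` names are invented for illustration): the statically typed version makes the compiler reject bad inputs, while the dynamic-style version can only discover them at runtime:

```scala
object TypingDemo {
  // Statically typed: the signature itself is the safety check --
  // a call like area("3", 4.0) simply would not compile.
  def area(width: Double, height: Double): Double = width * height

  // Dynamic-style defensive version: the "type check" is a runtime predicate,
  // so bad input is only discovered when the code actually runs.
  def areaDynamic(width: Any, height: Any): Double = (width, height) match {
    case (w: Double, h: Double) => w * h
    case _ => throw new IllegalArgumentException("expected two Doubles")
  }

  def main(args: Array[String]): Unit = {
    println(area(3.0, 4.0))         // prints 12.0
    println(areaDynamic(3.0, 4.0))  // prints 12.0; areaDynamic("3", 4.0) throws
  }
}
```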

I don't know, maybe I'm just missing something...

Quote
It's kind of like 20th century orchestral music.  Schoenberg and Xenakis' music sounds great from a theoretical point of view, but nobody listens to it and their most persuasive ideas have been cherry-picked and reworked into something more palatable to the general public.  So maybe the functional programmers are the 12 tone serialists of the software world?
[a href="index.php?act=findpost&pid=348729"][{POST_SNAPBACK}][/a]


And that's just it.  Scala is the cherry-picked, reworked-into-something-more-palatable version of functional programming for the OO generation.

You should take a look at it before assuming outright that it's useless.  I don't think that it's exactly a coincidence that the Sun guys are drawing so heavily from Scala for Fortress, their next "Big Language", considering all of these things.
Title: Programming Languages
Post by: kuniklo on 2005-12-08 23:06:02
Quote
This means that Java programmers (of which there are many) are going to be much more comfortable trying it because they don't have to rely on another VM and they can use all of their existing libraries and all of the libraries they've grown accustomed to over the years.


Lots of languages have tried this but so far it doesn't seem to have helped Jython, Nemerle, or any of the JVM-targeting Schemes.  Again, sounds great in theory but so far in practice it's been a dud.

Quote
The second big advantage is that, from reading the Scala documentation, it's very clear that they are targeting the language at people from traditional OO backgrounds.  Almost every example is explained the way an OO programmer would understand it, rather than how a functional programmer would look at the situation.


I think most OO programmers will run screaming from the gnarly type errors.  The sad fact of this Paul Graham-ish "design for smart programmers" approach is that most programmers aren't really very good, and a language needs those people to really thrive.  A recent case in point: reddit.com, which was held up as a Lisp success story until last week:

http://reddit.com/blog/2005/12/on-lisp.html (http://reddit.com/blog/2005/12/on-lisp.html)

Notice that the main reason they switched to Python, despite considering Lisp superior, was that they didn't have to do nearly as much grunt work, because so many support libraries had already been written by the "average" programmers who never got around to doing the same for Lisp.

Peter Norvig, who's probably forgotten more about programming than the rest of us will ever know, grudgingly accepts Python here:

http://norvig.com/python-lisp.html (http://norvig.com/python-lisp.html)

Quote
Complex systems can be built with dynamic typing, I never said otherwise.  But I find it strange that people don't seem to see the advantage in having the compiler do the heavy lifting with regards to safety, rather than having to sprinkle type checking predicates all throughout their code and cross their fingers after they've deployed the product.


Trotting out the standard counter-argument: static typing only catches certain kinds of errors, you still have to do proper unit testing and QA, and it's not clear that static typing doesn't cause as many problems as it solves via increased language complexity and conceptual overhead.

Quote
And that's just it.  Scala is the cherry-picked, reworked-into-something-more-palatable version of functional programming for the OO generation.

You should take a look at it before assuming outright that it's useless.


It looks to me like they're still clinging to most of the functional community's most cherished and untested assumptions, but I'll give it a closer look.

The burden on all language designers is to demonstrate that their language solves some important problem much better than existing alternatives.  Ruby's grown more in the last six months than in its entire previous existence, thanks to Rails.  The ball is in the functional community's court to do the same.
Title: Programming Languages
Post by: Dibrom on 2005-12-08 23:27:41
Quote
Lots of languages have tried this but so far it doesn't seem to have helped Jython, Nemerle, or any of the JVM-targeting Schemes.  Again, sounds great in theory but so far in practice it's been a dud.


I don't know anything about Nemerle, but I think the reason this approach has failed for Jython and the JVM-targeting Schemes is that they are dialects of languages which do not specify mandatory support for those facilities out of the box.

There is a huge difference between downloading Scala and using System.out.println in your first hello-world program, and downloading a special, unpopular dialect or some other poorly supported language-extension facility to get the same functionality.

The only language I know of which does this sort of thing as well as Scala is D, which also happens to allow usage of C libraries transparently right out of the box.

Quote
Quote
The second big advantage is that, from reading the Scala documentation, it's very clear that they are targeting the language at people from traditional OO backgrounds.  Almost every example is explained the way an OO programmer would understand it, rather than how a functional programmer would look at the situation.


I think most OO programmers will run screaming away from the gnarly type errors.


Well, for starters, there aren't going to be gnarly type errors for most typical OO programmers.  Why?  Because they'll simply stick to typical OO-style programming with no parametric polymorphism, ADTs, or higher-order programming.  When used in the most typical OO way, there should be no more type complexity at work than in Java.
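For instance, here's a sketch of what "Java-style" Scala looks like; nothing in it touches parametric polymorphism, ADTs, or higher-order functions (`Account` is an invented example, written in current Scala syntax):

```scala
// The same OO shape a Java programmer already knows: a class with
// a constructor parameter, private mutable state, and plain methods.
class Account(val owner: String) {
  private var balance: Int = 0
  def deposit(amount: Int): Unit = { balance += amount }
  def currentBalance: Int = balance
}

object AccountDemo {
  def main(args: Array[String]): Unit = {
    val acct = new Account("lyx")
    acct.deposit(100)
    acct.deposit(50)
    println(acct.currentBalance)  // prints 150
  }
}
```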

Quote
The sad fact of this Paul Graham-ish "design for smart programmers" is that most programmers aren't really very good and a language needs these people to really thrive.


Sure.  I am excited about Scala because I think it has the potential not to scare these people away, since the advanced functionality of the language is not mandatory for average tasks and is non-intrusive when used sparingly.

Yet, when someone like me wants something a little bit more, it has the potential to provide most of what I need for even the most complex tasks.

Quote
Notice that the main reason they switched to Python, despite considering Lisp superior, was that they didn't have to do nearly as much grunt work, because so many support libraries had already been written by the "average" programmers who never got around to doing the same for Lisp.


This seems to be a good argument in support of Scala with regards to its backwards compatibility with the JVM and .NET, doesn't it?

Quote
Trotting out the standard counter-argument: static typing only catches certain kinds of errors, you still have to do proper unit testing and QA, and it's not clear that static typing doesn't cause as many problems as it solves via increased language complexity and conceptual overhead.


Catching half the errors is better than catching none of them.  The half that isn't caught is also usually easier to prevent -- most of the remaining errors in my Haskell programs, after everything has type checked, are algorithmic errors.  For complex algorithms in real-world code, most of the behavior of these algorithms will be proved before being implemented.  And of course one will still use unit testing.  Static typing with type inference can actually make unit testing much nicer, in fact; see Haskell's QuickCheck, for example.
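QuickCheck itself is a Haskell library, but the core idea can be sketched in a few lines of Scala (a toy illustration only -- `MiniCheck` and its `forAll` are invented names, and the real library does input generation, shrinking, and reporting far better):

```scala
import scala.util.Random

// A hand-rolled miniature of the property-testing idea: state a property
// once, and check it against many random inputs.
object MiniCheck {
  // Run the property over `trials` random Ints; return the first
  // counterexample found, if any.
  def forAll(trials: Int)(prop: Int => Boolean): Option[Int] = {
    val rng = new Random(42)  // fixed seed for reproducible runs
    Iterator.fill(trials)(rng.nextInt()).find(n => !prop(n))
  }

  def main(args: Array[String]): Unit = {
    // Property: reversing a list twice yields the original list.
    val failed = forAll(1000) { n =>
      val xs = List(n, n + 1, n + 2)
      xs.reverse.reverse == xs
    }
    println(failed)  // prints None -- no counterexample found
  }
}
```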

As for conceptual overhead, I'm not convinced.  If you're working on a complex problem, you need conceptual overhead to maintain integrity over the program operation.  The more complex a task is, the less you can afford for some aspects of the computation to behave in an unknown fashion.  You need to do a little housekeeping to prevent this, whether you use type predicates and exceptions or static typing.  In all except the most extreme (and then probably poorly designed) cases, typing information should not get in the way so much that the rest of the computation becomes difficult.  If it did, that would more than likely indicate a poorly designed solution to begin with.

The advantage with the static approach is you can use a combination of a variety of techniques (type checking, exceptions, unit testing), whereas with dynamic typing your options are more limited with regards to safety.

Quote
It looks to me like they're still clinging to most of the functional community's most cherished and untested assumptions, but I'll give it a closer look.


Out of curiosity, which untested assumptions are you referring to?

As I said before though, Scala is primarily an OO language, with proper functional support after that.  Their tutorials even say things like "Scala Tutorial for Java Programmers."  I don't think the Scala design team obsesses about functional programming, I just think they've seen how it's advantageous.

On the other hand, it's somewhat amusing how most major OO languages these days are now adding quite a few functional-style features, after years of people saying how useless fp is.  Strange, isn't it?

Quote
The burden on all language designers is to demonstrate that their language solves some important problem much better than existing alternatives.  Ruby's grown more in the last six months than in it's entire previous existence thanks to Rails.  The ball is in the functional community's court to do the same.
[a href="index.php?act=findpost&pid=348735"][{POST_SNAPBACK}][/a]


One look at their XML support and the rest of the language features that operate well with this approach (the regex stuff I mentioned, pattern matching, etc.) should make it obvious to just about anyone with serious programming experience just how useful some of those things could be for "real world" programs.
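As a small illustration of the pattern-matching side of that claim (an invented `Expr` mini-language, in current Scala syntax; Scala's XML and regex support build on the same destructuring mechanism):

```scala
// An algebraic data type for tiny arithmetic expressions...
sealed trait Expr
case class Num(value: Int) extends Expr
case class Add(left: Expr, right: Expr) extends Expr

object ExprDemo {
  // ...and an evaluator that takes it apart by pattern matching.
  // The compiler warns if a case is missing, since the trait is sealed.
  def eval(e: Expr): Int = e match {
    case Num(v)    => v
    case Add(l, r) => eval(l) + eval(r)
  }

  def main(args: Array[String]): Unit =
    println(eval(Add(Num(1), Add(Num(2), Num(3)))))  // prints 6
}
```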

Seriously, people can rail on (no pun intended) all day about "real world" programming or "academic" languages, but those definitions are so amorphous as to be practically meaningless to me.  What I think is important is whether a language is both capable and practical.  I think Scala meets both of those criteria.  Whether it will become a major player in the language community lies more with the random whims of the Language Gods than it does with other factors; Scala, I think, has those covered already.
Title: Programming Languages
Post by: kuniklo on 2005-12-08 23:45:18
Quote
There is a huge difference between downloading Scala and using System.out.println in your first hello-world program, and downloading a special, unpopular dialect or some other poorly supported language-extension facility to get the same functionality.


A lot of the other JVM-targeting languages integrate very nicely with the rest of the Java runtime but it didn't seem to help them much.  Maybe things will be different for Scala but the precedents aren't encouraging.

Quote
Quote
Notice that the main reason they switched to Python, despite considering Lisp superior, was that they didn't have to do nearly as much grunt work, because so many support libraries had already been written by the "average" programmers who never got around to doing the same for Lisp.


This seems to be a good argument in support of Scala with regards to its backwards compatibility with the JVM and .NET, doesn't it?


Yeah, in theory, but again, there are a lot of skeletons littered along that yellow brick road.

Quote
Catching half the errors is better than catching none of them.  The half that isn't caught is also usually easier to prevent -- most of the remaining errors in my Haskell programs, after everything has type checked, are algorithmic errors.


Most of the bugs I find in my code are things that a type-checker wouldn't catch, but this is a hard point to argue in a vacuum.  I think it's interesting that really exceptional programmers come down on both sides of this issue.  I think it's one of those engineering things where it's not a question of which one is "better" in any absolute sense but more of a question of what the tradeoffs of each approach are.

Quote
Quote
It looks to me like they're still clinging to most of the functional community's most cherished and untested assumptions, but I'll give it a closer look.


Out of curiosity, which untested assumptions are you referring to?


The whole idea that some kind of rigorous mathematical theoretical grounding is important and valuable for typical programming tasks.  All this type theory and analysis and lambda calculus and graph rewriting yadda yadda makes for interesting research papers, but I still don't think anyone's demonstrated that it translates to more productive languages.  Personally, I think Larry Wall was closer to the truth with his idea of programming languages that borrow from natural languages.  He went so far off the deep end with it in Perl that he's largely discredited it, but I think he was on to something.  Ultimately it's the programmer that makes the biggest difference, and the more you can do to lower the barriers between how they think about code and how they write code, the more productive they'll be.

Quote
On the other hand, it's somewhat amusing how most major OO languages these days are now adding quite a few functional-style features, after years of people saying how useless fp is.  Strange, isn't it?


I certainly wouldn't say fp is useless.  Higher-order functions have proven their worth in many contexts.  I'd say the jury's still out on some other fp ideas like lazy evaluation and non-mutability.

Quote
One look at their XML support and the rest of the language features that operate well with this approach (the regex stuff I mentioned, pattern matching, etc.) should make it obvious to just about anyone with serious programming experience just how useful some of those things could be for "real world" programs.


I don't think you can make XML processing easy enough that people will switch languages for it.  With the right libraries it's not that big of a deal in most commercial languages now.  You can probably do better but not so much better people will switch for that reason.  Declarative languages make for elegant examples but XSLT isn't exactly on fire either.

Quote
What I think is important is whether a language is both capable and practical.  I think Scala meets both of those criteria.  Whether it will become a major player in the language community lies more with the random whims of the Language Gods than it does with other factors; Scala, I think, has those covered already.


Time will tell, I guess.  The shift to web-oriented computing has opened the door to new languages and approaches, so there's at least a window for newcomers to prove their worth.
Title: Programming Languages
Post by: Dibrom on 2005-12-09 00:08:01
Quote
A lot of the other JVM-targeting languages integrate very nicely with the rest of the Java runtime but it didn't seem to help them much.  Maybe things will be different for Scala but the precedents aren't encouraging.


I think they probably will, if only because of Fortress.  Scala in its current form may not take off, but Fortress is probably going to be a big deal, and since it borrows so much from Scala, and is probably also going to be interoperable with the JVM to some degree, I suspect you'll see at least many of the concepts behind Scala, as well as Scala programmers, becoming more relevant in the future.

Of course, it's very hard to predict what is going to happen in just a few years, so maybe it'll all be a wash.  If it is, I'd be both disappointed and surprised though...

Quote
Quote
Catching half the errors is better than catching none of them.  The half that isn't caught is also usually easier to prevent -- most of the remaining errors in my Haskell programs, after everything has type checked, are algorithmic errors.


Most of the bugs I find in my code are things that a type-checker wouldn't catch, but this is a hard point to argue in a vacuum.  I think it's interesting that really exceptional programmers come down on both sides of this issue.  I think it's one of those engineering things where it's not a question of which one is "better" in any absolute sense but more of a question of what the tradeoffs of each approach are.


What bugs me the most about the typing debate is that a lot of the time, what is being discussed isn't really relevant to typing itself, but to practical implementation issues.

Most people (I'm not saying you fall into this category) argue about static vs. dynamic typing (usually against the former) without really knowing anything about type theory itself, or why, theoretically and fundamentally, there is a difference.  As a result, most people aren't even in a position to really argue over anything more than "ease of use," and thus a very rich part of the debate is completely glossed over.

Maybe that's one of those "theory is unimportant" bits, but I doubt it.  Milner said something to the effect of "types make computer programs tractable," and having played with implementing type systems and many variants of the Lambda Calculus, I fully agree.  Most of what makes modern programming palatable is due to advances in theory, which are often at least indirectly related to type theory.

Many people, including language implementors, choose to ignore all of this.  I'm just glad some people don't.  However, people with a thorough understanding of type theory seem to be having less influence on some of the more popular languages as of late, and thus static implementations are becoming more and more scarce.  I'm convinced this isn't because they are inferior (If anything, I think the opposite), but because static typing with type inference is hard.  But, I also think it's worth it.

I tend to think that the negative view most people have of static typing could be changed with a little bit of patience and learning about some fundamental points of type theory in general, but this probably isn't going to happen.

Dynamically typed languages are not inferior, of course, but I view them as passing up a very powerful tool and theoretical underpinning.

Quote
Quote
Quote
It looks to me like they're still clinging to most of the functional community's most cherished and untested assumptions, but I'll give it a closer look.


Out of curiosity, which untested assumptions are you referring to?


The whole idea that some kind of rigorous mathematical theoretical grounding is important and valuable for typical programming tasks.  All this type theory and analysis and lambda calculus and graph rewriting yadda yadda makes for interesting research papers, but I still don't think anyone's demonstrated that it translates to more productive languages.  Personally, I think Larry Wall was closer to the truth with his idea of programming languages that borrow from natural languages.  He went so far off the deep end with it in Perl that he's largely discredited it, but I think he was on to something.  Ultimately it's the programmer that makes the biggest difference, and the more you can do to lower the barriers between how they think about code and how they write code, the more productive they'll be.


Type systems probably aren't going to have much of an effect on productivity beyond a certain point.  This is probably due to the fact that to gain an increase in productivity offered by richer type systems, programmers would need a greater degree of expertise to begin with.

However, that still leaves room for safety and optimization, both of which rigorous theoretical underpinnings have much to offer.

Oh, and let's not forget about concurrency.  Concurrency and distribution are only going to become more and more of an issue in programming, and both of those things are quite intractable in a language if you do not heed theory ahead of time...

Quote
I certainly wouldn't say fp is useless.  Higher-order functions have proven their worth in many contexts.  I'd say the jury's still out on some other fp ideas like lazy evaluation and non-mutability.


Non-mutability (which is optional in most fp languages) is important for certain things like concurrency and distribution, at least insofar as it lets you reason about confining side effects to specific portions of a computation.  Laziness, on the other hand, is tougher to argue in favor of.  It can make certain programming styles a lot easier, but defaulting to that mode of operation has very serious drawbacks (memory issues).
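Both points can be illustrated in a few lines (current Scala syntax; `LazyList` is the modern name for what older Scala called `Stream`):

```scala
object LazyImmutableDemo {
  def main(args: Array[String]): Unit = {
    // Immutable data: "updates" return new values, so concurrent readers
    // can never observe a half-modified structure.
    val base    = List(1, 2, 3)
    val updated = 0 :: base           // base itself is untouched
    println(base)                     // prints List(1, 2, 3)
    println(updated)                  // prints List(0, 1, 2, 3)

    // Opt-in laziness: an infinite stream only computes the elements
    // that are actually demanded.
    val naturals: LazyList[Int] = LazyList.from(0)
    println(naturals.take(5).toList)  // prints List(0, 1, 2, 3, 4)
  }
}
```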
Title: Programming Languages
Post by: Lyx on 2005-12-09 08:53:12
I've taken a brief look at Scala, and although I dislike static typing, I like the syntax very much.  Even jumping right into the examples, I can make sense of them without background knowledge of the syntax.  So, once familiar with its syntax, it should result in very clean and easy-to-understand code.

Some questions:
- I have not seen any references to prototyping.  Personally, I very much like the idea of using prototypes instead of classes.  Does Scala offer some way to "simulate" prototypes without the need to deep-copy?
- Compiling: AOT-only, or also JIT?  I would miss JIT quite a lot, because especially in the beginning stages of a project, I prefer to be able to make just a few changes in the code and then check the result immediately.

- Lyx
Title: Programming Languages
Post by: Dibrom on 2005-12-09 20:08:59
Quote
- I have not seen any references to prototyping.  Personally, I very much like the idea of using prototypes instead of classes.  Does Scala offer some way to "simulate" prototypes without the need to deep-copy?


It doesn't have prototyping support in the typical sense as far as I know.  I think that approach would clash with some of the other more fundamental design characteristics of Scala.

However, you can probably approximate some aspects of prototype-based programming in Scala through the use of mixins, anonymous classes, and views.  You're still going to have to use the class based approach there ultimately, but by using a combination of those features you can get a more ad-hoc style of extensibility, perhaps similar to prototyping, than is typical of most other OO languages.
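A rough sketch of that mixin-based, per-object style of extension (invented `Greeter`/`Loud` traits, current Scala syntax):

```scala
// Traits carry behavior that can be attached to a single instance at
// creation time -- a class-based approximation of prototype-style extension.
trait Greeter {
  def greet: String = "hello"
}

trait Loud extends Greeter {
  override def greet: String = super.greet.toUpperCase
}

object MixinDemo {
  def main(args: Array[String]): Unit = {
    val plain = new Greeter {}  // anonymous class over the bare trait
    val loud  = new Loud {}     // this one object gets the extra behavior
    println(plain.greet)        // prints "hello"
    println(loud.greet)         // prints "HELLO"
  }
}
```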

Quote
- Compiling: AOT-only, or also JIT?  I would miss JIT quite a lot, because especially in the beginning stages of a project, I prefer to be able to make just a few changes in the code and then check the result immediately.
[a href="index.php?act=findpost&pid=348870"][{POST_SNAPBACK}][/a]


Depends on what you mean exactly...

Scala compiles to the JVM or .NET, which should both make use of JIT to dynamically compile portions of the bytecode to native code when needed.

I suspect you probably mean something more along the lines of whether it is possible to get away with not compiling .scala -> .class before running the program, though.  In that sense, there is a REPL (scalaint) similar to the REPLs in Python or Ruby (or Lisp or Haskell or ...), and from there you can load .scala files directly.  If you're an Emacs user, scala-mode is designed like most other modes for languages with a REPL, and you can evaluate your current editing buffer directly in the REPL on the fly.
Title: Programming Languages
Post by: Lyx on 2005-12-14 11:07:11
Thanks for the info, Dibrom.  I've played around with Scala a bit over the past few days and also scouted the web to get an idea of the available docs and support.

An experienced, grown-up coder who likes strong type safety would definitely like Scala.

However, I'm neither of the above, and RAD will be most important for my upcoming project.  The situation is the following:

- it will be a huge task, definitely above my current skills and work capacity
- yet I still want to do it, because it's something I've wanted since I was about four years old
- I've lost hope and confidence in finding the right person to help me build the framework (which is the most difficult task)
- therefore, I'll have to do it alone... which means the only sane way to achieve it will be making it a long-term side project, working on it often but never with much effort at any given point in time, to avoid burnout and loss of motivation
- this in turn means that writing the code will need to happen fast and without much effort
- since "without much effort" has much to do with the language, already being familiar with Ruby is a big plus, as are its big community and the availability of documentation and tutorials for non-pro coders

Thus, for my specific needs, I think Ruby fits better.  There is the downside that Ruby's syntax is currently being reworked -- thus, I'll have to idle a few months until the most important syntax changes are implemented in the CVS trunk.  But if that's the only drawback, then I'm willing to accept it.

Still, thank you for the info -- it definitely changed my view of strongly typed languages.

- Lyx
Title: Programming Languages
Post by: Dibrom on 2005-12-15 06:24:27
Quote
There is the downside that Ruby's syntax is currently being reworked -- thus, I'll have to idle a few months until the most important syntax changes are implemented in the CVS trunk.
[a href="index.php?act=findpost&pid=350095"][{POST_SNAPBACK}][/a]


Just a small comment regarding that...

You know, people can easily say that a rigorous mathematical specification of a language (e.g., a formal semantics) is unnecessary, but it's at times like these that you really see how useful one would have been.

One of the biggest problems with popular programming languages is that so many of them are never very well defined from the start, and so when it's time for the language to grow, there's often not a clear path ahead of time for how to extend the syntax without breaking backwards compatibility.

You would have thought that after C, people would appreciate rigorous formal specifications a little bit more.  With C, the language "specification" really boils down to the behavior of the most popular compilers.  Because of that particular mess, you still see certain language features poorly supported years after they were proposed in some standard.  C++ compilers are only now starting to become consistent and reliable with highly complex code involving templates and other advanced features.

kuniklo said this earlier:

Quote
Languages tend to become less elegant as they evolve.


In response to my statement about Scheme being nicer than Common Lisp.  I really don't think this is correct, though -- I think it's instead characteristic of languages which disregard the importance of formal specification from the get-go.

There are many functional languages (usually the languages that tend to have formal specifications more often than not... surprising?) that have been around for a while and that I think have probably become more elegant as they have evolved, particularly as theory has eventually aided their further flexibility.  Haskell is a great example of this, although not the only one.

I don't know if Scala will ever become a major language.  Chances are slim, mostly because the deck is highly stacked against any language not pushed by multiple major corporations.  However, I think that even if it doesn't become a major language, it has the potential to stick around for quite a while, simply because it is already well defined and well understood, and will be easy to extend as needed in the future without as much trouble as many of the current popular "scripting languages" will face.

I think Ruby is a nice language, but I think it's going to continue to face growing pains in the future, beyond even the currently planned changes, particularly if it is ever going to tackle increasing concerns in programming like security, concurrency, and distribution.  These are difficult problems, and as I said earlier, they are practically impossible to handle in a straightforward fashion without a deep, well-designed, and mathematically precise model of the language's behavior -- something that goes well beyond its "implementation" in whatever compiler or VM is standard.

As for your project -- of course you should use whatever works best for you.  Ruby is a fine choice for RAD, and it's probably true that it would "not get in the way" of your thinking as far as rapid implementation goes.  But that's also a double-edged sword: dynamic typing and the sort of flexibility offered by a "loose" language like Ruby offer many more opportunities for poor design (from a maintainability, safety, and performance point of view), although kuniklo will probably disagree.

On the other hand, if you already have to wait "months," I don't see why you couldn't easily become proficient with Scala in that time.  In the brief time I've known about it, I've already written a dozen non-trivial applications.  I don't feel as comfortable in it as in Haskell or ML yet, of course, but I'm not too far behind.  Despite most of the advanced features I listed that Scala supports (along with quite a few I've learned of since then that I didn't mention), the core language is very simple.  It's true that not many third-party tutorials exist for the language yet, but both the introductory tutorial and the examples PDF are pretty comprehensive.  If that isn't enough, most Java source code (from tutorials, for example) can be translated almost directly to Scala with a few mostly [a href="http://lamp.epfl.ch/~emir/bqbase/2005/01/21/java2scala.html"]cosmetic changes[/a], if that's the style of programming you want to do.  And finally, the library support of Java is going to be hard to beat, even for Ruby.  Everything for Java "just works" with Scala.  I've written some SWT applications, some applications using OpenGL (via lwjgl), Cocoa-Java applications with Scala, etc.  Everything I've tried so far from Java, even some of the more experimental and weird stuff, just plugs right into Scala with no hassle.  I think that's a pretty big deal, even though I really don't like Java itself.
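To give a sense of how "cosmetic" such a translation typically is, here's an invented example: the Java method

```scala
// public int sum(int[] xs) { int s = 0; for (int x : xs) s += x; return s; }
// becomes, almost line for line:
object TranslationDemo {
  def sum(xs: Array[Int]): Int = {
    var s = 0
    for (x <- xs) s += x  // Java's for-each, with <- instead of :
    s                     // the last expression is the return value
  }

  def main(args: Array[String]): Unit =
    println(sum(Array(1, 2, 3, 4)))  // prints 10
}
```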

Anyway, good luck on your project.
Title: Programming Languages
Post by: kuniklo on 2005-12-15 06:31:23
Quote
I think Ruby is a nice language, but I'm thinking it's going to continue to face more growing pains in the future, beyond even currently planned changes, particularly if it is ever planning to tackle increasing concerns in programming like security, concurrency, and distribution.  These are difficult problems, and as I said earlier, they are practically impossible to handle in a straightforward fashion without a deep, well-designed, and mathematically precise model of the language's behavior -- something that goes well beyond its "implementation" in whatever compiler or VM is standard.


Why optimize for the hard case?  Most applications never need to deal with these issues in a serious way and it significantly complicates a language to try to address them all.  All language designs involve tradeoffs and compromises, so why not leave the really hard stuff for specialized tools?  The Common Lisp people trot this argument out all the time but I think the net result of all their work to handle the "hard" cases is that CL is just too much for the average programmer and nobody uses it.  Have you ever read Richard Gabriel's "Worse is Better" essay?  (http://www.jwz.org/doc/worse-is-better.html)  Some real wisdom borne of experience, I think.  There's a reason perl is 100 times as popular as all the exotic languages put together and I think it has a lot to do with this philosophy.

As for the evolution of Ruby, I think Matz has so far shown remarkably good taste and I trust him to handle it well.  Ruby has a mental ergonomic that just fits and could never be specified in a simple mathematical formalism and I don't think you can really appreciate it until you've actually worked in the language for a while. 

I think Lyx's decision making process is going to be pretty common for the average programmer choosing between something like Python or Ruby and something like Haskell or Scala.
Title: Programming Languages
Post by: Dibrom on 2005-12-15 06:57:52
Quote
Why optimize for the hard case?  Most applications never need to deal with these issues in a serious way and it significantly complicates a language to try to address them all.   All language designs involve tradeoffs and compromises, so why not leave the really hard stuff for specialized tools?


Most applications don't have to deal with these issues as much right now, but as I pointed out, they are going to have to do so much more commonly in the future.  In a language designed for the long haul, it makes sense to plan ahead, and a formal semantics and similar characteristics help immensely in this regard.  That was the point I was trying to make.

If you'd rather say: "well Ruby is a good language except I never expect it to handle those problems, and as those problems become more common, perhaps Ruby should be used less," well, OK

Quote
The Common Lisp people trot this argument out all the time but I think the net result of all their work to handle the "hard" cases is that CL is just too much for the average programmer and nobody uses it.


Well, there's LISP, and then there's everything else, right?  At least that's how the LISP people look at it.

But actually, there's a twisted sort of truth there.  LISP's overly simplistic syntax, which lends itself to the sort of macro heaven that LISP users are so infatuated with, also makes it less expressive in some sense.  Common LISP takes the approach of adding features as solutions to these sorts of "hard problems" in the form of macro-supported domain-specific languages via libraries.  This tends to make things harder to learn because features are more spread out and there's very little in the way of syntactic sugar to make things easier to remember and to aid in idiomatic programming styles.

As I see it, there are really two much better alternatives:



Quote
Have you ever read Richard Gabriel's "Worse is Better" essay?  (http://www.jwz.org/doc/worse-is-better.html)  Some real wisdom borne of experience, I think.  There's a reason perl is 100 times as popular as all the exotic languages put together and I think it has a lot to do with this philosophy.


I just skimmed it, but I'll read it and respond more later.

Quote
As for the evolution of Ruby, I think Matz has so far shown remarkably good taste and I trust him to handle it well.  Ruby has a mental ergonomic that just fits and could never be specified in a simple mathematical formalism and I don't think you can really appreciate it until you've actually worked in the language for a while.


That last sentence is a particularly profound statement.  Can you really support that?

The tools available for language specification are not "simple" mathematical formalisms by any means.  Maybe you should read TaPL (http://www.cis.upenn.edu/~bcpierce/tapl/index.html) and/or ATTaPL (http://www.cis.upenn.edu/~bcpierce/attapl/index.html), or any of the other dozen similar sources, if you really believe that.

Saying that Ruby has an "ergonomic" which cannot be specified in a simple mathematical formalism signifies to me that the design is simply inconsistent and not well understood.  Either that, or in fact it can be specified, but nobody has bothered to do it.  There is absolutely no need for a formal specification to conflict with an elegant "ergonomic" -- the two are not diametrically opposed.

Quote
I think Lyx's decision making process is going to be pretty common for the average programmer choosing between something like Python or Ruby and something like Haskell or Scala.


Of course it is.  But that doesn't necessarily mean it's always the right decision making process...
Title: Programming Languages
Post by: kuniklo on 2005-12-15 07:40:25
Quote
Most applications don't have to deal with these issues as much right now, but as I pointed out, they are going to have to do so much more commonly in the future.  In a language designed for the long haul, it makes sense to plan ahead, and a formal semantics and similar characteristics help immensely in this regard.  That was the point I was trying to make.


I think the point of Gabriel's essay is that, ironically, in designing for the long haul you make sure you don't survive the short haul.

Quote
If you'd rather say: "well Ruby is a good language except I never expect it to handle those problems, and as those problems become more common, perhaps Ruby should be used less," well, OK


There are a lot of things I wouldn't even try to use Ruby for, but I'd say the same thing about any particular language.

Quote
But actually, there's a twisted sort of truth there.  LISP's overly simplistic syntax, which lends itself to the sort of macro heaven that LISP users are so infatuated with, also make it less expressive in some sense.


Agreed.  All the domain-specific languages in Lisp look the same, so it's not as much of a gain as you'd think.  I think Larry Wall was actually on the right track in trying to make Perl's syntax follow the function of the code.  I'm a big fan of non-alphanumerics in syntax because I think they can convey a lot of information in a quickly grasped visual form.

Quote
Quote
As for the evolution of Ruby, I think Matz has so far shown remarkably good taste and I trust him to handle it well.  Ruby has a mental ergonomic that just fits and could never be specified in a simple mathematical formalism and I don't think you can really appreciate it until you've actually worked in the language for a while.


That last sentence is a particularly profound statement.  Can you really support that?


Let me put it this way - I've studied and admired many of the functional languages and been impressed with their mathematical rigor.  For a while I thought it was somehow important that the entire language could be collapsed to a formalism as simple as lambda calculus or graph rewriting.  These days I'm very skeptical of this.  My mind works in a completely different way when I'm writing Ruby code and I can just crank out working, tested code at least 5x as fast as I ever could in anything else.  It just *fits* like a good tool.  And no, I don't think you could reduce Ruby to a simple logical formalism, although under the hood it's very scheme-like.

Quote
Saying that Ruby has an "ergonomic" which cannot be specified in a simple mathematical formalism signifies to me that the design is simply inconsistent and not well understood.  Either that, or in fact it can be specified, but nobody has bothered to do it.  There is absolutely no need for a formal specification to conflict with an elegant "ergonomic" -- the two are not diametrically opposed.


Maybe, but empirically that's been the case in the languages I'm familiar with.  They all suffer from scheme disease to some extent.  They're not willing to pollute their model with things that make real programming easier.  That kind of stuff gives theorists and mathematicians a boner but seems pretty far removed from typical real-world tasks.

Quote
Of course it is.  But that doesn't necessarily mean it's always the right decision making process...


What does "right" even mean here?  A programming language is just a means to an end, and for people like me and Lyx, Ruby gets the job done.  I think you do have to choose between linguistic elegance and sophistication on the one hand, and wide acceptance with good tool and library support on the other, and the equation keeps coming down on the latter side for me.

Anyway, I'd be curious to hear what you think of the Gabriel essay.  I first read it in the middle of my functional language phase and thought it was moronic.  I keep re-reading it and appreciating the wisdom of it more every time now.  I think the natural world mostly works this way because it's the only way to manage complexity.
Title: Programming Languages
Post by: Lyx on 2005-12-15 14:14:57
Quote
Ruby is a fine choice for RAD, and it's probably true that it would "not get in the way" of your thinking as far as rapid implementation goes. But that's also a double-edged sword: dynamic typing and the sort of flexibility offered by a "loose" language like Ruby offers you many more opportunities for poor design (from all of a maintainability, safety, and performance p.o.v.), although kuniklo will probably disagree

I think the usefulness of this "safety" heavily depends on the kind of project you'll do. In my case, it will be simple but extensive OO tasks. Almost everything will be about handling objects in the most literal sense. Thus, there really won't be many complex mathematical algorithms or much string manipulation going on. I'd expect 80% of the code to be about creating/destroying objects and reading/writing their properties. Now you may think that this is one of those cases where type safety is important - however, the number of accessors really is quite limited - safety checks in those central accessors, plus the process-management part of the program handling exceptions, should already provide more than enough safety. Maybe your thought at this point is "okay, catching and handling errors at the outer defense is all well and good, but how about providing safety at the roots?" Well, that's undesired - what I want to achieve would be heavily limited (and therefore defeat its purpose) if the inner components of the program couldn't trust each other - (almost) unrestricted access is necessary there. Thus, there isn't even a need for a strong inner security model.
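The "safety checks in central accessors" approach Lyx describes can be sketched in a few lines of Ruby. This is a hypothetical illustration, not code from the actual project; the class and property names are invented:

```ruby
# Hypothetical sketch: one central write accessor carries the safety
# checks, while the rest of the program trusts the object's internals.
class Entity
  def initialize
    @props = {}
  end

  # Central accessor: the single "outer defense" where validation lives.
  def set_prop(name, value)
    validator = "validate_#{name}"
    send(validator, value) if respond_to?(validator, true)
    @props[name] = value
  end

  def get_prop(name)
    @props[name]
  end

  private

  # Per-property checks; inner components never have to repeat them.
  def validate_health(value)
    raise ArgumentError, "health must be >= 0" unless value.is_a?(Numeric) && value >= 0
  end
end

e = Entity.new
e.set_prop(:health, 10)
```

Errors are caught at the accessor boundary at run time rather than statically, which is exactly the trade-off under discussion.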

I don't think that the "double-edged sword" in my case is a disadvantage - the biggest effort is getting the whole framework done in the first place. Optimizing, debugging and extending it is easy afterwards. The reason is that everything is related to/depends on everything else - so, when getting it into place the first time, the problem is that one needs to think about everything at once. Refining individual components afterwards is much easier, because then it is much easier to focus.

Thus, "build it now, care about minor glitches later" in this case indeed makes sense.

Quote
On the other hand, if you already have to wait "months," I don't see why you couldn't become a pro with Scala in that time easily.

I worded that a bit unclearly. Yes, I would have to idle for a few months as far as writing code goes. However, the architectural concept work isn't finished yet, so I can just use those months to finish everything on paper and plan ahead. It would be *useful* if, while doing that, I could also try out some of it in code, but it is not a *requirement*.


- Lyx
Title: Programming Languages
Post by: Dibrom on 2005-12-18 21:05:37
Quote
Anyway, I'd be curious to hear what you think of the Gabriel essay.  I first read it in the middle of my functional language phase and thought it was moronic.  I keep re-reading it and appreciating the wisdom of it more ever time now.  I think the natural world mostly works this way because its the only way to manage complexity.


I've read Gabriel's article, and here are my thoughts on it.

First, the distinction between the "MIT approach" and the "New Jersey approach" has some validity, but not as much as is initially made out.  Gabriel basically says he's presenting it in "strawman" form, so this isn't a real big surprise, but I think it's important.

The problem, as I see it, is that simplicity and correctness are not necessarily orthogonal.  They can be in certain cases, but this is not axiomatic.  The exact same thing goes for consistency.  I don't see any particular reason why either of those are necessarily at odds with simplicity.

Completeness, now, is a different story.  It is true that completeness, almost by nature, is at odds with simplicity.  But on the other hand, proper design should hold that functionality is essentially abstracted where needed, and so it should be possible to leave out functionality without compromising simplicity, correctness, or consistency.

It may be hard to believe that about the consistency point, but it's true.  Most languages that I prefer to use have good support for domain-specific languages.  It is through this facility that they are able to sacrifice completeness without violating the other principles.  Extra functionality can be added through a domain specific language, and done in a simple, correct, and consistent way.  Haskell is the best example I know of this approach, although Fortress sounds like it's going to try to do it just as well.

Now, with regards to the "PC loser-ing" problem -- the author does have a valid point there.  But it is a special case, also.  He's talking about an OS design issue and portability.  The same needs in that context do not map directly to the programming language design space.  Portability is important, sure, but it can be solved in other ways, keeping a complex implementation behind a simple interface.  GHC, for example, can compile to C code first and solve portability that way.  We also have things like the JVM, the .NET CLR, SEAM, or LLVM to make things easier on that front as well.

In the programming language design space, there is no need to sacrifice safety in most contexts (I'm not saying everyone needs to go "purely" functional however).  It really isn't all that hard to come up with a proper design for the behavior of a language, using well-understood theory, and to apply that to meet the principles of simplicity, correctness, and consistency.  It's just that most language implementors for the particularly popular languages don't go that route.  Maybe it's because they don't like math or theory or whatever else -- I don't know.  All I do know is that I'm not the greatest at math, and I find the concepts in TaPL quite easy to understand and implement.

As for implementation simplicity from the language p.o.v., I don't really know what to think about what the author says about that.  Sure, it might be easy to make a basic C compiler, but it's almost impossible to make a really great one.  Compare that to a language like LISP or ML (the latter of which is essentially just a polymorphic, typed lambda calculus like System F with lots of sugar), where it's easy to make a basic compiler, and not impossible to make a great compiler -- the latter of which is possible because there's plenty of theory available that makes it abundantly clear just how to do it.  C, and C++, on the other hand, are total messes, and it doesn't take a genius to look at the current compiler situation after all these years to understand the problem.

Now, I do think I understand where the author was coming from.  His criticism is aimed particularly at CL.  After all these years, the recent issue with Reddit has highlighted that there are serious problems in the CL community with regard to the relationship between their design, implementation, and expectations.  That I don't dispute.  What I do dispute is that such principles are generalizable.

Haskell, I don't believe, suffers from the problems the CL guys do.  Haskell has two really high-quality free implementations: GHC and Hugs.  They run almost everywhere (importantly, Win32 and Mac OS X, where free CL implementations are often weak, as I understand it).  They come with libraries that are both simple and complete.  There are lots of 3rd-party libraries, and there is endless progress being made on all fronts in the Haskell development community, whether it's new features being researched, better libraries being developed, theory being conceptualized, or whatever else.

Haskell, I believe, meets the principles of simplicity (with a few exceptions that have more to do, I believe, with language education than actual complexity), correctness, consistency, and completeness.  I think it does this without falling into the trap that the author describes the "MIT approach" as affording.

I don't think there's anything "special" about Haskell in this regard either.  I think its approach is reproducible elsewhere.  Scala, I don't think, is quite there yet, but it seems to me that they are on the right track, given their different design constraints.

Now, with regards to languages like Ruby, I don't think it meets the concept of implementation simplicity that the author of the article discusses so much.  When I consider implementation simplicity, part of that includes a formal specification that makes re-implementing the language simple.  If there is a mapping between the concepts of OS portability and language portability, I think this is the important part of it.  Having a well understood specification makes it easy to create other dialects (in effect "porting" the essence of the language to another "platform"), or to create higher quality implementations of the same dialects.

With the introduction of non-backwards-compatible syntax changes and all of the fuss over the new VM, it's clear to me that Ruby doesn't really meet the principle of implementation simplicity.  It may be the case that the actual Ruby runtime environment that currently exists is simple in its underlying code, but this isn't what really counts.

So, essentially, I don't buy the line that designing for the long haul ensures failure in the short term.  It certainly can happen if it's done incorrectly, but if anything it's a case of correlation, not causation.  The key to doing it right essentially boils down to handling abstraction properly.

In the past, it's been especially difficult handling abstraction properly because the theory was either underdeveloped or nonexistent.  Over time things have improved tremendously and the same isn't really true anymore.  Now, for example, we have incredibly rich and powerful polymorphic type systems with well understood algorithms for type inference, as well as a much better understanding of problems relating to concurrency and distribution (and frameworks for handling them as well), whereas none of that was available for the first languages.  And those are only 2 examples of many.

Aside from problems in early languages that are obvious, you might say that early Haskell (or Miranda or Gofer) was a failure.  Maybe (in the popular sense, probably even, but certainly not to the group of people it was intended for, as there is no other language with as rich a research community that I know of).  Certainly if Haskell arrived on the scene today without Monads, there would be deep problems preventing its success.  But nowadays we have all of these tools in the form of theoretical frameworks developed over the years since the implementations of the first serious programming languages.  It makes sense for implementors of other languages (like Ruby) to take advantage of it all.  Doing so is not going to signal their failure at all -- in fact, I think quite the opposite.

Maybe something like Haskell or Scala isn't as viral as C or Ruby either, but in the end I don't think this is because of the "MIT approach" or the "New Jersey Approach," but most likely because of more subtle issues like language education (and CS education in general).
Title: Programming Languages
Post by: kuniklo on 2005-12-19 02:19:38
Quote
Maybe something like Haskell or Scala isn't as viral as C or Ruby either, but in the end I don't think this is because of the "MIT approach" or the "New Jersey Approach," but most likely because of more subtle issues like language education (and CS education in general).


I think the analogy actually works pretty well, but I guess when we're speaking this abstractly it's going to be largely a matter of taste.

So, more concretely, one area where statically typed languages seem to fall really short is in metaprogramming - introspection, dynamic code generation, etc.  How do you accomplish these kinds of things in a language like Haskell?  For example, Rails dynamically adds methods to its O/R mapping classes based on the structure of the database tables.  How would you handle this in a statically typed language?
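The mechanism being asked about can be sketched in a few lines of Ruby. This is a toy illustration, not actual Rails code; the column list here is hard-coded, whereas Rails would read it from the database at startup:

```ruby
# Toy sketch of Rails-style O/R mapping: accessor methods are generated
# at runtime from a column list that would normally come from the database.
class Record
  COLUMNS = [:id, :title, :artist]  # stand-in for the table's schema

  COLUMNS.each do |col|
    define_method(col)       { @row[col] }           # reader, e.g. track.title
    define_method("#{col}=") { |v| @row[col] = v }   # writer, e.g. track.title = "x"
  end

  def initialize(row = {})
    @row = row
  end
end

track = Record.new(id: 1, title: "Song")
track.artist = "Band"
```

Because the generated methods depend on data known only at run time, there is nothing for a static type checker to verify these calls against, which is the crux of the question.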
Title: Programming Languages
Post by: Dibrom on 2005-12-19 02:41:10
Quote
Quote

Maybe something like Haskell or Scala isn't as viral as C or Ruby either, but in the end I don't think this is because of the "MIT approach" or the "New Jersey Approach," but most likely because of more subtle issues like language education (and CS education in general).


I think the analogy actually works pretty well, but I guess when we're speaking this abstractly it's going to be largely a matter of taste.


I'm not sure if you mean the "MIT approach" vs the "New Jersey Approach" is an analogy that works well when applied to educational problems, or if you're referring to something else I said...

In the case of the former, well, I don't really know what to say to that.  On the one hand, there is a place for "rough and ready" education styles for people that need to do something simple like tinker with VB, or know some very basic C or something like that.  On the other hand, too often those same people try to use those same languages and approaches for something well beyond the scope for which they are fit.  It's all about the right tool for the right job, and that philosophy should be applied from education all the way down to programming language choice and even computation style.

I don't agree with the idea that "one size fits all" here (although some can come close), and I don't agree with the concept that languages should be made to approximate natural language or to be "intuitive" to Joe Sixpack.  Why?  Because Joe Sixpack's intuitions are wrong more often than not, whether in the realm of design or something more concrete like the implementation of a particular algorithm.

Computer science was, and still should be, a mathematical enterprise, as far as I'm concerned.  Engineering is something else, and while it's laudable in its own right, people too often associate CS nowadays directly with the latter and view the former as some sort of unnecessary legacy baggage.

Quote
So, more concretely, one area where statically typed languages seem to fall really short is in metaprogramming - introspection, dynamic code generation, etc.  How do you accomplish these kinds of things in a language like Haskell?


First of all, very, very few languages have metaprogramming facilities that are on the level of LISP, even other dynamically typed languages.

Static languages have always had good properties for metaprogramming, although few of them expose this functionality in a way familiar to users of dynamic languages.  But as far back as the original usage of ML, the idea has been for these languages to support such properties.

Going back to metaprogramming on the level of LISP, Haskell in fact has a very powerful answer to that in the form of Template Haskell (http://www.haskell.org/th/) (there's also MetaOCaml (http://www.metaocaml.org/) for OCaml, but I've not used that one yet), which is just as powerful, if not more so, than LISP macros.  You get code generation, access to the abstract syntax, just about whatever you need.

Other static languages which are slightly less powerful in their metaprogramming facilities are C++ and Scala.  Both use the idea of parametric polymorphism and objects to basically create a new AST from an expression by returning nested objects through the use of overloaded operators.  Both of them use this for domain-specific-language style programming, and C++ in particular can use it for other things like conditional compilation (which doesn't have much of an analog in the VM-targeting Scala anyway).

As for introspection, besides Template Haskell, Haskell has Generics, which I linked you to a paper about ("Scrap your boilerplate").  You can also approximate "dynamic code creation" through rather basic uses of higher-order programming with ADTs.  In both Haskell and Scala this is trivially easy because of first-class functions.  In C++ it's still not that hard, but a little uglier since function pointers suck (you usually end up using function objects instead).  But it should be noted that introspection doesn't have a lot of use in a static language, because if you design your solution properly, you don't need to "introspect" anyway.  You already know what kinds of values an object or something like it is going to contain, and you specify and deal with that through ADTs.

However, in the case of Haskell and Scala, I'd say that dynamic-code-generation style programming is in fact better than what you get with a dynamic language, because by defining your ADT beforehand, you basically set up a constraint which proves at compile time that your generated code will not go wrong either.  Both that and pattern matching make it dead simple.

Aside from what I've already said, there's more material somewhat related to all this throughout the Haskell Wiki, at pages like this one (http://www.haskell.org/hawiki/RunTimeCompilation).

Quote
For example, Rails dynamically adds methods to its O/R mapping classes based on the structure of the database tables.  How would you handle this in a statically typed language?


Haskell actually has things like eval now, and can be used for dynamically loaded plugins (see: hs-plugins (http://www.cse.unsw.edu.au/~dons/hs-plugins/)), so one way is simply to create a list or tuple or record or some other type which points to the structure of the database tables and evaluates the contents they represent.  "lambdabot" on #haskell for example, can both execute and give the type of arbitrary haskell code printed into the channel (e.g., @eval let fix f = f (fix f) in fix id).  It does this by using this dynamic approach.  This should give an idea of how you might be able to do something similar for the database problem.

Another way is to specify a domain-specific language for which the database contains snippets, and then embed an evaluator in a monad in Haskell.  That might sound complex, but it's rather easy because of the way Haskell is already designed.  There are plenty of other approaches as well, but those are two that I think are more obvious.

This is assuming, by the way, that you mean that somehow the contents of the tables themselves contain something as rich as actual code snippets.  If not, then a much more basic approach is possible.

Other static languages I haven't talked much about yet, like OCaml, have a few other options as well, whether through serialization and unserialization of code (coupled with continuations maybe), or even support for dynamic types in the type system.  OCaml has beginning support for that with Dynaml (http://farrand.net/dynaml.shtml) (Haskell has it as well I believe, although with all the other ways I already outlined, it's not really needed).  AliceML does something sort of like this with its package (http://www.ps.uni-sb.de/alice/manual/packages.html) system, which allows it to load arbitrary code across a network for things like computation servers.

So basically, I think it's wrong to say that static languages fall very short on metaprogramming facilities.  I've just outlined a wide variety of different ways that various static languages handle this problem quite well.
Title: Programming Languages
Post by: kuniklo on 2005-12-19 05:00:30
Quote
Computer science was, and still should be a mathematical enterprise, as far as I'm concerned.  Engineering is something else, and while it's laudable in its own right, people too often associate CS nowadays directly with the latter and view the former as some sort of unnecessary legacy baggage.


I agree.  I think the problems arise when people from a CS background start suggesting that their tools can magically solve engineering problems (a common claim in the functional language community).  If anyone wants to claim that their language or paradigm has significant advantages in the field, then the burden falls on them to prove their claim.

Quote
Haskell actually has things like eval now, and can be used for dynamically loaded plugins (see: hs-plugins (http://www.cse.unsw.edu.au/~dons/hs-plugins/)), so one way is simply to create a list or tuple or record or some other type which points to the structure of the database tables and evaluates the contents they represent.  "lambdabot" on #haskell for example, can both execute and give the type of arbitrary haskell code printed into the channel (e.g., @eval let fix f = f (fix f) in fix id).  It does this by using this dynamic approach.  This should give an idea of how you might be able to do something similar for the database problem.

Another way is to to specify a domain specific language for which the database contains snippets and then embed an evaluator in a monad in Haskell.  That might sound complex, but it's rather easy because of the way Haskell is already designed.  There are plenty of other approaches as well, but those are two that I think are more obvious.

This is assuming, by the way, that you mean that somehow the contents of the tables themselves contain something as rich as actual code snippets.  If not, then a much more basic approach is possible.


I'm actually talking about the simple case of mapping database columns to attributes on objects, not executing arbitrary snippets of text as code.  How would you typecheck this kind of code?  For instance, in a typical rails program, you have lots of code manipulating attributes on objects that represent rows in database tables.  Since you don't know until runtime if all these attributes will actually be present, how do you statically check the correctness of the code?
Title: Programming Languages
Post by: Dibrom on 2005-12-19 05:15:25
Quote
I agree.  I think the problems arise when people from a CS background start suggesting that their tools can magically solve engineering problems (a common claim on the functional language group).  If anyone wants to claim that their language or paradigm has significant advantages in the field then the burden falls on them to prove their claim.


Concurrency and distribution are two very difficult engineering problems, and functional languages lead the way here by far.  As far as I'm concerned, they've already proven their claims both mathematically and empirically.  People seem not to want to listen though, because it's not done in an imperative, OO way.

If you don't believe me, you're welcome to check out some of the state-of-the-art languages and see how powerful and easy to use they are for these types of problems.  Erlang, Oz, Scala, or Alice are all good choices to try for something like that.

Quote
I'm actually talking about the simple case of mapping database columns to attributes on objects, not executing arbitrary snippets of text as code.  How would you typecheck this kind of code?  For instance, in a typical rails program, you have lots of code manipulating attributes on objects that represent rows in database tables.  Since you don't know until runtime if all these attributes will actually be present, how do you statically check the correctness of the code?


Hrmm.. I'm a little bit confused by this.  I assumed you meant the much more complex problem of dealing with actual code because the simpler case you described is a pretty basic problem.  I mean no offense by this, but I have to wonder: have you really actually done much functional programming?  You've implied you have, but I think the solution to this sort of problem is kind of obvious if you have.

As I already said, you basically use ADTs.

I hope you can get the gist of that.  If you're not dealing with arbitrary code snippets, that implies you know that your properties fall within a finite set, and that finite set can be specified as a variant ADT like TableProperty.  What you'd do then is read the database contents into a string, then parse the string into various properties.  I gave the example of returning a list of properties (e.g., [TableProperty]), but in reality you'd have to make it something like "IO [TableProperty]" to keep it safe, unless you just didn't care and wanted to use unsafePerformIO.
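As a hedged sketch of that approach (TableProperty, parseProperty, and readProperties are all made-up names, and the "key=value" format is just a stand-in for whatever a real query returns), parsing database contents into a variant ADT might look like:

```haskell
-- A variant ADT covering the finite set of properties a row can contain.
data TableProperty = Login String
                   | Phone String
                   | Building String
                   deriving (Show, Eq)

-- Parse one "key=value" pair; Nothing for anything unrecognized.
parseProperty :: String -> Maybe TableProperty
parseProperty s = case break (== '=') s of
  ("login",    '=':v) -> Just (Login v)
  ("phone",    '=':v) -> Just (Phone v)
  ("building", '=':v) -> Just (Building v)
  _                   -> Nothing

-- Collect every line that parses, dropping the rest.
readProperties :: String -> [TableProperty]
readProperties = foldr keep [] . lines
  where keep l acc = maybe acc (: acc) (parseProperty l)

main :: IO ()
main = print (readProperties "login=foo\nphone=x3352\nbogus")
```

In real code readProperties would have type IO [TableProperty], since the string would come from an actual database query rather than a constant.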

Maybe I'm still misunderstanding your problem and there's something a lot more difficult about it that I just don't get.  But as it stands, it seems pretty simple to deal with to me.  By making your ADT recursive, you can also deal with just about any type of nested properties you might encounter.  At that point, it naturally starts to look a lot like a domain-specific language with an abstract syntax.

If you're referring essentially just to the basic problem of how to deal with type safety with a computation that might fail (i.e., a property you expect not existing), well that's what monads are for.  For simple, non-IO cases you usually use Maybe (or if you want fancy non-deterministic computational support you can use lists, which are also monadic in haskell by default).  For example, a function that might fail would have a signature like this: elemExistsInList :: (a -> Bool) -> [a] -> Maybe a.  In the IO monad, you can do basically the same thing.  And you don't actually need full blown monad support to handle this problem.  OCaml can use other similar, but ultimately less general techniques to do the exact same thing.
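The signature above can be implemented directly; the body here is one obvious version of it (the standard library's Data.List.find has essentially the same type):

```haskell
-- Returns the first element satisfying the predicate, or Nothing if the
-- search fails.  The possibility of failure is explicit in the Maybe.
elemExistsInList :: (a -> Bool) -> [a] -> Maybe a
elemExistsInList _ []     = Nothing
elemExistsInList p (x:xs)
  | p x       = Just x
  | otherwise = elemExistsInList p xs

main :: IO ()
main = do
  print (elemExistsInList even [1, 3, 4, 5 :: Int])  -- Just 4
  print (elemExistsInList even [1, 3, 5 :: Int])     -- Nothing
```

Any caller is forced by the type to deal with the Nothing case before using the result.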
Title: Programming Languages
Post by: kuniklo on 2005-12-19 05:32:31
Quote
Concurrency and distribution are two very difficult engineering problems, and functional languages lead the way here by far.  As far as I'm concerned, they've already proven their claims both mathematically and empirically.  People seem not to want to listen though, because it's not done in an imperative, OO way.

If you don't believe me, you're welcome to check out some of the state of the art languages and see how powerful they are and easy to use for these types of problems.  Erlang, Oz, Scala, or Alice are all good choices to try for something like that.


  I turned down a job doing Erlang for Bluetail because I didn't want to go live in Sweden.  The Erlang guys have done a very good job of solving a particularly difficult engineering problem with a custom functional language, and non-mutability certainly plays a part.  Interestingly, they also feel *very* strongly that Erlang's dynamic typing was a crucial factor in both the stability and durability of their running applications and in their ability to bring their engineering staff up to a productive level in the new language quickly.

Quote
Hrmm.. I'm a little bit confused by this.  I assumed you meant the much more complex problem of dealing with actual code because the simpler case you described is a pretty basic problem.  I mean no offense by this, but I have to wonder: have you really actually done much functional programming?  You've implied you have, but I think the solution to this sort of problem is kind of obvious if you have.

As I already said, you basically use ADTs.


I'm talking about a full blown O/R mapping layer like Hibernate for Java.  It maps table rows into first class objects with arbitrary attributes that then get passed around like any other Ruby object except mutations are passed transparently back on to the database.  For instance, if I have a user table like so:

id | login | phone | building
....
....
....

I can have code that uses a "User" object like this:

Code: [Select]
u = User.find(1)
u.login = 'foo'
u.phone = 'x3352'
u.save


How would you typecheck this if you can't tell if u.login is valid until runtime?

Quote
If you're referring essentially just to the basic problem of how to deal with type safety with a computation that might fail (i.e., a property you expect not existing), well that's what monads are for.  For simple, non-IO cases you usually use Maybe (or if you want fancy non-deterministic computational support you can use lists, which are also monadic in haskell by default).  For example, a function that might fail would have a signature like this: elemExistsInList :: (a -> Bool) -> [a] -> Maybe a.  In the IO monad, you can do basically the same thing.  And you don't actually need full blown monad support to handle this problem.  OCaml can use other similar, but ultimately less general techniques to do the exact same thing.


So what do you do if 95% of your code can "fail" like this?  What is static checking buying you if every function has to be decorated and checked like this?
Title: Programming Languages
Post by: Dibrom on 2005-12-19 05:52:17
Quote
I'm talking about a full blown O/R mapping layer like Hibernate for Java.  It maps table rows into first class objects with arbitrary attributes that then get passed around like any other Ruby object except mutations are passed transparently back on to the database.  For instance, if I have a user table like so:

id | login | phone | building
....
....
....

I can have code that uses a "User" object like this:

Code: [Select]
u = User.find(1)
u.login = 'foo'
u.phone = 'x3352'
u.save


How would you typecheck this if you can't tell if u.login is valid until runtime?


OK.  Now your problem is a little more clear to me.  But as I already said, you basically use Monads, or in simpler terms, you "tag" the return type of your functions (or the type of your objects, or whatever -- depending on how the system is being used) with another type specifying that such a computation is liable to fail.  This makes the semantics of the computation clear to the type checker.  It doesn't actually know that "Maybe a" is a type that might represent a failed computation of 'a', but it doesn't care -- all that is important to it is that you have abstracted this important semantic detail into the type.

Since Haskell doesn't have "objects," a concrete implementation of this would look very similar to the example I already gave, except that if you wanted to make it fit a little more with the O/R concept, you could return records with labeled fields representing the retrieved properties.  Since the properties might not exist, a field representing a property 'a' could have a type "Maybe a", or you could move the abstraction up the chain a little and enclose the type of the object in some similar abstraction.  There are many choices, and different ones would be appropriate for different types of usage.
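For instance, a record-based sketch might look like this; the User type and loadUser are hypothetical stand-ins for whatever an O/R layer would actually generate, and a real loadUser would live in IO:

```haskell
-- Each column that may be absent is a Maybe field, so the possibility of
-- a missing property is part of the row's type.
data User = User
  { userId   :: Int
  , login    :: Maybe String
  , phone    :: Maybe String
  , building :: Maybe String
  } deriving Show

-- Pretend database lookup; a real one would be IO (Maybe User).
loadUser :: Int -> Maybe User
loadUser 1 = Just (User 1 (Just "foo") (Just "x3352") Nothing)
loadUser _ = Nothing

-- Using the row: any missing piece short-circuits to Nothing.
userLogin :: Int -> Maybe String
userLogin n = do u <- loadUser n
                 login u

main :: IO ()
main = do
  print (userLogin 1)  -- Just "foo"
  print (userLogin 2)  -- Nothing
```

The typechecker now rejects any code that uses a possibly-absent column as if it were guaranteed to exist.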

Quote
Quote
If you're referring essentially just to the basic problem of how to deal with type safety with a computation that might fail (i.e., a property you expect not existing), well that's what monads are for.  For simple, non-IO cases you usually use Maybe (or if you want fancy non-deterministic computational support you can use lists, which are also monadic in haskell by default).  For example, a function that might fail would have a signature like this: elemExistsInList :: (a -> Bool) -> [a] -> Maybe a.  In the IO monad, you can do basically the same thing.  And you don't actually need full blown monad support to handle this problem.  OCaml can use other similar, but ultimately less general techniques to do the exact same thing.


So what do you do if 95% of your code can "fail" like this?  What is static checking buying you if every function has to be decorated and checked like this?


You usually don't decorate the functions manually, because the typechecker will handle this for you.  You might need a few annotations to develop the original semantic abstraction, but for the most part you can leave annotations off 95% of everything else and the typechecker will just "get it."

The huge advantage to making subtle semantic details like failure-liable computations explicit in the types is that the type system will force you not to do something completely stupid, like failing to check for failure and then having the system blow up on you when you least expect it.  Having a strong static type system in this situation is desirable because it saves you from manually inserting exception handling all over the place and crossing your fingers, hoping you've managed to catch everything.

With the static approach, your code simply won't compile if it's possible you're doing something dangerous, like not acknowledging that a computation might fail somewhere in an expression.  Since you're using ADTs, dealing with these situations is easy: you just pattern match on the result.

You can also tell the compiler to check and make sure all your pattern matching clauses are exhaustive, which adds another layer of safety.

And if you want something more powerful than just basic pattern matching, you can use an Error monad, which basically gives you exceptions with almost the generality of continuations (or, if you need that level, you can use the continuation monad and even get restartable errors).
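A small sketch of the error-monad idea using Either String, which in modern Haskell serves that role directly (safeDiv and compute are invented for illustration): Left carries an error message and short-circuits the rest of the computation.

```haskell
-- A failure-liable primitive: the error case is explicit in the type.
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv x y = Right (x `div` y)

-- Composing failure-liable steps: the first Left aborts the do-block,
-- with no explicit checks in between.
compute :: Int -> Int -> Int -> Either String Int
compute a b c = do
  q <- safeDiv a b
  r <- safeDiv q c
  return (r + 1)

main :: IO ()
main = do
  print (compute 12 3 2)  -- Right 3
  print (compute 12 0 2)  -- Left "division by zero"
```

Unlike plain Maybe, the Left branch carries a reason for the failure, which is the essence of the exception-like behavior.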

*shrug* Maybe at the end of the day we just have very different ideas about how to solve problems.  To me, having a type system which allows one to make subtle semantic details about unsafe computations explicit in the types, and then using that to make code very safe, seems like a good thing and not a hindrance.  I have a hard time seeing how the features I outlined above are not desirable for these types of problems.
Title: Programming Languages
Post by: kuniklo on 2005-12-19 06:04:17
Quote
Example:
Code: [Select]
getSomeDbaseProperty :: DbaseHandle -> Maybe TableProperty
getSomeDbaseProperty handle =
 case dosomething handle of
   Nothing -> ... {- failure case -}
   Just property -> {- success case -}


You can also tell the compiler to check and make sure all your pattern matching clauses are exhaustive, which adds another layer of safety.

And if you want something more powerful than just basic pattern matching, you can use an Error monad, which basically gives you exceptions with almost the generality of continuations (or, if you need that level, you can use the continuation monad and even get restartable errors).

*shrug* Maybe at the end of the day we just have very different ideas about how to solve problems.  To me, having a type system which allows one to make subtle semantic details about unsafe computations explicit in the types, and then using that to make code very safe, seems like a good thing and not a hindrance.  I have a hard time seeing how the features I outlined above are not desirable for these types of problems.


I've never seen one person on either side of this argument persuaded in 100s of threads in far more detail than this one, so I guess it's unlikely we'll be the first.  Since the standard rebuttal to your standard defense of static typing above answers your points, I'll just roll it out again:

1. you have to unit test everything anyway since static typing can't catch all kinds of important errors
2. your code will work without checking for these maybe | maybe not types all over the place because you've unit tested it and all the extra checking code brings down readability.  it's like checking for a null pointer every time you access *anything*
3. type inference and static type safety add a layer of semantic complexity to code that a lot of programmers find confusing or at least distracting

Like I said, I used to be pretty excited by the idea of static type checking with type inference, but I've personally grown to dislike it after working in languages with it and without it.  Maybe we can bet each other a six-pack that Haskell, Scala, etc. remain interesting research languages and not much more in 2015?  My other prediction is that the dynamic languages gradually start to resemble Dylan more, with optional static type declarations as type assertions and aids to the compiler.
Title: Programming Languages
Post by: Dibrom on 2005-12-19 06:18:37
Quote
I've never seen one person on either side of this argument persuaded in 100s of threads in far more detail than this one, so I guess it's unlikely we'll be the first.  Since the standard rebuttal to your standard defense of static typing above answers your points, I'll just roll it out again:


Ok

Quote
1. you have to unit test everything anyway since static typing can't catch all kinds of important errors


Here's the difference.  It's unlikely you can develop unit tests to handle all combinations of cases, just through intuition.  On the other hand, by having the semantics of failure-liable computations explicit in the types, the compiler can prove (that's a powerful idea if you think about it) that in certain ways your code is simply not going to fail.

As for the kinds of "important errors" the type system isn't going to catch, they aren't going to be errors related to failed computations of the sort you've been describing.  The types of errors the type checker won't catch are incorrect algorithms, or doing things like checking the wrong fields perhaps (although in some cases it will catch it if the fields produce unexpected type mismatches).

So far, I've seen you mention that the type checker won't catch many types of errors, but I've not seen you give explicit examples.  Since I deal with type checkers all the time, I'll assume you're thinking of the same kinds of problems I am, and as I just pointed out, failure-liable computations (the kind important in your O/R example) are exactly the type of errors they do catch.

Quote
2. your code will work without checking for these maybe | maybe not types all over the place because you've unit tested it and all the extra checking code brings down readability.  it's like checking for a null pointer every time you access *anything*


Wrong.  Because of the way monads work, the failure checking is not made explicit in the syntax you are using.

This is difficult to understand, and difficult for me to explain briefly.  But I'll try.  Consider this example:

Code: [Select]
niftyFunction =
 do obj <- retrieveObjFromDbase dbase
    building <- building obj
    if building == "someSpecialPlace" then ... else ...


Now what you don't see is important.  Each line of code in that monad binds the result of a computation to a variable for use by the next function.  Depending on how you design your monad, a failure in any one of those functions (say, if the obj cannot be retrieved, or the building doesn't exist) will "fold" all the way out and cause the entire computation to fail in a safe way, without you ever having to make checks anywhere except at a few key places (the "boundaries" of the monad).  This is because the "binder" function that operates on each line, binding the variables to the functions as I described, is the one that knows how to handle failed computations, so your other functions don't have to.  If it detects a failed computation, it triggers a particular process, which might be the "fold" I mentioned.  Essentially, the effect of one computation cascades across the others, but by using a monad you don't have to interact with this process directly in the explicit syntax.
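Filled out into something runnable, the sketch behaves like this; retrieveObjFromDbase and building here are toy stand-ins (a fake database of rows), each returning Nothing on failure, and a Nothing anywhere makes the whole do-block Nothing with no explicit checks in niftyFunction itself:

```haskell
-- A fake "database": rows keyed by id, each with an optional building.
type Dbase = [(Int, Maybe String)]

-- Fails (Nothing) when no row has the given key.
retrieveObjFromDbase :: Dbase -> Int -> Maybe (Int, Maybe String)
retrieveObjFromDbase db key = lookup key (map (\r@(k, _) -> (k, r)) db)

-- Fails (Nothing) when the row has no building.
building :: (Int, Maybe String) -> Maybe String
building (_, b) = b

-- No failure checks appear here; the Maybe monad's bind does them all.
niftyFunction :: Dbase -> Int -> Maybe String
niftyFunction db key =
  do obj <- retrieveObjFromDbase db key
     b   <- building obj
     if b == "someSpecialPlace" then Just "special" else Just "ordinary"

main :: IO ()
main = do
  let db = [(1, Just "someSpecialPlace"), (2, Nothing)]
  print (niftyFunction db 1)  -- Just "special"
  print (niftyFunction db 2)  -- Nothing: row has no building
  print (niftyFunction db 3)  -- Nothing: no such row
```

Swap the Maybe monad for a different one and the same code can log, carry error messages, or retry, without touching niftyFunction.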

And still through all this, you've got a provably safe handling of failure-liable computations.  It's pretty cool really.  If you'd like to learn more, there are some great explanations and tutorials here (http://www.nomaware.com/monads/html/meet.html#maybe).

Quote
3. type inference and static type safety add a layer of semantic complexity to code that a lot of programmers find confusing or at least distracting


Sure.  A lot of programmers find databases confusing too.  Or memory.  Or objects.  Or ... well, damn near anything non-trivial about programming.  But they can learn, usually, and type safety isn't really an exception.  It's just alien to people used to the "C" way, or nowadays maybe the "Python" or "Ruby" way.

Quote
Like I said, I used to be pretty excited by the idea of static type checking with type inference, but I've personally grown to dislike it after working in languages with it and without it.  Maybe we can bet each other a six-pack that Haskell, Scala, etc. remain interesting research languages and not much more in 2015?  My other prediction is that the dynamic languages gradually start to resemble Dylan more, with optional static type declarations as type assertions and aids to the compiler.


You're probably right that Haskell and Scala aren't ever going to be mainstream languages.  But that doesn't mean that static languages can't do metaprogramming.  It also doesn't mean they aren't great choices for many of the things people think only dynamic languages can do.

I really don't know what the future of programming is going to look like, so I wouldn't make bets too heavily either way.  I do hope that there's still a place for what I feel is a good approach that so many people just seem to not understand for whatever reason, though.
Title: Programming Languages
Post by: kuniklo on 2005-12-19 06:29:17
Quote
As for the kinds of "important errors" the type system isn't going to catch, they aren't going to be errors related to failed computations of the sort you've been describing.  The types of errors the type checker won't catch are incorrect algorithms, or doing things like checking the wrong fields perhaps (although in some cases it will catch it if the fields produce unexpected type mismatches).


Right, but the tests you write to catch those problems will also catch the type mistakes, because they won't get to the point of exercising the algorithm if they don't.

Quote
Wrong.  Because of the way monads work, the failure checking is not made explicit in the syntax you are using.


I'll admit to still not really getting monads, and there's definitely some black magic possible there.  I guess I'll just say that the average ruby programmer understands the Ruby object and type system and basic metaprogramming tricks because it's really pretty simple.  I've seen a lot of very sharp and dedicated people struggle with monads.

Quote
I really don't know what the future of programming is going to look like, so I wouldn't make bets too heavily either way.  I do hope that there's still a place for what I feel is a good approach that so many people just seem to not understand for whatever reason, though.


It's certainly a much more interesting time to be a programmer than it has been for many years.  With so many applications moving on to the web there's an opportunity for a lot of new approaches.  It will be interesting to see how it all shakes out.  My guess is that the languages people are using 10 years from now will be hybrids of a lot of these ideas.
Title: Programming Languages
Post by: Dibrom on 2005-12-19 06:37:27
Quote
Quote
As for the kinds of "important errors" the type system isn't going to catch, they aren't going to be errors related to failed computations of the sort you've been describing.  The types of errors the type checker won't catch are incorrect algorithms, or doing things like checking the wrong fields perhaps (although in some cases it will catch it if the fields produce unexpected type mismatches).


Right, but the tests you write to catch those problems will also catch the type mistakes, because they won't get to the point of exercising the algorithm if they don't.


To some extent, yes.  But in the case of something like failure-liable computations, this isn't true.  You need to add exception handling to the tests, because there's no semantic information at the type level about the behavior of the computation; nothing related to failure will enter into the algorithm unless you actually have a failure somewhere else first, and that may not even happen at unit-test time, whether because your tests aren't comprehensive enough or because of network or other hardware-related conditions.

The gist of it is that unit tests can get you some of the safety of a type system, but they cover a different type of problem and are not suited for subtle semantic details.  As I mentioned before though, there's nothing wrong with having a strong static system and unit tests and then getting the best of both worlds.  You can't do that so easily from the other side.

Quote
Quote
Wrong.  Because of the way monads work, the failure checking is not made explicit in the syntax you are using.


I'll admit to still not really getting monads, and there's definitely some black magic possible there.  I guess I'll just say that the average ruby programmer understands the Ruby object and type system and basic metaprogramming tricks because it's really pretty simple.  I've seen a lot of very sharp and dedicated people struggle with monads.


I've seen the same thing.  I don't know what it is, I guess.  Monads are like continuations: so simple and general they seem hard.  Then you realize they are easy, and all of a sudden you sit there wondering what all the fuss was about.

On a more serious note, the big problem with monads is that most Haskell people are not good at explaining them.  That tutorial I linked to is great though, and the best source I've seen on them.

Quote
It will be interesting to see how it all shakes out.  My guess is that the languages people are using 10 years from now will be hybrids of a lot of these ideas.


Yeah, that's probably the only certainty.
Title: Programming Languages
Post by: Dibrom on 2005-12-19 07:06:55
Just to give a more in-depth example of how both type annotations and explicit error checking are unnecessary for most code dealing with the kinds of problems I was addressing, I've uploaded an example of some code I wrote earlier today.

It's a parser for the simply typed lambda calculus, with many extensions.  The parser is not written using a parser generator, but instead uses parser combinators.  Basically, I just check for certain string combinations in sequence.  This is exactly the kind of thing you usually have to use a bunch of checks for, whether in the form of standard if-else clauses or exceptions.  However, there's almost no error checking explicitly apparent in the code.  In fact, the one case where it is present is related not to the error-prone string processing at all, but to a check for whether a particular element of a list exists.  Even that case could have been "hidden" from the explicit syntax, but I felt it unnecessary.

Despite the highly error-prone process of dealing with a ton of different possible string combinations, in different orders at different times, the entire thing is type safe.

It's also a nice example of why domain-specific languages are so cool, since it in fact uses one to represent the parser combinators in a way that maps rather closely to something like EBNF.
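This is not the linked parser itself, but a minimal, invented illustration of the combinator style it describes: a parser is a function from input to Maybe (result, rest of input), so a failed match anywhere simply yields Nothing, with no visible error-checking code.

```haskell
-- A parser consumes a prefix of the input and returns the parsed value
-- plus the remaining input, or Nothing on failure.
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

-- Match one character satisfying a predicate.
satisfy :: (Char -> Bool) -> Parser Char
satisfy p = Parser $ \s -> case s of
  (c:cs) | p c -> Just (c, cs)
  _            -> Nothing

-- Match a literal string, character by character, in the Maybe monad.
string :: String -> Parser String
string = foldr (\c rest -> Parser $ \s -> do
                  (_,  s')  <- runParser (satisfy (== c)) s
                  (cs, s'') <- runParser rest s'
                  return (c:cs, s''))
               (Parser $ \s -> Just ("", s))

-- Try the first parser; fall back to the second (like | in EBNF).
orElse :: Parser a -> Parser a -> Parser a
orElse p q = Parser $ \s -> maybe (runParser q s) Just (runParser p s)

main :: IO ()
main = do
  let keyword = string "lambda" `orElse` string "let"
  print (runParser keyword "let x = 1")   -- Just ("let"," x = 1")
  print (runParser keyword "lambda x.x")  -- Just ("lambda"," x.x")
  print (runParser keyword "fix f")       -- Nothing
```

The grammar-like surface (string, orElse) is the embedded domain-specific language; the Maybe plumbing underneath is what keeps the failure handling out of sight.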

Anyway, here (http://static.morbo.org/dibrom/Parser.hs.txt)'s the example.
Title: Programming Languages
Post by: Lyx on 2005-12-19 14:24:26
Personally, my impression about language-trends is:

now (ongoing)
- merging of multiple paradigms and features

soon (as in starting in a few years)
- simplification.  Basically, polishing the confusing mess created by the mergers.  How?  More merging, but of the previously included features, to bring down the number of too-similar features.

future (as in 8-14 years)
- languages have become very simple and intuitive, yet powerful.  However, a need for safety and "implicit" distinction arises.  Thus, type-safety and similar safety features will be reimplemented, but in a different way: probably less mandatory, and instead more like user annotations built into the syntax.  In essence, moving the responsibility for safety and correctness from the language to the user, yet providing built-in features to make this an easy task.

Thus, I don't think that type-safety will go away in the long run.  But I do think that the way it is done today will slowly go away.

With my limited experience, I could of course be very wrong.  The above is just what my intuition says after having looked around a bit regarding the various existing and newly created languages.

- Lyx
Title: Programming Languages
Post by: sumone on 2005-12-20 06:55:48
I always thought C# would come and go.  I guess I was wrong.  Classic ASP & VB.NET are what I mess with 90% of the time now.

A trend I see is simplification in IDEs also.  Drop an object on the form, and the IDE creates all the code for you.  Sometimes the IDEs do more "work" than the programmer himself.
Title: Programming Languages
Post by: singaiya on 2005-12-30 21:15:33
Quote
Quote
Thought I'd add a little something more to this thread...

It's kind of like 20th century orchestral music.  Schoenberg and Xenakis' music sounds great from a theoretical point of view, but nobody listens to it and their most persuasive ideas have been cherry-picked and reworked into something more palatable to the general public.  So maybe the functional programmers are the 12 tone serialists of the software world?


Actually, I enjoy listening to Xenakis (and other modern & experimental composers), and admit that I don't follow the theory behind the compositions.  To me, it's incredibly psychedelic-sounding music.  But I admit my tastes aren't mainstream; and I don't listen to bizarre experimental music all the time, only when it seems right.

As for whether the analogy is a good one, I don't know.  I always thought the only reason classical music went weird is because there was nowhere else for it to go.  It was a logical evolution in the context of history.  Popular tastes in music were changing at the turn of the century from the classical paradigm to more folk-based forms: Tin Pan Alley tunes, ragtime, folk/country, etc.  In the early 20th century, how much of these "new" forms of music was inspired by classical compositions (cherry-picked)?  I don't know.
Title: Programming Languages
Post by: kuniklo on 2006-02-02 17:50:25
Bringing this thread back from the dead...

Here's a very interesting presentation from Tim Sweeney, Unreal's main programmer, on the future of programming languages for game development:

http://www.cs.princeton.edu/~dpw/popl/06/Tim-POPL.ppt (http://www.cs.princeton.edu/~dpw/popl/06/Tim-POPL.ppt)

Since game developers tend to be a few years ahead of the mainstream in terms of technologies and challenges, it's a good peek into the future.  He seems to think that a functional, statically typed approach could be the solution to the concurrency problems new architectures pose.  Curiously, he's a big fan of lazy evaluation but not type inference.
Title: Programming Languages
Post by: Lyx on 2006-02-04 08:16:13
A few general observations about the slides:
Two points caught my attention the most.  The first is modularity of objects.  I've noticed this with my own project: the future path inevitably leads to highly modular object-oriented programming with high concurrency (everything can affect everything).  Creating and handling existing objects is easy, but this controlled chaos makes it very difficult to *undo* changes.  In a plugin environment, this means that it becomes easy to add plugins, but difficult to "uninstall" them.  This is because either you save more object properties in the modules themselves, thus making heavy sacrifices in flexibility, or you store all changes ever made to an object, which is impossible because of performance and storage-space issues.  The irony is that "reality" faces the same problem: in the real world, anything can affect anything, and the system works "forwards", but it's almost impossible to undo changes without leaving traces.  Thus, one of the main challenges I see in future game programming is how to handle plugin uninstalls in a highly modular environment.

The other thing, which I knew before but noticed again, is the insanely high burden placed on development resources by "eye candy", mainly 3D graphics.  And the trend is for this burden to become worse and worse.  Development resources, however, are not infinite, so at some point this will more and more impact gameplay design, especially regarding content variety (the cost to add a feature to the game becomes higher and higher).  Personally, I think this trend will crash and someday result in a counter-trend towards gameplay- and interaction-focussed game design.  This wouldn't be the first time something like this happened; it has happened in almost predictable cycles throughout history, for example with music and movies (the bigger-better-louder style is not a new phenomenon: it happened before, went away, and returned).

One thing on which I agree with the slides is that the only solution to rapid-fire concurrency with tens of thousands of objects is atomic transactions.

I also agree that exceptions currently are too unpredictable and need more reliable management in the mid-level code.

- - - - -

Other personal observations:
I've noticed that for high-level tasks there is an increasing demand for "simple" languages (Lua and Io come to mind).  At the same time, industrial use requires a complex language.  The current short-term trend is to have languages which specialize in being simple (Lua, Io) and languages which specialize in being complex (Python, Ruby).  Personally, I think this approach will not last, and that sooner or later a merger will be desired.  This in turn could only happen by doing it the Firefox/foobar2000 way: making the core language simple, and implementing everything else as extensions/plugins.  I'm aware that current complex languages already make use of extensions; however, they tend to have a very large standard library, so the core is still much larger and more complex than the "simple" languages.  Making the core minimal yet having a vast amount of extensions solves all issues at once:
- can be lightweight and simple
- can be embeddable
- can be complex with wide middleware support
- no transitions and wrappers required to communicate between simple and complex projects. No reprogramming needed if a simple project grows to become complex.

Or in short: languages which can fully adapt to users' needs.

- - - - -

Another opinion of mine, with which probably a few will disagree:
Classes will go away and be replaced with prototypes.
The reason is simple: classes IMHO are just a bad excuse for not having prototypes. I can think of nothing classes can do which couldn't be done easily with prototypes that can have "behaviours" attached to them. So, classes have no advantages over prototypes (except the easier implementation in the language - prototypes are more difficult to "do right"), but prototypes have advantages over classes. In addition, the concept of prototypes is much easier for newbies to understand than the concept of classes (actually, classes are like prototypes with additional predefined special rules - the "behaviours" I mentioned before).

Here's why...

Current paradigm:
Classes = unusable immutable "blueprints"
Modules = immutable mixins and/or separate namespace
Instances = to create usable objects from the blueprints

So, the paradigm clearly expects your code architecture to follow a given pattern. But sometimes that's not possible without an inefficient implementation. Further evidence of the fundamental flaw in this paradigm is that it tries to fix the emerging holes in its design with more and more exceptions, like singletons. You end up with a dozen possible object-types, each with different rules, adding significant planning overhead... or in other words, sometimes you're more busy "translating" your scenario to the outlined system than implementing it naturally.

In direct contrast - a modern prototype-paradigm:
- Only one kind of object: prototypes.
- From this single, universally usable object type, you tailor objects to your needs with features like:
- freezing (immutable object-methods)
- muting (unusable object-methods)
- allowing a proto to be either "cloned" or "copied" (clone = shallow copy, copy = deep copy)

Thus, with just one object-type and 3 "behaviours", you can do anything which you can do with classes & modules, plus more. Basically, it is no longer the case that you have to adapt your implementation to the language - instead, the language adapts to your implementation... in line with the intention: "the human is the master, the machine is the slave".
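To make this concrete, here is a rough sketch in Python - which is class-based, so the single `Proto` shell and all the slot names here are purely illustrative, not a real prototype language. It shows one object type plus two of the three behaviours (freezing, and clone vs. copy; muting would work the same way as freezing):

```python
import copy

class Proto:
    """A minimal prototype object: everything is one kind of object,
    customized via attached behaviours instead of a class hierarchy."""
    def __init__(self, **slots):
        object.__setattr__(self, "_slots", dict(slots))
        object.__setattr__(self, "_frozen", False)

    def __getattr__(self, name):
        try:
            return self._slots[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        if self._frozen:
            raise TypeError("object is frozen")
        self._slots[name] = value

    def freeze(self):     # behaviour: forbid further slot assignment
        object.__setattr__(self, "_frozen", True)

    def clone(self):      # behaviour: shallow copy (slot values are shared)
        return Proto(**self._slots)

    def copy(self):       # behaviour: deep copy (fully independent)
        return Proto(**copy.deepcopy(self._slots))

# Build a prototype and derive new objects from it - no class needed.
orc = Proto(hp=10, loot=["axe"])
orc2 = orc.clone()           # shares the loot list
orc3 = orc.copy()            # gets its own loot list
orc.loot.append("shield")
print(orc2.loot)  # ['axe', 'shield'] - shallow: shared
print(orc3.loot)  # ['axe']           - deep: independent
orc.freeze()                 # orc.hp = 5 would now raise TypeError
```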


- Lyx
Title: Programming Languages
Post by: foosion on 2006-02-04 10:49:41
Quote
Other personal observations:
One thing I've noticed is that for high-level tasks, there is an increasing demand for "simple" languages (Lua and Io come to mind). At the same time, industrial use requires a complex language. The current short-term trend is to have languages which specialize in being simple (Lua, Io), and languages which specialize in being complex (Python, Ruby).

I think the primary difference between the languages you named is not the complexity of the languages, but the complexity and size of their standard libraries.
Title: Programming Languages
Post by: kuniklo on 2006-02-04 16:45:11
Quote
The other thing which I knew before, but noticed again, is the insanely high burden placed on development resources by "eye-candy" - mainly 3D graphics. And the trend is for this burden to grow worse and worse. Development resources, however, are not infinite - so at some point it will impact gameplay design more and more - especially content variety (the cost of adding a feature to the game keeps rising). Personally, I think this trend will crash and someday result in a counter-trend towards gameplay- and interaction-focused game design.


I think you can see this happening already in the current generation of consoles.  All the Xbox 360 games are formulaic - first-person shooters, racing games, and RPGs that look beautiful but don't take any risks with gameplay.  Nintendo's trying to buck the trend by building a lower-spec console and writing games that emphasize gameplay and new kinds of interaction.  Hopefully they'll succeed, but I suspect the market will reward the more cinematic but dull 360 and PS3.
Title: Programming Languages
Post by: Dibrom on 2006-02-05 00:16:29
Quote
Bringing this thread back from the dead...

Here's a very interesting presentation from Tim Sweeney, Unreal's main programmer, on the future of programming languages for game development:


Yeah.  I just recently saw that posted on LtU, where Sweeney has posted some additional comments about it in various threads.  I hadn't had a chance to read it until now, but it's very interesting.

First of all, I'd like to point out that a lot of what he says is pretty similar to what I've already been trying to convey in this thread.  In particular, you notice that he is not fond of C# or Java, and he gives some very good reasons why, e.g., exceptions everywhere, lack of true safety (lame type systems), poor/broken concurrency support, etc.

It's very interesting, given the kind of work he does and the complexity of his programs, that he makes these observations and then comes down so strongly in support of static functional languages (even mentioning Haskell!) with mathematically-founded safety characteristics.  I think it goes to show that I'm not just crazy in my advocacy of these sorts of things, even though I might be the only one in this thread really interested in any of it...

There are a couple of specific things I'd like to comment on in the slides:

Anyway... a very cool presentation, IMO, and certainly inspirational for someone who has faith in type theory and static functional programming languages.  If someone as established and influential in the "practical" industry as Sweeney thinks there is a lot to be gained from such approaches, then the future certainly looks a lot less grim indeed.
Title: Programming Languages
Post by: Dibrom on 2006-02-05 01:47:58
Quote
One thing on which I agree with the slides is that the only solution to rapid-fire concurrency with tens of thousands of objects is atomic transactions.


Hrmm.. I don't think atomic transactions are the answer.  What is needed is a good mathematical theory of concurrency.  Something where you can prove certain parts of the system won't misbehave in certain ways, and that general characteristics hold across large tracts of code.  Atomic transactions by themselves don't address any of this, they just ensure things won't "blow up" when making a change.  But they don't tell you how you can structure your code in ways that make it massively scalable, and they don't let you prove aspects of behavior by themselves.

The pi-calculus is an example of a theory that already provides the kind of thing I just described.  It has only been used as inspiration for a few languages in recent years though (the theory itself only dates from the '90s, IIRC), but I think this will change as time goes on.

Quote
I also agree that exceptions are currently too unpredictable and need more reliable management in mid-level code.


I think this falls directly in line with the above observations really.  Exceptions, or something better which provides similar functionality in the end, should fit into the theory from the get go.  Also, these things should be very rarely needed in a properly designed system.  If you have to spend all your time dealing with error cases, you have a bad system and a bad theory.

Quote
At the same time, industrial use requires a complex language.


I really don't think industry needs languages as complex and unwieldy as are currently popular.  Java is a great example of this.  It's so big primarily because the core of the system is an ugly kludge that is not easily extended in an intuitive and safe way.  Everything gets thrown into the libraries (including broken concurrency (http://brinch-hansen.net/papers/1999b.pdf)), which just keep getting bigger and bigger because of the bloated and inexpressive syntax, and the lack of flexibility induced by object-oriented tunnel vision.

But you have languages like Haskell which offer a lot of the same functionality as Java (including libraries for most of the same purposes, such as graphics, windowing toolkits, networking, etc.), but are overall much more manageable, and which have remained pretty elegant over years of evolution.  The Haskell libraries are simple and easy to understand, yet complete.  Likewise, basic Haskell education can be compressed into a book (http://www.amazon.com/exec/obidos/tg/detail/-/0521644089/qid=1139102633/sr=1-1/ref=sr_1_1/002-0059369-8957642?v=glance&s=books) of under 400 pages, a lot like the original C reference.  Compare this to the thousand-pound tomes that comprise current Java books.

Industry doesn't need "complex languages," it just needs "good languages."  Ever since C++ arrived on the scene, things have just been going downhill.  Maybe that trend will change.

Quote
This in turn could only happen by doing it the firefox/foobar2000 way: making the core language simple, and implementing everything else as extensions/plugins. I'm aware that current complex languages already make use of extensions - however, they tend to have a very large standard library - the core is still much larger and more complex than that of the "simple" languages.


What is interesting is that the lambda calculus did this, and it was introduced in the 1930s... heh.  Most modern state-of-the-art programming languages are based on the lambda calculus extended with higher-order type systems (which makes it System F or System F-omega now).  You can't really get any simpler than the lambda calculus: it's just a basic mathematical model for computation, with only a few operations, all of which are mathematically tractable.  To get ML or Haskell, you simply take the lambda calculus and add both new syntactic forms (e.g., "if" or "let" expressions) and new semantic rules for typing and evaluation.  For Haskell, you change to call-by-need evaluation rather than call-by-value, which is the more typical choice.  There are other "small" details, but this is the essence of it.
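As a toy illustration of that "small core, everything else bolted on" structure, here is a tiny untyped evaluator in Python - the tuple-based term encoding is invented for this sketch, and a real implementation would of course also carry a type checker. The point is only that each new form is one new case, and the three-case core never changes:

```python
# Core terms:  ("var", name) | ("lam", param, body) | ("app", fn, arg)
# Extensions:  ("num", n) | ("if", cond, then, else) | ("let", x, v, body)

def evaluate(term, env):
    tag = term[0]
    # --- the untouched lambda-calculus core ---
    if tag == "var":
        return env[term[1]]
    if tag == "lam":                 # a closure captures its environment
        return ("closure", term[1], term[2], env)
    if tag == "app":
        fn = evaluate(term[1], env)
        arg = evaluate(term[2], env)
        _, param, body, cenv = fn
        return evaluate(body, {**cenv, param: arg})
    # --- extensions: each one is just a new case ---
    if tag == "num":
        return term[1]
    if tag == "if":
        return evaluate(term[2] if evaluate(term[1], env) else term[3], env)
    if tag == "let":                 # let x = v in b  desugars to  ((lam x. b) v)
        _, x, v, body = term
        return evaluate(("app", ("lam", x, body), v), env)
    raise ValueError(f"unknown term: {tag}")

# let id = (lam x. x) in (id 42)
prog = ("let", "id", ("lam", "x", ("var", "x")),
        ("app", ("var", "id"), ("num", 42)))
print(evaluate(prog, {}))  # 42
```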

This is the way you start with a simple core, and extend it.  I've done this myself in a few programming languages I created recently for experimentation.  Each new capability gets added onto the basic lambda calculus in a very clean way.  You can see an example of a program in my language here (http://static.morbo.org/dibrom/FiniteMap.sysf.txt) if you like (and output of the program here (http://static.morbo.org/dibrom/FiniteMap.sysf.output.txt)).  The current syntax is a little bit verbose and clunky since I haven't had time to refine it much, and there are way more type annotations than needed in this example since I didn't have type inference implemented initially, but both are easily fixed.  Keep in mind this program uses no "library" either, so it's a lot more complex than a typical program would be if the language were developed into something for everyday usage.  This language is just the basic lambda calculus with a higher-order type system added, then the addition of "if", "let", "letrec", "case", records, tuples, lists, variants, polymorphism, side-effects, numbers, booleans, strings, etc., etc.  If I wanted to, which I didn't really, I could have added objects and well-understood subtyping (for inheritance type stuff) based upon System-F<:.  Overall, it's a good example of a "modular" language, since each time I extend it, I have to change almost nothing from before.

When I look at current language design approaches in most 'dynamically typed' languages, or in popular OO languages, I really have to wonder... It often seems that the people behind these languages have some aversion to learning from the past, and that they have absolutely no interest in mathematical rigor, which is surprising given its well-known benefits in science and engineering in general.

Eventually though, as can be seen in part by the things Sweeney points out in his presentation, programming problems are going to get so complex that this approach is untenable.  Really, I'm surprised it has taken this long.  As a glimpse of things to come, outside of Sweeney's presentation, you can take a look at the Fortress language that Sun is going to be pushing, as well as C# 3.0.  Both have some very functionally-inspired qualities, and both are making a lot more use of the theoretical advantages of type theory.  C# is much less so than Fortress though, and IMO neither of them is probably radical enough to really last in the long term, but we'll see I guess.
Title: Programming Languages
Post by: Lyx on 2006-02-05 08:28:33
Quote
Also, these things should be very rarely needed in a properly designed system.  If you have to spend all your time dealing with error cases, you have a bad system and a bad theory.

Hmm, I assume you're envisioning a scenario where all the developers are skilled programmers? What about a scenario like this:

- you have a system which describes objects
- the objects have properties
- these properties in turn are described via many "templates" and allow mixins
- objects in turn can be built via templates, which just contain a list of commands on which properties to attach and setting their values
- the objects have AIs
- the AI in turn is a modular plugin-architecture.
- AIs are manufactured via templates which can attach those plugins, the templates allow mixins(sub-templates)
- etc. etc.

The whole template system is designed so that unskilled third-party scripters and designers can express "ideas" while having little experience in coding.

So, you have a very complex modular system with high concurrency, and the nuts and bolts of the system are not trustworthy, because they are submitted by unskilled scripters. Thus, a single bad script can wreak maximum havoc.

Automatic as well as manual reviewing of those "templates" is obviously necessary - yet you need good "middle-layer" error-management as a last line of defence (so, basically every part of the scripting-API, as well as template-loading, needs error-management) - or am I missing some other possibility here? To clarify: I'm not talking about complex error-cases, but mostly about malformed commands (the scripter just made a mistake).

- Lyx
Title: Programming Languages
Post by: legg on 2006-02-05 15:06:56
Quote
Hrmm.. I don't think atomic transactions are the answer.  What is needed is a good mathematical theory of concurrency.  Something where you can prove certain parts of the system won't misbehave in certain ways, and that general characteristics hold across large tracts of code. 


AFAIK, Petri nets do that.
Title: Programming Languages
Post by: Dibrom on 2006-02-05 17:24:42
Quote
Quote
Hrmm.. I don't think atomic transactions are the answer.  What is needed is a good mathematical theory of concurrency.  Something where you can prove certain parts of the system won't misbehave in certain ways, and that general characteristics hold across large tracts of code. 


AFAIK, Petri nets do that.


Petri nets were an early model that addressed some of this, but they had some serious technical limitations.  Both Milner's Calculus of Communicating Systems (CCS) and Hoare's Communicating Sequential Processes (CSP) were subsequent, more mature models.  The pi-calculus (also from Milner and co.) is the most recent and most advanced of this group of process calculi.  There are some newer calculi derived directly from the pi-calculus, or based upon parts of it, like the higher-order typed pi-calculus, the Join-calculus, etc.
Title: Programming Languages
Post by: Dibrom on 2006-02-05 17:49:22
Quote
Quote
Also, these things should be very rarely needed in a properly designed system.  If you have to spend all your time dealing with error cases, you have a bad system and a bad theory.

Hmm, I assume you're envisioning a scenario where all the developers are skilled programmers? What about a scenario like this:


Not really, no.  Similar to the ideas Sweeney expressed, you should really have the compiler catch the bulk of errors for you.  Furthermore, your system should be expressive enough to deal with abstractions that don't require you to insert an exception handler every 10 or so lines of code.  Something like monads is a good way to avoid explicit error handling in complex tasks, but there are other ways to do the same thing.  Many of these approaches happen not to be very easy to work with in certain popular languages, though, because of expressivity problems in those languages.
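For anyone curious what that looks like outside Haskell, here's a toy Maybe-style sketch in Python - all the names here are made up for the example, and Haskell's real Maybe monad with do-notation is much nicer, but it shows how failure propagates through a chain of steps without an explicit error check after each one:

```python
class Maybe:
    """A toy Maybe monad: a computation that either holds a value (ok)
    or has failed, with failure short-circuiting the rest of a chain."""
    def __init__(self, value, ok):
        self.value, self.ok = value, ok

    @staticmethod
    def just(v):
        return Maybe(v, True)

    @staticmethod
    def nothing():
        return Maybe(None, False)

    def bind(self, f):
        # If an earlier step failed, skip every later step automatically.
        return f(self.value) if self.ok else self

def parse_int(s):
    return Maybe.just(int(s)) if s.lstrip("-").isdigit() else Maybe.nothing()

def reciprocal(n):
    return Maybe.just(1.0 / n) if n != 0 else Maybe.nothing()

# One pipeline, no try/except or if-error-return between the steps:
good = parse_int("4").bind(reciprocal)
bad = parse_int("0").bind(reciprocal)   # fails mid-chain, result is "nothing"
print(good.ok, good.value)  # True 0.25
print(bad.ok)               # False
```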

Quote
- you have a system which describes objects
- the objects have properties
- these properties in turn are described via many "templates" and allow mixins
- objects in turn can be built via templates, which just contain a list of commands on which properties to attach and setting their values
- the objects have AIs
- the AI in turn is a modular plugin-architecture.
- AIs are manufactured via templates which can attach those plugins, the templates allow mixins(sub-templates)
- etc. etc.


I'm not really getting a good visualization of the system from this description.  It's just too vague the way you stated it I think.  It would be better if you had some code or some sort of graphs to describe it.

But anyway, it's not impossible to describe, in a "safe" way, objects communicating in a world, where the objects have certain attributes, including AI, and there is much use of generic programming.

Quote
The whole template system is designed so that unskilled third-party scripters and designers can express "ideas" while having little experience in coding.

So, you have a very complex modular system with high concurrency, and the nuts and bolts of the system are not trustworthy, because they are submitted by unskilled scripters. Thus, a single bad script can wreak maximum havoc.


If a "bad script" can "wreak maximum havoc," then as I said before, that's a very poorly designed system.  Your system should be fault tolerant, but that doesn't mean you need exception spaghetti all over the place.

Quote
Automatic as well as manual reviewing of those "templates" is obviously necessary - yet you need good "middle-layer" error-management as a last line of defence (so, basically every part of the scripting-API, as well as template-loading, needs error-management) - or am I missing some other possibility here? To clarify: I'm not talking about complex error-cases, but mostly about malformed commands (the scripter just made a mistake).


Malformed commands in the script should be caught by the parser (you should compile to some sort of 'bytecode' or more compact representation anyway for performance reasons).  Semantic mistakes in the script should be caught by the type checker.

This leaves mostly only errors involving communication with non-existent objects somewhere down the line.  You have 2 possibilities here: 1) you can deal with this by making the type system of the scripting language slightly more complex, so that it is possible to specify and handle failure-prone computations without explicit manual intervention, or 2) you can use some sort of communication model which provides its own implicit type of error handling.  Both are similar, and could potentially be viewed as the "same thing" depending on how you look at it.

But really, if you have a system where you have a lot of interacting parts, some of which may be described in an ad-hoc fashion by unskilled developers, then you need to make sure that as much static error detection is performed as possible, so that if their script is bad, it won't even compile and enter into the system.
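To make that last point concrete, here's a toy sketch in Python of rejecting a bad script before it enters the system - the object types, the find() syntax, and the dotted method names are all invented for this example, and a real system would use a proper parser and type checker rather than string matching:

```python
# Declared script API per object type - a stand-in for a real type environment.
API = {
    "Foo": {"action.kick.attach", "action.kick.use"},
    "Bleh": {"action.flee.attach"},
}

def validate(script):
    """Statically check a script: any call to a method that does not exist
    for the declared object type is reported before anything executes."""
    errors = []
    types = {}  # variable name -> declared object type
    for lineno, line in enumerate(script, 1):
        if "= find(" in line:            # e.g.  myobject = find("Foo", "Bar")
            var, rest = line.split(" = find(", 1)
            types[var.strip()] = rest.split('"')[1]
        elif "." in line:                # e.g.  myobject.action.kick.attach()
            var, method = line.split(".", 1)
            otype = types.get(var.strip())
            if otype is None:
                errors.append(f"line {lineno}: unknown object {var!r}")
            elif method.split("(")[0] not in API[otype]:
                errors.append(f"line {lineno}: {otype} has no method {method}")
    return errors

script = [
    'myobject = find("Foo", "Bar")',
    "myobject.action.kick.attach()",
    "myobject.action.kikc.use()",        # typo: rejected before it can run
]
print(validate(script))
```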
Title: Programming Languages
Post by: Lyx on 2006-02-06 06:21:40
Code: [Select]
Gameworld(DB)
  /|\ |
   |  +-->Objectmanager(must explicitely state object-type)
   |          |                         /|\
   |          |                          |
   |          |                     (obj_lookup)
   |         \|/                         |
   +---Object-Instance(Script-API)       |
      /|\                                |
       +--------(obj_manipulation)-----Script
 

Pseudo-Scriptcode:
1) myobject = objmanager.find(objecttype: "Foo", searchpattern: "Bar")
2) myobject.action.kick.attach
3) otherobject = objmanager.find(objecttype: "Bleh", searchpattern: "Blah")
4) myobject.action.kick.use(target: otherobject)


Explanation:
1) The script sends a lookup command to the objectmanager. The objectmanager searches the gameworld-DB, finds the desired object, creates an object-instance of it (which serves as a script-API to this individual object) and returns this instance to the script. The script creates a "myobject" alias for the returned instance.
2) The script attaches the action "kick" to myobject. Changes are immediately sent to the gameworld-DB.
3) Like 2), but for "otherobject". It is a different type of object, therefore its available script-API (methods) will differ. But we don't care about this in this example (more on this later).
4) The script makes myobject kick otherobject ;)


There are two kinds of errors which may happen here:
1. The script executes API-commands (methods) which don't exist for this particular object-type (the reason may be a typo, or the scripter misremembered which commands are available for this object). Since type-declaration only happens in the lookup, I can only see two solutions to this: A: build error-handling into the script-API (disadvantage: constant performance cost). B: code a preprocessor which checks all scripts during server initialization (disadvantage: not easy to code, and it makes management of API-changes more complex (every change needs to be made in both the API and the preprocessor)).

2. The script may look up a nonexistent object. Obviously, the scripter should add error-handling for that, but what if he didn't? Then the safest way to handle the situation would be to stop the script, throw an exception, and propagate the exception up to the event which called the script. This leads to the next issue.

3. If a script aborts in the middle of its code because of an exception, then it may already have done some manipulations to the world, thus having done incomplete work. Two solutions I could think of: A: create a journal of all actions done by the script and do a rollback if it errors. B: make it so that all script-commands are first written to a temporary buffer - if the script finishes without error, write the temp-buffer to the real world; if it errors, drop the buffer completely. The problem with solution B, of course, is how to make the buffer "transparent" to the script, so that to the script it looks as if the buffer doesn't exist.

In any case, this all means quite complex error-handling mechanisms - not so much in the scripts, but in the components which execute scripts and the script-API.

Compile-time error-checking obviously can only help with type-errors here. But it's helpless in most other cases, because the compiler cannot know beforehand which objects exist in the gameworld.

- Lyx

edit: to me, the most promising approach appears to be the "temp-buffer" approach: since scripts do NOT run concurrently, one could make it so that the DB-accessors allow the db-writer to be set to "buffer-mode" - this buffer-mode could be activated when a script is executed. Then, whenever an error in the script happens, just throw an exception, let the script-executor catch it, and drop the buffer.

When scripts read from the DB, the temp-buffer would be taken into account. The rest of the system would only read from the permanent-DB. Thus, to scripts their commands would seem "instant", while to the rest of the system, events(groups of executed scripts) would be atomic. If a script fails, the entire event never takes place.

The only issue is how to make it so that during script DB-reads, the DB-accessors first look in the buffer and then in the permanent DB - without too high a performance penalty. I need to check SQL syntax for that - maybe it's possible to simply use a join for it... it would be cheap if the buffer is memory-only. If I can get this to work, then I get a nice freebie: I don't need to write code to "collect" write-commands and bundle them into DB-transactions - instead, when the script finishes, I can just bundle everything in the temp-buffer into one write-transaction to the permanent DB.
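A rough sketch of the temp-buffer idea in Python - a plain dict stands in for the gameworld-DB here, so no SQL or real transactions are involved, and all names are illustrative:

```python
class BufferedWorld:
    """Temp-buffer approach: script writes go to an overlay; script reads
    see the overlay first, then the permanent store. On success the whole
    buffer commits in one step; on error it is dropped, so the event
    never took place."""
    def __init__(self, db):
        self.db = db          # permanent gameworld "DB" (a dict here)
        self.buffer = {}      # per-script overlay

    def read(self, key):
        # Scripts see their own pending writes; the rest of the system
        # would read self.db directly.
        return self.buffer.get(key, self.db.get(key))

    def write(self, key, value):
        self.buffer[key] = value

    def run(self, script):
        self.buffer = {}
        try:
            script(self)
        except Exception:
            self.buffer = {}          # rollback: drop all partial writes
            return False
        self.db.update(self.buffer)   # commit the whole event atomically
        self.buffer = {}
        return True

world = BufferedWorld({"orc.hp": 10})

def bad_script(w):
    w.write("orc.hp", 5)
    raise KeyError("no such object")  # fails mid-script

def good_script(w):
    w.write("orc.hp", w.read("orc.hp") - 3)

world.run(bad_script)
print(world.db["orc.hp"])   # 10 - the partial write was dropped
world.run(good_script)
print(world.db["orc.hp"])   # 7
```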
Title: Programming Languages
Post by: Lyx on 2006-02-06 09:52:26
- post may be deleted -
Title: Programming Languages
Post by: niktheblak on 2006-02-07 12:20:00
First of all, I have to express my gratitude towards Dibrom for his very informative and thorough replies to this thread. They ultimately got me interested in functional programming!

I've been somewhat active in the field of programming for about ten years now including five years of university. I've been a professional software engineer for about a year now. Before browsing through this thread about a month ago I had very little idea about functional programming, although I have some very faint recollections that I might have heard about lambda calculus in first/second year computer science theory courses. My university mostly used a Pascalish/Javaish pseudo-language for teaching the 'science' portion of computer science and Java for all the rest, much to my discontent. This actually got me to switch universities at one point.

Needless to say, I've never encountered functional programming during my university time besides some vague hints and possibly some brief mentions about lambda calculus. Dibrom's posts encouraged me to finally look FP up, the language features he mentioned were just too interesting to leave the subject alone.

After starting this chore I had a major problem of choice -- what language to start learning? The list seemed endless: Lisp, Scheme, Dylan, Logo, ML, Alice ML, SML, Haskell, O'Caml, Erlang, Clean, etc. I wanted to learn just one language and learn it well, since I knew I was diving into something totally new and scary. I have no problem learning multiple imperative languages simultaneously, but a radical paradigm shift into a totally new world with several different and possibly contradictory languages might be too much.

Reading up on the languages only confused me more, since nobody seems to have any idea what Lisp even is, and the same goes for ML, which doesn't seem to have any canonical implementation, just variants with widely different emphases and standard libraries. After that came the 'pure/impure functional' controversy, which was pretty scary. After Dibrom's posts I decided to pursue Haskell and O'Caml. I browsed through the Haskell 98 Report from the home page, but it was so dense that it made Bjarne Stroustrup's books seem easy. I did try to write some Haskell programs, but I didn't really get it. But there were a lot of things I did get, including what 'pure functional' really means. I decided that a jump into pure functional after ten years of imperative might be too much at once, and I wanted an easy start, so I started examining Scala and O'Caml.

With Scala I was continuously being awestruck (Higher-order functions? Wow! Pattern matching? Wow!!), and the OO features also seemed an order of magnitude better than in typical run-of-the-mill industry-leading OO languages. Scala did seem to have some problems though: the .NET integration leaves much to be desired, and the syntax seems quite cluttered compared to some other languages like Boo (http://boo.codehaus.org/) and O'Caml. Then by pure accident I ran into Microsoft's F# (http://research.microsoft.com/projects/ilx/fsharp.aspx), which is basically a .NET implementation of O'Caml, or 'F# shares a common compilable subset with the O'Caml language' as Microsoft expresses it. This boosted my efforts towards learning O'Caml, which turned out to be easy, since the net is full of O'Caml tutorials for imperative programmers. Then it really struck me -- my god, the syntax of that language is insanely elegant! I found I could reimplement a couple of my pet projects (one of them being a complete tagging library supporting APE, ID3v1, ID3v2, Vorbis, Lyrics3, etc.) comprising N lines of C/C++/C# code in about sqrt(N) lines of O'Caml code looking so beautiful it could be submitted to a poetry contest. This, combined with the fact that O'Caml is fast, makes it a sure winner.

Whereas Scala seems pretty interesting, I found O'Caml's (or actually F#'s) clutter-free syntax and type system so nice that I ended up dropping Scala for my personal projects. But I would pick Scala instantly if I had to target the JVM for some reason. Personally I have absolutely no interest in the JVM, but at my workplace, where we are basically forced to use the JVM, Scala could turn out to be really useful.

Now I feel I might have found my functional cup of tea in O'Caml and Scala. I very much like statically typed languages with type inference, which instantly disqualifies the Lisps, and Haskell just... well, it just seems so very elitist. It's similar enough to O'Caml that I still consider learning it, but its seemingly total denial of sequential execution of commands (with side effects) is IMHO as bad as the dreaded OO tunnel vision -- computers work by executing instructions in sequential order, so I don't think a programming language should make it as difficult and obscure as possible just to make two or more things happen in a certain order.

But my reservations about Haskell aside, in the end I found myself really enjoying functional programming. Lately, when browsing through imperative code, I've found myself really shunning functions like 'void doIt()' which take no arguments and return nothing, i.e. rely only on side effects. Perhaps I'll end up being a Haskell user after all

Too bad I (indirectly) work for Nokia and not Ericsson, so I can't use functional programming at work. But perhaps it's safer this way -- I'm a total n00b in FP but a pretty seasoned professional with OO. Only for the time being, that is!
Title: Programming Languages
Post by: zima on 2006-03-30 17:36:16
While I've restrained myself from posting more of my useless crap (though I'll ask about choices for a "starter" language in a new thread when the time comes), I think some of you might find this link amusing/interesting:

http://ruby-talk.org/cgi-bin/scat.rb/ruby/ruby-talk/179642 (http://ruby-talk.org/cgi-bin/scat.rb/ruby/ruby-talk/179642)