
Topic: Programming Languages

Programming Languages

Reply #25
Quote
These days I'm much more taken with the "scripting" languages, particularly Ruby.  Very practical, cleanly organized, none of the complexity of static typing etc.

While I too am mostly interested in this kind of language, I disagree that Ruby is cleanly organized. Sure, it has achieved a lot without a single revamp, but even Matz now agrees that it is suffering from growing pains, that some of its behaviour (like local variables) is weird, and that some of its syntax for advanced features like hashes, keyword arguments, etc. is unnecessarily complicated. I'm very interested in how Rite/Ruby 2 will turn out - but then again, they currently seem to lack the necessary manpower to do it... and I fear Ruby 2 may become something like Duke Nukem Forever ;)

What I'm more afraid of, however, is that in all those plans for Ruby 2 and its syntax redesign, not a single word was said about nested namespaces... I consider the :: notation the ugliest piece of Ruby's syntax. And don't get me started on the limitations and the unnecessarily complicated creation and handling of nested namespaces.

So, I do like Ruby a lot - coding with it is, overall, just "fun"... it feels intuitive. I completely agree with the intro of the Pragmatic Programmers' guide to Ruby - it brings the fun back into coding. Unfortunately, however, I don't consider Ruby to be ready yet... the overall concept and philosophy behind it are great, but it needs a revamp and polishing to be ready. Take the roadmap features of Ruby 2 and Rite, add namespace/module handling as it is done in Python, and it would be my dream language.
I am arrogant and I can afford it because I deliver.

Programming Languages

Reply #26
Quote
That's pretty much what I expected.  Yes, Scheme is less practical than CL, if only for the library support.  I admit I tend to like the design of Scheme a hell of a lot better than CL on a number of points though.


Scheme certainly has fewer warts, but Lisp was pretty simple in the beginning too.  Languages tend to become less elegant as they evolve.

Quote
But it is interesting that you consider CL functional -- most of my LISP-using friends tell me (and from what I infer from many other places) that CL is rarely used functionally.  Although you can do it, most people don't.  Scheme is often used more functionally (obviously), but on the whole its usage is still less pure than ML, and the same for ML vs. Haskell or other purely functional languages.


CL's syntactic flexibility makes it a truly multi-paradigm language.  You can write OO, functional, imperative or logic code with CL.  I think it's probably true that most CL code isn't particularly functional, but I'd argue that's because the functional paradigm isn't really all that practical for a lot of things.  Mutable state is usually just more straightforward than a more elegant but less tractable functional implementation.  Have you read Okasaki's "Purely Functional Data Structures"?  Some very clever stuff in there, but I found myself shaking my head halfway through it at the convolutions he has to go through to do relatively simple things.  I like the functional, expression-oriented style in the small, but I think it breaks down when you start trying to treat big complex aggregates in a functional way.

Quote
Quote

People always say this but I'm not sure I believe it.  Surely in an absolute sense duck typing is far more *flexible* than any static typing system could be.


It's flexible in the sense that "anything goes", sure.  But in the sense of doing something complex in a safe way that's easy to reason about (usually important for complex things!) and easy to maintain, it's not.


It seems that this point always gets argued in a vacuum.  Large and complex programs have been written in statically and dynamically typed languages, so I think it's not really clear which one has the advantage.  Perhaps other factors are more important?

Quote
Someone has written a web browser in Haskell FWIW, and probably OCaml.  The latter you are probably correct about.  But this I believe is not so much due to the languages as it is due to API issues.  The applications you list are quite typically heavily dependent upon OS-specific APIs for network code, image code, audio code, and the GUI toolkit.  Most functional languages have bindings for large parts (or even all) of those, but they do not match what is often provided as the "native" implementation language for the OS, and this is the reason I believe you don't see those types of programs in fp often.  Again, this isn't really a mark against fp though, because aside from Java, most of the world is still stuck in C or C++ mode in that regard.  The situation is pretty similar for most of the "scripting" languages as well, although not quite as bad since there is a little more interest in supporting them (I believe this is due to the fact that they are on the whole quite similar to languages most people are already familiar with).


People have written toy browsers in OCaml, Haskell, and even Emacs Lisp, but I'm talking about something to compete with Firefox.  Your point about the C bias of the underlying OS is valid, but I don't think it's the biggest issue.  Honestly, I think people put too much emphasis on programming languages.  I think the biggest predictor of a project's success is the skill and the motivation of the programmers.  I think the biggest problem for ML/Haskell/Lisp is that they're too weird and counter-intuitive to attract the kind of grassroots support that a language needs to survive without the backing of a big company.  Somebody's got to write all the essential but unglamorous libraries and documentation and I think it's instructive that Ruby's made more progress in two years than OCaml or Haskell has in toto.

Programming Languages

Reply #27
I wrote my first "program" in RPG  II in 1974. In college (TAMU '80), all of my assignments were in Fortran, except process control stuff in some obscure assembler. Since then, I've programmed professionally in several assembly languages, Basic variants, scripting variants, lisp, C, Objective C, C++ (starting before they had templates), C (starting with original K&R pre ANSI), Java... Blah blah blah blah.

I've read a couple of books on C# and like it, but haven't actually tried to use it. I've done a couple of things (not professionally) in Smalltalk and think it would be the Only Language in a Just World.

(Oh, I can't believe none of you mentioned Just About Every Function in Emacs as a use of Lisp. Sort of. Not to start a vi vs. Emacs war.)

After all of that blah blah blah, the bottom line is this: you are going to write your best, most creative *and* most reliable code in the language you know best. You're communicating instructions to a dumb machine, for heaven's sake. Yes, new languages are Good (Java and C# make some aspects of reliability *easier* than, say, assembly language). But it's what you know. Yes, some problem spaces just should not be attempted in certain languages and sometimes it's time to learn a new language.

A semi parallel: I know folks who are fluent (really!) in multiple spoken languages. They don't just "get around", they know the grammars. But they're an aberration. The Rest Of Us are wise to stick with our first language when possible. We make mistakes when we try to communicate in any other language.

Why it's not a good parallel: programming languages are vastly simpler than spoken ones. Sometimes, it's worth learning a new one when the new one makes it easier to express your desires to the computer.

Here's another rub: if you're going to argue about which language is better, you need some criteria. Ease of expression is a Big One. You may think your language is The Best. But I'm here to tell you that the only way to measure that is to do some random sampling and see how easily folks pick it up. If 100 random people all have a harder time picking it up than, say, C#, you may be tempted to conclude you found 100 idiots, but the truth really will be that your new language sucks. If you don't have the tests to prove your language is easier and more expressive, well, see Ye Olde Rule 8.

And, after all of that, I'll admit that just about every language I program in (I currently stick to C++, Java, and Javascript) sucks.

Second, wide acceptance IS a fair criterion. Why don't I waste my time learning Latin when it's Such A Beautiful Language ™? Because nobody speaks it (unless, of course, you have alternative motivations). For most folks and for this reason, Latin sucks and so does your New Obscure Programming Language. And no, I don't have one in mind. And yes, I know that the ease of expression issue (which, again, can only be addressed by testing with large numbers of people) can turn today's obscure language into the lingua franca of tomorrow.

Oh, FLAC R0X0RZ!

Mark

Programming Languages

Reply #28
Quote
I've read a couple of books on C# and like it, but haven't actually tried to use it. I've done a couple of things (not professionally) in Smalltalk and think it would be the Only Language in a Just World.

You should check out "Io" in that case - you'd probably like it. Personally, while I've been fascinated by the Smalltalk approach, I always found myself doing too much micromanagement and losing track of the big picture when using Smalltalk-like languages.
I am arrogant and I can afford it because I deliver.

Programming Languages

Reply #29
Quote
While I too am mostly interested in this kind of language, I disagree that Ruby is cleanly organized. Sure, it has achieved a lot without a single revamp, but even Matz now agrees that it is suffering from growing pains, that some of its behaviour (like local variables) is weird, and that some of its syntax for advanced features like hashes, keyword arguments, etc. is unnecessarily complicated. I'm very interested in how Rite/Ruby 2 will turn out - but then again, they currently seem to lack the necessary manpower to do it... and I fear Ruby 2 may become something like Duke Nukem Forever


I don't think I'd call a hash an advanced feature.  Ruby's hash syntax is essentially the same as Perl's or Python's so I'm not sure I understand what you object to about it.  The keyword syntax for functions is a hack, and I do hope they do something about it for Ruby 2.

I think Ruby gets the important things right though:

1. deep and pervasive object orientation
2. consistent method naming
3. simple but flexible object system
4. dead simple C API
5. consistently expression, not statement, oriented
6. abundant syntactic sugar without Perl's excesses
7. good documentation and third-party library support

It's not perfect but I very strongly prefer it to the alternatives.  Matz set out to make a language that was elegant and enjoyable to use and I think he succeeded.

Programming Languages

Reply #30
Quote
The keyword syntax for functions is a hack, and I do hope they do something about it for Ruby 2.

That's the plan.
http://www.rubyist.net/~matz/slides/rc2003/
I am arrogant and I can afford it because I deliver.

Programming Languages

Reply #31
Quote
CL's syntactic flexibility makes it a truly multi-paradigm language.  You can write OO, functional, imperative or logic code with CL.  I think it's probably true that most CL code isn't particularly functional, but I'd argue that's because the functional paradigm isn't really all that practical for a lot of things.  Mutable state is usually just more straightforward than a more elegant but less tractable functional implementation.  Have you read Okasaki's "Purely Functional Data Structures"?  Some very clever stuff in there, but I found myself shaking my head halfway through it at the convolutions he has to go through to do relatively simple things.  I like the functional, expression-oriented style in the small, but I think it breaks down when you start trying to treat big complex aggregates in a functional way.


I agree that LISP is quite multiparadigm.  I feel that Haskell is as well, although it all boils down to a functional base.  With monadic combinators, and many of its other more advanced features, you can emulate basically any style of computation you want.  This page (http://www.willamette.edu/~fruehr/haskell/evolution.html) is meant as a joke, and it's a bit twisted, but it's actually also quite illustrative.  And many of those examples only touch the surface of Haskell, not really using many of the recent extensions to the language.  Oz, I think, is another great example of a truly awesome multiparadigm language.  Sadly, it will probably remain rather obscure.
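To make the "emulate any style" point a little more concrete, here is a small, hedged sketch (the function names are my own) of how Haskell's State monad lets you write what reads like imperative, mutable-variable code while staying purely functional:

Code:
import Control.Monad.State

-- An "imperative"-looking loop: sum 1..n with a mutable-looking accumulator.
sumTo :: Int -> Int
sumTo n = execState (mapM_ step [1 .. n]) 0
  where
    step i = modify (+ i)  -- reads like 'acc += i'

main :: IO ()
main = print (sumTo 10)  -- prints 55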

As for Okasaki's book, I actually own that one.  You're right about some of the convolutions.  But you have to remember that he took a rather extreme approach with that book -- going purely functional with a fairly minimal set of features.  In OCaml, most (all?) of the "extra" standard data structures provided by the library use side-effects.  In Haskell, in most cases, when dealing with the sorts of problems you might see in his book, you'll usually end up using a variety of techniques to make the whole process more tractable.  Most of them can be seen on the Haskell wiki through various links from this page.

If you want to see an extreme example of Haskell flexibility when dealing with big complex aggregate data types, you should check out the "Scrap your Boilerplate" papers on generic programming with Haskell.  I posted a link to it earlier, but for reference, you can find them here.

Quote
It seems that this point always gets argued in a vacuum.   Large and complex programs have been written in statically and dynamically typed languages, so I think it's not really clear which one has the advantage.  Perhaps other factors are more important?


To me, it's very advantageous to be able to look at the type signature of a function and tell rather immediately what the function, overall, is supposed to do.  If there is no type signature, I just ask the interpreter to tell me what it is.  In LISP, or other dynamically typed languages, or in weakly typed static languages, you often have to read the entire function to get a really good idea of what it is supposed to do.  And if you misread something (or say, if you don't know the behavior of certain library functions and are not able to look them up immediately), you can make some rather nasty mistakes.
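A trivial, made-up example of what I mean (all the names here are invented): even before reading any bodies, the signatures say that the lookup is pure and can fail, and that only the reporting function touches the outside world.

Code:
import qualified Data.Map as Map

type UserId = Int
type Email  = String

-- The Maybe in the result says "this can fail", without reading the body.
lookupEmail :: UserId -> Map.Map UserId Email -> Maybe Email
lookupEmail = Map.lookup

-- The IO in the result says "this one has side effects".
report :: Maybe Email -> IO ()
report = putStrLn . maybe "no such user" id

main :: IO ()
main = report (lookupEmail 1 (Map.fromList [(1, "user@example.org")]))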

Strong static typing forces you to reason more thoroughly about your code *before* you implement it, and this can only be a good thing from a reliability and maintenance point of view.  Strong compile time error finding through this approach is a true advantage when deploying code as well.

Dynamic typing makes for easier rapid prototyping, but it also makes for more (runtime) errors (and in some cases even lower performance).  It can also make complex problems easier by allowing people to solve them in ways that are not really optimal, but maybe in some cases that is what is needed.

Both approaches obviously have advantages and disadvantages, but I feel that in most cases, strong static typing with good type inference is a better choice than dynamic typing, even when (or perhaps especially when) dealing with complex aggregate data structures.

Typing, however, will remain a rather religious point, and I don't expect there will be much agreement on strong static vs. dynamic anytime soon.

Quote
Somebody's got to write all the essential but unglamorous libraries and documentation and I think it's instructive that Ruby's made more progress in two years than OCaml or Haskell has in toto.


What do you mean by progress?  Are you talking about userbase adoption?  Well, then yes, Ruby has made more progress.  Are you talking about language maturity and library maturity?  If so, I'd have to disagree that Ruby has made more progress.

Programming Languages

Reply #32
Typing can confuse new programmers too.  Coming from VB6 to Java, it took me a while to get my head around typing, implicit and explicit type conversion, and other such matters, because in VB6 it wasn't necessary.  When I started learning Java, I found the whole concept of casting extremely difficult to comprehend, because no one explained to me strong typing vs. weak typing.

The very idea that a variable could have a specific type was almost foreign to me.  Of course you can specify integers and strings in VB, but most of the time you end up specifying vars.  This made learning Java quite painful (although Java itself did not help).

To quote Edsger Dijkstra, "It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration". 

Programming Languages

Reply #33
Quote
If you want to see an extreme example of Haskell flexibility when dealing with big complex aggregate data types, you should check out the "Scrap your Boilerplate" papers on generic programming with Haskell.  I posted a link to it earlier, but for reference, you can find them here.


Interesting.  I'll check it out.  Thanks for the link.

Quote
Typing, however, will remain a rather religious point, and I don't expect there will be much agreement on strong static vs. dynamic anytime soon.


I think this one gets argued monthly on comp.lang.functional, and no consensus has yet emerged.  Lispers would argue that a well chosen name will tell you more about what a function does than any type signature ever could, but these days I wonder if there might just be two different kinds of programmers.

Quote
What do you mean by progress?  Are you talking about userbase adoption?  Well, then yes, Ruby has made more progress.  Are you talking about language maturity and library maturity?  If so, I'd have to disagree that Ruby has made more progress.


By progress I mean viability as a commercial development language.  I'd feel pretty comfortable starting a new company based on Ruby.  Less so on CL and I'm pretty sure I wouldn't try it with ML at all.  Ruby has excellent documentation, a large and active user base, pretty comprehensive library support, and a design that's straightforward enough that any decent java/perl/python programmer should be productive in it within a week or two.  It's less novel from a language design point of view than Haskell or ML but these days I find myself a lot more interested in technology as a means to an end rather than as an end in itself.

Programming Languages

Reply #34
This thread is well beyond me but I'm still going to stick my leg in! Probably going to regret it! 

I am a .Net programmer, so you would think that my opinion is going to be biased, but it's not.

Good points about C++.
If you want to make something truly powerful like a new operating system, it's not possible with .Net! If I remember correctly from one of Microsoft's videos, they said that Windows Vista was written with C++, and so was .Net, and so was Visual Studio! They also said that Office 12 is written with C++.
The base technology for Microsoft products - Windows Vista, Office 12, Visual Studio, and even the .Net Framework and SDKs - is all written in C++!

Bad points about C++.
Not suited to RAD.
New applications must be installed on each and every PC.
There are other bad points but they don’t really concern me.

Good points about .Net.
Cannot be beaten for rapid application development.
Suited to most office environments.
Once .Net is installed on all PCs, all your applications can be made available to every PC with ClickOnce technology. No installation is required at all.
Very simple and very fast to make applications.

Bad points about .Net.
Must pay to get more functionality! We had to pay to get VS.Net 2003 just to make it easier to use the functions in .Net 1.1 which were not available in .Net 1.0. Now we are going to have to pay more to get VS.Net 2005 just to make it easier to use the features in .Net 2.0 which are not available in .Net 1.1.
Must run within an environment. e.g. .Net framework.
Have to go around to each PC to install the next .Net version.  Still not as bad as doing that with every application written in C++ each and every time.

In my environment I will never need C++. So I guess it all comes down to the developers needs!

Programming Languages

Reply #35
I wonder what you think about Lisp, especially the opinions closely and loosely related to it that are posted on Paul Graham's site...

Also I'm curious about views on Objective C and Cocoa/Gnustep.


BTW, do you think Python, perhaps even with Pythoncard, is good for a start?
Mostly "toying" activity - that's why there's no rush - apps that could be described as interfaces to functionality that already exists/gluing together ready-made libraries. Well, perhaps in the long run something in the area of cognitive science, hence my slight interest in Lisp. Anyway, still not very serious...
My only experience is very short, basic contact with C, and also somewhat with C++, though used like C - I don't even have an understanding of the concept of object-oriented programming.

Programming Languages

Reply #36
Quote
I wonder what you think about Lisp, especially the opinions closely and loosely related to it that are posted on Paul Graham's site...


Who is the question directed at?

Programming Languages

Reply #37
Anyone who might be competent to answer it

Programming Languages

Reply #38
Quote
I wonder what you think about Lisp, especially the opinions closely and loosely related to it that are posted on Paul Graham's site...


I haven't read too much from Paul Graham, but what I have read I've generally found myself to be in overall agreement with.  I used his simple lisp implementation example based on McCarthy's paper to implement my first simple LISP in Haskell, going on only that short snippet and an hour long conversation with a LISPer I know (which was helpful because prior to that I'd had about 15 minutes of experience programming in LISP).

As for his upcoming LISP implementation, Arc I believe it's called, I can't say a whole lot.  I've heard both some grumblings and some praise about it from various LISP users.  Personally, I'm not sure I see much need for a new LISP implementation though.  I could see utility in a new language that borrowed certain concepts from LISP, but LISP in itself I think has some problems that, at least for me, don't make it an ideal language:

1.  Dynamic typing -- A lot of people seem to love dynamic typing, but I don't agree with a lot of the supposed advantages that it offers, which mostly seem to center on programming flexibility but which seem to come at the cost of both program efficiency and program safety.  There is also a question about the expressivity of dynamic typing in a strictly technical sense, which may have interesting consequences for certain types of problems.

Static typing, as opposed to dynamic typing, is strictly more expressive, and the proof of this is rather trivial:  All untyped (dynamic) programming languages correspond to a typed (static) programming language with a single universal type.  However, there exist some typed programming languages which have more than a single universal type (e.g., the most fundamental being the simply typed lambda calculus).  Therefore, typed programming languages are strictly more expressive than untyped programming languages.
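To make the "single universal type" embedding concrete, here is a rough Haskell sketch (the type and function names are invented for illustration): every value of the untyped language is injected into one big sum type, and what a dynamically typed language reports as a runtime type error shows up as the fall-through case.

Code:
-- All values of the "untyped" language live in a single universal type.
data Univ
  = UNum Double
  | UStr String
  | UFun (Univ -> Univ)

-- Application must check at runtime that the operator really is a function:
-- the dynamic-typing discipline, re-expressed inside a static type system.
apply :: Univ -> Univ -> Univ
apply (UFun f) arg = f arg
apply _        _   = error "runtime type error: not a function"

main :: IO ()
main =
  let incr = UFun (\(UNum x) -> UNum (x + 1))  -- deliberately partial: fails at runtime on non-numbers
  in case apply incr (UNum 41) of
       UNum r -> print r                       -- prints 42.0
       _      -> putStrLn "unexpected result"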

Whether this particular technical point manifests itself in practice or not, LISP does not appear to me to make a good target language for certain types of problems where type information is quite prevalent in the data, and in a complex nested fashion.  The lack of something like [G]ADTs and proper pattern matching is a big part of this.  For example, an area of one project I am currently working on involves writing a rather complex type checker and inference algorithm.  This sort of problem would be much more difficult (but not impossible) to do in a language like LISP.  It's not really limited to that sort of niche area though -- I think LISP may not be as suitable for other somewhat similar tasks like complex XML parsing, or anything where polytypic programming with a large emphasis on safety and performance is desirable.

Dynamic typing has the advantage of loosening up the evaluation of the program in a way that can sometimes make rapid application development easier -- but ultimately I don't think it is worth the trade-offs.  Others, of course, disagree.  However, for the most part, the advantages of "duck typing" mentioned earlier in this thread, for example, are easily captured with a rich static type system that features something like parametric polymorphism and type classes.  One could, for example, do something silly like this if they so desired:
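(A rough sketch of the kind of thing I mean; the class and instance names below are invented purely for illustration.)

Code:
-- "Duck typing" captured statically: anything with a Quacks instance
-- can be passed to makeItQuack, and the compiler checks it up front.
class Quacks a where
  quack :: a -> String

data Duck  = Duck
data Robot = Robot

instance Quacks Duck  where quack _ = "Quack!"
instance Quacks Robot where quack _ = "*synthesized quack*"

makeItQuack :: Quacks a => a -> IO ()
makeItQuack x = putStrLn (quack x)

main :: IO ()
main = do
  makeItQuack Duck
  makeItQuack Robot
  -- makeItQuack (42 :: Int)  -- rejected at compile time: no Quacks instance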

2. Lack of many modern features -- LISP lacks a lot of the features I criticized other languages in this thread for not having as well.  However, a couple of big ones for me are: currying (this has always been a strange one IMO, given how LISP is supposedly modeled after the Lambda Calculus), pattern matching, laziness (optional or otherwise), concurrency, etc.
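Just to show the first three of those in a few lines (a hedged Haskell sketch, nothing more):

Code:
-- Currying: 'add 1' is a partially applied function.
add :: Int -> Int -> Int
add x y = x + y

increment :: Int -> Int
increment = add 1

-- Pattern matching on an algebraic data type.
describe :: Maybe Int -> String
describe Nothing  = "nothing here"
describe (Just n) = "got " ++ show n

-- Laziness: an infinite list is fine as long as only a prefix is demanded.
evens :: [Int]
evens = map (* 2) [0 ..]

main :: IO ()
main = do
  print (increment 41)          -- 42
  putStrLn (describe (Just 7))  -- got 7
  print (take 5 evens)          -- [0,2,4,6,8]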

3. Too much reliance on macros -- When I talk to LISP people, and mention a feature lacking in the language, they are very quick to state that most of the time these things can be added with macros.  Sure, this is true, but I believe the LISP community has come to a point where it relies on macros far too much simply to make up for certain problems with the language that result from its lack of expressivity, which is enforced by the syntax being such a simple and direct representation of a linked list data type.

One problem with relying too much on macros is that there's little in the way of consistency in how to solve certain types of problems -- this makes code maintenance harder and makes code less idiomatic, which affects the scalability of projects written in the language when they are deployed in collaborative contexts.  The other major problem with macros is that many of the features people "add" to the language via macros do not benefit from compiler optimization in the same way that such a feature would if it were more directly represented in the semantics of the language and subsequently accounted for in the language's intermediate representation and compiler optimization steps.

Despite all of this, I find LISP users tend to think that metaprogramming is a necessary step on the way to solving problems much more often than is actually the case.  In a language like Haskell for example, I have solved many problems that I've seen solved in LISP code that made much use of metaprogramming via macros.  However, I have only twice used Haskell's extremely capable Template facility (which is capable of full blown metaprogramming): once when using a library that employed metaprogramming to create bindings to Objective-C at compile time (generating classes and doing all sorts of magic under the hood), and one other time simply when I was curious about learning its Template system.

4. Somewhat bloated -- This will probably draw some annoyance from some LISP fans, but I think Common LISP is simply bloated (library- and extension-wise) from what I've seen.  I cringe at using that word to describe LISP which, despite what I've said about it so far, I do have a lot of respect for, at least in a historical context if nothing else.  If you like the "kitchen sink" approach, then maybe it's for you... I tend to prefer a language which is more concise in this regard, and where complex libraries and extensions are implemented in a highly consistent fashion which tends to encourage idiomatic programming styles.  I don't think LISP does this, judging both from what I've seen of what Common LISP contains and from the macro issue I mentioned above.

I'm not sure if Arc is going to follow Common LISP or not, so whether it will suffer this same criticism I can't be entirely sure.  Maybe this one won't be a problem.

Quote
Also I'm curious about views on Objective C and Cocoa/Gnustep.


Objective-C is a nice simple language.  It's basically C with an easier, and in some ways better, object system than C++.  It doesn't have many of the more advanced features that C++ has though, with the most glaring omission being the lack of templates.  You don't tend to need templates with Objective-C nearly as much though because all objects in Objective-C inherit from a common base, and the objects are dynamically typed (in C++ they are weakly statically typed).

Objective-C on its own is pretty bare and I wouldn't use it as a language for most tasks.  With Cocoa, which is a fantastic and fairly complete sort of "standard library" for Objective-C on OS X (GNUstep being the much more incomplete and less well maintained GNU alternative), Objective-C is a pretty decent platform for application programming.  I don't think I'd use it for any other task though.  Similar to C, I don't think it's the ideal language for most tasks.  C usually is a good choice for DSP-type stuff, and in that regard Objective-C is just as suitable, but the catch is that you aren't going to be using Objective-C's object system for that in most cases.  If you need really fast and efficient OO then you would be much better off using C++ with static polymorphism through template expressions.

Quote
BTW, do you think Python, perhaps even with Pythoncard, is good for a start?


Python is an OK language to learn on.  I don't know anything about Pythoncard.  If I were to recommend a language for someone to learn on today though, Python wouldn't be my first choice.  I would instead recommend one of: 1) Ruby, 2) Scheme, or 3) Oz.  Which one I would choose would depend on exactly why the person is learning programming.  I would basically break it down like this:

1. Ruby -- I would recommend Ruby if the person were interested in learning programming basically to eventually go on and do typical programming work in the industry.  Stuff like application programming, web application development, scripting small programs to serve as glue between some sort of other components, etc.  I think Ruby is going to become a lot more prevalent in the industry as time progresses.  It's got a lot of momentum right now and I think it will probably stay that way for awhile.  In terms of language design, it's cleaner than Python and I like the direction it is going in much more than I like the direction Python seems to be going in...

2. Scheme -- I would recommend Scheme probably as a language for people with very little to no exposure to computational ideas in general (one of my friends taught a programming class to kids and used Scheme quite successfully in this regard).  A nice thing about Scheme is that it's pretty consistent and concise, and the syntax is basically dead simple (but as I noted above, I think this comes at a price).  Despite being simple, Scheme is powerful enough to express some nice theoretical qualities (see Structure and Interpretation of Computer Programs) which I think too many programmers these days know next to nothing about and would do well to learn.

3. Oz -- I would recommend Oz as a language to learn on for people truly interested in theoretical computer science.  People who have an interest in solving difficult academic style problems, or who might be the kind of person who wants to know how languages work deep down and might enjoy implementing their own -- these are the kind of people that I think should learn Oz.  Oz is perhaps the most multiparadigm language that I have seen, and one can learn functional programming, OO, logic and constraint based programming, distributed and concurrent programming, and strict vs lazy evaluation based programming from it, all in one nice, simple, and consistent package.  It has a lot of unique features that make it easy to learn about its theoretical qualities and to reason about, such as the inspector, which is a sort of graphical tool that shows the realtime evaluation of code, and the explorer, which is a tool that shows a graphical representation of search trees and the like in a constraint computation, along with other tools such as things that show realtime statistics on program concurrency, etc.  I don't think there is any other language (except possibly for Haskell, but it is much harder to learn) that can expose someone to so many interesting theoretical aspects of computer science in the programming language aspect all from a single source.  Someone learning this language should read Concepts, Techniques, and Models of Computer Programming as an aid -- it's like Structure and Interpretation of Computer Programs but uses Oz instead of Scheme, covers a much broader set of topics, and is much more up to date, having only come out last year.  It is by far one of the best (maybe THE best) books of its type.

Quote
Mostly "toying" activity - that's why there's no rush - apps that could be described as interfaces to functionality that already exists/gluing together ready-made libraries.


For that, I would go with Ruby.

Quote
Well, perhaps in the long run something in the area of cognitive science, hence my slight interest in Lisp.


I've never quite understood why LISP still retains the reputation of being an AI language.  Prolog is more obvious, but as for LISP I could understand it having this reputation decades ago, when there were few other languages that were as "high level" as LISP and capable of decent symbolic processing.  However, I think languages like ML or Oz are better suited to this task these days.  "Cognitive Science," or AI, type stuff is an area that I think falls into the category I mentioned earlier about richly typed data.  ML (or Haskell) excels at this kind of task.  Other aspects of AI style programming deal with 1) knowledge sets, and 2) autonomous agents -- both of which Oz handles perfectly well with 1) its rich logic and constraint based facilities, and 2) its rich OO and distributed/concurrent computational abilities.  Alice ML -- which is basically the ML answer to Oz (written by largely the same team as the Oz guys) -- offers essentially the exact same capabilities as Oz (right down to the same graphical evaluation tools even), but within a traditional ML dialect with the expected extensions.  Haskell also has some excellent extensions as far as parallel computation goes.
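For what it's worth, a tiny, hedged illustration of the "richly typed symbolic data" point (the Expr type and the rewrite rules here are made up): the classic symbolic-manipulation flavour of AI work falls out of an algebraic data type plus pattern matching almost for free.

Code:
-- A small symbolic expression type with a simplification pass.
data Expr
  = Num Double
  | Var String
  | Add Expr Expr
  | Mul Expr Expr
  deriving Show

simplify :: Expr -> Expr
simplify (Add (Num 0) e) = simplify e
simplify (Add e (Num 0)) = simplify e
simplify (Mul (Num 1) e) = simplify e
simplify (Mul e (Num 1)) = simplify e
simplify (Mul (Num 0) _) = Num 0
simplify (Mul _ (Num 0)) = Num 0
simplify (Add a b)       = Add (simplify a) (simplify b)
simplify (Mul a b)       = Mul (simplify a) (simplify b)
simplify e               = e

main :: IO ()
main = print (simplify (Add (Mul (Num 1) (Var "x")) (Num 0)))  -- Var "x"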

Quote
My only experience is very short, basic contact with C, and also somewhat with C++, though used like C - I don't even have an understanding of the concept of object-oriented programming.


If I were you, I would personally try to read a book with a little bit of theory before trying to learn complex programming ideas straight up.  It helps a lot to have a solid foundation on what various concepts mean (like OO for example), and why someone would or would not want to use them, before actually getting to work.  Most people skip this step and never look back, and I think it is ultimately to their detriment. YMMV of course...

Programming Languages

Reply #39
I just read the slides and presentations from this year's Ruby conference, and it seems that Ruby's short-term future is not as dark as it seemed. The new VM has progressed rather well (from my understanding, most of the work - excluding multi-CPU optimization - is already finished). The benchmarks look rather nice and put Ruby *ahead* of Python in terms of performance on complex mathematical calculations.

As for syntax changes, it seems that the two most important ones, not in the current CVS yet, are keyword arguments and block-local scope. I was quite amused when I read the slides and saw proposals for taking some aspects from Common LISP on more than one occasion.

For those who are interested in where Ruby is headed:
Ruby 2.0 slides: http://www.rubyist.net/~matz/slides/rc2005/index.html
New VM progress-report: http://rubyforge.org/pipermail/yarv-devel/...ber/000372.html

- Lyx
I am arrogant and I can afford it because I deliver.

Programming Languages

Reply #40
Quote
I wonder what you think about Lisp, especially the opinions closely and loosely related to it that are posted on Paul Graham's site...



There are a lot of brilliant ideas in Lisp and you can learn quite a bit from it but it's really not practical for writing real-world code for a variety of reasons I won't go into here.

If you want to start writing real code that actually does something interesting soon, I'd strongly recommend ruby.  If you're more ambitious and willing to work a lot harder, get a copy of Structure and Interpretation of Computer Programs and work through it.  It'll make your brain explode but you'll have a much better understanding of programming.

Programming Languages

Reply #41
Thought I'd add a little something more to this thread...

I just got done writing my first real program in Scala, and all I can say is: wow, what an incredibly cool language!  I guess I should have checked it out sooner given that it was supposedly a big inspiration for the current Fortress language spec (a quick examination of Scala confirms this -- Fortress's syntax and feature set are very close to Scala's, similar to how heavily Java was influenced by C++).  At any rate, I think I've found my new favorite OO language, easily displacing Ruby.  To understand why, just take a look at some of the features it supports:
  • Very lightweight and non-obtrusive syntax (i.e., not cluttered with numerous unnecessary keywords or other fluff)
  • Strong static typing with type inference
  • Parametric polymorphism (think like Generics or Templates) and ad-hoc polymorphism with subtyping (OO inheritance, dynamic dispatch)
  • Everything is an object, including "primitive values" (similar to Ruby, and unlike Java) and functions
  • Lambda abstractions (e.g., x:Int => x * x)
  • Thus functions are first class (e.g., val xs:List[Int] = ...; xs.map(x:Int => x * x);)
  • Functions can then also be nested
  • Functions can also be curried (i.e., methods can be partially applied to parameters, which then returns a new method taking the remaining unbound parameters -- good for proper higher order programming)
  • "Object" abstraction in addition to Classes (Objects are instantiated singleton classes, requiring no special design pattern or multiple extra keywords to establish)
  • Traits (i.e., similar to Java interfaces, except can contain default implementations, but no mutable state)
  • Mixin in addition to standard inheritance (e.g., somewhat complex to explain here, but it's a way to layer class composition more flexibly than with straight inheritance)
  • Pattern matching (e.g., str match { case "foo" => ... case "bar" => ... } )
  • Case classes (i.e., classes with an automatically provided constructor that is pattern matchable, in addition to having automatically created display functions.  Case classes do not require "new SomeClass()" either, but appear more like functions: Tree(Tree(Leaf(1),Leaf(2)),Tree(Tree(Leaf(3),Leaf(4)),Leaf(5))) )
  • Advanced type control such as Variance (allows one to control covariants and contravariants), compound types, etc.
  • Views (i.e., a facility that lets one "view" existing classes under a new abstraction.  For example, one can use a view to abstract a string class as a class representing a sequence of characters.  Another example would be "viewing" a complex number type through an angular representation rather than a rectangular.)
  • Regular expression like patterns for general sequential data (seems like a *very* powerful idea)
  • Ability to operate on raw XML directly in the code, which is then treated as a nested data type which one can pattern match on, or inject Scala computations into specific portions of (think kind of like PHP in reverse).  One can use a tool to transform XML DTDs into a series of classes to make the resulting data structure richly typed, rather than essentially operating on it as an untyped ADT.  This seems incredibly powerful and is probably the best facility for working with XML that I have seen from any language ever.
  • Automatic type dependent closures (i.e., you can use this with currying and the language's ability to use single parameter methods as infix operators to create arbitrary flow control constructs, but without resorting to a metaprogramming facility that requires direct manipulation of the language AST.  Think of it kind of like C++ template expression metaprogramming but without the ugliness, and designed from the start to produce natural looking syntax).
  • Arbitrary operator specification (i.e., any combination of non-alphabetic symbols that do not start with . may be used to create an operator, like ":*!@@@" for example.  Used with the infix notation mentioned above, this is incredibly powerful)
  • Full unicode support (i.e., you can use non-latin unicode characters for *identifiers*, such as symbols for lambda, gamma, omega, etc. for variables or objects. Strings are also unicode).
  • Compiles to either the JVM or the MS .NET CLR (defaults to JVM), which means not only does it run seamlessly on those platforms, but libraries from those platforms can be called transparently and used as native scala code, instantly giving access to a huge variety of industrial grade libraries.
  • Parser combinator library support (i.e., a way of parsing that operates upon the idea of building up a parser through a sequence of functions representing abstract operations on streams.. kind of like regular expressions through function composition)
... and those are just the ones I know of after a day looking at it.  More can be seen here and here.

The language is pretty new, having only been around since 2002, and really only having taken off in more recent times.  Previously, I was focusing more on Ruby as the OO language to watch, but I take that back after having seen Scala.  It seems that they pretty much got everything right (that I've seen so far), having managed to marry the functional and OO styles in a very unique and well thought out way.  I think it illustrates that a lot of the features which I have been faulting other languages in this thread for not supporting can and do in fact have a place in a more "traditional" OO paradigm.  The design, as well as their documentation and the examples they give, makes it rather clear why both approaches combined into one are much better than just pure OO with no, or very little, functional support.  They've managed to make OO much more "lightweight" and usable by implementing so many features that provide opportunities for higher order programming too.

It's also refreshing that they went with static typing with type inference rather than going the easy route and just doing it all dynamically.  Type inference when both parametric polymorphism and ad-hoc polymorphism with subtyping are simultaneously in the equation is not an easy thing to implement, but they've done it and done it well at that.  This will, I think, make the language incredibly scalable and suitable for very complex industrial tasks.  No surprise to see that they designed the language for essentially those purposes.  The XML manipulation power and structural regular expression support is just the tip of the iceberg with regard to what I think is possible there...

Oh, the program that I wrote was a simple untyped lambda calculus evaluator (sans parser since I ran out of time).  I wrote an almost purely functional version and then an almost purely OO version (just to test language flexibility).  The curious can find them here (note that the files are encoded as utf-8 and use some unicode chars).

Programming Languages

Reply #42
Quote
Thought I'd add a little something more to this thread...


There was a time when I would have looked at that list of features and thought "Wow!  What a cool language".  Now I read it and think "Wow!  Another research language I'll never get to use at work!".

That stuff might all sound great on paper but I'm not convinced it has that much real-world programming value.  Ruby is simple, mostly fits all in my head at once, and is straightforward enough that I have some chance of persuading my coworkers to try it.  We could have round 7999 of the static vs. dynamic typing debate, but it's impossible to argue that large complex systems *can't* be built in dynamic languages and I think the burden of proof really lies with strong typing advocates.

It's kind of like 20th century orchestral music.  Schoenberg and Xenakis' music sounds great from a theoretical point of view, but nobody listens to it and their most persuasive ideas have been cherry-picked and reworked into something more palatable to the general public.  So maybe the functional programmers are the 12 tone serialists of the software world?

Programming Languages

Reply #43
Quote
There was a time when I would have looked at that list of features and thought "Wow!  What a cool language".  Now I read it and think "Wow!  Another research language I'll never get to use at work!".


Some people might feel that way, but there are a couple of things that make the situation with Scala different I think.

The first big point is that it runs on the JVM or .NET.  Not even Ruby does that (Edit: it does seem that there is some sort of .NET bridge for Ruby here, however this is not the same as making it a native facility of the language itself).  This means that Java programmers (of which there are many) are going to be much more comfortable trying it because they don't have to rely on another VM and they can use all of their existing libraries and all of the libraries they've grown accustomed to over the years.  Using a Java library in Scala is transparent.  You simply import it just like a Scala library, and use it from there.  That is a huge advantage that is hard to overstate for a language new to the scene.

The second big advantage is that from reading the Scala documentation, it's very clear that they are targeting the language at people from traditional OO backgrounds.  Almost every example is explained in a way an OO programmer would understand, rather than how a functional programmer would look at the situation.  The great thing, though, is that the designers were not so narrow minded as to throw out the huge advantages that functional programming offers simply because many people may not be familiar with them.

Quote
That stuff might all sound great on paper but I'm not convinced it has that much real-world programming value.  Ruby is simple, mostly fits all in my head at once, and is straightforward enough that I have some chance of persuading my coworkers to try it.


I might agree with you regarding Haskell.  In Haskell, any serious programming requires monads, and most of the time complex nested monads at that.  Honestly, those can get very hard.

Scala, on the other hand, can be used almost exactly the same way someone would use Java.  It just happens to offer incredible power when someone wants to dig deeper.  And amazingly, they managed to offer all of it in a way that is not convoluted and terribly complex.  They've managed to fit more flexibility into their language than a language such as Java has, but while keeping the specification, documentation, and learning curve at a mere fraction of the size.

I definitely think Scala has real world value, because most of its features are so clearly geared towards that, and it works now, with a huge wealth of existing libraries already available at its disposal.

Quote
We could have round 7999 of the static vs. dynamic typing debate, but it's impossible to argue that large complex systems *can't* be built in dynamic languages and I think the burden of proof really lies with strong typing advocates.


Complex systems can be built with dynamic typing, I never said otherwise.  But I find it strange that people don't seem to see the advantage in having the compiler doing the heavy lifting with regards to safety, rather than having to sprinkle type checking predicates or exceptions all throughout their code and cross their fingers after they've deployed the product.

I don't know, maybe I'm just missing something...

Quote
It's kind of like 20th century orchestral music.  Schoenberg and Xenakis' music sounds great from a theoretical point of view, but nobody listens to it and their most persuasive ideas have been cherry-picked and reworked into something more palatable to the general public.  So maybe the functional programmers are the 12 tone serialists of the software world?


And that's just it.  Scala is the cherry-picked, reworked-into-something-more-palatable version of functional programming for the OO generation.

You should take a look at it before assuming outright that it's useless.  I don't think that it's exactly a coincidence that the Sun guys are drawing so heavily from Scala for Fortress, their next "Big Language", considering all of these things.

Programming Languages

Reply #44
Quote
This means that Java programmers (of which there are many) are going to be much more comfortable trying it because they don't have to rely on another VM and they can use all of their existing libraries and all of the libraries they've grown accustomed to over the years.


Lots of languages have tried this but so far it doesn't seem to have helped Jython, Nemerle, or any of the JVM-targeting Schemes.  Again, sounds great in theory but so far in practice it's been a dud.

Quote
The second big advantage is that from reading the Scala documentation, it's very clear that they are targeting the language at people from traditional OO backgrounds.  Almost every example is explained in a way an OO programmer would understand, rather than how a functional programmer would look at the situation.


I think most OO programmers will run screaming away from the gnarly type errors.  The sad fact of this Paul Graham-ish "design for smart programmers" is that most programmers aren't really very good and a language needs these people to really thrive.  A recent case in point: reddit.com, which was held up as a Lisp success story until last week:

http://reddit.com/blog/2005/12/on-lisp.html

Notice that the main reason they switched to Python, despite the fact that they consider Lisp to be superior, is that they didn't have to do nearly as much grunt work because so many support libraries had already been written by the "average" programmers that never got around to doing the same thing for Lisp.

Peter Norvig, who's probably forgotten more about programming than the rest of us will ever know, grudgingly accepts Python here:

http://norvig.com/python-lisp.html

Quote
Complex systems can be built with dynamic typing, I never said otherwise.  But I find it strange that people don't seem to see the advantage in having the compiler doing the heavy lifting with regards to safety, rather than having to sprinkle type checking predicates all throughout their code and cross their fingers after they've deployed the product.


Trotting out the standard counter-argument: static typing only catches certain kinds of errors, you still have to do proper unit testing and q.a., and it's not clear that static typing doesn't cause as many problems as it solves via increased language complexity and conceptual overhead.

Quote
And that's just it.  Scala is the cherry-picked, reworked-into-something-more-palatable version of functional programming for the OO generation.

You should take a look at it before assuming outright that it's useless.


It looks to me like they're still clinging to most of the functional community's most cherished and untested assumptions, but I'll give it a closer look.

The burden on all language designers is to demonstrate that their language solves some important problem much better than existing alternatives.  Ruby's grown more in the last six months than in its entire previous existence thanks to Rails.  The ball is in the functional community's court to do the same.

Programming Languages

Reply #45
Quote
Lots of languages have tried this but so far it doesn't seem to have helped Jython, Nemerle, or any of the JVM-targeting Schemes.  Again, sounds great in theory but so far in practice it's been a dud.


I don't know anything about Nemerle, but I think the reason this approach has failed with Jython and the JVM-targeting Schemes is that those are both dialects of languages which do not specify mandatory support for those facilities out of the box.

There is a huge difference between downloading Scala and using System.out.println in your first hello world program, and downloading a special, non-popular dialect or other poorly supported language extension facility to get the same functionality.

The only language I know of which does this sort of thing as well as Scala is D, which also happens to allow usage of C libraries transparently right out of the box.

Quote
Quote
The second big advantage is that from reading the Scala documentation, it's very clear that they are targeting the language at people from traditional OO backgrounds.  Almost every example is explained in a way an OO programmer would understand, rather than how a functional programmer would look at the situation.


I think most OO programmers will run screaming away from the gnarly type errors.


Well for starters there aren't going to be gnarly type errors for most typical OO programmers.  Why? Because they'll simply stick to your typical OO style programming with no parametric polymorphism, ADT's, or higher order programming.  When used in the most typical OO way, there should be no more type complexity at work than in Java.

Quote
The sad fact of this Paul Graham-ish "design for smart programmers" is that most programmers aren't really very good and a language needs these people to really thrive.


Sure.  I am excited about Scala because I think it has the potential not to scare these people away, since the advanced functionality of the language is not mandatory for average tasks, and is non-intrusive when it is used in sprinklings.

Yet, when someone like me wants something a little bit more, it has the potential to provide me with most of what I need for even the most complex tasks.

Quote
Notice that the main reason they switched to Python, despite the fact that they consider Lisp to be superior, is that they didn't have to do nearly as much grunt work because so many support libraries had already been written by the "average" programmers that never got around to doing the same thing for Lisp.


This seems to be a good argument in support of Scala with regards to its backwards compatibility with the JVM and .NET, doesn't it?

Quote
Trotting out the standard counter-argument: static typing only catches certain kinds of errors, you still have to do proper unit testing and q.a., and it's not clear that static typing doesn't cause as many problems as it solves via increased language complexity and conceptual overhead.


Catching half the errors is better than catching none of them.  The half that isn't caught is usually easier to prevent also -- most of the remaining errors in my Haskell programs after everything has been type checked are algorithmic errors.  For complex algorithms in real world code, most of the behavior of these algorithms will be proved before being implemented.  And of course one will still use unit testing.  Static typing with type inference can actually make unit testing much nicer, in fact.  See Haskell's QuickCheck for example.
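For instance, a minimal QuickCheck property (the property itself is just a toy); the library generates random test cases, and the type signature of the property is what drives the generation:

Code:
import Test.QuickCheck

-- reverse undoes itself; QuickCheck generates random [Int] inputs to check it.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseTwice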

As for conceptual overhead, I'm not convinced.  If you're working on a complex problem, you need conceptual overhead to maintain integrity over the program's operation.  The more complex a task is, the less you can afford for some aspects of the computation to behave in an unknown fashion.  You need to do a little housekeeping to prevent this, whether you use type predicates and exceptions or static typing.  In all except the most extreme (and then probably poorly designed) cases, typing information should not get in the way so much that the rest of the computation becomes difficult.  As just mentioned, if that were the case, it would more than likely be indicative of poor design with regards to the solution one is employing.

The advantage of the static approach is that you can combine a variety of techniques (type checking, exceptions, unit testing), whereas with dynamic typing your options for safety are more limited.

Quote
It looks to me like they're still clinging to most of the functional community's most cherished and untested assumptions, but I'll give it a closer look.


Out of curiosity, which untested assumptions are you referring to?

As I said before though, Scala is primarily an OO language, with proper functional support after that.  Their tutorials even say things like "Scala Tutorial for Java Programmers."  I don't think the Scala design team obsesses about functional programming; I just think they've seen how it's advantageous.

On the other hand, it's somewhat amusing how most major OO languages are now adding quite a few functional-style features after years of people saying how useless fp is.  Strange, isn't it?

Quote
The burden on all language designers is to demonstrate that their language solves some important problem much better than existing alternatives.  Ruby's grown more in the last six months than in its entire previous existence thanks to Rails.  The ball is in the functional community's court to do the same.


One look at their XML support and the rest of the language features that operate well with this approach (the regex stuff I mentioned, pattern matching, etc.) should make it obvious to just about anyone with serious programming experience just how useful some of those things could be for "real world" programs.
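
For what it's worth, this is the sort of thing I mean by the regex and pattern-matching support; the names here are made up purely for illustration:

Code
object MatchSketch {
  // A regex value can be used directly as a pattern in a match expression.
  val Date = """(\d{4})-(\d{2})-(\d{2})""".r

  sealed trait Event
  case class Login(user: String) extends Event
  case class Purchase(user: String, amount: Int) extends Event

  def describe(e: Event): String = e match {
    case Login(u)                  => u + " logged in"
    case Purchase(u, a) if a > 100 => u + " made a large purchase"
    case Purchase(u, _)            => u + " made a purchase"
  }

  def main(args: Array[String]): Unit = {
    "2005-11-30" match {
      case Date(year, month, day) => println(year + "/" + month + "/" + day)
      case _                      => println("not a date")
    }
    println(describe(Purchase("lyx", 250)))
  }
}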

Seriously, people can rail on (no pun intended) all day about "real world" programming or "academic" languages, but those definitions are so amorphous as to be practically meaningless to me.  What I think is important is whether a language is both capable and practical.  I think Scala meets both of those criteria.  Whether it will become a major player in the language community lies more with the random whims of the Language Gods than it does with other factors; Scala, I think, has those covered already.

Programming Languages

Reply #46
Quote
There is a huge difference between downloading Scala and using System.out.println in your first hello world program, and downloading a special, unpopular dialect or some other poorly supported language-extension facility to get the same functionality.


A lot of the other JVM-targeting languages integrate very nicely with the rest of the Java runtime but it didn't seem to help them much.  Maybe things will be different for Scala but the precedents aren't encouraging.

Quote
Quote
Notice that the main reason they switch to Python, despite the fact that they consider Lisp to be superior, is that they didn't have to do nearly as much grunt work because so many support libraries had already been written by the "average" programmers that never got around to doing the same thing for Lisp.


This seems to be a good argument in support of Scala with regards to its backwards compatibility with the JVM and .NET, doesn't it?


Yeah, in theory, but again, there are a lot of skeletons littered along that yellow brick road.

Quote
Catching half the errors is better than catching none of them.  The half that isn't caught is usually easier to prevent also -- most of the remaining errors in my Haskell programs after everything has been type checked are algorithmic errors.


Most of the bugs I find in my code are things that a type-checker wouldn't catch, but this is a hard point to argue in a vacuum.  I think it's interesting that really exceptional programmers come down on both sides of this issue.  I think it's one of those engineering things where it's not a question of which one is "better" in any absolute sense but more of a question of what the tradeoffs of each approach are.

Quote
Quote
It looks to me like they're still clinging to most of the functional community's most cherished and untested assumptions, but I'll give it a closer look.


Out of curiosity, which untested assumptions are you referring to?


The whole idea that some kind of rigorous mathematical theoretical grounding is important and valuable for typical programming tasks.  All this type theory and analysis and lambda calculus and graph rewriting yadda yadda makes for interesting research papers, but I still don't think anyone's demonstrated that it translates to more productive languages.  Personally I think Larry Wall was closer to the truth with his idea of programming languages that borrow from natural languages.  He went so far off the deep end with it in Perl that he's largely discredited it, but I think he was on to something.  Ultimately it's the programmer that makes the biggest difference, and the more you can do to lower the barriers between how they think about code and how they write code, the more productive they'll be.

Quote
On the other hand, it's somewhat amusing how most major OO languages are now adding quite a few functional-style features after years of people saying how useless fp is.  Strange, isn't it?


I certainly wouldn't say fp is useless.  Higher-order functions have proven their worth in many contexts.  I'd say the jury's still out on some other fp ideas like lazy evaluation and non-mutability.
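
For concreteness, here is roughly what each of those ideas looks like in Scala terms (a throwaway sketch, nothing more):

Code
object FpBitsSketch {
  def main(args: Array[String]): Unit = {
    // Higher-order functions: behaviour passed around as a value.
    val prices = List(9, 25, 3, 40)
    val discounted = prices.map(p => p - p / 10)

    // Non-mutability: "updating" an immutable list builds a new one;
    // the original is untouched and can be shared freely.
    val withExtra = 100 :: prices
    println(prices)      // List(9, 25, 3, 40)
    println(withExtra)   // List(100, 9, 25, 3, 40)

    // Laziness (opt-in here, the default in Haskell): nothing is
    // computed until the value is actually demanded.
    lazy val expensive = { println("computing..."); discounted.sum }
    println("before the demand")
    println(expensive)   // "computing..." is only printed now
  }
}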

Quote
One look at their XML support and the rest of the language features that operate well with this approach (the regex stuff I mentioned, pattern matching, etc.) should make it obvious to just about anyone with serious programming experience just how useful some of those things could be for "real world" programs.


I don't think you can make XML processing easy enough that people will switch languages for it.  With the right libraries it's not that big of a deal in most commercial languages now.  You can probably do better, but not so much better that people will switch for that reason.  Declarative languages make for elegant examples, but XSLT isn't exactly on fire either.

Quote
What I think is important is whether a language is both capable and practical.  I think Scala meets both of those criteria.  Whether it will become a major player in the language community lies more with the random whims of the Language Gods than it does with other factors; Scala, I think, has those covered already.


Time will tell, I guess.  The shift to web-oriented computing has opened the door to new languages and approaches, so there's at least a window for newcomers to prove their worth.

Programming Languages

Reply #47
Quote
A lot of the other JVM-targeting languages integrate very nicely with the rest of the Java runtime but it didn't seem to help them much.  Maybe things will be different for Scala but the precedents aren't encouraging.


I think they probably will, if only because of Fortress.  Scala in its current form may not take off, but Fortress is probably going to be a big deal, and since it borrows so much from Scala, and is probably also going to be interoperable with the JVM to some degree, I suspect you'll see at least many of the concepts behind Scala, as well as Scala programmers, becoming more relevant in the future.

Of course, it's very hard to predict what is going to happen in just a few years, so maybe it'll all be a wash.  If it is, I'd be both disappointed and surprised though...

Quote
Quote
Catching half the errors is better than catching none of them.  The half that isn't caught is usually easier to prevent also -- most of the remaining errors in my Haskell programs after everything has been type checked are algorithmic errors.


Most of the bugs I find in my code are things that a type-checker wouldn't catch, but this is a hard point to argue in a vacuum.  I think it's interesting that really exceptional programmers come down on both sides of this issue.  I think it's one of those engineering things where it's not a question of which one is "better" in any absolute sense but more of a question of what the tradeoffs of each approach are.


What I think bugs me the most about the typing debate is that a lot of the time, what is being discussed isn't really relevant to typing itself, but is instead about practical implementation issues.

Most people (I'm not saying you fall into this category) argue about static vs. dynamic typing (usually against the former) without really knowing anything about type theory itself, or why, theoretically and fundamentally, there is a difference.  As a result, most people aren't even in a position to really argue over anything more than "ease of use," and thus a very rich part of the debate is completely glossed over.

Maybe that's one of those "theory is unimportant" bits, but I doubt it.  Milner said something to the effect of "types make computer programs tractable," and having played with implementing type systems and many variants of the Lambda Calculus, I fully agree.  Most of what makes modern programming palatable is due to advances in theory, which are often at least indirectly related to type theory.
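
To give a flavor of what I mean by playing with type systems, here is a toy type checker for the simply typed lambda calculus, sketched in Scala.  It is my own throwaway illustration, nothing to do with any real compiler, but it is exactly the kind of small, tractable object the theory gives you:

Code
sealed trait Type
case object TInt extends Type
case class TArrow(from: Type, to: Type) extends Type

sealed trait Term
case class Lit(n: Int) extends Term
case class Var(name: String) extends Term
case class Lam(param: String, paramType: Type, body: Term) extends Term
case class App(fun: Term, arg: Term) extends Term

object TypeCheck {
  type Env = Map[String, Type]

  // Some(t) if the term has type t under env, None if it is ill typed.
  def typeOf(term: Term, env: Env = Map.empty): Option[Type] = term match {
    case Lit(_)       => Some(TInt)
    case Var(x)       => env.get(x)
    case Lam(x, t, b) => typeOf(b, env + (x -> t)).map(TArrow(t, _))
    case App(f, a) =>
      (typeOf(f, env), typeOf(a, env)) match {
        case (Some(TArrow(from, to)), Some(argType)) if from == argType => Some(to)
        case _ => None
      }
  }

  def main(args: Array[String]): Unit = {
    val id = Lam("x", TInt, Var("x"))
    println(typeOf(App(id, Lit(42))))      // Some(TInt): identity applied to an Int
    println(typeOf(App(Lit(42), Lit(1))))  // None: a number is not a function
  }
}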

Many people, including language implementors, choose to ignore all of this.  I'm just glad some people don't.  However, people with a thorough understanding of type theory seem to be having less influence on some of the more popular languages as of late, and thus static implementations are becoming more and more scarce.  I'm convinced this isn't because they are inferior (if anything, I think the opposite), but because static typing with type inference is hard.  But I also think it's worth it.

I tend to think that the negative view most people have of static typing could be changed with a little bit of patience and learning about some fundamental points of type theory in general, but this probably isn't going to happen.

Dynamically typed languages are not inferior, of course, but I view them as passing up a very powerful tool and theoretical underpinning.

Quote
Quote
Quote
It looks to me like they're still clinging to most of the functional community's most cherished and untested assumptions, but I'll give it a closer look.


Out of curiosity, which untested assumptions are you referring to?


The whole idea that some kind of rigorous mathematical theoretical grounding is important and valuable for typical programming tasks.  All this type theory and analysis and lambda calculus and graph rewriting yadda yadda makes for interesting research papers, but I still don't think anyone's demonstrated that it translates to more productive languages.  Personally I think Larry Wall was closer to the truth with his idea of programming languages that borrow from natural languages.  He went so far off the deep end with it in Perl that he's largely discredited it, but I think he was on to something.  Ultimately it's the programmer that makes the biggest difference, and the more you can do to lower the barriers between how they think about code and how they write code, the more productive they'll be.


Type systems probably aren't going to have much of an effect on productivity beyond a certain point.  That's probably because reaping the productivity gains of richer type systems requires a greater degree of expertise from programmers to begin with.

However, that still leaves room for safety and optimization, both of which rigorous theoretical underpinnings have much to offer.

Oh, and let's not forget about concurrency.  Concurrency and distribution are only going to become more and more of an issue in programming, and both of those things are quite intractable in a language if you do not heed theory ahead of time...

Quote
I certainly wouldn't say fp is useless.  Higher-order functions have proven their worth in many contexts.  I'd say the jury's still out on some other fp ideas like lazy evaluation and non-mutability.


Non-mutability (which is optional in most fp languages) is important for certain things like concurrency and distribution, at least insofar as it lets you reason about confining side effects to specific portions of a computation.  Laziness, on the other hand, is a tougher one to argue in favor of.  It can make certain programming styles a lot easier, but defaulting to it as the mode of operation has very serious drawbacks (memory issues).
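
A tiny Scala sketch of the concurrency point (raw threads used only to keep it self-contained): an immutable structure can be handed to any number of threads without locks, because nothing can change underneath anyone.

Code
object SharingSketch {
  def main(args: Array[String]): Unit = {
    // Immutable map: every worker can read it concurrently with no locking.
    val config = Map("host" -> "localhost", "port" -> "8080")

    val workers = (1 to 4).map { id =>
      new Thread(new Runnable {
        def run(): Unit =
          println("worker " + id + " sees port " + config("port"))
      })
    }
    workers.foreach(_.start())
    workers.foreach(_.join())
  }
}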

Programming Languages

Reply #48
I've taken a brief look at Scala, and although I dislike static typing, I like the syntax very much. Even jumping right into the examples, I can make sense of them without any background knowledge of its syntax. So once one is familiar with it, the result should be very clean and easy-to-understand code.

Some questions:
- I have not seen any references to prototyping. Personally, I like the idea of using prototypes instead of classes very much. Does Scala offer some way to "simulate" prototypes without the need to deep-copy?
- Compiling: AOT-only, or also JIT? I would miss JIT quite a lot, because especially in the early stages of a project, I prefer to be able to make just a few changes in the code and then check the result immediately.

- Lyx
I am arrogant and I can afford it because I deliver.

Programming Languages

Reply #49
Quote
- I have not seen any references to prototyping. Personally, I like the idea of using prototypes instead of classes very much. Does Scala offer some way to "simulate" prototypes without the need to deep-copy?


It doesn't have prototyping support in the typical sense as far as I know.  I think that approach would clash with some of the other more fundamental design characteristics of Scala.

However, you can probably approximate some aspects of prototype-based programming in Scala through mixins, anonymous classes, and views.  Ultimately you're still using the class-based approach, but by combining those features you can get a more ad-hoc style of extensibility than is typical of most other OO languages, perhaps similar in feel to prototyping.
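
Something along these lines is what I have in mind; the names are purely illustrative:

Code
trait Greeter {
  def greet(name: String): String = "Hello, " + name
}

trait Logged {
  def log(msg: String): Unit = println("[log] " + msg)
}

object MixinSketch {
  def main(args: Array[String]): Unit = {
    // An object assembled on the fly from traits, with one method
    // overridden in place -- not prototypes, but a similar ad-hoc feel.
    val chattyGreeter = new Greeter with Logged {
      override def greet(name: String): String = {
        log("greeting " + name)
        super.greet(name)
      }
    }
    println(chattyGreeter.greet("Lyx"))
  }
}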

Quote
- Compiling: AOT-only, or also JIT? I would miss JIT quite a lot, because especially in the early stages of a project, I prefer to be able to make just a few changes in the code and then check the result immediately.


Depends on what you mean exactly...

Scala compiles to JVM or .NET bytecode, and both runtimes use JIT compilation to turn portions of that bytecode into native code as needed.

I suspect you mean something more along the lines of whether you can get away with not compiling .scala -> .class before running the program, though.  In that sense, there is a REPL (scalaint), similar to the REPLs in Python or Ruby (or Lisp or Haskell or ...), and from there you can load .scala files directly.  If you're an Emacs user, scala-mode is designed like most other modes for languages with a REPL, and you can evaluate your current editing buffer in the REPL on the fly.
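
For example, given a trivial Hello.scala like the one below, a session might look roughly like the commented transcript underneath it (the exact prompt and command names can vary between releases, so treat it as a sketch):

Code
// Hello.scala
object Hello {
  def main(args: Array[String]): Unit = println("hello, world")
}

// A REPL session, shown here as comments:
//   $ scala
//   scala> :load Hello.scala
//   scala> Hello.main(Array())
//   hello, world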