Topic: Programming Languages (Read 35074 times)

Programming Languages

Reply #50
Thanks for the info, Dibrom. I've played around with Scala a bit in recent days and also scouted the web to get an idea of the available docs and support.

As an experienced, grown-up coder who likes high type safety, I would definitely like Scala.

However, I'm neither of the above, and RAD will be most important for my upcoming project. The situation is the following:

- it will be a huge task, definitely above my current skills and work capacity
- yet I still want to do it, because it's something I've wanted since I was about 4 years old
- I've lost hope and confidence in finding the right person to help me build the framework (which is the most difficult task)
- therefore, I'll have to do it alone... which means the only sane way to achieve it is to make it a long-term side project, working on it often but without much effort at any given point in time, to avoid burnout and loss of motivation
- this in turn means that writing the code will need to happen fast and without much effort
- since "without much effort" has much to do with the language, already being familiar with Ruby, as well as the big community and the availability of documentation and tutorials for non-pro coders, is a big plus

Thus, for my specific needs, I think Ruby fits better. There is the downside that Ruby's syntax is currently being reworked - thus, I'll have to idle a few months until the most important syntax changes are implemented in the CVS trunk. But if that's the only drawback, then I'm willing to accept it.

Still, thank you for the info - it definitely changed my view on strongly typed languages.

- Lyx
I am arrogant and I can afford it because I deliver.

Programming Languages

Reply #51
Quote
There is the downside that Ruby's syntax is currently being reworked - thus, I'll have to idle a few months until the most important syntax changes are implemented in the CVS trunk.


Just a small comment regarding that...

You know, people can often easily say things to the effect that a rigorous mathematical specification of a language (e.g., a formal semantics, etc.) is unnecessary, but it's times like these that you really see how useful they would have been.

One of the biggest problems with popular programming languages is that so many of them are never very well defined from the start, and so when it's time for the language to grow, there's often not a clear path ahead of time for how to extend the syntax without breaking backwards compatibility.

You would have thought that after C, people would appreciate rigorous formal specifications a little bit more.  With C, the language "specification" basically boils down to the behavior of the most popular compilers.  Because of that particular mess, you still see certain language features poorly supported years after they were proposed in some standard.  C++ compilers are only now starting to become reasonably consistent and reliable in behavior with highly complex code involving templates and other advanced features.

kuniklo said this earlier:

Quote
Languages tend to become less elegant as they evolve.


In response to my statement about Scheme being nicer than Common LISP.  I really don't think this is correct, though -- rather, I think it's a pathology of languages that disregard the importance of formal specification from the get-go.

There are many functional languages (usually the ones that have formal specifications more often than not... surprising?) that have been around for a while and that I think have probably become more elegant as they have evolved, particularly as theory has eventually aided them in their further flexibility.  Haskell is a great example of this, although not the only one.

I don't know if Scala will ever become a major language.  Chances are slim, mostly because the deck is highly stacked against any language not pushed by multiple major corporations.  However, I think that even if it doesn't become a major language, it has the potential to stick around for quite awhile simply because it is already well defined and well understood, and will be easy to extend as needed in the future without as much trouble as many of the current popular "scripting languages" will face.

I think Ruby is a nice language, but I'm thinking it's going to continue to face more growing pains in the future, beyond even currently planned changes, particularly if it is ever planning to tackle increasing concerns in programming like security, concurrency, and distribution.  These are difficult problems, and as I said earlier, they are practically impossible to handle in a straightforward fashion without a deep, well-designed, and mathematically precise model of the language's behavior -- something that goes well beyond its "implementation" in whatever compiler or VM is standard.

As for your project -- of course you should use whatever works best for you.  Ruby is a fine choice for RAD, and it's probably true that it would "not get in the way" of your thinking as far as rapid implementation goes.  But that's also a double-edged sword: dynamic typing and the sort of flexibility offered by a "loose" language like Ruby offer many more opportunities for poor design (from a maintainability, safety, and performance p.o.v.), although kuniklo will probably disagree.

On the other hand, if you already have to wait "months," I don't see why you couldn't become a pro with Scala in that time easily.  In the brief time I've known about it, I've already written a dozen non-trivial applications.  I don't feel as comfortable in it as in Haskell or ML yet of course, but I'm not too far behind.  Despite most of the advanced features I listed that Scala supports (along with quite a few I learned of since then that I didn't mention), the core language is very simple.  It's true that not many 3rd-party tutorials exist for the language yet, but both the introductory tutorial and example pdf are pretty comprehensive.  If that isn't enough, most Java source code (from tutorials for example) can be translated almost directly over to Scala with a few mostly cosmetic changes (http://lamp.epfl.ch/~emir/bqbase/2005/01/21/java2scala.html), if that's the style of programming you want to do.  And finally, the library support of Java is going to be hard to beat, even for Ruby.  Everything for Java "just works" with Scala.  I've written some SWT applications, some applications using OpenGL (via lwjgl), Cocoa-Java applications with Scala, etc.  Everything I've tried so far from Java, even some of the more experimental and weird stuff, just plugs right into Scala with no hassle.  I think that's a pretty big deal, even though I really don't like Java itself.
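To make the "just works" claim concrete, here is a minimal sketch (my own snippet, not from the thread) of Scala driving a plain Java library directly -- `java.util` collections in this case:

```scala
// Hypothetical sketch: using Java's standard collections from Scala directly.
// No wrappers or bindings are needed; Java classes are ordinary Scala classes.
import java.util.{ArrayList, Collections}

object JavaInterop {
  def main(args: Array[String]): Unit = {
    val list = new ArrayList[String]()   // a real java.util.ArrayList
    list.add("scala")
    list.add("java")
    Collections.sort(list)               // calling a static Java method
    println(list.get(0))                 // prints "java" (sorted first)
  }
}
```

The same pattern applies to SWT, lwjgl, or any other Java library: construct the Java objects and call their methods as if they were Scala's own.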

Anyway, good luck on your project.

Programming Languages

Reply #52
Quote
I think Ruby is a nice language, but I'm thinking it's going to continue to face more growing pains in the future, beyond even currently planned changes, particularly if it is ever planning to tackle increasing concerns in programming like security, concurrency, and distribution.  These are difficult problems, and as I said earlier, they are practically impossible to handle in a straightforward fashion without a deep, well-designed, and mathematically precise model of the language's behavior -- something that goes well beyond its "implementation" in whatever compiler or VM is standard.


Why optimize for the hard case?  Most applications never need to deal with these issues in a serious way and it significantly complicates a language to try to address them all.  All language designs involve tradeoffs and compromises, so why not leave the really hard stuff for specialized tools?  The Common Lisp people trot this argument out all the time but I think the net result of all their work to handle the "hard" cases is that CL is just too much for the average programmer and nobody uses it.  Have you ever read Richard Gabriel's "Worse is Better" essay?  (http://www.jwz.org/doc/worse-is-better.html)  Some real wisdom borne of experience, I think.  There's a reason perl is 100 times as popular as all the exotic languages put together and I think it has a lot to do with this philosophy.

As for the evolution of Ruby, I think Matz has so far shown remarkably good taste and I trust him to handle it well.  Ruby has a mental ergonomic that just fits and could never be specified in a simple mathematical formalism and I don't think you can really appreciate it until you've actually worked in the language for a while. 

I think Lyx's decision making process is going to be pretty common for the average programmer choosing between something like Python or Ruby and something like Haskell or Scala.

Programming Languages

Reply #53
Quote
Why optimize for the hard case?  Most applications never need to deal with these issues in a serious way and it significantly complicates a language to try to address them all.   All language designs involve tradeoffs and compromises, so why not leave the really hard stuff for specialized tools?


Most applications don't have to deal with these issues as much right now, but as I pointed out, they are going to have to do so much more commonly in the future.  In a language designed for the long haul, it makes sense to plan ahead, and a formal semantics and similar characteristics help immensely in this regard.  That was the point I was trying to make.

If you'd rather say: "well Ruby is a good language except I never expect it to handle those problems, and as those problems become more common, perhaps Ruby should be used less," well, OK

Quote
The Common Lisp people trot this argument out all the time but I think the net result of all their work to handle the "hard" cases is that CL is just too much for the average programmer and nobody uses it.


Well, there's LISP, and then there's everything else, right?  At least that's how the LISP people look at it.

But actually, there's a twisted sort of truth there.  LISP's overly simplistic syntax, which lends itself to the sort of macro heaven that LISP users are so infatuated with, also makes it less expressive in some sense.  Common LISP takes the approach of adding features as solutions to these sorts of "hard problems" in the form of macro-based domain-specific languages via libraries.  This tends to make things harder to learn, because features are more spread out and there's very little in the way of syntactic sugar to make things easier to remember and to aid idiomatic programming styles.

As I see it, there are really two much better alternatives:

  • Do it like Haskell -- support domain specific languages, but do so by making the languages themselves much more idiomatic and self-contained by allowing arbitrary operator definitions and infix notation.  This makes a huge difference in usability.


  • Support a couple of additional concepts directly in the language itself.  AliceML is a good example of adding thorough concurrency and distribution to a well-established syntax without adding huge amounts of complexity in average usage.  Yes, Alice can get complicated when you start using all of those new features at once.  But at the same time, it can look just like any other ML.  Oz, too, is an example of a language that is very simple, but also supports all of these advanced features.  Scala can do most of it also, although it does things more in the Haskell style (although Haskell also has support for concurrency in certain dialects).
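The first alternative -- idiomatic embedded languages via user-defined infix operators -- can be illustrated with a small invented example (shown here in Scala, which borrows the same trick: symbolic method names used infix):

```scala
// Sketch of an embedded mini-language: symbolic method names act as
// user-defined infix operators, so call sites read like domain notation.
case class Vec(x: Double, y: Double) {
  def +(o: Vec): Vec = Vec(x + o.x, y + o.y)  // '+' is just a method
  def *(k: Double): Vec = Vec(x * k, y * k)   // '*' binds tighter, as in math
}

object DslDemo {
  def main(args: Array[String]): Unit = {
    val v = Vec(1, 2) + Vec(3, 4) * 2         // reads like vector algebra
    println(v)                                // Vec(7.0,10.0)
  }
}
```

Because operator precedence follows the leading symbol, the embedded notation keeps the familiar arithmetic grouping without any macro machinery.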


Quote
Have you ever read Richard Gabriel's "Worse is Better" essay?  (http://www.jwz.org/doc/worse-is-better.html)  Some real wisdom borne of experience, I think.  There's a reason perl is 100 times as popular as all the exotic languages put together and I think it has a lot to do with this philosophy.


I just skimmed it, but I'll read it and respond more later.

Quote
As for the evolution of Ruby, I think Matz has so far shown remarkably good taste and I trust him to handle it well.  Ruby has a mental ergonomic that just fits and could never be specified in a simple mathematical formalism and I don't think you can really appreciate it until you've actually worked in the language for a while.


  That last sentence is a particularly profound statement.  Can you really support that?

The tools available for language specification are not "simple" mathematical formalisms by any means.  Maybe you should read TaPL and/or ATTaPL, or any of the other dozen similar sources, if you really believe that.

Saying that Ruby has an "ergonomic" which cannot be specified in a simple mathematical formalism signifies to me that the design is simply inconsistent and not well understood.  Either that, or in fact it can be specified, but nobody has bothered to do it.  There is absolutely no need for a formal specification to conflict with an elegant "ergonomic" -- the two are not diametrically opposed.

Quote
I think Lyx's decision making process is going to be pretty common for the average programmer choosing between something like Python or Ruby and something like Haskell or Scala.


Of course it is.  But that doesn't necessarily mean it's always the right decision making process...

Programming Languages

Reply #54
Quote
Most applications don't have to deal with these issues as much right now, but as I pointed out, they are going to have to do so much more commonly in the future.  In a language designed for the long haul, it makes sense to plan ahead, and a formal semantics and similar characteristics help immensely in this regard.  That was the point I was trying to make.


I think the point of Gabriel's essay is that, ironically, in designing for the long haul you make sure you don't survive the short haul.

Quote
If you'd rather say: "well Ruby is a good language except I never expect it to handle those problems, and as those problems become more common, perhaps Ruby should be used less," well, OK


There are a lot of things I wouldn't even try to use Ruby for, but I'd say the same thing about any particular language.

Quote
But actually, there's a twisted sort of truth there.  LISP's overly simplistic syntax, which lends itself to the sort of macro heaven that LISP users are so infatuated with, also makes it less expressive in some sense.


Agreed.  All the domain-specific languages in Lisp look the same, so it's not as much of a gain as you'd think.  I think Larry Wall was actually on the right track in trying to make Perl's syntax follow the function of the code.  I'm a big fan of non-alphanumerics in syntax because I think they can convey a lot of information in a quickly grasped visual form.

Quote
Quote
As for the evolution of Ruby, I think Matz has so far shown remarkably good taste and I trust him to handle it well.  Ruby has a mental ergonomic that just fits and could never be specified in a simple mathematical formalism and I don't think you can really appreciate it until you've actually worked in the language for a while.


  That last sentence is a particularly profound statement.  Can you really support that?


Let me put it this way - I've studied and admired many of the functional languages and been impressed with their mathematical rigor.  For a while I thought it was somehow important that the entire language could be collapsed to a formalism as simple as lambda calculus or graph rewriting.  These days I'm very skeptical of this.  My mind works in a completely different way when I'm writing Ruby code, and I can just crank out working, tested code at least 5x as fast as I ever could in anything else.  It just *fits* like a good tool.  And no, I don't think you could reduce Ruby to a simple logical formalism, although under the hood it's very Scheme-like.

Quote
Saying that Ruby has an "ergonomic" which cannot be specified in a simple mathematical formalism signifies to me that the design is simply inconsistent and not well understood.  Either that, or in fact it can be specified, but nobody has bothered to do it.  There is absolutely no need for a formal specification to conflict with an elegant "ergonomic" -- the two are not diametrically opposed.


Maybe, but empirically that's been the case in the languages I'm familiar with.  They all suffer from Scheme disease to some extent: they're not willing to pollute their model with things that make real programming easier.  That kind of stuff gives theorists and mathematicians a boner but seems pretty far removed from typical real-world tasks.

Quote
Of course it is.  But that doesn't necessarily mean it's always the right decision making process...


What does "right" even mean here?  A programming language is just a means to an end, and for people like me and Lyx, Ruby gets the job done.  I think you do have to choose between linguistic elegance and sophistication on one hand, and wide acceptance and good tool and library support on the other, and the equation keeps coming down on the latter side for me.

Anyway, I'd be curious to hear what you think of the Gabriel essay.  I first read it in the middle of my functional language phase and thought it was moronic.  I keep re-reading it and appreciating the wisdom of it more every time now.  I think the natural world mostly works this way because it's the only way to manage complexity.

Programming Languages

Reply #55
Quote
Ruby is a fine choice for RAD, and it's probably true that it would "not get in the way" of your thinking as far as rapid implementation goes. But that's also a double-edged sword: dynamic typing and the sort of flexibility offered by a "loose" language like Ruby offer many more opportunities for poor design (from a maintainability, safety, and performance p.o.v.), although kuniklo will probably disagree.

I think the usefulness of this "safety" heavily depends on the kind of project you'll do. In my case, it will be simple but extensive OO tasks. Almost everything will be about handling objects in the most literal sense. Thus, there really won't be many complex mathematical algorithms or much string manipulation going on. I'd expect 80% of the code to be just about creating/destroying objects and reading/writing their properties. Now you may think that this is one of those cases where type safety is important - however, the number of accessors really is quite limited - safety checks in those central accessors, as well as the process-management part of the program handling exceptions, should already provide more than enough safety. Maybe your thought at this point is "okay, catching and handling errors at the outer defense is all well and good, but how about providing safety already at their roots?". Well, that's undesired - what I want to achieve would be heavily limited (and therefore defeat its purpose) if the inner components of the program couldn't trust each other - (almost) unrestricted access is necessary there. Thus, there isn't even a need for a strong inner security model.
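The "safety checks in the central accessors" idea can be sketched like this (an invented example, written in Scala here for concreteness; the same shape works with attribute writers in Ruby):

```scala
// Sketch: validation lives in one central setter; everything behind it
// trusts the stored value, so the rest of the code stays unrestricted.
class GameObject(private var hp: Int) {
  def health: Int = hp
  def health_=(v: Int): Unit = {
    require(v >= 0, s"health must be >= 0, got $v")  // the single checkpoint
    hp = v
  }
}

object AccessorDemo {
  def main(args: Array[String]): Unit = {
    val g = new GameObject(10)
    g.health = 7
    println(g.health)                                // prints 7
    try g.health = -1
    catch { case _: IllegalArgumentException => println("rejected") }
  }
}
```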

I don't think that the "double-edged sword" in my case is a disadvantage - the biggest effort is getting the whole framework done in the first place. Optimizing, debugging and extending it is easy afterwards. The reason is that everything is related to/depends on everything else - so, when getting it into place the first time, the problem is that one needs to think about everything at once. Refining individual components afterwards is much easier, because then focusing is much easier.

Thus, "build it now, care about minor glitches later" in this case indeed makes sense.

Quote
On the other hand, if you already have to wait "months," I don't see why you couldn't become a pro with Scala in that time easily.

I worded that a bit unclearly. Yes, I would have to idle a few months regarding writing code. However, the architecture concept work isn't finished yet, so I can just use those months for finishing everything on paper and planning ahead. It would be *useful* if, while doing that, I could also try out some of it in code, but this is not a *requirement*.


- Lyx
I am arrogant and I can afford it because I deliver.

Programming Languages

Reply #56
Quote
Anyway, I'd be curious to hear what you think of the Gabriel essay.  I first read it in the middle of my functional language phase and thought it was moronic.  I keep re-reading it and appreciating the wisdom of it more ever time now.  I think the natural world mostly works this way because its the only way to manage complexity.


I've read Gabriel's article, and here are my thoughts on it.

First, the distinction between the "MIT approach" and the "New Jersey approach" has some validity, but not as much as is initially made out.  Gabriel basically says he's presenting it in "strawman" form, so this isn't a real big surprise, but I think it's important.

The problem, as I see it, is that simplicity and correctness are not necessarily orthogonal.  They can be in certain cases, but this is not axiomatic.  The exact same thing goes for consistency.  I don't see any particular reason why either of those are necessarily at odds with simplicity.

Completeness, now, is a different story.  It is true that completeness, almost by nature, is at odds with simplicity.  But on the other hand, proper design should hold that functionality is essentially abstracted where needed, and so it should be possible to leave out functionality without compromising simplicity, correctness, or consistency.

It may be hard to believe that about the consistency point, but it's true.  Most languages that I prefer to use have good support for domain-specific languages.  It is through this facility that they are able to sacrifice completeness without violating the other principles.  Extra functionality can be added through a domain specific language, and done in a simple, correct, and consistent way.  Haskell is the best example I know of this approach, although Fortress sounds like it's going to try to do it just as well.

Now, with regards to the "PC loser-ing" problem -- the author does have a valid point there.  But it is a special case, also.  He's talking about an OS design issue and portability.  The same needs in that context do not map directly to the programming language design space.  Portability is important, sure, but it can be solved in other ways, keeping a complex implementation behind a simple interface.  GHC, for example, can compile to C code first and solve portability that way.  We also have things like the JVM, the .NET CLR, SEAM, or LLVM to make things easier on that front as well.

In the programming language design space, there is no need to sacrifice safety in most contexts (I'm not saying everyone needs to go "purely" functional however).  It really isn't all that hard to come up with a proper design for the behavior of a language, using well-understood theory, and to apply that to meet the principles of simplicity, correctness, and consistency.  It's just that most language implementors for the particularly popular languages don't go that route.  Maybe it's because they don't like math or theory or whatever else -- I don't know.  All I do know is that I'm not the greatest at math, and I find the concepts in TaPL quite easy to understand and implement.

As for implementation simplicity from the language p.o.v., I don't really know what to think about what the author says about that.  Sure, it might be easy to make a basic C compiler, but it's almost impossible to make a really great one.  Compare that to a language like LISP or ML (the latter of which is essentially just a polymorphic, typed lambda calculus like System F with lots of sugar), where it's easy to make a basic compiler, and not impossible to make a great compiler -- the latter of which is possible because there's plenty of theory available that makes it abundantly clear just how to do it.  C and C++, on the other hand, are total messes, and it doesn't take a genius to look at the current compiler situation after all these years to understand the problem.

Now, I do think I understand where the author was coming from.  His criticism is aimed particularly at CL.  After all these years, the recent issue with Reddit has highlighted that there are serious problems in the CL community with regard to the relationship between their design, implementation, and expectations.  That I don't dispute.  What I do dispute is that such principles are generalizable.

Haskell, I don't believe, suffers from the problems the CL guys do.  Haskell has two really high quality free implementations: GHC and Hugs.  They run almost everywhere (importantly, win32 and Mac OS X where free CL implementations are often weak as I understand it).  They come with libraries that are both simple and complete.  There are lots of 3rd party libraries, and there is endless progress being made on all fronts in the Haskell development community, whether it's new features being researched, better libraries being developed, theory being conceptualized, or whatever else.

Haskell, I believe, meets the principles of simplicity (with a few exceptions that have more to do, I believe, with language education than actual complexity), correctness, consistency, and completeness.  I think it does this without falling into the trap that the author describes the "MIT approach" as affording.

I don't think there's anything "special" about Haskell in this regard either.  I think its approach is reproducible elsewhere.  Scala, I don't think, is quite there yet, but it seems to me that they are on the right track, given their different design constraints.

Now, with regards to languages like Ruby, I don't think it meets the concept of implementation simplicity that the author of the article discusses so much.  When I consider implementation simplicity, part of that includes a formal specification that makes re-implementing the language simple.  If there is a mapping between the concepts of OS portability and language portability, I think this is the important part of it.  Having a well understood specification makes it easy to create other dialects (in effect "porting" the essence of the language to another "platform"), or to create higher quality implementations of the same dialects.

Between the non-backwards-compatible syntax changes and all of the fuss over the new VM, it's clear to me that Ruby doesn't really meet the principle of implementation simplicity.  It may be the case that the actual Ruby runtime environment that currently exists is simple in its underlying code, but this isn't what really counts.

So, essentially, I don't buy the line that designing for the long haul ensures failure in the short term.  It certainly can happen if it's done incorrectly, but if anything it's a case of correlation, not causation.  The key to doing it right essentially boils down to handling abstraction properly.

In the past, it's been especially difficult handling abstraction properly because the theory was either underdeveloped or nonexistent.  Over time things have improved tremendously and the same isn't really true anymore.  Now, for example, we have incredibly rich and powerful polymorphic type systems with well understood algorithms for type inference, as well as a much better understanding of problems relating to concurrency and distribution (and frameworks for handling them as well), whereas none of that was available for the first languages.  And those are only 2 examples of many.
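As a small illustration of the inference point (my own example, not from the thread): in Scala, whose local type inference covers most everyday code, nearly all annotations can be omitted while everything remains fully statically checked:

```scala
// Sketch: no annotations on the locals below, yet every value has a
// precise static type, checked at compile time.
object InferenceDemo {
  def main(args: Array[String]): Unit = {
    val xs      = List(1, 2, 3)              // inferred: List[Int]
    val doubled = xs.map(_ * 2)              // inferred: List[Int]
    val labels  = doubled.map(n => s"n=$n")  // inferred: List[String]
    println(labels.mkString(","))            // n=2,n=4,n=6
  }
}
```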

Aside from the obvious problems in early languages, you might say that early Haskell (or Miranda or Gofer) was a failure.  Maybe (in the popular sense, probably even, but certainly not to the group of people it was intended for, as there is no other language with as rich a research community that I know of).  Certainly if Haskell arrived on the scene today without Monads, there would be deep problems preventing its success.  But nowadays we have all of these tools in the form of theoretical frameworks developed over the years since the implementations of the first serious programming languages.  It makes sense for implementors of other languages (like Ruby) to take advantage of it all.  Doing so is not going to signal their failure at all -- in fact, I think quite the opposite.

Maybe something like Haskell or Scala isn't as viral as C or Ruby either, but in the end I don't think this is because of the "MIT approach" or the "New Jersey Approach," but most likely because of more subtle issues like language education (and CS education in general).

Programming Languages

Reply #57
Quote
Maybe something like Haskell or Scala isn't as viral as C or Ruby either, but in the end I don't think this is because of the "MIT approach" or the "New Jersey Approach," but most likely because of more subtle issues like language education (and CS education in general).


I think the analogy actually works pretty well, but I guess when we're speaking this abstractly it's going to be largely a matter of taste.

So, more concretely, one area where statically typed languages seem to fall really short is in metaprogramming - introspection, dynamic code generation, etc.  How do you accomplish these kinds of things in a language like Haskell?  For example, Rails dynamically adds methods to its O/R mapping classes based on the structure of the database tables.  How would you handle this in a statically typed language?

Programming Languages

Reply #58
Quote
Quote

Maybe something like Haskell or Scala isn't as viral as C or Ruby either, but in the end I don't think this is because of the "MIT approach" or the "New Jersey Approach," but most likely because of more subtle issues like language education (and CS education in general).


I think the analogy actually works pretty well, but I guess when we're speaking this abstractly it's going to be largely a matter of taste.


I'm not sure if you mean the "MIT approach" vs the "New Jersey Approach" is an analogy that works well when applied to educational problems, or if you're referring to something else I said...

In the case of the former, well, I don't really know what to say to that.  On the one hand, there is a place for "rough and ready" education styles for people that need to do something simple like tinker with VB, or know some very basic C or something like that.  On the other hand, too often those same people try to use those same languages and approaches for something well beyond the scope for which they are fit.  It's all about the right tool for the right job, and that philosophy should be applied from education all the way down to programming language choice and even computation style.

I don't agree with the idea that "one size fits all" here (although some can come close), and I don't agree with the concept that languages should be made to approximate natural language or to be "intuitive" to Joe Sixpack.  Why?  Because Joe Sixpack's intuitions are wrong more often than not, whether in the realm of design or something more concrete like the implementation of a particular algorithm.

Computer science was, and still should be a mathematical enterprise, as far as I'm concerned.  Engineering is something else, and while it's laudable in its own right, people too often associate CS nowadays directly with the latter and view the former as some sort of unnecessary legacy baggage.

Quote
So, more concretely, one area where statically typed languages seem to fall really short is in metaprogramming - introspection, dynamic code generation, etc.  How do you accomplish these kinds of things in a language like Haskell?


First of all, very, very few languages have metaprogramming facilities that are on the level of LISP, even other dynamically typed languages.

Static languages have always had good properties for metaprogramming, although few of them expose this functionality in a way familiar to users of dynamic languages.  But as far back as the original usage of ML, the idea has been for these languages to support such properties.

Going back to metaprogramming on the level of LISP, Haskell in fact has a very powerful answer to that in the form of Template Haskell (http://www.haskell.org/th/); there's also MetaOCaml for OCaml, but I've not used that one yet.  It's just as powerful, if not more so, than LISP macros.  You get code generation, access to the abstract syntax, just about whatever you need.

Other static languages whose metaprogramming facilities are slightly less powerful are C++ and Scala.  Both use the idea of parametric polymorphism and objects to basically create a new AST from an expression by returning nested objects through the use of overloaded operators.  Both of them use this for domain-specific-language style programming, and C++ in particular can use it for other things like conditional compilation (which doesn't have much of an analog in the VM-targeting Scala anyway).

As for introspection, besides Template Haskell, Haskell has Generics, which I linked you to a paper about ("Scrap your boilerplate").  You can also approximate "dynamic code creation" through rather basic uses of higher-order programming with ADTs.  In both Haskell and Scala this is trivially easy because of first-class functions.  In C++ it's still not that hard, but a little uglier since function pointers suck (you usually end up using function objects instead).  But it should be noted that introspection doesn't have a lot of use in a static language, because if you design your solution properly, you don't need to "introspect" anyway.  You already know what kind of values an object or something like it is going to contain, and you specify and deal with that through ADTs.

However, in the case of Haskell and Scala, I'd say that dynamic code generation style programming is in fact better than what you get with a dynamic language because by defining your ADT beforehand, you basically setup a constraint which proves your code generated at compile time will not go wrong either.  Both that and pattern matching make it dead simple.
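To make the ADT-plus-pattern-matching point concrete, here's a minimal sketch (the `Expr` type and its constructors are invented for illustration, not taken from any post in this thread): anything we "generate" is a value of the ADT, so it is well-formed by construction, and the evaluator consumes it by exhaustive pattern matching.

```haskell
-- Hypothetical sketch: a tiny expression ADT.  Any code we "generate"
-- is a value of this type, so the compiler guarantees it is well-formed
-- by construction; there is no way to build an Expr that the evaluator
-- cannot handle.
data Expr
  = Lit Int
  | Add Expr Expr
  | Mul Expr Expr
  deriving (Show, Eq)

-- Pattern matching consumes the generated structure exhaustively.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

-- "Generating" code is just building values of the ADT.
square :: Int -> Expr
square n = Mul (Lit n) (Lit n)
```

Because the constraint lives in the type, anything built this way is checked at compile time rather than discovered broken at run time.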

Aside from what I've already said, there's more related material throughout the Haskell Wiki at pages like this one.

Quote
For example, Rails dynamically adds methods to its O/R mapping classes based on the structure of the database tables.  How would you handle this in a statically typed language?


Haskell actually has things like eval now, and can be used for dynamically loaded plugins (see: hs-plugins), so one way is simply to create a list or tuple or record or some other type which points to the structure of the database tables and evaluates the contents they represent.  "lambdabot" on #haskell, for example, can both execute and give the type of arbitrary Haskell code printed into the channel (e.g., @eval let fix f = f (fix f) in fix id).  It does this by using this dynamic approach.  This should give an idea of how you might be able to do something similar for the database problem.

Another way is to specify a domain-specific language for which the database contains snippets, and then embed an evaluator in a monad in Haskell.  That might sound complex, but it's rather easy because of the way Haskell is already designed.  There are plenty of other approaches as well, but those are two that I think are more obvious.

This is assuming, by the way, that you mean that somehow the contents of the tables themselves contain something as rich as actual code snippets.  If not, then a much more basic approach is possible.

Other static languages I haven't talked much about yet, like OCaml, have a few other options as well, whether through serialization and unserialization of code (coupled with continuations maybe), or even support for dynamic types in the type system.  OCaml has early support for that with Dynaml (Haskell has it as well I believe, although with all the other ways I already outlined, it's not really needed).  AliceML does something sort of like this with its package system, which allows it to load arbitrary code across a network for things like computation servers.

So basically, I think it's wrong to say that static languages fall very short on metaprogramming facilities.  I've just outlined a wide variety of different ways that various static languages handle this problem quite well.

Programming Languages

Reply #59
Quote
Computer science was, and still should be a mathematical enterprise, as far as I'm concerned.  Engineering is something else, and while it's laudable in its own right, people too often associate CS nowadays directly with the latter and view the former as some sort of unnecessary legacy baggage.


I agree.  I think the problems arise when people from a CS background start suggesting that their tools can magically solve engineering problems (a common claim on the functional language group).  If anyone wants to claim that their language or paradigm has significant advantages in the field then the burden falls on them to prove their claim.

Quote
Haskell actually has things like eval now, and can be used for dynamically loaded plugins (see: hs-plugins), so one way is simply to create a list or tuple or record or some other type which points to the structure of the database tables and evaluates the contents they represent.  "lambdabot" on #haskell for example, can both execute and give the type of arbitrary haskell code printed into the channel (e.g., @eval let fix f = f (fix f) in fix id).  It does this by using this dynamic approach.  This should give an idea of how you might be able to do something similar for the database problem.

Another way is to specify a domain-specific language for which the database contains snippets, and then embed an evaluator in a monad in Haskell.  That might sound complex, but it's rather easy because of the way Haskell is already designed.  There are plenty of other approaches as well, but those are two that I think are more obvious.

This is assuming, by the way, that you mean that somehow the contents of the tables themselves contain something as rich as actual code snippets.  If not, then a much more basic approach is possible.


I'm actually talking about the simple case of mapping database columns to attributes on objects, not executing arbitrary snippets of text as code.  How would you typecheck this kind of code?  For instance, in a typical rails program, you have lots of code manipulating attributes on objects that represent rows in database tables.  Since you don't know until runtime if all these attributes will actually be present, how do you statically check the correctness of the code?

Programming Languages

Reply #60
Quote
I agree.  I think the problems arise when people from a CS background start suggesting that their tools can magically solve engineering problems (a common claim on the functional language group).  If anyone wants to claim that their language or paradigm has significant advantages in the field then the burden falls on them to prove their claim.


Concurrency and distribution are two very difficult engineering problems, and functional languages lead the way here by far.  As far as I'm concerned, they've already proven their claims both mathematically and empirically.  People seem not to want to listen though, because it's not done in an imperative, OO way.

If you don't believe me, you're welcome to check out some of the state of the art languages and see how powerful they are and easy to use for these types of problems.  Erlang, Oz, Scala, or Alice are all good choices to try for something like that.

Quote
I'm actually talking about the simple case of mapping database columns to attributes on objects, not executing arbitrary snippets of text as code.  How would you typecheck this kind of code?  For instance, in a typical rails program, you have lots of code manipulating attributes on objects that represent rows in database tables.  Since you don't know until runtime if all these attributes will actually be present, how do you statically check the correctness of the code?


Hrmm.. I'm a little bit confused by this.  I assumed you meant the much more complex problem of dealing with actual code because the simpler case you described is a pretty basic problem.  I mean no offense by this, but I have to wonder: have you really actually done much functional programming?  You've implied you have, but I think the solution to this sort of problem is kind of obvious if you have.

As I already said, you basically use ADTs.

I hope you can get the gist of that.  If you're not dealing with arbitrary code snippets, that implies you know that your properties fall within a finite set, and that finite set can be specified as a variant ADT like TableProperty.  What you'd do then is read the database contents into a string, then parse the string into various properties.  I gave the example of returning a list of properties (e.g., [TableProperty]), but in reality you'd have to have it be something like "IO [TableProperty]" to keep it safe, unless you just didn't care and wanted to use unsafePerformIO.
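A minimal sketch of what that might look like.  The constructor names and the `key=value` row format here are invented for illustration; a real version would read the raw row in IO and return `IO [TableProperty]`.

```haskell
-- Hypothetical sketch of the TableProperty idea: the finite set of
-- properties a table row may carry, expressed as a variant ADT.
data TableProperty
  = Login String
  | Phone String
  | Building String
  deriving (Show, Eq)

-- Parse one "key=value" fragment into a property; unknown keys fail
-- safely with Nothing instead of blowing up at run time.
parseProperty :: String -> Maybe TableProperty
parseProperty s =
  case break (== '=') s of
    ("login",    '=':v) -> Just (Login v)
    ("phone",    '=':v) -> Just (Phone v)
    ("building", '=':v) -> Just (Building v)
    _                   -> Nothing

-- Pure parsing of a whole row; fetching the raw string from the
-- database would live in IO, keeping this part safe and testable.
parseRow :: String -> [TableProperty]
parseRow raw = [p | Just p <- map parseProperty (words raw)]
```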

Maybe I'm still misunderstanding your problem and there's something a lot more difficult about it that I just don't get.  But as it stands, it seems pretty simple to deal with to me.  By making your ADT recursive, you can also deal with just about any type of nested properties you might encounter.  By then, it naturally starts to look a lot like a domain-specific language with an abstract syntax.

If you're referring essentially just to the basic problem of how to deal with type safety with a computation that might fail (i.e., a property you expect not existing), well that's what monads are for.  For simple, non-IO cases you usually use Maybe (or if you want fancy non-deterministic computational support you can use lists, which are also monadic in haskell by default).  For example, a function that might fail would have a signature like this: elemExistsInList :: (a -> Bool) -> [a] -> Maybe a.  In the IO monad, you can do basically the same thing.  And you don't actually need full blown monad support to handle this problem.  OCaml can use other similar, but ultimately less general techniques to do the exact same thing.
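For reference, the `elemExistsInList` signature quoted above can be implemented in a few lines (the standard library already provides this function as `Data.List.find`):

```haskell
-- The signature from the post, implemented directly.  A failed search
-- is Nothing; a successful one is Just the matching element -- the
-- possibility of failure is part of the type.
elemExistsInList :: (a -> Bool) -> [a] -> Maybe a
elemExistsInList _ []     = Nothing
elemExistsInList p (x:xs)
  | p x       = Just x
  | otherwise = elemExistsInList p xs
```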

Programming Languages

Reply #61
Quote
Concurrency and distribution are two very difficult engineering problems, and functional languages lead the way here by far.  As far as I'm concerned, they've already proven their claims both mathematically and empirically.  People seem not to want to listen though, because it's not done in an imperative, OO way.

If you don't believe me, you're welcome to check out some of the state of the art languages and see how powerful they are and easy to use for these types of problems.  Erlang, Oz, Scala, or Alice are all good choices to try for something like that.


  I turned down a job doing Erlang for Bluetail because I didn't want to go live in Sweden.  The Erlang guys have done a very good job of solving a particularly difficult engineering problem with a custom functional language, and non-mutability certainly plays a part.  Interestingly, they also feel *very* strongly that Erlang's dynamic typing was a crucial factor in both the stability and durability of their running applications and in their ability to bring their engineering staff up to a productive level in the new language quickly.

Quote
Hrmm.. I'm a little bit confused by this.  I assumed you meant the much more complex problem of dealing with actual code because the simpler case you described is a pretty basic problem.  I mean no offense by this, but I have to wonder: have you really actually done much functional programming?  You've implied you have, but I think the solution to this sort of problem is kind of obvious if you have.

As I already said, you basically use ADTs.


I'm talking about a full blown O/R mapping layer like Hibernate for Java.  It maps table rows into first class objects with arbitrary attributes that then get passed around like any other Ruby object except mutations are passed transparently back on to the database.  For instance, if I have a user table like so:

id | login | phone | building
....
....
....

I can have code that uses a "User" object like this:

Code: [Select]
u = User.find(1)
u.login = 'foo'
u.phone = 'x3352'
u.save


How would you typecheck this if you can't tell if u.login is valid until runtime?

Quote
If you're referring essentially just to the basic problem of how to deal with type safety with a computation that might fail (i.e., a property you expect not existing), well that's what monads are for.  For simple, non-IO cases you usually use Maybe (or if you want fancy non-deterministic computational support you can use lists, which are also monadic in haskell by default).  For example, a function that might fail would have a signature like this: elemExistsInList :: (a -> Bool) -> [a] -> Maybe a.  In the IO monad, you can do basically the same thing.  And you don't actually need full blown monad support to handle this problem.  OCaml can use other similar, but ultimately less general techniques to do the exact same thing.


So what do you do if 95% of your code can "fail" like this?  What is static checking buying you if every function has to be decorated and checked like this?

Programming Languages

Reply #62
Quote
I'm talking about a full blown O/R mapping layer like Hibernate for Java.  It maps table rows into first class objects with arbitrary attributes that then get passed around like any other Ruby object except mutations are passed transparently back on to the database.  For instance, if I have a user table like so:

id | login | phone | building
....
....
....

I can have code that uses a "User" object like this:

Code: [Select]
u = User.find(1)
u.login = 'foo'
u.phone = 'x3352'
u.save


How would you typecheck this if you can't tell if u.login is valid until runtime?


OK.  Now your problem is a little more clear to me.  But as I already said, you basically use Monads, or in simpler terms, you "tag" the return type of your functions (or the type of your objects, or whatever -- depending on how the system is being used) with another type specifying that such a computation is liable to fail.  This makes the semantics of the computation clear to the type checker.  It doesn't actually know that "Maybe a" is a type that might represent a failed computation of 'a', but it doesn't care -- all that is important to it is that you have abstracted this important semantic detail into the type.

Since Haskell doesn't have "objects," a concrete implementation of this would look very similar to the example I already gave, except that if you wanted to make it fit a little more with the O/R concept, you could return records with labeled fields representing the retrieved properties.  Since the properties might not exist, a field representing a property 'a' could have a type "Maybe a", or you could move the abstraction up the chain a little and enclose the type of the object in some similar abstraction.  There are many choices, and different ones would be appropriate for different types of usage.
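A sketch of that record idea, with invented field names matching the earlier `User` table example: each possibly-absent column gets a `Maybe`-typed field, so code touching it is forced to say what happens when it is missing.

```haskell
-- Hypothetical sketch of the record-based O/R idea: columns that may be
-- absent are Maybe-typed, so "might not exist" is visible in the type
-- rather than discovered at run time.
data User = User
  { userId   :: Int
  , login    :: Maybe String
  , phone    :: Maybe String
  , building :: Maybe String
  } deriving Show

-- The type checker will not let this function ignore the missing case;
-- 'maybe' supplies the default for Nothing.
displayPhone :: User -> String
displayPhone u = maybe "<no phone>" id (phone u)
```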

Quote
Quote
If you're referring essentially just to the basic problem of how to deal with type safety with a computation that might fail (i.e., a property you expect not existing), well that's what monads are for.  For simple, non-IO cases you usually use Maybe (or if you want fancy non-deterministic computational support you can use lists, which are also monadic in haskell by default).  For example, a function that might fail would have a signature like this: elemExistsInList :: (a -> Bool) -> [a] -> Maybe a.  In the IO monad, you can do basically the same thing.  And you don't actually need full blown monad support to handle this problem.  OCaml can use other similar, but ultimately less general techniques to do the exact same thing.


So what do you do if 95% of your code can "fail" like this?  What is static checking buying you if every function has to be decorated and checked like this?


You don't decorate the functions manually usually because the typechecker will handle this for you.  You might need a few annotations to develop the original semantic abstraction, but for the most part you can leave annotations off 95% of everything else and the typechecker will just "get it."

The huge advantage to making subtle semantic details like failure-liable computations explicit in the types is that the type system will force you to not do something completely stupid like not checking for failure and then having the system blow up on you when you least expect it.  Having a strong static type system in this situation is desirable because it alleviates you from having to manually insert exceptions all over the place and cross your fingers, hoping you've managed to catch everything.

With the static approach, your code simply won't compile if it's possible you're doing something dangerous, like not acknowledging the fact that a computation might fail somewhere throughout an expression.  Since you're using ADTs, dealing with these situations is easy: you just pattern match on the result.

You can also tell the compiler to check and make sure all your pattern matching clauses are exhaustive, which adds another layer of safety.

And if you want something more powerful than just basic pattern matching, you can use an Error monad, which basically gives you exceptions with almost the generality of continuations (or if you need that level, you can just use the continuation monad and even get restartable errors).

*shrug* Maybe at the end of the day we just have very different ideas about how to solve problems.  To me, it seems having a type system which allows one to make subtle semantic details about unsafe computations explicit in the type system, and then using that to make code very safe, is a good thing and not a hindrance.  I have a hard time seeing how the features I outlined above are not desirable for these types of problems.

Programming Languages

Reply #63
Quote
Example:
Code: [Select]
getSomeDbaseProperty :: DbaseHandle -> Maybe TableProperty
getSomeDbaseProperty handle =
 case dosomething handle of
   Nothing -> ... {- failure case -}
   Just property -> {- success case -}


You can also tell the compiler to check and make sure all your pattern matching clauses are exhaustive, which adds another layer of safety.

And if you want something more powerful than just basic pattern matching, you can use an Error monad which basically gives you exceptions with the level of generality of continuations almost (or if you need that level, you can just use the continuation monad and get restartable errors even).

*shrug* Maybe at the end of the day we just have very different ideas about how to solve problems.  To me, it seems having a type system which allows one to make subtle semantic details about unsafe computations explicit in the type system, and then using that to make code very safe, is a good thing and not a hindrance.  I have a hard time seeing how the features I outlined above are not desirable for these types of problems.


I've never seen one person on either side of this argument persuaded in 100s of threads in far more detail than this one, so I guess it's unlikely we'll be the first.  Since the standard rebuttal to your standard defense of static typing above answers your points, I'll just roll it out again:

1. you have to unit test everything anyway since static typing can't catch all kinds of important errors
2. your code will work without checking for these maybe | maybe not types all over the place because you've unit tested it and all the extra checking code brings down readability.  it's like checking for a null pointer every time you access *anything*
3. type inference and static type safety add a layer of semantic complexity to code that a lot of programmers find confusing or at least distracting

Like I said, I used to be pretty excited by the idea of static type checking with type inference, but I've personally grown to dislike it after working in languages with it and without it.  Maybe we can bet each other a six pack that Haskell, Scala, etc. remain interesting research languages and not much more in 2015?  My other prediction is that the dynamic languages gradually start to resemble Dylan more, with optional static type declarations as type assertions and aids to the compiler.

Programming Languages

Reply #64
Quote
I've never seen one person on either side of this argument persuaded in 100s of threads in far more detail than this one, so I guess it's unlikely we'll be the first.  Since the standard rebuttal to your standard defense of static typing above answers your points, I'll just roll it out again:


Ok

Quote
1. you have to unit test everything anyway since static typing can't catch all kinds of important errors


Here's the difference.  It's unlikely you can develop unit tests to handle all combinations of cases, just through intuition.  On the other hand, by having the semantics of failure-liable computations explicit in the types, the compiler can prove (that's a powerful idea if you think about it) that in certain ways your code is simply not going to fail.

As for the kinds of "important errors" the type system isn't going to catch, they aren't going to be errors related to failed computations of the sort you've been describing.  The types of errors the type checker won't catch are incorrect algorithms, or things like checking the wrong fields perhaps (although in some cases it will catch that too, if the fields produce unexpected type mismatches).

So far, I've seen you mention that the type checker won't catch many types of errors, but I've not seen you give explicit examples.  Since I deal with type checkers all the time, I'll assume you're thinking of the same kinds of problems I am, and as I just pointed out, failure-liable computations (the kind important in your O/R example) are exactly the type of errors they do catch.

Quote
2. your code will work without checking for these maybe | maybe not types all over the place because you've unit tested it and all the extra checking code brings down readability.  it's like checking for a null pointer every time you access *anything*


Wrong.  Because of the way monads work, the failure checking is not made explicit in the syntax you are using.

This is difficult to understand, and difficult for me to explain briefly.  But I'll try.  Consider this example:

Code: [Select]
niftyFunction =
 do obj <- retrieveObjFromDbase dbase
    building <- building obj
    if building == "someSpecialPlace" then ... else ...


Now what you don't see is important.  Each line of code in that monad represents the result of a computation bound to a variable for another function.  Depending on how you design your monad, a failure by any one of those functions (say if the obj cannot be retrieved, or the building doesn't exist) will (or can, depending on how you design it) "fold" all the way out and cause the entire computation to fail in a safe way, without you ever having to make checks anywhere except for a few very key places (the "boundaries" of the monad).

This is because the "binder" function that operates on each line, binding the variables to the functions as I described, is the one that knows how to handle the failed computations, so your other functions don't have to.  If it detects a failed computation, it triggers a particular process, which might be like the "fold" that I mentioned.  But essentially, the point is that the effect of one computation will cascade across another, and by using a monad you don't have to interact with this process directly in the explicit syntax.

And still through all this, you've got a provably safe handling of failure-liable computations.  It's pretty cool really.  If you'd like to learn more, there are some great explanations and tutorials here.
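Here's a runnable, simplified version of the `niftyFunction` idea, using the `Maybe` monad and plain association lists as a stand-in for the database (the names and data layout are invented): no failure check appears anywhere in the do-block, yet a failed lookup at any step makes the whole computation `Nothing`.

```haskell
-- A toy "database": association lists standing in for the real thing.
type Row   = [(String, String)]
type Dbase = [(String, Row)]

-- Mirrors niftyFunction: each <- binds the result of a lookup that may
-- fail.  If any step is Nothing, the whole do-block is Nothing; the
-- Maybe monad's bind performs the check, so none appears in the syntax.
nifty :: Dbase -> Maybe String
nifty dbase = do
  obj <- lookup "user1" dbase
  b   <- lookup "building" obj
  if b == "someSpecialPlace" then Just "special" else Just "ordinary"
```

The "fold all the way out" behavior described above is exactly what `Maybe`'s bind implements: once any step yields `Nothing`, the rest of the block is skipped.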

Quote
3. type inference and static type safety add a layer of semantic complexity to code that a lot of programmers find confusing or at least distracting


Sure.  A lot of programmers find databases confusing too.  Or memory.  Or objects.  Or ... well, damn near anything non-trivial about programming.  But they can learn, usually, and type safety isn't really an exception.  It's just alien to people used to the "C" way, or nowadays maybe the "Python" or "Ruby" way.

Quote
Like I said, I used to be pretty excited by the idea of static type checking with type inference but I've personally grown to dislike it after working in languages with it and without it.   Maybe we can bet each other a six pack Haskell, Scala etc remain interesting research languages and not much more in 2015?  My other prediction is that the dynamic languages gradually start to resemble Dylan more with optional static type declarations as type assertions and aids to the compiler.


You're probably right that Haskell and Scala aren't going to be mainstream languages ever.  But that doesn't mean that static languages can't do metaprogramming.  It also doesn't mean they aren't great choices for many types of things people think only dynamic languages can do.

I really don't know what the future of programming is going to look like, so I wouldn't make bets too heavily either way.  I do hope that there's still a place for what I feel is a good approach that so many people just seem to not understand for whatever reason, though.

Programming Languages

Reply #65
Quote
As for the kinds of "important errors" the type system isn't going to catch, they aren't going to be errors related to failed computations of the sort you've been describing.  The types of errors the type checker won't catch is incorrect algorithms, or doing things like checking the wrong fields perhaps (although in some cases it will catch it if the fields produce unexpected type mismatches).


Right, but the tests you write to catch those problems will also catch the type mistakes, because they won't get to the point of exercising the algorithm if they don't.

Quote
Wrong.  Because of the way monads work, the failure checking is not made explicit in the syntax you are using.


I'll admit to still not really getting monads, and there's definitely some black magic possible there.  I guess I'll just say that the average ruby programmer understands the Ruby object and type system and basic metaprogramming tricks because it's really pretty simple.  I've seen a lot of very sharp and dedicated people struggle with monads.

Quote
I really don't know what the future of programming is going to look like, so I wouldn't make bets too heavily either way.  I do hope that there's still a place for what I feel is a good approach that so many people just seem to not understand for whatever reason, though.


It's certainly a much more interesting time to be a programmer than it has been for many years.  With so many applications moving on to the web there's an opportunity for a lot of new approaches.  It will be interesting to see how it all shakes out.  My guess is that the languages people are using 10 years from now will be hybrids of a lot of these ideas.

Programming Languages

Reply #66
Quote
Quote
As for the kinds of "important errors" the type system isn't going to catch, they aren't going to be errors related to failed computations of the sort you've been describing.  The types of errors the type checker won't catch is incorrect algorithms, or doing things like checking the wrong fields perhaps (although in some cases it will catch it if the fields produce unexpected type mismatches).


Right, but the tests you write to catch those problems will also catch the type mistakes, because they won't get to the point of exercising the algorithm if they don't.


To some extent, yes.  But in the case of something like failure-liable computations, this isn't true.  You need to add exception handling to the tests because, since there's no semantic information at the type level about the behavior of the computation, nothing related to it will enter into the algorithm that might cause it to fail, unless you actually have a failure somewhere else first.  And that may not happen at unit-test time, for whatever reason, whether it's related to the comprehensiveness of your tests, or to network or other hardware-related problems.

The gist of it is that unit tests can get you some of the safety of a type system, but they cover a different type of problem and are not suited to subtle semantic details.  As I mentioned before, though, there's nothing wrong with having a strong static system and unit tests, and then getting the best of both worlds.  You can't do that so easily from the other side.

Quote
Quote
Wrong.  Because of the way monads work, the failure checking is not made explicit in the syntax you are using.


I'll admit to still not really getting monads, and there's definitely some black magic possible there.  I guess I'll just say that the average ruby programmer understands the Ruby object and type system and basic metaprogramming tricks because it's really pretty simple.  I've seen a lot of very sharp and dedicated people struggle with monads.


I've seen the same thing.  I don't know what it is, I guess.  Monads are like continuations: so simple and general they seem hard.  Then you realize they are easy, and all of a sudden you sit there wondering what all the fuss was about.

On a more serious note, the big problem with monads is that most Haskell people are not good at explaining them.  That tutorial I linked to is great though, and the best source I've seen on them.

Quote
It will be interesting to see how it all shakes out.  My guess is that the languages people are using 10 years from now will be hybrids of a lot of these ideas.


Yeah, that's probably the only certainty.

Programming Languages

Reply #67
Just to give a more in depth example of how both type annotations and explicit error checking are unnecessary for most code dealing with the kinds of problems I was addressing, I've uploaded an example of some code I wrote earlier today.

It's a parser for the simply typed lambda calculus, with many extensions.  The parser is not written using a parser generator, but instead uses parser combinators.  Basically I just check for certain string combinations in sequence.  This is exactly the kind of thing you usually have to use a bunch of checks for, whether in the form of standard if-else clauses, or exceptions.  However, there's almost no error checking which is explicitly apparent in the code.  In fact, the one case it is present is related not to the error-prone string processing at all, but instead from a check to see whether a particular element of a list was found to exist or not.  In fact, even that case could have been "hidden" from the explicit syntax, but I felt it unnecessary.

Despite the highly error prone process of dealing with a ton of different possible string combinations in different orders at different times, the entire thing is type safe.

It's also a nice example of why domain-specific languages are so cool: it uses one to represent the parser combinators in a way that maps rather closely to something like EBNF.

Anyway, here's the example.
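The upload itself isn't reproduced here, but a rough illustration of the combinator idea can be sketched in a few lines of Ruby (rather than the statically typed language of the original, and much simplified). A parser is just a function from an input string to a result plus leftover input, and the grammar definition ends up reading close to EBNF. All names in this sketch are made up for illustration.

```ruby
# Toy parser combinators. A "parser" is a lambda that takes an input
# string and returns [parsed_value, remaining_input], or nil on failure.

# Match an exact literal string.
def lit(s)
  ->(input) { input.start_with?(s) ? [s, input[s.length..-1]] : nil }
end

# Sequencing: run parsers in order, collecting their results.
def seq(*parsers)
  ->(input) {
    results, rest = [], input
    parsers.each do |p|
      r = p.call(rest)
      return nil if r.nil?
      results << r[0]
      rest = r[1]
    end
    [results, rest]
  }
end

# Alternation: return the result of the first parser that succeeds.
def alt(*parsers)
  ->(input) {
    parsers.each do |p|
      r = p.call(input)
      return r if r
    end
    nil
  }
end

# Transform a successful parse result.
def fmap(p, &f)
  ->(input) {
    r = p.call(input)
    r && [f.call(r[0]), r[1]]
  }
end

# A grammar that reads almost like EBNF:
#   expr ::= "if " expr " then " expr " else " expr | "true" | "false"
# (wrapped in a lambda so the recursive reference is evaluated lazily)
def expr
  ->(input) {
    alt(
      fmap(seq(lit("if "), expr, lit(" then "), expr, lit(" else "), expr)) { |r|
        [:if, r[1], r[3], r[5]]
      },
      fmap(lit("true"))  { :true },
      fmap(lit("false")) { :false }
    ).call(input)
  }
end
```

A failed match simply returns nil and is absorbed by `alt`, so no if-else error handling ever appears at the grammar level; in a typed functional language the same trick additionally gets checked by the type system, which is the point of the post above.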

Programming Languages

Reply #68
Personally, my impression about language-trends is:

now (ongoing)
- merging of multiple paradigms and features

soon (as in starting in a few years)
- simplification. Basically, polishing of the confusing mess created by the mergers. How? More merging, but of the previously included features, to bring down the number of too-similar features.

future (as in 8-14 years)
- languages have become very simple & intuitive, yet powerful. However, a need for safety and "implicit" distinction arises. Thus, type-safety and similar safety features will be reimplemented, but in a different way - probably less mandatory, and instead more like user annotations built into the syntax. In essence, moving the responsibility for safety and correctness from the language to the user, yet providing built-in features to make this an easy task.

Thus, i don't think that type-safety will go away in the long run. But i do think that the way it is done today will slowly go away.

With my limited experience, i could of course be very wrong. The above is just what my intuition says after having looked around a bit regarding the various existing and newly-created languages.

- Lyx
I am arrogant and I can afford it because I deliver.

Programming Languages

Reply #69
I always thought C# would come and go. I guess I was wrong. Classic ASP & VB.NET are what I mess with 90% of the time now.

A trend I see is simplification in IDEs also. Drop this object on the form, the IDE creates all the code for you. Sometimes the IDEs do more "work" than the programmer himself.

Programming Languages

Reply #70
Quote
Quote
Thought I'd add a little something more to this thread...

It's kind of like 20th century orchestral music.  Schoenberg and Xenakis' music sounds great from a theoretical point of view, but nobody listens to it and their most persuasive ideas have been cherry-picked and reworked into something more palatable to the general public.  So maybe the functional programmers are the 12 tone serialists of the software world?


Actually, I enjoy listening to Xenakis (and other modern & experimental composers) and admit that I don't follow the theory behind the compositions. To me, it's incredibly psychedelic-sounding music. But I admit my tastes aren't mainstream, and I don't listen to bizarre experimental music all the time, only when it seems right.

As for whether the analogy is a good one, I don't know. I always thought the only reason classical music went weird is that there was nowhere else for it to go. It was a logical evolution in the context of history. Popular tastes in music were changing at the turn of the century from the classical paradigm to more folk-based forms: Tin Pan Alley tunes, ragtime, folk/country, etc. In the early 20th century, how much of these "new" forms of music were inspired by (cherry-picked from) classical compositions? I don't know.

Programming Languages

Reply #71
Bringing this thread back from the dead...

Here's a very interesting presentation from Tim Sweeney, Unreal's main programmer, on the future of programming languages for game development:

http://www.cs.princeton.edu/~dpw/popl/06/Tim-POPL.ppt

Since game developers tend to be a few years ahead of the mainstream in terms of technologies and challenges, it's a good peek into the future.  He seems to think that a functional, statically typed approach could be the solution to the concurrency problems new architectures pose.  Curiously, he's a big fan of lazy evaluation but not type inference.

Programming Languages

Reply #72
A few general observations about the slides:
Two points caught my attention the most. The first is modularity of objects. I've noticed this with my own project - the future path inevitably leads to highly modular object-oriented programming with high concurrency (everything can affect everything). Creating and handling existing objects is easy, but this controlled chaos makes it very difficult to *undo* changes - in a plugin environment, this means that it becomes easy to add plugins, but difficult to "uninstall" them. Either you save more object properties in the modules themselves, making heavy sacrifices in flexibility, or you store every change ever made to an object - which is impossible because of performance and storage-space issues. The irony is that "reality" faces the same problem - in the real world, anything can affect anything, and the system works "forwards" - but it's almost impossible to undo changes without leaving traces behind. Thus, one of the main challenges i see in future game programming is how to handle plugin uninstalls in a highly modular environment.

The other thing, which i knew before but noticed again, is the insanely high burden placed on development resources by "eye candy" - mainly 3D graphics. And the trend is for this burden to become worse and worse. Development resources, however, are not infinite - so at some point this will increasingly impact gameplay design, especially content variety (the cost of adding a feature to the game keeps rising). Personally, i think this trend will crash and someday result in a counter-trend towards gameplay- and interaction-focused game design. This wouldn't be the first time something like this happened - it has happened in almost predictable cycles throughout history, for example with music and movies (the bigger-better-louder style is not a new phenomenon - it appeared before, went away, and returned).

One thing in which i agree with the slides, is that the only solution to rapid-fire concurrency with ten-thousands of objects, is atomic transactions.
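The retry-on-conflict idea behind atomic transactions can be sketched in a few lines of Ruby. This is optimistic concurrency over a single cell - not a real software transactional memory, and all names (`AtomicCell`, `try_commit`) are invented for the sketch - but it shows the core mechanism: compute off to the side, then commit only if nothing changed underneath you, otherwise rerun.

```ruby
# Toy optimistic-concurrency cell: take a snapshot, compute outside
# any lock, then commit only if no one else wrote in the meantime;
# on conflict, retry. Real STM generalises this to whole
# transactions touching many objects.
class AtomicCell
  def initialize(value)
    @value   = value
    @version = 0
    @lock    = Mutex.new
  end

  def snapshot
    @lock.synchronize { [@value, @version] }
  end

  # Commit succeeds only if the cell is unchanged since the snapshot.
  def try_commit(expected_version, new_value)
    @lock.synchronize do
      return false unless @version == expected_version
      @value    = new_value
      @version += 1
      true
    end
  end

  # The "transaction": re-run the block until it commits cleanly.
  def update
    loop do
      value, version = snapshot
      return if try_commit(version, yield(value))
    end
  end

  def value
    @lock.synchronize { @value }
  end
end

cell = AtomicCell.new(0)
threads = 4.times.map do
  Thread.new { 1000.times { cell.update { |v| v + 1 } } }
end
threads.each(&:join)
```

No increment is ever lost despite four threads hammering the same cell, because a stale write simply fails to commit and is retried - which is why this scales to many objects better than handing out one lock per interaction.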

I also agree that exceptions currently are too unpredictable and need more reliable management in the mid-level code.

- - - - -

Other personal observations:
I've noticed that for high-level tasks there is an increasing demand for "simple" languages (Lua and Io come to mind). At the same time, industrial use requires a complex language. The current short-term trend is to have languages which specialize in being simple (Lua, Io) and languages which specialize in being complex (Python, Ruby). Personally, i think this approach will not last, and that sooner or later a merger will be desired. This in turn could only happen by doing it the firefox/foobar2000 way: making the core language simple and implementing everything else as extensions/plugins. I'm aware that current complex languages already make use of extensions - however, they tend to have a very large standard library, and their core is still much larger and more complex than the "simple languages". Making the core minimal yet having a vast number of extensions solves all issues at once:
- can be lightweight and simple
- can be embeddable
- can be complex with wide middleware support
- no transitions and wrappers required to communicate between simple and complex projects. No reprogramming needed if a simple project grows to become complex.

Or in short: languages which can fully adapt to users' needs.

- - - - -

Another opinion of mine, with which probably a few will disagree:
Classes will go away and be replaced with prototypes.
The reason is simple: classes, IMHO, are just a bad excuse for not having prototypes. I can think of nothing classes can do which couldn't be done easily with prototypes that can have "behaviours" attached to them. So classes have no advantages over prototypes (except easier implementation in the language - prototypes are more difficult to "do right"), but prototypes have advantages over classes. In addition, the concept of prototypes is much easier for newbies to understand than the concept of classes (actually, classes are like prototypes with additional predefined special rules - the "behaviours" i mentioned before).

Here's why...

Current paradigm:
Classes = unusable immutable "blueprints"
Modules = immutable mixins and/or separate namespace
Instances = to create usable objects from the blueprints

So the paradigm clearly expects your code architecture to follow a given pattern. But sometimes that's not possible without an inefficient implementation. Further evidence of the fundamental flaw in this paradigm is that it tries to fix the emerging holes in its design with more and more exceptions, like singletons. You end up with a dozen possible object types, each with different rules, adding significant planning overhead... or in other words, sometimes you're more busy "translating" your scenario to the outlined system than implementing it naturally.

In direct contrast - a modern prototype-paradigm:
- Only one kind of object: prototypes.
- This single, universally usable object type can be tailored to your needs with features like:
- freezing (immutable object-methods)
- muting (unusable object-methods)
- allowing a proto to be either "cloned" or "copied" (clone = shallow copy, copy = deep copy)

Thus, with just one object type and three "behaviours", you can do anything you can do with classes & modules, plus more. Basically, it is no longer the case that you have to adapt your implementation to the language - instead, the language adapts to your implementation... in line with the intention: "the human is the master, the machine is the slave".
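Ruby itself is class-based, but its singleton methods make it easy to sketch the style described above: a single kind of object, specialised by cloning and attaching behaviour, with freezing on top. This is only an illustration of the idea, not a claim about how a prototype language would implement it; incidentally, Ruby's `clone` vs `dup` already makes a related distinction (`clone` carries over singleton methods and frozen state, `dup` does not).

```ruby
# Prototype-style objects sketched in Ruby: no user-defined classes,
# just objects cloned from other objects, with per-object (singleton)
# methods playing the role of attached "behaviours".

animal = Object.new
animal.instance_variable_set(:@sound, "...")

# Behaviour lives on the instance itself, not on a class.
def animal.sound
  @sound
end

# Derive by cloning: `clone` copies singleton methods, so `dog`
# inherits `sound` without any class existing anywhere.
dog = animal.clone
dog.instance_variable_set(:@sound, "woof")

# Specialise the clone with behaviour of its own.
def dog.fetch!
  "fetches the stick"
end

# "Freezing" from the list above: a frozen prototype still answers
# its methods but rejects further mutation.
rock = animal.clone
rock.freeze
```

So one object type plus clone/copy/freeze really does cover the class-and-module territory here; JavaScript, Self, and Io build their whole object model on this pattern.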


- Lyx
I am arrogant and I can afford it because I deliver.

Programming Languages

Reply #73
Quote
Other personal observations:
I've noticed, is that for high-level tasks, there is an increasing demand for "simple" languages(LUA and Io come to mind). At the same time, industrial use requires a complex language. The current short-term trend is to have languages which specialize on being simple(LUA, Io), and languages which specialize on being complex(Python, Ruby).

I think the primary difference between the languages you named is not the complexity of the languages, but the complexity and size of their standard libraries.

Programming Languages

Reply #74
Quote
The other thing which i knew before, but noticed again, is the insanely high burden placed on development-resources by "eye-candy" - mainly regarding 3d-graphics. And the trend is for this burden to become worse and worse. Development-resources however are not infinite - so at some point it will more and more impact gameplay-design - especially regarding content-variety (the cost to add a feature to the game becomes higher and higher). Personally, i think this trend will crash and someday result in a counter-trend towards gameplay- and interaction-focussed gamedesign.


I think you can see this happening already in the current generation of consoles.  All the Xbox 360 games are formulaic - first-person shooters, racing games, and RPGs that look beautiful but don't take any risks with gameplay.  Nintendo is trying to buck the trend by building a lower-spec console and writing games that emphasize gameplay and new kinds of interaction.  Hopefully they'll succeed, but I suspect the market will reward the more cinematic but dull 360 and PS3.