Hi Jason,

 That may be the case from your (and others') perspective, and I have
empathy for it; however, they are still valid techniques, and others, such
as myself, don't share your perspective.
    

Sure, just as manual memory management is still valid and needed at
the lowest levels of programming.  It's just not valid in most
applications.
  

I've not yet seen any serious discussion of your point of view that bridges the complexity gap in concurrency the way automatic memory management magically does. Please illuminate us with specific and complete details of your proposal for such a breakthrough in concurrency complexity.


Making the Squeak VM fully multi-threaded (natively) is going to be a lot of pain and
hard to get right.  Just ask the Java VM team.
  

Then either the hard work needs to be done, or the VM needs to be completely rethought.


The payback of adding this obsolete (except in the lowest-level
cases) method of dealing with threading just isn't going to be worth
the pain to implement it.
  

What are you going on about? What techniques, exactly, are you saying are obsolete? How are they obsolete?


 The reality of processor designs like the Tile 64 require us to have all
available techniques at our disposal.
    

Why?
  

Why? 64 processors on a single chip - with 128 coming next year and 1024 planned - that's why.

With that many processors on a single chip, it's important that systems and applications run smoothly, taking advantage of all the opportunities for parallelism. This has many implications, some of which work better with one method of concurrency than with another. One size of shoe doesn't fit all solutions.


 Exactly my point. Thus the solutions proposed as being "simpler" are just
an illusion.
    

?  They are unquestionably simpler to the programmer who is using them
(which is what I meant).
  

You've missed the point. Even the simplest of concurrency methods proposed so far by people in the Squeak thread lead to the most complex concurrency control error scenarios. That's one of the points. Another is that the simplest of concurrency models can't handle all the scenarios.

As asked above, please describe in detail and completely the "simple" approach to concurrency that is being proposed. Links to appropriate descriptions, if they exist, would also be fine (unless they contain too much extraneous text).


They might be simpler in some cases, but when you really need
complex concurrency controls, sometimes you need the other "dirtier"
techniques at your disposal.
    

This is like saying that Smalltalk is wrong to not expose manual
memory management to you for when you need to get "down and dirty".
It's simply not the case.  You move to a higher level, just as we do
with all abstractions.
  

Nonsense; it's not like saying that at all.

Sometimes moving to a higher level abstraction isn't the solution. Sometimes moving laterally provides the insight for the solution. Often moving down to the lowest levels and rethinking how they work provides the solutions without higher levels of abstraction.

A case in point for clarity: Exokernels. They remove higher levels of abstraction so that we have access to the power of the real hardware.

"The idea behind exokernels is to force as few abstractions as possible on developers, enabling them to make as many decisions as possible about hardware abstractions." - http://en.wikipedia.org/wiki/Exokernel.

The problem with concurrency is that it's more complex than garbage collection by orders of magnitude. It's so much more complex a beast that the comparison breaks down.

Stephen Wolfram's work on Cellular Automata (page 27 of A New Kind of Science, http://www.wolframscience.com/nksonline/page-27) proves, yes, proves that even (some) simple systems can generate results (i.e. behavior) as complex as any generated by a complex system.

Wolfram states: "The picture [of cellular automaton rule 30] shows what happens when one starts with just one black cell and then applies this rule over and over again. And what one sees is something quite startling - and probably the single most surprising scientific discovery I have ever made. Rather than getting a simple regular pattern as we might expect, the cellular automaton instead produces a pattern that seems extremely irregular and complex." Most of the rest of his book explores just how complex this behavior really is.
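Rule 30 itself is small enough to sketch. Here is a minimal Python transcription of the rule (my own illustration, with cells outside the row treated as white); running it from a single black cell produces the irregular triangle Wolfram describes:

```python
# Rule 30 cellular automaton: each new cell is left XOR (center OR right),
# which encodes Wolfram's rule 30 truth table exactly.
def rule30_step(cells):
    """Apply rule 30 to a tuple of 0/1 cells; boundaries are treated as 0."""
    n = len(cells)
    out = []
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        center = cells[i]
        right = cells[i + 1] if i < n - 1 else 0
        out.append(left ^ (center | right))
    return tuple(out)

def run(width=31, steps=15):
    """Start from one black cell in the middle and iterate the rule."""
    row = tuple(1 if i == width // 2 else 0 for i in range(width))
    history = [row]
    for _ in range(steps):
        row = rule30_step(row)
        history.append(row)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

A dozen lines, one starting cell, and yet the output never settles into a simple repeating pattern; that is exactly the simplicity-generating-complexity point.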

Often this feature of simple systems is just what we want to take advantage of. Certainly Smalltalk leverages the power of this simplicity with its simple syntax. So if there is a way (or ways) to have simple concurrency that is effective, I'm all for it.

However, there is a dark side to Stephen Wolfram's discovery as well, one that needs addressing and that I'm attempting to point out here. The dark side is that simple systems can generate complex results, and complex results (beyond comprehension) are just what we don't want when we enter the world of concurrency. The rub is that there isn't any way to avoid the complex results, as far as I can see, since simple systems can generate complexity as complex as that of complex systems.

I fear that the solution space isn't as straightforward as having a set of simplified concurrency primitives, as proposed by some for Smalltalk. The reality is harsher than you think. The solution space requires more than a simple set of concurrency primitives.


Smalltalk is supposed to be a computer language
with the general power to control the computer and access its true power and
potential. Limiting the solution space by implementing only a limited set of
concurrency primitives makes no sense. You'll just give the market to other
lesser systems like Erlang and Java-type systems.
    

This last sentence is quite odd, and to be frank not well reasoned at all.
  

Thank you for calling it "odd". That's what happens when you think different; at first, people think it odd. I often encourage people to think different, as Apple did in its marketing a few years ago.

As for how it's reasoned, yes, it is well reasoned even if you don't get it at first or even if I wasn't clear about it. Let me attempt to clarify the reasoning for you.

First of all Erlang is not lesser, it is in fact currently the leader
in this area.  

How is that?


It's funny though, that you suggest we would "give the
market over" to Erlang, since Erlang supports precisely *one* form of
concurrency:  share-nothing message passing.  

Yes, but Erlang is a purely functional, non-object-oriented language without keyword message passing.

While it has a form of message passing, it's not the same as Smalltalk's. It's simply passing parameters to functions that run in separate green or native threads.
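For readers unfamiliar with the model under discussion: the essence of Erlang-style share-nothing concurrency is that each "process" owns its state privately and communicates only through its mailbox. A rough Python sketch of the idea (my own illustration; the names `counter_process` and the message shapes are invented for this example, and a queue stands in for an Erlang mailbox):

```python
import queue
import threading

# An Erlang-style share-nothing "process": all state is private, and the
# only way in or out is a message through the mailbox.
def counter_process(mailbox):
    count = 0  # private state, never shared with other threads
    while True:
        msg = mailbox.get()
        if msg[0] == "increment":
            count += 1
        elif msg[0] == "get":
            msg[1].put(count)  # reply on the queue the sender provided
        elif msg[0] == "stop":
            return

mailbox = queue.Queue()
worker = threading.Thread(target=counter_process, args=(mailbox,))
worker.start()

for _ in range(5):
    mailbox.put(("increment",))

reply = queue.Queue()
mailbox.put(("get", reply))
result = reply.get()
print(result)  # 5: no locks needed, because state is never shared

mailbox.put(("stop",))
worker.join()
```

No locks appear anywhere because no state is ever shared; that is the one form of concurrency Erlang offers.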

Yes, it is impressive what they have accomplished, but it isn't the be-all and end-all.


Erlang can run in multiple threads, but only the interpreter does that, and it's
transparent to the processes running in the VM.
  

Every system needs improvement.

Second of all, do you seriously think adding fine-grained threading to
Smalltalk automatically will cause it to take over the market?  

No, I don't think it will "cause" Smalltalk to "automatically take over the market". Of course not! Nor did I imply that or intend to imply that.

I simply think that having all the tools at our disposal is important to maintaining and growing market share.


Ironically, the fact of the matter is: the languages that make
threading *simpler to implementers* are going to be the ones who win
in the apparently coming multi-core world.    

"Simpler to implement" concurrency leads to software systems that are just as difficult to manage as more complex, well-thought-out concurrency. In fact, I think that making it simplistic will lead many programmers to write software that is impossible to debug without enormous effort. The problem is that even the simplest concurrency leads to the nastiest and most complex bugs in software.

For example, what could be simpler than Smalltalk's forking of blocks of code? It sure seems simple, doesn't it? Just a simple message to a block of code: "[ ... ] fork". In many cases it is simple, and no harm is done: the code will execute and everything will be good and consistent afterwards. The problems begin when you "fork" code that wasn't designed to be forked and run concurrently. Then all hell can break loose, and boom, your program crashes unexpectedly with a mystery. It gets even worse when it only happens occasionally - try figuring it out then.
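The hazard is easy to reproduce in any language with shared-state threads. Here is a minimal Python sketch (the `Counter` class is my own illustration, not code from any real application) of what happens when code written for single-threaded use is run concurrently, the same failure mode as forking a block that wasn't designed for it:

```python
import threading

# A counter written for single-threaded use: an unprotected
# read-modify-write, analogous to Smalltalk code that was never
# designed to be forked.
class Counter:
    def __init__(self):
        self.value = 0

    def increment(self):
        v = self.value   # read
        v = v + 1        # modify
        self.value = v   # write; another thread may have run in between

def hammer(counter, n):
    for _ in range(n):
        counter.increment()

counter = Counter()
threads = [threading.Thread(target=hammer, args=(counter, 100_000))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# We "expect" 400_000, but lost updates can make it less,
# and the result varies from run to run.
print(counter.value)
```

The bug is silent, intermittent, and scheduling-dependent, which is precisely why it only shows up occasionally and is so miserable to track down.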

I was just working on fixing a "fork"-happy approach in a major Smalltalk production application. In the end we took out many of the forking statements or fixed them in other ways. That application was running in a Smalltalk with ONE native thread for all the Smalltalk processes (aka green lightweight threads). So much for simple threading being easier.

Part of the problem was the programmers being "fork"-happy. Part of the problem is that the Smalltalk class library isn't designed to be thread safe; very few class libraries are. Most of the problem is that the concurrency simply wasn't thought out well.

I don't see how a simple concurrency threading model can solve the problems of when and how to use concurrency properly to avoid the many pitfalls of threading. If you can see how, please illuminate it for the rest of us.

Just ask Tim Sweeny.
  

Do you mean Tim Sweeney the game developer? http://en.wikipedia.org/wiki/Tim_Sweeney_(game_developer)

Alright, even though I don't know Tim, I'll take the bait and see where it goes. Tim Sweeney (or Sweeny), what do you think? (If someone who knows him would be kind enough to pass this thread on to him or post his thoughts on this topic, that would be great - thanks.)


 For example, when building hard core operating systems.
    

If you want to build a hard core operating system in Smalltalk you
have other more pressing issues to deal with than how threading is
accomplished.  

Yes, there are many issues in implementing an operating system in a language such as Smalltalk. In exploring these issues ZokuScript was born. Real native processes with protected memory spaces and multiple threads are just one of these important issues. Performance is another.

Threading, including native threading on one core or N cores (where N can be large) under existing operating systems, is very important to the future of Smalltalk.


I really don't see what it is you think you lose by not having this old,
outdated fine-grained threading model.
  

For clarity's sake, please define in detail what you mean by the phrase "fine-grained threading model" so that we can make sure we are on the same page.



 There are many paths. I'm excited about the path that you are forging. All
I ask is that you don't make that the only path to travel for people using
Smalltalk.
    

Well, at the moment I'm forging nothing, only stating what I know of
the situation.  At some later point I do intend to look at what's
required to make this happen in Squeak, but I have some other more
pressing issues for the present.

  
 While I support Smalltalk inventing the future, keeping it from supporting
valid concurrency techniques is ignoring the future (and the past) of what
works!
    

We have very different definitions of "works".  Here you are using it
the same way someone would use it for <insert crappy programming
language>.  It works in the same way you can paint a house with a
toothbrush.
  

You seem to think that there is some magical breakthrough in the world of concurrency on par with the magic of automatic garbage collection. I'd sure love to know what that is, and how it avoids the pitfalls of even simple concurrency models and the issues that occur in real-world projects. If you could, please describe it in full detail with real-world examples. Thanks very much.

All the best,

Peter William Lount
peter@smalltalk.org
Smalltalk.org Editor