Folks -
Now out on the servers, some pretty interesting goodies...
First, in time for the holidays, we have two classic games. Mines from David A. Smith is ready to gobble up your spare minutes and hours. And an entire game of Chess, from Andreas Raab, which he says needs a bit more programming to play a decent game. Who wants a Squeak game that needs more programming? Oh, I see ;-).
There are several bug fixes, plus a whole rewrite of Morphic borders from Andreas. Lots of cool new styles are now available.
Then there's a high-speed reader/writer for JPEG format from Juan Manuel Vuletich. And streaming sound support from John Maloney.
Last, but not least: An entire JPEG movie morph from John Maloney and Juan Manuel Vuletich. You mean MPEG, don't you? No! Juan's JPEG plugin is so fast that you can show movies from it at 30-40 frames per second. This means we have a movie format that is open, can be written from Squeak, and is something like 20 times more compact than our old .movie format.
Check them out!
- Dan -------------------- 4543hideTabsFix-sw -- Scott Wallace -- 24 November 2001 Fixes the bug that could drop you into an endless series of debuggers if you hit the 'hide tabs' button in a flap-based Navigator tab.
4544Morphic-Chess-ar -- Andreas Raab -- 25 November 2001 A long-term project of mine has always been to write a nice little chess program for Squeak. But since I haven't done anything on it in the last few months I'm just going to throw it at the community to see if someone is interested in doing a bit more. It plays ... but not well :-( So, go get it from the objects tool and improve it! Smalltalk at: #ChessConstants put: Dictionary new.
4545Mines-das -- David A. Smith -- 25 November 2001 Ah ... one of those terribly addicting games...
4546projEnter-sw -- Scott Wallace -- 25 November 2001 Fixes a recently-introduced bug regarding the initial state of morphic projects.
4547viewBoxFix-crl -- Craig Latta -- 5 November 2001 Initialize the fullBounds of a pasteup when its viewBox is set -- fixes a bug seen when entering a newborn project that has no submorphs in it.
4548BorderStyles-ar -- Andreas Raab -- 26 November 2001 This change set introduces explicit border styles in Morphic. At this point they are hacked into BorderedMorph to allow the dual existence of the borderWidth/borderColor pair with the new border styles (e.g., storing a value into borderWidth/borderColor will be reflected in the borderStyle when it is used). Six new complex border styles have been added - besides 'rounded' versions of inset and raised borders (as well as their inverse variants) framed borders are supported. The new border styles are available from the object properties dialog (e.g., when you click on the 'change color' halo - which really should be renamed). In addition, each border style can either track the color from its associated morph, or it can have a fixed color. It is therefore now possible to have a morph with a gradient fill and a manually selected color with a raised (or inset, or framed) border style, thus greatly improving the results for any fancy fill styles.
4549fastJPEGReadWriter-jmv -- Juan Manuel Vuletich -- 26 November 2001 JPEGReadWriter2 uses a free JPEG library from the Independent JPEG Group to provide: 1. encoding, as well as decoding, of JPEG images 2. very good performance It has many applications, including JPEG movies. Many thanks to Juan Manuel for this splendid addition to the Squeak toolbox!
4550streamingSound-jm -- John Maloney -- 26 November 2001 Adds support for the creation and playback of monophonic streaming sampled sounds. The sounds can be compressed using any of Squeak's sound codecs. Random access (i.e. moving the playback position) is supported. Compatible with StreamingMP3Sound. Note: Random access does not work perfectly with the ADPCM codec. A fix is forthcoming. For now, mulaw or gsm compression are recommended.
4551jpegMovie-jm -- John Maloney -- 26 November 2001 Exploits the new fast JPEG plugin to support a new, compressed movie format: JPEG movies. A JPEG movie file (usually ending in '.jmv') contains a header, the JPEG compressed frames of the movie, and possibly one or more sound tracks. Like a Squeak movie--and unlike an MPEG movie--JPEG movies can be created, manipulated, and edited using only Squeak. However, they are much, much smaller than Squeak movies. For example, a several minute clip from Fantasia ('Night on Bald Mountain') shrunk from about 176 to under 6 megabytes. On the other hand, JPEG movies tend to be 1.5 to 3.0 times the size of an MPEG movie at a similar quality level. The class JPEGMovieFile contains methods for converting either MPEG or Squeak movies into JPEG movies. JPEGMovieFile implements a subset of the MPEGFile protocol so that JPEG movies can be played by MPEGMoviePlayerMorph. Also included in this changeset: 1. the ability to resize the movie display (use the yellow halo) 2. improved sound/picture sync that is robust in the face of sound pauses induced by OS activities
4552borderStylesInViewer-sw -- Scott Wallace -- 26 November 2001 Makes the full range of new border styles available in viewers.
The JPEG movie morph sounds wonderful! Is the compiled plugin (or new VMs) available somewhere? I didn't see them off the UIUC FTP server. Mark -------------------------- Mark Guzdial : Georgia Tech : College of Computing : Atlanta, GA 30332-0280 Associate Professor - Learning Sciences & Technologies. Collaborative Software Lab - http://coweb.cc.gatech.edu/csl/ (404) 894-5618 : Fax (404) 894-0673 : guzdial@cc.gatech.edu http://www.cc.gatech.edu/gvu/people/Faculty/Mark.Guzdial.html
More questions about the Movie-JPEG format, please:
- Why was it decided to invent a format rather than use the existing Motion-JPEG standard (which I didn't know about until Bolot sent me these URLs):
http://bmrc.berkeley.edu/research/cmt/versions/4.0/doc/cmtmjpeg/MJPEG_chunkfile.html http://neptune.netcomp.monash.edu.au/cpe3013/MPEG/Reading/MJPEG/step1.htm
The Motion-JPEG format sounds like essentially the same thing as the movie-JPEG format, but I could be missing some of the subtleties.
- Why is the soundtrack audio format Sun mulaw rather than something more common like AIFF or WAV?
- It looks like it's possible to do multiple soundtracks (though the MPEG-to-Movie-JPEG example only creates a single monophonic soundtrack). How does that work? Are they mixed during playback, or is there a way to choose between soundtracks during playback?
I'm eager to get the plugin/VM to try it, particularly for BUILDING some movies!
Mark
Mark,
- Why was it decided to invent a format rather than use the existing
Motion-JPEG standard (which I didn't know about until Bolot sent me these URLs):
Simply put, because we didn't quite understand everything about MJPEG and wanted to get something into all of your hands to play with. Now that we have something that's fast enough we'd all be more than happy if one of your smart students would make Squeak read and write MJPEG proper ;-)
Cheers, - Andreas
At 1:34 PM -0500 11/27/01, Mark Guzdial wrote:
Thanks for your enthusiasm, Mark!
Re:
Why was it decided to invent a format rather than use the existing Motion-JPEG standard (which I didn't know about until Bolot sent me these URLs):
http://bmrc.berkeley.edu/research/cmt/versions/4.0/doc/cmtmjpeg/MJPEG_chunkfile.html http://neptune.netcomp.monash.edu.au/cpe3013/MPEG/Reading/MJPEG/step1.htm
The Motion-JPEG format sounds like essentially the same thing as the movie-JPEG format, but I could be missing some of the subtleties.
Mostly because I didn't know about motion JPEG, either! (Well, I'd heard the term, but knew nothing about it. Thanks for the links.) Motion-JPEG seems fairly similar to JPEG movies, but with no provision for sound, as far as I can see. How widely used is it? Do programs such as QuickTime deal with it?
It certainly looks like it might be easy to write a converter to import an MJPEG movie into a Squeak JPEG movie. If there is a way to include soundtracks using arbitrary sound compression, then I'd have no objection to just replacing the Squeak JPEG format with the MJPEG format. I don't have time to do that myself, at least not in the near future.
Hmm, after studying these two web sites further, I'm less certain that Motion JPEG could easily be handled by our current JPEG plugin. The key thing about the Squeak JPEG movie format is that each frame is exactly a compressed JPEG image, completely independent of all other frames, that can be sent through the JPEG plugin. The links above suggest that perhaps Motion JPEG frames are not in precisely this format, even though they use the JPEG algorithm. For example, individual frames might omit some header information that is common to all movie frames. While we could probably extend the JPEG plugin to handle such a format, the current JPEG movie format uses exactly the same data format as still images. Feel free to send any links that clarify this issue...
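The "each frame is a standalone JPEG" property described above can be checked mechanically: a complete JPEG image always begins with the SOI marker (FF D8) and ends with the EOI marker (FF D9). A rough sketch (in Python rather than Smalltalk, purely for illustration; note this naive marker scan is not a robust JPEG parser, since FF D9 can in principle appear inside entropy-coded data):

```python
def split_jpeg_frames(data):
    """Split a byte stream of back-to-back JPEG images into individual
    frames by scanning for SOI (FFD8) and EOI (FFD9) markers."""
    frames = []
    start = data.find(b'\xff\xd8')
    while start != -1:
        end = data.find(b'\xff\xd9', start)
        if end == -1:
            break  # truncated final frame; ignore it
        frames.append(data[start:end + 2])
        start = data.find(b'\xff\xd8', end + 2)
    return frames
```

If an MJPEG file stored its frames this way, each extracted chunk could be handed straight to a JPEG decoder; if frames omit shared header segments, this simple approach would not work.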
Re:
- Why is the soundtrack audio format Sun mulaw rather than something
more common like AIFF or WAV?
Soundtracks are in Sun audio file format. This format supports uncompressed sample streams, as well as arbitrary codecs. I wanted a format that could be extended to use any new Squeak codecs that happened to come along. I nearly created a new audio file format, but then realized that was silly. :->
Sun audio files are *really* simple--just 6 header words plus the data.
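Those six header words really are all there is to it. A sketch of writing one (Python for illustration; field layout follows the classic Sun/NeXT .au header, where encoding code 1 means 8-bit mu-law):

```python
import struct

def au_header(data_size, sample_rate=8000, channels=1, encoding=1):
    """Build a 24-byte Sun audio (.au) header: six big-endian 32-bit
    words -- magic '.snd', data offset (24), data size, encoding code
    (1 = 8-bit mu-law), sample rate, and channel count."""
    return struct.pack('>4s5I', b'.snd', 24, data_size, encoding,
                       sample_rate, channels)
```

Writing a playable mu-law file is then just this header followed by the raw mu-law sample bytes.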
Re:
- It looks like it's possible to do multiple soundtracks (though the
MPEG-to-Movie-JPEG example only creates a single monophonic soundtrack). How does that work? Are they mixed during playback, or is there a way to choose between soundtracks during playback?
Multiple soundtracks haven't been implemented yet, either encoding or playback, but I wanted to leave the option open for supporting them later.
Right now, you get the first sound channel from an MPEG movie that has sound. If the movie has stereo sound, you get just the left channel. My thought for later was to create one StreamingMonoSound for each channel and use a mixer to mix them together. (Actually, that wouldn't quite work with the current implementation of StreamingMonoSound.) The other approach would be to make a StreamingMultichannelSound that mixes multiple sample streams. Note that compression makes things a bit more complex, since some of Squeak's codecs have provision for interleaved sample streams but others (e.g. the GSM codec) do not. Of course, we can always create a new format type for Sun audio files that does whatever kind of interleaving makes sense.
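The mixing step itself is simple in principle: sum the decoded sample streams element-by-element and clip to the sample range. A toy sketch (Python, with a hypothetical helper name -- not part of the actual Squeak sound classes):

```python
def mix_channels(channels):
    """Mix several mono 16-bit sample streams (lists of ints) into one
    stream by summing sample-by-sample and clipping to 16-bit range."""
    if not channels:
        return []
    length = max(len(ch) for ch in channels)
    mixed = []
    for i in range(length):
        s = sum(ch[i] for ch in channels if i < len(ch))
        mixed.append(max(-32768, min(32767, s)))  # clip to int16
    return mixed
```

The hard part, as noted above, is not the arithmetic but doing this incrementally on compressed streams whose codecs may not support interleaving.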
Incidentally, the GSM codec does a pretty good job on the AlienSong.mpg soundtrack, including a piano riff at the beginning. It's nowhere near as clean as MP3, but it's acceptable, especially if you're listening through bad laptop speakers anyhow. :->
The ADPCM codecs also work, but they don't handle random access properly. It's on my list to fix that. For now, avoid them. (5-bit ADPCM sounds nearly as good as mu-law, and it's only 5/8ths the bitrate.)
Re:
I'm eager to get the plugin/VM to try it, particularly for BUILDING some movies!
We're also thinking about how kids and teachers could import movies from a digital video camera. It looks like it's reasonably easy with a FireWire-equipped Mac and iMovie. You'd also need to buy QuickTime Pro, which gives you the ability to export a movie in a variety of formats, some of which could be read from Squeak. One option is simply to export the individual frames as PNG, JPEG, BMP, or various other image formats. It's trivial to write a converter that builds a movie from a collection of frames.
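That frames-to-movie converter really is trivial in outline. The sketch below is Python and purely illustrative -- the real .jmv header layout isn't documented in this thread, so the container format here is a made-up stand-in (frame count, then each JPEG frame preceded by its length):

```python
import struct

def build_frame_container(frames):
    """Pack a list of JPEG frame byte strings into a toy container:
    a 4-byte big-endian frame count, then each frame preceded by its
    4-byte big-endian length. NOT the actual .jmv layout -- just an
    illustration of concatenating standalone frames into one file."""
    parts = [struct.pack('>I', len(frames))]
    for frame in frames:
        parts.append(struct.pack('>I', len(frame)))
        parts.append(frame)
    return b''.join(parts)
```

Length-prefixing each frame is what makes random access (seeking to frame N) cheap, which is the editing advantage discussed later in the thread.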
If anyone knows how one might do this on Windows, I'd like to know. The goal is to avoid expensive software or video converter hardware. I think $25-$50 is the most a kid should have to spend to get a movie into Squeak.
-- John
On Tue, 27 Nov 2001 18:27:44 -0800, John.Maloney@disney.com wrote:
How widely used is it? Do programs such as Quicktime deal with it?
My approximate understanding is that it's never used for "end product", as MJPEG files are much larger than their MPEG equivalents. The mid-level video capture hardware (Matrox, etc.) seems to like using it - I think the key point is that it's far easier to edit compared to a file format where you only get a key frame once in a while (i.e. regular MPEG).
Most of the cheaper tv-card style video grabbers that I've seen don't bother with it, and just go straight to MPEG.
I suppose this boils down to, "Support MJPEG if it's nice and easy, but don't bust a gut over it".
HTH, Nick Brown
Thanks, Nick, this is very helpful.
You're right that stand-alone frames are much easier to edit. In my experiments with Squeak JPEG movies, JPEG movies are generally 1.2 to 3 times larger than the original MPEG movie at similar quality levels. So MPEG is definitely preferable for compactness in final distribution.
But there are many advantages to a format that can be authored and edited in Squeak without the need to buy any additional software. I'm hoping that we can find a way to export Squeak JPEG movies in a form that can be imported into a high-end video program and turned into a movie in MPEG, QuickTime, or other popular digital video formats for those who desire to publish their movies outside the Squeak community. (I believe that Adobe Premiere, for example, can create a movie from a folder full of individual frames, and it would be easy to export all the frames of a JPEG movie.)
-- John
John.Maloney@disney.com wrote:
So MPEG is definitely preferable for compactness in final distribution.
But there are many advantages to a format that can be authored and edited in Squeak without the need to buy any additional software. I'm hoping that we can find a way to export Squeak JPEG movies in a form that can be imported into a high-end video program and turned into a movie in MPEG, QuickTime, or other popular digital video formats for those who desire to publish their movies outside the Squeak community. (I believe that Adobe Premiere, for example, can create a movie from a folder full of individual frames, and it would be easy to export all the frames of a JPEG movie.)
There's quite a lot of stuff in the Gimp (none of which I've played with) which may be appropriate (and Gimp is free, of course).
In Gimp, open an animated .gif (this just for purposes of illustration). Open up the file dialogue and mouse down to "video". It seems that Gimp will (relatively) cheerfully take a series of jpegs and export them as an mpeg movie (given the mpeg2encode libraries) or strip any old mpeg or animated gif down to a series of individual frames. It's not that child friendly, though, and is pretty memory intensive.
My copy of Gimp is rather dated -- gimp -v GIMP version 1.1.18 -- and I'd expect this stuff has received some attention in later versions (let alone the all new rewrite).
HTH
Cheers
John
In Gimp, open an animated .gif (this just for purposes of illustration). Open up the file dialogue and mouse down to "video". It seems that Gimp will (relatively) cheerfully take a series of jpegs and export them as an mpeg movie (given the mpeg2encode libraries)
Well now, it could be that I've an mpeg2encode library around that interfaces to Squeak, but somehow the murky laws about licensing etc. kinda made me not publish it. Now if someone wants to clarify that, one could do an MPEG encoder in Squeak too.
John M McIntosh wrote:
In Gimp, open an animated .gif (this just for purposes of illustration). Open up the file dialogue and mouse down to "video". It seems that Gimp will (relatively) cheerfully take a series of jpegs and export them as an mpeg movie (given the mpeg2encode libraries)
Well now, it could be that I've an mpeg2encode library around that interfaces to Squeak, but somehow the murky laws about licensing etc. kinda made me not publish it. Now if someone wants to clarify that, one could do an MPEG encoder in Squeak too.
I peeked at the download site for a licence (and in the download itself) and there's nothing! It looks like they've made this stuff public domain with just the usual "without warranty" get-out.
Maybe Andrew could drop them an email:
MPEG Software Simulation Group MPEG-L@netcom.com?
BTW, looks like, in Gimp at least, mpeg_encode 1.5 (ftp://mm-ftp.cs.berkeley.edu/pub/multimedia/mpeg/bmt1r1.tar.gz) supports JPEG (as well as YUV,PNM and PPM) while the mpeg2encode 1.2 (ftp://ftp.mpeg.org/pub/mpeg/mssg) -- the guys with the email address above -- supports only PPM and YUV.
We could well be dealing with 2 separate licences here!
PS, I hope the *nix folk get a plug-in this time: I've still got a Squeak movie I could never play!
Cheers
John
Regarding "Motion JPEG" file format, I came across the following snippet at http://www.faqs.org/faqs/jpeg-faq/part1/section-20.html:
As was stated in section 1, JPEG is only for still images. Nonetheless, you will frequently see references to "motion JPEG" or "M-JPEG" for video. *There is no such standard*. Various vendors have applied JPEG to individual frames of a video sequence, and have called the result "M-JPEG". Unfortunately, in the absence of any recognized standard, they've each done it differently. The resulting files are usually not compatible across different vendors.
MPEG is the recognized standard for motion picture compression. It uses many of the same techniques as JPEG, but adds inter-frame compression to exploit the similarities that usually exist between successive frames. Because of this, MPEG typically compresses a video sequence by about a factor of three more than "M-JPEG" methods can for similar quality. The disadvantages of MPEG are (1) it requires far more computation to generate the compressed sequence (since detecting visual similarities is hard for a computer), and (2) it's difficult to edit an MPEG sequence on a frame-by-frame basis (since each frame is intimately tied to the ones around it). This latter problem has made "M-JPEG" methods rather popular for video editing products.
It's a shame that there isn't a recognized M-JPEG standard. But there isn't, so if you buy a product identified as "M-JPEG", be aware that you are probably locking yourself into that one vendor.
Recently, both Microsoft and Apple have started pushing (different :-() "standard" M-JPEG formats. It remains to be seen whether either of these efforts will have much impact on the current chaos. Both companies were spectacularly unsuccessful in getting anyone else to adopt their ideas about still-image JPEG file formats, so I wouldn't assume that anything good will happen this time either...
See the MPEG FAQ for more information about MPEG.
I'm not sure how current this FAQ is, but it agrees with what Jan wrote.
-- John
Also as a sidenote: there are some mpeg4 libs out there that compress files to a fraction of the size of their mpeg2 equivalents. Good for distributing over the net etc. http://www.projectmayo.com/about/index.php http://www.3ivx.com/
Run time: 20:40. 34945.avi (38 MB) as mpeg4 vs. 34945.mpg (365.9 MB) as mpeg2. Karl
At 12:34 PM -0800 11/28/01, John.Maloney@disney.com wrote:
I'm hoping that we can find a way to export Squeak JPEG movies in a form that can be imported into a high-end video program and turned into a movie in MPEG, QuickTime, or other popular digital video formats for those who desire to publish their movies outside the Squeak community. (I believe that Adobe Premiere, for example, can create a movie from a folder full of individual frames, and it would be easy to export all the frames of a JPEG movie.)
I use QuickTime Pro to do this regularly. For example, I recently had a movie of my screen to demonstrate CoWeb (captured via Timbuktu), but it was my whole 1152x780 screen, and I only wanted the upper left corner. iMovie can't crop frames. I understand that Final Cut Pro can, but I don't have that. I used QuickTime Pro to output the frames as a series of PNGs, then GraphicConverter to batch crop the images, then QuickTime Pro to suck them back into a QuickTime movie. Worked great!
Mark
You're right that stand-alone frames are much easier to edit. In my experiments with Squeak JPEG movies, JPEG movies are generally 1.2 to 3 times larger than the original MPEG movie at similar quality levels. So MPEG is definitely preferable for compactness in final distribution.
Ed Lazowska (Chair of U. Washington CS department) just gave a talk here at Georgia Tech where he pointed out that CPU speed is doubling every eighteen months (with some arguments saying that there is maybe 10 years left on Moore's Law), but disk space is doubling every NINE months and the backbone bandwidth is doubling every SIX months, with no limits in sight for those. Ed argues that in 10 years, things like streaming video protocols will make no sense at all: The CPU will be the bottleneck, not disk or bandwidth.
If he's right, then a less-compressed but more-easily-accessed and more-easily-constructed format makes more sense in the long run.
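For concreteness, those doubling periods compound to very different factors over ten years. A quick back-of-the-envelope (just arithmetic; the approximate factors in the comments follow from the doubling rates quoted above):

```python
def growth_factor(doubling_months, horizon_months=120):
    """Growth after `horizon_months` for a quantity that doubles
    every `doubling_months` months."""
    return 2 ** (horizon_months / doubling_months)

# Over 10 years (120 months):
#   CPU speed  (18-month doubling) -> about 100x
#   disk space  (9-month doubling) -> about 10,000x
#   bandwidth   (6-month doubling) -> about 1,000,000x
```

So even if all three keep doubling, the gap between compute and storage/bandwidth widens by orders of magnitude, which is the crux of the argument.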
Mark
When I try to set the scaleFactor of a morph (or should I say the morph's player?), I get a walkback window headed "MessageNotUnderstood: scale:". I first discovered this using v3.2a, but the problem also appears in v3.0.
While I'm at it, and since I'm preparing to talk about Squeak/Morphic/Scripting to some middle school students, it would be great if I could get my discourse straightened out with regard to morphs and players, so that I can give them (and myself!) at least a quick and dirty "mental model" of what is going on.
Thanks!!
- Jerry B
------------------------- Dr. Gerald J. Balzano Teacher Education Program Dept of Music Laboratory for Comparative Human Cognition Cognitive Science Program UC San Diego La Jolla, CA 92093 (619) 822-0092 gjbalzano@ucsd.edu
Where my HyperCard background gets me in trouble with Squeak is when I'm repeatedly wanting/trying to "lift the hood" on things to see how they work, and my HC experience of how to do this and what you find there is just different. For example, Squeak inspectors on object instances are really only the tip of the iceberg, but browsing the class takes me all the way to the bottom of the ocean. I need better moves for negotiating the middle ground.
So in HyperCard I could always double-click on a button, see its point-clickable properties, and with one more click see the script for how that button works. In Squeak I seem to only be able to see how a button works in this way if it has been Etoy-scripted. Similarly with menu items: I'd love to be able to "lift the hood" on one of the many menu items that one can access through a morph's halos and peek at the code that implements that function. How can I (or anyone else learning Squeak) do this?
Another thing I haven't been able to figure out: How do I access or reference a morph that I have created via direct manipulation? I know I can get an inspector on it and call it "self" in the inspector's code pane ((language, language, hope I'm expressing things correctly)), but how can I actually write code that gets or sets something on that morph out of that special context?
- Jerry
Hi, Jerry,
Please say more about how this error happened. How did you "try to set the scaleFactor..."?
Was it through a viewer or tile scriptor in the etoy system?
I've just tried setting scaleFactors manually (from an etoy viewer) and programmatically (from an etoy scriptor) for various objects in a 3.2a system and all appeared to be well.
Or perhaps you wrote direct Smalltalk code using a Scriptor in "textual mode," in which you sent a message (perhaps #scale: or #scaleFactor:) to some object, perhaps self, which you reasonably expected should respond to it?
(If that's the case, note that the "self" of the etoy script is a Player object and that the protocol for setting the scaleFactor is #setScaleFactor:.)
Or perhaps you were manually sending #scale: to some object from an Inspector???
Cheers,
-- Scott
Scott - Thanks for your response. In the 3.2 Etoy system, either manually or from a script, the star, curve, and polygon all ignore attempts to set their scale factor, although (as I just found out) the ellipse, rect, and round rect behave just fine. It must have been in 3.0 with the star specifically that I got the walkback error (from etoy system) I described in the previous message. - Jerry
Hi, Jerry,
Ah, yes, now I remember!
ScaleFactor works for every kind of morph *except* PolygonMorph and its descendants, where it is broken and has never worked.
The issue derives from the fact that Polygons (for their own good reasons) refuse to have "TransformationMorph" shells wrapped around them (a scaled polygon is itself just another polygon, after all), and it's the TransformationMorph that holds the scaleFactor information.
Looking back I see that only last month I bumped into the very bug you reported, and after discussing the polygon situation within SC for a few rounds of email, I issued, in update 4465cleanups-sw dated 29 October, a workaround for this problem -- see item 6 in the preamble to that update.
You should now see no more scaleFactor-related debuggers, and the scaleFactor of polygons is maintained at 1 -- that's the best we can do until someone does the work to implement a scaleFactor within PolygonMorph.
Cheers,
-- Scott
This is on my list.
I made polygons scale by transforming their vertices instead of flexing, which makes them much prettier and somewhat faster. Unfortunately it breaks the EToy code that expects them to be flexed. So much for good intentions.
I think there's a way to have the best of both worlds. I'll move it up higher on the list.
- Dan
Hi, Jerry,
Ah, yes, now I remember!
ScaleFactor works for every kind of morph *except* PolygonMorph and its descendants, for which it is broken and has never worked.
The issue derives from the fact that Polygons (for their own good reasons) refuse to have "TransformationMorph" shells wrapped around them (a scaled polygon is itself just another polygon, after all), and it's the TransformationMorph that holds the scaleFactor information.
Cheers,
-- Scott
More questions about the Movie-JPEG format, please:
- Why was it decided to invent a format rather than use the existing Motion-JPEG standard (which I didn't know about until Bolot sent me these URLs)?
http://bmrc.berkeley.edu/research/cmt/versions/4.0/doc/cmtmjpeg/MJPEG_chunkfile.html
http://neptune.netcomp.monash.edu.au/cpe3013/MPEG/Reading/MJPEG/step1.htm
I developed and sold a commercial M-JPEG video codec for Windows platforms for many years, so I know HEAPS about M-JPEG formats and digital video in general. Hopefully this message can educate people about video formats, and point out the potholes.
The first issue is that there are a bunch of M-JPEG formats, not just "the" standard as with MPEG. The two most widely used M-JPEG formats are probably Microsoft AVIs using an OpenDML-compatible codec and QuickTime using M-JPEG A or B. The above links describe some proprietary format that happens to use JPEG-like compression. Also note that even though companies claim to conform to a "standard", often they don't, so lots of compatibility issues come up with M-JPEG.
Unlike MPEG, the file format (AVI or QuickTime) is quite distinct from the codec format (the compressed frame format). The thing to do would be to write Squeak code that understood one or both of these file formats, and then also had some code that implemented codecs. File formats tend to be pretty stable; codecs change rapidly. The simplest codec format is uncompressed RGBA or YCrCb.
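The file-format/codec split described above maps naturally onto a pluggable design. As a rough sketch (in Python rather than Squeak, and with invented names), a container parser would yield tagged frames while a separate codec registry turns frame bytes into pixels:

```python
# Hypothetical codec registry: container parsers yield (fourcc, frame_bytes)
# pairs; the registry maps each fourcc to a decoder, so new codecs can be
# added without touching any container-parsing code.
CODECS = {}

def register_codec(fourcc):
    def wrap(decoder):
        CODECS[fourcc] = decoder
        return decoder
    return wrap

@register_codec('raw ')
def decode_raw(frame_bytes):
    # Uncompressed RGBA is the simplest codec: bytes pass straight through.
    return frame_bytes

def decode_frame(fourcc, frame_bytes):
    return CODECS[fourcc](frame_bytes)
```

The same registry could be mirrored for encoders, giving the stable-container/fast-changing-codec split described above.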
M-JPEG as a codec format has some significant limitations compared to newer formats like DV or i-Frame MPEG. Including:
- there is no universal M-JPEG format
- it's quite tricky to get constant data rates using M-JPEG: easily available JPEG code sets a "quality" factor before compression, which, depending on the frame contents, gives a large range of compressed frame sizes (frames of pure random noise can actually be larger after compression)
- JPEG compression also uses a single "quality" (the quantization factor) for the WHOLE image, which is one reason it's hard to generate constant data rates. It also degrades image quality for a given compressed size, because you can't allocate more bits to picture areas that have more detail; both MPEG and DV can change the quantization dynamically through the frame. For M-JPEG, the easy strategy to hit a constant data rate (or at least stay below some data rate) is to compress the frame and then keep recompressing it (a binary search), adjusting the global quality until the frame is an acceptable size. That's not so good for performance, though predicting quality settings from previous frames helps, except at scene transitions that suddenly change the amount of picture detail. There are also some patented algorithms to estimate the correct quality setting based on samples of the data.
- movies for actual video display, as opposed to computer display, are also generally interlaced. Blindly compressing a "frame" rather than just a field (every other scan line) will often not work so well; M-JPEG typically compresses each field separately, concatenating the results as a "frame". DV and MPEG-2 have algorithmic support for picture areas where the two fields have significant interframe motion: specifically, they have alternative DCTs (a normal one, and one that understands that alternating lines may not correlate) that get chosen on an 8x8-cell basis.
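The recompress-and-adjust strategy in the list above can be made concrete. Here is a small sketch (Python, with a stand-in `compress` function — nothing here is real encoder API): binary-search for the highest quality whose compressed output still fits a per-frame byte budget.

```python
def fit_quality(compress, budget, lo=1, hi=100):
    # Binary-search the highest JPEG quality setting whose compressed
    # frame still fits in `budget` bytes. `compress(q)` returns the
    # encoded frame at quality q; size grows monotonically with q.
    best = lo
    while lo <= hi:
        mid = (lo + hi) // 2
        if len(compress(mid)) <= budget:
            best = mid        # fits: try a higher quality
            lo = mid + 1
        else:
            hi = mid - 1      # too big: lower the quality
    return best
```

Each probe costs a full compression pass, so seeding the search bounds from the previous frame's result (as noted above) cuts the number of passes dramatically, except at scene changes.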
For high quality video editing, nothing beats uncompressed fields. Most of the compression formats subsample the color resolution, which makes things like chroma keying not work so well. If you have to make multiple compression/decompression passes, most codecs also introduce ugly artifacts as you build up layers for the final output. The downside to uncompressed video editing is high data rates. CPU loads are actually lower than with compressed formats, but merging two uncompressed full-quality data streams and writing an output stream is a total disk data rate of about 62 MBytes/sec for NTSC (29.97 fps * 720x480 * 2 bytes/pixel, assuming YCrCb color space, across all three streams). Seeking is also super simple on uncompressed data, as all frames are a fixed size.
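The 62 MBytes/sec figure checks out; a quick back-of-the-envelope (Python) version of the arithmetic:

```python
def stream_rate(fps=29.97, width=720, height=480, bytes_per_pixel=2):
    # Bytes per second for one uncompressed NTSC-sized YCrCb stream.
    return fps * width * height * bytes_per_pixel

# Two input streams being merged plus one output stream hit the disk at once:
total = 3 * stream_rate()   # roughly 62 MBytes/sec
```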
Also note that NTSC or PAL video does NOT have square pixels. I should also add that fun things like gamma correction and color gamut mapping should be done to make high quality output. It's a LOT more complex than just taking your RGB animation and feeding it to a JPEG algorithm.
A BIG advantage of the M-JPEG format is that it's almost totally free of patent issues. I've been told the DV format is also mostly not a patent issue. MPEG, on the other hand, is a patent minefield.
A very viable way to edit video might be to keep a shadow file of metadata for an i-Frame MPEG file (or other video file format). Frames could be decompressed using standard MPEG decoder code. Frames could be assembled by seeking to the correct file offset based on the metadata. Output could be uncompressed or i-Frame MPEG. Deferring decompression of all the frames (or even just bypassing decompression, by copying the input to the output) is best, but often not possible. There would be a reasonably fast pre-edit step to parse the input file into metadata (no decompression, just finding the frame boundaries). Having pluggable file formats (MPEG flavors, QuickTime, AVI) and compression formats (MPEG I/II, DV, uncompressed, M-JPEG) would be best.
Other random thoughts on video:
- alpha is important, and generally ignored by most compressed formats; uncompressed+alpha is probably the ideal working video format
- gamma-corrected YCrCb with 4:2:2 subsampling is closest to most native video formats, so doing effects in a YCrCbA color space might be desirable. I see 160 GByte disks are for sale at $299.
- sound is a whole can of worms. Consumer DV cameras use what's called unlocked audio, which means the number of sound samples varies for each frame; this confuses timebase logic. Generally you have to make video run at the correct frame rate (about 29.97 fps for NTSC, but not exactly) and fix up the sound samples. Pro video devices often have a common clock for video and audio samples, so they can keep the two synchronized.
- all these details can be ignored if you just want to play little videos on your computer screen; if you want to produce video that shows up on the Discovery Channel, you have to get it right
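The unlocked-audio point deserves a number: at 48 kHz, an NTSC frame lasts exactly 1001/30000 s, which is 1601.6 samples, so no fixed per-frame count works. One simple way to keep the timebases locked (a Python sketch; the canonical DV "locked audio" scheme uses a fixed 1602/1601 cycle, but cumulative integer rounding gives an equally valid allocation) is to hand each frame the difference of exact running totals, so every 5 frames sum to exactly 8008 samples:

```python
def samples_for_frame(n, rate=48000):
    # NTSC frame duration is exactly 1001/30000 s, i.e. 1601.6 samples
    # at 48 kHz. Cumulative integer rounding gives each frame 1601 or
    # 1602 samples while keeping audio and video timebases in lockstep.
    def total_through(k):          # exact sample count through k frames
        return k * rate * 1001 // 30000
    return total_through(n + 1) - total_through(n)
```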
- Jan
Great info, Jan -- thanks!
Can you say something about the question of whether M-JPEG supports a soundtrack? It's not clear from the specs if it does.
Thanks!
Mark
--------------------------
Mark Guzdial : Georgia Tech : College of Computing : Atlanta, GA 30332-0280
Associate Professor - Learning Sciences & Technologies
Collaborative Software Lab - http://coweb.cc.gatech.edu/csl/
(404) 894-5618 : Fax (404) 894-0673 : guzdial@cc.gatech.edu
http://www.cc.gatech.edu/gvu/people/Faculty/Mark.Guzdial.html
Mark Guzdial wrote:
Great info, Jan -- thanks!
Can you say something about the question of whether M-JPEG supports a soundtrack? It's not clear from the specs if it does.
In QuickTime Pro you have two M-JPEG codecs that both support sound.
Karl
Can you say something about the question of whether M-JPEG supports a soundtrack? It's not clear from the specs if it does.
M-JPEG the codec format only has to do with video. Common file formats that wrap M-JPEG compressed video frames like AVI or QuickTime do have audio streams too.
An MPEG video stream is video only, although an MPEG system stream (what we think of as an MPEG file) is a combination of the MPEG video stream and audio streams.
DV format actually intertwines video and audio pretty tightly. Native DV format is actually the format used to write blocks on a DV tape, which is why things are so intertwined. It's common for editing programs to extract out the audio part and put it in a separate stream, either as a pre-edit step or else while transferring the DV data into a platform local wrapping file format, like AVI or QuickTime.
- Jan
Jan,
Thank you for the *very* helpful information on M-JPEG! I saw something on the jpeg.org site that suggested that there was an ongoing attempt to create a standard for M-JPEG but, with several strong competing formats floating around, it might be many years before everyone complies with the new standard...
I looked at the AVI file format extensions relating to M-JPEG. Like QuickTime, it's packed with all kinds of options and variations--useful for the professional, but they certainly complicate things...
Given the current lack of a universal M-JPEG format, the lack of a universal file format, the lack of soundtrack support (except as part of a QT or AVI file), and the fact that I'm not even sure that the individual frames of M-JPEG are compatible with our still-image JPEG plugin, I'm satisfied that creating a new file format for Squeak JPEG movies is appropriate. If and when we write converters between Squeak's JPEG movie files and AVI, QT, and/or the Berkeley multimedia group file formats for M-JPEG, writing the Squeak side will be trivial.
My primary goal for Squeak's JPEG movies was to create a simple, authorable movie format in Squeak that kids, teachers, and hobbyists could have fun with. By keeping it extremely simple, I hope to encourage experimentation with iMovie-like editors written in Squeak, the ability to create 3-D movies by rendering frames into a JPEG movie, and even simple tasks like scaling movies to a new size or cropping them (as Mark Guzdial discussed doing with GraphicConverter)--all with complete portability across Squeak platforms (assuming the Independent JPEG Group's library is as portable as it's supposed to be). As long as we have at least one way to import movies into Squeak--and that's doable via individual frames, as well as from MPEG movies--then we've achieved that goal. Of course, being able to import from additional formats would add convenience, but I suspect that there are formats that would be more useful to import than M-JPEG, such as DV. (Although I've never worked with digital video in ANY form myself--this is new territory for me.) The ability to export from Squeak movies into some widely used format would also be useful. Again, using QT Pro, I think that can be done by exporting the Squeak movie as individual frames, but there might be a more convenient format.
Jan, can you suggest a simple but widely supported import/export format? It has to be something we can encode from Squeak, of course. Maybe M-JPEG AVI or QT? Although there *are* open-source C libraries for doing MPEG encoding, it's my understanding that various patents apply to the MPEG encoding process, especially to MP3 sound encoding, so we would need to get permission from the patent holders to distribute such code with Squeak.
-- John
P.S. Jan wrote:
- all these details can be ignored if you just want to play little videos on your computer screen; if you want to produce video that shows up on the Discovery Channel, you have to get it right
Yep, I appreciate that doing digital video seriously is a major undertaking--all the more after reading your message! Fortunately, the goal of Squeak JPEG movies is just to play with video on the computer screen. Thus, we can avoid all those messy things like frame interlace, non-square pixels, partial scan lines, etc.
I saw something on the jpeg.org site that suggested that there was an ongoing attempt to create a standard for M-JPEG but, with several strong competing formats floating around, it might be many years before everyone complies with the new standard...
I view M-JPEG as a format at the end of its life cycle. I doubt there will be any new standards for it. For a while, it was the technology balance point because reasonably priced chips to do compression/decompression were available. Now you can get reasonably priced chips that do MPEG-I/II and DV formats, which are technically superior.
I'm not even sure that the individual frames of M-JPEG are compatible with our still-image JPEG plugin, I'm satisfied that creating a new file format for Squeak JPEG movies is appropriate.
The M-JPEG compressed frames in an AVI or QuickTime file are NOT directly compatible with JPEG files. Some of the header tags are different, and M-JPEGs all use a common Huffman table. The color space is a bit different too (range-limited YCrCb instead of YUV).
I'd tend to agree that a unique Squeak movie format is the way to go (see below), with import/export. Some of the theory behind current formats would be very appropriate to reuse.
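One way to see the header difference Jan describes is to walk the marker segments of a frame: a standalone JPEG file carries its own DHT (Huffman table) segments, while typical M-JPEG frames omit them and rely on a table supplied out of band. A rough Python sketch of such a scan (it ignores restart markers and stops at SOS, since entropy-coded data follows with no length field):

```python
def jpeg_segments(data):
    # Walk the FF-marker segments of a JPEG byte stream, yielding
    # (marker, payload) pairs. Stops at SOS (0xDA), after which comes
    # entropy-coded data with no segment-length prefix.
    assert data[:2] == b'\xff\xd8', 'missing SOI marker'
    i = 2
    while i + 4 <= len(data):
        assert data[i] == 0xFF, 'expected marker byte'
        marker = data[i + 1]
        length = int.from_bytes(data[i + 2:i + 4], 'big')  # includes itself
        yield marker, data[i + 4:i + 2 + length]
        if marker == 0xDA:
            break
        i += 2 + length

def carries_huffman_tables(data):
    # M-JPEG frames commonly have no DHT (0xC4) segment at all.
    return any(marker == 0xC4 for marker, _ in jpeg_segments(data))
```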
Jan, can you suggest a simple but widely supported import/export format? It has to be something we can encode from Squeak, of course. Maybe M-JPEG AVI or QT? Although there *are* open-source C libraries for doing MPEG encoding, it's my understanding that various patents apply to the MPEG encoding process, especially to MP3 sound encoding, so we would need to get permission from the patent holders to distribute such code with Squeak.
I see the QuickTime file format is documented at http://developer.apple.com/techpubs/quicktime/qtdevdocs/QTFF/qtff.html although I don't know if there are any legal issues. Uncompressed frames would be the simplest codec format for import/export, with PCM sound. The "Photo JPEG" codec might be very close or even identical to the common JPEG file format. Even though there's lots of complexity defined in the QuickTime file format, I believe the required stuff is pretty simple. Much of the info in a QuickTime file can be ignored or not written.
I'd avoid MPEG patent dangers, they are quite serious about defending their territory.
I haven't looked at recent Squeak code, so ignore all this if it's already done... For a close match to the existing Squeak architecture, you could store movies in a hierarchical binary object stream format: basically, the output from multiple ReferenceStreams, with the chunks listed in a directory of file offsets. Each video frame/sound chunk could be a different object tree. ReferenceStream doesn't seem to have any way to prioritize the ordering of objects written, so you would need to keep each subtree small. Conceptually you might just want to have ReferenceStream write a whole movie, with meta-info objects, video frame objects, and sound chunk objects. Realistically, you need to control the ordering of little subtrees, and also allow reading of individual subtrees for playback. See below for more comments on streaming.
To manipulate the movie, you load the objects that represent the movie meta data (which includes an OrderedCollection of frames chunk offsets), and then load image/sound/whatever chunks dynamically. ReferenceStream looks like it already knows how to read/write multiple object trees, all that's needed is some way to randomly access chunks in the file for editing, like a chunk directory at the end with the starting offset of each chunk.
The goal would be to allow playing the movie as a stream, from the beginning, without random seeks, yet still access each frame without scanning the whole file for editing. So the ORDERING of object subtrees is really important, as is having appropriate random-access chunk directories. Having some playback meta info at the beginning of the file, then frame/sound chunks, with a frame directory object tree at the end, might do the trick. MPEG files (even I-Frame only) are not so good for editing because they lack this random-access directory. Being able to write an output movie file as a stream would be desirable too; then you can stream a Squeak movie file from a live camera to a remote Squeak for live playback over a network (i.e. videoconferencing).
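A minimal sketch of that layout (in Python, with an invented 'SQMV' tag — nothing to do with any real Squeak format): chunks are written in playback order with no seeks, a directory of (kind, offset, size) entries follows them, and the file ends with a fixed-size pointer to the directory so an editor can seek straight to any chunk while a streaming writer never has to go back:

```python
import io
import struct

MAGIC = b'SQMV'  # hypothetical file tag

def write_movie(stream, chunks):
    # Write (kind, payload) chunks sequentially, then a directory of
    # (kind, offset, size) entries, then an 8-byte pointer to the
    # directory's start. Pure forward writing: streaming-friendly.
    stream.write(MAGIC)
    directory = []
    for kind, payload in chunks:
        directory.append((kind, stream.tell(), len(payload)))
        stream.write(payload)
    dir_start = stream.tell()
    for kind, offset, size in directory:
        stream.write(struct.pack('>4sQQ', kind, offset, size))
    stream.write(struct.pack('>Q', dir_start))

def read_directory(stream):
    # One seek to the tail pointer, one seek to the directory: an editor
    # can then fetch any chunk randomly without scanning the file.
    stream.seek(-8, io.SEEK_END)
    dir_end = stream.tell()
    (dir_start,) = struct.unpack('>Q', stream.read(8))
    stream.seek(dir_start)
    count = (dir_end - dir_start) // 20   # 4 + 8 + 8 bytes per entry
    return [struct.unpack('>4sQQ', stream.read(20)) for _ in range(count)]
```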
Getting the file format to work nicely for random-access editing, streaming playback (no seeks), and streaming writing (no seeks) is the magic of a good video file format. There are also multiple streams (at least sound and video) that you want interleaved into the master stream, so you don't have to buffer huge amounts of one stream to get the next chunk of another stream. The interleaving of streams is closely related to timing. You can't just say that for every video frame you'll emit 1000 bytes of sound samples: the sound may be compressed to a variable size, so 1000 bytes can represent variable amounts of time. Really you need to interleave streams with the timebase for each stream in sync, so the file has the video frame for 0:42:12.10 next to the sound that gets played at 0:42:12.10. Playback software can buffer things some, to account for different latencies in video playback versus sound playback, but if the two streams aren't timebase-synced, the buffer requirements get bigger and bigger as the movie plays, and you may have to start doing a disk seek for a sound chunk and another seek for the video chunk, which is very bad for performance and smooth playback. Streaming of the video file is also impossible if the video and sound interleave aren't timebase-synced.
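The timebase-synced interleave described above is just a timestamp-ordered merge of the per-stream chunk sequences; a tiny Python sketch:

```python
import heapq

def interleave(*streams):
    # Merge already-ordered (timestamp, payload) chunk streams into one
    # sequence ordered by timestamp, so the sound played at time t sits
    # next to the video frame shown at t and playback never seeks ahead.
    return list(heapq.merge(*streams, key=lambda chunk: chunk[0]))
```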
Collecting large amounts of directory data to emit before groups of frames is undesirable too, as you would have to buffer variable-sized frames from future file positions, which introduces latency in live video processing. This suggests directory info should come after the frames (or at the very end, to prevent interruptions in the smooth flow of video/audio data). If you've been watching the CNN interviews with reporters on satellite videoconference phones, you'll notice a couple of seconds between when the local newscaster asks a question and you start to hear the response; this is because of the video compression latency. Having the Squeak video format work well with videoconferencing would be really nice.
For editing, movie frame and sound chunks need to be loaded as needed from any point randomly, probably cached for a while, and dumped when memory is getting full, maybe by the GC (is there an LRU cache collection object that pays attention to memory consumption?). Generating megabytes of garbage objects per second might be hard on the GC, although the number of objects and references to follow might not be very big (sound samples and video frames could be non-pointer objects).
The issue is that you can't keep the whole movie in memory, so you need to have parts of the object tree loaded and dumped as needed, which seems like a very similar problem to code modules or projects dynamically coming and going. Movies are unique in that streaming playback is desirable, both for use over a network and just to optimize disk throughput (a random disk seek per frame would be bad).
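The memory-aware LRU cache Jan asks about is straightforward to sketch; here is a Python version with an explicit byte budget (class and method names invented for illustration):

```python
from collections import OrderedDict

class ChunkCache:
    # Tiny LRU cache for decoded movie chunks: keeps the most recently
    # used chunks in memory and evicts the oldest once over budget.
    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.used = 0
        self.chunks = OrderedDict()   # insertion order == LRU order

    def get(self, key, load):
        # Return the cached chunk for `key`, calling load(key) on a miss.
        if key in self.chunks:
            self.chunks.move_to_end(key)   # mark as most recently used
            return self.chunks[key]
        data = load(key)
        self.chunks[key] = data
        self.used += len(data)
        while self.used > self.max_bytes:
            _, evicted = self.chunks.popitem(last=False)  # drop oldest
            self.used -= len(evicted)
        return data
```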
Offhand, it seems like some Smalltalk wizard could put this all together pretty quick. I'd work on it myself, but am currently furiously working on a paying device driver project, with a deadline real soon now.
- Jan
Take a look at www.ogg.org for open-source codecs. They already have an audio spec which can be used by most current rippers and players, and they are working on a video spec (codenamed Tarkin).
Russell
-----Original Message-----
From: squeak-dev-admin@lists.squeakfoundation.org [mailto:squeak-dev-admin@lists.squeakfoundation.org] On Behalf Of Jan Bottorff
Sent: Saturday, 1 December 2001 11:23 PM
To: squeak-dev@lists.squeakfoundation.org
Subject: Re: Movie-JPEG and other video info
Jan Bottorff janb@pmatrix.com wrote:
- alpha is important, and generally ignored by most compressed formats, uncompressed+alpha is probably the ideal working video format
If someone is thinking of turning Squeak into the ultimate video processing environment (I wish!), please also consider the possibility of more than 8 bits per color plane for film and scientific work.
Ted