
Floats (was Re: Dreamworks, etc.)

This discussion is connected to the gimp-developer-list.gnome.org mailing list which is provided by the GIMP developers and not related to gimpusers.com.


Dreamworks, Shrek, and the need for 16 Bit Marc (A.) Lehmann 01 Nov 01:36
  Dreamworks, Shrek, and the need for 16 Bit Guillermo S. Romero / Familia Romero 01 Nov 18:35
   Floats (was Re: Dreamworks, etc.) Nick Lamb 02 Nov 09:16
    Floats (was Re: Dreamworks, etc.) Sven Neumann 02 Nov 13:34
    Floats (was Re: Dreamworks, etc.) Marc (A.) Lehmann 02 Nov 13:46
     Floats (was Re: Dreamworks, etc.) Stephen J Baker 04 Nov 14:52
      Floats (was Re: Dreamworks, etc.) Steinar H. Gunderson 04 Nov 14:47
      Floats (was Re: Dreamworks, etc.) Sven Neumann 04 Nov 16:34
       Floats (was Re: Dreamworks, etc.) Steinar H. Gunderson 04 Nov 16:42
        Floats (was Re: Dreamworks, etc.) Sven Neumann 04 Nov 16:51
         Floats (was Re: Dreamworks, etc.) Steinar H. Gunderson 04 Nov 17:02
         Floats (was Re: Dreamworks, etc.) Stephen J Baker 04 Nov 18:11
       Floats (was Re: Dreamworks, etc.) Stephen J Baker 04 Nov 18:10
        Floats (was Re: Dreamworks, etc.) Sven Neumann 04 Nov 18:02
  Dreamworks, Shrek, and the need for 16 Bit Stephen J Baker 01 Nov 19:47
Dreamworks, Shrek, and the need for 16 Bit RW Hawkins 01 Nov 01:46
  Dreamworks, Shrek, and the need for 16 Bit Sven Neumann 01 Nov 02:11
   Dreamworks, Shrek, and the need for 16 Bit Robert L Krawitz 01 Nov 14:18
Dreamworks, Shrek, and the need for 16 Bit RW Hawkins 01 Nov 17:13
Dreamworks, Shrek, and the need for 16 Bit Nathan Carl Summers 01 Nov 18:38
  Dreamworks, Shrek, and the need for 16 Bit Guillermo S. Romero / Familia Romero 01 Nov 18:44
Dreamworks, Shrek, and the need for 16 Bit Piotr Legiecki 04 Nov 08:14
3DC279F6.5040105@ozemail.co... 07 Oct 20:21
  Dreamworks, Shrek, and the need for 16 Bit Sven Neumann 01 Nov 14:16
Marc (A.) Lehmann
2002-11-01 01:36:10 UTC (over 21 years ago)

Dreamworks, Shrek, and the need for 16 Bit

Just FYI (I have no specific goal with this mail ;): I met some guy from Dreamworks ("Shrek") at the LWE in Frankfurt, and he told me that their whole rendering infrastructure is 8 bit, including intermediate results (so the whole of Shrek was done at 8 bits, with a later dynamic adjustment of the results into the necessary range).

He also told me that they want to go to 16 bits, since 8 bits is only ok for exclusively-rendered movies, that 8 bit intermediate results do hurt a lot, and that they do use gimp for some unnamed adjustments and especially for creating textures, where gimp works extremely well ;)

And finally he told me that the need for 16 bit and floating point is there in many but not most cases, so one _can_ get along without it, at least for rendered scenes.

RW Hawkins
2002-11-01 01:46:11 UTC (over 21 years ago)

Dreamworks, Shrek, and the need for 16 Bit

True, 8 bit is OK for "animation" work (like Shrek) but for live action it just does not cut it. This is why Film Gimp supports 16 bit. Any house doing live action work has likely long since switched over to 16 bit.

As a photographer, my results are significantly better if I do the scan and the initial level adjustment in 16 bit and convert to 8 bit later to use the better tools of the current GIMP.

(just a note of encouragement for those working on 16bit support, we need it!)

-RW

pcg@goof.com (Marc (A.) Lehmann) wrote:

Just FYI (I have no specific goal with this mail ;): I met some guy from Dreamworks ("Shrek") at the LWE in Frankfurt, and he told me that their whole rendering infrastructure is 8 bit, including intermediate results (so the whole of Shrek was done at 8 bits, with a later dynamic adjustment of the results into the necessary range).

He also told me that they want to go to 16 bits, since 8 bits is only ok for exclusively-rendered movies, that 8 bit intermediate results do hurt a lot, and that they do use gimp for some unnamed adjustments and especially for creating textures, where gimp works extremely well ;)

And finally he told me that the need for 16 bit and floating point is there in many but not most cases, so one _can_ get along without it, at least for rendered scenes.

Sven Neumann
2002-11-01 02:11:09 UTC (over 21 years ago)

Dreamworks, Shrek, and the need for 16 Bit

Hi,

RW Hawkins writes:

True, 8 bit is OK for "animation" work (like Shrek) but for live action it just does not cut it. This is why Film Gimp supports 16 bit. Any house doing live action work has likely long since switched over to 16 bit.

As a photographer, my results are significantly better if I do the scan and the initial level adjustment in 16 bit and convert to 8 bit later to use the better tools of the current GIMP.

(just a note of encouragement for those working on 16bit support, we need it!)

we are aware of the need and will turn to it as soon as 1.4 is out of the door. Until then, here's an idea that would probably help: What if we improve the file plug-ins that read file types that support higher color depths (like TIFF) in such a way that they allow simple adjustments before the data is propagated down to 8 bit? I was thinking of something like the Levels tool. Do you think it would be possible to perform a reasonable first color adjustment only by looking at a histogram? In that case it should be relatively easy to add that functionality to some of the file plug-ins. Since this wouldn't need any support from the GIMP core (albeit perhaps some helper functions in libgimp and libgimpwidgets), this could happen for GIMP-1.4. What do you think?
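The histogram-only adjustment Sven describes can be sketched in a few lines: pick black and white points from clipped percentiles of the 16-bit data, stretch between them, and only then reduce to 8 bits. This is an illustrative Python sketch, not code from any GIMP plug-in; the function name and the 0.5% clip default are invented:

```python
def auto_levels_16_to_8(samples, clip=0.005):
    """Map 16-bit samples to 8-bit, stretching the range between the
    darkest and brightest `clip` percentiles (a histogram-only guess)."""
    ordered = sorted(samples)
    n = len(ordered)
    black = ordered[int(n * clip)]                       # low percentile
    white = ordered[min(n - 1, int(n * (1.0 - clip)))]   # high percentile
    span = max(1, white - black)
    # Stretch to 0..255 and clamp; this is the only lossy step.
    return [min(255, max(0, (s - black) * 255 // span)) for s in samples]
```

A real plug-in would build the histogram incrementally while reading the file instead of sorting every sample, but the black/white-point logic would be the same.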

Salut, Sven

Sven Neumann
2002-11-01 14:16:03 UTC (over 21 years ago)

Dreamworks, Shrek, and the need for 16 Bit

Hi,

David Hodson writes:

What if
we improve the file plug-ins that read file types that support higher color depths (like TIFF) in such a way that they allow simple adjustments before the data is propagated down to 8 bit?

My Cineon/DPX plugin does exactly this. (The Cineon and DPX formats are standards for scanned movie film.) The source is at:

http://www.ozemail.com.au/~hodsond/cineon.html

The documentation is a little out of date (sorry).

is there a way to interactively set black/grey/white points on load or do you just use known values? Do you think it would be useful to have an interface like the Levels tool in the plug-in? Is it correct to assume that the same values will be used for all frames of a scanned movie, or are they adapted to scenes or even individual frames? Perhaps you could tell us a bit about the workflow involved here.

Salut, Sven

Robert L Krawitz
2002-11-01 14:18:47 UTC (over 21 years ago)

Dreamworks, Shrek, and the need for 16 Bit

On another note about 16 bits, I've been thinking about writing a standalone version of the Print plugin that could interactively print a file outside of the GIMP. I had this in mind more for Macintosh OS X, but would this be of more general use?

RW Hawkins
2002-11-01 17:13:59 UTC (over 21 years ago)

Dreamworks, Shrek, and the need for 16 Bit

That might not be a bad interim solution. My workflow goes something like this:
Scan with little to no adjustments
Level correct, possibly a minor overall color correction
Convert to 8 bit
Create dodge/burn masks etc.
Sharpen
Output

So I could do most of my 16 bit work with a histogram. Of course, I know others who do considerable color correction in that first step, especially people scanning film negatives (I use only positive transparencies, so my scans are pretty close color-balance-wise), so they might not be fully satisfied, but it's a start!

-RW

Sven Neumann wrote:

Hi,

RW Hawkins writes:

True, 8 bit is OK for "animation" work (like Shrek) but for live action it just does not cut it. This is why Film Gimp supports 16 bit. Any house doing live action work has likely long since switched over to 16 bit.

As a photographer, my results are significantly better if I do the scan and the initial level adjustment in 16 bit and convert to 8 bit later to use the better tools of the current GIMP.

(just a note of encouragement for those working on 16bit support, we need it!)

we are aware of the need and will turn to it as soon as 1.4 is out of the door. Until then, here's an idea that would probably help: What if we improve the file plug-ins that read file types that support higher color depths (like TIFF) in such a way that they allow simple adjustments before the data is propagated down to 8 bit? I was thinking of something like the Levels tool. Do you think it would be possible to perform a reasonable first color adjustment only by looking at a histogram? In that case it should be relatively easy to add that functionality to some of the file plug-ins. Since this wouldn't need any support from the GIMP core (albeit perhaps some helper functions in libgimp and libgimpwidgets), this could happen for GIMP-1.4. What do you think?

Salut, Sven

Guillermo S. Romero / Familia Romero
2002-11-01 18:35:10 UTC (over 21 years ago)

Dreamworks, Shrek, and the need for 16 Bit

pcg@goof.com (2002-11-01 at 0136.10 +0100):

Just FYI (I have no specific goal with this mail ;): I met some guy from Dreamworks ("Shrek") at the LWE in Frankfurt, and he told me that their whole rendering infrastructure is 8 bit, including intermediate results (so the whole of Shrek was done at 8 bits, with a later dynamic adjustment of the results into the necessary range).

I guess they work with linear data all the way. Mainly because I have been trying some tricks with a 3D app, and they went boom until I told the app to stop using gamma.

And finally he told me that the need for 16 bit and floating point is there in many but not most cases, so one _can_ get along without it, at leats for rendered scenes.

But not for mixing real footage and renders at the same time, and not for a badly tuned render either. I am reading and getting info about this; it seems linear and high range is best. If not, you have to choose how you damage the data, but you can hardly avoid it. Cineon is 10 bit and non-linear, digital photo cameras are starting to give RAW dumps with more than 8 bit, and some places use 32 bit float already (I would have said Dreamworks would have too... or at least 16 bit int)...

Why all this rant? The more info, the better. I am trying to write about all this, so users know what GIMP can do and how to solve the problems (or get the least noticeable error), and coders can get info about desired usage.

GSR

Nathan Carl Summers
2002-11-01 18:38:03 UTC (over 21 years ago)

Dreamworks, Shrek, and the need for 16 Bit

On Fri, 1 Nov 2002, Guillermo S. Romero / Familia Romero wrote:

I think it should be visual, a window with the image in 8 bit, and controls that decide how to get that 8 bit from the original 16 or 32. Basically black & white points and a curve. I say visual, cos it could mean what one does in the lab, but digital (load some copies, adjust each one at will, and mask and mix all the copies to get the final image).

Would error-diffusing dithering be an option people would like?
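For the 16-to-8-bit reduction under discussion, error diffusion would look roughly like this. This is an illustrative Floyd-Steinberg sketch on one grayscale channel, not taken from any existing GIMP code; the function name is invented:

```python
def fs_dither_16_to_8(pixels, width):
    """Quantize row-major 16-bit grayscale `pixels` to 8 bits, pushing
    each pixel's quantization error onto its unprocessed neighbours
    with the classic Floyd-Steinberg 7/3/5/1 weights."""
    buf = [float(p) for p in pixels]
    height = len(pixels) // width
    out = [0] * len(pixels)
    for y in range(height):
        for x in range(width):
            i = y * width + x
            new = min(255, max(0, round(buf[i] / 257.0)))  # 65535/255 == 257
            out[i] = new
            err = buf[i] - new * 257.0
            if x + 1 < width:
                buf[i + 1] += err * 7 / 16
            if y + 1 < height:
                if x > 0:
                    buf[i + width - 1] += err * 3 / 16
                buf[i + width] += err * 5 / 16
                if x + 1 < width:
                    buf[i + width + 1] += err * 1 / 16
    return out
```

A flat area whose 16-bit value falls between two 8-bit steps comes out as a mix of the two neighbouring codes instead of snapping to one, which is exactly what hides banding in smooth gradients.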

Rockwalrus

Guillermo S. Romero / Familia Romero
2002-11-01 18:44:18 UTC (over 21 years ago)

Dreamworks, Shrek, and the need for 16 Bit

rock@gimp.org (2002-11-01 at 0938.03 -0800):

Would error-diffusing dithering be an option people would like?

Yes. Niklas should know a lot. :]

He gave me a case in which it was really bad, and no need to import anything; it was possible to get it with current tools (gradient and proper colours). I managed to find a way to fix it, but a very poor man's one (the Spread filter, to fake dithering).

So with higher-range formats it could happen all too easily.

GSR

Stephen J Baker
2002-11-01 19:47:32 UTC (over 21 years ago)

Dreamworks, Shrek, and the need for 16 Bit

On Fri, 1 Nov 2002, Marc wrote:

Just FYI (I have no specific goal with this mail ;): I met some guy from Dreamworks ("Shrek") at the LWE in Frankfurt, and he told me that their whole rendering infrastructure is 8 bit, including intermediate results (so the whole of Shrek was done at 8 bits, with a later dynamic adjustment of the results into the necessary range).

He also told me that they want to go to 16 bits, since 8 bits is only ok for exclusively-rendered movies, that 8 bit intermediate results do hurt a lot, and that they do use gimp for some unnamed adjustments and especially for creating textures, where gimp works extremely well ;)

And finally he told me that the need for 16 bit and floating point is there in many but not most cases, so one _can_ get along without it, at least for rendered scenes.

It makes a lot of difference whether those 8 bits are before or after gamma correction. 8 bits pre-gamma is definitely not enough for some things - 8 bits post-gamma should be enough for most applications.
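Stephen's point is easy to quantify. The toy calculation below (gamma 2.2 assumed purely for illustration) counts how many distinct 8-bit codes the darkest 1% of the linear range maps to when the 8 bits are taken before versus after gamma encoding:

```python
GAMMA = 2.2

def encode_linear(v):
    """8 bits taken before gamma: straight scaling of linear light in [0, 1]."""
    return round(v * 255)

def encode_gamma(v):
    """8 bits taken after gamma encoding of the same value."""
    return round((v ** (1.0 / GAMMA)) * 255)

# Sample the darkest 1% of the linear range finely and count the
# distinct 8-bit codes each scheme produces there.
dark = [i / 100000.0 for i in range(1000)]          # 0.0 .. 0.00999
linear_codes = len({encode_linear(v) for v in dark})
gamma_codes = len({encode_gamma(v) for v in dark})
```

Under these assumptions the linear scheme collapses the whole shadow region onto a handful of codes while the gamma-encoded scheme keeps a few dozen, which is why 8 bits pre-gamma posterizes shadows so badly.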

----
Steve Baker                      (817)619-2657 (Vox/Vox-Mail)
L3Com/Link Simulation & Training (817)619-2466 (Fax)
Work: sjbaker@link.com           http://www.link.com
Home: sjbaker1@airmail.net       http://www.sjbaker.org

Nick Lamb
2002-11-02 09:16:37 UTC (over 21 years ago)

Floats (was Re: Dreamworks, etc.)

On Fri, Nov 01, 2002 at 06:35:10PM +0100, Guillermo S. Romero / Familia Romero wrote:

some places use 32 bit float

This probably ought to be on our horizon too. Modern FPUs are very fast and RAM gets ever cheaper. Are there any concrete advantages (other than the 50% saving on storage) of 16-bit integers over 32-bit floats? We can't actually /display/ either of these things on conventional hardware, so there's no difference there.

Naive tests suggest that floats are significantly faster for a compositor or similar code { i.e. R' = R * (1-a) + r * a } on commodity x86 CPUs. Has anyone with better profiling knowledge than me tried this kind of thing inside an actual working image app ?
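For reference, the two candidate inner loops look like this. A pure-Python sketch just to make the comparison concrete (a real implementation would of course vectorize); the fixed-point rounding scheme is one common choice, not the only one:

```python
def blend_float(dst, src, alpha):
    """Nick's compositing step, R' = R*(1-a) + r*a, on float components
    in [0.0, 1.0] -- one multiply-add pair per component, no fixup."""
    return [d * (1.0 - alpha) + s * alpha for d, s in zip(dst, src)]

def blend_u16(dst, src, alpha_u16):
    """The same blend in 16-bit fixed point (alpha_u16 in 0..65535);
    the rounding constant and the divide are the integer overhead."""
    inv = 65535 - alpha_u16
    return [(d * inv + s * alpha_u16 + 32767) // 65535
            for d, s in zip(dst, src)]
```

The float version needs no rescaling step and never overflows, which is much of why it benchmarks well; the integer version pays for its smaller footprint with the widening multiply and divide per component.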

Nick.

Sven Neumann
2002-11-02 13:34:24 UTC (over 21 years ago)

Floats (was Re: Dreamworks, etc.)

Hi,

Nick Lamb writes:

On Fri, Nov 01, 2002 at 06:35:10PM +0100, Guillermo S. Romero / Familia Romero wrote:

some places use 32 bit float

This probably ought to be on our horizon too. Modern FPUs are very fast and RAM gets ever cheaper. Are there any concrete advantages (other than the 50% saving on storage) of 16-bit integers over 32-bit floats? We can't actually /display/ either of these things on conventional hardware, so there's no difference there.

that's why GEGL has supported floats from the very beginning. Looking at the current code in CVS I see:

gegl-color-model-gray-u16.h gegl-color-model-gray-u8.h
gegl-color-model-gray.h
gegl-color-model-rgb-float.h
gegl-color-model-rgb-u16.h
gegl-color-model-rgb-u8.h
gegl-color-model-rgb.h

Looks like something to start with.

Salut, Sven

Marc (A.) Lehmann
2002-11-02 13:46:32 UTC (over 21 years ago)

Floats (was Re: Dreamworks, etc.)

On Sat, Nov 02, 2002 at 08:16:37AM +0000, Nick Lamb wrote:

This probably ought to be on our horizon too. Modern FPUs are very fast and RAM gets ever cheaper.

And caches get slower... and RAM is _slow_.

I'm not saying we shouldn't also support float; I just wanted to point out that performance is very much dependent on cache optimizations, as every fortran programmer knows ;)

Are there any concrete advantages (other than the 50% saving on storage) for 16-bit integers vs 32-bit float? We can't actually /display/ either of these things on conventional hardware so there's no difference there.

I think that we very well can display 16 bits or higher; after all, we can have 16 bit linear data and output 8 bit gamma-corrected.
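That display path is cheap, too: a single 65536-entry lookup table covers every possible 16-bit linear value. A sketch with gamma 2.2 assumed purely for illustration (a real pipeline would use the monitor's actual transfer curve):

```python
GAMMA = 2.2

# One-time table: 16-bit linear sample -> 8-bit gamma-corrected code.
DISPLAY_LUT = [round(((v / 65535.0) ** (1.0 / GAMMA)) * 255)
               for v in range(65536)]

def to_display(row):
    """Convert a row of 16-bit linear samples for an 8-bit display:
    one table lookup per sample, no per-pixel arithmetic."""
    return [DISPLAY_LUT[v] for v in row]
```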

Piotr Legiecki
2002-11-04 08:14:17 UTC (over 21 years ago)

Dreamworks, Shrek, and the need for 16 Bit

Sven Neumann wrote:

we are aware of the need and will turn to it as soon as 1.4 is out of the door. Until then, here's an idea that would probably help: What if we improve the file plug-ins that read file types that support higher color depths (like TIFF) in such a way that they allow simple adjustments before the data is propagated down to 8 bit? I was thinking of something like the Levels tool. Do you think it would be possible to perform a reasonable first color adjustment only by looking at a histogram? In that case it should be relatively easy to add that functionality to some of the file plug-ins. Since this wouldn't need any support from the GIMP core (albeit perhaps some helper functions in libgimp and libgimpwidgets), this could happen for GIMP-1.4. What do you think?

My 10 cents. I think it is worth looking at these pages:

http://tme.szczecin.pl/~jacek/index1.html

They explain how to make good scans, step by step. You may see there that gimp has some weak points, like 8 bits per channel. It could be a good solution to make a good (working in 16 bits) plug-in to read Photoshop amp files (curves) and also a good (working in 16 bits) color profiles plug-in (to read *.icc files). Neither needs any visual preview; they are almost automatic tools. And they by all means should work in 16 bits, because they are applied to linear (i.e. RAW, not gamma corrected) files directly from the scanner.
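A curves adjustment of that kind reduces to interpolating a handful of control points into a full-range lookup table and applying it to the 16-bit data before any gamma step. A sketch under those assumptions; the `(input, output)` control-point format is invented for illustration, not the real .amp file layout:

```python
def curve_lut_16(points):
    """Build a 65536-entry LUT from sorted (input, output) control points
    spanning the 16-bit range (endpoints assumed at 0 and 65535), using
    straight linear interpolation between neighbouring points."""
    points = sorted(points)
    lut = []
    j = 0
    for v in range(65536):
        # Advance to the segment containing v.
        while j + 1 < len(points) - 1 and v > points[j + 1][0]:
            j += 1
        (x0, y0), (x1, y1) = points[j], points[j + 1]
        t = (v - x0) / float(x1 - x0)
        lut.append(round(y0 + t * (y1 - y0)))
    return lut

def apply_curve(samples, lut):
    """Apply the curve to linear 16-bit scanner data, sample by sample."""
    return [lut[s] for s in samples]
```

Because the whole adjustment is a table lookup, it costs the same whether the curve came from an interactive dialog or straight from a file, which is why no visual preview is strictly needed.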

There is an amp plug-in (ugly, not working) and a color management plug-in (ugly, but working to some extent). Anyway, such plug-ins would be very useful not only now but also in future versions of gimp.

Regards Piotr Legiecki

Steinar H. Gunderson
2002-11-04 14:47:00 UTC (over 21 years ago)

Floats (was Re: Dreamworks, etc.)

On Mon, Nov 04, 2002 at 07:52:05AM -0600, Stephen J Baker wrote:

It might be interesting to consider doing some of the work of compositing in the graphics card - where the hardware supports it.

The latest generations of nVidia and ATI cards have support for full floating point pixel operations and floating point frame buffers. If you stored each layer as a texture and wrote a 'fragment shader' to implement the GIMP's layer combiners, you'd have something that would be *FAR* faster than anything you could do in the CPU.

Note that you still have:
- Texture upload issues (a lot of data has to go to the card every time you change a layer; think previewing here)
- Texture _size_ issues (most cards support `only' up to 2048x2048)
- Fragment shader length issues (okay, the NV30 and Radeon9700 will both support a lot longer shaders than you have today)
- Limitations on the number of textures (the Radeon9700 has a maximum of 8 texture coordinate sets, and 16 textures... for GIMP use, one would probably be limited to those 8, though)
- Some of GIMP's layer effects would probably be quite hard to implement in a fragment shader (simple blends etc. would be okay, though)
- Problems with internal GIMP tiling vs. the card's internal swizzling (if one settles for OpenGL, which would be quite natural given that GIMP is most common on *nix-based systems, one would have to `detile' the image into a linear framebuffer, _then_ upload to OpenGL)

Now, none of these are probably _real_ show-stoppers -- but I still think implementing this would be quite difficult. I'm not really sure how well GIMP's internal display architecture would work with this either.

That being said, it could be an interesting project :-)

/* Steinar */

Stephen J Baker
2002-11-04 14:52:05 UTC (over 21 years ago)

Floats (was Re: Dreamworks, etc.)

On Sat, 2 Nov 2002, Marc wrote:

On Sat, Nov 02, 2002 at 08:16:37AM +0000, Nick Lamb wrote:

This probably ought to be on our horizon too. Modern FPUs are very fast and RAM gets ever cheaper.

And caches get slower... and RAM is _slow_.

I'm not saying we shouldn't also support float; I just wanted to point out that performance is very much dependent on cache optimizations, as every fortran programmer knows ;)

It might be interesting to consider doing some of the work of compositing in the graphics card - where the hardware supports it.

The latest generations of nVidia and ATI cards have support for full floating point pixel operations and floating point frame buffers. If you stored each layer as a texture and wrote a 'fragment shader' to implement the GIMP's layer combiners, you'd have something that would be *FAR* faster than anything you could do in the CPU.

Of course only people with sexy new graphics cards would reap the benefits - but I presume that people who care enough to want high precision pixels are probably professionals to whom a $500 graphics card wouldn't be an obstacle if it helped their work.

Going to full floating point per colour component would conclusively remove any issues of precision through any reasonable number of layering operations.

I think that we very well can display 16 bits or higher; after all, we can have 16 bit linear data and output 8 bit gamma-corrected.

That's certainly true - and there are graphics cards out there that can do 10 bits per component per pixel.

----
Steve Baker                      (817)619-2657 (Vox/Vox-Mail)
L3Com/Link Simulation & Training (817)619-2466 (Fax)
Work: sjbaker@link.com           http://www.link.com
Home: sjbaker1@airmail.net       http://www.sjbaker.org

Sven Neumann
2002-11-04 16:34:26 UTC (over 21 years ago)

Floats (was Re: Dreamworks, etc.)

Hi,

"Stephen J Baker" writes:

It might be interesting to consider doing some of the work of compositing in the graphics card - where the hardware supports it.

The latest generations of nVidia and ATI cards have support for full floating point pixel operations and floating point frame buffers. If you stored each layer as a texture and wrote a 'fragment shader' to implement the GIMP's layer combiners, you'd have something that would be *FAR* faster than anything you could do in the CPU.

apart from the point that we don't have the resources to implement graphics-card-dependent GIMP backends, this approach would only help to speed up the display pipeline. This is because reading from a gfx card framebuffer is incredibly slow. If you need access to the results of the gfx operations, you'd better perform them on the CPU. Only if the result is merely to be displayed does it make sense to use the gfx card. This is at least true for typical consumer hardware.

Of course only people with sexy new graphics cards would reap the benefits - but I presume that people who care enough to want high precision pixels are probably professionals to whom a $500 graphics card wouldn't be an obstacle if it helped their work.

well, if you could come up with the detailed specs of these sexy new graphics cards we could certainly consider using these features. However, judging from my experience as a DirectFB developer, I'd say there's not much chance that the hardware vendors will give away these details unless you sign a pile of ugly contracts that effectively forbid using the knowledge in an open source project.

Salut, Sven

Steinar H. Gunderson
2002-11-04 16:42:22 UTC (over 21 years ago)

Floats (was Re: Dreamworks, etc.)

On Mon, Nov 04, 2002 at 04:34:26PM +0100, Sven Neumann wrote:

well, if you could come up with the detailed specs of these sexy new graphics cards we could certainly consider using these features. However, judging from my experience as a DirectFB developer, I'd say there's not much chance that the hardware vendors will give away these details unless you sign a pile of ugly contracts that effectively forbid using the knowledge in an open source project.

Umm... Both ATI and nVidia document their OpenGL extensions quite well. You'd definitely implement a system like this using OpenGL (or D3D if you wanted to do something Windows-only, but most likely you don't :-) ).

Hopefully, there will be a unified fragment shader extension quite soon, too -- ATM you'll have to write one backend per card. :-(

/* Steinar */

Sven Neumann
2002-11-04 16:51:01 UTC (over 21 years ago)

Floats (was Re: Dreamworks, etc.)

Hi,

"Steinar H. Gunderson" writes:

Hopefully, there will be a unified fragment shader extension quite soon, too -- ATM you'll have to write one backend per card. :-(

a unified extension to what?

Salut, Sven

Steinar H. Gunderson
2002-11-04 17:02:23 UTC (over 21 years ago)

Floats (was Re: Dreamworks, etc.)

On Mon, Nov 04, 2002 at 04:51:01PM +0100, Sven Neumann wrote:

Hopefully, there will be a unified fragment shader extension quite soon, too -- ATM you'll have to write one backend per card. :-(

a unified extension to what?

To OpenGL.

/* Steinar */

Sven Neumann
2002-11-04 18:02:31 UTC (over 21 years ago)

Floats (was Re: Dreamworks, etc.)

Hi,

"Stephen J Baker" writes:

On 4 Nov 2002, Sven Neumann wrote:

well, if you could come up with the detailed specs of these sexy new graphics cards we could certainly consider to use these features.

The fragment shaders are part of the OpenGL extensions for these boxes and are fully documented. For nVidia hardware, you program them in a C-like language called 'Cg'. However, it would probably be wiser to wait for the release of OpenGL 2.0, which will have a portable C-like shader language.

IIRC, they may document the OpenGL extensions, but they don't tell you how to program the hardware registers. So in order to use the features you refer to, you are forced to use a driver for which the source code is not available. You are thus bound to the X11 release supported by this module and have no chance to fix possible problems. I wouldn't call that well documented, and I can only strongly suggest that every open source hacker refuse to support such attempts to keep valuable information closed.

Salut, Sven

Stephen J Baker
2002-11-04 18:10:49 UTC (over 21 years ago)

Floats (was Re: Dreamworks, etc.)

On 4 Nov 2002, Sven Neumann wrote:

well, if you could come up with the detailed specs of these sexy new graphics cards we could certainly consider to use these features.

The fragment shaders are part of the OpenGL extensions for these boxes and are fully documented. For nVidia hardware, you program them in a C-like language called 'Cg'. However, it would probably be wiser to wait for the release of OpenGL 2.0, which will have a portable C-like shader language.

However, judging from my experience as a DirectFB developer, I'd say there's not much chance that the hardware vendors will give away these details unless you sign a pile of ugly contracts that effectively forbid using the knowledge in an open source project.

That is not a problem in this case...although some of the other issues mentioned earlier might be.

----
Steve Baker                      (817)619-2657 (Vox/Vox-Mail)
L3Com/Link Simulation & Training (817)619-2466 (Fax)
Work: sjbaker@link.com           http://www.link.com
Home: sjbaker1@airmail.net       http://www.sjbaker.org

Stephen J Baker
2002-11-04 18:11:21 UTC (over 21 years ago)

Floats (was Re: Dreamworks, etc.)

On 4 Nov 2002, Sven Neumann wrote:

Hi,

"Steinar H. Gunderson" writes:

Hopefully, there will be a unified fragment shader extension quite soon, too -- ATM you'll have to write one backend per card. :-(

a unified extension to what?

...to OpenGL.

----
Steve Baker                      (817)619-2657 (Vox/Vox-Mail)
L3Com/Link Simulation & Training (817)619-2466 (Fax)
Work: sjbaker@link.com           http://www.link.com
Home: sjbaker1@airmail.net       http://www.sjbaker.org