
comparing gimp speed


comparing gimp speed David Neary 12 Nov 14:17
comparing gimp speed William Skaggs 16 Nov 19:33
20041111200108.A83F110DDF@l... 07 Oct 20:23
  comparing gimp speed Dov Kruger 11 Nov 21:28
   comparing gimp speed Carol Spears 11 Nov 22:24
   comparing gimp speed Sven Neumann 11 Nov 23:41
    comparing gimp speed Joao S. O. Bueno Calligaris 15 Nov 02:08
     comparing gimp speed Sven Neumann 16 Nov 12:07
      comparing gimp speed David Neary 16 Nov 14:07
       comparing gimp speed Michael Schumacher 16 Nov 14:27
        comparing gimp speed Carol Spears 16 Nov 18:31
         comparing gimp speed Michael Schumacher 16 Nov 18:52
          comparing gimp speed Carol Spears 16 Nov 18:59
          comparing gimp speed Sven Neumann 16 Nov 20:12
           comparing gimp speed Michael Schumacher 16 Nov 20:54
            comparing gimp speed Sven Neumann 16 Nov 23:51
            comparing gimp speed Sven Neumann 17 Nov 00:30
      comparing gimp speed Soren Hauberg 16 Nov 16:24
       comparing gimp speed Sven Neumann 16 Nov 20:11
      comparing gimp speed Alan Horkan 16 Nov 19:50
       comparing gimp speed David Neary 16 Nov 20:23
     comparing gimp speed Øyvind Kolås 16 Nov 14:51
   comparing gimp speed Alastair M. Robinson 12 Nov 00:16
    comparing gimp speed Steve Stavropoulos 12 Nov 01:13
     comparing gimp speed Daniel Egger 12 Nov 11:34
      comparing gimp speed Øyvind Kolås 12 Nov 18:42
     comparing gimp speed Tino Schwarze 12 Nov 11:37
      comparing gimp speed Laxminarayan Kamath 12 Nov 12:19
       comparing gimp speed Tino Schwarze 12 Nov 12:30
      comparing gimp speed Sven Neumann 12 Nov 13:12
       comparing gimp speed Tino Schwarze 12 Nov 14:23
        comparing gimp speed Robert L Krawitz 12 Nov 14:55
       comparing gimp speed Daniel Egger 12 Nov 15:11
        comparing gimp speed Sven Neumann 12 Nov 15:51
         comparing gimp speed Daniel Egger 12 Nov 18:08
          comparing gimp speed Manish Singh 12 Nov 18:51
           comparing gimp speed Daniel Egger 12 Nov 20:11
            comparing gimp speed Manish Singh 13 Nov 08:48
             comparing gimp speed Daniel Egger 14 Nov 13:51
           comparing gimp speed Laxminarayan Kamath 13 Nov 07:45
            comparing gimp speed Manish Singh 13 Nov 08:40
            comparing gimp speed miriam clinton (iriXx) 13 Nov 20:50
            comparing gimp speed Daniel Egger 14 Nov 14:13
    memory usage [was: comparing gimp speed] Sven Neumann 12 Nov 11:55
     memory usage [was: comparing gimp speed] Adam D. Moss 12 Nov 13:04
      memory usage Sven Neumann 12 Nov 15:36
200411181203.21548.gwidion@... 07 Oct 20:23
  comparing gimp speed Øyvind Kolås 19 Nov 02:59
Dov Kruger
2004-11-11 21:28:14 UTC (over 19 years ago)

comparing gimp speed

I noticed that GIMP is very slow for large images compared with Photoshop. We were recently processing some 500 MB images, and on a fast machine with 2 GB, GIMP is crawling along, while on a slower machine with only 512 MB, Photoshop is considerably faster. I attributed it to a massive amount of optimization work in Photoshop, using SSE instructions, etc., but then noticed that the default viewer in Red Hat lets me load images far faster even than Adobe, and zoom in and out with the mouse wheel in real time.

Granted, because you are editing the image, not just displaying it, there has to be some slowdown, but I wondered if there is any way I can tweak GIMP, or whether I somehow have it massively de-optimized. When I first set up gimp-2.0, I tried both 128 and 512 MB tile cache sizes. 512 seems to work a lot better, but it's still pretty bad. Any idea as to the source of Adobe's speed advantage?

thanks, Dov

Carol Spears
2004-11-11 22:24:22 UTC (over 19 years ago)

comparing gimp speed

On Thu, Nov 11, 2004 at 03:28:14PM -0500, Dov Kruger wrote:

I noticed that GIMP is very slow for large images compared with Photoshop. We were recently processing some 500 MB images, and on a fast machine with 2 GB, GIMP is crawling along, while on a slower machine with only 512 MB, Photoshop is considerably faster. I attributed it to a massive amount of optimization work in Photoshop, using SSE instructions, etc., but then noticed that the default viewer in Red Hat lets me load images far faster even than Adobe, and zoom in and out with the mouse wheel in real time.

is this gimp on windows or gimp on linux?

you might need to change operating systems to have your gimp really work for you. that asks too much? don't use it then.

carol

Sven Neumann
2004-11-11 23:41:24 UTC (over 19 years ago)

comparing gimp speed

Hi,

Dov Kruger writes:

I noticed that GIMP is very slow for large images compared with Photoshop. We were recently processing some 500 MB images, and on a fast machine with 2 GB, GIMP is crawling along, while on a slower machine with only 512 MB, Photoshop is considerably faster. I attributed it to a massive amount of optimization work in Photoshop, using SSE instructions, etc., but then noticed that the default viewer in Red Hat lets me load images far faster even than Adobe, and zoom in and out with the mouse wheel in real time.

Granted, because you are editing the image, not just displaying it, there has to be some slowdown, but I wondered if there is any way I can tweak GIMP, or whether I somehow have it massively de-optimized. When I first set up gimp-2.0, I tried both 128 and 512 MB tile cache sizes. 512 seems to work a lot better, but it's still pretty bad. Any idea as to the source of Adobe's speed advantage?

If you are processing large images and have 2GB available, why do you cripple GIMP by limiting it to only 512 MB of tile cache size?

Sven

Alastair M. Robinson
2004-11-12 00:16:49 UTC (over 19 years ago)

comparing gimp speed

Hi,

Dov Kruger wrote:

Granted, because you are editing the image, not just displaying it, there has to be some slowdown, but I wondered if there is any way I can tweak GIMP, or whether I somehow have it massively de-optimized. When I first set up gimp-2.0, I tried both 128 and 512 MB tile cache sizes. 512 seems to work a lot better, but it's still pretty bad. Any idea as to the source of Adobe's speed advantage?

It's true that GIMP struggles with large images. I frequently need to edit 400 or even 600 dpi full-colour A4 pages on a 256 MB machine, and that's sailing pretty close to the wind.

The most important thing to do is balance your tile cache setting, as you've already found. You want it large enough that GIMP doesn't have to use its own virtual memory, but not so large that the OS has to use virtual memory to accommodate it. On a 2 GB machine, I'd set it to about 1.5 GB, assuming GIMP has pretty much free rein over the machine.

The other thing that can help a lot is to set the maximum number of undo levels right down to 1, but set the maximum undo memory to something a bit higher - maybe 50 or 100 MB. That way you still get plenty of undo levels on small images, but don't waste memory with a long undo history for huge images. I've found that this solves most of the disk thrashing problems with GIMP/Win98 and A4 scans. Linux seems to have better memory management to start with, but this tweak can help here too.
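
(As a concrete illustration, both knobs live in gimprc; the option names below are believed to match GIMP 2.0, but verify them against `man gimprc` for your version:)

    # hypothetical excerpt from ~/.gimp-2.0/gimprc -- option names assumed
    # from the GIMP 2.0 preferences dialog, check `man gimprc` before use
    (tile-cache-size 1536M)  # large enough that GIMP avoids its own swap
    (undo-levels 1)          # minimal number of undo levels to keep
    (undo-size 100M)         # extra memory allowed for the undo history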

All the best, --
Alastair M. Robinson

Steve Stavropoulos
2004-11-12 01:13:46 UTC (over 19 years ago)

comparing gimp speed

On Thu, 11 Nov 2004 23:16:49 +0000, Alastair M. Robinson wrote:

The most important thing to do is balance your tile cache setting, as you've already found. You want it large enough that GIMP doesn't have to use its own virtual memory, but not so large that the OS has to use virtual memory to accommodate it. On a 2 GB machine, I'd set it to about 1.5 GB, assuming GIMP has pretty much free rein over the machine.

If the OS has better virtual memory than what is available to gimp, then you would want to use that one. On Linux, I think in most cases you would want to use the swap partitions/files (often spread over multiple disks) available to the OS.
If you want to keep the system friendly to other apps as well, you might consider a tile cache setting smaller than the available memory...

PS (slightly off-topic): many people have more than one disk in their system. In that case they should consider fstab entries like these:

    /dev/hdf1    swap  swap  defaults,pri=0       0 0
    /dev/hdg5    swap  swap  defaults,pri=0       0 0
    /stuff/swap  swap  swap  defaults,loop,pri=0  0 0

(you might describe it as "raid0 swap with three disks")

Daniel Egger
2004-11-12 11:34:35 UTC (over 19 years ago)

comparing gimp speed

On 12.11.2004, at 01:13, Steve Stavropoulos wrote:

If the OS has better virtual memory than what is available to gimp, then you would want to use that one. On Linux, I think in most cases you would want to use the swap partitions/files (often spread over multiple disks) available to the OS.

GIMP does tile swapping by hand, so if you hit the limits you'll get a lot of files in the .gimp directory of your home directory, or wherever you set the swap area to.

I once tried to modify this to have the tile cache use mmap'd memory with file backing, to truly let the OS decide where to put the tiles (memory or file); however, it performed really badly, so I ditched the code.
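
(For the curious, the core of that experiment boils down to something like the following; a minimal sketch of the concept only, not the ditched code itself:)

    /* Back a pixel buffer with a file and let the OS decide which pages
     * stay in RAM -- the idea described above, reduced to its essence. */
    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main (void)
    {
      size_t         size = (size_t) 4096 * 4096 * 4;  /* 4k x 4k RGBA */
      int            fd   = open ("/tmp/tile.swap", O_RDWR | O_CREAT, 0600);
      unsigned char *pixels;

      if (fd < 0 || ftruncate (fd, size) < 0)
        return 1;

      pixels = mmap (NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      if (pixels == MAP_FAILED)
        return 1;

      memset (pixels, 0xff, size);   /* pages are faulted in on demand */

      munmap (pixels, size);
      close (fd);
      unlink ("/tmp/tile.swap");
      return 0;
    }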

I wonder whether photoshop works with tiles at all or simply uses a linear memory segment and lets the OS do the rest.

It would be really cool if the pixel data addressing were pluggable so one could easily write a different storage backend. Off the top of my head, there are several schemes I'd like to try:
- A simple linear memory segment with COW for new layers
- ditto, but with RLE compression (and thus more complex addressing)
- Line-based addressing with COW and aliasing for duplicate lines, with a LUT for each line
- Planar memory segments (Shoot now! ;))

I don't know exactly what GEGL will buy us, because we certainly need a change from "store those 32bit RGBA values" to something more variable, but IIRC GEGL is only about pixel composition, not storage.

Servus,
Daniel

Tino Schwarze
2004-11-12 11:37:00 UTC (over 19 years ago)

comparing gimp speed

Hi there,

On Fri, Nov 12, 2004 at 02:13:46AM +0200, Steve Stavropoulos wrote:

The most important thing to do is balance your tile cache setting, as you've already found. You want it large enough that GIMP doesn't have to use its own virtual memory, but not so large that the OS has to use virtual memory to accommodate it. On a 2 GB machine, I'd set it to about 1.5 GB, assuming GIMP has pretty much free rein over the machine.

If the OS has better virtual memory than what is available to gimp, then you would want to use that one. On Linux, I think in most cases you would want to use the swap partitions/files (often spread over multiple disks) available to the OS.

You don't want to use virtual memory if you don't have to. So give as much memory to GIMP as possible without making the OS swap.

If you want to keep the system friendly to other apps as well, you might consider a tile cache setting smaller than the available memory...

PS (slightly off-topic): many people have more than one disk in their system. In that case they should consider fstab entries like these:

    /dev/hdf1    swap  swap  defaults,pri=0       0 0
    /dev/hdg5    swap  swap  defaults,pri=0       0 0
    /stuff/swap  swap  swap  defaults,loop,pri=0  0 0

(you might describe it as "raid0 swap with three disks")

It has always been pointed out that the access patterns of GIMP are very specific to image operations - the tile cache is there because it gives a significant advantage compared to the OS's virtual memory. Besides, you get the advantage that the tile cache can be a lot larger than usable physical memory[1].

BTW: You can easily try OS-only virtual memory by setting the tile cache very large (like all of your swap), then compare whether it performs better than limiting the tile cache to physical memory (minus some amount for OS, Window environment and GIMP itself).

Bye, Tino.

[1] Working ain't gonna be fun - I once had an A1 poster at 300 dpi on a 6 GB machine, and GIMP's swap grew as large as another 6 GB, since GIMP didn't seem to be able to use more than 2 or 3 GB of memory altogether. (Is there a known limitation regarding maximum usable memory?)

Sven Neumann
2004-11-12 11:55:46 UTC (over 19 years ago)

memory usage [was: comparing gimp speed]

Hi,

there are probably a few things we could try to do to reduce memory usage when working with large images. The main problem here is that if you open an RGB image (no alpha channel) with say 1000 x 1000 pixels, you would expect GIMP to use 1000 * 1000 * 3 bytes to store the image data. You will however notice that GIMP instead needs 8 bytes per pixel. In addition to the 3 bpp for the RGB layer it allocates a projection the size of the image. This projection holds the result of compositing the layer stack. It is always allocated 4 bpp. Additionally a selection mask is allocated which adds another byte per pixel.
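
(Spelling that out: 1000 * 1000 * 3 bytes for the RGB layer, plus 1000 * 1000 * 4 bytes for the RGBA projection, plus 1000 * 1000 * 1 byte for the selection mask, gives 8 MB where one would have expected 3 MB.)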

So what could be done to improve this? We could for example try to get around the need for a projection for the case where people are working with a single layer only. Instead of displaying from the projection, we could display directly from the layer. Of course we would still have to allocate the projection as soon as you start to work with layers or floating selections, but at least we could reduce the memory footprint that is needed to open the image and have a first look at it. A more elegant way to implement this is to share the projection tiles with layer tiles whenever possible (i.e. when the topmost layer is in Normal mode and the tile is completely opaque).

It might also help to allocate the selection lazily. That is to not allocate the tiles at all until the selection mask is altered. This might actually happen already, I am not sure about it.

Sven

Laxminarayan Kamath
2004-11-12 12:19:03 UTC (over 19 years ago)

comparing gimp speed

What about making gimp do a benchmark on the machine and then letting it automatically decide what method to use for that swapping/tiling stuff? <Hey, now don't beat me. I confess I actually know none of the stuff>

Tino Schwarze
2004-11-12 12:30:07 UTC (over 19 years ago)

comparing gimp speed

On Fri, Nov 12, 2004 at 04:49:03PM +0530, Laxminarayan Kamath wrote:

What about making gimp do a benchmark on the machine and then letting it automatically decide what method to use for that swapping/tiling stuff? <Hey, now don't beat me. I confess I actually know none of the stuff>

Unfortunately, this is not practical because:

a) you don't want to beat the machine for several minutes just to figure such things out
b) you don't know the usual workload of the machine. It may be a single-user machine (which, BTW, will get a memory upgrade tomorrow) or a multi-user machine where currently no one is working, or maybe 10 people are computing like hell
c) I/O-intensive stuff may be running (the user is burning a CD, updatedb from locate is running, a virus scanner runs, or the machine is busy copying a DVD full of video or imagery)
d) the Linux kernel is upgraded tomorrow to one with a totally new VM (which is not very unlikely ;-> )

... add your own

Bye, Tino.

Adam D. Moss
2004-11-12 13:04:00 UTC (over 19 years ago)

memory usage [was: comparing gimp speed]

Sven Neumann wrote:

You will however notice that GIMP instead needs 8 bytes per pixel. In addition to the 3 bpp for the RGB layer it allocates a projection the size of the image. This projection holds the result of compositing the layer stack. It is always allocated 4 bpp. Additionally a selection mask is allocated which adds another byte per pixel.

(As an aside, once upon a time, we did have such a thing as greyscale projections.)

So what could be done to improve this? We could for example try to get around the need for a projection for the case where people are working with a single layer only. Instead of displaying from the projection, we could display directly from the layer.

I think we used to do this, too. At least, I struggled for a long time making the projection tiles be initialised to a lazy copy-on-write reference to the bottom layer (IIRC the tile hinting system would also preserve these cheap refs even when there were multiple layers where the upper layers were largely transparent). There were some annoying corner cases (duplicating a zoomed-in image) which I now don't remember if we ever got right. :(

But it still seems like the elegant way to do this (erk, but it probably did rely on the projection being able to assume the same depth as the image).

It might also help to allocate the selection lazily. That is to not allocate the tiles at all until the selection mask is altered. This might actually happen already, I am not sure about it.

Not sure. Might be able to do this elegantly (elegance again being in the eye of the beholder) by initialising all of the selection tiles to a COW of the same 'blank' tile (and doing the same in the 'clear selection' operation, etc).
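
(A toy sketch of that scheme, with invented names rather than GIMP's real tile API: every tile starts out pointing at one shared blank tile and only gets private storage on its first write:)

    /* Copy-on-write tiles: all tiles share a single zeroed "blank" tile
     * until first written to.  Names and layout invented for
     * illustration; this is not GIMP's tile code. */
    #include <stdlib.h>
    #include <string.h>

    #define TILE_SIZE (64 * 64)

    typedef struct
    {
      unsigned char *data;        /* shared blank data or a private copy */
      int            is_private;  /* do we own `data`?                   */
    } Tile;

    static unsigned char blank[TILE_SIZE];  /* the one shared blank tile */

    static void
    tile_init (Tile *t)
    {
      t->data       = blank;      /* cheap: no allocation at all */
      t->is_private = 0;
    }

    static void
    tile_write (Tile *t, int offset, unsigned char value)
    {
      if (!t->is_private)         /* first write: copy-on-write */
        {
          t->data = malloc (TILE_SIZE);
          memcpy (t->data, blank, TILE_SIZE);
          t->is_private = 1;
        }
      t->data[offset] = value;
    }

    int main (void)
    {
      Tile selection[16];
      int  i;

      for (i = 0; i < 16; i++)
        tile_init (&selection[i]);         /* 16 tiles, zero pixel bytes */
      tile_write (&selection[3], 0, 255);  /* only tile 3 allocates      */
      return 0;
    }

Clearing the selection would then just be tile_init() again, as suggested above.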

--Adam

Sven Neumann
2004-11-12 13:12:33 UTC (over 19 years ago)

comparing gimp speed

Hi,

Tino Schwarze writes:

[1] Working ain't gonna be fun - I once had an A1 poster at 300 dpi on a 6 GB machine, and GIMP's swap grew as large as another 6 GB, since GIMP didn't seem to be able to use more than 2 or 3 GB of memory altogether. (Is there a known limitation regarding maximum usable memory?)

The operating system imposes a limit on the maximum amount of memory that can be allocated by a process. IIRC the limit is 3GB on Linux. Of course there's also a physical limit and you would need a 64bit CPU in order to use more than 4GB.

Sven

David Neary
2004-11-12 14:17:54 UTC (over 19 years ago)

comparing gimp speed

Hi,

Daniel Egger wrote:

It would be really cool if the pixel data addressing were pluggable so one could easily write a different storage backend. Off the top of my head, there are several schemes I'd like to try:
- A simple linear memory segment with COW for new layers
- ditto, but with RLE compression (and thus more complex addressing)
- Line-based addressing with COW and aliasing for duplicate lines, with a LUT for each line
- Planar memory segments (Shoot now! ;))

I don't know exactly what GEGL will buy us, because we certainly need a change from "store those 32bit RGBA values" to something more variable, but IIRC GEGL is only about pixel composition, not storage.

There are better people to talk about this than me (Dan, are you reading?) but part of gegl is about data representation, and that includes its representation in memory (tiles, scanlines, whatever). I know that Dan Rogers was working on a GeglTiledImage structure at one stage, which had its own tile manager. Given the object structure, perhaps some of the alternate schemes you describe could be accomplished by inheriting from GeglImage and implementing the extra bits.

Cheers, Dave.

Tino Schwarze
2004-11-12 14:23:49 UTC (over 19 years ago)

comparing gimp speed

Hi,

On Fri, Nov 12, 2004 at 01:12:33PM +0100, Sven Neumann wrote:

[1] Working ain't gonna be fun - I once had an A1 poster at 300 dpi on a 6 GB machine, and GIMP's swap grew as large as another 6 GB, since GIMP didn't seem to be able to use more than 2 or 3 GB of memory altogether. (Is there a known limitation regarding maximum usable memory?)

The operating system imposes a limit on the maximum amount of memory that can be allocated by a process. IIRC the limit is 3GB on Linux.

Ah, then it was probably this limit.

Of course there's also a physical limit and you would need a 64bit CPU in order to use more than 4GB.

There's PAE36 or high memory[1]. You only need a kernel compiled with 4GB or 64GB support (the machine was a Xeon with 6 GB).

Bye, Tino.

[1] Works like EMS from old DOS times. :-|

Robert L Krawitz
2004-11-12 14:55:01 UTC (over 19 years ago)

comparing gimp speed

Date: Fri, 12 Nov 2004 14:23:49 +0100
From: Tino Schwarze

Hi,

On Fri, Nov 12, 2004 at 01:12:33PM +0100, Sven Neumann wrote:

[1] Working ain't gonna be fun - I once had an A1 poster at 300 dpi on a 6 GB machine, and GIMP's swap grew as large as another 6 GB, since GIMP didn't seem to be able to use more than 2 or 3 GB of memory altogether. (Is there a known limitation regarding maximum usable memory?)

The operating system imposes a limit on the maximum amount of memory that can be allocated by a process. IIRC the limit is 3GB on Linux.

Ah, then it was probably this limit.

32-bit Linux, that is. Get an Opteron/Athlon 64 or another 64-bit processor and you don't have this kind of limit (it's much, much higher).

Of course there's also a physical limit and you would need a 64bit CPU in order to use more than 4GB.

There's PAE36 or high memory[1]. You only need a kernel compiled with 4GB or 64GB support (the machine was a Xeon with 6 GB).

That has nothing to do with process size limit.

Daniel Egger
2004-11-12 15:11:23 UTC (over 19 years ago)

comparing gimp speed

On 12.11.2004, at 13:12, Sven Neumann wrote:

The operating system imposes a limit on the maximum amount of memory that can be allocated by a process. IIRC the limit is 3GB on Linux.

Typically the split point (between user and kernel/peripheral memory) would be 2:2, but there is a way to easily get 3:1 (if you do not need the additional GByte for, say, a GPU framebuffer); hardcore users may also want to try a 3.5:0.5 split, but IIRC that is only possible with some nasty patch and has several limitations.

Of course there's also a physical limit and you would need a 64bit CPU in order to use more than 4GB.

Not necessarily; there are some CPU extensions for x86 CPUs which allow larger memory sizes by using extra-large pages (more overhead) or by providing additional bits for the paging tables, which allow for a maximum of 64 GByte on suitably equipped motherboards.

Servus,
Daniel

Sven Neumann
2004-11-12 15:36:56 UTC (over 19 years ago)

memory usage

Hi,

"Adam D. Moss" writes:

But it still seems like the elegant way to do this (erk, but it probably did rely on the projection being able to assume the same depth as the image).

At the moment the projection is always RGBA but the code to do grayscale and indexed projections hasn't been removed. I don't know if it would still work since it hasn't been used for years but it should still be there.

Sven

Sven Neumann
2004-11-12 15:51:39 UTC (over 19 years ago)

comparing gimp speed

Hi,

Daniel Egger writes:

Of course there's also a physical limit and you would need a 64bit CPU in order to use more than 4GB.

Not necessarily; there are some CPU extensions for x86 CPUs which allow larger memory sizes by using extra-large pages (more overhead) or by providing additional bits for the paging tables, which allow for a maximum of 64 GByte on suitably equipped motherboards.

That allows you to stuff more RAM into your box but you can still only give up to 4GB to a single process simply because you cannot handle more than 4GB in a 32bit address space.

Sven

Daniel Egger
2004-11-12 18:08:17 UTC (over 19 years ago)

comparing gimp speed

On 12.11.2004, at 15:51, Sven Neumann wrote:

That allows you to stuff more RAM into your box but you can still only give up to 4GB to a single process simply because you cannot handle more than 4GB in a 32bit address space.

You can, but not using the typical APIs. This is pretty important for database stuff....

Servus, Daniel

Øyvind Kolås
2004-11-12 18:42:44 UTC (over 19 years ago)

comparing gimp speed

On Fri, 12 Nov 2004 11:34:35 +0100, Daniel Egger wrote:

It would be really cool if the pixel data addressing were pluggable so one could easily write a different storage backend. Off the top of my head, there are several schemes I'd like to try:

- A simple linear memory segment with COW for new layers
- ditto, but with RLE compression (and thus more complex addressing)
- Line-based addressing with COW and aliasing for duplicate lines, with a LUT for each line
- Planar memory segments (Shoot now! ;))

I don't know exactly what GEGL will buy us, because we certainly need a change from "store those 32bit RGBA values" to something more variable, but IIRC GEGL is only about pixel composition, not storage.

GEGL is about image compositing, not pixel compositing, so it has to deal with efficient memory representations as well. In my view of how things will be after full integration, gimp will use GEGL for all its image processing needs; even the paint tools will most likely be reimplemented to use GEGL.

The largest problem with making the image representation pluggable is that it either complicates op development (op is short for image operation, a plug-in in GEGL) or adds overhead due to the additional copying of values needed to provide a simple interface.

Layers (or their equivalents after GEGL integration) can theoretically be unbounded surfaces instead of square ones. There are various ways to do such sparse allocation of images, and IIRC the tile-based caching system Dan implemented in gegl/gegl/image would allow this.
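
(To illustrate the sparse idea with invented structures, not the actual gegl/image code: only tiles that have ever been touched get storage, and a lookup miss simply reads as transparent, so the canvas has no bounds:)

    /* Sparse tiles for an unbounded canvas: a small hash keyed on tile
     * coordinates.  Structures invented for illustration. */
    #include <stddef.h>

    #define BUCKETS 1024

    typedef struct SparseTile SparseTile;
    struct SparseTile
    {
      int            tx, ty;   /* tile coordinates, may be negative */
      unsigned char *pixels;   /* e.g. 64 * 64 * 4 bytes            */
      SparseTile    *next;     /* chaining for hash collisions      */
    };

    static SparseTile *buckets[BUCKETS];

    static unsigned
    tile_hash (int tx, int ty)
    {
      return ((unsigned) tx * 2654435761u ^ (unsigned) ty) % BUCKETS;
    }

    /* NULL means the tile was never painted: the caller treats it as
     * fully transparent, so any coordinate is valid. */
    static SparseTile *
    tile_lookup (int tx, int ty)
    {
      SparseTile *t = buckets[tile_hash (tx, ty)];

      while (t != NULL && (t->tx != tx || t->ty != ty))
        t = t->next;
      return t;
    }

    int main (void)
    {
      /* nothing painted yet: even absurd coordinates are simply empty */
      return tile_lookup (-1000000, 42) == NULL ? 0 : 1;
    }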

After integration of GEGL, various other speedups can be achieved, for instance by caching static portions of the compositing graph; other optimizations and rearrangements are also possible at the graph level.

/pippin

Manish Singh
2004-11-12 18:51:26 UTC (over 19 years ago)

comparing gimp speed

On Fri, Nov 12, 2004 at 06:08:17PM +0100, Daniel Egger wrote:

On 12.11.2004, at 15:51, Sven Neumann wrote:

That allows you to stuff more RAM into your box but you can still only give up to 4GB to a single process simply because you cannot handle more than 4GB in a 32bit address space.

You can, but not using the typical APIs. This is pretty important for database stuff....

Whose use case is very different from GIMP's. And you do use the typical APIs, but the user does have to set up the shmfs on their own. And then you have to select between the shm segments yourself.

It's a whole bunch of contortions, and all pointless since amd64 hardware is competitively priced these days.

I've tried GIMPing a 6 GB image on a 64-bit platform (8 GB of ram) and it handles it just fine.

-Yosh

Daniel Egger
2004-11-12 20:11:54 UTC (over 19 years ago)

comparing gimp speed

On 12.11.2004, at 18:51, Manish Singh wrote:

You can, but not using the typical APIs. This is pretty important for database stuff....

Whose use case is very different than GIMP's. And you do use the typical
APIs, but the user does have to setup the shmfs on their own. And then you have to select between the shm segments yourself.

shm is a special case. I'm talking about allocating highmem segments.

It's a whole bunch of contortions, and all pointless since amd64 hardware is competitively priced these days.

Yep. ;)

I've tried GIMPing a 6 GB image on a 64-bit platform (8 GB of ram) and it handles it just fine.

Duh, my Dual Opteron has only 1 GB at the moment... ;)

Servus, Daniel

Laxminarayan Kamath
2004-11-13 07:45:22 UTC (over 19 years ago)

comparing gimp speed

Manish Singh
to Daniel, Sven, gimp-developer

On Fri, Nov 12, 2004 at 06:08:17PM +0100, Daniel Egger wrote: [...]
It's a whole bunch of contortions, and all pointless since amd64 hardware is competitively priced these days.

please don't concentrate only on those who can change PCs like shirts; concentrate on us poor people too. ;)

Manish Singh
2004-11-13 08:40:52 UTC (over 19 years ago)

comparing gimp speed

On Sat, Nov 13, 2004 at 12:15:22PM +0530, Laxminarayan Kamath wrote:

Manish Singh
to Daniel, Sven, gimp-developer

On Fri, Nov 12, 2004 at 06:08:17PM +0100, Daniel Egger wrote: [...]
It's a whole bunch of contortions, and all pointless since amd64 hardware is competitively priced these days.

please don't concentrate only on those who can change PCs like shirts; concentrate on us poor people too. ;)

Poor people can't afford > 4 GB of ram either, so the point is moot.

-Yosh

Manish Singh
2004-11-13 08:48:20 UTC (over 19 years ago)

comparing gimp speed

On Fri, Nov 12, 2004 at 08:11:54PM +0100, Daniel Egger wrote:

On 12.11.2004, at 18:51, Manish Singh wrote:

You can, but not using the typical APIs. This is pretty important for database stuff....

Whose use case is very different than GIMP's. And you do use the typical
APIs, but the user does have to setup the shmfs on their own. And then you have to select between the shm segments yourself.

shm is a special case. I'm talking about allocating highmem segments.

So, what is the userspace API for this?

-Yosh

miriam clinton (iriXx)
2004-11-13 20:50:02 UTC (over 19 years ago)

comparing gimp speed

Laxminarayan Kamath wrote:

Manish Singh
to Daniel, Sven, gimp-developer

On Fri, Nov 12, 2004 at 06:08:17PM +0100, Daniel Egger wrote: [...]
It's a whole bunch of contortions, and all pointless since amd64 hardware is competitively priced these days.

please don't concentrate only on those who can change PCs like shirts; concentrate on us poor people too. ;)

true, this has always been the focus of GNU/Linux - right from the start, and there are still projects like Sisela and LOAF which load the kernel and basic apps from a floppy disc for primitive laptops (or wireless scanning ;)

mC~

Daniel Egger
2004-11-14 13:51:32 UTC (over 19 years ago)

comparing gimp speed

On 13.11.2004, at 08:48, Manish Singh wrote:

shm is a special case. I'm talking about allocating highmem segments.

So, what is the userspace API for this?

AFAIK there's no direct userspace helper to address highmem segments; one can only map them in the Linux kernel and provide them to userspace (or not).[1] While this does not lead to any particular improvement for userspace without a patched kernel, it does at least have the advantage that kernel buffers are allocated from highmem first.

If you need to address more than the typical limits (1, 2 or 3 GiB) per process, you will need to write a kernel module that communicates with userspace through some syscall or device.

In case you want to see some real improvement, have a look at [2], which contains a (probably outdated) patch to make a real 4 GiB available for userspace.

[1] http://www.skynet.ie/%7Emel/projects/vm/guide/html/understand/understand-html.html, chapters 3.4 and 10.
[2] http://lwn.net/Articles/39283/

Servus, Daniel

Daniel Egger
2004-11-14 14:13:59 UTC (over 19 years ago)

comparing gimp speed

On 13.11.2004, at 07:45, Laxminarayan Kamath wrote:

It's a whole bunch of contortions, and all pointless since amd64 hardware is competitively priced these days.

please don't concentrate only on those who can change PCs like shirts; concentrate on us poor people too. ;)

Actually my focus is on making things more modular, so one can choose which method to use and throw out unneeded stuff, thus saving memory. The mmap idea, for instance, would be a potential memory saver, since the implementation is much smaller than the tile caching/swapping code we have now and could be configured either to use space for a swapfile or to use the system swap instead. Unfortunately it would need some tuning to get decent performance, but since we do not have any plugging facilities here at the moment, the point is moot.

In any case, people are working on making everything much more modular and thus removing the resource cost of functionality which is not used. Granted, the abstraction and the use of GTK+ 2.x were a huge loss at first, but they have already paid off for normal users and will do so even more in the future, also for low-end machines and special uses like headless operation.

Interestingly, while there seems to be some demand, it is really seldom that someone mentions those requirements, and even rarer that someone affected by them works on it. So people, step up and show some participation!

Servus, Daniel

Joao S. O. Bueno Calligaris
2004-11-15 02:08:08 UTC (over 19 years ago)

comparing gimp speed

On Thursday 11 November 2004 20:41, Sven Neumann wrote:

Hi,

Dov Kruger writes:

I noticed that GIMP is very slow for large images compared with Photoshop. We were recently processing some 500 MB images, and on a fast machine with 2 GB, GIMP is crawling along, while on a slower machine with only 512 MB, Photoshop is considerably faster. I attributed it to a massive amount of optimization work in Photoshop, using SSE instructions, etc., but then noticed that the default viewer in Red Hat lets me load images far faster even than Adobe, and zoom in and out with the mouse wheel in real time.

Granted, because you are editing the image, not just displaying it, there has to be some slowdown, but I wondered if there is any way I can tweak GIMP, or whether I somehow have it massively de-optimized. When I first set up gimp-2.0, I tried both 128 and 512 MB tile cache sizes. 512 seems to work a lot better, but it's still pretty bad. Any idea as to the source of Adobe's speed advantage?

If you are processing large images and have 2GB available, why do you cripple GIMP by limiting it to only 512 MB of tile cache size?

The point here is no news to us. The GIMP is not yet as fast for large images as it could one day be.

I've put some thought into it these days (just thinking, no code), and one idea came up. My intent in writing this is that everybody keep it in mind when making the transition to GEGL, which will be a favorable time to implement it:

All images in the GIMP could be represented twice internally - one being the real image representation, and a second layer stack representing just what is being seen on the screen. All work that should "feel" realtime-like would be done first on the screen representation, and then processed in the background on the actual layer stack.

This would allow, overall, faster use of the tools, including the paint and color correction ones. It could also clean up some situations like the JPEG save preview layer and the darkening seen in the current crop tool, as these things would not be in the "real" image data, just in the display shadow.

In GEGL terms, that means two graphs for every image. Of course none of this is immediate, and I am thinking of a discussion that should mature between now and, say, some 3 or 4 months from now, if GEGL is to be put in the next release.

While the first impression may be that this would take up more memory and resources than having a single representation of the image, I'd like to offer the following numbers for consideration: a typical photo I open up for viewing/correcting is 2048x1576 (my camera's resolution). That would take up, in raw memory, no undo tiles considered, more than 9 megabytes for a single layer. Each of those bytes has to be "crunched" each time I make a small adjustment in the curves tool.

On the other hand, I view this same image in a window that is about 800x600 -> 1.5 MB in size.

Of course, care must be taken that this doesn't slow everything down.

I know this is no news, it is hard to do, and all that. But it is nonetheless a model that we have to keep in mind, for, at this point, it seems no less important than implementing tiles once was.

OK, I may also have gotten it all backwards, and there may be a way of optimizing the current model without two image graphs at all. :-) But it is still a discussion that should mature in the foreseeable future.


Regards,

Joao

Sven Neumann
2004-11-16 12:07:31 UTC (over 19 years ago)

comparing gimp speed

Hi,

while we are discussing this. Would anyone object if we changed the default tile cache size from 64MB to 128MB? Memory is becoming cheap these days and IMO it is reasonable to adapt the default value from time to time.

Sven

David Neary
2004-11-16 14:07:56 UTC (over 19 years ago)

comparing gimp speed

Hi Sven

Sven Neumann wrote:

while we are discussing this. Would anyone object if we changed the default tile cache size from 64MB to 128MB? Memory is becoming cheap these days and IMO it is reasonable to adapt the default value from time to time.

I think that's reasonable.

Cheers, Dave.

Michael Schumacher
2004-11-16 14:27:17 UTC (over 19 years ago)

comparing gimp speed

David Neary wrote:

Hi Sven

Sven Neumann wrote:

while we are discussing this. Would anyone object if we changed the default tile cache size from 64MB to 128MB? Memory is becoming cheap these days and IMO it is reasonable to adapt the default value from time to time.

I think that's reasonable.

More important than a specific default value is IMO that the docs describe exactly what this setting is for and provide some reasonable examples for different setups.

Currently, a user can't really figure out what this setting does, at least by reading the docs.

Michael

Øyvind Kolås
2004-11-16 14:51:37 UTC (over 19 years ago)

comparing gimp speed

On Sun, 14 Nov 2004 23:08:08 -0200, Joao S. O. Bueno Calligaris wrote:

The point here is no news to us. The GIMP is not yet as fast for large images as it could one day be.

I've put some thought into it these days (just thinking, no code), and one idea came up. My intent in writing this is that everybody keep it in mind when making the transition to GEGL, which will be a favorable time to implement it:

All images in the GIMP could be represented twice internally - one being the real image representation, and a second layer stack representing just what is being seen on the screen. All work that should "feel" realtime-like would be done first on the screen representation, and then processed in the background on the actual layer stack.

I think this all belongs within GEGL; these are actually optimizations that can happen within GEGL. gimp should just request what it wants and GEGL should magically do the right thing.

Reasoning and an explanation of the magic follows. I will be using "layer stack" and "compositing graph" interchangeably in this mail, since the layer stack / drawable tree will just be an abstraction layer above a graph.

We have our display, no data loaded:

Display

----------------------------

Then we load an image; either it is memory mapped from disk, or it is in memory, or it is decoded from a PNG each time. There is no need to care about it.

Display
|
Image_data

---------------------------

How is the image_data displayed? We might have a zoomed-out version that is clipped by the image window; we can imagine the Display now to have two implied nodes that crop and scale the image:

.-[Display]--.
|  display   |
|     |      |
|   scale    |
|     |      |
|   crop     |
`-----|------'
      |
  Image_data

-----------------------------------

when we want to do a color correction operation, it is inserted between the image_data and the display:

display
|
color_correct
|
image_data

-------------------------------------

thinking about it directly, we would assume that all the pixels in the image have to be color corrected with this scheme, but the graph presented here can be flattened (expanding the implied nodes within display) and reordered without affecting the final composited image:

display
|
color_correct
|
scale
|
crop
|
image_data

scale and crop are both affine operations; all affine operations should, if possible, be collapsed and moved towards the image data sources of the graph (at this point layers and all higher-level gimp data structures don't matter, it's just a large graph of operations to be processed):

display
|
color_correct
|
affine
|
image_data

when changing only color_correct's parameters, GEGL should be able, if it has enough cache memory available, to cache the image data resulting from the affine transform; thus only the changing part of the graph needs to be recomputed.

The point where such reorganizations become harder is for ops like blur, which depend on the spatial resolution of the data coming in. This essentially means that a "rewriting" pass over the graph needs to change the blur radius etc.; some operations cannot be reorganized like this at all.

The problems I outline would also occur with the dual-processing-graph approach, and the same metadata about the ops involved would be needed. Thus what I present here is really just saying that this optimization can happen later within GEGL, without changing how gimp uses it.
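
(A sketch of what caching the static portions could look like at the node level; all structures here are invented for illustration, not GEGL's real API. It assumes that changing a node's parameters marks that node, and everything downstream of it, dirty:)

    /* Each node remembers its last output; only dirty nodes recompute.
     * Invented structures, not GEGL's API. */
    #include <stddef.h>

    typedef struct Buffer Buffer;  /* opaque pixel buffer */
    typedef struct Node   Node;

    struct Node
    {
      Node   *input;   /* upstream node, NULL for data sources           */
      int     dirty;   /* set on parameter change, propagated downstream */
      Buffer *cache;   /* output from the previous evaluation            */
      Buffer *(*process) (Node *node, Buffer *in);
    };

    Buffer *
    node_evaluate (Node *node)
    {
      Buffer *in;

      /* Neither this node nor anything upstream changed (dirtiness is
       * propagated downstream), so the cached result is still valid. */
      if (!node->dirty && node->cache != NULL)
        return node->cache;

      in = node->input ? node_evaluate (node->input) : NULL;
      node->cache = node->process (node, in);
      node->dirty = 0;
      return node->cache;
    }

In the color correction example above, tweaking only color_correct's parameters leaves the affine node clean, so its cached buffer is simply reused.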

/pippin

PS: please excuse the lousy ASCII diagrams; gmail doesn't use a fixed-width font, and that makes them difficult.

Soren Hauberg
2004-11-16 16:24:17 UTC (over 19 years ago)

comparing gimp speed

Hi,
Sven Neumann wrote:

Hi,

while we are discussing this. Would anyone object if we changed the default tile cache size from 64MB to 128MB? Memory is becoming cheap these days and IMO it is reasonable to adapt the default value from time to time.

Wouldn't it make sense to check the amount of memory available on the machine before suggesting the tile cache size? A default tile cache of 128MB wouldn't be nice on a 128MB system (I guess).


/Soren

Carol Spears
2004-11-16 18:31:00 UTC (over 19 years ago)

comparing gimp speed

On Tue, Nov 16, 2004 at 02:27:17PM +0100, Michael Schumacher wrote:

More important than a specific default value is IMO that the docs describe exactly what this setting is for and provide some reasonable examples for different setups.

Currently, a user can't really figure out what this setting does, at least by reading the docs.

http://www.gimp.org/unix/howtos/tile_cache.html

i don't think this document could be improved much; the ideas are not simple. i do know that it also explained to my friend the reason that Photoshop7 interfered with her Outlook on an operating system which was not *nix.

carol

Michael Schumacher
2004-11-16 18:52:27 UTC (over 19 years ago)

comparing gimp speed

Carol Spears wrote:

On Tue, Nov 16, 2004 at 02:27:17PM +0100, Michael Schumacher wrote:

More important than a specific default value is IMO that the docs describe exactly what this setting is for and provide some reasonable examples for different setups.

Currently, a user can't really figure out what this setting does, at least by reading the docs.

http://www.gimp.org/unix/howtos/tile_cache.html

^^^^
This is part of the problem - some useful information is kept from users of other platforms.

i don't think this document could be improved much; the ideas are not simple. i do know that it also explained to my friend the reason that Photoshop7 interfered with her Outlook on an operating system which was not *nix.

At least it could become part of the gimp docs and be translated - just like the man pages.

Michael

Carol Spears
2004-11-16 18:59:04 UTC (over 19 years ago)

comparing gimp speed

On Tue, Nov 16, 2004 at 06:52:27PM +0100, Michael Schumacher wrote:

Carol Spears wrote:

http://www.gimp.org/unix/howtos/tile_cache.html

^^^^
This is part of the problem - some useful information is kept from users of other platforms.

i made this problem.

when i was first building the web site, i had no idea how windows worked. it is a few years later and i still have no idea how that operating system works. so it was written for *nix, and the fact that it worked for windows was discovered later.

completely due to my stupidity or lack of understanding of the system. i even had access to windows machines, so i cannot even claim lack of access as an excuse for my lack of understanding.

the web site was almost finished and a few problems like this remained. i think at one point i suggested that it be linked to from the windows pages.

so all windows users, please forgive me for my ignorance when i misfiled this information.

carol

William Skaggs
2004-11-16 19:33:08 UTC (over 19 years ago)

comparing gimp speed

Michael Schumacher wrote:

Currently, a user can't really figure out what this setting does, at least by reading the docs.

http://www.gimp.org/unix/howtos/tile_cache.html

^^^^
This is part of the problem - some useful information is kept from users of other platforms.

[ . . . ]

At least it could become part of the gimp docs and be translated - just like the man pages.

It already is part of the GIMP docs -- see

http://docs.gimp.org/en/ch02.html#gimp-using-setup

But maybe it would help to have a better pointer to it.

Best, -- Bill



Alan Horkan
2004-11-16 19:50:21 UTC (over 19 years ago)

comparing gimp speed

On Tue, 16 Nov 2004, Sven Neumann wrote:

Date: Tue, 16 Nov 2004 12:07:31 +0100
From: Sven Neumann
To: gimp-developer@lists.xcf.berkeley.edu
Subject: Re: [Gimp-developer] comparing gimp speed

Hi,

while we are discussing this. Would anyone object if we changed the default tile cache size from 64MB to 128MB? Memory is becoming cheap these days and IMO it is reasonable to adapt the default value from time to time.

Would it be difficult to query the operating system and to automatically set the tile cache size to some percentage (50%?) of available RAM?

Increasing the default size sounds sensible given that even most low-end computers come with at least 256MB of RAM. I don't know about other Linux distributions, but the memory recommendations for Fedora Core 2 are as follows:
Minimum for graphical: 192MB
Recommended for graphical: 256MB
http://fedora.redhat.com/docs/release-notes/fc2/x86/

- Alan

Sven Neumann
2004-11-16 20:11:53 UTC (over 19 years ago)

comparing gimp speed

Hi,

Soren Hauberg writes:

Wouldn't it make sense to check the amount of memory available on the machine before suggesting the tile cache size?

Haven't we discussed this like 200 times already?

A default tile cache of 128MB wouldn't be nice on a 128MB system (I guess).

It shouldn't hurt much since on a 128MB system you will need to have some amount of swap space anyway. Setting the tile cache to a large value doesn't mean that all this memory is used.

Actually, if you only have 128MB RAM in your system, setting the tile cache to 128MB is probably the best you can do.

Sven

Sven Neumann
2004-11-16 20:12:43 UTC (over 19 years ago)

comparing gimp speed

Hi,

Michael Schumacher writes:

At least it could become part of the gimp docs and be translated - just like the man pages.

The man pages are translated? Since when?

Sven

David Neary
2004-11-16 20:23:17 UTC (over 19 years ago)

comparing gimp speed

Hi,

Alan Horkan wrote:

Would it be difficult to query the operating system and to automatically set the tile cache size to some percentage (50%?) of available RAM?

In a portable way, impossible. Having different routines for each platform, perhaps. It would be nice if glib did something like this...
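
(The Linux/glibc routine, for instance, might look like this; note that _SC_PHYS_PAGES is a glibc extension rather than standard POSIX, which is exactly the portability problem:)

    /* Report total physical RAM and a naive 50% tile cache suggestion.
     * Linux/glibc only: _SC_PHYS_PAGES is not guaranteed by POSIX. */
    #include <stdio.h>
    #include <unistd.h>

    int main (void)
    {
      long pages     = sysconf (_SC_PHYS_PAGES);
      long page_size = sysconf (_SC_PAGE_SIZE);

      if (pages > 0 && page_size > 0)
        {
          long long total_mb = (long long) pages * page_size / (1024 * 1024);
          printf ("RAM: %lld MB, suggested tile cache: %lld MB\n",
                  total_mb, total_mb / 2);
        }
      else
        fprintf (stderr, "cannot determine physical memory here\n");

      return 0;
    }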

The other problem is that 50% of RAM (or even more) is reasonable for a single-user machine, but for a multi-user machine (a terminal server, for example) that might be completely inappropriate. You would set it to maybe 20 or 25% of RAM in that case, since you expect to have several instances of the GIMP open at the same time.

Increasing the default size sounds sensible given that even most low-end computers come with at least 256MB of RAM.

Computers which were low to mid range 2 years ago are still pretty common - that's a more reasonable target audience. But even 2 years ago 128M was usual and 256M was common.

Cheers, Dave.

Michael Schumacher
2004-11-16 20:54:31 UTC (over 19 years ago)

comparing gimp speed

Sven Neumann wrote:

Hi,

Michael Schumacher writes:

At least it could become part of the gimp docs and be translated - just like the man pages.

The man pages are translated? Since when?

No, they should be - and it would be good to have them in the docs, as almost no one seems to know about man pages anymore these days.

Michael

Sven Neumann
2004-11-16 23:51:34 UTC (over 19 years ago)

comparing gimp speed

Hi,

Michael Schumacher writes:

No, they should be - and it would be good to have them in the docs, as almost no one seems to know about man pages anymore these days.

We could consider this for the next release. The gimprc manpage is generated. Since most of these strings are also used as tooltips in the preferences dialog, a good deal of them are already marked for translation. With some more strings marked and some minor changes to app/config/gimpconfig-dump.c, we could generate internationalized man pages. I'm not sure if we should install those, but it could be useful to generate HTML from this; there are tools for that. Alternatively, we could teach gimp to generate the gimprc documentation in DocBook XML.

Sven

Sven Neumann
2004-11-17 00:30:12 UTC (over 19 years ago)

comparing gimp speed

Hi,

Michael Schumacher writes:

and it would be good to have them in the docs, as almost no one seems to know about man pages anymore these days.

The GIMP man-pages are online at

http://www.gimp.org/unix/man-gimp-2.0.html
http://www.gimp.org/unix/man-gimprc-2.0.html
http://www.gimp.org/unix/man-gimptool-2.0.html
http://www.gimp.org/unix/man-gimp-remote-2.0.html

linked from the bottom of http://www.gimp.org/unix/.

Sven

Øyvind Kolås
2004-11-19 02:59:02 UTC (over 19 years ago)

comparing gimp speed

On Thu, 18 Nov 2004 12:03:21 -0200, Joao S. O. Bueno Calligaris wrote:

Hi...

So, my point is that to get the best possible performance, a "realtime preview" for any tool not able to do its job in realtime would have to be applied after the crop and scale in this diagram - and then the Image Data would be changed in a background thread.

<snip>

scale down. Thus, even leaving most of the work to GEGL, a dual layer tree - one for displaying, and one for the actual data - may still be a good solution.

For real-time preview, there is only one tree, the tree for displaying; there is no need to calculate anything outside the view port, nor to operate on a full-size image. Even a brush stroke can be considered an op in GEGL; during interactive use, that op would have properties that are the path, the brush used, the paint mode, and the color used. The path would be grown incrementally.

For both approaches - a single compositing tree that is optimized according to the requested output size, or two separate pipelines - the same amount of calculation of brush size etc. needs to be done. The benefit of having a single unified tree is a cleaner design, without having to keep track of two data models in gimp at all times.

/pippin