Hello Ken,
I came across three PNG files which Pngout doesn't support, though I do not understand why. All three files are 24-bit images and are handled fine by every other application I tried (XnView, FastStone Image Viewer, ACDSee, ...). In addition, I tried the current PngoutWin trial version, which also fails on these.
Would you be so kind as to take a look?
The images in question can be downloaded here: http://rapidshare.de/files/23260008/deus-ex_-_Pngout_unsupported_PNG-files.zip.html
PNGOUT says "Unsupported format". All of them have bit depth 16, which is not supported... but for PNGOUT that should not matter... after all, PNGOUT could simply take the deflated data, uncompress it, then compress it using KZIP's deflate algorithms. Maybe for PNGOUT, have an #ifdef in kplib.c that ignores the fact that it is 16-bit? After all, PNGOUT does not really need to interpret/understand the image data...
Ben Jos.
deus-ex at
Zardalu, thank you for answering. Because of your post I verified the file information with each of the previously mentioned viewers. While XnView states the images are 24-bit, ACDSee and FastStone Image Viewer report them to have 48-bit depth.
Pngout currently doesn't support 48-bit images, so that would explain the issue. I'm just surprised that XnView gives a different result, as it is well known as a very good and reliable image application.
Zardalu at
deus-ex said
While XnView states the images are 24-bit, ACDSee and FastStone Image Viewer report them to have 48-bit depth.
It's actually slightly different from that, but it's close enough. The PNG format allows every "sample" to be 16 bits. More specifically, each one of the R, G, B (and A) values can have 16 bits, so you could have 48-bit (without alpha) or 64-bit (with alpha) depth. I am not aware, however, of ANY video card that uses more than 8 bits per channel (or, rather, I'm pretty sure they don't exist yet), so to display things, the values will always be reduced to 8-bit values for each channel...
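The per-sample depth described above is recorded in the PNG file's IHDR chunk, which is always the first chunk after the 8-byte signature. A minimal sketch of reading it (the function name and return shape are mine, not from any of the tools mentioned here):

```python
import struct

def png_bit_depth(data: bytes):
    """Return (bits per sample, bits per pixel) from a PNG's IHDR chunk."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    assert data[12:16] == b"IHDR", "IHDR must be the first chunk"
    width, height, bit_depth, colour_type = struct.unpack(">IIBB", data[16:26])
    # Samples per pixel for each PNG colour type:
    # 0 = grey, 2 = RGB, 3 = palette, 4 = grey+alpha, 6 = RGBA
    samples = {0: 1, 2: 3, 3: 1, 4: 2, 6: 4}[colour_type]
    return bit_depth, bit_depth * samples
```

A 16-bit RGB file comes back as (16, 48), which is why some viewers report 48-bit while others report the 24 bits they actually end up displaying.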
Maybe the easiest way to compare it would be that you can choose your video card to display images using "high-colour" ("only" 32768 or 65536 colours) vs. "true-colour" (using 8 bits each for R, G, and B, so 24 bits total... 2^24 = 16,777,216 colours). PNG and a few more formats (PNM comes to mind) allow each channel to have 16 bits. The only way to display those is to reduce them to using 8 bits per channel. "Nobody" can display an image that uses 48 bits or whatever. They all scale them back to 8 bits per channel (and maybe lose some information... but would the human eye really be able to see the difference?)...
Ken just didn't bother to build in support for 16-bit depth in his kplib, probably for the very reason that it would have to be scaled down to 8 bits anyway for any video cards/monitors/humans that exist now or can be expected to exist within several years from now. So when the viewers you used displayed those images, or boasted about supporting them, they didn't display the actual images, just a "scaled-down" version of them...
What I really meant in my reply was different from all of that, though. It was that I am pretty sure that Ken uses his kplib to read those images. And kplib was primarily meant to DISPLAY images. But, like I said before, when dealing with a piece of data that is deflated and that you simply want to inflate for some reason, you really do not have to know what that piece of data "means". PNGOUT could just take the deflated data, decompress it, then deflate it again without ever trying to interpret it. Just like any compression program will try to compress files, regardless of whether those files are executables, PNG files, JPG files, TXT files, DBF files, or whatever. Those compressors treat them simply and ONLY as pure data, without ever trying to UNDERSTAND the actual data. In short: I could "invent" a new file format called ".xyz". Every compressor should be able to have a go at compressing my .xyz files. Ken's PNGOUT was made more flexible than that... it could take not only PNG files, but also BMP, GIF, JPG, TGA, and more... but in the case of the input being PNG, he shouldn't have tried to "understand" the input image...
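The "don't interpret, just recompress" idea can be shown with plain zlib (standing in here for a KZIP-style deflater, which this sketch does not reproduce): the payload of a PNG's IDAT chunks is one zlib stream, and recompressing it requires no knowledge of the pixels inside.

```python
import zlib

def recompress_blind(idat_payload: bytes) -> bytes:
    """Inflate a zlib stream and deflate it again at maximum effort,
    without ever interpreting what the decompressed bytes mean."""
    raw = zlib.decompress(idat_payload)  # could be pixels, text, anything
    return zlib.compress(raw, 9)         # stand-in for a stronger deflater
```

A real tool would still have to split the result back into IDAT chunks and recompute each chunk's CRC, but none of that requires understanding 16-bit samples.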
Ben Jos.
Zardalu at
deus-ex said
I'm just surprised that XnView gives a different result, as it is well known as a very good and reliable image application.
Actually, you shouldn't be surprised at what xnview said... xnview was correct. The others just scaled those images down to 24 bits (3 channels (RGB) times 8 bits) and reported the result. xnview reported what the input was, which was 3 channels (RGB) times 16 bits...
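The scaling-down that the other viewers do is a one-liner per sample. A sketch (function name is mine) of a rounded rescale, alongside the common fast approximation of just dropping the low byte:

```python
def scale16to8(v: int) -> int:
    """Rescale a 16-bit sample (0..65535) to 8 bits (0..255), rounding."""
    return (v * 255 + 32767) // 65535

# The quick-and-dirty version much display code uses is simply `v >> 8`;
# it differs from the rounded rescale by at most one step.
```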
Even so, xnview is not THAT good. Sure, it's based on NetPBM, but I have found several images (using very rare "settings", yes, I admit) that xnview can't handle. Or, more ironically, I think the PNM format isn't well-defined. Actually, I could give you some images that the PNM-based readers would not be able to read correctly, even though those images are in a format that was DEFINED by the NetPBM library...
The best viewer I have come across so far is PMView Pro. At least if you want a program to be able to deal with a shitload of different image formats and for each format the most esoteric variations of those. xnview really isn't that good if you compare them on certain given formats. TIFF comes to mind. The thing is that most viewers claim to support certain formats (BMP, PCX, PNG, TGA, and so on), but most of them do NOT support all variations of those formats. And it's the same thing, more generally, with compressed files. WinRAR chokes on certain .gz files even though the GZIP format is well-documented in an RFC...
But, to get back to the subject, Ken never claimed to support all possible PNG format variations. And at least his programs nicely inform the user "unsupported format" as opposed to crashing or giving other cryptic error messages like "plugin failed, do you want to disable this plugin?".
I had a conversation with Ken recently about the image formats supported by PNGOUT, and it was clear, both from that conversation and from the kplib source code, that he does not really care about most of the variations you will "almost never" see. I had to look very hard to find some images compressed using the "Huffman1D" compression only used in OS/2 BMP files. And I only found them because I looked for them... But... that is only related to viewers...
PNGOUT should not have any problems with PNG images that it does not support for viewing... it should just inflate the data and then deflate it again without trying to understand it...
That's what my initial reply was about... for PNGOUT, there should be an #ifdef in the source code of kplib that simply inflates the data coming from a PNG and then deflates it again.
Hmm... I think I'll write a program to take a PNG file and convert it to some "nonsense" ZIP file. Then that ZIP file could be fed to ZipMax (actually, I mentioned something like that to Ken 3 years ago or so), so that every zip-deflate compatible program could have a go at it. Then, finally, have that same program convert the ZIP back into a PNG...
Just like I am close to a new version of DeflOpt that takes any deflated data (whether it's inside GZIP, PNG, or ZIP files) and optimises it... ZipMax can only deal with ZIP files, but there are several more formats that use "Deflating" as their compression method...
Just for those people (and I know there are very few of them) who want to get a few extra bytes out of GZIP, PNG, ZIP files... ;)
Ben Jos.
deus-ex at
Thank you for your detailed answer and decent explanations. I already knew that 48-bit images are made of RGB channels with 16 bits each, but I wasn't aware of the fact that in order to display these they have to be scaled down to 24-bit, which pretty much makes sense. The only reason I can think of which would require the usage of 48/64-bit images is the professional print sector (e.g. photo magazines). Regarding graphics cards, it comes to mind that John Carmack/id Software was requesting built-in support for 64-bit colour depth several years ago while talking about his then upcoming Doom 3 engine and next-gen 3D engines in general. He said that this would be required in order to be able to do more complex light and colour calculations where 24/32-bit would be inaccurate at a certain level.
Your suggestion for Pngout sounds very reasonable to me. Then again, currently every image application with support for 48-bit images has to scale these down in order to display them, so I do not see a reason for not supporting 48-bit in Pngout in the first place. Currently I have to resave these images to 24-bit with an external application in order to recompress them with Pngout, and that shouldn't be necessary.
krick at
Zardalu said
Hmm... I think I'll write a program to take a PNG file and convert it to some "nonsense" ZIP file. Then that ZIP file could be fed to ZipMax (actually, I mentioned something like that to Ken 3 years ago or so), so that every zip-deflate compatible program could have a go at it. Then, finally, have that same program convert the ZIP back into a PNG...
Just like I am close to a new version of DeflOpt that takes any deflated data (whether it's inside GZIP, PNG, or ZIP files) and optimises it... ZipMax can only deal with ZIP files, but there are several more formats that use "Deflating" as their compression method...
Just for those people (and I know there are very few of them) who want to get a few extra bytes out of GZIP, PNG, ZIP files... ;)
I use ZipMax at work to re-compress Java JAR files. However, since it doesn't support files with the JAR extension, I have to rename them to .zip before dropping them on ZipMax, and then rename them back when it's finished.
It shouldn't be that hard to modify ZipMax so that it will do the renaming for me. You could probably do your PNG to ZIP conversion inside ZipMax as well.
Zardalu at
deus-ex said
Regarding graphics cards, it comes to mind that John Carmack/id Software was requesting built-in support for 64-bit colour depth several years ago while talking about his then upcoming Doom 3 engine and next-gen 3D engines in general.
I beg to disagree with John Carmack. I know he has a LOT more experience with 3D things than I have, but I don't think that more than 8 bits per channel is necessary. I can look at images from digital cameras in JPG, and they look better to me than anything else, including photographs that I took and had developed at 8x12 or whatever size.
"We" don't need more bits of accuracy. We "just" need a 3D engine that can make things look "real". And 8 bits per channel would perfectly do the trick. If John Carmack and other 3D developers can't make things look "real", then it's NOT because graphics cards can't make things looks like hmm... "real videos"... it's because current 3D engines can't make things look "real enough". Did Carmack really say that? I'm reminded now of a conversation I had with Ken and in which I misremembered something. If Carmack really thinks he needs higher colour depth than it would mean he's out of "his depth". A graphics card wouldn't help him at this stage. I can download any JPG these days that is close enough to "photo quality" that I can't tell the difference. With 8 bits per channel. It would NOT require more bits per channel to make something look photographic/real.
Actually, you shouldn't be surprised at what xnview said... xnview was correct. The others just scaled those images down to 24 bits (3 channels (RGB) times 8 bits) and reported the result. xnview reported what the input was, which was 3 channels (RGB) times 16 bits...
Actually it was the other way around, as I wrote above: XnView reports 24-bit; ACDSee and FastStone Image Viewer report 48-bit, which is the correct information regarding the input format.
The best viewer I have come across so far is PMView Pro.
I hadn't heard about this application before. I'm going to have a look at it.
xnview really isn't that good if you compare them on certain given formats. TIFF comes to mind.
True, but I think XnView's philosophy is to support the common variants of any format, to put aside those which are rarely encountered, and to implement them on user request in the main application or as a plugin.
Just like I am close to a new version of DeflOpt that takes any deflated data (whether it's inside GZIP, PNG, or ZIP files) and optimises it... ZipMax can only deal with ZIP files, but there are several more formats that use "Deflating" as their compression method...
Just for those people (and I know there are very few of them) who want to get a few extra bytes out of GZIP, PNG, ZIP files... ;)
Count me in. :)
Zardalu at
krick said
Here's what Carmack actually said...
Yes, I thought about it some more, and then I read the article. And I agree to some extent. The first thing that came to mind after thinking about it some more was the propagation of errors, and I think this is exactly what John Carmack is talking about. Suppose you have to calculate the sum of a long list of numbers, and all those numbers have been rounded to, say, 8 decimal places. On average, the errors would cancel each other out, but in the worst case, they would all add up. And if you add enough numbers, even though all of them have been calculated to 8 decimal places, the error can get so bad that not even 1 decimal place is correct at the end.
For example: take a value like 0.0000000049999999999999999999... When rounding to 8 decimal places, it is 0. And when adding it 1,000,000,000 times, the rounded version would add up to 0, but the "real" value would add up to almost 5.
And that's only when doing additions. When doing multiplications, errors propagate much faster.
So, yes, I agree. I was thinking about something else completely. Taking an image using an "infinite" number of bits per channel and then reducing it to 8 bits per channel. I don't think any human would see the difference. But when you do a lot of calculations, all using 8 bits per channel, then, yes, you might get a very big error.
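The accumulation argument is easy to demonstrate numerically (the particular numbers below are illustrative, not taken from the thread):

```python
# A value just below the 8-decimal-place rounding threshold of 0.000000005.
x = 4.9999999e-9

rounded = round(x, 8)                  # rounds down to 0.0
rounded_sum = rounded * 1_000_000_000  # a billion rounded copies: still 0.0
true_sum = x * 1_000_000_000           # the real sum is almost 5

# Every individual rounding error was under 5e-9, yet a billion of
# them accumulate into an error of about 5 in the final result.
```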
Another thing I thought about was what happened with Intel's Pentium Pro processor. There might actually come a time when graphics cards can do 32 bits per channel faster than 8 bits per channel. Like the Pentium Pro could do 32-bit operations faster than 16-bit operations. That would be another reason to choose a larger colour depth.
I apologise (and mostly to John Carmack). I should have thought about it a little bit longer. All I was thinking about was "still images", photos, not renderings of movements.
Ben Jos.
Zardalu at
krick said
However, since it doesn't support files with the JAR extension, I have to rename them to .zip before dropping them on ZipMax, and then rename them back when it's finished.
Hmm... you might not be completely happy about the new DeflOpt then either. Since I changed its wildcard handling (allowing wildcards in directory names as well), things could get out of hand very fast. A directory might contain tens of thousands of files, and I did not want to make DeflOpt spend a lot of time scanning all files (regardless of extension) to see if their internal format matched GZIP, PNG, or ZIP. After all, you could rename a .ZIP file to .PNG or .TXT or .whatever. Ken's kplib does something like this: it scans a file, regardless of the extension, and tries to recognise the format from that. I made a design decision for the next DeflOpt not to do something like that. If I hadn't, then a "C:\" file specification would have required DeflOpt to scan every single file on the hard drive and see if the "magic numbers"/"signatures" showed it was one of the supported formats. So I decided to only look at .gz, .tgz (a generally accepted abbreviation of .tar.gz), .png, and .zip files.
But maybe, in the next version, I'll have an "advanced type of option" that will make it scan every file regardless of extension. A way of moving the responsibility to the user...
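The signature sniffing being weighed against extension filtering here is itself a one-function affair; a sketch of the "magic number" check (the names are mine, not DeflOpt's actual code):

```python
# File signatures of the three deflate-carrying formats mentioned above.
_MAGICS = {
    b"\x1f\x8b": "gzip",
    b"\x89PNG\r\n\x1a\n": "png",
    b"PK\x03\x04": "zip",
}

def sniff_format(header: bytes):
    """Identify GZIP/PNG/ZIP from the first few bytes, ignoring the
    file extension entirely. Returns None for anything unrecognised."""
    for magic, name in _MAGICS.items():
        if header.startswith(magic):
            return name
    return None
```

The cost is not the check itself but having to open and read every file a wildcard matches, which is exactly the trade-off described above.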
krick said
It shouldn't be that hard to modify ZipMax so that it will do the renaming for me. You could probably do your PNG to ZIP conversion inside ZipMax as well.
It's not that easy with PNG files. A single file, even a single BLOCK within a single file, could be spread across multiple IDAT chunks. And then there are other chunks, which the user may or may not wish to keep, that use deflate as well. IDAT isn't the only type of chunk that uses deflate: zTXt and iCCP and (optionally) iTXt do too... But my main point is that there are so many more utilities designed for handling deflated data within ZIP files than there are for handling deflated data within other files. A conversion utility is very easy and very cheap: a frontend and a backend. And those are not time-consuming compared to most other things. It's a lot less work to convert things to ZIP first and back to the original format afterwards than to make every utility out there deal with every possible format on its own. AdvanceCOMP comes to mind now: it states it can do .png files, but only if all the IDAT data is concatenated first. My point is that the conversion time before and after is usually negligible compared to what happens in between. So why not go the easy route?
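The chunk walking needed before any of this is mechanical; a minimal sketch (mine, with CRC validation omitted for brevity) that joins all IDAT payloads back into the single zlib stream they represent:

```python
import struct

def concat_idat(png: bytes) -> bytes:
    """Walk a PNG's chunks and return all IDAT payloads concatenated."""
    assert png[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    out, pos = [], 8
    while pos < len(png):
        # Each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC.
        length, ctype = struct.unpack(">I4s", png[pos:pos + 8])
        if ctype == b"IDAT":
            out.append(png[pos + 8:pos + 8 + length])
        pos += 8 + length + 4
        if ctype == b"IEND":
            break
    return b"".join(out)
```

The same loop, pointed at zTXt/iCCP/iTXt instead, covers the other deflated chunk types mentioned above.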
Then again, you told me you had made changes to ZipMax after Roman and I had stopped working on it. Maybe YOU could add those "renaming to .zip" things yourself. ;)
Ben Jos.
Zardalu at
deus-ex said
True, but I think XnView's philosophy is to support the common variants of any format, to put aside those which are rarely encountered, and to implement them on user request in the main application or as a plugin.
I don't think "user request" is much of an incentive to implement exotic variations. It's open source. And the more exotic something is, the more it requires the user to also be a programmer and to care enough to write some code and submit it. When talking about something as "important" as an entire OS, yes, they will make sure that pretty much everything is supported, but an image reader? Something like that relies on user contributions in the form of source code, not just requests.
Just as an example that I think is significant: I used to have some images that I kept around for many, many years. They were in some kind of "compressed Spectrum" format (not .spc, but .spx) that was used on... hmm... Amiga? Atari ST?... So I just kept them around. From time to time, I tried to see if any viewers had implemented that format for my platform. From time to time, I tried to find format specifications. Some people obviously could view them; some people obviously knew how to decode them and convert them into a "universally viewable" format. It took YEARS before I found a way to view them. And I was persistent. I'm not like the average user.
That's why I mentioned PMView Pro. It might not support EVERY SINGLE format out there (probably NetPBM supports more), but the ones it does support, it supports COMPLETELY. As an aside, that's exactly what I am doing too. I have my own code for several graphics formats, and each of those it supports at least as well as (and often many times better than) any program out there that I have seen. It all boils down to design decisions. Do you support 99% of what's out there, or try to support 100% of however many or however few formats you pick, or try to support EVERYTHING? When I decide to support a format, I set out to support even the most exotic variations of that format. But, really, does the average user care? ACDSee, IrfanView... they do the job for most users 100% of the time... PMView Pro boasts about the variations. The site even has a page of images that most other viewers can't deal with. But that's just advertising. We have all seen the pages where a company sells a product and compares its own product's features to features offered by others. And that's pretty much the opposite of the open-source thing: they will never make their own product look worse than others. Still, I was impressed when I threw things at PMView Pro that it didn't even boast about.
Given enough time, a collection of programmers not afraid to share things could do "everything". But that's not how things work. The average user wants things "now". The average user is influenced by marketing. The average user does not want to put in any effort. If what Microsoft offers works, then that's where it stops.
Rambling...
I should stop now.
Ben Jos.
Thundik81 at
I tried PNGOUTWin (and pngout) on PngSuite from Willem van Schaik.
Unsurprisingly, some files aren't supported: the ones with 16-bit bit depth (there are also some corrupted files in PngSuite, for testing purposes).
Ken, could you add support for files with 16-bit depth?
I know it's not very useful, but wider compatibility would be great.
Link: http://www.schaik.com/pngsuite/pngsuite.html
David at
Why PNGOUT must understand the input image
One advantage PNGOUT has over blind re-compressing is that it can detect ways to compress the image with significant savings based on the image data. Consider a large RGB image with 256 or fewer colors. If the file were just decompressed and re-compressed without analyzing the image data, then PNGOUT wouldn't be able to change the image type to Palette. The same argument goes for reordering the palette, removing unused palette entries, etc.
In the case of the 16-bit RGB image, there is still a case where analyzing the image data would yield huge savings: if the image data is 16-bit RGB but the colors are all gray, then it would be better to convert the image to 16-bit gray.
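Detecting that all-gray case only takes one pass over the samples. A hypothetical sketch (mine, not PNGOUT's code) for unfiltered 16-bit RGB pixel data, with samples stored big-endian as PNG does; a real implementation would first strip PNG's per-scanline filter bytes:

```python
import struct

def rgb16_is_gray(pixel_data: bytes) -> bool:
    """True if every 16-bit RGB pixel has R == G == B, meaning the image
    could be rewritten as 16-bit grayscale at a third of the raw size."""
    samples = struct.unpack(">%dH" % (len(pixel_data) // 2), pixel_data)
    return all(samples[i] == samples[i + 1] == samples[i + 2]
               for i in range(0, len(samples), 3))
```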
That said, true 16-bit displays are very expensive (HP and SGI make them for their high-end workstations), and therefore rare. Some scanners and printers work in 16-bit channel depths, but I don't think PNG has much of a foothold in those use cases.