Well, I'm a tad bit fickle when people make claims to fame, so I tested pngout against some other programs I could get my hands on (IrfanView will be included tomorrow since I'm running out of time; it takes almost 1 hour per program -_-'). Anyway, here are the results of PNG-to-PNG compression of Paul Schmidt's 16 million colour test bmp, which was converted to png with PSP; that yielded almost the same size as Paul's compression, so I will possibly run these on Paul's file tomorrow.
Ken's ratio is 873.69 (rounded), which is good, but http://www.webcolors.freeserve.co.uk/png/png24.htm
claims an 875:1 ratio
Awesoken at
You can usually do better if you're willing to play around with the options. I did "pngout 16million-pschmidt.png /f5 /b0" and got 57549 bytes - which is a 874.59:1 ratio. I suspect Kerry Watson was using an older version of PNGOUT for that tutorial - which used /b0 as default instead of /b256. /b256 is better for most images. /b0 is usually best for very small images, or for images that have a similar distribution throughout - meaning the filtered values look the same from top to bottom.
Awesoken at
Just for fun, I decided to compress that 16 million color test image in a different way. I wrote an EVALDRAW script : ) Interestingly, the compression process took about the same amount of time as PNGOUT - using a process called the "graduate student algorithm". Perhaps you're familiar with it : ) Anyway, here's my script:
(x,y,&r,&g,&b) //Paul Schmidt's test image
//from http://www.webcolors.freeserve.co.uk/png/png24.htm
y = -y; r = 0; b = 0; g = 0;
x += 8; ix = floor(x); if ((ix < 0) || (ix >= 16)) return(0);
y += 8; iy = floor(y); if ((iy < 0) || (iy >= 16)) return(0);
r = (y-iy)*256;
g = iy*16+ix;
b = (x-ix)*256;
0
That comes out to 309 bytes. If I remove comments and other nice things, I get this:
140 bytes : ) Don't bother zipping it because it's too small to benefit. I'm sure there are many other higher-level languages where it could be written in even fewer characters. You don't need to prove this to me. I just thought it was a fun experiment : P
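(For anyone without EVALDRAW: the same formula is easy to sketch in C++. This is an illustration rather than Ken's code, and it assumes the image is a 16x16 grid of 256x256 gradient tiles, as the script implies. It writes the pattern as a binary PPM:)

#include <cstdio>

int main() {
    const int W = 4096, H = 4096;                       // 16x16 grid of 256x256 tiles
    FILE *f = fopen("16million.ppm", "wb");
    if (!f) return 1;
    fprintf(f, "P6\n%d %d\n255\n", W, H);               // binary PPM header
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            unsigned char rgb[3];
            rgb[0] = (unsigned char)(y & 255);                  // r: row within the tile
            rgb[1] = (unsigned char)((y >> 8) * 16 + (x >> 8)); // g: tile index, 0..255
            rgb[2] = (unsigned char)(x & 255);                  // b: column within the tile
            fwrite(rgb, 1, 3, f);
        }
    fclose(f);                                          // every 24-bit colour appears exactly once
    return 0;
}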
BarrenSoul at
8) I think that turning images into code would possibly (just possibly, mind you ^_^) be considered cheating ;)
Frobozz at
BarrenSoul said
8) I think that turning images into code would possibly (just possibly, mind you ^_^) be considered cheating ;)
Maybe so, but it'd make for some really small (and probably slow loading) computer games. :D
BarrenSoul at
*doesn't want to imagine those load times* but yeah! that would be a small game! (don't forget, though, files that small take up a LOT more disk space than their actual size because of filesystem cluster sizes :) anyone know what those sizes are for FAT32 and NTFS?)
BarrenSoul at
hmmmm... I'm confused: a .jpg of 800x600 size is 146,493 bytes,
but if I use pngout it's 652,600 bytes!... Now the part that's really throwing me for a spin is how the Schmidt test, which at 4096x4096 is a lot bigger in resolution, can compress from a 50,331,702-byte bmp to a 57,608-byte file... can someone explain this to me please? thanks :) (no visible block artifacts on the jpg, so...)
DoubleG at
Oh, like Farbrausch?
BarrenSoul said
*doesn't want to imagine those load times* but yeah! that would be a small game! (don't forget, though, files that small take up a LOT more disk space than their actual size because of filesystem cluster sizes :) anyone know what those sizes are for FAT32 and NTFS?)
Farbrausch (a demoscene team) is well known for using nothing but procedurally generated content in their demos to make them ridiculously small.
http://www.farb-rausch.com/
Here's their most popular demo (weighs in at 64k):
http://www.scene.org/file.php?file=/demos/groups/farb-rausch/fr08_final.zip&fileinfo
Necrovore at
pngout comparison
:lol: Great Fun! The "Graduate Student Algorithm", I Love Puns And Such. My Regards To Master Silverman, And Lord Jono, Without Whom I Would Not Be The Slightly Insane And Obsessive Gamer I Take Form As Today!
Reminds Me Of The Most Superior Lossless Compression On The Planet, Fractal Compression. Slower Than S*it, But Incredible Ratios (A Trillion By A Trillion Mandelbrot, Like 300 Bytes)
Let Them Say What They Will Of The Way I Type! All Shall Bow Before My Dark Lord, And Suffer Their Opinions Ten Thousand Fold! :twisted:
JonoF at
I'm confused, a .jpg of 800x600 size is 146,493 bytes but if I use pngout it's 652,600 bytes
JPEG is a lossy compression format. It throws away information in the image that the human brain is unlikely to notice. PNG uses a lossless compression algorithm that gives back exactly what was put into it. If you save a JPEG, open it again, and save it again, and do that repeatedly, the image will eventually end up looking pretty bad.
My Regards To Master Silverman, And Lord Jono
Ken's the "master" and I'm the "lord"? I think you may want to switch those two terms around. "Lord" is a title higher in the heirarchy than "master", and I certainly am not deserving of being placed higher than Ken.
Jonathon
Necrovore at
I Guess I'll Have To Rectify That, Master Jono :wink:
Yeah, It's Like Making A Copy, Of A Copy, Of A Copy :)
I Despise JPEG Images... Mostly Because They Lose Quality...
Incidentally, Vinyl Records Are Also Of A Higher Quality Than CDs, Because They Record Sounds As They Come, Instead Of In Discrete Increments, Like Digital Sound...
Analog Kicks Digital's @ss, As Long As The Analog Didn't Originally Come From A Source That Was Converted: A:D:A
Well, Gotta Go... Might Continue My Rambling In A Few Days, To A Week...
cragtek at
Necrovore said
Incidentally, Vinyl Records Are Also Of A Higher Quality Than CDs, Because They Record Sounds As They Come, Instead Of In Discrete Increments, Like Digital Sound...
Yeah, but if I played you a file with a high sample rate (say, I don't know, 192kbps/44kHz) you'd be unlikely to notice the difference between that and vinyl. Lossy formats are OK because, as JonoF points out, they throw out the parts of the file that the brain can't perceive... what's the use of complaining about losing a part of a file you wouldn't be able to hear/see anyway?
To that end, I can't see a great advantage of analogue over digital. Digital compression makes images and sounds much more practical for transmission and storage purposes - and as long as the compression isn't too heavy, the pros far outweigh the losses.
Unless you're a machine, or have a highly refined ear, I doubt you'd notice any difference between vinyl and 192/44. (Of course it depends on your speaker setup to some extent.) Could that be a Pepsi challenge? :-)
cragtek at
Re: Oh, like Farbrausch?
DoubleG said
Farbrausch (a demoscene team) is well known for using nothing but procedurally generated content in their demos to make them ridiculously small... Here's their most popular demo (weighs in at 64k):
http://www.scene.org/file.php?file=/demos/groups/farb-rausch/fr08_final.zip&fileinfo
My goodness, that's really quite impressive considering it's 64k! I can't recall ever seeing anything quite so pretentious before, but that can be excused. "The product will make you beautiful"? What's that all about?
Still, 64k! Very impressive indeed.
JonoF at
pngout comparison
".the .product" is the name of the demo itself. "fr-08: .the .product". I think all ".the .product .will .make .you .(beautiful|understand|etc...)" stuff is satire of marketing or something like that.
Jonathon
Hazard at
Re: Oh, like Farbrausch?
cragtek said
Still, 64k! Very impressive indeed.
Well, look at THIS!
It's ZOOM3, the most impressive 64kB demo I've ever seen. It's the winner of the Assembly 2003 64kB contest.
Necrovore at
pngout comparison
Well, If You Were Analysing Sound On A VERY Small Scale, You WILL Start To Notice A Loss Of Quality As The Samples Per Second Drop Dramatically
BarrenSoul at
that may be true, but is it possible that the human ear can only process sounds so fast, just like the eye with images? thus meaning sample rates would exceed human abilities.
Necrovore at
:? Not If The Sound Sample Was Slowed Down Enough. You WOULD Start To Notice The Sound "Beating"
Anonymous at
Is the fr-08 thing supposed to have text all over the place, stuff like "hight.00" and "trager" and "prdkt" over some of the objects? Because it does, and it doesn't seem intentional. I have a terrible video card, but that seems like a really weird anomaly.
JonoF at
I'm pretty sure those labels are intentional.
Jonathon
Anonymous at
There are three things you should see:
http://www.theprodukkt.com/werkkzeug1.html
http://www.theprodukkt.com/kkrieger.html
http://conspiracy.intro.hu/releases.html
Kerry at
Paul Schmidt's 16 million color test image
Ken:
I did "pngout 16million-pschmidt.png /f5 /b0" and got 57549 bytes - which is a 874.59:1 ratio. I suspect Kerry Watson was using an older version of PNGOUT for that tutorial
Yes, the figure 875:1 is actually 874.59 rounded up. I didn't do the test myself. I found the compressed example at sourceforge.net, on a Feature Requests page for AdvancePNG.
Kerry
BarrenSoul at
pngout comparison
I'm totally psycho :) I made a program to scramble a bmp file (just for fun) which can scramble the 16mil Schmidt so well that it looks grey (if a picture like Schmidt's is scrambled, it should look grey from a distance if it's nearly random) in about 36 seconds (33 million iterations, anyone?) on a Pentium 3 666. But that's not relevant; what is, is that the image can't be compressed very well by Ken's pngout :)
As you can see, Ken's program saved a LARGER file even though it reported the size as about 5KB smaller, and even WinZip wasn't able to do anything to it :\ This is a good sign for my program, but most interesting for Ken :) Maybe you should compare the actual file size (if the original was a lossless format) with the actual size of the output :\ meh, but it's kinda weird :) (and cool, I like breaking programs)
Dioxaz at
You forgot one main thing when recompressing it through PNGOut: the "/f5" switch. Or... did you?
Actually, this is the kind of image that reaches its full compression potential only with PNG filters. That's why they were "invented" :P.
As I don't know exactly what your program did with the image, can you post the BMP file (or a PNG version) you used here?
counting_pine at
Dioxaz said
You forgot one main thing when recompressing it through PNGOut: the "/f5" switch. Or... did you?
Actually, this is the kind of image that reaches its full compression potential only with PNG filters. That's why they were "invented" :P.
Filters certainly make a world of difference on the original image, but if the bitmap data has been completely scrambled, I'm not sure any filter would have much effect on the file size.
I've studied a bit of combinatorics, and there is probably a way of compressing a large file if you know each 3-byte combination is used exactly once, but I don't think you can save more than about 3 megabytes or so, and the method isn't available in PNG anyway. Also, I have a feeling that to be fully efficient, it would take up a lot of memory and CPU time.
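(A back-of-the-envelope check on that "about 3 megabytes" figure - this is just information theory, not a PNG feature: a file known to contain every 3-byte value exactly once is one of (2^24)! equally likely permutations, so it needs log2((2^24)!) bits. A hypothetical C++ snippet to evaluate that:)

#include <cmath>
#include <cstdio>

int main() {
    const double n = 16777216.0;                          // 2^24 distinct 3-byte colours
    double bits = std::lgamma(n + 1.0) / std::log(2.0);   // log2(n!) via the log-gamma function
    printf("permutation needs %.0f bytes; raw data is %.0f bytes\n", bits / 8.0, n * 3.0);
    printf("maximum theoretical saving: %.2f MB\n", (n * 3.0 - bits / 8.0) / 1048576.0);
    return 0;
}

It comes out at a little under 3MB of saving, which agrees with the estimate above.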
Dioxaz said
As I don't know exactly what your program did with the image, can you post the BMP file (or a PNG version) you used here?
Actually, could you post a link to the program instead? It'd probably be easier for all concerned than linking to a 48MB PNG file, and I think it sounds like an interesting program. I have to say, I'd have no idea how to scramble a 48MB bitmap in 36 seconds. What does your algorithm look like?
BTW, have you tried OptiPNG on this file? I wouldn't expect substantial savings, but I think zlib can sometimes cope better with data that doesn't compress well.
masterlee at
As far as I can see, the program is really simply structured.
lots of stuff removed in this code (this is the work horse that does the scrambling, though)
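(The listing itself didn't survive the archive. Reconstructed from the fragments quoted later in the thread, the scrambling work horse presumably looked something like this; the Pixel struct, the Buffer layout, and TIRandom(a,b) returning a uniform integer in [a,b] are all assumptions:)

long TIRandom(long a, long b);              // from the external RNG library mentioned below

struct Pixel { unsigned char b, g, r; };    // one 24-bit BMP pixel

void Scramble(Pixel *Buffer, long Xs, long Ys) {
    long total = Xs * Ys;
    for (long i = 0; i < total * 2; i++) {  // "twice the size" per scramble loop
        long fir = TIRandom(0, Xs - 1) + Xs * TIRandom(0, Ys - 1);
        long sec = TIRandom(0, Xs - 1) + Xs * TIRandom(0, Ys - 1);
        Pixel t = Buffer[fir];              // swap two randomly chosen pixels
        Buffer[fir] = Buffer[sec];
        Buffer[sec] = t;
    }
}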
Rule #1:
I don't use default pseudo-random number generators (like the one in the code you posted), because they blow major balls :) I use high-quality ones. (TIRandom is from an external library.)
Rule #2:
According to entropy, the more random a file is, the less compressible it is.
Rule #3:
The difference in pixel colour has nothing to do with it. I used the example photo from my first post; the 50MB one went down to 58KB. It all has to do with how random the data is.
But my program has one strange problem: the actual size of the image I can rearrange on this computer is small for some reason (I get a file load error). Since it's a school computer and has a bunch of network-layer programs, they might be interfering with it, because this program works fine on any other computer outside of the school.
Things my program is capable of:
Scrambling a 2147483647x2147483647 bmp (that's the max resolution a bmp can be, since the resolution is stored as a long)
HIGH quality of scrambling in only 1 trial (when using the ANSI rand() at 1 trial, I can easily make out the original image, because it sucks :) )
Limited to 24-bit bitmaps for now :)
counting_pine at
OK, so what it basically does (repeatedly) is to randomly pick two pixels and swap them. Simple, but effective.
I suspect there are a lot of pixels that don't move at all, though. So, I guess this is where you need a good random number generator, so it doesn't keep choosing the same pixels? I wonder if you could get more scrambling using a normal RNG if you draw five RNs instead of four, or even maybe if you just draw them in a different order?
markcramer at
Picking two random pixels and swapping them could leave a LOT of unswapped pixels unless your iteration count is a lot higher than the number of pixels in the image.
A far more effective technique is to go through each pixel in the image and swap it with another pixel from the current position onward (including the option to not move it)
I am presuming TIRandom(0,Xs-1) produces a random integer between 0 and Xs-1, so something like,
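(The code from the original post was also lost. A minimal reconstruction of the shuffle described above - what's commonly called a Fisher-Yates shuffle - reusing the Pixel/Buffer/TIRandom assumptions from the earlier sketch:)

long total = Xs * Ys;
for (long i = 0; i < total - 1; i++) {
    long j = i + TIRandom(0, total - 1 - i);  // a position from i onward; j == i means "don't move it"
    Pixel t = Buffer[i];                      // swap pixel i into its final resting place
    Buffer[i] = Buffer[j];
    Buffer[j] = t;
}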
This would give a totally randomized picture in one pass.
BarrenSoul at
counting_pine said
OK, so what it basically does (repeatedly) is to randomly pick two pixels and swap them. Simple, but effective.
I suspect there are a lot of pixels that don't move at all, though. So, I guess this is where you need a good random number generator, so it doesn't keep choosing the same pixels? I wonder if you could get more scrambling using a normal RNG if you draw five RNs instead of four, or even maybe if you just draw them in a different order?
counting, you seem to be a bit lost
fir = TIRandom(0,Xs-1)+(Xs*(TIRandom(0,Ys-1)));
sec = TIRandom(0,Xs-1)+(Xs*(TIRandom(0,Ys-1)));
is a mathematical method of accessing 2D data in a 1D array :) I use this because I don't have a 2D array, but the info I'm scrambling is 1D
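(In other words, the standard row-major mapping; a tiny sketch of the conversion in both directions:)

long idx = x + Xs * y;   // 2D (x, y) -> 1D index, row-major
long x2 = idx % Xs;      // ...and back again:
long y2 = idx / Xs;      // x2 == x and y2 == y for any 0 <= x < Xs, 0 <= y < Ys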
markcramer: no, there are probably not a lot of pixels left unswapped. I'm iterating twice the pixel count per scramble loop (X*Y << 1), but I'll do a check with a run that only loops over once and see how many indexes are the same. And I'm not iterating a very large amount at all :) only twice the size. If I were to loop over only once, a repeated value would most likely not appear in mine, but it could easily repeat in yours, because you are systematically scanning the list. Ex.
if yours happened to switch pixel 2350 with pixel 1, and when it got to 2350 the random number returned was 1, you've thus swapped them back. Mine is much more efficient because of the very long cycle length of the RNG.
markcramer at
Barren_Soul, re:
if yours happened to switch pixel 2350 with pixel 1, and when it got to 2350 the random number returned was 1, you've thus swapped them back. Mine is much more efficient because of the very long cycle length of the RNG.
Umm, no. If you read what I wrote, this doesn't happen with my code: pixel 1 can get swapped with any pixel from 1 to the end of the image, but pixel 2350 only gets swapped with a pixel between 2350 and the end of the image, so it can't get swapped back to position 1.
What I posted is pretty standard code for scrambling a list that will produce all possible permutations with equal probability.
The probability of any pixel ending up in any position is exactly 1/(Xs*Ys)
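(A quick way to convince yourself of that uniformity claim - a sketch using the standard library's RNG on a 3-element list, where all 6 permutations should come up about equally often:)

#include <cstdio>
#include <map>
#include <random>
#include <vector>

int main() {
    std::mt19937 rng(42);
    std::map<std::vector<int>, long> counts;
    for (long t = 0; t < 600000; t++) {
        std::vector<int> v = {0, 1, 2};
        for (int i = 0; i < 2; i++) {               // swap v[i] with a position from i onward
            std::uniform_int_distribution<int> d(i, 2);
            std::swap(v[i], v[d(rng)]);             // choosing i itself leaves the element in place
        }
        counts[v]++;
    }
    for (const auto &p : counts)                    // expect roughly 100000 (1/6) for each
        printf("%d%d%d: %ld\n", p.first[0], p.first[1], p.first[2], p.second);
    return 0;
}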
BarrenSoul at
OK, so I tested my program as-is (except I made the for loop only do X*Y iterations) and then compared the result to the original file (I of course used my program to do this), and guess what :) when iterating over an array of 16777216 elements, the number of pixels left in the same place was 0.
counting_pine at
Sorry, long post
BarrenSoul said
counting_pine said
OK, so what it basically does (repeatedly) is to randomly pick two pixels and swap them. Simple, but effective.
I suspect there are a lot of pixels that don't move at all, though. So, I guess this is where you need a good random number generator, so it doesn't keep choosing the same pixels? I wonder if you could get more scrambling using a normal RNG if you draw five RNs instead of four, or even maybe if you just draw them in a different order?
counting, you seem to be a bit lost
Firstly, you've no doubt realised that counting isn't my first name, you can call me Matt if you want.
OK, I'll try explaining my suggestion again to defend my sanity, although I guess now it's a bit redundant, since Mark's algorithm seems to sidestep the problem altogether:
Anyway, let's say that each random number you generate has a property, we'll call it its Gen number. Each time you generate a number its Gen number is one higher than the last. e.g. the first number you draw has Gen no. 1, the tenth number has Gen no. 10, etc.
Now, with your algorithm:
fir = TIRandom(0,Xs-1)+(Xs*(TIRandom(0,Ys-1)));
sec = TIRandom(0,Xs-1)+(Xs*(TIRandom(0,Ys-1)));
Let's say fir=x1+xs*y1, sec=x2+xs*y2
x1 and x2 have the same Gen number modulo 2, as do y1 and y2.
From what I've seen of lousy RNGs, this means they are more likely to be the same number.
Assuming for a second that TIRandom is one of those "LRNGs" (because I don't know any other RNGs in C).
If you did:
fir = TIRandom(0,Xs-1)+(Xs*(TIRandom(0,Ys-1)));
sec = (Xs*(TIRandom(0,Ys-1)))+TIRandom(0,Xs-1);
then the x's and y's would have different Gen numbers mod 2, so are more likely to give different pixels.
If you did:
fir = TIRandom(0,Xs-1)+(Xs*(TIRandom(0,Ys-1)));
TIRandom(0,0);
sec = TIRandom(0,Xs-1)+(Xs*(TIRandom(0,Ys-1)));
Then the Gen numbers now have a pattern modulo 5, which should mix things up even more.
BarrenSoul said
OK, so I tested my program as-is (except I made the for loop only do X*Y iterations) and then compared the result to the original file (I of course used my program to do this), and guess what :) when iterating over an array of 16777216 elements, the number of pixels left in the same place was 0.
FYI: I believe a permutation where none of the elements stays in place is called a derangement. With a truly random shuffler, the odds that it produces a derangement are about 1/e [e is roughly 2.718].
So your program seems to be quite random; if it weren't, the probability would 'probably' be less. Maybe you'd like to try the program several times and see how often a derangement crops up. It should be reasonably frequent, but non-derangements should be slightly more likely.
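(That 1/e figure is easy to check empirically. A sketch using the standard library's shuffle rather than the scrambler under discussion:)

#include <algorithm>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    const int N = 1000, TRIALS = 100000;
    std::mt19937 rng(12345);
    std::vector<int> v(N);
    int derangements = 0;
    for (int t = 0; t < TRIALS; t++) {
        std::iota(v.begin(), v.end(), 0);           // v[i] = i
        std::shuffle(v.begin(), v.end(), rng);      // uniform random permutation
        bool hasFixedPoint = false;
        for (int i = 0; i < N; i++)
            if (v[i] == i) { hasFixedPoint = true; break; }
        if (!hasFixedPoint) derangements++;
    }
    printf("derangement rate: %.4f\n", (double)derangements / TRIALS);  // expect about 1/e = 0.3679
    return 0;
}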
Anonymous at
pngout comparison
arrr, too lazy to log on. How many times should I rearrange the pixels? I've only tried 4 times in a row, and each time nothing was in the same spot as the original
counting_pine at
BarrenSoul (?) said
arrr, too lazy to log on. How many times should I rearrange the pixels? I've only tried 4 times in a row, and each time nothing was in the same spot as the original
Well, that's not impossible, maybe just a bit unlikely. You could try a few more times. I've double-checked the probabilities mentioned at the end of my last post and I'm pretty sure they're right. Which algorithm are you using? Is it yours or Mark's?
BarrenSoul at
Mine. I'll do a test run 255 times (it only takes 15 seconds to scramble it)
KillerQ13 at
What's with all the work on impossible-to-compress image noise? We could all disconnect the cable from our TVs and watch that. :-)
BarrenSoul at
that's not chaotic enough; you need to take a reading from a web cam's CCD (one of the 4 TRUE random number generators I've found uses this method; one uses radioactive decay, one uses radio waves, and another uses a lava lamp) (these generators are the only things random enough to be used for OTP encryption)
VERY strange goings-on now. I don't know if it's my code or what, but I'm getting an almost exact copy of the last image each time -_-' (I've been editing my code a bit, so I may have screwed something up in switching variables from a program that scrambles to one that sorts), but this is confusing, so I'm going to give Mark's a shot and see what I can do
Matt, what do you mean by "has a pattern mod 2"?
this is the code I'm using (it's properly implemented for speed)
*notes:
variable YX = X*Y (it's declared elsewhere in my program)
trial_roof = 1 for the purpose of this experiment
scrambler_roof = X*Y (no, this won't overwrite the array, because at value X*Y it terminates the loop, thus meaning your old code would miss the last pixel :) )
Buffer[] = original picture
we have a small problem: it seems TIRandom doesn't like the first and second values being the same :) so I guess we'll have to leave that last pixel alone. (to fix, take one off scrambler_roof for the scrambling loop)
OK, got it running, nice job (now all I have to do is see what the output looks like), but it seems there's a chance of having 1 or 2 pixels stay put (this chance would remain the same regardless of the image size)
counting_pine at
BarrenSoul said
Matt, what do you mean by "has a pattern mod 2"?
OK, firstly, I'm using mod and modulo interchangeably, depending on which I can be bothered to type. MOD is a function in QB; I think it's equivalent to % in C.
examples of mod:
5=12 (mod 7)
1023=255 (mod 256)
I've done some experimenting with RND in QBASIC, and it has some very repetitive patterns in it. Each bit seems to have its own recursive pattern. The least significant bit just alternates 1,0,1,0,... The next bit seems to have a pattern that repeats every four generations, and I think as bits get more significant the pattern repeats half as frequently. I think the entire number repeats every 2^24 or 2^32 generations or something. I don't have QB to hand so I can't confirm it at the moment.
Anyway, with your original program, you were generating 4 random numbers each iteration. If you were using RND, that would mean each of x1, y1, x2, y2 would always have the same two least significant bits. These probably get rounded off, but you do a lot of iterations between reseeds, so you could start seeing significant bits repeating every so often, with periods 2^n, e.g. 1024, 2048. Not sure how many iterations it takes before the effect is noticeable...
If you generate 5 numbers every time, that messes up this pattern, and you shouldn't get repeating significant bits.
Hope this makes sense.
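(To see the kind of pattern Matt is describing, here's a toy LCG with a power-of-two modulus; the constants are borrowed from a common C runtime and are not necessarily QBASIC's. Because the multiplier and increment are both odd, the lowest bit strictly alternates, and more generally the low n bits repeat with period 2^n:)

#include <cstdio>

int main() {
    unsigned x = 1;                                  // seed
    for (int i = 0; i < 32; i++) {
        x = (x * 214013u + 2531011u) & 0xFFFFFF;     // LCG modulo 2^24
        printf("%u", x & 1u);                        // lowest bit: prints 0101...
    }
    printf("\n");
    return 0;
}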
BarrenSoul at
I'm well aware of what mod means; I just wanted to know your theory :) I doubt this is the same with C++'s ANSI rand() function, but it's worth a look-see, so I'll test that tonight. (The problem with your theory is that you're comparing QB's rand algorithm to the one ANSI C++ uses, which is most likely different.)
counting_pine at
Sorry, I didn't mean to sound patronising, but I wasn't sure where I'd lost you. I hope it all makes sense now.
I suspect there are various ways of generating a random number, some more effective than others. I usually find the QBASIC one suffices, but I did see some patterns once when trying to generate a random image of width 256.
It's possible that C and QB use the same algorithm; there might be some standard, computationally cheap method of generating random numbers. But let me know how it turns out.
It looks like QB and VB use the same Rnd, which I guess isn't surprising. I've just checked, and both seem to repeat fully after 2^24 generations. And it looks like the least significant n bits repeat after 2^n generations.
BarrenSoul at
wanna know something really interesting? I think my friend was using QBASIC's rand function for some formula, and he was rotating it (the pixel placement), and we created a Sierpinski gasket :)
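(The friend's exact formula isn't shown, but the classic way a random number generator draws a Sierpinski gasket is the "chaos game": repeatedly jump halfway toward a randomly chosen corner of a triangle. A minimal sketch, using plain rand() since even a weak RNG produces the gasket:)

#include <cstdio>
#include <cstdlib>

int main() {
    const int W = 64, H = 56;
    static char grid[H][W];
    for (int i = 0; i < H; i++)
        for (int j = 0; j < W; j++) grid[i][j] = ' ';
    const double cx[3] = { 0, W - 1, (W - 1) / 2.0 };   // triangle corners
    const double cy[3] = { H - 1, H - 1, 0 };
    double x = W / 2.0, y = H / 2.0;
    for (int i = 0; i < 20000; i++) {
        int c = rand() % 3;          // pick a corner at random
        x = (x + cx[c]) / 2;         // jump halfway toward it
        y = (y + cy[c]) / 2;
        grid[(int)y][(int)x] = '*';
    }
    for (int i = 0; i < H; i++) {    // print the result as ASCII art
        fwrite(grid[i], 1, W, stdout);
        putchar('\n');
    }
    return 0;
}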
masterlee at
Most of the time I use RANUNI for experiments; it's described here:
http://support.sas.com/91doc/getDoc/lrdict.hlp/a000202926.htm#a000156103
But this is not available in C++, so there I use RANROT:
http://www.agner.org/random/?e=0,25#0
Or, other times, some sort of weird algorithm.
BarrenSoul at
http://www.agner.org/random/randoma.htm
that's the lib I used (it's good stuff)
counting_pine at
* Blows dust off topic *
I found a better filter method for 16million_pschmidt.png - if you set the first row's filter to 1 (dx) and all the others to 2 (dy), you get better compression. If you use TweakPNG to hack the header so that the width is set to 3072 and the colour type to RGBA (4096 pixels x 3 bytes = 12288 bytes per row = 3072 pixels x 4 bytes, so the row length doesn't change), then most programs will refilter it in the same way on filter method 5.
I'm not sure these results count in the test, since I had to do it manually. Still, the results are rather interesting:
advdef -z2: 71630 bytes
advdef -z3: 71619 bytes
advdef -z4: 67152 bytes
optipng -zc9 -zm9 -zs0 -f5 (original): 59457 bytes
advdef -z1: 59146 bytes
pngout /b0 /f5 (original): 57549 bytes
pngout /b0 /f5: 57385 bytes
optipng -zc9 -zm9 -zs1 -f5: 55772 bytes
I think it's strange that AdvDef compresses the image so much worse than the other two. It's also strange that AdvDef compresses better with -z1 than -z4.
Surprisingly, OptiPNG compresses it best, but given that zlib normally leaves a bit of room for improvement, I'd expect it's possible to get it smaller than that.
I'll post the OptiPNG one up here.
http://img225.echo.cx/img225/8841/16millionpschmidt3pd.th.png
By the way, Ken, I had a look at your EvalDraw script for the image and made a couple of changes to it. I think it still does the same thing. It takes longer to compute, but manages to save a few bytes in the code size. Not sure whether that's a good compromise or not.
Awesoken at
Man, you sure have a lot of free time ... and so do I : P I'm impressed that you played around with EVALDRAW... BTW, I plan to release an update sometime soon with support for arrays and other goodies : )
That's a clever use of the '%' operator. 121 bytes, eh? Check this out:
OptiPNG - Revision history
==========================
Legend
------
++ Added or improved feature that might affect compression ratio
or speed.
+ Added or improved feature.
- Removed feature.
* Modified code (e.g. for architectural improvements).
! Fixed bug.
!! Fixed dangerous bug that might cause accidental data loss.
Version 0.4.8 10-may-2005
-------------
+ Upgraded libpng to version 1.0.18-optipng [private]
!! Fixed a palette-to-gray reduction problem that occurred when an
RGB triple had both an alpha below max, and an alpha equal to max.
(Thanks to Nicolas Le Gland for the report.)
+ Packed the Windows executable using UPX.
(Thanks to [Warriant] for the suggestion.)