Decreasing colours (algorithm)
As you all know, ST-Low permits a total of 16 colours to be displayed at the same time. Spectrum 512 broke this limit and can display, AFAIK, up to 48 colours per line. But it comes at a hefty price: about 80% CPU load, as I have read.
So I was thinking of something else and I do not know if this is feasible i.e. leads to improvements.
I would be happy to display just 16 colours/line with decreased CPU load as I would like to play 9600Hz digi-sound via PSG at the same time.
So my thoughts are to take a 256-colour BMP file and check for each line whether it uses more than 16 colours. If so, I have to reduce the number of colours down to 16. The algorithm I have in mind would work like this: compare each colour used in that line with every other colour used in it.
As each colour is represented by RGB, I would take the R, G and B components as the X, Y, Z coordinates of a vector in a 3D cube representing all displayable colours. Take colour #1 and treat it as vector v1. Do the same with the next colour (= v2). Now calculate v = v2 - v1, then |v| = sqrt(v.x^2 + v.y^2 + v.z^2) to get the length of that difference vector. Store the result and repeat for every pair of colours. The minimal stored length then marks the two colours that are most "similar" in that line - at least that is what I expect. Create a new colour (i.e. vector v3) that points right to the centre between v1 and v2, and replace both v1 and v2 with v3. Repeat until the number of colours used in the line drops to 16.
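The merge loop described above could be sketched like this - a minimal offline converter fragment in Python, not ST code; `reduce_line_colours` is an illustrative name, not anything from the thread:

```python
import math

def reduce_line_colours(colours, target=16):
    """Iteratively merge the two closest colours (Euclidean distance in
    the RGB cube) until at most `target` colours remain.
    `colours` is a list of (r, g, b) tuples; returns the reduced list."""
    colours = list(colours)
    while len(colours) > target:
        best = None
        # compare every pair of colours used in the line
        for i in range(len(colours)):
            for j in range(i + 1, len(colours)):
                d = math.dist(colours[i], colours[j])  # |v2 - v1|
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        c1, c2 = colours[i], colours[j]
        # the midpoint between the two closest colours replaces both
        mid = tuple((a + b) // 2 for a, b in zip(c1, c2))
        colours = [c for k, c in enumerate(colours) if k not in (i, j)]
        colours.append(mid)
    return colours
```

Note this is O(n^2) per merge step, which is fine for an offline tool working on one scanline's worth of colours.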
I wouldn't do this at runtime but in a separate application that stores the new picture with a palette for each display line in a new file.
Then during runtime re-program Shifter's palette during HBL for the next line.
BTW: I'd use this for static display only, i.e. no animation at all.
Would this decrease CPU load significantly? Is the colour replacement correct or would it lead to strange colours?
Thanks, Arne
Re: Decreasing colours (algorithm)
Take a look at this online tool. I believe you want to use the NEOchrome Raster or PHCG 16 output formats, which do exactly what you want (one palette switch per line).
Jo Even
VanillaMiNT - Falcon060 - Milan060 - Falcon040 - MIST - Mega STE - Mega ST - STM - STE - Amiga 600 - Sharp MZ700 - MSX - Amstrad CPC - C64
Re: Decreasing colours (algorithm)
Hi,
I have never tried the tool mentioned by joska, but it seems interesting!
On my side I have used home-made converters to achieve this. I tried the method you have in mind, Arne, but it gave worse results than simply reducing the bit depth of each color until you have a maximum of 16 colors.
But in the end the method I used is to split the picture into multiple one-line image files, then use XnView to convert each line file to x colors (using the batch converter and selecting a dithering method), then reassemble the converted files.
- thomas3
- Captain Atari
- Posts: 165
- Joined: Tue Apr 11, 2017 8:57 pm
- Location: the people's republic of south yorkshire, uk.
Re: Decreasing colours (algorithm)
Hey.
One thing I'd be thinking is that the cycles taken to use movem.l to load 32 bytes into registers and then write those 32 bytes back to the palette, along with the time taken to handle the interrupt, will mean that your palette change lands somewhere to the right of the left border. So you will have some artefacts on the left of the image. You may also have some jitter depending on what else is happening.
I think you'd need to do this as sync locked, like spec 512.
But someone can correct me if wrong x
Re: Decreasing colours (algorithm)
Playing a 9600Hz digi-sound means the replay routine should be executed 192 times per VBL.
Changing the palette every line means changing it 200 times per VBL.
If you use interrupts for both the replay routine (Timer-A) and the color changes (HBL), it will be difficult to avoid interrupt jitter (leading to unstable color changes).
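A quick sanity check of these numbers, assuming a 50Hz PAL frame with 312.5 scanlines in total (standard PAL timing; the constants below are assumptions, not figures from the thread):

```python
# Why a 9600 Hz replay routine and per-line palette writes collide:
VBL_HZ = 50            # PAL frame (VBL) rate
TOTAL_LINES = 312.5    # scanlines per PAL frame (assumed value)
VISIBLE_LINES = 200    # displayed lines in ST-Low

replay_per_vbl = 9600 / VBL_HZ      # Timer-A interrupts per frame
palette_per_vbl = VISIBLE_LINES     # HBL palette reloads per frame
hbl_rate = VBL_HZ * TOTAL_LINES     # one-interrupt-per-line rate in Hz;
                                    # close to the ~15 kHz sample rate
                                    # that would let a single interrupt
                                    # per line do both jobs
print(replay_per_vbl, palette_per_vbl, hbl_rate)
```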
Re: Decreasing colours (algorithm)
You could also look at DML's Photochrome: http://www.leonik.net/dml/sec_pcs.py
('< o o o o |''| STM,2xSTFM,2xSTE+HD,C-Lab Falcon MK2+HD,Satandisk,Ultrasatandisk,Ethernat.
Re: Decreasing colours (algorithm)
Spikey wrote: Playing a 9600Hz digi-sound means the replay routine should be executed 192 times per VBL. Changing the palette every line means changing it 200 times per VBL. If you use interrupts for both the replay routine (Timer-A) and the color changes (HBL), it will be difficult to avoid interrupt jitter (leading to unstable color changes).
It might be more interesting to have a sound at around 15kHz, which corresponds to one interrupt per line, so the same interrupt can serve both the sound replay and the palette change.
Re: Decreasing colours (algorithm)
uko wrote: It might be more interesting to have a sound at around 15kHz, which corresponds to an interrupt per line, so it can be the same as for the palette.
Yep, a single interrupt per scanline is better, but not sufficient.
When the first instruction of the HBL handler is executed, there are about 100 cycles left before the first pixel of the corresponding line is displayed, so there is not enough time to change the palette (which takes about 160 cycles).
You need to busy-wait for the end of each line to change the palette during the right border.
There are different techniques for that (see http://thethalionsource.w4f.eu/Artikel/Rasters.htm).
The drawback is that it consumes all the CPU and makes the use of interrupts pointless in that case.
The best way is probably to do that sync locked as mentioned by thomas3.
Last edited by Spikey on Wed Feb 12, 2020 9:26 pm, edited 1 time in total.
Re: Decreasing colours (algorithm)
Arne wrote: As each colour is represented by RGB I would take the R,G and B items as X,Y,Z of a vector in a 3D cube which represents all the colours that can be displayed. Take colour #1 and see it as vector v1. Then do the same with the next colour (=v2). Now calc v = v2 - v1. Then calc |v| = sqrt(v.x^2 + v.y^2 + v.z^2) to get the length of that new vector v. Store the result and do this for all pairs of vectors (n). Now the minimal length of a stored vector v[n] means that the two colours are the most "similar" of that line - at least that is what I expect.
The idea works, but you would not really want to use RGB as you will find it doesn't really give you a good measure of similarity. If you first convert both pixels into the Lab colourspace (CIELAB / La*b*) then you will find the closeness of colours corresponds much better to the linear distance between the points, as that is what that colour space was designed for. HSV would probably not be as good, but probably still a better choice than RGB.
I think Doug mentioned doing things like desaturating the image to make the distance between colours less without distorting the colours too much.
Also look into Doug's interlaced frames approach since that sounded like a good one...
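For reference, the sRGB-to-Lab conversion AnthonyJ suggests could be sketched as follows. These are the standard CIE formulas with the D65 white point; the function names are illustrative, not from any tool mentioned in the thread:

```python
import math

def srgb_to_lab(rgb):
    """Convert an 8-bit sRGB triple to CIELAB (D65 white point)."""
    # 1. undo the sRGB gamma curve -> linear light
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    # 2. linear RGB -> CIE XYZ (sRGB matrix, D65)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    # 3. XYZ -> Lab, normalised to the D65 white point
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

def delta_e(c1, c2):
    """Perceptual colour difference: Euclidean distance in Lab (CIE76)."""
    return math.dist(srgb_to_lab(c1), srgb_to_lab(c2))
```

The merging algorithm from the first post would then use `delta_e` instead of the plain RGB distance when choosing which pair of colours to collapse.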
Re: Decreasing colours (algorithm)
AnthonyJ wrote: The idea works, but you would not really want to use RGB as you will find it doesn't really give you a good measure of similarity. If you first convert both pixels into the Lab colourspace (CIELAB / La*b*) then you will find the closeness of colours is closer to the linear distance between the points as that was what that colour space was designed for. HSV would probably not be as good, but probably still a better choice than RGB.
Thanks for the insight. I think you mean this conversion: https://de.wikipedia.org/wiki/Lab-Farbr ... RGB_zu_Lab?
AnthonyJ wrote: I think Doug mentioned doing things like desaturating the image to make the distance between colours less without distorting the colours too much. Also look into Doug's interlaced frames approach since that sounded like a good one...
Who? Any link?
Re: Decreasing colours (algorithm)
See here: http://atari.anides.de/
This will create two images that you alternate between every frame, thus creating an illusion of more colours.
Jo Even
VanillaMiNT - Falcon060 - Milan060 - Falcon040 - MIST - Mega STE - Mega ST - STM - STE - Amiga 600 - Sharp MZ700 - MSX - Amstrad CPC - C64
Re: Decreasing colours (algorithm)
Arne wrote: Thanks for the insight. I think you mean this conversion: https://de.wikipedia.org/wiki/Lab-Farbr ... RGB_zu_Lab?
My German isn't great, but yes, that's the one. About 8 years back there was an error in the maths in the English version of Wikipedia when I last looked at La*b*, but presumably that is fixed by now.
Arne wrote: Who? Any link?
Doug = Doug Little = DML. Author of Photochrome, Apex Media, Bad Mood etc.
http://www.leonik.net/dml/sec_crypto.py is an old page related to it, and I think https://bitbucket.org/d_m_l/agtools/wiki/CryptoChrome might be a newer page. He came up with complex pre-processing to identify very carefully chosen palettes to provide optimal colours for mixing together via interlaced frames to get you an effective palette (~100 colours?) simply with interlacing.
He used it in his AGT tools - https://bitbucket.org/d_m_l/agtools/src/default/
And also in his ST Doom test - http://www.atari-forum.com/viewtopic.ph ... &start=375 which gets 64 colours while doing wall/floor/ceiling textured raycasting on an STe.
Re: Decreasing colours (algorithm)
AnthonyJ wrote: (...) for mixing together via interlaced frames to get you an effective palette (~100 colours?) simply with interlacing.
Interesting tech, but... 50Hz on a CRT is awful already - wouldn't interlacing frames flicker even more?
Seems I am aiming for the impossible. So maybe I choose Spectrum 512 without sound on the ST and something else with DMA sound on the STE.
Anyway: thanks for your suggestions and links!
Re: Decreasing colours (algorithm)
Arne wrote: Interesting tech but... 50Hz on a CRT is awful but interlacing frames? Wouldn't that flicker even more?
Try it first, you might be surprised at the quality that can be achieved.
Jo Even
VanillaMiNT - Falcon060 - Milan060 - Falcon040 - MIST - Mega STE - Mega ST - STM - STE - Amiga 600 - Sharp MZ700 - MSX - Amstrad CPC - C64
Re: Decreasing colours (algorithm)
Arne wrote: Seems I am aiming for the impossible. So maybe I choose Spectrum 512 without sound on the ST and something else with DMA sound on the STE.
I don't agree! Coding this in synclock would not be hard, and if you're only doing one predictably positioned palette change per scanline, you will have lots and lots of free CPU time (even though you will have to arrange "extra" code around the palette change in the right border). Whilst you're here, you could make this fullscreen too (changing the palette in fullscreen limits some of your flexibility around the point of each line at which you write to the palette, but not in a way that will cause you drama).
So, going from the spec512 routine to this new routine would definitely have advantages.
I have a part in a project I'm currently working on which generates a plasma effect by *generating* new palette data, and then switching palette every scanline in full overscan, and I still have plenty of time left over for other stuff.
Re: Decreasing colours (algorithm)
uko wrote: But finally the method I have used is to split the picture in multiple one line image files, then use XnView to convert each line file to x colors (using the batch converter and selecting one dithering method), then reassembling the converted files.
So this weekend I wrote a little Delphi tool to split and merge single-line BMPs. I used XnConvert to reduce the colours for each 1-line BMP and I have to say that the results look promising.
Thanks for pointing out this method.
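For anyone wanting to reproduce this without the manual split/convert/merge round-trip, the same per-line reduction could be sketched in Python with Pillow. This is a sketch under assumptions: it uses Pillow's median-cut `Image.quantize` rather than XnView's converter, and `quantize_per_line` is an illustrative name, not the Delphi tool from this post:

```python
from PIL import Image

def quantize_per_line(im, colours=16):
    """Reduce each scanline of `im` to at most `colours` colours,
    mimicking the split / convert / reassemble workflow in one pass."""
    im = im.convert("RGB")
    w, h = im.size
    out = Image.new("RGB", (w, h))
    for y in range(h):
        line = im.crop((0, y, w, y + 1))          # one-line sub-image
        reduced = line.quantize(colors=colours)   # median-cut per line
        out.paste(reduced.convert("RGB"), (0, y))
    return out
```

The result would still need converting to ST-Low bitplanes plus a per-line palette file, as described earlier in the thread.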
Re: Decreasing colours (algorithm)
Arne wrote: So this weekend I wrote a little Delphi tool to split and merge single-line BMPs. I used XnConvert to reduce the colours for each 1-line BMP and I have to say that the results look promising. Thanks for pointing out this method.
You're welcome!
Re: Decreasing colours (algorithm)
There is also https://github.com/zerkman/mpp