[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [MiNT] XaAES bugtrack (admin and) feature requests



2009/12/17 Johan Klockars <johan@klockars.net>:
>>> Speaking of gradients - this should really be a VDI feature. It could be
>>> implemented as a variant of v_bar(). This way XaAES does not have to bother
>>> with pixel format detection, and (I don't know if graphics cards can do
>>> gradients in hardware) possibly speed up things.
>>
>> I totally agree.
>> Putting a low-level drawing feature like gradients in the AES is a nonsense,
>> it has do be done in the VDI.
>
> Well, besides the problem of supporting old VDI:s, there are a couple
> of major problems with VDI level gradients, as I see it:
>
> - What kind of gradients should be supported?
>  + top to bottom - obviously
>  + left to right - I suppose so
>  + corner to corner - possibly
>  + any angle - well...
>  + elliptical, polygonal, etc - uhm...
>
Ozk (?) uses a _method_ value in a case statement. The algorithms are
simple, and most can be seen in the interface when GRADIENTS=1 is set;
the most uncommon one is used in the Alert title and the ScrollList
title. There are currently 5 gradient methods.

see: create_gradient function in TRNFM.C

> - To what kind of drawing should they apply?
>  + Boxes - obviously
>  + Rounded boxes - I suppose so
>  + ellipses, polygons - well...
>  + lines, text, etc - uhm...
>
all :)

you forgot to mention outlined boxes and DRAW_3D

But I agree with what is said below, about blitting.

> - How should the colours be defined?
>  + Extra palette entry for second colour perhaps?
>  + Special gradient palette entries?
>
This is the related gradient struct as used in XaAES; I think it is
rather flexible:
struct xa_gradient
{
	struct xa_data_hdr *allocs;
	short wmask, hmask;
	short w, h;
	short method;
	short n_steps;
	short steps[8];
	struct rgb_1000 c[16];
};

> - How should the gradient itself be defined?
>  + Linear in RGB?
>  + Linear in some more suitable colour space?
>  + Not linear at all? How then? Gamma?
>
This is an example (vertical slider background):
struct xa_gradient otop_vslide_gradient =
{
	NULL,
	-1, 0,
	 0, 16,
	1, 0, {0},
	{{400,400,500},{600,600,700}},
};

This is the gradient for the window frame buttons:
static struct xa_gradient indbutt_gradient =
{
	NULL,
	-1,   0,
	 0,  16,
	4, 1, { -35, 0, },
	{{700,700,700},{900,900,900},{700,700,700}},	
};

These RGB values can simply be extended to allow alpha channel values,
and all RGB values can be scaled down from 32 bit. RGB-to-PALETTE
mapping tables are used to allow <=256 color gradients.

From what I understand, the resulting gradient is stored as a bitmap
texture, the same as loaded IMGs, both being converted at
load/creation time to the appropriate INTEL or MOTO format in 15,
16, 24, or 32 bit pixel depths.

> And how do you get decent performance?
> The preferred implementation depends on the use case. Some methods use
> up lots of memory, but can be much faster than others, especially if
> the gradient is going to be used more that once. And in the case of a
> graphics card, you may want to allocate memory on the card for
> pre-created gradients, or not.
>
I think some high speed pixel conversion routines would be a good
start (as above). The VDI knows what resolution the workstation is
operating in, if an input or output format is not specified. If
on-card allocation were offered, the resulting output should be
writable directly there from within the VDI.

Optionally, the VDI should allow for texture management to assist in
maintaining "close to screen" rendering (or blitting). Two instances
come to mind:
  1) a single TOS app (i.e. the only VDI user).
  2) an app asks for "managed rendering".
Here the "managed" part is: the VDI allows on-card allocation, and if
the card does not support this, runs a separate "texture only" virtual
workstation. The VDI keeps track of it.

>
> In all, I don't believe that the VDI could make the "right" decision
> regarding how to do the gradient. I'd rather see VDI support for
> making the various gradient choices possible, with good performance.
> Such as:
>
> - Off-screen, but on card (when applicable) bitmaps
>  To make use of the really high bandwidth that the graphics cards
> have internally.
Yes, I think this is a must-have, especially when people start using
modern 3D cards. The ability to upload functions would be useful too,
but as I understand it, at the moment there is no hardware being used
that can do this (am I wrong?)

> - "area-fill-blits"
>  Blitting a block repeatedly, to fill some shape (possibly only rectangular).
useful for both textures and gradients

> - "poly-blits"
>  Blitting the same thing to many places with a single drawing call.
very useful, especially for some (most) AES components

> - Masked blitting
>  Using a binary (or alpha) mask to specify where a blit (including
> the ones mentioned above) actually goes.
binary AND alpha masks (important to have both).
One other to include should be a color mask, both RGB and PALETTE.

These same masks, maybe with different routines, should allow for
bgtexture-mask-fgtexture in a single pass, where a background is
provided for the mask process. This leads to the possibility of bump
mapping.

Because of the VDI's closeness to the hardware, the VDI should be
allowed to upload algorithms to hardware that supports it, and allow
its own algorithms to be replaced for certain functions (or something
similar).

Certain parts cross over into the fields DirectX and OpenGL cover. It
would be a bit of a coup if the VDI could be asked to provide these
(or access to these) via the VDI API, even if they were just mapped
to libraries.

> - Multi-coloured poly-lines
>  For simple gradients that do not require the setup of bitmaps for
> blitting, and other things.
This is the equivalent of a pre-rendered bitmap stored as an
algorithm, isn't it? Color values and steps can be passed to affect
the output. If an algorithm could also be uploaded (passed), this
would become even more flexible.

> Several of these would be very useful not only for gradients, but for
> texture mapped backgrounds and games as well.
>
> /Johan
>
just what I was thinking :)

Does the VDI supply access to any other hardware, or just screen
hardware? If it does, it would be a good place to build a fuller API
for game related uses, like an HID extension, to access certain
devices in a generic way and allow transparency. For example, arrow
keys could be used in place of a joystick, or a joystick used as a
mouse. However, if it were also to supply low level access, or a
"low level mode", certain higher level functionality could be
unloaded, freeing memory and optimizing throughput, leaving the app
to supply any higher level functions.

Specifically for single TOS apps, and certain fullscreen apps
(depending on AES user adjustable settings, or when the AES is not
present), a BLOCKING mode should be allowed, giving the app full
command over the VDI's attention, at least where actual on-screen
pixels are being rendered (with respect to other AES apps). It should
basically be a "hog mode", to the point where the VDI can connect the
calling app directly to the hardware, simply because it is the only
one doing actual screen stuff.

This could come with managed restrictions depending on AES usage, or
the VDI could move all on-card AES stuff to a temporary or virtual
area. If for some reason the user/AES does not want any screen
hogging or fullscreen apps taking full control, a generic virtual
hardware device could be provided, equal to a physical workstation,
one that can then be used as a viewport through the VDI from the AES
(or another app).

An AES running in blocking mode would theoretically be able to do more
with interface related "gee-whiz" stuff, in exactly the same way any
game would.

If the VDI could supply "algorithm change" and "function replace", at
least for specific routines, the VDI need never become outdated
(because of new hardware) nor incompatible (older routines can be
uploaded if needed). Routines that cater for a specific hardware
combination could then be supplied with the drivers, or by others,
and certain routines with a hefty speed-to-size trade-off could be
swapped according to circumstances, for example.

Without getting into the lower half of this post, what sort of time
frame are we talking about with regard to developing the above
things, and does any of the lower half warrant inclusion or further
examination?

Is it worth documenting some of this and previous posts, even in
outline form, to try and build an API which other, currently
unmentioned, parts can slot into (become part of)?


Paul