The problem with OpenGL is that while it is great for rendering bitmaps etc. in a 'crossplatform' way (and I'd like to make such a frontend for R3...), you get into trouble if you want to implement something like DRAW in OpenGL. To do it properly you would need to use GL fragment shaders.
When it comes to shaders, that means my 8-year-old notebook won't be able to render it :-)
Also note that on Android for example the Canvas class (which is comparable to DRAW) is still not HW accelerated AFAIK.
I guess it's the same on iOS... the 'draw' API there is also not using the GPU.
HW acceleration is currently still mostly used only for moving textured bitmaps around... a kind of fast blitter in the 2D graphics area.
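To make the "fast blitter" point concrete, here is a minimal sketch in C of what such a 2D blit boils down to: copying a rectangle of 32-bit pixels from a source bitmap into a destination surface. The function name and buffer layout are illustrative, not taken from any actual engine.

```c
#include <stdint.h>
#include <string.h>

/* Copy a w*h block of 32-bit pixels from src into dst at (dx, dy).
   dst_w and src_w are the row widths (in pixels) of each buffer.
   No clipping is done here; a real blitter would clip first. */
static void blit(uint32_t *dst, int dst_w,
                 const uint32_t *src, int src_w,
                 int dx, int dy, int w, int h)
{
    for (int y = 0; y < h; y++)
        memcpy(dst + (size_t)(dy + y) * dst_w + dx,
               src + (size_t)y * src_w,
               (size_t)w * sizeof(uint32_t));
}
```

A GPU does essentially this (plus scaling, rotation and blending) when it draws a textured quad, which is why moving bitmaps around is the easy, already-solved part of HW-accelerated 2D.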
Of course, if you want to create a classical 3D game from textured polygons, shaders etc., that's the area where OpenGL excels (and where the development was mainly focused).
But the good news is that some people are experimenting with shaders to create 2D-oriented engines. Until the main players on the market set some standard, though, it can still take a lot of time.
Couldn't DRAW work the same way as it does now, but render to a bitmap? Then that bitmap could be rendered by OpenGL like any other bitmap.
Yes, that's actually the method I'd like to use to upgrade the View engine... but not only for DRAW. The code would be modular, so you could render to textures using any other library if you want.
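A hypothetical sketch in C of that modular render-to-texture idea: any backend (an AGG rasterizer, a DRAW renderer, ...) only has to fill an RGBA pixel buffer, and the compositor then uploads that buffer as a GL texture. All type and function names here are made up for illustration; the GL upload is shown only as a comment.

```c
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    int width, height;
    uint32_t *pixels;            /* 0xAARRGGBB, row-major */
} Bitmap;

/* Any rasterizer library can be plugged in behind this signature. */
typedef void (*RenderFn)(Bitmap *target, void *user_data);

/* Example backend: fill the whole target with one color. */
static void fill_solid(Bitmap *t, void *user_data)
{
    uint32_t color = *(uint32_t *)user_data;
    for (int i = 0; i < t->width * t->height; i++)
        t->pixels[i] = color;
}

/* Render through any backend; in a real build the result would then
   be uploaded once, e.g. with glTexImage2D(..., bmp->pixels), and
   composited by the GPU from there on. */
static Bitmap *render_to_bitmap(int w, int h, RenderFn render, void *data)
{
    Bitmap *bmp = malloc(sizeof *bmp);
    bmp->width = w;
    bmp->height = h;
    bmp->pixels = calloc((size_t)w * h, sizeof(uint32_t));
    render(bmp, data);
    return bmp;
}
```

The CPU keeps doing the high-quality rasterization it is good at, and the GPU only re-composites the resulting textures, which is cheap.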
I believe the Android porting effort will show us the optimal solution. It is good to find a balance between a highly optimized but not very compatible HW engine and something that is fast enough and can be ported without much pain.
Just think, though: if OpenGL were the default renderer for graphics in R3, you could create a flat surface for a 2D screen, but during the same session you could create another flat surface for another 2D screen at a 90° Z-axis orientation to the first, and then rotate from the first 2D screen to the next in a fluid 3D way.
I did quite a few tests with OpenGL on R2, but the killer for me was the inherent delay with R2 calling the methods (functions) in the OpenGL dll. I was still able to rotate an object created with over 1000 coordinate points 30 times per second on a quad-core computer, but that's probably 1000 times slower than you could do it with straight C on the same computer.
I toyed with the idea of writing a C-based dll that could take all the information for rendering an entire data structure from R2 so R2 would only have to make one method call to a dll per frame instead of thousands, but couldn't get enough higher priority items off my list to get started on it.
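That "one dll call per frame" idea can be sketched in C roughly like this: the script side appends drawing commands into one flat buffer, and the dll decodes the whole buffer in a single call instead of being called thousands of times. The command encoding and all names here are invented purely for illustration.

```c
#include <stdint.h>

/* Hypothetical flat command stream for one frame:
   [CMD_POINT, x, y, z] repeated, terminated by CMD_END. */
enum { CMD_END = 0, CMD_POINT = 1 };

typedef struct { float x, y, z; } Vec3;

/* Decode one frame's worth of commands in a single call.
   Returns the number of points decoded into out. */
static int process_frame(const float *buf, Vec3 *out, int max_out)
{
    int n = 0;
    while ((int)*buf == CMD_POINT && n < max_out) {
        buf++;                          /* skip the opcode */
        out[n].x = buf[0];
        out[n].y = buf[1];
        out[n].z = buf[2];
        buf += 3;
        n++;
    }
    return n;
}
```

The per-call overhead of the interpreter is then paid once per frame rather than once per vertex, which is exactly where the 1000x gap to straight C comes from.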
I'm not saying you should use OpenGL for DRAW. There is no drawing API in Stage3D either. I was proposing to use AGG to draw to a bitmap with the best possible quality and let the HW do what it's made for: moving pixels around. Bo understands it. That's actually how modern apps are made to hit 60fps. Doing animations purely in AGG will not work (and honestly never worked). But maybe I'm too involved in animations, and for many people static content would be enough. That is no solution for the future, though.
But of course, it's not an easy task. One must be careful to get it right: all the draw-command batching, texture packing, etc.
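As an illustration of the texture-packing part, here is the simplest possible "shelf" packer sketched in C: rectangles are placed left to right into horizontal shelves of a fixed-size atlas texture. Real engines use much smarter packers; this and all its names are just a toy example.

```c
/* State of a shelf packer over one atlas texture. */
typedef struct {
    int atlas_w, atlas_h;   /* atlas dimensions */
    int x, y;               /* current insertion point */
    int shelf_h;            /* height of the current shelf */
} Packer;

/* Try to place a w*h rectangle. On success returns 1 and writes the
   position to (*ox, *oy); returns 0 when the atlas is full. */
static int pack(Packer *p, int w, int h, int *ox, int *oy)
{
    if (p->x + w > p->atlas_w) {    /* row full: open a new shelf */
        p->x = 0;
        p->y += p->shelf_h;
        p->shelf_h = 0;
    }
    if (w > p->atlas_w || p->y + h > p->atlas_h)
        return 0;
    *ox = p->x;
    *oy = p->y;
    p->x += w;
    if (h > p->shelf_h)
        p->shelf_h = h;
    return 1;
}
```

Packing many small glyphs and icons into one atlas is what lets the compositor keep batching draw calls instead of switching textures constantly.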
Oldes, I don't argue with you and Bo about that. I think we all know the state of this technology. I've already done several prototypes of such an "engine", so I have some ideas how this could be done for R3; it's just a matter of prioritizing, time and resources. I wrote about the drawing APIs just so other people know that OpenGL is no Messiah if you want to do high-quality 2D vector graphics in realtime. I'm not against HW acceleration at all; it's just not an easy topic in this non-ideal programming world, as you pointed out. I see the solution as a good high-quality rasterizer plus a HW-accelerated compositing engine. That should be a flexible setup, at least IMHO. Plus, this way we also get the classic 3D API for free.
Bo, I even tried to HW-accelerate the AGG renderer code so it runs completely on OpenGL... it works well, and you can use draw directly inside the OpenGL context, mixed with 2D surfaces or 3D objects... lots of fun. But still, a lot of stuff is computed on the CPU that way. Nevertheless it's still better than a fully SW-based renderer.
The best solution for today's graphics HW would be to rewrite most of the AGG code for the GPU using shaders. That would be a state-of-the-art 2D engine for the future. But it's also a pretty big task ;)
What do you think is the best roadmap for the graphics engine in R3 right now? Simply port VID to R3 to start, and then in R3v2 swap out the graphics engine for hardware-based code?
There are plenty of possibilities here. Either port VID and deal with its flaws and history, go the path of RebGUI, or redo VID. I have read somewhere that Carl expected someone to come up with something better than VID. I like VID, yet it has its oddities, like when positioning elements using 'at. It could be improved in some of its behaviours; if you port it as-is you may be hindered by those aspects, and it may turn out harder than restarting with a restricted base set of widgets.