September 2005

Newton’s new Vehicle Container

It has long been promised and finally it’s going to happen : Newton’s overhauled vehicle container is on the way. Luckily I’m an SDK developer, so I already have access to the new beta SDK and have implemented it directly into my current build of the NewtonPlayGround.
In one word : awesome. The container prior to 1.35 was not only inflexible but also very hard to configure. Creating vehicles with a stiff suspension was almost impossible, and getting a vehicle to react the way you wanted it to often took hours of testing and tweaking.
But gone are those days! The new vehicle container even allows for vehicles with a totally stiff suspension (like a kart or a Formula 1 car) and also fixes some bugs the old container had. One of those bugs stopped me from creating a truck with an attached trailer (back in the 1.32 days, the trailer hopped around randomly because of it), but this now works in the current 1.35 beta!

And to show you how the new vehicle container looks in action, I have created a small video in my NewtonPlayGround. Go and grab it here, it’s around 4 MBytes in size and compressed as WMV (so if you have a newer MediaPlayer installed, it should play).

Different Shadow Techniques

This time it’s not a topic about physics, but about rendering. One important thing that gives the user a feeling for how a body is positioned is shadows. Since shadows are a natural “feature” of reality, everyone is able to at least guess the position of an object from the size and perspective of its shadow. A very small shadow usually tells you that an object is far above the shadowed surface, and the bigger the shadow gets, the closer the object approaches the shadow receiver. So in my opinion an application like the NewtonPlayGround, which exclusively deals with dynamic objects, must have shadows. It makes things much easier. Without shadows, it’s often hard to guess where an object currently is.
So I already implemented shadows some months ago using shadow volumes. It’s not the perfect method, so some days ago I evaluated another approach that uses shadow mapping. But both methods have their positive and negative sides, so right now I’m thinking about which of the two to use in the release version (maybe I’ll give the user the possibility to switch between them).
But as this blog is geared towards programmers and not end-users, I decided to do a small write-up on the two techniques, to maybe give coders who have never implemented them a hint on what to use.

Technique 1 : Shadow Volumes

Description :
This was the first technique I implemented, and I have already used it in other projects. It’s physically correct, because (as the name suggests) it treats shadows as volumes. But generating the shadow volumes involves heavy maths. You first need to get the silhouette of an object, its outline as seen from the light source. This also involves generating connectivity information for all the triangles in a mesh, since you need to iterate through the object’s edges to get that outline. Once you have the outline of the object as seen from the light’s point of view, the next step is to extrude those edges away from the light source, at best towards infinity. After that math is done you use the stencil buffer (a common feature since nVidia’s RivaTNT) to mask the volume of the shadows and then render the scene a second time.
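
To make the stencil part a bit more concrete, here is a minimal sketch of the classic depth-pass setup as it could look in Delphi/OpenGL. The procedures RenderSceneAmbient, RenderShadowVolumes and RenderSceneLit are placeholders for your own code, and the dglOpenGL headers are assumed; the PlayGround’s actual implementation differs in the details.

    procedure RenderFrameWithStencilShadows;
    begin
      glClear(GL_COLOR_BUFFER_BIT or GL_DEPTH_BUFFER_BIT or GL_STENCIL_BUFFER_BIT);
      glEnable(GL_DEPTH_TEST);
      glEnable(GL_CULL_FACE);

      // Pass 1 : render the whole scene with ambient light only, filling the depth buffer
      RenderSceneAmbient;

      // Pass 2 : rasterize the extruded shadow volumes into the stencil buffer only
      glEnable(GL_STENCIL_TEST);
      glColorMask(False, False, False, False);   // no colour writes
      glDepthMask(False);                        // keep the depth test, but don't write depth
      glStencilFunc(GL_ALWAYS, 0, $FFFFFFFF);

      glCullFace(GL_BACK);                       // front faces increment the counter...
      glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
      RenderShadowVolumes;

      glCullFace(GL_FRONT);                      // ...back faces decrement it again
      glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);
      RenderShadowVolumes;

      // Pass 3 : add the lit pass only where the counter is zero, i.e. outside all volumes
      glColorMask(True, True, True, True);
      glDepthMask(True);
      glCullFace(GL_BACK);
      glStencilFunc(GL_EQUAL, 0, $FFFFFFFF);
      glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
      glDepthFunc(GL_LEQUAL);
      RenderSceneLit;

      glDepthFunc(GL_LESS);
      glDisable(GL_STENCIL_TEST);
    end;

This is the simple z-pass ordering; as soon as the camera can end up inside a shadow volume you have to switch to the z-fail order and cap the volumes, which is exactly the extra fillrate mentioned in the list below.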

The good :

  • Sharp and correct shadows. No aliasing, no matter what geometry and what light position.
  • Works for all kinds of light. No matter if directional (e.g. sun) or omni-directional.


The bad :

  • Heavy on maths and therefore CPU-bound. The calculations could be offloaded to the GPU using shaders, but the playground is also aimed at lower-end systems, so that’s not an option.
  • Also heavy on the GPU, because they burn a huge amount of stencil fillrate. Especially when AA is enabled, they cause a huge performance hit.
  • Objects need to be closed. Non-closed objects cause shadow errors, and there is no easy way to fix this. So either tell the user to import only closed objects or just disable shadows when a non-closed object is rendered.
  • To make them robust (e.g. when the viewer enters a shadow volume) the shadow volume must be capped (both front and back), which costs additional fillrate and calculations.
  • Increased memory usage. For each object you need to store additional information, like a list with the connectivity information of its edges (see the sketch after this list).

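To give an idea of what that extra connectivity data looks like (as referenced in the last point above) : per object it is enough to store, for every edge, the two triangles sharing it; an edge then belongs to the silhouette when exactly one of those triangles faces the light. A made-up Delphi sketch, with types and names that are only for illustration:

    type
      TEdge = record
        V0, V1       : Integer;  // indices of the edge's two vertices
        Face0, Face1 : Integer;  // indices of the two triangles sharing this edge
      end;

    // FaceFacesLight[i] is true when triangle i points towards the light,
    // recalculated once per frame for the current light position.
    function IsSilhouetteEdge(const Edge: TEdge;
      const FaceFacesLight: array of Boolean): Boolean;
    begin
      // an edge belongs to the silhouette when exactly one of its two faces is lit;
      // these are the edges that get extruded away from the light afterwards
      Result := FaceFacesLight[Edge.Face0] <> FaceFacesLight[Edge.Face1];
    end;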

Technique 2 : Shadow Mapping

Description :
I implemented this only recently, but I had already done a demo on it two years ago. This is a so-called “image-space based” shadowing technique, which basically means you don’t need to know anything about the geometry of the scene (so no silhouette determination, no connectivity, no extruding of edges). This makes it very easy and fast to implement : First you render only the depth information of your scene into a separate texture (I do this using an offscreen buffer, because the shadow map usually is bigger than your viewport). Then you render your scene and project this texture onto it using projective texturing from the light’s point of view. You also need to set up some depth compare functions, and then the hardware does the rest for you.
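
As a rough sketch of how little fixed-function code the basic version needs (ARB_depth_texture / ARB_shadow style, core since OpenGL 1.4); ShadowMapTex, ShadowMapSize and the depth-only render pass are assumed to exist elsewhere, so this is only the idea, not the PlayGround’s actual code:

    // one-time setup of the depth texture the light's view gets rendered into
    glBindTexture(GL_TEXTURE_2D, ShadowMapTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, ShadowMapSize, ShadowMapSize,
                 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, nil);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // let the hardware do the depth comparison : a lookup returns 0 or 1
    // depending on whether the fragment lies behind the stored depth
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);

    // per frame : after the depth-only pass from the light's point of view,
    // copy the depth buffer into the texture (the costly copy mentioned below
    // when no offscreen buffer is used); the projection onto the scene is then
    // done with eye-linear texture coordinate generation using the light's matrix
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, ShadowMapSize, ShadowMapSize);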


The good :

  • Very easy to implement.
  • No knowledge of the geometry required. So objects don’t need to be closed.


The bad :

  • Depending on the size of the depth texture and the position of the light source, it produces heavy artifacts and often only low-resolution shadows.
  • Big shadow maps are needed (2048 in the above screenshot), which means that copying the depth buffer into this texture is heavy on the GPU.
  • You need to configure a depth bias to get rid of self-shadowing artifacts (see the sketch after the note below). This is very hard (nearly impossible) to tune for a tool like the NewtonPlayGround, where scenes can be any size and can contain any number of objects.
  • The normal approach only allows for directional lights. Omni-directional lights need e.g. a cube map of shadow maps.
  • Objects leaving (or outside of) the light’s frustum cause graphical errors because they reach the edge of the shadow map.

Note : The first point can be resolved using modern techniques like Light Space Perspective Shadow Maps. But the other drawbacks remain.
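
For the depth-bias point from the list above, the usual fixed-function knob is glPolygonOffset during the depth pass. The two factors below are just a typical starting point and are exactly the values that are so hard to pick for arbitrary scene sizes; RenderSceneDepthOnly is a placeholder for your own depth pass.

    // while rendering the depth-only pass from the light's point of view :
    glEnable(GL_POLYGON_OFFSET_FILL);
    glPolygonOffset(1.1, 4.0);        // push depth values slightly away from the light
    RenderSceneDepthOnly;             // placeholder for the actual depth pass
    glDisable(GL_POLYGON_OFFSET_FILL);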

So what do I use?
Well, as it looks right now I’ll stay with the shadow volumes. As I already stated in the “bad” section for the shadow maps, I can’t determine the size of a scene, since the user has absolute freedom here. He can make a scene that extends 10 units in all directions, but also one that extends 1000 units in all directions. So tweaking the texture size of the depth map, the polygon offset for removing artifacts and the light setup so that the whole scene falls into the light’s frustum would be very hard to get right.

That’s it for today. I hope some of you out there who weren’t sure about which shadow technique to use now have a clue about what to go for. And if you’d like to discuss what I wrote here, please don’t forget that the NewtonPlayGround is also aimed at lower-end systems. This means that I can’t make use of the newest features out there (like using shaders for smoothing the shadow maps). Please keep this in mind before posting something like “Why don’t you just use shaders to fix negative point X of technique Y”.

NewtonPlayGround again and Dawn of War Addon

So yesterday I finally finished getting motorized and limited joints to work in my NewtonPlayGround. This means that hinges and universals can now not only be motorized, they can also be limited, and the motor will then change its direction once one of the limits is reached. This is especially interesting for creating more complex walkers which e.g. have a walk cycle. As a quick test I put together a humanoid walker that almost walks on two legs using those motorized and limited joints. Although I know next to nothing about humanoid robotics, it’s almost able to walk and was done in less than 5 minutes using my NewtonPlayGround. If you’re interested in seeing it move, head over to this thread on the Newton forums to get more information, a screenshot and a video.
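
Conceptually the update logic for such a limited motor is tiny; a plain-Pascal sketch of the idea (not the actual PlayGround or Newton code, and the record and names are made up) could look like this:

    type
      TJointMotor = record
        Omega    : Single;  // desired angular velocity the motor drives the hinge with
        MinLimit : Single;  // lower joint angle limit (radians)
        MaxLimit : Single;  // upper joint angle limit (radians)
      end;

    // called once per physics update with the current joint angle
    procedure UpdateLimitedMotor(var Motor: TJointMotor; JointAngle: Single);
    begin
      // when a limit is reached, reverse the motor's direction; this flip is
      // what makes simple walk cycles possible
      if (JointAngle >= Motor.MaxLimit) and (Motor.Omega > 0) then
        Motor.Omega := -Motor.Omega
      else if (JointAngle <= Motor.MinLimit) and (Motor.Omega < 0) then
        Motor.Omega := -Motor.Omega;
      // Motor.Omega is then fed to the joint as the desired angular velocity
    end;
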
So now that almost all the features I wanted to have are in and finished, I have started writing the manual. But since the NewtonPlayGround is so complex and you can do so much stuff with it, it’s really turning out to be a hard task. I’m kind of struggling with how to structure the manual, but hopefully I’ll get it done more or less right.

On another side note, I received my copy of the Warhammer 40k : Dawn Of War add-on “Winter Assault” this morning. I must admit that Dawn of War (and, after playing some missions, the add-on too) was and still is one of the very, very few commercial games that I really enjoyed this year. I played many recent games and demos (Age Of Empires III, Serious Sam 2, BloodRayne 2, F.E.A.R.) and none of them could really divert me for longer than half an hour. So either I’m getting too old, or PC gaming is getting too mainstream to attract “old” players (well, I’m just 24) any longer.

It’s walking

As a follow-up to my last post, where I showed you a first version of a more complex walker I was building using the newly implemented features of my PlayGround, I now have a video (WMV, 2.7 MBytes) of a simpler walker walking around.

Due to a bug in the current beta of Newton, the legs of the other, more complex walker won’t work as intended, so I just created this simpler walker and will start getting the other one to walk as soon as this bug is corrected (Julio, the main head behind NGD, is already working on it. He is usually lightning-fast at fixing bugs).

One thing I noticed during the rather tedious process of creating this walker is that the user interface of my PlayGround needs an overhaul and some additions, so that it won’t take hours of learning before new users can use the PlayGround to create such complex objects.
So that’s what’s next on my list : after implementing another set of new features I always wanted to have in, like importing a scene into the existing one (so you can e.g. make a template for a catapult and put multiples of it into one scene) and a feature that lets the camera follow a selected body (this is great fun if you e.g. follow the projectile of a catapult in first-person mode), I started making the user interface of the PlayGround more beginner-friendly (but also powerful for more advanced users).

And another thing that’ll take a lot of time is the manual. I had already written one some months ago, but since then so much stuff has changed that I’ll just redo it completely once the features are finished.

So hopefully I’ll be getting this one out in the near future without too many bugs (it’s complex and users can do stuff I could maybe never imagine, so I bet there will be bugs in it), so that I can finally start working on some other stuff. I have nothing bigger planned, but would rather like to put out some smaller demos with source code (like the SDL demos on my Newton page). One demo I’m thinking of is a 2D platformer using Newton for physics and OpenGL for rendering it in 3D (maybe I’ll also use some kind of skeletal animation for the characters), and the other one is the tank demo I posted on the Newton forums recently.

What I’m currently working on

I had some hours to spare recently, so I decided to put some new features into my NewtonPlayGround. Some of you know that tool : it’s an app for showing off what the Newton Game Dynamics engine is capable of. It allows users to create their own scenes (even very complex ones), play around with them, save them and even pass them around. The version currently in the making will be released shortly after NGD 1.35 is released (which isn’t too far off) and is almost a totally new program compared to the last release you can currently download (so it may be better to wait instead of trying out the outdated version of the playground).

So on to the new features I’m working on : The first one lets you connect already existing bodies into a compound. This is a feature I’ve wanted to have in for a long time, as it allows you to create complex collision shapes out of existing primitives and then e.g. use them with the next feature. It’s rather easy to use : you fire up the context menu of the first body you want to connect, select “connect”, and then select all the other bodies you want to be connected with it. After pressing enter, the application creates a compound from the selected objects. In the above screenshot I used this to create the legs of a walker.
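
Internally this boils down to Newton’s compound collision : the collision primitives of the selected bodies are collected and merged into a single collision that one body then uses. A minimal sketch of the idea, assuming the Newton 1.x function names (NewtonCreateBox, NewtonCreateCompoundCollision, NewtonCreateBody, NewtonReleaseCollision) as exposed by the Delphi headers; the exact parameter declarations may differ slightly, and offset matrices, mass setup and error handling are left out:

    procedure CreateCompoundFromParts(World: PNewtonWorld);
    var
      Parts    : array[0..1] of PNewtonCollision;
      Compound : PNewtonCollision;
      Body     : PNewtonBody;
    begin
      // two primitives that could e.g. form one leg of the walker
      // (no offset matrices here, so both sit at the origin)
      Parts[0] := NewtonCreateBox(World, 0.3, 2.0, 0.3, nil);
      Parts[1] := NewtonCreateBox(World, 0.3, 0.2, 1.0, nil);

      // merge them into a single compound collision and build one body from it
      Compound := NewtonCreateCompoundCollision(World, 2, @Parts[0]);
      Body     := NewtonCreateBody(World, Compound);
      // Body would then get its mass, matrix and so on

      // the body keeps its own reference, so the collision objects can be released
      NewtonReleaseCollision(World, Parts[0]);
      NewtonReleaseCollision(World, Parts[1]);
      NewtonReleaseCollision(World, Compound);
    end;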

The second recently added feature goes hand in hand with the one above (or better said : to use this feature for more complex stuff, you actually need the above feature). It’s joint motors, and it allows you to create motors for different joint types (right now only for hinges, but others will follow) which can be used to animate stuff without user interaction. So you create a joint, and then you can open up a new dialog that lets you enable/disable a motor for this joint and set different motor parameters, like the omega that’s applied to it and the limits of the motor.

Together those two features open up a great set of new possibilities for doing stuff with the NewtonPlayGround, and I can’t wait to see my walker walking around. But what I’m far more interested in is what other users will come up with when this new version is finally released.

And those are just the most recent features I added. As already mentioned, a lot of stuff has changed since the last public release, and the changelog already contains over 50 more or less big changes.

Growing up with and shaping up an API

This is something I’ve wanted to get out for some time now :
When I first found out about the Newton Game Dynamics SDK it was the fresh physics SDK on the block and much stuff was still lacking (although it was already very impressive in terms of how it calculated physics). So I saw an opportunity to not only create demos and stuff for a very new API, but also to help shape that API and evolve with it. So I had no hard time deciding where I’d shift my priorities.

I had been using (more or less, depending on how you see it) finished APIs like OpenGL for years before, and thought it was nice to learn and master them. But it was not really that satisfying (apart from the fact that physics makes things move, and moving things usually are more interesting than static things). When using such an established and widespread API as OpenGL, there are hundreds or thousands of people who have already done the same stuff you did and probably will do the same stuff you’d do in the future too (e.g. demos for new extensions, shader stuff, and so on). And moreover : your opportunities to bring in your own ideas or invent something new that later gets implemented in such a standardized and common API are very small.

With such a new and evolving API (SDK) like NGD it’s the exact opposite, and it’s really satisfying to work with the Newton guys. In the past months I’ve gone from zero to being called a “Dark Master of Newton Physics” (that’s what the main man behind NGD calls me from time to time). I often do stuff that doesn’t work 100% with Newton in its current state (e.g. this tank demo that uses material callbacks for moving a tracked tank), and other software makers would either just ignore it or tell me to try another method. But with Newton it’s totally different : Julio usually contacts me and tries (and up until now has never failed 😉 ) to get Newton to make that stuff work, and that’s something that makes me enjoy working with and using Newton.

So that’s just something I wanted to share with you guys, as maybe some of you wondered why I switched my focus to coding (when I get the time) physics stuff with Newton instead of graphically impressive demos and games. It’s just so satisfying to grow up with Newton and also help shape it, and that’s something I missed when only doing OpenGL stuff.

Ageia putting an end to realistic in-game physics…

Well, it seems that Ageia is planning to get a monopoly on real-time physics in the gaming sector. After announcing their PPU some months ago, they recently purchased Meqon, who provided another real-time physics SDK aimed at game developers. This somehow reminds me of what Creative Labs did some years ago when they realized that Aureal 3D had some good sound hardware : they just bought them, took all the useful stuff for themselves and abandoned everything Aureal 3D related.

In my eyes Ageia’s actions and their PPU are really no good thing at all. As I stated, I feel that they want to monopolize real-time gaming physics. First they create that PPU (recently they even got some hardware vendors like BFG on board), which in my eyes will do more harm than good. Not only does it cost as much as a high-end graphics card ($250-300), but it’s also a closed piece of hardware. So unless you want to “sell” your physics SDK to those guys, you won’t be able to expose this hardware with your own SDK. And that’ll make it hard for the free physics SDKs still out there (with Newton being my SDK of choice, but also think of e.g. ODE or TrueAxis). And their second move is now to buy up their “enemies”. I’m wondering which company or developer they buy away from the competition next. You know : competition not only drives innovation (just take a look at the GPU market), it also keeps prices low.

So this is going to hurt everybody. A lack of alternatives, and a piece of hardware for a physics SDK that cheats on almost every aspect of physics. Just go and create the exact same scene in Novodex and in Newton, apply the same forces to the bodies, and you’ll see what I mean : Newton’s solver will make the bodies move totally realistically, but the solver behind Novodex won’t. You’ll clearly see and feel that things in Novodex often just don’t react like they would in reality. And that’s the shame : the only thing “better” about Novodex is that it can handle hundreds (with the PPU even thousands) of rigid bodies without a huge slowdown. But at what price? A lack of realistic behaviour.

So now that all those big game developers out there are starting to adopt Novodex (I think that many of them didn’t even evaluate other SDKs; maybe Ageia even sent them some money to “help” them evaluate), users and gamers will get used to the unrealistic and fake physical behaviour behind Novodex, and in my eyes that’s a very, very sad thing to happen. I really can’t believe that all the big developers (like Epic) fall for such an unrealistic solver model, but maybe they just see those hundreds of moving bodies and totally forget about the rest.

You know, I’m not easily upset, but that whole thing Ageia is doing there is really bothering me, and I hope people will start to think the same, so don’t ever buy that PPU. Not only because of the stuff I wrote above, but also because both AMD and Intel are ramping up their multi-core CPUs (AMD has already announced their quad-core CPUs). So when those CPUs hit the street you’ll always have at least one CPU core (remember : that’s up to 4 GHz and more of raw processing power) free to do physics, and even more cores in the future. So who the heck needs a dedicated PPU for that then? In my eyes it’s almost like wanting dedicated sprite processors back, the ones that were used in the good old Sega Mega Drive days.

First Entry

Welcome to my blog. This is my first entry, and I intend to update it at least 2 or 3 times a week. Most of the visitors here may know me from my primary homepage www.delphigl.de and the various forums (NGD, PGD, DGL) I am (or better said “was”) active in.

I will mainly use it for news and talk about my programming stuff (Newton, OpenGL, Delphi), and maybe some technical stuff that caters to those topics (hardware and so on). But since my real life has recently grown into a monster trying to eat me alive, you may (or rather: will) also find some real-life rants here and there.