Frictional Games Forum (read-only)

Full Version: Thomas, 2008-06-18, "Character version 2.0"
Pages: 1 2 3 4 5
As for my 2nd step, I simply derive everything I need to know from the G-buffer. So when I render the light (shading, attenuation, etc.) I just add a few more lines and do:

1) Project the shadow map into screen space.
2) Check if fragment is occluded by comparing the view space depth against the value stored in the shadow map.

There is never any redrawing of any geometry or anything like that. All I do is to render the scene to shadow map (from the perspective of the light) using a single "forward rendering" pass, which is something you are doing as well and something that must be done. To cut down on overdraw in this state I make sure to render front to back (so that the z test will discard most fragments) and I also do occlusion culling.
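The two-step check above can be sketched in plain C++ (a minimal illustration only, not the engine's actual code; all type and function names here are made up, and `shadowMapDepth` stands in for the texture fetch at the projected texel):

```cpp
// Hypothetical minimal vector/matrix types, row-major.
struct Vec4 { float x, y, z, w; };

struct Mat4 {
    float m[4][4];
    Vec4 mul(const Vec4& v) const {
        float in[4] = { v.x, v.y, v.z, v.w };
        float out[4] = { 0, 0, 0, 0 };
        for (int r = 0; r < 4; ++r)
            for (int c = 0; c < 4; ++c)
                out[r] += m[r][c] * in[c];
        return { out[0], out[1], out[2], out[3] };
    }
};

// Steps 1 and 2 combined: transform a view-space position into the light's
// clip space, do the perspective divide, and compare the resulting depth
// against the value previously stored in the shadow map at that texel.
bool isInShadow(const Mat4& viewToLightClip, const Vec4& viewPos,
                float shadowMapDepth, float bias) {
    Vec4 c = viewToLightClip.mul(viewPos);
    float fragDepth = c.z / c.w;               // depth as seen from the light
    return fragDepth > shadowMapDepth + bias;  // behind the stored occluder?
}
```

In a real shader the same comparison is done per fragment (often with several offset samples for soft edges); the bias term is the usual guard against shadow acne.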

Now for your code.
#1
I assume you mean:
Code:
out.depth = in.position.z / in.position.w;
This is just like my first pass and as I said this is the only re-rendering (of scene geometry) I ever do.
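For context, that pass just rasterizes the scene from the light's point of view and keeps the nearest projected depth per shadow-map texel. Reduced to its essence on the CPU side (an illustrative sketch, not real rasterizer code):

```cpp
#include <vector>
#include <algorithm>

// Depth-only shadow pass, boiled down: for each rasterized fragment,
// store the smallest projected depth (z / w) seen so far at that texel.
struct ShadowMap {
    int size;
    std::vector<float> depth;
    explicit ShadowMap(int s) : size(s), depth(s * s, 1.0f) {}  // far = 1.0

    void write(int x, int y, float z, float w) {
        float d = z / w;                    // same divide as in the shader
        float& stored = depth[y * size + x];
        stored = std::min(stored, d);       // z-test: keep the nearest
    }
};
```

The front-to-back sorting mentioned above makes the `std::min` (the hardware z-test) reject most later fragments early, which is where the overdraw savings come from.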

#2
Am I correct in saying that you render all geometry intersecting both the light and view frustum and then get the distance to the light for every affected fragment?
If so then this is pretty much what I do in my light shader code.

#3
Don't you do any light rendering here (shading, attenuation, etc)? Do you just modulate the shadow with the screen or something?

If you do light rendering in this pass then this is exactly like my 2nd step. The only difference is that I incorporate your 2nd step in there too.


Assuming I have gotten all of your stuff right, I am afraid that I cannot see any reason why your method would give any speed-ups. If you are using a deferred shader then you can get the view-space position (and must do this for attenuation anyway) in the light shader and can use that to calculate whether the fragment is occluded (or the level of occlusion, if several samples are used).
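Getting the view-space position back from the G-buffer is cheap if linear depth is stored: scale the per-pixel view ray by the stored depth. A minimal sketch (assuming the G-buffer stores linear view-space depth and that `u`, `v` are screen coordinates in [0,1]; the y-axis convention and all names are illustrative):

```cpp
struct Vec3 { float x, y, z; };

// Reconstruct a view-space position from a linear G-buffer depth value.
// tanHalfFovY and aspect describe the camera frustum; the xy extents of
// the frustum at depth 1 are scaled by the stored depth.
Vec3 viewPosFromDepth(float linearDepth, float u, float v,
                      float tanHalfFovY, float aspect) {
    float ndcX = u * 2.0f - 1.0f;   // remap [0,1] -> [-1,1]
    float ndcY = v * 2.0f - 1.0f;
    return { ndcX * tanHalfFovY * aspect * linearDepth,
             ndcY * tanHalfFovY * linearDepth,
             linearDepth };
}
```

Since this position is needed for attenuation anyway, the shadow lookup adds essentially no extra reconstruction cost.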

EDIT: Just saw your second post after writing the above. I think the above covers that too though.
EDIT2: Made the text under step #1 clearer.
Thomas Wrote:There is never any redrawing of any geometry or anything like that. All I do is to render the scene to shadow map (from the perspective of the light) using a single "forward rendering" pass, which is something you are doing as well and something that must be done. To cut down on overdraw in this state I make sure to render front to back (so that the z test will discard most fragments) and I also do occlusion culling.

Assuming I have gotten all of your stuff right, I am afraid that I cannot see any reason why your method would give any speed-ups. If you are using a deferred shader then you can get the view-space position (and must do this for attenuation anyway) in the light shader and can use that to calculate whether the fragment is occluded (or the level of occlusion, if several samples are used).

Hmmm, from the sounds of it we are not doing exactly the same thing, but it appears to be similar. Out of curiosity, are you performing a matrix multiplication inside the pixel shader to perform the shadowing? Because if so, I think I understand your method.

I know that occlusion and z-sorting are done, and I previously did the same thing. Frankly, I'm not entirely sure why I got such an enormous speed-up from the method. I would have to do more research on how graphics cards render on a vertex->pixel level. What I can say is that when sampling a huge-resolution depth map in the deferred method (2048 x 2048), it only dropped the framerate 8 fps, and a 4096 x 4096 depth map only dropped it 15 fps from the original. Compare this to forward, which at 2048 dropped an entire 25 fps, and with a 4096 depth map became unplayable. Another thing I had to consider was the possibility of multiple shadow casters, which at higher resolutions was out of the question with forward shadowing.

In theory it does make sense that there would be no advantage as long as proper occlusion culling and z-sorting were used, but the same can be said about deferred lighting as well.
Quote:Out of curiosity, are you performing a matrix multiplication inside the pixel shader to perform the shadowing?

Yes. I use it to project the view-space position to a UV coordinate, and I use the same UV / matrix when using gobos.
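The projection described here can be sketched as follows (an illustrative sketch, not the engine's code: the matrix is assumed to be lightProjection * lightView * inverse(cameraView), combined on the CPU so the pixel shader does a single matrix multiply):

```cpp
// Project a view-space position into shadow-map / gobo texture coordinates.
// m is the combined view-space -> light-clip-space matrix, row-major.
// After the perspective divide, clip-space xy in [-1,1] is remapped to [0,1].
void shadowUV(const float m[4][4], const float viewPos[4], float uv[2]) {
    float c[4] = { 0, 0, 0, 0 };
    for (int r = 0; r < 4; ++r)
        for (int k = 0; k < 4; ++k)
            c[r] += m[r][k] * viewPos[k];

    uv[0] = (c[0] / c[3]) * 0.5f + 0.5f;  // perspective divide + bias/scale
    uv[1] = (c[1] / c[3]) * 0.5f + 0.5f;
}
```

The same divided coordinates give both the shadow-map comparison texel and the gobo texture lookup, which is why one matrix serves both.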

As for your speed up:

Did your original method derive the view-space position from the G-buffer? If not, how did you derive it?

Quote:Another thing I had to consider was the possibility of multiple shadow casters, which at higher resolutions was out of the question with forward shadowing.
I cannot see why this should be a problem. I use multiple shadow casters with great ease in the algorithm I use; I simply reuse the shadow map.

The main reason a speed boost does not make any sense is that you are basically doing the same thing I am, but adding an additional pass with a 128-bit render target (where you re-render scene geometry too!). This should seriously affect the fill rate in a negative way. If this gives you a better frame rate than your previous implementation, then I think it is more likely that your previous code had some issue.

I cannot actually see the reason for having this 128-bit render target at all. It just seems to store the light-space coordinates of the affected geometry, something that is very fast to get in the light shader.
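To put rough numbers on the fill-rate concern: 128 bits is 16 bytes per pixel, so a full-screen pass at, say, 1280x1024 writes about 20 MB before any overdraw, versus about 5 MB for a typical 32-bit target. A back-of-the-envelope helper (the resolution and overdraw figures are illustrative, not measured from either engine):

```cpp
// Bytes written for one full-screen pass into a render target with the
// given per-pixel size and average overdraw factor. Illustration only.
long long passBytes(int width, int height, int bytesPerPixel, double overdraw) {
    double pixels = (double)width * height;
    return (long long)(pixels * bytesPerPixel * overdraw);
}
```

Any overdraw multiplies that figure, which is why an extra geometry pass into a fat render target tends to hurt rather than help.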

I still might be missing something though Smile So please say if I have gotten anything wrong.
Thomas Wrote:I still might be missing something though Smile So please say if I have gotten anything wrong.

Well, I actually tried to get the light-space coordinates that way originally, but on my card matrix multiplication runs ridiculously slow. Perhaps it's my graphics card or driver; it's an 8600M GT. What card do you test on?

I'll try re-formulating to see if I did anything wrong. It's possible we are still slightly misunderstanding each other, but I think we are doing very similar things. Don't you hate that? Tongue

Anyway, whatever tech you guys use, I just want to say I really appreciate what you guys are doing for the genre (first-person perspective games, I mean). I am overly sick of all the mindless violence in games. It's not that I really mind the violence itself (I think GOW is pretty cool looking lol), but it's getting to where that's the ONLY thing they put in games.

I think the time I hit rock bottom and realized this was when I played Bioshock last year. Don't get me wrong, it's got a phenomenal story and really outstanding mechanics, but I was severely disappointed to learn that 95% of the game was running around killing random people with no real motivation to get creative.

I had this notion that most of the combat was with the Big Daddies, which actually was fun, but the splicers were the equivalent of swatting annoying flies that weren't even a challenge. Not to mention the game was supposed to be a successor to horror games, and it wasn't scary in the least.

Anyway, keep up the good work Wink
I am using a cheap ATI Radeon HD 2600 PRO that is not all that good. Your 8600 should be better Smile

And glad you like our games so far! Smile
Awesome engine!
It looks like it's really well written.
I can imagine how hard it would be to make all these things you have mentioned (sloping system, knockbacks, etc.) fully compatible with each other.

As far as I know and have seen, when dragging a dynamic object in HPL and HPL2, the anchor point isn't attached to the position you clicked.
When simply dragging it is, but not when you walk while holding an item.

Example:
[Image: eio1fl.jpg]

It's not that important, but it gets difficult when pushing heavy objects. You basically can't rotate them.

Tell me if you get my point.
Keth:

You are correct, and HPL2 allows you to vary this behavior. It all depends on whether the anchor position allows rotation or not. So one can have it both ways Smile
Thomas Wrote:Keth:

You are correct, and HPL2 allows you to vary this behavior. It all depends on whether the anchor position allows rotation or not. So one can have it both ways Smile

Well, it would be awesome if it were possible, with [Shift] or some other key, to always select the anchor point. It's easier to rotate things that way for expert players.

Because the anchor point is always on the outside (on the border) of an object, it wouldn't otherwise be possible to select the center point. This is why it would be better to have a choice between the anchor point and the center point.

Is there any chance of you implementing this possibility in the HPL2 engine?
Keth:

This is not really an engine question but rather a matter of how the game chooses to implement it. The attach thingy shown in the HPL2 character video is totally new to the engine. In Penumbra a totally different technique is used.
A little OT... but are there any new engine features ready to be shown to the public, Thomas? Smile

Just curious since it has been a while. ^^