
Monday, 5 September 2011

Kind of Proper Lighting

Back to school tomorrow, already up too late. Ah well. Progress has been made on lighting - apart from shadowing and specular reflection, it's now fully usable. There are a number of speed improvements which still need to be done, but those changes will have no real effect on the image. Rendering speed is still acceptable as-is with the single light I've been using for testing, and it can easily be made far faster than it currently is - I'm using the simple but time-consuming approaches for now.


Charlie

Monday, 22 August 2011

Sponza Atrium

Here's a simple first look at the engine. Currently it's showing the base texture and tangent-space surface normals of Sponza, a model used for tests (mainly lighting). The next step is to shift the normals into viewing space (using the transpose of the inverse of the transformation matrix, for reasons I neither understand nor need to), and use them to light the scene. This now works on GL2 (DX9-level) hardware, with ideas to extend it to GL3.x (DX10) or GL4 (DX11) and add new features, such as rebuilding high-polygon detail from a smaller set of points and a map via a geometry shader.
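As it happens, that inverse-transpose has a neat closed form for a 3x3 matrix. Here's a rough sketch in plain Java of how the "normal matrix" could be computed from the upper-left 3x3 of the model-view matrix - an illustration of the maths, not my engine code:

    // Sketch: the "normal matrix" is the transpose of the inverse of the
    // upper-left 3x3 of the model-view matrix - equivalently, its cofactor
    // matrix divided by its determinant. Input and output are row-major, length 9.
    static float[] normalMatrix(float[] m) {
        float c00 = m[4]*m[8] - m[5]*m[7], c01 = m[5]*m[6] - m[3]*m[8], c02 = m[3]*m[7] - m[4]*m[6];
        float c10 = m[2]*m[7] - m[1]*m[8], c11 = m[0]*m[8] - m[2]*m[6], c12 = m[1]*m[6] - m[0]*m[7];
        float c20 = m[1]*m[5] - m[2]*m[4], c21 = m[2]*m[3] - m[0]*m[5], c22 = m[0]*m[4] - m[1]*m[3];
        float inv = 1.0f / (m[0]*c00 + m[1]*c01 + m[2]*c02); // 1 / determinant
        return new float[] { c00*inv, c01*inv, c02*inv,
                             c10*inv, c11*inv, c12*inv,
                             c20*inv, c21*inv, c22*inv };
    }

(The reason, for the record: normals must stay perpendicular to the surface even under non-uniform scaling, and the inverse-transpose is the transform that preserves that.)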

Base texture

Tangent-space normals - the light blue colour represents (0.5,0.5,1) as a colour, which works out to (0,0,1) as a vector - the "up" direction with respect to the surface. I have since fixed the out-of-place stone texture - a bug in the model, not in my code for once!
Still a fair way off actually lighting it - I need a way of quickly working out the distances between points, amongst other things. At my current rate that might be done in a few days. Skeletal animation, a game engine, net code and a sound system are a while off.

Come next Sunday, I'll be camping and so may not have internet access. If that is the case I'll have a scheduled post prepared. With any luck it'll feature working lighting.
Fuck wasps.
Charlie

Monday, 15 August 2011

Fixed glitches, Minecraft

First things first, I fixed the graphical glitches which were occurring in my last screenshot - the depth buffer hadn't been set up correctly. The renderer can now project and transform objects, and essentially also supports textures. Still to add are normals, displacements, a scene manager, skeletal animation, a blend shader and lighting - but the heart of the deferred pipeline is now fully functional, which is always nice. In a couple of days I might be able to get a copy running in a web browser; I'll edit this post if I manage.
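For anyone curious, a depth-buffer mishap like that usually comes down to a couple of missing calls. The actual bug isn't shown here, so this is the generic setup rather than my fix - a sketch using LWJGL's GL11 bindings:

    import org.lwjgl.opengl.GL11;

    static void initDepthTest() {
        GL11.glEnable(GL11.GL_DEPTH_TEST); // without this, later polygons overdraw nearer ones
        GL11.glDepthFunc(GL11.GL_LEQUAL);  // keep whichever fragment is nearest the camera
    }

    static void clearFrame() {
        // Clearing only the colour buffer leaves last frame's depths behind -
        // a classic source of exactly this kind of glitch.
        GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
    }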

Redness represents X axis, Green Y-ness, and Blue Z-ness. It's also possible to "fly" around the cube.
Secondly, I started playing Minecraft Classic for a bit, then fell a little in love with the game. I purchased the Beta at about midnight, and have been playing it solidly since. It's absurdly addictive, but also brilliantly simple, and the DRM-free, play-wherever, indie quality of the game makes it all the better. It's also written by one man, in Java and using OpenGL (exactly the same as that ^^).

Minecraft? More like mindcrack?
Charlie

Monday, 8 August 2011

Back in the UK

The timelapse and images promised are in the process of being uploaded, and I'm suffering from a fairly bad headache, so this will become a fully-fledged post tomorrow. Return in 18 hours. That's an order, soldier.

Edit: OK, so the above might have been an exaggeration. I have made the timelapse but it's suffered from compression or rendering glitches. I shall re-render in something other than AE in the near future and post back. This edit is mainly to show what little progress I've actually made on the game:


If you look carefully at the full-size image you might be able to see the cube-ness of it (the lack of more than one colour deprives it of any real "shape" as such). It might not seem like much, but it is perspective-ly correct (only the most technical of terms for you dedicated readers), which means that there are no maths-related errors anywhere in my code. It's also the beginning of a completely deferred pipeline (which means that lighting complexity is independent of geometric complexity - a huge benefit for complex scenes with lots of lights).

Edit 2:
So I added some colour, and it turns out my maths might not be so perfect after all:
I appear to be missing a face.
Bugs shall be fixed and a basic engine shall be running by next post. If I stop being lazy.

That one picture warranted staying up until 5:43AM? YES!
Charlie

Monday, 25 July 2011

Evenings are dark again!

Here's one more short post, and a warning that I'm on holiday for one week beginning Friday, so my next post may end up being scheduled and therefore old news.

I shall be here in a week. Expect numerous photographs, of nowhere near this artistic quality.
I've actually made decent progress with the game - I'm now at 6.5K lines of Java code, which to be fair doesn't sound like much, but does represent significant progress. By comparison, Unreal Engine + Editor is reportedly over 1.4M lines, the Source SDK at 780K, Quake 3 at 310K, and the Crysis SDK at 280K lines. In any case, judging a program by the number of lines of code is like judging an aircraft by its weight [quote shamelessly paraphrased from Bill Gates]. I'm aiming to have a working renderer with terrain, water, models and UI working in "real" space within the week - which should be achievable. One significant annoyance for me is that OpenGL intentionally doesn't have any native way of handling fonts - which is going to add a significant amount of code (unless I cheat and just bake every bit of text into images, which isn't possible for dynamic text like resource counts).
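One middle ground is to bake a single glyph-atlas image once and draw one textured quad per character at runtime, so even dynamic text works. A rough sketch (immediate-mode GL11 for brevity; the 16x16 ASCII atlas layout and the drawText name are my own illustration, not a decision I've made):

    import org.lwjgl.opengl.GL11;

    // Assumes a fixed-width font, and that the glyph atlas texture is
    // already bound and texturing enabled.
    static void drawText(String text, float x, float y, float size) {
        float glyph = 1.0f / 16.0f; // atlas holds 16x16 ASCII glyphs
        GL11.glBegin(GL11.GL_QUADS);
        for (char ch : text.toCharArray()) {
            float u = (ch % 16) * glyph, v = (ch / 16) * glyph;
            GL11.glTexCoord2f(u, v);                 GL11.glVertex2f(x, y);
            GL11.glTexCoord2f(u + glyph, v);         GL11.glVertex2f(x + size, y);
            GL11.glTexCoord2f(u + glyph, v + glyph); GL11.glVertex2f(x + size, y + size);
            GL11.glTexCoord2f(u, v + glyph);         GL11.glVertex2f(x, y + size);
            x += size;
        }
        GL11.glEnd();
    }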

Once I've got the renderer up and running, the rest of the engine is actually much less complex in terms of the maths, and doesn't have to be nearly as efficient or fast. Keeping everything in sync across a network isn't too bad as long as you aren't doing any prediction (for those who are interested, there's a very impressive explanation of how games like Counter-Strike predict movement over in the Valve wiki, or for the older Quake here). From there, all I still need to learn engine-wise is skeletal animation (I could probably implement it now, but don't fully understand it - there's some maths we simply haven't covered - quaternions - from what I understand, essentially multi-dimensional imaginary numbers - the extension of an Argand diagram into 3 or 4 dimensional space - too many hyphens - I think not!). After that the client needs sound, encryption and streaming of textures/file management, and then it's finished. Server-side requires a file server, gameplay logic, an account manager and AI (PvP would of course solve this, but it's worth having very basic AI to help the game start and get people acquainted with controls and basic gameplay). From there, my job is level design tools, supporting the artists and gameplay, and the website/backend.
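For the curious, here's roughly what the quaternion maths boils down to - a sketch of the standard Hamilton product, not code from the project. A rotation is stored as four numbers (w, x, y, z), and multiplying two quaternions composes their rotations, which is why they're so handy for blending skeletal animation poses:

    // Hamilton product: composes the rotations represented by a and b.
    // Each quaternion is { w, x, y, z }; unit quaternions represent rotations.
    static float[] quatMultiply(float[] a, float[] b) {
        return new float[] {
            a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3], // w
            a[0]*b[1] + a[1]*b[0] + a[2]*b[3] - a[3]*b[2], // x
            a[0]*b[2] - a[1]*b[3] + a[2]*b[0] + a[3]*b[1], // y
            a[0]*b[3] + a[1]*b[2] - a[2]*b[1] + a[3]*b[0]  // z
        };
    }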

Ambitious? Yes. Achievable? Let's find out, shall we?
Charlie

Monday, 27 June 2011

Filler posts? Who, me?

I've been out all weekend filming (more over the next month - and no spoilers I'm afraid) so I haven't had much time to work on the engine or any other projects - though I have ordered the motor, charger and batteries for the quadrotor. The work I have done towards the game is mostly non-visual (i.e. importing 3D models, materials, user input, and animating colour transitions). I'll be finishing the model and material importer, and designing the renderer properly over the next couple of weeks.

CryTek know how to do lighting.

Once model importing is sorted (not too far off) the next big problem will be lighting. Lighting is easy in certain situations but next to impossible in others. Take the above image as an example (a model of Sponza Atrium, rendered in CryEngine 3). If you consider light as moving only in straight lines, then any covered areas would be completely dark. We know from experience that this is not the case, however - light is bounced off all surfaces, providing light to the covered areas. This is known as radiosity. Games like Half-Life 2 pre-compute the radiosity, but this method has limitations (most notably, the lighting is only valid when the sun is in one position), so increasingly games have started to handle more advanced lighting in real-time. There's still no "standard model" to speak of as yet - though there are some techniques common to virtually all games. Advanced illumination like this won't be relevant in an RTS game, but I prefer to plan ahead and think about how features could be incorporated.

Project size to date: 384 MB
Charlie

Monday, 20 June 2011

Progress. Kind of.


I do Linux. It's a thing now.
This is going to be another short post, I'm afraid. Current progress towards the game is shown above - I've set it up to do real-time colour toning of scenes. It's currently just set to turn Red and Green down a lot, and Blue down slightly, and then switch the Red and Blue channels around (fixing the odd colouring issue I was having at the time of last week's post).  It's also rendering to an off-screen buffer at the same time - which means that the in-game content could all be treated in the same way, not just a raw image as shown here.

For anyone interested, the fragment shader code itself is shown. It really isn't that bad, actually. The code that goes with it (feeding it the data points, textures, etc.) is horrific at first sight though. Finer-grained control over colours will be possible when I write some code to handle one-dimensional textures (multiplying by 0.9, 0.7 and 0.6 was really just to test the concept).
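Since the shader itself only appears as an image, here's a rough reconstruction of what a GL2-era (GLSL 1.20) fragment shader doing that toning might look like, held in a Java string - an illustration only, and the mapping of 0.9/0.7/0.6 onto the channels is my guess:

    static final String TONE_FRAGMENT_SHADER =
          "uniform sampler2D scene;                              \n"
        + "void main() {                                         \n"
        + "    vec4 c = texture2D(scene, gl_TexCoord[0].st);     \n"
        + "    c.rgb *= vec3(0.6, 0.7, 0.9); // red/green down a lot, blue slightly\n"
        + "    gl_FragColor = c.bgra;        // swap the red and blue channels\n"
        + "}                                                     \n";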

The switch to Linux is an interesting one - I had previously done so on my laptop whilst "revising" - but now it's on my desktop, and it's permanent. On the whole I love the OS - everything is free, without any hassle - installing a program is as easy as typing its name and pressing "Install". Programming is far easier than under Windows - the entire OS feels more geared towards it. Even advanced changes only require a couple of lines of text. I'll only cave in and install Windows 7 when my craving to play Portal 2 gets too great, or when I discover that there's no fix for a bug I have with it. (The choice to use a Portal-related texture as my test image suggests I'm not going to last too long...) The only real barriers against wide adoption are the lack of apps like Photoshop, After Effects, and a large library of decent games, the perceived increase in complexity (no steeper than Windows - just not what people are used to), and maybe the self-imposed "free software" barriers (again, not a problem to me - technically formats like MP3 and DVD are licensed, and drivers without source code available throw up a "non-free" warning). That's how my experience of it as my "main OS" has been, anyway.

Linux is the one responsible for destroying the performance - it's not actually that slow.
Charlie

Monday, 13 June 2011

Shaders Aren't Easy

What I've done there is failed. 
I'm afraid there isn't going to be a substantial post today - I'm just going to show what little progress I've actually made on an "engine" (term used loosely) - my code can import a texture and show it on a slightly-mutated ex-square. Colours are a work in progress, and it's also using a complete hack to set the texture coordinates, but I think it shows that I'm not far off doing "useful" things in OpenGL. It's not going to win any awards for pretty code (once I have model importing, basic illumination and multi-texturing sorted, I can start writing pretty, fast and flexible stuff), but it's all practice and experience in the graphics language.

The texture on the right is the original, the two on the left are various attempts (guesses) at the right colour profile.

I've managed to keep the performance, at least...


"Failure is just delayed success". Is what failures say.
Charlie

Monday, 6 June 2011

5000 FPS. Like a boss.

(Yes, the Physics exam is in 10 hours.)
I didn't manage to hit 9001 FPS, and admittedly it's only 2 polygons, but pretty cool anyway...

I'm just going to leave that here as a little teaser of what's to come. That's actually a very basic shader program - which means it's running on my graphics card and not the CPU. In non-technical terms, that means you get much more speed on modern hardware, and a huge amount of flexibility, at the cost of a bit of complexity. Shaders in modern games control pretty much everything you see, in some way or another - from lighting to the position of objects.
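To give an idea of what "a very basic shader program" entails, here's a sketch of the standard compile-and-link dance using LWJGL's GL20 bindings - the general pattern rather than my exact code (buildProgram, vertexSrc and fragmentSrc are names of my own; the sources would be GLSL strings):

    import org.lwjgl.opengl.GL20;

    static int buildProgram(String vertexSrc, String fragmentSrc) {
        int vs = GL20.glCreateShader(GL20.GL_VERTEX_SHADER);
        GL20.glShaderSource(vs, vertexSrc);   // upload GLSL source
        GL20.glCompileShader(vs);             // compile on the GPU driver

        int fs = GL20.glCreateShader(GL20.GL_FRAGMENT_SHADER);
        GL20.glShaderSource(fs, fragmentSrc);
        GL20.glCompileShader(fs);

        int program = GL20.glCreateProgram();
        GL20.glAttachShader(program, vs);
        GL20.glAttachShader(program, fs);
        GL20.glLinkProgram(program);
        return program; // bind with GL20.glUseProgram(program) before drawing
    }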

This is only the start, but in the relatively near future this will become a more full-featured game engine - is anyone interested in helping create a "complete" game, beyond those I've already spoken to? You don't have to be technically minded - there's a huge amount of non-technical work involved in actually creating a decent game - the more hands, the merrier. There may well be a Facebook group in the near future. I have some ideas, as do several other people. RTS/FPS/3PP are all possibilities - it depends on what people would like to do. In fact, some might be useful as miniature test projects as the code/collaboration improves.

I'm clearly not Chuck Norris
Charlie

Monday, 30 May 2011

The Maths Behind Computer Games: Rendering

I wanted to do something a little different with this post - and it'll probably get fairly technical, but it'll hopefully be interesting nonetheless. I've been fascinated with the maths and engineering behind games for almost as long as I can remember, and thought it might be interesting to condense some of that into a post (with pretty pictures) - if it's interesting I might do some more of these in the future. For now I'm going to explain the process of projection, with maybe a dash of texturing and lighting - there are actually some short-cuts and tricks used to speed the process up, but I'm mostly ignoring them for simplicity.

How it Works
One of the fundamental things about drawing a 3D scene is that almost everything can be approximated as a load of small, flat pieces (polygons), each joined at the edges, with a visible material applied to them. Computers generally use triangles, but you can also use quads or any n-gons if you feel so inclined. Each of the points which define the triangles can be expressed as a vector - 3 numbers representing its X, Y and Z coordinates in space.

Mostly triangles in there...
Using some pretty self-explanatory maths, you can take a set of several different models, each positioned at different points in the world, and express them as a single set of points (vertices) in the scene. So you now have a set of points in space. So what? They are pretty much useless without some kind of viewing-point. The position and orientation of a camera in the world can also be expressed as a set of coordinates and a rotation - using these bits of information, every point in the world can be shifted into a nice, simple set of values which are oriented with respect to the camera - that is, the axes are aligned to the view-point and not the world or the objects. If you're confused, don't worry too much - essentially, what you now have is the same points, but shifted so that the numbers now represent X, Y and depth rather than the position in the world.
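In code, that shift is just a translation followed by the inverse of the camera's rotation. A toy sketch in plain Java - yaw-only camera for brevity (real engines use a full matrix), and the names are mine, purely for illustration:

    // World-space point -> camera-space point, for a camera at (cx, cy, cz)
    // rotated by 'yaw' radians about the vertical axis.
    static float[] toViewSpace(float px, float py, float pz,
                               float cx, float cy, float cz, float yaw) {
        float x = px - cx, y = py - cy, z = pz - cz;   // camera becomes the origin
        float c = (float) Math.cos(-yaw), s = (float) Math.sin(-yaw); // inverse rotation
        return new float[] { x * c - z * s, y, x * s + z * c };       // X, Y, depth
    }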

Unfortunately, our view of the world isn't quite as simple as this - if you imagine a simple cube, as described above, seen face-on, each front corner would sit directly in front of the corresponding rear corner, so you'd just see a square. In reality as we experience it, things which are further away appear smaller, so we shift points towards the centre of the screen depending on how deep they are. This gives a pretty nice mapping from each point on the model to a point on the screen - you can say that each point has been projected onto the 2D screen.
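That "shift towards the centre" is literally a divide by depth. A toy sketch of projecting a camera-space point onto a w-by-h screen ('focal' controls the field of view; again, the names are mine, purely illustrative):

    static float[] project(float x, float y, float z, float focal, int w, int h) {
        // Dividing by z pulls distant points towards the screen centre;
        // dropping the divide gives the orthographic projection described below.
        float sx = (x * focal / z) + w / 2.0f;
        float sy = (y * focal / z) + h / 2.0f;
        return new float[] { sx, sy, z }; // keep z around as the depth value
    }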

The above diagrams represent perspective and orthographic space (perspective is on the left) - the space which is actually seen by the camera (the far end is the furthest object you can see - computer games limit how far you can look into the distance, sometimes with sneaky tricks).
The initial projection I described is what's known as orthographic projection, which is useful if you need to work to a specific scale - for example, when designing a product with CAD. The depth-adjusted version is perspective projection, and is used when something needs to be displayed as it would be seen.

Once you have your points as they would be positioned on the screen, you can then draw in the triangles themselves. The easiest way would be to stretch a texture onto each triangle - though even this can have some quirks, which fall way beyond the scope of this blog post. A much more interesting and mathematical thing you can do is break each triangle into pixels and work out the colour of each - this is known as shading and can be done incredibly fast on modern graphics hardware.

All credit to Wikipedia for the image.

The Phong model of lighting (named after the first person to describe it, Bui Tuong Phong) can be used to shade a variety of material types very effectively - it's typically used for plastics or non-transparent liquids - Gels in Portal 2 for example. The principle is that the overall light reflected from a surface can be thought of as the sum of several different components, each of which is seen above.

The ambient light is a constant base term, approximating light bounced in from the rest of the scene (a separate emissive term also exists, but is often excluded because it is not strictly necessary). The diffuse (scattered) light is reflected in roughly all directions by the material and does not depend on the viewing direction, only on the lighting direction. The specular (highlight) light is the intense reflection you may see off shiny surfaces, and does depend on where you're looking from. Together, they can be summed to give a pretty good representation of a material as a whole.

Restrain yourself, Jamie!
The Ka, Kd, Ks and alpha values are material constants - the material's ambient, diffuse and specular reflectivities, and its "shininess". Things with a hat on (^) are normalised vectors representing the light and viewing directions, and the surface's normal. Ip is the output - the colour of a single pixel. This process is repeated for every single pixel of a scene (well, technically every fragment - transparency can cause two or more fragments to be drawn for a single pixel).
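Since the equation itself only survives here as an image, here it is reconstructed in the standard form given on Wikipedia, summing over each light m (the i terms are the light intensities):

    I_p = K_a i_a + \sum_{m \in \text{lights}} \left[ K_d (\hat{L}_m \cdot \hat{N}) \, i_{m,d} + K_s (\hat{R}_m \cdot \hat{V})^{\alpha} \, i_{m,s} \right]

(One symbol not described above: R-hat is the direction light m bounces off in - the mirror of L-hat about the surface normal.)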

Not personally a fan of the cave missions, but they definitely looked good. 
Of course, there are a huge number of other things you can (and indeed need to) do to make a realistic-looking scene which still performs well. For example, textures can be sampled to give much finer detail on items, and other shaders can be custom-written to handle reflective, refractive or translucent materials, metals, goo, or even supposedly simple things such as bumpy surfaces, as seen above. The cave's surfaces are actually flat - the surface shader is given a bump-map - a texture representing the height of a material rather than its colour. This can be used with a light vector (such as the torch) to give gorgeous, realistic-looking lighting effects, at much faster speeds.

Confused? Now remember that the computer handles about 100,000 vertices, depending on the game, and 1,000,000 pixels, 50 times a second, just in graphics calculations for a modern game.

Things I Haven't Mentioned
What about the other side of objects?
Sneaky trick with how you choose the triangles - you number the vertices in an anti-clockwise order around each polygon. When you're about to draw it, check if the order is still anti-clockwise. If it is, you know it's facing you and should be drawn. If not, you're either looking at it sideways-on or from behind - which probably means you shouldn't be drawing it.
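The check itself is a single cross product on the projected 2D points. A toy sketch - note that whether "positive" means anti-clockwise depends on whether your screen's Y axis points up or down, so treat the sign here as an assumption:

    // Back-face test on a screen-space triangle (a, b, c).
    // The z-component of (b - a) x (c - a) gives the winding direction.
    static boolean facesCamera(float ax, float ay, float bx, float by,
                               float cx, float cy) {
        float cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
        return cross > 0; // still anti-clockwise -> draw it
    }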


What about things behind you?
In the part where you move things with respect to the camera, things behind you end up getting a different sign on the depth to everything in front of you. You can just throw out any vertices with depth values below zero.


Anything else?
There's probably a solution or an explanation, but I'm really not going to pre-empt anything and everything you could ask. Ask in the comments, or wait for future instalments, if this was actually interesting.


Yours geekily,
Charlie