color = max_col - (vertex1.z + vertex2.z + vertex3.z) / a, where a is some suitable number; dividing by it scales the z average into the range 0..max_col. Because we're using a coordinate system in which the z-axis points in the direction you're probably looking right now ;) the z value grows with distance, so we must subtract the value we get from the biggest possible color value, max_col.
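A minimal sketch of that formula in C (max_col = 63 here as in a VGA palette; the function name and the choice of a are my assumptions, a is left as a parameter since the text says it's up to you):

```c
#include <assert.h>

/* Depth-based flat shade: far polygons get darker colors.
   'a' must scale the z sum into the 0..63 range for your scene. */
int depth_color(float z1, float z2, float z3, float a)
{
    int c = 63 - (int)((z1 + z2 + z3) / a);
    if (c < 0)  c = 0;    /* clamp: very far polygons stay black  */
    if (c > 63) c = 63;   /* clamp: very near polygons stay white */
    return c;
}
```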
The idea is the following: we present the light source as a vector. For each frame we calculate the normal vector of every polygon (by creating two vectors from the polygon and taking their cross product) and calculate the cosine of the angle between the normal and the light source with the help of the dot product -- the smaller the angle, the more light. Using suitable coefficients we can fit this value into a desired range, for example in RGB mode into the range 0..63 by multiplying the cosine by 63. Finally we check if the color value is negative. If it is, we change it to zero and the polygon is not lit. Some pseudo code again:
If the length of both vectors is one, we can forget most of the muls, the div, and both sqrt's (for how to ensure that the length of a vector is one, see 1.1.3). We can normalize the light source vector in advance; its length stays the same even if we rotate it. With the normal vectors we can do the same thing, but scale them by max_col so we save one more mul. Now we can rotate the normals as if they were coordinates, and the speedup is remarkable. So, in the init part:
Now the function simplifies to the form
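With a unit light vector and normals prescaled by max_col as above, the whole per-frame shade reduces to a single dot product. A sketch (names are assumptions):

```c
/* Per-frame shade: light is unit length, the normal already carries the
   max_col factor, so one dot product gives the final 0..63 color. */
int shade(float lx, float ly, float lz, float nx, float ny, float nz)
{
    int color = (int)(lx*nx + ly*ny + lz*nz);
    return color < 0 ? 0 : color;   /* negative: polygon faces away */
}
```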
Vertex normals are normals of the object's surface (the object actually being approximated with polygons) at a vertex, so at every vertex they are perpendicular to the surface of the real object (not the approximated one). Calculating that kind of normal exactly isn't easy, so we settle for a nice approximation of the real normal by averaging the normals of the polygons that share the vertex:
Implementing that is not an easy job, so here's some pseudo code again. The pseudo code uses another possible way:
This is of course a slower way, but I just didn't have time to write pseudo code for the faster technique :I Anyway, it works.
And the code is used just as in Lambert flat shading.
Note! As such this technique doesn't work quite right: if a face is made of many triangles (as in 3DS), its normal is added to the sum of face normals more than once and thus gets too much weight. This can look annoying. The problem can be solved by checking whether a normal has already been used when calculating a vertex normal.
color = ambient + (cos x) * diffuse + (cos x)^n * specular, where ambient is the color of a polygon when it's not hit by light (the minimum color, that is), diffuse is the original color of the polygon, and specular is the color of a polygon hit by perpendicular light (the maximum color). x is the angle between the light vector and the normal, and it's allowed to change between -90 and 90 degrees. Why not 0..360 degrees? Because when the angle is over 90 degrees, no light hits the polygon and we ought to use the minimum value, ambient. So we must perform a check, and if the angle is not in the required range, we give it the value 90 degrees; the cosine then becomes zero and the color becomes ambient. n is the shininess of the polygon (some people may remember it from rendering programs). Try and find a suitable value for each purpose!
At the beginning we checked whether the z coefficient of the light vector is positive or negative. This is because positive and negative coefficients require different calculations; positive values need a bit of fixing. With negative values of the coefficient we can calculate the coordinates into the env-map like this (as in the pseudo code): we subtract the normal's x and y coefficients from the light's coefficients before transforming them into env-map space. Where's the z coefficient? We don't need it explicitly; because these vectors should be unit vectors, its weight is implied by the x and y coefficients: for example if the x and y coefficients are both 0.5, the z coefficient has weight about 0.7 (sum of squares: 0.5^2 + 0.5^2 + 0.7^2 ≈ 1).
This technique doesn't work with light sources having a positive z coefficient; they require the following trick. When the z coefficient is positive, the opposite vector (-LS.i,-LS.j,-LS.k) has a negative z coefficient, so the technique above works for it. If we fool the routine into thinking that this opposite vector is the light vector, we get exactly the opposite result of what we need. We then get the right result from the opposite one by moving the values at the center to the edges and vice versa -> tada: we've got the original light vector!
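One possible reading of the two cases in C (heavily hedged: the env-map size, the scaling into texture space, and the wrap-around interpretation of "center to the edges" are all my assumptions, not the article's):

```c
/* Sketch: env-map lookup for a unit light vector (li,lj,lk) and the
   x,y coefficients (ni,nj) of a unit vertex normal. */
#define ENV_SIZE 256

void envmap_coords(float li, float lj, float lk,  /* light vector        */
                   float ni, float nj,            /* normal x, y         */
                   int *u, int *v)
{
    int flipped = (lk > 0.0f);
    if (flipped) { li = -li; lj = -lj; }  /* pretend the light is opposite */

    /* negative-z case: subtract normal coefficients from light ones;
       both differences lie in -2..2, scale them into 0..ENV_SIZE */
    float du = li - ni, dv = lj - nj;
    *u = (int)((du + 2.0f) * (ENV_SIZE / 4.0f));
    *v = (int)((dv + 2.0f) * (ENV_SIZE / 4.0f));

    if (flipped) {            /* undo the pretence: swap center and edges */
        *u = (*u + ENV_SIZE / 2) % ENV_SIZE;
        *v = (*v + ENV_SIZE / 2) % ENV_SIZE;
    }
}
```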
color = ambient + (cos b) * diffuse + (cos x)^n * specular. Note the locations of b and x. Ambient is the color value of a surface (the same for every pixel of the surface, but it may vary from object to object) when no light hits the point at all. Diffuse is the texel value (bitmap pixel color) at the current point, specular is the light value reflecting from the object depending on the angle between the reflection ray and the camera, and n is the shininess of the object.
Don't wonder if the edges of your spotlight look weird or it misbehaves in some other way when you're using gouraud or flat shading. The problem is that when we interpolate linearly between vertices, different polygons get different-length shades, and the spotlight may look quite annoying. Any good solutions to the problem would be appreciated ("real phong" is not accepted ;) Chem suggested splitting the polygons into smaller ones when getting too close to them. Could work, but I can't say anything about the speed or reliability.
This is of course not the one and only way, there sure are many others, too. I just happen to think this is the best one (yes, I have tried the 1/distance^2 method :)