So in a lot of non-photorealistic shaders, one thing they do is implement an edge detection algorithm to, well, give things an edge. For example, the toon shader that (hopefully) everyone in the class has done requires an edge to look proper and cartoony.
Like most other things in programming, there are many ways to do edge detection. The hackiest way is to just render all the geometry twice. The first set is the main geometry, untouched and normal. The second set is slightly larger, say, 3% or so, made completely black, and then front face culled.
That's what I did for this toon shader.
That way is pretty hacky, but it works. It works because if it's the same object but scaled up a bit, it'll hug the edges of the original object. Since the copy is black and front face culled, all you see are the outermost fringes of the secondary object, which makes the original object look like it's outlined.
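The scaling pass can be sketched on the CPU side for illustration (in practice this would live in a vertex shader; the function name and the 0.03 thickness are made up for the example):

```python
def outline_vertex(position, normal, thickness=0.03):
    """Push a vertex outward along its (unit) normal to build the
    slightly-enlarged hull used for the black outline pass."""
    return tuple(p + n * thickness for p, n in zip(position, normal))

# Pass 1: draw the mesh normally.
# Pass 2: draw every vertex pushed out along its normal, coloured black,
#         with front faces culled so only the silhouette band survives.
v = (1.0, 0.0, 0.0)
n = (1.0, 0.0, 0.0)  # unit normal pointing along +x
print(outline_vertex(v, n))  # pushed 0.03 units along +x
```

Scaling along normals (rather than uniformly about the origin) keeps the outline thickness roughly even regardless of how far a vertex sits from the object's centre.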
It's not bad to be honest, but there are obvious shortcomings with this algorithm. The first is that you have to render the geometry twice, which is fine for low poly scenes, but scenes with tens, if not hundreds of thousands of polys would quickly make this approach infeasible.
Another downside of this algorithm is that it doesn't capture inner edges very well. It only really catches the outer silhouette, and the inner edges come out sketchy.
A mountain like this has like a trillion polys. Rendering it twice would suck.
Another way to do it is to base it off the normals. Since the normals tell you which direction each face is pointing, you can use them to figure out whether something is an edge or not. Quite simply, figure out the direction vector from your camera to your object, and if a normal is near perpendicular to that vector, that's an edge.
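The perpendicularity check boils down to a dot product near zero. A minimal sketch (the function name and the 0.2 threshold are assumptions, not from a particular shader):

```python
def is_silhouette(normal, view_dir, threshold=0.2):
    """Flag a point as an edge when its normal is nearly perpendicular
    to the camera-to-surface direction, i.e. the dot product is near zero.

    normal and view_dir are assumed to be unit-length 3-tuples; the
    threshold is a tunable that controls outline thickness."""
    d = sum(n * v for n, v in zip(normal, view_dir))
    return abs(d) < threshold

view = (0.0, 0.0, -1.0)                        # camera looking down -z
print(is_silhouette((0.0, 0.0, 1.0), view))    # facing the camera -> False
print(is_silhouette((1.0, 0.0, 0.0), view))    # side-on face -> True
```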
You can also calculate edges based on depth. If an object is placed in a scene with a simple background, then the objects all have different depth values as well. If you were to sample an area and there is a drop-off in depth, you can assume there is an edge there.
Another algorithm is the Sobel filter. I personally like this one since I don't have to deal with depth information or normals, and it's pretty straightforward. Like the depth-based edge detection, you sample the image and look for a drop-off, but instead of basing it off depth, you base it off colour.
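A minimal sketch of the Sobel filter on a grayscale image, assuming a 2D list of intensities (border handling is skipped for brevity):

```python
def sobel_magnitude(img, x, y):
    """Gradient magnitude at (x, y) using the standard 3x3 Sobel kernels.
    img is a 2D list of grayscale intensities; borders are not handled."""
    gx_kernel = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal changes
    gy_kernel = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical changes
    gx = gy = 0.0
    for j in range(3):
        for i in range(3):
            p = img[y + j - 1][x + i - 1]
            gx += gx_kernel[j][i] * p
            gy += gy_kernel[j][i] * p
    return (gx * gx + gy * gy) ** 0.5

# A vertical step from dark (0.0) to bright (1.0): big gradient at the seam.
img = [[0.0, 0.0, 1.0, 1.0] for _ in range(4)]
print(sobel_magnitude(img, 1, 1))   # -> 4.0, strong response on the edge

flat = [[0.5] * 4 for _ in range(4)]
print(sobel_magnitude(flat, 1, 1))  # -> 0.0, no response in a flat region
```

In a shader this same idea runs per fragment: sample the eight neighbouring texels of the rendered scene, weight them by the kernels, and draw an outline wherever the magnitude crosses some threshold.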
Naturally there are issues with this too: if the colours of the object and the environment are too similar, the edge detection fails. Or, if the colours within your object are too distinct, there's a possibility you'd get stray edges on your character that you may or may not want.
Notice the character's armband isn't properly edged.
It works reasonably well though. As you can see above, things are properly outlined in our GDW game. It's not 100% perfect, but it works well enough.
One advantage the Sobel filter has over the depth-based approach, though, is textured quads. The depth-based method fails there since the quad has a single, flat depth, whereas if the texture's colours are distinct enough you still get edges on it. Notice how our quad trees are edged in the above picture; the depth-based approach wouldn't catch that.