I have recently been working on AI-Vision and Terrain Instancing.
One of the main problems with AI-Vision is the high cost of scanning the AI’s view for visible players. Since the GPU excels at executing simple instructions in parallel at very high speed, we decided to use it as a scanner that searches the AI’s view for opponents. The algorithm described below scans a texture (the AI’s view) for visible opponents and then returns the target player’s position to the CPU for the AI to use.
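The core idea can be sketched as a simple per-pixel scan. This is a hypothetical CPU-side simulation in Python (the real version would run as a shader pass over the AI’s rendered view); the function name, the per-pixel ID encoding, and the parameters are all illustrative assumptions, not the actual implementation.

```python
def scan_view_for_opponent(view, opponent_id):
    """Simulated GPU scan: 'view' is a 2D grid of per-pixel IDs
    (the AI's rendered view, with each player drawn using its ID).
    Returns the (x, y) pixel of the first visible opponent, or None."""
    for y, row in enumerate(view):
        for x, pixel_id in enumerate(row):
            if pixel_id == opponent_id:
                # On the GPU this would be a parallel reduction;
                # the hit pixel is then read back to the CPU.
                return (x, y)
    return None
```

The CPU would then unproject the returned pixel (plus depth) into a world-space target position for the AI to use.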
The method we were using to render our terrain was a mesh of triangles at varying LODs (highest at the center, lowest at the edges), four times the size of our board and centered on the player’s current position. Then, in shader code, we used the current vertex’s position to look up its height in a texture map (basically a height-map). We also read the vertex normal the same way from a normal-map.
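The vertex-shader lookup described above amounts to mapping a player-relative vertex position to a texel in the height-map. Here is a minimal CPU-side sketch in Python, assuming a square height-map indexed in whole units; the function name and parameters are illustrative, not taken from the actual shader.

```python
def displace_vertex(vx, vz, player_x, player_z, height_map, map_size):
    """Sketch of the shader lookup: the mesh is centered on the player,
    so the vertex's world position selects a texel in the height-map."""
    world_x = vx + player_x
    world_z = vz + player_z
    # Wrap into the height-map's texel grid (a real shader would sample
    # a texture here; the same lookup fetches the normal-map).
    u = int(world_x) % map_size
    v = int(world_z) % map_size
    return height_map[v][u]
```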
The main problem with the above method was the large number of vertices that had to be passed to the GPU every time we needed to draw the terrain. The solution to this problem was to instance a smaller high-LOD mesh around the player.
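The instancing idea can be sketched as generating a grid of per-instance offsets around the player, so one small patch mesh is submitted once and drawn many times. This is a hypothetical Python sketch; `patch_size` and `radius` are assumed parameters, and the grid-snapping detail is an illustrative choice rather than the described implementation.

```python
def patch_offsets(player_x, player_z, patch_size, radius):
    """Sketch of terrain instancing: instead of uploading one huge mesh,
    draw a single small high-LOD patch at each of these offsets."""
    # Snap to the patch grid so instances don't swim as the player moves;
    # each offset would become a per-instance transform on the GPU.
    base_x = (player_x // patch_size) * patch_size
    base_z = (player_z // patch_size) * patch_size
    offsets = []
    for i in range(-radius, radius + 1):
        for j in range(-radius, radius + 1):
            offsets.append((base_x + i * patch_size,
                            base_z + j * patch_size))
    return offsets
```

Each instance can then reuse the same height-map lookup in the vertex shader, so the per-frame vertex upload shrinks to one small patch plus a list of offsets.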