Sorry to ask this again, but why is it not feasible for the AI to learn from the players, and maybe, if people play MP or connect to Impulse, from other players?
You mentioned not doing this in part 1; why is it impractical? An AI that adapts from game to game does seem like it would be the best way to reduce errors, or to quickly find gameplay flaws (such as a single optimal solution).
Alstein, any form of learning AI has (as far as I have seen) one major problem: debugging. At some point you get such a complex set of rules/neuronal connections/genes that you can't fathom what the AI is doing (okay, you can, but it takes time). It becomes a black box unless you have some genius way to sort and visualize these overly complex systems.
Also, at this point of AI development it's better to have some predefined rules and rule systems, like the layers of animal and human behaviour, which (from a simplistic POV) has three layers. The first layer consists of biological base functions and needs like breathing: little programs that run all the time with preset parameters. For Elemental, that's, I think, mostly the 1.09 AI.
The second layer is instincts, i.e. behaviours with variable parameters (some of Frogboy's stuff). Roosters, for example, have a "flock defence" instinct: they attack anything that is "furry", falls into a preset but variable size category (loosely, "everything smaller than me"), and approaches the nests/flock. For that last condition the trigger distance varies: in a highly dangerous environment the distance is bigger than in a relatively safe area. Deer, as another example, show greater flight distances in areas with human hunters than in, say, national parks. The instinct gets reinforced, so some variables get lower or higher to fit.
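As a rough sketch of what such a "layer 2" instinct might look like in code (all names and numbers here are invented for illustration, not anything from the actual game): the rule itself is fixed, only its parameters get tuned by reinforcement.

```python
class FlockDefenceInstinct:
    """Hypothetical preset rule with variable parameters (the rooster example)."""

    def __init__(self, trigger_distance=10.0, size_ratio=1.0):
        self.trigger_distance = trigger_distance  # variable: how close is "too close"
        self.size_ratio = size_ratio              # variable: "smaller than me" cutoff

    def should_attack(self, intruder_size, own_size, distance, is_furry):
        # The rule is hard-wired; only the thresholds vary.
        return (is_furry
                and intruder_size < own_size * self.size_ratio
                and distance < self.trigger_distance)

    def reinforce(self, environment_danger):
        # Reinforcement nudges a parameter, it never rewrites the rule:
        # a more dangerous environment means reacting at a greater distance.
        self.trigger_distance *= 1.0 + 0.1 * environment_danger
```

The point is that debugging stays easy: you can read the rule directly, and the only thing learning ever changes is a couple of numbers.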
Another interesting case is baby ducks, which can distinguish the shapes of flying birds. At birth they flee from any bird or flying object. The general shape triggers the instinct every time, but the shapes get sorted by statistics: baby ducks don't flee from other flying ducks because the shape of a duck is normally far more common than the shape of a predator like a hawk. This is more pattern recognition and very near the third layer.
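That "sorted by statistics" idea is simple enough to sketch (again, a made-up illustration, not real duckling biology or game code): flee from any shape that is rare relative to everything seen so far, habituate to the common ones.

```python
from collections import Counter

class ShapeHabituation:
    """Hypothetical duckling rule: flee from rarely seen shapes, ignore common ones."""

    def __init__(self, rarity_threshold=0.5):
        self.seen = Counter()                 # running tally of observed shapes
        self.rarity_threshold = rarity_threshold

    def observe(self, shape):
        self.seen[shape] += 1

    def should_flee(self, shape):
        total = sum(self.seen.values())
        if total == 0:
            return True  # newborn: the instinct triggers on everything
        frequency = self.seen[shape] / total
        return frequency < self.rarity_threshold
```

After seeing mostly ducks overhead, a duck-shaped silhouette no longer triggers flight, while a rarely seen hawk shape still does.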
Preset instincts, and even more so base programs, can be extremely fast. Locusts have collision detection that works within milliseconds and has been adapted for cars.
The third layer is learning, and here it gets pretty messy. First of all, learning needs much more memory, which is a natural problem for a game that is, for example, already pathing-heavy. Wolves learn not to attack skunks and porcupines, but they need to "shape" new neuronal pathways (which is different from reinforcing an existing one) to make the connection. They have to store far more parameters for a porcupine, like its distinct smell, common colours, expressed behaviour and whatnot, and have to link all of that to something. But to gather this knowledge, the wolf needs the instinct to hunt prey in the first place. The learned behaviour overwrites the instinct/reflex here: the wolf creates a subroutine that overrides the older pre-installed one. In computer terms, it means the game would create a database of exceptions.
If the game creates too many exceptions, the DB gets far too big and the game slows down too much because of long searches. If the game does not create enough exceptions, it falls for the same ruse 20 times in a row before it catches up, if it ever does. The game also needs the ability to recognize patterns and to apply them.
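The wolf/porcupine idea maps onto a very small structure: check a table of learned exceptions first, and only fall back to the built-in instinct when nothing matches. Everything below (the feature keys, the action names) is invented purely to illustrate the shape of it.

```python
class ExceptionLearner:
    """Hypothetical layer-3 sketch: learned exceptions override a preset instinct."""

    def __init__(self, default_action="attack"):
        self.default_action = default_action  # the pre-installed instinct
        self.exceptions = {}                  # learned overrides: features -> action

    def _key(self, prey):
        # A real game would need fuzzy pattern matching here, not exact keys;
        # that matching is where the search cost comes from as the DB grows.
        return (prey.get("smell"), prey.get("shape"))

    def decide(self, prey):
        return self.exceptions.get(self._key(prey), self.default_action)

    def learn(self, prey, new_action):
        # A bad experience (a snout full of quills) writes one exception.
        self.exceptions[self._key(prey)] = new_action
```

The trade-off from the paragraph above is visible directly: every `learn` call grows `self.exceptions`, and without some way to generalize, each new ruse costs one more painful lesson and one more entry to search.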
Learning AIs still have some other problems. Often they take the wrong parameters into account yet seem to work in certain situations. Some pattern recognition project went hilariously wrong (some years back) because of exactly that, IIRC: it was trained to recognize cars from an archive of pictures, but the archive contained only older, more blocky models, whereupon the system failed to recognize newer, rounder models because they didn't have such sharp angles.
In my opinion, a learning AI needs at least some minimal scaffolding to build upon, and it is often very hard to debug and analyse. That Frogboy is now doing all the layer-2-ish stuff also creates a very stable scaffolding that a learning AI could fall back on if it fails to win.