Re: Neat try, but not really comparable.
04/24/13 09:50 PM
But the thing you guys keep missing, which is discussed a few posts above, is that the playfun game-playing program doesn't 'learn' anything. It just searches over button-mashing inputs for the best-scoring sequence. When the author found that his search algorithm wasn't getting good results, he manually implemented heuristics to let his search function do better. And he repeated that process a few times. The algorithm didn't learn how to play Mario better; its author just learned how to alter the algorithm to do better searches in less time.
The playfun program takes just as long to find the best set of inputs for a sequence of gameplay the first time it's run as it does the 100th time. It doesn't 'learn' anything.
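For concreteness, here's a toy sketch of what "search, not learning" means. This is not the author's actual code; the button alphabet, the emulator `step`, and the objective are all stand-ins (playfun's real objective is derived from watching RAM values during a human demo, and its search is far more sophisticated). The point the sketch makes is structural: the best sequence is recomputed from scratch on every call, so nothing carries over from run to run.

```python
import itertools

BUTTONS = ["L", "R", "A", "B", "NOP"]  # hypothetical input alphabet

def score(state):
    # Toy objective: just reward x-position. playfun's real objective
    # is learned from memory locations that increase during a demo.
    return state["x"]

def step(state, button):
    # Toy stand-in for running the emulator one frame.
    new = dict(state)
    if button == "R":
        new["x"] += 1
    elif button == "L":
        new["x"] -= 1
    return new

def best_sequence(state, depth=3):
    """Try every button sequence of length `depth` and return the one
    whose final state scores highest. Pure search: no memory, no model."""
    best, best_score = None, float("-inf")
    for seq in itertools.product(BUTTONS, repeat=depth):
        s = state
        for b in seq:
            s = step(s, b)
        if score(s) > best_score:
            best, best_score = list(seq), score(s)
    return best

# The search re-derives the plan from scratch each call; the 100th
# invocation is exactly as slow as the first, because nothing is stored.
plan = best_sequence({"x": 0})
```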
Also, no offense, but I find your Army story a little implausible. Surely anyone implementing such a system would do some trial runs before demoing the thing. And surely you wouldn't just happen to use the same photos of tanks in every trial run, and surely the additional tank photos used in the trial runs wouldn't also just happen to show enemy tanks photographed later in the day than friendly tanks. And even if you were ... careless ... enough to source all of your training, dry-run, and demo photos from the same places, surely you had some way to inspect the program to see why it was making the choices it made (i.e. highlight on the photo those pixels which contributed to the friend/foe decision), and you would have seen right away that the thing was always focusing on the leaf color or shiny surfaces facing west or whatever.
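The "which pixels mattered" inspection I'm describing can be done even with a black-box classifier, by occluding patches and watching the output change. Here's a minimal sketch under assumed toy stand-ins: `classify` is a hypothetical classifier that (like the tank story's) latches onto overall brightness, and the occlusion loop produces a crude heat map of the pixels driving its decision.

```python
import numpy as np

def classify(image):
    # Hypothetical stand-in for the tank classifier. This toy version
    # "decides" on mean brightness, mimicking the time-of-day artifact:
    # 1.0 = "enemy", 0.0 = "friendly".
    return float(image.mean() > 0.5)

def occlusion_map(image, patch=2):
    """Slide a neutral-gray patch over the image and record how much
    the classifier's output changes: a crude 'which pixels mattered' map."""
    base = classify(image)
    h, w = image.shape
    heat = np.zeros_like(image)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.5  # neutral gray
            heat[i:i + patch, j:j + patch] = abs(classify(occluded) - base)
    return heat
```

On an image whose top half is bright sky and bottom half is dark ground, the heat map lights up only the sky: occluding it flips the verdict, which tells you immediately that the classifier never looked at the tank.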
Edited by Bryan Ischo (04/24/13 09:56 PM)