
Sunday, 27 January 2019

Comments

John B

Games are poor things to assess computer progress with, given the constant rewriting, the layers of indirection, and the frequent focus on things other than performance. A good scientific code is a better bet. The UKMO Unified Model was started around 1990, and I think the same codebase is in use now (though I'm six years out of date, I can't see them throwing it away). The roughly 50,000-times performance gain has gone into doing about 50 times more calculations on about 1,000 times more data points (ish). And it's been optimised all that time, unlike the games you talk about, where you'd be emulating down and down, wasting resources all the way.
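[Ed.: for what it's worth, the commenter's round figures are self-consistent; the two scaling factors multiply to the overall gain:

\[
\frac{W_{\text{now}}}{W_{1990}} \approx \underbrace{50}_{\text{calculations per point}} \times \underbrace{1000}_{\text{data points}} = 5 \times 10^{4}
\]

These are the comment's own round estimates, not measured values.]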

I suspect the AI techniques are horribly inefficient: they explore the search space without culling daft strategies well. It's like setting those infinite monkeys on something, so I can see where the cycles go.
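[Ed.: to illustrate the intuition, here is a toy sketch. The "strategies", the scoring function, and the culling rule below are all invented for the example; nothing here resembles AlphaStar's actual training. The point is only that a cheap early cull spares a blind search from paying full evaluation cost on obviously daft candidates.

import random

def score(strategy):
    """Pretend fitness of a strategy: the mean of its ten random 'moves'."""
    return sum(strategy) / len(strategy)

def search(n_trials, cull=False, threshold=0.3, seed=0):
    rng = random.Random(seed)
    evaluations = 0   # how many candidates we paid to evaluate in full
    best = 0.0
    for _ in range(n_trials):
        strategy = [rng.random() for _ in range(10)]
        # A cheap check on the opening 'move' discards weak candidates
        # before the (notionally expensive) full evaluation.
        if cull and strategy[0] < threshold:
            continue
        evaluations += 1
        best = max(best, score(strategy))
    return best, evaluations

print(search(100_000, cull=False))  # every candidate evaluated in full
print(search(100_000, cull=True))   # roughly 30% culled before evaluation

In both runs the best strategy found is about the same, but the culled search pays for roughly 30% fewer full evaluations; without any such pruning, every monkey's typescript gets read to the end.]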

Paul M. Cray

That's a very good point, and it is noteworthy that DeepMind do mention weather forecasting as a possible application. StarCraft II itself has been stable as a game for many years (as, of course, has classic Civilization), so it is a good example of a plausible model to work on, but it is quite possible that a lot of those two hundred years of training time were wasted on complete non-starter strategies that would "obviously" not work for a human. Of course, it's always possible to think of harder and harder tests for the A(G)I systems. What about a first-class degree in mathematics or physics? A double first in Mods and Greats would be even harder. Then we might look for the first scientific paper written entirely by machine, the first PhD, the first Nobel Prize. It is quite plausible to argue that we still have decades or centuries to go.
