Neural Network Chess Computer Abandons Brute Force For "Human" Approach 95
An anonymous reader writes: A new chess AI utilizes a neural network to approach the millions of possible moves in the game without just throwing compute cycles at the problem the way that most chess engines have done since Von Neumann. 'Giraffe' returns to the practical problems which defeated chess researchers who tried to create less 'systematic' opponents in the mid-1990s, and came up against the (still present) issues of latency and branch resolution in search. Invented by an MSc student at Imperial College London, Giraffe taught itself chess and reached FIDE International Master level on a modern mainstream PC within three days.
Re: (Score:2)
Well, yes, that kind of is the issue. The computation chess masters make, the actual thoughts, could be handled on a 1950 computer no problem.
The question is how. It isn't brute force, though they do delve into plies ass desired. The real trick is knowing which handful to explore mentally. And if it were just pattern matching against known games, it would ne done by computer already that way, too.
Re: Gotta love neural networks! (Score:1)
See subject.
Re:Gotta love neural networks! (Score:4, Informative)
Well, yes, that kind of is the issue. The computation chess masters make, the actual thoughts, could be handled on a 1950 computer no problem.
The question is how. It isn't brute force, though they do delve into plies [wikipedia.org] as desired. The real trick is knowing which handful to explore mentally. And if it were just pattern matching against known games, it would be done by computer already that way, too.
What?
FTFY... (although perhaps a few players I know might be thinking about it the original way it was written)
Re: (Score:2)
The question is how. It isn't brute force, though they do delve into plies as desired.
As you mention, grandmasters know when it is appropriate to search through the move-tree, when to look deeply into a position. They prune the search tree very hard... so the question is, how can we get computers to know which branches are ok to prune, and which aren't? Computers still aren't very good at that.
Incidentally, it amazes me how often Tal would say, "and in this position, I decided to calculate every variation all the way to mate." There are not many people who can keep the moves that clea
Re:Gotta love neural networks! (Score:5, Insightful)
Sigh. The reason that human experts are no longer competitive is because human experts prune where Deep Ply fears to trust static analysis. Pitted against a relentless algorithm which resists intuitive pruning, grand-master human pruning leaks a full pawn or two per game.
It's damn amazing how well grand-master level pruning actually works, but don't mistake this for flawless chess. Beautiful? Maybe. Flawless? Not even close.
When it was still somewhat competitive between man and machine, the human chess players would think they were pressing an overwhelming advantage, only to discover themselves mired in tiny, unanticipated tactical disadvantages move after move after move after move. "The damn thing keeps finding these fiddling resources!" If you weren't careful, you could easily lose from what had initially appeared to be a won position (and it probably would have been, against a human opponent blind to all those fiddling resources).
The trick for the competitive chess programmer was to achieve the right balance in the static evaluator so that tangible material gains didn't consistently outweigh less tangible advantages of tempo. Matthew Lai in his paper does not seem to grasp this essential trajectory of computer chess. He seems to think it's remarkable that his Oldsmobile displays more rigidity on the impact sled than the lunar lander, when it's pretty clear to everyone else involved that no Oldsmobile ever made was going to win the space race. The ply-based chess engines had their static evaluators hand-tuned by experts over many decades within a spartan clock-cycle budget.
Until he actually defeats all these programs on existing commodity hardware at existing tournament time controls, he's comparing watermelons to kiln-dried coconut flakes.
It's the same problem with new technology. It isn't enough to merely be better in some personally favoured dimension of merit. Your immature new thing has to be better enough to actually pass the mature old thing on its own terms.
Got a better substrate than silicon? Yeah? What's your defect density cranking out 10,000 wafers per month? Oh, you haven't actually developed all that quality-control infrastructure yet, but you figure you can do it at half the price once you work out the final kink from your strained fullerene crystal lattice?
Awesome progress, pal, but I think I'll invest my own Bitcoin elsewhere.
For the record, I've long believed that the trade-off moving from depth to sophistication wouldn't prove particularly steep (for the right sophistication). But any gradient that's a net loss (no matter how small) provides pretty much no immediate competitive incentive for anyone to invest any real effort hoeing that row.
The great thing about neural networks is that they don't actually require much real effort. The machine itself does most of the work in 72 hours. And then what have you got? A RISC chip that never actually kills x86 (because those idiots were busy touting microcosmic instruction set efficiency long after the real game had shifted to streamlining the cache hierarchy, where there's no low-hanging ideological shortcut to help you overcome the first-mover fat-payroll advantage).
I have seen something else under the sun: The race is not to the swift or the battle to the strong, nor does food come to the wise or wealth to the brilliant or favor to the learned; but sunk cost and legacy happen to them all.
Re: (Score:2)
Sigh. The reason that human experts are no longer competitive is because human experts prune where Deep Ply fears to trust static analysis. Pitted against a relentless algorithm which resists intuitive pruning, grand-master human pruning leaks a full pawn or two per game.
lol yes, but that's why we consider computers stupid, even though they can still win at chess through brute force. The fact remains that the vast majority of chess moves in a given position are bad, and the computer program that learns to prune them first will have a huge advantage.
Re: (Score:3)
The point is not to win. Chess supercomputers already do that. The point is to write a program that can play well using limited resources, and maybe learn something about how humans do it.
You said yourself: "It's damn amazing how well grand-master level pruning actually works."
Re: (Score:2)
Neural networks are anything but resource-efficient.
Compared to how a traditional chess engine searches the game tree, neural networks are extremely resource efficient.
The top engines use an algorithm called PVS() (Principal Variation Search), which is just a variant of AlphaBeta() that includes a null-window aspiration search. They do some advanced stuff such as pruning and extensions, but in the end the core of it is still AlphaBeta(). These engines still have to search through literally billions of positions on each move in order to beat the top humans.
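For readers who haven't seen it, here is a minimal, hypothetical sketch of the PVS idea (not taken from any real engine): over a toy game tree represented as nested lists with integer leaves, the first child is searched with the full (alpha, beta) window, and later children get a cheap null-window probe, re-searched only if they unexpectedly beat alpha.

```python
# Toy Principal Variation Search: negamax alpha-beta where siblings of the
# first move are probed with a null window (alpha, alpha + 1) and only
# re-searched on a fail-high. A "position" here is just a nested list whose
# integer leaves are static scores from the side to move's perspective.

def pvs(node, alpha, beta):
    if isinstance(node, int):                    # leaf: static evaluation
        return node
    first = True
    for child in node:
        if first:
            score = -pvs(child, -beta, -alpha)   # full-window search
            first = False
        else:
            # Null-window probe: can this move beat alpha at all?
            score = -pvs(child, -alpha - 1, -alpha)
            if alpha < score < beta:             # fail-high: re-search fully
                score = -pvs(child, -beta, -alpha)
        alpha = max(alpha, score)
        if alpha >= beta:                        # beta cutoff: prune siblings
            break
    return alpha

tree = [[3, 5], [2, 9], [4, 1]]                  # tiny two-ply toy tree
print(pvs(tree, -10**9, 10**9))                  # → 3
```

The null-window probes are what make move ordering so important in real engines: when the first move really is best, every sibling fails low cheaply.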
Re: (Score:2)
Cut the kid some slack (Score:5, Interesting)
You make some very important points in your post: for your new product to take over, it needs to do everything the old product does, and then do something better. However, take this into account:
1) The team that built Deep Blue were IBM employees, and so had different resources available. I doubt this student (I call him kid) had a grandmaster available to help him fine-tune his evaluator, or a fab to build custom silicon for his chess-playing machine. Also, it is very instructive to watch the documentary "Game Over" to learn a few things about how IBM used the game against Kasparov to push up their share price. That should give some idea of the resources they threw at the project.
2) The same Deep Blue team came out of the CS department at Carnegie Mellon University, where they did their Ph.D.s on computer chess and studied with a professor who spent a lot of his career on the subject. They were grown-ups with a lot of experience in the field, and much wiser than a young student.
3) The current computer chess champion (Komodo) again had its evaluator fine-tuned by a grandmaster: https://en.wikipedia.org/wiki/... [wikipedia.org]
4) Most of the top chess programs have been written by programmers who have written other chess engines before. Their "success" is their 3rd or 4th re-write of a chess engine, and no amount of talent can replace that kind of experience.
Given all these points (and a lot more that can be identified along the same lines) I would say this kid did a good job.
Re: (Score:2)
Re: (Score:2)
20 million neural networks in my brain... all interconnected...
It means I have a neural internet in my brain! That's fucking cool!
now it needs to play other computers to impress me (Score:4, Informative)
the big Computer tournaments are run by TCEC at chessdom.com - there it would be paired against other engines, of whom Komodo and Stockfish have been pretty much dominating every year since season 2 -
truth is, all computer chess is computer vs. computer nowadays - the losses come from different evaluations of positions - then the programmers try to correct it, etc - but since all engines run on the same hardware with the same resources, the best performers should win -
you can follow Season 8 (round 1b right now) here
http://tcec.chessdom.com/live.... [chessdom.com]
Re: (Score:1)
His method of dealing with the move horizon is cool, but I'm sure someone has thought of it before (since I have, and if I thought of it, someone else surely has).
Re:now it needs to play other computers to impress (Score:5, Insightful)
It only plays at around the level of GnuChess, so don't be impressed.
You should be impressed. Not by its level of play (which is not impressive), but by the fact that it:
1. Taught itself to play
2. Reached FIDE Master Level in THREE DAYS.
To be honest, I'm not even sure why this is a story.
See #1 and #2 above.
Re: (Score:1)
Re: (Score:2)
What's the difference?
The rules were programmed in advance.
For example, you couldn't put it in front of scrabble and expect it to do something reasonable.
Re: (Score:2)
Re: (Score:1)
Giraffe's magic is supposedly in the decision-pruning algorithms. Surely some of the concepts involved would be transferable to other games, like Scrabble.
Maybe you won't be satisfied until an actual humanoid robot is moving pieces by itself, having bought the chess set from a local shop.
Re: (Score:2)
Maybe you won't be satisfied until an actual humanoid robot is moving pieces by itself, having bought the chess set from a local shop.
Mate, if you're going to say it can teach itself, then it better be able to actually teach itself. I have no objection to this chess program as a clear demonstration of weak AI.
Re: (Score:2)
Talking about the phrase "Teach itself" is a mere semantic dispute. I would rather discuss what the AI actually does.
Very well said. I will think more deeply about that in my future conversations.
Re: (Score:2)
Talking about the phrase "Teach itself" is a mere semantic dispute. I would rather discuss what the AI actually does.
The more important semantic dispute is whether you should ever use the unqualified phrases "AI" or "Artificial Intelligence" about a limited computer program.
I can see that "weak AI" is acceptable as a technical term, as it is clearly differentiated from anything to do with intelligence as generally understood.
Re: (Score:1)
What's the difference?
The rules were programmed in advance
A person who would be considered to have taught themselves to play would still have to have access to the rules from somewhere.
Your comment makes it sound like you define "teach itself to play" as needing to reinvent the entire game independently.
Re: (Score:2)
I suppose it would be more impressive if it learned how to play without knowing the rules. On the other hand, that's a little unfair, no?
Although, there are types of neural networks that have learned to play things like Mario Kart by watching human players play.
Re: (Score:2)
Re: (Score:2)
It uses a neural network to recognize good moves. If you don't consider training a neural network "changing its program" then I've got some bad news regarding your own autonomy.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
To some extent neural nets do model what happens in a human brain, but they also do things that we're fairly sure human neurons don't, most notably back propagation, or at least not in the form we do it with artificial neural networks. That's not to say there aren't analogous mechanisms; in fact there *must* be one (how else to explain the plasticity of inputs). But there are critical differences.
Now that doesn't mean of course that a computer neural network is stupider. In fact cell for cell our neural networks out
Re: (Score:2)
Re: (Score:2)
Why are you so certain that a neural network matches a human brain?
Maybe because a neural network, for a fact, matches the human brain.
This is well understood. What isn't well understood is the learning mechanism, but we do know for certain that it is to a large degree a timing-dependent Hebbian learning process. To be quite specific, Hebb's Rule is a good predictor of neuron activation in brains. People did fucking science.
At least become casually acquainted with the subject before acting like a know-it-all. Clearly you don't know shit. What possesses people like you
Re: (Score:2)
That's not to say there aren't analogous mechanisms; in fact there *must* be one (how else to explain the plasticity of inputs).
The neurons in a brain to a large extent use some form of Hebbian learning. We know this for a fact because Hebb's Rule is proven to be a good predictor of neuron activations.
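To illustrate what makes Hebb's rule different from back propagation (with a completely made-up toy setup and numbers): the weight update uses only activity local to the synapse, delta_w = eta * pre * post, with no error signal propagated backward through a network.

```python
# Toy Hebbian learning: repeatedly pair an input pattern with a desired
# postsynaptic response, strengthening each synapse in proportion to the
# product of its own pre- and postsynaptic activity. No global error term
# is needed, unlike backpropagation. All values here are illustrative.

eta = 0.1                                   # learning rate (made-up value)
w = [0.0, 0.0, 0.0]                         # synaptic weights onto one neuron

pattern, response = [1.0, 0.0, 1.0], 1.0    # stimulus paired with firing
for _ in range(5):                          # five pairings
    for i, pre in enumerate(pattern):
        w[i] += eta * pre * response        # delta_w = eta * pre * post (local)

# After pairing, the pattern alone drives the neuron.
drive = sum(wi * xi for wi, xi in zip(w, pattern))
print(round(drive, 2))                      # → 1.0
```

The point of the sketch is the locality: each `w[i]` update touches only quantities that synapse could plausibly "see", which is what makes Hebbian learning biologically plausible where backprop is not.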
Re: (Score:2)
Come on guy... you know you are wholly ignorant on the subject, so why are you acting like some fucking knowledgeable person about it? You do know that it's wrong to do that, right? It's not just wrong, it's dishonest. That makes you a dishonest fuck.
Oooh, insults, you sound so intelligent when you insult me.
Maybe because a neural network, for a fact, matches the human brain. This is well understood.
No it's not lol. They match some aspects of neurons, but not all of them. We don't even entirely understand what neurons do. It's unlikely we even know all the different types of neurons that exist.
Getting back to the chess-playing neural network in this story... it is a specific, chess-playing neural network. As a result, it clearly belongs in the subset of weak-AI. Neural networks in general may match a human brain (something we don't kno
Re: (Score:2)
You are just an ignorant fuck that likes to pretend that he knows something, even when you know exactly zero.
It's unlikely we even know all the different types of neurons that exist.
There are 8 types you ignorant fuck.
Re: (Score:2)
Shall I try to enrage you some more? Did you know there are over 50 types of neuron in the retina alone?
Re: (Score:1)
No, that is what existing high level chess programs do, and exactly what this one doesn't do. Please go and learn about what a neural network is before commenting on this story again.
Are we sure it taught itself to play? (Score:5, Funny)
Are we sure it did not just learn how to install and launch GnuChess?
Re: (Score:2)
The fact that we are sure makes me sad.
Re: (Score:2)
Re: (Score:3)
There are a few blocks with "input" and "hidden layer 1"/ hidden layer 2. What does that mean? Absolutely nothing
At some point you have to stop explaining subject-specific phrases in an article; "hidden layer" means something to people who have a basic understanding of the subject. Google it if you don't.
like GnuChess (Score:5, Interesting)
Humans are able to play chess at a high level because they are able to brutally prune the decision tree.....a grandmaster can quickly eliminate most moves as useless (although he/she will probably think of it in reverse terms: saying he/she quickly identified the important moves in the position). A computer that could combine that kind of pruning with the massive searching power would be ridiculously powerful. Better than our current computers by an order of magnitude.
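As an illustrative sketch of that combination (a made-up toy, not Giraffe's actual algorithm): rank moves with a cheap heuristic, then recurse only into the top k "candidate moves" at each node, the way a strong player narrows a position to a handful of lines before calculating deeply.

```python
# Toy beam-pruned search: at every node, score all moves cheaply, keep only
# the k best-looking ones, and search just those. In a real engine the
# heuristic would be a learned evaluator; here it is a trivial stand-in.

def beam_search(pos, depth, moves_fn, apply_fn, heuristic, k=2):
    if depth == 0:
        return heuristic(pos)
    # Cheap ordering pass: rank moves by the value they immediately lead to.
    ranked = sorted(moves_fn(pos),
                    key=lambda m: heuristic(apply_fn(pos, m)), reverse=True)
    best = heuristic(pos)
    for m in ranked[:k]:                    # prune everything outside the top k
        best = max(best, beam_search(apply_fn(pos, m), depth - 1,
                                     moves_fn, apply_fn, heuristic, k))
    return best

# Toy "game": a position is a number, a move multiplies it (mod 100, so the
# greediest line is not always best); find the largest value reachable in
# 3 plies while only ever exploring 2 of the 4 moves at each node.
moves = lambda pos: [2, 3, 5, 7]
apply_m = lambda pos, m: (pos * m) % 100
print(beam_search(1, 3, moves, apply_m, lambda p: p, k=2))   # → 98
```

With k=2 the tree visited is exponentially smaller than the full one; the open question the parent raises is how to make the pruning heuristic good enough that the lines it discards really were useless.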
Re: (Score:2)
My curiosity has always been with the human brain's ability to play really well, like top five-in-the-World chess.
Is it likely that a player like Bobby Fischer dedicated so much of his memory to the pursuit that he was forced to sacrifice processing power elsewhere?
Re: (Score:2)
Is it likely that a player like Bobby Fischer dedicated so much of his memory to the pursuit that he was forced to sacrifice processing power elsewhere?
I don't think so. I've looked, but never found evidence that a human brain can "fill up." I estimated once that playing chess at a top level is similar in mental dedication to learning a second language very well (based on summing the total knowledge base required to play chess at a top level. I spent a lot of time finding grandmaster explanations of how they think).
Since around 1980 the amount of knowledge has increased, since the grandmaster opening book grows deeper and deeper.
Excellent post.
Thank you, good sir. I ho
Re: (Score:2)
I've looked, but never found evidence that a human brain can "fill up."
This short, informal, but informative film begs to differ:
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
Once I ended up with soy sauce on my pancakes instead of syrup.
Re: (Score:2)
Excellent post.
Thank you, good sir. I hope you have an excellent day.
Jesus, I've fallen into a parallel universe.
Re: (Score:2)
The human succeeds because they multi-task the solution. The idea is to tackle the problem in parallel. So one part works on immediate moves, another works on mid-length strategy and another works on final strategy. This is replicated to track the opposition: their immediate moves, their mid-length strategy and their final strategy. So the actual move and strategy become a composite of the outcomes of each parallel outcome and their influence upon each other. So a plan to eliminate say a critical piece,
Re: (Score:2)
The place the parallelism really takes place is when searching for what grandmasters call candidate moves. Much like you can look at a scene and immediately recognize what objects are, a grandmaster can look at a chess board and quickly see which moves are the important ones.
Re: (Score:2)
Computer
Re: (Score:2)
Computers are unable to understand positions. Take the final setup mentioned here - http://scienceblogs.com/evolut... [scienceblogs.com]
That's a good example. You don't really explain what it means to 'understand' a position though. I consider "understand" to merely mean "recognize which branches are prunable."
Re: (Score:2)
Chess was AI until the computers started doing it well, then it became "not AI". So AI is defined as whatever humans do better than computers at the current time, a list which is getting smaller and smaller. I guess someday there will be nothing left.
This is the kind of argument you see from people who think a chatterbot thinks. There's been a clear understanding of the difference between "hard AI" and "weak AI" for decades. Chess-playing computers are clearly weak AI. This isn't an insult, it's the way these things are categorized.
Re: (Score:2)
I must be old (Score:2)
Re: (Score:2)
I've gotten 2,415 times smarter since then.....
Good (Score:2)
Despite the fact that computers can now beat even the best human players at chess, I've always been of the opinion that beating a human at chess was not really a solved problem, because where chess programs do so by exhaustively examining millions of board combinations to make even a single move, a grand master chess player will generally contemplate but the tiniest fraction of that amount.... and they can still play chess pretty damn well. If a computer only considered as many board combinations as a gran
Re: (Score:2)
I don't think chess is a good general AI problem any more. It really has limited combinations; there are only so many games that can be played. When you get right down to it, it is possible, though time consuming, to just calculate all the possible games from a given position and objectively select the best move each time. One could pre-calculate all possible positions and simply program a look-up table to produce the best possible outcome every time, much like Tic-Tac-Toe can be programmed. We already ef
Re: (Score:3)
One could pre-calculate all possible positions
Shannon calculated the number of chess positions as 10 to the power 50.
At that rate, it would take longer than the expected life of our galaxy to compute one move, even if every molecule of the earth were turned into a supercomputer.
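A back-of-envelope check of the scale, using Shannon's often-quoted game-tree estimate of about 10^120 possible games (his position estimate is much smaller, but exhaustively "solving" chess means walking the game tree). The figures below are rough, hedged assumptions: roughly 1.33e50 atoms in the Earth, and a very generous 1e15 evaluations per second from each.

```python
# Rough scale check for exhaustive chess calculation. All inputs are
# order-of-magnitude assumptions, not measured values.
atoms = 1.33e50            # approximate number of atoms in the Earth
rate_per_atom = 1e15       # suppose each atom were a petaflop supercomputer
total_rate = atoms * rate_per_atom      # combined evaluations per second

game_tree = 1e120          # Shannon's estimate of possible chess games
seconds = game_tree / total_rate
years = seconds / 3.15e7   # ~3.15e7 seconds per year
print(f"{years:.1e} years")
```

Even under these absurdly favorable assumptions the answer comes out around 10^47 years, dwarfing the age of the universe (~1.4e10 years), which is the parent's point.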
Re: (Score:2)
10 ^ 50? No, I don't think that's right for positions... There are only 32 possible pieces on 64 squares, with most pieces having significant limits on where they could possibly be. For instance, a bishop will always be on 1 of 32 squares, and a pawn can only advance to any of 39 squares by eliminating the opponent's pieces (thus weeding down the possible positions considerably), and there are a whole host of "impossible" positions which can never be reached without having broken a rule (such as when a queen
Re: (Score:2)
LMGTFY:
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
One could pre-calculate all possible positions
Shannon calculated the number of chess positions as 10 to the power 50. At that rate, it would take longer than the expected life of our galaxy to compute one move, even if every molecule of the earth were turned into a supercomputer.
So, just another engineering problem then?
Re: (Score:2)
We are only partly conscious of the decision making processes in our brains. That is why we think we have some sort of creative ability that AI can never match. This is most likely an error.
Brain architecture hypothesis (vastly simplified subset model):
sub-conscious (SC) processes --> executive function (EF) center, and rational conscious (RC) processes --> EF.
Also, feedback paths: RC <-- EF. Finally, feed-forward path SC --> RC
It is important to understand that EF is "influenced" by RC, but EF is no
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
Would you like to start again and have another go, please?
Von Neumann? Try Shannon (Score:3)
I thought Claude Shannon https://en.wikipedia.org/wiki/... [wikipedia.org] wrote the first program to play chess. Among other things he dabbled in.
Re: (Score:2)
But Von Neumann was something special. [st-and.ac.uk]
A polymath [wikipedia.org] and a polyglot, [google.com] his early work with chess is not to be scoffed at.