A Faster Jigsaw Solving Algorithm 104
mikejuk writes "Andrew Gallagher at Cornell University in Ithaca, New York has improved the standard approach to automated jigsaw solving by copying what humans do: finding groups of pieces that best match and working outwards from there. With a speed of 10,000 pieces per 24 hours, it can solve large puzzles. Not only that, but the type of jigsaw it solves is more difficult than usual, in that the pieces are square and can be placed in any orientation. It is so good it can even solve problems consisting of a number of mixed-up puzzles without being told how many there are or their dimensions. Of course, as well as having fun beating humans at another recreational pastime, the technique could be used to unscramble shredded documents, as per the recent DARPA challenge."
Re: (Score:3)
If you're working in the information technology arena, and your job doesn't involve focused application of creativity, the time to start refocusing your career objectives is now.
Somebody will still need to "flip them burgers" even in IT (e.g. in an era of algorithmic trading and fighting over milli/microseconds, there is still a need to keep alive some COBOL code written in 1980 or before: what's so creative in that?)
Re: (Score:3, Insightful)
I'm a lowly technician. So long as there are printers, someone has to go around the rooms clearing paper jams and putting new cartridges in.
Well then, your job should be safe for quite some time, as "paperless" office ranks right up there with IPv6.
Re: (Score:2)
If you're working, and your job doesn't involve focused application of creativity, the time to start refocusing your career objectives is now.
FTFY. In time, even jobs involving focused application of creativity will be replaced. No job is safe from AI in the end.
Re: (Score:3)
No job is safe from AI in the end.
No job is safe from the wrath of Cthulhu either. Fortunately, both are fiction.
Re: (Score:2)
Now excuse me while I take time to read the FA, because the subject is indeed rather interesting from a technical point of view. I haven't done much image processing/recognition for a while, but it's a rather cool subject for research.
Re: (Score:2)
I've got a self-scoring and self-organizing neural net that disagrees. It recognizes patterns and shapes, does basic maths, and at some point it gets bored and stops doing what I tell it to. When it gets bored with the lack of stimulation it goes back to paying attention. Granted, it isn't human level, but it's only 50k neurons, not 100 billion.
Give it time.
Re: (Score:3)
I never really understood this kind of fear of 'artificial intelligence'. I mean, yes we have all seen HAL9000 and Skynet in the movies, but what I never understood about those (aside from why they thought it was a good idea to put both systems in full control of mission critical stuff) was that they were supposed to be even remotely human-like.
Even if we do create artificial intelligence it'll be *nothing* like human intelligence. First of all, there is no reason to make a computer that might decide it does not want to do whatever it is you ask it to do.
Re: (Score:2)
You must have misread me. I'm not afraid of AI. Given enough time we'll make an AI that's way smarter than us, and it'll likely be able to govern us much better and fairer than we can govern ourselves. Or it will annihilate us completely, without any real way for us to defend ourselves.
Why do you think that?
Why would we even give the AIs WE create any sort of drive to surpass us? Why would we even give it any sort of anthropomorphism? Why wouldn't we specifically design it to work with us and not against us?
Personally I think that when/if we ever create the AI, it'll be a non-verbal thing that sits somewhere in a black box and does the same things that we normally use computers for.
It'll be largely indistinguishable from what we use today, at least to anyone who is not into computers.
Re: (Score:3)
What people are worried about is the unspoken "E" word in there. Evolution, that is. When an AI starts to learn for itself and starts making decisions. Specifically, decisions we don't like.
When an AI becomes Aldo [wikipedia.org], does a Nancy Reagan and just says "no".
The natural reaction of many humans would be to consider it defective and pull the plug. Software that doesn't do the task it was designed for has bugs. Stop the task, tweak the code.
But what if the software has developed a self-preservation instinct and doe
Re: (Score:2)
But I still think that if we're given enough time on this planet, super intelligent and fully autonomous AI is inevitable.
Re: (Score:3)
I agree there will be a number of key differences in drive. The major factor I see is whether or not AI will be fully complete in software and not dependent on the underlying hardware, as we are.
If so, it could replicate as needed and the path taken for self-preservation would be significantly different from ours. It would essentially remove us as a potential threat.
Yes, I also believe it is inevitable. Should we? I don't think we *can't*. One day the threshold will be passed and we won't even know it until
Re: (Score:3)
What people are worried about is the unspoken "E" word in there. Evolution, that is. When an AI starts to learn for itself and starts making decisions. Specifically, decisions we don't like.
Still don't get it? Why would we even allow it to evolve? These machines will ultimately be created and controlled by us, it would be almost trivial for us to control their evolution and decision making processes.
Why would we do the "let the AIs build smarter versions of themselves" thing to begin with? Why not just let them design smarter versions and then build them ourselves, after we put in the limitations we decide they need?
Besides the "computer does not want to be turned off"-scenario assumes th
Re: (Score:2)
The problem is that AI is about machine learning. We design them to learn on their own. That is the point: to get them to figure out better, more efficient ways of completing tasks.
The paths taken to complete those tasks may go in directions we didn't anticipate. And the complexity of the systems gets to such a point that we don't know what they know or which bits are critical to the task performance.
All physical things decay. I just replaced a CPU fan in a laptop last week and am in the middle of salvaging a d
Re: (Score:2)
Because we don't have a choice. If we decide to build a new AI based on
Re: (Score:2)
When an AI becomes Aldo, does a Nancy Reagan and just says "no".
I'm a lot more worried about AI going all Nomad-Must-Sterilize [memory-alpha.org] on us.
Re: (Score:2)
"Why would we even give the AIs WE create any sort of drive to surpass us?"
It isn't an AI if we decide its opinions. An AI doesn't determine its course of action through pre-determined programming. It is self-programming and learning. At best we can attempt to guide it via its environment, and there is always a chance that it will learn to utilize data we didn't expect, or in ways we didn't expect, to change its environment. For instance, our own internal AI communicates with audio and visual feedback. If you give
Re: (Score:2)
"I never really understood this kind of fear of 'artificial intelligence'."
Lots of people are afraid of intelligence, period. No matter if it's software or wetware.
Remember, half of the people out there have an IQ under 100, they feel threatened.
Re: (Score:2)
Not likely anytime soon. We think of ourselves as individuals but actually our neurons are the individuals. We are like an island for neurons that gives them the ecosystem they need. A person is an emergence of a pocket of neurons in isolation. But as has been seen in brain/computer interface studies, those neurons will respond to and interact with anything they can given the chance not just the physical parts that are "you". Our neural pocket can interface fastest with the neurons in your head and directly
Re: (Score:2)
"First off all there is no reason to make a computer that might decide it does not want to do whatever it is you ask it do."
The ability to decide what one wants to do rather than what you are asked/told to do is pretty much the true definition of intelligence. There really isn't much practical reason to make an AI but that doesn't mean someone won't do it. There also is SOME practical purpose. If you want something that can dynamically solve problems for which we haven't mapped out an algorithm yet an intel
Re: (Score:2)
AI assessor, debugger, and maintainer. Seems like a safe bet. When the AI is good enough to assess, debug, and maintain AI, you have a natural shift to AI assessing ai assessor, debugger, and maintainer. Rinse and repeat.
Re: (Score:2)
In light of this, my message is simple. If you're working in the information technology arena, and your job doesn't involve focused application of creativity, the time to start refocusing your career objectives is now.
Marginal improvements in machine "intelligence" and almost all information technology bear no relation to human creativity. They are concerned with making machines do the equivalent of manual labour.
The day a computer can write a poem is the day we should destroy all computers above the very basic level we have today.
Re: (Score:2)
"The day a computer can write a poem..."
Can _you_ write one?
You cannot evaluate computers by comparing them to a handful of exceptional people who happen to be good at one thing.
Or do you consider the 6 billion people who can't write poems not to be intelligent?
Re:Progress. (Score:4, Insightful)
I find it sad that people actually think AI or any sort of AI is actually present here, or improving when they read about things like this.
There is no intelligence here. Nothing. There's no guesswork, only statistics, rigorously calculated and applied the same every single time. It's a heuristic. It's programmed. It's immutable. It's basically a targeted improvement on a naive brute-force algorithm.
That's *not* how intelligence works. To be honest, the nearest thing to "intelligence" we've had recently is the Kinect, but only because it was based on a genetic algorithm at one point and tweaked incessantly. And even that is more brute-force and dedicated processors than anything else.
There has NEVER been, in the whole field of AI, a logical leap to join two abstract concepts. There has never been discovery or invention (no matter how minor) on the same scale as even a pigeon. No machine ever worked out something that it wasn't told how to do DOWN TO THE LETTER.
This is not AI. Your science fiction is, and for the foreseeable future still will be, just that - fiction. There's nothing a computer does today that isn't just '60s theory and ideas applied with a sufficiently large amount of processing power to come up with pretty predictable results that do not approach AI. Yes, eventually, brute force will allow us to come up with something that resembles intelligence, but it will not be intelligence, and brute force is the most expensive thing to apply to a problem like that (and, strangely, our own intelligence is the cheapest!).
Literally, the closest we get is genetic algorithms and letting things just run off on their own, and we're pretty sure even that's just an illusion that hasn't crossed the line into something we would consider actual intelligence. There's an example of a GA put to work on a chip design to distinguish two frequencies of input: when the input is of one frequency, it activates one output; when the input is of another frequency, it activates another. The GA "evolved" through generations based only on selection for those criteria and ended up with an ingenious solution that took years of analysis to understand fully.
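The selection loop described above can be sketched minimally. What follows is a toy (1+1) evolutionary scheme — an illustrative stand-in for the chip-evolving GA, not a reconstruction of it — that recovers a hidden bitstring from fitness scores alone, with the target and parameters invented for the example:

```python
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # arbitrary goal pattern

def fitness(genome):
    """Score: number of bits matching the target (the only feedback used)."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """Flip one randomly chosen bit."""
    child = list(genome)
    i = random.randrange(len(child))
    child[i] ^= 1
    return child

# (1+1) evolutionary loop: keep the mutant only if it scores strictly better.
genome = [random.randint(0, 1) for _ in TARGET]
for _ in range(1000):
    child = mutate(genome)
    if fitness(child) > fitness(genome):
        genome = child

print(fitness(genome) == len(TARGET))  # selection alone recovers the target
```

As in the frequency-discriminator anecdote, nothing in the loop "understands" the target; it only keeps whatever scores better.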
But even that isn't "intelligence" so much as blind luck and brute force. No machine, for the next 50-100 years at least, will be able to hold even quite a boring conversation with you (go look at transcripts of Turing Test entries from as far back as you can find and compare them with today's: improvement, but still no magic insight that makes it seem human unless you're terminally stupid). It certainly won't have a consistent or reasoned opinion. And certainly not one that it came to by itself and wasn't just a case of it picking a contrary / popular opinion deliberately.
Prove me wrong, by all means, but sci-fi is for the TV. I still can't get my phone to recognise my voice on a simple phrase 8 times out of 10, and that has vast quantities of brute force, previous patterns, pattern-matching code and statistics to work from. Sure, it *looks* impressive and intelligent when you say "Where is the post office?" and it analyses the waveform, decides you said "post office" with 85% certainty and then sticks that into a basic search to see what comes up in the local area. But it's NOT understanding what you said. Not by a long shot. If I'd said "Where's the post? Office?", it would get it completely, 100% absolutely wrong, and I can't teach it to get it right even as often as any trained animal would.
This is not AI. Please stop thinking it is. It's pseudo-related at best.
Re:Progress. (Score:5, Interesting)
I find it sad that people actually think AI or any sort of AI is actually present here, or improving when they read about things like this.
There is no intelligence here. Nothing. There's no guesswork, only statistics, rigorously calculated and applied the same every single time. It's a heuristic. It's programmed. It's immutable. It's basically a targeted improvement on a naive brute-force algorithm.
You mean like a human brain?
Re: (Score:2)
You mean like a human brain?
I guess it depends upon the person you are talking about.
In all seriousness though, the human brain is still very much a black box, depending upon what area you are focusing on. Some of the processes are more or less scripted in some way (i.e. keep the heart beating) where as others have a large question mark next to them for how they might work (i.e. the creative process of artists).
Re: (Score:2)
You mean like a human brain?
I guess it depends upon the person you are talking about.
In all seriousness though, the human brain is still very much a black box, depending upon what area you are focusing on. Some of the processes are more or less scripted in some way (i.e. keep the heart beating) where as others have a large question mark next to them for how they might work (i.e. the creative process of artists).
It's neural networks all the way down. The OP (ledow) missed the whole scientific field of machine learning. Even a stupid perceptron will learn to "distinguish two frequencies of input".
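For what it's worth, the perceptron claim is easy to demonstrate once the raw signal is reduced to a linearly separable feature. In this sketch the zero-crossing featurization, the frequencies, and the training schedule are all hand-picked assumptions for illustration, not part of any classic result:

```python
import math
import random

random.seed(1)

def zero_crossings(freq, phase):
    """Sample one second of a sine wave at 100 Hz and count sign changes."""
    samples = [math.sin(2 * math.pi * freq * t / 100 + phase) for t in range(100)]
    return sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)

# Training data: 2 Hz signals -> class 0, 8 Hz signals -> class 1.
data = [(zero_crossings(2, random.random()), 0) for _ in range(50)] + \
       [(zero_crossings(8, random.random()), 1) for _ in range(50)]

# Classic perceptron rule: predict 1 when w*x + b > 0, nudge on mistakes.
w, b = 0.0, 0.0
for _ in range(20):
    for x, label in data:
        pred = 1 if w * x + b > 0 else 0
        w += (label - pred) * x
        b += (label - pred)

errors = sum((1 if w * x + b > 0 else 0) != label for x, label in data)
print(errors)  # 0 misclassifications after training
```

The separability does the heavy lifting here, which is arguably the point: the "intelligence" is in the representation, not the learner.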
Re: (Score:2)
I am actually working to convert my brain into a non-artificial brute-force machine without a trace of intelligence, by applying a brute-force approach to futoshiki and kakuro.
Re: (Score:1)
Actually, AI people debate about whether humans are truly intelligent -- it's a running joke that every time people figure out a solution to a task that is generally agreed to require intelligence, it turns out it doesn't.
Some more famous cases are chess, pattern recognition, voice recognition, voice synthesis, character recognition, and walking and hopping.
This is not just bad guessing -- when people say, well, a human is intelligent because they have a "deep understanding" of X, nobody really knows if that statement
Re: (Score:1)
"There has NEVER been, in the whole field of AI, a logical leap to join two abstract concepts. There has never been discovery or invention (no matter how minor) on the same scale as even a pigeon. No machine ever worked out something that it wasn't told how to do DOWN TO THE LETTER."
None of this is true. You are simply ignorant of the facts of AI programs, which include proof-finding programs.
MOD PARENT UP (Score:3)
"There has NEVER been, in the whole field of AI, a logical leap to join two abstract concepts. There has never been discovery or invention (no matter how minor) on the same scale as even a pigeon. No machine ever worked out something that it wasn't told how to do DOWN TO THE LETTER."
None of this is true. You are simply ignorant of the facts of AI programs, which include proof-finding programs.
Correct. The grandparent post is symptomatic of someone who has not actually studied A.I. or one of its subfields and assumes that Kinect is the "nearest thing we've had recently to intelligence." The problem of joining two abstract concepts was solved long ago by inference engines.
Re: (Score:2)
The problem of joining two abstract concepts is long ago solved by inference engines.
Thank god for that! Most people can't join up abstract concepts even if their life depends on it.
Re: (Score:3)
From what I've seen, inference engines still rely on algorithms applied to highly-specified domains.
But, it's been quite a while since I looked at it. Do you happen to have a broad description of an algorithm that can solve the form of problem common to IQ tests, "X is to Y as A is to B", say, this particular example?
movie : actors = novel : ?
1. pages
2. characters
3. magazines
4. singers
This, to me, would be the simplified test case of the general problem of a "logical leap joining two abstract concepts". Y
Re: (Score:3)
If by a "highly-specified" domain you mean a domain with its knowledge base and rule base completely specified, yes, inference engines still rely on those. But that IS the general case. Not even a human would be able to solve your example without knowing what movies, actors, novels, pages, characters, magazines and singers are, and what the linking concepts are (i.e., facts encoded in the knowledge base such as "element(actors, movie)", "element(characters, novel)", "contains(novel, pages)"). The real pr
Re: (Score:2)
I think the crux of the question here is capturing that while we can fairly say (and represent) that a movie "contains" actors, and a novel "contains" pages (the wrong answer, here), and a novel also "contains" characters (people in the novel, disambiguating from "characters" meaning "letters" as something "pages" contains just adds to the fun here...), to answer this correctly requires a notion of how to capture in rules that a novel contains characters -in a similar sense- as a movie contains actors.
It's
Re: (Score:2)
It is fascinating. Particularly the "characters" problem you mentioned already falls under the realm of natural language processing, and a very specific problem at that (context recognition).
I concede that defining "logical leap" is mostly a semantic argument -- I personally consider a syllogism (A->B, B->C therefore A->C) as a "logical leap", but it is one that can be perfectly captured by inference engines. Furthermore, one may argue that some aspects of "human intuition" might actually be... i
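The syllogism mentioned above is exactly what a minimal forward-chaining loop captures. This sketch assumes propositional facts and simple if-then rules, which is a deliberate simplification of what real inference engines handle:

```python
def forward_chain(facts, rules):
    """Repeatedly apply implication rules (p -> q) until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# A -> B, B -> C, starting fact A: the engine makes the "leap" to C.
derived = forward_chain({"A"}, [("A", "B"), ("B", "C")])
print(sorted(derived))  # ['A', 'B', 'C']
```

Whether chaining two rules counts as a "logical leap" is, as the comment says, mostly a semantic argument — but the mechanism itself is a few lines.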
Re: (Score:2)
How do you know our brains aren't simply super-massively parallel pattern matching machines? All human learning is achieved through the feedback loop of try something, observe the results (positive / negative), adjust, and try again. Everything from learning to walk to mastering a musical instrument works very similarly to this.
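That try/observe/adjust loop is also how the simplest numerical learners behave. A toy sketch — one hidden number, signed-error feedback, a made-up learning rate — purely to illustrate the feedback loop, not any model of the brain:

```python
def learn(target, steps=100, rate=0.5):
    """Try a guess, observe the signed error, adjust, try again."""
    guess = 0.0
    for _ in range(steps):
        error = target - guess   # "observe the results"
        guess += rate * error    # "adjust"
    return guess

print(round(learn(3.7), 6))  # converges on the hidden value: 3.7
```

The learner never sees the target directly, only the error signal — the same structure as the walking/instrument-practice examples above, scaled down to one parameter.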
Since you posted AC I'm not sure if you are actually following this thread, but that is a bit of a gross exaggeration, as there are a number of intuitive leaps recorded throughout human history. Off the top of my head, Nikola Tesla [wikipedia.org] likely made a number of these; reading his biographies you note that he visualized most devices before constructing them, whereupon they worked as expected, in his earlier years at least. Brute-force processes wouldn't necessarily lead to this.
Re: (Score:2)
I find it sad that people actually think AI or any sort of AI is actually present here, or improving when they read about things like this.
There are two problems here. One is that movies and science fiction set the bar very high - at full human intelligence, because that is what is most fun to write about. The other problem is that we keep redefining what practical AI means. If you showed Siri to someone in 1975, they would be blown away. Unfortunately, their basis in science fiction would convince them that in 2012 they could buy a home on Mars and get there with a flying car.
But please don't act like AI made no progress. But just like w
Re: (Score:2)
>No machine, for the next 50-100 years at least, will be able to hold even quite a boring conversation with you
"boring". You overestimate people a whole bunch of whom are fascinated by conversations with cleverbot, etc.
You are absolutely right on the matter of AI being absent in present computers, you just omitting the fact that I (artificial or natural) is absent in a lot of people as well.
Re: (Score:2)
Can someone please mod down the parent.
What he/she is saying is that to be AI it has to have a ??? step, as in:
1) Read input
2) Process data
3) ???
4) Print out results
Because the examples above have an explainable third step ledow does not consider it proper AI.... pshaw!
Heck, it's a computer. It can only do what it is supposed to do (just like the human brain). If either one of them learns, it is because it was built to learn, "DOWN TO THE LETTER" (automatic 10pt IQ discount for using all caps, btw).
Re: (Score:1)
Ironically, one of the big AI-is-coming-and-then-some people, Mr. Omega Point himself, Vernor Vinge, in his sci-fi novel Rainbows End, has tech where they actually digitize books by throwing them into a shredder, which blows the confetti up into the air while high-speed, high-resolution cameras feed it into computers that un-jigsaw it.
This irritates people because the book is destroyed in the process, but it is all done through prosaic programming, not AI.
Seems slow. (Score:1)
Obviously I don't understand the complexity here but it seems like that is a long time.
By my logic:
- pick 1 of the pieces as a start and pick 1 side of that piece
- now pick another piece and there are 4 possible arrangements to match your selected starting side (assuming square-ish pieces)
- on average, you'll need to check half of the remaining 9,999 pieces, each in 4 orientations, to find the matching one
- so the first side of the first piece will require, on average, 19,998 checks
- next edge you select wi
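The arithmetic sketched in the estimate above can be written out directly, under the same assumptions the commenter makes (every remaining piece is a candidate, four orientations each, the right match is recognized on sight):

```python
def expected_checks(n_pieces, orientations=4):
    """Expected comparisons to match ONE edge: on average half of the
    remaining pieces are tried, each in every orientation."""
    remaining = n_pieces - 1
    return remaining / 2 * orientations

print(int(expected_checks(10_000)))  # 19998, the figure in the comment
```

As the replies point out, the real cost comes from inexact matches: you can't stop at the first plausible candidate, so the per-edge search degrades to all remaining pieces, and wrong placements force backtracking on top of that.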
Re: (Score:1)
Thanks, I did read the summary - I was just pondering out loud the rough implications of why I thought it *seemed* slow to *me*.
My thinking assumed every pair is a potential match and, yes, that you know *immediately* when you have the *right* match ... obviously it is trivial to consider the implications of having to search every piece for the *best* match - you end up searching every piece rather than an average of 50% of the pieces, so double the number of comparisons required.
I suspect why this is M
Re:Seems slow. (Score:5, Informative)
Inexact matches. In the puzzle described in the paper, the pieces are all square (no notches). So the algorithm has to decide which edge matches best based on the similarity of the pixels, but it could be wrong or there may be multiple pieces that look like they match equally well (e.g. sky pieces which look very similar).
In the cases where it's wrong, it may have to throw out some of the fittings -- e.g. if you have a bunch of smaller groups of tiles that seem to fit together, but when you put it all together the puzzle isn't rectangular, then you have to break up the groups and try again. You can't just match one tile and be done with it.
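A minimal sketch of this kind of dissimilarity-driven matching, assuming a sum-of-squared-differences score over the touching pixel columns (the paper's actual measure may differ, and the tiny "pieces" here are invented for illustration):

```python
def edge_dissimilarity(right_col, left_col):
    """Sum of squared differences between the touching pixel columns."""
    return sum((a - b) ** 2 for a, b in zip(right_col, left_col))

def best_right_neighbor(piece_id, pieces):
    """Greedy step: pick the piece whose left edge best continues our right edge."""
    right = pieces[piece_id]["right"]
    candidates = [(edge_dissimilarity(right, p["left"]), pid)
                  for pid, p in pieces.items() if pid != piece_id]
    return min(candidates)[1]

# Toy 1-pixel-tall "pieces": each edge is a column of grey values.
pieces = {
    "sky1":   {"left": [200], "right": [210]},
    "sky2":   {"left": [209], "right": [220]},   # nearly continues sky1
    "ground": {"left": [40],  "right": [50]},
}
print(best_right_neighbor("sky1", pieces))  # sky2 continues the gradient best
```

The failure mode described above falls straight out of this: two near-identical sky edges can score almost equally well, so a greedy pick can be wrong and the assembled groups must sometimes be broken up and retried.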
Re: (Score:1)
You sir, should be writing the summaries ... would have saved me reading the paper - which I eventually did do after posting my random musings.
Re: (Score:2)
A few problems with your assumptions.
The edge of the puzzle: you can try what you want at the outside, but you won't (shouldn't) get a match there. On the other hand, if the edge is a black line, the algorithm may go haywire, as two outside edges form a perfect match against one another.
There may be more than one puzzle included.
One piece may match against several others, but only certain groups of four may match together.
Some pieces may have an imperfect match, the pattern continues (to our eyes) but one edge i
Re: (Score:1)
Slow insertion.
Re: (Score:3)
I can see a practical application.
Hint: In the future, don't shred your sensitive documents. Burn them instead.
Re: (Score:1)
Arguably, the puzzle is flawed. There is no logical way to find the definitive solution without already knowing the solution. For example, you could construct a square tile puzzle where each tile is a picture surrounded by a white border with nothing crossing the border, which would be unsolvable absent any other rules given for the ordering of the pictures.
Even a human can only judge that the picture "looks" right, but maybe having a purple triangle smack dab in the middle of a picture of a cow is how the
Re: (Score:1)
How's that a real puzzle? There is only one kind of piece there, which connects to any other piece. See, all the politicians are the same. They connect with any other politician or person as long as it is beneficial to themselves, their relatives, friends or associates!
Re: (Score:2)
Twice. Both times it turned out badly. I recommend against this.
Immediate application (Score:3, Informative)
On the morning of May 3rd, Lin Zhaoqiang, a resident of Chengdu city, Sichuan, carried 50,000 yuan in torn 100-yuan bills from Chengdu's Jintang county to the Bank of China Sichuan Branch, looking for help. On May 1st, Lin Zhaoqiang's wife had suddenly had a fit and torn the 50,000 yuan of life-saving money into pieces. Facing thousands of pieces of cash, 12 bank employees sorted and spliced for 6 hours only to piece together a single 100-yuan bill. The remaining money, if it cannot be pieced back together, faces the unfortunate possibility of being declared null and invalid. Because this money is for treating his wife's mental illness, Lin Zhaoqiang said he won't give up.
One of the commenters said they should just weigh the cash, but obviously that would be too simple. Nothing is simple when dealing with Chinese banks and their ridiculous rules. They'll flat-out refuse to take small bills or coins ("What the heck are we going to do with all these jiao notes? [chinasmack.com] What are we, a bank or something?")
Very disappointing. (Score:4, Insightful)
Here every edge has a color distribution, and the task is to find another edge of another tile with a matching distribution. The fundamental solution was proposed originally in a Perry Mason novel by Erle Stanley Gardner (as reported by Donald Knuth in his book/manual on TeX). Perry Mason asks Paul Drake to find the two rentals made by the same person just half an hour apart. Paul Drake says, "There are thousands of rental records; I would never find the match in time." Perry Mason says, "Nah. Just sort all the records by name, and look for duplicates."
Sorting by name is grandiloquently called "lexicographic ordering" in comp sci. Create a lexicographic value for the color distribution on each edge, sort by that order, and look for duplicates. Here, areas of uniform color would produce multiple duplicates, and one has to prune the tree. That is where these guys claim to have made some improvement. It is a nice problem I would give to some master's students learning heavy-duty scientific computing. But I think shape matching has a lot more potential in developing antibodies and medicines.
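The Perry Mason trick in code — the edge "signature" tuples below are an illustrative assumption standing in for a real color-distribution summary:

```python
def find_matching_edges(edges):
    """edges: list of (signature, piece_id, side) tuples. Sort lexicographically
    by signature, then one linear scan finds identical neighbors."""
    ordered = sorted(edges)
    matches = []
    for (sig_a, pa, sa), (sig_b, pb, sb) in zip(ordered, ordered[1:]):
        if sig_a == sig_b:
            matches.append(((pa, sa), (pb, sb)))
    return matches

# Toy data: made-up 3-value color signatures for a few edges.
edges = [
    ((10, 10, 12), 1, "right"),
    ((90, 91, 92), 2, "left"),
    ((10, 10, 12), 3, "left"),    # duplicates piece 1's right edge
    ((55, 60, 65), 2, "right"),
]
print(find_matching_edges(edges))  # [((1, 'right'), (3, 'left'))]
```

As the comment notes, the hard part in a real puzzle is that signatures are only approximately equal, and uniform regions produce many near-duplicates that have to be pruned.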
Time complexity? (Score:2, Insightful)
"With a speed of 10,000 pieces per 24 hours" is not useful. What is the time complexity?