That’s where the company’s new software tool Qbsolv comes in. Qbsolv is designed to help developers program D-Wave machines without needing a background in quantum physics. A few of D-Wave’s partners are already using the tool, but today the company released Qbsolv as open source, meaning anyone will be able to freely share and modify the software.
“Not everyone in the computer science community realizes the potential impact of quantum computing,” says Fred Glover, a mathematician at the University of Colorado, Boulder who has been working with Qbsolv. “Qbsolv offers a tool that can make this impact graphically visible, by getting researchers and practitioners involved in charting the future directions of quantum computing developments.”
It should be: https://www.wired.com/2017/01/d-wave-turns-open-source-democratize-q…
Do you do the html by hand? Wow.
Given how almost all CMS systems are steaming bags of poop, or so inadequate that you’ll still need to do significant amounts of HTML by hand, there’s no reason not to.
Still a surprise given that, if you’re homegrowing your own CMS, as I remember being the case with OSNews, you already understand the codebase well enough to at least slap something like a Markdown parser onto the input stage and make that kind of issue harder to overlook.
(Of course, this is OSNews, which has had a problem with generating broken markup related to nested quoting for as long as I can remember.)
I’m actually in the middle of renovating my cruftiest old hobby codebase still in active use. While it may be a static templater that’s been superseded by Jekyll in every other way, it has one component I’d like to retain: glue code to perform offline validation of all the CSS, HTML, and XML (e.g. RSS) that it generates, plus an incomplete (local only, no fragment references) but also offline checker for broken links.
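For what it’s worth, the broken-link half of that glue code is small enough to sketch. Here’s a minimal, hypothetical version in Python (it assumes the generated output lives in a ./site directory; like mine, it checks local targets only and ignores fragments):

    import os
    from html.parser import HTMLParser

    class LinkCollector(HTMLParser):
        """Collect every href/src attribute on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            for name, value in attrs:
                if name in ("href", "src") and value:
                    self.links.append(value)

    for root, _, files in os.walk("site"):
        for fname in files:
            if not fname.endswith(".html"):
                continue
            path = os.path.join(root, fname)
            parser = LinkCollector()
            with open(path, encoding="utf-8") as f:
                parser.feed(f.read())
            for link in parser.links:
                # Skip external URLs, mailto: links, and fragment-only refs,
                # matching the "local only, no fragment references" limitation.
                if "://" in link or link.startswith(("#", "mailto:")):
                    continue
                stripped = link.split("#")[0]
                if link.startswith("/"):    # site-root-relative link
                    target = os.path.normpath("site" + stripped)
                else:                       # document-relative link
                    target = os.path.normpath(os.path.join(root, stripped))
                if not os.path.exists(target):
                    print(f"{path}: broken link -> {link}")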
I tried it, but the answers I keep getting from it are neither here nor there.
D-Wave does not have a quantum computer:
http://www.scottaaronson.com/blog/?p=1400
F*ck Wired and its clickbaity infomercials.
If it’s not Turing complete, then it’s not a computer. Period. From what I could find on the Internet, it’s not Turing complete. I leave the conclusion as an exercise for the reader.
so analogue computers are not computers… okkkkk….
Have D-Wave ever claimed that their solution is universal?
That’s nonsense. A computer does not have to be Turing complete. It only needs to be capable of some sort of computing.
Like the other poster noted, analogue computers are not Turing complete and several early digital computers were also not Turing complete. But they were and are still computers.
A rolodex is also a computer, and a nice one at that.
Sure it can be. It just isn’t a Universal Turing Machine. That’s not the same thing.
No real computer can reach the computational power of the Turing machine and there will never be a universal computer. Physics limits real things while the Turing machine is a theoretical construct.
The idea that “computer” means Turing complete (able to simulate a single-tape Turing machine given infinite time and infinite storage) is wrong; the concept of computers existed long before Turing was born.
I’ll reply to myself to address several comments at once. In Computer Science, running a Turing Machine is what it means to compute. That’s how you define the term computation. The universality has nothing to do with this. Any program for a Universal TM can be converted to a fixed TM. It’s the principle underlying it that matters.
Of course, one of the features of Turing Machines is the infinite tape. But this is only to accommodate the size of the problem. For any given size of problem to solve, the amount of tape used must be bounded. If a problem of a specific size requires an infinite amount of tape, it will not terminate and is therefore not computable.
Now, there are some formal systems which are strictly less powerful than a Turing Machine*. Take Finite State Automata, for example. And here is the rub: regardless of how big an FSA you are allowed to build, some problems are not solvable with it, whereas they are solvable by a TM (see the sketch at the end of this comment).
That was the essence of my comment. If it’s not Turing complete in its operating principle, then it’s not a computer. Yes, it can perform some computations but not all possible computations. Therefore it’s not a computer but a Special Purpose Device.
* The interesting fact is that if some formal system is at least as powerful as a TM, then it’s equivalent to a TM. That goes for real (maybe I should say theoretical) quantum computers as well: they can compute some things faster but cannot solve more problems in principle.
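To make the FSA limitation concrete, here’s the standard textbook example (my addition): the language a^n b^n, i.e. n ‘a’s followed by exactly n ‘b’s, requires an unbounded counter, which is exactly what a fixed FSA lacks and a TM’s tape provides. A minimal recognizer in Python:

    def accepts_anbn(s):
        """Recognize { a^n b^n }: all 'a's, then exactly as many 'b's."""
        count = 0
        seen_b = False
        for ch in s:
            if ch == "a":
                if seen_b:      # an 'a' after the 'b's began: reject
                    return False
                count += 1      # unbounded counter: no fixed FSA has one
            elif ch == "b":
                seen_b = True
                count -= 1
                if count < 0:   # more 'b's than 'a's so far: reject
                    return False
            else:
                return False
        return count == 0

    assert accepts_anbn("aabb") and not accepts_anbn("aab")

Any FSA with k states must revisit some state while reading a^(k+1), so it can’t remember how many ‘a’s it has seen and is fooled on inputs like a^(k+1) b^(k+1).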
That’s not true. Turing machines can compute all computable functions, but a function is computable if it is computable by a machine.
Edit: see Church-Turing thesis
Have it your way. But then any physical process is computation of some sort, and I don’t think that’s a very useful definition. To me a computer is synonymous with Turing-complete, maybe because I’m a programmer. When I hear “computer”, I know what I can do with it, at least in principle. Now you say that if I hear “computer” I don’t know anything about it: it’s some thing that does something. Got it! That makes it a hell of a lot easier to explain to somebody.
Well, according to Wikipedia:
”
Pancomputationalism or the computational universe theory
Pancomputationalism (also known as pan-computationalism, naturalist computationalism) is a view that the universe is a computational machine, or rather a network of computational processes which, following fundamental physical laws, computes (dynamically develops) its own next state from the current one.[18]
“
That might be. However, that limits your ability to communicate with the rest of us, who have a broader definition of “computers”. “Turing machine” is a precise term for a special kind of computer, so why conflate the two terms?
… Originally a “computer” was actually a person. I don’t think humans are Turing complete 🙂
Why? What would be the limitation of humans?
kokara4a,
I think it’s very common to conflate these terms when using them normally; I know I do. I don’t ordinarily care about being so pedantic, but what an interesting question! I’ve always understood “Turing machine” as a theoretical machine that explicitly has infinite capacity. It can only ever be approximated with a computer, since infinite computers cannot physically exist. However, if we ignore the infinite state of a Turing machine, or if we assume our computers had infinite state, then they would be equivalent to Turing machines.
Maybe not everyone agrees with this, but given what I’ve said above, I would argue that humans are not Turing machines with infinite capacity. However, they can (or could theoretically) do everything a computer could, because a computer is just following simple steps. Given enough time and paper, everything a computer does can be computed by a human pretending to be a CPU, albeit very inefficiently!
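To illustrate the “simple steps” point, here’s a toy Turing machine simulator in Python (a hypothetical bit-flipping machine; every step is mechanical enough to carry out with pencil and paper, which is the whole point):

    def run(tape, rules, state="start", head=0):
        """Run a Turing machine until it halts; '_' is the blank symbol."""
        tape = dict(enumerate(tape))    # sparse tape; missing cells are blank
        while state != "halt":
            symbol = tape.get(head, "_")
            write, move, state = rules[(state, symbol)]
            tape[head] = write
            head += 1 if move == "R" else -1
        return "".join(tape[i] for i in sorted(tape))

    # Rules: (state, symbol read) -> (symbol to write, head move, next state)
    rules = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }
    print(run("10110", rules))          # prints 01001_ (input flipped)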
A related question is the inverse: Is there anything a human can mentally think up that a computer fundamentally cannot compute? A lot of people want to believe that sentient life is “special”, but I’m not sure how that could ever be proven.
Well AlphaGo has been kicking human arse over the last few weeks. When given the right adaptive computational structures, computers can compute better than humans. Especially when they’re allowed to explore bad solutions as a way to get to good solutions.
kwan_e,
Here’s the thing though: according to Wikipedia, Google’s latest configuration is 1920 CPUs and 280 GPUs.
Modern CPUs have 5B+ transistors, and so do GPUs (roughly).
https://en.wikipedia.org/wiki/Transistor_count
So Google’s Go cluster may have about 11,000B transistors: (1920 + 280) chips × ~5B transistors each ≈ 1.1 × 10^13.
Adult mortals only have 90 billion neurons or so, and many of them are used for things other than Go.
We humans are still kicking the computer’s behind on “intuition” and efficiency!
Are we really? Look how much education over the years it takes to raise a child today to do what computers can do.
AlphaGo can be said to be playing intuitively (because it doesn’t brute force). I personally think intuition is just when we don’t know how we thought, and that is the case with AlphaGo. You can’t really look at its neural nets and figure out what went into any single decision.
kwan_e,
You fall into the medieval-thinking fallacy.
Last I’ve heard, humans have been compared to machines for hundreds of years.
Computing is not equal to mental action.
Machines just compute: comparing values, storing them for future reference, and then retrieving them according to the code they were given. There is no mental action there. Humans can perform computation through mental action.
And what is mental action?
Kind of silly to use a vague term that you’ll no doubt shift around as the discussion goes on.
kwan_e,
You probably don’t have kids, right? You’d be surprised by what they learn to do on their own. Admit it: today’s computers are downright lousy at improvising in unknown situations in the real world. Even comprehending “simple” speech is a problem. With enough hardware to build huge neural networks on racks of computers, we will eventually exceed human intelligence in general domains, but that’s going to take many millions of dollars of computers using many megawatts of energy. A single human brain is extremely efficient by comparison.
Many human brains are “over-provisioned” for the menial tasks they do, which makes those tasks easy targets for computers to take over, but even in those cases I’d be willing to bet nature still has more efficient solutions, like a bee brain. So I think nature will continue to win against computers in terms of efficiency until we start building biological computers.
Unlike humans, computers do not have eyes, hands, lips, arms or legs (or a gut, even) for the most part. A lot of human development is aided by having such inputs to train the brain with. It’s all brute force.
kwan_e,
You should watch the transitions babies make; it’s really remarkable how much they absorb. We programmers have a lot to learn about how we teach things to computers; we’re doing things too manually. I suspect a massive server farm with trillions of transistors for our own personal neural nets would help too.
AlphaGo proves that is not necessary. It’s made many questionable moves. It prefers a guaranteed win over dominating the opponent, and in so doing often plays “slower” (in Go terms) than what professionals would consider a perfect game. Have you watched games 3 and 4 against Lee Sedol, and the commentators’ reactions to the respective “God moves”?
kwan_e,
I didn’t watch the games. Google’s engineers said they didn’t care about points, only the win, which explains that. In principle it could have been trained to go for points as well.
I watched them all. The DeepMind engineers talked about what they were seeing in AlphaGo before each game, and after, and the commentators themselves said that the way DeepMind approached the design of AlphaGo was more like how they themselves think or feel when playing.
From my interpretation of what they said, they probably couldn’t have trained it to go for points. I don’t know how much you understand Go, but point counting is not easy. In fact, in many parts of all five games against Lee Sedol, none of the commentators were sure who was in front on points. Many of those commentators were 9-dan professionals.
There are many situations in Go, which they call fights, that involve lots of exchanges and completely make the calculation of points impossible. You can make a move on one side of the board, and it will completely alter the points on another side. It’s these kinds of subtle influences that make Go hard, and that make “brute force” a completely inaccurate account of what AlphaGo achieved, and certainly not something a naive neural net could achieve.
Training for points is a dead end because good moves can turn into bad moves too easily, and the modern game is more about gaining influence all over the board than securing points. In fact, in one of the games, Lee Sedol thought he could expose AlphaGo’s weakness by playing aggressively for fights, a winning tactic against previous Go AIs trained for points, because they would over-focus on one side of the board. That turned out to be wholly inadequate, because AlphaGo somehow worked out how to play those fights without having specifically trained for them. Human Go players do train for fights, because making a mistake there is disastrous.
kwan_e,
I don’t mean to sound rude or anything, and I know it’s going to come across this way anyway, but what those 9-dan professional Go players think is totally irrelevant. The neural-net approach doesn’t care, and will incrementally develop stronger algorithms for whatever goals have been set. Beating humans at these games with an artificial neural net is trivial given enough trials and computational resources. Some people might not like to admit this, but it’s the truth.
That’s not the point of the discussion. You made a statement claiming that AlphaGo was nothing but a naive neural net that simply brute-forced learning, using a definition of brute force so broad that everything is basically brute force under it, even how humans learn and create.
kwan_e,
A player’s 9-dan skill at Go doesn’t necessarily translate to knowledge of engineering Go algorithms. Please cite what it is you think they’d disagree with.
Really? I’m not trying to be rude either; however, you assume that neural networks have the same (or greater) computational power as a human brain. Unless that’s the case, your statement is trivially false. There’s no proof that a neural network is as universal as a living brain, and there are also known limitations with neural networks.
And trivial? Nope.
Megol,
Maybe I didn’t explain it well, but when I said the neural net is trivial, I meant it solves the strategies trivially in comparison to the conventional non-neural-net approach. It may help to contrast the two:
A non-neural-net approach requires developers to carefully analyze the game and manually formulate strategies, by becoming proficient themselves and/or trying to get experts to explain in detail how to make moves and respond to the opponent’s moves, to observe and explain what the AI needs to do better, etc. All these algorithms need to be painstakingly programmed by the developer. And yet after all this tedious work of implementing a strategy, a human player can still come along with a better strategy and beat the Go program. In that case it’s back to the drawing board.
An NN developer, by contrast, can pull it off without any knowledge of winning Go strategies. AlphaGo’s NN was able to improve its own strategies by winning and losing games against itself and adjusting accordingly. It starts out naive, but since it keeps improving itself after every game, eventually, given enough computation, it beats the non-neural-net approach. Eventually its strategies will exceed the comprehension of human developers and even Go players. The key is that all this happens in a black box without any input from the developers whatsoever! And that’s what makes it trivial.
Hopefully the statement makes more sense and is less controversial now. It was only meant to focus on building the actual AI strategies, not so much the other necessary plumbing to set everything up initially.
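To show what I mean by the black box, here’s a toy self-play learner in Python, with Nim standing in for Go (my illustration, not DeepMind’s actual method). Two copies of the same agent play each other, and the value table is updated only from who won; no Nim strategy is ever programmed in:

    import random

    N_STONES = 10        # Nim: players alternate taking 1-3 stones;
    MOVES = (1, 2, 3)    # whoever takes the last stone wins
    EPSILON = 0.1        # exploration rate
    ALPHA = 0.5          # learning rate
    Q = {}               # Q[(stones_left, move)] -> value from the mover's view

    def choose(stones):
        legal = [m for m in MOVES if m <= stones]
        if random.random() < EPSILON:
            return random.choice(legal)     # sometimes explore a "bad" move
        return max(legal, key=lambda m: Q.get((stones, m), 0.0))

    def self_play_game():
        history, stones = [], N_STONES
        while stones > 0:
            m = choose(stones)
            history.append((stones, m))
            stones -= m
        # The last mover took the last stone and won. Walk backwards,
        # flipping the reward sign each ply since the players alternate.
        reward = 1.0
        for stones, m in reversed(history):
            old = Q.get((stones, m), 0.0)
            Q[(stones, m)] = old + ALPHA * (reward - old)
            reward = -reward

    for _ in range(20000):
        self_play_game()

    # The learned policy should mostly leave a multiple of 4 stones, which
    # is the known optimal strategy -- discovered with zero Nim knowledge.
    for s in range(1, N_STONES + 1):
        legal = [m for m in MOVES if m <= s]
        print(s, "->", max(legal, key=lambda m: Q.get((s, m), 0.0)))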
The truly interesting question is how much of what humans do every day comes from training a neural network. Perhaps even more important, what drives what “wins” in our own neural networks. The last few years have shown a neural network can be trained into doing quite impressive things.
dpJudas,
Nevertheless, “several orders of magnitude” seems like it could be feasible for a well-funded entity like the NSA. It’s just a shame we pay so much tax money for this clandestine work whose results we’ll never see publicly. At least NASA puts on a show.
I can mentally think up how I love my family. How can you replicate that mental function on a computing device?
We should not forget this fact:
Computing != mental action.
Given infinite time and storage space a human would be.
So you are using a short-hand version; that’s not uncommon. You also probably assume that the computer mentioned is digital, is a stored-program von Neumann design, and uses 8-bit bytes. There have been electric computers that weren’t/didn’t. People who use the terms mini- and microcomputer today probably mean smaller-than-normal computers, not the old definitions where a microcomputer was based on a microprocessor (or more ICs) and a minicomputer was anything smaller than a mainframe…
Normal computers aren’t Turing machines; they are finite state machines which can solve a subset of the problems a Turing machine can (both given infinite time).