IBM has announced it has cleared a major hurdle in its effort to make quantum computing useful: it now has a quantum processor, called Eagle, with 127 functional qubits. This makes it the first company to clear the 100-qubit mark, a milestone that’s interesting because the interactions of that many qubits can’t be simulated using today’s classical computing hardware and algorithms.
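To put the simulation claim in rough numbers: a straightforward statevector simulation stores one complex amplitude per basis state, so 127 qubits need 2^127 amplitudes. A back-of-the-envelope sketch (assuming 16 bytes per complex double, which is the usual `complex128` size):

```python
n = 127                          # qubits in IBM's Eagle processor
amplitudes = 2 ** n              # one complex amplitude per basis state
bytes_needed = amplitudes * 16   # ~16 bytes per double-precision complex number

print(f"{bytes_needed:.2e} bytes")  # ~2.72e+39 bytes, far beyond any classical machine
```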

But what may be more significant is that IBM now has a roadmap that would see it producing the first 1,000-qubit processor in two years. And, according to IBM Director of Research Darío Gil, that’s the point where calculations done with quantum hardware will start being useful.

I feel like quantum computing is one of those things that will eventually have a big impact on various aspects of our world, but at this point it’s far too early, and too complicated, to make any real predictions.

Never forget IBM is trying to sell a product, and like most US companies it wants you to cite it so you don’t go wandering off and discover an equal or better alternative. The US will also prioritise the needs of US companies, which means products and optimisations will favour US business needs and priorities, not necessarily Europe’s or anyone else’s. The US did this before with the IT and supercomputer industries, and just because this one has the word “quantum” in it doesn’t mean they won’t do it again.

I don’t have any citations offhand, but I have read commentary saying varying numbers of qubits are of increasing use, and 1,000 is where it begins to get useful. This includes some useful commentary on progress with cancelling out noise.

The Americans are always trying to make themselves look like top dog, or look as if they have a product when they don’t, and grab all the eyeballs so they can sound like leaders, sound exciting, and grab yet more media attention, political attention, funding, customer interest and investment interest, and poach all our best brains. Do not be fooled.

> IBM are trying to sell a product

Yeah, and? What do you want a company to do? How do people feed their families without trade?

>The US will also prioritise the needs of US companies

Same again. What do you want the US to do? Fuck over their own people?

>Americans are always trying to make themselves look like top dog

No, the USA *is* top dog. There’s no better replacement if they get toppled. For all their problems, they’ve been a shining beacon of light.

>Do not be fooled

Yeah, you aren’t fooling anyone.

The world (or at least the West) seems to have a morality issue in modern times, but that isn’t just an IBM, USA or *gasp* Microsoft issue (good on you for restraining yourself and not mentioning Microsoft in this post, btw). If you wanna talk tech, then talk tech. If you want to talk politics or be tribalistic, how about you go find a more appropriate website?

**purr**

“I have read commentary saying varying numbers of qubits are of increasing use, and 1,000 is where it begins to get useful.”

It literally is in the blurb about the article.

You forgot to take your meds again.

@javiercero1

My statement clearly said that other authoritative sources are available rather than just copy-pasting an IBM marketing release. Neither IBM nor the US are the only people operating in this area, and the others deserve some of the promotion they have rightfully earned. It is not possible to derive any other meaning from that statement.

I’m not going to respond to your insulting whataboutery and twisting. Everyone knows you make snide personal attacks for no reason, and that’s on you.

Nah, you and that alfman guy are the only two people who seem to have their victim complexes triggered whenever I engage either of you on your nonsense.

I’m surprised you didn’t find a way to make this about Microsoft Windows 11.

javiercero1,

It’s water under the bridge, javiercero1. Learn to let go and be friendly. None of us are here to be victims or bullies. It isn’t easy, but let us work towards making a positive community instead.

I guess I’ll be more enthusiastic about news like this when it results in a meaningful impact on my life. I’m tired of getting excited over new technology that either falls short of its promise or doesn’t ever really get off the ground.

I’m certain that when it comes out that the NSA was reading your TLS connections, you’ll see some ‘excitement’. So be careful what you wish for 🙂

“Quantum” is bullshit from the start. Same with multiverse theory. Either religion, or “not applicable here”, or just plain bullshit. If you can build a 1 Hz quantum processor, then you have unlimited power, as you can rely on all other states and non-states at once through time.

The quantum theory of space, for example, says that you can stack unlimited stuff in the same area, as it can and cannot share the same space.

It is all religious.

Regardless of what you think, they are doing some real-world calculations with it. It might not be the kind of ‘quantum’ that was promised or explained, but it’s starting to make progress.

It literally doesn’t mean that. In fact there’s a fundamental rule called the Pauli Exclusion Principle which specifically states why you can’t.

Quantum mechanics is weird, sure, but it also generates predictable outcomes that are important to the design of modern microelectronics, medical imaging technology, and more. Due to their importance in microprocessor design, quantum mechanical principles end up playing a significant role in our daily life.

Wow! I really don’t understand how someone can have such a strong and vicious opinion about a subject that they understand so little.

It sounds like you watched a video on YouTube and completely misunderstood what it said. None of what you said is even remotely true about how quantum mechanics works.

The irony of your comment is that it required the theory behind quantum physics to be right in order for you to be able to post it online.

Quantum computing is real. How do you think you exist? Each moment is computed from the previous one at the quantum level. It may be that our analogies, like qubits, are wrong, but the underlying phenomenon that we call quantum computing is real. Consider fire. It releases the energy stored in chemical bonds. Ditto for nuclear. Quantum computing releases the computing power underlying the generation of successive states of existence. There is a good chance we will figure out how to use it before we really understand it. We are at the stage where people have spotted fire around the place and a few are trying to make it on demand. And a few more don’t believe them. Approximately one million years later, someone else figures out chemistry and, voilà, no need for alchemy, salamanders or fire gods. It will be the same with quantum, except faster.

IMO, one immediate impact of quantum computing would be breaking the Church–Turing thesis. Now think about all the complexity/computability undergraduate courses all over the world that would have to update their syllabi.

eugel,

Is it broken though? Some problems that are of exponential complexity on a Turing machine become polynomial on a quantum computer, but I’m not sure that contradicts anything about Turing completeness? It’s been a long time since I’ve looked at that, though.

Assuming we can build quantum computers with a sufficient quantity of qubits, they should eventually be able to solve certain exponential problems that just take too many resources to solve conventionally. But I’m not sure if we’ve reached that point yet. It’s not too difficult to write quantum algorithms, a popular one being Shor’s algorithm, which would be capable of defeating modern crypto, assuming we can get hardware with sufficient qubits.

https://quantum-computing.ibm.com/composer/docs/iqx/guide/shors-algorithm
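For illustration, here’s a purely classical Python sketch of the idea behind Shor’s algorithm (the function names are my own). The hard part, finding the period r of a^x mod N, is done here by brute force; that period-finding step is exactly what the quantum circuit would accelerate.

```python
from math import gcd

def find_period(a, N):
    # Smallest r > 0 with a^r = 1 (mod N). Brute force here --
    # this is the step a quantum computer speeds up exponentially.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_via_period(N, a):
    # Given a base a coprime to N, an even period r yields factors
    # of N through gcd(a^(r/2) - 1, N) and gcd(a^(r/2) + 1, N).
    assert gcd(a, N) == 1
    r = find_period(a, N)
    if r % 2:
        return None              # odd period: retry with another a
    y = pow(a, r // 2, N)
    factors = {gcd(y - 1, N), gcd(y + 1, N)} - {1, N}
    return sorted(factors) or None

print(factor_via_period(15, 7))  # period of 7 mod 15 is 4 -> [3, 5]
```

Running this on toy numbers works instantly, but the brute-force loop is exponential in the bit length of N, which is why the quantum version matters.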

But it’s unclear to me whether adding new quantum bits can become trivial or whether it becomes an exponentially difficult undertaking. If it’s the latter, then it kind of puts a limit on what quantum computation can achieve in practice. These numbers are totally fictitious, but say it ends up costing $1B to solve a problem using a quantum computer and $25M to solve it using conventional computers; then it’s still not really worth it. On the other hand, if the numbers are reversed, then it could be. There’s tons of literature about quantum math, but I’ve seen very little that talks about a cost/benefit analysis.

“It’s been a long time since I’ve looked at that though.”

Me too, so I cannot argue much here. I tried to refresh my knowledge quickly but failed. The only statement from complexity class that I thought I remembered correctly is that a quantum computer can simulate all execution paths of a non-deterministic Turing machine at once, and that this is considered a counterexample to the Church–Turing thesis. Is this correct at all? I’m not sure myself after reading a couple of Wikipedia articles.

I need to review what people who know what they’re talking about have to say on the topic, as so much goes in one ear and out the other. I think the qubit-count thing is less to do with processing and more to do with eradicating noise. There’s been some movement on mitigating decoherence, and advances in superconducting. So there are benefits outside of straight computation. In some respects this proves the worth of real research even where there’s no immediate cost-benefit; the benefits often come later or are indirect. Research in one area can stimulate ideas elsewhere, and something which passed attention by can find an application area by surprise. I suspect that’s why you won’t see much literature on cost-benefit analysis. The thing is, we broadly know that a level of R&D spend which is spent reasonably responsibly leads to benefits. So much of the outcome also depends on who is running projects and which way the political wind is blowing. It can be quite messy.

As for the math thing, there will be certain classes of problem which benefit from quantum computers. Understanding of this will drive maths too, as well as all the other science which will benefit from the critical loop being shortened. For some types of problems, spending an arbitrarily large amount of money to make it happen would be justifiable.

Another thing is that money and the real economy aren’t the same thing. Just ask anyone who acquires a Microsoft professional certificate or award… All those non-jobs and boosters making hay fleecing customers for stuff which doesn’t work, and getting paid again to fix the problems it caused? It’s not just Microsoft. This kind of thing is everywhere in the private, public and third sectors. It’s one reason why I say “money is not a matter of how much but who gets it”.

HollyB,

Qubits directly affect the problems you can solve. Rather than testing each combination of bits one at a time, a quantum computer is supposed to test all combinations at the same time. The goal is to have enough qubits to solve a given problem. But they need to reduce the error rate at the same time, otherwise more qubits are going to exacerbate the errors and the results will be noise. The article speaks about this: they’re using huge cryogenic coolers to reduce errors and maintain coherency, but even then it’s only momentary. They’re talking about extremely large qubit counts, but who knows how easy that will be.
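As a rough illustration of the “all combinations at once” idea, here’s a toy statevector simulation in numpy (not how real hardware works, and the variable names are mine): a classical simulator must track one amplitude per basis state, which is why the cost explodes with qubit count.

```python
import numpy as np

# Single-qubit Hadamard gate: maps |0> into an equal superposition.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

n = 3
state = np.zeros(2 ** n)   # a classical simulator tracks 2^n amplitudes
state[0] = 1.0             # start in the classical state |000>

# Applying H to every qubit is the n-fold tensor (Kronecker) product.
U = H
for _ in range(n - 1):
    U = np.kron(U, H)
state = U @ state

# Every one of the 2^n basis states now carries probability 1/2^n:
# one operation has acted on all combinations simultaneously.
print(np.round(state ** 2, 3))
```

At n = 3 that’s 8 amplitudes; at 127 qubits it would be 2^127, which is the wall classical simulation hits.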

I’m OK with research for research’s sake, but it’s hard to see it displacing conventional computers unless justified by a cost-benefit analysis. Commercial success is generally contingent on it.

@Alfman

The bigger-qubit thing is wrapped up with noise eradication, so it’s a bit of a funny one. You need less noise for the qubits to work, and more qubits help cancel out the effects of noise. It’s not just cooling but how they go about things too which affects noise.

As for the commercial thing, this is why it’s mostly a function of the state to fund R&D. That’s not saying commercial companies can’t or won’t do R&D, but when a state funds R&D it amortizes the risk and allows a longer game to be played. A state may also wish to encourage new sectors, or be able to leverage R&D with public policy in other areas, which a commercial company that typically operates in only one sector can’t do and doesn’t necessarily have a direct commercial interest in.

As for end users, I agree it’s extremely unlikely they will invest in quantum computers just because they can. That would be reckless. It may also take time before people begin to discover applications which make them commercially viable. The thing is, in the real world a lot of spending decisions aren’t rational, so there may be spending on quantum computers which doesn’t make strict commercial sense. If anyone thought too hard they’d never buy Oracle, and Oracle spends a lot of money on convincing waffle and feature lock-in, making sure people don’t think too hard about it. But I digress. I suspect some will buy into it just to see what happens, or to have a play. But for most people, unless the use cases are there, I agree it would be daft to buy into it.

HollyB,

The article mentions qubit error correction, although not in any detail. Maybe there’s a way to sacrifice qubits for error tolerance, but in general adding qubits actually makes errors more likely, and that’s what makes scaling tricky. I found this paper describing some novel ideas for combating qubit errors, but I don’t know how close any of them are to being realized.

http://www.cs.virginia.edu/~robins/Computing_with_Quantum_Knots.pdf

@Alfman

When they condense it down to ordinary language the technical and mathematical thingies are interesting.

I suspect the reason why we don’t read too much is that almost everything about this is bleeding edge, and people don’t want to wreck their own patent claims. Plus you can’t work, write academic papers and yack all day on the internet all at the same time like we do.

The environment and ethics are concerns too. It’s a bit dumbed-down and gee-whizz, but it gives an indication as to where thinking is at on the application side, and on the benefits and pitfalls.

https://www.oxinst.com/news/quantum-technology-our-sustainable-future

Leading quantum computing experts explore tech’s sustainability role in new documentary.

https://www.youtube.com/watch?v=iB2_ibvEcsE

This new documentary feature from The Quantum Daily explores insights from leading quantum computing experts and tech giants such as Google, IBM, Oxford Instruments and Intel as well as start-ups such as PsiQuantum to discuss key sustainability topics, including how quantum technologies could reduce the energy required for complex computations even as demand continues to rise. The documentary also looks at the challenge of minimizing quantum computing’s own potential environmental impact whilst ensuring the development of applications to address global sustainability issues is prioritized.

https://www.youtube.com/watch?v=5qc7gpabEhQ

The Quantum Daily is proud to release Quantum Ethics: A Call to Action. This mini-documentary is meant to raise awareness and generate discussion about the ethical decisions that face society in the quantum era. This community-driven, solutions-based video relies on quantum experts and thought leaders who offer an overview of quantum technology, revealing its power to improve our lives, its potential pitfalls and how we can maximize its benefits while mitigating possible problems. Our team is looking forward to continuing this conversation and working with the community to develop concrete actions that will help guide and focus the power of this incredible technology to benefit the most people in the best way possible.

The original Turing machine had infinite paper and time, yet this made no difference to solving the halting problem. A quantum computer gets you closer to the idealized Turing machine, but not past it. The proof of Turing’s thesis is not dependent on the number of iterations. It beats them all at any stage in the computation, regardless of how long you wait. This is due to the infinity of the natural numbers. I can’t speak to Church. All Greek (lambda) to me.

The term “non-deterministic Turing machine” is a bit of a misnomer. It just means that multiple paths through the machine are possible. Maybe the paths get selected at random, so now the whole universe is part of your machine. This does not solve the halting problem either.

My impression is that a post-Turing machine can do things like go forward in time to see the answer and come back to report. Or it reduces entropy in a closed system, like an entropy leak or so-called free will. It is really (by definition) outside our normal reality, even more so than quantum effects. We might be one. It’s a hot, unsolved and maybe unsolvable topic.

Iapx432,

A bit tangential, but the halting problem is a fun topic for discussion. There are a few reasons I think its implications are overstated, though. For starters, it should not be used as a blanket assertion that the halting problem can never be solved for any algorithm. It has been readily solved for a wide range of practical algorithms, and most of our everyday algorithms in practice can be proven to halt or not. In principle, the halting problem is logically tractable for any finite machine: either there’s an infinite cycle where states repeat forever, or there’s a halt. There’s zero possibility of an in-between on a finite machine.

This would be extremely inefficient, but consider this naive approach: a finite machine has X bits of state, where X can be arbitrarily large. If you run this machine for 2^X cycles and it hasn’t halted, then even without looking at the code we know that it never will, because we’ve exhausted all the possible machine states: the machine must already have been forced into a cycle, or have halted.
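That naive check can be sketched directly. Here’s a toy Python model (the names and the step-function interface are my own invention) where a “machine” is just a deterministic step function over a finite state space, and we detect cycles by remembering visited states:

```python
def halts_finite(step, start):
    # Decide halting for a deterministic machine over a finite state
    # space: step(state) returns the next state, or None to halt.
    # A run either halts or revisits a state and cycles forever --
    # on a finite machine there is no third option.
    seen = set()
    state = start
    while state is not None:
        if state in seen:
            return False         # repeated state: infinite loop
        seen.add(state)
        state = step(state)
    return True                  # reached a halting step

# A 4-bit counter that stops at 15:
print(halts_finite(lambda s: None if s == 15 else s + 1, 0))  # True
# A 4-bit counter that wraps around forever:
print(halts_finite(lambda s: (s + 1) % 16, 0))                # False
```

Remembering every state is the memory-hungry version of the 2^X-cycle bound above; both rely on the same pigeonhole argument.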

Of course, the halting problem isn’t a proof about finite machines or typical programs. It’s intended to analyze the theoretical limits of computation itself: something that cannot ever be computed for arbitrary input. It’s set up as a proof by contradiction: assume we have an oracle that tells whether a program will halt or not.

Now write a program that uses the oracle to contradict the oracle:
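A minimal Python sketch of that contrarian program (the names are hypothetical, and the stand-in `halts` below is deliberately naive, since by the proof no correct one can exist):

```python
def halts(prog, inp):
    # Stand-in "oracle": claims to predict whether prog(inp) halts.
    # The proof shows *any* implementation here can be defeated;
    # this naive always-True guess is just one concrete example.
    return True

def contrarian(prog):
    # Do the opposite of whatever the oracle predicts for prog(prog).
    if halts(prog, prog):
        while True:          # oracle said "halts" -> loop forever
            pass
    # oracle said "loops" -> halt immediately

# The oracle predicts contrarian(contrarian) halts, yet actually
# running it would loop forever -- the prediction is wrong, and the
# same mismatch arises no matter how halts() is written.
print(halts(contrarian, contrarian))  # True, contradicting the real behavior
```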

That’s the halting problem in a nutshell, disproving the possibility of such an oracle. However, it’s worth pointing out that this has not disproven the possibility of an oracle with 3 possible outputs: 1) program halts, 2) program loops, 3) program contradicts the oracle. Thus the contradiction no longer exists, and the oracle can logically distinguish between the 3. A 3-choice halting oracle can’t be disproven using the same trick.

Alfman,

That is pretty clever, and I bet this debate could loop without halting…

You can’t use “we”, because it has to be a program. That is the whole point. According to Penrose and others, there are things we know that a computer can’t, other than by someone typing them into its database. Also, a Turing machine is not a state machine. Calculating that a state machine halts is not enough.

I agree that Turing went to way too much trouble to point out that any language (or mathematical system) can express a contradiction and so is inconsistent.

You can halt right there. Oracles are by definition not Turing machines. They are the entities that can do what a Turing machine can’t. That is why Turing had to invent them.

I think to get to 2) or 3) you have to eliminate 1). Also, you seem to have two entities: the program and the oracle. Ignoring that oracles are magic, you need to feed the oracle itself. And if it can tell that it contradicts itself, then it doesn’t.

It is fun, for sure. But it’s also serious. AI is hitting another wall. There might be something else going on. Penrose thinks it’s quantum juju, which is useless, as per this whole thread. Quantum computers have the same issue. Hofstadter thinks the issue can be overcome if the program or entity is inherently self-referential. And Feynman thought positrons are electrons going backwards in time. So maybe Asimov nailed it after all.

lapx432,

The halting problem wasn’t merely meant to identify the limits of a program on a machine; I think it really was intended to identify the logical limits of computation itself, including “us”.

We need to be careful how we refer to systems relative to one another. A computer CAN know everything about a given system, but that system CANNOT include the computer itself. This goes hand in hand with Gödel’s incompleteness theorem; the logic is independent of the implementation. We, as humans, are not an exception. We cannot know and prove everything about ourselves, but at least in principle another being or machine greater than ourselves could.

That’s how the proof works, though. The “oracle” is just a name used to refer to the black box without having to define the mechanics of how it works. Disproving the existence of an oracle is to say the problem is unsolvable regardless of algorithm. Anyway, I don’t think we need to get stuck on semantics, so I can drop the “oracles” if that helps.

Each case is well defined. The program always halts (for a given input). It always loops. Or its behavior changes because it’s trying to contradict itself. We can analyze any given program to make an objective assessment as to which category it belongs to. The proof that was used to contradict the 2-output halting problem no longer causes any contradiction now. Therefore we’d need some other proof to disprove the feasibility of this 3-bucket halting problem.

Artificial intelligence is a big topic on its own. Gödel has some implications here. I think randomness plus selective evolution has played a crucial role in our own development, and we would have to duplicate that in order to create artificial intelligence capable of more than its original programming. Do you want to hold this topic for future discussion, or discuss it now? Haha.

Let’s hold. Glad nobody said ML is AI. Long way to go.

The theorem still stands with Quantum computers.

Perhaps you’re thinking of big-O complexity and the NP-complete and NP-hard domains?