“A group of researchers has fabricated a single-atom transistor by introducing one phosphorus atom into a silicon lattice. Through the use of a scanning tunnelling microscope and hydrogen-resist lithography, Martin Fuechsle et al. placed the phosphorus atom precisely between very thin silicon leads, allowing them to measure its electrical behavior. The results show clearly that we can read both the quantum transitions within the phosphorus atom and its transistor behavior. No smaller solid-state devices are possible, so systems of this type reveal the limit of Moore’s law – the prediction about the miniaturization of technology – while pointing toward solid-state quantum computing devices.”
Could you ever make a transistor by manipulating the magnetic fields inside the atoms?
You can’t predict exactly where it is (Heisenberg), but you can estimate where it is.
Now the atom is not part of the switch, but the transistor itself.
One relative pole is 0; shift to the other and it’s 1.
Now we need to figure out how to break apart an atom 🙂
Let’s do the patent first. That is the easiest part.
I saw some drawings of patents lately. It’s easy.
We draw some stickmen, a circle and some arrows.
Ought to fly through the patent office.
Yeah, the end of all things is near… especially Moore’s law (it’s not really a law, is it?). I think every time we reach a new lithography process node, some random tech ignorant thinks it’s impossible to go any “further” and tries to predict the end of Moore’s law. Surely it’s the end now!
Brunis’ Law – every 18 months, Windows code size will double, effectively negating all hardware progress!
Well… we can improve this with sub-atomic particles!
Speaking seriously, it will be superb when we can mass-manufacture these things. And live long enough to see the security hell this will create too, since almost all of today’s most popular cryptography algorithms can be broken (at least in theory) by quantum computers.
I suspect most governments would declare martial law as soon as they got word of quantum computation in the wild, and probably attempt to imprison or execute anyone who they thought was involved.
(Governments do not like it when the citizens spy on them. What we call whistleblowing, they call treason.)
Just set your passwords to a 1000-character letter/number/symbol combination. You’ll be fine then.
Err… actually, no. NP-complete problems are just as NP-complete for quantum computers as they are for digital ones. So aside from maybe being faster, quantum computers don’t have any advantage.
Except that all of our currently used public-key cryptography algorithms can be broken by a quantum computer. See https://en.wikipedia.org/wiki/Post-quantum_cryptography . It turns out that NP-complete problems seem to be hard only in the worst case and easy in the average case, so no one knows how to use them for cryptography. Grover’s algorithm does allow for faster solutions to NP-complete problems (a quadratic speedup, turning 2^n brute-force work into roughly 2^(n/2)), but they remain exponential.
With a sufficiently powerful quantum computer, Shor’s algorithm defeats RSA in polynomial time, and a generalization of it can solve the discrete logarithm problem (breaking DSA, Diffie–Hellman, and ElGamal) in polynomial time.
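For the curious, here is a minimal classical sketch (in Python) of the number-theoretic skeleton that Shor’s algorithm rests on. The order-finding step below is brute-forced, which is exponential classically; that is precisely the step a quantum computer performs in polynomial time via the quantum Fourier transform. This is illustrative only, not an implementation of the quantum algorithm itself:

    from math import gcd
    from random import randrange

    def find_order(a, n):
        # Smallest r > 0 with a^r = 1 (mod n). Brute force here;
        # this is the only step the quantum computer speeds up.
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def shor_factor(n):
        # Factor an odd composite n via the order-finding reduction.
        while True:
            a = randrange(2, n)
            g = gcd(a, n)
            if g > 1:
                return g             # lucky: a already shares a factor with n
            r = find_order(a, n)
            if r % 2:
                continue             # need an even order; pick another a
            y = pow(a, r // 2, n)
            if y == n - 1:
                continue             # trivial square root of 1; retry
            return gcd(y - 1, n)     # y^2 = 1 (mod n), y != +-1, so this is a factor

    print(shor_factor(15))           # prints 3 or 5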
There is a simple solution once you have quantum computers…quantum encryption.
The situation is even easier: quantum tagging / degradation
Once the message is read, it is destroyed. And with a low probability of natural failure, a single intercepted secure-mode packet would signal a security breach, and a new secure protocol would be enacted.
Snooping would disrupt communications of secure data, but the information itself would remain largely secure (save for the odd packet here or there…).
–The loon
looncraz,
It’s true: while quantum computing closes the door on conventional encryption, it opens another door for quantum encryption. But unfortunately quantum encryption is not a direct substitute for PKI, leaving us to revert to point-to-point security. Without a CA, quantum-entangled material would need to be exchanged and managed between parties beforehand (think every device × website pair – wholly unrealistic). Or the traffic would have to be routed through a trusted proxy which has a secure quantum-encrypted channel to both parties and is responsible for securing the traffic between them (more likely, but less ideal).
Also, quantum encryption suffers from the same bootstrapping issues as conventional encryption. You may have gotten a secure “quantum encryption card” in the mail, but without an out-of-band mechanism to validate its authenticity (traditionally PKI), it’s vulnerable to man-in-the-middle attacks.
User <-> Service // normal secure quantum tunnel
User <-> Attacker <-> Service // the attacker impersonates the user to the service and the service to the user by swapping the quantum material, then mounts a conventional man-in-the-middle attack.
To address this using quantum encryption, one would probably need a third party which is already trusted to validate that the bits the user is transferring match those seen by the service. This test would need to be done at the beginning of every session. However, all this introduces more complexity and new failure modes, because there’s no quantum equivalent to PKI’s offline authentication.
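The same attack is easy to demonstrate classically. Here is a toy Python sketch of a man-in-the-middle against unauthenticated Diffie–Hellman; the parameters are illustrative toy values, but the lesson carries over to any unauthenticated key exchange, quantum or otherwise:

    import secrets

    p, g = 4294967291, 5            # toy prime and generator; real DH uses 2048-bit groups

    def dh_keypair():
        priv = secrets.randbelow(p - 3) + 2
        return priv, pow(g, priv, p)

    # User and Service each generate a keypair...
    u_priv, u_pub = dh_keypair()
    s_priv, s_pub = dh_keypair()

    # ...but the attacker intercepts both public values and substitutes her own.
    m_priv1, m_pub1 = dh_keypair()  # shown to the Service as "the user's" key
    m_priv2, m_pub2 = dh_keypair()  # shown to the User as "the service's" key

    user_key    = pow(m_pub2, u_priv, p)   # the user unknowingly keys with the attacker
    service_key = pow(m_pub1, s_priv, p)   # so does the service

    # The attacker can compute both session keys and re-encrypt traffic in flight.
    assert user_key    == pow(u_pub, m_priv2, p)
    assert service_key == pow(s_pub, m_priv1, p)

Without some out-of-band trust anchor, neither end can tell that the swap happened, which is exactly the bootstrapping problem described above.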
I’m just learning about quantum encryption so let me know if there are any errors in my understanding.
Don’t confuse this with a quantum computer; those have yet to be realized (if ever).
All this proves is that we could possibly make traditional computing devices at this scale.
FunkyELF,
“Don’t confuse this with a quantum computer; those have yet to be realized (if ever).”
You are right, it’s possible that they won’t pan out. Even if we’re only thinking of conventional transistors, though, this development of single-atom transistors might make future computers fast enough to render today’s cryptographic systems vulnerable to brute-force attacks. Does anyone know just how fast this single-atom transistor is compared to those in 32 nm CPUs? That would give us a much better idea of just how future-proof current cryptographic schemes are.
Algorithms like RSA and elliptic curves can naturally be extended to any bit length desired (although most implementations have an upper limit of 4096 bits).
Unfortunately, most symmetric ciphers are only defined for limited block sizes as a matter of standardization and offer no standardized way to extend them. For example, AES is hard-coded to 128-bit blocks, with key sizes limited to 256 bits. Most of the time, making the key size larger is trivial even if non-standard, and one could easily project the new parameters. For example, AES-128 uses 10 rounds, AES-192 uses 12, and AES-256 uses 14 (the FIPS-197 rule is Nr = Nk + 6, where Nk is the key length in 32-bit words), so we might project a hypothetical AES-384 to use 18 rounds and AES-512 to use 22, but the fact that it’s non-standard is a problem. Unfortunately, increasing the AES block size beyond 128 bits would require a whole new algorithm, since the block size is integral to the AES function.
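To spell out that round-count rule, here is a small Python sketch; keep in mind the AES-384/AES-512 figures are purely hypothetical extrapolations, since no such standard exists:

    def aes_rounds(key_bits):
        nk = key_bits // 32     # key length in 32-bit words
        return nk + 6           # FIPS-197: Nr = Nk + 6

    for bits in (128, 192, 256):
        print(f"AES-{bits}: {aes_rounds(bits)} rounds")                     # 10, 12, 14
    for bits in (384, 512):
        print(f"AES-{bits} (hypothetical): {aes_rounds(bits)} rounds")      # 18, 22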
Mind you, this is all just fun cryptographic theory; I don’t see a very compelling need for larger AES key or block lengths today.
Blowfish, by comparison, supports key sizes up to 448 bits, but has a smaller block size of 64 bits. In my opinion, this block size is too small for comfort. In theory, even without cracking the 448-bit key (which is unfathomable using conventional means), one might begin to map out 64-bit blocks directly. By the birthday bound, given 1 GiB of data there’s roughly a 0.05% chance a block will be repeated and become cryptographically weak. Each additional GiB increases the odds of a collision quadratically: for 2, 3, 4 and 5 GiB it goes to about 0.20%, 0.44%, 0.78% and 1.22% respectively.
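Those figures are just the standard birthday bound; a short Python sanity check, assuming 64-bit blocks (as in CBC-mode Blowfish):

    from math import exp

    BLOCK_BYTES = 8                       # 64-bit blocks

    def collision_probability(data_bytes):
        # P(at least one repeated block) ~= 1 - exp(-n^2 / 2^65)
        n = data_bytes // BLOCK_BYTES     # number of ciphertext blocks
        return 1 - exp(-n * n / 2**65)

    for gib in range(1, 6):
        p = collision_probability(gib * 2**30)
        print(f"{gib} GiB: {p:.2%}")      # ~0.05%, 0.20%, 0.44%, 0.78%, 1.22%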
The information leaked may be rare and of little value, but it could facilitate a meet-in-the-middle attack against the key function, which if successful would decrypt the entire stream. For this reason, I think AES is better despite its smaller key size.
Quark computing – you heard it here first!
I can see the book possibilities now, “Bosons for Bozos.”
or quarks for quacks
If it takes a particle accelerator to build that computer, I guess power efficiency and portability won’t be there, though.
Maybe such a computer would be distantly related to its ancient ancestors, the Jupiter brains or matrioshka brains, but going all the way to being composed “from” (~“in”) a quark star… I imagine power efficiency and portability would also have a rather different scope in such a machine.
It will be disappointing to learn that this technology only shows up in the iPhone 85 when everybody hoped to see it in the 84S.
Moore’s law is not about technology, it’s about the economics of IC process development. That’s why the growth is exponential: better chips produce more demand, more demand brings more money, and more money makes better chips – a positive feedback loop. The rate of growth was limited only by the engineering effort needed to solve a large number of non-critical issues.
As with any exponential growth, it finally ends (it must end, because neither physics nor total resources depend on money), and it ends rapidly. We don’t even have to go down to the atomic scale – in many applications the exponential growth has already stopped.
Moore’s law, as it was originally formulated (density of transistors doubling every X months), is still valid in applications like memories or FPGAs (which are constrained by density and are easy to scale). But in CPUs and ASICs there is hardly any performance scaling anymore; people are now focusing on incrementally optimizing performance/power instead. That’s because we are no longer able to scale down the supply voltage as we did a decade ago (the transconductance of MOS transistors is fixed and mismatches are growing). For now, we can work around this by putting, e.g., more CPU cores on the chip (so that we can utilize the larger density of transistors), but that’s no longer an _exponential_ growth: performance doesn’t scale proportionally to the number of cores (IO, software parallelism), and complexity and cost grow faster than performance does.
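That sub-linear core scaling is just Amdahl’s law. A minimal Python sketch (the 5% serial fraction is an assumed, illustrative figure):

    def amdahl_speedup(serial_fraction, cores):
        # Speedup = 1 / (s + (1 - s) / c); saturates at 1/s as c grows.
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

    for cores in (2, 4, 8, 64, 1024):
        print(f"{cores:5d} cores: {amdahl_speedup(0.05, cores):6.2f}x")
    # With just 5% serial work, the speedup caps out near 20x no matter
    # how many cores the transistor budget buys.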
A one-atom transistor surrounded by 3 m^2 of machinery isn’t a very high transistor density. I think there’s still scope for improvement here.
This is not necessarily a problem for some applications though, as long as they manage to pack a lot of stuff in that big box.
Case in point: the other day, I heard about a breakthrough in quantum computers based on trapped ions. By using the near field of regular microwave antennas instead of stabilized lasers for cooling, they managed to pack ten times as many ions on a chip as they did before.
Now if they manage to have tons of qubits per chip and tons of chips per cryogenic vacuum chamber, they might get something that is worthwhile for HPC…
This experiment was done at 0.020 kelvin; otherwise it wouldn’t have worked.
I used to draw transistors for a living back when the smallest feature was 10,000 atoms wide. IBM and others have been pushing atoms around for maybe a decade or two writing their logos.
To be really interesting, it would have to be a full-blown logic circuit: at least an inverter chain, better still a small adder, or an SRAM memory cell with the ability to read and write a few bits.
The interconnects will dominate, though; the only thing that matters is how thin a wire can be drawn that will reliably work for years. In the picture, it seems a wire 5 Si atoms wide might work. The channel length is effectively 40 atoms.
I’d suspect logic circuits that look familiar might still work with 10 nm (100-atom) features. Current 30 nm is only about 300 helium atoms across.
There will be some invention that uses strings or smaller particles, or finds some way to use 3D computing or some other improvement.
Strings are not particles, sort of by definition; and there wouldn’t be anything “smaller” beyond elementary particles. 3D chips aren’t about miniaturisation (plus I believe they have the usual, if not more severe, power dissipation issues)
But yeah, we can(tm) – look how we’re finally, after over 2k years, on the verge of getting around Archimedes’ Law!
Hm, or maybe not.
zima,
“Strings are not particles, sort of by definition; and there wouldn’t be anything ‘smaller’ beyond elementary particles.”
The theory, which is a clever interpretation of the statistical data we have available, could nevertheless be wrong. But yes, it seems the OP jumped to the conclusion that strings could be “reprogrammed”; that might invalidate the premise that they are already programmed as they are to explain the universal laws of physics. Changing them would in effect create a different universe.
“3D chips aren’t about miniaturisation (plus I believe they have the usual, if not more severe, power dissipation issues)”
3D chips will undoubtedly offer tremendous gains; yes, the heat dissipation is a bummer. But what about superconductors?
Or here’s another idea for thermal computers…
(no idea about the plausibility of such a thing though)
http://spectrum.ieee.org/biomedical/devices/thermal-transistor-the-…
The Standard Model will NOT be “wrong” – it works too well for that, and is too useful (see also http://chem.tufts.edu/AnswersInScience/RelativityofWrong.htm ).
OTOH its faults go beyond “could nevertheless be wrong” – we KNOW it’s basically interim (I won’t use the word “wrong”): the mass of neutrinos, not explaining what forms most of our universe (dark matter and dark energy), gravity (at odds with general relativity overall), the apparent absence of antimatter, and so on.
The post above was just about mixing concepts: particles are distinct from the idea of strings, which would be a sort of underlying expression of them. And about elementary particles (as people often directly name them in such wishes) being the limit, since that’s how they are defined (the goalposts might move, of course; that has happened a few times; but not the definition).
It’s mildly frustrating when people find some catchwords and throw them around, or naively extrapolate rates of progress (the scientific method and such did give us the capability to unravel and exploit the world more swiftly than throughout most of our existence – hence it also made us realize hard limits and tech plateaus; short spurts of progress are actually rather typical). Worse if it leads to cargo cults of sorts.
“3D chips will undoubtedly offer tremendous gains” …maybe, maybe not – we will see.
Superconductors – what about them? We don’t know if high-temperature ones of the types adequate here are feasible (and maybe they’re just not practical: maybe the properties of some other necessary chip components get in the way; maybe, say, power dissipation of interconnects isn’t that much of a problem; anyway, do we really want terminators walking around?)