Linked by Thom Holwerda on Mon 20th Feb 2012 22:53 UTC
Hardware, Embedded Systems "A group of researchers has fabricated a single-atom transistor by introducing one phosphorous atom into a silicon lattice. Through the use of a scanning tunnelling microscope and hydrogen-resist lithography, Martin Fuechsle et al. placed the phosphorous atom precisely between very thin silicon leads, allowing them to measure its electrical behavior. The results show clearly that we can read both the quantum transitions within the phosphorous atom and its transistor behavior. No smaller solid-state devices are possible, so systems of this type reveal the limit of Moore's law - the prediction about the miniaturization of technology - while pointing toward solid-state quantum computing devices."
Thread beginning with comment 508025
RE[2]: Sub-atomic?
by Alfman on Tue 21st Feb 2012 20:19 UTC in reply to "RE: Sub-atomic?"
Alfman
Member since:
2011-01-28

FunkyELF,

"Don't confuse this with a quantum computer those have yet to be realized (if ever)."

You are right, it's possible that they won't pan out. Even if we think only in terms of conventional transistors, though, single-atom devices might make future computers fast enough to render today's cryptographic systems vulnerable to brute-force attacks. Does anyone know how fast this single-atom transistor switches compared to the transistors in 32nm CPUs? That would give us a much better idea of how future-proof current cryptographic schemes are.
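For a sense of scale, here's a back-of-envelope sketch (my own assumed key-test rate, not a figure from the article) of how long exhaustive key search takes on average:

```python
# Rough brute-force estimate: expected time is half the keyspace divided
# by the search rate. The 1e18 keys/second rate below is an assumption
# for illustration, far beyond any real hardware today.
SECONDS_PER_YEAR = 31_557_600  # Julian year

def years_to_brute_force(key_bits: int, keys_per_second: float) -> float:
    """Expected years to find a key by exhaustive search (half the keyspace)."""
    return (2 ** key_bits / 2) / keys_per_second / SECONDS_PER_YEAR

print(f"{years_to_brute_force(128, 1e18):.2e}")  # ~5.4e12 years
print(f"{years_to_brute_force(256, 1e18):.2e}")
```

Even a millionfold speedup from exotic transistors only shaves six orders of magnitude off numbers like these, which is why key length matters more than raw clock speed here.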

Algorithms like RSA and elliptic-curve cryptography extend naturally to any desired bit length (although most implementations cap RSA keys at 4096 bits).
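The reason RSA scales so easily is that the math is identical at any modulus size. A toy textbook-RSA sketch (tiny hardcoded primes, no padding — purely illustrative; real keys use random primes of 1024+ bits):

```python
# Textbook RSA with toy parameters. Only the size of p and q changes
# for larger keys; the algorithm itself is bit-length agnostic.
p, q = 61, 53             # in practice: large random primes
n = p * q                 # modulus; its bit length is the "key size"
phi = (p - 1) * (q - 1)
e = 17                    # public exponent, coprime to phi
d = pow(e, -1, phi)       # private exponent via modular inverse (Python 3.8+)

m = 42                    # message, must be < n
c = pow(m, e, n)          # encrypt
assert pow(c, d, n) == m  # decrypt recovers the message
```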

Unfortunately, most symmetric ciphers are only defined for limited block sizes as a matter of standardization and offer no standardized way to extend them. AES, for example, is hard-coded to 128-bit blocks, with key sizes capped at 256 bits. Enlarging the key is usually straightforward even if non-standard, and one can project the new parameters: FIPS 197 specifies 10 rounds for AES-128, 12 for AES-192, and 14 for AES-256 (two extra rounds per 64 key bits), so hypothetical AES-384 and AES-512 variants would come out to 18 and 22 rounds. The fact that such variants are non-standard is still a problem, though. Increasing the AES block size beyond 128 bits, on the other hand, would require a whole new algorithm, since the block size is integral to the AES round function.
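That round-count projection follows the formula in FIPS 197, Nr = Nk + 6, where Nk is the key length in 32-bit words. Extrapolating it to the hypothetical (non-standard!) larger key sizes:

```python
# FIPS 197 round count: Nr = Nk + 6, with Nk the key length in 32-bit
# words. AES-384 and AES-512 are NOT standardized; their round counts
# below are just this formula extended past its official range.
def aes_rounds(key_bits: int) -> int:
    return key_bits // 32 + 6

for bits in (128, 192, 256, 384, 512):
    print(bits, aes_rounds(bits))
# 128 -> 10, 192 -> 12, 256 -> 14; projected: 384 -> 18, 512 -> 22
```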

Mind you, this is all just cryptographic theory for fun; I don't see a very compelling need for larger AES key or block lengths today.


Blowfish, by comparison, supports key sizes up to 448 bits but has a smaller block size of 64 bits. In my opinion, that block size is too small for comfort. In theory, even without cracking the 448-bit key (which is unfathomable by conventional means), one might begin to map out 64-bit blocks directly: by the birthday bound, after 1GB of data (2^27 blocks) there is roughly a 0.05% chance that two ciphertext blocks collide, and a collision leaks information (in CBC mode, the XOR of the two corresponding plaintext blocks). Each additional GB increases the odds of a collision: at 2GB, 3GB, 4GB, and 5GB they rise to roughly 0.2%, 0.44%, 0.78%, and 1.2% respectively.
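These odds come from the standard birthday-bound approximation; a quick sketch of my arithmetic (assuming 1GB = 2^30 bytes):

```python
from math import expm1

# Birthday bound: p ~ 1 - exp(-m*(m-1) / 2^(b+1)) for the chance that
# at least two of m ciphertext blocks collide with a b-bit block size.
def collision_probability(data_bytes: int, block_bits: int = 64) -> float:
    m = data_bytes // (block_bits // 8)        # number of blocks
    return -expm1(-m * (m - 1) / 2 ** (block_bits + 1))

for gb in range(1, 6):
    p = collision_probability(gb * 2 ** 30)
    print(f"{gb} GB: {p:.2%}")
# roughly 0.05% at 1 GB, rising to about 1.2% at 5 GB
```

For a 128-bit block cipher like AES, the same formula gives probabilities that are negligible (around 2^-75 at 1GB), which is the practical argument for the larger block.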

The information leaked may be rare and of little value, but it could facilitate a meet-in-the-middle attack against the key schedule, which if successful would decrypt the entire stream. For this reason, I think AES is better despite its smaller maximum key size.

Reply Parent Score: 2