Linked by Thom Holwerda on Mon 6th Aug 2018 21:01 UTC
Hardware, Embedded Systems

A few weeks ago, an interesting question cropped up: How fast is a PS/2 keyboard? That is to say, how quickly can it send scan codes (bytes) to the keyboard controller?

One might also ask, does it really matter? Sure enough, it does. As it turns out, the Borland Turbo Pascal 6.0 run-time, and probably a few related versions, handle keyboard input in a rather unorthodox way. The run-time installs its own INT 9/IRQ 1 handler (keyboard interrupt) which reads port 60h (keyboard data) and then chains to the original INT 9 handler… which reads port 60h again, expecting to read the same value.

That is a completely crazy approach, unless there is a solid guarantee that the keyboard can’t send a new byte of data before port 60h is read the second time. The two reads are done more or less back to back, with interrupts disabled, so very little time can elapse between the two. But there is still some window in which the keyboard might send further data. So, how quickly can a keyboard do that?
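The double-read pattern is easier to see in a toy model. The sketch below is a hypothetical Python simulation (`FakeKeyboard` and the handler are invented names, not Borland's actual code): port 60h is modeled as a latch holding the most recent scan code, which the keyboard can overwrite at any moment.

```python
# Toy model (not real hardware) of the double-read pattern described above.
# Port 60h is modeled as a latch holding the most recent scan code; the
# keyboard may overwrite it at any time.

class FakeKeyboard:
    def __init__(self):
        self.port_60h = None

    def send(self, scan_code):
        # Hardware pushes a new byte into the data port.
        self.port_60h = scan_code

    def read_port_60h(self):
        return self.port_60h

def turbo_pascal_style_handler(kbd, new_byte_arrives_between_reads=False):
    first = kbd.read_port_60h()    # Turbo Pascal's own INT 9 handler reads
    if new_byte_arrives_between_reads:
        kbd.send(0x2A)             # a new byte sneaks in between the reads
    second = kbd.read_port_60h()   # chained original INT 9 handler reads again
    return first, second

kbd = FakeKeyboard()
kbd.send(0x1E)                                          # 'A' make code
assert turbo_pascal_style_handler(kbd) == (0x1E, 0x1E)  # assumption holds
kbd.send(0x1E)
first, second = turbo_pascal_style_handler(kbd, True)
assert first != second          # the race: chained handler sees a new byte
```

The pattern only works if the hardware guarantees no new byte can land between the two reads, which is exactly the timing question the article asks.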

I love these questions.

Key bounce
by quatermass on Mon 6th Aug 2018 21:27 UTC
quatermass
Member since:
2005-08-03

I guess it depends on whether the keyboard has its own key-bounce detector onboard or the computer has to do it.

Reply Score: 1

RE: Key bounce
by Megol on Tue 7th Aug 2018 13:42 UTC in reply to "Key bounce"
Megol Member since:
2011-04-11

I guess it depends on whether the keyboard has its own key-bounce detector onboard or the computer has to do it.


Debouncing is done in the keyboard before sending the key up/down code.

Reply Score: 3

Bad design
by Treza on Mon 6th Aug 2018 22:44 UTC
Treza
Member since:
2006-01-11

(as hinted in the article comments)
This kind of trick makes emulation and virtualisation more difficult.

And Turbo Pascal 6.0 was released in 1990! Intel was already selling the i486.
Such a hack could be excusable for 8088 IBM PC software, but, by that time, expecting any kind of timing guarantees on computers able to multitask and run VMs was ludicrous.

Reply Score: 4

RE: Bad design
by jal_ on Tue 7th Aug 2018 07:40 UTC in reply to "Bad design"
jal_ Member since:
2006-11-02

And Turbo Pascal 6.0 was released in 1990! Intel was already selling the i486.

Those were still using PS/2, and the PS/2 timing didn't change.
Such a hack could be excusable for 8088 IBM PC software, but, by that time, expecting any kind of timing guarantees on computers able to multitask and run VMs was ludicrous.

No 1990s PC was able to run VMs, they were far too slow and didn't have the necessary virtualisation possibilities (other than running DOS VMs). And timing guarantees were fine when pertaining to hardware.

Reply Score: 1

RE[2]: Bad design
by Delgarde on Wed 8th Aug 2018 02:45 UTC in reply to "RE: Bad design"
Delgarde Member since:
2008-08-19

No 1990s PC was able to run VMs, they were far too slow and didn't have the necessary virtualisation possibilities (other than running DOS VMs).


No early-'90s PC, at least... you'd not be running VMs on a 486. But in fact, it was the '90s when virtualisation started to appear - e.g. the first VMware release was in 1999...

Reply Score: 2

RE[3]: Bad design
by jal_ on Wed 8th Aug 2018 08:10 UTC in reply to "RE[2]: Bad design"
jal_ Member since:
2006-11-02

So, very late '90s ;). Probably needed a very high-end P6 or the like.

Reply Score: 2

RE[4]: Bad design
by Megol on Wed 8th Aug 2018 12:40 UTC in reply to "RE[3]: Bad design"
Megol Member since:
2011-04-11

So, very late '90s ;). Probably needed a very high-end P6 or the like.


The 486 didn't support that kind of virtualization; it requires split ("Harvard") instruction/data caches among other things, and the 486 used a unified cache.

Reply Score: 3

RE[2]: Bad design
by zima on Thu 9th Aug 2018 00:07 UTC in reply to "RE: Bad design"
zima Member since:
2005-07-06

>Intel was already selling the i486.

Those were still using PS/2

Actually, I think most were still using the DIN AT keyboard connector / the one previous to PS/2...

Reply Score: 3

RE: Bad design
by Megol on Tue 7th Aug 2018 13:40 UTC in reply to "Bad design"
Megol Member since:
2011-04-11

(as hinted in the article comments)
This kind of trick makes emulation and virtualisation more difficult.

And Turbo Pascal 6.0 was released in 1990! Intel was already selling the i486.
Such a hack could be excusable for 8088 IBM PC software, but, by that time, expecting any kind of timing guarantees on computers able to multitask and run VMs was ludicrous.


No 486 system used virtualization. And even if one had, this solution wouldn't be a problem; that would be a bug in the virtual host.

Multitasking doesn't matter. This is an interrupt routine that disables interrupts until the proper (chained) handler has finished.

I don't know why you call this a hack given that it's the proper design for the problem at hand.

Reply Score: 2

RE[2]: Bad design
by Alfman on Tue 7th Aug 2018 14:25 UTC in reply to "RE: Bad design"
Alfman Member since:
2011-01-28

Megol,

No 486 system used virtualization. It wouldn't be a problem with this solution if it did anyway, that would be a bug in the virtual host.

Multitasking doesn't matter. This is an interrupt routine that disables interrupts until the proper (chained) handler have finished.

I don't know why you call this a hack given that it's the proper design for the problem at hand.


Speaking as someone who's used those tools on more recent computers to run legacy software: we would experience keyboard bugs with Borland tools, and now we know why. Haha.

Relying on arbitrary hardware timing characteristics to function is what our old friend Neolander would have called an ostrich algorithm:
https://en.wikipedia.org/wiki/Ostrich_algorithm

Some early software/games would rely on unchanging hardware performance and consequently are unusable on modern systems. IMHO this was amateurish even back then (I was guilty of hard-coding timing assumptions when I was learning to program). But there was less hardware variety back then, so it could sometimes pass in commercial software. This is obviously bad practice today.


Very interesting article BTW!

Edited 2018-08-07 14:31 UTC

Reply Score: 4

RE[3]: Bad design
by Megol on Tue 7th Aug 2018 14:41 UTC in reply to "RE[2]: Bad design"
Megol Member since:
2011-04-11

Megol,

Speaking as someone who's used those tools on more recent computers to run legacy software: we would experience keyboard bugs with Borland tools, and now we know why. Haha.

Relying on arbitrary hardware timing characteristics to function is what our old friend Neolander would have called an ostrich algorithm:
https://en.wikipedia.org/wiki/Ostrich_algorithm

But this doesn't do that.
The worst case scenario would be a 4.77MHz 8088 with a PS/2 interface attached.

Worst case timing would be the time from the keyboard buffer read in the interrupt routine to the read of the keyboard buffer in the chained routine. Will that timing ever exceed the minimum time between keyboard interrupts? Nope.

This is basic real-time stuff.
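A back-of-envelope check of that worst case, using the commonly cited PS/2 parameters (clock around 10-16.7 kHz, 11-bit frames: start, 8 data, parity, stop). The cycle count between the two reads is an assumed upper bound, not a measurement:

```python
# Back-of-envelope check of the worst case described above. PS/2 figures are
# the commonly cited parameters; the cycle budget is a generous assumption.

PS2_CLOCK_HZ_MAX = 16_700   # PS/2 clock is roughly 10-16.7 kHz
BITS_PER_FRAME = 11         # start + 8 data + parity + stop

min_gap_between_bytes_s = BITS_PER_FRAME / PS2_CLOCK_HZ_MAX
print(f"fastest byte-to-byte gap: {min_gap_between_bytes_s * 1e6:.0f} us")

# Assume a generous few hundred cycles between the two port-60h reads on a
# 4.77 MHz 8088, with interrupts disabled the whole time.
CPU_HZ = 4_770_000
cycles_between_reads = 500  # assumed upper bound
gap_between_reads_s = cycles_between_reads / CPU_HZ
print(f"time between the two reads: {gap_between_reads_s * 1e6:.0f} us")

# Even on the slowest PC, the two reads finish well inside one PS/2 frame.
assert gap_between_reads_s < min_gap_between_bytes_s
```

Under these assumptions the byte-to-byte gap is roughly 660 microseconds while the two reads are separated by on the order of 100 microseconds, which is the margin Megol's argument relies on.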


Some early software/games would rely on unchanging hardware performance and consequently are unusable on modern systems. IMHO this was amateurish even back then (I was guilty of hard-coding timing assumptions when I was learning to program). But there was less hardware variety back then, so it could sometimes pass in commercial software. This is obviously bad practice today.


These aren't hard-coded timing assumptions, and that's not relevant here. If things get faster the routine still works, and the timing can't get worse in hardware!

If one claims to emulate hardware and doesn't actually do it, well, the problem isn't in the original code.

Edit: quotes in bold as the ****** comment system doesn't accept quote tags.

Edited 2018-08-07 14:46 UTC

Reply Score: 3

RE[4]: Bad design
by Alfman on Tue 7th Aug 2018 15:19 UTC in reply to "RE[3]: Bad design"
Alfman Member since:
2011-01-28

Megol,

But this doesn't do that.
The worst case scenario would be a 4.77MHz 8088 with a PS/2 interface attached.

Worst case timing would be the time from the keyboard buffer read in the interrupt routine to the read of the keyboard buffer in the chained routine. Will that timing ever exceed the minimum time between keyboard interrupts? Nope.

This is basic real-time stuff.


The breakage is real, at least on some newer hardware. It was at a job I had years ago, but as I recall everything in DOS would work perfectly, including things like DOS edit.com. However, when you started interactive Borland apps the keyboard would act up. We never solved it, but this theoretically explains the symptoms: "As it turns out, the Borland Turbo Pascal 6.0 run-time, and probably a few related versions, handle keyboard input in a rather unorthodox way. The run-time installs its own INT 9/IRQ 1 handler (keyboard interrupt) which reads port 60h (keyboard data) and then chains to the original INT 9 handler… which reads port 60h again, expecting to read the same value."

This is an inherently fragile hack that may or may not keep working as hardware gets cloned and evolves. Maybe they had not anticipated how it could break, but at least in hindsight software should not assume that the hardware is too slow to update the value between reads. This may have happened to be true on the original hardware, but it clearly isn't a great practice. This is all IMHO of course.



These aren't hard-coded timing assumptions, and that's not relevant here. If things get faster the routine still works, and the timing can't get worse in hardware!

If one claims to emulate hardware and doesn't actually do it, well, the problem isn't in the original code.


Re-read the part I quoted. I'm taking it at face value, but the hardware might not wait for the software to read the same value twice.

Edited 2018-08-07 15:34 UTC

Reply Score: 3

Possibly unrelated...
by tony on Tue 7th Aug 2018 19:20 UTC
tony
Member since:
2005-07-06

I was sitting in front of an F5 BIG-IP back in the 1999/2000 timeframe. At the time, it was BSDI as the base operating system (they've since moved to Linux). It was being slammed by web requests and was frozen, because someone on live TV said "go to our website" and kabooom.

It exhibited a behavior I've not seen before or since. I would type something on the keyboard (old PS/2 interface), and on the VGA screen it would take several seconds for it to appear. I've always seen overwhelmed systems at least echo my typing back when on the VGA console. But this one didn't. I wonder if it's related.

Reply Score: 3

RE: Possibly unrelated...
by Alfman on Tue 7th Aug 2018 21:55 UTC in reply to "Possibly unrelated..."
Alfman Member since:
2011-01-28

tony,

I was sitting in front of an F5 BIG-IP back in the 1999/2000 timeframe. At the time, it was BSDI as the base operating system (they've since moved to Linux). It was being slammed by web requests and was frozen, because someone on live TV said "go to our website" and kabooom.

It exhibited a behavior I've not seen before or since. I would type something on the keyboard (old PS/2 interface), and on the VGA screen it would take several seconds for it to appear. I've always seen overwhelmed systems at least echo my typing back when on the VGA console. But this one didn't. I wonder if it's related.


I don't know anything about that specific computer system, but I would guess it has to do with the screen interaction code not being interrupt driven.

When the screen is updated from within the keyboard interrupt handler, it ought to update immediately regardless of system activity. Technically, code executing "cli" would temporarily inhibit all system interrupt handlers, but interrupts don't get disabled for a prolonged period in a normal application/OS setting, even on a busy system.


However, in applications that don't use interrupt handlers and instead process screen interactions outside of interrupts, the keystrokes will wait in a buffer doing nothing until the application polls for them.


On a related note, I believe many operating systems handle the mouse pointer in an interrupt to minimize mouse pointer latency even during high system load.
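The interrupt-driven vs. polled distinction above can be sketched with a purely illustrative toy model (not any specific OS; `buffer`, `screen`, and `busy_main_loop` are invented names):

```python
# Toy illustration of why a busy polling-based console lags on echo while an
# interrupt-driven one does not.

from collections import deque

buffer = deque()   # keystrokes waiting for the application to poll them
screen = []        # what has actually been echoed to the display

def irq_keypress(ch):
    # Interrupt-driven echo: the handler runs immediately, regardless of load.
    screen.append(ch)

def buffered_keypress(ch):
    # Polled design: the keystroke just sits in a buffer.
    buffer.append(ch)

def busy_main_loop(work_units=3):
    # The application only drains the buffer after its other work is done.
    for _ in range(work_units):
        pass  # pretend to serve a flood of web requests, etc.
    while buffer:
        screen.append(buffer.popleft())

irq_keypress('a')
assert screen == ['a']        # echoed at interrupt time, load-independent

buffered_keypress('b')
assert screen == ['a']        # nothing echoed yet: the app hasn't polled
busy_main_loop()
assert screen == ['a', 'b']   # echo appears only when the app catches up
```

Under heavy load the polled design's echo delay grows with the length of the main loop's work, which matches the several-second lag described in the comment above.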

Reply Score: 3

Comment by Megol
by Megol on Wed 8th Aug 2018 12:37 UTC
Megol
Member since:
2011-04-11

Borland's solution isn't a hack - it's a proper design that works. There are no timing differences that matter: faster hardware will work and the slowest hardware possible will work. It works.

So you say some buggy software will fail to handle this case correctly? Sucks, but the fault is in that software that doesn't handle things correctly. And handling it correctly isn't exactly hard - real emulators do much worse things than emulating ~1msec signals.
Edit: excessive and removed.

Edited 2018-08-08 12:41 UTC

Reply Score: 3

RE: Comment by Megol
by Alfman on Fri 10th Aug 2018 16:02 UTC in reply to "Comment by Megol"
Alfman Member since:
2011-01-28

Megol,

Borland's solution isn't a hack - it's a proper design that works. There are no timing differences that matter: faster hardware will work and the slowest hardware possible will work. It works.

So you say some buggy software will fail to handle this case correctly? Sucks, but the fault is in that software that doesn't handle things correctly. And handling it correctly isn't exactly hard - real emulators do much worse things than emulating ~1msec signals.


To me, these two paragraphs contradict each other since borland's own software breaks on some modern hardware/controllers.

Expecting the same value twice from port IO creates a timing race condition that would not exist if you only read it once. Perhaps they assumed the race would be fairly safe on the hardware they had then, but it has the potential to introduce fragility with modern controllers (USB/Bluetooth/etc) that might deliver key sequences immediately as they are read via port IO, without the PS/2's inter-character delays. "Normal" keyboard handlers are ready to handle the next character as soon as they read the last. Borland's handler, on the other hand, can't, because of its unique requirement to read the same input character twice.


If you want, I can budge and meet you somewhere in the middle: Borland's approach worked back when hardware was more homogeneous and everyone's computer used identical controllers, but they made assumptions that could break with new hardware.

Reply Score: 2