Cryptography needs random numbers, numbers unpredictable to an adversary. Computers are built to be as non-random as possible, so this is a problem. Intel created an instruction, RDRAND, that supposedly creates a random number on each read.
This instruction appears to be backdoored by the NSA.
The numbers produced by any random physical process are always off-white, in ways characteristic of that physical process and diagnostic of the underlying physics. Because the numbers are whitened on chip, because we don’t see the raw output, we cannot tell if the underlying physical process generating randomness has failed, has been turned off by the NSA, or ever existed at all. The decision to whiten the numbers on chip is a most strange one. It costs a bit of silicon, and lessens the utility of the physical random number generator, assuming there is a physical random number generator, because on-chip whitening makes it impossible to tell whether there really is a physical random number generator at all.
This strange decision created much suspicion.
One of the designers responded to this suspicion, and my interpretation of his words is that he is a very bad liar, which is to say, his lies are both wicked and incompetent.
On 2013-09-08 3:48 AM, on the Cryptography Mailing list, David Johnston wrote:
It is interesting to consider the possibilities of corruption and deception that may exist in product design. It’s a lot more alarming when it’s your own design that is being accused of having been backdoored. Claiming the NSA colluded with Intel to backdoor RdRand is also to accuse me personally of having colluded with the NSA in producing a subverted design. I did not.
A quick googling revealed many such instances of statements to this effect, strewn across the internet, based on inferences from the Snowden leaks and resulting Guardian and NYT articles.
I personally know it not to be true and from my perspective, the effort we went to improve computer security by making secure random numbers available and ubiquitous in a low attack-surface model is now being undermined by speculation that would lead people to use less available, less secure RNGs. This I expect would serve the needs of the NSA well.
Firstly, an honest person does not tell us that it is hurtful that his integrity is doubted, since that is an effort to shut down discussion. An honest person instead provides evidence of his integrity. A dishonest person wants to shut down discussion; an honest person wants the truth discussed, told his way.
Secondly, an honest person would not tell us that any doubt of RDRAND is doubt in his integrity, since Snowden has just revealed that the NSA has been subverting cryptography with secret agents acting behind the scenes. He would instead tell us that that part of the system of which he has knowledge has not been subverted in ways he can detect, that if RDRAND has been subverted, it has been subverted by a secret agent playing clever tricks behind his back. An honest person concerned for his good name would avoid the risk of taking the blame for furtive acts done by secret agents.
Thirdly, an honest person would spontaneously and unprompted provide some innocent explanation for the strange and suspicious looking design decision to whiten RDRAND on chip.
Suppose you see some people showing up at your neighbor’s house while your neighbors are away. You go out to see what is up. If they immediately and spontaneously start explaining what is going on, probably innocent. If they cry indignantly that it is outrageous that you act as if this is suspicious behavior, they are guilty as hell.
When David Johnston starts off by saying how hurtful it is to doubt him, rather than “We chose to whiten on chip because …”, that is like the Wizard of Oz saying “ignore the man behind the curtain”.
From which I conclude that David Johnston, one of the designers of RDRAND, is as guilty as hell.
Your analysis points to some definite fishiness here.
At least the Soviets were honest when they put overt political officers in all areas of life they wanted to monitor.
So it seems that the best cryptographers in all the US are on the NSA payroll, and there’s little that can be done about it.
I don’t think the best cryptographers are on the state payroll. Curve25519 is a better curve than any of the NIST curves. People use the NIST curves because of government pressure to do so, because it is politically correct to do so. Regardless of whether the NIST curves are backdoored or not, they are inferior.
Actually, since Americans are the dumbest people on the planet, on average…and NSA being at least somewhat limited to using foreigners constantly… they will come up short on intellectual might. But the ‘best cryptographers’ in US might very well be on US payroll… without saying much, comparatively speaking.
So does RDRAND pass the diehard tests? Generate some random numbers with it and run them through and see what we get.
Irrelevant.
If it produced an AES encryption of a counter, it would pass the diehard tests, yet be perfectly predictable to anyone who had the secret key.
You cannot tell if something is random by looking at it. You can only tell if something is random by knowing how it is produced.
And you cannot tell how it is produced unless you have access to the raw offwhite entropy source.
Random generators are tested by looking at their output, not looking at how they are implemented.
wrong
Why wrong?
If my random number generator spat out the numbers 3, 1, 4, 1, 5, 9 etc then any nerd reading this thread would immediately challenge it as a base-10 parser of pi – that is, not a random number generator. Maybe even where it started with 3.
That is an extreme case of course, but other “random” number generators with a bit more sophistication can be cracked with similar means.
To crack a code is to judge the code. And this is done – must be done – by their “fruits”, to use the Christian expression.
No they cannot. What if the output of RDRAND is a two hundred and fifty six bit counter with random initialization, ECB-AES-256 encrypted with a key known only to the NSA?
Which it probably is.
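A minimal sketch of the kind of construction being described: a counter encrypted under a secret key. The output would sail through diehard-style statistical tests, yet every value is predictable to whoever holds the key. This is an illustration, not a claim about Intel’s actual circuit; the key size, the 128-bit counter width (AES blocks are 128 bits), and the use of the third-party `cryptography` Python package are all assumptions of the sketch.

```python
# Illustrative "backdoored RNG": AES-ECB encryption of an incrementing
# counter under a key known only to the attacker. Statistically white,
# cryptographically worthless against the key holder.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

class BackdooredRNG:
    def __init__(self, key: bytes, counter_init: int):
        self._counter = counter_init                  # random initialization
        self._cipher = Cipher(algorithms.AES(key), modes.ECB())

    def next_block(self) -> bytes:
        """Return 16 bytes of 'randomness': AES of the current counter value."""
        block = self._counter.to_bytes(16, "big")
        self._counter = (self._counter + 1) % (1 << 128)
        enc = self._cipher.encryptor()
        return enc.update(block) + enc.finalize()

# To any observer without the key, this stream passes every statistical test.
rng = BackdooredRNG(os.urandom(32), int.from_bytes(os.urandom(16), "big"))
stream = b"".join(rng.next_block() for _ in range(1024))

# To someone holding the key, decrypting one block recovers the counter,
# and with it every past and future "random" output.
```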
Based on what he says in https://plus.google.com/117091380454742934025/posts/SDcoemc9V3J , David Johnston looks honest to me.
He says:
But it is not “his” RDRAND – he is claiming more knowledge of what is in the chip than he can reliably possess. It is not only sealed against us, but also sealed against him.
He says he has examined the chip with an electron microscope. He also acknowledges that, in principle, a microcode update can change the values returned by RDRAND.
That is rather like claiming that you examined the object code to check that it corresponds to the source code that you wrote. He is not only claiming that he personally is not involved, but improbable vigilance against other people being involved. The latter claim casts doubt on the former claim.
The chip is not only designed so as to make it hard for us to detect a backdoor, rather than to efficiently produce randomness; it is designed so as to make it hard for him to detect a backdoor, rather than to efficiently produce randomness.
You are excused from Jury Duty.
You are excused from Constabulary Duty.
Those are perfectly human reactions. It’s quite human, for instance, when threatened, to snarl and bare your teeth. I don’t think his reactions and outrage either convict him or clear him. It just means he’s a monkey like us.
Don’t ever make a life or death decision that isn’t self-defense. I think you’ll get the wrong guy. Or at least the first target. It’s about getting someone, not any culpability. Pissed off chimp looking to kick some ass.
This is a very academic witch hunt. For those who scoff at the decision-making process, this post is an excellent example of how the decisions are actually made.
But he is not snarling and baring his teeth. He is whimpering that we are being cruel and hurtful. If he was snarling and baring his teeth, he would be telling us why we were wrong, rather than saying that it is terribly unkind of us to think we are right.
The normal human reaction to being hurt by what someone says is to tell them that they are wrong, and why they are wrong, and maybe add that they are stupid idiots, not tell them that it hurts.
“The normal human reaction to being hurt by what someone says is to tell them that they are wrong, and why they are wrong, and maybe add that they are stupid idiots, not tell them that it hurts.”
NO IT ISN’T. That might be what academics do.
His reactions are perfectly human.
I don’t understand the arguments technically enough to go toe to toe with you; however, I find your view of Human Nature to be academic.
don’t ever make a life/death decision you don’t have to, you’re excused.
I don’t know his guilt or innocence, or whether it’s compromised.
I’m pretty sure you’re out to hang somebody and it don’t matter who…
You don’t have a reasonable suspicion based on above, never mind conviction.
I would leave him alone. And concentrate on the actual technical aspects of the chip.
No one ever tells me my words hurt them. Do I never hurt anyone?
Fairly routinely I accuse leftists of using Alinskyite tactics, or of being afraid to speak the truth, or of following the official line without regard to reality. I have never been told that hurts, even when it is obvious that it does.
I’m not telling anyone they’re being cruel and hurtful. I’m just saying that they are wrong about a back door in the rdrand instruction because I happen to know there isn’t because of my position as the designer. Also I said it was alarming to find your design being questioned across the interwebs. It is. It is not a normal thing.
Is a normal thing.
In crypto, you are supposed to assume bad intent, and you are usually right. Same principle as lawyers drawing up contracts. A lawyer should assume that if he puts a comma wrong, the other guy will claim it is a full stop and use it to weasel out of the contract with his client, and a cryptographer should assume that any secrets he is not privy to will be used against his client.
Let us compare IPassword’s response to suspicion with Intel’s response to suspicion.
Intel’s response is suspicious. OOoh, that hurts. Trust us. We are nice people.
An IPassword like response would be to make the unencrypted output of the entropy source available to the owner of the CPU.
Intel has repeatedly been asked for the output of the entropy source, the un”enhanced” output of the entropy source, and has not provided it. Why is the output of RDSEED “enhanced”? (“Enhanced” meaning encrypted against the owner of the CPU, encrypted against the person relying on that output.)
RDSEED provides the output of the enhanced non-deterministic random number generator (ENRNG).
Which is “enhanced” by being whitened.
And therefore makes it just as impossible to tell if the supposed randomness is backdoored as RDRAND does.
What we need is the output of the entropy source.
Supposedly we have a circuit that generates fairly random off-white noise (the entropy source). This is then AES encrypted (the enhanced non-deterministic random number generator), and the enhanced non-deterministic random number generator (RDSEED) then continuously seeds a pseudo-random number generator, RDRAND.
To tell if there is a backdoor or not, we need the output of the entropy source, unenhanced.
If the entropy source is real, it will show its analog characteristics leaking into the digital abstraction. The correlations and anti-correlations between nearby bits will reflect the analog values of the circuit, so no two chips will show quite the same correlations, and the correlations will vary with temperature and overclocking. These analog variations would be compelling evidence that the entropy source is the claimed circuit, or something very like the claimed circuit.
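By way of illustration, here is a rough sketch of the kind of test that raw entropy-source output would make possible: estimate the correlation between bits at small lags and watch how it drifts with temperature and from chip to chip. The bit stream below is only a placeholder, since Intel does not expose the raw output, and the function names are mine, not Intel’s.

```python
# Sketch: lag-k bit correlations in a raw (unwhitened) bit stream.
# A genuine analog noise source shows small, device- and temperature-
# dependent correlations at short lags; a deterministic generator hidden
# behind a cipher shows essentially none, and what it shows never drifts.
import os

def bits_of(data: bytes):
    """Yield the individual bits of a byte string, most significant first."""
    for byte in data:
        for i in range(8):
            yield (byte >> (7 - i)) & 1

def lag_correlation(bits, lag):
    """Sample correlation between bit[i] and bit[i+lag]."""
    n = len(bits) - lag
    mean = sum(bits) / len(bits)
    var = sum((b - mean) ** 2 for b in bits) / len(bits)
    if var == 0:
        return 0.0
    cov = sum((bits[i] - mean) * (bits[i + lag] - mean) for i in range(n)) / n
    return cov / var

# Placeholder for the raw entropy-source output we are not given.
raw = list(bits_of(os.urandom(1 << 15)))
for lag in range(1, 9):
    print(f"lag {lag}: {lag_correlation(raw, lag):+.5f}")
# For a real analog source, the small nonzero values at short lags would
# differ between chips and shift with temperature and overclocking.
```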
Because RDSEED gives us the encrypted output of the entropy source, we cannot tell if the entropy source is a real entropy source, or a counter encrypted with the NSA’s secret key.
Since the whitening is deterministic, it is potentially reversible, but Intel does not appear to be releasing sufficient information to reverse it. An IPassword-like response would be to allow the end user to decrypt the results of RDSEED so as to get the raw output of the entropy source.
RDSEED is “enhanced” by being AES encrypted against the owner of the chip, against the person using the output of RDSEED. To gain trust, give him what he needs to decrypt it.
I suspect this public response will make it easier for me to justify publishing a complete, executable model of the DRNG and ES. We have samples of the raw entropy we could share. But allowing people to unlock the DRNG on one thread to expose the internal state would compromise the security of another thread that is assuming the output of the DRNG is meeting the SP800-90 requirements.
We have published many details of the ES and the DRNG algorithms.
This guy has gathered together lots of public sources of information and has put them on a web page: http://pmokeefe.blogspot.com/2012/03/randomness-on-your-next-chip.html
I don’t see the IDF slides anywhere on the web, I’ll see if we can’t get those online also.
We don’t use AES to ‘encrypt’. We use it in CBC-MAC mode to condense the raw data into higher quality ‘conditioned’ data that is effectively full entropy. Those full entropy seeds are used to reseed an SP800-90A CTR-DRBG at about 2 million times a second.
I am a security guy. I understand the dangers of inappropriate trust. This has to be balanced against the very significant problems of how you deliver an actually good RNG in a mass market chip that is maximally secure, make the manufacturing test work without compromising security, and get it built, tested, reliable (9 sigma!), functional and a permanently supported part of the PC architectural model. A design that has, as part of its architecture, the ability to be switched into an effectively broken mode by any bad actor out there is not one that we know how to make secure.
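For readers following the architecture Johnston describes, here is a toy model of the pipeline: raw entropy samples condensed by AES-CBC-MAC into a seed, which reseeds a counter-mode generator. This is a sketch of the general SP800-90 shape, not Intel’s implementation; the key choices, block counts, and reseed policy below are assumptions, and it again relies on the third-party `cryptography` Python package.

```python
# Toy model of the described pipeline: entropy source -> CBC-MAC
# conditioner -> counter-mode DRBG. Not Intel's design; an illustration
# of the shape of it.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def cbc_mac_condense(key: bytes, raw: bytes) -> bytes:
    """Condense raw samples (a multiple of 16 bytes) into one 128-bit seed."""
    enc = Cipher(algorithms.AES(key), modes.CBC(bytes(16))).encryptor()
    return (enc.update(raw) + enc.finalize())[-16:]      # final CBC block

class CtrDrbgSketch:
    """Minimal counter-mode DRBG, reseeded from conditioned entropy."""
    def __init__(self, seed: bytes):
        self.reseed(seed)

    def reseed(self, seed: bytes) -> None:
        self._key = seed                                 # 128-bit AES key
        self._v = int.from_bytes(seed, "big")            # counter state

    def generate(self, n_blocks: int) -> bytes:
        enc = Cipher(algorithms.AES(self._key), modes.ECB()).encryptor()
        out = b""
        for _ in range(n_blocks):
            self._v = (self._v + 1) % (1 << 128)
            out += enc.update(self._v.to_bytes(16, "big"))
        return out

# os.urandom stands in for the raw, off-white entropy samples we never see.
conditioner_key = os.urandom(16)
seed = cbc_mac_condense(conditioner_key, os.urandom(64))
drbg = CtrDrbgSketch(seed)
output = drbg.generate(4)          # the sort of thing RDRAND hands back
```

Note what the sketch makes obvious: once the raw samples pass through the conditioner and the DRBG, nothing about the output reveals whether the input was genuine noise or a deterministic stream.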
Intel discards some of the output of the entropy source. Presumably values of condensed data that are read by RDSEED are not used to reseed the PRNG, so some of the condensed data is discarded also.
So, whatever entropy is read by one process could then be discarded, and would be incapable of affecting any other process. The worst one process could do to another is slow it down by hogging the noise-generation resource, which is unlikely to be effective, since the supply of noise is so abundant and the amount of noise needed so small.
The truly paranoid developer (such as myself) could then read the Entropy Source, where any Intel misconduct would be likely to show up in the color of the noise, and do the condensing, whitening, and PRNG seeding himself. (It being very hard to create a digital pseudo noise source that displays subtly varying color at high speed, while hardware true random noise sources almost unavoidably display subtly varying noise color.)
Crypto wars II is about furtively hobbled security tools. So we have to build verifiable systems. Intel has to release the necessary information to make RDSEED verifiable. If they will not, we should assume RDSEED and RDRAND are backdoored by the NSA.
The truly paranoid developer needs only XOR the output of RDRAND with the output of some other RNG. (Even XORing two/three/five RDRAND calls may be enough to foil NSA’s evil plans.) Or you could AES-encrypt the RDRAND output with a random key.
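A rough sketch of that mixing approach, under the assumption that you have some way of reading RDRAND from your language. Python cannot issue the instruction directly, so `rdrand_bytes` below is a hypothetical placeholder; the idea is simply to XOR or hash the suspect output together with an independent source, so the result is unpredictable as long as at least one input is.

```python
# Sketch: combining a suspect hardware RNG with an independent source.
import os
import hashlib

def rdrand_bytes(n: int) -> bytes:
    """Hypothetical placeholder for n bytes read via the RDRAND instruction."""
    return os.urandom(n)   # stand-in only; real code would use a C shim

def xor_mix(n: int) -> bytes:
    """XOR RDRAND output with the OS entropy pool."""
    a = rdrand_bytes(n)                # possibly backdoored
    b = os.urandom(n)                  # independent source
    return bytes(x ^ y for x, y in zip(a, b))

def hash_mix(n_blocks: int) -> bytes:
    """Hash the sources together, which also resists crafted inputs."""
    out = b""
    for i in range(n_blocks):
        material = rdrand_bytes(32) + os.urandom(32) + i.to_bytes(4, "big")
        out += hashlib.sha256(material).digest()
    return out
```

Whichever source the attacker knows, the other masks it, which is the same reasoning as the advice further down about using as many sources of randomness as possible.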
http://people.umass.edu/gbecker/BeckerChes13.pdf
I agree with you Jim. By making the implementation a “black box” there is no opportunity to view. If the RNG exposed itself on JTAG perhaps we could be satisfied. Until this is implemented more transparently Tso’s solution of using RdRAND to feed the kernel entropy pool (which uses multiple sources) is a good compromise.
It’s unlikely that people at Johnston’s level would have knowledge of collusive acts. They are told what to do and, to some degree, how to do it.
What came of this? Can the NSA break all encryption?
The NSA can break any encryption that relies on RDRAND alone for its source of randomness.
Fortunately all cryptographers are aware of this, and most, Microsoft’s cryptographers among them, decline to rely on RDRAND alone.
RDRAND randomness will protect you against everyone who does not know the secret, and the secret is narrowly held, so you should use as many sources of randomness and unpredictability in addition as possible, for what is known to one attacker will be unknown to another.
And this is in fact widely done.