Friday, April 13, 2012

Random number generation in Unix & Performance


Random number generation, and therefore server startup, is slow on Unix platforms for some servers. This is because /dev/random is used on Unix platforms for random number generation. 


I am including an explanation and solution here. Feel free to skip directly to the Solutions section if you already understand the problem. 


java.security.SecureRandom is designed to be cryptographically secure. It provides strong random numbers and should be used when high-quality randomness is important and worth the CPU cost. 

SecureRandom uses OS-provided entropy to generate strong random numbers. Depending on the machine and environment, the sources of entropy vary. The operating system knows how and where to collect this entropy, and makes it available via an API such as CryptGenRandom() on Windows, or by reading from the /dev/random device file on Unix-like systems, e.g. Linux, Solaris, Mac.

To obtain a series of entropy bytes, call generateSeed() on an instance of SecureRandom (the static SecureRandom.getSeed() method does the same thing). In Oracle Java, generateSeed() is a wrapper around the OS-provided source of entropy, if one is available.
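As a minimal sketch of the two calls mentioned above (the class and method names here are my own, not part of any API):

```java
import java.security.SecureRandom;

public class SeedDemo {
    // Ask the OS entropy source for n seed bytes; on Unix this call may
    // block if /dev/random's entropy pool is depleted.
    static byte[] osSeed(int n) throws Exception {
        SecureRandom sr = SecureRandom.getInstance("SHA1PRNG");
        return sr.generateSeed(n);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("got " + osSeed(16).length + " seed bytes");
    }
}
```

Note that nextBytes(), in contrast, draws from the PRNG itself and does not block once the generator has been seeded.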

On Unix platforms, the entropy behind /dev/random is generated by recording events such as mouse clicks, keyboard strokes, and the arrival of disk and network activity.

While using SecureRandom, you should keep the following in mind: 
  • In the worst case, a call to SecureRandom.generateSeed() may block until the required entropy has been generated.
  • Treat bits of entropy as a "valuable, shared resource": if you request entropy faster than it is generated, your application will block, and so potentially will other applications also requesting entropy.
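A quick way to check whether your system is affected by this blocking is simply to time a seed request. A small sketch (the class and method names are my own):

```java
import java.security.SecureRandom;

public class EntropyTiming {
    // Measure how long a single generateSeed() call takes; on a starved
    // /dev/random this can run into seconds or longer.
    static long millisToSeed(int bytes) throws Exception {
        SecureRandom sr = SecureRandom.getInstance("SHA1PRNG");
        long start = System.nanoTime();
        sr.generateSeed(bytes);
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("generateSeed(8) took " + millisToSeed(8) + " ms");
    }
}
```

On a machine with plenty of entropy this returns almost immediately; on a headless server it can stall noticeably.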


This is exactly the problem that causes these servers to start up slowly. The alternative is /dev/urandom.


The difference between /dev/random and /dev/urandom is that /dev/random provides a limited (but relatively large) number of random bytes and will block, waiting for the kernel to gather more entropy, once that pool is exhausted, while /dev/urandom will always provide random bytes, though their quality may degrade once the initial pool is exhausted. Note, however, that I am not implying /dev/urandom is insecure.
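To see the non-blocking behavior for yourself, you can read from the device file directly. A small Unix-only sketch (class and method names are my own):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class UrandomRead {
    // Read n bytes straight from /dev/urandom; unlike /dev/random,
    // this read never blocks waiting for entropy.
    static byte[] readUrandom(int n) throws IOException {
        byte[] buf = new byte[n];
        try (FileInputStream in = new FileInputStream("/dev/urandom")) {
            int off = 0;
            while (off < n) {
                int r = in.read(buf, off, n - off);
                if (r < 0) throw new IOException("unexpected EOF");
                off += r;
            }
        }
        return buf;
    }

    public static void main(String[] args) throws IOException {
        if (new File("/dev/urandom").exists()) {
            System.out.println("read " + readUrandom(32).length + " bytes");
        }
    }
}
```

The same loop pointed at /dev/random can hang indefinitely on a quiet machine, which is exactly the startup delay described above.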

Before Java 1.5, you could tell Java to use /dev/urandom when an application had a higher need for performance. From Java 1.5 onward, however, the value "/dev/urandom" is special-cased and mapped back to /dev/random. See Bug 6202721.
Solutions:
There are several solutions mentioned on the internet. I will list them all and then highlight the one I found to work well in all situations:
  1. Leave it as is (using /dev/random, even if set to /dev/urandom) and use a third-party tool to feed sufficient entropy into your system so /dev/random doesn't block.
  2. Add “-Djava.security.egd=file:/dev/./urandom” to the Java parameters (plain /dev/urandom does not work because of the mapping described above).
  3. mv /dev/random /dev/random.ORIG ; ln /dev/urandom /dev/random
  4. Edit $JAVA_HOME/jre/lib/security/java.security and set securerandom.source=file:/dev/./urandom.
Note: #1 is the best and most secure solution. However, for development and test environments (or systems where you are willing to compromise on security), #4 is the solution I used and recommend. 
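For completeness, solution #4 can also be attempted programmatically by overriding the security property in code before the JVM creates its first SecureRandom. Whether this takes effect depends on the JDK version and configured provider, so treat it as a sketch rather than a guaranteed fix (the class and method names are my own):

```java
import java.security.SecureRandom;
import java.security.Security;

public class UrandomConfig {
    // Override securerandom.source in code instead of editing java.security.
    // Must run before the first SecureRandom is created in this JVM; the
    // /dev/./urandom spelling avoids the special-casing from bug 6202721.
    static int seedBytesAfterOverride() throws Exception {
        Security.setProperty("securerandom.source", "file:/dev/./urandom");
        SecureRandom sr = SecureRandom.getInstance("SHA1PRNG");
        return sr.generateSeed(16).length;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("seeded with " + seedBytesAfterOverride() + " bytes");
    }
}
```

Editing the java.security file (solution #4 proper) remains the more reliable approach, since it is applied before any application code runs.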


Hope this explains and saves some time for you all. 
Have fun.



8 comments:

  1. Excellent post Shilpi! I am sure this will help a lot of people

    ReplyDelete
  2. There is actually more to it as to where the random bits come from. Oracle JDK has several different PRNG implementations, depending on the OS and configuration: PKCS11, NativePRNG, SHA1PRNG, MSCAPI's WINDOWS-PRNG. The workaround you've described is only applicable to SHA1PRNG.

    ReplyDelete
  3. Thanks for sharing this. I was finding scenarios where SHA1PRNG was used.

    ReplyDelete
  4. If using solution #1, what third-party tools do you recommend to introduce sufficient random entropy into one's system?

    ReplyDelete
  5. I have personally not tried or researched tools to introduce entropy. However, Wikipedia recommends some tools for various operating systems.

    Shilpi

    ReplyDelete
  6. The difference between /dev/random and /dev/urandom is that
    /dev/random provides a limited (but relatively large) number of random bytes, and will block waiting that the kernel gives some more if the buffer is outrun,

    This is false on any platform where one should be running code intended for security: see https://en.wikipedia.org/wiki//dev/random#FreeBSD for details

    It should be noted that Linux's random number subsystem is generally considered insecure for most purposes.

    ReplyDelete
  7. I tried solution #1 with rngd on CentOS without any positive result. The CFHTTP calls still take much longer than on our dev machine which is running CF9.
    On CF10 the first CFHTTP is fast (0.1seconds), all others are very slow (about 5seconds for google.com) - entropy_avail is good filled with about 3.8k-4k
    Before we installed rngd it dropped to about 150 or 300 sometimes. But the lack of entropy is most likely not the reason why the CFHTTP calls get throttled.
    Using /dev/./urandom (solutions #2,#3,#4 lol) reads like it is a unsecure workaround, which I cannot use.

    ReplyDelete
  8. Hi Shilpi,

    I followed your Solutions and the CF 10 cfhttp called is still very slow. I am running on CentOS 6. What Linux did you test this?

    Thanks.

    ReplyDelete

You can subscribe to the comments by clicking on "Subscribe by email".