Random number generation, and therefore server start-up, is slow on Unix platforms for some of the servers. This is because /dev/random is used on Unix platforms for random number generation.
I am including an explanation and a solution here. Feel free to skip directly to the Solutions section if you already understand the problem.
java.security.SecureRandom is designed to be cryptographically secure. It provides strong random numbers and should be used when high-quality randomness is important enough to be worth the extra CPU cost.
SecureRandom uses OS-provided entropy to generate strong random numbers. The sources of entropy vary depending on the machine and environment; the operating system knows how and where to collect it. It gathers this entropy and makes it available via an API such as CryptGenRandom() on Windows, or by reading from the /dev/random device file on Unix-like systems such as Linux, Solaris, and Mac.
To obtain a series of entropy bytes, you can call generateSeed() on a SecureRandom instance (or the static SecureRandom.getSeed()). In Oracle Java, generateSeed() is a wrapper around the OS-provided source of entropy, if one is available.
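For example, here is a minimal sketch of how an application might obtain random bytes and seed material (the class name SeedDemo is just for illustration):

```java
import java.security.SecureRandom;

public class SeedDemo {
    public static void main(String[] args) {
        // The no-arg constructor picks a platform-default implementation;
        // which one you get depends on the OS and the JRE configuration.
        SecureRandom random = new SecureRandom();

        // nextBytes() returns cryptographically strong random bytes.
        byte[] bytes = new byte[16];
        random.nextBytes(bytes);

        // generateSeed() asks the underlying entropy source for raw seed
        // material; on Unix-like systems this typically reads /dev/random
        // and can therefore block when entropy is scarce.
        byte[] seed = random.generateSeed(16);

        System.out.println("SecureRandom algorithm: " + random.getAlgorithm());
    }
}
```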
On Unix platforms, the entropy behind /dev/random is generated by recording events such as mouse clicks, keyboard strokes, disk access, and the arrival of network packets.
While using SecureRandom, you should keep the following in mind:
- In the worst case, a call to SecureRandom.generateSeed() may block until the required entropy has been generated.
- You should consider bits of entropy as a valuable, shared resource: if you request entropy faster than it is generated, your application will block, and so, potentially, will other applications that also request entropy (a small timing sketch follows this list).
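As a rough illustration of that blocking behaviour, here is a small sketch that simply times a seed request; the result will vary wildly depending on how much entropy the machine has available:

```java
import java.security.SecureRandom;

public class EntropyTiming {
    public static void main(String[] args) {
        SecureRandom random = new SecureRandom();

        long start = System.nanoTime();
        // On a machine that is low on entropy, this call can take seconds
        // (or longer) with implementations backed by /dev/random, because
        // the kernel blocks until enough entropy has been gathered.
        byte[] seed = random.generateSeed(64);
        long elapsedMs = (System.nanoTime() - start) / 1000000;

        System.out.println("generateSeed(64) took " + elapsedMs + " ms");
    }
}
```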
This is exactly the problem that causes these servers to load slowly. The alternative is /dev/urandom.
The difference between /dev/random and /dev/urandom is that /dev/random provides a limited (but relatively large) number of random bytes and will block, waiting for the kernel to gather more entropy, once that pool is exhausted, while /dev/urandom will always return random bytes, though they may be of lower quality once the initial pool is exhausted. Note that I am not implying /dev/urandom is insecure.
Before Java 1.5, you could tell Java to use /dev/urandom if the application needed better performance. From Java 1.5 onward, however, specifying "/dev/urandom" is effectively ignored: the /dev/urandom path is mapped back to /dev/random. See Bug 6202721.
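If you want to check what your JRE is currently configured with, a small sketch (note that the java.security.egd system property, when set, takes precedence over the securerandom.source security property):

```java
import java.security.SecureRandom;
import java.security.Security;

public class RandomSourceCheck {
    public static void main(String[] args) {
        // Value of securerandom.source from $JAVA_HOME/jre/lib/security/java.security
        System.out.println("securerandom.source = "
                + Security.getProperty("securerandom.source"));

        // Value of the overriding system property, if it was passed on the command line
        System.out.println("java.security.egd   = "
                + System.getProperty("java.security.egd"));

        // Which SecureRandom implementation the default constructor actually picked
        System.out.println("Default algorithm   = " + new SecureRandom().getAlgorithm());
    }
}
```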
Solutions:
There are several solutions mentioned on the internet. I will list them all and then highlight the one that I found works perfectly in all situations:
1. Leave /dev/random in place (even if it is set to /dev/urandom) and use a third-party tool to feed sufficient entropy into your system so that /dev/random does not block.
2. Add -Djava.security.egd=file:/dev/./urandom to the Java parameters (plain /dev/urandom does not work).
3. mv /dev/random /dev/random.ORIG ; ln /dev/urandom /dev/random
4. Change $JAVA_HOME/jre/lib/security/java.security: set securerandom.source=file:/dev/./urandom.
Note: #1 is the best and most secure solution. However, for development and test environments (or systems where you are willing to compromise on security), #4 is the solution I used and the one I recommend.
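Whichever option you choose, keep in mind that the JDK ships several SecureRandom implementations (e.g. NativePRNG, SHA1PRNG), and the effect of these settings depends on which one is actually in use. Here is a small sketch that lists the implementations available from the installed providers:

```java
import java.security.Provider;
import java.security.Security;

public class ListSecureRandomImpls {
    public static void main(String[] args) {
        // Print every SecureRandom implementation registered by the installed
        // providers (typically NativePRNG and SHA1PRNG on Unix-like systems).
        for (Provider provider : Security.getProviders()) {
            for (Provider.Service service : provider.getServices()) {
                if ("SecureRandom".equals(service.getType())) {
                    System.out.println(provider.getName() + ": " + service.getAlgorithm());
                }
            }
        }
    }
}
```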
Hope this explains and saves some time for you all.
Have fun.
Excellent post Shilpi! I am sure this will help a lot of people
There is actually more to it as to where the random bits come from. Oracle JDK has several different PRNG implementations, depending on the OS and configuration: PKCS11, NativePRNG, SHA1PRNG, MSCAPI's WINDOWS-PRNG. The workaround you've described is only applicable to SHA1PRNG.
Thanks for sharing this. I was finding scenarios where SHA1PRNG was used.
If using solution #1, what third-party tools do you recommend to introduce sufficient random entropy into one's system?
I have personally not tried or researched tools to introduce entropy. However, Wikipedia recommends some tools for various operating systems.
Shilpi
"The difference between /dev/random and /dev/urandom is that /dev/random provides a limited (but relatively large) number of random bytes, and will block waiting for the kernel to gather more once the pool is exhausted."
This is false on any platform where one should be running code intended for security: see https://en.wikipedia.org/wiki//dev/random#FreeBSD for details
It should be noted that Linux's random number subsystem is generally considered insecure for most purposes.
I tried solution #1 with rngd on CentOS without any positive result. The CFHTTP calls still take much longer than on our dev machine which is running CF9.
On CF10 the first CFHTTP call is fast (0.1 seconds), but all others are very slow (about 5 seconds for google.com). entropy_avail stays well filled, at about 3.8k-4k.
Before we installed rngd it sometimes dropped to about 150 or 300. But the lack of entropy is most likely not the reason why the CFHTTP calls get throttled.
Using /dev/./urandom (solutions #2, #3, #4, lol) reads like an insecure workaround, which I cannot use.
Hi Shilpi,
I followed your solutions and the CF 10 cfhttp call is still very slow. I am running on CentOS 6. Which Linux did you test this on?
Thanks.