Edit: and using what follows, I always get "38"! So much for randomness!
srand((unsigned)time(0));
int lowest=1, highest=100;
int range=(highest-lowest)+1;
int r = lowest+int(range*rand()/(RAND_MAX + 1.0));
As an explanation, here is what is going on. The first 3 lines are fairly simple:
Code:
<seed the randomness>
lowest=1
highest = 100
range = 100
The last one is messy. I've added some extra parentheses to help you see the order of evaluation:
Code:
int r = lowest+int((range*rand())/(RAND_MAX + 1.0));
So let's start putting in some numbers (I'm working on a 64-bit system, so YMMV).
Code:
int r = 1 + int((100*rand())/(2147483647 + 1.0));
The important thing to notice here is that rand() returns an int, which is in the range 0 to RAND_MAX (inclusive). It is then multiplied by 100. However, 100 is also an int, and an int multiplied by an int yields another int. Here you can get overflow: the 4-byte (32-bit) int runs out of room to store the result and, on typical hardware, simply wraps around (strictly speaking, signed overflow is undefined behaviour in C++, but wraparound is what you will usually observe). So in practice, 2147483647 + 1 wraps to -2147483648, which means that (100 * rand()) can land anywhere in the range [-2147483648, 2147483647].
Now for the denominator: 2147483647 + 1.0. The first number is an int, the second is a double. So the int gets converted to double and then the two are added, resulting in (on my compiler) 2147483648.000000. So now we have:
Code:
int r = 1 + int((R)/(2147483648.0));
R is in the range -2147483648 ≤ R ≤ 2147483647.
Then the division is performed resulting in:
Code:
int r = 1 + int(S);
where S is in the range -1 ≤ S < 1. The cast to int then truncates toward zero, squashing S to 0 (apart from the really rare exception where R = -2147483648, which gives S = -1 exactly):
r is (pretty much always) 1. I don't know why you got 38, but this will explain why you weren't getting your expected range.