
Why do most algorithms only take numbers between -1 and 1 as input or output?
Why do algorithms like Perlin noise, neural networks, and many others require their inputs and outputs to be between -1 and 1? Is there a reason for that?
In other words, why do values have to be normalized in algorithms like neural networks?
Can I use the actual numbers instead of numbers between -1 and 1?

Second question:
Is gradient interpolation/lerp a must in Perlin noise and simplex noise?
What does it mean?
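For reference, "lerp" is short for linear interpolation: blending two values by a parameter t in [0, 1]. Perlin noise blends gradient contributions this way, usually after smoothing t with a fade curve. A minimal sketch in Python (the fade polynomial is the one from Ken Perlin's improved noise; the rest is purely illustrative):

def lerp(a, b, t):
    # Linear interpolation: returns a when t == 0, b when t == 1.
    return a + t * (b - a)

def fade(t):
    # Perlin's smoothing curve 6t^5 - 15t^4 + 10t^3; it eases t so the
    # blend has zero first and second derivatives at t = 0 and t = 1,
    # which removes visible grid artifacts in the noise.
    return t * t * t * (t * (t * 6 - 15) + 10)

# Blend two gradient contributions with a smoothed parameter.
print(lerp(-0.4, 0.7, fade(0.25)))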

Third question (not very important):
How do I do infinite terrain using chunks?
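A common pattern, sketched below in Python (names and chunk size are illustrative, not a specific library's API): divide the world into fixed-size chunks, generate each chunk lazily from its coordinates, and cache the results.

CHUNK_SIZE = 16  # illustrative chunk width, in tiles

def chunk_coords(world_x, world_y):
    # Map a world position to the coordinates of the chunk containing it.
    # Python's floor division also handles negative coordinates correctly.
    return world_x // CHUNK_SIZE, world_y // CHUNK_SIZE

chunks = {}  # cache of generated chunks, keyed by chunk coordinates

def get_chunk(cx, cy, generate):
    # Generate a chunk the first time it is requested, so the terrain is
    # effectively infinite without precomputing everything up front.
    if (cx, cy) not in chunks:
        chunks[(cx, cy)] = generate(cx, cy)
    return chunks[(cx, cy)]

In practice you would call get_chunk(*chunk_coords(player_x, player_y), generate=your_terrain_function) for the chunks near the player and discard far-away ones.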
This probably would have been better posted in the 'Bar' sub-forum, since at this point none of the three questions has anything to do with Python coding. As to the questions themselves, a little time with your favorite search engine will probably supply you with many answers, from simple to complex.

As to your first question, my guess would be that it just makes computing and graphing easier by keeping values at a manageable size. Since there are infinitely many numbers between 0 and 1, and between 0 and -1, whatever number you choose can be scaled into that range, just prefixed by a decimal point. Just a guess, though.
Well, numbers from -1 to 1 have certain properties that can be useful. For example, if you take any two numbers in the range -1 to 1 and multiply them by each other, the result will be between -1 and 1. That is not true for the range -2 to 2. If you're doing an iterative process this can be useful.
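A quick illustration of that closure property in plain Python (the starting values and factors are arbitrary):

def iterate(x, factor, steps):
    # Repeatedly multiply x by factor, as an iterative process would.
    for _ in range(steps):
        x *= factor
    return x

# Both inputs in [-1, 1]: the result stays in [-1, 1] (it shrinks toward 0).
print(iterate(0.9, 0.8, 10))
# Inputs outside [-1, 1]: the magnitude grows without bound instead.
print(iterate(1.5, 1.5, 10))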
Learned something new, thanks.
Numbers in the range -1 to 1, or 0 to 1, are often used for scaling things. Having studied a lot of maths, I can say it is common to see ranges like this.
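For completeness, a minimal min-max scaling sketch in plain Python, one common way to map arbitrary values into 0 to 1 and then shift them into -1 to 1 (the data here is made up for illustration):

def minmax_scale(values):
    # Map arbitrary values linearly into [0, 1].
    # Assumes the values are not all identical (otherwise hi - lo is 0).
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

data = [3.0, 10.0, 45.5, 7.25]
scaled01 = minmax_scale(data)              # values now in [0, 1]
scaled11 = [2 * v - 1 for v in scaled01]   # shift [0, 1] into [-1, 1]
print(scaled01)
print(scaled11)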
Thanks everyone.