«The Story of a Tree» solved using depth-first search




If N is an approximation to  , a better approximation can be found by using the Taylor series of the square root function:

 

The radical or root may be represented by the infinite series:

 

with  . This expression can be derived from the binomial series.




An unknown Babylonian mathematician correctly calculated the square root of 2 to three sexagesimal «digits» after the 1, but it is not known exactly how. The Babylonians knew how to approximate a hypotenuse using

 
 

The denominator in the fraction corresponds to the nth root. In the case above the denominator is 2, hence the equation specifies that the square root is to be found. The same identity is used when computing square roots with logarithm tables or slide rules.

Lucas sequence method

The Lucas sequence of the first kind Un(P,Q) is defined by the recurrence relations:

and its characteristic equation is:

It has the discriminant   and the roots:

So when we want  , we can choose   and  , and then calculate   using   and   for large values of  .
The most effective way to calculate   and   is:

Then, when  :

A two-variable iterative method

The initialization step of this method is

 
 

while the iterative steps read

 
 

Then,   (while  ).

The convergence of  , and therefore also of  , is quadratic.

The proof of the method is rather easy. First, rewrite the iterative definition of   as

 .

Then it is straightforward to prove by induction that

 
 

This can be used to construct a rational approximation to the square root by beginning with an integer. If   is an integer chosen so   is close to  , and   is the difference whose absolute value is minimized, then the first iteration can be written as:

 

Using the same example as given with the Babylonian method, let   Then, the first iteration gives

 

Likewise the second iteration gives

 
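The formulas of this method did not survive in this copy, so the following Python sketch implements one standard two-variable formulation as an assumption on my part: a₀ = S, c₀ = S − 1, then aₙ₊₁ = aₙ − aₙcₙ/2 and cₙ₊₁ = cₙ²(cₙ − 3)/4, valid for 0 < S < 3, with aₙ → √S and cₙ → 0.

def two_variable_sqrt(S, iterations=6):
    # Assumed formulation (see the note above); valid for 0 < S < 3.
    a, c = float(S), float(S) - 1.0
    for _ in range(iterations):
        a = a - a * c / 2.0
        c = c * c * (c - 3.0) / 4.0
    return a      # a converges quadratically to sqrt(S); c converges to 0

print(two_variable_sqrt(2.0))   # ~1.41421356...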

In this answer, to be clear, I presume the reader understands the difference between modern shells like bash and zsh and portable POSIX shells like dash.

I believe there is not much to explain here, since the highly voted answers already do a good job of covering most of it.

Still, if anything needs further explanation, don't hesitate to comment; I will do my best to fill in the gaps.


It was once conjectured that all polynomial equations could be solved algebraically (that is, that all roots of a polynomial could be expressed in terms of a finite number of radicals and elementary operations). However, while this is true for third degree polynomials (cubics) and fourth degree polynomials (quartics), the Abel–Ruffini theorem (1824) shows that this is not true in general when the degree is 5 or greater. For example, the solutions of the equation

 

cannot be expressed in terms of radicals. (cf. quintic equation)

  1. In addition to the principal square root, there is a negative square root equal in magnitude but opposite in sign to the principal square root, except for zero, which has only the single square root zero (a double root).
  2. The factors two and six are used because they approximate the geometric means of the lowest and highest possible values with the given number of digits:   and  .
  3. The unrounded estimate has a maximum absolute error of 2.65 at 100 and a maximum relative error of 26.5% at y = 1, 10 and 100.
  4. If the number is exactly halfway between two squares, like 30.5, guess the higher number, which is 6 in this case.
  5. This is incidentally the equation of the tangent line to y = x² at y = 1.

Combinations of methods

Brent’s method is a combination of the bisection method, the secant method and inverse quadratic interpolation. At every iteration, Brent’s method decides which method out of these three is likely to do best, and proceeds by doing a step according to that method. This gives a robust and fast method, which therefore enjoys considerable popularity.

Ridders’ method is a hybrid method that uses the value of the function at the midpoint of the interval to perform an exponential interpolation to the root. This gives fast convergence, with a guarantee of requiring at most twice as many iterations as the bisection method.

Many root-finding processes work by interpolation. This consists of using the last computed approximate values of the root to approximate the function by a polynomial of low degree, which takes the same values at these approximate roots. Then the root of the polynomial is computed and used as a new approximate value of the root of the function, and the process is iterated.

Two values allow interpolating a function by a polynomial of degree one (that is approximating the graph of the function by a line). This is the basis of the secant method. Three values define a quadratic function, which approximates the graph of the function by a parabola. This is Muller’s method.

Regula falsi is also an interpolation method; it differs from the secant method in that the two points used for interpolating by a line are not necessarily the last two computed points.

The problem I’m working on is this:

I’m not sure what I’m supposed to be doing here. I don’t believe my base case is correct for this function. I don’t have a good grasp on what exactly the problem is asking.

public class rootchecker {
    public static void main(String[] args) {
        System.out.println(squareRoot(125));
        System.out.println(squareRoot(4));
    }

    // helper
    public static double squareRoot(double x) {
        return squareRootGuess(x, x - 1);
    }

    // recursive
    public static double squareRootGuess(double x, double g) {
        if (x == 1) {
            return 1;
        } else {
            double ans = (g + (x / g) / 2);
            return squareRootGuess(x - 1, ans);
        }
    }
}

public class rootchecker {
    public static void main(String[] args) {
        System.out.println(squareRoot(125));
        System.out.println(squareRoot(4));
    }

    // helper
    public static double squareRoot(double x) {
        return squareRootGuess(x, x - 1);
    }

    // recursive
    public static double squareRootGuess(double x, double g) {
        if (Math.abs(Math.sqrt(x) - g) < 0.001) {
            return g;
        } else {
            double ans = (g + (x / g) / 2);
            return squareRootGuess(x, ans);
        }
    }
}

Continuing on for the problem, is my logic correct for the else statement? I don’t believe it is, considering I get a Stack Overflow Error.


Iterative methods for reciprocal square roots

  • Applying Newton’s method to the equation   produces a method that converges quadratically using three multiplications per step (see the sketch after this list):
     
  • Another iteration is obtained by Halley’s method, which is the Householder’s method of order two. This converges cubically, but involves five multiplications per iteration:
     , and
     .
  • If doing fixed-point arithmetic, the multiplication by 3 and division by 8 can be implemented using shifts and adds. If using floating-point, Halley’s method can be reduced to four multiplications per iteration by precomputing   and adjusting all the other constants to compensate:
     , and
     .
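A minimal Python sketch of the quadratically convergent iteration from the first bullet above (Newton's method, presumably applied to the equation 1/x² = S; the names and seed value are illustrative choices of mine):

def rsqrt_newton(S, x0, iterations=5):
    # Newton step for f(x) = 1/x**2 - S:
    #   x_{n+1} = x_n * (3 - S * x_n**2) / 2   (three multiplications per step)
    # A reasonable seed is required; the iteration converges for 0 < x0 < sqrt(3/S).
    x = x0
    for _ in range(iterations):
        x = x * (3.0 - S * x * x) / 2.0
    return x

print(rsqrt_newton(2.0, 0.7))   # ~0.7071067811865..., i.e. 1/sqrt(2)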

The first way of writing Goldschmidt’s algorithm begins

 
  (typically using a table lookup)
 
 
 
 
 
 

until   is sufficiently close to 1, or a fixed number of iterations. The iterations converge to

 , and
 .

Note that it is possible to omit either   or   from the computation, and if both are desired then   may be used at the end rather than computing it in each iteration.
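The update formulas for this first form did not survive in this copy; the following Python sketch uses one common formulation, which is an assumption on my part, with Y0 standing in for the initial table-lookup estimate of the reciprocal square root:

def goldschmidt_sqrt(S, Y0, iterations=4):
    b = S
    y = Y0         # running estimate of 1/sqrt(S); Y0 plays the role of the table lookup
    x = S * Y0     # running estimate of sqrt(S)
    Y = Y0
    for _ in range(iterations):
        b = b * Y * Y            # b converges to 1
        Y = (3.0 - b) / 2.0      # next correction factor
        x = x * Y
        y = y * Y
    return x, y                  # approximately (sqrt(S), 1/sqrt(S))

print(goldschmidt_sqrt(2.0, 0.7))   # ~ (1.4142135..., 0.7071067...)

In this sketch x and y stay coupled (x = S·y throughout), which is why, as noted above, either quantity can be recovered from the other at the end instead of being carried through every iteration.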

A second form, using fused multiply-add operations, begins

  (typically using a table lookup)
 
 
 
 
 

until   is sufficiently close to 0, or a fixed number of iterations. This converges to

 , and
 .

Bracketing methods determine successively smaller intervals (brackets) that contain a root. When the interval is small enough, a root has effectively been found. They generally use the intermediate value theorem, which asserts that if a continuous function has values of opposite signs at the end points of an interval, then the function has at least one root in the interval. Therefore, they require starting with an interval such that the function takes opposite signs at its end points. However, in the case of polynomials there are other methods (Descartes’ rule of signs, Budan’s theorem and Sturm’s theorem) for getting information on the number of roots in an interval. They lead to efficient algorithms for real-root isolation of polynomials, which ensure finding all real roots with a guaranteed accuracy.

False position (regula falsi)

The false position method, also called the regula falsi method, is similar to the bisection method, but instead of using the midpoint of the interval it uses the x-intercept of the line that connects the plotted function values at the endpoints of the interval, that is

 

False position is similar to the secant method, except that, instead of retaining the last two points, it makes sure to keep one point on either side of the root. The false position method can be faster than the bisection method and will never diverge like the secant method; however, it may fail to converge in some naive implementations due to roundoff errors that may lead to a wrong sign for the computed function value; typically, this may occur if the rate of variation of the function is large in the neighborhood of the root.
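A minimal Python sketch of the regula falsi step just described; the test function and bracket are illustrative choices of mine:

def false_position(f, a, b, tol=1e-12, max_iter=100):
    # Keep a bracket [a, b] with f(a) and f(b) of opposite signs, and replace one
    # endpoint with the x-intercept of the line through (a, f(a)) and (b, f(b)).
    fa, fb = f(a), f(b)
    assert fa * fb < 0, 'f(a) and f(b) must have opposite signs'
    c = a
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # x-intercept of the secant line
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:          # the root lies in [a, c]
            b, fb = c, fc
        else:                    # the root lies in [c, b]
            a, fa = c, fc
    return c

print(false_position(lambda x: x * x - 2.0, 1.0, 2.0))   # ~1.41421356...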


More precisely, if x is our initial guess of √S and ε is the error in our estimate such that S = (x + ε)², then we can expand the binomial

 

and solve for the error term

  since  .

Therefore, we can compensate for the error and update our old estimate as

 
  1. Begin with an arbitrary positive starting value x0 (the closer to the actual square root of S, the better).
  2. Let xn+1 be the average of xn and S/xn (using the arithmetic mean to approximate the geometric mean).
  3. Repeat step 2 until the desired accuracy is achieved.

It can also be represented as:

 
 
 

This algorithm works equally well in the p-adic numbers, but cannot be used to identify real square roots with p-adic square roots; one can, for example, construct a sequence of rational numbers by this method that converges to +3 in the reals, but to −3 in the 2-adics.

To calculate √S, where S = 125348, to six significant figures, use the rough estimation method above to get

 

Therefore, √S ≈ 354.045.
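A minimal Python sketch of this procedure; the seed of 350 below is an illustrative stand-in for the rough estimate, which is elided in this copy:

import math

def heron_sqrt(S, x0, iterations=4):
    # Babylonian / Heron iteration: repeatedly average x with S/x.
    x = x0
    for _ in range(iterations):
        x = 0.5 * (x + S / x)
    return x

print(heron_sqrt(125348, 350.0))   # ~354.045 to six significant figures
print(math.sqrt(125348))           # reference value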

Semilog graphs comparing the speed of convergence of Heron’s method to find the square root of 100 for different initial guesses. Negative guesses converge to the negative root, positive guesses to the positive root. Note that values closer to the root converge faster, and all approximations are overestimates.

Suppose that x0 > 0 and S > 0. Then for any natural number n, xn > 0. Let the relative error in xn be defined by

 

Then it can be shown that

 

And thus that

 

and consequently that convergence is assured, and quadratic.

Worst case for convergence

 

Thus in any case,

 
 
 
 
 
 
 
 

Rounding errors will slow the convergence. It is recommended to keep at least one extra digit beyond the desired accuracy of the xn being calculated to minimize round-off error.

Solving an equation f(x) = g(x) is the same as finding the roots of the function h(x) = f(x) – g(x). Thus root-finding algorithms allow solving any equation defined by continuous functions. However, most root-finding algorithms do not guarantee that they will find all the roots; in particular, if such an algorithm does not find any root, that does not mean that no root exists.

Most numerical root-finding methods use iteration, producing a sequence of numbers that hopefully converges towards the root as its limit. They require one or more initial guesses of the root as starting values, then each iteration of the algorithm produces a successively more accurate approximation to the root. Since the iteration must be stopped at some point, these methods produce an approximation to the root, not an exact solution. Many methods compute subsequent values by evaluating an auxiliary function on the preceding values. The limit is thus a fixed point of the auxiliary function, which is chosen for having the roots of the original equation as fixed points, and for converging rapidly to these fixed points.

The behavior of general root-finding algorithms is studied in numerical analysis. However, for polynomials, root-finding study belongs generally to computer algebra, since algebraic properties of polynomials are fundamental for the most efficient algorithms. The efficiency of an algorithm may depend dramatically on the characteristics of the given functions. For example, many algorithms use the derivative of the input function, while others work on every continuous function. In general, numerical algorithms are not guaranteed to find all the roots of a function, so failing to find a root does not prove that there is no root. However, for polynomials, there are specific algorithms that use algebraic properties for certifying that no root is missed, and locating the roots in separate intervals (or disks for complex roots) that are small enough to ensure the convergence of numerical methods (typically Newton’s method) to the unique root so located.

Methods of computing square roots are numerical analysis algorithms for approximating the principal, or non-negative, square root (usually denoted √S) of a real number. Arithmetically, it means given S, a procedure for finding a number which, when multiplied by itself, yields S; algebraically, it means a procedure for finding the non-negative root of the equation x² = S; geometrically, it means given two line segments, a procedure for constructing their geometric mean.

The continued fraction representation of a real number can be used instead of its decimal or binary expansion and this representation has the property that the square root of any rational number (which is not already a perfect square) has a periodic, repeating expansion, similar to how rational numbers have repeating expansions in the decimal notation system.

The method employed depends on what the result is to be used for (i.e. how accurate it has to be), how much effort one is willing to put into the procedure, and what tools are at hand. The methods may be roughly classified as those suitable for mental calculation, those usually requiring at least paper and pencil, and those which are implemented as programs to be executed on a digital electronic computer or other computing device. Algorithms may take into account convergence (how many iterations are required to achieve a specified precision), computational complexity of individual operations (i.e. division) or iterations, and error propagation (the accuracy of the final result).

Procedures for finding square roots (particularly the square root of 2) have been known since at least the period of ancient Babylon in the 17th century BCE. Heron’s method, from first-century Egypt, was the first ascertainable algorithm for computing the square root. Modern analytic methods began to be developed after the introduction of the Arabic numeral system to western Europe in the early Renaissance. Today, nearly all computing devices have a fast and accurate square root function, either as a programming language construct, a compiler intrinsic or library function, or as a hardware operator, based on one of the described procedures.

Every complex number other than 0 has n different nth roots.

The two square roots of a complex number are always negatives of each other. For example, the square roots of are and , and the square roots of are

 

If we express a complex number in polar form, then the square root can be obtained by taking the square root of the radius and halving the angle:

 

A principal root of a complex number may be chosen in various ways, for example

 

which introduces a branch cut in the complex plane along the positive real axis with the condition , or along the negative real axis with .

Using the first (respectively, the last) branch cut, the principal square root   maps   to the half plane with non-negative imaginary (respectively, real) part. The last branch cut is presupposed in mathematical software like Matlab or Scilab.

Roots of unity

The number 1 has n different nth roots in the complex plane, namely

 
 

These roots are evenly spaced around the unit circle in the complex plane, at angles which are multiples of  . For example, the square roots of unity are 1 and −1, and the fourth roots of unity are 1,  , −1, and  .

Every complex number has n different nth roots in the complex plane. These are

 
 

In polar form, a single nth root may be found by the formula

 

Here r is the magnitude (the modulus, also called the absolute value) of the number whose root is to be taken; if the number can be written as a+bi then  . Also,   is the angle formed as one pivots on the origin counterclockwise from the positive horizontal axis to a ray going from the origin to the number; it has the properties that     and  

Thus finding nth roots in the complex plane can be segmented into two steps. First, the magnitude of all the nth roots is the nth root of the magnitude of the original number. Second, the angle between the positive horizontal axis and a ray from the origin to one of the nth roots is  , where   is the angle defined in the same way for the number whose root is being taken. Furthermore, all n of the nth roots are at equally spaced angles from each other.
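A minimal Python sketch of this two-step procedure, using the cmath module (the example input −1 is an illustrative choice of mine):

import cmath

def nth_roots(z, n):
    # Magnitude: the nth root of |z|.  Angles: (theta + 2*pi*k) / n for k = 0..n-1,
    # so the roots are equally spaced around a circle of that radius.
    r = abs(z) ** (1.0 / n)
    theta = cmath.phase(z)
    return [r * cmath.exp(1j * (theta + 2 * cmath.pi * k) / n) for k in range(n)]

# Example: the four 4th roots of -1; raising each back to the 4th power recovers -1.
for w in nth_roots(-1, 4):
    print(w, w ** 4)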

If n is even, a complex number’s nth roots, of which there are an even number, come in additive inverse pairs, so that if a number r₁ is one of the nth roots then r₂ = −r₁ is another. This is because raising the latter’s coefficient −1 to the nth power for even n yields 1: that is, (−r₁)ⁿ = (−1)ⁿ × r₁ⁿ = r₁ⁿ.

As with square roots, the formula above does not define a continuous function over the entire complex plane, but instead has a branch cut at points where θ / n is discontinuous.

In mathematics, an nth root of a number x is a number r which, when raised to the power n, yields x:


where n is a positive integer, sometimes called the degree of the root. A root of degree 2 is called a square root and a root of degree 3, a cube root. Roots of higher degree are referred to by using ordinal numbers, as in fourth root, twentieth root, etc. The computation of an nth root is a root extraction.

For example, 3 is a square root of 9, since 3² = 9, and −3 is also a square root of 9, since (−3)² = 9.

Any non-zero number, considered as a complex number, has n different complex nth roots, including the real ones (at most two). The nth root of 0 is zero for all positive integers n, since 0ⁿ = 0. In particular, if n is even and x is a positive real number, one of its nth roots is real and positive, one is negative, and the others (when n > 2) are non-real complex numbers; if n is even and x is a negative real number, none of the nth roots is real. If n is odd and x is real, one nth root is real and has the same sign as x, while the other (n − 1) roots are not real. Finally, if x is not real, then none of its nth roots are real.

Roots of real numbers are usually written using the radical symbol or radix √, with √x denoting the positive square root of x if x is positive; for higher roots, ⁿ√x denotes the real nth root if n is odd, and the positive nth root if n is even and x is positive. In the other cases, the symbol is not commonly used, as it is ambiguous. In the expression ⁿ√x, the integer n is called the index and x is called the radicand.

When complex nth roots are considered, it is often useful to choose one of the roots, called the principal root, as a principal value. The common choice is to choose the principal nth root of x as the nth root with the greatest real part, and when there are two (for x real and negative), the one with a positive imaginary part. This makes the nth root a function that is real and positive for x real and positive, and is continuous in the whole complex plane, except for values of x that are real and negative.

A difficulty with this choice is that, for a negative real number and an odd index, the principal nth root is not the real one. For example,   has three cube roots,  ,   and  . The real cube root is   and the principal cube root is  .

Roots can also be defined as special cases of exponentiation, where the exponent is a fraction:


Roots are used for determining the radius of convergence of a power series with the root test. The th roots of 1 are called roots of unity and play a fundamental role in various areas of mathematics, such as number theory, theory of equations, and Fourier transform.

I found this problem on HackerRank.


Learning this fact, Bob invented an exciting new game and decided to play it with Alice. The rules of the game are described below:

  1. Bob picks a random node to be the tree’s root and keeps the identity of the chosen node a secret from Alice. Each node has an equal probability of being picked as the root.

  2. For each correct guess, Alice earns one point. Alice wins the game if she earns at least k points (i.e., at least k of her guesses were true).

Alice and Bob play q games. Given the tree, Alice’s guesses, and the value of k for each game, find the probability that Alice will win the game and print it on a new line as a reduced fraction in the format p/q.

Solution: There is a tree with some edges marked with arrows. For every vertex in the tree you have to count how many arrows point towards it. For one fixed vertex this may be done via one depth-first search (DFS): every arrow traversed during the DFS against its own direction adds 1. If you know the answer for vertex v, you can compute the answer for an adjacent vertex u in O(1).

def gcd(a, b):
    if not b:
        return a
    return gcd(b, a % b)

def dfs1(m, guess, root, seen):
    '''keep 1 node as root and calculate how many arrows are pointing towards it'''
    count = 0
    stack = []
    stack.append(root)
    seen.add(root)
    while len(stack):
        root = stack.pop(len(stack) - 1)
        for i in m[root]:
            if i not in seen:
                seen.add(i)
                count += (1 if root in guess and i in guess[root] else 0)
                stack.append(i)
    return count

def dfs2(m, guess, root, seen, cost, k):
    '''now make every node the root and calculate how many arrows point towards it;
    if u is the root node for which dfs1 calculated n (number of arrows pointing
    towards the root), then for v (a node adjacent to u) the count becomes n-1,
    since v is made the parent in this step (only if there is a guess; with no
    guess the count is unchanged)'''
    stack = []
    stack.append((root, cost))
    seen.add(root)
    t_win = 0
    while len(stack):
        (root, cost) = stack.pop(len(stack) - 1)
        t_win += cost >= k
        for i in m[root]:
            if i not in seen:
                seen.add(i)
                stack.append((i, cost
                              - (1 if root in guess and i in guess[root] else 0)
                              + (1 if i in guess and root in guess[i] else 0)))
    return t_win

q = int(raw_input().strip())
for a0 in xrange(q):
    n = int(raw_input().strip())
    m = {}
    guess = {}
    seen = set()
    for a1 in xrange(n - 1):
        u, v = raw_input().strip().split(' ')
        u, v = [int(u), int(v)]
        if u not in m:
            m[u] = []
        m[u].append(v)
        if v not in m:
            m[v] = []
        m[v].append(u)
    g, k = raw_input().strip().split(' ')
    g, k = [int(g), int(k)]
    for a1 in xrange(g):
        u, v = raw_input().strip().split(' ')
        u, v = [int(u), int(v)]
        if u not in guess:
            guess[u] = []
        guess[u].append(v)
    cost = dfs1(m, guess, 1, seen)
    seen = set()
    win = dfs2(m, guess, 1, seen, cost, k)
    g = gcd(win, n)
    print("{0}/{1}".format(win / g, n / g))


Although all root-finding algorithms proceed by iteration, an iterative root-finding method generally uses a specific type of iteration, consisting of defining an auxiliary function, which is applied to the last computed approximations of a root for getting a new approximation. The iteration stops when a fixed point (up to the desired precision) of the auxiliary function is reached, that is when the new computed value is sufficiently close to the preceding ones.

Newton’s method (and similar derivative-based methods)

Newton’s method assumes the function f to have a continuous derivative. Newton’s method may not converge if started too far away from a root. However, when it does converge, it is faster than the bisection method, and is usually quadratic. Newton’s method is also important because it readily generalizes to higher-dimensional problems. Newton-like methods with higher orders of convergence are the Householder’s methods. The first one after Newton’s method is Halley’s method with cubic order of convergence.

Replacing the derivative in Newton’s method with a finite difference, we get the secant method. This method does not require the computation (nor the existence) of a derivative, but the price is slower convergence (the order is approximately 1.6 (golden ratio)). A generalization of the secant method in higher dimensions is Broyden’s method.

If we use a polynomial fit to remove the quadratic part of the finite difference used in the Secant method, so that it better approximates the derivative, we obtain Steffensen’s method, which has quadratic convergence, and whose behavior (both good and bad) is essentially the same as Newton’s method but does not require a derivative.

Fixed point iteration method

We can use the fixed-point iteration to find the root of a function. Given a function   which we have set to zero to find the root ( ), we rewrite the equation in terms of   so that   becomes   (note, there are often many   functions for each   function). Next, we relabel each side of the equation as   so that we can perform the iteration. Next, we pick a value for   and perform the iteration until it converges towards a root of the function. If the iteration converges, it will converge to a root. The iteration will only converge if  .

 ,
 ,
 ,
 , or
 .
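A minimal Python sketch of fixed-point iteration; the function f(x) = cos(x) − x and its rewriting x = g(x) = cos(x) are a hypothetical example of mine, since the document's own example equations are missing above:

import math

def fixed_point(g, x0, tol=1e-12, max_iter=200):
    # Iterate x = g(x) until successive values agree to within tol.
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Convergence is expected here because |g'(x)| = |sin(x)| < 1 near the root.
print(fixed_point(math.cos, 1.0))   # ~0.739085133215..., the root of cos(x) - x = 0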

The appearance of complex values in interpolation methods can be avoided by interpolating the inverse of f, resulting in the inverse quadratic interpolation method. Again, convergence is asymptotically faster than the secant method, but inverse quadratic interpolation often behaves poorly when the iterates are not close to the root.

This is a method to find each digit of the square root in a sequence. This method is based on the binomial theorem and is basically an inverse algorithm solving  . It is slower than the Babylonian method, but it has several advantages:

  • It can be easier for manual calculations.
  • Every digit of the root found is known to be correct, i.e., it does not have to be changed later.
  • If the square root has an expansion that terminates, the algorithm terminates after the last digit is found. Thus, it can be used to check whether a given integer is a square number.
  • The algorithm works for any base, and naturally, the way it proceeds depends on the base chosen.
  • Drawbacks are that the algorithm becomes quite unwieldy for higher roots and that it does not tolerate inaccurate guesses or inaccurate sub-calculations: unlike self-correcting approximations such as Newton’s method, such errors make every following digit of the result wrong. Furthermore, even though this algorithm is efficient enough on paper, it is too expensive for software implementations: the intermediate quantities become larger and larger and load the memory, while still only allowing digit-by-digit progress, so the algorithm becomes slower with every following digit.

Napier’s bones include an aid for the execution of this algorithm. The shifting nth root algorithm is a generalization of this method.

First, consider the case of finding the square root of a number , that is the square of a two-digit number , where is the tens digit and is the units digit. Specifically:

 

Now using the digit-by-digit algorithm, we first determine the value of the tens digit: it is the largest digit whose square is less than or equal to the number obtained by removing the two rightmost digits of the radicand.

In the next iteration, we pair the digits, multiply by 2, and place it in the tenth’s place while we try to figure out what the value of is.

Since this is a simple case where the answer is a perfect square root , the algorithm stops here.

The same idea can be extended to any arbitrary square root computation next. Suppose we are able to find the square root of by expressing it as a sum of n positive numbers such that

 .

By repeatedly applying the basic identity

 

the right-hand-side term can be expanded as

 

This expression allows us to find the square root by sequentially guessing the values of  s. Suppose that the numbers   have already been guessed, then the m-th term of the right-hand-side of above summation is given by   where   is the approximate square root found so far. Now each new guess   should satisfy the recursion

 

such that   for all   with initialization   When   the exact square root has been found; if not, then the sum of  s gives a suitable approximation of the square root, with   being the approximation error.

For example, in the decimal number system we have

 

where   are place holders and the coefficients  . At any m-th stage of the square root calculation, the approximate root found so far,   and the summation term   are given by

 
 

Here since the place value of   is an even power of 10, we only need to work with the pair of most significant digits of the remaining term   at any m-th stage. The section below codifies this procedure.

Decimal (base 10)

Write the original number in decimal form. The numbers are written similar to the long division algorithm, and, as in long division, the root will be written on the line above. Now separate the digits into pairs, starting from the decimal point and going both left and right. The decimal point of the root will be above the decimal point of the square. One digit of the root will appear above each pair of digits of the square.

  1. Starting on the left, bring down the most significant (leftmost) pair of digits not yet used (if all the digits have been used, write «00») and write them to the right of the remainder from the previous step (on the first step, there will be no remainder). In other words, multiply the remainder by 100 and add the two digits. This will be the current value c.
  2. Find p, y and x, as follows:
    • Let p be the part of the root found so far, ignoring any decimal point. (For the first step, p = 0.)
    • Determine the greatest digit x such that  . We will use a new variable y = x(20p + x).
      • Note: 20p + x is simply twice p, with the digit x appended to the right.
      • Note: x can be found by guessing what c/(20·p) is and doing a trial calculation of y, then adjusting x upward or downward as necessary.
    • Place the digit   as the next digit of the root, i.e., above the two digits of the square you just brought down. Thus the next p will be the old p times 10 plus x.
  3. Subtract y from c to form a new remainder.
  4. If the remainder is zero and there are no more digits to bring down, then the algorithm has terminated. Otherwise go back to step 1 for another iteration.

Find the square root of 152.2756.

     1  2.  3  4
   / \/ 01 52.27 56
        01          1*1 <= 1 < 2*2                x = 1
        01          y = x*x = 1*1 = 1
        00 52       22*2 <= 52 < 23*3             x = 2
        00 44       y = (20+x)*x = 22*2 = 44
           08 27    243*3 <= 827 < 244*4          x = 3
           07 29    y = (240+x)*x = 243*3 = 729
              98 56 2464*4 <= 9856 < 2465*5       x = 4
              98 56 y = (2460+x)*x = 2464*4 = 9856
              00 00 Algorithm terminates: Answer = 12.34
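The steps above translate directly into code; here is a minimal Python sketch of my own, assuming a non-negative integer radicand and producing a fixed number of digits after the decimal point:

def digit_by_digit_sqrt(n, frac_pairs=4):
    # Digit-by-digit square root of a non-negative integer n, following the steps above:
    # bring down one pair of digits at a time and pick the largest digit x
    # with x * (20*p + x) <= c, where p is the root found so far.
    s = str(n)
    if len(s) % 2:
        s = '0' + s                    # pad so the digits pair up correctly
    pairs = [s[i:i + 2] for i in range(0, len(s), 2)]
    pairs += ['00'] * frac_pairs       # extra pairs give digits after the decimal point
    p, remainder, digits = 0, 0, []
    for pair in pairs:
        c = remainder * 100 + int(pair)        # step 1: bring down the next pair
        x = 9
        while x * (20 * p + x) > c:            # step 2: greatest admissible digit
            x -= 1
        y = x * (20 * p + x)
        digits.append(str(x))
        remainder = c - y                      # step 3: new remainder
        p = 10 * p + x                         # extend the root found so far
    int_digits = len(s) // 2
    return ''.join(digits[:int_digits]) + '.' + ''.join(digits[int_digits:])

print(digit_by_digit_sqrt(1522756))   # 1234.0000  (152.2756 scaled by 10^4; answer 12.34 scaled by 10^2)
print(digit_by_digit_sqrt(152))       # 12.3288    (sqrt(152) = 12.32882800...)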

Binary numeral system (base 2)

This section uses the formalism from the digit-by-digit calculation section above, with the slight variation that we let  , with each   or  .
We iterate all  , from   down to  , and build up an approximate solution  , the sum of all   for which we have determined the value.
To determine if   equals   or  , we let  . If   (i.e. the square of our approximate solution including   does not exceed the target square) then  , otherwise   and  .
To avoid squaring   in each step, we store the difference   and incrementally update it by setting   with  .
Initially, we set   for the largest   with  .

As an extra optimization, we store   and  , the two terms of   in case that   is nonzero, in separate variables  ,  :

 
 
 

  and   can be efficiently updated in each step:

 
 
 , which is the final result returned in the function below.

// A C implementation consistent with the comments that survive in this copy;
// the variable names follow the X, c, d notation of this section.
#include <stdint.h>
#include <assert.h>

int32_t isqrt(int32_t n)
{
    assert(n >= 0 && "sqrt input should be non-negative");

    int32_t x = n;          // Xₙ₊₁, the running remainder
    int32_t c = 0;          // cₙ
    int32_t d = 1 << 30;    // dₙ which starts at the highest power of four <= n
                            // The second-to-top bit is set. // Same as ((unsigned) INT32_MAX + 1) / 2.
    while (d > n)
        d >>= 2;

    while (d != 0) {
        if (x >= c + d) {          // if Xₘ₊₁ ≥ Yₘ then aₘ = 2ᵐ
            x -= c + d;            // Xₘ = Xₘ₊₁ - Yₘ
            c = (c >> 1) + d;      // cₘ₋₁ = cₘ/2 + dₘ (aₘ is 2ᵐ)
        } else {
            c >>= 1;               // cₘ₋₁ = cₘ/2 (aₘ is 0)
        }
        d >>= 2;                   // dₘ₋₁ = dₘ/4
    }
    return c;                      // c₀ is the integer square root of n
}

Many iterative square root algorithms require an initial seed value. The seed must be a non-zero positive number; it should be between 1 and  , the number whose square root is desired, because the square root must be in that range. If the seed is far away from the root, the algorithm will require more iterations. If one initializes with   (or  ), then approximately   iterations will be wasted just getting the order of magnitude of the root. It is therefore useful to have a rough estimate, which may have limited accuracy but is easy to calculate. In general, the better the initial estimate, the faster the convergence. For Newton’s method (also called Babylonian or Heron’s method), a seed somewhat larger than the root will converge slightly faster than a seed somewhat smaller than the root.

In general, an estimate is pursuant to an arbitrary interval known to contain the root (such as  ). The estimate is a specific value of a functional approximation to   over the interval. Obtaining a better estimate involves either obtaining tighter bounds on the interval, or finding a better functional approximation to  . The latter usually means using a higher order polynomial in the approximation, though not all approximations are polynomial. Common methods of estimating include scalar, linear, hyperbolic and logarithmic. A decimal base is usually used for mental or paper-and-pencil estimating. A binary base is more suitable for computer estimates. In estimating, the exponent and mantissa are usually treated separately, as the number would be expressed in scientific notation.

Typically the number   is expressed in scientific notation as   where   and n is an integer, and the range of possible square roots is   where  .

Scalar methods divide the range into intervals, and the estimate in each interval is represented by a single scalar number. If the range is considered as a single interval, the arithmetic mean (5.5) or geometric mean ( ) times   are plausible estimates. The absolute and relative error for these will differ. In general, a single scalar will be very inaccurate. Better estimates divide the range into two or more intervals, but scalar estimates have inherently low accuracy.

 

This estimate has maximum absolute error of   at a = 100, and maximum relative error of 100% at a = 1.

For example, for   factored as  , the estimate is  .  , an absolute error of 246 and relative error of almost 70%.

A better estimate, and the standard method used, is a linear approximation to the function   over a small arc. If, as above, powers of the base are factored out of the number   and the interval reduced to  , a secant line spanning the arc, or a tangent line somewhere along the arc may be used as the approximation, but a least-squares regression line intersecting the arc will be more accurate.

A least-squares regression line minimizes the average difference between the estimate and the value of the function. Its equation is  . Reordering,  . Rounding the coefficients for ease of computation,

 

To divide by 10, subtract one from the exponent of  , or figuratively move the decimal point one digit to the left. For this formulation, any additive constant 1 plus a small increment will make a satisfactory estimate so remembering the exact number isn’t a burden. The approximation (rounded or not) using a single line spanning the range   is less than one significant digit of precision; the relative error is greater than 1/22, so less than 2 bits of information are provided. The accuracy is severely limited because the range is two orders of magnitude, quite large for this kind of estimation.

 

The maximum absolute errors occur at the high points of the intervals, at a=10 and 100, and are 0.54 and 1.7 respectively. The maximum relative errors are at the endpoints of the intervals, at a=1, 10 and 100, and are 17% in both cases. 17% or 0.17 is larger than 1/10, so the method yields less than a decimal digit of accuracy.

In some cases, hyperbolic estimates may be efficacious, because a hyperbola is also a convex curve and may lie along an arc of Y = x² better than a line. Hyperbolic estimates are more computationally complex, because they necessarily require a floating division. A near-optimal hyperbolic approximation to x² on the interval   is y = 190/(10 − x) − 20. Transposing, the square root is x = −190/(y + 20) + 10. Thus for  :

 

The floating division need be accurate to only one decimal digit, because the estimate overall is only that accurate, and can be done mentally. A hyperbolic estimate is better on average than scalar or linear estimates. It has maximum absolute error of 1.58 at 100 and maximum relative error of 16.0% at 10. For the worst case at a=10, the estimate is 3.67. If one starts with 10 and applies Newton-Raphson iterations straight away, two iterations will be required, yielding 3.66, before the accuracy of the hyperbolic estimate is exceeded. For a more typical case like 75, the hyperbolic estimate is 8.00, and 5 Newton-Raphson iterations starting at 75 would be required to obtain a more accurate result.

A method analogous to piece-wise linear approximation but using only arithmetic instead of algebraic equations, uses the multiplication tables in reverse: the square root of a number between 1 and 100 is between 1 and 10, so if we know 25 is a perfect square (5 × 5), and 36 is a perfect square (6 × 6), then the square root of a number greater than or equal to 25 but less than 36, begins with a 5. Similarly for numbers between other squares. This method will yield a correct first digit, but it is not accurate to one digit: the first digit of the square root of 35 for example, is 5, but the square root of 35 is almost 6.

The final operation is to multiply the estimate by the power of ten divided by 2, so for  ,

 

The method implicitly yields one significant digit of accuracy, since it rounds to the best first digit.

The method can be extended to 3 significant digits in most cases, by interpolating between the nearest squares bounding the operand. If  , then   is approximately k plus a fraction, the difference between   and k² divided by the difference between the two squares:

  where  

The final operation, as above, is to multiply the result by the power of ten divided by 2;

 

is a decimal digit and is a fraction that must be converted to decimal. It usually has only a single digit in the numerator, and one or two digits in the denominator, so the conversion to decimal can be done mentally.

Example: find the square root of 75. 75 = 75 × 10⁰, so the mantissa is 75 and the exponent is 0. From the multiplication tables, the square root of the mantissa must be 8 point something because 8 × 8 is 64, but 9 × 9 is 81, too big, so k is 8; the «something» is the decimal representation of the fraction. The fraction has numerator 75 − k² = 11 and denominator 81 − k² = 17. 11/17 is a little less than 12/18, which is 2/3 or .67, so guess .66 (it is ok to guess here, the error is very small). So the estimate is 8 + .66 = 8.66. √75 to three significant digits is 8.66, so the estimate is good to 3 significant digits. Not all such estimates using this method will be so accurate, but they will be close.

When working in the binary numeral system (as computers do internally), by expressing   as   where  , the square root   can be estimated as

 

which is the least-squares regression line to 3 significant digit coefficients.   has maximum absolute error of 0.0408 at  =2, and maximum relative error of 3.0% at  =1. A computationally convenient rounded estimate (because the coefficients are powers of 2) is:

 [Note 5]

which has maximum absolute error of 0.086 at 2 and maximum relative error of 6.1% at  =0.5 and  =2.0.

For  , the binary approximation gives  .  , so the estimate has an absolute error of 19 and relative error of 5.3%. The relative error is a little less than 1/24, so the estimate is good to 4+ bits.

An estimate for   good to 8 bits can be obtained by table lookup on the high 8 bits of  , remembering that the high bit is implicit in most floating point representations, and the bottom bit of the 8 should be rounded. The table is 256 bytes of precomputed 8-bit square root values. For example, for the index 11101101₂ representing 1.8515625₁₀, the entry is 10101110₂ representing 1.359375₁₀, the square root of 1.8515625₁₀ to 8-bit precision (2+ decimal digits).

Why I am posting this solution so many years after the OP asked

Well, if I see it correctly, there seems to be a missing piece of code above.

You see, there are many variables which have to be taken into account, and one of them is the combination of performance and reliability.


Computing principal roots

Using Newton’s method

The nth root of a number can be computed with Newton’s method, which starts with an initial guess and then iterates using the recurrence relation

 

until the desired precision is reached. For computational efficiency, the recurrence relation is commonly rewritten

 

This allows having only one exponentiation, and computing the first factor of each term once and for all.

The approximation is accurate to 25 decimal places and is good for 51.
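A minimal Python sketch of this recurrence; the stopping rule and starting value are illustrative choices of mine:

def nth_root_newton(A, n, x0=None, tol=1e-15, max_iter=200):
    # Newton's method for the principal nth root of A > 0:
    #   x_{k+1} = ((n - 1) * x_k + A / x_k**(n - 1)) / n
    x = float(x0) if x0 is not None else float(A)
    for _ in range(max_iter):
        x_new = ((n - 1) * x + A / x ** (n - 1)) / n
        if abs(x_new - x) <= tol * abs(x_new):
            return x_new
        x = x_new
    return x

print(nth_root_newton(4192, 3))   # ~16.1240..., the cube root worked digit by digit below
print(nth_root_newton(2, 2))      # ~1.41421356...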

Newton’s method can be modified to produce various generalized continued fractions for the nth root. For example,

 

Digit-by-digit calculation of principal roots of decimal (base 10) numbers

Write the original number in decimal form. The numbers are written similar to the long division algorithm, and, as in long division, the root will be written on the line above. Now separate the digits into groups of digits equating to the root being taken, starting from the decimal point and going both left and right. The decimal point of the root will be above the decimal point of the radicand. One digit of the root will appear above each group of digits of the original number.

  1. Starting on the left, bring down the most significant (leftmost) group of digits not yet used (if all the digits have been used, write «0» the number of times required to make a group) and write them to the right of the remainder from the previous step (on the first step, there will be no remainder). In other words, multiply the remainder by   and add the digits from the next group. This will be the current value c.
  2. Find p and x, as follows:
    • Let   be the part of the root found so far, ignoring any decimal point. (For the first step,  ).
    • Determine the greatest digit   such that  .
    • Place the digit   as the next digit of the root, i.e., above the group of digits you just brought down. Thus the next p will be the old p times 10 plus x.
  3. Subtract   from   to form a new remainder.
  4. If the remainder is zero and there are no more digits to bring down, then the algorithm has terminated. Otherwise go back to step 1 for another iteration.

Find the square root of 152.2756.

     1  2.  3  4
   / \/ 01 52.27 56
 01        10⁰·1·0⁰·1² + 10¹·2·0¹·1¹ ≤ 1 < 10⁰·1·0⁰·2² + 10¹·2·0¹·2¹              x = 1
 01        y = 10⁰·1·0⁰·1² + 10¹·2·0¹·1¹ = 1 + 0 = 1
 00 52     10⁰·1·1⁰·2² + 10¹·2·1¹·2¹ ≤ 52 < 10⁰·1·1⁰·3² + 10¹·2·1¹·3¹              x = 2
 00 44     y = 10⁰·1·1⁰·2² + 10¹·2·1¹·2¹ = 4 + 40 = 44
 08 27     10⁰·1·12⁰·3² + 10¹·2·12¹·3¹ ≤ 827 < 10⁰·1·12⁰·4² + 10¹·2·12¹·4¹         x = 3
 07 29     y = 10⁰·1·12⁰·3² + 10¹·2·12¹·3¹ = 9 + 720 = 729
 98 56     10⁰·1·123⁰·4² + 10¹·2·123¹·4¹ ≤ 9856 < 10⁰·1·123⁰·5² + 10¹·2·123¹·5¹    x = 4
 98 56     y = 10⁰·1·123⁰·4² + 10¹·2·123¹·4¹ = 16 + 9840 = 9856
 00 00     Algorithm terminates: Answer is 12.34

Find the cube root of 4192 to the nearest hundredth.

      1  6.  1  2  4
 3 / \/ 004 192.000 000 000
 004             10⁰·1·0⁰·1³ + 10¹·3·0¹·1² + 10²·3·0²·1¹ ≤ 4 < 10⁰·1·0⁰·2³ + 10¹·3·0¹·2² + 10²·3·0²·2¹                          x = 1
 001             y = 10⁰·1·0⁰·1³ + 10¹·3·0¹·1² + 10²·3·0²·1¹ = 1 + 0 + 0 = 1
 003 192         10⁰·1·1⁰·6³ + 10¹·3·1¹·6² + 10²·3·1²·6¹ ≤ 3192 < 10⁰·1·1⁰·7³ + 10¹·3·1¹·7² + 10²·3·1²·7¹                        x = 6
 003 096         y = 10⁰·1·1⁰·6³ + 10¹·3·1¹·6² + 10²·3·1²·6¹ = 216 + 1,080 + 1,800 = 3,096
 096 000         10⁰·1·16⁰·1³ + 10¹·3·16¹·1² + 10²·3·16²·1¹ ≤ 96000 < 10⁰·1·16⁰·2³ + 10¹·3·16¹·2² + 10²·3·16²·2¹                 x = 1
 077 281         y = 10⁰·1·16⁰·1³ + 10¹·3·16¹·1² + 10²·3·16²·1¹ = 1 + 480 + 76,800 = 77,281
 018 719 000     10⁰·1·161⁰·2³ + 10¹·3·161¹·2² + 10²·3·161²·2¹ ≤ 18719000 < 10⁰·1·161⁰·3³ + 10¹·3·161¹·3² + 10²·3·161²·3¹        x = 2
 015 571 928     y = 10⁰·1·161⁰·2³ + 10¹·3·161¹·2² + 10²·3·161²·2¹ = 8 + 19,320 + 15,552,600 = 15,571,928
 003 147 072 000 10⁰·1·1612⁰·4³ + 10¹·3·1612¹·4² + 10²·3·1612²·4¹ ≤ 3147072000 < 10⁰·1·1612⁰·5³ + 10¹·3·1612¹·5² + 10²·3·1612²·5¹ x = 4
 The desired precision is achieved: The cube root of 4192 is about 16.12

The principal nth root of a positive number can be computed using logarithms. Starting from the equation that defines r as an nth root of x, namely   with x positive and therefore its principal root r also positive, one takes logarithms of both sides (any base of the logarithm will do) to obtain

 

The root r is recovered from this by taking the antilog:

 

(Note: That formula shows b raised to the power of the result of the division, not b multiplied by the result of the division.)
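A minimal Python sketch of this procedure; base 10 is an illustrative choice, any base works:

import math

def nth_root_via_logs(x, n, base=10.0):
    # log_b(r) = log_b(x) / n, so r = b ** (log_b(x) / n)   (the antilog step)
    return base ** (math.log(x, base) / n)

print(nth_root_via_logs(4192, 3))   # ~16.124, matching the digit-by-digit cube root above
print(nth_root_via_logs(2, 2))      # ~1.41421356...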

Finding roots in higher dimensions

The Poincaré–Miranda theorem gives a criterion for the existence of a root in a rectangle, but it is hard to verify, since it requires evaluating the function on the entire boundary of the rectangle.

Portable pure POSIX solution + Example of usage of the above function

#!/bin/sh
# bool function to test if the user is root or not (POSIX only)
is_user_root ()
{ [ "$(id -u)" -eq 0 ]
}
if is_user_root; then
    echo 'You are the almighty root!'
    # You can do whatever you need...
else
    echo 'You are just an ordinary user.' >&2
    exit 1
fi

Simplified form of a radical expression

  1. There is no factor of the radicand that can be written as a power greater than or equal to the index.
  2. There are no fractions under the radical sign.
  3. There are no radicals in the denominator.
 
 
 
 

Simplifying radical expressions involving nested radicals can be quite difficult. It is not obvious for instance that:

 

The above can be derived through:

 

Let  , with   and   coprime positive integers. Then   is rational if and only if both   and   are integers, which means that both   and   are nth powers of some integer.

Proof of irrationality for non-perfect nth power x

Assume that   is rational. That is, it can be reduced to a fraction  , where   and   are integers without a common factor.

This means that  .

Since   and  ,  .

This means that   and thus,  . This implies that   is an integer. Since x is not a perfect nth power, this is impossible. Thus   is irrational.

Continued fraction expansion

Quadratic irrationals (numbers of the form  , where a, b and c are integers), and in particular, square roots of integers, have periodic continued fractions. Sometimes what is desired is finding not the numerical value of a square root, but rather its continued fraction expansion, and hence its rational approximation. Let S be the positive number for which we are required to find the square root. Then assuming a to be a number that serves as an initial guess and r to be the remainder term, we can write   Since we have  , we can express the square root of S as

 

By applying this expression for   to the denominator term of the fraction, we have

 
 
 
 

For , the value of   is 1, so its representation is:

 

Proceeding this way, we get a generalized continued fraction for the square root as
 

 

Step 2 is to reduce the continued fraction from the bottom up, one denominator at a time, to yield a rational fraction whose numerator and denominator are integers. The reduction proceeds thus (taking the first three denominators):

 
 
 

Finally (step 3), divide the numerator by the denominator of the rational fraction to obtain the approximate value of the root:

  rounded to three digits of precision.

The actual value of √2 is 1.41 to three significant digits. The relative error is 0.17%, so the rational fraction is good to almost three digits of precision. Taking more denominators gives successively better approximations: four denominators yields the fraction  , good to almost 4 digits of precision, etc.

In general, the larger the denominator of a rational fraction, the better the approximation. It can also be shown that truncating a continued fraction yields a rational fraction that is the best approximation to the root of any fraction with denominator less than or equal to the denominator of that fraction; e.g., no fraction with a denominator less than or equal to 70 is as good an approximation to √2 as 99/70.
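A minimal Python sketch of steps 2 and 3 for √2, assuming, as described above, the expansion √S ≈ a + r/(2a + r/(2a + ···)) with a the initial guess and r = S − a² the remainder:

from fractions import Fraction

def sqrt_cf_convergent(S, a, denominators):
    # Reduce the continued fraction from the bottom up, one denominator at a time.
    r = S - a * a
    value = Fraction(2 * a)              # innermost denominator
    for _ in range(denominators - 1):
        value = 2 * a + r / value        # fold in one more denominator
    return a + r / value                 # step 3: the rational approximation to the root

for k in (3, 4, 5):
    approx = sqrt_cf_convergent(2, 1, k)
    print(k, approx, float(approx))
# 3 -> 17/12 ~ 1.4167, 4 -> 41/29 ~ 1.4138, 5 -> 99/70 ~ 1.414286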

Optimized all-round solution for performance and reliability; all shells compatible

# bool function to test if the user is root or not
is_user_root () { [ "${EUID:-$(id -u)}" -eq 0 ]; }

Benchmark (save to file is_user_root__benchmark)

#+------------------------------------------------------------------------------+
#| is_user_root() benchmark |
#| "Bash is fast while Dash is slow in this" |
#| Language: POSIX shell script |
#| Copyright: 2020-2021 Vlastimil Burian |
#| M@il: info[..]vlastimilburian[..]cz |
#| License: GPL 3.0 |
#| Version: 1.2 |
#+------------------------------------------------------------------------------+
readonly iterations=10000
# intentionally, the file does not have an executable bit, nor does it have a shebang
# to use it, just call the file directly with your shell interpreter like:
# bash is_user_root__benchmark ## should take a fraction of one second
# dash is_user_root__benchmark ## could take around 10 seconds
is_user_root () { [ "${EUID:-$(id -u)}" -eq 0 ]; }
print_time () { date +"%T.%2N"; }
print_start () { printf '%s' 'Start : '; print_time; }
print_finish () { printf '%s' 'Finish : '; print_time; }
printf '%s\n' '___is_user_root()___'; print_start
i=1; while [ "$i" -lt "$iterations" ]; do
    is_user_root
    i=$((i+1))
done; print_finish

Examples of use and duration:

$ dash is_user_root__benchmark
___is_user_root()___
Start : 03:14:04.81
Finish : 03:14:13.29
$ bash is_user_root__benchmark
___is_user_root()___
Start : 03:16:22.90
Finish : 03:16:23.08

Explanation


Negative or complex square

If S < 0, then its principal square root is

 

If S = a+bi where a and b are real and b ≠ 0, then its principal square root is

 
 

where   is the modulus of S. The principal square root of a complex number is defined to be the root with the non-negative real part.

Identities and properties

Expressing the degree of an nth root in its exponent form, as in  , makes it easier to manipulate powers and roots. If   is a non-negative real number,

 

Every non-negative number has exactly one non-negative real nth root, and so the rules for operations with surds involving non-negative radicands   and   are straightforward within the real numbers:

 

Subtleties can occur when taking the nth roots of negative or complex numbers. For instance:

  but, rather,  

Since the rule   strictly holds for non-negative real radicands only, its application leads to the inequality in the first step above.
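The concrete instance is missing from this copy, but a standard illustration of the point can be checked in Python with the principal complex square root:

import cmath

# With principal complex square roots, sqrt(-1) * sqrt(-1) = i * i = -1,
# whereas sqrt((-1) * (-1)) = sqrt(1) = 1, so the product rule fails here.
print(cmath.sqrt(-1) * cmath.sqrt(-1))   # (-1+0j)
print(cmath.sqrt((-1) * (-1)))           # (1+0j)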

Approximations that depend on the floating point representation

A number is represented in a floating point format as   which is also called scientific notation. Its square root is   and similar formulae would apply for cube roots and logarithms. On the face of it, this is no improvement in simplicity, but suppose that only an approximation is required: then just   is good to an order of magnitude. Next, recognise that some powers will be odd; thus for 3141.59 = 3.14159 × 10³, rather than deal with fractional powers of the base, multiply the mantissa by the base and subtract one from the power to make it even. The adjusted representation will become the equivalent of 31.4159 × 10², so that the square root will be √31.4159 × 10.

A table with only three entries could be enlarged by incorporating additional bits of the mantissa. However, with computers, rather than calculate an interpolation into a table, it is often better to find some simpler calculation giving equivalent results. Everything now depends on the exact details of the format of the representation, plus what operations are available to access and manipulate the parts of the number. For example, Fortran offers an EXPONENT(x) function to obtain the power. Effort expended in devising a good initial approximation is to be recouped by thereby avoiding the additional iterations of the refinement process that would have been needed for a poor approximation. Since these are few (one iteration requires a divide, an add, and a halving) the constraint is severe.

So for a 32-bit single precision floating point number in IEEE format (where notably, the power has a bias of 127 added for the represented form) you can get the approximate logarithm by interpreting its binary representation as a 32-bit integer, scaling it by  , and removing a bias of 127, i.e.

 

For example, 1.0 is represented by a hexadecimal number 0x3F800000, which would represent   if taken as an integer. Using the formula above you get  , as expected from  . In a similar fashion you get 0.5 from 1.5 (0x3FC00000).

 
/* A reconstruction consistent with the comments that survive in this copy. */
/* Assumes that float is in the IEEE 754 single precision floating point format */
#include <stdint.h>

float sqrt_approx(float z)
{
    union { float f; uint32_t i; } val = {z};  /* Convert type, preserving bit pattern */
    /*
     * To justify the code below, prove that
     *
     * ((((val.i / 2^m) - b) / 2) + b) * 2^m = ((val.i - 2^m) / 2) + ((b + 1) / 2) * 2^m)
     *
     * where
     * b = exponent bias
     * m = number of mantissa bits
     */
    val.i -= 1 << 23;   /* Subtract 2^m. */
    val.i >>= 1;        /* Divide by 2. */
    val.i += 1 << 29;   /* Add ((b + 1) / 2) * 2^m. */

    return val.f;       /* Interpret again as float */
}

The three mathematical operations forming the core of the above function can be expressed in a single line. An additional adjustment can be added to reduce the maximum relative error. So, the three operations, not including the cast, can be rewritten as

    val.i = (1 << 29) + (val.i >> 1) - (1 << 22) + a;   /* a is the small adjustment term that can be tuned to reduce the maximum relative error */

Reciprocal of the square root

/* A reconstruction consistent with the surviving comment: the classic bit-level
   reciprocal square root estimate followed by one Newton refinement step. */
#include <stdint.h>

float inv_sqrt_approx(float x)
{
    union { float f; uint32_t i; } val = {x};   /* Convert type, preserving bit pattern */
    float xhalf = 0.5f * x;

    val.i = 0x5f3759df - (val.i >> 1);           /* Initial bit-level approximation */

    /* The next line can be repeated any number of times to increase accuracy */
    val.f = val.f * (1.5f - xhalf * val.f * val.f);

    return val.f;
}

Definition and notation

The four 4th roots of −1,
none of which are real

The three 3rd roots of −1,
one of which is a negative real

An nth root of a number x, where n is a positive integer, is any of the n real or complex numbers r whose nth power is x:

 

Every positive real number x has a single positive nth root, called the principal nth root, which is written  . For n equal to 2 this is called the principal square root and the n is omitted. The nth root can also be represented using exponentiation as x1/n.

For even values of n, positive numbers also have a negative nth root, while negative numbers do not have a real nth root. For odd values of n, every negative number x has a real negative nth root. For example, −2 has a real 5th root,   but −2 does not have any real 6th roots.

Every non-zero number x, real or complex, has n different complex number nth roots. (In the case x is real, this count includes any real nth roots.) The only complex root of 0 is 0.

The nth roots of almost all numbers (all integers except the nth powers, and all rationals except the quotients of two nth powers) are irrational. For example,

 

All nth roots of rational numbers are algebraic numbers, and all nth roots of integers are algebraic integers.

A square root of a number x is a number r which, when squared, becomes x:

 

Every positive real number has two square roots, one positive and one negative. For example, the two square roots of 25 are 5 and −5. The positive square root is also known as the principal square root, and is denoted with a radical sign:

 

Since the square of every real number is nonnegative, negative numbers do not have real square roots. However, for every negative real number there are two imaginary square roots. For example, the square roots of −25 are 5i and −5i, where i represents a number whose square is −1.

A cube root of a number x is a number r whose cube is x:

 

Every real number x has exactly one real cube root, written  . For example,

  and  

Every real number has two additional complex cube roots.

Conclusion

Whether you like it or not, the Unix / Linux environment has diversified a lot. There are people who like bash, zsh, and other modern shells so much that they do not even think about portability (POSIX). Nowadays it is a matter of personal choice and needs.
