
This is my first question in this community. I am an applied scientist, not a mathematician.

I have the following simplified problem:

Let $u:[0,1]\to\mathbb{R}^+$ be a real-valued function and $k\in\mathbb{R}$. The function $u(\cdot)$ is decreasing and may or may not be continuous. Let $x^*(k)$ be the value that satisfies $u(x^*(k))=k$.

I need to compute the numerical value $x^*(k)$ for any arbitrary $u(\cdot)$, using an R script.

Intuitively, I have decided to follow this procedure: I define a deviation $e(x)=k-u(x)$. Evaluated at $x^*(k)$, $e(x^*(k))=0$. Then $x^*(k)$ minimizes the squared deviation (or the absolute deviation):
$$x^*(k)=\operatorname*{arg\,min}_{x\in[0,1]}\;\bigl(k-u(x)\bigr)^2 .$$

The code is not the problem. I wrote a script that returns the graph of the function $x^*(k)$ for any decreasing function $u$, e.g. $u=ae^{-bx}$, $u=a-bx^2$, ..., or other more complicated examples.
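
For illustration, here is a minimal sketch of that kind of script in R; the choice $u(x)=e^{-2x}$ and the use of base R's `optimize()` are only illustrative, not my actual script:

    # Sketch only: u is an illustrative decreasing function, not the real one
    u <- function(x) exp(-2 * x)

    # x*(k) as the minimizer of the squared deviation (k - u(x))^2 over [0, 1]
    x_star <- function(k) {
      optimize(function(x) (k - u(x))^2, interval = c(0, 1))$minimum
    }

    x_star(0.5)  # about 0.3466, matching the exact value -log(0.5)/2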

Now, I want to know:

  1. What is the mathematical/theoretical name of this procedure?
  2. In which references can I learn about it?

Thanks.

fnd
  • There are quite a few ways to approach this, but generally speaking your problem is not one that can be solved algorithmically. Provided your function u isn't too unreasonable, you have a lot of methods at your disposal. Using a mid-point method on the intervals where u is continuous would be one (see the sketch after these comments). How is your function u defined? – Ryan Budney Mar 01 '16 at 23:50
  • I edited my question with more details about the definition of u. – fnd Mar 02 '16 at 04:32
  • Thanks! I will change to regula falsi methods. Now I know their names!! – fnd Mar 02 '16 at 19:08
  • Start from https://en.wikipedia.org/wiki/Root-finding_algorithm, or (even better) from any undergraduate numerical analysis book. – Federico Poloni Mar 04 '16 at 18:22
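
A minimal sketch, in R, of the bisection ("mid-point") idea from the comments, assuming u is continuous on the whole bracket and that k lies between u(1) and u(0); the particular u is again only an illustration:

    # Bisection for u(x) = k on [0, 1], assuming u is decreasing and continuous
    # on the bracket and that u(1) <= k <= u(0); u is an illustrative choice
    u <- function(x) exp(-2 * x)

    bisect <- function(k, lo = 0, hi = 1, tol = 1e-8) {
      while (hi - lo > tol) {
        mid <- (lo + hi) / 2
        # u is decreasing, so u(mid) > k means the root lies to the right of mid
        if (u(mid) > k) lo <- mid else hi <- mid
      }
      (lo + hi) / 2
    }

    bisect(0.5)  # about 0.3466, agreeing with the minimization approach above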

1 Answer


Squaring a function to find its zeros is generally not a good idea. First, you can't exploit sign changes of the function values to conclude that a certain interval must contain a zero (or a discontinuity whose left and right limits have opposite signs); another problem is that numerical precision gets worse, because the slope of the squared function vanishes at its zeros.

A worst-case scenario is that a sign transition does not produce a zero of the squared function at all, e.g.
$$f(x) = \begin{cases} -\sqrt{-(x-0.5)}, & x < 0.5 \\ +\sqrt{+(x-0.5)}+2k, & x \ge 0.5 \end{cases}$$
for which squaring the deviation from the target value $k$ yields $(f(x)-k)^2=\left(\sqrt{|x-0.5|}+k\right)^2$, which is bounded below by $k^2$ and never vanishes.

The original transition of $f$ from $0$ to $2k$ at $x=0.5$ does not generate a zero of the squared function, and applying a derivative-based method like Newton-Raphson produces a ping-pong between $0.5-k^2$ and $0.5+k^2$ instead of converging.
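
A short R sketch reproduces this behaviour; the value k = 0.3 and the starting point are illustrative assumptions:

    # Newton-Raphson on the squared deviation g(x) = (f(x) - k)^2 for the
    # piecewise f above; k and the starting point are illustrative only
    k  <- 0.3
    f  <- function(x) if (x < 0.5) -sqrt(0.5 - x) else sqrt(x - 0.5) + 2 * k
    g  <- function(x) (f(x) - k)^2
    dg <- function(x) {                  # derivative of g on either side of 0.5
      s <- sqrt(abs(x - 0.5))
      sign(x - 0.5) * (s + k) / s
    }

    x <- 0.9
    for (i in 1:20) x <- x - g(x) / dg(x)   # Newton steps on the squared deviation
    c(x, 0.5 + k^2)   # the iterates settle into the two-cycle 0.5 - k^2, 0.5 + k^2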

The Numerical Recipes book is probably the best resource to recommend to you.

Manfred Weis