The derivative of a function at a point is defined as the limit of the ratio of the increment of the function to the increment of the argument as the latter tends to zero. To find it, use the table of derivatives. For example, the derivative of the function y = x³ is y' = 3x².

Equate this derivative to zero (in this case 3x² = 0).

Find the values of the variable at which this derivative equals 0. To do this, determine which values of x make the entire expression zero. For instance:

2 - 2x² = 0
2(1 - x)(1 + x) = 0
x1 = 1, x2 = -1

Mark the obtained values on the coordinate line and determine the sign of the derivative on each of the resulting intervals. To do this, substitute into the derivative an arbitrary value from each interval. For example, for the previous derivative you can choose -2 on the interval to the left of -1, 0 on the interval from -1 to 1, and 2 on the interval to the right of 1. At x = -2 the derivative equals 2 - 2·(-2)² = -6, so the sign on this interval is minus. At x = 0 the derivative equals 2, so the sign is plus. At x = 2 the derivative again equals -6, so the sign is minus.

If, when passing through a point on the coordinate line, the derivative changes its sign from minus to plus, then this is a minimum point, and if from plus to minus, then this is a maximum point.
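This sign analysis can be automated. Below is a minimal sketch in Python with SymPy (assuming SymPy is available) that reproduces the interval test for the derivative 2 - 2x² from the example above.

```python
import sympy as sp

x = sp.Symbol('x')
derivative = 2 - 2*x**2                      # the derivative from the example above

# Critical points: where the derivative equals zero
critical = sorted(sp.solve(sp.Eq(derivative, 0), x))   # [-1, 1]

# Probe the sign of the derivative on each interval with an arbitrary value
for probe in (-2, 0, 2):
    value = derivative.subs(x, probe)
    print(f"x = {probe}: derivative = {value} ({'plus' if value > 0 else 'minus'})")

# Classify each critical point by the sign change around it
for c in critical:
    left = derivative.subs(x, c - sp.Rational(1, 2))
    right = derivative.subs(x, c + sp.Rational(1, 2))
    if left < 0 < right:
        print(f"x = {c} is a minimum point")
    elif left > 0 > right:
        print(f"x = {c} is a maximum point")
```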


Useful advice

To find the derivative, there are online services that calculate the required values and display the result. On such sites you can compute derivatives up to the fifth order.


The maximum points of a function, together with its minimum points, are called extremum points. At these points the function changes its behavior. Extrema are determined on bounded numerical intervals and are always local.

Instruction

The process of finding local extrema is called investigating the function and is performed by analyzing its first and second derivatives. Before starting, make sure that the specified range of argument values belongs to the admissible values. For example, for the function F = 1/x the argument value x = 0 is not admissible, and for the function Y = tg(x) the argument cannot take the value x = 90°.

Make sure the function Y is differentiable over the entire given interval and find its first derivative Y'. Before reaching a local maximum point the function increases, and after passing through the maximum it decreases. In its physical meaning, the first derivative characterizes the rate of change of the function. While the function is increasing, this rate is positive; when passing through a local maximum the function begins to decrease, and the rate of change becomes negative. The rate of change of the function therefore passes through zero at the point of the local maximum.

Definition
Let $f$ be a real function on an open set $E \subset \mathbb{R}^{n}$. It is said that $f$ has a local maximum at the point $x_{0} \in E$ if there exists a neighborhood $U$ of the point $x_{0}$ such that for all $x \in U$ the inequality $f\left(x\right) \leqslant f\left(x_{0}\right)$ holds.

The local maximum is called strict if the neighborhood $U$ can be chosen in such a way that for all $x \in U$ different from $x_{0}$ we have $f\left(x\right) < f\left(x_{0}\right)$.

Definition
Let $f$ be a real function on an open set $E \subset \mathbb{R}^{n}$. It is said that $f$ has a local minimum at the point $x_{0} \in E$ if there exists a neighborhood $U$ of the point $x_{0}$ such that for all $x \in U$ the inequality $f\left(x\right) \geqslant f\left(x_{0}\right)$ holds.

A local minimum is said to be strict if the neighborhood $U$ can be chosen so that for all $x \in U$ different from $x_{0}$ we have $f\left(x\right) > f\left(x_{0}\right)$.

A local extremum combines the concepts of a local minimum and a local maximum.

Theorem (necessary condition for an extremum of a differentiable function)
Let $f$ be a real function on an open set $E \subset \mathbb{R}^{n}$. If the function $f$ has a local extremum at the point $x_{0} \in E$ and is differentiable at this point, then $$\text{d}f\left(x_{0}\right)=0.$$ The vanishing of the differential is equivalent to all partial derivatives being equal to zero, i.e. $$\displaystyle\frac{\partial f}{\partial x_{i}}\left(x_{0}\right)=0.$$

In the one-dimensional case this is Fermat's theorem. Denote $\phi \left(t\right) = f \left(x_{0}+th\right)$, where $h$ is an arbitrary vector. The function $\phi$ is defined for sufficiently small (in modulus) values of $t$. Moreover, with respect to $t$ it is differentiable, and $\phi' \left(t\right) = \text{d}f \left(x_{0}+th\right)h$.
Let $f$ have a local maximum at $x_{0}$. Hence the function $\phi$ has a local maximum at $t = 0$ and, by Fermat's theorem, $\phi' \left(0\right)=0$.
So we get that $\text{d}f \left(x_{0}\right) = 0$, i.e. the differential of the function $f$ at the point $x_{0}$ vanishes on any vector $h$.

Definition
The points at which the differential is equal to zero, i.e. those at which all partial derivatives vanish, are called stationary points. The critical points of a function $f$ are those points at which $f$ is not differentiable or at which its differential is equal to zero. If a point is stationary, it does not yet follow that the function has an extremum at this point.
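For concreteness, the stationary points can be found symbolically by solving the system of first-order partial derivatives. The sketch below uses SymPy on a hypothetical sample function chosen only for illustration.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**2 + y**2 - 2*x - 4*y        # hypothetical sample function

# Necessary condition: every first-order partial derivative vanishes
gradient = [sp.diff(f, v) for v in (x, y)]
stationary_points = sp.solve(gradient, (x, y), dict=True)
print(stationary_points)           # [{x: 1, y: 2}] -- candidates for an extremum
```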

Example 1
Let $f \left(x,y\right)=x^{3}+y^{3}$. Then $\displaystyle\frac{\partial f}{\partial x} = 3 \cdot x^{2}$, $\displaystyle\frac{\partial f}{\partial y} = 3 \cdot y^{2}$, so $\left(0,0\right)$ is a stationary point, but the function has no extremum at this point. Indeed, $f \left(0,0\right) = 0$, but it is easy to see that in any neighborhood of the point $\left(0,0\right)$ the function takes both positive and negative values.

Example 2
The function $f \left(x,y\right) = x^{2} - y^{2}$ has the origin of coordinates as a stationary point, but it is clear that there is no extremum at this point.
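A quick numerical check of both examples (illustrative only): arbitrarily close to (0, 0) each function takes values of both signs, so the origin cannot be a local extremum.

```python
# Values of both signs appear in every neighborhood of the origin
eps = 1e-3
f1 = lambda x, y: x**3 + y**3      # Example 1
f2 = lambda x, y: x**2 - y**2      # Example 2

print(f1(eps, 0), f1(-eps, 0))     # positive, negative
print(f2(eps, 0), f2(0, eps))      # positive, negative
```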

Theorem (sufficient condition for an extremum).
Let a function $f$ be twice continuously differentiable on an open set $E \subset \mathbb{R}^{n}$. Let $x_{0} \in E$ be a stationary point, and consider the quadratic form $$\displaystyle Q_{x_{0}} \left(h\right) \equiv \sum_{i=1}^n \sum_{j=1}^n \frac{\partial^{2} f}{\partial x_{i} \partial x_{j}} \left(x_{0}\right)h^{i}h^{j}.$$ Then

  1. if the quadratic form $Q_{x_{0}}$ is definite, then the function $f$ has a local extremum at the point $x_{0}$, namely a minimum if the form is positive-definite and a maximum if the form is negative-definite;
  2. if the quadratic form $Q_{x_{0}}$ is indefinite, then the function $f$ has no extremum at the point $x_{0}$.

Let us use the expansion given by the Taylor formula (12.7, p. 292). Taking into account that the first-order partial derivatives at the point $x_{0}$ are equal to zero, we get $$\displaystyle f \left(x_{0}+h\right)-f \left(x_{0}\right) = \frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n \frac{\partial^{2} f}{\partial x_{i} \partial x_{j}} \left(x_{0}+\theta h\right)h^{i}h^{j},$$ where $0<\theta<1$. Denote $\displaystyle a_{ij}=\frac{\partial^{2} f}{\partial x_{i} \partial x_{j}} \left(x_{0}\right)$. By Schwarz's theorem (12.6, pp. 289-290), $a_{ij}=a_{ji}$. Denote $$\displaystyle \alpha_{ij} \left(h\right)=\frac{\partial^{2} f}{\partial x_{i} \partial x_{j}} \left(x_{0}+\theta h\right)-\frac{\partial^{2} f}{\partial x_{i} \partial x_{j}} \left(x_{0}\right).$$ By assumption, all second-order partial derivatives are continuous, and therefore $$\lim_{h \rightarrow 0} \alpha_{ij} \left(h\right)=0. \qquad \left(1\right)$$ We obtain $$\displaystyle f \left(x_{0}+h\right)-f \left(x_{0}\right)=\frac{1}{2}\left[Q_{x_{0}} \left(h\right)+\sum_{i=1}^n \sum_{j=1}^n \alpha_{ij} \left(h\right)h_{i}h_{j}\right].$$ Denote $$\displaystyle \epsilon \left(h\right)=\frac{1}{|h|^{2}}\sum_{i=1}^n \sum_{j=1}^n \alpha_{ij} \left(h\right)h_{i}h_{j}.$$ Then $$|\epsilon \left(h\right)| \leq \sum_{i=1}^n \sum_{j=1}^n |\alpha_{ij} \left(h\right)|$$ and, by relation $\left(1\right)$, $\epsilon \left(h\right) \rightarrow 0$ as $h \rightarrow 0$. Finally we obtain $$\displaystyle f \left(x_{0}+h\right)-f \left(x_{0}\right)=\frac{1}{2}\left[Q_{x_{0}} \left(h\right)+|h|^{2}\epsilon \left(h\right)\right]. \qquad \left(2\right)$$ Suppose that $Q_{x_{0}}$ is a positive-definite form. By the lemma on positive-definite quadratic forms (12.8.1, p. 295, Lemma 1), there exists a positive number $\lambda$ such that $Q_{x_{0}} \left(h\right) \geqslant \lambda|h|^{2}$ for any $h$. Therefore $$\displaystyle f \left(x_{0}+h\right)-f \left(x_{0}\right) \geq \frac{1}{2}|h|^{2} \left(\lambda+\epsilon \left(h\right)\right).$$ Since $\lambda>0$ and $\epsilon \left(h\right) \rightarrow 0$ as $h \rightarrow 0$, the right-hand side is positive for any vector $h$ of sufficiently small length.
Thus, we have come to the conclusion that in some neighborhood of the point $x_{0}$ the inequality $f \left(x\right) > f \left(x_{0}\right)$ holds whenever $x \neq x_{0}$ (we put $x=x_{0}+h$). This means that at the point $x_{0}$ the function has a strict local minimum, and thus the first part of the theorem is proved.
Suppose now that $Q_{x_{0}}$ is an indefinite form. Then there are vectors $h_{1}$, $h_{2}$ such that $Q_{x_{0}} \left(h_{1}\right)=\lambda_{1}>0$ and $Q_{x_{0}} \left(h_{2}\right)= \lambda_{2}<0$. In relation $\left(2\right)$ put $h=th_{1}$, $t>0$. Then we get $$f \left(x_{0}+th_{1}\right)-f \left(x_{0}\right) = \frac{1}{2} \left[ t^{2} \lambda_{1} + t^{2} |h_{1}|^{2} \epsilon \left(th_{1}\right) \right] = \frac{1}{2} t^{2} \left[ \lambda_{1} + |h_{1}|^{2} \epsilon \left(th_{1}\right) \right].$$ For sufficiently small $t>0$ the right-hand side is positive. This means that in any neighborhood of the point $x_{0}$ the function $f$ takes values $f \left(x\right)$ greater than $f \left(x_{0}\right)$.
Similarly, putting $h=th_{2}$, we obtain that in any neighborhood of the point $x_{0}$ the function $f$ also takes values less than $f \left(x_{0}\right)$. Together with the previous conclusion, this means that the function $f$ does not have an extremum at the point $x_{0}$.
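In computational practice the definiteness of the quadratic form $Q_{x_{0}}$ is usually checked through the eigenvalues of the Hessian matrix: all positive means a local minimum, all negative a local maximum, and eigenvalues of both signs mean no extremum. Below is a minimal NumPy sketch of this check; the example Hessian is that of $f\left(x,y\right) = x^{2} - y^{2}$ from Example 2 above.

```python
import numpy as np

def classify_stationary_point(hessian: np.ndarray, tol: float = 1e-12) -> str:
    """Classify a stationary point from the Hessian of f evaluated at that point."""
    eigenvalues = np.linalg.eigvalsh(hessian)      # the Hessian is symmetric
    if np.all(eigenvalues > tol):
        return "local minimum (positive-definite form)"
    if np.all(eigenvalues < -tol):
        return "local maximum (negative-definite form)"
    if np.any(eigenvalues > tol) and np.any(eigenvalues < -tol):
        return "no extremum (indefinite form)"
    return "inconclusive (degenerate form)"

# Hessian of f(x, y) = x^2 - y^2 at the origin
print(classify_stationary_point(np.array([[2.0, 0.0], [0.0, -2.0]])))
```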

Let us consider a special case of this theorem for a function $f \left(x,y\right)$ of two variables defined in some neighborhood of the point $\left(x_{0},y_{0}\right)$ and having continuous partial derivatives of the first and second orders there. Let $\left(x_{0},y_{0}\right)$ be a stationary point and let $$\displaystyle a_{11}= \frac{\partial^{2} f}{\partial x^{2}} \left(x_{0},y_{0}\right), \quad a_{12}=\frac{\partial^{2} f}{\partial x \partial y} \left(x_{0},y_{0}\right), \quad a_{22}=\frac{\partial^{2} f}{\partial y^{2}} \left(x_{0},y_{0}\right).$$ Then the previous theorem takes the following form.

Theorem
Let $\Delta=a_{11} \cdot a_{22} - a_{12}^{2}$. Then:

  1. if $\Delta>0$, then the function $f$ has a local extremum at the point $\left(x_{0},y_{0}\right)$, namely a minimum if $a_{11}>0$ and a maximum if $a_{11}<0$;
  2. if $\Delta<0$, then there is no extremum at the point $\left(x_{0},y_{0}\right)$. As in the one-dimensional case, when $\Delta=0$ an extremum may or may not exist.

Examples of problem solving

Algorithm for finding the extremum of a function of many variables:

  1. We find stationary points;
  2. We find the second-order differential at every stationary point;
  3. Using the sufficient condition for the extremum of a function of several variables, we examine the sign of the second-order differential at each stationary point.
  1. Investigate the function for extrema: $f \left(x,y\right) = x^{3} + 8 \cdot y^{3} - 6 \cdot x \cdot y$.
    Solution

    Find the first-order partial derivatives: $$\displaystyle \frac{\partial f}{\partial x}=3 \cdot x^{2} - 6 \cdot y;$$ $$\displaystyle \frac{\partial f}{\partial y}=24 \cdot y^{2} - 6 \cdot x.$$ Compose and solve the system: $$\displaystyle \begin{cases}\frac{\partial f}{\partial x}= 0\\\frac{\partial f}{\partial y}= 0\end{cases} \Rightarrow \begin{cases}3 \cdot x^{2} - 6 \cdot y= 0\\24 \cdot y^{2} - 6 \cdot x = 0\end{cases} \Rightarrow \begin{cases}x^{2} - 2 \cdot y= 0\\4 \cdot y^{2} - x = 0 \end{cases}$$ From the 2nd equation we express $x=4 \cdot y^{2}$ and substitute it into the 1st equation: $$\displaystyle \left(4 \cdot y^{2}\right)^{2}-2 \cdot y=0$$ $$16 \cdot y^{4} - 2 \cdot y = 0$$ $$8 \cdot y^{4} - y = 0$$ $$y \left(8 \cdot y^{3} -1\right)=0$$ As a result, 2 stationary points are obtained:
    1) $y=0 \Rightarrow x = 0$, $M_{1} = \left(0, 0\right)$;
    2) $\displaystyle 8 \cdot y^{3} -1=0 \Rightarrow y^{3}=\frac{1}{8} \Rightarrow y = \frac{1}{2} \Rightarrow x=1$, $M_{2} = \left(1, \frac{1}{2}\right)$.
    Let us check the fulfillment of the sufficient extremum condition:
    $$\displaystyle \frac{\partial^{2} f}{\partial x^{2}}=6 \cdot x; \quad \frac{\partial^{2} f}{\partial x \partial y}=-6; \quad \frac{\partial^{2} f}{\partial y^{2}}=48 \cdot y$$
    1) For the point $M_{1}= \left(0,0\right)$:
    $$\displaystyle A_{1}=\frac{\partial^{2} f}{\partial x^{2}} \left(0,0\right)=0; \quad B_{1}=\frac{\partial^{2} f}{\partial x \partial y} \left(0,0\right)=-6; \quad C_{1}=\frac{\partial^{2} f}{\partial y^{2}} \left(0,0\right)=0;$$
    $A_{1} \cdot C_{1} - B_{1}^{2} = -36<0$, hence there is no extremum at the point $M_{1}$.
    2) For the point $M_{2}$:
    $$\displaystyle A_{2}=\frac{\partial^{2} f}{\partial x^{2}} \left(1,\frac{1}{2}\right)=6; \quad B_{2}=\frac{\partial^{2} f}{\partial x \partial y} \left(1,\frac{1}{2}\right)=-6; \quad C_{2}=\frac{\partial^{2} f}{\partial y^{2}} \left(1,\frac{1}{2}\right)=24;$$
    $A_{2} \cdot C_{2} - B_{2}^{2} = 108>0$, so there is an extremum at the point $M_{2}$, and since $A_{2}>0$, it is a minimum.
    Answer: the point $\displaystyle M_{2} = \left(1,\frac{1}{2}\right)$ is a minimum point of the function $f$.

  2. Investigate the function for extrema: $f=y^{2} + 2 \cdot x \cdot y - 4 \cdot x - 2 \cdot y - 3$.
    Solution

    Find the stationary points: $$\displaystyle \frac{\partial f}{\partial x}=2 \cdot y - 4;$$ $$\displaystyle \frac{\partial f}{\partial y}=2 \cdot y + 2 \cdot x - 2.$$
    Compose and solve the system: $$\displaystyle \begin{cases}\frac{\partial f}{\partial x}= 0\\\frac{\partial f}{\partial y}= 0\end{cases} \Rightarrow \begin{cases}2 \cdot y - 4= 0\\2 \cdot y + 2 \cdot x - 2 = 0\end{cases} \Rightarrow \begin{cases} y = 2\\y + x = 1\end{cases} \Rightarrow x = -1$$
    $M_{0} = \left(-1, 2\right)$ is a stationary point.
    Check the fulfillment of the sufficient extremum condition: $$\displaystyle A=\frac{\partial^{2} f}{\partial x^{2}} \left(-1,2\right)=0; \quad B=\frac{\partial^{2} f}{\partial x \partial y} \left(-1,2\right)=2; \quad C=\frac{\partial^{2} f}{\partial y^{2}} \left(-1,2\right)=2;$$
    $A \cdot C - B^{2} = -4<0$, hence there is no extremum at the point $M_{0}$.
    Answer: there are no extrema.
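Both worked examples can be re-checked mechanically. The sketch below (assuming SymPy is available) finds the real stationary points and applies the $\Delta = A \cdot C - B^{2}$ test from the theorem above.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

def delta_test(f):
    """Find the stationary points of f(x, y) and apply the Delta = A*C - B^2 test."""
    fx, fy = sp.diff(f, x), sp.diff(f, y)
    for point in sp.solve([fx, fy], (x, y), dict=True):
        if not all(value.is_real for value in point.values()):
            continue                                  # keep only real stationary points
        A = sp.diff(f, x, 2).subs(point)
        B = sp.diff(f, x, y).subs(point)
        C = sp.diff(f, y, 2).subs(point)
        delta = A * C - B**2
        if delta > 0:
            kind = "minimum" if A > 0 else "maximum"
        elif delta < 0:
            kind = "no extremum"
        else:
            kind = "inconclusive"
        print(point, "->", kind)

delta_test(x**3 + 8*y**3 - 6*x*y)                     # Example 1
delta_test(y**2 + 2*x*y - 4*x - 2*y - 3)              # Example 2
```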

The extremum point of a function is the point of the function's domain, at which the value of the function takes on a minimum or maximum value. The function values ​​at these points are called extrema (minimum and maximum) of the function.

Definition. A point x1 in the domain of a function f(x) is called a maximum point of the function if the value of the function at this point is greater than the values of the function at points sufficiently close to it, lying to the right and to the left of it (that is, the inequality f(x1) > f(x1 + Δx) holds for small Δx of either sign). In this case, the function is said to have a maximum at the point x1.

Definition. A point x2 in the domain of a function f(x) is called a minimum point of the function if the value of the function at this point is less than the values of the function at points sufficiently close to it, lying to the right and to the left of it (that is, the inequality f(x2) < f(x2 + Δx) holds for small Δx of either sign). In this case, the function is said to have a minimum at the point x2.

Suppose the point x1 is a maximum point of the function f(x). Then in the interval before x1 the function increases, so the derivative of the function is greater than zero (f '(x) > 0), and in the interval after x1 the function decreases, so the derivative of the function is less than zero (f '(x) < 0). Then at the point x1 the derivative of the function is either equal to zero or does not exist.

Let us also assume that the point x2 is a minimum point of the function f(x). Then in the interval before x2 the function decreases and the derivative of the function is less than zero (f '(x) < 0), while in the interval after x2 the function increases and the derivative of the function is greater than zero (f '(x) > 0). In this case also, at the point x2 the derivative of the function is either equal to zero or does not exist.

Fermat's theorem (a necessary criterion for the existence of an extremum of a function). If a point x0 is an extremum point of the function f(x), then at this point the derivative of the function is equal to zero (f '(x0) = 0) or does not exist.

Definition. The points at which the derivative of a function is equal to zero or does not exist are called critical points .

Example 1. Let's consider the function y = x³.

At the point x = 0 the derivative of the function is equal to zero, therefore the point x = 0 is a critical point. However, as can be seen from the graph of the function, it increases over its entire domain, so the point x = 0 is not an extremum point of this function.

Thus, the conditions that the derivative of a function at a point is equal to zero or does not exist are necessary conditions for an extremum, but not sufficient ones, since other examples of functions can be given for which these conditions are satisfied but the function has no extremum at the corresponding point. Therefore, we need sufficient criteria that make it possible to judge whether there is an extremum at a particular critical point and whether it is a maximum or a minimum.

Theorem (the first sufficient criterion for the existence of an extremum of a function). A critical point x0 is an extremum point of the function f(x) if the derivative of the function changes sign when passing through this point; if the sign changes from "plus" to "minus", it is a maximum point, and if from "minus" to "plus", it is a minimum point.

If near the point x0 , to the left and to the right of it, the derivative retains its sign, this means that the function either only decreases or only increases in some neighborhood of the point x0 . In this case, at the point x0 there is no extremum.

So, to determine the extremum points of the function, you need to do the following :

  1. Find the derivative of a function.
  2. Equate the derivative to zero and determine the critical points.
  3. Mentally or on paper, mark the critical points on the numerical axis and determine the signs of the derivative of the function in the resulting intervals. If the sign of the derivative changes from "plus" to "minus", then the critical point is the maximum point, and if from "minus" to "plus", then the critical point is the minimum point.
  4. Calculate the value of the function at the extremum points.
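As an illustration of these four steps, here is a minimal sketch in Python with SymPy. The function is a freely chosen sample (an assumption for illustration), and the half-unit probe offsets assume no other critical point lies that close.

```python
import sympy as sp

x = sp.Symbol('x', real=True)
f = x**3 - 3*x + 1            # illustrative sample function

# 1. Find the derivative of the function
fprime = sp.diff(f, x)

# 2. Equate the derivative to zero and determine the critical points
critical = sorted(sp.solve(fprime, x))

# 3. Determine the sign of the derivative on each side of every critical point
for c in critical:
    left = fprime.subs(x, c - sp.Rational(1, 2))
    right = fprime.subs(x, c + sp.Rational(1, 2))
    if left > 0 > right:
        kind = "maximum point"
    elif left < 0 < right:
        kind = "minimum point"
    else:
        kind = "not an extremum point"
    # 4. Calculate the value of the function at the extremum point
    print(f"x = {c}: {kind}, f(x) = {f.subs(x, c)}")
```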

Example 2 Find extrema of a function .

Solution. Let's find the derivative of the function:

Equate the derivative to zero to find the critical points:

.

Since the denominator is not equal to zero for any value of x, we equate the numerator to zero:

We obtain one critical point, x = 3. We determine the sign of the derivative in the intervals delimited by this point:

in the range from minus infinity to 3 - minus sign, that is, the function decreases,

in the range from 3 to plus infinity - a plus sign, that is, the function increases.

That is, point x= 3 is the minimum point.

Find the value of the function at the minimum point:

Thus, the extremum point of the function is found: (3; 0) , and it is the minimum point.

Theorem (the second sufficient criterion for the existence of an extremum of a function). A critical point x0 is an extremum point of the function f(x) if the second derivative of the function at this point is not equal to zero (f ''(x0) ≠ 0); moreover, if the second derivative is greater than zero (f ''(x0) > 0), then it is a minimum point, and if the second derivative is less than zero (f ''(x0) < 0), then it is a maximum point.

Remark 1. If at a point x0 both the first and second derivatives vanish, then at this point it is impossible to judge the presence of an extremum on the basis of the second sufficient sign. In this case, you need to use the first sufficient criterion for the extremum of the function.

Remark 2. The second sufficient criterion for the extremum of a function is also inapplicable when the first derivative does not exist at the critical point (then the second derivative does not exist either). In this case, it is also necessary to use the first sufficient criterion for the extremum of the function.
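The second-derivative test, together with the fallback described in Remark 1, can be sketched as follows with SymPy; the two functions are illustrative assumptions, not examples from the text. For x⁴ the second derivative vanishes at the stationary point, so the code falls back to the first sufficient criterion.

```python
import sympy as sp

x = sp.Symbol('x', real=True)

for f in (x**4 - 4*x**2, x**4):            # illustrative sample functions
    fprime = sp.diff(f, x)
    fsecond = sp.diff(f, x, 2)
    for c in sorted(sp.solve(fprime, x)):  # stationary points
        second = fsecond.subs(x, c)
        if second > 0:
            kind = "minimum (second-derivative test)"
        elif second < 0:
            kind = "maximum (second-derivative test)"
        else:
            # Remark 1: the test is inconclusive, fall back to the sign-change criterion
            left = fprime.subs(x, c - sp.Rational(1, 10))
            right = fprime.subs(x, c + sp.Rational(1, 10))
            if left < 0 < right:
                kind = "minimum (sign change from minus to plus)"
            elif left > 0 > right:
                kind = "maximum (sign change from plus to minus)"
            else:
                kind = "not an extremum"
        print(f"f = {f}, x = {c}: {kind}")
```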

The local nature of the extrema of the function

It follows from the above definitions that the extremum of a function has a local character: it is the largest or smallest value of the function only in comparison with nearby values.

Suppose you consider your earnings in a time span of one year. If in May you earned 45,000 rubles, and in April 42,000 rubles and in June 39,000 rubles, then the May earnings are the maximum of the earnings function compared to the nearest values. But in October you earned 71,000 rubles, in September 75,000 rubles, and in November 74,000 rubles, so the October earnings are the minimum of the earnings function compared to nearby values. And you can easily see that the maximum among the values ​​of April-May-June is less than the minimum of September-October-November.

Generally speaking, a function may have several extrema on an interval, and it may turn out that some minimum of the function is greater than some maximum. This is the case for the function shown in the figure above.

That is, one should not think that the maximum and minimum of the function are, respectively, its maximum and minimum values ​​on the entire segment under consideration. At the maximum point, the function has the greatest value only in comparison with those values ​​that it has at all points sufficiently close to the maximum point, and at the minimum point, the smallest value only in comparison with those values ​​that it has at all points sufficiently close to the minimum point.

Therefore, we can refine the above concept of extremum points of a function and call the minimum points local minimum points, and the maximum points - local maximum points.

We are looking for the extrema of the function together

Example 3

Solution. The function is defined and continuous on the whole number line, and its derivative also exists on the whole number line. Therefore, only the points at which the derivative equals zero serve as critical points here. The resulting critical points divide the entire domain of the function into three intervals of monotonicity. We select one control point in each of them and find the sign of the derivative at this point.

For the interval, the reference point can be : we find . Taking a point in the interval, we get , and taking a point in the interval, we have . So, in the intervals and , and in the interval . According to the first sufficient sign of an extremum, there is no extremum at the point (since the derivative retains its sign in the interval ), and the function has a minimum at the point (since the derivative changes sign from minus to plus when passing through this point). Find the corresponding values ​​of the function: , and . In the interval, the function decreases, since in this interval , and in the interval it increases, since in this interval.

To clarify the construction of the graph, we find the points of intersection of it with the coordinate axes. When we obtain an equation whose roots and , i.e., two points (0; 0) and (4; 0) of the graph of the function are found. Using all the information received, we build a graph (see at the beginning of the example).

Example 4 Find the extrema of the function and build its graph.

The domain of the function is the entire number line, except for the point, i.e. .

To shorten the study, we can use the fact that this function is even, since . Therefore, its graph is symmetrical about the axis Oy and the study can only be performed for the interval .

Finding the derivative and critical points of the function:

1) ;

2) ,

but the function has a discontinuity at this point, so it cannot be an extremum point.

Thus, the given function has two critical points: and . Taking into account the parity of the function, we check only the point by the second sufficient sign of the extremum. To do this, we find the second derivative and determine its sign at : we get . Since and , then is the minimum point of the function, while .

To get a more complete picture of the graph of the function, let's find out its behavior on the boundaries of the domain of definition:

(here the symbol indicates that x tends to zero from the right, with x remaining positive; similarly, the other symbol means that x tends to zero from the left, with x remaining negative). Thus, if , then . Next, we find

,

i.e., if , then .

The graph of the function has no points of intersection with the axes. The picture is at the beginning of the example.

We continue searching for the extrema of the function together

Example 8 Find the extrema of the function .

Solution. Find the domain of the function. Since the inequality must hold, we obtain from .

Let's find the first derivative of the function:

Let's find the critical points of the function.

Definition: A point x0 is called a point of local maximum (or minimum) of a function if in some neighborhood of the point x0 the function takes its largest (or smallest) value, i.e. for all x from some neighborhood of the point x0 the condition f(x) ≤ f(x0) (or f(x) ≥ f(x0)) is satisfied.

Points of local maximum or minimum are united by a common name - points of local extremum of a function.

Note that at points of local extremum the function reaches its maximum or minimum value only in some local region. There are cases when the value of a local maximum turns out to be less than the value of a local minimum, i.e. ymax < ymin.

A necessary criterion for the existence of a local extremum of a function

Theorem . If a continuous function y = f(x) has a local extremum at the point x0, then at this point the first derivative is either equal to zero or does not exist, i.e. the local extremum takes place at critical points of the first kind.

At the local extremum points, either the tangent is parallel to the 0x axis, or there are two tangents (see figure). Note that critical points are a necessary but not sufficient condition for a local extremum. A local extremum takes place only at critical points of the first kind, but not all critical points have a local extremum.

For example, the cubic parabola y = x³ has a critical point x0 = 0, at which the derivative y'(0) = 0, but the critical point x0 = 0 is not an extremum point; instead, the function has an inflection point there (see below).

A sufficient criterion for the existence of a local extremum of a function

Theorem. If, as the argument passes through a critical point of the first kind from left to right, the first derivative y'(x)

changes sign from “+” to “-”, then the continuous function y(x) has a local maximum at this critical point;

changes sign from “-” to “+”, then the continuous function y(x) has a local minimum at this critical point

does not change sign, then there is no local extremum at this critical point, there is an inflection point.

For a local maximum, the region where the function increases (y' > 0) is replaced by the region where it decreases (y' < 0). For a local minimum, the region where the function decreases (y' < 0) is replaced by the region where it increases (y' > 0).

Example: Investigate the function y = x³ + 9x² + 15x - 9 for monotonicity and extrema, and build the graph of the function.

1) Let us find the critical points of the first kind by computing the derivative y' and equating it to zero: y' = 3x² + 18x + 15 = 3(x² + 6x + 5) = 0

We solve the quadratic trinomial using the discriminant:

x² + 6x + 5 = 0 (a = 1, b = 6, c = 5), D = b² - 4ac = 36 - 20 = 16, √D = 4, x1k = (-6 - 4)/2 = -5, x2k = (-6 + 4)/2 = -1.

2) Let us divide the numerical axis by the critical points into 3 regions and determine the sign of the derivative y' in each of them. Using these signs, we find the regions of monotonicity (increase and decrease) of the function, and from the sign changes we determine the local extremum points (maximum and minimum).

The results of the study are presented in the form of a table, from which the following conclusions can be drawn:

  • 1. On the interval (-∞; -5), y'(-10) > 0, so the function increases monotonically (the sign of the derivative was estimated at the control point x = -10 taken in this interval);
  • 2. On the interval (-5; -1), y'(-2) < 0, so the function decreases monotonically (the sign of the derivative was estimated at the control point x = -2 taken in this interval);
  • 3. On the interval (-1; +∞), y'(0) > 0, so the function increases monotonically (the sign of the derivative was estimated at the control point x = 0 taken in this interval);
  • 4. When passing through the critical point x1k = -5, the derivative changes sign from "+" to "-", therefore this point is a local maximum point
  • (ymax(-5) = (-5)³ + 9·(-5)² + 15·(-5) - 9 = -125 + 225 - 75 - 9 = 16);
  • 5. When passing through the critical point x2k = -1, the derivative changes sign from "-" to "+", therefore this point is a local minimum point
  • (ymin(-1) = -1 + 9 - 15 - 9 = -16).

x  | (-∞; -5)  | -5          | (-5; -1)  | -1           | (-1; +∞)
y' | +         | 0           | -         | 0            | +
y  | increases | max, y = 16 | decreases | min, y = -16 | increases
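The computations in this example can be verified mechanically; below is a minimal SymPy sketch (assuming SymPy is installed) that reproduces the derivative, the critical points and the extremum values found above.

```python
import sympy as sp

x = sp.Symbol('x', real=True)
y = x**3 + 9*x**2 + 15*x - 9

yprime = sp.diff(y, x)                  # 3x^2 + 18x + 15
critical = sorted(sp.solve(yprime, x))  # [-5, -1]

# Signs of y' at the control points used above
for p in (-10, -2, 0):
    print(f"y'({p}) = {yprime.subs(x, p)}")   # 135, -9, 15

# Values of the function at the extremum points
for c in critical:
    print(f"y({c}) = {y.subs(x, c)}")         # 16 at x = -5, -16 at x = -1
```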

3) We build the graph based on the results of the study, with additional calculations of the function values at control points:

we build a rectangular coordinate system Oxy;

show the coordinates of the maximum (-5; 16) and minimum (-1; -16) points;

to refine the graph, we calculate the value of the function at control points, choosing them to the left and right of the maximum and minimum points and inside the middle interval, for example: y(-6) = (-6)³ + 9·(-6)² + 15·(-6) - 9 = 9; y(-3) = (-3)³ + 9·(-3)² + 15·(-3) - 9 = 0;

y(0) = -9. The points (-6; 9), (-3; 0) and (0; -9) are the calculated control points, which are plotted to build the graph;

we draw the graph as a curve that is convex upward near the maximum point and convex downward near the minimum point and passes through the calculated control points.
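The graph itself can also be produced programmatically. The following is an illustrative Matplotlib sketch (assuming NumPy and Matplotlib are available) that plots the curve together with the extremum points and the calculated control points.

```python
import numpy as np
import matplotlib.pyplot as plt

xs = np.linspace(-7.5, 1.0, 400)
ys = xs**3 + 9*xs**2 + 15*xs - 9

plt.plot(xs, ys, label="y = x^3 + 9x^2 + 15x - 9")
plt.scatter([-5, -1], [16, -16], color="red", zorder=3,
            label="maximum (-5; 16) and minimum (-1; -16)")
plt.scatter([-6, -3, 0], [9, 0, -9], color="gray", zorder=3,
            label="control points")
plt.axhline(0, color="black", linewidth=0.5)
plt.axvline(0, color="black", linewidth=0.5)
plt.legend()
plt.show()
```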