Calculus III 13.09 Extrema Applications

Previous Calculus III 13.08 Extrema for Functions with Two Variables
Next Calculus III 13.10 Lagrange Multipliers

13.9 Extrema Applications

  • Solve optimization problems involving functions of several variables.
  • Use the least squares method to fit a model to data.

Applied Optimization Problems

Example 13.9.1 Finding Maximum Volume

Figure 13.9.1

A rectangular box is resting on the \(xy\)-plane with one vertex at the origin. The opposite vertex lies in the plane

\( 6x+4y+3z=24\)

as shown in Figure 13.9.1. Find the box's maximum volume.
Solution Let \(x\), \(y\), and \(z\) represent the box's length, width, and height. Because one vertex of the box lies in the plane \( 6x+4y+3z=24\), it follows that \(z = \frac{1}{3}(24-6x-4y)\), and the volume \(xyz\) can be written as a function of two variables.

\(V(x,y)\) \(=(x)(y)[1/3(24-6x-4y)] \)
\(= 1/3(24xy-6x^{2}y-4xy^{2})\)

Find the partial derivatives of \(V\).

\(V_{x}(x,y) \) \(=1/3(24y-12xy-4y^{2}) \) \(= y/3(24-12x-4y) \)
\(V_{y}(x,y) \) \(=1/3(24x-6x^{2}-8xy) \) \(= x/3(24-6x-8y) \)

Note that the first partial derivatives are defined for all \(x\) and \(y\). Setting \(V_{x}(x,y)\) and \(V_{y}(x,y)\) equal to zero and solving produces the critical points \((0,0)\) and \((4/3,2)\); the other solutions of the system, \((4,0)\) and \((0,6)\), lie on the boundary of the domain, where the volume is zero. At \((0,0)\), the volume is zero, a global minimum. At \((4/3,2)\), the Second Partials Test can be applied.

\(V_{xx}(x,y) \) \(= -4y \)
\(V_{yy}(x,y) \) \(= \frac{-8x}{3}\)
\(V_{xy}(x,y) \) \(= \frac{1}{3}(24-12x-8y) \)

Because

\( V_{xx}(4/3,2)V_{yy}(4/3,2) -[V_{xy}(4/3,2)]^{2} = (-8)(-32/9)-(-8/3)^{2}= 64/3 > 0 \)

and

\( V_{xx}(4/3,2) = -8 < 0\)

the maximum volume is

\( V(4/3,2) = 1/3[24(4/3)(2)-6(4/3)^{2}(2)-4(4/3)(2^{2})]=64/9 \) cubic units.

Note that the volume is zero at the boundary points of the triangular domain of \(V\).
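For readers who want a numerical sanity check, the following Python sketch (my own addition, not part of the original example) approximates the partial derivatives of \(V\) by central finite differences and confirms that they vanish at \((4/3,2)\), where the volume equals \(64/9\).

```python
# Numerical check of Example 13.9.1: verify that the gradient of V
# vanishes at (4/3, 2) and that V(4/3, 2) = 64/9.

def V(x, y):
    # Volume of the box: V(x, y) = (1/3)(24xy - 6x^2 y - 4xy^2)
    return (24*x*y - 6*x**2*y - 4*x*y**2) / 3

def grad_V(x, y, h=1e-6):
    # Central finite differences approximate V_x and V_y.
    vx = (V(x + h, y) - V(x - h, y)) / (2 * h)
    vy = (V(x, y + h) - V(x, y - h)) / (2 * h)
    return vx, vy

x0, y0 = 4/3, 2
print(grad_V(x0, y0))    # both components are approximately 0
print(V(x0, y0), 64/9)   # 7.111... in both cases
```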

Example 13.9.2 Finding the Maximum Profit

Extrema applications in economics and business often involve more than one independent variable. For instance, a company may produce a product in several models. The price per unit and profit per unit are usually different for each model. The demand for each model is often a function of the prices of the other models as well as its own price.

An electronics manufacturer determines that the profit \(P\), in dollars, obtained by producing and selling \(x\) units of LCD televisions and \(y\) units of plasma televisions is approximated by the model

\(P(x,y) = 8x+10y-(0.001)(x^{2}+xy+y^{2}) - 10,000. \)

Find the production level that produces a maximum profit. What is the maximum profit?
Solution The partial derivatives of the profit function are

\( P_{x}(x,y) = 8-(0.001)(2x+y) \)

and

\(P_{y}(x,y) = 10-(0.001)(x+2y). \)

Setting these partial derivatives equal to zero produces a system of linear equations.

\( 8-(0.001)(2x+y) \) \(= 0 \) \( \rightarrow \) \( 2x+y \) \(=8000 \)
\( 10-(0.001)(x+2y) \) \(= 0\) \( \rightarrow \) \(x+2y \) \(= 10{,}000 \)

Solving this system yields \(x=2000\) and \(y=4000\).

The second partial derivatives of \(P\) are

\(P_{xx}(2000,4000) \) \(= -0.002 \)
\(P_{yy}(2000,4000) \) \(= -0.002 \)
\(P_{xy}(2000,4000) \) \(= -0.001. \)

Because \(P_{xx} < 0 \) and

\( P_{xx}(2000,4000) P_{yy}(2000,4000) - [P_{xy}(2000,4000)]^{2} = (-0.002)(-0.002)-(-0.001)^{2}= 3\times 10^{-6} \)

is greater than zero, a production level of \(x=2000\) units and \(y=4000\) units yields a maximum profit. The maximum profit is

\(P(2000,4000) = $18,000.\)
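As a quick cross-check (a sketch I am adding, assuming NumPy is available), the critical-point system can be solved numerically and the profit model evaluated at the solution:

```python
# Check of Example 13.9.2: solve 2x + y = 8000, x + 2y = 10000,
# then evaluate the profit model at the critical point.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
rhs = np.array([8000.0, 10000.0])
x, y = np.linalg.solve(A, rhs)

def P(x, y):
    # Profit model from the example, in dollars.
    return 8*x + 10*y - 0.001*(x**2 + x*y + y**2) - 10_000

print(x, y)      # 2000.0 4000.0
print(P(x, y))   # 18000.0
```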

Least Squares Method

Many of the examples in this section use mathematical models. For instance, Example 13.9.2 involves a quadratic model for profit. There are several ways to develop such models; the one described here is called the Least Squares Method.

In constructing a model to represent a particular phenomenon, the goals are simplicity and accuracy, which sometimes conflict. For instance, a simple linear model for the points in Figure 13.9.2 is

\( y=1.9x - 5\)

Figure 13.9.3 shows that by choosing the slightly more complicated quadratic model

\( y=0.20x^{2} -0.7x+1\)

greater accuracy is achieved.

Figure 13.9.2

Figure 13.9.3

As a measure of how well the model \(y=f(x)\) fits the collection of points

\( \{(x_{1},y_{1}),(x_{2},y_{2}),(x_{3},y_{3}),\ldots,(x_{n},y_{n})\}, \)

take the differences between the actual \(y\)-values and the values given by the model, square them, and sum them to produce the sum of the squared errors.

$$ S = \sum_{i=1}^{n} [f(x_{i})-y_{i}]^{2} \:\:\:\: \color{red}{ \text{ Sum of the squared errors.}}$$
Sum of the squared errors: \(S=d_{1}^{2}+d_{2}^{2}+d_{3}^{2}\)
Figure 13.9.4

Graphically, \(S\) can be interpreted as the sum of the squares of the vertical distances between \(f\)'s graph and the given points in the plane, as shown in Figure 13.9.4. If the model is perfect, then \(S = 0\). When the model is not perfect, the goal is to find the model that minimizes \(S\). For instance, the sum of the squared errors for the linear model in Figure 13.9.2 is

\(S=17.6. \)

Statisticians call the linear model that minimizes \(S\) the least squares regression line. The proof that this line actually minimizes \(S\) involves minimizing a function of two variables.
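The definition of \(S\) translates directly into code. The sketch below is illustrative only: the helper name and the sample data points are my own inventions, not the points plotted in Figures 13.9.2 and 13.9.3, though the two models are the ones given above.

```python
# Sum of the squared errors S for a model f over a set of points.

def sum_squared_errors(f, points):
    # S = sum over i of [f(x_i) - y_i]^2
    return sum((f(x) - y) ** 2 for x, y in points)

# Hypothetical data points (for illustration only).
points = [(0, 1), (1, 0.6), (2, 0.5), (3, 1.1), (4, 1.6)]

linear = lambda x: 1.9 * x - 5                    # model from Figure 13.9.2
quadratic = lambda x: 0.20 * x**2 - 0.7 * x + 1   # model from Figure 13.9.3

print(sum_squared_errors(linear, points))     # large: the line fits poorly here
print(sum_squared_errors(quadratic, points))  # small: the parabola fits well
```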

Theorem 13.9.1 Least Squares Regression Line

The least squares regression line for \( \{(x_{1},y_{1}),(x_{2},y_{2}),(x_{3},y_{3}),...,(x_{n},y_{n})\} \) is given by \(f(x)=ax+ b\), where

$$ a= \frac{n \sum_{i=1}^{n} x_{i}y_{i} - \sum_{i=1}^{n} x_{i}\sum_{i=1}^{n}y_{i}}{n \sum_{i=1}^{n} x_{i}^{2} - \left( \sum_{i=1}^{n} x_{i} \right)^{2}} $$

and

$$ b=\frac{1}{n} \left( \sum_{i=1}^{n} y_{i} - a \sum_{i=1}^{n} x_{i} \right). $$

Proof Let \(S(a,b)\) represent the sum of the squared errors for the model

\(f(x)=ax+b\)

and the given set of points. That is,

\(S(a,b)\) $$= \sum_{i=1}^{n} [f(x_{i}) - y_{i}]^{2} $$
$$= \sum_{i=1}^{n} (ax_{i} +b - y_{i})^{2}$$

where the points \((x_{i},y_{i})\) represent constants. Because \(S\) is a function of \(a\) and \(b\), the methods in Section 13.8 can be used to find the minimum value of \(S\). The first partial derivatives are

\(S_{a}(a,b)\) $$= \sum_{i=1}^{n} 2x_{i}(ax_{i} +b - y_{i})$$
$$= 2a\sum_{i=1}^{n} x_{i}^{2} + 2b \sum_{i=1}^{n} x_{i} - 2 \sum_{i=1}^{n} x_{i}y_{i} $$

and

\(S_{b}(a,b)\) $$= \sum_{i=1}^{n} 2(ax_{i} +b - y_{i})$$
$$= 2a\sum_{i=1}^{n} x_{i} + 2nb - 2 \sum_{i=1}^{n} y_{i}. $$

Set both partial derivatives equal to zero and solve the resulting system for \(a\) and \(b\); doing so produces the formulas given in the theorem. A translation of these formulas into code is sketched below.
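A direct translation of the theorem's formulas might look like the following Python sketch (the function name is mine; only the formulas come from Theorem 13.9.1):

```python
# Least squares regression line f(x) = a*x + b via Theorem 13.9.1.

def least_squares_line(points):
    n = len(points)
    sx = sum(x for x, _ in points)       # sum of x_i
    sy = sum(y for _, y in points)       # sum of y_i
    sxy = sum(x * y for x, y in points)  # sum of x_i * y_i
    sxx = sum(x * x for x, _ in points)  # sum of x_i^2
    a = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    b = (sy - a * sx) / n
    return a, b
```

Applied to the points of Example 13.9.3 below, this function returns \(a = 8/13\) and \(b = 47/26\) in floating point.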

Example 13.9.3 Finding the Least Squares Regression Line

Least squares regression line.
Figure 13.9.5

Find the least squares regression line for the points

(-3,0), (-1,1), (0,2), and (2,3).

Solution Table 13.9.1 shows the calculations involved in finding the least squares regression line using \(n=4\).

Table 13.9.1
x y xy x^{2}
-3 0 0 9
-1 1 -1 1
0 2 0 0
2 3 6 4
$$ \sum_{i=1}^{n} x_{i} = -2 $$ $$ \sum_{i=1}^{n} y_{i} = 6 $$ $$ \sum_{i=1}^{n} x_{i}y_{i} = 5 $$ $$ \sum_{i=1}^{n} x_{i}^{2} = 14 $$

Applying Theorem 13.9.1 produces

$$ a= \frac{n \sum_{i=1}^{n} x_{i}y_{i} - \sum_{i=1}^{n} x_{i}\sum_{i=1}^{n}y_{i}}{n \sum_{i=1}^{n} x_{i}^{2} - \left( \sum_{i=1}^{n} x_{i} \right)^{2}} = \frac{4(5)-(-2)(6)}{4(14)-(-2)^{2}} = \frac{32}{52} = \frac{8}{13} $$

and

\(b\) $$= \frac{1}{n} \left( \sum_{i=1}^{n} y_{i} - a \sum_{i=1}^{n} x_{i} \right)$$
$$= \frac{1}{4} \left[ 6 - \frac{8}{13}(-2) \right] $$
$$= \frac{47}{26}. $$

The least squares regression line is

$$ f(x) = \frac{8}{13} x + \frac{47}{26}$$

as shown in Figure 13.9.5.
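As an independent check (assuming NumPy is available), numpy.polyfit minimizes the same sum of squared errors and reproduces these coefficients in floating point:

```python
# Cross-check of Example 13.9.3 with NumPy's degree-1 least squares fit.
import numpy as np

xs = [-3, -1, 0, 2]
ys = [0, 1, 2, 3]

a, b = np.polyfit(xs, ys, 1)   # returns [slope, intercept]
print(a, 8/13)    # 0.6153846... in both cases
print(b, 47/26)   # 1.8076923... in both cases
```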



Internal Links

Parent Article: Calculus III 13 Functions with Several Variables