### Archive

Archive for December, 2007

## Binomial Coefficients

During vacation, I was reading through C. Boyer’s ‘A History of Mathematics.’  I had been introduced to Pascal’s triangle early in probability courses in college and then again during my first computational geometry course.  It was interesting to read about the history of Yang Hui’s work on the arithmetic triangle and the publications from the ‘Precious Mirror.’

For fun, I ported the Pascal’s triangle example from the AS 2 downloads section to AS 3 (not much of a port) and added a binomial coefficient generator to the Singularity.Numeric package.  I’ll use this in future examples dealing with k-th order Bézier curves (and illustrations of numerical instability), as the Bernstein polynomials form the basis for these curves.
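The generator itself is ActionScript, but the underlying recurrence is language-neutral.  Here is a minimal sketch in Python (the names are mine, not from the Singularity package) that builds row n of Pascal’s triangle with the multiplicative identity C(n, k) = C(n, k−1)·(n − k + 1)/k, staying in exact integer arithmetic:

```python
def binomial_row(n):
    """Return row n of Pascal's triangle, i.e. C(n, 0) .. C(n, n).

    Uses the recurrence C(n, k) = C(n, k-1) * (n - k + 1) / k; the
    division is always exact, so integer arithmetic suffices.
    """
    row = [1]
    for k in range(1, n + 1):
        row.append(row[-1] * (n - k + 1) // k)
    return row
```

For example, binomial_row(4) gives the coefficients of (a + b)⁴: [1, 4, 6, 4, 1].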

Categories: General, Math

## Limit Proofs

At first, I thought someone was sending me their calculus homework, but at this time of the semester, it’s most likely a question that was asked on a final.  Anyway, I don’t mind posting solutions on topics that are of general interest to high-school and college students … whenever I have the time 🙂

Question:  What is the limit as x→2 of the function x² + 3, using only the definition of a limit?

So-called epsilon-delta proofs can be cumbersome for polynomials of degree greater than one because of the apparent lack of a single delta value.  To review, limit as x→c of f(x) = f(c) means that for any ε > 0, there exists δ > 0 such that 0 < |x – c| < δ ==> |f(x) – f(c)| < ε .  Intuitively, the definition means we can make f(x) arbitrarily close to f(c) by making x ‘sufficiently’ close to c.

For the problem at hand, it appears that the limit as x→2 is 7.  To prove this assertion using the definition, given any ε > 0 we must show there exists a δ > 0 (remember that the delta value is a function of epsilon) such that |x – 2| < δ implies |f(x) – 7| < ε.  Now,

|f(x) – 7| = |x² – 4| = |(x+2)(x-2)| = |x+2||x-2|, which must be made less than ε.  Notice that lim as x→2 of |x-2| = 0 (this is easily proven from the definition), so the trick is to choose an ‘easy’ bound for one factor and limit the other.  Consider two deltas, δ1 and δ2.  Since x-2 approaches zero as x→2, set δ1 = 1.  Now,

|x-2| < δ1 = 1 ==> -1 < x – 2 < 1 ==> 3 < x + 2 < 5 ==> |x+2| < 5. Take δ2 = ε /5 and δ = min(δ1, δ2).

Case 1: Any ε ≥ 5. |x-2| < δ = min(δ1, δ2) ==> |x-2| < 1, so

|f(x)-7| = |x+2||x-2| < 5|x-2| < 5 ≤ ε .

Case 2: Any 0 < ε  < 5. |x-2| < δ = min(δ1, δ2) ==> |x-2| < ε/5, so

|f(x)-7| = |x+2||x-2| < 5|x-2| < 5(ε/5) = ε .

Strictly speaking, separate cases are not necessary, but I find that students grasp the concept more easily the first few times this way.
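The case split above collapses into the single choice δ = min(1, ε/5), which is easy to sanity-check numerically.  This sketch (my own, not part of the proof) samples points with |x − 2| < δ and confirms |f(x) − 7| < ε:

```python
import random

def delta_for(eps):
    # From the proof: delta = min(delta1, delta2) = min(1, eps/5)
    return min(1.0, eps / 5.0)

def check(eps, trials=10_000):
    f = lambda x: x * x + 3          # f(x) = x^2 + 3, with limit 7 at x = 2
    d = delta_for(eps)
    # Sample x values within delta of 2 and verify the epsilon bound
    return all(abs(f(2.0 + random.uniform(-d, d)) - 7.0) < eps
               for _ in range(trials))
```

Both the small-ε and large-ε cases pass: check(1e-3), check(0.5), and check(10.0) all return True.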

Categories: Math

## UltraShock V2

Just got back from vacation and noticed that the very long-awaited UltraShock V2 now appears to be up and running.  Check it out here.

Categories: Flash

## Kennedy Space Center

December 9, 2007

Spent most of the day yesterday at Kennedy Space Center.  It was the space program in the 1960’s and early 70’s that got me interested in math and science in the first place (okay, the original Star Trek series had some influence as well).  If you ever have the opportunity to visit, it’s well worth the $$$.  Unfortunately, due to the latest schedule update, it looks like I’m going to miss the STS-122 launch today.

Blog posts are on hold until I get back from the cruise.  Until then, enjoy the artwork of Alan Foxx – I met him last night at his shop in Downtown Disney.  The online images don’t do justice to the actual prints, though.

Categories: General

## Lagrange Multipliers II

As I head out for vacation, here is a simple question.  What is the distance from the plane

ax + by + cz = d

to the origin?  The answer is a simple click away (Mathworld).  For fun, suppose we also wanted to know the coordinates of the point on the plane closest to the origin.  This can be solved as a constrained optimization problem, serving as another example of the technique of Lagrange multipliers.

Informally, we wish to minimize x² + y² + z² subject to the constraint ax + by + cz – d = 0.  Relax the constraint into the objective and form the function,

L(x,y,z,λ) = x² + y² + z² – λ(ax + by + cz – d) .

The stationary points are determined from

∂L/∂x = 2x – aλ = 0 , or x = aλ/2 [1]

∂L/∂y = 2y – bλ = 0 , or y = bλ/2 [2]

∂L/∂z = 2z – cλ = 0 , or z = cλ/2 [3]

∂L/∂λ = 0 recovers the original constraint, ax + by + cz = d [4]

Substitute x, y, and z from [1-3] into [4] to obtain λ = 2d/(a² + b² + c²) [5].

Substitute the λ value from [5] into [1-3] to obtain the coordinates of the point closest to the origin,

e = a² + b² + c²

x* = ad/e, y* = bd/e, z* = cd/e .  The distance from (x*, y*, z*) to the origin is

D = √[((ad)² + (bd)² + (cd)²)/e²]

= |d|√(e/e²) , or D = |d|/√(a² + b² + c²) (the absolute value covers the case d < 0), which matches the result from analytic geometry.  Of course, calculus is not necessary (or even convenient) for this derivation, but it is another good illustration of Lagrange multipliers.

Formally proving that the stationary point represents a minimum is well beyond a blog post, but intuitively the distance from the origin to a point on the plane can be made arbitrarily large, so the stationary point cannot represent a maximum distance.
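As a quick numerical sanity check (a sketch of my own, not from the derivation above), the closed-form point and distance can be verified in Python for a sample plane:

```python
import math

def closest_point_to_origin(a, b, c, d):
    # Lagrange-multiplier result: lambda = 2d/e with e = a^2 + b^2 + c^2,
    # giving (x*, y*, z*) = (ad/e, bd/e, cd/e)
    e = a * a + b * b + c * c
    return (a * d / e, b * d / e, c * d / e)

a, b, c, d = 1.0, 2.0, 2.0, 6.0           # sample plane x + 2y + 2z = 6
x, y, z = closest_point_to_origin(a, b, c, d)

on_plane = a * x + b * y + c * z          # should reproduce d
dist     = math.sqrt(x * x + y * y + z * z)
closed   = abs(d) / math.sqrt(a * a + b * b + c * c)  # |d| / sqrt(e)
```

For this plane, the recovered point is (2/3, 4/3, 4/3), it satisfies the plane equation, and both dist and the closed-form closed come out to 2.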

Categories: Math

## Lagrange Multipliers

I never expected one Twitter comment to generate so many e-mails.  There are a lot of college students in the Flash community, many of whom are taking calculus courses, so it probably makes sense.  I could say check out Wikipedia, but there are a lot of topics for which a theoretical introduction is difficult to comprehend.  When I work with Calc III students studying extrema of functions, I like to start out with this problem.

minimize x² + y² + z²

on the hyperbolic cylinder x² – z² = 1 [1]

This is a constrained optimization problem.  A first instinct is to set the partial derivatives of the objective to zero, but they vanish only at the point (0, 0, 0), which does not lie on the constraint surface.  From the constraint, z² = x² – 1.  Eliminate z from the objective, which becomes

minimize w = 2x² + y² – 1 [2]

Now, ∂w/∂x = 4x and ∂w/∂y = 2y .  These are only zero at x = 0 and y = 0.  From [1], this implies z² = -1 , yielding an imaginary solution.  As an exercise, eliminate x from the objective, minimizing

w = 1 + y² + 2z² [3]

yielding y = 0, z = 0, and x = ±1.  The constraint restricts x to |x| ≥ 1, which also restricts the domain of [2]; the domain of [3], however, is all reals in y and z.  Since the candidate x = ±1 satisfies |x| ≥ 1, a minimum has been found.

Contrast this to solving the problem with Lagrange multipliers. This technique relaxes the constraint into the objective, turning a constrained optimization problem into an unconstrained problem. Create the function

L(x,y,z,λ) = x² + y² + z² – λ(x² – z² – 1)

Now, ∂L/∂x = 2x(1-λ), ∂L/∂y = 2y, ∂L/∂z = 2z(1+λ), and ∂L/∂λ = -(x² – z² – 1)

Setting the first three partials to zero yields y = 0 and either x = 0 or λ = 1; since x = 0 violates the constraint, λ = 1, and hence z = 0.  Plugging z = 0 into the last partial and setting it equal to zero yields x² = 1, or x = ±1.

Lagrange multipliers can be useful for finding stationary points of a constrained function, particularly when the classic approach breaks down or is cumbersome. Depending on the constraint(s), both maxima and minima may be found. Also, beware of saddle points.
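The hyperbola x² – z² = 1 can be parametrized as x = ±cosh t, z = sinh t, which makes the multiplier result easy to probe numerically.  This sketch (mine, not part of the post) samples constraint points at random and confirms that none beats the candidate value of 1 attained at (±1, 0, 0):

```python
import math
import random

def objective(x, y, z):
    return x * x + y * y + z * z

best = objective(1.0, 0.0, 0.0)   # candidate minimum from the multiplier method
found_smaller = False
for _ in range(10_000):
    t = random.uniform(-3.0, 3.0)
    y = random.uniform(-3.0, 3.0)
    x = random.choice([-1.0, 1.0]) * math.cosh(t)   # x^2 - z^2 = 1 holds
    z = math.sinh(t)
    if objective(x, y, z) < best - 1e-12:
        found_smaller = True
```

found_smaller stays False, as expected: on the constraint the objective equals cosh²t + y² + sinh²t = 1 + 2 sinh²t + y² ≥ 1.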

If you want to read more about the method and why it works, here are some good introductions,

http://dbpubs.stanford.edu:8091/~klein/lagrange-multipliers.pdf

http://tutorial.math.lamar.edu/classes/calcIII/lagrangemultipliers.aspx

Categories: Math