Linear Algebra, Applied Mathematics – Appd Math

Linear Algebra – Foundations

Linear Algebra

Start Line:

  1. DLM
    1. Appd Mathematics
      1. Linear Algebra…What You Will Learn:
        • Represent quantities that have a magnitude and a direction as vectors.
        • Read, write, and interpret vector notations.
        • Visualize vectors in R2.
        • Perform the vector operations of scaling, addition, dot (inner) product.
        • Reason and develop arguments about properties of vectors and operations defined on them.
        • Compute the (Euclidean) length of a vector.
        • Express the length of a vector in terms of the dot product of that vector with itself.
        • Evaluate a vector function.
        • Solve simple problems that can be represented with vectors.
        • Create code for various vector operations and determine their cost functions in terms of the size of the vectors.
        • Gain an awareness of how linear algebra software evolved over time and how our programming assignments fit into this (enrichment).
        • Become aware of overflow and underflow in computer arithmetic (enrichment).
        • Become proficient in the use of MATLAB to apply all of these concepts.

 


Date: August, September 2017. Location: Quito, Pichincha, Ecuador.

WBS Activity (Vector Algebra – )
Tuesday 15, 06:32 am

Tuesday 29, 04:41 am

Thursday 31, 04:41 am

Monday 04, 5:37 am

Monday 11, 5:37 am

Tuesday 12, 4:46 am

Thursday 14, 5:15 am

I keep attending:

  1. LAFF: Linear Algebra – Foundations to Frontiers
    1. Overview of the Course
    2. 0.3.3 MATLAB Basics
    3. Origins of MATLAB
    4. Vectors in Linear Algebra
      1. Notation
      2. Unit Basis Vectors
      3. Simple Vector Operations
        1. Equality, Assignment and Copy
        2. Vector Addition
        3. Scaling
        4. Subtraction
      4. Advanced Vector Operations
        1. Scaled Vector Addition (AXPY)
        2. Linear Combinations of Vectors
        3. Dot or Inner Product (DOT)
        4. Vector Length (NORM2)
        5. Vector Functions
        6. Vector Functions that map a vector to a vector
    5. The Science of NFL Football: Vectors

 

 

This course is designed not only to teach the standard topics of a typical linear algebra course, but also to investigate how to translate theory into algorithms. As is typical in linear algebra courses, we will often start by studying operations with small matrices. In practice, however, one often wants to perform operations with large matrices, so we generalize the techniques to formulate practical algorithms and their implementations.
If you want to learn more about MATLAB, here are some suggestions you may want to investigate:

  • Matlab Onramp is a free 2-hour interactive online tutorial.
  • MATLAB Central is a place where people interested in MATLAB can be part of a community. Here, you may want to check out ThingSpeak™, the open IoT Platform with MATLAB Analytics that allows you to aggregate, visualize, and analyze live data streams in the cloud.

 

Definition from Vectors in Linear Algebra

Definition 1.1 We will call a one-dimensional array of n numbers a vector of size n:

$$x = \begin{pmatrix} \chi_0 \\ \chi_1 \\ \vdots \\ \chi_{n-1} \end{pmatrix}, \qquad \chi_i \in \mathbb{R}.$$

$\mathbb{R}^n$ denotes the set of all vectors of size n with components in $\mathbb{R}$.

Unit Basis Vectors

An important special set of vectors is the unit basis vectors. The unit basis vector $e_j \in \mathbb{R}^n$ has a one in component $j$ and zeros in every other component. For example, in $\mathbb{R}^3$, with components indexed from 0, $e_1 = (0, 1, 0)^T$.

Equality, Assignment and Copy

 

Now we can talk about an algorithm for setting y equal to x: we are computing y := x. We have already seen that each of the components of y has to be set to the corresponding component of x, so psi sub i has to become chi sub i. We have to do this for all indices i from 0 to n minus 1; since we start indexing at 0, for vectors of length n the loop runs up to n minus 1. We create an algorithm for this assignment by writing it as a for loop.
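As a concrete illustration, here is a minimal MATLAB sketch of this copy loop. The function name vec_copy is my own, not a library routine, and since MATLAB indexes from 1, the course's indices 0 through n minus 1 become 1 through n:

    function y = vec_copy( x )
    % Set y := x by copying each component: psi_i := chi_i.
        n = length( x );
        y = zeros( n, 1 );
        for i = 1:n
            y( i ) = x( i );
        end
    end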

 

Vector Addition

Now, if we want an algorithm for computing the result vector z that comes from adding x to y, we can expose the components of z, x, and y and recognize that the i-th component of z simply equals the sum of the i-th components of x and y. We can then summarize that as a little loop: for i from 0 to n minus 1, the i-th component of z, zeta sub i, is computed as chi sub i plus psi sub i.
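A minimal MATLAB sketch of this loop (vec_add is a name of my choosing):

    function z = vec_add( x, y )
    % Compute z := x + y componentwise: zeta_i := chi_i + psi_i.
        n = length( x );
        z = zeros( n, 1 );
        for i = 1:n
            z( i ) = x( i ) + y( i );
        end
    end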

Scaling

 

What if instead we want an algorithm that computes the vector y as the vector x stretched by a scaling factor alpha, y := alpha times x? We expose the components of y and of x, and all we need to do is set each element of y to the corresponding element of x scaled by alpha. As an algorithm, that means setting psi sub i equal to alpha times chi sub i, for all i.
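Again, a minimal MATLAB sketch (vec_scal is my own name):

    function y = vec_scal( alpha, x )
    % Compute y := alpha * x componentwise: psi_i := alpha * chi_i.
        n = length( x );
        y = zeros( n, 1 );
        for i = 1:n
            y( i ) = alpha * x( i );
        end
    end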

Subtraction

 

Let’s review the parallelogram method for vector addition. You lay out your vectors, and the diagonal becomes the vector x plus y:

You can do the same thing for vector subtraction. You lay out your vectors. And then, the other diagonal becomes the vector x minus y. Obviously, you have to make sure that it points in the right direction.

Now, how do you compute x minus y? Well, you expose the components of vectors x and y, and you simply subtract each component of y from the corresponding component of x.
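A minimal MATLAB sketch of componentwise subtraction (vec_sub is a hypothetical name):

    function z = vec_sub( x, y )
    % Compute z := x - y componentwise: zeta_i := chi_i - psi_i.
        n = length( x );
        z = zeros( n, 1 );
        for i = 1:n
            z( i ) = x( i ) - y( i );
        end
    end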

In summary: subtraction, like addition and scaling, is computed componentwise, $\zeta_i := \chi_i - \psi_i$ for $i = 0, \ldots, n-1$.

Scaled Vector Addition (AXPY)

 

We’re now going to talk about an operation that will be very important as we start looking at more complex operations in linear algebra later on. It’s hard to picture, though. It is known as the axpy operation, and it takes a vector, scales it, and then adds it to another vector. Given two vectors x and y of size n and a scalar alpha, the axpy operation is given by y := alpha x plus y.

Specifically:

$$\begin{pmatrix} \psi_0 \\ \psi_1 \\ \vdots \\ \psi_{n-1} \end{pmatrix} := \begin{pmatrix} \alpha \chi_0 + \psi_0 \\ \alpha \chi_1 + \psi_1 \\ \vdots \\ \alpha \chi_{n-1} + \psi_{n-1} \end{pmatrix}$$

These kinds of vector operations have been very important since the 1970s, and back then the language of choice in this area was Fortran 77. Fortran 77 had the limitation that variables and subroutines had to be identified with at most six letters and numbers, so people had to be somewhat innovative about how to name operations and the subroutines that implemented them. The name axpy is simply an abbreviation of alpha times x plus y: a for the scalar alpha, times x, p for plus, y.

 

If we now want an algorithm for performing this operation, notice that the i-th component of y has to be updated by scaling the i-th component of x and adding it to the i-th component of y: psi sub i becomes alpha times chi sub i plus psi sub i. As usual, we put a loop around that so this is done for all components, 0 to n minus 1.

It is often emphasized that the AXPY operation is typically used in situations where the result overwrites the input vector y.
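Here is a minimal MATLAB sketch of axpy (the name vec_axpy is mine; since MATLAB passes arguments by value, the updated y is returned rather than overwritten in place):

    function y = vec_axpy( alpha, x, y )
    % Compute y := alpha * x + y componentwise:
    % psi_i := alpha * chi_i + psi_i.
        n = length( x );
        for i = 1:n
            y( i ) = alpha * x( i ) + y( i );
        end
    end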

Linear Combinations of Vectors

If we’re given two vectors u and v of length m and two scalars alpha and beta, then taking the linear combination of u and v with coefficients alpha and beta is given by alpha times u plus beta times v: the scalar alpha times the vector u, plus the scalar beta times the vector v. If we expose the components of u and v, what does this mean? It means that we scale each of the individual components of u by alpha, scale each of the components of v by beta, and add the results componentwise. So taking this linear combination of the vectors u and v with coefficients alpha and beta means taking the same linear combination of each of the corresponding components of u and v.

More generally

Instead of writing the linear combination out componentwise, we can compute it as a sequence of AXPY operations. Why? We start with the zero vector, which is a vector. A scalar times a vector, added to that vector, is an AXPY. Once we have computed that vector, we take a scalar times the second vector and add it to the result. So the first AXPY computes the first partial sum, the second AXPY computes the next, and you can imagine that we keep doing this for all of the vectors until we’re completely done.

This then motivates the following algorithm. You start by setting w equal to 0. Then, for j equals 0 to n minus 1, you take the scalar chi sub j times v sub j and add that to w. For j equals 0, that gives the first term, which is stored in w; then you continue with j equals 1, and so forth.
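A minimal MATLAB sketch of this algorithm, assuming (my choice of storage, not the course's) that the vectors v_0 through v_{n-1} are stored as the columns of an m-by-n array V and the coefficients chi_0 through chi_{n-1} in a vector chi:

    function w = lin_comb( chi, V )
    % Compute w := chi_0 * v_0 + ... + chi_{n-1} * v_{n-1}
    % as a sequence of axpy operations, starting from w = 0.
        [ m, n ] = size( V );
        w = zeros( m, 1 );
        for j = 1:n
            w = chi( j ) * V( :, j ) + w;   % one axpy
        end
    end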

Shortly, this will become really important as we make the connection between linear combinations of vectors, linear transformations, and matrices.

Dot or Inner Product (DOT)

 

If we’re given vectors x and y of size n, then the dot product is defined as

$$\chi_0 \psi_0 + \chi_1 \psi_1 + \cdots + \chi_{n-1} \psi_{n-1} = \sum_{i=0}^{n-1} \chi_i \psi_i.$$

Now what do we have here? We multiply the first components together, then we multiply the second components together and add that to the product of the first components. We keep doing this until we reach the last components, which we multiply together and add in as well. More concisely, this is the sum over i from 0 to n minus 1 of chi sub i times psi sub i.

To motivate an algorithm, let’s look at this slightly differently. Think of it as taking a scalar alpha and first assigning 0 to it. After that, we multiply the first two components together and add that to alpha. Then we multiply the next two components together and add that to what is already in alpha, and so forth. This motivates the algorithm given here: you start by setting alpha equal to zero, and then, for i equals zero to n minus 1, you take what has already been accumulated in alpha and add to it the product of the components chi sub i and psi sub i.
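A minimal MATLAB sketch of this accumulation (vec_dot is my own name):

    function alpha = vec_dot( x, y )
    % Accumulate the dot product:
    % alpha := alpha + chi_i * psi_i for each i.
        n = length( x );
        alpha = 0;
        for i = 1:n
            alpha = alpha + x( i ) * y( i );
        end
    end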

Often we will use a slightly different notation to denote the dot product: we write it as x transpose y, where the superscript T means transposition. What does transposition mean? If we expose the components of x and y, transposition means that you take x, which is a column vector, and turn it into a row vector; you take the vector and put it on its side. Multiplying the row vector times the column vector then means multiplying the first components together, adding the product of the second components, and so forth.

Vector Length (NORM2)

If we take that further and look at a vector of size n, then the length of that vector is given by the square root of the sum of the squares of its components, which we can write in shorthand using summation notation:

$$\|x\|_2 = \sqrt{\chi_0^2 + \chi_1^2 + \cdots + \chi_{n-1}^2} = \sqrt{\sum_{i=0}^{n-1} \chi_i^2}.$$

There is a relation between the dot product and the length of a vector: the dot product of x with itself, $x^T x = \sum_{i=0}^{n-1} \chi_i^2$, is exactly the sum of the squares of the components. Therefore, we conclude that the length of the vector x is just the square root of the dot product of x with itself:

$$\|x\|_2 = \sqrt{x^T x}.$$
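A minimal MATLAB sketch using the dot product (vec_norm2 is my own name):

    function alpha = vec_norm2( x )
    % Compute ||x||_2 = sqrt( x' * x ).
    % Note: squaring the components can overflow or underflow in
    % floating point; robust implementations first scale by the
    % largest-magnitude component (see the enrichment on overflow).
        alpha = sqrt( x' * x );
    end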

Vector Function

 

A vector function is a function that takes one or more scalars and/or one or more vectors as inputs and then produces a vector as an output.

Let’s look at an example. Here we have an example of f, a function of two scalars. How do we know these are scalars? Notice the Greek letters: we agreed to use lowercase Greek letters for scalars. So f takes two scalars, alpha and beta, as input and produces a vector of size two as output, where the first component adds the two input scalars and the second component subtracts the second scalar from the first:

$$f(\alpha, \beta) = \begin{pmatrix} \alpha + \beta \\ \alpha - \beta \end{pmatrix}.$$

If we want to evaluate f(-2, 1), all we do is substitute -2 for alpha and 1 for beta, giving the vector with components -2 plus 1 and -2 minus 1. Doing the arithmetic, we get the vector (-1, -3).
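In MATLAB, this example can be sketched with an anonymous function:

    % f( alpha, beta ) = ( alpha + beta, alpha - beta )
    f = @( alpha, beta ) [ alpha + beta; alpha - beta ];
    f( -2, 1 )          % returns [ -1; -3 ]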

Let’s do another example. Here we have a function of a scalar and a vector of size three; the output is that vector with the scalar added to each of its components. To evaluate this function for the specific inputs -2 for the scalar and the vector (1, 2, 3), we again substitute -2 for alpha, and 1, 2, and 3 for the components of the input vector, which yields the vector (-1, 0, 1).
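A sketch of this second example in MATLAB:

    % Add the scalar alpha to each component of the vector x.
    g = @( alpha, x ) x + alpha;
    g( -2, [ 1; 2; 3 ] )   % returns [ -1; 0; 1 ]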

We have already seen other examples. The AXPY operation, thought of as a function, is the function axpy of a scalar alpha and vectors x and y, whose output is the vector alpha times x plus y. We also saw the DOT function, which takes two vectors x and y as input; the result, the dot product of the two vectors, is a scalar. Now, you might say a scalar is not a vector, but we will often think of a scalar as a vector of size one.

What we will see in the next unit is that we can think of these vector functions as mapping one vector to another vector.

Vector Functions that map a vector to a vector

Now we’re ready to look at functions that map vectors to vectors. Next week, we’ll look at a special case of these kinds of functions, called «linear transformations». Here, we consider functions that map a vector of size n to a vector of size m.

In the previous units, we looked at a function that took two scalars as input and produced a vector as an output.

We can look at a function g that takes as input a vector with components alpha and beta and then produces the exact same vector as the function f produced.

Here was another example of a function that took as input a scalar and a vector.

We can instead look at a function g that stacks the scalar on top of the vector, creating an input vector of size four instead of the size-three vector we had before, and then evaluates in exactly the same way.

The whole point is that we now have a function that takes a vector as input and produces a vector as output.
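A MATLAB sketch of this repackaging, reusing the first example:

    % g maps a vector to a vector: the two scalar inputs of f
    % are packed into a single input vector v.
    g = @( v ) [ v(1) + v(2); v(1) - v(2) ];
    g( [ -2; 1 ] )      % returns [ -1; -3 ], the same as f( -2, 1 )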

So in summary, this insight allows us to focus on vector functions that simply take one vector as input and produce one vector as output. What we will see next week is that there’s a special class of such functions called «linear transformations» that are of great importance to linear algebra.

Written by: Larry Francis Obando – Technical Specialist

Escuela de Ingeniería Eléctrica de la Universidad Central de Venezuela, Caracas.

Escuela de Ingeniería Electrónica de la Universidad Simón Bolívar, Valle de Sartenejas.

Escuela de Turismo de la Universidad Simón Bolívar, Núcleo Litoral.

Contact: Ecuador (Quito, Guayaquil, Cuenca)

WhatsApp: 00593984950376

email: dademuchconnection@gmail.com

Copywriting, Content Marketing, Theses, Monographs, Academic Papers, White Papers (Spanish – English)