# DMelt:JMathlab/9 Vectors and Matrices


These data types serve either to represent multidimensional objects or to perform simultaneous calculations on large amounts of data, e.g. in statistical problems. In this chapter we discuss the latter aspect.

Vectors are marked with square brackets. The elements are entered as a comma-separated list. The commas may be omitted if the elements can be distinguished unambiguously, which however fails in the second example below:

```
>> x=[1,-2,3,-4]
x = [ 1  -2  3  -4 ]
>> x=[1 - 2 3 -4]   % Caution: 1-2=-1
x = [ -1  3  -4 ]
```

The colon operator and the function linspace are used to define ranges of numbers as vectors.

```
>> y=1:10              % 1 to 10, step 1
y = [ 1  2  3  4  5  6  7  8  9  10 ]
>> y=1:0.1:1.5         % 1 to 1.5, step 0.1
y = [ 1  1.1  1.2  1.3  1.4  1.5 ]
>> y=linspace(0,2,5)   % 5 equidistant numbers from 0 to 2
y = [ 0  0.5  1  1.5  2 ]
```
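
For readers more at home in Python: the same equidistant range can be sketched in plain Python (illustration only, not part of JMathLab; the helper name `linspace` mirrors the JMathLab function):

```python
# Minimal linspace sketch: COUNT equidistant points from a to b.
def linspace(a, b, count):
    step = (b - a) / (count - 1)
    return [a + k * step for k in range(count)]

print(linspace(0, 2, 5))   # the example from the text: [0.0, 0.5, 1.0, 1.5, 2.0]
```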

The number of elements in a vector x is returned by the function length(x); individual elements are extracted by supplying an index k, as in x(k). The index k must be a number in the range 1 to length(x) (inclusive). The colon operator plays a special role: used as an index, it returns all elements of the vector. Additionally, ranges of numbers can be used as indices.

```
>> y(2)                 % single element
ans = 0.5
>> y(:)                 % magic colon
ans = [ 0  0.5  1  1.5  2 ]
>> y(2:3)               % indices 2 to 3
ans = [ 0.5  1 ]
>> y(2:length(y))       % all from index 2
ans = [ 0.5  1  1.5  2 ]
>> y([1,3,4])           % indices 1, 3 and 4
ans = [ 0  1  1.5 ]
>> y([1,3,4]) = 9       % insert
ans = [ 9  0.5  9  9  2 ]
>> y([1,3,4]) = [1,2,3] % insert
ans = [ 1  0.5  2  3  2 ]
```
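
JMathLab indexing is 1-based and inclusive; a plain-Python analogue (0-based, exclusive upper bound) of the extraction examples, for comparison:

```python
# Plain-Python sketch of the indexing examples (not JMathLab code).
y = [0, 0.5, 1, 1.5, 2]
single = y[1]                       # y(2)           -> 0.5
sub    = y[1:3]                     # y(2:3)         -> [0.5, 1]
tail   = y[1:]                      # y(2:length(y)) -> [0.5, 1, 1.5, 2]
pick   = [y[i] for i in (0, 2, 3)]  # y([1,3,4])     -> [0, 1, 1.5]
```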

Matrices are handled in a similar way, only with two indices for the row number (first index) and the column number (second index). Rows are separated by either a semicolon or a linefeed during input.

```
>> M=[1:3 ; 4:6 ; 7:9]
M =
  1  2  3
  4  5  6
  7  8  9
>> M([1 3],:)
ans =
  1  2  3
  7  8  9
>> C=M<4
C =
  1  1  1
  0  0  0
  0  0  0
```
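
The same row selection and element-wise comparison can be sketched with nested lists in plain Python (illustration only):

```python
# Nested-list analogue of the matrix examples above.
M = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
rows13 = [M[0], M[2]]                         # M([1 3],:)
C = [[int(e < 4) for e in row] for row in M]  # C = M<4 (1 where true)
```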

The operators of chapter 2.2 may be applied to vectors and matrices. If scalar, element-wise operation is desired, some operators (* / ^) must be preceded by a point to distinguish them from the quite different linear-algebra versions of these operations (see chapter 2.9). Further useful functions are sum(vector) and prod(vector), which return the sum and product of the vector's elements.

### Matrix and Vector Operations

| Name(Arguments) | Function | Mod | Ref |
|---|---|---|---|
| linspace($var_1$,$var_2$,COUNT) | vector of COUNT equidistant numbers from $var_1$ to $var_2$ | O | |
| length($vector$) | number of elements in $vector$ | M,O | |
| zeros(ROWS[,COLUMNS]) | matrix of zeros | M,O | |
| ones(ROWS[,COLUMNS]) | matrix of ones | M,O | |
| eye(ROWS[,COLUMNS]) | matrix with ones on the diagonal | M,O | |
| rand(ROWS[,COLUMNS]) | matrix of random numbers | M,O | |
| hilb(RANK) | Hilbert matrix | M,O | |
| invhilb(RANK) | inverse Hilbert matrix | O | |
| size($matrix$) | number of rows and columns | M,O | |
| sum($var$) | if $var$ is a vector: sum of its elements; if $var$ is a matrix: sum of each column | M,O | |
| find($var$) | indices of nonvanishing elements | M,O | |
| max($var$) | largest element in $var$ | M,O | |
| min($var$) | smallest element in $var$ | M,O | |
| diag($var$[,OFFSET]) | if $var$ is a vector: matrix with $var$ as diagonal; if $var$ is a matrix: diagonal as vector | M,O | |
| det($matrix$) | determinant | M,O | |
| eig($matrix$) | eigenvalues | M,O | |
| inv($matrix$) | inverse | M,O | |
| pinv($matrix$) | pseudoinverse | M,O | |
| lu($matrix$) | LU decomposition | M,O | |
| svd($matrix$) | singular value decomposition (Lapack) | M,O | |
| qr($matrix$) | QR decomposition (Lapack) | M,O | |
| eigen($matrix$) | eigenvalues (Lapack) | O | |

### Working with vectors

Calculate the sums

[math] \sum_{k=1}^{n}k, \qquad \sum_{k=1}^{n}k^2, \qquad \sum_{k=1}^{n}k^3 [/math]

for n=10, n=100, n=10000. The solution is given below:

```
>> n=10; k=1:n;
>> sum(k), sum(k.*k), sum(k.^3)   % point!
ans = 55
ans = 385
ans = 3025
>> n=100; k=1:n;
>> sum(k), sum(k.*k), sum(k.^3)
ans = 5050
ans = 3.3835E5
ans = 2.5503E7
>> n=10000; k=1:n;
>> sum(k), sum(k.*k), sum(k.^3)
ans = 5.0005E7
ans = 3.3338E11
ans = 2.5005E15
```
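
All three sums have well-known closed forms, which the transcript values match; a quick plain-Python check (outside JMathLab):

```python
# sum k = n(n+1)/2, sum k^2 = n(n+1)(2n+1)/6, sum k^3 = (n(n+1)/2)^2
def sums(n):
    k = range(1, n + 1)
    return sum(k), sum(i * i for i in k), sum(i ** 3 for i in k)

for n in (10, 100, 10000):
    s1, s2, s3 = sums(n)
    assert s1 == n * (n + 1) // 2
    assert s2 == n * (n + 1) * (2 * n + 1) // 6
    assert s3 == s1 ** 2

print(sums(10))   # (55, 385, 3025), as in the transcript
```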

### Functions

Several standard matrices can be created by means of functions, without specifying individual elements: ones(n,m), zeros(n,m) and rand(n,m) return matrices whose elements are 1, 0 or random numbers between 0 and 1, respectively. eye(n,m) has diagonal elements 1 and 0 elsewhere, and hilb(n) creates the Hilbert matrix of order n.

```
>> A=rand(1,3)
A = 0.33138  0.94928  0.56824
>> B=hilb(4)
B =
  1    1/2  1/3  1/4
  1/2  1/3  1/4  1/5
  1/3  1/4  1/5  1/6
  1/4  1/5  1/6  1/7
```
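
The Hilbert matrix entries follow a simple rule; an exact plain-Python sketch using rational arithmetic (not JMathLab's hilb):

```python
from fractions import Fraction

# Hilbert matrix of order n: B[i][j] = 1/(i+j+1) with 0-based indices.
def hilb(n):
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

B = hilb(4)   # B[0] is [1, 1/2, 1/3, 1/4], as in the transcript
```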

The following functions are provided for matrix calculations: diag(x) (extracts diagonal elements), det(x) (determinant), eig(x) (eigenvalues), inv(x) (inverse), pinv(x) (pseudoinverse). The adjoint (transposed) matrix is created using the operator '.

```
>> det(hilb(4))
ans = 1/6048000
>> M=[2 3 1; 4 4 5; 2 9 3];
>> M'
ans =
  2  4  2
  3  4  9
  1  5  3
>> eig(M)
ans = [ 11.531  -3.593  1.062 ]
>> inv(M)
ans =
  0.75       0           -0.25
  4.5455E-2  -9.0909E-2  0.13636
  -0.63636   0.27273     9.0909E-2
```
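
The transpose M' has a compact analogue in plain Python, shown here for the same matrix (illustration only):

```python
# Transposing a nested-list matrix: zip(*M) iterates over columns.
M = [[2, 3, 1], [4, 4, 5], [2, 9, 3]]
Mt = [list(col) for col in zip(*M)]   # corresponds to M'
```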

The nontrivial functions are all based on the LU decomposition, which is also accessible as the function call lu(x). It has 2 or 3 return values; therefore the left side of the equation must provide multiple variables, see the example below:

```
>> M=[2 3 1; 4 4 5; 2 9 3]
>> [l,u,p]=lu(M)   % 2 or 3 return values
l =                % lower triangular matrix (permuted)
  0.5  0.14286  1
  1    0        0
  0.5  1        0
u =                % upper triangular matrix
  4  4  5
  0  7  0.5
  0  0  -1.5714
p =                % permutation matrix
  0  0  1
  1  0  0
  0  1  0
```
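
To make the decomposition concrete, here is a plain-Python sketch of LU with partial pivoting (my own illustration in exact rational arithmetic, not Jasymca's implementation); it reproduces the pivoting of the example, so p·M = l·u:

```python
from fractions import Fraction

def lu(A):
    """LU decomposition with partial pivoting.
    Returns L, U and a row permutation P such that row i of L*U
    equals row P[i] of A."""
    n = len(A)
    U = [[Fraction(x) for x in row] for row in A]
    L = [[Fraction(0)] * n for _ in range(n)]
    P = list(range(n))
    for j in range(n):
        piv = max(range(j, n), key=lambda i: abs(U[i][j]))  # pivot row
        U[j], U[piv] = U[piv], U[j]
        L[j], L[piv] = L[piv], L[j]
        P[j], P[piv] = P[piv], P[j]
        L[j][j] = Fraction(1)
        for i in range(j + 1, n):
            L[i][j] = U[i][j] / U[j][j]          # elimination multiplier
            for k in range(j, n):
                U[i][k] -= L[i][j] * U[j][k]
    return L, U, P
```

For the matrix of the example this yields u(3,3) = -11/7 ≈ -1.5714, matching the transcript.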

Without a preceding point, the arithmetic operators act as matrix operators, e.g. * corresponds to matrix and vector multiplication.

```
>> x=[2,1,4]; y=[3,5,6];
>> x.*y   % with point
ans = [ 6  5  24 ]
>> x*y    % without point
ans = 35
```
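
The two products can be written out in plain Python, which makes the distinction explicit (illustration only):

```python
# Element-wise product (x.*y) vs. inner/dot product (x*y).
x = [2, 1, 4]
y = [3, 5, 6]
elementwise = [a * b for a, b in zip(x, y)]   # -> [6, 5, 24]
dot = sum(a * b for a, b in zip(x, y))        # -> 35
```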

If one of the arguments is a scalar data type, the operation is repeated for each element of the other argument:

```
>> x=[2,1,4];
>> x+3
ans = [ 5  4  7 ]
```

Matrix division corresponds to multiplication by the pseudoinverse. The operator \ performs left division, which can be used to solve systems of linear equations:

```
>> M=[2 3 1; 4 4 5; 2 9 3];
>> b=[0;3;1];
>> x=M\b     % solution of M*x = b
x =
  -0.25
  -0.13636
  0.90909
>> M*x       % control
ans =
  0
  3
  1
```
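
What M\b computes for a square, nonsingular M can be sketched as Gaussian elimination; a plain-Python version in exact rational arithmetic (my illustration, not Jasymca's routine):

```python
from fractions import Fraction

def solve(A, b):
    """Solve A*x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    aug = [[Fraction(v) for v in row] + [Fraction(bi)]  # augmented matrix [A|b]
           for row, bi in zip(A, b)]
    for j in range(n):
        piv = max(range(j, n), key=lambda i: abs(aug[i][j]))
        aug[j], aug[piv] = aug[piv], aug[j]
        for i in range(j + 1, n):
            f = aug[i][j] / aug[j][j]
            for k in range(j, n + 1):
                aug[i][k] -= f * aug[j][k]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):                        # back substitution
        s = sum(aug[i][k] * x[k] for k in range(i + 1, n))
        x[i] = (aug[i][n] - s) / aug[i][i]
    return x

x = solve([[2, 3, 1], [4, 4, 5], [2, 9, 3]], [0, 3, 1])
# x = [-1/4, -3/22, 10/11], i.e. -0.25, -0.13636, 0.90909 as in the transcript
```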

Systems of linear equations can (and should) be solved directly with the function linsolve(A,b).

### Lapack

The application Jasymca (not the applet or midlet) contains JLAPACK [9], the Java port of the LAPACK [10] routines, with extended and better algorithms for matrix calculations. However, these are limited to matrices with real coefficients in floating-point format. The LAPACK routines are accessed through the following functions:

**svd(A)**

Singular value decomposition of A (1 or 3 return values).

```
>> A=[2 3 1; 4 4 5; 2 9 3];
>> svd(A)
ans = [ 12.263  3.697  0.9705 ]
```

**qr(A)**

QR decomposition of A (2 return values).

```
>> A=[2 3 1; 4 4 5; 2 9 3];
>> [q,r]=qr(A)
q =
  -0.40825  -5.3149E-2  -0.91132
  -0.8165   -0.4252     0.39057
  -0.40825  0.90354     0.13019
r =
  -4.899  -8.165  -5.7155
  0       6.2716  0.53149
  0       0       1.4321
```
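
A QR decomposition can be sketched in plain Python via classical Gram-Schmidt (my illustration; LAPACK uses Householder reflections, so individual signs in q and r may differ, as in the transcript's negative first column):

```python
from math import sqrt

def qr_gs(A):
    """QR via classical Gram-Schmidt for a square matrix A (list of rows).
    Returns Q as a list of orthonormal *columns* and R upper triangular."""
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]
    Q, R = [], [[0.0] * n for _ in range(n)]
    for j, v in enumerate(cols):
        w = v[:]
        for i, q in enumerate(Q):
            R[i][j] = sum(qk * vk for qk, vk in zip(q, v))  # projection
            w = [wk - R[i][j] * qk for wk, qk in zip(w, q)]
        R[j][j] = sqrt(sum(wk * wk for wk in w))            # column norm
        Q.append([wk / R[j][j] for wk in w])
    return Q, R
```

For the example matrix, R[0][0] = sqrt(24) ≈ 4.899, the magnitude shown by qr(A).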

**linsolve2(A, b)**

Solves $A\cdot x=b$ (1 return value). Example in chapter 2.13.1.

**linlstsq(A, b)**

Solves $A\cdot x=b$ for overdetermined systems (1 return value). For an example see the insert "Comparison of LAPACK and Jasymca Routines".

**eigen(A)**

Eigenvalues of A (1 return value).

```
>> A=[2 3 1; 4 4 5; 2 9 3];
>> eigen(A)
ans = [ 11.531  1.062  -3.593 ]
```

**Comparison of LAPACK and Jasymca Routines**

We calculate the 4th-degree regression polynomial for the following x,y data:

```
>> x=[1:6], y=x+1
x = [ 1  2  3  4  5  6 ]
y = [ 2  3  4  5  6  7 ]
>> polyfit(x,y,4)
p = 5.1958E-14  -9.6634E-13  -2.4727E-12  1  1
```

The coefficients p(1), p(2), p(3) should vanish, since x and y represent a perfect straight line. This is an unstable problem, and it can easily be extended until Jasymca fails completely. In our second attempt we use the Lapack routine linlstsq:

```
>> x=[1:6],y=x+1;
>> l=length(x); n=4;
>> X=(x'*ones(1,n+1)).^(ones(l,1)*(n:-1:0))
>> linlstsq(X,y')
ans =
  -1.6288E-18
  -7.0249E-17
  1.0653E-15
  1
  1
```
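
The expression for X builds a Vandermonde-style matrix: entry (i,j) is the x value raised to a decreasing power. A plain-Python sketch of the same construction (illustration only):

```python
# X[i][j] = x[i]^(n-j): columns hold the powers n, n-1, ..., 0
# of each x value, exactly as (x'*ones(1,n+1)).^(ones(l,1)*(n:-1:0)).
x = [1, 2, 3, 4, 5, 6]
n = 4
X = [[xi ** p for p in range(n, -1, -1)] for xi in x]
# e.g. the row for xi = 2 is [16, 8, 4, 2, 1]
```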

The coefficients p(1), p(2), p(3) are now significantly smaller. This particular problem can also be solved exactly using Jasymca routines and exact numbers, which avoids any rounding errors:

```
>> x=rat([1:6]); y=x+1;
>> polyfit(x,y,4)
p = 0  0  0  1  1
```