MATH3204/7234 (S2-2022): Assignment 02

Due: 09-September-2022 @13:59
1. [5 marks each]
(a) Show that if A ∈ C^{n×n} and λ is an eigenvalue of A, then λ̄ is an eigenvalue of A*.
(b) Let A ∈ C^{n×n}. For an arbitrary induced matrix norm, show that
    ρ(A) ≤ ‖A^k‖^{1/k},   k = 1, 2, . . . ,
where ρ(A) is the spectral radius of A.
(c) Consider A, B ∈ C^{n×n}. Prove that if one of A or B is non-singular, then AB and BA are similar.
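The bound in part (b) can be sanity-checked numerically (this is no substitute for a proof); the snippet below is our own illustration using the induced 2-norm:

```python
import numpy as np

# Check rho(A) <= ||A^k||^(1/k) for the induced 2-norm on a random matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
rho = max(abs(np.linalg.eigvals(A)))            # spectral radius of A
bounds = [np.linalg.norm(np.linalg.matrix_power(A, k), 2) ** (1.0 / k)
          for k in range(1, 8)]                 # ||A^k||_2^(1/k), k = 1..7
assert all(rho <= bk + 1e-12 for bk in bounds)
print(round(rho, 3), [round(bk, 3) for bk in bounds])
```

By Gelfand's formula the bounds in fact converge to ρ(A) as k grows, which the printed sequence hints at.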
2. [10 marks each]
Suppose A ∈ C^{m×n}, B ∈ C^{n×m}, and m ≤ n.
(a) We know AB and BA are not in general similar. However, we can still say something about their spectra. Show that the n eigenvalues of BA are the m eigenvalues of AB together with n − m zeros.
(b) Suppose A ∈ C^{n×n} is invertible and x, y ∈ C^n. Use the first part of this question to show that
    det(A + xy^T) = det(A)(1 + y^T A^{-1} x).
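The rank-one determinant identity of part (b) lends itself to a quick numerical check; the sketch below (our own, not part of the assignment) verifies it on random data:

```python
import numpy as np

# Numerical sanity check (not a proof) of
#   det(A + x y^T) = det(A) (1 + y^T A^{-1} x).
rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)   # shifted to stay well-conditioned
x = rng.standard_normal((n, 1))
y = rng.standard_normal((n, 1))
lhs = np.linalg.det(A + x @ y.T)
rhs = np.linalg.det(A) * (1.0 + float(y.T @ np.linalg.solve(A, x)))
print(abs(lhs - rhs) / abs(lhs))                  # tiny relative error
```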
3. [5 marks]
Consider the least squares problem
    min_{x ∈ R^n} (1/2) ‖Ax − b‖²,
where A ∈ R^{m×n} is rank deficient. Show that among all possible solutions to this problem, A†b has the smallest Euclidean norm.
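As an illustration (not a proof), one can compare the pseudoinverse solution with another least-squares solution of a rank-deficient problem; the construction below, including the null-space perturbation, is our own:

```python
import numpy as np

# A has rank r < n, so the least-squares solution is not unique.
rng = np.random.default_rng(2)
m, n, r = 8, 6, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r matrix
b = rng.standard_normal(m)
x_pinv = np.linalg.pinv(A) @ b          # A^+ b: the minimum-norm solution
# Adding any null-space vector leaves the residual ||Ax - b|| unchanged.
ns = np.linalg.svd(A)[2][r:].T          # null-space basis from the SVD
x_other = x_pinv + ns @ rng.standard_normal(n - r)
assert np.allclose(A @ x_pinv, A @ x_other)       # same residual
print(np.linalg.norm(x_pinv) < np.linalg.norm(x_other))
```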
4. [5 marks each]
In this question, you will study one of the reasons why forming the matrix inverse explicitly is so frowned upon.
(a) For n = 100, 200, 500, 1000, 2000, 5000, 10000, consider a series of linear systems
A_n x_n = b_n where A_n = randn(n, n), b_n = ones(n, 1) if you are coding in Matlab,
or A_n = np.random.randn(n, n), b_n = np.ones((n, 1)) in Python. Plot the time
it takes to solve A_n x_n = b_n using Matlab's inv or Python's np.linalg.inv as a
function of n. On the same plot, do the same using Matlab's celebrated "backslash"
or Python's np.linalg.solve. (Note: If your system's memory cannot handle
n = 10000, then only go up to n = 5000. However, the larger the size, the
better you can see the pattern.)
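A minimal timing sketch for this part might look as follows; the sizes are truncated so it runs quickly, and plotting is omitted; extend ns to the full list from the question when producing your plot:

```python
import time
import numpy as np

# Timing sketch for part (a): explicit inverse vs. direct solve.
np.random.seed(0)
ns = [100, 200, 500]                    # truncated; the question goes up to 10000
t_inv, t_solve = [], []
for n in ns:
    A = np.random.randn(n, n)
    b = np.ones((n, 1))
    t0 = time.perf_counter()
    x1 = np.linalg.inv(A) @ b           # form A^{-1} explicitly, then multiply
    t_inv.append(time.perf_counter() - t0)
    t0 = time.perf_counter()
    x2 = np.linalg.solve(A, b)          # LU-factorization-based solve
    t_solve.append(time.perf_counter() - t0)
    assert np.allclose(x1, x2)          # same answer, different cost
print(list(zip(ns, t_inv, t_solve)))
```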
(b) Does your observation in comparing the times for these two implementations roughly
match that predicted by the theory in terms of "flops"? Provide some explanation
for your answer.
On paper, these functions implement the same thing, namely x_n = A_n^{-1} b_n,
but on a computer, they perform differently! This is one of the places where numerical
linear algebra differs from linear algebra.
(c) Now generate a similar plot and make similar comparisons as above. However,
this time construct A_n using sprandn(n,n,0.1) or sp.sparse.random(n,n,0.1,
format='csc') if you are using Matlab or Python, respectively. These commands
construct random but sparse matrices. Feel free to check the relevant documentation
to see what each argument of these functions is used for. The third argument
controls the density of the matrix, i.e., what percentage of the entries of the matrix
are non-zero values. You can also compute the number of non-zero entries
with the commands nnz(A) in Matlab or A.nnz in Python. Don't make it too sparse,
as the matrix might end up being singular; in this question a 0.1 density level is good
for seeing what is going on. If you are coding in Python, the relevant functions for
solving the linear systems are sp.sparse.linalg.inv and sp.sparse.linalg.spsolve.
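A sparse analogue might be sketched as below; the small n and the diagonal shift (added to guarantee non-singularity via diagonal dominance) are our own choices, not part of the question:

```python
import numpy as np
import scipy.sparse as sps
import scipy.sparse.linalg as spla

# Small sparse analogue of the timing experiment in part (c).
rng = np.random.default_rng(3)
n = 200
A = sps.random(n, n, 0.1, format="csc", random_state=rng)
A = (A + n * sps.eye(n)).tocsc()        # shift: strictly diagonally dominant, hence non-singular
b = np.ones(n)
x_inv = spla.inv(A) @ b                 # explicit sparse inverse (avoid in practice)
x_sp = spla.spsolve(A, b)               # sparse direct solve
print(np.allclose(x_inv, x_sp))
```

Wrap the two solve lines with `time.perf_counter()` calls, as in the dense sketch, to produce the requested timings.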
(d) Does your observation in comparing the time for these two implementations roughly
match that predicted by the theory in terms of “flops”? Provide some explanation
for your answer.
Side note 1: You might have actually seen that, for a matrix of the same size, sparse
solvers take longer than when the matrix is densely stored. This has in part to do
with the way sparse matrices are stored and with issues related to cache/memory
access/locality, etc. So sparse matrices are easy to store and more efficient for
many computations; however, depending on the circumstances, it might be more
time-consuming to solve sparse linear systems.
Side note 2: In the same spirit as the side note above, for our question here, it
might actually be a lot more efficient to first convert A_n to a dense matrix using
A.todense(), and then use sp.linalg.inv and sp.linalg.solve (feel free to
try this, but no need to report anything on this side note). But for very large n,
depending on your memory, storing the dense matrix might not be possible at all,
and we have to work with sparse matrix representations. We assume this is the
situation here.
5. [5 marks each]
For solving Ax = b, consider the stationary Richardson iteration
    x_{k+1} = x_k + α (b − A x_k).
For this question, we assume that all eigenvalues of A are real.
(a) Find the iteration matrix.
(b) Show that if λ_min < 0 < λ_max, the method will diverge for some initial iterate.
(c) Assuming instead that all eigenvalues are positive, find the spectral radius of the iteration matrix in terms of α, λ_min, and λ_max.
(d) Find α_min < α_max such that the method converges for any α_min < α < α_max. In other words, find the largest range of α that ensures the method always converges.
(e) What is the optimal value of α? In other words, find α_opt that minimizes the spectral radius of the iteration matrix.
(f) Suppose A ≻ 0. Using this optimal step-size, obtain the optimal convergence rate in terms of the matrix condition number.
(g) Implement the stationary Richardson iteration and apply it to the matrix generated by the code below. For this assignment, use beta = gamma = 0, N = 50, x_0 = 0, and a right-hand-side vector b containing all ones. Compare the performance using three step-sizes: α_opt, 0.5 α_opt, and 1.1 α_opt. What do you observe? Terminate the algorithm if the Euclidean norm of the residual vector satisfies ‖r_k‖_2 = ‖b − A x_k‖_2 ≤ 10^{−12} or if a maximum of 20,000 iterations is reached.
function A = lcd(beta, gamma, N)
%
% Simple code to generate test matrices for this question and maybe some
% later assignments
% Usage:
%   to generate symmetric positive definite, call with beta = gamma = 0
%   to generate nonsymmetric, call, e.g., with beta = gamma = 1/2
%
% Note: output matrix is of size N^2-by-N^2

ee = ones(N, 1);
a = 4; b = -1 - gamma; c = -1 - beta; d = -1 + beta; e = -1 + gamma;
t1 = spdiags([c*ee, a*ee, d*ee], -1:1, N, N);
t2 = spdiags([b*ee, zeros(N, 1), e*ee], -1:1, N, N);
A = kron(speye(N), t1) + kron(t2, speye(N));
end

For Matlab, use the above code snippet.
import numpy as np
import scipy as sp

def lcd(beta, gamma, N):
    # Simple code to generate test matrices for this question and maybe
    # some later assignments
    # Usage:
    #   to generate symmetric positive definite, call with beta = gamma = 0
    #   to generate nonsymmetric, call, e.g., with beta = gamma = 1/2
    #
    # Note: output matrix is of size N^2-by-N^2

    ee = np.ones((1, N))
    a, b, c, d, e = 4, -1 - gamma, -1 - beta, -1 + beta, -1 + gamma
    t1 = sp.sparse.spdiags(np.vstack([c * ee, a * ee, d * ee]), np.arange(-1, 2), N, N)
    t2 = sp.sparse.spdiags(np.vstack([b * ee, np.zeros((1, N)), e * ee]), np.arange(-1, 2), N, N)
    A = sp.sparse.kron(sp.sparse.eye(N, N), t1) + sp.sparse.kron(t2, sp.sparse.eye(N, N))
    return A

For Python, use the above code snippet.
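Part (g) might be sketched as below, assuming the lcd construction above with beta = gamma = 0; the small N and single step-size are illustrative simplifications (the assignment asks for N = 50 and three step-sizes):

```python
import numpy as np
import scipy.sparse as sps

def lcd(beta, gamma, N):
    # Test-matrix generator from the assignment (rebuilt here for self-containment).
    ee = np.ones((1, N))
    a, b, c, d, e = 4, -1 - gamma, -1 - beta, -1 + beta, -1 + gamma
    t1 = sps.spdiags(np.vstack([c * ee, a * ee, d * ee]), np.arange(-1, 2), N, N)
    t2 = sps.spdiags(np.vstack([b * ee, np.zeros((1, N)), e * ee]), np.arange(-1, 2), N, N)
    return sps.kron(sps.eye(N, N), t1) + sps.kron(t2, sps.eye(N, N))

N = 10                                   # small for illustration; the question uses 50
A = lcd(0, 0, N).tocsr()                 # beta = gamma = 0 gives an SPD matrix
b = np.ones(N * N)
lam = np.linalg.eigvalsh(A.toarray())    # dense eigensolve is fine at this size
alpha = 2.0 / (lam[0] + lam[-1])         # alpha_opt = 2 / (lambda_min + lambda_max)
x = np.zeros(N * N)                      # x0 = 0, as in the question
for k in range(20000):
    r = b - A @ x
    if np.linalg.norm(r) <= 1e-12:       # residual-based stopping rule
        break
    x = x + alpha * r                    # stationary Richardson update
print(k, np.linalg.norm(b - A @ x))
```

Rerunning with 0.5 alpha and 1.1 alpha, and recording the iteration counts, gives the comparison the question asks for.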
(h) Implement the accelerated variant of the Richardson iteration and, on the same test
problem and starting from the same x0 as above, compare its performance with the
stationary version. For both methods, use the step-size α = 2/(λmin + λmax).
100 marks in total
Note:
• This assignment counts for 15% of the total mark for the course.
• Although not mandatory, if you could type up your work, e.g., in LaTeX, it would be
greatly appreciated.
• Show all your work and attach your code and all the plots (if there is a programming
question).
• Combine your solutions and all additional files, such as your code and numerical
results, into one single PDF file.
• Please submit your single PDF file on Blackboard.