Thursday, November 24, 2016

Topics Covered in the Textbook

Chapter 1
1.2 Round-off errors and computer arithmetic

YES: Concept of machine epsilon; limitations of finite-precision calculations; ways to avoid loss of significant digits.
NO: Number of bits/bytes used in the IEEE single/double precision. 
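
As a quick illustration of these ideas, here is a small Python sketch (not part of the course materials; the function f(x) = (1 - cos x)/x^2 is an arbitrary illustrative choice) that estimates machine epsilon and shows how rewriting an expression avoids catastrophic cancellation:

```python
import math

# Machine epsilon: roughly the smallest eps with 1 + eps > 1 in floating point.
eps = 1.0
while 1.0 + eps / 2.0 > 1.0:
    eps /= 2.0
print("machine epsilon ~", eps)   # about 2.22e-16 for double precision

# Loss of significant digits: f(x) = (1 - cos x) / x^2 for small x.
x = 1e-8
naive = (1.0 - math.cos(x)) / x**2                 # cancellation in 1 - cos x
stable = 0.5 * (math.sin(x / 2.0) / (x / 2.0))**2  # equivalent form, no cancellation
print(naive, stable)   # naive prints 0.0 here; the true value is close to 0.5
```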

Chapter 2
2.1 Bisection method
2.2 Fixed-point iteration
2.3 Newton's method and its extensions
2.4 Error analysis for iterative methods

YES: Mean Value Theorem; Taylor series expansion; Intermediate Value Theorem; how to use the Bisection method, fixed-point iteration and Newton's method; application of the fixed-point theorem (Theorem 2.3); order of convergence of a sequence.
NO: Bisection/Newton’s method for more than 3 iterations. 
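
For concreteness, here is a Python sketch (not from the course materials) of a few bisection and Newton steps on the common textbook example f(x) = x^3 + 4x^2 - 10 on [1, 2]; the starting values and iteration counts are illustrative choices:

```python
def f(x):
    return x**3 + 4.0 * x**2 - 10.0

def fprime(x):
    return 3.0 * x**2 + 8.0 * x

def bisection(f, a, b, n_iter):
    """Halve [a, b] n_iter times, keeping the half where f changes sign."""
    for _ in range(n_iter):
        p = (a + b) / 2.0
        if f(a) * f(p) < 0:
            b = p
        else:
            a = p
    return (a + b) / 2.0

def newton(f, fprime, p0, n_iter):
    """Newton iteration: p_{k+1} = p_k - f(p_k)/f'(p_k)."""
    p = p0
    for _ in range(n_iter):
        p = p - f(p) / fprime(p)
    return p

print(bisection(f, 1.0, 2.0, 3))   # 3 bisection steps
print(newton(f, fprime, 1.5, 3))   # 3 Newton steps, close to the root 1.3652...
```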

Chapter 3
3.1 Interpolation and the Lagrange polynomial
3.3 Divided differences

YES: Interpolation using the Lagrange interpolating polynomial and Newton's divided differences; error bound for polynomial interpolation; piecewise interpolation.
NO: Interpolation with more than 3 data points. 
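
A small Python sketch of both forms on 3 data points (the data are an arbitrary illustration, not taken from the course):

```python
xs = [0.0, 1.0, 2.0]
ys = [1.0, 2.0, 5.0]

def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        L = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                L *= (x - xj) / (xi - xj)
        total += yi * L
    return total

def divided_differences(xs, ys):
    """Return Newton divided-difference coefficients f[x0], f[x0,x1], f[x0,x1,x2], ..."""
    n = len(xs)
    coef = list(ys)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    return coef

print(lagrange(xs, ys, 1.5))        # both forms define the same polynomial
print(divided_differences(xs, ys))  # [1.0, 1.0, 1.0]: P(x) = 1 + x + x(x - 1)
```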

Chapter 4
4.1 Numerical Differentiation
4.3 Elements of numerical integration
4.4 Composite numerical integration
4.7 Gaussian quadrature

YES: Numerical differentiation; derivation of approximation rules using polynomial interpolation and Taylor expansion; Trapezoidal rule, Simpson's rule and Midpoint rule; derivation of their error terms; degree of accuracy; composite Trapezoidal/Simpson's/Midpoint rules.
NO: Inverting or solving a linear system larger than 3-by-3.
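
A Python sketch of the composite rules on an illustrative integral (the integrand e^x on [0, 1] and the choice n = 4 are not from the notes):

```python
import math

def composite_trapezoid(f, a, b, n):
    """Composite Trapezoidal rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return h * s

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule with n subintervals (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return h * s / 3.0

exact = math.e - 1.0
print(composite_trapezoid(math.exp, 0.0, 1.0, 4) - exact)  # error O(h^2)
print(composite_simpson(math.exp, 0.0, 1.0, 4) - exact)    # error O(h^4), much smaller
```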

Chapter 6
6.1 Linear systems of equations
6.2 Pivoting strategies
6.3 Linear algebra and matrix inversion
6.5 Matrix factorization
6.6 Special types of matrices

YES: Gaussian elimination (GE) with backward substitution; GE with partial pivoting; LU decomposition; PA=LU decomposition.
NO: Number of operations for GE; inverting or solving a linear system larger than 3-by-3.
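
A Python/NumPy sketch of GE with partial pivoting and backward substitution on a 3-by-3 system (the matrix and right-hand side are illustrative, not from the notes):

```python
import numpy as np

def gauss_elim_partial_pivoting(A, b):
    """Solve Ax = b by GE with partial pivoting, then backward substitution."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: bring the largest |entry| of column k into the pivot row.
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]] = A[[p, k]]
        b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]     # multiplier m_ik (an entry of L in PA = LU)
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Backward substitution on the resulting upper-triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])
print(gauss_elim_partial_pivoting(A, b))   # [2, 3, -1], matching np.linalg.solve(A, b)
```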

Chapter 7
7.1 Norms of vectors and matrices
7.3 The Jacobi and Gauss-Seidel iterative techniques
7.4 Relaxation techniques for solving linear systems
7.5 Error bounds and iterative refinement

YES: Vector and matrix norms; Jacobi, Gauss-Seidel and SOR iterations; their matrix representations; convergence of the classical iterative methods.
NO: Inverting or solving a linear system larger than 3-by-3.
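
A Python/NumPy sketch of the Jacobi and Gauss-Seidel iterations on a strictly diagonally dominant system (matrix, right-hand side and iteration count are illustrative choices, not from the notes):

```python
import numpy as np

A = np.array([[10.0, -1.0,  2.0],
              [-1.0, 11.0, -1.0],
              [ 2.0, -1.0, 10.0]])
b = np.array([6.0, 25.0, -11.0])

def jacobi(A, b, x0, n_iter):
    """x^{k+1} = D^{-1} (b - (L + U) x^k), componentwise."""
    D = np.diag(A)
    R = A - np.diagflat(D)        # off-diagonal part L + U
    x = x0.copy()
    for _ in range(n_iter):
        x = (b - R @ x) / D
    return x

def gauss_seidel(A, b, x0, n_iter):
    """Like Jacobi, but each updated component is used immediately."""
    n = len(b)
    x = x0.copy()
    for _ in range(n_iter):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

x0 = np.zeros(3)
print(jacobi(A, b, x0, 10))
print(gauss_seidel(A, b, x0, 10))   # both approach np.linalg.solve(A, b)
```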

Chapter 8
8.1 Discrete least squares approximation

YES: Normal equation; least squares approach with a general set of basis functions.
NO: Inverting or solving a linear system larger than 3-by-3.
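
A short Python/NumPy sketch of discrete least squares via the normal equation A^T A c = A^T y, fitting a line c0 + c1*x to illustrative data (not from the notes); other bases only change the columns of A:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 1.9, 3.2, 3.8])

# Basis {1, x}: each column of A is one basis function evaluated at the data.
A = np.column_stack([np.ones_like(x), x])

# Normal equation: (A^T A) c = A^T y.
c = np.linalg.solve(A.T @ A, A.T @ y)
print(c)   # c[0] + c[1]*x is the least squares line
```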

Chapter 5
5.1 The elementary theory of initial-value problems
5.2 Euler's method
5.3 High order Taylor methods
5.4 Runge-Kutta methods
5.9 Higher-order equations and systems of differential equations
5.11 Stiff ODEs

YES: Forward Euler method, backward Euler method, Trapezoidal method; their derivation; derivation of the local truncation error and the global error; Lipschitz constant; converting a high-order ODE to a system of first-order ODEs; high-order Taylor methods; Runge-Kutta methods; interval of absolute stability.
NO: The exact expressions for the local truncation error and the global error; the table (Butcher tableau) form of RK methods; RK4.
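
A Python sketch of the forward and backward Euler methods on the illustrative problem y' = -2y, y(0) = 1 (this problem and the step size are arbitrary choices, not from the notes); since f is linear in y, the implicit backward Euler update can be solved exactly:

```python
import math

lam, y0, T, N = -2.0, 1.0, 1.0, 10
h = T / N

y_fe, y_be = y0, y0
for _ in range(N):
    y_fe = y_fe + h * lam * y_fe   # forward Euler:  y_{n+1} = y_n + h f(t_n, y_n)
    y_be = y_be / (1.0 - h * lam)  # backward Euler: y_{n+1} = y_n + h f(t_{n+1}, y_{n+1})

exact = math.exp(lam * T)
print(abs(y_fe - exact), abs(y_be - exact))   # both errors are O(h): first-order methods
```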

Lecture 24 (Nov 25)

Stiff ODEs. Interval of absolute stability. A-stable methods.

Numerical methods to check the accuracy of a numerical scheme.
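
A small Python sketch of absolute stability on the stiff test problem y' = -50y, y(0) = 1 (an illustrative choice, not from the notes). Forward Euler needs |1 + h*lambda| <= 1, i.e. h <= 0.04 here; backward Euler is A-stable, so the step h = 0.1 below breaks one but not the other:

```python
lam, h, N = -50.0, 0.1, 10

y_fe, y_be = 1.0, 1.0
for _ in range(N):
    y_fe *= (1.0 + h * lam)   # amplification factor 1 + h*lam = -4: grows each step
    y_be /= (1.0 - h * lam)   # amplification factor 1/(1 - h*lam) = 1/6: decays each step

print(y_fe, y_be)   # forward Euler oscillates and blows up; backward Euler decays to 0
```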

Tuesday, November 22, 2016

Lecture 23 (Nov 23)

High-order ODEs.

High-order Taylor methods.

Runge-Kutta methods.
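
A Python sketch tying the last two topics together: convert y'' = -y, y(0) = 0, y'(0) = 1 into the first-order system u1' = u2, u2' = -u1, and advance it with the midpoint (second-order Runge-Kutta) method; the problem and step size are illustrative choices, not from the lecture:

```python
import math

def f(t, u):
    u1, u2 = u
    return [u2, -u1]              # system form of y'' = -y

def rk2_step(f, t, u, h):
    """Midpoint RK2: evaluate f at the half step, then take the full step."""
    k1 = f(t, u)
    u_half = [u[i] + 0.5 * h * k1[i] for i in range(len(u))]
    k2 = f(t + 0.5 * h, u_half)
    return [u[i] + h * k2[i] for i in range(len(u))]

h, N = 0.1, 10
t, u = 0.0, [0.0, 1.0]
for _ in range(N):
    u = rk2_step(f, t, u, h)
    t += h

print(u[0], math.sin(t))   # u[0] approximates y(t) = sin(t); error O(h^2)
```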

Thursday, November 17, 2016

Lecture 22 (Nov 18)

Local truncation error for the forward Euler and the backward Euler methods.

Global error for the forward Euler method.
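
For reference, one standard Taylor-expansion derivation of the forward Euler local truncation error, written in LaTeX and using the textbook's convention of dividing the one-step residual by h:

```latex
% Taylor expansion of the exact solution about t_n:
%   y(t_{n+1}) = y(t_n) + h\,y'(t_n) + (h^2/2)\,y''(\xi_n), with \xi_n in (t_n, t_{n+1}).
% Since y'(t_n) = f(t_n, y(t_n)), the forward Euler local truncation error is
\[
  \tau_{n+1}(h) = \frac{y(t_{n+1}) - y(t_n)}{h} - f\bigl(t_n, y(t_n)\bigr)
                = \frac{h}{2}\, y''(\xi_n) = O(h).
\]
% Summing roughly 1/h such one-step errors is what makes the global error O(h).
```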

Tuesday, November 15, 2016

Lecture 21 (Nov 16)

Forward Euler Method, Backward Euler Method and Trapezoidal Method.

Friday, November 11, 2016

Lecture 20 (Nov 11)

Least squares fitting.

Normal equation. Other basis functions for least squares.

Tuesday, November 8, 2016

Lecture 19 (Nov 9)

Convergence of the iterative methods. Convergence for some special matrices.

Error bounds for the iterative methods.

Friday, November 4, 2016

HW5 and HW6

HW5
(Postponed from HW4) Chapter 1: 32,33,34(a-d),35
Chapter 1: 34(e),36,37,38,39,41,43
Chapter 2: 10, 11

BackwardSubstitution.m: [click here]
ForwardSubstitution.m: [click here]

Due: Nov 22 (Tue) or 23 (Wed) in your tutorial.


HW6
Chapter 1: 44,45,46,48,49(a)
Chapter 2: 12,13

Due: Dec 2 (Fri) to the TA.


Lecture 18 (Nov 4)

Induced norm: matrix 1-norm and inf-norm

Spectral radius: rho(A) = max |lambda(A)|, the largest absolute value of an eigenvalue of A.
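
A Python/NumPy sketch of these quantities for a small example matrix (the matrix is an arbitrary illustration, not from the lecture):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

one_norm = np.abs(A).sum(axis=0).max()    # max absolute column sum = ||A||_1
inf_norm = np.abs(A).sum(axis=1).max()    # max absolute row sum    = ||A||_inf
rho = np.abs(np.linalg.eigvals(A)).max()  # spectral radius rho(A) = max |eigenvalue|

print(one_norm, inf_norm, rho)
print(np.linalg.norm(A, 1), np.linalg.norm(A, np.inf))   # NumPy's induced norms agree
```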

Tuesday, November 1, 2016

Lecture 17 (Nov 2)

Classical iterative methods: Jacobi, Gauss-Seidel and SOR.

Matrix analysis. Vector norms: 1-, 2-, and inf-norms.

Lecture 16 (Oct 28)

Midterm