ID | Seat Number |
09531703 | 103 |
20089844 | 3 |
20090219 | 22 |
20124644 | 66 |
20126549 | 93 |
20143951 | 102 |
20153774 | 39 |
20175289 | 88 |
20176403 | 71 |
20176611 | 87 |
20182816 | 5 |
20186795 | 18 |
20187309 | 42 |
20187799 | 6 |
20191958 | 37 |
20191984 | 72 |
20192574 | 69 |
20193396 | 21 |
20193425 | 68 |
20194285 | 20 |
20198920 | 61 |
20199132 | 92 |
20200797 | 1 |
20200876 | 94 |
20200943 | 19 |
20203191 | 85 |
20217051 | 100 |
20219190 | 95 |
20253756 | 63 |
20253897 | 24 |
20255651 | 64 |
20265747 | 48 |
20266090 | 7 |
20268218 | 44 |
20268311 | 17 |
20268335 | 41 |
20268414 | 40 |
20271186 | 23 |
20271863 | 70 |
20273586 | 8 |
20273603 | 65 |
20273706 | 67 |
20274358 | 43 |
20274592 | 86 |
20274645 | 45 |
20274657 | 62 |
20275613 | 46 |
20276007 | 4 |
20278237 | 38 |
20278471 | 47 |
20279554 | 104 |
20414489 | 90 |
20415342 | 99 |
20415524 | 101 |
20415548 | 96 |
20415550 | 98 |
20415562 | 91 |
20417704 | 2 |
20418162 | 89 |
20419465 | 97 |
Monday, December 5, 2016
Seating Plan
Thursday, November 24, 2016
Topics Covered in the Textbook
Chapter 1
1.2 Round-off errors and computer arithmetic
YES: Concept of machine epsilon; limitation of finite precision calculations; ways to avoid the problem of loss of significant digits.
NO: Number of bits/bytes used in the IEEE single/double precision.
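As a quick illustration of machine epsilon and of avoiding loss of significant digits, a small Python sketch (my own example, not course code):

```python
import math

def machine_epsilon():
    # halve until 1 + eps/2 rounds back to 1; final eps is 2**-52 for IEEE doubles
    eps = 1.0
    while 1.0 + eps / 2.0 > 1.0:
        eps /= 2.0
    return eps

# loss of significant digits: subtracting nearly equal numbers
x = 1e8
naive = math.sqrt(x * x + 1.0) - x            # catastrophic cancellation: gives 0.0
stable = 1.0 / (math.sqrt(x * x + 1.0) + x)   # algebraically equal, no cancellation
```

Rewriting the difference as a reciprocal sum is one standard way to dodge the cancellation.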
Chapter 2
2.1 Bisection method
2.2 Fixed-point iteration
2.3 Newton's method and its extensions
2.4 Error analysis for iterative methods
YES: Mean Value Theorem; Taylor series expansion; Intermediate Value Theorem; how to use Bisection method, fixed point iteration and Newton’s method; application of the fixed-point theorem (Theorem 2.3); order of convergence of a sequence.
NO: Bisection/Newton’s method for more than 3 iterations.
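A minimal Python sketch of the bisection and Newton iterations from this chapter (illustrative only; the tolerances and function names are my own choices):

```python
import math

def bisection(f, a, b, tol=1e-10):
    # assumes f(a) and f(b) have opposite signs, so the IVT guarantees a root
    while b - a > tol:
        c = (a + b) / 2.0
        if f(a) * f(c) <= 0.0:
            b = c
        else:
            a = c
    return (a + b) / 2.0

def newton(f, df, x0, tol=1e-12, max_iter=50):
    # Newton's method: x_{k+1} = x_k - f(x_k) / f'(x_k)
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f = lambda x: x * x - 2.0
r1 = bisection(f, 1.0, 2.0)               # approximates sqrt(2)
r2 = newton(f, lambda x: 2.0 * x, 1.0)    # quadratic convergence from x0 = 1
```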
Chapter 3
3.1 Interpolation and the Lagrange polynomial
3.3 Divided differences
YES: Interpolation using Lagrange interpolating polynomials and Newton's divided differences; error bound for polynomial interpolation; piecewise interpolation.
NO: Interpolation with more than 3 data points.
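To illustrate Newton's divided differences with 3 data points, a short Python sketch (my own example, not course code):

```python
def divided_differences(xs, ys):
    # table built in place: coef[i] ends up as f[x0, ..., xi]
    n = len(xs)
    coef = list(ys)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    # nested (Horner-like) evaluation of the Newton form
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

xs = [0.0, 1.0, 2.0]
ys = [1.0, 2.0, 5.0]               # sampled from f(x) = x^2 + 1
coef = divided_differences(xs, ys)
val = newton_eval(xs, coef, 1.5)   # the interpolating quadratic gives 1.5^2 + 1 = 3.25
```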
Chapter 4
4.1 Numerical Differentiation
4.3 Elements of numerical integration
4.4 Composite numerical integration
4.7 Gaussian quadrature
YES: Numerical differentiation; derivation of approximation rules using polynomial interpolation and Taylor expansion. Trapezoidal rule, Simpson's rule and Midpoint rule; Derivation of their error term; Degree of accuracy; Composite Trapezoidal/Simpson's/Midpoint rules.
NO: Invert a linear system of size larger than 3-by-3.
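The composite Trapezoidal and Simpson's rules can be sketched in a few lines of Python (illustrative, not course code; Simpson's rule needs an even number of subintervals):

```python
import math

def composite_trapezoid(f, a, b, n):
    # n equal subintervals of width h; O(h^2) global error
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

def composite_simpson(f, a, b, n):
    # n must be even; O(h^4) global error
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4.0 if i % 2 == 1 else 2.0) * f(a + i * h)
    return h * total / 3.0

t_approx = composite_trapezoid(math.sin, 0.0, math.pi, 100)  # exact integral is 2
s_approx = composite_simpson(math.sin, 0.0, math.pi, 100)
```

Comparing the two errors against the exact value 2 shows the O(h^2) vs O(h^4) gap.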
Chapter 6
6.1 Linear systems of equations
6.2 Pivoting strategies
6.3 Linear algebra and matrix inversion
6.5 Matrix factorization
6.6 Special types of matrices
YES: Gaussian elimination (GE) with backward substitution; with partial pivoting; LU decomposition; PA=LU decomposition.
NO: Number of operations for GE; Invert a linear system of size larger than 3-by-3.
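A small Python sketch of LU factorization with forward/backward substitution (no pivoting, so it assumes nonzero pivots; illustrative only, not course code):

```python
def lu_no_pivot(A):
    # Doolittle factorization A = LU with unit lower-triangular L
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]        # elimination multiplier; assumes nonzero pivot
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

def solve_lu(L, U, b):
    n = len(b)
    y = [0.0] * n                        # forward substitution: L y = b
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n                        # backward substitution: U x = y
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[2.0, 1.0], [4.0, 3.0]]
L, U = lu_no_pivot(A)
x = solve_lu(L, U, [3.0, 7.0])           # solves A x = [3, 7]
```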
Chapter 7
7.1 Norms of vectors and matrices
7.3 The Jacobi and Gauss-Seidel iterative techniques
7.4 Relaxation techniques for solving linear systems
7.5 Error bounds and iterative refinement
YES: Vector and matrix norms; Jacobi, Gauss-Seidel and SOR iteration; Their matrix representations; Convergence of the classical iterative methods.
NO: Invert a linear system of size larger than 3-by-3.
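A minimal Python sketch of Jacobi iteration on a strictly diagonally dominant 2-by-2 system (my own example; diagonal dominance guarantees convergence):

```python
def jacobi(A, b, x0, iters=50):
    # simultaneous update: the new x is built entirely from the previous iterate
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# strictly diagonally dominant, so Jacobi converges for any starting guess
A = [[4.0, 1.0], [2.0, 5.0]]
b = [6.0, 12.0]                 # exact solution is x = [1, 2]
x = jacobi(A, b, [0.0, 0.0])
```

Gauss-Seidel differs only in updating the entries in place, reusing new values immediately.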
Chapter 8
8.1 Discrete least squares approximation
YES: Normal equations; least squares approach with a general set of basis functions.
NO: Invert a linear system of size larger than 3-by-3.
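For a straight-line fit, the 2-by-2 normal equations can be solved in closed form; a short Python sketch (my own example, not course code):

```python
def fit_line(xs, ys):
    # normal equations for the least squares line y = c0 + c1*x:
    # [ n       sum(x)   ] [c0]   [ sum(y)  ]
    # [ sum(x)  sum(x^2) ] [c1] = [ sum(xy) ]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx
    c0 = (sxx * sy - sx * sxy) / det
    c1 = (n * sxy - sx * sy) / det
    return c0, c1

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]       # data lying exactly on y = 1 + 2x
c0, c1 = fit_line(xs, ys)
```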
Chapter 5
5.1 The elementary theory of initial-value problems
5.2 Euler's method
5.3 High order Taylor methods
5.4 Runge-Kutta methods
5.9 Higher-order equations and systems of differential equations
5.11 Stiff ODEs
YES: Forward Euler method, backward Euler method, Trapezoidal method; their derivation; derivation of the local truncation error and the global error; Lipschitz constant; converting a high-order ODE to a system of first-order ODEs; high-order Taylor methods; Runge-Kutta methods; interval of absolute stability.
NO: The exact expressions for the local truncation error and the global error; table form of RK; RK4.
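A minimal Python sketch of the forward Euler method on the test problem y' = y (my own example, not course code):

```python
import math

def forward_euler(f, t0, y0, h, n):
    # explicit Euler: y_{k+1} = y_k + h * f(t_k, y_k); global error is O(h)
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t = t + h
    return y

# test problem y' = y, y(0) = 1; exact solution at t = 1 is e
approx = forward_euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000)
```

Halving h roughly halves the error at t = 1, consistent with first-order global accuracy.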
Lecture 24 (Nov 25)
Stiff ODEs. Interval of absolute stability. A-stable methods.
Numerical methods to check the accuracy of a numerical scheme.
Tuesday, November 22, 2016
Lecture 23 (Nov 23)
High-order ODEs.
High order Taylor methods.
Runge-Kutta methods.
Thursday, November 17, 2016
Lecture 22 (Nov 18)
Local truncation error for the forward Euler and the backward Euler methods.
Global error for the forward Euler method.
Tuesday, November 15, 2016
Lecture 21 (Nov 16)
Forward Euler Method, Backward Euler Method and Trapezoidal Method.
Friday, November 11, 2016
Lecture 20 (Nov 11)
Least squares fitting.
Normal equation. Other basis functions for least squares.
Tuesday, November 8, 2016
Lecture 19 (Nov 9)
Convergence of the iterative methods. Convergence for some special matrices.
Error bounds of the iterative method.
Friday, November 4, 2016
HW5 and HW6
HW5
(Postponed from HW4) Chapter 1: 32,33,34(a-d),35
Chapter 1: 34(e),36,37,38,39,41,43
Chapter 2: 10, 11
BackwardSubstitution.m: [click here]
ForwardSubstitution.m [click here]
Due: Nov 22 (Tue) or 23 (Wed) in your tutorial.
HW6
Chapter 1: 44,45,46,48,49(a)
Chapter 2: 12,13
Due: Dec 2 (Fri) to the TA.
Lecture 18 (Nov 4)
Induced norm: matrix 1-norm and inf-norm
Spectral radius: rho(A)=max |e-value(A)|
Tuesday, November 1, 2016
Lecture 17 (Nov 2)
Classical iterative methods: Jacobi, Gauss-Seidel and SOR.
Matrix Analysis. Vector norm: 1-, 2-, inf-norms
Wednesday, October 26, 2016
Lecture 15 (Oct 26)
Gaussian elimination with partial pivoting.
LU decomposition, i.e. A=LU.
(P^T)LU decomposition, i.e. PA=LU.
Thursday, October 20, 2016
Lecture 14 (Oct 21)
Lecture cancelled due to the weather condition.
HW4 due Oct 25 (Tue) or 26 (Wed):
Chapter 1: #29,30,31,32,33,34(a-d),35
Midterm will cover everything up to and including materials from Lecture 12. Will NOT cover Gaussian Elimination.
Wednesday, October 19, 2016
Lecture 13 (Oct 19)
Gaussian elimination with backward substitution.
Operations count.
Friday, October 14, 2016
Lecture 12 (Oct 14)
Composite rules. Gaussian Quadrature.
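Two-point Gauss-Legendre quadrature (exact for polynomials of degree up to 3) can be sketched as follows (my own example, not course code):

```python
import math

def gauss2(f, a, b):
    # two-point Gauss-Legendre: nodes +-1/sqrt(3) with weights 1 on [-1, 1],
    # mapped affinely to [a, b]; exact for polynomials of degree <= 3
    mid = (a + b) / 2.0
    half = (b - a) / 2.0
    node = 1.0 / math.sqrt(3.0)
    return half * (f(mid - half * node) + f(mid + half * node))

val = gauss2(lambda x: x**3 + x, 0.0, 1.0)   # exact integral is 1/4 + 1/2 = 0.75
```

Two function evaluations match what Simpson's rule needs three evaluations to achieve.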
Wednesday, October 12, 2016
Midterm
Date: Oct 28 (Friday)
Time: 1:30pm-2:30pm
Venue: Room 2407
Closed book and closed note.
No calculator.
Test up to and including materials from the lecture on Oct 14 (Friday).
Lecture 11 (Oct 12)
Numerical integration: Trapezoidal rule, Midpoint rule, Simpson's rule. Their derivation based on polynomial interpolation and Taylor's expansion.
Degree of accuracy.
Composite rules.
Monday, October 10, 2016
HW4
Chapter 1: #29-33,34(a-d),35
Due: Oct 25 (Tue) or 26 (Wed) in your tutorial.
Midterm will cover everything up to, and including, this HW set.
Friday, October 7, 2016
Lecture 10 (Oct 7)
Numerical differentiation.
Approach 2: Taylor's expansion.
Higher order derivatives.
Tuesday, October 4, 2016
Lecture 9 (Oct 5)
Numerical differentiation:
Approach 1: Given data points -> interpolating polynomial -> differentiate it and evaluate it at a given location.
Forward difference, backward difference, central difference.
Error bound based on polynomial interpolation.
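A one-function central difference sketch in Python (my own example, not course code):

```python
import math

def central_diff(f, x, h):
    # O(h^2) central difference approximation to f'(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

# error for f = sin at x = 1 against the exact derivative cos(1)
err = abs(central_diff(math.sin, 1.0, 1e-5) - math.cos(1.0))
```

Shrinking h reduces the truncation error until round-off (roughly eps/h) takes over.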
Friday, September 30, 2016
HW3
Chapter 1: #18-19,21-28
Chapter 2: #8
Due: Oct 18 (Tue) or 19 (Wed) in your tutorial.
Lecture 8 (Sep 30)
Piecewise linear interpolation.
Interpolation in two dimensions.
Scattered data: triangulation and then piecewise linear interpolation.
A MATLAB implementation can be found here: [click here]
Structured data: bilinear interpolation
Wednesday, September 28, 2016
Lecture 7 (Sep 28)
Error estimate of using polynomial interpolation.
Newton's divided difference.
Friday, September 23, 2016
Lecture 6 (Sep 23)
Modified Newton's method for finding a root with multiplicity m.
Interpolation: Polynomial interpolation.
Lagrange Interpolating Polynomial. We will prove the error estimate in the next lecture.
Wednesday, September 21, 2016
Lecture 5 (Sep 21)
Order of convergence.
General fixed point iteration: for a convergent iteration with g'(p) != 0, the method converges only linearly.
If instead g'(p) = 0 and g'' is continuous and bounded, we have at least quadratic convergence.
Modified Newton's method for finding a root with multiplicity m.
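A quick Python illustration of the linear convergence case, using g(x) = cos(x) (my own example, not course code):

```python
import math

# g(x) = cos(x) has a unique fixed point p (approx. 0.739) with 0 < |g'(p)| < 1,
# so by the fixed point theorem the iteration converges, but only linearly
x = 1.0
for _ in range(100):
    x = math.cos(x)
# after 100 iterations x satisfies cos(x) = x to near machine precision
```

Each step shrinks the error by a factor of roughly |g'(p)| = sin(p), about 0.67.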
Tuesday, September 20, 2016
HW2
Chapter 1: #10,12-17
Chapter 2: #5-7
Data file for #2.6: Schrodinger.mat
Due: Oct 4 (Tue) or 5 (Wed) in your tutorial.
Note: For Q1.16, the theorem is the one we discuss in Lecture 7 on Sep 28.
Note: For Q2.7, the statement in brackets is not right. Please ignore it.
Tuesday, September 13, 2016
Lecture 4 (Sep 14)
Error estimates for fixed point iterations.
Newton's method. Geometrical construction. Convergence.
Secant method.
Friday, September 9, 2016
HW1
HW set: [click here]
Chapter 1: #1-8
Chapter 2: #1,3,4
Due: Sep 20 (Tue) or 21 (Wed) in your tutorial.
Lecture 3 (Sep 9)
Existence and Uniqueness of a fixed point. Example.
Fixed Point Iteration: Method.
Fixed Point Theorem (Thm 2.3 from the textbook).
Example.
Error estimates.
Thursday, September 8, 2016
Lecture 2 (Sep 7)
Root-finding
1) Bisection method. Algorithm. Convergence and an error estimate.
2) Fixed point iteration. Definition of fixed point. Existence and Uniqueness of a fixed point.
Thursday, September 1, 2016
Lecture 1 (Sep 1)
Course overview
Numerical/Computer Arithmetic
IEEE 754 standard
Integers (4 bytes), single precision (4 bytes), double precision (8 bytes)
Machine epsilon