
AIMS-Cameroon

Computational Mathematics Using Sage: Group assignment


October 10, 2023
Dr. Daniel Duviol Tcheutia

• This is a group assignment to be presented on Friday, 13th October, 2023.

• You have to submit your group work as a Sage worksheet (.sws file), a .pdf file and a beamer presentation by Friday, 13th October, 2023, 6:45 AM at the latest.

General remark: For each of the exercises, briefly explain the notion (basic concept) with some basic examples. At the end, find some good examples to illustrate the efficiency of your implementation compared to the built-in commands.

Group 1: Simplex method


A linear optimization problem (linear programming problem) can be written in normal form, that is:

Maximize
$$z = f(x) = c_1 x_1 + \cdots + c_n x_n$$
subject to the constraints
$$\begin{aligned}
a_{11} x_1 + \cdots + a_{1n} x_n &= b_1\\
a_{21} x_1 + \cdots + a_{2n} x_n &= b_2\\
&\ \ \vdots\\
a_{m1} x_1 + \cdots + a_{mn} x_n &= b_m,\\
x_i &\ge 0 \quad (i = 1, 2, \dots, n).
\end{aligned}$$

There exists an iterative method called simplex method to find an optimal solution of this problem.
This method is described in section 22.3, starting on page 958 of the book by Erwin Kreyszig.

1. Write a program for graphing a region R in the first quadrant of the x1 x2 -plane determined by
linear constraints.

2. Write a program for maximizing z = a1 x1 + a2 x2 in R.

3. Write a program for maximizing z = a1 x1 + . . . + an xn subject to linear constraints.

4. Apply your programs to problems in the problem set of section 22.3.
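For illustration, one possible way to cross-check your own simplex results (not a replacement for the required implementation) is Sage's built-in LP solver; the helper name builtin_max and the sample data below are assumptions, not part of the assignment.

# Hedged sketch: cross-check an LP maximum with Sage's MixedIntegerLinearProgram.
def builtin_max(c, A, b):
    # maximize c.x subject to A x <= b and x >= 0
    p = MixedIntegerLinearProgram(maximization=True)
    x = p.new_variable(real=True, nonnegative=True)
    p.set_objective(sum(c[j]*x[j] for j in range(len(c))))
    for i in range(len(b)):
        p.add_constraint(sum(A[i][j]*x[j] for j in range(len(c))) <= b[i])
    value = p.solve()
    return value, [p.get_values(x[j]) for j in range(len(c))]

# example data (illustrative only)
builtin_max([3, 2], [[1, 1], [1, 3]], [4, 6])

# For task 1, region_plot can serve as a visual check of your own plotting code, e.g.
# var('x1 x2'); region_plot([x1 + x2 <= 4, x1 + 3*x2 <= 6], (x1, 0, 5), (x2, 0, 5))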

Group 2: Square-free factorisation
To find the square-free factorisation of a polynomial a(x), that is, the square-free and pairwise coprime factors $a_1(x), \dots, a_m(x)$ such that
$$a(x) = \prod_{k=1}^{m} a_k(x)^k, \quad m \in \mathbb{N},$$
compute $g(x) = \mathrm{GCD}(a(x), a'(x))$, $h(x) = a(x)/g(x)$ and set $k = 1$, result $= 1$.

step 1: if $g(x) \ne 1$, compute $b(x) = \mathrm{GCD}(g(x), h(x))$ to get $a_k(x) = h(x)/b(x)$ and result $\leftarrow$ result $\times\, a_k(x)^k$.
step 2: set $k \leftarrow k + 1$, $h(x) \leftarrow b(x)$ and $g(x) \leftarrow g(x)/b(x)$ and go back to step 1.
step 3: when $g(x) = 1$, return result $\times\, h(x)^k$.

1. Take a simple example of your choice to explain the notion of “square-free factorisation” and
compute the “square-free part” of your example.

2. Implement the above algorithm in a function SquarefreeFactor(a,x).

3. Implement a function SquarefreePart(a,x) which returns the square-free part of a polynomial a(x).

4. Use your function to find the square-free factorisation and the square-free parts of

$f(t) = (t^2 - 1)(t^2 - 9)^3(t^2 - 25)^5$, \qquad $g(x) = x^{10} + 2x^9 + 2x^8 + 2x^7 + x^6 + x^5 + 2x^4 + x^3 + x^2 + 2x + 1$.

Compare your results with the ones obtained using the Sage command squarefree_part.
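A minimal sketch of how the algorithm above could look in Sage, assuming the input polynomial lies in a univariate polynomial ring over a field such as QQ[t]; leading-coefficient normalisation is ignored here.

# Hedged sketch of SquarefreeFactor following the algorithm above.
# The result may differ from a by a constant factor because gcds are taken monic.
def SquarefreeFactor(a, x):
    g = gcd(a, a.derivative(x))
    h = a // g
    k = 1
    result = 1
    while g != 1:
        b = gcd(g, h)
        ak = h // b            # the factor appearing with exponent k
        result *= ak^k
        k += 1
        h = b
        g = g // b
    return result * h^k

# example from task 4
R.<t> = QQ[]
f = (t^2 - 1)*(t^2 - 9)^3*(t^2 - 25)^5
SquarefreeFactor(f, t)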

Group 3: Arithmetic Functions
Let n be a positive integer, σ(n) the sum of all positive divisors of n (including n), τ(n) the number of divisors of n, and the Euler function ϕ(n) the number of integers j with 1 ≤ j ≤ n such that gcd(j, n) = 1. We also consider the Möbius function
$$\mu(n) = \begin{cases} (-1)^r & \text{if } n \text{ is a product of } r \text{ distinct primes},\\ 0 & \text{otherwise.} \end{cases}$$

1. Carmichael conjectured that if x and m are positive integers with ϕ(x) = m, then there is an integer y ≠ x with ϕ(y) = m. In 1998, Ford showed that this statement is true for all $m < 10^{10^{10}}$. Verify that it is true for all m ≤ 100.
2. Mertens conjectured that $\left|\sum_{n \le x} \mu(n)\right| \le \sqrt{x}$ for all x ≥ 1. Odlyzko and te Riele showed in 1985 that there are (very large, non-explicit) values of x such that the conjecture is false. Show that the conjecture is true for all x ≤ 1000.

3. A positive integer n is called highly composite if n has more divisors than any smaller number.
That is, τ (n) > τ (m) for all m ∈ Z with 1 ≤ m < n. The first few highly composite numbers
are 1, 2, 4, 6, 12, 24.

(a) There are 20 highly composite numbers less than 10000. Can you find them?
(b) Notice that the numbers k! for 0 ≤ k ≤ 7 are highly composite. Show that 8! and 9! are
not highly composite.

4. A positive integer n is called highly abundant if the sum of the divisors of n is larger than the
sum of the divisors of each smaller positive integer. That is, σ(n) > σ(m) for all m ∈ Z with
1 ≤ m < n. The first few highly abundant numbers are 1, 2, 3, 4, 6, 8, 10.

(a) There are eight highly abundant numbers between 500 and 1000. Can you find them?
(b) n = 9! = 362880 is not highly abundant. Find m < n with σ(m) > σ(n).
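As an illustration for tasks 2 and 3(a), a small sketch using Sage's built-in arithmetic functions moebius and number_of_divisors; the helper names are assumptions.

# Hedged sketch: verify the Mertens inequality at the integers up to 1000 and
# list the highly composite numbers below 10000.
def check_mertens(X=1000):
    M = 0
    for x in range(1, X + 1):
        M += moebius(x)
        if M*M > x:            # equivalent to |M(x)| > sqrt(x)
            return x           # first integer counterexample, if any
    return True                # the inequality holds for all integers x <= X

def highly_composite_upto(N=10000):
    best, hc = 0, []
    for n in range(1, N):
        t = number_of_divisors(n)
        if t > best:
            best = t
            hc.append(n)
    return hc

check_mertens(), highly_composite_upto()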

Group 4: Chinese remainder theorem
Let $p_1, \dots, p_n$ be integers with $p_i \ge 2$ for $i = 1, \dots, n$ and $\gcd(p_j, p_k) = 1$ for $j \ne k$, and let $a_1, \dots, a_n$ be any integers. Then the system of equations
$$\begin{cases} x \equiv a_1 \pmod{p_1}\\ x \equiv a_2 \pmod{p_2}\\ \quad\vdots\\ x \equiv a_n \pmod{p_n} \end{cases} \qquad (1)$$
has an integer solution $x \in \mathbb{Z}$, which is unique up to an additive multiple of $p_1 \times \cdots \times p_n$. In fact, if $\gcd(p_1, p_2) = 1$, there exist $s, t \in \mathbb{Z}$ such that $s p_1 + t p_2 = 1$, and then $x = a_2 s p_1 + a_1 t p_2 \pmod{p_1 p_2}$ is a solution of the system
$$\begin{cases} x \equiv a_1 \pmod{p_1}\\ x \equiv a_2 \pmod{p_2} \end{cases}$$
and (1) becomes
$$\begin{cases} x \equiv a_2 s p_1 + a_1 t p_2 \pmod{p_1 p_2}\\ x \equiv a_3 \pmod{p_3}\\ \quad\vdots\\ x \equiv a_n \pmod{p_n}. \end{cases}$$

1. Implement a recursive function CR(list1,list2) which returns the solution x of (1). Take the hypotheses on the moduli $p_i$ into account.

2. Test your function on the following examples:


• x ≡ 11 (mod 74), x ≡ 13 (mod 63);
• x ≡ 11 (mod 74), x ≡ 13 (mod 37);
• x ≡ 997 (mod 2001), x ≡ 998 (mod 2002), x ≡ 999 (mod 2003);
• x ≡ 1 (mod 3), x ≡ 3 (mod 5), x ≡ 4 (mod 7), x ≡ 2 (mod 11).
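A recursive sketch of CR along the lines of the reduction above; the argument order (residues first, moduli second) is an assumption.

# Hedged sketch of CR(list1, list2): list1 = [a1, ..., an] residues,
# list2 = [p1, ..., pn] pairwise coprime moduli.
def CR(list1, list2):
    if len(list1) == 1:
        return list1[0] % list2[0]
    a1, a2 = list1[0], list1[1]
    p1, p2 = list2[0], list2[1]
    g, s, t = xgcd(p1, p2)                     # s*p1 + t*p2 = gcd(p1, p2)
    if g != 1:
        raise ValueError("moduli must be pairwise coprime")
    x12 = (a2*s*p1 + a1*t*p2) % (p1*p2)        # combines the first two congruences
    return CR([x12] + list1[2:], [p1*p2] + list2[2:])

# two of the test cases above
CR([11, 13], [74, 63]), CR([997, 998, 999], [2001, 2002, 2003])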

Group 5: Review of some methods used to solve ODEs numerically
Numerical methods for ordinary differential equations approximate solutions to initial value problems of the form
$$y'(x) = \frac{dy}{dx} = f(x, y), \qquad y(x_0) = y_0.$$
The result is approximations $y_n \approx y(x_n)$ for the value of $y(x)$ at discrete points $x_n = x_0 + nh$ ($x_{n+1} = x_n + h$), where h is the constant step size and n is an integer.
There exist many methods to solve ODEs with initial conditions numerically. Here are some of the methods we want to implement.

1. Euler's forward method
$$y_{n+1} = y_n + h f(x_n, y_n);$$

2. Heun's method
$$y_{n+1} = y_n + \frac{h}{2}\bigl(f(x_n, y_n) + f(x_{n+1}, p_{n+1})\bigr), \qquad p_{n+1} = y_n + h f(x_n, y_n);$$

3. Improved Euler method
$$k_1 = h f(x_n, y_n), \qquad k_2 = h f(x_{n+1}, y_n + k_1), \qquad y_{n+1} = y_n + \frac{1}{2}(k_1 + k_2);$$

4. Second-order Runge-Kutta method with $A = \frac{1}{2}$
$$y_{n+1} = y_n + \frac{h}{2}(k_1 + k_2), \qquad k_1 = f(x_n, y_n), \qquad k_2 = f(x_n + h,\, y_n + k_1 h);$$

5. Second-order Runge-Kutta method with $A = 0$
$$y_{n+1} = y_n + h k_2, \qquad k_1 = f(x_n, y_n), \qquad k_2 = f\Bigl(x_n + \frac{h}{2},\, y_n + \frac{h k_1}{2}\Bigr);$$

6. Second-order Runge-Kutta method with $A = \frac{2}{3}$
$$y_{n+1} = y_n + \frac{h}{3}(k_1 + 2k_2), \qquad k_1 = f(x_n, y_n), \qquad k_2 = f\Bigl(x_n + \frac{3}{4}h,\, y_n + \frac{3}{4}k_1 h\Bigr);$$

7. Third-order Runge-Kutta method
$$y_{n+1} = y_n + \frac{h}{6}(k_1 + 4k_2 + k_3), \qquad k_1 = f(x_n, y_n), \qquad k_2 = f\Bigl(x_n + \frac{h}{2},\, y_n + \frac{h}{2}k_1\Bigr), \qquad k_3 = f(x_n + h,\, y_n - k_1 h + 2k_2 h).$$

Task to do

1. For each of the above methods, write a function with input (f, y, x0, b, y0, h, ivar) which returns the list of approximations $[[x_n, y_n]]$ for $n = 0, 1, \dots, N$, where $N = (b - x_0)/h$, f = [f1, f2, ..., fn] (n = 1, 2, ...) is a list of functions of the variables ivar = x and y, $x \in [x_0, b]$, h is the step size, and y0 = y(x0).

2. Consider the differential equations below. For each differential equation, plot on the same graph the solution obtained using the Sage built-in command desolve_system_rk4 (fourth-order Runge-Kutta) together with the one obtained using your implementation. Use graphics_array to display the plots for each differential equation as a 4 by 2 array: the array contains 7 plots, one per method above, and each plot shows that method alongside the built-in command.

(a) $y'(x) = (y - x)^2$, $y(0) = 0$;

(b) $\begin{cases} \ddot{\theta}_1 = 5\sin(\theta_2 - \theta_1) - 0.1\,\dot{\theta}_1, & \theta_1(0) = \frac{\pi}{6},\ \dot{\theta}_1(0) = 0\\ \ddot{\theta}_2 = 5\sin(\theta_1 - \theta_2) - 0.1\,\dot{\theta}_2, & \theta_2(0) = 0,\ \dot{\theta}_2(0) = 0, \end{cases}$ \quad $t \in [0, 20]$.
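For illustration, a minimal sketch of the first method (Euler forward) in the requested calling convention, for a single scalar equation; the system case f = [f1, ..., fn] is left to the group.

# Hedged sketch of Euler's forward method for one scalar ODE y' = f(x, y).
# Assumes f is a symbolic expression in ivar and y; returns [[x_n, y_n]].
def euler_forward(f, y, x0, b, y0, h, ivar):
    xn, yn = x0, y0
    pts = [[xn, yn]]
    N = round((b - x0)/h)                  # number of steps
    for n in range(N):
        yn = (yn + h*f.subs({ivar: xn, y: yn})).n()
        xn = xn + h
        pts.append([xn, yn])
    return pts

# example (a): y'(x) = (y - x)^2, y(0) = 0 on [0, 1] with step h = 0.1
x, y = var('x y')
approx = euler_forward((y - x)^2, y, 0, 1, 0, 0.1, x)
line(approx)                               # quick plot of the broken-line solution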

Group 6: Interpolation polynomial
If x1 , . . . , xn are distinct, then for any y1 , . . . , yn there exists a unique interpolation polynomial
Pn−1 (x) of degree at most n − 1 such that the conditions Pn−1 (xi ) = yi , i = 1, . . . , n are satisfied.
(a) The Newton form of the interpolation polynomial is
$$P_{n-1}([[x_1, y_1], \dots, [x_n, y_n]], x) = c_1 + c_2(x - x_1) + c_3(x - x_1)(x - x_2) + \dots + c_n(x - x_1)(x - x_2)\cdots(x - x_{n-1}) \qquad (2)$$
with
$$P_0([[x_1, y_1]], x) = c_1 = y_1,$$
$$c_k = \frac{y_k - P_{k-1}([[x_1, y_1], \dots, [x_{k-1}, y_{k-1}]], x_k)}{(x_k - x_1)(x_k - x_2)\cdots(x_k - x_{k-1})}, \qquad k \ge 2,$$
$$P_k([[x_1, y_1], \dots, [x_k, y_k]], x) = P_{k-1}([[x_1, y_1], \dots, [x_{k-1}, y_{k-1}]], x) + c_k(x - x_1)(x - x_2)\cdots(x - x_{k-1}).$$

Implement a function NewtonPol1(list,x) which returns the Newton interpolation polynomial de-
fined by the above recursive approach.
(b) For the divided difference method, we set in (2) $c_i = f(x_1, \dots, x_i)$, where the so-called divided difference $f(x_i, x_{i+1}, \dots, x_{i+j-1}) \equiv F(\text{list}, i, j)$ (list $= [[x_1, y_1], \dots, [x_n, y_n]]$) satisfies the recursion formula
$$f(x_i, x_{i+1}, \dots, x_{i+j-1}) = \frac{f(x_{i+1}, x_{i+2}, \dots, x_{i+j-1}) - f(x_i, x_{i+1}, \dots, x_{i+j-2})}{x_{i+j-1} - x_i}.$$
The algorithm to compute the divided difference is as follows:
• Set $F(\text{list}, i, 1) = y_i$, $i = 1, \dots, n$.

• Use the following recursive relation to compute $F(\text{list}, i, j)$:
$$F(\text{list}, i, j) = \frac{F(\text{list}, i+1, j-1) - F(\text{list}, i, j-1)}{x_{i+j-1} - x_i}, \qquad 1 \le i < j \le n.$$

• For each i from 1 to n, set $c_i = F(\text{list}, 1, i) \equiv f(x_1, \dots, x_i)$.

• The interpolation polynomial is then
$$P_{n-1}(x) = \sum_{i=1}^{n} c_i \prod_{j=1}^{i-1} (x - x_j).$$

Implement a function F(list,i,j) and use it to define a function NewtonPol2(list,x) which returns the interpolation polynomial $P_{n-1}$.
(c) Use NewtonPol1(list,x) and NewtonPol2(list,x) to find the interpolation polynomial of the
data list = [[0, 0], [1, 1], [4, 2], [9, 3], [16, 4], [25, 5], [36, 6], [49, 7]], and compare the results.
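A sketch of the divided-difference route (part (b)); the 1-based indexing mirrors the formulas above, and the naive, non-memoised recursion is an assumption about how F may be organised.

# Hedged sketch of F(list, i, j) and NewtonPol2(list, x).
def F(lst, i, j):
    if j == 1:
        return lst[i - 1][1]                       # F(list, i, 1) = y_i
    return (F(lst, i + 1, j - 1) - F(lst, i, j - 1)) / (lst[i + j - 2][0] - lst[i - 1][0])

def NewtonPol2(lst, x):
    n = len(lst)
    P = 0
    for i in range(1, n + 1):
        ci = F(lst, 1, i)                          # c_i = f(x_1, ..., x_i)
        P += ci * prod(x - lst[j - 1][0] for j in range(1, i))
    return P

# data from part (c)
x = var('x')
data = [[0, 0], [1, 1], [4, 2], [9, 3], [16, 4], [25, 5], [36, 6], [49, 7]]
expand(NewtonPol2(data, x))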

Group 7: Determinant and inverse of a matrix

1. Determinant of a matrix

(a) Using row operations, compute the determinant of the matrix


 
$$\begin{pmatrix} 1 & 2 & -3 & 4\\ -4 & 2 & 1 & 3\\ 3 & 0 & -2 & 0\\ 1 & 0 & 2 & -5 \end{pmatrix}.$$

(b) Write a sage function called Mydet(A) which computes the determinant of a given square
matrix A.
(c) Use your function and @interact to compute the determinant of the general matrix
 
$$\begin{pmatrix}
1 & x_0 & x_0^2 & \dots & x_0^n\\
1 & x_1 & x_1^2 & \dots & x_1^n\\
1 & x_2 & x_2^2 & \dots & x_2^n\\
\vdots & \vdots & \vdots & \dots & \vdots\\
1 & x_n & x_n^2 & \dots & x_n^n
\end{pmatrix}, \qquad \text{for } n \in [1, 6],\ x_i \ne x_j \text{ for } i \ne j.$$

Deduce the general expression of this determinant for any n ∈ N.

2. Inverse of a matrix

(a) Using row operations, find the inverse of the matrix


 
$$\begin{pmatrix} 1 & 1 & 1 & 1\\ 1 & 2 & -1 & 2\\ 1 & -1 & 2 & 1\\ 1 & 3 & 3 & 2 \end{pmatrix}.$$

(b) Write the sage function augment_matrix(A,B) which takes two matrices A and B and
returns the augmented matrix (A|B).
(c) Using your function augment_matrix, write the sage function Myinverse(A) which com-
putes, using row operations, the inverse of a given matrix A when it exists. It should
return “singular matrix” when the matrix is singular and “Matrix is not square” when
the matrix is not square.
(d) Use your function Myinverse obtained in part (c) to determine which of the following matrices are invertible and find their inverses:
 
      1 1 1 1
1 1 1 1 2 1 1 2 2
1 2 3 , 1 3 2 , 1 3 1 , 2 4 8 16  .
 
3 9 27 81 
0 1 1 1 0 1 1 1 3
4 16 64 256
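A sketch of Mydet using row operations (elimination with row swaps); working over QQ and this particular pivoting strategy are assumptions.

# Hedged sketch of Mydet(A): eliminate below the diagonal with row operations,
# keeping track of row swaps.
def Mydet(A):
    if A.nrows() != A.ncols():
        return "Matrix is not square"
    M = copy(A.change_ring(QQ))                    # mutable working copy over QQ
    n = M.nrows()
    det, sign = 1, 1
    for j in range(n):
        pivot = next((i for i in range(j, n) if M[i, j] != 0), None)
        if pivot is None:
            return 0                               # no pivot: the matrix is singular
        if pivot != j:
            M.swap_rows(j, pivot)                  # each swap flips the sign
            sign = -sign
        det *= M[j, j]
        for i in range(j + 1, n):
            M.add_multiple_of_row(i, j, -M[i, j]/M[j, j])
    return sign * det

# the matrix of question 1(a), checked against the built-in determinant
A = matrix(QQ, [[1, 2, -3, 4], [-4, 2, 1, 3], [3, 0, -2, 0], [1, 0, 2, -5]])
Mydet(A), A.det()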

Group 8: Fibonacci sequence
Consider the Fibonacci sequence (ai )i≥0 defined by

$$a_{n+2} = a_{n+1} + a_n, \quad \forall n \ge 0, \qquad a_0 = 0,\ a_1 = 1. \qquad (3)$$

We can write (3) as a system of recurrence equations in the following way:


$$\begin{cases} a_{n+2} = a_{n+1} + b_{n+1}\\ b_{n+2} = a_{n+1}. \end{cases}$$

1. Write the above system in the form $X_{n+2} = A X_{n+1}$, where A is a matrix and
$$X_n = \begin{pmatrix} a_n\\ b_n \end{pmatrix}.$$

2. Find the eigenvalues λ1 and λ2 of A and the associated eigenvectors.

3. Find the matrices P and D such that A = P DP −1 and deduce An .

4. Using the divide-and-conquer approach, write the function MatPower(B, n) to compute $B^n$, where B is a matrix and n a nonnegative integer.

5. Use MatPower to compute $A^n$ in terms of $\lambda_1$ and $\lambda_2$.

6. Deduce that
$$a_n = \frac{1}{\lambda_2 - \lambda_1}\bigl(\lambda_2^n - \lambda_1^n\bigr). \qquad (4)$$

7. Write functions Fibo1(n) and Fibo2(n) to compute the nth Fibonacci number using, respec-
tively, (3) and (4) and compare the timings.
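A sketch of task 4, with a quick check on the Fibonacci matrix; the matrix A = [[1, 1], [1, 0]] used in the check is the one arising from the system above.

# Hedged sketch of MatPower(B, n) by divide and conquer (repeated squaring).
def MatPower(B, n):
    if n == 0:
        return identity_matrix(B.base_ring(), B.nrows())
    H = MatPower(B, n // 2)
    return H*H if n % 2 == 0 else H*H*B

# quick check: compare with Sage's own matrix power
A = matrix(ZZ, [[1, 1], [1, 0]])
MatPower(A, 10) == A^10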

Group 9: Lagrange interpolation and Gauss quadrature formula
Given the task of evaluating an integral
$$\int_a^b f(t)\,dt$$

for some function f(t) on an interval [a, b], basic quadrature rules are derived by replacing f(t) with an interpolating polynomial ϕ(t) and integrating the latter exactly. If there are s distinct interpolation points $c_1, \dots, c_s$, then we can write the interpolating polynomial of degree < s in Lagrange form
$$\phi(t) = \sum_{j=1}^{s} f(c_j) L_j(t), \qquad (5)$$
where
$$L_j(t) = \prod_{i=1,\, i \ne j}^{s} \frac{t - c_i}{c_j - c_i}.$$

Then
$$\int_a^b f(t)\,dt \approx \sum_{j=1}^{s} \omega_j f(c_j), \qquad (6)$$
where the weights $\omega_j$ are given by
$$\omega_j = \int_a^b L_j(t)\,dt.$$

1. The Legendre polynomials $P_n(x)$ are defined by
$$\begin{cases} P_{n+1}(x) = \dfrac{2n+1}{n+1}\, x P_n(x) - \dfrac{n}{n+1}\, P_{n-1}(x), & n = 1, 2, \dots,\\[4pt] P_0(x) = 1, \quad P_1(x) = x. \end{cases} \qquad (7)$$

Implement in Sage the function Legendre(n,x) which computes Pn (x) using the recurrence
Equation (7).

2. Let $c_1, \dots, c_s$ be the zeros of $P_s(x)$, obtained in Sage using sol = solve(Legendre(s,x), x) and [real(N(sol[i].rhs())) for i in range(s)] (since all the zeros of $P_s(x)$ are real).

(a) Implement in Sage the function Lagrange_Int_Pol(f, s, t) which computes the Lagrange interpolating polynomial (5) using the zeros $c_1, \dots, c_s$ of Legendre(s,x).
(b) Consider the function $f(x) = 1/(1 + x^2)$. Plot on [−1, 1] the curve of f and animate the curves of Lagrange_Int_Pol(f, s, t) for s = 3, 4, . . . , 9. One should see from the animation how the Lagrange polynomials converge to f on [−1, 1].

3. (a) Implement in Sage the function Gauss_Quad(f, a, b, s) which returns a numerical approximation of $\int_a^b f(t)\,dt$ using Equation (6), where $c_1, \dots, c_s$ are the zeros of Legendre(s,x).
(b) Use your function Gauss_Quad(f, a, b, s) to estimate $\int_{-1}^{1} 1/(1 + x^2)\,dx$ for $s = 3, 4, \dots, 9$ and compare your results with the one obtained using the Sage built-in command.
(c) Choose two random examples for further illustration.
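A compact sketch of Legendre and Gauss_Quad along the lines of (6) and (7); building the Lagrange basis symbolically and integrating it with integral() is one possible route, not necessarily the most efficient.

# Hedged sketch: Legendre via the recurrence (7), Gauss_Quad via (6).
t, x = var('t x')

def Legendre(n, x):
    if n == 0:
        return 1
    if n == 1:
        return x
    return ((2*n - 1)/n * x * Legendre(n - 1, x) - (n - 1)/n * Legendre(n - 2, x)).expand()

def Gauss_Quad(f, a, b, s):
    sol = solve(Legendre(s, x) == 0, x)
    c = [real(N(sol[i].rhs())) for i in range(s)]      # zeros of P_s, all real
    total = 0
    for j in range(s):
        Lj = prod((t - c[i])/(c[j] - c[i]) for i in range(s) if i != j)
        wj = integral(Lj, t, a, b)                     # weight w_j = int_a^b L_j(t) dt
        total += wj * f(c[j])
    return total

# example from 3(b): the integral of 1/(1 + x^2) over [-1, 1] equals pi/2
f(x) = 1/(1 + x^2)
N(Gauss_Quad(f, -1, 1, 5)), N(pi/2)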

Group 10: The Chebyshev polynomials of the first kind
The Chebyshev polynomials of the first kind

$$T_n(x) = \cos\bigl(n \arccos(x)\bigr)$$

can be computed using:


• the series representation
$$T_n(x) = \frac{n}{2}\sum_{k=0}^{\lfloor n/2\rfloor} \frac{(-1)^k\,(n-k-1)!}{k!\,(n-2k)!}\,(2x)^{n-2k} = \frac{n}{2}\sum_{k=0}^{\lfloor n/2\rfloor} a_k,$$

where ⌊n/2⌋ is the floor of n/2,


• the three-term recurrence relation
$$2x\,T_n(x) = T_{n+1}(x) + T_{n-1}(x), \qquad T_0(x) = 1, \quad T_1(x) = x,$$

• the Rodrigues formula
$$T_n(x) = \frac{(-1)^n}{2^n\left(\tfrac{1}{2}\right)_n (1 - x^2)^{-\frac{1}{2}}}\, \frac{d^n}{dx^n}\Bigl[(1 - x^2)^{\,n - \frac{1}{2}}\Bigr],$$
where $(a)_n = a(a + 1) \cdots (a + n - 1)$,


• the generating function
$$F(t) \equiv \frac{1 - xt}{1 - 2xt + t^2} = \sum_{n=0}^{\infty} T_n(x)\,t^n, \qquad T_n(x) = \frac{1}{n!}\,F^{(n)}(0),$$

• the matrix power
$$\begin{pmatrix} T_n(x)\\ T_{n-1}(x) \end{pmatrix} = \begin{pmatrix} 2x & -1\\ 1 & 0 \end{pmatrix}\begin{pmatrix} T_{n-1}(x)\\ T_{n-2}(x) \end{pmatrix} = \begin{pmatrix} 2x & -1\\ 1 & 0 \end{pmatrix}^{\!n-1}\begin{pmatrix} x\\ 1 \end{pmatrix},$$

• the divide-and-conquer approach
$$T_{2n}(x) = 2\bigl(T_n(x)\bigr)^2 - 1, \qquad T_{2n-1}(x) = 2\,T_n(x)\,T_{n-1}(x) - x.$$

Implement a procedure for each of the above methods and compare the timings. For the power
series representation method, use the relation between ak and ak−1 to compute ak .
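As one example of the requested procedures, a sketch of the three-term recurrence method together with a timing call; the other five methods follow the same pattern, and the function name is an assumption.

# Hedged sketch of the recurrence T_{n+1} = 2x*T_n - T_{n-1}.
x = var('x')

def ChebyshevRec(n, x):
    if n == 0:
        return 1
    T_prev, T_curr = 1, x
    for k in range(1, n):
        T_prev, T_curr = T_curr, (2*x*T_curr - T_prev).expand()
    return T_curr

ChebyshevRec(5, x)              # 16*x^5 - 20*x^3 + 5*x
timeit('ChebyshevRec(50, x)')   # compare this timing with the other methods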

