Introduction to Matrix Theory

Arindama Singh
Department of Mathematics
Indian Institute of Technology Madras
Chennai, India
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature
Switzerland AG 2021
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or
information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publishers, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publishers nor the authors or
the editors give a warranty, express or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publishers remain neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
Practising scientists and engineers feel that calculus and matrix theory form the
minimum mathematical requirement for their future work. Though it is recommended
to spread matrix theory or linear algebra over two semesters in an early stage, the
typical engineering curriculum allocates only one semester for it. In addition, I found
that science and engineering students are at a loss in appreciating the abstract methods
of linear algebra in the first year of their undergraduate programme. This resulted
in a curriculum that includes a thorough study of systems of linear equations via
Gaussian and/or Gauss–Jordan elimination comprising roughly one month in the
first or second semester. It needs a follow-up of one-semester work in matrix theory
ending in canonical forms, factorizations of matrices, and matrix norms.
Initially, we followed books such as Leon [10], Lewis [11], and Strang [14]
as possible texts, referring occasionally to papers and other books. None of these
could be used as a textbook on its own for our purpose. The requirement was a
single text containing development of notions, one leading to the next, and without
any distraction towards applications. It resulted in creation of our own material. The
students wished to see the material in a book form so that they might keep it on their
lap instead of reading it off the laptop screens. Of course, I had to put some extra
effort in bringing it to this form; the effort is not much compared to the enjoyment
in learning.
The approach is straightforward. Starting from the simple but intricate problems
that a system of linear equations presents, it introduces matrices and operations
on them. The elementary row operations comprise the basic tools in working with
most of the concepts. Though the vector space terminology is not required to study
matrices, an exposure to the notions is certainly helpful for an engineer’s future
research. Keeping this in view, the vector space terminology is introduced in a
restricted environment of subspaces of finite-dimensional real or complex spaces.
It is felt that this direct approach will meet the needs of scientists and engineers.
Also, it will form a basis for abstract function spaces, which one may study or use
later.
Starting from simple operations on matrices, this elementary treatment of matrix
theory characterizes equivalence and similarity of matrices. The other tool of Gram–
Schmidt orthogonalization has been discussed leading to best approximations and
least squares solution of linear systems. On the go, we discuss matrix factorizations
such as rank factorization, QR-factorization, Schur triangularization, diagonaliza-
tion, Jordan form, singular value decomposition, and polar decomposition. It includes
norms on matrices as a means to deal with iterative solutions of linear systems and
exponential of a matrix. Keeping the modest goal of an introductory textbook on
matrix theory, which may be covered in a semester, these topics are dealt with in a
lively manner.
Though the earlier drafts were intended for use by science and engineering
students, many mathematics students used those as supplementary text for learning
linear algebra. This book will certainly fulfil that need.
Each section of the book has exercises to reinforce the concepts; problems have
been added at the end of each chapter for the curious student. Most of these problems
are theoretical in nature, and they do not fit into the running text linearly. Exercises
and problems form an integral part of the book. Working them out may require some
help from the teacher. It is hoped that the teachers and the students of matrix theory
will enjoy the text the same way I and my students did.
Most engineering colleges in India allocate only one semester for linear algebra
or matrix theory. In such a case, the first two chapters of the book can be covered
at a rapid pace with proper attention to elementary row operations. If time does not
permit, the last chapter on matrix norms may be omitted or covered in numerical
analysis under the veil of iterative solutions of linear systems.
I acknowledge the pains taken by my students in pointing out typographical errors.
Their difficulties in grasping the notions have contributed a lot towards the contents
and this particular sequencing of topics. I cheerfully thank my colleagues A. V.
Jayanthan and R. Balaji for using the earlier drafts for teaching linear algebra to
undergraduate engineering and science students at IIT Madras. They pointed out
many improvements, which I cannot pinpoint now. Though the idea of completing
this work originated five years back, time did not permit it. IIT Madras granted me
sabbatical to write the second edition of my earlier book on Logics for Computer
Science. After sending a draft of that to the publisher, I could devote the stop-gap period to
completing this work. I hereby record my thanks to the administrative authorities of
IIT Madras.
It will be foolish on my part to claim perfection. If you are using the book, then
you should be able to point out improvements. I welcome you to write to me at
[email protected].
Contents

1 Matrix Operations
  1.1 Examples of Linear Equations
  1.2 Basic Matrix Operations
  1.3 Transpose and Adjoint
  1.4 Elementary Row Operations
  1.5 Row Reduced Echelon Form
  1.6 Determinant
  1.7 Computing Inverse of a Matrix
  1.8 Problems
2 Systems of Linear Equations
  2.1 Linear Independence
  2.2 Determining Linear Independence
  2.3 Rank of a Matrix
  2.4 Solvability of Linear Equations
  2.5 Gauss–Jordan Elimination
  2.6 Problems
3 Matrix as a Linear Map
  3.1 Subspace and Span
  3.2 Basis and Dimension
  3.3 Linear Transformations
  3.4 Coordinate Vectors
  3.5 Coordinate Matrices
  3.6 Change of Basis Matrix
  3.7 Equivalence and Similarity
  3.8 Problems
4 Orthogonality
  4.1 Inner Products
  4.2 Gram–Schmidt Orthogonalization
  4.3 QR-Factorization
  4.4 Orthogonal Projection
References
Index
About the Author
Chapter 1
Matrix Operations
1.1 Examples of Linear Equations

Consider the system of two linear equations in the two unknowns x1 and x2:

x1 + x2 = 3
x1 − x2 = 1
Subtracting the first from the second, we get −2x2 = −2. It implies x2 = 1. That
is, the original system is replaced with the following:
x1 + x2 = 3
x2 = 1
Next, we add one more equation and consider the system

x1 + x2 = 3
x1 − x2 = 1
2x1 − x2 = 3
The first two equations have a unique solution, and that satisfies the third. Hence,
this system also has a unique solution x1 = 2, x2 = 1. Geometrically, the third equa-
tion represents the straight line that passes through (0, −3) and has slope 2. The
intersection of all the three lines is the same point (2, 1). So, the extra equation does
not put any constraint on the solutions that we obtained earlier.
But what about our systematic solution method? We aim at eliminating the first
unknown from all but the first equation. We replace the second equation with the one
obtained by second minus the first. We also replace the third by third minus twice
the first. It results in
x1 + x2 = 3
−2x2 = −2
−3x2 = −3

Notice that the second and the third equations say the same thing, namely x2 = 1; hence the conclusion. We
give another twist. Consider the system
x1 + x2 = 3
x1 − x2 = 1
2x1 + x2 = 3
The first two equations again have the solution x1 = 2, x2 = 1. But this time, the
third is not satisfied by these values of the unknowns. So, the system has no solution.
Geometrically, the first two lines have a point of intersection (2, 1); the second
and the third have the intersection point as (4/3, 1/3); and the third and the first have
the intersection point as (0, 3). They form a triangle. There is no point common to
all the three lines. Also, by using our elimination method, we obtain the equations
as:
x1 + x2 = 3
−2x2 = −2
−x2 = −3

The last two equations are not consistent: the first of them gives x2 = 1, while the second gives x2 = 3. So, the original system has no solution.
Finally, instead of adding another equation, we drop one. Consider the linear
equation
x1 + x2 = 3
The old solution x1 = 2, x2 = 1 is still a solution of this system. But there are other
solutions. For instance, x1 = 1, x2 = 2 is a solution. Moreover, since x1 = 3 − x2 ,
by assigning x2 any real number, we get a corresponding value for x1 , which together
give a solution. Thus, it has infinitely many solutions.
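In set notation, assigning x2 an arbitrary value t gives the complete solution set

$$\{(x_1, x_2) = (3 - t,\; t) : t \in \mathbb{R}\}.$$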
Geometrically, any point on the straight line represented by the equation is a solu-
tion of the system. Notice that the same conclusion holds if we have more equations,
which are multiples of the only given equation. For example,
x1 + x2 = 3
2x1 + 2x2 = 6
3x1 + 3x2 = 9
We see that the number of equations really does not matter, but the number of
independent equations does matter. Of course, the notion of independent equations
is not yet precise; we have some working ideas only.
It is also not very clear when a system of equations has a solution, a unique
solution, infinitely many solutions, or no solution at all. And why can a system
of equations not have more than one but only finitely many solutions? How do we use
our elimination method for obtaining infinitely many solutions?
To answer these questions, we will introduce matrices. Matrices will help us in
representing the problem in a compact way and will lead to a definitive answer.
We will also study the eigenvalue problem for matrices, which comes up often in
applications. These concerns will allow us to represent matrices in elegant forms.
Exercises for Sect. 1.1
1. For each of the following systems of linear equations, find the number of solutions
geometrically:
(a) x1 + 2x2 = 4, −2x1 − 4x2 = 4
(b) −x1 + 2x2 = 3, 2x1 − 4x2 = −6
(c) x1 + 2x2 = 1, x1 − 2x2 = 1, −x1 + 6x2 = 3
2. Show that the system of linear equations a1 x1 + x2 = b1, a2 x1 + x2 = b2 has a
unique solution if a1 ≠ a2. Is the converse true?
1.2 Basic Matrix Operations

Our solution of the first system in Sect. 1.1 by elimination may be summarized as follows:

$$\begin{aligned} x_1 + x_2 &= 3 \\ x_1 - x_2 &= 1 \end{aligned} \quad\Rightarrow\quad \begin{aligned} x_1 + x_2 &= 3 \\ x_2 &= 1 \end{aligned} \quad\Rightarrow\quad \begin{aligned} x_1 &= 2 \\ x_2 &= 1 \end{aligned}$$
We can minimize writing by ignoring the unknowns and transforming only the numbers in the following way:
$$\begin{bmatrix} 1 & 1 & 3 \\ 1 & -1 & 1 \end{bmatrix} \;\Rightarrow\; \begin{bmatrix} 1 & 1 & 3 \\ 0 & 1 & 1 \end{bmatrix} \;\Rightarrow\; \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & 1 \end{bmatrix}$$
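Spelled out, the row operations used above are (the labels R1, R2 are added here for readability):

$$R_2 \leftarrow R_2 - R_1, \qquad R_2 \leftarrow -\tfrac{1}{2} R_2, \qquad R_1 \leftarrow R_1 - R_2.$$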
To be able to operate with such arrays of numbers and talk about them, we require
some terminology. First, some notation:
An m × n matrix with entries from F (where F stands for the scalars, the set of all real or of all complex numbers) is written as

$$A = [a_{ij}], \quad a_{ij} \in F \;\text{ for } i = 1, \ldots, m, \; j = 1, \ldots, n.$$

Thus, the scalar a_{ij} is the (i, j)th entry of the matrix [a_{ij}]. Here, i is called the row
index and j is called the column index of the entry a_{ij}.
The set of all m × n matrices with entries from F will be denoted by F^{m×n}.
A row vector of size n is a matrix in F^{1×n}. Similarly, a column vector of size
n is a matrix in F^{n×1}. The vectors in F^{1×n} (row vectors) will be written as (with or
without commas)

[a_1, . . . , a_n]   or as   [a_1 · · · a_n]

for scalars a_1, . . . , a_n, and the vectors in F^{n×1} (column vectors) will be written as

$$\begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix} \quad \text{or as} \quad [b_1 \,\cdots\, b_n]^t$$

for scalars b_1, . . . , b_n. The second way of writing is the transpose notation; it saves
vertical space. Also, if a column vector v is equal to u^t for a row vector u, then we may write either of them as

(a_1, . . . , a_n).
We write F^n for either F^{1×n} or F^{n×1}, depending on the context. When F^n is F^{1×n}, you should read (a_1, . . . , a_n) as [a_1, . . . , a_n], a row vector, and
when F^n is F^{n×1}, you should read (a_1, . . . , a_n) as [a_1, . . . , a_n]^t, a column vector.
The ith row of a matrix A = [a_{ij}] ∈ F^{m×n} is the row vector

[a_{i1}, . . . , a_{in}].

We also say that the row index of this row is i. Similarly, the jth column of A is
the column vector

[a_{1j}, . . . , a_{mj}]^t.
Two matrices A = [a_{ij}] and B = [b_{ij}] of the same size in F^{m×n} are equal when their corresponding entries coincide; that is,

A = B iff a_{ij} = b_{ij} for 1 ≤ i ≤ m, 1 ≤ j ≤ n.
The diagonal of an n × n matrix A = [a_{ij}] consists of the entries a_{11}, a_{22}, . . . , a_{nn}. For instance, in a 3 × 3 matrix with diagonal entries 1, 3, and 5, the entry 1 is the first diagonal entry, 3 is the second diagonal entry, and 5 is the third
and the last diagonal entry.
The super-diagonal of a matrix consists of the entries just above the diagonal. That is, the
entries a_{i,i+1} comprise the super-diagonal of an n × n matrix A = [a_{ij}]. Of course,
i varies from 1 to n − 1 here. In the following matrix, the super-diagonal is shown
in bold:

$$\begin{bmatrix} 1 & \mathbf{2} & 3 \\ 2 & 3 & \mathbf{4} \\ 3 & 4 & 0 \end{bmatrix}.$$
A diagonal matrix is a square matrix whose non-diagonal entries are all 0. We write the diagonal matrix of order n with diagonal entries d_1, . . . , d_n as

diag(d_1, . . . , d_n).
The following is a diagonal matrix. We follow the convention of not showing the
non-diagonal entries in a diagonal matrix, which are 0.
$$\operatorname{diag}(1, 3, 0) = \begin{bmatrix} 1 & & \\ & 3 & \\ & & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$
The identity matrix is a diagonal matrix with each diagonal entry as 1. We write
an identity matrix of order m as I_m. Sometimes, we omit the subscript m if it is
understood from the context. Thus,

I = I_m = diag(1, . . . , 1).
We write e_i for a column vector whose ith component is 1 and all other components are 0. The jth component of e_i is δ_{ij}. Here,

$$\delta_{ij} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j. \end{cases}$$
A scalar matrix is a diagonal matrix with equal diagonal entries. For instance,
the following is a scalar matrix:

$$\begin{bmatrix} 3 & & & \\ & 3 & & \\ & & 3 & \\ & & & 3 \end{bmatrix}.$$
The sum of two matrices of the same size is obtained by adding the corresponding entries: for A = [a_{ij}] and B = [b_{ij}] in F^{m×n}, the sum A + B is the matrix C = [c_{ij}] ∈ F^{m×n}, where

c_{ij} = a_{ij} + b_{ij} for 1 ≤ i ≤ m, 1 ≤ j ≤ n.
Thus, we informally say that matrices are added entry-wise. Matrices of different
sizes can never be added. With 0 denoting the zero matrix in F^{m×n}, all entries of which are 0, it is easy to see that

A + B = B + A,   A + 0 = 0 + A = A.
For A = [a_{ij}], the matrix −A ∈ F^{m×n} is taken as the one whose (i, j)th entry is −a_{ij}.
Thus,

−A = (−1)A,   (−A) + A = A + (−A) = 0.
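For example,

$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} + \begin{bmatrix} 0 & -2 \\ 5 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 8 & 5 \end{bmatrix}, \qquad -\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} -1 & -2 \\ -3 & -4 \end{bmatrix}.$$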
Let A = [a_{ij}] ∈ F^{m×n} and B = [b_{jk}] ∈ F^{n×r}. Their product AB is the matrix [c_{ik}] ∈ F^{m×r}, where

c_{ik} = a_{i1}b_{1k} + a_{i2}b_{2k} + · · · + a_{in}b_{nk}.

Mark the sizes of A and B. The matrix product AB is defined only when the number
of columns in A is equal to the number of rows in B. The result AB has as many rows as A and as many columns as B.
A particular case might be helpful. Suppose u is a row vector in F^{1×n} and v is a
column vector in F^{n×1}. Then their product uv ∈ F^{1×1}. It is a 1 × 1 matrix. Often,
we identify such matrices with scalars. The product now looks like:
$$[a_1 \;\cdots\; a_n] \begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix} = [a_1 b_1 + \cdots + a_n b_n].$$
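In the opposite order, the product vu is also defined, but it is an n × n matrix (a companion observation added here for contrast):

$$\begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix} [a_1 \;\cdots\; a_n] = \begin{bmatrix} b_1 a_1 & \cdots & b_1 a_n \\ \vdots & & \vdots \\ b_n a_1 & \cdots & b_n a_n \end{bmatrix}.$$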
The ith row of A multiplied with the jth column of B gives the (i, j)th entry in AB.
Thus to get AB, you have to multiply all m rows of A with all r columns of B, taking
one from each in turn. For example,
$$\begin{bmatrix} 3 & 5 & -1 \\ 4 & 0 & 2 \\ -6 & -3 & 2 \end{bmatrix} \begin{bmatrix} 2 & -2 & 3 & 1 \\ 5 & 0 & 7 & 8 \\ 9 & -4 & 1 & 1 \end{bmatrix} = \begin{bmatrix} 22 & -2 & 43 & 42 \\ 26 & -16 & 14 & 6 \\ -9 & 4 & -37 & -28 \end{bmatrix}.$$
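Such products are easy to verify numerically. Here is a minimal sketch in Python using NumPy (the library choice is an illustration added here, not part of the text):

```python
import numpy as np

# The two matrices from the example above.
A = np.array([[3, 5, -1],
              [4, 0, 2],
              [-6, -3, 2]])
B = np.array([[2, -2, 3, 1],
              [5, 0, 7, 8],
              [9, -4, 1, 1]])

# The @ operator performs matrix multiplication.
print(A @ B)
# [[ 22  -2  43  42]
#  [ 26 -16  14   6]
#  [ -9   4 -37 -28]]
```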
Matrix multiplication is not commutative in general; reversing the order of the factors can change the product. For example,

$$\begin{bmatrix} 1 & 2 \\ 2 & 3 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 2 & 3 \end{bmatrix} = \begin{bmatrix} 4 & 7 \\ 6 & 11 \end{bmatrix} \quad \text{but} \quad \begin{bmatrix} 0 & 1 \\ 2 & 3 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 2 & 3 \end{bmatrix} = \begin{bmatrix} 2 & 3 \\ 8 & 13 \end{bmatrix}.$$
For A ∈ F^{m×n}, we have Ae_j = the jth column of A. Here, e_j is the standard jth basis vector, the jth column of the identity matrix of
order n; its jth component is 1, and all other components are 0. This identity
can also be seen by directly multiplying A with e_j, as in the following:
$$Ae_j = \begin{bmatrix} a_{11} & \cdots & a_{1j} & \cdots & a_{1n} \\ \vdots & & \vdots & & \vdots \\ a_{i1} & \cdots & a_{ij} & \cdots & a_{in} \\ \vdots & & \vdots & & \vdots \\ a_{m1} & \cdots & a_{mj} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{bmatrix} = \begin{bmatrix} a_{1j} \\ \vdots \\ a_{ij} \\ \vdots \\ a_{mj} \end{bmatrix} = \text{the } j\text{th column of } A.$$
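Analogously (an observation recorded here in the same spirit), multiplying A on the left by the row vector e_i^t, with e_i ∈ F^{m×1}, picks out the ith row:

$$e_i^t A = [a_{i1} \;\cdots\; a_{in}] = \text{the } i\text{th row of } A.$$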
Unlike numbers, the product of two nonzero matrices can be a zero matrix. For
instance,
$$\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}.$$
Let A ∈ F^{m×n}. We write its ith row as A_i and its kth column as A^k (a superscript distinguishes columns from rows). We can now write A as a row of columns and also as a column of rows in the
following manner:

$$A = [a_{ik}] = \begin{bmatrix} A^1 & \cdots & A^n \end{bmatrix} = \begin{bmatrix} A_1 \\ \vdots \\ A_m \end{bmatrix}.$$
Let B ∈ F^{n×r}. Then the product AB can be written in block form as (ignoring extra brackets):

$$AB = \begin{bmatrix} AB^1 & \cdots & AB^r \end{bmatrix} = \begin{bmatrix} A_1 B \\ \vdots \\ A_m B \end{bmatrix}.$$
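To see this on the earlier 2 × 2 example (a check added here):

$$AB^1 = \begin{bmatrix} 1 & 2 \\ 2 & 3 \end{bmatrix} \begin{bmatrix} 0 \\ 2 \end{bmatrix} = \begin{bmatrix} 4 \\ 6 \end{bmatrix}, \qquad A_1 B = \begin{bmatrix} 1 & 2 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 2 & 3 \end{bmatrix} = \begin{bmatrix} 4 & 7 \end{bmatrix},$$

which are indeed the first column and the first row of the product computed there.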
Suppose A ∈ F^{n×n} has matrices B, C ∈ F^{n×n} with AB = I and CA = I. Then B and C must coincide:

C = CI = C(AB) = (CA)B = IB = B.
Such a matrix A is called invertible, and this unique matrix is called the inverse of A, written A^{-1}. Thus,

AA^{-1} = I = A^{-1}A.
We talk of invertibility of square matrices only; and not all square matrices are invertible. For example, I is invertible but 0 is not. If AB = 0 for nonzero square matrices
A and B, then neither A nor B is invertible. Why?
If both A, B ∈ F^{n×n} are invertible, then (AB)^{-1} = B^{-1}A^{-1}. Reason:

(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = AA^{-1} = I,

and similarly (B^{-1}A^{-1})(AB) = I.
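A small concrete instance (an example supplied here):

$$\begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}^{-1} = \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix}, \quad \text{since} \quad \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix} \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix} = I_2 = \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}.$$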
Invertible matrices play a crucial role in solving linear systems uniquely. We will
come back to the issue later.
Exercises for Sect. 1.2
1. Compute AB, CA, DC, DCAB, A^2, D^2, and A^3 B^2, where

$$A = \begin{bmatrix} 2 & 3 \\ 1 & 2 \end{bmatrix}, \quad B = \begin{bmatrix} 4 & -1 \\ 4 & 0 \end{bmatrix}, \quad C = \begin{bmatrix} -1 & 2 \\ 2 & -1 \\ 1 & 3 \end{bmatrix}, \quad D = \begin{bmatrix} 3 & 2 & 1 \\ 4 & -6 & 0 \\ 1 & -2 & -2 \end{bmatrix}.$$
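These computations can be checked with a few lines of Python/NumPy (a verification aid added here, not part of the exercise set):

```python
import numpy as np

A = np.array([[2, 3], [1, 2]])
B = np.array([[4, -1], [4, 0]])
C = np.array([[-1, 2], [2, -1], [1, 3]])
D = np.array([[3, 2, 1], [4, -6, 0], [1, -2, -2]])

# Each product below is defined because the inner matrix dimensions agree.
products = {
    "AB": A @ B,
    "CA": C @ A,
    "DC": D @ C,
    "DCAB": D @ C @ A @ B,
    "A^2": A @ A,
    "D^2": D @ D,
    "A^3 B^2": np.linalg.matrix_power(A, 3) @ np.linalg.matrix_power(B, 2),
}
for name, M in products.items():
    print(name, M, sep="\n")
```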