154 terms

# Handron 241

#### Terms in this set (...)

Different sequences of row operations can lead to different echelon forms for the same matrix.
T
If a linear system has four equations and seven variables, then it must have infinitely many solutions.
F
If a linear system has seven equations and four variables, then it must be inconsistent.
F
If a linear system has the same number of equations and variables, then it must have a unique solution.
F
If m < n, then a set of m vectors cannot span Rⁿ.
T
If a set of vectors includes 0, then it cannot span Rⁿ.
F
Suppose A is a matrix with n rows and m columns. If n < m, then the columns of A span Rⁿ.
F
Suppose A is a matrix with n rows and m columns. If m < n, then the columns of A span Rⁿ.
F
If A is a matrix with columns that span Rⁿ, then Ax = b has a solution for all b in Rⁿ.
T
If {u₁, u₂, u₃} does not span R³, then neither does {u₁, u₂, u₃, u₄}.
F
If {u₁, u₂, u₃, u₄} spans R³, then so does {u₁, u₂, u₃}.
F
If {u₁, u₂, u₃, u₄} does not span R³, then neither does {u₁, u₂, u₃}.
T
If u₄ is a linear combination of {u₁, u₂, u₃}, then span{u₁, u₂, u₃, u₄} = span{u₁, u₂, u₃}.
T
If u₄ is a linear combination of {u₁, u₂, u₃}, then span{u₁, u₂, u₃, u₄} ≠ span{u₁, u₂, u₃}.
F
If u₄ is not a linear combination of {u₁, u₂, u₃}, then span{u₁, u₂, u₃, u₄} = span{u₁, u₂, u₃}.
F
If u₄ is not a linear combination of {u₁, u₂, u₃}, then span{u₁, u₂, u₃, u₄} ≠ span{u₁, u₂, u₃}.
T
If a set of vectors in Rⁿ is linearly dependent, then the set must span Rⁿ.
F
If m > n, then a set of m vectors in Rⁿ is linearly dependent.
T
If A is a matrix with more rows than columns, then the columns of A are linearly independent.
F
If A is a matrix with more columns than rows, then the columns of A are linearly independent.
F
If A is a matrix with linearly independent columns, then Ax = 0 has nontrivial solutions.
F
If A is a matrix with linearly independent columns, then Ax = b has a solution for all b.
F
If {u₁, u₂, u₃} is linearly independent, then so is {u₁, u₂, u₃, u₄}.
F
If {u₁, u₂, u₃} is linearly dependent, then so is {u₁, u₂, u₃, u₄}.
T
If {u₁, u₂, u₃, u₄} is linearly independent, then so is {u₁, u₂, u₃}.
T
If {u₁, u₂, u₃, u₄} is linearly dependent, then so is {u₁, u₂, u₃}.
F
If u₄ is a linear combination of {u₁, u₂, u₃}, then {u₁, u₂, u₃, u₄} is linearly independent.
F
If u₄ is a linear combination of {u₁, u₂, u₃}, then {u₁, u₂, u₃, u₄} is linearly dependent.
T
If u₄ is not a linear combination of {u₁, u₂, u₃}, then {u₁, u₂, u₃, u₄} is linearly independent.
F
If u₄ is not a linear combination of {u₁, u₂, u₃}, then {u₁, u₂, u₃, u₄} is linearly dependent.
F
If A is an invertible n × n matrix, then the number of solutions to Ax = b depends on the vector b in Rⁿ.
F
A must be a square matrix to be invertible.
T
If an n × n matrix A is singular, then the columns of A must be linearly independent.
F
If the columns of an n × n matrix A span Rⁿ, then A is singular.
F
If A and B are invertible n × n matrices, then the inverse of AB is B⁻¹A⁻¹.
T
If A and B are invertible n × n matrices, then the inverse of A+B is A⁻¹+B⁻¹.
F
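A quick pure-Python sanity check of the two inverse cards above, using arbitrary invertible 2 × 2 matrices (the helper functions and the particular A and B are illustrative choices, not from the original set):

```python
def matmul(X, Y):
    """Multiply two 2x2 matrices given as [[a, b], [c, d]]."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(X):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    a, b = X[0]
    c, d = X[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def add(X, Y):
    """Entrywise sum of two 2x2 matrices."""
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

A = [[2, 1], [1, 1]]
B = [[1, 3], [0, 1]]

print(inv(matmul(A, B)) == matmul(inv(B), inv(A)))  # True: (AB)^-1 = B^-1 A^-1
print(inv(add(A, B)) == add(inv(A), inv(B)))        # False: (A+B)^-1 != A^-1 + B^-1
```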
If A is invertible, then (A⁻¹)⁻¹ = A.
T
If A is an n × n matrix and b ≠ 0 is in Rⁿ, then the solutions to Ax = b do not form a subspace.
T
If A is a 5 × 3 matrix, then null(A) forms a subspace of R⁵.
F
If A is a 4 × 7 matrix, then null(A) forms a subspace of R⁷.
T
Let T : R⁶ → R³ be a linear transformation. Then ker(T) is a subspace of R⁶.
T
Let T : R⁵ → R⁸ be a linear transformation. Then ker(T) is a subspace of R⁸.
F
Let T : R² → R⁷ be a linear transformation. Then range(T) is a subspace of R².
F
Let T : R³ → R⁹ be a linear transformation. Then range(T) is a subspace of R⁹.
T
The union of two subspaces of Rⁿ forms another subspace of Rⁿ.
F
The intersection of two subspaces of Rⁿ forms another subspace of Rⁿ.
T
Let S₁ and S₂ be subspaces of Rⁿ, and define S to be the set of all vectors of the form s₁ + s₂, where s₁ is in S₁ and s₂ is in S₂. Then S is a subspace of Rⁿ.
T
Let S₁ and S₂ be subspaces of Rⁿ, and define S to be the set of all vectors of the form s₁ - s₂, where s₁ is in S₁ and s₂ is in S₂. Then S is a subspace of Rⁿ.
T
The set of integers forms a subspace of R.
F
A subspace S ≠ {0} can have a finite number of vectors.
F
If S₁ and S₂ are subsets of Rⁿ but not subspaces, then the intersection of S₁ and S₂ cannot be a subspace of Rⁿ.
F
If S₁ and S₂ are subsets of Rⁿ but not subspaces, then the union of S₁ and S₂ cannot be a subspace of Rⁿ.
F
If S = span{u₁, u₂, u₃}, then dim(S) = 3.
F
If a set of vectors U spans a subspace S, then vectors can be added to U to create a basis for S.
F
If a set of vectors U is linearly independent in a subspace S, then vectors can be added to U to create a basis for S.
T
If a set of vectors U spans a subspace S, then vectors can be removed from U to create a basis for S.
T
If a set of vectors U is linearly independent in a subspace S, then vectors can be removed from U to create a basis for S.
F
Three nonzero vectors that lie in a plane in R³ might form a basis for R³.
F
If S₁ is a subspace of dimension 3 in R⁴, then there cannot exist a subspace S₂ of R⁴ such that S₁ ⊂ S₂ ⊂ R⁴ but S₁ ≠ S₂ ≠ R⁴.
T
The set {0} forms a basis for the zero subspace.
F
Rⁿ has exactly one subspace of dimension m for each of m = 0,1,2,...,n.
F
Let m > n. Then U = {u₁, u₂,...,uₘ} in Rⁿ can form a basis for Rⁿ if the correct m − n vectors are removed from U.
F
Let m > n. Then U = {u₁, u₂,...,uₘ} in Rⁿ can form a basis for Rⁿ if the correct n − m vectors are added to U.
F
If {u₁, u₂, u₃} is a basis for R³, then span{u₁, u₂} is a plane.
T
If A is a matrix, then the dimension of the row space of A is equal to the dimension of the column space of A.
T
If A is a square matrix, then row(A) = col(A).
F
The rank of a matrix A cannot exceed the number of rows of A.
T
If Ax = b is a consistent linear system, then b is in row(A).
F
If A is a 4 x 13 matrix, then the nullity of A could be equal to 5.
F
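The rank–nullity theorem settles this card by arithmetic alone: for a 4 x 13 matrix, rank(A) ≤ 4 and rank(A) + nullity(A) = 13, so the nullity is at least 9. A minimal sketch:

```python
# Possible nullities of a 4 x 13 matrix (rows = 4, cols = 13):
# rank(A) <= min(rows, cols) and rank(A) + nullity(A) = cols.
rows, cols = 4, 13
possible_nullities = {cols - rank for rank in range(min(rows, cols) + 1)}
print(sorted(possible_nullities))  # [9, 10, 11, 12, 13]
print(5 in possible_nullities)     # False
```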
Suppose that A is a 9 x 5 matrix, and that T(x) = Ax is a linear transformation. Then T can be onto.
F
Suppose that A is a 9 x 5 matrix, and that T(x) = Ax is a linear transformation. Then T can be one-to-one.
T
Every matrix A has a determinant.
F
If A is an n x n matrix with all positive entries, then det(A) > 0.
F
If A is a diagonal matrix, then all of the minors of A are also diagonal.
F
If the cofactors of an n x n matrix A are all nonzero, then det(A) ≠ 0.
F
If A and B are 2 x 2 matrices, then det(A - B) = det(A) - det(B).
F
Interchanging the rows of a matrix has no effect on its determinant.
F
If det(A) ≠ 0, then the columns of A are linearly independent.
T
If E is an elementary matrix, then det(E) = 1.
F
If A and B are n x n matrices, then det(A + B) = det(A) + det(B).
F
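Two of the determinant cards above (det is not additive, and all-positive entries do not force det > 0) can be checked with a one-line 2 × 2 determinant; the matrices are arbitrary illustrative examples:

```python
def det2(M):
    """Determinant of a 2x2 matrix [[a, b], [c, d]]."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 0], [0, 1]]
B = [[1, 0], [0, 1]]
A_plus_B = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
print(det2(A_plus_B), det2(A) + det2(B))  # 4 2 -> det(A + B) != det(A) + det(B)

P = [[1, 2], [2, 1]]  # all entries positive...
print(det2(P))        # -3 -> ...yet det(P) < 0
```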
If A is a 3 x 3 matrix and det(A) = 0, then rank(A) = 0.
F
If A is a 4 x 4 matrix and det(A) = 4, then nullity(A) = 0.
T
Let A, B, and S be n x n matrices, and S be invertible. If B = S⁻¹AS, then det(A) = det(B).
T
If A is an n x n matrix with all entries equal to 1, then det(A) = n.
F
Suppose that A is a 4 x 4 matrix, and that B is a matrix obtained by multiplying the third column of A by 2. Then det(B) = 2det(A).
T
Cramer's rule can be used to find the solution to any system that has the same number of equations as unknowns.
F
If A is a square matrix with integer entries, then so is adj(A).
T
If A is a 3 x 3 matrix, then adj(2A) = 2adj(A).
F
If A is a square matrix that has all positive entries, then so does adj(A).
F
If A is an n x n matrix with det(A) = 1, then A⁻¹ = adj(A).
T
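For 2 × 2 matrices the adjugate has a closed form, which makes the last two cards easy to verify; the matrix A below is an arbitrary integer example with det(A) = 1:

```python
def adj2(M):
    """Adjugate of [[a, b], [c, d]], which is [[d, -b], [-c, a]]."""
    a, b = M[0]
    c, d = M[1]
    return [[d, -b], [-c, a]]

def matmul2(X, Y):
    """Multiply two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 3], [1, 2]]        # integer entries, det = 2*2 - 3*1 = 1
print(adj2(A))              # [[2, -3], [-1, 2]]: still integer entries
print(matmul2(A, adj2(A)))  # [[1, 0], [0, 1]]: adj(A) acts as A^-1
```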
An eigenvalue λ must be nonzero, but an eigenvector u can be equal to the zero vector.
F
The dimension of an eigenspace is always less than or equal to the multiplicity of the associated eigenvalue.
T
If u is a nonzero eigenvector of A, then u and Au point in the same direction.
F
If λ₁ and λ₂ are eigenvalues of a matrix, then so is λ₁ + λ₂.
F
If A is a diagonal matrix, then the eigenvalues of A lie along the diagonal.
T
If 0 is an eigenvalue of A, then nullity(A) > 0.
T
If 0 is the only eigenvalue of A, then A must be the zero matrix.
F
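A 2 × 2 counterexample is worth sketching here: the characteristic polynomial of a 2 × 2 matrix is λ² − tr(A)λ + det(A), and the nilpotent matrix below (an illustrative choice) has 0 as its only eigenvalue without being the zero matrix:

```python
# A = [[0, 1], [0, 0]] has trace 0 and determinant 0, so its
# characteristic polynomial is lam^2 and its only eigenvalue is 0,
# even though A is not the zero matrix.
A = [[0, 1], [0, 0]]
trace = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
print(trace, det)             # 0 0
print(A == [[0, 0], [0, 0]])  # False
```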
If each eigenspace of A has dimension equal to the multiplicity of the associated eigenvalue, then A is diagonalizable.
T
If an n x n matrix A has n distinct eigenvalues, then A is diagonalizable.
T
If A is not invertible, then A is not diagonalizable.
F
If A is diagonalizable, then so is A^T.
T
If A is a diagonalizable n x n matrix, then rank(A) = n.
F
If A and B are diagonalizable n x n matrices, then so is AB.
F
If A and B are diagonalizable n x n matrices, then so is A + B.
F
If A is a diagonalizable n x n matrix, then there exist eigenvectors of A that form a basis for Rⁿ.
T
Vectors must be columns of numbers.
F
A set of vectors U in a vector space V can be linearly independent or can span V, but cannot do both.
F
Suppose that f and g are linearly dependent functions in C[1,4]. If f(1) = -3g(1), then it must be that f(4) = -3g(4).
F
Let {v₁,...,vₖ} be a linearly independent subset of a vector space V. If c ≠ 0 is a scalar, then {cv₁,...,cvₖ} is also linearly independent.
T
Suppose that V₁⊂V₂ are sets in a vector space V. If V₂ spans V, then so does V₁.
F
Let {v₁,...,vₖ} be a linearly independent subset of a vector space V. For any v ≠ 0 in V, the set {v + v₁,...,v + vₖ} is also linearly independent.
F
If {v₁, v₂, v₃} is a linearly independent set, then so is {v₁, v₂ − v₁, v₃ − v₂ + v₁}.
T
If V₁ and V₂ are linearly independent subsets of a vector space V, then so is V₁ ∩ V₂.
T
The size of a vector space basis varies from one basis to another.
F
There is no linearly independent subset of P₅ containing 7 elements.
T
No two vector spaces can share the same dimension.
F
If V is a vector space with dim(V) = 6 and S is a subspace of V with dim(S) = 6, then S = V.
T
If V is a finite dimensional vector space, then V cannot contain an infinite linearly independent subset.
T
If V₁ and V₂ are vector spaces and dim(V₁) < dim(V₂), then V₁ ⊂ V₂.
F
If a set of vectors U spans a vector space V, then vectors can be added to U to produce a basis for V.
F
If V is a finite dimensional vector space, then every subspace of V must also be finite dimensional.
T
If {v₁,...,vₖ} is a basis for a vector space V, then so is {cv₁,...,cvₖ}, where c is a scalar.
F
If S₁ is a subspace of a vector space V and dim(S₁) = 1, then the only proper subspace of S₁ is S₂ = {0}.
T
If ‖u − v‖ = 3, then the distance between 2u and 2v is 12.
F
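The norm is absolutely homogeneous, so ‖2u − 2v‖ = 2‖u − v‖ and the distance doubles to 6 rather than jumping to 12. A sketch with concrete vectors chosen so that ‖u − v‖ = 3:

```python
from math import dist  # Euclidean distance between two points

u = (3.0, 0.0)
v = (0.0, 0.0)
print(dist(u, v))      # 3.0

u2 = (2 * u[0], 2 * u[1])
v2 = (2 * v[0], 2 * v[1])
print(dist(u2, v2))    # 6.0, i.e. 2 * ||u - v||, not 4 * ||u - v||
```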
If u and v in Rⁿ have nonnegative entries, then u · v ≥ 0.
T
‖u + v‖ = ‖u‖ + ‖v‖ for all u and v in Rⁿ.
F
Suppose that {s₁, s₂, s₃} is an orthogonal set, and that c₁, c₂, and c₃ are scalars. Then {c₁s₁, c₂s₂, c₃s₃} is also an orthogonal set.
T
If A is an n × n matrix and u is in Rⁿ, then ‖u‖ ≤ ‖Au‖.
F
If u1 · u2 = 0 and u2 · u3 = 0, then u1 · u3 = 0.
F
If ‖u − v‖ = ‖u + v‖, then u and v are orthogonal.
T
If u is in R⁵ and S is a 3-dimensional subspace of R⁵, then projSu is in R³.
F
If S is a subspace, then projSu is in S.
T
If u and v are vectors, then projvu is a multiple of u.
F
If u and v are orthogonal, then projvu = 0.
T
If projSu = u, then u is in S.
T
For a vector u and a subspace S, projS(projSu) = projSu.
T
For vectors u and v, proju(projvu) = u.
F
If T : V → W is a linear transformation, then T(v₁ − v₂) = T(v₁) − T(v₂).
T
If T : V → W is a linear transformation, then T(v) = 0 implies that v = 0.
F
If T : V → W is a linear transformation, then dim(ker(T)) ≤ dim(range(T)).
F
If T : V → W is a linear transformation and {v₁,...,vₖ} is a linearly independent set, then so is {T(v₁),...,T(vₖ)}.
F
If T : V → W is a linear transformation and {v₁,...,vₖ} is a linearly dependent set, then so is {T(v₁),...,T(vₖ)}.
T
If T : R²ˣ² → P₆, then it is impossible for T to be onto.
T
If T : P₄ → R⁶, then it is impossible for T to be one-to-one.
F
Let T : V → W be a linear transformation and w a nonzero vector in W. Then the set of all v in V such that T(v) = w forms a subspace.
F
If ⟨u, v⟩ = 3, then ⟨2u, −4v⟩ = −24.
T
If u and v are orthogonal with ‖u‖ = 3 and ‖v‖ = 4, then ‖u + v‖ = 5.
T
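This is the Pythagorean theorem: for orthogonal u and v, ‖u + v‖² = ‖u‖² + ‖v‖². A check with illustrative vectors of norms 3 and 4:

```python
from math import hypot  # hypot(x, y) = sqrt(x*x + y*y)

u = (3.0, 0.0)  # ||u|| = 3
v = (0.0, 4.0)  # ||v|| = 4, and u . v = 0
print(hypot(u[0] + v[0], u[1] + v[1]))  # 5.0
```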
If u = cv for a scalar c, then u = projuv.
F
If {u, v} is an orthogonal set and c1 and c2 are scalars, then {c1u, c2v} is also an orthogonal set.
T
−‖u‖‖v‖ ≤ ⟨u, v⟩ for all u and v in V.
T
‖u − v‖ ≤ ‖u‖ − ‖v‖ for all u and v in V.
F
⟨f, g⟩ = ∫₋₁¹ x f(x) g(x) dx is an inner product on C[−1, 1].
F
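The weight x breaks positivity: ⟨f, f⟩ = ∫₋₁¹ x f(x)² dx is 0 for the nonzero constant function f = 1, since the integrand is odd. A numeric midpoint-rule sanity check (the quadrature helper is an illustrative sketch):

```python
def inner(f, g, n=100000):
    """Midpoint-rule approximation of the integral of x*f(x)*g(x) on [-1, 1]."""
    h = 2.0 / n
    total = 0.0
    for i in range(n):
        x = -1.0 + (i + 0.5) * h
        total += x * f(x) * g(x)
    return total * h

one = lambda x: 1.0
print(abs(inner(one, one)) < 1e-9)  # True: <1, 1> = 0 though f = 1 is nonzero
```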
If T : V → Rⁿ is a linear transformation, then ⟨u, v⟩ = T(u) · T(v) is an inner product.
F