Vector Product of Two Vectors in Higher Dimensions

W. F. Esterhuyse

August 2007

ABSTRACT

We show that the higher-dimensional cross product can be defined using the determinant expansion (see ref. [5]), by showing that the cross products of unit vectors that occur in this formula can be reduced in any dimension, just as in the 3-dimensional case. We show that this cross product can be defined in two different ways (producing equivalent results), one of them using 3xn determinants (via a new definition). We also introduce a new "dot product", needed to define the area of the parallelogram spanned by two vectors in any dimension.

Keywords: Vectors, Vector Product, Cross Product, Higher Dimensions, Lie Algebra, Jacobi Condition.

CONTENTS:
1. Cross Product of two Vectors
Appendix A
Bibliography

1 Cross Product of two Vectors

Throughout this article the basis vectors will be denoted [k] or e_k. We quote the determinant expansion of the cross product (ref. [5]):

u×v = ∑_(i<j) | u_i u_j | [i]×[j]                                        (1)
              | v_i v_j |

Compute this in 4D:

u×v = | u1 u2 | [1]×[2] + | u1 u3 | [1]×[3] + | u1 u4 | [1]×[4]
      | v1 v2 |           | v1 v3 |           | v1 v4 |

    + | u2 u3 | [2]×[3] + | u2 u4 | [2]×[4] + | u3 u4 | [3]×[4]          (2)
      | v2 v3 |           | v2 v4 |           | v3 v4 |

where the dimension d is the maximum value of j. This is a bit unsatisfactory, since in 3 dimensions the cross products of the unit vectors can be reduced. For this to be a direct analog we would need to be able to do something similar in any dimension. This is possible. We first make a table of values that extends the 3D table. In 4D the table is:

[1]×[2] =  [3] - [4]
[1]×[3] = -[2] + [4]
[1]×[4] =  [2] - [3]
[2]×[1] =
[2]×[3] =  [1] - [4]
[2]×[4] = -[1] + [3]
[3]×[1] =
[3]×[2] =
[3]×[4] =  [1] - [2]
[4]×[1] =
[4]×[2] =
[4]×[3] =

where we only have to complete the entries [m]×[k] with m < k. We need the rest, however, because we need to compute the signs. They must be in the following pattern:

+ - + - + ...
- + - + - ...
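As an illustrative check (not part of the original text), the 4D table above can be used to compute u×v from (2) and to verify the orthogonality requirement numerically. This is a minimal sketch: the function name `cross4` and the test vectors are my own.

```python
from itertools import combinations

# 4D reduction table from the text: [i]x[j] (i < j) reduces to a signed
# sum of the two remaining unit vectors. Indices are 0-based here.
TABLE = {
    (0, 1): {2: +1, 3: -1},   # [1]x[2] =  [3] - [4]
    (0, 2): {1: -1, 3: +1},   # [1]x[3] = -[2] + [4]
    (0, 3): {1: +1, 2: -1},   # [1]x[4] =  [2] - [3]
    (1, 2): {0: +1, 3: -1},   # [2]x[3] =  [1] - [4]
    (1, 3): {0: -1, 2: +1},   # [2]x[4] = -[1] + [3]
    (2, 3): {0: +1, 1: -1},   # [3]x[4] =  [1] - [2]
}

def cross4(u, v):
    """4D vector product: formula (2) plus unit-vector reduction."""
    out = [0] * 4
    for i, j in combinations(range(4), 2):
        minor = u[i] * v[j] - u[j] * v[i]       # 2x2 determinant
        for k, sign in TABLE[(i, j)].items():   # reduce [i]x[j] via table
            out[k] += sign * minor
    return out

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

u, v = [1, 0, 2, 0], [0, 1, 0, 3]
w = cross4(u, v)
print(w)                      # [4, -3, -2, 1]
print(dot(u, w), dot(v, w))   # 0 0 -- orthogonal to both inputs
```

The result is orthogonal to both inputs, consistent with the orthogonality requirement mentioned below.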

The further logic of the table is that the RS has unit vectors in increasing (left < right) order, and all the numbers not in the LS must occur in the RS. We note, however, that the table does not extend by this logic for odd dimensions, while (3) and the number triangle below hold in any dimension (details in ref. [7]). The table for odd dimensions follows this logic except for the signs: anticommutativity determines the first-column sign every time the leftmost k in [k] changes, and the rest of the row is + - + ... or - + - ... (see the 5D table in Appendix A). We may now rewrite the cross product in the standard basis of IR^d using the following definition. The reader may verify that (3) produces results equivalent to substituting the RS of the table into (2). Define the operator:

A(n, m) • : an assignment operator taking a double-indexed operand (with indices n and m) such that it assigns the pair values to the operand indices (in the order of the index pair) and changes into +1. Here (n, m) ∈ Tk x Tk are pairs from a number triangle depending on the index k. For d the number of dimensions, d >= 3:

u×v = ∑_(k=1 to d) e_k (-1)^(k+1) ∑_((n,m) ∈ Tk x Tk) [ A(n, m) • u_n v_m - A(m, n) • u_n v_m ]        (3)

In 4D the Tk's are:

T1: 23 | 42
      34

T2: 31 | 14
      43
obtained from:
    13 | 41
      34

T3: 12 | 41
      24

T4: 21 | 13
      32
obtained from:
    12 | 31
      23

where T1 determines the rest by successive replacement (and pair inversion for every even k). These follow the table, with the order determining the signs. The replacements are, explicitly: 1 replaces 2 in the triangle of T1, 2 replaces 3 in the original triangle of T2, etc., where the original triangle is the one before pair inversion. T1 is extendable to higher dimensions:

T1: 23 42 25 62 27 82 29 ...
       34 53 36 73 38 93 ...
          45 64 47 84 49 ...
             56 75 58 95 ...
                67 86 69 ...
                   78 97 ...
                      89 ...

The formula (3) above follows from my derivation (in other terminology) of the cross product together with the same number triangles (see ref. [7]), which generates the table in 4D.
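To make the role of the number triangles concrete, here is a sketch (my own code, not from the original) that evaluates (3) in 4D and compares it with the table-based reduction of (2). It assumes the reading that the pre-inversion (original) triangles are used together with the (-1)^(k+1) factor, and that A(n, m) • u_n v_m evaluates to u_n v_m; the pairs are transcribed 0-based.

```python
from itertools import combinations

# Pre-inversion triangles T1..T4 in 4D, transcribed 0-based from the text.
TRIANGLES = [
    [(1, 2), (3, 1), (2, 3)],   # T1: 23, 42, 34
    [(0, 2), (3, 0), (2, 3)],   # T2 (original): 13, 41, 34
    [(0, 1), (3, 0), (1, 3)],   # T3: 12, 41, 24
    [(0, 1), (2, 0), (1, 2)],   # T4 (original): 12, 31, 23
]

# 4D reduction table, for comparison with the direct reduction of (2).
TABLE = {
    (0, 1): {2: +1, 3: -1}, (0, 2): {1: -1, 3: +1}, (0, 3): {1: +1, 2: -1},
    (1, 2): {0: +1, 3: -1}, (1, 3): {0: -1, 2: +1}, (2, 3): {0: +1, 1: -1},
}

def cross4_formula3(u, v):
    """Formula (3): k-th component is (-1)^(k+1) times the sum of
    u_n v_m - u_m v_n over the pairs (n, m) in Tk."""
    return [(-1) ** k * sum(u[n] * v[m] - u[m] * v[n] for n, m in pairs)
            for k, pairs in enumerate(TRIANGLES)]  # (-1)**k since k is 0-based

def cross4_table(u, v):
    """Formula (2) with the unit-vector products reduced via the table."""
    out = [0] * 4
    for i, j in combinations(range(4), 2):
        minor = u[i] * v[j] - u[j] * v[i]
        for k, sign in TABLE[(i, j)].items():
            out[k] += sign * minor
    return out

u, v = [1, 0, 2, 0], [0, 1, 0, 3]
print(cross4_formula3(u, v))                        # [4, -3, -2, 1]
print(cross4_formula3(u, v) == cross4_table(u, v))  # True
```

Under these assumptions the two formulations agree, which is the equivalence the reader is invited to verify above.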

Alternatively, one may just accept the table and derive the formula in 4D by direct replacement ([3] - [4] replaces [1]×[2], etc., in (2)) and then generalise. The number triangles were determined using the orthogonality requirement. We may call this replacement from the table "product of unit vector reduction". If we define a new version of the dot product (symbol *•) such that:

[n]×[m] *• [n]×[m] = 1

and for any other permutation of indices:

[n]×[m] *• [i]×[m] = 0 for i ≠ n
[n]×[m] *• [n]×[i] = 0 for i ≠ m
[n]×[m] *• [i]×[j] = 0 for i ≠ n, j ≠ m

and this operator takes precedence over product of unit vector reduction, then the (hyper)area of the parallelogram defined by two vectors u and v in any dimension can be stated as:

A^2 = (u×v *• u×v)

    = ∑_(i<j) | u_i u_j |^2                                              (4)
              | v_i v_j |

However, (u×v *• u×v) ≠ u×v • u×v = ||u×v||^2 in dimensions larger than 3. Since (3) is somewhat complex to remember, we now show an alternative formulation as a non-square determinant. Using (2) we have:

(u×v)_1 = { | u2 u3 | [2]×[3] + | u2 u4 | [2]×[4] + | u3 u4 | [3]×[4] } • [1]
            | v2 v3 |           | v2 v4 |           | v3 v4 |

because the table shows us that these products of unit vectors reduce to sums including [1], and no others do. By replacement:

(u×v)_1 = { | u2 u3 | ([1] - [4]) + | u2 u4 | (-[1] + [3]) + | u3 u4 | ([1] - [2]) } • [1]
            | v2 v3 |               | v2 v4 |                | v3 v4 |

        = | u2 u3 | - | u2 u4 | + | u3 u4 |
          | v2 v3 |   | v2 v4 |   | v3 v4 |

which is what the 2x3 determinant reduces to under Definition 1.3 below. This determinant equals:

D1 = | u2 u3 u4 |
     | v2 v3 v4 |

and reduces as (with [] as symbol for a null entry):

| u2 u3 u4 | = | u2 u3 [] | + | u2 [] u4 | + | [] u3 u4 |
| v2 v3 v4 |   | v2 v3 [] |   | v2 [] v4 |   | [] v3 v4 |

             = | u2 u3 | - | u2 u4 | + | u3 u4 |
               | v2 v3 |   | v2 v4 |   | v3 v4 |

D1 may be obtained by ordinary development by row 1 (a definition equivalent to that of the square case) of:

[1] • | [1] X  X  X  |
      | u1  u2 u3 u4 |
      | v1  v2 v3 v4 |

where the X's denote "don't care" entries. It is possible to derive in the same way the following:

(u×v)_2 = [2] • | X  [2] X  X  |
                | u1 u2  u3 u4 |
                | v1 v2  v3 v4 |

(u×v)_3 = [3] • | X  X  [3] X  |
                | u1 u2 u3  u4 |
                | v1 v2 v3  v4 |

(u×v)_4 = [4] • | X  X  X  [4] |
                | u1 u2 u3 u4  |
                | v1 v2 v3 v4  |

which allows us to define the nD vector product as:

u×v = | [1] [2] [3] ... [n] |
      | u1  u2  u3  ... un  |
      | v1  v2  v3  ... vn  |

by straight terminological extension. The definition of non-square determinants is as follows:

1.3 Definition: nxm Determinant

The definition follows that of ref. [2] almost exactly. The only differences are an "and" instead of an "or", the column deletions in the cofactors added as a specification, and the development by row or column specified additionally.

The "reduction rule" states that we can only reduce a 2xn determinant as follows: take all combinations of column deletions so that 2 columns remain (list all the resulting terms and explicitly include deleted (null) entries, indicating them with the symbol []). Write the terms so obtained as a sum. Then, in every term, the columns must be permuted so that all null entries are rightmost; count the number of transpositions needed (say this is p). If p is odd the term must be multiplied by -1, otherwise by +1. After this is done the null entries may be dropped, producing a sum of 2x2 determinants. For nx2 determinants the same applies, just with row deletions, and all null entries must be bottommost.

As we indicated, there are two determinants for each nxm matrix: one developed by column, the other by row. For an nxm matrix A (n < m) the determinant developed by row, written DR A, is defined by:

DR A = a_j1 C_j1 + a_j2 C_j2 + ... + a_jm C_jm,   j = 1, 2, ..., or n        (21)

and develop by row until reaching 2x[m-(n-2)] determinants, then reduce to a sum of 2x2 determinants by all combinations of column deletions (following the reduction rule). To determine the deleted columns we need all combinations of 2 ones and m - 2 zeros. This is so since we need n - 2 developments before reaching 2x[m-(n-2)] determinants. If m < n the cofactors are developed until the [n-(m-2)]x2 stage and then reduced by all combinations of row deletions. The deletions are determined from all combinations of 2 ones and n - 2 zeros.

For an nxm matrix A (n < m) the determinant developed by column, written DC A, is defined by:

DC A = a_1k C_1k + a_2k C_2k + ... + a_nk C_nk,   k = 1, 2, ..., or m        (22)

with the same type of reduction of cofactors. The proof that the definition of DR is unambiguous (i.e. developable by any row) follows that of ref. [3]; the cofactors and minors used in the proof would just be non-square. Development by column is not equivalent to development by row, since one development by row produces m terms while one development by column produces n terms, and n ≠ m. The alternate "definition" of the non-square determinant in which the effect of row (column) swapping is ignored is not tenable, since it does not respect the + - + - pattern essential for the square case, and the pattern must carry over to the non-square case.

Properties: the properties as stated in ref. [3] p. 404 apply to DR and DC, suitably modified, as will be shown in ref. [6]. Between them, DR and DC satisfy almost every property of the square case (just with special considerations for n < m or m < n and for left and right multiplication). Only the rule D A = D (A transposed) cannot be said to hold; instead, DR A = DC (A transposed) holds. Note that for an nxm matrix A, if n < m we will have column deletions at the 2xp stage of development, while for m > n we will have row deletions at the px2 stage.

Appendix A

The table in 5D is:

[1]×[2] =  [3] - [4] + [5]
[1]×[3] = -[2] + [4] - [5]
[1]×[4] =  [2] - [3] + [5]
[1]×[5] = -[2] + [3] - [4]
[2]×[1] = -[3] + [4] - [5]
[2]×[3] =  [1] - [4] + [5]
[2]×[4] = -[1] + [3] - [5]
[2]×[5] =  [1] - [3] + [4]
[3]×[1] =  [2] - [4] + [5]
[3]×[2] = -[1] + [4] - [5]
[3]×[4] =  [1] - [2] + [5]
[3]×[5] = -[1] + [2] - [4]
[4]×[1] = -[2] + [3] - [5]
[4]×[2] =  [1] - [3] + [5]
[4]×[3] = -[1] + [2] - [5]
[4]×[5] =  [1] - [2] + [3]

Bibliography

[1] Ellis, Gulick. Calculus. HBJ, 1986.
[2] Kreyszig. Advanced Engineering Mathematics. Wiley, 1988.
[3] Z. K. Silagadze. Multidimensional Vector Product. Budker Institute of Nuclear Physics, 2002. arXiv:math/0204357v1
[4] Wikipedia. Lie Algebra. 2007. http://en.wikipedia.org/wiki/Lie_algebra
[5] Wikipedia. Geometric Algebra. http://en.wikipedia.org/wiki/Geometric algebra
[6] W. F. Esterhuyse. From Vector Product to Non-square Determinants. Unpublished, 2007.
[7] W. F. Esterhuyse. Vector Product in Higher Dimensions. Unpublished, 2008.
