
Wednesday, April 3, 2019

History and Applications of Matrices

Matrices find many applications at the current time and are very useful to us. Physics makes use of matrices in various domains, for example in geometrical optics and matrix mechanics; the latter led to studying in more detail matrices with an infinite number of rows and columns. Graph theory uses matrices to keep track of distances between pairs of vertices in a graph. Computer graphics uses matrices to project 3-dimensional space onto a 2-dimensional screen.

Example of application

A message is converted into numeric form according to some scheme. The easiest scheme is to let space = 0, A = 1, B = 2, ..., Y = 25, and Z = 26. For example, the message "Red Rum" would become 18, 5, 4, 0, 18, 21, 13.

This data is placed into matrix form. The size of the matrix depends on the size of the encryption key. Let's say that our encryption matrix (encoding matrix) is a 2x2 matrix. Since I have seven pieces of data, I would place them into a 4x2 matrix and fill the last spot with a space to make the matrix complete. Let's call the original, unencrypted data matrix A.

There is an invertible matrix which is called the encryption matrix or the encoding matrix. We'll call it matrix B. Since this matrix needs to be invertible, it must be square. It could really be anything; it's up to the person encrypting the message. I'll use this matrix.

The unencrypted data is then multiplied by our encoding matrix. The result of this multiplication is the matrix containing the encrypted data. We'll call it matrix X. The message that you would pass on to the other person is the stream of numbers 67, -21, 16, -8, 51, 27, 52, -26.

Decryption Process

Convert the encrypted stream of numbers that represents the encrypted message into a matrix. Multiply by the decoding matrix.
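The scheme above can be sketched in Python. The post's encoding matrix survives only as a lost image, so the 2x2 matrix B = [[4, -2], [-1, 3]] used below is reconstructed to be consistent with the encrypted stream 67, -21, 16, -8, 51, 27, 52, -26 given above; treat it as an illustration rather than the post's exact figure.

```python
import numpy as np

# Scheme: space = 0, A = 1, ..., Z = 26.
def encode_text(msg):
    return [0 if ch == " " else ord(ch) - ord("A") + 1 for ch in msg.upper()]

def decode_nums(nums):
    return "".join(" " if n == 0 else chr(int(n) + ord("A") - 1) for n in nums)

msg = "Red Rum"
nums = encode_text(msg)                  # [18, 5, 4, 0, 18, 21, 13]
nums += [0] * (-len(nums) % 2)           # pad with a space to complete the matrix

A = np.array(nums).reshape(-1, 2)        # 4x2 unencrypted data matrix
B = np.array([[4, -2], [-1, 3]])         # 2x2 encoding matrix, det = 10, invertible

X = A @ B                                # encrypted data matrix
stream = X.flatten().tolist()            # stream passed to the other person
print(stream)                            # [67, -21, 16, -8, 51, 27, 52, -26]

# Decryption: multiply by the decoding matrix (inverse of B), round off floats.
decoded = np.rint(X @ np.linalg.inv(B)).astype(int).flatten().tolist()
print(decode_nums(decoded))              # RED RUM (plus the padding space)
```

Because B has nonzero determinant, the decoding step exactly recovers the original numbers, up to floating-point rounding handled by `np.rint`.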
The decoding matrix is the inverse of the encoding matrix. Convert the matrix into a stream of numbers. Convert the numbers into the text of the original message.

DETERMINANTS

The determinant of a matrix A is denoted det(A), or without parentheses det A. An alternative notation, used for compactness, especially in the case where the matrix entries are written out in full, is to denote the determinant of a matrix by surrounding the matrix entries with vertical bars instead of the usual brackets or parentheses.

For a fixed positive integer n, there is a unique determinant function for the n x n matrices over any commutative ring R. In particular, this unique function exists when R is the field of real or complex numbers.

For any square matrix of order 2, we have found a necessary and sufficient condition for invertibility. Indeed, consider the matrix ...

Example. Evaluate ...

Let us transform this matrix into a triangular one through elementary operations. We keep the first row and add to the second one the first multiplied by ... Using Property 2, we get ... which one may check easily.

EIGEN VALUES AND EIGEN VECTORS

In mathematics, eigenvalue, eigenvector, and eigenspace are related concepts in the field of linear algebra. The prefix eigen- is adopted from the German word eigen for "innate, idiosyncratic, own". Linear algebra studies linear transformations, which are represented by matrices acting on vectors. Eigenvalues, eigenvectors and eigenspaces are properties of a matrix. They are computed by a method described below, give important information about the matrix, and can be used in matrix factorization. They have applications in areas of applied mathematics as diverse as economics and quantum mechanics.

In general, a matrix acts on a vector by changing both its magnitude and its direction.
However, a matrix may act on certain vectors by changing only their magnitude, and leaving their direction unchanged (or possibly reversing it). These vectors are the eigenvectors of the matrix. A matrix acts on an eigenvector by multiplying its magnitude by a factor, which is positive if its direction is unchanged and negative if its direction is reversed. This factor is the eigenvalue associated with that eigenvector. An eigenspace is the set of all eigenvectors that have the same eigenvalue, together with the zero vector.

These concepts are formally defined in the language of matrices and linear transformations. Formally, if A is a linear transformation, a non-null vector x is an eigenvector of A if there is a scalar λ such that Ax = λx. The scalar λ is said to be an eigenvalue of A corresponding to the eigenvector x.

Eigenvalues and Eigenvectors: An Introduction

The eigenvalue problem is a problem of considerable theoretical interest and wide-ranging application. For example, this problem is crucial in solving systems of differential equations, analyzing population growth models, and calculating powers of matrices (in order to define the matrix exponential). Other areas such as physics, sociology, biology, economics and statistics have focused considerable attention on eigenvalues and eigenvectors, their applications and their computations. Before we give the formal definition, let us introduce these concepts with an example.

Example. Consider the matrix A and the three column matrices C1, C2, and C3 given in the original as ... We have A C1 = 0·C1, A C2 = -4·C2, and A C3 = 3·C3. Next consider the matrix P for which the columns are C1, C2, and C3. We have det(P) = 84, so this matrix is invertible. Easy calculations give P^-1. Next we evaluate the matrix P^-1 A P. We leave the details to the reader to check that P^-1 A P = D, the diagonal matrix with diagonal entries 0, -4, and 3. In other words, A = P D P^-1, which implies that A is similar to a diagonal matrix. Using matrix multiplication, we obtain A^n = P D^n P^-1 for every positive integer n.
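The matrices in this example survive only as lost images, so the sketch below uses a hypothetical 3x3 matrix built to have the same eigenvalues 0, -4 and 3; the eigenvector columns are illustrative, not the post's originals. It checks the diagonalization and the cheap power formula numerically:

```python
import numpy as np

# Hypothetical example: build A with eigenvalues 0, -4, 3 by choosing
# an invertible P (columns play the role of C1, C2, C3) and setting
# A = P D P^-1, where D is the diagonal matrix of eigenvalues.
D = np.diag([0.0, -4.0, 3.0])
P = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0],
              [0.0, 0.0, 1.0]])             # det(P) = 1, so P is invertible
A = P @ D @ np.linalg.inv(P)

# A acts on each eigenvector column by scaling it by its eigenvalue.
for i, lam in enumerate([0.0, -4.0, 3.0]):
    assert np.allclose(A @ P[:, i], lam * P[:, i])

# P^-1 A P recovers the diagonal matrix D.
assert np.allclose(np.linalg.inv(P) @ A @ P, D)

# High powers are cheap through the diagonalization: A^75 = P D^75 P^-1,
# i.e. just raise the three diagonal entries to the 75th power.
A75 = P @ np.diag(np.diag(D) ** 75) @ np.linalg.inv(P)
assert np.allclose(A75, np.linalg.matrix_power(A, 75))
print("diagonalization checks passed")
```

Raising D to a power only requires powering three scalars, which is why diagonalization makes A^75 tractable.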
Note that it is almost impossible to find A^75 directly from the original form of A.

This example is so rich in conclusions that many questions suggest themselves in a natural way. For example, given a square matrix A, how do we find column matrices which have similar behaviors as the above ones? In other words, how do we find the column matrices which will help find the invertible matrix P such that P^-1 A P is a diagonal matrix?

From now on, we will call column matrices vectors. So the above column matrices C1, C2, and C3 are now vectors. We have the following definition.

Definition. Let A be a square matrix. A non-zero vector C is called an eigenvector of A if and only if there exists a number λ (real or complex) such that A C = λC. If such a number λ exists, it is called an eigenvalue of A. The vector C is called an eigenvector associated to the eigenvalue λ.

Remark. The eigenvector C must be non-zero, since A·0 = λ·0 = 0 for any number λ.

Example. Consider the matrix A above. We have seen that A C1 = 0·C1, A C2 = -4·C2, and A C3 = 3·C3. So C1 is an eigenvector of A associated to the eigenvalue 0, C2 is an eigenvector of A associated to the eigenvalue -4, while C3 is an eigenvector of A associated to the eigenvalue 3.

It may be interesting to know whether we found all the eigenvalues of A in the above example. In the next page, we will discuss this question as well as how to find the eigenvalues of a square matrix.

PROOFS OF PROPERTIES OF EIGEN VALUES

PROPERTY 1: The inverse of a matrix A exists if and only if zero is not an eigenvalue of A.

Suppose A is a square matrix.
Then A is singular if and only if λ = 0 is an eigenvalue of A.

Proof. We have the following equivalences: A is singular ⟺ there exists x ≠ 0 with Ax = 0 ⟺ there exists x ≠ 0 with Ax = 0·x ⟺ λ = 0 is an eigenvalue of A. Since a singular matrix A has eigenvalue 0 and the inverse of a singular matrix does not exist, this implies that for a matrix to be invertible its eigenvalues must all be non-zero.

PROPERTY 2: Eigenvalues of a real matrix are real or occur in complex conjugate pairs.

Suppose A is a square matrix with real entries and x is an eigenvector of A for the eigenvalue λ. Then conj(x) is an eigenvector of A for the eigenvalue conj(λ).

Proof. A·conj(x) = conj(A)·conj(x) (A has real entries) = conj(Ax) = conj(λx) (x is an eigenvector of A) = conj(λ)·conj(x).

Here we used the following fact. Suppose A is an m x n matrix and B is an n x p matrix. Then conj(AB) = conj(A)·conj(B).

Proof. To obtain this matrix equality, we work entry-by-entry. For 1 ≤ i ≤ m, 1 ≤ j ≤ p,

[conj(AB)]_ij = conj([AB]_ij) = conj(sum_{k=1..n} A_ik B_kj) = sum_{k=1..n} conj(A_ik B_kj) = sum_{k=1..n} conj(A_ik) conj(B_kj) = [conj(A) conj(B)]_ij.

APPLICATION OF EIGEN VALUES IN FACIAL RECOGNITION

How does it work?

The task of facial recognition is discriminating input signals (image data) into several classes (persons). The input signals are highly noisy (e.g. the noise is caused by differing lighting conditions, pose, etc.), yet the input images are not completely random, and in spite of their differences there are patterns which occur in any input signal. Such patterns, which can be observed in all signals, could be, in the domain of facial recognition, the presence of some objects (eyes, nose, mouth) in any face, as well as the relative distances between these objects. These characteristic features are called eigenfaces in the facial recognition domain (or principal components generally). They can be extracted out of original image data by means of a mathematical tool called Principal Component Analysis (PCA).

By means of PCA one can transform each original image of the training set into a corresponding eigenface. An important feature of PCA is that one can reconstruct any original image from the training set by combining the eigenfaces.
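The two eigenvalue properties above are easy to check numerically; the matrices below are hypothetical examples chosen for illustration:

```python
import numpy as np

# Property 1: a singular matrix has 0 as an eigenvalue.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])                 # second row = 2 * first row, det = 0
assert np.isclose(np.linalg.det(S), 0.0)
assert np.any(np.isclose(np.linalg.eigvals(S), 0.0))

# Property 2: complex eigenvalues of a real matrix come in conjugate pairs.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])                # rotation by 90 degrees, eigenvalues i, -i
vals = np.linalg.eigvals(R)
assert np.isclose(vals[0].conjugate(), vals[1])
print("eigenvalue properties verified")
```

The rotation matrix is a convenient test case because it has no real eigenvectors at all: it changes the direction of every non-zero real vector.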
Remember that eigenfaces are nothing less than characteristic features of the faces. Therefore one could say that the original face image can be reconstructed from eigenfaces if one adds up all the eigenfaces (features) in the right proportion. Each eigenface represents only certain features of the face, which may or may not be present in the original image. If the feature is present in the original image to a higher degree, the share of the corresponding eigenface in the sum of the eigenfaces should be greater. If, on the contrary, the particular feature is not (or almost not) present in the original image, then the corresponding eigenface should contribute a smaller part (or none at all) to the sum of eigenfaces. So, in order to reconstruct the original image from the eigenfaces, one has to build a kind of weighted sum of all eigenfaces. That is, the reconstructed original image is equal to a sum of all eigenfaces, with each eigenface having a certain weight. This weight specifies to what degree the specific feature (eigenface) is present in the original image.

If one uses all the eigenfaces extracted from the original images, one can reconstruct the original images from the eigenfaces exactly. But one can also use only a part of the eigenfaces. Then the reconstructed image is an approximation of the original image. However, one can ensure that losses due to omitting some of the eigenfaces are minimized. This happens by choosing only the most important features (eigenfaces). Omission of eigenfaces is necessary due to scarcity of computational resources.

How does this relate to facial recognition? The clue is that it is possible not only to reconstruct the face from the eigenfaces given a set of weights, but also to go the opposite way: to extract the weights from the eigenfaces and the face to be recognized. These weights tell nothing less than the amount by which the face in question differs from the typical faces represented by the eigenfaces.
Therefore, using these weights one can determine two important things:

1. Whether the image in question is a face at all. If the weights of the image differ too much from the weights of face images (i.e. images from which we know for sure that they are faces), the image probably is not a face.

2. Similar faces (images) possess similar features (eigenfaces) to similar degrees (weights). If one extracts weights from all the images available, the images can be grouped into clusters. That is, all images having similar weights are likely to be similar faces.
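The whole pipeline described above can be sketched with PCA on tiny synthetic "images". The data, image size, and number of components below are made up for illustration; real eigenface systems follow the same steps on actual photographs, often via a library such as scikit-learn:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic "training set": 20 face images of 8x8 pixels, each
# flattened to a 64-dimensional vector (real systems do the same, just bigger).
faces = rng.normal(size=(20, 64))

# PCA: center the data, then take eigenvectors of the covariance matrix.
mean_face = faces.mean(axis=0)
centered = faces - mean_face
cov = centered.T @ centered / len(faces)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigh: cov is symmetric
order = np.argsort(eigvals)[::-1]           # sort by decreasing eigenvalue
eigenfaces = eigvecs[:, order[:10]]         # keep the 10 most important features

# Each face is described by its weights: projections onto the eigenfaces.
weights = centered @ eigenfaces             # shape (20, 10)

# Reconstruction: weighted sum of eigenfaces plus the mean face.
reconstructed = mean_face + weights @ eigenfaces.T

# With only 10 of 64 components the reconstruction is approximate;
# comparing weight vectors is how faces are matched or clustered.
err = np.linalg.norm(faces - reconstructed) / np.linalg.norm(faces)
print(f"relative reconstruction error with 10 eigenfaces: {err:.3f}")
```

Recognition then reduces to comparing the 10-dimensional weight vectors: two images whose weights are close are likely the same (or a similar) face, and an image whose weights fall far from every training face is probably not a face at all.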
