Fabio made me realize today that the matrices I am using are too large. Getting the eigenvalues of a 55000x55000 matrix would take too much time; even with enough memory, it would take on the order of years. This means the approach I was thinking of following, just saving the matrices, won't work; I need to reduce them to about 10000x10000.
The simplest way to reduce the matrices is to reduce the images. A 187x296px image results in a 55352x1 vector, which eventually leads to a 55352x55352 matrix. I have made a database of reduced photos and I'm testing the code against its data. This reduction will probably decrease the accuracy of the algorithm, so I will study different ways of reducing the size of the matrix.
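As a quick sanity check on the sizes, here is a minimal sketch of that reduction step, assuming grayscale photos loaded with Pillow; the 100x100 target size is just an example that lands near the 10000-element goal, not the size I'll necessarily use:

```python
import numpy as np
from PIL import Image

TARGET_SIZE = (100, 100)  # ~10000-element vectors instead of 55352

def image_to_vector(path, size=TARGET_SIZE):
    """Load a photo, shrink it, and flatten it into a column vector."""
    img = Image.open(path).convert("L")      # force grayscale
    img = img.resize(size, Image.LANCZOS)    # antialiased downscale
    return np.asarray(img, dtype=np.float64).reshape(-1, 1)

# Stacking N reduced faces gives a (size[0]*size[1]) x N data matrix,
# so the resulting square matrix is 10000x10000 instead of 55352x55352.
```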
An idea I've been thinking about lately is dividing the face into regions and identifying each region separately, then computing a "score" for each face in the database and selecting the face with the highest score. The score of each feature would be weighted based on experiments, giving a higher weight when a difficult part of the face is recognized. For example, eyes would weigh more than foreheads, since at first glance foreheads don't seem to offer much information. This method would use a subspace for each feature, turning one really big matrix into a bunch of smaller ones.
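A rough sketch of what that weighted scoring could look like, purely for illustration; the region names, the weights, and the match_region() helper are placeholders, not anything I've implemented yet:

```python
# Illustrative weights: "harder" or more informative regions count more.
REGION_WEIGHTS = {"eyes": 3.0, "nose": 2.0, "mouth": 2.0, "forehead": 0.5}

def score_candidate(probe_regions, candidate_regions, match_region):
    """Sum weighted per-region similarities for one database face.

    match_region(a, b) -> similarity in [0, 1]; each region would have
    its own (much smaller) subspace behind it.
    """
    return sum(w * match_region(probe_regions[r], candidate_regions[r])
               for r, w in REGION_WEIGHTS.items())

def identify(probe_regions, database, match_region):
    """Pick the database face with the highest weighted score."""
    return max(database,
               key=lambda name: score_candidate(probe_regions,
                                                database[name],
                                                match_region))
```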
Another one is averaging the pixels: take a pixel, compute the average value of it and its surrounding pixels, and store that average. Most probably this will be a weighted average, so that edges are not missed and accuracy is compromised as little as possible. This will reduce the matrix sizes, but it is essentially the same as reducing the image: the quality will be lower (fewer px/in), so accuracy will be affected.
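Something along these lines, where the kernel weights the centre pixel more heavily than its neighbours before keeping every second pixel; the kernel values and the step are only illustrative:

```python
import numpy as np

KERNEL = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=np.float64)
KERNEL /= KERNEL.sum()   # normalize so brightness is preserved

def weighted_downsample(img, step=2):
    """Weighted-average each pixel with its 3x3 neighbourhood,
    then keep every `step`-th pixel in each direction."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    blurred = np.zeros((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            blurred[i, j] = np.sum(padded[i:i+3, j:j+3] * KERNEL)
    return blurred[::step, ::step]
```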
The problem with these two methods is that I'm still implementing fisherfaces to use as a control algorithm, so I can't modify the algorithm in such a big way, at least not for the first method, or it won't serve as a control algorithm. However, they might prove useful for the real algorithm.