Most hyperspectral processing techniques suffer from high execution times: they are iterative, and the complexity of each step depends on the size of the data. As data volumes continue to grow, the computational cost of processing will only increase. In this paper, a new group of distributed algorithms for linear spectral unmixing is introduced. The approach parallelizes recent techniques such as Nonnegative Matrix Factorization (NMF). A theoretical introduction to NMF is presented and its computational costs are discussed. Next, the design of parallel algorithms that minimize data distribution and communication overhead is described. The experimental results support the claim that the distributed algorithms provide a significant computational speedup compared to their sequential counterparts.
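For context, a minimal sequential sketch of NMF using the standard Lee–Seung multiplicative updates is shown below. This is a generic illustration of the factorization being parallelized, not the paper's distributed algorithm; the function name, rank parameter, and iteration count are assumptions for the example.

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9, seed=0):
    """Sketch: factor a nonnegative matrix V (m x n) as V ~= W @ H,
    with W (m x r) and H (r x n) nonnegative, using Lee-Seung
    multiplicative updates for the Frobenius-norm objective.
    (Illustrative only; not the paper's distributed implementation.)"""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        # H <- H * (W^T V) / (W^T W H): keeps H nonnegative elementwise
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        # W <- W * (V H^T) / (W H H^T): keeps W nonnegative elementwise
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Small demo: low-rank approximation of a random nonnegative matrix
V = np.abs(np.random.default_rng(1).random((20, 15)))
W, H = nmf(V, r=3)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Each iteration is dominated by matrix products whose cost scales with the pixel count, which is what makes distributing the data across nodes attractive.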