We present the development and evaluation of a semantic text processing system for grading student essays. The system processes an arbitrary number of documents and suggests a letter grade for each, identifies papers that may need additional teacher attention based on component and composite scores, and accepts optional teacher input on which features to use when generating the grade. The system was developed in Python using open-source libraries and is itself released as open source. Following a human-in-the-loop approach, we interviewed expert teachers as part of the design process. Assessing documents on token, sentence, readability, dependency-distance, and part-of-speech features, with user-guided feature selection, the system produced automated grades that matched the true letter grade exactly for 46% of papers and fell within a ±1 letter grade interval for 86% of papers. The program can be further extended to flag grades for potential human review based on user-defined criteria; example code is provided for papers identified as written above the high school level.
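To illustrate the kind of user-defined flagging criterion described above, the sketch below shows one possible implementation (not the system's released code): it estimates a Flesch–Kincaid grade level with a naive syllable heuristic and flags essays scoring above grade 12, i.e., above the high school level. The function names, the threshold, and the syllable counter are illustrative assumptions.

```python
import re


def count_syllables(word: str) -> int:
    # Naive heuristic: count vowel groups, subtract one for a trailing
    # silent 'e'. Real readability libraries use better syllable models.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)


def flesch_kincaid_grade(text: str) -> float:
    # FK grade = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / max(len(sentences), 1))
            + 11.8 * (syllables / max(len(words), 1))
            - 15.59)


def flag_above_high_school(essays: dict, threshold: float = 12.0) -> list:
    # Return the IDs of essays whose estimated grade level exceeds the
    # user-defined threshold (grade 12 = end of high school).
    return [essay_id for essay_id, text in essays.items()
            if flesch_kincaid_grade(text) > threshold]
```

A teacher could swap in any other component score (e.g., mean dependency distance) in place of the readability estimate without changing the flagging interface.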