Right hemisphere damage (RHD) is caused by trauma or injury to the right hemisphere of the brain. This project was motivated by a significant gap in research on the automatic detection of RHD: although previous authors have studied the disorder, only one prior study applied machine learning (ML) to the classification of RHD from spoken data. The aim of this project was to benchmark a set of ML techniques, not previously applied to this task, for the automatic classification of RHD from spoken data. The data consisted of loosely structured interviews with thirty-four control participants and thirty-four participants with RHD; recordings were filtered to remove interviewer speech and segmented at individual speaker turns, leaving participant-only speech. Five ML models were implemented for this task: a baseline model trained on neurotypical speech, an SVM trained on a single acoustic feature set, an ensemble model, a fine-tuned Wav2Vec2 model, and a prompted large language model. StratifiedGroupKFold was used for cross-validation, and predictions were made at the speech-segment level and then aggregated into speaker-level predictions by majority voting for all non-LLM approaches; the prompting experiments instead relied on transcribed data. The ensemble model achieved the highest accuracy among the approaches on its own full dataset (0.686 ± 0.162), and the fine-tuned Wav2Vec2 model achieved the highest accuracy on the common speakers (0.697). These findings show that ML approaches can distinguish the speech of someone with RHD from that of someone without, with the more complex models showing the strongest performance overall. Overall, these findings suggest that the application of ML to the study of RHD is promising and should be pursued further in future work.