Predictive Coding (PC) is a neuroscientific theory that has inspired a variety of training algorithms for biologically inspired deep neural networks (DNNs). However, many of these models have only been assessed in terms of their learning performance, without evaluating whether they accurately reflect the underlying mechanisms of neural learning in the brain. This study explores whether PC-inspired DNNs can serve as biologically plausible models of the brain. We compared two PC-inspired training objectives, a predictive and a contrastive approach, to a supervised baseline within a simple recurrent neural network (RNN) architecture. We evaluated the models on key signatures of PC, including mismatch responses, the formation of priors, and the learning of semantic information. Our results show that the PC-inspired models, especially a locally trained predictive model, exhibited these PC-like behaviors more strongly than a supervised or an untrained RNN. Further, we found that activity regularization evokes mismatch-response-like effects across all models, suggesting it may serve as a proxy for the energy-saving principles of PC. Finally, we found that gain control, an important mechanism in the PC framework, can be implemented using weight regularization. Overall, our findings indicate that PC-inspired models capture important computational principles of predictive processing in the brain and can serve as a promising foundation for building biologically plausible artificial neural networks. This work contributes to our understanding of the relationship between artificial neural networks and biological ones such as the brain, and highlights the potential of PC-inspired algorithms for advancing brain modelling as well as brain-inspired machine learning.
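
To make the two regularizers mentioned above concrete, here is a minimal PyTorch sketch of a next-step-prediction RNN training step that combines an L2 penalty on hidden activity (the "activity regularization") with L2 weight decay (the stand-in for gain control). This is an illustrative assumption of one plausible setup, not the paper's actual implementation; all dimensions, loss choices, and coefficients are arbitrary placeholders.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy vanilla RNN with a linear readout, trained with a predictive
# (next-time-step) objective; sizes are illustrative only.
rnn = nn.RNN(input_size=8, hidden_size=32, batch_first=True)
readout = nn.Linear(32, 8)
params = list(rnn.parameters()) + list(readout.parameters())

# weight_decay applies L2 weight regularization, used here as an
# assumed proxy for gain control; the value 1e-4 is arbitrary.
optimizer = torch.optim.Adam(params, lr=1e-3, weight_decay=1e-4)

x = torch.randn(16, 20, 8)             # (batch, time, features)
inputs, targets = x[:, :-1], x[:, 1:]  # predict the next time step

out, _ = rnn(inputs)                   # hidden activity over time
pred = readout(out)
pred_loss = nn.functional.mse_loss(pred, targets)

# L2 penalty on hidden activity: the "activity regularization"
# discussed in the abstract (coefficient 1e-3 is illustrative).
act_reg = 1e-3 * out.pow(2).mean()

loss = pred_loss + act_reg
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Note that the activity penalty acts on the network's responses while weight decay acts on its parameters; keeping the two terms separate makes it possible to probe their effects (mismatch-response-like suppression vs. gain-control-like scaling) independently.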