While Federated Learning (FL) enables collaborative training by sharing only model updates rather than raw data, it can still leak private information. Many efforts have therefore adopted homomorphic encryption or differential privacy to prevent such leakage. However, these solutions come with issues that may limit their adoption in applications where sensitive data sits in silos, including trust in the aggregation server, reduced model accuracy, potential collusion among clients, and limited support for aggregation functions. To address these issues, we advocate secure Multiparty Computation (MPC) for privacy-preserving computation. Specifically, we propose an FL framework that outsources model aggregation to MPC parties running in untrusted cloud environments and offers correctness verification to the model owners. Unlike differential privacy-based solutions, the proposed framework achieves the same accuracy as models trained in the clear and minimizes the possibility of collusion among clients and MPC parties. We implemented and evaluated the proposed framework under various conditions. The results show that it matches the accuracy of centralized FL training while maintaining the required level of privacy and security in malicious cross-silo settings.
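To illustrate the general idea behind MPC-based aggregation (this is a minimal sketch of additive secret sharing, not the paper's actual protocol; the party count, field modulus, and toy update vectors are assumptions for illustration):

```python
# Sketch: clients secret-share their (quantized) model updates across MPC
# parties; each party sums its shares locally, and only the aggregate is
# reconstructed, so no single party learns any individual client's update.
import random

PRIME = 2**61 - 1  # illustrative field modulus for additive shares

def share(value, n_parties):
    """Split an integer into n additive shares modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recover the shared integer by summing all shares modulo PRIME."""
    return sum(shares) % PRIME

# Two toy clients, each with a 3-coordinate model update (assumed values).
client_updates = [[5, 7, 9], [1, 2, 3]]
n_parties = 3

# party_sums[p][i]: party p's share of coordinate i, summed over all clients.
party_sums = [[0] * 3 for _ in range(n_parties)]
for update in client_updates:
    for i, coord in enumerate(update):
        for p, s in enumerate(share(coord, n_parties)):
            party_sums[p][i] = (party_sums[p][i] + s) % PRIME

# Each party reveals only its local sums; reconstructing them yields the
# aggregate update without exposing any individual contribution.
aggregate = [reconstruct([party_sums[p][i] for p in range(n_parties)])
             for i in range(3)]
print(aggregate)  # → [6, 9, 12]
```

Because addition commutes with additive sharing, the reconstructed vector equals the coordinate-wise sum of the client updates, which is exactly what FL aggregation (e.g., FedAvg before dividing by the client count) needs; malicious-security and verification layers would sit on top of this basic primitive.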