Distributed machine learning allows training models on decentralised data residing on many devices, such as mobile phones or IoT devices. However, these edge devices typically have limited communication bandwidth for transferring the initial global model and the local gradient updates; this bandwidth limit is one of the major bottlenecks that hinders applying federated learning (FL) in practice. We would like to leverage the permutation invariance of neural networks so that each device starts learning from a locally initialised model, rather than a downloaded global one, and transmits only partial parameter updates. This approach exploits update locality and should considerably reduce bandwidth usage. The goal of the thesis would be to implement the approach and compare it with state-of-the-art distributed machine learning implementations. Interested? Contact us for more details!
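To give a flavour of the key property mentioned above, here is a minimal sketch (our illustration, not the proposed method itself) of the permutation invariance of neural networks: reordering the hidden units of an MLP, together with the corresponding weight rows and columns, leaves the computed function unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)  # hidden layer
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)  # output layer

def mlp(x, W1, b1, W2, b2):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU hidden activations
    return W2 @ h + b2

# Permute the hidden units: rows of W1 and entries of b1,
# plus the matching columns of W2.
perm = rng.permutation(4)
x = rng.normal(size=3)
y_orig = mlp(x, W1, b1, W2, b2)
y_perm = mlp(x, W1[perm], b1[perm], W2[:, perm], b2)

assert np.allclose(y_orig, y_perm)  # same function despite permuted weights
```

Because many weight configurations thus encode the same function, devices need not all start from one identical global model, which is what makes local initialisation plausible.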