IBM Research recently introduced its perspective on a machine learning paradigm called Federated Learning, in which multiple parties collaborate to train a single model toward a shared goal. The data can be distributed between competitors, or even within one company across multiple geographies. Participants can take part securely, without sharing their raw data, and as a result obtain models that are far more generalizable than any of them could achieve alone.
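The collaboration described above is often realized with federated averaging: each party trains locally on its private data and only model parameters are shared with a server, which averages them. The following is a minimal sketch in plain NumPy using linear regression; the function names, hyperparameters, and two-client setup are illustrative assumptions, not IBM's implementation.

```python
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=5):
    """One client's local update: a few gradient steps of linear
    regression on its private data; the raw data never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server step: average client models, weighted by dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two parties whose private datasets follow the same underlying relation
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
X1 = rng.normal(size=(50, 2)); y1 = X1 @ true_w
X2 = rng.normal(size=(80, 2)); y2 = X2 @ true_w

w = np.zeros(2)            # shared global model
for _ in range(20):        # communication rounds
    w1 = local_train(w, X1, y1)
    w2 = local_train(w, X2, y2)
    w = federated_average([w1, w2], [len(y1), len(y2)])

print(np.round(w, 2))      # converges toward true_w
```

Note that only the weight vectors `w1` and `w2` cross the network in each round; `X1`/`y1` and `X2`/`y2` stay on their respective clients, which is what allows the parties to benefit from each other's data without exposing it.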
Links related to this episode:
Podchaser is the ultimate destination for podcast data, search, and discovery.