Researchers at NTT, a global provider of telecoms and IT services, have developed software and algorithms that let computers distributed at the edge of the network coordinate to train machine learning models without aggregating the training data in one centralized location. The novel AI/ML training method could give edge computing service providers, including telcos, opportunities to offer new analytics and AI services.
“Recent machine learning, especially deep learning, generally involves training models, such as image/speech recognition, by aggregating data at a fixed location such as a cloud data center,” the researchers said in a statement. “However, in the IoT era, where everything is connected to networks, aggregating vast amounts of data on the cloud is complicated.” Another benefit of not moving the data: better compliance with privacy regulations such as GDPR.
“Our research is investigating a training algorithm to obtain a global model as if it is trained by aggregating data in a single server, even when the data are placed in distributed servers, such as in edge computing,” according to the statement.
NTT says the proposed technology successfully trained a global model in early experiments, even in cases where the servers hold different types of data and communication between them is “asynchronous,” meaning each compute node’s results do not depend on receiving data and results from another node.
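To make the idea concrete, here is a minimal sketch of the general approach the statement describes: each edge node trains on its own local data and shares only model parameters, which are then averaged into a global model. This is an illustrative federated-averaging-style example, not NTT's actual algorithm; the linear model, learning rate, and data shards are all assumptions for demonstration.

```python
# Illustrative sketch (not NTT's published method): edge nodes train
# locally and exchange only parameters -- raw data never leaves a node.

def local_train(data, lr=0.1, epochs=2000):
    """Fit a one-feature linear model (y = w*x + b) to this node's data
    via stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x  # gradient step on the slope
            b -= lr * err      # gradient step on the intercept
    return w, b

def federated_average(local_params):
    """Combine per-node parameters into a global model by averaging."""
    n = len(local_params)
    w = sum(p[0] for p in local_params) / n
    b = sum(p[1] for p in local_params) / n
    return w, b

# Two edge nodes each hold a private shard drawn from y = 2x + 1.
shards = [
    [(x, 2 * x + 1) for x in (0.1, 0.4, 0.7)],
    [(x, 2 * x + 1) for x in (0.2, 0.5, 0.9)],
]
global_w, global_b = federated_average([local_train(s) for s in shards])
```

In a real asynchronous deployment, a coordinator would fold in each node's parameters as they arrive rather than waiting for all nodes, which is the property the experiments above test.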
NTT notes that interest in edge computing is growing because of its benefit of lower application latency, and expects community interest in applying its research to edge compute and networking services.
The company said it will continue to develop the technology for commercial applications, and will release the source code to promote collaboration.
AI | distributed computing | edge AI | edge computing | ML | NTT