Together, an open-source AI startup, recently announced a $20M seed funding round led by Lux Capital to build an open AI and cloud platform. The platform is designed to let customers experiment with leading generative AI models, making AI accessible to anyone, no matter their location.
The company plans to use the funding to build out its specialized cloud platform, which is intended to scale training and inference for large models through “distributed optimization.”
Together wants to simplify the customization of foundation models and their integration into production tasks, giving developers and organizations more control over AI while avoiding vendor lock-in and privacy risks.
“We are at the beginning of a new era of AI,” says Vipul Ved Prakash, the co-founder and CEO of Together. “I am so excited for what the future holds and humbled to be a part of the incredible open-source AI movement.”
Besides Lux Capital, this funding round was backed by several venture funds and prominent entrepreneurs, including Factory, SV Angel, First Round Capital and Scott Banister (co-founder of IronPort and an early PayPal board member).
The founders of Together say they shared a belief in the need for open and decentralized alternatives to closed systems in the AI industry, which led them to establish the company. Their goal is to ensure that the future of AI will involve input from the open community of innovators.
“Together, we were driven by the belief that open and decentralized alternatives to closed systems were going to be important — and possibly critical for business and society,” says Prakash.
According to Together, since its inception, the company has formed a strong team of AI professionals and partnered with decentralized infrastructure providers, open-source groups, and academic and corporate research labs to further its mission. Several projects, such as GPT-JT, OpenChatKit and RedPajama, have been released to the public to support hundreds of thousands of AI developers, the company says.
“This is just the beginning,” continues Prakash. “Our aim is to help create open models that outrival closed models and establish open-source as the default way to incorporate AI.”
While the details are scant, Together is apparently working on a cloud platform that leverages compute resources from a variety of locations (hence the term distributed) that can be used for model training. It’s like SETI@home for generative AI. (SETI@home was a volunteer computing project, launched in May 1999, that analyzed radio signals in search of signs of extraterrestrial intelligence.)
Currently, training LLMs requires a huge amount of compute resources (and money). Taking advantage of this technology for a specific industry, for example, would require training models on specialized datasets that aren’t used by (or available to) popular services like OpenAI’s ChatGPT. The process might be too expensive or time-consuming for an individual enterprise, but Together is working on two parts of the problem: the datasets and the infrastructure for training. Enterprises will have the option to participate in training by contributing datasets to models, and training itself will be distributed across a variety of cloud services rather than concentrated in one provider, such as Microsoft Azure OpenAI Service.
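Together hasn’t published implementation details, but the core idea behind data-parallel “distributed optimization” can be sketched in a few lines: each participant keeps its own data shard, computes a local gradient, and the participants average their gradients before every shared parameter update. The function names and the toy one-parameter model below are purely illustrative, not Together’s actual API.

```python
# Minimal sketch of data-parallel distributed training (illustrative only):
# several "workers" (clouds), each with a private data shard, jointly fit
# y = w * x by averaging their local gradients every step.
import random

def local_gradient(w, shard):
    """Mean-squared-error gradient for y = w * x on one worker's shard."""
    g = 0.0
    for x, y in shard:
        g += 2 * (w * x - y) * x
    return g / len(shard)

def distributed_sgd(shards, steps=200, lr=0.1):
    """Shared parameter w, updated with the average of all local gradients."""
    w = 0.0
    for _ in range(steps):
        grads = [local_gradient(w, s) for s in shards]  # runs in parallel in practice
        w -= lr * sum(grads) / len(grads)               # the "all-reduce" average
    return w

random.seed(0)
# Three "clouds", each holding its own shard of data drawn from y = 3x
shards = [[(x, 3.0 * x) for x in (random.uniform(-1, 1) for _ in range(50))]
          for _ in range(3)]
w = distributed_sgd(shards)
print(round(w, 2))  # recovers the true slope, 3.0
```

The key property is that raw data never leaves a worker; only gradients are exchanged, which is what makes this pattern attractive for enterprises that want to contribute datasets without centralizing them. Real systems (and presumably Together’s platform) add compression, fault tolerance and scheduling on top of this basic loop.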
In our view, this is a significant development for the edge computing ecosystem that highlights the impact generative AI is having on nearly every industry. We’ve covered how LLMs are running on edge devices for inference while developers use generative AI to code applications for IoT and edge computing. Democratizing access to the data and infrastructure needed for LLMs is an important step toward adoption of the technology, and Together’s efforts should be on the radar of infrastructure providers.
AI/ML | distributed computing | generative AI | LLM | Microsoft | OpenAI | Together | venture capital