Despite advocating for a halt to AI training across the industry, Elon Musk has reportedly launched a major artificial intelligence project within Twitter. The company has already bought around 10,000 GPUs and recruited AI talent from DeepMind for a project involving a large language model (LLM), reports Business Insider.
A source with knowledge of the matter said that Musk's AI project is still in its early stages. However, according to another person, the purchase of a significant amount of additional computational power shows the company's commitment to moving the project forward. The exact purpose of the generative AI effort remains unclear, but potential applications include improving search functionality or generating targeted advertising content.
At this point, it is unknown exactly what hardware Twitter purchased. However, the company has reportedly spent hundreds of millions of dollars on these compute GPUs despite its ongoing financial problems, which Musk has described as an 'untenable financial position'. The GPUs are expected to be deployed in one of Twitter's two remaining data centers, with Atlanta being the most likely destination. Interestingly, Musk shut down Twitter's primary data center in Sacramento in late December, which markedly reduced the company's compute capabilities.
In addition to purchasing GPU hardware for its generative AI project, Twitter is hiring additional engineers. Earlier this year, the company recruited AI research engineers Igor Babushkin and Manuel Kroiss from DeepMind, a subsidiary of Alphabet. Musk has been actively seeking talent in the AI industry since at least February to compete with OpenAI's ChatGPT.
OpenAI used Nvidia's A100 GPUs to train its ChatGPT bot and continues to use these machines to run it. Nvidia has since launched the A100's successor, the H100 compute GPU, which is many times faster at roughly the same power. Twitter will likely use Nvidia's Hopper H100 or similar hardware for its AI projects, though we're just speculating here. Given that the company hasn't yet determined what its AI project will be used for, it's hard to estimate how many Hopper GPUs it might need.
When large companies like Twitter buy hardware, they receive special rates because they purchase thousands of units. Meanwhile, Nvidia's H100 boards can cost north of $10,000 per unit when purchased separately from retailers like CDW, which gives an idea of how much the company may have spent on hardware for its AI initiatives.