Comments on: With MTIA v2 Chip, Meta Can Do AI Inference, But Not Training
https://www.nextplatform.com/2024/04/10/with-mtia-v2-chip-meta-can-do-ai-training-as-well-as-inference/

By: E
https://www.nextplatform.com/2024/04/10/with-mtia-v2-chip-meta-can-do-ai-training-as-well-as-inference/#comment-223053
Thu, 11 Apr 2024 22:09:38 +0000

Interesting to note that, outside of the addition of sparsity to certain datatypes, the MTIA v2 (90 W) actually _decreases_ in efficiency as measured in TOPS/watt versus the MTIA v1 (25 W).
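
A rough back-of-the-envelope check of that claim, using the commonly reported figures (MTIA v1: roughly 102.4 dense INT8 TOPS at 25 W; MTIA v2: roughly 354 dense / 708 sparse INT8 TOPS at 90 W). Treat these numbers as assumptions drawn from public coverage, not spec-sheet quotes:

# Perf-per-watt sketch; the TOPS and TDP figures below are assumed
# from public reporting, not official Meta spec sheets.
chips = {
    "MTIA v1 (dense INT8)": {"tops": 102.4, "watts": 25},
    "MTIA v2 (dense INT8)": {"tops": 354.0, "watts": 90},
    "MTIA v2 (sparse INT8)": {"tops": 708.0, "watts": 90},
}

for name, c in chips.items():
    # TOPS divided by TDP gives a crude efficiency figure of merit.
    print(f"{name}: {c['tops'] / c['watts']:.2f} TOPS/W")

# Approximate output:
#   MTIA v1 (dense INT8): 4.10 TOPS/W
#   MTIA v2 (dense INT8): 3.93 TOPS/W
#   MTIA v2 (sparse INT8): 7.87 TOPS/W

Under those assumed figures, dense INT8 efficiency does dip slightly from v1 to v2, and only the sparse datatypes pull ahead.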

By: HuMo
https://www.nextplatform.com/2024/04/10/with-mtia-v2-chip-meta-can-do-ai-training-as-well-as-inference/#comment-223034
Thu, 11 Apr 2024 13:10:55 +0000

Ah, Triton, the cerulean merman, cacophonous conch-bugling son of that seafaring member of the twelve cheeky Olympians, Poseidon V3 of the Neoverse! A rebel to be sure, mythologically tridented as Triton-C, Triton-IR, and Triton-JIT, with threads like those of marsh frogs. Did v1’s RISC-V PE impudently challenge MTIA v2’s Triton to a contest of musical genAI training, and then drown architecturally in punishment, per Meta’s poetic Virgil Aeneid? Inquisition minds … q^8
