IBM is releasing the latest version of its mainframe hardware, with new capabilities meant to accelerate AI adoption.
The hardware and consulting company on Monday announced IBM z17, the latest version of its mainframe computer hardware. This fully encrypted mainframe is powered by an IBM Telum II processor and is designed for more than 250 AI use cases, the company says, including AI agents and generative AI.
Mainframes might seem like old hat, but they’re used by 71% of Fortune 500 companies today, according to one source. In 2024, the mainframe market was worth an estimated $5.3 billion, per consulting firm Market Research Future.
The z17 can process 450 billion inference operations in a day, a 50% increase over its predecessor, the IBM z16, which was released in 2022 and ran on the company’s original Telum processor. The system is designed to integrate fully with other hardware, software, and open-source tools.
Tina Tarquinio, VP of product management and design for IBM Z, told TechCrunch that this mainframe upgrade has been in the works for five years — well before the current AI frenzy that started with the release of OpenAI’s ChatGPT in November 2022.
IBM spent more than 2,000 research hours getting feedback from over 100 customers as it built the z17, Tarquinio said. She finds it interesting that, five years later, the feedback they gathered aligned with where the market ended up heading.
“It has been wild knowing that we’re introducing an AI accelerator, and then seeing, especially in the later half of 2022, all of the changes in the industry regarding AI,” Tarquinio told TechCrunch. “It’s been really exciting. I think the biggest point has been [that] we don’t know what we don’t know about what’s coming, right? So the possibilities are really unlimited in terms of what AI can help us do.”
The z17 is designed to adapt to wherever the AI market heads, Tarquinio said. The mainframe will support 48 IBM Spyre AI accelerator chips at launch, with plans to bring that number up to 96 within 12 months.
“We are purposely building in headroom,” Tarquinio said. “We’re purposely building in AI agility. So as new models are introduced, [we’re] making sure that we’ve built in the headroom for bigger, larger models — models that maybe need more local memory to talk to each other. We’ve built in that because we know it’s really the approach that will change, right? The new models will come and go.”
Tarquinio said that one of the highlights of this latest hardware (although she joked it was like being asked to pick her favorite child) is that the z17 is more energy-efficient than its predecessor and, IBM claims, than competing systems, too.
“On-chip, we’re increasing the AI acceleration by seven and a half times, but that’s five and a half times less energy than you would need to do, like, multi-model on another type of accelerator or platform in the industry,” Tarquinio said.
The z17 mainframes will become generally available on June 8.