Ok, real talk, so don’t judge - I was testing M1 Max vs. M1 Ultra GPU performance on the Mac Studio and… I just wasn’t seeing the difference I expected. Then I realized I was testing GFXBench Aztec at 1440p on-screen, and… that just wasn’t enough load. Like… testing towing capacity between a Camry and a… Tacoma… with all the weight of a MacBook Air. I was barely putting a dent in the Max, never mind the Ultra. I needed to test at 4K off-screen in order to actually see what the Ultra could do.

Then I was trying to decipher Apple’s M1 Ultra vs. RTX 3090 chart. Like, what were they even saying? Performance per watt up to max watt for the Ultra? Is that like… gas mileage with a 100 mile-per-hour speed limit? Couldn’t the Nvidia card just keep burning fuel to 300? Did I care more about the peak number, or about being able to actually fit into a small enclosure without melting into Super Mario lava? Until I realized it was actually only measuring power at the GPU, not power through the whole system, which varies radically between Apple’s SoC-based approach and Nvidia’s discrete card-in-a-slot, everything consumed to get that gas into the tank as well.

And it gets even weirder. Because I was looking at Shadow of the Tomb Raider tests, which run cross-platform but run as x86 through Rosetta translation on M1 Macs. They do target Apple’s low-level Metal API, which can theoretically perform as well as or even better on M1 than it does on Intel Macs, but that depends entirely on the quality of the API implementation, and how well it’s optimized for Mac compared to Windows… and wow, look at all those frames Nvidia gets. But then Topaz Video Enhance AI frames on the M1 Ultra, when you take the limiters off… and… Honestly, so what if the M1 Ultra does slightly better than the 3090 on Aztec 4K off-screen and slightly worse on Wildlife Unlimited? No one who wants CUDA cores or high-end, ray-traced gaming really cares how the M1 Ultra compares anyway. And anyone who just wants a massive GPU with massive RAM on their Mac… can’t even use Nvidia… much less get it there. So, while UltraFusing two separate 32-core GPU blocks into one massive 64-core GPU Metal target with up to 128GB of RAM and 800GB/s of memory bandwidth is an unprecedented table slap of silicon nerdery, it doesn’t really change anything fundamental about either ecosystem.

It’s just chum in the headlines and comments sections for people who don’t really get how benchmarking works anyway. Mostly because so many benchmarks now have been wrapped up into neat little apps or games that literally anyone, including, terrifyingly - me! - can just download, run once, and done. They spit out numbers, sometimes highly relative and abstract, with almost zero time and effort, all pretty for posting. But they don’t really tell you how to run them, what the numbers actually mean, or give you any of the context needed to interpret them. They’ve become pop culture, or what I’ve been calling Benchmark LARP, live action role-play.
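For what it’s worth, the load gap between those two GFXBench settings is simple arithmetic: 4K off-screen renders 2.25× the pixels per frame of 1440p, which is exactly why the lighter run never stressed the bigger GPU. A minimal sketch of that math (these are just the standard resolutions; nothing here is measured benchmark data):

```python
# Compare per-frame pixel load of two test resolutions -
# 1440p on-screen vs. 4K off-screen - to show why the lighter
# setting under-loads a big GPU like the M1 Ultra's.

def pixels(width: int, height: int) -> int:
    """Pixels rendered per frame at a given resolution."""
    return width * height

qhd = pixels(2560, 1440)  # 1440p run
uhd = pixels(3840, 2160)  # 4K off-screen run

print(f"1440p: {qhd:,} px/frame")                    # 3,686,400
print(f"4K:    {uhd:,} px/frame")                    # 8,294,400
print(f"4K is {uhd / qhd:.2f}x the per-frame load")  # 2.25x
```

Off-screen mode also skips the display entirely, so the run isn’t capped at the panel’s refresh rate - another reason it separates the Max and Ultra where the on-screen test couldn’t.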