Apple Thread - The most overrated technology brand?

What killed Steve Jobs?

  • Pancreatic Cancer: 21 votes (14.7%)
  • AIDS from having gay sex with Tim Cook: 122 votes (85.3%)
  • Total voters: 143

Smaug's Smokey Hole

no corona
kiwifarms.net
Basically the Apple tax premium over Windows machines has gone up significantly over time. And Apple hardware has gone down in build quality. Also, there are two ways the transition to Arm can go. One is that Apple half-asses it and puts an A12Z with LPDDR4 in the Macbook Pro. This will be cheap to make but the performance will be disappointing. The other is they go for something more over-engineered like WideIO or HBM. Performance might be good - better than Intel even - but they'll be very expensive machines. Or maybe they'll half-ass it and keep selling the machines at the current inflated prices.
How do you figure that HBM would make ARM beat Intel? A full stack would be massive overkill, if I'm not completely missing something about what they're doing. And if they're designing their own mobo/chipset they can just forgo Intel/PC standards and widen the bus by using more chips: eight 8Gb chips and they have a cheap(ish) 256-bit DDR4 bus on an 8GB laptop.
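For what it's worth, here's the back-of-envelope arithmetic behind that chip count (the chip width and density below are illustrative assumptions about what such a custom board might use, not anything Apple has announced):

```python
# Rough sketch: what a board with eight x32 DRAM chips gives you.
# Chip width and density are assumptions for illustration only.

chips         = 8
bits_per_chip = 32    # hypothetical x32 LPDDR/DDR parts
gbit_per_chip = 8     # 8 Gbit per chip = 1 GB each

bus_width_bits = chips * bits_per_chip        # total bus width
capacity_gb    = chips * gbit_per_chip / 8    # total capacity in GB

print(f"{bus_width_bits}-bit bus, {capacity_gb:.0f} GB total")
# -> 256-bit bus, 8 GB total
```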
 

Gustav Schuchardt

Trans exclusionary radical feminazi.
True & Honest Fan
kiwifarms.net
How do you figure that HBM would make ARM beat Intel? A full stack would be massive overkill, if I'm not completely missing something about what they're doing. And if they're designing their own mobo/chipset they can just forgo Intel/PC standards and widen the bus by using more chips: eight 8Gb chips and they have a cheap(ish) 256-bit DDR4 bus on an 8GB laptop.
Because I think x64 systems' main advantage over ARM64 ones is in memory subsystem performance. ARM systems are descended from mobile applications where you get narrow LPDDR and small caches. Intel systems are descended from desktop systems where you get wide DDR and large caches.

One advantage of high main-memory bandwidth, and indeed HBM, is that the penalty for cache misses is so much lower. If you can pull 1024 bits out of HBM in one read rather than 64-128 bits out of (LP)DDR, it matters a lot less when you get a cache miss.
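As a very rough sketch of how the interface width changes the data-transfer part of a line fill (the widths and transfer rates below are illustrative assumptions, and raw DRAM latency still dominates a single isolated miss; the win is bigger when lots of misses are in flight):

```python
# How many bus transfers it takes to move one 64-byte (512-bit) cache line,
# and how long that data phase lasts, for a few interface widths.
# Transfer rates are illustrative assumptions; CAS latency is ignored.

LINE_BITS = 64 * 8  # 512-bit cache line

def fill_time_ns(bus_bits, transfers_per_sec):
    beats = -(-LINE_BITS // bus_bits)   # ceil division: beats on the bus
    return beats / transfers_per_sec * 1e9

configs = {
    "64-bit LPDDR4X-4266":      (64,   4.266e9),
    "128-bit LPDDR4X-4266":     (128,  4.266e9),
    "1024-bit HBM2 @ 2.0 GT/s": (1024, 2.0e9),
}

for name, (bits, rate) in configs.items():
    print(f"{name:26s} -> {fill_time_ns(bits, rate):.2f} ns per line")
```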

Actually it turns out that Fujitsu makes an ARM chip with HBM

https://www.networkworld.com/article/3535812/can-fujitsu-beat-nvidia-in-the-hpc-race.html
https://archive.vn/v9wxl

In early benchmarks, Fujitsu claims to trounce the Xeon Platinum, Intel’s top of the line, and is competitive with Nvidia’s Volta line of HPC GPUs. However that’s not final silicon, and I always wait for third-party benchmarks.
Here's where the link goes

https://web.archive.org/web/2020041....gov/srt/conferences/Scala/2019/keynote_2.pdf

Unfortunately, all the benchmarks are HPC ones, which are probably very memory-intensive. They're also comparing it against an Intel® Xeon® Platinum 8168 Processor (archive), which is a Many Integrated Core design - lots of in-order cores. So it's pretty different from a desktop or server Intel chip. Edit: it's a Skylake chip, just with a lot of cores.

There's an argument that you need some 'big', i.e. out-of-order, cores for desktop stuff. Though it's worth pointing out that the A12Z is already a big/little design.

Intel had some very interesting results for Larrabee, which was also a MIC design - you can have a lot more in-order cores than out-of-order ones. Larrabee used a core derived from the P54C but with x64 and a 512-bit vector unit (the precursor to AVX-512) added. The Xeon Phi used Atom-derived cores, once again with AVX-512. Anyway, Intel had a paper where they were running Windows on some of the cores and using the rest as a GPU, and they got very impressive scaling of DirectX performance with the number of cores. Larrabee and the Xeon are both based on DDR memory, though the Xeon has 6 channels of it.

The A64FX is supposed to be a chip where you can run either CPU or GPU-style tasks on the cores, and you've got vast amounts of memory bandwidth compared to a traditional DDR design, even one with a wide DDR interface.

And, like I said, HBM3 is supposed to be cheaper than current HBM. HBM and WideIO are also lower power than DDR because you can drive low-voltage signals a very short distance over a very carefully tuned transmission line. One stack of HBM on the same package as the SoC, or WideIO memory mounted package-on-package on top of the SoC, is going to use less power and deliver more bandwidth than 6-8 channels of DDR.

Incidentally, if you look at Apple Silicon on Wikipedia you see this

https://en.wikipedia.org/wiki/Apple_Silicon#A_series_list (archive)

[screenshot of the A-series list from Wikipedia]


So one of the things they did for the A12Z in the iPad Pro and the Developer Transition Kit was to double the memory channel width from 64 bits to 128. Obviously WideIO or HBM would push this further.

And if you look at the patent and job vacancy I linked to earlier you can sort of infer that Apple are at least looking into this.

If I were them I'd go for WideIO for the ARM MacBook Air and HBM for the Mac Pro/MacBook Pro. Then again, Apple being Apple, they could probably just chuck an A12Z in a MacBook Air and sell it for more $ than the Intel version on the strength of battery life. Or do an A13Z with 256-bit quad-channel LPDDR4X.

In fact, DDR5 is supposed to offer 2x the performance of DDR4. So they could double the bus width and use DDR5 rather than DDR4 and get 4x the bandwidth. I still reckon WideIO for the Air and HBM for the Pro is the way to go, though. There's going to be a significant advantage in both bandwidth and power consumption.
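To spell out that scaling (the speed grades here are just nominal placeholders, not a prediction of what Apple would ship):

```python
# Peak bandwidth scales linearly with both bus width and data rate,
# so doubling each multiplies it by four. Speed grades are nominal examples.

def peak_gbs(bus_bits, transfers_per_sec):
    return bus_bits / 8 * transfers_per_sec / 1e9   # bytes/transfer * T/s

baseline = peak_gbs(128, 3200e6)   # 128-bit bus, DDR4-3200-class memory
doubled  = peak_gbs(256, 6400e6)   # 256-bit bus, DDR5-6400-class memory

print(f"{baseline:.1f} GB/s -> {doubled:.1f} GB/s ({doubled / baseline:.0f}x)")
# -> 51.2 GB/s -> 204.8 GB/s (4x)
```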
 
Last edited:

Smaug's Smokey Hole

no corona
kiwifarms.net
Because I think x64 systems' main advantage over ARM64 ones is in memory subsystem performance. ARM systems are descended from mobile applications where you get narrow LPDDR and small caches. Intel systems are descended from desktop systems where you get wide DDR and large caches.

One advantage of high main-memory bandwidth, and indeed HBM, is that the penalty for cache misses is so much lower. If you can pull 1024 bits out of HBM in one read rather than 64-128 bits out of (LP)DDR, it matters a lot less when you get a cache miss.

Actually it turns out that Fujitsu makes an ARM chip with HBM

https://www.networkworld.com/article/3535812/can-fujitsu-beat-nvidia-in-the-hpc-race.html
https://archive.vn/v9wxl



Here's where the link goes

https://web.archive.org/web/2020041....gov/srt/conferences/Scala/2019/keynote_2.pdf

Unfortunately, all the benchmarks are HPC ones, which are probably very memory-intensive. They're also comparing it against an Intel® Xeon® Platinum 8168 Processor (archive), which is a Many Integrated Core design - lots of in-order cores. So it's pretty different from a desktop or server Intel chip. There's an argument that you need some 'big', i.e. out-of-order, cores for desktop stuff. Though it's worth pointing out that the A12Z is already a big/little design.

Intel had some very interesting results for Larrabee, which was also a MIC design - you can have a lot more in-order cores than out-of-order ones. Larrabee used a core derived from the P54C but with x64 and a 512-bit vector unit (the precursor to AVX-512) added. The Xeon Phi used Atom-derived cores, once again with AVX-512. Anyway, Intel had a paper where they were running Windows on some of the cores and using the rest as a GPU, and they got very impressive scaling of DirectX performance with the number of cores. Larrabee and the Xeon are both based on DDR memory, though the Xeon has 6 channels of it.

The A64FX is supposed to be a chip where you can run either CPU or GPU-style tasks on the cores, and you've got vast amounts of memory bandwidth compared to a traditional DDR design, even one with a wide DDR interface.

And, like I said, HBM3 is supposed to be cheaper than current HBM. HBM and WideIO are also lower power than DDR because you can drive low-voltage signals a very short distance over a very carefully tuned transmission line. One stack of HBM on the same package as the SoC, or WideIO memory mounted package-on-package on top of the SoC, is going to use less power and deliver more bandwidth than 6-8 channels of DDR.

Incidentally, if you look at Apple Silicon on Wikipedia you see this

https://en.wikipedia.org/wiki/Apple_Silicon#A_series_list (archive)

[screenshot of the A-series list from Wikipedia]

So one of the things they did for the A12Z in the iPad Pro and the Developer Transition Kit was to double the memory channel width from 64 bits to 128. Obviously WideIO or HBM would push this further.

And if you look at the patent and job vacancy I linked to earlier you can sort of infer that Apple are at least looking into this.

If I were them I'd go for WideIO for the ARM MacBook Air and HBM for the Mac Pro/MacBook Pro. Then again, Apple being Apple, they could probably just chuck an A12Z in a MacBook Air and sell it for more $ than the Intel version on the strength of battery life. Or do an A13Z with 256-bit quad-channel LPDDR4X.

In fact, DDR5 is supposed to offer 2x the performance of DDR4. So they could double the bus width and use DDR5 rather than DDR4 and get 4x the bandwidth. I still reckon WideIO for the Air and HBM for the Pro is the way to go, though. There's going to be a significant advantage in both bandwidth and power consumption.
A 128-bit data channel might just mean they're using more channels, akin to dual channel on PCs. Because they don't have to use the slot standards of PCs/MacBooks there's no reason for them to just keep doing that: use 8x32-bit chips (not in a phone of course, because of space) and they have a 256-bit bus capable of 200+ GB/s, compared to the i9-9900K, which is specced at 41.6 GB/s (according to Intel, with 2666 memory).
Not that I think the ARM chip is starved for bandwidth to that degree.
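A quick sanity check on those figures (the speed grades are assumptions; on LPDDR4X-4266 a 256-bit bus lands well short of 200 GB/s, so that figure implies something faster):

```python
# Peak bandwidth for a few hypothetical configurations; speed grades are
# assumptions for illustration, not known Apple specs.

def peak_gbs(bus_bits, transfers_per_sec):
    return bus_bits / 8 * transfers_per_sec / 1e9

print(f"256-bit LPDDR4X-4266:         {peak_gbs(256, 4266e6):.1f} GB/s")  # ~136.5
print(f"256-bit @ 6400 MT/s:          {peak_gbs(256, 6400e6):.1f} GB/s")  # ~204.8
print(f"i9-9900K, 2x64-bit DDR4-2666: {peak_gbs(128, 2666e6):.1f} GB/s")
# The last one computes ~42.7 GB/s; Intel's spec sheet quotes 41.6 GB/s.
```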

The problem with HBM would be the different memory configurations, and that no one would be able to use it for anything. Using existing DRAM and going the console route with it seems like the easiest, most sensible, and cheapest option if they're going custom (and they're already soldering in their LPDDR4, so...).
 

LegoTugboat

True & Honest Fan
kiwifarms.net
Because I think x64 systems' main advantage over ARM64 ones is in memory subsystem performance. ARM systems are descended from mobile applications where you get narrow LPDDR and small caches. Intel systems are descended from desktop systems where you get wide DDR and large caches.

One advantage of high main-memory bandwidth, and indeed HBM, is that the penalty for cache misses is so much lower. If you can pull 1024 bits out of HBM in one read rather than 64-128 bits out of (LP)DDR, it matters a lot less when you get a cache miss.

Actually it turns out that Fujitsu makes an ARM chip with HBM

https://www.networkworld.com/article/3535812/can-fujitsu-beat-nvidia-in-the-hpc-race.html
https://archive.vn/v9wxl



Here's where the link goes

https://web.archive.org/web/2020041....gov/srt/conferences/Scala/2019/keynote_2.pdf

Unfortunately, all the benchmarks are HPC ones, which are probably very memory-intensive. They're also comparing it against an Intel® Xeon® Platinum 8168 Processor (archive), which is a Many Integrated Core design - lots of in-order cores. So it's pretty different from a desktop or server Intel chip. Edit: it's a Skylake chip, just with a lot of cores.

There's an argument that you need some 'big', i.e. out-of-order, cores for desktop stuff. Though it's worth pointing out that the A12Z is already a big/little design.

Intel had some very interesting results for Larrabee, which was also a MIC design - you can have a lot more in-order cores than out-of-order ones. Larrabee used a core derived from the P54C but with x64 and a 512-bit vector unit (the precursor to AVX-512) added. The Xeon Phi used Atom-derived cores, once again with AVX-512. Anyway, Intel had a paper where they were running Windows on some of the cores and using the rest as a GPU, and they got very impressive scaling of DirectX performance with the number of cores. Larrabee and the Xeon are both based on DDR memory, though the Xeon has 6 channels of it.

The A64FX is supposed to be a chip where you can run either CPU or GPU-style tasks on the cores, and you've got vast amounts of memory bandwidth compared to a traditional DDR design, even one with a wide DDR interface.

And, like I said, HBM3 is supposed to be cheaper than current HBM. HBM and WideIO are also lower power than DDR because you can drive low-voltage signals a very short distance over a very carefully tuned transmission line. One stack of HBM on the same package as the SoC, or WideIO memory mounted package-on-package on top of the SoC, is going to use less power and deliver more bandwidth than 6-8 channels of DDR.

Incidentally, if you look at Apple Silicon on Wikipedia you see this

https://en.wikipedia.org/wiki/Apple_Silicon#A_series_list (archive)

[screenshot of the A-series list from Wikipedia]

So one of the things they did for the A12Z in the iPad Pro and the Developer Transition Kit was to double the memory channel width from 64 bits to 128. Obviously WideIO or HBM would push this further.

And if you look at the patent and job vacancy I linked to earlier you can sort of infer that Apple are at least looking into this.

If I were them I'd go for WideIO for the ARM MacBook Air and HBM for the Mac Pro/MacBook Pro. Then again, Apple being Apple, they could probably just chuck an A12Z in a MacBook Air and sell it for more $ than the Intel version on the strength of battery life. Or do an A13Z with 256-bit quad-channel LPDDR4X.

In fact, DDR5 is supposed to offer 2x the performance of DDR4. So they could double the bus width and use DDR5 rather than DDR4 and get 4x the bandwidth. I still reckon WideIO for the Air and HBM for the Pro is the way to go, though. There's going to be a significant advantage in both bandwidth and power consumption.
Well, again, if you believe the benchmarks from earlier, the A12Z Minis are supposed to be roughly on par with an iPad Pro (7% slower, but later benchmarks are on par).

Some information did come out about those benchmarks: Geekbench was one of those who got a devkit, and the initial results were from a straight conversion from Intel to ARM with no optimisation done, so I'd tentatively assume the boost came from Geekbench being worked on.

For Rise of the Tomb Raider, there were some people on other forums saying it was a mobile version, which is not actually a thing, plus if it were I doubt they'd have put it on the Mac App Store just to do it.
On Intel UHD 630 (2018 Mac Mini), it's playable, but I'm seeing varying results from looking it up, from 24 fps on low to 20-30 fps on medium (I doubt the latter, tbh).

It's hard to say from WWDC, but most punters were saying 24-30 fps on medium on the dev kit Mini, which is already interesting. Granted, I don't know the difference between low and medium for RoTR, but I'd extremely tentatively say a 20% boost to GPU.
 

Least Concern

Pretend I have a waifu avatar like everyone else
kiwifarms.net
Some information did come out about those benchmarks: Geekbench was one of those who got a devkit, and the initial results were from a straight conversion from Intel to ARM with no optimisation done, so I'd tentatively assume the boost came from Geekbench being worked on.
I heard the initial GeekBench results were from the Intel build running via Rosetta.
On Intel UHD 630 (2018 Mac Mini), it's playable, but I'm seeing varying results from looking it up, from 24 fps on low to 20-30 fps on medium (I doubt the latter, tbh).
In my experience trying to game on the aging Intel iGPU in my current MBP, 20-30 fps on medium sounds about right. Intel GPUs from the last half decade or so are a lot more usable than they used to be, and you can get playable performance out of not-too-new AAA games if you're willing to crank down the settings.
 

Gustav Schuchardt

Trans exclusionary radical feminazi.
True & Honest Fan
kiwifarms.net
I heard the initial GeekBench results were from the Intel build running via Rosetta.
This raises another interesting problem. Back when Qualcomm and Microsoft were pushing Windows on ARM, Intel threatened to sue them if they emulated patented instructions, which means SSE and anything later. Now Windows on ARM will only emulate 32-bit x86 code, not 64-bit x64.

https://www.extremetech.com/computi...ly-threatens-microsoft-qualcomm-x86-emulation (archive)

Intel carefully protects its x86 innovations, and we do not widely license others to use them… In the early days of our microprocessor business, Intel needed to enforce its patent rights against various companies including United Microelectronics Corporation, Advanced Micro Devices, Cyrix Corporation, Chips and Technologies, Via Technologies, and, most recently, Transmeta Corporation. Enforcement actions have been unnecessary in recent years because other companies have respected Intel’s intellectual property rights.

However, there have been reports that some companies may try to emulate Intel’s proprietary x86 ISA without Intel’s authorization. Emulation is not a new technology, and Transmeta was notably the last company to claim to have produced a compatible x86 processor using emulation (“code morphing”) techniques. Intel enforced patents relating to SIMD instruction set enhancements against Transmeta’s x86 implementation even though it used emulation…

Only time will tell if new attempts to emulate Intel’s x86 ISA will meet a different fate. Intel welcomes lawful competition, and we are confident that Intel’s microprocessors, which have been specifically optimized to implement Intel’s x86 ISA for almost four decades, will deliver amazing experiences, consistency across applications, and a full breadth of consumer offerings, full manageability and IT integration for the enterprise. However, we do not welcome unlawful infringement of our patents, and we fully expect other companies to continue to respect Intel’s intellectual property rights.


US patents last 20 years, so SSE2 and SSE3 are still patented. In x64, SSE is part of the ABI - you need it for floating-point and SIMD. And Intel has said that any attempt to execute those instructions - whether in hardware, via instruction-by-instruction emulation, or by JITting them to native SIMD instructions and executing those (which is what Transmeta did) - means you're violating their patents.
 

Least Concern

Pretend I have a waifu avatar like everyone else
kiwifarms.net
This raises another interesting problem. Back when Qualcomm and Microsoft were pushing Windows on ARM, Intel threatened to sue them if they emulated patented instructions, which means SSE and anything later. Now Windows on ARM will only emulate 32-bit x86 code, not 64-bit x64.
That's interesting. How does AMD get away with it? Are they having to pay licensing fees to their rival to be able to implement this stuff in their chips? (Pardon if that's a dumb question; I'm more of a software person and not as savvy about the particulars of hardware.)

Anyway, if it's just a matter of forking over some licensing fees to be able to implement this stuff, I'm sure Apple can scrounge up enough from couches and ashtrays.
 

Vecr

"nanoposts with 90° spatial rotational symmetries"
kiwifarms.net
That's interesting. How does AMD get away with it? Are they having to pay licensing fees to their rival to be able to implement this stuff in their chips? (Pardon if that's a dumb question; I'm more of a software person and not as savvy about the particulars of hardware.)

Anyway, if it's just a matter of forking over some licensing fees to be able to implement this stuff, I'm sure Apple can scrounge up enough from couches and ashtrays.
They used to be Intel's second source for x86 CPUs. I'm not sure exactly what the specifics of their agreement are - probably secret.
 

Gustav Schuchardt

Trans exclusionary radical feminazi.
True & Honest Fan
kiwifarms.net
That's interesting. How does AMD get away with it? Are they having to pay licensing fees to their rival to be able to implement this stuff in their chips? (Pardon if that's a dumb question; I'm more of a software person and not as savvy about the particulars of hardware.)
They used to be Intel's second source for x86 CPUs. I'm not sure exactly what the specifics of their agreement are - probably secret.
Intel and AMD had a lengthy lawsuit and ended up cross-licensing patents. So AMD paid Intel a fee if it used a technology Intel patented. However, the reverse was not true - when Intel implemented AMD's x64 architecture it did not pay a license fee, i.e. the fees went one way.

E.g. from here

https://corporate.findlaw.com/contr...agreement-advanced-micro-devices-inc-and.html
https://archive.vn/wip/OJqfh

4. ROYALTY PAYMENTS BY AMD

4.1. AMD agrees to pay INTEL a royalty on the Net Revenue from sales and
other dispositions of Royalty-Bearing Units as a percentage of such
Net Revenue according to the following schedule:
However, it's possible that AMD managed to get out of this later by suing Intel again and getting $1.25 billion and, it is rumored, an end to royalty payments.

https://www.guru3d.com/news-story/amd-no-longer-pays-for-x86-license.html
https://archive.vn/wip/c1mzA

Looks like AMD certainly got a good deal with Intel. Last week we already reported that AMD and Intel settled -- Intel will pay AMD $1.25 billion to squash a legal battle over Intel's sales tactics.

As it now seems there's more to the deal than we know. Both Intel and AMD agreed to a cross-license platform that is now royalty free. AMD doesn't have to pay anymore for the x86 license / patents owned by Intel and vice versa this is likely the same -- Intel doesn't have to cough up dough for the 64-bit patents owned by AMD.
Of course, with this sort of stuff it's hard to know for sure. I do remember that when Intel decided to implement AMD64, because Itanium was such a disappointment, most industry observers thought the royalty payments only went one way.

https://www.cnet.com/news/intel-to-pay-amd-1-25-billion-in-antitrust-settlement/
https://archive.vn/3Wjks

The cross-license agreement has been updated to reflect AMD's move to spin off its processor manufacturing business into a separate company, Globalfoundries, which currently is an AMD subsidiary. Under the updated agreement, AMD will be able to operate as a "fabless" processor company--one that relies on others to build its chips. In addition, Globalfoundries "is free to operate independently and go after third-party business without issues," Prairie said.

Another change: in the earlier patent cross-license agreement, AMD had to pay Intel royalties. Now neither company makes payments, Prairie said.
This was a really good deal - AMD stopped paying royalties and got to go fabless and build at either TSMC or GlobalFoundries. It's hard to see Ryzen being as competitive if they'd had to pay Intel royalties and could only build the chips in-house.

Incidentally, this sort of thing makes me think Apple should approach AMD for chips for MacBook Pros if they release another x64 one before the transition to ARM. I bet they'd get a good deal, until AMD realizes - like Intel, Motorola, IBM and every single company making software for Apple platforms did - that Apple is glamorous but psychotic and more trouble than their market share is worth.

Funny thing is, Intel and AMD collaborated on Kaby Lake G, i.e. an Intel CPU and an AMD GPU in one package, and it looked like it was designed for the 15-inch MacBook Pro, which had Intel CPUs and AMD graphics. Apple declined to use it, though, and it ended up in a few NUC devices before getting EOL'd. (archive)
 
Last edited:

Gustav Schuchardt

Trans exclusionary radical feminazi.
True & Honest Fan
kiwifarms.net
Here's an interesting article about Intel's Foveros packaging, used in the Lakefield chip. It's a stacked-die design where the dies communicate via through-silicon vias (TSVs).

https://www.anandtech.com/show/15877/intel-hybrid-cpu-lakefield-all-you-need-to-know/2
https://archive.vn/V2eOU

It has this slide on the cost of communicating a bit for various interconnects

[slide: energy per bit for on-board, on-package, and on-die interconnects]


I.e. it's 20pJ per bit for discrete DDR. It's 1pJ for on-package memory - HBM, Intel's EMIB or AMD's compute chiplets talking to their IO die. And it's 0.1pJ on-die - e.g. a cache talking to a CPU on the same physical piece of silicon. As the article puts it:

Here’s a slide from Intel’s 2018 Hot Chips talk about new data transfer and connectivity suggestions. On the left is the ‘on-board’ power transfer through a PCB, which runs at 20 pJ/bit. In the middle is on-package data transfer, akin to what AMD did with 1st Gen EPYC’s numbers, around 1-5 pJ/bit (depends on the technique), and then we get on-silicon data movement, which is more 0.1 pJ/bit. Intel’s Foveros die-to-die interconnect is most like the latter.
So a TSV design can get you between 20x and 200x the power saving per bit compared to DDR in a separate package (20 pJ/bit versus 1 pJ/bit is 20x; versus 0.1 pJ/bit is 200x). And you can get wider buses and take up less space on the PCB.
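To put those pJ/bit numbers in more concrete terms, here's a rough sketch of what it costs just to move data at each tier (the 100 GB/s workload is an arbitrary example, not a measured figure):

```python
# Power needed to sustain a given bandwidth at each interconnect tier,
# using the pJ/bit figures from the slide. The workload is an arbitrary
# example chosen for illustration.

PJ_PER_BIT = {
    "on-board DDR (via PCB)": 20.0,
    "on-package (HBM/EMIB)":   1.0,
    "on-die":                  0.1,
}

bandwidth_gbs = 100                          # example sustained bandwidth
bits_per_sec  = bandwidth_gbs * 1e9 * 8

for tier, pj in PJ_PER_BIT.items():
    watts = bits_per_sec * pj * 1e-12        # pJ -> J
    print(f"{tier:24s}: {watts:5.2f} W to move {bandwidth_gbs} GB/s")
# -> roughly 16 W, 0.8 W and 0.08 W respectively
```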

There's an interesting NVidia presentation, which I can't find right now, where they say that 'computation is free, communication is expensive and memory access is really expensive'.

https://aacbb-workshop.github.io/slides/2019/HE_Biology_0219_A.pdf (attached)

[slides from the presentation showing operation energy costs]


Here's the Horowitz talk.

https://www.youtube.com/watch?v=7gGnRGko3go



Bringing down those stonkingly humongous numbers for DRAM access is worth doing, even if TSVs and HBM/EMIB-style die-to-die interconnects are tricky to get working and will be expensive.

That's not to say that Foveros/Lakefield will be commercially successful - it's a 1+4 big/little design with no AVX, which seems a bit low end. Still, the technologies it contains - the stacked dies and the first big/little x64 design - are interesting. Also, I think stacking the memory on top of the CPU, or including it on the package, is something that could be used for very high-performance but power-efficient notebook/desktop/server chips.

Incidentally, that NVidia slide with the operation costs really makes you think. Back in Ye Olde Days you'd replace a complex calculation with a lookup table and things would get faster. Now it seems like we're in a world where the reverse is true.
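A rough way to see that, with ballpark energy figures in the spirit of the Horowitz talk (the exact numbers below are assumptions; only the orders of magnitude matter):

```python
# Ballpark energy comparison: recomputing a value vs. looking it up in a
# table that misses the caches and goes to DRAM. Figures are rough
# order-of-magnitude assumptions, not measured values.

ENERGY_PJ = {
    "fp32 multiply-add":  5,      # a few pJ on a modern node
    "on-chip SRAM read":  10,     # small cache hit
    "off-chip DRAM read": 2000,   # ~2 nJ territory
}

ops_to_recompute = 10   # say the 'complex calculation' is ~10 FMAs

print(f"recompute (~10 FMAs):  {ops_to_recompute * ENERGY_PJ['fp32 multiply-add']} pJ")
print(f"lookup, cache hit:     {ENERGY_PJ['on-chip SRAM read']} pJ")
print(f"lookup, miss to DRAM:  {ENERGY_PJ['off-chip DRAM read']} pJ")
```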
 


Last edited:

LegoTugboat

True & Honest Fan
kiwifarms.net
Okay, boys. Couple of interesting happenings.

1. Due to yield issues Intel has delayed 7 nm to late 2022-early 2023, which puts them 12 months behind their revised timeline.

2. Native Geekbench results have come in for the Dev Kit Minis, and they're interesting: single-core of 1098 vs 800 and multi-core of 4555 vs 2600, which puts them at around the mark of the i5 2018 Mac Mini.
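Just dividing the figures quoted above (assuming the 800/2600 numbers are the earlier translated runs):

```python
# Ratio of the native devkit scores to the earlier figures quoted above.
single_native, single_old = 1098, 800
multi_native,  multi_old  = 4555, 2600

print(f"single-core: {single_native / single_old:.2f}x")  # ~1.37x
print(f"multi-core:  {multi_native / multi_old:.2f}x")    # ~1.75x
```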
 

LegoTugboat

True & Honest Fan
kiwifarms.net
Huh. I went to Geekbench's website to read up on how their benchmarking works and this is the first image they show:
[screenshot of a Geekbench 5 result for an iMac]
That caught me a bit off guard, since I had a squiz at that model iMac and the numbers I found are slightly higher for it (1242 and 8289), but yeah, that's a screenshot of a result from the original Geekbench 5. Had a looksee to see what would cause that, and it seems they optimised it a bit with 5.1, resulting in slightly higher scores across the board.

As far as I know, Geekbench uses the average benchmark for a submitted model type after cutting out the extreme oddities. I've found a few others of the same model that show higher results, and a few that show comically low results (one had 500-ish single-core performance, which would be on par with a 2013 HP 500-056 microtower).
 

Smaug's Smokey Hole

no corona
kiwifarms.net
That caught me a bit off guard, since I had a squiz at that model iMac and the numbers I found are slightly higher for it (1242 and 8289), but yeah, that's a screenshot of a result from the original Geekbench 5. Had a looksee to see what would cause that, and it seems they optimised it a bit with 5.1, resulting in slightly higher scores across the board.

As far as I know, Geekbench uses the average benchmark for a submitted model type after cutting out the extreme oddities. I've found a few others of the same model that show higher results, and a few that show comically low results (one had 500-ish single-core performance, which would be on par with a 2013 HP 500-056 microtower).
Yeah, the devkit isn't loaded with an unknown number of user-installed background processes creating varying results. My instinct with the numbers you posted was to treat them like the mystery metrics that Intel/AMD/Nvidia/Apple always show at conferences for their new products. If they're real, that's amazing.
 