Apple is adding “one last chip” to the M1 processor family. The M1 Ultra is a new design that uses “UltraFusion” technology to strap two M1 Max chips together, resulting in a huge processor that offers 16 high-performance CPU cores, four efficiency cores, a 64-core integrated GPU, and support for up to 128GB of RAM.
It looks like Apple is using a chiplet-based design for the M1 Ultra, just like AMD is doing for many of its Ryzen chips. A chiplet-based approach, as we’ve written, uses multiple silicon dies to make larger chips and can result in better yields since you don’t need to throw a whole monolithic 20-core chip out if a couple of cores have defects that keep them from working.
This is a beast of a chip, and it fits in this neat little new Mac, called the Mac Studio. Apple also unveiled a new, more “affordable” monitor, but I’m not sure a monitor that maxes out at 60Hz in 2022 is worth €1779.
My main concern is the RAM limitation on an otherwise ‘pro’ machine.
Price, compatibility, and real-world performance aside, these chips seem to max out at 2–4GB of RAM per core, which is genuinely limiting if you do high-performance computing.
We’ll still need to see how the architecture is designed for the actual Mac Pro replacement. Without expandable RAM (or PCIe support), it will be rather limited to the tasks Apple lists. I think that is why the product page focuses so heavily on rendering and multimedia.
Apple mainly targets prosumer/content-creation markets with their desktops. I don’t think there are that many high-performance/engineering apps on the macOS platform anymore.
Sure, I see them focusing on content creation.
However if I wanted all those cores, I would want some RAM with them.
Even content creation will need it soon anyway. An uncompressed 4K frame is about 32MB; an 8K one is about 128MB. At 2GB per core, each core can only hold 16 8K frames at a time. This may soon become a limiting factor even for those workloads, though I must admit the RAM bandwidth is impressive.
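Quick back-of-the-envelope math behind those figures (assuming uncompressed 8-bit RGBA, 4 bytes per pixel; real intermediate formats and bit depths will differ):

```python
# Rough frame-size math, assuming uncompressed 8-bit RGBA (4 bytes per pixel).
# Real intermediate codecs and bit depths will change these numbers.
def frame_mb(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / 2**20

uhd_4k = frame_mb(3840, 2160)   # ~31.6 MB
uhd_8k = frame_mb(7680, 4320)   # ~126.6 MB
ram_per_core_mb = 2 * 1024      # 2 GB per core

print(f"4K frame: {uhd_4k:.1f} MB, 8K frame: {uhd_8k:.1f} MB")
print(f"8K frames per 2 GB: {ram_per_core_mb // uhd_8k:.0f}")   # ~16
```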
Well, the chip has several video encode/decode engines, which are faster than doing video processing on a scalar core. Plus, nobody works with video one raw frame at a time; that’s what the compression codecs are for.
Also, there is a 128GB option… which I think is a decent memory size for this type of system.
I was thinking about video production and CGI effects. Things like Adobe Premiere or After Effects.
Though that software is (was?) notoriously single-threaded for many effects. And most current video is still 4K.
That actually makes the “Studio” good for today’s studio-relevant work, though. It seems Apple is producing one chip for each consumer segment (A-series, M1, M1 Pro, M1 Max, M1 Ultra).
I wonder what the response would be from Puget Systems:
https://www.pugetsystems.com/recommended/Recommended-Systems-for-Adobe-After-Effects-144
sukru,
Yes, it probably is limiting for holding large amounts of data in RAM. On the other hand, if the software can stream data to/from storage efficiently, then it may not matter as much that all the data fits in RAM at the same time. This adds latency from storage plus compression/decompression, but I guess that is probably normal for larger projects anyway.
I’m not sure whether typical video editors keep the raw data cached in RAM or immediately compress it. Do you know? There could be quality implications for lossy codecs that perform frequent compression & decompression during editing. 4K content creators apparently use lossy compression for intermediate formats (it is faster and saves more space).
https://www.4kshooters.net/2014/09/06/which-codec-is-most-suitable-for-your-4k-workflow/
Another reason to have more RAM is that it spares the limited write endurance of the NAND flash, which may need to be replaced more frequently when you regularly work with huge memory-mapped files.
Another consideration that comes to mind is that this is a NUMA design that doesn’t scale well with certain types of SMP workloads, and adding more cores and memory this way can actually increase the bottlenecks for naive and parallel-hard SMP algorithms. That’s why AMD provided a “game mode” to disable cores on their high-end CPUs. I don’t know how Apple schedules SMP software across NUMA boundaries, but unoptimized software may actually perform worse across more cores and memory boundaries.
I do wonder how difficult it will be to cool two M1 Max dies given their very close proximity to each other. Using discrete components would have allowed for better heat dissipation. I would have preferred seeing a higher-performance dedicated GPU with its own dedicated memory. Nevertheless, this setup with two M1 Max dies is interesting because you could set up one to behave sort of like a dedicated GPU while the other runs CPU jobs, without shared-memory overhead between them.
Alfman,
I would expect at least some media editing applications to be highly optimized for this setup, even making good use of the NUMA split like you mentioned. That is the advantage Apple has, since there are only a few configurations to target.
Depending on how you distribute the work, different kinds of problems will show up.
Do we split by time, or by space? If we divide each frame into sub-regions, then motion becomes difficult. If we divide the task into even time periods, then we get parallelism for free, but we are forced to introduce more key-frames and lose some compression potential.
Also, some effects might be suited to one mode of parallelism, while others would go with the opposite choice. Add in NAND data streaming as you said, decompression, and blending of multiple sources, and it becomes really fun.
Though it requires a firm grasp of signal processing and specialization in specific parts of computer graphics, and that is not one of my strong points…
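A rough sketch of the split-by-time approach, just to make the trade-off concrete (encode_segment is a hypothetical stand-in, not any real API; the only assumption is that each segment starts at its own key-frame so segments are independent):

```python
# Sketch: split a clip into fixed-length time segments and process them in parallel.
# encode_segment() is a hypothetical stand-in for the real per-segment work
# (decode, apply effects, re-encode). Each segment must start at its own key-frame,
# which is the compression cost mentioned above.
from concurrent.futures import ProcessPoolExecutor

def split_by_time(total_frames, segment_len):
    return [(start, min(start + segment_len, total_frames))
            for start in range(0, total_frames, segment_len)]

def encode_segment(segment):
    start, end = segment
    # ... decode frames start..end, apply effects, encode ...
    return f"chunk_{start:06d}.bin"

if __name__ == "__main__":
    segments = split_by_time(total_frames=24 * 600, segment_len=240)  # 10 min @ 24 fps
    with ProcessPoolExecutor() as pool:
        chunks = list(pool.map(encode_segment, segments))
    # Concatenate chunks in order afterwards; more segments mean more key-frames.
```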
sukru,
I agree.
I think the easiest form of parallelism to use generically would be to break up the work along the timeline. This way threads need very little IPC, even for inter-frame analysis. But this approach to parallelism doesn’t really work for interactive tasks.
We can divide into sub-regions, but obviously not all algorithms are suited for this. Things like compression algorithms may need to exchange data with neighbors, maybe once or more per pixel. Rasterization techniques like screen-space reflection need to be coordinated across threads/regions. Even a rasterization algorithm that doesn’t need to communicate with neighbors may nevertheless divide the work inefficiently, because threads end up independently crawling the same geometry, culling surfaces, and calculating vertex properties redundantly compared to what a larger region could have done with fewer overall calculations. Some algorithms will quickly reach diminishing returns with more cores.
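To make the sub-region case concrete, here’s a minimal sketch of the usual halo/overlap trick: give each tile a border of duplicated pixels so filters with a small footprint need no communication at all (tile and halo sizes below are arbitrary):

```python
# Sketch: split a frame into tiles with a halo of duplicated border pixels,
# so per-tile filters with a bounded footprint can run with no thread communication.
import numpy as np

def split_into_tiles(frame, tile=512, halo=16):
    height, width = frame.shape[:2]
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            y0, x0 = max(y - halo, 0), max(x - halo, 0)
            y1, x1 = min(y + tile + halo, height), min(x + tile + halo, width)
            yield (y, x), frame[y0:y1, x0:x1]

frame = np.zeros((2160, 3840, 3), dtype=np.uint8)       # one 4K RGB frame
tiles = list(split_into_tiles(frame))
# Each tile can be processed on its own core. Effects that need data beyond the
# halo (global tone mapping, screen-space reflections, etc.) don't fit this model.
```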
On the other hand, some algorithms work extremely well in parallel, like the Cinebench ray-tracing benchmark. This kind of load can scale almost linearly, pushing hardware to the max and doing so efficiently. This is a good type of algorithm to test just how well the hardware handles scalability. But there can still be bottlenecks, like shared memory, thermal or power limits, etc., that impede parallel speedup.
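A rough way to put numbers on “scales almost linearly” vs. “diminishing returns” is Amdahl’s law; the serial fractions below are purely illustrative, not measurements of any particular chip:

```python
# Amdahl's law: speedup on n cores when a fraction s of the work is serial
# (or spent in synchronization / shared-memory contention).
def amdahl_speedup(n_cores, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

for s in (0.01, 0.05, 0.20):          # illustrative serial fractions
    print(f"serial={s:>4.0%}: "
          f"16 cores -> {amdahl_speedup(16, s):.1f}x, "
          f"20 cores -> {amdahl_speedup(20, s):.1f}x")
# A raytracer with ~1% serial work scales close to linearly;
# something with 20% serial/coordination work tops out around 4-5x no matter what.
```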
IMHO things really have changed and such products aren’t in touch with reality anymore. It’s basically a bubble and it will burst. A decade back we would have been like, wow, that really is something, some real progress that in general we all desire to happen. But these days it’s more or less a predictable meh. The first thing to take into consideration is that desktop market share has already been stagnating for a while. Most people have realized they can do most of their work on a mobile phone. The second thing to take into consideration is the crypto fever: it just killed the consumer GPU market. Knowing that most people can do most of their work on a mobile phone means most people can do most of their work on 10-year-old desktop tech, without any issues involved. What we are getting lately is massive computing machines most people don’t have any use for. They are expensive and consume a lot of energy. Computer processors, Apple devices, GPUs, mobile phone dimensions… this just isn’t in touch with reality anymore. Regular people don’t need this and won’t buy it. An average mobile phone does the job better for them.
It doesn’t help that this is a cellphone CPU in a desktop thermal envelope… rather than a CPU actually designed for workstation purposes.
The M1 Ultra has 2x the mem BW of EPYC w faster single core performance. So…
javiercero1,
EPYC is designed for multicore performance though. We need benchmarks to know how well these stack up against each other.
Here are benchmarks I found for the M1 Ultra and EPYC:
https://browser.geekbench.com/v5/cpu/13330272
https://browser.geekbench.com/v5/cpu/7717857
M1 Ultra
Single-Core Score 1793
Multi-Core Score 24055
EPYC
Single-Core Score 1249
Multi-Core Score 75539
Please note that the EPYC benchmark is for a server with two processors; I had trouble finding GB5 scores for just one, since everyone seems to use two. Let’s say one processor is roughly half that: 37770.
So…
One EPYC processor is 1.57X faster than M1 ultra for multi-core.
Two EPYC processors are 3.14X faster than M1 ultra for multi-core.
The M1 ultra processor is 1.44X faster than EPYC for single-core.
Maybe apple could release a two M1-Ultra processor configuration in the future…
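For reference, the arithmetic behind those ratios (the single-socket figure is just the dual-socket score halved, which ignores any NUMA scaling losses):

```python
# Quick check of the Geekbench 5 ratios quoted above.
m1_ultra_single, m1_ultra_multi = 1793, 24055
epyc_dual_single, epyc_dual_multi = 1249, 75539

epyc_single_socket_multi = epyc_dual_multi / 2          # rough halving: ~37770
print(f"1x EPYC vs M1 Ultra (MT): {epyc_single_socket_multi / m1_ultra_multi:.2f}x")  # ~1.57
print(f"2x EPYC vs M1 Ultra (MT): {epyc_dual_multi / m1_ultra_multi:.2f}x")           # ~3.14
print(f"M1 Ultra vs EPYC (ST):    {m1_ultra_single / epyc_dual_single:.2f}x")         # ~1.44
```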
You meant “socket”, not processor.
The M1 is “designed for multicore” as well.
Yes, a server with significantly more cores can be expected to be faster for multicore workloads than a system with fewer cores.
javiercero1,
You can call it what you want, I was quoting the geekbench page.
Generally, enterprise servers opt for more, slower cores over fewer, faster cores. This is why servers tend to have slower single-core performance. If you want faster single-core performance, it typically means you sacrifice cores.
The top single-core performer at GB has bad multi-core performance, but its single-core performance is much higher than you’d find in high-core-count CPUs.
https://browser.geekbench.com/v5/cpu/12490806
Single-Core Score=2973
Multi-Core Score=2856
Wow. Really? I had no idea.
:rolls eyes:
javiercero1,
No need to roll eyes. In the context of your earlier statement “The M1 Ultra has 2x the mem BW of EPYC w faster single core performance.” I thought it was worth explaining the difference.
And you shouldn’t feel the need to boomersplain basic concepts pertaining to CPUs to someone who makes a living designing them.
javiercero1,
I’m glad that we are in agreement that massively parallel server systems have lower single-core performance, but then logically you ought to concede that the M1 beating EPYC on single core isn’t as big a deal as your post seemed to imply. I think we both agree that it isn’t, so we can just move on.
The only thing I agree is that your boomersplaining adds zero value to the conversation.
“It doesn’t help that this is a cellphone CPU in a desktop thermal envelope”
Great joke dude, made me spit my coffee on the desk!
“IMHO things really have changed and such products aren’t in touch with reality anymore. It’s basically a bubble and it will burst.”
Every single leap in the performance of computing devices is met with the same comment that it’s not really needed. Then that performance becomes the new normal.
A lot of the dissonance around Apple silicon comes from those most comfortable with the previous Intel ecosystem who can’t believe an alternative ecosystem is taking the lead. As the benchmarks for the new Ultra chip come in, especially the real-world performance, bear in mind that this is just generation one of this new chip family.
It’s true, though: for personal computing, most types of applications reached the limit of what was actually needed many years ago. There is a diminishing set of applications still pushing the high end for more power, but they are increasingly niche and of questionable real value. Games and video processing, for example, probably don’t need any more power; instead the required power is artificially increased with larger resolutions and higher framerates, and at some point that leads to diminishing returns (I think we’re already there).
The Mac Studio is what a modern G4 Cube should be, right down to the price, and I’m here for it. I’m sorely tempted to let go of my M1 mini and scrounge up the funds to get one. I don’t *need* it; the M1 mini does everything I need a Mac for and it doesn’t even break a sweat, but all that extra connectivity means I can ditch the USB-C hub, and the extra performance means I can make it last several years with my modest needs.
The Mac Studio is the NeXT Cube 3.0 (the G4 cube was 2.0).
Good point, I was in high school when the NeXT machines came out and sadly never got to experience one.
So true.
It basically took 3 decades for NeXTStep to run properly on a cubic form factor 😉
Just to be clear, they are using tape and not glue, correct? Well, I guess that is fine then…
They used Flex Tape, it will outlive the rest of the computer.
I, on the other hand, am not sure what this deal with 5K resolutions is. Was Tim Cook annoyed by the lack of up-scaling artifacts when viewing 4K videos on 4K displays?
5K has been the standard resolution for Apple’s displays on the desktop (XDR Pro and iMac 27).
Which begs the question: Why did they “standardise” to 5K? Was Tim Cook annoyed by the lack of up-scaling artifacts when viewing 4K videos on 4K displays?
You can watch 4K content on a 5K display just fine.
Also, sharper text.
The M1 series of CPUs should be a cause of concern in the PC industry, because it defies the OEM model.
For those not in the know: The OEM model says that you have competing component manufacturers (for example Intel vs AMD, AMD vs Nvidia) each acting in their own interests, aka each trying to one-up each other in order to get those sweet OEM orders, and this naturally moves the ecosystem forward and outstrips every other single-vendor ecosystem in performance.
In plain English, the OEM model says that no matter how hard companies like SGI, Sun, and Apple milk their respective niches (aka no matter how hard SGI milked the Hollywood studios and defense simulation industries, Sun milked the banks and accounting firms, and Apple milked the creative individuals and music studios), they would never be able to match the performance of ecosystems employing the OEM model.
For a while, the OEM model seemed like a clear winner. All companies mentioned above had either exited the market (SGI, Sun) or switched to the PC ecosystem (Apple). I still remember the days in the late 2000s when people predicted the “end of history” for ISAs for desktops and laptops, because x86 would eclipse everything. The only way the OEM model can be defied, they said, is if some company like Apple somehow came up with billions of dollars in hand and then sunk those billions of dollars in hardware R&D (which back then seemed a near-impossibility).
Well, guess what, Apple did come up with those billions (from the App Store commissions and iPhone sales), and now they are sinking them into hardware R&D, making hardware that leapfrogs the “incremental” performance gains enabled by the OEM model by a wide margin.
tl;dr: Apple may win big here. The PC may soon be in the same position Macs were just before the Intel transition (trying to sell thick noisy laptops running outdated CPUs), but unlike Apple, the PC can’t use M1 chips. AMD is our only saving grace here (it seems that Intel has made the decision to let their foundries sink the company) but still, AMD doesn’t have Apple’s hardware R&D budget.
kurkosdr,
I agree with your assessment of the OEM model. Being able to get the best parts from wherever really empowered consumers to build the best computers they could, without forced compromises. I’m glad that Apple has been able to make ARM a serious contender for high-performance computing. This is good for competition; however, I don’t like that the M1 can only be bought as a forced bundle. As a consumer, it’s kind of disappointing that these high-performing ARM processors aren’t going to be available in other products.
Nobody can deny Intel’s huge missteps these past few years, but they are better now with their 12th gen topping the single core charts again, and AMD is at the top for multicore.
https://browser.geekbench.com/v5/cpu/singlecore
https://browser.geekbench.com/v5/cpu/multicore
I just hope we don’t end up with more fab consolidation. I think the world is seeing how harmful too much consolidation is to supply chain reliability.
Still, Apple is outstripping both Intel and AMD, a worrying trend.
kurkosdr,
Maybe, but what are you basing that on?
I’d wager that the M1 wins on power efficiency; ARM processors usually do.
But based on CPU performance (see links above), the M1 Ultra still won’t be the fastest for either single-core or multicore.
The way these things often play out is that competitors leapfrog one another, because all of them have been working on upgrades and the newest products don’t remain at the top for long. At least CPU competition is better than it’s been in a long time!
Rumor has it that Nvidia was prepared to release higher-end GPUs last generation but held back when the newest AMD products didn’t match the hype. This is why we really need competition; the M1 Max wasn’t a threat to Nvidia GPUs, but if the M1 Ultra is, then maybe we will get better GPUs all around!
Apple does have an OEM system of sorts; it’s just not visible to the end consumer. Apple’s suppliers act like OEMs for their design elements, with Apple being the “virtual” consumer that integrates them vertically.
Apple tries to have several suppliers, whenever possible, competing for design wins.
They have also proven that their vertical model works best in terms of profit margins, as they can basically subsidize the cost of certain design elements across the overall cost of the finished system. In this case, they have been designing these ridiculously overpowered CPUs for their mobile parts for a while, which had them producing dies that were larger than their competitors’. But since they sell the final product using those dies, they can absorb the cost (whereas their competitors, which do not produce the finished system, can’t). The end result is that they can use the same CPU design on a scaled-up SoC for the desktop/laptop.
We’re also witnessing a tectonic shift in the tech industry in regards to processing elements. Just as the microprocessor phased out less integrated competitors (minis/mainframes/supers/etc.) by being able to finance larger design teams and better VLSI technology through economies of scale that the old systems couldn’t ride, SoCs are now taking over from microprocessors. Basically Apple, Qualcomm, Mediatek, NVIDIA… are now surpassing AMD and Intel in capitalization because they’re targeting markets with larger growth, and thus they can afford larger design teams. The clear example is Apple, who not only has perhaps the top microarchitecture design team in the industry right now, but also has access to the leading fab nodes, which puts them ahead of traditional CPU vendors like Intel by at least one generation.
javiercero1,
Benchmarks benchmarks benchmarks.
Apple beat Intel on single-core performance for a while, but so did AMD. Today Intel CPUs are at the top for single-core performance and AMD is ahead for multi-core. Apple is within striking distance of both of them. The truth is it’s a very dynamic situation, and it depends on the exact benchmarks you are looking at.
Reading and Comprehension Reading and Comprehension Reading and Comprehension
In terms of power/performance (main metric for an SoC) in consumer desktop/laptop: Apple (5nm) is 1 generation ahead of AMD (7nm), and close to 2 when it comes to Intel (10nm).
In x86 laptop/desktop (i.e. not HDET) Intel is ahead of AMD in both ST and MT right now.
Intel is ahead in overall single core (i9-12900K) @ 2004 (vs 1793 for M1 Ultra) but it requires 4x as much power (240W vs 60W)!!!!!
Apple is ahead in MT w the M1 Ultra @ 24055 (vs 18534 for i9), again at 1/4th of the power.
Plus the M1 has a much more powerful GPU than the Intel (Ryzen 9 doesn’t have a GPU), much much much better media encoder engines, and has 2 NPUs which are not present in either Intel or AMD designs.
Right now Apple has the core with the best IPC. Intel microarchitecture requires 50% higher clock to achieve the same # of instructions retired.
Apple has a competitive microarchitecture in performance, a lead in efficiency, and the lead in fab node.
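Using the figures above (GB5 scores over the quoted peak-power numbers), the rough perf-per-watt picture is:

```python
# Rough perf-per-watt from the figures above (GB5 scores over quoted peak power).
# These are peak package-power figures, compared like-for-like.
chips = {
    "i9-12900K": {"st": 2004, "mt": 18534, "watts": 240},
    "M1 Ultra":  {"st": 1793, "mt": 24055, "watts": 60},
}
for name, c in chips.items():
    print(f"{name:>10}: {c['mt'] / c['watts']:.0f} MT points/W, "
          f"{c['st'] / c['watts']:.1f} ST points/W")
# ~77 vs ~401 MT points/W, ~8.3 vs ~29.9 ST points/W with these numbers.
```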
javiercero1,
I’ve been perfectly candid about M1 and ARM processors winning on power. I’ll quote myself from just a few posts up: “I’d wager that the M1 wins on power efficiency, ARM processors usually do.”
If you want to make a claim that X is ahead of Y then you need to do so on the basis of some specific metric or benchmark. Your post objectively lacked any metric or benchmark to base the claim on.
Now, if you want to retroactively say you were talking about power efficiency and not performance, that’s OK, but it isn’t what you said initially. The reason I push back on your claims isn’t to be a jerk; it is to encourage you to be more careful by qualifying them appropriately. When you say things that are true in one context but not another, it is rather misleading to withhold the context you were referring to. This can easily spread misinformation.
It is true that Intel isn’t known for high end or even mid range GPUs. They do fine for basic tasks but for demanding tasks it’s much better to use a discrete GPU.
I don’t want you to take requests for more data as an insult. It’s just that arguments made on blanket assertions can only take a discussion so far and it gets too messy when you’ve got people with different opinions. The benefit of using actual benchmarks & data is that it does a far better job at establishing facts that everyone can agree on!
I already listed the geekbench and power numbers in my argument. Now kindly STFU.
javiercero1,
I’ve been respectful to you, and I ask you to be respectful as well. You did not specifically mention Geekbench anywhere, but it seems that’s what you were referring to:
Thank you for clarifying this; can you be clearer next time, though? There wasn’t a link showing hardware specs or even the name of a benchmark.
Also, you didn’t say where you got your wattage numbers from. I have to guess that “240W” comes from the “241 W” quoted as the “Maximum Turbo Power” in Intel’s specs? And where did you get “60W” for the M1 Ultra? I could not find any definitive specs on Apple’s site. They have a marketing chart that cuts off at 60W…
https://www.apple.com/newsroom/2022/03/apple-unveils-m1-ultra-the-worlds-most-powerful-chip-for-a-personal-computer/
…but it looks like Apple is guilty of cropping it off. Consider that AnandTech measured the M1 Max reaching CPU loads of 44W even though Apple’s marketing material showed it going up to 30W. That’s 46% more than Apple’s slide suggested!
https://www.anandtech.com/show/17024/apple-m1-max-performance-review/3
https://www.apple.com/newsroom/2021/10/introducing-m1-pro-and-m1-max-the-most-powerful-chips-apple-has-ever-built/
Ultimately we are both in agreement that the M1 CPUs will be much more efficient than intel CPUs, but I don’t think there’s enough independent data yet to say exactly by how much.
Try to keep up: we’re discussing the M1 Ultra and Geekbench results. I listed the power number needed by the i9 to achieve its top MT results for GB.
Your inability to grasp basic contextual information should be a hint that you’re out of your depth, and that you should perhaps refrain from retarding this thread.
Thank you.
javiercero1,
You completely failed to mention it at the time; just please do a better job of clearly citing sources in the future. I think that’s fair.
I’ve asked you to justify your figures and you won’t. You can’t justify your assertion that “Intel is ahead in overall single core (i9-12900K) @ 2004 (vs 1793 for M1 Ultra) but it requires 4x as much power (240W vs 60W)!!!!!” because you took Intel’s max CPU power (241W) and divided it by an arbitrary 60W cutoff on Apple’s marketing material. Even if you were tricked by Apple’s material, you should still have known better, because a CPU’s max power limit isn’t representative of the power used during a specific benchmark. Here someone tested an i9-12900K under a reduced 125W power limit…
https://www.club386.com/intel-core-i9-12900k-at-125w/3/
Would you look at that: they got the same 1960±10 single-core GB5 score at both the 241W and 125W limits. In other words, the single-core test never broke the 125W power limit. The point here is that it is absolutely crucial to compare the actual power usage of both CPUs and not the max power usage!!! This is obvious to me and should have been obvious to you too. And yet there you were, trying to get away with using the max power usage instead. You either made an honest mistake or knowingly lied.
You are a nasty piece of work when you take things out on me, but in truth it’s your error. So instead of getting upset with me, you could just dot your ‘i’s, cross your ‘t’s, and admit when we don’t have all the data. There’s no need for this drama.
I provided the figures, it was a pretty straight forward point. Your problems with basic reading and comprehension and your lack of basic knowledge in this field are not my problems.
I am not upset, I just have zero respect for you. Learn the difference.
Now, kindly, go pound sand somewhere else already.
“but I’m not sure a monitor is worth €1779.”
FTFY
javiercero1,
You might as well have zero respect for Apple too, because not even they support your numbers.
https://www.apple.com/newsroom/2022/03/apple-unveils-m1-ultra-the-worlds-most-powerful-chip-for-a-personal-computer/
Their claimed power difference is still significant, but it’s less than a factor of 2; meanwhile, you are having a temper tantrum over being called out for claiming it’s a factor of 4. Look, you can pretend not to be wrong and distract from your claims by hiding them behind ad hominem attacks, but this bullying behavior is very unbecoming and hurts your credibility more than mine. It is very childish to use your posts to attack me instead of discussing the topic. Whatever, I guess the topic is done.
Have a good day.
The Firestorm cores in the M1 each consume a maximum of 3.5W in benchmarks; the Golden Cove cores in the 12th-gen i9 each consume between 25W and 30W (max turbo load).
That’s actually a factor of between 7 and 8.5, i.e. the Intel P-core requires about 7 times more power to beat the M1 P-core by 11%.
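Put in numbers, using the per-core figures quoted above:

```python
# Per-core power factor and single-core lead, using the per-core figures quoted above.
m1_pcore_watts = 3.5
i9_pcore_watts = (25, 30)                         # quoted max-turbo range per P-core
gb5_single = {"M1 Ultra": 1793, "i9-12900K": 2004}

low, high = (w / m1_pcore_watts for w in i9_pcore_watts)        # ~7.1x and ~8.6x
st_lead = gb5_single["i9-12900K"] / gb5_single["M1 Ultra"] - 1  # ~11.8%
print(f"Power factor: {low:.1f}x-{high:.1f}x, single-core lead: {st_lead:.1%}")
```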
The problem with you is that you know so little about this matter that you don’t know how little you know. Yes, this topic was done long ago. It’s just that your need to chime in on matters you have no clue about got in the way, once again, of a normal conclusion.
javiercero1,
BTW, you’re still failing to cite your data. Given your proclivity for sticking with debunked data, nothing you say is trustworthy; it has to be backed by sources.
You’re making the same amateur mistake as before: the max power limits are not representative of the power used for every task! Your logic is just as faulty as comparing highway fuel efficiency between cars by only looking at full-throttle data. You can’t just compare maximums, because the actual power consumption is extremely dependent on what the CPU is doing!
It’s not just me; Apple’s own figures disagree with you, and they claim less than a factor of 2. Hiding your bad data behind ad hominem attacks doesn’t fix your bad data. Seriously, it’s childish.
https://www.techarp.com/computer/apple-m1-soc-review/
https://www.anandtech.com/show/17047/the-intel-12th-gen-core-i912900k-review-hybrid-performance-brings-hybrid-complexity/4
I am comparing apples to apples, i.e. max power figure vs. max power figure.
You literally are trying to engage in a discussion where, once again, you have no idea what you are talking about. And as I said, all you do is retard it.
Go pound sand, please.
javiercero1,
Thank you for the sources. However, like I said, comparing “max power” figures does not mean that max power applies to all workloads.
I’d also like to point out that power usage is not linear. Doubling power only provides a marginal increase in performance, which is something we need to factor in.
So when you say “the Intel P-core requires about 7 times more power to beat the M1 P-core by 11%”, that extra 11% of Intel performance is actually very significant. To get an idea of how significant it can be, look here…
https://www.reddit.com/r/hardware/comments/qwn1j9/core_i912900k_performance_efficiency_at_various/
In other words, if you’re willing to give up 12% of the performance, you can cut Intel’s power in half!
It also looks like you chose the highest M1 GB5 score but not the highest Intel score. When I do that, the single-core performance difference is actually 16.3%:
https://browser.geekbench.com/search?q=i9-12900K
https://browser.geekbench.com/v5/cpu/13345054
Reducing performance by 19% can eliminate 60% of the power consumption!
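Using only those two quoted trade-offs, the implied efficiency gain from backing off the power limit is roughly:

```python
# Efficiency gain implied by the two trade-offs quoted above
# (performance given up vs. power saved); illustrative, not new measurements.
tradeoffs = [
    {"perf_loss": 0.12, "power_saved": 0.50},   # give up 12% perf, halve the power
    {"perf_loss": 0.19, "power_saved": 0.60},   # give up 19% perf, save 60% power
]
for t in tradeoffs:
    gain = (1 - t["perf_loss"]) / (1 - t["power_saved"])
    print(f"-{t['perf_loss']:.0%} perf at {1 - t['power_saved']:.0%} power "
          f"=> {gain:.2f}x points per watt")
```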
Apple themselves came up with a factor of 1.9 against an unspecified CPU. Their lack of transparency is why this needs to be independently tested, but for now it’s the best number we’ve got.
It is objectively your ad hominem fallacies that are bringing down the level of intelligent discussion.
I love the part where you’re literally trying to boomersplain the point I made eons ago and that you have consistently not been able to grasp.
Yes, you’re the one retarding the intellectual level of this thread. But as I said, you know so little that you don’t know how little you know.
javiercero1,
Whatever, at least you’ve agreed with my point in the end.
LOL Your type of pathological narcissism is the reason why STFU was invented
javiercero1,
Why are you here, javiercero1? You seem determined to go out of your way to kill any possibility of fun, intelligent, educational discussion just to be an asshole. Is that really all you think you are good for? If not, then do better! Do better for the sake of OSNews and do better for your own sake as well! Stop the tantrums and change your ways before it’s too late. It starts with bullying online, but soon enough it will affect your friends, your marriage, and your family. Just as you do here, you’ll start to blame them all for your attacks on them, but deep down it won’t make you feel any better. As you grow older, the lost opportunities to enjoy a good life are going to gnaw at you. Do something about it now; change your outlook on life. Learn to see the best in people and in yourself.
I am here to read the articles and enhance the level of technical discourse whenever possible.
I’ve actually led a pretty amazing and full life. And on top of that I get to make a very nice living working with some of the most brilliant people on earth designing the very type of devices being discussed in this article. Thus my participation in this thread.
Thus far YOU have added ZERO value to this discussion. You not only tried to boomersplain basic concepts to someone with a PhD in the field, but you even went as far as to try to pass off a point, which you failed to grasp over and over, as your own.
One of the great things about having led a full life is that I can spot a covert narcissist like you from a mile away. And part of leading a good life is having firm boundaries when dealing with manipulators and hypocrites, and having no issue putting them in their place. So tell you what, bud: why don’t you spare me and just cram it?
javiercero1,
You like to say that you understand the topic better than everyone else, but quite frankly you haven’t meaningfully demonstrated that. Whatever high-level understanding you have of the topic is not coming through at all. If anything, you have a distinct reputation for dumbing discussions down to irrelevant personal attacks just because the evidence doesn’t go your way. Take a look back at all of your posts for this article, javiercero1. Unless you graduated from Trump U, none of your posts are worthy of PhD standards. They’ve been childish and substandard on so many levels. Your arguments are chock-full of ad hominem fallacies; I’ve never witnessed anyone as guilty of them as you are. That is not at a PhD level, and if your colleagues saw your behavior online it would be an embarrassment to them. Being an ass doesn’t make you look strong, it makes you look weak. Acting like a child doesn’t make you look smart, it makes you look dumb. Just be a better person, and for God’s sake please stop dumbing down conversations to personal attacks. All of us deserve better than this… way better. Please raise your standards, javiercero1: if not for OSNews, and if not for making the world better, then at least for your own image.