Over the years, computing has seen drastic improvements, culminating in the highly integrated processors we see today. The SoC, or system-on-chip, has dramatically changed the computing landscape with highly integrated systems and architectures, centralizing entire platforms within a footprint of about 4x4 millimeters. The world has since changed: the rapidly evolving standards and pace of the gaming and scientific communities are pushing today’s computing to its limits. With niche bubbles for professions like Data Science and Artificial Intelligence popping up everywhere, the semiconductor and software industries need to wake up to a storm brewing right in their backyard.
The efficient computing dilemma: smaller ’sistors or optimized design and architecture.
In the last decade, semiconductors have taken two distinct paths, one of which has only recently begun. Since its move to TSMC’s 5 nm process, and with its now-famous “Infinity Fabric” interconnect, AMD has changed the very nature of how semiconductors are looked at, especially in the consumer and prosumer markets. Although traction has been slow in the enterprise space due to the way that specific market works, companies have started adopting its Epyc line-up, with larger firms phasing in swathes of plans, blueprints and racks for its latest 96-core parts: pure power etched in silicon.
Increasing competition from Intel has kept AMD on its toes, with fiascos in its newest line-ups that use 3D V-cache technology, as well as OEM manufacturers having issues with power delivery and voltage regulation. In its wake, AMD has left smaller pockets of its customers sad, distraught or outright annoyed by its chip pricing, feature support and offerings, as well as its partnerships. Often called the Robin Hood of the silicon lottery, the company did not waste a single minute before doing the same things its competitor Intel did. It made partnerships with companies to exclusively sell its Threadripper WX line-up, and had a seemingly random feature offering throughout its line-up that was consistent in proportion to price only until the second tier.
Mounting loss of market share forced Intel to take the distributed route. Although it has been working on bringing better transistor processes to market, its first real toe-to-toe competitor to AMD, the Alder Lake architecture, used E- and P-cores.
The efficiency (E) cores were used for lighter tasks, whereas “heavier” tasks such as rendering and compression were relegated to the performance (P) cores. Software companies were often puzzled about these resources and how exactly to best use them.
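One blunt way software could cope, as a minimal sketch rather than anything Intel or the OS vendors prescribe, is to pin heavy threads onto the performance cores by hand. The core IDs below are an assumption (on many hybrid parts the P-cores enumerate first); a real program would read the topology from the OS instead:

```c
/* Minimal sketch (Linux, glibc): steer a heavy worker thread onto
 * assumed P-cores via an explicit affinity mask. Core IDs 0..7 are
 * a placeholder assumption, not a guaranteed hybrid-CPU layout. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Hypothetical stand-in for a rendering or compression job. */
static void *heavy_task(void *arg)
{
    (void)arg;
    /* ... render, compress, encode ... */
    return NULL;
}

int main(void)
{
    cpu_set_t pcores;
    CPU_ZERO(&pcores);
    for (int cpu = 0; cpu < 8; cpu++)  /* assumed P-core IDs */
        CPU_SET(cpu, &pcores);

    pthread_t worker;
    pthread_create(&worker, NULL, heavy_task, NULL);

    /* Restrict the worker to the assumed P-cores; lighter threads
     * keep the default mask and may be scheduled onto E-cores. */
    int rc = pthread_setaffinity_np(worker, sizeof(pcores), &pcores);
    if (rc != 0)
        fprintf(stderr, "pthread_setaffinity_np failed: %d\n", rc);

    pthread_join(worker, NULL);
    return 0;
}
```

Intel’s actual answer was hardware-level scheduling hints (Thread Director) consumed by Windows 11 and newer Linux kernels, which is exactly why hand-pinning like this felt like a workaround rather than a solution.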
Adobe was puzzled a lot; so was Blackmagic, although other benchmarking software did manage to pull through and show proportional increments in performance and efficiency. Intel brute-forced its way into the upper echelons, because a loss of face would have meant not merely lost business or trade volume, but death. Intel was kept afloat by its numerous advantages and non-liquid assets and investments, such as its own foundries, throughout the pandemic and the rapid innovation cycles of its competitors.
AI: The new bubble.
Of course, AI was going to be mentioned, though perhaps not this early when I began writing this article. But over the three months I have taken to write it (mainly due to a vagabond summer semester which I spent making food, a long-due recovery from a break-up that happened months ago, and gaming with my cousin), AI has rapidly proliferated across markets, industries and countries. Obviously, the omnipresence of AI does not imply a direct relation to new semiconductor technology, but it does provide an incentive towards what could become a large part of our daily lives, however fake and dystopian that might seem.
Google did what no one else could do seamlessly: they began integrating AI-based features and APIs that could process photos, text and speech, as well as emails, user patterns and almost anything that could be done on a phone. They did it well and they did it fast; but they soon realized the shortfalls of OEM, off-the-shelf chipsets, and it must have stung them hard, because Google wasted no time in developing in-house silicon once they knew the time was right. As the world reels from a widespread shortage of semiconductors, it now also faces another huge problem. Although the storm hasn’t fully “brewed” and has only come to a simmer, this could be a disaster in waiting, as multiple companies have started piling pressure on the few semiconductor foundries across the globe.
Of course, economies of scale and 21st-century technology might keep up with a lot of it, and of course a balance will be reached as demand and supply normalize, but even then, the world has never seen this rampant, large-scale “fraying” of a specific type of technology. Along with it come the added demand pressures of AI and ML, both of which require enormous amounts of silicon just to train their models, let alone deploy them at scale.
Coming back to Google: the company has created a seeming dystopia with the on-chip processing of AI-accelerated and AI-based features in the new Pixel phones (the 7 and 8 at the time of writing), which have seemingly unreal features that give humans the ability to change their pasts and presents, and in some contexts and cases, even their futures. Google has certainly figured out a way to get ahead in this regard, but like every other company post-pandemic, they have splintered from the singular stream of semiconductor manufacturing and semiconductor IP-holding companies.
Nvidia, the current punching-bag of the tech world, has also rapidly expanded its workloads to focus on AI applications. The cadence of its releases, updates and features on its machine-learning-based CFD platform is just another point of contention.
Whether the AI bubble pops, or takes hold in the cloud and soars to new heights every day, it adds another extremely acute stressor: the need to begin work on a much more universal and standardized computing platform, one that democratizes development, production and manufacturing, as well as the use, implementation and adoption of newer, more standardized technology.
A Dethroned King
Intel has taken tremendous strides since 2016, but for a while it looked like the company would never recover from the shock that was the Zen architecture. Although the theory is ‘quite out there’, Intel could have been saved by its customers if it had not acted like the entitled, pompous prick it did when it had supremacy. A certain amount of humility in its pricing and strategy could have made a huge difference in retaining customers, and in persisting instead of subsisting once Ryzen came along.
The New Lukewarm War.
Recent developments in the world have changed the methods, research, configuration and consumption of semiconductor products. The world is divided again, West and East, and each side has its own strengths. While there could be certain advantages to manufacturing components on processes like 14 nm and 18 nm, they are surely overshadowed by the immense additional processing throughput that the 5 nm and 7 nm stages offer.
Squeezing added computational throughput into large footprints, combined and hybridized, has been the trend. The M1 Max and Pro from Apple are the Frankenstein’s monsters of the semiconductor and materials-science industry. And although Apple’s semiconductor development might in itself be among the best, as it evidently pulls through a large number of massive compute applications and workloads, the inherent nature and essence of Frankenstein-ing chips lies in janky workarounds.
Monolithic dies still offer some of the most advantageous performance figures, however much they burn up, and however faulty the 3D V-cache roll-out was. Of course, the blame is to be shared, like all bad things in the industry, while the performance achievements are to be owned. Whichever side of the morality and ethics debate you belong to, the fact remains that vertical stacking is the best way to Frankenstein a chip when you can’t make the transistors any smaller or the FET technology any better. And here lies the most important fart that the baked beans of the full shit breakfast made: layered technology stacks, complex APIs and completely split visions of what the future should look like. Of course, the tech world would be the last to live in the shared-goal communist utopia where everybody works for the greater good of the motherland. But in turn, today’s companies have gone a complete 180, committing incest and ensuring that the mother never finds another partner.
The future of semiconductors looks bleak, partially due to Moore’s Law and unfortunately due to physics and this naughty little thing called quantum mechanics. It is unfortunate that we reached the benchmark so fast, and yet it is great that we have already reached it. I never wish to live in a world where a simple Google search takes a few minutes because the processor struggles to climb from 833 MHz to 900. Truly revolutionary; and yet, today’s pace of advancement has given us more things to worry about than to reprioritize.