Chained ramblings

I don't usually write much about my opinions, but I suppose I can afford to be wrong once a year. These are, after all, opinions; I don't have all the information, and I'm bound to make mistakes. Besides, I've been reading analyses from professional analysts for years, and they're constantly wrong, so I have at least the same right to make mistakes as they do. Don't take this analysis as a basis for investing. It was originally written in another language, so it may contain translation errors.

I hope no analyst ever reads this! You should have more information than I do!

I'm going to start with something I've been wanting to talk about: N2. About two years ago, I realized that N2's yield metrics were terrible. Back then we didn't have the 18A data to compare against, but everything changed when the Blues reported a D0 figure of less than 0.4 in September 2024. From then on, it was clear to me that N2 wouldn't reach a similar yield until at least January 2026, by which point (now) the Blue team would have a much lower D0.
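
For context on why that D0 figure matters, here's the classic Poisson yield model (a deliberate simplification; foundries use fancier models like Murphy's), applied to a hypothetical ~100 mm² die:

```python
import math

# Poisson yield model: yield = exp(-die_area * D0),
# with D0 in defects/cm^2 and die area in cm^2.

def poisson_yield(die_area_cm2: float, d0: float) -> float:
    """Fraction of good dies for a given die area and defect density."""
    return math.exp(-die_area_cm2 * d0)

# A hypothetical 1.0 cm^2 (~100 mm^2) die at various defect densities:
for d0 in (0.4, 0.2, 0.1):
    print(f"D0={d0}: ~{poisson_yield(1.0, d0):.0%} yield")  # ~67%, ~82%, ~90%
```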

A year ago, I sent a graph I calculated based on this to some people via DM. I wasn't far off. The facts speak for themselves with N2.

It started with the claim that N2 would be in HVM by 2025. It was said that the Z6 DC would be the first to debut on that node (the first for AMD, not the first overall), only for it to later emerge that Apple had supposedly booked more than half of N2 production, leaving crumbs for everyone else. That included Nova Lake.

I should mention that I'm seeing many shipments of the 28C and 52C models—microprocessors, not parts for NVL.

The whole thing with Arizona's P2 and P3 was curious. First, it was said that P3 was already in full production, when satellite images showed me there was only mud. Absurd production start dates were also being given. Then it was said that P2 would be for N2 after all, like someone deciding at the last minute not to order a pizza and ordering a hamburger instead. If that's true, it tells me they panicked about the tariffs. But it's still absurd, because the start of production would be extremely late anyway (they'd probably have to bring N2 to the US as soon as possible). A few months later, they said again that P2 would indeed be for N3, giving more realistic production start dates, though still wrong in my opinion (too optimistic). Satellite images show that the structure isn't finished.

At CES, we also learned that the Z6 chips presented are enormous, around 155 to 175 mm². For those of us accustomed to chips of around 70-80 mm², it seems yield is going to be a disadvantage here. Is this a change in AMD's strategy, or a design limitation of the current generation? If it's the latter, we'll have to see how well it's optimized.

We can also see that they've opted to use N2P, and it seems, from the news, that N2 and N2P will be released at the same time or very close together. I find that very strange. It reminds me of 20A and 18A, and makes me think about the theories regarding N2. I could talk about something I found here, but for obvious reasons, even though it's freely available data, I prefer not to. Still, it's very interesting to see the gap between the two nodes.

With N3, FinFET went from N3B to N3E, and then to the newly released N3P. N2 inaugurates GAAFET, and I find it strange, to put it mildly, that they're practically saying N2 and N2P will be released almost simultaneously.

It's January 18, 2026. The blue team said that 18A would be in HVM in Q2 2025. I detected massive shipments of several PTL models in September 2025 (now fully confirmed, because we have the overclocks). I remember finding the first Panther model about a year ago, if I'm not mistaken; I published it, and it was the 99CTX6. So no, that's not Q2, but it is, right at the wire, the end of Q3 rather than Q4. Intel has delivered. I'm certain that LBT is going to be overly cautious.

Normally, if they said H2, it meant December or even January of the following year. Now, if they say H2, it could mean Q3.

It's also true that the first models I saw were 807290s, and the ones being shipped en masse are 807293s. Considering statements I've heard from Naga, these might be models with optimized yields. That highlights the potential of 18A as yields improve. It also suggests the 290 models could have been released earlier and faster, but in lower volumes and likely with lower yields. LBT (and I think Naga too) is playing it safe.

Intel has typically released "unique" mobile models in low volume during the first year of a new node, followed by final models with multiple SKUs and a massive performance leap the following year.

With PTL, it's not that they've released few SKUs, but I think they've opted for a conservative design in terms of density and yield. And yet, their performance and power consumption figures are surprising.

I believe this is the first time Intel has released a new range that uses three different nodes for its iGPUs. It goes without saying that the graphics component of PTL is a resounding success. Undeniable, except for those who try to hide the obvious. Before, there were literally two players in the game; now there are three. It's also worth remembering that GPUs are the foundation of Nvidia's success in AI.

In this regard, something curious is happening. In PassMark, the B390's raw score is around 9,400 points versus just under 8,200 for the 890M, yet in real-world tests the B390 performs significantly better than that raw gap would suggest. From the benchmarks I've seen, the actual improvement exceeds the raw numbers. Are we witnessing a radical change in driver optimization in favor of the Superman team?

Personally, I think they've hit the nail on the head in GPU design and optimization.

Another interesting point: the 2Xe3 core version shows a PassMark score of approximately 2,400 and the 4Xe3 version around 5,000, compared to the B370's 8,000-plus points with 10 Xe3 cores and the B390's 9,000-plus with 12. The LNL 140V also scores around 5,000. Do you see what I mean?
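
To make explicit what I mean, here's a quick back-of-the-envelope calculation in Python using the rough scores above (the core counts come from the article itself, except the 140V's 8 Xe2 cores, which I'm adding for contrast):

```python
# Rough PassMark points per Xe core, using the approximate scores quoted above.
# Takeaway: the small Xe3 configs scale almost linearly per core, and 4 Xe3
# cores roughly match Lunar Lake's 8 Xe2 cores; the 10- and 12-core versions
# clearly hit other limits (clocks, bandwidth), so points per core drop.

configs = {
    "PTL 2Xe3":         (2_400, 2),
    "PTL 4Xe3":         (5_000, 4),
    "LNL 140V (8 Xe2)": (5_000, 8),   # older core generation, for contrast
    "B370 (10 Xe3)":    (8_000, 10),
    "B390 (12 Xe3)":    (9_000, 12),
}

for name, (score, cores) in configs.items():
    print(f"{name}: ~{score / cores:,.0f} points per core")
```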

(Edited) While I don't usually believe MLID, it's worth recalling what he said about PTL. He stated that it could outperform the dedicated B580 graphics card. My question is, how would the B390 perform if it were a dGPU with GDDR?

It's also interesting to note that with Arrow and Lunar the iGPUs were named 140T and 140V, but with PTL the naming follows the dGPU nomenclature: B370 and B390.

I'm eager to see benchmarks of the 4Xe3 core model, and of the 2Xe3 core versions from PTL and WCL (check out the 42-18 TOPS NPU scores!).

On the other hand, the press in general seems to have overlooked, or at least not emphasized, a detail I find very interesting. The 10 and 12 Xe core versions run on N3E, the 4Xe core versions on Intel 3, and, to my surprise and that of many others who follow these topics, we learned at CES that the 204 model has its two Xe cores manufactured on 18A, on the same monolithic tile as the CPU, leaving only the I/O tile separate on N6.

Although, if I recall correctly, this information had already been mentioned in a comment by Bionic Squash, I'm afraid many of us missed it.

Are we, with the 2Xe3 core version on 18A, talking about the first GAAFET and BSPDN computer iGPU? Or at least the first combined GAA+BSPDN CPU-GPU tile?

A year and a half ago, when I first learned about Wildcat Lake's plans, I said it could be a top seller if Intel did things right. Now, knowing that the iGPU will also run on 18A, it will be very interesting to see its performance, and especially its power consumption and efficiency. Will those 2400 MHz have the potential to support Frame Generation?

I also detected shipments of "WCL N+" some time ago, and later I believe it was Jaykihn who said there would be a refresh of WCL 404. This model will surely surpass, or at least match, the performance of LNL, improving on its already good efficiency and battery life. Lunar Lake was a game-changer in my opinion. A declaration of intent for all those who clamored for the death of x86.

The icing on the cake would be if it had four Xe cores instead of just two. I'd rather "dream" that the 2Xe3 core version has FireGross.

Going back to the death of x86: x86 and x64 have been killed off many times. AMD was practically dead in 2015, and yet here it is, having made a strong comeback. They've tried to kill Intel many times too: it consumed a lot of power, battery life was short, it overheated, it had vulnerabilities...

Now, with what's been presented with PTL, that mantra is dead. I don't know if it will return, but right now it's dead. (Interestingly, most people are ignoring the fact that SMT, a nest of vulnerabilities, has been removed. There are some things about Zen here that worry me, by the way.) 27 hours of Netflix playback and up to 40 hours of local video playback: does that mean that in idle we've gone from 4-8 hours of battery life to 50? Impressive. If it can play local video for 40 hours, it will easily exceed 50 in idle, in my opinion.

Obviously, you're not going to buy a product to leave it idle, but knowing its idle life tells you how optimized it is.

Under full load, no processor, whether ARM or x86, lasts more than 4 hours on an average-capacity battery. But nobody runs a computer at full capacity from the moment they turn it on. As the saying goes, "the good ones aren't all that good, and the bad ones aren't all that bad."

It's true that both the 27- and 40-hour figures are for content playback (great work by Intel on the IPU 7.5), but the power consumption results for typical tasks using only the 4 LP E-cores are impressive.

Intel started using E-cores and LP E-cores long before AMD, attempting to make the leap to what we all know as big.LITTLE. They were criticized for it, with some saying E-cores weren't very useful. And frankly, looking at the performance tests for Arrow and Lunar, I think Skymont has been a game-changer. To begin with, looking at the die shots, it makes much better use of the silicon.

It only needs to make the leap to what we could call superbig cores to match what's available in ARM, though in this respect I don't know if current workloads would take advantage of that leap. With just one superbig core, it wouldn't be enough. Then again, the core already in the silicon is enough to set a record!

Intel is ahead of AMD in ST, yet in gaming, it's undeniable that X3D is the top-selling gaming processor in its own right. This is a fact, not an opinion. And Nova Lake with its bLLC reaffirms this.

In the graphics arena, just two years ago, nobody took Intel seriously. For about two and a half decades, it had been a market dominated by NVIDIA and ATI. 2025 offered glimpses that this was changing with Battlemage (including Lunar Lake and Arrow Lake on the iGPU side). By 2026, with the performance of PTL's iGPUs, I have no doubt that this is undeniable.

We're talking about achieving impressive performance with 55 mm² in N3E and minimal power consumption compared to other technologies.

The big question is whether Intel (which has at least one 4Xe core silicon die in Intel 3, a proprietary node, and another monolithic die alongside the CPU with 2Xe cores in 18A, also a proprietary node) will now start manufacturing dGPUs in one of its nodes.

This could be another giant leap forward. I think it's an ace up their sleeve. And I'm extremely surprised that there's no information about this product. Is it the NS3 chip? Is it a dGPU? Mobile?
I have confirmation that it's from Intel.

I'm linking this to the nodes, and to one in particular that, for the past year and a half, has seemed more important to me than 18A. Everyone's focused on 18A. I was sure 18A would be released in 2025, but what worried me more was the Intel 3T node.

Intel 3T seems almost more important to me than 18A. Intel 3T debuts Foveros Direct 3D Cu2Cu bonding, a key component for 3D packaging. Everyone knows how important that has been for AMD's X3D.

Intel's CWF adds a twist: while AMD's X3D cache die is roughly the same size as the compute silicon, CWF's base "3D" tile is significantly larger than the four compute tiles it carries. If the compute tiles are around 55mm² each, the base tile with the main SRAM is about 300mm², if I recall correctly. The elongated shape of the base tile, instead of a purely square one, is interesting. Does this design contribute to better performance?

3D-stacked packaging should be able to house not only SRAM in the future but, as Ian indicated in one of his videos, other components as well, taking miniaturization and density to another level. iGPU + SoC + I/O + CPU stacked in cubes. Can you imagine it?

Speaking of RAM… What happened to MRAM?

Another thing everyone seems to overlook is FCBGA 2D+. I'd love for someone to write an article about it (and about Cube). Intel says this will save them millions of dollars. I think I first saw FCBGA 2D with SRF SP and FCBGA 2D+ with CWF AP. I'm fascinated by this technology.

To put it in perspective, I'm starting to see shipments of Z6 DCs, which I believe are the Classic version rather than the Dense one. However, I was seeing very few shipments of CWFs. Or rather, it's not that there were few; it's that they changed the naming system. The shipments I'm seeing are mostly of 432MB models, although some of them don't show any specifications, and I'm seeing even fewer of the 576MB model. The 432MB model will be a giant leap compared to SRF. Once the 3T yield is optimized, the benefits will increase dramatically, in my opinion. The tiles of SRF and GNR approach 600mm², compared to the 300mm² of the CWF 3T base tile, if I remember correctly.

(By the way, in one of the wafer images in an Intel video, I spotted a die and calculated its dimensions. It came out to 265mm². The dimensions and aspect ratio don't match any known product. Could it be DMR 48C? Who knows. I need to check whether it matches an image I recently posted from another video, and work out what socket size it would imply, to see which product it matches.)
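
For anyone curious how that kind of estimate works, here's a minimal sketch, assuming a standard 300 mm wafer and using the wafer itself as the ruler; the pixel values are hypothetical, chosen only to land near the 265 mm² I measured:

```python
# Estimate a die's area from a wafer photo, using the known wafer
# diameter (300 mm) to convert pixels to millimetres.
# All pixel values below are hypothetical, for illustration only.

WAFER_DIAMETER_MM = 300.0

def die_area_mm2(wafer_px: float, die_w_px: float, die_h_px: float) -> float:
    """Scale pixel measurements to mm using the wafer itself as the ruler."""
    mm_per_px = WAFER_DIAMETER_MM / wafer_px
    return (die_w_px * mm_per_px) * (die_h_px * mm_per_px)

area = die_area_mm2(wafer_px=1800, die_w_px=113, die_h_px=85)
print(f"~{area:.0f} mm², aspect ratio ~{113 / 85:.2f}:1")  # ~267 mm², ~1.33:1
```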

A little over a year ago, with Pat's departure, I said that Intel's strategy now would be to reduce costs and make 18A profitable.

I wasn't wrong. LBT shared my opinion. Intel needed layoffs, a streamlined structure, and a focus on its core business.

Intel does everything (including 5G; like many others, I thought they'd sold all their 5G assets to Apple, but they hadn't) and needed to focus. I know Pat didn't like layoffs, but if you're losing money, not laying people off isn't an option.

I also have to say that without Pat, Intel would be dead right now.
Thank you, Pat. You didn't deserve to leave like that. They should have let you stay on the Board even if you were no longer CEO. Yes, maybe it would have been strange, like a demotion. But it seems a tragedy that Intel let you go. Even with your mistakes (to err is human; errare humanum est!), your GREAT successes count for infinitely more.

We recently learned about SambaNova's aspirations, which worried me. Buying assets now doesn't seem very smart if you want to focus on the core, no matter how much AI they need.

In my opinion, Intel needs to continue working on CPUs and GPUs and stop considering buying AI companies.

But I have to say my opinion changes when I see that Mobileye has bought a humanoid robotics company, since I firmly believe the true AI revolution will be there, not in DCAI. But discussing that would turn this article into something more like a book.

Robinson Lake also struck me as an extremely interesting product: a product for the Nvidia Jetson segment, which can now be covered with PTL. I seem to recall that the board format was STX. Bravo. I'm a strong advocate of ATX standards and their derivatives. I'm not an environmental fanatic, but environmental sustainability depends on standards. I'm very clear on that.

I have two 3D printers, a CNC milling machine, and a laser CNC (the latter two designed by me and measuring 2x1 meters), and I'm a huge fan of Arduino and Nema motors. Anything that can be easily repaired or upgraded is welcome.

I love Intel's step forward with Robinson Lake. If you add the ability to upgrade or replace the DRAM, I give it a 10/10.

Here we could bring up the patent Underfox found. One of the most interesting discoveries, and one that's rarely discussed. It takes CAMM to the next level. CAMM applied to MoP?
This article might not be very well structured, and I apologize for that, but I'm writing it on the fly in spare moments.

I'm going to change the subject and talk about sockets.

I've noticed many new sockets, and they're very large, even mammoth. But speaking of the smaller ones, I have to say that the jump from the Elite X to the X2, for example, seems enormous. It's a clear attempt to make a statement. We'll see about power consumption and silicon size. Performance will show a significant leap if they haven't made any mistakes. The change in the new Ampere sockets is also interesting.

But when I talk about large sockets, I really mean sockets with over 11,000 pins, reaching a record (at least I don't know of anything higher) of 24,000 pins. If the 9,000 pins of DMR are impressive, the 24,000 or 18,000 pins of Tomahawk are overwhelming. I could say who I think made the 24,000-pin socket, but I prefer to wait.

Among those monstrous sockets is also Black Cat Island (without the final D). I don't know if it's from Intel, but it clearly resembles Crescent Island.

I recently published Goose Island, and there's a Cisco conference on February 2nd featuring LBT, Jensen, and Altman? Think about it; I recommend it.

I'm 90% sure which socket is Jaguar Shores, and I'm pretty sure about another monstrous socket that I think belongs to another new Intel product. I don't know if it's Coral or Rogue River (it's not called that anymore, is it?). If I told you the number of cores it has, you wouldn't believe it. Intel, stuck at 4 cores and 8 threads for years, and now this kind of madness. How you've changed!

There's a lot of talk about Fab 52 and Ohio, but it seems everyone's overlooking Fab 62. Structurally, it's been finished for months (while P2 doesn't look finished, as I mentioned before).

In any case, I find it much more interesting to talk about Fab 62 than 52 or Ohio, honestly. Just as I was one of the first to report that Fab 42 was becoming a hybrid, and later did the same with the one in Ireland, it will be interesting to see what Fab 62 ends up being. Or, to put it another way, which ASML machines end up being used at that fab. Similarly, I believe I was the first to realize that the modules in Ireland would be interconnected in the same way as in Arizona.

We're told that Fab 52's output is 10 kWpw (10,000 wafers per week), which works out to roughly 60 wafers per hour, if I'm not mistaken. Each NXE, if I remember correctly, is designed for more than 200 wafers per hour. And that's just one step! Yes, I think there's room for improvement, but if you calculate the total number of chips they can manufacture with that output, it reveals some very interesting data when you compare it with the total sales of CCG and DC processors.
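
A minimal sketch of that arithmetic (the die size and yield below are my own illustrative assumptions, not Intel figures):

```python
import math

# Fab 52 throughput arithmetic from the 10 kWpw figure quoted above.
wafers_per_week = 10_000                   # "10 kWpw"
hours_per_week = 7 * 24                    # assume 24/7 operation
print(f"{wafers_per_week / hours_per_week:.1f} wafers/hour")  # ~59.5

# Crude good-die estimate on a 300 mm wafer (ignores scribe lines and
# edge exclusion). Die size and yield are hypothetical placeholders.
wafer_area_mm2 = math.pi * (300 / 2) ** 2  # ~70,686 mm²
die_area_mm2 = 120                         # hypothetical
yield_fraction = 0.8                       # hypothetical
dies_per_year = (wafers_per_week * 52) * (wafer_area_mm2 / die_area_mm2) * yield_fraction
print(f"~{dies_per_year / 1e6:.0f}M good dies per year")      # ~245M
```

Under those made-up assumptions you land in the low hundreds of millions of good dies per year from a single fab, which is at least the right order of magnitude to set against CCG and DC unit sales; that's the comparison I find interesting.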

I said some time ago that Intel needs clients for IFS, and also that Intel itself is an IFS client. In fact, it's the primary client.

Intel simply needs to bring all its products back to IFS. And to do that, it needs its products to be the best, or at least among the best. It doesn't need customers; it needs high yield and the success of PTL, NVL, Celestial, and others.

In terms of AI, it's true that Gaudi 3 doesn't seem to be the saving grace. But here's an interesting point. The NPU, or AI processor, in the Core Ultra processors has undergone a radical change from the Lunar Lake version to the Panther Lake one. The change isn't just in the number of tiles. What's truly interesting, in my opinion, is the area it occupies: while the Lunar Lake NPU delivered 48 TOPS, the Panther Lake NPU delivers 50 TOPS while occupying significantly less die area (40% less, I believe; double-check that just in case).
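
Taking those numbers at face value (including the 40% figure I just said to double-check), the implied compute density gain is easy to work out:

```python
# Implied NPU compute-density gain, Lunar Lake -> Panther Lake.
# The 40%-smaller-area figure is the one quoted above; verify it yourself.

lnl_tops, ptl_tops = 48, 50
ptl_area_ratio = 0.60     # PTL NPU area relative to LNL ("40% less")
density_gain = (ptl_tops / lnl_tops) / ptl_area_ratio
print(f"~{density_gain:.2f}x TOPS per unit area")  # ~1.74x
```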

B570, B580, B50, B60, B370, B390, and a new NPU taking up much less space. There seems to be reason to believe Intel has a great opportunity here, at least in AI.

It may seem like it's arriving late, and indeed it is in the first phase at least, but the AI revolution hasn't truly made itself known yet, in my opinion.

Remember that there are only three Foundries on the planet with cutting-edge processes: Samsung, TSMC, and Intel. These three will divide the market among themselves. Both Samsung and Intel have a great opportunity for growth.

Returning to sockets: if we look at the 5090 and B200 sockets, in my opinion Nvidia has many tricks up its sleeve, just as AMD did when it saw Intel stuck at 14++++++ and then at 10. The comparison of AMD's and Intel's GPU sockets against these two Nvidia beasts is also interesting here. We could call it performance per pin per node.

And here's an important question: would AMD really use N2 for Epyc if Intel had an obsolete node?

While I think AMD has been comfortable thanks to its outstanding work over the years, I don't see it as so comfortable now. I get the feeling that 18A has changed the plans of more than one company.

And it makes sense. Three years ago, I predicted what I called the N3 train (I don't think I'm any smarter than anyone; you only had to look at TSMC's roadmaps): a train that practically every manufacturer would be on, a technological standstill in which some, like Apple, would desperately need a cutting-edge node. Because no, Apple hasn't achieved any miracles. Forget about it. They had been designing their A-series processors on ARM for years, and had a very easy time generating hype by using MoP, exclusive cutting-edge nodes, and monolithic designs: using N5 while everyone else was on N7, or N3 while everyone else was on N5. Even so, it's very impressive to risk everything like that, but of course, things change drastically now that everyone is on N3P or N3X and you're launching N2. There isn't that much of a difference anymore. And not only that: some are going for N2P, probably out of desperation, but also for another reason: to overtake Apple.

I have to say that Vadim's post surprised me extraordinarily.

And in all that battle, there are Samsung and Intel, who only need one strong client, just one, for one of their cutting-edge nodes to succeed, in my opinion.

I think it was Jukan who posted something suggesting TSMC can't keep up with production due to excessive demand. It's only a matter of time before someone switches to the dark side (that is, to Samsung and/or Intel), in addition to the products already rumored.

I was planning to end with a conclusion, but I think it's better if everyone draws their own. OMG, I'm already nine pages in Word! To those of you who have managed to reach the end, I must congratulate you for enduring this rant.

What I'm going to do is take this opportunity to thank all my followers (seriously, not like my end-of-year joke posts, hehe) for this incredible year. DKR, MoJo, Game.Keeps.Loading, Xiao Yang, Ichi Jiku Zeon, SiliconFly, Prakhar Verma, Raw Mango, Tem, John Heritage, Konkretor, Trastwere, pers0naluni0n, Sarah, jaj18, gdch, … there are so many of you, sorry if I forgot anyone! meng! xD

I'd also like to thank the teams at Wccftech, VideoCardz, TomsHardware, TechPowerUp, etc. You all do an incredible job!

Finally, I'd like to thank Bionic Squash and Jaykihn for their feedback. Ian, HighYield, Patrick Moorhead, and Daniel Newman are all amazing.

x86 is dead & back!
