Nikodemos
30 minutes ago
Charts, imv, are essential & should be used for PLANNING your trade! (in advance)
Charts (often underutilized) as an investment tool:
[] To anticipate, map & take an entry
[] Build a position (where the fundies are: value, undervalued, overbought/oversold)
[] And to chart, in advance, ....an EXIT!!
I'm a FIRM believer in writing out an investment thesis IN ADVANCE of taking a position.
And then using a chart to FIT that 'business plan'.
And lastly to conduct real-time, on-going ANALYSIS (of the thesis & plan)
As NEWS, developments, filings, & such are presented, new facts are disclosed & discovered, etc..
1. Plan your trade
2. Trade your plan!
With nVidia, the company has CONTINUED to outperform.... The management (like with other stocks I've bought: recently, $DKNG ~$10-$11 warrants come to mind) was a HUGE FACTOR in the investment thesis; & they have continued to EXECUTE!
nVidia (& others) remains a SUPERIOR ENGINEER-driven product & company. I thought ROKU's CEO fit the same mold as an INNOVATOR. And each of these companies has PERFORMED very well from where I purchased them -- & so INSTEAD of TRADING (in & out of them) -- I just kept BUILDING a larger & larger position. Easy to do when YOU CHART a favorable entry & are NOT chasing, bogged down in the red, fretting & stressing over downside, etc..
To me, that is the VALUE of charts! A visual representation of ALL THE DATA -- Presented in a way that allows the mind to process FAR MORE data than you could without!
GL2U....
PS: Gaps are incredibly useful, especially where capitulation is concerned, & when any stock is running at (overbought or oversold) extremes: Exhaustion gaps!! They portend an attempt at driving the price in the opposite direction (encouraging DEMAND -- or enticing SUPPLY)....
In my experience: MOST GAPS FILL! Not an if proposition really, just a matter of WHEN!! And, once again, a CHART (metrics, diagnostics, etc.) often advises as to WHERE YOU ARE: Like a roadmap, in the near-term, mid-term & long-term story of the stock!
HIGHLY useful, & I recommend EVERYONE learn how to read them, learn how to use them, & back-test with RIGOR, so you know how YOU TRADE & can leverage their use! GL
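To make that back-testing advice concrete, here is a minimal sketch in Python (pandas), not a tested trading tool: it assumes daily OHLC bars in a DataFrame with open/high/low/close columns (the column names and the CSV in the usage comment are illustrative), flags gap sessions, and counts how many sessions each gap took to fill.

```python
import pandas as pd

def find_gaps(bars: pd.DataFrame) -> pd.DataFrame:
    """Flag gap sessions in daily OHLC bars.

    Gap up: today's low is above yesterday's high.
    Gap down: today's high is below yesterday's low.
    """
    out = bars.copy()
    out["gap_up"] = bars["low"] > bars["high"].shift(1)
    out["gap_down"] = bars["high"] < bars["low"].shift(1)
    return out[out["gap_up"] | out["gap_down"]]

def sessions_to_fill(bars: pd.DataFrame, gap_date, gap_up: bool):
    """Sessions until a gap fills, or None if it never does.

    A gap up "fills" when price trades back down to the prior day's high;
    a gap down fills when price trades back up to the prior day's low.
    """
    i = bars.index.get_loc(gap_date)
    level = bars["high"].iloc[i - 1] if gap_up else bars["low"].iloc[i - 1]
    later = bars.iloc[i + 1:]
    hit = later["low"] <= level if gap_up else later["high"] >= level
    filled = hit[hit].index  # dates on which the gap level traded again
    return None if filled.empty else bars.index.get_loc(filled[0]) - i

# Illustrative usage:
# bars = pd.read_csv("nvda_daily.csv", index_col="date", parse_dates=True)
# gaps = find_gaps(bars)
# fills = {d: sessions_to_fill(bars, d, row["gap_up"]) for d, row in gaps.iterrows()}
```

Run over enough history, the distribution of fill times tells you whether "most gaps fill" actually holds, and on what horizon, for the names you trade.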
...
..
.
Nikodemos
56 minutes ago
PM Message: I initially BOUGHT nVidia sub-$10 & started posting about nVidia publicly in ~2013
I'm REPLYING to one of my first messages to this board (for your review).
nVidia has been JUST ONE of my table-pounding BUYS since then...
I even MENTIONED $100 price BUYS as "GOLD!!" *(see below)
Why?
[] The management
[] The tech
[] Innovation
[] etc...
ALL OF WHICH was evident THEN!! nVidia has ALWAYS been a LEADER!! Engineer-driven, performance-&-product based company!
And now their NEW chips with AI-augmentations are going to CONTINUE that DOMINANCE imv...
FD: I have had a very long-standing position (Roth IRA, funds, shares, bought calls, etc.) in this stock. No one should take mine or anyone else's opinion as investing advice. Consult your own advisor, & weigh your capital, goals, objectives, investment horizon, timelines, etc., when planning your investment decisions. I've been a SUPPORTER, champion &/or advocate of the company, products, tech, & stock because I have believed it to be INCREDIBLY UNDERVALUED!! And, I was right... I think this will BE TRUE in the coming decade -- as it was in the previous one -- & am personally INVESTED ACCORDINGLY!! However, everyone should do with their money as their risk-reward profile, fiscal goals & objectives DICTATE! I manage my OWN money -- you do you & yours!!
ALSO, keep in mind:
1,000 shares (then, @ ~$7k cost) = over $1,000,000.00 TODAY!!
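For what it's worth, that math checks out under the obvious assumptions (a split-adjusted cost near $7 per share, and the ~$1,096 close cited elsewhere in this thread; both inputs are illustrative):

```python
shares = 1_000
cost_per_share = 7.00    # ~$7,000 total cost, per the post
recent_close = 1_096.00  # Friday's close cited in the thread

cost = shares * cost_per_share  # $7,000
value = shares * recent_close   # $1,096,000, i.e. "over $1,000,000.00 TODAY"
print(f"${cost:,.0f} -> ${value:,.0f}, a {value / cost:.0f}x return")
```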
....
*MORE TABLE POUNDING when nVidia retraced to $100 (proof for you PM messenger)
...
PPS: Check my posting history, so you know whereof I speak. GL2U
..
.
Chart Reader
5 hours ago
Here is the chart evaluation of NVDA that I just did for my newsletter this week:
Changing the subject for a second, the NASDAQ 100 has been the leader to the upside recently, but mostly because the Tech industry has been the strongest. Among the Tech stocks, it has been mostly NVDA and the AI industry acting as the positive catalyst; as such, the stock is the key to this market right now.

NVDA reported better-than-expected earnings a couple of weeks back and generated not only a new all-time high but did it with a breakaway/runaway gap formation. The stock generated a red close this past week and did get down to 1069.63; the runaway gap is at 1064.75. The stock closed very slightly in the lower half of the week’s trading range (midpoint being $1099 and it closed at $1096), suggesting a slightly higher probability of going below last week’s low than above last week’s high at 1127.17. If the latter occurs, the retest of the runaway gap will have been successful and a continuation of the uptrend would likely ensue. If the former occurs "and" the gap is closed, a drop down to $960 would become the short-term target.

The average fundamental price target of the stock analysts for the rest of the year is $1197, meaning that the stock could move up another $100 (per share) from Friday’s close. On the opposite side, if the stock closes the runaway gap, the breakaway gap at 960.20 ($136 per share lower than Friday’s close) would be the target, meaning the setup offers less than a 1-to-1 reward/risk ratio. That makes this stock very vulnerable (at this time), and because it is the “key” stock to the overall index market, it makes the market vulnerable as well.

The bulls need to be committed to making a new all-time weekly closing high this week. On a daily closing basis, the all-time daily closing high is 1148.25, but on the weekly closing chart, all the bulls need to do is generate a green weekly close next Friday. The latter is a must, while the former (a new all-time daily closing high) is a potential target that does not absolutely need to be broken “this” week. As such, NVDA is the key to the market this week.
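As a quick check on the reward/risk math in that evaluation, here is the arithmetic in a few lines of Python, using only the price levels quoted above:

```python
# Price levels quoted in the evaluation above (dollars per share).
friday_close = 1096.00    # last weekly close
analyst_target = 1197.00  # average fundamental price target
breakaway_gap = 960.20    # downside target if the runaway gap closes

reward = analyst_target - friday_close  # ~$101 of upside
risk = friday_close - breakaway_gap     # ~$136 of downside

print(f"reward ${reward:.2f} vs risk ${risk:.2f} "
      f"-> {reward / risk:.2f}-to-1")   # ~0.74-to-1, i.e. under 1-to-1
```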
Oleblue
1 day ago
Nvidia’s Enormous Financial Success Becomes . . . Normal
May 23, 2024 Timothy Prickett Morgan
For the past five years, since Nvidia acquired InfiniBand and Ethernet switch and network interface card supplier Mellanox, people have been wondering what the split is between compute and networking in the Nvidia datacenter business, which has exploded in growth and now represents most of the company's revenue each quarter.
Now we know.
Each quarter, Nvidia chief financial officer Colette Kress puts out a commentary that accompanies the financial results for each thirteen-week period, which gives some color on what sold and by how much. As the entire world knows, Nvidia just reported its numbers for the quarter ended in April, the first quarter of its fiscal 2025 year, and the numbers were stellar, as expected. And inside of that commentary, Kress revealed the actual revenues for compute and for networking as distinct from each other, and also distinct from its Graphics group.
The actual data for Q1 and Q4 of fiscal 2024 and Q1 of fiscal 2025 shows that the compute business is perhaps a bit stronger than many had expected and the networking business is a bit weaker. But both are clearly strong, and will continue to strengthen as fiscal 2025 rolls on. The generative AI market is growing so fast that even with intense competition there will be no way to blunt the market momentum of the CUDA platform that Nvidia has created over the past two decades and that has an incredible advantage over alternatives in HPC and AI.
But, as we have said before, we think that we are experiencing peak Nvidia right now, and maybe the party will continue out into fiscal 2026. But eventually, competition will come, the generative AI hype and hope will settle down, and AMD, Intel, the Arm collective, and others will get their share of this market. Until then, this is Nvidia’s time to make hay while the grass is tall and the sun is bright.
And boy, is Nvidia ever making hay in the datacenter.
Nvidia has two different and almost identical ways of breaking down its datacenter business.
Some compute and networking products are sold outside of the datacenter, but not very much, and some products sold into the datacenter are based on gaming cards, so the Datacenter division has slightly different revenues from the Compute and Networking group.
In Q1, to be precise, the Datacenter division had $22.56 billion in sales, up by 5.3X year on year and up 22.6 percent sequentially. In a call with Wall Street analysts, Kress said that somewhere in the mid-40 percent range of the company's Datacenter division revenues came from cloud builders; we reckon it is about 46 percent, which works out to $10.38 billion, a factor of 10X higher than the year-ago period by our model. That means the remaining $12.18 billion in datacenter product sales went to hyperscalers (like Meta Platforms), HPC centers, enterprises, and other organizations, which was only up by a factor of 3.8X. (See what we mean about the normalizing of multiples that are just not common for most companies in the five hundred year history of companies?)
The Compute and Networking group lumps together all revenues that are not from Graphics products used in PCs and workstations. In Q1, Compute and Networking comprised $22.68 billion in revenues, up by a factor of 5.1X year on year and up 26.7 percent sequentially from Q4 of fiscal 2024, which ended in January. For a short time, Nvidia provided operating income for its groups, but has not done this for a while.
In its financial report, Nvidia said that sales of datacenter compute products, mostly “Hopper” GPUs and their related platform components, rose by 5.8X year on year to $19.39 billion in fiscal Q1, and were up 28.7 percent sequentially from fiscal Q4. This is the kind of growth that a company is lucky to get on an annual basis if it is wildly successful.
For networking products, revenues rose by a mere 2.4X to $3.17 billion, but were down 4.8 percent sequentially as supply of InfiniBand products could not meet demand and the ramp of the Spectrum X Ethernet products had not yet hit appreciable volumes.
Our model indicates that InfiniBand sales were up 2.7X to $2.71 billion in Q1 2025, but down 5 percent sequentially, and comprised 85.5 percent of networking sales. Ethernet and NVSwitch sales made up the remaining $459 million in networking sales, up by a factor of 2.14X year on year but down 3.6 percent sequentially.
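The splits described in the last few paragraphs can be reconstructed from the article's own figures; here is a quick sanity check of the arithmetic (the dollar inputs are the article's, the derived values fall out of them):

```python
# Fiscal Q1 2025 figures from the article, in billions of dollars.
datacenter_total = 22.56
cloud_share = 0.46                      # the article's ~46 percent estimate
cloud = datacenter_total * cloud_share  # ~$10.38B from cloud builders
others = datacenter_total - cloud       # ~$12.18B from everyone else

networking = 3.17
infiniband = 2.71
ethernet_nvswitch = networking - infiniband  # ~$0.46B (the $459M figure)
ib_share = infiniband / networking           # ~85.5% of networking sales

print(f"cloud ${cloud:.2f}B / others ${others:.2f}B / IB share {ib_share:.1%}")
```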
Nvidia is fully embracing Ethernet in the datacenter with Spectrum X, and as we have pointed out before, it has no choice because the hyperscalers and cloud builders now want it and most enterprises are absolutely allergic to InfiniBand. They want one network, and it is Ethernet. And thus, Ethernet switching from all of the key vendors is going to become more of a fabric.
“Spectrum-X is ramping in volume with multiple customers, including a massive 100,000 GPU cluster,” Kress said on the Wall Street call. “Spectrum-X opens a brand-new market to Nvidia networking and enables Ethernet only data centers to accommodate large-scale AI. We expect Spectrum-X to jump to a multibillion-dollar product line within a year.”
What Nvidia does not talk about is how the adoption of Ethernet will affect sales of InfiniBand, but it obviously will have a cannibalizing effect. How much remains to be seen.
In the meantime, Nvidia splurged $7.8 billion in the quarter on share repurchases (not just as an investment but as a means of giving shares as part of compensation packages) and dividends, and on June 10 it will do a 10 to 1 stock split that will put its shares closer to the $100 mark that is a comfortable number for institutional and individual investors, which will help boost Nvidia’s shares even further. But Nvidia’s enormous success as it rolls through fiscal 2025 and into fiscal 2026 is really what will send Nvidia’s share price even higher. The projections are for sales of $28 billion, plus or minus 2 percent, for fiscal Q2, and we think Nvidia will easily break $100 billion in sales this year. As does anyone else who can plot four dots on a line.
This ride is not over yet. But it is the exciting part, for sure.
Nvidia co-founder and chief executive officer Jensen Huang laid out the landscape for everyone as he ended the call, and we will let him do the talking:
“We have a rich ecosystem of customers and partners who are going to announce taking our entire AI factory architecture to market. And so for companies that want the ultimate performance, we have InfiniBand computing fabric. InfiniBand is a computing fabric, Ethernet is a network. And InfiniBand, over the years, started out as a computing fabric, became a better and better network. Ethernet is a network and with Spectrum-X, we’re going to make it a much better computing fabric. And we’re committed – fully committed – to all three links. NVLink computing fabric for single computing domain to InfiniBand computing fabric to Ethernet networking computing fabric.
And so we’re going to take all three of them forward at a very fast clip. And so you’re going to see new switches coming, new NICs coming, new capability, new software stacks that run on all three of them. New CPUs, new GPUs, new networking NICs, new switches – a mound of chips that are coming. And the beautiful thing is all of it runs CUDA and all of it runs our entire software stack. So you invest today on our software stack, without doing anything at all, it’s just going to get faster and faster and faster and faster. And if you invest in our architecture today, without doing anything, it will go to more and more clouds and more and more data centers and everything just runs.
And so I think the pace of innovation that we’re bringing will drive up the capability, on the one hand, and drive down the TCO on the other hand. And so we should be able to scale out with the Nvidia architecture for this new era of computing and start this new industrial revolution where we manufacture not just software anymore, but we manufacture artificial intelligence tokens and we’re going to do that at scale.”
This market is expanding so fast that everyone can play. But for the next few years at least, Nvidia will continue to be the big winner.
https://www.nextplatform.com/2024/05/23/nvidias-enormous-financial-success-becomes-normal/
Oleblue
1 day ago
Key Hyperscalers And Chip Makers Gang Up On Nvidia’s NVSwitch Interconnect
May 30, 2024 Timothy Prickett Morgan
The generative AI revolution is making strange bedfellows, as revolutions, and the emerging monopolies that capitalize on them, often do.
The Ultra Ethernet Consortium was formed in July 2023 to take on Nvidia’s InfiniBand high performance interconnect, which has quickly and quite profitably become the de facto standard for linking GPU accelerated nodes to each other. And now the Ultra Accelerator Link consortium is forming from many of the same companies to take on Nvidia’s NVLink protocol and NVLink Switch (sometimes called NVSwitch) memory fabric for linking GPUs into shared memory clusters inside of a server node and across multiple nodes in a pod.
Without question, the $6.9 billion acquisition of Mellanox Technologies, which was announced in March 2019 and which closed in April 2020, was a watershed event for Nvidia, and it has paid for itself about three times over since Mellanox was brought onto the Nvidia books.
The networking business at Nvidia was largely driven by Quantum InfiniBand switching sales, with occasional high volume sales of Spectrum Ethernet switching products to a few hyperscalers and cloud builders. And that Ethernet business and experience with InfiniBand has given Nvidia the means to build a better Ethernet, the first iteration of which is called Spectrum X, to counter the efforts of the Ultra Ethernet Consortium, which seeks to build a low-latency, lossless variant of Ethernet that has all of the goodies of congestion control and dynamic routing of InfiniBand (implemented in unique ways) with the much broader and flatter scale of Ethernet, with a stated goal of eventually supporting more than 1 million compute engine endpoints in a single cluster with few levels of networking and respectable performance.
NVLink started out as a way to gang up the memories on Nvidia GPU cards, and eventually Nvidia Research implemented a switch to drive those ports, allowing Nvidia to link more than the two GPUs of a barbell topology or the four GPUs of a crisscrossed square topology commonly used for decades to create two-socket and four-socket servers based on CPUs. Several years ago, AI systems needed eight or sixteen GPUs sharing their memory to make the programming easier and the datasets accessible to those GPUs at memory speeds, not network speeds. And so the NVSwitch that was in the labs was quickly commercialized in 2018 on the DGX-2 platform based on “Volta” V100 GPU accelerators.
We discussed the history of NVLink and NVSwitch in detail back in March 2023, a year after the “Hopper” H100 GPUs launched and when the DGX H100 SuperPOD systems, which could in theory scale to 256 GPUs in a single GPU shared memory footprint, debuted. Suffice it to say, NVLink and its NVLink Switch fabric have turned out to be as strategic to Nvidia’s datacenter business as InfiniBand is and as Ethernet will likely become. And many of the same companies that were behind the Ultra Ethernet Consortium effort to agree on a common set of augmentations for Ethernet to take on InfiniBand are now getting together to form the Ultra Accelerator Link, or UALink, consortium to take on NVLink and NVSwitch and provide a more open shared memory accelerator interconnect that is supported on multiple technologies and is available from multiple vendors.
The kernel of the Ultra Accelerator Link consortium was planted last December when CPU and GPU maker AMD and PCI-Express switch maker Broadcom said that the xGMI and Infinity Fabric protocols used to link its Instinct GPU memories to each other and also to the memories of CPU hosts using the load/store memory semantics of NUMA links for CPUs would be supported on future PCI-Express switches from Broadcom. We had heard that it would be a future “Atlas 4” switch that adheres to the PCI-Express 7.0 specification, which would be ready for market in 2025. Jas Tremblay, vice president and general manager of the Data Center Solutions Group at Broadcom, confirms that this effort is still underway, but don’t jump to the wrong conclusion. Do not assume that PCI-Express will be the only UALink transport, or that xGMI will be the only protocol.
AMD is contributing the much broader Infinity Fabric shared memory protocol as well as the more limited and GPU-specific xGMI, to the UALink effort, and all of the other players are agreeing to use Infinity Fabric as the standard protocol for accelerator interconnects. Sachin Katti, senior vice president and general manager of the Network and Edge Group at Intel, said that the Ultra Accelerator Link “promoter group” that is comprised of AMD, Broadcom, Cisco Systems, Google, Hewlett Packard Enterprise, Intel, Meta Platforms, and Microsoft is looking at using the Layer 1 transport level of Ethernet with Infinity Fabric on top as a way to glue GPU memories into a giant shared space akin to NUMA on CPUs.
Here is the concept of creating the UALink GPU and accelerator pods:
And here is how you use Ethernet to link the pods into larger clusters:
No one is expecting to link GPUs from multiple vendors inside one chassis or maybe even one rack or one pod of multiple racks. But what the UALink consortium members do believe is that system makers will create machines that use UALink and allow accelerators from many players to be put into these machines as customers build out their pods. You could have one pod with AMD GPUs, one pod with Intel GPUs, and another pod with some custom accelerators from any number of other players. It allows commonality of server designs at the interconnect level, just like the Open Accelerator Module (OAM) spec put out by Meta Platforms and Microsoft allows commonality of accelerator sockets on system boards.
Wherefore Art Thou CXL?
We know what you are thinking: Were we not already promised this same kind of functionality with the Compute Express Link (CXL) protocol running atop PCI-Express fabrics? Doesn’t the CXL.mem subset already offer the sharing of memory between CPUs and GPUs? Yes, it does. But PCI-Express and CXL are much broader transports and protocols. Katti says that the memory domain for pods of AI accelerators is much larger than the memory domains for CPU clusters, which as we know scale from 2 to 4 to sometimes 8 to very rarely 16 compute engines. GPU pods for AI accelerators scale to hundreds of compute engines, and need to scale to thousands, many believe. And unlike CPU NUMA clustering, GPU clusters in general, and those running AI workloads in particular, are more forgiving when it comes to memory latency, Katti tells The Next Platform.
So don’t expect to see UALinks lashing together CPUs, but there is no reason to believe that future CXL links won’t eventually be a standard way for CPUs to share memory – perhaps even across different architectures. (Stranger things have happened.)
This is really about breaking the hold that NVLink has when it comes to memory semantics across interconnect fabrics. For anything Nvidia does with NVLink and NVSwitch, its several competitors need to have a credible alternative, whether they are selling GPUs, other kinds of accelerators, or whole systems, because prospective customers most definitely want more open and cheaper alternatives to the Nvidia interconnect for AI server nodes and rackscale pods of gear.
“When we look at the needs of AI systems across datacenters, one of the things that’s very, very clear is the AI models continue to grow massively,” says Forrest Norrod, general manager of the Data Center Solutions group at AMD. “Everyone can see this means that for the most advanced models that many accelerators need to work together in concert for either inference or training. And being able to scale those accelerators is going to be critically important for driving the efficiency, the performance, and the economics of large scale systems going out into the future. There are several different aspects of scaling out, but one of the things that all of the promoters of Ultra Accelerator Link feel very strongly about is that the industry needs an open standard that can be moved forward very quickly, an open standard that allows multiple companies to add value to the overall ecosystem. And one that allows innovation to proceed at a rapid clip unfettered by any single company.”
That means you, Nvidia. But, to your credit, you invested in InfiniBand and you created NVSwitch with absolutely obese network bandwidth to do NUMA clustering for GPUs. And you did it because PCI-Express switches are still limited in terms of aggregate bandwidth.
Here’s the funny bit. The UALink 1.0 specification will be done in the third quarter of this year, and that is also when the Ultra Accelerator Link Consortium will be incorporated to hold the intellectual property and drive the UALink standards. That UALink 1.0 specification will provide a means to connect up to 1,024 accelerators into a shared memory pod. In Q4 of this year, a UALink 1.1 update will come out that pushes up scale and performance even further. It is not yet clear whether the 1.0 and 1.1 UALink specs will support PCI-Express transports, Ethernet transports, or both.
NVSwitch 3 fabrics using NVLink 4 ports could in theory span up to 256 GPUs in a shared memory pod, but only eight GPUs were supported in commercial products from Nvidia. With NVSwitch 4 and NVLink 5 ports, Nvidia can in theory support a pod spanning up to 576 GPUs, but in practice commercial support is only being offered on machines with up to 72 GPUs in the GB200 NVL72 system.
https://www.nextplatform.com/2024/05/30/key-hyperscalers-and-chip-makers-gang-up-on-nvidias-nvswitch-interconnect/