
Mingdom Moment: 2024.04.30

Best of Youtube => updates to my mental model

The Tim Ferriss podcast was my go-to podcast back in 2014-2018. One of his quotes from those days that really stuck with me through the years: you are the average of the people you associate with the most.

Having lived through COVID, years of working-from-home, and most recently quitting my corporate job last year, I’m finding that those I associate with are becoming increasingly digital.

With the rise of social media, it’s perhaps more important than ever to be selective of who you associate with or listen to. Yes this leads to information bubbles, but is that really a bad thing if I can choose a good bubble?

The positive side is that we now have essentially free and unlimited access to content featuring the best of the best in the world. Instead of audio podcasts, long-form videos on Youtube have now become my preferred choice to digest information.

In this post, I’ll cover some of the best Youtube content I watched over the past month or so. At the end, I’ll pretend I’m just an AI agent and summarize my thoughts.

Best of Youtube

Zuckerberg long-form Youtube Interview and leaked emails

What stood out the most to me: Meta’s genAI effort happened sort of by accident. They initially bought Nvidia GPUs to meet the training needs of Instagram Reels. Zuck, exercising a rule of thumb from prior experience scaling the company, decided to buy twice the GPUs needed. Then the GenAI revolution happened, and they used the extra compute to train Llama. Llama 3 just came out and is already integrated into all the Meta products; it’s also now widely considered the best open-source LLM across cost, speed, and overall ability.

Unlike companies whose entire business model is the AI itself (i.e. OpenAI or Anthropic), Meta is in a position to benefit from open-sourcing LLMs because they use them just as infrastructure/tooling for building products, much like React or PyTorch.

With both Meta and Zuck, I’m reminded of how much and how quickly the perception has changed for them over the last couple of years:

  • Facebook went from being one of the most hated and uninvestable companies to once again being one of the cool kids on the block.

  • Meta stock fell to $88 in 2022 due to both the bad sentiment and the narrative at the time that they were spending too much money on CAPEX and VR.

  • Turns out most of that CAPEX went into AI infrastructure (translation: Nvidia GPUs), and those investments have now more than paid for themselves.

  • Zuck has also been doing more long-form interviews, which I think play much more to his strengths and personal image. Would also recommend the three episodes he did with Lex Fridman.

Facebook / Meta has been one of my top core positions since their IPO, and one of the reasons is that I believe their ability to move fast (and sometimes break things) is unmatched. It goes hand-in-hand with their ability to attract top talent - it’s well-known that they give the best comp packages among the large tech companies and many of my friends who started their careers at Microsoft have since “graduated” to Meta.

As an investor, I found the leaked email exchanges from Zuck on the Instagram acquisition and their investments into AR/VR enlightening. They show me that Zuck’s success was no accident: he’s extremely cunning, with massive long-term ambitions for the company.

Yann LeCun on Lex Fridman

Yann is the chief AI scientist at Meta. We listened to this one during a long car ride, and I must admit that most of what he said went over my head. But the part that stuck out the most to me is that Yann thinks the “auto-regressive LLMs” that ChatGPT and all the current GenAI hype are built on are not going to lead to AGI. He talked at length about how the real world is very different from just language, and how a research effort he was a part of, which tried to apply the LLM approach to predicting the next frame in a video, went nowhere in 10 years. So very likely other breakthroughs are needed before we achieve AGI, not just continued scaling of LLMs.

He summarized his thoughts on how “smart” he thinks LLMs are in a tweet.

Mike Schroepfer with Kevin Scott, VC & ex-Meta CTO

The current CTO of Microsoft interviews the former CTO of Meta!

  • Mike was at Facebook for most of its formative years. The challenges over the years were always fresh, which kept him engaged.

  • This is the guy who helped create React & PyTorch, open-source libraries that are now the industry standard / leader in their respective areas. One of his superpowers is being able to identify and imagine platforms with huge scale.

  • He really enjoyed college; he had too many favorite courses to name. He enjoyed both learning and teaching what he learned as a TA at Stanford. Boy, I wish I could say the same about my time at Waterloo.

  • After leaving his role as CTO of Meta, Mike continues to be involved with Meta as a Sr Fellow. He mentioned being a big fan of Zuck.

  • His current VC fund invests primarily in climate-change technology.

Acquired Podcast with Ben Gilbert & David Rosenthal: Microsoft, Part 1

You could tell three things about Bill Gates pretty quickly. He was really smart, he was really competitive, and he wanted to show you how smart he was. And he was really, really persistent.

Paul Allen, Co-founder of Microsoft

Acquired is my favorite suuuuper long podcast, with each episode generally running more than 3 hours. This episode is a special one because they’re doing a deep dive on Microsoft, the company I joined as a fresh kid out of college.

Part One covers the very beginning, from Bill Gates’ childhood and family life to the early years of Microsoft. At 4.5h long, it’s way too long to summarize but here are the main takeaways for me:

  1. Bill Gates’ parents are both ballers, it’s no accident that he is who he is.

  2. Bill didn’t wonder if he was going to be a CEO or not; his question since childhood was “which company should I go be the CEO of?”

  3. Luck x Skill: Bill Gates & Paul Allen were at the right place at the right time, being in one of the few places on earth with access to computers during their high school years. It’s also absolutely undeniable that they had the insane technical and business skills needed to develop Microsoft as a business.

  4. Gates originally wanted to be a mathematician; he took #1 in the state of WA during high school but realized in college that he wasn’t going to be among the best, and it “hurt his motivation”.

  5. Steve Ballmer is better at math than Bill Gates! Gates worked very hard to get Ballmer to join Microsoft and gave him 8.5% of the company (extremely abnormal given that Microsoft already had over 30 employees). This blew my mind because I was at Microsoft during the Ballmer years, and it never occurred to me that Ballmer is a math genius. He’s perhaps the most underrated CEO of our time.

  6. Microsoft had the best talent in its early days. It really hits home for me how much talent matters in tech: a huge part of the moat in a software company is its talent and IP.

  7. The only time Microsoft diluted shareholders was when they took a $1 million check from a VC firm (TVI) for 5% of the company in 1981. Microsoft was already profitable at the time and didn’t really need the money. When they went public in 1986, their valuation was just 1x their revenue. Crazy times that we will probably never see again, as it’s normal for tech companies to be worth 10x their annual revenue today.

Check out the show-notes for a more detailed summary, but I highly encourage you to listen to the whole thing. Can’t wait for the next episode, which I assume will cover some of the years that I was there. As an aside - I actually met Ben Gilbert (the podcast host) when he interned at Microsoft on the same team (Office Web Apps) back in 2011.

Warren Buffett and Bill Gates panel from 1998

Found this one because they mentioned it during the Acquired podcast on Microsoft, and boy was this a treat. First, it’s entertaining to watch, as Buffett especially is quite funny. Getting their advice for success, aimed at University of Washington students, is the icing on the cake.

Remember that this was 1998! The internet was not a sure thing, and Microsoft had quickly grown from a group of upstarts into the big “evil” megacorp facing a major antitrust lawsuit (which they later lost). It’s ironic how much the narrative on Microsoft has changed since then: even though Microsoft is now the biggest company in the world by market cap, they are also cool again.

At the same time, so much of what they were talking about in this talk is still relevant. For example, they were talking about the internet kind of like we are talking about ChatGPT now.

Both Bill and Warren seem to have a deep degree of self-awareness about what they are good at and what they want to do next. Buffett joked that he will be running Berkshire even 5 years after his death, and he’s still the CEO/chairman today at 93. Bill, meanwhile, said he could see himself handing the CEO title to someone else within the next decade - and indeed he handed it to Steve Ballmer in 2000!

Joe Tsai, co-founder of Alibaba

There’s a proverb in China: 枪打出头鸟. It literally translates to “the bird that sticks its head out gets shot”. Well, Jack Ma was that bird who got shot by the CCP (not literally), so now we just get to talk to the other founder of Alibaba.

This is the first time I’ve heard Joe Tsai speak, and he seemed overall humble, intelligent, and articulate. I enjoyed his candid take on China’s economy and on Alibaba needing a turnaround, as they have lost ground to competitors like JD and PDD. He thinks that China is around 2 years behind the US in AI, and that the US sanctions on AI chips for China have hurt Alibaba’s cloud business.

I was surprised by how un-Chinese he seemed in this interview, and the fact that he was somehow able to buy an NBA basketball team? Then I googled it… and it’s because he’s Taiwanese/Canadian and not a Chinese national. Sense restored.

20VC: Sam Altman and Brad Lightcap interview

The highlight that resonated with me was Sam essentially declaring that OpenAI will put most startups building on top of OpenAI out of business.

Based on my first-hand experience building a simple stock-analyzer app using an LLM, I can say that it’s super easy to build an app on top of the OpenAI API (or any other LLM) these days. By the same token, this means most apps that are just a simple wrapper on top of OpenAI + some publicly accessible data have no moat and will eventually become a commodity.
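To make the “simple wrapper” point concrete, here’s a minimal sketch of what such an app boils down to. This is my own illustration, not the actual app from the post: function and field names like `build_analysis_request` are hypothetical, and the request is only assembled here, never sent.

```python
import json

# Hypothetical sketch: a "wrapper" app mostly just glues public data onto a
# prompt and forwards it to a chat-completions endpoint. That's the whole moat.

def build_analysis_request(ticker: str, fundamentals: dict, model: str = "gpt-4o") -> dict:
    """Assemble the JSON body for a chat-completions call (not sent here)."""
    prompt = (
        f"Analyze the stock {ticker} given these fundamentals:\n"
        + json.dumps(fundamentals, indent=2)
        + "\nSummarize strengths, risks, and a one-line verdict."
    )
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a careful financial analyst."},
            {"role": "user", "content": prompt},
        ],
    }

# Example: everything app-specific lives in this one prompt-building step.
request_body = build_analysis_request("META", {"pe_ratio": 24.1, "revenue_growth": "27%"})
```

Everything else (the model, the reasoning, the data) is someone else’s commodity, which is exactly why such apps are easy to build and easy to displace.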

Sam seems overall very optimistic and thinks that the main constraint moving forward is compute and total power available to us.

BG2 pod with Bill Gurley, Brad Gerstner and Sunny Madra

One of my favorite new podcasts to watch weekly. What I found most interesting:

  • Meta released Llama 3, and it immediately became the most-used LLM on Groq. Sunny (who works at Groq) says users are migrating from GPT-4 to Llama 3 because it costs 10x less and has comparable results.

  • LLM APIs are mostly plug-and-play thanks to OpenAI paving the way; other LLMs have similar, if not identical, APIs to OpenAI’s.

  • Zuck announcing that he’s spending $100B on AI is a kind of “scorched earth” tactic, essentially signaling to others trying to train LLMs: don’t bother unless you have that kind of money.

  • Meta is currently giving away their latest and greatest LLM for free, but they hold the rights to monetize it later. They will already make some money from cloud providers based on Llama usage.

  • Cool to see that they watch many of the same podcasts as me.
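The plug-and-play point in the list above can be sketched in a few lines. This is an illustration under assumptions: providers that copy the OpenAI chat-completions shape (Groq exposes one such endpoint) differ only in base URL, model name, and API key. The `chat_request` helper and the exact URL/model strings are examples of mine, not from the podcast.

```python
# Two "OpenAI-compatible" providers, one payload schema. Only the endpoint
# and the model string change; nothing is actually sent over the network here.

def chat_request(base_url: str, model: str, user_msg: str):
    """Build (endpoint, payload) for any OpenAI-compatible chat API."""
    endpoint = base_url.rstrip("/") + "/chat/completions"
    payload = {"model": model, "messages": [{"role": "user", "content": user_msg}]}
    return endpoint, payload

openai_call = chat_request("https://api.openai.com/v1", "gpt-4", "Hello")
groq_call = chat_request("https://api.groq.com/openai/v1", "llama3-70b-8192", "Hello")

# Same payload keys either way, which is what makes migration nearly free.
assert openai_call[1].keys() == groq_call[1].keys()
```

This is why users could migrate from GPT-4 to Llama 3 on Groq so quickly: switching providers is roughly a two-string change.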

Concluding thoughts

Believe it or not, the above is just a very small fraction of my Youtube consumption as I’m basically an LLM trained on top of Youtube’s recommendations to me.

Common themes / updates to my mental model of the world:

  1. The best make their own luck. It’s easy to call billionaires like Bill Gates or Mark Zuckerberg lucky (and they are undeniably lucky) but it also takes tremendous skill to capitalize on that luck.

  2. The biggest moat for tech companies (especially in software) is their velocity to release & iterate. This is directly tied to their ability to attract and retain top talent. And of course, the founder or CEO who sits at the very top plays a huge role here (see #1).

  3. Public perception tends to shift quickly to extremes - such as the perception of Meta & Mark Zuckerberg. This can lead to exaggerated stock movements, which long-term investors can (at least for now) exploit for profit.

  4. Speaking of exaggerations, the current hype around GenAI & LLMs is probably overblown. I am in the Yann LeCun camp that other breakthroughs are needed before AGI. But the influx of talent into the space should increase the chances of those breakthroughs.

I must caution my handful of readers that these updates are more like reinforcements against existing beliefs rather than brand new ideas. I am after all a moderately old LLM trained on nearly 40 years of existing data. I also know that I’m highly biased due to the curated data that was fed into my neural nets.

But I’d love to learn a different perspective - if you have one, please share!
