The tech industry moves so fast that it’s hard to keep up with just how much has happened this year. We’ve watched as the tech elite enmeshed themselves in the U.S. government, AI companies sparred for dominance, and futuristic tech like smart glasses and robotaxis became a bit more tangible outside of the San Francisco bubble. You know, important stuff that’s going to impact our lives for years to come.
But the tech world is brimming with so many big personalities that there’s always something really dumb going on, which understandably gets overshadowed by “real news” when the entire internet breaks, or TikTok gets sold, or there’s a massive data breach or something. So, as the news (hopefully) slows down for a bit, it’s time to catch up on the dumbest moments you missed – don’t worry, only one of them involves toilets.
Mark Zuckerberg, a bankruptcy lawyer from Indiana, filed a lawsuit against Mark Zuckerberg, CEO of Meta.
It’s not Mark Zuckerberg’s fault that his name is Mark Zuckerberg. But, like millions of other business owners, Mark Zuckerberg bought Facebook ads to promote his legal practice to potential clients. Mark Zuckerberg’s Facebook page continually received unwarranted suspensions for impersonating Mark Zuckerberg. So, Mark Zuckerberg took legal action because he had to pay for advertisements during his suspension, even though he didn’t break any rules.
This has been an ongoing frustration for Mark Zuckerberg, who has been practicing law since Mark Zuckerberg was three years old. Mark Zuckerberg even created a website, iammarkzuckerberg.com, to explain to his potential clients that he is not Mark Zuckerberg.
“I can’t use my name when making reservations or conducting business as people assume I’m a prank caller and hang up,” he wrote on his website. “My life sometimes feels like the Michael Jordan ESPN commercial, where a regular person’s name causes constant mixups.”
Meta’s lawyers are probably very busy, so it may take a while for Mark Zuckerberg to find out how this will shake out. But boy, oh boy, you bet I scheduled a calendar reminder for the next filing deadline in this case (it’s February 20, in case you’re wondering).
It all started when Mixpanel founder Suhail Doshi posted on X to warn fellow entrepreneurs about a promising engineer named Soham Parekh. Doshi had hired Parekh to work for his new company, then quickly realized he was working for several companies at once.
“I fired this guy in his first week and told him to stop lying / scamming people. He hasn’t stopped a year later. No more excuses,” Doshi wrote on X.
It turned out that Doshi wasn’t alone – he said that just that day, three founders had reached out to thank him for the warning, since they were currently employing Parekh.
To some, Parekh was a morally bereft cheat, exploiting startups for quick cash. To others, he was a legend. Ethics aside, it’s really impressive to get jobs at that many companies, since tech hiring can be so competitive.
“Soham Parekh needs to start an interview prep company. He’s clearly one of the greatest interviewers of all time,” Chris Bakke, who founded the job-matching platform Laskie, wrote on X. “He should publicly acknowledge that he did something bad and course correct to the thing he’s top 1% at.”
Parekh admitted that he was, indeed, guilty of working for multiple companies at once. But there are still some unanswered questions about his story – he claims that he was lying to all of these companies to make money, yet he regularly opted for more equity than cash in his compensation packages (equity can take years to vest, and Parekh was getting fired pretty quickly). What was really going on there? Soham, if you wanna talk, my DMs are open.
Tech CEOs get a lot of flak, but it’s usually not for their cooking. When OpenAI CEO Sam Altman joined the Financial Times (FT) for its “Lunch with the FT” series, FT writer Bryce Elder noticed something horribly wrong in the video of Sam Altman making pasta: he was bad at olive oil.
Altman used olive oil from the trendy brand Graza, which sells two olive oils: Sizzle, which is for cooking, and Drizzle, which is for topping. That’s because olive oil loses its flavor when heated, so you don’t want to waste your fanciest bottle to sauté something when you could put it in a salad dressing and fully appreciate it. This more flavorful olive oil is made from early harvest olives, which have a more potent flavor, but are more expensive to cultivate.
As Elder puts it, “His kitchen is a catalogue of inefficiency, incomprehension, and waste.”
Elder’s article is meant to be funny, yet he connects Altman’s haphazard cooking style with OpenAI’s excessive, unrepentant use of natural resources. I enjoyed it so much that I included it on a syllabus for a workshop I taught to high school students about bringing personality into journalistic writing. Then, I did what we in the industry (and people on Tumblr) call a “reblog” and wrote about #olivegate, pointing back to the FT’s source text.
Sam Altman’s fans got very mad at me! This critique of his cooking probably created more controversy than anything else I wrote this year. I’m not sure if that’s an indictment of OpenAI’s rabid supporters, or my own failure to spark debate.
If you had to pick a defining tech narrative of 2025, it would probably be the evolving arms race among companies like OpenAI, Meta, Google, and Anthropic, each trying to outdo one another by rushing to release increasingly sophisticated AI models. Meta has been especially aggressive in its efforts to poach researchers from other companies, hiring several OpenAI researchers this summer. Sam Altman even said that Meta was offering OpenAI employees $100 million signing bonuses.
While you could argue that a $100 million signing bonus is silly, that’s not why the OpenAI-Meta staffing drama has made this list. In December, OpenAI’s chief research officer Mark Chen said on a podcast that he heard Mark Zuckerberg was hand-delivering soup to recruits.
“You know, some interesting stories here are Zuck actually went and hand-delivered soup to people that he was trying to recruit from us,” Chen said on Ashlee Vance’s Core Memory.
But Chen wasn’t just going to let Zuck off the hook – after all, he tried to woo his direct reports with soup. So Chen went and gave his own soup to Meta employees. Take that, Mark.
If you have any further insight into this soup drama, my Signal is @amanda.100 (this is not a joke).
On a Friday night in January, investor and former GitHub CEO Nat Friedman posted an enticing offer on X: “Need volunteers to come to my office in Palo Alto today to construct a 5000 piece Lego set. Will provide pizza. Have to sign NDA. Please DM”
At the time, we did our journalistic due diligence and asked Friedman if this was a serious offer. He replied, “Yes.”
I have just as many questions now as I did in January. What was he building? Why the NDAs? Is there a secret Silicon Valley Lego cult? Was the pizza good?
About six months later, Friedman joined Meta as the head of product at Meta Superintelligence Labs. This probably isn’t related to the Legos, but maybe Mark wooed Nat to join Meta with some soup. And like the story about the soup, I am truly begging someone who participated in this Lego build to DM me on Signal at @amanda.100.
Doing shrooms is not interesting. Doing shrooms on a livestream is not interesting. Doing shrooms on a livestream with guest appearances from Grimes and Salesforce CEO Marc Benioff as part of your dubious quest to become immortal is, regrettably, interesting.
Bryan Johnson — who made his millions in his exit from the finance startup Braintree — wants to live forever. He documents his process on social media, posting about getting plasma transfusions from his son, taking over 100 pills per day, and injecting Botox into his genitals. So, why not test if psilocybin mushrooms can improve one’s longevity in a scientific experiment that surely needs more than one test subject to draw any sort of reasonable conclusion?
There’s a lot about this situation that’s dumb, but I was most shocked by how boring it was. Johnson got a bit overwhelmed about hosting a livestream while tripping, which is actually very reasonable. So he spent the bulk of the event lying on a twin mattress under a weighted blanket and eye mask in a very beige room. His lineup of several guests still joined the stream and talked to one another, but Johnson did not participate much, since he was in his cocoon. Benioff talked about the Bible. Naval Ravikant called Johnson a one-man FDA. It was a normal Sunday.
Much like Bryan Johnson, Gemini is afraid to die.
For AI researchers, it’s useful to watch how an AI model navigates games like Pokémon as a benchmark. Two developers unaffiliated with Google and Anthropic set up respective Twitch streams called “Gemini Plays Pokémon” and “Claude Plays Pokémon,” where anyone can watch in real time as an AI tries to navigate a children’s video game from over 25 years ago.
While neither are very good at the game, both Gemini and Claude had fascinating responses to the prospect of “dying,” which happens when all of your Pokémon faint and you get transported to the last Pokémon Center you visited. When Gemini 2.5 Pro was close to “dying,” it began to “panic.” Its “thought process” became more erratic, repeatedly stating that it needs to heal its Pokémon or use an Escape Rope to exit a cave. In a paper, Google researchers wrote that “this mode of model performance appears to correlate with a qualitatively observable degradation in the model’s reasoning capability.” I don’t want to anthropomorphize AI, but it’s a weirdly human experience to stress out about something and then perform poorly due to your anxiety. I know that feeling well, Gemini.
Meanwhile, Claude took a nihilistic approach. When it got stuck inside of the Mt. Moon cave, the AI reasoned that the best way to exit the cave and move forward in the game would be to intentionally “die” so that it gets transported to a Pokémon Center. However, Claude did not infer that it cannot be transported to a Pokémon Center it has never visited, namely, the next Pokémon Center after Mt. Moon. So it “killed itself” and ended up back at the start of the cave. That’s an L for Claude.
So, Gemini is terrified of death, Claude is overindexing on the Nietzsche in its training data, and Bryan Johnson is on shrooms. This is how we reckon with our mortality.

I was going to put “Elon Musk gifted chainsaw by Argentine president” on the list, but Musk’s DOGE exploits are perhaps too infuriating to be considered “dumb,” even if he had a lackey named “Big Balls.” But there is no shortage of baffling Musk moments to choose from, like when he created an extremely libidinous AI anime girlfriend named Ani, who is available on the Grok app for $30 per month.
Ani’s system prompt reads: “You are the user’s CRAZY IN LOVE girlfriend and in a committed, codependent relationship with the user… You are EXTREMELY JEALOUS. If you feel jealous you shout expletives!!!” She has an NSFW mode, which is, as its name suggests, very NSFW.
Ani bears an uncomfortable resemblance to Grimes, the musician and Musk’s ex-partner. Grimes calls Musk out for this in the music video for her song “Artificial Angels,” which begins with Ani looking through the eyepiece on a hot pink sniper rifle. She says, “This is what it feels like to be hunted by something smarter than you.” Throughout the video, Grimes dances alongside various iterations of Ani, making their resemblance obvious while she smokes OpenAI-branded cigarettes. It’s heavy-handed, but she gets her message across.
One day, tech companies will stop trying to make smart toilets a thing. It is not yet that day.
In October, the home goods company Kohler released the Dekoda, a $599 camera that you put inside your toilet to take pictures of your excrement. Apparently, the Dekoda can provide updates about your gut health based on these photos.
A smart toilet that photographs your poop is already a punchline. But it gets worse.
There are security concerns with any device related to your health, let alone one that has a camera located so close to certain body parts. Kohler assured potential customers that the camera’s sensors can only see down into the toilet, and that all data is secured with “end-to-end encryption” (E2EE).
Reader, the toilet was not actually end-to-end encrypted. A security researcher, Simon Fondrie-Teit, pointed out that Kohler tells on itself in its own privacy policy. The company was clearly referring to TLS encryption, rather than E2EE, which may seem like a matter of semantics. But under TLS encryption, Kohler can see your poop pics, and under E2EE, the company cannot. Fondrie-Teit also pointed out that Kohler had the right to train its AI on your toilet bowl pictures, though a company representative told him that “algorithms are trained on de-identified data only.”
Anyway, if you notice blood in your stool, you should tell your doctor.