Meta and Apple were about to go to the woodshed in Europe, but it looks like Trump’s tariffs have run interference for them. Everyone wants in on stablecoins, example number 23. Beware of phishing emails from Google.com. And are OpenAI’s latest models good, bad, or just “jagged”?
The podcast begins with a discussion of Meta's and Apple's regulatory challenges in Europe over alleged anti-competitive practices. The Trump administration's tariff strategy appears to have bought the companies some time, as the EU delayed penalty announcements originally expected in mid-April 2025. The conversation highlights the interplay between tech regulation and trade negotiations, and the EU's firm stance on the Digital Markets Act, which is aimed at giving smaller companies a fairer footing against big tech.
Next, the focus shifts to the evolving landscape of stablecoins and cryptocurrencies in the financial sector. Firms like Circle and Paxos are exploring bank charters as the Trump administration seeks to integrate crypto into mainstream finance. The podcast details the expected legislation that would enforce stricter regulations on stablecoin issuance, although some companies are optimistic about the opportunities it presents.
The hosts then share an alarming report on phishing scams, detailing how hackers spoofed communications from Google through a sophisticated DKIM replay attack: because the attackers resend a message Google itself legitimately signed, it passes authentication checks and appears to come from a trusted address. The episode stresses the need for vigilance even when messages seem to originate from legitimate sources like Google, and notes that the attackers' targeting of technically sophisticated individuals illustrates the growing complexity of online fraud.
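To make the replay idea concrete, here is a minimal sketch of why such an attack can pass verification. This is not real DKIM (which uses RSA or Ed25519 signatures over a canonicalized header set, per RFC 6376) and the key, headers, and body below are invented for illustration; HMAC stands in for the domain signature. The point it demonstrates: the signature covers only the message's signed headers and body, not the delivery envelope, so resending the identical message to a new recipient still verifies.

```python
# Simplified stand-in for DKIM signing/verification (HMAC instead of
# the real public-key signature; key and message contents are invented).
import hashlib
import hmac

DOMAIN_KEY = b"example-domain-signing-key"  # stands in for the sender's DKIM key

def dkim_sign(headers: dict, body: str) -> str:
    # Real DKIM signs a hash of the body plus selected headers (From,
    # Subject, ...). Note: no recipient envelope information is included.
    body_hash = hashlib.sha256(body.encode()).hexdigest()
    header_blob = "\n".join(f"{k}:{v}" for k, v in sorted(headers.items()))
    return hmac.new(DOMAIN_KEY, f"{header_blob}\n{body_hash}".encode(),
                    hashlib.sha256).hexdigest()

def dkim_verify(headers: dict, body: str, signature: str) -> bool:
    return hmac.compare_digest(dkim_sign(headers, body), signature)

# The sender legitimately signs a message (e.g., an automated alert).
headers = {"From": "no-reply@google.com", "Subject": "Security alert"}
body = "We detected unusual activity on your account."
sig = dkim_sign(headers, body)

# An attacker replays the identical message to a new victim: the
# envelope recipient changed, but nothing the signature covers did,
# so verification still succeeds on the replayed copy.
assert dkim_verify(headers, body, sig)
```

This is why the reported scam is hard to spot: the email genuinely carries a valid signature from the real domain, so authentication indicators in the mail client show it as legitimate.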
The episode wraps up with a discussion of OpenAI's new o3 and o4-mini AI models. A notable concern is their increased tendency to hallucinate, producing inaccurate outputs despite improvements in other areas. The conversation captures a divided expert opinion on whether these advancements signify a step toward AGI or highlight ongoing challenges in AI reliability.