“Myth” (2010) by Damien Hirst, on view at the MMCA (National Museum of Modern and Contemporary Art, Korea) in Seoul. “One side presents a white unicorn symbolizing the sublime, while the other reveals its flayed form…even sacred creatures are finite beings destined to face death.” Our own beliefs and myths can be subject to the same deconstruction.

I recently spent a buzzy four days in Seoul at the US Embassy’s invitation to teach AI workshops to a small, hand-selected cohort of Korea’s brightest young professionals, researchers, and entrepreneurs at TechCamp 2026. This was probably the easiest invite I’ve ever said yes to: I’ve always admired Korean culture, the language feels like home (there is some historical debate about Korean’s overlap with the Dravidian languages of India), and I appreciate the country’s resilience through a history shaped by multiple wars and colonization, growing into a major technological power. Somehow the brief trip exceeded even my high expectations: days of working closely with participants, advising on their AI hackathon, spending every day with the other US AI experts (each with their own unique vantage point on the industry), and some much-needed solo exploration of Seoul.

“Prosperity” as a North Star

The program’s theme, set by the US Embassy, was the “AI prosperity stack”. Participants were encouraged to build AI prototypes with prosperity in mind, particularly individual and societal well-being. The theme yielded a bevy of truly special, uniquely Korean prototypes and product ideas: everything from using robotics models for the historical preservation of heritage items, to a Constitutional AI system for truth-seeking, to an integrated app for mothers (and their partners) to manage postpartum depression. A common thread I noticed throughout was a focus on application-specific use cases and clear pain points to accelerate adoption. This was also evident in a keynote delivered by one of the founders of Upstage (a Korean LLM startup), which focused on adoption in higher-risk industries.

Though AI career paths in Korea are highly structured, competitive, and linear, I strongly urged the participants to remain unconventional, define the shape of their own futures, and focus less on rigid corporate roles, with the heavy grain of salt that this may be harder to do in Korea than in the US.

I enjoyed advising on these various ideas as we worked together to pull apart potential unintended affective harms (for example, ensuring the app for moms doesn’t put the burden on struggling mothers or provide them with harmful AI companions), to discuss the challenges of using non-deterministic models for truth-seeking, and to understand the labor and cultural implications of having robots produce heritage items. It was refreshing to see solutions so tightly scoped to tangible, real-world issues specific to the Korean experience. Given the pace of frontier AI development, I think it’s easy to forget how these human-centric, real-world, often “smaller”- or “banal”-seeming issues get left on the table for more alluring or sexier pursuits (a former colleague used to call this the “sexy kitchen problem”). My hope is that as models become more globally adopted, we keep pursuing distinct societal challenges with culturally aware AI systems, apps, and products.

Safety as a Living Practice

The workshops I ran focused on two topics: 1) how to risk-assess and safely deploy frontier multimodal models, and 2) the trust problem: AI, online authenticity, and the future of epistemic systems. I worried that both were a bit wonky for this audience, but I was very surprised by the detail and contextual understanding behind participants’ questions and the sheer enthusiasm around AI safety issues. We got into the weeds with some challenging case studies on assessing a powerful media model and prioritizing specific governance solutions on 6-month, 1-year, and 3+ year timelines to improve our collective epistemic futures. More to come here in future blogs (!), but some key highlights from my workshops:

A. Designing for safety means assessing risk end-to-end and exploring mitigations from pre-training through post-deployment monitoring.
B. There are always safety tradeoffs. No frontier AI model is 100% safe; this does not exist conceptually or empirically. The key is to be explicit about which tradeoff(s) you’re making, with a balanced understanding of the severity and likelihood of risks, and to iterate on safety constantly.
C. AI is an accelerant, but not the single root cause of our information ecosystem’s current problems. The key to addressing them is improving interoperability among the various provenance solutions, though this can be an uphill battle.

Myth, Deconstructed

The last bit I’ll leave you with is not AI-specific, but rather the magic and wonder of Seoul itself. The kindness, hospitality, and respect I experienced there, from the participants and organizers all the way to the shopkeeper I bought some much-needed green tea from, was astonishing. As much as I love the pace, intensity, and general vibes of NYC, I found myself smiling, saying thank you, and truly meaning it more than ever in Korea. The kindness was contagious. I also had some solo time in Seoul, mostly exploring alleyways and K-beauty products (couldn’t resist!), and took a peek at Damien Hirst’s largest exhibition in Asia at the MMCA. The pièce de résistance, from what I observed, was “Myth” (pictured above), which challenges beauty and belief, explores the connection between science and religion, and shows that even myths can be deconstructed and explained by science.

I feel somewhat similarly about AI: there is plenty of myth and hype to cut through, and a need for analysis and science to explain the layers behind how models work and where they are helpful or harmful. But I still feel a sense of wonder at it all, as models unlock new possibilities and ideas on a global scale. As someone working primarily on safety and risks, I find it hard to keep this lens in view. Seoul, however, has me thinking more about the “AI prosperity revolution”, and most importantly pushing me to ask: how do we make it stick?
