East Bay Tech is a small meetup in the East Bay for founders, developers, and operators who want to talk about how AI is actually showing up in their work.
The goal is simple: good conversation, strong people in the room, and more signal than you usually get from a generic meetup or networking event.
Topics
These are the discussion starters for April 17, 2026.
Read whichever items look useful to you. The packet is optional, and skimming even a few of these is enough context for a strong conversation.
Current model snapshot: Artificial Analysis
- This is our regular April 16, 2026 snapshot of the frontier model market.
- The most notable addition in the middle of the chart versus our last overview is GLM-5.1.
- Claude Opus 4.7 and Muse Spark are not yet reflected here, which is part of why they are worth discussing separately below.
- It is a fast way to orient the room before we get into individual launches, positioning, and use cases.
Frontier models, distribution, and the market map
GLM-5.1
- Z.AI is positioning GLM-5.1 as a serious coding and reasoning model, with a large context window and strong long-horizon task framing.
- There is also a useful Hacker News thread for a faster read on market reaction.
- Good room question: how much should teams update their model map as Chinese labs become more credible in coding and agentic workflows?
Introducing Claude Opus 4.7
- Anthropic says Opus 4.7 improves on 4.6, especially on the hardest software engineering tasks and longer-running work.
- The practical claim is less "new benchmark hero" and more "higher-confidence handoff on difficult coding."
- Good room question: what actually changed in practice between 4.6 and 4.7, and where does Anthropic still lead or lag?
Introducing Muse Spark
- Meta's first Muse model is framed around personal superintelligence, multimodal reasoning, and product distribution across Meta's consumer surfaces.
- Worth watching because it ties model quality directly to distribution, context from social products, and mainstream consumer reach.
- Good room question: if a model is "good enough" and deeply integrated into Instagram, WhatsApp, Facebook, and glasses, how much does raw model ranking still matter?
Anthropic just passed OpenAI in revenue while spending 4x less to train their models
- A useful discussion prompt on how the frontier labs are being valued and compared beyond raw model rankings.
- Whether or not you fully buy the framing, it is a strong entry point for talking about capital efficiency, distribution, and what "winning" looks like in the model market.
- Good room question: do the labs with the best models win, or the labs with the best revenue engine and distribution?
Gemma 4 in Google AI Edge Gallery
- Google AI Edge Gallery now features Gemma 4 for on-device use, including on iPhone.
- This is a practical signal that private, offline, mobile inference is becoming much more tangible.
- Good room question: what use cases really want on-device models, and which still need the cloud?
Agents, workflows, and services-as-software
OpenClaw: The complete guide to building, training, and living with your personal AI agent
- A detailed walkthrough of how one operator is running a team of personal AI agents across work and life.
- Useful concrete topics: safe deployment, channels, skills, heartbeats, and where personal agents are already real versus still fragile.
- Good room question: which parts of the "always-on personal agent" stack feel durable, and which parts still feel like demos?
Services: The New Software
- Sequoia's core claim is that the next great AI companies may sell the work, not the tool.
- The essay frames the market as copilots versus autopilots, with judgment as the remaining human moat.
- Good room question: in which categories does this transition already work, and where is the "services" framing still too early?
Safety, backlash, and public trust
Assessing Claude Mythos Preview's cybersecurity capabilities
- Anthropic describes Mythos Preview as a step-change model for cybersecurity, including zero-day and exploit-generation capability.
- The post matters because it is not framed as ordinary model progress. It is framed as a defensive coordination problem.
- Good room question: what concrete changes should security teams make if these capability claims are directionally true?
Powell, Bessent flag systemic risk from advanced AI models
- A financial-regulation angle on the same Mythos story, with AI cyber capability treated as a broader systemic risk channel.
- Interesting because the discussion shifts from product risk to macro and infrastructure risk.
- Good room question: when do frontier AI models become a financial-stability issue, not just a technology issue?
OpenAI CEO Sam Altman attack coverage
- The reported Molotov cocktail attack on Sam Altman's home is an ugly signal about how much anti-AI sentiment has intensified.
- It pushes the trust conversation beyond policy and PR into legitimacy, public anger, and radicalization.
- Good room question: how should labs respond when backlash stops being abstract and becomes personal and violent?
CNN on Anthropic's Mythos rollout
- A clearer mainstream-news hook on the Mythos story: Anthropic says it will make the new model available to some of the world's biggest cybersecurity and software firms to slow the AI arms race on offense and defense.
- This is useful because it turns the abstract "dangerous cyber capability" conversation into a concrete distribution and coordination strategy.
- Good room question: if a lab thinks a model changes the cyber balance, who should get access first, and under what constraints?
Research, open source, and developer tooling
Codex for almost everything
- OpenAI is positioning Codex as a broader coding and software-work product rather than a narrow autocomplete tool.
- That makes this relevant for a practical discussion about how much of the developer workflow gets absorbed into model-native environments.
- Good room question: what parts of software work are actually ready for "Codex for almost everything," and which parts still need tighter human control?
Embarrassingly Simple Self-Distillation Improves Code Generation
- A strong research prompt for anyone thinking about how code models get materially better.
- Worth discussing because simple training or decoding ideas often matter more than headline narratives about "the next paradigm."
- Good room question: what improvements in coding models are likely to come from better training recipes versus bigger models?
coding-assistants.rst in the Linux kernel tree
- The Linux project now has explicit documentation around coding assistants in its process docs.
- That makes this a useful artifact for talking about norms, disclosure, and trust in AI-assisted software work.
- Good room question: what should good AI-assisted engineering hygiene actually require?
Astral to join OpenAI
- Astral joining OpenAI is a strong signal about how model labs want to own more of the developer workflow.
- This matters beyond one company because it points at tighter integration between models, coding agents, and the tools around them.
- Good room question: does the winning developer platform own the model, the workflow, or the whole stack?
MarkItDown
- Microsoft's document-to-Markdown tool is exactly the kind of boring infrastructure that quietly unlocks better agent workflows.
- It is also a reminder that a lot of useful AI leverage comes from input cleanup, not just smarter models.
- Good room question: which "unglamorous" tools are becoming critical for reliable automation?
Chandra OCR 2
- Chandra focuses on hard document-intelligence problems like complex tables, forms, handwriting, and layout preservation.
- This is worth discussing because document AI is one of the most commercially useful but still uneven parts of the stack.
- Good room question: are we finally near reliable document automation, or are we still underestimating edge cases?
Format
- Small, discussion-first meetup.
- More real examples and honest questions, less presentation.
- Best for people with real use cases, lessons, or questions to bring.
House rules
- Be respectful, curious, and concise.
- Share what you are seeing in practice.
- No aggressive selling or constant self-promotion.
- Leave people with something useful.