
Hello there,
Welcome to AI Unplugged. Here are today’s updates:
Anthropic launches Bloom to automate AI safety research
OpenAI examines the limits of chain-of-thought monitoring
Governments move to regulate AI companions and emotional risk
Tools, and a prompt for decisions that should be made once ⬇️
NEWS ALERTS
Anthropic released Bloom, an open-source framework designed to automate safety evaluations at scale, replacing ad-hoc human reviews with repeatable, scenario-driven testing that scores risks like deception and misuse. The real shift here is economic and operational: safety checks become cheaper, continuous, and harder to ignore as models evolve, which undercuts the industry’s habit of treating safety as a one-time pre-deployment box-check rather than an ongoing engineering obligation.
OpenAI published research testing whether a model’s chain-of-thought is actually useful for safety monitoring, and the conclusion is blunt: inspecting reasoning reveals risky behavior far more reliably than judging outputs alone. This directly challenges governance approaches that rely on surface behavior, because if you don’t look at how models think—especially in agentic or high-stakes settings—you’re effectively flying blind until something breaks.
New York and California are rolling out early regulations targeting AI companions, with requirements around transparency, emotional safety, and protections for vulnerable users—rules that explicitly exclude generic productivity tools. This is not symbolic; it signals that as AI systems become persistent, relational, and psychologically influential, lawmakers will force constraints that shape product design and compliance whether companies are ready or not.
PRODUCTIVITY TOOLS
📈 Glowtify — Uses AI to dissect campaign performance, surface what actually moves conversions, and cut through vanity metrics so marketing decisions are driven by signal, not guesswork.
🧠 Neuralk AI — Deploys AI agents to automate research and knowledge work end-to-end, replacing manual synthesis with faster, more consistent outputs.
🔄 AnyFormat — Converts files between virtually any format on demand, using AI to handle edge cases that usually break traditional converters.
🧩 Azoma — Acts as a thinking scaffold, forcing structure into planning and execution so ideas don’t stall at the “concept” stage.
AI MARKET
💰 Funding Rounds
Manifold raised $18M in Series B funding to scale its data infrastructure platform as demand grows for production-grade data systems.
Dazzle AI secured $8M in Seed funding to expand its AI-driven automation capabilities and reduce reliance on fragile manual workflows.
💼 AI Roles
PROMPT GUIDE
Decisions We’re Wasting Time Re-Deciding
Purpose: Leadership keeps looping on the same calls and it’s costing focus, speed, and credibility.
Prompt:
Act as my Chief of Staff.
From the update below, do the following:
List the decisions that are being re-litigated and should be permanently closed.
Diagnose exactly why each one keeps resurfacing (missing owner, vague criteria, fear of consequences, misaligned incentives, etc.).
Identify the single most high-leverage decision to lock before year-end—and explain why this one matters more than the rest.
Specify how to lock it: decision owner, decision rule, documentation format, and communication plan so it does not reopen in Q1.
Be concrete. No theory. No generic advice. Under 150 words.
Update:
[paste recurring debates, approvals, or leadership discussions]Until next time,
AI Unplugged
