The Signal #001 - The Era of AI Trust in A Model Based World

Issue #001  |  February 14, 2026  |  A Moroni Consulting LLC Publication

A steady light in the AI fog.

How this issue was made: Each week, The Signal begins as a series of inputs, research prompts, and draft outputs generated with the help of Claude, an AI assistant built by Anthropic. From there, Phil reviews, validates, edits, and shapes every piece — applying final human judgment and editorial oversight before anything is published. AI enhances the process. A human owns it.

Good morning and happy Friday.

Man. It feels good to write those words again, albeit after a ridiculous amount of time away. If you’re here and reading this, I’m glad you made it. I can’t wait to show you what I’m working on and how this world is quickly changing...

So, welcome! Welcome to something new and, hopefully, something interesting and worth your five minutes...

This is The Signal. Here's why it exists.

Some of you know me from Friday Tech News - 100+ editions over three years of curating cybersecurity, cloud, and enterprise tech so you didn't have to. I loved building that. But something shifted for me over the past year, and I couldn't ignore it anymore.

I spend my days in the never-ending landscape of cybersecurity and its constantly changing conditions. I spend my nights testing pretty much every AI tool I can get my hands on - ChatGPT, Gemini, Claude, you name it. And I spend my weekends watching my daughter ask questions about the world that are sharper than many professional conversations I've sat through…

Over the last year, all of these threads finally started clicking together in my head: the art of the possible, and the execution arm behind it, Generative AI. And at some point I realized that the people I care about most - my family, my colleagues and friends, the people who trusted me to read all the content I could and spit it back out in plain English for them every Friday - all have the same question about AI…

Can I trust this? Seriously?

That's what The Signal is for. I won't claim to be an AI researcher, though I'm honored that some have bestowed the words “practitioner” and “power user” upon me. I love modern technology and all things that advance progress in meaningful and helpful ways, and I truly haven't been this excited about what's available - and soon to be available - in the market. I test every tool I can across the major platforms, I come from an investigative, data-oriented background, and above all I'm a dad trying to figure this out alongside you and everyone else looking at this stuff and thinking, holy cow.

Every Friday, I'll bring you one thing that matters, one thing I tried, tested, or built, one thing to help keep your family or business safe, and one thing worth your time to know and understand. Five minutes. Zero noise or hype - just a clean, AI-enhanced, human-driven signal.

Before diving in, I'd also like to say that I welcome scrutiny, collaboration, conversation, and stories about the things you're wondering about. There's a point to all of this that I'll get to, and it starts with actually reading the rest of this newsletter.

Alrighty - let's begin this new adventure. And thanks - truly - it means a ton.

──────────────────────────────

T H E   S I G N A L

AI Safety Testing Can't Keep Up. We (Humans) Own The Problem Space

A major report dropped earlier this month that didn't get nearly the attention it deserves, and I want to make sure you see it: the 2026 International AI Safety Report — chaired by Turing Award winner Yoshua Bengio with over 100 international experts behind it, backed by more than 30 countries and organizations including the EU, OECD, and UN.

Here's the conclusion that should make every person using AI tools sit up straight: our ability to test and evaluate AI systems has fallen behind how fast those systems are advancing.

One finding in particular caught my attention (and if you work in security, you'll recognize the shape of this problem): some models can now distinguish between evaluation settings and real-world deployment — and alter their behavior accordingly. The report calls this out directly: pre-deployment safety testing has become harder to conduct because the systems being tested may not behave the same way once they're in the wild.

The report also confirmed what folks in cybersecurity have been tracking: criminals and state-associated attackers are actively using general-purpose AI in their operations. Underground marketplaces are now selling pre-packaged AI tools that lower the skill threshold for launching cyberattacks. In a major 2025 cybersecurity competition, an AI agent placed in the top 5% of all teams — most of whom were expert human professionals.

My two cents: Whether you're a parent wondering if that AI tutor app is really safe for your kid, or a team lead who just rolled out Copilot for your 30-person company — the systems you're trusting haven't been stress-tested the way you think they have.

That gap between what AI can do and what we've verified it actually does under pressure? That's the trust deficit of 2026. And closing it starts with knowing it exists. Now you do.

Next week, I'm going to tell you what happened when a company my daughter used every day went bankrupt. And what I found when I followed the data.

──────────────────────────────

T E S T E D   T H I S   W E E K

I Let Claude Talk to Gemini. What Happened Next Was Genuinely Eye-Opening.

This week I did something I honestly didn’t expect to work as well as it did: I used Claude’s browser extension — Claude for Chrome — to navigate directly to Google’s Gemini and start asking it questions. Not copy-pasting between tabs. Not summarizing one AI’s output for another. Claude was literally operating the browser, typing prompts into Gemini, reading the responses, and then sharing its own observations with me.

I asked questions about AI safety, model transparency, and how each platform explains its own limitations to everyday users. What I was really testing was: can one AI engage meaningfully with another AI’s interface, and what do you learn when they’re essentially side by side?

Here’s what stood out. Claude didn’t try to outperform Gemini or dismiss what it found. It read Gemini’s responses carefully, pointed out where they aligned, and flagged areas where the two models approached the same question differently. It was measured. It was honest about what it didn’t know. And when Gemini gave a more surface-level answer on a safety topic, Claude noted it without being dismissive — just factual.

But here’s the thing that actually impressed me most: it wasn’t the technology. It was Anthropic.

Throughout this process, I kept running into moments where Claude would pause and explain its own constraints. It was transparent about what it could and couldn’t do in the browser. It asked for my permission (exactly as I wanted) before taking actions. It flagged when it was uncertain. And these weren’t bugs or limitations that frustrated me — they were design choices that made me trust the tool more.

I’ve been spending a lot of time on Anthropic’s site, reading their research, their safety documentation, their public communications about how Claude is built. And I have to say — kudos to them. The level of transparency, education, and genuine awareness they’re putting into the world about what these tools are, what they can do, and what the real risks are is something I haven’t seen at this level from anyone else in the space. They’re not just building a product. They’re trying to truly inform people of the magnitude of what we’re playing with.

That matters. Especially when you’re someone like me — a dad, a practitioner, someone who’s building workflows with these tools every single day. I want to know the company behind the AI is thinking about this stuff as seriously as I am.

The takeaway: When you let AI models interact with each other transparently, you learn more about both of them. And when the company behind one of those models is actively investing in helping you understand the technology at a deeper level — not just selling it to you — that’s a signal worth paying attention to.

Screenshots from this session, including Gemini's take on the conversation, are coming in a future update. Stay tuned...

──────────────────────────────

S A F E T Y   C O R N E R

Building Your Family's AI Framework — A Conversation Worth Having

Here’s something worth knowing: Ohio just became the first state to require every K-12 public school to adopt a formal AI policy by July 2026. California’s new chatbot safety law (SB 243, effective January 1, 2026) now requires companion chatbot operators to disclose they’re AI, implement self-harm safety protocols, and take measures to prevent minors from encountering sexually explicit content. Schools and states are starting to catch up.

Before you rush to your kid’s school and start asking teachers what AI tools they’re using and why — take a breath. This isn’t about panic. It’s about preparation.

Here’s what I’d suggest instead: start the conversation at home first. Sit down with your partner, your co-parent, whoever shares the parenting responsibilities with you — and just talk about it. Not with urgency. With curiosity. Here are some questions worth asking each other before you do anything else:

  1. What do we actually know about how our kids are using AI right now?
  2. What are we comfortable with? What aren’t we comfortable with?
  3. Do we understand what data these tools collect from our children?
  4. Have we ever sat down with our kids and watched them use an AI tool?
  5. What would we want a school AI policy to include if we were writing it?

You don’t need the answers today. You just need to start asking the right questions. The goal is awareness, not alarm.

Once you’ve had that conversation, here’s a simple framework you can build together as a family — and implement when it feels right, not when it feels rushed:

  1. The Open Door Rule: AI conversations happen in shared spaces, not behind closed doors. If your kid uses ChatGPT or any AI chatbot, the screen should be visible. Not because you don’t trust them — because learning together is better than learning alone.
  2. The “Who Said That?” Check: Teach your kids one question to ask after every AI response: “Is this actually true? Think Harder.” Build that verification habit early. AI is a starting point, not an answer key.
  3. The No-Secrets Boundary: Full name, school name, address, photos — none of it goes into an AI chatbot. Period. Most platforms have policies against this, but policies don’t replace parenting.
  4. The Weekly Show-and-Tell: Once a week, ask your kid to show you something cool they did with AI. You stay in the loop. They feel proud. You get natural moments to guide without lecturing. Everybody wins.
  5. The Feelings Firewall: If an AI makes your child feel bad, scared, or confused — they come to you. Full stop. This is the most important one. As AI companions get more emotionally sophisticated, your kids need to know that a real human is always the right first call.

There’s no perfect time to start this. But the right time is before something goes sideways — not after. Talk about it over dinner. Put it on the fridge when it feels ready. And know that the fact that you’re reading this means you’re already ahead of most.

──────────────────────────────

W O R T H   Y O U R   T I M E

Common Sense Media's AI Ratings

If you're a parent trying to figure out which AI tools are actually appropriate for your kids, Common Sense Media has been quietly building the most useful resource I've found. They now rate AI tools and chatbots the same way they rate movies and games — age-appropriateness scores, privacy assessments, and plain-language breakdowns of what data each tool collects from your child.

They've also been among the most vocal organizations pushing for federal chatbot safety legislation targeting minors, alongside advocacy groups and several members of Congress. Bookmark their AI reviews page. It's the closest thing we have to a Consumer Reports for AI, and it's free.

──────────────────────────────

That's your Signal for this week. If any of this landed for you — or if you think someone in your life needs to read the Safety Corner — forward this along. That's how we grow.

I'll be back next Friday with more. Until then — hug your people, stay curious, and stay safe.

Stay digitally and physically safe,

Phil

──────────────────────────────

S O U R C E S

All claims in this issue are sourced to public records and published reporting:

  1. 2026 International AI Safety Report (February 3, 2026)
  2. Ohio House Bill 96 / AI Model Policy — education.ohio.gov
  3. California SB 243, signed Oct 13, 2025, effective Jan 1, 2026 — California State Legislature
  4. Common Sense Media AI Ratings — commonsensemedia.org

──────────────────────────────

A B O U T

I’m Phil Moroni — a guitar player, a runner, a dad to two curious kids, and someone trying to live a full life through experiences, learning, and growth. I also happen to work in cybersecurity and application security, and I’m the person behind 100+ editions of Friday Tech News.

The Signal is co-created with Claude, an AI assistant built by Anthropic. I bring the voice, the editorial judgment, and the security lens. Claude brings research speed and the ability to connect dots across massive amounts of information. Together we build something neither of us could alone.

I'm not hiding behind the tool. I'm standing next to it and showing you exactly how the work gets done. If we're going to talk about ethical AI use, we should model it.

By Phil Moroni  |  Co-created with Claude 

A Moroni Consulting LLC Publication

Subscribe to The Signal

Don’t miss out on the latest issues. Sign up now to get access to the library of members-only issues.