When I tell people I'm VP of Operations at an AI startup, the reactions vary. Some are genuinely curious. Some assume I handle the "non-technical stuff." And some — thankfully fewer every year — seem surprised that a woman is in AI leadership at all.
I'm not writing this to complain. I'm writing this because who builds AI products matters as much as what they build. The perspectives that are missing from the room shape the product just as much as the ones that are present — and the way those perspectives get translated into operational decisions is where ethics actually lives or dies.
Ethics Is an Operations Problem
Most companies talk about ethics the way they talk about values: on a slide, in an all-hands, on an About page. I think about it the way I think about any other operational system — what are the defaults, what are the guardrails, who enforces them when the pressure is on, and what happens when an incentive tries to bend them.
If ethics only lives in a mission statement, it's a preference. If it lives in defaults, workflows, data handling, vendor contracts, prompt policies, and the way a tradeoff is escalated when a short-term win conflicts with a long-term principle — then it's real. That's the version we're building toward.
What That Looks Like Inside Tactical Talk
A product that sits inside real conversations carries a higher ethical load than most software. We made some early calls that I'd argue are non-negotiable for anything in this category:
- No sale of personal information. This is the actual posture in our Privacy Policy, not a slogan — including under CCPA. We built the business model around subscriptions so the incentives don't drift.
- Detection over dominance. The product does not help users manipulate others. It helps users recognize manipulation being used against them. Detection Shields — Gaslighting, Narcissist Defense, Manipulation Detection, Lie Detection, Body Language Coach — put that choice into practice. The modes themselves are filtered against the same principle.
- User-first safety primitives. Safe Word and Witness Mode exist so the user, not the product, controls how a hard conversation is handled and whether it's captured.
- Shadow Coach for practice instead of pressure. We'd rather someone rehearse a hard conversation against an AI at hardball difficulty than show up cold and be steamrolled in the real one.
- Transparent pricing. No hidden fees, no dark patterns in the subscription flow, no "try to cancel and see what happens" friction.
None of those are marketing lines. They're operational choices — which means they show up in code, contracts, and decisions we make when no one is watching.
Universal Principles for Ethics-First Products
If I had to distill what I've learned running operations on a product like this, it would come down to a short list that I think applies to anyone building AI right now:
- Write down what you won't build. Before the roadmap. Before the pitch deck. The negative space defines the product more than the features do.
- Pick a business model that makes your ethics easy. If your revenue depends on extracting attention, selling data, or maximizing engagement, your ethics will lose every argument with your P&L. Subscription and clear value exchange make the right default the cheap default.
- Design for the worst 1% of users, not just the top 99%. Somebody will try to use any powerful tool to harm someone else. Assume it, and ship the safeguards before you need them, not after.
- Make the ethics visible in the UX. If users can't see your safety primitives, they don't exist as trust — only as compliance.
- Escalate tradeoffs, don't bury them. When a short-term growth call collides with a long-term principle, that should be an explicit conversation, not a quiet decision made by whoever was closest to the keyboard.
On the Diversity Question
The industry needs more diverse voices shaping AI products — not as a quota, but because the products being built right now will shape how billions of people interact with technology for a long time. If the rooms those decisions get made in are narrow, the products will be narrow. That's a design problem, not a politics problem.
If you're a woman considering a career in AI, or in operations, or in any part of this field: bring your perspective, bring your experiences, and bring the problems you've faced that no one else in the room has faced. Those are the insights that turn a generic product into one that actually works for the people using it.
— Tavish Hower, VP of Operations, Tactical Talk