The rise of AI is redefining the trust and safety dynamics within digital ecosystems, challenging existing frameworks.
Why it matters: Companies face the dual challenge of adhering to regulatory compliance and proactively enhancing user trust. Consumers increasingly demand environments that are not only safe but also transparent and accountable.
Our insight: To meet evolving consumer expectations and regulatory pressures, companies must invest in adaptable, self-regulated systems. This should include integrating AI tools for nuanced content moderation and fostering cross-platform collaborations to enhance safety standards.
The challenge of ensuring trust and safety of users in digital ecosystems has never been more complicated with the emergence of artificial intelligence (AI). Harmful content is growing exponentially in both volume and complexity, even as AI systems are becoming increasingly effective at detecting and deleting it.
Navigating this new paradigm, though, calls for new frameworks and a balance between open access and regulatory oversight that’s stringent in some regions but inconsistent across many others. The few global rules promoting safety online and on digital platforms are mostly driven from outside of the United States — in the EU, the U.K., Australia and Singapore, to name a few — while the U.S. regulatory landscape and that of other countries remain vague and politically fragmented. In this environment, companies can either choose to see regulatory obligations as a compliance checkbox or a self-driven initiative to strengthen programs surrounding a strategic area of their business, with the goal of increasing user and customer trust.
Some professionals across legal, compliance, engineering, product and operations are choosing the latter; they are coming together to tackle the challenges of safety and trust within their own companies and as a community, driven by a shared interest. It’s happening even as a few major players are scaling back investments in fact-checking and moderation, while others — particularly those serving sensitive user groups like children — are doubling down on safety infrastructure.
Here’s what we know: The future hinges on proactive and adaptable and self-regulated architectures. Consumers are no longer satisfied with platforms that simply avoid harm — they expect environments that are actively safe, transparent and trustworthy, and will continue to demand change. Companies must figure out a way to provide a safer online experience while still maximizing user experience and product innovation.
From regulation to responsibility
Many companies have undergone at least one iteration of compliance — such as completing risk assessments under the U.K. Online Safety Act, aligning with the Digital Services Act or adapting to similar frameworks in other jurisdictions. These early efforts have laid the groundwork, but they’ve also revealed the operational complexity and strategic trade-offs involved in building scalable, trustworthy safety systems.
Among other things, these regulations emphasize:
- Expectations surrounding content moderation
- The importance of transparency reporting
- The need for safer design of products and their functionality
The effect of the regulations on companies has been mixed on the impacted organizations. Major tech companies with the financial and operational scale to adapt, for example, responded quickly to some of the proposed frameworks, and likely already had many of the required processes in place.
For many mid-sized online platforms and technology companies, however, the regulations represented a first-time compliance challenge, especially with navigating:
- Initial legal interpretation of complex, developing laws
- Building new processes for risk assessment and mitigation
- Documenting controls and enforcement mechanisms from scratch
- Rationalizing overlapping requirements across jurisdictions.
These challenges are often amplified by executive teams pushing their organizations for faster AI integration and innovation — to stay competitive, launch new features and meet user demands.
Balancing technology transformation and AI with online safety
This year alone, major tech companies like Meta, Amazon, Alphabet and Microsoft are projected to spend over $320 billion on AI technologies and infrastructure. The surge in investment has led to a proliferation of AI-enhanced consumer-facing products, creating new vectors for risk, complicating internal change management and increasing the need for robust safety controls.
AI tools are also being used to moderate content and enforce platform policies. Proverbially, AI is a powerful arrow in the organizational trust-and-safety quiver, but it requires models to interpret hyper-nuanced scenarios. For example, in online gaming environments, phrases like “I’m going to shoot you” may be flagged as violent or harassing, even though they’re contextually appropriate within certain gameplay. This highlights the need for context-aware AI-moderation systems that can distinguish between genuine harm and genre-specific language.
That’s where strong data governance comes in; without it, even the most context-aware AI solutions are prone to failure. Think of data governance as the plumbing beneath a smart city: If the pipes are leaky or misaligned, no amount of surface innovation will make the system work.
According to Protiviti’s Global AI Pulse Survey, combining effective governance with data engineering, analytics and responsible data usage training is crucial for boosting AI maturity and maximizing AI investment returns. The survey found that 69% of organizations with advanced AI maturity express high confidence in their data capabilities.
From now on, companies need to prioritize key governance practices:
- Data-lineage tracking to ensure transparency across AI pipelines
- Metadata labeling to flag sensitive data before training
- Access controls and input sanitization to prevent prompt injection and data leakage
- Continuous auditing and flagging systems to monitor AI outputs and user interactions
Investment in compliance: The rise of the second line
There is also a growing need for a dedicated second-line compliance function. Historically, legal teams interfaced directly with engineering and product — but they’re not staffed to handle the operational intensity of today’s compliance landscape.
Second-line teams now serve as strategic partners, working across legal, product, trust and safety, and engineering to:
- Interpret and implement overlapping regulations
- Manage transparency reporting workflows
- Coordinate risk assessments and control documentation
- Ensure consistent enforcement across product lines.
From compliance to community collaboration
While compliance teams ensure adherence to legal requirements, trust and safety teams focus on maintaining a safe user experience, which often requires a bespoke approach. This partnership is crucial for navigating the complex regulatory environment and fostering trust among users.
The industry is increasingly recognizing that creating a safer space online requires collaboration between compliance/legal teams and trust and safety personnel, not just within an organization but also among the diversity of industry stakeholders — from engineers and legal experts to operations teams managing content moderation.
Technology is bringing these communities together in a way that regulation has so far failed to do. One example of collaborative innovation is Lantern, an application programming interface tool that connects platforms to facilitate information sharing. Teams across the technology industry use it to flag bad actors across platforms; this helps prevent cross-platform abuse like bullying, without the need to share personal data.
This kind of ecosystem-level collaboration is unique to the trust and safety space and reflects a growing commitment to community-led standards.
What companies can do: Practical solutions
To stay ahead of regulatory pressure and help ensure effective trust and safety programs, here are six practical solutions companies should consider:
- Build processes that scale: To meet lean staffing and bottom-line efficiency goals, companies must build adaptable compliance processes that are embedded “as far left as possible” in the development lifecycle. Developing processes and products that consider risks as part of the design will help companies sustain long-term sustainability in such a dynamic regulatory environment.
- Develop the building blocks of a compliance program: To achieve greater efficiencies, you need the building blocks of a compliance program, which usually includes developing core data and/or program elements. Long-term efficiency gains will depend on things like inventory control, a risk library and implementing risk management tools.
- Invest in data governance: Prioritize investing in, and establishing, the right data governance strategy to support both growth and innovation and hedge against regulatory risk. Having adaptability and flexibility in your data architecture will be the single most enabling factor in developing a sustainable and efficient compliance program.
- Test AI-driven policies: Leverage AI to simulate and refine platform policies quickly. At the same time, don’t neglect to monitor user reports and emerging harms to stay ahead of new threats.
- Engage in community-based information sharing: Join coalitions and nonprofits that facilitate cross-platform data sharing. Use tools like Lantern to flag bad actors and prevent abuse across ecosystems.
The bottom line: The future of online safety lies in collaboration and proactive design. AI will no doubt play a sizable role, as will automation. But the next wave of digital responsibility will be driven by industry players with intention, innovation and integrity.
Karter Klumpyan, a Director with Protiviti’s Risk & Compliance practice, contributed to this content.