In a nutshell: In the latest encroachment upon workers' privacy, companies like Walmart, T-Mobile, AstraZeneca and BT are turning to a new AI tool to monitor conversations happening on collaboration and chat channels in Teams, Zoom, Slack and more.

For years, businesses have monitored the content of employees' emails, setting up tools and rules to passively check what staff were sending to each other and out into the world. However, this monitoring is set to become significantly more invasive as major brands turn to AI tools to oversee conversations in collaboration and messaging services like Slack, Yammer, and Workplace from Meta.

Aware, a startup from Columbus, Ohio, presents itself as a "contextual intelligence platform that identifies and mitigates risks, strengthens security and compliance, and uncovers real-time business insights from digital conversations at scale." Those "digital conversations" are the chats that workers are having on productivity and collaboration apps.

Those "digital conversations" are the chats that workers are having on productivity and collaboration apps.

The company's flagship product aims to monitor "sentiment" and "toxicity," using text and image analysis to observe what people discuss and how they feel about various issues.
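
Aware hasn't published how its detection models actually work, but the general shape of such a pipeline is easy to sketch. The Python snippet below is a minimal, purely illustrative stand-in: the keyword lexicons and scoring formula are invented, and a real system would use trained language models rather than word lists.

```python
# Toy stand-in for a sentiment/toxicity scorer. Aware's models are
# proprietary; these lexicons and the scoring math are invented.
POSITIVE = {"great", "love", "excited", "thanks"}
NEGATIVE = {"awful", "hate", "frustrated", "broken"}
TOXIC = {"idiot", "stupid", "useless"}

def score_message(text: str) -> dict:
    """Return crude sentiment (-1..1) and toxicity (0..1) scores."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    tox = sum(w in TOXIC for w in words)
    total = max(len(words), 1)
    return {"sentiment": (pos - neg) / total, "toxicity": tox / total}

print(score_message("I love the new policy, thanks team"))
# {'sentiment': 0.2857..., 'toxicity': 0.0}
```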

While the data is ostensibly anonymized, tags can be added for job role, age, gender, and so on, allowing the platform to identify whether certain departments or demographics are responding more or less positively to new business policies or announcements.
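
In other words, the platform rolls scored messages up by tag. Here is a minimal sketch of that aggregation; the record fields are hypothetical, since Aware's actual schema isn't public:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical 'anonymized' records: no names, but demographic tags remain.
messages = [
    {"dept": "engineering", "age_band": "25-34", "sentiment": -0.4},
    {"dept": "engineering", "age_band": "35-44", "sentiment": -0.2},
    {"dept": "sales", "age_band": "25-34", "sentiment": 0.6},
]

def sentiment_by(tag: str, records: list[dict]) -> dict[str, float]:
    """Average sentiment grouped by one demographic tag."""
    groups: dict[str, list[float]] = defaultdict(list)
    for r in records:
        groups[r[tag]].append(r["sentiment"])
    return {group: mean(scores) for group, scores in groups.items()}

print(sentiment_by("dept", messages))
# {'engineering': -0.3..., 'sales': 0.6}
```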

Things get worse with another of the company's tools, eDiscovery. It lets companies nominate reviewers, such as HR representatives or senior leaders, who can identify the specific employees violating "extreme risk" policies as defined by the company. These "risks" might be legitimate, such as threats of violence, bullying, or harassment, but it's not hard to imagine the software being instructed to flag less genuine risks, as the sketch below illustrates.
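
Aware hasn't detailed how eDiscovery matches messages against policy, but a rule-based sketch conveys both the idea and the concern. Everything here, from the policy names to the keyword lists to the reviewer roles, is invented for illustration; note how easily a loosely worded rule sweeps in perfectly legal speech:

```python
# Illustrative only: policy names, phrase lists, and reviewers are
# invented, and crude substring matching stands in for a real model.
POLICIES = {
    "threats_of_violence": {"hurt you", "kill"},          # arguably legitimate
    "union_organizing": {"union", "organize", "strike"},  # a less genuine 'risk'
}
NOMINATED_REVIEWERS = ["hr_lead", "security_officer"]

def flag_extreme_risk(author_id: str, text: str) -> list[tuple[str, str, str]]:
    """Return (policy, reviewer, author) triples for each matched policy."""
    hits = []
    lowered = text.lower()
    for policy, phrases in POLICIES.items():
        if any(p in lowered for p in phrases):
            hits += [(policy, reviewer, author_id)
                     for reviewer in NOMINATED_REVIEWERS]
    return hits

print(flag_extreme_risk("u_8841", "Anyone interested in forming a union?"))
# [('union_organizing', 'hr_lead', 'u_8841'),
#  ('union_organizing', 'security_officer', 'u_8841')]
```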

Speaking to CNBC, Aware co-founder and CEO Jeff Schumann said, "It's always tracking real-time employee sentiment, and it's always tracking real-time toxicity. If you were a bank using Aware and the sentiment of the workforce spiked in the last 20 minutes, it's because they're talking about something positively, collectively. The technology would be able to tell them whatever it was."
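
Schumann's "last 20 minutes" line describes a rolling-window spike detector. Here is a minimal sketch of that idea; the threshold is invented, and the precomputed baseline stands in for whatever long-run average Aware actually maintains:

```python
from collections import deque
from statistics import mean
import time

WINDOW_SECONDS = 20 * 60  # the "last 20 minutes" Schumann describes
SPIKE_THRESHOLD = 0.3     # invented threshold, for illustration only

window: deque[tuple[float, float]] = deque()  # (timestamp, sentiment)
baseline = 0.0  # long-run average sentiment, assumed precomputed

def record(sentiment: float, now: float | None = None) -> bool:
    """Add a score and report whether the rolling mean spiked above baseline."""
    now = time.time() if now is None else now
    window.append((now, sentiment))
    while window and window[0][0] < now - WINDOW_SECONDS:
        window.popleft()  # drop scores older than the window
    rolling = mean(score for _, score in window)
    return rolling - baseline > SPIKE_THRESHOLD

print(record(0.8, now=1_000_000.0))
# True: one strongly positive message lifts the window above baseline
```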

While some may argue that there is no right to or expectation of privacy on any company's internal messaging apps, news of this kind of analytic tracking will undoubtedly have a chilling effect on people's speech. There's a world of difference between traditional methods of passive data collection and this new real-time AI monitoring.

And while Aware is quick to point out that the data its product collects is anonymized, that claim is very hard to verify. Stripping out names may render the data nominally anonymous, but it often takes no more than a handful of data points to piece together who said what. Studies going back decades have shown that people can be re-identified in "anonymous" data sets using just a few basic pieces of information.
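
The classic example is Latanya Sweeney's 2000 finding that ZIP code, birth date, and sex alone uniquely identify the large majority of Americans. The same arithmetic applies to workplace tags. This toy example, using an invented five-person roster, shows how quickly a few demographic attributes narrow "anonymous" records down to individuals:

```python
from collections import Counter

# Invented 'anonymized' roster: names stripped, demographic tags retained.
records = [
    {"dept": "legal", "gender": "f", "age_band": "45-54"},
    {"dept": "legal", "gender": "m", "age_band": "25-34"},
    {"dept": "engineering", "gender": "f", "age_band": "45-54"},
    {"dept": "engineering", "gender": "m", "age_band": "25-34"},
    {"dept": "engineering", "gender": "m", "age_band": "35-44"},
]

def unique_fraction(quasi_identifiers: tuple[str, ...]) -> float:
    """Fraction of records made unique by the given attribute combination."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    unique = sum(1 for r in records
                 if combos[tuple(r[q] for q in quasi_identifiers)] == 1)
    return unique / len(records)

print(unique_fraction(("dept",)))                       # 0.0 — dept alone hides everyone
print(unique_fraction(("dept", "gender")))              # 0.6 — two tags expose three people
print(unique_fraction(("dept", "gender", "age_band")))  # 1.0 — three tags expose everyone
```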

It will be intriguing to see the repercussions when the first firing occurs because an AI determined that someone's Teams chat posed an "extreme risk".