The outbreak of the Israel-Hamas war has driven a surge in online misinformation and a greater need for trust and safety technology. Former Meta and Google employees who have founded startups are selling content moderation technology to a widening range of online platforms. Hiring at these startups has picked up since the established tech firms dramatically cut their own trust and safety teams earlier this year.
Lauren Wagner has a deep understanding of disinformation. Heading into the 2020 U.S. presidential election, she worked at Facebook on information integrity, building products to ensure content was moderated and fact-checked.

Since war erupted between Israel and Hamas last month, the sheer quantity of misinformation and violent content spreading across the internet has left her stunned. Wagner left Facebook parent Meta in mid-2022, and already the trust and safety work she did there feels antiquated. "When you're in a situation where there's such a large volume of visual content, how do you even start managing that when it's like long video clips and there's multiple points of view?" she asks, pointing to terrorism live-streams as a particular challenge.

The problem has been exacerbated by recent cost-cutting at Meta, Google parent Alphabet, and X, formerly Twitter, which eliminated many content moderation and trust and safety jobs. Wagner, now the founder of Radium Ventures, a venture capital firm dedicated to trust and safety technologies, believes the moment is shining a light on the risks of hosting user-generated content. Platforms not usually associated with political debate, such as Discord and Telegram, now have to take precautions against being exploited for terrorist propaganda. Discord declined to comment and Telegram did not respond to a request for comment, but Wagner hopes the need for action will be widely recognized.
Roblox recently saw a surge of users attending pro-Palestinian protests in its virtual world, prompting the company to closely monitor posts for violations of its community standards. A Roblox spokesperson told CNBC that the company has thousands of moderators and automated tools in place to watch for this type of activity, and that the site allows expressions of solidarity but bars any content that endorses or condones violence, promotes terrorism or hatred, or supports particular political parties.
Several former trust and safety staffers at Meta, Wagner's former employer, have since lost their jobs but remain devoted to the cause. Wagner was among the first investors in Cove, a startup founded by ex-Meta workers, and has backed a handful of other companies developing trust and safety technology to sell to businesses. These products aim to give platforms a more coherent information ecosystem and a standardized process for managing user-generated content.
Veterans of trust and safety teams at companies such as Google, Reddit, ByteDance, Apple, and Discord are recognizing the opportunity in the current market. Tech policy experts and industry professionals recently gathered at TrustCon in San Francisco to discuss pressing issues in online trust and safety, as well as the potential social fallout of the layoffs sweeping the industry. Startups such as ActiveFence, Checkstep, and Cove, which supply trust and safety solutions, set up in the exhibition hall to promote their services, talk with potential customers, and recruit personnel.

Cove CEO Michael Dworsky, who left Facebook in 2021 to found the company, noted that the industry's cost-cutting has put a substantial number of talented people on the job market. Cove's content management platform works alongside classifiers to detect issues such as harassment, so businesses can shield their users without having to hire teams of expensive engineers. Mason Silber, Cove's technology chief and a seven-year Facebook veteran, said that when Facebook began investing in trust and safety, there were few off-the-shelf solutions to buy: the company built its own tools out of necessity rather than choice, and ultimately developed some of the most dependable and reliable safety solutions in the world. A Meta spokesperson declined to comment.
Wagner, who left Meta after two and a half years in mid-2022, said content moderation used to be a simpler task. Before the current Middle East crisis, it was typically sufficient for a trust and safety team member to scan a picture and determine whether it was false. But the sheer quantity and velocity of images and videos being uploaded to platforms, along with the ability to alter details using increasingly accessible generative AI tools, has made the job far more complex.

Social media sites must now manage content related to both the Middle East war and the Russia-Ukraine war, while also preparing for the 2024 presidential election. Amid all this, former President Donald Trump, who faces charges in Georgia tied to alleged interference in the 2020 election, is the likely Republican nominee.

Manu Aggarwal, a partner at Everest Group, identified trust and safety as one of the fastest-growing segments of business process services, a category that includes IT outsourcing and call centers. Everest Group predicts the broader industry will generate about $300 billion in revenue by 2024, with trust and safety accounting for roughly $11 billion of that.

Venture capital firm Accel has funded Cinder, a startup whose founders' backgrounds span Meta's trust and safety systems and counterterrorism work. Accel's Sara Ittelson said she expects the trust and safety technology space to keep expanding as more platforms recognize the need for reliable protection and as the social media market continues to fragment.

The European Commission, meanwhile, has mandated that large online platforms with significant numbers of European users demonstrate how they control and remove unlawful and violent material, or face fines of up to 6% of their annual revenue. Cinder and Cove have responded by pitching their technologies as a way for online businesses to systematize and document their content management procedures in line with the Commission's Digital Services Act.
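To make concrete what "systematize and document" might look like, here is a minimal, hypothetical sketch of an auditable moderation record in Python; the field names and the `log_decision` helper are invented for illustration and are not drawn from Cinder's or Cove's actual products:

```python
# Hypothetical audit record for a single moderation decision, illustrating
# the kind of structured documentation a platform might keep to show a
# regulator how a piece of unlawful content was handled.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ModerationRecord:
    content_id: str       # the platform's identifier for the post
    policy_violated: str  # e.g. "incitement_to_violence"
    action_taken: str     # e.g. "removed", "geo_blocked", "label_applied"
    decided_by: str       # "automated_classifier" or a human reviewer ID
    decided_at: str       # ISO 8601 timestamp in UTC

def log_decision(record: ModerationRecord) -> str:
    """Serialize one decision as JSON, e.g. for an append-only audit log."""
    return json.dumps(asdict(record))

# Example: documenting the automated removal of a violent post.
record = ModerationRecord(
    content_id="post-12345",
    policy_violated="incitement_to_violence",
    action_taken="removed",
    decided_by="automated_classifier",
    decided_at=datetime.now(timezone.utc).isoformat(),
)
print(log_decision(record))
```

An append-only log of such records is the sort of artifact a platform could point to when asked how a given piece of content was controlled or deleted.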
Without specialized tools, many firms have tried to bend customer service software such as Zendesk, along with Google Sheets, into managing their trust and safety workflows. According to Cove's Dworsky, this makeshift approach yields an "absurdly laborious and non-scalable" process, akin to "assembling a Frankenstein's monster." And despite the emergence of more powerful trust and safety technology, the problem of violent content and disinformation remains immense. In a survey published by the Anti-Defamation League, 70% of respondents said they had encountered hate or false information about the Israel-Hamas conflict on social media. Companies are plainly struggling to distinguish lawful material from objectionable material. Alex Goldenberg, lead intelligence analyst at the Network Contagion Research Institute, stressed the need for both effective content moderation and honesty with users about the actions taken. "Transparency is essential at a time where third-party access to what is happening on social media is required," he said.
Noam Bardin, former CEO of Waze, the navigation company now owned by Google, founded the real-time messaging service Post a year ago. Bardin, who is from Israel, expressed frustration at the proliferation of misinformation and disinformation since the war began in October. "The whole portrayal of what's happening is shaped and controlled through social media, and this has resulted in a huge influx of propaganda, disinformation, AI-generated content, bringing material from other conflicts into this conflict," Bardin said. He added that Meta and X have struggled to identify and remove suspicious posts, a task that has only grown harder amid the swell of video content.
At Post, a platform akin to Twitter, Bardin has relied on "moderation tools, automated tools, and processes" since the company's inception, drawing on services from ActiveFence and OpenWeb, both based in Israel. "Whenever someone comments or posts, it is subject to review by AI to evaluate the content and classify it based on harm, pornography, violence, etc.," he said. Post is exactly the kind of company trust and safety startups are targeting. Live-chat features, meanwhile, have spread to video game sites, online marketplaces, dating apps, and music streaming services, leaving all of them exposed to potentially hazardous user-generated material.
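Bardin's description maps onto a common pattern: score each new post against a set of harm categories, then publish it, queue it for human review, or block it. Below is a minimal Python sketch of that pattern; the `classify_text` stub, category names, and thresholds are all hypothetical, not ActiveFence's or OpenWeb's actual API:

```python
# Hypothetical pre-publication moderation hook, sketched from Bardin's
# description: every new post or comment is scored by an AI classifier
# and routed based on the harm categories it triggers.
from dataclasses import dataclass

# Illustrative harm categories and thresholds; real vendors define their own.
HARM_CATEGORIES = ("hate", "pornography", "violence", "spam")
REVIEW_THRESHOLD = 0.5   # above this, a human moderator takes a look
BLOCK_THRESHOLD = 0.85   # above this, the content is never published

@dataclass
class Verdict:
    action: str    # "publish", "human_review", or "block"
    flagged: dict  # category -> score for anything over the review threshold

def classify_text(text: str) -> dict:
    """Stand-in for a vendor classification API that scores each category."""
    # A real implementation would call an external moderation service here.
    return {category: 0.0 for category in HARM_CATEGORIES}

def moderate(text: str) -> Verdict:
    """Route a new post or comment based on its classifier scores."""
    scores = classify_text(text)
    flagged = {c: s for c, s in scores.items() if s >= REVIEW_THRESHOLD}
    if any(s >= BLOCK_THRESHOLD for s in flagged.values()):
        return Verdict("block", flagged)         # clear violation
    if flagged:
        return Verdict("human_review", flagged)  # ambiguous: queue it
    return Verdict("publish", flagged)           # clean: goes live

print(moderate("example post").action)  # -> "publish" with the stub classifier
```

The design choice worth noting is the middle tier: automated blocking is reserved for high-confidence violations, while borderline content goes to human reviewers, which is how platforms typically balance speed against the risk of over-removal.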
Brian Fishman, co-founder of Cinder, said "militant organizations" rely on a network of services to spread their propaganda, including Telegram, Rumble, and Vimeo, platforms less technologically sophisticated than Facebook. Rumble and Vimeo did not respond to requests for comment. Fishman believes customers view trust and safety tools much like their cybersecurity investments: in both cases, spending is necessary to avert calamity. "You are essentially paying for insurance, which means you will not receive full gain from it daily," Fishman said. "You are investing a bit more in dark times, so that you have the means when you really need it; this is one of those moments when organizations need it."
Lawmakers recently asked social media and AI companies to clamp down on misinformation and disinformation.