
For marketing at scale, generative AI's use hinges on trust and the …

Over half (51%) of marketers currently use generative AI, and another 22% plan to use it soon, per a June 2023 Salesforce survey. As adoption grows, however, the conversation is shifting from whether to integrate the technology to how to successfully implement it.
“There’s this recognition [among marketers] of how complex the technology can be, and how quickly the landscape is evolving and maturing,” our analyst Kelsey Voss said during a recent Meet the Analyst Webinar panel discussion.
As marketers learn about the latest AI developments, concerns persist about how to handle large amounts of data, especially sensitive customer data, along with worries about privacy protections, bias, authenticity, and more.
Marketers have explored generative AI enough to understand the efficiencies it can create, explained Neha Shah, senior director of product marketing at Salesforce. Now their concerns aren’t about how to use it, but about what it means for data, privacy, and the overall trust placed in these models.
Because generative AI has democratized so rapidly, many companies’ guidance has not kept pace with adoption.
As a result, even with 51% of marketers using generative AI tools, trust remains an issue within the C-suite. A number of companies have flagged the practice of putting proprietary data straight into an AI platform, citing worries about data leakage.
To foster that trust, the C-suite, marketers, and ultimately customers need to support adoption of guidelines for responsible AI use, explained Fatemeh Khatibloo, director of responsible AI and tech at Salesforce.
Once in place, these guidelines act as a “ladder up to trust,” said Khatibloo. Salesforce prioritizes five pillars to guide the continued development of trusted and responsible generative AI: accuracy, safety, honesty, empowerment, and sustainability.
When products are built to adhere to responsible guidelines, the promise of what generative AI can do becomes more reliable, and marketers can use the tools with a sense of empowerment.
Marketers want to be sure that the customer data they’ve been entrusted with is used appropriately. If trust is baked into a generative AI-powered customer relationship management tool, for example, marketers can protect customers while delivering personalization at scale.
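To make that idea concrete, one common guardrail is masking sensitive CRM fields before a customer record is ever used to prompt a generative model, so personalization draws on interests and context rather than raw contact details. The Python sketch below is illustrative only; the SENSITIVE_FIELDS set and the mask_record and build_prompt functions are hypothetical names, not any vendor's API.

```python
# Hypothetical guardrail: mask sensitive CRM fields before a customer
# record is used to prompt a generative model.

SENSITIVE_FIELDS = {"email", "phone", "address"}  # assumed fields to redact

def mask_record(record: dict) -> dict:
    """Replace sensitive values with placeholder tokens the model never sees."""
    return {
        key: f"<{key.upper()}_MASKED>" if key in SENSITIVE_FIELDS else value
        for key, value in record.items()
    }

def build_prompt(record: dict) -> str:
    """Compose a personalization prompt from the masked record only."""
    safe = mask_record(record)
    return (
        f"Write a short promotional email for {safe['first_name']}, "
        f"a customer interested in {safe['interest']}. "
        "Do not mention contact details."
    )

customer = {
    "first_name": "Dana",
    "email": "dana@example.com",
    "phone": "555-0100",
    "interest": "running shoes",
}

print(build_prompt(customer))  # contact details never appear in the prompt
```

The design choice is simple: the model only ever receives the masked view, so even a verbose or hallucinating model cannot echo data it was never given.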
Keeping marketers in the loop ensures that AI-generated content adheres to brand guidelines, prevents the hallucinations AI models are prone to when information is insufficient, and builds trust by letting the model learn so that next time the task can be done at scale.
“Human oversight is essential,” agreed Voss. “Particularly with complex decisions or if empathy is required. It’s a balanced approach between combining AI’s capabilities with human judgment, and that can definitely help enhance trust in AI systems.”
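One lightweight way to operationalize that oversight is a review gate: AI drafts land in a queue, a marketer approves or rejects each one, and only approved copy is released for sending. This is a minimal Python sketch under those assumptions; the Draft, ReviewQueue, and Status names are hypothetical, not part of any Salesforce tool.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical human-in-the-loop gate: AI-generated copy is queued for
# review, and only marketer-approved drafts are released for sending.

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class Draft:
    content: str
    status: Status = Status.PENDING
    notes: str = ""

class ReviewQueue:
    def __init__(self) -> None:
        self.drafts: list[Draft] = []

    def submit(self, content: str) -> Draft:
        """Queue an AI-generated draft; nothing ships without review."""
        draft = Draft(content)
        self.drafts.append(draft)
        return draft

    def review(self, draft: Draft, approve: bool, notes: str = "") -> None:
        """Record the marketer's decision; rejection notes can feed the
        next prompt, which is how the workflow improves over time."""
        draft.status = Status.APPROVED if approve else Status.REJECTED
        draft.notes = notes

    def releasable(self) -> list[Draft]:
        """Only approved drafts are eligible to go out at scale."""
        return [d for d in self.drafts if d.status is Status.APPROVED]

# Example flow: submit a draft, a human approves it, then it can be sent.
queue = ReviewQueue()
draft = queue.submit("Spring sale: 20% off running shoes this week only!")
queue.review(draft, approve=True, notes="On-brand; ok to send.")
print([d.content for d in queue.releasable()])
```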
Watch the full webinar.
 
This was originally featured in the eMarketer Daily newsletter.

