
Fake reviews are tipping point for much bigger online trust scandal


The takeaway firm may have manipulated its star ratings, the CMA said

From fake reviews to ChatGPT, the decline of consumer trust is now a major online infrastructure problem, writes Paul Armstrong

The UK’s competition regulator has opened an investigation into fake reviews and manipulation involving companies including Autotrader, Feefo, Dignity, Just Eat and Pasta Evangelists. On the surface, this reads like a familiar reputational issue, but that’s not what’s happening here. Trust is no longer primarily a brand problem but an infrastructure failure. The systems that produce credibility, from reviews and rankings to recommendations and summaries, are degrading under pressure from incentives that prioritise engagement, visibility and scale over accuracy, while basic verification such as identity checks or proof of purchase remains weak or inconsistently applied.

Surface-level scandals mask a deeper shift. The Edelman Trust Barometer shows trust declining across most institutions, yet the issue now goes well beyond public sentiment. Trust signals are increasingly generated, filtered and amplified by systems that businesses don’t control. Reviews can be manipulated, rankings can be gamed and AI systems can summarise or distort information at scale. Information volume is rising at the same time as verification is weakening. Most people respond by disengaging. Faced with too many signals and too little clarity, attention drops and verification disappears. Even worse, many lean on bots for comfort.

Trust is now produced by systems, not brands

Fake reviews represent the visible edge of a wider structural change. Consumers rely on aggregated signals to make decisions, yet those signals are shaped by platforms optimising for activity rather than accuracy. Verification remains inconsistent. In many cases, identity checks are weak, proof of purchase is optional and enforcement is uneven. The result is an environment where manipulation is not only possible but economically rational. Amazon is an obvious target, and has been taking (forced) steps to curb this.
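The proof-of-purchase check described above can be sketched in a few lines. This is a minimal illustration, not any platform's actual system; the data model and names (`Review`, `verified_purchases`) are hypothetical.

```python
# Hypothetical sketch: gating reviews on proof of purchase, the kind of
# verification the column notes is often optional on real platforms.
from dataclasses import dataclass


@dataclass
class Review:
    reviewer_id: str
    product_id: str
    rating: int


def verified_purchases(orders: set) -> callable:
    """Return a filter that only accepts reviews backed by an order record.

    `orders` is a set of (reviewer_id, product_id) pairs representing
    completed purchases.
    """
    def accept(review: Review) -> bool:
        return (review.reviewer_id, review.product_id) in orders
    return accept


orders = {("alice", "sku-1")}
accept = verified_purchases(orders)
print(accept(Review("alice", "sku-1", 5)))  # True: order on file
print(accept(Review("bot-7", "sku-1", 5)))  # False: no proof of purchase
```

Even a gate this simple changes the economics the article describes: coordinated fake reviewing now requires real purchases, which raises its cost.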

The same pattern appears across the broader technology ecosystem. Product reviews on Amazon can be inflated or distorted through coordinated activity. Search rankings on Google are shaped by optimisation strategies that reward those who understand the system rather than those who are necessarily most accurate. Influencers routinely promote low-quality products to large audiences because incentives are tied to reach and conversion rather than long-term trust. Visibility often outweighs reliability. Dynamic pricing is seeing an uptick too. Algorithms adjust prices based on demand signals that can be manipulated or misread. Ad targeting optimises for clicks rather than truth. Recommendation engines prioritise behavioural relevance over accuracy. Each system works individually. The risk emerges when businesses rely on several of them simultaneously.

Platform design choices can actively erode trust further. On X, paid verification has replaced quality identity verification, turning what was once a signal of authenticity into a purchasable feature. Without rigorous checks behind it, the badge signals little more than willingness to pay. Credibility becomes performative rather than earned, creating a system where trust appears visible but lacks substance.

AI and advertising

A more immediate threat is emerging inside answer engines such as ChatGPT and Gemini. These systems are rapidly becoming the interface through which users access information, recommendations and even purchasing decisions. Commercial pressure is already shaping how those answers are produced. Advertising models and monetisation strategies are beginning to influence outputs, introducing a new layer where trust can be compromised at scale.

Recent moves by OpenAI to scale back its advertising ambitions reflect how quickly the tension has surfaced. Revenue incentives push toward commercial integration. Trust depends on perceived neutrality. Once those forces collide, credibility erodes quickly.

Answer engines compress multiple layers of information into a single output that users rarely verify. Rankings, reviews, summaries and external data are blended into one response. More often than not, the response simply becomes the decision. When distortion enters at that level, detection becomes significantly harder.

The tension is already visible elsewhere. Wikipedia, long treated as a reference layer for the internet, has moved to restrict or ban large-scale automated content generation from language models to protect the integrity of its information. A platform built on human verification is actively defending itself against systems designed to generate content at scale.

Authority is being given to systems we don’t understand

Inside organisations the same dynamic is taking hold. Data used for strategy flows through multiple layers before reaching leadership. External signals are filtered through platforms, data providers and AI tools. Narratives about markets, competitors and customers are shaped indirectly by algorithms rather than direct observation.

Authority begins to shift away from individuals toward systems that few people fully understand. Decisions feel faster because outputs arrive more quickly. The underlying process becomes harder to interrogate. Dirty data feeds flawed models. Models produce confident outputs. Those outputs influence strategy.

A dodgy dataset feeding multiple systems can affect pricing, forecasting, hiring or investment decisions across an entire organisation simultaneously. Errors scale faster than detection because distortion occurs upstream. Large technology companies have also shown a willingness to absorb reputational damage when the commercial upside outweighs the cost. Fines and regulatory action often become part of operating expenditure rather than a deterrent, creating an asymmetry where the systems shaping trust continue to operate even when reliability is questioned.

What executives should be asking now

Executives and senior leadership teams need to move beyond treating trust as a communications issue. Two areas likely require your immediate attention. First, data provenance. Where does the data used in decision-making originate, how is it verified and which external systems influence it before it reaches the business. Named dependencies matter. If strategy relies on Amazon marketplace data, Google search visibility, influencer channels or third-party datasets, those inputs should be treated as potential points of failure rather than neutral sources.

Second, system influence. Which platforms, algorithms or models shape what customers see and what internal teams believe. AI tools, recommendation systems and external data providers actively shape perception and decision-making. Leadership teams should be able to identify which systems have the greatest influence over revenue, brand perception and strategic direction. Neither question is abstract. Answers determine whether an organisation is operating on reliable information or on signals that have already been distorted upstream.
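The two questions above, data provenance and system influence, amount to an audit of decision inputs. Here is a minimal sketch of that idea, assuming a hypothetical `Signal` record; the source names are illustrative, not a real taxonomy.

```python
# Hypothetical sketch: tagging each decision input with its origin so that
# external dependencies (marketplace data, search visibility, third-party
# feeds) surface as potential points of failure rather than neutral sources.
from dataclasses import dataclass


@dataclass
class Signal:
    name: str
    value: float
    source: str          # e.g. "google-search", "internal-crm"
    verified: bool = False


def external_dependencies(signals: list) -> list:
    """List unverified, externally sourced inputs feeding a decision."""
    return [
        s.name for s in signals
        if not s.source.startswith("internal-") and not s.verified
    ]


signals = [
    Signal("search_rank", 3.0, "google-search"),
    Signal("churn_rate", 0.04, "internal-crm", verified=True),
]
print(external_dependencies(signals))  # ['search_rank']
```

An audit like this does not make upstream signals trustworthy, but it makes the dependency visible, which is the precondition for the leadership questions the article poses.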

Control, not messaging, is the real issue

Regulatory action will focus on enforcement. Platforms will be pushed to remove fake reviews and improve transparency. Enforcement will likely only address the most visible problems, leaving the more structural issues untouched. Trust has shifted from something companies communicate to something systems produce. Businesses that don’t understand how those systems work operate without full visibility. Companies that assume the signals they rely on are accurate risk building strategies on increasingly unstable foundations.

The gap between perceived reality and actual reality is widening, which is why it is the topic at the next TNN (The New Normal) on 7 May. Organisations that treat trust as infrastructure will see it. Others will continue optimising for signals they don’t control. The old adage that your brand is what Google says it is may never have been truer than right now.

Paul Armstrong is founder of the emerging tech advisory TBD Group and its intelligence community, TBD+
