The vast majority of those aged under 35 now say that using social media, search engines, or news aggregators is their main way of getting news online. Even though the use of social media as a source of news has seen little growth in recent years, the centrality of social media, search engines, news aggregators, and other platforms that use algorithms to select news continues to grow as direct access to news websites and apps increasingly becomes confined to older and more interested consumers.
The rapid growth of these ‘distributed’ platforms in the first part of the twenty-first century was initially accompanied by excitement and enthusiasm, but over time this gave way to concerns about possible negative effects – first expressed in the speculative notion of ‘echo chambers’, some years later in that of ‘filter bubbles’, and more recently in worries about the spread of misinformation. There has been extensive research into whether algorithmically driven platforms really do overexpose people to like-minded views, while filtering out information they are likely to disagree with, creating feedback loops that ultimately reshape their worldview. At least when it comes to news exposure in recent years, this does not appear to be happening (Ross Arguedas et al. 2022). For now, platform use appears to increase the diversity of people’s news repertoires – but platforms change, the debate continues, and the overall effect on people’s attitudes and beliefs is less well understood.
We also know little about people’s attitudes and beliefs about algorithmic news selection itself. But these matter because many of the worst fears about echo chambers and filter bubbles are predicated on a view of audiences as passive, credulous, and unreflective recipients of information.
Surveys can help us see whether these assumptions are correct. In this chapter we use data from the Digital News Report to explore how people feel about news selected by algorithms, while comparing it with selection by editors and journalists. We also look at what drives people’s views on news selection, how it varies by country, and how it has changed over time.
People are sceptical about algorithmic news selection
To measure people’s attitudes towards algorithmic news selection we asked respondents whether they agree that ‘having stories automatically selected for me on the basis of what I have consumed in the past, or what my friends have consumed, is a good way to get news’. To help interpret the results, we also asked respondents a similarly worded question about news selected by ‘editors and journalists’.1
The headline results reveal that audiences are quite sceptical about all these ways of selecting news. Just 19% across all countries where we asked these questions2 agree that having stories automatically selected on the basis of what their friends have consumed is a good way to get news, with 42% disagreeing. People take a more positive view of automatic selection based on their own past consumption, but just three in ten (30%) agree it is a good way to get news – with equal numbers disagreeing.
Perhaps surprisingly, this is slightly more positive than people’s views of news selection by editors and journalists (27%). People are clearly quite sceptical of all forms of news selection, whether done by humans or by algorithms – something we have referred to in the past as ‘generalised scepticism’ (Fletcher and Nielsen 2018). Part of the reason we refer to this scepticism as ‘generalised’ is because people’s views on all these methods of news selection are fairly strongly correlated (r ≈ 0.5 for each comparison), meaning that people tend to have a similar view on all three. If someone thinks that editorial selection is a good way to get news, they usually think the same about algorithmic selection – and vice versa. Journalists, academics, and industry observers, often with good reason, tend to see these selection methods as being antithetical to one another – but it is important to recognise that audiences do not think about the issue in this way.
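The kind of pairwise correlation behind this ‘generalised scepticism’ point can be sketched with a few lines of code. The responses below are invented purely for illustration (the report’s actual data come from large national surveys using agree/disagree scales), but they show how approval of one selection method tending to go with approval of the others produces positive correlations across all three pairs.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient for two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 5-point responses (1 = strongly disagree, 5 = strongly agree)
# to the three 'good way to get news' statements, one value per respondent.
# These numbers are made up for illustration only.
editorial    = [4, 2, 5, 1, 3, 4, 2, 5, 3, 1]
algo_past    = [5, 2, 4, 2, 3, 4, 1, 5, 3, 2]
algo_friends = [4, 1, 4, 2, 2, 5, 2, 4, 3, 1]

# Respondents who approve of one selection method tend to approve of the
# others, so each pairwise correlation comes out clearly positive.
print(pearson(editorial, algo_past))
print(pearson(editorial, algo_friends))
print(pearson(algo_past, algo_friends))
```

With real survey data the same calculation would typically be run over thousands of respondents per country; the r ≈ 0.5 figure cited above reflects a moderately strong positive association of this kind.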
People’s scepticism has changed little over time
If we compare these results with those from the same questions in 2016, we can see that people’s views on the issue have not changed very much in the last seven years – at least at the headline level. Averaging the data across the same set of countries, we see there has been a 6 percentage point fall in the proportion that think their past consumption is a good basis for automated news selection, and a smaller 3 percentage point fall in approval of editorial selection and social recommendations. It is important to note that the proportion who do not think these are good ways to get news has remained stable, with 4–6 point increases in the middle ‘neither agree nor disagree’ category. This suggests that approval has morphed into ambivalence – but ultimately these are small changes, especially considering the seven-year gap and everything that has happened in between.
As ever, these averages mask variation at the country level. The chart below shows that the UK, Denmark, and Hungary have the lowest levels of approval for both types of algorithmic news selection, whereas in Spain, South Korea, and Brazil, approval is almost twice as high. Although it is not immediately obvious from the chart, there are a small number of high-trust, newspaper-centric countries in Northern and Western Europe – such as Austria (33%), Sweden (30%), and the Netherlands (34%) – where the figures for editorial selection, though still low, are slightly higher than those for both types of algorithmic selection.
The chart also shows how, in most countries, approval for all three modes of selection has fallen since 2016. Australia is something of an outlier as the only country where approval has risen across the board. In some countries, such as Canada, Brazil, and the UK, the changes from 2016 are relatively large – especially for news selected by algorithms on the basis of past consumption. However, although the downward trends are fairly consistent, in many cases the falls are of 3pp or less, and not statistically significant. And we should remember again that in most cases approval has been replaced with ambivalence.
Interest and trust in news increase approval of news selection methods
Returning to the data from 2023, we see that approval for each method of news selection varies by interest in news. Those who say they are ‘very’ or ‘extremely’ interested in news are considerably more likely to agree that each is a good way of getting the news – not just when it comes to editorial selection, but also for both methods of algorithmic news selection. Approval for automatic selection based on past behaviour and for selection by editors and journalists increases more with interest than approval for automatic selection based on friends’ consumption – but even there the increase is clear.
Similarly, when we look at the results by different levels of trust in news, we see that approval for both algorithmic news selection and editorial news selection is significantly higher for those with higher levels of trust – with around half of those who ‘strongly agree’ that they can trust most news most of the time agreeing that automatic selection based on past behaviour (52%) and selection by editors and journalists (55%) are good ways to get the news.3 Again, the parallel increases in approval for all three methods of news selection suggest that people do not have diverging views on algorithmic news selection versus editorial news selection.
We can also explore the link between different types of news selection and what people say is their main way of getting news online. Although we might expect large differences between those who say their main way of getting news online is by going direct to news websites and apps and those who say their main way is to use a platform that relies on algorithmic selection (via social media, a news aggregator, or using a search engine to search for a news topic), in fact, the numbers and patterns stay broadly the same. It is true that people who say their main way of getting news online is via algorithmically driven platforms are a little more likely to approve of news automatically selected based on past behaviour (+7pp) and based on friends’ consumption (+8pp) – but it is not the case that people with a preference for direct access and people with a preference for platform access have contrasting views on editorial and algorithmic news selection.
People worry about over-personalisation
We also used the survey to ask people how they feel about some of the risks commonly associated with algorithmically selected news – more specifically, the risk that more personalisation might mean they are not shown certain types of information. Across all countries where we asked these questions, nearly half agree that they ‘worry that more personalised news may mean that I miss out on important information’ (48%) and ‘challenging viewpoints’ (46%). Figures for both are higher among those with higher levels of interest and trust in the news, and at the country level, concern about both is highest in the UK, USA, Australia, and Norway.
The average figures are slightly down from 2016, when just over half (57% and 55% respectively) said they ‘tend to agree’ or ‘strongly agree’ with these statements. Although there was a slight increase in the proportion that disagree (+3pp), the numbers for ‘neither agree nor disagree’ increased by 6–7pp – again highlighting an increase in ambivalence. Nonetheless, there is still considerable public concern about the potential effects of over-personalisation, even as algorithmically driven news access via search, social, and aggregators becomes more important in many parts of the world.
People’s attitudes towards news selection – whether it’s done by algorithms or by editors and journalists – can be characterised by ‘generalised scepticism’. People are clearly sceptical about whether automatic news selection based on past behaviour or friends’ consumption is a good way to get news, and they worry about missing out due to over-personalisation – but they are equally wary of how editors and journalists select news. Furthermore, people’s views on algorithmic and editorial selection are often closely aligned. If they are sceptical of one (and they often are), they are likely to be sceptical of the other, too. Few people, for example, have a positive view of editorial selection while holding a negative view of algorithmic selection based on past behaviour.
These views have changed little since 2016 – though there’s some evidence that people have become a little more ambivalent over time. This may be because we are past the peak of concerns about echo chambers and filter bubbles (even as concern over misinformation is as high as ever), or it could be because most people are now using more social networks – each using algorithmic news selection in different ways – making it harder for people to have a consistent positive or negative view.
We might take some comfort from the apparent scepticism surrounding algorithmic selection, as it suggests that people interpret what they see on platforms quite cautiously. Even as platforms become increasingly important for how people get news, most people are far from enthusiastic about how those platforms select news for them. Platforms, then, rather than seeing their increased importance as a ringing endorsement of users’ news experience, should perhaps remember that this is partly due to declining levels of interest in news, and associated falls in direct access to news websites and apps, which have made platforms relatively more important. Most people do not come to platforms for news specifically but come across it when they are there for other reasons.
Publishers should perhaps keep in mind that most people do not see their selection processes as markedly different from those employed by platforms. In most countries people think that the automatic assessment of their past behaviour will deliver better results for them than the considered judgement of editors and journalists. This suggests that publishers have some work to do to convince audiences of the value they add as experts in news selection, while also pointing to the limits of simply asserting the value of that expertise when trying to win back trust. Algorithmic news selection is far from perfect, but editorial selection isn’t perfect either – and people seem to know it.
1 To help respondents understand the question, we included a preamble which read ‘Every news website, mobile app or social network makes decisions about what content to show to you. These decisions can be made by editors and journalists or by computer algorithms analysing information about what other content you have used, or on the basis of what you and your friends share and interact with on social media.’
2 So that we could compare over time, we asked the questions on algorithmic selection in all countries included in the Digital News Report 2016 (when we first fielded this battery of questions). The countries are listed in the second figure in this chapter.
3 Other research, based on independent analysis of the 2016 data, found similar associations between interest and trust in news and each selection method, as well as negative associations for education and age. Although the direction of the association did not vary by selection method, the strength of the association did vary for some variables (Thurman et al. 2019).