
Astroscreen announces $1M funding to detect social media manipulation

* UK start-up reveals tech to expose social media disinformation attacks
* Uses machine learning and human intelligence to help brands with early detection of reputation-damaging social media campaigns
* $1M initial funding raised

April 30, 2019, LONDON, UK. Press Dispensary. UK-based start-up Astroscreen today announces that it has secured $1M in initial funding to progress its pioneering technology which identifies carriers of disinformation on social media. Techniques include coordinated activity detection, linguistic fingerprinting and fake account and botnet detection.

The funding round was led by Speedinvest, Luminous Ventures, the UCL Technology Fund (which is managed by AlbionVC in collaboration with UCLB), AISeed and the London Co-investment Fund.

Social networks are now critical to how the public consumes and shares the news. However, they were built to reward virality, which makes them easy to manipulate and therefore to weaponise on a global scale for commercial and political gain. Electoral interference is perhaps the best-known example, with foreign intelligence agencies accused of using fake accounts and bots to meddle with the political process and erode trust in democracy. However, commercial brands are just as likely to be targeted, suffering powerful adverse effects. This is the focus of Astroscreen.

At the heart of disinformation attacks lie fake social media accounts - bots (automated) and 'sock-puppets' (human-run). These networks of bots and sock-puppets can be used in a highly organised way to spread and amplify minor controversies or fabricated and misleading content. Once an attack gains steam, it is reproduced by genuine users, influencers and then bona fide news organisations.

As well as being used in politics, these networks are increasingly deployed in commercial attacks, already targeting global brands ranging from Nike and Starbucks to pharmaceutical giants (1).

Lomax Ward, Partner, Luminous Ventures, said: "Luminous is delighted to be backing Astroscreen. Luminous believes in backing visionary founders using technology to solve genuine problems, and Astroscreen is doing just that. The abuse of social media is a significant societal issue, and Astroscreen's defence mechanisms are a key part of the solution. We are excited to be working with Juan and Ali."

Astroscreen CEO Ali Tehrani previously founded a machine-learning news analytics company which he sold in 2015, before fake news gained widespread attention. Tehrani said: "While I was building my previous start-up I saw first-hand how biased, polarising news articles were shared and artificially amplified by vast numbers of fake accounts. This gave the stories high levels of exposure and authenticity they wouldn't have had on their own.

"The use of such disinformation to discredit brands has the potential for very costly and damaging disruption when up to 60% of a company's market value can lie in its brands."

CTO Juan Echeverria, whose Ph.D. at UCL was on fake account detection on social networks, made headlines in January 2017 (2) with the discovery of a massive botnet managing some 350,000 separate accounts on Twitter. Echeverria said: "Social media platforms are saturated with fake accounts and botnets and are losing this cat-and-mouse game because botnet makers are continuously finding new ways of avoiding detection. As they incorporate conversational AI (3) and deepfakes (4), these botnets will get more sophisticated by the day."

Ali Tehrani concluded: “Social media platforms themselves cannot solve this problem because they’re looking for scalable solutions to maintain their software margins. If they devoted sufficient resources, their profits would look more like a newspaper publisher than a tech company.  

“So, they’re focused on detecting collective anomalies – accounts and behaviour that deviate from the norm for their userbase as a whole. But this is only good at detecting spam accounts and highly automated behaviour, not the sophisticated techniques of disinformation campaigns.”

“Astroscreen takes a wholly different approach, combining machine learning and human intelligence to detect contextual (instead of collective) anomalies – behaviour that deviates from the norm for a specific topic. Taking Brexit as an example, the inauthentic Twitter accounts that contributed to the conversation were only inauthentic in the context of Brexit and went undetected by Twitter’s scalable spam detectors.

“Our technology monitors social networks for signs of disinformation attacks, informing brands if they're under attack at the earliest stages and giving them enough time to mitigate the negative effects."

Arnaud Bakker, Associate, Speedinvest, said: "There has been a clear increase in disinformation attacks in recent years, and Astroscreen's proprietary solution is perfectly situated to tackle this widespread problem of social media-induced reputation damage which global brands are only just waking up to."

David Grimm, Investment Director, UCL Technology Fund, said: “This is a fantastic example of cutting edge university research being applied to solve a massive problem that plagues social media giants.”


Tweet this! "UK start-up Astroscreen announces $1M seed round funding to expand platform powered by ML and human intelligence to expose social media disinformation attacks"

Follow Astroscreen
On Crunchbase:
On AngelList:

- ends -

Notes for editors

About Astroscreen
Astroscreen detects disinformation campaigns on social networks, using machine learning and human intelligence.

(1) Examples of recent brand attacks: (paywall)
(2) Headlines for January 2017 “Star Wars” botnet discovery:
MIT Technology Review:
Business Insider:
New Scientist:

(3) Conversational AI: a form of AI (artificial intelligence) that allows software to communicate with people using everyday, human-like natural language, via social media posts/tweets, websites, apps and devices. Commercially, such systems are becoming commonplace in customer service environments, whether using voice (eg phone) or text (eg responses to website visitors).

(4) Deepfakes (‘deep learning’ + ‘fakes’): fake videos constructed automatically by AI, combining or superimposing separate videos and/or soundtracks so as to portray people saying things or performing actions that never occurred in reality – for example, altering a politician’s words or gestures to make them appear to say something they never actually said.

For further information please contact
Courtney Glymph, Astroscreen
Tel: 07867 488769
