Background
The digital age has fuelled a form of exploitation in which children can be sexually abused online using a computer and webcam, or even just a mobile phone. Abusers located anywhere in the world can exploit children without ever leaving their home and, worse still, are shielded by the virtual nature of the internet. These heinous crimes may be virtual, but the impact is real, with devastating consequences for those affected.
By the numbers
Images related to child sexual exploitation reported in 2019
Estimated % of content scanned globally for illegal content
Videos related to child sexual exploitation reported in 2019
% increase in child sexual exploitation (CSE) reports year-over-year

OUR CHALLENGES
In 2020 alone, over 60 million Child Sexual Abuse Material (CSAM) images and videos were reported to authorities across the globe. While the legal definition of CSAM varies by country, the common denominator is an image or video that shows a child engaged in, or depicted as engaged in, explicit sexual activity.
CSAM can be found on virtually any storage or communication system, from social media and email to file and image hosting and messaging, putting countless people at risk.
Our Mission and Purpose
Create best-in-class technology to fight the spread of Child Sexual Abuse Material (CSAM) in society, protecting the most vulnerable among us.

DIRECT IMPACT
Named after our country director in NFS Thailand, where it all started in 2007, our latest venture Krunam was created to combat the growing volume of CSAM online. With 82% of the children appearing in CSAM aged 7-14, and 92% of them young girls, this is a massive global crisis that demands attention. We've joined forces with VigilAI, a London-based leader in AI/deep learning and computer vision, and JustBusiness, a Bay Area-based leader in founding and supporting impact-focused businesses. With our joint expertise in technology, business development and survivor services, Krunam is designed from the ground up to address the complex ecosystem surrounding the fight against CSAM.
Current technology identifies less than 10% of CSAM, and we're committed to doing better with the next generation of CSAM detection: our VigilAI CAID Classifier. Developed in collaboration with the UK's Home Office, the Classifier goes beyond perceptual hashing, which can only identify previously known CSAM. Instead, it uses visual cues learned from training on CAID (the Home Office's Child Abuse Image Database and the largest database of CSAM in the world), which contains millions of CSAM examples, allowing it to recognize material that has never been catalogued before.
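To make that contrast concrete, here is a minimal sketch of how hash-based matching works, using a simple average hash over an 8x8 grayscale grid. Everything in it is an illustrative assumption: the function names, the 64-bit hash and the distance threshold are ours, not VigilAI's or any production system's (real deployments use far more robust hashes, such as PhotoDNA).

```python
# Minimal sketch: why perceptual hash matching only catches KNOWN content.
# An image is modelled as an 8x8 grid of grayscale values (a real pipeline
# would decode and downscale the picture first). All names and the
# threshold below are illustrative, not any vendor's actual implementation.

def average_hash(pixels):
    """64-bit average hash of an 8x8 grayscale grid."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # One bit per pixel: is it brighter than the image's mean?
    return sum(1 << i for i, p in enumerate(flat) if p > mean)

def hamming_distance(a, b):
    """Number of bits on which two hashes differ."""
    return bin(a ^ b).count("1")

def is_known(image_hash, known_hashes, max_distance=5):
    """Flag an image only if its hash is close to one already on file.
    Material that was never hashed before sails through unmatched."""
    return any(hamming_distance(image_hash, h) <= max_distance
               for h in known_hashes)
```

A classifier, by contrast, scores the image content itself rather than looking up a fingerprint, which is what lets it flag material that appears in no hash database.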
Our work impacts the many who are at risk: survivors of abuse who are re-victimized by the distribution of CSAM; content moderators who suffer psychological harm reviewing CSAM in order to find perpetrators and remove it from digital platforms; and businesses that risk damaging the brand and online community they have spent years cultivating.