Using ML to Better the World

AI-manipulated and deepfake videos might be putting "bad AI" in the limelight lately, but I recently had an experience that showed one of the ways AI is making the internet a safer place.


By Thomas Fowler

I had what was, for me, the oddest experience a few months ago. While I was filling up our family car with petrol, my two boys, aged 7 and 6, were looking at the warning stickers next to the petrol pump. As kids are wont to do, they were telling me what was allowed and what was not allowed: “No matches! ...No phones!” Then, following a pause, “What’s that third one, dad?”

“Oh. That’s the no-smoking sign. You can’t smoke here...” I remarked. Surprisingly, to me at least, he replied, “What’s smoking?” And so the next few minutes were spent explaining to a 7- and 6-year-old what smoking was.

I found this astounding because children these days can consume more content than ever before, entirely on demand, and the sheer volume available means they can tailor their choices at will. So how was it that, with everything they watch online, neither of them had ever encountered smoking?

You see, in our household, there were a few obvious candidates responsible for their sheltered view of the world:

  1. Nobody in our family smokes - not my wife, not me, and none of their grandparents, aunts, uncles or cousins - so my boys have never seen someone smoke.
  2. Broadly speaking, smoking has disappeared from mainstream popular culture - certainly neither Iron Man nor Captain America smokes.
  3. My boys consume all their on-demand media through the YouTube Kids app. This media is automatically screened and filtered by Google’s video AI platforms, where content not suitable for children is filtered out or removed entirely.

It was this last reason that got me thinking about the use of ML and AI in our everyday lives in such a way that makes a positive difference.

The amount of content creators consistently upload to platforms like YouTube is astounding - 720,000 hours of video are uploaded to YouTube every day. That’s over 500 hours uploaded EVERY MINUTE!

It is impossible for human beings to filter and moderate content at this scale. For these platforms to be viable for children, organisations like Google must set machines to work finding, categorising and filtering content that is unsafe for children, so that platforms like YouTube Kids are safe enough that I can happily leave my boys to consume the content they want, on demand. In Q1 of 2019, YouTube removed 8.3 million videos. Of these, 76% were flagged by machine learning classifiers, and over 70% of those were removed before they had received a single view. It is this level of automated content processing that explains how, for nearly 8 years, my two young boys had never seen a video of someone smoking.

(A note here for parents: this ML capability employed by YouTube does not, of course, negate our responsibility as parents to understand what our children are consuming - you should always be aware of what your kids are up to online. But it goes a long way towards making our job as parents not just easier, but even remotely possible, given the sheer volume of what is available on YouTube.)

The dangers of the user-supplied content model are obvious: the sheer amount of content, and the ability of those who seek to do harm to supply it, mean that content consumption is fraught with danger for adults and children alike. For example, we live in a world where bad actors can use technologies such as deepfakes to influence audiences in ways that are nigh impossible for human beings to detect. The most practical defence against deepfakes is, itself, machine learning: we need AI-driven systems, working around the clock, to help us keep the content our loved ones consume safe.

It is only through the process of AI-driven automated content processing, categorisation and filtering that we can even begin to safely use platforms like YouTube and others.

So what does this all mean for organisations? Over the last five years and more, Google has shown its dedication to making ML and AI available to everyday organisations to automate their workloads. From the open-sourcing of TensorFlow to the release of Cloud ML Engine, Google has made the tools available to bring ML, AI and automation into your organisation in a mature, scalable and reliable fashion.

Do you want to process and categorise video like YouTube does? Use the Cloud Video Intelligence API from Google Cloud Platform (GCP). What about training, hosting and managing custom models built in TensorFlow? Google has custom hardware (Cloud TPUs) and the AI Platform to train and serve your models at scale. Want to process text or voice? Google offers pre-trained Speech-to-Text and Natural Language models that rank among the best in class.
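To make that first suggestion concrete, here is a minimal Python sketch of screening a single video with the Cloud Video Intelligence API's explicit content detection feature. The bucket URI is a placeholder, and the threshold of 4 (the LIKELY level of the API's likelihood scale) is an assumption chosen for this example, not a recommendation from Google.

```python
def frame_is_safe(pornography_likelihood, threshold=4):
    # The API scores each frame on a likelihood scale from 1
    # (VERY_UNLIKELY) to 5 (VERY_LIKELY); 4 corresponds to LIKELY.
    # A frame passes only if its score is below the chosen threshold.
    return pornography_likelihood < threshold


def screen_video(gcs_uri):
    """Return True if no frame in the video is flagged as explicit.

    Requires the google-cloud-videointelligence package and GCP
    credentials; gcs_uri is a placeholder such as
    "gs://my-bucket/family-video.mp4".
    """
    from google.cloud import videointelligence

    client = videointelligence.VideoIntelligenceServiceClient()
    operation = client.annotate_video(
        request={
            "features": [
                videointelligence.Feature.EXPLICIT_CONTENT_DETECTION
            ],
            "input_uri": gcs_uri,
        }
    )
    # Annotation runs asynchronously on Google's side; result() blocks
    # until the long-running operation completes.
    result = operation.result(timeout=300)
    frames = result.annotation_results[0].explicit_annotation.frames
    return all(frame_is_safe(f.pornography_likelihood) for f in frames)
```

In practice you would call `screen_video("gs://your-bucket/your-video.mp4")` with credentials configured, and route any video that fails the check to human review rather than deleting it outright.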

The point is this: organisations like Google and YouTube have been solving problems using ML and AI for nearly a decade. We can stand on the shoulders of giants and use these technologies to not just improve the way we do business but improve the lives of our customer bases as well.

How will you use ML and AI to improve your business and, perhaps, even make the world a better and safer place?

DotModus is a Google Premier Partner with over 120 certified Data Scientists and Data Engineers. If you’re interested in understanding how Google’s technology can change your business, speak to us.

Thomas Fowler, CEO, DotModus
