In the last ten years machine learning has become ubiquitous and touches all our lives in ways that were unimaginable before. Machines can now make decisions that once required considerable human effort, at much greater speed and lower cost, and with little human oversight. As a result, machines not only have a greater influence than ever before in shaping our lives but are also under increased scrutiny from both regulators and user rights advocates.
The adage “with great power comes great responsibility” has long been in use, from the French Revolution to superhero comics. It has never been truer, as the great power that machine learning wields is now in the hands of almost anyone building a software product. Its reach ranges from granting people access to funds that can alter their life path, to medical diagnoses that can dramatically raise or lower their life expectancy, to social media feeds that not only serve content that keeps them engaged but can also polarise their beliefs by reinforcing their existing notions.
With the growing influence of AI technologies and the corresponding scrutiny, the way AI development happens is beginning to change. The full data science lifecycle needs to incorporate the elements of responsible AI, and the professionals who know how to design and implement these elements will be the ones employers look for.