Dan Faggella, founder and head of research at Emerj Artificial Intelligence Research, discussed the sectors and industries adopting AI as well as the challenges companies face when developing their AI strategies in the rapidly changing market.
Mr Faggella is an expert on use-cases and return on investment (ROI) for AI in business, and regularly works with global enterprises in financial services and security.
He has spoken for some of the world's largest and most reputable organisations, including the World Bank, United Nations, Organisation for Economic Cooperation and Development (OECD), INTERPOL and many others.
SPUTNIK: Where are some of the fastest-growing sectors for the application of AI? How do AI firms and suppliers aim to improve these sectors, and can you provide a couple of examples being implemented by specific firms?
Dan Faggella: The sectors deploying AI are mostly digitally-native sectors like FinTech, eCommerce, and online media such as Facebook, Twitter, Google Search. These industries already have the tech talent, data fluency, and data infrastructure to put AI into use, but larger sectors struggle with this.
Among larger, stodgier sectors, financial services such as banking and credit cards are spending more on AI than many others, including manufacturing, oil and gas, and transportation. They also seem to be adopting it a bit faster, thanks in part to massive coffers of money and a reasonably strong digital transformation over the last 10 years.
Fraud is currently a powerful and extremely popular use-case here.
SPUTNIK: Artificial narrow intelligence allows firms to complete mundane tasks with increased efficiency and a lower margin of error (MoE). How do algorithms achieve this level of accuracy, and which tasks demand minimal MoE? Can algorithmic bias or prejudice, shaped by humans, increase errors in other ways?
Dan Faggella: There are oodles of tasks that can be done by AI in ways that reduce costs and error.
We advise our enterprise clients to stratify these opportunities based on two primary factors: the financial impact of automation or lower error rates, and the reliability of data intake and measurement. The latter point is important.
If a bank wants to lower its margin of error in detecting credit card payment fraud, it needs access to lots of organised data about these fraud instances, such as whose card was used, for what amount, at what time of day, in what location, with what merchant, etc.
But if a bank wants to improve the performance of its sales staff, it is much more challenging to determine the data to intake.
Unlabelled audio data from sales calls or unstructured text in sales emails isn't nearly as reliable as the data associated with credit card payments. That isn't to say the sales example is not viable for AI, only that the reliability of data intake and measurement for credit card fraud is better.
SPUTNIK: What are some of the most common questions firms ask when choosing AI in their businesses? How can firms increase their ROI and minimise risk when choosing an AI vendor or applied AI programme? How can AI consultancies such as yours help companies navigate amid the rise of AI?
Dan Faggella: Firms often ask the wrong questions, and this is precisely the problem. They believe that AI is like IT, and can be 'plugged in' to solve near-term problems.
While this is sometimes the case, AI requires many elements of digital maturity that leadership doesn't understand. As a result, they generally make a scattered set of one-off AI investments instead of pursuing a set of projects that both build long-term AI maturity and deliver near-term outcomes, namely cost savings, revenue increases, etc.
Our company combines these AI adoption and ROI best-practices with a map of the full landscape of possible AI use-cases, so that leaders can select from a complete menu, rather than being 'sold to' by biased vendors.
Being unbiased and having direct connections to the smartest people in the field - these two factors have probably meant more to our enterprise customers than anything else.
SPUTNIK: What is the future of applied AI in the workplace? Do you see a more disruptive or symbiotic human-machine relationship with AI? Will AI also become a ubiquitous or niche industry, and are fears of AI replacing human labour valid?
Dan Faggella: There's so much to talk about here. First, AI tools will become vastly more accessible. Right now, most AI tools require data science skills and a specific kind of training to use productively, but future tools will be much more like today's software.
AI will be a kind of 'layer' of intelligence on top of or within almost any enterprise software-as-a-service (SaaS) product. Software will be able to predict and prompt users to solve problems, rather than lying passive, waiting for the next keystroke.
Humans will still orchestrate and set up the machines, monitor their performance and adjust them to get better results. In some lines of work, humans will have AI systems as taskmasters - directing them to their next work item.
In other lines of work, humans will have AI as constant companions, prompting them with helpful notices or taking action for them automatically. The line between taskmaster and companion will become blurred and subjective, ultimately impacting us well beyond economy and work.