
More start-ups, applications in health-care, questions around flaws and CO2 emissions: AI predictions for 2024


Artificial intelligence, or AI, has been the buzzword on everyone’s lips for the last few years. But what’s to come in the next 12 months of development?


In a field that’s constantly evolving, there’s a lot to look forward to, including AI integration in your doctor’s office, more work on AI diagnostic tools across health-care, and development spreading beyond big tech companies to smaller players.


At the same time, some of the issues with AI are starting to bubble to the surface: ballooning flaws such as bias and misinformation, a critical lack of oversight or regulation and a growing debate around ethics, job security and the true environmental toll of AI processing.


Are we due for a year of discovery, or an AI reckoning?


Here’s what experts think might be in store for the world of AI development in 2024.


NOT JUST THE REALM OF BIG TECH


The headlining news in AI over the last few years has largely revolved around a few big names—but that’s something that’s starting to change, according to researcher Sasha Luccioni, who specializes in the societal and environmental impacts of AI.


“I definitely think there’s gonna be more democratization or at least distributed deployment of AI,” she told CTVNews.ca in a phone interview in December.


“Currently, we’ve seen kind of a concentration of power in terms of big tech and OpenAI doing most of the deployments, especially in generative AI models, but I really see that shifting as more and more start-ups and smaller companies in general are catching up and doing their own cool stuff with AI.”


OpenAI is the team behind the branding juggernaut of ChatGPT, which is one of the most well-known generative AI tools.


One of the issues that has kept AI technology in the realm of tech giants is the sheer computing power needed to train and run AI models. This is easy for established companies such as Google or Microsoft, but for the average person, it’s basically impossible, according to Luccioni.


“With generative models, or large language models, they’re really quite big,” she said, adding that large language models need “anywhere from a thousand to even a couple of thousand GPUs, which are specialized hardware. So that really adds up quickly if you have to either rent it on the cloud or you have to build your own cluster. Most data models, you can’t train on a single computer anymore, you need a massive amount of infrastructure.


“But actually, if you share models, you can build incrementally, you don’t have to start from scratch.”


Luccioni said she sees the impact of this collaboration through her work at Hugging Face, an open source machine learning platform for both AI experts and enthusiasts to share and develop models.


One big development towards more collaboration came earlier this year, when Meta publicly released Llama (Large Language Model Meta AI) to help those in the AI research community. Meta followed up in the summer by publicly releasing Llama 2.


“You don’t have to be like, ‘Okay, I’m just gonna train this massive model for a million GPU hours,’ you can take an existing one like Llama, or any of the open source models, and you can adapt it—you tune it, essentially, to your own use case, or your own data set. And that way, you get the advantage of this massive model without having to spend a million dollars in compute.”


If a bank, for instance, wanted a customer service chatbot, its programmers could take an existing generic language model, plug in data on common customer queries and interactions with bank employees, and adapt the model to the bank’s specific needs, Luccioni explained.
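To make that concrete, here is a minimal sketch of what that kind of adaptation could look like using the open source Hugging Face "transformers" and "peft" libraries. The model name, data file and training settings below are illustrative assumptions, not details from Luccioni or any real bank.

```python
# Illustrative sketch of adapting an existing open model to a custom use case.
# The model name, data file and hyperparameters are placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model
from datasets import load_dataset

base = "meta-llama/Llama-2-7b-hf"  # an existing open model, not trained from scratch
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA adds a small set of trainable weights on top of the frozen base model,
# which is what keeps adaptation far cheaper than full training.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Hypothetical file of past customer-service exchanges, one "text" field per example.
data = load_dataset("json", data_files="bank_support_conversations.json")
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bank-support-model", num_train_epochs=1),
    train_dataset=data["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()  # tunes the small adapter, not the whole multi-billion-parameter model
```

Only the small adapter weights are trained in a setup like this, which is why adapting an existing model costs a fraction of what training one from scratch would.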


“So we’re seeing a lot of, especially start-ups that don’t have access to this massive amount of compute, pivoting to more adaptive training.”


She predicts that we’ll see even more sharing of tools and information in the AI research community in 2024, potentially spurring broader applications for the technology.


THE GROWING FIELD OF MEDICAL AI


The field of medicine is one of the most exciting frontiers for the practical applications of AI, according to Oishi Banerjee, a PhD student in computer science at Harvard.


She works at the Rajpurkar Lab, where they focus on advancing medical AI, and said that AI in medical imaging is poised to take off in the near future.


“I’m personally hopeful that in medicine, 2024 might be the year that we start seeing image models that are really specialized for medicine,” she told CTVNews.ca in a phone interview. “For example, that you could just give (an AI image model) a CT scan, and it will automatically say, ‘Oh, this one’s the liver and this one’s the tumor. This one’s the pancreas.’ That would be amazing.”


While models already exist for very specific tasks in the field of health-care, like isolating a kidney in a CT scan, Banerjee said, the expansion of image generation capabilities in AI hasn’t hit health-care yet. But it’s on the way.


“We’ve already seen incredible progress in these models getting way more versatile and adaptable in the natural image domain, there’s a lot of active work to bring that capability and that versatility over to medicine,” she said.


“Diagnostic tools are likely to get more versatile and better during 2024, just because of how much more versatile and how much more powerful the underlying AI technologies have gotten.”


She sees many other potential applications on the way as well, from patient-facing chatbots to AI models that scan the latest medical literature to language models helping radiologists with reports.


An AI model trained on a database of patients’ medical histories and their responses to certain treatments may be able to pick up on patterns in how patients respond to a specific treatment.


“If the model can pick up on those patterns, which sometimes humans have not yet figured out, then you can use that model to take in a brand new patient, look at their scans, look at their history, and say, ‘Oh, I think that the best possible drug for this person is going to be drug A and not drug B.,’” she said, noting that this isn’t her area of expertise.
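As a toy illustration of that idea, and emphatically not a clinical tool, a model of this kind might compare a new patient’s predicted outcome under each drug. Everything below, from the features to the patients, is invented for the example.

```python
# Toy illustration only: predict treatment response from invented patient records.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features: [age, biomarker level, prior treatments, drug given],
# where the last column is 0 for drug A and 1 for drug B; label = 1 if the patient improved.
X = np.array([
    [54, 1.2, 0, 0],
    [61, 0.8, 1, 1],
    [47, 1.5, 0, 0],
    [70, 0.9, 2, 1],
])
y = np.array([1, 0, 1, 0])

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# For a brand new patient, compare the predicted chance of improvement under each drug.
new_patient = [58, 1.1, 1]
p_drug_a = model.predict_proba([new_patient + [0]])[0][1]
p_drug_b = model.predict_proba([new_patient + [1]])[0][1]
print("Drug A" if p_drug_a > p_drug_b else "Drug B", "looks more promising for this patient")
```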


In the future, an AI model that has been trained to understand the full scope of chemical properties could also suggest new molecules to test as part of drug discovery based on specific disease presentations, according to Banerjee.


“So that’s exciting,” she said—but those advanced applications are still years away.


Banerjee is confident we are going to see a large increase in AI on the administrative side of health-care in 2024.


“That sounds less appealing or less exciting, but I cannot overstate … how much time doctors are spending on administrative paperwork,” she said.


“I would say in health care, 2024 might be the year that AI gives your doctor more time to spend with you.”


This prediction was echoed by Timothy Chan, a professor at the University of Toronto and Canada Research Chair in Novel Optimization and Analytics in Health.


“The least sexy applications, things like improving processes, streamlining back office or routine tasks, will probably have a big impact without it being noticed by the average patient,” he told CTVNews.ca over email, adding that AI “will become more and more central to the planning and delivery of care.”


However, it’s not all smooth sailing in medical AI.


A recent study performed in the U.S. found that clinicians, when presented with a good AI model, saw their own diagnostic accuracy increase slightly. However, when researchers presented clinicians with an AI model trained to be biased, the clinicians’ accuracy in treatment recommendations fell by 11 percentage points. Study authors said this means clinicians weren’t able to recognize and adjust for the bias of the AI model.


Poor AI models replicating bias is something that experts are working to combat, Banerjee said.


“I do see a push within the medical AI community that says when you are evaluating a new medical AI tool, try to get a diverse population, don’t just test it on one hospital in one affluent city, try to get many different groups of patients, do subgroup analyses to make sure that you’re not screwing over one historically marginalized group,” she said.


“Many people are very aware of this issue in medical AI, it is being taken seriously.”


TACKLING THE SHORTCOMINGS OF AI: BIAS, TRUTH AND CLIMATE TOLL


The issue of bias permeates many AI models across different fields, even when researchers aren’t creating deliberately biased models to test human perception.


One study that Luccioni worked on found that when AI models were asked to generate images of different professions, they were regurgitating societal biases.


“Doctors were 95 per cent men, and so were lawyers and CEOs, and nurses were women and stuff like that,” she said.


And bias isn’t the only problem: AI often produces content that is factually inaccurate.


“Generative models don’t have a concept of truth,” Luccioni said. “There’s no fact checking involved.”


Like a young child just trying to guess the answer that a parent wants to hear, AI language models or chatbots often invent things in response to more obscure or nuanced questions or prompts. They’ve been caught creating citations from books that don’t exist, or taking real people and inventing quotes that they never said.


AI developers call this “hallucination.” Right now, it’s impossible to create an AI language model that is incapable of making things up. And depending on the application developers want for that model, this is a big problem, according to Luccioni.


Earlier this year, a U.S. non-profit organization aimed at supporting people impacted by eating disorders had to take down an AI-powered chatbot tool because users were reporting that the chatbot was giving “harmful” advice, including telling one user to count calories and lose weight after the user told the chatbot they had an eating disorder.


While bias and misinformation are flaws that have been discussed before, there’s a hidden cost to AI that Luccioni is hoping will get more attention in 2024: its unknown toll on the environment.


“All of technology, especially AI, because it’s a very computationally expensive technology, comes with a cost in terms of energy consumption, in terms of carbon emissions,” Luccioni said.


“We have very, very few data points about what’s the carbon footprint of training an AI model.”


When a generative AI model like ChatGPT is trained, there are physical servers that are powering that model. The more demand on the models, the more servers used, and thus the more energy going into powering them.


“There’s also the water for cooling the servers, because they get so hot that you always have to circulate cold water in order to cool them down,” Luccioni said.


Data centres, which power internet-based tools, are estimated to use three to five per cent of the world’s electricity, she said, “on par with a country (the size of) Spain.”


It’s unknown what percentage of that is due to AI training and deployment.


“When I’ve looked at open source models, I found that, generating 10 images, for example, (uses) as much energy as charging a cell phone,” she said.


Trying to pinpoint the environmental impact of AI and how to make this field more sustainable is the core of Luccioni’s research.


She helped to create a tool called Code Carbon which estimates not only the impact of training an AI model, but the continuing impact of its use, “because when a model is live and responding to user queries or chats, it also uses energy and emits CO2.”


The tool, which is freely available for download online, allows users to see an estimate of the CO2 produced by the computing resources used to execute their code. It then provides recommendations for how to lessen emissions by optimizing code or by hosting “cloud infrastructure in geographical regions that use renewable energy sources,” the tool’s website explains.
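As a rough sketch of how that works in practice, the codecarbon Python package can be wrapped around a training or inference job; the project name and the workload being measured here are placeholders.

```python
# Minimal sketch: estimate the CO2 emissions of a block of code with codecarbon.
# Install with: pip install codecarbon
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="model-training")  # results also land in emissions.csv
tracker.start()

# ... run the workload to be measured here, e.g. training or serving a model ...

emissions_kg = tracker.stop()  # returns the estimated emissions in kg of CO2-equivalent
print(f"Estimated emissions: {emissions_kg:.4f} kg CO2eq")
```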


Luccioni is hoping to make AI’s impact on climate a larger part of the conversation—but some experts aren’t sure we’ll tackle it properly in 2024.


“Regarding climate impact, the energy it takes to train these huge models, I think this issue is not well understood by the general public,” Chan said. “Perhaps that will get some airtime (in 2024), but I don’t think it will cut through all the other noise around AI.”


PUSHBACK? REGULATION? PREDICTIONS FOR THE FUTURE


Bias, misinformation and climate costs all lead us to the inevitable question: where is the oversight for this industry?


“I think that we’re much overdue, actually, for oversight and regulation,” Luccioni said. But she’s not sure if we’ll get it in the next year.


“The current forms of legislation are having trouble keeping up with the pace of AI technologies.”


Attempts are underway. This summer, the European Parliament adopted its negotiating position on the AI Act, which would establish boundaries around the creation and use of AI in EU member states. In early December, Parliament reached a provisional agreement with the Council on the act, although it still needs to be formally adopted by both to become EU law.


A proposed regulatory framework for AI systems in Canada is currently being examined in the House of Commons, but it wouldn’t take effect this year. Called the Artificial Intelligence and Data Act (AIDA), it is part of Bill C-27, the Digital Charter Implementation Act. If the bill gains royal assent, a consultation process will start to clarify AIDA, with the framework coming into effect no sooner than 2025.


But the pace of AI development makes regulation difficult.


“I think it will be a gradual reckoning, with various regulatory initiatives playing catch up,” Chan said.


Luccioni pointed out that in many situations, tools already exist to solve problems such as AI-generated imagery not being properly marked.


“There’s no way to detect what’s called a DALL-E-generated image, because we haven’t asked that of those companies,” she said.


“There’s invisible watermarks that you can embed in machine-generated imagery. But we’re not obliging tech companies to do that. And so why would they do that if it’s just going to cost them more money or demand more effort?”


She’s been contacted by panicked students who didn’t use ChatGPT to write an essay, but have been accused of doing so and have no idea how to prove they didn’t.


“People suffer the consequences, and that’s not okay. And so, I definitely hope the tide will turn in terms of what we let tech companies get away with in 2024.”


Banerjee added that AI will likely become more powerful and more relevant in our lives, making concerns around it more pressing.


“I could see sort of a split between the people who are getting quote, unquote, replaced by AI or pushed out of their current fields, versus the people who are going to make money off of AI pushing humans out.”


In the medical field, the stakes are so high that there’s no imminent risk of AI replacing human experts, according to Banerjee, and all of the applications being considered right now are assistive tools.


But the conversation about job security in other fields is “a good one to have,” as AI becomes more widespread.


What is clear is that AI isn’t going anywhere. Chan said if he could make one prediction about AI in 2024, it would be this:


“You’ll use it more than ever before, but won’t even know it.”
