
Automated techniques could make it easier to develop AI


“BERT takes months of calculations and is very expensive, about a million dollars to generate that model and iterate those processes,” Bahrami says. “So if everyone wants to do the same thing, then it’s expensive — it’s not energy efficient, not good for the world.”

While the field is promising, researchers are still looking for ways to make autoML techniques more computationally efficient. Methods like neural architecture search, for example, currently build and test many different models to find the best fit, and the energy required to complete all of those iterations can be significant.
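To make that iteration concrete, here is a minimal sketch of the kind of loop a neural architecture search runs, using scikit-learn's MLPClassifier and a tiny hypothetical search space of hidden-layer shapes. Real systems explore vastly larger spaces with smarter strategies than random sampling, but the expensive pattern is the same: every candidate must be trained before it can be judged.

```python
# A minimal sketch of a neural-architecture-search loop: sample
# candidate architectures, train each one, and keep the best.
# The search space and budget here are illustrative assumptions.
import random
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Hypothetical search space: depth and width of the hidden layers.
search_space = [(width,) * depth for depth in (1, 2, 3) for width in (16, 32, 64)]

best_score, best_arch = -1.0, None
for arch in random.sample(search_space, k=5):  # small evaluation budget
    model = MLPClassifier(hidden_layer_sizes=arch, max_iter=300, random_state=0)
    model.fit(X_train, y_train)        # each candidate is fully trained...
    score = model.score(X_val, y_val)  # ...then evaluated; training is the costly step
    if score > best_score:
        best_score, best_arch = score, arch

print(f"best architecture {best_arch}, validation accuracy {best_score:.3f}")
```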

AutoML techniques can also be applied to machine learning algorithms that do not use neural networks, such as building random forests or support vector machines to classify data. Research in those areas is further along, and many coding libraries are already available for people who want to incorporate autoML techniques into their projects.
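As a simple illustration of what those libraries automate, the sketch below uses scikit-learn's built-in GridSearchCV to choose between a random forest and a support vector machine and tune their hyperparameters. Dedicated autoML libraries such as auto-sklearn wrap this kind of search, and much more, in a single call; the candidate grids here are illustrative.

```python
# A small stand-in for what autoML does with classical models:
# search over algorithms and hyperparameters, then pick the best
# by cross-validated score.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Candidate model families and their hyperparameter grids.
candidates = [
    (RandomForestClassifier(random_state=0),
     {"n_estimators": [50, 200], "max_depth": [None, 10]}),
    (make_pipeline(StandardScaler(), SVC()),
     {"svc__C": [0.1, 1.0, 10.0], "svc__kernel": ["rbf", "linear"]}),
]

# Evaluate every combination with 5-fold cross-validation and keep
# whichever model family scored best.
searches = [GridSearchCV(model, grid, cv=5).fit(X, y) for model, grid in candidates]
best = max(searches, key=lambda s: s.best_score_)
print(best.best_estimator_)
print(f"cross-validated accuracy: {best.best_score_:.3f}")
```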

The next step is to use autoML to quantify uncertainty and to address questions of reliability and fairness in the algorithms, said Hutter, an organizer of the conference. In that view, standards of reliability and fairness would be treated like any other machine learning constraint, such as accuracy. And autoML could catch biases in those algorithms and correct them automatically before they are released.
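The article does not spell out a mechanism, but one way to picture fairness as a machine learning constraint is to score candidate models on a fairness metric alongside accuracy during automated selection. The sketch below computes a demographic parity gap by hand on a synthetic sensitive attribute and treats it as a hard constraint; the two candidate models, the synthetic data, and the 0.05 threshold are all illustrative assumptions.

```python
# Illustrative only: treat one fairness metric (demographic parity
# gap) as a constraint alongside accuracy during model selection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
group = np.random.default_rng(1).integers(0, 2, size=len(y))  # synthetic sensitive attribute
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=1)

def parity_gap(y_pred, g):
    """Difference in positive-prediction rate between the two groups."""
    return abs(y_pred[g == 0].mean() - y_pred[g == 1].mean())

candidates = [LogisticRegression(max_iter=1000), RandomForestClassifier(random_state=1)]
feasible = []
for model in candidates:
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    acc, gap = (y_pred == y_te).mean(), parity_gap(y_pred, g_te)
    if gap <= 0.05:  # fairness treated as a hard constraint
        feasible.append((acc, model))

if feasible:
    best_acc, best_model = max(feasible, key=lambda t: t[0])
    print(best_model, f"accuracy={best_acc:.3f}")
else:
    print("no candidate met the fairness constraint")
```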

The search continues

But for something like deep learning, autoML still has a long way to go. Data used to train deep learning models, such as images, documents, and recorded speech, tends to be dense and complicated, and it takes enormous computing power to process. The cost and time needed to train these models can be prohibitive for anyone but researchers at private companies with deep pockets.

One of the competitions at the conference asked participants to develop energy-efficient alternatives to neural architecture search, a technique notorious for its computational demands. Neural architecture search automatically cycles through countless deep learning models to help researchers pick the right one for their application, but the process can take months and cost upwards of a million dollars.

The goal of these alternatives, called zero-cost proxies for neural architecture search, is to make the technique more accessible and environmentally friendly by drastically cutting its appetite for computation: a proxy returns its result in seconds rather than months. These techniques are still in the early stages of development and are often unreliable, but machine learning researchers predict they have the potential to make the model selection process far more efficient.
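As one concrete example of the idea, a published proxy sometimes called "grad_norm" (Abdelfattah et al., 2021) scores an untrained network by the size of its gradients on a single minibatch, skipping training entirely. The sketch below is a rough PyTorch rendering of that idea; the two candidate architectures and the toy data are hypothetical stand-ins for a real search space.

```python
# A minimal sketch of a zero-cost proxy: rank *untrained* networks
# by the norm of their gradients on one minibatch, instead of
# training each candidate. Cheap, but (as noted above) often noisy.
import torch
import torch.nn as nn

def grad_norm_score(model: nn.Module, x: torch.Tensor, y: torch.Tensor) -> float:
    model.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()  # one backward pass is the entire "evaluation"
    return sum(p.grad.abs().sum().item()
               for p in model.parameters() if p.grad is not None)

# Hypothetical candidate architectures; a real search space is far larger.
candidates = {
    "small": nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10)),
    "wide": nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10)),
}
x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
scores = {name: grad_norm_score(m, x, y) for name, m in candidates.items()}
print(max(scores, key=scores.get), scores)  # ranking takes seconds, no training
```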
