
Algorithm Performance vs. Faster Hardware: Which Makes a Successful ML Project?

Vahan Zakaryan
7 Apr 2022
5 min

What’s more important — knowledge or experience? 

This is exactly the type of question that springs to my mind when I hear people wondering about the role of algorithm performance and computing power or hardware capabilities in deploying efficient machine learning (ML) models.

To achieve success in any project, regardless of the domain, you’ll probably need both. The same holds when weighing the importance of a good algorithm architecture against fast hardware to experiment on.

But the devil is in the details. And a lot depends on whether you are working on an ML model that will predict housing prices for the next five years or training a model to spot tumors in medical images. Here’s why.


MIT study demonstrates the importance of algorithm improvement

In 2020, a group of scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) presented a research paper demonstrating that improved algorithms have mattered more for advances in computing than hardware performance.

The team analyzed 113 algorithm families (sets of algorithms that solve the same problem), tracking their history of improvement since 1940. The researchers found evidence that improvements to an algorithm contribute more to the success of a computational task than the hardware used in the process.

“Through our analysis, we were able to say how many more tasks could be done using the same amount of computing power after an algorithm improved. As problems increase to billions or trillions of data points, algorithmic improvement becomes substantially more important than hardware improvement,” Neil Thompson, an MIT research scientist and the author of the study, said of the work.

“In an era where the environmental footprint of computing is increasingly worrisome, this is a way to improve businesses and other organizations without the downside,” he added.

These findings caused a buzz in the tech community, leading some software and ML engineers to question how much Moore’s Law, which suggests that computing power doubles about every two years, still drives the efficiency of algorithm-based tasks.

But the thing is that no matter how well designed your algorithm architecture is, you won’t get far without highly efficient hardware to test it on.

Why hardware performance still plays a key role

The complexity of the algorithm architecture affects the choice of hardware for a specific project. The more complex the architecture, the more parameters it has, and the more powerful the hardware needs to be to complete the computational tasks the model was designed to solve.

Training a neural network with up to 3 million parameters can take several days, and even more time is needed when a project requires a more sophisticated architecture with more parameters. High-performing hardware can speed up the entire programming cycle: it accelerates iterations, lets the team experiment with the neural network faster, and helps reach the desired result in a shorter amount of time.
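To make the parameters-versus-hardware point concrete, here is a minimal sketch (the layer sizes are hypothetical, chosen only for illustration) of how quickly the parameter count of a plain fully connected network grows as the architecture widens:

```python
# Rough parameter count for a fully connected network. The layer sizes
# below are hypothetical examples, not any specific production model.

def dense_param_count(layer_sizes):
    """Weights plus biases for each pair of consecutive dense layers."""
    return sum(
        n_in * n_out + n_out
        for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
    )

# A small image-classifier sketch: 784 inputs, two hidden layers, 10 outputs.
small = dense_param_count([784, 512, 256, 10])   # -> 535,818 parameters

# Widening the hidden layers multiplies the parameter count, and with it
# the compute (and wall-clock time) every training step needs.
large = dense_param_count([784, 2048, 1024, 10])  # -> 3,716,106 parameters

print(small, large)
```

Roughly a fourfold increase in width here yields a sevenfold increase in parameters, which is why architecture choices feed directly into the hardware budget.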

Industry-standard hardware for machine learning projects includes Graphics Processing Units (GPUs), Tensor Processing Units (TPUs), and Central Processing Units (CPUs).

• GPUs were initially designed for specific video-related tasks, like rendering and animation. They excel at parallel processing and can perform many arithmetic operations simultaneously, speeding up the completion of a set task, including in ML. (The NVIDIA A100, Tesla T4, Tesla P4, Tesla V100, and Tesla K80 are some of my favorites.)
• TPUs are specialized integrated circuits that accelerate AI calculations and algorithm applications. Google, for example, offers cloud TPUs that let teams train neural networks in the cloud without installing any special hardware or software.
• CPUs manage all computer functions and handle basic arithmetic and the input/output operations of programs. Since they were designed to perform a broad range of computational tasks, they are rarely the best option for algorithm-heavy workloads; however, they are still widely used.
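As a rough illustration of why throughput matters, here is a back-of-the-envelope sketch. The peak FP32 figures are approximate published numbers, and the "6 FLOPs per parameter per sample" rule of thumb is a crude simplification; treat everything here as an assumption for illustration, not a benchmark:

```python
# Back-of-the-envelope estimate of training time on different hardware.
# Peak FP32 throughputs are approximate published figures; the CPU entry
# and the FLOPs-per-parameter constant are rough assumptions.

PEAK_FP32_TFLOPS = {
    "desktop CPU": 1.0,        # rough assumption
    "NVIDIA Tesla T4": 8.1,
    "NVIDIA Tesla V100": 15.7,
    "NVIDIA A100": 19.5,
}

def seconds_per_epoch(n_params, n_samples, tflops, flops_per_param=6):
    """Estimated wall-clock seconds for one pass over the training set,
    assuming perfect utilization (real utilization is far lower)."""
    total_flops = n_params * n_samples * flops_per_param
    return total_flops / (tflops * 1e12)

# A 3-million-parameter network over 1 million samples per epoch:
for name, tflops in PEAK_FP32_TFLOPS.items():
    print(f"{name}: {seconds_per_epoch(3_000_000, 1_000_000, tflops):.2f} s/epoch")
```

Even at these idealized peaks, the ratios are what matter: the same epoch runs roughly twenty times longer on a typical CPU than on an A100, and over thousands of epochs and experiment iterations that gap becomes the days-versus-hours difference described above.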

The bottom line here is that the better the chosen hardware performs, the faster the team will deliver the needed result. In certain domains, however, the technical expertise of the engineers and the accuracy of the model matter more than how quickly the neural network is delivered.

Adding project domain to the equation

How do you estimate the efficiency of a trained ML model or neural network? Usually, engineers rely on two key metrics: the accuracy of the model and the timeframe within which the needed accuracy can be achieved.

While the former is determined by the model’s architecture and how well its hyperparameters (the settings that shape the learning process) are tuned, the latter depends mostly on hardware performance.
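A minimal sketch of what tuning those hyperparameters can look like in practice. The grid values and the scoring function are invented for illustration; in a real project the score would come from actually training the model and measuring validation accuracy, which is exactly the loop that fast hardware shortens:

```python
import itertools

# Toy stand-in for "train the model with these hyperparameters and
# measure validation accuracy". The response surface is hypothetical,
# peaking near learning_rate=0.01 and batch_size=64.
def toy_validation_accuracy(learning_rate, batch_size):
    return 1.0 - abs(learning_rate - 0.01) * 10 - abs(batch_size - 64) / 1000

# A small, hypothetical search grid.
grid = {
    "learning_rate": [0.1, 0.01, 0.001],
    "batch_size": [32, 64, 128],
}

# Exhaustive grid search: every combination gets scored, the best wins.
best = max(
    itertools.product(grid["learning_rate"], grid["batch_size"]),
    key=lambda combo: toy_validation_accuracy(*combo),
)
print(best)  # -> (0.01, 64)
```

Each cell of the grid means one full training run, so the search time scales with both the number of combinations and the hardware's speed per run.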

If you are dealing with a sophisticated, high-risk task, such as training a neural network to spot early signs of cancer in X-rays or detect potentially cancerous skin lesions, algorithm improvement and the technical expertise of skilled engineers are crucial.

Postindustria’s team of ML engineers has years of experience in training highly efficient ML models. When approaching an ML project, we meticulously design a pipeline and keep improving the model until we get the best result possible. This approach allowed us to create an efficient hand-tracking model for a virtual ring try-on that accurately measures the size of a finger, virtually places a ring on it in the right spot, and provides realistic rendering.

If you need expert help in building ML models, leave us your contact details and we’ll get in touch to discuss your project.
