Uber Open Sources the Third Release of Ludwig, its Code-Free Machine Learning Platform

The new release makes Ludwig one of the most complete open source AutoML stacks in the market.

I recently started a new newsletter focused on AI education. TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes five minutes to read. The goal is to keep you up to date with machine learning projects, research papers and concepts. Please give it a try by subscribing below:


Uber continues its innovative contributions to open source machine learning technologies. Last week, the transportation giant open sourced Ludwig 0.3, the third update to its no-code machine learning platform. Over the last few months, the Ludwig community has expanded beyond Uber and includes contributors such as Stanford University. This new release expands on the AutoML capabilities of its predecessors. Let’s do a recap of previous versions of Ludwig and explore this new release.

What is Uber Ludwig?

Functionally, Ludwig is a framework for simplifying the processes of selecting, training and evaluating machine learning models for a given scenario. Ludwig provides a set of model architectures that can be combined together to create an end-to-end model optimized for a specific set of requirements. Conceptually, Ludwig was designed based on a series of principles:

  • No coding required: no coding skills are required to train a model and use it for obtaining predictions.
  • Generality: a new data type-based approach to deep learning model design that makes the tool usable across many different use cases.
  • Flexibility: experienced users have extensive control over model building and training, while newcomers will find it easy to use.
  • Extensibility: easy to add new model architecture and new feature data types.
  • Understandability: deep learning model internals are often considered black boxes, but we provide standard visualizations to understand their performance and compare their predictions.

Using Ludwig, a data scientist can train a deep learning model by simply providing a CSV file that contains the training data as well as a YAML file describing the inputs and outputs of the model. Using those two inputs, Ludwig performs a multi-task learning routine to predict all outputs simultaneously and evaluate the results. Under the covers, Ludwig provides a series of deep learning models that are constantly evaluated and can be combined into a final architecture. The main innovation behind Ludwig is the idea of data-type-specific encoders and decoders: Ludwig uses dedicated encoders and decoders for each supported data type. As in other deep learning architectures, encoders are responsible for mapping raw data to tensors while decoders map tensors to outputs. The architecture of Ludwig also includes the concept of a combiner, a component that combines the tensors from all input encoders, processes them, and returns the tensors to be used by the output decoders.
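As a sketch of what this no-code workflow looks like, a minimal model definition might resemble the following YAML configuration. The column names (review, sentiment) and the encoder choice are illustrative assumptions, not taken from the original article:

```yaml
# Hypothetical Ludwig config for a text classifier, defined entirely in YAML.
input_features:
  - name: review            # raw text column in the training CSV (illustrative name)
    type: text
    encoder: parallel_cnn   # one of Ludwig's built-in text encoders

output_features:
  - name: sentiment         # category column to predict (illustrative name)
    type: category

combiner:
  type: concat              # merges the tensors from all input encoders
```

Training would then be a single command along the lines of `ludwig train --dataset reviews.csv --config config.yaml` (flag names have changed across Ludwig releases, so consult the documentation for the version you use).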

Ludwig 0.3

Ludwig 0.3 incorporates new features, widely used in machine learning applications, via a consistent no-code interface. Let’s review some of the fundamental contributions of the new release of Ludwig.

1) Hyperparameter Optimization

Finding the best combination of hyperparameters for a given machine learning problem can be exhausting. Ludwig 0.3 introduces a new command, hyperopt, that performs automated hyperparameter searches and returns possible configurations. Hyperopt can be called using a simple syntax:
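A hedged sketch of how such a search could be declared in the model configuration (the parameter names, ranges, and the output feature below are illustrative assumptions, and the exact schema may differ by Ludwig release):

```yaml
# Hypothetical hyperopt section of a Ludwig config.
hyperopt:
  goal: minimize
  output_feature: sentiment       # illustrative output feature name
  metric: loss
  parameters:
    training.learning_rate:       # search the learning rate on a log scale
      type: float
      low: 0.0001
      high: 0.1
      scale: log
    training.batch_size:
      type: int
      low: 32
      high: 256
  sampler:
    type: random                  # random search over the declared space
  executor:
    type: serial                  # run trials one after another
```

The search would then be launched with something like `ludwig hyperopt --dataset data.csv --config config.yaml`.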

The output presents the different values explored for each hyperparameter together with their corresponding scores.

2) Integration with Weights and Biases

Complementing the previous point, Ludwig 0.3 integrates with the Weights and Biases (W&B) platform. W&B provides a highly visual interface for rapid experimentation and hyperparameter tuning in machine learning models. To use W&B, Ludwig users can simply append the -wandb parameter to their commands.
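For instance, assuming a standard train invocation, logging a run to W&B might look like the following (the exact flag spelling may differ by release, and `wandb login` must have been run beforehand):

```shell
# Hypothetical invocation: log this Ludwig training run to Weights and Biases.
ludwig train \
  --dataset data.csv \
  --config config.yaml \
  -wandb
```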

3) Code-Free Transformers

In recent years, language-pretrained models and transformers have been at the center of major breakthroughs in areas of deep learning such as natural language processing. Ludwig 0.3 integrates support for transformers via its integration with Hugging Face’s Transformers repository.
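As an illustration, swapping a conventional text encoder for a pretrained transformer could be as simple as a configuration change. The encoder name and parameters below are assumptions based on the Hugging Face integration the release describes, and may differ by Ludwig version:

```yaml
# Hypothetical input feature using a pretrained Hugging Face transformer encoder.
input_features:
  - name: review                   # illustrative text column name
    type: text
    encoder: bert                  # transformer encoder instead of a CNN/RNN
    pretrained_model_name_or_path: bert-base-uncased
    trainable: false               # keep the pretrained weights frozen
```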

4) TensorFlow 2 Backend

The new release of Ludwig has undergone a major re-architecture based on TensorFlow 2. While this re-architecture might not be immediately obvious to end-users, it allows Ludwig to take advantage of many of the new capabilities of TensorFlow 2 and introduce a much more modular design.

5) New Data Source Integration

One of the main limitations of Ludwig has been the constrained set of dataset formats supported as inputs; essentially, Ludwig has only supported CSV files and Pandas DataFrames as input datasets. The new version addresses this challenge by introducing integration with many other formats such as Excel, Feather, FWF, HDF5, HTML tables, JSON, JSONL, Parquet, Pickle, SAS, SPSS, Stata and TSV. The new formats can be used with a simple command line:
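A sketch of what this looks like in practice: the same configuration can be trained against different file formats just by pointing the dataset flag at a different file (file names are illustrative, and flag names may vary by release):

```shell
# Hypothetical commands: train the same Ludwig config from different dataset formats.
ludwig train --dataset train.parquet --config config.yaml
ludwig train --dataset train.jsonl   --config config.yaml
```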

Other Capabilities

Ludwig 0.3 introduces other capabilities, such as a new vector data type that supports noisy labels for weak supervision, and k-fold cross-validation for training, which complement an already impressive release. Little by little, Ludwig is becoming one of the most impressive open source AutoML stacks in the market.
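For example, the k-fold cross-validation support is exposed through the CLI; a sketch of a possible invocation (the flag name and value are assumptions that may vary by release):

```shell
# Hypothetical: run a Ludwig experiment with 5-fold cross-validation.
ludwig experiment --dataset data.csv --config config.yaml --k_fold 5
```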

Original. Reposted with permission.

