Introducing Moirai 2.0

Time series forecasting plays a central role in data-driven decision making. Yet, adapting forecasting models across different domains and temporal resolutions often requires custom engineering. This increases both development and maintenance costs — especially for large-scale enterprise systems with many users and use cases.

These challenges motivated our universal forecasting paradigm, which began with the release of Moirai 1.1, our open-source forecasting foundation model, and the launch of GIFT-Eval, a public leaderboard designed to evaluate and track time series foundation models.

Since then, GIFT-Eval has grown into a widely adopted benchmark, hosting 27 model submissions, including 14 foundation models from both industry and academia. The growing interest in general-purpose forecasting has made it clear that strong baselines and scalable models are more important than ever.

We’re now introducing Moirai 2.0, a new and improved version of our time series foundation model. Compared to earlier versions, Moirai 2.0 is faster, more accurate, and currently ranks #1 by MASE on the GIFT-Eval leaderboard among all non-test-data-leaking models.

What’s New in Moirai 2.0?

Moirai 2.0 brings new updates across three main areas:

Architecture

We’ve transitioned from a masked encoder architecture to a decoder-only transformer model. This design better fits the nature of autoregressive forecast generation and makes the model easier to scale across larger datasets and use cases. 
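As a rough illustration of the decoder-only idea (a toy sketch, not Moirai's actual implementation), the loop below generates a forecast autoregressively: each step predicts a block of future patches from the current context, appends it, and continues until the horizon is covered. The predict_next_patches stub and the patch sizes are hypothetical placeholders for the transformer decoder.

import numpy as np

PATCH_LEN = 16        # values per patch (illustrative)
PATCHES_PER_STEP = 4  # multi-token prediction: patches emitted per forward pass

def predict_next_patches(context: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the decoder-only transformer: here we just
    repeat the mean of the last patch; the real model attends over the full
    context and emits quantile forecasts."""
    return np.full(PATCHES_PER_STEP * PATCH_LEN, context[-PATCH_LEN:].mean())

def autoregressive_forecast(context: np.ndarray, horizon: int) -> np.ndarray:
    """Generate `horizon` future values patch by patch, feeding each block of
    predictions back into the context (the decoder-only generation loop)."""
    generated = np.empty(0)
    while generated.size < horizon:
        new_values = predict_next_patches(context)
        generated = np.concatenate([generated, new_values])
        context = np.concatenate([context, new_values])  # extend the context
    return generated[:horizon]

history = np.sin(np.linspace(0, 20, 512))  # toy context series
print(autoregressive_forecast(history, horizon=100).shape)  # (100,)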

Data

To support the decoder-only architecture, we expanded the pretraining dataset with a richer mix of data sources, including:

  • GIFT-Eval Pretrain and Train datasets.
  • Mixup data that we generated following the Chronos recipe (from non-leaking subsets).
  • Synthetic time series produced via KernelSynth, the Gaussian-process-based generator from the Chronos paper (see the simplified sketch after this list).
  • Internal Salesforce operational data.
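For readers unfamiliar with KernelSynth, the sketch below shows a heavily simplified version of the idea from the Chronos paper: randomly compose Gaussian-process kernels and draw a sample path from the resulting prior. The kernel bank and parameter ranges here are illustrative choices, not the ones used for Moirai 2.0's pretraining data.

import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(x, length_scale):
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def periodic_kernel(x, period, length_scale):
    d = np.abs(x[:, None] - x[None, :])
    return np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / length_scale ** 2)

def sample_synthetic_series(length: int = 256) -> np.ndarray:
    """Compose a few randomly chosen kernels (via + or *) and draw one sample
    path from the resulting Gaussian-process prior."""
    x = np.arange(length, dtype=float)
    bank = [
        lambda: rbf_kernel(x, length_scale=rng.uniform(5, 50)),
        lambda: periodic_kernel(x, period=rng.uniform(10, 60),
                                length_scale=rng.uniform(0.5, 2.0)),
    ]
    cov = bank[rng.integers(len(bank))]()
    for _ in range(rng.integers(1, 3)):  # combine with one or two more kernels
        k = bank[rng.integers(len(bank))]()
        cov = cov + k if rng.random() < 0.5 else cov * k
    cov += 1e-6 * np.eye(length)         # jitter for numerical stability
    return rng.multivariate_normal(np.zeros(length), cov)

print(sample_synthetic_series().shape)  # (256,)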

Training Strategy

We improved the training objective and generation setup, the changes that had the highest impact on results:

  • Switched from a distributional loss to a quantile loss formulation (see the minimal sketch after this list).
  • Moved from single-token to multi-token prediction, improving efficiency and stability.
  • Added a data filtering mechanism to remove non-forecastable, low-quality time series during pretraining.
  • Added a new patch token embedding that includes missing-value information.
  • Added patch-level random masking to improve the model’s robustness during inference.
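As a refresher on the quantile formulation (a minimal sketch, not Moirai's exact training code), the pinball loss below penalizes under- and over-prediction asymmetrically for each target quantile level; the quantile levels and tensor shapes are illustrative.

import numpy as np

def pinball_loss(y_true: np.ndarray, y_pred: np.ndarray, quantiles: np.ndarray) -> float:
    """Mean quantile (pinball) loss.

    y_true:    (batch, horizon)               observed future values
    y_pred:    (batch, horizon, n_quantiles)  predicted quantiles
    quantiles: (n_quantiles,)                 e.g. [0.1, 0.5, 0.9]
    """
    diff = y_true[..., None] - y_pred  # broadcast over the quantile axis
    loss = np.maximum(quantiles * diff, (quantiles - 1.0) * diff)
    return float(loss.mean())

# Toy example: two series, horizon of 3, three quantile levels.
q = np.array([0.1, 0.5, 0.9])
y = np.array([[1.0, 2.0, 3.0], [0.5, 0.0, -0.5]])
pred = np.stack([y - 0.2, y, y + 0.2], axis=-1)  # under-, point-, and over-estimates
print(pinball_loss(y, pred, q))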

We’ve also experimented with many other changes — some of which made it into this release, and others that helped guide design decisions along the way. For those interested in exploring the details, the updated implementation is available open source.

We look forward to seeing how Moirai 2.0 performs in a wider range of applications and welcome feedback from the community.

Performance

We evaluated Moirai 2.0 on the GIFT-Eval benchmark to assess its accuracy, efficiency, and overall improvements.

As shown in Figure 1, Moirai 2.0 achieves the best MASE score among all non–test-data-leaking foundation models, while also matching the CRPS performance of the previous state-of-the-art.

Beyond accuracy, Moirai 2.0 also brings substantial gains in speed and model size. Figure 2 compares Moirai 2.0 with earlier versions of Moirai on two axes: inference time vs. performance (top) and parameter count vs. performance (bottom).

Compared to our previous best model, Moirai_large, Moirai 2.0 is:

  • 16% better on MASE
  • 13% better on CRPS
  • 44% faster in inference
  • 96% smaller in parameter size

These improvements make Moirai 2.0 a smaller, faster, and more accurate alternative to its predecessors. We hope this update enables new possibilities for more efficient and scalable time series forecasting across applications.

Minimal Example

Getting started with Moirai 2.0 is just as easy as before. Below is a minimal example that shows how to load the model, generate forecasts, and visualize the results using the electricity dataset.

Step 1: Import Required Modules

import matplotlib.pyplot as plt

from gluonts.dataset.repository import dataset_recipes

from uni2ts.eval_util.data import get_gluonts_test_dataset
from uni2ts.eval_util.plot import plot_next_multi
from uni2ts.model.moirai2 import Moirai2Forecast, Moirai2Module

Step 2: Load Moirai 2.0

MODEL = "moirai2"  
SIZE = "small"  
CTX = 1000  
BSZ = 32 

model = Moirai2Forecast(
    module=Moirai2Module.from_pretrained(
        f"Salesforce/moirai-2.0-R-small",
    ),
    prediction_length=100,
    context_length=1680,
    target_dim=1,
    feat_dynamic_real_dim=0,
    past_feat_dynamic_real_dim=0,
)

Step 3: Load Dataset and Generate Forecasts

# Load the electricity dataset with its default prediction length
test_data, metadata = get_gluonts_test_dataset(
    "electricity", prediction_length=None, regenerate=False
)

# Wrap the model in a GluonTS predictor and generate forecasts
predictor = model.create_predictor(batch_size=BSZ)
forecasts = predictor.predict(test_data.input)

# Iterators over inputs, ground-truth labels, and forecasts for plotting
input_it = iter(test_data.input)
label_it = iter(test_data.label)
forecast_it = iter(forecasts)

Step 4: Plot Forecasts

# Visualize forecasts
fig, axes = plt.subplots(nrows=2, ncols=3, figsize=(25, 10))
plot_next_multi(
    axes,
    input_it,
    label_it,
    forecast_it,
    context_length=200,
    intervals=(0.5, 0.9),
    dim=None,
    name="pred",
    show_label=True,
)
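If you would rather work with the forecasts numerically than plot them, the standard GluonTS Forecast interface applies; the snippet below is our own addition (not part of the original example) and assumes the predictor returns GluonTS forecast objects that implement quantile().

# Regenerate the forecasts, since the iterator above was consumed by plotting
first_forecast = next(iter(predictor.predict(test_data.input)))

median = first_forecast.quantile(0.5)  # point forecast (median), shape (prediction_length,)
lower = first_forecast.quantile(0.1)   # lower bound of an 80% interval
upper = first_forecast.quantile(0.9)   # upper bound of an 80% interval
print(median.shape, lower.shape, upper.shape)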

We hope this example helps you get started quickly with Moirai 2.0. You can find the full example notebook here: example_notebook.
