Tangram is an all-in-one automated machine learning framework.

Train a model from a CSV file on the command line. Make predictions from Elixir, Go, JavaScript, Python, Ruby, or Rust. Learn about your models and monitor them in production from your browser.
Train a model on the command line.
Train a machine learning model by running tangram train with the path to a CSV file and the name of the column you want to predict.

The CLI automatically transforms your data into features, trains a number of linear and gradient boosted decision tree models to predict the target column, and writes the best model to a .tangram file. If you want more control, you can provide a config file.
$ tangram train --file heart_disease.csv --target diagnosis
✅ Loading train data.
✅ Loading test data.
✅ Shuffling train data.
✅ Shuffling test data.
✅ Inferring column types.
✅ Computing train stats.
✅ Computing test stats.
✅ Computing baseline metrics.
✅ Computing features.
info: Press ctrl-c to stop early and save the best model trained so far.
✅ Training model 1 of 8.
✅ Training model 2 of 8.
✅ Training model 3 of 8.
✅ Computing model comparison features.
✅ Computing comparison metric.
✅ Computing features.
🚂 Training model 4 of 8. [==========================================>            ]
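If you go the config-file route, the file is passed when you train. As a sketch only: the --config flag and the config.json name below are assumptions, so check tangram train --help for the exact option and supported settings.

$ tangram train --file heart_disease.csv --target diagnosis --config config.json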
Make predictions in your favorite language.
Make predictions with libraries for Elixir, Go, JavaScript, Python, Ruby, and Rust.

Tangram is written in Rust and exposed to each language via native extensions, so predictions are fast and your data never travels over the network.
Elixir
model = Tangram.load_model_from_path("./heart_disease.tangram")
output = Tangram.predict(model, %{
  :age => 63,
  :gender => "male",
  # ...
})
Go
import "github.com/tangramdotdev/tangram/languages/go"

model, _ := tangram.LoadModelFromPath("./heart_disease.tangram", nil)
output := model.PredictOne(tangram.Input{
    "age":    63,
    "gender": "male",
    // ...
}, nil)
JavaScript
const tangram = require("@tangramdotdev/tangram");

const model = new tangram.Model("./heart_disease.tangram");
const output = model.predict({
  age: 63,
  gender: "male",
  // ...
});
Python
import tangram

model = tangram.Model.from_path('./heart_disease.tangram')
output = model.predict({
    'age': 63,
    'gender': 'male',
    # ...
})
Ruby
require 'tangram'

model = Tangram::Model.from_path('./heart_disease.tangram')
output = model.predict({
  age: 63,
  gender: 'male',
  # ...
})
Rust
let model: tangram::Model =
    tangram::Model::from_path("./heart_disease.tangram", None).unwrap();
let input = tangram::predict_input! {
    "age": 63.0,
    "gender": "male",
    // ...
};
let output = model.predict_one(input, None);
Learn more about your models in your browser.
Run tangram app and open http://localhost:8080, or go to https://app.tangram.dev, then upload the model you trained.
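For example, to start the app locally before uploading the .tangram file that tangram train produced:

$ tangram app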

The app shows you dataset statistics, a summary of all the models that the CLI trained, the features that were most important to your model, and metrics showing how the best model performed on the test set.
[Screenshot: model metrics in the app — accuracy 0.8567 vs. a baseline of 0.7553 (+0.1014), along with PR and ROC curves.]
Tune your model to get the best performance.

Tune binary classification models to your preferred tradeoff between precision and recall. To use your selected threshold, update the predict call in your code.

[Screenshot: threshold tuning in the app — at a threshold of 0.5: accuracy 85.67%, precision 78.91%, recall 56.58%.]
// Update your code to use the selected threshold.
model.predict(input, { threshold: 0.50 })
Monitor your models in production.
Once your model is deployed, make sure that it performs as well in production as it did in training.

Opt in to logging by calling logPrediction (or its equivalent in your language).

Later on, if you find out the true value for a prediction, call logTrueValue.
Elixir
# Log the prediction.
Tangram.log_prediction(model, %Tangram.LogPredictionArgs{
  identifier: "John Doe",
  options: predict_options,
  input: input,
  output: output,
})

# Later on, if we get an official diagnosis for the patient, log the true value.
Tangram.log_true_value(model, %Tangram.LogTrueValueArgs{
  identifier: "John Doe",
  true_value: "Positive",
})
Go
// Log the prediction.
err = model.LogPrediction(tangram.LogPredictionArgs{
    Identifier: "John Doe",
    Input:      input,
    Options:    predictOptions,
    Output:     output,
})
if err != nil {
    log.Fatal(err)
}

// Later on, if we get an official diagnosis for the patient, log the true value.
err = model.LogTrueValue(tangram.LogTrueValueArgs{
    Identifier: "John Doe",
    TrueValue:  "Positive",
})
if err != nil {
    log.Fatal(err)
}
JavaScript
// Log the prediction.
model.logPrediction({
  identifier: "6c955d4f-be61-4ca7-bba9-8fe32d03f801",
  input,
  options,
  output,
})

// Later on, if we get an official diagnosis for the patient, log the true value.
model.logTrueValue({
  identifier: "6c955d4f-be61-4ca7-bba9-8fe32d03f801",
  trueValue: "Positive",
})
Python
# Log the prediction.
model.log_prediction(
    identifier="John Doe",
    input=input,
    output=output,
    options=predict_options,
)

# Later on, if we get an official diagnosis for the patient, log the true value.
model.log_true_value(
    identifier="John Doe",
    true_value="Positive",
)
Ruby
# Log the prediction.
model.log_prediction(
  identifier: 'John Doe',
  input: input,
  output: output,
  options: options
)

# Later on, if we get an official diagnosis for the patient, log the true value.
model.log_true_value(
  identifier: 'John Doe',
  true_value: 'Positive'
)
Rust
// Log the prediction.
model.log_prediction(tangram::LogPredictionArgs {
    identifier: "John Doe".into(),
    input,
    options: Some(options),
    output,
})?;

// Later on, if we get an official diagnosis for the patient, log the true value.
model.log_true_value(tangram::LogTrueValueArgs {
    identifier: "John Doe".into(),
    true_value: "Positive".into(),
})?;
Monitor predictions.
After calling logPrediction, look up any prediction in the app by its identifier.

Every prediction will display its input and output, as well as a detailed explanation showing how each feature contributed to the output.
[Screenshot: a logged prediction in the app — prediction: Positive, probability: 97.48%.]
Monitor data drift.
After logging predictions, you can view stats comparing the production data with the training data. The app will automatically alert you if a significant difference is detected.
[Screenshot: production vs. training column stats — exercise_max_heart_rate: All Good; chest_pain: High Invalid Count.]
Monitor metrics.
After logging true values, you can view metrics comparing your model's performance in production vs. training. The app will automatically alert you if a significant difference is detected.
[Screenshot: production vs. training metrics — monthly accuracy and monthly precision charts.]