Spend less time wrangling machines and accelerate development with the workload and application management solution designed with deep learning in mind.
Whether you manage servers yourself or work with a DevOps or IT team, you now have a simpler, more powerful way to manage deep learning resources.
No more unfair trade-offs: our platform gives you both the flexibility, versioning, and leanness of containers and the immense computational power of GPUs.
AI infrastructure doesn’t have to be a money pit. Maximize resource utilization with smart scheduling and auto-allocation across multiple concurrent workloads and users.
We include the most popular open source frameworks, such as TensorFlow (plus TensorBoard and TensorFlow Serving), Caffe, MXNet, Chainer, Torch, Theano, and others.
Sometimes the task at hand doesn’t require a neural net to get the job done. That’s why NumPy, SciPy, scikit-learn, pandas, and many more come standard with Bitfusion.
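As an illustration of the kind of classical, non-neural-net task these bundled libraries handle, here is a minimal sketch using NumPy and pandas (the data and column names are made up for the example, not anything shipped with the platform):

```python
import numpy as np
import pandas as pd

# Synthetic sensor readings -- illustrative data only.
rng = np.random.default_rng(seed=0)
df = pd.DataFrame({
    "sensor": ["a", "b"] * 50,
    "reading": rng.normal(loc=10.0, scale=2.0, size=100),
})

# Per-sensor summary statistics: sometimes all the "AI" a task really needs.
summary = df.groupby("sensor")["reading"].agg(["mean", "std", "count"])
print(summary)
```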
A common tripping point for AI developers is managing GPU driver complexity. We keep CUDA, cuDNN, and the underlying GPU drivers out of sight and out of mind.
When you are experimenting with a new neural net architecture, or just getting started with deep learning, it helps to have examples and sample data for getting your feet wet.
Fully integrated Jupyter provides a shareable GUI development environment that supports 40 different languages, including Python, Julia, R, and Scala.
Jupyter includes a web-based terminal session, making it easy to navigate subdirectories and pull in additional packages or code from the command line.
Spin up and down workspaces as needed: your code and data are instantly available when you start and automatically preserved when you shut down.
Download our Bitfusion command line interface and initiate jobs from your own local environment or development box.
Workspaces come with the Bitfusion CLI pre-bundled so you can initiate jobs from your interactive development environment.
Train multiple models simultaneously for 10X faster development and rapid hyperparameter exploration.
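Conceptually, rapid hyperparameter exploration means fanning a grid of configurations out as independent training jobs. A framework-agnostic sketch of that pattern (the `train` function, its scoring logic, and the grid values are illustrative stand-ins, not the Bitfusion API):

```python
from itertools import product

def train(lr, batch_size):
    """Placeholder for a real training run; returns a mock validation score."""
    # Illustrative scoring function standing in for actual model training.
    return 1.0 / (1.0 + abs(lr - 0.01)) + 1.0 / batch_size

# The hyperparameter grid to explore; each combination is one training job.
grid = {"lr": [0.1, 0.01, 0.001], "batch_size": [32, 64]}
configs = [dict(zip(grid, vals)) for vals in product(*grid.values())]

# On the platform, each config would run as a concurrent GPU-backed job;
# locally we evaluate them sequentially and keep the best-scoring config.
best_config = max(configs, key=lambda c: train(**c))
print(best_config)
```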
Track the number and progress of your workloads via GUI or CLI, and get notified automatically when your jobs complete.
Currently supports TensorFlow Serving, which provides a foundation for versioning, deploying, and managing TensorFlow models.
Automates the deployment of an API server and URL endpoint based on your model export and serving files.
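Once deployed, such an endpoint speaks TensorFlow Serving's standard REST predict format. A minimal sketch of constructing a request in that format (the host, port, model name, and input values are placeholders for whatever endpoint the platform provisions):

```python
import json

# Placeholder values: substitute the endpoint generated for your deployment.
host, port, model_name = "localhost", 8501, "my_model"

# TensorFlow Serving's standard REST predict URL and request body.
url = f"http://{host}:{port}/v1/models/{model_name}:predict"
payload = json.dumps({"instances": [[1.0, 2.0, 5.0]]})

# Send with any HTTP client, e.g.:
#   curl -X POST -d '{"instances": [[1.0, 2.0, 5.0]]}' <url>
print(url)
print(payload)
```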
Resources are automatically assigned and attached based on the workload requirements with intelligent scheduling, queuing, and job management.
At the heart of the platform is our groundbreaking Bitfusion Core GPU virtualization, which provides unmatched resource flexibility and elasticity.
Upload and manage your own container images to use as base workspace and job configurations, tailored for your unique requirements.
Create a new base workspace or job container image out of an active workspace. Leverage snapshots for just yourself, or share them with others.
Administer multiple users and groups with different permission tiers to fit your team’s business processes and policies.
Manage and monitor your cluster infrastructure, track utilization and system events, and modify configurations and policy as needed.
Co-processor compute virtualization enables powerful new capabilities, including on-demand elastic GPUs, seamless multi-node scaling, and automatic high availability.
Bitfusion Core runs in userspace, so it runs securely on almost any OS without requiring changes to existing hypervisors or cloud infrastructure.
As completely transparent middleware, Bitfusion Core requires zero application changes, which is why it pairs perfectly with the AI Platform.