Spend less time wrangling machines and accelerate development with the infrastructure and application management solution designed with deep learning in mind.
Whether managing servers yourself or working with a DevOps or IT team — now you have a simpler, more powerful way to manage deep learning resources.
Easy deployment and publishing of trained models for inference. Continuous integration and deployment for deep learning. Scale infrastructure up and down as needed.
AI infrastructure doesn’t have to be a money pit. Maximize resource utilization with smart scheduling and auto-allocation across multiple concurrent workloads and users.
We include the most popular open source frameworks such as TensorFlow (plus TensorBoard and TensorFlow Serving), Caffe, MXNet, Chainer, Torch, Theano, and others.
When you are experimenting with a new neural net architecture or just getting started with deep learning, it helps to have examples and sample data for getting your feet wet.
A common tripping point for AI developers is managing underlying infrastructure. We keep libraries and infrastructure management out of sight and out of mind.
We also let you bring your own development containers, tools, or frameworks, or pull any from the open source world to plug into the Flex Infrastructure.
Fully integrated Jupyter, for a shareable GUI development environment that supports 40 different languages, including Python, Julia, R, and Scala.
Includes a web-based Terminal session to make it easy for you to navigate subdirectories and pull in additional packages or code via command line.
Easy integration with your preferred development environment, code and data management, and any upstream or downstream tools.
Download our Bitfusion command line interface and create workspaces from your own local environment or development box.
Query cluster resources, create workspaces, and perform all platform functions via a simple command line interface.
Train multiple models simultaneously for 10X faster development and rapid hyperparameter exploration.
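To illustrate the idea of simultaneous training (this is a minimal sketch, not the platform's own API), the snippet below fans a hypothetical `train_model` function out over a small hyperparameter grid; in a real deployment each job would occupy its own GPU, and `train_model` would run actual training rather than return a mock loss.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def train_model(params):
    # Hypothetical stand-in for a real training job;
    # returns a mock loss so the example is self-contained.
    return {"params": params, "loss": params["lr"] * params["batch_size"]}

# Small grid: 3 learning rates x 2 batch sizes = 6 concurrent jobs.
grid = [{"lr": lr, "batch_size": bs}
        for lr, bs in product([0.001, 0.01, 0.1], [32, 64])]

# Launch all jobs simultaneously instead of one after another.
with ThreadPoolExecutor(max_workers=len(grid)) as pool:
    results = list(pool.map(train_model, grid))

best = min(results, key=lambda r: r["loss"])
```

Running the whole grid in parallel is what turns a day of sequential experiments into a single batch of concurrent ones.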
Track the quantity and progress of your workloads via GUI or CLI, and get notified automatically when your jobs are complete.
Automatically deploys an API server and URL endpoint from your model export and serving files, to the cloud or to edge devices.
Scale your deployment up or down based on application load. Set up autoscaling policies to minimize deployment costs.
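The core of any such policy is a mapping from observed load to a replica count clamped between policy bounds. The sketch below is a generic illustration, not the platform's actual policy engine; the function name and parameters are hypothetical.

```python
import math

def desired_replicas(load_rps, capacity_rps, min_replicas=1, max_replicas=10):
    """Scale replicas to current load, clamped to the policy's bounds.

    load_rps     -- observed request rate (requests/sec)
    capacity_rps -- throughput a single replica can sustain
    """
    needed = math.ceil(load_rps / capacity_rps) if load_rps > 0 else 0
    # Never scale below the floor (availability) or above the ceiling (cost).
    return max(min_replicas, min(max_replicas, needed))

replicas = desired_replicas(450, 100)  # 450 req/s at 100 req/s per replica
```

The floor keeps the endpoint warm during quiet periods, while the ceiling caps spend during traffic spikes.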
Resources are automatically assigned and attached based on the workload requirements with intelligent scheduling, queuing, and job management.
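To make the attach-or-queue behavior concrete, here is a deliberately simplified FIFO sketch (assumed for illustration only; the platform's scheduler is more sophisticated): each job declares how many GPUs it needs, and jobs that cannot be satisfied from the free pool wait in a queue.

```python
from collections import deque

def schedule(jobs, total_gpus):
    """Attach jobs to free GPUs in FIFO order; queue the rest."""
    queue = deque(jobs)
    free = total_gpus
    running, waiting = [], []
    while queue:
        job = queue.popleft()
        if job["gpus"] <= free:
            free -= job["gpus"]          # attach resources to the workload
            running.append(job["name"])
        else:
            waiting.append(job["name"])  # not enough GPUs free: queue it
    return running, waiting

jobs = [{"name": "train-a", "gpus": 2},
        {"name": "train-b", "gpus": 4},
        {"name": "notebook", "gpus": 1}]
running, waiting = schedule(jobs, total_gpus=4)
```

With 4 GPUs, `train-a` and `notebook` start immediately while `train-b` waits for capacity, which is the queuing behavior users see without ever managing GPUs by hand.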
At the heart of the platform is our groundbreaking Bitfusion Core GPU virtualization, which provides unmatched resource flexibility and elasticity.
Upload and manage your own container images to use as base workspace and job configurations, tailored for your unique requirements.
Start with some of our base containers and utilize the platform snapshotting capability to continually expand the number of available working environments.
Administer multiple users and groups with different permission tiers and resource policies to fit your team’s needs.
Manage and monitor your cluster infrastructure, track utilization and system events, and modify configurations and policy as needed.
Co-processor compute virtualization enables powerful new capabilities including on-demand elastic GPUs, fractional GPUs, and automatic high availability.
Bitfusion Core runs in userspace, ensuring it can run securely, on almost any OS, and without requiring any changes to existing hypervisors or cloud infrastructure.
As completely transparent middleware, Bitfusion Core requires zero application changes. That’s why including Core in the AI Platform makes for a perfect combination.