Deep learning and AI technologies are revolutionizing the world, whether through self-driving cars, drones, virtual assistants, more accurate medical diagnoses, or automatic lead generation. As a result, AI is drastically altering the way business is conducted. In the '90s and 2000s, the web revolutionized businesses by offering them the ability to improve customer value by an order of magnitude. Take Amazon, for example, which started by selling books online, a business that had been brick-and-mortar until then, and went on to transform both the retail and computing industries. Similarly, mobile revolutionized businesses just under a decade ago; Uber, for example, leveraged mobile to disrupt the taxi industry. AI has the same transformative potential for any business today. The business value of applying AI comes in the form of increased revenue (and faster time to market) and reduced costs. Driving + AI = autonomous driving. Manufacturing + AI = smart manufacturing. Retail + AI = personalized retail. Personal assistant (at the office or at home) + AI = smart personal assistant. Health care + AI = smart and affordable health care; the list goes on and on. In future posts, I will discuss applications of AI industry by industry. Just to throw out some metrics: according to the research firm Tractica, AI software will generate $36.8 billion in revenue by 2020. According to Goldman Sachs, AI will have a 51-154bps impact on US productivity growth by 2025.
Deep learning is a subset of AI, essentially one way of achieving AI that has become popular in recent years. The term is relatively new; the name comes from the use of deep neural networks, a particular type of machine learning algorithm. Deep learning is the closest we have gotten to mimicking how the human brain learns, and to commercializing that. Prior to deep learning, AI was usually done through rule-based automation (IF THIS, THEN THAT). Technically, if you read Wikipedia, you will find that machine learning can be classified into supervised learning (the computer is presented with example inputs and their desired outputs, and the goal is to learn a general rule that maps inputs to outputs) and unsupervised learning (the computer is presented with data alone, no labels are given to the learning algorithm, leaving it on its own to find structure in its input). The concept of neural networks has been around since the 1960s, but training speed and the amount of data the networks could ingest were the Achilles' heel of their widespread adoption. It was not until 2012, when Alex Krizhevsky (Geoffrey Hinton's student) at the University of Toronto demonstrated that NVIDIA GPUs could train neural networks very fast, that neural networks really started taking off. Of course, the growth and availability of massive amounts of data today (compared to a few decades ago) also played a big role.
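To make the supervised-versus-unsupervised distinction concrete, here is a minimal sketch in plain Python (no ML framework; the toy data, the threshold rule, and the tiny k-means are made up purely for illustration). The supervised learner is handed labeled examples and learns a rule mapping inputs to outputs; the unsupervised learner gets raw numbers and must find structure (two clusters) on its own:

```python
# Supervised learning: learn a rule from labeled (input, output) examples.
# Toy task: classify a number as "big" (1) or "small" (0) by learning a threshold.
def train_supervised(examples):
    """examples: list of (value, label) pairs; learns the midpoint threshold."""
    smalls = [x for x, y in examples if y == 0]
    bigs = [x for x, y in examples if y == 1]
    return (max(smalls) + min(bigs)) / 2

def predict(threshold, x):
    return 1 if x > threshold else 0

# Unsupervised learning: no labels at all; discover two clusters in raw data.
def two_means(data, iters=10):
    """A tiny 1-D k-means with k=2."""
    c1, c2 = min(data), max(data)  # initialize centers at the extremes
    for _ in range(iters):
        g1 = [x for x in data if abs(x - c1) <= abs(x - c2)]
        g2 = [x for x in data if abs(x - c1) > abs(x - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return c1, c2

labeled = [(1, 0), (2, 0), (3, 0), (10, 1), (11, 1), (12, 1)]
threshold = train_supervised(labeled)   # learned from labels -> 6.5

unlabeled = [1, 2, 3, 10, 11, 12]
centers = two_means(unlabeled)          # structure found without labels -> (2.0, 11.0)
```

A deep neural network is, at heart, a far more expressive version of the supervised learner above: instead of one threshold, it fits millions of parameters from labeled examples.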
Just like any other application development process, the AI application development lifecycle can be broken down into three phases: development, training, and deployment. It all starts with a business problem! In most companies, AI teams are organized under a Data Science Officer or the CTO. Any deep learning project typically starts with an idea, usually something one wants to apply AI to in order to make it better. The idea is first tested with a small data sample, usually representative of the larger dataset. Data scientists run many experiment cycles and iterate through a ton of models to prototype the algorithm for the system they are building. This is mostly done on their personal computers, or sometimes over SSH to a terminal in a cloud or datacenter if the data resides there. Once they are comfortable with the initial prototype or MVP (minimum viable product) of the algorithm, they start scaling the model up to larger datasets; this stage is training the algorithm or system. This is where the massive power of GPUs is most useful. This stage usually comprises long jobs spanning multiple cycles and days, so people leverage the cloud or clusters of machines for it. Some structure and experiment management is imposed in this stage, so that results are obtained methodically and are reproducible. Once the network or algorithm is trained, the last stage is deploying it to the field so that the business problem at hand can leverage it. The most common approach is to deploy the algorithm as an application programming interface (API) that is consumable by any business application. Of course, in today's world, all of this can be achieved by anyone, not just the Googles, Facebooks, or Amazons of the world, without owning even a single server. In a future post, I will cover these AI development lifecycle phases in detail.
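As a sketch of that last deployment phase, the trained model is typically wrapped behind an HTTP endpoint that business applications call. The example below uses only Python's standard library; the `score` function is a hypothetical stand-in for a real trained network, and the route and payload shape are assumptions for illustration:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def score(features):
    """Placeholder for a trained model. A real service would load saved
    network weights at startup and run a forward pass here."""
    return 1.0 if sum(features) > 10 else 0.0

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read a JSON request body, e.g. {"features": [1.2, 3.4, 5.6]}
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"prediction": score(payload["features"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve the model to any business application over HTTP:
#   HTTPServer(("0.0.0.0", 8080), PredictHandler).serve_forever()
```

In practice the model would sit behind a production web framework and a load balancer, but the shape is the same: request in, model forward pass, prediction out.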
For example, say an existing home security camera company wants to build the next generation of its security system using AI-powered image recognition (to distinguish friends and family from potential criminals at home). The goal would be an initial system, powered by neural networks, that can recognize images from a camera and tag them as either friends or potential intruders, with the result fed back into some business logic. The company first creates a data science team (hiring is a key problem, as there is a shortage of data scientists today; I will cover this in a later post). The data science team starts by curating the existing data, picking the right neural network framework and algorithm for the problem at hand, and doing some manual iteration on sample data to create a prototype algorithm. This stage is followed by applying real-world data from the company's existing massive dataset to hone the neural network. Once the network is trained, the last stage is deploying it on the existing cameras (this stage is called inference). With that, a home CCTV camera company has essentially been transformed into a smart home security company for the modern age, potentially improving its existing customer value by an order of magnitude.
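The inference step on the camera might look something like the following sketch. One common design is to have the trained network turn each detected face into an embedding vector and compare it against embeddings of known household members; everything here (the names, the vectors, the threshold) is a made-up placeholder, not the company's actual method:

```python
import math

# Hypothetical face embeddings the trained network would produce for each
# known household member (real embeddings are high-dimensional).
KNOWN_FACES = {
    "alice": [0.9, 0.1, 0.2],
    "bob":   [0.1, 0.8, 0.3],
}
MATCH_THRESHOLD = 0.9  # would be tuned on a validation set

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def tag_face(embedding):
    """Return the best-matching known person, or 'unknown' so the business
    logic can flag the frame for review or raise an alert."""
    name, best = max(
        ((n, cosine_similarity(embedding, e)) for n, e in KNOWN_FACES.items()),
        key=lambda pair: pair[1],
    )
    return name if best >= MATCH_THRESHOLD else "unknown"
```

A camera frame whose face embedding matches no stored household member falls below the threshold and gets tagged "unknown", which is exactly the signal the security business logic needs.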
As companies strive to stay competitive in their space, they are increasingly turning to AI to bolster their applications and remain on the cutting edge of technology. Technology is rapidly changing the world for consumers and businesses alike; applying AI as part of your core strategy is paramount to driving innovation and increasing customer lifetime value.
Building AI applications can still be extremely time- and resource-consuming, with a lot of time sometimes spent on DevOps instead of application development. The DevOps process involves installing a massive number of software packages and keeping them up to date, managing custom infrastructure, and coordinating a variety of jobs and complicated workflows through development, training, and deployment. Time is of the essence in today's highly competitive world. Just as basic infrastructure plumbing has been the critical element in every major civilization shift, whether agricultural, industrial, or the internet revolution, it is also critical for the AI revolution right now. At Bitfusion, we created Flex to serve as a simple, automated, self-service infrastructure management solution across any type of infrastructure (CPUs, GPUs, or FPGAs), so that you don't have to build what you don't need to, and can just focus on your core business. Try Bitfusion Flex today on AWS!