Deep learning, also known as hierarchical learning, is a form of machine learning that applies multiple levels or layers of similar structures to successively refine its analysis of an input signal in an attempt to find a solution or discover a pattern.
The most common structure of choice is a neural network model, where the first layer identifies the simplest, lowest-level features of the input signal and passes them to the next layer for further refinement; each successive layer builds more abstract features from the previous layer's output. For example, a deep learning application for image recognition may have a layer that identifies outlines within an image, which are passed to a layer that identifies features within the outlines, and successive layers identify further details until a specific class or type of object is recognized (cat, person, backpack, etc.).
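The layered refinement described above can be sketched in a few lines of Python. This is an illustrative toy, not the source's implementation: the layer sizes, random weights, and ReLU activation are assumptions chosen only to show how each layer transforms the previous layer's output into a progressively more task-specific representation.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One fully connected layer with a ReLU nonlinearity (illustrative)."""
    w = rng.normal(scale=0.1, size=(x.shape[-1], n_out))  # random weights stand in for trained ones
    b = np.zeros(n_out)
    return np.maximum(0.0, x @ w + b)  # ReLU keeps only positive responses

# An "input signal": a flattened 8x8 grayscale image (64 values).
signal = rng.random(64)

# Successive layers refine the representation: 64 -> 32 -> 16 -> 3,
# ending in scores for three hypothetical classes (cat, person, backpack).
h1 = layer(signal, 32)   # e.g., outline-like features
h2 = layer(h1, 16)       # e.g., parts composed of outlines
scores = layer(h2, 3)    # final class scores

print(scores.shape)
```

In a real network the weights would be learned from labeled data rather than drawn at random; the point here is only the shape of the pipeline, with each layer narrowing the representation toward a decision.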
Deep learning algorithms can generally process elements of large data sets in parallel, which is driving the rapid adoption of Graphics Processing Units (GPUs) as key components of modern deep learning hardware architectures. A single GPU adds hundreds or thousands of cores for parallel processing, and it is far less expensive and more energy efficient than achieving comparable parallelism with additional CPUs.
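The data parallelism the note describes is simply the same operation applied independently to every element of a large array, which is the pattern GPU cores accelerate. A minimal sketch, using NumPy on the CPU (the programming model is the same; only the hardware differs):

```python
import numpy as np

# One large batch of values, e.g. pixel intensities or activations.
x = np.arange(1_000_000, dtype=np.float64)

# Sequential formulation: one element at a time (first five shown).
seq = [xi * 2.0 + 1.0 for xi in x[:5]]

# Data-parallel formulation: the whole array in one vectorized operation.
# On a GPU, each element could be handled by a separate core.
par = x * 2.0 + 1.0

print(par[:5])  # matches the sequential result element for element
```

Because each element's computation is independent of the others, the work divides cleanly across however many cores are available, which is why adding GPU cores scales deep learning workloads so effectively.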
Looking at the five-year horizon for AI and cognitive computing, we see more investment in basic research, enterprise and consumer applications, and advanced hardware architectures to support these software advances. In this research note, we identify four AI trends and offer predictions for each.
AI chatbots use natural language processing and machine learning technologies to offer more natural, interactive communication interfaces between humans and machines. For many applications, AI chatbots ultimately define the customer or user experience (CX/UX). The benefits of adding a conversational interface to an application are clear and accrue quickly. In this webinar, Adrian Bowles discusses the driving forces behind the rapid adoption of AI chatbots by enterprises.
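To make the conversational-interface idea concrete, here is a toy sketch, invented purely for illustration and not a description of any product the note covers: map user text to an intent, then reply from a canned response table. The intents, keywords, and responses are all assumptions; a real AI chatbot would replace the keyword rules with trained NLP models.

```python
import re

# Canned replies per intent (hypothetical examples).
RESPONSES = {
    "greeting": "Hello! How can I help you today?",
    "hours":    "We are open 9am-5pm, Monday through Friday.",
    "fallback": "Sorry, I didn't understand. Could you rephrase?",
}

# Keyword rules standing in for a trained intent classifier.
KEYWORDS = {
    "greeting": {"hello", "hi", "hey"},
    "hours":    {"hours", "open", "close", "closing"},
}

def classify_intent(text: str) -> str:
    """Return the first intent whose keywords appear in the text."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    for intent, kws in KEYWORDS.items():
        if words & kws:
            return intent
    return "fallback"

def reply(text: str) -> str:
    return RESPONSES[classify_intent(text)]

print(reply("Hi there"))
print(reply("What are your hours?"))
```

The design point is the separation of concerns: the intent classifier defines how well the bot understands the user, while the response layer defines the CX/UX, and enterprises can upgrade the first without rewriting the second.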