Deep Learning – What is it?

Deep learning is a branch of artificial intelligence (AI) concerned with emulating the learning approach that human beings use to gain certain types of knowledge. At its simplest, deep learning can be thought of as a way to automate predictive analytics.
This article covers the following topics:

  1. What is deep learning?
  2. What is the scope of deep learning?
  3. How can you implement it?
  4. Can you become a Deep Learning Engineer?

    While traditional machine learning algorithms are linear, DL algorithms are stacked in a hierarchy of increasing complexity and abstraction. To understand DL, imagine a toddler whose first word is dog. The toddler learns what a dog is (and is not) by pointing to objects and saying the word dog. The parent says, “Yes, that is a dog,” or, “No, that is not a dog.” As the toddler continues to point to objects, he becomes more aware of the features that all dogs possess. What the toddler does, without knowing it, is clarify a complex abstraction (the concept of dog) by building a hierarchy in which each level of abstraction is created with knowledge that was gained from the preceding layer of the hierarchy.
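    That hierarchy of layers can be sketched directly. Below is a minimal, illustrative forward pass through a stacked network in NumPy; the layer sizes and random weights are assumptions for demonstration only, not a trained model.

```python
import numpy as np

# Illustrative only: random weights stand in for learned parameters.
rng = np.random.default_rng(0)

def relu(x):
    # Nonlinearity between layers; without it, stacked layers
    # collapse into a single linear model.
    return np.maximum(0, x)

x = rng.normal(size=(4, 3))        # 4 examples, 3 raw input features

w1 = rng.normal(size=(3, 5))       # layer 1: low-level features
h1 = relu(x @ w1)

w2 = rng.normal(size=(5, 2))       # layer 2: higher-level abstractions
h2 = relu(h1 @ w2)

w3 = rng.normal(size=(2, 1))       # output layer: final prediction
y = h2 @ w3

print(y.shape)                     # (4, 1): one prediction per example
```

    Each layer consumes the representation produced by the one before it, mirroring the level-by-level abstraction described above; training (adjusting the weights) is omitted here.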

    Examples of deep learning applications

    Because DL models process information in ways similar to the human brain, they can be applied to many tasks people do. Deep learning is currently used in most common image recognition tools, natural language processing (NLP) and speech recognition software. These tools are starting to appear in applications as diverse as self-driving cars and language translation services.

    1. What is Deep Learning?

    Deep Learning is an aspect of Artificial Intelligence. More specifically, it is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks.

    If you are just starting out in the field of DL, or you had some experience with neural networks some time ago, you may be confused. I know I was confused initially, and so were many of my colleagues and friends who learned and used neural networks in the 1990s and early 2000s. The leaders and experts in the field have ideas of what deep learning is, and these specific and nuanced perspectives shed a lot of light on what deep learning is all about.

    In this post, you will discover exactly what deep learning is by hearing from a range of experts and leaders in the field.

    Deep Learning is Large Neural Networks

    Andrew Ng, co-founder of Coursera and formerly Chief Scientist at Baidu Research, founded Google Brain, which eventually resulted in the productization of deep learning technologies across a large number of Google services.
    He has spoken and written a lot about what deep learning is, and his comments are a good place to start.
    In early talks on DL, Andrew described DL in the context of traditional artificial neural networks. In the 2013 talk titled “Deep Learning, Self-Taught Learning and Unsupervised Feature Learning” he described the idea of deep learning as:

    Using brain simulations, hope to:
    – Make learning algorithms much better and easier to use.
    – Make revolutionary advances in machine learning and AI.
    I believe this is our best shot at progress towards real AI.

    Later his comments became more nuanced.
    The core of deep learning, according to Andrew, is that we now have fast enough computers and enough data to actually train large neural networks. When discussing why now is the time that DL is taking off, at ExtractConf 2015 in a talk titled “What data scientists should know about deep learning”, he commented:

    very large neural networks we can now have and … huge amounts of data that we have access to

    He also made the important point that it is all about scale: as we construct larger neural networks and train them with more and more data, their performance continues to increase. This is generally different from other machine learning techniques, which reach a plateau in performance.

    for most flavors of the old generations of learning algorithms … performance will plateau. … deep learning … is the first class of algorithms … that is scalable. … performance just keeps getting better as you feed them more data.

    2. What is the scope of deep learning?

    1. The deep learning industry will adopt a core set of standard tools

    2. Deep learning will gain native support within Spark

    The Spark community will beef up the platform’s native deep learning capabilities in the next 12 to 24 months. Judging by the sessions at the recent Spark Summit, it would appear that the community is leaning toward stronger support for TensorFlow, at the very least, with BigDL, Caffe, and Torch also picking up adoption.

    3. Deep learning will find a stable niche within the open analytics ecosystem

    Most deep learning deployments already depend on Spark, Hadoop, Kafka, and other open source data analytics platforms. What’s becoming clear is that you can’t adequately train, manage, and deploy deep learning algorithms without the full suite of big data analytics capabilities provided by these other platforms. In particular, Spark is becoming an essential platform for scaling and accelerating DL algorithms built in various tools. As I noted in this recent article, many DL developers are using Spark clusters for such specialized pipeline tasks as hyperparameter optimization, fast in-memory data training, data cleansing, and preprocessing.
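    The hyperparameter-optimization step mentioned above is naturally parallel: each candidate configuration trains independently, and only the scores are gathered at the end. Here is a minimal single-machine sketch of that pattern, using Python's standard library in place of a Spark cluster and a toy quadratic loss standing in for real training:

```python
from concurrent.futures import ThreadPoolExecutor

def train_and_score(lr):
    # Toy stand-in for "train a model with learning rate lr and
    # return its validation loss"; the loss surface is assumed
    # (for illustration) to be best around lr = 0.1.
    return lr, (lr - 0.1) ** 2

candidates = [0.001, 0.01, 0.1, 0.5, 1.0]

# Each candidate is evaluated independently -- the same shape of work
# a Spark cluster would distribute across many machines.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(train_and_score, candidates))

best_lr, best_loss = min(results, key=lambda r: r[1])
print(best_lr)  # 0.1
```

    Because the evaluations share no state, scaling this out is just a matter of swapping the executor for a distributed one; the search logic itself is unchanged.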

    4. Deep learning tools will incorporate simplified programming frameworks for fast coding

    The application developer community will insist on APIs and other programming abstractions for fast coding of the core algorithmic capabilities with fewer lines of code. Going forward, DL developers will adopt integrated, open, cloud-based development environments that provide access to a wide range of off-the-shelf and pluggable algorithm libraries. These will enable API-driven development of DL applications as composable containerized microservices. The tools will automate more DL development pipeline functions and present a notebook-oriented collaboration and sharing paradigm. As this trend intensifies, we’ll see more headlines such as “Generative Adversarial Nets in 50 Lines of Code (PyTorch).”

    5. Deep learning toolkits will support visual development of reusable components

    Deep learning toolkits will incorporate modular capabilities for easy visual design, configuration, and training of new models from pre-existing building blocks. Many such reusable components will be sourced through “transfer learning” from prior projects that addressed similar use cases. Reusable DL artifacts, incorporated into standard libraries and interfaces, will consist of feature representations, neural-node layerings, weights, training methods, learning rates, and other relevant features of prior models.

    6. Deep learning tools will be embedded in every design surface

    It’s not too soon to start envisioning “democratized deep learning.” Within the next five to 10 years, deep learning development tools, libraries, and languages will become standard components of every software development toolkit. Equally important, user-friendly DL development capabilities will be embedded in generative design tools used by artists, designers, architects, and creative people of all stripes who would never go near a neural network. Driving this will be a popular mania for deep learning-powered tools for image search, autotagging, photorealistic rendering, resolution enhancement, style transformation, fanciful figure inception, and music composition.
    As the DL market advances toward mass adoption, it will follow in the footsteps of the data visualization, business intelligence, and predictive analytics markets. All of them have moved their solutions toward self-service cloud-based delivery models that deliver fast value for users who don’t want to be distracted by the underlying technical complexities. That’s the way technology evolves.

    3. How can you implement deep learning?

    1. Automatic Colorization of Black and White Images

    2. Automatically Adding Sounds To Silent Movies

    3. Automatic Machine Translation

    4. Object Classification and Detection in Photographs

    5. Automatic Handwriting Generation

    6. Automatic Text Generation

    7. Automatic Image Caption Generation

    8. Automating Games

    4. Can you become a Deep Learning Engineer?

    Here are some steps to becoming a Deep Learning engineer.

    1. Learning the Skills

    • Learn to code using Python or a similar language.
    • Work through online data exploration courses.
    • Complete online courses related to machine learning.
    • Earn a relevant certification or degree to help you land a job.

    2. Gaining Experience

    • Work on personal machine learning projects.
    • Participate in Kaggle knowledge competitions.
    • Apply for a machine learning internship.

    3. Acquiring a Machine Learning Job

    • Look for machine learning jobs online.
    • Write a resume that highlights your machine learning skills.
    • Create a personalized cover letter for each position you apply to.
    • Submit the job application.

    4. Working as a Machine Learning Engineer

    • Create and run machine learning experiments.
    • Build and implement machine learning systems.
    • Ensure the data pipelines run smoothly.
    • Participate in educational programs to earn promotions.