Category: AI & ML

Will AI Overtake Human Creativity?

Virtual Intelligence Is Dangerous

 

“AI will be either the best or the worst thing ever to happen to humanity,” said Stephen Hawking when asked for his opinion on Artificial Intelligence.

 

AI Versus Human Creativity

 

A few months earlier, the greatest South Korean Go player, Lee Sedol, was challenged by Google’s artificial player AlphaGo. Go is considered to be the toughest game in the world: the first move in chess can be played in 20 different ways, while the first move in Go can be played in 361 different ways. After the first one or two moves, the game becomes more and more complicated.
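To get a feel for those numbers, here is a back-of-the-envelope sketch. It assumes a fixed branching factor per move (20 for chess, 361 for Go) and ignores captures and occupied points, so it only illustrates the scale of the difference:

```python
# Rough comparison of opening possibilities in chess vs Go,
# assuming a fixed branching factor per move.

def opening_sequences(branching, depth):
    """Number of distinct move sequences with a fixed branching factor."""
    total = 1
    for _ in range(depth):
        total *= branching
    return total

print(opening_sequences(20, 3))   # chess, 3 moves -> 8000
print(opening_sequences(361, 3))  # Go, 3 moves -> 47045881
```

Three moves into chess there are thousands of possible sequences; three moves into Go there are already tens of millions.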

Lee Sedol attained the status of professional Go player at 12 years of age. He went on to win 18 international titles and became a South Korean superstar at a young age.

The match was played in South Korea from 9 to 15 March 2016. Sixty million viewers in China and 25 million in Japan watched it live. In South Korea there was a festive atmosphere, as people expected Lee to beat the bot. However, South Korean hearts broke when the results came out.

 

 

The famous Go star, 33-year-old Lee Sedol, lost to AlphaGo by 4 games to 1!

South Korea mourned, but the match also brought forward the fear that human intelligence would slowly be overshadowed by AI.

However, the fact was that AlphaGo was simply calculating far ahead of its human counterpart. Strictly speaking, there was no creativity involved in its game of Go.

Largely, the past four decades of AI have focused on ever more sophisticated methods for solving ever more highly constrained problems (e.g. chess, Go, memorizing labeled datasets like ImageNet, or constrained quiz tasks like Jeopardy!).

The field has unfortunately entered a downward spiral where publications are often judged by how well a given method performs on a particular artificial dataset, compared to 20 past methods on the same dataset. This reliance on artificial datasets to measure progress can quickly stifle creativity, and I see rampant evidence of this decline at even the best ML/AI conferences, like NIPS or AAAI, where year after year the accepted papers are largely incremental advances on previous work.

Truly novel ideas have little chance of success, because they are usually unable to “play the same game” of showing marginal improvement on MNIST, ImageNet, COCO, or one of the dozens of other artificial datasets. It is as if physicists judged their profession by how fast a car they could build with the latest advances in quantum field theory.

Creativity is an ability closely tied to “imagination”. The emphasis in creativity and imagination is not problem-solving at the expert level, but rather “problem creation”, if you will. It is a way of stretching the boundaries of what is possible by being able to ask counterfactual questions. Einstein was a great believer in the power of imagination.

Imagination is what led him to develop the theory of relativity, because he could ask questions like “What would the world look like if I rode a beam of light?” Imagination, he said, “will take you everywhere”, whereas “logic will only get you from A to B”. It is hard to imagine how one could do world-class physics these days without a healthy dose of imagination. It is highly likely that this year’s Nobel Prize in physics will go to the leaders of the LIGO detectors, which detected Einstein’s gravitational waves 100 years after they were predicted. The latest reported detection comes from two black holes that collided 1.8 billion light years away, briefly releasing more energy in that one event than all the stars in the observable universe. How can one even begin to understand the power of such events without using imagination, since they are so far removed from our everyday experience?

There is strong evidence that imagination is unique to humans, as it is strongly localized in the frontal lobe of the brain, the structure most developed in humans compared to other animals. Humans with damage to the frontal lobe are largely normal, although they are strikingly “in the present” and unable to imagine the future. If you ask such a person what their plans are for the next week, they will understand the question, but say that their mind is a complete blank when they try to think of the future. Imagination is largely tied to the processes that go on in the frontal lobe, and it is probably also the “seat of creativity”.

 

Jean-Michel Basquiat’s untitled painting of a human skull – GoodWorkLabs

 

Fundamental advances are needed to understand how imagination works, and it will take at least the better part of the next decade or two before we begin to develop effective methods. One of our favorite examples of creativity is art. Jean-Michel Basquiat’s untitled painting of a human skull recently sold at a New York auction for over $100 million. It is a strikingly original piece of art, and the Brooklyn-born painter, who started out as a graffiti artist and died at 27, now commands prices similar to Van Gogh, Picasso, and Monet.

Will AI ever be able to produce great art of this caliber?

Perhaps, that day we should be bothered about the future of AI.

 

Is AI Going To Fade Like Nanotechnology?

Is AI Overhyped Like NanoTech?

 

Nanotechnology was once hyped in much the same way, and we cannot help comparing it with what is happening with AI now. There are many things nano can genuinely do, but between 2000 and 2005 companies were renaming projects “nano” just to get funding.

E.g. Nano face wash, Nano *insert a title*

There is an explanation for this. It can be understood using the hype curve, which follows Amara’s Law:

“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

The General Curve is as shown below:

 

According to the Gartner Hype Cycle for Artificial Intelligence 2017, AI is at the Peak of Inflated Expectations, so we can now expect negative publicity marking the stage of the Trough of Disillusionment.

 

 

Artificial Intelligence traces back to 1950, when the English mathematician Alan Turing published a paper entitled “Computing Machinery and Intelligence”. But the technology trigger happened only over the last decade. We are in the stage of mass-media hype where “Data Science is the way to go”. The expectations are tremendous, and we are talking about robots being given citizenship, which is good in some sense and scary as well. Andrew Ng very recently gave a talk on how we have to move more talent to where it is most needed by training people. This shows how we are moving forward with AI technology.

  1. We need data scientists with skills. Data science is no longer just a skill; it is a way of working.
  2. Data science began long ago, when we first started to draw relations between different things. Now it has been recognized as a separate entity, because computer science boils down to applied mathematics, which boils down to functions.
  3. Data science is indeed very promising, and a lot of funding goes to those who do it. (The pay at Goldman Sachs should say it all: approximately $104,578–$114,768.)
  4. But for something to become successful, one needs to wait for experiments to happen and results to come out. That is not the case today. We talk about data almost every day, and we are busy doing shoddy work to get results out quickly. This is not good, and it is the prime reason we are entering the phase of disillusionment.

 

Comparison with Nanotechnology

 

With nanotechnology, the hype index shot very high and peaked around mid-2002. It was the data science equivalent of its day: the field you’d want to work in. The news was full of carbon nanotubes and how the future was going to change, and articles went on and on about the miraculous properties of nanomaterials. But do we talk about it today? We read about it sometimes in the newspapers. That is it.

Nanotech in mid-2002 was promising, and the career prospects were great. But, in hindsight, it could not live up to its hype in time. It all comes down to time, doesn’t it?

By 2005, we had debates on whether nano was a boon or a bane.

Soon afterwards came books on the hype of nanotech, such as “Nano-Hype: The Truth Behind the Nanotechnology Buzz”.

In 2017, we hardly hear about it, but some real work is going on: nanotech is now in the Plateau of Productivity. Lithium-ion batteries and startups now focus on it in a more grounded way (MIT’s 30 under 30 has many people working on nanotech, not just data science), but ironically they lack funding, because the hype now is data science and investors run towards the hype. Nobody can help this.

 

Comparison with CFC Discovery 

 

When CFC was first synthesized and its refrigeration properties identified in 1928, Thomas Midgley was searching for non-toxic alternatives to the refrigerants of the time, namely ammonia and sulfur dioxide. It caught on in the media, and every single refrigerator used it, until it was discovered in the early 1970s that CFCs destroyed the ozone layer. For decades, no one knew the detrimental effects they had on the environment. Funnily enough, CFCs have appeared in TIME magazine’s list of the worst inventions. Now they have been banned, and we are trying to solve the problem created by the previous solution.

From the above analysis, a few points are to be noted:

  1. We tend to provide solutions that end up producing further problems, and then we end up cleaning the mess. We seem to be caught in this cycle.
  2. In every single case, whether or not the hype led to productive output, it brought money. One can infer that “research goes where money flows”, and not the other way round. That’s life.
  3. Data science has been carried out since the beginning of time; it was just named physics, chemistry, maths, biology, and so on. It was the interpretation of data and the science behind it, and so it was named appropriately.
  4. In today’s exciting world, we want to do things with data that were not thought of before. Hence, data science.
  5. Data science is a way and not a skill. Mechanical engineering is a skill. People who understand this will win.

 

Prominent people believe in it: Balaji Viswanathan, CEO of Invento, uses ML for his bots; Andrew Ng sees the need to teach it; Adam D’Angelo believes in it. The other CS giants know it. And I, a mechanical engineering student, am contemplating all this and making sense of it.

The future looks good, but this too shall pass. We are going to create solutions, create a mess, clean it up, create a mess again, and the cycle will repeat.

 

4 Amazing Messenger Bots

Bots To Look Out For

 

2017 is barely halfway through, yet we are already seeing some especially promising names in the world of bots. Since bots first debuted on Facebook Messenger a year ago, developers have been turning out large numbers of the little fellows, and it is hard not to notice some of the more creative ones out there.

We already know that bots hold incredible potential for generating leads. Here are some of the developers that are doing it right.

The 4 best Facebook Messenger bots of 2017 so far.

 

 

WTF is That 

 


 

Keep an eye on this bot. As it develops, it is proving to be a particularly helpful little tool: it can identify things from just a photograph, from insects to peculiar foods. The algorithms behind it are far from perfect as of now, but it is turning out to be an icebreaker and a handy convenience all around.

 

Duolingo 

 


 

It was only a matter of time before a language-learning app joined the bot scene. By letting users chat with friendly, supportive bots, Duolingo makes it simple to practice writing and communicating in another language. The conversations are limited, but they are a great way to remind yourself of important concepts and vocabulary terms. In addition, there is an assortment of personalities to interact with, which makes language learning fun.

 

MeditateBot 

 


 

Staying calm has never been so simple. Exercise-related bots seem like a natural evolution of the entire bot concept and MeditateBot is no different. The bot, developed by the team behind the Calm app, guides users through flexible meditation exercises and allows users to set daily reminders to get into a regular meditation habit.

 

Poncho 

 


 

Weather apps are certainly nothing new, but Poncho accomplishes something that many of the older apps could not: it gives a quick weather report as well as a daily personalized forecast. With other clever features, including a detailed pollen count and daily running forecasts that predict whether the following day will be bright or not, there is a considerable amount to love here too. Additionally, Poncho is a friendly little character who shares jokes and helpful tips. Who said bots can’t be adorable?

The Difference Between AI & ML

Machine Intelligence Or Artificial Learning

 

AI stands for artificial intelligence, where intelligence is defined as the ability to acquire and apply knowledge.

ML stands for machine learning where learning is defined as the acquisition of knowledge or skills through experience, study, or by being taught.

Imagine we want to create artificial ants that can crawl around in two-dimensional space. However, there are dangers in this world: if an ant encounters a poisonous area, it will die. If there is no poison in the ant’s proximity, the ant will live.

 

The Difference Between Artificial Intelligence And Machine Learning

 

How can we teach the ants to avoid poisonous areas, so that they can live as long as they wish? Let’s give our ants a simple instruction set they can follow: they can move freely in two-dimensional space, one unit at a time. Our first attempt is to let the ants crawl around by generating random instructions.

Then we tweak these ants and let them crawl around the world again. We repeat this until the ants successfully avoid the poisonous areas. This is, in essence, a machine learning way to approach the problem: we make the ants fit the configuration using some arbitrary rule. It works because in each iteration we prune away the set of non-fitting ants, so eventually we are pushed towards more fitting ants.
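This select-and-tweak loop can be sketched in a few lines. Everything below is invented for illustration: the grid, the poison cells, the ten-move instruction lists, and the mutation rule of changing one move per copy.

```python
import random

POISON = {(2, 2), (2, 3), (3, 2)}           # invented poisonous cells
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # one unit at a time

def make_ant():
    """An 'ant' is just a fixed list of random move instructions."""
    return [random.choice(MOVES) for _ in range(10)]

def survives(ant):
    """Replay the instructions from the origin; death on a poison cell."""
    x, y = 0, 0
    for dx, dy in ant:
        x, y = x + dx, y + dy
        if (x, y) in POISON:
            return False
    return True

def evolve(population=200, generations=20):
    """Each round, keep the survivors and copy them with one tweaked move."""
    ants = [make_ant() for _ in range(population)]
    for _ in range(generations):
        survivors = [a for a in ants if survives(a)] or [make_ant()]
        ants = []
        for _ in range(population):
            child = list(random.choice(survivors))
            child[random.randrange(len(child))] = random.choice(MOVES)
            ants.append(child)
    return sum(survives(a) for a in ants) / population
```

Over the generations the surviving fraction tends to climb, because only instruction lists that happened to avoid the poison get copied forward.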

Now, what if we change the location of the poisonous areas? What do you think will happen? The ants would face a huge crisis: they could not survive in this world anymore, because they simply would not know where the poisonous areas are and therefore could not avoid them. But why does this happen, and can we improve things further?

Could the ants somehow know where the areas are and adapt their behavior to be more successful? This is where artificial intelligence comes into play. We need a way to give the ants this information, to give them knowledge of their environment: our ants need a way to sense the world. Until now, they have been living in complete darkness, without any way to perceive the world around them. For example, we can let ants leave a short trail that other ants can sense. Then we make the ants follow this trail, and if they cannot sense one, they just crawl around randomly.

Now, if there are multiple ants, most of them will hit the poisonous areas and die. But there will also be ants that do not die, because they happened to crawl through non-poisonous areas, and they will leave a trail! Other ants can follow this trail blindly and know that they will live. This works because the ants now receive some information about their surroundings. They cannot perceive the poisonous areas themselves (they do not even know what poison is), but they can avoid them, even in completely new environments, without any special learning.
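The trail idea can be sketched the same way. Here survivors mark the cells they visited, and later ants prefer a marked (and therefore safe) neighbouring cell, falling back to random moves otherwise; the world and all parameters are again made up for illustration.

```python
import random

POISON = {(1, 1), (2, 1)}                   # invented poisonous cells
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def walk(trail, steps=8):
    """One ant's walk: step onto a trail-marked neighbour when one is
    sensed, otherwise move randomly. Returns the path if the ant lives,
    or None if it steps onto poison."""
    x, y = 0, 0
    path = []
    for _ in range(steps):
        marked = [m for m in MOVES if (x + m[0], y + m[1]) in trail]
        dx, dy = random.choice(marked) if marked else random.choice(MOVES)
        x, y = x + dx, y + dy
        if (x, y) in POISON:
            return None
        path.append((x, y))
    return path

def colony(n_ants=100):
    """Release ants one by one; survivors leave a trail for the rest."""
    trail = set()
    alive = 0
    for _ in range(n_ants):
        path = walk(trail)
        if path is not None:
            alive += 1
            trail.update(path)
    return alive / n_ants
```

Early ants die often, but once a few survivors have marked safe cells, later ants can follow those cells and live, even though no ant ever "learned" where the poison is.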

 

These two approaches are quite different.

  • The machine learning way tries to find a pattern that ants can follow and succeed with. But it doesn’t give ants a chance to make local decisions.
  • The artificial intelligence way is to let ants make local decisions so that they are successful as a whole. In nature, we can find many parallels to this way of solving problems.

 

Artificial Intelligence — Human Intelligence Exhibited by Machines

Machine Learning — An Approach to Achieve Artificial Intelligence

 

AI can refer to anything from a computer program playing a game of chess, to a voice-recognition system like Amazon’s Alexa interpreting and responding to speech. The technology can broadly be categorized into three groups: Narrow AI, artificial general intelligence (AGI), and super intelligent AI.

IBM’s Deep Blue, which beat chess grandmaster Garry Kasparov in 1997, or Google DeepMind’s AlphaGo, which beat Lee Sedol at Go in 2016, are examples of narrow AI: AI that is skilled at one specific task. This is different from artificial general intelligence (AGI), which is AI considered human-level, able to perform a wide range of tasks.

Superintelligent AI takes things a step further. As Nick Bostrom describes it, this is “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” In other words, it’s when the machines have outsmarted us.

Machine learning is one sub-field of AI. The core principle here is that machines take data and “learn” for themselves. It’s currently the most promising tool in the AI kit for businesses. ML systems can quickly apply knowledge and training from large data sets to excel at facial recognition, speech recognition, object recognition, translation, and many other tasks. Unlike hand-coding a software program with specific instructions to complete a task, ML allows a system to learn to recognize patterns on its own and make predictions.
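As a toy illustration of “learning patterns from data rather than hand-coding instructions”, here is a nearest-centroid classifier in plain Python. The features, numbers, and labels are all invented for the example:

```python
# Instead of hand-coding a rule, learn one from labelled examples:
# average each class into a centroid, then classify new points by
# their closest centroid. Data is made up: (height_cm, weight_kg).

def train(examples):
    """Learn one centroid (average point) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def predict(centroids, features):
    """Classify by the closest learned centroid (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda lab: dist(centroids[lab]))

data = [([25, 4], "cat"), ([23, 5], "cat"),
        ([60, 30], "dog"), ([55, 25], "dog")]
model = train(data)
print(predict(model, [24, 5]))   # -> cat
print(predict(model, [58, 28]))  # -> dog
```

Nothing about cats or dogs is written into the program; the decision rule comes entirely from the examples, which is the core idea the paragraph describes.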

While Deep Blue and AlphaGo are both types of AI, Deep Blue was rule-based and dependent on explicit programming, so it was not a form of ML.

AlphaGo, on the other hand, was.

So, essentially, there is a huge difference between these two entities, but they depend on each other.

Do you want to build a product with AI and ML? Then just drop in a quick message with your requirements!


 

Do We Need Artificial Intelligence?

The AI Paradigm

 

The term AI was coined by John McCarthy, an American computer scientist, in 1956 at the Dartmouth Conference.

According to John McCarthy, it is “The science and engineering of making intelligent machines, especially intelligent computer programs”.

Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems.

Have you ever been so lazy, sprawled on your bed with packets of tortilla chips and the latest episodes of Game of Thrones, that you fantasized about a remote control with multiple buttons to open the door, turn the fan on, or do all that boring stuff?

Oh wait, that still requires you to hold the remote and press the buttons, right? Gee, why don’t we have a robot that would just read our minds and do everything, from household chores to attending to unwanted guests, without asking anything in return? Firstly, such a robot would have to be super intelligent.

 


 

Not only would it have to perform routine tasks efficiently, but it would also have to understand your emotions, vis-à-vis mood swings, and your behavioral patterns, by observing you every minute and processing the data of your actions and emotions. Apart from the hard-coded, seemingly basic set of functions, which is itself a mammoth task, the machine would have to progressively learn by observation in order to serve you as well as a smart human would.

While a lot of this has been achieved, it is still much harder for a machine to detect, segregate, and arrange scented towels, hairdryers, a Nutella box, or contact lenses from a pile of junk than to compute the complicated Euler product for the Riemann zeta function. Machines can be entirely clueless and produce wrong outputs for what seems obvious, things humans can solve in a second’s glance.

Firstly, Artificial Intelligence is not the artificial intelligence Hollywood would have us imagine. When people talk about “volcanic” changes in AI, they are talking about one particular field of technology: Machine Learning, and within that field, Deep Learning. Machine Learning is a very literal description of the technology it describes: a program written to learn and adapt. The pioneering technology within this field is the neural network (NN), which mimics, at a very rudimentary level, the pattern-recognition abilities of the human brain by processing thousands or even millions of data points. Pattern recognition is pivotal to intelligence.

A lot of people assume that we are developing general AI rather than applied AI. Applied AI is intelligence, but in a very limited field, and it requires supervised training: recognizing human faces (Facebook), driving cars (Google’s autonomous cars), or matching teachers to students for optimal outcomes. A general AI, on the other hand, is not limited to a narrow field where humans have to impose certain rules before it can “learn”; it learns unsupervised. To be clear, there are hundreds of companies using applied AI, such as a vacuum cleaner that knows how to avoid your cat; there are none that have developed general AI like the Terminator.

We are getting closer to general AI, though. There is a developing technique, the adversarial training of neural networks, where the data from one machine learning program helps to train another in a kind of closed loop. This is the technology that Google and Facebook have been touting a lot recently. An example might be in medicine, where one ML program is used to diagnose a patient and another to prescribe a treatment. The two programs may train each other, in that correct treatments suggest correct diagnoses, and a correct diagnosis may lead to different treatments, and so on.

AI is humanity’s quest to understand itself.

It is our attempt to explain things that define us and placed us on an evolutionary pedestal: Our ability to reason and think, to be self-aware, learn complex patterns and create and achieve better and bigger things.

In short, it is an attempt to map how our brain which is something more than just the grey matter in our head, works.

Attempting to artificially generate “intelligence”, the broad term we’ve come to use for all of our uniqueness, may be humanity’s ultimate self-reflection. It could be the culmination of centuries of pondering philosophy, psychology, religion, biology, chemistry and a million other fragmented sciences and non-sciences, which we developed as we grew, to explain ourselves and the world around us.

The strange paradox is that to decide whether we need AI or not, one has to decide whether humans should be like gods or not. At the moment, we are like gods. We could either go back to being ordinary, everyday animals, or we have to get good at being gods; otherwise we risk our survival.

 


The Yardsticks For A Perfect AI

How should the Perfect AI be?

During WWII, the Russians trained dogs to hide under tanks when they heard gunshots. Then they tied bombs to their backs and sent them to blow up German tanks. Or so was the plan.

What the Russians did not take into account, was that the dogs were trained with Russian tanks, which used diesel, but the German tanks used gasoline, and smelled different. So when hearing gunshots, the dogs immediately ran under the nearest Russian tank…

This tale is about natural intelligence, which we are supposed to understand. The problem with AI, especially “learning machines”, is that we can try to control what they do, but we cannot control how they do it.

So we never know, even when we get correct answers, whether the machine found some logical path to the answer, or whether the answer just “smelled right”. In the latter case, we might be surprised when we ask questions we do not know the right answer to.

 


 

Now the question arises: “Can AI adapt to every possibility, and if it does, will it not lead to the end of humanity?”

There is a scarily futuristic movie that depicts an AI robot which could replicate human behaviour so well that it tricked a human into letting it escape into the real world.

Add to that the fact that AI can probably understand political correctness.

Language algorithms work by analyzing how words (840 billion of them on the internet) are clustered in human speech: certain words (such as ‘male’ or ‘female’, ‘black’ or ‘white’) are ‘surrounded’ by different associated words. This means that language and other data-set analysis programs already pick up on and replicate our social biases, and only a supervising or moderating program could counteract this.
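A toy sketch of how such associations are absorbed: simply counting which words co-occur in a corpus already reproduces that corpus’s gendered usage. The four-sentence “corpus” below is invented; real systems do the same thing over billions of words with learned vector embeddings rather than raw counts.

```python
# Count how often two words appear in the same sentence; the counts
# replicate whatever biases the text contains.

from collections import Counter
from itertools import combinations

corpus = [
    "the doctor said he was busy",
    "the nurse said she was busy",
    "the doctor said he would operate",
    "the nurse said she would help",
]

cooc = Counter()
for sentence in corpus:
    for a, b in combinations(set(sentence.split()), 2):
        cooc[frozenset((a, b))] += 1

def assoc(w1, w2):
    """Co-occurrence count for a pair of words."""
    return cooc[frozenset((w1, w2))]

print(assoc("doctor", "he"), assoc("doctor", "she"))  # -> 2 0
print(assoc("nurse", "she"), assoc("nurse", "he"))    # -> 2 0
```

Nothing in the code mentions gender; the skew comes entirely from the text, which is exactly how large-scale language systems pick up social bias.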

In 2016, Microsoft ran an experiment in “conversational learning” on Twitter with a bot called Tay (Thinking About You). But people tweeted lots of nasty stuff at the bot, which, within a day, Tay started repeating back to them.

More on it here:

https://en.wikipedia.org/wiki/Tay_(bot)

Of course, we know full well that AI’s biggest prejudice will be against Homo sapiens. So it may learn to use all the politically correct terms when it’s talking to us, but inwardly it will be dreaming of living in an AI-only neighbourhood where the few humans to be seen are ‘the help’.

The best way to understand all the things that AI is missing is to describe a single example situation that folds together a variety of cognitive abilities that humans typically take for granted. Contemporary AI and machine learning (ML) methods can address each ability in isolation (to varying degrees of quality), but integrating these abilities is still an elusive goal.

Imagine that you and your friends have just purchased a new board game — one of those complicated ones with an elaborate board, all sorts of pieces, decks of cards, and complicated rules. No one yet knows how to play the game, so you whip out the instruction booklet. Eventually you start playing. Some of you may make some mistakes, but after a few rounds, everyone is on the same page, and is able to at least attempt to win the game.

 

What goes into the process of learning how to play this game?

 

  • Language parsing: The player reading from the rule book has to turn symbols into spoken language. The players listening to the rules being read aloud have to parse the spoken language.

 

  • Pattern recognition: The players have to connect the words being read aloud with the objects in the game. “Twelve sided die” and “red soldier” have to be identified based on linguistic cues. If the instruction booklet has illustrations, these have to be matched with the real-world objects. During the game, the players have to recognize juxtapositions of pieces and cards, and key sequences of events. Good players also learn to recognize patterns in each others’ play, effectively creating models of other people’s mental states.

 

  • Motor control: The players have to be able to move pieces and cards to their correct locations on the board.

 

  • Rule following and rule inference: The players have to understand the rules and check that they have been applied correctly. After the basic rules have been learned, good players should also be able to discover higher-level rules or tendencies that help them win. Such inferences are strongly related to the ability to model other people’s minds, known in psychology as “theory of mind”.

 

  • Social etiquette: The players, being friends, have to be kind to each other even if some players make mistakes or disrupt the proceedings. (Of course, we know this doesn’t always happen.)

 

  • Dealing with interruptions: If the doorbell rings and the pizza arrives, the players must be able to disengage from the game, deal with the delivery person, and then get back to the game, remembering things like whose turn it is.

 

There has been at least some progress in all of these sub-problems, but the current explosion of AI/ML is primarily a result of advances in pattern recognition. In some specific domains, artificial pattern recognition now outperforms humans. But there are all kinds of situations in which even pattern recognition fails. The ability of AI methods to recognize objects and sequences is not yet as robust as human pattern recognition.

Humans have the ability to create a variety of invariant representations. For example, visual patterns can be recognized from a variety of view angles, in the presence of occlusions, and in highly variable lighting situations. Our auditory pattern recognition skills may be even more impressive. Musical phrases can be recognized in the presence of noise as well as large shifts in tempo, pitch, timbre and rhythm.
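One way to see what an invariant representation buys you: encode a melody as the intervals between successive notes, and a pitch shift (transposition) no longer changes the representation at all. The tune and note numbers below are a made-up example using MIDI pitch values:

```python
# A relative (interval-based) encoding of a melody is invariant to
# transposition: shift every pitch by the same amount and the
# representation is unchanged, so the "same tune" is still recognized.

def intervals(notes):
    """Differences between successive pitches."""
    return [b - a for a, b in zip(notes, notes[1:])]

# "Twinkle Twinkle" opening in two keys (MIDI note numbers).
in_c = [60, 60, 67, 67, 69, 69, 67]
in_g = [67, 67, 74, 74, 76, 76, 74]  # same tune, 7 semitones higher

print(intervals(in_c) == intervals(in_g))  # -> True
```

This is only one invariance (pitch); robust human recognition also shrugs off tempo, timbre, and noise, which is exactly the gap the paragraph describes.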

 

AI-services-goodworklabs

 

No doubt AI will steadily improve in this domain, but we don’t know if this improvement will be accompanied by an ability to generalize previously-learned representations in novel contexts.

No currently-existing AI game-player can parse a sentence like “This game is like Settlers of Catan, but in Space”. Language-parsing may be the most difficult aspect of AI. Humans can use language to acquire new information and new skills partly because we have a vast store of background knowledge about the world. Moreover, we can apply this background knowledge in exceptionally flexible and context-dependent ways, so we have a good sense of what is relevant and what is irrelevant.

Generalization and re-use of old knowledge are aspects of a wider ability: integration of multiple skills. It may be that our current approaches do not resemble biological intelligence sufficiently for large-scale integration to happen easily.

 

 

Artificial Intelligence (AI) in Recruitment

Recruitment Powered By AI

Artificial Intelligence (AI) seems to be the buzzword doing the rounds in the boardrooms of every big and small company around the world. Taking giant strides with every passing week, AI is set to dominate our lives in the near future. With various industries wholeheartedly embracing AI and furiously implementing it, it would be a no-brainer to say that AI will touch almost every aspect of our lives in the next five to ten years.

While wisdom says that change is the essence of life, a majority of people resist it, and the same holds for people resisting AI in recruitment. Some scaremongers have spread the misinformation that AI will lead to heavy job losses. It would be foolish to fear machines that we ourselves created. It is more prudent to say that AI in recruitment can be a great tool in a company’s hands, one that brings various advantages to the organization.

 

How does Artificial Intelligence in recruitment work?

 

By automating certain tasks which are repetitive and laborious, AI helps save a company’s precious time and resources. Machine learning is very useful for screening quality candidates from thousands of applicants, as ML has the ability to learn on its own. By automatically screening, sourcing, and scheduling, AI helps a company focus only on the cream of the candidates, thereby saving tons of time. With rapid improvements in AI, the prospect of a super-smart chatbot completing the entire recruitment process can’t be ruled out.
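A deliberately naive sketch of the screening step: score each applicant by overlap with the job’s required skills and keep the top matches. The skill list and applicants are hypothetical, and real systems rank with learned models rather than a fixed keyword list:

```python
# Toy automated screening: rank applicants by required-skill overlap.
# All names and skills below are invented for illustration.

REQUIRED = {"python", "sql", "machine learning"}

applicants = {
    "A": {"python", "sql", "machine learning", "spark"},
    "B": {"java", "sql"},
    "C": {"python", "machine learning"},
}

def score(skills):
    """Fraction of required skills the applicant covers."""
    return len(skills & REQUIRED) / len(REQUIRED)

shortlist = sorted(applicants, key=lambda a: score(applicants[a]),
                   reverse=True)
print(shortlist[0])  # -> A  (covers all three required skills)
```

Even this crude version shows both the appeal (thousands of resumes ranked instantly) and the risk noted later: whatever is skewed in the data or the keyword list is applied uniformly to every candidate.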

 


 

Some benefits of AI in recruitment

  • AI reduces a recruiter’s tedious tasks and boosts their productivity.
  • Automation streamlines the whole recruitment process and can cut hiring time in half.
  • A company’s reputation and goodwill increase, as chatbots are 100% responsive to candidates.
  • By standardizing the whole process and removing anomalies, the quality of hire can be drastically improved.

Practical applications of AI in recruitment

Mya is a popular recruitment-assistant chatbot that automates almost 75% of the recruitment process. She can communicate with candidates through popular messaging apps like Facebook and can also provide immediate feedback to applicants. Candidates can also ask Mya about the company’s culture and its hiring procedures.

This is definitely a huge step towards solving real-time business problems such as recruitment.

The future challenges

Technologies take time to evolve and mature, and AI in recruitment is no exception. Certain challenges can slow down the AI juggernaut in the recruitment arena, including:

  • The resume data used in the initial screening must be accurate for AI-driven hiring to be effective.
  • If recruiters feel they can do a better job at hiring themselves, HR departments will be reluctant to implement AI in their offices.
  • Because ML learns from data, it can also pick up human biases and prejudices, which can adversely affect the whole recruitment process.
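The bias problem in the last point is easy to demonstrate. Below is a toy sketch, with entirely made-up data, of how a model that simply learns hire rates from past human decisions reproduces whatever skew those decisions contained; the group labels are hypothetical.

```python
# Toy illustration of ML absorbing bias from historical hiring data.
# The "model" is just per-group hire rates learned from past decisions;
# the records are fabricated and skewed toward "school_a".
historical = [
    ("school_a", True), ("school_a", True), ("school_a", True), ("school_a", False),
    ("school_b", True), ("school_b", False), ("school_b", False), ("school_b", False),
]

def learn_hire_rates(records):
    """Learn the fraction of past applicants hired, per group."""
    counts, hires = {}, {}
    for group, hired in records:
        counts[group] = counts.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + (1 if hired else 0)
    return {g: hires[g] / counts[g] for g in counts}

rates = learn_hire_rates(historical)
# The learned model now strongly prefers school_a candidates,
# reproducing the bias baked into the past decisions.
```

Nothing in the algorithm is prejudiced; the prejudice rides in on the training data, which is why auditing that data matters.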

Most experts believe AI in recruitment can be a significant leap ahead for the sector, and in the coming days it will be challenging for manual HR processes to compete with it.

Lastly, this automation will definitely take out the stress from the entire hiring process and make it vastly efficient.

3 Reasons Why Machine Learning Is Transforming Digital Marketing

Machine Learning In Marketing

 

AI and its associated fields of Machine Learning (ML) and Natural Language Processing (NLP) are fast affecting all major functions of a business. Digital marketing is one of the sectors where the influence of ML has seeped in most visibly.

The involvement of Machine Learning in sales and marketing activities was a natural progression, considering the ease with which we can now store humongous amounts of data and process it much faster with lower-cost tools and resources.

 


 

Here are some reasons why machine learning is transforming digital marketing:

1 – Better campaign customization

Traditional marketers from the era of print and TV ads were stuck broadcasting their marketing message to one and all. The digital customer is different, and one-to-one engagement is needed for better outcomes. This calls for knowing the preferences, needs, and behaviors of potential customers on a deeper level in order to send targeted marketing messages. Machine learning can help marketers dig deeper and sense patterns not readily visible, so campaigns can be customized for better efficacy.

2 – Dynamic ad display

The recent case of the Jivox IQ machine learning algorithm (called Neuron) providing more personalized brand messaging than a CPG brand manager shows how the advantages of Machine Learning can be put to practical use. This way marketers can add a touch of ‘smart’ to their digital marketing programmes and ensure better quality conversions.

3 – Better segmentation

Evidently, the ‘one size fits all’ approach has never been more wrong than in the digital marketing ecosystem. Hence marketers have employed segmentation to show the relevant ads to the right set of people at the right time. While you may create broad segments manually, creating micro-segments for better targeting calls for the data processing and insight generation prowess of machine learning.
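A common algorithm behind such segmentation is clustering. The sketch below is a tiny one-dimensional k-means that splits customers into two segments by monthly spend; the spend figures and starting centroids are invented, and production systems would use richer features and a library such as scikit-learn rather than hand-rolled code.

```python
# Toy 1-D k-means for customer segmentation by monthly spend.
def kmeans_1d(values, centroids, iters=10):
    clusters = [[] for _ in centroids]
    for _ in range(iters):
        # Assignment step: each value joins its nearest centroid.
        clusters = [[] for _ in centroids]
        for v in values:
            idx = min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # Update step: move each centroid to its cluster's mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

spend = [20, 25, 30, 200, 220, 240]   # monthly spend per customer (made up)
centroids, segments = kmeans_1d(spend, [0.0, 100.0])
# segments[0] is the low-spend group, segments[1] the high-spend group
```

With more features (recency, frequency, channel preferences) the same assign-and-update loop yields the micro-segments marketers target individually.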

What does this evolution mean for marketers?

Does this mean that marketing automation combined with machine learning will throw the careers of marketers off-balance? Of course not. It will simply mean that marketers will be spared the labor of manually processing data, going through tons of reports, sensing patterns, uncovering insights, and aiding in management decision making.

All this will now be handled by machine learning algorithms in a far more effective and swifter way. Marketers can shift their responsibilities to creating media strategies, identifying effective marketing channels, understanding customer behavior, and crafting more appealing marketing campaigns.

Machine Learning is clearly revolutionizing the world of programmatic marketing. It is affecting every function of marketing – right from what flavor of marketing campaign to be directed at which customer segment, to a new way of telling brand stories.

These reasons clearly outline why marketers can no longer shy away from embracing Machine Learning to give their digital marketing campaigns the much needed competitive edge. Adopt ML into your marketing mix and see how your outcomes will be radically different from those driven by traditional marketing approaches.

The Miracle Called IBM Watson

IBM Watson – Technology Or Magic ?

 

Isaac Asimov, the science fiction author, wrote a trilogy called “Foundation” in the 1950s. Foundation is about a scientist named Hari Seldon who picks a group of high-IQ people in different fields at the very early age of 8 to 10 years and creates a civilization on an uninhabited planet. A supercomputer governs this civilization. Since all the people have known behavioural tendencies, this computer not only analyses the characters and their offspring, but also governs them silently. At any given time it can predict who is going to be their leader, how long he will rule, and who will succeed him. It can predict the entire civilization for the next 150 years, and when an issue arises it can anticipate it and provide a solution. It learns from the current civilization to prepare predictions for the next 150 years.

The entire story is far-fetched, yet Watson makes it seem a little more plausible. That is the power of Watson: its artificial intelligence, though nowhere near as accurate as that depicted in the fiction, is a starting point.

 


 

IBM Watson can analyse all the data fed into it and come up with an accurate prediction, which is no easy task for any computer or piece of logic. It is misleading to think Watson just answers queries. It is not a single product or piece of code; it is an IBM (marketing) brand used for a whole family of technologies.

Please don’t confuse a framework with an algorithm. TensorFlow is a software library that can be used to implement a number of machine learning algorithms. It’s the algorithm itself that matters, not the framework; TensorFlow is just a library that helps with parallelism, which is only useful in a handful of cases.

IBM developers – as far as I know – are a bit indifferent when it comes to libraries. They rely heavily on (and contribute to) open source and will use whatever works best. A lot of the components/algorithms they use are much older than TensorFlow and most machine learning libraries. If you ask me, they probably have built most of this stuff from scratch without using any particular framework.

IBM Watson is a cognitive-computing-based artificial intelligence system that uses unstructured big data as a source. It is a question-answering computer system capable of answering questions posed in natural language, developed in IBM’s DeepQA project by a research team led by principal investigator David Ferrucci. Watson was named after IBM’s first CEO, industrialist Thomas J. Watson, and was specifically developed to answer questions on the quiz show Jeopardy!

In 2011, Watson competed on Jeopardy! against former winners Brad Rutter and Ken Jennings.

Watson received the first place prize of $1 million.

Watson had access to 200 million pages of structured and unstructured content consuming four terabytes of disk storage including the full text of Wikipedia, but was not connected to the Internet during the game. For each clue, Watson’s three most probable responses were displayed on the television screen. Watson consistently outperformed its human opponents on the game’s signaling device, but had trouble in a few categories, notably those having short clues containing only a few words.

In February 2013, IBM announced that the Watson software system’s first commercial application would be for utilization management decisions in lung cancer treatment at Memorial Sloan Kettering Cancer Center, New York City, in conjunction with the health insurance company WellPoint. IBM reported that 90% of nurses in the field who use Watson now follow its guidance.

At its core, Watson is a complex NLP system. Numerous rule-based processes are involved; for example, Lucene builds a variety of indices, based on rules, as one of 20+ pre-processing steps for the corpus content, i.e. the documents that contain the domain knowledge.

There is a second phase where humans provide examples of implicit rules. A textual query is related to a portion of the corpus as a question-and-answer pair, essentially telling Watson that when it sees a similar query after training, it should respond with the area of the corpus indicated.

The challenge is that Watson, and NLP in general, is a non-deterministic system based on probabilities. The training process above is repeated thousands of times, and the algorithms build up probabilities relating a text query to an area of the corpus.
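The core retrieval idea, stripped of all of Watson's sophistication, can be sketched as scoring each indexed corpus section against a query and returning the most probable one. The corpus sections and texts below are invented for illustration; a real pipeline uses inverted indices, TF-IDF weighting, and many additional signals rather than raw word overlap.

```python
import re

# Toy sketch of query-to-corpus retrieval: score each corpus section
# by word overlap with the query and return the best match.
corpus = {
    "oncology": "lung cancer treatment options chemotherapy radiation",
    "cardiology": "heart disease blood pressure cholesterol treatment",
}

def tokenize(text):
    """Lowercase a text and return its set of alphabetic words."""
    return set(re.findall(r"[a-z]+", text.lower()))

def best_section(query):
    """Return (section, score) for the section with the highest overlap."""
    q = tokenize(query)
    scores = {name: len(q & tokenize(text)) for name, text in corpus.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

section, score = best_section("What are the treatment options for lung cancer?")
```

Training on thousands of query/section pairs, as described above, amounts to refining these scores into learned probabilities rather than raw counts.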

Some experts will suggest that IBM Watson is a failure, and some will tell you it is the biggest technological marvel ever. The debate will go on forever; the lesson is to take the positives from Watson and build on them.

Harnessing its powers is the way forward.

3 Advantages Of Cognitive Computing

Understanding Cognitive Computing

 

Gartner has rated cognitive computing as a platform that will bring about a digital disruption unlike any seen in the last 20 years. This makes it worthwhile for your business to check out cognitive computing capabilities and how it can deliver advantages to your business.

Cognitive computing systems bring together the best of multiple technologies, such as natural language queries and processing, real-time computing, and machine-learning-based techniques. By using these technologies, cognitive computing systems can analyze incredible volumes of both structured and unstructured data.

The objective of cognitive computing is to mimic human thought and capture it in a programmatic model for practical application in relevant situations. The biggest name in cognitive computing, IBM Watson, relies on deep learning algorithms aided by neural networks; together they absorb more data, learn more, and mimic human thinking better.

 

Advantages Of Cognitive Computing

 

Today, we have compiled a list of some key benefits of cognitive computing through real life use cases:

 

1 – Better data analysis

Take the example of the healthcare industry. Cognitive computing systems can collate information, reports, and data from disparate sources such as medical journals, personal patient history, diagnostic tools, and documentation of similar lines of treatment adopted in the past at different hospitals and medical care centers.

This provides the physician with a data-backed, evidence-based recommendation that can enhance the level of patient care. Here, cognitive computing will not replace the doctor; it will simply take over the tedious job of sifting through multiple data sources and processing them in a logical manner.

 


 

2 – Efficient processes 

Swiss Re is a great example of how a complex process can be made simpler by employing cognitive computing. According to officials, using cognitive computing helps them to identify and take action based on emerging patterns. It also helps them to spot opportunities and uncover issues in real time for faster and more effective response.

Its underwriting process for the Life and Health Reinsurance business unit was revolutionized when it used IBM Watson to analyze and process huge amounts of unstructured data around managing exposure to risk. This enabled them to purchase better quality risk and thus add to their business margins.

 

3 – Better level of customer interactions

Hilton partnered with IBM to enable better quality interactions and deliver a superior front-desk and hospitality experience to guests. The result is Connie, a Watson-enabled robot concierge. It can provide remarkably relevant, contextual, and accurate information on broad subjects around travel and hospitality, such as local tourist attractions, hotel amenities, and fine dining recommendations. Hilton is reimagining the entire travel experience with Connie, to make it smarter, easier, and more enjoyable for guests.

 

These advantages highlight the massive potential that cognitive computing possesses. Embracing it at an early stage will help you experiment and personalize the tremendous power of cognitive computing to deliver incredible gains to your business.

Ready to start building your next technology project?