Prerequisites For Learning Hadoop & Big Data

Learning Big Data The Right Way

 

Who should learn Hadoop? – Anybody with basic programming knowledge can learn Hadoop. Professionals from Business Intelligence (BI), SAP, Data Warehouse, ETL, or Mainframe backgrounds, or any other technology domain, can start learning Big Data with Hadoop.

 

Prerequisites For Learning Big Data & Hadoop

 

When discussing the prerequisites for Hadoop, we need to understand that Hadoop is a tool, and it does not impose any strict prerequisites or entry requirements. That is precisely why it is such a powerful and useful tool in today's data world: Hadoop has had so much impact because it is not fixed or restricted to a particular domain.

There is no strict prerequisite to start learning Hadoop. However, if you want to become an expert and build an excellent career, you should at least have basic knowledge of Java and Linux. Don't have any knowledge of Java and Linux? No worries. You can still learn Hadoop; the best approach is to learn Java and Linux in parallel. There is an added advantage to knowing Java and Linux, which we explain in the following points:

  • Some advanced features are only available through the Java API.
  • Knowing Java is beneficial if you want to go deep into Hadoop and learn more about the internals of a particular module.
  • A solid understanding of the Linux shell will help you understand the HDFS command line. Besides, Hadoop was originally built on Linux, and Linux remains the preferred OS for running Hadoop.


 

To completely understand and become proficient in Hadoop, there are some basics a developer needs to be familiar with. Familiarity with Linux systems is a must, and it is the skill most newcomers lack.

For Hadoop, it depends on which part of the stack you're talking about. You will certainly need to know how to use the GNU/Linux operating system. We would also highly recommend programming knowledge and proficiency in Java, Scala, or Python. Tools like Storm let you work in multiple languages, while Spark lends itself to Scala. Most components are written in Java, so there is a strong bias towards having good Java skills.

"Big Data" is not a thing but rather a description of a data management problem involving the 3 V's. Big data isn't something you learn; it's a problem you have.

More and more organizations will be adopting Hadoop and other big data stores, which will rapidly introduce new, innovative Hadoop solutions. Businesses will hire more big data analysts to provide better service to their customers and keep their competitive edge. This will open up mind-blowing opportunities for coders and data scientists. – "Jeff Catlin, CEO, Lexalytics"

So, we recommend the following to kick-start your career in Hadoop. 

  • Linux commands – for HDFS (Hadoop Distributed File System)
  • Java – for MapReduce
  • SQL – for databases
  • Python – for scripting (see the small word-count example below)
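
If you want a feel for how these pieces fit together, here is a minimal word-count sketch in Python that could be run with Hadoop Streaming; treat the file names and invocation details as illustrative, since they vary by installation.

# mapper.py - emit "word<TAB>1" for every word read from stdin
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print("%s\t%d" % (word, 1))

# reducer.py - sum the counts for each word (Hadoop Streaming sorts mapper output by key)
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print("%s\t%d" % (current_word, current_count))
        current_word, current_count = word, int(count)
if current_word is not None:
    print("%s\t%d" % (current_word, current_count))

Both scripts would be handed to the hadoop-streaming JAR via the -files, -mapper, -reducer, -input and -output options; the exact JAR path depends on your distribution.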

 

Go big with big data!

Java Vs Python

The Language Battle

 

Java and Python are two of the most popular and powerful programming languages of the present time. Beginner programmers are often confused about choosing the right one. Since we are a premier Java development firm, our opinion leans slightly towards Java.

Although, hey! We love python too.

 


 

Java VS Python: Key Differences

 

  • Braces vs Indentation
    • Python uses indentation to separate code into blocks. Java, like most other languages, uses curly braces to define the beginning and end of each function and class definition.
  • Dynamic vs Static Typing
    • Java forces you to define the type of a variable when you first declare it and will not allow you to change that type later in the program, while Python uses dynamic typing, which lets a variable change type (see the short example after this list).
  • Portability
    • Any computer or mobile device that can run the Java Virtual Machine can run a Java application, whereas to run Python programs you need an interpreter that can execute Python code on your particular operating system.
  • Ease of use
    • Python is an easier language for novice programmers to learn. You will progress faster learning Python as a first language than Java. However, the popularity of Java means that learning this powerful language is essential if you want your apps to run everywhere.
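
To make the typing difference concrete, here is a tiny, purely illustrative Python snippet; the commented Java lines show what static typing would reject at compile time.

# dynamic_typing_demo.py - illustrative only
x = 42            # x currently refers to an int
print(type(x))    # <class 'int'>

x = "forty-two"   # the same name may later refer to a str
print(type(x))    # <class 'str'>

# The Java equivalent would not compile:
#   int x = 42;
#   x = "forty-two";   // error: incompatible types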

 

Why People Choose Java

 

  1. The strong Java community

No matter how good a language is, it will not survive without a community to support it. Java has a strong community that is ready to help throughout your career, which is probably why Stack Overflow has such a large number of Java answers.

  2. Java is free

If a programmer wants to learn a new language, or an organization wants to adopt a technology, cost matters. Being free is a big part of why Java achieved so much popularity.

  3. Huge collection of open-source libraries

Java is backed by a large number of open-source libraries that help developers reduce both development time and lines of code.

  4. Powerful development tools

One can choose from the several development tools (IDEs) available for Java.

  5. Java is platform independent

The main reason for Java's popularity in the 1990s was the idea of platform independence. Its tagline "write once, run anywhere" attracted many developers to Java. Many Java applications are developed in a Windows environment and run on UNIX platforms.

  6. Java is object-oriented and even supports functional programming with Java 8

Developing object-oriented applications is much easier, and it also helps keep the system modular, flexible and extensible.

 

The Python Advantage 

 

1. Python requires no "set up." A full Python environment ships with virtually every Linux distribution and with macOS. On Linux, yum (the Yellowdog Updater, Modified) is written in Python, so Python is here to stay. Java, by contrast, requires a substantial amount of setup. If you want to get started with Python programming, just type python at the prompt. To start with Java, call someone who knows it.

2. The systems written in Java that we have purchased all suffer from the need to have particular versions of Java installed, and the thick clients of these systems share that requirement. Supporting Java appears to be expensive. We do not yet have a similar number of Python systems, but no one expects configuration management to be an issue with them. From an educational standpoint, that kind of version wrangling sounds like a good way to become frustrated.

3. Python has its own idiosyncrasies. In Java, every object must be a representation of some class, but in Python the "variables" are of a unique flavor. They do not represent objects [cf. object: something in memory that has an address], nor are they pointers, nor are they references. It is best to think of them as temporary "names" for an underlying reality, much like the Allegory of the Cave in Plato's Republic (see the short example after this list). From a learning standpoint, this may be more difficult for those of us with 35 years of experience than it is for those first taking up programming.

4. A number of companies are stuck with a great deal of legacy code written in Python 2. Consequently, Python suffers from a misconception about how strict or loose its typing system is, and how strictly it is enforced. Keep in mind that because Python mainly works with "names" of objects, we are not really discussing the same thing when we discuss the types of Python's objects as we are in other languages. Python does offer some rather seamless type conversions that can make the concept of types seem less strict than it actually is (again, see the example below). Learning Python 3 first makes sense, but most of the employment is still in Python 2.

5. Compared with Java, Python is terse. Personally, the growing arthritis in our hands welcomes this feature. In truth, my C++ code was frequently criticized for its overuse of operator overloading and the ternary operator. Terseness may not make for the best learning experience, because for many people learning comes more easily when the material is spelled out.
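
To make points 3 and 4 concrete, here is a small, purely illustrative snippet: Python variables are names bound to objects, and although the typing is dynamic, it is still strongly enforced.

# names_and_types.py - illustrative only
a = [1, 2, 3]
b = a                  # b is a second *name* for the same list object
b.append(4)
print(a)               # [1, 2, 3, 4] - both names see the same underlying object
print(a is b)          # True - same identity, not a copy

b = "something else"   # rebinding the name b does not touch the list named by a
print(a)               # [1, 2, 3, 4]

print(1 + 1.5)         # 2.5 - numeric types convert seamlessly...
try:
    print("1" + 1)     # ...but str + int raises TypeError: dynamic is not loose
except TypeError as err:
    print("TypeError:", err)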

 

Our Advice

 

If you absolutely have to choose only one of the two and are not from a computer science background, definitely Python; if you are from a computer science background, Java. If there is no restriction, choose both.

 

Why Virtual Reality Hasn’t Gone Big Yet?

Why is VR Still A Virtual Dream?

 

“This is a TV that you can strap to your face to make you think you’re somewhere else. Here, try it. I’ll hold your drink.”

This is what virtual reality promised us.

Some would say that it is yet to take off and that its progress has been really slow. We got to pondering why, and have come up with a few reasons that explain it.

 


 

1. Hardware limitations. Remember that VR devices have two screens, one per eye. Now add a monitor with a relatively high resolution, and you are effectively rendering to three screens. On top of that, you need a steady 60 fps or more at the very least. Add the inherent latency of input devices and controllers, and in some cases, like the Vive, the player's position tracking, and most computers will struggle.

2. As a developer you reduce your market size greatly, as now you need a computer with VR capable specs and a lot more power on top of it for the game itself.

3. There may be projects already working on this, but those will not be indie developers. These projects usually take 2+ years for completion. Given VR devices hit the consumer market about a year ago, there is still some time.

4. Notable exceptions are AAA games that were or are being ported to VR, such as Fallout 4 or Resident Evil 7, but these were games successful on their own, so the companies are not taking a major risk with the ports.

 

To summarise: “Photo-realistic VR experiences take a considerable amount of money to pull off. Firstly, you need a high enough resolution screen, and what we have is about 1/10th of the resolution we need to be at. Secondly, along with the higher-resolution screen, you need to be able to push a realistic graphics simulation to a headset at a consistent 90+ fps, otherwise you run the risk of ruining the immersion.

The hardware just isn’t there yet, as there isn’t a graphics card in existence today with the horsepower and driver software to pilot something as crazy as photo-realistic VR.”

The hardware requirements for a given level of graphics are much higher for VR than they are for a single monitor (or even a triple monitor). To put this in perspective, for VR you need to render roughly 5x as many raw pixels as a current console outputs at 1080p, you need to do additional post-processing on them when you're done, and you need to do it three times as fast (90+ vs 30 fps). So you're talking about (in an oversimplified way) 10 to 15x the hardware requirements for the same content versus a console game. And you can never have dropped frames or stutter (things like Time Warp mitigate the impact, but it's still critical). This right away means you won't be able to use the same poly counts and the same shaders, the secret sauce, that make modern games look amazing.

The total VR market right now is in the low single digit millions across all platforms (Vive, Rift, PSVR) and it’s tricky to do cross-platform yet. So you need to pay for your development with 1/50th or less the potential audience you have with a conventional game. That means you’re not going to have $100m budgets and the offices full of texture artists, shader makers, and modelers required to build the detailed worlds you see in big AAA titles. So if you want to make a game, you need to focus your limited budget on the kind of titles and content that you can execute well with the team your market size will support.

And that doesn’t tend to be the photo-realistic, precisely-rendered environments you can do with big budgets and lower pixel/frame rate requirements.

VR will arrive and make it big soon, just not now.

Give it some time.

3 Instances Where AI Outperformed Humans

AI Knows From A To Z

 


 

Target found out about a teenager's pregnancy before her parents did.

An angry father walks into a Target store in Minneapolis, demanding to talk to the manager:

“My daughter got this in the mail!” he said. “She’s still in high school, and you’re sending her coupons for baby clothes and cribs? Are you trying to encourage her to get pregnant?”

A few days later:

“I had a talk with my daughter,” he said. “It turns out there’s been some activities in my house I haven’t been completely aware of. She’s due in August. I owe you an apology.”

Target had a system that assigns each shopper a "pregnancy prediction" score based on the products they buy. The system could also estimate a shopper's due date to within a small window, so Target could send coupons timed to very specific stages of the pregnancy.

This happened in 2012 and it's hardly state-of-the-art "AI", but it goes to show that anything creepy a machine-learning model does is simply a product of how, and with what data, it is trained.
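
To underline how ordinary the underlying idea is, here is a toy, entirely made-up sketch of turning a purchase history into a propensity score; the product names and weights are invented for illustration and have nothing to do with Target's actual model.

# toy_propensity_score.py - a made-up illustration, not Target's system
# Each weight is a hypothetical "signal strength" for one product category.
WEIGHTS = {
    "unscented_lotion": 0.9,
    "prenatal_vitamins": 2.5,
    "large_tote_bag": 0.4,
    "cotton_balls": 0.7,
}

def pregnancy_score(basket):
    """Sum the weights of the signal products present in a purchase history."""
    return sum(WEIGHTS.get(item, 0.0) for item in basket)

history = ["unscented_lotion", "prenatal_vitamins", "cotton_balls", "bread"]
score = pregnancy_score(history)
print(score)                                          # 4.1
print("send coupons" if score > 3.0 else "do nothing")

In a real system such weights would be learned from purchase data rather than hand-picked, but the creepiness comes entirely from the data, not from any exotic algorithm.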

 

Programmer and CMU PhD Tom Murphy created a program to "beat" NES games by watching the score. The program would do things that increased the score and then learn how to reproduce them again and again, resulting in high scores. It came up with novel techniques and strategies for playing games and even exploited glitches humans didn't know about, or at least hadn't told it about. The program, described as a technique for automating NES games, can take on nearly every NES game. Nearly.

Tom made the program play Tetris. Most of us have played this game and, needless to say, it gets tricky after a certain point. The program struggled to figure out what to do. The choice of Tetris blocks is entirely random, so it's not surprising that the computer wasn't able to look far enough ahead to notice that stacking the blocks in certain ways made a big difference.

On one such run, when faced with imminent defeat, the computer did something eerie. Rather than lose, and receive a ‘game over’ message, it just paused the game. Forever.

Tom describes the computer's reasoning like this: "The only winning move is not to play." And that's right: if you pause a game forever, you will never lose it.
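
The underlying idea is almost embarrassingly simple. Here is a toy Python sketch of it; this is not Tom Murphy's actual system (which searches real emulator save-states), and simulate() is a hypothetical helper that plays a candidate input from the current state and reports the resulting score.

# toy_score_maximizer.py - a toy sketch of "do whatever raises the score"
def choose_move(state, moves, simulate, horizon=10):
    """Pick the input whose simulated future has the best score."""
    best_move, best_score = None, float("-inf")
    for move in moves:
        score, game_over = simulate(state, move, horizon)
        if game_over:
            continue                          # a future that ends the game is worthless
        if score > best_score:
            best_move, best_score = move, score
    # If every lookahead ends in "game over", the least bad option is to pause.
    return best_move if best_move is not None else "PAUSE"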

 

An artificial intelligence program developed by Elon Musk's team at OpenAI created a lot of buzz as well. Musk believes that the development of AI should be regulated and that AI safety should be a prime concern of every developer. To put weight behind this idea, he started the OpenAI project. The team used Dota 2 as a testbed for developing their AI.

Now, what's special is how they trained this bot. They didn't write any code about the rules of Dota 2 or any strategies that professional players use. They just gave it basic objectives (e.g. winning is good, losing is bad, taking damage is bad, dealing damage is good) and made the bot play against a copy of itself. In the beginning, the bot made very stupid decisions. But slowly it started to learn, devise its own strategies and make novel moves. It took the bot 2 hours to beat the existing Dota 2 bot and 2 weeks to reach the level of a professional Dota player!

Finally, OpenAI put its bot to the test against many of the world's top Dota 2 players in 1v1 matches, and it was easily able to defeat them. Then came The International 2017, one of the biggest eSports events in the world. Here, the OpenAI bot was tested against the player many consider the best Dota 2 player in the world: Danylo "Dendi" Ishutin. To everyone's surprise, it defeated Dendi in a solid 2–0 before Dendi gave up!

 

A Few More Worthy Mentions 

 

The blink-recognition software in Nikon's cameras kept asking "Did someone blink?" whenever Asian users posed in front of the camera; it perceived their eyes as closed.

Recently, a report announced that Facebook had to abandon an experiment after two AIs supposedly went off script and started interacting with each other in a language other than English, one that made the exchange more efficient for them. Below is what they said to each other.

Bob: i can i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i . . . . . . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i everything else . . . . . . . . . . . . . .

Alice: balls have 0 to me to me to me to me to me to me to me to me to

Bob: you i i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

"You i everything else . . ." It looks like Bob was devising a plan to kill everyone other than himself and Alice.

 

And that, ladies and gentlemen, is how Artificial Intelligence has evolved over the years.

Fun fact: it is always on the move. It is always evolving.

Augmented Reality Is The Future

AR – The Future Tech

 

Augmented reality (AR) is the integration of digital information with live video and the user's environment in real time. The devices used for augmented reality are generally a computer, a camera, a processor and a screen.

Reasons why Augmented Reality will be a future battleground

  • Recent launch of Apple's ARKit
  • Social Media Platforms Incorporating Augmented Reality
  • It will change the future of marketing
  • Increasing number of users embracing Augmented Reality
  • Various mobile Apps Utilizing AR

 

 

The most prominent example is in the healthcare industry, where you can find more and more professionals engaging with augmented reality to support their day-to-day tasks.

  • A doctor is able to view a patient’s medical history displayed over the latest medical scan, and even over the patient himself.
  • Healthcare practitioners are now able to project medical imagery onto patients' bodies using head-mounted displays. Projecting CT scans through the display can give doctors "X-ray vision" of patients and provide important contextual cues for diagnosis.
  • Patients are educated through simulation about their medical conditions (Cataract or AMD) using apps like Eye Decide.
  • Patients get reminders on taking drugs by wearing Google Glass and having an app installed on the device.
  • A nurse views a perfect highlighted image of the patient’s veins so the IV can be inserted in one painless attempt.

Some more facts and figures that prove that AR is the next big thing in tech are:

  • The dedicated augmented reality market is expected to reach $659.98 million by the end of 2018
  • According to Digi-Capital, AR/VR could hit $150 billion in revenue by 2020, with VR taking around $30 billion and AR $120 billion
  • By the end of 2017, sales of augmented reality smart glasses are expected to be worth $1.2 billion
  • According to ISACA, 60% to 70% of consumers see clear benefits in using AR and IoT devices in their daily life and at work
  • According to Forrester Research, 14.4 million U.S. enterprise workers are expected to be using smart glasses by 2025
  • According to Gartner, smart glasses will save nearly $1 billion per year in the field-service industry

 

AR is not limited to a particular sphere either. It can be utilized across all spheres of the market for branding & marketing purposes. 

 

  1. Construction, engineering and architecture – A holographic representation provides an unmatched level of real-world proportion, scale, form, and perspective compared to traditional ways of building models.
  2. Product configurator – The AR/MR apps are useful to product designers because they result in faster prototyping and 3D model visualization.
  3. Healthcare – With AR headsets, doctors and dentists can show their patients a 3D view of the organ or section of the mouth that they are going to operate on.
  4. Education – The main advantage is that 3D images and simulations can be created for students of all age groups. It is ideal for STEM education.
  5. Augmented field service – Companies can equip their field technicians with AR headsets and ensure that experienced engineers are present to guide technicians working in remote locations.
  6. Engaging advertising – Brands can incorporate AR elements in their advertisements and offer coupons to drive customer footfall into the store.
  7. Events – Event organizers and exhibitors are turning to Augmented Reality to increase interactivity at their events, which helps in attracting visitors.
  8. Product demonstrations – Augmented Reality apps can give your potential customers an accurate view of the product. Furniture stores, home decorators, fashion stores are ideally suited to take advantage of this technology.
  9. Interactive websites – Websites which use Augmented Reality have seen a decrease in the bounce rate of their visitors. The result is that sales conversions, downloads and even total page visits increase.
  10. AR-enhanced tours – A tourist walking through a historic place can be given information on his mobile phone, overlaid on the real-world images.

 

Augmented reality along with virtual reality is changing the world on a daily basis. The applications are unlimited and the possibilities are limited by our imagination only.

Will AI Overtake Human Creativity?

Virtual Intelligence Is Dangerous

 

"AI will be either the best or the worst thing ever to happen to humanity."

So said Stephen Hawking when asked for his opinion on Artificial Intelligence.

 

AI Versus Human Creativity

 

A few months earlier, the greatest South Korean Go player, Lee Sedol, was challenged by Google's artificial player AlphaGo. Go is considered to be the toughest game in the world: in chess we can choose our first move from 20 possibilities, while in Go the first move can be played in 361 different ways. After the first one or two moves, the game becomes more and more complicated.

Lee Sedol earned professional Go player status at 12 years of age. He went on to win 18 international championships and became a South Korean superstar at a young age.

The match was played in South Korea from 9 March to 14 March 2016. Sixty million viewers from China and 25 million from Japan watched it live. In South Korea there was a festive atmosphere, as people expected Lee to beat the bot. However, South Korean hearts broke when the results came out.

 

 

The famous Go star, 33-year-old Lee Sedol, lost 4-1 to AlphaGo!

South Korea mourned, but the match also brought forward the fear that human intelligence would slowly be overshadowed by AI.

However, the fact was that AlphaGo was simply calculating far ahead of its human counterpart. There was factually no creativity involved in its game of Go.

Largely, the past four decades of AI has focused on ever more sophisticated methods for solving ever more highly constrained problems (e.g. chess, Go, memorizing labeled data-sets like Imagenet, or constrained quiz tasks like Jeopardy).

The field has unfortunately entered a downward spiral where publications are often judged by how well a given method performs on a particular artificial dataset, compared to 20 past methods on the same dataset. This approach of relying on artificial datasets to measure progress can quickly stifle creativity, and I see rampant evidence of this decline at even the best ML/AI conferences, like NIPS or AAAI, where year after year the papers that are accepted are largely highly incremental advances on previous work.

Very novel ideas have little chance of success, because they are usually unable to “play the same game” of showing marginal improvement on MNIST, or Imagenet, or COCO, or one of the dozens of other artificial datasets. It is as if physicists judge their profession by seeing how fast a car they can build with the latest advances in quantum field theory.

Creativity is an ability closely tied to “imagination”. The emphasis in creativity and imagination is not problem-solving at the expert level, but rather “problem creation”, if you will. It is a way of stretching the boundaries of what is possible by being able to ask counterfactual questions. Einstein was a great believer in the power of imagination.

Imagination is what led him to develop the theory of relativity, because he could ask questions like "What would the world look like if I rode a beam of light?" Imagination, he said, "would get you anywhere", whereas "logic will only get you from A to B". It is hard to imagine how one can do world-class physics these days without a healthy dose of imagination. It is highly likely that this year's Nobel Prize in Physics will go to the leaders of the LIGO detectors, which detected Einstein's gravitational waves 100 years after they were predicted. The latest report of detection comes from two black holes that collided 1.8 billion light years away, releasing more energy in this one event than the energy released from all the stars in the observable universe. How can one even begin to understand the power of such events without using imagination, since they are so far removed from our everyday experience?

There is strong evidence that imagination is unique to humans, as it is strongly localized in the frontal lobe of the brain, a structure most developed in humans as compared to other animals. Humans with damage to the frontal lobe are largely normal, although they are strikingly "in the present" and unable to imagine the future. If you ask such a person what their plans are for the next week, they will understand the question but say that their mind is a complete blank when they try to think of the future. Imagination is largely tied to the processes that go on in the frontal lobe, and it is probably also the "seat of creativity".

 

Jean-Michel Basquiat's untitled painting of a human skull

 

Fundamental advances are needed to understand how imagination works, and it will take at least the better part of the next decade or two before we begin to develop effective methods. One of our favorite examples of creativity is art. Jean-Michel Basquiat's untitled painting of a human skull recently sold at a New York auction for over $100 million. It is a strikingly original piece of art, and the Brooklyn-born painter, who started out as a graffiti artist and died at 27, now commands prices similar to Van Gogh, Picasso, and Monet.

Will AI ever be able to produce great art of this caliber?

Perhaps, that day we should be bothered about the future of AI.

 

Is AI Going To Fade Like Nanotechnology?

Is AI Overhyped Like NanoTech?

 

Nanotechnology was once so hyped that we cannot help comparing it with what is happening with AI now. There are many things that nanotech can genuinely do, but between 2000 and 2005 companies were renaming projects to "nano" just to get funding.

E.g. nano face wash, nano *insert a product here*.

There is an explanation for this. It can be understood using the hype cycle, which follows Amara's Law, a computing adage that states:

“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

The General Curve is as shown below:

 

According to the Gartner Hype Cycle for Artificial Intelligence 2017, AI is at the Peak of Inflated Expectations, so we can now expect negative publicity marking the stage of the Trough of Disillusionment.

 

 

Artificial intelligence began in 1950, when the English mathematician Alan Turing published a paper entitled "Computing Machinery and Intelligence". But the technology trigger happened only over the last decade. We are in the stage of mass-media hype where "Data Science is the way to go". The expectations are tremendous, and we are talking about robots being given citizenship, which is good in some sense and scary as well. Andrew Ng very recently gave a talk on how we have to move more talent to where it is most needed by training them. This shows how we are moving forward with AI technology.

  1. We need data scientists with skills. Data Science is no longer a skill; it is a way.
  2. Data science came about long, long ago, when we first started to draw relations between different things. Now it has been recognized as a separate entity, because computer science boils down to applied mathematics, which boils down to functions.
  3. Data Science is indeed very promising, and a lot of funding is given to those who do it. (The pay at Goldman Sachs should say it all; it is approximately $104,578-$114,768.)
  4. But for something to become successful, one needs to wait for experiments to happen and results to come out. That is not the case today: we talk data almost every day, and we are busy doing shoddy work to get results out quickly. This is not good, and it is the prime reason why we are entering the phase of disillusionment.

 

Comparison with Nanotechnology

 

With nanotechnology, the hype index shot very high and peaked in mid-2002. It was the data science equivalent back then; you'd want to work in it. The news was full of carbon nanotubes and how the future was going to change. News articles at the time went on and on about the miraculous properties of nanomaterials. But do we talk about it today? We read about it sometimes in the newspapers. That is it.

Nanotech in mid-2002 was promising and the career prospects were great. But with time it became clear that it could not live up to its hype. It all comes down to time, doesn't it?

In 2005, we had debates about whether nano was a boon or a bane.

As early as 2008, we got books on the nanotech hype, such as "Nano-Hype: The Truth Behind the Nanotechnology Buzz".

In 2017, we hardly hear about it, yet some real work is going on. Nanotech is now in the Plateau of Productivity. Lithium-ion batteries and focused startups (MIT's 30 under 30 has plenty of people working on nanotech, not just data science) are doing better work now, but ironically they lack funding, because the hype today is data science and investors run towards the hype. Nobody can help this.

 

Comparison with CFC Discovery 

 

When CFC was first invented and its refrigeration properties were identified in 1928 by Thomas Midgley, he was in search of non-toxic alternatives to the existing refrigerants of the time, namely ammonia and sulfur dioxide. It caught the media's attention, and every single refrigerator used it, until it was found in the 1970s that it destroyed the ozone layer. For decades no one knew the detrimental effects it had on the environment. Funnily enough, it has appeared in TIME magazine's list of the worst inventions. Now it has been banned, and we are trying to solve the problem created by the previous solution.

From the above analysis, few points are to be noted:

  1. We tend to provide solutions to problems which end up producing further problems, and then we end up cleaning up the mess. We seem to be caught in this cycle.
  2. In every single case, whether or not the hype led to productive output, it brought money. From the above, one can infer that "research goes where money flows", and not the other way round. That's life.
  3. Data science has been carried out since the beginning of time; it was just named physics, chemistry, maths, biology and so on. It was the interpretation of data and the science behind it, so they named it appropriately.
  4. In today’s exciting world, we want to do anything with data which was not thought of before. Hence, Data Science.
  5. Data Science is a way and not a skill. Mechanical Engineering is a skill. People who understand this will win.

 

Prominent people use and back it: Balaji Viswanathan, CEO of Invento, uses ML for his bots; Andrew Ng sees the need to teach it; Adam D'Angelo believes in it. The other CS giants know it. And I, a mechanical engineering student, am contemplating all of this and trying to make sense of it.

The future looks good, but this too shall pass. We are going to create solutions, create a mess, clean it up, create another mess, and the cycle will repeat.

 

4 Mistakes To Avoid When Using Redis

Red Is Incredible

 

Redis is an in-memory key-value datastore written in ANSI C by Salvatore Sanfilippo. Redis not only supports the string datatype, it also supports the list, set, sorted set and hash datatypes, and it provides a rich set of operations to work with these types. If you have worked with Memcached, an in-memory object caching system, you will find Redis very similar, but Redis is Memcached++. Redis not only supports rich datatypes, it also supports data replication and can persist data to disk. The key advantages of Redis are:

 

  1. Exceptionally fast: Redis is very fast and can perform about 110,000 SETs per second and about 81,000 GETs per second. You can use the redis-benchmark utility to measure this on your own machine.
  2. Rich data types: Redis natively supports most of the datatypes developers already know, such as list, set, sorted set and hash. This makes it very easy to solve a variety of problems, because we know which problem is handled better by which data type.
  3. Atomic operations: All Redis operations are atomic, which ensures that two clients accessing the server concurrently will always see a consistent, updated value.
  4. Multi-utility tool: Redis can be used in a number of use cases such as caching, messaging queues (Redis natively supports publish/subscribe), and any short-lived data in your application, such as web application sessions and web page hit counts. A list of who is using Redis can be found on the official website.

 

 

Here are a few things we suggest thinking about when you are utilising the superpowers of Redis.

  • Choose consistent ways to name and prefix your keys. Manage your namespace.
  • Create a "registry" of key prefixes which maps each prefix to the internal documentation of the application that "owns" it.
  • For every class of data you put into your Redis infrastructure, design, implement and test the mechanisms for garbage collection and/or data migration to archival storage.
  • Design, implement and test a sharding (consistent hashing) library before you've invested much in your application deployment, and ensure that you keep a registry of "shards" replicated on each server.

 

Let us explain each of these points in brief.

 

You should assume, from the outset, that your Redis infrastructure will be a common resource used by a number of applications or separate modules. You can have multiple databases on each server, numbered 0 through 15 by default, though you can increase this in the server configuration. However, it's best to assume that you'll need to use key prefixes to avoid collisions among the various applications and modules.

 

Consistent key prefixing & Managing your namespace:

Your applications/modules should provide the flexibility to change these key prefixes dynamically.  Be sure that all keys are synthesized from the application/module prefix concatenated with the key that you’re manipulating; make hard-coding of key strings verboten.
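
As a rough illustration, here is one way to keep key construction in a single place with redis-py; the "myapp" prefix and the key layout are just examples.

# key_prefixing.py - build every key from a configurable module prefix
import redis

class PrefixedRedis:
    """Thin wrapper so no caller ever hard-codes a raw key string."""

    def __init__(self, client, prefix):
        self.client = client
        self.prefix = prefix

    def key(self, *parts):
        # key("user", 42, "name") -> "myapp:user:42:name"
        return ":".join([self.prefix] + [str(p) for p in parts])

    def set(self, value, *parts):
        return self.client.set(self.key(*parts), value)

    def get(self, *parts):
        return self.client.get(self.key(*parts))

store = PrefixedRedis(redis.Redis(host="localhost", port=6379, db=0), "myapp")
store.set("alice", "user", 42, "name")
print(store.get("user", 42, "name"))   # b'alice'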

 

Registry: Document and Track your namespace

We suggest that you treat certain key patterns (prefixes or glob patterns) as "reserved" on your Redis servers. For example, you can have __key_registry__ (similar to Python's reserved method/attribute names) as a hash mapping key prefixes to URLs in your wiki, Trac, or whatever internal documentation site you use. That way you can perform housekeeping on your database contents and track down who or what is responsible for every key you find in any database. Institute a policy that any key which doesn't match a pattern in your registry can and will be summarily removed by your automated housekeeping.
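
A minimal sketch of such a registry with redis-py might look like the following; the patterns and wiki URLs are placeholders for your own documentation.

# key_registry.py - register prefixes, then flag keys that match none of them
import fnmatch
import redis

r = redis.Redis()

# Map each reserved pattern to the internal doc page that "owns" it (URLs are examples).
r.hset("__key_registry__", "myapp:session:*", "https://wiki.example.com/myapp-sessions")
r.hset("__key_registry__", "reports:*", "https://wiki.example.com/reports")

patterns = [p.decode() for p in r.hkeys("__key_registry__")] + ["__key_registry__"]

for key in r.scan_iter(count=1000):
    name = key.decode()
    if not any(fnmatch.fnmatch(name, pat) for pat in patterns):
        print("unregistered key, candidate for cleanup:", name)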

 

Garbage Collection: 

In a persistent, shared key/value store, and in Redis in particular, the collection of garbage is probably the single biggest maintenance issue.

So you need to consider how you’re going to select the data that needs to be migrated out of Redis perhaps into your SQL/RDBMS or into some other form of archival storage, and how you’re going to track and purge data which is out-of-date or useless. 

The obvious approaches involve the EXPIRE and EXPIREAT commands. These let Redis manage garbage collection for you, either relative to your last manipulation of a given key or at an absolute time. The only trick with Redis expiration is that you must reset it every single time you touch the key.
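
For example, with redis-py (the key names are illustrative):

# expiry_demo.py - letting Redis do the garbage collection
import time
import redis

r = redis.Redis()

# Relative TTL: the session vanishes 1800 seconds after the *last* write,
# so the TTL has to be refreshed every time the key is touched.
r.setex("myapp:session:42", 1800, "session-payload")

# Absolute deadline: expire at a fixed unix timestamp instead.
r.set("myapp:report:2017-10", "cached-report")
r.expireat("myapp:report:2017-10", int(time.time()) + 86400)

print(r.ttl("myapp:session:42"))   # seconds remaining, e.g. 1800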

 

Sharding: 

Redis doesn't provide sharding. You should probably assume that you'll grow beyond the capacity of a single Redis server. Slaves are for redundancy, not for scaling, though you can offload some read-only operations to slaves if you have some way to manage data consistency; for example, the ZSET of key/timestamp values described for expiry can also be used for some offline bulk-processing operations, and the pub/sub features can be used by the master to provide hints about the quiescence of selected keys/data.

 

So you should consider writing your own abstraction layer to provide sharding. Basically, imagine that you have implemented a consistent-hashing method and you run every synthesized key through it before you use it. While you only have a single Redis server, the hash-to-server mapping always ends up pointing to your only server. Later, if you need to add more servers, you can adjust the mapping so that half or a third of your keys resolve to the other servers. Of course, you'll want to implement this so that a failure on a primary server causes your library/service module to automatically retry on the secondary and possibly a tertiary server. Depending on your application, you might even have the tertiary attempts fetch certain types of data from another data source entirely.
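
A minimal consistent-hashing layer along those lines might look like this sketch (the server addresses are placeholders, and a production version would add the retry/failover logic described above):

# sharding_sketch.py - route each key to one of several Redis servers
import bisect
import hashlib
import redis

class ShardedRedis:
    """Consistent-hashing ring over a handful of Redis clients."""

    def __init__(self, nodes, vnodes=64):
        self.clients = {name: redis.Redis(**conf) for name, conf in nodes.items()}
        # Place vnodes points per server on the ring for a smoother key distribution.
        self.ring = sorted(
            (self._hash("%s#%d" % (name, i)), name)
            for name in nodes
            for i in range(vnodes)
        )
        self.points = [h for h, _ in self.ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def client_for(self, key):
        idx = bisect.bisect(self.points, self._hash(key)) % len(self.ring)
        return self.clients[self.ring[idx][1]]

    def set(self, key, value):
        return self.client_for(key).set(key, value)

    def get(self, key):
        return self.client_for(key).get(key)

shards = ShardedRedis({
    "shard-a": {"host": "10.0.0.1", "port": 6379},
    "shard-b": {"host": "10.0.0.2", "port": 6379},
})
shards.set("myapp:user:42:name", "alice")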

 

4 Amazing Messenger Bots

Bots To Look Out For

 

2017 may still be young, but we're already seeing some especially encouraging names in the world of bots. Since bots first debuted on Facebook Messenger a year ago, developers have been turning out a large number of the little folks. What's more, it's hard not to notice some of the more creative bots out there.

We already know that bots hold incredible potential for generating leads. Here are some of the developers that are doing it right.

The 4 best Facebook Messenger bots of 2017 so far.

 

 

WTF is That 

 


 

Watch out for this bot. As it develops, it's turning out to be a particularly helpful little tool. It can help identify things from just a photograph, from bugs to peculiar food items. The algorithms behind it are still a long way from perfect, but it's already proving to be an icebreaker and a handy convenience all in one.

 

Duolingo 

 


 

It was only a matter of time before a language-learning app joined the bot scene. By letting users chat with friendly, supportive bots, Duolingo makes it easy to practice writing and communicating in another language. The conversations are limited, but they're a great way to remind yourself of important concepts and vocabulary. In addition, there's an assortment of personalities to chat with, which makes language learning fun.

 

MeditateBot 

 


 

Staying calm has never been so simple. Exercise-related bots seem like a natural evolution of the entire bot concept and MeditateBot is no different. The bot, developed by the team behind the Calm app, guides users through flexible meditation exercises and allows users to set daily reminders to get into a regular meditation habit.

 

Poncho 

 


 

Weather apps are certainly nothing new, but Poncho accomplishes something that many of those older apps could not: it gives you a quick weather report as well as a daily personalized forecast. With other clever features, including a detailed pollen count and daily running forecasts that predict whether the next day will be bright or not, there's a lot to love. Additionally, Poncho is a friendly little character who shares jokes and helpful tips. Who said bots can't be adorable?

The Difference Between AI & ML

Machine Intelligence Or Artificial Learning

 

AI stands for artificial intelligence, where intelligence is defined as the ability to acquire and apply knowledge.

ML stands for machine learning where learning is defined as the acquisition of knowledge or skills through experience, study, or by being taught.

Imagine we want to create artificial ants that can crawl around in two-dimensional space. However, there are dangers in this world: if an ant encounters a poisonous area, it will die. If there is no poison in the ant's proximity, the ant will live.

 

The Difference Between Artificial Intelligence And Machine Learning

 

How can we teach the ants to avoid poisonous areas, so that they can live as long as they wish? Let's give our ants a simple instruction set that they can follow: they can move freely in two-dimensional space, one unit at a time. Our first attempt is to let the ants crawl around by generating random instructions.

Then we tweak these ants and let them crawl around the world again. We repeat this until the ants successfully avoid the poisonous areas in the world. This is, in broad strokes, the machine-learning way to approach the problem: we make the ants fit the configuration by applying some arbitrary rule. It works because in each iteration we prune away the set of non-fitting ants, so eventually we are pushed towards more fitting ants.

Now, what if we change the location of the poisonous areas? What do you think will happen? The ants would face a huge crisis, because they could no longer survive in the world; they simply would not know where the poisonous areas are and therefore would not be able to avoid them. But why does this happen, and can we improve on it?

Could the ants somehow know where the areas are and adapt their behavior to make themselves more successful? This is where artificial intelligence comes into play. We need a way to give the ants this information, to give them knowledge of the environment. Our ants need a way to sense the world; until now, they have been living in complete darkness, without any way to perceive the world around them. For example, we can let ants leave a short trail which other ants can sense. Then we make the ants follow this trail, and if they cannot sense a trail, they just crawl around randomly.

Now, if there are multiple ants, most of them will hit the poisonous areas and die. But the ants that don't die are, by definition, crawling in non-poisonous areas, and they leave a trail! Other ants can follow this trail blindly and know that they will live. This works because the ants can receive some information about their surroundings. They can't perceive the poisonous areas (they don't even know what poison is), but they can avoid them even in completely new environments without any special learning.
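
Here is a toy Python sketch of that trail-following rule, just to make the mechanism concrete; the grid, the poison layout and the numbers are all invented for illustration.

# ant_sketch.py - toy version of the trail-following rule described above
import random

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # one unit at a time in 2-D space

def step(ant, trails, poison):
    """Move one ant: follow a sensed trail if possible, otherwise wander randomly."""
    x, y = ant
    options = [(x + dx, y + dy) for dx, dy in MOVES]
    marked = [p for p in options if p in trails]   # cells other ants survived on
    new_pos = random.choice(marked or options)
    if new_pos in poison:
        return None                                # this ant dies
    trails.add(new_pos)                            # leave a trail for the others
    return new_pos

trails, poison = set(), {(2, 0), (2, 1), (2, -1)}
ants = [(0, 0)] * 20
for _ in range(50):
    ants = [p for p in (step(a, trails, poison) for a in ants) if p is not None]
print(len(ants), "ants still alive")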

 

These two approaches are quite different.

  • The machine-learning way tries to find a pattern which the ants can follow and succeed with, but it doesn't give the ants a chance to make local decisions.
  • The artificial-intelligence way is to let the ants make local decisions so that they are successful as a whole. In nature, we can find many parallels to this kind of approach to solving problems.

 

Artificial Intelligence — Human Intelligence Exhibited by Machines

Machine Learning — An Approach to Achieve Artificial Intelligence

 

AI can refer to anything from a computer program playing a game of chess, to a voice-recognition system like Amazon’s Alexa interpreting and responding to speech. The technology can broadly be categorized into three groups: Narrow AI, artificial general intelligence (AGI), and super intelligent AI.

IBM's Deep Blue, which beat chess grandmaster Garry Kasparov in 1997, and Google DeepMind's AlphaGo, which beat Lee Sedol at Go in 2016, are examples of narrow AI: AI that is skilled at one specific task. This is different from artificial general intelligence (AGI), which is AI that is considered human-level and can perform a range of tasks.

Superintelligent AI takes things a step further. As Nick Bostrom describes it, this is “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” In other words, it’s when the machines have outsmarted us.

Machine learning is a sub-field of AI. The core principle is that machines take data and "learn" for themselves. It's currently the most promising tool in the AI kit for businesses. ML systems can quickly apply knowledge and training from large data sets to excel at facial recognition, speech recognition, object recognition, translation, and many other tasks. Unlike hand-coding a software program with specific instructions to complete a task, ML allows a system to learn to recognize patterns on its own and make predictions.
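
A minimal scikit-learn sketch shows the shift in mindset: the rule is fitted from examples rather than written by hand (the data here is a made-up toy set).

# ml_vs_rules.py - the system learns the rule from examples
from sklearn.linear_model import LogisticRegression

# Toy data: [hours_studied, hours_slept] -> passed the exam (1) or not (0)
X = [[1, 4], [2, 8], [6, 7], [8, 6], [3, 5], [9, 8]]
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)                        # "learning" = fitting parameters to the data

print(model.predict([[7, 7]]))         # e.g. [1] - predicted from the learned pattern
print(model.predict_proba([[7, 7]]))   # class probabilities, no hand-written rule anywhere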

While Deep Blue and DeepMind's AlphaGo are both types of AI, Deep Blue was rule-based and dependent on programming, so it was not a form of ML.

AlphaGo, on the other hand, was.

So, essentially, there is a big difference between these two, yet they depend on each other.

Do you want to build a product with AI and ML? Then just drop in a quick message with your requirements!


 

Ready to start building your next technology project?