Thursday, August 30, 2018

Face Recognition




  • Facial recognition is a biometric, artificial-intelligence-based application that can uniquely identify a person by analysing patterns based on the person's facial textures and shape. 
  • While initially a form of computer application, it has seen wider uses in recent times on mobile platforms and in other forms of technology, such as robotics.
  • It is typically used in security systems and can be compared to other biometrics such as fingerprint or eye iris recognition systems. Recently, it has also become popular as a commercial identification and marketing tool.



  • A facial recognition system is a technology capable of identifying or verifying a person from a digital image or a video frame from a video source.

  • There are multiple methods by which facial recognition systems work, but in general, they work by comparing selected facial features from a given image with faces within a database.
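
To make that comparison step concrete, here is a minimal Python sketch (using NumPy) of matching a query face against a small database by nearest-neighbour distance. The three-number "faceprints", the names and the 0.6 threshold are invented placeholders; a real system would use embeddings produced by a trained face-encoding model:

```python
import numpy as np

# Minimal sketch: identify a face by comparing its embedding (a numeric
# feature vector from some face-encoding model) against a database of
# known embeddings. Names and vectors here are made up for illustration.
def identify(query_embedding, database, threshold=0.6):
    best_name, best_dist = None, float("inf")
    for name, known_embedding in database.items():
        dist = np.linalg.norm(query_embedding - known_embedding)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    # Only accept the match if it is close enough, otherwise report "unknown".
    return best_name if best_dist < threshold else "unknown"

database = {
    "alice": np.array([0.11, 0.52, 0.33]),
    "bob":   np.array([0.91, 0.08, 0.46]),
}
print(identify(np.array([0.12, 0.50, 0.35]), database))  # -> "alice"
```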


Facial Recognition Applications

Based on our assessment of the applications in the field today, the majority of facial recognition use-cases appear to fall into three major categories:

  • Security: Companies are training deep learning algorithms to detect fraud, reduce the need for traditional passwords, and improve the ability to distinguish between a human face and a photograph.
  • Healthcare: Machine learning is being combined with computer vision to more accurately track patient medication consumption and support pain management procedures.
  • Marketing: Fraught with ethical considerations, marketing is a burgeoning domain of facial recognition innovation, and it’s one we can expect to see more of as facial recognition becomes ubiquitous.

Uses of facial recognition technology:


  •  A research team at Carnegie Mellon has developed a proof-of-concept iPhone app that can take a picture of an individual and -- within seconds -- return the individual's name, date of birth and social security number.
  • The Google Arts & Culture app uses facial recognition to identify museum doppelgangers by matching a real person's faceprint with a portrait's faceprint.
  • Professor Shen Hao of the Communications University of China uses facial recognition technology to track students’ attendance.
  • Amazon, MasterCard and Alibaba have rolled out facial recognition payment methods, commonly referred to as selfie pay.

          STAY UPDATED FOR MORE POSTS!
          SHARE IT WITH YOUR FRIENDS AND DON'T FORGET TO COMMENT BELOW!↓

Monday, August 27, 2018

Virtual Agents



  • The term virtual agent (or chatbot) describes a program based on artificial intelligence (AI) that provides automated customer service. 
  • Virtual agent can also refer to a human customer service agent who works remotely from the employer's location.
  • Virtual agent software has improved over the past five years with advances in AI and cognitive computing programs.
  • Virtual agents are designed to provide customer services, product information, marketing, support, sales, order placing, reservations or other custom services.
  • Virtual Agents are powered by a knowledge base, which includes an extensive list of possible questions, responses and gestures, allowing the bot to react and respond to human input in a relatively human way (a minimal sketch of this idea follows below).
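
As a rough illustration of how a knowledge base can drive responses, here is a minimal Python sketch of a keyword-matching bot. The questions, answers and keyword lists are invented; a production virtual agent would rely on natural language processing and machine learning rather than literal keyword matching:

```python
# Minimal sketch of a knowledge-base-driven virtual agent: the bot matches
# keywords in the customer's message against a hand-written knowledge base
# and returns the associated canned response.
KNOWLEDGE_BASE = {
    ("price", "cost", "how much"): "Our basic plan starts at $10 per month.",
    ("hours", "open"):             "Support is available 24/7.",
    ("order", "status"):           "You can track your order from the My Orders page.",
}

def respond(message: str) -> str:
    text = message.lower()
    for keywords, answer in KNOWLEDGE_BASE.items():
        if any(keyword in text for keyword in keywords):
            return answer
    return "Sorry, I didn't understand that. Let me connect you to a human agent."

print(respond("How much does it cost?"))   # -> pricing answer
print(respond("What are your hours?"))     # -> support-hours answer
```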



  • This is enormously useful for sales and marketing teams, as they typically only focus on leads deemed "high quality." With a virtual agent, all leads can be followed up on, which could result in higher sales. In addition, virtual agents cost significantly less than human employees.
  • An intelligent virtual agent serves as a company representative and is built around a specific task, such as answering customer questions on a website's homepage.
  • Companies interested in adopting virtual agent software through a cloud service provider or software vendor must invest time and resources into "training" the virtual agent.


  • Virtual agents are based on machine learning technology, which improves over time as the system ingests more data and "learns" through continued use.
  • There are a number of cloud-based virtual agent platforms that are pretrained for customer service tasks. These programs require no coding or machine learning knowledge; instead, users configure the virtual agent to suit their business needs and branding.


          STAY UPDATED FOR MORE POSTS!
          SHARE IT WITH YOUR FRIENDS AND DON'T FORGET TO COMMENT BELOW!↓








Wednesday, August 22, 2018

AI Medical Field

  • Artificial intelligence (AI) in healthcare is the use of algorithms and software to approximate human cognition in the analysis of complex medical data. Specifically, AI is the ability for computer algorithms to approximate conclusions without direct human input.
  • The primary aim of health-related AI applications is to analyze relationships between prevention or treatment techniques and patient outcomes. AI programs have been developed and applied to practices such as diagnosis processes, treatment protocol development, drug development, personalized medicine, and patient monitoring and care. 
  • Medical institutions such as The Mayo Clinic, Memorial Sloan Kettering Cancer Center, Massachusetts General Hospital, and the National Health Service have developed AI algorithms for their departments. Large technology companies such as IBM and Google, and startups such as Welltok and Ayasdi, have also developed AI algorithms for healthcare.


  • Artificial intelligence in medicine may be characterized as the scientific discipline pertaining to research studies, projects, and applications that aim at supporting decision-based medical tasks through knowledge- and data-intensive computer-based solutions that ultimately support and improve the performance of a human care provider.
  • The purpose of this special issue is to demonstrate the potential of several intelligent approaches exploited in medical informatics technologies and applications.
  • While research on the use of AI in healthcare aims to validate its efficacy in improving patient outcomes before its broader adoption, its use may nonetheless introduce several new types of risk to patients and healthcare providers, such as algorithmic bias, "do not resuscitate" implications, and other machine morality issues. These challenges of the clinical use of AI have brought about a potential need for regulation.

Medical and technological advancements occurring over this half-century period that have simultaneously enabled the growth of healthcare-related applications of AI include:

  • Improvements in computing power resulting in faster data collection and data processing.
  • Increased volume and availability of health-related data from personal and healthcare-related devices.
  • Growth of genomic sequencing databases.
  • Widespread implementation of electronic health record systems.
  • Improvements in natural language processing and computer vision, enabling machines to replicate human perceptual processes.
  • Enhanced precision of robot-assisted surgery.

        
          STAY UPDATED FOR MORE POSTS!
          SHARE IT WITH YOUR FRIENDS AND DON'T FORGET TO COMMENT BELOW!↓





Sunday, August 19, 2018

CrowdSourcing

  • Crowdsourcing is a sourcing model in which individuals or organizations obtain goods and services, including ideas and finances, from a large, relatively open and often rapidly evolving group of internet users; it divides work between participants to achieve a cumulative result.
  • This phenomenon can provide organizations with access to new ideas and solutions, deeper consumer engagement, opportunities for co-creation, optimization of tasks, and reduced costs. The Internet and social media have brought organizations closer to their stakeholders, laying the groundwork for new ways of collaborating and creating value together, and the approach is being embraced across a growing range of industries.
  • There are major differences between crowdsourcing and outsourcing. Crowdsourcing comes from a less-specific, more public group, whereas outsourcing is commissioned from a specific, named group, and includes a mix of bottom-up and top-down processes. Advantages of using crowdsourcing may include improved costs, speed, quality, flexibility, scalability, or diversity.




  • Currently, crowdsourcing has transferred mainly to the Internet, which provides a particularly beneficial venue for crowdsourcing since individuals tend to be more open in web-based projects where they are not being physically judged or scrutinized, and thus can feel more comfortable sharing.
  • This approach ultimately allows for well-designed artistic projects because individuals are less self-conscious, or maybe even less aware, of scrutiny towards their work. In an online atmosphere, more attention can be given to the specific needs of a project, rather than spending as much time in communication with other individuals.



  • Crowdsourcing touches across all social and business interactions. It is changing the way we work, hire, research, make and market. Governments are applying crowdsourcing to empower citizens and give a greater voice to the people. 
  • In science and health care, crowdsourcing can democratize problem solving and accelerate innovation. With education, it has the potential to revolutionize the system, just as crowdfunding is currently challenging traditional banking and investing processes. It’s a 21st-century mindset and approach that can be applied in many areas and many ways.
          STAY UPDATED FOR MORE POSTS!
          SHARE IT WITH YOUR FRIENDS AND DON'T FORGET TO COMMENT BELOW!↓



Thursday, August 16, 2018

Data Mining



  • Data mining is the process of discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems.

  • Data mining is an interdisciplinary subfield of computer science with an overall goal to extract information (with intelligent methods) from a data set and transform the information into a comprehensible structure for further use.

  • Data mining is the analysis step of the "knowledge discovery in databases" process. Aside from the raw analysis step, it also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating.

The major steps involved in a data mining process are (a minimal sketch of the first step follows the list):


  • Extract, transform and load data into a data warehouse.
  • Store and manage data in a multidimensional database.
  • Provide data access to business analysts using application software.
  • Present analyzed data in easily understandable forms, such as graphs.
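
As a minimal illustration of the extract-transform-load step above, the following Python sketch cleans a couple of invented sales records and loads them into an in-memory SQLite table standing in for a data warehouse (the table name, columns and figures are made up for illustration):

```python
import sqlite3

# Minimal sketch of "extract, transform and load": raw transactional records
# are cleaned (transformed) and loaded into a small warehouse table.
raw_sales = [("2018-08-01", " Widget ", "199.0"), ("2018-08-02", "Gadget", "49.5")]

conn = sqlite3.connect(":memory:")            # stand-in for a real data warehouse
conn.execute("CREATE TABLE sales (day TEXT, product TEXT, amount REAL)")

# Transform: trim whitespace, normalise case, convert amounts to numbers.
clean = [(day, product.strip().lower(), float(amount)) for day, product, amount in raw_sales]

conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", clean)  # Load
conn.commit()

# Business analysts can now query the warehouse, e.g. total revenue per product.
for row in conn.execute("SELECT product, SUM(amount) FROM sales GROUP BY product"):
    print(row)
```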


  • The first step in data mining is gathering relevant data critical for business. Company data is either transactional, non-operational or metadata. Transactional data deals with day-to-day operations such as sales, inventory and costs. Non-operational data is normally forecast data, while metadata is concerned with logical database design.



  • Patterns and relationships among data elements render relevant information, which may increase organizational revenue. Organizations with a strong consumer focus employ data mining techniques that provide a clear picture of products sold, prices, competition and customer demographics.
  • The second step in data mining is selecting a suitable algorithm - a mechanism producing a data mining model. The general working of the algorithm involves identifying trends in a set of data and using the output for parameter definition.
  • The most popular algorithms used for data mining are classification algorithms and regression algorithms, which are used to identify relationships among data elements. 
  • Major database vendors like Oracle and Microsoft (SQL Server) incorporate data mining algorithms, such as clustering and regression trees, to meet the demand for data mining (a toy regression fit is sketched below).
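
To illustrate the kind of relationship a regression algorithm uncovers, here is a toy Python/NumPy sketch that fits a straight line to invented spend-versus-sales figures:

```python
import numpy as np

# Minimal sketch of a regression-style mining step: fit a straight line to
# historical data to expose the relationship between two data elements
# (here, advertising spend vs. sales -- invented numbers for illustration).
spend = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sales = np.array([2.1, 4.1, 6.2, 7.9, 10.1])

slope, intercept = np.polyfit(spend, sales, 1)   # least-squares fit, degree 1
print(f"sales = {slope:.2f} * spend + {intercept:.2f}")

# The fitted model can then be used to predict outcomes for new inputs.
print("predicted sales at spend=6:", slope * 6 + intercept)
```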

         STAY UPDATED FOR MORE POSTS!
           SHARE IT WITH YOUR FRIENDS AND DON'T FORGET TO COMMENT BELOW!↓




Monday, August 13, 2018

Computer Vision




  • Computer vision is concerned with modeling and replicating human vision using computer software and hardware. Formally, computer vision is a discipline that studies how to reconstruct, interpret and understand a 3D scene from its 2D images in terms of the properties of the structures present in the scene.
  • Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the forms of decisions.
  • Understanding in this context means the transformation of visual images (the input of the retina) into descriptions of the world that can interface with other thought processes and elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory.


  • Computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner. As a technological discipline, computer vision seeks to apply its theories and models for the construction of computer vision systems.
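
As a minimal, illustrative example of turning image data into symbolic information, the following Python/NumPy sketch finds the edges of a bright square in a tiny synthetic image and reports how many edge pixels it contains; the image, threshold and feature choice are invented for illustration only:

```python
import numpy as np

# Minimal sketch of "extracting information from an image": a tiny grayscale
# image (an 8x8 array with a bright square) is converted into symbolic
# information -- how many edge pixels it contains -- using simple gradients.
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0                      # a bright square on a dark background

gy, gx = np.gradient(image)                # intensity changes in y and x
edge_strength = np.hypot(gx, gy)           # gradient magnitude per pixel
edges = edge_strength > 0.4                # threshold into an edge map

print("edge pixels found:", int(edges.sum()))
```
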
Two important specifications in any vision system are the sensitivity and the resolution:

  • Sensitivity is the ability of a machine to see in dim light, or to detect weak impulses at invisible wavelengths. 
  • Resolution is the extent to which a machine can differentiate between objects. In general, the better the resolution, the more confined the field of vision.
  • Sensitivity and resolution are interdependent. All other factors held constant, increasing the sensitivity reduces the resolution, and improving the resolution reduces the sensitivity.


The term machine vision is often associated with industrial applications of a computer's ability to see, while the term computer vision is often used to describe any type of technology in which a computer is tasked with digitizing an image, processing the data it contains and taking some kind of action.



Examples of applications of computer vision include systems for:


  • Automatic inspection, e.g. in manufacturing applications.
  • Assisting humans in identification tasks, e.g. a species identification system.
  • Controlling processes, e.g. an industrial robot.
  • Detecting events, e.g. for visual surveillance or people counting.
  • Interaction, e.g. as the input to a device for computer-human interaction.
  • Modeling objects or environments, e.g. medical image analysis or topographical modeling.
  • Navigation, e.g. by an autonomous vehicle or mobile robot.
  • Organizing information, e.g. for indexing databases of images and image sequences.

     
      If you enjoyed this blog post, share it with a friend!      
     Next week I will post more about AI...so stay tuned! 








Artificial Neural Networks



  • An artificial neural network (ANN) is a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs.
  • An ANN is based on a collection of connected units or nodes called artificial neurons which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it.
  • The original goal of the ANN approach was to solve problems in the same way that a human brain would. However, over time, attention moved to performing specific tasks, leading to deviations from biology. Artificial neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis.



Types of Artificial Neural Networks:
There are two Artificial Neural Network topologies: FeedForward and FeedBack.
  •  FeedForward ANN
The information flow is unidirectional. A unit sends information to another unit from which it does not receive any information. There are no feedback loops. They are used in pattern generation/recognition/classification. They have fixed inputs and outputs.
FeedForward ANN
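
A minimal Python/NumPy sketch of such a unidirectional pass is shown below; the layer sizes and random weights are placeholders, since a trained network would have learned values:

```python
import numpy as np

# Minimal sketch of a feed-forward pass: information moves strictly from the
# input layer, through one hidden layer, to the output layer, with no loops.
rng = np.random.default_rng(0)

x  = np.array([0.5, 0.1, 0.4])          # 3 input units
W1 = rng.normal(size=(3, 4))            # input -> hidden weights (placeholders)
W2 = rng.normal(size=(4, 2))            # hidden -> output weights (placeholders)

hidden = np.tanh(x @ W1)                # hidden layer activations
output = np.tanh(hidden @ W2)           # output layer activations (no feedback)
print(output)
```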



  • FeedBack ANN



Here, feedback loops are allowed. They are used in content-addressable memories.
FeedBack ANN

Working of ANNs:

In the topology diagrams shown, each arrow represents a connection between two neurons and indicates the pathway for the flow of information. Each connection has a weight, a numeric value that controls the signal between the two neurons.
If the network generates a “good or desired” output, there is no need to adjust the weights. However, if the network generates a “poor or undesired” output or an error, then the system alters the weights in order to improve subsequent results.
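
The following toy Python/NumPy sketch shows this adjust-on-error idea with a single neuron (a perceptron) learning the logical AND function; the data, learning rate and epoch count are chosen purely for illustration:

```python
import numpy as np

# Minimal sketch of weight adjustment: a single neuron classifies points and
# nudges its weights whenever its output is wrong.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0, 0, 0, 1])                       # target: logical AND

weights, bias, lr = np.zeros(2), 0.0, 0.1

for epoch in range(20):
    for xi, target in zip(X, y):
        prediction = int(weights @ xi + bias > 0)    # neuron's current output
        error = target - prediction                  # "good" output -> error is 0
        weights += lr * error * xi                   # adjust only when wrong
        bias    += lr * error

print([int(weights @ xi + bias > 0) for xi in X])    # -> [0, 0, 0, 1]
```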

Machine Learning in ANNs:

ANNs are capable of learning and they need to be trained. There are several learning strategies −


  • Supervised Learning.
  • Unsupervised Learning.
  • Reinforcement Learning.

If you enjoyed this blog post, share it with a friend!      
Next week I will post more about AI...so stay tuned! 


Sunday, August 12, 2018

Robotics






  • Robotics is a branch of engineering that involves the conception, design, manufacture, and operation of robots. This field overlaps with electronics, computer science, artificial intelligence, mechatronics, nanotechnology and bioengineering.
  • These technologies are used to develop machines that can substitute for humans and replicate human actions. Robots can be used in any situation and for any purpose, but today many are used in dangerous environments (including bomb detection and deactivation), manufacturing processes, or where humans cannot survive. Robots can take on any form but some are made to resemble humans in appearance. 
  • This is said to help in the acceptance of a robot in certain replicative behaviors usually performed by people. Such robots attempt to replicate walking, lifting, speech, cognition, and basically anything a human can do. Many of today's robots are inspired by nature, contributing to the field of bio-inspired robotics.



Asimov suggested three principles to guide the behavior of robots and smart machines. Asimov's Three Laws of Robotics, as they are called, have survived to the present:


  • Robots must never harm human beings.
  • Robots must follow instructions from humans without violating rule 1.
  • Robots must protect themselves without violating the other rules.



The concept of creating machines that can operate autonomously dates back to classical times, but research into the functionality and potential uses of robots did not grow substantially until the 20th century. Throughout history, it has been frequently assumed that robots will one day be able to mimic human behavior and manage tasks in a human-like fashion. Today, robotics is a rapidly growing field, and as technological advances continue, researching, designing, and building new robots serves various practical purposes, whether domestically, commercially, or militarily.




There are many types of robots; they are used in many different environments and for many different uses. Although very diverse in application and form, they all share three basic similarities when it comes to their construction:
  • Robots all have some kind of mechanical construction, a frame, form or shape designed to achieve a particular task. For example, a robot designed to travel across heavy dirt or mud, might use caterpillar tracks. The mechanical aspect is mostly the creator's solution to completing the assigned task and dealing with the physics of the environment around it. Form follows function.
  • Robots have electrical components which power and control the machinery. For example, the robot with caterpillar tracks would need some kind of power to move the tracker treads. That power comes in the form of electricity, which will have to travel through a wire and originate from a battery, a basic electrical circuit.

  • All robots contain some level of computer programming code. A program is how a robot decides when or how to do something. In the caterpillar track example, a robot that needs to move across a muddy road may have the correct mechanical construction and receive the correct amount of power from its battery, but would not go anywhere without a program telling it to move.
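
As a rough sketch of what that program layer can look like, here is a minimal Python sense-decide-act loop; the sensor and motor functions are invented placeholders standing in for real hardware drivers on an actual robot:

```python
import random

# Minimal sketch of a robot's "program": a sense-decide-act control loop.
def read_distance_sensor():
    return random.uniform(0.0, 2.0)        # pretend distance to an obstacle, in metres

def drive(left_speed, right_speed):
    print(f"motors -> left={left_speed:.1f}, right={right_speed:.1f}")

for _ in range(5):                          # the control loop, normally runs forever
    distance = read_distance_sensor()       # sense
    if distance < 0.5:                      # decide
        drive(-0.2, 0.2)                    # act: turn away from the obstacle
    else:
        drive(0.5, 0.5)                     # act: keep driving forward
```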




Applications of Robotics:

  • Military robots.
  • Caterpillar plans to develop remote controlled machines and expects to develop fully autonomous heavy robots by 2021. Some cranes already are remote controlled.
  • It was demonstrated that a robot can perform a herding task.
  • Robots are increasingly used in manufacturing (since the 1960s). In the auto industry, they can account for more than half of the "labor". There are even "lights-out" factories, such as an IBM keyboard manufacturing factory in Texas that is 100% automated.
  • Robot combat for sport – hobby or sport event where two or more robots fight in an arena to disable each other.

Robotics is an essential component in many modern manufacturing environments. As factories increase their use of robots, the number of robotics-related jobs grows and has been observed to be steadily rising. The employment of robots in industries has increased productivity and efficiency savings and is typically seen as a long-term investment for benefactors.


If you enjoyed this blog post, share it with a friend!      
Next week I will post more about AI...so stay tuned! 






Deep Learning


  • Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, semi-supervised or unsupervised.
  • Deep learning architectures such as deep neural networks, deep belief networks and recurrent neural networks have been applied to fields including computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, drug design and board game programs, where they have produced results comparable to and in some cases superior to human experts.
  • Deep learning models are vaguely inspired by information processing and communication patterns in biological nervous systems, yet have various differences from the structural and functional properties of biological brains, which make them incompatible with neuroscience evidence.






         Deep learning is a class of machine learning algorithms that:
  • use a cascade of multiple layers of non-linear processing units for feature extraction and transformation. Each successive layer uses the output from the previous layer as input.
  • learn in supervised (e.g. classification) and/or unsupervised (e.g. pattern analysis) manners.
  • learn multiple levels of representations that correspond to different levels of abstraction; the levels form a hierarchy of concepts.





  • In deep learning, each level learns to transform its input data into a slightly more abstract and composite representation. In an image recognition application, the raw input may be a matrix of pixels; the first representational layer may abstract the pixels and encode edges; the second layer may compose and encode arrangements of edges; the third layer may encode a nose and eyes; and the fourth layer may recognize that the image contains a face. Importantly, a deep learning process can learn which features to optimally place in which level on its own. (Of course, this does not completely obviate the need for hand-tuning; for example, varying numbers of layers and layer sizes can provide different degrees of abstraction.) A minimal sketch of such a layer stack follows this list.
  • Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than labeled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks.
  • Deep learning architectures are often constructed with a greedy layer-by-layer method. Deep learning helps to disentangle these abstractions and pick out which features improve performance.
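
Below is the minimal Python/NumPy sketch of a layer stack referred to above: each layer consumes the previous layer's output and re-represents it. The layer sizes, random weights and the "face score" interpretation are purely illustrative, since a real network would learn its weights from data:

```python
import numpy as np

# Minimal sketch of a layered (deep) model: each layer takes the previous
# layer's output as its input and transforms it into a new representation.
rng = np.random.default_rng(42)

def layer(inputs, out_size):
    W = rng.normal(size=(inputs.size, out_size))   # placeholder weights
    return np.maximum(inputs @ W, 0)               # ReLU non-linearity

pixels = rng.random(64)                     # "raw input": a flattened 8x8 image
edges = layer(pixels, 32)                   # layer 1: low-level features
parts = layer(edges, 16)                    # layer 2: compositions of features
face_score = layer(parts, 1)                # layer 3: a single high-level output

print("face score:", face_score[0])
```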


        
         If you enjoyed this blog post, share it with a friend!


         Next week I will post more about AI...so stay tuned! 




AI Optimized Hardware




  • AI-optimized hardware comprises new graphics processing units, central processing units and other processing devices that are specifically designed and structured to execute AI-oriented tasks.
  • AI-optimized hardware primarily makes a difference in deep learning applications.
  • Some of the companies that are offering AI-optimized hardware are Google, IBM, Intel, Nvidia, Alleviate, and Cray.




  • An AI accelerator is a class of microprocessor or computer system designed to accelerate artificial neural networks, machine vision and other machine learning algorithms for robotics, the internet of things and other data-intensive or sensor-driven tasks. They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability. A number of vendor-specific terms exist for devices in this space.
  • Computer systems have frequently complemented the CPU with special purpose accelerators for specialized tasks, most notably video cards for graphics, but also sound cards for sound, etc. As Deep learning and AI workloads rose in prominence, specialized hardware units were developed or adapted from previous products to accelerate these tasks.


  • Graphics processing units or GPUs are specialized hardware for the manipulation of images. As the mathematical bases of neural networks and image manipulation are similar, both being embarrassingly parallel tasks involving matrices, GPUs became increasingly used for machine learning tasks. As such, as of 2016 GPUs are popular for AI work, and they continue to evolve in a direction to facilitate deep learning, both for training and for inference in devices such as self-driving cars, and to gain additional connective capability for the kind of dataflow workloads AI benefits from (e.g. Nvidia NVLink). As GPUs have been increasingly applied to AI acceleration, GPU manufacturers have incorporated neural-network-specific hardware to further accelerate these tasks. Tensor cores are intended to speed up the training of neural networks (the toy low-precision comparison after this list illustrates why reduced precision matters).
  • Deep learning frameworks are still evolving, making it hard to design custom hardware. Reconfigurable devices like field-programmable gate arrays (FPGA) make it easier to evolve hardware, frameworks and software alongside each other.
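
Here is the toy comparison referred to above: a Python/NumPy sketch contrasting the same matrix multiplication in float32 and float16. It only illustrates the memory saving and the numerical error of low-precision arithmetic; it does not use any actual accelerator hardware, and the matrix sizes are arbitrary:

```python
import numpy as np

# Minimal sketch of the low-precision idea behind many AI accelerators:
# run the same matrix multiply in float32 and float16. Half precision uses
# half the memory per value, at the cost of a small numerical error.
rng = np.random.default_rng(1)
a = rng.random((256, 256), dtype=np.float32)
b = rng.random((256, 256), dtype=np.float32)

full = a @ b
half = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float32)

print("bytes per element:", np.float16().nbytes, "vs", np.float32().nbytes)
print("max relative error:", float(np.max(np.abs(full - half) / np.abs(full))))
```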

      AI accelerating co-processors:
  • The processor in Qualcomm's mobile platform Snapdragon 845 contains a Hexagon 685 DSP core for AI processing in camera, voice, XR and gaming applications.
  • PowerVR 2NX NNA (Neural Net Accelerator) is an IP core from Imagination Technologies licensed for integration into chips.
  • Neural Engine is an AI accelerator core within the Apple A11 Bionic SoC.
  • Cadence Tensilica Vision C5 is a neural-network-optimized DSP IP core.
  • The Neural Processing Unit is a neural network accelerator within the HiSilicon Kirin 970.

           If you enjoyed this blog post, share it with a friend!      
          Next week I will post more about AI...so stay tuned! 





Computational Biology
