
What is differential privacy in machine learning (preview)?

How differential privacy works

Differential privacy is a set of systems and practices that help keep the data of individuals safe and private. In machine learning solutions, differential privacy may be required for regulatory compliance.

Differential privacy machine learning process.

In traditional scenarios, raw data is stored in files and databases. When users analyze data, they typically use the raw data. This is a concern because it might infringe on an individual's privacy. Differential privacy tries to deal with this problem by adding "noise" or randomness to the data so that users can't identify any individual data points. At the least, such a system provides plausible deniability. Therefore, the privacy of individuals is preserved with limited impact on the accuracy of the data.

In differentially private systems, data is shared through requests called queries. When a user submits a query for data, operations known as privacy mechanisms add noise to the requested data. Privacy mechanisms return an approximation of the data instead of the raw data. This privacy-preserving result appears in a report. Reports consist of two parts, the actual data computed and a description of how the data was created.
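As a concrete illustration, here is a minimal sketch of a privacy mechanism for a counting query, using Laplace noise. The mechanism choice, sensitivity, and epsilon value are assumptions for illustration only, not the behaviour of any particular product.

```python
import numpy as np

def laplace_count(true_count, epsilon, sensitivity=1.0):
    """Answer a counting query with epsilon-differential privacy.

    A count changes by at most 1 when one individual is added or removed
    (sensitivity = 1), so Laplace noise with scale sensitivity/epsilon
    satisfies epsilon-differential privacy.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    # The "report": the noisy answer plus a description of how it was produced.
    return {
        "value": true_count + noise,
        "mechanism": "Laplace",
        "epsilon": epsilon,
        "sensitivity": sensitivity,
    }

print(laplace_count(true_count=1000, epsilon=0.5))
```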

Differential privacy metrics

Differential privacy tries to protect against the possibility that a user can produce an indefinite number of reports to eventually reveal sensitive data. A value known as epsilon measures how noisy, or private, a report is. Epsilon has an inverse relationship to noise or privacy. The lower the epsilon, the more noisy (and private) the data is.

Epsilon values are non-negative. Values below 1 provide full plausible deniability. Anything above 1 comes with a higher risk of exposure of the actual data. As you implement machine learning solutions with differential privacy, you want to work with epsilon values between 0 and 1.

Another value directly correlated to epsilon is delta. Delta is a measure of the probability that a report isn’t fully private. The higher the delta, the higher the epsilon. Because these values are correlated, epsilon is used more often.

Limit queries with a privacy budget

To ensure privacy in systems where multiple queries are allowed, differential privacy defines a rate limit. This limit is known as a privacy budget. Privacy budgets prevent data from being recreated through multiple queries. Privacy budgets are allocated an epsilon amount, typically between 1 and 3 to limit the risk of reidentification. As reports are generated, privacy budgets keep track of the epsilon value of individual reports as well as the aggregate for all reports. After a privacy budget is spent or depleted, users can no longer access data.
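A minimal sketch of such an accountant is shown below, assuming the simplest form of budgeting (sequential composition, where the epsilons of answered queries simply add up; production systems often use tighter composition results).

```python
class PrivacyBudget:
    """Track spent epsilon under basic sequential composition."""

    def __init__(self, total_epsilon):
        self.total_epsilon = total_epsilon
        self.spent = 0.0

    def charge(self, epsilon):
        """Charge a query against the budget, or refuse it if depleted."""
        if self.spent + epsilon > self.total_epsilon:
            raise RuntimeError("Privacy budget exhausted: query refused.")
        self.spent += epsilon
        return self.total_epsilon - self.spent  # remaining budget


budget = PrivacyBudget(total_epsilon=3.0)
for i in range(5):
    try:
        remaining = budget.charge(epsilon=0.8)
        print(f"query {i}: answered, {remaining:.1f} epsilon remaining")
    except RuntimeError as err:
        print(f"query {i}: {err}")
```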

Reliability of data

Although the preservation of privacy should be the goal, there’s a tradeoff when it comes to usability and reliability of the data. In data analytics, accuracy can be thought of as a measure of uncertainty introduced by sampling errors. This uncertainty tends to fall within certain bounds. Accuracy from a differential privacy perspective instead measures the reliability of the data, which is affected by the uncertainty introduced by the privacy mechanisms. In short, a higher level of noise or privacy corresponds to a lower epsilon and to lower accuracy and reliability of the data.
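A small simulation makes the tradeoff concrete. The sketch below assumes a simple count query answered with Laplace noise and estimates how wide the typical error becomes as epsilon shrinks; the numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
true_count = 1000

for epsilon in (0.1, 0.5, 1.0):
    # Laplace noise with scale 1/epsilon (sensitivity 1 for a count query).
    noisy_answers = true_count + rng.laplace(scale=1.0 / epsilon, size=10_000)
    # Half-width of an empirical 95% interval around the true count.
    error_95 = np.percentile(np.abs(noisy_answers - true_count), 95)
    print(f"epsilon={epsilon:<4} 95% of answers within +/- {error_95:.1f}")
```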

Open-source differential privacy libraries

SmartNoise is an open-source project that contains components for building machine learning solutions with differential privacy. SmartNoise is made up of the following top-level components:

  • SmartNoise Core library
  • SmartNoise SDK library

SmartNoise Core

The core library includes the following privacy mechanisms for implementing a differentially private system:

Analysis: A graph description of arbitrary computations.

Validator: A Rust library that contains a set of tools for checking and deriving the necessary conditions for an analysis to be differentially private.

Runtime: The medium to execute the analysis. The reference runtime is written in Rust, but runtimes can be written using any computation framework, such as SQL and Spark, depending on your data needs.

Bindings: Language bindings and helper libraries to build analyses. Currently SmartNoise provides Python bindings.

SmartNoise SDK

The system library provides the following tools and services for working with tabular and relational data:

Data Access: Library that intercepts and processes SQL queries and produces reports (a usage sketch follows this list). This library is implemented in Python and supports the following ODBC and DBAPI data sources:

  • PostgreSQL
  • SQL Server
  • Spark
  • Presto
  • Pandas

Service: Execution service that provides a REST endpoint to serve requests or queries against shared data sources. The service is designed to allow composition of differential privacy modules that operate on requests containing different delta and epsilon values, also known as heterogeneous requests. This reference implementation accounts for additional impact from queries on correlated data.

Evaluator: Stochastic evaluator that checks for privacy violations, accuracy, and bias. The evaluator supports the following tests:

  • Privacy Test: Determines whether a report adheres to the conditions of differential privacy.
  • Accuracy Test: Measures whether the reliability of reports falls within the upper and lower bounds given a 95% confidence level.
  • Utility Test: Determines whether the confidence bounds of a report are close enough to the data while still maximizing privacy.
  • Bias Test: Measures the distribution of reports for repeated queries to ensure they aren’t unbalanced.
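For orientation, here is a hedged sketch of how the Data Access component is typically driven from Python through the smartnoise-sql package (snsql). The function names, metadata file, table name, and data are assumptions that may differ between SmartNoise releases, so treat this as a sketch rather than a definitive recipe and check the project's documentation for the exact interface.

```python
# Hedged sketch: assumes the smartnoise-sql package ("snsql"), a pandas
# DataFrame of sensitive records, and a metadata file describing the table.
# Names and signatures may vary between SmartNoise releases.
import pandas as pd
import snsql
from snsql import Privacy

df = pd.read_csv("patients.csv")                  # hypothetical sensitive data
privacy = Privacy(epsilon=1.0, delta=1e-5)        # per-query privacy parameters

reader = snsql.from_df(df, privacy=privacy, metadata="patients.yaml")
result = reader.execute(
    "SELECT region, COUNT(*) FROM patients.patients GROUP BY region"
)
print(result)  # noisy, differentially private counts per region
```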

Next steps

Learn more about differential privacy in machine learning:


Responsible AI – Privacy and Security Requirements

Training data and prediction requests can both contain sensitive information about people or businesses, which has to be protected. How do you safeguard the privacy of individuals? What steps are taken to ensure that individuals have control of their data? Many countries have regulations in place to ensure privacy and security.

In Europe you have the GDPR (General Data Protection Regulation) and in California there is the CCPA (California Consumer Privacy Act). Fundamentally, both give individuals control over their data and require that companies protect the data being used in the model. When data processing is based on consent, an individual has the right to revoke that consent at any time.

 Defending ML Models against attacks – Ensuring privacy of consumer data:

I briefly discussed the tools for adversarial training – the CleverHans and FoolBox Python libraries – here: Model Debugging: Sensitivity Analysis, Adversarial Training, Residual Analysis. Let us now look at more stringent means of protecting an ML model against attacks. Protecting the model against attacks is important because it helps ensure the privacy and security of the data. An ML model may be attacked in different ways – some literature classifies the attacks into “Information Harms” and “Behavioural Harms”. Information Harm occurs when information is allowed to leak from the model. Information Harms take different forms: Membership Inference, Model Inversion, and Model Extraction. In Membership Inference, the attacker can determine whether some information was part of the training data. In Model Inversion, the attacker can reconstruct training data from the model, and in Model Extraction, the attacker is able to extract the entire model.
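To make Membership Inference concrete, here is a toy sketch of its simplest variant, which thresholds the model's confidence on a candidate record; the confidences, records, and threshold below are all hypothetical.

```python
def membership_score(confidence_on_true_label):
    """Toy membership-inference signal: models tend to be more confident on
    examples they were trained on, so unusually high confidence on a record's
    true label hints that the record was in the training set."""
    return confidence_on_true_label

# Hypothetical confidences returned by a trained model for two candidate records.
candidate_records = {"record_A": 0.999, "record_B": 0.62}
threshold = 0.95  # chosen by the attacker, e.g. calibrated on shadow models

for name, confidence in candidate_records.items():
    is_member = membership_score(confidence) > threshold
    verdict = "likely in the training set" if is_member else "likely not in the training set"
    print(f"{name}: confidence={confidence:.3f} -> {verdict}")
```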

Behavioural Harm occurs when the attacker can change the behaviour of the ML model itself, for example by inserting malicious data. I have given an example involving an autonomous vehicle in this article: Model Debugging: Sensitivity Analysis, Adversarial Training, Residual Analysis

Cryptography | Differential privacy to protect data

You should consider privacy-enhancing technologies like Secure Multi-Party Computation (SMPC) and Fully Homomorphic Encryption (FHE). SMPC involves multiple systems training or serving the model whilst the actual data is kept secure.

In FHE the data remains encrypted: prediction requests are made on encrypted data, and training of the model is also carried out on encrypted data. Because the data is never decrypted except by the user, this comes at a heavy computational cost. Users send encrypted prediction requests and receive back an encrypted result. The goal is that, using cryptography, you can protect the consumer's data.
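As a rough illustration of encrypted prediction, the sketch below scores a linear model on encrypted features using the additively homomorphic python-paillier library (phe). This is partially homomorphic encryption rather than full FHE, but it shows the same idea: the server computes on ciphertexts and only the user can decrypt the result. The weights and features are made up for the example.

```python
# Encrypted inference sketch with python-paillier ("phe"): additions and
# scalar multiplications on ciphertexts are enough to score a linear model.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

weights = [0.8, -1.5, 0.3]     # plaintext linear model held by the server
features = [2.0, 1.0, 4.0]     # the user's sensitive features

encrypted_features = [public_key.encrypt(x) for x in features]      # user encrypts
encrypted_score = sum(w * enc for w, enc in zip(weights, encrypted_features))  # server never decrypts

# Only the user, holding the private key, can read the prediction:
print(private_key.decrypt(encrypted_score))   # 0.8*2.0 - 1.5*1.0 + 0.3*4.0 = 1.3
```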

Differential Privacy in Machine Learning

Differential privacy protects the data by adding noise so that attackers cannot identify the real content. SmartNoise is an open-source project that contains components for building machine learning solutions with differential privacy. SmartNoise is made up of the following top-level components:

✔️SmartNoise Core library

✔️SmartNoise SDK library

This is a good read to understand about Differential Privacy: https://docs.microsoft.com/en-us/azure/machine-learning/concept-differential-privacy

 Private Aggregation of Teacher Ensembles (PATE)

This follows the Knowledge Distillation concept that I discussed here: Post 1 – Knowledge Distillation, Post 2 – Knowledge Distillation. PATE begins by dividing the data into “k” partitions with no overlaps. It then trains k teacher models, one per partition, and aggregates their results into an aggregate teacher model. During this aggregation, noise is added to the teachers’ combined output.

To train the student model, you take unlabelled public data and feed it to the aggregate teacher; the result is labelled data with which the student model is trained. For deployment, you use only the student model.
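A minimal sketch of the noisy aggregation step is shown below: each teacher votes on an unlabelled public example, Laplace noise is added to the vote counts, and the noisy winner becomes the label given to the student. The number of teachers, the classes, and the noise scale are illustrative.

```python
import numpy as np

def noisy_aggregate(teacher_predictions, num_classes, epsilon=0.2, rng=None):
    """PATE-style aggregation: count the teachers' votes per class, add
    Laplace noise to each count, and return the class with the highest
    noisy count as the label for the student."""
    rng = rng or np.random.default_rng()
    votes = np.bincount(teacher_predictions, minlength=num_classes)
    noisy_votes = votes + rng.laplace(scale=1.0 / epsilon, size=num_classes)
    return int(np.argmax(noisy_votes))

# 10 teachers, trained on disjoint partitions, vote on one public example.
teacher_predictions = np.array([3, 3, 3, 3, 2, 3, 3, 1, 3, 3])
print(noisy_aggregate(teacher_predictions, num_classes=5))  # usually 3
```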

The process is illustrated in the figure below:


PATE (Private Aggregation of Teacher Ensembles)

Source

Credits:


A one-up on motion capture

A new neural network approach captures the characteristics of a physical system’s dynamic motion from video, regardless of rendering configuration or image differences.
 
 

MIT researchers used the RISP method to predict the action sequence, joint stiffness, or movement of an articulated hand, like this one, from a target image or video.

From “Star Wars” to “Happy Feet,” many beloved films contain scenes that were made possible by motion capture technology, which records movement of objects or people through video. Further, applications for this tracking, which involve complicated interactions between physics, geometry, and perception, extend beyond Hollywood to the military, sports training, medical fields, and computer vision and robotics, allowing engineers to understand and simulate action happening within real-world environments.

As this can be a complex and costly process — often requiring markers placed on objects or people and recording the action sequence — researchers are working to shift the burden to neural networks, which could acquire this data from a simple video and reproduce it in a model. Work in physics simulations and rendering shows promise to make this more widely used, since it can characterize realistic, continuous, dynamic motion from images and transform back and forth between a 2D render and 3D scene in the world. However, to do so, current techniques require precise knowledge of the environmental conditions where the action is taking place, and the choice of renderer, both of which are often unavailable.

Now, a team of researchers from MIT and IBM has developed a trained neural network pipeline that avoids this issue, with the ability to infer the state of the environment and the actions happening, the physical characteristics of the object or person of interest (system), and its control parameters. When tested, the technique can outperform other methods in simulations of four physical systems of rigid and deformable bodies, which illustrate different types of dynamics and interactions, under various environmental conditions. Further, the methodology allows for imitation learning — predicting and reproducing the trajectory of a real-world, flying quadrotor from a video.

“The high-level research problem this paper deals with is how to reconstruct a digital twin from a video of a dynamic system,” says Tao Du PhD '21, a postdoc in the Department of Electrical Engineering and Computer Science (EECS), a member of Computer Science and Artificial Intelligence Laboratory (CSAIL), and a member of the research team. In order to do this, Du says, “we need to ignore the rendering variances from the video clips and try to grasp of the core information about the dynamic system or the dynamic motion.”

Du’s co-authors include lead author Pingchuan Ma, a graduate student in EECS and a member of CSAIL; Josh Tenenbaum, the Paul E. Newton Career Development Professor of Cognitive Science and Computation in the Department of Brain and Cognitive Sciences and a member of CSAIL; Wojciech Matusik, professor of electrical engineering and computer science and CSAIL member; and MIT-IBM Watson AI Lab principal research staff member Chuang Gan. This work was presented this week at the International Conference on Learning Representations.

While capturing videos of characters, robots, or dynamic systems to infer dynamic movement makes this information more accessible, it also brings a new challenge. “The images or videos [and how they are rendered] depend largely on the lighting conditions, on the background info, on the texture information, on the material information of your environment, and these are not necessarily measurable in a real-world scenario,” says Du. Without this rendering configuration information or knowledge of which renderer is used, it’s presently difficult to glean dynamic information and predict behavior of the subject of the video. Even if the renderer is known, current neural network approaches still require large sets of training data. However, with their new approach, this can become a moot point. “If you take a video of a leopard running in the morning and in the evening, of course, you'll get visually different video clips because the lighting conditions are quite different. But what you really care about is the dynamic motion: the joint angles of the leopard — not if they look light or dark,” Du says.

In order to take rendering domains and image differences out of the issue, the team developed a pipeline system containing a neural network, dubbed “rendering invariant state-prediction (RISP)” network. RISP transforms differences in images (pixels) to differences in states of the system — i.e., the environment of action — making their method generalizable and agnostic to rendering configurations. RISP is trained using random rendering parameters and states, which are fed into a differentiable renderer, a type of renderer that measures the sensitivity of pixels with respect to rendering configurations, e.g., lighting or material colors. This generates a set of varied images and video from known ground-truth parameters, which will later allow RISP to reverse that process, predicting the environment state from the input video. The team additionally minimized RISP’s rendering gradients, so that its predictions were less sensitive to changes in rendering configurations, allowing it to learn to forget about visual appearances and focus on learning dynamical states. This is made possible by a differentiable renderer.

The method then uses two similar pipelines, run in parallel. One is for the source domain, with known variables. Here, system parameters and actions are entered into a differentiable simulation. The generated simulation’s states are combined with different rendering configurations into a differentiable renderer to generate images, which are fed into RISP. RISP then outputs predictions about the environmental states. At the same time, a similar target domain pipeline is run with unknown variables. RISP in this pipeline is fed these output images, generating a predicted state. When the predicted states from the source and target domains are compared, a new loss is produced; this difference is used to adjust and optimize some of the parameters in the source domain pipeline. This process can then be iterated on, further reducing the loss between the pipelines.
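The following is a conceptual sketch, not the authors' code, of that two-pipeline loop written in PyTorch-style Python. The simulate, render, and risp callables are placeholders standing in for the differentiable simulator, the differentiable renderer, and the trained RISP network described above.

```python
import torch

def fit_source_to_target(risp, simulate, render, target_video,
                         system_params, actions, render_config, steps=200):
    """Optimize system parameters and actions so the source pipeline's
    RISP-predicted states match those predicted from the target video."""
    system_params = system_params.clone().requires_grad_(True)
    actions = actions.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([system_params, actions], lr=1e-2)

    # Target pipeline: RISP predicts states directly from the unknown-domain video.
    with torch.no_grad():
        target_states = risp(target_video)

    for _ in range(steps):
        optimizer.zero_grad()
        # Source pipeline: simulate states, render them, map images back to states.
        states = simulate(system_params, actions)
        images = render(states, render_config)
        predicted_states = risp(images)
        # The loss compares state predictions rather than pixels, so it is
        # insensitive to lighting, texture, and other rendering choices.
        loss = torch.nn.functional.mse_loss(predicted_states, target_states)
        loss.backward()
        optimizer.step()

    return system_params.detach(), actions.detach()
```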

To determine the success of their method, the team tested it in four simulated systems: a quadrotor (a flying rigid body that doesn’t have any physical contact), a cube (a rigid body that interacts with its environment, like a die), an articulated hand, and a rod (deformable body that can move like a snake). The tasks included estimating the state of a system from an image, identifying the system parameters and action control signals from a video, and discovering the control signals from a target image that direct the system to the desired state. Additionally, they created baselines and an oracle, comparing the novel RISP process in these systems to similar methods that, for example, lack the rendering gradient loss, don’t train a neural network with any loss, or lack the RISP neural network altogether. The team also looked at how the gradient loss impacted the state prediction model’s performance over time. Finally, the researchers deployed their RISP system to infer the motion of a real-world quadrotor, which has complex dynamics, from video. They compared the performance to other techniques that lacked a loss function and used pixel differences, or one that included manual tuning of a renderer’s configuration.

In nearly all of the experiments, the RISP procedure outperformed similar or the state-of-the-art methods available, imitating or reproducing the desired parameters or motion, and proving to be a data-efficient and generalizable competitor to current motion capture approaches.

For this work, the researchers made two important assumptions: that information about the camera is known, such as its position and settings, as well as the geometry and physics governing the object or person that is being tracked. Future work is planned to address this.

“I think the biggest problem we're solving here is to reconstruct the information in one domain to another, without very expensive equipment,” says Ma. Such an approach should be “useful for [applications such as the] metaverse, which aims to reconstruct the physical world in a virtual environment," adds Gan. “It is basically an everyday, available solution, that’s neat and simple, to cross domain reconstruction or the inverse dynamics problem,” says Ma.

This research was supported, in part, by the MIT-IBM Watson AI Lab, Nexplore, DARPA Machine Common Sense program, Office of Naval Research (ONR), ONR MURI, and Mitsubishi Electric.

Source


DataRobot’s vision to democratize machine learning with no-code AI

 

The growing digitization of nearly every aspect of our world and lives has created immense opportunities for the productive application of machine learning and data science. Organizations and institutions across the board are feeling the need to innovate and reinvent themselves by using artificial intelligence and putting their data to good use. And according to several surveys, data science is among the fastest-growing in-demand skills in different sectors.

However, the growing demand for AI is hampered by the very low supply of data scientists and machine learning experts. Among the efforts to address this talent gap is the fast-evolving field of no-code AI, tools that make the creation and deployment of ML models accessible to organizations that don’t have enough highly skilled data scientists and machine learning engineers.

In an interview with TechTalks, Nenshad Bardoliwalla, chief product officer at DataRobot, discussed the challenges of meeting the needs of machine learning and data science in different sectors and how no-code platforms are helping democratize artificial intelligence.

Not enough data scientists

Nenshad Bardoliwalla, Chief Product Officer at DataRobot

“The reason the demand for AI is going up so significantly is because the amount of digital exhaust being generated by businesses and the number of ways they can creatively use that digital exhaust to solve real business problems is going up,” Bardoliwalla said.

At the same time, there are nowhere near enough expert data scientists in the world who have the ability to actually exploit that data.

“We knew ten years ago, when DataRobot started, that there was no way that the number of expert data scientists—people who have Ph.D. in statistics, Ph.D. in machine learning—that the world would have enough of those individuals to be able to satisfy that demand for AI-driven business outcomes,” Bardoliwalla said.

And as the years have passed, Bardoliwalla has seen demand for machine learning and data science grow across different sectors as more and more organizations are realizing the business value of machine learning, whether it’s predicting customer churn, ad clicks, the possibility of an engine breakdown, medical outcomes, or something else.

“We are seeing more and more companies who recognize that their competition is able to exploit AI and ML in interesting ways and they’re looking to keep up,” Bardoliwalla said.

At the same time, the growing demand for data science skills has caused the AI talent gap to keep widening. And not everyone is served equally.

Underserved industries

The shortage of experts has created fierce competition for data science and machine learning talent. The financial sector is leading the way, aggressively hiring AI talent and putting machine learning models into use.

“If you look at financial services, you’ll clearly see that the number of machine learning models that are being put into production is by far the highest than any of the other segments,” Bardoliwalla said.

In parallel, big tech companies with deep pockets are also hiring top data scientists and machine learning engineers—or outright acquiring AI labs with all their engineers and scientists—to further fortify their data-driven commercial empires. Meanwhile, smaller companies and sectors that are not flush with cash have been largely left out of the opportunities provided by advances in artificial intelligence because they can’t hire enough data scientists and machine learning experts.

Bardoliwalla is especially passionate about what AI could do for the education sector.

“How much effort is being put into optimized student outcomes by using AI and ML? How much do the education industry and the school systems have in order to invest in that technology? I think the education industry as a whole is likely to be a lagger in the space,” he said.

Other areas that still have a ways to go before they can take advantage of advances in AI are transportation, utilities, and heavy machinery. And part of the solution might be to make ML tools that don’t require a degree in data science.

The no-code AI vision


“For every one of your expert data scientists, you have ten analytically savvy businesspeople who are able to frame the problem correctly and add the specific business-relevant calculations that make sense based on the domain knowledge of those people,” Bardoliwalla said.

As machine learning requires knowledge of programming languages such as Python and R and complicated libraries such as NumPy, Scikit-learn, and TensorFlow, most business people can’t create and test models without the help of expert data scientists. This is the area that no-code AI platforms are addressing.

DataRobot and other providers of no-code AI platforms are creating tools that enable these domain experts and business-savvy people to create and deploy machine learning models without the need to write code.

With DataRobot, users can upload their datasets on the platform, perform the necessary preprocessing steps, choose and extract features, and create and compare a range of different machine learning models, all through an easy-to-use graphical user interface.

“The whole notion of democratization is to allow companies and people in those companies who wouldn’t otherwise be able to take advantage of AI and ML to actually be able to do so,” Bardoliwalla said.

No-code AI is not a replacement for the expert data scientist. But it increases ML productivity across organizations, empowering more people to create models. This lifts much of the burden from the overloaded shoulders of data scientists and enables them to put their skills to more efficient use.

“The one person in that equation, the expert data scientist, is able to validate and govern and make sure that the models that are being generated by the analytically savvy businesspeople are quite accurate and make sense from an interpretability perspective—that they’re trustworthy,” Bardoliwalla said.

This evolution of machine learning tools is analogous to how the business intelligence industry has changed. A decade ago, the ability to query data and generate reports at organizations was limited to a few people who had the special coding skill set required to manage databases and data warehouses. But today, the tools have evolved to the point that non-coders and less technical people can perform most of their data querying tasks through easy-to-use graphical tools and without the assistance of expert data analysts. Bardoliwalla believes that the same transformation is happening in the AI industry thanks to no-code AI platforms.

“Whereas the business intelligence industry has historically focused on what has happened—and that is useful—AI and ML is going to give every person in the business the ability to predict what is going to happen,” Bardoliwalla said. “We believe that we can put AI and ML into the hands of millions of people in organizations because we have simplified the process to the point that many analytically savvy business people—and there are millions of such folks—working with the few million data scientists can deliver AI- and ML-specific outcomes.”

The evolution of no-code AI at DataRobot

DataRobot’s AI Cloud is an end-to-end platform that covers the entire machine learning development lifecycle

DataRobot launched the first set of no-code AI tools in 2014. Since then, the platform has expanded at the fast pace of the applied machine learning industry. DataRobot unified its tools into the AI Cloud in 2021, and in mid-March, the company released AI Cloud 8.0, the latest version of its platform.

The AI Cloud has evolved into an end-to-end no-code platform that covers the entire machine learning development lifecycle.

“We recognized in 2019 that we had to expand, and the way you get value from machine learning is by being able to deploy models in production and have them actually provide predictions in business processes,” Bardoliwalla said.

In addition to creating and testing models, DataRobot also supports MLOps, the practices that cover the deployment and maintenance of ML models. The platform includes a graphical No-Code AI App Builder tool that enables you to create full-fledged applications on top of your models. The platform also monitors deployed ML models for decay, data-drift, and other factors that can affect performance. More recently, the company added data engineering tools for gathering, segmenting, labeling, updating, and managing the datasets used to train and validate ML models.

“Our vision expanded dramatically, and the first evidence of the end-to-end platform arrived in 2019. What we’ve done since then is tie all of that together—and this is what we announced with the 8.0 release with the Continuous AI,” Bardoliwalla said.

The future of no-code AI

As no-code AI has matured, it has also become valuable to seasoned data scientists and machine learning engineers, who are interested in automating the tedious parts of their job. Throughout the entire machine learning development lifecycle, more advanced users can integrate their own hand-written code with DataRobot’s automated tools. Alternatively, they can extract the Python or R source code for the models DataRobot generates and further customize it for integration into their own applications.

But no-code AI still has a lot to offer. “The future of no-code AI is going to be about increasing the level of automation that platforms can provide. The more you increase the level of automation, the less you have to write code,” Bardoliwalla said.

Among the ideas that Bardoliwalla is entertaining is the development of tools that can continuously update and profile the data used in machine learning models. There are also opportunities to further streamline the automated ML process by continually monitoring the accuracy of not only the model in production, but also challenger models that can potentially replace the main ML model as context and conditions change.

“The way that no-code environments are going to succeed is that they allow for more and more functionality that used to require someone to write code to now be manifested in just a couple of simple clicks inside of a GUI,” Bardoliwalla said.

Source


What is Hybrid AI?

 

Researchers are working to combine the strengths of symbolic AI and neural networks to develop Hybrid AI.

As the research community makes progress in artificial intelligence and deep learning, scientists are increasingly feeling the need to move towards hybrid artificial intelligence. Hybrid AI is touted to solve fundamental problems that deep learning faces today. 

Hybrid AI brings together the best aspects of neural networks and symbolic AI. Neural networks extract patterns from huge data sets (visual, audio, textual, emails, chat logs, etc.). Then, rule-based AI systems can manipulate the retrieved information by using algorithms to operate on symbols.
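A toy sketch of that division of labour follows: a stand-in neural perception step emits symbolic facts about a scene, and a small hand-written rule base reasons over them. Everything here is hypothetical and only meant to show the shape of a hybrid pipeline.

```python
def neural_perception(image):
    # Placeholder for a neural network; a real system would run a detector
    # or classifier here and emit its outputs as symbols.
    return [
        {"object": "cube", "color": "red", "left_of": "sphere"},
        {"object": "sphere", "color": "blue", "left_of": None},
    ]

def symbolic_reasoner(facts, question):
    # Hand-written rule operating on the symbols produced by perception.
    if question == "color of the object left of the sphere":
        for fact in facts:
            if fact["left_of"] == "sphere":
                return fact["color"]
    return "unknown"

facts = neural_perception(image=None)
print(symbolic_reasoner(facts, "color of the object left of the sphere"))  # -> "red"
```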

Researchers are working to develop hybrid AI systems that can figure out simple abstract relations between objects and the reason behind them as effortlessly as a human brain. 

What is symbolic AI?

During the 1960s and 1970s, new technological advances were met with researchers’ increasing desire to understand how machines and nature interact. Researchers believed that using symbolic approaches would inevitably produce an artificially intelligent machine, which was seen as their discipline’s long-term goal.

The term “good old-fashioned artificial intelligence”, or “GOFAI”, was coined by John Haugeland in his 1985 book ‘Artificial Intelligence: The Very Idea‘, which explored artificial intelligence’s ethical and philosophical implications. Since the initial efforts to build thinking computers in the 1950s, research and development in the AI field have followed two parallel approaches: symbolic AI and connectionist AI. 

Symbolic AI (also known as Classical AI) is an area of artificial intelligence research that focuses on attempting to express human knowledge clearly in a declarative form, that is, facts and rules. From the mid-1950s until the late 1980s, there was significant use of symbolic artificial intelligence. On the other hand, in recent years, a connectionist approach such as machine learning with deep neural networks has come to the forefront.

Combining symbolic AI and neural networks 

 

There has been a shift from the symbolic approach in the past few years due to its technical limits. 

According to David Cox, IBM Director at MIT-IBM Watson AI Lab, deep learning and neural networks excel at the “messiness of the world,” but symbolic AI does not. Neural networks meticulously study and compare a large number of annotated instances to discover significant relationships and create corresponding mathematical models. 

Several prominent IT businesses and academic labs have put significant effort into the use of deep learning. Neural networks and deep learning excel at tasks where symbolic AI fails. As a result, it’s being used to tackle complex challenges today. For example, deep learning has made significant contributions to the computer vision revolution with use cases in facial recognition and tuberculosis detection. Language-related activities have also benefited from deep learning breakthroughs.

There are, however, certain limits to deep learning and neural networks. One limitation is that deep learning depends on the availability of large volumes of data. In addition, neural networks are also vulnerable to hostile instances, often known as adversarial data, which can manipulate an AI model’s behaviour in unpredictable and harmful ways.

However, when combined with each other, symbolic AI and neural networks can form a good base for developing hybrid AI systems.

Future of hybrid AI 

The hybrid AI model utilises the neural network’s ability to process and evaluate unstructured data while also using symbolic AI techniques. Connectionist viewpoints argue that techniques based on neural networks will eventually provide sophisticated and broadly applicable AI. In 2019, the International Conference on Learning Representations (ICLR) featured a paper in which the researchers combined neural networks with rule-based artificial intelligence to create an AI model. This approach has been called the “Neuro-Symbolic Concept Learner” (NSCL); it claims to overcome the difficulties AI faces and to be superior to the sum of its parts. The NSCL, a hybrid AI system developed by researchers at MIT and IBM, tackles visual question answering (VQA) problems by using neural networks in conjunction with symbolic, rule-based reasoning, with remarkable accuracy. The researchers demonstrated that the NSCL was able to handle the VQA dataset CLEVR. Even more important, the hybrid AI model could achieve outstanding results with less training data and overcome two long-standing deep learning challenges.

Even the Google search engine is a complex, all-in-one AI system made up of cutting-edge deep learning tools such as Transformers and advanced symbol-manipulation tools like the knowledge graph.




Source


What is the IoT?

The Internet of Things (IoT) describes the network of physical objects—“things”—that are embedded with sensors, software, and other technologies for the purpose of connecting and exchanging data with other devices and systems over the internet. These devices range from ordinary household objects to sophisticated industrial tools. With more than 7 billion connected IoT devices today, experts are expecting this number to grow to 10 billion by 2020 and 22 billion by 2025. 

Why is Internet of Things (IoT) so important?

Over the past few years, IoT has become one of the most important technologies of the 21st century. Now that we can connect everyday objects—kitchen appliances, cars, thermostats, baby monitors—to the internet via embedded devices, seamless communication is possible between people, processes, and things.

By means of low-cost computing, the cloud, big data, analytics, and mobile technologies, physical things can share and collect data with minimal human intervention. In this hyperconnected world, digital systems can record, monitor, and adjust each interaction between connected things. The physical world meets the digital world—and they cooperate.

What technologies have made IoT possible?

While the idea of IoT has been in existence for a long time, a collection of recent advances in a number of different technologies has made it practical.

  • Access to low-cost, low-power sensor technology. Affordable and reliable sensors are making IoT technology possible for more manufacturers.
  • Connectivity. A host of network protocols for the internet has made it easy to connect sensors to the cloud and to other “things” for efficient data transfer.
  • Cloud computing platforms. The increase in the availability of cloud platforms enables both businesses and consumers to access the infrastructure they need to scale up without actually having to manage it all.
  • Machine learning and analytics. With advances in machine learning and analytics, along with access to varied and vast amounts of data stored in the cloud, businesses can gather insights faster and more easily. The emergence of these allied technologies continues to push the boundaries of IoT and the data produced by IoT also feeds these technologies.
  • Conversational artificial intelligence (AI). Advances in neural networks have brought natural-language processing (NLP) to IoT devices (such as digital personal assistants Alexa, Cortana, and Siri) and made them appealing, affordable, and viable for home use.

What is industrial IoT?

Industrial IoT (IIoT) refers to the application of IoT technology in industrial settings, especially with respect to instrumentation and control of sensors and devices that engage cloud technologies. Refer to this Titan use case PDF for a good example of IIoT. Recently, industries have used machine-to-machine communication (M2M) to achieve wireless automation and control. But with the emergence of cloud and allied technologies (such as analytics and machine learning), industries can achieve a new automation layer and with it create new revenue and business models. IIoT is sometimes called the fourth wave of the industrial revolution, or Industry 4.0. The following are some common uses for IIoT:

  • Smart manufacturing
  • Connected assets and preventive and predictive maintenance
  • Smart power grids
  • Smart cities
  • Connected logistics
  • Smart digital supply chains

What are IoT applications?

Business-ready, SaaS IoT Applications

IoT Intelligent Applications are prebuilt software-as-a-service (SaaS) applications that can analyze and present captured IoT sensor data to business users via dashboards. 

IoT applications use machine learning algorithms to analyze massive amounts of connected sensor data in the cloud. Using real-time IoT dashboards and alerts, you gain visibility into key performance indicators, statistics for mean time between failures, and other information. Machine learning–based algorithms can identify equipment anomalies and send alerts to users and even trigger automated fixes or proactive counter measures.
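As a simple illustration of that kind of check, the sketch below flags sensor readings that deviate sharply from a rolling window of recent values, the sort of rule an IoT dashboard might use to raise an equipment alert. The window size, threshold, and data are illustrative.

```python
import statistics
from collections import deque

def detect_anomalies(readings, window=20, z_threshold=3.0):
    """Flag readings that deviate sharply from the recent rolling window."""
    history = deque(maxlen=window)
    alerts = []
    for t, value in enumerate(readings):
        if len(history) == window:
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history) or 1e-9  # avoid division by zero
            if abs(value - mean) / stdev > z_threshold:
                alerts.append((t, value))
        history.append(value)
    return alerts

# Simulated vibration readings with one spike at t=30.
readings = [1.0 + 0.01 * (i % 5) for i in range(60)]
readings[30] = 5.0
print(detect_anomalies(readings))  # -> [(30, 5.0)]
```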

With cloud-based IoT applications, business users can quickly enhance existing processes for supply chains, customer service, human resources, and financial services. There’s no need to recreate entire business processes.

What are some ways IoT applications are deployed?

The ability of IoT to provide sensor information as well as enable device-to-device communication is driving a broad set of applications. The following are some of the most popular applications and what they do.

Create new efficiencies in manufacturing through machine monitoring and product-quality monitoring.

Machines can be continuously monitored and analyzed to make sure they are performing within required tolerances. Products can also be monitored in real time to identify and address quality defects.

Improve the tracking and “ring-fencing” of physical assets.

Tracking enables businesses to quickly determine asset location. Ring-fencing allows them to make sure that high-value assets are protected from theft and removal.

Use wearables to monitor human health analytics and environmental conditions.

IoT wearables enable people to better understand their own health and allow physicians to remotely monitor patients. This technology also enables companies to track the health and safety of their employees, which is especially useful for workers employed in hazardous conditions.

Drive efficiencies and new possibilities in existing processes.

One example of this is the use of IoT to increase efficiency and safety in connected logistics for fleet management. Companies can use IoT fleet monitoring to direct trucks, in real time, to improve efficiency.

Enable business process changes.

An example of this is the use of IoT devices for connected assets to monitor the health of remote machines and trigger service calls for preventive maintenance. The ability to remotely monitor machines is also enabling new product-as-a-service business models, where customers no longer need to buy a product but instead pay for its usage.


What industries can benefit from IoT?

Organizations best suited for IoT are those that would benefit from using sensor devices in their business processes.

Manufacturing

Manufacturers can gain a competitive advantage by using production-line monitoring to enable proactive maintenance on equipment when sensors detect an impending failure. Sensors can actually measure when production output is compromised. With the help of sensor alerts, manufacturers can quickly check equipment for accuracy or remove it from production until it is repaired. This allows companies to reduce operating costs, get better uptime, and improve asset performance management.

Automotive

The automotive industry stands to realize significant advantages from the use of IoT applications. In addition to the benefits of applying IoT to production lines, sensors can detect impending equipment failure in vehicles already on the road and can alert the driver with details and recommendations. Thanks to aggregated information gathered by IoT-based applications, automotive manufacturers and suppliers can learn more about how to keep cars running and car owners informed.

Transportation and Logistics

Transportation and logistical systems benefit from a variety of IoT applications. Fleets of cars, trucks, ships, and trains that carry inventory can be rerouted based on weather conditions, vehicle availability, or driver availability, thanks to IoT sensor data. The inventory itself could also be equipped with sensors for track-and-trace and temperature-control monitoring. The food and beverage, flower, and pharmaceutical industries often carry temperature-sensitive inventory that would benefit greatly from IoT monitoring applications that send alerts when temperatures rise or fall to a level that threatens the product.

Retail

IoT applications allow retail companies to manage inventory, improve customer experience, optimize supply chain, and reduce operational costs. For example, smart shelves fitted with weight sensors can collect RFID-based information and send the data to the IoT platform to automatically monitor inventory and trigger alerts if items are running low. Beacons can push targeted offers and promotions to customers to provide an engaging experience.

Public Sector

The benefits of IoT in the public sector and other service-related environments are similarly wide-ranging. For example, government-owned utilities can use IoT-based applications to notify their users of mass outages and even of smaller interruptions of water, power, or sewer services. IoT applications can collect data concerning the scope of an outage and deploy resources to help utilities recover from outages with greater speed.

Healthcare

IoT asset monitoring provides multiple benefits to the healthcare industry. Doctors, nurses, and orderlies often need to know the exact location of patient-assistance assets such as wheelchairs. When a hospital’s wheelchairs are equipped with IoT sensors, they can be tracked from the IoT asset-monitoring application so that anyone looking for one can quickly find the nearest available wheelchair. Many hospital assets can be tracked this way to ensure proper usage as well as financial accounting for the physical assets in each department.

General Safety Across All Industries

In addition to tracking physical assets, IoT can be used to improve worker safety. Employees in hazardous environments such as mines, oil and gas fields, and chemical and power plants, for example, need to know about the occurrence of a hazardous event that might affect them. When they are connected to IoT sensor–based applications, they can be notified of accidents or rescued from them as swiftly as possible. IoT applications are also used for wearables that can monitor human health and environmental conditions. Not only do these types of applications help people better understand their own health, they also permit physicians to monitor patients remotely.



How is IoT changing the world? Take a look at connected cars.

IoT is reinventing the automobile by enabling connected cars. With IoT, car owners can operate their cars remotely—by, for example, preheating the car before the driver gets in it or by remotely summoning a car by phone. Given IoT’s ability to enable device-to-device communication, cars will even be able to book their own service appointments when warranted.

The connected car allows car manufacturers or dealers to turn the car ownership model on its head. Previously, manufacturers have had an arms-length relationship with individual buyers (or none at all). Essentially, the manufacturer’s relationship with the car ended once it was sent to the dealer. With connected cars, automobile makers or dealers can have a continuous relationship with their customers. Instead of selling cars, they can charge drivers usage fees, offering a “transportation-as-a-service” using autonomous cars. IoT allows manufacturers to upgrade their cars continuously with new software, a sea-change difference from the traditional model of car ownership in which vehicles immediately depreciate in performance and value.

Source