
Tokenizing Virtual Identity: Blockchain & AI’s Inevitable Impact

Tokenizing Virtual Identity

Tokenizing virtual identity is the latest buzzword in the world of technology. With the rise of blockchain and AI, the process of tokenizing virtual identity has become more feasible and efficient. In a world that is increasingly dependent on digital communication and transactions, virtual identity has become an essential aspect of our lives. From social media to online banking, virtual identity is crucial for individuals and organizations alike. This article explores the inevitable impact of blockchain and AI on tokenizing virtual identity.

What Are Blockchain and AI?

To understand the role of blockchain and AI in tokenizing virtual identity, we need to first understand what these technologies are. Blockchain is a decentralized and distributed digital ledger that records transactions across multiple computers, allowing secure and transparent storage of data. AI, on the other hand, refers to the simulation of human intelligence in machines that can perform tasks that typically require human cognition, such as learning, reasoning, and problem-solving.

The Benefits of Tokenizing Virtual Identity

Tokenizing virtual identity offers several benefits. Firstly, it provides a higher degree of security than traditional identity management systems, as it is based on cryptography and decentralized storage. Secondly, it offers greater control and ownership of personal data, allowing individuals to manage and monetize their identity. Thirdly, it offers greater efficiency by reducing the need for intermediaries and streamlining identity verification processes.

The Role of Blockchain in Tokenizing Identity

Blockchain plays a crucial role in tokenizing virtual identity. By providing a decentralized and secure platform for storing and managing identity data, blockchain ensures that personal data is owned and controlled by individuals, rather than centralized institutions. Blockchain also enables the creation of self-sovereign identities, where individuals have complete control over their identity data and can share it securely with trusted parties.
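To make this concrete, here is a minimal, hypothetical sketch (plain Python, not tied to any particular blockchain or identity standard) of how identity attributes could be committed to as salted hashes: the commitments could be published, while the salts stay with the owner, letting an individual disclose one attribute to a trusted party without revealing the rest.

```python
import hashlib
import secrets

def commit_attributes(attributes: dict) -> tuple[dict, dict]:
    """Create a salted hash commitment for each identity attribute.
    The commitments can be shared publicly; the salts stay with the owner."""
    salts = {k: secrets.token_hex(16) for k in attributes}
    commitments = {
        k: hashlib.sha256((salts[k] + str(v)).encode()).hexdigest()
        for k, v in attributes.items()
    }
    return commitments, salts

def verify_disclosure(commitment: str, salt: str, value) -> bool:
    """A verifier checks a disclosed value against the public commitment."""
    return hashlib.sha256((salt + str(value)).encode()).hexdigest() == commitment

identity = {"name": "Alice", "birth_year": 1990}
commitments, salts = commit_attributes(identity)

# Alice discloses only her birth year; the verifier never sees her name.
assert verify_disclosure(commitments["birth_year"], salts["birth_year"], 1990)
assert not verify_disclosure(commitments["birth_year"], salts["birth_year"], 1985)
```

Real systems layer signatures, revocation, and standardized credential formats on top of this idea; the sketch only shows the selective-disclosure principle.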

The Role of AI in Tokenizing Identity

AI plays a crucial role in tokenizing virtual identity by automating identity verification processes. By leveraging machine learning algorithms, AI can analyze large volumes of data and make intelligent decisions about identity verification. This can help reduce the risk of fraud and improve the efficiency of identity verification processes.

Tokenizing Virtual Identity: Use Cases

Tokenizing virtual identity has several use cases. For example, it can be used for secure and decentralized voting systems, where individuals can verify their identity and cast their vote securely and anonymously. It can also be used for secure and decentralized identity verification for financial and healthcare services, reducing the risk of identity theft and fraud.

Tokenizing Virtual Identity: Challenges

Tokenizing virtual identity also presents several challenges. One of the main challenges is interoperability, as different blockchain networks and AI systems may not be compatible with each other. Another challenge is scalability, as blockchain and AI systems may not be able to handle the volume of data required for identity verification on a large scale.

Security Concerns in Tokenizing Identity

Security is a key concern in tokenizing virtual identity. While blockchain and AI offer greater security than traditional identity management systems, they are not immune to attacks. Hackers could potentially exploit vulnerabilities in blockchain and AI systems to gain access to personal data. It is therefore crucial to implement robust security measures to protect personal data.

Privacy Issues in Tokenizing Identity

Privacy is another key concern in tokenizing virtual identity. While tokenizing virtual identity offers greater control and ownership of personal data, it also raises concerns about data privacy. It is essential to ensure that personal data is not shared without consent and that individuals have the right to access, modify, and delete their data.

Legal Implications of Tokenizing Identity

Tokenizing virtual identity also has legal implications. As personal data becomes more valuable, it is crucial to ensure that there are adequate laws and regulations in place to protect personal data. It is also essential to ensure that individuals have the right to access and control their data, and that they are not discriminated against based on their identity.

The Future of Tokenizing Virtual Identity

The future of tokenizing virtual identity looks bright. As blockchain and AI continue to evolve, we can expect to see more secure, efficient, and decentralized identity management systems. We can also expect to see more use cases for tokenizing virtual identity, from secure and anonymous voting systems to decentralized identity verification for financial and healthcare services.

Embracing Blockchain & AI for Identity Management

In conclusion, tokenizing virtual identity is an inevitable trend that will revolutionize the way we manage identity. By leveraging blockchain and AI, we can create more secure, efficient, and decentralized identity management systems that give individuals greater control and ownership of their personal data. While there are challenges and concerns associated with tokenizing virtual identity, these can be addressed through robust security measures, privacy protections, and adequate laws and regulations. As we continue to embrace blockchain and AI for identity management, we can look forward to a more secure, efficient, and decentralized future.


Exploring the Transhumanist Themes of Ghost in the Shell: Is AI Just a Shell?

The Transhumanist Themes of Ghost in the Shell

Ghost in the Shell, a Japanese manga series written and illustrated by Masamune Shirow, has been a staple of science fiction since its inception in the late 1980s. With a powerful mix of cyberpunk and transhumanist themes, the series has explored the profound implications of artificial intelligence, cyborgs, and human augmentation. In this article, we will delve into the transhumanist themes of Ghost in the Shell and analyze the series’ messages about the future of humanity.

The Evolution of Artificial Intelligence in Ghost in the Shell

One of the most prominent themes in Ghost in the Shell is the evolution of artificial intelligence. The series depicts a world in which AI has become so advanced that it is nearly indistinguishable from human consciousness. The protagonists of the series, members of a cyborg law enforcement unit, must grapple with the ethical implications of creating and interacting with sentient AI.

The Concept of Cyborgs and Augmentation in Ghost in the Shell

Another central theme in Ghost in the Shell is the concept of cyborgs and human augmentation. In this world, it is common for individuals to have cybernetic enhancements that allow them to perform incredible feats of strength, agility, and cognitive ability. However, the series also explores the dark side of this technology, as the line between human and machine becomes increasingly blurred.

The Ethics of Transhumanism in Ghost in the Shell

The ethics of transhumanism are a constant concern in Ghost in the Shell. The series delves into questions about the morality of creating artificial life and the consequences of merging human consciousness with machines. The protagonists must navigate complex ethical dilemmas as they confront the potential dangers of transhumanism.

The Quest for Identity in Ghost in the Shell: Human or Machine?

Ghost in the Shell also explores the quest for identity in a world where the line between human and machine is blurred. The characters struggle to define themselves as either human or machine, and the series raises important questions about what it means to be a conscious being in a world where technology has become so advanced.

The Implications of Consciousness in Ghost in the Shell

The implications of consciousness are a constant concern in Ghost in the Shell. The series explores questions about the nature of consciousness and what it means to be a sentient being. The characters grapple with the possibility that their consciousness may be the result of programming rather than true free will.

The Role of Memories in Shaping Our Identity in Ghost in the Shell

One of the most poignant themes in Ghost in the Shell is the role of memories in shaping our identity. The series explores the idea that our memories are a fundamental part of who we are, and that the loss of memories can be a deeply traumatic experience. The characters must confront the possibility that their memories and identities may be manipulated by external forces, such as artificial intelligence.

The Fear of Losing Humanity in Ghost in the Shell

The fear of losing humanity is a constant theme in Ghost in the Shell. The characters struggle to maintain their humanity as they become increasingly integrated with machines, and the series raises important questions about what it means to be human in a world where technology has become so advanced.

The Boundaries between Real and Virtual Worlds in Ghost in the Shell

Ghost in the Shell also explores the boundaries between real and virtual worlds. The characters must navigate complex virtual environments that are indistinguishable from reality, and the series raises important questions about the nature of reality itself.

The Relevance of Transhumanism in Today’s World: A Reflection on Ghost in the Shell

The themes of transhumanism explored in Ghost in the Shell are more relevant today than ever before. As artificial intelligence and human augmentation become increasingly common, we must grapple with the ethical implications of these technologies and the potential consequences of merging human consciousness with machines.

The Future of Humanity in Ghost in the Shell’s Vision of Transhumanism

Ghost in the Shell presents a vision of the future that is both awe-inspiring and deeply concerning. The series raises important questions about the future of humanity in a world where technology has become so advanced, and the potential consequences of merging human consciousness with machines.

Is AI Just a Shell? Exploring the Transhumanist Themes of Ghost in the Shell

In conclusion, Ghost in the Shell is a powerful exploration of transhumanist themes that raises important questions about the future of humanity. The series presents a vision of the future that is both exhilarating and deeply concerning, and it reminds us that we must grapple with the ethical implications of artificial intelligence, human augmentation, and transhumanism. Ultimately, Ghost in the Shell asks us to consider the question of whether AI is just a shell, or whether it has the potential to become something more.


Exploring Virtual Identity: Systems, Ethics, AI

The Concept of Virtual Identity

The concept of virtual identity refers to the way individuals and entities present themselves in digital environments. It encompasses aspects such as online profiles, avatars, digital footprints, and personal data. Virtual identity has become an integral part of modern life, as more and more people interact with each other and with organizations through digital channels. However, virtual identity also raises significant ethical, legal, and technological challenges that need to be addressed to ensure its responsible and beneficial use.

Historical Overview of Virtual Identity Systems

Virtual identity systems have been around for decades, dating back to the early days of the internet, when bulletin board systems (BBSes) and multi-user dungeons (MUDs) allowed users to create online personas. The advent of social media platforms such as Facebook, Twitter, and Instagram in the 2000s gave rise to a new era of virtual identity, in which millions of users could build and maintain online profiles that reflected their real-life identities. More recently, blockchain-based identity systems have been developed to provide decentralized and secure virtual identity management.

Types of Virtual Identity Systems

There are several types of virtual identity systems, each with its own characteristics and use cases. Some examples include:

  • Personal identity systems: These are systems that allow individuals to create and manage their digital identities, such as social media profiles, email accounts, and online banking accounts.
  • Organizational identity systems: These are systems that allow organizations to establish their digital identities, such as corporate websites, online stores, and customer relationship management (CRM) platforms.
  • Federated identity systems: These are systems that allow users to access multiple digital services using a single set of credentials, such as the OpenID Connect protocol.
  • Self-sovereign identity systems: These are systems that give individuals full control over their digital identities, including the ability to manage their personal data, share it with others, and revoke access when needed.
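As an illustrative sketch only (the class and method names below are hypothetical and not drawn from any real self-sovereign identity standard), the grant-and-revoke control that self-sovereign systems aim for can be modeled like this:

```python
class SelfSovereignIdentity:
    """Toy model of owner-controlled identity data: the owner decides
    who may read each record and can withdraw access at any time."""

    def __init__(self, data: dict):
        self._data = data      # stays with the owner, not a central provider
        self._grants = set()   # parties currently allowed to read

    def grant(self, party: str):
        self._grants.add(party)

    def revoke(self, party: str):
        self._grants.discard(party)

    def read(self, party: str, field: str):
        if party not in self._grants:
            raise PermissionError(f"{party} has no access")
        return self._data[field]

me = SelfSovereignIdentity({"email": "alice@example.com"})
me.grant("bank")
assert me.read("bank", "email") == "alice@example.com"
me.revoke("bank")  # access can be withdrawn at any time
```

Production self-sovereign identity systems achieve this with cryptographic credentials rather than an in-memory access list, but the ownership model is the same.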

Ethics of Virtual Identity Creation and Use

The creation and use of virtual identity raise numerous ethical concerns that need to be addressed. For instance, virtual identity systems can perpetuate bias, discrimination, and exclusion if they are designed or used in ways that favor certain groups over others. Furthermore, virtual identity systems can compromise individual privacy and autonomy if they collect and store personal data without consent or use it for nefarious purposes. Ethical considerations should be central to the design, deployment, and management of virtual identity systems to ensure that they serve the public good.

Regulating Virtual Identity: Legal Frameworks

Virtual identity systems are subject to various legal frameworks that govern their creation and use. These frameworks include data protection regulations, privacy laws, consumer protection laws, and intellectual property laws. For example, the General Data Protection Regulation (GDPR) in Europe imposes strict requirements on the processing of personal data, including the right to be forgotten, the right to access, and the right to rectification. Legal frameworks can help mitigate the risks associated with virtual identity systems and provide a framework for ethical and responsible use.

Case Study: Virtual Identity in Social Media

Social media platforms have become a major source of virtual identity for millions of people worldwide. Users can create online profiles that include personal information, photos, videos, and posts. These profiles can be used to connect with friends and family, share opinions and experiences, and engage with content from others. However, social media platforms have also been criticized for their handling of user data, their role in spreading misinformation and hate speech, and their impact on mental health and well-being. Social media companies are facing increasing pressure to adopt more responsible and transparent practices that protect users’ privacy and mitigate harm.

Virtual Identity and Artificial Intelligence

Artificial intelligence (AI) is playing an increasingly prominent role in virtual identity systems. AI algorithms can be used to analyze large amounts of data to identify patterns, trends, and correlations, which can be used to improve virtual identity management. For example, AI can be used to detect fraudulent activities, prevent identity theft, and personalize user experiences. However, AI also raises significant ethical concerns, such as bias, discrimination, and lack of transparency. Virtual identity systems that rely on AI should be designed and implemented in ways that prioritize ethical considerations and ensure that the benefits outweigh the risks.
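As a toy illustration of the fraud-detection idea (real systems use trained machine-learning models over many features, not a single statistic), an anomaly score can flag behavior that deviates sharply from a user's own history:

```python
from statistics import mean, stdev

def anomaly_score(history: list[float], new_value: float) -> float:
    """How many standard deviations a new observation sits from the
    user's historical behaviour (e.g. login hour, transaction size)."""
    mu, sigma = mean(history), stdev(history)
    return abs(new_value - mu) / sigma if sigma else 0.0

# Typical daily transaction amounts for one (made-up) account
history = [42.0, 55.0, 38.0, 60.0, 47.0]

assert anomaly_score(history, 50.0) < 2    # ordinary amount, let it through
assert anomaly_score(history, 900.0) > 3   # outlier, flag for review
```

The same thresholding pattern generalizes: a model scores each event, and only events above a risk threshold are routed to step-up verification or human review.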

Benefits of Virtual Identity Systems

Virtual identity systems offer numerous benefits to individuals, organizations, and society as a whole. Some of these benefits include:

  • Improved access to digital services and resources
  • Enhanced personalization and customization of user experiences
  • Increased efficiency and convenience in digital transactions
  • Better security and fraud prevention
  • Greater transparency and accountability in identity management

Virtual identity systems can also facilitate social inclusion and empowerment by providing individuals with a platform to express their identity, connect with others, and participate in public discourse.

Risks and Challenges of Virtual Identity

Virtual identity systems also pose significant risks and challenges that need to be addressed. Some of these risks include:

  • Privacy violations and data breaches
  • Identity theft and fraud
  • Discrimination and bias
  • Cyberbullying and online harassment
  • Misinformation and propaganda

Virtual identity systems can also exacerbate existing social and economic inequalities and widen the digital divide if they are not designed and implemented in inclusive and equitable ways.

The Future of Virtual Identity: Trends and Projections

The future of virtual identity is likely to be shaped by several trends and projections. These include:

  • Increasing adoption of blockchain-based identity systems
  • Greater focus on privacy and data protection
  • Advancements in AI and machine learning
  • Growing demand for self-sovereign identity management
  • Emphasis on inclusivity and accessibility

The future of virtual identity will also be shaped by societal, cultural, and political factors that are difficult to predict but will undoubtedly play a significant role.

The Importance of Virtual Identity

Virtual identity is a crucial aspect of modern life that offers both opportunities and challenges. As digital technologies continue to shape the way we interact and communicate with each other, virtual identity will become even more important in shaping our digital selves. To ensure that virtual identity serves the public good and respects individual rights and freedoms, it is essential to adopt an ethical, legal, and responsible approach to its creation and use. By doing so, we can harness the benefits of virtual identity while mitigating its risks and challenges.

References and Further Reading

  1. Solove, D. J. (2013). Understanding privacy. Harvard University Press.
  2. Goffman, E. (1959). The presentation of self in everyday life. Doubleday.
  3. European Union. (2016). General Data Protection Regulation (GDPR). Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016R0679
  4. Kantara Initiative. (2019). Identity and Access Management for the Internet of Things (IoT) Primer. Retrieved from https://kantarainitiative.org/download/80863/
  5. World Economic Forum. (2018). Empowering Identity: Blockchain for Development – A Primer. Retrieved from http://www3.weforum.org/docs/WEF_Empowering_Identity_Blockchain_for_Development_2018.pdf
  6. World Bank Group. (2016). Digital Dividends. Retrieved from https://openknowledge.worldbank.org/bitstream/handle/10986/23347/9781464806711.pdf

Ransomware is already out of control. AI-powered ransomware could be ‘terrifying.’

Hiring AI experts to automate ransomware could be the next step for well-funded ransomware groups seeking to scale up their attacks.

In the perpetual battle between cybercriminals and defenders, the latter have always had one largely unchallenged advantage: The use of AI and machine learning allows them to automate a lot of what they do, especially around detecting and responding to attacks. This leg-up hasn't been nearly enough to keep ransomware at bay, but it has still been far more than what cybercriminals have ever been able to muster in terms of AI and automation.

That’s because deploying AI-powered ransomware would require AI expertise. And the ransomware gangs don’t have it. At least not yet.

But given the wealth accumulated by a number of ransomware gangs in recent years, it may not be long before attackers do bring aboard AI experts of their own, prominent cybersecurity authority Mikko Hyppönen said.

Some of these groups have so much cash — or bitcoin, rather — that they could now potentially compete with legit security firms for talent in AI and machine learning, according to Hyppönen, the chief research officer at cybersecurity firm WithSecure.

Ransomware gang Conti pulled in $182 million in ransom payments during 2021, according to blockchain data platform Chainalysis. Leaks of Conti's chats suggest that the group may have invested some of its take in pricey "zero day" vulnerabilities and the hiring of penetration testers.

"We have already seen [ransomware groups] hire pen testers to break into networks to figure out how to deploy ransomware. The next step will be that they will start hiring ML and AI experts to automate their malware campaigns," Hyppönen told Protocol.

"It's not a far reach to see that they will have the capability to offer double or triple salaries to AI/ML experts in exchange for them to go to the dark side," he said. "I do think it's going to happen in the near future — if I would have to guess, in the next 12 to 24 months."

If this happens, Hyppönen said, "it would be one of the biggest challenges we're likely to face in the near future."

AI for scaling up ransomware

While doom-and-gloom cybersecurity predictions are abundant, with two decades of experience on matters of cybercrime, Hyppönen is not just any prognosticator. He has been with his current company, which until recently was known as F-Secure, since 1991 and has been researching — and vying with — cybercriminals since the early days of the concept.

In his view, the introduction of AI and machine learning to the attacker side would be a distinct change of the game. He's not alone in thinking so.

When it comes to ransomware, for instance, automating large portions of the process could mean an even greater acceleration in attacks, said Mark Driver, a research vice president at Gartner.

Currently, ransomware attacks are often very tailored to the individual target, making the attacks more difficult to scale, Driver said. Even still, the number of ransomware attacks doubled year-over-year in 2021, SonicWall has reported — and ransomware has been getting more successful as well. The percentage of affected organizations that agreed to pay a ransom shot up to 58% in 2021, from 34% the year before, Proofpoint has reported.

However, if attackers were able to automate ransomware using AI and machine learning, that would allow them to go after an even wider range of targets, according to Driver. That could include smaller organizations, or even individuals.

"It's not worth their effort if it takes them hours and hours to do it manually. But if they can automate it, absolutely," Driver said. Ultimately, “it's terrifying.”

The prediction that AI is coming to cybercrime in a big way is not brand new, but it still has yet to manifest, Hyppönen said. Most likely, that's because the ability to compete with deep-pocketed enterprise tech vendors to bring in the necessary talent has always been a constraint in the past.

The huge success of the ransomware gangs in 2021, predominantly Russia-affiliated groups, would appear to have changed that, according to Hyppönen. Chainalysis reports it tracked ransomware payments totaling $602 million in 2021, led by Conti's $182 million. The ransomware group that struck the Colonial Pipeline, DarkSide, earned $82 million last year, and three other groups brought in more than $30 million in that single year, according to Chainalysis.

Hyppönen estimated that less than a dozen ransomware groups might have the capacity to invest in hiring AI talent in the next few years, primarily gangs affiliated with Russia.

‘We would definitely not miss it’

If cybercrime groups hire AI talent with some of their windfall, Hyppönen believes the first thing they'll do is automate the most manually intensive parts of a ransomware campaign. The actual execution of a ransomware attack remains difficult, he said.

"How do you get it on 10,000 computers? How do you find a way inside corporate networks? How do you bypass the different safeguards? How do you keep changing the operation, dynamically, to actually make sure you're successful?" Hyppönen said. “All of that is manual."

Monitoring systems, changing the malware code, recompiling it and registering new domain names to avoid defenses — things it takes humans a long time to do — would all be fairly simple to do with automation. "All of this is done in an instant by machines,” Hyppönen said.

That means it should be very obvious when AI-powered automation comes to ransomware, according to Hyppönen.

"This would be such a big shift, such a big change," he said. "We would definitely not miss it."

But would the ransomware groups really decide to go to all this trouble? Allie Mellen, an analyst at Forrester, said she's not as sure. Given how successful ransomware groups are already, Mellen said it's unclear why they would bother to take this route.

"They're having no problem with the approaches that they're taking right now," she said. "If it ain't broke, don't fix it."

Others see a higher likelihood of AI playing a role in attacks such as ransomware. Like defenders, ransomware gangs clearly have a penchant for evolving their techniques to try to stay ahead of the other side, said Ed Bowen, managing director for the AI Center of Excellence at Deloitte.

"I'm expecting it — I expect them to be using AI to improve their ability to get at this infrastructure," Bowen said. "I think that's inevitable."

Lower barrier to entry

While AI talent is in extremely short supply right now, that will start to change in coming years as a wave of people graduate from university and research programs in the field, Bowen noted.

The barriers to entry in the AI field are also going lower as tools become more accessible to users, Hyppönen said.

"Today, all security companies rely heavily on machine learning — so we know exactly how hard it is to hire experts in this field. Especially people who have expertise both in cybersecurity and in machine learning. So these are hard people to recruit," he told Protocol. "However, it's becoming easier to become an expert, especially if you don't need to be a world-class expert."

That dynamic could increase the pool of candidates for cybercrime organizations who are, simultaneously, richer and “more powerful than ever before," Hyppönen said.

Should this future come to pass, it will have massive implications for cyber defenders, in the event that a greater volume of attacks — and attacks against a broader range of targets — will be the result.

Among other things, this would likely mean that the security industry would itself be looking to compete harder than ever for AI talent, if only to try to stay ahead of automated ransomware and other AI-powered threats.

Between attackers and defenders, "you're always leapfrogging each other" on technical capabilities, Driver said. "It's a war of trying to get ahead of the other side."


Top 5 Real-World Applications for Natural Language Processing

Emerging technologies have greatly facilitated our daily lives. For instance, when you are making yourself dinner but want to call your Mom for the secret recipe, you don’t have to stop what you are doing and dial the number to make the phone call. Instead, all you need to do is to simply speak out — “Hey Siri, call Mom.” And your iPhone automatically makes the call for you.

The application is simple enough, but the technology behind it is sophisticated. The magic that makes the scenario above possible is natural language processing (NLP). And NLP is far more than a pillar for building Siri: it can also empower many other AI-infused applications in the real world.

This article first explains what NLP is and later moves on to introduce five real-world applications of NLP.

What is NLP?

From chatbots to Siri, from virtual support agents to knowledge graphs, the application and usage of NLP are ubiquitous in our daily life. NLP stands for “Natural Language Processing”. Simply put, NLP is the ability of a machine to understand human language. It is the bridge that enables humans to directly interact and communicate with machines. NLP is a subfield of artificial intelligence (AI) and in Bill Gates's words, “NLP is the pearl in the crown of AI.”

With the ever-expanding market size of NLP, countless companies are investing heavily in this industry, and their product lines vary. Many different but specific systems for various tasks and needs can be built by leveraging the power of NLP.

The Five Real World NLP Applications

The most popular and flourishing real-world applications of NLP include conversational user interfaces, AI-powered call quality assessment, intelligent outbound calls, AI-powered call operators, and knowledge graphs, to name a few.

Chatbots in E-commerce

Over five years ago, Amazon realized the potential benefit of applying NLP to its customer service channels. Back then, when customers had issues with their orders, their only recourse was to call a customer service agent. More often than not, what they got from the other end of the phone was “Your call is important to us. Please hold; we’re currently experiencing a high call volume.” Amazon quickly recognized the damage this could do to its brand image and set out to build chatbots.

Nowadays, when you want to quickly get, for example, a refund online, there’s a much more convenient way. All you need to do is activate the Amazon customer service chatbot, type in your order information, and make a refund request. The chatbot interacts and replies much the way a real human does. Apart from chatbots that handle the post-sales customer experience, chatbots also offer pre-sales consulting: if you have any questions about a product you are going to buy, you can simply chat with a bot and get answers.

E-commerce chatbots.

With the emergence of new concepts like the metaverse, NLP can do more than power AI chatbots. Customer support avatars in the metaverse also rely on NLP technology, giving customers a more realistic chatting experience.

Customer support avatar in the metaverse.

Conversational User Interface

Another trendy and promising application is interactive systems. Many well-recognized companies are betting big on CUIs (conversational user interfaces). A CUI is the general term for a user interface that simulates a conversation with a real human being.

The most common CUIs in our everyday life are Apple’s Siri, Microsoft’s Cortana, Google’s Google Assistant, Amazon’s Alexa, etc.

Apple’s Siri is a common example of a conversational user interface.

In addition, CUIs can be embedded into cars, especially EVs (electric vehicles). NIO, an automobile manufacturer dedicated to designing and developing EVs, launched its own CUI, named NOMI, in 2018. Functionally, in-car CUIs work much the same way as Siri: drivers can focus on steering the car while asking the CUI to adjust the A/C temperature, play a song, lock the windows and doors, navigate to the nearest gas station, and so on.

The conversational user interface in cars.

The Algorithm Behind

Despite all the fancy algorithms the technical media have boasted about, one of the most fundamental ways to build a chatbot is to construct and organize FAQ pairs (or, more straightforwardly, question-answer pairs) and use NLP algorithms to figure out whether a user query matches any entry in your FAQ knowledge base. A simple FAQ example would look like this:

Q: Can I have some coffee?

A: No, I’d rather have some ribs.

Now that this FAQ pair is stored in your NLP system, a user can simply ask a similar question, for example: “Coffee, please!” If your algorithm is smart enough, it will figure out that “Coffee, please” closely resembles “Can I have some coffee?” and will output the corresponding answer, “No, I’d rather have some ribs.” And that’s how things are done.

For a very long time, FAQ search algorithms were based solely on inverted indexing. In this approach, you first tokenize the original sentence, then put the tokens and documents into a system like Elasticsearch, which uses an inverted index for indexing and algorithms like TF-IDF or BM25 for scoring.
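As a minimal pure-Python sketch of this classic approach (in practice Elasticsearch handles the indexing and BM25 scoring for you; the tiny corpus and the plain TF-IDF weighting here are illustrative assumptions):

```python
import math
from collections import Counter, defaultdict

# Toy FAQ corpus: each document is one stored question.
docs = [
    "can i have some coffee",
    "how do i reset my password",
    "where is my order",
]

# Build an inverted index: token -> set of document ids.
index = defaultdict(set)
for doc_id, doc in enumerate(docs):
    for token in doc.split():
        index[token].add(doc_id)

def tf_idf_score(query, doc_id):
    """Score one document against the query with a simple TF-IDF sum."""
    doc_tokens = Counter(docs[doc_id].split())
    n_docs = len(docs)
    score = 0.0
    for token in query.split():
        tf = doc_tokens[token]
        df = len(index.get(token, ()))
        if tf and df:
            score += tf * math.log(n_docs / df)
    return score

def search(query):
    """Use the inverted index to find candidates, then rank by TF-IDF."""
    candidates = set()
    for token in query.split():
        candidates |= index.get(token, set())
    return max(candidates, key=lambda d: tf_idf_score(query, d), default=None)

print(search("coffee please"))  # matches document 0, the coffee question
```

Note how the candidate set comes entirely from exact token matches — which is precisely the weakness discussed next.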

This algorithm worked just fine until the deep learning era arrived. One of the most substantial problems with it is that neither tokenization nor inverted indexing takes the semantics of the sentences into account. For instance, in the example above, a user could instead say “Can I have a cup of cappuccino?” With tokenization and inverted indexing, there is a very good chance that the system won’t recognize “coffee” and “a cup of cappuccino” as the same thing and will thus fail to understand the sentence. AI engineers have to do a lot of workarounds for these kinds of issues.

But things got much better with deep learning. With pre-trained models like BERT and pipelines like Towhee, we can easily encode all sentences into vectors, store them in a vector database such as Milvus, and simply calculate vector distances to figure out the semantic resemblance between sentences.
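A sketch of the vector-matching idea, with hand-made toy vectors standing in for real BERT embeddings and a plain dictionary standing in for a vector database like Milvus:

```python
import math

# Toy 4-dimensional "embeddings". A real system would produce these with a
# model like BERT via a pipeline such as Towhee, then store them in a
# vector database like Milvus and use its similarity search instead.
faq_vectors = {
    "Can I have some coffee?": [0.9, 0.1, 0.0, 0.2],
    "Where is my order?":      [0.1, 0.8, 0.3, 0.0],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def nearest_question(query_vector):
    """Return the stored FAQ question whose vector is closest to the query."""
    return max(faq_vectors, key=lambda q: cosine_similarity(query_vector, faq_vectors[q]))

# "A cup of cappuccino, please" would be encoded near the coffee question,
# even though it shares no tokens with it.
query_vector = [0.85, 0.15, 0.05, 0.25]
print(nearest_question(query_vector))
```

The point of the exercise: because similarity is computed in embedding space, paraphrases land close together even when their surface tokens never overlap.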

The algorithm behind conversational user interfaces.

AI-powered Call Quality Control

Call centers are indispensable for many large companies that care about customer experience. To spot issues and improve call quality, assessment is necessary. The problem is that the call centers of large multinational companies receive a tremendous number of inbound calls per day, so it is impractical to listen to each of the millions of calls and evaluate them. Most of the time, when you hear “in order to improve our service, this call could be recorded” on the other end of the phone, it doesn’t necessarily mean your call will be checked for quality of service. In fact, even in big organizations, only 2%-3% of calls are replayed and checked manually by quality control staff.

A call center. Image source: Pexels by Tima Miroshnichenko.

This is where NLP can help. An AI-powered call quality control engine built on NLP can automatically spot issues in calls and handle massive volumes of calls in a relatively short period of time. The engine helps detect whether the call operator uses the proper opening and closing sentences and avoids banned slang and taboo words during the call. This can easily increase the check rate from 2%-3% to 100%, with even less manpower and lower costs.

With a typical AI-powered call quality control service, users first upload the call recordings to the service. Automatic speech recognition (ASR) is then used to transcribe the audio files into text. All the text is vectorized using deep learning models and subsequently stored in a vector database. The service compares the similarity between the transcript vectors and vectors generated from a set of criteria, such as taboo-word vectors and vectors of the desired opening and closing sentences. With efficient vector similarity search, handling great volumes of call recordings becomes much more accurate and less time-consuming.
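The criteria-checking step can be sketched as follows. The transcript, the phrase lists, and the simple substring matching are all illustrative assumptions: a production service would run ASR first and compare sentence vectors rather than raw strings.

```python
# A transcript would come from an ASR step in a real pipeline; here it is
# hard-coded. Production systems compare embedding vectors instead of
# substrings, but substring checks are enough to illustrate the criteria.
BANNED_PHRASES = {"shut up", "not my problem"}
REQUIRED_OPENING = "thank you for calling"
REQUIRED_CLOSING = "have a nice day"

def check_call(transcript):
    """Return a dict of quality-control findings for one transcript."""
    text = transcript.lower()
    return {
        "has_opening": text.startswith(REQUIRED_OPENING),
        "has_closing": text.rstrip(".!").endswith(REQUIRED_CLOSING),
        "banned_hits": sorted(p for p in BANNED_PHRASES if p in text),
    }

report = check_call(
    "Thank you for calling. That is not my problem. Have a nice day."
)
print(report)
```

Every call gets a report like this, which is how the check rate can reach 100% without human listeners.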

Intelligent outbound calls

Believe it or not, some of the phone calls you receive are not from humans! Chances are that a robot is talking on the other end of the call. To reduce operating costs, some companies leverage AI phone calls for marketing purposes and much more. Google launched Google Duplex back in 2018, a system that can conduct human-computer conversations and accomplish real-world tasks over the phone. The mechanism behind AI phone calls is pretty much the same as that behind chatbots.

A user asks the Google Assistant for an appointment, which the Assistant then schedules by having Duplex call the business. Image source: Google AI blog.

In other cases, you might have also heard something like this on the phone:

“Thank you for calling. To set up a new account, press 1. To modify the password of an existing account, press 2. To speak to a customer service agent, press 0.”

or, in recent years, something like this (with a strong robot accent):

“Please tell me what I can help you with. For example, you can ask me to ‘check the balance of my account’.”

This is known as interactive voice response (IVR): an automated phone system that interacts with callers and acts based on their answers and actions. Callers are usually offered choices via a menu, and their choice determines how the system responds. If a request is too complex, the system can route the caller to a human agent. This greatly reduces labor costs and saves time for companies.

Intents are usually very helpful when dealing with calls like these. An intent is a group of sentences or utterances representing a certain user intention. For example, “weather forecast” can be an intent, and this intent can be triggered by different sentences. See the picture of a Google Dialogflow example below. Intents can be organized together to accomplish complicated interactive human-computer conversations, like booking a restaurant or ordering a flight ticket.
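A minimal sketch of intent matching. The intents and example utterances below are made up, and the token-overlap scoring is a stand-in for what a platform like Dialogflow does with trained models:

```python
# Each intent maps to example utterances, similar to training phrases in
# Google Dialogflow. Matching here uses plain token overlap; a real system
# would use vector similarity or a trained classifier instead.
INTENTS = {
    "weather_forecast": ["what is the weather", "will it rain tomorrow"],
    "check_balance":    ["check the balance of my account", "how much money do i have"],
}

def detect_intent(query):
    """Return the intent whose examples share the most tokens with the query."""
    query_tokens = set(query.lower().split())
    best_intent, best_overlap = None, 0
    for intent, examples in INTENTS.items():
        for example in examples:
            overlap = len(query_tokens & set(example.split()))
            if overlap > best_overlap:
                best_intent, best_overlap = intent, overlap
    return best_intent

print(detect_intent("check my account balance"))
```

Once a query is mapped to an intent, the IVR system knows which branch of the dialogue to take, or when to hand off to a human agent (e.g. when no intent scores above a threshold).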

Google Dialogflow.

AI-powered call operators

By adopting NLP, companies can take call operation services to the next level. Conventionally, call operators need to look through a hundred-page professional manual to deal with each customer call and solve each problem case by case. This process is extremely time-consuming and, most of the time, cannot provide callers with satisfying solutions. With an AI-powered call center, however, dealing with customer calls can be both pleasant and efficient.

AI-aided call operators with greater efficiency. Image source: Pexels by MART PRODUCTION.

When a customer dials in, the system immediately searches for the customer and their order information in the database, so the call operator has a general idea of the case: how old the customer is, their marital status, things they have purchased in the past, and so on. During the conversation, the whole chat is recorded, with a live chat log shown on the screen (thanks to live automatic speech recognition). Moreover, when a customer asks a hard question or starts complaining, the machine catches it automatically, looks into the AI database, and tells the operator the best way to respond. With a decent deep learning model, your service could give customers correct answers to over 99% of their questions and always handle complaints with the most appropriate words.

Knowledge graph

A knowledge graph is an information-based graph that consists of nodes, edges, and labels, where a node (or vertex) usually represents an entity: a person, a place, an item, or an event. Edges are the lines connecting the nodes, and labels signify the relationship between a pair of nodes. A typical knowledge graph example is shown below:

A sample knowledge graph. Source: A guide to Knowledge Graphs.

The raw data for constructing a knowledge graph may come from various sources: unstructured documents, semi-structured data, and structured knowledge. Various algorithms must be applied to these data to extract entities (nodes) and the relationships between them (edges), including entity recognition, relation extraction, label mining, and entity linking. To build a knowledge graph from documents, for instance, we first use deep learning pipelines to generate embeddings and store them in a vector database.
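At its core, a knowledge graph can be sketched as a set of triples. The entities and relations below are made-up examples; in practice they would be produced by the extraction steps described above:

```python
# A knowledge graph as a list of (subject, relation, object) triples.
# In a real system the triples come from entity recognition and relation
# extraction over raw documents; here they are hand-written examples.
triples = [
    ("Leonardo da Vinci", "painted", "Mona Lisa"),
    ("Mona Lisa", "is located in", "Louvre"),
    ("Louvre", "is in", "Paris"),
]

def neighbors(node):
    """Return every (relation, other_node) edge touching the given node."""
    out = [(r, o) for s, r, o in triples if s == node]
    out += [(r, s) for s, r, o in triples if o == node]
    return out

print(neighbors("Mona Lisa"))
```

Traversing such edges (e.g. hopping from a painting to its museum to its city) is what lets downstream applications like question answering follow chains of facts.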

Once the knowledge graph is constructed, you can see it as the underlying pillar for many more specific applications, such as smart search engines, question-answering systems, recommender systems, advertising, and more.

Endnote

This article introduces the top five real-world NLP applications. Leveraging NLP in your business can greatly reduce operational costs and improve user experience. Of course, apart from the five applications introduced in this article, NLP can facilitate more business scenarios including social media analytics, translation, sentiment analysis, meeting summarizing, and more.

There are also a bunch of NLP+, or more generally, AI+, concepts that have become more and more popular in recent years. For example, with AI + RPA (robotic process automation), you can easily build smart pipelines that complete workflows for you automatically, such as an expense reimbursement workflow where you just upload your receipt and AI + RPA does all the rest. There’s also AI + OCR, where you just take a picture of, say, a contract, and AI tells you whether there is a mistake in it, for example, a company’s telephone number that doesn’t match the number shown in a Google search.

Source


Responsible AI – Privacy and Security Requirements

Training data and prediction requests can both contain sensitive information about people or businesses, which has to be protected. How do you safeguard the privacy of individuals? What steps are taken to ensure that individuals have control of their data? Many countries have regulations to ensure privacy and security.

In Europe you have the GDPR (General Data Protection Regulation), and in California there is the CCPA (California Consumer Privacy Act). Fundamentally, both give individuals control over their data and require that companies protect the data being used in a model. When data processing is based on consent, an individual has the right to revoke that consent at any time.

Defending ML Models Against Attacks – Ensuring the Privacy of Consumer Data

I have briefly discussed the tools for adversarial training – the CleverHans and FoolBox Python libraries – here: Model Debugging: Sensitivity Analysis, Adversarial Training, Residual Analysis. Let us now look at more stringent means of protecting an ML model against attacks. It is important to protect the ML model against attacks, thus ensuring the privacy and security of the data. An ML model may be attacked in different ways; some literature classifies the attacks into “information harms” and “behavioural harms”. An information harm occurs when information is allowed to leak from the model. Information harms take different forms: membership inference, model inversion, and model extraction. In membership inference, the attacker can determine whether some information was part of the training data. In model inversion, the attacker can extract the training data from the model, and in model extraction, the attacker is able to extract the entire model!

A behavioural harm occurs when the attacker can change the behaviour of the ML model itself, for example by inserting malicious data. I have given an example involving an autonomous vehicle in this article: Model Debugging: Sensitivity Analysis, Adversarial Training, Residual Analysis.

Cryptography | Differential privacy to protect data

You should consider privacy-enhancing technologies like secure multi-party computation (SMPC) and fully homomorphic encryption (FHE). SMPC involves multiple systems training or serving the model while the actual data is kept secure.

In FHE, the data is encrypted: prediction requests involve encrypted data, and training of the model is also carried out on encrypted data. This results in a heavy computational cost, because the data is never decrypted except by the user. Users send encrypted prediction requests and receive back an encrypted result. The goal is that, using cryptography, you can protect consumers’ data.
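To illustrate the flavour of SMPC, here is a toy additive secret-sharing scheme — a building block of real SMPC protocols, not a production implementation and not FHE. Each party holds one random-looking share that reveals nothing on its own; only the sum of all shares recovers the value:

```python
import random

PRIME = 2_147_483_647  # field modulus for the shares

def split_into_shares(secret, n_parties):
    """Additively secret-share `secret`: no single share reveals it."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    # The last share is chosen so that all shares sum to the secret mod PRIME.
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Only the sum of all shares recovers the secret."""
    return sum(shares) % PRIME

shares = split_into_shares(42, 3)
print(reconstruct(shares))  # 42
```

Because addition distributes over the shares, parties can even sum their shares of different secrets locally and reconstruct only the aggregate — the basic trick behind privately computing sums and averages across organizations.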

Differential Privacy in Machine Learning

Differential privacy protects data by adding noise to it so that attackers cannot identify the real content. SmartNoise is an open-source project that contains components for building machine learning solutions with differential privacy. SmartNoise is made up of the following top-level components:
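The core idea can be sketched with the Laplace mechanism. This hand-rolled version is for illustration only; a library like SmartNoise ships vetted implementations of such mechanisms:

```python
import math
import random

def laplace_noise(scale):
    """Draw one sample from a Laplace(0, scale) distribution."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(true_count, epsilon):
    """Release a count with Laplace noise; the sensitivity of a count is 1."""
    return true_count + laplace_noise(1.0 / epsilon)

random.seed(0)
print(private_count(100, epsilon=1.0))  # a value near 100, but not exactly 100
```

A smaller epsilon means a larger noise scale and stronger privacy, at the cost of a less accurate released count.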

✔️SmartNoise Core Library

✔️SmartNoise SDK Library

This is a good read to understand differential privacy: https://docs.microsoft.com/en-us/azure/machine-learning/concept-differential-privacy

Private Aggregation of Teacher Ensembles (PATE)

This follows the knowledge distillation concept that I discussed here: Post 1: Knowledge Distillation, Post 2: Knowledge Distillation. PATE begins by dividing the data into “k” partitions with no overlaps. It then trains k teacher models, one on each partition, and aggregates their results into an aggregate teacher model. During the aggregation for the aggregate teacher, you add noise to the data and the output.

To train the student model, you take unlabelled public data and feed it to the aggregate teacher; the result is labelled data, with which the student model is trained. For deployment, you use only the student model.
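The noisy aggregation step can be sketched as follows, assuming Laplace noise added to the per-label vote counts (real PATE implementations also track the privacy budget spent across queries):

```python
import math
import random
from collections import Counter

def laplace_noise(scale):
    """Draw one sample from a Laplace(0, scale) distribution."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def noisy_aggregate(teacher_labels, scale=1.0):
    """PATE-style aggregation: add Laplace noise to each label's vote
    count, then return the label with the highest noisy count."""
    votes = Counter(teacher_labels)
    return max(votes, key=lambda label: votes[label] + laplace_noise(scale))

random.seed(0)
# 100 teachers, trained on disjoint partitions, vote on one example.
teacher_labels = ["cat"] * 90 + ["dog"] * 10
print(noisy_aggregate(teacher_labels))
```

When the teachers agree strongly, the noise rarely changes the winning label, so the student learns accurate labels while any single training record's influence stays hidden behind the noise.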

The process is illustrated in the figure below:


PATE (Private Aggregation of Teacher Ensembles)

Source

Credits: