
Exploring Virtual Identity: Systems, Ethics, AI

The Concept of Virtual Identity

The concept of virtual identity refers to the way individuals and entities present themselves in digital environments. It encompasses aspects such as online profiles, avatars, digital footprints, and personal data. Virtual identity has become an integral part of modern life, as more and more people interact with each other and with organizations through digital channels. However, virtual identity also raises significant ethical, legal, and technological challenges that need to be addressed to ensure its responsible and beneficial use.

=== Historical Overview of Virtual Identity Systems

Virtual identity systems have been around for decades, dating back to the early days of the internet when bulletin board systems (BBS) and multi-user dungeons (MUD) allowed users to create online personas. The advent of social media platforms such as Facebook, Twitter, and Instagram in the 2000s gave rise to a new era of virtual identity, where millions of users could build and maintain online profiles that reflected their real-life identities. More recently, blockchain-based identity systems are being developed as a way to provide decentralized and secure virtual identity management.

=== Types of Virtual Identity Systems

There are several types of virtual identity systems, each with its own characteristics and use cases. Some examples include:

  • Personal identity systems: These are systems that allow individuals to create and manage their digital identities, such as social media profiles, email accounts, and online banking accounts.
  • Organizational identity systems: These are systems that allow organizations to establish their digital identities, such as corporate websites, online stores, and customer relationship management (CRM) platforms.
  • Federated identity systems: These are systems that allow users to access multiple digital services using a single set of credentials, such as the OpenID Connect protocol.
  • Self-sovereign identity systems: These are systems that give individuals full control over their digital identities, including the ability to manage their personal data, share it with others, and revoke access when needed.
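Federated login via a protocol like OpenID Connect, for instance, delivers the user's identity to a service as a signed JWT (the ID token). The Python sketch below shows how such a token's claims can be inspected; the token and claim values are invented for illustration, and a real relying party must verify the signature against the provider's published keys before trusting any claim.

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the claims segment of a JWT-formatted OIDC ID token.

    NOTE: this only inspects the claims; a real relying party MUST
    verify the signature before trusting them.
    """
    header_b64, payload_b64, _signature = token.split(".")
    # JWT segments are base64url-encoded without padding; restore it.
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# A made-up token carrying typical OIDC claims (iss, sub, aud).
claims = {"iss": "https://idp.example.com", "sub": "user-42", "aud": "my-app"}
fake_payload = (base64.urlsafe_b64encode(json.dumps(claims).encode())
                .decode().rstrip("="))
fake_token = "eyJhbGciOiJub25lIn0." + fake_payload + ".sig"
print(decode_jwt_payload(fake_token)["sub"])  # user-42
```

The `sub` (subject) claim is what lets many services share one set of credentials: each service trusts the identity provider's assertion rather than storing its own password for the user.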

=== Ethics of Virtual Identity Creation and Use

The creation and use of virtual identity raise numerous ethical concerns that need to be addressed. For instance, virtual identity systems can perpetuate bias, discrimination, and exclusion if they are designed or used in ways that favor certain groups over others. Furthermore, virtual identity systems can compromise individual privacy and autonomy if they collect and store personal data without consent or use it for nefarious purposes. Ethical considerations should be central to the design, deployment, and management of virtual identity systems to ensure that they serve the public good.

=== Regulating Virtual Identity: Legal Frameworks

Virtual identity systems are subject to various legal frameworks that govern their creation and use. These frameworks include data protection regulations, privacy laws, consumer protection laws, and intellectual property laws. For example, the General Data Protection Regulation (GDPR) in Europe imposes strict requirements on the processing of personal data, including the right to be forgotten, the right to access, and the right to rectification. Legal frameworks can help mitigate the risks associated with virtual identity systems and provide a framework for ethical and responsible use.
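To make the GDPR rights mentioned above concrete, here is a minimal Python sketch of a personal-data store exposing access, rectification, and erasure operations. The class and record layout are invented for illustration; a real implementation would also need audit logging, propagation to backups, and notification of data processors.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalDataStore:
    """Toy store illustrating three GDPR data-subject rights."""
    records: dict = field(default_factory=dict)

    def access(self, subject_id):
        # Right of access (Art. 15): the subject can see their data.
        return self.records.get(subject_id)

    def rectify(self, subject_id, key, value):
        # Right to rectification (Art. 16): correct inaccurate data.
        self.records[subject_id][key] = value

    def erase(self, subject_id):
        # Right to erasure / "right to be forgotten" (Art. 17).
        self.records.pop(subject_id, None)

store = PersonalDataStore({"u1": {"email": "old@example.com"}})
store.rectify("u1", "email", "new@example.com")
store.erase("u1")
print(store.access("u1"))  # None
```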

=== Case Study: Virtual Identity in Social Media

Social media platforms have become a major source of virtual identity for millions of people worldwide. Users can create online profiles that include personal information, photos, videos, and posts. These profiles can be used to connect with friends and family, share opinions and experiences, and engage with content from others. However, social media platforms have also been criticized for their handling of user data, their role in spreading misinformation and hate speech, and their impact on mental health and well-being. Social media companies are facing increasing pressure to adopt more responsible and transparent practices that protect users’ privacy and mitigate harm.

=== Virtual Identity and Artificial Intelligence

Artificial intelligence (AI) is playing an increasingly prominent role in virtual identity systems. AI algorithms can be used to analyze large amounts of data to identify patterns, trends, and correlations, which can be used to improve virtual identity management. For example, AI can be used to detect fraudulent activities, prevent identity theft, and personalize user experiences. However, AI also raises significant ethical concerns, such as bias, discrimination, and lack of transparency. Virtual identity systems that rely on AI should be designed and implemented in ways that prioritize ethical considerations and ensure that the benefits outweigh the risks.
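As a simple illustration of AI-assisted fraud detection, the following Python sketch flags a login whose hour of day deviates sharply from a user's history. The z-score heuristic and threshold are invented for illustration; production systems would combine many richer signals (geolocation, device fingerprints) and trained models.

```python
import statistics

def flag_anomalous_login(history_hours, new_hour, threshold=2.0):
    """Flag a login whose hour-of-day deviates strongly from history.

    A crude z-score heuristic, for illustration only.
    """
    mean = statistics.mean(history_hours)
    stdev = statistics.pstdev(history_hours) or 1.0  # avoid division by zero
    z = abs(new_hour - mean) / stdev
    return z > threshold

history = [9, 10, 9, 11, 10, 9, 10]  # user normally logs in mid-morning
print(flag_anomalous_login(history, 10))  # False
print(flag_anomalous_login(history, 3))   # True: 3 a.m. is unusual
```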

=== Benefits of Virtual Identity Systems

Virtual identity systems offer numerous benefits to individuals, organizations, and society as a whole. Some of these benefits include:

  • Improved access to digital services and resources
  • Enhanced personalization and customization of user experiences
  • Increased efficiency and convenience in digital transactions
  • Better security and fraud prevention
  • Greater transparency and accountability in identity management

Virtual identity systems can also facilitate social inclusion and empowerment by providing individuals with a platform to express their identity, connect with others, and participate in public discourse.

=== Risks and Challenges of Virtual Identity

Virtual identity systems also pose significant risks and challenges that need to be addressed. Some of these risks include:

  • Privacy violations and data breaches
  • Identity theft and fraud
  • Discrimination and bias
  • Cyberbullying and online harassment
  • Misinformation and propaganda

Virtual identity systems can also exacerbate existing social and economic inequalities and widen the digital divide if they are not designed and implemented in inclusive and equitable ways.

=== The Future of Virtual Identity: Trends and Projections

The future of virtual identity is likely to be shaped by several trends and projections. These include:

  • Increasing adoption of blockchain-based identity systems
  • Greater focus on privacy and data protection
  • Advancements in AI and machine learning
  • Growing demand for self-sovereign identity management
  • Emphasis on inclusivity and accessibility

The future of virtual identity will also be shaped by societal, cultural, and political factors that are difficult to predict but will undoubtedly play a significant role.

The Importance of Virtual Identity

Virtual identity is a crucial aspect of modern life that offers both opportunities and challenges. As digital technologies continue to shape the way we interact and communicate with each other, virtual identity will become even more important in shaping our digital selves. To ensure that virtual identity serves the public good and respects individual rights and freedoms, it is essential to adopt an ethical, legal, and responsible approach to its creation and use. By doing so, we can harness the benefits of virtual identity while mitigating its risks and challenges.

=== References and Further Reading

  1. Solove, D. J. (2013). Understanding privacy. Harvard University Press.
  2. Goffman, E. (1959). The presentation of self in everyday life. Doubleday.
  3. European Union. (2016). General Data Protection Regulation (GDPR).
  4. Kantara Initiative. (2019). Identity and Access Management for the Internet of Things (IoT) Primer.
  5. World Economic Forum. (2018). Empowering Identity: Blockchain for Development – A Primer.
  6. World Bank Group. (2016). Digital Dividends.

Will crypto make us live longer?

Imagine a world where patients and their families can directly fund scientists developing the next breakthrough drug or treatment that they need. A world in which drug development is a collaborative, open, and decentralized process. Such a future is not only possible, but the decentralized science movement is making it a reality.

Through blockchain, crypto, and NFTs of course. And that’s exactly what we are going to uncover on today’s CoinMarketCap episode:

CoinMarketCap is the world's most-referenced price-tracking website for cryptoassets in the rapidly growing cryptocurrency space. Its mission is to make crypto accessible all around the world through data and content.

DeSci Foundation
"Open science,
fair peer-review,
efficient funding.

We support the development of a more verifiable, more open, and fairer ecosystem for science and scientists."

The case for hybrid artificial intelligence

Cognitive scientist Gary Marcus believes advances in artificial intelligence will rely on hybrid AI, the combination of symbolic AI and neural networks.

Deep learning, the main innovation that has renewed interest in artificial intelligence in the past years, has helped solve many critical problems in computer vision, natural language processing, and speech recognition. However, as deep learning matures and moves from its hype peak toward the trough of disillusionment, it is becoming clear that it is missing some fundamental components.

This is a reality that many of the pioneers of deep learning and its main component, artificial neural networks, have acknowledged in various AI conferences in the past year. Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, the three “godfathers of deep learning,” have all spoken about the limits of neural networks.

The question is, what is the path forward?

At NeurIPS 2019, Bengio discussed system 2 deep learning, a new generation of neural networks that can handle compositionality, out-of-distribution generalization, and causal structures. At the AAAI 2020 Conference, Hinton discussed the shortcomings of convolutional neural networks (CNN) and the need to move toward capsule networks.

But for cognitive scientist Gary Marcus, the solution lies in developing hybrid models that combine neural networks with symbolic artificial intelligence, the branch of AI that dominated the field before the rise of deep learning. In a paper titled “The Next Decade in AI: Four Steps Toward Robust Artificial Intelligence,” Marcus discusses how hybrid artificial intelligence can solve some of the fundamental problems deep learning faces today.

Connectionists, the proponents of pure neural network–based approaches, reject any return to symbolic AI. Hinton has compared hybrid AI to combining electric motors and internal combustion engines. Bengio has also shunned the idea of hybrid artificial intelligence on several occasions.

But Marcus believes the path forward lies in putting aside old rivalries and bringing together the best of both worlds.

What’s missing in deep neural networks?

The limits of deep learning have been comprehensively discussed. But here, I would like to focus on the generalization of knowledge, a topic that has been widely discussed in the past few months. While human-level AI is at least decades away, a nearer goal is robust artificial intelligence.

Here’s how Marcus defines robust AI: “Intelligence that, while not necessarily superhuman or self-improving, can be counted on to apply what it knows to a wide range of problems in a systematic and reliable way, synthesizing knowledge from a variety of sources such that it can reason flexibly and dynamically about the world, transferring what it learns in one context to another, in the way that we would expect of an ordinary adult.”

Those are key features missing from current deep learning systems. Deep neural networks can ingest large amounts of data and exploit huge computing resources to solve very narrow problems, such as detecting specific kinds of objects or playing complicated video games in specific conditions.

However, they’re very bad at generalizing their skills. “We often can’t count on them if the environment differs, sometimes even in small ways, from the environment on which they are trained,” Marcus writes.

Case in point: An AI trained on thousands of chair pictures won’t be able to recognize an upturned chair if such a picture was not included in its training dataset. A super-powerful AI trained on tens of thousands of hours of StarCraft 2 gameplay can play at championship level, but only under limited conditions. As soon as you change the map or the units in the game, its performance will take a nosedive. And it can’t play any game that is similar to StarCraft 2, such as Warcraft or Command & Conquer.

A deep learning algorithm that plays championship-level StarCraft can’t play a similar game. It won’t even be able to maintain its level of gameplay if the settings are changed the slightest bit.

The current approach to solve AI’s generalization problem is to scale the models: Create bigger neural networks, gather larger datasets, use larger server clusters, and train the reinforcement learning algorithms for longer hours.

“While there is value in such approaches, a more fundamental rethink is required,” Marcus writes in his paper.

In fact, the “bigger is better” approach has yielded modest results at best while creating several other problems that remain unsolved. For one thing, the huge cost of developing and training large neural networks is threatening to centralize the field in the hands of a few very wealthy tech companies.

When it comes to dealing with language, the limits of neural networks become even more evident. Language models such as OpenAI’s GPT-2 and Google’s Meena chatbot each have more than a billion parameters (the basic unit of neural networks) and have been trained on gigabytes of text data. But they still make some of the dumbest mistakes, as Marcus has pointed out in an article earlier this year.

“When sheer computational power is applied to open-ended domains—such as conversational language understanding and reasoning about the world—things never turn out quite as planned. Results are invariably too pointillistic and spotty to be reliable,” Marcus writes.

What’s important here is the term “open-ended domain.” Open-ended domains can be general-purpose chatbots and AI assistants, roads, homes, factories, stores, and many other settings where AI agents interact and cooperate directly with humans. As the past years have shown, the rigid nature of neural networks prevents them from tackling problems in open-ended domains. In his paper, Marcus discusses this topic in detail.

Why do we need to combine symbolic AI and neural networks?

Connectionists believe that approaches based on pure neural network structures will eventually lead to robust or general AI. After all, the human brain is made of physical neurons, not physical variables and class placeholders and symbols.

But as Marcus points out in his essay, “Symbol manipulation in some form seems to be essential for human cognition, such as when a child learns an abstract linguistic pattern, or the meaning of a term like sister that can be applied in an infinite number of families, or when an adult extends a familiar linguistic pattern in a novel way that extends beyond a training distribution.”

Marcus’ premise is backed by research from several cognitive scientists over the decades, including his own book The Algebraic Mind and the more recent Rebooting AI. (Another great read in this regard is the second chapter of Steven Pinker’s book How the Mind Works, in which he lays out evidence that symbol manipulation is an essential part of the brain’s functionality.)

We already have proof that symbolic systems work. It’s everywhere around us. Our web browsers, operating systems, applications, games, etc. are based on rule-based programs. “The same tools are also, ironically, used in the specification and execution of virtually all of the world’s neural networks,” Marcus notes.

Decades of computer science and cognitive science have proven that being able to store and manipulate abstract concepts is an essential part of any intelligent system. And that is why symbol-manipulation should be a vital component of any robust AI system.

“It is from there that the basic need for hybrid architectures that combine symbol manipulation with other techniques such as deep learning most fundamentally emerges,” Marcus says.

Examples of hybrid AI systems


The benefit of hybrid AI systems is that they can combine the strengths of neural networks and symbolic AI. Neural nets can find patterns in the messy information we collect from the real world, such as visual and audio data, large corpora of unstructured text, emails, chat logs, etc. And on their part, rule-based AI systems can perform symbol-manipulation operations on the extracted information.
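A toy Python pipeline can make this division of labor concrete. The `perceive` stub stands in for a neural network that extracts symbols from raw input, and the `answer` function performs rule-based symbol manipulation over them; both functions and the object attributes are invented for illustration.

```python
# Stage 1: "perception". In a real hybrid system a neural network would
# map raw pixels to symbols; this stub returns detected objects directly.
def perceive(image):
    return [{"shape": "cube", "color": "red"},
            {"shape": "sphere", "color": "blue"}]

# Stage 2: symbol manipulation. Rule-based reasoning over the extracted
# symbols, independent of how they were produced.
def answer(symbols, question_color):
    matches = [s for s in symbols if s["color"] == question_color]
    return [s["shape"] for s in matches]

symbols = perceive(None)  # no real image in this sketch
print(answer(symbols, "red"))  # ['cube']
```

The point of the split is that the reasoning stage generalizes for free: a new color or shape never seen during "training" is handled by the same rule, which is exactly what pure neural approaches struggle with.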

Despite the heavy dismissal of hybrid artificial intelligence by connectionists, there are plenty of examples that show the strengths of these systems at work. As Marcus notes in his paper, “Researchers occasionally build systems containing the apparatus of symbol-manipulation, without acknowledging (or even considering the fact) that they have done so.” Marcus cites several examples where hybrid AI systems are quietly solving vital problems.

One example is the Neuro-Symbolic Concept Learner (NSCL), a hybrid AI system developed by researchers at MIT and IBM. The NSCL combines neural networks with symbolic reasoning to solve visual question answering (VQA) problems, a class of tasks that is especially difficult to tackle with pure neural network–based approaches. The researchers showed that NSCL was able to solve the CLEVR VQA dataset with impressive accuracy. Moreover, the hybrid AI model achieved the feat using much less training data while producing explainable results, addressing two fundamental problems plaguing deep learning.

Google’s search engine is a massive hybrid AI that combines state-of-the-art deep learning techniques such as Transformers and symbol-manipulation systems such as knowledge-graph navigation tools.

AlphaGo, one of the landmark AI achievements of the past few years, is another example of combining symbolic AI and deep learning.

“There are plenty of first steps towards building architectures that combine the strengths of the symbolic approaches with insights from machine learning, in order to develop better techniques for extracting and generalizing abstract knowledge from large, often noisy data sets,” Marcus writes.

The paper goes into much more detail about the components of hybrid AI systems, and the integration of vital elements such as variable binding, knowledge representation and causality with statistical approximation.

“My own strong bet is that any robust system will have some sort of mechanism for variable binding, and for performing operations over those variables once bound. But we can’t tell unless we look,” Marcus writes.

Lessons from history

One thing to commend Marcus on is his persistence in arguing that the field must bring together all of AI’s achievements to advance. And he has done so almost single-handedly in the past years, against overwhelming odds, while most of the prominent voices in artificial intelligence have been dismissing the idea of revisiting symbol manipulation.

Marcus sticking to his guns is almost reminiscent of how Hinton, Bengio, and LeCun continued to push neural networks forward in the decades where there was no interest in them. Their faith in deep neural networks eventually bore fruit, triggering the deep learning revolution in the early 2010s, and earning them a Turing Award in 2019.

It will be interesting to see where Marcus’ quest for creating robust, hybrid AI systems will lead.



Genomic Surveillance

Executive summary

Genomic surveillance in Belgium is based on whole genome sequencing (WGS) of a selection of representative samples, complemented with targeted active surveillance initiatives and targeted molecular markers aimed at early detection and precise monitoring of the epidemiological evolution of variants of concern (VOCs). Currently, 5,050 sequences of samples collected in Belgium are available on GISAID in open access. During week 3 of 2021, Belgium achieved a coverage of 3.5% of all positive samples being sequenced.

During the last 2 weeks (weeks 5 and 6), 146 samples were sequenced as part of the baseline surveillance, among which 48 (33%) were 501Y.V1 and 8 (5%) were 501Y.V2.

Since week 52 of 2020, Belgium has experienced multiple introductions of VOCs followed by sustained local transmission. As a consequence of the higher transmissibility of these variants, we observe a progressive shift in viral populations, with 501Y.V1 expected to represent the majority of circulating strains by early March. Together with the rollout of vaccination, genomic surveillance will monitor the eventual positive selection of VOCs harbouring immune escape mutations such as S:E484K.

During the last 2 weeks, the progressive replacement of the viral population by more transmissible strains did not alter the overall stability of the epidemic in Belgium. This is probably due to a combination of an active public health response and a limited number of social interactions in the population. The risk of disruption of this equilibrium remains, as the proportion of more transmissible viruses will continue to rise, but this risk can be mitigated by a combination of active outbreak control interventions, maintained efforts to reduce transmission in the population, and rapid roll-out of vaccination.


VI6: Network Investigations

Network Investigations

Eoghan Casey, ... Terrance Maguire, in Handbook of Digital Forensics and Investigation, 2010

Publisher Summary

In order to conduct an investigation involving computer networks, practitioners need to understand network architecture, be familiar with network devices and protocols, and have the ability to interpret the various network-level logs. Practitioners must also be able to search and combine large volumes of log data using search tools like Splunk or custom scripts. Digital forensic analysts must be able to slice and dice network traffic using a variety of tools to extract the maximum information out of this valuable source of network-related digital evidence. This chapter provides an overview of network protocols, references to more in-depth materials, and discusses how forensic science is applied to networks. To help investigators interpret and utilize this information in a network-related investigation, this chapter focuses on the most common kinds of digital evidence found on networks, and provides information that can be generalized to other situations. This chapter assumes a basic understanding of network topology and associated technologies.

Overview of Enterprise Networks

Digital investigators must be sufficiently familiar with network components found in a typical organization to identify, preserve, and interpret the key sources of digital evidence in an Enterprise. This chapter concentrates on digital evidence associated with routers, firewalls, authentication servers, network sniffers, Virtual Private Networks (VPNs), and Intrusion Detection Systems (IDS). This section provides an overview of how logs from these various components of an Enterprise network can be useful in an investigation. Consider the simplified scenario in Figure 9.1 involving a secure server that is being misused in some way.

Logs generated by network security devices like firewalls and IDSs can be a valuable source of data in a network investigation. Access attempts blocked by a firewall or malicious activities detected by an IDS may be the first indication of a problem, alarming system administrators enough to report the activity to digital investigators. As discussed in Chapter 4, “Intrusion Investigation,” configuring firewalls to record successful access as well as denied connection attempts gives digital investigators more information about how the system was accessed and possibly misused. By design, IDS devices only record events of interest, including known attack signatures like buffer overflows and potentially malicious activities like shell code execution. However, some IDSs can be configured to capture the full contents of network traffic associated with a particular event, enabling digital forensic analysts to recover valuable details like the commands that were executed, files that were taken, and the malicious payload that was uploaded as demonstrated later in this chapter.
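As a sketch of this kind of log analysis, the following Python snippet counts denied connection attempts per source address and destination port in a firewall log. The log lines and their format are invented for illustration; real formats (iptables, Cisco ASA, pf) differ, but the pivot is the same.

```python
import re
from collections import Counter

# Hypothetical firewall log lines; real vendor formats differ.
LOGS = """\
2021-02-10 08:01:12 DENY TCP 203.0.113.7:51514 -> 10.0.0.5:22
2021-02-10 08:01:13 DENY TCP 203.0.113.7:51515 -> 10.0.0.5:22
2021-02-10 08:02:44 ALLOW TCP 10.0.0.9:40000 -> 10.0.0.5:443
"""

LINE = re.compile(
    r"(?P<action>DENY|ALLOW) TCP (?P<src>[\d.]+):\d+ -> (?P<dst>[\d.]+):(?P<port>\d+)"
)

# Count denied attempts by (source IP, destination port).
denied = Counter()
for line in LOGS.splitlines():
    m = LINE.search(line)
    if m and m.group("action") == "DENY":
        denied[(m.group("src"), m.group("port"))] += 1

print(denied.most_common(1))  # [(('203.0.113.7', '22'), 2)]
```

Repeated denials against port 22 from a single external address, as in this toy data, are the kind of pattern that would prompt a closer look at the targeted host.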

Routers form the core of any large network, directing packets to their destinations. As discussed in the NetFlow section later in this chapter, routers can be configured to log summary information about every network connection that passes through them, providing a bird's eye view of activities on a network. For example, suppose you find a keylogger on a Windows server and you can determine when the program was installed. Examining the NetFlow logs relating to the compromised server for the time of interest can reveal the remote IP address used to download the keylogger. Furthermore, NetFlow logs could be searched for that remote IP address to determine which other systems in the Enterprise were accessed and may also contain the keylogger. As more organizations and ISPs collect NetFlow records from internal routers as well as those at their Internet borders, digital investigators will find it easier to reconstruct what occurred in a particular case.
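The NetFlow pivot described above can be sketched in a few lines of Python. The flow records and addresses below are invented for illustration; real analysis would typically run against a collector's export using tools such as nfdump, but the search logic is the same.

```python
# Minimal flow records: (start_time, src_ip, dst_ip, dst_port, bytes).
flows = [
    ("2021-02-10T03:12", "10.0.0.5", "198.51.100.9", 443, 120_000),
    ("2021-02-10T03:15", "198.51.100.9", "10.0.0.7", 445, 4_000),
    ("2021-02-10T09:00", "10.0.0.8", "93.184.216.34", 80, 900),
]

def flows_involving(records, ip):
    """Return every flow where the IP appears as source or destination."""
    return [f for f in records if ip in (f[1], f[2])]

# Pivot: having identified the remote address contacted around the time
# the keylogger was installed, find every other host it touched.
suspect = "198.51.100.9"
for f in flows_involving(flows, suspect):
    print(f)
```

In this toy data the pivot reveals a second internal host (10.0.0.7) contacted by the suspect address, which would then be examined for the same keylogger.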

Digital investigators may be able to obtain full network traffic captures, which are sometimes referred to as logging or packet capture, but are less like a log of activities than like a complete videotape of them—recorded network traffic is live, complete, and compelling. Replaying an individual's online activities as recorded in a full packet capture can give an otherwise intangible sequence of events a very tangible feel.

Authentication servers form the heart of most enterprise environments, associating activities with particular virtual identities. Logs from RADIUS and TACACS servers, as well as Windows Security Event logs on Domain Controllers, can help digital investigators attribute activities to a particular user account, which may lead us to the person responsible.

Practitioner's Tip: Virtual Identities

Because user accounts may be shared or stolen, it is not safe to assume that the owner of the user account is the culprit. Therefore, you are never going to identify a physical, flesh-and-blood individual from information in logs. The universe of digital forensics deals with virtual identities only. You can never truly say that John Smith logged in at 9:00 am, only that John Smith's account was authenticated at 9:00 am. It is common, when pursuing an investigation, to conflate the physical people with the virtual identities in your mind and in casual speech with colleagues. Be careful. When you are presenting your findings or even when evaluating them for your own purposes, remember that your evidence trail will stop and start at the keyboard, not at the fingers on the keys. Even if you have digital images from a camera, the image may be consistent with the appearance of a particular individual, but as a digital investigator you cannot take your conclusions any farther.

As discussed later in this chapter, VPNs are often configured to authenticate via RADIUS or Active Directory, enabling digital investigators to determine which account was used to connect. In addition, VPNs generally record the remote IP address of the computer being used to connect into the network, as well as the internal IP address assigned by the VPN to create a virtual presence on the enterprise network. These VPN logs are often critical for attributing events of concern within an organization to a particular user account and remote computer.

Practitioner's Tip: Tracking Down Computers within a Network

When a computer is connected to a network it needs to know several things before it can communicate with a remote server: its own IP address, the IP address of its default router, the MAC address of its default router, and the IP address of the remote server. Many networks use the Dynamic Host Configuration Protocol (DHCP) to assign IP addresses to computers. When a networked system that uses DHCP is booted, it sends its MAC address to the DHCP server as a part of its request for an IP address. Depending on its configuration, the server will either assign a random IP address or a specific address that has been set aside for the MAC address in question. In any event, DHCP servers maintain a table of the IP addresses currently assigned.

DHCP servers can retain logs to enable digital investigators to determine which computer was assigned an IP address during a time of interest, and potentially the associated user account. For instance, the DHCP lease in Table 9.1 shows that the computer with hardware address 00:e0:98:82:4c:6b was assigned an IP address starting at 20:44 on April 1, 2001 (the date format is weekday yyyy/mm/dd hh:mm:ss, where 0 is Sunday).

Table 9.1. DHCP Lease

lease {
  starts 0 2001/04/01 20:44:03;
  ends 1 2001/04/02 00:44:03;
  hardware ethernet 00:e0:98:82:4c:6b;
  uid 01:00:e0:98:82:4c:6b;
  client-hostname "oisin";
}
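Parsing such a lease entry can be automated. The Python sketch below extracts the start time, MAC address, and hostname from the lease shown in Table 9.1 using regular expressions; the patterns are simplified for illustration and would need hardening for real dhcpd.leases files.

```python
import re

# The lease entry from Table 9.1, as a single string.
lease_text = ('lease {starts 0 2001/04/01 20:44:03;'
              'ends 1 2001/04/02 00:44:03;'
              'hardware ethernet 00:e0:98:82:4c:6b;'
              'uid 01:00:e0:98:82:4c:6b;'
              'client-hostname "oisin";}')

# Simplified field patterns; the leading digit after starts/ends is the weekday.
FIELDS = {
    "starts": re.compile(r"starts \d (\S+ \S+);"),
    "ends": re.compile(r"ends \d (\S+ \S+);"),
    "mac": re.compile(r"hardware ethernet ([0-9a-f:]+);"),
    "hostname": re.compile(r'client-hostname "([^"]+)";'),
}

lease = {name: rx.search(lease_text).group(1) for name, rx in FIELDS.items()}
print(lease["mac"], lease["starts"])  # 00:e0:98:82:4c:6b 2001/04/01 20:44:03
```

With many such entries parsed, an investigator can answer the key question directly: which MAC address (and hence, probably, which machine) held a given IP address during the time of interest.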

Some DHCP servers can be configured to keep an archive of IP address assignments, but this practice is far from universal. Unless you are certain that archives are maintained, assume that the DHCP history is volatile and collect it as quickly as possible.

A DHCP lease does not guarantee that a particular computer was using an IP address at a given time. An individual could configure another computer with this same IP address at the same time, accidentally conflicting with the DHCP assignment or purposefully masquerading as the computer that originally was assigned this IP address via DHCP. The bright side is that such a conflict is often detected and leaves log records on the systems involved.

The same general process occurs when an individual connects to an Internet Service Provider (ISP) via a modem. Some ISPs record the originating phone number in addition to the IP address assigned, thus enabling investigators to track connections back to a particular phone line in a house or other building.

Obtaining additional information about systems on the Internet is beyond the scope of this chapter. See Nikkel (2006) for a detailed methodology on documenting Internet name registry entries, Domain name records, and other information relating to remote systems.


VI5: A survey of identity and handoff management approaches for the future Internet

A survey of identity and handoff management approaches for the future Internet

Hasan Tuncer, ... Nirmala Shenoy, in Computer Communications, 2012


Since its inception almost 40 years ago, the Internet has evolved and changed immensely. New technology solutions are desired to keep up with this unprecedented growth. Besides the traditional computing devices, different types of mobile devices need to be supported by the future Internet architecture. In this work, a survey of identity and handoff management solutions proposed in future Internet architectures is presented. Mobility protocols developed by the Internet Engineering Task Force initiatives are discussed to give the background on the user mobility support challenges with the current architecture. The next generation network architectures supported by global initiatives are presented and analyzed in terms of their support for seamless user and device mobility. Furthermore, this survey is extended to include the architectures proposed for wireless mesh networks, which are envisioned to be a part of the next generation networks with their self-organizing and self-configuring network characteristics.

4.5.1 Identity management in DAIDALOS

The DAIDALOS architecture supplies a Virtual Identity (VID) Framework in which the profile of an entity (a single user or a group of users) may stem from contracts with different networks and services. Subsets of this entity profile are called entity profile views, which are the virtual IDs of the entity. A user can choose the virtual identity to service provider mapping. After a virtual identity is confirmed by the service provider, the entity gets an IP address tied to that virtual identity [59]. The virtual identity concept requires an ID-Broker, which supplies the entity's location to a correspondent node and proxies requests to the entity, and an ID-Manager, which provides an interface for creating, managing, and destroying virtual identities by abstracting the entity's physical interfaces.

DAIDALOS also provides a Virtual MAC infrastructure, which enables an entity to have two or more virtual identities bound to one physical interface in order to access different providers. These virtual identities can be extended to the relationships between banks, governmental institutions, operators, and service providers.
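The profile-view idea can be sketched as a small Python data structure: an entity holds a full profile and derives restricted views (virtual IDs) for different providers. The class names and profile fields here are invented for illustration and are not part of the DAIDALOS specification.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """An entity's full profile, from which partial views (VIDs) are derived."""
    profile: dict
    views: dict = field(default_factory=dict)  # view name -> exposed subset

    def create_vid(self, name, keys):
        # An entity profile view exposes only a chosen subset of the profile,
        # so each service provider sees a different virtual identity.
        self.views[name] = {k: self.profile[k] for k in keys}
        return self.views[name]

user = Entity({"name": "Alice", "billing": "ACME Bank", "email": "a@example.org"})
# Different VIDs for different service providers:
shop_vid = user.create_vid("shop", ["name", "billing"])
forum_vid = user.create_vid("forum", ["email"])
print(shop_vid)  # {'name': 'Alice', 'billing': 'ACME Bank'}
```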