
Exploring Virtual Identities: Systems, Behaviour, and Ethics

The Importance of Virtual Identities

As technology continues to advance, our lives are becoming increasingly intertwined with virtual spaces. From social media platforms to online gaming communities, virtual identities have become an integral part of our daily lives. In these virtual spaces, we have the opportunity to express ourselves, interact with others, and explore new identities. However, as we spend more time in these virtual spaces, it is important that we understand the systems, behaviours, and ethics related to virtual identities.

Defining Virtual Identities: The Basics

Virtual identities are the personas we create in online spaces. These can range from our usernames, avatars, and profiles to the interactions we have with others in these spaces. In essence, virtual identities are the representations of ourselves that we present to others online. These identities can be vastly different from our real-life identities, as they allow us to experiment with different aspects of our personalities and express ourselves in new ways.

The Role of Systems in Creating Virtual Identities

The systems that govern virtual spaces can greatly influence our virtual identities. For example, social media algorithms may prioritize certain types of content or interactions, leading us to present ourselves in a certain way to gain more attention. Additionally, online games may have specific character creation options that limit our ability to fully express ourselves. It is important to recognize these systems and understand how they shape our virtual identities.

Behaviour in Virtual Spaces: How It Affects Identity

Our behaviour in virtual spaces can greatly impact our virtual identities. The interactions we have with others online can shape how we present ourselves and the personas we create. Additionally, our behaviour may be influenced by the anonymity that virtual spaces provide. It is important that we consider how our behaviour in virtual spaces affects our virtual identities and how we want to be perceived by others.

The Ethics of Virtual Identities: A Critical Analysis

As with any aspect of technology, virtual identities raise ethical concerns. For example, the use of fake social media profiles to deceive or manipulate others is a growing concern. Additionally, the collection and use of personal data by companies can raise privacy concerns. It is important that we critically analyze the ethical implications of virtual identities and consider the impact they may have on ourselves and others.

The Effects of Virtual Identities on Real Life Identity

While virtual identities may seem separate from our real-life identities, they can have a significant impact on how we view ourselves and how others view us. The personas we create online may influence our self-esteem and confidence, and may also impact our professional and personal relationships. Understanding the relationship between virtual and real-life identities is crucial for navigating both online and offline spaces.

Privacy Concerns and Virtual Identity

Privacy concerns are a major issue when it comes to virtual identities. The data we share online can be used by companies or individuals for malicious purposes, such as identity theft or cyberbullying. It is important that we understand how to protect our personal information online and take steps to minimize our digital footprint.

Identity Theft in Virtual Spaces: An Emerging Threat

Identity theft is a growing concern in virtual spaces. Hackers and scammers may attempt to steal personal information or use fake identities to access sensitive information. It is important to be vigilant and take steps to protect ourselves from these threats.

The Psychology of Virtual Identity Formation

Understanding the psychology behind virtual identity formation can help us better navigate these spaces. For example, research has shown that anonymity can lead to increased aggression and deception online. Additionally, social comparison theory suggests that we may present ourselves in a certain way to gain social acceptance or validation. Recognizing these psychological factors can help us better understand our own behaviour in virtual spaces.

Strategies for Protecting Virtual Identity

There are a number of strategies we can use to protect our virtual identities. These may include using strong passwords, limiting the amount of personal information we share online, and being aware of potential scams or phishing attempts. Additionally, it is important to regularly monitor our online presence and take steps to remove any potentially harmful content.

Addressing Ethical Issues in Virtual Identity

As virtual identities continue to play a larger role in our lives, it is important that we address ethical issues related to their use. This may include advocating for stronger privacy protections, promoting responsible online behaviour, and holding companies accountable for their data collection and use practices.

The Future of Virtual Identities

Virtual identities are here to stay, and as technology continues to evolve, they will only become more complex and integrated into our lives. By understanding the systems, behaviours, and ethics related to virtual identities, we can navigate these spaces more effectively and protect ourselves from potential harm. As we move forward, it will be important to continue discussing and addressing the ethical implications of virtual identities to ensure that they are used responsibly and ethically.


VI8: Strategic analysis and future strategies


Reza Jamali, in Online Arab Spring, 2015

In this chapter, we try to identify the barriers to social media penetration that hold back the development of democracy and social justice in the Middle East, and we suggest some strategies to overcome these obstacles. To achieve this objective, a political, economic, social, technological and technical, ethical and legal (PESTEL) analysis is used and the barriers in each context are considered. Although there is no strict priority among these barriers, it can be argued that political instability, legal uncertainty, corruption and ethical issues play the major role in reducing the influence of social media penetration on the promotion of democracy and social justice.

On the other hand, we have argued that what happens in virtual social media is a clear manifestation of events in the physical environment of the country. If people on social media or social networks, whether using real or fictional identities, stand up to protest against a group, persons or a particular government, it is because of oppression in the physical environment that has suddenly spilled over into the virtual one. Consequently, whatever the policy for cyberspace (whether 100 per cent government control of the media or complete media freedom), if the physical environment is not accompanied by supporting policies, physical well-being and social justice, individuals will fail to change their government through social media.

Analysis of ethical factors

In much of the research on social media, discussion of ethical factors is impeded by a lack of sufficient information, and in some cases issues regarding copyright law and morality are raised instead. Given our different analytical objective, here we try to look at the question from another angle: when can we expect to see real people, with real faces, promoting democracy and social justice through social media? Ethical issues in social media begin when a virtual identity is shaped and the user is able to create a picture of him- or herself as he or she would like to be, not as he or she really is. This becomes extreme when people in the real world cannot show themselves as they really are, because expressing their true opinions would bring penalties of the kind more likely to be found in dictatorial regimes. Please re-read the previous sentence. From this we can clearly see that an unblemished environment and the observance of ethics on social media are effects of freedom and justice in the physical environment. Problems in the physical environment can upset all the equations: even when there has been heavy investment in social media, the desired result cannot be obtained. Here it is worth revisiting the examples of our listed companies. When a company invests heavily in its brand on social media but the employees in the organisation are not happy, the employees simply share their dissatisfaction and the problems they have with their work on their personal pages on social networks.

There must be a better way than this to eliminate such problems. Using the network for direct communication between the government and the people can be useful before people share their dissatisfaction with the government, whether as themselves or under a false identity, on the public network. This acts as a safety valve that prevents people's grievances from boiling over.

The next thing that became clear during our research is that when a group believes social media have taken steps towards achieving its goals, observance of ethics peaks; but if the group feels that social media are harmful to it and will weaken it in the long term, disregard for ethics and gossip against social media from that group can eventually turn the tide in its favour. The most important point here is that such failures to comply with ethics arise not only from social media but also from the physical environment. Suppose a religious group is strongly dissatisfied with the development of an anti-religious culture on social media and does not see a way to deal with it. Gossip in the physical environment against social media then represents an attempt to blacken the reputation of social media and reduce their role in society. Experience has shown, however, that the gossip does not end in the physical environment but evolves. In the next step the group creates multiple pages, blogs and websites, opening up a new front in the struggle against social media. And in the third stage of this evolution, the group finds that, to succeed, social media must be confronted by other social media.

Another, more positive aspect of social media in the area of ethics and social justice is the high percentage of respondents who believe that, regardless of whether governments have a role in the distribution of wealth and social justice, people must exert pressure through the internet and social media to create justice. The minimum work to be done in this area is helping people who have low incomes and live in poverty. In all the Arab countries surveyed, and in Iran, over 55 per cent of people are in this situation, compared with 38 per cent in America. The highest proportions are in Iran and Tunisia, at 69 and 68 per cent respectively. This creates strong potential for governments to increase people's capacity to take advantage of democracy and social justice, while in some Western countries it appears to be more of a burden on the state.

Given the importance of ethical issues and social responsibility in the virtual environment, the researcher came up with the idea of seeking new criteria for ranking websites and social media pages. Alexa.com rates websites by number of visits, a factor that plays an important role in the value of a web page or website, but there is a growing need to rank sites by ethical standards as well. That is why, in the middle of 2014, an elite group of programmers in the web field came together to launch the site http://www.EthicsRank.com, and readers of this book can also assist in measuring the observance of ethics on the web. According to our investigation, the material and moral costs of wrongdoing in virtual space are higher in the Middle East and developing countries than in developed countries. Owing to the nature of governments in the Middle East and their perceived need for constant monitoring of virtual environments to counter threats, Middle Eastern countries have defined more crimes in cyberspace and impose greater punishments. This can be useful in reducing non-compliance with ethics, but it also pushes many people to change their identities in the virtual community, which makes that community harder to control.


An extreme form of encryption could solve big data’s privacy problem

Fully homomorphic encryption allows us to run analysis on data without ever seeing the contents. It could help us reap the full benefits of big data, from fighting financial fraud to catching diseases early

LIKE any doctor, Jacques Fellay wants to give his patients the best care possible. But his instrument of choice is no scalpel or stethoscope; it is far more powerful than that. Hidden inside each of us are genetic markers that can tell doctors like Fellay which individuals are susceptible to diseases such as AIDS, hepatitis and more. If he could learn to read these clues, Fellay would have advance warning of who requires early treatment.

This could be life-saving. The trouble is, teasing out the relationships between genetic markers and diseases requires an awful lot of data, more than any one hospital has on its own. You might think hospitals could pool their information, but it isn’t so simple. Genetic data contains all sorts of sensitive details about people that could lead to embarrassment, discrimination or worse. Ethical worries of this sort are a serious roadblock for Fellay, who is based at Lausanne University Hospital in Switzerland. “We have the technology, we have the ideas,” he says. “But putting together a large enough data set is more often than not the limiting factor.”

Fellay’s concerns are a microcosm of one of the world’s biggest technological problems. The inability to safely share data hampers progress in all kinds of other spheres too, from detecting financial crime to responding to disasters and governing nations effectively. Now, a new kind of encryption is making it possible to wring the juice out of data without anyone ever actually seeing it. This could help end big data’s big privacy problem – and Fellay’s patients could be some of the first to benefit.

It was more than 15 years ago that we first heard that “data is the new oil”, a phrase coined by the British mathematician and marketing expert Clive Humby. Today, we are used to the idea that personal data is valuable. Companies like Meta, which owns Facebook, and Google’s owner Alphabet grew into multibillion-dollar behemoths by collecting information about us and using it to sell targeted advertising.

Data could do good for all of us too. Fellay’s work is one example of how medical data might be used to make us healthier. Plus, Meta shares anonymised user data with aid organisations to help plan responses to floods and wildfires, in a project called Disaster Maps. And in the US, around 1400 colleges analyse academic records to spot students who are likely to drop out and provide them with extra support. These are just a few examples out of many – data is a currency that helps make the modern world go around.

Getting such insights often means publishing or sharing the data. That way, more people can look at it and conduct analyses, potentially drawing out unforeseen conclusions. Those who collect the data often don’t have the skills or advanced AI tools to make the best use of it, either, so it pays to share it with firms or organisations that do. Even if no outside analysis is happening, the data has to be kept somewhere, which often means on a cloud storage server, owned by an external company.

You can’t share raw data unthinkingly. It will typically contain sensitive personal details, anything from names and addresses to voting records and medical information. There is an obligation to keep this information private, not just because it is the right thing to do, but because of stringent privacy laws, such as the European Union’s General Data Protection Regulation (GDPR). Breaches can see big fines.

Over the past few decades, we have come up with ways of trying to preserve people’s privacy while sharing data. The traditional approach is to remove information that could identify someone or make these details less precise, says privacy expert Yves-Alexandre de Montjoye at Imperial College London. You might replace dates of birth with an age bracket, for example. But that is no longer enough. “It was OK in the 90s, but it doesn’t really work any more,” says de Montjoye. There is an enormous amount of information available about people online, so even seemingly insignificant nuggets can be cross-referenced with public information to identify individuals.
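To make the traditional approach concrete, here is a minimal sketch of that kind of generalisation in Python. The record fields and the ten-year age bands are invented for illustration; real anonymisation pipelines apply many such transformations at once.

```python
from datetime import date

def age_bracket(dob: date, as_of: date = date(2022, 4, 6)) -> str:
    """Generalise an exact date of birth to a coarse ten-year age band."""
    age = as_of.year - dob.year - ((as_of.month, as_of.day) < (dob.month, dob.day))
    low = (age // 10) * 10
    return f"{low}-{low + 9}"

# Hypothetical record: drop the direct identifier, coarsen the quasi-identifier.
record = {"name": "A. Patient", "dob": date(1987, 3, 14), "diagnosis": "hepatitis"}
anonymised = {"age_band": age_bracket(record["dob"]), "diagnosis": record["diagnosis"]}
print(anonymised)  # {'age_band': '30-39', 'diagnosis': 'hepatitis'}
```

Even after this step, the remaining attributes can still single people out once they are cross-referenced with public information, which is exactly the weakness de Montjoye describes.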

One significant case of reidentification from 2021 involves apparently anonymised data sold to a data broker by the dating app Grindr, which is used by gay people among others. A media outlet called The Pillar obtained it and correlated the location pings of a particular mobile phone represented in the data with the known movements of a high-ranking US priest, showing that the phone popped up regularly near his home and at the locations of multiple meetings he had attended. The implication was that this priest had used Grindr, and a scandal ensued because Catholic priests are required to abstain from sexual relationships and the church considers homosexual activity a sin.

A more sophisticated way of maintaining people’s privacy has emerged recently, called differential privacy. In this approach, the manager of a database never shares the whole thing. Instead, they allow people to ask questions about the statistical properties of the data – for example, “what proportion of people have cancer?” – and provide answers. Yet if enough clever questions are asked, this can still lead to private details being triangulated. So the database manager also uses statistical techniques to inject errors into the answers, for example recording the wrong cancer status for some people when totting up totals. Done carefully, this doesn’t affect the statistical validity of the data, but it does make it much harder to identify individuals. The US Census Bureau adopted this method when the time came to release statistics based on its 2020 census.
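As a rough illustration of the idea (not the Census Bureau's actual implementation), here is a toy differentially private counting query in Python. The database, predicate and epsilon value are invented for the example; the key point is that a counting query changes by at most one when a single record is added or removed, so Laplace noise with scale 1/epsilon masks any individual's contribution.

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponentials is Laplace-distributed with this scale.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon: float) -> float:
    """Answer "how many records satisfy predicate?" with epsilon-DP noise.

    Sensitivity of a count is 1, so Laplace noise of scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Toy database: each record is (name, has_cancer).
db = [("alice", True), ("bob", False), ("carol", True), ("dave", True)]
print(private_count(db, lambda r: r[1], epsilon=0.5))
```

Smaller values of epsilon mean noisier answers but stronger privacy; tuning that trade-off is exactly the "done carefully" part.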

Trust no one

Still, differential privacy has its limits. It only provides statistical patterns and can’t flag up specific records – for instance to highlight someone at risk of disease, as Fellay would like to do. And while the idea is “beautiful”, says de Montjoye, getting it to work in practice is hard.

There is a completely different and more extreme solution, however, one with origins going back 40 years. What if you could encrypt and share data in such a way that others could analyse it and perform calculations on it, but never actually see it? It would be a bit like placing a precious gemstone in a glovebox, one of those sealed chambers used in labs for handling hazardous material. You could invite people to put their arms into the gloves and handle the gem. But they wouldn't have free access and could never steal anything.

This was the thought that occurred to Ronald Rivest, Len Adleman and Michael Dertouzos at the Massachusetts Institute of Technology in 1978. They devised a theoretical way of making the equivalent of a secure glovebox to protect data. It rested on a mathematical idea called a homomorphism, which refers to the ability to map data from one form to another without changing its underlying structure. Much of this hinges on using algebra to represent the same numbers in different ways.
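A classic illustration of such a structure-preserving map, offered here as a minimal sketch rather than the 1978 scheme itself, is that textbook (unpadded) RSA preserves multiplication: multiplying two ciphertexts gives a ciphertext of the product of the plaintexts. The tiny parameters below are wildly insecure and exist only to show the algebra.

```python
# Textbook (unpadded) RSA with toy parameters: purely illustrative, not secure.
p, q = 61, 53
n = p * q                  # modulus, 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

a, b = 3, 4
c_prod = (enc(a) * enc(b)) % n        # multiply ciphertexts only...
assert dec(c_prod) == (a * b) % n     # ...and the plaintext product, 12, comes out
```

The map m -> m^e mod n writes the same numbers in a different form, yet multiplication on one side corresponds exactly to multiplication on the other, which is what "homomorphic" means here.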

Imagine you want to share a database with an AI analytics company, but it contains private information. The AI firm won’t give you the algorithm it uses to analyse data because it is commercially sensitive. So, to get around this, you homomorphically encrypt the data and send it to the company. It has no key to decrypt the data. But the firm can analyse the data and get a result, which itself is encrypted. Although the firm has no idea what it means, it can send it back to you. Crucially, you can now simply decrypt the result and it will make total sense.
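Fully homomorphic schemes are far more involved, but the workflow just described can be sketched with a simpler, additively homomorphic cryptosystem (Paillier) standing in for FHE. The salary figures and the tiny key below are invented for illustration: the data owner keeps the private key, the outside party only ever touches ciphertexts, and only the owner can read the returned result.

```python
import math
import random

# --- Toy Paillier cryptosystem (additively homomorphic); illustrative only ---
p, q = 293, 433                    # tiny primes; real keys are thousands of bits
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1                          # standard simple choice of generator

def L(x: int) -> int:              # the "L function" from Paillier's scheme
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def enc(m: int) -> int:
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# The data owner encrypts private values and ships only ciphertexts.
c1, c2 = enc(20000), enc(35000)

# The outside party adds the encrypted values without ever decrypting them.
c_sum = (c1 * c2) % n2

# Only the key holder can decrypt the returned result: 55000.
print(dec(c_sum))
```

An FHE scheme extends this picture from a single supported operation (here, addition) to arbitrary computations on the encrypted values.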

“The promise is massive,” says Tom Rondeau at the US Defense Advanced Research Projects Agency (DARPA), which is one of many organisations investigating the technology. “It’s almost hard to put a bound to what we can do if we have this kind of technology.”

In the 30 years since the method was proposed, researchers devised homomorphic encryption schemes that allowed them to carry out a restricted set of operations, for instance only additions or multiplications. Yet fully homomorphic encryption, or FHE, which would let you run any program on the encrypted data, remained elusive. “FHE was what we thought of as being the holy grail in those days,” says Marten van Dijk at CWI, the national research institute for mathematics and computer science in the Netherlands. “It was kind of unimaginable.”

One approach to homomorphic encryption at the time involved an idea called lattice cryptography. This encrypts ordinary numbers by mapping them onto a grid with many more dimensions than the standard two. It worked – but only up to a point. Each computation ended up adding randomness to the data. As a result, doing anything more than a simple computation led to so much randomness building up that the answer became unreadable.
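A toy version of that noise problem appears in the sketch below: a symmetric LWE-style encryption of single bits, with deliberately tiny, insecure parameters so the effect shows up quickly. Each fresh ciphertext carries a small error term; homomorphically adding ciphertexts adds the errors too, and once the accumulated error exceeds a quarter of the modulus, decryption returns the wrong bit.

```python
import random

q = 256          # ciphertext modulus
dim = 8          # "lattice" dimension
secret = [random.randrange(q) for _ in range(dim)]

def enc(bit: int):
    a = [random.randrange(q) for _ in range(dim)]
    e = random.randrange(1, 9)                      # small fresh noise
    b = (sum(x * s for x, s in zip(a, secret)) + e + bit * (q // 2)) % q
    return a, b

def dec(ct) -> int:
    a, b = ct
    d = (b - sum(x * s for x, s in zip(a, secret))) % q
    return 1 if q // 4 < d < 3 * q // 4 else 0      # closer to q/2 means bit 1

def add(ct1, ct2):
    # Homomorphic addition (XOR of the plaintext bits); the noise terms add too.
    a1, b1 = ct1
    a2, b2 = ct2
    return [(x + y) % q for x, y in zip(a1, a2)], (b1 + b2) % q

# Keep adding encryptions of 0: the plaintext stays 0, but noise piles up
# until the accumulated error pushes decryption to the wrong answer.
acc = enc(0)
for count in range(2, 200):
    acc = add(acc, enc(0))
    if dec(acc) != 0:
        print(f"decryption failed after combining {count} ciphertexts")
        break
```

Gentry's bootstrapping idea, described next, is essentially a way of resetting this accumulated error before it reaches that threshold.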

In 2009, Craig Gentry, then a PhD student at Stanford University in California, made a breakthrough. His brilliant solution was to periodically remove this randomness by decrypting the data under a secondary covering of encryption. If that sounds paradoxical, imagine that glovebox with the gem inside. Gentry's scheme was like putting one glovebox inside another, so that the first one could be opened while still encased in a layer of security. This provided a workable FHE scheme for the first time.

Workable, but still slow: computations on the FHE-encrypted data could take millions of times longer than identical ones on raw data. Gentry went on to work at IBM, and over the next decade, he and others toiled to make the process quicker by improving the underlying mathematics. But lately the focus has shifted, says Michael Osborne at IBM Research in Zurich, Switzerland. There is a growing realisation that massive speed enhancements can be achieved by optimising the way cryptography is applied for specific uses. "We're getting orders of magnitude improvements," says Osborne.

IBM now has a suite of FHE tools that can run AI and other analyses on encrypted data. Its researchers have shown they can detect fraudulent transactions in encrypted credit card data using an artificial neural network that can crunch 4000 records per second. They also demonstrated that they could use the same kind of analysis to scour the encrypted CT scans of more than 1500 people’s lungs to detect signs of covid-19 infection.

Also in the works are real-world, proof-of-concept projects with a variety of customers. In 2020, IBM revealed the results of a pilot study conducted with the Brazilian bank Banco Bradesco. Privacy concerns and regulations often prevent banks from sharing sensitive data either internally or externally. But in the study, IBM showed it could use machine learning to analyse encrypted financial transactions from the bank’s customers to predict if they were likely to take out a loan. The system was able to make predictions for more than 16,500 customers in 10 seconds and it performed just as accurately as the same analysis performed on unencrypted data.

Suspicious activity

Other companies are keen on this extreme form of encryption too. Computer scientist Shafi Goldwasser, a co-founder of privacy technology start-up Duality, says the firm is achieving significantly faster speeds by helping customers better structure their data and tailoring tools to their problems. Duality’s encryption tech has already been integrated into the software systems that technology giant Oracle uses to detect financial crimes, where it is assisting banks in sharing data to detect suspicious activity.

Still, for most applications, FHE processing remains at least 100,000 times slower than working on unencrypted data, says Rondeau. This is why, in 2020, DARPA launched a programme called Data Protection in Virtual Environments to create specialised chips designed to run FHE. Lattice-encrypted data comes in much larger chunks than normal chips are used to dealing with. So several research teams involved in the project, including one led by Duality, are investigating ways to alter circuits to efficiently process, store and move this kind of data. The goal is to analyse any FHE-encrypted data just 10 times slower than usual, says Rondeau, who is managing the programme.

Even if it were lightning fast, FHE wouldn’t be flawless. Van Dijk says it doesn’t work well with certain kinds of program, such as those that contain branching logic made up of “if this, do that” operations. Meanwhile, information security researcher Martin Albrecht at Royal Holloway, University of London, points out that the justification for FHE is based on the need to share data so it can be analysed. But a lot of routine data analysis isn’t that complicated – doing it yourself might sometimes be simpler than getting to grips with FHE.

For his part, de Montjoye is a proponent of privacy engineering: not relying on one technology to protect people’s data, but combining several approaches in a defensive package. FHE is a great addition to that toolbox, he reckons, but not a standalone winner.

This is exactly the approach that Fellay and his colleagues have taken to smooth the sharing of medical data. Fellay worked with computer scientists at the Swiss Federal Institute of Technology in Lausanne who created a scheme combining FHE with another privacy-preserving tactic called secure multiparty computation (SMC). This sees the different organisations join up chunks of their data in such a way that none of the private details from any organisation can be retrieved.
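Secure multiparty computation comes in many flavours; one of its simplest building blocks, sketched below, is additive secret sharing. The hospital names and counts are invented, and this illustrates the general idea rather than the Lausanne team's actual protocol: each party splits its private value into random shares, the shares are combined in a way that exposes no individual value, and only the aggregate is reconstructed.

```python
import random

Q = 2**61 - 1   # work modulo a large prime so individual shares leak nothing

def share(value: int, n_parties: int):
    """Split value into n additive shares that sum to value mod Q."""
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    return sum(shares) % Q

# Three hospitals each hold a private patient count (hypothetical figures).
counts = {"hospital_A": 412, "hospital_B": 977, "hospital_C": 268}
n = len(counts)

# Each hospital splits its count and sends one share to every party.
all_shares = {name: share(c, n) for name, c in counts.items()}

# Party i locally sums the i-th share it received from every hospital;
# a single share, or a single partial sum, reveals nothing about any one count.
partial_sums = [sum(all_shares[name][i] for name in counts) % Q for i in range(n)]

# Combining the partial sums reveals only the aggregate: 1657.
print(reconstruct(partial_sums))
```

In the published scheme, sharing of this sort is combined with homomorphic encryption so that intermediate values stay protected throughout the computation.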

In a paper published in October 2021, the team used a combination of FHE and SMC to securely pool data from multiple sources and use it to predict the efficacy of cancer treatments or identify specific variations in people’s genomes that predict the progression of HIV infection. The trial was so successful that the team has now deployed the technology to allow Switzerland’s five university hospitals to share patient data, both for medical research and to help doctors personalise treatments. “We’re implementing it in real life,” says Fellay, “making the data of the Swiss hospitals shareable to answer any research question as long as the data exists.”

If data is the new oil, then it seems the world’s thirst for it isn’t letting up. FHE could be akin to a new mining technology, one that will open up some of the most valuable but currently inaccessible deposits. Its slow speed may be a stumbling block. But, as Goldwasser says, comparing the technology with completely unencrypted processing makes no sense. “If you believe that security is not a plus, but it’s a must,” she says, “then in some sense there is no overhead.”

New Scientist

6 April 2022

By Edd Gent