
Exploring the Concept of Virtual Identity: A Technical Analysis

Virtual Identity Explained

With the increasing use of technology, the concept of virtual identity has become a popular topic of discussion. Virtual identity refers to the digital representation of an individual, which includes personal information, behavior, and interactions in the online world. This article explores the technical aspects of virtual identity and its role in various digital platforms.

The Technical Aspects of Virtual Identity

Virtual identity is a complex concept that involves technical aspects such as data encryption, user authentication, and digital signatures. Data encryption is used to ensure that personal information is kept secure during transmission across networks. User authentication is the process of confirming the identity of an individual using a username and password, biometric verification, or other identification methods. Digital signatures are used to verify the authenticity of electronic documents and transactions.
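The integrity-and-authenticity idea behind digital signatures can be sketched with a symmetric stand-in from Python's standard library. This uses an HMAC (a shared-key message authentication code) rather than a true public-key signature, and the key is invented purely for illustration:

```python
# Minimal sketch of message authentication (a symmetric stand-in for
# the digital-signature idea; real signatures use asymmetric key pairs
# such as RSA or Ed25519). The key below is a made-up example.
import hashlib
import hmac

SECRET_KEY = b"shared-secret-for-illustration"

def sign(message: bytes) -> str:
    """Produce a hex tag that proves the message was not altered."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(message), tag)

doc = b"transfer 100 credits to alice"
tag = sign(doc)
assert verify(doc, tag)                                # untouched: accepted
assert not verify(b"transfer 900 credits to bob", tag) # altered: rejected
```

A real digital signature scheme would use a private key to sign and a public key to verify, so anyone can check authenticity without being able to forge tags.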

Virtual Identity: The Role of Authentication

Authentication is a critical component of virtual identity, as it ensures that only authorized individuals have access to personal information and digital resources. In addition to usernames and passwords, modern authentication methods include multi-factor authentication, biometric verification, and behavioral analysis. Multi-factor authentication involves using more than one form of identification, such as a password and a security token. Biometric verification uses physical characteristics, such as fingerprints or facial recognition, to identify individuals. Behavioral analysis uses machine learning algorithms to analyze user behavior and detect anomalies that may indicate fraudulent activity.
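The "security token" factor mentioned above is often a time-based one-time password (TOTP), the mechanism behind many authenticator apps. A rough sketch of how such codes are derived, in the style of RFC 6238 (the secret here is a made-up example, not from any real deployment):

```python
# Hedged sketch of time-based one-time passwords (TOTP, RFC 6238).
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, step=30, digits=6):
    """Derive a one-time code from a shared secret and the current time."""
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"example-shared-secret"  # hypothetical enrollment secret
code = totp(secret)
# A server holding the same secret derives the same 6-digit code within
# the same 30-second window, which is what makes the code verifiable.
```

Because the code depends on the current time window, a stolen code becomes useless within seconds, which is what makes this a meaningful second factor alongside a password.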

Virtual Identity vs. Real Identity: A Comparison

Virtual identity differs from real identity in several ways. Real identity refers to an individual’s physical characteristics and personal information, such as name, date of birth, and address. Virtual identity includes this information, as well as online behavior, interactions, and preferences. Virtual identity can be more fluid than real identity, as individuals can create multiple virtual identities or change their online persona to fit different contexts.

Privacy Concerns in Virtual Identity

Privacy is a major concern in virtual identity, as personal information can be easily accessed and exploited in the online world. Individuals must be aware of the risks associated with sharing personal information online and take steps to protect their virtual identity. This includes using strong passwords, limiting the amount of personal information shared online, and being cautious when interacting with unknown individuals or sites.
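On the service side, the strong-password advice is usually paired with storing only a salted, slow hash of each password, so a database breach does not directly expose credentials. A minimal sketch with Python's standard library (the iteration count and salt size are illustrative, not a recommendation):

```python
# Illustrative sketch of salted, slow password hashing with PBKDF2.
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Return (salt, digest) for storage; the raw password is never stored."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def check_password(password, salt, digest, iterations=200_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, digest)
assert not check_password("guess", salt, digest)
```

The per-user random salt means identical passwords hash differently, and the high iteration count makes brute-force guessing expensive even after a leak.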

Digital Footprint: Building Virtual Identity

A digital footprint is the trail of data left behind by an individual’s online activity. This includes social media posts, search engine queries, and website visits. A digital footprint can be used to build a virtual identity, as it provides insight into an individual’s behavior and interests. It is important for individuals to manage their digital footprint and ensure that it accurately represents their values and beliefs.

The Importance of Virtual Identity Management

Virtual identity management involves controlling and maintaining an individual’s online presence. This includes monitoring online behavior, managing privacy settings, and responding to negative content or reviews. Virtual identity management is important for individuals, businesses, and organizations to maintain a positive image and protect against reputation damage.

Virtual Identity and Cybersecurity

Virtual identity is closely tied to cybersecurity, as the protection of personal information and digital resources is essential to maintaining virtual identity. Cybersecurity involves protecting against unauthorized access, cyber-attacks, and data breaches. Individuals and businesses must implement strong security measures, such as firewalls, encryption, and intrusion detection systems, to protect against cyber threats.

Virtual Identity in Social Media

Social media platforms are a major component of virtual identity, as they provide a space for individuals to express themselves and interact with others online. Social media profiles can be used to build a virtual identity, showcase skills and accomplishments, and connect with others in a professional or personal capacity. It is important for individuals to be mindful of their social media activity and ensure that it aligns with their desired virtual identity.

Virtual Identities in Gaming: A Technical Discussion

Virtual identities are also prevalent in the gaming world, where individuals can create avatars and interact with others in virtual environments. Gaming platforms must implement strong security measures to protect against hacking, cheating, and other forms of abuse. Virtual identities can be used to enhance the gaming experience, as players can customize their avatars and build relationships with other players.

Virtual Reality and Virtual Identity

Virtual reality technology allows individuals to immerse themselves in virtual environments and interact with others in a more realistic way. Virtual reality can enhance virtual identity by allowing individuals to create more realistic avatars and interact with others in a more natural way. It is important for individuals to be aware of the privacy risks associated with virtual reality and take steps to protect their personal information.

The Future of Virtual Identity

As technology continues to evolve, the concept of virtual identity will become increasingly important. It is up to individuals, businesses, and organizations to manage virtual identity effectively and protect against cyber threats. By understanding the technical aspects of virtual identity and implementing strong security measures, individuals can build a positive online presence and protect their personal information in the digital world.


Will crypto make us live longer?

Imagine a world where patients and their families can directly fund scientists developing the next breakthrough drug or treatment that they need. A world in which drug development is a collaborative, open, and decentralized process. Such a future is not only possible, but the decentralized science movement is making it a reality.

Through blockchain, crypto, and NFTs of course. And that’s exactly what we are going to uncover on today’s CoinMarketCap episode:

🔵 CoinMarketCap is the world's most-referenced price-tracking website for cryptoassets in the rapidly growing cryptocurrency space. Its mission is to make crypto accessible all around the world through data and content.

DeSci Foundation
"Open science,
fair peer-review,
efficient funding.

We support the development of a more verifiable, more open, and fairer ecosystem for science and scientists."

Neuralink 2022 Update - Human Trials Are Coming

Let’s get into the latest updates on Elon Musk’s futuristic brain implant company Neuralink. Elon has been talking a lot lately about Neuralink and some of the applications that he expects it will be capable of, or not capable of, in the first decade or so of the product life cycle.

We know that Elon has broadly promised that Neuralink can do everything from helping people with spinal cord injuries, to enabling telepathic communication, curing brain diseases like Parkinson's and ALS, allowing us to control devices with our thoughts, and even merging human consciousness with artificial intelligence.

But as we get closer to the first clinical human trials for Neuralink, things are starting to become a little more clear on what this Brain Computer Interface technology will actually do, and how it will help people. So, let’s talk about what’s up with Neuralink in 2022.

Neuralink Human Trials 2022

When asked recently if Neuralink was still on track for their first human trial by the end of this year, Elon Musk replied by simply saying, “Yes.” Which I think is a good sign. It does seem like whenever Elon gives an abrupt answer like this, it means that he is confident about what he’s saying.

For comparison, at around the same time last year, when asked about human trials of Neuralink, Elon wrote, “If things go well, we might be able to do initial human trials later this year.” Notice the significant difference in those two replies. Not saying this is a science or anything, but it is notable.

We also saw earlier this year that Neuralink were looking to hire both a Director and a Coordinator of Clinical Trials. In the job posting, Neuralink says that the director will “work closely with some of the most innovative doctors and top engineers, as well as working with Neuralink’s first Clinical Trial participants.”

We know that Neuralink have been conducting their surgical trials so far with a combination of monkeys and pigs. In their 2020 demonstration, Neuralink showed us a group of pigs who had all received Neuralink implants, and in some cases had also undergone the procedure to have the implant removed. Then in 2021, we were shown a monkey who could play video games without the need for a controller, using only his brain, which was connected with two Neuralink implants.

Human trials with Neuralink would obviously be a major step forward in product development. Last year, Elon wrote that “Neuralink is working super hard to ensure implant safety & is in close communication with the FDA.” Previously, during Neuralink events, he has said that the company is striving to exceed all FDA safety requirements, not just meet them - in the same way that Tesla vehicles don't just pass crash safety requirements but score higher than any other car ever tested.

What can Neuralink Do?

As we get closer to the prospective timeline for human testing, Elon has also been drilling down a little more into what exactly Neuralink will be able to do in its first-phase implementation. It’s been a little hard to keep track when Elon talks about using this technology for every crazy thing that can be imagined - that Neuralink would make language obsolete, that it would allow us to create digital backups of human minds, that we could merge our consciousness with an artificial superintelligence and become ultra-enhanced cyborgs.

One of the new things that Elon has been talking about recently is treating morbid obesity with a Neuralink, which he brought up during a live TED Talk interview. That's not something we expected to hear, but the claim does seem to be backed up by some science. There have already been a couple of studies of brain implants in people with morbid obesity, in which the implant transmitted frequent electric pulses into the hypothalamus, a region of the brain thought to drive an increase in appetite. It’s still too soon to know whether that particular method is really effective, but it would be significantly less invasive than other surgeries that modify a patient's stomach in hopes of suppressing their appetite.

Elon followed up on the comment in a tweet, writing that it is “Certainly physically possible” to treat obesity through the brain. In the same post, Elon expanded on the concept, writing, “We’re working on bridging broken links between brain & body. Neuralinks in motor & sensory cortex bridging past weak/broken links in neck/spine to Neuralinks in spinal cord should theoretically be able to restore full body functionality.”

This is one of the more practical implementations of Neuralink technology that we are expecting to see. Electrical signals can be read in the brain by one Neuralink device, then wirelessly transmitted over Bluetooth to a second Neuralink device implanted in a muscle group, where the signal from the brain is delivered straight into the muscles. This kind of treatment has been done before with brain implants and muscular implants, but it has always required the patient to have a very cumbersome setup, with wires running through their body into their brain and wires running out of their skull into a computer. The real innovation of Neuralink is that it makes all of this possible with very small implants that connect wirelessly, so just by looking at the patient, you would never know that they have a brain implant.

Elon commented on this in another tweet, writing, “It is an electronics/mechanical/software engineering problem for the Neuralink device that is similar in complexity level to smart watches - which are not easy! - plus the surgical robot, which is comparable to state-of-the-art CNC machines.”

So the Neuralink has more in common with an Apple Watch than it does with any existing Brain Computer Interface technology. And it is only made possible by the autonomous robotic device that conducts the surgery; the electrodes that connect the Neuralink device into the brain cortex are too small and fine to be sewn by human hands.

Elon touched on this in a response to being asked if Neuralink could cure tinnitus, a permanent ringing in the ears. Elon wrote, “Definitely. Might be less than 5 years away, as current version Neuralinks are semi-generalized neural read/write devices with about 1000 electrodes and tinnitus probably needs much less than 1000.” He then added, “Future generation Neuralinks will increase electrode count by many orders of magnitude.”

This brings us back to setting more realistic expectations of what a Neuralink can and cannot do. It’s entirely possible that in the future the device can be expanded to handle some very complex issues, but as it is today, the benefits will be limited. Recently a person tweeted at Elon, asking, “I lost a grandparent to Alzheimer's - how will Neuralink address the loss of memory in the human brain?” Elon replied to say, “Current generation Neuralinks can help to some degree, but an advanced case of Alzheimer's often involves macro degeneration of the brain. However, Neuralinks should theoretically be able to restore almost any functionality lost due to *localized* brain damage from stroke or injury.”

So, because those 1,000 electrodes can’t go into all areas of the brain all at once, Neuralink will not be effective against a condition that afflicts the brain as a whole. But those electrodes can be targeted on one particular area of damage or injury, and that’s how Neuralink will start to help in the short term, and this will be the focus of early human trials.

During his TED Talk interview, Elon spoke about the people that reached out to him, wanting to participate in Neuralink’s first human trials. Quote, “The emails that we get at Neuralink are heartbreaking. They'll send us just tragic stories where someone was in the prime of life and they had an accident on a motorcycle and now someone who’s 25 years old can’t even feed themselves. This is something we could fix.” End quote.

In a separate interview with Business Insider that was done in March, Elon talked more specifically about the Neuralink timeline, saying, “Neuralink in the short term is just about solving brain injuries, spinal injuries and that kind of thing. So for many years, Neuralink’s products will just be helpful to someone who has lost the use of their arms or legs or has just a traumatic brain injury of some kind.”

This is a much more realistic viewpoint than what we’ve seen from Elon in past interviews. On one episode of the Joe Rogan Podcast, Elon claimed that within 5 years language would become obsolete because everyone would be using Neuralink to communicate with a kind of digital telepathy. That could have just been the weed talking, but I’m hoping that the more realistic Elon’s messaging becomes, the closer we are getting to a real medical trial of the implant.

And finally, the key to reaching a safe and effective human trial is going to be that robotic sewing machine that threads the electrodes into the cortex. Elon referred to it as being comparable to a CNC machine. Because as good as the chip itself might be, if we can’t have a reliable procedure to perform the implant, then nothing can move forward. The idea is that after a round section of the person’s skull is removed, this robot will come in and place the tiny wires into very specific areas in the outer layer of the brain - these don’t go deep into the tissue; only a couple of millimeters is enough to tap into the neural network of electrical signals. In theory this can all be done in a couple of hours while the patient is still conscious - they would get an anesthetic to numb their head, obviously, but they wouldn’t have to go under full sedation, and therefore could be in and out of the procedure in an afternoon. A very similar deal to laser eye surgery - a fast and automated method to accomplish a very complex medical task.

That’s what one Twitter user was referencing when he recently asked how close the new version two of the Neuralink robot was to inserting the chip as simply as a LASIK procedure. To which Elon responded, quote, “Getting there.”

We know that the robot system is being tested on monkeys right now, and from what Elon says, it is making progress towards being suitable for human trials.

The last interesting thing that Elon said on Twitter in relation to Neuralink was his comment, “No need for artificial intelligence, neural networks or machine learning quite yet.” He wrote these out as abbreviations, but these are all terms that we are well familiar with from Tesla and their autonomous vehicle program. We know that Elon is an expert in AI and he has people working for him at Tesla in this department that are probably the best in the world. This is a skill set that will eventually be applied at Neuralink, but to what end, we still don’t know.


Augmented Reality (AR): What it is, How it Works, Types & Uses


VI1: Technology Changes Rapidly; Humans Don’t


Tharon W. Howard, in Design to Thrive, 2010


The RIBS heuristics are essential to understanding how to design sustainable social networks and online communities. This final chapter is designed to afford network architects and community designers a better view both of RIBS and of external forces in the social media landscape. Social networks and online communities have the potential to effect economic, political, and social changes far beyond the expectations of their designers, and that kind of “success” can ironically threaten the sustainability of a community. When social media begin to impact larger institutions, such as the election of government officials, intellectual property laws, religious institutions, educational settings, and other established institutions of literate cultures, a battle for control ensues. The issues resulting from such clashes can destroy communities whose leaders lack a means of understanding and anticipating the conflicts. This chapter explores four areas that history suggests are likely to be the social networking battlefields of the future: copyrights and intellectual property; disciplinary control vs. individual creativity; visual, technological, and new media literacies; and decision-making contexts for future markets. One can also use RIBS as an analytical tool on existing communities to assess the health of their interactions.

Ownership and control of virtual identities

Control of an individual's virtual identity is yet another example of this future intellectual property battlefield. In this book, I've talked a lot about Blizzard's extraordinarily successful game, World of Warcraft (WoW). I've talked about how WoW players have an incredible investment in the avatars they create. Players spend months, years even, creating their avatars, collecting different weapons, armor, articles of clothing, and so on by playing the game. And, as shown in Chapter 6 with the character Justus, WoW players invest a lot of their real identities in the characters they create. For most of them, that avatar belongs to them; they made it and they invested significant resources in its creation. This is also true for users of the social network Second Life. They also identify with their avatars so strongly that users are living a “second life” through those avatars as well as the spaces they create. For WoW and Second Life users, their avatars are their virtual identities. So if these users want to share an image of their virtual selves with others, they should be able to do so, right?

Wrong. They can't share their virtual identities because (1) screen captures are considered “derivative works” and (2) because Blizzard owns World of Warcraft and Linden Labs owns Second Life. Blizzard had hundreds of artists, designers, and programmers create the armor, weapons, clothing, and mounts that players collect. As a result, they own the game and any derivative works that come from it. If a player wished, for example, to create a line of t-shirts and posters with her avatar on the front that she would sell through, say, Café Press, then Blizzard could sue for copyright infringement. And again, this makes sense from Blizzard's perspective, as the company provided all the artwork and software required to derive that particular avatar's configuration. But from the player's perspective, the avatar is her virtual self; it's who she is in that world. In the real world, she might wear Lee blue jeans to work every day; that doesn't mean she has to give Lee a cut of her salary or, to carry the analogy further, that Lee has the right to tell her she can't go to that particular job because she's wearing jeans they designed.

Ownership of purchasing identities

Beacon was an application that would tell other users on Facebook what products and services an individual was purchasing. The idea, presumably, was that knowing what videos your friends were renting, what movie tickets they were purchasing, and what video games they were buying would encourage you to make similar purchase decisions. However, the loss of control over the information being revealed about a user's Facebook identity infuriated large numbers of Facebook users who brought a class action lawsuit against Beacon, Blockbuster, Fandango, Overstock, Gamefly, Hotwire, and a small number of other companies who had partnered with Beacon to provide the service. In this case, the virtual identity wasn't an image or an avatar, it was the ability to control the story or picture of an individual that emerged through his or her purchasing decisions. The virtual identity in this case may be less tangible than an avatar, yet users’ need to own and control it is no less passionate.


An extreme form of encryption could solve big data’s privacy problem

Fully homomorphic encryption allows us to run analysis on data without ever seeing the contents. It could help us reap the full benefits of big data, from fighting financial fraud to catching diseases early

Like any doctor, Jacques Fellay wants to give his patients the best care possible. But his instrument of choice is no scalpel or stethoscope, it is far more powerful than that. Hidden inside each of us are genetic markers that can tell doctors like Fellay which individuals are susceptible to diseases such as AIDS, hepatitis and more. If he can learn to read these clues, then Fellay would have advance warning of who requires early treatment.

This could be life-saving. The trouble is, teasing out the relationships between genetic markers and diseases requires an awful lot of data, more than any one hospital has on its own. You might think hospitals could pool their information, but it isn’t so simple. Genetic data contains all sorts of sensitive details about people that could lead to embarrassment, discrimination or worse. Ethical worries of this sort are a serious roadblock for Fellay, who is based at Lausanne University Hospital in Switzerland. “We have the technology, we have the ideas,” he says. “But putting together a large enough data set is more often than not the limiting factor.”

Fellay’s concerns are a microcosm of one of the world’s biggest technological problems. The inability to safely share data hampers progress in all kinds of other spheres too, from detecting financial crime to responding to disasters and governing nations effectively. Now, a new kind of encryption is making it possible to wring the juice out of data without anyone ever actually seeing it. This could help end big data’s big privacy problem – and Fellay’s patients could be some of the first to benefit.

It was more than 15 years ago that we first heard that “data is the new oil”, a phrase coined by the British mathematician and marketing expert Clive Humby. Today, we are used to the idea that personal data is valuable. Companies like Meta, which owns Facebook, and Google’s owner Alphabet grew into multibillion-dollar behemoths by collecting information about us and using it to sell targeted advertising.

Data could do good for all of us too. Fellay’s work is one example of how medical data might be used to make us healthier. Plus, Meta shares anonymised user data with aid organisations to help plan responses to floods and wildfires, in a project called Disaster Maps. And in the US, around 1400 colleges analyse academic records to spot students who are likely to drop out and provide them with extra support. These are just a few examples out of many – data is a currency that helps make the modern world go around.

Getting such insights often means publishing or sharing the data. That way, more people can look at it and conduct analyses, potentially drawing out unforeseen conclusions. Those who collect the data often don’t have the skills or advanced AI tools to make the best use of it, either, so it pays to share it with firms or organisations that do. Even if no outside analysis is happening, the data has to be kept somewhere, which often means on a cloud storage server, owned by an external company.

You can’t share raw data unthinkingly. It will typically contain sensitive personal details, anything from names and addresses to voting records and medical information. There is an obligation to keep this information private, not just because it is the right thing to do, but because of stringent privacy laws, such as the European Union’s General Data Protection Regulation (GDPR). Breaches can see big fines.

Over the past few decades, we have come up with ways of trying to preserve people’s privacy while sharing data. The traditional approach is to remove information that could identify someone or make these details less precise, says privacy expert Yves-Alexandre de Montjoye at Imperial College London. You might replace dates of birth with an age bracket, for example. But that is no longer enough. “It was OK in the 90s, but it doesn’t really work any more,” says de Montjoye. There is an enormous amount of information available about people online, so even seemingly insignificant nuggets can be cross-referenced with public information to identify individuals.
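The date-of-birth example can be made concrete: generalisation replaces an exact value with a coarser bucket and drops direct identifiers before sharing. A toy sketch (the field names are invented for illustration):

```python
# Toy generalisation: share age brackets instead of exact ages, and
# drop names entirely. Real anonymisation needs far more care, since
# the remaining fields can still be cross-referenced with public data,
# as described above.
def age_bracket(age, width=10):
    low = (age // width) * width
    return f"{low}-{low + width - 1}"  # e.g. 34 -> "30-39"

rows = [{"name": "alice", "age": 34}, {"name": "bob", "age": 41}]
shared = [{"age_bracket": age_bracket(row["age"])} for row in rows]
# shared contains no names and no exact ages, only coarse brackets.
```

This is exactly the kind of defence de Montjoye describes as no longer sufficient on its own: the brackets still carry signal that can be combined with other datasets.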

One significant case of reidentification from 2021 involves apparently anonymised data sold to a data broker by the dating app Grindr, which is used by gay people among others. A media outlet called The Pillar obtained it and correlated the location pings of a particular mobile phone represented in the data with the known movements of a high-ranking US priest, showing that the phone popped up regularly near his home and at the locations of multiple meetings he had attended. The implication was that this priest had used Grindr, and a scandal ensued because Catholic priests are required to abstain from sexual relationships and the church considers homosexual activity a sin.

A more sophisticated way of maintaining people’s privacy has emerged recently, called differential privacy. In this approach, the manager of a database never shares the whole thing. Instead, they allow people to ask questions about the statistical properties of the data – for example, “what proportion of people have cancer?” – and provide answers. Yet if enough clever questions are asked, this can still lead to private details being triangulated. So the database manager also uses statistical techniques to inject errors into the answers, for example recording the wrong cancer status for some people when totting up totals. Done carefully, this doesn’t affect the statistical validity of the data, but it does make it much harder to identify individuals. The US Census Bureau adopted this method when the time came to release statistics based on its 2020 census.
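The error-injection step can be illustrated with a toy counting query: the database manager adds Laplace noise whose scale is set by a privacy parameter epsilon. The data and epsilon below are invented for illustration:

```python
# Toy sketch of differential privacy's noise injection for a counting
# query. Smaller epsilon means more noise and stronger privacy.
import random

def noisy_count(records, predicate, epsilon=0.5):
    """Answer "how many records match?" with Laplace(0, 1/epsilon) noise."""
    true_count = sum(1 for r in records if predicate(r))
    # A Laplace sample is the difference of two exponential samples;
    # a counting query has sensitivity 1, so the scale is 1/epsilon.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

records = [{"cancer": i % 7 == 0} for i in range(1000)]
answer = noisy_count(records, lambda r: r["cancer"])
# `answer` hovers near the true count of 143 but rarely equals it
# exactly, which masks whether any one individual is in the tally.
```

Averaged over many queries the statistics stay usable, but no single answer reveals whether a particular person's record changed the count.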

Trust no one

Still, differential privacy has its limits. It only provides statistical patterns and can’t flag up specific records – for instance to highlight someone at risk of disease, as Fellay would like to do. And while the idea is “beautiful”, says de Montjoye, getting it to work in practice is hard.

There is a completely different and more extreme solution, however, one with origins going back 40 years. What if you could encrypt and share data in such a way that others could analyse it and perform calculations on it, but never actually see it? It would be a bit like placing a precious gemstone in a glovebox, the chambers in labs used for handling hazardous material. You could invite people to put their arms into the gloves and handle the gem. But they wouldn’t have free access and could never steal anything.

This was the thought that occurred to Ronald Rivest, Len Adleman and Michael Dertouzos at the Massachusetts Institute of Technology in 1978. They devised a theoretical way of making the equivalent of a secure glovebox to protect data. It rested on a mathematical idea called a homomorphism, which refers to the ability to map data from one form to another without changing its underlying structure. Much of this hinges on using algebra to represent the same numbers in different ways.

Imagine you want to share a database with an AI analytics company, but it contains private information. The AI firm won’t give you the algorithm it uses to analyse data because it is commercially sensitive. So, to get around this, you homomorphically encrypt the data and send it to the company. It has no key to decrypt the data. But the firm can analyse the data and get a result, which itself is encrypted. Although the firm has no idea what it means, it can send it back to you. Crucially, you can now simply decrypt the result and it will make total sense.
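A toy version of this round trip can be shown with textbook RSA, which happens to be homomorphic for multiplication only: a third party can multiply two ciphertexts without ever decrypting them. The tiny key below is hopelessly insecure and purely for illustration; real FHE schemes are lattice-based and support arbitrary programs:

```python
# Toy multiplicative homomorphism with textbook RSA (insecure demo key).
p, q = 61, 53
n = p * q                           # public modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

c1, c2 = encrypt(7), encrypt(6)
c_product = (c1 * c2) % n           # computed on ciphertexts only
assert decrypt(c_product) == 7 * 6  # the owner decrypts the product: 42
```

The party doing the multiplication never sees 7, 6, or 42; it only manipulates ciphertexts. Fully homomorphic encryption extends this property from a single operation to arbitrary computation.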

“The promise is massive,” says Tom Rondeau at the US Defense Advanced Research Projects Agency (DARPA), which is one of many organisations investigating the technology. “It’s almost hard to put a bound to what we can do if we have this kind of technology.”

In the 30 years since the method was proposed, researchers devised homomorphic encryption schemes that allowed them to carry out a restricted set of operations, for instance only additions or multiplications. Yet fully homomorphic encryption, or FHE, which would let you run any program on the encrypted data, remained elusive. “FHE was what we thought of as being the holy grail in those days,” says Marten van Dijk at CWI, the national research institute for mathematics and computer science in the Netherlands. “It was kind of unimaginable.”

One approach to homomorphic encryption at the time involved an idea called lattice cryptography. This encrypts ordinary numbers by mapping them onto a grid with many more dimensions than the standard two. It worked – but only up to a point. Each computation ended up adding randomness to the data. As a result, doing anything more than a simple computation led to so much randomness building up that the answer became unreadable.

In 2009, Craig Gentry, then a PhD student at Stanford University in California, made a breakthrough. His brilliant solution was to periodically remove this randomness by decrypting the data under a secondary covering of encryption. If that sounds paradoxical, imagine that glovebox with the gem inside. Gentry’s scheme was like putting one glovebox inside another, so that the first one could be opened while still encased in a layer of security. This provided a workable FHE scheme for the first time.

Workable, but still slow: computations on the FHE-encrypted data could take millions of times longer than identical ones on raw data. Gentry went on to work at IBM, and over the next decade, he and others toiled to make the process quicker by improving the underlying mathematics. But lately the focus has shifted, says Michael Osborne at IBM Research in Zurich, Switzerland. There is a growing realisation that massive speed enhancements can be achieved by optimising the way cryptography is applied for specific uses. “We’re getting orders of magnitudes improvements,” says Osborne.

IBM now has a suite of FHE tools that can run AI and other analyses on encrypted data. Its researchers have shown they can detect fraudulent transactions in encrypted credit card data using an artificial neural network that can crunch 4000 records per second. They also demonstrated that they could use the same kind of analysis to scour the encrypted CT scans of more than 1500 people’s lungs to detect signs of covid-19 infection.

Also in the works are real-world, proof-of-concept projects with a variety of customers. In 2020, IBM revealed the results of a pilot study conducted with the Brazilian bank Banco Bradesco. Privacy concerns and regulations often prevent banks from sharing sensitive data either internally or externally. But in the study, IBM showed it could use machine learning to analyse encrypted financial transactions from the bank’s customers to predict if they were likely to take out a loan. The system was able to make predictions for more than 16,500 customers in 10 seconds and it performed just as accurately as the same analysis performed on unencrypted data.

Suspicious activity

Other companies are keen on this extreme form of encryption too. Computer scientist Shafi Goldwasser, a co-founder of privacy technology start-up Duality, says the firm is achieving significantly faster speeds by helping customers better structure their data and tailoring tools to their problems. Duality’s encryption tech has already been integrated into the software systems that technology giant Oracle uses to detect financial crimes, where it is assisting banks in sharing data to detect suspicious activity.

Still, for most applications, FHE processing remains at least 100,000 times slower than the same processing on unencrypted data, says Rondeau. This is why, in 2020, DARPA launched a programme called Data Protection in Virtual Environments to create specialised chips designed to run FHE. Lattice-encrypted data comes in much larger chunks than normal chips are used to dealing with. So several research teams involved in the project, including one led by Duality, are investigating ways to alter circuits to efficiently process, store and move this kind of data. The goal is to analyse any FHE-encrypted data just 10 times slower than usual, says Rondeau, who is managing the programme.

Even if it were lightning fast, FHE wouldn’t be flawless. Van Dijk says it doesn’t work well with certain kinds of program, such as those that contain branching logic made up of “if this, do that” operations. Meanwhile, information security researcher Martin Albrecht at Royal Holloway, University of London, points out that the justification for FHE is based on the need to share data so it can be analysed. But a lot of routine data analysis isn’t that complicated – doing it yourself might sometimes be simpler than getting to grips with FHE.

For his part, de Montjoye is a proponent of privacy engineering: not relying on one technology to protect people’s data, but combining several approaches in a defensive package. FHE is a great addition to that toolbox, he reckons, but not a standalone winner.

This is exactly the approach that Fellay and his colleagues have taken to smooth the sharing of medical data. Fellay worked with computer scientists at the Swiss Federal Institute of Technology in Lausanne who created a scheme combining FHE with another privacy-preserving tactic called secure multiparty computation (SMC). This sees the different organisations join up chunks of their data in such a way that none of the private details from any organisation can be retrieved.
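One classic building block of SMC is additive secret sharing, which captures the "join up chunks of data" idea in miniature. In the sketch below (party count, field size and values are all invented for illustration), each hospital splits its private number into random shares; any single share looks like random noise, yet the shares combine to give the joint total without any party revealing its input:

```python
# Minimal additive secret-sharing sketch of the SMC idea.
# Parameters and values are illustrative, not from any real deployment.
import random

MOD = 2**61 - 1  # a large prime modulus (illustrative)

def share(value, n_parties):
    # Split `value` into n random shares that sum to it modulo MOD.
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

# Three hospitals each split their private patient count into shares.
counts = [112, 87, 203]
all_shares = [share(c, 3) for c in counts]

# Each party locally sums the shares it holds (one per hospital), then
# the partial sums are combined into the joint total.
partials = [sum(column) % MOD for column in zip(*all_shares)]
total = sum(partials) % MOD
assert total == sum(counts)
```

Production systems such as the Swiss one layer FHE on top of this kind of protocol and support far more than sums, but the privacy principle is the same: no single organisation ever holds another's raw data.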

In a paper published in October 2021, the team used a combination of FHE and SMC to securely pool data from multiple sources and use it to predict the efficacy of cancer treatments or identify specific variations in people’s genomes that predict the progression of HIV infection. The trial was so successful that the team has now deployed the technology to allow Switzerland’s five university hospitals to share patient data, both for medical research and to help doctors personalise treatments. “We’re implementing it in real life,” says Fellay, “making the data of the Swiss hospitals shareable to answer any research question as long as the data exists.”

If data is the new oil, then it seems the world’s thirst for it isn’t letting up. FHE could be akin to a new mining technology, one that will open up some of the most valuable but currently inaccessible deposits. Its slow speed may be a stumbling block. But, as Goldwasser says, comparing the technology with completely unencrypted processing makes no sense. “If you believe that security is not a plus, but it’s a must,” she says, “then in some sense there is no overhead.”


6 April 2022

By Edd Gent
