
Tokenizing Virtual Identity: Blockchain & AI’s Inevitable Impact


Tokenizing virtual identity is the latest buzzword in the world of technology. With the rise of blockchain and AI, the process of tokenizing virtual identity has become more feasible and efficient. In a world that is increasingly dependent on digital communication and transactions, virtual identity has become an essential aspect of our lives. From social media to online banking, virtual identity is crucial for individuals and organizations alike. This article explores the inevitable impact of blockchain and AI on tokenizing virtual identity.

What are Blockchain and AI?

To understand the role of blockchain and AI in tokenizing virtual identity, we need to first understand what these technologies are. Blockchain is a decentralized and distributed digital ledger that records transactions across multiple computers, allowing secure and transparent storage of data. AI, on the other hand, refers to the simulation of human intelligence in machines that can perform tasks that typically require human cognition, such as learning, reasoning, and problem-solving.

The Benefits of Tokenizing Virtual Identity

Tokenizing virtual identity offers several benefits. Firstly, it provides a higher degree of security than traditional identity management systems, as it is based on cryptography and decentralized storage. Secondly, it offers greater control and ownership of personal data, allowing individuals to manage and monetize their identity. Thirdly, it offers greater efficiency by reducing the need for intermediaries and streamlining identity verification processes.

The Role of Blockchain in Tokenizing Identity

Blockchain plays a crucial role in tokenizing virtual identity. By providing a decentralized and secure platform for storing and managing identity data, blockchain ensures that personal data is owned and controlled by individuals, rather than centralized institutions. Blockchain also enables the creation of self-sovereign identities, where individuals have complete control over their identity data and can share it securely with trusted parties.
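
As a minimal, illustrative sketch (with hypothetical attribute names, and HMAC standing in for a real digital signature scheme such as Ed25519), identity tokenization can be thought of as hashing and signing the holder's attributes so that only the hash and signature, never the raw data, are shared or anchored on-chain:

```python
# A toy sketch of tokenizing identity attributes. The attribute names are
# hypothetical; HMAC stands in for a real signature scheme (e.g., Ed25519)
# so the example needs only the Python standard library.
import hashlib, hmac, json, secrets

signing_key = secrets.token_bytes(32)  # the holder's private key (illustrative)

def tokenize_identity(attributes: dict) -> dict:
    payload = json.dumps(attributes, sort_keys=True).encode()
    attribute_hash = hashlib.sha256(payload).hexdigest()  # what goes on-chain
    signature = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return {"attribute_hash": attribute_hash, "signature": signature}

token = tokenize_identity({"name": "Alice", "dob": "1990-01-01"})
print(token)  # raw attributes stay with the holder; only the token is shared
```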

The Role of AI in Tokenizing Identity

AI plays a crucial role in tokenizing virtual identity by automating identity verification processes. By leveraging machine learning algorithms, AI can analyze large volumes of data and make intelligent decisions about identity verification. This can help reduce the risk of fraud and improve the efficiency of identity verification processes.
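
As a rough sketch of what this automation could look like (using entirely synthetic, hypothetical login features), an off-the-shelf anomaly detector can flag out-of-pattern verification attempts for extra scrutiny:

```python
# A toy illustration of ML-assisted identity verification: train an anomaly
# detector on synthetic "normal" login behaviour, then flag outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical features: [hour_of_day, distance_from_home_km, device_age_days]
normal_logins = np.column_stack([
    rng.normal(13, 2, 500),     # logins cluster around midday
    rng.exponential(5, 500),    # usually close to home
    rng.uniform(30, 400, 500),  # familiar, older devices
])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)

suspicious = np.array([[3.0, 8500.0, 0.0]])  # 3 am, far away, brand-new device
print(model.predict(suspicious))  # -1 means anomalous: escalate verification
```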

Tokenizing Virtual Identity: Use Cases

Tokenizing virtual identity has several use cases. For example, it can be used for secure and decentralized voting systems, where individuals can verify their identity and cast their vote securely and anonymously. It can also be used for secure and decentralized identity verification for financial and healthcare services, reducing the risk of identity theft and fraud.

Tokenizing Virtual Identity: Challenges

Tokenizing virtual identity also presents several challenges. One of the main challenges is interoperability, as different blockchain networks and AI systems may not be compatible with each other. Another challenge is scalability, as blockchain and AI systems may not be able to handle the volume of data required for identity verification on a large scale.

Security Concerns in Tokenizing Identity

Security is a key concern in tokenizing virtual identity. While blockchain and AI offer greater security than traditional identity management systems, they are not immune to attacks. Hackers could potentially exploit vulnerabilities in blockchain and AI systems to gain access to personal data. It is therefore crucial to implement robust security measures to protect personal data.

Privacy Issues in Tokenizing Identity

Privacy is another key concern in tokenizing virtual identity. While tokenizing virtual identity offers greater control and ownership of personal data, it also raises concerns about data privacy. It is essential to ensure that personal data is not shared without consent and that individuals have the right to access, modify, and delete their data.

Legal Implications of Tokenizing Identity

Tokenizing virtual identity also has legal implications. As personal data becomes more valuable, it is crucial to ensure that there are adequate laws and regulations in place to protect personal data. It is also essential to ensure that individuals have the right to access and control their data, and that they are not discriminated against based on their identity.

The Future of Tokenizing Virtual Identity

The future of tokenizing virtual identity looks bright. As blockchain and AI continue to evolve, we can expect to see more secure, efficient, and decentralized identity management systems. We can also expect to see more use cases for tokenizing virtual identity, from secure and anonymous voting systems to decentralized identity verification for financial and healthcare services.

Embracing Blockchain & AI for Identity Management

In conclusion, tokenizing virtual identity is an inevitable trend that will revolutionize the way we manage identity. By leveraging blockchain and AI, we can create more secure, efficient, and decentralized identity management systems that give individuals greater control and ownership of their personal data. While there are challenges and concerns associated with tokenizing virtual identity, these can be addressed through robust security measures, privacy protections, and adequate laws and regulations. As we continue to embrace blockchain and AI for identity management, we can look forward to a more secure, efficient, and decentralized future.


Ransomware is already out of control. AI-powered ransomware could be ‘terrifying.’

Hiring AI experts to automate ransomware could be the next step for deep-pocketed ransomware groups seeking to scale up their attacks.
 

In the perpetual battle between cybercriminals and defenders, the latter have always had one largely unchallenged advantage: The use of AI and machine learning allows them to automate a lot of what they do, especially around detecting and responding to attacks. This leg-up hasn't been nearly enough to keep ransomware at bay, but it has still been far more than what cybercriminals have ever been able to muster in terms of AI and automation.

That’s because deploying AI-powered ransomware would require AI expertise. And the ransomware gangs don’t have it. At least not yet.

But given the wealth accumulated by a number of ransomware gangs in recent years, it may not be long before attackers do bring aboard AI experts of their own, prominent cybersecurity authority Mikko Hyppönen said.

Some of these groups have so much cash — or bitcoin, rather — that they could now potentially compete with legit security firms for talent in AI and machine learning, according to Hyppönen, the chief research officer at cybersecurity firm WithSecure.

Ransomware gang Conti pulled in $182 million in ransom payments during 2021, according to blockchain data platform Chainalysis. Leaks of Conti's chats suggest that the group may have invested some of its take in pricey "zero day" vulnerabilities and the hiring of penetration testers.

"We have already seen [ransomware groups] hire pen testers to break into networks to figure out how to deploy ransomware. The next step will be that they will start hiring ML and AI experts to automate their malware campaigns," Hyppönen told Protocol.

"It's not a far reach to see that they will have the capability to offer double or triple salaries to AI/ML experts in exchange for them to go to the dark side," he said. "I do think it's going to happen in the near future — if I would have to guess, in the next 12 to 24 months."

If this happens, Hyppönen said, "it would be one of the biggest challenges we're likely to face in the near future."

AI for scaling up ransomware

While doom-and-gloom cybersecurity predictions are abundant, with three decades of experience on matters of cybercrime, Hyppönen is not just any prognosticator. He has been with his current company, which until recently was known as F-Secure, since 1991 and has been researching — and vying with — cybercriminals since the early days of the concept.

In his view, the introduction of AI and machine learning on the attacker side would distinctly change the game. He's not alone in thinking so.

When it comes to ransomware, for instance, automating large portions of the process could mean an even greater acceleration in attacks, said Mark Driver, a research vice president at Gartner.

Currently, ransomware attacks are often highly tailored to the individual target, making the attacks more difficult to scale, Driver said. Even so, the number of ransomware attacks doubled year-over-year in 2021, SonicWall has reported — and ransomware has been getting more successful as well. The percentage of affected organizations that agreed to pay a ransom shot up to 58% in 2021, from 34% the year before, Proofpoint has reported.

However, if attackers were able to automate ransomware using AI and machine learning, that would allow them to go after an even wider range of targets, according to Driver. That could include smaller organizations, or even individuals.

"It's not worth their effort if it takes them hours and hours to do it manually. But if they can automate it, absolutely," Driver said. Ultimately, “it's terrifying.”

The prediction that AI is coming to cybercrime in a big way is not brand new, but it still has yet to manifest, Hyppönen said. Most likely, that's because the ability to compete with deep-pocketed enterprise tech vendors to bring in the necessary talent has always been a constraint in the past.

The huge success of the ransomware gangs in 2021, predominantly Russia-affiliated groups, would appear to have changed that, according to Hyppönen. Chainalysis reports it tracked ransomware payments totaling $602 million in 2021, led by Conti's $182 million. The ransomware group that struck the Colonial Pipeline, DarkSide, earned $82 million last year, and three other groups brought in more than $30 million in that single year, according to Chainalysis.

Hyppönen estimated that less than a dozen ransomware groups might have the capacity to invest in hiring AI talent in the next few years, primarily gangs affiliated with Russia.

‘We would definitely not miss it’

If cybercrime groups hire AI talent with some of their windfall, Hyppönen believes the first thing they'll do is automate the most manually intensive parts of a ransomware campaign. The actual execution of a ransomware attack remains difficult, he said.

"How do you get it on 10,000 computers? How do you find a way inside corporate networks? How do you bypass the different safeguards? How do you keep changing the operation, dynamically, to actually make sure you're successful?" Hyppönen said. “All of that is manual."

Monitoring systems, changing the malware code, recompiling it and registering new domain names to avoid defenses — things it takes humans a long time to do — would all be fairly simple to do with automation. "All of this is done in an instant by machines,” Hyppönen said.

That means it should be very obvious when AI-powered automation comes to ransomware, according to Hyppönen.

"This would be such a big shift, such a big change," he said. "We would definitely not miss it."

But would the ransomware groups really decide to go to all this trouble? Allie Mellen, an analyst at Forrester, said she's not as sure. Given how successful ransomware groups are already, Mellen said it's unclear why they would bother to take this route.

"They're having no problem with the approaches that they're taking right now," she said. "If it ain't broke, don't fix it."

Others see a higher likelihood of AI playing a role in attacks such as ransomware. Like defenders, ransomware gangs clearly have a penchant for evolving their techniques to try to stay ahead of the other side, said Ed Bowen, managing director for the AI Center of Excellence at Deloitte.

"I'm expecting it — I expect them to be using AI to improve their ability to get at this infrastructure," Bowen said. "I think that's inevitable."

Lower barrier to entry

While AI talent is in extremely short supply right now, that will start to change in coming years as a wave of people graduate from university and research programs in the field, Bowen noted.

The barriers to entry in the AI field are also getting lower as tools become more accessible to users, Hyppönen said.

"Today, all security companies rely heavily on machine learning — so we know exactly how hard it is to hire experts in this field. Especially people who have expertise both in cybersecurity and in machine learning. So these are hard people to recruit," he told Protocol. "However, it's becoming easier to become an expert, especially if you don't need to be a world-class expert."

That dynamic could increase the pool of candidates for cybercrime organizations who are, simultaneously, richer and “more powerful than ever before," Hyppönen said.

Should this future come to pass, it will have massive implications for cyber defenders, as a greater volume of attacks, against a broader range of targets, would likely be the result.

Among other things, this would likely mean that the security industry would itself be looking to compete harder than ever for AI talent, if only to try to stay ahead of automated ransomware and other AI-powered threats.

Between attackers and defenders, "you're always leapfrogging each other" on technical capabilities, Driver said. "It's a war of trying to get ahead of the other side."


What is differential privacy in machine learning (preview)?

How differential privacy works

Differential privacy is a set of systems and practices that help keep the data of individuals safe and private. In machine learning solutions, differential privacy may be required for regulatory compliance.

[Figure: the differential privacy machine learning process]

In traditional scenarios, raw data is stored in files and databases. When users analyze data, they typically use the raw data. This is a concern because it might infringe on an individual's privacy. Differential privacy tries to deal with this problem by adding "noise" or randomness to the data so that users can't identify any individual data points. At the least, such a system provides plausible deniability. Therefore, the privacy of individuals is preserved with limited impact on the accuracy of the data.

In differentially private systems, data is shared through requests called queries. When a user submits a query for data, operations known as privacy mechanisms add noise to the requested data. Privacy mechanisms return an approximation of the data instead of the raw data. This privacy-preserving result appears in a report. Reports consist of two parts, the actual data computed and a description of how the data was created.
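
As a minimal sketch of such a privacy mechanism, the classic Laplace mechanism adds noise scaled to the query's sensitivity divided by the privacy parameter epsilon (discussed in the next section):

```python
# The Laplace mechanism: noise is drawn with scale = sensitivity / epsilon.
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

ages = [34, 29, 41, 52, 38]
# A counting query has sensitivity 1: adding or removing one person
# changes the count by at most 1.
print(laplace_mechanism(len(ages), sensitivity=1.0, epsilon=0.5))
```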

Differential privacy metrics

Differential privacy tries to protect against the possibility that a user can produce an indefinite number of reports to eventually reveal sensitive data. A value known as epsilon measures how noisy, or private, a report is. Epsilon has an inverse relationship to noise or privacy. The lower the epsilon, the more noisy (and private) the data is.

Epsilon values are non-negative. Values below 1 provide full plausible deniability. Anything above 1 comes with a higher risk of exposure of the actual data. As you implement machine learning solutions with differential privacy, you generally want to work with epsilon values between 0 and 1.

Another value directly correlated to epsilon is delta. Delta is a measure of the probability that a report isn’t fully private. The higher the delta, the higher the epsilon. Because these values are correlated, epsilon is used more often.

Limit queries with a privacy budget

To ensure privacy in systems where multiple queries are allowed, differential privacy defines a rate limit. This limit is known as a privacy budget. Privacy budgets prevent data from being recreated through multiple queries. Privacy budgets are allocated an epsilon amount, typically between 1 and 3, to limit the risk of reidentification. As reports are generated, privacy budgets keep track of the epsilon value of individual reports as well as the aggregate for all reports. After a privacy budget is spent or depleted, users can no longer access data.
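
A privacy budget can be sketched as simple epsilon accounting (an illustrative toy, not the bookkeeping of any particular library): each report spends part of the budget, and further queries are refused once it is depleted.

```python
# Toy privacy-budget accounting: spend epsilon per query, deny when depleted.
class PrivacyBudget:
    def __init__(self, total_epsilon: float):
        self.remaining = total_epsilon

    def spend(self, epsilon: float) -> bool:
        if epsilon > self.remaining:
            return False           # budget exhausted: deny the query
        self.remaining -= epsilon
        return True

budget = PrivacyBudget(total_epsilon=1.0)
for query_epsilon in [0.3, 0.3, 0.3, 0.3]:
    print("allowed" if budget.spend(query_epsilon) else "denied")
# The fourth query is denied: 4 * 0.3 would exceed the budget of 1.0.
```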

Reliability of data

Although the preservation of privacy should be the goal, there’s a tradeoff when it comes to usability and reliability of the data. In data analytics, accuracy can be thought of as a measure of uncertainty introduced by sampling errors. This uncertainty tends to fall within certain bounds. Accuracy from a differential privacy perspective instead measures the reliability of the data, which is affected by the uncertainty introduced by the privacy mechanisms. In short, a higher level of noise or privacy translates to data that has a lower epsilon, accuracy, and reliability.
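
A small simulation (assuming a counting query with sensitivity 1) makes the tradeoff visible: the average error of the Laplace mechanism grows as epsilon shrinks.

```python
# Empirical error of the Laplace mechanism at different epsilon values.
import numpy as np

true_count = 1000
for epsilon in [0.1, 0.5, 1.0]:
    noisy = true_count + np.random.laplace(0.0, 1.0 / epsilon, size=10_000)
    print(f"epsilon={epsilon}: mean absolute error ~ {np.abs(noisy - true_count).mean():.1f}")
# The error scales like 1/epsilon: roughly 10 at epsilon=0.1, 1 at epsilon=1.0.
```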

Open-source differential privacy libraries

SmartNoise is an open-source project that contains components for building machine learning solutions with differential privacy. SmartNoise is made up of the following top-level components:

  • SmartNoise Core library
  • SmartNoise SDK library

SmartNoise Core

The core library includes the following privacy mechanisms for implementing a differentially private system:

  • Analysis: A graph description of arbitrary computations.
  • Validator: A Rust library that contains a set of tools for checking and deriving the necessary conditions for an analysis to be differentially private.
  • Runtime: The medium to execute the analysis. The reference runtime is written in Rust, but runtimes can be written using any computation framework, such as SQL or Spark, depending on your data needs.
  • Bindings: Language bindings and helper libraries to build analyses. Currently, SmartNoise provides Python bindings.

SmartNoise SDK

The system library provides the following tools and services for working with tabular and relational data:

  • Data Access: A library that intercepts and processes SQL queries and produces reports. This library is implemented in Python and supports the following ODBC and DBAPI data sources: PostgreSQL, SQL Server, Spark, Presto, and Pandas.
  • Service: An execution service that provides a REST endpoint to serve requests or queries against shared data sources. The service is designed to allow composition of differential privacy modules that operate on requests containing different delta and epsilon values, also known as heterogeneous requests. This reference implementation accounts for additional impact from queries on correlated data.
  • Evaluator: A stochastic evaluator that checks for privacy violations, accuracy, and bias. The evaluator supports the following tests:
    • Privacy Test: Determines whether a report adheres to the conditions of differential privacy.
    • Accuracy Test: Measures whether the reliability of reports falls within the upper and lower bounds given a 95% confidence level.
    • Utility Test: Determines whether the confidence bounds of a report are close enough to the data while still maximizing privacy.
    • Bias Test: Measures the distribution of reports for repeated queries to ensure they aren't unbalanced.



Responsible AI – Privacy and Security Requirements

Training data and prediction requests can both contain sensitive information about people or businesses, which has to be protected. How do you safeguard the privacy of individuals? What steps are taken to ensure that individuals have control over their data? Many countries have regulations in place to ensure privacy and security.

In Europe there is the GDPR (General Data Protection Regulation) and in California there is the CCPA (California Consumer Privacy Act). Fundamentally, both give individuals control over their data and require that companies protect the data being used in the model. When data processing is based on consent, an individual has the right to revoke that consent at any time.

Defending ML Models Against Attacks – Ensuring Privacy of Consumer Data

I have briefly discussed the tools for adversarial training, the CleverHans and Foolbox Python libraries, here: Model Debugging: Sensitivity Analysis, Adversarial Training, Residual Analysis. Let us now look at more stringent means of protecting an ML model against attacks. Protecting the model is important because it ensures the privacy and security of the underlying data. An ML model may be attacked in different ways; some literature classifies the attacks into "Information Harms" and "Behavioural Harms". Information Harm occurs when information is allowed to leak from the model. There are different forms of Information Harm: Membership Inference, Model Inversion, and Model Extraction. In Membership Inference, the attacker can determine whether some information was part of the training data. In Model Inversion, the attacker can reconstruct the training data from the model, and in Model Extraction, the attacker is able to extract the entire model!

Behavioural Harm occurs when the attacker can change the behaviour of the ML model itself, for example by inserting malicious data. I have given an example involving an autonomous vehicle in this article: Model Debugging: Sensitivity Analysis, Adversarial Training, Residual Analysis

Cryptography | Differential privacy to protect data

You should consider privacy-enhancing technologies like Secure Multi-Party Computation (SMPC) and Fully Homomorphic Encryption (FHE). SMPC involves multiple systems training or serving the model whilst the actual data is kept secure.

In FHE the data is encrypted. Prediction requests involve encrypted data, and training of the model is also carried out on encrypted data. This comes at a heavy computational cost because the data is never decrypted except by the user. Users send encrypted prediction requests and receive back an encrypted result. The goal is that, using cryptography, you can protect consumers' data.
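
Production FHE libraries are complex, but the core idea of computing on encrypted data can be illustrated with the Paillier cryptosystem, which is additively (not fully) homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The toy key below is hopelessly insecure and purely for illustration.

```python
# Toy Paillier cryptosystem: adds plaintexts by multiplying ciphertexts.
# Insecure demo primes; real keys use primes of 1024+ bits.
from math import gcd
import random

p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
L = lambda x: (x - 1) // n
mu = pow(L(pow(g, lam, n2)), -1, n)            # modular inverse

def encrypt(m: int) -> int:
    r = random.choice([x for x in range(2, n) if gcd(x, n) == 1])
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(20), encrypt(22)
print(decrypt((c1 * c2) % n2))  # 42: the sum, computed without decrypting inputs
```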

Differential Privacy in Machine Learning

Differential privacy involves protecting the data by adding noise so that attackers cannot identify the real content. SmartNoise is an open-source project that contains components for building machine learning solutions with differential privacy. SmartNoise is made up of the following top-level components:

✔️ SmartNoise Core library

✔️ SmartNoise SDK library

This is a good read to understand Differential Privacy: https://docs.microsoft.com/en-us/azure/machine-learning/concept-differential-privacy

 Private Aggregation of Teacher Ensembles (PATE)

This follows the Knowledge Distillation concept that I discussed here: Post 1: Knowledge Distillation, Post 2: Knowledge Distillation. PATE begins by dividing the data into k non-overlapping partitions. It then trains k teacher models on those partitions and aggregates their results into an aggregate teacher model. During aggregation, noise is added to the teachers' vote counts and to the output.

For deployment, you use the student model. To train the student model, you take unlabelled public data and feed it to the aggregate teacher; the result is labelled data with which the student model is trained. Only the student model is deployed.
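
A minimal sketch of PATE's central step, the noisy aggregation of teacher votes, might look like this (the votes and epsilon are illustrative):

```python
# Noisy aggregation in PATE: add Laplace noise to the teachers' vote counts,
# then take the argmax as the label used to train the student.
import numpy as np

def noisy_aggregate(teacher_votes: np.ndarray, num_classes: int, epsilon: float) -> int:
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += np.random.laplace(0.0, 1.0 / epsilon, size=num_classes)
    return int(np.argmax(counts))

votes = np.array([1, 1, 1, 0, 1, 2, 1, 1, 0, 1])  # predictions from 10 teachers
print(noisy_aggregate(votes, num_classes=3, epsilon=2.0))  # usually prints 1
```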

The process is illustrated in the figure below:

[Figure: PATE (Private Aggregation of Teacher Ensembles)]



Employee monitoring software became the new normal during COVID-19. It seems workers are stuck with it

Many employers say they'll keep the surveillance software switched on — even for office workers.


In early 2020, as offices emptied and employees set up laptops on kitchen tables to work from home, the way managers kept tabs on white-collar workers underwent an abrupt change as well.

Bosses used to counting the number of empty desks, or gauging the volume of keyboard clatter, now had to rely on video calls and tiny green "active" icons in workplace chat programs.

In response, many employers splashed out on sophisticated kinds of spyware to claw back some oversight.

"Employee monitoring software" became the new normal, logging keystrokes and mouse movement, capturing screenshots, tracking location, and even activating webcams and microphones.

At the same time, workers were dreaming up creative new ways to evade the software's all-seeing eye.

Now, as workers return to the office, demand for employee tracking "bossware" remains high, its makers say.

Surveys of employers in white-collar industries show that even returned office workers will be subject to these new tools.

What was introduced in the crisis of the pandemic, as a short-term remedy for lockdowns and working from home (WFH), has quietly become the "new normal" for many Australian workplaces.

A game of cat-and-mouse jiggler

For many workers, the surveillance software came out of nowhere.

The abrupt appearance of spyware in many workplaces can be seen in the sudden popularity of covert devices designed to evade this surveillance.

Before the pandemic, "mouse jigglers" were niche gadgets used by police and security agencies to keep seized computers from logging out and requiring a password to access.

An array of mouse jigglers for sale on eBay. (Supplied: eBay)

Plugged into a laptop's USB port, the jiggler randomly moves the mouse cursor, faking activity when there's no-one there.

When the pandemic hit, sales boomed among WFH employees.

In the last two years, James Franklin, a young Melbourne software engineer, has mailed 5,000 jigglers to customers all over the country — mostly to employees of "large enterprises", he says.

Often, he's had to upgrade the devices to evade an employer's latest methods of detecting and blocking them.

It's been a game of cat-and-mouse jiggler.

"Unbelievable demand is the best way to describe it," he said.

And mouse jigglers aren't the only trick for evading the software.

In July last year, a Californian mum's video about a WFH hack went viral on TikTok.

Leah told how her computer set her status to "away" whenever she stopped moving her cursor for more than a few seconds, so she had placed a small vibrating device under the mouse.

"It's called a mouse mover … so you can go to the bathroom, free from paranoia."

Others picked up the story and shared their tips, from free downloads of mouse-mimicking software to YouTube videos that are intended to play on a phone screen, with an optical mouse resting on top. The movement of the lines in the video makes the cursor move.

"A lot of people have reached out on TikTok," Leah told the ABC.

"There were a lot of people going, 'Oh, my gosh, I can't believe I haven't heard of this before, send me the link.'"

Tracking software sales are up — and staying up

On the other side of the world, in New York, EfficientLab makes and sells an employee surveillance software called Controlio that's widely used in Australia.

It has "hundreds" of Australian clients, said sales manager Moath Galeb.

"At the beginning of the pandemic, there was already a lot of companies looking into monitoring software, but it wasn't such an important feature," he said.

"But the pandemic forced many people to work remotely and the companies started to look into employee monitoring software more seriously."

Managers can track employees' productivity scores on a realtime dashboard. (Supplied: Controlio)

In Australia, as in other countries, the number of Controlio clients has increased "two or three times" with the pandemic.

This increase was to be expected — but what surprised even Mr Galeb was that demand has remained strong in recent months.

"They're getting these insights into how people get their work done," he said.

The most popular features for employers, he said, track employee "active time" to generate a "productivity score".

Managers view these statistics through an online dashboard.

Advocates say this is a way of looking after employees, rather than spying on them.

Bosses can see who is "working too many hours", Mr Galeb said.

"Depending on the data, or the insights that you receive, you get to build this picture of who is doing more and doing less."

Nothing new for blue-collar workers

But those being monitored are likely to see things a little differently. 

Ultimately, how the software is used depends on what power bosses have over their workers.

For the increasing number of people in insecure, casualised work, these tools appear less than benign.

In an August 2020 submission to a NSW senate committee investigating the impact of technological change on the future of work, the United Workers Union featured the story of a call centre worker who had been working remotely during the pandemic. 

One day, the employer informed the man that monitoring software had detected his apparent absence for a 45-minute period two weeks earlier.

The submission reads:

Unable to remember exactly what he was doing that particular day, the matter was escalated to senior management who demanded to know exactly where he physically was during this time. This 45-minute break in surveillance caused considerable grief and anxiety for the company. A perceived productivity loss of $27 (the worker's hourly rate) resulted in several meetings involving members of upper management, formal letters of correspondence, and a written warning delivered to the worker.

There were many stories like this one, said Lauren Kelly, who wrote the submission.

"The software is sold as a tool of productivity and efficiency, but really it's about surveillance and control," she said.

"I find it very unlikely it would result in management asking somebody to slow down and do less work."

Ms Kelly, who is now a PhD candidate at RMIT with a focus on workplace technologies including surveillance, says tools for tracking an employee's location and activity are nothing new — what has changed in the past two years is the types of workplaces where they are used.

Before the pandemic, it was more for blue-collar workers. Now, it's for white-collar workers too.

"Once it's in, it's in. It doesn't often get uninstalled," she said.

"The tracking software becomes a ubiquitous part of the infrastructure of management."

The 'quid pro quo' of WFH?

More than half of Australian small-to-medium-sized businesses used software to monitor the activity and productivity of employees working remotely, according to a Capterra survey in November 2020.

That's about on par with the United States.

"There's a tendency in Australia to view these workplace trends as really bad in other places like the United States and China," Ms Kelly said.

"But actually, those trends are already here."

The latest software claims to monitor employee emotions like happiness and sadness. (Supplied: StaffCircle)

In fact, a 2021 survey suggested Australian employers had embraced location-tracking software more warmly than those of any other country.

Every two years, the international law firm Herbert Smith Freehills surveys thousands of its large corporate clients around the world for an ongoing series of reports on the future of work.

In 2021, it found 90 per cent of employers in Australia monitor the location of employees when they work remotely, significantly more than the global average of less than 80 per cent.

Many introduced these tools having found that during lockdown, some employees had relocated interstate or even overseas without asking permission or informing their manager, said Natalie Gaspar, an employment lawyer and partner at Herbert Smith Freehills.

"I had clients of mine saying that they didn't realise that their employees were working in India or Pakistan," she said.

"And that's relevant because there [are] different laws that apply in those different jurisdictions about workers compensation laws, safety laws, all those sorts of things."

She said that, anecdotally, many of her "large corporate" clients planned to keep the employee monitoring software tools — even for office workers.

"I think that's here to stay in large parts."

And she said employees, in general, accepted this elevated level of surveillance as "the cost of flexibility".

"It's the quid pro quo for working from home," she said.

Is it legal?

The short answer is yes, but there are complications.

There's no consistent set of laws operating across jurisdictions in Australia that regulate surveillance of the workplace.

In New South Wales and the ACT, an employer can only install monitoring software on a computer they supply for the purposes of work.

With some exceptions, they must also advise employees they're installing the software and explain what is being monitored 14 days prior to the software being installed or activated.

In NSW, the ACT and Victoria, it's an offence to install an optical or listening device in workplace toilets, bathrooms or change rooms.

South Australia, Tasmania, Western Australia, the Northern Territory and Queensland do not currently have specific workplace surveillance laws in place.

Smile, you're at your laptop

Location tracking software may be the cost of WFH, but what about tools that check whether you're smiling into the phone, or monitor the pace and tone of your voice for depression and fatigue?

These are some of the features being rolled out in the latest generation of monitoring software.

Zoom, for instance, recently introduced a tool that provides sales meeting hosts with a post-meeting transcription and "sentiment analysis".

Zoom IQ for Sales offers a breakdown of how the meeting went. (Supplied: Zoom)

Software already on the market trawls email and Slack messages to detect levels of emotion like happiness, anger, disgust, fear or sadness.

The Herbert Smith Freehills 2021 survey found 82 per cent of respondents planned to introduce digital tools to measure employee wellbeing.

A bit under half said they already had processes in place to detect and address wellbeing issues, and these were assisted by technology such as sentiment analysis software.

Often, these technologies are tested in call centres before they're rolled out to other industries, Ms Kelly said.

"Affect monitoring is very controversial and the technology is flawed.

"Some researchers would argue it's simply not possible for AI or any software to truly 'know' what a person is feeling.

"Regardless, there's a market for it and some employers are buying into it."

The movement of the second hand of an analogue wristwatch moves an optical mouse cursor a tiny amount. (Supplied: Reddit)

Back in Melbourne, Mr Franklin remains hopeful that plucky inventors can thwart the spread of bossware.

When companies switched to logging keyboard inputs, someone invented a random keyboard input device.

When managers went a step further and monitored what was happening on employees' screens, a tool appeared that cycled through a prepared list of webpages at regular intervals.

"The sky's the limit when it comes to defeating these systems," he said.

And sometimes the best solutions are low tech.

Recently, an employer found a way to block a worker's mouse jiggler, so he simply taped his mouse to the office fan.

"And it dragged the mouse back and forth.

"Then he went out to lunch."

 

What is Facial Recognition?


Facial recognition is a way of identifying or confirming an individual’s identity using their face. Facial recognition systems can be used to identify people in photos, videos, or in real-time.

Facial recognition is a category of biometric security. Other forms of biometric software include voice recognition, fingerprint recognition, and eye retina or iris recognition. The technology is mostly used for security and law enforcement, though there is increasing interest in other areas of use.

How does facial recognition work?

Many people are familiar with face recognition technology through the Face ID used to unlock iPhones (however, this is only one application of face recognition). Face ID does not rely on a massive database of photos to determine an individual's identity — it simply identifies and recognizes one person as the sole owner of the device, while limiting access to others.

Beyond unlocking phones, facial recognition works by matching the faces of people walking past special cameras to images of people on a watch list. The watch lists can contain pictures of anyone, including people who are not suspected of any wrongdoing, and the images can come from anywhere — even from our social media accounts. Facial technology systems can vary, but in general, they tend to operate as follows:

Step 1: Face detection

The camera detects and locates the image of a face, either alone or in a crowd. The image may show the person looking straight ahead or in profile.

Step 2: Face analysis

Next, an image of the face is captured and analyzed. Most facial recognition technology relies on 2D rather than 3D images because it can more conveniently match a 2D image with public photos or those in a database. The software reads the geometry of your face. Key factors include the distance between your eyes, the depth of your eye sockets, the distance from forehead to chin, the shape of your cheekbones, and the contour of the lips, ears, and chin. The aim is to identify the facial landmarks that are key to distinguishing your face.

Step 3: Converting the image to data

The face capture process transforms analog information (a face) into a set of digital information (data) based on the person's facial features. Your face's analysis is essentially turned into a mathematical formula. The numerical code is called a faceprint. In the same way that thumbprints are unique, each person has their own faceprint.

Step 4: Finding a match

Your faceprint is then compared against a database of other known faces. For example, the FBI has access to up to 650 million photos, drawn from various state databases. On Facebook, any photo tagged with a person’s name becomes a part of Facebook's database, which may also be used for facial recognition. If your faceprint matches an image in a facial recognition database, then a determination is made.
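
As an illustration of steps 3 and 4, the open-source face_recognition Python library computes a 128-dimensional faceprint and compares it against known encodings (the image file names here are hypothetical):

```python
# Compute faceprints and check for a match with the face_recognition library.
import face_recognition

known_image = face_recognition.load_image_file("enrolled_user.jpg")
unknown_image = face_recognition.load_image_file("camera_capture.jpg")

# face_encodings returns one 128-d vector per detected face;
# this assumes each image contains exactly one face.
known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# tolerance ~0.6 is the library's commonly used default threshold.
match = face_recognition.compare_faces([known_encoding], unknown_encoding, tolerance=0.6)
print("match" if match[0] else "no match")
```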

Of all the biometric measurements, facial recognition is considered the most natural. Intuitively, this makes sense, since we typically recognize ourselves and others by looking at faces, rather than thumbprints and irises. It is estimated that over half of the world's population is touched by facial recognition technology regularly.

How facial recognition is used

The technology is used for a variety of purposes. These include:

Unlocking phones

Various phones, including the most recent iPhones, use face recognition to unlock the device. The technology offers a powerful way to protect personal data and ensures that sensitive data remains inaccessible if the phone is stolen. Apple claims that the chance of a random face unlocking your phone is about one in 1 million.

Law enforcement

Facial recognition is regularly being used by law enforcement. According to this NBC report, the technology's use is increasing amongst law enforcement agencies within the US, and the same is true in other countries. Police collect mugshots from arrestees and compare them against local, state, and federal face recognition databases. Once an arrestee's photo has been taken, their picture will be added to databases to be scanned whenever police carry out another criminal search.

Also, mobile face recognition allows officers to use smartphones, tablets, or other portable devices to take a photo of a driver or a pedestrian in the field and immediately compare that photo against one or more face recognition databases to attempt an identification.

Airports and border control

Facial recognition has become a familiar sight at many airports around the world. Increasing numbers of travellers hold biometric passports, which allow them to skip the ordinarily long lines and instead walk through an automated ePassport control to reach the gate faster. Facial recognition not only reduces waiting times but also allows airports to improve security. The US Department of Homeland Security predicts that facial recognition will be used on 97% of travellers by 2023. As well as at airports and border crossings, the technology is used to enhance security at large-scale events such as the Olympics.

[Figure: Applications of face recognition]

Finding missing persons

Facial recognition can be used to find missing persons and victims of human trafficking. Suppose missing individuals are added to a database. In that case, law enforcement can be alerted as soon as they are recognized by face recognition — whether it is in an airport, retail store, or other public space.

Reducing retail crime

Facial recognition is used to identify when known shoplifters, organized retail criminals, or people with a history of fraud enter stores. Photographs of individuals can be matched against large databases of criminals so that loss prevention and retail security professionals can be notified when shoppers who potentially represent a threat enter the store.

Improving retail experiences

The technology offers the potential to improve retail experiences for customers. For example, kiosks in stores could recognize customers, make product suggestions based on their purchase history, and point them in the right direction. “Face pay” technology could allow shoppers to skip long checkout lines with slower payment methods.

Banking

Biometric online banking is another benefit of face recognition. Instead of using one-time passwords, customers can authorize transactions by looking at their smartphone or computer. With facial recognition, there are no passwords for hackers to compromise. If hackers steal your photo database, 'liveness' detection – a technique used to determine whether the source of a biometric sample is a live human being or a fake representation – should (in theory) prevent them from using it for impersonation purposes. Face recognition could make debit cards and signatures a thing of the past.

Marketing and advertising

Marketers have used facial recognition to enhance consumer experiences. For example, frozen pizza brand DiGiorno used facial recognition for a 2017 marketing campaign where it analyzed the expressions of people at DiGiorno-themed parties to gauge people's emotional reactions to pizza. Media companies also use facial recognition to test audience reaction to movie trailers, characters in TV pilots, and optimal placement of TV promotions. Billboards that incorporate face recognition technology – such as London's Piccadilly Circus – mean brands can trigger tailored advertisements.

Healthcare

Hospitals use facial recognition to help with patient care. Healthcare providers are testing the use of facial recognition to access patient records, streamline patient registration, detect emotion and pain in patients, and even help to identify specific genetic diseases. AiCure has developed an app that uses facial recognition to ensure that people take their medication as prescribed. As biometric technology becomes less expensive, adoption within the healthcare sector is expected to increase.

Tracking student or worker attendance

Some educational institutions in China use face recognition to ensure students are not skipping class. Tablets are used to scan students' faces and match them to photos in a database to validate their identities. More broadly, the technology can be used for workers to sign in and out of their workplaces, so that employers can track attendance.

Recognizing drivers

According to this consumer report, car companies are experimenting with facial recognition to replace car keys. The technology would replace the key to access and start the car and remember drivers' preferences for seat and mirror positions and radio station presets.

Monitoring gambling addictions

Facial recognition can help gambling companies protect their customers to a higher degree. Monitoring those entering and moving around gambling areas is difficult for human staff, especially in large crowded spaces such as casinos. Facial recognition technology enables companies to identify those who are registered as gambling addicts and keeps a record of their play so staff can advise when it is time to stop. Casinos can face hefty fines if gamblers on voluntary exclusion lists are caught gambling.

Examples of facial recognition technology

  1. Amazon previously promoted its cloud-based face recognition service named Rekognition to law enforcement agencies. However, in a June 2020 blog post, the company announced it was planning a one-year moratorium on the use of its technology by police. The rationale for this was to allow time for US federal laws to be initiated, to protect human rights and civil liberties.
  2. Apple uses facial recognition to help users quickly unlock their phones, log in to apps, and make purchases.
  3. British Airways enables facial recognition for passengers boarding flights from the US. Travellers' faces can be scanned by a camera to have their identity verified to board their plane without showing their passport or boarding pass. The airline has been using the technology on UK domestic flights from Heathrow and is working towards biometric boarding on international flights from the airport.
  4. Cigna, a US-based healthcare insurer, allows customers in China to file health insurance claims which are signed using a photo, rather than a written signature, in a bid to cut down on instances of fraud.
  5. Coca-Cola has used facial recognition in several ways across the world. Examples include rewarding customers for recycling at some of its vending machines in China, delivering personalized ads on its vending machines in Australia, and for event marketing in Israel.
  6. Facebook began using facial recognition in the US in 2010 when it automatically tagged people in photos using its tag suggestions tool. The tool scans a user's face and offers suggestions about who that person is. Since 2019, Facebook has made the feature opt-in as part of a drive to become more privacy focused. Facebook provides information on how you can opt-in or out of face recognition here.
  7. Google incorporates the technology into Google Photos and uses it to sort pictures and automatically tag them based on the people recognized.
  8. MAC, the make-up brand, uses facial recognition technology in some of its brick-and-mortar stores, allowing customers to virtually "try on" make-up using in-store augmented reality mirrors.
  9. McDonald’s has used facial recognition in its Japanese restaurants to assess the quality of customer service provided there, including analyzing whether its employees are smiling while assisting customers.
  10. Snapchat is one of the pioneers of facial recognition software: it allows brands and organizations to create filters which mould to the user’s face — hence the ubiquitous puppy dog faces and flower crown filters seen on social media.

Technology companies that provide facial recognition technology include:

  • Kairos
  • Noldus
  • Affectiva
  • Sightcorp
  • Nviso

Advantages of face recognition

Aside from unlocking your smartphone, facial recognition brings other benefits:

Increased security

On a governmental level, facial recognition can help to identify terrorists or other criminals. On a personal level, facial recognition can be used as a security tool for locking personal devices and for personal surveillance cameras.

Reduced crime

Face recognition makes it easier to track down burglars, thieves, and trespassers. The mere knowledge of the presence of a face recognition system can serve as a deterrent, especially to petty crime. Aside from physical security, there are benefits to cybersecurity as well. Companies can use face recognition technology as a substitute for passwords to access computers. In theory, the technology cannot be hacked as there is nothing to steal or change, as is the case with a password.

Removing bias from stop and search

Public concern over unjustified stops and searches is a source of controversy for the police — facial recognition technology could improve the process. By singling out suspects among crowds through an automated rather than human process, face recognition technology could help reduce potential bias and decrease stops and searches on law-abiding citizens.

Greater convenience

As the technology becomes more widespread, customers will be able to pay in stores using their face, rather than pulling out their credit cards or cash. This could save time in checkout lines. Since there is no contact required for facial recognition as there is with fingerprinting or other security measures – useful in the post-COVID world – facial recognition offers a quick, automatic, and seamless verification experience.

Faster processing

The process of recognizing a face takes only a second, which has benefits for the companies that use facial recognition. In an era of cyber-attacks and advanced hacking tools, companies need both secure and fast technologies. Facial recognition enables quick and efficient verification of a person’s identity.

Integration with other technologies

Most facial recognition solutions are compatible with most security software and are easily integrated. This limits the amount of additional investment required to implement them.

Disadvantages of face recognition

While some people do not mind being filmed in public and do not object to the use of facial recognition where there is a clear benefit or rationale, the technology can inspire intense reactions from others. Some of the disadvantages or concerns include:

Surveillance

Some worry that the use of facial recognition along with ubiquitous video cameras, artificial intelligence, and data analytics creates the potential for mass surveillance, which could restrict individual freedom. While facial recognition technology allows governments to track down criminals, it could also allow them to track down ordinary and innocent people at any time.

Scope for error

Facial recognition data is not free from error, which could lead to people being implicated for crimes they have not committed. For example, a slight change in camera angle or a change in appearance, such as a new hairstyle, could lead to error. In 2018, Newsweek reported that Amazon’s facial recognition technology had falsely identified 28 members of the US Congress as people arrested for crimes.

Breach of privacy

The question of ethics and privacy is the most contentious one. Governments have been known to store several citizens' pictures without their consent. In 2020, the European Commission said it was considering a ban on facial recognition technology in public spaces for up to five years, to allow time to work out a regulatory framework to prevent privacy and ethical abuses.

Massive data storage

Facial recognition software relies on machine learning technology, which requires massive data sets to “learn” to deliver accurate results. Such large data sets require robust data storage. Small and medium-sized companies may not have sufficient resources to store the required data.

Facial recognition security - how to protect yourself

While biometric data is generally considered one of the most reliable authentication methods, it also carries significant risk. That’s because if someone’s credit card details are hacked, that person has the option to freeze their credit and take steps to change the personal information that was breached. What do you do if you lose your digital ‘face’?

Around the world, biometric information is being captured, stored, and analyzed in increasing quantities, often by organizations and governments, with a mixed record on cybersecurity. A question increasingly being asked is, how safe is the infrastructure that holds and processes all this data?

As facial recognition software is still in its relative infancy, the laws governing this area are evolving (and sometimes non-existent). Regular citizens whose information is compromised have relatively few legal avenues to pursue. Cybercriminals often elude the authorities or are sentenced years after the fact, while their victims receive no compensation and are left to fend for themselves.

As the use of facial recognition becomes more widespread, the scope for hackers to steal your facial data and use it to commit fraud increases.

Biometric technology offers very compelling security solutions. Despite the risks, the systems are convenient and hard to duplicate. These systems will continue to develop in the future — the challenge will be to maximize their benefits while minimizing their risks.
