
Employee monitoring software became the new normal during COVID-19. It seems workers are stuck with it

Many employers say they'll keep the surveillance software switched on — even for office workers.


In early 2020, as offices emptied and employees set up laptops on kitchen tables to work from home, the way managers kept tabs on white-collar workers underwent an abrupt change as well.

Bosses used to counting the number of empty desks, or gauging the volume of keyboard clatter, now had to rely on video calls and tiny green "active" icons in workplace chat programs.

In response, many employers splashed out on sophisticated kinds of spyware to claw back some oversight.

"Employee monitoring software" became the new normal, logging keystrokes and mouse movement, capturing screenshots, tracking location, and even activating webcams and microphones.

At the same time, workers were dreaming up creative new ways to evade the software's all-seeing eye.

Now, as workers return to the office, demand for employee tracking "bossware" remains high, its makers say.

Surveys of employers in white-collar industries show that even workers who have returned to the office will be subject to these new tools.

What was introduced in the crisis of the pandemic, as a short-term remedy for lockdowns and working from home (WFH), has quietly become the "new normal" for many Australian workplaces.

A game of cat-and-mouse jiggler

For many workers, the surveillance software came out of nowhere.

The abrupt appearance of spyware in many workplaces can be seen in the sudden popularity of covert devices designed to evade this surveillance.

Before the pandemic, "mouse jigglers" were niche gadgets used by police and security agencies to keep seized computers from logging out and requiring a password to access.

An array of mouse jigglers for sale on eBay. (Supplied: eBay)

Plugged into a laptop's USB port, the jiggler randomly moves the mouse cursor, faking activity when there's no-one there.

When the pandemic hit, sales boomed among WFH employees.

In the last two years, James Franklin, a young Melbourne software engineer, has mailed 5,000 jigglers to customers all over the country — mostly to employees of "large enterprises", he says.

Often, he's had to upgrade the devices to evade an employer's latest methods of detecting and blocking them.

It's been a game of cat-and-mouse jiggler.

"Unbelievable demand is the best way to describe it," he said.

And mouse jigglers aren't the only trick for evading the software.

In July last year, a Californian mum's video about a WFH hack went viral on TikTok.

Leah explained that her computer set her status to "away" whenever her cursor stopped moving for more than a few seconds, so she had placed a small vibrating device under her mouse.

"It's called a mouse mover … so you can go to the bathroom, free from paranoia."

Others picked up the story and shared their tips, from free downloads of mouse-mimicking software to YouTube videos that are intended to play on a phone screen, with an optical mouse resting on top. The movement of the lines in the video makes the cursor move.

"A lot of people have reached out on TikTok," Leah told the ABC.

"There were a lot of people going, 'Oh, my gosh, I can't believe I haven't heard of this before, send me the link.'"

Tracking software sales are up — and staying up

On the other side of the world, in New York, EfficientLab makes and sells Controlio, an employee surveillance program that's widely used in Australia.

It has "hundreds" of Australian clients, said sales manager Moath Galeb.

"At the beginning of the pandemic, there was already a lot of companies looking into monitoring software, but it wasn't such an important feature," he said.

"But the pandemic forced many people to work remotely and the companies started to look into employee monitoring software more seriously."

Managers can track employees' productivity scores on a real-time dashboard. (Supplied: Controlio)

In Australia, as in other countries, the number of Controlio clients has increased "two or three times" with the pandemic.

This increase was to be expected — but what surprised even Mr Galeb was that demand has remained strong in recent months.

"They're getting these insights into how people get their work done," he said.

The most popular features for employers, he said, track employee "active time" to generate a "productivity score".

Managers view these statistics through an online dashboard.

Advocates say this is a way of looking after employees, rather than spying on them.

Bosses can see who is "working too many hours", Mr Galeb said.

"Depending on the data, or the insights that you receive, you get to build this picture of who is doing more and doing less."

Nothing new for blue-collar workers

But those being monitored are likely to see things a little differently. 

Ultimately, how the software is used depends on what power bosses have over their workers.

For the increasing number of people in insecure, casualised work, these tools appear less than benign.

In an August 2020 submission to a NSW parliamentary committee investigating the impact of technological change on the future of work, the United Workers Union featured the story of a call centre worker who had been working remotely during the pandemic.

One day, the employer informed the man that monitoring software had detected his apparent absence for a 45-minute period two weeks earlier.

The submission reads:

Unable to remember exactly what he was doing that particular day, the matter was escalated to senior management who demanded to know exactly where he physically was during this time. This 45-minute break in surveillance caused considerable grief and anxiety for the company. A perceived productivity loss of $27 (the worker's hourly rate) resulted in several meetings involving members of upper management, formal letters of correspondence, and a written warning delivered to the worker.

There were many stories like this one, said Lauren Kelly, who wrote the submission.

"The software is sold as a tool of productivity and efficiency, but really it's about surveillance and control," she said.

"I find it very unlikely it would result in management asking somebody to slow down and do less work."

Ms Kelly, who is now a PhD candidate at RMIT with a focus on workplace technologies including surveillance, says tools for tracking an employee's location and activity are nothing new — what has changed in the past two years is the types of workplaces where they are used.

Before the pandemic, it was more for blue-collar workers. Now, it's for white-collar workers too.

"Once it's in, it's in. It doesn't often get uninstalled," she said.

"The tracking software becomes a ubiquitous part of the infrastructure of management."

The 'quid pro quo' of WFH?

More than half of Australian small-to-medium-sized businesses used software to monitor the activity and productivity of employees working remotely, according to a Capterra survey in November 2020.

That's about on par with the United States.

"There's a tendency in Australia to view these workplace trends as really bad in other places like the United States and China," Ms Kelly said.

"But actually, those trends are already here."

The latest software claims to monitor employee emotions like happiness and sadness. (Supplied: StaffCircle)

In fact, a 2021 survey suggested Australian employers had embraced location-tracking software more warmly than those of any other country.

Every two years, the international law firm Herbert Smith Freehills surveys thousands of its large corporate clients around the world for an ongoing series of reports on the future of work.

In 2021, it found 90 per cent of employers in Australia monitor the location of employees when they work remotely, significantly more than the global average of less than 80 per cent.

Many employers introduced these tools after discovering that, during lockdown, some employees had relocated interstate or even overseas without asking permission or informing their manager, said Natalie Gaspar, an employment lawyer and partner at Herbert Smith Freehills.

"I had clients of mine saying that they didn't realise that their employees were working in India or Pakistan," she said.

"And that's relevant because there [are] different laws that apply in those different jurisdictions about workers compensation laws, safety laws, all those sorts of things."

She said that, anecdotally, many of her "large corporate" clients planned to keep the employee monitoring software tools — even for office workers.

"I think that's here to stay in large parts."

And she said employees, in general, accepted this elevated level of surveillance as "the cost of flexibility".

"It's the quid pro quo for working from home," she said.

Is it legal?

The short answer is yes, but there are complications.

No single, consistent set of laws regulates workplace surveillance across Australia's jurisdictions.

In New South Wales and the ACT, an employer can only install monitoring software on a computer they supply for the purposes of work.

With some exceptions, they must also notify employees at least 14 days before the software is installed or activated, and explain what will be monitored.

In NSW, the ACT and Victoria, it's an offence to install an optical or listening device in workplace toilets, bathrooms or change rooms.

South Australia, Tasmania, Western Australia, the Northern Territory and Queensland do not currently have specific workplace surveillance laws in place.

Smile, you're at your laptop

Location tracking software may be the cost of WFH, but what about tools that check whether you're smiling into the phone, or monitor the pace and tone of your voice for depression and fatigue?

These are some of the features being rolled out in the latest generation of monitoring software.

Zoom, for instance, recently introduced a tool that provides sales meeting hosts with a post-meeting transcription and "sentiment analysis".

Zoom IQ for Sales offers a breakdown of how the meeting went. (Supplied: Zoom)

Software already on the market trawls email and Slack messages to detect levels of emotion like happiness, anger, disgust, fear or sadness.

The Herbert Smith Freehills 2021 survey found 82 per cent of respondents planned to introduce digital tools to measure employee wellbeing.

A bit under half said they already had processes in place to detect and address wellbeing issues, and these were assisted by technology such as sentiment analysis software.
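Vendors keep their models proprietary, but the crudest form of text-based "affect monitoring" can be pictured as a lexicon lookup. A toy sketch, with word lists invented purely for illustration:

    # Toy illustration of lexicon-based "affect monitoring".
    # Commercial tools use proprietary ML models; these word
    # lists are invented for illustration only.
    from collections import Counter

    EMOTION_LEXICON = {
        "happiness": {"great", "thanks", "love", "excited"},
        "anger": {"unacceptable", "furious", "ridiculous"},
        "sadness": {"tired", "sorry", "miss", "exhausted"},
    }

    def emotion_profile(messages):
        counts = Counter()
        for msg in messages:
            for word in msg.lower().split():
                for emotion, vocab in EMOTION_LEXICON.items():
                    if word.strip(".,!?") in vocab:
                        counts[emotion] += 1
        return counts

    slack_log = ["Thanks team, great sprint!",
                 "I'm exhausted, sorry for the delay."]
    print(emotion_profile(slack_log))
    # Counter({'happiness': 2, 'sadness': 2})

Commercial products layer machine-learning classifiers over this basic idea, but the critique quoted below applies either way: a word count or a classifier score is not the same as knowing what a person feels.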

Often, these technologies are tested in call centres before they're rolled out to other industries, Ms Kelly said.

"Affect monitoring is very controversial and the technology is flawed.

"Some researchers would argue it's simply not possible for AI or any software to truly 'know' what a person is feeling.

"Regardless, there's a market for it and some employers are buying into it."

The movement of the second hand of an analogue wristwatch moves an optical mouse cursor a tiny amount. (Supplied: Reddit)

Back in Melbourne, Mr Franklin remains hopeful that plucky inventors can thwart the spread of bossware.

When companies switched to logging keyboard inputs, someone invented a random keyboard input device.

When managers went a step further and monitored what was happening on employees' screens, a tool appeared that cycled through a prepared list of webpages at regular intervals.
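That page-cycling tool needs nothing exotic. A minimal sketch using only Python's standard library (the tool itself wasn't named, and these URLs are placeholders):

    # Minimal sketch of the page-cycling trick described above.
    # The actual tool wasn't named; the URLs are placeholders.
    import itertools
    import time
    import webbrowser

    WORK_PAGES = [
        "https://docs.example.com/quarterly-report",
        "https://mail.example.com/inbox",
    ]

    for url in itertools.cycle(WORK_PAGES):
        webbrowser.open(url)  # put a plausible page on screen
        time.sleep(15 * 60)   # linger before cycling to the next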

"The sky's the limit when it comes to defeating these systems," he said.

And sometimes the best solutions are low tech.

Recently, an employer found a way to block a worker's mouse jiggler, so he simply taped his mouse to the office fan.

"And it dragged the mouse back and forth.

"Then he went out to lunch."

 

Artificial Identity: Disruption and the Right to Persist

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Anthropomorphism, artificial identity, and the fusion of personal and artificial identities have become commonplace concepts in human-computer interaction (HCI) and human-robot interaction (HRI). In this paper, we argue that the design and life cycle of ‘smart’ technology must account for a further element of HCI/HRI: beyond issues of combined identity, a much more crucial point is the substantial investment of a user’s personality in a piece of technology. We note that this substantial investment occurs in a dynamic context of continuous alteration of that technology, and thus its important psychological and ethical implications ought to be given a more prominent place in the theory and design of HCI/HRI technology.


Framework for the Metaverse

I first wrote about the Metaverse in 2018, and overhauled my thinking in a January 2020 update: The Metaverse: What It Is, Where to Find it, Who Will Build It, and Fortnite. Since then, a lot has happened. COVID-19 forced hundreds of millions into Zoomschool and remote work. Roblox became one of the most popular entertainment experiences in history. Google Trends’ index on the phrase ‘The Metaverse’ set a new ‘100’ in March 2021. Against this baseline, use of the term never exceeded seven from January 2005 through to December 2020. With that in mind, I thought it was time to do an update - one that reflects how my thinking has changed over the past 18 months and addresses the questions I’ve received during this time, such as “Is the Metaverse here?”, “When will it arrive?”, and “What does it need to grow?”. Welcome to the Foreword to ‘THE METAVERSE PRIMER’.

When did the mobile internet era begin? Some would start this history with the very first mobile phones. Others might wait until the commercial deployment of 2G, which was the first digital wireless network. Or the introduction of the Wireless Application Protocol standard, which gave us WAP browsers and thus the ability to access a (rather primitive) version of most websites from nearly any ‘dumbphone’. Or maybe it started with the BlackBerry 6000, or 7000 or 8000 series? At least one of them was the first mainstream mobile device designed for on-the-go data. Most would say it’s the iPhone, which came more than a decade after the first BlackBerry and eight years after WAP, nearly two decades after 2G, 34 years after the first mobile phone call, and has since defined many of the mobile internet era’s visual design principles, economics, and business practices.

In truth, there’s never a flip. We can identify when a specific technology was created, tested, or deployed, but not when an era precisely occurred. This is because technological change requires a lot of technological changes, plural, to all come together. The electricity revolution, for example, was not a single period of steady growth. Instead, it was two separate waves of technological, industrial, and process-related transformations. 

The first wave began around 1881, when Thomas Edison stood up electric power stations in Manhattan and London. Although this was a quick start to the era of electrical power — Edison had created the first working incandescent light bulb only two years earlier, and was only one year into its commercialization — industrial adoption was slow. Some 30 years after Edison’s first stations, less than 10% of mechanical drive power in the United States came from electricity (two thirds of which was generated locally, rather than from a grid). But then suddenly, the second wave began. Between 1910 and 1920, electricity’s share of mechanical drive power quintupled to over 50% (nearly two thirds of which came from independent electric utilities. By 1929 it stood at 78%). 

The difference between the first and second waves is not how much of American industry used electricity, but the extent to which it did — and designed around it.


When plants first adopted electrical power, it was typically used for lighting and/or to replace a plant’s on-premises source of power (usually steam). These plants did not, however, rethink or replace the legacy infrastructure which would carry this power throughout the factory and put it to work. Instead, they continued to use a lumbering network of cogs and gears that were messy and loud and dangerous, difficult to upgrade or change, either ‘all on’ or ‘all off’ (and therefore required the same amount of power to support a single operating station or the entire plant, and suffered from countless ‘single points of failure’), and ill-suited to specialized work.


But eventually, new technologies and understandings gave factories both the reason and ability to be redesigned end-to-end for electricity, from replacing cogs with electric wires, to installing individual stations with bespoke and dedicated electrically-powered motors for functions such as sewing, cutting, pressing, and welding. 

The benefits were wide-ranging. The same plant now had considerably more space, more light, better air, and less life-threatening equipment. What’s more, individual stations could be powered individually (which increased safety, while reducing costs and downtime), and use more specialized equipment (e.g. electric socket wrenches). 


In addition, factories could configure their production areas around the logic of the production process, rather than hulking equipment, and even reconfigure these areas on a regular basis. These two changes meant that far more industries could deploy assembly lines in their plants (which had actually first emerged in the late 1700s), while those that already had such lines could extend them further and more efficiently. In 1913, for example, Henry Ford created the first moving assembly line, which used electricity and conveyor belts to reduce the production time per car from 12.5 hours to 93 minutes, while also using less power. According to historian David Nye, Ford’s famous Highland Park plant was “built on the assumption that electrical light and power should be available everywhere.”

Once a few plants began this transformation, the entire market was forced to catch up, thereby spurring more investment and innovation in electricity-based infrastructure, equipment, and processes. Within a year of its first moving assembly line, Ford was producing more cars than the rest of the industry combined. By its 10 millionth car, it had built more than half of all cars on the road.

This ‘second wave’ of industrial electricity adoption didn’t depend on a single visionary making an evolutionary leap from Thomas Edison’s core work. Nor was it driven just by an increasing number of industrial power stations. Instead, it reflected a critical mass of interconnected innovations, spanning power management, manufacturing hardware, production theory, and more. Some of these innovations fit in the palm of a plant manager’s hand, others needed a room, a few required a city, and they all depended on people and processes. 

To return to Nye, “Henry Ford didn’t first conceive of the assembly line and then delegate its development to his managers. … [The] Highland Park facility brought together managers and engineers who collectively knew most of the manufacturing processes used in the United States … they pooled their ideas and drew on their varied work experiences to create a new method of production.” This process, which happened at national scale, led to the ‘roaring twenties’, which saw the greatest average annual increases in labor and capital productivity in a hundred years.

Powering the Mobile Internet

This is how to think about the mobile internet era. The iPhone feels like the start of the mobile internet because it united and/or distilled all of the things we now think of as ‘the mobile internet’ into a single minimum viable product that we could touch and hold and love. But the mobile internet was created — and driven — by so much more.

In fact, we probably don’t even mean the first iPhone but the second, the iPhone 3G (which saw the largest model-over-model growth of any iPhone, with over 4× the sales). This second iPhone was the first to include 3G, which made the mobile web usable, and operated the iOS App Store, which made wireless networks and smartphones useful. 

But neither 3G nor the App Store were Apple-only innovations or creations. The iPhone accessed 3G networks via chips made by Infineon that connected via standards set by the ITU and GSMA, and which were deployed by wireless providers such as AT&T on top of wireless towers built by tower companies such as Crown Castle and American Tower. The iPhone had “an app for that” because millions of developers built them, just as thousands of different companies built specialized electric motor devices for factories in the 1920s. In addition, these apps were built on a wide variety of standards — from KDE to Java, HTML and Unity — which were established and/or maintained by outside parties (some of whom competed with Apple in key areas). The App Store’s payments worked because of digital payments systems and rails established by the major banks. The iPhone also depended on countless other technologies, from a Samsung CPU (licensed in turn from ARM), to an accelerometer from STMicroelectronics, Gorilla Glass from Corning, and other components from companies like Broadcom, Wolfson, and National Semiconductor. 

All of the above creations and contributions, collectively, enabled the iPhone and started the mobile internet era. They also defined its improvement path. 

Consider the iPhone 12, which was released in 2020. There was no amount of money Apple could have spent to release the iPhone 12 as its second model in 2008. Even if Apple could have devised a 5G network chip back then, there would have been no 5G networks for it to use, nor 5G wireless standards through which to communicate to these networks, and no apps that took advantage of its low latency or bandwidth. And even if Apple had made its own ARM-like GPU back in 2008 (more than a decade before ARM itself), game developers (which generate more than two thirds of App Store revenues) would have lacked the game-engine technologies required to take advantage of its superpowered capabilities. 

Getting to the iPhone 12 required ecosystem-wide innovation and investments, most of which sat outside Apple’s purview (even though Apple’s lucrative iOS platform was the core driver of these advancements). The business case for Verizon’s 4G networks and American Tower Corporation’s wireless tower buildouts depended on the consumer and business demand for faster and better wireless for apps such as Spotify, Netflix and Snapchat. Without them, 4G’s ‘killer app’ would have been… slightly faster email. Better GPUs, meanwhile, were utilized by better games, and better cameras were made relevant by photo-sharing services such as Instagram. And this better hardware powered greater engagement, which drove greater growth and profits for these companies, thereby driving better products, apps, and services. Accordingly, we should think of the overall market as driving itself, just as the adoption of electrical grids led to innovation in small electric-powered industrial motors that in turn drove demand for the grid itself.

We must also consider the role of changing user capability. The first iPhone could have skipped the home button altogether, rather than waiting until the tenth. This would have opened up more room inside the device itself for higher-quality hardware or bigger batteries. But the home button was an important training exercise for what was a vastly more complex and capable mobile phone than consumers were used to. Like closing a clamshell phone, it was a safe, easy, and tactile way to ‘restart’ the iPhone if a user was confused or tapped the wrong app. It took a decade for consumers to be able to have no dedicated home button. This idea is critical. As time passes, consumers become increasingly familiar with advanced technology, and therefore better able to adopt further advances - some of which might have long been possible!

And just as consumers shift to new mindsets, so too does industry. Over the past 20 years, nearly every industry has hired, restructured, and re-oriented itself around mobile workflows, products, or business lines. This transformation is as significant as any hardware or software innovation — and, in turn, creates the business case for subsequent innovations.

Defining the Metaverse

This essay is the foreword to my nine-part and 33,000-word primer on the Metaverse, a term I’ve not yet mentioned, let alone described.

Before doing so, it was important for me to provide the context and evolutionary path of technologies such as ‘electricity’ and the ‘mobile internet’. Hopefully it provided a few lessons. First, the proliferation of these technologies fundamentally changed human culture, from where we lived to how we worked, what we made, what we bought, how, and from whom. Second, these ‘revolutions’ or ‘transformations’ really depended on a bundle of many different, secondary innovations and inventions that built upon and drove one another. Third, even the most detailed understanding of these newly-emergent technologies didn’t make clear which specific, secondary innovations and inventions they required in order to achieve mass adoption and change the world. And how they would change the world was almost entirely unknowable.


In other words, we should not expect a single, all-illuminating definition of the ‘Metaverse’. Especially not at a time in which the Metaverse has only just begun to emerge. Technologically driven transformation is too organic and unpredictable of a process. Furthermore, it’s this very messiness that enables and results in such large-scale disruption. 

My goal therefore is to explain what makes the Metaverse so significant – i.e. deserving of the comparisons I offered above – and offer ways to understand how it might work and develop.

The Metaverse is best understood as ‘a quasi-successor state to the mobile internet’. This is because the Metaverse will not fundamentally replace the internet, but instead build upon and iteratively transform it. The best analogy here is the mobile internet, a ‘quasi-successor state’ to the internet established from the 1960s through the 1990s. Even though the mobile internet did not change the underlying architecture of the internet – and in fact, the vast majority of internet traffic today, including data sent to mobile devices, is still transmitted through and managed by fixed infrastructure – we still recognize it as iteratively different. This is because the mobile internet has led to changes in how we access the internet, where, when and why, as well as the devices we use, the companies we patronize, the products and services we buy, the technologies we use, our culture, our business models, and our politics.

The Metaverse will be similarly transformative as it too advances and alters the role of computers and the internet in our lives.

The fixed-line internet of the 1990s and early 2000s inspired many of us to purchase our own personal computer. However, this device was largely isolated to our office, living room or bedroom. As a result, we had only occasional access to and usage of computing resources and an internet connection. The mobile internet led most humans globally to purchase their own personal computer and internet service, which meant almost everyone had continuous access to both compute and connectivity.

The Metaverse iterates further by placing everyone inside an ‘embodied’, ‘virtual’, or ‘3D’ version of the internet, on a nearly unending basis. In other words, we will constantly be ‘within’ the internet rather than having access to it, within the billions of interconnected computers around us rather than occasionally reaching for them, and alongside all other users in real time.

The progression listed above is a helpful way to understand what the Metaverse changes. But it doesn’t explain what it is or what it’s like to experience. To that end, I’ll offer my best swing at a definition:

“The Metaverse is a massively scaled and interoperable network of real-time rendered 3D virtual worlds which can be experienced synchronously and persistently by an effectively unlimited number of users with an individual sense of presence, and with continuity of data, such as identity, history, entitlements, objects, communications, and payments.”

Most commonly, the Metaverse is mis-described as virtual reality. In truth, virtual reality is merely a way to experience the Metaverse. To say VR is the Metaverse is like saying the mobile internet is an app. Note, too, that hundreds of millions are already participating in virtual worlds on a daily basis (and spending tens of billions of hours a month inside them) without VR/AR/MR/XR devices. As a corollary to the above, VR headsets aren’t the Metaverse any more than smartphones are the mobile internet.

Sometimes the Metaverse is described as a user-generated virtual world or virtual world platform. This is like saying the internet is Facebook or Geocities. Facebook is a UGC-focused social network on the internet, while Geocities made it easy to create webpages that lived on the internet. UGC experiences are just one of many experiences on the internet.

Furthermore, the Metaverse doesn’t mean a video game. Video games are purpose-specific (even when the purpose is broad, like ‘fun’), unintegrated (i.e. Call of Duty is isolated from fellow portfolio title Overwatch), temporary (i.e. each game world ‘resets’ after a match) and capped in participants (e.g. 1MM concurrent Fortnite users are in over 100,000 separate simulations). Yes, we will play games in the Metaverse, and those games may have user caps and resets, but those are games in the Metaverse, not the Metaverse itself. Overall, the Metaverse will significantly broaden the number of virtual experiences used in everyday life (i.e. well beyond video games, which have existed for decades) and, in turn, expand the number of people who participate in them.

Lastly, the Metaverse isn’t tools like Unreal or Unity or WebXR or WebGPU. This is like saying the internet is TCP/IP, HTTP, or a web browser. These are protocols upon which the internet depends, and the software used to render it.

The Metaverse, like the internet, mobile internet, and process of electrification, is a network of interconnected experiences and applications, devices and products, tools and infrastructure. This is why we don’t even say that horizontally and vertically integrated giants such as Facebook, Google or Apple are an internet. Instead, they are destinations and ecosystems on or in the internet, or which provide access to and services for the internet. And of course, nearly all of the internet would exist without them.

The Metaverse Emerges

As I’ve written before, the full vision of the Metaverse is decades away. It requires extraordinary technical advancements (we are far from being able to produce shared, persistent simulations that keep millions of users synchronized in real-time), and perhaps regulatory involvement too. In addition, it will require overhauls in business policies, and changes to consumer behavior.

But the term has become so recently popular because we can feel it beginning. This is one of the reasons why Fortnite and Roblox are so commonly conflated with the Metaverse. Just as the iPhone feels like the mobile internet because the device embodied the many innovations which enabled the mobile internet to go mainstream, these ‘games’ bring together many different technologies and trends to produce an experience which is simultaneously tangible and feels different from everything that came before. But they do not constitute the Metaverse.


Personally, I’m tracking the emergence of the Metaverse around eight core categories, which can be thought of as a stack (click each header for a dedicated essay).

  1. Hardware: The sale and support of physical technologies and devices used to access, interact with, or develop the Metaverse. This includes, but is not limited to, consumer-facing hardware (such as VR headsets, mobile phones, and haptic gloves) as well as enterprise hardware (such as those used to operate or create virtual or AR-based environments, e.g. industrial cameras, projection and tracking systems, and scanning sensors). This category does not include compute-specific hardware, such as GPU chips and servers, nor networking-specific hardware, such as fiber optic cabling or wireless chipsets.
  2. Networking: The provisioning of persistent, real-time connections, high bandwidth, and decentralized data transmission by backbone providers, the networks, exchange centers, and services that route amongst them, as well as those managing ‘last mile’ data to consumers. 
  3. Compute: The enablement and supply of computing power to support the Metaverse, supporting such diverse and demanding functions as physics calculation, rendering, data reconciliation and synchronization, artificial intelligence, projection, motion capture and translation.
  4. Virtual Platforms: The development and operation of immersive digital and often three-dimensional simulations, environments, and worlds wherein users and businesses can explore, create, socialize, and participate in a wide variety of experiences (e.g. race a car, paint a painting, attend a class, listen to music), and engage in economic activity. These businesses are differentiated from traditional online experiences and multiplayer video games by the existence of a large ecosystem of developers and content creators which generate the majority of content on and/or collect the majority of revenues built on top of the underlying platform.
  5. Interchange Tools and Standards: The tools, protocols, formats, services, and engines which serve as actual or de facto standards for interoperability, and enable the creation, operation and ongoing improvements to the Metaverse. These standards support activities such as rendering, physics, and AI, as well as asset formats and their import/export from experience to experience, forward compatibility management and updating, tooling, and authoring activities, and information management.
  6. Payments: The support of digital payment processes, platforms, and operations, which includes fiat on-ramps (a form of digital currency exchange) to pure-play digital currencies and financial services, including cryptocurrencies, such as bitcoin and ether, and other blockchain technologies.
  7. Metaverse Content, Services, and Assets: The design/creation, sale, re-sale, storage, secure protection and financial management of digital assets, such as virtual goods and currencies, as connected to user data and identity. This contains all business and services “built on top of” and/or which “service” the Metaverse, and which are not vertically integrated into a virtual platform by the platform owner, including content which is built specifically for the Metaverse, independent of virtual platforms.
  8. User Behaviors: Observable changes in consumer and business behaviors (including spend and investment, time and attention, decision-making and capability) which are either directly associated with the Metaverse, or otherwise enable it or reflect its principles and philosophy. These behaviors almost always seem like ‘trends’ (or, more pejoratively, ‘fads’) when they initially appear, but later show enduring global social significance. 

(You’ll note ‘crypto’ or ‘blockchain technologies’ are not a category. Rather, they span and/or drive several categories, most notably compute, interchange tools and standards, and payments — potentially others as well.)


Each of these buckets is critical to the development of the Metaverse. In many cases, we have a good sense of how each one needs to develop, or at least where there’s a critical threshold (say, VR resolution and frame rates, or network latency). 

But ultimately, how these many pieces come together and what they produce is the hard, important, and society-altering part of any Metaverse analysis. Just as the electricity revolution was about more than the kilowatt hours produced per square mile in 1900s New York, and the internet about more than HTTP and broadband cabling.

Based on precedent, however, we can guess that the Metaverse will revolutionize nearly every industry and function. From healthcare to payments, consumer products, entertainment, hourly labor, and even sex work. In addition, altogether new industries, marketplaces and resources will be created to enable this future, as will novel types of skills, professions, and certifications. The collective value of these changes will be in the trillions.

This is the Foreword to the nine-part ‘METAVERSE PRIMER’.

Matthew Ball (@ballmatthew)

Jun 29, 2021


Decrypted: Caution in the Age of the Quantified Self

Tracking your health and fitness with the help of smartphone apps and wearables is fun and motivating; now auto insurers are offering drivers tracking options that let them prove their safety and save money.


Editor’s Note: For most of us, the wide world of technology is a wormhole of dubious trends with a side of jargon soup. If it’s not a bombardment of startups and tech trends (minimum viable product, Big Data, billion dollar IPO!) then it’s unrelenting feature mongering (Smart Everything! Siri!). What’s a level-headed guy with a few bucks in his pocket supposed to do? We’ve got an answer, and it’s not a ⌘+Option+Esc. Welcome to Decrypted, a new weekly commentary about tech’s place in the real world. We’ll spend some weeks demystifying and others criticizing, but it’ll all be in plain English. So take off your headphones, settle in for something longer than 140 characters and prepare to wise up.

Last month New York Times writer Ron Lieber wrote about his experience allowing State Farm to track his driving as part of its usage-based insurance policy. These types of systems are in their infancy, but they allow drivers to lower their insurance premiums based on safe driving, as determined by data points like acceleration, velocity and g-forces during turns. “For me, it turned driving into a game that could yield real money through safer behavior,” wrote Lieber.
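State Farm hasn’t published how it scores a trip, but usage-based insurance broadly works by counting harsh events in the sensor stream and converting the rate into a premium adjustment. A hypothetical sketch, with thresholds invented for illustration:

    # Hypothetical sketch of usage-based-insurance scoring: count
    # "harsh" events (hard braking, sharp turns) per 100 miles.
    # State Farm's actual model is not public; thresholds invented.
    HARD_BRAKE_G = -0.35   # longitudinal deceleration threshold, in g
    SHARP_TURN_G = 0.40    # lateral acceleration threshold, in g

    def risk_events(samples):
        """samples: list of (longitudinal_g, lateral_g) readings."""
        return sum(1 for lon, lat in samples
                   if lon < HARD_BRAKE_G or abs(lat) > SHARP_TURN_G)

    def discount_percent(samples, miles):
        events_per_100mi = 100 * risk_events(samples) / miles
        # Fewer harsh events earn a bigger discount, capped at 15%.
        return max(0.0, 15.0 - 5.0 * events_per_100mi)

    trip = [(-0.10, 0.05), (-0.40, 0.10), (0.05, 0.45)]
    print(f"{discount_percent(trip, miles=50):.1f}%")  # 2 harsh events -> 0.0%

A real insurer’s model would also weigh speed, mileage and time of day; this sketch captures only the counting idea.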

Driving data is one part of the new, so-called “quantified self”, in which car sensors, home thermostats and omnipresent on-person devices gather objective information to create a viewable, digital portrait of someone’s life. Data collection is quickly becoming popular due to the effectiveness and objectivity of using data to adjust premiums, target advertisements and generally operate a consumer-facing business better. The immediate incentives for early stage adopters like Lieber of any tracking device are clear: understanding and motivation. In the case of auto insurance, this digital portrait has so far led “participants in the program [to] get an average of 10 to 15 percent off their premium”. But for consumers, there are also troubling implications looming concerning how a person’s digital portrait can be used and the security of important data.

The worries aren’t exactly new. At the advent of popular location services such as FourSquare, the excitement users felt seeing where they’d been and their ability to keep tabs on their friends was a draw soon tinged by the risks of oversharing (which can be quickly summed up by the aptly named Please Rob Me, a site that serves to show people how information they post online can be used against them). Currently, in an exact parallel, fitness trackers are gaining popularity because of their ability to help users visualize and track their fitness and compare themselves to their friends, not to mention the motivation inherent in having every one of their steps counted (doubters need to look no further than David Sedaris’s experiences). While the danger in the first instance of being “located” appeals readily to a person’s hard-wired sense of caution, the potential dangers inherent in the recent rise of tracking health data using wearable devices and smartphone apps are more real, and much more insidious.


To see these dangers one can look again to the car insurance example. Currently, according to Lieber, there’s no penalty for dangerous driving for those who opt into the program, only discounts for safe driving. But as more drivers agree to share their data, there will be a built-in relative cost for “private” driving, as rates will remain constant for holdouts. There’s also the potential that premiums for holdouts will rise if these drivers share similar characteristics (age, race, income level, location) with other drivers who have installed sensors into their cars and proven themselves risky drivers.

This same logic could ostensibly be carried over into other, more personal areas. Wearables from companies like Fitbit and Jawbone have become an increasingly popular choice for capturing data among health-conscious consumers. But what those companies do with the data is up to them. In a series on privacy concerns held last spring, the FTC found that the “12 [health] apps tested transmitted information to 76 different third-parties”, including consumer health metrics along with identifying characteristics. These third parties include data brokers, who keep tabs on millions of Americans.

While scrutiny has caused existing companies, including Fitbit, to change their privacy policies, and companies new to the segment to stress privacy-conscious policies — Apple dedicated a new section of their site to privacy considerations after they announced their HealthKit app — there exists no stringent set of laws dictating how these companies can use the data they collect. HIPAA, the Health Insurance Portability and Accountability Act, does not extend beyond medical records to cover seemingly innocuous health data on your smartphone, and while these apps receive some FDA regulation, the agency is mostly policing safety, not privacy.

“Health data stored by patients in apps is typically not protected by federal health privacy laws, although some apps may be covered by state privacy laws, so historically consumers using these apps were protected to the extent the app vendors abided by any promises made in their licensing agreements or privacy policies”, Deven McGraw, a health care attorney for Manatt, Phelps & Phillips, recently told Politico.


And companies have huge incentives to gain, and keep open, access to these data. For insurance companies, the data allows them to more accurately adjust premiums and mitigate financial risks; in the case of advertisers, any available data helps to target adverts and sell products more effectively. In an interview in Forbes, Kelly Barnes, who tracks healthcare for PricewaterhouseCoopers, said she’s “very confident we’re all going to be on insurance marketplaces in the not-too-distant future”, via our digital selves.

Amplifying the danger is the present landscape. To users, the future consequences of data tracking are far off and unknown, while the current benefits make data tracking a very appealing prospect. In the coming years, those who allow their insurance company access to health data and can prove high levels of fitness — and, although it’s still on the horizon, good eating habits via soon-to-be-developed glucose monitoring devices — will see their premiums shrink. Even if you don’t opt to give information to insurance companies, you might soon be convinced otherwise by your employer. The health benefits of “recommended” but not mandatory wristband use have led health-tracking companies to target employers, pitching their ability to monitor employees and motivate them to be healthier, or at the very least, take the stairs occasionally. Healthier employees are cheaper to insure, work harder and take less time off. Insurance companies and employers will now be able to accurately assess risk and better motivate the health of their employees and customers. In short, everyone will be able to do their jobs better — which sounds great, unless you fall on the left side of the health bell curve.

Doubtless, the future will hold online privacy legislation, like the kind President Obama urged in 2012. But the incentives for Big Business to leave the door open to selling and using this data are extraordinary, as indicated by businesses with an interest in data exerting their vast lobbying power to obstruct legislation in Washington. Stymied federal legislation has led states to take privacy law into their own hands, to limited effect.

Currently, available tracking technology is mostly harmless, and usually beneficial and fun. But the trend of big data is only going to expand. The future could see the rise of much more intimate and revealing data, such as genome mapping (some companies already offer genome sequencing services) or wearables that go beyond obvious biometric data to more revealing, and therefore valuable, body chemistry. In these latter cases, data could reveal, and see users penalized for, the likelihood that they’ll develop future conditions (the motivating reason behind the passage of the Genetic Information Nondiscrimination Act in 2008), in addition to exposing their day-to-day habits. But those worries are far off, and for now users who want to continue wearing a Fitbit should do so. They just need to keep in mind that they don’t have to be high profile for their data to be valuable.

BY J. TRAVIS SMITH | SEP 19, 2014 | GEAR PATROL