
How Can Machine Learning Improve Business Decision-making?

Artificial Intelligence and Machine Learning in Development

It sounds like something from a 1980s sci-fi film. The idea of a machine helping to make your business decisions is something straight out of a blockbuster, but the way technology has evolved means that companies embracing Machine Learning for decision-making can actually get the edge over their competition. 

AI & Machine Learning 

Machine Learning is intrinsically linked with AI. It is the capacity of a machine to learn from data and demonstrate intelligence and insight. The role of AI within a business largely depends on what type of business it is and what you are trying to achieve. More and more Machine Learning business apps that automate processes and analyze data appear every day.

Machine Learning Predictive Models & Machine Learning Text Classification 

Predictive modeling is a process that uses data and statistics to predict outcomes with data models. It can study historical data and predict what is going to happen next, or what should happen next, which can be extremely useful in certain industries. The predictions it produces can help tremendously with decision-making.
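To make this concrete, here is a minimal, hedged sketch of a predictive model built with scikit-learn; the column names and figures are invented purely for illustration, not taken from any real business.

```python
# Illustrative sketch: forecast next month's sales from advertising spend and
# this month's sales. All numbers and column names are made-up placeholders.
import pandas as pd
from sklearn.linear_model import LinearRegression

history = pd.DataFrame({
    "ad_spend":         [10, 12, 9, 15, 14, 11, 16, 13],
    "sales_this_month": [200, 220, 190, 260, 250, 210, 270, 240],
    "sales_next_month": [210, 230, 200, 270, 255, 215, 280, 245],
})

X = history[["ad_spend", "sales_this_month"]]
y = history["sales_next_month"]
model = LinearRegression().fit(X, y)

# Predict next month's sales for a planned campaign.
planned = pd.DataFrame({"ad_spend": [18], "sales_this_month": [265]})
print("forecast:", model.predict(planned)[0])
```

In practice the same pattern scales up to richer features and more capable models; the value comes from feeding the model data the business already collects.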


Text classification uses Machine Learning to categorize and route text automatically, making the process quicker and more efficient. This can be put into place for things like chatbots.
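As a rough illustration of how text classification might sit behind a chatbot's routing, the scikit-learn sketch below trains a tiny bag-of-words classifier; the example messages and topic labels are invented for demonstration only.

```python
# Illustrative sketch: route incoming customer messages to a topic using a
# TF-IDF bag-of-words representation and logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "where is my order",
    "the invoice amount is wrong",
    "I want to cancel my subscription",
    "payment failed twice",
]
labels = ["shipping", "billing", "account", "billing"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(messages, labels)

print(clf.predict(["my card was charged but nothing shipped"]))
```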

Process Mining and Machine Learning 

Process mining evaluates business processes and can give you new methods of improving your business, either by making it more efficient or saving money. There are ways that AI Machine Learning can be constantly involved in your process mining, giving you new insights and informing the business decisions you need to make next. 

An example is using KPIs. Process mining can explore data about where processes have gone wrong. For example, it could analyze data from your suppliers to tell you who is more likely to deliver on time, or analyze data from previous sales to see whether you are likely to run out of stock. Key performance indicators are crucial for giving each process a numeric value, from which process mining can be carried out much more effectively.
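As a hedged sketch of the KPI idea, the pandas snippet below computes a simple on-time-delivery rate per supplier; the suppliers, dates, and column names are hypothetical.

```python
# Illustrative sketch: an on-time-delivery KPI per supplier.
import pandas as pd

deliveries = pd.DataFrame({
    "supplier":  ["A", "A", "B", "B", "B", "C"],
    "promised":  pd.to_datetime(["2023-01-05"] * 6),
    "delivered": pd.to_datetime(["2023-01-04", "2023-01-07", "2023-01-05",
                                 "2023-01-05", "2023-01-10", "2023-01-06"]),
})

deliveries["on_time"] = deliveries["delivered"] <= deliveries["promised"]
kpi = deliveries.groupby("supplier")["on_time"].mean().rename("on_time_rate")
print(kpi)  # higher is better; low values flag suppliers likely to deliver late
```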

Almost every business can benefit from becoming more efficient in one way or another, and process mining could be the first port of call. 

Artificial Intelligence and Machine Learning for Decision-making

AI can be put into practice for decision-making about almost any aspect of your business. For example, you can use it to analyze data on the money you are spending, staff responsibilities, even employee happiness. If you can feed it data, then AI can show you new insights.


Decision-making process – The pros & cons of AI 

The pros of including AI in your decision making are clear. Having these new insights can help you to spot new areas of improvement and make vast enhancements in the way you conduct your business. AI can often see things that other data analysts would not. It can also tick away in the background, so you don’t have to pay consultants to work with the data if a computer is interpreting it. 

AI decision-making speeds up the process. AI operates at incredible speeds and can analyze, in a matter of minutes, data that would take humans years to work through. Big corporations should take advantage of this when they are looking to make their business more efficient and their processes more intelligent.

The cons are that there are still some shortcomings. The human touch is sometimes still needed. For instance, judging the potential of a new staff member needs human input: statistics might suggest they need to go, and AI might back this up, but you may still see potential in them.

AI doesn't do creative thinking or come up with ideas, so this will still fall to employees and business leaders.

How Machine Learning can be applied to business processes 

Almost all business processes can be streamlined in some way, and AI may show exactly how to do this. AI can also be put to practical uses relevant to your business.

How Machine Learning can determine a pricing strategy

Machine learning might also be used to dictate pricing. An algorithm can learn from consumer information and other seller data to help you to price goods and services in a way that is competitive and likely to convert. 
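One way this can work, shown here only as an illustrative sketch with invented sales history, is to fit a simple demand curve to past price and volume pairs and then scan candidate prices for the revenue-maximizing point.

```python
# Illustrative sketch: learn a linear demand curve from past (price, units sold)
# pairs, then pick the candidate price that maximizes predicted revenue.
import numpy as np
from sklearn.linear_model import LinearRegression

prices = np.array([[9.0], [10.0], [11.0], [12.0], [13.0]])  # past prices
units = np.array([520, 480, 430, 360, 300])                 # units sold at each price

demand = LinearRegression().fit(prices, units)

candidates = np.linspace(8, 15, 71).reshape(-1, 1)
revenue = candidates.ravel() * demand.predict(candidates)
print("suggested price:", round(candidates[revenue.argmax()][0], 2))
```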

AI decision-making – Developments for the near future 

From consumer protection to intelligent process automation, there aren’t many ways in which Machine Learning can’t be applied in business. It is very hard to know exactly how it will pan out, but there is little denying that AI is here to stay. Practical uses and an understanding of exactly what your customer is looking for, or how customers and staff behave, will become more intertwined with how business is done. 

The Future – Decision-Making For Your Business With AI 

When it comes to making decisions about a business, data is always going to be vital, but with Machine Learning we have many more ways to use it and to learn more about customers, businesses, and the processes we rely on. Don't worry, the robots aren't taking over like in an '80s sci-fi film, but thanks to Machine Learning we have more tools and functions to use as part of our business strategies than ever before.



Artificial Identity: Disruption and the Right to Persist

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Anthropomorphism, artificial identity, and the fusion of personal and artificial identities have become commonplace concepts in human-computer interaction (HCI) and human-robot interaction (HRI). In this paper, we argue that the design and life cycle of 'smart' technology must account for a further element of HCI/HRI, namely that, beyond issues of combined identity, a much more crucial point is the substantial investment of a user's personality in a piece of technology. We raise the fact that this substantial investment occurs in a dynamic context of continuous alteration of this technology, and thus the important psychological and ethical implications ought to be given a more prominent place in the theory and design of HCI/HRI technology.


Artificial Identity

James DiGiovanna

DOI:10.1093/oso/9780190652951.003.0020

Enhancement and AI create moral dilemmas not envisaged in standard ethical theories. Some of this stems from the increased malleability of personal identity that this technology affords: an artificial being can instantly alter its memory, preferences, and moral character. If a self can, at will, jettison essential identity-giving characteristics, how are we to rely upon, befriend, or judge it? Moral problems will stem from the fact that such beings are para-persons: they meet all the standard requirements of personhood (self-awareness, agency, intentional states, second-order desires, etc.) but have an additional ability—the capacity for instant change—that disqualifies them from ordinary personal identity. In order to rescue some responsibility assignments for para-persons, a fine-grained analysis of responsibility-bearing parts of selves and the persistence conditions of these parts is proposed and recommended also for standard persons who undergo extreme change.


Nvidia’s Next GPU Shows That Transformers Are Transforming AI

The neural network behind big language processors is creeping into other corners of AI

Transformers, the type of neural network behind OpenAI’s GPT-3 and other big natural-language processors, are quickly becoming some of the most important in industry, and they are likely to spread to other—perhaps all—areas of AI. Nvidia’s new Hopper H100 is proof that the leading maker of chips for accelerating AI is a believer. Among the many architectural changes that distinguish the H100 from its predecessor, the A100, is a “transformer engine.” Not a distinct part of the new hardware exactly, it’s a way of dynamically changing the precision of the calculations in the cores to speed up the training of transformer neural networks.

“One of the big trends in AI is the emergence of transformers,” says Dave Salvator, senior product manager for AI inference and cloud at Nvidia. Transformers quickly took over language AI, because their networks pay “attention” to multiple sentences, enabling them to grasp context and antecedents. (The T in the benchmark language model BERT stands for “transformer” as it does in the occasionally insulting GPT-3.)


But more recently, researchers have been seeing an advantage to applying that same sense of attention to vision and other models dominated by convolutional neural networks. Salvator notes that more than two-thirds of papers about neural networks in the last two years dealt with transformers or their derivatives. “The number of challenges transformers can take on continues to grow,” he says.

However, transformers are among the biggest neural-network models in terms of the number of parameters involved. And they are growing much faster than other models. “We are trending very quickly toward trillion-parameter models,” says Salvator. Nvidia’s analysis shows the training needs of transformer models growing 275-fold every two years, while the trend for all other models is 8-fold growth every two years. Bigger models need more computational resources especially for training, but also for operating in real time as they often need to do. Nvidia developed the transformer engine to help keep up.

[Figure: The computational needs of transformers are growing more rapidly than those of other forms of AI, though those are growing fast too. Source: Nvidia]

The transformer engine is really software combined with new hardware capabilities in Hopper’s tensor cores. These are the units dedicated to carrying out machine learning’s bread-and-butter calculation—matrix multiply and accumulate. Hopper has tensor cores capable of computing with floating-point numbers of a variety of precision—from 64-bit down to 8-bit. The A100’s cores were designed for floating-point numbers only as short as 16 bits. But the trend in AI computing has been toward developing neural nets that lean on the lowest precision that will still yield an accurate result. The smaller formats compute faster and more efficiently, and they require less memory and memory bandwidth. The addition of 8-bit floating-point units in the H100 leads to a significant speedup—double the throughput compared to its 16-bit units.

The transformer engine’s secret sauce is its ability to dynamically choose what precision is needed for each layer in the neural network at each step in training a neural network. The least-precise units, the 8-bit floating point, can speed through their computations, but then produce 16-bit or 32-bit sums for the next layer if that’s the precision needed there. The Hopper goes a step further, though. Its 8-bit floating-point units can do their matrix math with either of two forms of 8-bit numbers.

To understand why that's helpful, you might need a quick lesson in the structure of floating-point numbers. This format represents numbers using some of the bits for the exponent, some for the mantissa, and one for the sign. The more bits you have representing the exponent, the greater the range of numbers you can express. The more bits in the mantissa, the greater the precision of those numbers. The standard 16-bit floating-point format (IEEE 754-2008) demands 5 bits of exponent and 10 bits of mantissa, along with the sign bit. Seeking to reduce data-storage requirements and speed machine learning, makers of AI accelerators recently adopted bfloat16, which trades three bits of mantissa for three additional bits of exponent, giving it the same range as a 32-bit number.

Nvidia has taken that trade-off further. “One of the unique things we found when you get to [8-bit] is that there really isn’t a one size fits all format that we were confident would work,” says Jonah Alben, Nvidia’s senior vice president of GPU engineering. So Hopper’s 8-bit units can work with either 5 bits of exponent and two of mantissa (E5M2) when range is important or 4 bits of exponent and three of mantissa (E4M3) when precision is key. The transformer engine orchestrates what’s needed on the fly to speed training. We “embody our experience testing transformers into this so that it knows how to make the right decisions,” says Alben.
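For intuition, the short Python sketch below estimates the largest finite value and the step size near 1.0 implied by each exponent/mantissa split, using a simplified IEEE-style convention in which the top exponent code is reserved for infinities and NaNs. Real 8-bit encodings differ in the details (Nvidia's E4M3, for example, reclaims most of that top exponent code and so reaches 448 rather than 240), so treat the numbers as rough approximations.

```python
# Rough, simplified estimate of range and precision for a given bit split.
def fp_stats(exp_bits, man_bits):
    bias = 2 ** (exp_bits - 1) - 1
    max_exp = (2 ** exp_bits - 2) - bias      # top exponent code treated as reserved
    max_val = (2 - 2.0 ** -man_bits) * 2.0 ** max_exp
    step_near_one = 2.0 ** -man_bits          # precision in the interval [1, 2)
    return max_val, step_near_one

for name, e, m in [("fp16 (E5M10)", 5, 10), ("bfloat16 (E8M7)", 8, 7),
                   ("E5M2", 5, 2), ("E4M3", 4, 3)]:
    max_val, step = fp_stats(e, m)
    print(f"{name:16s} max ~ {max_val:.3g}   step near 1.0 ~ {step:.3g}")
```

The output shows the trade-off directly: more exponent bits stretch the range (bfloat16, E5M2), while more mantissa bits shrink the step size (fp16, E4M3).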

In practice, this usually means using different types of floating-point formats for the different parts of a training task. Generally, training a neural network involves exposing it to lots of data (forward inferencing), measuring how bad the network is at doing its task on that data, and then adjusting the network parameters, layer by layer, backwards through the network to improve it (back propagation). Wash, rinse, repeat. Generally, back propagation needs greater precision, so the E4M3 format might be favored there, while the inferencing (forward) step favors the E5M2's range.
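Nvidia exposes FP8 training through its Transformer Engine software; as an analogous and more widely reproducible sketch, the PyTorch snippet below runs the forward pass in a reduced-precision format while keeping gradients and weight updates in 32-bit, which is the same basic pattern the transformer engine applies per layer with FP8. The model and data are toy placeholders.

```python
# Analogous sketch: mixed-precision training in PyTorch. The forward pass runs
# under autocast in bfloat16; backpropagation and the optimizer step stay FP32.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.randn(32, 64), torch.randint(0, 10, (32,))

for step in range(3):
    opt.zero_grad()
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        loss = nn.functional.cross_entropy(model(x), y)  # forward in low precision
    loss.backward()   # gradients accumulate in full precision
    opt.step()
    print(step, loss.item())
```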

Nvidia is not alone in pursuing this approach. At the IEEE/ACM International Symposium on Computer Architecture in 2021, IBM researchers presented an accelerator called RaPiD that used the E5M2/E4M3 scheme for training as well. A system of four such chips delivered training speedups of between 10 and 100 percent, depending on the neural network involved.

Nvidia’s Hopper will be available in the third quarter of 2022.


Framework for the Metaverse

I first wrote about the Metaverse in 2018, and overhauled my thinking in a January 2020 update: The Metaverse: What It Is, Where to Find it, Who Will Build It, and Fortnite. Since then, a lot has happened. COVID-19 forced hundreds of millions into Zoomschool and remote work. Roblox became one of the most popular entertainment experiences in history. Google Trends’ index on the phrase ‘The Metaverse’ set a new ‘100’ in March 2021. Against this baseline, use of the term never exceeded seven from January 2005 through to December 2020. With that in mind, I thought it was time to do an update - one that reflects how my thinking has changed over the past 18 months and addresses the questions I’ve received during this time, such as “Is the Metaverse here?”, “When will it arrive?”, and “What does it need to grow?”. Welcome to the Foreword to ‘THE METAVERSE PRIMER’.

When did the mobile internet era begin? Some would start this history with the very first mobile phones. Others might wait until the commercial deployment of 2G, which was the first digital wireless network. Or the introduction of the Wireless Application Protocol standard, which gave us WAP browsers and thus the ability to access a (rather primitive) version of most websites from nearly any ‘dumbphone’. Or maybe it started with the BlackBerry 6000, or 7000 or 8000 series? At least one of them was the first mainstream mobile device designed for on-the-go data. Most would say it’s the iPhone, which came more than a decade after the first BlackBerry and eight years after WAP, nearly two decades after 2G, 34 years after the first mobile phone call, and has since defined many of the mobile internet era’s visual design principles, economics, and business practices.

In truth, there’s never a flip. We can identify when a specific technology was created, tested, or deployed, but not when an era precisely occurred. This is because technological change requires a lot of technological changes, plural, to all come together. The electricity revolution, for example, was not a single period of steady growth. Instead, it was two separate waves of technological, industrial, and process-related transformations. 

The first wave began around 1881, when Thomas Edison stood up electric power stations in Manhattan and London. Although this was a quick start to the era of electrical power — Edison had created the first working incandescent light bulb only two years earlier, and was only one year into its commercialization — industrial adoption was slow. Some 30 years after Edison's first stations, less than 10% of mechanical drive power in the United States came from electricity (two thirds of which was generated locally, rather than from a grid). But then suddenly, the second wave began. Between 1910 and 1920, electricity's share of mechanical drive power quintupled to over 50%, nearly two thirds of which came from independent electric utilities; by 1929 it stood at 78%.

The difference between the first and second waves is not how much of American industry used electricity, but the extent to which it did — and designed around it.


When plants first adopted electrical power, it was typically used for lighting and/or to replace a plant’s on-premises source of power (usually steam). These plants did not, however, rethink or replace the legacy infrastructure which would carry this power throughout the factory and put it to work. Instead, they continued to use a lumbering network of cogs and gears that were messy and loud and dangerous, difficult to upgrade or change, were either ‘all on’ or ‘all off’ (and therefore required the same amount of power to support a single operating station or the entire plant, and suffered from countless ‘single points of failure’), and struggled to support specialized work.


But eventually, new technologies and understandings gave factories both the reason and ability to be redesigned end-to-end for electricity, from replacing cogs with electric wires, to installing individual stations with bespoke and dedicated electrically-powered motors for functions such as sewing, cutting, pressing, and welding. 

The benefits were wide-ranging. The same plant now had considerably more space, more light, better air, and less life-threatening equipment. What’s more, individual stations could be powered individually (which increased safety, while reducing costs and downtime), and use more specialized equipment (e.g. electric socket wrenches). 


In addition, factories could configure their production areas around the logic of the production process, rather than hulking equipment, and even reconfigure these areas on a regular basis. These two changes meant that far more industries could deploy assembly lines in their plants (which had actually first emerged in the late 1700s), while those that already had such lines could extend them further and more efficiently. In 1913, for example, Henry Ford created the first moving assembly line, which used electricity and conveyor belts to reduce the production time per car from 12.5 hours to 93 minutes, while also using less power. According to historian David Nye, Ford’s famous Highland Park plant was “built on the assumption that electrical light and power should be available everywhere.”

Once a few plants began this transformation, the entire market was forced to catch up, thereby spurring more investment and innovation in electricity-based infrastructure, equipment, and processes. Within a year of its first moving assembly line, Ford was producing more cars than the rest of the industry combined. By its 10 millionth car, it had built more than half of all cars on the road.

This ‘second wave’ of industrial electricity adoption didn’t depend on a single visionary making an evolutionary leap from Thomas Edison’s core work. Nor was it driven just by an increasing number of industrial power stations. Instead, it reflected a critical mass of interconnected innovations, spanning power management, manufacturing hardware, production theory, and more. Some of these innovations fit in the palm of a plant manager’s hand, others needed a room, a few required a city, and they all depended on people and processes. 

To return to Nye, “Henry Ford didn’t first conceive of the assembly line and then delegate its development to his managers. … [The] Highland Park facility brought together managers and engineers who collectively knew most of the manufacturing processes used in the United States … they pooled their ideas and drew on their varied work experiences to create a new method of production.” This process, which happened at national scale, led to the ‘roaring twenties’, which saw the greatest average annual increases in labor and capital productivity in a hundred years.

Powering the Mobile Internet

This is how to think about the mobile internet era. The iPhone feels like the start of the mobile internet because it united and/or distilled all of the things we now think of as ‘the mobile internet’ into a single minimum viable product that we could touch and hold and love. But the mobile internet was created — and driven — by so much more.

In fact, we probably don’t even mean the first iPhone but the second, the iPhone 3G (which saw the largest model-over-model growth of any iPhone, with over 4× the sales). This second iPhone was the first to include 3G, which made the mobile web usable, and operated the iOS App Store, which made wireless networks and smartphones useful. 

But neither 3G nor the App Store were Apple-only innovations or creations. The iPhone accessed 3G networks via chips made by Infineon that connected via standards set by the ITU and GSMA, and which were deployed by wireless providers such as AT&T on top of wireless towers built by tower companies such as Crown Castle and American Tower. The iPhone had “an app for that” because millions of developers built them, just as thousands of different companies built specialized electric motor devices for factories in the 1920s. In addition, these apps were built on a wide variety of standards — from KDE to Java, HTML and Unity — which were established and/or maintained by outside parties (some of whom competed with Apple in key areas). The App Store’s payments worked because of digital payments systems and rails established by the major banks. The iPhone also depended on countless other technologies, from a Samsung CPU (licensed in turn from ARM), to an accelerometer from STMicroelectronics, Gorilla Glass from Corning, and other components from companies like Broadcom, Wolfson, and National Semiconductor. 

All of the above creations and contributions, collectively, enabled the iPhone and started the mobile internet era. They also defined its improvement path. 

Consider the iPhone 12, which was released in 2020. There was no amount of money Apple could have spent to release the iPhone 12 as its second model in 2008. Even if Apple could have devised a 5G network chip back then, there would have been no 5G networks for it to use, nor 5G wireless standards through which to communicate to these networks, and no apps that took advantage of its low latency or bandwidth. And even if Apple had made its own ARM-like GPU back in 2008 (more than a decade before ARM itself), game developers (which generate more than two thirds of App Store revenues) would have lacked the game-engine technologies required to take advantage of its superpowered capabilities. 

Getting to the iPhone 12 required ecosystem-wide innovation and investments, most of which sat outside Apple’s purview (even though Apple’s lucrative iOS platform was the core driver of these advancements). The business case for Verizon’s 4G networks and American Tower Corporation’s wireless tower buildouts depended on the consumer and business demand for faster and better wireless for apps such as Spotify, Netflix and Snapchat. Without them, 4G’s ‘killer app’ would have been… slightly faster email. Better GPUs, meanwhile, were utilized by better games, and better cameras were made relevant by photo-sharing services such as Instagram. And this better hardware powered greater engagement, which drove greater growth and profits for these companies, thereby driving better products, apps, and services. Accordingly, we should think of the overall market as driving itself, just as the adoption of electrical grids led to innovation in small electric-powered industrial motors that in turn drove demand for the grid itself.

We must also consider the role of changing user capability. The first iPhone could have skipped the home button altogether, rather than Apple waiting until the tenth model to drop it. This would have opened up more room inside the device itself for higher-quality hardware or bigger batteries. But the home button was an important training exercise for what was a vastly more complex and capable mobile phone than consumers were used to. Like closing a clamshell phone, it was a safe, easy, and tactile way to 'restart' the iPhone if a user was confused or tapped the wrong app. It took a decade for consumers to be ready to go without a dedicated home button. This idea is critical. As time passes, consumers become increasingly familiar with advanced technology, and therefore better able to adopt further advances - some of which might have long been possible!

And just as consumers shift to new mindsets, so too does industry. Over the past 20 years, nearly every industry has hired, restructured, and re-oriented itself around mobile workflows, products, or business lines. This transformation is as significant as any hardware or software innovation — and, in turn, creates the business case for subsequent innovations.

Defining the Metaverse

This essay is the foreword to my nine-part and 33,000-word primer on the Metaverse, a term I’ve not yet mentioned, let alone described.

Before doing so, it was important for me to provide the context and evolutionary path of technologies such as ‘electricity’ and the ‘mobile internet’. Hopefully it provided a few lessons. First, the proliferation of these technologies fundamentally changed human culture, from where we lived to how we worked, what we made, what we bought, how, and from who. Second, these ‘revolutions’ or ‘transformations’ really depended on a bundle of many different, secondary innovations and inventions that built upon and drove one another. Third, even the most detailed understanding of these newly-emergent technologies didn’t make clear which specific, secondary innovations and inventions they required in order to achieve mass adoption and change the world. And how they would change the world was almost entirely unknowable.


In other words, we should not expect a single, all-illuminating definition of the ‘Metaverse’. Especially not at a time in which the Metaverse has only just begun to emerge. Technologically driven transformation is too organic and unpredictable of a process. Furthermore, it’s this very messiness that enables and results in such large-scale disruption. 

My goal therefore is to explain what makes the Metaverse so significant – i.e. deserving of the comparisons I offered above – and offer ways to understand how it might work and develop.

The Metaverse is best understood as ‘a quasi-successor state to the mobile internet’. This is because the Metaverse will not fundamentally replace the internet, but instead build upon and iteratively transform it. The best analogy here is the mobile internet, a ‘quasi-successor state’ to the internet established from the 1960s through the 1990s. Even though the mobile internet did not change the underlying architecture of the internet – and in fact, the vast majority of internet traffic today, including data sent to mobile devices, is still transmitted through and managed by fixed infrastructure – we still recognize it as iteratively different. This is because the mobile internet has led to changes in how we access the internet, where, when and why, as well as the devices we use, the companies we patron, the products and services we buy, the technologies we use, our culture, our business model, and our politics. 

The Metaverse will be similarly transformative as it too advances and alters the role of computers and the internet in our lives.

The fixed-line internet of the 1990s and early 2000s inspired many of us to purchase our own personal computer. However, this device was largely isolated to our office, living room or bedroom. As a result, we had only occasional access to and usage of computing resources and an internet connection. The mobile internet led most humans globally to purchase their own personal computer and internet service, which meant almost everyone had continuous access to both compute and connectivity.

The Metaverse iterates further by placing everyone inside an 'embodied', 'virtual', or '3D' version of the internet, on a nearly unending basis. In other words, we will constantly be 'within' the internet rather than merely have access to it, within the billions of interconnected computers around us rather than occasionally reaching for them, and alongside all other users in real time.

The progression listed above is a helpful way to understand what the Metaverse changes. But it doesn’t explain what it is or what it’s like to experience. To that end, I’ll offer my best swing at a definition:

“The Metaverse is a massively scaled and interoperable network of real-time rendered 3D virtual worlds which can be experienced synchronously and persistently by an effectively unlimited number of users with an individual sense of presence, and with continuity of data, such as identity, history, entitlements, objects, communications, and payments.”

Most commonly, the Metaverse is mis-described as virtual reality. In truth, virtual reality is merely a way to experience the Metaverse. To say VR is the Metaverse is like saying the mobile internet is an app. Note, too, that hundreds of millions are already participating in virtual worlds on a daily basis (and spending tens of billions of hours a month inside them) without VR/AR/MR/XR devices. As a corollary to the above, VR headsets aren’t the Metaverse any more than smartphones are the mobile internet.

Sometimes the Metaverse is described as a user-generated virtual world or virtual world platform. This is like saying the internet is Facebook or Geocities. Facebook is a UGC-focused social network on the internet, while Geocities made it easy to create webpages that lived on the internet. UGC experiences are just one of many experiences on the internet.

Furthermore, the Metaverse doesn't mean a video game. Video games are purpose-specific (even when the purpose is broad, like 'fun'), unintegrated (i.e. Call of Duty is isolated from fellow portfolio title Overwatch), temporary (i.e. each game world 'resets' after a match) and capped in participants (e.g. 1MM concurrent Fortnite users are split across over 100,000 separate simulations). Yes, we will play games in the Metaverse, and those games may have user caps and resets, but those are games in the Metaverse, not the Metaverse itself. Overall, the Metaverse will significantly broaden the number of virtual experiences used in everyday life (i.e. well beyond video games, which have existed for decades) and, in turn, expand the number of people who participate in them.

Lastly, the Metaverse isn't tools like Unreal or Unity or WebXR or WebGPU. This is like saying the internet is TCP/IP, HTTP, or a web browser. These are protocols upon which the internet depends, and the software used to render it.

The Metaverse, like the internet, mobile internet, and process of electrification, is a network of interconnected experiences and applications, devices and products, tools and infrastructure. This is why we don’t even say that horizontally and vertically integrated giants such as Facebook, Google or Apple are an internet. Instead, they are destinations and ecosystems on or in the internet, or which provide access to and services for the internet. And of course, nearly all of the internet would exist without them.

The Metaverse Emerges

As I've written before, the full vision of the Metaverse is decades away. It requires extraordinary technical advancements (we are far from being able to produce shared, persistent simulations in which millions of users are synchronized in real time), and perhaps regulatory involvement too. In addition, it will require overhauls in business policies, and changes to consumer behavior.

But the term has become so recently popular because we can feel it beginning. This is one of the reasons why Fortnite and Roblox are so commonly conflated with the Metaverse. Just as the iPhone feels like the mobile internet because the device embodied the many innovations which enabled the mobile internet to go mainstream, these ‘games’ bring together many different technologies and trends to produce an experience which is simultaneously tangible and feels different from everything that came before. But they do not constitute the Metaverse.


Personally, I’m tracking the emergence of the Metaverse around eight core categories, which can be thought of as a stack (click each header for a dedicated essay).

  1. Hardware: The sale and support of physical technologies and devices used to access, interact with, or develop the Metaverse. This includes, but is not limited to, consumer-facing hardware (such as VR headsets, mobile phones, and haptic gloves) as well as enterprise hardware (such as that used to operate or create virtual or AR-based environments, e.g. industrial cameras, projection and tracking systems, and scanning sensors). This category does not include compute-specific hardware, such as GPU chips and servers, nor networking-specific hardware, such as fiber optic cabling or wireless chipsets.
  2. Networking: The provisioning of persistent, real-time connections, high bandwidth, and decentralized data transmission by backbone providers, the networks, exchange centers, and services that route amongst them, as well as those managing ‘last mile’ data to consumers. 
  3. Compute: The enablement and supply of computing power to support the Metaverse, supporting such diverse and demanding functions as physics calculation, rendering, data reconciliation and synchronization, artificial intelligence, projection, motion capture and translation.
  4. Virtual Platforms: The development and operation of immersive digital and often three-dimensional simulations, environments, and worlds wherein users and businesses can explore, create, socialize, and participate in a wide variety of experiences (e.g. race a car, paint a painting, attend a class, listen to music), and engage in economic activity. These businesses are differentiated from traditional online experiences and multiplayer video games by the existence of a large ecosystem of developers and content creators which generate the majority of content on and/or collect the majority of revenues built on top of the underlying platform.
  5. Interchange Tools and Standards: The tools, protocols, formats, services, and engines which serve as actual or de facto standards for interoperability, and enable the creation, operation and ongoing improvements to the Metaverse. These standards support activities such as rendering, physics, and AI, as well as asset formats and their import/export from experience to experience, forward compatibility management and updating, tooling, and authoring activities, and information management.
  6. Payments: The support of digital payment processes, platforms, and operations, which includes fiat on-ramps (a form of digital currency exchange) to pure-play digital currencies and financial services, including cryptocurrencies, such as bitcoin and ether, and other blockchain technologies.
  7. Metaverse Content, Services, and Assets: The design/creation, sale, re-sale, storage, secure protection and financial management of digital assets, such as virtual goods and currencies, as connected to user data and identity. This contains all business and services “built on top of” and/or which “service” the Metaverse, and which are not vertically integrated into a virtual platform by the platform owner, including content which is built specifically for the Metaverse, independent of virtual platforms.
  8. User Behaviors: Observable changes in consumer and business behaviors (including spend and investment, time and attention, decision-making and capability) which are either directly associated with the Metaverse, or otherwise enable it or reflect its principles and philosophy. These behaviors almost always seem like ‘trends’ (or, more pejoratively, ‘fads’) when they initially appear, but later show enduring global social significance. 

(You’ll note ‘crypto’ or ‘blockchain technologies’ are not a category. Rather, they span and/or drive several categories, most notably compute, interchange tools and standards, and payments — potentially others as well.)


Each of these buckets is critical to the development of the Metaverse. In many cases, we have a good sense of how each one needs to develop, or at least where there’s a critical threshold (say, VR resolution and frame rates, or network latency). 

But ultimately, how these many pieces come together and what they produce is the hard, important, and society-altering part of any Metaverse analysis. Just as the electricity revolution was about more than the kilowatt hours produced per square mile in 1900s New York, and the internet about more than HTTP and broadband cabling.

Based on precedent, however, we can guess that the Metaverse will revolutionize nearly every industry and function. From healthcare to payments, consumer products, entertainment, hourly labor, and even sex work. In addition, altogether new industries, marketplaces and resources will be created to enable this future, as will novel types of skills, professions, and certifications. The collective value of these changes will be in the trillions.

This is the Foreword to the nine-part ‘METAVERSE PRIMER’.

Matthew Ball (@ballmatthew) · Jun 29, 2021


Web 3 and the Metaverse Are Not the Same

Web 3 ideas like NFTs are only part of building the next generation of the internet, argues the host of the “Hello Metaverse” podcast.

By Annie Zhang

Of late, the terms "metaverse" and "Web 3" have been used interchangeably. While they both point to a vision of a better, future internet, it's important the two concepts not be conflated or become a source of division around ideologies of how we want to continue building the internet.

The metaverse – which gets its name from the 1992 sci-fi novel "Snow Crash" – is more of a vision than a concrete reality. Many people imagine it to be a 3D immersive world that is synchronous, persistent and unlimited in concurrent users. It is a digitally native place where we will spend the majority of our time to work, learn, play, entertain, etc.

Annie Zhang is the host of the "Hello Metaverse" podcast, where she explores the cultural and societal implications of its developments. She has been building next-generation social products at various consumer companies.

The metaverse feels vague and speculative because it is; it hasn’t really taken form yet. While some technologists want to anchor the vision along the lines of Meta’s Ready Player One-esque keynote presentation, the reality is the metaverse will require everyone’s input and participation to truly take form. It should encompass the confluence of different iterative efforts and technological advancements and have no discrete end.

Web 3, on the other hand, is a far more specific paradigm that provides clear solutions to specific shortcomings of the Web 2 internet. It is a reaction to the walled-garden ecosystems that platforms like Facebook and YouTube created, which left people with their data extracted, their privacy breached and their ability to control the content they create suppressed. Web 3 subverts that model because it directly addresses the issues of ownership and control.

Read more: A Crypto Guide to the Metaverse

Because Web 3 builds on the blockchain, data is open, distributed and collectively owned by peer-to-peer networks. As a result, users own their data, peer-to-peer transactions can bypass middlemen and data lives on the blockchain as a public good that anyone can contribute to and monetize.

We’ve seen incredible new consumer behaviors emerge already from Web 3 initiatives, such as creators being able to sell their content as non-fungible tokens (NFT), play-to-earn games that have helped people make a livelihood playing games and a community-organized investing collective (ConstitutionDAO) mobilizing enough capital to bid for the U.S. Constitution at a Sotheby’s auction.

While Web 3 is a powerful tool to transform how we can manage data, governance and exchange money, the slowness of clearing blockchain transactions limits the settings and use cases in which it makes sense to be applied. Although a purely decentralized model of the internet sounds enticing, there is impracticality to it. So, while it could be argued that Web 3 is a critical building block for the metaverse, it is only one component of a greater sum.

Acknowledging that Web 3 and decentralization are simply a building block for the metaverse opens up opportunities for other types of contributors rather than antagonizing them.

When Meta (formerly Facebook) announced its heavily AR/VR-centric metaverse vision, there was an outcry that Big Tech will dominate the metaverse and therefore force platforms to operate as a closed ecosystem once again.

What people missed is the innovation and focus Meta was pushing for was largely on hardware and a 3D user consumption and input interface that, quite frankly, does not exist today. Facebook is trying to solve the immersion problem, and it’s an important one. Think about it. Many of us have spent the last two years on Zoom and have become worn out. How will we feel about wearing a VR headset all day?

If we anticipate spending more and more time in the virtual world, and enjoying it, we need virtual interfaces that are more immersive, natural and expressive. Meta's developments in AR/VR and motion-sensing technologies do not undermine the work of Web 3 and decentralization. In fact, the best-case scenario is that people start building Web 3 applications within the emergent 3D form factors of AR/VR and holographic projections.

Another sensationalized opinion is that Web 3 will make Web 2 obsolete. Again, it’s hard to imagine such a reality. Despite certain shortcomings of Web 2, there are still many products that operate more effectively without using the blockchain. Platforms like Discord or Twitch help people communicate and broadcast at scale and in real time. Companies like Uber or DoorDash effectively queue up demand and match it with supply.

Like it or not, centralization works. OpenSea, currently the largest NFT marketplace, is fundamentally a centralized marketplace that simply facilitates transactions on the blockchain. Coinbase is another example of a centralized exchange that enables transactions of cryptocurrencies. In both cases, these intermediaries take service fees on transactions just like any other Web 2 marketplace.

While these hybrid products do not align perfectly with the decentralization ideology, they are critical "bridging products" that help drive greater adoption of Web 3 elements by appealing to the mainstream. In a similar way that Snap Stories was a popular teen product but struggled with adoption among older users, Meta's adoption of Stories helped it become a mainstream product for all demographics.

Read CoinDesk’s Culture Week

When new technologies and paradigms emerge, it can often be seen as a revolution. But what we see throughout history is that they tend to build on top of existing foundations from past eras. Email is still a huge part of our day to day lives, and yet it was a protocol invented in the Web 1 era of the internet.

Jon Lai, a GP at investment firm a16z, has a grounded perspective on the development path to the metaverse in this episode of “Hello Metaverse.” “There’s a lot of building yet to be done. The blockchain, play-to-earn, different types of jobs, virtual economics, all of that are like stepping stones [as well as] UGC [user-generated content] platforms and scaling content creation … it won’t be this shining product launch from some company who just says, ‘Hey! We’ve been working on this for 10 years and boom, here’s the metaverse’. It’s going to be the cumulative sum of a bunch of different companies working in completely different spaces on completely different products.”

This is all to say, we need to focus on the interplay between different operating models and how they can work together to create better realities for people, rather than focus on their differences and "choosing a side." While the latest developments in Web 3 and the efforts to bring blockchain use cases into the mainstream are a huge leap forward in making a better internet, they are simply one component and should not crowd out other complementary initiatives.

This article first appeared on December 21, 2021: https://www.coindesk.com/layer2/2021/12/21/web-3-and-the-metaverse-are-not-the-same/