
European Union limits targeted advertising and content algorithms under new law

The Digital Services Act could reshape the internet.
 

Following a marathon 16-hour negotiation session, the European Union reached an agreement early Saturday to adopt the Digital Services Act. The legislation seeks to impose greater accountability on the world's tech giants by enforcing new obligations companies of all sizes must adhere to once the act becomes law in 2024. Like the Digital Markets Act before it, the DSA could have far-reaching implications, some of which could extend beyond Europe.

While the European Commission has yet to release the final text of the Digital Services Act, it did detail some of its provisions on Saturday. Most notably, the law bans ads that target individuals based on their religion, sexual orientation, ethnicity or political affiliation. Companies also cannot serve targeted ads to minors.

Another part of the law singles out recommendation algorithms. Online platforms like Facebook will need to be transparent about how those systems work to display content to users. They will also need to offer alternative systems "not based on profiling," meaning more platforms would need to offer chronological feeds. Additionally, some of the largest platforms will be required to share "key" data with vetted researchers and NGOs so those groups can provide insights into "how online risks evolve."
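As a rough illustration of the distinction the law draws (with invented post data, and the platform's ranking logic reduced to a single predicted-engagement score), the difference between a profiled feed and a chronological one comes down to the sort key:

```python
# Illustrative sketch only: a profiled, engagement-ranked feed vs. the
# chronological alternative "not based on profiling". Post data is invented.
posts = [
    {"id": 1, "ts": 100, "predicted_engagement": 0.9},
    {"id": 2, "ts": 300, "predicted_engagement": 0.2},
    {"id": 3, "ts": 200, "predicted_engagement": 0.7},
]

# Profiled feed: ordered by what the platform predicts the user will engage with.
ranked = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

# "Not based on profiling": newest first, with no per-user prediction involved.
chronological = sorted(posts, key=lambda p: p["ts"], reverse=True)
```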

"Today's agreement on the Digital Services Act is historic, both in terms of speed and of substance," said European Commission President Ursula von der Leyen. "It will ensure that the online environment remains a safe space, safeguarding freedom of expression and opportunities for digital businesses. It gives practical effect to the principle that what is illegal offline, should be illegal online."

Under the DSA, the EU will have the power to fine tech companies up to six percent of their global turnover for rule violations, with repeat infractions carrying the threat of a ban from the bloc. As The Guardian points out, in the case of a company like Meta, that would translate into a single potential fine of approximately $7 billion.

The DSA differentiates between tech companies of different sizes, with the most scrutiny reserved for platforms that have at least 45 million users in the EU. In that group are companies like Meta and Google. According to a recent report, those two, in addition to Apple, Amazon and Spotify, collectively spent more than €27 million lobbying EU policymakers last year to change the terms of the Digital Services Act and Digital Markets Act. The laws could inspire lawmakers in other countries, including the US, as they look to pass their own antitrust laws.

"We welcome the DSA's goals of making the internet even more safe, transparent and accountable, while ensuring that European users, creators and businesses continue to benefit from the open web," a Google spokesperson told Engadget. "As the law is finalized and implemented, the details will matter. We look forward to working with policymakers to get the remaining technical […]"

Source

The Digital Services Act can be viewed here.


How Can Machine Learning Improve Business Decision-making?

Artificial Intelligence and Machine Learning in Development

The idea of a machine helping to make your business decisions sounds like something from a 1980s sci-fi film. But the way technology has evolved means that companies embracing Machine Learning for decision-making can get a real edge over their competition. 

AI & Machine Learning 

Machine Learning is intrinsically linked with AI. It is the capacity a machine has to learn and demonstrate intelligence and insight. The role of AI within a business largely depends on exactly what type of business it is and what you are trying to achieve. More and more Machine Learning business apps that automate processes and analyze data appear every day. 

Machine Learning Predictive Models & Machine Learning Text Classification 

Predictive modeling is a process that uses data and statistics to predict outcomes with data models. It can study data and predict what is going to happen next, or what should happen next, which can be extremely useful in certain industries. The resulting predictions can help tremendously with decision-making.
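As a minimal sketch of the idea (the monthly sales figures below are invented), a predictive model can be as simple as a least-squares trend line fitted to past data and used to forecast the next period:

```python
# Minimal predictive modeling sketch: fit a straight-line trend to past
# monthly sales (invented numbers) and forecast the next month.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

months = [1, 2, 3, 4, 5, 6]
sales = [100, 108, 117, 123, 131, 140]  # illustrative data

a, b = fit_line(months, sales)
forecast = a * 7 + b  # predicted sales for month 7
```

Real predictive models are usually far richer (many input variables, non-linear methods), but the shape is the same: learn a pattern from historical data, then extrapolate it.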


Text classification uses Machine Learning to categorize and select texts in a smart way, making the process quicker and more efficient. This can be put into place for things like chatbots.
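A toy version of this (the categories and training phrases are invented for illustration) is a tiny Naive Bayes classifier that routes incoming messages to the right team, the kind of step a chatbot might perform:

```python
# Toy text classifier: route messages to "billing" or "support" using
# word frequencies (a minimal Naive Bayes model). Training data is invented.
from collections import Counter, defaultdict
import math

train = [
    ("billing", "invoice payment charge refund"),
    ("billing", "card was charged twice refund please"),
    ("support", "app crashes on login error"),
    ("support", "cannot reset my password error"),
]

# Count word frequencies per category.
word_counts = defaultdict(Counter)
totals = Counter()
for label, text in train:
    words = text.split()
    word_counts[label].update(words)
    totals[label] += len(words)

vocab = {w for _, t in train for w in t.split()}

def classify(text):
    """Pick the category with the highest (smoothed) log-likelihood."""
    best, best_score = None, float("-inf")
    for label in word_counts:
        score = 0.0
        for w in text.split():
            # Laplace smoothing so unseen words don't zero out the score.
            p = (word_counts[label][w] + 1) / (totals[label] + len(vocab))
            score += math.log(p)
        if score > best_score:
            best, best_score = label, score
    return best
```

Production systems would use a proper library and much more training data, but the principle is identical: learn which words signal which category, then score new text against those patterns.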

Process Mining and Machine Learning 

Process mining evaluates business processes and can give you new methods of improving your business, either by making it more efficient or saving money. There are ways that AI Machine Learning can be constantly involved in your process mining, giving you new insights and informing the business decisions you need to make next. 

An example is using KPIs. Process mining can explore data on where processes have gone wrong. For example, it could analyze data from your suppliers to tell you who is more likely to deliver on time, or analyze data from previous sales to see whether or not you are likely to run out of stock. Key performance indicators are crucial for giving a number value, from which process mining can be carried out much more effectively. 
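A minimal sketch of that KPI step (the supplier names and delivery records are invented): turn raw delivery data into an on-time-delivery rate per supplier, the number value that further analysis then works from:

```python
# Sketch: turning raw delivery records into a KPI (on-time delivery rate)
# per supplier. Supplier names and records are invented for illustration.
from collections import defaultdict

deliveries = [
    ("Acme Parts", True), ("Acme Parts", True), ("Acme Parts", False),
    ("Borealis Co", True), ("Borealis Co", False), ("Borealis Co", False),
]

on_time = defaultdict(int)
total = defaultdict(int)
for supplier, delivered_on_time in deliveries:
    total[supplier] += 1
    on_time[supplier] += delivered_on_time  # True counts as 1

kpi = {s: on_time[s] / total[s] for s in total}
# Rank suppliers by reliability, best first.
ranking = sorted(kpi, key=kpi.get, reverse=True)
```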

Almost every business can benefit from becoming more efficient in one way or another, and process mining could be the first port of call. 

Artificial Intelligence and Machine Learning for Decision-making

AI can be put into practice when it comes to decision making, about almost any aspect of your business. For example, you can use it to analyze data on the money you are spending, staff responsibilities, even employee happiness. If you can feed it data then AI can show you new insights. 


Decision-making process – The pros & cons of AI 

The pros of including AI in your decision making are clear. Having these new insights can help you to spot new areas of improvement and make vast enhancements in the way you conduct your business. AI can often see things that other data analysts would not. It can also tick away in the background, so you don’t have to pay consultants to work with the data if a computer is interpreting it. 

AI decision-making speeds up the process. AI can operate at incredible speeds, analyzing in minutes data that humans would take years to work through. Big corporations should utilize this when they are looking to make their business more efficient and even to make their processes more intelligent.

The cons include the fact that there are still some shortcomings. The human touch is sometimes still needed. For instance, judging the potential of a new staff member needs human input. Statistics might tell you that they need to go, and AI might back this up, but you might still see potential in them. 

AI doesn’t do creative thinking or coming up with ideas, so this will still fall on business employees and leaders. 

How Machine Learning can be applied to business processes 

Almost all business processes can be streamlined in some way. It could be that AI shows exactly how to do this. AI can also be put into practical uses relevant to your business. 

How Machine Learning can determine a pricing strategy

Machine learning might also be used to dictate pricing. An algorithm can learn from consumer information and other seller data to help you to price goods and services in a way that is competitive and likely to convert. 
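One hedged sketch of how such a pricing algorithm might work (the observed price/sales pairs are invented): fit a simple demand curve to past data, then choose the price that maximizes predicted revenue:

```python
# Sketch of data-driven pricing: fit a linear demand curve (units sold vs.
# price) to past observations, then pick the price maximizing expected
# revenue = price * predicted_units. Data points are invented.

prices = [8.0, 9.0, 10.0, 11.0, 12.0]
units = [120, 105, 92, 80, 65]  # units sold at each price

n = len(prices)
mx = sum(prices) / n
my = sum(units) / n
slope = sum((p - mx) * (u - my) for p, u in zip(prices, units)) / \
        sum((p - mx) ** 2 for p in prices)
intercept = my - slope * mx

def predicted_revenue(price):
    # Revenue = price * predicted demand at that price.
    return price * (slope * price + intercept)

# Search a grid of candidate prices for the revenue-maximizing one.
candidates = [p / 10 for p in range(50, 201)]  # $5.00 .. $20.00
best_price = max(candidates, key=predicted_revenue)
```

A real system would also learn from competitor prices, seasonality, and customer segments, but the core loop (learn demand, optimize price against it) is the same.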

AI decision-making – Developments for the near future 

From consumer protection to intelligent process automation, there aren’t many ways in which Machine Learning can’t be applied in business. It is very hard to know exactly how it will pan out, but there is little denying that AI is here to stay. Practical uses and an understanding of exactly what your customer is looking for, or how customers and staff behave, will become more intertwined with how business is done. 

The Future – Decision-Making For Your Business With AI 

When it comes to making decisions about a business, data is always going to be vital, but with Machine Learning, we have so many more ways to use it and to find out more about customers, businesses, and the processes we use. Don’t worry, the robots aren’t taking over like an 80s sci-fi film, but thanks to Machine Learning we have more tools and functions to use as part of our business strategies than ever before. 

Source


Okta to pay $6.5B to acquire Seattle’s Auth0; identity tech startup was valued at $1.9B last year

Auth0, the billion-dollar Seattle-area startup that is a leader in identity authentication software, is being acquired by Okta, another leader in the space, the companies announced Wednesday. The all-stock deal is valued at approximately $6.5 billion — one of the largest acquisitions of a Seattle tech company.

Auth0 was co-founded in 2013 by Eugenio Pace, who formerly ran the patterns and practices group at Microsoft, and Matias Woloski, a software engineer who remains the company’s CTO. Both hail from Argentina, and Auth0 has built its more than 850-person team through a distributed approach with workers scattered all over the world.

The startup raised a $120 million round in July at a $1.9 billion valuation, making it a rare Seattle unicorn. The step up from $1.9 billion to $6.5 billion in just eight months is impressive, but not everyone thinks Auth0 should have sold so soon.

Even still, the deal is a huge windfall for the company’s founders and early investors, including Pacific Northwest firms Founders’ Co-op and Portland Seed Fund. And it’s a big payoff in Seattle’s startup scene — nearly tripling the $2.25 billion that EMC paid for Seattle data storage company Isilon in 2010.

“We started Auth0 seven years ago,” Pace said last year at the GeekWire Awards, after Auth0 won honors for Deal of the Year. “Sometimes it feels like seven minutes and sometimes it feels like 70 years. But it’s been a great journey.”

GeekWire heard rumblings about a play for Auth0 a few weeks ago, but we were unable to confirm the news. Forbes, which broke the story today, noted that the deal was slow to close because Auth0 was weighing other options, including an IPO and other possible suitors.

Auth0 will continue operating as an independent business within Okta.

San Francisco-based Okta boasts a market capitalization of $31 billion, with 2,800 employees worldwide. The company’s shares fell more than 13% in after-hours trading.

Okta reported its fourth quarter earnings Wednesday, with revenue up 40% to $234.7 million and net losses growing to $75.8 million, up from $50.4 million.

“Okta and Auth0 have an incredible opportunity to build the identity platform of the future,” Pace said in a news release.

Auth0 co-founders CEO Eugenio Pace, bottom left, and Matias Woloski, bottom right, sign acquisition agreement papers via video chat with Okta co-founders Frederic Kerrest and CEO Todd McKinnon, top right. (Okta Photo)

Auth0 is currently ranked No. 4 on the GeekWire 200, our index of top Pacific Northwest startups. However, as is customary with an acquisition or IPO, Auth0 will now be moved off the list.

“We think it’s a fantastic validation of their ‘developer-first’ approach to enterprise software, and of Seattle’s startup ecosystem more generally,” Founders’ Co-op Managing Partner Chris DeVore told GeekWire. “We’re thrilled for the founders and have already seen the knock-on effects of the entrepreneurial culture they built as two of our most recent investments (Fusebit and Zerowall) were both founded by Auth0 alums.”

Salesforce Ventures led Auth0’s $120 million Series F round in July. The funding followed a $103 million round in May 2019. Total funding to date for the 8-year-old company is more than $330 million.

Other Auth0 investors include DTCP, Bessemer Venture Partners, Sapphire Ventures, Meritech Capital, World Innovation Lab, Trinity Ventures, Telstra Ventures, and K9 Ventures. Early investor and first Auth0 board member Sunil Nagaraj, who at the time of the deal was working for Bessemer, writes about the early days of the startup in this blog post congratulating the founding team on the acquisition.

“You will not find another person on Earth that cares more about understanding someone and communicating something clearly than Auth0 CEO Eugenio Pace,” Nagaraj wrote.

Auth0 co-founders Matias Woloski, left, and Eugenio Pace. (Auth0 Photo)

Auth0 combines existing login and identity verification options into a few lines of code that developers can quickly add to their applications. Its platform includes services like single sign-on, two-factor authentication, password-free login capabilities, and the ability to detect password breaches.

The pandemic has put a spotlight on security tech companies with accelerated adoption of digital services. Pace told GeekWire last year that demand for Auth0’s services was “massive” as companies connect more and more with customers in the cloud.

Auth0 processes more than 4.5 billion login transactions per month.

“I’m thrilled by the choice, flexibility, and value we’ll offer customers: Okta and Auth0 address a broad set of identity use cases, and our identity platforms are robust and extensible enough to serve the world’s largest organizations and most innovative developers,” Todd McKinnon, CEO and co-founder of Okta, wrote in a blog post.


Framework for the Metaverse

I first wrote about the Metaverse in 2018, and overhauled my thinking in a January 2020 update: The Metaverse: What It Is, Where to Find it, Who Will Build It, and Fortnite. Since then, a lot has happened. COVID-19 forced hundreds of millions into Zoomschool and remote work. Roblox became one of the most popular entertainment experiences in history. Google Trends’ index on the phrase ‘The Metaverse’ set a new ‘100’ in March 2021. Against this baseline, use of the term never exceeded seven from January 2005 through to December 2020. With that in mind, I thought it was time to do an update - one that reflects how my thinking has changed over the past 18 months and addresses the questions I’ve received during this time, such as “Is the Metaverse here?”, “When will it arrive?”, and “What does it need to grow?”. Welcome to the Foreword to ‘THE METAVERSE PRIMER’.

When did the mobile internet era begin? Some would start this history with the very first mobile phones. Others might wait until the commercial deployment of 2G, which was the first digital wireless network. Or the introduction of the Wireless Application Protocol standard, which gave us WAP browsers and thus the ability to access a (rather primitive) version of most websites from nearly any ‘dumbphone’. Or maybe it started with the BlackBerry 6000, or 7000 or 8000 series? At least one of them was the first mainstream mobile device designed for on-the-go data. Most would say it’s the iPhone, which came more than a decade after the first BlackBerry and eight years after WAP, nearly two decades after 2G, 34 years after the first mobile phone call, and has since defined many of the mobile internet era’s visual design principles, economics, and business practices.

In truth, there’s never a flip. We can identify when a specific technology was created, tested, or deployed, but not when an era precisely occurred. This is because technological change requires a lot of technological changes, plural, to all come together. The electricity revolution, for example, was not a single period of steady growth. Instead, it was two separate waves of technological, industrial, and process-related transformations. 

The first wave began around 1881, when Thomas Edison stood up electric power stations in Manhattan and London. Although this was a quick start to the era of electrical power — Edison had created the first working incandescent light bulb only two years earlier, and was only one year into its commercialization — industrial adoption was slow. Some 30 years after Edison’s first stations, less than 10% of mechanical drive power in the United States came from electricity (two thirds of which was generated locally, rather than from a grid). But then suddenly, the second wave began. Between 1910 and 1920, electricity’s share of mechanical drive power quintupled to over 50% (nearly two thirds of which came from independent electric utilities). By 1929, it stood at 78%. 

The difference between the first and second waves is not how much of American industry used electricity, but the extent to which it did — and designed around it.


When plants first adopted electrical power, it was typically used for lighting and/or to replace a plant’s on-premises source of power (usually steam). These plants did not, however, rethink or replace the legacy infrastructure which would carry this power throughout the factory and put it to work. Instead, they continued to use a lumbering network of cogs and gears that were messy and loud and dangerous, difficult to upgrade or change, were either ‘all on’ or ‘all off’ (and therefore required the same amount of power to support a single operating station or the entire plant, and suffered from countless ‘single points of failure’), and struggled to support specialized work.


But eventually, new technologies and understandings gave factories both the reason and ability to be redesigned end-to-end for electricity, from replacing cogs with electric wires, to installing individual stations with bespoke and dedicated electrically-powered motors for functions such as sewing, cutting, pressing, and welding. 

The benefits were wide-ranging. The same plant now had considerably more space, more light, better air, and less life-threatening equipment. What’s more, individual stations could be powered individually (which increased safety, while reducing costs and downtime), and use more specialized equipment (e.g. electric socket wrenches). 


In addition, factories could configure their production areas around the logic of the production process, rather than hulking equipment, and even reconfigure these areas on a regular basis. These two changes meant that far more industries could deploy assembly lines in their plants (which had actually first emerged in the late 1700s), while those that already had such lines could extend them further and more efficiently. In 1913, for example, Henry Ford created the first moving assembly line, which used electricity and conveyor belts to reduce the production time per car from 12.5 hours to 93 minutes, while also using less power. According to historian David Nye, Ford’s famous Highland Park plant was “built on the assumption that electrical light and power should be available everywhere.”

Once a few plants began this transformation, the entire market was forced to catch up, thereby spurring more investment and innovation in electricity-based infrastructure, equipment, and processes. Within a year of its first moving assembly line, Ford was producing more cars than the rest of the industry combined. By its 10 millionth car, it had built more than half of all cars on the road.

This ‘second wave’ of industrial electricity adoption didn’t depend on a single visionary making an evolutionary leap from Thomas Edison’s core work. Nor was it driven just by an increasing number of industrial power stations. Instead, it reflected a critical mass of interconnected innovations, spanning power management, manufacturing hardware, production theory, and more. Some of these innovations fit in the palm of a plant manager’s hand, others needed a room, a few required a city, and they all depended on people and processes. 

To return to Nye, “Henry Ford didn’t first conceive of the assembly line and then delegate its development to his managers. … [The] Highland Park facility brought together managers and engineers who collectively knew most of the manufacturing processes used in the United States … they pooled their ideas and drew on their varied work experiences to create a new method of production.” This process, which happened at national scale, led to the ‘roaring twenties’, which saw the greatest average annual increases in labor and capital productivity in a hundred years.

Powering the Mobile Internet

This is how to think about the mobile internet era. The iPhone feels like the start of the mobile internet because it united and/or distilled all of the things we now think of as ‘the mobile internet’ into a single minimum viable product that we could touch and hold and love. But the mobile internet was created — and driven — by so much more.

In fact, we probably don’t even mean the first iPhone but the second, the iPhone 3G (which saw the largest model-over-model growth of any iPhone, with over 4× the sales). This second iPhone was the first to include 3G, which made the mobile web usable, and operated the iOS App Store, which made wireless networks and smartphones useful. 

But neither 3G nor the App Store were Apple-only innovations or creations. The iPhone accessed 3G networks via chips made by Infineon that connected via standards set by the ITU and GSMA, and which were deployed by wireless providers such as AT&T on top of wireless towers built by tower companies such as Crown Castle and American Tower. The iPhone had “an app for that” because millions of developers built them, just as thousands of different companies built specialized electric motor devices for factories in the 1920s. In addition, these apps were built on a wide variety of standards — from KDE to Java, HTML and Unity — which were established and/or maintained by outside parties (some of whom competed with Apple in key areas). The App Store’s payments worked because of digital payments systems and rails established by the major banks. The iPhone also depended on countless other technologies, from a Samsung CPU (licensed in turn from ARM), to an accelerometer from STMicroelectronics, Gorilla Glass from Corning, and other components from companies like Broadcom, Wolfson, and National Semiconductor. 

All of the above creations and contributions, collectively, enabled the iPhone and started the mobile internet era. They also defined its improvement path. 

Consider the iPhone 12, which was released in 2020. There was no amount of money Apple could have spent to release the iPhone 12 as its second model in 2008. Even if Apple could have devised a 5G network chip back then, there would have been no 5G networks for it to use, nor 5G wireless standards through which to communicate to these networks, and no apps that took advantage of its low latency or bandwidth. And even if Apple had made its own ARM-like GPU back in 2008 (more than a decade before ARM itself), game developers (which generate more than two thirds of App Store revenues) would have lacked the game-engine technologies required to take advantage of its superpowered capabilities. 

Getting to the iPhone 12 required ecosystem-wide innovation and investments, most of which sat outside Apple’s purview (even though Apple’s lucrative iOS platform was the core driver of these advancements). The business case for Verizon’s 4G networks and American Tower Corporation’s wireless tower buildouts depended on the consumer and business demand for faster and better wireless for apps such as Spotify, Netflix and Snapchat. Without them, 4G’s ‘killer app’ would have been… slightly faster email. Better GPUs, meanwhile, were utilized by better games, and better cameras were made relevant by photo-sharing services such as Instagram. And this better hardware powered greater engagement, which drove greater growth and profits for these companies, thereby driving better products, apps, and services. Accordingly, we should think of the overall market as driving itself, just as the adoption of electrical grids led to innovation in small electric-powered industrial motors that in turn drove demand for the grid itself.

We must also consider the role of changing user capability. The first iPhone could have skipped the home button altogether, rather than waiting until the tenth. This would have opened up more room inside the device itself for higher-quality hardware or bigger batteries. But the home button was an important training exercise for what was a vastly more complex and capable mobile phone than consumers were used to. Like closing a clamshell phone, it was a safe, easy, and tactile way to ‘restart’ the iPhone if a user was confused or tapped the wrong app. It took a decade for consumers to be able to have no dedicated home button. This idea is critical. As time passes, consumers become increasingly familiar with advanced technology, and therefore better able to adopt further advances - some of which might have long been possible!

And just as consumers shift to new mindsets, so too does industry. Over the past 20 years, nearly every industry has hired, restructured, and re-oriented itself around mobile workflows, products, or business lines. This transformation is as significant as any hardware or software innovation — and, in turn, creates the business case for subsequent innovations.

Defining the Metaverse

This essay is the foreword to my nine-part and 33,000-word primer on the Metaverse, a term I’ve not yet mentioned, let alone described.

Before doing so, it was important for me to provide the context and evolutionary path of technologies such as ‘electricity’ and the ‘mobile internet’. Hopefully it provided a few lessons. First, the proliferation of these technologies fundamentally changed human culture, from where we lived to how we worked, what we made, what we bought, how, and from who. Second, these ‘revolutions’ or ‘transformations’ really depended on a bundle of many different, secondary innovations and inventions that built upon and drove one another. Third, even the most detailed understanding of these newly-emergent technologies didn’t make clear which specific, secondary innovations and inventions they required in order to achieve mass adoption and change the world. And how they would change the world was almost entirely unknowable.


In other words, we should not expect a single, all-illuminating definition of the ‘Metaverse’. Especially not at a time in which the Metaverse has only just begun to emerge. Technologically driven transformation is too organic and unpredictable of a process. Furthermore, it’s this very messiness that enables and results in such large-scale disruption. 

My goal therefore is to explain what makes the Metaverse so significant – i.e. deserving of the comparisons I offered above – and offer ways to understand how it might work and develop.

The Metaverse is best understood as ‘a quasi-successor state to the mobile internet’. This is because the Metaverse will not fundamentally replace the internet, but instead build upon and iteratively transform it. The best analogy here is the mobile internet, a ‘quasi-successor state’ to the internet established from the 1960s through the 1990s. Even though the mobile internet did not change the underlying architecture of the internet – and in fact, the vast majority of internet traffic today, including data sent to mobile devices, is still transmitted through and managed by fixed infrastructure – we still recognize it as iteratively different. This is because the mobile internet has led to changes in how we access the internet, where, when and why, as well as the devices we use, the companies we patronize, the products and services we buy, the technologies we use, our culture, our business models, and our politics. 

The Metaverse will be similarly transformative as it too advances and alters the role of computers and the internet in our lives.

The fixed-line internet of the 1990s and early 2000s inspired many of us to purchase our own personal computer. However, this device was largely isolated to our office, living room or bedroom. As a result, we had only occasional access to and usage of computing resources and an internet connection. The mobile internet led most humans globally to purchase their own personal computer and internet service, which meant almost everyone had continuous access to both compute and connectivity.

The Metaverse iterates further by placing everyone inside an ‘embodied’, or ‘virtual’, or ‘3D’ version of the internet, on a nearly unending basis. In other words, we will constantly be ‘within’ the internet rather than having access to it, within the billions of interconnected computers around us rather than occasionally reaching for them, and alongside all other users in real time.

The progression listed above is a helpful way to understand what the Metaverse changes. But it doesn’t explain what it is or what it’s like to experience. To that end, I’ll offer my best swing at a definition:

“The Metaverse is a massively scaled and interoperable network of real-time rendered 3D virtual worlds which can be experienced synchronously and persistently by an effectively unlimited number of users with an individual sense of presence, and with continuity of data, such as identity, history, entitlements, objects, communications, and payments.”

Most commonly, the Metaverse is mis-described as virtual reality. In truth, virtual reality is merely a way to experience the Metaverse. To say VR is the Metaverse is like saying the mobile internet is an app. Note, too, that hundreds of millions are already participating in virtual worlds on a daily basis (and spending tens of billions of hours a month inside them) without VR/AR/MR/XR devices. As a corollary to the above, VR headsets aren’t the Metaverse any more than smartphones are the mobile internet.

Sometimes the Metaverse is described as a user-generated virtual world or virtual world platform. This is like saying the internet is Facebook or Geocities. Facebook is a UGC-focused social network on the internet, while Geocities made it easy to create webpages that lived on the internet. UGC experiences are just one of many experiences on the internet.

Furthermore, the Metaverse doesn’t mean a video game. Video games are purpose-specific (even when the purpose is broad, like ‘fun’), unintegrated (i.e. Call of Duty is isolated from fellow portfolio title Overwatch), temporary (i.e. each game world ‘resets’ after a match) and capped in participants (e.g. 1MM concurrent Fortnite users are in over 100,000 separate simulations). Yes, we will play games in the Metaverse, and those games may have user caps and resets, but those are games in the Metaverse, not the Metaverse itself. Overall, the Metaverse will significantly broaden the number of virtual experiences used in everyday life (i.e. well beyond video games, which have existed for decades) and, in turn, expand the number of people who participate in them. 

Lastly, the Metaverse isn’t tools like Unreal or Unity or WebXR or WebGPU. This is like saying the internet is TCP/IP, HTTP, or a web browser. These are protocols upon which the internet depends, and the software used to render it.

The Metaverse, like the internet, mobile internet, and process of electrification, is a network of interconnected experiences and applications, devices and products, tools and infrastructure. This is why we don’t even say that horizontally and vertically integrated giants such as Facebook, Google or Apple are an internet. Instead, they are destinations and ecosystems on or in the internet, or which provide access to and services for the internet. And of course, nearly all of the internet would exist without them.

The Metaverse Emerges

As I’ve written before, the full vision of the Metaverse is decades away. It requires extraordinary technical advancements (we are far from being able to produce shared, persistent simulations that keep millions of users synchronized in real-time), and perhaps regulatory involvement too. In addition, it will require overhauls in business policies, and changes to consumer behavior.

But the term has recently become so popular because we can feel it beginning. This is one of the reasons why Fortnite and Roblox are so commonly conflated with the Metaverse. Just as the iPhone feels like the mobile internet because the device embodied the many innovations which enabled the mobile internet to go mainstream, these ‘games’ bring together many different technologies and trends to produce an experience which is simultaneously tangible and feels different from everything that came before. But they do not constitute the Metaverse.


Personally, I’m tracking the emergence of the Metaverse around eight core categories, which can be thought of as a stack (click each header for a dedicated essay).

  1. Hardware: The sale and support of physical technologies and devices used to access, interact with, or develop the Metaverse. This includes, but is not limited to, consumer-facing hardware (such as VR headsets, mobile phones, and haptic gloves) as well as enterprise hardware (such as that used to operate or create virtual or AR-based environments, e.g. industrial cameras, projection and tracking systems, and scanning sensors). This category does not include compute-specific hardware, such as GPU chips and servers, or networking-specific hardware, such as fiber optic cabling or wireless chipsets.
  2. Networking: The provisioning of persistent, real-time connections, high bandwidth, and decentralized data transmission by backbone providers, the networks, exchange centers, and services that route amongst them, as well as those managing ‘last mile’ data to consumers. 
  3. Compute: The enablement and supply of computing power to support the Metaverse, supporting such diverse and demanding functions as physics calculation, rendering, data reconciliation and synchronization, artificial intelligence, projection, motion capture and translation.
  4. Virtual Platforms: The development and operation of immersive digital and often three-dimensional simulations, environments, and worlds wherein users and businesses can explore, create, socialize, and participate in a wide variety of experiences (e.g. race a car, paint a painting, attend a class, listen to music), and engage in economic activity. These businesses are differentiated from traditional online experiences and multiplayer video games by the existence of a large ecosystem of developers and content creators which generate the majority of the content on, and/or collect the majority of the revenues from, the underlying platform.
  5. Interchange Tools and Standards: The tools, protocols, formats, services, and engines which serve as actual or de facto standards for interoperability, and enable the creation, operation and ongoing improvements to the Metaverse. These standards support activities such as rendering, physics, and AI, as well as asset formats and their import/export from experience to experience, forward compatibility management and updating, tooling, and authoring activities, and information management.
  6. Payments: The support of digital payment processes, platforms, and operations, ranging from fiat on-ramps (a form of digital currency exchange) to pure-play digital currencies and financial services, including cryptocurrencies such as bitcoin and ether, and other blockchain technologies.
  7. Metaverse Content, Services, and Assets: The design/creation, sale, re-sale, storage, secure protection and financial management of digital assets, such as virtual goods and currencies, as connected to user data and identity. This contains all business and services “built on top of” and/or which “service” the Metaverse, and which are not vertically integrated into a virtual platform by the platform owner, including content which is built specifically for the Metaverse, independent of virtual platforms.
  8. User Behaviors: Observable changes in consumer and business behaviors (including spend and investment, time and attention, decision-making and capability) which are either directly associated with the Metaverse, or otherwise enable it or reflect its principles and philosophy. These behaviors almost always seem like ‘trends’ (or, more pejoratively, ‘fads’) when they initially appear, but later show enduring global social significance. 

(You’ll note ‘crypto’ or ‘blockchain technologies’ are not a category. Rather, they span and/or drive several categories, most notably compute, interchange tools and standards, and payments — potentially others as well.)


Each of these buckets is critical to the development of the Metaverse. In many cases, we have a good sense of how each one needs to develop, or at least where there’s a critical threshold (say, VR resolution and frame rates, or network latency). 

But ultimately, how these many pieces come together, and what they produce, is the hard, important, and society-altering part of any Metaverse analysis, just as the electricity revolution was about more than the kilowatt-hours produced per square mile in 1900s New York, and the internet about more than HTTP and broadband cabling.

Based on precedent, however, we can guess that the Metaverse will revolutionize nearly every industry and function, from healthcare to payments, consumer products, entertainment, hourly labor, and even sex work. In addition, altogether new industries, marketplaces and resources will be created to enable this future, as will novel types of skills, professions, and certifications. The collective value of these changes will be in the trillions.

This is the Foreword to the nine-part ‘METAVERSE PRIMER’.

Matthew Ball (@ballmatthew)


Jun 29, 2021