
Are We Living in a Simulated Reality?

 

According to some theorists, we are living in a simulated reality: the world we experience is nothing more than a computer simulation, one that an advanced civilization could have created.

We spend so much time inside computers and phones that it’s hard to imagine life without them. But what if we’re living in a simulated reality?

Some people think that computers could be creating simulations of different worlds in which to play, while others believe that our entire reality could be just one extensive computer simulation.

What Counts as Real?

When discussing what is real, it’s important to define the term. For some, reality is what can be experienced through the five senses; anything outside of that is considered fake or simulated.

Others may believe that reality is more than just what can be perceived with the senses. It may also include things that are beyond our understanding or knowledge.

In the movie “The Matrix,” Morpheus asks Neo what is real. This is a question that people have asked throughout history. Philosophers have debated this question for centuries. What is real? Is it the physical world that we can see and touch? Or is it something else?

What is real? How do you define ‘real’? If you’re talking about what you can feel, what you can smell, what you can taste and see, then ‘real’ is simply electrical signals interpreted by your brain.

-Morpheus, The Matrix

 

Some people believe that there is more to reality than what we can see and touch. They believe that a spiritual world exists beyond our physical world. Others believe that reality is nothing more than an illusion.

There is no single answer to this question, as it varies from individual to individual. What one person considers real may not be seen as such by someone else. This makes it a difficult topic to debate or discuss.

The Matrix: A Movie or a Documentary?

There is a lot of debate over whether the 1999 movie The Matrix is a work of fiction or a documentary.

The Matrix is a movie based on the idea of simulated reality. It asks: what if our world is not what we think it is? What if we are living in a simulation? The movie takes this idea and runs with it, creating a believable and fascinating world.

 

However, some people believe that The Matrix is more than just a movie. They think of it as a documentary: our world is a simulated reality, and we live in it without knowing. While this may seem like a crazy idea, it does have some basis in science.

Simulated reality is something scientists are actively studying, and some argue it could be possible. So while The Matrix may be a movie, the idea of a simulated reality that it explores may not be pure fiction.

The Simulation Theory

The theory is that we might be living in a simulated reality. Proponents of the simulation theory say that it’s plausible because computing power increases exponentially.

If we could create a simulated world indistinguishable from reality, why wouldn’t an advanced civilization already have done so?

Some scientists even believe that we’re already living in a computer-generated simulation and that our consciousness is just a program or algorithm.


A theory suggests that we are all living in a simulated reality. This theory, known as the simulation theory, holds that at some point in history, a civilization created a computer program within which we experience life as if we were living in the real world.

Some people believe that this theory could explain the mysteries of our existence, such as why we are here and what happens when we die.

An early precursor of the simulation theory was proposed by philosopher René Descartes in 1641, with his thought experiment of a deceiving demon. However, it wasn’t until the 1970s that the idea began to gain popularity, thanks to the development of computers and, later, artificial intelligence.

Then, in 2003, philosopher Nick Bostrom published a paper titled “Are You Living in a Computer Simulation?” which revived interest in the theory.

While there’s no definitive proof that we’re living in a simulation, the theory raises some interesting questions.

What if everything we experience is just an illusion? What does that mean for our understanding of reality and ourselves?

How could we know if we’re living in a simulation?

There are a few different ways to approach whether or not we’re living in a simulation. One is to look at the feasibility of creating a simulated world. If it’s possible to create a simulated world indistinguishable from the real one, then simulated worlds would presumably far outnumber the single real one, making it statistically likely that we’re living in a simulation.

Another way is to look at the development of artificial intelligence. If artificial intelligence surpasses human intelligence and becomes able to create its own simulations, then it’s likely that we’re living in a simulated world.

Whether or not we live in a computer-generated simulation has been debated by philosophers and scientists for centuries. Still, recent advancements in artificial intelligence (AI) have brought the topic back into the spotlight.

Some experts believe that if we create intelligent machines, they could eventually become powerful enough to create their own simulations, leading to an infinite number of simulated universes, possibly including ours.

So how could we know if we’re living in a simulation? One way would be to see if the laws of physics can be simulated on a computer. Another approach is to look for glitches or inaccuracies in the universe that could suggest it’s fake. However, both methods are complicated to execute and may not provide conclusive results.

The bottom line is that we may never know whether or not we’re living in a simulation.

Final Thought

Whether we are living in a simulated reality is still up for debate, but the ramifications of the possibility are far-reaching.

If we were to find ourselves in a simulated world, it would force us to re-evaluate our understanding of reality and what it means to be human. It would also raise important questions about the nature of existence and our place in the universe.



Let’s discuss Functional NFTs

Functional NFTs are changing the way we interact with each other and with the gaming experience. NFTs used to be limited to products, but now they put a value on services too. With functional NFTs, you can choose to buy an experience rather than a piece of art.

Non-Fungible Tokens (NFTs) have stirred things up in the world of art. While the underlying technology behind NFTs remains simple, they have morphed into multiple applications, some of which we shall discuss shortly. Traditionally there have been five categories of NFTs: collectibles, game assets, virtual land, crypto art, and others (including domain names, property titles, etc.). Currently, another category has been getting some buzz in the industry. This new player is called “Functional NFTs”.

What are Functional NFTs?

Let’s first discuss what Functional NFTs are. The meaning should be clear from the name itself: NFTs that provide some sort of functionality. It could be a game asset that performs some function. For example, if a game has an avatar as an NFT and that avatar provides certain functionality, it can be called a Functional NFT. This functionality might be accruing points in a game or giving the player some special power.

Another example can be an NFT created by a restaurant owner. The NFT works as a pass for one person to have dinner on Sunday at the restaurant; it has some functionality and serves a given purpose. In a similar fashion, imagine walking into a club and not having to stand in line. There can be an NFT for that too: owning it gives you free access to the club, and since access is tied to the token itself, no one needs to check your ID.
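To make this concrete, here is a minimal sketch of the idea, written in Python rather than in an actual smart-contract language. The class names, utilities, and in-memory registry are entirely hypothetical, standing in for what would really be on-chain ownership records (e.g. an ERC-721 contract):

    from dataclasses import dataclass

    # Toy model of a Functional NFT: the token grants a capability (dinner
    # pass, club entry) to whoever owns it. Illustration only; a real
    # version would live on-chain, not in a Python dict.

    @dataclass
    class FunctionalNFT:
        token_id: int
        owner: str
        utility: str  # e.g. "sunday-dinner-pass" or "club-entry"

    class Venue:
        def __init__(self) -> None:
            # token_id -> NFT, standing in for on-chain ownership records
            self.registry: dict[int, FunctionalNFT] = {}

        def mint(self, token_id: int, owner: str, utility: str) -> None:
            self.registry[token_id] = FunctionalNFT(token_id, owner, utility)

        def admit(self, token_id: int, claimant: str, utility: str) -> bool:
            # Access flows from token ownership alone; no ID check needed.
            nft = self.registry.get(token_id)
            return nft is not None and nft.owner == claimant and nft.utility == utility

    club = Venue()
    club.mint(1, "alice", "club-entry")
    print(club.admit(1, "alice", "club-entry"))  # True: walk right in
    print(club.admit(1, "bob", "club-entry"))    # False: bob doesn't own it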

Normal vs Functional NFTs

Moreover, there has been a heated debate about value accrual in normal versus Functional NFTs. The argument is that non-functional NFTs are easier to make and sell quickly on the market, thus acquiring value quickly. Functional NFTs, by comparison, such as those in games, need more deliberate design: it takes time to build a great experience around the basic utility of a Functional NFT, and consequently more time to build value.

For example, Axie Infinity is a Pokémon-like game that allows players to collect, breed, and battle creatures. It launched in 2018, but it was quite different then from what it is now; the developer team went through multiple iterations to refine the game experience. Once the gaming experience was refined, the NFT assets within the game accrued value. This phenomenon has been termed the “Promise Effect”: an NFT that promises some experience will accrue value more slowly than a non-functional NFT.

A new type of Functional NFTs

HODL Valley, a new metaverse gaming project, is trying to create a tokenized city. One among its many features is the Functional NFT, but these NFTs take the idea a step further. HODL Valley contains around 24 different locations, each with a specific function and utility. These locations are connected to DApps which carry out the functionality for users. The locations can be purchased in-app, and the revenues they generate can be taken home by the NFT owner. For example, say a bank is represented by an NFT. Since it’s connected to a DApp, it can provide lending and borrowing services. As other users in the game play and use the bank, the NFT owner, who is in turn the owner of the bank, will be able to generate an income stream from it. That is how Functional NFTs have been pitched recently.
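Here is a similarly rough sketch of that revenue mechanic, again in Python with hypothetical names and fee values (the actual HODL Valley mechanics aren’t specified here); the core idea is simply that usage fees accrue to whoever holds the token:

    # Toy model of a revenue-generating location NFT: each use of the
    # location's service accrues a fee to the current token holder.
    # Names, fee sizes, and mechanics are assumptions for illustration.

    class LocationNFT:
        def __init__(self, name: str, owner: str, fee_per_use: float) -> None:
            self.name = name
            self.owner = owner
            self.fee_per_use = fee_per_use
            self.balance = 0.0  # revenue accrued to the NFT owner

        def use_service(self, player: str) -> None:
            # Another player uses the bank's lending service in-game;
            # the fee accrues to the token holder.
            self.balance += self.fee_per_use

        def withdraw(self, claimant: str) -> float:
            if claimant != self.owner:
                raise PermissionError("only the NFT owner can withdraw")
            payout, self.balance = self.balance, 0.0
            return payout

    bank = LocationNFT("bank", owner="alice", fee_per_use=0.5)
    for player in ("bob", "carol", "dave"):
        bank.use_service(player)
    print(bank.withdraw("alice"))  # 1.5: an income stream from ownership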

These Functional NFTs are bound to change the way we interact with games and real life. With added functionality, individuals can get a unique experience. It’s not just a token representing value anymore; it’s a function in itself. Until now, NFTs only sold products; now they have started moving into services too.



How To Use Metaverse Technology To Design A Better Real World

 

Design thinking, a method that puts people and empathy at the center of new product development, has swept from consultancies like IDEO and Frog to nearly every corporate innovation group. Design thinking starts with ethnographic research and insights, then uses prototypes and resonance testing to iterate towards more successful user-centered products. This process is now the gold standard in modern product development. But rather than selling more products, what if the goal is to solve large-scale social problems? How can we enlist metaverse technologies like AI, computer vision, augmented reality, and spatial computing to tackle these meaningful issues?

 

Metaverse technologies’ incredible potential should be applied beyond avatar chat rooms and virtual property pyramid schemes; they should be put to work to do so much more.

 

There are many programs to learn design thinking, coding, or 3D modeling and animation in the service of producing first-person shooters, but only one academic program in the world makes solving one of the United Nations Sustainable Development Goals a central requirement for every student project. The Copenhagen Institute of Interaction Design takes the United Nations’ collection of 17 interlinked global goals, designed to be a "blueprint to achieve a better and more sustainable future for all," as a core tenet of its teachings.

 

In January I was invited by co-founders Simona Maschi and Alie Rose to teach a week-long SuperSight workshop in Costa Rica, focusing on computer vision and augmented reality to envision a better world. “The SDGs are backed up by the most extensive market research in history: they tell us where the needs are at the planetary level. If there are needs, there are markets to be created. The great responsibility for design teachers and students is to accelerate the transition towards sustainable products and services that are regenerative and circular. In this process, nature can be a mentor, teaching us about ecosystems and circularity.” To prepare students for the challenges ahead, the CIID curriculum includes biomimicry and immersive learning sessions in the jungle of Costa Rica.

 

So to Costa Rica we went. Over the course of the week, my co-instructor Chris McRobbie and I showed some of our AR projects, introduced foundational concepts and design principles, and riffed on the vast potential of the metaverse. The students made things: they used the latest machine learning algorithms built into SNAP lenses and the SNAP Lens Studio tool, then used Apple’s Reality Composer to make a series of augmented reality prototypes. Let me show you what they made, and WHY:

 

Manali and Jen created an AR tool to replace all the statues of old white men in San Jose with inspirational women. Why? For a kid who passes these landmarks every day, ambiently learning about their world, “there are a lot of women who deserve to be recognized more.” The students documented the concept in a short video.

 

Jose, Pablo, and Priscilla used computer vision to blur unsustainable product packaging in the grocery store. This diminished-reality application steers shoppers toward buying products in packaging that’s better for the environment.

Lisa and Karla created a gamified stretching experience to motivate some movement between all those Zoom meetings.

Mia and Vicky used computer vision for an application that is central to so many families and drives a lot of social interaction: pet ownership. Automatic human face recognition remains a fraught topic, but this team used pet recognition, which is much less controversial. The concept helps strangers learn whether a dog is friendly, offers ideas for good conversations with the owner, and helps return a lost dog safely home.

The most controversial project came from Sofi and Dee, who created a smart-glasses app for women to discreetly tag creepy men. Other women see the augmented marks if they choose: a kind of inverse scarlet letter.

In last year’s CIID program, Arvind Sanjeev envisioned a new way to create shared ad-hoc metaverse experiences with an AR flashlight called LUMEN. It has a computer vision system on the front and a bright laser projector to show information anywhere you shine its beam. LUMEN is great for groups of people to peer into the metaverse together. For example, point the beam at a wall to see where electrical conduits run, or onto a body to see the underlying skeletal structure and learn about a knee or shoulder implant. After graduation, Arvind joined forces with Can Yanardag and Matt Visco to develop LUMEN into a real venture/platform. The transparent body X-ray effects are so compelling that I’m showing LUMEN to orthopedic surgeons and physical therapists at the Healthcare Summit in Jackson Hole this week.

 

Run a metaverse envisioning workshop for your company this year. 

There are now so many accessible immersive prototyping tools, like Apple Reality Composer, Adobe Aero, and Snap Lens Studio, to help your team start experimenting. Even a one-day workshop with a skilled facilitator can help your team ask important questions and start to sketch some ideas to prototype. I often bring in an illustrator or storyboard artist to capture ideas from a good strategic discussion, then hire a game studio to create a fast 3D interactive “sketch” to envision the most promising concepts that come out of a workshop. Building things is a blast. Teams are engaged, learn about the potential of the new medium, and there’s enormous pride that “we made this!”

Tangible prototypes communicate ideas incredibly effectively around the organization.

The metaverses are coming; start sketching experiences for these new worlds.

Each metaverse will have its own technology, privacy policy, business model, and architecture, whether isolationist or open. Zuckerberg’s vision will be very different from Google’s, Microsoft’s, Apple’s, Amazon’s, Magic Leap’s, Unreal’s, or Nvidia’s. Niantic is pursuing a metaverse that augments the world with digital game layers to encourage people to get outside; the real-world metaverse is the one I’m most excited to design and develop.

The key is to get your team to start driving the metaverse-building engines, as my workshop students did. A link to the best prototyping tools is on SuperSight.world. Sketch some experiences: How might this technology change how you collaborate at a distance, learn in context, configure and sell products, envision the future? Becoming fluent in these tools for rapid prototyping and remote work is imperative to stay agile, competitive, and creative.


Framework for the Metaverse

I first wrote about the Metaverse in 2018, and overhauled my thinking in a January 2020 update: The Metaverse: What It Is, Where to Find it, Who Will Build It, and Fortnite. Since then, a lot has happened. COVID-19 forced hundreds of millions into Zoomschool and remote work. Roblox became one of the most popular entertainment experiences in history. Google Trends’ index on the phrase ‘The Metaverse’ set a new ‘100’ in March 2021. Against this baseline, use of the term never exceeded seven from January 2005 through to December 2020. With that in mind, I thought it was time to do an update - one that reflects how my thinking has changed over the past 18 months and addresses the questions I’ve received during this time, such as “Is the Metaverse here?”, “When will it arrive?”, and “What does it need to grow?”. Welcome to the Foreword to ‘THE METAVERSE PRIMER’.

When did the mobile internet era begin? Some would start this history with the very first mobile phones. Others might wait until the commercial deployment of 2G, which was the first digital wireless network. Or the introduction of the Wireless Application Protocol standard, which gave us WAP browsers and thus the ability to access a (rather primitive) version of most websites from nearly any ‘dumbphone’. Or maybe it started with the BlackBerry 6000, or 7000 or 8000 series? At least one of them was the first mainstream mobile device designed for on-the-go data. Most would say it’s the iPhone, which came more than a decade after the first BlackBerry and eight years after WAP, nearly two decades after 2G, 34 years after the first mobile phone call, and has since defined many of the mobile internet era’s visual design principles, economics, and business practices.

In truth, there’s never a flip. We can identify when a specific technology was created, tested, or deployed, but not when an era precisely occurred. This is because technological change requires a lot of technological changes, plural, to all come together. The electricity revolution, for example, was not a single period of steady growth. Instead, it was two separate waves of technological, industrial, and process-related transformations. 

The first wave began around 1881, when Thomas Edison stood up electric power stations in Manhattan and London. Although this was a quick start to the era of electrical power — Edison had created the first working incandescent light bulb only two years earlier, and was only one year into its commercialization — industrial adoption was slow. Some 30 years after Edison’s first stations, less than 10% of mechanical drive power in the United States came from electricity (two thirds of which was generated locally, rather than from a grid). But then suddenly, the second wave began. Between 1910 and 1920, electricity’s share of mechanical drive power quintupled to over 50% (nearly two thirds of which came from independent electric utilities. By 1929 it stood at 78%). 

The difference between the first and second waves is not how much of American industry used electricity, but the extent to which it did — and designed around it.


When plants first adopted electrical power, it was typically used for lighting and/or to replace a plant’s on-premises source of power (usually steam). These plants did not, however, rethink or replace the legacy infrastructure which would carry this power throughout the factory and put it to work. Instead, they continued to use a lumbering network of cogs and gears that were messy and loud and dangerous, difficult to upgrade or change, were either ‘all on’ or ‘all off’ (and therefore required the same amount of power to support a single operating station or the entire plant, and suffered from countless ‘single points of failure’), and struggled to support specialized work.


But eventually, new technologies and understandings gave factories both the reason and ability to be redesigned end-to-end for electricity, from replacing cogs with electric wires, to installing individual stations with bespoke and dedicated electrically-powered motors for functions such as sewing, cutting, pressing, and welding. 

The benefits were wide-ranging. The same plant now had considerably more space, more light, better air, and less life-threatening equipment. What’s more, individual stations could be powered individually (which increased safety, while reducing costs and downtime), and use more specialized equipment (e.g. electric socket wrenches). 


In addition, factories could configure their production areas around the logic of the production process, rather than hulking equipment, and even reconfigure these areas on a regular basis. These two changes meant that far more industries could deploy assembly lines in their plants (which had actually first emerged in the late 1700s), while those that already had such lines could extend them further and more efficiently. In 1913, for example, Henry Ford created the first moving assembly line, which used electricity and conveyor belts to reduce the production time per car from 12.5 hours to 93 minutes, while also using less power. According to historian David Nye, Ford’s famous Highland Park plant was “built on the assumption that electrical light and power should be available everywhere.”

Once a few plants began this transformation, the entire market was forced to catch up, thereby spurring more investment and innovation in electricity-based infrastructure, equipment, and processes. Within a year of its first moving assembly line, Ford was producing more cars than the rest of the industry combined. By its 10 millionth car, it had built more than half of all cars on the road.

This ‘second wave’ of industrial electricity adoption didn’t depend on a single visionary making an evolutionary leap from Thomas Edison’s core work. Nor was it driven just by an increasing number of industrial power stations. Instead, it reflected a critical mass of interconnected innovations, spanning power management, manufacturing hardware, production theory, and more. Some of these innovations fit in the palm of a plant manager’s hand, others needed a room, a few required a city, and they all depended on people and processes. 

To return to Nye, “Henry Ford didn’t first conceive of the assembly line and then delegate its development to his managers. … [The] Highland Park facility brought together managers and engineers who collectively knew most of the manufacturing processes used in the United States … they pooled their ideas and drew on their varied work experiences to create a new method of production.” This process, which happened at national scale, led to the ‘roaring twenties’, which saw the greatest average annual increases in labor and capital productivity in a hundred years.

Powering the Mobile Internet

This is how to think about the mobile internet era. The iPhone feels like the start of the mobile internet because it united and/or distilled all of the things we now think of as ‘the mobile internet’ into a single minimum viable product that we could touch and hold and love. But the mobile internet was created — and driven — by so much more.

In fact, we probably don’t even mean the first iPhone but the second, the iPhone 3G (which saw the largest model-over-model growth of any iPhone, with over 4× the sales). This second iPhone was the first to include 3G, which made the mobile web usable, and operated the iOS App Store, which made wireless networks and smartphones useful. 

But neither 3G nor the App Store were Apple-only innovations or creations. The iPhone accessed 3G networks via chips made by Infineon that connected via standards set by the ITU and GSMA, and which were deployed by wireless providers such as AT&T on top of wireless towers built by tower companies such as Crown Castle and American Tower. The iPhone had “an app for that” because millions of developers built them, just as thousands of different companies built specialized electric motor devices for factories in the 1920s. In addition, these apps were built on a wide variety of standards — from KDE to Java, HTML and Unity — which were established and/or maintained by outside parties (some of whom competed with Apple in key areas). The App Store’s payments worked because of digital payments systems and rails established by the major banks. The iPhone also depended on countless other technologies, from a Samsung CPU (licensed in turn from ARM), to an accelerometer from STMicroelectronics, Gorilla Glass from Corning, and other components from companies like Broadcom, Wolfson, and National Semiconductor. 

All of the above creations and contributions, collectively, enabled the iPhone and started the mobile internet era. They also defined its improvement path. 

Consider the iPhone 12, which was released in 2020. There was no amount of money Apple could have spent to release the iPhone 12 as its second model in 2008. Even if Apple could have devised a 5G network chip back then, there would have been no 5G networks for it to use, nor 5G wireless standards through which to communicate to these networks, and no apps that took advantage of its low latency or bandwidth. And even if Apple had made its own ARM-like GPU back in 2008 (more than a decade before ARM itself), game developers (which generate more than two thirds of App Store revenues) would have lacked the game-engine technologies required to take advantage of its superpowered capabilities. 

Getting to the iPhone 12 required ecosystem-wide innovation and investments, most of which sat outside Apple’s purview (even though Apple’s lucrative iOS platform was the core driver of these advancements). The business case for Verizon’s 4G networks and American Tower Corporation’s wireless tower buildouts depended on the consumer and business demand for faster and better wireless for apps such as Spotify, Netflix and Snapchat. Without them, 4G’s ‘killer app’ would have been… slightly faster email. Better GPUs, meanwhile, were utilized by better games, and better cameras were made relevant by photo-sharing services such as Instagram. And this better hardware powered greater engagement, which drove greater growth and profits for these companies, thereby driving better products, apps, and services. Accordingly, we should think of the overall market as driving itself, just as the adoption of electrical grids led to innovation in small electric-powered industrial motors that in turn drove demand for the grid itself.

We must also consider the role of changing user capability. The first iPhone could have skipped the home button altogether, rather than waiting until the tenth. This would have opened up more room inside the device itself for higher-quality hardware or bigger batteries. But the home button was an important training exercise for what was a vastly more complex and capable mobile phone than consumers were used to. Like closing a clamshell phone, it was a safe, easy, and tactile way to ‘restart’ the iPhone if a user was confused or tapped the wrong app. It took a decade for consumers to be able to have no dedicated home button. This idea is critical. As time passes, consumers become increasingly familiar with advanced technology, and therefore better able to adopt further advances - some of which might have long been possible!

And just as consumers shift to new mindsets, so too does industry. Over the past 20 years, nearly every industry has hired, restructured, and re-oriented itself around mobile workflows, products, or business lines. This transformation is as significant as any hardware or software innovation — and, in turn, creates the business case for subsequent innovations.

Defining the Metaverse

This essay is the foreword to my nine-part and 33,000-word primer on the Metaverse, a term I’ve not yet mentioned, let alone described.

Before doing so, it was important for me to provide the context and evolutionary path of technologies such as ‘electricity’ and the ‘mobile internet’. Hopefully it provided a few lessons. First, the proliferation of these technologies fundamentally changed human culture, from where we lived to how we worked, what we made, what we bought, how, and from whom. Second, these ‘revolutions’ or ‘transformations’ really depended on a bundle of many different, secondary innovations and inventions that built upon and drove one another. Third, even the most detailed understanding of these newly-emergent technologies didn’t make clear which specific, secondary innovations and inventions they required in order to achieve mass adoption and change the world. And how they would change the world was almost entirely unknowable.


In other words, we should not expect a single, all-illuminating definition of the ‘Metaverse’. Especially not at a time in which the Metaverse has only just begun to emerge. Technologically driven transformation is too organic and unpredictable of a process. Furthermore, it’s this very messiness that enables and results in such large-scale disruption. 

My goal therefore is to explain what makes the Metaverse so significant – i.e. deserving of the comparisons I offered above – and offer ways to understand how it might work and develop.

The Metaverse is best understood as ‘a quasi-successor state to the mobile internet’. This is because the Metaverse will not fundamentally replace the internet, but instead build upon and iteratively transform it. The best analogy here is the mobile internet, a ‘quasi-successor state’ to the internet established from the 1960s through the 1990s. Even though the mobile internet did not change the underlying architecture of the internet – and in fact, the vast majority of internet traffic today, including data sent to mobile devices, is still transmitted through and managed by fixed infrastructure – we still recognize it as iteratively different. This is because the mobile internet has led to changes in how we access the internet, where, when and why, as well as the devices we use, the companies we patronize, the products and services we buy, the technologies we use, our culture, our business models, and our politics.

The Metaverse will be similarly transformative as it too advances and alters the role of computers and the internet in our lives.

The fixed-line internet of the 1990s and early 2000s inspired many of us to purchase our own personal computer. However, this device was largely isolated to our office, living room or bedroom. As a result, we had only occasional access to and usage of computing resources and an internet connection. The mobile internet led most humans globally to purchase their own personal computer and internet service, which meant almost everyone had continuous access to both compute and connectivity.

The Metaverse iterates further by placing everyone inside an ‘embodied’, or ‘virtual’, or ‘3D’ version of the internet, on a nearly unending basis. In other words, we will constantly be ‘within’ the internet, rather than having access to it, and within the billions of interconnected computers around us, rather than occasionally reaching for them, and alongside all other users in real time.

The progression listed above is a helpful way to understand what the Metaverse changes. But it doesn’t explain what it is or what it’s like to experience. To that end, I’ll offer my best swing at a definition:

“The Metaverse is a massively scaled and interoperable network of real-time rendered 3D virtual worlds which can be experienced synchronously and persistently by an effectively unlimited number of users with an individual sense of presence, and with continuity of data, such as identity, history, entitlements, objects, communications, and payments.”

Most commonly, the Metaverse is mis-described as virtual reality. In truth, virtual reality is merely a way to experience the Metaverse. To say VR is the Metaverse is like saying the mobile internet is an app. Note, too, that hundreds of millions are already participating in virtual worlds on a daily basis (and spending tens of billions of hours a month inside them) without VR/AR/MR/XR devices. As a corollary to the above, VR headsets aren’t the Metaverse any more than smartphones are the mobile internet.

Sometimes the Metaverse is described as a user-generated virtual world or virtual world platform. This is like saying the internet is Facebook or Geocities. Facebook is a UGC-focused social network on the internet, while Geocities made it easy to create webpages that lived on the internet. UGC experiences are just one of many experiences on the internet.

Furthermore, the Metaverse doesn’t mean a video game. Video games are purpose-specific (even when the purpose is broad, like ‘fun’), unintegrated (i.e. Call of Duty is isolated from fellow portfolio title Overwatch), temporary (i.e. each game world ‘resets’ after a match), and capped in participants (e.g. 1MM concurrent Fortnite users are in over 100,000 separate simulations). Yes, we will play games in the Metaverse, and those games may have user caps and resets, but those are games in the Metaverse, not the Metaverse itself. Overall, the Metaverse will significantly broaden the number of virtual experiences used in everyday life (i.e. well beyond video games, which have existed for decades) and, in turn, expand the number of people who participate in them.

Lastly, the Metaverse isn’t tools like Unreal or Unity or WebXR or WebGPU. This is like saying the internet is TCP/IP, HTTP, or a web browser. These are the protocols upon which the internet depends, and the software used to render it.

The Metaverse, like the internet, mobile internet, and process of electrification, is a network of interconnected experiences and applications, devices and products, tools and infrastructure. This is why we don’t even say that horizontally and vertically integrated giants such as Facebook, Google or Apple are an internet. Instead, they are destinations and ecosystems on or in the internet, or which provide access to and services for the internet. And of course, nearly all of the internet would exist without them.

The Metaverse Emerges

As I’ve written before, the full vision of the Metaverse is decades away. It requires extraordinary technical advancements (we are far from being able to produce shared, persistent simulations that millions of users can experience synchronously in real time), and perhaps regulatory involvement too. In addition, it will require overhauls in business policies and changes to consumer behavior.

But the term has become so recently popular because we can feel it beginning. This is one of the reasons why Fortnite and Roblox are so commonly conflated with the Metaverse. Just as the iPhone feels like the mobile internet because the device embodied the many innovations which enabled the mobile internet to go mainstream, these ‘games’ bring together many different technologies and trends to produce an experience which is simultaneously tangible and feels different from everything that came before. But they do not constitute the Metaverse.


Personally, I’m tracking the emergence of the Metaverse around eight core categories, which can be thought of as a stack (click each header for a dedicated essay).

  1. Hardware: The sale and support of physical technologies and devices used to access, interact with, or develop the Metaverse. This includes, but is not limited to, consumer-facing hardware (such as VR headsets, mobile phones, and haptic gloves) as well as enterprise hardware (such as that used to operate or create virtual or AR-based environments, e.g. industrial cameras, projection and tracking systems, and scanning sensors). This category does not include compute-specific hardware, such as GPU chips and servers, nor networking-specific hardware, such as fiber optic cabling or wireless chipsets.
  2. Networking: The provisioning of persistent, real-time connections, high bandwidth, and decentralized data transmission by backbone providers, the networks, exchange centers, and services that route amongst them, as well as those managing ‘last mile’ data to consumers. 
  3. Compute: The enablement and supply of computing power to support the Metaverse, supporting such diverse and demanding functions as physics calculation, rendering, data reconciliation and synchronization, artificial intelligence, projection, motion capture and translation.
  4. Virtual Platforms: The development and operation of immersive digital and often three-dimensional simulations, environments, and worlds wherein users and businesses can explore, create, socialize, and participate in a wide variety of experiences (e.g. race a car, paint a painting, attend a class, listen to music), and engage in economic activity. These businesses are differentiated from traditional online experiences and multiplayer video games by the existence of a large ecosystem of developers and content creators which generate the majority of content on and/or collect the majority of revenues built on top of the underlying platform.
  5. Interchange Tools and Standards: The tools, protocols, formats, services, and engines which serve as actual or de facto standards for interoperability, and enable the creation, operation and ongoing improvements to the Metaverse. These standards support activities such as rendering, physics, and AI, as well as asset formats and their import/export from experience to experience, forward compatibility management and updating, tooling, and authoring activities, and information management.
  6. Payments: The support of digital payment processes, platforms, and operations, which includes fiat on-ramps (a form of digital currency exchange) to pure-play digital currencies and financial services, including cryptocurrencies, such as bitcoin and ether, and other blockchain technologies.
  7. Metaverse Content, Services, and Assets: The design/creation, sale, re-sale, storage, secure protection and financial management of digital assets, such as virtual goods and currencies, as connected to user data and identity. This contains all business and services “built on top of” and/or which “service” the Metaverse, and which are not vertically integrated into a virtual platform by the platform owner, including content which is built specifically for the Metaverse, independent of virtual platforms.
  8. User Behaviors: Observable changes in consumer and business behaviors (including spend and investment, time and attention, decision-making and capability) which are either directly associated with the Metaverse, or otherwise enable it or reflect its principles and philosophy. These behaviors almost always seem like ‘trends’ (or, more pejoratively, ‘fads’) when they initially appear, but later show enduring global social significance. 

(You’ll note ‘crypto’ or ‘blockchain technologies’ are not a category. Rather, they span and/or drive several categories, most notably compute, interchange tools and standards, and payments — potentially others as well.)


Each of these buckets is critical to the development of the Metaverse. In many cases, we have a good sense of how each one needs to develop, or at least where there’s a critical threshold (say, VR resolution and frame rates, or network latency). 

But ultimately, how these many pieces come together and what they produce is the hard, important, and society-altering part of any Metaverse analysis. Just as the electricity revolution was about more than the kilowatt hours produced per square mile in 1900s New York, and the internet about more than HTTP and broadband cabling.

Based on precedent, however, we can guess that the Metaverse will revolutionize nearly every industry and function. From healthcare to payments, consumer products, entertainment, hourly labor, and even sex work. In addition, altogether new industries, marketplaces and resources will be created to enable this future, as will novel types of skills, professions, and certifications. The collective value of these changes will be in the trillions.

This is the Foreword to the nine-part ‘METAVERSE PRIMER’.

Matthew Ball (@ballmatthew)

Jun 29, 2021 Written By Matthew Ball


Ask Ethan: What’s the real science behind Google’s time crystal?

Is the time crystal really an otherworldly revolution, leveraging quantum computing that will change physics forever?

KEY TAKEAWAYS

  • Google’s quantum computing team, in a first, has developed and demonstrated a discrete time crystal on a quantum computer. 
  • By driving the system with a microwave pulse, they can get it to return to its original quantum state periodically, with no thermal noise or decoherence effects. 
  • However, claims that it violates thermodynamics, is otherworldly, or changes physics forever are all demonstrably untrue and misrepresentative of the actual quality research.

It is tempting, whenever a new discovery comes along, to imagine a whole slew of revolutions that might soon ensue. After all, anytime you can suddenly do or accomplish any task that was previously impractical or even (thought to be) impossible, that is one less obstacle standing in the way of even your loftiest, pie-in-the-sky dreams. However, no matter what discoveries ensue, the fundamental laws of physics that underlie reality must always be obeyed; you might be able to cajole nature into doing a lot of clever things, but you cannot very well break the rules that govern it. If you could, we would have to write down new laws, because the old ones would no longer be valid. Despite all the tricks we have developed, we cannot create fundamental magnetic monopoles, violate the conservation of momentum or energy, or work our way around the second law of thermodynamics.

Yet a recent article, touting a brand new discovery involving time crystals and Google’s quantum computer, claims to do just that: evade the second law of thermodynamics. Is that even right? Patreon supporter Chad Marler wants to know, asking:

Hi Ethan… I was reading some headlines and came across this article. While I suspect the innovation was real, some of the wording in the article made my [nonsense] detector go off… it sounds like something you would hear on an Art Bell show.

I will tell you, up front, that the scientific paper is legit, but the recent article is full of misconceptions and misinterpretations. Let’s dive in and find out what it is all about.

Normal crystals repeat their structure/configuration in space, like the crystal structure of corundum, α-Al2O3. But a time crystal would repeat its quantum state in time, instead. (Credit: Ben Mills)

What is a time crystal?

Unlike most things in physics, where theorists imagine a possibility far out of reach of current or near-future technology, time crystals have only been around for a very short time, even in our minds. In 2012, Nobel Laureate Frank Wilczek proposed that a new state of matter might exist: a time crystal. Most of us know what a normal crystal is: a repeating, rigid lattice or grid of particles, like atoms or molecules, that compose a solid, ordered structure.

When we talk about time crystals, however, there is no lattice, no grid, and no solid, ordered structure. The important component of a time crystal, at least conceptually, is the “repeating” part. Whereas a conventional crystal has a structure that repeats in space, meaning it exhibits translational symmetry (if you move within the crystal, the structure looks the same everywhere), a time crystal should repeat its configuration, periodically, in time.

Even in their ground state, electrons still have a non-zero energy, meaning that there will always be random motions over time. Only if the system returns to the exact original state, periodically, with no thermal noise or other imperfections, can a time crystal be created. (Credit: SPARKYSCIENCE AND ANTICOMPOSITENUMBER)

Originally, when time crystals were first considered, they were presumed to be impossible for a number of reasons. Theorems were published purporting to prove their impossibility. There were assertions that a system could not spontaneously transition from a lower- to a higher-energy state, return to its original state, and then keep going back and forth between those two states, because that would amount to a type of perpetual motion, violating the second law of thermodynamics and the conservation of energy.

Not only did theorists find loopholes in those theorems; more impressively, experimentalists went right ahead and created time crystals in the lab. In 2016, Norman Yao and his team came up with a very clever scheme for creating one. Instead of taking a closed, constant system, he proposed leveraging a system with out-of-equilibrium conditions. He would then “drive” that system externally, making it an open (rather than a closed) system and achieving the much-sought-after “time crystal” state.

Phase diagram of the discrete time crystal as a function of Ising interaction strength and spin-echo pulse imperfections. Only in the blue, shaded region is the time crystal state achieved, where the X-axis is the dipole spins (interaction strength) and the Y-axis is the driving force (pulses) injected into the system. (Credit: Norman Y. Yao, Andrew C. Potter, Ionut-Dragos Potirniche, Ashvin Vishwanath.)

It is a little bit complicated, but imagine that you have a bunch of atoms that have a spin, and that those spins have directions: dipole moments. The way you can “drive” the system is by subjecting it to spin-echo pulses that contain imperfections, but which occur periodically, while allowing interactions to occur randomly in the intermediate times. If you get the combination of the spins’ dipole moments and the spin-echo pulses to behave in just the right fashion, you can get a time crystal.

The hard part, though, is avoiding what normally happens when you interact with a system: If there is an exchange of energy, that energy gets transferred throughout the system, internally, causing runaway heating due to many-body interactions. Somehow, you have to:

  • drive the system, externally, with a spin-flip pulse,
  • so that you get a periodic response,
  • that is proportional to the time at which you pulse the system,
  • and at some multiple of the period, you return to your initial state,
  • while the “time crystal” only oscillates away from and then back into that initial state.

Only if you go back, periodically, to exactly your initial state, with no extra heating and achieve a pure steady-state can you make a time crystal.

The blueprint for creating a time crystal: take an entangled system and drive it with a spin-flip pulse. At some multiple of the period, you will return to the same initial state. (Credit: APS / Alan Stonebraker / Phil Richerme)
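As a toy illustration of that defining, period-doubled response, consider the sketch below: a chain of classical spins hit by an idealized spin-flip pulse once per period. Note what it leaves out: the interactions and many-body localization that stabilize a real time crystal against pulse imperfections are not modeled at all.

    import numpy as np

    # Toy sketch: a chain of spins driven by a periodic spin-flip pulse.
    # Each pulse flips every spin, so the configuration returns to its
    # initial state every TWO pulse periods -- the subharmonic response
    # that defines a discrete time crystal.

    rng = np.random.default_rng(0)
    initial = rng.choice([-1, 1], size=10)  # random initial spin configuration
    spins = initial.copy()

    for period in range(1, 7):
        spins = -spins  # one (idealized) spin-flip pulse
        print(f"after pulse {period}: back to initial state? "
              f"{np.array_equal(spins, initial)}")
    # Prints False, True, False, True, ...: the system responds at half
    # the driving frequency rather than simply following the drive.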

How can you make one in real life?

Yao’s work first appeared in August 2016, and within mere months, two independent groups put it to the test: one led by Chris Monroe at the University of Maryland, and one led by Mikhail Lukin at Harvard.

Both tried to set up a system precisely as Yao had prescribed but, because the conditions are so general, wound up taking vastly different approaches.

Monroe’s group took a series of ytterbium ions all lined up in a one-dimensional chain, all coupled together via their electrostatic interactions. When they subjected this line of ions to a series of spin-flip pulses, they found that the system would return to its initial state every two full pulse periods. Meanwhile, Lukin’s group took an actual diamond crystal that contained somewhere on the order of ~1,000,000 spin impurities within it and pulsed those impurities within the crystal with microwave radiation. That radiation flipped their spins, but time crystal oscillations were only observed every three full pulse periods, whereupon the crystal would return to its initial quantum state.

The Harvard Diamond, created by a team led by Mikhail Lukin, has so many nitrogen impurities that it turned black. This is one of two independent physical systems used to create a time crystal. When driven under the proper conditions, it returns to its initial state, whatever that state may have been, every three full pulse periods. (Credit: Georg Kucsko.)

This occurred for both groups, interestingly enough, even when the driving pulses were imperfect. You could:

  • alter the magnitude of the pulse, making it stronger or weaker,
  • vary the frequency of pulsation, making it a little quicker or slower,
  • turn up or turn down the amount of noise and/or interactions that occurred between the pulses,
  • or change the conditions of the environment that the system is placed in,

and still recover this time crystal behavior. Surprisingly, for these non-equilibrium systems, there is a lot of wiggle-room as far as what you can do and still observe this time crystal behavior.

But as they were originally envisioned by Wilczek in 2012, an idealized time crystal would occur in a system that was in thermal equilibrium — that was neither absorbing nor emitting energy from or to the surrounding environment. In order to create a time crystal, you needed to have an open system that could exchange energy with its external surroundings, and that system needed to be driven at a periodic frequency. Moreover, the imperfections in the driving could not be too large, or the crystal would “melt” in precisely the fashion we want to avoid: with runaway heating occurring from many-body interactions.

Ten ytterbium ions with entangled electron spins, as used to first create a time crystal. With every two full pulse periods that pass, the full suite of ions returns to its original, initial configuration of spins. (Credit: Chris Monroe, University of Maryland.)

What did the Google team, using a quantum computer, actually do?

Back when these time crystals were first realized in 2016/2017, it was recognized that time crystals could conceivably be applied to quantum computers. Instead of encoding a bit, like the “0” or “1” a standard computer encodes, a quantum computer encodes a qubit, which is a probability-weighted superposition of both “0” and “1” simultaneously. Although you can only measure a “0” or “1” at the end, the fact that you have many qubits allows you to see whether you have preserved the quantum behavior of the system (or not), whether your results are error-free (or not), and what type of final-state distribution you get and whether it matches your theoretical predictions.

The hard “problem” with a quantum computer is the problem of decoherence: Over relatively short timescales, the system interacts with the surrounding particles, and this causes you to lose the quantum behavior you are trying to preserve. For Google’s quantum computer, which is based on superconducting qubits (as opposed to quantum dots or ion traps, for example), you get a coherence timescale of about 50 microseconds. You can only perform perhaps a few dozen computations before decoherence ruins your experiment, and you lose the quantum behavior you sought to maintain and measure. (Or, more precisely, before too many errors, including errors from simple crosstalk between qubits, simply transform your signal into noise.)

The ordered and disordered eigenstates of a set of configurations. In equilibrium (a), only the lowest energy states are ordered, with higher-energy ones being unordered. In most driven systems (b), no states are ordered. But in systems with many-body localization (c), all states can be ordered, allowing for the possibility of returning periodically to your original state. (Credit: Google Quantum AI and collaborators, arXiv:2107.13571.)

Instead of using a dynamical phase like the spins of atoms, though, a quantum computer allows you to use a different property: the order of eigenstates in many-body systems. If you brought your qubits into an equilibrium setting, you would see that there was order in the lowest energy states and unordered states at higher energies. That is why, under normal circumstances, if you allow too much energy to propagate through your system, you just wind up with featureless, unordered systems; it is like the heat or energy just randomized everything.

However, some systems can exhibit what is called MBL: many-body localization, where you get local conservation laws and only a discrete number of ordered states. When you drive the system, which the Google team did with pulsed microwaves that cause the qubits to flip, your qubits have the potential to behave just like the dynamical phases did when we were measuring atomic spins: If the qubits do not absorb heat or impart energy to their surroundings, they can simply flip between different ordered states. With enough pulses, you can conceivably recover your original state.

Sure enough, every two full periods of the microwave pulses resulted in a recovery of the original state: a time crystal. Not bound by these decoherence effects any longer, the researchers could maintain this time crystal state for up to ~100 seconds, a remarkable achievement.
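As a single-qubit sketch of the signature being measured, assume each microwave pulse acts as a slightly imperfect bit flip, i.e. a rotation by pi*(1 - epsilon) about the X axis. Everything that makes the real result interesting, namely many interacting qubits stabilized by many-body localization, is left out; without it, the imperfection slowly degrades the return, which is exactly what the time crystal state avoids:

    import numpy as np

    # Hypothetical single-qubit illustration: each "microwave pulse" is an
    # imperfect bit flip. The probability of measuring |0> swings low,
    # then back near 1, every TWO pulses: the period-doubled return.

    def x_rotation(theta: float) -> np.ndarray:
        """2x2 unitary for a rotation by angle theta about the X axis."""
        return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                         [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

    epsilon = 0.05  # 5% pulse imperfection
    pulse = x_rotation(np.pi * (1 - epsilon))

    state = np.array([1.0, 0.0], dtype=complex)  # start in |0>
    for period in range(1, 5):
        state = pulse @ state
        print(f"after pulse {period}: P(|0>) = {abs(state[0]) ** 2:.3f}")
    # P(|0>) ~= 0.006, 0.976, 0.054, 0.905: a near-return every two pulses,
    # slowly eroded by the imperfection (no MBL stabilization modeled here).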

The Sycamore processor, a rectangular array of 54 qubits each connected to its four nearest neighbors with couplers, contains one inoperable qubit, leading to an effective 53-qubit quantum computer. The optical image shown here illustrates the scale and color of the Sycamore chip as seen in optical light. (Credit: Google Quantum AI and collaborators, retrieved from NASA.)

And how do the claims in the LiveScience article hold up?

Although the article does a fine job of describing the experiments performed themselves, there is a howler of a statement made early on:

With the ability to forever cycle between two states without ever losing energy, time crystals dodge one of the most important laws of physics — the second law of thermodynamics, which states that the disorder, or entropy, of an isolated system must always increase. These bizarre time crystals remain stable, resisting any dissolution into randomness, despite existing in a constant state of flux.

There is no dodge; the second law of thermodynamics applies to closed systems, not open ones. The disorder of the system, if you include the microwave pulses and the external environment, does in fact go up, just as predicted. The crystals oscillate between allowable states and return to their original ones when driven properly, just as their non-qubit analogues did years prior. In order to do this, the researchers needed to discriminate between external decoherence and internal thermalization, both of which can destroy the quantum state they are seeking to maintain, which itself is an admirable achievement.

When a series of food items are placed in a pan and the chef jiggles it in a way to coax the items into flipping, some will flip 180°, others 360°, others 540°, etc. But if the chef jiggles it enough times, all the items may return to their original state, rather than taking on random configurations. This is the concept of a time crystal. (Credit: Public domain / Creative Commons CC0.)

Although it may be fun to claim, as the headline of the article did, that this is “otherworldly” and “could change physics forever,” it is more like imagining you have got a skillet with different sized and shaped mollusks in it and a chef who jiggles the pan in a way that makes the shelled creatures flip. Some will flip 180°, others 360°, others 540°, etc. In the quantum world, some of these mollusks can take on in-between values, too. But after a certain number of jiggles, the mollusks all wind up the same way they started, regardless of what that specific initial configuration was. That is all the Google team is doing, but instead of mollusks or spinning atoms, they are using the eigenstates of a quantum computer.

Which, if we are being honest, is still a remarkable achievement! This is a new kind of time crystal, a new way of achieving it, and one with the potential to study non-equilibrium phases of matter on a quantum computer. And although you have to pump energy into the system in pulses, the time crystal can, in fact, return to whatever specific state it began with, even with small imperfections occurring in the “flips,” without destroying, decohering, or losing the nature of the quantum state due to thermal instabilities. No laws are violated and the physics we know is not changed in any way, but this is a phenomenal achievement nonetheless. In a mere nine years, we have gone from theorizing the existence of time crystals to creating them to observing them on a quantum processor. When a new field yields significant advances so quickly, it compels us to pay attention.

Send in your Ask Ethan questions to startswithabang at gmail dot com!


Hacktopia open call 2021

HACKTOPIA is a new citizen science initiative of the City of Antwerp and the Flemish research centre imec. Through this open call, we invite citizens to actively think along about the problem of urban flooding in the city of Antwerp. Which challenges can be tackled? You come up with the idea; we help find the right technology and data to ‘hack’ the city and improve it bottom-up. Everyone a (citizen) scientist!

HACKTOPIA

HACKTOPIA is an initiative of the City of Antwerp and the Flemish research centre imec in which we empower citizens to shape the smart city of tomorrow. You come up with the idea; we supply the technology and data to ‘hack’ the city yourself.

This edition of HACKTOPIA has WATER as its theme, and more specifically the problems and questions that come with it. Heavy rainfall or storm weather can bring a whole lot of water along with it. Maybe you feel that climate adaptation isn’t moving quite fast enough, or you already have a bold idea about exactly what a city like Antwerp needs. Can you spot the ‘nuisance’ in water nuisance?
 




Then serve us your water problems to solve… and who knows, you and Anthony Liekens (self-declared ‘mad scientist’ with a mission: making science and technology accessible to everyone) of Makerspace Antwerpen may soon be rolling up your sleeves to hack the city!


Mad scientist Anthony Liekens

What the Hack?! 

Step 1
You brood on an idea or a challenge that you would like to see tackled.
What should your idea or challenge satisfy?

  • the theme of urban flooding is central
  • it is based on a real need of (the residents of) Antwerp
  • it is relevant to a large number of the city’s citizens (not just to you)
  • it has an innovative and, preferably, technological angle
  • it is a sustainable solution that can live on after the project ends

Step 2
You submit your idea via the form below. We will contact you within a few days for a telephone intake interview.

Step 3
From here on it gets really exciting! You may be selected to develop your concept further together with us.

Step 4
You and any fellow makers take part in a number of workshops (more info below). Professional makers from Makerspace Antwerpen and experts from imec, among others, will assist you with advice and support to shape your idea and develop a blueprint.

Step 5
The makers get to work with you to build a simulation of your solution. We then test this early prototype ‘for real’ with (end) users.

Step 6
After an exciting jury event, the best concept is chosen for a follow-up phase, in which your solution is developed into a first working prototype (proof of concept). Curious what exactly this involves? Then be sure to read the GitHub page of Klankentappers, the winning citizen science concept from the 2019 imec Hackable City of Things!

Taking part

  • You sign up via this registration form.
  • If selected, we contact you to schedule a telephone intake interview.
  • During this interview, we evaluate your idea against the predefined selection criteria (see Step 1).
  • If you are selected, you will then receive a more detailed briefing on exactly what to expect.

Conditions

By taking part, you commit to:

  • Working in a team of at least 3 and at most 6 people. After selection, the team is formed by imec and the City of Antwerp on the basis of similar challenges and ideas.
    • Participants who register as a group (max. 3 people) are automatically placed in the same team upon selection.
  • Completing the full process and taking part in all workshops.
  • Actively contributing to building the prototype and carrying out the test.
  • Opening up your solution, and any data it may generate, to your fellow citizens, researchers, and the city.

What can you expect?

Throughout the research project, you will be supported by an innovation manager from imec and a maker from Makerspace Antwerpen. You can also count on the advice of experts from the City of Antwerp, imec, and possible third parties.

The workshops all take place on a Thursday evening after working hours (with the exception of the testing day), on the following dates:

  1. Ideation – problem statement & idea scoping: 07/10/2021
  2. Get out of the Building – environment scan & expert advice: 21/10/2021
  3. Sketch & map: 09/11/2021
  4. Prototyping: 18/11/2021
  5. Testing: 25/11/2021
  6. Jury & pitch – winner selection: 09/12/2021

Timing

  • Registration open until 27/09/2021 (inclusive)
  • The project runs until the end of December 2021

Contact

For more info or questions about HACKTOPIA, you can contact hacktopia@antwerpen.be.

Curious for more?

  • Read more about Klankentappers, a citizen science project that aims to make scientifically sound noise measurements affordable and accessible to citizens.
  • Discover our other blog posts and projects.
https://www.imeccityofthings.be/nl/blog/hacktopia-open-call-2021