
Ground-Breaking Research Finds 11 Multidimensional Universe Inside the Human Brain

The human brain can create structures in up to 11 dimensions, according to scientists. A study published in Frontiers in Computational Neuroscience reports that the brain works with, and builds, structures of up to 11 dimensions.

According to the Blue Brain Project, these dimensions are not dimensions in the everyday spatial sense that most of us understand. As part of the project, scientists uncovered striking new facts about the intricacy of the human brain.

Neuroscientist Henry Markram, director of Blue Brain Project and professor at the EPFL in Lausanne, Switzerland, said: “We found a world that we had never imagined. There are tens of millions of these objects, even in a speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.”

When the researchers turned to the human brain, traditional mathematical approaches proved inapplicable and unproductive.



The graphic tries to depict something that can’t be seen: a multi-dimensional universe of structures and spaces. On the left is a computerised replica of a section of the neocortex, the brain’s most evolved portion. On the right, shapes of various sizes and geometries illustrate structures with dimensions ranging from one to seven and beyond. The central “black hole” represents a complex of multi-dimensional voids, or cavities. In the new paper in Frontiers in Computational Neuroscience, researchers from the Blue Brain Project report that groups of neurons arranged around such cavities provide the missing link between brain structure and function. Image source: Blue Brain Project.

“The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly,” Markram revealed.

Instead, scientists opted to investigate algebraic topology. Algebraic topology is a branch of mathematics that studies topological spaces using techniques from abstract algebra. In applying this approach in their latest work, scientists from the Blue Brain Project were joined by mathematicians Kathryn Hess from EPFL and Ran Levi from Aberdeen University.

Professor Hess explained: “Algebraic topology is like a telescope and microscope at the same time. It can zoom into networks to find hidden structures – the trees in the forest – and see the empty spaces – the clearings – all at the same time.”

The researchers observed that brain structures are formed when a collection of neurons – cells in the brain that carry impulses – forms a clique. Each neuron in the group is connected to every other neuron in the group, and together they form a new geometric object. The ‘dimension’ of the object increases as the number of neurons in the clique increases.
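As a rough illustration of the “clique equals higher-dimensional object” idea, here is a minimal Python sketch. This is not the Blue Brain Project’s actual method, which works with directed cliques and algebraic topology on reconstructed microcircuits; the toy graph and the rule that a clique of k + 1 fully connected nodes is read as a k-dimensional object are assumptions made purely for illustration.

```python
# Toy illustration only (assumed setup, not the Blue Brain Project's pipeline):
# read each clique of k+1 fully connected "neurons" as a k-dimensional object.
import networkx as nx

# A tiny made-up connectivity graph; real data would come from reconstructed microcircuits.
G = nx.Graph()
G.add_edges_from([
    ("n1", "n2"), ("n1", "n3"), ("n2", "n3"),   # n1..n3 form a 3-clique (2-dimensional)
    ("n1", "n4"), ("n2", "n4"), ("n3", "n4"),   # adding n4 makes a 4-clique (3-dimensional)
    ("n5", "n6"),                               # a 2-clique, i.e. a single edge (1-dimensional)
])

for clique in nx.find_cliques(G):               # maximal cliques of the graph
    dimension = len(clique) - 1                 # k+1 neurons -> a k-dimensional object
    print(f"clique {sorted(clique)} -> dimension {dimension}")
```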

The scientists used algebraic topology to model the architecture within a virtual brain they developed with the help of computers. They then confirmed their findings through experiments on real brain tissue. The researchers discovered that by adding inputs to the virtual brain, cliques of progressively higher dimensions formed. In addition, the investigators detected voids between the cliques.

Ran Levi from Aberdeen University said: “The appearance of high-dimensional cavities when the brain is processing information means that the neurons in the network react to stimuli in an extremely organized manner. It is as if the brain reacts to a stimulus by building then razing a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc. The progression of activity through the brain resembles a multi-dimensional sandcastle that materializes out of the sand and then disintegrates.”

The new information on the human brain provides previously unseen insights into how the brain processes information. Scientists have said, however, that it is still unclear how the cliques and cavities arise in such a unique way.

The new research could someday help scientists solve one of neuroscience’s greatest mysteries: where does the brain ‘store’ memories?

Reference: Peer reviewed research

Zeeshan Ali

November 06, 2022


Top 5 Real-World Applications for Natural Language Processing

Emerging technologies have greatly simplified our daily lives. For instance, when you are making dinner and want to call your mom for the secret recipe, you don’t have to stop what you are doing and dial the number. Instead, you can simply say, “Hey Siri, call Mom,” and your iPhone makes the call for you.

The application looks simple enough, but the technology behind it is sophisticated. The magic that makes the scenario above possible is natural language processing (NLP). NLP is far more than a pillar for building Siri; it also powers many other AI-infused applications in the real world.

This article first explains what NLP is and later moves on to introduce five real-world applications of NLP.

What is NLP?

From chatbots to Siri, from virtual support agents to knowledge graphs, the application and usage of NLP are ubiquitous in our daily life. NLP stands for “Natural Language Processing”. Simply put, NLP is the ability of a machine to understand human language. It is the bridge that enables humans to directly interact and communicate with machines. NLP is a subfield of artificial intelligence (AI) and in Bill Gates's words, “NLP is the pearl in the crown of AI.”

With the ever-expanding market size of NLP, countless companies are investing heavily in this industry, and their product lines vary. Many different but specific systems for various tasks and needs can be built by leveraging the power of NLP.

The Five Real World NLP Applications

The most popular and exciting real-world applications of NLP include conversational user interfaces, AI-powered call quality assessment, intelligent outbound calls, AI-powered call operators, and knowledge graphs, to name a few.

Chatbots in E-commerce

Over five years ago, Amazon had already realized the potential benefit of applying NLP to its customer service channels. Back then, when customers had an issue with an order, their only recourse was to call a customer service agent. Most of the time, what they heard from the other end of the line was, “Your call is important to us. Please hold, we’re currently experiencing a high call load.” Amazon recognized the damage this could do to its brand image and began building chatbots.

Nowadays, when you want to quickly get, for example, a refund online, there’s a much more convenient way. All you need to do is open the Amazon customer service chatbot, type in your order information, and make a refund request. The chatbot interacts and replies much the way a real human does. Apart from handling the post-sales customer experience, chatbots also offer pre-sales consulting: if you have questions about a product you are about to buy, you can simply chat with a bot and get answers.

E-commerce chatbots.

With the emergence of concepts like the metaverse, NLP can do more than power AI chatbots. Customer support avatars in the metaverse rely on NLP technology, giving customers a more realistic chat experience.

Customer support avatar in the metaverse.

Conversational User Interface

Another trendy and promising application is interactive systems. Many well-known companies are betting big on CUIs (conversational user interfaces). A CUI is the general term for a computer user interface that simulates a conversation with a real human being.

The most common CUIs in our everyday life are Apple’s Siri, Microsoft’s Cortana, Google’s Google Assistant, Amazon’s Alexa, etc.

Apple’s Siri is a common example of a conversational user interface.

In addition, CUIs can be embedded into cars, especially EVs (electric vehicles). NIO, an automobile manufacturer dedicated to designing and developing EVs, launched its own CUI, named NOMI, in 2018. In-car CUIs work much the same way as Siri: drivers can keep their attention on the road while asking the CUI to adjust the A/C temperature, play a song, lock the windows and doors, navigate to the nearest gas station, and so on.

The conversational user interface in cars.

The Algorithm Behind

Despite all the fancy algorithms the technical media have boasted about, one of the most fundamental ways to build a chatbot is to construct and organize FAQ pairs (or, more plainly, question-answer pairs) and use NLP algorithms to figure out whether a user query matches any entry in your FAQ knowledge base. A simple FAQ example looks like this:

Q: Can I have some coffee?

A: No, I’d rather have some ribs.

With this FAQ pair stored in your NLP system, a user can ask a similar question, for example, “Coffee, please!” If your algorithm is smart enough, it will figure out that “Coffee, please” closely resembles “Can I have some coffee?” and will output the corresponding answer, “No, I’d rather have some ribs.” That’s the basic idea.
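To make the matching step concrete, here is a toy sketch in Python. It is purely illustrative: a real system would use a proper NLP model, and the Jaccard token-overlap score used here is just a stand-in for “smart enough”.

```python
# Toy FAQ matcher: score the user query against each stored question by token overlap
# (Jaccard similarity) and return the answer of the best-matching question.
faq = {
    "Can I have some coffee?": "No, I'd rather have some ribs.",
    "What are your opening hours?": "We are open 9am to 5pm, Monday to Friday.",
}

def tokenize(text):
    return {t.strip("?,.!'\"").lower() for t in text.split()}

def answer(query):
    def overlap(question):
        q, u = tokenize(question), tokenize(query)
        return len(q & u) / len(q | u)
    best_question = max(faq, key=overlap)
    return faq[best_question]

print(answer("Coffee, please!"))   # -> "No, I'd rather have some ribs."
```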

For a very long time, FAQ search was based solely on inverted indexing: you first tokenize the original sentences and feed the tokens and documents into a system such as ElasticSearch, which uses an inverted index for lookup and algorithms like TF-IDF or BM25 for scoring.
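As a rough approximation of this classic setup, the sketch below uses scikit-learn’s TF-IDF vectorizer and cosine similarity in place of a full ElasticSearch deployment; the questions and query are the toy examples from above, and the scoring is simplified compared to BM25.

```python
# A simplified stand-in for the inverted-index approach (no ElasticSearch here):
# rank the stored questions against a query with TF-IDF weights and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

questions = [
    "Can I have some coffee?",
    "What are your opening hours?",
]

vectorizer = TfidfVectorizer()
question_vectors = vectorizer.fit_transform(questions)   # term-based index of the FAQ

query_vector = vectorizer.transform(["Coffee, please!"])
scores = cosine_similarity(query_vector, question_vectors)[0]
print(questions[scores.argmax()], scores.max())          # best lexical match and its score
```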

This approach worked well enough until the deep learning era arrived. Its most substantial problem is that neither tokenization nor inverted indexing takes the semantics of a sentence into account. For instance, a user might say “Can I have a cup of cappuccino?” instead. With tokenization and inverted indexing alone, there is a good chance the system won’t recognize “coffee” and “a cup of cappuccino” as the same thing and will fail to understand the request. AI engineers had to build a lot of workarounds for these kinds of issues.

Things got much better with deep learning. With pre-trained models like BERT and pipelines like Towhee, we can encode all the sentences into vectors, store them in a vector database such as Milvus, and simply compute vector distances to measure how semantically similar sentences are.
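A minimal sketch of the semantic approach is shown below. The article names Towhee and Milvus; to keep the example self-contained, this sketch assumes the sentence-transformers library and an in-memory similarity computation instead of a real vector database.

```python
# Semantic matching with sentence embeddings (in-memory cosine similarity stands in
# for a vector database such as Milvus; the model name is just a common example).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

questions = ["Can I have some coffee?", "What are your opening hours?"]
question_embeddings = model.encode(questions, convert_to_tensor=True)

query_embedding = model.encode("Can I have a cup of cappuccino?", convert_to_tensor=True)
scores = util.cos_sim(query_embedding, question_embeddings)[0]

best = int(scores.argmax())
print(questions[best], float(scores[best]))   # the coffee question should score highest
```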

The algorithm behind conversational user interfaces.

AI-powered Call Quality Control

Call centers are indispensable for many large companies that care about customer experience. To spot issues and improve call quality, assessment is necessary. The problem is that the call centers of large multinational companies receive a tremendous number of inbound calls per day, so it is impractical to listen to each of those millions of calls and evaluate them. Most of the time, when you hear “To improve our service, this call may be recorded” on the other end of the phone, it doesn’t mean your call will actually be checked for quality of service. In fact, even in big organizations, only 2–3% of calls are replayed and checked manually by quality control staff.

A call center. Image source: Pexels by Tima Miroshnichenko.

This is where NLP can help. An AI-powered call quality control engine can automatically spot issues in calls and handle massive volumes of calls in a relatively short period of time. The engine helps detect whether the call operator uses the proper opening and closing sentences and avoids banned slang and taboo words during the call. This can raise the check rate from 2–3% to 100%, with less manpower and lower costs.

With a typical AI-powered call quality control service, users first upload the call recordings to the service. Automatic speech recognition (ASR) then transcribes the audio files into text. All the text is vectorized using deep learning models and stored in a vector database. The service compares these text vectors against vectors generated from a set of criteria, such as taboo-word vectors and vectors of the desired opening and closing sentences. With efficient vector similarity search, handling large volumes of call recordings becomes more accurate and less time-consuming.
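The screening step might look roughly like the sketch below. It assumes the ASR transcription has already been done, uses sentence-transformers for the embeddings, and checks each transcript sentence against a hypothetical list of banned phrases; the 0.6 threshold is arbitrary and only for illustration.

```python
# Sketch of the screening step only: the ASR transcript is assumed to exist already,
# and each sentence is flagged if it is semantically close to a banned phrase.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

banned_phrases = ["that's not my problem", "stop wasting my time"]   # hypothetical list
transcript = [
    "Thank you for calling, how can I help you today?",
    "Honestly, this is not my problem to solve.",
    "Is there anything else I can do for you?",
]

banned_vectors = model.encode(banned_phrases, convert_to_tensor=True)
sentence_vectors = model.encode(transcript, convert_to_tensor=True)

scores = util.cos_sim(sentence_vectors, banned_vectors)   # one row per transcript sentence
for sentence, row in zip(transcript, scores):
    if float(row.max()) > 0.6:                            # arbitrary illustrative threshold
        print("flagged:", sentence)
```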

Intelligent outbound calls

Believe it or not, some of the phone calls you receive are not from humans! Chances are that it is a robot talking from the other side of the call. To reduce operation costs, some companies might leverage AI phone calls for marketing purposes and much more. Google launched Google Duplex back in 2018, a system that can conduct human-computer conversations and accomplish real-world tasks over the phone. The mechanism behind AI phone calls is pretty much the same as that behind chatbots.

Google assistant.
A user asks the Google Assistant for an appointment, which the Assistant then schedules by having Duplex call the business. Image source: Google AI blog.

In other cases, you might have also heard something like this on the phone:

“Thank you for calling. To set up a new account, press 1. To modify your password to an existing account, press 2. To speak to our customer service agent, press 0.”,

or in recent years, something like (with a strong robot accent):

“Please tell me what I can help you with. For example, You can ask me ‘check the balance of my account’.”

This is known as interactive voice response (IVR): an automated phone system that interacts with callers and acts on their answers and choices. Callers are usually offered a menu of options, and their choice determines what the system does next. If the request is too complex, the system can route the caller to a human agent. This greatly reduces labor costs and saves companies time.

Intents are very helpful when dealing with calls like these. An intent is a group of sentences or utterances representing a certain user intention. For example, “weather forecast” can be an intent, and it can be triggered by many different sentences; see the Google Dialogflow example pictured below. Intents can be combined to carry out complicated interactive human-computer conversations, such as booking a restaurant table or ordering a flight ticket.

Google Dialogflow.
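Here is a toy sketch of intent matching, not Dialogflow’s actual API: each intent holds a few training phrases, an utterance is routed to the best-overlapping intent, and anything below a threshold falls back to a human agent. The intents, phrases, and threshold are all invented for illustration.

```python
# Toy intent routing (not Dialogflow's API): each intent holds a few training phrases,
# the utterance goes to the intent with the highest token overlap, and anything below
# the threshold falls back to a human agent.
INTENTS = {
    "check_balance": ["check the balance of my account", "how much money do I have"],
    "reset_password": ["modify my password", "I forgot my password"],
}

def tokens(text):
    return {t.strip(".,?!'").lower() for t in text.split()}

def route(utterance, threshold=0.2):
    def best_overlap(phrases):
        u = tokens(utterance)
        return max(len(tokens(p) & u) / len(tokens(p) | u) for p in phrases)
    intent, score = max(
        ((name, best_overlap(phrases)) for name, phrases in INTENTS.items()),
        key=lambda pair: pair[1],
    )
    return intent if score >= threshold else "human_agent"

print(route("please check my account balance"))    # -> check_balance
print(route("I want to book a flight to Paris"))   # -> human_agent
```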

AI-powered call operators

By adopting NLP, companies can take call operation services to the next level. Conventionally, call operators have to consult a hundred-page professional manual to handle each customer call and solve each user problem case by case. The process is extremely time-consuming and, most of the time, fails to give callers a satisfying solution. With an AI-powered call center, however, dealing with customer calls can be both pleasant and efficient.

AI-aided call operators with greater efficiency. Image source: Pexels by MART PRODUCTION.

When a customer dials in, the system immediately looks up the customer and their order information in the database so that the call operator has a general picture of the case: the customer’s age, marital status, past purchases, and so on. During the conversation, the whole call is recorded, with a live chat log shown on the screen (thanks to live automatic speech recognition). Moreover, when a customer asks a hard question or starts complaining, the machine catches it automatically, searches the knowledge base, and suggests the best way to respond. With a decent deep learning model, the service can answer customer questions correctly more than 99% of the time and handle complaints with well-chosen words.

Knowledge graph

A knowledge graph is a graph of information consisting of nodes, edges, and labels. A node (or vertex) usually represents an entity: a person, a place, an item, or an event. Edges are the lines connecting the nodes, and labels describe the relationship between each pair of connected nodes. A typical knowledge graph example is shown below:

A sample knowledge graph. Source: A guide to Knowledge Graphs.
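A minimal sketch of this node/edge/label structure, using a handful of invented (subject, relation, object) triples and networkx purely for illustration:

```python
# Minimal node/edge/label structure: invented (subject, relation, object) triples
# stored in a labeled directed graph.
import networkx as nx

triples = [
    ("Ada Lovelace", "born_in", "London"),
    ("Ada Lovelace", "worked_with", "Charles Babbage"),
    ("Charles Babbage", "designed", "Analytical Engine"),
]

kg = nx.MultiDiGraph()
for subject, relation, obj in triples:
    kg.add_edge(subject, obj, label=relation)     # nodes = entities, edge labels = relations

# Traverse the graph: everything directly related to one entity.
for _, neighbor, data in kg.out_edges("Ada Lovelace", data=True):
    print(f"Ada Lovelace --{data['label']}--> {neighbor}")
```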

The raw data for constructing a knowledge graph may come from various sources: unstructured documents, semi-structured data, and structured knowledge. Various algorithms must be applied to this data to extract entities (nodes) and the relationships between them (edges); among other things, this involves entity recognition, relation extraction, label mining, and entity linking. To build a knowledge graph from documents, for instance, we first use deep learning pipelines to generate embeddings and store them in a vector database.
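As a hedged sketch of just the entity-recognition step, the snippet below uses spaCy’s small English model (assuming `en_core_web_sm` is installed); the extracted entities would become candidate nodes, while relation extraction and entity linking are separate steps not shown here.

```python
# Entity-recognition step only, using spaCy's small English model
# (assumes `en_core_web_sm` is installed); relation extraction and entity linking
# would be separate steps and are not shown here.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Ada Lovelace worked with Charles Babbage on the Analytical Engine in London.")

for ent in doc.ents:
    print(ent.text, ent.label_)   # extracted entities become candidate graph nodes
```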

Once the knowledge graph is constructed, it serves as the underlying pillar for more specific applications such as smart search engines, question-answering systems, recommender systems, advertising, and more.

Endnote

This article introduces the top five real-world NLP applications. Leveraging NLP in your business can greatly reduce operational costs and improve user experience. Of course, apart from the five applications introduced in this article, NLP can facilitate more business scenarios including social media analytics, translation, sentiment analysis, meeting summarizing, and more.

There are also a number of NLP+, or more generally AI+, concepts that have been gaining popularity in recent years. With AI + RPA (robotic process automation), for example, you can build smart pipelines that complete workflows automatically, such as an expense reimbursement workflow where you just upload your receipt and AI + RPA does the rest. There is also AI + OCR, where you take a picture of, say, a contract, and the AI tells you whether there is a mistake in it, for example, a company’s telephone number that doesn’t match the number shown in a Google search.

Source


Neuralink 2022 Update -Human Trials are coming

Let’s get into the latest updates on Elon Musk’s futuristic brain implant company Neuralink. Elon has been talking a lot lately about Neuralink and some of the applications that he expects it will be capable of, or not capable of, in the first decade or so of the product life cycle.

We know that Elon has broadly promised that Neuralink can do everything from helping people with spinal cord injuries, to enabling telepathic communication, curing brain diseases like Parkinson’s and ALS, allowing us to control devices with our thoughts, and even merging human consciousness with artificial intelligence.

But as we get closer to the first clinical human trials for Neuralink, things are starting to become a little more clear on what this Brain Computer Interface technology will actually do, and how it will help people. So, let’s talk about what’s up with Neuralink in 2022.

Neuralink Human Trials 2022

When asked recently if Neuralink was still on track for their first human trial by the end of this year, Elon Musk replied by simply saying, “Yes.” Which I think is a good sign. It does seem like whenever Elon gives an abrupt answer like this, it means that he is confident about what he’s saying.

For comparison, at around the same time last year, when asked about human trials of Neuralink, Elon wrote, “If things go well, we might be able to do initial human trials later this year.” Notice the significant difference in those two replies. Not saying this is a science or anything, but it is notable.

We also saw earlier this year that Neuralink were looking to hire both a Director and a Coordinator for Clinical Trials. In the job posting, Neuralink says that the director will “work closely with some of the most innovative doctors and top engineers, as well as working with Neuralink’s first Clinical Trial participants.”

We know that Neuralink have been conducting their surgical trials so far with a combination of monkeys and pigs. In their 2020 demonstration, Neuralink showed us a group of pigs who had all received Neuralink implants, and in some cases had also undergone the procedure to have the implant removed. Then in 2021, we were shown a monkey who could play video games without the need for a controller, using only his brain, which was connected with two Neuralink implants.

Human trials with Neuralink would obviously be a major step forward in product development. Last year, Elon wrote that, “Neuralink is working super hard to ensure implant safety & is in close communication with the FDA.” Previously, during Neuralink events, he has said that the company is striving to exceed all FDA safety requirements, not just meet them, much as Tesla vehicles exceed all crash safety requirements and score higher than any other car ever manufactured.

What can Neuralink Do?

As we get closer to the prospective timeline for human testing, Elon has also been drilling down a little more into what exactly Neuralink will be able to do in its first-phase implementation. It’s been a little hard to keep track when Elon is literally talking about using this technology for every wild thing imaginable - that Neuralink would make language obsolete, that it would allow us to create digital backups of human minds, that we could merge our consciousness with an artificial superintelligence and become ultra-enhanced cyborgs.

One of the new things that Elon has been talking about recently is treating morbid obesity with a Neuralink, which he brought up during a live TED Talk interview. It’s not something we expected to hear, but the claim does seem to be backed up by some science. There have already been a couple of studies of brain implants in people with morbid obesity, in which the implant delivered frequent electrical pulses to the hypothalamus, a region of the brain thought to drive increases in appetite. It’s still too soon to know whether that particular method is really effective, but it would be significantly less invasive than other surgeries that modify a patient’s stomach in hopes of suppressing their appetite.

Elon followed up on the comment in a tweet, writing that it is “Certainly physically possible” to treat obesity through the brain. In the same post, Elon expanded on the concept, writing, “We’re working on bridging broken links between brain & body. Neuralinks in motor & sensory cortex bridging past weak/broken links in neck/spine to Neuralinks in spinal cord should theoretically be able to restore full body functionality.”

This is one of the more practical implementations of Neuralink technology that we expect to see. Electrical signals can be read in the brain by one Neuralink device and then wirelessly transmitted via Bluetooth to a second Neuralink device implanted in a muscle group, where the signal from the brain is delivered straight into the muscles. This kind of treatment has been done before with brain implants and muscular implants, but it has always required the patient to have a very cumbersome setup, with wires running through their body into their brain and wires running out of their skull into a computer. The real innovation of Neuralink is that it makes this all possible with very small implants that connect wirelessly, so just by looking at the patient, you would never know they have a brain implant.

Elon commented on this in another Tweet, writing, “It is an electronics, slash mechanical, slash software engineering problem for the Neuralink device that is similar in complexity level to smart watches - which are not easy!, plus the surgical robot, which is comparable to state-of-the art CNC machines.”

So the Neuralink has more in common with an Apple Watch than it does with any existing brain-computer interface technology. And it is only made possible by the autonomous robotic device that conducts the surgery: the electrodes that connect the Neuralink device to the brain cortex are too small and fine to be sewn by human hands.

Elon touched on this in a response to being asked if Neuralink could cure tinnitus, a permanent ringing in the ears. Elon wrote, “Definitely. Might be less than 5 years away, as current version Neuralinks are semi-generalized neural read/write devices with about 1000 electrodes and tinnitus probably needs much less than 1000.” He then added that, “Future generation Neuralinks will increase electrode count by many orders of magnitude.”

This brings us back to setting more realistic expectations of what a Neuralink can and cannot do. It’s entirely possible that in the future, the device can be expanded to handle some very complex issues, but as it is today, the benefits will be limited. Recently a person Tweeted at Elon, asking, “I lost a grandparent to Alzheimers - how will Neuralink address the loss of memory in the human brain?” Elon replied to say, “Current generation Neuralinks can help to some degree, but an advanced case of Alzheimers often involves macro degeneration of the brain. However, Neuralinks should theoretically be able to restore almost any functionality lost due to *localized* brain damage from stroke or injury.”

So, because those 1,000 electrodes can’t go into all areas of the brain all at once, Neuralink will not be effective against a condition that afflicts the brain as a whole. But those electrodes can be targeted on one particular area of damage or injury, and that’s how Neuralink will start to help in the short term, and this will be the focus of early human trials.

During his TED Talk interview, Elon spoke about the people that reached out to him, wanting to participate in Neuralink’s first human trials. Quote, “The emails that we get at Neuralink are heartbreaking. They'll send us just tragic stories where someone was in the prime of life and they had an accident on a motorcycle and now someone who’s 25 years old can’t even feed themselves. This is something we could fix.” End quote.

In a separate interview with Business Insider that was done in March, Elon talked more specifically about the Neuralink timeline, saying, “Neuralink in the short term is just about solving brain injuries, spinal injuries and that kind of thing. So for many years, Neuralink’s products will just be helpful to someone who has lost the use of their arms or legs or has just a traumatic brain injury of some kind.”

This is a much more realistic viewpoint than what we’ve seen from Elon in past interviews. On one episode of the Joe Rogan podcast, Elon claimed that five years from now language would become obsolete because everyone would be using Neuralink to communicate with a kind of digital telepathy. That could have just been the weed talking, but I’m hoping that the more realistic Elon’s messaging becomes, the closer we are getting to a real medical trial of the implant.

And finally, the key to reaching a safe and effective human trial is going to be that robotic sewing machine that threads the electrodes into the cortex. Elon referred to it as being comparable to a CNC machine. Because as good as the chip itself might be, if there isn’t a reliable procedure to perform the implant, then nothing can move forward. The idea is that after a round section of the person’s skull is removed, the robot comes in and places the tiny wires into very specific areas in the outer layer of the brain - these don’t go deep into the tissue; only a couple of millimeters is enough to tap into the neural network of electrical signals. In theory this can all be done in a couple of hours while the patient is still conscious - they would get an anesthetic to numb their head, obviously, but they wouldn’t have to go under full sedation, and so could be in and out of the procedure in an afternoon. It’s a very similar deal to laser eye surgery - a fast and automated method for accomplishing a very complex medical task.

That’s what this Twitter user was referencing when he recently asked how close the new, version two of the Neuralink robot was to inserting the chip as simply as a LASIK procedure. To which Elon responded, quote, “Getting there.”

We know that the robot system is being tested on monkeys right now, and from what Elon says, it is making progress towards being suitable for human trials.

The last interesting thing that Elon said on Twitter in relation to Neuralink was his comment, “No need for artificial intelligence, neural networks or machine learning quite yet.” He wrote these out as abbreviations, but these are all terms that we are well familiar with from Tesla and their autonomous vehicle program. We know that Elon is an expert in AI and he has people working for him at Tesla in this department that are probably the best in the world. This is a skill set that will eventually be applied at Neuralink, but to what end, we still don’t know.


Are We Living in a Simulated Reality?

 

According to some theorists, we are living in a simulated reality. This theory is based on the idea that the world we experience is nothing more than a computer simulation. Furthermore, some scientists believe that an advanced civilization could create this simulation.

We spend so much time inside computers and phones that it’s hard to imagine life without them. But what if we’re living in a simulated reality?

Some people think that computers could be creating simulations of different worlds in which to play, while others believe that our entire reality could be just one extensive computer simulation.

What is defined as Real?

When discussing what is real, it’s important to define what is meant by the term. For some, reality is what can be experienced through the five senses; anything that exists outside of that is considered fake or simulated.

Others may believe that reality is more than just what can be perceived with the senses. It may also include things that are beyond our understanding or knowledge.

In the movie “The Matrix,” Morpheus asks Neo what is real. This is a question that people have asked throughout history. Philosophers have debated this question for centuries. What is real? Is it the physical world that we can see and touch? Or is it something else?

What is real? How do you define ‘real’? If you’re talking about what you can feel, what you can smell, what you can taste and see, then ‘real’ is simply electrical signals interpreted by your brain.

-Morpheus, The Matrix

 

Some people believe that there is more to reality than what we can see and touch. They believe that a spiritual world exists beyond our physical world. Others believe that reality is nothing more than an illusion.

There is no single answer to this question, as it varies from person to person. What one person considers real may not be seen as such by someone else, which makes it a difficult topic to debate or discuss.

The Matrix: A movie or a Documentary?

There is a lot of debate over whether the 1999 movie The Matrix is a work of fiction or a documentary.

The Matrix is a movie based on the idea of simulated reality. It asks the question: what if our world is not what we think it is? What if we are living in a simulation? The movie takes this idea and runs with it, creating a believable and fascinating world.

 

However, some people believe that The Matrix is more than just a movie. They think it is a documentary: that our world is a simulated reality and we live in it without knowing. While this may seem like a crazy idea, it does have some basis in science.

Simulated reality is something that scientists are actively studying, and there is evidence suggesting it could be possible. So, while The Matrix is a movie, the idea it explores, a simulated reality, may have some grounding in fact.

The Simulation Theory

The theory is that we might be living in a simulated reality. Proponents of the simulation theory say that it’s plausible because computing power increases exponentially.

If it were possible to create a simulated world indistinguishable from reality, why wouldn't a sufficiently advanced civilization do so?

Some scientists even believe that we’re already living in a computer-generated simulation and that our consciousness is just a program or algorithm.


A theory suggests that we are all living in a simulated reality. This theory, known as the simulation theory, holds that at some point in our history, humans created a computer program that allows us to experience life as if we were living in the real world.

Some people believe that this theory could explain the mysteries of our existence, such as why we are here and what happens when we die.

The idea was first proposed by philosopher René Descartes in 1641, but it wasn’t until the 1970s that the theory began to gain popularity, thanks to the development of computers and, later, artificial intelligence.

Then, in 2003, philosopher Nick Bostrom published a paper titled “Are You Living in a Computer Simulation?” which revived interest in the theory.

While there’s no definitive proof that we’re living in a simulation, the theory raises some interesting questions.

What if everything we experience is just an illusion? What does that mean for our understanding of reality and ourselves?

How could we know if we’re living in a simulation?

There are a few different ways to determine whether or not we’re living in a simulation. One way is to look at the feasibility of creating a simulated world. If it’s possible to create a simulated world that is indistinguishable from the real world, we’re likely living in a simulation.

Another way to determine if we’re living in a simulation is to look at the development of artificial intelligence. If artificial intelligence surpasses human intelligence and becomes able to create its own simulations, then it’s likely that we’re living in a simulated world.

Whether or not we live in a computer-generated simulation has been debated by philosophers and scientists for centuries. Still, recent advancements in artificial intelligence (AI) have brought the topic back into the spotlight.

Some experts believe that if we create intelligent machines, they could eventually become powerful enough to create their own simulations, leading to an infinite number of universes, including ours.

So how could we know if we’re living in a simulation? One way would be to see if the laws of physics can be simulated on a computer. Another approach is to look for glitches or inaccuracies in the universe that could suggest it’s fake. However, both methods are complicated to execute and may not provide conclusive results.

The bottom line is that we may never know whether or not we’re living in a simulation.

Final Thought

While the likelihood that we are living in a simulated reality is still up for debate, the ramifications of such a possibility are far-reaching.

If we were to find ourselves in a simulated world, it would force us to re-evaluate our understanding of reality and what it means to be human. It would also raise important questions about the nature of existence and our place in the universe.

Apr 18



Source


Let’s discuss Functional NFTs

Functional NFTs are changing the way we interact with each other and with games. NFTs used to be limited to products, but now they put a value on services too: with functional NFTs, you can choose to buy an experience rather than a piece of art.

Non-fungible tokens (NFTs) have stirred things up in the world of art. While the underlying technology remains simple, NFTs have morphed into multiple applications, some of which we will discuss below. Traditionally there have been five categories of NFTs: collectibles, game assets, virtual land, crypto art, and others (including domain names and property titles). Now another category has been getting some buzz in the industry: “functional NFTs”.

What are Functional NFTs?

Let’s start with what functional NFTs are. The meaning is clear from the name itself: NFTs that provide some sort of functionality. It could be a game asset that performs a function; for example, if a game offers an avatar as an NFT and that avatar provides certain functionality, such as accruing points or giving the player a special power, it can be called a functional NFT.

Another example is an NFT created by a restaurant owner that works as a pass for one person to have dinner at the restaurant on a Sunday; the NFT has a function and serves a given purpose. In a similar fashion, imagine walking into a club without having to stand in line. There can be an NFT for that too: owning it gives you access to the club, and since ownership is verifiable, nobody needs to check your ID.

Normal vs Functional NFTs

There has been a heated debate about value accrual in normal versus functional NFTs. The argument is that non-functional NFTs are easier to make and sell quickly on the market, and therefore acquire value quickly. Functional NFTs, such as those in games, require more thought: it takes time to build a great experience around the basic utility of the NFT.

Consequently, they take more time to build value. Take Axie Infinity, a Pokémon-like game that lets players collect, breed, and battle creatures. It launched in 2018, but it was quite different then from what it is now: the developer team went through multiple iterations to refine the game experience, and only once the experience was refined did the NFT assets within the game accrue value. This phenomenon is termed the “promise effect”: an NFT that promises some experience accrues value more slowly than a non-functional NFT.

A new type of Functional NFTs

HODL Valley, a new metaverse gaming project, is trying to create a tokenized city. One of its many features is functional NFTs, and these NFTs take the idea a step further. HODL Valley contains around 24 different locations, each with a specific function and utility. The locations are connected to DApps that carry out the functionality for users; they can be purchased in-app, and the revenues they generate go to the NFT owner. For example, say a bank is represented by an NFT. Since it’s connected to a DApp, it can provide lending and borrowing services, and as other players use the bank, the NFT owner, who is in turn the owner of the bank, earns an income stream from it. That is how functional NFTs have been pitched recently.

Functional NFTs are bound to change the way we interact with games and with real life. With added functionality, individuals get a unique experience: the token no longer just represents value, it performs a function in itself. If NFTs are a kind of money, until now they were only selling products; now they are moving into services too.

Source


Tech’s On-Going Obsession With Virtual Reality

 
KEY TAKEAWAYS

Virtual reality and augmented reality have been steadily evolving for decades, but still haven't lived up to the expectations of many. Here's a look at the current state of VR and AR, and where they're likely to go.

Virtual reality (VR) has been one of the most important technological crazes of modern times. Although the original idea can be traced back to the early '80s, in the last few years we've kept hearing the same question being asked over and over:

"Is THIS the year of VR?"

Because of the inherent limits of our current technologies, VR still struggles to make its breakthrough and become an everyday use product. (Read also: VR/AR Where We Are and Where We Came From.)

Before diving deeper into the topic, let's first take a look at what VR was supposed to be, and what it actually has become, or at least promises to be, instead.

What Is Virtual Reality?

VR equipment consists of headsets and other gadgets used to project a person's virtual image in an artificial world. The general idea is to be able to interact within a virtual reality that is as realistic as possible with objects and other individuals that may also share the same space. In addition to traditional VR goggles, many other items such as gloves and headphones have been added to modern equipment.

Virtual reality seemed to capture public imagination during the '80s and '90s, when movies like "Johnny Mnemonic" and "The Lawnmower Man" fired up a real craze. However, back then, this technology was still very rudimentary and never managed to go beyond unreliable devices such as the infamous Nintendo Power Glove.

Today VR development has come back with devices such as the Oculus Rift, YouTube 360° videos and... well... obviously full-immersive adult movies.

Differences Between Virtual Reality and Augmented Reality

Virtual Reality should not be confused with augmented reality (AR). VR tries to simulate reality through visual and auditory stimulation, while AR just builds on existing reality by enhancing it with digital projections.

AR usually consists of apps and software used on mobile devices to add virtualized elements to the real world. (Read also: Augmented Reality 101.)

Examples of AR include pop-out 3-D emails and text messages, virtual makeup mirrors and apparel color-changing apps. AR can be used to enhance reality by, for example, building physical objects via 3-D printers after they have been "virtualized" from 3-D pictures.

VR offers a believable reconstruction of real-life for entertainment purposes, while AR adds virtual elements to the real world.

Current Status and Future Potentialities

Silicon Valley kept building VR for quite some time, but where is this technology now, other than the fleeting entertainment that "Pokémon Go" provided us with?

Truth be told, much of the current hype about VR technology revolves around a few interesting gadgets. One of the most popular VR headsets is the Oculus Rift, which began as a Kickstarter campaign before Facebook bought it in 2014. Together with the Sony PlayStation VR and the HTC Vive, these devices revolutionized the gaming scenario.

The addition of integrated hardware such as motion-tracked controllers and an extremely immersive experience made these headsets quite popular among gamers. However, the relatively small gaming library and a price that is still far from truly being affordable to the average person are factors that currently prevent these from becoming mainstream.

VR tech is more than just video games, though. According to experts' predictions, in the next 10 years the VR sector will be worth $38 billion. Retailers such as Ikea started their first experiments to let customers view and move about their new appliances or kitchens via a virtual reality headset and controllers. Marks & Spencer launched its first virtual reality showrooms and Volvo designed a virtual driving experience with the Google Cardboard headset.

Will VR Be the Future of Smartphones?

Extremely influential individuals such as Mark Zuckerberg have provided some interesting insight into how current smartphone technology has seemingly reached a technological impasse. In his opinion, the competition with Google and Apple is preventing Facebook from developing its full potential in the VR world.

Integration between smartphones and VR may instead be the most probable path forward. Programming legends such as John Carmack (the father of "Doom" and "Quake III Arena") are betting on the development of Gear VR, a technology that can make smartphone VR a reality. It's still too early to say whether VR will become the future of social networks as a whole, but this is definitely the direction in which Google Glass and Microsoft's HoloLens are looking.

Possible Medical Applications of VR Technology

One of the latest trends for VR tech is to use it to treat some diseases and conditions. A lot of medical research on its possible applications other than entertainment and media is going on. VR headsets have been used to help phobic patients fight their fears in a controlled environment.

Soldiers who suffer from post-traumatic stress disorder (PTSD) have been treated with it since 1997, when Georgia Tech developed the first Virtual Vietnam VR. Other applications include pain management and social cognition training for autistic patients. (Read also: How AI in Healthcare is Identifying Risks and Saving Money. )

Augmented reality, on the other hand, is currently being used for advanced 3-D imaging by surgeons at the Lucile Packard Children’s Hospital Stanford and Stanford Health Care. Physicians can get a better view of patient anatomy that helps them during delicate operations such as valve replacement.

Controversial Aspects

Just like any other groundbreaking discovery, VR technology is not devoid of potentially negative aspects. A very modern controversy recently arose, since it's almost inevitable that a large portion of the VR landscape will focus on the adult entertainment industry.

This world is still seen as a male-dominated one that only recently saw some form of parity in the form of LGBT adult material. A new technology may, however, cause this hard-won progress to take several steps backward. Larger companies will probably focus on mainstream male-oriented content, forcing niche audiences to be initially crowded out, if not excluded.

Other possible controversies include social isolation and ethical issues (mostly related to video gaming violence). As violence in the form of firefights and armed battles will take place in such a realistic and immersive way, younger or psychologically unstable consumers can be strongly affected. (Read also: Finite State Machine: How it Has Affected Your Gaming for Over 40 Years.)

Whether this influence would be negative or positive is yet unknown, but many developers would have to ensure that the content of a game can still be perceived as different from reality. Striking the right balance between fiction and realism can be hard, however, as the sense of distance that usually provides players with a safety net can be lost.

Final Thoughts

Despite the hype, VR technology is still in its earliest stages of development. However, it definitely is an enfant prodige, and we surely want to be there to witness the moment when this promising invention will finally go beyond its first steps.

By Claudio Buttice
Published: August 28, 2020 | Last updated: February 17, 2022