
European Union limits targeted advertising and content algorithms under new law

The Digital Services Act could reshape the internet.
 

Following a marathon 16-hour negotiation session, the European Union reached an agreement early Saturday to adopt the Digital Services Act. The legislation seeks to hold the world's tech giants more accountable by creating new obligations that companies of all sizes must meet once the act takes effect in 2024. Like the Digital Markets Act before it, the DSA could have far-reaching implications, some of which could extend beyond Europe.

While the European Commission has yet to release the final text of the Digital Services Act, it did detail some of its provisions on Saturday. Most notably, the law bans ads that target individuals based on their religion, sexual orientation, ethnicity or political affiliation. Companies also cannot serve targeted ads to minors.

Another part of the law singles out recommendation algorithms. Online platforms like Facebook will need to be transparent about how those systems decide what content to show users. They will also need to offer alternative systems "not based on profiling," meaning more platforms would need to offer chronological feeds. Additionally, some of the largest platforms will be required to share "key" data with vetted researchers and NGOs so those groups can provide insight into "how online risks evolve."

"Today's agreement on the Digital Services Act is historic, both in terms of speed and of substance," said European Commission President Ursula von der Leyen. "It will ensure that the online environment remains a safe space, safeguarding freedom of expression and opportunities for digital businesses. It gives practical effect to the principle that what is illegal offline, should be illegal online."

Under the DSA, the EU will have the power to fine tech companies up to six percent of their global turnover for rule violations, with repeat infractions carrying the threat of a ban from the bloc. As The Guardian points out, in the case of a company like Meta, that would translate into a single potential fine of approximately $7 billion.

The DSA differentiates between tech companies of different sizes, with the most scrutiny reserved for platforms that have at least 45 million users in the EU. In that group are companies like Meta and Google. According to a recent report, those two, in addition to Apple, Amazon and Spotify, collectively spent more than €27 million lobbying EU policymakers last year to change the terms of the Digital Services Act and Digital Markets Act. The laws could inspire lawmakers in other countries, including the US, as they look to pass their own antitrust laws.

"We welcome the DSA's goals of making the internet even more safe, transparent and accountable, while ensuring that European users, creators and businesses continue to benefit from the open web," a Google spokesperson told Engadget. "As the law is finalized and implemented, the details will matter. We look forward to working with policymakers to get the remaining technical


The Digital Services Act can be viewed here.


The Identity Paradigm

Tony Gregory, intercultural psychologist

In 1962, Thomas Kuhn published the most important intellectual work of the 20th century, The Structure of Scientific Revolutions. In it he argued against the long-held belief that scientific progress was an uninterrupted and steady continuum. He posited instead that progress came in fits and starts – long periods of calm managed according to widely accepted beliefs and customs, interspersed with brief, violent periods of enormous change, like the Renaissance, when all that had been accepted before was challenged and frequently overthrown. He called these brief, violent periods 'paradigm shifts,' and since that time the idea has become an accepted part of how we see our world.

It was not long after that Alvin Toffler wrote Future Shock, in which he argued not only that Kuhn was correct, but that the periods of relative stability between the brief and violent episodes of change were becoming shorter – so short, in fact, that they challenged our ability as humans to adjust to one set of revolutionary changes before another was already upon us.

He gave as an example the changing speed of travel and communication. When Julius Caesar marched his legions from Gaul into Italy in the first century BC, the journey took more or less the same time as it took Napoleon to cover the same distance roughly eighteen hundred years later. But only about forty years after that, the railroad linking France and Italy was completed, cutting the journey from two months to three days. When Lincoln was assassinated in 1865, the news had reached San Francisco by noon the next day. I saw the assassination of Robert Kennedy live – as it happened – a century later. There are many examples you can give, but the impact is similar: changes coming at such a fast pace produce stress, and stress is the handmaiden of paradigm change.

One of the most important insights about paradigm shifts is that the animals that thrived under the rules of the previous paradigm did not do well in the new one if they kept following those same rules, because all the rules had changed (just ask the dinosaurs). People who owned stables during the age of agriculture were no longer at the center of things when the automobile replaced the horse as the accepted means of transportation. The message is clear: if the paradigm changes and you don't, your future looks bleak.

But it is important to point out that not all paradigm changes are the same. The industrial revolution was a definite change in paradigms, and economic power in the world shifted dramatically from an emphasis on ownership of land to an emphasis on access to raw materials and the means of production. Yet the family structure survived the change, as did religion and nationalism.

The change from the ice age to the Holocene epoch we presently inhabit was also a paradigm shift, and one far more powerful than the movement from agriculture to industry. When the glaciers finally retreated and the planet warmed, our species (Homo sapiens, in case you forgot) spread around the globe and our numbers exploded, because it became possible for us to sustain ourselves in far larger groups. That in turn allowed us to do things we had never done before, like build permanent dwellings and use the land to provide us with food on a continual basis – what we came to call agriculture.

We actually started recording events then, some ten thousand years ago – we call it history. The concentration of our species in such large numbers created a need to order things, to solve disputes and regulate affairs, and that led to the birth of customs, religion and culture and the domestication of animals. I could go on but I think you get the point – the change was so dramatic that nothing that had been true before remained. It was a transformation.

The other thing to point out is that all of this happened slowly, over far more than one lifetime. The people who came south after the glaciers retreated were long gone before the first cities were built and the first empires were formed. The Akkadian Empire, the first human empire, was formed in Mesopotamia some 4,300 years ago – thousands of years after the glaciers began to retreat. We had time to adjust, time to consider how to respond to our new reality, time to try different ways of approaching things, and time to fail and try something else and still survive (unlike the Neanderthals).

Now, at the beginning of what we call our twenty-first century since we started writing stuff down, it appears that we are on the verge of a new paradigm shift, and possibly one as dramatic as that last big one when the ice retreated. If that is true, then we should remember that insight from so long ago – nothing that went before remained. That is the mark of a complete transformation.

It's tough for us to think about that because, whether we like it or not, we are children of our current paradigm, formed by its assumptions, educated in its customs and brainwashed accordingly. We find it difficult to think of ourselves without the things we are wedded to. Look, when Copernicus stepped forward in 1543 and said, "Uh… I just want to point out that the earth is not the center, it's the sun," even very smart people had a hard time wrapping their heads around it. It took literally a hundred years before it was accepted as scientific fact (except in parts of the United States, where science is still not accepted to this day). That is called denial of reality, and back then a lot of people were in that state for an extended period of time.

So when I step up and suggest that everything is about to change, not just the small stuff, I imagine that a lot of people – smart people – will find that hard to accept. Nevertheless, I think our ice age is about to end, and, in the spirit of Alvin Toffler, I think the new paradigm will be upon us so quickly that we will not have much time to react. So, with that proviso, here is my preview of the next paradigm. Please forgive me if not all of the changes are of the same magnitude and if I leave some out. I, too, am a child of our current paradigm, and like everyone else my ability to see ahead is both limited and subjective.

We have become accustomed to identifying ourselves in relation to other people: to our geographical location, to our membership in some political group (a nation), to our occupation, and to what we believe, which the more extreme among us label 'the truth.' So, I say I am a father, a husband, a member of a certain family, a citizen of a community and a nation, and I work as a psychologist – and all of that is about to change.

WORK

Let's start with the easy one – work. There is not enough of it to go around. In our current paradigm we regard unemployment as some sort of negative state, a disease that needs to be treated. We talk about work moving around the world and call it outsourcing. We act as if the lack of jobs in North America means those same jobs have somehow magically moved to Asia, and that perception is the cause of a great deal of unrest. None of that is true.

What is true is that human work, as we have come to know it during the last three centuries, is disappearing. What was once done by human labor is now done by machines. In a 2020 report on automation, the World Economic Forum predicted that by 2025, 53% of work would be performed by humans and 47% by machines – a 14-percentage-point shift toward machines from the year the report was issued. Carry that trend forward and all work will be done by machines before the year 2060. But forget the numbers game. The impact of automation is that work will cease to be the center of life, as it has been during the last three centuries.

It's not only that people will no longer physically move to find work, the way they moved from the country to the cities at the start of the industrial revolution. It means there will be no place to move to. The family will not have to sacrifice some part of its life so that the wage earner can do his job; there simply will be no wage earner. People's income from work will not have to be supplemented by government spending when it is not enough, because there will be no income from work. That is the nature of a complete transformation.

Income will not be apportioned on the basis of achievement (higher salary for work that is valued more highly) but existentially – you will not get money because of what you do but because of who you are. Iran became the first country to institute a nationwide basic income in 2010, and the idea has since been piloted in northern Europe and elsewhere. In an economic sense it is inevitable. If people depend on work for income, then when there is no work, people starve, and when people starve, they revolt and topple governments (just ask Louis XVI). Every government on earth will take steps to prevent that.

Once work is no longer a benchmark of identification, the status distributed on the basis of occupation or position will cease to exist. A manager will not be more important than a laborer; a doctor will not have higher status than a janitor because these jobs will cease to exist.  The subtle but unmistakable prejudice of assigning credibility based on occupation (doctors must be smarter than gardeners) will slowly fade away and people will be judged on who they really are rather than the work they perform.

Organizations will look completely different, and all the silly talk about organizational 'culture' will cease (thank God) because machines have no need of culture. The center of life will no longer be the workplace; there will be no traffic jams and no daily disruption of activities caused by the physical need to move from one place to another; and identity will have to emanate from something other than where you work, because there will be no such thing.

Some things will remain. There will probably be teachers to some extent, though most instruction will be provided by machines, and there will be caretakers for more intimate human contact, though again, basic medical functions will be fully automated. Entertainment may remain a human occupation in some form, though it is worth noting that much of the most popular entertainment is now animation (seven of the ten highest-grossing films of 2019 came from Disney studios, and the most popular films tend to feature animated characters rather than human beings).

The clincher in all of this is time. We had eons to adjust from a nomadic lifestyle to living in permanent communities. We will have just decades to adjust from a world with work to a world without work, and it will leave literally billions of people gasping to find something to do. Some people like to compare what will happen to that old experiment of putting a frog in a pot of lukewarm water and heating it slowly so the frog doesn't notice until he's cooked, but that isn't what will happen. The changes will be so fast that we will feel ourselves cooking, and it won't be pleasant.

FAMILY

Family has been the anchor of our identity for longer than work, probably for the last fifteen to twenty thousand years. It is without doubt the most emotionally-charged part of our identity, and most of our great works of literature deal with it from Oedipus to Anna Karenina. There is a natural inclination for a species to nurture its young; this is not exclusive to mammals. What is exclusive is the tendency of mammals to remain in units defined by a common blood line for an extended period of time, and among the mammals we humans are the champs. We extend our families for generations and we have made them the center of our lives, once again, for good and ill.

Part of the reason for this is survival. In the beginning if you were sick or injured you would not survive unless there were other people around you who cared enough to tend to you. More recently, the bond of survival has not been exclusively physical but also economic. Especially in the current generation, children in the west in particular are less well-off financially than their parents and without that support they would not make it. Like the man said, family is the place that when you go there they have to take you in.

There is an attendant pride that accompanies family identity, particularly when the family is adept either at maintaining a certain status (aristocracy, for example) or occupation (the military, for example). So, there are families of hostlers, shoemakers, haberdashers, iron-workers, doctors, and so on, and the connection between familial and occupational identity makes these families stronger over time. They exert pressure on their young to 'follow in their footsteps' and to adopt their ideals and beliefs, and believe this continuity has great value.

The industrial revolution weakened this bond for all but the wealthiest. It displaced millions of people, who found it necessary to move away from their place of origin to other communities in order to secure employment, and the division of labor into employers and employees weakened the family ties of the latter, in millions of cases making it impossible for them to maintain the occupation or trade of previous generations. The evolution of humanity from family-based to community-based dates from this time, about three hundred years ago.

But the real dismemberment of the family has come from prosperity. As people become wealthier, high on their agenda is the desire to distance themselves from others. We have now arrived at a situation in which more than one out of every four households in the United States is a single-person residence, and the situation in many major European cities is even more pronounced. In popular culture the familial bond has been replaced by the comradely bond, i.e. the people you meet are closer to you than people of your own blood. In turn, this has led to a decrease in marriages and birthrates, and it becomes a self-propagating loop.

The coming identity paradigm holds a future in which the individual will replace the family as the basic social unit. Clearly, this is such a revolution that it is difficult for most people to imagine, but it is on the way, supported by the development of virtual relationships as a replacement for close physical relationships, meaning the sensation of being close to a person without ever being in the same room with him or her.

This is already well underway, egged on by social media, which encourages the individual to remain physically isolated from others in favor of a virtual connection. It is a common sight now to see a group of people 'together' in a public place, not speaking to each other but each managing a dialogue on a cell phone with somebody who is not in the room.

Unlike the loss of work, which is a phenomenon not dictated or controlled by personal choice, this movement toward the individual in place of the family unit will take time, tempered by economic factors as well as strong cultural opposition, but it is coming nonetheless and will be the norm for most places on the planet by the end of the century.  There are already sections of big cities like Tokyo that are intended for the exclusive use of young people, as well as adult communities restricted to those over the age of 65.

Multi-generational living arrangements are already largely a thing of the past globally, particularly beyond the nuclear family. The cultural consequences of this change are immense and frankly frightening for me to contemplate. Practically, it means that we will need to find new ways to transfer property and assign responsibility (designated driver will replace parent). Emotionally, we will go through a hard time when we dismember old axioms like 'blood is thicker than water,' because quite clearly, with all of its attraction, collegial ties will never take on the commitment that blood ties have.  In the new identity paradigm, the family will disappear.

BELONGING

Belonging is such a central pillar of our current paradigm that it has been enshrined as a key component of mental health. People who shun contact with others are not just considered anti-social; they are labeled as mentally unwell (autistic). Mass movements were a central feature of the last two centuries, both political and social. Whether they were as benign as scouting organizations or as controversial as political protests, being part of some action which involved thousands of other people gathering together was a mainstay of life in every country on the planet. This is now coming to an end.

People will still voice their opinions, but they will do so online. Even dating has become a virtual activity rather than a night out; you check out a person's profile in the privacy of your own home long before you meet them.  The same is true of voting and all forms of political activity. Not only can it be done from the home, it is being done from the home. The key to watch here is sporting events, one of the more acceptable reasons to mix physically with thousands of other people. When people begin to prefer viewing the events on a screen rather than sitting in a stadium, public participation will be terminated because it will become unprofitable.

Again, there will still be instances where thousands if not millions of people will express their opinions on a common topic, but this will be done in real time, surveys conducted by pressing a button on your phone rather than driving to a common location.

The mental health community will be forced to rethink its conclusions about what it means to be alone. Indeed, loneliness itself will need to be redefined. Are you really alone (not lonely) if you are physically removed from everyone else but your cell phone is by your side? There will be a whole new list of mental conditions when the common living situation is one person alone. Clearly, there will be fewer problems resulting from interpersonal conflict (like domestic violence) because there will be fewer people living together. On the other hand, a whole new list of ailments will pop up because there will not be that other person in the room who can tell you when you are wrong. It will be a new world.

ARTIFICIAL INTELLIGENCE

Our present paradigm has been flavored with our conceit that we are masters of the world, that we can bend the natural laws to our will, that we have some sort of irresistible control over everything. I suppose the climate crisis is evidence enough of what a mistake that was, but there is something even closer to home that will shake us to our roots in the new paradigm – we are no longer calling the shots.

Artificial intelligence will be the driving force in the new paradigm, and algorithms will make decisions in a distinctly different way than human beings do. The lead elements of this new force are already changing the buying and selling of stocks and bonds and the application of medical procedures in hospitals all over the world. In the space of a few decades, all transportation will be directed by artificial intelligence, and drones and driverless vehicles will be the norm (there will be no more human drivers or pilots because they are too dangerous). Manufacturing is already largely there, and it will be completely automated by the middle of the century.

AI will take the lead in education and customer service, and the last pathetic attempts to suggest that the room for human work is simply moving to other occupations will fall silent. In the new paradigm we will cease to make decisions about anything other than what we want personally, and even that will be limited. This is the one that scares me the most, but unless I take advantage of the next big change I won't be around, so it won't matter.

Human beings are used to making decisions. For a long time our ability to do this well was intimately tied to our survival. The idea that this will be taken from us because AI does it better is a conclusion many of us will find hard to swallow, and we will be reaching for that phantom limb long after it has been removed. Old people who believe they can drive just as well at eighty as they did at twenty give a hint of what it will feel like. When the reality sets in that this is not true, it will likely be accompanied by a depression that will be very difficult to deal with, maybe even tied to the meaning of life. It will be a global emotional crisis that more than likely will trigger new forms of belief.

MORTALITY

Yuval Harari has been writing for some time about the conquest of death. At present, eight vital organs can be transplanted: the heart, kidneys, liver, lungs, pancreas, intestine, thymus and uterus. Artificial limbs are now commonplace, as are corneal transplants, artificial bladder implants, cochlear implants, and deep brain stimulation. Replacing the entire body, other than some higher functions of the brain, is now a distinct possibility before the middle of the century.

That means that your body no longer defines who you are, nor are you limited to a specific number of years before you 'die.' 'Life' will have to be redefined when it is not followed by the modifier 'time.' Immortality is a daunting moral and philosophical challenge, but it is no longer a physical one. It is very likely that the possibility of living longer will have a dramatic effect on birthrates, as the idea of passing the torch to a new generation, what Richard Dawkins called The Selfish Gene, will become a remnant of thinking from the previous paradigm, because that thinking is based on the assumption that the existing organism cannot sustain itself beyond a certain date.

No doubt the conquest of mortality will also lead to significant changes in relationships that were previously thought of (at least in theory) as lifetime commitments, like marriage and even parenthood. It will also be marked by the development of a whole new industry dedicated to the total replacement of the body, possibly with gender changes thrown in for a little spice – live eighty years as a man and another eighty years as a woman.

Immortality combined with artificial intelligence will demand an entire rethinking of the role of Homo sapiens on the planet, as well as how we define spirituality (if all of us are immortal, how does this change the status of deities?). It is a daunting prospect. Things that we regarded as one-time decisions will lose that distinction, and almost everything will become choice-determined. Death itself will become a decision, not an inevitability, and this alone will completely reshape philosophy and morality.

NATIONALITY

For the past several centuries we have defined ourselves as members of one nationality or another to such an extent that human beings were willing to die to protect or extend that abstract concept, something that commanded our loyalty even more than family or religion.

Most of us tend to forget our previous participation in smaller political units like tribes and regions, and for the most part these remain romantic abstractions, lacking the full force of what it means to be a citizen of a country. Those pictures of Uncle Sam pointing his finger at you and calling on you to enlist are not just propaganda; they are the expression of a country's belief that it has the right to demand that its citizens give their lives to protect it. In the country in which I live this is a reality, and the state is by law authorized to exert its domain over the private lives of its citizens.

Because of the maximum commitment it involves, most of us are highly emotional about what we call our national identity. Yet nations, too, may not be a part of the next paradigm, as difficult as that is to believe. There is a contractual need for people to align themselves with a large political entity that manages an infrastructure. We need water, electricity, transportation systems and supply chains, and these are arrangements beyond the power or resources of any individual. But they are definitely contractual, and by no means the exclusive right or ability of nations.

In practice – not theory, practice – power companies in the United States can supply energy to all the homes of North America and maybe South America as well. The practice of ending the power grid at a country's borders is a political decision, not a technological one.

There is also no practical reason why a person living in Caracas cannot contract with a company halfway around the globe, say in India, for the supply of needed services, if that supplier is capable of meeting the demand. When it becomes clear that services formerly reserved for nations – security, welfare, transportation, health, energy, waste disposal, and more – can be supplied to individuals by a more effective alternative, then the grip of nations on individuals will slip.

The people of Catalonia do not want to be part of Spain, and the people of California have their doubts about the United States, yet this dissatisfaction with the larger national unity is still just a little step, the dismantling of larger political bodies into smaller ones.

There is a real possibility that the next paradigm holds a much more dramatic change in store – the alliance of the individual with an organizing structure beyond nations. Instead of a process of unification that produces ever bigger political bodies, think of it in the other direction – thousands of service providers dealing with consumers directly, on a non-geographical basis, without using a government as an agent.

So, for example, the person living in London might receive his mail from a supplier in Delhi, his power from a supplier in Norway, his security from a company in Scotland, and his health from an organization in Switzerland. He may still consider himself English, but this will have more to do with his physical surroundings than with the political structure associated with it.

Quite clearly such a dramatic change has immeasurable implications for property ownership and civil legislation of every kind – and I don't even want to think about the number of lawyers required to work it out – but the point is that on a practical level it is indeed possible. It is only the abstract concept of nations, for which so many people laid down their lives in the previous century, that keeps it from happening. Nations have traditionally promoted themselves through opposition to other nations, a practice which was expensive and bloody (we are better than they are; they want to kill us, so let's kill them first). If there is a business model that proves to be much more cost-efficient than the national one (and less bloody), it will come to pass, and within the next one hundred years, though I know how hard that is to believe. Yes, nations may be a thing of the past.

There will be a lot of gnashing of teeth when contemplating the alternatives. There will remain a true need to collect public money in order to finance projects for the good of all (taxes), and there will always be disagreements over decisions made and a need to handle the losers so that they do not act to disrupt the system – all of that is true, but there is no natural law that says this must be the work of nations. The fact is that many nations are artificial in the extreme, the deformed children of colonialism – places like Pakistan and India and many states in Africa. The attempt to supplant such constructions with something more effective is a positive idea, and it will be pursued.

RELIGION

The final pillar of identity that will be challenged in the new paradigm is belief. For the last millennium, many individuals have defined who they are by their membership in some religious movement, with Christianity and Islam being the most prominent recent examples. More blood has been spilled over the last millennium trying to sway different parts of the world to one religion or another than for any other cause. That dominance was challenged half a millennium ago, when Western Christianity came apart into the disparate elements of Protestantism and Catholicism – echoing the far older division of Islam into Sunni and Shia. Still, many nations are defined by their religion. There are more than 80 nations today that officially give preference to one religion over another, including the one in which I reside.

Yet that, too, will be challenged by the impact of the new identity paradigm. In 2020, church membership in the United States dropped below 50% for the first time since Gallup began measuring it. The American Mosque Survey reported a similar decline in the number of African Americans attending mosques in the United States. Similar trends are found in Europe. The Muslim population in Asia is still growing, but at a slower rate than half a century ago. Christianity in Latin America is becoming increasingly Pentecostal and less Catholic.

This does not mean that religion will play no role in the new paradigm, but it does seem to indicate that the role will be much more individualized and much less public. In other words, the practice of mass movements of people professing the same belief attempting to forcibly take over various parts of the world to install that belief seems to be coming to an end. It will take some time to realize that, but certainly most everyone can see that religious leaders today, of whatever ilk, are less able to sway global events than they were even a hundred years ago.

Nations like Iran may still claim some sort of religious intent in their dealings with other nations, but this will become much less convincing during the next few decades, and most people will see it for what it really is – a political movement masquerading as a belief. A recent survey conducted in Iran suggested that only about 40% of the country identifies as actively Muslim, in contrast to the official state claim of 99%.

***

Imagine for a moment a human being who is not defined by nationality, place in a family, age, or membership in a religion, race, occupation, status or gender. How, then, is such a person to be defined? Purely by his or her actions, emotions and thoughts, and what he or she makes of them? It would be true individuality – an identity that would make grouping impossible and therefore defy prejudice and assumptions. You would need to assess each person you meet in depth to really get to know them, because there would be no basis on which to make assumptions.

Patterns of course would eventually develop – they always do – but the basis for these patterns would be different. We will no longer hear things like "all women are…" or "Blacks are always…" or "Jews are all…" because these old distinctions will have lost their meaning. It would be like saying all Huguenots are the same or all Wares are the same, because these groups no longer exist. Some people will think alike, have the same tastes, wear similar fashions, believe similar things, but those like-minded people will come from a wide variety of what used to be called mutually exclusive groups in the old paradigm – our paradigm.

I know that these observations may make some people uncomfortable; I know they make me uncomfortable. We are creatures of our times, and many of us have gotten ahead by following closely the rules that our paradigm gave us. So why is it that we need a new paradigm when so many of us are comfortable with the one we have even with all of its flaws?

Well, I don't think anyone did a survey of the woolly mammoths before the end of the ice age. It turned out that the paradigm shift was beyond their control, and their extinction was one of its unfortunate consequences. The truth is that many of the decisions we made over the last few centuries have had consequences that we did not intend or want, but they are consequences nonetheless. Who could have predicted that prosperity would lead to a desire to separate rather than to join? Yet this is where the evolution of our species has led us – to a complete redefinition of who we are. We are subject to the consequences of our own actions, intentional or not.

I suppose in the middle of the feudal millennium many smart people would have found it hard to believe that there could be a world one day without masters or peasants, but it came to pass. Similarly, many of us may find it hard to believe today that there could be a world without marriage or the concept of children as the property of their parents until a certain age, or that people have a duty to sacrifice their lives for a nation's aspirations, but there is an equal likelihood that these things too will come to pass.

I guess the real question is whether we will end up like the woolly mammoths, buried in the tundra to be excavated years hence by some other species that made the transformation to the new paradigm more successfully than we did, or whether we will somehow adapt ourselves to the new rules and realities... Time will tell.

But get ready. The first winds of the new paradigm are already whipping up the leaves around us. There will be rain after that and thunder and lightning. It will be a real storm, one like we have never experienced before. It won't work to close all the shutters and wait for the storm to pass, because this is a transformation, not a period of chaos after which everything will return to what it was before. This is the identity paradigm, and it is the invitation to define anew who we are.

Imagine there's no heaven

It's easy if you try

No hell below us

Above us, only sky

Imagine all the people

Livin' for today

Ah

Imagine there's no countries

It isn't hard to do

Nothing to kill or die for

And no religion, too

 -John Lennon

Imagine…



Hive Mind: discovering natural intelligence!

The first successful demonstration of brain-to-brain communication in humans was carried out in 2014 by neuroscientists. The experiment allowed the subjects to exchange mentally conjured words despite being 5,000 miles apart – the neuroscientific equivalent of instant messaging. Two human subjects, one in India and one in France, successfully transmitted the words "hola" and "ciao" in a computer-assisted brain-to-brain transmission using internet-linked electroencephalogram (EEG) and robot-assisted, image-guided transcranial magnetic stimulation (TMS) technologies.

For this experiment, researchers used EEG technology to interconnect one human mind with another. They recruited four participants, one of whom was assigned to the brain-computer interface (BCI) branch, the part of the chain where the messages were to originate. The other three participants were assigned to the computer-brain interface (CBI) branch to receive the messages being transmitted to them.

Using EEG, the researchers translated the greetings "hola" and "ciao" into binary and then emailed the results from India to France. At the receiving location, a CBI delivered the message to the receivers' brains through noninvasive brain stimulation, experienced as phosphenes – flashes of light in their peripheral vision. The flashes appeared in numerical sequences that allowed the receivers to decode the data in the message. It's important to note that this information was not conveyed to the subjects via tactile, visual, or auditory cues; special measures were taken to block sensory input. This ensured that the communication was exclusively mind-to-mind, even though it was channeled through several different mediums.
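To make the encoding step concrete, here is a small illustrative sketch in Python. It assumes a simple 5-bit-per-letter code, which stands in for the study's actual coding scheme; the EEG capture and TMS stimulation hardware are, of course, not modeled.

# Illustrative sketch only: encode a short word as bits, the way a
# brain-to-brain pipeline might, with each bit later delivered to the
# receiver as "phosphene flash" (1) or "no flash" (0).
# The 5-bit-per-letter code here is an assumption for illustration,
# not the exact scheme used in the 2014 experiment.

def encode_word(word):
    """Turn a lowercase word into a flat list of bits (5 bits per letter)."""
    bits = []
    for ch in word.lower():
        value = ord(ch) - ord('a')                      # 'a' -> 0 ... 'z' -> 25
        bits.extend((value >> i) & 1 for i in reversed(range(5)))
    return bits

def decode_bits(bits):
    """Reverse the encoding: group bits in fives and map back to letters."""
    word = ""
    for i in range(0, len(bits), 5):
        value = 0
        for b in bits[i:i + 5]:
            value = (value << 1) | b
        word += chr(value + ord('a'))
    return word

for greeting in ("hola", "ciao"):
    transmitted = encode_word(greeting)   # sender side (BCI)
    # ...bits travel over the internet, then are delivered as phosphenes...
    received = decode_bits(transmitted)   # receiver side (CBI)
    print(greeting, transmitted, "->", received)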

A second experiment was conducted between individuals in Spain and France, achieving a total error rate of just 15 percent (11% on the decoding end and 5% at the initial coding site).

This in itself is a remarkable step in human communication, but being able to do so across a distance of thousands of miles is a critically important proof-of-principle for the development of brain-to-brain communications.

Alternatively, we can say that a hive mind is the apparent intelligence that emerges at the group level in some social species, particularly insects like honeybees and ants. An individual honeybee might not be very bright (although that's debatable), but the honeybee colony as a collective can be very intelligent.

 

Other work on hive minds:

Google hive mind robot:

Google’s electrical engineer Sergey Levine has published a paper on ArXiv about the developments the team has made in creating deep learning software that tries to mimic humans picking up objects. Levine and his fellow researchers have decided that the best option is to hook up 14 robots to a hive mind – like the Borg race in Star Trek – and force them to pick up objects over and over again.

Once one of them figures out how to pick up a particular object, it passes that information on to the others through the shared neural network.
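The pooling idea can be sketched in a few lines of Python. The toy simulation below is not Levine's training code (the real system trained deep neural networks on camera images); the grasp angles, success rates and fleet size are invented stand-ins, meant only to show how a shared experience log lets every arm benefit from whatever any one arm discovers.

import random

# Toy illustration of pooled ("hive mind") learning: several robot arms try
# grasp angles on the same object, log their outcomes to one shared dataset,
# and every robot immediately benefits from what any one of them learned.
# The angles and success rates are invented; the real system learned a deep
# network from camera images, not a lookup table.

random.seed(0)
ANGLES = [0, 30, 60, 90]                                 # candidate grasp angles (degrees)
TRUE_SUCCESS = {0: 0.2, 30: 0.4, 60: 0.9, 90: 0.3}       # hidden ground truth

shared_log = []                                          # (angle, success) from all robots

def attempt(angle):
    """Simulate one grasp attempt with the hidden success probability."""
    return random.random() < TRUE_SUCCESS[angle]

def best_known_angle():
    """Pick the angle with the best success rate in the *shared* log."""
    rates = {}
    for a in ANGLES:
        results = [s for ang, s in shared_log if ang == a]
        rates[a] = sum(results) / len(results) if results else 0.5   # optimistic prior
    return max(rates, key=rates.get)

# 14 robots, many rounds: explore occasionally, otherwise exploit shared knowledge.
for round_ in range(200):
    for robot in range(14):
        angle = random.choice(ANGLES) if random.random() < 0.2 else best_known_angle()
        shared_log.append((angle, attempt(angle)))

print("angle the fleet converged on:", best_known_angle())   # usually 60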

 


 

Observing the behavior of the arms over 800,000 grasp attempts, the researchers found no major improvement in the robots' ability to pick up objects in a more human-like manner, but their decisions about how to pick things up – such as where the best place to grasp an object is – have reached almost human levels.

Scientists from MIT's Sloan Neuroeconomics Lab and Princeton University decided to look for a better way to harness the potential of the hive mind. Through their research, published in the journal Nature, they developed a technique they dubbed the "surprisingly popular" algorithm, which can more accurately pinpoint correct answers from large groups of people through a rather simple procedure. People are asked a question and must give two answers: the first is what they think the correct answer is, and the second is what they think the popular answer will be. The answer whose actual popularity most exceeds its predicted popularity – the "surprisingly popular" one – is taken as the correct answer.
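A minimal sketch of that selection rule, with an invented poll, might look like this: the answer whose actual share most exceeds its predicted share is the "surprisingly popular" winner.

# "Surprisingly popular" answer selection, as described above: each respondent
# gives (1) their own answer and (2) the answer they expect to be most popular.
# The chosen answer is the one whose actual share most exceeds its predicted share.
# The responses below are invented for illustration.
from collections import Counter

def surprisingly_popular(own_answers, predicted_answers):
    n = len(own_answers)
    actual = Counter(own_answers)
    predicted = Counter(predicted_answers)
    # score = actual share minus predicted share; the highest score wins
    return max(actual, key=lambda a: actual[a] / n - predicted[a] / n)

# Classic example: "Is Philadelphia the capital of Pennsylvania?" (it is not).
own       = ["yes"] * 60 + ["no"] * 40        # most respondents answer "yes"
predicted = ["yes"] * 85 + ["no"] * 15        # but nearly everyone expects "yes" to win
print(surprisingly_popular(own, predicted))    # -> "no", the surprisingly popular answer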

In the future, the scientists hope to utilize their method in a number of different settings, such as political forecasting, making economic predictions, pricing artwork, or grading research proposals.

One day soon, the hive mind may be used as the primary way for us to make predictions and prepare for whatever the future holds.


Entity–relationship model

An entity–relationship model (or ER model) describes interrelated things of interest in a specific domain of knowledge. A basic ER model is composed of entity types (which classify the things of interest) and specifies relationships that can exist between entities (instances of those entity types).

 

An entity–attribute-relationship diagram for an MMORPG using Chen's notation.

In software engineering, an ER model is commonly formed to represent things a business needs to remember in order to perform business processes. Consequently, the ER model becomes an abstract data model that defines a data or information structure which can be implemented in a database, typically a relational database.

Entity–relationship modeling was developed for database design by Peter Chen and published in a 1976 paper,[1] with variants of the idea existing previously.[2] Some ER models show super and subtype entities connected by generalization-specialization relationships,[3] and an ER model can also be used in the specification of domain-specific ontologies.

Introduction

An E-R model is usually the result of systematic analysis to define and describe what is important to processes in an area of a business. It does not define the business processes; it only presents a business data schema in graphical form. It is usually drawn in a graphical form as boxes (entities) that are connected by lines (relationships) which express the associations and dependencies between entities. An ER model can also be expressed in a verbal form, for example: one building may be divided into zero or more apartments, but one apartment can only be located in one building.

Entities may be characterized not only by relationships, but also by additional properties (attributes), which include identifiers called "primary keys". Diagrams created to represent attributes as well as entities and relationships may be called entity-attribute-relationship diagrams, rather than entity–relationship models.

An ER model is typically implemented as a database. In a simple relational database implementation, each row of a table represents one instance of an entity type, and each field in a table represents an attribute type. In a relational database a relationship between entities is implemented by storing the primary key of one entity as a pointer or "foreign key" in the table of another entity.
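As an illustration of that mapping, here is a minimal sketch using Python's built-in sqlite3 module and the building/apartment example from the introduction; the table and column names are invented.

import sqlite3

# Minimal illustration of an ER model implemented as relational tables:
# each entity type becomes a table, each attribute a column, and the
# one-to-many "building contains apartments" relationship is stored as a
# foreign key on the apartment side. Names are invented for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE building (
        building_id INTEGER PRIMARY KEY,        -- identifier ("primary key")
        address     TEXT NOT NULL               -- attribute
    );
    CREATE TABLE apartment (
        apartment_id INTEGER PRIMARY KEY,
        number       TEXT NOT NULL,
        building_id  INTEGER NOT NULL            -- relationship: located in
                     REFERENCES building(building_id)
    );
""")
con.execute("INSERT INTO building VALUES (1, '12 Main St')")
con.executemany("INSERT INTO apartment VALUES (?, ?, ?)",
                [(1, '1A', 1), (2, '1B', 1)])
for row in con.execute("""
        SELECT b.address, a.number
        FROM apartment AS a JOIN building AS b USING (building_id)"""):
    print(row)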

There is a tradition for ER/data models to be built at two or three levels of abstraction. Note that the conceptual-logical-physical hierarchy below is used in other kinds of specification, and is different from the three schema approach to software engineering.

Conceptual data model
This is the highest level ER model in that it contains the least granular detail but establishes the overall scope of what is to be included within the model set. The conceptual ER model normally defines master reference data entities that are commonly used by the organization. Developing an enterprise-wide conceptual ER model is useful to support documenting the data architecture for an organization.
A conceptual ER model may be used as the foundation for one or more logical data models (see below). The purpose of the conceptual ER model is then to establish structural metadata commonality for the master data entities between the set of logical ER models. The conceptual data model may be used to form commonality relationships between ER models as a basis for data model integration.
Logical data model
A logical ER model does not require a conceptual ER model, especially if the scope of the logical ER model includes only the development of a distinct information system. The logical ER model contains more detail than the conceptual ER model. In addition to master data entities, operational and transactional data entities are now defined. The details of each data entity are developed and the relationships between these data entities are established. The logical ER model is however developed independently of the specific database management system into which it can be implemented.
Physical data model
One or more physical ER models may be developed from each logical ER model. The physical ER model is normally developed to be instantiated as a database. Therefore, each physical ER model must contain enough detail to produce a database and each physical ER model is technology dependent since each database management system is somewhat different.
The physical model is normally instantiated in the structural metadata of a database management system as relational database objects such as database tables, database indexes such as unique key indexes, and database constraints such as a foreign key constraint or a commonality constraint. The ER model is also normally used to design modifications to the relational database objects and to maintain the structural metadata of the database.

The first stage of information system design uses these models during the requirements analysis to describe information needs or the type of information that is to be stored in a database. The data modeling technique can be used to describe any ontology (i.e. an overview and classifications of used terms and their relationships) for a certain area of interest. In the case of the design of an information system that is based on a database, the conceptual data model is, at a later stage (usually called logical design), mapped to a logical data model, such as the relational model; this in turn is mapped to a physical model during physical design. Note that sometimes, both of these phases are referred to as "physical design."

Entity–relationship model

Two related entities

 

An entity with an attribute

 

A relationship with an attribute

An entity may be defined as a thing capable of an independent existence that can be uniquely identified. An entity is an abstraction from the complexities of a domain. When we speak of an entity, we normally speak of some aspect of the real world that can be distinguished from other aspects of the real world.[4]

An entity is a thing that exists either physically or logically. An entity may be a physical object such as a house or a car (they exist physically), an event such as a house sale or a car service, or a concept such as a customer transaction or order (they exist logically—as a concept). Although the term entity is the one most commonly used, following Chen we should really distinguish between an entity and an entity-type. An entity-type is a category. An entity, strictly speaking, is an instance of a given entity-type. There are usually many instances of an entity-type. Because the term entity-type is somewhat cumbersome, most people tend to use the term entity as a synonym for this term.

Entities can be thought of as nouns. Examples: a computer, an employee, a song, a mathematical theorem, etc.

A relationship captures how entities are related to one another. Relationships can be thought of as verbs, linking two or more nouns. Examples: an owns relationship between a company and a computer, a supervises relationship between an employee and a department, a performs relationship between an artist and a song, a proves relationship between a mathematician and a conjecture, etc.

The model's linguistic aspect described above is utilized in the declarative database query language ERROL, which mimics natural language constructs. ERROL's semantics and implementation are based on reshaped relational algebra (RRA), a relational algebra that is adapted to the entity–relationship model and captures its linguistic aspect.

Entities and relationships can both have attributes. Examples: an employee entity might have a Social Security Number (SSN) attribute, while a proved relationship may have a date attribute.

All entities except weak entities must have a minimal set of uniquely identifying attributes which may be used as a unique/primary key.

Entity–relationship diagrams don't show single entities or single instances of relations. Rather, they show entity sets (all entities of the same entity type) and relationship sets (all relationships of the same relationship type). Examples: a particular song is an entity; the collection of all songs in a database is an entity set; the eaten relationship between a child and his lunch is a single relationship; the set of all such child-lunch relationships in a database is a relationship set. In other words, a relationship set corresponds to a relation in mathematics, while a relationship corresponds to a member of the relation.
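Restated with invented data in a few lines of Python, an entity set behaves like a set and a relationship set like a set of tuples, i.e. a relation in the mathematical sense:

# Entity sets vs. relationship sets, restated with Python sets:
# an entity set is a set of entities, and a relationship set is a set of
# tuples over those entity sets, i.e. a relation in the mathematical sense.
children = {"Ana", "Ben"}                      # entity set
lunches  = {"sandwich", "soup", "apple"}       # entity set

# relationship set "eaten": each tuple is one child-lunch relationship
eaten = {("Ana", "sandwich"), ("Ana", "apple"), ("Ben", "soup")}

assert eaten <= {(c, l) for c in children for l in lunches}   # subset of the cross product
print(sorted(l for c, l in eaten if c == "Ana"))               # what Ana ate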

Certain cardinality constraints on relationship sets may be indicated as well.

Mapping natural language

Chen proposed the following "rules of thumb" for mapping natural language descriptions into ER diagrams (from "English, Chinese and ER diagrams" by Peter Chen):

English grammar structure → ER structure
Common noun → Entity type
Proper noun → Entity
Transitive verb → Relationship type
Intransitive verb → Attribute type
Adjective → Attribute for entity
Adverb → Attribute for relationship

The physical view shows how data is actually stored.

Relationships, roles and cardinalities

In Chen's original paper he gives an example of a relationship and its roles. He describes a relationship "marriage" and its two roles "husband" and "wife".

A person plays the role of husband in a marriage (relationship) and another person plays the role of wife in the (same) marriage. These words are nouns. That is no surprise; naming things requires a noun.

Chen's terminology has also been applied to earlier ideas. The lines, arrows and crow's feet of some diagrams owe more to the earlier Bachman diagrams than to Chen's relationship diagrams.

Another common extension to Chen's model is to "name" relationships and roles as verbs or phrases.

Role naming

It has also become prevalent to name roles with phrases such as is the owner of and is owned by. The correct nouns in this case are owner and possession. Thus person plays the role of owner and car plays the role of possession, rather than person playing the role of is the owner of, etc.

The use of nouns has direct benefit when generating physical implementations from semantic models. When a person has two relationships with car then it is possible to generate names such as owner_person and driver_person, which are immediately meaningful.[5]

Cardinalities

Modifications to the original specification can be beneficial. Chen described look-across cardinalities. As an aside, the Barker–Ellis notation, used in Oracle Designer, uses same-side notation for minimum cardinality (analogous to optionality) and role, but look-across for maximum cardinality (the crow's foot).

In Merise,[6] Elmasri & Navathe[7] and others[8] there is a preference for same-side for roles and both minimum and maximum cardinalities. Recent researchers (Feinerer,[9] Dullea et al.[10]) have shown that this is more coherent when applied to n-ary relationships of order greater than 2.

In Dullea et al. one reads "A 'look across' notation such as used in the UML does not effectively represent the semantics of participation constraints imposed on relationships where the degree is higher than binary."

In Feinerer it says "Problems arise if we operate under the look-across semantics as used for UML associations. Hartmann[11] investigates this situation and shows how and why different transformations fail." (Although the "reduction" mentioned is spurious as the two diagrams 3.4 and 3.5 are in fact the same) and also "As we will see on the next few pages, the look-across interpretation introduces several difficulties that prevent the extension of simple mechanisms from binary to n-ary associations."

 

Various methods of representing the same one to many relationship. In each case, the diagram shows the relationship between a person and a place of birth: each person must have been born at one, and only one, location, but each location may have had zero or more people born at it.

 

Two related entities shown using Crow's Foot notation. In this example, an optional relationship is shown between Artist and Song; the symbols closest to the Song entity represent "zero, one, or many", whereas a Song has "one and only one" Artist. The former is therefore read as: an Artist can perform "zero, one, or many" songs.

Chen's notation for entity–relationship modeling uses rectangles to represent entity sets, and diamonds to represent relationships appropriate for first-class objects: they can have attributes and relationships of their own. If an entity set participates in a relationship set, they are connected with a line.

Attributes are drawn as ovals and are connected with a line to exactly one entity or relationship set.

Cardinality constraints are expressed as follows:

  • a double line indicates a participation constraint, totality or surjectivity: all entities in the entity set must participate in at least one relationship in the relationship set;
  • an arrow from entity set to relationship set indicates a key constraint, i.e. injectivity: each entity of the entity set can participate in at most one relationship in the relationship set;
  • a thick line indicates both, i.e. bijectivity: each entity in the entity set is involved in exactly one relationship.
  • an underlined name of an attribute indicates that it is a key: two different entities or relationships with this attribute always have different values for this attribute.
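As a rough illustration of how the first two constraints can surface in a relational implementation of a binary relationship (names invented): a key constraint lets the relationship fold into a single foreign-key column, and a participation constraint makes that column NOT NULL.

import sqlite3

# Sketch of how two Chen constraints can surface in a relational schema for a
# binary "works_for" relationship between person and department:
#   * key constraint (arrow): each person works for at most one department,
#     so the relationship folds into a single dept_id column on person;
#   * participation constraint (double line): every person must work for some
#     department, so that column is declared NOT NULL.
# Table and column names are invented for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE department (
        dept_id INTEGER PRIMARY KEY,
        name    TEXT NOT NULL
    );
    CREATE TABLE person (
        person_id INTEGER PRIMARY KEY,
        name      TEXT NOT NULL,
        dept_id   INTEGER NOT NULL REFERENCES department(dept_id)
    );
""")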

Attributes are often omitted as they can clutter up a diagram; other diagram techniques often list entity attributes within the rectangles drawn for entity sets.

Related diagramming convention techniques:

Crow's foot notation

Crow's foot notation, the beginning of which dates back to an article by Gordon Everest (1976),[12] is used in Barker's notation, Structured Systems Analysis and Design Method (SSADM) and information technology engineering. Crow's foot diagrams represent entities as boxes, and relationships as lines between the boxes. Different shapes at the ends of these lines represent the relative cardinality of the relationship.

Crow's foot notation was used in the consultancy practice CACI. Many of the consultants at CACI (including Richard Barker) subsequently moved to Oracle UK, where they developed the early versions of Oracle's CASE tools, introducing the notation to a wider audience.

With this notation, relationships cannot have attributes. Where necessary, relationships are promoted to entities in their own right: for example, if it is necessary to capture where and when an artist performed a song, a new entity "performance" is introduced (with attributes reflecting the time and place), and the relationship of an artist to a song becomes an indirect relationship via the performance (artist-performs-performance, performance-features-song).
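A minimal sketch of that promotion, with invented names, is simply an associative table that carries the relationship's attributes:

import sqlite3

# Crow's foot notation gives relationships no attributes, so the
# artist-performs-song relationship is promoted to its own "performance"
# entity carrying the time and place. Names are invented for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE artist (artist_id INTEGER PRIMARY KEY, name  TEXT NOT NULL);
    CREATE TABLE song   (song_id   INTEGER PRIMARY KEY, title TEXT NOT NULL);
    CREATE TABLE performance (
        performance_id INTEGER PRIMARY KEY,
        artist_id      INTEGER NOT NULL REFERENCES artist(artist_id),
        song_id        INTEGER NOT NULL REFERENCES song(song_id),
        performed_at   TEXT NOT NULL,   -- when the performance took place
        venue          TEXT NOT NULL    -- where it took place
    );
""")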

Three symbols are used to represent cardinality:

  • the ring represents "zero"
  • the dash represents "one"
  • the crow's foot represents "many" or "infinite"

These symbols are used in pairs to represent the four types of cardinality that an entity may have in a relationship. The inner component of the notation represents the minimum, and the outer component represents the maximum.

  • ring and dash → minimum zero, maximum one (optional)
  • dash and dash → minimum one, maximum one (mandatory)
  • ring and crow's foot → minimum zero, maximum many (optional)
  • dash and crow's foot → minimum one, maximum many (mandatory)

Model usability issues

In using a modeled database, users can encounter two well known issues where the returned results mean something other than the results assumed by the query author.

The first is the 'fan trap'. It occurs with a (master) table that links to multiple tables in a one-to-many relationship. The issue derives its name from the way the model looks when it's drawn in an entity–relationship diagram: the linked tables 'fan out' from the master table. This type of model looks similar to a star schema, a type of model used in data warehouses. When trying to calculate sums over aggregates using standard SQL over the master table, unexpected (and incorrect) results may occur. The solution is to either adjust the model or the SQL. This issue occurs mostly in databases for decision support systems, and software that queries such systems sometimes includes specific methods for handling this issue.
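A small sketch of the fan trap using Python's built-in sqlite3 module; the branch, staff, and property tables are hypothetical and chosen only to show the shape of the problem. Summing property values over the fanned-out join double-counts each property once per staff member:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE branch   (branch_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE staff    (staff_id INTEGER PRIMARY KEY, branch_id INTEGER, name TEXT);
    CREATE TABLE property (property_id INTEGER PRIMARY KEY, branch_id INTEGER, value REAL);

    INSERT INTO branch   VALUES (1, 'Central');
    INSERT INTO staff    VALUES (1, 1, 'Ana'), (2, 1, 'Ben');
    INSERT INTO property VALUES (1, 1, 100000), (2, 1, 200000);
""")

# Naive sum over the join: every property row is repeated once per staff row.
naive = conn.execute("""
    SELECT SUM(p.value)
    FROM branch b
    JOIN staff s    ON s.branch_id = b.branch_id
    JOIN property p ON p.branch_id = b.branch_id
""").fetchone()[0]

# Correct sum: aggregate the property table on its own (or adjust the model/SQL).
correct = conn.execute("SELECT SUM(value) FROM property").fetchone()[0]

print(naive, correct)  # 600000.0 vs 300000.0
```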

The second issue is a 'chasm trap'. A chasm trap occurs when a model suggests the existence of a relationship between entity types, but the pathway does not exist between certain entity occurrences. For example, a Building has one or more Rooms, which hold zero or more Computers. One would expect to be able to query the model to see all the Computers in the Building. However, Computers not currently assigned to a Room (because they are under repair or somewhere else) are not shown on the list. Another relationship between Building and Computers is needed to capture all the computers in the building. This last modelling issue is the result of a failure to capture all the relationships that exist in the real world in the model. See Entity-Relationship Modelling 2 for details.
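And a matching sketch of the chasm trap with the Building/Room/Computer example (again with sqlite3); a computer that is not currently assigned to a room has a NULL room reference and silently drops out of the building-to-computer path:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE building (building_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE room     (room_id INTEGER PRIMARY KEY, building_id INTEGER);
    CREATE TABLE computer (computer_id INTEGER PRIMARY KEY, room_id INTEGER);  -- NULL while under repair

    INSERT INTO building VALUES (1, 'HQ');
    INSERT INTO room     VALUES (10, 1), (11, 1);
    INSERT INTO computer VALUES (100, 10), (101, 11), (102, NULL);
""")

# "All computers in the building" via rooms silently misses computer 102.
rows = conn.execute("""
    SELECT c.computer_id
    FROM building b
    JOIN room r     ON r.building_id = b.building_id
    JOIN computer c ON c.room_id = r.room_id
    WHERE b.building_id = 1
""").fetchall()
print(rows)  # [(100,), (101,)]; adding a direct building_id on computer closes the gap
```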

Entity–relationships and semantic modeling

Semantic model

A semantic model is a model of concepts; it is sometimes called a "platform independent model". It is an intensional model. At least since Carnap, it is well known that:[13]

"...the full meaning of a concept is constituted by two aspects, its intension and its extension. The first part comprises the embedding of a concept in the world of concepts as a whole, i.e. the totality of all relations to other concepts. The second part establishes the referential meaning of the concept, i.e. its counterpart in the real or in a possible world".

Extension model

An extensional model is one that maps to the elements of a particular methodology or technology, and is thus a "platform specific model". The UML specification explicitly states that associations in class models are extensional, and this is in fact self-evident by considering the extensive array of additional "adornments" provided by the specification over and above those provided by any of the prior candidate "semantic modelling languages" ("UML as a Data Modeling Notation, Part 2").

Entity–relationship origins

Peter Chen, the father of ER modeling, said in his seminal paper:

"The entity-relationship model adopts the more natural view that the real world consists of entities and relationships. It incorporates some of the important semantic information about the real world.[1]

In his original 1976 article Chen explicitly contrasts entity–relationship diagrams with record modelling techniques:

"The data structure diagram is a representation of the organization of records and is not an exact representation of entities and relationships."

Several other authors also support Chen's program:[14][15][16][17][18]

Philosophical alignment

Chen is in accord with philosophical traditions from the time of the Ancient Greek philosophers: Plato and Aristotle.[19] Plato himself associates knowledge with the apprehension of unchanging Forms (namely, archetypes or abstract representations of the many types of things, and properties) and their relationships to one another.

Limitations

  • An ER model is primarily conceptual, an ontology that expresses predicates in a domain of knowledge.
  • ER models are readily used to represent relational database structures (after Codd and Date) but not so often to represent other kinds of data structure (data warehouses, document stores etc.)
  • Some ER model notations include symbols to show super-sub-type relationships and mutual exclusion between relationships; some don't.
  • An ER model does not show an entity's life history (how its attributes and/or relationships change over time in response to events). For many systems, such state changes are nontrivial and important enough to warrant explicit specification.
  • Some have extended ER modeling with constructs to represent state changes, an approach supported by the original author;[20] an example is Anchor Modeling.
  • Others model state changes separately, using state transition diagrams or some other process modeling technique.
  • Many other kinds of diagram are drawn to model other aspects of systems, including the 14 diagram types offered by UML.[21]
  • Today, even where ER modeling could be useful, it is uncommon because many use tools that support similar kinds of model, notably class diagrams for OO programming and data models for relational database management systems. Some of these tools can generate code from diagrams and reverse-engineer diagrams from code.
  • In a survey, Brodie and Liu[22] could not find a single instance of entity–relationship modeling inside a sample of ten Fortune 100 companies. Badia and Lemire[23] blame this lack of use on the lack of guidance but also on the lack of benefits, such as lack of support for data integration.
  • The enhanced entity–relationship model (EER modeling) introduces several concepts not found in ER modeling, which are closely related to object-oriented design, like is-a relationships.
  • For modelling temporal databases, numerous ER extensions have been considered.[24] Similarly, the ER model was found unsuitable for multidimensional databases (used in OLAP applications); no dominant conceptual model has emerged in this field yet, although they generally revolve around the concept of OLAP cube (also known as data cube within the field).[25]

References

  1. Chen, Peter (March 1976). "The Entity-Relationship Model - Toward a Unified View of Data". ACM Transactions on Database Systems. 1 (1): 9–36. CiteSeerX 10.1.1.523.6679. doi:10.1145/320434.320440. S2CID 52801746.
  2. A.P.G. Brown, "Modelling a Real-World System and Designing a Schema to Represent It", in Douque and Nijssen (eds.), Data Base Description, North-Holland, 1975, ISBN 0-7204-2833-5.
  3. "Lesson 5: Supertypes and Subtypes". docs.microsoft.com.
  4. Beynon-Davies, Paul (2004). Database Systems. Basingstoke, UK: Palgrave: Houndmills. ISBN 978-1403916013.
  5. "The Pangrammaticon: Emotion and Society". January 3, 2013.
  6. Hubert Tardieu, Arnold Rochfeld and René Colletti, La méthode MERISE: Principes et outils (Paperback, 1983).
  7. Elmasri, Ramez; Navathe, Shamkant B., Fundamentals of Database Systems, third ed., Addison-Wesley, Menlo Park, CA, USA, 2000.
  8. ER 2004: 23rd International Conference on Conceptual Modeling, Shanghai, China, November 8–12, 2004. 2004-10-27. ISBN 9783540237235.
  9. "A Formal Treatment of UML Class Diagrams as an Efficient Method for Configuration Management 2007" (PDF).
  10. "James Dullea, Il-Yeol Song, Ioanna Lamprou - An analysis of structural validity in entity-relationship modeling 2002" (PDF).
  11. Hartmann, Sven. "Reasoning about participation constraints and Chen's constraints". Proceedings of the 14th Australasian Database Conference, Volume 17. Australian Computer Society, Inc., 2003. Archived 2013-05-10 at the Wayback Machine.
  12. G. Everest, "Basic Data Structure Models Explained with a Common Example", in Computing Systems 1976, Proceedings Fifth Texas Conference on Computing Systems, Austin, TX, 1976 October 18–19, pages 39–46. (Long Beach, CA: IEEE Computer Society Publications Office).
  13. "The Role of Intensional and Extensional Interpretation in Semantic Representations".
  14. Kent in "Data and Reality": "One thing we ought to have clear in our minds at the outset of a modelling endeavour is whether we are intent on describing a portion of "reality" (some human enterprise) or a data processing activity."
  15. Abrial in "Data Semantics": "... the so called "logical" definition and manipulation of data are still influenced (sometimes unconsciously) by the "physical" storage and retrieval mechanisms currently available on computer systems."
  16. Stamper: "They pretend to describe entity types, but the vocabulary is from data processing: fields, data items, values. Naming rules don't reflect the conventions we use for naming people and things; they reflect instead techniques for locating records in files."
  17. In Jackson's words: "The developer begins by creating a model of the reality with which the system is concerned, the reality that furnishes its [the system's] subject matter ..."
  18. Elmasri, Navathe: "The ER model concepts are designed to be closer to the user's perception of data and are not meant to describe the way in which data will be stored in the computer."
  19. Paolo Rocchi, Janus-Faced Probability, Springer, 2014, p. 62.
  20. P. Chen. "Suggested research directions for a new frontier: Active conceptual modeling". ER 2006, volume 4215 of Lecture Notes in Computer Science, pages 1–4. Springer Berlin / Heidelberg, 2006.
  21. Carte, Traci A.; Jasperson, Jon (Sean); Cornelius, Mark E. (2020). "Integrating ERD and UML Concepts When Teaching Data Modeling". Journal of Information Systems Education, Vol. 17, Iss. 1, Article 9.
  22. "The power and limits of relational technology in the age of information ecosystems". On The Move Federated Conferences, 2010. Archived 2016-09-17 at the Wayback Machine.
  23. A. Badia and D. Lemire. "A call to arms: revisiting database design". CiteSeerX.
  24. Gregersen, Heidi; Jensen, Christian S. (1999). "Temporal Entity-Relationship models—a survey". IEEE Transactions on Knowledge and Data Engineering. 11 (3): 464–497. CiteSeerX 10.1.1.1.2497. doi:10.1109/69.774104.
  25. Riccardo Torlone (2003). "Conceptual Multidimensional Models" (PDF). In Maurizio Rafanelli (ed.). Multidimensional Databases: Problems and Solutions. Idea Group Inc (IGI). ISBN 978-1-59140-053-0.


Posted on Leave a comment

How Can Machine Learning Improve Business Decision-making?

Artificial Intelligence and Machine Learning in Development

It sounds like something from a 1980s sci-fi film. The idea of a machine helping to make your business decisions is something straight out of a blockbuster, but the way technology has evolved means that companies embracing Machine Learning for decision-making can actually get the edge over their competition. 

AI & Machine Learning 

Machine Learning is intrinsically linked with AI. It is the capacity a machine has to learn and demonstrate intelligence and insight. The role of AI within a business largely depends on exactly what type of business it is and what you are trying to achieve. More and more Machine Learning business apps that automate processes and analyze data appear every day.

Machine Learning Predictive Models & Machine Learning Text Classification 

Predictive modeling is a process that uses data and statistics to predict outcomes with data models. It can study data and predict what is going to happen next, or what should happen next, which can be extremely useful in certain industries. The predictions it produces can help tremendously with decision-making.
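As a rough sketch of what such a predictive model might look like in practice (the churn data below is made up, and the example assumes scikit-learn is installed):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical customer features: [days_since_last_order, total_spend, support_tickets]
X = np.array([[5, 250.0, 0], [40, 80.0, 2], [3, 900.0, 1], [60, 20.0, 3],
              [10, 400.0, 0], [90, 15.0, 4], [7, 300.0, 1], [75, 30.0, 2]])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = customer eventually churned

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
print("churn probability for a new customer:", model.predict_proba([[30, 120.0, 1]])[0, 1])
```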


Text classification uses Machine Learning to categorize and sort texts in a smart way, making the process quicker and more efficient. It can be put into place for things like chatbots.
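A minimal text-classification sketch in the same spirit (the messages and labels are invented; the example assumes scikit-learn is available):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["where is my order", "I want a refund", "how do I reset my password",
            "my invoice is wrong", "package has not arrived", "cannot log in to my account"]
labels = ["shipping", "billing", "account", "billing", "shipping", "account"]

# TF-IDF features feeding a simple Naive Bayes classifier.
classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(messages, labels)

print(classifier.predict(["my parcel never showed up", "I was charged twice on my card"]))
```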

Process Mining and Machine Learning 

Process mining evaluates business processes and can give you new methods of improving your business, either by making it more efficient or by saving money. AI and Machine Learning can be continuously involved in your process mining, giving you new insights and informing the business decisions you need to make next.

An example is using KPIs. Process mining can explore data about where processes have gone wrong. For example, it could analyze data from your suppliers to tell you who is more likely to deliver on time, or it could analyze data from previous sales to see whether or not you are likely to run out of stock. Key performance indicators are crucial for giving a numeric value, from which process mining can be carried out much more effectively.
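A toy version of the supplier example, assuming pandas is available (the delivery data is invented); the on-time rate per supplier is the kind of numeric KPI that process-mining tools can then build on:

```python
import pandas as pd

deliveries = pd.DataFrame({
    "supplier":      ["Acme", "Acme", "Globex", "Globex", "Initech", "Initech"],
    "promised_days": [5, 5, 7, 7, 3, 3],
    "actual_days":   [4, 6, 7, 10, 3, 5],
})
deliveries["on_time"] = deliveries["actual_days"] <= deliveries["promised_days"]

# Share of on-time deliveries per supplier, highest first.
print(deliveries.groupby("supplier")["on_time"].mean().sort_values(ascending=False))
```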

Almost every business can benefit from becoming more efficient in one way or another, and process mining could be the first port of call. 

Artificial Intelligence and Machine Learning for Decision-making

AI can be put into practice when it comes to decision-making about almost any aspect of your business. For example, you can use it to analyze data on the money you are spending, staff responsibilities, and even employee happiness. If you can feed it data, AI can show you new insights.


Decision-making process – The pros & cons of AI 

The pros of including AI in your decision making are clear. Having these new insights can help you to spot new areas of improvement and make vast enhancements in the way you conduct your business. AI can often see things that other data analysts would not. It can also tick away in the background, so you don’t have to pay consultants to work with the data if a computer is interpreting it. 

AI decision-making also speeds up the process. AI can operate at incredible speeds, analyzing in minutes data that would take humans years to work through. Big corporations should take advantage of this when they are looking to make their business more efficient and even to make their processes more intelligent.

The cons include the fact that there are still some shortcomings. The human touch is sometimes still needed. For instance, judging the potential of a new staff member needs human input: statistics might tell you that they need to go, and AI might back this up, but you might still see potential in them.

AI doesn't do creative thinking or come up with new ideas, so this will still fall to business employees and leaders.

How Machine Learning can be applied to business processes 

Almost all business processes can be streamlined in some way. It could be that AI shows exactly how to do this. AI can also be put into practical uses relevant to your business. 

How Machine Learning can determine a pricing strategy

Machine learning might also be used to dictate pricing. An algorithm can learn from consumer information and other seller data to help you to price goods and services in a way that is competitive and likely to convert. 
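A hedged sketch of that idea (synthetic numbers, assumes scikit-learn); a simple regression learns from competitor prices and past sales to suggest a price for a new product:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Features per product: [competitor_price, units_sold_last_month, stock_level]
X = np.array([[19.99, 120, 300], [24.99, 80, 150], [14.99, 200, 500],
              [29.99, 60, 100], [9.99, 350, 800]])
converted_price = np.array([18.50, 23.00, 14.00, 27.50, 9.50])  # prices that actually converted

model = LinearRegression().fit(X, converted_price)
print("suggested price:", round(float(model.predict([[21.99, 100, 250]])[0]), 2))
```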

AI decision-making – Developments for the near future 

From consumer protection to intelligent process automation, there aren’t many ways in which Machine Learning can’t be applied in business. It is very hard to know exactly how it will pan out, but there is little denying that AI is here to stay. Practical uses and an understanding of exactly what your customer is looking for, or how customers and staff behave, will become more intertwined with how business is done. 

The Future – Decision-Making For Your Business With AI 

When it comes to making decisions about a business, data is always going to be vital, but with Machine Learning we now have many more ways to use that data and to find out more about customers, businesses, and the processes we use. Don't worry, the robots aren't taking over like in an 80s sci-fi film, but thanks to Machine Learning we do have more tools and functions to use as part of our business strategies than ever before.

Source

Posted on Leave a comment

Artificial Identity: Disruption and the Right to Persist

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Anthropomorphism, artificial identity, and the fusion of personal and artificial identities have become commonplace concepts in human-computer interaction (HCI) and human-robot interaction (HRI). In this paper, we argue that the design and life cycle of 'smart' technology must account for a further element of HCI/HRI: beyond issues of combined identity, a much more crucial point is the substantial investment of a user's personality in a piece of technology. We note that this substantial investment occurs in a dynamic context of continuous alteration of this technology, and thus the important psychological and ethical implications ought to be given a more prominent place in the theory and design of HCI/HRI technology.

Source
Virtual Identity