Humanity is defined as the quality or condition of being human: human nature. Human nature is the concept that there is a set of inherent distinguishing characteristics, including ways of thinking, feeling, and acting, which all humans tend to share. In an age where technology surrounds and influences humanity, one must consider whether technology is changing how we think, feel, and act. Is technology influencing humanity, or is humanity influencing technology?
The questions of what ultimately influences human behavior, how that causation works, and whether technology influences humanity or humanity influences technology are important questions for all of society.

The Significance of Social

Social technology is permeating everything in our lives. It seems that every medium of media has integrated the social into its message. The Facebook craze alone has pulled over 500 million people into using social technology for multiple purposes. Humanity is experiencing connectivity to the human network like never before in history.
The influence of technology on humanity is affecting all market sectors, private or public, on-line and off-line. The intersection of technology and the human network is disrupting old business models, organizational theories, and belief systems founded in old knowledge. The significance of these disruptions, and their meaning, is a moving curve of comprehension. What we see on the surface are merely attempts to create short-term results using old belief systems and business models. What lies underneath is a societal shift in beliefs as humanity strives to create meaning and significance from the emerging changes.
Social is significance. The real promise of social tools is societal, not just relational; it is significance, not just attention.
You’ve got to get the first right before you tackle the second, and that means not just investing in “gamification,” a Twitter account, or a Facebook group. It means thinking more carefully about how to use those tools to become a little (or a heckuva lot) more significant, and starting to mean something in enduring terms. The deepest test of a 21st-century business isn’t just whether it glitters, but whether it can create thick value that endures, benefits, and multiplies: whether it matters.

The Mazeway Re-synthesis

Humanity has many belief systems which influence the many groups within the human network. Each of our belief systems forms around collective beliefs influenced by society and by media. Could it be that social technology is enabling the human network to create a new belief system?
When a small group of people experiences a change of belief system and persuades others to share it, they become new prophets of change; if they fail to persuade others, they are discounted and labeled by society. Many new belief systems are incompatible with the old, and the result is small crowds of followers setting out for a new “promised land.” This accelerates the process of colonizing vacant territory, so that both the capacity to develop a new belief system and the capacity to be persuaded to switch to the belief system of the prophets of change become part of a mazeway synthesis of beliefs. Mazeway resynthesis is the change in belief systems brought about by prophets of change, the mazeway being to the person what culture is to society: the prophet awakes to a new reality which he or she then tries to impart to followers. If successful, the prophet becomes the leader of a new movement; otherwise, he or she is alienated from the network of people clinging to the old belief system. Sound familiar?
Those social media preachers within and outside the traditional organization are the prophets of change, attempting to create a mazeway re-synthesis with those who believe in the meaning and significance that social represents.

Prophets of Ideas

Ideas represent the presentation of new knowledge that emerges from the creativity of the human network. Because of the connectivity given to the human network, meaningful and significant ideas are emerging every day. The meaningful and significant ideas are the ones which create thick value that endures, benefits, and multiplies the productivity of the human network. A window of acceptance is a means of identifying which ideas people will entertain: it defines the range into which people’s ideas and media fall. Media are used to persuade or educate the marketplace so that the window of acceptable behavior either “moves” or expands to encompass new ideas.
Opponents of change reject ideas that represent significant change in how humanity thinks, feels, and acts. While the influence of technology on humanity always has been and always will be significant, the real influence on society is what humanity does with the technology to create meaningful value for all humanity. This is also true for any person or organization using social technology for any purpose.

It is up to technologists, thinkers, and wise people to lead the preservation of humanity. Technology should remain an enabler, a set of simplifying and time-saving tools, so we can think more and do more good on earth. As with anything on earth, it can do us good and it can be manipulated. Good values and good ethics must triumph in any innovation, any new technology, any policy making. In summary, it is entirely up to us to control the way we handle technology so as to remain above animals through our God-given humanitarian identity. Sunil Bhardwaj.
“The Machine might say, ‘Mary, you are very tense this morning. It is not good for the organization for you to be doing X right now. Why don’t you try Y?’” If a technology as simple as PowerPoint can raise such difficult questions, how are people going to cope with the really complex issues waiting for us down the road—questions that go far more to the heart of what we consider our specific rights and responsibilities as human beings?
Would we want, for example, to replace a human being with a robot nanny? A robot nanny would be more interactive and stimulating than television, the technology that today serves as a caretaker stand-in for many children. Indeed, the robot nanny might be more interactive and stimulating than many human beings. Yet the idea of a child bonding with a robot that presents itself as a companion seems chilling.
We are ill prepared for the new psychological world we are creating. We make objects that are emotionally powerful; at the same time, we say things such as “technology is just a tool” that deny the power of our creations both on us as individuals and on our culture. At MIT, I began the Initiative on Technology and Self, in which we look into the ways technologies change our human identities. One of our ongoing activities, called the Evocative Objects seminar, looks at the emotional, cognitive, and philosophical power of the “objects of our lives.” Speakers present objects, often technical ones, with significant personal meaning. We have looked at manual typewriters, programming languages, hand pumps, e-mail, bicycle gears, software that morphs digital images, personal digital assistants—always focusing on what these objects have meant in people’s lives. What most of these objects have in common is that their designers saw them as “just tools” but their users experience them as carriers of meanings and ideas, even extensions of themselves. The image of the nanny robot raises a question: Is such a robot capable of loving us?
Let me turn that question around. In Spielberg’s AI, scientists build a humanoid robot, David, who is programmed to love. David expresses his love to a woman who has adopted him as her child. In the discussions that followed the release of the film, emphasis usually fell on the question of whether such a robot could really be developed. Was this technically feasible? And if it were feasible, how long would we have to wait for it? People thereby passed over another question, one that historically has contributed to our fascination with the computer’s burgeoning capabilities.
The question is not what computers can do or what computers will be like in the future, but rather, what we will be like. What we need to ask is not whether robots will be able to love us but rather why we might love robots. Some things are already clear. We create robots in our own image, we connect with them easily, and then we become vulnerable to the emotional power of that connection.
When I studied children and robots that were programmed to make eye contact and mimic body movements, the children’s responses were striking: When the robot made eye contact with the children, followed their gaze, and gestured toward them, they responded to the robot as if it were a sentient, and even caring, being. This was not surprising; evolution has clearly programmed us to respond to creatures that have these capabilities as though they were sentient.
But it was more surprising that children responded in that way to very simple robots—like Furby, the little owl-like toy that learned to speak “Furbish” and to play simple games with children. So, for example, when I asked the question, “Do you think the Furby is alive?” children answered not in terms of what the Furby could do but in terms of how they felt about the Furby and how it might feel about them.
Interestingly, the so-called theory of object relations in psychoanalysis has always been about the relationships that people—or objects—have with one another. So it is somewhat ironic that I’m now trying to use the psychodynamic object-relations tradition to write about the relationships people have with objects in the everyday sense of the word. Social critic Christopher Lasch wrote that we live in a “culture of narcissism.” The narcissist’s classic problem involves loneliness and fear of intimacy. From that point of view, in the computer we have created a very powerful object, an object that offers the illusion of companionship without the demands of intimacy, an object that allows you to be a loner and yet never be alone. In this sense, computers add a new dimension to the power of the traditional teddy bear or security blanket. So how exactly do the robot toys that you are describing differ from traditional toys?
Well, if a child plays with a Raggedy Ann or a Barbie doll or a toy soldier, the child can use the doll to work through whatever is on his or her mind. Some days, the child might need the toy soldier to fight a battle; other days, the child might need the doll to sit quietly and serve as a confidante. Some days, Barbie gets to attend a tea party; other days, she needs to be punished.
But even the relatively simple artificial creatures of today, such as Hasbro’s My Real Baby or Sony’s dog robot AIBO, give the appearance of having minds of their own, agendas of their own. You might say that they seem to have their own lives, psychologies, and needs.
Indeed, for this reason, some children tire easily of the robots; they simply are not flexible enough to accommodate childhood fantasies. These children prefer to play with hand puppets and will choose simple robots over complicated ones. It was common for children to remark that they missed their Tamagotchis (virtual pets, circa 1997, that needed to be cleaned, fed, amused, and disciplined in order to grow) because although their more up-to-date robot toys were “smarter,” their Tamagotchis “needed” them more. If we can relate to machines as psychological beings, do we have a moral responsibility to them? When people program a computer that develops some intelligence or social competency, they tend to feel as though they’ve nurtured it.
And so, they often feel that they owe it something—some loyalty, some respect. Even when roboticists admit that they have not succeeded in building a machine that has consciousness, they can still feel that they don’t want their robot to be mistreated or tossed in the dustheap as though it were just a machine. Some owners of robots do not want them shut off unceremoniously, without a ritualized “good night.” Indeed, when given the chance, people wanted to “bury” their “dead” Tamagotchi in on-line Tamagotchi graveyards.
So once again, I want to turn your question around. Instead of trying to get a “right” answer to the question of our moral responsibility to machines, we need to establish the boundaries at which our machines begin to have those competencies that allow them to tug at our emotions. In this respect, I found one woman’s comment on AIBO, Sony’s dog robot, especially striking in terms of what it might augur for the future of person-machine relationships: “AIBO is better than a real dog. It won’t do dangerous things, and it won’t betray you. Also, it won’t die suddenly and make you feel very sad.” The possibility of engaging emotionally with creatures that will not die, whose loss we will never need to face, presents dramatic questions.
The sight of children and the elderly exchanging tenderness with robotic pets brings philosophy down to earth. In the end, the question is not whether children will come to love their toy robots more than their parents, but what will loving itself come to mean?
What sort of relational technologies might a manager turn to? We’ve already developed machines that can assess a person’s emotional state. So for example, a machine could measure a corporate vice president’s galvanic skin response, temperature, and degree of pupil dilation precisely and noninvasively. And then it might say, “Mary, you are very tense this morning. It is not good for the organization for you to be doing X right now.
Why don’t you try Y?” This is the kind of thing that we are going to see in the business world because machines are so good at measuring certain kinds of emotional states. Many people try to hide their emotions from other people, but machines can’t be easily fooled by human dissembling. So could machines take over specific managerial functions? For example, might it be better to be fired by a robot? Well, we need to draw lines between different kinds of functions, and they won’t be straight lines. We need to know what business functions can be better served by a machine. There are aspects of training that machines excel at—for example, providing information—but there are aspects of mentoring that are about encouragement and creating a relationship, so you might want to have another person in that role.
Again, we learn about ourselves by thinking about where machines seem to fit and where they don’t. Most people would not want a machine to notify them of a death; there is a universal sense that such a moment is a sacred space that needs to be shared with another person who understands its meaning. Similarly, some people would argue that having a machine fire someone would show a lack of respect. But others would argue that it might let the worker who is being fired save face.

“Are you really you if you have a baboon’s heart inside, had your face resculpted by Brazil’s finest plastic surgeons, and are taking Zoloft to give you a competitive edge at work?”

Related to that, it’s interesting to remember that in the mid-1960s computer scientist Joseph Weizenbaum wrote the ELIZA program, which was “taught” to speak English and “make conversation” by playing the role of a therapist. The computer’s technique was mainly to mirror what its clients said to it. Thus, if the patient said, “I am having problems with my girlfriend,” the computer program might respond, “I understand that you are having problems with your girlfriend.” Weizenbaum’s students and colleagues knew and understood the program’s limitations, and yet many of these very sophisticated users related to ELIZA as though it were a person.
With full knowledge that the program could not empathize with them, they confided in it and wanted to be alone with it. ELIZA was not a sophisticated program, but people’s experiences with it foreshadowed something important. Although computer programs today are no more able to understand or empathize with human problems than they were 40 years ago, attitudes toward talking things over with a machine have gotten more and more positive. The idea of the nonjudgmental computer, a confidential “ear” and information resource, seems increasingly appealing. Indeed, if people are turning toward robots to take roles that were once the sole domain of people, I think it is fair to read this as a criticism of our society. So when I ask people why they like robot therapists, I find it’s because they see human ones as pill pushers or potentially abusive.
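The mirroring described above is little more than pronoun substitution. The following is a toy sketch of that reflection trick, not Weizenbaum's actual program; the `REFLECTIONS` table and `reflect` function are illustrative inventions, and real ELIZA used a richer set of scripted pattern rules:

```python
# ELIZA-style reflection: mirror a statement back at the speaker
# by swapping first-person words for second-person ones.
REFLECTIONS = {
    "i": "you",
    "am": "are",
    "my": "your",
    "me": "you",
    "mine": "yours",
}

def reflect(statement: str) -> str:
    """Echo the client's statement with pronouns flipped."""
    words = statement.rstrip(".!?").split()
    swapped = [REFLECTIONS.get(w.lower(), w) for w in words]
    return "I understand that " + " ".join(swapped).lower() + "."

print(reflect("I am having problems with my girlfriend"))
# -> I understand that you are having problems with your girlfriend.
```

A handful of substitutions like these, wrapped in a stock phrase, is enough to produce the exchange quoted above, which is part of what made people's emotional attachment to the program so striking.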
When I’ve found sympathy for the idea of computer judges, it is usually because people fear that human judges are biased along lines of gender, race, or class. Clearly, it will be a while before people say they prefer to be given job counseling or to be fired by a robot, but it’s not a hard stretch for the imagination. The story of people wanting to spend time with ELIZA brings me to what some have termed “computer addiction.” Is it unhealthy for people to spend too much time with a computer? Usually, the fear of addiction comes up in terms of the Internet.
In my own studies of Internet social experience, I have found that the people who make the most of their “lives on the screen” are those who approach on-line life in a spirit of self-reflection. They look at what they are doing with their virtual selves and ask what these actions say about their desires, perhaps unmet, as well as their need for social connection, perhaps unfulfilled. If we stigmatize the medium as “addictive” (and try to strictly control it as if it were a drug), we will not learn how to more widely nurture this discipline of self-reflection. The computer can in fact serve as a kind of mirror. A 13-year-old boy once said to me that when you are with a computer, “you take a little piece of your mind and put it into the computer’s mind … and you start to see yourself differently.” This sense of the computer as second self is magnified in cyberspace. For some people, cyberspace is a place to act out unresolved conflicts, to play and replay personal difficulties on a new and exotic stage. For others, it provides an opportunity to work through significant problems, to use the new materials of “cybersociality” to reach for new resolutions.
These more positive identity effects follow from the fact that for some, cyberspace provides what psychologist Erik Erikson would have called a “psychosocial moratorium,” a central element in how Erikson thought about identity development in adolescence. Today, the idea of the college years as a consequence-free time-out seems of another era. But if our culture no longer offers an adolescent time-out, virtual communities often do.
It is part of what makes them seem so attractive. Time in cyberspace reworks the notion of the moratorium because it may now exist in an always-available window. A parent whose child is on heroin needs to get the child off the drug. A parent whose child spends a great deal of time on the Internet needs, first and foremost, to be curious about what the child is doing there. Does the child’s life on the screen point to things that might be missing in the rest of his or her life?
When contemplating a person’s computer habits, it is more constructive to think of the Internet as a Rorschach than as a narcotic. In on-line life, people are engaged in identity play, but it is very serious identity play.
Isn’t there a risk that we’ll start to confuse simulation with reality? Yes, there certainly is. When my daughter was seven years old, I took her on a vacation in Italy. We took a boat ride in the postcard-blue Mediterranean. She saw a creature in the water, pointed to it excitedly, and said, “Look, Mommy, a jellyfish. It looks so realistic.” When I told this to a research scientist at Walt Disney, he responded by describing the reaction of visitors to Animal Kingdom, Disney’s newest theme park in Orlando, populated by “real,” that is, biological, animals.
He told me that the first visitors to the park expressed disappointment that the biological animals were not realistic enough. They did not exhibit the lifelike behavior of the more active robotic animals at Disney World, only a few miles away. What is the gold standard here? For me, this story is a cautionary tale.
It means that in some way the essence of a crocodile has become not an actual living crocodile but its simulation. In business, one is tempted to sell the simulation if that is what people have come to expect. But how far should you go in selling the simulation by marketing it as authentic?