
Species share habitats and resources, yet within shared environments each species has its own niche, whether physical, psychological or both (Rochat, 2006). A feature of the environment can be viewed and used for different needs depending on the species, and this goes some way towards determining the niche. In order to debate the effects of technology on human behaviour, we must first establish this niche: what separates humans. Explanations as to what makes us human vary: the command of symbolic language (Deacon, 1997); the ability to have culture and to be enculturated (Tomasello, 1999); two-legged walking, social complexity and prolonged post-natal dependence (Bruner, 1972); the ability to read the minds of others (Byrne & Whiten, 1988); and the will to question our humanity, the conception of the universe and so on.

Neuroscientist Robert Sapolsky explored the genetic differences between chimpanzees and humans, describing the few genes that make us unique despite our sharing ninety-eight percent of the same genes: "Take a chimp brain fetally and let it go two or three more rounds of division and you get a human brain instead" (2011). What happens in these extra rounds transforms the way we interact with our environment compared to other species.

There is, however, a convincing counter-argument to the beliefs of Deacon, Bruner and the other psychologists above: that our niche originates in the way we share resources to survive, transact and relate with one another, and that the other traits attributed to humans alone are by-products of creative thinking used to gain competitive advantage over rivals (Rochat, 2006). Rochat argues that other accounts of human speciation "revolve too often around the triggering emergence of a structure or a capacity that is essentially internal to the organism", a view he considers flawed. When looking at the difference between humans and machines, however, perhaps some of the most significant differences come from the more 'internalist' answers, such as culture, social complexity and the ability to read the minds of others, as well as, as Rochat states, how we relate to and deal with one another in unmatched ways.

2.2.2 Mimicking Creativity

Hyperrealism is a genre of sculpture and painting that attempts to replicate high-resolution photography. Platforms such as Instagram and Tumblr often stimulate attention to work of this nature, with users remarking on the amount of talent and skill involved. Whilst the skill cannot be denied, someone more inclined to appreciate concepts, originality and creativity may find it difficult to see why such work is so coveted. As discussed above, the greatest difference between man and machine is the ability to have authentic emotion expression; whether machines can be creative, or whether they are limited to these kinds of art, was largely unexplored until very recently. Art of this kind is something machines can execute very easily. Could this mean a greater appreciation for creativity and concept in art, or will it simply leave viewers wowed by the aesthetic and almost microscopic detail?

Programmers have a choice as to the algorithms they create; their intentions are artistic in themselves even if they do not have complete control over the AI producing the art. Aaron is a computer program that has been painting for thirty-five years via another program, both Aaron and its tools set up by Harold Cohen, whose goal is to "get a computer program to do what only rather talented human beings can do" (2015). Originally a painter himself, Cohen was interested in devising rules to paint under, then executing them "almost without thinking". This kind of art is called 'automatism', "a term borrowed from physiology referring to creating art without conscious thought, accessing material from the unconscious mind as part of the creative process", strongly influenced by André Breton's response to psychoanalyst Sigmund Freud and his launch of the Surrealist movement in 1924 (Tate, n.d.). Here one could argue that machines can be used as a technology in much the same way as a pen: to execute random movements within set parameters, whether those parameters are the range of movement of a hand and arm or the number of colours available to the machine. This is not to say machines can only create art in this way, but it does suggest that machines can create art in similar ways to how humans have done in the past.
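To make the analogy concrete, the following is a minimal sketch, in Python, of 'automatic' mark-making as described above: random movements executed within set parameters (a canvas size, a step limit, a fixed palette). It is purely illustrative and assumes nothing about how Aaron actually works; the canvas dimensions, palette and output file name are arbitrary choices.

import random

WIDTH, HEIGHT = 400, 400                      # the "set parameters": canvas bounds
STEP = 12                                     # maximum movement per mark
PALETTE = ["#1d3557", "#e63946", "#f1a208"]   # a deliberately limited choice of colours
random.seed(1)                                # fixed seed so the drawing is repeatable

x, y = WIDTH / 2, HEIGHT / 2
points = [(x, y)]
for _ in range(600):
    # wander without conscious "intent", but never leave the canvas
    x = min(max(x + random.uniform(-STEP, STEP), 0), WIDTH)
    y = min(max(y + random.uniform(-STEP, STEP), 0), HEIGHT)
    points.append((x, y))

# write the walk out as a simple SVG drawing
path = " ".join(f"{px:.1f},{py:.1f}" for px, py in points)
svg = (f'<svg xmlns="http://www.w3.org/2000/svg" width="{WIDTH}" height="{HEIGHT}">'
       f'<polyline points="{path}" fill="none" '
       f'stroke="{random.choice(PALETTE)}" stroke-width="2"/></svg>')
with open("automatic_drawing.svg", "w") as f:
    f.write(svg)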
Fig.1 — CAN: Creative Adversarial Networks, Generating "Art" by Learning About Styles and Deviating from Style Norms (Elgammal, A., Liu, B., Elhoseiny, M. and Mazzone, M., 2017).

Furthermore, a new kind of Turing test was developed this year to judge whether humans can tell the difference between man's and machine's artistic realisations, asking humans which art they prefer (see Fig. 1). The test used a new kind of program: Creative Adversarial Networks (CANs), which are based on Generative Adversarial Networks. GANs are made up of, firstly, a 'discriminator', which takes images and determines whether they are man- or machine-made, and secondly a 'generator', which produces new images intended to fool the discriminator into thinking they are human-made (Thoutt, 2017). The recent research paper 'CAN: Creative Adversarial Networks Generating "Art" by Learning About Styles and Deviating from Style Norms' outlines the test itself and the reason for the invention of CANs, arguing that whilst GANs have largely been successful, in their original design they have very limited ability to generate creative outcomes. The discriminator in a CAN goes further, sorting images into 25 artistic styles; the generator then attempts to avoid these styles in order to create something original and unique, outside of human influence (Thoutt, 2017). The paper states that "too little arousal potential is considered boring, and too much activates the aversion system, which results in negative response"; the authors define creativity as a piece being original, yet not so far from the norm that it looks mechanical (Elgammal et al., 2017). CANs confused human judges far more often than GANs did, and given how recent these results are, and that they build on the already powerful GANs, they are likely the strongest evidence so far of a machine thinking creatively. It could be argued that the creative intent is simply to avoid what has been done before rather than to create for a personal purpose, yet the intention was to compel machines to think creatively, not to express themselves. From this research it appears that creativity is something machines are capable of, and therefore not an explicit divider between man and machine, leaving emotion expression still far away.
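To make this adversarial set-up more tangible, below is a minimal sketch of the idea in Python, using the PyTorch library. It is not the authors' code: the network sizes, image resolution and single generator step are placeholder assumptions, and only the shape of the objective (appear real to the discriminator while leaving its 25-way style classifier maximally uncertain) reflects the paper's description.

import torch
import torch.nn as nn
import torch.nn.functional as F

N_STYLES = 25        # the 25 artistic styles the CAN discriminator sorts images into
LATENT = 100         # size of the generator's random input (an assumption)
IMG = 64 * 64 * 3    # a tiny flattened image, purely for illustration

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                                 nn.Linear(256, IMG), nn.Tanh())
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Two heads: real-vs-fake, plus a classifier over the 25 styles."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(IMG, 256), nn.ReLU())
        self.real_fake = nn.Linear(256, 1)
        self.style = nn.Linear(256, N_STYLES)
    def forward(self, x):
        h = self.body(x)
        return self.real_fake(h), self.style(h)

def generator_loss(real_fake_logit, style_logits):
    # (1) look "real" to the discriminator, exactly as in an ordinary GAN...
    adversarial = F.binary_cross_entropy_with_logits(
        real_fake_logit, torch.ones_like(real_fake_logit))
    # (2) ...while leaving the style classifier maximally uncertain: push the
    # predicted style distribution towards uniform, i.e. towards an image that
    # fits none of the established styles. This is the "creative" term.
    log_p = F.log_softmax(style_logits, dim=1)
    uniform = torch.full_like(log_p, 1.0 / N_STYLES)
    style_ambiguity = F.kl_div(log_p, uniform, reduction="batchmean")
    return adversarial + style_ambiguity

# One illustrative generator step on random noise (no real training data here).
G, D = Generator(), Discriminator()
z = torch.randn(8, LATENT)
fake_images = G(z)
real_fake_logit, style_logits = D(fake_images)
loss = generator_loss(real_fake_logit, style_logits)
loss.backward()
print(float(loss))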
2.2.3 Mimicking Emotion

Fig.2 — Documentary "Rest in Pixels" (BBC iPlayer, 2016).

Today we see an increasing number of attempts to mimic human life with machines, with entities such as the Lifenaut project forecasting that fifty years from now it will be considered quite normal to have a back-up of your own mind, capturing people's identity as a "step towards immortality" (Fig. 2). Projects like this are still in their early stages, and telling what is man and what is machine remains an easy task; however, as aesthetics and programming develop it may become more confusing, with projects such as these aiming to get as close to organic life as possible, as imagined in the television programme 'Black Mirror: Be Right Back', which follows a widow who receives a replicant of her recently deceased husband. What these robots currently lack most is any sort of real and genuine emotion. How machines would actually generate human emotion is unclear, and seemingly impossible with our current understanding of how we possess emotion expression. Darwin's investigation into this, 'The Expression of the Emotions in Man and Animals', ignored the belief that emotion expression was conceived by divine design, instead defending the argument that emotion expression is evolved and adapted over time (Hess & Thibault, 2009). Darwin created three principles by which to demonstrate where emotion expression comes from; this evolutionary process is often one we are unaware of, as we react instinctively with emotions we inherited from our parents: "Every true or inherited movement of expression seems to have had some natural and independent origin. But when once acquired, such movements may be voluntarily and consciously employed as a means of communication" (Darwin, 1872/1965, p. 354). A great difference between man and machine can be seen here, as it seems impossible that a machine could ever emotionally express a response that was purely instinctive, one it did not know the reasoning of. We remain at a point where the best a machine can do is copy emotion expression rather than feel it itself, as seen with the Lifenaut robot.

Going forward, we are likely to see the mimicking of human emotion expression developed further within robots. Hanson Robotics creates intelligent robots with lifelike faces that can sense and lock onto your eyes, replicate your facial expression in order to empathise, and hold limited conversation, using a skin-like material called 'frubber' (Robots that "show emotion", 2009). These machines are being used to explore the overlap between man and machine and to help better understand how we interact with one another. Eventually, Hanson wants to humanise robots. How this could affect our human behaviour is hard to imagine, and he stresses the importance of proceeding with caution, "because the law of unintended consequences says that we don't know what effects these new technologies, bio-inspired technologies, are going to have on the human civilization" (Hanson, 2012). The notion of man and machine co-habiting is pursued by institutions all over the world with massive funding, and after investigating it seems almost scarily possible; scarily, because recent fast technological advances have impacted users in ways they probably had not considered prior to engaging with them. The fact that Hanson admits uncertainty over the effects new technologies will have on civilisation and the eco-system is an interesting one.
With increasing research suggesting that recent technological advancements are making us less sociable and less productive (Rosen, 2015), the regulation of this technology will be crucial to the development of our human behaviour. For example, Hanson Robotics developed ways for robots to detect human expression in order to attempt to empathise, but what happens when computers become better than humans at facial recognition? Imagining a future where glasses can detect whether the person in front of us is telling the truth or lying, Norberto Andrade's article for The Atlantic deals with this issue, stating: "At the collective level, the occasional 'little white lie', like the one that leads us to say, 'what a beautiful baby' when in reality you think it is very ugly — is much more than a high-minded lie; it is a fundamental institution in the art of social survival and coexistence." Technology such as this could compel us to live highly superficial lives dictated more than ever before by what is considered socially acceptable, exaggerating effects we can already see in socio-technological developments such as social media.

Applied to an everyday graphic design context, this raises the question of whether machines could replace creatives to some extent, whether through tools such as the Hipster Logo Generator (https://www.hipsterlogogenerator.com/) or through other technological means, such as sites like Fiverr disrupting the graphic design industry (https://news.ycombinator.com/item?id=8152775). There is also a link to be drawn between the changing nature of technology, our increased interaction with it, and the way we behave; after all, we learn emotion expression and behaviour from our parents and those around us through adolescence. What would it be like to grow up entirely surrounded by machines: how would we learn to express ourselves, or would we at all?

2.3 Categorising Humans' Relationship with Technology

2.3.1 'Digital Natives' and 'Digital Immigrants'

Technology grows exponentially, with Moore's Law popularly taken to mean that computer processing power doubles every eighteen months (Moore, 1965), and such acceleration has led to many quickly-formed theories and explanations as to how this will affect our human nature.
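As a rough illustration of that pace, the short Python snippet below takes the popular eighteen-month doubling figure quoted above at face value; the time spans are arbitrary examples, not claims about any particular device.

def relative_power(years, doubling_months=18):
    """Roughly how many times more powerful computing becomes after `years`,
    assuming power doubles every `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

for years in (1.5, 10, 25, 52):   # 52 years is roughly the time since Moore's 1965 paper
    print(f"after {years:>4} years: ~{relative_power(years):,.0f}x the processing power")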
One of the first and most famous of these theories is the idea of 'digital natives' (DN) and 'digital immigrants' (DI). As defined by Marc Prensky (2001a), digital natives are those born after 1980, whilst those born before 1980 are digital immigrants. DN brains are said to be "physically different as a result of the digital input they received growing up" (Prensky, 2001b); their new learning functions are argued to involve: "fluency in multiple media, valuing each for the types of communication, activities, experiences, and expressions it empowers, learning based on collectively seeking, sieving, and synthesising experiences rather than individually locating and absorbing information from a single best source; active learning based on experience that includes frequent opportunities for reflection, expression through non-linear associational webs of representations rather than linear stories, and co-design of learning experiences personalized to individual needs and preferences" (Dede, 2005, p. 10).

Educational games are suggested to be the best way to accommodate these new 'native' brains: average college graduates are said to spend 5,000 hours of their lives reading but over 10,000 hours playing video games and 20,000 hours watching television, so games can be used to engage the more practised areas of these younger brains and are therefore better for learning (Prensky, 2001a). In defence of this, Prensky argues that even if these games teach half as fast as traditional methods, they will likely be played twice as much (the arithmetic behind this claim is sketched at the end of this section).

Although these arguments have been accepted by many, there is very little hard evidence to support them directly; much of the evidence that does exist comes from semi-related studies of young children's homework outside of traditional teaching (Valentine et al., 2016), of those with learning difficulties (SLC, 1996) and of the military (Parmentier, n.d.). Yet those supporting these claims suggest, rather ambitiously, that due to the drastically different nature of DN brains our education systems are outdated, and that educators can either continue using their "suddenly-much-less-effective traditional methods until they retire" or instead recognise the difference and look at new ways "and other sources to help them communicate their still-valuable knowledge and wisdom in the world's new language" (Prensky, 2001b). Whilst to an extent younger people's brains are certainly different, so are those of people from all over the world with different tastes, different upbringings and so much more. To label all people born after 1980 as digital natives, and all born before as digital immigrants, with such little empirical evidence, is seemingly flawed.
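The 'half as fast, twice as much' defence referenced above rests on simple arithmetic; the back-of-the-envelope Python sketch below makes it explicit, treating the amount learned as rate multiplied by time and using Prensky's own figures (the units are arbitrary and the model deliberately crude).

# Treat "amount learned" as learning rate x hours, using Prensky's own figures.
traditional_rate = 1.0               # relative learning per hour with traditional methods
game_rate = traditional_rate / 2     # his concession: games teach half as fast
reading_hours = 5_000                # hours an average graduate spends reading
gaming_hours = 10_000                # hours spent playing video games: twice as many

print("traditional methods:", traditional_rate * reading_hours)  # 5000.0
print("educational games:  ", game_rate * gaming_hours)          # 5000.0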