In a flash, the last TWC lecture is over, and with it the heavy workload from this module. While it seemed a burden at first, as the weeks passed I noticed that reviewing my previous posts on past lectures played a key role in consolidating what I had learnt. Writing these thoughts down has raised endless questions in my mind about technology, sparking my further interest in this limitless subject.
While the module leans towards science, it carries a deeper message within, as the same key point about taking the initiative rather than waiting around was reiterated in every lecture. The lessons I learnt from this module have shaped a more curious and open mindset for me, and they will surely prove useful in my future modules at SMU.
Technology and World Change
Wednesday, 13 November 2013
TWC Week 13 (Session 12)
This week's lecture was on presentations again and was the last in this TWC module.
One of the key concepts that captured my attention came during the presentation on the answer to world hunger. While there are many future solutions to the issue of food supply, many of them seem to have opened new doors for large corporations to step in and take advantage of the market. While GM food has brought numerous benefits, the bullying behaviour of Monsanto and the unclear health risks of such modified food cannot be ignored. It seems that there is indeed enough food to go around, and that the problem lies in its distribution. Hence progress in food technology may not be the key to solving this problem.
A key takeaway from the presentations on transhumanism and from my group's research on mind-controlled prosthetics is that technology is progressing at a very fast pace, but at the price of opening many doors to the unknown. With creations like mind-controlled exoskeletons and prosthetic limbs more powerful than the original human body, the risk of future misuse seems high. As such, it would be wise to start developing an ethical and legal framework on their use before such equipment becomes widespread around the world.
An issue for further discussion would be the rich-poor divide. How much further can this social divide be stretched by increasingly powerful and efficient inventions from the 'richer' countries, while the poor lag even further behind in the technology race?
8/10
Sunday, 10 November 2013
TWC Week 12 (Session 11)
This week's lecture consisted of web presentations by three groups.
One of the interesting observations I made came during the presentation on futuristic buildings, where ideas were drawn from the past, with one of the key drivers being adapting houses to zones prone to natural disasters like tornadoes and earthquakes. Another interesting concept came from the presentation on nanotechnology: contrary to the popular mindset that progress means bigger and faster, technology can also progress by becoming smaller.
One of the key takeaways from the lecture was the importance of analysing current trends, as shown in last week's lecture, since they have an enormous influence on how we shape our future. Another was to always keep sustainability in mind, as seen in the presentation on renewable energy.
7/10
Wednesday, 6 November 2013
TWC Individual Paper (Final)
Artificial Intelligence
Executive Summary
With the recent rapid development in artificial intelligence, future implications and ethical concerns are being considered, and definite guidelines have to be drawn to prevent such technology from developing into something beyond our control.
Firstly, this paper documents the dawn of artificial intelligence and shows how far it has developed to date. It next examines the current situation and level of artificial intelligence, as well as its impact on the world today. Finally, it focuses on important future considerations and their enormous influence, especially if these serious implications and dangers are not addressed early.
Introduction
Artificial intelligence (A.I.) is a wide field with different definitions, all of which involve the science of creating machines with intelligent behaviour (AISB, n.d.). The general basis of A.I. is the assumption that the human brain's thoughts and activity can be explained and replicated in mechanical terms and reasoning. A normal human brain is estimated to have around 100 billion nerve cells. However, it is the neural connections between these cells, known as synapses, that enable our brain to think and function, and they number several hundred trillion (Fry, 2011). This places the power of the human brain far beyond the reach of any supercomputer in existence today, with the potential capabilities of human memory still uncharted (Reber, 2010). Mankind has crossed numerous new frontiers in endless fields, creating wonderful feats of technology and engineering, as well as exploring the boundaries of outer space. However, we still have not managed to fully understand something much closer to us: the human brain (Walsh, 2013). As we will see, mankind has been attempting to explain and replicate the function of the human brain mechanically throughout the ages. While we are still ultimately unsuccessful, the development of A.I. has brought about positive changes to our lives in terms of convenience, efficiency, and the minimising of safety risks and errors, as well as endless other benefits (Kirkpatrick, 2013).
This paper first seeks to document the rise of A.I. before moving on to the current situation and future considerations. Through a detailed examination of the current state of A.I., clues and signs can be gathered about the direction in which A.I. is heading. The current situation already makes evident some of the potential hazards and implications we could face in the future. The numerous concerns about where A.I. could take us are clearly not unfounded, and they will be analysed to seek out potential solutions.
With regard to the limitations of this paper, the key factor is the lack of any real-world examples or case studies to substantiate the future complications, as such scenarios are at least decades away. Another issue is the need for endless assumptions and specialised expertise in A.I. to draw up precise solutions to the problems raised under future considerations. As such, my thoughts and analysis are drawn from a study of the current situation as well as from the theories of other researchers in this field.
Historical Perspective
We will never know for sure where the idea of A.I. originated from. Stories of intelligent robots abound in historical myths of different cultures, ranging from the golden robots of Hephaestus in Ancient Greece, to Yan Zi’s automaton in Ancient China (Grachev, 2006). But what we do know for sure is that in the 1950s, several milestone events which led to the birth of A.I. as we know it today occurred. Alan Turing’s published paper on the possibilities of machines as intelligent as us, together with A.I. research being accepted as an academic discipline led to the Dartmouth Conference in 1956 (Turing, 1950). Many leading researchers in various fields were gathered and set the guidelines and foundation for the development of A.I. for many decades to come (Stewart, n.d.). This also sparked government interest over the next two decades which led to extensive funding in A.I. research, especially on machine translation (Slocum, 1985). As such, A.I. developed at a rapid pace with many programs and algorithms developed to solve problems as well as replicate isolated areas of human thinking (“Stottler Henke”, n.d.).
However, in the 1970s, government reports showed that the results of A.I. research were not living up to the great expectations promised, which led to heavy cuts in government funding. This led to a period of time known as the "A.I. winter". It also exposed a lack of genuine understanding of how A.I. works and an underestimation of its difficulty, as programmers could only create machines with highly limited functions rather than the intelligent thinking machines that were expected (Smith, McGuire, Huang, & Yang, 2006). Overall, progress in artificial intelligence was heavily affected and did not recover until the 1990s.
Fast forward to today, and it is impossible to ignore the huge leaps in technological advancement with regard to A.I. While we are still nowhere near creating anything that can fully replicate a human brain, robots have been created with some form of autonomous function (Veloso, 2013). We are also rapidly entering a new frontier littered with potential problems and concerns, which will be examined in this paper.
Current Situation
The level of A.I. today is a far cry from that of the 1960s and 1970s. No longer are machines limited to mathematical functions and the like. A.I. has become common in our everyday lives and increasingly autonomous. Various machines and robots have even been created that exceed the level of the human brain in limited areas (“Artificial Intelligence”, n.d.).
An example of such machines can be found in the game of chess. Back in the 1960s and 1970s, the idea of a computer beating one of the world's best chess players was seemingly unthinkable. That all changed in May 1997, when IBM destroyed that notion: an improved version of its original Deep Blue beat arguably the greatest player in the history of chess, Garry Kasparov, in a six-game match (“Deep Blue”, n.d.). Today's chess programs have evolved from the complex hardware of Deep Blue, which required it to evaluate every possible move at each point in time, into chess software with much more efficient processing and thinking. The strongest chess engine in existence today, Houdini, has an inbuilt ‘instinct’ which allows it to select only potentially good moves for further examination while merely glancing over weaker moves, making it much more efficient and faster than previous programs (Chessbase, 2012). Such an ability was previously exclusive to human players, and for a computer program to possess it represents a huge step up in the level of A.I. It also opens more possibilities as to how A.I. could grow from here, with evidence that instinct can be programmed in as well. It may soon become a question not of how, but of when robots will become human-like in thought.
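In engine terms, this ‘instinct’ is selective search: candidate moves are ordered by a cheap heuristic so that unpromising lines can be cut off early. A minimal sketch of the idea, using alpha-beta pruning with move ordering on a toy game tree (the tree, scores, and function names here are illustrative inventions, not Houdini's actual algorithm):

```python
# Alpha-beta search with move ordering on a toy game tree.
# Illustrative sketch only -- not the code of any real chess engine.

def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate, order):
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    # "Instinct": examine the most promising moves first, so weaker
    # lines are pruned early by the alpha-beta window.
    kids = sorted(kids, key=order, reverse=maximizing)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate, order))
            alpha = max(alpha, value)
            if alpha >= beta:  # remaining moves cannot affect the result
                break
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate, order))
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value

# Toy position: two candidate moves "a" and "b" with heuristic scores.
tree = {"root": ["a", "b"], "a": [], "b": []}
scores = {"a": 3, "b": 5}
best = alphabeta("root", 1, float("-inf"), float("inf"), True,
                 lambda n: tree[n], lambda n: scores.get(n, 0),
                 lambda n: scores.get(n, 0))
print(best)  # 5: the stronger of the two candidate moves
```

The key point is that good move ordering does not change the answer, only how quickly weak branches are discarded, which is what makes selective search feel like intuition.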
Today's increasingly autonomous A.I. can be seen in Google's driverless cars. Extensive testing has shown the driverless car to be accident-free over more than 500,000 km (Guizzo, 2011). One may argue that such technology is not new, as aeroplanes already have an autopilot feature under which the plane can be flown automatically with minimal interference from the pilot. However, the autopilot in aeroplanes today merely helps the pilot perform his tasks rather than taking over his responsibility (Kramer, 2013). It is also incapable of more complex tasks like taking off and landing. What makes the driverless car amazing is the environment in which the autonomous machine operates. The different sensors of the car working seamlessly together, without any human interference, to drive the car along crowded streets and highways while successfully avoiding pedestrians and moving cars is definitely a major engineering feat. Such a car is still far from being commercially available, with many liability and legal issues still unanswered (Hars, 2013). Such a level of autonomy demonstrates that complex machines can be programmed to operate autonomously even in a challenging and ever-changing environment. Once again, this shows the magnitude of the possibilities for A.I. in the future.
A.I. also plays an increasingly important role in the military. Armies around the world are constantly seeking new ways to reduce the risk of soldiers being killed in battle and have turned to A.I. for alternative ways of waging war (Gaudin, 2013). An example is the Samsung machine-gun robot developed in South Korea. Developed for use along the Demilitarized Zone between North and South Korea, the robot is capable of performing sentry duties, with sensors that can detect human movement and challenge intruders through audio or video communications. Though the robot remains under the control of a human, it does have an automatic mode in which it can decide to fire at an intruder who is unable to provide an access code (Global Security.org, 2011). Today, the possibilities of Lethal Autonomous Robots are being extensively researched, with ideas being developed for a robot that has full discretion over who to kill, without any human interference. The technology for such a machine already exists, as seen in the increasingly autonomous drones being manufactured. Although there is currently an element of human control in all autonomous robots, a fully autonomous killing robot is highly likely to be developed in the near future (Pilkington, 2013). Such technological advancements not only have the potential to change the mode and tactics of warfare forever, but can also limit the loss of human lives by reducing the need for actual human soldiers. While the potential benefits of such robots are enormous, the risk of these autonomous robots operating out of our control cannot be ignored, and will be discussed later under future considerations.
A.I. has also developed to the point where it can replicate human emotions and feelings to a limited degree. In the past, films like Blade Runner could only portray our speculations about the possibility of robots with human-like emotions, through its story of a group of robots who escape to Earth from off-world colonies in an attempt to prolong their lifespans. In 2010, however, the first robot that could develop emotions and form bonds with humans was unveiled, modelled on the attachment process that chimpanzee infants and human babies have (BBC, 2010). Another important milestone was the development of robots that could evolve their programming code and learn the ability to lie in controlled experiments (Christensen, 2009). With an increased understanding of A.I., humans have begun to make progress in replicating basic human emotions and feelings, and are moving on to more complex ones. Though the range of emotions portrayed is still very limited compared to that of a human, such progress is fascinating, as it leaves us to wonder whether robots can one day be ‘human-like’. The possibilities are endless, but so are the future implications and dangers, which will be discussed next.
Future Considerations
With A.I. developing at a rapid pace, we are moving into a new age faced with many ethical concerns and dangers. Disastrous scenarios have been portrayed in many films, such as the Terminator series and I, Robot. Many scientists and researchers have also warned about the potential implications and dangers of a highly developed A.I.
The first issue is the danger of a technological singularity, a scenario in which A.I. has evolved past the point of human intelligence to dominate human civilisation and human nature itself. Numerous A.I. researchers have argued that a highly developed A.I. would have no motivation to maintain any ‘friendliness’ with the human race, and that it may even be mathematically impossible for it to do so. This places such machines as a potential threat to mankind (Keiper & Schulman, 2011). While the possibility of such advanced robots dominating the human race seems ages away, the seeds of the technology have already been planted, as shown in the earlier examples of increasingly autonomous military robots. Different proposals to prevent such a disaster exist. One side of the argument emphasises the need to base all future A.I. developmental goals on a friendly relationship with humans, so as to ensure that the machines never turn on us. However, the level of engineering and coding needed to install such a behavioural pattern is beyond the limits of technology today, and of anywhere in the near future, which makes it an unviable option at the moment. At the other extreme is the view that the level of A.I. today is sufficient, and that trying to improve it will lead us down an inevitable road to a technological singularity (Goertzel, 2011). With such proposals looking unfeasible at the moment, it would be best to take a more neutral approach: let development continue as it is now, with humans keeping a close watch on progress to reduce the chances of such a singularity occurring. A good example is the United Nations calling for a halt on further research and development of lethal autonomous robots until international guidelines on their usage can be decided upon (Bloomfield, 2013).
Another key issue concerns the liability of the robot. A.I. has developed to a stage where robots are increasingly responsible for the highly manual and routinised jobs of humans, as seen in the commercial sectors, raising the question of who is responsible for any breaches of the law committed by them. With such increasing responsibility, it is highly likely that a robot will one day be placed in charge of a life-and-death situation. For such difficult questions, once again, a wide variety of opinions exist. While there are calls for autonomous systems to keep detailed logs that explain the robot's reasoning, such features will no doubt place a limit on the capability of the decision-making systems (The Economist, 2012). Applying ethical systems to such A.I. seems the best way forward, but given the sheer difficulty of translating such guidelines into program code, it is safe to say that such programming will not exist anytime soon. Overall, such technological developments are, without question, for the better of mankind, but it is necessary to answer these critical questions and set strict rules and guidelines in place before embarking on more ambitious projects, to spare us mind-boggling problems in the future.
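The idea of detailed logs for autonomous systems can be made concrete: each decision is recorded together with the inputs and the rule that produced it, so the reasoning can be audited after the fact. A minimal sketch, in which the rule set and field names are invented purely for illustration:

```python
# Minimal decision-audit log for an autonomous system.
# The rule set and field names here are invented for illustration only.
import json
import time

class AuditedController:
    def __init__(self):
        self.log = []  # append-only record of every decision taken

    def decide(self, sensor_reading):
        # Toy rule: brake when an obstacle is closer than 10 metres.
        if sensor_reading["obstacle_distance_m"] < 10:
            action, rule = "brake", "obstacle_too_close"
        else:
            action, rule = "continue", "path_clear"
        # Record inputs, rule fired, and action, so a human investigator
        # can later reconstruct why the system acted as it did.
        self.log.append({
            "timestamp": time.time(),
            "inputs": sensor_reading,
            "rule_fired": rule,
            "action": action,
        })
        return action

controller = AuditedController()
print(controller.decide({"obstacle_distance_m": 4.2}))   # brake
print(controller.decide({"obstacle_distance_m": 55.0}))  # continue
# The log can be serialised for review after an incident:
print(json.dumps(controller.log[0]["rule_fired"]))
```

As the essay notes, the cost of such auditability is a constraint on the decision system: every action must be expressible as a loggable rule, which rules out opaque decision-making.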
One path of A.I. progress currently being explored is the goal of not only creating a highly intelligent machine by replicating the human thought process, but also developing a robot with a human-like consciousness that would be virtually indistinguishable from a human. With such a robot in existence, we would be inclined to treat it as a fellow human being, raising the question of which rights it should be accorded: those of a human, or those of a machine that we can shut down at any time (Roth, 2009). Currently, when individuals are arrested for damage to machines or robots, it is the rights of people such as the owners of these machines that are being protected, not those of the machines themselves. With the existence of human-like robots in the future, the lines between man and machine will be blurred. If civil rights are accorded to such robots, it stands to reason that they would be held responsible for any errors or breaches of the law they commit, opening another huge can of worms, with endless issues such as the differences between software and hardware (Freitas, 1985).
These major issues have to be answered before technology spirals out of our control. The issue of A.I. is rather similar to that of all technological advancements. Like a fire, it can keep us warm, but it can also consume our houses in an instant if we are not careful. As such, the author ultimately feels that the element of human control should always be present in A.I. as a safety feature, keeping machines and robots under our command. This not only reinforces the purpose for which A.I. was created in the first place, which was to help and serve our needs, but also shuts down the possibility of a technological singularity in which machines ultimately dominate us. The element of human control as a safety switch also closes the door on questions of whether robots deserve equal civil rights. In the case of autonomous robots, it is the author's view that they should be kept to the commercial sectors, performing simple repetitive tasks that do not require intelligence or reasoning, rather than deployed in high-risk situations involving life and death, given the extreme difficulty of installing ethical systems in robots at the moment.
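The principle of human control as a safety switch has a concrete software form: any irreversible action the autonomous system proposes must pass through an explicit human approval gate before execution. A minimal sketch of this pattern, in which all class and action names are invented for illustration:

```python
# Human-in-the-loop gate: irreversible actions need explicit approval.
# Class and action names are invented for illustration only.

IRREVERSIBLE = {"fire_weapon", "shutdown_grid"}

class HumanGate:
    def __init__(self, approver):
        # approver: a callable standing in for the human operator's decision
        self.approver = approver

    def execute(self, action):
        # Routine actions proceed; irreversible ones wait for a human.
        if action in IRREVERSIBLE and not self.approver(action):
            return f"blocked: {action} awaiting human approval"
        return f"executed: {action}"

# A cautious operator who never approves lethal actions automatically.
gate = HumanGate(approver=lambda action: False)
print(gate.execute("adjust_course"))  # executed: routine action passes
print(gate.execute("fire_weapon"))    # blocked: needs a human decision
```

The design choice matches the essay's argument: autonomy is retained for low-stakes, reversible actions, while life-and-death decisions remain under human command.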
Conclusion
We have indeed come a long way from ancient history
till today. What used to be myths and stories that people dreamed about have
begun to set foot in reality. The dawn of a new age started in the 1950s with
extensive collaboration between many researches of various fields on the
theories and guidelines of A.I., and was propelled by extensive government
funding. Although the development of A.I. hit a major tumbling block in the
1970s, it managed to pick up momentum once again in the 1990s at a rapid pace.
Machines with capabilities once thought of as impossible were being unveiled
and mankind’s ambition sought to ensure even greater feats of engineering would
occur. Such events would definitely announce the start of a revolutionary new
age and greatly shape our ways of life.
However, the current situation poses enough clues about what could happen if we do not keep a tight rein on our technological progress. While the future considerations raised earlier may not materialise in the near future, they are a real possibility in the decades to come. The main concerns stem from the idea of robots one day operating beyond our control, with intelligence equal to or even higher than that of their creators, humans ourselves. While proposals for ethical systems and guidelines to be built into artificial programming code seem like the best solution, such engineering simply does not exist currently and does not seem likely to be developed in the near future. As such, the idea of always having a human in overall control as a failsafe seems the most practical to the author. Such a system does have its disadvantages: it limits the setting up of artificial neural networks and prevents example-based learning systems, which allow a robot to make improved decisions based on its experiences and are far more advanced than systems based on mere hand-written code. However, the elephant in the room cannot be ignored, and hence it is the author's view that such measures have to be implemented until technological advancements find a way to build ethical guidelines into program code.
Overall, A.I. does have the endless potential to change our lives dramatically for the better. While there may be many concerns and potential problems to face, it is the author's view that as long as we address them early and adequately, we will be able to harness the full benefits without creating unnecessary problems for ourselves.
References
AISB. (n.d.). What is Artificial Intelligence?
Retrieved from http://www.aisb.org.uk/public-engagement/what-is-ai
Artificial Intelligence. (n.d.). Retrieved from http://curiosity.discovery.com/question/smarter-human-or-computer
BBC (2010 August 10). Nao the first robot with
'emotions' unveiled. Retrieved from http://news.bbc.co.uk/local/threecounties/hi/people_and_places/newsid_8900000/8900417.stm
Bloomfield, A. (2013) Lethal Autonomous Robotics: UN
Calls For Global Moratorium On Killer Robots. Retrieved from http://www.policymic.com/articles/39017/lethal-autonomous-robotics-un-calls-for-global-moratorium-on-killer-robots
Chessbase. (2012 October 29). Houdini 3 – the world's strongest chess engine in the Fritz interface. Retrieved from http://en.chessbase.com/Home/TabId/211/PostId/4008591
Christensen, B. (2009 August 24). Robots Learn to Lie. Retrieved from http://www.livescience.com/10574-robots-learn-lie.html
Deep Blue. (n.d.). Retrieved from http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/
Freitas, R.A., Jr. (1985 January 13). The Legal Rights of Robots. Retrieved from http://www.rfreitas.com/Astro/LegalRightsOfRobots.htm
Fry, A. (2011 July 27). A Cubic Millimeter of Your Brain. Retrieved from
Gaudin, S. (2013, October 10). U.S. Army evaluates self-driving, machine gun-toting robots. Retrieved from http://www.computerworld.com/s/article/9243120/U.S._Army_evaluates_self_driving_machine_gun_toting_robots
Global Security.org (2011 July 11). Samsung Techwin SGR-A1 Sentry Guard Robot. Retrieved from http://www.globalsecurity.org/military/world/rok/sgr-a1.htm
Goertzel, B.
(2011 August 17) Does Humanity Need an AI Nanny? Retrieved from http://hplusmagazine.com/2011/08/17/does-humanity-need-an-ai-nanny/
Grachev, G. (2006
September 11) Humanoid robots existed in ancient civilizations. Retrieved from http://english.pravda.ru/science/mysteries/11-09-2006/84374-robots-0/#
Guizzo, E. (2011 October 18). How Google's Self-Driving Car Works. Retrieved from http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/how-google-self-driving-car-works
Hars, A. (2013 September). Supervising
autonomous cars on autopilot: A hazardous idea. Retrieved from http://www.inventivio.com/innovationbriefs/2013-09/Supervised-Autonomous-Driving-Harmful.2013-09.pdf
Keiper, A., & Schulman, A.N. (2011). The Problem with 'Friendly' Artificial Intelligence. The New Atlantis, 32, 80-89.
Kirkpatrick, K. (2013 November). Legal Issues with Robots. Retrieved from http://cacm.acm.org/magazines/2013/11/169024-legal-issues-with-robots/fulltext#
Kramer, M. (2013 July 9). Q&A With a Pilot: Just How Does Autopilot Work? Retrieved from http://news.nationalgeographic.com/news/2013/07/130709-planes-autopilot-ask-a-pilot-patrick-smith-flying-asiana/
Pilkington, E. (2013 May 29). 'Killer robots' pose threat to peace and should be banned, UN warned. Retrieved from http://www.theguardian.com/science/2013/may/29/killer-robots-ban-un-warning
Reber, P. (2010 April 19). What is the memory capacity of the human brain? Retrieved from http://www.scientificamerican.com/article.cfm?id=what-is-the-memory-capacity
Roth, D. (2009 January 19). Do Humanlike Machines Deserve Human Rights? Retrieved from http://www.wired.com/culture/culturereviews/magazine/17-02/st_essay
Slocum, J. (1985). A Survey of Machine Translation: Its history, current status, and future prospects. Computational Linguistics, 11(1). Retrieved from http://acl.ldc.upenn.edu/J/J85/J85-1001.pdf
Smith, C., McGuire, B., Huang, T., Yang, G. (2006 December). The History of Artificial Intelligence. Retrieved from http://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf
Stewart, B. (n.d.). Dartmouth Artificial Intelligence (AI) Conference. Retrieved from http://www.livinginternet.com/i/ii_ai.htm
Stottler Henke. (n.d.). Retrieved from http://www.stottlerhenke.com/ai_general/history.htm
The Economist. (2012 June 2). Morals and the machine. Retrieved from http://www.economist.com/node/21556234
Turing, A. (1950) Computing Machinery and Intelligence. Retrieved from http://www.csee.umbc.edu/courses/471/papers/turing.pdf
Veloso, M. (2013). Autonomous Robot Soccer Teams. Retrieved from http://www.nae.edu/File.aspx?id=7300
Walsh, F. (2013 October 7). Billion pound brain project under way. Retrieved from http://www.bbc.co.uk/news/health-24428162