Conference Program

Conference Program.

The above is a link to the program of the APSAFE Conference, which will be held from November 28 to 30 at Chulalongkorn University. Registration is still open.


International Workshop, “Ethics, Wellbeing and Meaningful Broadband”

International Workshop

Ethics, Wellbeing and Meaningful Broadband

August 16 – 17, 2011

Chulalongkorn University

Introducing broadband to a country such as Thailand has faced a number of challenges. Many of these challenges are regulatory and political in nature. Many groups are vying for a lead position in the broadband game, and no one wants to lose out. This has led to an impasse where nothing is moving. However, a brighter prospect appeared on the horizon when a new law was passed recently, setting up the National Broadcasting and Telecommunications Commission (NBTC), which has the responsibility and the authority to lay out a regulatory framework for the broadband network. Hence it seems that Thailand will have its own broadband network soon.

Nevertheless, a new set of challenges is emerging as a result of the introduction of the broadband network. Once the physical infrastructure is in place, these new challenges include how the network will be used in the best interest of the public as a whole. We have introduced the notion of “meaningful broadband” to refer specifically to these new challenges. How can broadband communication be “meaningful” in the sense that it responds not only to the demand for economic growth, but also to the need to maintain values and goals that are not so directly measurable? These values and goals comprise the meaningfulness of people’s lives. Meaningful values, for example, are present when people do not become mere cogs in the giant economic wheel but retain a sense of purpose and direction that is ethically positive. Hence a number of questions and challenges emerge: How can broadband use be integrated into the traditional lives of the people so that it does not become a mere tool of the seemingly all-powerful new values of consumerism and globalized commercialization? How can broadband fit with, and even promote, the values that are meaningful to the people?

This is the rationale for the international workshop on “Ethics, Wellbeing and Meaningful Broadband.” A number of internationally recognized scholars have been invited to the workshop to share their viewpoints with leading Thai thinkers and members of the public to find ways to respond to the challenges of ethical and meaningful broadband use mentioned above. The workshop aims at answering the following questions:

1) How to operationalize “sufficiency economy”? The Thai constitution requires each Thai ministry and agency, including its regulatory agencies, to further “sufficiency economy,” a principle laid out by the Thai king. The principle has affinities with the Bhutanese principle of “Gross National Happiness.” How is this requirement of Sufficiency Economy or Gross National Happiness being operationalized? Or, if it is being ignored, why? And which government agencies are innovating on this theme? As a new regulatory agency tied to the theme of digital convergence (linking broadcast and broadband), the NBTC represents a new opportunity to position Sufficiency Economy as an overall driver of digital convergence strategies, integrated into the frequency allocation (spectrum management) and taxation strategies of the new regulator, as well as to establish a new interface between regulation and “human development,” a traditional concern of ministries such as public health, culture and education, which have been totally isolated from telecommunications regulation.

2) How to pre-empt government censorship of the Internet? Recently political constituencies and governmental factions have furthered internet censorship in Thailand and in other Asian nations. This is particularly evident regarding online games, gambling, pornography, and, in particular countries, certain sensitive themes: lèse-majesté, Singapore’s sensitivity to criticism, China’s sensitivity to human rights arguments, and Arab countries’ sensitivity to protest movements fostered by the internet. Censorship is an example of “throwing the baby out with the bathwater,” because excluding web sites via censorship often prevents a country from receiving the benefits of internet-based learning aimed at specific goals, and much internet censorship is ineffective and unenforceable for a variety of reasons. Nonetheless we can expect censorship to continue and grow unless “national broadband ecosystems” emerge that are meaningful to citizens and nations. In particular, the needs of vulnerable citizens (the poor, the uneducated, young children) must be protected. What can be done in the design and regulation of new technologies to attract ethically valuable applications of technology and discourage negative impacts? What can be learned from the effort to develop “quality of life indexes,” e.g. those underlying Bhutan’s Gross National Happiness (GNH), to provide objective measures that enable policymakers to exclude or attract certain technologies based on their anticipated ethical impacts?

3) Rethinking “media ethics” for the broadband era: What is the track record of “media ethics” strategies in limiting harm from television and encouraging voluntary compliance by Hollywood or music-makers? What has and has not worked in influencing the behavior of large numbers of users, e.g. young children? What obstacles have prevented greater success of media ethics strategies? Now that the broadband era is introducing multimedia convergence, how is the media ethics field changing? What new opportunities and challenges is it facing? What can be learned from South Korea and other broadband-saturated nations? How can media ethics considerations be effectively integrated into broadband policies before a nation embarks upon its broadband-enabled transformation?

4) Predicting the ethical impacts of broadband: What are the best methods for scenario construction, forecasting and prediction of the ethical impacts of broadband? How can a “wellbeing society” that involves broadband use be visualized and constructed? How can broadband contribute to wellbeing? How do the ethical impacts in poor, less educated countries differ from those in advanced, highly educated nations?

5) Technological determinism vs. human intervention: What are current views regarding the philosophical concept of technological determinism? What is the origin and development of this concept and what do we know from empirical research on this theme — from Pythagoras to Heidegger to McLuhan? What are the technologically deterministic viewpoints that now dominate the broadband era — and what corporate or governmental interests sustain these viewpoints? What opportunities exist to alter the course of next-generation broadband-enabled technologies in order to ameliorate their ethical impacts?


The public is invited to attend. However, space is limited. Please register with Mr. Parkpume Vanichaka by July 31, 2011. Registered participants are invited to the luncheon before the main event on August 16. There are no registration fees.


The workshop will be conducted both in English and Thai, and there will be simultaneous interpretation services.


Workshop on “Ethics, Wellbeing and Meaningful Broadband”

Room 105, Maha Chulalongkorn Building, Chulalongkorn University

August 16, 2011

11.45 Lunch and Registration

13.00 “The Second Wireless Revolution: Bringing Meaningful Broadband to the Next Two Billion,” Craig W. Smith

14.00 “Content Regulations in the Broadband Era: Incentives and Disincentives Based Approach to Content Regulations,” by Akarapon Kongchanagul

14.45 “The Seven Habits of Highly Meaningful Broadband,” Arthit Suriyawongkul

15.30 Break

15.45 “The Anonymous Group: A Look at Online Rebel,” Poomjit Sirawongprasert

16.30 “Give Them the Tools, Get Out of the Way: the Liberalisation of Communication and its Consequences,” Nares Damrongchai

17.15 Closes

August 17, 2011

8.30 Registration

9.00 Keynote Lecture, “Ironies of Interdependence: Some Reflections on ICT and Equity in Global Context,” Peter Hershock, East-West Center, USA

10.00 “Toward a Well-being Society Scenario,” Hans van Willenswaard

10.45 Break

11.00 “From Veblen to Zuckerberg: Past, Present, and Future of Techno-Determinism in Thailand,” Pun-arj Chairatana

11.45 Lunch

13.00 “Computer Technology for the Well-Being of the Elderly and People with Disabilities,” Proadpran Punyabukkana

13.45 “Meaningfulness, IT and the Elderly,” Soraj Hongladarom

14.30 Mini-break

14.40 “Media and Information Literacy (MIL): the Move beyond Broadband Access,” Kasititorn Pooparadai

15.25 Break

15.40 “Right Speech VS. Free speech: Buddhist Perspective and Meaningful Broadband,” Supinya Klangnarong

16.25 “From Meaningful Broadband to Open Infrastructures and Peer Economies,” Michel Bauwens

17.10 General Discussion – Where do we go from now?

17.30 Workshop closes.

18.00 Dinner, “Baan Khun Mae” Restaurant, Siam Square

Moral Robots?

One of the interesting things that emerged from the two conferences (SPT2011 and CEPE2011) I attended in the US in late May was that there were a lot of talks and discussions on “moral” or “ethical” robots. For those of you who are not in the know, robots are now much more sophisticated and advanced. The US military has been developing “killer robots” for some time, and it is now common practice for the military to send unmanned airplanes to target and bomb enemy positions. Development of soldier robots is also underway. The idea is to develop robots which can function much like a soldier, and in combat with the enemy the robot can of course shoot and kill. Quite a terrifying prospect.

Robots are not being developed only to shoot and kill. At the opposite end of the spectrum, there are robots that act as companions for those who need them but cannot find one of flesh and blood. Robots are now replacing humans as companions for the elderly in nursing homes. At least this is happening in the West. Instead of having human companions, the elderly (and in fact not only them) are being provided with “companion robots” which look like either humans or cute pets, and are supposed to be tender and gentle. We can certainly imagine human look-alikes that can talk and show (semblances of) emotions on their faces in nursing homes, providing the elderly with round-the-clock care and attention, much more readily than a human ever could.

These situations call for ethical reflection. A question that was raised during the discussion on caregiving robots was: What does this signify about our own situation? If we are to give our parents and grandparents caregiving robots, what does this tell us about ourselves? But there was another question. Imperfect as the robots are, they are still better than nothing. That is, if there is no one around to care for the elderly, then at least the robots can fill in the void.

I wrote many months ago that a Japanese professor had already developed a robot replica of himself. He also created a robot girl that looked uncannily similar to a real girl. This of course gives rise to the topic of robot sex. Many have taken up this topic and discussed whether it is good or bad for a human being to have a robot as a companion and sexual partner. Is having sex with a robot essentially the same as masturbation, or is it in the same league as having sex with a real human partner?

This may depend on whether robots can be self-aware and conscious. They are not capable of that now, or so it seems, but the harder problem is that we humans do not even have a complete understanding of our own self-consciousness. We are still debating what it actually is, and according to the Churchlands we are essentially deluding ourselves when we think that there is actually such a thing as self-consciousness, or consciousness for that matter. But if the Churchlands are right, then we are also deluding ourselves when we ask of robots whether they can be self-aware or not. They can’t, because even we ourselves cannot, and in fact no being ever could.

Even if the Churchlands are wrong, we still have problems explaining self-consciousness, so presumably we would also have problems explaining why we seem to believe that robots can’t become conscious.

Actually, the problem of whether robots can become conscious does not have to concern us here. What is more pressing is that robots are already around, working as soldiers or caregivers and many other things. What should we do with them? Is it possible to install some kind of “ethics algorithm” into their “minds” so that they become ethical? So a very interesting question is: Can robots become more ethical than us? If so, then what is left of us human beings?

Bioethical Viewpoints: East and West

I am now attending the 11th Asian Bioethics Conference in Singapore. This is a grueling conference where all the papers are presented one after another in one big room from 8:20 am to almost 8 pm. So let’s see what will happen. Four days before this conference there was a bigger one, the World Congress of Bioethics.

The themes of both conferences focused on cultural perspectives on bioethical issues. During the World Congress there was a panel of no fewer than eight panelists who came together to discuss whether issues in bioethics are universal or culturally relative. For example, there has been an ongoing debate about whether issues in bioethics, such as conducting research on human subjects, admit of cultural variety. In other words, since bioethics is a normative discipline, there is the problem of whether those norms transcend cultures or are restricted to the specifics of the cultures in which they arise. In conducting research on human subjects, it is well known that researchers need to obtain signed informed consent forms from the participants (or subjects). In most cases the consent of the concerned individual is enough; the consent is an agreement between the participant and the researcher only. But in other cases that is not enough, and the researchers also need to obtain the consent of the community leader in order to conduct research on individuals within the community. This happens, for example, when researchers go to a remote village and contact individuals there directly. Doing so violates a norm of the village itself, which views itself as a close-knit community where decisions need to be made collectively or through the village leader. Hence the need to obtain consent from the leader in addition to that of the individual herself.

This has generated a lot of debate among bioethicists. Key to the debate is the question of what justifies the need for community consent, and also what justifies the need for individual consent in the first place. This is where philosophy can be very useful. But what happens is that when philosophers deal with these issues of justification, they find that different cultures look at the issue differently. One culture may regard the requirement of community consent as superfluous, or even as an encroachment upon the autonomy of the individuals themselves: if somebody can make a decision about your body on your behalf, then you do not have much control over yourself to begin with. On the other hand, another culture may believe that the additional judgment and decision making by the village leader is necessary, because the individual herself is not an isolated entity existing apart from others. The community is a self-subsisting entity, of which the individual is a part. For an individual to make a decision on her own, such as to allow the researcher to perform research on her body, would mean that the individual is somehow cut off from the community, since the decision comes from herself alone. Furthermore, in real settings the individual may feel that she needs to consult the leader, who speaks for the whole community, because she defers to the leader’s wisdom on this kind of thing.

Bioethicists have been debating this issue for quite some time. At issue, of course, is the question of whether community consent is justified. According to some ethical systems, it is not necessary, because the individual should control her own destiny, and for others to decide things for her would be to limit her freedom and autonomy. But according to other systems, it is justified because the individual’s ontological status is different: instead of being fully autonomous, the individual in these systems is only part of her own community.

How can we resolve this issue? The debates surrounding cultural perspectives on bioethics are actually about whether judgments in bioethics are universal and culture-transcendent, or whether they are culture-specific. In addition, the debate is also about “Eastern” and “Western” perspectives. The two kinds of debate are not exactly the same (although many bioethicists have tended to conflate the two). Furthermore, there is a third kind of debate, conducted directly between the two perspectives. These need to be spelled out clearly. The first kind of debate is between those who believe that ethical norms are universal and those who do not. The second kind is between those who believe that the Western perspective is universal and that all other perspectives are wrong (this also includes those who believe that only the Eastern perspective is right — they may differ about who is right, but they agree that, of the two views, at least one must be true). The third kind is a straightforward debate between the two perspectives: instead of talking about “East” or “West,” those who enter it focus their attention on the concrete issues at hand, such as how to obtain informed consent from participants, or the best policy for surrogate motherhood, and so on. Representatives from Eastern and Western cultures can enter a debate of the third kind without even realizing that they come from different cultures.

If this is the case, then we first need to be clear about the level at which a given debate about cultural perspectives on bioethics takes place. It seems that most debates are of the second kind, that is, debates as to which system is universal. Most of the World Congress panelists believed that their judgments are universal and should be accepted and enforced by all cultures. In fact we need to take this position, because if we did not — if we believed instead that the validity of arguments depends on where you are from — then there would be no point in having intercultural discussion at all. So the standard of good argument needs to transcend cultures.

I think what is lacking in these debates about cultural perspectives is an argument aiming to show that a judgment stemming from a non-Western culture is a universal one that should be accepted by all bioethicists. For example, the view that the individual is embedded within a web of social and cultural relations, and actually depends on that web for her being, should be accepted universally, because it would help solve a lot of the problems that we are facing globally in bioethics. It would emphasize the importance of compassion and sympathy, for example, but unfortunately this was not mentioned much at all during these meetings.

Can a Bodhisattva Kill?

One of the most controversial topics in Mahayana Buddhism touched upon in Kunga Sangbo Rinpoche’s teaching at the Bodhgaya Hall last Thursday is the act of killing by a bodhisattva. Anyone with even a very basic understanding of Buddhism knows that killing is prohibited by the precepts. The first precept says specifically that one should not kill, since the act of killing leads to bad karma that moves the one who kills further away from liberation from suffering. But there is a troubling and difficult story according to which it might be all right for a bodhisattva to kill. Understanding this story correctly would certainly lead to a fuller understanding of the Buddhist message as a whole.

Let us look at the story. Suppose there is a crazy and evil man who is about to push a nuclear bomb button, which will result in the death of millions of people. Suppose further that the only way to stop this man is to shoot and kill him (and that shooting merely to maim him is not enough). Then the bodhisattva, realizing that the man is about to commit a grievous sin which would lead to countless lives in the lower realms, kills him in order to save him from committing the crime and also to save the millions of lives. In that case, is the bodhisattva justified in doing so?

Let us remember that in Buddhism it is the intention or motivation behind an act that is key, and not the actual nature of the act itself. Thus even if the nature of an act is one of killing, it is ultimately the motivation behind the act that counts. Hence the bodhisattva is justified in killing the mad man because his intention is pure.

This is difficult to understand. Usually we are taught that when an act is prohibited, it is the act itself that is prohibited. But that is not the case in Buddhism. The reason the first precept enjoins us to refrain from killing is that, in the overwhelming majority of cases and situations, killing incurs bad karma because our motivation is not pure. It does not happen every day that a mad man is rushing to push a nuclear button. In most cases when we kill, we do so out of either hatred or desire — and in either case the act becomes unwholesome and will contribute significantly to lives in the lower realms. But when the intention is to stop a person who is about to commit a grievous sin, and to save other lives, then it might well be all right.

Even so, however, the bodhisattva himself does incur a significant amount of bad karma; it is possible that the bodhisattva might have to be reborn in hell for a number of lives as a result. But being a bodhisattva, that is a risk he is willing to take. It is better for him to go to hell alone than for millions to go there. This is precisely the mindset of a bodhisattva.

What is very dangerous about this teaching is the risk of misreading it: it is absolutely not intended to give everyone licence to kill. If you are not a realized bodhisattva, chances are that you are still afflicted with the kleshas, or defilements, that cloud your mind. In that case it is always best to refrain from any form of killing.

The Soul of the Robot

One of the most discussed topics at the 5th Asia-Pacific Computing and Philosophy Conference (APCAP 2009) at the University of Tokyo was the ethics of robots. This is not so surprising, given that Japan is one of the leading countries in robot technology, and thinking about robots which look like humans and do what humans can do naturally makes it necessary to ponder how these powerful robots can behave ethically. Robotic technology has advanced to such an extent that it is no longer far-fetched to start thinking seriously about robots which are capable of making autonomous decisions and can even think on their own. In fact robots have beaten humans in many areas that require thinking, such as chess and algorithmic mathematics. We need to be able to anticipate the time when robots are conscious just like us, capable of using and understanding language. Since they will be much more powerful than we are, thinking, autonomous robots pose a very serious threat to human security. It is possible that even our survival as a species is at stake once robots are capable of complete independence from human supervision and guidance.

So the main task of the emerging field of “robot ethics” is to design robots which are capable of making ethical decisions and behaving ethically. In order to do that, it is necessary to understand fully what really makes an action “ethical” and what principles lie behind ethical behavior. This is not an easy task at all. In the end, thinking about robot ethics makes us understand ourselves better. Why are we ethical beings? What kind of mechanism lies behind ethical behavior? How can we teach someone to understand the need for ethics? These questions are as important for us as for the emerging autonomous and conscious robots, perhaps more so.

The conference started with a keynote talk by Hiroshi Ishiguro, who gained worldwide fame through his research on producing lifelike, humanlike robots, which he calls “geminoids.” The word comes from the zodiac sign Gemini, whose constellation resembles a pair of twins. So ‘geminoid’ means something like a twin. Let us look at a picture of Ishiguro and his robotic twin:

Ishiguro also showed this picture during his talk in Tokyo, but I have kind of forgotten who was the real Ishiguro and who was the geminoid. My guess is that the one on your right is the real professor and the one on the left is the geminoid. Ishiguro talked about how he engineered the geminoid. He said that he installed a sense of ‘touch’ in the robot, so that if you touch it, it can make some kind of response. He showed a video of another robot, one which does not look like a human. Somebody touched the robot on various parts of its body, and it turned its head toward the source of the touch and even looked up to see who was touching it. The geminoid also has the capability of “talking” (through a speaker), and it can make a variety of facial expressions.

All this brings us to ask whether a robot can have a soul. Of course Buddhism does not recognize an eternal soul, but metaphorically we can certainly talk about a being having a ‘soul,’ meaning that it has a mind, thoughts, feelings, and emotions. If we can finally have a robot which can really think just as we humans do, then does the robot have a soul in the same way that people say we humans have a soul? By having a soul, I mean having a kind of inner representation: I represent myself to myself, thinking about myself and setting myself apart from everything else in the universe. If the robot is fully conscious, it has to be able to do the same in every respect. That is, it must be able to think in terms of subject and object. It must be able to represent itself to itself and see that it is completely different from whatever is outside. In other words, the conscious robot has to have a sense of the ego. It has to be able to refer to itself using the first-person pronoun, ‘I.’

But if this is the case, then robots are no different from humans. As humans are capable of being released from the bondage of samsara in this very life, so are fully humanlike robots. If the robot can represent itself to itself using the first-person pronoun, then what this means is that the robot falls under the spell of ignorance (avidya), believing that there is an ‘I’ that is the core of the person, in need of great care and protection.

I have said that thinking about thinking robots can provide us with insights into how to understand a human being. If a robot can have consciousness, then consciousness does not require the presence of an eternal soul that animates an organism; what is there physically suffices. Buddhism has nothing against that. But then there is the question of how we can account for the inner life, the subjective experience that all of us have. This may be something that is not there substantially in the world. It is only our representations to ourselves, leading to our attachment and unchecked belief in the ‘I,’ that give us a sense of there being a concrete, substantial ‘I’ that looks so formidable.

So perhaps this implies that Buddhism would have less against robots than the other religions, especially those that insist that human beings were created in the image of God. However, Buddhism does have its own problem. If robots and humans are in the end not too different, then must it not be possible for a human being to be reborn as a robot, and vice versa? This question obviously did not make it to the Tokyo conference, but it does merit serious consideration, I think.