This podcast was recorded at EPFL Lausanne and in it we’re exploring the ethics of computer science and broader technological change. Ben Robinson is joined in conversation by: Jim LARUS (Professor and Dean of Computer and Communications Sciences at EPFL); Philippe GONZALEZ (Senior Lecturer in Sociology at the University of Lausanne) and David GALBRAITH (Partner at Anthemis VC).
Ben Robinson (BR): Welcome to Aperture. For this episode we are at EPFL and we are discussing technology in society. For this conversation we are with Jim LARUS, who’s the Professor and Dean of the School of Computer and Communication Sciences at EPFL. We’re with Philippe GONZALEZ, who is a Senior Lecturer in Sociology at the University of Lausanne. The two teach a course together called Global Issues, which looks at the social implications of technology. And then we also have David GALBRAITH with us, who’s a Partner at Anthemis and a regular on the podcast; David’s here to make sure I get value from this conversation and to ask some provocative questions. So Jim, Philippe, maybe we can just start by you explaining Global Issues: what’s the course and the subject matter that you cover?
we want the students to appreciate that there’s a larger world out there, there are policies, there are social issues, there are legal issues, there are sort of unintended consequences from technology. That is fairly common, it’s nothing to be surprised about but it’s something that a lot of students are just unaware of — JL
Jim Larus (JL): Sure, happy to. Global Issues is a course that we require of all of the first-year students at EPFL. It doesn’t matter what subject they’re going to take at EPFL, and it’s taught in sections of about 100–120 students, with a particular focus for each section. The section that Philippe and I teach is called Communications, which we’ve interpreted to mean anything related to the Internet, and that’s pretty broad; we cover a lot of technology that way. But the idea behind the course is that the students here are basically going to spend five years studying science, technology and engineering, and it’s going to be a very technical, very focused education. They’re going to come out with a lot of skills, but they may not realize that what they’re building or what they’re discovering is going to be used in a larger context: people are going to take their inventions, their discoveries, and use them for whatever purposes they want, and sometimes these uses of technology have larger implications. Obviously something like the Internet has had tremendous impact on society, in ways that were impossible to predict 10 or 20 years ago when it really took off.
And the idea of this course is to get the students to actually think about this during their education, instead of finding it out when they get into the real world and start realizing: oh, what I’ve learned, what I’m doing, actually has implications for the broader society. We want the students to be aware of this and start thinking about it now. As principled, ethical people they need to think through some of what they do; sometimes they actually need to be advocates for a particular position, to take a stand on how their technology is going to be used, because they are going to be the ones developing it. Particularly in our school, with computer scientists, a lot of the information technology is going to be developed by students with backgrounds similar to theirs who will be working in companies. And so we want the students to appreciate that there’s a larger world out there: there are policies, there are social issues, there are legal issues, there are unintended consequences from technology. That is fairly common, it’s nothing to be surprised about, but it’s something that a lot of students are just unaware of.
BR: And how, or when, did you decide to introduce this course? Was it in response to some of the scandals we’ve seen around Facebook and others, or was it just something you thought was missing from the curriculum?
JL: I didn’t actually introduce it. It’s been here for longer than I have, so more than six years. I think EPFL decided on it probably about a decade ago, not in response to anything in particular. A lot of the sections of the course have nothing to do with the Internet, nothing to do with the recent scandals; there are sections related to health issues, sections related to climate change, larger societal issues as well as information technology.
BR: So in a way EPFL was quite early with this then, because you don’t see many of these courses, and yet a lot of people are talking more and more about the importance of grounding tech work in philosophy and discussing the broader implications.
JL: Yeah, I think that’s true; EPFL was pretty advanced on it.
Philippe Gonzalez (PG): Yeah, it started more than a decade ago with what we call Le Collège des Humanités (the Humanities College), which was the idea that engineers here at EPFL would really benefit from being exposed to social sciences and history, and it was a way of having this collaboration between the University of Lausanne and EPFL. So we’ve had social scientists, historians, people in literature giving classes here, but the next level was teaching together, and I think that’s the strength of this Global Issues curriculum, because every week we teach together for two hours: one hour is the lesson about technology, about the specifics of the Internet or cryptography, and then for an hour they listen to sociology and history. And I try to show them that some things in anthropology, for example the invention of writing, have a major impact on the way we can see the Internet.
BR: So I guess in general you would argue that it’s important for any student to consider the broader implications of what they’re working on, but would you consider it even more important when people are working with technologies that can, almost by definition, have such a massive impact? And maybe I can express it more provocatively: do you think that the Facebook scandal might not have happened if philosophy had been taught more broadly?
JL: (Laughs) Depends on who it was taught to.
Media is a way of organizing a society, a way of defining what is a common good, defining who’s in and who’s out. So with the Internet we have new ways of relating one to the other, of creating collectives and nations, and this has an impact because it’s redefining what is public and what is private in a way that’s never been thought of before. — PG
PG: I think that, first of all, what we try to tell the students is that some of the things we think are new in technology are not really that new; some of them have been around for hundreds or thousands of years, and they already had implications for the way society was made, thought, enforced. So one of the first things I tell the students is, for example, that when writing was invented it was about kingdoms: ruling kingdoms and ruling the economy. Writing was not something for the people, it was for the elite, and so it was a very vertical society, and that goes back to the Neolithic, to the invention of agriculture. Writing has been around for 6000 years, but it was used to enforce all those old societies, like the pharaohs in Egypt or Babylon, for example. So I’m telling them: see, in the Internet you have that old technology, and at the same time we have things that come from the French and the American Revolutions, because we have this idea of the free press. So we have to deal with the Internet as a complex tool that inherits previous technologies, like the printing press and the invention of writing. The invention of writing was to control society; the invention of printing, and the revolution, was to think of a more democratic, free society. We have the two things combined in the Internet, and now we have to be aware of which side we’re pushing.
David Galbraith (DG): I was going to ask this one thing. The invention of the printing press certainly had a role in the Reformation, and it literally rewired society. And we can prove this by looking at, let’s say, the distance from Mainz to cities like Hamburg or Seville, places that literally invented new types of organizations: joint-stock corporations. The replacement of feudal societies with what we today think of as corporations happened because of that. Now we have something which seems like a similar phase change. We have technology where everyone has a channel, everyone has a voice, and we’ve gone beyond the broadcast medium of printing. Is this going to bring a similar phase change in terms of the types of corporations, the types of structures that we will see?
We start seeing the implications of applying new technology to politics, where you can narrow cast your political message to a group that agrees with you. And then we lose one of the arguments for free speech which was: you put your ideas out in public and if people disagree with them, if they think they’re wrong they argue against you. Well that doesn’t happen anymore on the Internet. — JL
PG: Absolutely, it changes. In media you always have to think that a medium is a way of organizing a society, a way of producing goods, defining what is a common good, and defining who’s in and who’s out. So with the Internet we have new ways of relating to one another, of creating collectives, collectivities, nations, and this has an impact because it’s redefining what is public and what is private in a way that’s never been thought of before. That’s one of the things we say in class: if you’re not paying for it, you’re probably the product. So it’s redefining the way I, as a consumer, enter the market: I feel like I’m the one buying something when in fact I’m the one being sold, through the marks I leave along my path, and I’m not aware of that. And now nations are taking a stance with regulations; for example, if you browse the Internet in Europe you have to accept the policy about cookies. It’s a new enforcement, because we started to see the consequences of being tracked, so we’re starting to be aware of where this new technology is pushing our democracies. We’ve seen that with the Cambridge Analytica scandal around the Trump election: social networks are not only for finding old mates from school, they are also powerful ways to influence geopolitics.
DG: And so the current debate is focused a lot on privacy, but one person’s privacy is another person’s secrecy, so how much is this not just an issue about privacy per se but an issue about balance? For example, in the Reformation we had… before society settled down into a new configuration.
JL: So one of the points we make is that none of this is all good or all bad. And we actually explain targeted advertising to the students as part of it. Frankly, it’s amazing to me, but it’s a revelation to many of the students that this is even going on. They’re very well informed, they’re smart, but they have no idea that they’re actually a product being sold. They think that Google gives them a search engine for nothing, that Facebook gives them this connection to their friends for nothing, and that these companies are doing it out of the goodness of their hearts. They don’t realize the flip side of the market.
And at that level, a couple of years ago, it didn’t appear to be too bad. But then you take this technology, which has been honed by a lot of very smart people and has become extremely, extremely good, and you ask what other uses it’s going to be put to, and politics is a very obvious one. And then you start seeing the implications of applying this technology, where you really narrowcast your political message to a group that agrees with you. And then we lose one of the arguments for free speech, which was: you put your ideas out in public and if people disagree with them, if they think they’re wrong, they argue against you. Well, that doesn’t happen anymore on the Internet. And so we try to explain to the students, as part of this, that it’s not a simple process. The technology is there, and the technology can be used for many good purposes: ads that are shown that are relevant to you are certainly more valuable than ads that are totally wrong for you, both to you and to the advertisers. But there are things that can happen because of this that are not predictable. I don’t think anybody 10 years ago, when Facebook was a much smaller company, would have said that it would change the election and make Donald Trump the President of the United States. But clearly it has.
DG: Do you think it is Facebook? Or do you think it’s the configuration of the network? Is there a possibility…
JL: If it wasn’t Facebook, it would be some other social network. I think Facebook is particularly effective because of its scale and because it’s a social network: the value of it goes up with the number of people, and it basically has everybody, so its value is much higher than other social networks’. But if it wasn’t Facebook, there would be some other social network… before it was MySpace. If Facebook hadn’t come along, there would have been something else.
DG: But are there any types of social networks that you think, in that configuration, are more powerful? The President of the US uses Twitter, for example, not Facebook, traditionally.
JL: He uses Twitter because Twitter is a perfect medium for him to just broadcast without any filtering. But he’s not engaged in a discussion; he’d like to prevent a discussion… you know, he is fighting in court to be able to throw people off of his Twitter feed, which in the US seems to be blatantly impermissible, but he’s still arguing that he should be able to do this. So it shows that it’s not really a social network in some senses; it’s a super powerful mechanism for doing press releases.
DG: Right, and anti-social network.
the old Anglo-Saxon philosophy that led to free speech was premised on this idea that it would be beneficial in eliminating incorrect or bad ideas because they would be put into the public, into the market. That is definitely not the way the Internet works. You don’t see what’s going on, you have this amazing ability to find people who will agree with you because of the advertising technology. So I’m a little wary of taking all these arguments for free speech and just saying “yeah, they should apply to the Internet” — JL
BR: So as I said, we will come back to questions of monetizing the internet and the trade-offs of privacy, and we also want to come back to the ethics of data science… but we just want to get a bit more from you on this subject of discourse and the quality of discourse, which is… to go back to this analogy of the printing press and the Reformation: initially things got worse but eventually things got better, right? The demos became better educated and better able to hold society and democracy to account. So I guess the question is: are you guys optimistic? Because, as we’ve been discussing, in this new world the discourse has been challenged, because everybody has a voice and people with more extreme views potentially have an outsized voice, and so we’ve lost this sense of balance. But do you think that over time we will establish the checks and balances such that the discourse will improve and this will be a fundamentally good thing for society?
JL: You have to hope so… hopefully not as deadly as that time, because we have far worse arms than they did back then. You have to be sort of optimistic that these transitions are painful but that you end up in a better place in the end. This goes back to one of the other points I make, which is that people underestimate the radicalness of the change the Internet brought. Until the Internet, to disseminate anything, to disseminate information, it had to be physical: if I wanted to share something with you I had to write it down on a physical piece of paper, and if you wanted it I somehow had to get you a physical object. There was a cost to it, and that served as a moderating function. Book publishers in general wouldn’t publish complete nonsense because nobody would buy it and they would lose money on it; they would lose the money up front to produce the copies. So there was this moderating influence, even though there were plenty of extreme examples. This is gone, right?
The cost of disseminating something, for better or worse (and I think in many cases it’s definitely for better), is essentially zero, right? I can communicate with pretty much anybody, anywhere in the world at zero cost; I can send them large amounts of information at zero cost and in zero time. We’re not used to this. Even before the Internet really took off, back when it was mostly academics using it, we saw this phenomenon where people behaved very poorly in groups online. The norms of behavior changed: they would do things they would never do in person, never do face to face with another person. They would have no qualms about calling people names, insulting people, attacking their motives, when they would just never do this in the real world. We saw this and it was kind of a strange phenomenon, but it was marginal at the time, in small groups. We now see it on a much larger scale, and we see it being utilized for bad purposes, I think, by a lot of politicians and a lot of political movements. This is going to be a hard thing to get over, because it’s clearly something in us where anonymity, or even just distance from another real person, lets us behave in ways that we don’t as members of a face-to-face group. And it’ll be hard to reconcile that with the fact that we now have a network with global reach, which allows you to do this with anybody anywhere in the world. So in some sense the two movements are very much going in the wrong direction.
DG: And do you have a view on the political dimension of this? Because the left and right have very different views, if there is such a thing as left or right anymore, but they certainly seem to be operating very differently on issues such as diversity, for example, or whether people who really care about minority issues should have a voice and whether the majority should dominate the discussion or not… so these things are nuanced. There’s no straight answer, but there are certainly very, very different views on these things. Do you have any view on this?
JL: You know, I’m an American, so my view is shaped by the view in the United States that free speech should be allowed and encouraged up to fairly clear but pretty far limits, like advocating violence or injury… but I have to say that I think a lot of the argument for free speech, a lot of this old Anglo-Saxon philosophy that led to it, was premised on the idea that it would be beneficial in eliminating incorrect or bad ideas, because they would be put into the public, into the market, and that is definitely not the way the Internet works. You don’t see what’s going on; you have this amazing ability to find people who will agree with you because of the advertising technology. So I’m a little wary of taking all these arguments for free speech and just saying yeah, they should apply to the Internet.
DG: So the Chomsky idea that you can manufacture consent; there’s this idea that maybe you don’t even need to do that conspiratorially, maybe it’s a flaw in the way human beings are, that we self-configure dystopia?
The printing press in the Reformation brought a new technology and a new society. We had war but we also needed strong institutions that showed us how to use that technology and I’d like to mention two institutions. One is what we call the encyclopedia […] we’ll try to share knowledge and to invoke reason and that will be a strong tool against despotism, against absolute monarchy…. The second big institution is the free press, a concept which started before the English revolution, but the ideas we have about privacy come from the French and the American Revolutions, so we’re still living in that world. Actually right now we have a powerful media that is undermining those ideas. And we’re still trying to come up with new concepts to understand the links, I mean the articulation, between publicity and privacy. — PG
PG: Well actually, that’s really interesting. I believe it’s something inherent to being a human being, the fact that you have to trust someone else to give you some information. Let’s say, if I want to know what it was like for my father to be a kid, I have to ask him and he tells me a story. So one of the ways we get knowledge is listening to someone telling us a story. That’s a default position, but I think it’s a serious advantage of our species when it comes to evolution, because then we can share, we can trade, and we can transmit that knowledge from generation to generation and spread it across the globe. So that’s one of the things, but of course trust can be twisted.
So I think one of the issues with the Internet right now is that it started on a paradox. It was something military, a military experiment, which was taken over, in a sense, by libertarians who were advocating for free speech, for a new continent. So you have the two things built into the Internet. You have something like a kind of centralized control, even if the military experiment was about resilience: if a bomb is dropped in a place and I want a signal to go somewhere, I can bypass that place and send the signal… but it was this idea of a chain of command, and we see that right now, in all the fears we have about facial recognition and the way China is collecting information about its citizens.
That fear is a fear that democracy as we’ve known it might disappear. Right now the UK is investing a lot of money in video-surveillance cameras, and the issue is whether they’ll match the cameras with facial recognition software. And that’s a major issue because, in a democracy, to film and recognize the face of a citizen walking in a public place, that’s a serious question. So one of the things is that yes, the printing press in the Reformation brought a new technology and a new society. We had war, but we also needed strong institutions that showed us how to use that technology, and I’d like to mention two institutions. One is what we call the encyclopedia. You have all these guys, Diderot, Voltaire, who were philosophers and said, okay, we’ll have something like the Republic of Letters, the republic of the academics, and we’ll try to share knowledge and to invoke reason, and that will be a strong tool against despotism, against absolute monarchy… and actually they won. I mean, the world we’re living in, the society we’re living in, in Europe and in America, was made by their ideas. The second big institution is the free press. The free press started before the English revolution, but it truly thrived under the French and the American Revolutions, and the ideas we have about privacy come from the French and the American Revolutions, so we’re still living in that world. Actually, right now we have a powerful media that is undermining those ideas. And we’re still trying to come up with new concepts to understand the links, I mean the articulation, between publicity and privacy.
DG: This is not something I agree with, but recently in a conversation someone who’d lived in China for seven years said that it’s the West that’s getting it wrong: that the Chinese see the inequality in Hong Kong and do not want it to spread to the mainland, and that the reason they’re clamping down is because they want to keep, you know… Xi Jinping wakes up every morning with one thought on his mind, which is to keep the country together. And the only way you can see that happening is to not let inequality run wild. In previous periods of technological progress, we know it creates enormous amounts of wealth, but, as Piketty has argued recently, it also creates inequality. It seemed to happen in Rome, and it created an enormously long period after the fall of Rome of people being interested in stories rather than technology and reason.
The Internet started off as a totally decentralized organization. There was very little control, it certainly was not top-down, and it was by design. It was taken over by academics, by libertarians, and it was actually very badly built because of this, because nobody thought about the issues of what would happen if you put all the world’s population on it. Just to give a very tangible example… there’s no notion of identity on the Internet. — JL
JL: But you know, China has vast inequality; they have the second largest number of billionaires after the US. And if you just go away from the coastal cities where the money’s been created, the 300 or 400 million people that are still left on the farms are poor. They’re not as poor as they used to be, but there’s a huge gap between them and their children, or whoever has gone to live in the cities without the benefits of being permanent residents of those cities. So I think China has an inequality problem, and I think a lot of the control of the Internet, and I’ll get back to that in a second, is very much political: they want to show the dominance of the Communist Party. And a lot of their concern with Hong Kong is that it’s a revolt against the authority of the Communist Party.
DG: In fairness, there is some evidence that inequality within the big Chinese cities is dropping, whereas Hong Kong’s is increasing.
JL: Yeah. So inequality across China is still fairly large. But just going back to that, I would correct a little bit of what Philippe said, and I think this is one of the interesting parts of the course. The Internet, in my view, started off as a sort of totally decentralized organization. There was very little control, it certainly was not top-down, and that was by design. It was taken over by academics, by libertarians, and it was actually very badly built because of this, because nobody thought about the issues of what would happen if you put all the world’s population on it. Just to give you a very tangible example: there’s no notion of identity on the Internet. You can’t tell who you’re communicating with; I could put up an account named Donald Trump and, except for the fact that I would be more reasonable, you couldn’t tell me from the President of the United States. This is a fundamental flaw that people look at now and say, how could we have gotten it wrong? But at the time it made a lot of sense to build an Internet that was simpler; the strong trust, the cryptography that we needed to do it right, didn’t exist at the time, so it would have been a very difficult task. And for a long time people thought this was actually a strength of the Internet: it was decentralized, there was no control, it wasn’t top-down, and there was a saying in the 90s that “the Internet interprets censorship as damage and routes around it”. But China proved this totally wrong: if you have enough money, technological smarts and people, you can control the Internet. And so China has one of the most interesting internets, but it’s a different Internet than we have; commercially they use it for far more things than we do. It’s a mobile Internet as opposed to our more desktop-based Internet, because they started later.
There are a couple of apps there that are much more interesting than the apps we have on our phones, things like WeChat, which allows you to pay and interact with people in ways that we can’t in the West, because we have a more static banking system. So they have the benefits of a lot of the Internet, but they have absolute political control. You cannot use it for expressing political opinions that run counter to the government: you will be found, you will be stopped, regardless of whether there is strict identity or not. So this is interesting. Twenty years ago people would have said no, you couldn’t have done this with the Internet, and China proves them wrong. And it’ll be interesting to see in the future which Internet wins.
BR: We want to come back to this question of the different manifestations, the different types of internet: the Chinese internet, the US internet, whether there’s even a concept of a European internet…
… but I just want to go back to something Philippe was talking about, the institutions that grew up in response to the last significant change in the way we communicate. Do you think that the balances and the institutions we need for the digital world will happen organically, that people will configure their activities in a different way, or does it take the intervention of the state? And how would that work? Because we’ve talked a lot on other podcasts about GDPR and whether that’s really, actually, any help at all. So does the state need to get involved, and if it does, what might that state intervention look like?
We’re coming to the Tragedy of the Commons. Traditionally, two solutions have been offered: one is privatization, and option number two is that the state takes care of it. But then came Nobel-prize economist Elinor Ostrom, who said “well actually we can think about collective action, the commons, the way we share a common.” And I think that’s one of the most interesting ways of thinking about building new institutions for the web. It’s neither the individual nor the state. If I share a common purpose and enter into a collective and we start thinking together, like in the wiki philosophy… then it’s something in-between and we can arrive at some very interesting solutions. — PG
PG: That’s a very good question.
JL: You know, I think China has already intervened and decided what the model is going to be in China. I think the US is struggling with it, because free speech makes it very difficult for the government.
BR: Can I ask a different question then, which is: if you take a platform like Facebook, does it have a duty to protect the truth? And can that even be done at scale?
JL: So it depends on who’s answering that question. The answer in China is that it has a duty to protect the truth as defined by the Communist Party, period. In the United States there is no duty, because it is a private institution, and private institutions are not covered by the Constitution… there’s a very permissive law which allows them to take no responsibility for what’s posted on Facebook. In Europe I think it’s different; I think there is a wider range of opinions about the obligations of companies, and an ethical point of view that companies have to take responsibility for the consequences of their decisions, their actions, their behavior, and so there’s a lot more freedom to negotiate. We’ve seen this with Germany, for instance, forcing them to take down all mention of Nazi positions quite quickly. So I suspect that the answer will depend very much on where you are, where you live and what government you’re under. And then there are things like GDPR, which has far-reaching implications, or even its earlier version, the right to be forgotten, where you could remove information from the Google search index.
BR: The interesting thing, I guess, about the GDPR approach is that it’s trying to give us as individuals more power over our own privacy and our own data. But you yourself said that you’ve got students… these students that come to EPFL are the top 1% of students in terms of academic ability, and if they’re not even aware of some of these tradeoffs…
Another way to look at GDPR is that it didn’t go far enough — it basically just gave you informed disclosure and consent, so you can say yes or no, but really what was missing was the fact that you’re giving something to this company that you’re interacting with and they’re monetizing it and they’re not paying you anything for it except by providing a service which may not reflect its value. So one very interesting idea is this notion of selling your information to companies — it becomes an actual transaction, but then you can make this trade off “do I really need the money, or do I want my privacy”, because right now the tradeoff is “do I want to use this website or do I want my privacy?” because you do want to use this website. — JL
JL: But it’s even worse than that, actually; look at our experience with privacy or security. You get this pop-up on the screen that says this is not a good idea for the following reason; do you want to do this? There’s a huge amount of research that says 99 out of 100 people are just going to click yes, I want to do this, because you’re in the middle of doing something and this is getting in the way and you just want it out of the way; you’re not really thinking about the fact that it may open up your machine to all sorts of problems or attacks. And I would predict the same thing with GDPR: 99% of people who see this pop-up obscuring their view of the screen just click yes without really reading it.
BR: That, I suppose, is the counter: if the answer isn’t to empower the individual to take more responsibility, then the answer has to be for a larger body, e.g. the state, to intervene somewhere.
PG: Well actually what you’re saying is really interesting because we’re coming back to something called the Tragedy of the Commons. It’s an idea that came up in the 60s: if you have some land with sheep feeding on it, and I’ve got my flock and you’ve got your flock and we don’t coordinate, by the end of the week there is no more grass and by the end of the month we’ve just killed the land. So the question is what can we do, and two solutions have been offered. One is privatization: I buy the land, I put a fence around it, and I’ll make sure I take care of the land. Option number two is the state buys the land, puts a fence around it and takes care of the land. The thing is, one of the problems with individual action is that usually we don’t pay much attention to consequences… but then along came a very interesting economist, Elinor Ostrom, who got a Nobel Prize for this work, and she said well actually we can think about collective action, the commons, the way we share a common. And I think that’s one of the most interesting ways of thinking about building new institutions for the web. It’s neither the individual, because if it’s only about my personal responsibility I may not be aware, or may not have the craft, the knowledge, the tools; nor is it the state. If I share a common purpose and enter into a collective and we start thinking together, like in Wikipedia and everything that has to do with the wiki philosophy, then it’s something in-between. It’s more plastic, it’s less legalized… but we can arrive at some very interesting solutions.
So now I have an example. It’s not only about wikis; it’s also about the law, because we’re talking about technology that has an impact on your life. For example, if you get your credit card number stolen, what can you do? Of course, you can go to the police and file a complaint. And then where can you voice your problem? You can voice it on the forum of your bank, on a website, but it remains an individual complaint. But if you start to merge with other people whose cards have been stolen, and you build a collective, then you can start lobbying the state to change the law. You don’t have the state meddling in the technology because it wants to overpower the technology; you have citizens, concerned by the consequences of a certain technology, asking to change the law to fit actual situations. So I would think that’s the best way to build new institutions around technology. That means we’re not building them from scratch, from a pure idea; we’re not building from a perspective that is ideological. We’re building these institutions through practical situations and through the consequences of those situations.
DG: I mean, these kinds of pacts between the masses and the people who hold the wealth, basically… in the industrial era you had collective bargaining through unions and things like this. That happened because one group of people, instead of working on farms, were now operating the machinery, and you needed people to operate the machinery. What happens when you have robots, and the people who have the capital also own machines that operate themselves?
JL: So before we go to that, which I think is a great topic, let me just add one thing: another way to look at GDPR is that it didn’t go far enough. It basically just said, we’re going to give you informed disclosure and consent, so you can say yes or no. But what was really missing was the fact that you’re giving something to this company you’re interacting with, and they’re turning around and monetizing it, and they’re not paying you anything for it except by providing a service which may not reflect its value. So one thing that has been explored, and I think is actually a very interesting idea, is this notion of selling your information to companies. It becomes an actual transaction where you say: yes, you can track me, you can use it to target advertising, you can sell information to your advertisers, but you have to pay me some amount per interaction. That would obviously be a contract, and it would have to be enforced by a government, because otherwise it’s not likely to happen… but then the interaction becomes a lot richer, and you can make the trade-off: do I really need the money, or do I want my privacy? Because right now the tradeoff is: do I want to use this website, or do I want my privacy? And that’s a much worse choice, because almost everybody will say, I’m in the process of using this website, I want to use this website.
DG: The amount of money for an individual might be quite small; do you think there needs to be a data union, like a collective?
JL: There is that view, that you could have larger aggregates of people, which would make it much more significant. I don’t know if the amount of money is that small: Facebook in the United States makes between $200 and $300 per person from advertising, which is by far and away the largest in the world. In many countries it makes almost nothing, countries where there’s a less developed advertising industry. But is it going to make you rich? I don’t think so… because it only adds up in the aggregate. For any individual it’s not going to be a huge amount of money, but it would shift the balance quite a bit, and you would be able to make a much more intelligent trade-off: is it really worth disclosing this information for a penny? Probably not.
BR: You might ask whether that trade-off should be necessary in the first place, right? What you’re saying is that at the moment we give our data in exchange for a service, and that instead we should be paid for giving our data…
JL: Look at the verb you used: give, as if our data were a gift. What I’m saying is no, it’s a commercial transaction; the company on the other side takes this gift, turns around and sells it to someone else for a lot of money.
BR: So what I was getting at was, I wanted to come back to the Chinese internet, because there are many issues with the Chinese model which I think we’ve already touched on. But one of the positives is that, because it’s built on a mobile internet and a much better payments infrastructure, people pay for the services they subscribe to, rather than giving their data away through some opaque transaction.
JL: But they do give their data; there is a vast amount of information collection, much more than we would tolerate in the West.
DG: The default there is almost that data is exposed. But certainly if you look at the podcast market in China, for example, it’s 10 times the size of the US one and it’s all people paying for content, whereas in the US it’s…
JL: There is a micropayment system, so you can pay for small quantities of things without getting a credit card fee involved, exactly.
BR: Don’t you think that’s inherently the problem with the US internet? Because there’s no concept of micropayments, the next best model was, and probably will be for the foreseeable future, advertising, which has two issues, right? One is that you in a way dupe consumers into giving you their data, and secondly it creates a fundamental conflict of interest: are you ever acting in the interest of the consumer, or are you always acting in the interest of the advertisers who pay you?
The press was very much like what the Internet is today. There were very partisan journals on both sides, and the level of invective and insult in them was extremely high. Maybe even worse considering the level of discourse in general than what we would accept today. Politicians were routinely insulted and accused of all sorts of terrible things and this was just considered to be a consequence of the free press. — JL
JL: So I don’t think it’s a tradeoff. I think advertising was going to happen on the internet regardless of whether we had micropayments; it was way too attractive a vehicle. There were too many smart people who saw the connection between what you were doing and being able to sell an ad related to it for that not to happen, even with a micropayment system. At the same time, I think a micropayment system would enable a lot of really interesting things to happen on the internet, because people could get paid without relying on advertising.
BR: I think you’d see a much larger monetization of the internet with micropayments, because advertising is, what, 2% of GDP? That surely puts some sort of constraint on the size of…
DG: Certainly, in China they have reduced the number of ads, because there’s much more micropayment purchasing of content. So there is some evidence that it does change the model when people pay for stuff.
We need institutions to deal with technology and to deal with the shape of our society. So yes, I’m optimistic but we have to work a lot. — PG
PG: And one of the things we try to say as well is that advertising on the internet has an impact on the other media, starting with the press. One of the problems we have with democracy is that the standards of the press are getting lower and lower, because newspapers can no longer afford investigative journalists. It’s very expensive, and it used to be funded by the money coming from advertising in the newspapers, and now all that money is migrating to the web. So we see all those big newspapers being pushed to rethink the way they do journalism, to the point that, except for the New York Times or The Washington Post or Le Monde, they can’t afford very good journalism done in the field, long-term journalism. So that’s a real problem for democracy.
BR: Even those publications you mentioned, like the New York Times, that still have the money to do investigative journalism, are still doing things like changing the headlines of their articles several times during the day to see what gets more clicks. Everybody’s drawn towards sensationalism, because sensationalism gets eyeballs and eyeballs get advertising dollars.
JL: Yeah, but you know, I read the New York Times regularly, and I would say that their investigative journalism has gotten better. They went into a trough when they were really worried about surviving, but now the internet seems to be their friend and they have some very interesting series. They do quite a bit of it now, with a lot of resources applied in new ways: they’re able to take very large amounts of data and present the information visually in ways you could just never have done on paper.
BR: So do you almost see the New York Times as a reason for optimism that these sorts of Industrial age institutions can adapt themselves?
JL: Let me bring something to the table: the fact that you can do a podcast with technology in our office and make a business out of it. You couldn’t have done this without the internet. And this is another form of journalism, another way of disseminating information. It’s not about current news but about the bigger issues behind the current news, investigative journalism. I think that’s fantastic; there’s a lot of opportunity for people to put a lot more information out there, and it goes back to the fact that there’s almost no cost to do it. So yes, it has really hurt the newspapers; they basically had a monopoly on a certain type of advertising for a long time. TV came along and hurt them, but they responded, and the transition they’re going through now is painful, and I don’t think all of them are going to survive, clearly. But if you look at the big picture, there are a lot of really interesting things happening in terms of disseminating news and information, and some of it is fake, some of it has no standards. One of the things I was going to say earlier, which you reminded me of, is that earlier in the history of printing, the press was very much like what the internet is today. There were very partisan journals on both sides, and the level of invective and insult in them was extremely high. Maybe even worse, considering the level of discourse in general, than what we would accept today. Politicians were routinely insulted and accused of all sorts of terrible things, and this was just considered to be a consequence of the free press.
BR: And what changed was that the populace became more educated and demanded higher standards?
JL: At some point the press decided that they were going to rise above this and be impartial, but I think that’s relatively recent.
PG: Yeah, it’s in the 1920s, and you have this great figure Walter Lippmann; he wrote a book called Liberty and the News. He was saying: you see, the press is drawn between two opposites, the taste of the public on one side and the wealth that can be produced through advertising on the other. And he said the only way to counteract, to oppose, to fight against this current is to have a strong institution, which is journalism: the ethics of the profession. Once you start developing high standards of what journalism is, then you will start to have a better press. And he was correct; the vision he had of the press held for 70 years, until the internet came and changed the rules of financing as well as gate-keeping. Because what we’re changing here, what we have with your podcast for example, is a different type of gate-keeping: who is allowed to speak. You come here to EPFL and you select two scholars and you talk with them, but you could do the same thing with two kids skateboarding in the street. Gatekeeping is one of the major tasks of journalism: choosing which information is valuable for democracy. And that’s what we lost, in a sense, because we opened the gates. Everyone now, if you have a mic and a connection, can produce and publish, but the question is: do you have the skills to define what good information is? That’s what we might lose.
DG: I was going to say, that point about gatekeeping: some of the evidence is that there are particular types of network topology that allow complex ideas to spread. And because there is an infinite supply of lies, an infinite supply of simple lies, whereas there’s a finite supply of truths, you need a network that allows complex ideas to spread for the truth to win out. Is that something you’re optimistic might happen with this form of gatekeeping?
PG: I mean, Walter Lippmann wrote in the 20s and 30s, and he wrote three huge books about media: Liberty and the News, Public Opinion, and The Phantom Public. He was struggling with exactly the same problems, and I’m talking about the 30s. One of the ideas he has in Public Opinion is that journalists should work with academics and especially with think tanks, funded by the government, providing statistics, very good surveys, inquiries. I think the problem we have now with the internet is the same: we have to imagine those kinds of institutions. And I think it’s not an accident that Swiss television is talking about coming here to EPFL and having their buildings on the campus; I think it’s a way of perpetuating that idea that comes from Lippmann. The strong idea from Lippmann, who was a columnist, a very good journalist and a philosopher, was that we need institutions to deal with technology and to deal with the shape of our society. So yes, I’m optimistic, but we have to do that, we have to work a lot.
BR: And just to come back to this earlier point: you still think the answer is institutions? You don’t think that teaching philosophy, so people can make better decisions about what they read, could be the answer?
JL: Why are the two distinct?
BR: Maybe they aren’t?
JL: I don’t think so. I mean, institutions change because of popular pressure, public demand; they don’t just impose their will on people. And if people are not even aware that there’s an issue, that there’s a problem, that something could be better… I think our class is a good example. We don’t propose solutions to the problems; we don’t go and tell the students, you should do this or you should do that. We basically talk about the problems and their complexity, the historical precedents and the historical basis for the problems, and let the students make up their own minds as to what they think should happen in the future. The final project in the course is a presentation on a topic of their interest, and part of it is that they’re supposed to propose solutions to the problems. I think that’s interesting, because we don’t really do that in our presentations but we ask them to do it in theirs, and it brings out how hard it is to do: their solutions in general are not very good, they’re very partial, they don’t really cover the complexity of it. But struggling to come up with these solutions is very good, because it makes you realize that these problems don’t necessarily have simple solutions. And yeah, going back to your question, I think we have to do both.
DG: We’ve talked a lot about privacy in social networks and things like that, but what about AI and machines talking to people? Is that worse, in the sense that we’ve now got black-box mechanisms we don’t understand, with potentially encoded biases? Do we need to look at the institutions required for the provenance of the training of these things?
JL: It’s a really interesting question, obviously a very hot topic in the whole field of machine learning these days. The answer is that everybody who’s doing machine learning seriously and applying it to any kind of problem should be concerned about the bias in the data and the bias in the training, because otherwise the answers you get are not necessarily the correct answers. Even separate from the ethical questions, which I think come about a lot because of the application domain, even if you were solving a purely technical problem with machine learning, you should be very concerned about the biases. There are lots of examples where you train on a particular set and what you end up with is a machine learning system that works only for people very similar to that set: white people for face recognition, because that’s all you had in your training set; people who speak English like I speak English, because you didn’t train with people with accents. You end up with a tool that basically doesn’t work right. So that’s one form of bias, but bias also has much more serious consequences when tools are used for judicial purposes and are based on collected data that reflects people’s biases; racial prejudice is a big one in the US. Again, you haven’t built a very good system if you’ve not accounted for this and not done your best to correct for it, and it’s not easy to even know what’s happening, and it’s not easy to correct for it.
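The failure mode Jim describes, a system trained on one narrow population that works only for people resembling that population, can be sketched in a few lines. This is a hypothetical toy illustration (the two groups, the feature offsets, and the nearest-centroid "model" are all invented for the sketch, not anything from the course):

```python
import random

random.seed(0)

def sample(group, label, n):
    # Hypothetical data: the two groups express the same label at
    # different feature values, standing in for demographic variation.
    offset = 0.0 if group == "A" else 3.0
    return [(offset + label + random.gauss(0, 0.3), label) for _ in range(n)]

# Training set is 95% group A, only 5% group B.
train = (sample("A", 0, 95) + sample("A", 1, 95) +
         sample("B", 0, 5) + sample("B", 1, 5))

def centroid(points, label):
    vals = [x for x, y in points if y == label]
    return sum(vals) / len(vals)

# A minimal nearest-centroid classifier: one centroid per label,
# dominated by the over-represented group.
c0, c1 = centroid(train, 0), centroid(train, 1)

def predict(x):
    return 0 if abs(x - c0) < abs(x - c1) else 1

def accuracy(points):
    return sum(predict(x) == y for x, y in points) / len(points)

acc_majority = accuracy(sample("A", 0, 100) + sample("A", 1, 100))
acc_minority = accuracy(sample("B", 0, 100) + sample("B", 1, 100))
print(f"group A accuracy: {acc_majority:.2f}, group B accuracy: {acc_minority:.2f}")
```

The model scores near-perfectly on the group it mostly saw in training and close to chance on the other, even though nothing in the code mentions either group at prediction time; the bias lives entirely in the data.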
BR: And would you advocate for data science being a regulated industry?
JL: No, how can you regulate?
BR: Because there are a lot of responsibilities that come with data science. You’ve talked about bias, but what about using people’s data against them? Like we were discussing earlier: on one level it’s great that we can pool data, because we can diagnose illness faster. But what happens if we discover somebody has a pre-existing condition and we want to charge them more for their insurance? What about the responsibilities around data science?
JL: Yeah, but that’s not a data science problem, that’s an insurance problem. And I think the problem with regulating data science is that data science is just a technique, one that is going to be used, or is already being used, by pretty much everybody in every single facet of industry. We have a Masters in data science, a very popular masters for students, because it’s an important area. But the consequences depend on what the data is, what you’re producing from it, and what decisions you’re making from it. The one about health is a very good example; in fact, I use it in the class because you can present two perspectives. One is that insurers obviously want more accurate information about the people they’re insuring, because then there’s less uncertainty and they can price their product more fairly. The flip side is that if you’re one of these people with a congenital problem, pricing it “more fairly” means insurance becomes much more expensive for you. And the whole idea of insurance is to distribute the risk among a larger group of people, so that no one individually is forced to bear the entire cost of something they can’t afford.
BR: In keeping with that, it would seem some sort of middle ground is the way to go, right? Premiums that apply equally to everybody mean that we’re not pricing the risk correctly, but allowing people’s data to be used against their interests isn’t right either, so it would seem there’s some sort of halfway house.
JL: There is definitely a huge tension there. EPFL, and Switzerland in general, has a lot of research going on in personalized health, and that’s actually one of the sections of Global Issues, because there are a lot of issues related to it. The professor in our life sciences school who is leading the Personalized Health Initiative very clearly believes that Switzerland needs to change the laws related to insurance before personalized health becomes widely deployed: you just cannot allow this information to be out there in the hands of insurance companies, applied to individuals, as opposed to applied to groups of individuals, for pricing.
DG: And a lot of the issues we’ve talked about in information technology can be solved through openness and meritocracy, things like that. Isn’t the elephant in the room that life sciences issues eliminate all of that possibility, in the sense that people can actually buy permanent advantage?
JL: And what do you think?
DG: Genetic engineering, being able to choose that your children are going to live longer and be healthier and things like that through money.
JL: Rich people’s children have always lived longer and been healthier.
DG: Yeah, exactly. So this does seem to potentially exacerbate that.
JL: It potentially could, right, but I’m not sure the technology is really there yet to make an appreciable difference, except for gross effects like sex selection, which is clearly a huge factor in certain countries.
BR: I wanted to ask you, Jim, about this idea, because you mentioned earlier that when the internet was conceived it was an incomplete conception: it didn’t have any notion of identity, for example, and there was no notion of a native currency for paying for things on the internet. In your course you talk about Bitcoin, and you talk a lot about technologies for establishing identity on the internet. Do you think these things can be reverse-engineered back into the internet? Are you confident that technology holds the answer to some of the problems we’ve been talking about?
JL: Yeah, I think they will eventually be added to the internet. The problem, of course, is that because the internet is everywhere, embedded in so many different systems, with so many different players building it and running it, making any kind of change is extraordinarily difficult. Change occurs slowly, much more slowly, and it would have been much easier to fix these problems in advance, but as I said, people didn’t perceive them as real problems. And in fact the technology, in particular the cryptography, that is the foundation of Bitcoin didn’t really exist when the internet was invented; the notion of public-key cryptography dates from the late 70s. Before that there was no way to do cryptography across a network. It was a brilliant idea, a brilliant observation, but it took many years to develop it to where we are now. So yeah, it’s great to imagine what we could have done if we knew then what we know now, but that’s not the way things work.
BR: So just to make it more practical: here at EPFL you’ve got this Center for Digital Trust. What kind of technologies is it employing? What kind of stakeholders is it bringing together? How well is it working?
JL: Great question! So the answer is that we have a lot of people here who do cryptography, and a lot of people who are very interested in applications of blockchain, which is the technology that underlies Bitcoin but is not necessarily related to financial information. We’re interested in how you can share information more reliably and securely across the internet. To give you an example, one of the companies that participates in C4DT, and I won’t name it, collaborates with a number of other companies that have information that would be very beneficial to share among this group of companies. But the information belongs to each company, so they don’t want to just put it out there, and they don’t want to give it to a central company, because they want to maintain ownership. Yet all of the parties would benefit. It’s basically the Tragedy of the Commons again: if they all act individually, they’re worse off; if they act together, they would all be better off. So the technology that we’ve been developing basically allows the information to be shared but controlled at the same time.
So you can maintain strict controls over who can see it and what they can do with it, and you can maintain your ownership of the information, but at the same time you can share it with a group of people, a group of companies, that are sharing with you. This is the type of technology we’re building, and there are a number of companies interested in it. Even international organizations like the International Committee of the Red Cross have a lot of information-sharing problems, because a lot of their operations are in very hostile parts of the world, where there are a lot of parties that would like to know about the people they’re protecting, so they need to be able to protect that information.
And so technologies like blockchain have a lot of uses beyond building things like Bitcoin, and we’ve been exploring that, working with companies to take the core technology developed here and apply it to the applications the companies come to us with, to demonstrate it. Another example is actually hospitals in Switzerland, all of which have information about their patients. They protect it, and they should protect that kind of information. But if I’m a medical researcher doing research on a particular disease, it would be great if I could access the medical records of patients with that disease across many different hospitals in Switzerland; I could gain a larger set of samples. With informed consent, patients could give me permission, but how do I get access to all this information? How do the hospitals at the same time maintain their responsibility to protect it, and make sure that I’m not doing something with it that I shouldn’t: not giving it to insurance companies, not selling it, not being a Cambridge Analytica? This is the type of technology that we’re developing and trying to demonstrate by building applications in collaboration with other parties.
DG: And does this perform better than having a trusted third party being the gatekeeper of distributing that?
JL: Better in what sense? There are a lot of different dimensions, right? Having a trusted third party is usually more efficient: if you really do trust the third party, you just give them all the information, and they can aggregate it, distribute it, control it. But coming up with a trusted third party is very difficult, and you’ve got to wonder about what their incentives are. The hospitals in Switzerland have never come up with a trusted third party for sharing medical information.
BR: Yeah. And in general, sorry to be possibly pessimistic, but applications of blockchain like this are normally better when they are permissioned, right? They’re more efficient. And I’m just wondering: the ethics of AI is a global issue, the ethics of data science is a global phenomenon, all these problems we’re facing are global. Are we facing them at a time when the world is fractionalizing and balkanizing? Does that make it harder to solve these problems?
JL: No, I don’t think so. I think the research that goes on has become, if anything, much more international over the last 10 years. It used to be that the US dominated computer science research in this field; you’d look in the conferences, the journals, and they were basically US-centric publications. It’s a lot more international than it ever was: Europe, China, various other parts of the world are participating, and there’s still, fortunately, a culture of open publication. There are a lot of ideas, and they move around very quickly. With the people innovating in this area, and this whole startup culture of entrepreneurship, a lot of these ideas are now moving very fast from just being developed to actually having people explore them and see whether there are commercial applications. Often they fail, because a lot of the applications don’t work, they’re ahead of their time, they don’t have the right market. But this is great.
I think climate change is much more likely to wipe us out than killer robots. — JL
BR: So with the time we have remaining, I just want to enter into a final section where we indulge in some futurism and some future applications of these technologies. I read on Wikipedia, Jim, that you worked on the Singularity Project at Microsoft. So maybe let's start there: how close do you think we are to the singularity?
JL: You have to remember that the only thing that project had in common with that vision of the singularity is the name. The Singularity Project was a project that Galen Hunt and I did at Microsoft…
BR: So it had nothing to do with the…
JL: No. It was an effort to rewrite an operating system in a high-level language, to attack the security problems by redoing everything from scratch. It was a great project; we had a lot of impact through follow-on projects inside Microsoft, but we had zero impact on the rest of the world…
BR: Okay, so having exposed that I didn't do my research properly, I'm still going to get you on that question. How far away do you think the singularity is?
JL: Extraordinarily far away. I do not worry about machines replacing humans in general. There are two forms of artificial intelligence. There's what we might call general intelligence, which is a machine being as smart and as capable as a human being. And this is what the singularity worries are about: that we'll build machines that become as smart as, or even smarter than, humans and decide that they don't need us.
But if you look at what artificial intelligence and machine learning are actually doing, they're replicating very individual human skills. They're doing sight and vision recognition, they're doing hearing and speech recognition, they're doing natural language processing. But you put them all together and they're not even as intelligent as a year-and-a-half-old baby. And they're extraordinarily difficult to build. You show a baby a stuffed dog, and then they see a real dog and they realize the two are the same thing. Believe me, no machine learning system would ever make that kind of inference, and we're talking about a little infant. We are a long way from general intelligence.
BR: What about killer robots?
JL: Killer robots? Yeah. With the technologies that we have, you can build very lethal weapons, and you can call them robots, and it's militarily attractive to do that kind of thing. I think there's a lot of pressure in a lot of countries to go down that path, and I think it's really quite dangerous.
BR: And how do you stop them?
JL: How do we stop any weapon? It has to be a treaty between countries, where we agree that this is beyond what's reasonable, and the countries all step back and say: we don't want this to happen, we agree that this is an area of weaponry we will not explore. That's happening for certain types of weapons. Not all, unfortunately.
BR: Philippe you’ve been quiet, what worries you about the application of these technologies, what’s the most worrying use case that it keeps you awake at night?
PG: Well, one of the things is that we've lost some of our access to pluralism; we're not exposed to a diverse community with, you know, different kinds of opinions. With today's social networks we're well down this slope. That's one of the main losses. The other is that maybe we've lost some of our institutions. I've been using this word a lot, and I'd like to say that an institution, for me, is a kind of socially enforced collective habit. So what are the collective habits we are choosing for our society? You were asking before about philosophy: do we have to teach philosophy or social sciences, or build institutions? I think those are quite different things, and they go together. What we try to do in the course with Jim is to have a space for reflecting on the implications of the technology. I come with my words, Jim comes with his, and we also come with our passions, and we have this space to reflect on those issues. But at the same time it's a training, it's a kind of institution as well, because we're training those engineers, those very gifted students, to think in another way about their work and its impact on society.
BR: Jim, do you worry about runaway objective functions, if I've even got that terminology correct? This idea that you optimize something for one goal, but over time it pursues that goal at the expense of all else. So you optimize a machine to make paperclips, and eventually it kills all human beings because it wants to make as many paperclips as possible.
JL: No. I don't think that's the way the demise of the human species is going to come. I think climate change is much more likely to wipe us out than killer robots. Sad, but I think climate change is very real. Yep.
BR: Okay, so I think we're almost out of time. Dave, do you have any other question on the topic of futurism?
DG: Just to close off on climate change. Do you think, for example, with the fires in the Amazon, that it would be morally justified to, let's say, infect the cattle with BSE to stop the sales of beef and put out the fires? I mean, at what level can we combat these issues?
JL: I think that's probably the wrong approach. (laughter) You know, if anything, history shows that aggressive action leads to aggressive counteractions, and that you'd be better off trying to do what's been done, quite effectively I think, which is to shame Brazil and inform the world about this problem. And that is, again, one of the advantages of the Internet: the availability of technology, cheap satellites, made this imaging possible, and then the Internet made it possible to disseminate information about these fires in essentially real time throughout the world, get the celebrities to jump in, and apply a lot of pressure on Brazil in a very short amount of time. Whether it changes things, I hope so. But you know, technology has good sides as well as bad sides; it can be applied to fix problems too.
BR: I think that’s quite an optimistic note to finish on but I think we can do better. So I’m going to ask each of you to conclude with an optimistic comment about how we can take the really positive foundations of the internet and use them for good.
JL: So one of the things I do in this course is finish on a positive note. The last lecture is about the whole "open" movement: the open source movement for software, the open science movement, the open data movement. We talk about how eliminating the cost of disseminating information has opened up all of these different ways of doing software development, doing government, doing science, that would have been totally impossible even when my career got started. And to me, this is the exciting thing: yes, bad things have happened, and there are a lot of people making money off of it, but there are also a lot of really good people out there taking advantage of the Internet to do things that they really benefit from, and that the rest of us benefit from. Look at Facebook or Instagram and you'll find people who are genuinely benefiting from them. There's a statistic, which I don't know if you've seen, that Facebook makes about $200-300 per person in the United States, but the value that people get out of it is much higher. There was a very nice study where they basically paid people to stop using Facebook for a year. They ran an auction, and the price for people in the United States was about $2,000 to not use it for a year. So the benefit is roughly ten times Facebook's revenue, and the difference, about $1,700 to $1,800 per person, is the surplus Facebook is effectively giving you.
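As a quick sanity check on the figures Jim cites (revenue of $200-300 per US user, a $2,000 auction price to give up Facebook for a year), the back-of-the-envelope surplus works out like this:

```python
# Back-of-the-envelope consumer-surplus arithmetic from the figures cited above.
value_per_user = 2000                 # auction price to forgo Facebook for a year, USD
revenue_per_user = (200, 300)         # Facebook's approximate annual US revenue per person, USD

# Surplus = value to the user minus what Facebook earns from them.
surplus = [value_per_user - r for r in revenue_per_user]
print(surplus)                        # [1800, 1700]

# Ratio of value to revenue, at each end of the revenue range.
ratios = [value_per_user / r for r in revenue_per_user]
print(ratios)                         # [10.0, 6.666666666666667]
```

So "ten times the revenue" holds at the low end of the revenue estimate; at the high end the multiple is closer to seven, but the surplus of $1,700-1,800 per person per year is the same order either way.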
BR: I think that's a podcast in itself, this idea of how much consumer surplus the Internet creates that isn't captured in any of our existing measures of the economy. We'll get you back on the podcast for that discussion. But we really have to wrap up; we want to get Philippe's optimistic conclusion to the podcast.
PG: Well, I've talked about Walter Lippmann, and he was very optimistic about the idea that democracies thrive with very good knowledge, very good media, and a link between the two. And I think the Internet has given us this opportunity. When I see Wikipedia, it's quite fantastic, actually. I would say that the way the Internet enables collective activity and research, that's the thing that really makes me optimistic. Thank you.
BR: Amen to that. Okay so I think all that remains is to thank you very much for your time. Thank you, Philippe, thank you Jim, thank you David, for taking part in this podcast.
All: Thank you.