Logbuch:Netzpolitik
Einblicke und Ausblicke in das netzpolitische Geschehen
https://logbuch-netzpolitik.de
A discussion with Meredith Whittaker
For this episode we made an exception and decided to record an interview in English. We are talking to Meredith Whittaker, president of the Signal Foundation, the non-profit that develops the Signal app, the world's most popular multi-platform private encrypted messenger. We talk to Meredith about her way into the digital realm, how she shook up Google by organizing walkouts and more or less erasing all memories of its "don't be evil" motto, how Signal came to be and what its principles are, how she views Europe and the regulatory policies of the EU, and much, much more.
https://logbuch-netzpolitik.de/lnp504-the-politics-of-intellectual-shame
Veröffentlicht am: 17. Oktober 2024
Dauer: 2:04:19
Then, well, encryption is part of that, but then you need to make sure the encryption works. And encryption of message content isn't enough, because you also have metadata. You also have libraries, you have core services. So there is a lot of ingenuity required to actually create a system that rejects the normative assumptions of the tech industry today: we all want a ton of data, and we all want to monetize that data. We want to serve ads with that data. We want to sell it to other customers. We want to sell it to data brokers. We want to train AI models on that data. Most of the infrastructure in the tech ecosystem now assumes that as a given. And then we have to rewrite things too.
I was the book nerd. But in the story we tell now, because nerddom has been overlapped with monetary success and sort of a career in math and science, I think we forget about the book nerds. But hello, I'm here to remind you. We exist. We're here. And that's who I was. And so I studied literature and rhetoric at Berkeley, which I just thought were, I mean, they're still beguiling. Like being able to read and write is pretty fundamental for anything. And then, I was poor, so I took a job at Google, because they were the first people to offer me a job, and then I got very fascinated with what on earth was going on. When was that?
July 10, 2006. I had just come out of Berkeley and I needed money, so I put my résumé on Monster.com, which is a precursor to LinkedIn, and a recruiter reached out. And then I talked to my friend Michelangelo, because at the time you still needed an invite to join Gmail, as I recall. And I was like, hey, Michelangelo, can you get me an invite so I can make a Gmail address so I can appeal to this recruiter? And now my spam-filled Gmail address dates from that moment. So I was hired as something called a customer operations associate, and I didn't know what that was. No one knows what that is. It's just a bunch of words. But it sounded, I was like, that's a business job. It sounds like a business title. I had no idea. It was basically a customer support person who wrote technical documentation, kind of user documentation, some technical documentation, and answered inquiries about Google's free products. And I was doing that for Writely, which was an acquisition that Google made that they then rebuilt to become Google Docs. Okay. And at that time, Google's whole business was just Search. So Search was all on the main campus. It was where the money came from, and then there was this dinky little building where everything else, Gmail, Blogger, Writely, Reader, all of them sat, and they were experimental projects. And that was called Apps.
Well, they would email in. I only did this for a second because I kind of figured out I wanted to do other things. And that's part of the story. There's a part of the story, actually. So I did answer inquiries sometimes. And there were auto responses, like hotkeys you'd use in this really janky ticketing system. And this is back, we didn't have laptops. There's no such thing as a smartphone. Our desktops were chained to the desk. And if you went home and needed to keep working, you had to check out this clunky Lenovo or something. I don't remember exactly, but it was... I was like, okay, I'm rewarded in my job based on how many tickets I close. But I don't know the engineers who have to close these tickets, because I'm reporting bugs and they have to fix them. And if they don't fix the bugs, I don't get rewarded. And I was like, this is really silly. So I went over to the Apps building, because I was in a different building. And I found the engineering team and I was like, hey, I'm the person sending you these emails, these bug reports, that you're answering. And I was able to convince one of them to go to Costco, which is this big warehouse store, a Walmart-ish store in the US, and put a giant couch on his own card, and bring it into the Apps building, so I could sit with them. Because if I sat with them, I could talk with them, and then they'd be more likely to fix my bugs. And so all my bugs started getting fixed, but then my manager got really upset, because obviously I had breached the hierarchy. And so this meeting appeared on my calendar that was basically about me being insubordinate. But I'm also like, I didn't come from that class, I had no idea, I was like, what the fuck did I do wrong? I don't know. And I was like, oh god, I did something. And then, the day before, I think, that meeting was going to happen, this email hit. We had this, like, some group, I don't know, it was like the group for the consumer operations team. And one of the engineering directors in Apps must have had like a beer or something during one of the many, many, many drinking parties that happened during the day at Google in that era. And sent some email that was like, Meredith, this couch is a model for collaboration.
And then that meeting just disappeared, and I was like, well shit, I need another job, because I just burned someone who's going to figure out how to get back at me, and that's where my street sense comes in. So I took a bike around campus for a couple months and asked every VP during their office hours, which is a thing they used to have, like, hey, I want another job. And I would just drop in and be like, hi, you don't know me. I'd like a job. And then I got a job doing basically standards work, like trying to push document interop standards. And so I started in standards, and then standards parlays into measurement pretty quickly, because measurement is ultimately the methodological standard. And I co-founded MLab, and then that became the nucleus around which the research group I founded existed, and then that was the... That was when the fresh air of sort of these political and social complexities started hitting my technical work. And I was like, oh, standardization is power. Oh, creating data means you own the narrative. Oh, shit, none of this is neutral. All of this is contingent. I started seeing the balance sheet. I started seeing the capital that was involved in running infrastructure. And I remember around the time I met you, Linus, like maybe over a decade ago.
Yeah, aging is weird. But I had these balance sheets I had printed out from Google that showed how much the uplink bandwidth and the infrastructure and power costs for MLab were. And I would show that to all these civil society funders, who were like, we can give you 250k. It costs 40 million a year in bandwidth just for MLab. You're not going to understand the economics of this as long as we're in the era of civic tech, where all we need is a good idea, right? And I think I was really lucky to get sensitized to the political economy and the fact that we're talking about infrastructure, capital, network effects, economies of scale, and not some kind of brilliant idea that just ephemerally transformed our world, and that our side just needs to wait to have one of those to get our turn.
And the lady with the couch. And the conceit there, which felt really simple to me at the time, is everyone is buzzing about net neutrality. And I was a Kool-Aid drinker. I still think the value underlying that kind of mythology, let's say, like, yes, we should not have one gatekeeper deciding which news source, right? This is old-school common sense. It goes back to Western Union, which, you know, in the US would refuse to carry telegrams from political candidates the company didn't support, right? So there's a real bedrock precedent here. And I was like, yeah, of course we need net neutrality. But neutrality itself is a really wafty, loose concept. It needs to be anchored in some type of benchmark against which we can assess: is this neutral or not.
Yes. And then you're flung into the abyss of philosophy the second you try to do that. But we started, and I would say it was not necessarily built to succeed. It was built as a sort of hypothesis project where we stood up three servers that hosted open source measurement clients. And the thing that we were doing was putting up servers that were all configured the same, that gave each client a dedicated slice of resources, and then way over-provisioning the uplink between the server and the switch, so that we could, for all intents and purposes, guarantee that any artifacts detected through the measurement methodology, like some TCP RTT or something, were not interfered with by our infrastructure and weren't suffering for bandwidth.
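To make the measurement-client idea concrete: here is a minimal sketch of a client-side RTT probe, assuming a reachable measurement host. The hostname below is a placeholder, and M-Lab's real methodology (the NDT test suite) measures far more than handshake time; this is only an illustration of the kind of client being described.

```python
# Minimal sketch of a client-side latency probe, in the spirit of the
# measurement clients described above. Placeholder hostname; M-Lab's
# real tests (e.g. NDT) measure much more than handshake RTT.
import socket
import time

def tcp_connect_rtt(host: str, port: int = 443, samples: int = 5) -> list:
    """Estimate RTT by timing TCP handshakes (connect() is one round trip)."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            rtts.append((time.perf_counter() - start) * 1000.0)  # milliseconds
        time.sleep(0.2)  # brief pause between samples
    return rtts

if __name__ == "__main__":
    rtts = tcp_connect_rtt("measurement.example.net")  # hypothetical server
    print(f"min {min(rtts):.1f} ms / median {sorted(rtts)[len(rtts) // 2]:.1f} ms")
```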
Yeah, and let's, you know, and that's very expensive. But then we immediately DDoSed the servers, or like, oh shoot, because Vint put a blog post out, and he was, you know, he's a big name. And then it was this job of years and years of getting servers into different interconnection points, because we wanted to measure across consumer, you know, we're not just measuring to a server in the network, exactly, and we need to cross interconnection points to do that, because that's where you begin to see interesting business relationships and kind of feuds. And it was just, it was very, it taught me about how difficult and contingent and ultimately subjective creating data is, and the political process of then sort of defending that data as a reliable proxy for reality. Because I would be at the Federal Communications Commission and I had Comcast across from me being like, no, we want to measure with multi-threaded TCP versus single-threaded, because you get a, let's just say, higher number that way. It's more forgiving. And we would be sort of defending the methodology of our tests and the openness principles. So it was open source code, the database architecture was open. I don't remember all the server architecture, it was old school, old school ideological.
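Why is multi-threaded TCP "more forgiving"? Each parallel stream keeps its own congestion window, so a lost packet halves only a fraction of the total rate, and the aggregate number reads higher than a single stream's. A rough sketch of the two measurements under that assumption; the test URL is a placeholder, and a real methodology would control for far more (server load, caching, TCP tuning):

```python
# Rough sketch contrasting single-stream and multi-stream throughput
# measurement. The URL is a placeholder; real test suites control for
# server load, TCP tuning, caching, and much more.
import concurrent.futures
import time
import urllib.request

TEST_URL = "https://speedtest.example.net/100MB.bin"  # hypothetical test file

def fetch(url: str) -> int:
    """Download the whole resource and return the number of bytes read."""
    with urllib.request.urlopen(url) as resp:
        return len(resp.read())

def throughput_mbps(streams: int) -> float:
    """Aggregate Mbit/s across `streams` parallel downloads."""
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=streams) as pool:
        total_bytes = sum(pool.map(fetch, [TEST_URL] * streams))
    elapsed = time.perf_counter() - start
    return total_bytes * 8 / elapsed / 1e6

print("1 stream :", round(throughput_mbps(1), 1), "Mbit/s")
print("4 streams:", round(throughput_mbps(4), 1), "Mbit/s")  # usually reads higher under loss
```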
It taught me so much, because you get down to the network layer and you're like, fuck, everything is broken. It's all tied together with string. What's going on? Okay, so your promises in the pitch meeting are guaranteed by this? This crusty old library that's maintained by a guy on a boat with a Bitbucket account? Like, what is going on?
And we actually talk to them at Signal. They'll detect Signal blocking. And I believe, so one of the things MLab did was provide backend infrastructure for projects like OONI Probe. And I don't actually know what that relationship is today. So academics and hackers and developers could write a test, or kind of a measurement methodology, deploy a client to consumers, like you test from your laptop or whatever, and then we would be the ones paying for the bandwidth and infrastructure that would allow that to scale.
Well, I mean, there are many. I think, so... I was not working on MLab by that point. MLab, there's a part of this story that I didn't cover, which is around, I don't know, 2014, 2015, we started seeing really interesting artifacts in the data, which showed essentially that at particular interconnection points between particular telcos, we were seeing drastic drops in performance. And what we were able to do was look at, you know, every intersection of Telco 1 and Telco 2, we see this drop. Intersections of Telco 3 and Telco 2 don't see this drop. And so what we were able to do is say, there's actually a business feud going on, and the interconnection point is the locus of that feud. And using traceroute data, we were able to say, look, it's not just that they're sharing a path in some other region of the network that's slowing it down.
They're actively throttling. And we put together a report on that. And this is where I was sort of, I think, kind of sideloading myself into some academic research work. I was like, okay, how do we do this? Document our methodology, document everything; sort of everything was open. So we pulled that together. And it showed, I don't know if you remember, there was a sort of Comcast, Cogent, Netflix feud.
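The comparison described here, a drop at Telco1-Telco2 interconnects but not at Telco3-Telco2, is essentially a group-by over measurement records keyed by the interconnection pair observed in traceroute data. A toy sketch of that analysis; the field names, input format, and numbers are invented for illustration and are not M-Lab's actual schema:

```python
# Toy sketch of the interconnection analysis described above: group
# measurements by (access ISP, transit ISP) as seen at the interconnect
# hop in traceroute data, then compare medians. All values are invented.
from collections import defaultdict
from statistics import median

# each record: (access_isp, transit_isp, download_mbps)
measurements = [
    ("Telco1", "Telco2", 3.1), ("Telco1", "Telco2", 2.8), ("Telco1", "Telco2", 3.4),
    ("Telco3", "Telco2", 24.5), ("Telco3", "Telco2", 26.0), ("Telco3", "Telco2", 25.2),
]

by_interconnect = defaultdict(list)
for access, transit, mbps in measurements:
    by_interconnect[(access, transit)].append(mbps)

for (access, transit), speeds in sorted(by_interconnect.items()):
    print(f"{access} <-> {transit}: median {median(speeds):.1f} Mbit/s over {len(speeds)} tests")

# A drop that shows up only at (Telco1, Telco2) but not at (Telco3, Telco2)
# implicates that specific interconnection, not a shared downstream path.
```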
Netflix was shoving all its traffic through Cogent. What we had done is expose that, and exposing that exposed the principle that if you want to ensure net neutrality, you have to take the interconnection points and the interconnection agreements into account. And that led Obama to add interconnection to the reclassification under Title II, which kind of moved toward net neutrality and was later nullified. But, you know, that was kind of the swan song, let's say, for my MLab time, because, of course, that was a huge deal, right? Like, that's where the business model rubber hits the road. And, you know, I'll just shorthand it: that bought me a lot of capital at Google. And with that, I was, you know, I was already interested in a lot of things. MLab was sort of humming along and, like, it had grown from a hypothesis project to, like, a global thing that was working and doing stuff. And I started getting agitated by AI and a lot of these privacy and security concerns, being part of the community that you were part of, you know, kind of thinking around tech alternatives and getting less comfortable with the business model. And from there, I always had like eight different projects going on, but I went on to co-found the AI Now Institute, which was really trying to bring the conversation on what we called machine learning back then, but AI was the flashier term, like bring it down to the ground a little bit and stop talking about superintelligence, and start talking about political economy: what are these technologies, how are they being deployed, how do you oversee them, who uses them on whom, and what are the social and political dynamics of that.
Co-founded, yeah. I mean, the work was really good. And this was, again, gaining the capital at Google, cementing a reputation, being able to get to a level where I had a budget. And then part of what I was always trying to do is how much can I pull out of Google and get into the hands of people doing work that I think is cool. How do we carve tributaries in the massive river of this huge, rich company and get it out?
Well, beginnings are sometimes hard to date. It was born out of a request from the Obama administration to be the host of one of its AI summits. I don't remember exactly the contours there, but it was in 2016. And so the idea then was like, let's do this and let's do it big. Let's make this the most polished, flashiest, hard-to-ignore kind of spectacle, using all the tools we can get, from hiring an events agency to doing good press, all of that. But let's also make this the one that is the most engaged with these political topics, that's actually forcing the debate in that direction and kind of making it face these questions that are much more grounded and, you know, hopefully much more healthy for our population. So it was very successful, and from there we got offers of funding and a lot of encouragement, and the work just kept going.
Well, we in Europe, we are used to being a bit behind, but if I recall 2016 correctly, there was no discussion about AI in Europe at all. And so it's quite interesting to see that Obama had actually, you know, decided that it was finally time to do something about this topic nobody had ever really talked about.
Yeah, to answer this question, I'm going to be drawing on a lot of the research and the historical work I've done since then. Because when this dawned in my life, when it started being a thing, I had basically that same question. What is this stuff? Why is it kind of... At Google, you would see a shift toward a new paradigm or a new trend through the incentives that were structured into the OKRs, the quarterly goals. There'd be all these training modules that would pop up, like, turn your software engineers into AI developers, and you'd be like, there's an incentive here.
No, but they're chaotic. It's a kind of court in decline. So that, you know, actually, the business model part has never been their strong suit beyond search. To be real, you know, cloud is like the best technology presented confusingly, with 18 versions, all deprecated, right? But the AI stuff, so if you look at the sort of recent history, which is something I've spent a lot of time on, because I think that gives us a really different picture than the Elon Musk narrative or the kind of popular narrative. There's a very important paper that was published in 2012 that introduced the AlexNet algorithm, and that was Geoff Hinton and his students, whom you may recognize.
And it was Ilya Sutskever and then Alex. I'm sorry, Alex, I'm not grabbing your last name from the ether right now. But nonetheless, this was a paper that kind of pulled together key ingredients that became the foundation of the AI boom now. So this is deep learning algorithms, which is the paradigm we're still in. It doesn't matter, there's architectural rejiggering, but nonetheless, it's deep learning. Huge amounts of data, so what I've called the derivatives of the surveillance business model.
I mean, yeah, that was Jeff Dean, I think. And then, you know, powerful compute, right? And they showed that, using gaming chips and a lot of data, you could beat the benchmark. So, score much better against standard evaluations than past models, and thus sort of catalyze industry interest in AI. And why were they interested? I think this is a key point, because these optimum-seeking algorithms are really good at curating news feeds. They're really good at targeting ads, right? I don't think it's an accident that Geoff was immediately hired by Google, that Yann LeCun, who worked on the deep learning algorithms that seeded this moment in AI back in the late 80s, went to Meta. It was the platform companies with the real investment in squeezing more ad dollars out of their data, better serving, all of that. That's Google with DeepMind. You see Meta and Google as the leaders in this, as measured by different evaluation standards, and the measurement question here is actually really interesting and troubling. Until this generative moment, where I think the ChatGPT Microsoft products shifted people's perception of AI and what it can do and just rearranged the leaderboard. But the paradigm is still the same. And the paradigm is still that AI is applying old algorithms on top of the sort of massive platform monopoly business model, availing itself of huge amounts of data, which is produced via this surveillance business model, and really powerful compute that was designed, built up, I would say consolidated, in the hands of these platform companies via the imperatives of the surveillance business model, right?
Would you say that, so clearly, I mean, you say, here are Google and Meta that have these massive amounts of data, larger amounts of data than probably ever existed or were in the hands of anybody or any organization. Would you say that they had amassed all this data and eventually realized, okay, we probably can't handle this anymore, so we're interested in this new paradigm to even monetize this data any further? Or was it rather like, oh, we have all this data, we're monetizing it well, and here's another way to monetize it on top of that? Because clearly these deep learning algorithms are not of much use if you don't have large data.
I think this is almost a perpetual motion machine, right? Like, every quarter you have to report progress, you have to report growth, the logic is metastasis. And so, you know, you're trying to squeeze more out of what you have, and you're trying to get more of what you have so you can squeeze more, right? There are also these sort of laws of scale, remember big data, we used to call it that. And so I don't actually know the answer to your question, which came first? But, well, I mean, the business model came first, right? You had to have the ingredients to know what they did together. And deep learning and AI had sort of languished in the backwater, with some interesting experiments through the 2000s, because its history is always promising too much and disappointing, since the mid-50s when it was invented. And I think the goal was really to sort of supercharge their existing business model.
I think deep learning was just the technology that perfectly served their current beliefs, in that they have to work on the data, that they have to build up algorithms to somehow predict your personal future and be there with an ad before you even know it. And we've seen this as ads everywhere, and we've also seen it in political influence, as we've seen in the Brexit decision and also in elections, and we hear there's going to be one in the near future in the US as well that might be influenced too.
Let me check my calendar. I mean I think we can like peel back also just this concept of like what is an advertisement right? It's an influence. Is it trying to get you to buy something? Is it trying to get you to like something? Is it trying to get you to believe something? Is it trying to get you to vote a certain way? And I think that the term advertisement is usefully deconstructed when we start to think about the connection between all of those.
The term advertising is one of the best tricks the devil ever pulled. Because it's behavior manipulation. That's its stated goal. But I remember when the first debates came up about, for example, the Cambridge Analytica scandal. It's like, they used Facebook data to manipulate voters. And it's like, no, there's a red line over here, you can't manipulate behavior. I mean, Facebook and Google were basically built to manipulate users' behavior, and ideally capture their attention to change their behavior. We probably all want this shirt, or that one, or that one, or that one. How can they do something like that? That is unheard of. So viewing advertising as, oh, this is just an offer, oh, we're just marketing our product, no, no: I have a limited amount of attention per day. You're capturing it, and you're doing it for one simple reason, and that is changing my behavior in the direction you aim for. And I guess eventually, it sounds like, eventually you questioned your Google career, right? Because, I mean, clearly you had an impressive career there in just a few years. But you began to question not only Google and the company culture, but apparently also a little bit the way you wanted to continue making use of the influence and the knowledge you have.
I mean, it's all interlinked, and kind of periodizing your own consciousness is hard. But I think I'm pretty earnest, and I also don't come from that world, I don't come from that class, so there are often places where I just didn't, you know, I would take things sincerely or be really committed, and then only realize two thirds of the way through whatever it was, like, oh, no one else really cares about this, they're just networking, or whatever it was. So I think there was an element there where, you know, when I was doing MLab, I was like, I really want to win net neutrality. And then we won net neutrality. But then I realized that wasn't really it: Google has a bigger network than Comcast; the telcos aren't the only gatekeepers. But that was a sincere thing. And then I was like, okay, I can funnel money to all these cool privacy hacker projects. That was sincere. And then I got into AI, and I was like, okay, and I think that's something that changed for me. I think I used to have a lot, lot more faith in the power of ideas to influence real change, right? And I still think, you know, I spend a lot of time kind of thinking through discourses, how do we shape them? Like, how do we kindly walk people into understanding things that, you know, they may have an interest in not understanding, or they may have been misinformed about, or what have you. But I began, you know, around the time I was looking at AI and sort of making all these cases that everyone loved, right, like I was out there giving talks that were completely against the Google party line, and I was getting applause, I was getting promoted. I was like, this is a perfect job. I envied you not only once. I'm the house troll, but then I was getting more influence, so I was becoming known outside, and inside I was like the person you'd call into your team when it was like, oh, we want to implement this, is there an ethical way to do it? And I would be like, my dear friends, let us sit down. And then, I don't know, that was sort of my life, and we kind of took the AI Now Institute and really did a lot to reshape the debate. I was very focused on that discursive intervention and how we begin to talk about AI in a more realistic way, and that was working outside of Google, but it wasn't really influencing core decisions at Google. And that was kind of the thing I kept hitting up against more and more strongly, until I got a Signal message in... late 2017.
No, I mean, I probably did get one from Moxie at that time, but this one was not that one. It was from a friend of mine at Google who said, yo, there's a really disturbing project that is hidden, that I'm very close to. And you should know about it, because you're the AI person who cares about this and you have standing at the company around it. And this was the secretive contract that Google signed with the Department of Defense to build AI drone targeting and surveillance systems for the drone war. And of course, I was politicized post-9/11, post-Snowden. The drone war and the signature strike and all of that were really core among the things that I ideologically rejected and, you know, felt like we needed to disarm, not supercharge. And I had, you know, a righteous anger. I was just like, fuck this.
And making them look good, right? Because they get to be like, look, we platform such heterodox voices, we're surely benevolent, right? And then I'm like, okay, and then you're inking this deal with the DoD behind the scenes for technology that, one, we know doesn't work for the purpose, right? It's not going to better identify a worthy target of death, or whatever the fuck, you know, we know this is bullshit. And, you know, this is a multinational company, more than half the employees are outside the US; there is an issue with yoking yourself to one nation's government, not that many tech companies care that much about that. And then there was just, you know, the structural danger, which is deeply acute, of a massive surveillance company with more data than the world has ever seen, more compromised than you can imagine, yoking their fortunes and a key dependency to the US military, right? And, you know, we know from Snowden that that's already, you know, seen as fine, as long as it's a corporation gathering it.
Yeah, like lay off the evil. Yeah, so, I mean, there were a lot of old-school people there at that time who really did drink the Kool-Aid. And so, you know, I just put my energy into organizing against that, and that was when I turned toward labor organizing and thinking through traditional methods and approaches to combating that type of corporate power, or, you know, industrial power. And that was the on-ramp to the walkout. So the walkout was a big, that was like a rupture, like a manifestation. It got a lot of press, and it was the biggest labor action in tech.
I think it was November 1st, 2018. And everyone walked out for 20 minutes at 11:10 a.m. in their local time, so we called it Rolling Thunder. And it started in the Singapore office as I was going to bed in New York, and I was seeing the photos. And this was chaos, I hadn't slept in days, there were so many meetings, so many tears, it's hard to organize something like that. And I remember going to bed and seeing the images from Singapore, with a couple hundred people in Singapore, and I was like, this is great. And then I woke up New York time at 5 a.m. to go to our location in New York and prep it, so we'd be ready. And then I just remember seeing, there was this little park near the New York office, and then it just grew outside the park. And then no one could get into the park. And then I was looking at my phone, and there were live helicopter feeds. And we don't have bullhorns, and I'm one of the speakers; there are, like, speakers standing on chairs to address the crowd. And then this guy, you know, there's this sort of type, I don't know if in Germany you have this type, but they're kind of like the leftist at every protest. And some guy had found out about it and came in, and I just remember this man I'd never seen handing me a bullhorn from below, and I picked it up. A bullhorn, like a megaphone, yeah. And then, you know, then we walked over to a Mexican restaurant and sat at the table and had a press operation that we were running. And what was cool was, everyone organizing that was kind of a professional, most of them were femme, and most of them had jobs at Google that were, like, organizing the company: organizing their comms, being an administrator for like 13 different directors, this particular type of hypercompetence at coordinating activity across a number of people who you may not have direct power over. And, like, for that, much more...
Respect, I mean, let's just say. Yeah, yes. You know, somebody who can write an email and get someone to respond to that email by doing a thing is kind of a witch. And it was all these preternaturally competent femmes just turning that energy toward organizing across the company, which was building on this sort of base of meetings and locals and all of this work that we'd done to kind of form a discursive environment inside Google, where we had weekly meetings to discuss the news, what Google could do, what campaigns we could do. So we had this sort of energy and solidarity already at that point.
I don't know. We were really careful not to keep lists. Yeah, I mean, you can dream up a number now, you know. 20,000 was the number estimated from the sky photos and the local reports. We had local leads at every office. We sent them zip files of the kit for handing out the flyers, the talking points, how to treat media, all of that. They all had that, and then they organized their local. And then they had reporting back in from press, reporting back in to the central organizers around numbers. And then we were issuing the press releases. But then, you know, there are employees of Google, and then there are contractors of Google, which more than double the employees. So, you know, that number is... And then there are people who couldn't participate but were really supportive, because they, you know, need their health insurance or they'll die, or they are on a visa. So it wasn't, I think there was a huge amount of support. I know that we were able to get Google to drop their military contract because there was enough support. And there was, you know, there was like a spreadsheet of people who were quitting, conscientious objectors. Every week at the all-hands meeting there was a table we would set up with banners and such, and we would ask questions. It was a very rigorous campaign, and it kind of laid the groundwork, and I think it built some muscle that people are probably, almost definitely, using now, even if they don't call it organizing. It's like, how do we marshal the resources from this?
I mean, it leads to an interesting point, because I would say, looking from the other side of the ocean, I think to us, this whole tech scene, the startups, the new stuff, the internet, everything that has developed in the last 20 years or so, always had this liberal touch to it. It felt as if it was mostly about an open, world-loving agenda, and it's good for everybody. And Google kind of tuned in with their motto, and some other companies did as well, some not so much. But there was always this feeling that liberal thinking is at the core of everything that is driving the internet forward. And I think we stopped thinking that now, because it looks totally different right now. Currently, we have more the feeling that it has turned into a total right-wing apocalypse somehow. And I haven't really seen this coming. Can you explain what happened to this tech scene? What happened?
I felt the same thing. And not everyone, I think, was equating the growth and monetary success of the US-based tech industry with sort of, you know, values of social progress. I mean, I think we can, yeah, yes, it was liberal, and then we can get back to a critique of liberalism and what have you. But there were people who were looking at the infrastructure, who were looking at its capabilities, who were looking at the gap between the promises and the reality of what this tech did, and calling that out. And I feel like, you know, when I entered into this kind of privacy, security, hacker development scene, there was a lot of that skepticism there around Google. That educated me a lot. There was a lot of skepticism around surveillance. I immediately recognized, yes, we need privacy, because it doesn't matter if these people are good or benevolent. What we're doing is setting up an infrastructure that could be turned over at any moment to another regime. Logically, all of that made sense. But I don't feel that until the Snowden revelations any of that was anywhere near the nervous system of a kind of tech consciousness. And a lot of the work I've done sifting through the 1990s, through the crypto wars, and through what happened with tech regulation to set up these surveillance giants and to permit this monopoly platform business model, has kind of looked at that gap between the rhetoric of liberal, rights-preserving, open, free tech, and what was actually being built, right? And one of the things, if you look at, there's a scholar named Katharina Ryder, who I would really suggest, for show notes, I can send some of these links. But she did her dissertation looking at some of the negotiations in the crypto wars. And what you begin to see, and this is a thesis I build on top of in some of my work, is that, yeah, we won liberalized encryption, right? By 1999 in the US, it was finally legal to build, share, implement strong cryptosystems without approval from the government, without some threshold that made them useless. But the agreement there was basically: yeah, you can have encryption, but we're going to permit mass surveillance by companies. We're going to permit, we're going to endorse the advertising business model. And I can actually, I can start this point over, actually.
I can't say that there was a conspiracy. What I can say is that Katarina's work shows Microsoft saying: liberalize encryption, and don't worry, we're not going to encrypt all the data, we need it; just come to us quietly and we'll give it to you. Instead of fighting over a backdoor, instead of doing this in the public domain where we're kind of losing the fight on technical and other grounds, allow companies free rein to surveil, because that allows us to implement this ad-supported business model. And then the data agreements can happen behind the scenes. Now, I've completely compressed a very complex history into basically a meme. But the point there is that there was always that gap between the rhetoric and what was actually going on. And, I don't know, there's like a kind of internet people, right? This type maybe misunderstood exactly who would have power over this technology, right? Like, encryption is liberalized, but it's not going to be applied to protect personal communication. It's applied to protect transactions, right? The people who get to choose whether or not it's used aren't us, in terms of, you know, this sort of mass infrastructure, in terms of the tech ecosystem that's being built by the actors who are calibrating their decisions based on the surveillance business model that was instantiated, right, through regulatory decisions made by the Clinton administration.
So then there were a couple hundred thousand nerds worldwide that used PGP encryption, using additional software and plugins to send an email with nonsense information, to avoid government or private sector surveillance. But it was probably never really a significant number of individuals that... that got to the point of, you know, having mass encryption out there.
Yeah, well, I tried. It's just that my friend group didn't overlap exactly with those couple of thousand nerds. So what am I going to do, right? And I think that's, you know, this is the network effect. This is why it's actually very difficult to do that. And this is why if one of those actors that controls these infrastructures doesn't make the choice for us, it's really difficult to make that choice.
Right. But initially, I believe Moxie wanted to implement end-to-end crypto for Twitter direct messages, if I'm not mistaken. I think he was at Twitter before. And so his idea was to roll it out at Twitter. Definitely, he devoted a few years of his life to implementing and rolling out mass end-to-end encryption, right? But I believe he wanted to do it at Twitter at the time. And maybe I'm wrong here, then our listeners will correct me two minutes into the show. But clearly, none of the large...
And the regulatory issues. Yeah. I mean, I talked with the security team, and there were always these ideas. At one point I had a project where we looked at the keystore for other services, and you'd have these really exciting conversations with the security guys, and then it wouldn't go anywhere, because then, you know, the other guys would get involved.
And the WhatsApp integration was driven by Brian and Jan, who are the co-founders. And my understanding there is that they were rushing to get that done before the Facebook acquisition, to make sure that they weren't selling something that would violate their principles. And I know Moxie was working on that. I remember that period of time. And then Facebook bought it. And I don't know what the deal there is, but it stayed with the Signal protocol. It was a post-Snowden moment. You saw Android and iOS implement full-disk encryption. You saw Google encrypting its traffic, HTTPS for its networks. And I think a lot of this was just: we need to distance ourselves from bad government spying by adding encryption that proves that we're not actually part of the problem. The bad government has attacked us, taxpaying corporations, and taken this data that we really want to protect, but god, how could we have known? And so now we're encrypting things, and, ultimately, good, right? But it was a way of not looking at the full story: why is that there to begin with? And what else is not being encrypted? What other data is being given over? And why do you have the choice to do that to begin with, instead of a more socially beneficial, democratic process of determining how we're comfortable with technology entering our lives?
I would have put an even stronger spin on it: it seems like the governments didn't really attack these corporations. The governments made their business model look bad. So now they needed to change something to convince people: no, no, no, your data is safe here. We'll encrypt something, and moving on, moving on.
I was, for the gentle listener, I was kind of joking, because that rhetoric around being attacked and being like, oh my gosh, was very much the mood at that time. I was at Google when Snowden dropped, and I remember things just popped off, and I actually got on a plane. I don't know, you all probably have a memory similar to mine of when the Guardian stories hit, with the Verizon order, the Glenn Greenwald stories. It was night in New York. It was probably morning the next day when you guys saw it. I remember sitting on my couch and being like, holy fuck. And realizing just how big that was, because it was the kind of thing we'd been talking about, speculating about, in the rooms that you and I were in, Linus. And then it was like, oh, receipts, shit. And there was a lot of unclarity. There was that PRISM slide where it was like, is Google just giving them full access? People were rioting inside. Security engineers were threatening to quit. And then that morning I got on a plane to TorDev. And so, yeah, in Berlin, actually.
I find it quite interesting how encryption as a topic has changed over time. It's more or less 10 years ago that Facebook actually changed to HTTPS on their website by default. So there was a time not so long ago, you know, where most of the data flowing around on the Internet was mostly unencrypted. And that although there were these already mentioned crypto wars, you know, about encryption in general. But it was also always for nerds and for specific applications. Then it also got this nice paint job from the whole cryptocurrency craze going on, which made it somehow popular and almost took the word away.
And it was also the rise of encrypted messaging that really gave it new fuel. So Signal was in the middle of all of this, as we already heard. So I'd like to focus on Signal for a moment, as an organization that you now head. What's your understanding of what Signal is and what it's not, and how does the organization deal with that?
Well, I love Signal, and I'm really... Yeah, it's the only cool tech company, in my view. And I think... So boiling down what Signal is into one word is a little, you know, I'll just start somewhere and we'll end another place, because I think it's actually a number of things, and it kind of represents even more. You know, Signal started back in, you know, the late 2000s, right? And we can date it to whenever, right? Signal as the integrated app was 2013, but RedPhone and TextSecure predated that. And at that point, there's no iPhone. Jabber is the competition, right? It's like web client-based chat. People aren't carrying smartphones. WhatsApp doesn't exist. iMessage doesn't exist. You have a very, very different marketplace.
Yeah, yeah. I mean, I remember, like, I'd send text messages sometimes, but they were expensive, on my BlackBerry, maybe. But we're talking about a drastically different tech ecosystem. And this is particularly important in the context of messaging and communications apps, because, of course, you need a network effect for those to work. No one buys the first telephone, because you can't use one telephone. Your friend has to have a telephone. All your friends have to have an app if you're going to use it. Particularly, it takes two to encrypt. Group encryption, it takes everyone in the group. And if everyone isn't using your app for communication... it's very different, in a saturated marketplace where you have WhatsApp, where you have iMessage, where you have these normative models that people go to just to have their regular communication, to introduce a new platform for secure messaging or insecure messaging, right? Because people don't switch outside of the network unless their friends switch outside of the network, and there's a collective action problem there and an inertia problem there and all of that, right? So when I think about it, there are many things that are very precious about Signal, but the fact that Moxie carried it on his back for that decade and was actually able to keep it going and surviving without selling out, without selling data, and actually creating something that is now able to scale to hundreds of millions of people, means that Signal actually has a position in this ecosystem that makes it useful to people. That means that it's actually providing encrypted communication to people all over the globe, because their friends are using it. And my contention here, and I'm willing to discuss this, is that I don't think we can recreate Signal, right? You could shift it, you know, because it has that user base, right? You can introduce a new app, but how do you get people to use it without an OEM, without an existing installed user base, without some way of, you know, kind of making it useful to people? Because again, it's, you know, one telephone, or, you know, a couple thousand hackers who all use PGP, but they can't talk to their dad on PGP, right? They can't talk to anyone outside of themselves on PGP. So Signal has kept this form that is very heterodox in tech. It's a nonprofit, so I couldn't really sell out profitably. I could try to sell out, but I'm not going to get any money from it. It has to be reinvested in the mission, and there are certain rules around mission alignment there. So we're not pressured to monetize surveillance or to collect any data. We're really able to stay focused on that mission, and in an ecosystem where the diametric opposite is the norm, that's really, really important, irrespective of the flaws of the nonprofit model more generally. So it's achieved this pretty rare thing, like, it's the only thing like it in the ecosystem. And I think it also serves as a model for how we could think about building tech differently. Like, how do we disarm, deconstruct the massive centralized power of a handful of platform companies that basically control most of the infrastructure and information ecosystem in our world, from their jurisdiction in the US? And how do we build other models that may be interoperable, that are more open, that are more rights-preserving, and that aren't subject to the pressures and incentives of the surveillance business model?
Now, I find it interesting, you mentioned the network effect that for a long time worked against you, right? You said, okay, it took Moxie a decade. Now, the EU regulatory bodies have an idea on how to weaken that network effect by enabling messenger interoperability, thus pretty much trying to force the large messenger operators, be it WhatsApp or whatever else people use, to offer an interoperability interface. So any new messenger would have it easier than Signal did to reach the critical amount of users to actually have the network effect. Now, I know Signal's position strongly opposes this idea, or you decided not to participate in this interoperability or make use of it. There are two forces, or two goals, of Signal, I guess, that contradict each other here, and that is its security and building open communication systems. Maybe you can explain a little bit why, after, you know, having understood how hard it is to build Signal against a market, you would still oppose messenger interoperability.
Yeah, I like this question, because I want to clarify our position, which has a little bit of nuance. I don't oppose interoperability in principle. Like, if the interoperability mandate of the DMA were: you all have to, you know, interoperability needs to be an option, and it has to happen at this rigorous security bar; you have to make sure that you're implementing metadata security, sealed sender, basically adopting the Signal privacy and security bar as the condition for interoperability. That would be really cool, right? And I think that's the issue here. And I want to put an asterisk here for a moment, saying there are a lot of other complexities around policy. Like, you know, who deals with a law enforcement request, right? Even if you have no data to give them, you can't just ignore that. If a user has a complaint, who do they write to? If you're interoperating with a platform for communication that also has a social media arm, there's a totally different regulatory environment for Telegram or WhatsApp with channels than for Signal. How does that work? So I want to say this isn't simple.
There's a massive can of worms, as the EU often opens. But the conditions of interoperability are actually really political here, right? So, in order to interoperate with WhatsApp, am I going to be giving Signal user data to Meta? Well, that would violate the entire premise of what I'm spending my life's energy doing at Signal, right? Is Meta going to decide to cut off the account of one of our users? Who gets to decide that, right? Are they collecting other data because they aren't implementing some of our libraries, or whatever? And so I think, you know, that's where the rubber meets the road. And we have to have a duty of care to the people who rely on Signal. That means we are absolutely not going to forget, if we're being real, that there is a woman who went to jail in the US because Meta handed over the Facebook messages between her and her daughter, which were used to prosecute an abortion her daughter obtained, in the state of Nebraska, after the Dobbs decision. That's the stakes of this conversation, even when we're talking about the technical details of interoperability, right?
We will continue to advocate for a path that raises that bar, that meets or exceeds Signal's bar. And if that succeeds, I'm like, yeah, I want to talk about that. But, you know, those are the conditions under which we would interoperate. So, you know, we don't take a stand against it. We just say, look, these are the complexities, and Signal stands with the people who rely on Signal, not with a sort of, you know, vision for some muddy middle where we're all interoperating, but we've sold people out and made them susceptible to what we described with Meta.
Is this mean or median people? I mean, I don't think most people understand this at all, because they've got laundry to do and this isn't their area, right? I think some of the politicians I've talked to seem to get it, but it's not... I think there's an inertia to the process, which means it's not clear how far these points have become bedrock, so to speak. And I would also say, as an American, I'm here for today, but my instinct on general EU politics is not something I would rely on.
Which brings me to an interesting point, because we are actually very interested in your view as an American who knows how things work on that continent. What's your impression of how Europe deals with tech, with these new technologies coming up, and how it impacts society? Can you just give me a feeling for how this looks to you, in a good way, in a bad way, whatever you feel, just to...
It's almost, it's split in an interesting way. Because on one part, if you go to kind of the startup ecosystem, the VC ecosystem, like that world, there's a real, you know, there's a lot of smart people and cool people doing cool things. And there's sometimes a bit of magical thinking that I see, which is really like, you know, if we wish hard enough, if we're able to figure it out, we're going to be able to create competitors to the US incumbents, right? And we're going to have our own thing.
Which often, you know, sometimes I'm just like, okay, that's a money play, right? Like, you get enough in your Series A, Series B, and then you'll get acquired and no one will do anything with it, or you'll get rich, or whatever. You know, you may not necessarily believe that. Like, markets float on hype. So, okay. But there is this thread where it's almost a willful misunderstanding of the reality of incumbent platforms, of the history that accrued that type of power to US companies, and of the dependencies that Europe and most of the rest of the world have on these companies. The three cloud companies based in the US have 70, 70 percent of the global market. You have five major social media platforms.
Yeah, AWS, Azure, GCP. And then I think the other percentage is made up by US companies as well. And then there are some Chinese companies. And then you have five platforms that effectively shape our global information ecosystem, like our perception of reality. The four biggest are in US jurisdiction, right?
Yeah, I think, to put it flatly, it is. Facebook, Instagram, X. And then, it's not Twitch, but somehow there's another one, YouTube. And then there's Twitch, too. For all intents and purposes, that is a huge amount of concentrated power that, again, relies on network effects, relies on economies of scale, relies on all kinds of global infrastructure. It's trillions of dollars that can't just be interrupted by investment. This is a kind of...
Well, Telegram doesn't, I think, run most of their own infrastructure. They don't have a cloud business model. And they also don't really have a business model. It seems like they have this crypto play. But it's not clear how that money moves. There's a lot of UAE investment. So you're not talking about these big tech...
And why is it difficult? Where is the normative shape of the tech industry coming from, right? If the cloud companies all of a sudden decided to cut off half their APIs and change their infrastructure, then for most startups in the entire world, including organizations like Signal, Telegram, whoever's riding on top, all their engineers' pagers go off. They've got to respond to that, right? It's unidirectional that way.
It's like a 500-million-dollar European sovereign AI fund. And then you're like, but that's half a training run. Like, what are you buying with that? Which is disturbing, because, okay, that's a lot of money, let's not be flip about it, and it could be going to really good things. It could be supporting interesting open projects, it could be supporting alternative, interoperable... like, smaller clouds for more heterodox open source projects. There's really cool stuff that is languishing without that money. And I think, where is that money going? Well, if you're talking about it going into AI, it's going to one of those three cloud companies. It's renting infrastructure from Microsoft, Amazon, or Google for model development or for deployment, which is inference. And inference is really expensive. You don't just train once; using a model is way, way more expensive than normal information retrieval. So it's just this massive, computationally expensive thing. And you don't get European sovereignty, you get a feeling of... I don't know, of not being behind, a feeling of not being ashamed of being technologically...
And then there's the other side, where I often find a much more sophisticated and clear-eyed view of these problems, right? Like, having this discussion about concentrated power in the hands of infrastructure and the media ecosystem is way easier in Europe. I mean, people feel it, right? They see it. And there's been a history of pushing back against the encroachment of US tech, sometimes effectively and often very ineffectively, that I really enjoy. And particularly in Germany, there's a very high sensitivity to privacy, very often a clear-eyed view on some of these debates, which doesn't always translate into policy. But I find the intellectual environment around this stuff, when you talk to people who are knowledgeable and have thought about it, teaches me a lot and is really sophisticated.
Yeah, I mean, there are both threads. You have two wolves inside European politics. One wants its own tech industry, and one doesn't want to be subject to US tech colonialism. And I think you get some weird laws out of that. You have the AI Act, which kind of had this last-minute brinksmanship around whether foundation models, these big LLMs that are now the trendy kind, should be included or not. And you often see bold regulatory attempts that then get shaped in odd ways, trying to have it both ways, right? Like, how do we regulate the Americans away and get our own, right? But how do we do that in a way that is dressed up in principles, without actually declaring that as the intent? And I think you're seeing a huge amount of money being spent by the US companies in Brussels right now, which is also influencing things in interesting ways. And then there's something I'm theorizing a lot in my intellectual work, and which I think is really important: you also see what I'm calling the politics of intellectual shame be really pervasive in this conversation. And this is not just Europe, this is across the board. I mean that there is a real fear among a lot of people who are in decision-making positions, politicians or academics or whoever, and not even in decision-making positions, but it matters when it's them, of being stupid about tech, of being behind the ball on tech. And this plays right into patriarchal dynamics. Men hate when someone else knows something more than them, in particular if that's a small woman.
Yeah. Well, you all are, generally, and I don't want to gender this so schematically, but there is an ego that can be very, very fragile here. And the way I've put it before is that it kind of turns uncertain men into yes men. They don't want to ask the dumb question. They don't want to be like, what's an LLM? What's a server? How does that work? And that type of insecurity, the fear of being behind, the fear of being called technically unsophisticated or of hampering progress or putting your finger on the scales of science, look at the Nobels, how could you stand in the way of all this progress? I think it really gives the upper hand to the companies and those who have an interest in creating products and growth and domination via these technologies. Because people really don't want to challenge them, because challenging their dominance or their plans gets conflated with somehow being anti-science, or being stupid about tech, or not being smart enough to have a position on a topic. And I think that's something... because I kind of came up through Google asking every dumb question in the book, because I didn't come from that world, right? So I had to ask, like, how does a computer work? Right? I'm like, can someone diagram what a function is? I don't know any of this stuff, right? But I think I have a sensitivity to that, because I remember feeling it. I remember people being mean about it, back in the day when I was trying to learn this stuff. And I think that a discourse that collapses scientific progress into, kind of, the success of a handful of tech companies preys on that type of insecurity, and has created an environment in which people have no idea what AI is and are still professing boldly on how to regulate it.
I should be clear: I don't think that what I was describing, the politics of intellectual shame, is unique to Europe, but I think it's in Europe as well. And particularly among folks who feel like, you know, the Americans beat us, we've got to get ahead, right? I think where I see the European position being most, let's say, under-informed, or perhaps in some cases just pernicious, is in the chat controls regulation and the desire, the apex of magical thinking, which is: let's rename a backdoor "client-side scanning", and then let's mandate scanning everyone's private messages, comparing what's in those messages against some database of permissible or impermissible content, and then taking action on those, in the name of protecting children, which is the justification during this instantiation of the crypto wars.
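For readers who want the mechanics she's describing: a minimal sketch, in Python, of the structure of client-side scanning. The blocklist and its contents are hypothetical, and real proposals use perceptual hashes or classifiers rather than exact digests, but the structural point is the same: the check runs on your own device, before encryption.

    import hashlib

    # Hypothetical blocklist of digests of "impermissible content".
    BLOCKLIST = {hashlib.sha256(b"known-bad-example").hexdigest()}

    def client_side_scan(plaintext: bytes) -> bool:
        # Runs on the sender's device, against the plaintext,
        # before any end-to-end encryption happens.
        return hashlib.sha256(plaintext).hexdigest() in BLOCKLIST

    message = b"hello"
    if client_side_scan(message):
        pass  # "take action": block, report, etc. -- this is the backdoor
    # else: encrypt and send as usual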
Well, you know, I see this as an ongoing power struggle, right? Between whom? Well, this is not a misunderstanding. I think a lot of the people pushing for this understand that backdoors are dangerous and understand that the pretext is flimsy. But asymmetric power constitutes itself in part through information asymmetry. And there's a deep discomfort that dates back to 1976, when Diffie and Hellman were trying to publish their paper introducing public-key cryptography and the US government was trying to suppress it, trying to say, don't publish this, right? And then, you know, databases weren't quite big enough, networks weren't quite big enough or ubiquitous enough for it to matter. But they were already looking at it like, oh, shit, we don't want this in the public, right? And then you go through the 90s and there's the Clipper chip and key escrow, and you have Stewart Baker writing in Wired magazine, like, PGP is just for terrorists. We have proof. No, PGP is for pedophiles, right? Which really echoes what we're hearing now. Like, who even has a computer in 1994, I believe, when this op-ed is written? And then we have post-9/11, and then it's like, actually, PGP is for terrorists, right? Encryption is for terrorists. All the while, our dependency on digital infrastructures for communications is growing and growing and growing. Our dependency on digital infrastructures generally is growing. And the need for encryption to protect commerce becomes existential to the Internet industry. And then what do you do about communications, right? And I think this has been an anxiety that is pervasive among those, law enforcement, governments, whoever, who feel that they need to constitute their power via information asymmetry. And any encryption that protects people, not just commerce, is a threat to that, right? And so what I don't see is that we're going to win an argument, right, or that we're going to win this via strength of argument. I do think we can fight, and I think we're in a position now where we're seeing chat controls falter. I believe Hungary just tried to raise it and didn't get the support. There was the Belgian proposal a few months ago, which also didn't get the support at the last minute. And we just had the Dutch law enforcement authorities writing a memo to the government there saying, yo, don't support this, you're talking about a very dangerous backdoor that would undermine Dutch cybersecurity, right? At the same time, we have reporting in the Wall Street Journal... There's a receipt for what all of us should have suspected all along, which is that the backdoors built into US telecommunications infrastructure for government intercept have been used by Chinese intelligence and perhaps others. I think at this moment the facts are on our side, and the fact that they are is permeating this discussion, making it harder and harder for them to push it forward in the European Commission.
Exactly. And that's why I think we're not going to win. There's going to be another pretext if we win this one, right? There's going to be another angle if we win this one. We just have to keep building our muscle to sustain this fight probably forever, because I don't think the will to power is going away. I think they're just going to keep trying to rearrange the reasoning.
In terms of political discussions, it helps that we're right. We bring in a huge amount of evidence that a lot of people haven't seen in these political discussions. I think our side has been on the back foot for a while. In civil society, there's been a cutting of funding to privacy advocacy since around... there's a sort of history here. There was a move toward tech accountability after the 2016 election. There's the Cambridge Analytica scandal. There's all of this. And it's like, okay, we need to hold tech accountable. The way to hold tech accountable is to attack the business model, in my view. But there aren't that many pieces of legislation or proposals that actually do that. Many of them use the wrapping and the language of accountability but are actually just expanding surveillance, right? It's like, we're going to hold them accountable, so we need a database, so we need to know who's logging into websites so we can find the bad guy. We need to know what's in your messaging so that we can make sure that these tech companies aren't allowing crime on their platforms, et cetera, et cetera. So it was basically a hijacking of this, in many cases, kind of righteous moment, where people recognized that this business model was pretty harmful, to fulfill wishes that had been pervasive since well before then. At the same time, privacy advocacy, and a lot of the things, Linus, you and I had been doing for a long time, was receiving less and less support and falling out of the limelight. And so I think it was in that environment that things like chat controls, things like the Online Safety Bill and other paradigmatic examples of this "client-side scanning to save children" meme, grew up. And one of the reasons, there are many, many reasons, I decided to move from being on the board of Signal to full-time at Signal, one of them was that I saw this moment and I realized there weren't that many people fighting it, and that one of the things that I could bring was a staunch willingness to fight it.
I open up my laptop. Yeah, well, you know, it's obviously not just me; nothing like this is a singular thing. We work with a pretty broad coalition of folks. I'm sure many of your friends, many listeners perhaps, are part of that. Signal doesn't have a policy arm. It's a very lean, targeted, pretty senior organization. But we do work with people around the globe, EDRI in the EU and a number of other organizations, to keep tabs on what's happening. We're also in a good position: we're a non-profit, and we're very committed to rigorous communication, so we don't have a history of hyper-marketing, and we don't do hyper-marketing now. So we're very careful: when we make a claim, when we make a statement, we're backing that with citations. It's accurate. We're really marshalling the technical knowledge and prowess that we have. I almost think of it as clarifying the record. If there's a report that says client-side scanning is actually safe, we know it's not safe. Okay, well, there's an academic coalition that has written this letter; Signal can write a letter. We can begin to put a bit more weight on the scales, which have been fairly light, given the dynamics I just outlined. And then I do media, I do public speaking. I think a lot about how to tell this story in a way that isn't boring or alienating for regular people, particularly because the story on the other side is so arresting. It's like, we have to save children from abuse. And every one of us, that hits you in the heart, right? Myself included, right? My amygdala is activated; suddenly I just want to do something, I want to help, give me the thing to do. How do we do that, right? And then sitting across from that and being like, well, let me tell you about a one-way function... That's not going to work, right? So how do you enter into that debate in a way that isn't dismissing the very grim and real problem that is being evoked, and make it clear that the solution to that problem that is being presented will not solve that problem, one, and two, will cause drastically worse problems for many people around the world? And that's the task at hand right now.
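A one-way function, for reference, is exactly what the name says: easy to compute forward, believed infeasible to invert. A minimal Python sketch, using SHA-256 as the canonical example:

    import hashlib

    def one_way(message: str) -> str:
        # Forward direction: cheap to compute for any input.
        return hashlib.sha256(message.encode()).hexdigest()

    print(one_way("meet me at noon"))
    # Recovering "meet me at noon" from that 64-hex-character digest is
    # believed computationally infeasible -- that asymmetry is what modern
    # encryption rests on.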
Well, the infeasibility and the danger of the approach, a lot of evidence around the infeasibility of the approach that is either willfully ignored or just not understood, and then figuring out how we explain that without being either accidentally or genuinely callous about the concerns that have brought people to the table.
Jimmy Savile's walking around. You know, like, what are the infrastructures in place to make sure that when children are going through this, they're believed, they're protected? What happens when it's your priest? What happens when it's your teacher? What happens when it's your brother, right? These are the questions that are really hard to look in the face, because they implicate social pathologies and interpersonal relationships and power dynamics that are really, really difficult, and often relate to emotionally challenging factors, or people's past experiences, or what have you. So you're going right into very traumatic subjects. But I don't think we can have that conversation without having a real conversation. And then, when you begin to pull back the layers there, you say: oh, well, the UK has been pushing for client-side scanning as a remediation for child abuse, but the UK government in 2023 funded social services at 7% of the recommended amount. Right? The roofs of the schools in the UK are collapsing. There isn't support for this. And then if you look at, and I don't have public numbers to share, but I've had a number of personal conversations: okay, how many law enforcement people are actually tasked with pursuing the criminality that may be reported via online imagery? In some cases, it's two. In one case, it's two people for an entire country. Two people, right? So if you begin to map this, what you see is a story that does not add up. And this is where I get enraged. Because I'm like, you are fucking trading on children's pain to get your backdoor, or whatever the fuck you want, pretending that you're solving it, taking up the space for actual solutions that could actually help real children who are suffering now, and turning no attention to every glaring problem in this massive list, which is pretty obvious even to me, and I'm not an expert here, I've just sifted through this. So I think that's the dynamic we're walking into.
I agree. I mean, it is quite telling how much emphasis is being laid on, hey, we really need client-side scanning and then the world is going to be safe. And if you say, well, how about we fund support, or any kind of preventive activities in, um, social care, it's like... yeah.
Yeah, yeah, well, sorry, we used the prefix "online", so that's not ours. And I think it's also like, people gravitate to the abstraction, right? If this is online child abuse, then we don't have to deal with it in our real lives. It becomes an abstraction that we can almost blame on the same platforms that have been so unaccountable. We can blame it as an internet phenomenon, not a phenomenon in our real lives. And like, oh wait, our church doesn't have the infrastructure to actually deal with this in a humane way, right? And I think that's a dynamic that we're also seeing here.
This is, by the way, one thing I find so interesting about Signal as a secure messenger. It has become mainstream, but it has also managed to maintain a reputation of goodness, right? I mean, saying, like, okay, text me on Signal. Oh yeah, that's the secure messenger, blue symbol, looks nice, very friendly user interface. Whereas it would be like, let's text on Threema: oh, that's the complicated black one. Or how about Telegram? And that's like, okay, that's a completely different end of the internet. And that makes me think of the curious case of Pavel Durov being detained in France, and apparently at least charged, because they refused to cooperate in numerous cases. Why do you dare to come to Europe, Meredith?
Well, I'm a brave person, Linus. But, that said, I think this is one of the places where more public education is necessary, because Telegram, you said the other end of the internet, is actually very, very, very different from Signal. Telegram is a social media platform. It allows mass broadcasts to millions of people. You can go viral on Telegram. You can find strangers on Telegram via directories. There's a near-me feature that will geolocate things happening near you. All sorts of things that are not private, are not secure, and are regulated completely differently from private and secure communications like Signal, which is solely a private and secure interpersonal communications app.
Right? And when we're designing Signal, we're actually very, very careful not to be a social media platform. We think about that in the design phase, so everything we do can be as encrypted as possible, so that we know nothing about you, or as close to zero about the people who use Signal as possible. What we do know is: we can say, yes, this phone number did sign up for a Signal account, we know when that phone number signed up, and we know the last time it accessed the service. We would like to not even know that, if it were possible. On the other hand, Telegram is a social media platform which retains huge amounts of data, has a duty under law to cooperate in turning over that data, has search functions, has directories so you can find new things. So it's a very different beast. And I think, one, because Durov has, I'm trying for a diplomatic word, made statements that are not supported by fact around Telegram being private and secure, and has kind of taken on this yeoman's defender-of-free-speech-and-privacy position, people often think Telegram is private and secure because it has a DMs feature, right? But Signal is just private and secure. So the TLDR on that is: there's really no danger for Signal here, because we are very, very far away from Telegram. And we have set ourselves up so that, one, such cooperation isn't required, and two, such cooperation is not possible. Because literally, you could put a gun to my head, I don't have that data. Whereas Telegram has servers and servers and servers full of that data.
I mean, this is all overlaid on the French legal system. I am not a lawyer, especially not in France. There are some vagaries of their legal system in which any judge can open an investigation and the basis for the charges will not be known until trial, and we're looking at years and years until then, with me not speaking French well at all and relying on, like, weird translations. So I want to stay away from speculating there. But what it looks like, based on the charges that were released in the press release, is that it was failure to comply with requests for data, and then a handful of other charges added on that aren't as severe as those.
You can go to Signal.org slash Big Brother, and every request that we have been forced to comply with, because we fight them and have them unsealed, is posted there, showing exactly how close to no data we are able to turn over. And, and I think this is interesting for some of your listeners probably, you see what the law enforcement agencies in these requests are asking for, and it's often huge lists, massive amounts of data, which gives you a sense of just how much a surveillance platform, like Telegram or another platform, is commonly able to provide that Signal is not.
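To make that asymmetry concrete, a sketch with hypothetical field names, contrasting the entirety of what the conversation above says Signal can attest about an account against the kind of record a data-retaining platform holds:

    # Hypothetical field names, for illustration only.
    signal_can_provide = {
        "account_exists_for_number": True,
        "account_created": "2021-03-14",   # registration timestamp
        "last_connection": "2024-10-15",   # last time the account connected
    }

    data_rich_platform_retains = {
        "contacts": [...], "group_memberships": [...], "message_metadata": [...],
        "ip_logs": [...], "device_info": [...], "directory_lookups": [...],
    }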
It's a non-profit, so we're funded by donations only. Yeah. And we are thinking about, in the future, maybe having a paid tier for some features, something like encrypted backups, which we're building right now. Could we charge people for media storage or other expensive features? But that would be in addition to donations, and squarely within the non-profit structure that keeps us safe from pressure to surveil.
We pay very well. And, I mean, it's a really cool mission, right? Jobs in tech are kind of depressing in many cases. Not everyone wants to go optimize an ad server. And at Signal, you can work on core infrastructure for dissent and human rights work and journalism around the world, without which a lot of those things would be deeply imperiled. It's a real cool thing to get to do and support. And we pay well.
Well, I would never presume to speak to the consciousness of another person who isn't here. But I think, yeah, I am very happy, and I think a lot of the people who are at Signal are very happy. And it's also like we're part of a project. It shows that what we have in tech, what's built in the tech industry, is not inevitable. There's a series of choices, a series of incentives, a business model, that has shaped tech into the form we have now, but it does not have to be that way, right? We can rewrite the stack, we can build alternatives. Nonprofits can work, right? We need capital, we need will, we need talent, we need all of those things. But the thing we have now is not inevitable. And I think of Signal as a keystone species in the ecosystem, kind of setting the bar, kind of regulating the rest, right? You can have privacy. You can have the right to private communications. You can subsist outside of this paradigm. And the future I want is one where it's not just Signal, right? Where there are many, many other organizations and efforts doing it differently, rejecting that paradigm, drawing in capital, away from the other place, and beginning to marshal the type of political will that is often very shallow, like the 500 million AI fund, but marshaling it for something that is actually substantive, actually making the kind of change to the tech ecosystem that I think we need to have a livable world.
I do think there's a model there. I'm interested right now in researching hybrid structures and tandem structures. Are there for-profit... are there areas of tech that aren't driven by surveillance? Are there ways you could fund nonprofits, fund some of this core infrastructure, libraries and other things that have been languishing for decades? How do you revitalize that? And are there ways to build truly independent infrastructure outside of the three-companies, five-platforms model that, I think, is just clearly critically dangerous at this point?
So, when it's about building infrastructure that's in our hands, that's not going to get easier in an AI world, right? Where it's models and data that we need, huge investments into these models, first for training, then for operating them. Do you see any future for this whole AI thing in users' hands, in our hands, serving our actual privacy needs and, let's say, private and business needs?
Well, I think my answer to that is that that future will rely on laying an independent infrastructural bedrock and actually transforming some of the ways we govern and think about digital technology generally. Including being really attentive to things like: how is data created? Who gets to decide what data we use to reflect our complex lives and realities? Who gets to decide how patterns in that data are made sense of, what analysis is done on that data, and then what we do with the sense we make of it, right? What decisions we make. And if we do all that, we transform what AI is and what it means. Because you're no longer just scraping all the detritus off the stupid web, which was deposited or created via this surveillance business model, packaging that in an LLM and calling that intelligence, right? You're actually having to grapple with the epistemic process by which data becomes a proxy for reality, and that proxy shapes our lives and institutions. And AI itself, right now we're talking about these massive models, right? These scaling laws, this sort of big American guy dream of the largest in the world. But AI is a very slippery term. It's not a technical term of art. It can apply to many, many different things. There are small models. There are heterodox approaches. There are expert systems, which they're now trying to bolt onto the side of generative systems, because, wait, probabilistic answers aren't true, so we need to bolt truth back on. And we're kind of repeating a lot of the history of AI, kind of speedrunning it, in the search for a business model. So my answer there is that a lot of the things that need to be done simply to disarm and draw down the centralized power of these surveillance and infrastructure companies are the same things that would need to be done to redefine, in a sense, what AI is, and our relationship to how, I don't want to use the word truth actually, how information is made via analyzing data, and who gets to control that. And my sensitivity to data in that answer comes directly from my measurement experience, right? Where one upgrade to the Linux kernel across our server fleet fundamentally changed the kind of data we were able to create, how it populated the schema, and meant that that data wasn't necessarily fungible with the data collected on the older version of the kernel, right? And in order to solve that problem, I had to get a guy to go sit with the kernel maintainers for like two years to make sure that the update wasn't going to fuck up the way we got TCP dumps, basically. And then think about social data, then think about data that reflects who gets access to resources, then think about all of the other things. And I think it's actually an exciting idea to think on: how do we create systems where we're much more attentive to that, and recognize that it really matters how those choices are made, how data is created, who gets to create it, and how it gets made sense of?
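The kernel anecdote points at a general pattern: stamp measurement data with its provenance so that records produced under different conditions are never silently treated as fungible. A minimal sketch, with hypothetical field names:

    import platform
    from dataclasses import dataclass

    @dataclass
    class Measurement:
        rtt_ms: float   # the observed value
        kernel: str     # provenance: the kernel that produced it

    def record(rtt_ms: float) -> Measurement:
        # Stamp each record with the environment it came from, so later
        # analysis can group by kernel version instead of mixing regimes.
        return Measurement(rtt_ms=rtt_ms, kernel=platform.release())

    print(record(42.7))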
Would you say, in that future... So there are these large gen-AI models and whatnot, and from other discussions we've had, I know that you probably believe there is a stronger future for specialized models, expert models, not the generative ones. Would that be a prediction of how this whole AI thing is going to evolve? Not from the business model perspective or from the political perspective, but from the actual technology and research perspective: do you think there is still going to be exponential improvement in the gen-AI world, or do you think it's now the time for the smaller speedboats?
Well, I think it's definitely time for smaller speedboats. And I want to index on that word, improvement, because if we scratch the surface on some of these large models, some of which are generative, you begin to realize that a lot of the claims to improvement and accuracy are based on really narrow benchmarks and evaluations that don't reflect the real-world performance of these models. That's...
Exactly. So there are things that gen-AI models can do, and I don't see them realistically going away. But I do see a struggle for a market fit that can produce the kinds of returns necessary to prop up a massively energy-intensive, massively infrastructurally intensive, extraordinarily capital-intensive industry, right? You have billions of dollars for a training run, just huge amounts of energy and effort needed to create a model. But, okay, who's going to keep paying for a chatbot that's wrong? So I think there is a struggle for market fit. I think you see this with things like Microsoft Recall, where they pushed to implement this, I don't know... Microsoft Recall was supposed to ship with Windows 11.
I don't know, value to your boss, maybe. And so you can type, oh, I want to find this browser, I want to find this thing, and it will... get it for you. I obviously don't use it. And how it remembers is really the key here. It remembers because it's taking screenshots of your device every five seconds, creating a library of those screenshots, and accessing those as the data on which it is able to claim intelligent memory.
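Mechanically, that is nothing more exotic than a capture loop. A minimal sketch using the third-party mss screenshot library; the five-second cadence matches the description above, and the output path is illustrative:

    import os
    import time
    import mss  # pip install mss

    os.makedirs("screens", exist_ok=True)

    # The "memory": one full-screen capture every five seconds, accumulated
    # into a local library that can later be OCRed, indexed, and searched.
    with mss.mss() as sct:
        while True:
            sct.shot(output=f"screens/{int(time.time())}.png")
            time.sleep(5)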
And I don't need to know that I was doomscrolling. That's not a proud moment of memory for me. So, yeah, and to me, what that says is: that's not a very useful purpose. It's probably going to be marketed to enterprises for worker surveillance, is my guess. But it shows that Microsoft is really trying to find a market for this, right? Because they clearly circumvented their QA process. They clearly circumvented their security evaluation. There were a lot of things that clearly didn't happen, and didn't happen at a company that is actively... They have OpenAI, they're investing a huge amount in Azure, they're in the infrastructure everywhere, all of it. Microsoft is yoked to OpenAI, so it has the leader position for a moment. And Microsoft has also been really trying to regain a good reputation in the security world. So it's indicative of how desperate that rush to market fit, and the AI exceptionalism that is driving it, is, that they just messed that up so egregiously. And I can hear some hacker in a Microsoft hallway being like, I don't fucking know, I just left that meeting. Because you can kind of sense how those things happen.
Yeah, I mean, one use I think about a lot, that is definitely useful to intelligence services: I'm sure we all assume that POTS telephony data is being collected en masse by every intelligence service that can, and has been for many, many years. And that data was probably not that useful for a long time, because you'd have to have a human review it, or something like that. It's probably a lot more useful now that you can quickly transcribe it with AI and synthesize and search it using these generative systems, right? So that's one example where it's almost certainly very, very useful, and it changes the calculus on how dangerous this surveillance business model is as well. But who is that useful to? It's not me and you.
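A sketch of the pipeline she describes, using the open-source openai-whisper package; the file names and search term are illustrative:

    import whisper  # pip install openai-whisper

    model = whisper.load_model("base")

    def transcribe_all(paths):
        # Bulk speech-to-text: after this step, "search the calls"
        # is just plain string matching over text.
        return {p: model.transcribe(p)["text"] for p in paths}

    transcripts = transcribe_all(["call-001.wav", "call-002.wav"])
    hits = [p for p, text in transcripts.items() if "protest" in text.lower()]
    print(hits)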