ZeroNet Blogs (static mirror)

If you haven’t read it yet, here’s the full story about the US agencies’ attempts to infiltrate Telegram last year:

It tells how the FBI tried to influence me and bribe our engineer in May 2016 to make Telegram less secure. Luckily, since neither of us are US citizens, we could afford to refuse their offers and I was able to tell the public about these attempts. If we were American citizens, the FBI would have likely tried to silence us using a legal procedure called a "gag order" – when the US authorities can not only demand that you do something (like plant a backdoor into your app), but also prohibit you from telling the public about it (otherwise you can end up in jail).

Source: Pavel Durov

That whole story made me ask myself this question: if our team experienced such pressure during just one week’s trip to America, what kind of pressure are US-based tech companies facing every day? How can a privacy-oriented company permanently operate from America? We can hope that the open US legal system would defend them, but due to the secrecy of these “gag orders” we would never even know if things went wrong. And unfortunately, Edward Snowden’s revelations confirm some of the worst fears.

The article also provides facts that confirm something that I always feared could be true – that some of the famous and most vocal US-based influencers within the cryptography world are sponsored by the US government to push the agenda of its agencies. Some past cases are widely known (like NSA infiltrating RSA), but it looks like the level of collaboration between US agencies and these influential “privacy advocates” is much deeper.

All of this makes protecting privacy really hard, particularly considering the fact that Google and Apple – the two companies we depend on for mobile operating systems – are based in the US. I don't see any easy recipe or solution to fix this. I wish that one day huge companies like Apple and Google could become independent of any government that distorts the mission of their founders (maybe they'll start their own countries?).

Until then, I’ll continue doing my part building Telegram and protecting our users, even if that will require speaking out under gag orders. I know this can probably get me into trouble some day, as it did in the past when I was living in Russia. But this is the only way I can imagine myself going forward, so I don't have and won’t have any regrets. It’s all worth it because of you guys – the millions of users who entrusted their private data to Telegram.

The Crypto-Keepers

- Posted in The News Gateway

It’s 7:30 p.m. on a Monday in June at an undisclosed location somewhere in northern Europe. I’m sitting in a private dining room in an upscale hotel, talking to Pavel Durov—the “Mark Zuckerberg of Russia,” a young internet mogul who had built the country’s most popular social network and lost it to the Kremlin all before he turned thirty. Not long after the famed American whistleblower Edward Snowden had fled to Russia to avoid federal prosecution, Durov had offered Snowden a job—but then himself had to flee Russia because of a widening conflict with the Russian government. Initially hailed as a cyber-dissident because of his spat with the Kremlin, Durov has since drawn the repeated, aggressive interest of American intelligence officials, as well.

Source: Yasha Levine

A group of wealthy tourists milled around in the lobby, excitedly chattering about their day of sightseeing and museum tours. Our conversation was of a darker nature. Durov and I were talking about the murky, hyper-paranoid world of the crypto-obsessed privacy movement—a place where spies ruled, nothing was what it seemed, and no one could be trusted.

For me, the paranoia made sense. For the last three years I had been investigating the grassroots crypto tech at the heart of today’s powerful privacy movement: internet anonymizers, encrypted chat apps, untraceable drop boxes for whistleblowers, and super-secure operating systems that even the NSA supposedly couldn’t crack. These tools were promoted by Pulitzer Prize-winning journalists, hackers, whistleblowers, and the biggest and most credible names in the privacy trade—from Edward Snowden to the Electronic Frontier Foundation and the American Civil Liberties Union. Apps like Tor and Signal promised to protect users from America’s all-seeing surveillance apparatus. And the cryptographers and programmers who built these crypto weapons? Well, many of them claimed to live on the edge: subversive crypto-anarchists fighting The Man, pursued and assailed by shadowy U.S. government forces. Citing harassment, some of them had fled the United States altogether, forced to live in self-imposed exile in Berlin.

At least that’s how they saw themselves. My reporting revealed a different reality. As I found out by digging through financial records and FOIA requests, many of these self-styled online radicals were actually military contractors, drawing salaries with benefits from the very same U.S. national security state they claimed to be fighting. Their spunky crypto-tech also turned out, on closer inspection, to be a jury-rigged and porous Potemkin Village version of secure digital communications. What’s more, the relevant software here was itself financed by the U.S. government: millions of dollars a year flowing to crypto radicals from the Pentagon, the State Department, and organizations spun off from the CIA.

My investigation of this community had brought me a lot of abuse: smears and death threats lobbed by military contractors against me and my colleagues; false slanderous stories planted in the press about me being a sexist bully and a CIA agent paid to undermine trust in encryption. So I learned long ago to approach my sources with skepticism and wariness—especially someone as infamous as Durov, who had recently gotten into the crypto business with Telegram, which now enjoys the distinction of being ISIS’s favorite chat app.

Mogul on the Move

Durov, who asked me to obscure the location of our meeting because of his ongoing conflict with the Russian government, was wary, too. He had a right to be.

Now thirty-two, he is a multimillionaire—and, if the papers are to be believed, Russia’s most radical internet mogul. In 2006, while only twenty-two, he had cofounded VKontakte (“In Contact”), a Facebook social networking clone that became more popular in Russia and across the former Soviet Union than Facebook itself. The company didn’t stay under his control for long. In 2011, following mass opposition protests against Vladimir Putin’s ruling party organized largely via social media, the government wanted a firmer grasp over VKontakte. Durov resisted, and pulled off all sorts of acts of defiance: he took photos of documents ordering the company to block certain political groups and posted them online, and publicly mocked officials of Russia’s FSB state security forces.

But the Kremlin persisted, and finally got its way. Durov had wearied of the Russian state’s steady barrage of dramatic pressure tactics—including attempts by police to raid Durov’s apartment, a bizarre blackmail incident involving what Durov says was a fake video purporting to show him in a black Mercedes running over a traffic cop, and trumped-up criminal charges that forced him to flee the country. So in 2014, the young social media mogul was forced to sell his 20 percent stake in VKontakte to a business concern run by Uzbek-born Alisher Usmanov, a scary billionaire loyal to President Putin. Stripped of his empire, Durov could no longer claim to be the Zuckerberg of the Russian polis.

Durov fled Russia and, after making a strategic investment in the two-island nation of St. Kitts and Nevis, became a citizen of the Caribbean. For the past three years, he’s lived the life of an autonomous, self-facilitating multimillionaire, wandering the globe living in luxurious hotels, while forsaking material possessions like land and real estate. Durov could have done anything he wanted, and so while in exile, he worked with his elder brother Nikolai on the next big thing: channeling his time and wealth—estimated to be about $300 million—into the development of a new messaging app, Telegram.

With about 100 million users worldwide, Telegram is roughly a tenth the size of Facebook’s WhatsApp, its closest competitor. But Telegram has found success in strange places: it’s huge in Iran and big in Uzbekistan. It’s got some users in Europe, as well as a growing fan base among Russia’s journalists. It’s also been a big hit with Al-Qaeda and ISIS, who seem to see Telegram as the most secure tool on the market. The groups have used the app’s encrypted chats to plan attacks, while deploying its “public channels” feature to broadcast propaganda, recruit lone-wolf terrorists and claim responsibility for successful strikes. Telegram has been implicated in attacks in France, Germany, Turkey and, most recently, in Durov’s hometown of St. Petersburg, where a lone suicide bomber struck a metro station in the heart of the city, killing fifteen people and maiming many more.

Getting the Message

Not surprisingly, the Russian government has again put Durov in its sights. Russian security officials have been pressuring him to share data with them, or risk having his service blocked. But the Russians aren’t the only ones trying to put the screws on Durov. Apparently, the Americans want a piece of the action, too.

As a waitress brought out a plate of bread and some appetizers—sliced squid and tuna tartare—Durov explained that over the past several years, the FBI has been attempting to pressure him into secretly cooperating with the agency, and that agents had gone as far as trying to bribe one of his developers into becoming a mole. He had never fully discussed the details of his run-ins with the FBI in public—until now.

Durov says the pressure started in 2014, shortly after he sold his stake in VKontakte. That’s when he first started routinely getting interviewed and questioned by FBI agents on the American border. Sometimes they would detain him for further questioning on entry; other times they would catch up with him to “chat” while he was at the gate getting ready to board a plane. At first, the FBI was curious about his work portfolio at VKontakte and the company’s relationship with Russian law enforcement, including the procedures it followed for complying with government data requests. “I wasn’t comfortable with these questions,” he said. “I had no inclination of becoming an American mole, so I just provided them with the minimum information that was already available in the media.”

On later trips, though, FBI officials began asking about Telegram. Where was it based? How did it work? How could the FBI get in touch with Durov in the future? The agents followed up with friendly notes by email, telling Durov to reach out to them if he had trouble or needed help with anything. Durov says he continued to ignore the overtures, but the FBI clearly wanted something; the question was what. In 2016, Durov got his answer. That May, he flew from Europe to San Francisco to attend the annual Google I/O conference. The first morning of his visit, two FBI agents showed up unannounced at eight in the morning at a Mountain View home he was renting through Airbnb. “How did they get the address?” Durov asks. “Maybe they tracked my SIM card? Followed me from the airport? Maybe they got the info from Uber? I don’t know.”

In any event, the two agents were clearly on a mission. “Right away they started asking about Telegram, which made me worry,” says Durov, explaining that it didn’t take long for his early-morning visitors to get to the point: the FBI wanted to set up some kind of informal backchannel process that would enable Telegram to hand over data on particular users in the event of a terrorist threat; they even came prepared with official-looking documents in hand. “They showed me a court order and told me, ‘We respect your values about privacy and cryptography very much, and we respect what you’re trying to do. But there is terrorism, it is a serious problem and we have a duty to protect society. We hope you understand and share our views. We want to create a process of data exchange so that you can help us when there is a terrorist threat,’” Durov recounted. During the twenty-minute interview, the agents made it clear they hoped that this was just the start of a long and fruitful relationship.

Telegram is registered in the UK as Telegram Messenger LLP, a company owned by two other companies—one in the British Virgin Islands; the other in Belize. Its data is also cut up and spread out over multiple jurisdictions—part of Durov’s master plan that in theory made legal access to user data as difficult as possible. The company had no legal presence in the United States, and so the FBI had no real authority to demand anything from Durov or his company. Durov said he understood that the court order was a ruse—an attempt to get him to cooperate—but he played along and promised that he would get back to the agents after he had Telegram’s legal team look at the document.

Still, Durov says he was a bit shaken by the experience. “In Russia, the FSB guys I’ve interacted with were not impressive. They were of middling ability; not really qualified. In the United States, the FBI is different. The ones who questioned me were competent. They spoke multiple languages. They had done their research, and knew exactly what questions to ask. They were of a high caliber. And I understood that America has so many resources dedicated to security that it is downright scary. Law enforcement in America is so much more efficient.”


The FBI agents went away, but they weren’t done. As Durov tells it, they also had set their sights on a Telegram developer who had flown in for the Google conference, and was also staying at the same Mountain View Airbnb with Durov. (An FBI spokesman declined to discuss any details of Durov’s account with The Baffler.)

This developer had already been stopped and questioned at the airport by agents from the FBI’s cyber division, but the FBI scheduled a follow-up meeting at a San Francisco café. The agents who met the developer there started by peppering him with general questions about Telegram’s architecture and how its encryption algorithm worked, all while lavishing him with praise for his expert knowledge. It didn’t take them long to get to what they really wanted: access, for which they were willing to pay. Durov would not disclose the name of this developer, but he recounted the story that his employee eventually told him. The FBI wanted to work out an arrangement in which the developer would secretly feed its operatives information about Telegram’s inner workings—things like new features and other components of the service’s architecture that they might want to know about. The arrangement would be strictly confidential. “We will make it worth your while,” they said. They said he would be “consulting” for the FBI—a thinly veiled euphemism for what was clearly a pay-off. “The FBI agents gave him a range,” said Durov, munching on a piece of bread. “It was on the order of tens of thousands of dollars.”

After the developer turned down the offer, the FBI met him one more time. This time, the agents asked that he not say a word to anyone about their conversation—and especially not to tell his boss. “They were specific,” said Durov. “Don’t tell Pavel about this, this is our secret.”

He shrugged and smiled. It appeared that the FBI was unable to close the deal. “We pay our developers very well,” he said in a small flourish of managerial self-congratulation. “Our developers are all millionaires. Naturally they can’t be bribed with that kind of offer.”

The FBI trying to turn his own employee into a mole against him? I was expecting Durov to make a big deal out of this disclosure. Silicon Valley companies and crypto privacy types jump at any opportunity to paint themselves as victims of government oppression, and frequently blow up tiny incidents that might redound to their brand advantage in the secrecy wars. Think, for example, of how Apple turned the FBI’s request to unlock a single phone used in a 2015 terrorist attack in San Bernardino that left fourteen people dead into a stand against government oppression—even as the company was also submitting to China’s data demands. (In the end, of course, the FBI got the data it was seeking in the San Bernardino case by using a third-party data hack.) Or there was the recent case of a developer who had worked for Tor, an internet anonymity tool funded by the Pentagon, and fled to Germany after an FBI agent left his business card at her parents’ home.

Given Durov’s libertarian leanings and his proximity to that world, I thought he would start raving against government tyranny—but Durov was surprisingly, almost unnervingly, levelheaded and reasonable about the whole thing. He was troubled and upset by the FBI’s pressure tactics, and pledged to resist all attempts by the agency to get at Telegram’s data. But he wasn’t surprised that it happened, either. After all, that’s what the FBI was there to do. “Basically, the Americans are doing their job. Look at it from their perspective. Here’s a young guy, his app is used by terrorists. We need to find out who he is. What kind of team he has. This is logical. I don’t see anything extraordinary in this,” he said. “I could have gone public with this when it happened and made a big stink. ‘Look at me, look how the Americans are putting the screws on me.’ But I thought it would be a bit pretentious and melodramatic.”

So why make the story public now? Durov says that he’s coming forward to make a bigger point that’s typically lost in the self-dramatizing scripting of Silicon Valley showdowns with the Feds: what happened to Telegram is quite representative of how the government seeks to gain influence over big data services. “I’m raising this issue only to point out that American security agencies are persistent and pushy, and that they’re just carrying out their jobs. They’ll catch up with you at the airport. They show up unannounced at your Airbnb—the address of which no one should know but you. They try to pay off developers. One way or another, the FBI is very carefully doing its job, and they do all this in the span of just a couple of days that my team and I spend in America,” he says.

If the FBI was so persistent and pushy with Telegram—going as far as trying to bribe its employees while they are on a short business trip—then what does the U.S. government do to companies permanently based in America? “I can’t imagine myself or anyone else running a privacy-oriented app in that environment. They may start their information requests with data related to terrorism and then gradually widen it to who knows what.”

Encrypt or Die!

In June 2013, Edward Snowden engineered a data leak heard round the world. An NSA contractor working for the Beltway contracting colossus Booz Allen Hamilton, Snowden blew the whistle on America’s internet surveillance apparatus, and helped shine a light on the symbiotic relationship between Silicon Valley and the U.S. government.

Documents that he stole from an NSA facility in Hawaii provided the first real evidence that our most respected tech companies—including Google, Facebook, and Apple—worked closely with American spies, secretly tapping their own server farms for the NSA and FBI. Snowden’s dramatic leak put the issue of privacy on the internet on the map in a way that it had never been before.

Suddenly, internet privacy was netting daily cable news coverage, Frontline investigations, and Pulitzer prizes. There were anti-surveillance protests, online campaigns, and a flurry of reports by government watchdogs and consumer rights nonprofits. Back in 2013, it seemed like we could be on the verge of a global movement that would galvanize people to push for meaningful privacy laws that would not only curb government surveillance, but put limits on Silicon Valley’s unrestricted data collection practices, as well. But things went a different way.

Now, four years after the Snowden leak, we can see that all that energy and outrage and potential for civic action has been redirected into a narrow band of mass-politics-by-app. The new consensus, bruited loudly in and around Silicon Valley, holds that all we need to do to protect ourselves from surveillance is download whatever crypto chat app is in vogue at the moment, and run it on our iPhones. Instead of finding political and democratic solutions to the government and corporate surveillance crisis plaguing our society, the privacy movement somehow ended up in a libertarian rut. In remarkably short order, online privacy advocates had abandoned the idea that people and politics could change the world for the better, and instead chased something closer to an NRA fantasy: the idea that if everyone was equipped with a crypto weapon powerful enough, they could single-handedly take on both corporations and powerful spy agencies like the NSA. They could use technology to guarantee their own privacy on their own terms.

Edward Snowden himself has been the principal promoter of this idea, never missing an opportunity to tell people that collective politics is useless, and that arming yourself with technology is where it’s at. He shrugged off the for-profit surveillance that powered the businesses of Silicon Valley, pithily telling the Washington Post that “Twitter doesn’t put warheads on foreheads.” Instead, he saw private companies like Apple and Facebook as allies—perhaps the only places that offered even a modicum of safety in the dangerous wilderness of the internet. To him, private developers and software engineers were the true protectors of the people, and he called on them to rise up against government oppression. “If you want to build a better future, you’re going to have to do it yourself. Politics will take us only so far and if history is any guide, they are the least reliable means of achieving the effective change . . . at the end of the day, law is simply letters on a page. They’re not gonna jump up and protect your rights,” he told the audience at Fusion’s 2016 Real Future Fair in Oakland via video-robot link from Moscow. To Snowden—now a leaker-turned-political-philosopher—political movements and collective action were fickle, merely human endeavors that offered no guarantees; encryption and computer technology were a sure thing, based on the laws of math and physics. “Technology works differently than law,” the fugitive leaker told the crowd at the Real Future Fair. “Technology knows no jurisdiction.”

It was an absurd position. Substitute “technology” with “assault rifle” and Snowden’s speech turns into something you’d hear at a Republican CPAC conference. Still, Snowden got a standing ovation at the Real Future Fair. And why not? From the moment Snowden appeared on the scene, his tech-centric worldview has been backed up by a chorus of award-winning journalists, privacy activists, left-leaning think-tankers, and powerful advocacy groups like the Electronic Frontier Foundation and the ACLU. Silicon Valley supported Snowden’s call to arms, as well. A brave new cohort of app developers backed very narrow technological privacy solutions that they claimed would protect their users from government snooping, all while shamelessly tracking these very same users for private profit and gain.

As it happened, Snowden’s call to encryption-arms helped inspire Pavel Durov to build Telegram. “I am far from politics and cannot lobby for a ban on total surveillance,” he wrote in October 2013, a few months after Snowden fled to Moscow and right before Durov in turn had to flee Russia. “But there is something that we as IT-entrepreneurs and programmers can do. We can develop and finance technologies aimed at making total surveillance technically impossible.”

In America, the initial movement to take the anti-surveillance fight to Silicon Valley fizzled and turned into something else that was at once bizarre and pathetic: privacy activists working with Google and Facebook to fight the NSA with privacy technology. This made precisely as much sense as siding with Blackwater (or Xe or Academi or whatever the Pentagon contractor calls itself now) against the U.S. Army. Yet this trend of politics-by-app went into overdrive after Donald Trump was elected president. You saw it everywhere: civil libertarians, privacy advocates, and demoralized liberals arose to proclaim that encryption—even the stuff rolled out by Silicon Valley surveillance giants—was the only thing that could protect us from a totalitarian Trump administration.

“Trump Is President. Now Encrypt Your Email,” urged New York magazine’s technology editor Max Read in an opinion piece published in the New York Times in March. “In the weeks after Donald J. Trump won the election, a schism threatened to break my group of friends in two. Not a political argument brought about by the president-elect, or a philosophical fight over the future of the country, but a question of which app we should be using to chat. . . .” Buzzfeed concurred: “Here’s How To Protect Your Privacy In Trump’s America: Easy tips to shield yourself from expanded government surveillance,” wrote the outlet, offering its millennial readers a listicle guide to “going dark” on the net.

What were these apps? Who made them? Did they really work? That’s where the story got even stranger.

Secrets and Lies

Durov’s involuntary encounters with the FBI drive home one unpleasant fact of life in the big data economy: today’s app-obsessed privacy movement relies almost entirely on crypto tools that were hatched and funded by America’s foreign policy apparatus—a body of agencies and organizations that came out of an old-school Cold War propaganda project run by the CIA.

In 1948, the CIA was given a blank check to wage a full-spectrum “covert operations” program to contain and roll back the spread of communism, starting with the Soviet Union and Eastern Europe. Radio propaganda was a central tool in this covert war of ideas, and the CIA used private front groups to run stations with names like “Radio Liberation from Bolshevism” and “Radio Free Europe.” In the 1950s and 1960s, the agency expanded its radio network to include operations targeting communist, left-leaning, and otherwise suspiciously reformist forces that might be spreading the dread bacillus of Bolshevism through Asia and Latin America.

The idea was to prevent these states from exercising sovereign control over their information space—as well as to dominate and influence people’s ideas in a way that aligned with America’s interests. As far as the CIA was concerned, this sub rosa propaganda operation was a beauty, and the agency still proudly boasts that it remains one of the most successful covert psychological warfare projects ever run by the United States.

Eventually, the CIA’s multi-tentacled propaganda operation shed its covert status, and was transformed by Congress into the Broadcasting Board of Governors, a sister federal agency to the State Department. With a nearly billion-dollar budget, today the BBG operates America’s sprawling foreign propaganda nexus. The American public is only dimly aware of the BBG’s existence, but this media empire leaves almost no corner of the world untouched by satellite, television and radio transmissions. And just as was the case nearly seventy years ago under the CIA, the mission of the BBG is to systematically perpetrate the very same thing that America’s esteemed political establishment is currently accusing Russia of doing: sponsoring news—some of it objective, some wildly distorted—as part of a broader campaign to project geopolitical power.

But there was more. When the internet spread around the world, it became a powerful medium of influence, and the U.S. government moved ruthlessly to exploit its competitive edge against rivals under the banner of “Internet Freedom.” The policy, put into place by Secretary of State Hillary Clinton, was about more than just broadcasting news. Its aim was to weaponize this global communications technology in all sorts of creative ways to weaken rivals, topple unfriendly governments, and support opposition movements from China to Russia and Iran, Syria, and Libya. “The Obama administration is leading a global effort to deploy ‘shadow’ internet and mobile phone systems that dissidents can use to undermine repressive governments that seek to silence them by censoring or shutting down telecommunications networks,” reported the New York Times in 2011, when the Internet Freedom program first got going in a major way.

The effort includes secretive projects to create independent cellphone networks inside foreign countries, as well as one operation out of a spy novel in a fifth-floor shop on L Street in Washington, where a group of young entrepreneurs who look as if they could be in a garage band are fitting deceptively innocent-looking hardware into a prototype ‘Internet in a suitcase.’ . . . The suitcase could be secreted across a border and quickly set up to allow wireless communication over a wide area with a link to the global Internet.

This was just the beginning. Over the next several years, the BBG, backed by the State Department, expanded the Internet Freedom initiative into a $50 million a year program funding hundreds of projects targeting countries across the world—China, Cuba, Vietnam, and Russia. And here things, yet again, took a turn for the surreal: the Internet Freedom apparatus was designed to project power abroad—yet it also emerged as the primary mover and shaker in America’s domestic privacy movement. It funded activists and privacy researchers, worked with the EFF and ACLU and even companies like Google. Wherever you looked, privacy tools funded by this agency dominated the scene. That included the most ardently promoted privacy products now on offer: Tor, the anonymous internet browsing platform that powers what’s known as the “dark web,” and Signal, the chat app championed by Edward Snowden. Both of them took in millions in government cash to stay afloat.

From a Whisper to a Scream

When Pavel Durov first had VKontakte taken away from him by the Kremlin and fled Russia, he was hailed in the West as a hero—a modern-day Sakharov who fought for freedom and paid the price with his business. America’s crypto and privacy community embraced him, too. But it did not take long for the relationship to sour—and the chief culprit was Signal, a crypto mobile phone app built by a small opaque company called Open Whisper Systems, aka Quiet Riddle Ventures LLC.

Invented by a self-styled radical cryptographer who goes by the name of Moxie Marlinspike (although his real name may or may not be Matthew Rosenfeld or Mike Benham), Signal was brought to life with funding from the BBG-supported Open Technology Fund (which has pumped in almost $3 million since 2013), and appears to rely on continued government funding for survival. Despite the service’s close ties to an organization spun off from the CIA, the leading lights of America’s privacy and crypto community back the app. “I use Signal every day. #notesforFBI,” Snowden tweeted out to legions of followers who went out and downloaded the app en masse. Marlinspike leveraged Snowden’s praise to the max, featuring the leaker’s endorsement prominently on his company’s website: “Use anything by Open Whisper Systems.”

Largely thanks to Snowden’s endorsement and support, Signal has become the go-to encrypted chat app among American journalists, political organizers, and activists—from anarchists to Marxists to Black Lives Matter. These days, it’s also the secure planning app of first resort for opposition rallies targeting Trump. The app’s even made major inroads into Silicon Valley, with Marlinspike working with management at Facebook and Google to get them to adopt the chat app’s encryption architecture into their mobile chat programs, including WhatsApp. Not surprisingly, Facebook’s adoption of Signal into its WhatsApp program won plaudits from the BBG; managers at the propaganda shop boasted that government-funded privacy tools were now going to be used by a billion people.

Despite Open Whisper’s continued ties to the U.S. government, leading lights of America’s privacy and crypto community have taken to warning off people from using anything else. That includes Telegram, which deploys a custom-built cryptographic technique designed by Pavel Durov’s brother, Nikolai, a mathematician. Even Snowden has taken it upon himself to shoo people away from Telegram, advising political activists, journalists, dissidents, whistleblowers—in short, everyone—to use Signal or even Facebook’s WhatsApp instead. “By default, it is less safe than @WhatsApp, which makes [it] dangerous for non-experts,” he tweeted in response to a question from a Telegram-curious supporter.

But for an app designed to hide people from the prying eyes of the U.S. government, Signal’s architecture has given some security and crypto experts pause. Its encryption algorithm is supposed to be flawless, but the app’s backend runs as a cloud service on Amazon, which is itself a major CIA contractor. The program also requires that users connect the app to a real mobile phone number and give access to their entire address book—strange behavior for an app that is supposed to hide people’s identities. Signal also depends on Google and Apple to deliver and install the app on people’s phones, and both of those companies are surveillance partners of the NSA. “Google usually has root access to the phone, there’s the issue of integrity. Google is still cooperating with the NSA and other intelligence agencies,” wrote Sander Venema, a developer who trains journalists on security. “I’m pretty sure that Google could serve a specially modified update or version of Signal to specific targets for surveillance, and they would be none the wiser that they installed malware on their phones.” And given Signal’s narrow marketing to political activists and journalists, the app works like a flag: it might encrypt messages, but it also tags users as people with something to hide—a big fat sign that says: “WATCH ME, PLEASE.”

And anyway, Signal or no Signal, if your enemy was the United States government, it didn’t really matter what crypto app you used. A recent dump of CIA hacking-tool documents published by WikiLeaks revealed that the agency’s Mobile Devices Branch has developed all sorts of goodies to grab phone data, even when it’s quarantined by the firewalls of apps like Signal and WhatsApp or even Telegram. “These techniques permit the CIA to bypass the encryption of WhatsApp, Signal, Telegram, Wiebo, Confide, and Cloackman by hacking the ‘smart’ phones that they run on and collecting audio and message traffic before encryption is applied,” wrote WikiLeaks.

Durov admitted that cryptography has its limits. Still, as he recounted how Snowden had talked down Telegram, Durov was frustrated and bewildered. He says he and his brother were very cautious about choosing cryptography techniques promoted by American experts—particularly since the NSA docs leaked by Snowden revealed the NSA secretly paid RSA, an influential computer security firm, to use a flawed technique that the NSA knew how to crack. The Durov brothers wondered if the same thing could now be happening with other popular encryption algorithms. They became even more concerned when Telegram began to draw public attacks on social media from American cryptography experts. “They based their criticism of our approach not on any actual weakness, but solely on the fact that we didn’t use the algorithms they were promoting,” he said. “Since they failed to engage in any meaningful conversation on cryptography, we started to realize there was some other agenda they were pushing rather than finding truth or maximizing security.”

But the attacks continued. Not only were Snowden and his crypto allies telling people to trust Facebook, a company that runs on surveillance and partners with the NSA; they were also promoting an app that was actively funded by the foreign policy wing of the U.S. national security state. It just didn’t make any sense.

Durov was dumbfounded. As we sat talking, he told me he could not understand how people could trust a supposedly anti-government weapon that was being funded by the very same U.S. government it was supposed to protect its users from.

I told him that I shared his bewilderment. Throughout all my reporting on this set of crypto radicals funded by a CIA spinoff, I asked a simple question that no one could properly answer: If apps like Signal really posed a threat to the NSA’s surveillance power, why would the U.S. government continue to fund them? I couldn’t help but think of how this alignment of government and corporate power would have been received among the tech and media establishment in the United States had something similar taken place in the former Soviet Union: imagine if the KGB funded a special crypto fax line and told Aleksandr Solzhenitsyn and dissident samizdat writers to use it, promising that it was totally shielded from KGB operatives. Then imagine that Solzhenitsyn would not only believe the KGB, but would tell all his dissident buddies to use it: “It’s totally safe.” The KGB’s efforts would be mercilessly ridiculed in the capitalist West, while Solzhenitsyn would be branded a collaborator at worst, or a stooge at best. Ridiculous as this fusion of tech and state interests under the rubric of dissidence is on the face of things, in America this plan can somehow fly.

As I laid out this analogy, Durov nodded in agreement. “I don’t think it’s a coincidence that we both understand how naïve this kind of thinking is, and that we were both born in the Soviet Union.”

Trusting the Force

Political agreement wasn’t exactly what I was expecting when I prepared to meet with Pavel Durov. From what I had read in the press, our politics and view of the world could not be further apart. He was a libertarian, a guy who threw 5,000-ruble notes down at pedestrians just to watch them scramble and fight to pick them up, someone who tweeted out that Hitler and Stalin were no different on the day that people across the former Soviet Union celebrated their victory over Nazi Germany.

Still, on a personal level, he was likeable and even humble. For someone in the crypto world, he was also unexpectedly realistic about the limits of cryptography, displaying none of the cult-like belief in technology that you see in America’s privacy movement. But there was something else as well: he was a fighter.

Begin with the simple fact that he was publicly coming out to detail the FBI’s attempt to bribe his team and pressure Telegram into secretly working with the agency—despite Durov’s own disclaimers and efforts to downplay the revelation, it was a big deal. Despite being chased out of Russia, he wasn’t throwing in with the U.S. security apparatus, but electing instead to fight a two-front war. It was an unusual and impressive move. Most people who run afoul of politics in Russia and find themselves seeking safety in the West as modern-day dissidents usually fall into line with the West’s own propaganda aims, uncritically siding with American interests and players, no matter how unpleasant. Think Pussy Riot fleeing Russia and criticizing Vladimir Putin, while doing photo ops with Secretary of State Hillary Clinton.

As for his cryptography, well, there’s no assurance that Telegram will prove to be more secure than its Silicon Valley rivals. Then again, there’s no way that the West’s spy-funded, profit-driven quest for online privacy can yield any reasonable approximation of the real thing, either.

In our post-Snowden world, we have outsourced our privacy politics to crypto apps. By doing so, we’ve entered a paranoid game theory nightmare world—a place where regular people have no true power and must put their faith in the people and organizations stoking the algorithms that make this crypto tech. In the end, it all comes down to trust. But can any of these people and organizations be really trusted? The young Russian mogul on the skids with the Kremlin? The former American spy-for-hire on the run and hiding out in Russia? Boutique crypto apps funded by the regime change wing of the State Department? Google and Facebook, who partner with the NSA?

Confused? Don’t know who to trust? Well, that’s the state of our privacy movement today.

SpaceX’s Falcon 9 rocket at launchpad

SpaceX’s Falcon 9 rocket has delivered 10 satellites to low-Earth orbit for Iridium, a global leader in mobile voice and data satellite communications.

The 10 satellites are the first of at least 70 satellites that SpaceX will be launching for Iridium’s next generation global satellite constellation, Iridium NEXT.

SpaceX launched Iridium-1 from Space Launch Complex 4E at Vandenberg Air Force Base in California. The instantaneous launch window opened on January 14 at 9:54:39 am PST (5:54:39 pm UTC). The satellites began deployment about an hour after launch.

Falcon 9 first stage has landed on Just Read the Instructions

Falcon 9 first stage has landed on Just Read the Instructions.

Exclusive: Privacy campaigners criticise WhatsApp vulnerability as a ‘huge threat to freedom of speech’ and warn it could be exploited by government agencies


A security vulnerability that can be used to allow Facebook and others to intercept and read encrypted messages has been found within its WhatsApp messaging service.

Facebook claims that no one can intercept WhatsApp messages, not even the company and its staff, ensuring privacy for its billion-plus users. But new research shows that the company could in fact read messages due to the way WhatsApp has implemented its end-to-end encryption protocol.

Privacy campaigners said the vulnerability is a “huge threat to freedom of speech” and warned it could be used by government agencies as a backdoor to snoop on users who believe their messages to be secure.

WhatsApp has made privacy and security a primary selling point, and has become a go-to communications tool of activists, dissidents and diplomats.

WhatsApp’s end-to-end encryption relies on the generation of unique security keys, using the acclaimed Signal protocol, developed by Open Whisper Systems, that are traded and verified between users to guarantee communications are secure and cannot be intercepted by a middleman.

However, WhatsApp has the ability to force the generation of new encryption keys for offline users, unbeknown to the sender and recipient of the messages, and to make the sender re-encrypt messages with new keys and send them again for any messages that have not been marked as delivered.

The recipient is not made aware of this change in encryption, while the sender is only notified if they have opted-in to encryption warnings in settings, and only after the messages have been re-sent. This re-encryption and rebroadcasting effectively allows WhatsApp to intercept and read users’ messages.

The security loophole was discovered by Tobias Boelter, a cryptography and security researcher at the University of California, Berkeley. He told the Guardian: “If WhatsApp is asked by a government agency to disclose its messaging records, it can effectively grant access due to the change in keys.”

The vulnerability is not inherent to the Signal protocol. Open Whisper Systems’ messaging app, Signal, the app used and recommended by whistleblower Edward Snowden, does not suffer from the same vulnerability. If a recipient changes the security key while offline, for instance, a sent message will fail to be delivered and the sender will be notified of the change in security keys without automatically resending the message.

WhatsApp’s implementation automatically resends an undelivered message with a new key without warning the user in advance or giving them the ability to prevent it.
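The difference between the two delivery behaviours described above can be sketched in a toy model. This is an illustrative simplification, not the actual Signal protocol (real sessions involve ratcheting key material and prekeys); the class and function names here are invented for the sketch.

```python
# Toy model of the key-change retransmission behaviour.
# "Encryption" is modelled as simply tagging a plaintext with a key ID.

class Recipient:
    def __init__(self, key):
        self.key = key  # current security key (changes on reinstall, or if forced)

def encrypt(message, recipient_key):
    """Pretend to encrypt a message for the given key."""
    return {"ciphertext": message, "key": recipient_key}

def deliver_whatsapp(sender_known_key, recipient, message):
    """WhatsApp-style: if the recipient's key changed while the message
    was in transit, silently re-encrypt with the new key and resend."""
    packet = encrypt(message, sender_known_key)
    if packet["key"] != recipient.key:
        # Key changed: re-encrypt and resend automatically, without
        # warning the sender in advance.
        packet = encrypt(message, recipient.key)
    return packet  # message is delivered either way

def deliver_signal(sender_known_key, recipient, message):
    """Signal-style: if the key changed, delivery fails and the sender
    is notified of the change; nothing is resent automatically."""
    packet = encrypt(message, sender_known_key)
    if packet["key"] != recipient.key:
        return None  # blocked; sender must verify the new key first
    return packet

# Alice last saw Bob's key as "key-v1"; the server reports "key-v2".
alice_knows = "key-v1"
bob = Recipient("key-v2")

assert deliver_whatsapp(alice_knows, bob, "hi")["key"] == "key-v2"  # resent silently
assert deliver_signal(alice_knows, bob, "hi") is None               # delivery blocked
```

The point of the sketch is the control flow, not the crypto: whoever controls which key the server reports can, under the WhatsApp-style branch, cause undelivered messages to be re-encrypted to a key of their choosing.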

Boelter reported the vulnerability to Facebook in April 2016, but was told that Facebook was aware of the issue, that it was “expected behaviour” and wasn’t being actively worked on. The Guardian has verified the loophole still exists.

Steffen Tor Jensen, head of information security and digital counter-surveillance at the European-Bahraini Organisation for Human Rights, verified Boelter’s findings. He said: “WhatsApp can effectively continue flipping the security keys when devices are offline and re-sending the message, without letting users know of the change till after it has been made, providing an extremely insecure platform.”

Boelter said: “[Some] might say that this vulnerability could only be abused to snoop on ‘single’ targeted messages, not entire conversations. This is not true if you consider that the WhatsApp server can just forward messages without sending the ‘message was received by recipient’ notification (or the double tick), which users might not notice. Using the retransmission vulnerability, the WhatsApp server can then later get a transcript of the whole conversation, not just a single message.”

The vulnerability calls into question the privacy of messages sent across the service, which is used around the world, including by people living in oppressive regimes.

Professor Kirstie Ball, co-director and founder of the Centre for Research into Information, Surveillance and Privacy, called the existence of a vulnerability within WhatsApp’s encryption “a gold mine for security agencies” and “a huge betrayal of user trust”. She added: “It is a huge threat to freedom of speech, for it to be able to look at what you’re saying if it wants to. Consumers will say, I’ve got nothing to hide, but you don’t know what information is looked for and what connections are being made.”

In the UK, the recently passed Investigatory Powers Act allows the government to intercept bulk data of users held by private companies, without suspicion of criminal activity, similar to the activity of the US National Security Agency uncovered by the Snowden revelations. The government also has the power to force companies to “maintain technical capabilities” that allow data collection through hacking and interception, and requires companies to remove “electronic protection” from data. Intentional or not, the vulnerability in WhatsApp’s end-to-end encryption could be used in such a way as to facilitate government interception.

Jim Killock, executive director of Open Rights Group, said: “If companies claim to offer end-to-end encryption, they should come clean if it is found to be compromised....In the UK, the Investigatory Powers Act means that technical capability notices could be used to compel companies to introduce flaws – which could leave people’s data vulnerable.”

A WhatsApp spokesperson told the Guardian: “Over 1 billion people use WhatsApp today because it is simple, fast, reliable and secure. At WhatsApp, we’ve always believed that people’s conversations should be secure and private. Last year, we gave all our users a better level of security by making every message, photo, video, file and call end-to-end encrypted by default. As we introduce features like end-to-end encryption, we focus on keeping the product simple and take into consideration how it’s used every day around the world.

“In WhatsApp’s implementation of the Signal protocol, we have a “Show Security Notifications” setting (option under Settings > Account > Security) that notifies you when a contact’s security code has changed. We know the most common reasons this happens are because someone has switched phones or reinstalled WhatsApp. This is because in many parts of the world, people frequently change devices and Sim cards. In these situations, we want to make sure people’s messages are delivered, not lost in transit.”

Asked to comment specifically on whether Facebook/WhatsApp had accessed users’ messages and whether it had done so at the request of government agencies or other third parties, it directed the Guardian to its site that details aggregate data on government requests by country.

WhatsApp later issued another statement saying: “WhatsApp does not give governments a ‘backdoor’ into its systems and would fight any government request to create a backdoor.”

Concerns over the privacy of WhatsApp users have been repeatedly highlighted since Facebook acquired the company for $22bn in 2014. In August 2016, Facebook announced a change to the privacy policy governing WhatsApp that allowed the social network to merge data from WhatsApp users and Facebook, including phone numbers and app usage, for advertising and development purposes.

Facebook halted the use of the shared user data for advertising purposes in November after pressure from the pan-European data protection agency group Article 29 Working Party in October. The European commission then filed charges against Facebook for providing “misleading” information in the run-up to the social network’s acquisition of messaging service WhatsApp, following its data-sharing change.

The Finnish basic income experiment is being conducted among 2,000 persons between ages 25 and 58, who will receive a monthly basic income of €560 for two years. They will receive their first basic income payments from Kela today.


Those participating in the experiment will be paid a basic income from 1 January 2017 until 31 December 2018. The payments are made on the second business day of each month, with the exception of January 2017, when the payment date is 9 January 2017.

The amount of the basic income will remain the same throughout the experiment. The basic income is also not reduced by any earned income that the participants may have. Participants who find work during the experiment continue to be paid a basic income.

According to Marjukka Turunen, head of Kela's legal affairs unit, one of the most common questions is whether it is really true that you can keep the basic income even if you work part-time, for instance. "When we reply that the purpose of the basic income is specifically to encourage recipients to seek employment, we get a very positive reaction."

Have you been selected to participate in the basic income experiment? Please keep in mind the following:

If you have been selected for the experiment, please report any benefits or restrictions that hinder payment of the basic income. Payment of the basic income will be interrupted, for example, if you start military service, are granted a pension or move abroad.

During the experiment, the participants will be monitored in terms of their success in finding employment. However, the monitoring will primarily be based on register data. It is not necessary to notify Kela if you receive a basic income and find work during the experiment. The basic income is tax free and does not affect the amount of your taxable income.

Other social benefits will be governed by the same rules as before. According to Marjukka Turunen, if someone finds work during the experiment, any earnings they have will affect for example the general housing allowance and social assistance. People may lose their entitlement to such benefits if they find work and their earnings exceed a certain limit.

The basic income replaces unemployment benefits either partially or completely. For example, if the unemployment benefit, including increases for children, is larger than the basic income, the difference between the unemployment benefit and the basic income is paid to the basic income recipient. The qualifying conditions for labour market subsidy and basic unemployment allowance payments remain the same, as do the practical steps required when receiving unemployment benefits. This means that anyone receiving unemployment benefits along with a basic income will need to complete unemployment status reports and submit them to Kela. Participation in the basic income experiment need not be indicated in the unemployment status report.
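The top-up rule above can be illustrated with a short sketch. The €560 monthly basic income is from the experiment itself; the benefit amounts in the example are invented for illustration.

```python
BASIC_INCOME = 560.0  # monthly basic income in the Finnish experiment, EUR

def unemployment_top_up(unemployment_benefit: float) -> float:
    """If the unemployment benefit (including increases for children)
    exceeds the basic income, the difference is paid on top of it;
    otherwise the basic income fully replaces the benefit."""
    return max(unemployment_benefit - BASIC_INCOME, 0.0)

# Hypothetical figures: a benefit of EUR 700/month yields a EUR 140 top-up,
# while a EUR 500/month benefit is fully covered by the basic income.
assert unemployment_top_up(700.0) == 140.0
assert unemployment_top_up(500.0) == 0.0
```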

Additional information:

The Year 2017 is declared the “AÑO DE LAS ENERGÍAS RENOVABLES” (Year of Renewable Energies)


Decree 9/2017

Official documentation. Legend.

Buenos Aires, 03/01/2017


Whereas, since the enactment of Law No. 27,191, which amended Law No. 26,190 establishing the “Régimen de Fomento Nacional para el Uso de Fuentes Renovables de Energía Destinada a la Producción de Energía Eléctrica” (National Promotion Regime for the Use of Renewable Energy Sources for the Production of Electric Power), a process has been under way aimed at raising the contribution of renewable energy sources to TWENTY PERCENT (20%) of national electricity consumption by December 31, 2025.

Whereas, likewise, the pursuit of energy diversification through clean sources has become a state policy, in line with the commitments assumed by the ARGENTINE REPUBLIC upon adopting the “Paris Agreement”, concluded within the framework of COP21 and approved by Law No. 27,270.

Whereas, in this regard, harnessing renewable energy resources for both electricity generation and thermal uses offers numerous benefits, including, first, the reduction of emissions of greenhouse gases and other harmful gases associated with the use of fossil fuels; second, energy security, since the country does not depend on others for its energy supply; and, finally, the creation of skilled local jobs in installation as well as in the manufacture of components and equipment and the provision of maintenance services.

Whereas, during 2016, numerous actions were taken to encourage large-scale investment in the renewable energy sector. The main actions were grouped under the so-called RenovAr Program, through which 59 projects were awarded that, once installed, will contribute an amount of electricity equivalent to 6% of national demand.

Whereas, along these lines, during 2017 progress is expected on the works needed to begin drawing on new sources of clean, environmentally friendly energy, which are indispensable for the country’s development.

Whereas, by virtue of the foregoing recitals, it is fitting to highlight and publicize during 2017 the importance of the use of renewable energies for the country’s development in a sustainable environment.

Whereas this measure is issued pursuant to the powers conferred by Article 99, paragraph 1, of the CONSTITUCIÓN NACIONAL.

Therefore,


ARTICLE 1 — The Year 2017 is hereby declared the “AÑO DE LAS ENERGÍAS RENOVABLES” (Year of Renewable Energies).

ARTICLE 2 — During the Year 2017, all official documentation of the National Public Administration, both centralized and decentralized, as well as of its dependent autonomous entities, shall bear the legend “2017 - AÑO DE LAS ENERGÍAS RENOVABLES”.

ARTICLE 3 — Pursuant to Article 1 hereof, the National Executive Branch shall sponsor activities, seminars, conferences, and educational programs that help disseminate throughout the country the various aspects of the development and use of renewable energies.

ARTICLE 4 — The Provincial Governments and the Government of the AUTONOMOUS CITY OF BUENOS AIRES are invited to adhere to this decree.

ARTICLE 5 — Let it be communicated, published, delivered to the DIRECCIÓN NACIONAL DEL REGISTRO OFICIAL (National Directorate of the Official Registry), and filed. — MACRI. — Marcos Peña.

Date of publication: 04/01/2017

Hayao Miyazaki

Miyazaki says he would "never wish to incorporate this technology" into his work.

Vikram Murthi


Studio Ghibli director Hayao Miyazaki, responsible for such animation classics as “Spirited Away” and “My Neighbor Totoro,” recently came out of retirement for a new short film entitled “Boro the Caterpillar” and an untitled feature film. Before audiences around the world see new work from the legendary filmmaker, they can watch him articulate his principles regarding animation. In a widely-circulated clip from NHK’s documentary series “NHK Special: Hayao Miyazaki — The One Who Never Ends,” Miyazaki lays down his opinions on AI animation, i.e. animation created by artificial intelligence. Watch the clip below.

After seeing a brief demo of a grotesque zombie-esque creature, Miyazaki pauses and says that it reminds him of a friend of his with a disability so severe he can’t even high five. “Thinking of him, I can’t watch this stuff and find [it] interesting. Whoever creates this stuff has no idea what pain is whatsoever. I am utterly disgusted. If you really want to make creepy stuff, you can go ahead and do it. I would never wish to incorporate this technology into my work at all. I strongly feel that this is an insult to life itself.”

Near the end of the clip, after hearing that the animators’ goal is to create a machine that “draws pictures like humans do,” Miyazaki’s comments are even more grim. “I feel like we are nearing to the end of the times. We humans are losing faith in ourselves…”

Miyazaki’s “Boro the Caterpillar” will premiere at the Studio Ghibli museum in Mitaka, Tokyo by June or July 2017.

Early evidence suggests substantial health dividends


For four years in the mid-1970s an unusual experiment took place in the small Canadian town of Dauphin. Statistically significant benefits for those who took part included fewer physician contacts related to mental health and fewer hospital admissions for “accident and injury.” Mental health diagnoses in Dauphin also fell. Once the experiment ended, these public health benefits evaporated.1 What was the treatment being tested? It was what has become known as a basic income—a regular, unconditional payment made to each and every citizen. This groundbreaking experiment, an early randomised trial in the social policy sphere, ran out of money before a full statistical analysis could be completed, after a loss of political interest.

The link between inequality and poor health outcomes is long established.2 The actual mechanisms behind that link are less understood. The data from the Dauphin study, re-examined by a team from the University of Manitoba in the 2000s, suggest there might be an association between income insecurity and poorer health.1 All adults in Dauphin earning below $13 800 (£11 000; €13 000) were eligible for the grant of $4800 a year. The researchers compared Dauphin with other similar towns and looked for relative improvements in outcomes using public health and schooling data from the time.

Recently, there have been increasing calls for dialogue on a universal basic income (UBI) from political parties, think tanks (including the Royal Society for the Encouragement of Arts, Manufactures, and Commerce (RSA)), civic activists, trade unions, and leading entrepreneurs such as Tesla chief executive Elon Musk. These calls are a response to growing income insecurity, a sense that welfare systems may be failing, and the potential effects of automation and artificial intelligence on employment prospects in industries that might be better served by machines.3 UBI-style pilots are planned in Finland, the Netherlands, and Canada as a potential answer to these questions and concerns.4

While the Dauphin study included just the poorest residents of one small city, if we assume that it indicates a causal link between extra cash and better health then three effects could have been in play. Firstly, the cash sum itself would have reduced economic inequality directly. Secondly, the unconditional nature of the payment could have reduced income insecurity. Thirdly, there is a positive social multiplier whereby positive behaviours associated with greater financial security tend to reinforce one another—for example, more teenagers staying on in school because they see their peers doing likewise. Taken together, these effects could mean that financial insecurity is a key vector through which inequality worsens health outcomes for the least advantaged. It is certainly a serviceable hypothesis.

Dauphin was not an isolated study. A little known, unintentional, basic income pilot took place in North Carolina during the 1990s. Four years into a longitudinal comparative mental health study of Cherokee American Indian and non-American Indian children from ages 9 to 16, a casino was built on Cherokee land. As part of the deal, all Cherokee Indian adults received a share of the profits—roughly $4000 per year each.

The results were again striking. Children whose families received the payments showed significantly better emotional and behavioural health by age 16 relative to their non-tribal peers, who did not receive payments. Parents also reported that the drug and alcohol intake of their partners decreased after the payments began.5 These reported changes among adults were uncontrolled observations, but the researchers noted no other major policy changes during the study.

Mullainathan and Shafir describe a process of cognitive “bandwidth scarcity” whereby scarcity of resources impedes sound decision making with clear potential for negative health outcomes.6 The Canadian and North Carolina case studies suggest that bandwidth scarcity could be confronted through an unconditional universal basic income. Complex systems of tax credits and social security, such as currently used in the UK, send confusing signals, not least through poorly understood and sometimes arbitrary conditions and welfare sanctions that create new hardships for recipients.

Health professionals should be concerned. The evidence suggests that a universal basic income could help improve recipients’ mental and physical health. The RSA has already called for a trial of a universal basic income in the UK.7 It would give people a better foundation and greater control over their lives in and out of work. Failure to test this promising intervention in a rigorous way would be a failure of government and a missed opportunity to invest in the health and wellbeing of an increasingly insecure and unequal society.


Competing interests: I have read and understood BMJ policy on declaration of interests and declare that the RSA advocates for universal basic income in the UK.

Provenance and peer review: Commissioned; not externally peer reviewed.


  1. ↵Forget EL. The town with no poverty: using health administration data to revisit outcomes of a Canadian guaranteed annual income field experiment. 2011.
  2. ↵Pickett K, Wilkinson R. The spirit level: why more equal societies almost always do better. 2009.
  3. ↵Ford M. Rise of the robots: technology and the threat of a jobless future. Oneworld Publications, 2015.
  4. ↵Kela. Experimental study on a universal basic income.
  5. ↵Akee R, Simeonova E, Costello EJ, Copeland W. How does household income affect child personality traits and behaviors? NBER working paper No 21562. 2015.
  6. ↵Mullainathan S, Shafir E. Scarcity: why having too little means so much. Allen Lane, 2013.
  7. ↵RSA. Creative citizen, creative state—the principled and pragmatic case for a universal basic income. 2015.

The OnChip Open-V microcontroller is a completely free (as in freedom) and open source 32-bit microcontroller based on the RISC-V architecture.


The Open-V has a host of built-in peripherals you’d expect of any modern microcontroller and was designed to compete with the capabilities of ARM M0-based microcontrollers. This crowdfunding campaign will bring the Open-V into mass production and make it widely available to anyone. If you love hacking on embedded controllers, breaking down closed-source barriers, having the freedom to learn how things work even down to the transistor level, or have dreamed of spinning your own silicon, then this campaign is for you and we need your help!

First Mass-produced RISC-V Chip

The Reduced Instruction Set Computing (RISC) paradigm has been around for decades and many processors fall into this category. RISC-V (pronounced “risk-five”) is a particular implementation of RISC concepts as an open source instruction set architecture (ISA). Although the RISC-V standard has been around (and evolved) since 2010, it has never been used in a chip available on the open market. Some chips have been manufactured using RISC-V, but they have been relegated to the research lab and academia. We’re going to change that. With the OnChip Open-V, for the first time ever, you will be able to purchase a RISC-V-based chip and use it in real projects and products. This isn’t just a one-time thing - we’re planning on keeping the Open-V in production for as long as there is demand. Our initial manufacturing run will produce approximately 70,000 chips.

An Open-V bare die

Free and Open Source Silicon

We’ve open sourced all files for the entire OnChip Open-V design, including the register-transfer level (RTL) files for the CPU and all peripherals and the development and testing tools we use. These sources are available under the MIT license from our GitHub account.

We think open source integrated circuit (IC) design will give the semiconductor industry the reboot it needs to get out of the deep innovation rut dug by the entrenched players. Just like open source software ushered in the last two decades of software innovation, open source silicon will unleash a flood of hardware innovation. The Open-V microcontroller is one concrete step in that direction.

With open silicon, you can:

  • completely understand how your hardware works
  • build customized ICs without reinventing the wheel
  • optimize performance by tuning parameters usually hidden
  • teach a new generation of engineers with a real-world example
  • debug and even correct errors in a chip without waiting for the vendor
  • reduce costs by cutting out licensing fees

A bare Open-V die wire bonded to our OSH Park test board

Chips & Development Boards

In this campaign, you can order the Open-V chips themselves and a development board we’ve designed around the Open-V chip.

Open-V Chip Specifications

  • Package
    • QFN-32
    • No other packages are planned for the first run
  • Processor
    • RISC-V ISA version 2.1
    • 1.2 V operation
  • Memory
    • 8 KB SRAM
  • Clock
    • 32 kHz to 160 MHz
    • Two PLLs, user-tunable with muxers and frequency dividers
    • Includes all clocking and bias circuitry
  • Analog Signals
    • Two 10-bit ADC channels, each running at up to 10 MS/s
    • Two 12-bit DAC channels
  • Timers
    • One general-purpose 16-bit timer
    • One 16-bit watchdog timer (WDT)
  • General Purpose Input/Output
    • 16 programmable GPIO pins
    • Two external interrupts
  • Interfaces
    • SDIO port (e.g., microSD)
    • Two SPI ports
    • I2C
    • UART
  • Programming and Testing
    • Built-in debug module for use with gdb and JTAG
    • Programmable PRBS-31/15/7 generator and checker for interconnect testing
    • Compatible with the Arduino IDE

Open-V Dev Board Specifications

The dev board comes completely assembled.

  • USB 2.0 controller
  • 1.2 V and 3.3 V voltage regulators
  • Clock reference
  • Breadboard-compatible breakout header pins
  • microSD receptacle
  • Micro USB connector (power and data)
  • JTAG connector
  • 32 KB EEPROM
  • 32-pin QFN Open-V microcontroller
  • Dimensions: 55 mm x 30 mm (excluding USB receptacle)

Render of the Open-V dev board

Arduino Compatibility

The Open-V core is Arduino-compatible, which means you will benefit from the abundant resources of the Arduino community. As we make progress toward manufacturing the first batch of Open-V chips, we will release demos showing how Open-V can be used with the Arduino toolchain and other resources.

Of course, the Open-V chip can be used completely independently from the Arduino ecosystem. For example, the RISC-V ecosystem is rapidly growing and immediately applicable for Open-V development.


Many commercial microcontrollers feature proprietary, licensed instruction sets. Licensed instruction sets and microprocessor cores restrict the process of modifying the core for different purposes such as improving performance and adapting it to specific applications.

RISC-V is a new open instruction set architecture (ISA) designed by the Berkeley Architecture Group to support architecture research and education. RISC-V is fully available to the public and offers advantages such as a small footprint, support for highly parallel multi-core implementations, variable-length instructions that enable an optional dense instruction encoding, ease of implementation in hardware, and energy efficiency.

Moreover, because the Open-V and RISC-V are open, a curious person can read and modify the register-transfer level (RTL) code and propose changes to enhance the Open-V's performance. A maker can explore the architecture by testing the RTL core on FPGAs or in simulators. Researchers have already made some interesting RISC-V-based chips unrelated to the Open-V.


The OnChip Open-V is the first microcontroller featuring both an open source CPU and open source peripherals. The glue between the CPU and peripherals (i.e., the buses) is also open source, both the specification and the actual implementation. Currently, we have ADC, DAC, SPI, I2C, UART, GPIO, PWM, and timer peripherals designed and tested in real silicon. We are working on other peripherals, such as USB 2, USB 3, internal NVRAM and/or EEPROM, and a convolutional neural network (CNN) accelerator.

The Importance of Buses

The Open-V microcontroller uses several portions of the Advanced Microcontroller Bus Architecture (AMBA) open standard for on-chip interconnection. This makes any Open-V functional block, such as the core or any of the peripherals, easy to incorporate into existing chip designs that also use AMBA. We hope this will motivate other silicon companies to release RISC-V-based microcontrollers using the peripherals they’ve already developed and tested with ARM-based cores.

We think buses are so important, we even wrote a paper about them for IEEE LASCAS 2016.

Chip Comparison

Power and area simulations show that a RISC-V architecture like that used in the Open-V can replace ARM M0+ microcontrollers with similar performance. The table below shows some comparable chips and how they stack up.

Chip comparison table

* Dynamic power measurement condition for Open-V: three while loops executed from SRAM, all peripheral clocks disabled, VDD = 1.2 V, temperature = 25 °C

Non-volatile Memory

Notably absent from the Open-V specifications is an internal non-volatile memory. The dev board will ship with an external 32 KB EEPROM, but the Open-V chip itself currently does not have an internal non-volatile memory. We are working to change that and hope to include non-volatile memory in the first batch of Open-V chips, but there are challenges yet to overcome.

Integrating non-volatile memory in a standard CMOS process is a challenge: the intellectual property is controlled by only a few companies and the literature is sparse. Likewise, an NVRAM IP license for pure CMOS has been difficult to obtain. A handful of companies control this market, their licensing offices are not friendly, and they have not yet responded to us as we would hope.

EEPROM would require additional masks, increasing the cost of both the mask set and the wafers. To stay truly open, we have designed our own NVRAM, which we aim to test in the MPW run in March 2017 (see schedule below). We are working to make our NVRAM design large enough to hold at least a bootloader capable of loading data from an external EEPROM and/or SD card.

Our wire bonding machine at work

Changing the Semiconductor Industry

We have over 30 years of cumulative experience designing chips in the semiconductor industry and we believe the industry can do better. We aim to open the door to chip design to a larger community and in so doing drive down the cost of innovation.

The semiconductor industry is today where the operating system and compiler industry was several decades ago: knowledge is concentrated in a small number of specialists using highly inefficient tools whose configuration and parameter tweaking alone take up the vast majority of the time and effort expended. We need to move away from a world in which a team of US- and India-based engineers costs upward of $100,000 per day.

The problem is simple: the people supposedly solving problems in the semiconductor industry are not themselves users of their own solutions. For example, rarely does a chip designer know the end application of the chip they are designing or the company who commissioned the chip in the first place. Similarly, the creators of electronic design automation (EDA) tools (the software used to design new chips) aren’t in the business of designing chips.

The solution is equally simple: spread the knowledge of chip design to those people who actually need a new chip to solve a real problem. The best way to do this is by open sourcing everything from the ground up. This will spur innovation and force existing players to be more competitive.

The OnChip Open-V is a concrete step in this direction. With the Open-V, for the first time, open silicon will be widely available. We hope it will be used in ways we haven’t yet imagined.

An Open-V bare die in a tray

Manufacturing Plan

The microcontroller will be manufactured by Taiwan Semiconductor Manufacturing Company (TSMC), which has extensive experience manufacturing integrated circuits with state-of-the-art features. The development boards will be made and assembled separately.

2017

  • 03/31 MPW design review
  • 04/24 MPW with v2 prototype tapeout
  • 06/17 PCBs for v2 fabricated
  • 07/18 Bare dies received
  • 08/25 MPW dies completely tested
  • 09/13 PCB for packaged chip tested
  • 10/26 Production-level final design review
  • 11/22 Production-level tapeout

2018

  • 01/19 Wafer-level testing/sorting
  • 02/23 Package-level testing/sorting
  • 03/22 PCBs populated and tested
  • 04/27 Delivery

Our testing board with a wire-bonded Open-V bare die

Risks & Challenges

Analog and digital integrated circuit design is a complex venture, involving architecture design, circuit design, verification, software design, and more. As with any major engineering project, the Open-V carries the risks associated with such complexity. Although rigorous debugging and testing have been, and will continue to be, exercised throughout this project, bugs may still creep in. Commercial chips often have bugs, but usually those bugs have a workaround that may impact performance without being a complete showstopper. However, a single unfixable bug is enough to make a chip unusable.

We are confident we can minimize the impact of potential bugs in the following three ways:

  1. Our well-defined design flow and functional project management.
  2. Our 30+ years of aggregate experience taping out integrated chips.
  3. Our enthusiasm for solving high-level challenges.

While the third parties associated with the project are trustworthy and have a track record of good work, their activities clearly constitute a risk outside our control. Of course, we will keep everyone updated at each milestone and will be transparent about any problems that might arise.

The repercussions of the complaint filed by lavaca member Bruno Ciancaglini prompted the opening of an administrative inquiry by the General Audit Office of Internal Affairs of the provincial Ministry of Security. There is still no news of the Public Prosecutor's Office opening an investigation, as it is obliged to do. Below is what the prosecutor consulted by the police during that early morning of terror has to say. The case led many Mar del Plata residents to share their own experiences of police abuse, along with their frustration at the impunity that protects the police, owing to judicial inaction.

Source (Argentina):

The Buenos Aires provincial Ministry of Security ordered an administrative inquiry to investigate the arbitrary detention and police abuses reported by lavaca. Through its press chief, the ministry contacted the MU newsroom yesterday afternoon to report that the complaint had been forwarded to Guillermo Berra, head of the ministry's Internal Affairs division.

Mar del Plata prosecutor Guillermo Nicora, who intervened in that nightmarish early-morning procedure, also got in touch. He recounted:

"The police called me at 6 in the morning. They told me he had taken a photo of them and that this was aggravated coercion. I told them that was no coercion at all, that people have that right."

He then explained the particular workings of the justice system in Mar del Plata:

  • If a criminal offense is involved (in this case, aggravated coercion), the police call the prosecutor so that he can intervene.
  • If it is a misdemeanor, they do not. Nicora explains: "Since 1999, we prosecutors have had no role in misdemeanor matters, only the police." This means the police carry out the entire procedure and only afterwards refer it to a justice of the peace. To be clear: there is no oversight of the police procedure whatsoever.

This case is an example of how the system works in the hands of the Buenos Aires provincial police: when the prosecutor refused to legitimize Ciancaglini's detention, the police justified it by fabricating a misdemeanor case.

What happens when this abusive use of force and of the law is reported?

  • So far, the Public Prosecutor's Office has not gone before the courts to push for an investigation of the police conduct, as it is obliged to do.
  • Asked whether he would investigate the police abuse of which he has evident knowledge, prosecutor Nicora replied: "I stand with Bruno and offer to give testimony, because this is an outrage."

For her part, Victoria Donda, chair of the Human Rights Committee of the national Chamber of Deputies, spoke with Federico Salvai, chief of staff to governor María Eugenia Vidal, to express her concern for the safety of the victims, especially those whom Bruno could not identify by name. His account describes the beating one of the detainees received: those people, witnesses to this complaint, are probably now in a Buenos Aires provincial police cell, and therefore exposed to pressure and threats.

Everyday violence

There were countless expressions of solidarity, offers of legal support, and condemnations of the police violence, among them from the Center for Legal and Social Studies (CELS) and the Buenos Aires Press Union (SiPreBA), which in a statement "categorically rejects the police abuses that occurred during coverage of the Mar del Plata International Film Festival".

There were also a great many messages from Mar del Plata residents sharing their own experiences of police abuse. One example, signed with full name and national ID number, reads:

"In my neighborhood this is a daily occurrence!! Our children experience these situations as normal. The worst part is not that they believe the police have the right to do this to them because it is their duty. The worst part is that there is no way for a complaint to go further than an article in the newspaper. Complaints to the prosecutor's office are shelved for lack of evidence. And what the police officers do remains invisible. They smashed my son's motorcycle, beat him with punches and kicks, stole his cap and his cigarettes, took his ID, cursed at him, discriminated against him for wearing a visor cap, and so on and so on, and since there are no witnesses nothing happens. Another son they pointed a submachine gun at because he told them they couldn't mistreat him. Always shouting and swearing, of course. Another day they were playing soccer in the square and the police demanded their documents and lost one of them, put the boys in the patrol car, and took them to the station. There they put the older one in a cell and left the youngest locked in the patrol car until he had a panic attack. Then they let him go and told him to get lost or they would kick the crap out of him. Forgive me for telling you all this, but what happened yesterday moved me to tears because nothing can be done. Despots, tyrants, thieves, utterly violent, and they are armed!!! I send you a hug from Mar del Plata and my apologies for the disgrace of a police force we have."

Another message sums up the many written embraces we received:

Dear Bruno,

What you recount breaks my heart and soul. It does not fit in the head of this 24-year-old, who grows ever more distrustful of the world and its monstrosities, that a visitor like you, who comes to cover what I believe is one of the most beautiful, enriching, and fascinating events in this city, should have to go through the experiences you so skillfully put into words.

I am deeply sorry.

_Of course I am sorry for you. Because no one ever deserves to go through such a situation. The helplessness that such a deprivation of liberty and rights must generate is indescribable, so your testimony truly leaves me speechless._

But I am also a little sorry for myself, and for everyone in Mar del Plata. Because I cannot understand how such atrocities can happen in my city, my lifelong home. Because hearing that these outrages happen here makes me feel as if they were happening in my own backyard; it breaks my soul and fills me with tears of rage and disappointment.

I understand you may never think of Mar del Plata the same way again. It seems a shame to me that you may never again enjoy, as before, the wonders this place can offer, a place that today is apparently under the "protection" of such lowlifes.

Still, I want to leave the door open so that next year, if you agree, I can make amends by hosting you myself, on my own behalf and on behalf of everyone in Mar del Plata, and by making sure you want for nothing, so that you can carry out both your coverage of the festival and your enjoyment of it.

Because no one should miss such an event, even though the thugs (because these are the real thugs) may have tried to taint it for you forever.

I await you with open arms, hoping you are willing never to lower yours.

And please, never stay silent. Because testimonies like yours help open more than a few eyes.

From me and from everyone in Mar del Plata, I send you the strongest hug there can be, and I hope to see you next year, to show the cowards what real guts look like".

To everyone who made us feel so supported, we want to say: thank you. You give us strength to keep going.