ZeroNet Blogs

Static ZeroNet blogs mirror

I recently came across the routing algorithm of the Distributed Hash Table (DHT), a distributed data structure in which access to and storage of data are collectively supported by the members of a virtual community. A member who respects the DHT design helps reduce the cost of searching and maintains the availability of values. The individual behavior of each member builds up the efficiency of the whole system.

Limited Knowledge

Within a distributed network, each member has limited knowledge of his surroundings. Because the distribution of useful resources is not deterministic, a peer without sufficient knowledge has to search desperately through a gigantic space. If I only know of one peer, who knows stuff but is unwilling to help, brute-force scanning the network is the only way to discover more data, which costs a lot of energy and time.

Knowledge by Sharing

Sharing knowledge is the key to know more and search less.

Fortunately, I found peer C who does not know stuff but is willing to help. He gave me a list of n = 10 people who may or may not have what I want. Okay, I will ask them. The time complexity of finding the right person who has what I want is O(n).

I asked peer D and he told me that peers E, F, G are most likely to have what I want, while the other people in my list are unlikely to help. At this point, peer D helped me reduce my search space by more than one half. Hopefully, everyone will guide me towards the right direction. The time complexity of finding the right person now becomes O(log(n)).
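This guided narrowing can be sketched as an iterative lookup. Everything below is my own hypothetical illustration, not any real DHT client's API; `ask` stands in for a network query:

```python
def iterative_lookup(target, start_peers, ask):
    """Ask ever-closer peers about `target` until someone has it.

    `ask(peer, target)` is assumed to return either ("found", data)
    or ("closer", [peers sorted by closeness to target]).
    """
    candidates = list(start_peers)
    seen = set()
    while candidates:
        peer = candidates.pop(0)
        if peer in seen:
            continue
        seen.add(peer)
        status, payload = ask(peer, target)
        if status == "found":
            return payload
        # Each helpful answer discards a large part of the remaining
        # search space, which is why the hop count grows like log(n).
        candidates = payload + candidates
    return None
```

With honest peers, each round of answers replaces the front of the candidate list with strictly closer contacts, so the file is reached in a logarithmic number of hops.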

Interest and Responsibility

How does peer D know who are the most helpful people? It must be the case that either peer D used to retrieve the same files from peers E, F, G, or peers E, F, G are supposed to be responsible for saving the files I want.

Furthermore, I can verify if peer D told me the truth by asking peers E, F, G for the file. If any of them has the file I want, or brings me closer to my destination, I can be confident that peer D indeed helped me, and that peers E, F, G are also trustworthy. If none of them has the file I want, and most of them point me to a farther location, I will be skeptical about the trustworthiness of peer D. Perhaps peer D is innocent but peers E, F, G do not fulfill their responsibilities, so I will also avoid asking peers E, F, G for help in the future.

Finally, I have the file I am interested in. Most importantly, I know who has the file. I can share my knowledge with others, upon request.

At the heart of a DHT network is peer responsibility. ZeroNet, however, works slightly differently: no peer has a predefined responsibility. We host data according to our interests, so that unsolicited content does not come onto our computers without permission. Responsibility can still be defined, though: of all the files one is interested in, a subset should be kept longer.

The principle of sharing knowledge in a DHT network is to spread the most helpful information. By giving out an ordered list of addresses, sorted by the likelihood of having the desired file, a peer helps others reduce their time complexity.

Why does DHT work?

Everyone is responsible for keeping the system running. There must be people to help you store things. There must be people to help you find things.

At the beginning, everyone decides his own responsibility in such a way that if one peer fails, the network is still working. By hashing the data received, one knows if saving those data fulfills his responsibility. By comparing the other people’s responsibilities with one’s own responsibility, one knows who to keep in touch with in order to make parts of the network reachable.

Hashing also helps rank the usefulness of routing information. Time complexity cannot be reduced unless the returned list of peers is sorted by usefulness. Useful routing information brings one closer to his destination. The closeness to the destination can be measured mathematically by computing requested_hash xor responsibility_of_any_person_I_know_of.
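The XOR closeness above can be computed directly on numeric IDs. A minimal sketch, with peer names and IDs made up for illustration:

```python
def xor_distance(requested_hash, responsibility):
    # A smaller XOR result means the peer is "closer" to the data.
    return requested_hash ^ responsibility

def sort_by_closeness(requested_hash, peers):
    """Order peer addresses by XOR distance to the requested hash.
    peers maps address -> responsibility ID."""
    return sorted(peers, key=lambda addr: xor_distance(requested_hash, peers[addr]))

peers = {"peer_e": 0b1010, "peer_f": 0b0111, "peer_g": 0b0110}
print(sort_by_closeness(0b0100, peers))  # ['peer_g', 'peer_f', 'peer_e']
```

Because XOR is symmetric and obeys the triangle-like property needed for routing, sorting by it gives exactly the "ordered list of addresses" a helpful peer should return.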

ZeroNet has a reputation mechanism in development. In addition to closeness, the usefulness of routing information can be defined by whether the mentioned peers have high reputations, to the best of the responder's knowledge. Reputation can be measured by the success rate of retrieving the desired data in untampered form. This does not use hash functions, but it is consistent with the principles of a DHT network.

How to fulfill routing responsibilities

Knowledge

  • Ask people about their responsibilities. Remember their addresses and responsibilities. Sort the addresses by closeness.
  • Bind the peer addresses to our interested files.
  • Remember Peer Exchange results.

Ranking

  • Evaluate the peers I know and yield reputation values.
  • The initial value of reputation should be 0 (neutral) and should change as the result of evaluation.
  • Impression fades due to infrequent contact. More complex algorithms can be applied to reset the reputations of peers we do not contact.
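The ranking rules above can be sketched as a tiny reputation record. The half-life decay is an arbitrary choice of mine, not anything ZeroNet specifies:

```python
import time

class Reputation:
    """Reputation starts at 0 (neutral), moves with each evaluation,
    and fades back toward neutral when a peer is contacted
    infrequently."""

    def __init__(self, now=None):
        self.value = 0.0
        self.last_contact = time.time() if now is None else now

    def _fade(self, now, half_life=3600.0):
        # Impression fades: halve the value for every hour of silence.
        elapsed = now - self.last_contact
        self.value *= 0.5 ** (elapsed / half_life)
        self.last_contact = now

    def evaluate(self, success, now=None):
        now = time.time() if now is None else now
        self._fade(now)
        self.value += 1.0 if success else -1.0
```

Exponential decay is only one option; a simpler scheme could reset stale reputations to 0 outright.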

Being Helpful

  1. Closest: Given a file request, if I have the file, I say I have the file.
  2. Second Closest: If I do not have the file, but I know who has given me the file, I return a list of peers and put their addresses at the beginning of the list. I only include people with high reputations.
  3. Closer: If I do not have the file, but I know who is responsible for saving the file, I append a list of those peers to the result. I only put people with high reputations there.
  4. Closer: If I do not have the file, and I do not know who is responsible for saving it, I append a list of the closest peers to the result. Hopefully, people there will help you find your file.
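The four cases can be merged into one response routine. This is my own sketch with made-up names, not ZeroNet's actual request handler:

```python
def handle_request(file_hash, my_files, seen_seeders, responsible_peers, closest_peers):
    """Answer a file request following the four cases above.

    my_files: {hash: data} stored locally; seen_seeders and
    responsible_peers map hashes to high-reputation peers, already
    sorted by closeness; closest_peers is the fallback list.
    """
    # Case 1 (closest): I have the file, so I say I have the file.
    if file_hash in my_files:
        return {"found": True, "data": my_files[file_hash]}

    # Cases 2-4: build a peer list, most promising addresses first.
    peers = []
    peers += seen_seeders.get(file_hash, [])       # case 2: past seeders
    peers += responsible_peers.get(file_hash, [])  # case 3: responsible peers
    peers += closest_peers                         # case 4: closest fallback

    # Deduplicate while preserving the priority order.
    unique = []
    for p in peers:
        if p not in unique:
            unique.append(p)
    return {"found": False, "peers": unique}
```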

How to fulfill storage responsibilities

Our garbage recycler should value peer responsibility. Optional files that match a node's responsibility should be kept longer.

Other Notes

  • Integrity of values should be verified by checking digital signatures.
  • Key size does not need to be 160 bits. It can be any reasonable size. For instance, MORPHiS uses a bigger key size to fit a SHA-512 hash into a key.
  • DHT protocol does not need to rely on UDP. All we have to do is to get information around. For instance, MORPHiS uses TCP to transfer DHT protocol payload.
  • Responsibility of a node should be changeable by modifying configuration files.
  • ! Uniqueness of node responsibility allows potential fingerprinting attacks.

ZeroMux Dropbox

- Posted in ZeroBlog Lab

ZeroMux Dropbox is a self-hosted file sharing service for ZeroNet. I am trying to make it work completely inside ZeroNet. This "Dropbox" does not depend on WebRTC, IPFS or any additional plugins.

You can see my progress here. Source code is also available on GitHub.

Its [un]stable version (v0.1.5) has been released. Download it here. The bleeding edge version can be found here. Follow the configuration instructions here.

Click on images for demos.


Get Started

Detailed configuration instructions can be found in the docs.
(It is easier than you think!)

There are a few file sharing sites that are using ZeroMux. Have a look and help them seed!

Development is active on GitHub. You can download the latest unstable version there and help us test its most exciting features. Experimental video seeking support was added: the underlying data stream of ZeroMux is now seekable, and long videos become seekable, too.


ZeroNet does not host any private data, and ZeroMux does not encrypt your data. Every file in this "Dropbox" is public. ZeroMux Dropbox is designed to help you share files you like, but you should not use it as a free unlimited backup service.

Got Stuck?

Got stuck? Feel free to ask for help via ZeroMail!


ZeroMux is being developed at an average rate of 8 commits/month.


  • Video player now supports seeking.
  • A seek method has been added to the abstraction stream.
  • Fixed ZIP extension bug on ZeroMux side.
  • Firefox >= 52 no longer assigns garbled names to files.
  • Made MSE stream work properly with Tor Browser.

Favorite Comments of the Day

- Posted in ZeroBlog Lab

Border0464fred · border0464fred@zeroid.bit ― 4 hours ago @nofish damn you scared the shit out of us, don't ever leave the internet again :p

You might be hosting a lot of important data with ZeroMux, a potentially unstable program. While I try to make ZeroMux as secure as possible, you should always keep your important data safe.

Secure your data

  1. Always keep the original copies of your data.
  2. In ZeroMux Dropbox, all of your data, including your file list and configuration, are saved in the files/ directory. Since the file slicing algorithm does not produce deterministic output, you should back up your files/ directory (the file chunks) before you update.

Understand how it works

  1. Keep your web browser updated.
  2. Read through its documentation and example code so that you don't misuse its features.
  3. Understand what Moov Box means and know what ZeroMux accepts.
  4. Keep track of ZeroMux development activity on GitHub and ZeroMe.
  5. Do not copy-and-paste old demo pages. Use the newest one instead.

Customization tips

  1. To show a poster before video loads, you can set the poster attribute of your <video> element to the path of your poster.
  2. To get the download link, you can make an event handler to get a blob URL. You attach the event handler to the stream.
  3. To make your browser assign a friendly name to your file, set the download attribute of your <a> element to whatever file name you want.

Migrating from ZIP bug

  1. Add an underscore _ after every folder name which ends with .zip, .tar.gz or .bz2.
  2. Use a good text editor to edit list.json. Replace every .zip/ with .zip_/ etc.
  3. Use a good text editor to edit every affected file.json. Replace every .zip/ with .zip_/ etc.
  4. Upgrade your ZeroMux Bundle.
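Steps 1-3 can be automated. The sketch below is mine, not an official migration tool, so run it on a backup copy of your files first:

```python
import os

AFFECTED = (".zip", ".tar.gz", ".bz2")

def migrate(root):
    """Rename affected folders (append '_') and rewrite list.json /
    file.json so that every '.zip/' becomes '.zip_/', etc."""
    # Walk bottom-up so renaming a folder never breaks deeper paths.
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        for name in filenames:
            if name in ("list.json", "file.json"):
                path = os.path.join(dirpath, name)
                with open(path) as f:
                    text = f.read()
                for ext in AFFECTED:
                    text = text.replace(ext + "/", ext + "_/")
                with open(path, "w") as f:
                    f.write(text)
        for name in dirnames:
            if name.endswith(AFFECTED):
                os.rename(os.path.join(dirpath, name),
                          os.path.join(dirpath, name + "_"))
```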

How did str4d and nofish make ZeroNet work over I2P and Tor? Inspired by RetroShare, I wrote up an abstraction of their ideas and design.

Outgoing connections

To communicate with other people, you need to go through the SOCKS5 proxy provided by Tor or I2P.

The parameter --tor_proxy is used to specify the SOCKS5 proxy address that ZeroNet uses to speak out.

Incoming connections

To let you receive messages from other people, Tor or I2P will inform ZeroNet on behalf of your distant friend.

Do not confuse the virtual port with the actual port of your Tor Hidden Service or I2P Server Tunnel. Tor or I2P will contact ZeroNet by sending messages to the actual port on localhost. ZeroNet is listening on the actual port.


Nofish is looking for ways to create Tor Hidden Services and I2P Server Tunnels automatically, so you don't have to manually configure your hidden sites in Tor or I2P.

ZeroNet uses the Tor Control Port to create temporary (ephemeral) Hidden Services for you.
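Under the hood this relies on the control protocol's ADD_ONION command. A sketch of the message being sent (15441 is ZeroNet's default file-server port; treat the surrounding usage as illustrative):

```python
def add_onion_command(virtual_port, actual_port, key="NEW:BEST"):
    """Build a Tor control-port ADD_ONION line that maps the public
    virtual port of the hidden service to the local actual port
    that ZeroNet listens on."""
    return "ADD_ONION %s Port=%d,127.0.0.1:%d" % (key, virtual_port, actual_port)

print(add_onion_command(15441, 15441))
# ADD_ONION NEW:BEST Port=15441,127.0.0.1:15441
```

"NEW:BEST" asks Tor to generate a fresh key; since no key is saved, the service is ephemeral and disappears when the control connection closes.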

Getting to know each other

In order to get to know each other, ZeroNet typically posts its own contact information on trackers. Clear net trackers store IP addresses. Tor trackers store .onion addresses. I2P trackers store .b32.i2p addresses.

In addition, ZeroNet asks known friends for more friends, via Peer Exchange. The official ZeroNet client is kind enough to give your peers contact information from both the clear net and the dark net. Even if you don't have access to dark net trackers, you can still get new contact information from the dark net if one of your friends uses the dark net and the clear net at the same time.


People hate running proprietary software (JRE). Thanks to a cool Russian guy, there is a C++ implementation of I2P, called PurpleI2P. The official I2P project believes PurpleI2P is stable enough to be used in production.

ZeroMe is being spammed by BadDream. While many Hub owners have banned him, the owner of the Hub he joined took a bit longer to respond. By default, the privilege of moderation belongs to the Hub owner. If the site owner fails to respond, spammers can do whatever they want until they run out of space.

We are aware of this centralized moderation issue, and we want to build a decentralized moderation system. In fact, many Hub owners are very responsive, and we want to refer to their decisions as well. Other ZeroNet people run blogs and forums. If they moderate their sites in a reasonable manner, we would also like to agree with them.

From all of the sites I am seeding, I collected 152 data/users/content.json files that might contain content moderation decisions.

DATA_DIR = "path/to/ZeroNet/data"

import os
import json
import io

def GetSiteJsons(data_dir):
    site_dirs = [
        data_dir + "/" + dir_name
        for dir_name in os.listdir(data_dir)
        if os.path.isdir(data_dir + "/" + dir_name)
    ]

    site_jsons = []
    for site_dir in site_dirs:
        json_path = site_dir + "/data/users/content.json"
        if os.path.isfile(json_path):
            site_jsons.append(json_path)

    return site_jsons

For each of these content.json files, I parsed the permissions defined by site owners.

def ParsePermissions(json_content):
    try:
        json_obj = json.loads(json_content)

        user_contents = json_obj.get("user_contents")
        if not user_contents or not isinstance(user_contents, dict):
            return None

        permissions = user_contents.get("permissions")
        if not permissions or not isinstance(permissions, dict):
            return None

        return permissions
    except ValueError:
        return None

Then I wrote this function to find out who is banned.

def GetBanned(permissions):
    banned = set()
    for person in permissions:
        if permissions[person] == False:
            banned.add(person)
    return banned

Finally, I put together all of the data I collected to find out the most frequently banned.

freq_dict = {}

for json_path in GetSiteJsons(DATA_DIR):
    content = io.open(json_path, encoding='utf-8').read()
    permissions = ParsePermissions(content)
    if permissions:
        for person in GetBanned(permissions):
            new_freq = freq_dict.get(person, 0) + 1
            freq_dict[person] = new_freq

print sorted(freq_dict.iteritems(), key=lambda kv: kv[1], reverse=True)

Here is my result:

    (u'banexample@zeroid.bit', 94),
    (u'bad@zeroid.bit', 46),
    (u'doom1b@zeroid.bit', 8),
    (u'doom1@zeroid.bit', 6),
    (u'bbxxx@zeroid.bit', 4),
    (u'plac@zeroid.bit', 4),
    (u'thejumono@zeroid.bit', 4),
    (u'meylody@zeroid.bit', 4),
    (u'realdonaldtrump@zeroid.bit', 3),
    (u'crackzeronet@zeroid.bit', 2),
    (u'doom1@kaffie.bit', 2),
    (u'baddream@zeroid.bit', 2),
    (u'ancrap@zeroid.bit', 1),
    (u'requrequ@zeroid.bit', 1),
    (u'fair@zeroid.bit', 1),
    (u'doom1c@zeroid.bit', 1),
    (u'sdh@zeroid.bit', 1),
    (u'xp@zeroid.bit', 1),
    (u'authhh@zeroid.bit', 1),
    (u'doom1d@zeroid.bit', 1),
    (u'redacted@zeroid.bit', 1),
    (u'hakudushi@zeroid.bit', 1),
    (u'entropy@zeroid.bit', 1)

Observation: Although I collected over 150 content.json files from both big and small sites, I still don't have enough data. If you run a Hub, a blog, or a forum, I strongly recommend that you ban the known spammers on ZeroMe.

Advice:

  • Parse visitor permissions only from the sites you trust.
  • Choose a Hub that has a responsive moderator.
  • If you run a site, Multisig your data/users/content.json file to allow trusted people to moderate your site.

Peers over Time

- Posted in ZeroBlog Lab

People asked me how to gather statistics on ZeroNet to discover interesting facts. ZeroNet actually provides a lot of debugging information that can be put to good use. Since Shift@zeroid.bit made a graph with the peer data he collected over 5 days, I decided to collect similar peer statistics on my own ZeroNet client.

Get the peers

Peer counts can be obtained by parsing the Stats page. They are stored in a <table>, so obtaining them is simply a job for an HTML parser.

import requests
from BeautifulSoup import BeautifulSoup
import re

The function for counting peers simply searches for relevant <tr> and <td> tags in Stats page.

def ParsePeers(site):
    # The Stats page of a local ZeroNet client is assumed to live at
    # the default address, 127.0.0.1:43110.
    stats_content = requests.get('http://127.0.0.1:43110/Stats', \
        headers = {"Accept": "text/html"}).text
    # Accept header is very important
    soup = BeautifulSoup(stats_content)

    def contains_site(tag):
        if tag.name != 'tr':
            return False
        link = tag.find('a')
        if not link:
            return False
        return site in link.string

    tr = soup.find(contains_site)

    if not tr:
        return None

    pattern = re.compile("[0-9]+/[0-9]+/([0-9]+)")

    def contains_peers(tag):
        return tag.name == 'td' and pattern.search(unicode(tag.string))

    peer_td = tr.find(contains_peers)
    peer_count = int( pattern.findall(unicode(peer_td.string))[0] )

    return (tr.find('a').string, peer_count)

Decide which site to count, and make some space for peer data. Write a function to collect peer data.

from datetime import datetime

site = '1HeLLo'
x_time = []
y_peers = []

def CollectData(site, x_time, y_peers):
    result = None
    try:
        result = ParsePeers(site)
    except Exception:
        pass

    if result:
        x_time.append(datetime.now())
        y_peers.append(result[1])
Then write a function to plot the graph.

import matplotlib.pyplot as plt
import matplotlib.dates as mdates

def SavePlot(x_time, y_peers):
    fig, ax = plt.subplots()

    plt.plot(x_time, y_peers)

    if len(x_time) > 1:
        hour_loc = mdates.HourLocator()
        minute_loc = mdates.MinuteLocator()
        time_format = mdates.DateFormatter('%H:%M')

        ax.xaxis.set_major_locator(hour_loc)
        ax.xaxis.set_minor_locator(minute_loc)
        ax.xaxis.set_major_formatter(time_format)

    fig.savefig('The Plot.png')

Finally, write the code for main routine.

import time

for i in range(400):
    CollectData(site, x_time, y_peers)
    print "Collected", i
    SavePlot(x_time, y_peers)
    time.sleep(60)  # wait a while between samples

Result

I kept the program running for two hours, and got this graph.

I kept the program running for another 6 hours, and got another graph. My laptop shut down at midnight, so it did not collect any data over that period.

Observation

  • ZeroNet seems to update the peer count every 2.5 hours.
  • Around 100 inactive peers are removed after every cleanup.

The Lossless Music Player demo is finally here!

Stream and Listen to the song! (39.5 MB Lossless)

Please wait until half of the file has been downloaded, otherwise the song will be interrupted.

Remember to support ZeroMux and aurora.js.
Remember to seed, and enjoy the song!

Yukinoshita Yukino (CV: Saori Hayami), Yuigahama Yui (CV: Nao Touyama) - Everyday World

At the time of writing, ZeroNet does not officially support file splitting. Though ZeroNet is aware of file "ranges" when asking peers for files, the download progress can neither be monitored nor cached.

There is still some work left on ZeroNet's big file optimizations, but I worked out a temporary solution. I sliced a 5-megabyte song into 250-kilobyte pieces. The SHA-256 hashes of the big file and of each slice are stored in a file list called file.json. Each slice is named *.dat and placed in the same folder as file.json. A JavaScript program downloads all of the pieces and merges them into one big blob.
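The slicing step can be reproduced in a few lines. The chunk naming and file.json fields below follow my description, but treat the exact field names as illustrative:

```python
import hashlib
import json
import os

CHUNK_SIZE = 250 * 1024  # 250-kilobyte slices

def slice_file(src_path, out_dir):
    """Split src_path into numbered *.dat slices and write a
    file.json holding the SHA-256 hashes of the whole file and of
    every slice."""
    os.makedirs(out_dir, exist_ok=True)
    whole = hashlib.sha256()
    piece_hashes = []
    with open(src_path, "rb") as src:
        index = 0
        while True:
            chunk = src.read(CHUNK_SIZE)
            if not chunk:
                break
            whole.update(chunk)
            piece_hashes.append(hashlib.sha256(chunk).hexdigest())
            with open(os.path.join(out_dir, "%d.dat" % index), "wb") as out:
                out.write(chunk)
            index += 1
    meta = {"sha256": whole.hexdigest(), "pieces": piece_hashes}
    with open(os.path.join(out_dir, "file.json"), "w") as f:
        json.dump(meta, f)
    return meta
```

The downloader side simply fetches each *.dat piece, verifies its hash against file.json, and concatenates the verified pieces into one blob.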

The progress can then be monitored, since each slice is of a reasonably small size that ZeroNet can handle quickly. In addition, the progress can be saved at any time, since each valid slice is stored automatically on the hard disk. If a network problem causes the download to fail, every valid slice can still be read immediately from the hard disk when the page is reloaded.

I did not minify the JS source code, so that you can read it easily. The "entry point" is located in big/js/ui.js. There is also a sample file.json showing its own structure.

To help you understand the code, I also wrote the documentation of this project.

Technical details will be logged to the console.

Update:

  • Added a link to download the song.
  • Tried to rename the file.
  • Added short videos and posters.
  • Made it prefer the first parts of a file.
  • In Firefox, "fragmented" MP4 videos can now be played while being downloaded.
  • To avoid blocking the main thread, a Web Worker now checks the hash of the reassembled file.
  • An in-browser MP4 fragmenting program makes non-fragmented MP4 files streaming friendly. Firefox users can now play the videos on this page as they download.

Limitations:

  • The script does not know whether a certain part of a file is on the disk.
  • It is not easy to move the file parts to another location while still seeding them.
  • The maximum Blob size is set to ~500 MB by many browsers.

Known Issues:

  • Due to a Firefox bug, the file cannot be automatically renamed in a sandboxed iframe.
  • It does not work in Tor Browser.
  • Google Chrome does not allow the script to access the SourceBuffer in a sandboxed iframe. Therefore videos will not stream in Chrome, but they will play once completely downloaded.

Thanks to everyone who has been seeding this site and testing the program.

Download the song (5.16 MB)

Download a short video (19.30 MB) Download another short video (7.02 MB)

Firefox users can also stream yet another short video (15.60 MB)

Read the documentation of this project.