ZeroNet Blogs

Static ZeroNet blogs mirror

Kaffie's Blog

I apparently just write about my search engine. But I do more than that, honest!

The 'error 61' (or whatever it is) in the corner of the ZeroNet homepage always bothered me. I knew I wasn't running it in the most secure way possible, and that I was losing connections to the many peers who use Tor mode. So I finally decided to set it up on my Mac, and the whole process was surprisingly simple: a quick brew install tor, followed by modifying the Tor config file. After that, restarting ZeroNet worked like a charm.
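For reference, the torrc change amounts to enabling the control port that ZeroNet talks to. The exact lines and file path below are the usual defaults, so treat this as a sketch rather than a record of exactly what I did:

```
# /usr/local/etc/tor/torrc (typical Homebrew location)
ControlPort 9051
CookieAuthentication 1
```

Then restart Tor and relaunch ZeroNet, and the error in the corner should be gone.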

There's also the --tor always flag I stumbled upon. What's the difference between that and simply launching without it? Is it the difference between using Tor for only some connections versus running everything through Tor? Which should I be doing? Either way, it's nice to know I'm finally on the 'full' ZeroNet. Haha.

Unfortunately I don't have any update news for Kaffiene yet. However, Bwoi has come back with its new 2.1 look. Go check it out if you haven't yet.

As always, thanks for using Kaffiene and following along here on this blog.

<3 Kaffie

I'm finally back

- Posted in Kaffie's Blog

I'm sure you're all aware that Kaffiene hasn't received any updates (not even an index update) since early April. Part of this was on me; I just never got around to it. I had ZeroNet running for quite a while, but closed it and went on to do other stuff. I've also been dealing with some health issues, which is part of why I haven't been updating. But I'm back now, and have added 400 or so new sites to Kaffiene's index. Albeit most of them are empty blogs, or Chinese sites from that explosion of popularity in China back in May.

Upon running my tools again, there's another 600 or so sites queued up and ready to be tagged. My current method of tagging by hand is getting quite tedious, though, so I might go ahead and write some automated tagging tools before working through those 600 sites.

I also made a few other tweaks to the site. I removed some FAQ questions that aren't really relevant anymore, and I switched the proxy links over to proxy1.zn.kindlyfire.me, a new proxy service I found, since Bit.no isn't up today.

I'm really trying to stick around and stay active in the ZeroNet community, and I really hope that this network flourishes into something great.

Sorry for the wait. I hope Kaffiene still remains the search engine of choice for everyone.

<3 Kaffie

About the lack of updates

- Posted in Kaffie's Blog

So... yea. Kaffiene lacked updates for a while, simply due to me being lazy. Sorry about that. I went ahead and added the new sites picked up by grab.py, which should cover most of the sites that have popped up recently. There seemed to be about 80 of them, many of them duplicate Chinese sites, so I don't think much was missed.

In related news, there are now two new search engines, each doing things differently. There's ERR0R Search, which is based on ZeroTalk and allows for user submissions and discussion. To me it feels like what 0List should have been; I'll definitely be keeping an eye on it. There's also OneSearch, which originally queried a clearnet DB for its search, but has since switched to an optional-files search with the index hosted entirely on ZeroNet. From what I can see, it'd take 100mb or so to host the full database. Not a lightweight solution, but it provides much more in-depth searching. It also has backlink searching, which is cool. It's a bit like a beefed-up, cleaner Zearch. Though, like Zearch, lots of 'duplicate results' pop up, along with lots of stuff that makes it hard to find a clean list of sites to visit. ERR0R has the ZeroSearch/Bwoi problem where it's difficult to search the listed sites properly. I personally still use and recommend Kaffiene, but these new sites are definitely welcomed with open arms.

Some small changes

- Posted in Kaffie's Blog

As you may or may not be aware, Kaffiene recently had an update that makes searches instant: your results pop up as you type. I wanted to add it sooner, but the engine has only just recently become fast enough to do it.

I also added proxy links, in case you want to visit a site without adding it to your local machine. The proxy links are hardcoded to use http://bit.no.com:43110/, whereas the regular outgoing links are relative to whatever proxy you're viewing Kaffiene on. That means if you view Kaffiene on that proxy, the two sets of links are identical.

I also noticed a particular site reporting a surprisingly large number of sites with only a few (4) duplicates. I found this shocking. Had they found a bunch of sites I somehow missed? So I went to investigate. It turns out they simply added a new search mode that uses a copy of Kaffiene's index without siterank. That's about 600+ sites; combine that with their ~300, and it's easy to see how the count shot up to 1000.

I ran their index through my check.py and, sure enough, it showed 1000 sites and only 4 duplicates! Thinking this was strange, I looked at it by hand. There were duplicates; they just weren't being reported. How can that be? It turns out check.py (and that site's duplicate checker) didn't consider addresses with mixed case to be the same as all-lowercase ones. That's technically correct, because Bitcoin addresses (and thus ZeroNet addresses) are case-sensitive. I had a similar issue with the homepage listing some sites multiple times due to case sensitivity. So I went ahead and updated check.py to convert everything to lowercase before comparing. This may (rarely) produce false positives, but I feel it's appropriate, since a false report caused by a case-sensitivity problem is far more likely than two sites having roughly the same address. Running that 1000+ site index through the fixed script, the numbers are clear:

Number of duplicates: 357

Unique sites: 668

Which is more or less what Kaffiene had right before this last update. It's also worth noting the site has a 'Kaffiene search'. I do want to point out that it fails to reflect how results are shown on Kaffiene: with siterank and tag descriptors. However, the index is in there, so most results should pop up nonetheless. There's also no regex searching (the site uses naive substring matching).
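The lowercase fix boils down to something like this. The function name and input format are my own sketch, not check.py's actual structure:

```python
def count_duplicates(addresses):
    """Count duplicate addresses case-insensitively.

    ZeroNet addresses are Bitcoin-style and case-sensitive, but two
    index entries differing only in case are almost always the same
    site listed twice, so we normalize to lowercase before comparing.
    """
    seen = set()
    duplicates = 0
    for addr in addresses:
        key = addr.lower()
        if key in seen:
            duplicates += 1
        else:
            seen.add(key)
    return duplicates, len(seen)
```

Comparing the raw strings instead of `addr.lower()` is exactly the behavior that hid the 357 duplicates.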

As I've mentioned a few times, I don't mind Kaffiene's index being used. I only ask that it not be used to one-up Kaffiene in a game of numbers, or to intentionally misrepresent search results.

Huge Kaffiene Update

- Posted in Kaffie's Blog

So I ended up having a huge dev jam for Kaffiene. I guess that's what happens when family comes over :P. Without further ado, the changes:

Search Pages! A much-needed feature, since larger searches took a while and returned a lot of results. Pages make searches run much faster, so I've also enabled discover mode by default. The page selection at the bottom dynamically adjusts so that at most 10 pages are listed, rather than upwards of 40. Clicking 'next' or 'previous' (or any higher/lower page, really) slides the range of page numbers shown. Check it out.
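The sliding page selector can be sketched roughly like this (my own illustration, not Kaffiene's actual code):

```python
def page_window(current, total_pages, width=10):
    """Return the list of page numbers to show at the bottom.

    Shows at most `width` pages, roughly centered on the current
    page, clamped so the window never runs past the first or last
    page. Pages are 1-indexed.
    """
    start = max(1, min(current - width // 2, total_pages - width + 1))
    end = min(total_pages, start + width - 1)
    return list(range(start, end + 1))
```

So a 40-page result set always shows a 10-page window, and clicking into page 20 slides the window to pages 15 through 24.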

Fixed Hashtags! As briefly mentioned before, the tags have gotten an overhaul. Some issues with special characters and spaces are now fixed. Tags have a # in front of them to indicate they're clickable, and switching discover mode on/off now updates the tags live, rather than requiring a new search.

Updated siterank UI! Pretty self-explanatory. The numbers are moved over and greyed out, with 'peers' underneath to indicate how the rank is determined. Overall it looks a lot nicer.

Regex Searching by default! Naive search often brought up largely irrelevant results, so I switched the default to regex. The [r] tag still works (it's simply stripped, since regex is now the default), and there's a new [n] tag to access the old naive search.
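As a rough sketch of how the tag handling might work (the function and tag-parsing details here are my own illustration, not Kaffiene's actual code):

```python
import re

def search(query, titles):
    """Search titles, treating the query as a regex by default."""
    if query.startswith("[n]"):
        # [n] falls back to the old naive substring search
        needle = query[3:].strip().lower()
        return [t for t in titles if needle in t.lower()]
    if query.startswith("[r]"):
        # [r] is accepted but just stripped, since regex is the default
        query = query[3:].strip()
    pattern = re.compile(query, re.IGNORECASE)
    return [t for t in titles if pattern.search(t)]
```

The upside of regex-by-default is queries like `kaf.*search`; the downside is that a query containing regex metacharacters needs `[n]` to be taken literally.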

FAQ was updated to acknowledge the changes.

I also went ahead and added some more sites that popped up in the last few days.

Various other changes in the code as well. Stuff like having a dedicated display function, fixed some regex results, etc.

Krawler.py is in development! It's a site crawler and will return links found around ZeroNet. It's not quite included, and not finished just yet, but I figured I'd announce that it's being developed finally.

And lastly, as usual, thanks for supporting Kaffiene!

I said I might not update, but I ended up getting people addicted to ZeroNet, so I went ahead and made some changes to Kaffiene. Tags in discover mode are now hashtags, to indicate they're clickable, and the discover mode checkbox has a tooltip explaining what it does (in case you're lazy and don't read the FAQ).

There was also a suggestion to move the site rank over to the other side of the site titles, but IMO it didn't look very good; the main complaint was that the site names didn't line up. Thoughts on how these should be adjusted?

Yeah... So if I don't post anything for the next few days, that's probably why. Just wanted to let you guys know so you're not just sitting there wondering where I am or if I'd forgotten about ZeroNet or something.

That's all. No updates, sorry :(.

I finally got around to writing a tool to automatically grab new sites and update the siterank of Kaffiene's index. Grab.py uses PhantomJS to grab that nice list of sites you see on the homepage of the various proxies, then formats it in a way that plays nice with the updated merge.py tool. I kept the old merge.py around since it now does something different (the old merge adds site rank; the new merge updates it). So the flow goes something like: run grab.py, run merge.py, tag the new sites if there are any, add them into the index, and then replace the index with the updated one.

That's still quite a bit of work to do by hand (I just had to visit ~30 sites manually), but it's a lot better than before. Next up is a tool to grab sites from that New Sites seeder thing, and maybe 0List.

Oh, and a tool to auto-tag new sites. You can find the new scripts and their instructions in the 'script' section, like before.

Dreams of Decentralization

- Posted in Kaffie's Blog

An idea has been brewing in my head these past few weeks. Probably because I just dived in and went 0 to 60 in P2P software. I had no idea this stuff was so far along already. ZeroNet is fantastic. There's Bitcoin and Namecoin managing transactions. There's Bitmessage for a decentralized email type of thing. Then there's stuff like Tox, Ricochet, Ring and the like for instant messaging. IPFS and similar solutions for decentralized file hosting. And then I find out about cjdns and meshnets.


With all this at the forefront of my mind, these thoughts started blending with my other views and interests: anti-consumerism, minimalism, independence, libre software, etc. And the thought popped into my head: why don't we have an all-in-one cheap PC that makes decentralized computing easy? At its heart could be a custom Linux distro with the above software pre-installed, configured to jump into a new or existing meshnet. It would run on a cheap pocket computer like the Raspberry Pi, or really any computer you could get your hands on. Have the OS/box lack traditional internet support: no central DNS, no centralized IP allocation. It'd be physically and digitally incapable of connecting to the existing internet, except through networking with a computer that can (though ideally this wouldn't be necessary).

Ultimately, you'd just get a bunch of these little guys and we'd effectively kickstart a new internet that's inherently secure and P2P, lets you view things while offline, has no restrictions on where you can go online (no need for Wi-Fi), discourages giant corporate websites, lacks ads, has its own currency, and encourages creativity and independence.

How does that sound? No slow browsers. No tracking cookies. No ads or adblockers. No data leaks or corporate espionage. No government spying on what you do. No ISP monopolies. No ICANN. Everyone's on an open source operating system using open source software. No fiat government funny money.

Though, we're not quite there yet. ZeroNet still needs various tweaks and developments to be truly P2P. Bitcoin/Namecoin might have issues when going 'off the grid' and splitting into a bunch of localized versions (perhaps something like uCoin is better here?). And there's still the obvious issue of getting everyone to ditch their current line of crap and divorce themselves from the consumerist/corporate mindset.

Hell, maybe we can ditch currency altogether. ZeroNet seems to be doing great, and not a single site charges for anything. No one's getting paid here (well, nofish gets donations for his hard work). But when merging the P2P digital world with real-world work, there's a conflict. I've been trying to think of a way to solve it. What are the core necessities to be self-sufficient? Definitely electricity, so that this P2P magic can run. I did some digging and there are some solar-powered Raspberry Pi projects. Would that be enough? Can we get Pis to communicate with each other in a mesh, or do we need extra hardware? There's also the issue of keeping the lights on; I have no clue if we can self-generate that. Tesla had a home battery thing, would that work?

Water and food seem to be the two largest problems. Getting everyone to grow their own seems problematic, as not everyone would want to, or have the resources, or the know-how. And outsourcing this requires funny money, or perhaps Bitcoin (which would require connecting to the internet); both of those are problems. For food, I think something like Soylent would work well, but I have no idea what the production process is like, or whether we could make it sustainable within our P2P society. For water, I have no idea. Rainwater? Piggyback off the public drinking fountains? Neither sounds particularly sustainable or reliable. I'd dehydrate if I had to live off rainwater (damn desert, I hate it here). And the public water is nasty, and requires reliance on the state, centralizing something and wrecking the entire idea.

Then there are the issues of plumbing (toilets, showers), a place to stay, trash, aaannndd... I think that's it. If all of those are taken care of, either through decentralized automated means or through self-sufficiency, I think we could honestly get rid of currency, at least as a means to live. In terms of actual physical stuff you need, the amount is surprisingly low: some clothes (you already have those) and some basic toiletries are really it. The one needed thing that would be an issue, then, is the tech. I don't really know of an easy way to make Raspberry Pis appear out of thin air, so that might be a problem. Even the most rudimentary tech has a complex pipeline that stretches back to essentially slave labour: cheap eastern manufacturing, minerals and resources from third-world countries. Is there a better way to get tech running? How can you drive this production in a society like this?

And unless we solve these problems, we'll always need a bridge from our P2P paradise into the economic internet filled with crap. We'll always be reliant on the very people and structures we're trying to distance ourselves from. And that, to me, is a huge issue. Whether it be for computer parts, food, toiletries, water, or simply land to stand on, there's still that inherent reliance on a collapsing economy desperate to spy on people and censor what they say and do. That collapsing economy that wishes to control what you say, think, do, buy, and make.

The alternatives are obvious, but not appealing. We can either stay here, on the failing, collapsing internet filled with crap, or we can introduce the crap into our P2P haven: require some sort of connection to the internet so that Bitcoin may work, and in turn allow people to participate in the dying economy just to get the things we need. Neither sounds great. One is giving up, and the other only carves out a small section for ourselves rather than simply packing up and leaving.

Is it impossible to fully decentralize society? Must there be some central reliance to ensure things work reliably? I really doubt that. But for the time being, there doesn't seem to be a way to have fully decentralized tech while also continuing to build hardware. You'd need a meshnet the size of the internet, and that won't happen unless all the same crap can appear on it. Though, perhaps that's the end goal for everyone: simply a more secure and reliable way of doing the same crap they already do?

I can't see how one would be able to splinter off and simply live without involving the exploitation of the participants of the system. Any thoughts on this matter would be appreciated.

Siterank for Kaffiene!

- Posted in Kaffie's Blog

I went ahead and added a new siterank value for entries in Kaffiene's index; the site now sorts results by it. The siterank is essentially just the number of peers each site has, as determined by the /Stats page on a particular proxy. It works pretty well, putting the best/most relevant results at the top and the lesser, crappier sites down at the bottom.

While I was at it, I made a new merge.py tool that essentially streamlines this process. It takes a 'peerlist' of addresses and site ranks and applies them to data.txt. Any site not in the peerlist gets a '-' dummy value, and any sites unique to the peerlist get added to a 'new sites' file for later processing (by hand, at the moment).
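The merge logic described above boils down to something like this. The in-memory data format is my own assumption; the real tool reads and writes data.txt:

```python
def merge_siterank(index_entries, peerlist):
    """Apply peer counts from a peerlist to the index.

    index_entries: dict mapping address -> entry dict
    peerlist: dict mapping address -> peer count
    Returns the updated entries plus the addresses the index
    hasn't seen before (queued for hand-tagging).
    """
    new_sites = [addr for addr in peerlist if addr not in index_entries]
    for addr, entry in index_entries.items():
        # Sites missing from the peerlist get a '-' dummy value
        entry["siterank"] = peerlist.get(addr, "-")
    return index_entries, new_sites
```

Everything already indexed gets a fresh rank (or a '-'), and everything new falls through to the 'new sites' pile.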

There was also an influx of new sites I went through and added. Kaffiene used to have about 360-something sites, and now it has over 600! A lot of these appear to be blank dummy sites or empty blogs, but I've tagged them appropriately and kept them in. Not to boast a high site count; at this point I don't really care about that. My main purpose is to provide a reverse-lookup service, so that people know what they're getting into. I later plan to separate these sites into a 'deadlist' that various services can reference as needed (to auto-block the sites, remove them from your downloaded files, or maybe to power an 'abandoned ZeroNet sites' site), and for the soon-coming 'merger sites'. I think having indexes already separated into categories and ready to go is a good move for the future of decentralized search.

One last little thing: I added a scripts/python page, since that top list of links was getting a bit cluttered. It has descriptions and instructions for the two Python tools, and there'll be more there in the future. I also updated the FAQ about Kaffiene's sorting.

I think that's all, but I might be forgetting something. Oh, and blank searches take a long time again :(. This is due to the higher site count (nearly double) along with some new string appending; both make the search take longer, thanks to the display code and JavaScript being lame. I'll have to start adding multiple pages for searches soon. Isn't that exciting?

As always, you guys are awesome, and thanks for supporting me and Kaffiene. <3