It's the business model.
No amount of tech is going to change that. If you want to change the situation, start social media businesses with a different business model.
The root of the problem - or at least the trunk very close to the root - is Software as a Service. It's the trend of turning software into services. Sure, it's nice for the vendors, and it's nice for corporate clients who can write off responsibility to a third party. But it also means you're no longer in control of your data, or of the code that runs on it. The availability of your work becomes completely dependent on the business decisions of third parties, which can - and frequently do - disappear suddenly. It's what leads to the proliferation of ads, surveillance, and growth strategies like dumbing down software to the point of almost complete uselessness.
If technology is meant to be - or at least is capable of being - empowering to individuals, then turning everything into a service is the exact opposite of that.
Here's a thought experiment I'd like people to ponder: suppose we had a decentralized hardware utopia where every homeowner had his own web server node ... such as a "FreedomBox" or a hypothetical ISP smart router with 16 GB of RAM and 10 terabytes of disk built in, to hold Sandstorm or whatever self-hosting app stack you can think of.
Even with those favorable conditions, I'd argue we'd still evolve towards Youtube centralization. I'd encourage people to think about why that counterintuitive result eventually happens. (As a hint, Youtube does things that localized technology installed in the home doesn't solve and can't solve.)
>If technology is meant to be [...] empowering to individuals, then turning everything into a service is the exact opposite of that.
To further explore if eventual mass migration from home self-hosted videos to centralized Youtube is inevitable, we have to be open-minded to the idea that thousands of people will conclude that "putting my video on Youtube instead of hosting it on my own server is what empowers me." Why would people think that opposite thought?
Are you talking about search and content discovery? Are you talking about replicating the most popular videos over many servers so that the system can efficiently serve all the demand to watch those videos?
Maybe existing, fully-distributed products don't solve these problems. But it isn't obvious to me that they can't.
It’s not even that counterintuitive with some capitalist forethought.
"80% of the work is done by 20% of the workers".
Where this comes into play with regard to your thought experiment is that the larger the playing field, the fewer large players there are - they tend to get "concentrated".
Plenty of Facebook alternatives exist... but why go to them? No one uses them. Large chunks of friends/family/acquaintances aren't "there" - they are on Facebook.
Plenty of authors other than Stephen King exist... but given a few spaces on shelves at kiosks, who's going to get those spots? You can guarantee King will get one of those few spots...
The majority of activity is going to condense down to a few spots. Why? Because that is where everyone else is. That's where the skilled "creators" are.
The question becomes... how do you make sure those spots are fair, open, balanced, secure, etc.
Data in the hands of a few companies? What happens when they all lean left - politically? What happens when one of them gets hacked? When a "foreign actor" learns how to game that system?
I've thought for the last decade that things will continue to get "worse" before they get "better". How much worse? How can we make it better?
Million dollar questions.
I don't intend to derail the conversation so, rather than typing out an explanation of obscene length, I will refer all interested parties to the following:
Basics of Pareto Distributions
Example of some limitations in the application of Pareto Distributions
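For the curious, the "80/20" shape is easy to see in simulation. A minimal sketch (the shape parameter 1.16 is the textbook value for which the top 20% hold roughly 80% of the total; everything else here is illustrative):

```python
import random

random.seed(42)

# Pareto distribution with shape alpha ~= 1.16, the textbook value
# for which the top 20% of draws hold roughly 80% of the total.
alpha = 1.16
samples = sorted((random.paretovariate(alpha) for _ in range(100_000)), reverse=True)

top_20_percent = samples[:20_000]
share = sum(top_20_percent) / sum(samples)
print(f"share of total held by the top 20%: {share:.0%}")
```

The exact share wobbles run to run (heavy tails do that), but it reliably lands far above the 20% you'd see if output were spread evenly.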
Once upon a time, to start a certain class of technology company, you needed to buy and provision a small data centre. There is a minimum capital cost associated with that. That cost presents a barrier to entry. Now, one spins up an AWS or Heroku instance.
Harvard economist Edward Glaeser argues the reason cities are more productive is they turn fixed costs into variable costs . (The density allows for the expensive, fixed capital assets to be reliably distributed to variable payers.)
TL;DR: It's complicated. Reducing all software as a service to being dis-empowering to individuals ignores the barriers to entry such models tear down.
What? No. Plenty of new companies are being built off these SaaS providers. Why should a company just starting out build their own infrastructure, host their own CRM, or build their own AI models? Companies would move much more slowly if they couldn't rely on these services.
If you purchase software that your company relies on, it belongs to you, regardless of whether that third party disappears. You still have use of the software until you can determine whether you need to replace it or whether it'll still work for your needs.
Now renting the software as a service is a different matter. If the third-party goes under, well then you have problems. Problems you may have to solve immediately.
As an individual who is trying to build something, a disappearing third-party may not be detrimental if you still have the software at hand. You would not be locked down by that third-party's SaaS.
Letraset's ColorStudio cost $1,995 (in 1990 money), and Photoshop 1.0 cost around $1,000. And that didn't even buy you the upgrades, which mattered when layers were still two point-oh releases away.
Nah, you "own" it only as long as your rented keys are valid - what that means depends on the implementation, and can mean anything from "you get to use the latest version for which you paid in perpetuity" (quite rare; notable example: JetBrains), through "you can use only the current software, as long as you pay, with forced upgrades, and only as long as we think it's not yet time to arbitrarily cut you off (and only as long as we're in business)" (the common way), to "software? what software. it's in the cloud, you have no say in anything, it's a service that works only as long as we care to provide it" (becoming the norm).
I suppose it's kind of like renting vs. buying a flat/house - the former is definitely much cheaper on a month-to-month basis, but you're subject to random landlord whims, and ultimately, you're just paying for a service - so when it ends, you're left with nothing.
As an Adobe customer, the software I use has become more “affordable” because it is easier for me to come up with $45 a month than the $X000 it would cost up front. On the other hand, I am paying for software I don’t use, because to get the few programs I need I have to get the “everything” bundle.
My suspicion is that Adobe software is cheaper now because a) it has that new feeling of “affordability” (which could also be done through an instalment plan) and b) demand has gone up more than production costs have, which was already happening before they went SAAS (iirc PS CS6 was hundreds, not thousands).
Not to say that SAAS doesn’t sometimes work out the way you suggest—I don’t know enough to venture an opinion on that—but as an Adobe customer I definitely don’t feel like Creative Cloud is giving me a good deal. It feels like I am getting slowly soaked, according to some careful calculation of just how much Adobe can get from me before they push me away entirely. Their CC software manager also gets a lot more cursing from me than it does appreciation.
Maybe my opinion is wrong, but if it is, I think it is a real marketing failure on their part...
I take your point but, for this one, you'd be surprised how far you can get with a decent email client and a spreadsheet.
Yes, first amendment rights are preserved in the "real" Internet through anonymous posting. And information gets shared that otherwise would not if identity were revealed.
But that begets spam, phishing, scams, and a number of bad actors that drive people towards walled gardens for communications.
I don't see how any western capitalist society couldn't fall into this trend. It's all about making more profit, no different from, say, buying some land with natural water sources, closing it to the public, bottling the water, then selling the bottles. All perfectly legal and fine, until the day there's no more competition and whoever owns that land raises prices so that some people can no longer afford water. That day, ten people dead of thirst or shot by guards while trespassing won't make a difference when a thousand others can still pay for that water. Still perfectly legal, although not fine at all; but capitalism as we know it has no moral limitations. It's not just turning software into services that's dangerous, but also what happens next.
And of course the total loss of control of our data. Lawyers using cloud services and social media to exchange sensitive documents and discuss them may seem crazy, but I've already seen some doing exactly that. Being ignorant about the implications of misusing today's technologies can have disastrous effects.
This part is blatantly wrong. SaaS enables on-demand consumption and lowers TCO by a great deal. E.g., InDesign used to be $700 per release; that now covers three years of subscription, and you don't have to fear obsolescence.
The same goes for much other software. Once you factor in obsolescence and upgrade cycles, it's very hard to maintain the position that SaaS is more expensive.
The other point, about losing control of your own stack, does remain valid.
How did SaaS lead to all these things? Unless you're broadening the definition of SaaS from a revenue model and software deployment strategy to something like "any software that you use that runs on remote servers". But that's the whole point of connecting computers together in a network!
Everybody wants something for free, but there are some people that recognize things are worth paying for.
That said, I started on the free plan and have friends who still use it, and the ads really aren't intrusive at all - it hardly seems like it would be worth the effort?
1.) I recently had a splash screen for a Hulu partner "free trial" membership. Turns out it's actually a bundle, but I don't want to pay more for Hulu, so keep your "ads" out of my premium feed, Spotify. Put them in a special offers section where I can peruse at my leisure.
2.) My email is constantly bombarded with ads like "[pick an artist] wants to say 'thanks' with presale tickets". I already get presale with Amex. Again, these ads could be placed in a 'shows near you' section rather than shoved in my face via email.
No ads my hiney.
Doesn't CAN-SPAM legally require them to have an unsubscribe button on anything like that?
Not perfect, but it certainly didn't break the mood as much as a randomly placed ad.
As a side note, the web app is hidden -- they want you to download the desktop app, where you can't block ads. The thing is, the desktop app performs even worse than the web app.
Given that the act of making a payment is an inconvenience in and of itself, this does suggest it's _harder_ to get people to pay for a service, but the success of things like Netflix and Spotify is a clear indicator that people really like the "pay a consistent amount of money and get access to whatever" model.
In my country nobody cares about torrents, and I don't know anyone who's using Netflix. There are some people who don't know how to properly use a computer; I can imagine that they could pay for something like that, but that's because they have no other choice, not because they like to pay.
Despite linking to "reasons not to donate to Wikipedia," I'd still prefer more sites run like Wikipedia and fewer sites run like YouTube. Don't donate to the Wikimedia Foundation unless you're heavily involved in their wikis, but if you are heavily involved and have the money, you should. Their donors need to hold them accountable, and that's not possible if they're getting tons of money from people who have no idea what's going on behind the scenes.
I don't think that debate will end until we all create some sort of hive mind AI by networking our brains together.
I strongly disagree. These giant monopolies on different corners of tech caused by centralization are the very reason we end up having to have these stupid discussions about how everything should work.
All of this crap tied up in walled gardens means we just have to cry on the outside with #DeleteFacebook hash tags and hope stuff changes.
You claim we need to change the business model. I claim the business model is irrelevant if we instead focus on open source and distributed platforms.
Google is largely pay-to-play for search; content creators are having to leverage Twitter, Instagram, Facebook, and Pinterest to try to deliver traffic, so 90% of our business is derived from creating value for these social networks and search engines in hopes of a return.
It really blows when the majority of your day isn't spent adding value to the web but playing the perpetual game of fighting for morsels of traffic from the giants.
and as we've seen, those with money can manipulate...
I think the organizational model is relevant. Would we be having the discussions around the most popular social network selling our data and completely disrespecting user privacy if the most popular social network were a non-profit organization?
I have to think there is some middle ground between publicly traded megacorp and complete decentralization.
The middle ground is instead of a few major players in a given market, we have thousands of players in a market that people can move between easily. Kind of like mastodon, but every node is a different community with different interests. It would be like if we “upgraded” all of the existing forums on the internet today.
So instead of Facebook, we should have 10,000 smaller “social networks” all competing. This sounds like a much better outcome than never trying to compete with Facebook or twitter or google.
How can these large tech companies today serve the interest of every “user”? They can’t.
Decentralization is nice, but you don’t even need it to solve this problem.
Quite possibly. Non-profits still need to make money.
The question is what does a centrally controlled organization buy us? Why is that any kind of good compromise for users over a decentralized protocol?
NPR basically has a monopoly on public radio, and yet we don't constantly hear about all the evil stuff they're doing. Why? Because they're not being run as a loss leader to sell missiles and landmines or whatever.
Centralization isn't bad as long as the central organization exists to serve the relevant stakeholders in an equitable way.
NPR does not have a monopoly and it's answerable to the member stations so it's not a great example of centralization but I agree that centralization is not inherently bad.
If you think about that phrase for a second you'll realize how ridiculous it sounds.
There is nothing about NPR that discourages other public broadcasters.
(I'm not sure I'd call Google or FB a monopoly by that definition either, despite their market shares.
> and yet we don't constantly hear about all the evil stuff they're doing
We do, though, from conservatives.
Money is focus.
Some medium sized companies will test with an ad platform, find the best performing sites and make deals with them directly.
When your company spends millions on advertising, you have teams of people to figure out if your strategy is effective.
Follow the leaders.
I've seen plenty of McDonald's ads online. Coca-Cola, usually around Christmas and the Super Bowl. P&G? Occasionally, but probably more often than I realize because it has so many brands.
The business model doesn't need to be profitable. Almost every website is unprofitable in itself, and exists only to serve as a person's or company's marketing interface. Websites ARE the ads, because that's their point. The web is a giant marketing device to sell stuff, like your resume or your local plumber or your open-source software project.
The problem is the tech - in particular, how hard it is for a person to create a website from scratch without going through some gateway company.
Facebook/Twitter/Instagram makes it super easy to post your content online. Ebay makes it easy to sell. You don't need to write any HTML or any other piece of code, or build a server and find hosting or configure AWS. You just post content from an app, and that's it. Any elderly person can do that now.
There really is a huge technological barrier to making the web usable - in particular, the entire UX ecosystem of creating content on the web. This difficult UX problem will be a hard limit on expanding public creation of the web, and should be a focus for anyone who wants to make the web open.
I would recommend that groups like the W3C or WHATWG, or hardware vendors like Apple or Cisco, develop standards that cover not only display of content, but also entry, storage, and distribution.
Why isn't there a default app, like SMS/MMS messages, on an iPhone that lets you post your own website? Think of all the infrastructure that had to be built to get SMS/MMS messages working... the web needs the same thing. Can you imagine if, to send an MMS, you had to write your own software? Or build your own hardware interface?
This is the level the web is at right now.
This works fine for a business that sells direct on the internet, or a company that advertises themselves there (like my BigCo that has a website so people can learn about it, find people to contact, etc.), etc. It doesn't work so well for a company whose product is intangible, most notably news companies. How is NY Times going to stay in business if they don't make a profit on their website? In this age, people don't really want dead-tree newspapers any more. Also, sites that don't really have a product can't survive by selling stuff. Reddit, HN, etc. are good examples here; how is HN making enough money to operate itself? Basically, it isn't, it's run through goodwill by a VC company (though it might also be a form of advertising in and of itself). Smaller, more specialized discussion forum sites are also like this, probably to a greater extent: unless they're being financed by some sponsor (which is then likely going to advertise on it directly), they need a way to pay for themselves, and that usually means ads.
Sounds like that's a perfectly profitable business model, broadly defined
What do you mean by that? What is the point of technology, if you don't save time or effort by using it? Can you explain what should people do?
What about a general subscription model that gets you access to a wide range of websites? The point is to have a fixed price for consumers so they can read as much as they want. Distributing the money to the sites you visit is a backend problem.
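That backend could be as simple as splitting each subscriber's fee pro-rata by where their attention went. A toy sketch (the site names and the attention metric are made up; real schemes differ):

```python
def split_subscription(fee_cents: int, attention: dict) -> dict:
    """Split a flat monthly fee across sites pro-rata by attention.

    attention: site -> seconds (or pageviews) this subscriber spent there.
    Integer division leaves a few remainder cents; give them to the
    largest recipient so the payouts sum exactly to the fee.
    """
    total = sum(attention.values())
    payout = {site: fee_cents * t // total for site, t in attention.items()}
    remainder = fee_cents - sum(payout.values())
    payout[max(attention, key=attention.get)] += remainder
    return payout

out = split_subscription(
    500,  # a $5.00 monthly fee, in cents
    {"news.example": 3600, "blog.example": 1200, "wiki.example": 1200},
)
print(out)
```

The interesting policy questions (should heavy readers pay the same? should attention be capped per site?) all live outside this function, which is rather the point: the consumer-facing price stays fixed.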
The downside is the high entry bar of installing a new browser and setting up a cryptocurrency wallet (payments are made with Basic Attention Tokens).
One obvious problem with all such schemes is how to prevent actors from gaming the system, e.g., to get free logins for themselves plus a bunch of friends.
I was very surprised how generous some people were. 10¢, 20¢, or more. But in the end, even with tens or hundreds of thousands of views, the article that got the most donations still only racked up about eight bucks.
The naive implementation might work by simply reseeding, say, the Wikipedia articles you've already got in your browser cache to whomever is also interested in them.
There are also UI concepts, however, to give you fine control over how much and how long you're willing to help seed.
The latter might end up being the currency of low-friction microtransactions that actually stands a chance of taking off. The downsides here are that it only helps in offsetting distribution costs, but it's not good for turning a profit (i.e., funding the creator's real life living situation). It might be fruitful to create a more fungible currency on these principles, though.
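The naive reseeding idea above amounts to a content-addressed cache: anything you've viewed is stored under its hash and re-served to peers who ask for that hash. A purely illustrative sketch (real protocols like Dat or BitTorrent add peer discovery, signing, and piece-wise exchange):

```python
import hashlib

class SeedingCache:
    """Toy content-addressed cache: keep what you've viewed, serve it
    to peers who ask by hash. Illustrative only."""

    def __init__(self):
        self.store = {}

    def view(self, content: bytes) -> str:
        # Viewing a page caches it under its content hash ...
        cid = hashlib.sha256(content).hexdigest()
        self.store[cid] = content
        return cid

    def serve(self, cid: str):
        # ... making it available to reseed to interested peers.
        return self.store.get(cid)  # None if we never viewed it

cache = SeedingCache()
cid = cache.view(b"<html>a cached article</html>")
print(cache.serve(cid) is not None)
```

Because the key is derived from the content itself, any peer holding the same bytes can serve them interchangeably, which is what lets popular pages spread their distribution cost across their readers.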
Absolutely nailed it. Although I'd go further, and say this cuts to the core of human nature, and perhaps speaks to how fundamentally bad actors can lead to a general degradation of behaviour and standards across the board. So much behaviour in business is driven by the feeling that you have to do something because everybody else is, especially in industries that are in a downward spiral already (publishing!).
What in fact should have been a "protocol"-based system has become a dictated, law-based one. That is to say, the internet should have emerged as a means to distribute data through certain forms of etiquette and agreements to communicate, but instead has emerged as a means to centralize data, where the protocol is obscured and dictated by a third party. To this day, p2p communication is hilariously obscure, especially for the average person, and so the driver of the "business model" is in fact the convenience that centralization brings, and the headaches/unreliability of decentralized systems.
The problem is a technology one.
Where can one find that definition of technology?
> What in fact should have been a "protocol"-based system has become a dictated, law-based one.
Do you think laws should not be applied to the Internet?
>Do you think laws should not be applied to the Internet?
Depends. For now I think it depends on what your opinion on communication between people is. Are the laws to regulate, monitor, enforce, and use coercion to prevent or hinder people from communicating with each other, or to allow the freedom of expression with each other?
In the long term, since we are in the infancy of such a tool, I think computers/the internet will eventually reign supreme over the old laws.
Traditional software products whose success relies on network effects often require enormous sums of money to get up and running, with the promise to somehow extract profit from the users at some point in the future.
Blockchain tech allows those kinds of products to succeed without the implicit agreement that your (as a user) data will be siphoned and monetized however the company sees fit. In fact, the owners of valuable content/contributions can be rewarded fairly.
There's a question of what kinds of business models are feasible in the current environment.
There's all sorts of business models that would be nice, if they were feasible, but which aren't currently feasible.
So actually, if certain types of business models are desirable, but aren't currently feasible, there's a deeper question of: are there ways we can change the environment the business models need to exist within?
It's brilliant: most people probably won't care about the extra workload as long as it doesn't make their computer "slow down" or get too noisy, and it realigns content providers' incentives from "collect more/better data and pack in as many ads as possible" to just "keep people on the site as long as possible".
The cost is also just the price of electricity which, while not totally trivial, is cheap enough for this to work and something people are already paying for.
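Back-of-envelope, with assumed numbers (wattage, usage, and electricity rates will all vary):

```python
# All numbers are assumptions, not measurements.
watts = 30            # extra CPU draw while the page computes for you
hours_per_day = 2     # time spent on such sites per day
price_per_kwh = 0.15  # USD per kWh, a typical residential rate

monthly_kwh = watts / 1000 * hours_per_day * 30
monthly_cost = monthly_kwh * price_per_kwh
print(f"{monthly_kwh:.1f} kWh, about ${monthly_cost:.2f} per month")
```

Even if the real draw were several times higher, the monthly cost stays well under a typical subscription, which is why the scheme is at least plausible economically.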
Mhh, why not go with a Kickstarter-style payment scheme? Have a campaign to fund the operating budget, another to fund feature A, and another to fund feature B. You get user-aligned prioritization for free, while users are happy to be able to choose their level of support. Can’t believe I haven’t seen this model in action yet!
In fact, I think tracking has reduced ads. If not, ads would be a throw-it-at-the-wall-regardless-of-what-works affair.
Okay, it's maybe a bit cynical, but stuff like this will happen if web browsers are twisted further to provide the appearance of privacy and security to the user.
For example, Google found it a wise investment to sponsor an entire browser just to keep the Referer header and third-party cookies enabled.
If all browsers disabled these (Apple has already gotten rid of third-party cookies), then the cost to monetize the web as it is being monetized today would be too high to be effective.
- Beaker lets you browse entire websites (dat archives) and fork them.
- It lets you create and serve your own sites directly from the browser and seed them from a server (like a torrent).
- It lets other sites create templated sites under your name for user generated content.
- Visitors by default temporarily seed your website, which may reduce single points of failure, hugs of death, and costs.
With this peer-to-peer torrent-like approach, the web can become distributed again and feel more like a "web". There's still a lot of work left and maybe Beaker itself isn't the best implementation for this idea, but it's a good start.
*Disclaimer: I am a volunteer contributor to Bunsen Browser.
@loceng Yes, that is how Dat works; thus any browser serving Dat archives helps propagate changes to them. Note that only changes signed with the archive's private key are propagated. All of that happens under the hood for the owner of a dat archive, though. You just run the `dat share` command in a directory, you get a public address for the archive, and any time you change a file in the archive it is automatically signed with the private key and shared out to the network.
Emerging distributed tech won't fix a UX problem just because it happens to be technologically sophisticated (he calls out TOR, but I think he is making a general comment here).
Instead, he asks, why not spend some effort giving older tech like RSS a better UX?
I'm inclined to agree, but on the other hand it seems like the marketplace of ideas speaks for itself and we should be keeping our eyes on the future.
So you can have your own RSS subscriptions in a Dat, a feed reader in another Dat, click a button on a website to subscribe to it and add it to your Dat. The Feed reader can keep track of what you've read and store it in its own Dat or a different Dat (if you want client/data separation). Your mobile phone can sync to your Dat(s) so you have Desktop/Mobile sync all in a single place.
I've not tried this, but I don't see why it wouldn't work.
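As a sketch of what that client/data separation might look like, here subscriptions and read-state live as plain JSON files in a folder standing in for separate Dat archives (the file names and schema are invented for illustration):

```python
import json
import pathlib
import tempfile

# A temp folder stands in for a Dat archive.
root = pathlib.Path(tempfile.mkdtemp())
subscriptions = root / "subscriptions.json"
read_state = root / "read-state.json"

subscriptions.write_text(json.dumps(["https://example.com/feed.xml"]))
read_state.write_text(json.dumps({}))

def mark_read(feed_url: str, item_id: str) -> None:
    # Read-modify-write the read-state file, keyed by feed URL.
    state = json.loads(read_state.read_text())
    state.setdefault(feed_url, []).append(item_id)
    read_state.write_text(json.dumps(state))

mark_read("https://example.com/feed.xml", "post-42")
print(json.loads(read_state.read_text()))
```

Since the state is just files, "sync to mobile" reduces to replicating the folder, which is exactly the job Dat is built for.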
Mobile devices are slower, have less memory, and must consume less power than desktop, laptop, or server devices. To achieve good battery life they really need to be in an almost-off state most of the time. Add to this the fact that cellular data plans limit bandwidth and cellular networks are a lot slower than most land-line networks and you also have to be very efficient with the use of bandwidth.
This means that decentralized systems that rely on peer to peer participatory propagation of data or distributed compute just don't work well on mobile. Anything with P2P data propagation will use too much data plan and run the radio too much, shortening battery life, while anything with distributed compute will destroy battery life and turn your phone into a pocket hand warmer.
Mobile devices really are thin clients. I call them "dumb terminals for the cloud." Since the cloud is mainframe 2.0, mobile devices are the "glass TTY" (e.g. VT100) 2.0.
The best solution is probably not to fight the nature of mobile devices as thin clients but to tether them to stationary devices. But which stationary devices? Laptops are themselves mobile and are off half the time, and most people (myself included) no longer own desktops. I have a personal server but I'm a geek and a huge minority. Most people just do not own an always-on device.
Farming this out to random always-on devices is a security nightmare or at best is no better than the vertically integrated silo-ed cloud.
I see only three solutions:
(1) Create a niche for a personal always-on server type device and successfully market one to the end user. It would have to be open enough to allow the server side of 'apps' to be installed. Many have tried to do this, but nothing has caught on.
(2) Create a mobile device that's designed to be a "real computer." With 5G coming the bandwidth for this might be on the way, but you'd also have to contend with battery life and heat dissipation. One avenue would be to split the CPU in two: a high-power burstable CPU and a low-power slow always-on CPU. Require the always-on parts of decentralized services to run there and as a result to be very optimized. The problem is that a mass-market mobile device is a huge undertaking. Another route might be to sell a snap-on case that carries an extra battery and also includes a mini-server CPU, RAM, storage, etc. This would make your phone a bit bulkier but if there are benefits / killer apps it could catch on.
(3) Solve the security problems inherent in appointing random stationary nodes to serve random mobile devices. This would probably involve a major innovation like fast scalable fully homomorphic encrypted virtual machines or really tough security enclave processors.
> The mobile revolution is and has been by far the most powerful driver of centralization in the last 10-15 years.
Many of us who were active users of Skype in its earlier days (mid-2000s) might remember Skype's first attempt at a mobile client. They took all the distributed P2P goodness of the desktop client and tried to have that run on the mobile environment.
The result was sadly a smartphone app that was slow and rapidly drained your battery. For those of us with many Skype group chats open, the mobile client was basically unusable.
So Microsoft/Skype had to go back and rethink the mobile client. To your points in your reply... they made it a "thin client" with all the power in the centralized servers.
As they did that, it seemed from the outside that they determined over time that maintaining a desktop P2P source code and a mobile thin-client/server source code didn't make sense. And so ultimately the desktop P2P was abandoned and everything became client/server. (Which is the case now - Skype on your desktop is basically a wrapper for a web client.)
And so... the quest for a good mobile user experience wound up being one of the drivers for centralizing one of the original decentralized P2P apps. 
 Yes, there were, I'm sure, many other contributing factors, including the issues around the supernodes that led to one of the major outages. And yes, I do realize that Skype, even its earliest form was NOT a completely-decentralized communications app. They did have a centralized mechanism for logins / authentication and also for PSTN gateways and other services.
Maybe regulation can solve some of the problems with the current systems, but the idealist in me really wants to see provably transparent (open source) and secure solutions which don't require trust in the hardware, so we can still make use of modern, efficient (federated) server farms without having to give up control over our data.
Since sites are just JSON, they're highly portable, and sections or whole pages can be simply copied from one file to another, to add content to your site.
The project is in late alpha - I'm just now completing the in-browser editor that uploads to S3. Other than requiring fewer server calls, it uses traditional browsers, servers, networks, etc.
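Since sites are just JSON, copying a section from one site file into another is only a few lines. A sketch with an invented schema (not the project's actual format):

```python
import copy
import json

# Invented schema: a 'site' is a dict with a list of sections.
site_a = {"title": "Site A", "sections": [{"id": "about", "html": "<p>Hi</p>"}]}
site_b = {"title": "Site B", "sections": []}

# Find a section by id and deep-copy it into the other site, so later
# edits to one site don't leak into the other.
about = next(s for s in site_a["sections"] if s["id"] == "about")
site_b["sections"].append(copy.deepcopy(about))

print(json.dumps(site_b))
```

This portability is the appeal of the format: "copy a section" is plain data manipulation, with no server round-trip required until you choose to upload.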
I'm sure the internal story of why Google killed off Reader is far more mundane office politics than we'll ever know, but since then, no other reader has risen to popularity. As the article mentions, there's Feedly, whose UI is functional if a bit baroque (why does every feed need to be tagged?), but ultimately it's still like trying to drink from a firehose. No RSS reader company has come about since that shows Google was wrong to kill off Reader.
It's easy enough to think up improvements to their UX, but we don't have a marketplace of ideas because there is so much friction (even ignoring the work involved in starting up a company and hiring a team, there's no way to introduce a small tweak to Feedly without recreating their platform - and then you'd still have to convince enough people to migrate to your Feedly-clone first).
What we have is a marketplace of VC-funded corporations, and branding is king. There's no stock exchange for listing specific features Feedly could implement in order to promote better RSS reader software.
Any recommendation on how to make time series data available via Dat or IPFS is highly appreciated.
There will be base data of different systems and experiment data from experiments running in those environments. So far I have no concrete idea on how to segmentize and make available the data in a sensible way.
It does depend on the scale of your data and whether you want to upload data in a streaming or discrete fashion. If by segmentize you mean chunking your data into smaller sections, that is handled automatically by both DAT and IPFS.
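To illustrate what that automatic chunking looks like conceptually, here's a simplified model: fixed-size chunks plus content addressing. (Real DAT/IPFS additionally link the chunks into a Merkle tree; IPFS defaults to 256 KiB chunks.)

```python
import hashlib

def chunk_and_hash(data: bytes, chunk_size: int = 262144):
    """Split data into fixed-size chunks and content-address each one.
    A simplified model of what DAT/IPFS do internally."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return [(hashlib.sha256(c).hexdigest(), len(c)) for c in chunks]

# A 600 KiB payload becomes three addressable chunks: 256 + 256 + 88 KiB.
# Note the first two all-zero chunks hash identically, so content
# addressing deduplicates them for free.
addressed = chunk_and_hash(b"\x00" * (600 * 1024))
print([size for _, size in addressed])  # → [262144, 262144, 90112]
```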
Interesting idea and project. The browser's UI is very different... and lacks some (most) customization options... I spent a long time looking for a way to increase the font-size, for example. (Not all of us are accustomed to squinting at phones.) As for a preferences panel: I thought it was hidden somewhere, but it seems to be missing entirely. Say what?
After an hour toying with the interface: technically it looks to have a lot of possibilities. I have serious concerns about the lack of transparency ... unlike most browsers today ... what's going on in security, ad-blocking, tracking? I found no way to tell.
In sum, cool project. The potential is clear. I can't imagine anyone non-technical not running away on first sight. It's more opaque than Ello, even. More like an oscilloscope than a mobile!
I can give an example of non-static data (though what is static vs dynamic can be a grey area):
SPAs work great with Beaker/Dat since users can download the app and use it offline. The data can be any Dat archive. So for a social network, each user can have their own Dat archives of images and posts. The root site can hold an index of each user and download individual files from their Dat and display them using client-side routing. In this scenario, each user has their own database as a Dat which is indexed by a parent Dat website.
Demo, a Twitter clone: https://github.com/beakerbrowser/fritter
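The parent-index pattern described above can be sketched with plain data structures. (Archive URLs and file layout here are hypothetical; in Beaker the reads would go through the DatArchive API rather than a dict lookup.)

```python
# Each user publishes their own archive of posts; the root site holds
# only an index mapping usernames to archive URLs, and the client
# assembles the feed by reading from each user-owned archive.

user_archives = {
    "dat://alice.example": {"posts/1.json": {"text": "hello"}},
    "dat://bob.example":   {"posts/1.json": {"text": "hi all"}},
}

# The root site's index: the only state the parent Dat needs to store.
index = {"alice": "dat://alice.example", "bob": "dat://bob.example"}

def fetch_feed(index, archives):
    feed = []
    for user, url in index.items():
        for path, post in archives[url].items():
            feed.append((user, post["text"]))
    return feed

print(fetch_feed(index, user_archives))
```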
Also, at the end of the day, they are still websites. So you can still use a central HTTP server to:
- Serve your data directly
- Provide an API to edit your Dat archive instead of distributing it into multiple user-owned archives.
It also means you don't need to migrate a website to a completely different paradigm in one big bang.
Myself and others are currently volunteering to help bring the Dat Archive API to Bunsen Browser, a mobile Dat Web client currently only for Android (unless someone wants to jump in and make the build for iOS).
Are there any DAT:// homepages or web-rings or whatever that I can start using to browse around? I have it installed but can't find any cool DAT sites to browse.
I also wrote a blog post about my experience with dat/beaker and getting my domain set up for access in beaker - dat://tomjwatson.com/blog/decentralising-the-web/
The whole issue I see with these articles is that they view the Web mainly from the content producer side. However, the majority of Web use is consuming content. And also the problem of the few walled gardens is on the consuming side. Everybody can nowadays put content on the Web with ease outside of these gardens. It is just hard to get attention for that content.
So if you really want to make the Web more distributed again, you need to come up with models for distributed relevance assessment, spam filtering, quality checking, etc. Work on a new 'Web infrastructure' totally misses the point.
(context: I have worked in P2P research for a while)
Replacing Facebook with RSS or Feedly is nonsense. My friends and family have no idea what those are. Facebook makes it easy for people to connect with old friends and family members through active human interactions: like this, follow her, read that. By doing so you do tell Facebook about you and others. It's interactive and solves a pain point: keeping in touch and interacting with others, your community. Feedly does not do that.
He was lamenting the loss of a particular forum with one of the other guys. “All just on Facebook now”, he said, and he said it with sadness. He missed that old forum.
Perhaps if forums could talk to each other you'd have an experience more like mastodon, but in concepts that more people understand.
Something like an open source Stack Exchange network. So you can log-in to one of them but then easily create an account using the credentials of the original forum that you signed in from.
Edit: I know Discourse is kind of working along these lines, but I don't think they're designed to have activity between forums.
Perhaps it could just be called 'Friend Forums'.
Though, maybe discourse has gotten better since I tried it a couple of years ago.
It follows standards that you need to know if you want to be able to easily hack it.
Which means learning standards.
On the other hand, hacking something like a Wordpress template takes almost zero knowledge. You can just start poking at things.
The lack of a relatively simple templating system is a big drawback in Discourse. Especially if you're not a rails developer, and you're running a community forum as a hobby.
I know how to program because of software such as phpbb and Wordpress, which are pretty much hacked together. Maybe because they're hacks, they have easier entry points.
I do think the Discourse people have different goals than I'm talking about here, however. If their goal was to make software that was easy to modify, hack, and deploy, they would have made different choices.
2. Single sign-on, and (as you wrote), "using the credentials of the original forum":
What about using Scuttlebutt's identities? Everyone has his/her own ed25519 key pair. "The public key is used as the identifier", see: http://scuttlebot.io/more/protocols/secure-scuttlebutt.html. These can be created in a decentralized manner. The keys are really long, but maybe, in a specific forum, one wouldn't need to show the whole key. One could instead show a forum-local-@username + the shortest unique key prefix (unique in that forum).
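The shortest-unique-prefix idea is easy to sketch (the key strings below are made-up fragments, not real ed25519 keys):

```python
def shortest_unique_prefixes(keys):
    """For each key, find the shortest prefix not shared with any other
    key in the forum, so the UI can show @username + prefix instead of
    the full public key. Assumes all keys are distinct."""
    out = {}
    for k in keys:
        others = [o for o in keys if o != k]
        n = 1
        while any(o.startswith(k[:n]) for o in others):
            n += 1
        out[k] = k[:n]
    return out

# Hypothetical key fragments for three forum members.
keys = ["FCX9ab...", "FCYk21...", "9zQm44..."]
print(shortest_unique_prefixes(keys))
```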
I despise cynics who, in the process of underestimating others, crash the party, our party.
If the man wants his damn forum back, give him his forum, in as many possible flavours and sizes as we possibly can. Then let him deliberate, hopefully learn and ultimately adapt.
Distributed community discussion/chat clients, with universal single-sign-on, could maybe be equally simple?
Two tricky tech things could be 1) how to share one's key, between all one's devices. And 2) how can all one's browser tabs, and discussion apps, get access to the key, once it's installed on localhost? without being able to steal the private key.
(If one is a Scuttlebutt developer, though, then one might need to know about ed25519 identities.)
It would hopefully avert the issue Reddit like sites have where different groups want to control what others can say on the platform, by taking control out of the hands of a large corporation.
> Of course, it still raises questions about how it’d be financed or supported. No investor would back a service that couldn’t be controlled at all and could lose most of its userbase overnight.
I think the WordPress/WooCommerce model works for this. So you handle forum setup for businesses, or have paid-for plugins. You can also still have a centralised system, basically like reddit/wordpress.com, but allow subdomains to be redirected to someone's own domain (all for a minor cost).
Anyway, you wrote (in the blog): "independently hosted forums" — I like that, for various reasons.
Actually, I might have built a bit of a Reddit alternative, minus single-sign-on. Here's a somewhat Reddit-like discussion:
And one can create per site sub communities, a bit like subreddits. Actually, there are improvements over Reddit: https://www.talkyard.io/-32/how-hacker-news-can-be-improved-... . Someone is actually emailing with me about migrating their subreddits from Reddit to a Talkyard community at their own domain.
You wrote: "It would be a very complicated forum aggregator with a ton of features necessary to create the combined community side like on Reddit or its alternatives." — I agree. And it'd also be lovely with a mobile phone app, that could connect to all these disparate communities. So one didn't need to type the address in the mobile phone browser. Instead one just clicked in a list of recently visited communities. And it showed notifications too. ... A bit like mobile Facebook and Reddit, but connected to 999 decentralized communities.
I've been thinking a bit about building this mobile app, and forum aggregator / search engine. And initially make it work with Discourse (https://www.discourse.org), Flarum (http://flarum.org) and Talkyard (https://www.talkyard.io).
Discourse is just another forum software, but it's open source and you can set up your own. Jeff's taken some of the ideas from StackOverflow and tried to apply them to forums. To be honest, I think most people are happy with regular forums; they work as you expect and there aren't many rules to learn. I've come across Discourse forums a couple of times. Plotly charts use Discourse for their forum. They work well, but as you say, Discourse tried to rebrand the forum and so introduced a layer that most people just won't get.
https://www.proboards.com/ is still around but doesn't have global accounts as far as I can remember.
I'm not convinced there's much payoff waiting for you if you were to create such a system.
I think that Reddit's format is so popular because you get to just scroll a stream of provocative/vetted headlines. Even the comments are geared in a way where you're scrolling a stream of one-off provocative comments. There is no long form conversation over time. Everything is ephemeral.
By the way, kind of interesting: https://en.wikipedia.org/wiki/Ezboard#Technology
Of all places, G+ does reasonably well at this, with a suitably selective group.
Mailing lists / usenet are still probably the pinnacle.
Pick a few niches and make a new account that only subscribes to small subreddits about them; it really does put a community feel to it all.
I remember when I knew everyone on rec.sports.basketball.college and then the damn web came along, and everyone was on the internet.
Each forum could still have their own domain and installation.
Ideally it would be in some way compatible to existing forums.
Perhaps better integration of forums via RSS. One thing regular forums tend to still have is RSS feeds built in. They can also be a nice go-between for email and online users (although, as has been said many times, email can be the death of forums).
As people who know the dangers of poor privacy choices we should protect those who have not had the opportunity to consider it yet.
Social media creates a semi-public record for bad actors to identify targets to persecute for whatever they've decided to retroactively declare to be a crime. That may be a small risk in a stable democracy, but it's still worth considering due to the magnitude of the potential consequences.
I personally think it is better to tackle the issue by making it harder/impossible for companies to obtain this data and allowing users to force companies to delete ANY data they have on them by request.
Almost three quarters of all smokers in the US want to quit: http://news.gallup.com/poll/163763/smokers-quit-tried-multip...
I think it is pretty clear that the information and social stigma campaigns run by people who care about the health of their fellow citizens have been effective.
So why not try to help them do what they know is best for them instead of throwing our hands up in the air and saying "your problem buddy".
Humans are notoriously bad at estimating future vs. current value. I think we should help people make good long term decisions instead of letting biology sabotage us.
This is clearly a huge appeal of social media sites, as they act as content aggregators that do the ‘dirty work’ (oftentimes very poorly) in getting relevant content to chosen eyeballs.
As much as I want there to be a reversion to more ‘vanilla’ content consumption, I’m completely in agreement with your refutation that Feedly would indeed be a nonsense bet to make on where we should be headed.
If webmasters and bloggers realized the benefit of an honest, dead simple user experience, they might earn themselves a bookmark (which is currently the only alternative to social media that average users can probably understand).
And it is also sometimes the people's fault, when they forget civility and humility in discussions and behave like a$$holes.
Other times, sane and healthy online communities fail to defend themselves against those and against more sneaky trolls.
On top of that, communication manners seem to degrade more and more with social media usage. I am often appalled by the tight-lipped one-line responses to contact messages on $craigslist-like-service. A friendly message asking about the availability of an item on sale is met with "yes it is still available." No "hello", no end greeting, nothing along the lines of "if you'd like to buy, give me a call at $number".
Makes it incredibly easy to just move along.
Another anecdote: a few days ago my Mum mentioned she was going to cut down on Facebook and move to other social media in light of what's been happening. And she's about as non-technical a user as you get.
Thanks for sharing, but I wonder if she realizes this isn’t going to help...
We just need to tweak the advertising model to disable micro targeting by both advertisers and platforms. Google is a far worse offender here. This will kill the incentives for surveillance and stalking at source.
People stalk others because they want to, because there are no rules against it yet, because they are greedy and don't care about externalities, because they are happy to profess and expect ethical behavior from others in society when they can't demonstrate it themselves. Advertising existed and thrived long before the existence of the internet.
Advertising by textual context and immediate location will retain their business model. The hoovering up of data, stalking and building profiles for micro targeting is unethical and has to go.
I get that.
The comparative advantage of Google and Facebook was that they could provide a demographic with greater specificity than anyone before, by a lot. There is an old saying, "I know half my advertising is wasted, I just don't know which half." Google and FB were going to end that.
I agree that RSS and Feedly are no replacement. But other replacements (closed media enabled chats) are happening right now and people happily switch because they see their privacy in danger.
Tl;dr: it's in the point of view.
This scandal is so incredibly overblown it's crazy. I've been arguing this since the beginning.
Media blew this up because it's loosely connected to Trump and because there's no better story than when you tear down a high flyer. Facebook was/is an American success story.
The web as it was in the early 90s was an alternative to major content delivery platforms (TV and the press, mostly). So there were these massive systems, the press and TV, that would own most of our attention span. And then there was this cute alternative technology with a great community that was yet unpolluted by the big guys.
Today, the big content guys colonized the medium, so it no longer feels like the web is "a cacophony of different sites and voices" to quote the article. But in fact, we're in the same situation: big guys with money and loads of content on one side and small guys with communities on the other side.
The web as it is today doesn't prevent you from spreading your ideas to the world on your very own server... And websites like reddit and hacker news are great amplifiers of small voices.
I would probably never have read this article if it had been published in the 90s. Since it's #1 on HN, perhaps 50k people have read it today! The way I use the web feels very much like the 90s: a few aggregation sites, a lot of excellent content written by independent guys, links between them...
Who cares about the centralized internet when the internet that we've loved since the 90s is still there and thriving?
Which economy has better prospects in the long run? Currently internet is very U.S. centric because most of the big players are located there. That could change if European legislation is more supportive for small agile companies to evolve.
Surely you see the contradiction there.
> Currently internet is very U.S. centric because most of the big players are located there. That could change if [...]
That's not just a coincidence and I don't believe it's any kind of first-mover advantage. It's about the environment in which you operate and passing more and more laws and rules, regardless of the intentions, is the opposite direction. One can lament the digital lawlessness, but we can't pretend it didn't have value or that tailored laws could have a similar effect.
One possibility, of course, is that larger companies will have more resources to tackle GDPR compliance, and thus be better able to respond effectively than small companies. However, it's also possible that, by taking privacy / security seriously from day one and storing the minimal set of user data needed to operate, a smaller company will have a distinct advantage over some BigCo that must now migrate sprawling, inter-tangled distributed systems never designed for GDPR compliance. After all, that small company now has a compelling legal reason to avoid feature / data warehousing bloat and save their limited engineering resources, whereas the larger company most certainly has that bloat baked deeply into their stack due to years and years of "Big Data" hype.
It's not too early to claim that the risks are disproportionate. Those of us with small companies really just count on subjective enforcement. Sadly, it seems everyone says "do nothing wrong, nothing to worry about" with these kinds of laws and don't understand risk management or the cost of conformance.
You're presenting the situation as a failure of the EU, but I think you've missed the point here. The big players are located in the US because the environment there rewards big players and has a general snowball effect. The EU system rewards small-player innovation and limits the creation of very large, dominant, monopolistic big players. This should be a net benefit to the overall system, if it weren't for big players just coming in from elsewhere. Unfortunately, due to the global nature of the web, this leads to less-regulated US big players dominating the EU players. Solving that is tough, but the fact GDPR applies to US companies is an interesting start.
I disagree. Do you believe this is the case today or is it the goal? From what I understand, the gap is quite large between number of small companies based on environment. Many of us with smaller companies stand on the shoulder of giants. Sometimes you have to take the bad with the good, and that can mean the financial incentive to become large spurs those of us who are small. Artificial ceilings, however altruistic people tell themselves they are by limiting the big, bad, scary companies, often are just a low tide lowering all boats in a trickle-down way.
Also, GDPR applies to companies of all sizes, which can hurt small companies more than bigger ones.
My point was not that they would be about the same thing. My point is that GDPR is pretty clearly about the rights of the little guy. Lack of net neutrality is clearly about the rights of big businesses. Neither alone seems to do much good or harm. But if those attitudes are permanent in the legislating systems, that will have an effect.
So do you believe companies like Google, Microsoft and Facebook are going to hire the best people, make the best money and offer the best content? It could be. Or there could be something disruptive. Where is that disruption going to happen?
if European legislation is more supportive for small agile companies to evolve
GDPR is great, but it seems to me many of the Congresswomen/men were questioning how legislation would impact the ability of US tech companies to remain agile. I do not think we have such pro-business forces in the EU. That's why we are so far behind the US and China in tech. I fear more and more legislation will make us less likely to have small agile companies.
No need to get infected by American exceptionalism. The UK Parliament Select Committee asked Zuckerberg in for questioning over this, and he would have received a comparable grilling. Zuck refused to go.
It's easy to found a company and pay a fair flat rate percentage tax rate as eu (or schengen for this topic) citizen. It's just not economical to stay here once you get big tho.
The US definitely has its problems, but as long as there's still an ISP in the Valley who upholds Net Neutrality, development is still probably going to happen there.
Asia will replace it, if anything.
A non-neutral internet would have impacted the 90s internet in the same way.
Is priced content worse than the current situation, where services are monetized exclusively by essentially running malware on the user's machine?
You're conflating two things: the revenue model for publishers and ISPs cutting off access to services which do not sign an agreement with them. The fee ISPs charge, you can take as read would not go towards subsidizing independent content.
Everyone is being nudged to where there's money to be made.
This is an ass-backward way of looking at what actually happens. People (you, me, CEOs of large corporations) are the entire system. You and I want a thing, there is a demand, smart people notice that demand and fill it. Folk flock to the product that satisfies their demands.
I admit there is clever marketing that manipulates our ape brains in very effective ways, perhaps that nudges us toward one thing over another, but there is legitimate demand beyond the marketing.
Then there are communities and marketplaces. These products are largely successful because of a demand for an agreed meeting place to share ideas, sell things, exchange messages. You don't need to be 'nudged' toward these places, they just happen to be objectively very useful places because everyone else agrees they are.
Money being made is a byproduct of the success of these places. I am so tired of people looking at big successful corporations as somehow being an existential threat to an otherwise utopian, perfect system. Regular people give power to corporations by patronizing them; it's as simple as that. Almost always, no one is forcing anyone to go to one place over another; we almost always have a choice. We need to recognize that as consumers we should take responsibility for the companies we promote to the forefront. If you don't like marketing or large corporations, reject them! Take responsibility for how you spend your money; we have the power.
The layman doesn't generally know what's good for them in any specialized area; that was quite clear from the level of technical understanding shown during the questioning of Mark Zuckerberg. It's obvious career politicians will eventually need to give way to people with a strong holistic understanding of all systems, the ability to learn in depth, and strong critical thinking.
We need to do a better job as a society of developing these people, and of capturing society's attention with genuine interest and excitement, and not merely with hype and paid-for reach.
Today we don't "pay" for quality content. Today content simply has authority if a lot of people who think like "me" read it _else_ -1.
I don't think that really means anything. There is so much junk, that anything that honestly attempts to be good is better than "so much of what comes via the web today."
Don't forget that you're on Hacker News, a relatively small site with a very aggressive name for "normies" :)
No, the Web has been weaponized by a collusion between corporations and government in order to press as hard as they can for censorship and thought control. Because at the end of the day, if people realized they didn't need big government or big corporations, where would that leave all these billionaires with their revolving doors in and out of publicly appointed power roles?
I honestly get depressed when I think about the dream of open government and more widespread knowledge that was supposed to come about with all this technology working for the little guy. Well, some of it came true, but as soon as it became a threat to the orthodoxy, they found a way to shut it down. And now the baseline model for all these companies is China. Google would be happy to censor Americans in the same way they censor content for the Chinese Communists. One policy for the whole world, and it's not a policy of freedom and openness, but one of oppression, as usual.
AOL was one big ass walled garden back in the 1990s. It was pretty popular back then.
The funny thing is, back then, it was popular for 2 groups of people. First were younger kids who wanted to stir up trouble on the network (AOHell, etc.). The second were old people.
My uncle is ~70 and still has an AOL account because he thinks it's "the internet". That's how strong AOL's walled garden / marketing was. Although, it didn't help that back then you would have to dial an AOL phone number on your modem to connect, so I could see how it could be confusing.
I'm not sure the internet is much different now than it was in 2013, or even 2008. I suppose the accessibility (always on phones), and the increase in online media (video/audio streaming as the norm) is the biggest change from 08 to 13, but last 5 years?
AOL only really died when they decided to block port 25 to make you use their webmailer.
What if I said that? Is this hypothetical tense helping the conversation?
Is not being able to copy code from Facebook using view source killing the web? Of course not. If you are just starting out there are more resources available now than ever.
And I hope to God we never have a web built on block chain technology.
Now if he means you can't build Facebook the service... well, yeah, Google didn't go around telling everyone in the 90's how its algorithms worked either. But you can clone Google's website super easily!
Right now there are two webs and they're interwoven to the point that it has become hard to see, but there is the 'web of applications' and the 'web of information'. There is no particular reason why the second should be implemented using the tools for the first.
"The web was never supposed to be a few walled gardens of concentrated content owned by Facebook, YouTube, Twitter, and a few other major publishers. It was supposed to be a cacophony of different sites and voices."
It still is. Facebook, YT and Twitter are simply better known than your local pet owners forum, but that doesn't mean the pet owners forum doesn't exist. It's similar to supermarket chains and your local grocery store. Both still exist.
When I made All About Berlin, it was just HTML and CSS. Nothing complex there. I eventually added Disqus and Google Analytics, but it's still possible to build simple websites.
1. No encryption, meaning any intermediate router could have altered the contents or at the very least surveilled them.
2. Illegible line length without me boosting the font size or browser width.
3. No protection for the reader that this is actually motherfuckingwebsite.com and not some .com with a Cyrillic character instead.
4. No protection for the reader that what is on this site hasn't been fingerprinted somehow (zero-width characters, Cyrillic, etc).
5. Even if the site was encrypted the DNS lookup would have exposed that I was looking at it anyway.
6. It's on the root of the domain which has a number of headaches at scale and roots vs www in general complicate things like understanding certs, CORS, etc.
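Point 3, the Cyrillic-lookalike problem, is easy to demonstrate: IDNA encoding turns the homograph into visibly different punycode, which is one way a client could flag it.

```python
# The Cyrillic 'а' (U+0430) renders identically to Latin 'a', but the
# IDNA encoding of the domain exposes the difference as punycode.
latin = "apple.com"
lookalike = "\u0430pple.com"  # first letter is Cyrillic

print(latin.encode("idna"))      # stays plain ASCII
print(lookalike.encode("idna"))  # becomes an xn-- punycode label
```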
I agree that we need something simpler and locked down, and I've even thought through not just the web portion but the underlying portion as well. But it requires a massive change to get something that is secure and trustworthy, not prone to spam, usable by the non-technical, and resilient against well-funded mal-actors like warring nation states. And none of the replacements make the world better for most of the world's well-funded corporations. Silicon Valley and Hollywood would both take a real hit because their business models both rely on insecurity. Hollywood is reliant on surveillance to enforce copyright, and SV is reliant on surveillance to supply targeted ads.
They are increasingly moving to FB Groups. It's a good tool for what it does.
Scale matters. Information inequality was inevitable.
Free trade led to growing concentrations of economic power and rising economic inequality. We also have information inequality to go with it, and because information transfer is much lower cost than trade, information inequality will be much higher and power more concentrated (social media stocks should thus be valued higher than companies which make real things)
Not sure the article understands that inequality is inherent in the structure of the web.
If you wanted to reduce inequality there are two options:
2. raising the 'transaction' costs
In economics we know what those look like:
1. progressive taxation and a welfare state
2. legislation to raise environmental standards, increase minimum wages, etc.
What's the equivalent of a social welfare state and higher minimum wage when it comes to information?
Another way of looking at this is in terms of externalities: free social media generates both positive (connecting with old friends) and negative externalities (fake news, election mischief, etc).
One way of internalising both would be for Facebook to charge users. That is never going to happen. Carbon credits and the like make polluters internalise the costs that society would otherwise incur. So, what kind of legislation (and it will have to be legislation) would get social media companies to pay for the negative externalities?
What are the numbers on views of original Youtube content vs. views of clips of material that infringes copyright? My guess is that the latter still makes up a significant portion of clicks.
If someone makes a serious P2P alternative to Youtube, not only will infringing content be there-- such content's existence will be a measure of how successful the platform is.
Either that, or you design something like the unexplained decentralized internet on the show "Silicon Valley" which somehow still has a single company controlling the pipes.
If not that, then "P2P-tube" has to house Silvestri's "Back to the Future" theme just as it houses impassioned vlogs of walled-garden escapees. If you don't have both, you'll end up with a bunch of walled-garden escapees rationalizing the virtues of housing unpopular, homogenized content.
Anyway, as passive onlookers it's easy to dismiss Youtube's infringing content. But it's a lot more difficult when it's a fledgling service that content owners cannot monetize. I think we've been through this before with Bittorrent. What was O'Reilly's opinion then?
 The first Youtube link that popped up was obviously put together in a consumer video editor and currently has over 9 million views:
But YouTube, Facebook, and Google destroy (among others) the 'normal media' as they oligopolize advertising. They hinder progress/competition as they cross-subsidize their ventures with ad money.
The simplest solution imho would be to tax advertising strongly. This could be done independently by countries. E.g. every single advert in Germany would bear a tax of 30 - 70 %. This gives a country the power (and money) to steer in directions which benefit the country as a whole, i.e. support traditional media, public broadcaster, culture.
Concentrated advertising money is dangerous. It should be regulated. The same approach is being done with e.g. cigarettes and gas. Tax is here to contain an unwanted action (smoking) or to build infrastructure (roads). Couldn't the same approach be taken for the internet?
A thought which came to mind would be for a country to levy an access charge (think of the BBC license fee) to access social networks based on usage. The money would then be used to clean up the negative externalities caused by the social network in that country.
The internet in the 90s seemed more fun and more open. Perhaps it's because the only people really participating were not interested in abusing sites or people. It was mostly nerds sharing nerdy things. Once money got involved, and free everything was available, it turned into this soup of bots, trolls, AI, fake this and that, big money, swaying public opinion, and gross content.
Smaller sites and discussion boards have been at a disadvantage recently trying to fend off spam, bots, and sock accounts and very often lose out to the big sites. Effectively controlling abuse and doing it a cost-effective way can be very hard.
RealPerson.io (https://realperson.io) was created as a way for websites to verify that a user is a real person, without disclosing personal details about the user. Each person creates a single account on RealPerson.io and can then create separate, randomly generated codes for each site they use. Websites register on RealPerson.io, and at signup they simply ask the user for their RealPerson code for that site. A backend REST call to RealPerson.io with the user's code for the site returns "yes" or "no." Sites don't share codes, so you can't be tracked across websites. No shared logins or authentication code. Bots would have to pay for a code for each account, which would make running a bot farm cost-prohibitive.
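As a sketch of the scheme described (the actual RealPerson.io API isn't shown here, so the class, method names, and codes below are all invented), a toy in-memory model illustrates why per-site random codes answer only yes/no and can't be correlated across sites:

```python
import secrets

class PersonRegistry:
    """Toy model: one verified person, one random code per site."""

    def __init__(self):
        self._codes = {}  # (site, code) -> person id, never exposed to sites

    def issue_code(self, person_id, site):
        # The code is random, so it reveals nothing about the person
        # and nothing shared with the same person's code on another site.
        code = secrets.token_urlsafe(16)
        self._codes[(site, code)] = person_id
        return code

    def verify(self, site, code):
        # The "backend REST call": the site learns only yes or no.
        return (site, code) in self._codes

registry = PersonRegistry()
alice_forum = registry.issue_code("alice", "forum.example")
alice_shop = registry.issue_code("alice", "shop.example")

assert registry.verify("forum.example", alice_forum)      # real person: yes
assert not registry.verify("shop.example", alice_forum)   # codes are per-site
assert alice_forum != alice_shop  # no shared identifier to track across sites
```

The cross-site unlinkability falls out of the codes being independent random tokens; the economics (one paid account per person) is what the model can't show.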
The first implementation of this approach was done with RealPeople.io (https://realpeople.io) which uses RealPerson.io to verify that the user is real.
I distinctly remember people giving purposely bad advice on innocent-sounding IRC channels. By bad advice I mean hiding "rm -rf" somewhere in a command given out as advice. If the person complained about losing work, more fun for them. This was something I witnessed the first time I started randomly poking around IRC (I was not the target). I also remember there being a whole philosophy around it: how newbies who came to ask questions were vampires for daring to ask.
I remember there being a lot of "nerds" bragging about how funny it was to cause harm to this or that person, including putting their personal enemies' phone numbers into fake sex ads. Plenty were anything but nice innocent people sharing nerdy things.
There were plenty of nice thoughtful people and plenty of normal people who just cared about the board topic. But they were not the only ones out there.
That web page doesn't explain how they guarantee uniqueness of users, or how they pay for the required ID checking.
Also, a user only gets one RealPerson code per website, so a user can't create multiple codes for a website and hence multiple accounts on that website. And if that website bans the user, they would need a new code for that website, which would mean creating a second account on RealPerson, paying again, and facing the scrutiny of RealPerson detecting that they have two accounts.
I was curious too so I clicked on the sign-up button to see how it worked. It immediately takes you to a page to pay $9.00 by credit-card.
I guess it guarantees you're a "real person" because bots† don't pay with credit-cards.
†(At the large scale required for profitable fraud)
This is the primary way to prevent bots, but there are still secondary signals like IP address, credit card, and behavior patterns that can be used to detect bots. But with an estimated millions of bots operating on Twitter and Facebook and elsewhere, the bot operators are not going to spend millions of dollars on RealPerson accounts.
Nope. Not going to happen. An HN user might, but a regular user? No way in hell. Honestly I'd be surprised if you managed to win them over at $1/year or whatever the card minimum TX cost is. Even I was blindsided by it with a "wooah, what's this about", and I'm happy paying for apps etc just to test them once. I thought I'd accidentally signed up to the website side of it.
It's hard enough for Netflix to get customers to pay (account sharing) never mind a site that - from the user's point of view - does nothing
On the thought of the website side - Why would I as a website owner have my users pay your service when I could set up my own paywall and also enjoy the monetary rewards? I really feel like I've missed something here
Side note, there's also no way to delete my account
Incidentally, this is how RealPeople.io (https://realpeople.io) is able to have no ads because the revenue from RealPerson.io subsidizes it since they're owned by the same company.
If your site ever gains traction, it would quickly become rife with stolen credit card details: the amount is small enough that most banks wouldn't question it (at first, at least; it's small amounts like these that attackers use to verify that card details are still valid), and those attackers would get the added bonus of growing their database of fake user accounts, rendering the point behind your site irrelevant.
Worse still, you'd then need to find some way to differentiate between stolen details and legitimate accounts (you will legally need to do this otherwise you'd quickly get shut down) - which means you're back into the arms race against the bad actors. This is going to cost you money (hopefully not more than you're making but that really depends on how heavily you get targeted) and will mean you'll likely end up adding counter measures that add further hurdles for legitimate customers.
And thus we're all back to square one.
I do wish you luck with your endeavor; but I don't agree with you that this solves any of the problems you're hoping to address (or at least described in this thread - if your problem was merely to earn some extra cash on the side then this might work beautifully for you).
The 90s had fair share of the above as well:
* bots: I used to write bots to troll 90s HTML chatrooms (I was young and an idiot). IRC bots have been around since forever as well.
* trolls: trolling is older than the web. Platforms like IRC and newsgroups used to be rife with trolls if you wandered into the wrong place or said something stupid to the wrong people.
* AI: this isn't really a web problem but more just a natural advancement of technology. I mean we had bots in the 90s so you can bet if AI was as far along then as it is now then we'd have seen AI then as well.
* fake this and that: this has always been a problem. Let's not forget that Snopes.com was launched in 1994.
* big money: The web definitely attracts big money now, but even in the 90s some businesses were sinking huge quantities of money into the bet that it would pay off big. Probably the most famous example is Amazon, founded in 1994.
* swaying public opinion: I agree here. This more recent trend of using user identifiable information to target persuasive pieces (eg what Cambridge Analytica were doing) is very worrying too.
* gross content: shock sites are nearly as old as the web itself. Goatse, for example, is so old it's now part of the mainstream consciousness.
* spam: Spam on forums is less of a problem now than it's ever been thanks to new techniques in user verification (captchas and similar, developers more aware of validating users with an activation email, etc). And spam email is an order of magnitude better now than it's been in years.
* sock accounts: To be honest I think this is another area where they were more common then than they are now. This time I think it is due to the current trend of using real world identities. In the late 90s it was particularly easy to create sock accounts due to how easy it became to create a multitude of free email accounts (eg Yahoo Mail).
* very often lose out to the big sites: this is where I think the biggest shift has happened. People seem less interested in stumbling on new content than they did in the 90s. Of course this might just be age bias on my part; I was in college in the 90s so had both the time and the social crowd to stumble upon random stuff online. Whereas these days I'm older and want different things from the web, so I treat it more as a tool than a toy.
I'm not saying things are better now nor then (actually I do kind of miss the 90s web) but there was definitely still a darker undercurrent present even in the 90s.
It depends on what you're measuring. Take trolls for example; is the proportion actually any bigger? Sure there are more trolls but there's also more users on the whole so you'd expect the number of trolls to also grow while the percentage might remain the same.
> And the effect is now raised to the level of concern that elections are affected. Some people are even talking about how it is threatening democracy.
That's not really the same point you were making in your first post. I agreed with you about how worrying those specific cases are, but you were originally complaining about a more general problem of rot and giving examples of stuff that also existed in the 90s. But yes, I too am concerned about targeted "marketing" being abused in a way that is new compared to anything we had seen in the decades previous.
> We need to change the economics so that bad actors go broke trying to act bad, while real people have to pay very little and not have to give up privacy.
I don't disagree with you on principle but it's a lot easier said than done. I mean just look at how hard it has been getting a handle on spam email and as a result it's now harder than ever to host your own mail server.
Ultimately I don't think it is possible to have privacy/anonymity and to prevent spam. I also don't think it's possible to prevent bad actors from automating while allowing the good actors to do the same things on the cheap. The problem is that the same controls that make things difficult for bad actors will also make them difficult for the good ones, and equally the same controls that give us privacy also make it easy to create malicious sock accounts. It's a double-edged sword, like free speech allowing opinions we don't want to hear amongst those we need to hear.
I think the best approach is education. There was a time when we were taught not to trust what we read online. Not to trust other people online. But things have since flipped and perhaps it's time to re-educate everyone to be cautious of anything presented online?
That's how RealPerson.io is different. It's pay to play so it doesn't try to out-tech the bad actors, and so it doesn't make it harder for the good actors.
* It's an additional service that people need to discover / learn and sign up for
* It's not free. While the cost might seem cheap for people like ourselves in well paid jobs, not everyone has a disposable income. Anyone in poorly paid jobs / unemployed, expensive bills or family etc wouldn't want to or even might not be able to afford such a service
* It requires people pay with a bank card, which excludes anyone who doesn't have a bank account / credit card (only a small group of people but they do exist). It also excludes anyone who doesn't feel comfortable with entering payment details online (I personally only use PayPal these days on all bar a very small handful of sites)
* I also don't trust handing an "identity token" over to that site any more than I trust Facebook. What happens if/when they get hacked? Will they then have my bank card details? Will they be able to use my identity token to access other sites? These points matter to me because I know nothing about the company and they are gearing themselves up to be an obvious target for attackers.
So in summary there is no such thing as a perfect solution. By making it harder for bad actors you're going to make it harder for at least some good actors as well. That is an inescapable truth.
Of course, part of the reason they seem worse now is because sites like Facebook, Twitter and Reddit have completely ignored all community management advice found online and done nothing to discourage bad actors or keep the quality control up.
Seems like the idea of a hobby in general has somewhat died off.
We've been working on proof-of-stake and blockchain scaling/interoperability infrastructure since 2014. It all starts with a classical BFT algorithm which provides simple light-client proofs and fork-accountability. Your mobile phone can verify transaction finality in 5 seconds, with no need for an hour of block confirmations as in Bitcoin proof-of-work mining.
https://github.com/tendermint/tendermint <-- blockchain consensus engine
https://github.com/cosmos/cosmos-sdk <-- blockchain framework in Go
https://github.com/keppel/lotion <-- blockchain framework in JS
https://github.com/cosmos/voyager <-- blockchain explorer
https://github.com/tendermint/go-amino <-- blockchain codec derived from Proto3
While we're at it, lets fork Go too. https://groups.google.com/forum/#!msg/e-lang/cQiXS_GnKS4/Zsk...
Disclaimer: I'm a cofounder of the Cosmos project, cofounder of Tendermint, and long-time HN lurker.
Tendermint provides BFT security in the likes of Bitcoin proof-of-work, except without the mining. It allows for massively replicated public decentralized databases. (e.g. BFT as opposed to Raft's FT). The Tendermint algorithm guarantees that as long as less than 1/3 are malicious, the blockchain will continue to function very well. And if there ever is a double-spend attack, we can figure out who to blame (and it will necessarily be 1/3 of bonded stakeholders).
The human-readable part is relatively simple. You can easily program a name-resolution system using the Cosmos SDK. Using the KVStore implementation there, you can commit the value of a name in the Merkle tree, and get a Merkle proof all the way up to the block-hash. And the block-hash is signed by 2/3 of the validator-set by voting power.
There's more to it than that, but that's the gist of it. Now we can really, really securely know that a name resolves to a certain IP, and know that it would be extremely expensive to attack this system.
NameCoin almost solved it but you need something like this to keep persistent state and provide Merkle proofs: https://github.com/tendermint/iavl . You can with the simple Bitcoin UTXO Merkle tree, but then you can't provide fast updates or revocation easily.
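As an illustration only (this is not Tendermint's actual IAVL implementation; the names, IPs, and hashing layout below are invented), a toy binary Merkle tree in Python shows the gist: commit name→IP pairs as leaves, and prove one leaf against the root hash that, in the real system, is signed by 2/3 of the validator set:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root_and_proof(leaves, index):
    """Build a binary Merkle tree; return (root, proof) for the leaf at index.

    The proof is a list of (sibling_hash, sibling_is_right) pairs, one
    per tree level, which is all a light client needs to recompute the root.
    """
    level = leaves[:]
    proof = []
    i = index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd-sized levels
        sib = i ^ 1
        proof.append((level[sib], sib > i))
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return level[0], proof

def verify(leaf, proof, root):
    node = leaf
    for sibling, is_right in proof:
        node = h(node + sibling) if is_right else h(sibling + node)
    return node == root

names = {"example.com": "93.184.216.34", "foo.org": "10.0.0.1"}
leaves = [h(f"{k}={v}".encode()) for k, v in sorted(names.items())]
root, proof = merkle_root_and_proof(leaves, 0)
assert verify(leaves[0], proof, root)  # the name->IP binding checks out
```

Tampering with the leaf (a different IP) changes its hash, so verification against the committed root fails; that's what makes the resolution trustworthy without trusting the server that handed you the answer.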
p.s. anyone know why my parent post is getting downvoted? goes up and down, up and down ;)
Playing with Glitch.com, it's possible to see a way forwards that isn't just iterating a better UI on top of an RSS reader. It's still lines of code as opposed to trying to push some GUI-based programming language, but does very well to promote the idea of building blocks for the web.
If most of us are basically plumbers, but for data on the Internet, glitch.com is the consumer hardware store where they sell all the various fun pipe connectors that you'd use for building an epic ball machine as a kid. (Github.com would be the store with a catalog that has table after table of parts in every possible variation.)
How many third (and first!) party trackers would you guess were blocked by my privacy and security flaws-preventing browser plugin when I clicked on the link to read the story?
I'm sure they "add value" to my experience and I am missing out on all of the incredible opportunities that they present, but that's a chance that I'm willing to take.
That website even (attempts to) logs when and where you click and what text you highlight, and then tries to send that info off to a third party.
The Vice President of Content Strategy for ORM, Inc. seems to be advocating against content strategies he has implemented.
Is "society made us do it" still a thing?
Most projects attempting to "fix the web", as admirable as they are, suffer from pretty bad UI. Usually because the UI designer was also the developer.
I'm also guilty of this.
Recent projects that got around this are Mastodon and Pleroma; both have pretty decent UI/UX.
Shameless plug for our solution:
It’s FOSS so download and use it just like Wordpress. Feel free to email me at greg (qbix.com) if you want any help getting started and I will either personally help you or connect you with one of our developers on the team.
Tim Berners-Lee is leading another project called Solid, that started about 3 years ago. We are a bit ahead because we started much earlier.
A "cacophony of different sites and voices" may be more fair, but it isn't very useful. Most people shouldn't have a public voice, because most people don't have anything valuable (or accurate) to say.
This includes me, of course, on many subjects. But here on HN, if many people think that my comments are garbage, they vote me down and at least new readers will be somewhat forewarned that my content may be of questionable value.
Search was first supposed to help us find things. But when there were too many things to find, it should have helped us identify the good stuff. Instead, search is now 90% bogus results that are generated to help steer people to a particular product or service.
So honestly, in some walled gardens, where people reasonably knowledgeable about the local subject can affect the visibility of content within the silo, visitors less able to discern good from bad will probably see more good than bad.
Of course, sites need money to operate, so then comes the corruption of the system. Even sites that try to be neutral still end up making some concessions in order to get funded by some source (and the corporations or political groups have the money, so they end up with influence).
Part of this obviously is about search and curation. Part of it also is probably just the massive loss of intimacy on the web now that it is so big. It's easy to feel less relevant when you get a few hundred views, even though in the 1990s you might also have got only a few hundred views.
So what I'd like is to have a new intimate internet. It would be effectively searchable by a Google equivalent that only indexed a small subset of the internet. How exclusive it would be, how exactly it is curated and managed I have zero idea. I'm just saying that is what I would like.
Now HN: unleash your list of stuff like this that already exists.
Oh dear, we’ve reinvented Facebook!
Seriously though, I think this is more about community than technology, and more about channel saturation than centralization.
I think the best shot for what you want is the re-distributed web and niche social networks (subreddits?). The peer-based web stuff will give you that feeling of being ahead of the masses on tech, and subreddits for your favorite hobby will let you connect with people who care about your random thing.
Quite apart from centralisation, comments, chat, commerce and ads which I also don't want, I view the Feed as really the bad thing. Instead of my mind controlling what I am interested in and would like to know about, the feed gives me a range of topics and things to take an interest in.
I know there will always be a feed somewhere, but online it becomes compulsive.
I agree with the sentiment of the article, but developer tools are better, not worse, than they were 20 years ago.
Understanding how a modern webpage's source, in combination with a browser, results in what you see is a pretty complicated affair; in the past you just did 'view source' and it was obvious what went where.
The web is now much more machine-to-machine than it was in the past when the humans could process the pages roughly as easily as the machines could.
There are obvious benefits to separating layout, style, software and content. But it isn't categorically true that this will always result in something that is easier to understand by a human.
I suggest a corollary:
"Any web community gets what it admires (fancy UX and feeds that work like magic), will pay for (=nothing), and, ultimately, deserves."
Sure the web is being abused for tracking by Google and Facebook but that was also possible 20 years ago with a single img-tag.
Meh, Office and Adobe products still have widespread use, as do various code editors and related tools, along with specialized applications like SPSS or Matlab. Also, things like Slack are desktop, even though they utilize web technologies. And there remain plenty of PC games which still get made. Some things you don't want to run in a browser.
The installed version is still fundamentally a desktop platform. I'm pretty sure it's written in C# or C++ and runs on dotnet.
An SPA is a web page run in the browser over http(s). Anything else is desktop, server, or mobile app, regardless of what tech it's made from.
I'd argue that if they are using web technologies (like vscode) they are still profiting from web innovations. Isn't Electron just a way to distribute SPAs on the desktop (plus some integrations)?
Yes, perhaps your definition is clearer. But the point remains that "web technologies" have expanded tremendously in scope, and it no longer suffices to look at "View Source" to become a web developer.
Because today the field of web development includes things like Electron, and you want to bring your application to as many devices as possible, with as little development overhead as possible. And this is what users expect; it's just not possible with the 90s-style web.
Also, the stuff that powers the web such as web servers and browsers would obviously not be SPAs.
Adobe, office for desktop, text editors, video games, my command line, iTunes, yada yada yada. Native applications are a hugely important part of my computer usage and I personally prefer to keep it that way.
And don't young people use YouTube/Amazon Music/Google Play to listen to music these days?
I'm a developer so of course I still use a lot of desktop apps, but for the normal user, the only thing that rivals the convenience of web apps are Android/iPhone apps, but nobody enjoys writing those.. So perhaps they will get replaced by something based on web technologies too one day.
> DevTools have deprecated view source
This is not good.
MS Office -> Google Docs
MSN Messenger -> Hangouts
At least for most (non professional) users
Edit: and often you are better off using webapps because they are sandboxed.
And in my experience Microsoft Office is still used more than Google Docs for non professional users, but YMMV.
Hangouts is also way behind native apps like WhatsApp, iMessage, and Telegram in market share.
vscode may or may not count as a SPA, that's beside the point. It is certainly based on and would not be possible without web technologies. And illustrates perfectly why today it is not possible anymore to become a web developer just by "viewing the source".
That sounds surprisingly like how our economy ends up working.
Perhaps we need a broader view?
You're not going to fix the problem by changing the technology.
You can't <airquotes> fix </airquotes> that by "rebuilding the web".
It's amusing because the tech crowd are opposing the NIMBY people in meatspace but are the NIMBY people in cyberspace.
I agree that web bloat is a problem, but why is it the responsibility of web developers to teach users of their site how to code? Is there any other area of software development where that's the case? Even major FOSS projects aren't designed to easily enable new developers to "copy the code they want".
So, why even bother arguing? Everyone lives on their special little islands of some niche web/IP semi-commercial projects and do not particularly care about ads or cookies, because whatever.
So Facebook is banned. Fine. That shouldn't stop me from being able to at least send data somewhere that will end up at Facebook. And I should have been able to send an e-mail that bounced through intermediary relays without connecting to Gmail's servers.
A lot of the web that _could_ be peer to peer isn't. And it's not because it is difficult, but because we simply haven't decided to use it that way. Email is the oldest, most successful distributed decentralized application. It does not require complex consensus. It does not use distributed databases. And it just works. You don't even need to be connected to the end server. There's no reason the web couldn't work like this. It is feasible today.
On top of this, where your web content is stored should be simple: it should be stored on a device in your pocket, and cached and distributed on servers around the world, just like email already is. Again, no complex algorithms are needed for this. A very simple design could accomplish this with existing technology, and perhaps an extension to a protocol's RFC and an update to some core software.
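A store-and-forward design like email's really doesn't need complex algorithms; a toy Python sketch of the idea (relay names and the trivial one-peer routing are invented for illustration) shows a message reaching its destination without the sender ever connecting to the end server:

```python
from collections import deque

class Relay:
    """Toy store-and-forward node, in the spirit of SMTP's relay hops."""

    def __init__(self, name, peers=None):
        self.name = name
        self.peers = peers or []   # upstream relays we can forward to
        self.queue = deque()       # messages stored for later forwarding
        self.mailbox = []          # messages delivered to us

    def accept(self, dest, body):
        if dest == self.name:
            self.mailbox.append(body)        # final delivery
        else:
            self.queue.append((dest, body))  # store; forward when we can

    def flush(self):
        # Forward everything queued to our next hop (trivial routing:
        # a single upstream peer stands in for real MX-style resolution).
        while self.queue:
            dest, body = self.queue.popleft()
            self.peers[0].accept(dest, body)

home = Relay("home")
isp = Relay("isp", peers=[home])
sender = Relay("sender", peers=[isp])

sender.accept("home", "hello")  # destination not reachable: stored locally
sender.flush()                  # later: handed to the ISP relay
isp.flush()                     # later still: the ISP delivers it
assert home.mailbox == ["hello"]
```

The point of the sketch is that each hop only needs intermittent connectivity to the next one, which is exactly the property the comment argues the web could borrow.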
Absolute shite. It's 100% China's fault. Given the widespread, damaging, lingering ramifications of the lack of security on the Internet in general, the absolute last thing we should be doing is promoting the idea that this information and these connections should be less secure.
I am baffled that anybody could even propose this idea.
Maybe I have a skewed point of view, but after being involved in computer security for 15 years, there are exactly two cases where I care about the data I'm sending over the internet: logging into my bank, and making a purchase with a credit card. Virtually everything else, I don't care about.
Plus, they would probably block any direct access to intermediary servers outside the country.
With the web, or chat apps, they block the whole target system and provide an alternative, which is somewhat reasonable. You could make that harder by designing the technology so that alternate systems are more difficult to build, and so that it can work in a way that is acceptable to China while still supporting existing users.
This isn't a difficult or bad thing. We've had this forever on the web, where you could see a page was not using a secured connection (saying "this page is secure" just because the connection is secure is a big overstatement). It's mostly a recent development that privacy zealots have been shoving this "secure or nothing" idea down everyone's throats.
Well, no, they could (and would) block domains, like "*@facebook.com". Which would leave you exactly where you are now.
If what you want is an a non-encrypted web where China can analyze and choose the specific content it wants to block, then fine, but I don't see what the whole email-like system has to do with it.
I don't think China wants to block specific content because that's a lot more complicated an approach than blocking target systems. By making the web more e-mail like, you can pass data through the web, and it can reach the intended target without that target being discovered. You could in essence borrow the Tor method, but for plaintext communications over HTTP.
You could use the same approach for web data by decoupling the addresses from the content. Federated services and pre-authorized applications is one way to connect disparate content and networks to each other. Another would be a way to submit data to a network in general, and allow others to retrieve it, for example with some pre-shared key or authorized service. You could make this work across different types of data and networks. For example, I should be able to send an e-mail to some address, and an intended recipient could check their Facebook messages and see the data, even though the mail address was something arbitrary.
Look at the ultimate "decentralized" technology, blockchain. The majority of mining power is now concentrated in vast farms with custom hardware and cheap access to power, and half of all Bitcoin is held by about 1000 people.
Yeah the ONLY way forward is to get everything compatible with FB. Sure.
It’s easier to use Wordpress than to build your own, and even easier to write a blog entry for O’Reilly.com. But if your message is a decentralized network of enthusiasts, then maybe you should publish on your own platform, you know, to be the change you want.
I would have agreed with you if it was published on Medium or Blogger or whatever.
I've become a believer of 'walled gardens' after watching my elderly aunt struggle with PCs and websites for years, until I set her up with an iPad, Facebook, and Skype.
Feed authentication would allow for a variety of potential service business models:
- Paywalls: Whether subscribing to a newspaper or supporting a blog through Patreon, it lets users directly pay for content provided through RSS.
- Customized curation services: Whether manually curated by an editor or using algorithms, you could build a business around providing content to subscribers.
- Social network integration: Instead of joining one or two massive social networks, I could be involved in dozens of small, focused ones, all collected in a single RSS reader.
Lynx is still available.
The actually problematic part of the web from the command line is dynamic web applications (in this case I see a couple of links to YouTube).
The engadget link about Jolla is the worst because it somehow ends up as white text on a blue background.
[ reply posted from links2 -g ]
I believe the double negation is wrong there. You probably mean either "if it doesn't contain any css or js" or "if it contains no css and no js".
I recently learned about jsonfeed  from Hacker News and built a news aggregator for my own personal use. I believe the future of news content is in aggressive, hackable aggregation tools for a variety of content. jsonfeed is a solid foundation to build on :).
RSS and jsonfeed are just containers. Both may contain real or fake news.
Everybody can put up a feed in RSS or jsonfeed without any trouble.
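Both really are thin containers. As an illustration (the feed title, IDs, and URLs below are made up), a minimal JSON Feed can be produced and consumed with nothing but Python's standard library, which is much of jsonfeed's appeal:

```python
import json

# A minimal feed following the jsonfeed.org version 1.1 shape:
# required "version" and "title", plus a list of items with stable ids.
feed_text = json.dumps({
    "version": "https://jsonfeed.org/version/1.1",
    "title": "My Weblog",
    "items": [
        {"id": "1", "content_text": "First post", "url": "https://example.com/1"},
    ],
})

# Consuming it is just ordinary dict access: no schema, no special parser.
feed = json.loads(feed_text)
assert feed["title"] == "My Weblog"
assert [i["id"] for i in feed["items"]] == ["1"]
```

Whether the items are real or fake news is, as the comment says, invisible to the container.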
RSS 0.9x was really simple, just XML and a half-dozen tags. But RDF was the new hotness, so RSS 1.0 was born, built around RDF and the Semantic Web. Then Dave Winer got annoyed at the complexity, and published a slightly cleaned-up 0.9x as 2.0. Then a bunch of other people decided that this was a mess, and so they created a new XML format called Atom, which is only slightly more complex than 0.9x/2.0. (I'm leaving out some ancient politics and oversimplifying.)
When I was an author of feed readers, the only one of these formats that ever annoyed me was RSS 1.0, because RDF requires much more than a simple XML parser. Oh, and maybe 25% of webmasters are totally incapable of generating valid XML, so you needed to tell your XML parser to use a dodgy "recovery mode".
In practice, this has very little "cognitive overhead", because you just hand a URL to a library and it figures out what parser to run.
But as usual, it looks like people are frustrated with RSS 0.9x/RSS 1.0/RSS 2.0/Atom/etc., and think that the solution is to create yet another format that does the exact same thing. I'm sure somebody out there is writing a "robust jsonfeed" parser that can handle badly-mangled invalid JSON.
In 4 years, people will be saying, "The whole RSS 0.9x/RSS 1.0/RSS 2.0/Atom/jsonfeed mess is terrible, and we need to create a simple new syndication format based on _____!"
A simple RSS feed can easily be hand written. But of course, you don't do that because there are so many mature tools to create or manipulate an XML feed already.
Whereas with jsonfeed there is not much yet because it is so new.
If you can hack JSON, you can hack XML as well. Even programmer novices can do this.
Edit: Reworded the last sentence that could be interpreted as a personal attack. Wasn't meant as such, I was going for the impersonal you.
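A minimal RSS 2.0 feed really is just a handful of tags. A sketch generating and re-parsing one with only Python's standard library (channel title and URLs are made up for illustration):

```python
import xml.etree.ElementTree as ET

# Build the required RSS 2.0 skeleton: <rss version="2.0"> wrapping a
# <channel> with title/link/description, plus one <item>.
rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "My Weblog"
ET.SubElement(channel, "link").text = "https://example.com/"
ET.SubElement(channel, "description").text = "Hand-rolled feed"

item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "First post"
ET.SubElement(item, "link").text = "https://example.com/1"

xml = ET.tostring(rss, encoding="unicode")

# Any standard XML parser reads it straight back: that is the whole point
# of the mature-tooling argument above.
titles = [i.findtext("title") for i in ET.fromstring(xml).iter("item")]
assert titles == ["First post"]
```

The same half-dozen tags could of course be typed by hand; the tooling just spares you the escaping and validity worries that trip up the "25% of webmasters" mentioned earlier.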
I have already justified my preference for jsonfeed. XML has proved many times to decrease developer productivity. To my knowledge open source projects that have dropped or diminished their use of XML have only become more popular over time. Extrapolating a little bit, switching to JSON can only be a good thing.
Or would you care to share a hack you've done with jsonfeed that I can't do with my RSS reader?
Historically, XML is infinitely better than the custom formats we had before, which might even have been binary instead of textual (the horror). Of course they went overboard with XML (oh hello XSLT).
But RSS feeds are a prime example for a good use of XML. Standard, validated, machine-parsable, and easily extendable data structures. For example podcast feeds are just extended RSS feeds.
I don't think that projects got popular because they dropped XML but because they dropped unnecessary baggage.