

It’s very simple: the law should go after what’s advertised as a sale when it isn’t really a sale. Heavily. As fraud. As an obvious crime.
That would improve the situation.
On the basis of having bought it. If they haven’t actually sold it but created that impression, then they’ve committed a crime.
When you buy a cure for all problems with minuscule text saying it’s just a metaphor, the seller is committing a crime. It’s the same here.
Morally, regardless of how courts interpret this right now. The fact that courts and practice officially do not equate law with morality, and thus we can decide differently this time if we can provide an explanation, is the main advantage of the English legal system and those descended from it.
Edit: I should add, if corporations can’t be bothered to respect what the word “buy” means, why should I bother to provide them money? Morality is a two-way street: if one side is dishonest and shady, do they really have a right to whine when others steal from them?
Ah yes, remember all that tone of honesty and seriousness from companies in the 00s against the bad, bad pirates, and the scorn at FOSS, those amateur toys, “we make better things”? And now, from time to time, those “serious professional” programs from back then are found to contain GPL violations. Or how Sony put a rootkit on music CDs.
TBH, there was a time when things were better with actually buying software and music and such. And the surge of piracy probably came first.
But somehow that doesn’t hurt Steam. Quoting GN: piracy is a service problem. People generally pirate what they can’t comfortably buy. There were games I never saw in stores in my childhood (no official localization, and by the time I got interested in them, the people selling bootleg discs in subway underpasses were going out of fashion here). Piracy was how I got them.
They know all that. They want you to be able to consume content only the exact way they publish it.
That simplifies market analysis, removes the dilemma of whether to support some other way users want to consume it, and ideally lets them sell the same thing a few times.
I mean, “fuck you OnlyFans” seems like the correct phrasing.
I’d prefer a Mandalorian helmet with a removable physical display inside. OK, walking around in such a helmet is a bit weird. But better than bigass glasses, since a helmet can at least be supported by something on your shoulders, with weight and pressure distributed better.
It was in the movies they liked when they were kids. Or at least in the movies they think users want to see brought to reality.
As in, an answer to the question “what’s cool and futuristic”. Solving medieval barbarism and wars is futuristic, but turns out not to be achievable. Same with floating/underwater oceanic cities, blooming deserts, Mars colonies, and a 20-minute train ride from Moscow to New Delhi. At the same time, the audience has been promised by advertising over the years that the future will be delivered to them. So: AR. For Apple this is the most important part, I think.
Also to augment something you have to analyze it, and if you have to analyze it, you are permitted to scan and analyze it. That’s a general point of attraction, I think. They are just extrapolating what led them to current success.
Also in some sense popular things were toys or promises of future for businesses and individuals alike, in the last 10-15 years. The audience is getting tired of toys and promises, while these companies don’t know how to make something else.
So let Tim Apple care about anything from AR in front of him to apples in his augmented rear; he surely knows what he wants. As another commenter says, one use case is a source of instructions and hints, with visualization, for a human working as a walking drone. I’m not sure that’s good, because if you can get that information to the machine, having a human there seems unnecessary. And if that information is not reliable enough, it may not improve the human’s productivity or error rate.
And the most important part is that humans learn by doing things that are hard. It’s like working out in an exoskeleton: what’s the purpose? And if training and work are separated here, then it seems more effort is spent in total. Not sure.
It makes sense why they want this technology so much. One thing really has been achieved: in 2005 you couldn’t make a program that was a keylogger and a useful thing all in one; you had to make the keylogger somehow detect the rare moments when it could risk running, or something like that. You couldn’t instruct it in English to “send me his private messages on sites like Facebook”; you had to be specific and solve problems. Now you can. And these “AI”s are usually one generic-purpose program, with everything stuffed in together with the kinda useful things.
All you need for this is a global overlay network and a global DNS untied from physical infrastructure. Cryptographic identities (a hash of a pubkey will do) instead of IP addresses (because NATs are a PITA and too many people use mobile devices behind big bad NATs), and finding (in something like Kademlia) records signed by an authority you yourself chose to trust, instead of asking DNS.
Then come encryption and dynamic routing and synchronization of published states.
One can have some kind of Kademlia for discovery of projects too, but on the next level.
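The identity and lookup pieces described above can be sketched roughly like this. A minimal illustration, assuming SHA-256 for the pubkey hash; the names `node_id`, `xor_distance`, and `closest_peers` are invented for the example:

```python
import hashlib

def node_id(pubkey: bytes) -> bytes:
    # Identity is a hash of the public key, untied from IP addresses and NATs.
    return hashlib.sha256(pubkey).digest()

def xor_distance(a: bytes, b: bytes) -> int:
    # Kademlia's metric: XOR the two IDs and read the result as an integer.
    return int.from_bytes(bytes(x ^ y for x, y in zip(a, b)), "big")

def closest_peers(target: bytes, known: list[bytes], k: int = 3) -> list[bytes]:
    # One step of a lookup: pick the k known peers closest to the target.
    # A real Kademlia lookup then asks those peers for even closer ones,
    # iterating until it reaches the signed record it wants.
    return sorted(known, key=lambda p: xor_distance(target, p))[:k]
```

Everything else (trust in a signing authority, routing, synchronization) layers on top of this distance metric.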
I2P comes close, but it’s more focused on anonymity.
OK, I’m not sure what I wrote makes sense. These things are easy to grasp somehow, but hard to understand well.
Ah, it’s more about the receiver than the sender. If the receiver cuts a sender off, their letter gets deleted or moved to the spam folder. Provided someone configures that.
With today’s centralized mail services the 1990s techniques don’t work so well, but that’s a problem of adoption; refusing mail without a correct token is still a pretty modern approach.
(Also, this won’t really help you, because Linux is a mainstream system with big corporate input. Backdoors hidden in plain sight are a thing.
It will make you feel better though; Windows sucks.)
Let’s look at a scenario where there’s an exploit that requires a change to an API.
To the plugin API, you mean? Yes, that’s the borderline case of added complexity of having modularity.
But in that case it’ll work similarly to browser APIs for JS being fixed. In one case browser devs break plugins, in the other they break JS libraries.
Some plugin vendors will be slower than others, so the whole thing will see massive delays and end users are more likely to stick to insecure browser versions.
How is this different from JS libs? Except for power imbalance.
Just, if it comes down to Chrome devs being able to impose their will on everyone, let’s be honest about it. It has some advantages, yes. Just like Microsoft being able to impose Windows as the operating system for desktop users. Downsides too.
Plugin vendors are going to demand the same API surface as current web standards and perhaps more, so you’re not saving anything by using plugins, and you’re dramatically increasing the complexity of rolling out a fix.
Well, I described before why it doesn’t seem so to me.
What I meant is that the page outside of a plugin should be static. Probably even deprecate JS entirely. So: static pages, with some content in them executed in a sandbox by a plugin. From the user’s perspective, dynamic content lives in containers inside static content. Like it was with Flash applications, except NPAPI plugins weren’t isolated in a satisfactory manner.
I like some of what we have now. Just drop the things alternative browsers can’t track, and have in the browser a little standardized VM inside which plugins (or anything) are executed. Break the vertical integration. It’s not a technical problem so much as a social one.
With the web being a “platform for applications” now, as opposed to the year 1995, that makes even more sense.
I think the current web is a decent compromise. If you want your logic in something other than JavaScript, you have WebAssembly, but you don’t get access to nearly as many APIs and need to go through JavaScript. You can build your own abstraction in JavaScript however to hide that complexity from your users. The browser vendor retains the ability to fix things quickly, and devs get flexibility.
We should have the ability to replace the browser vendor.
Yes, WebAssembly is good, it would be even better were it the only layer for executable code in a webpage.
The modularization was a security nightmare. Those plugins needed elevated privileges, and they all needed to handle security themselves, and as I hope you are aware, Flash was atrocious at security.
Those, yes. But in general, something running on a page that receives keystrokes when selected, draws in a square, and interprets something can be done securely.
And modern browsers have done a pretty good job securing the javascript sandbox.
One could have such a sandbox for some generic bytecode, separated from everything else on the page. It would be “socially” the same as back then, technically better.
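To make the “little standardized VM” idea concrete, here is a toy sketch: a stack machine whose only capability is writing into a fixed-size canvas, so embedded content can draw in its square but can’t touch anything else on the page. The instruction set is invented for the example:

```python
def run(program, canvas_size=4):
    # The canvas is the only state the guest program may write to;
    # everything else on the "page" is simply out of reach.
    canvas = [0] * canvas_size
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "plot":
            # Pop a value and an index, write into the canvas.
            v, i = stack.pop(), stack.pop()
            canvas[i % canvas_size] = v  # bounds-wrapped: can't escape the square
        else:
            # No file, network, or DOM opcodes exist, so they can't be abused.
            raise ValueError(f"unknown opcode {op!r}")
    return canvas
```

The security property comes from the instruction set itself: anything not explicitly provided to the guest simply doesn’t exist, which is the same design idea WebAssembly uses.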
Yes, as a part of userbase I don’t want to be on sale, thank you very much. Hence the comment above.
Not exactly what I said. I think those two were bad, but the idea of plugins was good.
Especially the uncertainty of whether a user has a plugin for the specific kind of content.
One could use different plugins, say, the plugin that showed Flash videos in mplayer under Unices.
It’s worse when everyone uses Chrome or something with modern CSS, HTML5 etc support.
The modularization was good. The idea that executable content can be different depending on plugins and is separated from the browser. I think we need that back.
And in some sense it not being very safe was good too. Everyone knew you couldn’t trust your PC when it was connected to the Interwebs: evil haxxors would pwn you, nasty viruses would infest it, everything confidential you had there would turn up for all to see. And one’s safety is not the real level of protection, but how it relates to the perceived level of protection. That was better back then; people had realistic expectations. Now you can still be owned, even if it’s much harder, but people don’t understand in which situations the risk is higher and in which it’s lower, and often have a false feeling of safety.
One thing that was definitely better: those plugins were disabled by default, and there was a gray square on the page with an “allow content” button or something. And the Web was usable in Lynx.
I mean, yes, let’s accept refugees fleeing the bad evil kleptocratic dictator, like many others of his kind.
Then, when he’s overthrown by beheaders massacring whole towns, let’s stop accepting refugees. Those beheaders are the good democratic new free government, and we are friends with them.
Like, Western countries gave all these kleptocracies a chance to show themselves for a few decades. Only to then demonstrate how to make things even worse.
Ukraine does have units with neo-Nazi symbolism. Just no more so than Russia.
Those could easily be authenticated with a key provided at signup, both to make filtering easier and to be able to revoke authentication.
That’s what Tox links had for spam protection, an identifier of user plus an identifier of a permission. Agree on this.
More structured… I’m not sure. Maybe a few types of messages would be good (not like MIME content types, but more technical: the type not of the content but of the message itself): a letter, a notice, a contact request, a hypertext page, maybe even some common state CRUD (OK, this seems outside of email; I just aesthetically love the idea of something like an email-based collaborative filesystem with version control that’s user-friendly at the same time), and a permission request/update/something (for some third resource).
Here a letter and a hypertext page would be almost open content, as it is now; a notice would carry a notice type and source; similarly with a contact request (permission to write to us, like in normal Jabber clients, which also sort of solves the unannounced-emails problem) and permission requests.
If so, then password resets and such fit in well enough. The spam problem would be no more, while all these service messages could still be allowed; carrying only an ID and basic operational information, they couldn’t be used for spam.
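The signup-key idea above could be sketched like this. All names are illustrative; the point is that each service gets its own key from the recipient at signup, its notices must carry a matching tag, and revoking the key silences the service:

```python
import hashlib
import hmac
import secrets

def make_tag(key: bytes, body: bytes) -> str:
    # What the service computes and attaches to each message it sends.
    return hmac.new(key, body, hashlib.sha256).hexdigest()

class ServiceAuth:
    def __init__(self):
        self.keys = {}  # service name -> key issued at signup

    def signup(self, service: str) -> bytes:
        # Issue a fresh key; the service stores it and tags its messages.
        key = secrets.token_bytes(32)
        self.keys[service] = key
        return key

    def revoke(self, service: str) -> None:
        # Drop the key: the service can no longer reach our inbox.
        self.keys.pop(service, None)

    def accept(self, service: str, body: bytes, tag: str) -> bool:
        # Accept a service message only if its tag matches the issued key.
        key = self.keys.get(service)
        if key is None:
            return False  # never signed up, or already revoked
        return hmac.compare_digest(tag, make_tag(key, body))
```

This gives exactly the two properties mentioned in the comment: filtering is trivial (a tag check), and authentication is revocable per service.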
You would be delusional to think a web browser should be worth as much as an IMAP client.
This is a problem with web browsers and that set of protocols, not with my comparison.
You still ultimately run networked sandboxed applications in a web browser and view hypertext, it’s an unholy hybrid between two things that should be separated.
And it was so 20 years ago.
For the former, Java applets and Flash were used a lot, as everyone remembers. The idea of a plugin was good. The reality was kinda not, because of security and Flash being proprietary, but still better than today. For the latter, no, you don’t need something radically more complex than an IMAP client.
I think Sun and Netscape etc. made a mistake with JavaScript. They should have made plugins the main way to script pages.
I wanted to say something about easily hosting searchable repositories, and solving a few of the problems because of which the Web as it exists still has users.