abucci

joined 2 years ago
[–] abucci@buc.ci 3 points 3 weeks ago (1 children)

@dogslayeggs@lemmy.world

So let me make sure I understand your argument. Because nobody can be held liable for one hypothetical death of a child when an accident happens with a self driving car, we should ban them so that hundreds of real children can be killed instead. Is that what you are saying?

No, this strawman is obviously not my argument. It's curious you're asking whether you understand, and then opining afterwards, rather than waiting for the clarification you suggest you're seeking. When someone responds to a no-brainer suggestion, grounded in skepticism but perfectly sensible nevertheless, with a strawman seemingly crafted to discredit it, one has to wonder if that someone is writing in good faith. Are you?

For anyone who is reading in good faith: we're clearly not talking about one hypothetical death, since more than one real death involving driverless car technology has already occurred, and there is no doubt there will be more in the future given the nature of piloting a several-ton hunk of metal across public roads at speed.

It should go without saying that hypothetical auto wreck fatalities occurring prior to the deployment of a technology are not the fault of everyone who delayed the deployment of that technology, meaning in particular that these hypothetical deaths do not justify hastening deployment. This is a false conflation regardless of how many times Marc Andreessen and his apostles preach variations of it.

Finally, "ban", or any other policy prescription for that matter, appeared nowhere in my post. That's the invention of this strawman's author (you can judge for yourself what the purpose of such an invention might be). What I urge is honestly attending to the serious and deadly important moral and justice questions surrounding the deployment of this class of technology before it is fully unleashed on the world, not after. Unless one is so full up with the holy fervor of techno-utopianism that one's rationality has taken leave, this should read as an anodyne and reasonable suggestion.

[–] abucci@buc.ci 3 points 3 weeks ago (4 children)

@theluddite@lemmy.ml @vegeta@lemmy.world
To amplify the previous point, taps the sign as Joseph Weizenbaum turns over in his grave:

A computer can never be held accountable

Therefore a computer must never make a management decision

tl;dr A driverless car cannot possibly be "better" at driving than a human driver. The comparison is a category error and therefore nonsensical; it's also a distraction from important questions of morality and justice. More below.

Numerically, it may some day be the case that driverless cars have fewer wrecks than cars driven by people.(1) Even so, it will never be the case that when a driverless car hits and kills a child the moral situation will be the same as when a human driver hits and kills a child. In the former case the liability for the death would be absorbed into a vast system of amoral actors with no individuals standing out as responsible. In effect we'd amortize and therefore minimize death with such a structure, making it sociopathic by nature and thereby adding another dimension of injustice to every community where it's deployed.(2) Obviously we've continually done exactly this kind of thing since the rise of modern technological life, but it's been sociopathic every time and we all suffer for it despite rampant narratives about "progress" etc.

It will also never be the case that a driverless car can exercise the judgment a human exercises when deciding whether one risk is more acceptable than another, and then be held to account for the consequences of that choice. This matters.

Please (re-re-)read Weizenbaum's book if you don't understand why I can state these things with such unqualified confidence.

Basically, we all know damn well that whenever driverless cars show some kind of numerical superiority to human drivers (3) and become widespread, every time one kills, let alone injures, a person no one will be held to account for it. Companies are angling to indemnify themselves from such liability, and even if they accept some of it no one is going to prison on a manslaughter charge if a driverless car kills a person. At that point it's much more likely to be treated as an unavoidable act of nature no matter how hard the victim's loved ones reject that framing. How high a body count do our capitalist systems need to register before we all internalize this basic fact of how they operate and stop apologizing for it?

(1) Pop quiz! Which seedy robber baron has been loudly claiming for decades now that full self-driving is only a few years away, and depends on people believing in that fantasy for at least part of his fortune? We should all read Wrong Way by Joanne McNeil to see the more likely trajectory of "driverless" or "self-driving" cars.
(2) Knowing this, it is irresponsible to put these vehicles on the road, or for people with decision-making power to allow them on the road, until this new form of risk is understood and accepted by the community. Otherwise you're forcing a community to suffer a new form of risk without consent and without even a mitigation plan, let alone a plan to compensate or otherwise make them whole for their new form of loss.
(3) Incidentally, quantifying aspects of life and then using the numbers, instead of human judgment, to make decisions was a favorite mission of eugenicists, who stridently pushed statistics as the "right" way to reason to further their eugenic causes. Long before Zuckerberg's hot-or-not experiment turned into Facebook, eugenicist Francis Galton was creeping around the neighborhoods of London with a clicker hidden in his pocket counting the "attractive" women in each, to identify "good" and "bad" breeding and inform decisions about who was "deserving" of a good life and who was not. Old habits die hard.

[–] abucci@buc.ci 2 points 1 month ago* (last edited 1 month ago) (1 children)

@theluddite@lemmy.ml @kersplomp@programming.dev I didn’t fully follow the connection between the social media post and multi-armed bandit problems. Is the idea that a user has k options about what to view, chooses one, and experiences some kind of payoff from the choice? If so, I’m not sure the situation is well-modeled by bandits, since the typical social media user is presented with a smallish set of options chosen for them by an algorithm, with each user choice resulting in the algorithm presenting them with another smallish set of options that might be of different size and comprise different options. That kind of situation might be better modeled as an extensive-form game of user against “the algorithm”, with a finite but variable set of choices for the player at each ply. It’s common in a turn-taking game for both player’s and opponent’s choices to affect the choices available to the player next ply, which is why this feels like a better model to me than k-armed bandits or the POMDP-type setups usually explored in RL.

If what the algorithm does can be approximated that way (as a reward-maximizing player in a multi-ply game that chooses what category of content to show a user at each turn), then you can get partway towards understanding how it works functionally by understanding how the tradeoffs between monetization, data gathering, and maximizing surprisal (learning) in its reward function are struck. I suspect that splitting the bins/categories more and more finely sometimes makes the tradeoffs look better, which might explain why social media companies tend to do this (if you have one bin of stuff with red and blue objects, and people choose randomly from it, they’ll be less happy on average than if you have a bin of red objects and a bin of blue objects and are able to direct red-preferring and blue-preferring users to the appropriate bin better than a coin flip would).
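The red/blue bin point can be checked with a toy simulation. Everything below is my own illustrative framing, not anyone's actual recommender: users each prefer one of two content categories, payoff is 1 when the item shown matches their preference, and "splitting bins plus routing" is modeled as showing the matching category with probability better than a coin flip.

```python
import random

random.seed(0)

def average_happiness(routing_accuracy, n_users=100_000):
    """Average payoff across simulated users.

    routing_accuracy=0.5 models one mixed bin (the user draws a red
    or blue item at random); higher values model split bins plus an
    algorithm that sends each user to their preferred bin with that
    probability. The payoff model (1 for a match, 0 otherwise) is a
    made-up assumption for illustration.
    """
    happy = 0
    for _ in range(n_users):
        preference = random.choice(["red", "blue"])
        if random.random() < routing_accuracy:
            shown = preference
        else:
            shown = "blue" if preference == "red" else "red"
        happy += (shown == preference)
    return happy / n_users

mixed_bin = average_happiness(0.5)   # one undifferentiated bin
split_bins = average_happiness(0.8)  # finer bins + better-than-chance routing
```

Under these assumptions `split_bins` comes out near 0.8 and `mixed_bin` near 0.5, which is the whole incentive to split categories ever more finely: any routing accuracy above chance beats the mixed bin.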

People are not static utility maximizers, but these types of algorithms assume we are. So I think they tend to get stuck in corners both because of how they strike tradeoffs (you get manosphere content because that’s what’s most monetizable) and because people’s preferences aren’t expressed consistently in their actions and change through time (you keep getting shown scifi content because you looked at a few scifi videos in a row awhile ago when you were feeling nostalgic but you don’t usually prefer it).

That’s what I have for now. Sorry for length.

[–] abucci@buc.ci 5 points 1 month ago (3 children)

@theluddite@lemmy.ml @kersplomp@programming.dev I have a lot to say on this subject but unfortunately do not have the time right now to write out anything worth reading! I will return perhaps tomorrow.

[–] abucci@buc.ci 4 points 5 months ago (1 children)
[–] abucci@buc.ci 3 points 5 months ago (1 children)

@theluddite@lemmy.ml @queermunist@lemmy.ml Though I'm probably a bit older than you both, Occupy was also the moment where I first engaged in a protest for a sustained period of time and then continued to do so after. There was a lot of incoherence around Occupy that took me years to get my head around. But I've come to believe a totally horizontal, leaderless movement organized through social media platforms is dead on arrival. I thought I'd throw a few observations into the mix if that's OK.

It was pointed out above that such a thing is like shouting "NO!" at the government; I fully agree with that. Bevins argues (at least in interviews; haven't had a chance to read his book yet) that these spontaneous NOs can be dangerous: if they go far enough they can create a power vacuum that the most prepared (read: organized and ruthless) forces quickly move to fill. This is the real story of what happened in several countries during the Arab Spring, by Bevins's read (I take it). So while folks are excitedly believing they're participating in the birth of a new form of democracy, what they're really doing is inflicting a dark Shock Doctrine on themselves. I have to confess that I, too, did not see this at the time.

There must be some kind of theory of change, pre-organizing to build power, and a clear-eyed recognition of the situation to avoid these DOA movements and have some hope of bringing lasting, meaningful change for a lot of people. Much of the US left (such as it is) seems allergic to looking reality squarely in the face. I'd almost go so far as to say there should not be attempts at lefty mass protest until such power is built, such theory is developed, and widespread recognition of our situation, grounded in reality, exists, exactly because of the danger that actors with very different goals from ours are better positioned to take advantage of the chaos mass protests generate.

Personally I'd refer to (what used to be) social media as "surveillance media". The form the modern US state takes is public-private partnership, with many state functions dispatched by private corporations and actors. Though Musk clearly has his own aims, he is almost surely playing a state role with Twitter not too different from the one he plays through SpaceX. So, though social media's always been corporate-mediated, I'd add that recognizing the role of public-private partnerships in the modern US context leads to the probability that Twitter has become something else. In that view, the finances are almost irrelevant, and LOLing about this or that number going down or this or that many advertisers leaving the platform amounts to copium. If Twitter really is performing useful functions for the state then it will continue to exist no matter how much money it "loses"; failing to perform those functions is what would put it in jeopardy, not revenue figures.

[–] abucci@buc.ci 2 points 7 months ago

@t3rmit3@beehaw.org That'd be such a great thing to see in data. I was alluding more to the theory of voting systems, like rational choice theory. The setup in those is something like: you have a set of people, and there's a choice they need to make collectively. Each person can have a different preference about what the choice should be. Arrow's impossibility theorem states, roughly, that with three or more options, no method of aggregating everyone's ranked preferences into a final choice can satisfy a few basic fairness criteria all at once, so in practice, no matter what system you use, at least one person's preferences will be violated (they won't like the choice).

What I was imagining was, in the same setup, everybody modifies their preferences based on what they think the other people's preferences are. So now the choice isn't being made based on their preferences, it's being made on the modification of their preferences. Arrow's impossibility theorem still holds, so no matter how the final choice is made some people will still be unhappy with it. But, I think it's possible that even more people will be unhappy than if they'd just stuck with their original preferences. Or, maybe the people who'd already have been unhappy are even more unhappy. I'd have to actually sit down and work it out though, which I haven't.
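A minimal worked example of the "modified preferences can make things worse" intuition, with everything (the three voters, their rankings, plurality voting, and the crude "unhappy if your favorite lost" measure) made up by me for illustration. Here the shared perception is inaccurate, which is the easy case; the harder claim about accurate perceptions would need the sit-down-and-work-it-out treatment:

```python
def plurality_winner(ballots):
    """Each ballot names one candidate; most votes wins
    (ties broken alphabetically to keep the toy deterministic)."""
    tally = {}
    for b in ballots:
        tally[b] = tally.get(b, 0) + 1
    return min(tally, key=lambda c: (-tally[c], c))

def unhappy_count(preferences, winner):
    """A voter is 'unhappy' if their top choice didn't win -- a
    deliberately crude stand-in for Arrow-style dissatisfaction."""
    return sum(1 for prefs in preferences if prefs[0] != winner)

# Three voters' full rankings, favorite first (invented numbers).
preferences = [
    ["A", "B", "C"],
    ["A", "C", "B"],
    ["B", "A", "C"],
]

# Sincere voting: everyone votes their favorite.
sincere = [p[0] for p in preferences]
w_sincere = plurality_winner(sincere)
u_sincere = unhappy_count(preferences, w_sincere)

# Everyone reads the same (mistaken) poll saying A "can't win" and
# votes for their highest-ranked non-A candidate instead.
strategic = [next(c for c in p if c != "A") for p in preferences]
w_strategic = plurality_winner(strategic)
u_strategic = unhappy_count(preferences, w_strategic)
```

Sincere voting elects A with one unhappy voter; the perception-modified ballots elect B and leave two voters unhappy, so acting on beliefs about others' preferences made the group worse off by this measure.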

The example of your dad talking himself out of voting for Buttigieg because he thinks other people won't vote for Buttigieg is exactly the kind of case I was thinking of! Except I was thinking more theoretically than data-wise. It'd be great to see data on it too, for sure.

[–] abucci@buc.ci 6 points 7 months ago (3 children)

@theluddite@lemmy.ml @luciole@beehaw.org I swear one day I'm going to sit down and do the actual math to prove that voting systems are broken by having a majority of voters factor their perception of "electoral math" into their preferences even when their perceptions are accurate. Arrow's impossibility theorem is already pretty discouraging without all this meta stuff.

[–] abucci@buc.ci 16 points 1 year ago (1 children)

@theluddite@lemmy.ml @jeffw@lemmy.world Since most people spend most of their best hours at the workplace, what this person is really saying is that there shouldn't be any politics at all. I.e., this is a confession: "I am an authoritarian".

[–] abucci@buc.ci 1 points 1 year ago (1 children)

@theluddite@lemmy.ml @sabreW4K3@lazysoci.al Sorry to dive in uninvited, but from a different angle I'd recommend reading Energy and Human Ambitions on a Finite Planet by Thomas Murphy ( https://staging.open.umn.edu/opentextbooks/textbooks/980 ). Murphy is an astrophysicist and the book is an entry-level introduction to energy, its use in human societies, and all the implications that flow from our energy use. It's quite accessible if you're comfortable reading STEM textbooks; it might be a bit tough if you find reading about physics and math boring or difficult. He does provide a lot of handholds and personally I think it's worth the struggle.

The reason I suggest this book in this context is that I find a lot of people tend to be "energy blind", meaning they don't see the implications of human energy use and what it would actually mean to do something like reduce fossil fuel usage. Reducing fossil fuel usage would necessarily reduce quality of life for billions of people, for instance--there's almost no way around it. The book goes into why. This simple fact is deeply relevant to any theory of change. How can you convince several billion people to purposely lower their quality of life or forego apparent opportunities to increase their quality of life in order to force the reduction in fossil fuel use that is necessary to keep human civilization from ending altogether? How do you do this without falling back on authoritarian structures, especially as the situation becomes increasingly desperate-looking?

I think another ideology we need to get past, one a lot of people seem to be deeply defensive about, is the one built on the belief that we can have large amounts of energy whenever we want it and the supply will continue to go up in perpetuity. This belief is false--it's like believing the Earth is flat, or that your maladies are caused by unbalanced humors--but a large number of people in the so-called developed world take it as a fact or at least as an operating principle (before anyone dives down my throat about this: read Murphy's book. Seriously. Read it with care). "The economy" is fundamentally grounded in this false ideology. "Car culture" in the US is grounded in it. What many of us think "work" and "a job" are/should be is grounded in it. What many of us think of as "fairness" and "equity" is grounded in it. Etc etc etc.

[–] abucci@buc.ci 2 points 1 year ago (2 children)

Right! And the US Democratic party seems to be obsessed with means testing, so that many times when there is government assistance available people who need it are forced to subject themselves to intrusive surveillance, frequent paperwork and sometimes shifting requirements, etc. It's rare (in my experience) to hear anyone critique this state of affairs, let alone make substantive moves to change it.
