AI and ethics
Things that I’m thinking about:
- Social Media as a foreshadowing for what’s to come with AI. The Social Dilemma becomes the AI Dilemma. The Attention Economy becomes The Intimacy Economy.
- Bias.
- LLMs and algorithms can’t be unbiased, because humans are involved at some point and we are biased. Humans are involved in choosing the training data, refining the model, using the output, and interpreting the output.
- Data represents the past, including our mistakes. In particular, systemic bias.
- The data implies what’s Average or Normal. But that reduces the complexity of human existence. Sometimes we want the outliers, the more creative options.
- Reinforcement Learning from Human Feedback (RLHF). The feedback loop from training data to output and back into training data. Only a small group of humans, with one set of perspectives, provides the feedback. The AI gets trained to say what we expect to hear, not what is true or correct.
- Opaque. We don’t know why: who got the job, the loan, the medical treatment, the prison sentence?
- Regulation. Preventing harm, including things we haven’t thought of yet.
- Confabulation. AIs are great at Confident Bullshitting. They make things up, but present them as fact.
- Quality. AI is good at quantity-related things, but bad at quality-related things.
- Failure modes. New technology has two failure modes: it works very well and gets used for nefarious purposes; or it doesn’t work well but gets widely used anyway. We’re seeing more examples of the second one: “Will AI take my job, despite the fact that it can’t really do it?”
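The RLHF point above can be caricatured in a few lines of Python. This is a toy sketch, not real RLHF: the “model” is just a probability of giving the agreeable answer instead of the accurate one, and all the names and numbers (`rater_bias`, `lr`, the step count) are made up for illustration.

```python
import random

def rlhf_toy(p_agreeable=0.5, rater_bias=0.9, lr=0.1, steps=50):
    """Toy feedback loop: a small, homogeneous pool of raters mostly
    rewards the agreeable answer (rater_bias = fraction who do), and
    each step nudges the model toward whatever was rewarded."""
    random.seed(0)  # fixed seed so the sketch is repeatable
    for _ in range(steps):
        prefers_agreeable = random.random() < rater_bias
        target = 1.0 if prefers_agreeable else 0.0
        p_agreeable += lr * (target - p_agreeable)
    return p_agreeable

# The model drifts toward what the raters like to hear, regardless of
# what is true or correct.
print(rlhf_toy())
```

Even starting from a 50/50 model, the loop ends up heavily favouring the agreeable answer, which is the worry: the feedback signal encodes the raters’ perspectives, not correctness.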
I’m keeping some notes in my Ethics and Tech section.
Added 2023-06-23, last updated 2023-08-11.