15 Comments

loved working on this, tysm for publishing it 🤍 🤍 🤍 🤍

love u so much!!!!

I generally find all this techy stuff hard to read about because 1) it bores me a bit and 2) I often feel too stupid to understand what I am reading, but Zoe you really made this one easy to read. Thank you so much, this is so interesting!!

she slayed right!!!

I LOVED THIS!!

Super interesting, but the generated ‘stories’ definitely lacked Luce’s personality. It was like Luce trying to be Luce on a WOW day. Amazing how the tech is advancing, and a great way to explain what is meant by training it etc.

Omg I love this description of it. Ur my EVERYTHING

This. Was. WILD!

Incredibly interesting and a really relatable walkthrough for us all

Wow, that's super interesting Zoe! I wonder if it picked up Ben from when Luce talked about her little brother?

Now I've downloaded the Substack app I no longer get the newsy via email 😕. Is there a way to have both?

yes! You can toggle it in settings!! xx

Thank you!!

Wow! I mean, it was a great result for ChatGPT and prompt engineers, not so great for the humans. Are we all heading for an artificial world where the AI dictates our worldview? Once the style is created and established, there's no longer a need for the human, and that's a depressing thought. I'm also guessing that in the future Chat will improve and even editing will become unnecessary. I fear the future will become less creative.

I definitely am scared of OpenAI’s goal to create “Artificial General Intelligence” because they have soo much power over the model’s outputs. I agree that the perspectives and connotations used by GPT’s base model to describe the facts will reflect the statistical regularities found in opaque training data controlled by OpenAI, further perpetuating their worldview.

However, fine-tuning gives some control of this worldview back to experts who understand the task they’re training the model to perform and the worldview they’re trying to perpetuate. If you ask ChatGPT its opinion on abortion, it says something like "As an AI language model, I do not have personal opinions” because OpenAI trained it to say that. Some of the training data I used had stories about abortion, and after being trained on Luce’s pro-choice sentiments the fine-tuned model echoed her pro-choice values when prompted with abortion-related headlines.

So imo, as long as GPT is fine-tuned on evidence-based, value-driven, and curated datasets, it can be used by experts to solve a targeted set of problems without losing the creativity and critical thinking unique to humans. I also think there should ALWAYS be a human in the loop reading through the outputs, editing them, and periodically updating fine-tuned models with a dataset of their edits so that they’re always evolving.
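
(For anyone curious, here's a minimal sketch of what that fine-tune / edit / re-tune loop could look like. It assumes OpenAI's chat fine-tuning JSONL format and their current Python SDK; the file name, system prompt, and base model are placeholders, not the exact setup used for Luce's model.)

```python
# Rough sketch of the fine-tune -> edit -> re-tune loop described above.
# Assumptions (not from the original post): file names, the system prompt,
# and the choice of base model.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def build_dataset(pairs, path="luce_stories.jsonl"):
    """Write (headline, edited_story) pairs as chat-format fine-tuning examples."""
    with open(path, "w") as f:
        for headline, story in pairs:
            example = {
                "messages": [
                    {"role": "system", "content": "Write this story in Luce's voice."},
                    {"role": "user", "content": headline},
                    {"role": "assistant", "content": story},  # human-approved text only
                ]
            }
            f.write(json.dumps(example) + "\n")
    return path


def launch_fine_tune(path):
    """Upload the curated dataset and start a fine-tuning job."""
    training_file = client.files.create(file=open(path, "rb"), purpose="fine-tune")
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-3.5-turbo",  # placeholder base model
    )
    return job.id

# Human in the loop: each time an editor rewrites an output, the
# (headline, edited story) pair goes back into the dataset, and the
# model gets re-tuned on the updated file later.
```

The point of the sketch is just the design choice: only human-approved stories ever go into the training file, so the worldview the model learns stays in the editors' hands rather than the base model's.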

And that's probably the biggest issue... do we trust humans to use evidence-based, value-driven and curated datasets? I guess with all new technology there's always a worry about how people abuse it, like face recognition software or deepfake videos. Already we're seeing the problem with data-mining images from the internet when you ask Midjourney (or any other image generator) anything related to Jews. It's a caricature and very antisemitic. Here's hoping Chat's deep-dive is more nuanced.
