Discussion about this post

Chara:

Thank you Nicholas! So, the report doesn't lay out the specifics, but we are told that the machines were provided with datasets specifically curated for running their policies. THIS is where the highest degree of ethical conscience matters. Machines only learn from what we provide them, so as scientists, engineers, and practitioners, the onus is on us to train our machines on the most ethically sourced data we can.

Nicholas Bronson:

Great article, Chara, though I can't help but think we could use more information on the ethical side of things here. You mention that ethical guides are used during training; how does that work?

This sort of ongoing learning is exactly the kind of runaway process that might lead to AGI, but it's also the kind of process that could lead to a worst-case scenario if the safety aspects aren't carefully managed.

I watched an interesting talk on the shutdown problem last night (https://www.youtube.com/watch?v=64s1r1AV7WY). As much as this sort of training will increase LLMs' utility, it could also significantly (by orders of magnitude) complicate the shutdown problem, which could be a concern.
