You’re right. It’s as dangerous as it is exciting (maybe slightly more dangerous than that). The problem with any program isn’t in the program itself (at least not so far), but in the way it is designed. So a sentient AI system designed without accounting for all possible outcomes could be disastrous. I guess that’s what Musk has been getting at as well when he calls for regulation. And I think that’s also the philosophy behind creating OpenAI: safer, more responsible artificial general intelligence.