Recently I have been considering ways in which AI might become dangerous to our species, and consequently how humanity might go about combating this. To the left is an image created with DALL-E 2, OpenAI's text-to-image model. A user enters a text prompt and the AI produces images such as this rendition of an AI designed to hunt other AIs. OpenAI is releasing the model slowly to select people on its waitlist and restricts what kinds of prompts can be entered. For example, my desire to see "Elon Musk eating an ice cream" was thwarted by a policy prohibiting depictions of real people in DALL-E 2 creations. These are elements of what is no doubt an extremely sophisticated security strategy at OpenAI to make its products safer.

However, although security in AI seems like a prudent consideration, one has to wonder whether there is really a way to maintain a meaningful hold on the development of AI and its security. Given the extreme interconnectedness of the world through the web, it seems nearly inevitable that at some point in our collective future AI technology will cease to be something any one organization or government can control. The question then becomes: how do we deal with, say, a rogue AI that aims to optimize humanity out of existence in pursuit of some mundane goal?