OpenAI's Defense Against LLM Attacks

OpenAI's study unveils an instruction hierarchy to boost LLM security against attacks such as prompt injections, enhancing model safety.

Tags: AI attacks, AI robustness, GPT security, instruction hierarchy, jailbreak attacks, Large Language Models, LLM security, model safety, OpenAI, prompt injection

July 14, 2025