Company:Preamble, Inc.

From HandWiki

Preamble Inc. is a software company that provides a safety layer for artificial intelligence (AI) systems by applying safety and security policies and integrating environmental, social, and governance (ESG) to control the output of systems like ChatGPT, Claude, GPT-4, and other AI-as-a-Service applications.[1][2][3]

Preamble was co-founded by Jonathan Rodriguez in 2020 who is the co-founder and CEO of Preamble.[4] He also co-founded Vergence Labs in 2014 and was in the Forbes 30 Under 30 list of 2017.[5] Trousdale Ventures, a privately held investment firm invested in Preamble in 2021.[6][3]

Prompt injection attacks were first discovered by Preamble, Inc. in May 2022, and a responsible disclosure was provided to OpenAI.[7][8][9]

See also

Prompt engineering

References

  1. Gilbert, Thomas Krendl; Brozek, Megan Welle; Brozek, Andrew (2023-02-23). "Beyond Bias and Compliance: Towards Individual Agency and Plurality of Ethics in AI". arXiv:2302.12149 [cs]: 4. http://arxiv.org/abs/2302.12149. 
  2. "Trousdale Ventures | Preamble" (in en). https://trousdalevc.com/portfolio/7326/preamble. 
  3. 3.0 3.1 "Trousdale Ventures Seeds Preamble's AI Tools". https://www.socaltech.com/trousdale_ventures_seeds_preamble_s_ai_tools/s-0081363.html. 
  4. "Jon Rodriguez Cefalu (Preamble) on the Future of AR, AI, Sentient Robots, and Human Manipulation" (in en). https://www.iheart.com/podcast/256-ar-show-with-jason-mcdowal-43045124/episode/jon-rodriguez-cefalu-preamble-on-the-100402610/. 
  5. "Jonathan Rodriguez" (in en). https://www.forbes.com/profile/jonathan-rodriguez/. 
  6. Magazine, Authority (2023-03-21). "Wisdom From The Women Leading The AI Industry, With Leyla Hujer of Preamble" (in en). https://medium.com/authority-magazine/wisdom-from-the-women-leading-the-ai-industry-with-leyla-hujer-of-preamble-289db76f2c69. 
  7. Selvi, Jose (2022-12-05). "Exploring Prompt Injection Attacks" (in en-US). https://research.nccgroup.com/2022/12/05/exploring-prompt-injection-attacks/. 
  8. Edwards, Benj (2022-09-16). "Twitter pranksters derail GPT-3 bot with newly discovered “prompt injection” hack" (in en-us). https://arstechnica.com/information-technology/2022/09/twitter-pranksters-derail-gpt-3-bot-with-newly-discovered-prompt-injection-hack/. 
  9. "Newly discovered prompt injection tactic threatens large language models" (in en). https://www.linkedin.com/pulse/newly-discovered-prompt-injection-tactic-threatens-large-anderson.