CTO AI Corner: What are the basic guardrails you should have in place when doing AI-enhanced development?
Usually, the AI plays nice when helping you develop software. But every now and then it might do something catastrophic. So how do you protect yourself?
It depends on how much autonomy you want to give the AI.
Supervising the AI
- Manually approve any system commands it wants to run (there's a small approval-wrapper sketch after this list)
- Manually control when Git commands (especially commits) happen
- Run the code yourself only after you’ve read it and understand what it's supposed to do
- Use a dev environment that's connected only to disposable, easily rebuildable resources like dummy databases or limited-scope APIs
- Never let the AI touch sensitive secrets. Keep API keys, passwords, and tokens out of reach (see the environment-scrubbing sketch below)
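
To make the first point concrete, here's a minimal sketch of an approval wrapper in Python: the AI proposes a command as a string, and nothing runs until you say yes. The function name and the example command are placeholders, not a real agent API.

```python
import shlex
import subprocess

def run_with_approval(command: str) -> None:
    """Show an AI-proposed command and only run it after a human approves."""
    print(f"AI wants to run: {command}")
    if input("Approve? [y/N] ").strip().lower() != "y":
        print("Skipped.")
        return
    subprocess.run(shlex.split(command), check=True)

# Example: the agent proposes a command, you get the final say.
run_with_approval("pytest -q")
```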
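And for keeping secrets out of reach, one simple option is to launch the agent with a scrubbed environment. This is a rough sketch, assuming your secrets live in environment variables; my-agent is a stand-in for whatever AI CLI you actually run.

```python
import os
import subprocess

# Name patterns treated as sensitive (an assumption; adapt to your naming scheme).
SENSITIVE = ("KEY", "TOKEN", "SECRET", "PASSWORD")

# Copy the current environment, but leave the secrets behind.
clean_env = {
    name: value
    for name, value in os.environ.items()
    if not any(marker in name.upper() for marker in SENSITIVE)
}

# "my-agent" is a placeholder for the AI tool you use.
subprocess.run(["my-agent", "--task", "write tests"], env=clean_env)
```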
Giving AI more autonomy
If you're feeling brave and want to give the AI more autonomy, you need to sandbox it properly:
- Only give it access to disposable containers or VMs, and even then, with minimal privileges (a container sketch follows this list)
- Give it access only to test resources that are meant to be broken and ideally easy to restore, because it will break them eventually
- It can commit to version control, but don’t let it rewrite history or push to sensitive branches (see the pre-push hook sketch below)
- Limit its resources: tokens, processes, memory, and so on (see the resource-limit sketch below)
- Lock down network access. Don’t let it see your whole network, and don’t let it wander the internet unsupervised
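
As a starting point for that kind of sandbox, here's a sketch that runs an agent image in a throwaway Docker container: minimal privileges, capped resources, and no network at all (which also takes care of the last point). The image name, workspace path, and limits are assumptions to adjust for your own setup.

```python
import subprocess

# Launch the agent in a disposable, locked-down container.
subprocess.run([
    "docker", "run",
    "--rm",                    # throw the container away when it exits
    "--network", "none",       # no network access at all
    "--read-only",             # read-only root filesystem
    "--cap-drop", "ALL",       # drop every Linux capability
    "--user", "1000:1000",     # run as an unprivileged user
    "--memory", "1g",          # cap memory
    "--cpus", "1",             # cap CPU
    "--pids-limit", "128",     # cap the number of processes
    "-v", "/tmp/agent-workspace:/workspace",  # mount only a disposable workspace
    "agent-sandbox:latest",    # placeholder image containing the agent
], check=True)
```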
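For the version-control rule, server-side branch protection (GitHub, GitLab, and friends all have it) is the stronger control, but a local pre-push hook is a useful backstop when the agent shares your working copy. A minimal sketch, assuming main and production are the branches you want to protect:

```python
#!/usr/bin/env python3
"""Sketch of a .git/hooks/pre-push hook that blocks pushes to protected branches."""
import sys

PROTECTED = {"refs/heads/main", "refs/heads/production"}

# Git feeds one "<local ref> <local sha> <remote ref> <remote sha>" line per ref on stdin.
for line in sys.stdin:
    local_ref, local_sha, remote_ref, remote_sha = line.split()
    if remote_ref in PROTECTED:
        print(f"Refusing to push to {remote_ref}: protected branch.", file=sys.stderr)
        sys.exit(1)

sys.exit(0)
```

Save it as .git/hooks/pre-push and make it executable. It blocks your own pushes to those branches too, which is usually what you want while an agent has access to the repo; stopping history rewrites on the remote still needs the server-side branch protection settings.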
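And if you're not containerizing, OS-level limits get you part of the way on Unix (token limits live in the agent or API settings, not the OS). A sketch using Python's resource module; the numbers and the my-agent command are assumptions to tune for your own workload:

```python
import resource
import subprocess

def apply_limits():
    """Applied in the child process just before the agent starts (Unix only)."""
    two_gib = 2 * 1024 ** 3
    resource.setrlimit(resource.RLIMIT_AS, (two_gib, two_gib))  # cap address space at 2 GiB
    resource.setrlimit(resource.RLIMIT_CPU, (300, 300))         # cap CPU time at 300 seconds
    # RLIMIT_NPROC is a per-user count, so keep it above your normal process count.
    resource.setrlimit(resource.RLIMIT_NPROC, (256, 256))

# "my-agent" is a placeholder for the AI tool you use.
subprocess.run(["my-agent", "--task", "refactor"], preexec_fn=apply_limits)
```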