Accurate, Focused Research on Law, Technology and Knowledge Discovery Since 2002

Thinking of “hiring” an AI tool for your development needs?

CSO – “Ever since ChatGPT was released in late 2022, the internet has been abuzz with equal parts doom and optimism. Love it or hate it, AI is coming to your development organization. Even if you don’t plan on developing an AI product or hiring an AI development bot to write code for you, it may still be integrated into the tooling and platforms used to build, test, and run your artisanal, handmade source code. And AI tools will bring unique risks that potentially offset the huge gains in productivity offered by automating tasks that once required human brain cycles. These risks stem from how the AI is trained, built, hosted, and used—all of which are different from other software tooling currently used by developers. Understanding risk is the first step in managing it, and to help you understand potential risks associated with your incoming AI tooling, we’ve written some interview questions that should be part of the onboarding process. These questions should be asked regardless of the AI’s type or purpose.

  • Where will it be hosted? Modern AIs currently require dedicated and expensive hardware to do the astounding tasks we’re seeing make headlines today. Unless you’re going to acquire a brand-new data center, your AI bot will work remotely and require the same security considerations as remote human workers using remote access and offsite data storage.
  • What kind of safeguards are in place to prevent IP loss as code leaves the boundary? Everything from smart TVs to cars is reporting usage data to its manufacturer. Some manufacturers are using that data to improve their software, but others are selling it to advertisers. Understand exactly how your AI tool will use or dispose of source code or other private data it uses for its primary task.
  • Will your inputs be used in future training for the AI? Ongoing training of the AI models will be an increasing area of concern both for owners and those whose data is used to train the model. Owners, for example, may want to keep advertisers from biasing the AI bot in a direction that benefits their clients. Artists who share works online have had AI image-generation bots replicate their styles and are very concerned about the loss or theft of creative identity.
  • What is the fidelity of its results? ChatGPT’s most well-known drawback is the inaccuracy of its results—it will confidently assert falsehoods as truths. This has been referred to as the AI “hallucinating.” Understanding how and where an AI may hallucinate can help manage it when it does…”
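The IP-loss question above comes down to controlling what crosses the boundary. A minimal sketch of one such safeguard is shown below: scrubbing likely credentials from a code snippet before it is sent to an external AI service. The pattern names and regexes here are illustrative assumptions, not a complete rule set—production secret scanners use far more extensive detection rules.

```python
import re

# Illustrative patterns only -- real secret scanners maintain far larger rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*['\"][^'\"]+['\"]"),
]

def redact(source: str) -> str:
    """Replace likely credentials with a placeholder before code leaves the boundary."""
    for pattern in SECRET_PATTERNS:
        source = pattern.sub("[REDACTED]", source)
    return source

snippet = 'api_key = "sk-12345"\nprint("hello")'
print(redact(snippet))
```

A real deployment would pair this kind of outbound filter with contractual terms governing retention, as the article suggests asking about.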
