Just a few weeks ago, I wrote to my boss that “scope” was one of the limitations of the interactive UI, since it restricts practical use to roughly 300-400 lines of code at a time.

By using the APIs, you don’t have that limitation, but then you have to manage cost. The interactive UI costs a flat fee of $20 a month, whereas the APIs charge by the amount of data in and out (tokens), so you can quickly rack up significant costs that have to be factored in.
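To make that concrete, here is a rough back-of-the-envelope cost sketch in Python. The per-token prices and token counts are illustrative placeholders, not OpenAI’s actual rates; check the current pricing page before budgeting anything.

```python
# Rough API cost estimate. The prices below are illustrative placeholders,
# not OpenAI's published rates -- substitute the current pricing.
PRICE_IN_PER_1K = 0.01    # assumed dollars per 1,000 input (prompt) tokens
PRICE_OUT_PER_1K = 0.03   # assumed dollars per 1,000 output (completion) tokens

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Approximate dollar cost of a single API call."""
    return (prompt_tokens / 1000) * PRICE_IN_PER_1K \
         + (completion_tokens / 1000) * PRICE_OUT_PER_1K

# Example: pasting a few hundred lines of code (~5,000 tokens) and getting
# ~1,500 tokens back costs roughly:
print(f"${estimate_cost(5000, 1500):.2f}")
```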

Also, to use the APIs to help you write and debug code, you have to write code to write code, which gets a bit recursive. The APIs are (currently) best used for asking ChatGPT to do contextual work, such as summarizing large amounts of data or writing, and for real-time interactivity, such as chatbots and generating publishable content in response to incoming information (i.e., answering emails, etc.).
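For that kind of contextual work, a call looks roughly like the sketch below. It assumes the openai Python package (v1-style client) with an OPENAI_API_KEY in the environment; the input file, model name, and summarization prompt are illustrative examples, not anything from our codebase.

```python
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Contextual work: summarize a large block of text in one call.
with open("incident_report.txt") as f:   # illustrative input file
    report = f.read()

resp = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[
        {"role": "system", "content": "Summarize the following report in five bullet points."},
        {"role": "user", "content": report},
    ],
)

print(resp.choices[0].message.content)
# The usage fields are what you are billed on -- data in and data out.
print(resp.usage.prompt_tokens, resp.usage.completion_tokens)
```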

However, since I wrote that, OpenAI has added the ability to create a customized GPT. In configuring the GPT, you can upload a set of files that form a basis of context for the AI engine. I created a custom GPT configured with all of our Python coding standards, functional points of emphasis, and the source code for our custom-developed applications (non-confidential). Although this doesn’t solve the large, heterogeneous system problem (because there is no practical way to configure the GPT to connect the dots between the systems), it goes a long way toward solving the scope problem I first detailed.

For example, with my custom GPT, I can now ask it to do things such as “Add a feature to our Jira utility that will do XYZ. Make this function available via the --xyz argument.” Previously, to do that same thing, I would have had to start a thread by pasting in a portion of our code that provides enough example and context, and then spend a bunch of iterations getting it the way I needed it. Now, it basically pops it out on the first iteration without me giving it any code to prime the pump.
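For illustration only, the kind of skeleton that request produces looks something like this; the Jira utility, the xyz feature, and the --project flag here are all hypothetical stand-ins, not our actual code.

```python
import argparse

def do_xyz(args):
    # Placeholder for the hypothetical XYZ feature -- the real body is
    # whatever the custom GPT generates against our coding standards.
    print(f"Running XYZ against project {args.project}")

def main():
    parser = argparse.ArgumentParser(description="Jira utility (illustrative skeleton)")
    parser.add_argument("--xyz", action="store_true",
                        help="run the hypothetical XYZ feature")
    parser.add_argument("--project", default="OPS",
                        help="Jira project key (assumed example)")
    args = parser.parse_args()

    if args.xyz:
        do_xyz(args)

if __name__ == "__main__":
    main()
```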

Part 3 tomorrow.
