Using ChatGPT API with Node.js and Python
July 27, 2023 by Ayerhan Msughter, Enwerem Udochukwu
This article introduces the OpenAI ChatGPT API and explains how to generate accurate responses using inference parameter tuning and prompt engineering. The examples in this article are in Node.js and Python, but the concepts apply to other programming languages as well.
Before we start using the API, we need to sign up with OpenAI at https://openai.com/api/ to get access.
Getting your API Secret Key
After signing up, we can log in and create the secret API key that our program will use to call the API. Click on the user icon at the top left, then choose the View API keys option from the drop-down menu.
Click the Create new secret key button to generate your secret key, and copy it.
Installing Required Libraries
OpenAI libraries for interacting with the API are available in all major programming languages, but in this article we will focus on Node.js and Python.
You can simply run this command from your Node.js application folder:
$ npm install openai
Head on to https://www.npmjs.com/package/openai for more details.
You can simply run this command from a terminal in your Python environment:
$ pip install --upgrade openai
Head on to https://pypi.org/project/openai for more installation details.
Setting API Key in Environment Variable
After installing the library, we have to save our new API key in an environment variable before using it in our application code. Because the key is meant to stay private, it is advisable to set it in a separate private file that is kept out of our main public code, as shown below.
Setting API Key in Node.js
Add the following code snippet to a file that runs before your main application code; it sets the API key in an environment variable for the Node.js process. Replace the 'key' string with the API key you copied earlier.
process.env['OPENAI_API_KEY'] = 'key';
There are several other ways to set an environment variable in Node.js.
Setting API Key in Python
Add the following code snippet to a file that runs before your main application code; it sets the API key in an environment variable for the Python process. Replace the 'key' string with the API key you copied earlier.
import os
os.environ['OPENAI_API_KEY'] = 'key'
There are several other ways to set an environment variable in Python.
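Alternatively, the key can be exported from the shell before launching your program; both Node.js and Python processes started from that shell inherit it. A minimal sketch, where 'key' is a placeholder for your real secret key:

```shell
# Export the key for the current shell session ('key' is a placeholder);
# child processes such as node or python will inherit it.
export OPENAI_API_KEY='key'

# Confirm that the variable is set.
echo "$OPENAI_API_KEY"
```

Note that an exported variable lasts only for the current session; to make it permanent, the same export line can go into your shell profile.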
Using ChatGPT API Key
After setting the appropriate key in an environment variable, we can now use the API key in our code to generate responses.
Node.js Code Example
The code snippet below is an example of how to use the API in Node.js.
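A minimal sketch, assuming version 3 of the openai npm package (the Configuration/OpenAIApi interface) and that OPENAI_API_KEY is already set in the environment; the prompt text and parameter values are illustrative:

```javascript
// Build the request parameters separately so they are easy to inspect;
// the model and values here are illustrative.
function buildRequest(prompt) {
  return {
    model: "text-davinci-003",
    prompt: prompt,
    temperature: 0.7,
    max_tokens: 100,
  };
}

async function main() {
  // v3 of the openai npm package exposes Configuration and OpenAIApi.
  const { Configuration, OpenAIApi } = require("openai");
  const openai = new OpenAIApi(
    new Configuration({ apiKey: process.env.OPENAI_API_KEY })
  );
  const response = await openai.createCompletion(
    buildRequest("What is the definition of a computer?")
  );
  console.log(response.data.choices[0].text);
}

// Only call the API when a key is actually configured.
if (process.env.OPENAI_API_KEY) {
  main().catch(console.error);
}
```

The request parameters are collected in a separate function so they are easy to adjust; they are described in more detail in the next section.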
Python Code Example
The code snippet below is an example of how to use the API in Python.
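A minimal sketch, assuming the openai Python package's v0.x interface (openai.Completion.create) and that OPENAI_API_KEY is already set in the environment; the prompt text and parameter values are illustrative:

```python
import os


def build_request(prompt):
    """Collect the completion parameters; the values here are illustrative."""
    return {
        "model": "text-davinci-003",
        "prompt": prompt,
        "temperature": 0.7,
        "max_tokens": 100,
    }


# Only call the API when a key is actually configured.
if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    import openai  # openai-python v0.x interface

    openai.api_key = os.environ["OPENAI_API_KEY"]
    response = openai.Completion.create(
        **build_request("What is the definition of a computer?")
    )
    print(response.choices[0].text)
```

As in the Node.js case, the parameters are built separately and unpacked into the call, which makes them easy to tweak while experimenting.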
API Parameters and their Meaning
As you can see from the above examples, an API call takes several parameters. Their meanings are explained below.
Model
There are basically five models of varying sizes to choose from. The bigger models are generally more accurate but more expensive to use, while the smaller models are faster, less accurate, and cheaper to run.
The models are listed below, arranged from biggest to smallest:
- text-davinci-003 (code-davinci-002 for programming)
Prompt
This is the snippet of text you would like the API to complete. The AI looks at the prompt and returns a completion that logically fits it. For example, a prompt such as “What is the definition of a computer?” would lead it to return the definition of a computer as the logical completion.
Temperature
This is a value between 0 and 1 (the API actually accepts values up to 2) that controls the creativity or randomness of the completions you get from the AI. When the value is closer to 1, the AI gives more creative completions and tries as much as possible to avoid repeating words. When the value is closer to 0, the completions are less creative and tend to contain more repeated words.
Max Tokens
This parameter sets an upper limit on the number of tokens generated in the completion, which helps avoid overly long responses. Note that the prompt and the completion together must still fit within the model's context length.
Top P
This parameter, known as nucleus sampling, is another way to control the randomness of the completions. It is recommended to adjust either top p or temperature, but not both at once; leave the one you are not tuning at its default value of 1.
Frequency Penalty
The frequency penalty reduces the model's tendency to repeat words, in proportion to how often each word has already appeared in the text. It is usually set to a value between 0 and 1, although the API accepts values from -2 to 2.
Presence Penalty
The presence penalty encourages the model to make novel predictions by lowering the probability of a word once it has appeared in the text. Unlike the frequency penalty, the presence penalty does not depend on how frequently the word has appeared; a single occurrence is enough. It is also usually set to a value between 0 and 1. Both penalty parameters can be left at 0 for most tasks.
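Putting these parameters together, a request might be configured along the lines of the sketch below. This shows only the parameter dictionary, not a live API call, and every value is an arbitrary example rather than a recommendation:

```python
# Illustrative sampling settings for a completion request.
# temperature and top_p both control randomness; tune one and
# leave the other at its default value of 1.
params = {
    "model": "text-davinci-003",
    "prompt": "Write a short poem about the sea.",
    "max_tokens": 64,          # cap on the length of the completion
    "temperature": 0.9,        # higher -> more varied wording
    "top_p": 1,                # left at its default while temperature is tuned
    "frequency_penalty": 0.5,  # discourage literal word repetition
    "presence_penalty": 0.0,   # 0 works for most tasks
}
```

A dictionary like this can be unpacked straight into the completion call (with ** in Python), so experimenting with different settings only means editing one place.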
Advanced Prompt Design
The quality of our prompts usually determines the accuracy of the responses or completions we get, so it is worth putting real effort into designing them. In most cases you will have to experiment with different prompts to find the ones that yield the most accurate and consistent responses. Different problems require different styles of prompt, which can be classified into categories such as:
- question answering
- code completion
- chat
In this article we will focus on the chat prompt.
Chat Prompt Design
Chat prompts are usually very similar to question-answering prompts, but unlike a question-answering prompt you also have to give the AI a personality to center its side of the conversation around. The prompt text can be as follows:
The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.
Human: Hello, who are you?
AI: I am an AI created by OpenAI. How can I help you today?
Human: I'd like to cancel my subscription.
The remaining request parameters can also be tuned to suit conversational output.
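As an illustration, the dialogue above can be sent to the completion endpoint with settings along the lines below. These values are assumptions modelled on OpenAI's published chat example, not requirements; in particular, the higher presence penalty keeps the assistant from circling back to the same topics, and the stop sequences prevent the model from writing both sides of the conversation:

```python
# Chat-style prompt from the article, assembled as a single string.
chat_prompt = (
    "The following is a conversation with an AI assistant. The assistant "
    "is helpful, creative, clever, and very friendly.\n\n"
    "Human: Hello, who are you?\n"
    "AI: I am an AI created by OpenAI. How can I help you today?\n"
    "Human: I'd like to cancel my subscription.\n"
    "AI:"  # leave the AI's turn open for the model to complete
)

# Illustrative settings for conversational output.
params = {
    "model": "text-davinci-003",
    "prompt": chat_prompt,
    "temperature": 0.9,         # keep replies varied and conversational
    "max_tokens": 150,
    "presence_penalty": 0.6,    # nudge the assistant toward new topics
    "stop": ["Human:", "AI:"],  # stop before the model writes the next turn
}
```

Each time the user replies, the new "Human:" line and the model's previous answer are appended to the prompt and the whole conversation is sent again, which is how the completion endpoint maintains chat context.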
Notice how the first sentence in the prompt gives the AI a personality.
In this article we have explored how to set up the OpenAI API in your local development environment, how to use your API key within your application, and how to design custom prompts for use with the ChatGPT API. If you are enthusiastic about ChatGPT and AI in general, head over to openai.com to explore more uses of these technologies. We also hope to feature more articles on other applications of ChatGPT in the future.