
How to build an application that uses ChatGPT

It's easier than you think! Here's a step-by-step tutorial on how to create and modify an application that leverages ChatGPT.

Apr 15, 2024 • 7 Minute Read


So you want to build an app that uses ChatGPT, or add it to your website or existing application? It's actually quite simple, so we're going to cover it today.

In this tutorial, we will use the sample app provided by OpenAI and break it down so you can understand the basics. Then we'll explain how to modify it. By the end of this tutorial, you'll have a basic idea of how it all works. 

You will:

  • Get a demo application running

  • Change the temperature of the responses

  • Craft a custom prompt 

So let's jump right in!

NOTE: If you're looking instead for how to build an app for ChatGPT (such as a ChatGPT plugin), check out the tutorial "How to make a ChatGPT plugin." It comes with a demo GitHub project for you to follow along with.

Getting Started

You'll need a few things before you can start building this app.

1. Register an account

You will need an account with OpenAI for this tutorial. You can sign up here.

2. Get an API Key

After creating your account, you must generate your unique API key. This key will authenticate your requests to the OpenAI API. You can get your API key here.

3. Install Node.js and NPM on your machine

You'll need Node.js to run this demo. I am using the LTS version.

You can install Node on Windows, macOS, or Linux from the downloads page at nodejs.org.

Or you can install Node on Windows Subsystem for Linux, which is what I do when working in Windows.
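
Once Node is installed, you can verify everything is in place from a terminal (these commands simply print the installed versions):

      node --version
      npm --version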

4. Pick an Editor

For this tutorial, you'll only need a text editor. Visual Studio Code, Sublime Text, or any text editor will work here. I'll be using Visual Studio Code.

Note: I have VS Code installed with its integrated terminal open, so no matter which operating system I'm on, I can keep a terminal handy for npm commands and anything else I need to run. In this tutorial, "terminal" refers to a command prompt on Windows, Linux, or macOS.

Now we're set up and ready to go. Let's build an app.

Loading the Quickstart Application

We're going to use the OpenAI Quickstart Application for Node. We'll download and run it, walk through its components to understand how the application works, and then modify it.

Let's clone the Node quickstart application:

      git clone https://github.com/openai/openai-quickstart-node.git
    

Now you have a local copy of the repo on your machine.

Let's install our dependencies. At a terminal, type in:

      cd openai-quickstart-node
    

to move into the application directory. Now type in the following:

      npm install
    

This will install everything needed for the demo application.

Now let's set up the config and enter your API key.

      cp .env.example .env
    

Note: In the Windows Command Prompt, the command is copy instead of cp.

Open up the .env file in your text editor. You will see a config file with a single line. It specifies your API key, and looks like this:

      OPENAI_API_KEY=
    

Place your key after the equal sign in the line, then save the file.
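
The finished line should look something like this (the value shown is just a placeholder, not a real key):

      # placeholder value -- replace with your real OpenAI API key
      OPENAI_API_KEY=sk-your-secret-key-here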

Now in the terminal run:

      npm run dev
    

And now, we can enter an animal and see the output:

Perfect! It works! Let's dig into it.

About the Quickstart Application

Of course, we want to modify this application to learn how to build with it. But we need to understand how this application works. 

This is a Node app, and it's pretty simple. Most of the action happens in two files: index.js and api/generate.js. Together, they create a small application we can modify and use.
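
For orientation, the repo is a small Next.js project, so those two files live under the pages directory. The layout looks roughly like this (check your clone for the exact structure):

      openai-quickstart-node/
      ├── pages/
      │   ├── index.js          // the React form and submit handler
      │   └── api/
      │       └── generate.js   // the API route that calls OpenAI
      ├── public/               // static assets
      ├── .env                  // your API key config (copied from .env.example)
      └── package.json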

index.js

This is a React component that renders a form with an input field (the name of your pet) and a submit button. When you submit the form, it sends a POST request to /api/generate, inserting the value of the input field into a JSON payload. If the request succeeds, the result is rendered under the submit button. If it fails, an alert with an error message is shown.
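
As a rough sketch (paraphrasing the quickstart rather than quoting it exactly), the submit handler looks something like this, where animalInput and setResult are the component's React state:

      async function onSubmit(event) {
        event.preventDefault();
        // Post the input value to our API route as JSON
        const response = await fetch("/api/generate", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ animal: animalInput }),
        });
        const data = await response.json();
        if (response.status !== 200) {
          // The API route sent back an error payload
          alert(data.error?.message || "Something went wrong");
          return;
        }
        // Show the generated text under the submit button
        setResult(data.result);
        setAnimalInput("");
      }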

api/generate.js

This is a Node.js module that exports an asynchronous function that handles a POST request to the /api/generate endpoint.

First, it checks whether the OpenAI API key is configured, and sends a 500 error with a message if it's not. Next, it checks whether the animal field in the request body is empty, and sends a 400 error if it is.
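
In code, those two checks look roughly like this (a paraphrase of generate.js; the configuration object is the one the quickstart builds from your .env file):

      if (!configuration.apiKey) {
        // No API key configured: fail fast with a 500
        res.status(500).json({
          error: { message: "OpenAI API key not configured" },
        });
        return;
      }

      const animal = req.body.animal || "";
      if (animal.trim().length === 0) {
        // Empty input: reject the request with a 400
        res.status(400).json({
          error: { message: "Please enter a valid animal" },
        });
        return;
      }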

If the API key is configured and the animal field is not empty, it calls the openai.createCompletion() method to generate three superhero names. It uses the text-davinci-003 model and a prompt that includes the animal value from the request body. Temperature is set to 0.6, which controls how creative the names are (we'll get to that shortly).

If the API call succeeds, it sends a 200 response containing the generated names. If there is an error, it logs the error to the console and sends back an error message.
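
With the version of the openai Node library the quickstart uses (3.x at the time of writing), that success/error handling looks roughly like this (a sketch, not the file verbatim):

      try {
        const completion = await openai.createCompletion({
          model: "text-davinci-003",
          prompt: generatePrompt(animal),
          temperature: 0.6,
        });
        // Success: return the first generated completion to the browser
        res.status(200).json({ result: completion.data.choices[0].text });
      } catch (error) {
        // Failure: log it and send a generic error message back
        console.error(`Error with OpenAI API request: ${error.message}`);
        res.status(500).json({
          error: { message: "An error occurred during your request." },
        });
      }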

The generatePrompt() function generates the prompt for the OpenAI API call. It takes the animal value as an argument and returns a string that includes the animal value and some example superhero names for different animals. We will tinker with this for sure.

Now that we know how it works, let's modify it!

Our first change

The first change we want to make is to the temperature of the query. What the heck is that? 

Temperature is a parameter that controls how random the AI's responses are. The API accepts values from 0 to 2, though you'll usually stay between 0 and 1. When you set a higher temperature (closer to 1), the AI's response is more creative.

If you set a lower temperature (closer to 0), the responses are more focused and deterministic. It's like a little knob to adjust the creativity of ChatGPT.

It's located at line 32 in generate.js:

      try {
        const completion = await openai.createCompletion({
          model: "text-davinci-003",
          prompt: generatePrompt(animal),
          temperature: 0.6,
        });
    

Let's set our temperature to 0, then save the file:

      temperature: 0.0,
    

Then we will run the application. Type in npm run dev at the console. 

I'm going to type in “Llama” and see what name suggestions we get:

Those seem pretty sensible. Let's change it to 1.0:

Much more creative. Plus, every time I submit, I get something different. Let's change it to 2.0:

Completely nonsensical. Anything over 1.0 is generally too random. 

This is the first change you can make to your application. Play around with it and see what temperature you like best. 

Changing the Prompt

You can change the prompt to something completely different. Let's do that.  Instead of "name my pet" we will change this to "tell a story". 

In index.js change:

      <h3>Name my pet</h3>
    

to this:

      <h3>Tell a Story</h3>
    

Then change:

      <input type="submit" value="Generate names" />
    

to this:

      <input type="submit" value="Tell a Story" />
    

Now in generate.js look for this function:

      function generatePrompt(animal) {
        const capitalizedAnimal =
          animal[0].toUpperCase() + animal.slice(1).toLowerCase();
        return `Suggest three names for an animal that is a superhero.
      Animal: Cat
      Names: Captain Sharpclaw, Agent Fluffball, The Incredible Feline
      Animal: Dog
      Names: Ruff the Protector, Wonder Canine, Sir Barks-a-Lot
      Animal: ${capitalizedAnimal}
      Names:`;
      }

    

This is where we can modify the prompt any way we want! Let's make it tell a story in the style of a funny comedian. 

We're going to reuse the animal value, which holds whatever you type into the input box. We no longer need to capitalize the first character, so we can remove that. For now, we won't rename animal; it's already wired up to our input box.

Change the function to look like this:

      function generatePrompt(animal) {
        return `Act as ${animal}. Tell a funny story to the audience`;
      }
    

The generatePrompt function returns a string that will be used as the prompt. We've set it up so ChatGPT will act as your favorite comedian and tell a funny story. Save the file and run it:

You may notice right away that the output is cut off mid-sentence. At first I thought this was a CSS issue. It's not. The text is cut off because ChatGPT is only sending a small number of tokens back. Let's change that.

Find this code:

      try {
        const completion = await openai.createCompletion({
          model: "text-davinci-003",
          prompt: generatePrompt(animal),
          temperature: 1.0,
        });
    

and add the following:

      max_tokens: 1024,
    

Now the function looks like this:

      try {
        const completion = await openai.createCompletion({
          model: "text-davinci-003",
          prompt: generatePrompt(animal),
          temperature: 1.0,
          max_tokens: 1024,
        });

    

Let's try it again with Carrot Top:

That looks much better!!

The max_tokens value you set determines how long the response can be. Note that you are charged per token, so it's a good idea to put a sensible limit on the amount you allow.
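
If you want to keep an eye on how many tokens each request consumes, the completions response also includes a usage object you can log. As a sketch, inside generate.js right after the createCompletion call succeeds:

      // completion.data.usage reports token counts for this request
      const usage = completion.data.usage;
      console.log(
        `prompt: ${usage.prompt_tokens}, completion: ${usage.completion_tokens}, total: ${usage.total_tokens}`
      );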

Summary

In this tutorial, we learned how to set up the quickstart application, change the temperature of the responses, and craft a new prompt string. It's just the beginning, but you now have a working app that uses ChatGPT, and it's your first step toward building more robust applications with it.

Learning more about ChatGPT & GenAI

Want to take your ChatGPT skills to the next level? The videos below cover how to use prompt engineering to get the most out of tools like ChatGPT, security concerns you should take into account, and how to explore GenAI models and architecture. You can access them with a free 10-day Pluralsight trial (and, you know, see what you think of Pluralsight courses in general! No presh).

If you're more the reading sort (which you probably are if you've gotten this far), here are some other articles you might find useful:

If your organization is using ChatGPT, you'll definitely want to be aware of plugins: third-party add-ons that make the chatbot more useful. Here are some articles to get started:

Jeremy Morgan


Jeremy Morgan is a consultant, tech blogger, and speaker. He likes to stay immersed with the latest in .NET Development and DevOps during the day and working on Linux machines and microcontrollers at night.
