Getting Started with Azure OpenAI Service
Introduction
Azure OpenAI is a service that provides API access to OpenAI’s powerful language models such as GPT-4 and GPT-3.5 Turbo. You can access the service through REST APIs, the Python SDK, or Azure OpenAI Studio. Businesses can use it for content creation and design, personalized marketing, chatbots and virtual assistants, and more, all with the enterprise-level security built into the Azure platform. In this tutorial, we’ll use Azure OpenAI Studio to demonstrate how to set up an Azure OpenAI resource and get started using it.
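As a preview of what that API access looks like, here is a minimal sketch using the openai Python package against an Azure OpenAI resource. It assumes you have already created a resource and a model deployment (both are covered below); the endpoint, key, API version, and deployment name are placeholders you would replace with your own values.

from openai import AzureOpenAI  # pip install openai

# Placeholder values -- swap in your resource's endpoint and key and the
# name you give your model deployment later in this tutorial.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE-NAME.openai.azure.com/",
    api_key="YOUR-API-KEY",
    api_version="2024-02-01",  # a recent GA API version; adjust as needed
)

response = client.chat.completions.create(
    model="YOUR-DEPLOYMENT-NAME",  # the deployment name, not the model name
    messages=[{"role": "user", "content": "Write a one-line product tagline."}],
)
print(response.choices[0].message.content)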
Setting Up
Setting up the OpenAI service in Azure is straightforward but has a few restrictions. The main one is that it must be set up with a corporate email address, not a personal one. This means that you can’t sign up with your Gmail or Outlook account. It also means that when you sign up, you’ll have access to your business’s OpenAI account, so you may find some of the setup already done by administrators, and you may have limits on your account based on your role. This tutorial covers starting from scratch, but before you begin, make sure you’re a proper representative of your company’s domain and clear things with your IT department.
Assuming you have a company email address and your company’s blessing, you first need an Azure account. If you don’t have one, you can go to Azure’s AI Services page and sign up.
Note: Azure does not allow the free account tier to access OpenAI services.
Once you have an Azure subscription, you need to go to the Azure Portal Page, sign in, and get your subscription ID. You’ll find it through the “Subscriptions” service. If you don’t see it on the home page, type “Subscriptions” in the search bar to find it.
Under “Subscriptions” you’ll see a table listing your Azure subscription(s). Click on the subscription name to open a page with details about your subscription. If you hover over the subscription ID on this page, a link appears that lets you copy the subscription ID to the clipboard. Copy that ID, then request access to the Azure OpenAI Service through this form. You’ll need your subscription ID to fill out the form.
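If you’d rather look up your subscription ID programmatically, here is a rough sketch using the Azure SDK for Python. It assumes the azure-identity and azure-mgmt-resource packages are installed and that you’re already signed in to Azure (for example, through the Azure CLI), so DefaultAzureCredential can pick up your credentials.

# pip install azure-identity azure-mgmt-resource
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import SubscriptionClient

# Uses whatever signed-in credentials it can find (Azure CLI, environment, etc.).
credential = DefaultAzureCredential()
subscription_client = SubscriptionClient(credential)

# Print the ID and display name of every subscription your account can see.
for sub in subscription_client.subscriptions.list():
    print(sub.subscription_id, sub.display_name)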
Once the form is filled out and submitted, you’ll need to wait for an email from Microsoft confirming your access to Azure OpenAI. Save the email because it will have important information you may need later.
Once you’re approved, head back to the Azure Portal and type “Azure OpenAI” into the search bar. Click on the Azure OpenAI service and then click on “create”.
On the next screen, you can choose your subscription, resource group (you can create a new one from this page), and region, and enter a name for the resource. There is a selection for pricing tier that only has a single option at this time. After filling out this form, click the “Next” button to go to a screen where you set the network security for your resource. Clicking “Next” again lets you set up optional tags for your resource. One final click on “Next” lets you review your entries. If everything looks good, click “Create” to create your resource. It will take a few minutes for your new resource to be deployed.
Once it is deployed, it will appear on the list of Azure OpenAI services. Again, you can reach this by typing “Azure OpenAI” into the search bar and clicking on the Azure OpenAI service. There, you can click on the name of your new resource to see a page with details about it. On that page is a link to go to Azure OpenAI Studio, which is where we’ll conduct the remainder of this tutorial.
Alternatively, you can go to Azure OpenAI Studio directly at https://oai.azure.com/portal (you may have to log in again), and you will be prompted to select your Azure OpenAI resource.
Working with Azure OpenAI Studio
When you open Azure OpenAI Studio, you’ll see a notice that you don’t have any deployments. To proceed, you must create one. This is where you select the OpenAI model you want to use; you can have multiple deployments with different models to use with your resources. You can learn about the various models in the documentation. The notice has a button you can click for “Create New Deployment.” Alternatively, you can create a new deployment by going to the menu on the left of the home page and selecting “Management > Deployments,” then clicking the “Create New Deployment” link on the manage deployments page.
A box will pop up allowing you to select a model and a version and to name the deployment. For this tutorial, use “gpt-35-turbo” and leave the model version as the default. Then name the deployment and click “Create”. The new deployment will appear in the list on the manage deployments page. Once you have a deployment, you can click the link for the home page (at the top of the left sidebar menu), and we can look at some of the playgrounds Azure OpenAI Studio gives us.
Note: Not all models will work with all playgrounds.
There are four playgrounds available: the Chat Playground, the Completions Playground, the DALL-E Playground, and Bring Your Own Data. We’ll concentrate on the first two.
The Completions Playground
The completions playground lets you experiment with prompt engineering for your selected model. It has built-in examples that demonstrate the selected model in action, along with a set of parameters you can adjust to shape its responses. Let’s try out one of the examples.
Note: There are two potential errors you might see here. One is “Deployment Not Found,” which will happen if you try using a deployment before Azure is finished creating it. If that happens, wait a few minutes and try again. The other potential error is “Operation Not Supported,” which happens if you have a model/version that doesn’t support this playground. If that happens, go and create a new deployment with a different model.
The Completions Playground page is dominated by a large text box. Above it are two drop-down boxes. The box on the left selects the deployment you want to use. Right now, it should default to the deployment you just created; if not, click on it and select the deployment you made with the default version of “gpt-35-turbo”. Click on the other dropdown and you’ll see a long list of demonstration prompts you can try out in the playground. Many of these examples are templates for particular tasks, such as creating SQL or Python code from natural language prompts, summarizing an article, or generating an email. Let’s choose “Generate an email”.
Selecting the example fills the text box with a pre-loaded prompt you can edit. You might also notice that selecting an example adjusts the parameters to the right of the text box. The examples aren’t just prompt examples, but examples of how to adjust these parameters for various purposes. In the case of “Generate an email,” the example sets the “Max length” to 350 tokens. Tokens are about 4 English characters each and are shared between the prompt and the response; the upper limit you can set is 4000. Below the text box is a token count showing that the email prompt takes up 75 tokens, so the model has 275 tokens left for its response. This is the parameter you’ll probably adjust most often in the playground.
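If you’re curious how those counts are produced, the open-source tiktoken library (a separate Python package, not part of Azure OpenAI Studio) lets you tokenize text yourself. A small sketch, assuming gpt-35-turbo uses the same tokenizer as OpenAI’s gpt-3.5-turbo:

# pip install tiktoken
import tiktoken

# Look up the tokenizer used by gpt-3.5-turbo.
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

prompt = "Write a product launch email for our new wireless headphones."
tokens = encoding.encode(prompt)
print(len(tokens), "tokens")  # roughly one token per 4 English characters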
Now that there’s a prompt in the text box, we want to see the model’s response. To do so, make any edits you want to the example prompt, then click the “Generate” button below the text box.
You can run the same prompt multiple times by using the “Regenerate” button. This is a great way to experiment with the parameters: run the same prompt repeatedly while tweaking the settings to see how they affect the output.
In addition to “Max length,” there are the following parameters (see the sketch after this list for how they map onto SDK arguments):
- Temperature: A measure of randomness. The lower the temperature, the more deterministic the responses.
- Top probabilities: Adjusts randomness like Temperature but uses a different method. Adjust one or the other, but not both.
- Frequency Penalty: Reduces the chance of repeating a token based on how often it appears. Decreases the chance of repeated text in a response.
- Presence Penalty: Reduces the chance of repeating any token in a response. Increases the likelihood of introducing new topics in a response.
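These playground settings correspond to arguments of the completions call in the Python SDK. Below is a minimal sketch reusing the placeholder client setup from the introduction; the values are illustrative, and whether the (legacy) completions endpoint is available depends on the model and version you deployed.

from openai import AzureOpenAI

# Same placeholder client setup as in the introduction sketch.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE-NAME.openai.azure.com/",
    api_key="YOUR-API-KEY",
    api_version="2024-02-01",
)

# Each playground setting has a matching argument on the completions call.
response = client.completions.create(
    model="YOUR-DEPLOYMENT-NAME",   # the deployment name from the Studio
    prompt="Write a short launch email for our new wireless headphones.",
    max_tokens=350,         # "Max length"
    temperature=0.7,        # "Temperature" (leave top_p alone if you adjust this)
    top_p=1.0,              # "Top probabilities"
    frequency_penalty=0.0,  # "Frequency Penalty"
    presence_penalty=0.0,   # "Presence Penalty"
)
print(response.choices[0].text)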
If you want to go deeper into these parameters, you can check out “Setting Parameters in OpenAI”.
Go ahead and experiment with the other example prompts.
The Chat Playground
The chat playground lets you configure and experiment with an AI chatbot. There are three panels on the Chat Playground page: one to set up the assistant, one for the chat itself, and one for configuration and parameters. The configuration and parameters panel is where you select the deployment to use. Again, if it doesn’t show the deployment you made earlier, select it.
In the assistant setup panel, you set the “system message,” which defines how the chatbot should behave. As in the Completions Playground, there are several built-in templates you can select from a dropdown. We’ll choose “Marketing Writing Assistant.”
Note: The system message counts against the 4000 token limit.
To use the chatbot, type questions into the text box at the bottom of the chat session panel and click the send button; the model will respond based on the information provided in the system message.
As with the Completions Playground, you can modify the built-in system messages before you use them. But remember that changing the system message resets the chat session, so prior chat responses will not inform the current chat even if they’re still visible in the chat window.
In the configuration panel, there is a “Parameters” tab that holds parameters similar to those in the Completions Playground. The main difference is that instead of “Max length” it has a “Max response” setting, which limits the number of tokens used for any one response in the chat.
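In SDK terms, the system message and the chat turns are simply entries in a messages list, and “Max response” corresponds to max_tokens. Here is a minimal sketch with a placeholder system message (the full “Marketing Writing Assistant” template text lives in the Studio), reusing the placeholder client from earlier.

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE-NAME.openai.azure.com/",
    api_key="YOUR-API-KEY",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="YOUR-DEPLOYMENT-NAME",
    messages=[
        # The system message plays the same role as the assistant setup panel.
        {"role": "system", "content": "You are a helpful marketing writing assistant."},
        {"role": "user", "content": "Give me three slogan ideas for a new coffee shop."},
    ],
    max_tokens=350,   # corresponds to the "Max response" setting
    temperature=0.7,
)
print(response.choices[0].message.content)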
Go ahead and chat with the marketing assistant chatbot.
Conclusion
This article has shown you how to get started with Azure OpenAI. We covered getting access to Azure OpenAI, creating the resource, and using Azure OpenAI Studio. We saw how to create a new deployment and how to start using the Completions and Chat playgrounds. You now have the foundation needed to use Azure OpenAI. If you want to look deeper into AI, you can check out “AI Prompting Best Practices”, “Detecting Hallucinations in Generative AI”, or “What are GPT Assistants?”.
We also offer many AI courses, such as “Intro to Generative AI” or “Intro to OpenAI GPT API”. These and more can help on your journey into Generative AI.
Author
The Codecademy Team, composed of experienced educators and tech experts, is dedicated to making tech skills accessible to all. We empower learners worldwide with expert-reviewed content that develops and enhances the technical skills needed to advance and succeed in their careers.