Breaking into tech can be hard without guidance. Many self-taught programmers and career switchers have a difficult time figuring out if their skills are up to par, and if not, what they need to learn to bridge the gap. You might’ve finished dozens of courses and even built a few projects, but is it enough to land a job? That’s why we’ve created our new AI resume analyzer: The job-readiness checker.
Our new job-readiness checker, powered by GPT-4 (the latest model from OpenAI, the company behind ChatGPT), helps evaluate how well you meet the requirements for a given role based on your skills and experience.
When you’re logged in to Codecademy, you can access the job-readiness checker under Resources in our navigation menu. Once you have a job posting’s URL from LinkedIn, just copy and paste the link along with your resume (or just copy and paste the full job description from another site). The job-readiness checker will parse through this information and the courses you’ve completed on Codecademy to generate a compatibility percentage that summarizes how well you meet the requirements. Here’s an example of how it can break down your compatibility and skill alignment with a given role.
The job-readiness checker is designed to take the guesswork out of your job search, so you can be strategic about where you send your resume. “The job-readiness checker crystallizes how ready you are for a particular role,” says Owen Ou, Codecademy Senior Product Manager who helped spearhead its development. With this tool, we’ll also break down exactly which skills you already have and which ones you need to pick up. “We’re hoping that this empowers learners to know like, Oh, I’m only 30% ready,” he says. “Hopefully that can motivate some to go through our content faster to accelerate getting that outcome.”
Here’s a peek behind the scenes at how our engineers used emergent AI technology to build our job-readiness tool, and how you can use the job-readiness checker in your own job search.
The project: Create a tool so learners can check if they’re ready to apply for a job.
When you’re learning to code so you can get a job in tech, it’s hard to figure out when you’re ready to take the next step and enter the job market. Our learners have diverse educational backgrounds and past professional experiences, so there’s no one-size-fits-all way to evaluate job readiness. That’s why we thought this would be a good opportunity to use generative AI.
The biggest tasks involved in developing the job-readiness checker included:
- Creating an API endpoint for OpenAI calls
- Developing a new data model to save users’ sensitive resume information
- Calibrating prompts to ensure ChatGPT provides the desired output
Investigation and roadmapping
Owen: “Previous to AI, the only way that a learner would be able to access this sort of capability was to sit down with a real human. A human advisor would have to review what courses and projects you’ve done, how well you’ve done on each, and tell you manually [if you’re ready to apply] by reviewing the data that they have about you and looking at the roles that you’re interested in.
I’d followed developments in AI and neural networks for many years. With the latest iteration coming out of OpenAI and how easy and powerful GPT-4 was, it seemed like we could start tapping into the predictive capabilities a little bit more. The project was basically connecting lots of different dots: the learner pain point, the business interest, and the technology trend that was developing. We thought this was an interesting problem to work on.
The first part was just a tactical investigation. We had Jon, our tech lead, spend quite a bit of time initially just playing around because it’s a bleeding-edge technology; no one really knows what it’s capable of, and it changes every month. We were just tinkering and messing around with this technology in the back end against our hypothesis. This probably took over a month, just seeing what it was capable of, and could it return a score that roughly made sense based on the Codecademy progress data that we fed it? Directionally, we were checking different components of our vision and whether the existing technology could deliver something. There’s like a huge laundry list of tools that we used — you’ll have to ask our Senior Software Engineer Jon Sanders.”
Jon: “This is the first Codecademy feature that uses ChatGPT, so we started by making a new microservice which contains the endpoints for all our AI-related API calls (right now just OpenAI). This took some time to get right, and we are still iterating on it, but it’s nice to have one service to track all of our AI usage. It allows us to see how much each feature is spending on API calls and how they are being used, and we can add all of our various prompts there (amongst other things). It’s good we built it early, because now multiple teams are making use of it for upcoming AI-related features.
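To make the idea of one central AI service concrete, here’s a minimal sketch of per-feature usage and spend tracking. The class, field names, and the per-1K-token prices are our own illustration, not Codecademy’s actual implementation (check OpenAI’s current pricing page for real rates):

```python
from collections import defaultdict
from dataclasses import dataclass, field

# Illustrative per-1K-token input prices from the mid-2023 "0613" models;
# real pricing differs for input vs. output tokens and changes over time.
PRICE_PER_1K = {"gpt-3.5-turbo-0613": 0.0015, "gpt-4-0613": 0.03}


@dataclass
class AIGateway:
    """Toy stand-in for a central AI service: every feature routes its
    OpenAI calls through one place, so usage and spend can be tracked
    per feature."""

    spend: dict = field(default_factory=lambda: defaultdict(float))
    calls: dict = field(default_factory=lambda: defaultdict(int))

    def record(self, feature: str, model: str, tokens: int) -> None:
        # Tally one API call and its estimated cost against the feature.
        self.calls[feature] += 1
        self.spend[feature] += tokens / 1000 * PRICE_PER_1K[model]


gateway = AIGateway()
gateway.record("job_readiness", "gpt-4-0613", 2000)
gateway.record("job_readiness", "gpt-4-0613", 1000)
```

In a real deployment this bookkeeping would live in the service’s request middleware, with token counts read from the API response, but the principle is the same: one choke point makes per-feature cost visible.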
Once we got the service running, we started building the job-readiness checker’s frontend. At some point, I became focused on building the features instead of modifying the prompts, so our team’s manager, Aditya Srinivasan, and Curriculum Developer Melanie Williams began work on calibrating the prompts to give us the desired outputs: accurate scores of job-readiness and helpful feedback for what someone should work on to become more compatible with a given job posting. Aditya made a script to run many examples through the service at one time, so we could see how any prompt changes affect a host of different example learners.
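A batch-calibration harness like the one described might look roughly like this. The scoring stub, prompt variants, and example learners below are all hypothetical; in the real script, `score_with_prompt` would call GPT-4 through the service rather than fake a score:

```python
# Sketch of a prompt-calibration harness: run a set of example learners
# through several prompt variants and compare the resulting scores.


def score_with_prompt(prompt_template: str, learner: dict) -> int:
    """Stand-in for the real GPT-4 call, faked so the harness can be
    demonstrated offline. Here, more completed courses means a higher
    score, and a hypothetical stricter prompt scores lower."""
    base = 10 * len(learner["completed_courses"])
    bonus = 0 if "be strict" in prompt_template else 15
    return min(100, base + bonus)


def compare_prompts(prompts: dict, learners: list) -> dict:
    """Score every example learner under every prompt variant, so one
    prompt change can be checked against many examples at once."""
    return {
        name: [score_with_prompt(tmpl, lrn) for lrn in learners]
        for name, tmpl in prompts.items()
    }


learners = [
    {"completed_courses": ["python"]},
    {"completed_courses": ["python", "sql", "git"]},
]
prompts = {
    "v1": "Rate this learner's readiness for the role.",
    "v2": "Rate this learner's readiness for the role, and be strict.",
}
results = compare_prompts(prompts, learners)
```

Side-by-side score lists per prompt variant make regressions easy to spot: if a wording tweak suddenly drops scores for learners who should be well-matched, it shows up across the whole example set at once.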
The most important feature of GPT-4 (also available in GPT-3.5) is function_calling, which allows us to get predictably formatted JSON the way you would expect from a “normal” API. Before function_calling was released, we were having trouble ensuring ChatGPT’s responses were in the correct format, which made building the feature nearly impossible. I imagine almost all of the upcoming features from Codecademy that use ChatGPT will also make use of function_calling. It’s a vital part of the API for us.”
Owen: “It’s a novel use case, and as a result, we’ve had to build new playbooks. We’re just using principles and working through the muck, the mess, like the low-level details to kind of achieve an objective. It’s not like we’re just following a standard process that other projects have. It’s just a lot of on-the-fly problem solving, and using resources and talent within the company that you think have the raw skills to help tackle these problems. And then just trust that it can be done.”
Jon: “We did a lot of testing with GPT-3.5-turbo-0613 and GPT-4-0613 and found that we get much more accurate and reliable results with GPT-4. We had hoped to use GPT-3.5 because it is faster and less expensive, but the qualitative difference in results, for a prompt as complicated as we are using, was obvious.
Unfortunately, this means our learners need to wait up to 30 seconds for a job report to get generated, so building a UI that reflects this delay without being confusing was an interesting design challenge that our designer, Mat Stevens, did a great job with.”
Owen: “We were all holding our breath to see if this would work. Every day we’d be like, Oh my god, could it do it? No. But then we’d try a workaround. When you’re doing stuff that hasn’t really been done before, you know that some of your ideas just aren’t going to be feasible. But with this one, we were both lucky and well-timed. We knew the wave was coming, and we bet that this wave would be the right wave to ride — and you could crash a little bit once in a while.”
Owen: “This one stood out against the other projects I’ve worked on because we didn’t even know it was possible. We hadn’t seen this done before in the edtech industry — that’s what’s unique about this project. We were pioneering.”
Jon: “I’d like to add a shout-out to Ahmed Abdallah, Staff Engineer — huge snaps for building out a lot of the AI-service, including rate-limiting, infrastructure, and security. His help was vital to the project.”