AI Interview Coach Agent Skill — AI Pass API

Add interview preparation capability to your AI agent. Generates role-specific questions and provides STAR-method coaching on answers.

Skill Overview

  • Name: interview-coach
  • Description: Generate interview questions and evaluate candidate answers
  • Model: gpt-5-mini
  • API: AI Pass chat completions

Skill File

---
name: interview-coach
description: Generate interview questions and evaluate candidate answers
version: 1.0.0
---

Setup

export AIPASS_API_KEY="your-api-key"

Usage

generate_questions(role, level, interview_type, count)
evaluate_answer(question, answer, role, level)

Implementation

import json
import os
import re

import requests

AIPASS_API_KEY = os.environ["AIPASS_API_KEY"]
ENDPOINT = "https://aipass.one/apikey/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {AIPASS_API_KEY}", "Content-Type": "application/json"}

def generate_questions(role: str, level: str = "mid-level", interview_type: str = "behavioral", count: int = 5) -> list:
    """Return a list of interview question strings for the given role."""
    r = requests.post(ENDPOINT, headers=HEADERS, timeout=60, json={
        "model": "gpt-5-mini",
        "temperature": 1,
        "max_tokens": 16000,
        "messages": [
            {"role": "system", "content": f"Generate exactly {count} {interview_type} interview questions for a {level} {role}. Return a JSON array of question strings only."},
            {"role": "user", "content": "Generate the questions."}
        ]
    })
    r.raise_for_status()
    content = r.json()["choices"][0]["message"]["content"]
    # The model may wrap the array in prose; extract the first [...] span.
    match = re.search(r'\[.*\]', content, re.DOTALL)
    return json.loads(match.group() if match else content)
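The single-regex extraction above works when the model returns a bare array, but models sometimes wrap JSON in markdown code fences. A more defensive parser might look like this (a sketch, not part of the skill file; the helper name `extract_json_array` is illustrative):

```python
import json
import re

def extract_json_array(content: str) -> list:
    """Best-effort extraction of a JSON array from model output."""
    # Strip a markdown code fence such as ```json ... ``` if present.
    fenced = re.search(r"```(?:json)?\s*(.*?)```", content, re.DOTALL)
    if fenced:
        content = fenced.group(1)
    # Fall back to the first [...] span anywhere in the text.
    match = re.search(r"\[.*\]", content, re.DOTALL)
    raw = match.group() if match else content
    data = json.loads(raw)
    if not isinstance(data, list):
        raise ValueError("expected a JSON array")
    return data
```

Raising on non-list output lets the caller retry the request instead of silently passing a dict downstream.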

def evaluate_answer(question: str, answer: str, role: str, level: str = "mid-level") -> dict:
    """Score an answer and return structured coaching feedback."""
    r = requests.post(ENDPOINT, headers=HEADERS, timeout=60, json={
        "model": "gpt-5-mini",
        "temperature": 1,
        "max_tokens": 16000,
        "messages": [
            {"role": "system", "content": f"You are an expert interview coach for {level} {role} roles. Evaluate answers and return JSON with: strengths (list), improvements (list), score (1-10), improved_example (string)."},
            {"role": "user", "content": f"Question: {question}\n\nAnswer: {answer}\n\nEvaluate and return JSON."}
        ]
    })
    r.raise_for_status()
    content = r.json()["choices"][0]["message"]["content"]
    try:
        match = re.search(r'\{.*\}', content, re.DOTALL)
        return json.loads(match.group() if match else content)
    except Exception:
        # Fall back to a neutral result when the model did not return valid JSON.
        return {"strengths": [], "improvements": [content], "score": 5, "improved_example": ""}

Examples

# Generate behavioral questions for a PM role
questions = generate_questions(
    role="Product Manager",
    level="senior",
    interview_type="behavioral",
    count=5
)
# -> ["Tell me about a product you launched...", ...]

# Evaluate an answer
feedback = evaluate_answer(
    question="Tell me about a time you handled a difficult stakeholder",
    answer="I once had a stakeholder who kept changing requirements...",
    role="Product Manager",
    level="senior"
)
# -> {"strengths": ["Good situation setup"], "score": 7, ...}

# Full mock interview session
def run_mock_interview(role: str, level: str = "mid-level"):
    questions = generate_questions(role, level, "mixed", 5)
    results = []
    for q in questions:
        print(f"\nQ: {q}")
        answer = input("Your answer: ")
        feedback = evaluate_answer(q, answer, role, level)
        results.append({"question": q, "score": feedback.get("score", 0)})
        print(f"Score: {feedback.get('score', 0)}/10")
    if not results:
        return {"questions": 0, "avg_score": 0.0}
    avg = sum(r["score"] for r in results) / len(results)
    return {"questions": len(results), "avg_score": round(avg, 1)}
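The averaging step at the end of the mock-interview loop can be factored into a pure helper, which makes it unit-testable without live API calls (`summarize_session` is an illustrative name, not part of the skill):

```python
def summarize_session(results: list) -> dict:
    """Aggregate per-question scores into a session summary."""
    if not results:
        # Avoid division by zero when no questions were answered.
        return {"questions": 0, "avg_score": 0.0}
    avg = sum(r.get("score", 0) for r in results) / len(results)
    return {"questions": len(results), "avg_score": round(avg, 1)}
```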

API Reference

Endpoint: POST https://aipass.one/apikey/v1/chat/completions

Get API key: aipass.one/panel/developer.html
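For reference, both functions send a request body of the following shape. The builder below is a sketch reconstructed from the implementation above (no request is actually made, and `build_chat_payload` is not part of the skill):

```python
def build_chat_payload(system_prompt: str, user_prompt: str) -> dict:
    """Build the JSON body for POST /apikey/v1/chat/completions."""
    return {
        "model": "gpt-5-mini",
        "temperature": 1,
        "max_tokens": 16000,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }
```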

Live App

AI Interview Prep
