
Phanindhra Kondru


Match You CV — Import Your Notion Resume, Tailor It with AI, Export a Perfect PDF

Notion MCP Challenge Submission 🧠

This is a submission for the Notion MCP Challenge

What I Built

Match You CV is an AI career assistant that lets job seekers import their resume directly from Notion, tailor it to any job description using AI, and export a polished, ATS-friendly PDF — all in under 10 minutes.

The core workflow:

  1. Connect Notion — One-click OAuth to link your Notion workspace
  2. Pick a page — Search and browse your Notion pages, select the one with your resume
  3. Preview & import — See a structured preview of your resume content before importing
  4. Edit in the builder — All your Notion data (contact, experience, education, skills, projects, certifications, languages) is parsed and mapped into our resume editor
  5. ATS Score Check — Paste a LinkedIn job link or any job description and get a detailed ATS match score with 6 scoring dimensions, keyword analysis, and actionable gaps
  6. AI Tailor — Let AI rewrite your resume to match the role — optimized for ATS, with optional company culture matching (Amazon, Google, Startup, Enterprise, Consulting styles)
  7. Share Your Score — Generate a visual Career Match Card and share it on LinkedIn with one click
  8. Export PDF — Download a clean, professional PDF ready to submit

We also provide a Notion resume template that users can duplicate into their workspace, fill in their details, and import directly into Match You CV.

Why Notion?

Many professionals already maintain their career information in Notion — it's their personal knowledge base. Rather than forcing users to re-type everything into yet another form, we meet them where their data already lives. Notion becomes the single source of truth for your career history, and Match You CV becomes the tool that transforms it into job-winning applications.

Live at: matchyou.cv

Show us the code

The Notion integration spans the full stack — frontend (React + TypeScript), backend (Supabase Edge Functions / Deno), and database (Postgres with RLS). Here's the complete implementation:


Database Schema

Stores Notion OAuth tokens per user with Row Level Security:

-- supabase/migrations/20250306000000_notion_connections.sql

create table if not exists public.notion_connections (
  id uuid primary key default gen_random_uuid(),
  user_id uuid not null references auth.users(id) on delete cascade,
  access_token text not null,
  workspace_id text not null,
  workspace_name text,
  bot_id text,
  created_at timestamptz not null default now(),
  updated_at timestamptz not null default now(),
  unique(user_id)
);

alter table public.notion_connections enable row level security;

create policy "Users can read own connection"
  on public.notion_connections for select
  using (auth.uid() = user_id);

create policy "Users can delete own connection"
  on public.notion_connections for delete
  using (auth.uid() = user_id);

Backend: Notion OAuth Edge Function

Handles token exchange, connection status, and disconnection — all server-side so secrets never touch the browser:

// supabase/functions/notion-auth/index.ts

import "jsr:@supabase/functions-js/edge-runtime.d.ts"
import { createClient } from "https://esm.sh/@supabase/supabase-js@2"

const cors = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Headers":
    "authorization, x-client-info, apikey, content-type",
}

function jsonResponse(data: unknown, status = 200) {
  return new Response(JSON.stringify(data), {
    status,
    headers: { ...cors, "Content-Type": "application/json" },
  })
}

async function getUser(req: Request) {
  const auth = req.headers.get("Authorization")
  if (!auth?.startsWith("Bearer ")) return null
  const supabase = createClient(
    Deno.env.get("SUPABASE_URL")!,
    Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!
  )
  const {
    data: { user },
  } = await supabase.auth.getUser(auth.slice(7))
  return user
}

Deno.serve(async (req) => {
  if (req.method === "OPTIONS")
    return new Response(null, { headers: cors })

  try {
    const user = await getUser(req)
    if (!user) return jsonResponse({ error: "Unauthorized" }, 401)

    const { action, code, redirectUri } = await req.json()

    if (action === "exchange")
      return await handleExchange(user.id, code, redirectUri)
    if (action === "status") return await handleStatus(user.id)
    if (action === "disconnect")
      return await handleDisconnect(user.id)

    return jsonResponse({ error: "Unknown action" }, 400)
  } catch (e) {
    return jsonResponse({ error: String(e) }, 500)
  }
})

async function handleExchange(
  userId: string,
  code: string,
  redirectUri: string
) {
  const clientId = Deno.env.get("NOTION_OAUTH_CLIENT_ID")
  const clientSecret = Deno.env.get("NOTION_OAUTH_CLIENT_SECRET")
  if (!clientId || !clientSecret)
    return jsonResponse({ error: "Notion OAuth not configured" }, 500)

  // Exchange code for access token
  const tokenRes = await fetch(
    "https://api.notion.com/v1/oauth/token",
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization:
          "Basic " + btoa(`${clientId}:${clientSecret}`),
      },
      body: JSON.stringify({
        grant_type: "authorization_code",
        code,
        redirect_uri: redirectUri,
      }),
    }
  )

  if (!tokenRes.ok) {
    const err = await tokenRes.text()
    return jsonResponse(
      { error: "Notion token exchange failed", details: err },
      502
    )
  }

  const tokenData = await tokenRes.json()

  // Store in database
  const supabase = createClient(
    Deno.env.get("SUPABASE_URL")!,
    Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!
  )

  const { error: dbError } = await supabase
    .from("notion_connections")
    .upsert(
      {
        user_id: userId,
        access_token: tokenData.access_token,
        workspace_id: tokenData.workspace_id,
        workspace_name: tokenData.workspace_name ?? null,
        bot_id: tokenData.bot_id,
        updated_at: new Date().toISOString(),
      },
      { onConflict: "user_id" }
    )

  if (dbError)
    return jsonResponse(
      { error: "Failed to save connection", details: dbError.message },
      500
    )

  return jsonResponse({
    connected: true,
    workspaceName: tokenData.workspace_name,
  })
}

async function handleStatus(userId: string) {
  const supabase = createClient(
    Deno.env.get("SUPABASE_URL")!,
    Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!
  )
  const { data } = await supabase
    .from("notion_connections")
    .select("workspace_name, workspace_id, created_at")
    .eq("user_id", userId)
    .single()

  return jsonResponse({
    connected: !!data,
    workspaceName: data?.workspace_name ?? null,
  })
}

async function handleDisconnect(userId: string) {
  const supabase = createClient(
    Deno.env.get("SUPABASE_URL")!,
    Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!
  )
  await supabase
    .from("notion_connections")
    .delete()
    .eq("user_id", userId)

  return jsonResponse({ connected: false })
}

Backend: Notion Pages Edge Function

Searches the user's workspace and reads page blocks with full pagination:

// supabase/functions/notion-pages/index.ts

import "jsr:@supabase/functions-js/edge-runtime.d.ts"
import { createClient } from "https://esm.sh/@supabase/supabase-js@2"

const NOTION_API = "https://api.notion.com/v1"
const NOTION_VERSION = "2022-06-28"

const cors = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Headers":
    "authorization, x-client-info, apikey, content-type",
}

function jsonResponse(data: unknown, status = 200) {
  return new Response(JSON.stringify(data), {
    status,
    headers: { ...cors, "Content-Type": "application/json" },
  })
}

async function getUserAndToken(req: Request) {
  const auth = req.headers.get("Authorization")
  if (!auth?.startsWith("Bearer ")) return null
  const supabase = createClient(
    Deno.env.get("SUPABASE_URL")!,
    Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!
  )
  const {
    data: { user },
  } = await supabase.auth.getUser(auth.slice(7))
  if (!user) return null

  const { data: conn } = await supabase
    .from("notion_connections")
    .select("access_token")
    .eq("user_id", user.id)
    .single()

  if (!conn) return null
  return { userId: user.id, notionToken: conn.access_token }
}

function notionHeaders(token: string) {
  return {
    Authorization: `Bearer ${token}`,
    "Notion-Version": NOTION_VERSION,
    "Content-Type": "application/json",
  }
}

Deno.serve(async (req) => {
  if (req.method === "OPTIONS")
    return new Response(null, { headers: cors })

  try {
    const ctx = await getUserAndToken(req)
    if (!ctx)
      return jsonResponse(
        { error: "Unauthorized or Notion not connected" },
        401
      )

    const { action, pageId, query } = await req.json()

    if (action === "search")
      return await handleSearch(ctx.notionToken, query)
    if (action === "read" && pageId)
      return await handleRead(ctx.notionToken, pageId)

    return jsonResponse({ error: "Unknown action" }, 400)
  } catch (e) {
    return jsonResponse({ error: String(e) }, 500)
  }
})

/** Search for pages in the user's Notion workspace */
async function handleSearch(token: string, query?: string) {
  const res = await fetch(`${NOTION_API}/search`, {
    method: "POST",
    headers: notionHeaders(token),
    body: JSON.stringify({
      query: query ?? "",
      filter: { value: "page", property: "object" },
      page_size: 20,
    }),
  })

  if (!res.ok) {
    const err = await res.text()
    return jsonResponse(
      { error: "Notion search failed", details: err },
      502
    )
  }

  const data = await res.json()

  const pages = data.results.map((p: any) => {
    let title = "Untitled"
    if (p.properties) {
      for (const prop of Object.values(p.properties) as any[]) {
        if (prop.title && prop.title.length > 0) {
          title = prop.title
            .map((t: any) => t.plain_text)
            .join("")
          break
        }
      }
    }
    return {
      id: p.id,
      title,
      icon: p.icon?.emoji ?? null,
      lastEdited: p.last_edited_time ?? null,
    }
  })

  return jsonResponse({ pages })
}

/** Read all blocks from a Notion page and extract text content */
async function handleRead(token: string, pageId: string) {
  // Fetch page metadata for title
  const pageRes = await fetch(`${NOTION_API}/pages/${pageId}`, {
    headers: notionHeaders(token),
  })
  let pageTitle = ""
  if (pageRes.ok) {
    const pageData = await pageRes.json()
    if (pageData.properties) {
      for (const prop of Object.values(pageData.properties) as any[]) {
        if (prop.title && prop.title.length > 0) {
          pageTitle = prop.title
            .map((t: any) => t.plain_text)
            .join("")
          break
        }
      }
    }
  }

  // Fetch all blocks with pagination
  const blocks: any[] = []
  let cursor: string | undefined

  do {
    const url = new URL(`${NOTION_API}/blocks/${pageId}/children`)
    url.searchParams.set("page_size", "100")
    if (cursor) url.searchParams.set("start_cursor", cursor)

    const res = await fetch(url.toString(), {
      headers: notionHeaders(token),
    })

    if (!res.ok) {
      const err = await res.text()
      return jsonResponse(
        { error: "Failed to read page", details: err },
        502
      )
    }

    const data = await res.json()
    blocks.push(...data.results)
    cursor = data.has_more ? data.next_cursor : undefined
  } while (cursor)

  // Convert blocks to structured sections
  const sections = extractSections(blocks)
  return jsonResponse({ pageTitle, sections })
}

interface Section {
  heading: string
  items: string[]
}

function getRichText(block: any): string {
  const typeData = block[block.type]
  if (!typeData?.rich_text) return ""
  return typeData.rich_text.map((t: any) => t.plain_text).join("")
}

/** Extract structured sections from Notion blocks */
function extractSections(blocks: any[]): Section[] {
  const sections: Section[] = []
  let current: Section = { heading: "", items: [] }

  for (const block of blocks) {
    const type = block.type

    if (
      type === "heading_1" ||
      type === "heading_2" ||
      type === "heading_3"
    ) {
      if (current.heading || current.items.length > 0)
        sections.push(current)
      current = { heading: getRichText(block), items: [] }
    } else if (
      [
        "paragraph",
        "bulleted_list_item",
        "numbered_list_item",
        "to_do",
        "toggle",
        "callout",
        "quote",
      ].includes(type)
    ) {
      const text = getRichText(block).trim()
      if (text) current.items.push(text)
    } else if (type === "divider") {
      if (current.heading || current.items.length > 0)
        sections.push(current)
      current = { heading: "", items: [] }
    }
  }

  if (current.heading || current.items.length > 0)
    sections.push(current)

  return sections
}
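To make the grouping rules concrete, here is the extractor run against a couple of mock blocks. The functions are reproduced from above (with the block-type list trimmed to two branches), and the mock shapes mirror Notion's Block Children API results:

```typescript
interface Section { heading: string; items: string[] }

function getRichText(block: any): string {
  const typeData = block[block.type]
  if (!typeData?.rich_text) return ''
  return typeData.rich_text.map((t: any) => t.plain_text).join('')
}

function extractSections(blocks: any[]): Section[] {
  const sections: Section[] = []
  let current: Section = { heading: '', items: [] }
  for (const block of blocks) {
    if (['heading_1', 'heading_2', 'heading_3'].includes(block.type)) {
      // A heading closes the previous section and starts a new one
      if (current.heading || current.items.length > 0) sections.push(current)
      current = { heading: getRichText(block), items: [] }
    } else if (['paragraph', 'bulleted_list_item'].includes(block.type)) {
      const text = getRichText(block).trim()
      if (text) current.items.push(text)
    }
  }
  if (current.heading || current.items.length > 0) sections.push(current)
  return sections
}

// Mock blocks shaped like Notion's Block Children API results
const rt = (s: string) => ({ rich_text: [{ plain_text: s }] })
const mockBlocks = [
  { type: 'heading_2', heading_2: rt('Experience') },
  { type: 'bulleted_list_item', bulleted_list_item: rt('Software Engineer at Google') },
  { type: 'heading_2', heading_2: rt('Skills') },
  { type: 'paragraph', paragraph: rt('TypeScript, React') },
]
const sections = extractSections(mockBlocks)
// → [{ heading: 'Experience', items: ['Software Engineer at Google'] },
//    { heading: 'Skills', items: ['TypeScript, React'] }]
```

The remaining list-like block types (to-dos, toggles, callouts, quotes) and dividers follow the same two branches shown here.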

Frontend: Notion Service Layer

Client-side functions that communicate with the edge functions:

// src/services/notion.ts

import { supabase } from '../lib/supabase'

const NOTION_CLIENT_ID = import.meta.env.VITE_NOTION_CLIENT_ID

/** Build the Notion OAuth authorization URL */
export function getNotionAuthUrl(): string {
  const redirectUri = `${window.location.origin}/notion/callback`
  const params = new URLSearchParams({
    client_id: NOTION_CLIENT_ID,
    response_type: 'code',
    owner: 'user',
    redirect_uri: redirectUri,
  })
  return `https://api.notion.com/v1/oauth/authorize?${params}`
}

async function callEdgeFunction(
  fnName: string,
  body: Record<string, unknown>
) {
  if (!supabase) throw new Error('Supabase not configured')
  const {
    data: { session },
  } = await supabase.auth.getSession()
  if (!session) throw new Error('Not authenticated')

  const supabaseUrl = import.meta.env.VITE_SUPABASE_URL
  const res = await fetch(
    `${supabaseUrl}/functions/v1/${fnName}`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${session.access_token}`,
      },
      body: JSON.stringify(body),
    }
  )

  const data = await res.json()
  if (!res.ok) throw new Error(data.error ?? 'Request failed')
  return data
}

/** Exchange OAuth code for token and store connection */
export async function exchangeNotionCode(
  code: string
): Promise<{ workspaceName: string }> {
  const redirectUri = `${window.location.origin}/notion/callback`
  return callEdgeFunction('notion-auth', {
    action: 'exchange',
    code,
    redirectUri,
  })
}

/** Check if user has a Notion connection */
export async function getNotionStatus(): Promise<{
  connected: boolean
  workspaceName: string | null
}> {
  return callEdgeFunction('notion-auth', { action: 'status' })
}

/** Disconnect Notion */
export async function disconnectNotion(): Promise<void> {
  await callEdgeFunction('notion-auth', { action: 'disconnect' })
}

/** Search pages in user's Notion workspace */
export async function searchNotionPages(
  query?: string
): Promise<
  Array<{
    id: string
    title: string
    icon: string | null
    lastEdited: string | null
  }>
> {
  const data = await callEdgeFunction('notion-pages', {
    action: 'search',
    query,
  })
  return data.pages
}

/** Read a Notion page and get structured sections */
export async function readNotionPage(
  pageId: string
): Promise<{
  pageTitle: string
  sections: Array<{ heading: string; items: string[] }>
}> {
  const data = await callEdgeFunction('notion-pages', {
    action: 'read',
    pageId,
  })
  return { pageTitle: data.pageTitle ?? '', sections: data.sections }
}
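The authorization URL in getNotionAuthUrl is pure string construction, so it's easy to verify in isolation. A standalone sketch with the client ID and redirect URI passed in as parameters (a hypothetical refactor for illustration):

```typescript
// Hypothetical standalone version of getNotionAuthUrl: the client ID and
// redirect URI are parameters instead of coming from env vars and window.location
function buildNotionAuthUrl(clientId: string, redirectUri: string): string {
  const params = new URLSearchParams({
    client_id: clientId,
    response_type: 'code',
    owner: 'user', // request a user-level grant
    redirect_uri: redirectUri,
  })
  return `https://api.notion.com/v1/oauth/authorize?${params}`
}

buildNotionAuthUrl('abc', 'https://matchyou.cv/notion/callback')
// URLSearchParams percent-encodes the redirect URI automatically
```

Because URLSearchParams handles the encoding, the redirect URI survives round-tripping through Notion's consent screen without manual escaping.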

Frontend: Intelligent Resume Parser

The brain of the import — converts raw Notion sections into structured resume data:

// src/services/parseNotionResume.ts

import type { Resume } from '../types/resume'
import { emptyResume, emptyContact } from '../types/resume'

interface Section {
  heading: string
  items: string[]
}

const HEADING_PATTERNS: Record<string, RegExp> = {
  contact: /^(contact|info|personal|details)/i,
  summary: /^(summary|about|objective|profile|overview)/i,
  experience: /^(experience|work|employment|career|professional)/i,
  education: /^(education|academic|school|university|degree)/i,
  skills: /^(skills|technical|technologies|competencies|expertise)/i,
  tools: /^(tools|software|platforms)/i,
  projects: /^(projects|portfolio|works)/i,
  certifications: /^(certifications?|licenses?|credentials?)/i,
  languages: /^(languages?)/i,
}

function classifySection(heading: string): string {
  const h = heading.trim()
  for (const [key, pattern] of Object.entries(HEADING_PATTERNS)) {
    if (pattern.test(h)) return key
  }
  return 'unknown'
}

/** Parse "Role at Company" or "Role - Company" patterns */
function parseExperienceItem(
  text: string
): { role: string; company: string } | null {
  if (
    /^(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec|\d{4})\b/i.test(
      text.trim()
    )
  )
    return null
  const match = text.match(
    /^(.+?)\s+(?:at|@|-|–|—|,)\s+(.+)$/i
  )
  if (match)
    return { role: match[1].trim(), company: match[2].trim() }
  return null
}

/** Extract date ranges like "Jan 2020 - Present" */
function extractDateRange(
  text: string
): { dateFrom: string; dateTo: string; rest: string } {
  const datePattern =
    /\b((?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\.?\s+\d{4}|\d{4})\s*(?:[-–—]+|to)\s*((?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\.?\s+\d{4}|\d{4}|[Pp]resent|[Cc]urrent|[Nn]ow)\b/
  const match = text.match(datePattern)
  if (match) {
    return {
      dateFrom: match[1].trim(),
      dateTo: match[2].trim(),
      rest: text
        .replace(match[0], '')
        .replace(/[|,;]\s*$/, '')
        .replace(/^\s*[|,;]\s*/, '')
        .trim(),
    }
  }
  return { dateFrom: '', dateTo: '', rest: text }
}

/**
 * Parse structured sections from a Notion page into a Resume.
 * Uses heuristics to map headings to resume fields — no AI call needed.
 */
export function parseNotionSections(
  sections: Section[],
  pageTitle?: string
): Resume {
  const resume: Resume = {
    ...emptyResume,
    contact: { ...emptyContact },
    experience: [],
    education: [],
    skills: [],
    tools: [],
    otherSkills: [],
    projects: [],
    certifications: [],
    languages: [],
  }

  // ... classifies each section, parses experience entries,
  // extracts dates, contact info, skills, etc.
  // Full implementation handles edge cases like pages with no headings,
  // page titles as names, pipe-separated contact details, multiple date formats

  return resume
}
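Because the heuristics are pure functions, they can be sanity-checked without any Notion data. A quick run of the classifier and the experience parser over sample lines (patterns reproduced from the parser above, trimmed to three section types):

```typescript
// Patterns reproduced from the parser, trimmed to three section types
const HEADING_PATTERNS: Record<string, RegExp> = {
  experience: /^(experience|work|employment|career|professional)/i,
  education: /^(education|academic|school|university|degree)/i,
  skills: /^(skills|technical|technologies|competencies|expertise)/i,
}

function classifySection(heading: string): string {
  const h = heading.trim()
  for (const [key, pattern] of Object.entries(HEADING_PATTERNS)) {
    if (pattern.test(h)) return key
  }
  return 'unknown'
}

function parseExperienceItem(text: string): { role: string; company: string } | null {
  // Lines that start with a month or year are date lines, not "Role at Company" headers
  if (/^(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec|\d{4})\b/i.test(text.trim())) {
    return null
  }
  const match = text.match(/^(.+?)\s+(?:at|@|-|–|—|,)\s+(.+)$/i)
  return match ? { role: match[1].trim(), company: match[2].trim() } : null
}

classifySection('Work History')            // → 'experience'
parseExperienceItem('Designer - Spotify')  // → { role: 'Designer', company: 'Spotify' }
parseExperienceItem('Jan 2020 - Present')  // → null (date line, skipped)
```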

Frontend: OAuth Callback Handler

Handles the redirect after Notion authorization:

// src/pages/NotionCallback.tsx

import { useEffect, useState, useRef } from 'react'
import { useNavigate, useSearchParams } from 'react-router-dom'
import toast from 'react-hot-toast'
import { exchangeNotionCode } from '../services/notion'

export function NotionCallback() {
  const [searchParams] = useSearchParams()
  const navigate = useNavigate()
  const [status, setStatus] = useState<'loading' | 'error'>('loading')
  const exchanged = useRef(false)

  useEffect(() => {
    // OAuth codes are single-use, and React 18 StrictMode runs effects
    // twice in development; this ref ensures the code is exchanged only once
    if (exchanged.current) return
    exchanged.current = true

    const code = searchParams.get('code')
    const error = searchParams.get('error')

    if (error) {
      toast.error('Notion authorization was denied')
      navigate('/app/import-notion', { replace: true })
      return
    }

    if (!code) {
      toast.error('No authorization code received')
      navigate('/app/import-notion', { replace: true })
      return
    }

    exchangeNotionCode(code)
      .then((data) => {
        toast.success(
          `Connected to ${data.workspaceName || 'Notion'}!`
        )
        navigate('/app/import-notion', { replace: true })
      })
      .catch((err) => {
        console.error('Notion exchange failed:', err)
        toast.error(err.message || 'Failed to connect Notion')
        setStatus('error')
      })
  }, [searchParams, navigate])

  // ... renders loading spinner or error state
}

Frontend: Import Page (UI)

The full import experience — connect, search, preview, and import:

// src/pages/ImportNotion.tsx (key logic, UI simplified for brevity)

import {
  getNotionAuthUrl,
  getNotionStatus,
  disconnectNotion,
  searchNotionPages,
  readNotionPage,
} from '../services/notion'
import { parseNotionSections } from '../services/parseNotionResume'

export function ImportNotion() {
  const navigate = useNavigate()
  const { setResume } = useResume()

  const [connected, setConnected] = useState(false)
  const [workspaceName, setWorkspaceName] = useState<string | null>(null)
  const [pages, setPages] = useState([])
  const [selectedPageId, setSelectedPageId] = useState<string | null>(null)
  const [preview, setPreview] = useState(null)

  // Check connection on mount
  useEffect(() => {
    getNotionStatus().then((status) => {
      setConnected(status.connected)
      setWorkspaceName(status.workspaceName)
      if (status.connected) loadPages()
    })
  }, [])

  const handleConnect = () => {
    window.location.href = getNotionAuthUrl()
  }

  const handleSelectPage = async (pageId: string) => {
    setSelectedPageId(pageId)
    const data = await readNotionPage(pageId)
    setPreview(data)
  }

  const handleImport = () => {
    if (!preview) return
    const resume = parseNotionSections(
      preview.sections,
      preview.pageTitle
    )
    setResume(resume)
    toast.success('Resume imported from Notion!')
    navigate('/app/editor')
  }

  // Renders:
  // - Not connected: Connect button + link to Notion resume template
  // - Connected: Split view with page search (left) and preview (right)
  // - Preview: Structured sections with "Import to Resume Editor" button
}

Backend: Server-Side AI (Security Architecture)

All AI calls run through Supabase Edge Functions — API keys never reach the browser. This is a critical security decision: Vite's VITE_* environment variables are embedded into the client-side JavaScript bundle, meaning anyone could extract them from browser DevTools. By routing through edge functions, we keep secrets server-side.

// supabase/functions/ai-tailor-resume/index.ts (simplified)

function getProvider() {
  const groqKey = Deno.env.get("GROQ_API_KEY")
  if (groqKey) {
    return {
      provider: "groq",
      url: "https://api.groq.com/openai/v1/chat/completions",
      apiKey: groqKey,
      model: "llama-3.3-70b-versatile"
    }
  }
  const openaiKey = Deno.env.get("OPENAI_API_KEY")
  if (openaiKey) {
    return {
      provider: "openai",
      url: "https://api.openai.com/v1/chat/completions",
      apiKey: openaiKey,
      model: "gpt-4o-mini"
    }
  }
  throw new Error("No AI provider configured")
}

Deno.serve(async (req) => {
  const { resume, jobDescription, atsAnalysis, cultureProfile } = await req.json()
  const { url, apiKey, model, provider } = getProvider()

  // Build prompts with optional ATS gaps and culture profile
  // Call AI provider server-side
  // Return tailored resume JSON

  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
    body: JSON.stringify({ model, messages: [/* system + user prompts */] }),
  })

  const data = await res.json()
  return jsonResponse({ resume: JSON.parse(data.choices[0].message.content) })
})
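The Groq-first, OpenAI-fallback selection is pure logic. Factored to take the environment as a plain record (a testable variant for illustration, not the deployed function), it looks like this:

```typescript
interface Provider {
  provider: string
  url: string
  apiKey: string
  model: string
}

// Groq is preferred when its key is present; OpenAI is the fallback
function selectProvider(env: Record<string, string | undefined>): Provider {
  const groqKey = env.GROQ_API_KEY
  if (groqKey) {
    return {
      provider: "groq",
      url: "https://api.groq.com/openai/v1/chat/completions",
      apiKey: groqKey,
      model: "llama-3.3-70b-versatile",
    }
  }
  const openaiKey = env.OPENAI_API_KEY
  if (openaiKey) {
    return {
      provider: "openai",
      url: "https://api.openai.com/v1/chat/completions",
      apiKey: openaiKey,
      model: "gpt-4o-mini",
    }
  }
  throw new Error("No AI provider configured")
}

selectProvider({ GROQ_API_KEY: "gsk_x", OPENAI_API_KEY: "sk_y" }).provider // → 'groq'
```

Both providers speak the OpenAI chat-completions shape, which is what makes the single fetch call downstream provider-agnostic.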

Five AI edge functions handle all AI operations:

  • ai-tailor-resume — Culture-aware resume tailoring with ATS gap targeting
  • ai-analyze-ats — 6-dimension ATS scoring with keyword extraction and actionable gaps
  • ai-cover-letter — Cover letter generation with relevance scoring
  • ai-parse-resume — PDF/DOCX text to structured resume JSON
  • ai-culture-rewrite — Single bullet point rewrite in a company's culture style

Client services call them via supabase.functions.invoke():

// src/services/tailorResume.ts (client side)

export async function tailorResumeToJob(resume, jobDescription, atsAnalysis, cultureProfile) {
  const { data: { session } } = await supabase.auth.getSession()
  const { data, error } = await supabase.functions.invoke('ai-tailor-resume', {
    body: { resume, jobDescription, atsAnalysis, cultureProfile },
    headers: { Authorization: `Bearer ${session.access_token}` },
  })
  if (error) throw new Error(error.message)
  if (data?.error) throw new Error(data.error)
  return normalizeResume(data.resume)
}

ATS Score Check with Actionable Gaps

The ATS check analyzes resumes across 6 weighted dimensions and identifies specific, fixable gaps:

// src/services/analyzeAtsScore.ts — Types (AI logic runs server-side)

export interface AtsGap {
  type: 'missing_keyword' | 'weak_bullet' | 'missing_section' | 'skill_translation'
  severity: 'high' | 'medium' | 'low'
  title: string
  description: string
  fix: string        // exact text the user should add to their resume
  section?: string   // which resume section to modify
  keyword?: string   // related keyword from the JD
}

export interface AtsAnalysis {
  overallScore: number          // 0-100
  dimensions: AtsDimension[]    // 6 scored dimensions
  keywords: AtsKeyword[]        // 15-25 keywords, each marked found/missing
  recommendations: string[]     // 3-5 actionable tips
  gaps: AtsGap[]               // 3-8 specific gaps with fixes
}
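The overallScore reduction runs server-side and isn't shown in the post; one plausible sketch is a weighted average over the six dimensions. The dimension names and weights below are invented for illustration only:

```typescript
// Illustrative only: dimension names and weights are invented here,
// and weights are assumed to sum to 1
interface AtsDimension {
  name: string
  score: number // 0-100
  weight: number
}

function overallScore(dimensions: AtsDimension[]): number {
  const total = dimensions.reduce((sum, d) => sum + d.score * d.weight, 0)
  return Math.round(total)
}

overallScore([
  { name: 'Keyword match', score: 80, weight: 0.4 },
  { name: 'Experience relevance', score: 70, weight: 0.3 },
  { name: 'Formatting', score: 90, weight: 0.3 },
]) // → 80
```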

The UI renders these gaps as a checklist that users can track:

  • Missing keyword — An important JD keyword completely absent from the resume
  • Weak bullet — A vague bullet point that could be strengthened with metrics
  • Missing section — A section the JD expects (e.g., certifications) that the resume lacks
  • Skill translation — Resume has equivalent experience under a different name
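For a checklist like this, high-severity gaps should surface first. A small sort helper (hypothetical; the Gap shape mirrors the AtsGap type above):

```typescript
type Severity = 'high' | 'medium' | 'low'
interface Gap { severity: Severity; title: string }

const SEVERITY_ORDER: Record<Severity, number> = { high: 0, medium: 1, low: 2 }

// Array.prototype.sort is stable, so gaps of equal severity keep the AI-given order
function sortGapsBySeverity(gaps: Gap[]): Gap[] {
  return [...gaps].sort((a, b) => SEVERITY_ORDER[a.severity] - SEVERITY_ORDER[b.severity])
}

sortGapsBySeverity([
  { severity: 'low', title: 'Add certifications section' },
  { severity: 'high', title: 'Missing keyword: Kubernetes' },
  { severity: 'medium', title: 'Quantify the migration bullet' },
]).map((g) => g.severity) // → ['high', 'medium', 'low']
```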

Company Culture Detection & Optimization

Resumes are automatically optimized for the hiring company's communication style:

// src/services/detectCulture.ts — Heuristic detection (no AI call)

const COMPANY_MAP: Record<string, string> = {
  amazon: 'amazon', aws: 'amazon',
  google: 'google', alphabet: 'google', deepmind: 'google',
  microsoft: 'enterprise', ibm: 'enterprise', oracle: 'enterprise',
  accenture: 'consulting', mckinsey: 'consulting', deloitte: 'consulting',
  // ... 30+ company mappings
}

const SIGNAL_WORDS: Record<string, string[]> = {
  amazon: ['ownership', 'customer obsession', 'bias for action', 'leadership principles'],
  google: ['data-driven', 'at scale', '10x', 'experimentation', 'moonshot'],
  startup: ['fast-paced', 'wear many hats', 'scrappy', 'move fast', '0 to 1'],
  enterprise: ['stakeholders', 'governance', 'compliance', 'fortune 500'],
  consulting: ['client-facing', 'engagement', 'deliverable', 'advisory'],
}

export function detectCulture(jobDescription: string) {
  // 1. Try exact company name match
  // 2. Score by signal word frequency (requires 2+ matches)
  // Returns { profileId, companyName } or null
}
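Filling in that sketch under stated assumptions, with a trimmed subset of the mappings above (the exact scoring in the app may differ):

```typescript
interface CultureMatch {
  profileId: string
  companyName: string | null
}

// Trimmed subsets of the real maps, for illustration
const COMPANY_MAP: Record<string, string> = {
  amazon: 'amazon', aws: 'amazon',
  google: 'google', alphabet: 'google',
  mckinsey: 'consulting', deloitte: 'consulting',
}

const SIGNAL_WORDS: Record<string, string[]> = {
  amazon: ['ownership', 'customer obsession', 'bias for action'],
  google: ['data-driven', 'at scale', 'experimentation'],
  startup: ['fast-paced', 'wear many hats', 'move fast'],
}

function detectCulture(jobDescription: string): CultureMatch | null {
  const jd = jobDescription.toLowerCase()

  // 1. An exact company-name match wins outright
  for (const [company, profileId] of Object.entries(COMPANY_MAP)) {
    if (new RegExp(`\\b${company}\\b`).test(jd)) {
      return { profileId, companyName: company }
    }
  }

  // 2. Otherwise score each profile by signal-word hits; require 2+ to avoid noise
  let best: { profileId: string; hits: number } | null = null
  for (const [profileId, words] of Object.entries(SIGNAL_WORDS)) {
    const hits = words.filter((w) => jd.includes(w)).length
    if (hits >= 2 && (!best || hits > best.hits)) best = { profileId, hits }
  }
  return best ? { profileId: best.profileId, companyName: null } : null
}

detectCulture('Join Google as a Senior SWE')               // → google (company match)
detectCulture('We value ownership and customer obsession') // → amazon (2 signal hits)
```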

Five culture profiles with tone, keywords, bullet style, and examples:

// src/data/cultureProfiles.ts

export const CULTURE_PROFILES = [
  {
    id: 'amazon',
    name: 'Amazon',
    tone: 'Direct, ownership-driven, customer-obsessed',
    bulletStyle: 'Start with a Leadership Principle verb, quantify impact, show ownership',
    example: 'Owned end-to-end migration of payment service to microservices, reducing latency by 40%',
  },
  {
    id: 'google',
    name: 'Google',
    tone: 'Data-driven, collaborative, impact-focused',
    bulletStyle: 'Lead with measurable outcome, reference scale, show technical depth',
    example: 'Designed recommendation engine serving 50M+ daily active users, improving CTR by 18%',
  },
  // ... startup, enterprise, consulting
]

Career Match Card — Viral Sharing

After an ATS check, users can generate a visual card (1200×630px, OG-image sized) and share it on LinkedIn:

// src/components/CareerMatchCard.tsx

export const CareerMatchCard = forwardRef<HTMLDivElement, CareerMatchCardProps>(
  ({ score, jobTitle, companyName, keywordsFound, keywordsTotal, topSkills }, ref) => {
    return (
      <div ref={ref} style={{ width: 1200, height: 630, background: '#0a1217', color: 'white' }}>
        {/* Logo + "ATS Match Report" header */}
        {/* Color-coded score circle (green ≥75, yellow ≥50, red <50) */}
        {/* Job title + company name */}
        {/* Keywords matched progress bar */}
        {/* Top matched skills as pills */}
        {/* "Check your match score at matchyou.cv" footer */}
      </div>
    )
  }
)
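The color coding in the comments reduces to a one-liner, assuming exactly the stated thresholds (green at 75 and above, yellow at 50 and above, red below):

```typescript
// Thresholds from the card comments: green ≥ 75, yellow ≥ 50, red below 50
function scoreColor(score: number): 'green' | 'yellow' | 'red' {
  if (score >= 75) return 'green'
  if (score >= 50) return 'yellow'
  return 'red'
}

scoreColor(82) // → 'green'
scoreColor(60) // → 'yellow'
scoreColor(41) // → 'red'
```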

Image export using html2canvas at retina quality:

// src/lib/exportMatchCard.ts

export async function exportCardAsImage(cardElement: HTMLElement): Promise<Blob> {
  const canvas = await html2canvas(cardElement, {
    width: 1200, height: 630,
    scale: 2, // retina quality
    useCORS: true, backgroundColor: null,
  })
  return new Promise((resolve, reject) => {
    canvas.toBlob(
      (blob) => (blob ? resolve(blob) : reject(new Error('Failed to generate image'))),
      'image/png'
    )
  })
}

The share flow supports download, clipboard copy, and native Web Share API (mobile share sheet).


LinkedIn Job Description Fetching

Users paste a LinkedIn job URL and the system auto-extracts the job description — no copy-pasting needed:

// supabase/functions/fetch-job-details/index.ts (key logic)

// LinkedIn guest API — no auth required
const linkedInJobId = extractLinkedInJobId(url)
const guestUrl = `https://www.linkedin.com/jobs-guest/jobs/api/jobPosting/${linkedInJobId}`
const response = await fetch(guestUrl, {
  headers: { "User-Agent": "Mozilla/5.0 ..." }
})

// Parse HTML directly — no AI needed for LinkedIn
// (`document` here is produced by parsing the response HTML with a DOM library such as deno_dom)
const title = document.querySelector('.top-card-layout__title')?.textContent
const company = document.querySelector('.topcard__org-name-link')?.textContent
const description = document.querySelector('.show-more-less-html__markup')?.innerHTML

// For non-LinkedIn URLs: generic HTML parsing + AI extraction via edge function
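The extractLinkedInJobId helper itself isn't shown; a plausible implementation covering the two common LinkedIn job URL shapes (hypothetical, since real LinkedIn URL formats vary):

```typescript
// Hypothetical sketch of extractLinkedInJobId (not shown in the post).
// Handles the two URL shapes LinkedIn commonly produces:
//   https://www.linkedin.com/jobs/view/<id>/                 (direct job page)
//   https://www.linkedin.com/jobs/search/?currentJobId=<id>  (search/collections)
function extractLinkedInJobId(url: string): string | null {
  const pathMatch = url.match(/linkedin\.com\/jobs\/view\/(\d+)/)
  if (pathMatch) return pathMatch[1]
  const queryMatch = url.match(/[?&]currentJobId=(\d+)/)
  return queryMatch ? queryMatch[1] : null
}

extractLinkedInJobId('https://www.linkedin.com/jobs/view/3791234567/') // → '3791234567'
extractLinkedInJobId('https://example.com/job')                        // → null
```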

How I Used Notion MCP

Match You CV integrates with Notion through a full OAuth + API pipeline that gives users secure, scoped access to their workspace data.

1. OAuth Authentication Flow

Users connect their Notion workspace with a single click. The flow:

  • Authorization — User is redirected to Notion to grant workspace access
  • Token Exchange — Our Supabase Edge Function exchanges the auth code for an access token server-side (secrets never touch the browser)
  • Secure Storage — Tokens are stored in a notion_connections table with Row Level Security — users can only access their own connection
  • Status Check — On every visit to the import page, we verify the connection is still active
  • Disconnect — Users can revoke access anytime with one click

2. Workspace Search & Page Reading

Once connected, users interact with their Notion workspace directly from Match You CV:

  • Search — Queries all pages in the user's workspace using Notion's Search API. Returns titles, page icons, and last-edited timestamps for easy identification
  • Read — Fetches all blocks from a selected page using the Block Children API with full cursor-based pagination (handles pages with 100+ blocks)
  • Block Parsing — Supports all common Notion block types: headings (H1/H2/H3), paragraphs, bulleted lists, numbered lists, to-dos, toggles, callouts, quotes, and dividers
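For reference, the search call looks roughly like this (a sketch with illustrative names, not the exact production code). The Search API returns only pages the user granted the integration access to:

```typescript
// Sketch: query the user's workspace via Notion's Search API
async function searchPages(accessToken: string, query: string) {
  const res = await fetch("https://api.notion.com/v1/search", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Notion-Version": "2022-06-28",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      query,
      filter: { value: "page", property: "object" }, // pages only, not databases
      sort: { direction: "descending", timestamp: "last_edited_time" },
    }),
  })
  if (!res.ok) throw new Error(`Search failed: ${res.status}`)
  const data = await res.json()
  // Each result carries id, icon, and last_edited_time for the picker UI
  return data.results
}
```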

3. Intelligent Resume Parsing

This is where Notion data becomes a resume. The parser:

  • Classifies sections by matching headings against known resume patterns (experience, education, skills, etc.) using regex heuristics
  • Parses experience entries — detects "Software Engineer at Google" or "Designer - Spotify" patterns
  • Extracts date ranges — recognizes "Jan 2020 - Present", "2019-2021", and similar formats from free-form text
  • Extracts contact info — identifies emails, phone numbers, job titles, and locations from unstructured text
  • Handles edge cases — pages with no headings, page titles as names, pipe-separated contact details, multiple date formats
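The heuristics above can be sketched like this (simplified; the real parser covers many more section names and date formats, and these regexes are illustrative):

```typescript
// Sketch: classify resume sections by heading, and extract date ranges
const SECTION_PATTERNS: Record<string, RegExp> = {
  experience: /^(work\s+)?experience|employment/i,
  education: /^education|academic/i,
  skills: /^(technical\s+)?skills|technologies/i,
  projects: /^projects?/i,
}

function classifySection(heading: string): string | null {
  for (const [section, pattern] of Object.entries(SECTION_PATTERNS)) {
    if (pattern.test(heading.trim())) return section
  }
  return null // unknown heading: fall through to free-text handling
}

// Recognizes ranges like "Jan 2020 - Present" or "2019-2021"
function extractDateRange(text: string): { start: string; end: string } | null {
  const m = text.match(
    /((?:[A-Z][a-z]{2,8}\s+)?\d{4})\s*[-–]\s*((?:[A-Z][a-z]{2,8}\s+)?\d{4}|Present)/i,
  )
  return m ? { start: m[1], end: m[2] } : null
}
```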

4. Notion Resume Template

We provide a ready-made Notion resume template that users can duplicate into their workspace. It follows a structure optimized for our parser, but the import works with any reasonable resume layout in Notion.

What This Unlocks

  • Notion as your career source of truth — Update your resume in Notion, import the latest version anytime
  • No re-typing — Skip the tedious form-filling; your Notion content flows directly into the editor
  • AI on top of your Notion data — Import from Notion, then use AI Tailor to customize your resume for each job posting — with culture-aware optimization for Amazon, Google, Startup, Enterprise, and Consulting styles
  • ATS gap analysis — See exactly which keywords you're missing and get actionable fixes
  • Viral sharing — Generate a Career Match Card and share your ATS score on LinkedIn
  • End-to-end workflow — Notion page → Import → ATS Check → AI Tailor → Share → PDF export, all in one session

The integration turns Notion from a static document store into the starting point of an AI-powered job application pipeline.

Bug Discovered: Notion OAuth Page Crashes for Logged-Out Users

During testing, we discovered a bug on Notion's side. When a user who is not logged into Notion in their browser initiates the OAuth flow (redirected to https://api.notion.com/v1/oauth/authorize), Notion's own authorization page crashes instead of showing a login prompt.

The browser console shows multiple errors originating from Notion's bundled JavaScript:

  • QuotaExceededError: Failed to execute 'setItem' on 'Storage' — Notion's code attempts to write to localStorage without checking available quota
  • TypeError: Cannot read properties of undefined (reading 'name') — Internal Notion component tries to access workspace data that doesn't exist for unauthenticated visitors
  • ClientError — Multiple Notion client errors logged as the page fails to initialize

These errors are entirely within Notion's minified bundles (app-*.js, mainApp-*.js, ClientFramework-*.js) and cannot be resolved by the integrating application.

Our workaround: We added a visible warning on the import page advising users to log into Notion in their browser before connecting. We also ensure our OAuth callback handler gracefully handles the case where the user returns without a valid authorization code.
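The callback hardening boils down to this shape (a simplified sketch; the actual handler also exchanges the code and persists the connection):

```typescript
// Sketch: never assume the OAuth redirect carries a valid code.
// If Notion's authorize page crashed or the user cancelled, the
// redirect may arrive with an error parameter or with nothing at all.
function parseOAuthCallback(url: string): { code: string } | { error: string } {
  const params = new URL(url).searchParams
  const error = params.get("error")
  if (error) return { error } // e.g. "access_denied"
  const code = params.get("code")
  if (!code) return { error: "missing_code" } // crashed or aborted flow
  return { code }
}
```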


Version 2 — Notion-Powered Blog & Platform Improvements

After the initial submission, I continued building on the Notion integration. The biggest addition is a full blog system powered entirely by Notion — every article is authored in a Notion database and rendered on the site in real time, with no separate CMS or markdown files involved.

5. Notion-Backed Blog (New Notion Integration)

This is a second, independent Notion integration alongside the resume import. While the resume import uses user OAuth (each user connects their own workspace), the blog uses a server-side internal integration that reads from our team's Notion database.

How it works:

  • We maintain a Notion database called "Blog Database" with properties like Title, Slug, Category, Description, Tags, Author, Published, Published Date, and Cover Image
  • A Supabase Edge Function (notion-blog) queries this database via the Notion API using an internal integration token
  • The frontend calls this function to list posts or fetch a single post by slug
  • Post content is fetched as Notion blocks and converted to HTML server-side

Architecture:

Notion Blog Database
        ↓ (Notion API — internal integration token)
Supabase Edge Function (notion-blog)
        ↓ (JSON response)
React Frontend (Blog.tsx / BlogPost.tsx)

Backend: Blog Edge Function

The edge function handles two actions — listing all published posts and fetching a single post with its full rendered content:

// supabase/functions/notion-blog/index.ts

const NOTION_API = "https://api.notion.com/v1"
const NOTION_VERSION = "2022-06-28"

function notionHeaders() {
  const token = Deno.env.get("NOTION_BLOG_TOKEN")
  if (!token) throw new Error("NOTION_BLOG_TOKEN not configured")
  return {
    Authorization: `Bearer ${token}`,
    "Notion-Version": NOTION_VERSION,
    "Content-Type": "application/json",
  }
}

function getDatabaseId() {
  const id = Deno.env.get("NOTION_BLOG_DATABASE_ID")
  if (!id) throw new Error("NOTION_BLOG_DATABASE_ID not configured")
  return id
}

Listing posts — queries the database filtered by Published = true, sorted by date descending, and extracts all properties using a generic property extractor:

async function handleList() {
  const dbId = getDatabaseId()

  const res = await fetch(`${NOTION_API}/databases/${dbId}/query`, {
    method: "POST",
    headers: notionHeaders(),
    body: JSON.stringify({
      filter: {
        property: "Published",
        checkbox: { equals: true },
      },
      sorts: [
        { property: "Published Date", direction: "descending" },
      ],
      page_size: 50,
    }),
  })

  if (!res.ok) throw new Error(`Notion query failed: ${res.status}`)
  const data = await res.json() as { results: PageResult[] }

  const posts = data.results.map((page) => ({
    id: page.id,
    title: extractProperty(page.properties, "Title"),
    slug: extractProperty(page.properties, "Slug"),
    category: extractProperty(page.properties, "Category"),
    description: extractProperty(page.properties, "Description"),
    coverImage: extractProperty(page.properties, "Cover Image") || getCoverUrl(page),
    publishedDate: extractProperty(page.properties, "Published Date") || page.created_time,
    tags: extractProperty(page.properties, "Tags"),
    author: extractProperty(page.properties, "Author"),
  }))

  return jsonResponse({ posts })
}

Generic property extractor — handles all the Notion property types our database uses, so adding a new column of an already-supported type requires no extractor changes:

function extractProperty(props: Record<string, unknown>, name: string): string {
  const prop = props[name] as Record<string, unknown> | undefined
  if (!prop) return ""

  const type = prop.type as string
  if (type === "title") {
    const arr = prop.title as Array<{ plain_text: string }> | undefined
    return arr?.map((t) => t.plain_text).join("") ?? ""
  }
  if (type === "rich_text") {
    const arr = prop.rich_text as Array<{ plain_text: string }> | undefined
    return arr?.map((t) => t.plain_text).join("") ?? ""
  }
  if (type === "select") {
    const sel = prop.select as { name: string } | null
    return sel?.name ?? ""
  }
  if (type === "multi_select") {
    const arr = prop.multi_select as Array<{ name: string }> | undefined
    return arr?.map((s) => s.name).join("||") ?? ""
  }
  if (type === "date") {
    const d = prop.date as { start: string } | null
    return d?.start ?? ""
  }
  if (type === "people") {
    const arr = prop.people as Array<{ name?: string }> | undefined
    return arr?.map((p) => p.name).filter(Boolean).join(", ") ?? ""
  }
  if (type === "files") {
    const files = prop.files as Array<{ type: string; file?: { url: string }; external?: { url: string } }> | undefined
    if (!files?.length) return ""
    const f = files[0]
    return f.type === "file" ? (f.file?.url ?? "") : (f.external?.url ?? "")
  }
  // ... checkbox, url, created_by also supported
  return ""
}

Recursive Block Rendering

A critical improvement over the initial version — the blog renderer recursively fetches child blocks from the Notion API. Notion's block structure is a tree: list items can have nested sub-lists, toggles contain child content, callouts can have paragraphs inside them, and tables contain row blocks. Without recursive fetching, all nested content is silently lost.

async function fetchChildBlocks(blockId: string): Promise<NotionBlock[]> {
  const children: NotionBlock[] = []
  let cursor: string | undefined

  do {
    const url = new URL(`${NOTION_API}/blocks/${blockId}/children`)
    url.searchParams.set("page_size", "100")
    if (cursor) url.searchParams.set("start_cursor", cursor)

    const res = await fetch(url.toString(), { headers: notionHeaders() })
    if (!res.ok) break

    const data = await res.json() as {
      results: NotionBlock[]
      has_more: boolean
      next_cursor?: string
    }

    children.push(...data.results)
    cursor = data.has_more ? data.next_cursor : undefined
  } while (cursor)

  return children
}

async function renderChildren(block: NotionBlock): Promise<string> {
  if (!block.has_children) return ""
  const children = await fetchChildBlocks(block.id)
  return await blocksToHtml(children)
}

Every block type that supports children calls renderChildren():

async function blockToHtml(block: NotionBlock): Promise<string> {
  const rt = getRichTextArray(block)
  const html = richTextToHtml(rt)

  switch (block.type) {
    case "paragraph": {
      const childHtml = await renderChildren(block)
      return (html ? `<p>${html}</p>` : "") + childHtml
    }
    case "bulleted_list_item":
    case "numbered_list_item": {
      const childHtml = await renderChildren(block)
      return `<li>${html}${childHtml}</li>`
    }
    case "toggle": {
      const childHtml = await renderChildren(block)
      return `<details><summary>${html}</summary>${childHtml}</details>`
    }
    case "callout": {
      const calloutData = block.callout as { icon?: { emoji?: string } } | undefined
      const icon = calloutData?.icon?.emoji ?? ""
      const childHtml = await renderChildren(block)
      return `<div class="callout">${icon ? `<span>${icon}</span> ` : ""}${html}${childHtml}</div>`
    }
    case "table": {
      const childHtml = await renderChildren(block)
      return `<table>${childHtml}</table>`
    }
    case "table_row": {
      const cells = (block.table_row as { cells?: RichText[][] })?.cells ?? []
      const tds = cells.map((cell) => `<td>${richTextToHtml(cell)}</td>`).join("")
      return `<tr>${tds}</tr>`
    }
    case "column_list": {
      const childHtml = await renderChildren(block)
      return `<div class="columns">${childHtml}</div>`
    }
    case "synced_block": {
      const childHtml = await renderChildren(block)
      return childHtml
    }
    // ... heading_1-3, code, divider, image, bookmark, video, embed, to_do also handled
    default:
      return "" // unknown block types are skipped
  }
}

The renderer supports 20 Notion block types: headings (h1–h3), paragraphs, bulleted lists, numbered lists, quotes, callouts, code blocks, dividers, images, bookmarks, toggles, to-dos, tables, table rows, column layouts, videos, embeds, synced blocks, and child pages/databases.

Rich text annotations are fully preserved: bold, italic, strikethrough, code, underline, and hyperlinks.
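Since `richTextToHtml` is used by nearly every case in the renderer, here is a minimal sketch of what such a function can look like (condensed; the production version may differ in escaping and nesting order):

```typescript
// Sketch: map Notion rich text annotations to HTML
interface RichText {
  plain_text: string
  href?: string | null
  annotations: {
    bold: boolean
    italic: boolean
    strikethrough: boolean
    underline: boolean
    code: boolean
  }
}

function escapeHtml(s: string): string {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;")
}

function richTextToHtml(rts: RichText[]): string {
  return rts
    .map((rt) => {
      let html = escapeHtml(rt.plain_text)
      const a = rt.annotations
      if (a.code) html = `<code>${html}</code>`
      if (a.bold) html = `<strong>${html}</strong>`
      if (a.italic) html = `<em>${html}</em>`
      if (a.strikethrough) html = `<s>${html}</s>`
      if (a.underline) html = `<u>${html}</u>`
      if (rt.href) html = `<a href="${rt.href}">${html}</a>`
      return html
    })
    .join("")
}
```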

Frontend: Blog Service Layer

The client calls the edge function without any authentication (the blog is public):

// src/services/blog.ts

const SUPABASE_URL = import.meta.env.VITE_SUPABASE_URL
const SUPABASE_ANON_KEY = import.meta.env.VITE_SUPABASE_ANON_KEY

export interface BlogPost {
  id: string
  title: string
  slug: string
  category: string
  description: string
  coverImage: string
  publishedDate: string
  tags: string
  author: string
}

export interface BlogPostFull extends BlogPost {
  contentHtml: string
}

async function callBlogFunction(body: Record<string, unknown>) {
  const res = await fetch(`${SUPABASE_URL}/functions/v1/notion-blog`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      apikey: SUPABASE_ANON_KEY,
    },
    body: JSON.stringify(body),
  })

  const data = await res.json()
  if (!res.ok) throw new Error(data.error ?? 'Request failed')
  return data
}

export async function fetchBlogPosts(): Promise<BlogPost[]> {
  const data = await callBlogFunction({ action: 'list' })
  return data.posts
}

export async function fetchBlogPost(slug: string): Promise<BlogPostFull> {
  const data = await callBlogFunction({ action: 'post', slug })
  return data.post
}

SEO: Structured Data from Notion

Each blog post generates full Article schema.org structured data and breadcrumbs, all derived from Notion properties:

const articleSchema = post
  ? {
      '@context': 'https://schema.org',
      '@type': 'Article',
      headline: post.title,
      description: post.description,
      datePublished: post.publishedDate,
      author: { '@type': 'Organization', name: 'Match You CV' },
      publisher: {
        '@type': 'Organization',
        name: 'Match You CV',
        logo: { '@type': 'ImageObject', url: 'https://matchyou.cv/og-image.png' },
      },
      image: post.coverImage || 'https://matchyou.cv/og-image.png',
      mainEntityOfPage: `https://matchyou.cv/blog/${post.slug}`,
    }
  : undefined

Two Notion Integrations, One Platform

Match You CV now uses Notion in two distinct ways:

                 Resume Import                           Blog
Auth             User OAuth (per-user workspace)         Internal integration (server token)
Notion API       Search + Block Children                 Database Query + Block Children
Data flow        User's workspace → Resume editor        Team's database → Public blog
Access           Authenticated users only                Public, no auth required
Block handling   Text extraction (structured sections)   Full HTML rendering (recursive)

Both integrations share the same patterns — cursor-based pagination, block type handling, rich text extraction — but serve completely different purposes. The resume import treats Notion as a data source for structured parsing, while the blog treats it as a full CMS with rich content rendering.

Other Platform Improvements

  • Server-Side AI — All AI calls (resume tailoring, ATS scoring, cover letter generation, resume parsing, culture rewriting) are proxied through 5 Supabase Edge Functions. API keys are stored as server-side secrets and never exposed to the browser
  • Company Culture Detection — Heuristic-based detection of company culture from job descriptions (30+ known companies + signal word scoring), with 5 culture profiles (Amazon, Google, Startup, Enterprise, Consulting) that adapt resume bullet style and tone
  • Career Match Card — Visual 1200×630 card showing ATS score, job title, keywords matched, and top skills. Exported as PNG via html2canvas at retina quality. Supports download, clipboard copy, and native Web Share API for LinkedIn sharing
  • ATS Gap Analysis — 6-dimension scoring (Keyword Match, Experience Relevance, Skills Alignment, Impact & Metrics, Section Completeness, Clarity & Action Verbs) with specific, fixable gaps categorized by type and severity
  • LinkedIn Job Fetching — Paste a LinkedIn job URL and auto-extract the job description via LinkedIn's guest API. Non-LinkedIn URLs use generic HTML parsing + AI extraction
  • Cover Letter Generation — AI-powered cover letters with 4-paragraph structure, relevance scoring, requirement matching, and gap identification. Supports personal "About You" notes to shape voice and angle
  • Account Deletion — Users can request account deletion from their profile. Requests are scheduled with a 7-day grace period and processed by a background Edge Function that cascades through all user data
  • SEO — OG images, JSON-LD structured data, meta tags, robots.txt, dynamic blog sitemap via Edge Function, and Netlify Edge Function proxy for search engine compatibility
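As an illustration of the signal-word scoring used for culture detection, a minimal version might look like this (the word lists here are invented for illustration; the production detector also matches 30+ known company names and weights signals):

```typescript
// Sketch: score a job description against culture signal words.
// The five profile names come from the product; the signals are examples.
const CULTURE_SIGNALS: Record<string, string[]> = {
  amazon: ["leadership principles", "customer obsession", "ownership"],
  google: ["scale", "data-driven", "cross-functional"],
  startup: ["fast-paced", "wear many hats", "early-stage"],
  enterprise: ["compliance", "stakeholders", "governance"],
  consulting: ["client", "engagement", "deliverables"],
}

function detectCulture(jobDescription: string): string {
  const text = jobDescription.toLowerCase()
  let best = "enterprise" // neutral default when nothing matches
  let bestScore = 0
  for (const [culture, signals] of Object.entries(CULTURE_SIGNALS)) {
    const score = signals.filter((s) => text.includes(s)).length
    if (score > bestScore) {
      best = culture
      bestScore = score
    }
  }
  return best
}
```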

Top comments (2)

Phanindhra Kondru

From the last submission, we received very positive feedback through comments, DMs, and emails. We carefully considered that feedback, implemented a few new ideas, and updated the submission as "Version 2". Please take a look and share your thoughts. Excited to see how it goes.

Phanindhra Kondru

I’d really appreciate your feedback after trying the platform, especially if it helps with your job applications.