What Is Skills Processing Customization?

Skills processing customization allows you to fine-tune how Mapademics analyzes your educational content and job descriptions to extract skills. Rather than using one-size-fits-all settings, you can adjust the AI processing pipeline to better match your organization’s specific content, terminology, and quality requirements. This customization happens at multiple levels, from the AI prompts that guide the analysis to the confidence thresholds that determine which skills make it into your final results. Think of it as training the system to understand your organization’s unique educational context and standards.
Why Customize? Default settings work well for most content, but customization can significantly improve accuracy for specialized programs, industry-specific terminology, or content with unique formatting.

Before You Start

To customize skills processing, you’ll need:
  • Administrative Access: Settings configuration requires admin permissions
  • Processed Content: At least some syllabi or job descriptions already processed for comparison testing
  • Time Investment: 30-60 minutes to understand options and test configurations
  • Domain Knowledge: Understanding of your organization’s educational content and terminology
Important: Always test configuration changes before applying them to large batches of content. Changes affect all future processing in your organization.

Understanding the Processing Pipeline

Before customizing, it helps to understand how Mapademics processes content through a multi-stage AI pipeline:
1. Structure Extraction

AI analyzes document layout and extracts key information like learning objectives, course requirements, and job responsibilities.
2. Skill Extraction

The system identifies potential skills by matching content against a comprehensive skills database and recognizing skill-related language patterns.
3. Grading & Confidence

Each identified skill receives a proficiency level (1-4 scale) and confidence score based on how clearly it appears in the content.
4. Foundational Skills (Syllabi Only)

For course syllabi, the AI identifies implicit cognitive skills that support the explicit technical skills.
Each stage can be customized with different prompts, models, and filtering settings.
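
As a mental model, the pipeline above can be sketched as an ordered list of stages, with the foundational-skills stage running only for syllabi. All names and fields here are illustrative, not the actual Mapademics API:

```python
from dataclasses import dataclass

@dataclass
class ExtractedSkill:
    name: str
    proficiency: int  # 1-4 scale, assigned in the grading stage
    confidence: int   # 1-5 scale, used later for filtering
    evidence: str     # text from the document that supports the skill

# Hypothetical stage order mirroring the description above.
PIPELINE_STAGES = [
    "structure_extraction",   # parse layout, objectives, requirements
    "skill_extraction",       # match content against the skills database
    "grading",                # assign proficiency and confidence
    "foundational_skills",    # syllabi only: implicit cognitive skills
]

def stages_for(document_type: str) -> list[str]:
    """Return the stages that run for a given document type."""
    if document_type == "syllabus":
        return PIPELINE_STAGES
    return PIPELINE_STAGES[:3]  # job descriptions skip foundational skills
```

Each stage maps to one customization point: its prompt template, its model settings, and (for grading) the confidence filter.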

Types of Customization

1. Prompt Templates

What it does: Controls exactly how the AI analyzes your content by providing custom instructions for each processing step.
When to use:
  • Your content uses specialized terminology
  • You need different analysis focus (e.g., emphasizing practical vs theoretical skills)
  • Default prompts miss important skills specific to your field
How to customize: Navigate to Settings → Prompt Templates to create custom prompts for specific pipeline steps.

2. Processing Configuration

What it does: Selects which prompt templates to use for each step of the pipeline.
When to use:
  • You have multiple prompt templates and want to choose the best combination
  • You want to test different approaches for different types of content
How to customize: Go to Settings → Prompt Configuration and select prompt versions for each pipeline step.

3. Model Settings

What it does: Controls which AI model processes your content and how creatively it interprets the instructions.
When to use:
  • You need faster processing (use smaller models)
  • You want more accurate analysis (use larger, more expensive models)
  • Your content requires very precise interpretation
How to customize: Visit Settings → Model Configuration to adjust model selection and creativity settings (temperature).

4. Confidence Filtering

What it does: Sets minimum confidence thresholds and evidence quality requirements for skills to appear in final results.
When to use:
  • You’re getting too many low-confidence skills cluttering results
  • You want only the most clearly-evident skills
  • You need to balance precision vs completeness
How to customize: Built into the processing pipeline with configurable thresholds.

Step-by-Step Customization Process

Step 1: Analyze Current Results

Before making changes, review your existing skills extraction results:
  1. Navigate to any course or job with processed skills
  2. Look for patterns in the results:
    • Are important skills being missed?
    • Are irrelevant skills being detected?
    • Do proficiency levels seem accurate?
    • Are there too many or too few skills overall?
Document Your Goals: Write down specific issues you want to address. This helps you measure whether your customizations are working.

Step 2: Start with Prompt Configuration

Most improvements come from adjusting which prompts are used:
  1. Go to Settings → Prompt Configuration
  2. Choose either Syllabus Processing or Job Description Processing
  3. Review the current configuration for each step:
    • Structure Extraction: How content is parsed and organized
    • Skill Extraction: How skills are identified and consolidated
    • Grading: How proficiency levels are assigned
    • Cognitive Skills (syllabi only): How foundational skills are identified
  4. If all steps show “System Default”, you’re using standard settings
  5. Save any changes before testing

Step 3: Test Your Configuration

Before processing large batches, test your settings:
  1. In the Prompt Configuration page, click Test Configuration
  2. Select a course section or job that was previously processed
  3. Click Run Test to process it with your new settings
  4. Compare the results:
    • Skills Added: New skills detected with current settings
    • Skills Removed: Skills that were detected previously but not now
    • Skills Modified: Same skills with different proficiency levels
    • Configuration Differences: Shows which prompts/models changed
Perfect Testing: The comparison shows exactly how your changes will affect future processing, without modifying your existing data.
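
The Skills Added / Removed / Modified comparison boils down to a set difference over the two runs. A minimal sketch, assuming each run is summarized as a mapping from skill name to proficiency level (an illustrative simplification, not the actual Mapademics data model):

```python
def compare_results(before: dict[str, int], after: dict[str, int]) -> dict:
    """Compare two extraction runs mapping skill name -> proficiency level."""
    added = sorted(after.keys() - before.keys())      # only in the new run
    removed = sorted(before.keys() - after.keys())    # dropped by new settings
    modified = sorted(                                # same skill, new level
        name for name in before.keys() & after.keys()
        if before[name] != after[name]
    )
    return {"added": added, "removed": removed, "modified": modified}
```

For example, comparing `{"Python": 3, "SQL": 2}` against `{"Python": 4, "Git": 2}` reports Git as added, SQL as removed, and Python as modified.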

Step 4: Create Custom Prompts (Advanced)

If system defaults aren’t working for your content, create custom prompts:
  1. Go to Settings → Prompt Templates
  2. Click Create Template
  3. Choose the pipeline (Syllabus or Job Description) and step
  4. Write custom instructions that:
    • Use terminology specific to your field
    • Focus on skill types most relevant to your programs
    • Address weaknesses you identified in testing
  5. Save the template with a descriptive version name
  6. Return to Prompt Configuration to select your custom template
  7. Test the new configuration thoroughly
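
To make "terminology specific to your field" concrete, here is a hypothetical custom extraction prompt for a nursing program. The template text, the `{document_text}` placeholder, and the helper are illustrative; use whatever template variables your Prompt Templates editor actually supports.

```python
# Hypothetical custom skill-extraction prompt for a nursing program.
CUSTOM_EXTRACTION_PROMPT = """\
You are analyzing a nursing course syllabus.

Identify every skill the course teaches or assesses. In particular:
- Use clinical terminology (e.g., "patient assessment", "IV therapy",
  "medication administration") rather than generic labels.
- Prefer practical, demonstrable skills over broad topic areas.
- Quote the sentence from the syllabus that evidences each skill.

Syllabus:
{document_text}
"""

def render_prompt(document_text: str) -> str:
    """Fill the template with the document being processed."""
    return CUSTOM_EXTRACTION_PROMPT.format(document_text=document_text)
```

Note how the prompt names the field, lists concrete example terms, and asks for evidence quotes, directly addressing the "be specific" and "use examples" guidance later in this article.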

Step 5: Adjust Model Settings (If Needed)

Fine-tune the AI models and creativity settings:
  1. Visit Settings → Model Configuration
  2. For each processing step, you can adjust:
    • Model: Choose between different AI models (larger = more accurate but slower)
    • Temperature: Control creativity (lower = more consistent, higher = more creative)
  3. Recommended starting points:
    • Structure Extraction: Lower temperature (0.1-0.3) for consistent parsing
    • Skill Extraction: Medium temperature (0.3-0.5) for balanced accuracy
    • Grading: Lower temperature (0.1-0.3) for consistent proficiency levels
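
The recommended starting points can be captured as a per-step table. The step keys and model names below are examples; choose from whatever models your Model Configuration page actually offers:

```python
# Illustrative per-step settings matching the starting points above.
MODEL_CONFIG = {
    "structure_extraction": {"model": "gpt-4o-mini", "temperature": 0.2},
    "skill_extraction":     {"model": "gpt-4o",      "temperature": 0.4},
    "grading":              {"model": "gpt-4o-mini", "temperature": 0.2},
}

def settings_for(step: str) -> dict:
    """Fall back to a conservative default for any unlisted step."""
    return MODEL_CONFIG.get(step, {"model": "gpt-4o-mini", "temperature": 0.3})
```

The pattern to notice: low temperature wherever consistency matters (parsing, grading), a mid-range temperature only where some interpretive flexibility helps (skill extraction).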

Step 6: Apply and Monitor

Once you’re satisfied with testing:
  1. Save your final configuration
  2. Process a small batch of new content to verify results
  3. Monitor the quality of extracted skills over time
  4. Adjust settings if you notice issues with new content types

Understanding Configuration Options

Prompt Template Options

For each processing step, you can choose from:
  • System Default: Standard prompts that work well for most content
  • Custom Templates: Organization-specific prompts you’ve created
  • Version History: Previous versions of custom templates

Model Configuration Options

Available Models (capabilities vary by organization):
  • GPT-4o: Most accurate, best for complex content, higher cost
  • GPT-4o-mini: Good balance of accuracy and speed, lower cost
  • GPT-3.5-turbo: Fastest processing, lowest cost, adequate for simple content
Temperature Settings:
  • 0.0-0.3: Very consistent, literal interpretation
  • 0.3-0.7: Balanced creativity and consistency
  • 0.7-1.0: More creative interpretation, less predictable

Confidence Filtering Settings

The system applies confidence filtering to remove low-quality skill detections:
  • Confidence Threshold: Minimum confidence score (1-5 scale) for skills to be included
  • Evidence Quality: Whether to allow skills with weak supporting evidence
  • Default Settings: Confidence ≥ 3, no weak evidence allowed
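
Under the default settings, the filter amounts to two checks. A sketch with illustrative parameter names (the real implementation is internal to the pipeline):

```python
def passes_filter(confidence: int, evidence_is_weak: bool,
                  threshold: int = 3,
                  allow_weak_evidence: bool = False) -> bool:
    """Default behavior: keep skills with confidence >= 3 on the 1-5
    scale, and reject skills backed only by weak evidence."""
    if confidence < threshold:
        return False
    if evidence_is_weak and not allow_weak_evidence:
        return False
    return True
```

So a skill with confidence 4 and solid evidence is kept, while a confidence-3 skill with weak evidence is dropped unless you explicitly allow weak evidence.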

Common Customization Scenarios

Scenario 1: Technical Programs Missing Industry Skills

Problem: Engineering or technical courses aren’t detecting industry-specific tools and technologies.
Solution:
  1. Create custom skill extraction prompt that emphasizes technical terminology
  2. Include examples of your specific tools/technologies in the prompt
  3. Test with representative technical syllabi
  4. Adjust confidence threshold if needed

Scenario 2: Too Many Irrelevant Skills Detected

Problem: System finds skills that aren’t actually taught or required.
Solution:
  1. Increase confidence threshold to 4 or 5
  2. Disable weak evidence allowance
  3. Create custom grading prompt that’s more conservative
  4. Test to ensure important skills aren’t filtered out

Scenario 3: Inconsistent Proficiency Levels

Problem: Similar courses show very different skill proficiency levels.
Solution:
  1. Lower temperature settings for grading step (0.1-0.2)
  2. Create custom grading prompt with specific proficiency criteria
  3. Use consistent model across all processing
  4. Test with multiple similar courses

Scenario 4: Processing Too Slow for Large Batches

Problem: Skills extraction takes too long when processing many documents.
Solution:
  1. Switch to faster models (GPT-4o-mini instead of GPT-4o)
  2. Increase confidence threshold to reduce processing complexity
  3. Consider processing in smaller batches during off-peak hours
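
Splitting a large document set into smaller batches needs nothing more than a chunking helper on your side; this is a generic sketch, not a Mapademics feature:

```python
def batches(items: list, size: int):
    """Yield successive fixed-size batches; the last may be smaller."""
    for start in range(0, len(items), size):
        yield items[start:start + size]
```

For instance, 10 documents in batches of 4 yields groups of 4, 4, and 2, which you can submit for processing during off-peak hours.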

Best Practices

Testing and Validation

  • Always test first: Never apply untested configurations to important content
  • Use representative samples: Test with different types of content you process
  • Document changes: Keep notes on what you changed and why
  • Monitor results: Check quality of new extractions regularly

Prompt Writing

  • Be specific: Clear, detailed instructions work better than vague guidance
  • Use examples: Include 2-3 examples of the type of analysis you want
  • Stay focused: Each prompt should address one specific aspect of processing
  • Test iteratively: Make small changes and test, rather than major rewrites

Model Selection

  • Start conservative: Begin with standard models and settings
  • Consider costs: Larger models are more expensive for high-volume processing
  • Match complexity: Use powerful models only for complex content that needs them
  • Monitor performance: Track both accuracy and processing speed

Troubleshooting Configuration Issues

Skills Not Being Detected

Check:
  • Are your prompts too restrictive?
  • Is the confidence threshold too high?
  • Does your content use non-standard terminology?
Solutions:
  • Review and relax grading prompts
  • Lower confidence threshold temporarily
  • Add domain-specific terminology to extraction prompts

Too Many False Positives

Check:
  • Are prompts too broad or creative?
  • Is temperature setting too high?
  • Are confidence filters working properly?
Solutions:
  • Tighten prompt instructions with negative examples
  • Lower temperature settings
  • Increase confidence threshold

Inconsistent Results

Check:
  • Are you using different prompts for similar content?
  • Are temperature settings too high?
  • Do different team members have different configurations?
Solutions:
  • Standardize configurations across your organization
  • Lower temperature for more consistent results
  • Document and share your final configuration approach

What Happens Next

After customizing your skills processing configuration:
  1. All future processing uses your new settings automatically
  2. Existing processed content remains unchanged (unless you reprocess it)
  3. Configuration changes are logged and can be reverted if needed
  4. Team members in your organization use the same configuration for consistency
Remember, good configuration is an iterative process. Start with small adjustments, test thoroughly, and refine based on results. The goal is to find the sweet spot where the system accurately captures the skills most important to your organization while filtering out noise.