What Is Skills Processing Customization?
Skills processing customization allows you to fine-tune how Mapademics analyzes your educational content and job descriptions to extract skills. Rather than using one-size-fits-all settings, you can adjust the AI processing pipeline to better match your organization’s specific content, terminology, and quality requirements. This customization happens at multiple levels, from the AI prompts that guide the analysis to the confidence thresholds that determine which skills make it into your final results. Think of it as training the system to understand your organization’s unique educational context and standards.

Why Customize? Default settings work well for most content, but customization can significantly improve accuracy for specialized programs, industry-specific terminology, or content with unique formatting.
Before You Start
To customize skills processing, you’ll need:
- Administrative Access: Settings configuration requires admin permissions
- Processed Content: At least some syllabi or job descriptions already processed for comparison testing
- Time Investment: 30-60 minutes to understand options and test configurations
- Domain Knowledge: Understanding of your organization’s educational content and terminology
Important: Always test configuration changes before applying them to large batches of content. Changes affect all future processing in your organization.
Understanding the Processing Pipeline
Before customizing, it helps to understand how Mapademics processes content through a multi-stage AI pipeline:

1. Structure Extraction: AI analyzes document layout and extracts key information like learning objectives, course requirements, and job responsibilities.
2. Skill Extraction: The system identifies potential skills by matching content against a comprehensive skills database and recognizing skill-related language patterns.
3. Grading & Confidence: Each identified skill receives a proficiency level (1-4 scale) and a confidence score based on how clearly it appears in the content.
4. Foundational Skills (Syllabi Only): For course syllabi, the AI identifies implicit cognitive skills that support the explicit technical skills.
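To make the flow concrete, here is a minimal sketch of the four stages as plain Python functions. All names, the tiny skills database, and the toy grading rule are illustrative assumptions for this sketch, not the actual Mapademics implementation.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    proficiency: int  # 1-4 scale, assigned in the grading stage
    confidence: int   # 1-5 score for how clearly the skill appears

def extract_structure(text):
    # Stage 1 (toy version): treat each non-empty line as a learning objective
    return [line.strip() for line in text.splitlines() if line.strip()]

SKILLS_DB = {"python", "statistics", "communication"}  # hypothetical database

def extract_skills(objectives):
    # Stage 2 (toy version): naive keyword match against the skills database
    found = {skill for obj in objectives for skill in SKILLS_DB if skill in obj.lower()}
    return sorted(found)

def grade(skill, objectives):
    # Stage 3 (toy version): more mentions -> higher proficiency and confidence
    mentions = sum(skill in obj.lower() for obj in objectives)
    return Skill(skill, proficiency=min(4, mentions), confidence=min(5, 2 + mentions))

def process(text):
    objectives = extract_structure(text)
    return [grade(s, objectives) for s in extract_skills(objectives)]

skills = process("Students will learn Python.\nApply basic statistics to real data.")
# -> [Skill(name='python', proficiency=1, confidence=3),
#     Skill(name='statistics', proficiency=1, confidence=3)]
```

Stage 4 (foundational skills) would run as an additional pass on syllabi only; it is omitted here to keep the sketch short.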
Types of Customization
1. Prompt Templates
What it does: Controls exactly how the AI analyzes your content by providing custom instructions for each processing step.

When to use:
- Your content uses specialized terminology
- You need a different analysis focus (e.g., emphasizing practical vs. theoretical skills)
- Default prompts miss important skills specific to your field
2. Processing Configuration
What it does: Selects which prompt templates to use for each step of the pipeline.

When to use:
- You have multiple prompt templates and want to choose the best combination
- You want to test different approaches for different types of content
3. Model Settings
What it does: Controls which AI model processes your content and how creatively it interprets the instructions.

When to use:
- You need faster processing (use smaller models)
- You want more accurate analysis (use larger, more expensive models)
- Your content requires very precise interpretation
4. Confidence Filtering
What it does: Sets minimum confidence thresholds and evidence quality requirements for skills to appear in final results.

When to use:
- You’re getting too many low-confidence skills cluttering results
- You want only the most clearly-evident skills
- You need to balance precision against completeness
Step-by-Step Customization Process
Step 1: Analyze Current Results
Before making changes, review your existing skills extraction results:
- Navigate to any course or job with processed skills
- Look for patterns in the results:
  - Are important skills being missed?
  - Are irrelevant skills being detected?
  - Do proficiency levels seem accurate?
  - Are there too many or too few skills overall?
Document Your Goals: Write down specific issues you want to address. This helps you measure whether your customizations are working.
Step 2: Start with Prompt Configuration
Most improvements come from adjusting which prompts are used:
- Go to Settings → Prompt Configuration
- Choose either Syllabus Processing or Job Description Processing
- Review the current configuration for each step:
  - Structure Extraction: How content is parsed and organized
  - Skill Extraction: How skills are identified and consolidated
  - Grading: How proficiency levels are assigned
  - Cognitive Skills (syllabi only): How foundational skills are identified
- If all steps show “System Default”, you’re using standard settings
- Save any changes before testing
Step 3: Test Your Configuration
Before processing large batches, test your settings:
- In the Prompt Configuration page, click Test Configuration
- Select a course section or job that was previously processed
- Click Run Test to process it with your new settings
- Compare the results:
  - Skills Added: New skills detected with the current settings
  - Skills Removed: Skills that were detected previously but not now
  - Skills Modified: The same skills with different proficiency levels
  - Configuration Differences: Shows which prompts/models changed
Perfect Testing: The comparison shows exactly how your changes will affect future processing, without modifying your existing data.
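The Added / Removed / Modified comparison boils down to set operations on the two runs. The sketch below shows that logic with hypothetical data; the function name and the skill-name-to-proficiency mapping are assumptions for illustration, not the actual Mapademics data model.

```python
def compare_runs(before: dict, after: dict) -> dict:
    """Diff two extraction runs, each keyed by skill name -> proficiency level."""
    added = sorted(after.keys() - before.keys())      # only in the new run
    removed = sorted(before.keys() - after.keys())    # only in the old run
    modified = sorted(
        name for name in before.keys() & after.keys()
        if before[name] != after[name]                # same skill, new proficiency
    )
    return {"added": added, "removed": removed, "modified": modified}

# Hypothetical test run after changing the grading prompt
before = {"python": 2, "sql": 3, "excel": 1}
after = {"python": 3, "sql": 3, "teamwork": 2}
diff = compare_runs(before, after)
# diff == {"added": ["teamwork"], "removed": ["excel"], "modified": ["python"]}
```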
Step 4: Create Custom Prompts (Advanced)
If the system defaults aren’t working for your content, create custom prompts:
- Go to Settings → Prompt Templates
- Click Create Template
- Choose the pipeline (Syllabus or Job Description) and step
- Write custom instructions that:
  - Use terminology specific to your field
  - Focus on the skill types most relevant to your programs
  - Address weaknesses you identified in testing
- Save the template with a descriptive version name
- Return to Prompt Configuration to select your custom template
- Test the new configuration thoroughly
Step 5: Adjust Model Settings (If Needed)
Fine-tune the AI models and creativity settings:
- Visit Settings → Model Configuration
- For each processing step, you can adjust:
  - Model: Choose between different AI models (larger = more accurate but slower)
  - Temperature: Control creativity (lower = more consistent, higher = more creative)
- Recommended starting points:
  - Structure Extraction: Lower temperature (0.1-0.3) for consistent parsing
  - Skill Extraction: Medium temperature (0.3-0.5) for balanced accuracy
  - Grading: Lower temperature (0.1-0.3) for consistent proficiency levels
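One way to think about the recommended starting points above is as a per-step configuration map that you can sanity-check before applying. The structure, model names, and `validate` helper below are illustrative assumptions, not a real Mapademics export format.

```python
# Hypothetical per-step settings mirroring the recommended starting points.
PIPELINE_CONFIG = {
    "structure_extraction": {"model": "gpt-4o-mini", "temperature": 0.2},
    "skill_extraction":     {"model": "gpt-4o",      "temperature": 0.4},
    "grading":              {"model": "gpt-4o-mini", "temperature": 0.1},
}

def validate(config):
    """Warn when a step's temperature falls outside the recommended range."""
    recommended = {
        "structure_extraction": (0.1, 0.3),
        "skill_extraction":     (0.3, 0.5),
        "grading":              (0.1, 0.3),
    }
    warnings = []
    for step, settings in config.items():
        lo, hi = recommended[step]
        t = settings["temperature"]
        if not (lo <= t <= hi):
            warnings.append(f"{step}: temperature {t} outside recommended {lo}-{hi}")
    return warnings

assert validate(PIPELINE_CONFIG) == []  # all temperatures within recommended ranges
```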
Step 6: Apply and Monitor
Once you’re satisfied with testing:
- Save your final configuration
- Process a small batch of new content to verify results
- Monitor the quality of extracted skills over time
- Adjust settings if you notice issues with new content types
Understanding Configuration Options
Prompt Template Options
For each processing step, you can choose from:
- System Default: Standard prompts that work well for most content
- Custom Templates: Organization-specific prompts you’ve created
- Version History: Previous versions of custom templates
Model Configuration Options
Available Models (capabilities vary by organization):
- GPT-4o: Most accurate, best for complex content, higher cost
- GPT-4o-mini: Good balance of accuracy and speed, lower cost
- GPT-3.5-turbo: Fastest processing, lowest cost, adequate for simple content

Temperature Settings:
- 0.0-0.3: Very consistent, literal interpretation
- 0.3-0.7: Balanced creativity and consistency
- 0.7-1.0: More creative interpretation, less predictable
Confidence Filtering Settings
The system applies confidence filtering to remove low-quality skill detections:
- Confidence Threshold: Minimum confidence score (1-5 scale) for skills to be included
- Evidence Quality: Whether to allow skills with weak supporting evidence
- Default Settings: Confidence ≥ 3, no weak evidence allowed
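The default filter (confidence ≥ 3, no weak evidence) amounts to a simple predicate per skill. The sketch below illustrates that logic; the dict shape and field names are assumptions for this example, not the actual Mapademics data model.

```python
def passes_filter(skill, min_confidence=3, allow_weak_evidence=False):
    """Apply the default confidence filter: confidence >= 3, no weak evidence."""
    if skill["confidence"] < min_confidence:
        return False
    if skill["weak_evidence"] and not allow_weak_evidence:
        return False
    return True

# Hypothetical detections from one processed syllabus
detected = [
    {"name": "python",      "confidence": 5, "weak_evidence": False},
    {"name": "negotiation", "confidence": 2, "weak_evidence": False},
    {"name": "sql",         "confidence": 4, "weak_evidence": True},
]
kept = [s["name"] for s in detected if passes_filter(s)]
# kept == ["python"]  (negotiation fails the threshold; sql has weak evidence)
```

Raising `min_confidence` to 4 or 5, or leaving `allow_weak_evidence` off, trades completeness for precision, which is the balance discussed throughout this section.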
Common Customization Scenarios
Scenario 1: Technical Programs Missing Industry Skills
Problem: Engineering or technical courses aren’t detecting industry-specific tools and technologies.

Solution:
- Create a custom skill extraction prompt that emphasizes technical terminology
- Include examples of your specific tools/technologies in the prompt
- Test with representative technical syllabi
- Adjust confidence threshold if needed
Scenario 2: Too Many Irrelevant Skills Detected
Problem: The system finds skills that aren’t actually taught or required.

Solution:
- Increase the confidence threshold to 4 or 5
- Disable weak evidence allowance
- Create custom grading prompt that’s more conservative
- Test to ensure important skills aren’t filtered out
Scenario 3: Inconsistent Proficiency Levels
Problem: Similar courses show very different skill proficiency levels.

Solution:
- Lower the temperature setting for the grading step (0.1-0.2)
- Create custom grading prompt with specific proficiency criteria
- Use consistent model across all processing
- Test with multiple similar courses
Scenario 4: Processing Too Slow for Large Batches
Problem: Skills extraction takes too long when processing many documents.

Solution:
- Switch to faster models (GPT-4o-mini instead of GPT-4o)
- Increase confidence threshold to reduce processing complexity
- Consider processing in smaller batches during off-peak hours
Best Practices
Testing and Validation
- Always test first: Never apply untested configurations to important content
- Use representative samples: Test with different types of content you process
- Document changes: Keep notes on what you changed and why
- Monitor results: Check quality of new extractions regularly
Prompt Writing
- Be specific: Clear, detailed instructions work better than vague guidance
- Use examples: Include 2-3 examples of the type of analysis you want
- Stay focused: Each prompt should address one specific aspect of processing
- Test iteratively: Make small changes and test, rather than major rewrites
Model Selection
- Start conservative: Begin with standard models and settings
- Consider costs: Larger models are more expensive for high-volume processing
- Match complexity: Use powerful models only for complex content that needs them
- Monitor performance: Track both accuracy and processing speed
Troubleshooting Configuration Issues
Skills Not Being Detected
Check:
- Are your prompts too restrictive?
- Is the confidence threshold too high?
- Does your content use non-standard terminology?

Solutions:
- Review and relax grading prompts
- Lower the confidence threshold temporarily
- Add domain-specific terminology to extraction prompts
Too Many False Positives
Check:
- Are prompts too broad or creative?
- Is the temperature setting too high?
- Are confidence filters working properly?

Solutions:
- Tighten prompt instructions with negative examples
- Lower temperature settings
- Increase the confidence threshold
Inconsistent Results
Check:
- Are you using different prompts for similar content?
- Are temperature settings too high?
- Do different team members have different configurations?

Solutions:
- Standardize configurations across your organization
- Lower temperature for more consistent results
- Document and share your final configuration approach
What Happens Next
After customizing your skills processing configuration:
- All future processing uses your new settings automatically
- Existing processed content remains unchanged (unless you reprocess it)
- Configuration changes are logged and can be reverted if needed
- Team members in your organization use the same configuration for consistency
Related Tasks
- How Skills Extraction Works - Understanding the underlying process
- Reviewing and Approving Skills - Quality control after extraction
- Batch Processing Syllabi - Processing large numbers of documents