Overview

Organization Configuration in Mapademics allows administrators to customize how the platform processes academic and job market data for their institution. This includes setting up basic organization details, configuring AI models for content processing, and managing custom prompt templates that control how skills are extracted from syllabi and job descriptions. These settings apply across your entire organization and affect all users and processing jobs. Proper configuration ensures optimal accuracy and performance for your institution’s specific needs.

Before You Start

Prerequisites:
  • Administrative access to your organization
  • Understanding of your institution’s data processing requirements
  • Time estimate: 15-30 minutes for initial setup
What You’ll Configure:
  • Organization name and API access
  • AI models for different processing tasks
  • Prompt configurations for content analysis
  • Custom prompt templates (optional)

Basic Organization Settings

Updating Organization Information

Your organization’s basic settings control fundamental identification and API access.
  1. Navigate to Organization Settings
    • From your dashboard sidebar, click Settings
    • Select Organization Settings from the left menu
  2. Update Organization Name
    • Edit the Organization Name field with your institution’s official name
    • This name appears in reports, widgets, and email communications
    • Click Save Changes to apply
  3. Manage API Access
    • View your Public API Key for widget integrations
    • Click Copy to copy the key to your clipboard
    • This key is used for embeddable widgets and external integrations
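
For context, an embed typically passes this public key as a configuration value when the widget loads. The TypeScript sketch below is only an illustration; the actual embed script, object name, and option names come from your widget integration instructions and will differ.

  // Hypothetical sketch: initializing an embedded widget with the public API key.
  // The global object and option names are placeholders, not the actual
  // Mapademics embed API; follow the embed instructions in your widget settings.
  declare const MapademicsWidget: {
    init(options: { apiKey: string; container: string }): void;
  };

  MapademicsWidget.init({
    apiKey: "pk_your_public_api_key",      // value copied from Organization Settings
    container: "#program-alignment-widget" // element where the widget renders
  });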

AI Model Configuration

Mapademics uses different AI models for various content processing tasks. You can customize these models and their settings to optimize performance for your needs.

Understanding Processing Pipelines

The platform has three main processing pipelines:
  • Syllabus Processing: Analyzes course syllabi to extract skills and learning outcomes
  • Job Description Processing: Analyzes job postings to identify required skills
  • SOC Classification: Maps jobs to Standard Occupational Classification codes
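
Conceptually, each pipeline is a sequence of steps, and each step has its own model and temperature setting. The TypeScript sketch below shows one way to picture that structure; the names and shape are illustrative assumptions, not the platform's actual schema.

  // Illustrative only: one way to picture a processing pipeline's configuration.
  // The names and structure are assumptions, not the Mapademics schema.
  type Pipeline = "syllabus" | "job_description";

  interface StepConfig {
    step: string;        // e.g., "structure_extraction"
    model: string;       // e.g., "gpt-4o-mini"
    temperature: number; // 0.0 (most consistent) to 1.0 (most varied)
  }

  interface PipelineConfig {
    pipeline: Pipeline;
    steps: StepConfig[];
  }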

Configuring Syllabus Processing Models

  1. Navigate to Model Configuration
    • From Settings, select Model Configuration
    • Choose the Syllabus tab
  2. Configure Each Processing Step
    Structure Extraction:
    • Purpose: Extracts basic information from syllabus documents
    • Recommended Model: GPT-4o Mini (faster, cost-effective)
    • Temperature: Lower values (0-0.3) for more consistent extraction
    Skill Extraction:
    • Purpose: Identifies skills taught in the course
    • Recommended Model: GPT-4o Mini (balance of speed and accuracy)
    • Temperature: Lower values (0-0.3) for consistent skill identification
    Skill Grading:
    • Purpose: Assesses the level at which skills are taught
    • Recommended Model: GPT-4o (higher accuracy for complex analysis)
    • Temperature: Low to moderate (0.1-0.3) for reliable grading
    Cognitive Skills Assessment:
    • Purpose: Evaluates higher-order thinking skills
    • Recommended Model: GPT-4o (complex reasoning capabilities)
    • Temperature: Low to moderate (0.1-0.3) for consistent assessment
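
Putting these recommendations together, a syllabus pipeline configured with the suggested defaults could be pictured like the sketch below. Step and model identifiers are illustrative placeholders; use the names shown in your Model Configuration screen.

  // Illustrative values only, mirroring the recommendations above;
  // step and model identifiers are placeholders, not platform names.
  const syllabusDefaults = {
    pipeline: "syllabus",
    steps: [
      { step: "structure_extraction",        model: "gpt-4o-mini", temperature: 0.2 },
      { step: "skill_extraction",            model: "gpt-4o-mini", temperature: 0.2 },
      { step: "skill_grading",               model: "gpt-4o",      temperature: 0.2 },
      { step: "cognitive_skills_assessment", model: "gpt-4o",      temperature: 0.2 },
    ],
  };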

Configuring Job Description Processing Models

  1. Switch to Job Description Pipeline
    • Select the Job Description tab in Model Configuration
  2. Configure Processing Steps
    Structure Extraction:
    • Extracts job requirements, responsibilities, and qualifications
    • Model recommendations are the same as for syllabus processing
    Skill Extraction:
    • Identifies skills required for the job position
    • Use the same models as syllabus processing so results are comparable
    Skill Grading:
    • Assesses skill proficiency levels required by employers
    • Higher-end models recommended for nuanced level assessment
    SOC Classification:
    • Maps job descriptions to occupational classification codes
    • Requires sophisticated reasoning; use GPT-4o or Claude 3.5 Sonnet
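
As with the syllabus pipeline, these choices amount to one model-and-temperature pair per step. The sketch below is illustrative only and simply mirrors the recommendations above; step and model identifiers are placeholders.

  // Illustrative values only; step and model identifiers are placeholders.
  const jobDescriptionDefaults = {
    pipeline: "job_description",
    steps: [
      { step: "structure_extraction", model: "gpt-4o-mini", temperature: 0.2 },
      { step: "skill_extraction",     model: "gpt-4o-mini", temperature: 0.2 },
      { step: "skill_grading",        model: "gpt-4o",      temperature: 0.2 },
      { step: "soc_classification",   model: "gpt-4o",      temperature: 0.1 },
    ],
  };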

Model Selection Guidelines

For High-Volume Processing:
  • Use GPT-4o Mini for extraction tasks to reduce costs
  • Reserve premium models (GPT-4o, Claude 3.5 Sonnet) for complex analysis
For Maximum Accuracy:
  • Use GPT-4o or Claude 3.5 Sonnet across all steps
  • Accept higher processing costs for improved precision
Temperature Settings:
  • 0.0-0.2: Most consistent, predictable outputs
  • 0.3-0.5: Balanced consistency and creativity
  • 0.6-1.0: More creative but less predictable (rarely recommended)
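
If it helps to see these guidelines as a rule of thumb, the sketch below expresses them as a simple decision: cost-effective models for high-volume extraction, premium models for complex analysis. This is just a summary of the recommendation, not a platform feature.

  // Illustration of the guideline above, not a Mapademics API.
  type Priority = "high_volume" | "max_accuracy";

  function suggestModel(step: string, priority: Priority): string {
    const complexSteps = ["skill_grading", "cognitive_skills_assessment", "soc_classification"];
    if (priority === "max_accuracy" || complexSteps.includes(step)) {
      return "gpt-4o"; // or Claude 3.5 Sonnet for complex reasoning
    }
    return "gpt-4o-mini"; // cost-effective for extraction-heavy steps
  }

  // Example: suggestModel("skill_extraction", "high_volume") -> "gpt-4o-mini"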

Prompt Configuration

Prompt configuration controls which instruction sets are used for each processing step. This allows you to use system defaults or custom prompts tailored to your institution.

Understanding Prompt Versions

Each processing step can use different prompt versions:
  • System Default: Mapademics’ built-in, tested prompts
  • Custom Versions: Organization-specific prompts you create

Configuring Prompt Versions

  1. Navigate to Prompt Configuration
    • From Settings, select Prompt Configuration
  2. Choose Processing Pipeline
    • Select either Syllabus or Job Description processing
  3. Configure Each Step
    • For each processing step, select your preferred prompt version
    • System Default: Use for most organizations
    • Custom Versions: Use when you have specific institutional requirements
  4. Test Your Configuration
    • Click Test Configuration to validate your settings
    • Upload a sample document to see processing results
    • Adjust settings based on test outcomes
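
As a mental model, prompt configuration maps each processing step to a prompt version, either the system default or one of your custom versions. The sketch below is illustrative only; step and version names are placeholders.

  // Illustrative only: each processing step maps to a prompt version.
  // Step and version names are placeholders.
  const syllabusPromptVersions: Record<string, string> = {
    structure_extraction: "system_default",
    skill_extraction: "system_default",
    skill_grading: "institutional-v2", // a custom version (see Custom Prompt Templates)
    cognitive_skills_assessment: "system_default",
  };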

Custom Prompt Templates

Organizations with advanced requirements can create custom prompt templates to fine-tune how content is processed for their specific needs.

When to Create Custom Prompts

Create custom prompts when you need:
  • Institution-specific terminology or skill classifications
  • Specialized analysis for unique program types
  • Custom formatting for extracted skills
  • Integration with existing institutional taxonomies

Creating Prompt Templates

  1. Navigate to Prompt Templates
    • From Settings, select Prompt Templates
  2. Create New Template
    • Click Create Template
    • Choose the processing pipeline (Syllabus or Job Description)
    • Select the processing step (Structure Extraction, Skill Extraction, etc.)
  3. Define Template Details
    • Name: Descriptive name for your template
    • Version: Version identifier (e.g., “v1.0”, “institutional-v2”)
    • Description: Purpose and use case for this template
  4. Write Custom Instructions
    • Enter your custom prompt instructions
    • Follow the format guidelines for your chosen step
    • Include specific terminology or requirements for your institution
  5. Test and Validate
    • Save your template
    • Use Prompt Configuration to select your custom template
    • Run tests with representative content
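
To make this concrete, here is a sketch of what a custom Skill Extraction prompt might contain, reflecting the best practices below. The wording, institution name, and output format are examples only; always follow the format guidelines shown in the template editor for your chosen step.

  // Example content for a custom Skill Extraction prompt (illustrative only).
  // "Example University" and the output format are placeholders for your own
  // institutional requirements and the step's format guidelines.
  const customSkillExtractionPrompt = `
  You are extracting skills taught in a course syllabus for Example University.
  - Use terms from the university skill taxonomy where they apply (e.g., "Quantitative Reasoning").
  - Return one skill per line as: <skill name> | <short evidence from the syllabus>.
  - Good: "Data Visualization | students build dashboards in the week 6 lab"
  - Poor: "computers" (too vague, no evidence)
  - Do not treat course logistics (grading policy, attendance) as skills.
  `;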

Template Best Practices

Writing Effective Prompts:
  • Be specific about desired output format
  • Include examples of good vs. poor results
  • Use clear, actionable language
  • Avoid ambiguous terminology
Version Management:
  • Use descriptive version names
  • Test thoroughly before applying to production processing
  • Keep previous versions available for comparison

Advanced Configuration Options

Background Processing Settings

In addition to model and prompt configuration, which control processing quality, some performance settings can be configured at the organization level:
  • Batch Processing: How many items are processed simultaneously
  • Retry Logic: How failed processing attempts are handled
  • Quality Thresholds: Minimum confidence levels for automated processing
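
These options can be pictured as a small organization-level settings block, as in the sketch below. The names and values are illustrative assumptions; which settings are self-service and which require support assistance depends on your plan.

  // Illustrative only: conceptual shape of organization-level processing settings.
  // Names and values are assumptions, not documented platform options.
  const processingSettings = {
    batchSize: 10,            // items processed simultaneously
    maxRetries: 3,            // attempts before a failed item is flagged
    confidenceThreshold: 0.8, // minimum confidence for fully automated results
  };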

API Configuration

Your organization’s public API key enables:
  • Widget Integration: Embedding program-career alignment widgets
  • External Systems: Connecting with learning management systems
  • Data Export: Accessing processed skills data programmatically
The API key is automatically generated and cannot be regenerated by administrators. Contact support if you need a new API key.
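
For programmatic access, requests typically authenticate with the public API key. The TypeScript sketch below is a generic illustration only; the actual base URL, endpoint paths, and header name come from the Mapademics API documentation and will differ.

  // Hypothetical sketch only: the base URL, path, and header name are
  // placeholders, not the documented Mapademics API.
  async function fetchProcessedSkills(apiKey: string): Promise<unknown> {
    const response = await fetch("https://api.example.com/v1/skills", {
      headers: { "X-Api-Key": apiKey },
    });
    if (!response.ok) {
      throw new Error(`Export request failed with status ${response.status}`);
    }
    return response.json();
  }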

What Happens Next

After configuring your organization settings:
  1. Immediate Effect: New processing jobs use your configured models and prompts
  2. Existing Data: Previously processed content retains its original analysis
  3. Reprocessing: You can reprocess existing content with new configurations if needed
  4. Monitoring: Check background job status to ensure processing works as expected

Troubleshooting Configuration Issues

Models Not Performing as Expected

Problem: Processing results don’t meet quality expectations.
Solutions:
  • Review your temperature settings (lower = more consistent)
  • Test different model combinations for your content type
  • Consider creating custom prompts for institution-specific terminology
  • Run test configurations with sample documents

Custom Prompts Not Working

Problem: Custom prompt templates produce inconsistent or poor results.
Solutions:
  • Verify prompt syntax follows the required format
  • Include more specific examples and instructions
  • Test with various document types from your institution
  • Consider starting with system defaults and gradually customizing

High Processing Costs

Problem: AI processing costs are higher than expected.
Solutions:
  • Switch to GPT-4o Mini for high-volume extraction tasks
  • Reserve premium models for final analysis steps only
  • Monitor processing volume and adjust models accordingly
  • Contact support for volume pricing options

Configuration Changes Not Applied

Problem: Changes to model or prompt settings don’t affect processing.
Solutions:
  • Ensure you clicked Save Configuration after making changes
  • Wait a few minutes for configuration to propagate
  • Check that new processing jobs are being created (not reprocessing old jobs)
  • Verify you have administrative permissions to modify settings