The Cognitive Architecture Blueprint
How Inner G Complete applies the PMI Cognitive Project Management for AI (CPMAI) framework to architect the Aesthetic Domain Intelligence model — a governance-first methodology for building institutional-grade AI that survives enterprise due diligence.
CPMAI is the PMI-certified standard for AI project governance. This is how we use it.

The differentiator between an AI system that survives enterprise procurement and one that gets cancelled in legal review is not the model architecture — it is the governance methodology behind it. Inner G Complete adopts the PMI Cognitive Project Management for AI (CPMAI) framework as the operational standard for every ADI engagement. This brief explains what CPMAI is, how we apply it across all six phases, and — critically — how it maps to the Aesthetic Domain Intelligence project currently in active development.
What Is CPMAI?
CPMAI is the Project Management Institute's certified framework for managing AI and cognitive computing projects. It is built on the architecture of CRISP-DM — the industry-standard data science methodology — but extends it with AI-specific governance layers including Trustworthy AI requirements, ethical AI considerations, Go/No-Go decision gates, human-in-the-loop protocols, and a formal Model Operationalization and Governance structure.
Where generic project management frameworks treat AI as a software delivery problem, CPMAI treats it as what it actually is: a living intelligence asset that must be designed, validated, governed, and continuously maintained against defined business outcomes.
Business Understanding
Data Understanding
Data Preparation
Model Development
Model Evaluation
Model Operationalization
Living Audit Disclosure
This document is the first in a series of public-facing implementation updates for the Aesthetic Domain Intelligence (ADI) project. As Inner G Complete moves through each CPMAI phase, we will publish corresponding briefs documenting our decisions, findings, and outcomes. This is "Building in Public" for the enterprise — a real-time portfolio signal of our methodology, rigor, and institutional capability.
The Six CPMAI Phases: Applied to ADI
Each phase below is described at two levels: (1) what CPMAI requires in general, and (2) how Inner G Complete applies it specifically to the Grooming, Beauty & Wellness ADI engagement.
Business Understanding
Enterprise Outcome
“We define exactly what the AI must accomplish — in business terms — before any technology selection begins.”
This is where most enterprise AI initiatives are won or lost. CPMAI Phase I demands ruthless clarity on the business problem statement, the cost-benefit economics, the ROI model, and — critically — the question of whether AI is even the right solution. We evaluate non-cognitive and non-automated alternatives first. If a non-AI solution can achieve the business objective, we say so. This phase concludes with a mandatory three-gate Go/No-Go decision before any development resources are committed.
Key Task Groups
Business Objectives
- Define the business problem statement
- Establish success criteria & KPI benchmarks
- Perform cost-benefit and ROI analysis
Cognitive Requirements
- Define AI value proposition vs. non-AI alternatives
- Select the applicable AI pattern (classification, generation, prediction)
- Map hybrid scope: AI + non-AI components
Trustworthy AI Requirements
- Trustworthy AI framework selection
- Ethical AI considerations & bias management
- AI failure mode identification & handling protocols
Go/No-Go Decision Gates
ADI Project Application — Phase I
For the Grooming & Wellness ADI project: We establish that the business objective is a sovereign intelligence layer that enables personalization, predictive operations, and enterprise-grade compliance — capabilities that no off-the-shelf SaaS platform can deliver. The non-AI alternative analysis confirms that rule-based automation alone cannot replicate the domain-specific learning cycle required.
Data Understanding
Enterprise Outcome
“We audit your data landscape with precision — mapping what exists, what is missing, and what is compliant enough to train on.”
The data understanding phase is where our Cognitive Feedstock framework is put into operation. We conduct a full Data Landscape Audit across all 15 source categories, evaluate data quality across structure, completeness, and compliance dimensions, and produce a Data Readiness Score (DRS). This phase also includes a critical evaluation of pre-trained foundation models that could accelerate development — reducing the need to train from scratch in domains where public models already carry relevant knowledge.
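To make the DRS concrete, here is a minimal sketch of how per-source audit scores could roll up into a single readiness number. The three dimensions (structure, completeness, compliance) come from the audit described above; the 0-to-1 scoring scale, equal weighting, and source names are illustrative assumptions, not the production formula.

```python
def source_score(structure: float, completeness: float, compliance: float) -> float:
    """Average the three audit dimensions for one data source (each 0.0 to 1.0)."""
    return (structure + completeness + compliance) / 3

def data_readiness_score(audit: dict) -> float:
    """Aggregate per-source scores into a single DRS on a 0-100 scale."""
    per_source = [source_score(*dims) for dims in audit.values()]
    return round(100 * sum(per_source) / len(per_source), 1)

# Hypothetical audit results for three of the source categories
audit = {
    "booking_history":    (0.9, 0.8, 1.0),
    "consultation_forms": (0.6, 0.7, 0.5),  # PHI-sensitive: compliance drags the score
    "product_inventory":  (0.8, 0.9, 1.0),
}
print(data_readiness_score(audit))  # → 80.0
```

A weighted variant (e.g. compliance weighted above completeness) is equally plausible; the point is that the DRS is a documented, reproducible roll-up rather than a gut call.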
Key Task Groups
Initial Data Collection
- Data inventory & location mapping across all 15 source categories
- Nature and structure assessment (structured, unstructured, semi-structured)
- Data inspection & volume sufficiency check
Data Quality Audit
- Current quality assessment per source
- Preparation & transformation pipeline planning
- Training / test / validation split requirements
Foundation Model Analysis
- Identify applicable pre-trained models (LLMs, vision models, recommendation systems)
- Transfer learning & fine-tuning requirements
- Edge device data requirements for on-premise deployments
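The split-requirements task above can be sketched as a deterministic, auditable corpus split. The 80/10/10 proportions and the fixed seed are illustrative defaults, not a CPMAI mandate:

```python
import random

def split_corpus(records, train=0.8, test=0.1, validation=0.1, seed=42):
    """Deterministic train/test/validation split; proportions are illustrative."""
    assert abs(train + test + validation - 1.0) < 1e-9
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)  # fixed seed => reproducible, auditable split
    n_train = int(len(shuffled) * train)
    n_test = int(len(shuffled) * test)
    return {
        "train": shuffled[:n_train],
        "test": shuffled[n_train:n_train + n_test],
        "validation": shuffled[n_train + n_test:],
    }

splits = split_corpus(list(range(100)))
print({k: len(v) for k, v in splits.items()})
# → {'train': 80, 'test': 10, 'validation': 10}
```

The fixed seed matters for Trustworthy AI auditability: the same corpus and the same seed must always reproduce the same split.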
Go/No-Go Decision Gates
ADI Project Application — Phase II
For the ADI project: We map all 15 data sources against our Cognitive Feedstock framework (see companion brief). PHI-sensitive sources (consultation forms, visual diagnostics, treatment records) are isolated under HIPAA-compliant architecture before ingestion. Pre-trained recommendation and NLP models from established foundation providers are evaluated to accelerate Phase IV.
Data Preparation
Enterprise Outcome
“We build the production-grade data pipeline — cleaned, labeled, and architected for continuous ingestion.”
Raw data is not training data. Phase III transforms the audited corpus from Phase II into a structured, model-ready dataset through rigorous cleansing, augmentation, and labeling operations. Every decision made in this phase is documented for future auditability — a requirement for Trustworthy AI compliance and a non-negotiable for any client seeking regulatory defensibility. The output of this phase is not just a dataset: it is a repeatable data pipeline that will continuously feed the model as new operational data flows in.
Key Task Groups
Data Selection
- Select and document inclusion/exclusion criteria for each data source
- Define selection methodology for auditable reproducibility
Data Cleansing & Enhancement
- Missing value treatment
- Outlier handling
- Normalization and format standardization
- Augmentation to address class imbalance or low-volume sources
Data Labeling
- Label strategy by data type (manual, semi-supervised, synthetic)
- Labeling cost and scale projection
- Quality verification protocol for labeled sets
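Several of the cleansing operations above (missing-value treatment, outlier handling, normalization) can be sketched in a single column-level routine. Median imputation, the IQR rule, and min-max scaling are our illustrative choices here, not a fixed ADI policy:

```python
import statistics

def cleanse_column(values, iqr_factor=1.5):
    """Impute missing values with the median, clip outliers by the IQR rule,
    then min-max normalize to [0, 1]. Defaults are illustrative choices."""
    present = [v for v in values if v is not None]
    median = statistics.median(present)
    filled = [median if v is None else v for v in values]
    # Interquartile-range clipping for outlier handling
    q1, _, q3 = statistics.quantiles(present, n=4)
    lo, hi = q1 - iqr_factor * (q3 - q1), q3 + iqr_factor * (q3 - q1)
    clipped = [min(max(v, lo), hi) for v in filled]
    # Min-max normalization for format standardization
    low, span = min(clipped), max(clipped) - min(clipped)
    return [(v - low) / span if span else 0.0 for v in clipped]
```

Whatever treatment is chosen per source, Phase III requires it to be documented so the transformation is reproducible in an audit.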
ADI Project Application — Phase III
For the ADI project: We build the Aesthetic Data Pipeline — a continuous ETL infrastructure that ingests operational data from PMS platforms (Mindbody, Zenoti), CRM systems, and digital consultation forms on a real-time or scheduled basis. All ingested data passes through our HIPAA compliance layer for PHI screening before entering the training corpus.
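A minimal sketch of the PHI screening gate: every record is stripped of protected fields before it can enter the corpus. The field names and the `PHI_FIELDS` allowlist are hypothetical; the real gate follows the engagement's HIPAA compliance layer, not a hard-coded set.

```python
# Hypothetical set of PHI-bearing fields to strip before ingestion
PHI_FIELDS = {"client_name", "date_of_birth", "medical_history", "photo_id"}

def screen_record(record: dict) -> dict:
    """Remove PHI fields so the record is safe for the training corpus."""
    return {k: v for k, v in record.items() if k not in PHI_FIELDS}

def ingest(records):
    """Every ingested record passes PHI screening; nothing bypasses the gate."""
    return [screen_record(r) for r in records]

raw = [{"client_name": "A. Client", "service": "balayage", "rebooked": True}]
print(ingest(raw))  # → [{'service': 'balayage', 'rebooked': True}]
```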
Model Development
Enterprise Outcome
“We select, configure, and train the model architecture that converts the prepared corpus into a functioning domain intelligence.”
This is where the intelligence is built. CPMAI Phase IV covers algorithm selection, ensemble configuration, foundation model fine-tuning, generative AI integration, prompt engineering strategy, and hyperparameter optimization. Crucially, this phase is not a one-shot process — it is iterative by design. The CPMAI model explicitly accommodates multiple training runs, each producing measurable performance outputs that inform Phase V evaluation. For the ADI project, we anticipate a hybrid architecture: a fine-tuned recommendation model for personalization, combined with a generative AI layer for conversational client interaction.
Key Task Groups
Algorithm & Architecture Selection
- Domain-relevant algorithm/modeling technique selection
- Ensemble method configuration if multi-model approach
- AutoML tool evaluation to accelerate training cycles
Foundation Model Fine-Tuning
- Selection of base pre-trained models
- Fine-tuning method (transfer learning, RLHF, prompt tuning)
- API cost and limitation analysis for hosted models
Generative AI Integration
- Generative AI approach and API selection
- Prompt engineering strategy for domain-specific outputs
- LLM chaining logic for multi-step reasoning workflows
Training & Optimization
- Model training execution and result documentation
- Validation design (train/test/validation split enforcement)
- Hyperparameter optimization and fit measurement
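To illustrate the LLM chaining logic named above, here is a two-step sketch in which the output of one prompt feeds the next. `call_llm` is a stub standing in for whichever hosted model API the engagement selects, and the prompts are hypothetical:

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (e.g. a hosted chat-completion API)."""
    return f"<model output for: {prompt[:40]}...>"

def recommend_service(client_profile: str) -> str:
    # Step 1: summarize the client's history into a structured brief
    brief = call_llm(f"Summarize this client history for a stylist: {client_profile}")
    # Step 2: feed the brief into a second, domain-specific recommendation prompt
    return call_llm(f"Given this brief, recommend the next service: {brief}")
```

Chaining keeps each prompt narrow and testable, which in turn makes the hallucination-rate evaluation in Phase V tractable per step rather than per monolithic prompt.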
ADI Project Application — Phase IV
For the ADI project: The core model is a multi-layer intelligence architecture — (1) a recommendation engine trained on regrowth cycles, service history, and behavioral preference data; (2) a generative conversational layer fine-tuned on domain-specific client interaction transcripts; and (3) a predictive analytics module trained on scheduling, inventory, and staff performance metrics.
Model Evaluation
Enterprise Outcome
“We validate model performance against the exact KPIs established in Phase I — not against engineering metrics that have no business meaning.”
Phase V closes the loop between what was promised in Phase I and what the model actually delivers. CPMAI requires evaluation against both technology KPIs (precision, recall, F1, latency) and business KPIs (rebooking rate improvement, no-show reduction, conversion lift). If either dimension fails to meet the thresholds established in Phase I, the model returns to Phase III or IV for iteration — not to production. This hard gate is what distinguishes a CPMAI-governed project from a project that ships a model and calls it done.
Key Task Groups
Model Performance Audit
- Confusion matrix / ROC curve analysis (classification tasks)
- Generative output quality evaluation (relevance, hallucination rate)
- Benchmark comparison against Phase I acceptable performance values
Business KPI Verification
- Measure against Phase I KPI targets: rebooking rate, no-show reduction, lead conversion
- Document any KPI gaps requiring model iteration
- Stakeholder review and approval readiness
Iteration Planning
- Define iteration approach for underperforming metrics
- Identify which previous phase requires re-entry
- Document learnings for next training cycle
Go/No-Go Decision Gates
ADI Project Application — Phase V
For the ADI project: Model evaluation is conducted against a predefined scorecard established in Phase I. Business KPIs include: rebooking conversion rate uplift ≥ 15%, no-show prediction accuracy ≥ 80%, personalized recommendation acceptance rate ≥ 40%, and inbound lead response time < 30 seconds. A model that passes technology benchmarks but misses business KPIs does not advance to Phase VI.
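The scorecard above can be expressed as a hard Phase VI gate. The thresholds are taken directly from the scorecard; the measured values below are illustrative:

```python
# Phase I KPI targets, expressed as pass/fail predicates
PHASE_I_TARGETS = {
    "rebooking_uplift_pct":      lambda v: v >= 15,
    "no_show_accuracy_pct":      lambda v: v >= 80,
    "recommendation_accept_pct": lambda v: v >= 40,
    "lead_response_seconds":     lambda v: v < 30,
}

def phase_vi_gate(measured: dict):
    """Return (go, failures): the model advances only if every KPI passes."""
    failures = [k for k, passes in PHASE_I_TARGETS.items() if not passes(measured[k])]
    return (not failures, failures)

measured = {"rebooking_uplift_pct": 17.2, "no_show_accuracy_pct": 76.0,
            "recommendation_accept_pct": 44.0, "lead_response_seconds": 12.0}
print(phase_vi_gate(measured))  # → (False, ['no_show_accuracy_pct'])
```

Note the gate is conjunctive: one failed business KPI blocks promotion even when every technology benchmark passes.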
Model Operationalization
Enterprise Outcome
“We deploy to production under a formal governance framework — with monitoring, maintenance, and iteration roadmaps in place from day one.”
Deployment is not the finish line — it is the starting point of the intelligence lifecycle. CPMAI Phase VI defines the deployment architecture, continuous monitoring infrastructure, governance ownership structure, and the criteria for initiating the next iteration. An operationalized ADI model that is not continuously monitored will degrade over time as client behavior, market trends, and product formulations evolve. The governance framework established in this phase ensures the model is treated as a living intelligence asset, not a static software release.
Key Task Groups
Operationalization Plan
- Deployment mode selection (cloud, on-premise, hybrid, edge)
- IT integration requirements and API surface definition
- Hybrid non-cognitive components (frontend apps, dashboards, automation workflows)
Monitoring & Maintenance
- Continuous monitoring approach and tooling selection
- Model drift detection protocols
- Retraining trigger thresholds and cadence
Governance Framework
- Ownership structure and accountable stakeholders
- Model governance policy documentation
- Response protocols for model errors, bias events, or compliance violations
Next Iteration Planning
- Post-mortem review: what worked, what didn't
- Scope definition for next training iteration
- Resource requirements for iteration continuity
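As a sketch of how a retraining trigger threshold might be wired, the monitor below fires when a rolling KPI average degrades past an allowed fraction of its deployment baseline. The window size and threshold are placeholder values, not production settings:

```python
from collections import deque

class DriftMonitor:
    def __init__(self, baseline: float, threshold: float = 0.05, window: int = 100):
        self.baseline = baseline      # KPI value recorded at deployment
        self.threshold = threshold    # allowed relative degradation
        self.recent = deque(maxlen=window)

    def observe(self, kpi_value: float) -> bool:
        """Record a live KPI sample; return True when retraining should trigger."""
        self.recent.append(kpi_value)
        if len(self.recent) < self.recent.maxlen:
            return False              # not enough samples to judge drift yet
        drift = (self.baseline - sum(self.recent) / len(self.recent)) / self.baseline
        return drift > self.threshold
```

In production this trigger would sit alongside scheduled retraining cadence, so the model retrains on whichever condition fires first.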
ADI Project Application — Phase VI
For the ADI project: The model is deployed via a secure cloud architecture with API endpoints consumed by client-facing applications (booking interfaces, AI concierge, analytics dashboards). A real-time monitoring layer tracks model output quality against live KPI data. The governance framework designates Inner G Complete as the model steward with quarterly review cycles and defined escalation paths for compliance events.
The Trustworthy AI Layer
CPMAI explicitly mandates a Trustworthy AI framework as a non-negotiable Phase I requirement. Inner G Complete adopts the following five-pillar standard across every ADI engagement:
Regulatory Compliance
HIPAA/HITECH for all PHI-touching systems. State medical board licensing requirements. FDA cosmetic ingredient standards where applicable. BAA architecture for every enterprise deployment.
Transparency & Explainability
Every model recommendation must be traceable to the data inputs that drove it. Clients and end-users receive explainable outputs — not black-box scoring — so that human judgment can always override.
Bias Identification & Management
Training data is audited for demographic, preference, and historical bias before ingestion. Models are tested against underrepresented client segments before any production approval.
Failure Mode Engineering
Every AI system deployed includes formal failure mode documentation: what triggers a failure, how it is detected, how it is surfaced to human oversight, and how it is remediated without client impact.
Data Source Transparency
Enterprise clients receive full documentation of every data source in the training corpus, including collection methods, data selection logic, and any exclusions — prior to model operationalization.
Human-in-the-Loop (HITL)
All high-stakes decisions (clinical recommendations, PHI handling, client escalations) preserve a human override layer. The AI augments human judgment; it does not replace it in safety-critical workflows.
What Comes Next: The Living Implementation Series
This document represents Phase 0 of the public ADI architecture record. As Inner G Complete progresses through each CPMAI phase, we will publish corresponding updates to this series:
Phase I Update
ADI Business Case: Objectives, KPIs, and the Go Decision
Phase II Update
Data Landscape Audit Results: Our Data Readiness Score and What We Found
Phase III Update
Building the Aesthetic Data Pipeline: Our ETL Architecture
Phase IV Update
Algorithm Selection & Foundation Model Fine-Tuning Decisions
Phase V Update
Model Evaluation: KPI Verification Against Phase I Targets
Phase VI Update
Go-Live: Deployment Architecture and Governance Framework
Enterprise clients who engage with this series receive unfiltered transparency into how a production-grade ADI is actually built — not a marketing deck, but a live architectural record. This is what "institutional signal" looks like in practice.
Engage a Methodology.
Not Just a Team.
Every Inner G Complete engagement begins with a CPMAI Phase I Business Understanding audit. We don't quote a build until we can confirm — with evidence — that your project will survive all six gates.
Request Phase I Audit