Your contracts, your shipments, your invoices. One platform that connects all three and governs what happens next.

The Freehand platform is powered by three purpose-built layers that hold supply chain context, reason over it with logistics-specific intelligence, and give enterprises the controls they need to deploy AI Teams with confidence.

Trusted by global leaders in Logistics, Manufacturing, and Retail

What Freehand is built on

Our AI Teams forecast demand, negotiate bids, audit complex invoices, and orchestrate global payments, giving you absolute, real-time control over your total freight spend.

Knowledge Graph

The full picture of your supply chain in one place

The foundation of the platform. It captures and connects every data point relevant to a spend decision - contracts, rates, shipments, invoices, exceptions, and vendor history.

Full-context ingestion

Connects structured data (ERP exports, rate cards, TMS records), semi-structured data (EDI feeds, shipment events), and unstructured data (emails, contracts, PDFs) into a single queryable layer.

Relational mapping

Maps entities natively - how a lane links to a rate card, a rate card to a carrier, a carrier to an invoice, an invoice to what was actually delivered.

Grounded reasoning

Every AI output is anchored in verified graph facts, so Agents reason from evidence and never approximate.

LANGUAGE MODEL

The reasoning layer underneath every Freehand Agent

A multi-model reasoning layer that routes each task to the right architecture - and grounds every output in the knowledge graph so Freehand Agents act from verified facts.

Task-based routing

A model hub routes each task based on what it requires. Reasoning, document extraction, multilingual work, and freight-specific judgment calls all go to different models - including Freehand's own SLMs.
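As a rough illustration of the routing idea, the sketch below maps task types to models and falls back to a general reasoning model. The task categories and model names are hypothetical stand-ins, not Freehand's actual hub configuration.

```python
# Hypothetical sketch of task-based model routing. Task types and
# model identifiers are illustrative, not Freehand's real hub.

ROUTES = {
    "document_extraction": "doc-extract-slm",
    "multilingual": "multilingual-llm",
    "freight_judgment": "freight-slm",
}

def route(task_type: str) -> str:
    """Return the model suited to this task, with a general fallback."""
    return ROUTES.get(task_type, "general-reasoning-llm")

print(route("freight_judgment"))   # freight-slm
print(route("summarization"))      # general-reasoning-llm
```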

Domain-specific intelligence

Freehand's SLMs are trained on supply-chain-specific knowledge such as NMFC classifications, dimensional weight rules, and carrier-specific accessorial logic.

Knowledge graph grounding

Every model output is anchored in verified facts from the knowledge graph. Agents act on what is documented.

FREEHAND STUDIO

Where Freehand AI Teams are built, configured, and deployed.

Studio is the control layer where you define what agents do autonomously, what they escalate, and how far they go.

Natural language rules

Define what agents approve automatically, what they escalate, and what they hold — in plain language, no code required.

Agent observability and evals

See exactly what every Agent did on every decision, flag underperformance, and tune rules in real time.

Continuous learning

Every resolved exception feeds back into the knowledge graph, making the system more accurate with every cycle.

Every decision has a reason. Every reason is logged.

Every Freehand Agent decision carries a complete reasoning trace, and Studio gives your team the controls to define exactly how far agents go without escalating.

Decision traceability

Every decision a Freehand Agent makes is logged with the specific document, rule, or data point that justified it.
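The shape of such a logged decision might look like the sketch below: each record carries the evidence that justified it. Field names and values are assumptions for illustration, not Freehand's actual schema.

```python
# Illustrative decision record: every agent decision stores the
# document, rule, or data point that justified it. Field names are
# assumed for the sketch, not Freehand's real schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    agent: str
    decision: str          # e.g. "approve_invoice", "flag_overcharge"
    evidence: list         # contract clauses, rate card rows, shipment records
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    agent="invoice-audit",
    decision="flag_overcharge",
    evidence=[
        "contract clause 4.2: fuel surcharge capped at 18%",
        "invoice line 7: fuel surcharge billed at 22%",
    ],
)
```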

Agent observability

Freehand surfaces what every agent did, flags performance anomalies, and lets your team run evals on agent output over time. Every resolved decision makes the AI more accurate.

Configurable guardrails

Freehand lets you configure exactly where agents act autonomously, where they escalate to a human, and where they hold for review.
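One minimal way to picture such a guardrail is a banded threshold check, as in the sketch below. The dollar limits are made-up configuration values, not Freehand defaults.

```python
# Hypothetical guardrail: act autonomously below one threshold, hold
# for review in a middle band, escalate above it. Limits are
# illustrative configuration values, not product defaults.

def guardrail(amount: float,
              auto_limit: float = 500.0,
              review_limit: float = 5000.0) -> str:
    if amount <= auto_limit:
        return "act_autonomously"
    if amount <= review_limit:
        return "hold_for_review"
    return "escalate_to_human"

print(guardrail(120.0))    # act_autonomously
print(guardrail(2000.0))   # hold_for_review
print(guardrail(12000.0))  # escalate_to_human
```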

Enterprise-grade security, built in from the start.

TLS 1.3 and AES-256 encryption
Audit trail
Full RBAC

Context lives everywhere. Freehand operates in all of it.

Freehand Agents execute in the tools your team already uses and post results to the systems you already trust.

Frequently Asked Questions

Have more questions and can't find the answers here?

How does Freehand connect to our existing systems?

Freehand has native connectors for SAP, Oracle, Microsoft Dynamics, JDE, and NetSuite, and supports EDI, API, SFTP, and database sync for real-time and batch processing. Middleware support includes Seeburger, MuleSoft, and Boomi.

Do we need an engineering team to set it up or maintain it?

No. Freehand Studio is built for operations and finance teams. Rules are configured in plain language. Integrations use standard connectors. Your IT team handles initial system connections - ongoing management does not require engineering resources.

How does Freehand handle contracts that are non-standard or highly complex?

The knowledge graph is built to hold contract complexity - sliding scales, fuel surcharge indices, multi-tier accessorial structures, currency adjustments, and SLA penalty logic. Agents apply the rules in your actual contracts, not a simplified version of them.
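For a flavor of the kind of contract logic involved, the sketch below applies a sliding-scale fuel surcharge from an index table. The tiers and rates are invented for the example and do not come from any real contract.

```python
# Illustrative sliding-scale fuel surcharge: the applicable rate is
# the highest tier whose diesel-price floor is met. Tiers are made up
# for the example, not taken from a real contract.

FUEL_SURCHARGE_TIERS = [  # (diesel price floor in $/gal, surcharge fraction)
    (0.00, 0.10),
    (3.50, 0.15),
    (4.50, 0.20),
]

def fuel_surcharge(base_freight: float, diesel_price: float) -> float:
    rate = 0.0
    for floor, pct in FUEL_SURCHARGE_TIERS:
        if diesel_price >= floor:
            rate = pct  # tiers are ascending, so keep the last match
    return round(base_freight * rate, 2)

print(fuel_surcharge(1000.0, 4.75))  # 200.0
print(fuel_surcharge(1000.0, 3.60))  # 150.0
```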

How do we know the AI is not hallucinating or making up answers?

Every output from the Logistics Language Model is grounded in verified facts from the knowledge graph - not generated from training data alone. Agents act on what is documented in your contracts, shipment records, and rate cards. That grounding is what makes autonomous financial decisions defensible.

How does Freehand improve over time?

Every resolved exception, corrected decision, and validated outcome feeds back into the Supply Chain Knowledge Graph through Studio's built-in feedback loops. The system gets more accurate with every cycle - it does not require manual retraining.

How do we build the organization's knowledge graph?

The knowledge graph is built by integrating data from Apache Iceberg (structured data) and OpenSearch (unstructured data). It represents entities and relationships, enabling graph-based queries and semantic reasoning.
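A toy version of such entity-and-relationship queries is sketched below, tracing the lane-to-rate-card-to-carrier chain described earlier. Entity names and relation labels are illustrative, not the platform's actual schema.

```python
# Minimal knowledge graph as typed edges, supporting a traversal from
# a lane to its rate card to the issuing carrier. Names and relation
# labels are invented for the sketch.

edges = [
    ("lane:CHI-DAL", "priced_by", "ratecard:RC-114"),
    ("ratecard:RC-114", "issued_by", "carrier:ACME"),
    ("carrier:ACME", "billed_via", "invoice:INV-9001"),
]

def neighbors(node: str, relation: str) -> list:
    """Graph query: follow one relation type outward from a node."""
    return [dst for src, rel, dst in edges if src == node and rel == relation]

def carrier_for_lane(lane: str) -> str:
    ratecard = neighbors(lane, "priced_by")[0]
    return neighbors(ratecard, "issued_by")[0]

print(carrier_for_lane("lane:CHI-DAL"))  # carrier:ACME
```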

Can we do this through unstructured data as well?

Yes, unstructured data can be integrated using OpenSearch, which stores embeddings generated from unstructured data like emails, contracts, and customer feedback.
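The retrieval mechanism behind this is vector similarity over embeddings. The toy sketch below uses tiny hand-made vectors in place of real embedding model output to show the idea.

```python
# Toy vector similarity search over document embeddings. The vectors
# are tiny hand-made stand-ins for real embedding output; a store like
# OpenSearch performs the same nearest-neighbor lookup at scale.
import math

docs = {
    "contract_a": [0.9, 0.1, 0.0],
    "email_b": [0.1, 0.8, 0.1],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def most_similar(query_vec):
    """Return the document whose embedding is closest to the query."""
    return max(docs, key=lambda d: cosine(docs[d], query_vec))

print(most_similar([1.0, 0.0, 0.0]))  # contract_a
```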

How do we showcase this explainability of AI?

The agentic framework can trace the steps it took to arrive at a recommendation or suggestion, making the process transparent. Custom embeddings enhance the accuracy of vector similarity search, allowing the AI to provide more relevant and context-aware recommendations.

Is there a way we can configure/train the AI agents?

AI agents can be configured and trained using custom fine-tuned models deployed via Hugging Face and SageMaker. The models can be trained on customer-specific logistics data to capture each customer's nuances.

What measures are in place to ensure the security and privacy of customer data integrated into the knowledge graph?

We employ several security measures, including encryption at rest and in transit, access controls, and regular security audits. Customer data is anonymized where possible, and we comply with data protection regulations such as GDPR and CCPA. Access to the knowledge graph is restricted to authorized personnel.

What are the protocols for updating and maintaining the knowledge graph and custom embeddings?

Regular updates to the knowledge graph and custom embeddings are conducted through a version control system. Updates are tested in a staging environment before being deployed to production. Maintenance includes monitoring for data drift and retraining models as necessary to ensure they remain accurate and relevant.

What are the fallback mechanisms if the AI agent fails to provide a recommendation?

Fallback mechanisms include manual override by human operators, default rules based on historical data, and escalation to a higher level of support. The system is designed to log instances where it fails to provide a recommendation for further analysis and improvement.

What are the disaster recovery and business continuity plans for the AI system?

Our disaster recovery plan includes regular backups of the knowledge graph and model parameters, stored in geographically redundant locations. We have a business continuity plan that outlines steps to be taken in the event of system failure, including failover to secondary systems and communication protocols with stakeholders.

What is the process for scaling the AI system to accommodate growing data volumes and user demands?

Scaling is achieved through a combination of horizontal scaling (adding more instances) and vertical scaling (upgrading instance types). We use cloud-native services to automatically scale resources based on demand. Additionally, we continuously optimize our algorithms and data structures to handle larger volumes efficiently.

How long does implementation take?

Freehand arrives pre-trained on thousands of carrier rate structures, freight classifications, fuel surcharge tables, and accessorial logic. That means AI Teams can begin auditing from day one without a lengthy training period. Typical deployment timelines depend on the complexity of your data integrations and the number of spend categories you are starting with.

What happens when an agent makes the wrong call?

Every agent decision is logged with its full reasoning trace - the contract clause, rate card, or shipment record it acted on. If a decision needs to be challenged or reversed, that trace gives your team everything required to understand what happened and correct it. Studio's configurable guardrails also let you set the thresholds where agents act autonomously vs. escalate to a human, so you control the risk exposure from day one.

Is our data secure?

Freehand is SOC 2 Type II certified and ISO 27001 aligned. All data is encrypted with TLS 1.3 and AES-256. The platform is GDPR compliant and provides a full audit trail with role-based access controls across every layer.

Can we bring our own AI models?

Yes. Freehand supports bring-your-own-model (BYOM). Enterprises with proprietary models can plug them into the same infrastructure and benefit from the same knowledge graph grounding and Studio controls.

What are the LLMs we use today?

We consume Large Language Models (LLMs) from AWS Bedrock, including models from Meta, Anthropic (Claude), Stability AI (Stable Diffusion), and Amazon (Nova), alongside custom fine-tuned models deployed using Hugging Face and SageMaker.

How will we get customer consent to integrate with their data across systems?

Customer consent can be obtained through standard data integration agreements and consent forms. Clear communication about how their data will be used and secured is essential.

What are the decision guardrails for the AI decisions/recommendations?

The decision guardrails include the use of custom embedding models tailored to the logistics context, which ensure the AI understands domain-specific semantics. The knowledge graph provides a unified view of structured and unstructured data, so decisions are based on comprehensive context.

How do AI agents do reasoning to arrive at the recommendations/suggestions?

The AI agents use a combination of the knowledge graph, custom embeddings, and the agentic framework for reasoning. The planning module uses the knowledge graph to understand relationships and generate SQL queries or reasoning steps. The solver module uses the knowledge graph and OpenSearch to resolve complex issues by retrieving relevant information and identifying root causes. The explainability of AI is showcased through the use of the knowledge graph, which provides a semantic layer that connects structured and unstructured data.
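A simplified picture of the planning step is sketched below: known schema relationships are used to assemble a SQL query that joins invoices to their governing rate cards. Table and column names are assumptions for the illustration, not the platform's actual schema.

```python
# Hedged sketch of a planning module using known schema links to
# generate a SQL query for overcharge detection. Table and column
# names are invented for the example.

SCHEMA_LINKS = {
    ("invoices", "rate_cards"): "invoices.rate_card_id = rate_cards.id",
}

def plan_overcharge_query() -> str:
    """Assemble a query finding invoices billed above the contracted rate."""
    join = SCHEMA_LINKS[("invoices", "rate_cards")]
    return (
        "SELECT invoices.id, invoices.amount, rate_cards.contracted_rate "
        f"FROM invoices JOIN rate_cards ON {join} "
        "WHERE invoices.amount > rate_cards.contracted_rate"
    )

print(plan_overcharge_query())
```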

How are the LLMs and custom models validated for accuracy and performance?

The LLMs and custom models are validated through a combination of automated testing, human-in-the-loop reviews, and performance metrics such as accuracy, precision, recall, and F1 score. We use benchmark datasets specific to the logistics domain to evaluate model performance. Additionally, A/B testing is conducted in a controlled environment before full deployment.
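As a worked example of the metrics named above, the sketch below computes precision, recall, and F1 from a toy confusion matrix (say, for an "is this invoice line an overcharge?" classifier). The counts are invented for the arithmetic.

```python
# Worked example of the validation metrics above, computed from a toy
# confusion matrix. tp/fp/fn counts are invented for the arithmetic.

def metrics(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = metrics(tp=8, fp=2, fn=2)
print(p, r, round(f1, 2))  # 0.8 0.8 0.8
```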

How is the explainability of AI decisions ensured and documented?

Explainability is ensured through the use of the knowledge graph, which provides a semantic layer that connects structured and unstructured data. The agentic framework can trace the steps it took to arrive at a recommendation or suggestion, making the process transparent. Documentation includes model cards that describe the model’s capabilities, limitations, and the data used for training.

How are biases in the AI models identified and mitigated?

Biases are identified through regular audits of model predictions against a diverse set of test cases. We use techniques such as fairness constraints during model training and post-hoc analysis to detect and mitigate biases. Additionally, we involve diverse teams in the model development process to catch potential biases early.

How is customer consent managed and documented for data integration?

Customer consent is obtained through standard data integration agreements and consent forms. We maintain a record of all consents in a secure database and provide customers with the option to revoke consent at any time. Clear communication about how their data will be used and secured is provided upfront.

How are the AI agents audited for compliance with industry standards and regulations?

AI agents are audited by internal and external auditors to ensure compliance with industry standards such as ISO/IEC 27001 for information security and NIST guidelines for AI risk management. Regular compliance checks are conducted, and audit reports are maintained for review.

Recover margin. Enforce contracts. Close the loop.

See how Freehand recovers margin you're already losing

Map your commercial agreements to real-world execution - recovering 2-5% in lost margins and ensuring 100% audit coverage.

What to expect in the call

We identify exactly where you're leaking margins.

See how our AI Teams cross-check contracts and resolve overcharges.

Get a savings estimate based on your current spend and systems.

Trusted & Recognized by

KEARNEY
pwc
Gartner

See AI teams in action