From idea to production in just a few lines
The first neuro-symbolic Language Model (LM) framework leveraging the simplicity of Keras and the rigor of Deep Learning best practices.
Build RAGs, autonomous agents, multi-agent systems, self-evolving systems, and more in just a few lines
Deutsch | English | Español | Français | 日本語 | 한국어 | Português | Русский | 中文
Documentation · FAQ · Discord · Code Examples
⭐ If you find Synalinks useful, please star the repo! Help us reach more AI/ML engineers and grow the community. ⭐
Too busy to read the documentation? Give the llms.txt or llms-full.txt to your favorite LMs or AI coding tools. Or better, use the Synalinks Claude Skills with Claude Code to get started with Synalinks right away!
Synalinks is an open-source neuro-symbolic framework that makes it simple to create, train, evaluate, and deploy advanced LM-based applications, including RAGs, autonomous agents, and self-evolving reasoning systems.
Think Keras for Language Model applications: a clean, declarative API where:
- 🧩 You compose `Modules` like you would with deep learning `Layers`.
- ⚙️ You train & optimize with in-context reinforcement learning.
- 🌐 You deploy as REST APIs or MCP servers.
- Progressive complexity: Start simple and grow into advanced systems naturally.
- Neuro-symbolic learning: Combine logic, structure, and language models.
- In-context optimization: Improve model reasoning without retraining weights.
| Role | Why Synalinks Helps |
|---|---|
| 🧑‍💻 Developers | Build complex LM apps without boilerplate. |
| 🧠 Researchers | Prototype neuro-symbolic and RL-in-context systems fast. |
| 🏢 Data Scientists | Integrate LM workflows with APIs & databases. |
| 🎓 Students/Hobbyists | Learn AI composition in a clean, intuitive framework. |
Building robust LM apps is hard. Synalinks simplifies it with:
- Prompt/Anything optimization per module via In-Context RL
- Versionable, JSON-serializable pipelines
- Constrained structured outputs (JSON) for correctness
- Automatic async & parallel execution by default
- Metrics, rewards & evaluations built-in
- Native integrations: OpenAI, Ollama, Anthropic, Mistral, Azure, Groq, Gemini, XAI
- Fast, embeddable knowledge-base support based on DuckDB
- API-ready: Deploy with FastAPI or FastMCP
- KerasTuner compatibility for hyperparameter search
- Built-in MLflow callbacks and hooks for observability
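As a concrete picture of the constrained-structured-outputs bullet above: the point is that downstream code can rely on the shape of an LM response instead of scraping free text. A minimal plain-Python sketch with a hypothetical validator (illustration only, not the synalinks implementation):

```python
# Hypothetical sketch: what "constrained structured output" guarantees.
# Downstream code can trust the parsed shape instead of scraping free text.
import json


def parse_numerical_answer(raw: str):
    """Accept only a JSON object with a numeric 'answer' field."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    answer = data.get("answer") if isinstance(data, dict) else None
    # Accept ints and floats, reject booleans and strings like "twelve"
    if isinstance(answer, (int, float)) and not isinstance(answer, bool):
        return {"answer": float(answer)}
    return None


print(parse_numerical_answer('{"answer": 108}'))     # {'answer': 108.0}
print(parse_numerical_answer('The answer is 108.'))  # None
```

Synalinks enforces this kind of guarantee at generation time via constrained JSON decoding, so the validation failure case never reaches your code.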
| Framework | MCP | Logical Flow | Robust Branching | Parallel Function Calling | Hyperparameter Tuning | Ease of Use |
|---|---|---|---|---|---|---|
| Synalinks | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | 😀 |
| DSPy | ✅ Yes | ❌ No | ❌ No | ❌ No | ❌ No | 😢 |
| AdalFlow | ✅ Yes | ❌ No | ❌ No | ❌ No | ❌ No | 😢 |
| TextGrad | ❌ No | ❌ No | ❌ No | ❌ No | ❌ No | 😭 |
| Trace | ❌ No | ❌ No | ❌ No | ❌ No | ❌ No | 😭 |
```shell
uv pip install synalinks
```

```python
import synalinks
import asyncio


class Query(synalinks.DataModel):
    query: str = synalinks.Field(
        description="The user query",
    )


class NumericalAnswer(synalinks.DataModel):
    answer: float = synalinks.Field(
        description="The final numerical answer",
    )


language_model = synalinks.LanguageModel(
    model="gemini/gemini-2.5-pro",
)


@synalinks.saving.register_synalinks_serializable()
async def calculate(expression: str):
    """Calculate the result of a mathematical expression.

    Args:
        expression (str): The mathematical expression to calculate, such as
            '2 + 2'. The expression can contain numbers, operators (+, -, *, /),
            parentheses, and spaces.
    """
    if not all(char in "0123456789+-*/(). " for char in expression):
        return {
            "result": None,
            "log": "Error: invalid characters in expression",
        }
    try:
        # Evaluate the mathematical expression safely
        result = round(float(eval(expression, {"__builtins__": None}, {})), 2)
        return {
            "result": result,
            "log": "Successfully executed",
        }
    except Exception as e:
        return {
            "result": None,
            "log": f"Error: {e}",
        }
```
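The `calculate` tool is plain Python under the decorator, so you can sanity-check its logic standalone; here is a copy without the synalinks decorator (synalinks is not needed for this check):

```python
# Standalone copy of the calculate tool logic, without the synalinks decorator.
import asyncio


async def calculate(expression: str):
    """Safely evaluate a basic arithmetic expression."""
    if not all(char in "0123456789+-*/(). " for char in expression):
        return {"result": None, "log": "Error: invalid characters in expression"}
    try:
        # Evaluate the mathematical expression safely
        result = round(float(eval(expression, {"__builtins__": None}, {})), 2)
        return {"result": result, "log": "Successfully executed"}
    except Exception as e:
        return {"result": None, "log": f"Error: {e}"}


print(asyncio.run(calculate("(135 / 9 - 3) * 9")))
# {'result': 108.0, 'log': 'Successfully executed'}
print(asyncio.run(calculate("import os")))
# {'result': None, 'log': 'Error: invalid characters in expression'}
```

The character whitelist rejects anything but arithmetic before `eval` ever runs, and `{"__builtins__": None}` strips built-in functions as a second layer of defense.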
```python
async def main():
    inputs = synalinks.Input(data_model=Query)
    outputs = await synalinks.FunctionCallingAgent(
        data_model=NumericalAnswer,
        tools=[
            synalinks.Tool(calculate),
        ],
        language_model=language_model,
    )(inputs)

    program = synalinks.Program(
        inputs=inputs,
        outputs=outputs,
        name="math_agent",
        description="A math agent",
    )
```

Synalinks provides Python operators for combining and manipulating data models, enabling sophisticated control flow:
| Operator | Name | Description | Use Case |
|---|---|---|---|
| `+` | Concatenation | Combines fields from both data models. Raises an exception if either is `None`. | Merging outputs from parallel branches |
| `&` | Logical And | Safe concatenation that returns `None` if either input is `None`. | Combining with potentially null branch outputs |
| `\|` | Logical Or | Returns the non-`None` data model. If both are non-`None`, merges them. | Gathering outputs from conditional branches |
| `^` | Logical Xor | Returns data if exactly one input is non-`None`, otherwise `None`. | Exclusive branch selection |
| `~` | Logical Not | Returns `None` if the input is non-`None`, or an empty data model if `None`. | Inverting branch conditions |
| `in` | Contains | Checks if a string key exists in the schema properties, or if another data model's schema is contained. Returns `True` or `False`. | Conditional field checking, schema validation |
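A plain-Python analogy can make these semantics concrete. The sketch below mimics `&`, `|`, and `^` using optional dicts as stand-ins for data models (an analogy only, not the synalinks implementation):

```python
# Analogy only: dicts stand in for data models, None means "branch not taken".
def logical_and(a, b):
    # Safe concatenation: None if either side is None
    if a is None or b is None:
        return None
    return {**a, **b}


def logical_or(a, b):
    # Return the non-None side; merge fields if both are present
    if a is None:
        return b
    if b is None:
        return a
    return {**a, **b}


def logical_xor(a, b):
    # Data only if exactly one side is non-None
    if (a is None) != (b is None):
        return a if a is not None else b
    return None


easy, hard = {"answer": 12.0}, None  # only the "easy" branch fired
print(logical_or(easy, hard))   # {'answer': 12.0}
print(logical_and(easy, hard))  # None
print(logical_xor(easy, hard))  # {'answer': 12.0}
```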
```python
# Parallel branches with concatenation
x1 = await generator1(inputs)
x2 = await generator2(inputs)
combined = x1 & x2  # Merge both outputs

# Conditional branches with logical or
(easy, hard) = await synalinks.Branch(
    question="Is this query complex?",
    labels=["easy", "hard"],
    branches=[simple_generator, complex_generator],
)(inputs)
result = easy | hard  # Get whichever branch was selected
```

To print a tabular summary of your program:
```python
program.summary()
```

Or a plot (useful to document your system):
```python
synalinks.utils.plot_program(
    program,
    show_module_names=True,
    show_trainable=True,
    show_schemas=True,
)
```

To run your program, use the following:
```python
result = await program(
    Query(
        query=(
            "A bookstore receives a shipment of 135 new books. "
            "They place the books evenly onto 9 shelves. "
            "Later, they decide to move 3 books from each shelf to a display table"
            " at the front of the store. "
            "How many books are left on the shelves after the books are moved?"
        )
    ),
)
```

To train your program, compile it with a reward and an optimizer, then call `fit()`:

```python
async def main():
    # ... your program definition
    (x_train, y_train), (x_test, y_test) = synalinks.datasets.gsm8k.load_data()

    program.compile(
        reward=synalinks.rewards.ExactMatch(
            in_mask=["answer"],
        ),
        optimizer=synalinks.optimizers.OMEGA(
            language_model=language_model,
            # Assumes an `embedding_model` is defined alongside `language_model`
            embedding_model=embedding_model,
        ),
    )

    batch_size = 1
    epochs = 10

    history = await program.fit(
        x_train,
        y_train,
        validation_split=0.2,
        batch_size=batch_size,
        epochs=epochs,
    )
```
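The `ExactMatch` reward used in `compile()` can be pictured in plain Python: score 1.0 when every masked field matches exactly, 0.0 otherwise (an analogy under assumed semantics, not the synalinks implementation):

```python
# Analogy only: an exact-match reward restricted to the fields in `in_mask`.
def exact_match(y_true: dict, y_pred: dict, in_mask=("answer",)):
    keys = [k for k in y_true if k in in_mask]
    if not keys:
        return 0.0
    # Reward 1.0 only if every masked field matches exactly
    return 1.0 if all(y_pred.get(k) == y_true[k] for k in keys) else 0.0


print(exact_match({"answer": 108.0}, {"answer": 108.0}))  # 1.0
print(exact_match({"answer": 108.0}, {"answer": 12.0}))   # 0.0
```

Masking to `["answer"]` means the optimizer is rewarded on the final numeric field only, not on any intermediate reasoning fields.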
```python
if __name__ == "__main__":
    asyncio.run(main())
```

To save the entire architecture and variables (the program's state) into a JSON file:
```python
program.save("my_program.json")
```

To load it back:

```python
loaded_program = synalinks.Program.load("my_program.json")
```

To save only the state of your program (the variables) into JSON:

```python
program.save_variables("my_program.variables.json")
```

To load its variables (requires a program with the same architecture):

```python
program.load_variables("my_program.variables.json")
```

To enable logging, use the following at the beginning of your script:

```python
synalinks.enable_logging()
```

Synalinks provides built-in observability through MLflow for tracing and monitoring your programs.

Important: Call `enable_observability()` before creating any modules.
```python
import synalinks

# Enable observability first
synalinks.enable_observability(
    tracking_uri="http://localhost:5000",  # Optional: MLflow server URI
    experiment_name="my_experiment",  # Optional: defaults to "synalinks_traces"
)

# Then create your modules - they will be automatically traced
inputs = synalinks.Input(data_model=Query)
outputs = await synalinks.Generator(...)(inputs)
```

For training metrics and artifacts, use the Monitor callback:
```python
monitor = synalinks.callbacks.Monitor(
    tracking_uri="http://localhost:5000",
    experiment_name="training_runs",
)

await program.fit(x=train_x, y=train_y, callbacks=[monitor])
```

See the Observability documentation for Docker setup and advanced configuration.
You can learn more by reading our documentation. If you have questions, the FAQ might help you.
Contributions are welcome, whether for additional modules, metrics, or optimizers. For more information, or for help implementing your ideas (or ones from a paper), please join our Discord.
Beware that every additional metric, module, or optimizer must be approved by the core team; we want to keep the library as minimal and clean as possible to avoid the uncontrolled growth that leads to bad software practices in most current leading LM frameworks.
If you have specific feedback or feature requests, we invite you to open an issue.
Your contributions, feedback, and support are what make this project thrive.
From small bug fixes to major features, thank you for believing in open collaboration and the future of neuro-symbolic AI.
Join our community to learn more about neuro-symbolic systems and the future of AI. We welcome participation from people of all backgrounds and education levels.
This work has been done under the supervision of François Chollet, the author of Keras. If this work is useful for your research, please use the following BibTeX entry:
```bibtex
@misc{sallami2025synalinks,
  title={Synalinks},
  author={Sallami, Yoan and Chollet, Fran\c{c}ois},
  year={2025},
  howpublished={\url{https://github.com/SynaLinks/Synalinks}},
}
```

Synalinks would not be possible without the great work of the following open-source projects:
