Role summary
We are hiring an Agent Systems Engineer to build Python-based services and workflow logic for AI-assisted product features and internal systems. This role is about shipping useful systems: API integrations, orchestration, retrieval, background jobs, state handling, and reliable execution around LLM-supported workflows. It is not a research role and it is not a prompt-only role.
What you will work on
- Python services that power agent-driven workflows and internal AI-supported functions
- APIs and integrations between product systems, internal services, external tools, and data sources
- Retrieval, structured outputs, task execution, and state handling in multi-step workflows
- Background jobs and processing pipelines for asynchronous or long-running tasks
- Data persistence and operational reliability across databases, logs, and service boundaries
Responsibilities
- Design and implement Python services for agent-based and LLM-supported workflows.
- Build orchestration logic for multi-step tasks, including tool usage, retrieval, validations, and fallback paths.
- Create and maintain APIs that connect AI-supported workflows with product and backend systems.
- Work with databases and background processing to support durable, auditable execution.
- Improve logging, observability, and error handling so workflows can be understood and supported in production.
- Contribute to engineering decisions on structure, service boundaries, and implementation quality.
- Collaborate directly with product, backend, infrastructure, and C-level management.
What you should bring
- Strong hands-on experience with Python in backend or applied AI systems.
- Experience building APIs and integrating external or internal services.
- Practical understanding of LLM application patterns such as retrieval, structured outputs, tool calling, or multi-step orchestration.
- Experience with background jobs, asynchronous processing, or workflow execution.
- Confidence working with databases and production-oriented software design.
- Clear communication and the ability to work in a small team with direct ownership.
Helpful but not required
- Experience with orchestration frameworks such as LangGraph or similar execution models.
- Experience with vector search, semantic retrieval, or hybrid search.
- Experience with tracing, evaluation, or quality control for LLM-based systems.
- Experience in an early-stage product environment where systems are still taking shape.
What success looks like in the first 6 months
- Ship at least one production-grade workflow or service used by a real product or internal process.
- Stabilize key integrations and background execution paths.
- Improve traceability, logging, and failure handling for at least one important workflow.
- Become a reliable technical counterpart for product and backend teams on AI-supported execution logic.
Team and work setup
You report to C-level management and work day to day with product, backend, infrastructure, and external partners. The company is small enough that decisions are visible and direct. Founders and decision-makers are in the office and reachable.
What this role is not
- This is not a pure prompt-engineering role.
- This is not a model research or model training role.
- This is not a narrow integration role limited to wiring tools together without owning system quality.
Candidate note
Candidates should know that we care more about reliable implementation and systems thinking than about hype or demos. We value people who can move from idea to production, document what matters, and keep complexity under control.
We look forward to receiving your detailed application materials (cover letter and resume). Please email them to application[at]ttxtech.eu, indicating your earliest possible start date.