AI/ML Engineer (LLMs & RAG) at Particula
Particula
Hoboken, NJ
Information Technology
Posted today
Job Description
Particula is the prime rating provider for digital assets, now bringing trusted, data-driven ratings on-chain. As DeFi matures and converges with TradFi, we’re building the rails that help institutions, protocols, and builders use ratings to unlock safer, more efficient capital flows.

About the role
You’ll help design, build, and ship LLM-powered features that underpin our ratings and monitoring products. Working closely with the Head of AI, you’ll focus on AI-powered token and asset analysis, automated report generation, multi-modal document analysis, robust evaluation and observability, and reliable production delivery on AWS.

No one ticks every box. If you bring solid fundamentals, curiosity, and the drive to learn quickly, please apply even if your experience doesn’t align one-to-one with the description. We care about potential and attitude.

Tasks
- Build and maintain LLM-powered features end-to-end (prompting, RAG pipelines, structured extraction/classification such as entity extraction; see the sketches after this listing).
- Develop data ingestion, cleaning, and indexing pipelines for RAG, including n8n workflows for intake and enrichment (connectors, transformations, error handling, scheduling).
- Contribute to lightweight model tuning and systematic evaluation.
- Establish evaluation and observability for RAG (dashboards, automated reporting, experiment tracking) to ensure reliability and factual grounding.
- Optimise prompts, retrieval, and context strategies to improve accuracy, reduce hallucinations, and control latency/cost.
- Work hand-in-hand with our ML/DevOps engineer to ensure smooth deployments, reliability, and continuous improvement.
- Coordinate and provide technical guidance to a small offshore AI development team (clear specifications, code reviews, quality standards), with support from the Head of AI.
- Collaborate with product and engineering to scope and deliver incremental value in short, iterative releases.

Requirements
- Strong Python skills; experience with PyTorch or Transformers; familiarity with the Hugging Face ecosystem.
- Practical knowledge of LLM tooling (e.g. LangChain or LangGraph) and RAG concepts.
- Experience building on AWS, ideally including:
  • Serverless functions (AWS Lambda) for orchestration,
  • Elastic compute (EC2) for workloads,
  • Foundation model services (Bedrock or SageMaker) for model hosting and tuning.
- Hands-on experience with n8n; broader workflow automation experience is a plus.
- Containerisation with Docker; proficiency with Git and CI/CD.
- MLOps fundamentals: MLflow for experiment/model tracking; evaluation frameworks (e.g. RAGAS).
- Clear communication, a collaborative mindset, and a focus on shipping.
- Languages: strong English; German is a plus. Applications in English or German are welcome.
- Education: a degree in Computer Science or equivalent preferred; equivalent practical experience is acceptable.

Benefits
- Offsites with the team in exciting locations
- Flexible working hours in a remote-first company
- Exciting product in a very dynamic market environment
- Values-based start-up culture
- Many opportunities to develop further and network with committed people
- Flat hierarchy

Let’s build the next layer of trust for digital assets - together!
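For illustration only: a minimal, library-agnostic sketch of the RAG query path described in the first Task. The embed() and generate() callables are hypothetical stand-ins for an embedding model and an LLM endpoint (for example, a model hosted on Bedrock or SageMaker); in practice this role would use the tooling named in the Requirements, such as LangChain or LangGraph with a proper vector store, rather than this hand-rolled version.

# Illustrative sketch: retrieval-augmented generation (RAG) query path.
# embed() and generate() are hypothetical stand-ins, not part of any specific library.
from math import sqrt
from typing import Callable, List, Tuple

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(question: str,
             chunks: List[str],
             embed: Callable[[str], List[float]],
             k: int = 3) -> List[str]:
    """Rank pre-chunked documents by similarity to the question and keep the top k."""
    q_vec = embed(question)
    scored: List[Tuple[float, str]] = [(cosine(q_vec, embed(c)), c) for c in chunks]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [chunk for _, chunk in scored[:k]]

def answer(question: str,
           chunks: List[str],
           embed: Callable[[str], List[float]],
           generate: Callable[[str], str]) -> str:
    """Assemble retrieved context into a grounded prompt and call the LLM."""
    context = "\n\n".join(retrieve(question, chunks, embed))
    prompt = (
        "Answer strictly from the context below. If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)

The grounding instruction in the prompt is one simple lever for the accuracy and hallucination goals listed under Tasks; retrieval depth (k), chunking, and context ordering are the others this role would tune.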
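Also for illustration, a sketch of the structured extraction/classification work mentioned in the first Task: the model is asked for JSON with a fixed set of keys and the reply is validated before use. The field names (issuer, symbol, asset_class) are invented for this example and are not taken from Particula’s products; generate() is the same hypothetical LLM callable as in the sketch above.

# Illustrative sketch: structured entity extraction with a fixed key contract.
# Field names are invented for the example; generate() is a hypothetical LLM callable.
import json
from typing import Callable, Dict

EXTRACTION_PROMPT = (
    "Extract the issuer name, token symbol and asset class from the document below. "
    'Reply with JSON only, using exactly the keys "issuer", "symbol" and "asset_class". '
    "Use null for anything not stated.\n\nDocument:\n{document}"
)

REQUIRED_KEYS = {"issuer", "symbol", "asset_class"}

def extract_entities(document: str, generate: Callable[[str], str]) -> Dict[str, object]:
    """Call the model, parse its JSON reply, and enforce the expected keys."""
    raw = generate(EXTRACTION_PROMPT.format(document=document))
    data = json.loads(raw)  # raises ValueError on malformed output; retry or repair in production
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"Model reply is missing keys: {sorted(missing)}")
    return {key: data[key] for key in REQUIRED_KEYS}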
Resume Suggestions
Highlight relevant experience and skills that match the job requirements to demonstrate your qualifications.
Quantify your achievements with specific metrics and results whenever possible to show impact.
Emphasize your proficiency in relevant technologies and tools mentioned in the job description.
Showcase your communication and collaboration skills through examples of successful projects and teamwork.