Key Responsibilities
- Design, develop, and deploy Generative AI solutions (LLMs, multimodal models, etc.) for enterprise use cases.
- Fine-tune and optimize large language models from commercial and open-source ecosystems (e.g., OpenAI, Anthropic, Llama, Mistral, Hugging Face).
- Develop end-to-end AI pipelines including data preprocessing, model training, evaluation, and deployment.
- Build AI-powered applications (chatbots, copilots, knowledge assistants, automation tools).
- Collaborate with cross-functional teams (consultants, data engineers, cloud architects) to integrate AI solutions into enterprise workflows.
- Ensure security, scalability, and performance of deployed AI systems.
- Stay current with the latest advancements in GenAI, Retrieval-Augmented Generation (RAG), and prompt engineering (a minimal RAG sketch follows this list).
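For context on the RAG work referenced above, here is a minimal, illustrative sketch of the retrieval-and-grounding step: a small document set is embedded, the closest chunks are retrieved for a question, and a grounded prompt is assembled for whichever LLM the solution uses. The embedding model name, sample documents, and helper functions are assumptions for illustration only, not a description of any specific Hadron GBS stack.

```python
# Illustrative RAG retrieval sketch (assumes sentence-transformers and faiss-cpu
# are installed; the documents, model name, and helpers are placeholders).
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Travel expenses are reimbursed when booked through the internal portal.",
    "Support tickets are triaged within four business hours.",
    "All production deployments require a change-approval record.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small open-source embedder
doc_vectors = embedder.encode(documents, convert_to_numpy=True).astype("float32")

index = faiss.IndexFlatL2(doc_vectors.shape[1])  # exact L2 search over embeddings
index.add(doc_vectors)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents closest to the question in embedding space."""
    q = embedder.encode([question], convert_to_numpy=True).astype("float32")
    _, ids = index.search(q, k)
    return [documents[i] for i in ids[0]]

def build_prompt(question: str) -> str:
    """Assemble a grounded prompt; an LLM call (GPT, Claude, Llama, etc.)
    would consume this prompt in a real chatbot or copilot."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    print(build_prompt("How quickly are support tickets handled?"))
```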
Required Skills & Experience
- 5–8 years of overall experience in software engineering and AI/ML.
- Strong programming skills in Python, plus experience with frameworks such as PyTorch and TensorFlow.
- Proven experience with LLMs (GPT, Llama, Claude, Mistral, Falcon, etc.).
- Hands-on expertise with LangChain, LlamaIndex, RAG frameworks, and vector databases (Pinecone, Weaviate, FAISS, Milvus).
- Experience with cloud AI platforms: Azure AI, AWS SageMaker, GCP Vertex AI.
- Experience in building and deploying API-based AI services and microservices (see the sketch after this list).
- Strong understanding of data engineering workflows (ETL, pipelines, unstructured text handling).
- Familiarity with MLOps & CI/CD for AI (MLflow, Docker, Kubernetes).
- Strong problem-solving skills and the ability to translate business requirements into AI solutions.
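As a point of reference for the API-based AI services requirement above, the sketch below shows one common shape for such a service: a small FastAPI microservice wrapping a model call behind a single endpoint. The endpoint path, request/response fields, and the stubbed answer_with_model helper are hypothetical; a real deployment would swap in the chosen LLM backend and apply the security, scaling, and MLOps/CI-CD practices listed above.

```python
# Illustrative API-based AI microservice sketch (FastAPI + pydantic).
# The model call is stubbed so the example runs without any credentials.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="GenAI assistant service")

class AskRequest(BaseModel):
    question: str

class AskResponse(BaseModel):
    answer: str

def answer_with_model(question: str) -> str:
    """Placeholder for the actual model call (e.g., an LLM hosted on Azure AI,
    SageMaker, or Vertex AI)."""
    return f"(stub) You asked: {question}"

@app.post("/ask", response_model=AskResponse)
def ask(request: AskRequest) -> AskResponse:
    # A production handler would add authentication, validation, and logging.
    return AskResponse(answer=answer_with_model(request.question))

# Run locally with: uvicorn service:app --reload
# Containerizing this app (Docker) and deploying it on Kubernetes lines up with
# the MLOps and CI/CD expectations above.
```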
Disclaimer:
Please note: Fraudulent job postings and job scams are increasingly common. Beware of misleading advertisements and fraudulent communications issuing 'offer letters' on behalf of Hadron GBS in exchange for a fee. Please look for an authentic Hadron GBS email ID – [email protected].
Stay vigilant. Protect yourself from recruitment fraud!
To know more, please visit: https://www.hadrongbs.com/career/
To apply for this job, email your details to info@hadrongbs.com

