Hi, I'm

Utsha Ghosh

Full-Stack Software Engineer with experience developing scalable, high-performance web applications across diverse domains. Strong focus on clean architecture, user experience, and building reliable, data-driven solutions that drive business impact.

Full-Stack Engineer

React.js · Next.js · TypeScript · JavaScript (ES6+) · Node.js · Express.js · GraphQL · PostgreSQL · MongoDB

AI Engineer

Python · LangChain · OpenAI API · Vector Databases · RAG Pipelines · Prompt Engineering

DevOps & Cloud

AWS · Docker · CI/CD · GitHub Actions · Redis · Load Balancing · CloudWatch

Learning Articles

Hugging Face

Hugging Face is one of the most widely used platforms for working with transformer-based models such as BERT, GPT, T5, LLaMA, and Whisper. It provides libraries such as Transformers, Datasets, and Evaluate that make it easy to train, fine-tune, deploy, and share machine learning models.

What I'm Learning

I am currently learning how to use Hugging Face pipelines, tokenizers, model loading, and dataset preparation to build NLP and multimodal AI applications. I am also exploring the Hub for open-source models and learning how to push my own models to it.
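As a minimal sketch of the pipeline API (requires `pip install transformers`; the checkpoint name is only an example and is downloaded from the Hub on first run):

```python
# Minimal Hugging Face pipeline sketch: load a small sentiment classifier
# and run inference. The model name is an example checkpoint, not a requirement.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

result = classifier("Hugging Face makes transformers easy to use.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': ...}]
```

The same `pipeline()` call works for other tasks ("summarization", "automatic-speech-recognition", and so on) by swapping the task string and checkpoint.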

Model Microservices

A Model Microservice is a dedicated backend service built specifically to run inference for an AI model. Instead of bundling the model inside the main app, it runs independently—with its own API endpoints, scaling strategy, and resource usage (GPU/CPU).

What I'm Learning

I am learning how to build model inference microservices using Python, FastAPI, Docker, and GPU-backed instances. The goal is to create scalable LLM and ML inference endpoints that can support batch requests, streaming outputs, load balancing, and monitoring.

Retraining Models Using Hugging Face

Retraining (fine-tuning) a model on Hugging Face means taking a pretrained model and adapting it to a new dataset or domain-specific task. This can drastically improve accuracy on specialized tasks such as support-query handling, sentiment analysis, classification, embeddings, or chat behavior.

What I'm Learning

I am studying how to fine-tune transformer models using Trainer, custom training loops, evaluation metrics, and dataset tokenization pipelines. This includes learning about hyperparameters (epochs, learning rate, batch size), checkpointing, pushing models to the Hugging Face Hub, and setting up automated retraining workflows using microservices.
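A rough sketch of that setup (`pip install transformers torch`); the base checkpoint is illustrative, the datasets are assumed to be tokenized already, and nothing heavy runs until `fine_tune()` is called:

```python
# Sketch of fine-tuning with Hugging Face's Trainer API.
# Hyperparameters are collected up front: epochs, learning rate, batch size.
HYPERPARAMS = {
    "num_train_epochs": 3,                # full passes over the training set
    "learning_rate": 2e-5,                # typical range for BERT-style fine-tuning
    "per_device_train_batch_size": 16,
}

def fine_tune(train_dataset, eval_dataset, output_dir="checkpoints"):
    """Illustrative fine-tuning run; datasets must already be tokenized."""
    from transformers import (
        AutoModelForSequenceClassification,
        Trainer,
        TrainingArguments,
    )
    args = TrainingArguments(
        output_dir=output_dir,
        save_strategy="epoch",            # checkpoint at the end of every epoch
        **HYPERPARAMS,
    )
    model = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=2  # example base checkpoint
    )
    trainer = Trainer(model=model, args=args,
                      train_dataset=train_dataset, eval_dataset=eval_dataset)
    trainer.train()
    trainer.save_model(f"{output_dir}/final")  # can also push_to_hub() from here
    return trainer
```

From the saved checkpoint, `trainer.push_to_hub()` publishes the fine-tuned model to the Hugging Face Hub.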

LLMs (Large Language Models)

Large Language Models are advanced neural networks trained on massive datasets that enable machines to understand, generate, and reason with human language. They power applications like chatbots, document summarization, code generation, and knowledge retrieval.

What I'm Learning

I'm learning how LLMs process tokens, how attention mechanisms and embeddings work, and how models can be optimized or fine-tuned for domain-specific tasks. Understanding LLMs is crucial because they form the foundation of modern AI applications and intelligent automation.
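The attention mechanism at the core of these models can be illustrated with a few lines of NumPy (tiny toy shapes, random vectors standing in for token embeddings):

```python
# Toy scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
# Each output row is a weighted mix of the value rows.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # rows sum to 1
    return weights @ V, weights

# Three 4-dimensional vectors standing in for a tiny token sequence.
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
out, weights = attention(Q, K, V)
print(weights.round(2))
```

Real transformers stack many such attention layers (with learned projections and multiple heads), but the weighted-mixing idea is the same.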

LLD (Low-Level Design)

Low-Level Design focuses on creating class diagrams, method structures, data models, and object interactions. It converts high-level architecture into technical blueprints that developers can implement directly.

What I'm Learning

I'm learning LLD to improve my ability to design maintainable systems using OOP principles, design patterns, the SOLID principles, and clean code practices. Strong LLD skills reduce bugs, improve scalability, and ensure long-term system stability.
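As a small example of the kind of design this enables, here is a sketch of the Strategy pattern keeping pricing rules open for extension but closed for modification (the "O" in SOLID); the class names are illustrative:

```python
# Strategy pattern sketch: Checkout depends on the DiscountStrategy
# abstraction, so new discount rules can be added without touching it.
from abc import ABC, abstractmethod

class DiscountStrategy(ABC):
    @abstractmethod
    def apply(self, amount: float) -> float: ...

class NoDiscount(DiscountStrategy):
    def apply(self, amount: float) -> float:
        return amount

class PercentageDiscount(DiscountStrategy):
    def __init__(self, percent: float):
        self.percent = percent

    def apply(self, amount: float) -> float:
        return amount * (1 - self.percent / 100)

class Checkout:
    """Depends only on the abstraction, not on concrete discount rules."""
    def __init__(self, strategy: DiscountStrategy):
        self.strategy = strategy

    def total(self, amount: float) -> float:
        return self.strategy.apply(amount)

print(Checkout(PercentageDiscount(10)).total(200))  # 180.0
```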

HLD (High-Level Design)

High-Level Design describes the overall architecture of a system—covering modules, services, APIs, databases, caching layers, load balancing, security, and integration points.

What I'm Learning

I am studying HLD to understand how enterprise-level systems scale to millions of users while staying highly available and fault tolerant. Mastering HLD helps me build distributed systems, microservices, and SaaS applications that are modular, efficient, and resilient.
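One recurring building block in these designs is the cache-aside pattern: check a fast cache first, fall back to the database, then populate the cache. A sketch in plain Python, where a dict with timestamps stands in for Redis and `query_db` is a hypothetical loader:

```python
# Cache-aside sketch: a TTL'd in-process dict stands in for Redis.
# query_db() is a placeholder for a real database lookup.
import time

CACHE: dict[str, tuple[float, dict]] = {}
TTL_SECONDS = 60.0

def query_db(user_id: str) -> dict:
    # Placeholder for a real database query.
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id: str) -> dict:
    entry = CACHE.get(user_id)
    if entry is not None:
        stored_at, value = entry
        if time.monotonic() - stored_at < TTL_SECONDS:
            return value              # cache hit: skip the database
    value = query_db(user_id)         # cache miss (or expired): load fresh
    CACHE[user_id] = (time.monotonic(), value)
    return value

print(get_user("42"))  # first call misses; later calls within the TTL hit the cache
```

In a distributed deployment the dict becomes a shared cache like Redis, so every service instance behind the load balancer sees the same entries.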

OOP Concepts

Object-Oriented Programming concepts like Encapsulation, Abstraction, Inheritance, and Polymorphism form the backbone of modern software engineering. They help organize code in a way that is reusable, modular, and easier to maintain.

What I'm Learning

I am reviewing OOP principles to strengthen my low-level design skills and to write cleaner architectures. A solid understanding of OOP ensures better system extensibility, easier debugging, and more professional software engineering practices.
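All four pillars fit in one compact sketch (the `Animal`/`Dog`/`Cat` names are purely illustrative):

```python
# The four OOP pillars in one place: abstraction (ABC), encapsulation
# (property over a protected attribute), inheritance, and polymorphism.
from abc import ABC, abstractmethod

class Animal(ABC):                      # Abstraction: abstract base class
    def __init__(self, name: str):
        self._name = name               # Encapsulation: internal state

    @property
    def name(self) -> str:
        return self._name

    @abstractmethod
    def speak(self) -> str: ...

class Dog(Animal):                      # Inheritance
    def speak(self) -> str:
        return f"{self.name} says woof"

class Cat(Animal):
    def speak(self) -> str:
        return f"{self.name} says meow"

# Polymorphism: one loop, different behavior per concrete type.
for animal in (Dog("Rex"), Cat("Mia")):
    print(animal.speak())
```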

Experience

Springboard Incubators Inc.

Software Engineer

May 2023 - Present

Built and scaled an AI-powered e-learning platform for 100K+ users, delivering secure video streaming, responsive dashboards, and real-time learning analytics.

  • Scaled platform from 5K to 100K+ users, enhancing performance and reliability
  • Architected adaptive bitrate video streaming using AWS S3 and CloudFront, reducing buffering by 40%
  • Built AI-powered learning assistant using LangChain and OpenAI API
  • Developed React.js + TypeScript dashboards improving page load speed by 35%
  • Reduced deployment times from hours to under 15 minutes via CI/CD

Boneflare Wellness

Full Stack Engineer

May 2020 - Dec 2021

Developed full-stack healthcare analytics tools for real-time patient monitoring, improving data performance and deployment efficiency.

  • Built responsive React + TypeScript dashboard for healthcare analytics
  • Refactored legacy JavaScript into modular TypeScript components, reducing bugs by 35%
  • Optimized API workflows and caching, cutting latency by 30%
  • Implemented CI/CD pipelines with GitHub Actions, reducing deployment time by 40%

Atoll Solutions

Software Engineer

June 2018 - April 2020

Engineered scalable geospatial tracking systems for 300K+ devices, optimizing APIs and dashboards for high performance.

  • Scaled asset tracking system from 2K to 300K+ active users
  • Re-architected Google Maps API integrations, reducing latency by 60%
  • Migrated to Node.js microservices with Redis caching achieving sub-200ms response time
  • Built interactive D3.js and Recharts dashboards for 100K+ tracked devices

Projects

Chrome Extension

Prompt Enhancer

Chrome extension integrated directly into the ChatGPT textbox, providing one-click prompt enhancement with improved grammar, clarity, and context. Includes an inline enhance button, a side-by-side preview, and an optional "use history" toggle.

Next.js · Node.js/Express · MongoDB · Redis · OpenAI API · AWS
View on GitHub

Contact Me

Let's build something impactful together. Feel free to reach out!