saas · 15 min · 2026-03-23 · Last updated: April 9, 2026
I Built a Hootsuite Competitor Solo
How I built Sellanto — a social media SaaS with 5 AI models — as a solo developer. Architecture, tech stack, and lessons learned.
TL;DR: Sellanto is a social media management SaaS built by a solo developer using Django, React, Celery, PostgreSQL, and five AI models (content generation, image analysis, engagement prediction, sentiment analysis, and hashtag optimization). The codebase is 50,000+ lines across 100+ API endpoints and 40+ database tables. Key architectural decisions: monolith over microservices, a platform abstraction layer for social media APIs, Celery priority queues for scheduled publishing, and Django Channels for real-time WebSocket updates.
Short answer: I built Sellanto — a social media management platform that competes with Hootsuite and Buffer — as a single developer using Django, React, Celery, and five AI models. It took months of focused work, hundreds of architectural decisions, and a willingness to build things the hard way when no shortcut existed. Here is the full technical story.
This is not a motivational post about hustle. This is a technical deep-dive into how a solo developer architected, built, and shipped a SaaS platform that manages social media accounts, schedules posts across multiple platforms, generates AI-powered content, and provides analytics — all from a one-person engineering team.
Why Build a Hootsuite Competitor?
I was not trying to dethrone Hootsuite. I was solving a problem I kept seeing with clients: small businesses and agencies needed social media management, but existing tools were either too expensive, too bloated, or missing the AI-powered content capabilities that were becoming essential.
The gap I saw was specific: a platform that combined scheduling, multi-platform management, and genuine AI content generation — not the superficial "AI-assisted" features that most tools had bolted on as marketing checkboxes. I wanted AI that actually understood brand voice, generated platform-specific content, and learned from engagement data.
That gap became Sellanto.
The Tech Stack Decision
Choosing the right stack as a solo developer is a survival decision, not a preference decision. You do not pick the trendiest tools. You pick the tools that let one person do the work of a team.
Backend: Django + Django REST Framework
I chose Django for reasons that are boring but critical:
- Batteries included. Auth, admin, ORM, migrations, sessions, CSRF protection — all built in. As a solo developer, every feature I do not build myself is time saved.
- The Django admin. This alone saved weeks of development. Instead of building a full internal admin panel, I used Django's admin interface for user management, content moderation, subscription management, and debugging. For a solo operation, the admin panel is your operations dashboard.
- Mature ecosystem. Django has battle-tested packages for everything:
django-allauth for social auth, django-celery-beat for periodic tasks, django-cors-headers for API access, djangorestframework for the API layer. These are not experimental packages — they have been in production for years.
- Python for AI. This mattered enormously. All five AI models I integrated have Python SDKs as their primary interface. Using Django meant no language boundary between my web backend and my AI pipeline.
I considered FastAPI for its async performance, but the ecosystem gap was too large. I would have spent weeks rebuilding what Django gives you for free. For a solo developer, development speed matters more than request-per-second benchmarks.
Frontend: React + TypeScript
The frontend is a React SPA with TypeScript, communicating with the Django backend through a REST API. I chose React because:
- Component reusability. The social media post composer, the scheduling calendar, the analytics charts — these components get reused in multiple contexts throughout the app.
- State management. Social media management involves complex state — draft posts, scheduled posts, connected accounts, real-time publishing status. React's ecosystem (I used Zustand for state management) handles this well.
- TypeScript. As a solo developer, TypeScript is your pair programmer. When you do not have a team to catch mistakes in code review, the type system catches them at compile time.
Task Queue: Celery + Redis
This is where the architecture gets interesting. Social media management is inherently asynchronous. You do not publish a post to five platforms and wait for all five API calls to complete before responding to the user. You queue it.
Celery handles:
- Scheduled post publishing. The core feature. Posts are scheduled with a future publish time, and Celery Beat triggers the publishing task at the right moment.
- Multi-platform publishing. A single post might need to go to Instagram, Twitter/X, Facebook, LinkedIn, and TikTok. Each platform is a separate Celery task, running in parallel, with independent retry logic.
- AI content generation. Generating content with AI models takes 2-30 seconds depending on the model and task. These run as background tasks so the UI stays responsive.
- Analytics aggregation. Pulling engagement data from platform APIs, processing it, and updating dashboards. This runs as periodic tasks every few hours.
- Webhook processing. Platform callbacks for post status, comment notifications, and engagement events.
Redis serves double duty as the Celery broker and as a caching layer for frequently accessed data (user session data, platform tokens, rate limit counters).
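The rate-limit counters are a good example of what Redis holds for the workers. Here is a minimal fixed-window counter sketch illustrating the pattern; an in-memory dict stands in for Redis INCR/EXPIRE, and the key scheme is illustrative, not Sellanto's actual code:

```python
import time

class FixedWindowRateLimiter:
    """Fixed-window rate limiter sketch. In production this state would
    live in Redis (INCR + EXPIRE) so every Celery worker shares it; a
    plain dict stands in for Redis here."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self._counters = {}  # (key, window_number) -> count

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        bucket = (key, int(now // self.window))  # one counter per window
        count = self._counters.get(bucket, 0)
        if count >= self.limit:
            return False  # caller backs off and re-queues the task
        self._counters[bucket] = count + 1
        return True
```

A key like `"twitter:account-42"` gives each connected account its own budget, which is how platform limits are actually enforced per account rather than globally.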
The Database: PostgreSQL
PostgreSQL was the only serious option. The data model for a social media management platform is relational at its core — users have accounts, accounts have platforms, platforms have posts, posts have media, posts have analytics, posts have comments. These relationships need referential integrity and complex queries that relational databases handle naturally.
I use PostgreSQL-specific features heavily:
- JSONB columns for storing platform-specific post metadata (each platform returns different data structures)
- Array fields for tags and category lists
- Full-text search for searching across post content and analytics
- Database-level constraints for data integrity that the application layer should not be solely responsible for
The AI Architecture: Five Models, One Pipeline
This is the part that sets Sellanto apart from tools that slapped a ChatGPT wrapper on a text field and called it "AI-powered."
The Five AI Models and Their Roles
Model 1: Content Generation (GPT-4 class). The primary content creation engine. Takes a brief, brand voice guidelines, and target platform, then generates post content tailored to each platform's character limits, hashtag conventions, and content style. A LinkedIn post and an Instagram caption for the same topic should read completely differently — this model handles that.
Model 2: Image Understanding and Description (Vision model). When users upload images, this model analyzes the visual content and generates relevant captions, hashtag suggestions, and alt text. This is not generic image captioning — it is contextualized within the user's brand and content strategy.
Model 3: Engagement Prediction (Custom fine-tuned model). A smaller model trained on engagement data that predicts how a post will perform based on content, timing, platform, and historical account performance. It provides a score and suggestions for improving predicted engagement before publishing.
Model 4: Sentiment Analysis and Brand Safety. Screens generated and user-written content for tone, sentiment, and potential brand safety issues. Catches things like accidentally negative phrasing, controversial topic proximity, or tone mismatches with brand guidelines.
Model 5: Hashtag and Keyword Optimization. Specialized in platform-specific discovery — hashtag research for Instagram, keyword optimization for LinkedIn, trending topic alignment for Twitter/X. This model pulls from real-time trend data to suggest timely, relevant tags.
The AI Pipeline Architecture
These models do not run independently. They form a pipeline:
User Input (brief/image/topic)
  → Content Generation (Model 1)
  → Image Analysis (Model 2, if media attached)
  → Hashtag/Keyword Optimization (Model 5)
  → Sentiment & Brand Safety Check (Model 4)
  → Engagement Prediction (Model 3)
  → Final Content (with scores and suggestions)
Each step is a Celery task. If the engagement prediction score is below a threshold, the system can loop back to content generation with feedback for a second attempt. The user sees the process happening in real-time through WebSocket updates — content appearing, then refinements, then the final scored version.
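The loop-back behavior is easier to see as code. Here is a sketch with plain functions standing in for the Celery tasks; the function names, threshold, and retry count are illustrative, not the production values:

```python
# Pure-function stand-ins for the pipeline's Celery tasks. The real system
# chains tasks through the broker, but the control flow, including the
# loop-back when the engagement score is low, is the same shape.

THRESHOLD = 0.6  # illustrative cutoff

def run_pipeline(brief, generate, optimize_tags, safety_check, predict,
                 max_attempts=2):
    feedback = None
    for _ in range(max_attempts):
        draft = generate(brief, feedback)   # Model 1: content generation
        draft = optimize_tags(draft)        # Model 5: hashtags/keywords
        safety_check(draft)                 # Model 4: raises on a violation
        score = predict(draft)              # Model 3: engagement prediction
        if score >= THRESHOLD:
            return draft, score
        # Feed the low score back into generation for a second attempt
        feedback = f"predicted engagement {score:.2f}; needs a stronger hook"
    return draft, score  # best effort after max_attempts
```

The image-analysis step (Model 2) is omitted here for brevity; it slots in between generation and tag optimization when media is attached.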
Managing AI Costs
Five models per content generation request gets expensive fast. Cost management strategies:
- Tiered model selection. Not every post needs the most expensive model. Quick social posts use faster, cheaper models. Long-form content uses premium models.
- Caching. Hashtag suggestions for common topics are cached. Brand voice analysis is computed once and cached per account.
- Batching. When users schedule a week of content at once, generation tasks are batched to reduce API overhead.
- User-facing limits. Free tier gets limited AI generations. Paid tiers get progressively more. This is both a monetization strategy and a cost control mechanism.
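Tiered selection is just a routing policy in front of the model calls. A sketch, with made-up model names and thresholds:

```python
# Illustrative tiering policy. The model identifiers and the 280-character
# cutoff are assumptions for the example, not Sellanto's configuration.
def select_model(task, target_length):
    if task == "hashtags":
        return "small-fast-model"   # cheap, and results cache well
    if task == "caption" and target_length <= 280:
        return "mid-tier-model"     # quick social posts
    return "premium-model"          # long-form or complex briefs
```

The win is that the policy lives in one function, so adjusting the cost/quality trade-off never means touching the call sites.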
Five-layer architecture — PostgreSQL, Django, Celery, AI services, React frontend
The Platform Integration Challenge
Connecting to social media platform APIs is the most frustrating part of building a tool like this. Each platform has its own:
- OAuth flow (and they all implement it slightly differently)
- Rate limits (and they all enforce them differently)
- Content format requirements
- Media upload procedures
- API versioning and deprecation schedules
The Abstraction Layer
I built a platform abstraction layer — an internal API that normalizes the interface to each social platform. The rest of the application never talks directly to the Twitter API or the Instagram Graph API. It talks to my abstraction layer, which handles:
- Authentication. Token storage, refresh flows, re-authentication prompts when tokens expire.
- Content formatting. Converting a universal post object into platform-specific formats (character limits, image size requirements, video duration limits).
- Publishing. Unified publish interface with platform-specific implementation details hidden behind it.
- Error handling. Each platform fails differently. The abstraction layer normalizes errors into a consistent format: transient (retry), permanent (notify user), or auth-related (request re-authentication).
- Rate limiting. Platform-specific rate limit tracking with automatic backoff and queue management.
This abstraction layer is probably the most valuable piece of code in the entire codebase. It took weeks to build properly, but it means adding a new platform is now a matter of implementing a single adapter class rather than touching code throughout the application.
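The shape of that layer, reduced to its essentials: a small error taxonomy plus an abstract adapter that every platform implements. This is a simplified sketch, not the production code, and the fake adapter exists only to show the contract:

```python
from abc import ABC, abstractmethod
from enum import Enum

class FailureKind(Enum):
    TRANSIENT = "transient"   # retry with backoff
    PERMANENT = "permanent"   # notify the user
    AUTH = "auth"             # prompt re-authentication

class PublishError(Exception):
    """Every platform's failure modes get normalized into this."""
    def __init__(self, kind, detail):
        super().__init__(detail)
        self.kind = kind

class PlatformAdapter(ABC):
    """The rest of the app sees only this interface, never a raw API."""

    @abstractmethod
    def format_post(self, post):
        """Convert a universal post dict into the platform's format."""

    @abstractmethod
    def publish(self, formatted):
        """Return the platform's post ID, or raise PublishError."""

class FakeTwitterAdapter(PlatformAdapter):
    MAX_CHARS = 280  # platform-specific rule lives inside the adapter

    def format_post(self, post):
        return {"text": post["body"][: self.MAX_CHARS]}

    def publish(self, formatted):
        if not formatted["text"]:
            raise PublishError(FailureKind.PERMANENT, "empty post")
        return "tweet-123"
```

Adding a platform means writing one more subclass; the retry logic, user notifications, and re-auth flows key off `FailureKind` and never change.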
The OAuth Nightmare
Every developer who has integrated with social media APIs knows this pain. OAuth flows break. Tokens expire at unpredictable times. Platforms change their API requirements with minimal notice. Instagram requires a Facebook Business page. LinkedIn deprecated their v1 API. Twitter/X overhauled their API tiers and pricing.
My solution was aggressive error handling and a "trust nothing" approach. Every API call is wrapped in retry logic. Every token is validated before use. Every failed operation creates a user-visible notification explaining what happened and how to fix it (usually re-authenticating the account).
The Scheduling Engine
The scheduling system is the heart of the product. It sounds simple — "post this thing at that time" — but the complexity is in the details.
Time Zone Handling
Users are in different time zones. Their audiences are in different time zones. The server runs in UTC. Posts need to publish at the correct local time for the target audience, which might differ from the user's local time.
Every scheduled post stores three time references:
- UTC publish time — what the server uses to trigger the task
- User's local time — for display in the user's dashboard
- Optimal audience time — calculated from engagement analytics for that account
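Converting between those references is plain standard-library work with zoneinfo. A sketch (the function name is mine):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def to_utc_publish_time(local_dt, audience_tz):
    """Interpret a naive wall-clock time in the audience's zone and
    convert it to UTC, which is what the scheduler actually stores
    and what Celery Beat triggers against."""
    aware = local_dt.replace(tzinfo=ZoneInfo(audience_tz))
    return aware.astimezone(ZoneInfo("UTC"))
```

Using IANA zone names rather than fixed offsets matters: the conversion stays correct across daylight-saving transitions, which is exactly when naive offset math silently publishes an hour off.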
Queue Priority and Reliability
Not all scheduled posts are equal. A post scheduled for "right now" needs higher priority than a post scheduled for next Tuesday. Posts hitting their publish time need to jump the queue ahead of analytics processing or AI generation tasks.
Celery priority queues handle this:
- Critical queue: Immediate and time-sensitive publishing tasks
- Standard queue: Scheduled publishing within its time window
- Background queue: Analytics, AI generation, cleanup tasks
Each publishing task has retry logic — if the platform API returns a transient error, the task retries with exponential backoff up to five times. If it fails permanently, the user is notified with the specific error and the post is moved to a "failed" state where they can retry manually or edit and reschedule.
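The backoff schedule reduces to a one-liner. The base delay and cap below are illustrative defaults, not the production settings:

```python
MAX_RETRIES = 5

def retry_delay(attempt, base=30.0, cap=900.0):
    """Delay before retry N (0-indexed): 30s, 60s, 120s, ... capped
    at 15 minutes so a long outage does not push retries out forever."""
    return min(base * (2 ** attempt), cap)
```

In Celery terms this is the value passed as `countdown` when a task calls `self.retry()` on a transient `PublishError`.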
The "Bulk Schedule" Problem
One feature that users love but was technically challenging: bulk scheduling. Upload a CSV or connect a content calendar, and schedule 50-200 posts at once. This creates hundreds of Celery tasks simultaneously, each with different scheduled times, different platforms, and different content.
The solution was a two-phase approach. Phase one creates all post records in the database in a single transaction. Phase two spawns Celery tasks for the next 24 hours of posts only, with a periodic task that looks ahead and spawns tasks for upcoming posts. This prevents overwhelming the task queue with hundreds of tasks that will not execute for weeks.
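Phase two is essentially a windowed query that a periodic task runs. A sketch of the look-ahead selection, with dicts standing in for database rows:

```python
from datetime import datetime, timedelta

def posts_to_enqueue(scheduled, now, horizon=timedelta(hours=24)):
    """Phase two of bulk scheduling: from all stored posts, pick only
    those due within the look-ahead window that have not already been
    turned into Celery tasks. `scheduled` stands in for the posts table."""
    cutoff = now + horizon
    return [
        p["id"]
        for p in scheduled
        if not p["enqueued"] and now <= p["publish_at"] <= cutoff
    ]
```

The periodic task runs well inside the horizon (say, hourly), so a post scheduled weeks out is picked up on some run the day before it is due, and the queue only ever holds a day's worth of work.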
The Hardest Technical Challenges
Challenge 1: Real-Time UI Updates
When a user publishes a post to five platforms simultaneously, they need to see real-time progress — "Publishing to Instagram... Success. Publishing to LinkedIn... Success. Publishing to Twitter... Failed (rate limit). Retrying..."
Django is not naturally real-time. I used Django Channels with WebSockets for live updates. Each publishing task sends status updates through a WebSocket channel that the React frontend subscribes to. The complexity is in managing the connection lifecycle — what happens when the user navigates away, loses internet, or has the app open in multiple tabs.
Challenge 2: Media Processing
Social media platforms have wildly different media requirements. Instagram wants square or specific aspect ratio images. Twitter has different image size limits. LinkedIn has its own video format requirements. TikTok wants vertical video.
The media processing pipeline handles:
- Image resizing and cropping for each platform
- Video transcoding for platform-specific requirements
- Thumbnail generation
- File size optimization
- Format conversion (HEIC to JPEG, for example)
This runs on Celery workers with FFmpeg for video processing and Pillow for image manipulation. It is resource-intensive and was one of the main drivers behind choosing appropriately sized server instances.
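Per-platform resizing starts from the same aspect-ratio math everywhere. A sketch of the bounding-box fit whose result gets handed to Pillow's resize; the platform limits shown are illustrative and go stale fast:

```python
def fit_dimensions(width, height, max_w, max_h):
    """Scale an image down to fit a platform's bounding box while
    preserving aspect ratio. Never upscales: a small image is left alone
    rather than blurred up to the limit."""
    scale = min(max_w / width, max_h / height, 1.0)
    return round(width * scale), round(height * scale)

# Illustrative per-platform limits; real limits vary and change over time.
PLATFORM_LIMITS = {
    "instagram_square": (1080, 1080),
    "twitter_inline": (1600, 900),
}
```

Cropping to a forced aspect ratio (Instagram square, TikTok vertical) is a separate step layered on top of this, but the fit calculation is the common core.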
Challenge 3: Data Consistency Across Platforms
When a post is published to five platforms, you have six sources of truth — your database and five platform APIs. Post IDs, engagement metrics, comment counts, and share counts all need to stay synchronized. But platform APIs have different update frequencies, rate limits, and data freshness guarantees.
I settled on eventual consistency with a clear hierarchy: the Sellanto database is the primary record, and platform data is synchronized periodically. Users see a "last synced" timestamp so they understand the data freshness.
Challenge 4: Multi-Tenancy on a Solo Budget
Sellanto serves multiple users, each with their own accounts, posts, analytics, and AI-generated content. Proper multi-tenancy means data isolation — no user should ever see another user's data through any path, including API bugs or query mistakes.
I use Django's ORM with a mandatory tenant filter pattern. Every queryset in the application is scoped to the current user's organization. I wrote a custom middleware that injects the tenant context and a test suite that specifically checks for data leakage across tenants. This is the kind of thing that a solo developer cannot afford to get wrong — a single data leak would destroy trust permanently.
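The pattern matters more than the framework here. A stripped-down version of the mandatory-tenant idea, with an in-memory list standing in for the posts table; in the real app this is a custom Django manager plus middleware that injects the tenant context:

```python
class TenantRepository:
    """Every read path requires an explicit org_id, so a forgotten
    filter is a loud error at the call site, never a silent data leak."""

    def __init__(self, rows):
        self._rows = rows  # each row: {"org_id": ..., "body": ...}

    def posts_for(self, org_id):
        if org_id is None:
            raise ValueError("tenant context is required")
        return [r for r in self._rows if r["org_id"] == org_id]
```

The cross-tenant leakage tests then reduce to asserting that no query path exists which returns rows from two organizations at once.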
What I Would Do Differently
Use Next.js or a Similar Full-Stack Framework
The Django backend + separate React frontend architecture works, but managing two deployment pipelines, two build processes, and the API contract between them adds overhead. If I started over, I would seriously consider a full-stack JavaScript/TypeScript framework for the frontend with the Django backend as a pure API service. The developer experience improvement would be significant.
Start with Fewer Platforms
I launched with support for five social media platforms. I should have launched with two (Instagram and Twitter/X) and added platforms based on user demand. Each platform integration added weeks of development time and ongoing maintenance burden. Some platforms are used by ten percent of users but consume thirty percent of the maintenance time.
Invest in Automated Testing Earlier
I wrote tests, but not enough and not early enough. The social media API integrations are particularly fragile — platforms change their APIs, and without comprehensive integration tests running against sandbox environments, breakages can reach users before I catch them. I now have extensive tests, but building them after the fact cost more time than building them alongside the features would have.
Use Feature Flags from Day One
I added feature flags (using a simple database-backed system) after the first few months. They should have been there from the start. Being able to roll out features gradually, A/B test changes, and quickly disable problematic features is invaluable for a solo operation. When something breaks at 2 AM and you are the only engineer, a feature flag that turns off the broken component is the difference between a five-second fix and a two-hour emergency deployment.
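A database-backed flag system can be genuinely tiny. A sketch of the pattern, with a dict standing in for the flags table:

```python
class FeatureFlags:
    """Minimal database-backed flag pattern. Unknown flags default to
    off, so a missing row can never accidentally enable a half-finished
    feature."""

    def __init__(self, rows):
        self._rows = rows  # flag name -> enabled?

    def is_enabled(self, name):
        return self._rows.get(name, False)

    def disable(self, name):
        # The 2 AM kill switch: one row update, no deployment.
        self._rows[name] = False
```

The default-off behavior is the important design choice; a flag system that defaults to on turns every typo in a flag name into an accidental launch.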
Better Monitoring and Alerting
My initial monitoring was basic — server health checks and error logs. I should have implemented structured logging, distributed tracing (even for a single service, request tracing is valuable), and proactive alerting from the beginning. When you are the sole developer, you need the system to tell you about problems before users do.
The Solo Developer Survival Kit
Building a SaaS platform alone is as much about what you choose not to build as what you do build. Here are the tools and decisions that made it possible:
Third-Party Services I Rely On
- Stripe for payment processing and subscription management — building billing from scratch is a trap
- Resend for transactional email — deliverability is someone else's problem
- Sentry for error tracking — see errors before users report them
- Cloudflare for CDN and DDoS protection — essential infrastructure I do not want to manage
- GitHub Actions for CI/CD — automated testing and deployment on every push
Architecture Principles for Solo Developers
- Monolith first. Microservices are a team sport. A solo developer running a monolith can deploy, debug, and iterate faster than a solo developer managing five services.
- Boring technology. Django, PostgreSQL, Redis, Celery — none of these are exciting or trendy. All of them are proven, documented, and well-understood. When something breaks at midnight, the solution is a Stack Overflow search away.
- Automate deployments. If deploying requires more than a git push, you will avoid deploying, and avoiding deployment means features and fixes reach users slower.
- Log everything. You do not have a team to call when something goes wrong. Your logs are your team. Structured logging with context (user ID, request ID, action, outcome) makes debugging possible.
- Say no to features. Every feature you build is a feature you maintain. As a solo developer, your maintenance capacity is your most constrained resource. Build fewer things better.
The Numbers
Without revealing confidential business metrics, I can share directionally:
- Development time to MVP: Several months of full-time work
- Lines of code: 50,000+ (Python + TypeScript combined)
- Database tables: 40+
- API endpoints: 100+
- Celery task types: 30+
- Platform integrations: 5 social media platforms
- AI model integrations: 5
- Uptime since launch: 99.9%+
Frequently Asked Questions
Can a solo developer build a SaaS product?
Yes, but the technical challenge is not the hardest part — scope management is. A solo developer with the right stack (Django, React, PostgreSQL, Celery) can build a production SaaS with 50,000+ lines of code, 100+ API endpoints, and 99.9%+ uptime. The key is choosing boring, proven technology, automating deployments, and ruthlessly prioritizing features.
What is the best tech stack for building a SaaS as a solo developer?
Django + React + PostgreSQL + Celery + Redis is a proven combination for solo SaaS development. Django's batteries-included approach (auth, admin, ORM, migrations) saves weeks of development, Python gives direct access to AI/ML libraries, and Celery handles all background tasks. The Django admin panel alone eliminates the need to build a separate internal operations dashboard.
How long does it take to build a SaaS MVP?
A functional SaaS MVP typically takes several months of full-time focused work for a solo developer. Sellanto required building 40+ database tables, 100+ API endpoints, 30+ Celery task types, and 5 platform integrations. Starting with fewer platforms (2 instead of 5) and adding based on user demand would have reduced the initial timeline significantly.
How do you handle social media API integrations reliably?
Build a platform abstraction layer that normalizes authentication, content formatting, publishing, error handling, and rate limiting across all platforms. Every API call should be wrapped in retry logic with exponential backoff, every token validated before use, and every failure should create a user-visible notification. This architecture means adding a new platform requires implementing a single adapter class.
Is it better to use microservices or a monolith for a startup?
A monolith is better for solo developers and small teams. Microservices are a team sport — a solo developer running a monolith can deploy, debug, and iterate faster than managing five separate services with independent deployment pipelines. Start monolithic, and only extract services when specific scaling bottlenecks demand it.
How do you manage AI costs in a SaaS product?
Use tiered model selection (cheaper models for simple tasks, premium for complex content), cache repeated operations like hashtag suggestions and brand voice analysis, batch generation requests when users schedule content in bulk, and implement user-facing generation limits tied to subscription tiers. This serves as both a monetization strategy and a cost control mechanism.
Was It Worth Building Solo?
Yes — with a caveat. Building Sellanto alone meant I made every decision, understood every line of code, and could move fast without coordination overhead. The product reflects a single, coherent vision.
But it also meant being the developer, the designer, the DevOps engineer, the QA tester, the customer support agent, the technical writer, and the on-call engineer. Simultaneously. Every day.
The experience taught me more about full-stack development, system architecture, and product engineering than any team environment could have. It proved that a single developer with the right tools, the right stack, and the right priorities can build a real product that competes with well-funded teams.
If you are considering building a SaaS product solo, my advice is this: the technical challenge is not the hardest part. Scope management is. The temptation to add one more feature, support one more platform, or optimize one more component is constant. The developers who ship solo are the ones who ruthlessly prioritize and accept that version one will be imperfect.
Ship it imperfect. Then make it better. That is how Sellanto was built, and that is how it continues to evolve.
Mostafa Faysal
Systems developer who builds ecommerce platforms, business automation, and SaaS products. 15+ production systems shipped.
