This is the story of my almost year-long, still-ongoing project called Feedster: how it emerged, how it pivoted, and where I am so far.
It's a good example of how easily things can be overengineered, and how far descoping can go.
Solving my own problem
I've always been keen to build software around the inbox. The stream of incoming information seems endless, growing year after year, so at some point automated curation is bound to become a thing.
As a dedicated engineer, I want to stay on the edge of the tech and innovations in my niche (engineering at early-stage startups), which means I have to process a lot of information.
For instance, I subscribe to newsletters about AI, SaaS, indie hacking, React and other web development tooling, productivity tools, bootstrapped startups, and so on.
Summarization was the first, obvious way to reduce reading time.
That was the initial idea of how it should work
Moreover, not everything my sources publish applies to what I do. AI newsletters, for example, cover not only integrations applicable in software, but also design, chatbots, and other topics that may be interesting yet aren't really relevant to my work.
Just curating my inbox (picking what I want to read out of everything that comes in) takes me 5 to 15 minutes daily!
So in addition to summarization, I decided to declare a set of goal topics (basically prompts) and use them as a filter for all incoming information, an approach I call "goals-driven news reading".
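To make the idea concrete, here's a minimal sketch of how such a goal-driven filter could look. The goal list, prompt wording, and model name are my illustrative assumptions, not Feedster's actual code; the client object is expected to follow the OpenAI Python SDK's chat-completions interface.

```python
# Sketch of "goals-driven news reading": each goal topic becomes part of
# a yes/no relevance prompt for an LLM. Goals and model name are assumed.

GOALS = [
    "AI integrations I can ship in a web product",
    "Bootstrapping an early-stage SaaS as a solo engineer",
]

def build_filter_prompt(goals: list[str], item_title: str, item_text: str) -> str:
    """Turn the goal topics into a single relevance-check prompt."""
    goal_lines = "\n".join(f"- {g}" for g in goals)
    return (
        "You are a news filter. My reading goals are:\n"
        f"{goal_lines}\n\n"
        f"Article title: {item_title}\n"
        f"Article text: {item_text}\n\n"
        "Answer with exactly YES if the article serves at least one goal, "
        "otherwise exactly NO."
    )

def is_relevant(client, item_title: str, item_text: str) -> bool:
    """Ask the model; treat anything other than YES as 'skip this item'."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{
            "role": "user",
            "content": build_filter_prompt(GOALS, item_title, item_text),
        }],
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")
```

In practice `client` would be an `OpenAI()` instance from the `openai` package; keeping the prompt-building pure makes it easy to iterate on the wording in ChatGPT first.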
Development started as a pet project between friends; we just wanted to practice our tech skills on challenging tasks, but it got out of control really fast.
Initially, we wanted to make a news reader with an infinite-scroll feed and settings. AI wasn't on our plate yet; the idea was just to build an ordinary, "Facebook-like" newsfeed out of newsletters and RSS feeds.
The summarized newsfeed we designed in Figma
Later, when ChatGPT was released and the whole world went crazy about it, we came up with the "killer feature": what if GPT processed our feed and made it more efficient to consume?
The first solution was huge; it included every trendy piece of tech possible: an event-based frontend, GraphQL, caching, and even queue-based summarization/filtering in the background!
We deployed the application on a VPS as a set of Docker containers and even started talking about build automation. The system was ridiculously complex and produced a ton of bugs!
That's right: we had 0 customers, yet had almost established a CI/CD pipeline for our application.
Despite all the fancy tech, our prompts sucked, and the output wasn't really useful. It's strange how much we built around summarization while leaving the summarization itself to the very end of development.
The settings page
We worked on the application for more than half a year. We were a team of enthusiasts, and we even hired a UI designer to make all the layouts. Needless to say, we never talked to any kind of audience about it; all the marketing was postponed to "someday". At some point, I wasn't even sure we were building something convenient for ourselves as the target customers.
Eventually, we found ourselves burned out, not really believing in the idea anymore, and struggling with every tweak of the buggy, clumsy platform we had built. I decided to take a break, maybe even switch ideas.
However, after a while, I decided Feedster was worth trying, and I did a radical descope to make it happen in at least some form.
I threw away the UI, the queue, and the newsfeed, took Notion as the frontend, and built a super simple, synchronous daily digest builder. So far I generate my digest from my local machine in dev mode, and I'm not even going to deploy without a strong reason to. No more engineering for the sake of engineering!
To make my prompts work, I took the Prompt Engineering for Developers course and picked up the best practices (at least the simplest ones). Before implementing any processing, I tested my prompts a lot in ChatGPT, and only when they started working well enough did I jump into coding itself.
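For the curious, the "Notion as a frontend" part can be as small as turning digest items into Notion blocks and appending them to a page. This is a hedged sketch under my own assumptions (the item shape and env-var names are made up); the endpoint, headers, and block shapes follow Notion's public API:

```python
# Sketch: render a daily digest as Notion blocks and append them to a page.
# Item structure and NOTION_TOKEN env var are illustrative assumptions;
# the PATCH /v1/blocks/{id}/children endpoint is Notion's public API.
import datetime
import json
import os
import urllib.request

def digest_to_blocks(items: list[dict]) -> list[dict]:
    """items: list of {"title": str, "summary": str} dicts (assumed shape)."""
    today = datetime.date.today().isoformat()
    blocks = [{
        "object": "block",
        "type": "heading_2",
        "heading_2": {"rich_text": [
            {"type": "text", "text": {"content": f"Digest {today}"}},
        ]},
    }]
    for item in items:
        blocks.append({
            "object": "block",
            "type": "paragraph",
            "paragraph": {"rich_text": [
                {"type": "text",
                 "text": {"content": f"{item['title']}: {item['summary']}"}},
            ]},
        })
    return blocks

def publish(page_id: str, items: list[dict]) -> None:
    """Append the digest blocks to an existing Notion page."""
    req = urllib.request.Request(
        f"https://api.notion.com/v1/blocks/{page_id}/children",
        data=json.dumps({"children": digest_to_blocks(items)}).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['NOTION_TOKEN']}",
            "Notion-Version": "2022-06-28",
            "Content-Type": "application/json",
        },
        method="PATCH",
    )
    urllib.request.urlopen(req)
```

Keeping the block-building pure means the whole "frontend" is one function plus one HTTP call, which is exactly the kind of descope I'm after.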
The Notion-based MVP I made in 3 weeks (and it works!)
The thing is, I was able to build the filtering-and-summarization algorithm in just 3 weeks. It still has bugs and surfaces irrelevant info, but the small amount of code lets me tweak it fast and get immediate feedback in Notion.
The main takeaway: build only the core feature before any secondary infrastructure, UI, or optimization. If the core feature doesn't work, nothing else matters. It sounds so obvious, but it took me almost a year to reach that conclusion!
Long story short: if something can be cut from your MVP, it should be cut.
The next steps
I really believe I'm solving my own problem with this piece of software, and I also think I'm not the only one.
After being a hermit for those 3 weeks, I've come out with a few ideas on how to both test it for myself and show it off to other people.
My goal is to make it work at least for myself: I will tweak the algorithm, look for useful integrations (one with Reader seems interesting!), and share how it goes under #buildinpublic and here.
To prove it works, I will post top picks from my daily digest, and tuning the algorithm will help me reduce the effort of manually polishing the output.
Finally, to let others try it, I'm planning to run a beta where participants provide their own OpenAI API key; the goal is to keep it safe.
A landing page is also at the top of my mind: right now I don't even know what link to send someone who has questions.
I know I haven't really tried to validate it so far, but once it works for me, I'll start setting up Feedster for others who are willing to try.
I'm an engineer and love being one; that's why building in public always leads me to building, not to marketing, selling, or promoting. But maybe this is my way to promote myself as an engineer, and people tend to trust good ones, especially when it comes to software.