As Caremmunity approaches its first release, I wanted to take some time to share the technical observations, thoughts and patterns I collected as we raced to release an MVP of our mobile app over the last nine months.
Caremmunity was started last year by my friend Nathan, who has always wanted a better solution for managing the care of his relatives who still live at home. The core functionality of the app is to provide a platform for friends, family, neighbours and professional services to coordinate care for a loved one. The app is not just for those who care for someone: it's also designed to give control and independence back to those who want to organise their own care in the future. You can read more about it here.
I joined as a technical co-founder to help drive the technology strategy for the business. It's worth noting that the effort and progress we've made in nine months has all happened in our spare time, whilst we both worked full time elsewhere.
There aren't any groundbreaking technology stories here; it's more a tale of how I leveraged the existing tools in my armoury to produce our Caremmunity app whilst setting ourselves up for the future.
As soon as I joined the project I wanted to create a technical strategy that comfortably supported the product roadmap, while making the technology we built resilient for a highly integrated future. As with all industries that are glacially slow to adopt technology, the integration layer between businesses is often non-existent. Caremmunity aims to smash down that barrier and promote a healthy technology ecosystem for future product disruption.
Kicking things off is often difficult and structure is key. To help, I've got a few headlines I regularly use to draw out the tech detail. Some of the key areas to start documenting and working through are:
- Coding standards, guidelines, conventions & policies
- Application Services
- Security (Application data, user data & practices for employees)
- Global availability (target SLAs)
- Caching strategy (Origin, object)
- Offline support
- Scalability & Performance
- Testability (QA, Developer & automated)
- Monitoring & metrics
- Data (Source, strategy, transactional requirements, volatility, migration & maintenance)
- Infrastructure (Cloud providers, deployments, disaster recovery, network)
- Team (Skills we have, skills we’re missing, prioritisation and product development process)
Nine months ago I was regularly questioning whether completing this technical strategy was worth my time, especially considering we hadn't written a single line of code at that point. I'm glad I stuck it out: the document forced us to consider every available compromise and possibility in the technology choices we made.
Building out the API
I'm a passionate advocate of quality domain-driven design coupled with the power of a GraphQL schema, and approaching the early stages of product development with this in mind helped drive the creation of an organised MVP product with sensible domain boundaries.
When building the strategy I knew that it'd just be me writing code for the MVP, and that these domain boundaries would be crucial to the scalability of Caremmunity beyond our first release. Going back to the earlier objectives, I needed to make sure that others could pick up this technology and start interacting with it to build real value from day one. Too often, MVP tech and product literature focus on speed and compromise without talking about the tools needed to monitor and accurately assess MVP/future trade-offs.
This is where the true power of GraphQL comes in for product development. Its flexible type and resolver patterns help you derive a schema that provides the right data to your fronting application (app/web UI/Alexa) whilst simultaneously organising your model infrastructure into sensible domain entities. It would have been ludicrous to build anything other than a monolith for the MVP, but a monolithic GraphQL service with discrete resolver/model relationships is about the closest 'lift and shift' pattern you'll get in terms of setting up your estate for a future of microservices fun.
Every step of the GraphQL journey forces you to make smart decisions about the organisation of your code for the future, e.g.
- Are there common complex types across my estate that will need regular checking?
- Are my models too relational for a future of single-responsibility microservices?
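To make the monolith-with-boundaries idea concrete, here's a minimal sketch (the domain names are hypothetical, not Caremmunity's real schema). Each domain gets its own model module, and resolvers only ever talk to models, never to each other's data stores, so a model can later be swapped for a client of a dedicated microservice without touching the schema layer.

```javascript
// Domain models: the only code allowed to touch each domain's data store.
const careCircleModel = {
  byId(id) {
    // MVP: read from the monolith's database. Later: call a CareCircle service.
    return { id, name: 'Mum', memberIds: ['u1', 'u2'] };
  },
};

const taskModel = {
  forCircle(circleId) {
    // Kept separate from careCircleModel so the Task domain can split off cleanly.
    return [{ id: 't1', circleId, title: 'Collect prescription', done: false }];
  },
};

// Resolver map: mirrors the GraphQL schema and delegates straight to models.
const resolvers = {
  Query: {
    careCircle: (_root, { id }) => careCircleModel.byId(id),
  },
  CareCircle: {
    tasks: (circle) => taskModel.forCircle(circle.id),
  },
};
```

The discipline that matters is in the resolver map: `CareCircle.tasks` goes through `taskModel`, never through a join inside `careCircleModel`, which is what keeps the future split cheap.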
Here's an illustration inspired by Netflix's GraphQL blog that really helps drive the separation between MVP and 'scale up' architecture. If you can get comfortable working in the trenches between both realities, you'll find it much easier to accept the API trade-offs you're making during MVP.

Knowing we were only a few refactors away from being able to split our monolithic app (on the left) into a set of single-responsibility services (on the right) was a huge reassurance when it came to quickly hacking our way through the product build.
I raced to produce the schema first and it paid off.
From that point on, I could easily reason about backend data stores and the demands of frontend features without worrying about breaking the contract between them. If you've done it right, your schema should barely change during MVP development, and slowly but surely you swap out stubbed data sources for real data. I liken 'schema-first development' to TDD: in many ways they share the same values, forcing the engineer to focus on the signature of a function's input/output first.
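The stub-swapping step can be sketched like this (hypothetical names, assuming a resolver written against a small data-source interface). The resolver depends only on the contract the schema promises, so moving from canned data to a real store is a one-line change the frontend never notices.

```javascript
// Stub data source: canned data, good enough to build the UI against.
const stubCarers = {
  find(id) {
    return { id, name: 'Stub Carer' };
  },
};

// Real data source: same shape, backed by an actual store.
function makeRealCarers(db) {
  return {
    find(id) {
      return db.get(id);
    },
  };
}

// The resolver only depends on the data-source interface, not the implementation.
const makeResolvers = (carers) => ({
  Query: { carer: (_root, { id }) => carers.find(id) },
});

// During MVP:
const stubbed = makeResolvers(stubCarers);
// Later, the one-line swap:
const db = new Map([['c1', { id: 'c1', name: 'Nathan' }]]);
const real = makeResolvers(makeRealCarers(db));
```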
Building out the app
Similarly, the frontend was shaping up nicely, with a very thin layer representing our reducer logic. The key here is to focus on the data touch points that sync your application state with your origin:
- Document predicted offline/online cadences
- When will application state need to rehydrate from an external change?
- When will application state need to rehydrate from an internal change?
- What TTLs should be set on origin and object caching?
Once you've got answers to a few of these questions you can start mapping your application logic to the various API event listeners. A few lines of useEffect combined with some Firebase and React Native event listeners will get you a long way towards a slick application. The focus on data consistency at this stage let me avoid bloating reducers with unnecessary logic, while keeping an almost entirely functional component library.
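A simplified sketch of that wiring (a plain emitter stands in for the Firebase and React Native AppState listeners, and a plain subscription block for the useEffect hook): the listeners decide *when* to rehydrate, the thin reducer decides only *how* state changes.

```javascript
// Tiny stand-in event emitter (in the real app: Firebase / AppState listeners).
const listeners = {};
const on = (evt, fn) => { (listeners[evt] = listeners[evt] || []).push(fn); };
const emit = (evt) => (listeners[evt] || []).forEach((fn) => fn());

// Thin reducer: no fetching logic, just state transitions.
function reducer(state, action) {
  switch (action.type) {
    case 'REHYDRATE':
      return { ...state, tasks: action.tasks, stale: false };
    case 'MARK_STALE':
      return { ...state, stale: true };
    default:
      return state;
  }
}

let state = { tasks: [], stale: true };
const dispatch = (action) => { state = reducer(state, action); };

// What would live inside a useEffect: subscribe once, refetch on the right cues.
const refetchFromOrigin = () =>
  dispatch({ type: 'REHYDRATE', tasks: ['Collect prescription'] });
on('remote-data-changed', refetchFromOrigin); // external change (e.g. Firebase push)
on('app-foregrounded', refetchFromOrigin);    // internal change (e.g. AppState event)
```

The event names here are illustrative; the point is that rehydration cadences from the list above each get exactly one listener, and everything funnels through the same small reducer.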
Automate just enough
For me, context switching is a hard skill to master, and moving from a day job to evenings and weekends focused on an entirely different stack is pretty tricky.
When you're a one-man band it can be difficult to justify the time spent automating the delivery of your code, but it's well worth it when you realise you can drop in one evening and get straight into product development.
Taking time out to fix deploys or debug production containers is a draining exercise and will sap you of your product enthusiasm!
Here's some of the tech I used to make product development easier. At this point I'll ask you to ignore the following tools of choice if they're not currently in your wheelhouse: too often engineers get stuck exploring new technologies when they already have the right tools for the job. Only switch tools up when they're not delivering what you expect. My tools of choice for speed and resilience were:
- GCP for Google Kubernetes Engine
- Cloud SQL as a datastore
- GKE to house the NGINX, GraphQL and cron job pods
- Circle CI for building and shoving containers onto GCR
- GitHub for keeping the code safe
- Sentry for smart out-of-the-box error tracing across the estate
- React Native for the iOS and Android application
- A React context for storing the results of GraphQL mutations/queries in the React Native app
- React Hooks and selector patterns for creating a close relationship between functional component state and the schema being developed in the GraphQL API.
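Those last two points combine into a pattern worth sketching (plain objects stand in for the React context, and the query and field names are hypothetical). Raw GraphQL results are cached once, and components read through selectors that mirror the schema, so a schema change touches one selector rather than every component.

```javascript
// What the React context would hold: raw GraphQL query results, keyed by query.
const cache = {
  careCircleQuery: {
    careCircle: {
      id: 'c1',
      tasks: [
        { id: 't1', title: 'Collect prescription', done: false },
        { id: 't2', title: 'Book GP appointment', done: true },
      ],
    },
  },
};

// Selectors mirror the schema shape; components never reach into raw results.
const selectTasks = (store) => store.careCircleQuery.careCircle.tasks;
const selectOpenTasks = (store) => selectTasks(store).filter((t) => !t.done);
```

In the app itself a component would call something like `selectOpenTasks(useContext(CacheContext))`; the selector layer is what keeps the functional components and the GraphQL schema in lockstep.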
Where we’re at now!
I’m really excited to say that we’re now in the process of going through our first round of structured user testing. Soon after we’ll be releasing the App to the Play Store and App Store. I’m thrilled with the progress to date and can’t wait to see it start to make a dent in the way we care for our loved ones.
Please get in touch if you want to chat about Caremmunity in any more detail, either from a technology or business perspective. We have a hugely exciting data layer that’s awesome to integrate with and is very much the future of where we see Caremmunity going.