ARTE Concert migration: a retrospective

As we previously announced, we successfully concluded the complex migration of the ARTE Concert website to a Drupal platform. This post is a short retrospective journey through the processes we followed to make this project a success.

The project seemed like an interesting challenge, but we quickly realised that beyond its technical aspects, we had to fit our own team within a broader team of freelancers, with the whole workflow managed by an agency. Contributing our expertise to the team while maintaining our company culture was the first hard task. We wanted to stick to our vision and procedures without stepping on the toes of the other skilled people who made up the team. It turned out to be an easy task after all: not only did the rest of the team happily adopt our best practices, everyone saw tremendous benefits from them. Our very first success was right there! Within days the whole team was leveraging a proper git flow to independently develop features and functionalities of the platform. Another life-saver was the adoption of Phing to automate builds, standardise the local fixture content and bootstrap environments quickly. Last but not least, we brought our cloud-driven development flow to the table, which allowed us to keep testing each other's work and tweak it to fit the hosting platform's infrastructure.
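To give an idea of what this looked like in practice, here is a minimal sketch of the kind of Phing build file we mean. The project name, target names and drush commands are hypothetical and the real build did more, but the principle is the same: a single command rebuilds the site from scratch with the shared fixture content.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- build.xml: hypothetical, trimmed-down sketch of a Phing build file. -->
<project name="arte-concert" default="build">
  <!-- Path to the drush executable; can be overridden per environment. -->
  <property name="drush" value="drush"/>

  <!-- Rebuild everything: reinstall the site, then load the fixture content. -->
  <target name="build" depends="site-install, fixtures"
          description="Rebuild the site from scratch with fixture content."/>

  <target name="site-install" description="Reinstall Drupal from the install profile.">
    <exec command="${drush} site-install standard -y" passthru="true" checkreturn="true"/>
  </target>

  <target name="fixtures" description="Import the shared fixture content.">
    <exec command="${drush} migrate-import --all" passthru="true" checkreturn="true"/>
  </target>
</project>
```

With something like this in place, every developer can run `phing build` and get an identical, freshly bootstrapped site on their own environment.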

The second challenge was to get a monster backlog completed within a tight time frame. While we believed that the technical resources were sufficient, we rapidly started to hit walls and to consider upsizing the team. In fact, the issues lay elsewhere: some specs were too generic, some features were not thought through enough, and the sprints were not organised to respect requirement dependencies. So, rather than upscaling, we worked together with the product owners to remove these bottlenecks.

By softening the less flexible terms of the contract, we could make the process a lot more agile. First, we took the time to do a transversal overview of the backlog and re-organised the tickets into thematic and functional clusters that made sense to the dev team, while keeping in mind the initial segmentation made by the product owners. This split allowed us to assign developers as initiative owners for these topics, balancing experience, expertise and workload.

This mid-way preparation work led not only to a drastic improvement of the backlog, but also got the product owners to re-think the minimum viable product. As a matter of fact, we always tried to convey the idea that the quality of a product must be dissociated from the quantity of features it has. This was a key shift in the process. Once again, we introduced yet another fundamental practice that we use at Marzee Labs and which the team eagerly welcomed: code reviews. This improved not only the overall quality of the code, but also contributed to a worry-free and smooth go-live phase.

Content migration was another big, core topic. After all, without a proper migration strategy, all the features we were building would have been pretty much useless. From the beginning, we worked with content fixtures that allowed us to build the same fresh content on all the dev environments, using the same mechanism that would later be used for the real migration. As the functionalities were implemented, this dummy content evolved accordingly. This meant yet another critical part of the project could be handled smoothly, by re-using our work and removing the typical bottleneck of needing content in the website to properly develop and debug.
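As an illustration, here is a minimal sketch of what such a fixture migration can look like with Drupal 7's Migrate module. The content type, field names and events.csv file are hypothetical, but the pattern is the one described above: the same migration class that builds the dummy fixture content locally can later be pointed at the real source for the actual migration.

```php
<?php
/**
 * Hypothetical fixture migration: imports events from a CSV file into nodes.
 */
class ConcertEventMigration extends Migration {

  public function __construct($arguments) {
    parent::__construct($arguments);
    $this->description = t('Import concert events from a CSV fixture.');

    // Columns of the (hypothetical) fixtures/events.csv file.
    $columns = array(
      array('id', 'Unique event id'),
      array('title', 'Event title'),
      array('body', 'Event description'),
    );
    $path = drupal_get_path('module', 'example_fixtures') . '/fixtures/events.csv';
    $this->source = new MigrateSourceCSV($path, $columns, array('header_rows' => 1));

    // Create nodes of a hypothetical "event" content type.
    $this->destination = new MigrateDestinationNode('event');

    // Track which source row produced which node, so imports are repeatable.
    $this->map = new MigrateSQLMap($this->machineName,
      array('id' => array('type' => 'int', 'unsigned' => TRUE, 'not null' => TRUE)),
      MigrateDestinationNode::getKeySchema()
    );

    $this->addFieldMapping('title', 'title');
    $this->addFieldMapping('body', 'body');
  }
}
```

Running `drush migrate-import` (or the fixtures build target above) then gives every environment the same content to develop and debug against.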

Unfortunately, not everything was a success, and after conducting a team introspection we came up with the boxes we will have to tick once we start preparing phase two:

  • We must implement a proper testing flow using proper testing tools, as well as improve the user testing model. We want to move towards a more Behaviour-Driven Development approach, with improved functional specifications and exhaustive narrative cases.
  • We shall no longer be slaves to our own tools. It is very easy to end up with different flows and ways of using the same project management tool, in our case Jira. We will therefore seek to standardise and simplify its usage, keeping the flows that worked and wisely improving what went wrong. This should avoid needless overhead and complexity, and let us implement a proper Scrum flow.
  • We need to build a better continuous integration stack and standardise our dev boxes as much as possible. Only then can we reduce the number of hotfixes caused by environment differences. Tools like Vagrant will come to the rescue here and make things so much easier.

We've come a long way and, as with every large-scale project, we happily wandered on sunny plains but also fought our way through stormy paths. All in all, it was worth the walk. Phase two will need some tweaks, but we have reached a good cruising speed and the whole team is to be congratulated!
