There are plenty of reasons to look at how the solution is performing.
- Clients are paying for (hopefully) a good service and want proof.
- Pesky awards want soundbites.
- There are chances to perform better, and failures hold lessons should adjustments be needed.
...it also makes for a good portfolio.
Did I mention that millions of users made millions of daily transactions with something I worked on? Tweaks compound, and the results are more obvious under that sort of load, but they are by no means nonexistent at smaller scales. So there's really no reason not to monitor (and respond to) performance.
The project frame should have included follow-up measures very early on. If it didn't, I will include them anyway.
Quantitative performance
The metrics used to measure success depend on the project and solution. Click-rates don't apply to print, units sold involve a crazy number of variables, and so on.
Whatever system applies, it should be mapped out early on, perhaps during one of the pre-launch strategies laid out in this sheet (but preferably around or after the solution idea starts taking shape, should no previous reference be available). Now would be a good time to look at performance. Performance that shouldn't come as a surprise.
If there are surprises, and since this design sheet focuses on the designer's troubleshooting side of things, identifying their cause is the place to start (or a complete revisit). Follow up with the needed adjustments.
Qualitative performance
Less quantifiable is the quality of the work.
Poor performance is one thing, but then there is downright poor graphic quality (optimization notwithstanding).
Things like “added brand recognition” are nice, but any metric measuring them should be taken with a grain of salt.
Some hints should have been captured during the many previous CC tests, but sample tests never account for everyone, or every scenario. Should a problem be identified post-launch, adjustments are needed there as well.
Luckily, if the project frame allowed, considerations should already have been made in one of the many post-launch strategies set up throughout.
Speaking of qualitative matters, some of the best design is seamless and near invisible. The need for it may not be obvious; sometimes its presence isn't even noticed.
Banking services are a good example.
- "How delightfully boring," one might say from a graphic designer's point of view. Nay. Reductive work on information structure can mean fewer steps for customers to get their business done. Invisible, but appreciated - arguably (and quite likely) more appreciated than neat graphics during banking errands.
Adjustments needed
Nobody's perfect.
As mentioned before, people don't always do as they say (excluding myself of course, I'm great), which makes designing on certain premises susceptible to fault.
Adjustments are often needed: to make bad things good, or good things great.
Since this section covers post-launch adjustments, I won't mention client-induced revisions.
What should be mentioned is the importance of identifying the actual problem.
How big an adjustment is needed depends on the type of problem. Some (if not all) of the phases in the exploration stage would help investigate bigger performance drops.
At the same time, adjustments need not relate to poor performance. Performance tweaks are always worth considering, especially with a larger audience.
Strategic continuation
Wrapping up the project means leaving some ends open for future expansion, be it further work on the brand, extensions and continuations of the solution, or similar. Some common graphic design considerations:
Style guides have been devised earlier in the process. With the project just about finished, they can now be made all the more comprehensive (not to mention without the same time constraints). This retains consistency and makes for an even smoother transition into future projects with the same client, regardless of designer.
The accumulated data from the various tests done throughout the project holds a lot of worth. Even if the material doesn't seem applicable for foreseeable future reference, things change.
I habitually save all presentation material throughout a project and make it available to others so they can understand the reasoning behind most, if not all, decisions. Not just internally for the project's duration, but for future teams' reference as well.