Green Grow Precision Planting - Your Digital Growing Companion

Project Overview

Amid rising concerns about food security in the UK, food prices have been climbing since August 2021, reaching a year-on-year increase of 19.1% in March 2023 (“United Kingdom: Inflation Rate for Food 2023”, n.d.). Climate change and population growth add further pressure (Tomlinson 2013). This project aims to support home growers in producing consistent yields of vegetables, fruits, and herbs with the help of sensors and a mobile device. Green Grow Precision Planting is a digital growing assistant. The concept uses soil moisture-based scheduling, which delivers high accuracy and water savings with affordable sensors (Ahamed 2023). While full automation would ideally handle water delivery, the app instead uses the sensor information to estimate the volume of water or feed users should apply. The output of the proposal and research is a high-fidelity prototype mobile app built in response to two identified personas and tested across fidelity stages to validate the ideas.
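The water-volume suggestion at the heart of the concept can be sketched as a simple soil-moisture deficit calculation. This is an illustrative sketch only: the function name, the thresholds, and the assumption that the sensor reports volumetric water content are mine, not fixed parts of the prototype.

```python
def litres_to_apply(current_vwc: float, target_vwc: float,
                    bed_area_m2: float, root_depth_m: float) -> float:
    """Suggest a watering volume from a soil-moisture reading.

    current_vwc / target_vwc are volumetric water content as fractions
    (e.g. 0.20 means 20% of the soil volume is water).
    """
    deficit = max(target_vwc - current_vwc, 0.0)   # no water needed if already at target
    soil_volume_m3 = bed_area_m2 * root_depth_m    # volume of soil to wet
    return deficit * soil_volume_m3 * 1000.0       # 1 m^3 of water = 1000 litres


# e.g. a 1 m^2 bed with 0.2 m root depth, reading 20% against a 30% target
suggestion = litres_to_apply(0.20, 0.30, 1.0, 0.2)
```

In practice a real scheduler would also account for irrigation efficiency and crop-specific targets, but even this minimal form shows why an app can turn a raw sensor percentage into an actionable instruction.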

Category
Mobile Design
Date
January 15, 2023 → August 24, 2023
My Role
UX Design, UI Design
Figma File
Keywords:

Smart Agriculture, Deep Learning Technology, User Experience, Community Agriculture, Sensors

Project Objectives

The primary aim was to develop a digital growing assistant that simplifies the complexities of modern agriculture. The project focused on:

  • Enhancing crop yields using data-driven insights.
  • Optimising water usage by integrating moisture monitoring technologies.
  • Creating an intuitive and user-friendly interface that would simplify the growing process for users of all levels.

Hypothesis

I hypothesised that a mobile application and smart sensor technology could significantly improve the cultivation success rates for gardeners and small-scale farmers by providing real-time feedback and actionable insights.

Project Process

A hybrid design thinking methodology guided the project. This involved:

  • Research Phase: Conducted in-depth market analysis to identify gaps and user requirements.
  • Ideation Phase: Generated multiple concepts through brainstorming sessions, focusing on innovative yet practical solutions.
  • Prototyping Phase: Developed several prototypes, each iteratively improved based on user feedback and technical feasibility.

Methodology

Empathise

Survey Design and Analysis: Crafted a concise, targeted survey to gather initial user insights. Analysed responses to identify key user pain points and expectations.

In-depth Interviews: Conducted interviews with a broad spectrum of users, from hobbyists to professional growers, to gain diverse perspectives on the product's usability and functionality.

Define

Personas

Developed detailed personas to represent the user base, focusing on two main types:

The Amateur Hobbyist: Represents users who engage in gardening as a leisure activity. This persona helped me understand the needs of users new to digital gardening tools.

[Image: The Amateur Hobbyist persona]

The Experienced Grower: Represents more tech-savvy users, focusing on efficiency and productivity. This persona guided the development of advanced features.

[Image: The Experienced Grower persona]

Journey Mapping

Created comprehensive journey maps for each persona, outlining their interaction with the app from initial discovery to regular usage. These maps highlighted critical touchpoints where the app could enhance the user experience.

Competitor Analysis

Conducted a thorough analysis of existing digital agriculture solutions. This research covered direct competitors and related applications in the market, providing insights into industry standards and user expectations.

[Image: competitor analysis]

Ideation

Card Sorting and User Feedback:

Utilised card sorting techniques to understand user preferences for the app's navigation and features. This feedback directly influenced the app's information architecture.

[Image: card sorting results]

How Might We Statements (HMW)

I categorised responses into distinct themes from the interviews. Using these, I crafted several 'How Might We' statements to encapsulate and address the identified pain points. While I'm content with the statements I've generated, the solitary nature of this project posed a challenge in terms of validating and refining these ideas through collaborative feedback. Ideally, in a team-based setting, I'd bounce these ideas off colleagues or, if feasible, circle back to users for validation. However, solo endeavours might not always provide such luxuries.

[Image: How Might We statements]

Task Flows

Before sketching and wireframing, I delineated potential task flows, mapping out plausible user paths to complete specific tasks within the app. Reflecting upon this, revisiting the competitor analysis could have enhanced my task flows. Drawing inspiration from established examples and observing how they handle similar user actions might have provided additional insights to enrich my task flows.

[Image: task flows]

Visual Design

Sketching

When beginning sketching, I wanted to explore different options and directions quickly. To do this, I used Crazy 8's to work through ideas, and this is where I made my first significant error, one that carried through this version of the project: I should have used reference material from existing design structures as part of the process. The same error extended to the chalk mark tests I ran using Optimal Workshop. However, it did allow for significant learnings, which I discuss later in this reflective documentation.

[Image: Crazy 8's sketches]

Chalk Mark

After completing my initial set of lo-fi wireframes, I sought to gauge user interaction and understanding through the chalk mark test. For those unfamiliar, a chalk mark (first-click) test records where users click when attempting set tasks, marking the areas they engage with. This exercise involved 19 participants, yielding a mix of insights.

Interestingly, there were instances where users' expectations from the card sort didn't align with their interactions during the chalk mark test. For example, features users expected to find in one category during the card sort were sought elsewhere during the test. Feedback from the exit questionnaire revealed an evolution in user understanding as they navigated the design. While initial interactions posed challenges, the design's learnability became evident as it felt more familiar and intuitive with continued use. However, the balance between initial usability and learnability is crucial: users may lose patience if a design takes too long to become intuitive, potentially leading to app abandonment.

On the positive side, participants appreciated certain design elements. They highlighted the large, clickable button sizes and the dashboard's user-friendliness. Many even remarked on how the design evoked a sense of familiarity, echoing the layouts of other apps they've used yet offering a fresh perspective.

Evaluating the feedback, I wanted to delve deeper into user motivations and experiences. Inspired by Tinbergen's four questions, primarily used for understanding animal behaviour, I incorporated them into the exit interview of the chalk mark test. The authors of Think Like a UX Researcher argue for the applicability of these questions in UX, using the example of stopping at red traffic lights to probe deeper into behavioural motivations (Travis and Hodgson, 2023, 23–27). Questions like, "How did your understanding of the design evolve as you interacted with it more?" allowed me to obtain richer insights than the conventional "What do you think about this design?". The nuanced phrasing encouraged users to reflect on their journey, revealing how the design became more intuitive as they progressed.

In reflection, the chalk mark test was both enlightening and enjoyable. As I move forward, I'll consider the balance of user expectations, design familiarity, and the depth of feedback to refine the designs.

Iterations

[Image: design iterations]

Design

The first version presented a vast array of learning opportunities; the design was trying to do too much and, in doing so, was a mess at this stage. This would also explain some of the earlier obstacles I encountered, such as the card sorting exercise's abandonment rate, where participants possibly grew frustrated and grouped everything together, and the poor task completion rate in the chalk mark test. My design was failing, and it wasn't until a candid conversation that I thought long and hard about its present condition. Several lessons emerged from this version, which I cover below.

Colour

My initial choice of colours lacked harmony and appeared jarring when placed next to each other. I had opted for a particular palette, including green and dark grey, which I thought aptly reflected an outdoor setting, and I had also introduced a cream colour. However, this resulted in screens where the green acted as a universal action point, and the cream-coloured text seemed to fade into the background.

During a supervision meeting, it was suggested that my colour scheme might not be entirely appropriate. This prompted me to revisit the colour choices. In the book UX Magic, the author notes that colour choices can make or break a user experience and points out that poor colour decisions often stem from UX amateurs expressing an artistic point of view rather than selecting colours for their functional purpose (Rosenberg 2020, 302). This made me realise the need for a more thorough understanding of colour theory, and I embarked on a quest to find a functional and appropriate colour palette.

Initially, I was slightly disheartened by this feedback. However, I soon recognised that this was an opportunity for growth and learning. Failure, after all, is a crucial part of the creative process. It prompted me to adopt a more objective viewpoint and revisit the stage where I had created the personas. From this point, I began to reconsider my colour choices with a fresh perspective, focusing more on their functional utility rather than purely aesthetic considerations.

Exploring Colour Theory

To enhance my understanding of colour theory, I delved into "Design Elements Colour Fundamentals," which enriched my design language, particularly in terms of 'Value' and 'Intensity' (Sherin 2012, 15). While I had previously touched upon colour in a different module, this revisit was a poignant reminder of its significance.

Drawing Inspiration and Revising the Palette

On the journey to revise the colour palette, I drew inspiration from design layouts on Behance and from nature itself. Retaining green, a hue prevalent in similar designs, seemed apt. However, finding the right shade and complementary hues was a challenge. An inspirational moment struck when, out in the garden with my children, I photographed our plants and extracted potential colours from the images. A colour generator, Good Palette, was instrumental in refining the chosen hues and producing a range of compatible tints and shades. A shade inspired by a red cabbage was selected as the accent colour.

Prioritising Accessibility

The resultant palette boosted the design's accessibility by offering combinations that meet at least the AA rating and, in some cases, the AAA rating. The range of tints and shades facilitated smoother transitions and enhanced the readability of components like the navigation bar. By opting for an accessible palette, the design became more inclusive, catering to a broader range of users and underscoring the importance of accessibility in user-centred design.
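The AA and AAA ratings referenced above come from the WCAG 2.1 contrast-ratio formula, which can be checked programmatically rather than by eye. A minimal sketch of that check follows; the thresholds shown are for normal-size body text, and any colour values you pass in are your own, not the project's palette.

```python
def _linear(channel: float) -> float:
    """Linearise one sRGB channel (0-1), per the WCAG 2.1 definition."""
    return channel / 12.92 if channel <= 0.03928 else ((channel + 0.055) / 1.055) ** 2.4


def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance of a 0-255 RGB colour."""
    r, g, b = (_linear(c / 255) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b


def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio, from 1:1 (identical) up to 21:1 (black on white)."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)


def wcag_rating(ratio: float) -> str:
    """Rating for normal-size body text under WCAG 2.1."""
    if ratio >= 7.0:
        return "AAA"
    if ratio >= 4.5:
        return "AA"
    return "Fail"
```

Running candidate foreground/background pairs through a check like this is how a palette's tints and shades can be validated early, before they are baked into components.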

Typography

My initial typographic selection, while functional, lacked the depth and distinction my design needed. Relying solely on the Inter typeface and manipulating its weights and colours for hierarchy made the design seem disorganised, particularly against the backdrop of my initial colour choices. With a renewed colour palette in hand, I undertook a more thorough exploration of typography. Drawing on established design principles, I explored pairing serif and sans serif typefaces to create a harmonious balance, with online font-pairing tools providing direction.

Although safe, my initial pairing of Source Sans Pro and Source Serif Pro received mixed feedback. Venturing further, I toyed with the idea of a handwritten typeface for a distinct flair. However, when applied broadly, it cluttered the design, a sentiment echoed in feedback. After consulting with my supervisor, I explored the potential of a slab serif. This choice resonated well with potential users, confirming the value of iterative design and feedback incorporation.

Margins

Margins and padding are subtle yet crucial elements that can influence a user's perception and experience with an app. In my initial design, I opted for narrower margins with the intention of maximising on-screen content. However, user feedback indicated that this choice impacted the app's aesthetic and usability.

In my subsequent iterations, I allocated more generous margins and padding while maintaining a consistent 4px grid for design uniformity. This decision was guided by design inspirations sourced from Behance, where effective use of space in interface designs was evident. By recalibrating the margins, the app looked more visually appealing and enhanced user readability and navigation.

Features

Feedback from the initial design iteration consistently raised a pertinent question: "What is the primary purpose of this design?" While the original intent was to assist users with smart sensors, my eagerness to provide holistic support inadvertently led to including weather information. As reflected in user feedback from the chalk mark test and interviews, this deviation muddled the core proposition of soil moisture-based sensing.

Recognising this dilution, I realigned the design focus in the subsequent iteration. The revised design offers clarity and purpose by emphasising interaction with the smart sensors and prioritising users' primary goal – successful growth. While integrating additional features like weather metrics remains an enticing prospect, it's reserved for potential future development, ensuring the design's essence isn't compromised.

Feedback/Reflection

The ideation phase has been both challenging and immensely rewarding. In retrospect, it's clear that during the initial design stages, I became somewhat tunnel-visioned, often failing to pause and realign with the fundamental question: "Does this cater to the user's needs?" A more regular self-assessment, anchored in the user's requirements, might have kept the design more user-centric. As I advance in my design journey, I aim to frequently juxtapose my work against the project brief and the user's needs to ensure alignment.

Additionally, the utility of creating a style tile became evident. Equipped with a predefined colour palette, fonts, text styles, and imagery, I could maintain a focused approach. This experience underscored the significance of establishing a design system early on. As I delve deeper into UI design, I'm drawn to a method suggested in "UX Magic" — adopting a greyscale palette. Not only does this cater to users with complete colour blindness, but it also paves the way for seamless stylesheet alterations without compromising legibility (Rosenberg 2020, 302). Such a method seems prudent, potentially elevating accessibility and overall design quality.

Interaction Design

To enhance the app's user experience, I sought ways to make interactions not only functional but also delightful and memorable. While exploring various design elements, I chanced upon a desktop version of a virtual pet reminiscent of the iconic Tamagotchi. This discovery sparked the idea of using emojis to depict the health and mood of plants. Healthy? Thirsty? In need of care? Emojis could instantly convey these states.

While introducing this concept to the dashboard, I grappled with concerns about its appropriateness. Would it appear too whimsical, detracting from the app's utility? However, user feedback was overwhelmingly positive. Users found the emojis endearing and compelling as visual cues indicating which plants required attention. Building on this enthusiasm, I evolved the emojis to resemble the plants they represented.

Moreover, to enrich the interface, I introduced micro-interactions. These subtle design touches, like a gently pulsing dot on real-time data charts or jubilant emojis, added layers of engagement. I also integrated feedback mechanisms, such as animated ticks, confirming user actions, and ensuring clarity and satisfaction.

All imagery and emojis within the app were crafted using Midjourney, adhering to their General Commercial Terms. I curated these visual elements by inputting common keywords and phrases, ensuring relevance and resonance. Some images required minor Photoshop work to remove existing expressions, and After Effects was used to create small animations with basic keyframes.

Prototyping

During the prototyping phase, Figma rolled out an update introducing variables, a feature that enhances dynamic interactions within prototypes. Adapting to this new functionality was a learning journey. I successfully integrated it into my design, allowing for more responsive interactions. For instance, based on user interactions, a care list could dynamically switch between a happy and sad emoji, indicating the plant's health.
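In the prototype this logic lives in Figma variables rather than code, but the mapping from a sensor reading to an emoji state can be sketched as a simple threshold function. The state names and threshold values here are illustrative assumptions, not values from the design:

```python
def plant_mood(moisture_pct: float,
               dry_threshold: float = 25.0,
               wet_threshold: float = 70.0) -> str:
    """Map a soil-moisture percentage to a dashboard emoji state."""
    if moisture_pct < dry_threshold:
        return "thirsty"   # sad emoji; the plant joins the care list
    if moisture_pct > wet_threshold:
        return "soggy"     # warn about overwatering
    return "happy"         # content emoji, no action needed
```

Expressing the rule this way makes the prototype behaviour easy to reason about: each incoming reading resolves to exactly one state, which in Figma corresponds to one variable value driving the emoji swap.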

Moreover, I leveraged the use of Midjourney for content generation, especially for specific imagery like tomato plants. This saved valuable time otherwise spent searching for the right stock images and ensured precision by letting me dictate the content specifics of each image. This streamlined approach and Figma's versatility facilitated swift modifications in line with feedback from testing rounds. Such adaptability in my design process underscored the emphasis on user-centricity and iterative refinement.

Testing

During the app's development process, I conducted ongoing tests and invited users to participate in specific testing rounds using a shared Figma link. Even my six-year-old daughter provided valuable feedback, reinforcing my belief in the emoji-based feedback system. Seeing her joy when the tick appeared after clicking on the 'tomato', and her empathy towards the happy and sad emojis on the dashboard, was heartwarming.

As I worked on the second version of the design, I faced various challenges raised by user feedback. For instance, users found the tomato image on the overview too small; however, when I enlarged it, it became overpowering and distracted users from the header text. Users were also uncertain about what to do next on the overview and details pages. The content's weight often caused them to overlook the text buttons at the top of the screen, which were meant for toggling between simple and detailed views. To solve this, I introduced a more prominent button that naturally guided users towards the detailed information option.

As the design neared completion, concerns about colour accessibility resurfaced, especially where stronger contrast was needed. Fortunately, my earlier work with a broad spectrum of values and intensities allowed for easy adjustments, improving the contrast ratios. Feedback also highlighted typographic hierarchies that needed more clarity. To address this, I used the secondary colour to differentiate headings, ensuring they did not share the same green shade as surrounding elements.

Future Work

While the initial design suffered from feature creep, leading to an overwhelming interface, I've identified potential features for future iterations based on structured user feedback. These additions, introduced incrementally, could further enhance user experience:

Planting Seed Reminder

Users expressed interest in understanding optimal seed or seedling planting times for timely harvests. This feature could utilise historical data from prior user crops and weather patterns to predict the best planting window.

Pests & Diseases Management

A significant user concern is managing pests and diseases. Future iterations could explore machine vision for plant observation and environmental data to assess potential risks.

Weather Integration

While the initial version overly emphasised weather, its reintroduction would aim to offer weather-based irrigation scheduling and, potentially, plant water status-based scheduling, ensuring the app remains a gardening assistant first (Ahamed 2023, 60).

Soil Quality Analysis

Catering to users like 'Cid', this feature would provide insights into soil quality preferences for different plants. One user pointed out the soil quality differences between new and established homes, a factor worth considering.

AI Integration

AI can significantly augment the app's capabilities. While it's already envisioned for water and feed calculations, integrating a conversational bot could offer users guidance on planting, growing, and more.

Conclusion

This project's original proposal sought to explore using smart sensors within a vertical farming context. However, the initial stages of research pivoted my focus towards a smaller, more intimate scale, a shift predominantly influenced by the kind of participants available for the survey and interviews. The core hypothesis was that home gardeners, armed with an app delivering real-time feedback from smart sensors, would experience a marked enhancement in their cultivation success rate. While the feedback has been largely positive, a method for quantifying this success has yet to be established.

The project changed a lot from start to finish. One big lesson I took away was the danger of adding too many features; it made me realise just how important it is to keep user needs at the centre of everything. When I took some inspiration from the Tamagotchi for the app, it showed me how putting the user first can lead to new ideas. Using emojis to show how plants are doing was a simple touch, but it clicked with the people I showed it to.

The iterative process was important in pushing the design as far as it could go and allowed for different experiments. I also found myself guilty of blending what individual users said they wanted with the research themes: different users wanted different things, and in an effort to please everyone I kept adding their requests. This had the opposite effect, creating an interface in which testers got lost and the intention of the design was obscured.

Reflecting upon the entirety of this module, the journey was replete with invaluable lessons. One standout moment was the strategic shift from Axure to Figma, a decision underscored by Figma's timely updates that further enriched my toolset. This module wasn't just about delivering a product; it was a deep dive into the intricacies of user experience design, a journey of self-discovery, and an affirmation of the dynamic nature of learning.

While the project's outcome is a testament to iterative design and user-centric principles, it also underscores the delicate balance between creative vision and usability. Consistent feedback and my subsequent reflections ensured the design aligned with user needs. This journey wasn't just about creating a product but understanding the fluid dynamics between a designer's intent and user experience.

References

Ahamed, Tofael, ed. 2023. IoT and AI in Agriculture. 1st ed. Singapore, Singapore: Springer.

Dapunt, Alex, Christian Strunk, and Nikki Anderson. 2020. “Product Bakery – The Product Management, UX & Design Podcast: #9 User Research Can Be Fast & Easy - with Nikki Anderson @Zalando On.” Apple Podcasts. October 22, 2020. https://podcasts.apple.com/gb/podcast/9-user-research-can-be-fast-easy-with-nikki-anderson-zalando/id1533923493?i=1000495618110.

“How Long Should a Survey Be?” n.d. SurveyMonkey. Accessed August 1, 2023. https://www.surveymonkey.co.uk/curiosity/survey_completion_times/.

Kore, Akshay. 2022. Designing Human-Centric AI Experiences. 1st ed. Design Thinking. Berlin, Germany: APress.

Kvale, Steinar. 2012. Doing Interviews. Qualitative Research Kit. London, England: SAGE Publications.

Rosenberg, Daniel. 2020. UX Magic. Independently Published.

Sherin, Aaris. 2012. Design Elements, Color Fundamentals: A Graphic Style Manual for Understanding How Color Affects Design. Rockport Publishers.

Tomlinson, Isobel. 2013. “Doubling Food Production to Feed the 9 Billion: A Critical Perspective on a Key Discourse of Food Security in the UK.” Journal of Rural Studies 29 (January): 81–90.

Travis, David, and Philip Hodgson. 2023. Think like a UX Researcher. 2nd ed. London, England: Taylor & Francis.

“United Kingdom: Inflation Rate for Food 2023.” n.d. Statista. Accessed July 1, 2023. https://www.statista.com/statistics/537050/uk-inflation-rate-food-in-united-kingdom/.