OmniVis is a biotechnology company that has developed an integrated platform to transform the speed, accuracy, and economics of point-of-care pathogen detection. OmniVis has a device that detects cholera in water samples. Cholera is a disease that occurs most often in areas that lack the infrastructure to prevent water contamination. There are 5 million cases each year and $3 billion in treatment costs that could be minimized with more effective detection and reporting.
Currently, the web portal is difficult to navigate and does not show the data clearly. Our task was to redesign the portal so that it is easy to navigate and understandable to its users, better aiding organizations to save lives. The UI needs to be more meaningful in terms of what information is displayed and how, so that organizations can prevent outbreaks. The users of this portal are often under great strain and must act quickly, so our redesign has to address stakeholder needs and display data effectively.
In this project, I served as a student collaborator, taking a heavy role in market research, user testing, and creating some of the high fidelity mockups.
To start our project, our team needed a good understanding of the data portal problem space. For the first few weeks, I helped lead the charge in identifying actionable takeaways, which led into our next phase: initial sketching.
Once our team had a grasp on the problem space, we moved into initial sketching and user testing. I created the initial sketch that the team moved forward with and started testing.
Alongside my teammates, I led work on the "field researcher" side of the portal and created some of the final screens that we handed off to our sponsors.
Below you will find our team's process for arriving at our final designs. Our process consisted of research, ideation, user testing, and iteration.
The goal of conducting a competitive analysis was to identify which qualities of other portals we considered useful and which we should avoid when redesigning the OmniVis portal.
Each member of our team chose separate portals to review on opendataportals.com and wrote the qualities we found desirable and undesirable on sticky notes. We then made affinity diagrams to find patterns and common themes across the portals we individually reviewed. These themes would drive our sketches when designing for our users.
Our team found that websites with accessible navigation options made it easier to find specific information. We also found that sorting and filtering features are a must-have when working with large amounts of data. We noted which portals were easy to scan, as our goal was to make a visual-based portal. We also saw a large group of comments on how data was visualized, which pushed us to look at ways of showing the data that aren't just a spreadsheet of information.
To break down the portal OmniVis has already implemented, we took a look at its structure and features. We wanted to understand which features the current OmniVis portal prioritizes.
To complete an audit of the current portal, we identified which features are in use and what their purpose is. We wanted to see what appeared important in the creation of the portal, and later compare this with the information we gathered through research to assess how well the layout aligned. We would keep this outline of information, but restructure its display.
Overall, we found that the original portal was visually and functionally limited, and showed similar information on each side of the portal regardless of its necessity for the user's role. The System Admin, for example, saw the exact same test data (displayed in the same way) as the Organization Admin and Field Researcher, even though each role's needs for the data differ.
When creating the field researcher side of the portal, we conducted research on different filtration displays, focusing on shopping and housing sites. Our main goal was to find inspiration for how to visualize filtration and sorting methods.
Members of our team looked at different sites like Audible, American Airlines, Airbnb, and Apartments.com for inspiration, noting unique or intuitive filtration methods.
Audible.com | inspiration for contaminated/clean filter
American Airlines | inspiration for time slider
Airbnb | inspiration for general interface
We conducted two interviews - one with a user of the cholera testing device and portal, and one with someone holding an organization admin point of view. We chose to interview individuals in differing roles to gain a broader perspective on how the portal is used and the needs associated with each role group.
From the interviews, we found that other portals don't have the ability to filter by body of water, but this is an important feature for tracing the spread of cholera through the water. We found that we could use "clean vs. contaminated" terminology instead of the positive/negative terminology we had been using, and that researchers would like to see where they've been and where they need to go.
On the Organization Admin side of the portal, we found that it was important to compare locations and find "hot spots" of positive and negative test results. Organization admins also want to see data over time, and need to see urgent data to know where to allocate resources.
To move forward, we synthesized the information we gathered from interviews to create a persona representative of our user group. This persona was kept in mind throughout our creation process as a reminder of our users' needs. She also helped us tell our story to external audiences, explaining why we designed the way we did.
As a team, our goal was to create a list of necessary features to include in the field researcher side of the portal, based on our synthesis of research.
Through our interviews mentioned earlier, we gained insight into what was important to include in this side of the portal. We also analyzed our secondary research from housing websites and combined these insights to come up with a comprehensive list of the core features necessary for this side of the portal.
We created a journey map to visualize how cholera is tested in the current system with Jordan, alongside the ideal situation for field researchers. This helped us synthesize the information gathered in our interview and guided our design of the portal.
We referred to our interview with Jordan, who had field experience, highlighting action points that were critical to researchers. We also tried to uncover in-between moments where we could potentially improve their experience. We initially whiteboarded and then refined our map digitally.
We whiteboarded takeaways from interviews and research to piece together the experience of a field researcher, identifying and confirming opportunity areas for improving communication and reducing stress. We were also trying to understand the system through which the field researcher would interact with the portal during their testing process.
From this, we learned that our goal for the user portal was to create a space that uplifts field researchers in their stressful day-to-day work by providing an easy way to check their quota of sites.
For our ideation process, we took the findings from our primary and secondary research and created a few sketches to spark ideas. Our main goal with these sketches was to ideate and find the best way to display the information on the portal.
Some of our sketches are shown to the left.
After sketching, we wanted to show a variety of ideas for how the portal could serve field researchers, knowing that the field researcher side would be a foundation for the other sides of the portal. Our interface had to prioritize location, date and time, and test results, which are emphasized in the current portal and were called out in interviews. It also had to stay visually focused and move away from the table-based design.
For this activity we tried "pin-up" sketching: each team member drew their own interpretation of the user-focused screen on the whiteboard.
After each member finished, we used green post-it notes to mark the items we thought worked well in each design and blue post-it notes to point out flaws or things team members had questions about. We then gave feedback as a team and identified the ideas we wanted to implement.
We then chose a final design and moved forward into low fidelity mockups.
We took the final design from our whiteboard sketches and turned it into a low fidelity mockup to better understand how the layout would work on a computer screen. We would then use this mockup for evaluation.
Our sponsors recommended that we allow filtering by a range of dates, their reasoning being that a researcher may need to see how many tests they have completed out of their quota for a given period (a rough sketch of this check follows the list below). After gaining this feedback, our next steps were:
1. Test our mockup to see what is confusing or unintuitive to someone who hasn't used it before.
2. Find information from field researchers on how to organize the portal, both in terms of task-flow and information hierarchy.
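As a rough illustration of the sponsors' date-range suggestion, here is a minimal sketch of the quota check in TypeScript. The record shape, field names, and `quotaProgress` function are illustrative assumptions for this write-up, not part of the actual OmniVis portal.

```typescript
// Minimal sketch: count a researcher's completed tests within a date
// range and compare against their quota. All names are hypothetical.
interface CompletedTest {
  testedAt: Date; // when the sample was run
}

function quotaProgress(
  tests: CompletedTest[],
  from: Date,
  to: Date,
  quota: number
): string {
  // Keep only the tests whose timestamp falls inside the selected range.
  const done = tests.filter(
    (t) => t.testedAt >= from && t.testedAt <= to
  ).length;
  return `${done} of ${quota} tests completed`;
}

// Example: two of these three tests fall inside the first week of June.
const tests = [
  { testedAt: new Date("2020-06-01") },
  { testedAt: new Date("2020-06-03") },
  { testedAt: new Date("2020-06-20") },
];
console.log(
  quotaProgress(tests, new Date("2020-06-01"), new Date("2020-06-07"), 10)
); // "2 of 10 tests completed"
```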
With the low fidelity mockup in hand, we wanted to evaluate the usability of the user-facing wireframe using tasks that reflect scenarios users might face. We also wanted to understand what interactions users would expect, since this was a single static wireframe.
We tested with UX students following a scenario-driven usability protocol, which took place in the Experience Studio classroom using our printed wireframe. We gave a brief introduction to OmniVis' device and presented scenarios for a field researcher. The tasks in the scenario were based on takeaways from our earlier interviews. Our protocol from this round of testing can be found in the index.
We then identified takeaways through an affinity diagramming activity and made the necessary iterations to move forward.
Our team wanted to create different visualizations for the filtering and sorting features we had identified as essential. For this round of iteration, we created even lower fidelity, barebones mockups of our designs, focusing mainly on ways of showing filtering and sorting.
We each kept in mind the essential features:
1. Display essential data from results in a scannable way, with a focus on location
2. Filtering by date, time, body of water, and clean/contaminated
3. Inputting keywords from notes
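To make these filter criteria concrete, below is a minimal TypeScript sketch of how they might combine over a list of test results. The `TestResult` and `Filters` shapes are hypothetical, invented for illustration rather than taken from the OmniVis portal.

```typescript
// Hypothetical shape of one test result card; field names are
// invented for illustration, not the real OmniVis data model.
interface TestResult {
  location: string;      // site name shown on the card
  bodyOfWater: string;   // e.g. a specific lake or river
  testedAt: Date;        // date and time of the test
  contaminated: boolean; // "contaminated" vs. "clean"
  notes: string;         // field researcher's free-text notes
}

// Every filter is optional: an unset field means "don't filter on this".
interface Filters {
  from?: Date;
  to?: Date;
  bodyOfWater?: string;
  contaminated?: boolean;
  keyword?: string; // matched against the notes field
}

// Return only the results that pass every active filter.
function filterResults(results: TestResult[], f: Filters): TestResult[] {
  return results.filter(
    (r) =>
      (f.from === undefined || r.testedAt >= f.from) &&
      (f.to === undefined || r.testedAt <= f.to) &&
      (f.bodyOfWater === undefined || r.bodyOfWater === f.bodyOfWater) &&
      (f.contaminated === undefined || r.contaminated === f.contaminated) &&
      (f.keyword === undefined ||
        r.notes.toLowerCase().includes(f.keyword.toLowerCase()))
  );
}
```

Making every criterion optional mirrors the intended UI: researchers can mix and match date range, body of water, clean/contaminated status, and note keywords without being forced to set all of them.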
Each team member was tasked with creating wireframes on a Whimsical board. We evaluated the interfaces and chose three distinct layouts to test with.
In this round of user testing, we wanted to understand which layout makes the most sense to users and is most effective for the tasks a field researcher performs on the portal, and whether the filtering options are intuitive and familiar.
We used a few variations of wireframes/low fidelity mockups that took different approaches to the UI layout, and usability tested them with a variety of people in an academic building at Purdue.
We wanted to see what seemed familiar to users and how they would want to filter the results. We used the filtration methods derived from our secondary research and the priorities we knew from interviews.
Based on our multiple rounds of testing, we created a high fidelity mockup in Figma.
Our team made adjustments coming out of the lo-fi/med-fi rounds. We removed the time slider, replaced typing in a time with a calendar that opens when the date is clicked, added patterns to the cards, and added accommodations for color-blindness.
We decided on a color palette based on brand guides, and we defined micro-interactions such as hovering on cards, hovering on pins, and filter interactions.
Our three goals going into this round of testing were:
1. to see if users could understand the filters, specifically the dates and time
2. to see if users could understand how the map and cards connect, and what the pins on the map mean
3. to see if users felt that all of the information on this side of the portal was connected and easy to use/understand
As a team, we conducted 5 tests with users who had no background in the field or with OmniVis. Due to the complications brought on by COVID-19, the people we could test with were limited. During the tests, we showed our high fidelity mockup and had users run through tasks/scenarios to test the functionality of the portal. Our protocol can be found in the index.
After synthesizing our takeaways, we iterated on our high fidelity prototypes.