BCycle is a public bicycle‑sharing system in Indianapolis. Through a simple mobile app, riders can find nearby stations, see how many bikes or docks are available, differentiate between standard bikes and e‑bikes, unlock a bike and pay for rides. Residents with an IndyRides Free annual pass can ride any bike free for the first 30 minutes, and the network includes about 50 stations operating 24×7 across the city. With the app acting as the primary touchpoint for planning, starting and ending trips, the quality of its experience directly influences how people decide to travel.
I led a student team through a mixed‑methods usability evaluation of the BCycle app during Fall 2025. We chose this project because bike‑share plays an important role in campus life and urban mobility, yet its usability had not been systematically studied. Over several weeks we planned, conducted and synthesized research to understand how well the app supports riders in real‑world conditions and to identify opportunities to strengthen trust, clarity and convenience.
Bike‑share trips are spontaneous and time‑sensitive. Riders often decide on the fly whether a station has a working bike, whether they can unlock it quickly, and how long they can ride before fees begin. When information is missing or inaccurate, people fall back on workarounds: checking multiple stations, switching apps for navigation, or setting their own timers. These extra steps increase cognitive load and diminish confidence.
Unlike other mobile services, the BCycle app must support users outdoors, often while walking or in inclement weather. Glare, rain or cold hands make it hard to read small icons or dive into menus. Connectivity gaps can delay updates. If riders are unsure whether a bike is unlocked or a ride has ended, they risk unexpected charges. Designing for this environment requires minimizing friction, providing clear feedback and accommodating real‑world constraints.

To capture what riders say, do and feel, we combined qualitative and quantitative methods: interviews, think‑aloud usability sessions, surveys, and a heuristic evaluation.
Participants were primarily students and young adults (ages 18–30) recruited near stations. Combining methods let us triangulate findings: interviews uncovered expectations, think‑aloud sessions revealed in‑moment challenges, surveys provided broad sentiment, and heuristics highlighted systemic issues. This mixed approach helped us understand both why riders behave a certain way and where the interface breaks down.

The map is the starting point for every task; all participants quickly found stations and appreciated the current‑location feature. However, availability data was often wrong. Riders saw mismatches between the number of bikes or docks shown in the app and what was actually on site. Quotes such as “even though there are seven available, it shows only three” and “there have been bikes in there, but it just shows 0” illustrate how inaccurate counts erode trust. To cope, participants checked multiple stations or simply walked until they found bikes. When an app that should simplify planning forces users to double‑check reality, confidence suffers.
