Wearables track sleep, heart rate, and recovery in real time. Logging a meal still means searching a database, estimating a portion, and typing it in by hand. That gap has long been one of digital health's most stubborn problems.
Polyverse, the developer behind CalCam, is trying to close it. The app uses Google's Gemini 2.0 Flash model to identify meals and generate calorie and nutrient breakdowns from a single image. The user takes a photo; CalCam handles the rest.
Where wearables stop
Consumer health platforms have built deep insight into body metrics. Nutrition falls outside that loop. Unlike fitness trackers that automatically measure heart rate and step counts, nutrition tools have relied largely on manual input, a structural mismatch that researchers have documented for years.
Nutrula found that nearly 80% of calorie-tracker users stop logging within the first two weeks, with traditional tracking requiring 15 to 23 minutes of data entry per day across three to five meals. A decade-long scoping review found that the time cost of manual entry is a major contributor to low adherence across calorie-counting apps.
A user who stops logging food after two weeks leaves an incomplete data picture. The wearable keeps working; the food log goes dark.
Making food machine-readable
CalCam addresses this problem at the input layer. According to Google's developer blog, the app uses Gemini 2.0 Flash to process a photo of a meal with prompts that identify the dish, estimate portion weight, and calculate macronutrient distribution, including sauces and condiments that manual loggers typically omit. The model returns structured output that feeds directly into the CalCam interface, eliminating the parsing step between analysis and display.
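The pipeline described above can be sketched with Google's `google-genai` Python SDK. This is a minimal illustration, not CalCam's actual code: the schema fields, prompt wording, and the `analyze_meal` function are all assumptions; only the model name (`gemini-2.0-flash`) and the structured-output approach come from the article.

```python
# Hypothetical sketch: one meal photo in, structured nutrition JSON out.
# Assumes the google-genai SDK and a GEMINI_API_KEY in the environment.
# Schema and prompt are illustrative, not CalCam's implementation.
import json

# Constraining the response to a JSON schema lets the result feed
# straight into a UI with no free-text parsing step.
NUTRITION_SCHEMA = {
    "type": "object",
    "properties": {
        "dish": {"type": "string"},
        "estimated_weight_g": {"type": "number"},
        "calories_kcal": {"type": "number"},
        "macros": {
            "type": "object",
            "properties": {
                "protein_g": {"type": "number"},
                "carbs_g": {"type": "number"},
                "fat_g": {"type": "number"},
            },
        },
        # Sauces and condiments are requested explicitly, since manual
        # loggers tend to omit them.
        "condiments": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["dish", "calories_kcal"],
}

PROMPT = (
    "Identify the meal in the photo, estimate its portion weight in grams, "
    "and return calories and macronutrients, including any sauces and "
    "condiments visible on the plate."
)


def analyze_meal(image_bytes: bytes) -> dict:
    """Send a single photo to Gemini 2.0 Flash and return parsed JSON."""
    from google import genai  # imported here so the sketch loads without the SDK
    from google.genai import types

    client = genai.Client()
    response = client.models.generate_content(
        model="gemini-2.0-flash",
        contents=[
            types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
            PROMPT,
        ],
        config=types.GenerateContentConfig(
            response_mime_type="application/json",
            response_schema=NUTRITION_SCHEMA,
        ),
    )
    return json.loads(response.text)
```

A single round trip returning display-ready fields is what makes the one-photo workflow possible; any design that required a second parsing or lookup pass would reintroduce the friction the app is trying to remove.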
Speed was a deliberate design constraint. Polyverse reported that results arrived roughly one second faster after switching to Gemini 2.0 Flash than with previous models, along with a 20% increase in user satisfaction with food-recognition results. For an application whose value depends on frictionless logging, latency matters as much as accuracy.
Previous image recognizers struggled with plated dishes, mixed meals, and restaurant portions. Multimodal models handle these cases differently: Gemini 2.0 Flash identifies not only the dish but also sauces and seasonings, contributing to a more complete macronutrient analysis. That moves food logging from a task requiring human interpretation to one the model can handle in the background.
The data gap
Health platforms have spent the last decade building retention around body metrics. Food was the missing variable. Feeding.FM reported that comprehensive platforms combining exercise, recovery, and nutrition data outperformed single-metric tools in user retention in 2025, with analysts identifying nutrition as the leading gap for platforms heading into 2026.
Image-based logging eliminates the activation cost that manual entry creates. A 2021 meta-analysis found that consistent self-monitoring of food intake increases the likelihood of clinically significant weight loss within 12 months. The bottleneck is not motivation; it is friction.
Consumer appetite for AI-powered health tools is growing along with the technology. PYMNTS Intelligence found that nearly one in four American consumers would let an AI agent help manage their health and wellness information.
Polyverse plans to expand CalCam with AI-driven recipes and personalized coaching features. The company did not disclose user numbers or revenue. A wider launch is scheduled for later this year.