The Good Tech Companies - AI in Product Design: Three Practical Cases From inDrive
Episode Date: August 28, 2025. This story was originally published on HackerNoon at: https://hackernoon.com/ai-in-product-design-three-practical-cases-from-indrive. AI in inDrive design: UX interviews... without interpreters, automated Figma localization, and fast realistic visuals for product and promo. Check more stories related to product-management at: https://hackernoon.com/c/product-management. You can also check exclusive content about #product-design, #ai, #indrive, #figma, #mobile-apps, #ai-in-product-design, #automating-localization, #good-company, and more. This story was written by: @indrivetech. Learn more about this writer by checking @indrivetech's about page, and for more stories, please visit hackernoon.com. inDrive's product design team is using AI for field UX interviews across different countries. They also use it to automate routine tasks in Figma.
Transcript
This audio is presented by Hacker Noon, where anyone can learn anything about any technology.
AI in Product Design: Three Practical Cases From inDrive, by indrivetech.
In the inDrive product design team, AI is already a working tool. It's used for field UX interviews across different countries, for automating routine tasks in Figma, and for quickly generating realistic visuals from illustrations.
Below are three real-life stories: how designers implemented AI solutions, what challenges they faced, and what results they achieved.
Research without intermediaries: Polina Gladcova's experience.
Previously, interviews with drivers in Egypt and Latin America were conducted with the help
of local colleagues acting as interpreters.
This was helpful, but since they were not professional researchers, they often wanted to assist the drivers, suggesting answers or showing where to click in the app.
To make the research more accurate, Polina decided to run the interviews herself using voice-based ChatGPT. The designer speaks Russian, the driver hears the translation into Arabic, the driver answers, and ChatGPT translates back. In practice, it looked like this: preparing the interview script in ChatGPT beforehand; during the ride, enabling voice chat and setting a translator prompt with dialect clarification (e.g., Egyptian Arabic) when needed; after the interview, asking ChatGPT for a detailed review of the conversation and a summary table for a series (e.g., 10 interviews) covering recurring patterns, differences, and hypotheses; and simultaneously recording audio → transcribing it in another AI tool → feeding the text into ChatGPT for more accurate processing.

Challenges: sometimes ChatGPT got stuck and repeated the Russian phrase instead of translating it, mixed up participants (e.g., attributed responses to the wrong driver), and during long interview series it periodically lost context. What helped: specifying the dialect in the prompt, working via text (audio → transcript → ChatGPT), and manual monitoring during interviews. Result: time savings of about three to five times compared to the traditional interpreter scheme, and a cleaner experiment with less influence from third parties and a calmer one-on-one dialogue. In daily work: regular use of ChatGPT for translations and textual feedback on UX logic.
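The story itself contains no code, but the audio → transcript → ChatGPT path lends itself to a script. Below is a minimal sketch of that post-interview step, assuming the OpenAI Node SDK; the model names, file name, and prompt wording are illustrative placeholders, not the team's actual setup.

```typescript
// Hypothetical sketch of the post-interview pipeline described above:
// audio -> transcript -> ChatGPT analysis. Assumes the OpenAI Node SDK
// (npm: openai); models and prompt wording are illustrative only.
import fs from "node:fs";
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function analyzeInterview(audioPath: string): Promise<string> {
  // Step 1: transcribe the recorded ride audio.
  const transcript = await client.audio.transcriptions.create({
    file: fs.createReadStream(audioPath),
    model: "whisper-1",
  });

  // Step 2: feed the raw text to ChatGPT for review and pattern extraction.
  const review = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content:
          "You are a UX research assistant. The transcript mixes Russian " +
          "(interviewer) and Egyptian Arabic (driver). Translate everything " +
          "to English, attribute each reply to the correct speaker, and " +
          "summarize recurring patterns, differences, and hypotheses.",
      },
      { role: "user", content: transcript.text },
    ],
  });

  return review.choices[0].message.content ?? "";
}

analyzeInterview("ride-recording.mp3").then(console.log);
```

Working from a transcript this way matches the fix the team describes: a text path avoids the voice mode's tendency to lose context over long sessions.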
Automating localization and routine in Figma: Sergei Goldsov's experience.
Sergei addresses repetitive tasks by building Figma plugins. The approach is simple: take a real pain point from Figma forums, chats, or personal practice; formulate a detailed request; and create a plugin using the combination of ChatGPT plus the Figma Plugin API documentation. By his estimate, ChatGPT generates up to 80% of the code. The rest is manual review and refinement: HTML, CSS, JS, and testing in the editor and in Figma. Publicly available plugins by Sergei: Chat Builder (9,500+ users, featured in Figma Weekly), Chart BG (approximately 3,600 users), and Border Mockup (approximately 2,600 users).
One internal case was solving a recurring pain point: the monotonous manual work of creating translation keys and linking them to layers in Figma. To address this, Sergei built the Text to Strings plugin. It scans the entire file, including groups, frames, and auto layout, finds all text layers, and converts them into text variables. The plugin automatically cleans variable names according to API requirements. If the same text is repeated, only one variable is created and all relevant layers are linked to it.
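The article describes the plugin's behavior rather than its code. As a rough sketch of how such a scan-dedupe-bind loop could look with the Figma Plugin API (the name-cleaning rule is invented, and exact Variables API signatures vary between plugin API versions):

```typescript
// Hypothetical core of a Text-to-Strings-style plugin: find every text
// layer, dedupe by content, create one string variable per unique text,
// and bind each layer to its variable. The cleanup rule is illustrative;
// check the current Figma Plugin API docs for exact types, and note that
// binding text content may also require figma.loadFontAsync per node.
async function textToStrings(): Promise<void> {
  const collection = figma.variables.createVariableCollection("Strings");
  const byText = new Map<string, Variable>();

  // findAll walks groups, frames, and auto layout containers alike.
  const textNodes = figma.currentPage.findAll(
    (n) => n.type === "TEXT"
  ) as TextNode[];

  for (const node of textNodes) {
    const content = node.characters;
    let variable = byText.get(content);
    if (!variable) {
      // Clean the name to satisfy variable-naming requirements.
      const name = content.trim().replace(/[^\w]+/g, "_").slice(0, 40);
      variable = figma.variables.createVariable(name, collection, "STRING");
      variable.setValueForMode(collection.defaultModeId, content);
      byText.set(content, variable);
    }
    // Repeated text reuses the same variable.
    node.setBoundVariable("characters", variable);
  }
  figma.closePlugin(`Converted ${textNodes.length} text layers`);
}

textToStrings();
```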
Another internal case was automating translations in Figma. Previously, localizing layouts was tedious routine work: texts had to be copied manually, separate versions of screens created, and updates applied every time something changed. To remove this monotony, Sergei set up a process based on Figma variables and the Sheet to Variables plugin. Texts are automatically turned into variable keys, translators work only in Google Sheets, and the designer imports the completed translations via CSV. Once the variables are linked to the layout layers, switching the language in Figma takes just a couple of clicks, and all texts update instantly (a sketch of the import step appears below).

What to keep in mind: create highly detailed prompts, and even ask all clarifying questions before generating; manually validate code and cross-check with documentation, since the model can get the working logic wrong or suggest outdated API calls.
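Here is the promised sketch of the CSV import step, as a hypothetical implementation that maps one variable-collection mode to each language; the CSV layout and function name are invented for illustration:

```typescript
// Hypothetical import step for a Sheet-to-Variables-style flow: each CSV
// row is key,en,es,ar; each language becomes a mode on the collection and
// each key a string variable with one value per mode. Parsing is naive
// (no quoted commas); a real plugin would use a proper CSV parser.
function importTranslations(csv: string): void {
  const [header, ...rows] = csv.trim().split("\n").map((r) => r.split(","));
  const languages = header.slice(1); // e.g. ["en", "es", "ar"]

  const collection = figma.variables.createVariableCollection("Localization");
  // Reuse the default mode for the first language, add modes for the rest.
  const modeIds = [collection.defaultModeId];
  collection.renameMode(collection.defaultModeId, languages[0]);
  for (const lang of languages.slice(1)) {
    modeIds.push(collection.addMode(lang));
  }

  for (const [key, ...translations] of rows) {
    const variable = figma.variables.createVariable(key, collection, "STRING");
    translations.forEach((text, i) => {
      variable.setValueForMode(modeIds[i], text);
    });
  }
  // Switching language is then a couple of clicks: change the collection's
  // mode on a frame and every bound text layer updates instantly.
}
```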
The popularity of Chat Builder was boosted by community posts and being featured in an industry digest.
From illustrations to realistic photos: Arthur Siddykov's experience. In inDrive's food tech direction, flat illustrations had long been used. Arthur wanted to test the hypothesis: realistic images of products perform better because people see exactly what they are buying. Conversion tests are still ahead, but the immediate task, to quickly assemble quality assets, was already solved. How it was done: taking existing illustrations (bananas, bread, etc.) and arranging them directly in the layout; asking ChatGPT to re-render the composition in a realistic style; receiving ready-to-use assets with transparent backgrounds; and applying them in product and promo. Sample prompt: "Without changing the composition, make them realistic, like in magazines, on a transparent background."
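Arthur worked directly in the ChatGPT interface. For teams wanting to script the same re-render step, a rough sketch against OpenAI's image edits endpoint might look like the following; the model name and the transparent-background option are assumptions to verify against the current API reference:

```typescript
// Hypothetical automation of the re-render step: send the arranged
// illustration to the image edits endpoint with the article's prompt.
// Assumes the OpenAI Node SDK; gpt-image-1 and its "background" option
// should be checked against the current API reference.
import fs from "node:fs";
import OpenAI from "openai";

const client = new OpenAI();

async function makeRealistic(layoutPng: string): Promise<void> {
  const result = await client.images.edit({
    model: "gpt-image-1",
    image: fs.createReadStream(layoutPng),
    prompt:
      "Without changing the composition, make them realistic, " +
      "like in magazines, on a transparent background.",
    background: "transparent",
  });

  // gpt-image-1 returns base64 image data.
  const b64 = result.data?.[0]?.b64_json;
  if (b64) fs.writeFileSync("realistic.png", Buffer.from(b64, "base64"));
}

makeRealistic("layout.png");
```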
What was observed: first outputs were often good enough; during long sessions, errors began to appear; usually two attempts per image were enough. Sometimes the model "forgot" about transparency and left a checkerboard pattern, which had to be hidden. Result: a first acceptable variant in approximately 15 minutes, and a complete set of assets in two evenings instead of lengthy approvals with photo shoots or stock purchases. The approach is already in use in product and promo.
In parallel, 3D images were generated for
interfaces, and outputs from photoshoots were mixed with AI generations. Key takeaways from the
three cases. In interviews, voice-based ChatGPT enabled direct contact with respondents, sped up analysis, and reduced interpreter influence, with time savings estimated at three to five times. Layout preparation for localization became faster thanks to automatic variable creation and a CSV translation cycle, with language switching in one click. Visual generation produced realistic assets in two evenings, with first variants in minutes, enough to quickly show ideas to stakeholders and move them forward. Conclusion: these are three concrete ways designers already use AI in daily work, as a translator and secretary for interviews, as a co-author of plugin code, and as a tool for fast, realistic visuals. In each case, the designers describe the limitations they encountered and how they overcame them. The overall benefits are clear: greater speed, less manual routine, and cleaner data for making design decisions. Thank you for listening to this HackerNoon story,
read by artificial intelligence. Visit hackernoon.com to read, write, learn and publish.