Grace Alexander

Hi, I'm

Grace

I'm a senior user researcher based in San Francisco. I love being creative and spending time outdoors.

I'm a Senior UX Researcher at Apple.

I focus on mixed-methods research, with the goal of grounding product decisions in real user insight. These projects are personal work that reflect how I think and create outside of Apple.

Portfolio
Lamp with a wood base and translucent porcelain shade

Lighting Design.

A two-year project in translucent porcelain, in pursuit of a warm, atmospheric glow.

I was living in an old San Francisco apartment with warm, flattering light — most of it coming through frosted glass fixtures and incandescent bulbs. I wanted to recreate that quality — that warm, atmospheric glow — with a ceramic lamp.

I'd heard that porcelain, thrown thin enough, can be translucent. That became the starting point: two years of making, testing, and iterating toward a ceramic light cover that glows.

01

Prove the Concept

Translucent porcelain cup placed on top of a light fixture — proof of concept

The first question was whether porcelain could be thrown thin enough to let light through. I made a test piece, and it worked — the translucency was clear, and the concept held.

Here you can see the test piece placed on top of a light fixture in my apartment.

Next steps

With the concept proven, I needed to design the full lamp — how the lighting component would fit inside, how the parts would connect, and how to make it a standalone object.

02

Develop a Plan

With the concept proven, I sketched out a plan. I needed to figure out how to fit the lighting component inside the piece, how the parts would connect, and how to make it sit flat on a surface. The goal was a fully standalone lamp — one self-contained object you could plug in and use.

Next steps

Build it and test it — see how close the reality was to the plan.

03

Initial Design

Second prototype with translucent porcelain shade
Second prototype with shade removed

The first version got the overall look right, but had real problems. The light fixture sat separately from the rest of the piece, the cord exit felt unresolved, and the components didn't come together as one. Only about one in every five shade attempts came out properly translucent.

Next steps

Improve the shade success rate and find a method for making it consistently. Integrate the components so the lamp reads as one unified object.

04

Iterate on the Shade

Lamp with ceramic base and translucent ceramic shade
Inside the lamp with the light off and shade removed

After trying hand building, carving, and other methods, I landed on slip casting. I threw the lampshade form on the wheel, made a plaster mold from it, and used the mold to cast extremely thin porcelain shells. I also took a slip casting class to learn the technique properly.

Traditionally, slip is left in a mold for 30–40 minutes to build up the walls. For these, I pulled it at five minutes — well outside the standard parameters of the method, but the only way to get the walls thin enough to glow.

Next steps

Raw white porcelain looked right for the base, but getting it to fit the shade precisely was difficult. Clay shrinks about 15% during firing, and starts shrinking as soon as it begins to dry. The shade used a liquid slip clay while the base was thrown from a different porcelain, so they had different shrinkage rates and were always at different stages of dryness. Aligning the two consistently was a persistent problem.
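That roughly 15% shrinkage drives the sizing math: to end up with a fired piece of a given size, the wet piece has to be thrown proportionally larger. A quick sketch — the shrinkage rate is an approximation, since real shrinkage varies by clay body and firing:

```python
# Rough sizing helper. Clay shrinks ~15% from wet to fired -- an
# approximation; actual shrinkage varies by clay body and firing.
SHRINKAGE = 0.15

def wet_diameter(target_fired_cm):
    """Diameter to throw at so the fired piece hits the target size."""
    return target_fired_cm / (1 - SHRINKAGE)

# To end with a 10 cm fired opening, throw it at about 11.8 cm.
round(wet_diameter(10), 1)  # -> 11.8
```

The catch described above is that two different clays shrink at two different rates, and a piece starts shrinking as soon as it dries — so this math only holds per material, which is why matching the slip-cast shade to a thrown base never lined up reliably.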

05

Iterate on the Base

Lamp with a wood base and translucent porcelain shade
Wooden base carved out to fit the ceramic lampshade, with lighting fixture attached inside

While ceramics is the medium I'm most comfortable in, it was time to try a material that could solve some of the issues a ceramic base introduced. I took a wood carving class, learned to use the lathe, and switched the base to wood. Wood made it straightforward to install the lamp hardware and drill a clean hole for the cord — both things that had always been messy in ceramic.

The bigger gain was fit. With a wood base, I could bring the fully fired shade to the lathe and carve the groove to match it exactly — test-fitting the shade as I went, shaving a little more until it sat properly. The shrinkage problem went away.

Slip casting for the shade, wood turning for the base — after two years of experimentation, I had a consistent, replicable method for making the light.

Result

A standalone, plug-in lamp — one object, a repeatable process, and the warm glow I'd set out to recreate.

Portfolio
Conversations with Women — zine cover art

Conversations with Women.

A zine exploring the everyday feelings we don't always talk about.

I've always been curious about the feelings people carry through ordinary days — not the hardest moments, but the quiet, everyday ones that rarely get named out loud. This zine grew out of that curiosity.

I took a zine-making class to learn the craft, designing and printing the cover art by hand. Then I set out to interview women — not about trauma or turning points, but about the rhythms of regular life. Every conversation followed the same five questions, starting with: tell me about your day.

The goal was to make space for these everyday feelings and the conversations around them.

Crafting the five questions took care. I wanted each one to be open enough that participants could take it anywhere — no specific emotions named, no examples offered that might anchor their thinking. The intent was to let each person fully describe their own experience, in their own words, without being steered toward what I expected to hear.

After the first few interviews, I was surprised by the range of feelings people named — I'd expected more overlap with my own. The variety was a useful reminder of how differently we each experience the world.

01

Tell me about your day so far — what's happened?

02

Is there a feeling you've been feeling a lot recently?

03

Tell me about a time recently when you felt that way.

04

How do you feel about the role that feeling plays in your life?

05

In the future, what do you want your relationship to that feeling to look like?

Full interviews coming soon.

Portfolio
Diagram of an iterative product loop cycling through usability testing, user feedback, development, and new design

The AI Feedback Loop.

What happens when you take design, dev, and product out of the product lifecycle?

This project explores what happens when AI and usability testing are connected in a closed loop with no designer or developer in the middle. The premise is simple: an AI generates a prototype, real users test it, and their feedback goes straight back into the AI to generate the next version.

The interesting question isn't whether AI can design — it's what kind of design emerges when the only input is real user behavior, iterated on continuously and automatically.

Tools like UserTesting.com already let you connect to users on demand. In a fully realized version of this loop, a design could go from prototype to tested to revised without a human decision point in between. And since most usability testing doesn't require specialized expertise, this could be an inexpensive way to iterate.

To be clear: I believe humans should be in the loop with AI. This project is a deliberate experiment to see what happens when they're not.
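The loop itself is simple enough to sketch. Every function below is a hypothetical stand-in, not a real API: in practice the generation step would be a prompt to an AI model, and the testing step would be moderated sessions with real participants.

```python
# A minimal sketch of the closed loop -- all function names here are
# hypothetical stand-ins, not real APIs.

def run_usability_test(prototype, n_users=3):
    """Stand-in for a round of moderated sessions with real users."""
    return [f"finding from user {i + 1}" for i in range(n_users)]

def generate_prototype(round_num, feedback):
    """Stand-in for prompting the AI with the previous round's feedback."""
    return {"iteration": round_num, "feedback_applied": list(feedback)}

def feedback_loop(rounds=3):
    feedback = []
    prototype = None
    for round_num in range(1, rounds + 1):
        # The AI generates the next version from accumulated feedback...
        prototype = generate_prototype(round_num, feedback)
        # ...real users test it, and their feedback feeds straight back
        # in, with no designer or developer in the middle.
        feedback = run_usability_test(prototype)
    return prototype
```

In the actual project, each round used three users and the generation step was Claude; the sketch only shows the control flow.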

App home screen with a tree photo and two action buttons: Take a Photo and Choose from Library
App results screen identifying the tree as a White Oak with detailed information about the species

It took more iterations than I expected to get to a decent-looking app. But, surprisingly, it did get there.

That said, the designs Claude produced didn't always follow best practices or standard design patterns — it was optimizing for the feedback it received, not for what a skilled designer would know to do from the start.

Claude also didn't prioritize findings the way I would when conducting research — it treated feedback from one user the same as feedback from three. A more experienced researcher would weight findings by how many users raised them. More specific instructions, a tighter prompt structure, or a more specific skill could help address this.
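One way to encode that weighting is to count how many distinct users raised each finding and only act on findings above a threshold. A small sketch — the finding strings are illustrative, not actual session data:

```python
from collections import Counter

def prioritize_findings(sessions, min_users=2):
    """Keep findings raised by at least `min_users` participants.

    `sessions` is a list of per-user finding lists; repeats within one
    user's session count once.
    """
    counts = Counter()
    for findings in sessions:
        counts.update(set(findings))  # one vote per user per finding
    # Rank by how many users raised each finding, most widespread first.
    ranked = sorted(counts.items(), key=lambda kv: -kv[1])
    return [(finding, n) for finding, n in ranked if n >= min_users]

# Illustrative example: a finding raised by all three users survives,
# a one-off suggestion is filtered out.
sessions = [
    ["? icon looks unfinished", "want habitat map"],
    ["? icon looks unfinished", "didn't notice scrolling"],
    ["? icon looks unfinished", "didn't notice scrolling"],
]
prioritize_findings(sessions, min_users=2)
# -> [('? icon looks unfinished', 3), ("didn't notice scrolling", 2)]
```

This is roughly the judgment an experienced researcher applies by instinct; folding it into the prompt is the "tighter structure" the project needed.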

01

Iteration One

Iteration 1 home screen — camera viewfinder UI with a question mark placeholder, a button to identify a tree, and the prompt 'point your camera at any tree or leaf'
Iteration 1 results screen — tree identified as a White Oak, with an About section and a fun fact about the species

Research findings

The core flow worked — users could identify a tree without friction. Three problems emerged: a "?" placeholder icon that everyone flagged as unfinished, a results page that didn't telegraph scrolling (one user never found the fun fact or the CTA), and a consistent ask for more depth — reference photos, tappable match cards, a habitat map.

AI's recommendations

Removed the redundant Recent strip from the home screen. Added a sticky "Identify Another Tree" button outside the scroll view. Made each match card expandable with a Wikipedia detail sheet. Linked the habitat card to an Apple Maps search.

My take

Three of the four recommendations were each supported by only one user — a real generalizability problem. The AI also missed the "?" icon entirely, despite all three users raising it independently. That's both a sampling problem and a prompt problem. I needed to tighten both.

02

Iteration Two

Iteration 2 home screen — camera viewfinder with a green circle placeholder, the prompt 'point your camera at any tree or leaf', a button to identify a tree, and a History button in the top right
Iteration 2 results screen — top matches showing White Oak at 91% confidence, followed by Bur Oak and Chinkapin Oak, with an About section and an option to identify another tree

Research findings

Results improved but the landing page was unanimously called "funny" or unpolished. The bigger issue: the green "Identify a Tree" button was so dominant that users interpreted the confirmation dialog as a continuation of the same step. They kept expecting to press the main button again rather than choose from the dialog.

AI's recommendations

Redesigned the landing page with a centered hero structure — app name and tagline at the top, illustration in the middle, buttons anchored at the bottom. Replaced the single CTA with two explicit buttons ("Take Photo" / "Choose from Library"), eliminating the dialog entirely. Added reference photos to expanded match cards. Removed confusing scroll-hint chevrons.

My take

The biggest visual leap of the three rounds. The landing page fix was the right call. But the AI was still overgeneralizing — several recommendations came from two users rather than three. The pattern-matching was improving but the threshold for what counted as a "finding" was still too low.

03

Iteration Three

Iteration 3 home screen — green circle placeholder, the tagline 'identify any tree instantly', two action buttons to take a photo or choose from library, and a History button in the top right
Iteration 3 results screen — tree identified as White Oak at 91% confidence, with two other oak species as alternate matches, an About section, and a button to identify another tree

Research findings

The identify flow and results screen stopped drawing complaints entirely — a meaningful improvement from round one. The one issue all three users raised independently, in almost identical terms: a black "?" box sitting on top of the green circle in the hero. The tree emoji wasn't rendering in the iOS simulator and was falling back to a placeholder. One rendering bug, otherwise working.

AI's recommendations

Replaced the emoji with an SF Symbol (tree.fill) for reliable rendering across all iOS devices. Nudged the fun fact text back from secondary grey to primary — the deprioritization from the previous round had gone slightly too far.

My take

Three rounds in, the loop had produced a working, coherent app. That surprised me. What it exposed clearly: AI is good at pattern-matching feedback into UI changes, but it consistently overgeneralizes from single data points and misses what users aren't saying. The loop may work — but having an initial design pass could save a lot of iterations.

I love to build beautiful, functional things.

I'm a UX researcher and designer with an engineering background.

At Northwestern, I studied Computer Science and Design.
Within the CS curriculum, I focused on Human-Computer Interaction and Machine Learning. I was a TA for five quarters across different CS classes. This taught me how to break down complex topics and explain them clearly, and gave me my first real experience in mentorship and building relationships through teaching.

I started at Apple in a cross-functional rotational program.
Across my rotations, I worked in product management, front-end development, and user research, which gave me a strong foundation for working across disciplines. From there, I grew into research roles of increasing scope, leading research on Apple TV+ before building Apple Legal's research practice from the ground up.

Outside of work, you'll likely find me at my ceramics studio.
I also love backpacking, rock climbing, and exploring in the Sierras.

Grace Alexander throwing a pot on the wheel
Handmade ceramic teapot and mug set
Handmade ceramic citrus juicer
Handmade ceramic plant pot with leafy plant

Let's connect!

I'm always happy to connect with other UX researchers — I'm curious about what the research experience looks like across different companies and teams. Feel free to reach out!