Building a 3D App Icon in Blender MCP, Then Making It Physics-Enabled in visionOS

claude-code, ai-development, visionos, blender, realitykit, development

Today’s session connected two things I hadn’t expected to connect: the Blender MCP bridge and visionOS’s RealityKit physics. It started as an icon audit and ended as a spatial launch experience with 38 throwable constellation nodes floating in your real room.

The Audit

I dropped the NARSSR SVG icon files into the conversation — four files that define the icon in layers: N letterform, constellation edges, constellation nodes, composite preview. The icon is designed with hidden letters: the graph nodes inside the N spell out A R S S as constellations. Clever. Also complicated to render.

The visionOS solid image stack was in rough shape:

  • Back layer: plain black. The N was completely missing.
  • Middle layer: white constellation wireframe on a white background. Effectively invisible.
  • Front layer: the nodes only. Actually okay.

Vision Pro uses these three layers for a parallax effect — Back sits deepest, Front floats closest. If Back is just a black square and Middle is invisible, you’re losing the whole depth story.

Blender as an Asset Pipeline

The Blender MCP bridge (blender-mcp addon, port 9876) lets Claude write Python directly into Blender’s scripting context. So instead of manually recreating the SVG geometry, I described the conversion once and let it run.
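Under the hood the bridge is just a TCP socket on port 9876. A minimal sketch of talking to it directly from Python; note the `execute_code` command name and the JSON envelope are assumptions about the addon's wire format, so check them against your installed version of blender-mcp:

```python
import json
import socket

BLENDER_MCP_PORT = 9876  # default port for the blender-mcp addon's socket server

def build_command(code: str) -> bytes:
    # ASSUMPTION: the addon accepts {"type": "execute_code", "params": {"code": ...}};
    # verify against the blender-mcp source before relying on this.
    return json.dumps({"type": "execute_code", "params": {"code": code}}).encode()

def run_in_blender(code: str, host: str = "localhost") -> dict:
    # Opens a TCP connection to the addon and returns its JSON reply.
    with socket.create_connection((host, BLENDER_MCP_PORT), timeout=30) as sock:
        sock.sendall(build_command(code))
        return json.loads(sock.recv(1 << 16))
```

In practice Claude handles this handshake itself; the point is that the bridge is plain sockets and JSON, nothing exotic.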

The SVG coordinates map into Blender space:

SCALE = 5.0 / 512.0  # 1024px icon → ±5 Blender units

def sv(x, y):
    return ((x - 512) * SCALE, -(y - 512) * SCALE)

Each SVG element becomes geometry:

  • N strokes → emission cylinders (primitive_cylinder_add, rotated to align with the line direction using atan2(dx, -dy))
  • Constellation edges → thin cylinders, same rotation math
  • Nodes → UV spheres at each node position, radius scaled from SVG units
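The cylinder placement above reduces to a midpoint, a length, and a single Z rotation. A sketch of just the geometry, with the bpy calls omitted:

```python
import math

SCALE = 5.0 / 512.0  # same mapping as sv() above

def sv(x, y):
    return ((x - 512) * SCALE, -(y - 512) * SCALE)

def stroke_cylinder(p0, p1):
    """Placement for a cylinder spanning two SVG points.

    A fresh Blender cylinder stands along +Z; after rotating it 90 degrees
    about X it lies along +Y, so the remaining Z rotation is atan2(dx, -dy)
    in SVG coordinates (the sign flip accounts for SVG's y-down axis).
    Returns (center_xy, length, z_rotation).
    """
    x0, y0 = sv(*p0)
    x1, y1 = sv(*p1)
    center = ((x0 + x1) / 2.0, (y0 + y1) / 2.0)
    length = math.hypot(x1 - x0, y1 - y0)
    z_rot = math.atan2(p1[0] - p0[0], -(p1[1] - p0[1]))
    return center, length, z_rot
```

A vertical SVG stroke comes out with rotation π (pointing down in Blender's y-up space), a horizontal one with π/2, which is exactly what the rotated cylinders need.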

The emission materials are pulled straight from the brand hex values — #B86800 for the N verticals, #D07800 for the diagonal, the amber progression (#FF9A20#FFB045#FFC870#FFD090) for nodes by hierarchy.
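One subtlety worth spelling out: Blender's shader node colors are linear-light, while brand hex values are sRGB, so each hex needs a transfer-function conversion before it lands in an emission node. A minimal helper (the `emission.inputs` usage line is a hypothetical sketch of the node setup):

```python
def srgb_to_linear(c: float) -> float:
    # Standard sRGB electro-optical transfer function.
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def hex_to_blender_rgba(hex_color: str) -> tuple:
    # "#FF9A20" -> linear-light RGBA suitable for a shader node input.
    h = hex_color.lstrip("#")
    srgb = [int(h[i:i + 2], 16) / 255.0 for i in (0, 2, 4)]
    return (*(srgb_to_linear(c) for c in srgb), 1.0)

# Hypothetical usage inside the generated Blender script:
# emission.inputs["Color"].default_value = hex_to_blender_rgba("#FF9A20")
```

Skip this conversion and the rendered ambers come out noticeably washed out relative to the brand swatches.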

All three layers got Cycles renders at 128 samples:

  1. Back.png — background plate + N strokes, solid (no alpha)
  2. Middle.png — constellation edges only, transparent background
  3. Front.png — all 38 nodes, transparent background

The parallax now actually works: the N sits behind, the edge network floats at mid-depth, and the bright corner anchors pop in front.

The Unexpected Extension

Once the icon geometry existed as a 3D scene, there was an obvious next question: what if the app launched with these nodes floating in your actual space?

That’s a NARSSRSpatialIntroView. The same 38 node positions that define the SVG get mapped into visionOS world space:

static let scale: Float = 0.6 / 1024.0  // 60cm wide at 1m depth
static let iconZ: Float = -1.0           // 1m in front of user

var worldPos: SIMD3<Float> {
    SIMD3(
        (svgX - 512) * Self.scale,
        -(svgY - 512) * Self.scale + 0.10,  // slight Y lift
        Self.iconZ
    )
}
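The mapping is easy to sanity-check outside Xcode with a Python mirror of the same constants:

```python
SCALE = 0.6 / 1024.0   # 1024px icon -> 0.6 m wide
ICON_Z = -1.0          # 1 m in front of the user
Y_LIFT = 0.10          # slight upward offset

def world_pos(svg_x: float, svg_y: float) -> tuple:
    # Same formula as the Swift computed property, sign flip and all.
    return ((svg_x - 512) * SCALE, -(svg_y - 512) * SCALE + Y_LIFT, ICON_Z)
```

The SVG center (512, 512) lands at (0, 0.10, -1.0), and the top-left corner (0, 0) at (-0.3, 0.4, -1.0): a 60 cm icon floating at eye level, a meter away.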

Each node is a ModelEntity with a PhysicallyBasedMaterial using the same brand colors, plus:

entity.components.set(CollisionComponent(shapes: [shape]))
entity.components.set(PhysicsBodyComponent(
    shapes: [shape],
    mass: r * 800,
    material: PhysicsMaterialResource.generate(friction: 0.1, restitution: 0.55),
    mode: .dynamic
))
entity.components.set(InputTargetComponent(allowedInputTypes: .all))
entity.components.set(HoverEffectComponent())

HoverEffectComponent gives the gaze highlight for free. InputTargetComponent(allowedInputTypes: .all) catches both indirect (gaze + pinch from across the room) and direct (reach out and touch) input.

The Flick Mechanic

DragGesture doesn’t give you 3D velocity. You have to track it manually:

@State private var dragVelocity: SIMD3<Float> = .zero
@State private var lastDragPos: SIMD3<Float> = .zero
@State private var lastDragTime: Double = 0

// in onChanged:
let now = Date().timeIntervalSinceReferenceDate
let dt  = Float(now - lastDragTime)
if dt > 0.001 && lastDragTime > 0 {
    dragVelocity = (worldPos - lastDragPos) / dt
}
lastDragPos  = worldPos   // carry the trailing sample into the next frame
lastDragTime = now
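The same finite-difference logic, mirrored in Python for illustration (the tracker fields stand in for the @State vars above):

```python
class VelocityTracker:
    """Finite-difference 3D velocity from successive (position, time) samples."""

    def __init__(self):
        self.last_pos = None
        self.last_time = None
        self.velocity = (0.0, 0.0, 0.0)

    def update(self, pos, t):
        # The dt > 0.001 guard mirrors the Swift version: it skips
        # near-duplicate timestamps that would blow the quotient up.
        if self.last_time is not None:
            dt = t - self.last_time
            if dt > 0.001:
                self.velocity = tuple(
                    (p - q) / dt for p, q in zip(pos, self.last_pos)
                )
        self.last_pos, self.last_time = pos, t
        return self.velocity
```

On release, this last computed velocity is what gets scaled by the 1.6× amplifier described below.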

On drag end: switch to .dynamic physics mode and apply the tracked velocity with a 1.6× amplifier, plus a random angular velocity:

entity.components.set(
    PhysicsMotionComponent(
        linearVelocity: vel,
        angularVelocity: SIMD3<Float>(
            Float.random(in: -6...6),
            Float.random(in: -6...6),
            Float.random(in: -6...6)
        )
    )
)

The 1.6× feels right — less and the node drifts away lazily, more and it rockets. The random tumble makes it feel physical rather than scripted.

During drag, nodes go .kinematic so the physics simulation doesn’t fight the hand. On release, .dynamic resumes.

The Return

Nodes don’t float away forever. After 5 seconds of free flight, a scheduled Task fires:

Task { @MainActor in
    try? await Task.sleep(for: .seconds(5))
    guard !isReassembling, let home = homes[name] else { return }
    returnToHome(entity, pos: home)
}

returnToHome switches back to .kinematic, zeroes the velocity, then entity.move(to:) with .easeInOut over 1.2 seconds. The node glides home. The icon reforms.

“Begin Reading” triggers a staggered wave-return — letter nodes first, corners last, 18ms between each — then dismissImmersiveSpace() after a 1.8-second hold on the reformed icon.
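The wave timing adds up to a fixed dismissal deadline. A sketch of the arithmetic (node ordering simplified to index order; the real view orders letter nodes before corners):

```python
STAGGER = 0.018   # 18 ms between node launches
GLIDE = 1.2       # per-node ease-in-out return duration
HOLD = 1.8        # pause on the reformed icon before dismissal

def reassembly_timeline(node_count: int):
    # Node i starts returning at i * STAGGER and lands GLIDE seconds later;
    # the immersive space dismisses HOLD seconds after the last node settles.
    starts = [i * STAGGER for i in range(node_count)]
    dismiss_at = (starts[-1] + GLIDE + HOLD) if starts else HOLD
    return starts, dismiss_at
```

With 38 nodes the whole reassembly, hold included, runs about 3.7 seconds: long enough to read as a deliberate reformation, short enough not to gate the content.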

Zero Gravity Actually Feels Dead

PhysicsSimulationComponent.gravity = .zero makes nodes feel inert. A tiny downward pull — gravity = SIMD3<Float>(0, -0.02, 0), about 0.2% of Earth gravity — makes them feel like they’re floating in a dense fluid. Much better.
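The arithmetic behind that dense-fluid feel, as a quick sketch:

```python
EARTH_G = 9.81        # m/s^2
MICRO_G = 0.02        # the tuned downward pull

fraction = MICRO_G / EARTH_G  # roughly 0.002, i.e. about 0.2% of Earth gravity

def fall_distance(t: float, g: float = MICRO_G) -> float:
    # Free-fall drop over t seconds: d = 1/2 * g * t^2.
    return 0.5 * g * t * t
```

Over the 5-second free-flight window a node sinks only about 25 cm: visible drift, not a plummet, which is exactly the underwater quality the tuning was after.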

The API Verification Step

Before shipping any of this, I ran the key RealityKit APIs through Sosumi (the Apple docs MCP) to confirm the exact initializer signatures. Found that I’d incorrectly marked buildScene() as async (it doesn’t await anything — the fly-in Tasks are fire-and-forget). Also caught a [self] capture in gesture closures on a struct View, which is unnecessary since value types can’t create retain cycles. Minor things, but worth catching before the visionOS build.

What’s Next

The intro needs real-device validation — physics tuning that feels right in a simulator often needs adjusting in an actual room. The entity.move(to:relativeTo:duration:timingFunction:) behavior with relativeTo: nil (world space) is well-documented but worth confirming against a live build.

The bigger opportunity flagged in the audit: ImmersiveGraphPlaceholder — the article relationship graph in visionOS is still a text label stub. Same RealityKit infrastructure, same physics setup. Next logical step is making the actual article graph spatial.


Session ran on Apple Silicon, Blender 5.1, visionOS 26 target. Blender MCP bridge via github.com/ahujasid/blender-mcp.