AI and Architecture
Titled “AI and Architecture”, the course was part of a skill lab that addressed the growing demand to learn generative AI. Over six days, the course progressed from initially accepting whatever AI produces to gaining precise control over the output and negotiating with AI so that it reflects each student’s design vision.
Students began the course by understanding how prompting works, what tokens are, and how models are trained. They then explored four ways of interacting with generative AI: text to image, image to image, text to 3D model, and sketch to render to 3D model. This covered a wide range of generative AI applications, from prompt crafting to physical outputs.
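The idea that a model breaks a prompt into tokens can be sketched with a toy example. Real models use learned byte-pair-encoding vocabularies, so the scheme and vocabulary below are purely illustrative, a minimal sketch of the splitting idea only:

```python
# Toy illustration of tokenization. Real models (GPT, Claude) use learned
# BPE vocabularies, not greedy word splitting; this vocab is hypothetical.

def toy_tokenize(prompt, vocab):
    """Greedily split each word into the longest known subword pieces."""
    tokens = []
    for word in prompt.lower().split():
        while word:
            # Take the longest vocabulary entry that prefixes the word;
            # fall back to a single character if nothing matches.
            for end in range(len(word), 0, -1):
                if word[:end] in vocab or end == 1:
                    tokens.append(word[:end])
                    word = word[end:]
                    break
    return tokens

vocab = {"arch", "itect", "ure", "render", "ing", "a", "court", "yard"}
print(toy_tokenize("Render a courtyard", vocab))
# → ['render', 'a', 'court', 'yard']
```

Because a prompt is consumed as tokens rather than whole words, small changes in wording can shift how a model reads a design brief, which is why prompt adaptation mattered across the tools below.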
To do this, students used tools such as:
- ChatGPT: To design an architectural space on a real site and generate compositional drawings. (Less precise)
- Midjourney: To create personalized moodboards by exploring its --parameters and visualizing spaces shaped by their imagination. (Less precise, more creative)
- RhinoMCP: To generate precise outputs in Rhino using Claude. (More precise, less creative)
- ComfyUI: To build sketch-to-render-to-3D-model workflows. (Precise and creative)
At every step, students had to adapt their prompts for each tool and think of prompting as an extension of their design process.

