This was done for a developer who needed a clear visual of a single unit before scaling it to an entire master plan.
The goal?
✅ Get approval from partner investors
✅ Gather early feedback from potential buyers
Here’s what I used:
🔹 ControlNet with the SDXL model for precision ... it’s not the newest tool, but it’s still incredibly powerful.
🔹 Then I played with denoise values under the Flux model to enhance and upscale the image.
🔹 A lot of closed apps can offer similar results ... but this setup? It’s free, flexible, and runs locally on your machine.
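If you want to try the ControlNet + SDXL part outside of ComfyUI, here’s a minimal sketch using the diffusers library. The checkpoint names, file names, and conditioning scale are my assumptions, not my exact graph ... swap in whichever ControlNet (canny, depth, etc.) matches your sketch or clay render.

```python
# Minimal ControlNet + SDXL sketch (diffusers). Model names, file names,
# and values are assumptions, not a reproduction of the exact workflow.
import torch
import cv2
import numpy as np
from PIL import Image
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# Turn the clay render / sketch into a Canny edge map for ControlNet.
source = load_image("unit_clay_render.png")  # placeholder input
edges = cv2.Canny(np.array(source.convert("L")), 100, 200)
edges = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="residential unit exterior, late afternoon light, photorealistic",
    image=edges,
    controlnet_conditioning_scale=0.7,  # how strictly the edges constrain the output
    num_inference_steps=30,
).images[0]
image.save("unit_controlnet_sdxl.png")
```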
To generate the animation, I used Klingai as an image-to-video workflow with camera movement control. That step can also be done locally and for free, and I’m currently testing a few variations to push it further.
Now you can place different elements into your scene to set up your ideal composition ... with full control over how, what, and where.
It’s like building a moodboard, but for exterior scenes, and way more dynamic.
Once the composition is locked in, you can generate your vision in different scenarios and moods.
Here are the steps I used:
1️⃣ Start with a sketch or screenshot from your model
2️⃣ Use ComfyUI (Enrico's custom nodes) to place up to 8 elements in your layout
3️⃣ Plug the output into ControlNet for SDXL + craft a good prompt for mood variation
4️⃣ Play with the denoise values in the Flux model to enhance detail (see the sketch after these steps)
5️⃣ Upscale the result for clarity and polish
6️⃣ Run an image-to-video workflow to animate and bring your vision to life
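For steps 3–4, the “denoise” value in ComfyUI roughly maps to the strength parameter of an image-to-image pass. Here’s a rough sketch of that refine/enhance step using diffusers’ Flux img2img pipeline ... the model name, file names, and values are assumptions (FLUX.1-dev weights are gated on Hugging Face and need plenty of VRAM).

```python
# Flux image-to-image sketch (diffusers). "Denoise" in ComfyUI ~ "strength" here.
# Model name, file names, and values are assumptions.
import torch
from diffusers import FluxImg2ImgPipeline
from diffusers.utils import load_image

pipe = FluxImg2ImgPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

init_image = load_image("composed_layout.png")  # output of the ControlNet/SDXL step

refined = pipe(
    prompt="residential unit exterior, golden hour, crisp architectural detail",
    image=init_image,
    strength=0.55,            # low = stay close to the input, high = follow the prompt
    guidance_scale=3.5,
    num_inference_steps=28,
).images[0]
refined.save("refined_flux.png")
```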
We’ve always built 3D models to generate images. But what if we could reverse the process?
From Image to 3D Model ... The Opposite Workflow!
In the design and visualization world, we follow a traditional workflow ... modeling, texturing, rendering ... to create images.
This takes time and resources. But now… we can do the opposite!
With a custom ComfyUI node, you can transform a single image into a 3D model!
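Under the hood, nodes like this wrap an image-to-3D reconstruction model (TripoSR, Hunyuan3D, and similar). I can’t vouch for any specific node’s API, so this is a purely illustrative sketch: `image_to_mesh()` is a hypothetical stand-in for the reconstruction step, and only the mesh export via trimesh is a real library call.

```python
# Purely illustrative: `image_to_mesh` is a hypothetical stand-in for whatever
# image-to-3D backend your ComfyUI node wraps (TripoSR, Hunyuan3D, etc.).
from PIL import Image
import trimesh

def image_to_mesh(image: Image.Image) -> trimesh.Trimesh:
    """Hypothetical wrapper around an image-to-3D reconstruction model."""
    raise NotImplementedError("plug in your reconstruction backend here")

reference = Image.open("building_photo.png")   # single reference image
mesh = image_to_mesh(reference)
mesh.export("building_from_image.glb")         # drop the result back into your 3D scene
```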
Back in 2016, we took on an exciting challenge ... an international competition in Aarhus, Denmark.
It was the first phase of a School of Architecture design competition, where three winning proposals would later compete against Kazuyo Sejima + Ryue Nishizawa / SANAA, Lacaton & Vassal, and BIG - Bjarke Ingels Group.
Back then, I remember spending a couple of hours creating a Photoshop collage from a clay render of a mass model to better express the idea. We didn't have enough time or resources to produce a high-end render.
Today, I revisited this image and, with some AI enhancements, achieved this result in just a few minutes.
Designing on site... that's quite interesting!
Idea iteration based on the surroundings first, then creating contrast with them.
Integrate your CAD facade into the real context and explore material variations.
By using a closed app (here, YanusAI), you can input a color ID image (an image with areas defined by different colors), and the app extracts these areas into separate masks (also known as segmentation). A rough sketch of that color-to-mask step follows below.
Then, you can use these masks to:
➡️ Describe each texture to define the material, context, and surroundings.
➡️ Choose from a predefined library of textures available in the app.
With this control, you can:
✅ Iterate textures to find the best fit.
✅ Change context, lighting, or even the weather.
✅ Enhance details and upscale your final output.
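The color-ID-to-mask step itself isn’t tied to any closed app. Here’s a minimal sketch of the same idea in plain Python with NumPy and Pillow, pulling one binary mask out of each flat color in the ID image (file names are placeholders):

```python
# Extract one binary mask per flat color in a color ID image (segmentation by color).
import numpy as np
from PIL import Image

color_id = np.array(Image.open("facade_color_id.png").convert("RGB"))

# Every unique RGB value becomes its own mask.
unique_colors = np.unique(color_id.reshape(-1, 3), axis=0)

for i, color in enumerate(unique_colors):
    mask = np.all(color_id == color, axis=-1)          # True where this color appears
    Image.fromarray((mask * 255).astype(np.uint8)).save(f"mask_{i:02d}.png")
    # Each mask can now drive a material prompt or a texture region downstream.
```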
What if we could inpaint an entire building from just one image?
I’ve been developing a custom ComfyUI workflow that allows seamless inpainting and integration of objects, furniture, and now… buildings!
How does it work?
I used a single reference image of a building ... no need to remove the background! Then I integrated it into various real-world sceneries (only the last two scenes are AI-generated).
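I can’t reproduce the full ComfyUI graph here, but the core move ... masking a region of the target scene and inpainting the building into it ... looks roughly like this with diffusers’ SDXL inpainting pipeline. Checkpoint, file names, and settings are assumptions, and a true reference-image workflow would add something like an IP-Adapter on top.

```python
# Rough inpainting sketch (diffusers, SDXL inpainting checkpoint).
# Checkpoint, file names, and values are assumptions, not the exact ComfyUI graph.
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1", torch_dtype=torch.float16
).to("cuda")

scene = load_image("street_scenery.png")          # real-world backdrop
mask = load_image("building_area_mask.png")       # white = area to repaint

result = pipe(
    prompt="modern mid-rise building with a timber facade, matching the street lighting",
    image=scene,
    mask_image=mask,
    strength=0.9,               # high denoise so the masked area is fully regenerated
    num_inference_steps=30,
).images[0]
result.save("building_in_context.png")
```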
In this case, I took a render I made 8 years ago ... and of course, I felt it could be enhanced.
Here’s how I did it:
🔹 Step 1: Adjust Framing & Composition
I extended the image using outpainting, letting the AI fill in the extra space for a better composition (see the sketch after these steps).
🔹 Step 2: Iterations
I refined elements in the image to make them more cohesive. The key here? The "Denoising Value"!
1.0 → The model follows the prompt entirely.
0.0 → The model strictly follows the original image.
I typically adjust between 0.75 and 0.40, depending on the goal.
🔹 Step 3: Adding New Elements
I used inpainting to seamlessly integrate birds, a swimming pool, and people into the scene.
🔹 Step 4: Final Enhancements
To finish, I used an upscaling workflow to enhance details, improve resolution, and bring the image to life.
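For the outpainting in Step 1, the usual trick is to paste the original render onto a larger canvas and let an inpainting model fill the new border. A rough sketch, assuming the same SDXL inpainting checkpoint as above (file names, padding, and prompt are placeholders):

```python
# Outpainting sketch: extend the canvas, mask the new border, inpaint it.
# File names, padding, and prompt are placeholders.
import torch
from PIL import Image
from diffusers import AutoPipelineForInpainting

original = Image.open("old_render.png").convert("RGB")
pad = 256  # extra pixels on each side

canvas = Image.new("RGB", (original.width + 2 * pad, original.height + 2 * pad), "gray")
canvas.paste(original, (pad, pad))

# Mask: white where the model should invent content, black over the original render.
mask = Image.new("L", canvas.size, 255)
mask.paste(Image.new("L", original.size, 0), (pad, pad))

pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1", torch_dtype=torch.float16
).to("cuda")

extended = pipe(
    prompt="villa with pool terrace, wider garden and sky, same lighting as the center",
    image=canvas,
    mask_image=mask,
    num_inference_steps=30,
).images[0]
extended.save("old_render_outpainted.png")
```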
In this case, I experimented with removing consistent elements from a random exterior image ...
The model perfectly reconstructs the hidden parts, even for reflections, based on the existing information in the image. It’s amazing to see this level of capability.
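Element removal is really just inpainting with the goal flipped: mask the object and prompt for the background you expect behind it. A minimal sketch, reusing the same inpainting pipeline as above (file names and prompt are assumptions):

```python
# Object removal via inpainting: mask the element, describe only the background.
# File names and prompt are assumptions.
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1", torch_dtype=torch.float16
).to("cuda")

image = load_image("exterior_photo.png")
mask = load_image("element_to_remove_mask.png")   # white over the element to erase

clean = pipe(
    prompt="empty paved courtyard, continuous facade and glass reflections",
    negative_prompt="furniture, people, objects",
    image=image,
    mask_image=mask,
    num_inference_steps=30,
).images[0]
clean.save("exterior_cleaned.png")
```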