After several decades of hope, hype and false starts, it appears that artificial intelligence (AI) has finally gone from throwing off sparks to catching fire. Tools like DALL-E and ChatGPT have seized the spotlight and the public imagination, and this latest wave of AI appears poised to be a game-changer across multiple industries.
But what kind of impact will AI have on the 3D engineering space? Will designers and engineers see significant changes in their world and their daily workflows, and if so, what will those changes look like?
Design assistance and simulation
It’s like a law of the universe: AI brings value anywhere there is a massive volume of data, and the 3D engineering space is no exception.
Thanks to the huge datasets that many engineering software vendors already have at their disposal — in many cases, we’re talking about millions of models — AI has a wealth of data to draw upon to provide design guidance and design optimization.
For example, if an architect is looking to put a certain type of room — a laundry room, maybe — inside a house or apartment with a certain floor plan, the AI within the building information modeling (BIM) software will have seen enough successful examples from other projects to know how to make it happen seamlessly.
AI can also start carrying some of the load when it comes to putting the finishing touches on a design. For example, by typing a prompt such as “make this building look more appealing,” a designer could trigger the AI to populate an architectural drawing with 3D models of just the right type of furniture, perhaps, or some perfectly manicured trees and hedges out front. All the designer has to do then is approve the AI’s suggested additions. This kind of assistance will make the life of the actual human in front of the computer screen much, much easier.
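To make that interaction concrete, here is a purely hypothetical sketch of what a prompt-driven assistant’s interface could look like inside a CAD tool. Every class and method name below is an illustrative assumption, not an existing product API.

```python
# A purely hypothetical sketch of a prompt-driven design-assistant API inside
# a CAD tool. The class and method names are illustrative assumptions, not
# taken from any real product.
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str          # e.g. "lounge chair", "boxwood hedge"
    category: str      # e.g. "furniture", "landscaping"
    position: tuple    # (x, y, z) placement proposed by the assistant

@dataclass
class Scene:
    assets: list = field(default_factory=list)
    def add(self, asset: Asset) -> None:
        self.assets.append(asset)

@dataclass
class SuggestionSet:
    prompt: str
    assets: list
    def approve(self, scene: Scene) -> None:
        # The designer stays in the loop: nothing lands in the scene
        # until the suggestions are explicitly approved.
        for asset in self.assets:
            scene.add(asset)

class DesignAssistant:
    def suggest(self, prompt: str, scene: Scene) -> SuggestionSet:
        # A real assistant would query a model trained on large libraries of
        # furnished scenes; here we return canned suggestions for illustration.
        return SuggestionSet(prompt, [
            Asset("manicured hedge", "landscaping", (0.0, -4.0, 0.0)),
            Asset("lounge chair", "furniture", (2.5, 1.0, 0.0)),
        ])

scene = Scene()
suggestions = DesignAssistant().suggest("make this building look more appealing", scene)
suggestions.approve(scene)   # designer signs off; two assets are added
```

The key design point is the approval step: the assistant proposes, but the human still decides what enters the model.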
We’re going to see more and more of this type of role for AI, where it serves as an assistant embedded within computer-aided design (CAD) programs, working side-by-side with the designer and springing into action when called upon.
Similarly, AI’s ability to learn from large datasets and make sense of what it learns can assist with simulation. In both design assistance and simulation, these capabilities are still evolving and will take years to realize their full potential, but they will gradually be able to take more and more items off of humans’ plates — greatly increasing the productivity of individual designers and engineers in the process.
New ways of 3D visualization
AI also has some interesting implications when it comes to reconstructions and digitizations of the physical world. Case in point: We’re now at a stage where we can feed AI a couple of basic 2D images of a particular building or object, and it will create a full 3D, volumetric representation (i.e., not just a superficial surface model) of that item.
This is thanks to neural radiance fields (aka NeRFs): neural networks that learn a volumetric description of a scene (in effect, the color and density at every point in space) from multiple 2D viewpoints, and then use that learned field to reconstruct the scene in 3D and render it from new angles.
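Below is a minimal sketch of the volume-rendering idea at the heart of NeRFs. In a real system, a trained network predicts color and density at each sampled 3D point; here, a hand-coded sphere stands in for that network purely so the example is self-contained.

```python
# Minimal sketch of NeRF-style volume rendering. A trained network would map
# (position, view direction) -> (color, density); here a hand-coded sphere
# stands in for that network so the example runs on its own.
import numpy as np

def radiance_field(points):
    """Stand-in for a trained NeRF MLP: density is high inside a unit sphere."""
    dist = np.linalg.norm(points, axis=-1)
    density = np.where(dist < 1.0, 8.0, 0.0)            # sigma at each sample
    color = np.tile([0.9, 0.4, 0.2], (len(points), 1))  # constant orange color
    return color, density

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=128):
    """Composite samples along one ray using the standard NeRF quadrature."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    color, density = radiance_field(points)
    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))     # spacing between samples
    alpha = 1.0 - np.exp(-density * delta)               # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
    weights = alpha * trans                              # contribution per sample
    return (weights[:, None] * color).sum(axis=0)        # final RGB for the ray

# Shoot one ray from z = -3 straight through the sphere.
rgb = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
print(rgb)   # approaches the sphere color as the ray saturates
```

The compositing step, which weights each sample by its opacity and by the transparency of everything in front of it, is what lets a model supervised only with 2D images produce a genuinely volumetric result.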
3D object recognition
Of course, even as photos and 2D images become increasingly viable source material for creating 3D models, point clouds — which are created by scanning objects or structures — will remain a valuable source of data.
AI has some great potential applications here as well. Just as AI has become quite adept at feature recognition in photos — identifying the furry, four-legged thing in a picture as a “dog” and the rectangular object as a “couch” — it can bring the same capability to point cloud data, helping to pick out hyper-specific features within the sea of scanned points or triangles. Examples include identifying the walls and ceiling of a scanned building, or holes and other features in a complex CAD assembly.
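As a deliberately simplified stand-in for that kind of recognition, the sketch below labels points in a synthetic scan as “floor/ceiling” or “wall” using nothing more than RANSAC plane fitting and the orientation of each fitted plane. Learned models (PointNet-style networks, for example) handle far richer features, but the inputs and outputs take the same shape: raw points in, semantic labels out.

```python
# Simplified stand-in for AI feature recognition on point clouds: label points
# in a synthetic scan as "floor/ceiling" or "wall" by repeatedly fitting planes
# with RANSAC and reading each plane's orientation.
import numpy as np

rng = np.random.default_rng(0)

def ransac_plane(points, iters=200, tol=0.02):
    """Return (normal, offset, inlier_mask) for the best-supported plane."""
    best_mask, best_plane = None, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-8:
            continue                       # degenerate (collinear) sample
        normal /= norm
        d = -normal @ sample[0]
        mask = np.abs(points @ normal + d) < tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_plane = mask, (normal, d)
    return best_plane[0], best_plane[1], best_mask

# Synthetic scan: a noisy floor (z ~ 0) and a noisy wall (x ~ 0).
floor = np.c_[rng.uniform(0, 5, 400), rng.uniform(0, 5, 400), rng.normal(0, 0.01, 400)]
wall = np.c_[rng.normal(0, 0.01, 400), rng.uniform(0, 5, 400), rng.uniform(0, 3, 400)]
remaining = np.vstack([floor, wall])

for _ in range(2):
    normal, _, inliers = ransac_plane(remaining)
    label = "floor/ceiling" if abs(normal[2]) > 0.9 else "wall"
    print(f"{label}: {inliers.sum()} points")
    remaining = remaining[~inliers]
```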
Amid all these AI-assisted developments in visualization and object recognition, however, there are implications for the graphics capabilities of engineering software. Many products already manage both meshes and point clouds — but in the near future, they may have to manage NeRFs and other representations as well, all while finding a way for these different representations to coexist.
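One hypothetical way to let such representations coexist is behind a common scene-graph interface, so the viewer treats meshes, point clouds and neural fields as interchangeable nodes. The sketch below is an illustrative design exercise, not any vendor’s actual architecture.

```python
# Hypothetical sketch of mixed representations in one scene graph: each node
# type answers the same two questions (what space do you occupy, how do you
# draw yourself), so the viewer loop never branches on representation type.
from abc import ABC, abstractmethod
import numpy as np

class SceneNode(ABC):
    @abstractmethod
    def bounds(self) -> np.ndarray:
        """Axis-aligned bounding box as a (2, 3) array of min/max corners."""
    @abstractmethod
    def draw(self) -> str:
        """Stand-in for issuing draw calls; returns a description here."""

class MeshNode(SceneNode):
    def __init__(self, vertices, faces):
        self.vertices, self.faces = np.asarray(vertices), faces
    def bounds(self):
        return np.stack([self.vertices.min(0), self.vertices.max(0)])
    def draw(self):
        return f"rasterize {len(self.faces)} triangles"

class PointCloudNode(SceneNode):
    def __init__(self, points):
        self.points = np.asarray(points)
    def bounds(self):
        return np.stack([self.points.min(0), self.points.max(0)])
    def draw(self):
        return f"splat {len(self.points)} points"

class NeRFNode(SceneNode):
    def __init__(self, model, box):
        self.model, self.box = model, np.asarray(box)
    def bounds(self):
        return self.box
    def draw(self):
        return "ray-march the neural field inside its bounding box"

scene = [
    MeshNode([[0, 0, 0], [1, 0, 0], [0, 1, 0]], [(0, 1, 2)]),
    PointCloudNode(np.random.rand(1000, 3)),
    NeRFNode(model=None, box=[[0, 0, 0], [2, 2, 2]]),
]
for node in scene:   # representation-agnostic viewer loop
    print(node.bounds().round(2).tolist(), "->", node.draw())
```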
Of course, that’s part of the beauty of game-changing innovations like AI: As its impact ripples through the larger ecosystem, other technologies must respond in kind to the new world it creates, spurring even more innovation.
Uncertainty and opportunity in 3D engineering
After a slow build, AI has reached a tipping point — and the 3D engineering world will feel its impact in ways that range from extensions of what is already possible to surprising new capabilities and functionality. For the designer or engineer, this is nothing to fear. While AI may be the biggest disruptor of the coming years, bringing considerable change and uncertainty, it also promises to help designers and engineers tackle their work and shape the world around us with greater efficiency, more creativity and new levels of virtuosity.
Eric Vinchon is VP of Product Strategy at Tech Soft 3D.