
Engineers: Don’t shun AI, learn to use it


AI can help you design; it's about how you use it. EE World spoke with Kyle Dumont of Allspice.io about how engineers can, with some effort, learn to use AI to make design decisions. AI is a tool that can shorten design times, but you need to learn its capabilities and limitations. It's all about telling AI what to do.

[Image: engineer confused about AI]

AI is disrupting everything, and electronic design is no exception. Software coders may soon become software specifiers with AI writing the actual code. We’re seeing that already. Using AI to design circuits, especially analog circuits, is harder. That’s because the context includes understanding the physics and application environment. Being an EE is inherently a multi-disciplinary job; you might not think that AI can completely understand the design constraints you face. According to Dumont, AI is good for narrow tasks. “We might get there, but right now we’re way more focused on narrow solutions that are adding value in your stack, in your workflow today.”

Hardware engineers already have many design tools that are now getting AI integrated into them. General-purpose AI tools can also help, but in many cases, you need to add proprietary or localized data to the large-language models (LLMs). It's no wonder that hardware engineers may hesitate to use AI-based tools for circuit design. Rather than shun AI, you can start learning how to use it. That's the point of the conversation in the video and in the edited transcript below. EE World spoke with Kyle Dumont, CTO of Allspice.io, a company that produces software to automate electronic design flows.

https://youtube.com/watch?v=w4uCBfVf5PY

EE World: Welcome to EE World. I'm senior technical editor Martin Rowe. We are here with Kyle Dumont, who is the CTO of Allspice.io, a company that specializes in software for engineering project management, specifically electrical engineering project management. We're going to talk about hardware engineers and AI. There seems to be a belief that hardware engineers, and maybe particularly analog as opposed to digital engineers (we'll discuss that), may be hesitant to use AI. Kyle has some thoughts on that. So Kyle, why don't you give us a little background and then explain why you think there may be some hesitation, at least today, for hardware engineers to use AI.

Dumont: Thanks, Martin, and happy to talk today. To kick things off, I'll give a brief background on myself. I was an electrical engineer for the start of my career. I worked in robotics, in both consumer products and industrial products. Early in my career, I got a lot of great experience with the real-world process for how electronic designs are created, reviewed, and released, the whole end-to-end process, which goes far beyond just circuit entry and theory of operation.

The real-world complexities of designing and building electronics projects mean working with multi-disciplinary QA, test, and manufacturing teams; picking the right components; and working with software, firmware, and mechanical teams to make sure you fit in the enclosures and all that. I think this starts to lead into why we, as hardware engineers, get the feeling that AI systems can really struggle in such a multi-faceted, multi-disciplinary environment that has many physical constraints. Many, many different file formats and integrations need to be considered to have the full context and the full picture for electronic design. So I'm happy to talk through a lot of those areas today.

EE World: What is it about AI that you think makes hardware engineers hesitant? I know we just touched on that: you mentioned things like manufacturing and dealing with the mechanical aspects, and that the job is multi-disciplinary. Do you think there's a difference between analog engineers and digital engineers regarding using AI, and if so, why?

Dumont: I don't, not in large part. Some of the context is different, but the areas where AI is most helpful are going to be anything that saves you time. AI tools are assistants to help you do your job as an electrical engineer better. For instance, one of the areas where we find that tools, even general-purpose tools, are very functional right now is parsing and understanding PDF data sheets. That's because a lot of specialized time has been spent on tooling to allow models and LLMs to actually parse and get data out of these files. PDFs happen to be one of the data contexts that electrical engineers use a lot. The content of those PDFs might be different if you're talking to a digital engineer, who is going to be looking at register diagrams, timing, speed, rise time, fall time, things like that, although maybe that's getting into the analog world a little bit.

One of my early mentors used to tell me that all analog engineers, at some point, are digital engineers, and all digital engineers are analog engineers, because that line certainly blurs, depending on how you cut it up. But on the flip side, analog engineers might be looking a lot more at simulation profiles and parasitic capacitance and things like that. So the data is a bit different, but the fundamental principles are still the same. Your utility in an AI tool is really going to come down to whether or not the AI agent is able to understand the data that you're working with. That's one of the blockers.

I think that's one of the things we assume as engineers: that all those multifaceted tools are just so specific to my workflow that off-the-shelf tools are not going to work for me. In some ways, that is true.

There are companies working on translation tools to make these systems understand those cases and interfaces better. Out of the gate, there's this recent phenomenon in the last six months or so of vibe coding in software, where somebody enters a kind of flow state, just throwing things over the wall at an LLM. In 15 or 20 minutes, they can code, end to end, something like Google Docs or Google Spreadsheets, something that would have taken months and months, and probably took Google engineers years to release in its first version.

If your vision is that that is what hardware is going to be today, we have to course correct, because in hardware, mistakes are very, very critical. Even in software, that kind of vibe coding comes with its own issues; people really have to dig in to understand them. So if your thought as an engineer is that we're going to be able to throw things over the wall and come back with these multifaceted solutions, that's not where the state of the art is right now. We might get there, but right now we're way more focused on narrow solutions that are adding value in your stack, in your workflow today. Tools that understand your current data workflow, your PDFs, your schematics, your PCBs: if you can have read/write access into those, and then start to add in things like your component libraries, you're giving this system a pretty darn good vision into your work context, so it can make some good, localized decisions that will help you out.

EE World: When you talk about AI, are you talking about the ChatGPTs of the world, or are you talking about some of the simulation software? There's plenty of it out there, and it's starting to use AI, from what I've been hearing. Do you see AI catching up to these software programs that have been around for years?

Dumont: The workflows are going to change. The real magic here is when you start pairing the systems that we use as engineers with LLMs and agents. LLMs really understand text representations, and usually you can get a translator to convert from more specific languages into forms those more general-purpose tools understand. Sometimes you have highly specialized models trained on just a specific data set that may not have any real context for language, and you can use those too.

Those different tools are used in different aspects. When you get into simulations, they might use much more specialized, highly trained models. But the reality is that when you can combine your day-to-day work in the tool with a model that makes some of the very repetitive and complex work you're doing faster, that's where the magic happens. If you're doing anything repeatedly, say, systems and simulations where you're fine-tuning by trial and error, such as finding the right impedance for a net, and you can hand that system and its simulation over to an AI tool that can iterate and predict rapidly, thousands of times in a minute, as opposed to what you can do as a human, you're going to get some great results. You see a similar thing all the way on the other side, even with things like ChatGPT: upload a data sheet for a component you're looking at, have a chat with it, and say, "What is the timing for this characteristic, and tell me which page of the data sheet it's on?" That way, if you want to build trust and go check, you have the cross-references. That's the type of tedious work you can start to offload on these systems.
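The "tell me which page it's on" cross-referencing that Dumont describes can be sketched in a few lines. This is a hedged illustration, not Allspice's implementation: it assumes the datasheet text has already been extracted page by page (for example, by a PDF parser), and the part name, pages, and parameter names below are hypothetical.

```python
def find_parameter_pages(pages: dict[int, str], term: str) -> list[int]:
    """Return page numbers whose extracted text mentions `term`.

    `pages` maps page number -> extracted plain text for that page.
    The match is case-insensitive, mirroring the kind of page citation
    an engineer would ask an LLM assistant for so the answer can be verified.
    """
    needle = term.lower()
    return sorted(num for num, text in pages.items() if needle in text.lower())

# Hypothetical extracted datasheet text (page -> text).
datasheet = {
    1: "XYZ123 Low-Power Op Amp - Features and Ordering Information",
    7: "Electrical Characteristics: slew rate 0.5 V/us, rise time t_RISE 5 ns max",
    12: "Application Information: decoupling and layout guidelines",
}

print(find_parameter_pages(datasheet, "t_rise"))  # pages mentioning the parameter
```

The point of returning page numbers rather than just an answer is exactly the trust-building step from the interview: the human can open the cited page and confirm.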

EE World: Let's say you're trying to decide which component to design into your circuit. Can you use this to compare data sheets, give it some parameters, and say, "What should the results be if I use this part versus another company's part?"

Dumont: We've seen a lot of this. You can go to ChatGPT and say, "I'm trying to evaluate the latest Bluetooth modules," and it'll give you some fantastic information, because there's a lot of data just out there: forums, blog posts. You have to recognize that the system is taking in all of that information, and that's what it's giving you. If that information is biased in some way, you will get those results out. That's the reality of the world we live in. But even now, you can get great responses from these general-purpose tools on engineering advice, and I do recommend doing that; it's always helpful to have that context. Just like in the software world, though, what separates a good engineer from an engineer you probably don't want to work with as much is the ability to separate the good feedback from the bad, and to combine those tools with your own knowledge. That is not going away; you need to have those first principles yourself to make use of these things. And the more data you give those systems, the more powerful they become: start with small, very helpful tasks, and keep adding context. We are working, for instance, on giving more context into the design and into the simulations to expand the pool of decisions the tool can make.

For instance, if you ask ChatGPT, "What is a component that I can use to solve this problem?" it can make some good decisions now. If it also has access to all of the components in your internal database, it will highlight those, and it will know exactly the format and the schema you want the results in. Hopefully, if you're doing things right, it will pick something that's in your library. If it goes outside of that, it should have a good reason, and it should be able to map the properties of the suggested component to your library, so it shouldn't be a big surprise if something doesn't match up.

EE World: You mentioned the possibility of bias. Can you give me an example of what you mean by that?

Dumont: Like many things, it comes down to access to data. The quality of an LLM, or the quality of a response you're going to get out of any AI agent, depends on what it has access to. A general-purpose tool, for the most part, has a lot of access. I don't know everything that the OpenAI folks and the Anthropic folks are giving their tools access to, and I don't think anybody does. Those articles pop up every now and then, but there's lots and lots of information these tools can reach, from Reddit posts to Stack Exchange answers to all of the forum conversations to all of the actual component PDF information available on manufacturer websites. Their crawlers are going out and collecting all that information.

There are companies now that specialize in what was, and still is, search engine optimization. They're focusing on AI result optimization, so companies can target the crawlers from those AI providers and get their components and web pages prioritized, so that if a user searches for anything from an electronic component to the best Ben and Jerry's ice cream flavor, their product comes up first and foremost. There is some ability for those companies to optimize those results. For instance, you could imagine that if a company spends a lot more advertising money and gets more articles out there about its new chip, that company and that chip might pop to the top. You can combat some of that with good prompt engineering: focusing on exactly the question you're asking should still get you the answer to that question. You just need to recognize where the sources of information come from. But if you start to control some of those sources of information, giving the system priority access to your schematics, your past designs, and your libraries, which takes specialized translation tools that you either develop internally or get from a third-party partner, then all of a sudden you're giving it clear instructions for how it should prioritize the results that come back.
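The "control your sources" idea can be illustrated with a simple prompt-assembly sketch: local, trusted context (your internal component library) is placed ahead of the open-ended question, and the model is explicitly told to prefer it. The library entries, part numbers, and wording below are hypothetical, a minimal sketch of the pattern rather than a real Allspice interface.

```python
def build_prompt(question: str, library_parts: list[dict]) -> str:
    """Assemble a prompt that asks the model to prefer in-library parts."""
    lines = [
        "You are assisting with component selection.",
        "Prefer parts from the internal library below; if you go outside it,",
        "explain why and map the suggestion's properties to the library schema.",
        "",
        "Internal library:",
    ]
    for part in library_parts:
        lines.append(f"- {part['mpn']}: {part['description']}")
    lines += ["", f"Question: {question}"]
    return "\n".join(lines)

# Hypothetical internal library entries.
library = [
    {"mpn": "LM317", "description": "adjustable linear regulator, 1.5 A"},
    {"mpn": "TPS62130", "description": "3-17 V buck converter, 3 A"},
]

print(build_prompt("Suggest a 5 V, 2 A regulator for a battery-powered board.", library))
```

Placing the trusted context first and stating the priority rule explicitly is one way to bias the model away from whatever advertising-driven results its training data might favor.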

EE World: There's a lot of software out there that handles the physics of analog design, and by physics I'm referring to EMI, signal integrity, or thermal issues. Lots of software has been available for years to simulate and try to solve those kinds of problems. Do you see AI changing that at all? Or do you think there will still be plenty of work for consultants?

Dumont: There will always be a place for consultants, but I absolutely see AI changing that. I like to think I have a pretty pragmatic view of the world of AI, but in hardware there's no doubt that as the specialized tools get better and better at communicating with an LLM or another agent, they will get incorporated into the process. What I mean by that, specifically, is that any of the major LLMs has the capability to add tools, sometimes called functions. This is changing quite rapidly, but it is essentially your ability to give the LLM access to go call something else, maybe another AI agent, maybe a deterministic software program, and come back with results. You can use that to start plugging these different tools into the toolbox, just like we would as engineers, to start making some of those informed decisions.
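The tool-calling pattern Dumont describes can be sketched without tying it to any particular vendor's API: the host registers deterministic tools, the LLM emits a structured call (typically JSON with a tool name and arguments), and the host dispatches it. Everything here is hypothetical, a mock of the pattern; the "simulator" is a toy stub, not a real field solver, and its formula is illustrative only.

```python
import json
from typing import Callable

# Registry of deterministic tools the LLM is allowed to call.
TOOLS: dict[str, Callable[..., dict]] = {}

def tool(fn: Callable[..., dict]) -> Callable[..., dict]:
    """Register a function so the agent loop can dispatch to it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def simulate_trace_impedance(width_mm: float, height_mm: float) -> dict:
    """Stand-in for a simulator; the formula is a toy, not a real microstrip model."""
    z0 = 87.0 / (width_mm / height_mm + 1.0)
    return {"impedance_ohms": round(z0, 1)}

def dispatch(tool_call_json: str) -> dict:
    """Execute a structured tool call of the form an LLM would emit."""
    call = json.loads(tool_call_json)
    return TOOLS[call["name"]](**call["arguments"])

# A structured call as an LLM might emit it:
result = dispatch(
    '{"name": "simulate_trace_impedance",'
    ' "arguments": {"width_mm": 0.3, "height_mm": 0.2}}'
)
print(result)
```

In a real integration, the stub would be replaced by a call into an actual simulator, and the registry's names and parameter schemas would be advertised to the LLM so it knows what it can call. The dispatch step stays deterministic either way, which is the point Dumont makes about pairing agents with deterministic programs.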

The main problem that I've always seen with simulation is not the capabilities of the simulation tools, the quality of the outputs, the accuracy, or anything like that. It's the ability to quickly run something; it's the barrier to adoption. You either need a massive model database or you need tons of time training on the tool. You need to take time to rebuild everything, give all of the input parameters, and structure everything appropriately. That has always been the problem with simulation: the tools are not used as often as maybe they should be because they take a lot of time. Sometimes it's just easier for us to say, "Let's ship a board out, get it back, and then probe the outputs and see if any smoke comes out." Well, if you can lower that barrier, and if you have an AI agent that starts to do some of that heavy lifting, like taking a swag at building up your model or putting in some of those input parameters, now you're talking about some really, really powerful results that you can get with minimal input.

EE World: What do you say to people who are a little skeptical? What would you suggest they do to evaluate some of these AI tools and see how they can make things go faster or make designs better? How would you go about evaluating something you've done with AI?

Dumont: The biggest answer is to try anything, and start by just having a conversation. The easiest and often most helpful thing is to take the very powerful off-the-shelf, general-purpose chat clients and just see what they can do. Sometimes you'll be quite surprised.

You can put in a specific part number and say, "I'm looking for this kind of circuitry." The general-purpose tools will definitely come back with something, though when they're trying to generate circuit block diagrams, the results generally don't work so well. But if you're asking about first principles, give things a shot. If you have a document that you'd normally spend a lot of time parsing and trying to figure out, upload it.

If you're at a company that has localized LLMs you're permitted to work with, or you have some nonsensitive data you're allowed to upload, maybe publicly available data sheets and things like that, I would say the easiest place to start is uploading. If you have a PDF, or if you have a BOM, you can upload it and say, "Suggest alternatives for these parts." Start to get a little creative, experiment a little, and just see what the tools can do, because all you're going to do is learn. As I mentioned, these tools are not meant to replace the context that engineers bring. In fact, those first principles of engineering will allow you to be that much more effective when you're using tools that enhance what you can do.
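The "upload a BOM and ask for alternatives" starting point can be sketched as a small pre-processing step: parse the BOM and turn it into a prompt you could paste into a chat tool. The CSV columns, reference designators, and part numbers below are hypothetical examples.

```python
import csv
import io

# Hypothetical BOM export (reference, manufacturer part number, description, quantity).
BOM_CSV = """ref,mpn,description,qty
U1,ATMEGA328P-AU,8-bit MCU TQFP-32,1
C1,GRM188R71C104KA01,100 nF 16 V X7R 0603,4
"""

def bom_to_prompt(bom_csv: str) -> str:
    """Turn a BOM into a prompt asking a chat tool for drop-in alternatives."""
    rows = list(csv.DictReader(io.StringIO(bom_csv)))
    lines = ["Suggest drop-in alternatives (same footprint and key specs) for each part:"]
    for row in rows:
        lines.append(f"- {row['ref']}: {row['mpn']} ({row['description']}, qty {row['qty']})")
    return "\n".join(lines)

print(bom_to_prompt(BOM_CSV))
```

As Dumont cautions, the response is a starting point, not an answer: the engineer still verifies each suggested alternative against the original datasheet.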

EE World: Kyle, thank you very much for your time. I appreciate it. Engineers are skeptical by definition, and I think AI is just another case of that. We all tend to keep the old technology working as long as we can, as opposed to just going and trying the new one. I find that with all the old computers I have in the house: they just keep working, and I keep finding ways to keep them going. Thank you again for your time, and I hope to talk to you again in the future.

Dumont: Likewise, thank you, Martin.

