Changing FOC Switching frequency

@Deodand

Are you still working on improving MCU performance on the Unity? This may seem a little random, but are any engineers working on the Unity familiar with digital design/FPGAs?

I started reading into this stuff, and more and more sources are pointing towards using an FPGA instead of an MCU for FOC motor control applications, especially ones with more than one motor.

Many toolchains for FPGA development advertise that they already have SVM and FOC libraries available. I’m thinking, would it be possible to use an FPGA to run the FOC algorithm, and then just double it up (a.k.a. copy-paste) to control two motors? Since many FPGAs have built-in CPU cores, it might be possible to port Vedder’s USB/UART interface code and FOC detection code, and rewrite the emulated flash library so that it writes to the relevant memory addresses/registers used by the hardware FOC algorithm.

With an FPGA, a motor controller can probably get up to some really crazy switching frequencies without issue.

3 Likes

My limited experience with FPGAs tells me this is possible, and the crazy high polling rates they have would definitely be a step in the right direction.

But I’m completely unsure of how much processing power these things have and how they scale. Pricing could be a significant hurdle.

It’s been done before and, as far as I’m aware, it’s under patent. I’m taking a class in VHDL design with FPGAs this semester and my professor has done research related to this field. I’ll talk to him and see what he says. But like @TowerCrisis said, I think cost is the biggest hurdle.

https://www.eenewseurope.com/news/field-oriented-motor-control-fpga-takes-resolver-inputs-0#

1 Like

I think you’re onto something with the VESC FOC implementation being somewhat constrained by the MCU.

FPGAs would be one way to dramatically increase switching frequency, but there is also specific hardware on the market that’s way cheaper than an FPGA. I’ve been looking into the Trinamic TMC4671.

https://www.trinamic.com/fileadmin/assets/Products/ICs_Documents/TMC4671_datasheet_v1.05.pdf

I’ve got the BOB, but haven’t sourced/built a power stage for it yet. I’m no EE, so it’s all learning and I’m not sure exactly what I should do in that regard yet. I’ve got a couple of local experts though, so it’ll happen, just slowly.

Anyhow, the Trinamic IC does FOC in hardware, up to 100 kHz switching. From my limited knowledge, it seems there is going to be an optimal switching frequency for each combination of FETs and motor: too fast and switching losses take over, too slow and the motor losses dominate. I expect you could (relatively) easily calculate each loss as a function of frequency and combine them to find the ideal switching frequency for a controller/motor combination.
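Here’s a minimal sketch of that “combine the two curves” idea in C, assuming a simple hard-switching loss model and a crude 1/f stand-in for ripple-driven motor losses. Every constant is a made-up placeholder, not a measured value for any particular FET or motor; the real curves would come from datasheet numbers and dyno measurements.

```c
/* Sketch: sweep the switching frequency and pick the minimum of
 * (switching losses + frequency-dependent motor losses).
 * All constants below are hypothetical placeholders. */
#include <stdio.h>

int main(void) {
    const double v_bus    = 40.0;   /* bus voltage [V]                        */
    const double i_phase  = 40.0;   /* phase current [A]                      */
    const double t_sw     = 30e-9;  /* combined rise + fall time [s]          */
    const double k_ripple = 2.0e4;  /* made-up ripple-loss coefficient [W*Hz] */

    double best_f = 0.0, best_p = 1e9;
    for (double f_sw = 5e3; f_sw <= 100e3; f_sw += 1e3) {
        /* hard-switching estimate per FET: P ~ 0.5 * V * I * t_sw * f_sw */
        double p_switch = 0.5 * v_bus * i_phase * t_sw * f_sw;
        /* crude stand-in for ripple-driven motor losses, shrinking as f_sw rises */
        double p_motor = k_ripple / f_sw;
        double p_total = p_switch + p_motor;
        if (p_total < best_p) { best_p = p_total; best_f = f_sw; }
    }
    printf("lowest combined loss ~%.2f W at ~%.0f kHz\n", best_p, best_f / 1e3);
    return 0;
}
```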

If anyone wants to build a power stage for the TMC4671, I’d be glad to test it on my dyno (which just got some upgrades last night). I’ve got a small stack of motors, and some other controllers to compare the Trinamic to.

Peter

4 Likes

What part of the performance are you trying to improve by moving to an FPGA? You mention “crazy switching frequencies”; if you mean the PWM frequency, that is constrained by switching losses. The only thing to be gained in our application by an FPGA is efficiency at high speed, and it’s a few percent at most. The things lost are cost, complexity, availability of parts, etc.

Ok, finally got around to making some new mods to the dyno. Cleaned everything up, improved the RPM sensor, and added a better ADC for current and voltage measurements (importantly, with a much more stable voltage reference). Basically, the numbers are getting closer and closer to reality and more and more consistent.

Somewhat gladly, the latest numbers seem to show no big change in total system efficiency on the VESC+motor when changing switching frequencies.

[Chart: efficiency with varying switching frequencies]

Thoughts? (Edit: this is with a different motor than some previous motor tests, so perhaps the higher-switching-frequency inefficiency I discovered before was a motor inefficiency rather than a controller inefficiency, one that this SK3 6364 213 kV doesn’t show. As always, more testing is needed to get concrete results, as though such a thing were even possible :slight_smile: )

2 Likes

Some of us are really concerned about efficiency, and to us a few percent matter. Most folks don’t care in the slightest. In certain cases faster switching makes more sense, and the VESC is pretty limited in max switching rate, it seems, so the thought might be to open up different types of motors with an FPGA or faster hardware. Again though, you’re right that for most cases it likely doesn’t matter, and the added cost isn’t worth the marginal benefit.

1 Like

That’s good. Now that my board is working again, I’ll do the tests if I have some time this week.

Which MOSFETs are you using that make switching losses a problem? Also, one of the things an FPGA provides is deterministic behavior across a range of conditions (assuming it is still performing the same algorithm), unlike an MCU, which may behave differently depending on what you load it with. That gives consistent performance across that range of conditions. Additionally, a surprisingly small FPGA can implement complex algorithms, and (this is what one of my professors told me) a relatively cheap FPGA can implement an algorithm that would otherwise require a beefy and expensive MCU.

However, you have a great point that FPGA development is much more difficult and costly than MCU software development, and since FPGAs aren’t produced in as much volume, unit availability could also be a problem (but since it is an FPGA, the design can be ported fairly easily to other, potentially available, parts). The potentially lower production cost may not in many cases justify the additional development cost and time to market. It was just a thought I had; if it is not suitable for this application, then that is my mistake.

2 Likes

MOSFET switching losses are a problem no matter which MOSFETs you use. I would guess that at 30 kHz switching losses would equal conduction losses.

MCUs are also deterministic as long as the program is written correctly.

Oh, your professor told you? Take what people who work in universities say with a grain of salt unless they have extensive industry experience. Working in commercial design vs. a university is a completely different world.

The production cost will always be more because you need a host MCU AND an FPGA.

Alex, if you can disPROVE a statement, do that. Don’t simply tell him that his professor is wrong with nothing other than some snark. I’ve seen folks with doctorates be wrong, I’ve seen people with 40 years of industry experience be wrong, and I’ve definitely seen folks on internet forums be wrong.

Can we be civil and discuss things without the “Oh, your professor told you?” snark?

2 Likes

Never said his professor was wrong

Check out the NTMFS5H600NL: at 1 A source / 2 A sink gate drive currents, 40 V system voltage, and 25 kHz PWM (50 kHz switching), switching losses don’t equal conduction losses until 160 A, assuming a three-phase sine is generated. Even higher if running trapezoidal control. It has 15 nC of switching gate charge and 1.3 mΩ Rds(on).

Yes, but proving that a program implementing an algorithm is deterministic is much more difficult than proving a digital circuit is deterministic, assuming you can write such a program in the first place (and programs that can be formally proven deterministic often prove very difficult and/or time-consuming to write). See the halting problem. It’s not hard to write a program that can reliably be said to behave a certain way, though, even if that behavior is never formally and rigorously proven.

There are many FPGAs nowadays (under $10 in single quantity) that have embedded CPU cores and non-volatile memory, meaning you don’t need an external chip to store and load the configuration bitstream.

One problem I’ve noticed with industry is the inertia behind what’s already been done. There are really good reasons for going with what has proven itself in the field, but sometimes trying out some of the new-fangled toys might yield promising results.

2 Likes

So we agree that the switching losses are significant.

Given that FOC is not written for the banking industry, the program doesn’t have to be formally proven. To the extent that it matters for motor drive applications, the VESC code is deterministic. It doesn’t run differently depending on what else it has to manage (like CAN, USB, displays, etc.): the switching parameters are calculated at a fixed time interval by design, and the peripherals are run at a lower priority.

Absolutely right. The reason 99% of eskate piggybacks off the open-source VESC is that no other system existed with that level of configurability. All the motor drive systems that I have developed were for a specific motor and so didn’t need to be user-configured. For those applications maybe an FPGA would be a good option, but even then it would have to be an extremely high-speed application or present some other difficulty.

Wait, I made a mistake with the current benchmark. What I should say is: switching losses dominate below 16 A, where they amount to about 1 W at 16 A across the entire power stage (no problem at all; even without a passive heatsink, most power stages can dissipate 5 W without a problem). Above that, conduction losses are the main issue.

Even at 50 kHz PWM (100 kHz switching), switching losses dominate below 32 A, where they amount to 4 W at 32 A. Yeah, that may require additional heat dissipation hardware, but considering that the VESC default PWM frequency is 10 kHz (20 kHz switching), I think it has quite a bit of headroom before it runs into power dissipation issues. (From my own tests, the VESC can only go up to 35 kHz PWM, 70 kHz switching, before stalling and bricking itself.)

This means that if a VESC were to use the NTMFS5H600NL at the default switching frequency in FOC, switching losses would be utterly negligible: they would amount to less than 1 W across a large part of its usable range, and beyond that, conduction losses are your problem. Taking 80 A as a reasonable upper limit, switching losses in the default configuration are about 2.2 W and conduction losses about 23 W, more than ten times as much.
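For anyone who wants to poke at these numbers, here is a rough C sketch of this kind of estimate: hard-switching energy from the gate charge and drive currents, I²R conduction loss, and a ×3 bridge factor (six FETs at roughly 50% effective duty). The ×3 factor is a simplification rather than an exact modulation-dependent figure, but it lands close to the 2.2 W / 23 W figures above.

```c
/* Rough switching-vs-conduction estimate for a three-phase bridge using
 * NTMFS5H600NL-like numbers quoted in this thread (15 nC switching gate
 * charge, 1.3 mOhm Rds(on), 1 A source / 2 A sink gate drive, 40 V bus).
 * The 3x bridge factor is a simplification, not a precise figure. */
#include <stdio.h>

int main(void) {
    const double v_bus   = 40.0;    /* V                                       */
    const double qg_sw   = 15e-9;   /* C, switching gate charge                */
    const double i_src   = 1.0;     /* A, gate driver source current           */
    const double i_sink  = 2.0;     /* A, gate driver sink current             */
    const double rds_on  = 1.3e-3;  /* Ohm                                     */
    const double f_sw    = 20e3;    /* Hz: "default" 10 kHz PWM = 20 kHz edges */
    const double i_phase = 80.0;    /* A, upper-end load current               */

    double t_rise = qg_sw / i_src;   /* ~15 ns  */
    double t_fall = qg_sw / i_sink;  /* ~7.5 ns */

    double p_sw_fet  = 0.5 * v_bus * i_phase * (t_rise + t_fall) * f_sw;
    double p_con_fet = i_phase * i_phase * rds_on;

    printf("switching ~%.1f W, conduction ~%.1f W (whole bridge)\n",
           3.0 * p_sw_fet, 3.0 * p_con_fet);
    return 0;
}
```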

You are correct that switching losses must be considered when specifying and qualifying a power stage. What I’m trying to say is that conduction losses are a more significant contribution (among other considerations) when designing the motor controller, provided you spec out the proper MOSFETs. (If your application requires fast switching, I would advise against MOSFETs from International Rectifier: they are rugged, but their switching characteristics don’t compete very well against similarly priced MOSFETs on the market. Infineon is trying to fix that, but there is still some work to be done.)

One problem with “speccing the proper MOSFETs” is that the “proper MOSFETs” may cost twice as much, and that can prove problematic. In this case, however, the NTMFS5H600NL is actually cheaper than the IRFS7530.

For our application (eskating), yes, for all intents and purposes the VESC can reliably be said to run an FOC algorithm up to a certain switching frequency. Occasionally there’s a glitch, but most of the time it works. However, other applications require a much higher standard of reliability, such as electric cars and precision servo controllers, and I have seen others consider scaling the VESC firmware up to significant EV applications. In those cases, deterministic behavior would need to be ensured.

Since most of the configuration in the FOC algorithm comes down to tweaking the values of certain (or many) parameters, the main FOC algorithm is essentially the same (in terms of the instructions it executes and how it handles data) in most configurations. What I’m trying to say is: if we hardwire the base algorithm in something like an FPGA, and offload the tuning/detection, configuration, and communication functionality to the embedded CPU core, which then loads the relevant parameters into shared memory, there is a possibility of lowering the cost of the controller as well as pushing the limits on how hard and how fast the controller can run. Additionally, since the algorithm itself becomes a digital circuit, scaling it up to two or even four simultaneous motors becomes much easier, all while preserving deterministic behavior. A rough sketch of what that CPU/fabric split could look like is below.
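Purely as a hypothetical illustration of that split (every field name, scaling, and address below is invented; nothing corresponds to a real part), the CPU-side view of the shared parameter block might look something like this:

```c
/* Hypothetical sketch: the embedded CPU core runs detection, configuration
 * and comms, then publishes tuned parameters into a shared register block
 * that a hardwired FOC pipeline in the fabric reads every PWM cycle.
 * All names, scalings and the base address are invented for illustration. */
#include <stdint.h>

typedef struct {
    uint32_t enable;           /* 1 = FOC pipeline running                   */
    uint32_t f_sw_hz;          /* switching frequency                        */
    int32_t  current_kp_q16;   /* current-loop gains, Q16.16 fixed point     */
    int32_t  current_ki_q16;
    int32_t  flux_linkage_q16; /* motor constants found during detection     */
    int32_t  r_phase_uohm;
    int32_t  l_phase_nh;
    int32_t  i_max_ma;         /* limits enforced in hardware                */
} foc_params_t;

/* On a real SoC-FPGA this would be a bus address agreed with the fabric
 * design; here it is just a placeholder. */
#define FOC_PARAM_BASE ((volatile foc_params_t *)0x43C00000u)

/* CPU side: detection/comms code tunes values, then publishes them. */
void foc_publish_params(const foc_params_t *p)
{
    volatile foc_params_t *regs = FOC_PARAM_BASE;
    regs->enable = 0;                       /* pause pipeline while updating */
    regs->f_sw_hz          = p->f_sw_hz;
    regs->current_kp_q16   = p->current_kp_q16;
    regs->current_ki_q16   = p->current_ki_q16;
    regs->flux_linkage_q16 = p->flux_linkage_q16;
    regs->r_phase_uohm     = p->r_phase_uohm;
    regs->l_phase_nh       = p->l_phase_nh;
    regs->i_max_ma         = p->i_max_ma;
    regs->enable = 1;                       /* re-arm the pipeline           */
}
```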

(This is what I meant by programs being harder to prove deterministic: if a program meant for one motor is scaled to two motors but run on the same processor, the motors might at any given moment require more computation than they do on average, and that can cause problems with hitting real-time deadlines. Even if the average computation for one motor is less than half of the CPU’s maximum processing power, that does not necessarily mean the CPU can handle two motors 100% of the time, since there may be moments when a motor requires more computation than usual, and if that happens with both motors simultaneously, there will be a problem.)
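A toy calculation with made-up numbers shows the gap between average and worst-case load; the 50 µs period, 20 µs average and 30 µs worst case are illustrative only.

```c
/* Toy numbers: each motor's FOC update averages well under half the loop
 * period, yet the worst cases can still collide. All figures are made up. */
#include <stdio.h>

int main(void) {
    const double period_us = 50.0;  /* 20 kHz control loop             */
    const double avg_us    = 20.0;  /* average per-motor compute time  */
    const double worst_us  = 30.0;  /* worst-case per-motor time       */

    printf("average load, two motors: %.0f%% of the period\n",
           100.0 * (2.0 * avg_us) / period_us);           /* 80%: looks fine */
    printf("worst case, two motors: %.0f us vs %.0f us budget -> %s\n",
           2.0 * worst_us, period_us,
           (2.0 * worst_us > period_us) ? "deadline miss" : "ok");
    return 0;
}
```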

This was just an idea I wanted to share, if it is impractical, then by all means ignore it :).

1 Like

It definitely is a very interesting idea, and if you ever make one, please open-source it and I will contribute (or buy some of your hardware). However, to my knowledge, nobody uses FPGAs in BLDC applications. All the major players make motor controllers (TI, STMicro, Microchip, LT, Trinamic, etc.), and AFAIK none of them use an FPGA or anything similar.

Similarly, I have never seen a teardown of an EV, train, or Formula E car where the custom controller uses an FPGA. I believe the reason is that they are not required and are inferior to the other options.

I think in general you underestimate the ability of a normal microcontroller to be reliable in time-critical control applications. The VESC uses an update rate that will never glitch if you run it at the recommended speeds. Of course, what would be nice is being able to run a much faster update rate so that high-ERPM operation gets smoother control signals.
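To put a number on the high-ERPM point, here is a quick sketch of how coarse the control signal gets between updates; the 60,000 ERPM and 20 kHz update rate are just example values, not VESC limits.

```c
/* How many electrical degrees pass between control-loop updates at a
 * given ERPM and update rate. Example values only. */
#include <stdio.h>

int main(void) {
    const double erpm      = 60000.0;  /* electrical RPM  */
    const double f_loop_hz = 20000.0;  /* FOC update rate */

    double e_rev_per_s     = erpm / 60.0;               /* 1000 rev/s       */
    double updates_per_rev = f_loop_hz / e_rev_per_s;   /* 20 updates/rev   */
    printf("%.0f updates per electrical revolution, %.1f deg between updates\n",
           updates_per_rev, 360.0 / updates_per_rev);   /* 18 deg apart     */
    return 0;
}
```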

1 Like

I am hoping to develop some sort of BLDC motor controller with an FPGA as a research project next semester in one of the embedded labs at my university; I will do my best to have it come to fruition.

What I know some of these companies and applications do is prototype the design in an FPGA, but then move forward to creating an ASIC. (I think that is what Trinamic has done; their products are ASICs. I also know Allegro MicroSystems has some sine-wave controller ASICs as well.)

And back to the point about development cost and time to market: it is much cheaper and faster to develop an algorithm on an MCU than on an FPGA, so that is likely why an FPGA was nowhere to be found in those teardowns. Either an ASIC was put in, or the developer didn’t want to spend the extra time and resources on what they deemed unnecessary and put in an MCU or application processor.

I don’t doubt that microcontrollers can be reliable in time-critical real-time applications. What I was trying to say is that some applications require a much more expensive and powerful MCU in order to ensure that reliability, and that a potentially cheaper FPGA could accomplish the same task. For example, let’s say an algorithm doesn’t need to do much with the data, but has to handle a lot of data at a time and do it quickly. Going the MCU route means needing something with SIMD instructions, but with an FPGA you just copy-paste the circuit a bunch of times. Potentially cheaper to produce, potentially much more headache.

Basically what I mean is, with MCUs a lot of features may end up unused, and that’s unused silicon that you paid for. With an FPGA, you can use all of the silicon for your application if you wish.

I also forgot to mention: I used to intern for a company that produces rockets, and their engineers heavily preferred FPGAs because ASICs were not cost-effective, but they needed 100% reliability and short latency after resets or other events. Some of them told me that certain events couldn’t wait for a processor to go through a reset cycle.

3 Likes

Ok, what do y’all think is going on here?

[Chart: long wires, test 2]

I’m testing the same setup with and without about 60 in of 10 AWG phase wire, at various FOC switching frequencies. At first I didn’t redetect the motor with the new phase wire lengths, but then I redetected; no real change there. Certainly there is some run-to-run variation (partly because I don’t have a burly enough power supply and keep swapping batteries back and forth, so voltage from one run to another might vary from 32 V to 29 V or so).

I colored these points and fit lines so that BLUE is long wires and ORANGE is short wires.

My instinct is to suspect the longer wires are acting as antennae, and inducing currents in the other measurement wires. I’ve tried waving the loop of extra phase wire around the apparatus, without any effect.

The other thought is that maybe the added inductance is beneficially storing energy and then releasing it over time into the motor coil. This could decrease the current spikes, and thus decrease the resistive heating in the coils.

Am I crazy? What the hell is going on?

Next up I’ll be putting a current probe on one phase wire, attached to a scope, to see if there is a visible difference. Thoughts?

A few things are going on. Firstly, at what point are the current and voltage measurements being taken? How far is that from the controller, and what path do the measuring wires take? Electroboom did a series of videos illustrating how critical the probe points, and the path the wires take to those probe points, are for avoiding EMI pickup in your measurements.

Adding the additional wire increases BOTH resistance and inductance: according to an online calculator, adding 60 in of 10 AWG increases phase resistance by about 5 mΩ and inductance by about 1 µH (assuming it is uncoiled).

This has a significant impact on the phase angle of the current with respect to the applied voltage. Basically, the greater the phase angle, the less DC current is needed from the DC bus to generate the three-phase AC. It also increases the reactance, meaning a higher applied voltage is needed to achieve the same current (but this is reactive power and should be returned to the DC bus). More inductance = bigger phase angle, more resistance = smaller phase angle, but the angle is the arctangent of reactance (which also depends on frequency) over resistance, so the relationship is very non-linear. DC current consumption scales with the cosine of the phase angle.

It’s a little counterintuitive that this would result in less power consumption from the DC bus, but I guess it’s not outside the realm of possibility if the motor has a very high phase resistance but very low phase inductance (which would also lower the max speed and power output of your motor). (I guess the wire acts as a ballast???) What were the originally detected phase inductance and resistance?
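If you want to play with the numbers yourself, here is a minimal C sketch of that relationship: phi = atan(2·pi·f·L / R), with DC draw scaling roughly as cos(phi). The motor R/L values and electrical frequency are placeholders chosen to show the high-resistance/low-inductance case; only the +5 mΩ / +1 µH come from the wire estimate above.

```c
/* phi = atan(X/R) with X = 2*pi*f*L; DC current draw scales ~cos(phi).
 * Motor values and frequency are placeholders, not detected values. */
#include <math.h>
#include <stdio.h>

void report(const char *label, double r_ohm, double l_h, double f_hz) {
    const double pi = 3.14159265358979323846;
    double x   = 2.0 * pi * f_hz * l_h;   /* inductive reactance [ohm]  */
    double phi = atan2(x, r_ohm);         /* current vs. voltage angle  */
    printf("%-12s R=%.4f ohm  X=%.4f ohm  phi=%.2f deg  cos(phi)=%.4f\n",
           label, r_ohm, x, phi * 180.0 / pi, cos(phi));
}

int main(void) {
    const double f_elec  = 500.0;  /* example electrical frequency [Hz]  */
    const double r_motor = 0.100;  /* placeholder: high phase resistance */
    const double l_motor = 2e-6;   /* placeholder: low phase inductance  */

    report("short wires", r_motor, l_motor, f_elec);
    /* wire estimate from the online calculator: +5 mOhm, +1 uH */
    report("long wires", r_motor + 0.005, l_motor + 1e-6, f_elec);
    return 0;
}
```

With these placeholder values the change in cos(phi) is tiny, which is in line with being skeptical that this effect alone explains the dyno results.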

1 Like

What kind of low-pass filtering are you doing on your input/output power measurements (to avoid induced errors like you suggest)?