I'm puzzled by Espressif's naming here. We had the ESP32-S3, so "S31" sounds like "S3, variant 1," but this part doesn't really look like a simple S3 variant. And then there's an ESP32-E22, but no E21 or even a plain E2 anywhere.
It reminds me a bit of the new STM32s (STM32MP2), which are actually 64-bit, but they kept the name STM32 because everyone knows it
beng-nl [3 hidden]5 mins ago
Didn’t Intel also try to brand the 64-bit x86 extensions as IA-32e initially? Seemed like a wasted opportunity to me.
(Disclaimer: I work at Intel but this was way before my tenure.)
p_l [3 hidden]5 mins ago
It was because IA-64 was a completely different, unrelated architecture that, until AMD succeeded with the K8, was "the plan" both for Intel's 64-bit roadmap and for killing off compatible vendors (AMD, VIA).
madduci [3 hidden]5 mins ago
I stopped following the producer logic when Intel went from Pentium 4 to Pentium D
Rochus [3 hidden]5 mins ago
They claim that the chip has an "MMU". But unfortunately this doesn't seem to be a true RISC-V MMU (according to the Sv32 specification) integrated into the CPU core itself, but just a peripheral designed for memory mapped SPI flash and PSRAM. So as far as I understand there is no true process isolation with page faults and dynamic paging.
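For context, here is a minimal sketch of what a true Sv32 translation involves. This follows the RISC-V privileged spec, not anything Espressif has documented for this chip: a 32-bit virtual address is split into two 10-bit page-table indices plus a 12-bit offset, and translation walks a two-level table.

```python
# Sketch of how Sv32 (RISC-V 32-bit virtual memory) splits a virtual
# address, per the RISC-V privileged spec: two 10-bit VPNs indexing a
# two-level page table, plus a 12-bit offset within a 4 KiB page.
def sv32_split(va: int):
    offset = va & 0xFFF          # bits 11:0  -> offset within 4 KiB page
    vpn0 = (va >> 12) & 0x3FF    # bits 21:12 -> index into leaf page table
    vpn1 = (va >> 22) & 0x3FF    # bits 31:22 -> index into root page table
    return vpn1, vpn0, offset

# Each level has 1024 entries, so Sv32 maps a full 4 GiB space with at
# most two memory accesses per TLB miss.
print(sv32_split(0x80401ABC))  # -> (513, 1, 2748)
```

A flash/PSRAM mapping peripheral, by contrast, only remaps fixed windows and cannot deliver per-process page tables or page faults.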
volemo [3 hidden]5 mins ago
That’s a shame, it’d be a cool and, afaik, unique feature for this niche.
Rochus [3 hidden]5 mins ago
Maybe Espressif will notice that there are no RV32 chips with an MMU so far (at least to my knowledge); we only have 32-bit MCUs on one side and 64-bit application CPUs on the other. Something like a Cortex-A7 equivalent is missing.
bschwindHN [3 hidden]5 mins ago
The upcoming Baochip is an RV32 chip with an MMU, I believe.
Does anyone know what CPU this uses? Is it their own first party design?
madushan1000 [3 hidden]5 mins ago
Interesting that they made a new chip with BLE+BR/EDR again. All the chips after the original ESP32 were BLE-only.
Hope this chip has good low power options so we can use it in Bluetooth audio workloads.
moepstar [3 hidden]5 mins ago
I believe this is the first ESP to gain Ethernet capability?
I totally wish that a board would come with PoE…
Because as it is right now, powering a fleet of those with USB power supplies is annoying as fsck…
elcritch [3 hidden]5 mins ago
Nah, ESP32s have had ethernet capability for a while and ESP-IDF supports it well. I've been using one I built for 5+ years now. Unfortunately the RMII (ethernet PHY) interface takes up a lot of the GPIO pins. This part looks like it'll remedy that issue.
There are two ESP32 boards that have been around for a while with PoE:
> I'm more hopeful for single-pair ethernet to gain momentum though!
I keep looking for a reasonably priced 10baseT to 10Base-T1L bridge... everything commercial seems too expensive (for me) and the two hobby designs [1] [2] I've seen are not orderable :(
But I'm seeing more commercial options lately, so that's hopeful.
I really wish there was a camera option. You’d have made wired doorbell cameras possible without a retrofit.
I’d buy in a heartbeat
albuic [3 hidden]5 mins ago
SPE with multidrop and PoDL would be awesome ! They are working on that and it will be everywhere.
cpgxiii [3 hidden]5 mins ago
Multidrop SPE isn't going to outperform newer CAN versions though. Somewhere in the sub-100Mb/s range (e.g. 10-20Mb/s) is the practical maximum speed of a multidrop bus at useful lengths, and that essentially applies equally to CAN or SPE. The only way to really get faster in a "multidrop-like" sense is with logically loop-like systems like EtherCAT and Fibre Channel, where each network segment is point-to-point and the nodes are responsible for the routing.
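The length limit comes from bitwise arbitration: every node must sample the same bit value within a single bit time, so the bit time has to cover a round trip across the whole bus. A back-of-envelope sketch (the propagation velocity and per-node delay are assumed round figures, not values from any spec):

```python
# Why multidrop buses top out: CAN-style bitwise arbitration needs every
# node to agree on each bit within one bit time, so the bit time must
# cover a round trip over the bus plus transceiver/controller delays.
def max_bitrate(bus_length_m, v_prop=2e8, node_delay_s=100e-9):
    """Rough upper bound on the arbitration-phase bit rate.
    v_prop ~ 2/3 c on twisted pair; node_delay_s is an assumed lump
    for transceiver + controller latency (hypothetical figure)."""
    round_trip = 2 * (bus_length_m / v_prop + node_delay_s)
    return 1 / round_trip

for length in (40, 100, 500):
    print(f"{length:>4} m bus: ~{max_bitrate(length) / 1e6:.2f} Mb/s ceiling")
```

This is consistent with classic CAN specifying 1 Mb/s only up to roughly 40 m, and with CAN FD going faster only in the data phase, after arbitration has settled.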
anigbrowl [3 hidden]5 mins ago
M5stack has PoE offerings
Geof25 [3 hidden]5 mins ago
The original ESP32 has Ethernet as well, I believe in the form of RMII. Then it was removed from subsequent chips, with the reason never specified.
3form [3 hidden]5 mins ago
This would be great indeed.
On that note, why does PoE capability often add such a big proportion of the price of various items? Is the technology really costly for some reason, or is it just that there's fairly low demand and people are still willing to pay?
jwr [3 hidden]5 mins ago
PoE is not obvious to implement (take it from someone who has done it, with a fair share of mistakes), uses more expensive components than normal ethernet, takes up more space on the board, makes passing emissions certification more complex, and is more prone to mistakes that ruin boards in the future, causing support/warranty issues. In other words, a bag of worms: not impossible to handle, but something you would rather avoid if possible.
ldng [3 hidden]5 mins ago
And what would a better alternative look like ?
timschmidt [3 hidden]5 mins ago
I wouldn't call it "better", but the least-effort path among hobbyists and low end gear is often 12v or 24v sent over a pair with Gnd and a forgiving voltage regulator on the other end.
jwr [3 hidden]5 mins ago
There is none, I never said PoE is "bad": it's a very good solution, it's just difficult to implement.
easygenes [3 hidden]5 mins ago
A full-module add-on in this power class is about $7 at 1,000 unit scale [0]. It would be around $3 with your own custom PCB design in terms of BoM addon at scale. That’s power only. Add another dollar or two for 10/100 PHY.
The trick, as others have said, is what adding it to your design does in terms of complicating compliance.
PoE power supplies need to be isolated (except in rare exceptions) and handle much higher voltages than common USB-C or wall wart power supplies.
They have to use a transformer and a more complex control strategy, not a simple buck regulator with an inductor. PoE inputs need to tolerate voltages several times higher than the highest USB-C voltages, so more expensive parts are used everywhere.
namibj [3 hidden]5 mins ago
Any Ethernet (well, any RJ45 you expect in a home/office) has to have at least 1500V isolation from the RJ45 wire to anything metal that can be touched or is a connector on the device.
A PoE-only device with no electrical connectors besides the RJ45 can just use a very cheap RJ45 port with integrated magnetics and PoE allowance (tiny bit bigger wires and a center pin exposed, less than 50ct more than the cheapest RJ45 with integrated magnetics) and a cheap buck from 40~80V to e.g. 5V.
Oh, and a cheap bridge rectifier and some signaling resistors to take care of input polarity and signal to the source that we in fact want the approximately 50V that could hurt a device not made for it.
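Those "signaling resistors" are the 802.3af detection signature: the PSE probes the line with two low voltages and only enables the ~48 V supply if the current delta implies roughly 25 kΩ. A simplified sketch (real PDs add a diode drop and an offset current, which the two-point slope measurement is designed to cancel; both are ignored here):

```python
# Sketch of 802.3af PD detection: the PSE applies two probe voltages in
# the low-voltage range, computes an effective resistance from the
# current delta, and accepts the PD only if it falls in the signature
# window (roughly 23.75-26.25 kOhm around the 25 kOhm nominal).
def is_valid_signature(v1, i1, v2, i2, lo=23_750, hi=26_250):
    r_sig = (v2 - v1) / (i2 - i1)   # slope cancels diode/offset effects
    return lo <= r_sig <= hi

# A PD presenting an ideal 25 kOhm signature resistor:
r_pd = 25_000
print(is_valid_signature(4.0, 4.0 / r_pd, 8.0, 8.0 / r_pd))  # True
```

A plain (non-PoE) device presents no valid signature, so the PSE never applies the dangerous voltage to it.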
cruffle_duffle [3 hidden]5 mins ago
It sounds like the PoE spec was designed before the arrival of “IoT” type things like the ESP32, Raspberry Pis, etc.
How much of the complexity is a “fundamental electrical engineering problem” and how much of it is just a spec written to solve a different set of problems?
cpgxiii [3 hidden]5 mins ago
Almost all of the complexity of PoE is fundamental. To get enough power over 100 m of ethernet cable (10x longer than USB) you have to run at much higher voltages, like 48 V. The same has eventually come to USB: for USB-C PD to reach 240 W, it also has to use 48 V.
There have long been lower-voltage "passive PoE" systems which expect a lower always-on voltage on some of the ethernet pairs (usually 12V, 24V, or rarely 48V), which can be very easy to implement so long as your users can handle the setup and incompatibility with other ethernet devices (in the most extreme case of passive PoE on 100Mb/s ethernet, you simply connect the positive pair to the input of the device and the negative pair to the ground, no additional hardware needed).
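The 48 V requirement falls straight out of I²R cable loss. A rough calculation — the 10 Ω loop figure assumes two Cat5e pairs in parallel over 100 m and is an approximation, not a spec value:

```python
# Cable I^2R loss needed to deliver a given load power at a given
# far-end voltage. Halving the voltage doubles the current, which
# quadruples the loss in the cable resistance.
def cable_loss(p_load_w, v_load, loop_ohms=10.0):
    """loop_ohms ~10 assumes two Cat5e pairs in parallel over 100 m
    (a rough, assumed figure)."""
    i = p_load_w / v_load          # current the load draws at v_load
    return i * i * loop_ohms       # power dissipated in the cable

for v in (5, 12, 48):
    loss = cable_loss(13.0, v)     # 13 W, a typical 802.3af budget
    print(f"{v:>2} V: {loss:6.2f} W lost in 100 m of cable to deliver 13 W")
```

At 5 V the cable would dissipate several times the load power, which is why low-voltage passive schemes only work over short runs.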
throwup238 [3 hidden]5 mins ago
Ethernet is already one of the most expensive standards because you need magnetics for isolation. Adding power on top of that is genuinely expensive.
Etheryte [3 hidden]5 mins ago
Whenever you combine two things into one, the complexity and cost go up considerably. A regular coffee machine is pretty cheap. Add high pressure so it can make espresso and it gets considerably more expensive. Add milk so it can make cappuccino, again more complex and expensive. The same holds for electronics. Isolating power when it's alone is fairly straightforward. It gets considerably more tricky and hence more expensive the moment you want to place any kind of a meaningful data signal in its vicinity.
solarkraft [3 hidden]5 mins ago
I’m sure the other commenters are right, but I’m guessing market segmentation may play a role here too.
amelius [3 hidden]5 mins ago
> Because as it is right now, powering a fleet of those with USB power supplies is annoying as fsck…
Therefore, wifi is more convenient than ethernet.
You don't need long cables, just a local power source.
albuic [3 hidden]5 mins ago
> You don't need long cables, just a local power source
Which means batteries that have to be replaced and maintained or cables... So ethernet with PoE or even better SPE (single pair Ethernet) with PoDL (power over data lines which is PoE for SPE) is the best from my point of view
PunchyHamster [3 hidden]5 mins ago
Well, yes, but then you need to be "in range" of PoE switch and drag the ethernet cable from it vs the nearby socket. Still, nice to have options
amelius [3 hidden]5 mins ago
I mean, if I just look at my house. There is just one ethernet outlet, but many power sockets. If I want to connect devices all over my house, the best way is to use wifi and usb power adapters. Not ethernet.
Both solutions require 1 cable per device, but the first solution would require only short and thin cables, and the second solution would require very long cables which I don't even know how to do properly without milling my walls.
stinkbeetle [3 hidden]5 mins ago
Yep. Mains electricity is ubiquitous, highly interoperable, very reliable, very high power available per drop, can be outdoor capable, common standards, understandable by users, requiring no active components, with many on-call experts available who can come to fix problems or extend/alter connectivity. Mains power wall plates with inbuilt USB power outlets are even available at quite small cost if the look of the bigger plug and wiring is not appealing.
PoE is much fewer of those things. Difficult to recommend it these days with wifi being fast and reliable and so widely used. Certainly not for average residential user.
cyberax [3 hidden]5 mins ago
On the other hand, _all_ the WiFi devices that I had at some point fell off the network, at least once. Including doorbells and cameras. While PoE devices just work.
Another point is that mains power in my area can go down periodically. My PoE switch is powered by a Li-Ion UPS and can provide power for about a day.
ImPostingOnHN [3 hidden]5 mins ago
That's half the equation. The other half is the reliability and security of wifi, which is less than that of ethernet for people without physical access to my wall innards
whynotmaybe [3 hidden]5 mins ago
ESP32's wifi is only 2.4 GHz though.
exe34 [3 hidden]5 mins ago
Can't you run a 5V supply from where your router is all the way to every god damn device in your house, and then pretend the wifi is also going through it? If you just want it to be inconvenient, there's no reason to let a lack of PoE stop you!
elevation [3 hidden]5 mins ago
Are there any US vendors with wifi/BLE-integrated MCUs -- a single package that does it all?
vlan0 [3 hidden]5 mins ago
I don't understand what possesses these folks to continue making 2.4ghz devices. I understand there are use cases for low bandwidth, high range. But surely we've passed the point where that is more desirable to most than lower latency and high throughput, right?
joshryandavis [3 hidden]5 mins ago
> I understand there are use cases for low bandwidth, high range.
Use cases like IoT? The very thing this is for?
vlan0 [3 hidden]5 mins ago
Is what you described true for all IoT devices? If I have LOS (line of sight) to my AP, why do I need 2.4 GHz? Even so, what SNR do you truly need for this low-bandwidth application? Where is the engineering here?
I have a unique position of having a data set over 8000 APs with 40k unique devices. If you design properly, there is no need for 2.4 ever. 2.4Ghz congestion (with nearly no actual 802.11 traffic) is very high. To the point where the IoT folks are struggling.
Havoc [3 hidden]5 mins ago
My 2.4ghz is basically all IOT these days. Things that matter are on 5 or 6 ghz. Busy moving the entire thing to be entirely firewalled off given how clean the separation is
vlan0 [3 hidden]5 mins ago
>My 2.4ghz is basically all IOT these days.
Yup. And it's exactly why some of my IoT admins are struggling. There is only so much spectrum to go around.
mschuster91 [3 hidden]5 mins ago
2.4 GHz has the advantage of it passing through obstacles easier. The higher the frequency the more easily it gets blocked.
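Even before obstacles, free-space path loss alone gives 2.4 GHz a fixed advantage; wall and floor attenuation widens the gap further. A quick check of the standard FSPL formula:

```python
import math

# Free-space path loss in dB: FSPL = 20*log10(4*pi*d*f / c).
def fspl_db(d_m, f_hz):
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * d_m * f_hz / c)

d = 20  # metres, e.g. across a house
print(f"2.4 GHz: {fspl_db(d, 2.4e9):.1f} dB")
print(f"5.8 GHz: {fspl_db(d, 5.8e9):.1f} dB")
# The gap is 20*log10(5.8/2.4) ~ 7.7 dB regardless of distance.
```

That ~7.7 dB is purely the frequency term; materials like brick and concrete attenuate 5 GHz more on top of it.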
vlan0 [3 hidden]5 mins ago
Yes. And 2.4 lives and dies by that sword. What downsides might there be in areas where dozens of APs hear each other and 100s of clients hear each other?
leptons [3 hidden]5 mins ago
It's an IoT device, not a laptop. It does not really need 5ghz to fulfill its purpose as an embedded CPU, and adding 5ghz likely would require making some room for it by removing other functionality.
vlan0 [3 hidden]5 mins ago
Yes, and in some use cases it works against you. 2.4 is incredibly crowded without adding 802.11 to the mix. My IoT admins would have fewer complaints if they could take advantage of my small-cell 5 GHz spectrum. This isn't 2005 with widely deployed asymmetrical wireless networks.
urba_ [3 hidden]5 mins ago
I don’t trust Espressif’s releases; I am still waiting for the ESP32-P4 to hit distributors.
It's now been more than 2 years and a 3rd chip revision.
I assume their chips don't really exist until they're actually supported by ESP-IDF. The ESP32-C5 was announced in June 2022, received initial support in ESP-IDF in August 2025, and more complete support in December. It seems to have only recently started getting third-party dev boards.
ivanjermakov [3 hidden]5 mins ago
HN title entropy record?
sgraz [3 hidden]5 mins ago
Pardon for off topic. The designer in me hates that they used AI for every single asset in this release post, looks so amateurish.
peterus [3 hidden]5 mins ago
It would be good if this chip had idle current comparable to other MCUs. I have used the ESP32-S3 and its idle current with the radio enabled, but not transmitting, is quite terrible.
My application needed both CAN bus and Bluetooth (though no wifi) so the S3 was one of the only options available. I'm sure the high current draw is because the wifi and BLE share the same radio?
nirav72 [3 hidden]5 mins ago
Argh… Wifi 6, but 2.4 GHz.
hajile [3 hidden]5 mins ago
Adding 2 frequency ranges is more expensive. If you're going to choose just one, you should choose the one with the greatest backward compatibility.
0x457 [3 hidden]5 mins ago
Realistically 2.4Ghz is far from "greatest backward compatibility" since there is a real benefit of running 5Ghz and 6Ghz only networks.
2.4Ghz makes sense because this tiny device does not need high speeds Wi-Fi connection, and deployment scenarios benefit from 2.4 GHz penetration more.
bdavbdav [3 hidden]5 mins ago
Love ESP boards, and with Raspberry Pi pricing through the roof, I’m hoping more will discover the love of getting the job done on a 10 mm² package.
I suspect a lot of the things people are using RPi for are better served by things like this (and virtualisation for the heavier end)
jxm262 [3 hidden]5 mins ago
Perfect timing. I just started planning to build a DIY smartwatch and was looking into the S3. Having native zigbee support could be nice.
ricardobeat [3 hidden]5 mins ago
I hope this one has multiple radios so you can actually use BT/Wifi/Thread simultaneously.
MrBuddyCasino [3 hidden]5 mins ago
> high-speed 250 MHz 8-bit DDR PSRAM with concurrent flash and PSRAM access
This is perhaps lost in the noise but IMO a big deal: PSRAM is starting to get serious bandwidth.
1e1a [3 hidden]5 mins ago
For reference, the 4-bit PSRAM interface on the ESP32-S3 normally runs at 80 MHz (maximum 120 MHz) and shares bandwidth with the external flash.
I wonder if it will be possible to (ab)use the faster PSRAM interface on the ESP32-S31 as a general purpose 8-bit parallel interface, eg. for ADCs...
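Rough peak numbers for comparison — this assumes the quoted "250 MHz DDR" means a 250 MHz clock transferring data on both edges, and uses the S3 figures from the comment above:

```python
# Back-of-envelope peak PSRAM bandwidth: clock * transfers-per-clock
# * bus width, converted to MB/s.
def bw_mb_s(clock_hz, bus_bits, ddr=False):
    transfers = clock_hz * (2 if ddr else 1)  # DDR moves data on both edges
    return transfers * bus_bits / 8 / 1e6

print(f"ESP32-S3, 4-bit @  80 MHz SDR: {bw_mb_s(80e6, 4):6.0f} MB/s")
print(f"ESP32-S3, 4-bit @ 120 MHz SDR: {bw_mb_s(120e6, 4):6.0f} MB/s")
print(f"New part, 8-bit @ 250 MHz DDR: {bw_mb_s(250e6, 8, ddr=True):6.0f} MB/s")
```

That's roughly a 10x jump over the S3's default configuration, before accounting for the concurrent flash access.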
Mashimo [3 hidden]5 mins ago
Oh neat. Zigbee support.
I wonder if at some point I can create low-power devices with ESPHome for Home Assistant. I assume this should use less power than connecting to wifi?
cataphract [3 hidden]5 mins ago
The C6 and the H2 already support ZigBee. Their SDK has a thin layer on top of zboss.
zrail [3 hidden]5 mins ago
You already can with nRF52 boards. Presumably they'll add ESP32 support soon too.
How do Espressif’s RISC-V cores compare to existing ARM or RISC-V options in terms of power efficiency (computational power / electrical power)?
6SixTy [3 hidden]5 mins ago
Without being hands on, it's difficult to make a direct comparison. There are two processors according to CNX [0], and the HP core's instruction set might be roughly comparable to the M55.
Don’t know the specifics of the Espressif RISC-V cores, but in general they can’t really compete on those aspects with ARM.
ARM is a much more mature platform, and the licensing scheme helps somewhat to keep really good physical implementations of the cores, since some advances get “distributed” through ARM itself.
Compute capabilities and power efficiency are very tied to physical implementations, which for the best part is happening behind closed doors.
You can run Linux on RISC-V without an MMU. There is mainline support for the Kendryte K210 chip, so it should be possible to port it to this chip provided you have enough PSRAM.
la_oveja [3 hidden]5 mins ago
why would you do that? (unless for the fun of it)
amelius [3 hidden]5 mins ago
Just to get an idea of its capabilities.
system2 [3 hidden]5 mins ago
Imagine installing Claude Code in it with full root access. Just by asking a few things, you can have a semi-broken ESP32 doing stuff.
burnt-resistor [3 hidden]5 mins ago
Interesting.
Although, I'd like to see some non-paid blogger head-to-head reviews benchmarking instruction cycle efficiency per power of comparable Arm vs. ESP32 Xtensa LX6* and RISC-V parts.
* Metric crap tons of WROOM parts are still available and ancient ESP8266 probably too.
wosined [3 hidden]5 mins ago
The ESP32 boards I own (Arduino Nano ESP32) have bad support and are a bit hit-and-miss. Did this get better? Or is the support still messy?
mianos [3 hidden]5 mins ago
The native SDK and the VSCode plugin are very professional. There is a bit of a learning curve to get into it, but once you do, it's very functional and the developers are super supportive. They have fixed bugs for me in days.
ricardobeat [3 hidden]5 mins ago
Arduino nano are made by arduino using Espressif chips, and Arduino IDE support is indeed hit and miss.
ESP-IDF, the official C SDK, is a bit more work, and there is drama around platform-io, but it’s significantly more stable.
whynotmaybe [3 hidden]5 mins ago
> there is drama around platform-io
What do you mean ?
cdcarter [3 hidden]5 mins ago
What do you mean by that? bad support for what?
MallocVoidstar [3 hidden]5 mins ago
Don't use the Arduino framework, use ESP-IDF or Rust.
usagisushi [3 hidden]5 mins ago
For those using PlatformIO, the folks at pioarduino[0] are doing a great job keeping up with Arduino Core 3.x support.
Still requires using Rust compiled against their LLVM fork. 'espup' makes it easy if you're okay with using it.
Other than that it works pretty well. This is if you run ESP-IDF, with bare-metal rust it's either best thing ever or meh. Rust community seems to use stm32 and picos more.
anymouse123456 [3 hidden]5 mins ago
Since the Snowden leaks in 2013, it just doesn't make sense that *any* foreign customers would put US technology inside their firewall. But they do.
It shocks me even more that any Western customer would do the same with network-connected Chinese chips. But we do.
The Espressif chips are truly incredible value, but what are we doing here?
Is there any doubt that these represent a major attack surface if a conflict were to heat up?
If you had network-connected chips of your own design inside every household of your adversary, what could you do with that?
khalic [3 hidden]5 mins ago
It’s not like creating a chip gives you unfettered access to it. You _can_ add 0-day flaws and backdoors, but these can be discovered, leaked, etc. Has there been any case of such a backdoor built into consumer chips like these? I’m not talking about CIA ops like Snowden described, that’s supply chain interception. I mean, has anybody ever found such a backdoor?
xondono [3 hidden]5 mins ago
Well, that depends on what you count as a backdoor, but Espressif has had some questionable flaws:
- Early (ESP8266) MCUs had weak security, implementation flaws, and a host of issues that meant an attacker could hijack and maintain control of devices via OTA updates.
- Their chosen way to implement these systems makes them more vulnerable. They explicitly reduce hardware footprint by moving functionality from hardware to software.
- More recently there was some controversy about hidden commands in the BT chain, which were claimed to be debug functionality. Even if you take them at their word, that speaks volumes about their practices and procedures.
That’s the main problem with these kinds of backdoors, you can never really prove they exist because there’s reasonable alternative explanations since bugs do happen.
What I can tell you is that every single company I’ve worked at which took security seriously (medical implants, critical safety industry) not only banned their use in our designs, they banned the presence of ESP32-based devices on our networks.
khalic [3 hidden]5 mins ago
You can hide malicious intent, so the repeated negligence patterns you’re pointing out make a better signal. Smart. Thx for the perspective
albuic [3 hidden]5 mins ago
Obviously... They are not made for safety critical systems. It's for hobbyists.
elevation [3 hidden]5 mins ago
Except if you penetrate the market with modules that cost 5% of similar US made solutions, you start to win mindshare. At least some of those hobbyists start making a product, and sometimes the determination of whether a product is "safety critical" isn't agreed upon until after it's failed catastrophically.
Edit: found an article explaining some of their naming logic, and said that the SoC naming will get its follow-up article, but sadly it never happened. https://developer.espressif.com/blog/2025/03/espressif-part-...
https://www.bunniestudios.com/blog/2026/baochip-1x-a-mostly-...
Edit - Oops GeorgeHahn beat me to it
- https://www.tme.com/us/en-us/details/esp32-poe/development-k... - https://wesp32.com/
I'm more hopeful for single-pair ethernet to gain momentum though! Deterministic, faster than CANBUS, single pair, with power delivery:
https://www.hackster.io/rahulkhanna/sustainable-real-time-la...
[1] http://robruark.com/projects/10BASE-T1L/10BASE-T1L.html
[2] https://matthewtran.dev/2024/08/10base-t1l-converter/
https://www.crowdsupply.com/silicognition/mant1s
[0] https://www.digikey.com/en/products/detail/silvertel/AG9705-...
https://esphome.io/components/zigbee/
[0] https://www.cnx-software.com/2026/03/24/esp32-s31-dual-core-...
I wish I could run DiscoBSD/RetroBSD [2] on an ESP32, I like the idea of running on a MCU something that was originally meant for a PDP/11 (2.11 BSD)
[1] https://en.wikipedia.org/wiki/%CE%9CClinux
[2] https://github.com/chettrick/discobsd