Increasing RAM available to GPU on Apple Silicon Macs for running large language models

Apple Silicon Macs have proven to be a great value for running large language models (LLMs) like Llama 2, Mixtral, and others locally using software like Ollama. Their unified memory architecture gives both the GPU and the CPU access to main memory over a relatively high-bandwidth connection (compared to most CPUs and integrated GPUs).

High-end consumer GPUs from NVIDIA cost $2000+ and top out at 24GB of RAM. Using multiple cards requires more expensive motherboards and careful case, cooling, and power supply selection. Apple Silicon Macs are available with up to 192GB of RAM, MacBook Pros are available with up to 128GB, and refurbished systems are available at a deep discount.

By default, macOS allows two-thirds of this RAM to be used by the GPU on machines with up to 36GB of RAM, and up to three-quarters on machines with more than 36GB. This ensures plenty of RAM for the OS and other applications, but sometimes you want more RAM for running the LLM. Fortunately, the VRAM split can be altered at runtime using a kernel tunable.

You change these settings using a utility called sysctl, which has to be run with sudo. The key you use depends on whether you are running Ventura or Sonoma:

  • On Ventura: debug.iogpu.wired_limit
  • On Sonoma: iogpu.wired_limit_mb

By default they have a value of 0, which corresponds to the default split described above. If you want to increase the allocation, set the key to the value you want, in megabytes.

For example:

  • On Sonoma, sudo sysctl iogpu.wired_limit_mb=26624 will set the limit to 26GB.
  • On Ventura, the equivalent is sudo sysctl debug.iogpu.wired_limit=26624

I’d leave ~4-6GB free for the OS and other apps, but some people have reported success with as little as 2GB free for other uses. Before going that low I’d quit non-essential applications and make sure files are saved just in case there is a problem and the system panics.
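As a concrete sketch, here’s how the arithmetic works out on a hypothetical 64GB machine where you want to leave 8GB for the OS and other apps (shown with the Sonoma key; substitute debug.iogpu.wired_limit on Ventura — the machine size and headroom here are just example numbers):

```shell
# Hypothetical 64GB machine: reserve 8GB for the OS, allow the rest for the GPU.
TOTAL_GB=64
RESERVED_GB=8
LIMIT_MB=$(( (TOTAL_GB - RESERVED_GB) * 1024 ))
echo "$LIMIT_MB"   # 57344

# On Sonoma, you'd then run (the change doesn't persist across reboots):
#   sudo sysctl iogpu.wired_limit_mb=$LIMIT_MB
# and to return to the default split:
#   sudo sysctl iogpu.wired_limit_mb=0
```

Running sysctl with just the key name (no =value) prints the current setting, which is a handy sanity check before and after.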

Note: Ollama now runs models on the CPU if it thinks they are too big for the GPU RAM allowance. Originally, it based this on total system memory rather than the actual allowance. However, as of 0.1.28, Ollama honors this setting change, so you can run larger models on the GPU.

Fixing broken DNS/internet access when upgrading an OpenWRT + Unbound installation

I use OpenWRT firmware for my home WiFi access points and router/firewall. This morning I installed an update that broke my internet access. This post is about what I did to fix it.

One of the advantages of OpenWRT over the firmware the hardware vendor provides is that OpenWRT provides security updates for years. Another advantage is the variety of add-on packages available for extra functionality.

One of the downsides of OpenWRT is that firmware upgrades don’t include add-on packages. You have to re-install them after you install the main firmware upgrade. This is a bit tedious but hasn’t really been a big deal because most of the extra packages I install aren’t essential.

Recently though, I started using Unbound to perform recursive DNS locally in order to reduce the amount of information about my browsing habits easily available to my ISP. Unfortunately, once OpenWRT is configured to use Unbound, DNS requests on the router and the local network fail if Unbound isn’t working, or is no longer installed.

The solution is obvious: re-install Unbound. Unfortunately reinstalling Unbound involves downloading files over the Internet, and downloading files over the internet involves DNS lookups, which are broken, because Unbound is no longer installed.

To solve the problem I had to figure out how to revert OpenWRT to using dnsmasq to do DNS forwarding for long enough to re-install Unbound. I went poking around in the LuCI UI for an appropriate setting. I was hoping there was a simple checkbox, but at the same time, worried that the checkbox would only be available if Unbound support was installed.

After a bit of poking around, I found the setting I needed under Network > DHCP and DNS > Advanced Settings. About two-thirds of the way down the page there is a setting for DNS Server Port. It’s set to a non-standard port (1053 in my case) to stay out of the way of Unbound. Setting it back to 53 temporarily will restore DNS service. You can then update the package lists and re-install luci-app-unbound. Then you can change the port back to 1053 (or whatever it was set to on your system) and reboot. When the router comes back up, DNS should again be working through Unbound.

In summary: If you are using OpenWRT with Unbound and your internet access is broken after an OpenWRT update, you’ll need to re-install Unbound. In order to reinstall Unbound, you’ll need to temporarily change Network > DHCP and DNS > Advanced Settings > DNS Server Port to 53. This will restore internet access so you can reinstall Unbound.
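If you’d rather work over SSH than in LuCI, the same temporary port change can be made with uci. This is a sketch, assuming the stock dnsmasq section is the first entry in /etc/config/dhcp and that 1053 was your original Unbound-era port (package names can vary by release):

```shell
# Point dnsmasq back at port 53 so DNS works without Unbound:
uci set dhcp.@dnsmasq[0].port='53'
uci commit dhcp
/etc/init.d/dnsmasq restart

# With DNS restored, update the package lists and reinstall Unbound:
opkg update
opkg install luci-app-unbound

# Then move dnsmasq back out of the way and reboot:
uci set dhcp.@dnsmasq[0].port='1053'
uci commit dhcp
reboot
```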

What is iOS doing while my iPhone is sleeping?

The short answer is that I did some digging, and I still don’t know, but something doesn’t seem quite right.

I upgraded my 3-year-old iPhone XR to iOS 15 a few days after release. Overall I’ve been happy with it, but I noticed something odd a few days after upgrading. When I clicked the People album in Photos, which is populated by face detection and analysis algorithms, it said that it had only processed about half of the 42K photos in my photo library and that I should lock my phone and connect it to power to complete the process.

I didn’t think too much of it at first. I usually charge my phone at my desk in the morning, rather than overnight by my bedside, so I can use a solar-charged battery bank. This means it only spends a couple of hours a day locked and connected to a charger, instead of the average 8 hours most phones get when charged overnight. So, I left my phone locked and plugged into power for most of the day. I was surprised to see that the face scanning made little-to-no progress in that time.

I tried leaving it charging overnight without much improvement. I tried force quitting Photos before powering and locking the phone. That yielded some progress the first time I did it, but not the second or third time. I tried lots of different obscure things which didn’t make much difference either. I tried putting it in airplane mode before leaving it powered and locked, on the theory that perhaps iCloud was locking a critical resource and not releasing it. No one thing made an obvious impact, and certainly not twice. Nevertheless, in the space of a few days, it managed to process almost 40K of my photos in fits and starts. There were, however, 2,032 left, and the phone made no further progress against those.

At this point I started wondering if I could gain insight into what the phone was doing while locked and powered. I thought I remembered that the Console app on macOS could stream logs from iOS devices, in addition to those of the host Mac system. I started out by filtering the copious log messages by “face,” and then excluding terms like “interface.”

In time I learned that there was an agent whose name sounded related to photo processing, only it wasn’t getting the chance to run because it wasn’t compatible with something else that was running. By watching the logs a bit I learned of other agents that contended with each other for the chance to run on my sleeping phone. Many of them also seemed involved in analyzing photos.


I’ve seen most of them get chances to run. Once they run, they run for tens of minutes. However, they all take a back seat to the spotlightknowledged process, which runs for at least 15 minutes at a time, and seems to get the chance to run every other time. I’d guess that in the last 18 hours or so it’s been running for at least 10 hours. I have no idea what it’s doing, either.

Most of the other agents write a fair amount of information to the console. It can be pretty esoteric, but one gets a sense of progress. Spotlightknowledged says very little, and most of what it says comes at the beginning of a run. From then on, the main indication of progress is dasd giving updates on how long it’s been running and all the other jobs that aren’t running because of it. Then, at the end, often near the 15 minute mark, spotlightknowledged announces “SpotlightKnowledge – too long till check in” and then that it’s exiting. As often as not, dasd then gets spotlightknowledged running again, though sometimes it gives others a chance.

Google doesn’t turn up any information about spotlightknowledged, none at all! There is Siri Knowledge, which provides access, in various contexts, to information from Wikipedia, Wolfram Alpha, and perhaps other curated sources. There is also something called knowledgeC.db tucked away in /private/var/mobile/Library/CoreDuet/Knowledge/ which stores information on all sorts of your activities on your iOS device. Spotlightknowledged could be involved in either of those, both of them, or neither of them. Whatever it’s doing, it seems to be busy.

Or maybe spotlightknowledged is buggy? “SpotlightKnowledge – too long till check in,” along with the relative lack of other log messages makes me wonder if it never finishes what it’s setting out to do and starts all over again. In doing so it deprives other agents, like the one doing face analysis, of time to do their own work.

Apple Music Lossless and AirPlay don’t Work Like You Think They Should

UPDATE June 23, 2021: My testing, below, was done on macOS. There is evidence that the iOS version of Apple Music behaves differently with AirPlay v1 receivers. I will investigate further as time and ability permit.

June 25, 2021: More information that adds to the picture on iOS. It looks like Apple Music on iOS and macOS uses AAC when the app is set to use an AirPlay 2 receiver for output. However, when the output is an AirPlay v1 receiver, the iOS version apparently maintains a lossless chain from Apple’s servers through to the AirPlay receiver (macOS switches to streaming an AAC version from Apple’s servers and then uses ALAC to transport that over the LAN).

Earlier this month Apple released an update to Apple Music that allows “lossless” streaming in addition to the previous high-quality AAC compressed version. People assumed that they’d be able to play back these lossless streams losslessly over AirPlay because AirPlay uses lossless ALAC (Apple Lossless Audio Codec) to transport audio. Unfortunately, this isn’t true.

What is true:

Apple Music on macOS does not transfer lossless data to AirPlay 2 receivers when playing Apple Music lossless tracks. The situation on iOS is unclear at this point.

  • Apple doesn’t say anything about AirPlay on their Apple Music Lossless support page (as of 2021-06-17). They do say that HomePods, which are AirPlay 2 receivers, only support AAC playback at this time.
  • An AirPlay 2 licensee reports that Apple Music sends an AAC stream to their devices when playing back Apple Music Lossless tracks, despite device support for ALAC. Their example suggests they were using an iPad to send the stream to their AirPlay 2 receiver.
  • When playing an Apple Music Lossless track on macOS to an AirPort Express v2, the lossless icon is displayed. The data rate between my computer and the AirPort Express v2, which supports AirPlay 2, averages out to about 256kbps (delivered in bursts), rather than the steady 800-1000kbps I see when playing an ALAC rip I made of a CD.
  • Apple does use ALAC when playing to an original AirPort Express over the original AirPlay (which is all the hardware supports). This is evidenced by the steady 800-1000kbps data stream between my computer and the AirPort Express. However, in this case, Apple Music reports that the track being played is in AAC format.
  • The Wikipedia page on AirPlay is out of date.

There is no reason to think it works any differently on iOS devices, particularly since the AAC support was probably added, in part, to reduce power consumption on battery powered devices. AirPlay 2 added support for a variety of codecs, bit-depths and sample rates, in addition to the ALAC used by the original AirPlay/AirTunes protocol.

For some reason they seem to be bending over backwards to avoid an unbroken lossless chain between Apple Music’s servers and AirPlay (v1 and v2) receivers. We’ll see if this changes.


AirPlay, or AirTunes, as it was then called, originally only supported lossless transmission using the ALAC codec when it debuted in 2004 alongside the original AirPort Express. This remained true when Apple released the upgraded AirPort Express v2. Along the way Apple also added AirPlay support to the Apple TV along with licensing the technology to 3rd parties to incorporate into devices like AV receivers. People also cracked the encryption used by AirPlay and reverse-engineered the protocol leading to software like shairport-sync which has been incorporated into various commercial and open source products.

Then, in 2018, Apple released AirPlay 2 alongside the new HomePod smart speaker. They upgraded the Apple TV to support AirPlay 2, and, to people’s pleasant surprise, also released a firmware update for the long-discontinued AirPort Express v2 that added AirPlay 2 support. The new protocol was also licensed to 3rd parties for incorporation in their products.

AirPlay 2 enabled multi-room playback over AirPlay from iOS devices; they’d previously only been able to play to a single AirPlay endpoint at a time. Related to that, AirPlay 2 allowed multiple HomePod speakers to be used as stereo or surround-sound speakers. People assumed that this new functionality continued to use the lossless ALAC codec for transferring audio data between the Mac or iOS device and an AirPlay 2 receiver. The truth was more complicated.

In order to support the new use cases mentioned above, AirPlay 2 included some significant changes. First, it added a buffered mode that allowed a significant amount of data to be stored in RAM on the AirPlay 2 receiver device, rather than streaming it in realtime. More significantly, it added support for other codecs, bit depths, and sample rates besides the 16-bit, 44.1kHz ALAC used by the original AirPlay. One of those new combinations was the high-quality but lossy 256kbps AAC that Apple used for iTunes downloads. Many of these details are documented by people working to reverse-engineer AirPlay 2. Combined, these changes reduced power consumption for mobile devices and allowed more headroom to avoid glitches when playing to multiple AirPlay 2 receivers at once.
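The data rates I measured line up with a quick back-of-the-envelope check of what raw CD-quality PCM requires:

```shell
# Raw CD-quality PCM bitrate: 16 bits x 44,100 samples/sec x 2 channels
echo $(( 16 * 44100 * 2 ))   # 1411200 bits/sec, i.e. ~1411 kbps
```

ALAC typically losslessly compresses that to the steady 800-1000kbps I observed, while the AAC streams Apple uses run at a bursty 256kbps.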

When Apple announced that Apple Music would support CD-quality lossless audio, people (including me) assumed that it would work with AirPlay devices because they too supported CD-quality lossless audio. However, as I detailed above, this has proven not to be the case.

It seems like Apple has gone to some effort to avoid an unbroken chain of lossless audio from Apple’s servers to AirPlay 2 receivers. I speculate they may be doing this to keep people from downloading and distributing high-quality lossless content by way of the already-cracked AirPlay v1 protocol, or an eventual compromise of AirPlay v2. Given this, I wonder if we’ll ever get the fully lossless signal chain we want over AirPlay, at least not without a new version of the protocol, a change that may require a “forklift” upgrade of AirPlay receivers to take advantage of.

Or maybe it’s all a bug. Apple Music’s Lossless and Atmos roll-out certainly has come with plenty of other glitches and blunders.


  • I’ve only focused on the evolution of AirPlay audio support; it also has support for photos, video, and display extension/mirroring.
  • On a Mac, if you choose “Computer” as the output in Apple Music and then choose an AirPlay (v1 or v2) device as your output from the macOS “sound” menu, it will use a lossless signal chain, but it will run the output through the system mixer (so system sounds like alerts will come out of the AirPlay receiver).

The Absence of a BTS “Army Bomb” Teardown

There was a discussion on the forum for the WLED firmware for controlling LED pixel strips about how the K-pop group BTS coordinates light-shows at its concerts using “Army Bombs.” Army Bombs are light-sticks that fans buy and take to concerts. They can be centrally controlled and used to put on huge stadium-wide light shows.

Someone in our house is a fan of BTS. They got an “army bomb” in preparation for a show in the US that ended up being cancelled due to COVID-19. I’m not allowed to disassemble it, but I was able to look at the label in the battery compartment and get the FCC ID and find their certification info.

For some reason the internal photos are embargoed until 10/11/2010, despite the thing being in the wild. However, the test report reveals that it uses…BLE, which I already knew. It doesn’t seem to have another radio, though. I did find internal photos for a similar device, registered in 2016, from the same manufacturer. The markings on the Bluetooth SoC aren’t legible in the photos, unfortunately.

I can’t find any real teardowns, but one fan took hers apart. No closeups, but it’s clear there isn’t an IR receiver anywhere IR could be received.

The company that makes the Army Bomb is Fanlight.

Open Source and Hardware Engine Management Systems (ECUs)

I recently fell down a rabbit hole of car modding videos on YouTube. Most/all of the videos I watch involve engine modifications, and there inevitably comes a time when the car gets a new tune on a dyno in order to realize the potential afforded by the mods.

Tuning a modern car involves tweaking parameters in the engine management system, or engine control unit (ECU). Budget builds often make use of the OEM ECU that originally came with the engine (which may or may not be the engine originally installed in the chassis), but the addition of forced induction via a turbo or supercharger to a normally aspirated engine usually calls for an aftermarket ECU.

Aftermarket ECUs offer another advantage for older vehicles; by using existing engine sensors, and perhaps adding a few more, a lot of engine control functions can be consolidated in the ECU, rendering various vacuum controls and mechanical linkages unnecessary. This can help clear space in a crowded engine bay, and also improve reliability, serviceability and, potentially performance, fuel economy and emissions.

From my research, tuning an OEM ECU may involve hundreds of dollars in software and hardware. A modern aftermarket unit suitable for upgrading a four cylinder engine starts at about $850, and one can spend more than twice that for an advanced feature set and the ability to run sequential ignition and fuel injection on a six or eight cylinder engine.

The prices of aftermarket ECUs aren’t outrageous compared to the cost of parts or labor for a big project, but they start looking more substantial if you plan to do most of the work yourself while scavenging parts as cheaply as possible. This got me wondering about whether there was a community of people either developing open source firmware for common OEM ECUs, or perhaps custom hardware.

It didn’t take too much looking to find two active projects with healthy communities around them, Speeduino and rusEFI.

Speeduino has been around since X. In 2017 it was a finalist for the Hackaday Prize. The author, Josh Stewart, started out by using Arduino-based hardware to run a lawnmower engine. By now it’s been used to run a variety of engines with 4-8 cylinders. The hardware is based on an Arduino Mega. It adds a robust automotive power supply, protection for the I/O channels, and driver circuitry suitable for ignition coils, fuel injectors, and other components.

A board capable of sequential fuel injection and ignition on 4 cylinder engines is available assembled for under $200. A unit that is plug-compatible with the ECU on a first generation Miata is available for ~$250, including an external housing.

Interestingly (to me), the Speeduino firmware takes advantage of the Arduino build environment and some libraries. This has enabled people to port the firmware to more capable ARM-based Arduino-like devices, like the Teensy. These ARM-based platforms afford the possibility of more advanced peripherals, like CANbus controllers, more memory for data logging, and more headroom, allowing things like unlimited software timers to replace the limited number of hardware timers on the ATmega.

I think rusEFI has been around since at least 2013. There are already some great hardware options. $270 gets you a unit with a robust waterproof case, and someone is gearing up to sell a board capable of running sequential ignition and fuel injection on a V-12 engine.

Both systems (currently?) rely on 3rd-party commercial software for the tuning process, but that software, TunerStudio, is available for less than $100.

I could do a much longer post, but this post is long enough, so, this is where it ends.

Apple Store Feedback

I recently visited an Apple Store to get my MacBook Pro’s “butterfly” keyboard replaced. After the visit I received a survey asking about the experience. I ended up writing a long (considering the context) critique, and thought I’d also post it here.

It’s too noisy and chaotic. The chaotic feeling is partially due to the noise, but also because the design of the store doesn’t offer affordances for customers with differing levels of experience and differing priorities.

When I go to an Apple store, I generally have a sense of urgency, otherwise, I’d be handling things through the Internet and mail.

I’ve been to the Apple store for service 2-3 times in the past 12-24 months, and yet, I’m still not confident in my understanding of what needs to happen when I’m in the store. Where do I need to go? Who do I need to talk to? Is that a black shirt, or a dark navy blue shirt?

I go to an Apple store because I have some sort of problem I need to solve, either with a purchase, or by getting service or tech support. When I walk through the door, I should start feeling calmer. Instead, I feel more agitated. When I leave, I should feel, if not satisfied, then at least relieved. Instead I have lingering agitation.

There are too many people doing too many things in an undifferentiated space. I imagine that having classes share that space is supposed to invite people into the session. Maybe it does, but it also adds amplified noise to the entire space. The fractured repetition of a GarageBand class is literally crazy-making for those in the service/support area. It makes conversations difficult, and makes waits seem longer.

I am happy, though, with the latitude that the support staffer had to make things right with the manufacturing defects in my out-of-warranty computer. I’d prefer the keyboard just worked and the screen didn’t have growing blemishes, but one of the reasons I stick with Apple is that I know you’ll make things right.

IoT Software Notes

I’m trying to get my head around all the IoT “hub” software options. I’m focusing on software that runs locally, rather than proprietary cloud services like IFTTT.

List o’ IoT Softwares

Proprietary

  • Hubitat (link) Proprietary gateway that works without “the cloud.” Provides an API for apps and drivers and a SmartThings-compatible scripting language. Ships with support for ZigBee, Z-Wave, and IP-based devices.
  • Phillips Hue Bridge
  • Samsung SmartThings Hub (link). Closed source running on a proprietary hardware hub, but extensible, and works with both Samsung and 3rd-party IoT hardware over ZigBee, WiFi, and Z-Wave.

Open Source

Home Assistant

“Open source home automation that puts local control and privacy first. Powered by a worldwide community of tinkerers and DIY enthusiasts.”

Has native (Python) HomeKit support


Homebridge is open-source software built on Node.js.

It describes itself as “HomeKit support for the impatient”

It provides a path to Apple HomeKit support for a variety of 3rd party ecosystems, like Samsung SmartThings by emulating the HomeKit API.

People have provided plug-ins and guides to enable all sorts of useful integrations.

Wyze Cam HomeKit support using Homebridge and Wyze RTSP firmware

Mozilla Webthings Gateway


Other Notes

Wyze Cam Notes

A year or so ago I bought a Wyze Cam to try out. I’d wanted an IP camera for a while, but with the exception of a ~$15 Gearbest special, I’d deemed them all too expensive to experiment with. The Wyze Cam was different. It had a good reputation, and was available for $20 + shipping. So, I bought one. My first impressions of the hardware and image quality were favorable; the software, less so.

It’s clear to me that Wyze was trying to differentiate itself on a combination of low price and ease of use. They focused on a few simple scenarios as a route to ease-of-use. The Wyze Cam wasn’t/isn’t a typical security/surveillance camera, intended for watching home and exterior property. Instead it was more a personal, indoor surveillance camera. The Wyze Cam debuted as the sort of thing you could use to check in on your child sleeping or playing elsewhere in your house, or see which of your dogs was knocking stuff off the kitchen counter.

The software, both the camera firmware and the smartphone app needed to access the camera, was basic, but capable enough to serve these scenarios. The camera firmware could identify motion using crude techniques, and used that motion detection capability to trigger a cascade of events. Motion triggered the camera to upload to Wyze’s cloud service the 5s of video from before the motion was detected and the 5s after. It could also store the same video locally, if there was an SD card in the camera.

Their cloud service can send notifications of new video clips to the smartphone app, from which the stored video can be viewed. The app also allows viewing the live stream remotely over any Internet connection, viewing video stored on a self-provided TF card in the camera itself when on the local network, and viewing video clips stored in the cloud over the previous two weeks.

Wyze was successful in delivering these basic features, but the overall experience was a bit clunky. Getting the camera connected to my local WiFi was relatively painless, and once connected, access when I was away from home just worked (except for a few occasions when it didn’t work at all). Accomplishing things in the app often took more steps than seemed necessary and some actions were hit-or-miss.

The (free) cloud service also imposed limits on how often video clips could be uploaded, presumably in order to manage the cost to Wyze of offering the free service. After a clip was recorded, the camera waited ~10 minutes before motion could trigger recording of another clip. This would probably be fine for keeping watch on the interior of one’s home while away at work or on vacation, but, combined with other limitations, it made the camera considerably less useful for keeping watch on the outside.

The camera firmware allows some coarse tuning of the motion detection. You can adjust how much the image has to change before triggering. You can also limit motion detection to a (rectangular) subset of the camera’s field of view. The utility of these limited settings really depends on the specifics of the scenario you are trying to accomplish. If you want to keep close track of who visits your porch, you can limit the motion detection region and turn the sensitivity down enough to minimize false positives from car headlights, the sun going behind a cloud, or moths flitting around, but then you lose any utility for seeing who is leaving dog crap in your yard.

Another limitation is that browsing the video stored locally is clunky. There is no way to scrub through it faster than realtime. You just have to jump forward manually and wait for the network to catch up. If you want to see more than 10s around a motion event, you have to take note of the time of the triggered event, then switch to viewing stored video, then navigate to the proper time — there is no shortcut to jump from a stored 10s clip to the same time in the locally stored continuously recorded video with a single tap.

These limitations were enough that I stopped using the camera and just left it running on the assumption that if some event was severe enough, like, say, an animal sacrifice on my front lawn, I might go to the trouble of trying to find and view whatever the camera captured.

That assumed that the camera captured anything at all. I had trouble with the camera crashing or losing track of the TF card and not recording video again until I noticed and rebooted the camera, and/or ejected and reinserted the card.

There have been some other severe annoyances, too. You have to log in to Wyze’s service to use your camera, even though the only scenario that actually requires an account is sharing access with another person. Worse, the login expires after some number of days without use. Once your login expires, the app doesn’t take advantage of Face ID on iOS for a quick login; you have to type your password in.

Recently, after more than six months of disuse, I decided to try the camera again — I’d grown fed up with people leaving their dogs’ bagged crap in my waste bins, particularly when they put it in the recycling or yard waste bins.

Logging in to the app is still more tedious than it should be. Some operations are smoother and less hit-or-miss than they were, but I still find that a number of actions take more steps than they really should.

In the end, the Wyze Cam didn’t catch anyone in the act, but it did catch someone behaving like someone might if they were about to put poorly-bagged dog crap in my yard waste bin before the clip cut out.

It’s still tedious to review the continuously captured video from the TF card. Pulling the TF card is difficult because of where and how the camera is mounted, so I’m limited to using the clunky, slow UI in their iOS app. I can’t download an hour’s or a day’s worth of video quickly to my phone for easier/faster review using a native UI on my phone or computer; I can only “record” the stored video in real time — if I want to download an hour’s worth of video, I need to start “recording” in the app and leave it running for an hour. Ridiculous.

Wyze has made one major improvement since I first got the camera, though. Earlier this year they added “person” detection to the firmware, using software licensed from a third party. Once enabled, it works by waiting for the camera to detect motion, then software running on the camera does further processing to see if the captured footage contains a human being (generic). If it does, the captured clip is marked as containing a person. Notifications can then be limited to clips that contain a person. The list of captured clips also notes whether or not a person is detected.

I want to underscore a few important details about the person detection. First, it runs entirely on the camera, which means it isn’t being used to help train some big tech company’s Artificial Intelligence brats, like some sequel to The Handmaid’s Tale (unless you submit clips for review on an individual basis). Second, it doesn’t identify specific people, just the presence of a generic human being.

The accuracy is quite good, too. I’ve seen very few false positives (detected a person when there is no person) or false negatives (didn’t detect a person when motion was detected and there was a person in-frame).

The motion detection works well enough that I’ve turned on notifications. At this point, though, more than half of the people detected are walking by with dogs, which isn’t something I need to know about right away. I think I’m going to adjust the motion detection region and limit it to my porch. I won’t be able to use the camera to catch dog crap offenders, but it’ll be good for checking to see who is at the front door and what packages have been delivered.

Wyze does still have the option of making their camera useful for outdoor surveillance. My understanding is that the crude motion detection is a result of the limited motion detection support provided by the SoC (system on chip) and the SDK the manufacturer provides. The SoC isn’t very powerful, but the fact that it can run person detection algorithms in near realtime means that it is powerful enough to do simpler image processing. It should be possible to filter motion detection through non-rectangular regions, and/or adjust and apply the detection sensitivity on a region by region basis. It should also be possible to better filter false positives caused by insects attracted to the camera’s infrared illuminator LEDs at night.

For $20, Wyze Cam is an interesting product, and it is even a useful product for some scenarios. If my dog were still living, I’d probably have a few Wyze Cams just to see what it does when we aren’t at home and it wasn’t sleeping. I may be picking up a few more Wyze Cams soon though, because we’ve started thinking we might be ready for a new dog.

Solar Panel Notes

I’ve been tinkering with a small, semi-portable solar setup and have an eye to upgrading it. These are my notes. My facts may be incorrect; they are certainly incomplete.

My system is currently configured as:

  • 18W, 18v flexible Sunpower panel.
  • CN3791 MPPT charge controller
  • LiIon battery pack(s) made from 1S20P Samsung 28A 18650 cells. Configured as two packs of 1S6P and one pack of 1S8P, each with a cheap protection board. Estimated capacity is ~150Wh
  • ~15W 6v folding, portable panel, made from Sunpower cells. I’ve added a bypass for the buck converter that supplies regulated 5V USB so I can use it with MPPT controllers.
  • 21W 6V folding, portable panel, made from Sunpower cells. I’ve added a XT30 connection so I can swap in different loads, including the original 5v USB buck regulator, a different buck regulator, or an MPPT controller.
  • These folding panels are hooked up in parallel and connected to another CN3791 MPPT charger.
  • Both chargers are connected in parallel to the battery bank. This could have some weird effects, particularly as the pack reaches 4.2v and goes into constant voltage mode.
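The ~150Wh estimate can be sanity-checked with quick arithmetic. A sketch, assuming the Samsung 28A cells are nominally 2.8 Ah at 3.6 V (their rated figures) and applying a hypothetical derating for age and protection-board cutoffs:

```python
# Rough capacity check for the 1S20P pack described above.
# Assumptions not in the notes: 2.8 Ah nominal per Samsung 28A cell,
# 3.6 V nominal cell voltage, ~75% usable after derating.
cells = 6 + 6 + 8          # two 1S6P packs + one 1S8P pack
ah_per_cell = 2.8          # nominal capacity of a Samsung 28A 18650
v_nominal = 3.6            # nominal LiIon cell voltage

nominal_wh = cells * ah_per_cell * v_nominal
usable_wh = nominal_wh * 0.75   # hypothetical derating

print(f"nominal: {nominal_wh:.0f} Wh, usable: ~{usable_wh:.0f} Wh")
# nominal comes out around 202 Wh; a ~75% derating lands near the
# ~150 Wh estimate in the notes
```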

The panels are not optimally deployed. They are lying flat, and due to trees, etc, only get unobstructed sun for ~4-6 hours a day. In this arrangement, peak power for the 18W panel has been about 10-12W.

The 6v panels are even more suboptimally deployed due to sitting on the floor of a window platform for cats, which means they are obscured at times by the frame and even the 18W panel which rests above them.

I also have a laminated ~6W 5V Sunpower panel that is currently unused. It originally had a buck regulator to power USB devices, but I removed it and replaced it with a quick release terminal so I can use it directly with a battery charger.

I’d like to move up to 100-200W of panel capacity before a new wave of Trump’s dumbass tariffs hit. Options under consideration:

  • Rigid mono or polycrystalline panels.
    • Polycrystalline is currently slightly cheaper per nameplate wattage, but maybe not enough to be compelling. Currently <$1/W.
    • Pro: cheapest option. Con: since I’m not making a permanent installation, their weight and the fragility of the glass are a concern.
  • Flexible panels. Lots of options, most of them dubious.
    • The cheapest flexible panels are available at a ~20-50% premium over rigid panels.
      • Use PET encapsulation on the sun-facing side, which isn’t suitable for constant environmental exposure.
      • Use cell constructions that don’t hold up to flexing and don’t deal well with microcracks that develop in the silicon wafer due to flexing.
      • Use panel interconnects that won’t hold up to flexing.
    • Quality flexible panels are 2-3x as expensive as cheap rigid panels.
      • Use an ETFE top layer for long life and durability.
      • Use primarily Sunpower, but occasionally Day4 or Merlin cells, which are well suited for the challenges of flexible substrates.
      • Use rugged, flexible interconnects.
    • Folding flexible panels.
      • One common variety uses ~6v, 7W subpanels connected in parallel to power a 5V USB buck regulator. The subpanels are made from twelve Sunpower offcuts in series. These are typically encapsulated in PET and sewn into ballistic nylon covers with cardboard for added stiffness. Newer designs use ETFE and may forgo the fabric construction in favor of a fully laminated construction with a panel thickness of 2-3 millimeters.
      • ~$1/W at the low end, >$2/W for branded products like Anker or RavPower.
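The price tiers above are easiest to compare on a $/W basis. A quick sketch, using a placeholder rigid-panel price (the notes only say “<$1/W”) and the stated premiums rather than real quotes:

```python
# Rough $/W comparison for the panel options above. The rigid price is
# a placeholder; the other tiers are derived from the stated premiums.
rigid_per_w = 0.90                                   # hypothetical $/W, rigid

cheap_flex = (rigid_per_w * 1.2, rigid_per_w * 1.5)  # ~20-50% premium
quality_flex = (rigid_per_w * 2, rigid_per_w * 3)    # 2-3x rigid

print(f"rigid:        ${rigid_per_w:.2f}/W")
print(f"cheap flex:   ${cheap_flex[0]:.2f}-{cheap_flex[1]:.2f}/W")
print(f"quality flex: ${quality_flex[0]:.2f}-{quality_flex[1]:.2f}/W")
```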

Background Information

  • Panel Basics
    • Solar panels are constructed from multiple photovoltaic (PV) solar cells in series.
    • A typical PV solar cell has an optimal voltage of about 0.5-0.6v, which is determined by the bandgap of the doped silicon junction.
    • The number of cells assembled in series determines the panel voltage.
    • Panel voltages are generally matched to their intended application.
      • Six cells in series (6S) are well suited for 3V electronics of the sort powered by two Alkaline cells in series or a single lithium metal cell (like the ubiquitous CR2032 button cell).
      • Ten (5V) to twelve (6V) PV solar cells in series are typically used to charge/power 5V USB devices by way of a buck-converter voltage regulator. These configurations are also well suited to charging Lithium Ion batteries, which are used in smartphones and most other battery-powered devices that can be charged from USB.
      • Panels made from 32-36 cells in series are common. They have an optimal voltage of 18V, but are often labeled as 12v because they are used to charge 12v lead acid batteries without a regulated charging circuit. They are also used with LiIon batteries in conjunction with a suitable charging controller. ~100W, 18V panels are often connected in series for higher-voltage, higher-power systems, including AC systems.
      • 150-300W panels with 50-72 PV cells in series are also used in larger installations.
    • Panel Construction
      • Panels are assemblies of multiple, electrically interconnected solar cells. They protect the component PV cells from the elements, and provide support when deploying and mounting the cells.
      • Rigid
        • Framed laminate panels sandwich the cells and their interconnects between glass and a sturdy backing material. The laminated panel is then held in an aluminum frame to enhance rigidity and provide protection, support, and points of attachment for mounting the panel.
        • Cast panels are typically under a few watts of power. They seal the cell in protective epoxy or another cast resin.
      • Flexible
      • Laminated
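The cells-in-series arithmetic above can be sketched directly. A small helper using the ~0.5-0.6V per-cell figure from the notes (the function name and bounds are mine, for illustration):

```python
# Estimate a panel's optimal (max-power) voltage range from its series
# cell count, using the ~0.5-0.6 V per-cell figure noted above.
def panel_vmp(cells_in_series, v_cell_min=0.5, v_cell_max=0.6):
    """Return (low, high) optimal-voltage bounds in volts."""
    return cells_in_series * v_cell_min, cells_in_series * v_cell_max

for n in (6, 10, 12, 36):
    lo, hi = panel_vmp(n)
    print(f"{n:2d} cells in series: {lo:.1f}-{hi:.1f} V")
# 36 cells gives 18.0-21.6 V, consistent with "18V" panels sold as "12v"
```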