Friday, February 6, 2026

Three levels of AI usage

When researching a topic, there are three levels of AI usage:

  1. Ask "what" (better Google search)
  2. Ask "why" (requires curiosity)
  3. Ask if a specific alternative could be used (requires experience)

Examples:
  1. What is the difference between a mutex and a semaphore?
  2. Why would I choose a microkernel architecture over a monolithic one for this specific embedded system?
  3. I'm currently using FreeRTOS for this task, but would an Event-Driven State Machine be more power-efficient for this specific low-power MCU?

Sunday, February 1, 2026

The first rule of convincing others: Don't be a jerk

We are often raised on the romantic myth of the lone genius. We love the story of the visionary who sees a truth no one else can see, fights the ignorant masses, and eventually is proven right. The 19th-century Hungarian physician Ignaz Semmelweis is the poster child for this myth. 

Semmelweis discovered in 1847 that doctors were literally carrying death on their hands. By mandating a hand-washing protocol at the Vienna General Hospital, he slashed maternal mortality rates from 18% to less than 2% almost overnight.

However, he was a diplomatic disaster. When his colleagues didn't immediately adopt his findings, he didn't refine his argument or seek allies. He called his peers murderers and irresponsible ignoramuses. Most people would rather believe the data is wrong than believe they are monsters. His life ended in a mental asylum, beaten by the guards.

The antiseptic revolution required the arrival of Joseph Lister, a man who was as tactful and methodical as Semmelweis was erratic and angry, to finally make the idea stick. The twenty-year gap between Semmelweis’s data and the medical world’s adoption of hand-washing represents thousands of preventable deaths.

If you believe you have discovered something vital, you have to be likeable enough to be heard. If your language attacks the listener’s intelligence or character, they will stop listening to your data. If you ignore the human element of your truth, you aren’t being a martyr; you’re being an obstacle to your own cause.

Saturday, January 10, 2026

Advice for a New Avionics Software Engineer

A new computer engineering graduate who started working at an avionics company last week was given documents such as DO-178C to read. As you might guess, this is rather boring, and he asked me how he could make the initial learning phase more interesting. He is already using NotebookLM to convert the documents to audio and to ask questions.

Since avionics involves safety-critical software development, where we don’t just care whether the code works but also how it fails, I suggested that he write a toy software project in which he simulates sensors (such as an angle-of-attack sensor) and asks an AI about typical failure conditions. The sensors might produce out-of-range values, stop working for some time and then start again, or stay within bounds but show sudden jumps (e.g. GPS positions under spoofing). On a sensor error, his code should first enter a degraded mode (using the previous good value and displaying a warning message) and, if the error persists, transition to a safe mode (displaying an error message). He could then ask the AI which types of hardware defects can be detected by software (hint: Error Correction Code). This exercise would make the concepts of safety-critical software development more concrete.
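The degraded/safe transition described above can be sketched as a tiny state machine. A minimal sketch follows; all names, value ranges, and thresholds are made up for illustration, whereas a real system would derive them from requirements:

```python
from enum import Enum

class Mode(Enum):
    NORMAL = "normal"
    DEGRADED = "degraded"   # hold last good value, show a warning
    SAFE = "safe"           # stop trusting the sensor, show an error

class SensorMonitor:
    """Toy monitor for a simulated angle-of-attack sensor (names hypothetical)."""

    def __init__(self, lo=-20.0, hi=40.0, max_bad=3):
        self.lo, self.hi = lo, hi   # plausible AoA range in degrees (illustrative)
        self.max_bad = max_bad      # consecutive bad samples before safe mode
        self.bad_count = 0
        self.last_good = 0.0
        self.mode = Mode.NORMAL

    def update(self, reading):
        """Return the value the rest of the system should use, or None in safe mode."""
        if self.mode is Mode.SAFE:
            return None                 # safe mode is latched in this sketch
        if reading is None or not (self.lo <= reading <= self.hi):
            self.bad_count += 1
            if self.bad_count >= self.max_bad:
                self.mode = Mode.SAFE   # persistent fault: stop using the sensor
                return None
            self.mode = Mode.DEGRADED   # transient fault: warn, hold last good value
            return self.last_good
        self.bad_count = 0
        self.mode = Mode.NORMAL
        self.last_good = reading
        return reading

m = SensorMonitor()
for r in [5.0, 6.0, 999.0, None, 120.0, 7.0]:
    value = m.update(r)
    print(f"reading={r!r:>6}  mode={m.mode.value:>8}  use={value!r}")
```

Feeding it two good samples, then three bad ones, shows the normal → degraded → safe progression; whether safe mode should latch or recover is exactly the kind of design question worth discussing with an AI or a colleague.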

The next step would be to understand how avionics sensors actually work, which would increase his domain knowledge. Adding simple mathematical models for the sensors and a bit of digital signal processing to his toy project would also be a very useful learning experience.
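As a first mathematical model, he could treat the sensor as a true signal plus noise and clean it up with a first-order low-pass filter, one of the simplest pieces of DSP. The signal, noise level, and filter constant below are arbitrary choices for illustration:

```python
import math
import random

random.seed(0)  # reproducible noise for the toy experiment

def true_signal(t):
    """Hypothetical 'true' angle of attack in degrees: a slow sinusoid."""
    return 4.0 * math.sin(0.1 * t)

def sensor_reading(t, noise_std=0.5):
    """Model the sensor as the true value plus Gaussian noise."""
    return true_signal(t) + random.gauss(0.0, noise_std)

def low_pass(samples, alpha=0.2):
    """First-order IIR low-pass filter: y[n] = alpha*x[n] + (1-alpha)*y[n-1]."""
    y = samples[0]
    out = []
    for x in samples:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out

ts = [0.05 * n for n in range(200)]   # 10 seconds sampled at 20 Hz
raw = [sensor_reading(t) for t in ts]
filtered = low_pass(raw)

# The filter trades a little lag for much less noise.
raw_err = sum(abs(x - true_signal(t)) for x, t in zip(raw, ts)) / len(ts)
filt_err = sum(abs(x - true_signal(t)) for x, t in zip(filtered, ts)) / len(ts)
print(f"mean abs error: raw={raw_err:.3f}, filtered={filt_err:.3f}")
```

Experimenting with `alpha` makes the classic trade-off visible: a smaller value suppresses more noise but lags further behind the true signal, which matters for a fast-changing quantity like angle of attack.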

He could ask experienced engineers at his company how they arrived at the safety level for the system they are currently developing. What kinds of hazards did they take into account? How did they calculate their probability values?

For lower-level topics, he could look into hard real-time concerns such as interrupt latency and jitter, and how cache misses, pipelining, and branch prediction adversely affect determinism and worst-case execution time (WCET).
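A desktop OS offers no hard real-time guarantees, so even a toy measurement makes jitter tangible. This sketch (plain Python, all numbers illustrative) runs a nominally periodic 10 ms task and records how late each wake-up is relative to its ideal tick:

```python
import time

# A nominally periodic 10 ms task. On a desktop OS the wake-up jitter is
# unbounded, which is exactly what a hard real-time system must prevent.
period = 0.010
jitters = []
next_wakeup = time.perf_counter() + period
for _ in range(100):
    time.sleep(max(0.0, next_wakeup - time.perf_counter()))
    now = time.perf_counter()
    jitters.append(now - next_wakeup)  # lateness relative to the ideal tick
    next_wakeup += period              # fixed schedule, so error does not accumulate

print(f"max jitter: {max(jitters) * 1e6:.0f} us")
```

Running the same experiment as a high-priority task on an RTOS would show jitter that is orders of magnitude smaller and, crucially, bounded; the bound is what a WCET-based schedulability analysis relies on.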

Lastly, he could read about or watch analyses of software and aircraft failures to get an idea of how systems fail. For example, he could ask an AI why the Boeing 737 MAX MCAS did not use both angle-of-attack sensors, despite the aircraft already having two. He could then ask whether a relatively simple solution could have been found while staying within the original cost constraints. One possible answer for MCAS might be: if the designers had limited MCAS to a single input and ensured that it disengaged when the pilot pulled back on the yoke, it would have preserved the commonality assumption in normal flight while also preventing the catastrophic failure mode.
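To make this kind of what-if analysis concrete, he could sketch the candidate mitigations in code. The toy model below is emphatically not the real MCAS logic; every name and threshold is hypothetical. It combines three ideas from the discussion above: cross-checking the two AoA sensors, limiting the system to a single trim input, and disengaging on pilot pull-back:

```python
class McasSketch:
    """Toy model of the discussed mitigations; all names and thresholds hypothetical."""

    def __init__(self, trigger_aoa=15.0, disagree_limit=5.5):
        self.trigger_aoa = trigger_aoa        # AoA above which trim would engage
        self.disagree_limit = disagree_limit  # max tolerated sensor disagreement
        self.activated_once = False           # "single input": one trim command only

    def step(self, aoa_left, aoa_right, pilot_pulling_back):
        """Return True if a nose-down trim command would be issued this cycle."""
        if pilot_pulling_back:
            return False                      # yoke pull-back disengages the system
        if abs(aoa_left - aoa_right) > self.disagree_limit:
            return False                      # sensors disagree: refuse to act on bad data
        if self.activated_once:
            return False                      # no repeated automatic trim inputs
        if min(aoa_left, aoa_right) > self.trigger_aoa:
            self.activated_once = True
            return True                       # a single nose-down trim command
        return False
```

Even a sketch this small invites the right questions: what should happen after the single activation, how should a disagree condition be annunciated to the crew, and which of these checks the real certification process would have demanded evidence for.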

Avionics is a fascinating field, offering endless opportunities for exploration. By being curious and working hard, he can, within a few years, become a valuable engineer who not only solves problems correctly but also spots problems worth solving.

Music: Khaled - Aicha