Embedded project development process with significant hardware changes

I have a good understanding of the Agile development process, but I don't know how to apply it to an embedded project with significant hardware changes.

Below I describe what we are doing now (an ad-hoc approach, with no defined process yet). Changes fall into three categories, and a different process is used for each:

  • Complete hardware change

    Example: switching to a different video codec IP

    a) Learn the new IP

    b) RTL / FPGA simulation

    c) Fix interface issues and go back to b)

    d) Wait until the hardware is ready (tape-out)

    e) Test on the real hardware

  • Hardware improvement

    Example: improving image quality by refining the underlying algorithm

    a) RTL / FPGA simulation

    b) Wait for the hardware, then verify on the hardware

  • Minor change

    Example: only the device register layout changes

    a) Wait for the hardware, then test on the hardware

My concern is that we don't seem to have much control over, or confidence in, the maturity of the software in the face of hardware changes. This confidence is critical to the project's success: the release schedule is always very tight, and customers expect a smooth transition when upgrading to a new hardware version.

How have you dealt with this kind of hardware change? Did you solve it with a Hardware Abstraction Layer (HAL)? Did you have automated tests for the HAL? A HAL works well for a mature product, but it may not work well for a rapidly changing consumer product. How did you test when the hardware platform was not ready yet? Do you have a well-documented process for this kind of change?

1 answer

Adding a Hardware Abstraction Layer (HAL) is mandatory if you expect the underlying hardware to change over the life of the product. Done correctly, it lets you write unit tests for both sides of the HAL.
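To make this concrete, here is a minimal sketch of what "both sides of the HAL" can look like in C. The names (`display_hal`, `draw_hline`, the stub driver) are hypothetical, not from any real codebase: the HAL is a struct of function pointers, the upper layers call only the interface, and any driver — real silicon, FPGA build, or test stub — can be plugged in behind it.

```c
#include <stdint.h>

/* Hypothetical display HAL: upper layers see only this interface,
 * so the driver behind it can be real hardware or a test double. */
typedef struct {
    int (*init)(void);
    int (*draw_pixel)(int x, int y, uint32_t rgb);
    int (*flush)(void);
} display_hal;

/* Upper-layer code is written against the interface, not the hardware,
 * and propagates driver error codes unchanged. */
static int draw_hline(const display_hal *hal, int y, int x0, int x1, uint32_t rgb)
{
    for (int x = x0; x <= x1; ++x) {
        int rc = hal->draw_pixel(x, y, rgb);
        if (rc != 0)
            return rc;
    }
    return hal->flush();
}

/* A trivial stub driver that only counts calls, to show the wiring. */
static int stub_pixels = 0;
static int stub_init(void)                     { stub_pixels = 0; return 0; }
static int stub_draw(int x, int y, uint32_t c) { (void)x; (void)y; (void)c; ++stub_pixels; return 0; }
static int stub_flush(void)                    { return 0; }

static const display_hal stub_hal = { stub_init, stub_draw, stub_flush };
```

Because the upper layer receives the HAL as a parameter, swapping the stub for the real driver is a one-line change at the call site, not a rebuild of the GUI code.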

For example, suppose the HAL is just an API between your GUI and the real display hardware. You can write a fake driver that does not control a physical device but responds in different ways, to verify that your upper API layers handle every possible response code from the HAL. It might render into an in-memory bitmap (instead of doing external I/O) that can be compared against an expected bitmap to check that the output is correct.
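A fake driver along those lines might look like the sketch below (all names are illustrative). It renders into an in-memory framebuffer instead of doing external I/O, injects errors on demand so the upper layers' error handling can be exercised, and exposes a byte-for-byte comparison against a golden bitmap.

```c
#include <stdint.h>
#include <string.h>

#define FAKE_W 8
#define FAKE_H 8

static uint32_t fake_fb[FAKE_H][FAKE_W];  /* in-memory "screen"            */
static int      fail_after = -1;          /* error injection: -1 = never   */

static int fake_init(void)
{
    memset(fake_fb, 0, sizeof fake_fb);
    fail_after = -1;
    return 0;
}

static int fake_draw_pixel(int x, int y, uint32_t rgb)
{
    if (fail_after == 0)
        return -1;                        /* simulated driver fault        */
    if (fail_after > 0)
        --fail_after;
    if (x < 0 || x >= FAKE_W || y < 0 || y >= FAKE_H)
        return -2;                        /* out-of-range error code       */
    fake_fb[y][x] = rgb;
    return 0;
}

/* After the GUI code has drawn, the test compares the fake framebuffer
 * against an expected golden bitmap, byte for byte. */
static int fake_fb_equals(const uint32_t expected[FAKE_H][FAKE_W])
{
    return memcmp(fake_fb, expected, sizeof fake_fb) == 0;
}
```

Setting `fail_after` lets a test verify that the layers above the HAL react correctly to a driver fault on the Nth call, which is exactly the "reacts differently" behavior described above.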

Likewise, you can write unit tests that exercise the HAL thoroughly from the upper layers, so you can check that a new hardware driver responds correctly. Using the display example, you would step through all possible screen modes, interface elements, scrolling methods, and so on. Unfortunately, this test requires physically observing the display, but you could potentially run it side by side against the old hardware to spot speed improvements or behavioral deviations.

Back to your example: how different is switching to another video codec? To your upper layers you are still just pushing bytes. If you are implementing a well-known codec, it helps to have a set of source files that act as a unit test (covering the full range of possible data formats) to verify that your codec decodes and displays them correctly (no glitches!). Decoding into an in-memory bitmap makes a good unit test: you can simply memory-compare the result against the expected raw frame.
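The golden-frame idea can be sketched as follows. A real video codec is far too large to show here, so this uses a toy run-length decoder purely as a stand-in; the names and the format (count/value byte pairs) are invented for illustration. The point is the test shape: decode a known input into memory, then `memcmp` against the expected raw frame.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Toy run-length decoder standing in for the real video codec:
 * input is (count, value) byte pairs; output is the raw frame. */
static size_t rle_decode(const uint8_t *in, size_t in_len,
                         uint8_t *out, size_t out_cap)
{
    size_t n = 0;
    for (size_t i = 0; i + 1 < in_len; i += 2) {
        uint8_t count = in[i], value = in[i + 1];
        if (n + count > out_cap)
            return 0;                 /* refuse to overflow: decode failed */
        memset(out + n, value, count);
        n += count;
    }
    return n;                         /* number of decoded bytes */
}

/* Golden-frame check: decode a known compressed input and compare the
 * result byte for byte against the expected raw frame. */
static int frame_matches_golden(const uint8_t *in, size_t in_len,
                                const uint8_t *golden, size_t golden_len)
{
    uint8_t buf[256];
    size_t n = rle_decode(in, in_len, buf, sizeof buf);
    return n == golden_len && memcmp(buf, golden, golden_len) == 0;
}
```

With the real codec dropped in behind the same `frame_matches_golden` shape, the same suite of reference streams can be replayed against every hardware revision, which is what gives you confidence across the hardware change.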

Hope this helps. If not, ask follow-up questions.


Source: https://habr.com/ru/post/1304110/
