Modeling emulated hardware in Rust: how to share multiple mutable references cleanly?

I am simulating a system with a processor, GPU, MMU, APU, and possibly other devices. The CPU will hold mutable references to the GPU, MMU, and APU. I would also like the MMU to be able to call certain functions on the GPU and APU. This comes into play because memory reads and writes can occur in different address ranges: the MMU handles each request and forwards it to the GPU or APU if the address is mapped to one of those devices.

This is how I modeled it with Arc and Mutex. I was wondering if there is a cleaner way to achieve this, or if this is the right approach.

use std::sync::{Arc, Mutex};

trait MMU {
    fn read(&self, addr: usize) -> u8;
    fn write(&mut self, addr: usize, value: u8);
}

#[allow(dead_code)]
struct Cpu {
    apu: Arc<Mutex<Box<dyn MMU>>>,
    mmu: Box<Mmu>,
    gpu: Arc<Mutex<Box<dyn MMU>>>,
}

struct Mmu {
    map: Vec<(usize, usize, Arc<Mutex<Box<dyn MMU>>>)>,
}

impl Mmu {
    fn new() -> Mmu {
        Mmu { map: vec![] }
    }

    fn add_mapping(&mut self, start: usize, end: usize, cb: Arc<Mutex<Box<dyn MMU>>>) {
        self.map.push((start, end, cb));
    }

    fn read(&self, addr: usize) -> u8 {
        // See if the addr is in a range that is mapped, then
        // call read on it.
        for i in self.map.iter() {
            if i.0 <= addr && addr <= i.1 {
                // No need to clone the Arc just to lock it.
                let d = i.2.lock().unwrap();
                return d.read(addr);
            }
        }

        println!("Mmu.read: {}", addr);
        0
    }

    fn write(&mut self, addr: usize, value: u8) {
        // See if the addr is in a range that is mapped, then
        // call write on it.
        for i in self.map.iter() {
            if i.0 <= addr && addr <= i.1 {
                // No need to clone the Arc just to lock it.
                let mut d = i.2.lock().unwrap();
                d.write(addr, value);
                return;
            }
        }

        println!("Mmu.write: {} {}", addr, value);
    }
}

struct Gpu;
impl MMU for Gpu {
    fn read(&self, addr: usize) -> u8 {
        println!("Gpu.read: {}", addr);
        0
    }

    fn write(&mut self, addr: usize, value: u8) {
        println!("Gpu.write: {} {}", addr, value);
    }
}

struct Apu;
impl MMU for Apu {
    fn read(&self, addr: usize) -> u8 {
        println!("Apu.read: {}", addr);
        0
    }

    fn write(&mut self, addr: usize, value: u8) {
        println!("Apu.write: {} {}", addr, value);
    }
}

fn main() {
    let apu = Arc::new(Mutex::new(Box::new(Apu) as Box<dyn MMU>));
    let gpu = Arc::new(Mutex::new(Box::new(Gpu) as Box<dyn MMU>));
    let mut mmu = Box::new(Mmu::new());

    // If a memory read/write occurs at 0x300-0x400, then the
    // GPU should handle it.
    mmu.add_mapping(0x300, 0x400, gpu.clone());
    // If a memory read/write occurs at 0x100-0x200, then the
    // APU should handle it.
    mmu.add_mapping(0x100, 0x200, apu.clone());
    // Otherwise the MMU will handle it.

    let mut c = Cpu {
        apu: apu,
        gpu: gpu,
        mmu: mmu,
    };

    c.mmu.read(0);
    c.mmu.write(0, 5);

    c.mmu.read(0x150);
    c.mmu.write(0x150, 5);

    c.mmu.read(0x350);
    c.mmu.write(0x350, 5);
}


Source: https://habr.com/ru/post/1649726/
