Why should only one aggregate instance be modified per transaction?

I am reading Vaughn Vernon's article Effective Aggregate Design, and I have a question: why should only one aggregate instance be modified per transaction?

Let's take an example: consider the domain of managing inventory stock.

Inventory represents an item with a quantity in stock: for example, 5 copies of Implementing Domain-Driven Design in a warehouse in Shanghai.

An Entry represents a log record of in/out operations against an Inventory: for example, putting 2 copies of Implementing Domain-Driven Design into the warehouse in Shanghai.

The quantity of the Inventory must be changed whenever an Entry is submitted.

It readily occurs to me that this invariant can be enforced within a transaction, and two solutions come to mind.

Solution A: use a single aggregate, clustering Entry inside Inventory.

public class Inventory implements Aggregate<Inventory> {
    private InventoryIdentity id;
    private Sku sku;
    private int quantity;
    private List<Entry> entries;

    public void add(Entry entry) {
        this.quantity += entry.getQuantity();
        this.entries.add(entry);
    }
}

public class Entry implements LocalEntity<Entry> {
    private int quantity;
    // some other attributes such as whenSubmitted
}

public class TransactionalInventoryAdminService implements InventoryAdminService, ApplicationService {
    @Override
    @Transactional
    public void handle(InventoryIdentity inventoryId, int entryQuantity /* , other entry attributes */) {
        Inventory inventory = inventoryRepository.findBy(inventoryId);
        Entry entry = inventory.newEntry(entryQuantity /* , .. */);
        inventory.add(entry);
        inventoryRepository.store(inventory);
    }
}

Solution B: model Inventory and Entry as separate aggregates.

public class Inventory implements Aggregate<Inventory> {
    private InventoryIdentity id;
    private Sku sku;
    private int quantity;

    public void add(int quantity) {
        this.quantity += quantity;
    }
}

public class Entry implements Aggregate<Entry> {
    private Inventory inventory;
    private int quantity;
    private boolean handled = false;
    // some other attributes such as whenSubmitted

    public void handle() {
        if (handled) {
            throw new IllegalStateException("entry already handled");
        }
        this.inventory.add(quantity);
        this.handled = true;
    }
}

public class TransactionalInventoryAdminService implements InventoryAdminService, ApplicationService {
    @Override
    @Transactional
    public void handle(InventoryIdentity inventoryId, int entryQuantity /* , other entry attributes */) {
        Inventory inventory = inventoryRepository.findBy(inventoryId);
        Entry entry = inventory.newEntry(entryQuantity /* , .. */);
        entry.handle();
        inventoryRepository.store(inventory);
        entryRepository.store(entry);
    }
}

Both solutions A and B are possible, but solution B feels somewhat inelegant because it leaves open the unintended possibility of calling Inventory.add(quantity) without an Entry. Is that what the rule (modify only one aggregate instance per transaction) is trying to point me toward? I am basically confused about why we should change only one aggregate per transaction, and what goes wrong if we do not.
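To make the concern concrete, here is a rough sketch of the call that bypasses Entry, reusing the classes and repositories from solution B (it only illustrates the loophole, it is not recommended code):

    // Nothing in solution B stops a caller from changing the quantity directly,
    // bypassing the Entry log entirely:
    Inventory inventory = inventoryRepository.findBy(inventoryId);
    inventory.add(5);                 // quantity changes, but no Entry is recorded
    inventoryRepository.store(inventory);

    // The intended path goes through an Entry:
    Entry entry = inventory.newEntry(5);
    entry.handle();                   // updates the quantity and marks the entry handled
    inventoryRepository.store(inventory);
    entryRepository.store(entry);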

Update1 start

Is the intent to alleviate concurrency problems (together with the other rule, "design small aggregates")? For example, Entry is an aggregate with relatively low contention, while Inventory has relatively high contention (given that several users can manipulate the same Inventory). Changing both of them in one transaction would then cause unnecessary concurrency failures; a rough sketch of what I mean follows after this update.

Update1 end
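Here is the sketch of the concurrency failure I have in mind, written against a simplified Inventory. The version field, the in-memory repository, and ConcurrencyException are my own simplifications for illustrating optimistic locking, not anything from Vernon's article:

    // Rough optimistic-locking sketch: in solution A the List<Entry> also lives
    // inside Inventory, so submitting any Entry means storing (and version-bumping)
    // this whole, highly contended aggregate.
    class ConcurrencyException extends RuntimeException {}

    class Inventory {
        long version;   // copy of the version loaded from the database
        int quantity;

        void add(int qty) { quantity += qty; }
    }

    class InventoryRepository {
        private long storedVersion = 0;   // stands in for the version column

        // UPDATE ... WHERE id = ? AND version = ? semantics
        synchronized void store(Inventory inv) {
            if (inv.version != storedVersion) {
                throw new ConcurrencyException();   // someone else stored first
            }
            storedVersion++;
            inv.version = storedVersion;
        }
    }

Two users loading the same Inventory and each submitting an Entry would both call store(); the second call fails even though the two entries do not logically conflict. If Entry were its own aggregate, each transaction would store only its new Entry and the collision would disappear.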

Some additional issues need to be addressed if I go with solution A:

1. What if there are many Entrys for an Inventory and the user interface needs paging? How do I implement paging over a collection? One way is to load all the Entrys and pick out what the page needs; another is something like InventoryRepository.findEntriesBy(inventoryId, paging), but that seems to violate the rule that a local entity is reached only by obtaining its aggregate root and then navigating the object graph (a rough sketch of this query idea follows after this list).

2. What happens if there are too many Entrys for an Inventory, and I have to load them all just to add a new Entry?
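For reference, here is a rough sketch of what I have in mind for the repository-query approach from issue 1. EntryQueryService, EntryView, and findEntriesBy are hypothetical names I am making up for illustration, not an existing API; InventoryIdentity is the identity type from the code above:

    import java.time.Instant;
    import java.util.List;

    // Hypothetical read-side query that pages Entry records without loading
    // the whole Inventory aggregate.
    interface EntryQueryService {
        List<EntryView> findEntriesBy(InventoryIdentity inventoryId, int pageIndex, int pageSize);
    }

    // A flat read model for the UI, deliberately not the Entry entity itself,
    // so only queries (not commands) bypass the aggregate root.
    class EntryView {
        int quantity;
        Instant whenSubmitted;
    }

My understanding is that a read-only query like this never mutates the aggregate, so it may be acceptable even if commands still go through the root, but I am not sure.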

I know these issues stem from my incomplete understanding, so any idea is welcome. Thanks in advance.

1 answer

The rule of thumb is to keep your aggregates small, because you want to avoid transaction failures due to concurrency. And why should the aggregate's memory footprint be large if it does not have to be?

So solution A is not optimal. Large aggregates often introduce problems that can easily be avoided.

It is true that another rule of thumb is to modify only one aggregate per transaction. If you make Entry its own aggregate, you can make the inventory count eventually consistent, meaning the Entry aggregate raises an event that the Inventory side subscribes to. This way you only change one aggregate per transaction.

 public class Entry {
     public Entry(InventoryId inventoryId, int quantity) {
         DomainEvents.Raise(new EntryAdded(inventoryId, quantity));
     }
 }
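On the subscribing side, a handler then brings the Inventory quantity up to date in its own transaction, so each transaction still touches only one aggregate. A rough sketch in Java, to match the question's classes; the event class restates the snippet above, and the handler wiring is an assumption about your infrastructure rather than a specific library:

    // Hypothetical event published by Entry and consumed by the Inventory side.
    public class EntryAdded {
        public final InventoryIdentity inventoryId;
        public final int quantity;

        public EntryAdded(InventoryIdentity inventoryId, int quantity) {
            this.inventoryId = inventoryId;
            this.quantity = quantity;
        }
    }

    public class EntryAddedHandler {
        private final InventoryRepository inventoryRepository;

        public EntryAddedHandler(InventoryRepository inventoryRepository) {
            this.inventoryRepository = inventoryRepository;
        }

        @Transactional   // a second, independent transaction touching only Inventory
        public void on(EntryAdded event) {
            Inventory inventory = inventoryRepository.findBy(event.inventoryId);
            inventory.add(event.quantity);
            inventoryRepository.store(inventory);
        }
    }

If the handler runs asynchronously and fails, the event can simply be retried, which is what makes the count eventually (rather than immediately) consistent.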

If you do not feel comfortable with eventual consistency, you can still split the aggregates but modify them in a single transaction (until you feel the pain), using an encapsulating domain service. Another option is to handle the domain events in-process, so that they also take effect within that single transaction.

  public class InventoryService {
      public void AddEntryToInventory(Entry entry) {
          // Modify Inventory quantity
          // Add Entry
      }
  }
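For the in-process domain-event option, here is a rough sketch in Java. This minimal static dispatcher is only an illustration of the idea, not a specific library; it is not thread-safe, and real implementations usually dispatch by event type and scope handlers per request:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Consumer;

    // Handlers run synchronously, so raising EntryAdded from the Entry constructor
    // updates the Inventory inside the same transaction that stores the Entry.
    public final class DomainEvents {
        private static final List<Consumer<Object>> handlers = new ArrayList<>();

        public static void register(Consumer<Object> handler) {
            handlers.add(handler);
        }

        public static void raise(Object event) {
            for (Consumer<Object> handler : handlers) {
                handler.accept(event);   // each handler filters on the event types it cares about
            }
        }
    }

Moving later from this to out-of-process, eventually consistent delivery does not require changing the domain model, only the dispatcher.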

Source: https://habr.com/ru/post/949878/

