A common misunderstanding is that MS Access must pull the entire database file across the network to the local computer before it can run any query or update. It does not.
Consider this query:
SELECT first_name, last_name FROM Employees WHERE EmpID = 27;
If EmpID is indexed, the database engine reads just the index to find which rows match, and then reads only the corresponding rows. If the index enforces a unique constraint (for example, EmpID is the primary key), the lookup is faster still, because the engine can stop after the first match. It does not read the entire table, or even the entire index.
Without an index on EmpID, the engine performs a full table scan of Employees: it must read every row of the table to determine which ones match the EmpID value.
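Access's ACE/JET engine doesn't readily expose its query plan, but the index-seek-versus-full-scan behavior described above can be sketched with SQLite from Python as a stand-in engine (the table and column names follow the example query; `idx_emp` is an illustrative name):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Employees (EmpID INTEGER, first_name TEXT, last_name TEXT)")
conn.executemany(
    "INSERT INTO Employees VALUES (?, ?, ?)",
    [(i, f"First{i}", f"Last{i}") for i in range(1, 101)])

query = "SELECT first_name, last_name FROM Employees WHERE EmpID = 27"

# Without an index, the engine must scan every row of the table.
plan_no_index = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_no_index[0][3])   # detail column, e.g. "SCAN Employees"

# With a unique index on EmpID, it seeks directly to the matching row.
conn.execute("CREATE UNIQUE INDEX idx_emp ON Employees (EmpID)")
plan_with_index = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_with_index[0][3]) # e.g. "SEARCH Employees USING INDEX idx_emp ..."
```

SQLite is not Access, but both engines make the same basic choice: seek through an index when one fits the WHERE clause, otherwise scan the whole table.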
But in either case, the engine never needs to read the entire database. The Customers, Inventory, Sales, and other tables are untouched; it has no reason to read any of that data.
You are right that there is overhead in connecting to multiple back-end databases: the engine must maintain a lock file (.laccdb/.ldb) for each one. I don't know how large that impact is. If it were me, I would create a new back-end database and import the tables from the others into it, then make a copy of the front end and relink it to the consolidated back end. That would let you measure the performance difference directly.
Referential integrity alone seems like a strong argument for consolidating the tables into a single back end.
Regarding locking: ordinary DML operations (INSERT, UPDATE, DELETE) do not lock the entire back-end database. The database engine supports more granular, record-level locking. You can also choose between pessimistic and optimistic locking, that is, whether a record is locked as soon as you begin editing it, or only at the moment you save the changed record.
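Access manages pessimistic versus optimistic record locking for you, but the optimistic pattern itself is easy to illustrate in plain SQL (sketched here with Python and SQLite as a stand-in; the `version` column is an illustrative device, not something Access requires): the save only succeeds if the record is unchanged since it was read.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Employees (EmpID INTEGER PRIMARY KEY, last_name TEXT, version INTEGER)")
conn.execute("INSERT INTO Employees VALUES (27, 'Smith', 1)")

# Read the record, remembering its version. Optimistic: no lock is taken yet.
last_name, version = conn.execute(
    "SELECT last_name, version FROM Employees WHERE EmpID = 27").fetchone()

# Save: the UPDATE succeeds only if the record is still at the version we read.
cur = conn.execute(
    "UPDATE Employees SET last_name = ?, version = version + 1 "
    "WHERE EmpID = 27 AND version = ?", ("Smythe", version))
saved = cur.rowcount == 1      # True: nobody changed the record meanwhile

# A second writer still holding the stale version now loses the race.
cur = conn.execute(
    "UPDATE Employees SET last_name = ?, version = version + 1 "
    "WHERE EmpID = 27 AND version = ?", ("Smyth", version))
conflict = cur.rowcount == 0   # True: someone else saved first
```

With pessimistic locking the first writer would instead have locked the record at the start of the edit, blocking the second writer up front rather than rejecting it at save time.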
In fact, a "slow network" can be the biggest problem, especially if slow means wireless. An Access back end is really only safe on a wired LAN.
And Access is not suited to a WAN environment at all. See this page by Albert D. Callal.