MS Access Databases on a Slow Network: Is it Faster to Separate the Back Ends?

I have an Access database containing information about people (employee profiles and related information). The front end has a single console-style interface from which each type of data is edited separately (for example, academic degrees in one form, contact information in another). It is currently linked to several back ends (one for each data type, plus one for basic profile information). All files sit on a network share, and many of the back ends are encrypted.

The reason I did this is that I understand MS Access has to pull the entire database file to the local computer in order to run any query or update, and then push any changed data back to the network share. My theory is that if a person changes a phone number or address (contact information), they only need to pull / change / replace the contact-information back end, rather than pull one large database containing contact information, projects, degrees, awards, etc. just to change one phone number, thereby reducing the potential for locked databases and network traffic when multiple users access the data.

Is that a reasonable conclusion? Am I misunderstanding something? Have I missed anything?

I understand that each additional file carries some overhead, but I do not know how significant that is. If I were to combine the back ends, there is also the potential advantage of letting Access handle referential integrity for cascading deletes, etc., rather than coding for it myself...

I would be grateful for any thoughts or (reasonably grounded) criticisms.

1 answer

This is a common misunderstanding:

MS Access must pull the entire database file to the local computer to make any queries or updates.

Consider this query:

SELECT first_name, last_name FROM Employees WHERE EmpID = 27; 

If EmpID is indexed, the database engine reads only the index to find which rows match, and then reads the corresponding rows. If the index carries a unique constraint (for example, EmpID is the primary key), the lookup is even faster. The engine does not read the entire table, nor even the entire index.

Without an index on EmpID, the engine performs a full scan of the Employees table, meaning it has to read every row to determine which ones match the EmpID value.
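
For completeness, here is a minimal sketch of how such an index could be created; the index name is illustrative and reuses the Employees/EmpID names from the query above:

CREATE UNIQUE INDEX idxEmpID ON Employees (EmpID);

With an index like this in place (or with EmpID as the primary key, which creates one automatically), the engine can satisfy the WHERE clause from the index alone.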

But in either case, the engine does not need to read the entire database... customer tables, inventory, sales, and so on... it has no reason to read any of that data.

You are right that there is overhead in connecting to multiple back-end databases. The engine has to manage a lock file for each one. I do not know how large that impact is. If it were me, I would create a new back-end database and import the tables from the others into it, then make a copy of the front end and relink its tables to the new back end. That would let you measure the performance impact directly.
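
As a rough sketch of the import step, Access SQL can copy a table out of another back end with the IN clause. The path and table name below are made up for illustration; this simple form copies data only (not keys or indexes) and does not handle a password-protected back end, whereas importing through the Access UI preserves the table structure:

SELECT * INTO ContactInfo
FROM ContactInfo IN '\\server\share\Contacts_BE.accdb';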

It seems to me that referential integrity alone is a strong argument for consolidating the tables into one back end.
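
As an illustration, with table names borrowed from your description (a Degrees table is assumed here), a cascading relationship can be declared in DDL. Note that ON DELETE CASCADE is only accepted when the statement runs in ANSI-92 mode (for example, executed through ADO); otherwise you would set the cascade option in the Relationships window. Either way, it only works when both tables live in the same back end, which is the point of consolidating:

ALTER TABLE Degrees
ADD CONSTRAINT fkDegreesEmployees
FOREIGN KEY (EmpID) REFERENCES Employees (EmpID)
ON DELETE CASCADE;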

Regarding locking, you do not need to lock the entire back-end database for ordinary DML operations (INSERT, UPDATE, DELETE). The database engine supports more granular locking. You can also choose between pessimistic and optimistic locking, i.e. whether a lock is taken as soon as you start editing a row or deferred until you save the changed row.

In fact, a "slow network" may be your biggest problem, if slow means a wireless network. Access is only safe on a wired LAN.

Edit: Access is also not suited to a WAN environment. See this page by Albert D. Kallal.


Source: https://habr.com/ru/post/886967/

