There is always a trade-off between performance, maintainability, and developer productivity; you can only make that compromise sensibly if you can measure it. Performance is measured by how long things take; maintainability is much harder to quantify, but fortunately performance is simple enough to put a number on. In general, I would say: optimize for productivity and maintainability first, and only optimize performance when you have a measurable problem.
To work this way, you need performance targets and a way to regularly evaluate the solution against them, because it is very hard to retrofit performance into a project late. At the same time, optimizing performance without a proven need leads to obscure, hard-to-debug software.
First, turn your audience expectations into a number you can measure; for web applications, that is usually "dynamic page requests per second." 400 concurrent users probably do not all request pages at the same time - they spend some of that time reading the page, filling out forms, and so on. AJAX-driven sites, on the other hand, request dynamic pages much more frequently.
Use Excel or similar to work back from peak concurrent users to dynamic page requests per second, based on think time and interactions per session, and build in a buffer - I usually add 50%.
For instance:
400 concurrent users with a session length of 5 interactions and 2 dynamic pages per interaction gives 400 * 5 * 2 = 4000 page requests.
With a 30-second think time between interactions, those requests are spread over 30 * 5 = 150 seconds.
Therefore, your average load is 4000 / 150 ≈ 27 requests per second.
With a 50% buffer, you should aim to sustain a peak of approximately 40 requests per second.
This is not trivial, but by no means exceptional.
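If it helps to see the arithmetic in one place, here is a minimal sketch of that calculation in Python; the numbers mirror the example above, and anything else (variable names, the 30-second think time) is just an assumption you would replace with your own figures.

```python
# Rough sizing sketch: work back from peak concurrent users to a
# page-requests-per-second target. Numbers mirror the example above.
concurrent_users = 400        # peak concurrent users
interactions_per_session = 5  # interactions in a typical session
pages_per_interaction = 2     # dynamic page requests per interaction
think_time_seconds = 30       # pause between interactions (assumed)
buffer = 0.5                  # 50% headroom on top of the average

total_requests = concurrent_users * interactions_per_session * pages_per_interaction
session_seconds = think_time_seconds * interactions_per_session

average_rps = total_requests / session_seconds   # ~27 requests/second
peak_target_rps = average_rps * (1 + buffer)     # ~40 requests/second

print(f"average: {average_rps:.0f} req/s, target peak: {peak_target_rps:.0f} req/s")
```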
Then set up a performance test environment whose characteristics you fully understand and can reproduce, and which you can map to the production environment. I do not usually recommend recreating production at this point. Instead, scale the pages-per-second benchmark down to match the performance test environment (for example, if you have 4 servers in production and only 2 in the performance test environment, halve the number).
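As a sketch of that scaling step, assuming throughput scales roughly linearly with server count (the same assumption behind "halve it"):

```python
def scale_target(prod_target_rps: float, prod_servers: int, test_servers: int) -> float:
    """Scale the production pages/second target down to a smaller test
    environment, assuming throughput is roughly proportional to server count."""
    return prod_target_rps * test_servers / prod_servers

# e.g. a 40 req/s production target, 4 servers in production, 2 in the test environment
print(scale_target(40, 4, 2))  # -> 20.0
```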
Once you start developing, deploy your unfinished work into this test environment regularly (at least once a week, ideally every day). Use a load generator (Apache Benchmark or Apache JMeter work for me) to build load tests that simulate typical user journeys (but without think time), and run them against your performance test environment. Measure success by whether you hit the pages-per-second target. If you miss the benchmark, find out why (the Redgate ANTS profiler is your friend!).
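For the regular check itself, something as simple as the sketch below can work: it shells out to Apache Benchmark and compares the measured throughput against the target. The URL, request count, concurrency, and target number are all placeholders to swap for your own; the "Requests per second" line it parses is the figure ab prints in its summary, and ab itself must be installed.

```python
import re
import subprocess

TARGET_RPS = 20.0  # scaled target for the test environment (placeholder)

# Placeholder URL and load shape - point this at a typical dynamic page
# in your performance test environment.
cmd = ["ab", "-n", "4000", "-c", "50", "http://perf-test.example.com/landing-page"]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)

# ab's summary contains a line like: "Requests per second:    27.34 [#/sec] (mean)"
match = re.search(r"Requests per second:\s+([\d.]+)", result.stdout)
measured_rps = float(match.group(1)) if match else 0.0

if measured_rps < TARGET_RPS:
    print(f"FAIL: {measured_rps:.1f} req/s is below the {TARGET_RPS:.1f} req/s target")
else:
    print(f"OK: {measured_rps:.1f} req/s meets the {TARGET_RPS:.1f} req/s target")
```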
As you approach the end of the project, try to get a test environment that is closer to the production system in terms of infrastructure. Deploy your work there and repeat the performance tests, scaling the load up to reflect the "real" pages-per-second requirement. At this point you should have a good feel for the application's performance characteristics, so you are really just confirming your assumptions. Getting such a "production-like" environment is usually much harder and more expensive, and it is usually much harder to make software changes at that stage, so you should use it only for verification, not for your routine performance-tuning work.