1) With both thick and thin clients, I have observed a common pattern: data is exported server-side to XLS / CSV (using a library such as Apache POI), the exported file is streamed back to the client in chunks, and the client then saves it on the local machine. I understand this approach is usually chosen because the exported data can be huge, and clients do not have the luxury of memory-intensive processing; so the export is generated on the server first and then sent down to the client. Is there any reason other than the one above to follow this design approach?
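To make the pattern concrete, here is a minimal sketch of what I mean by streaming the export back in chunks. It is plain Java writing CSV (a real server would use Apache POI for XLS and an HTTP response stream instead of the `StringWriter` stand-in), with a made-up in-memory row source; the point is only that neither side ever holds the whole export in memory:

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.StringWriter;
import java.util.Iterator;
import java.util.List;
import java.util.stream.IntStream;

public class StreamingCsvExport {

    // Writes rows to the client's output stream one at a time, so the
    // server never materializes the whole export in memory.
    static void exportCsv(Iterator<List<String>> rows, BufferedWriter out) throws IOException {
        while (rows.hasNext()) {
            out.write(String.join(",", rows.next()));
            out.write("\n");
        }
        out.flush();
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical data source: a real server would page through a
        // database result set here, not an in-memory stream.
        Iterator<List<String>> rows = IntStream.range(0, 3)
                .mapToObj(i -> List.of("row" + i, "value" + i))
                .iterator();

        StringWriter sink = new StringWriter();  // stands in for the HTTP response stream
        exportCsv(rows, new BufferedWriter(sink));
        System.out.print(sink);
    }
}
```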
2) Also, suppose I have a GUI (thin or thick client) that displays a huge number of rows fetched from the server. If I need to export this data to XLS, should I send the displayed data back to the server and generate the XLS there? I have also observed another pattern where only the primary / composite keys of the selected rows are sent back to the server; the rows are then retrieved from the database again using these keys, and this fresh data is exported to XLS. I am just trying to understand why this approach is followed.
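For reference, this is the "keys only" variant I mean: the client posts just the primary keys of the selected rows, and the server re-reads the full rows before building the export. The `TABLE` map below is a made-up stand-in for the database lookup:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class KeyBasedExport {

    // Hypothetical stand-in for a database table: primary key -> full row.
    static final Map<Integer, List<String>> TABLE = new LinkedHashMap<>();
    static {
        TABLE.put(1, List.of("1", "Alice", "alice@example.com"));
        TABLE.put(2, List.of("2", "Bob", "bob@example.com"));
    }

    // The client sends only the selected keys; the server re-fetches the
    // (fresh) rows and builds the export from those, not from what the
    // GUI happened to have on screen.
    static List<List<String>> rowsForExport(List<Integer> selectedKeys) {
        return selectedKeys.stream()
                .map(TABLE::get)  // in a real app: SELECT ... WHERE pk IN (...)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(rowsForExport(List.of(2, 1)));
    }
}
```

As I understand it, one consequence of this design is that the exported rows reflect the current database state rather than a possibly stale client-side snapshot, but that is exactly the reasoning I am asking about.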
Please let me know your thoughts on these two questions. Thanks in advance :)
Note: I found another thread on SO asking a similar question, but the answer focused on security in web applications, so I found it incomplete and decided to post my own question.