Multi-threaded Java servlet

I am trying to create several output text data files based on the data arriving in servlet requests. The requirements for my servlet are as follows:

  • The servlet waits until enough requests have arrived to reach a threshold (for example, 20 names per file) before creating the file.
  • Otherwise, a one-minute timeout expires and it produces the file anyway.

The code I wrote works like this:

  • doGet is not synchronized.

  • Inside doGet I hand the work off to a thread pool (the reason is that the application calling my servlet does not send the next request until my servlet returns a response, so I check the request and return an immediate confirmation in order to receive new requests).

  • All the request data is passed to the thread taken from the pool.

  • That thread calls a synchronized method that counts the requests and writes the files.

I am using wait(60000). The problem is that within the minute the code creates files with the correct threshold of names, but the (very few) files created after the one-minute timeout exceed the capacity, that is, they contain more names than the limit I set.

I think this has something to do with the waiting threads causing a problem when they are awakened?

My code

```java
if (!hashmap_dob.containsKey(key)) {
    request_count = 0;
    hashmap_count.put(key, Integer.toString(request_count));
    sb1 = new StringBuilder();
    sb2 = new StringBuilder();
    sb3 = new StringBuilder();
    hashmap_dob.put(key, sb1);
    hashmap_firstname.put(key, sb2);
    hashmap_surname.put(key, sb3);
}
if (hashmap_dob.containsKey(key)) {
    request_count = Integer.parseInt(hashmap_count.get(key));
    request_count++;
    hashmap_count.put(key, Integer.toString(request_count));
    hashmap_filehasbeenprinted.put(key, Boolean.toString(fileHasBeenPrinted));
}
hashmap_dob.get(key).append(dateofbirth + "-");
hashmap_firstname.get(key).append(firstName + "-");
hashmap_surname.get(key).append(surname + "-");

if (hashmap_count.get(key).equals(capacity)) {
    request_count = 0;
    dob = hashmap_dob.get(key).toString();
    firstname = hashmap_firstname.get(key).toString();
    surname = hashmap_surname.get(key).toString();
    produceFile(/* required String parameters for file printing */);
    fileHasBeenPrinted = true;
    sb1 = new StringBuilder();
    sb2 = new StringBuilder();
    sb3 = new StringBuilder();
    hashmap_dob.put(key, sb1);
    hashmap_firstname.put(key, sb2);
    hashmap_surname.put(key, sb3);
    hashmap_count.put(key, Integer.toString(request_count));
    hashmap_filehasbeenprinted.put(key, Boolean.toString(fileHasBeenPrinted));
}

try {
    wait(Long.parseLong(listenerWaitingTime));
} catch (InterruptedException ie) {
    System.out.println("Thread interrupted from wait");
}

if (hashmap_filehasbeenprinted.get(key).equals("false")) {
    dob = hashmap_dob.get(key).toString();
    firstname = hashmap_firstname.get(key).toString();
    surname = hashmap_surname.get(key).toString();
    produceFile(/* required String parameters for file printing */);
    sb1 = new StringBuilder();
    sb2 = new StringBuilder();
    sb3 = new StringBuilder();
    hashmap_dob.put(key, sb1);
    hashmap_firstname.put(key, sb2);
    hashmap_surname.put(key, sb3);
    fileHasBeenPrinted = true;
    request_count = 0;
    hashmap_filehasbeenprinted.put(key, Boolean.toString(fileHasBeenPrinted));
    hashmap_count.put(key, Integer.toString(request_count));
}
```

If you got this far, thank you for reading my question, and thanks in advance for any solutions you can offer!

3 answers

I did not look closely at your code, but I think your approach is more complicated than it needs to be. Try this instead:

  • Create a BlockingQueue for the data to be processed.
  • In the servlet, put the data in the queue and return.
  • Create a single worker thread at startup that takes data from the queue with a 60-second timeout and collects it into a list.
  • When the list contains enough items, or when the timeout occurs, write a new file.

Create the thread and the queue in a ServletContextListener. To stop the thread, interrupt it. In the thread, flush the remaining items to the file when you get an InterruptedException while waiting on the queue.
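A rough sketch of this worker, with illustrative names (BatchWriter, produceFile, the 200 ms timeout used below) that are my own, not from the question:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// One worker thread drains the queue: it flushes a file when the batch
// reaches the threshold, when the poll times out with data pending,
// or when it is interrupted at shutdown.
class BatchWriter implements Runnable {
    private final BlockingQueue<String> queue;
    private final int threshold;
    private final long timeoutMs;
    // Visible record of flushes, standing in for the real file output.
    final List<Integer> batchSizes = new CopyOnWriteArrayList<>();

    BatchWriter(BlockingQueue<String> queue, int threshold, long timeoutMs) {
        this.queue = queue;
        this.threshold = threshold;
        this.timeoutMs = timeoutMs;
    }

    @Override
    public void run() {
        List<String> batch = new ArrayList<>();
        try {
            while (true) {
                // Block up to timeoutMs for the next record; null means timeout.
                String record = queue.poll(timeoutMs, TimeUnit.MILLISECONDS);
                if (record != null) {
                    batch.add(record);
                }
                if (batch.size() >= threshold || (record == null && !batch.isEmpty())) {
                    produceFile(batch);
                    batch.clear();
                }
            }
        } catch (InterruptedException stop) {
            // Shutdown (e.g. from a ServletContextListener): flush what is left.
            if (!batch.isEmpty()) {
                produceFile(batch);
            }
        }
    }

    private void produceFile(List<String> records) {
        // Placeholder: the real code would write the text file here.
        batchSizes.add(records.size());
    }
}
```

With this structure, doGet only has to call queue.offer(...) and return its confirmation immediately; no synchronization is needed in the servlet itself, because all batching happens on the single worker thread.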


As I understand it, you want to produce a new file in two situations:

  • The number of requests reaches a predefined threshold.
  • The timeout expires before the threshold is reached.

I would suggest the following:

  • Use an application-scoped variable, say requestMap, that holds the incoming HttpServletRequest data.
  • Each time the servlet is hit, simply add the received request data to the map.
  • Now create a listener or filter, say RequestMonitor (whichever is appropriate), to watch the requestMap contents.
  • The RequestMonitor should check whether requestMap has grown to the predefined threshold.
  • If it has not, it should let the servlet add the next request object.
  • If it has, it should print the file, empty requestMap, and then let the servlet add the next request.
  • For the timeout, you can track when the last file was produced with a LAST_FILE_PRODUCED variable in application scope, updated every time a file is created.
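A minimal sketch of this monitor idea (the RequestMonitor class, its method names, and the lazy timeout check are my illustration; the servlet API plumbing and the application-scope wiring are omitted):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of an application-scoped monitor guarding pending
// requests. In a real deployment this object would live in the
// ServletContext and the servlet would call add(...) for every request.
class RequestMonitor {
    private final List<String> requestMap = new ArrayList<>(); // pending request data
    private final int threshold;
    private final long timeoutMs;
    private long lastFileProduced = System.currentTimeMillis();
    private int filesProduced = 0;

    RequestMonitor(int threshold, long timeoutMs) {
        this.threshold = threshold;
        this.timeoutMs = timeoutMs;
    }

    // Called from the servlet for every incoming request; synchronized so
    // only one request at a time can mutate the shared state.
    synchronized void add(String data) {
        requestMap.add(data);
        long now = System.currentTimeMillis();
        if (requestMap.size() >= threshold || now - lastFileProduced >= timeoutMs) {
            produceFile(new ArrayList<>(requestMap));
            requestMap.clear();
            lastFileProduced = now;
        }
    }

    synchronized int filesProduced() { return filesProduced; }

    private void produceFile(List<String> records) {
        // Placeholder: the real code would write the text file here.
        filesProduced++;
    }
}
```

Note that with this lazy check the timeout only fires when the next request arrives; if an idle batch must also be flushed after a minute of silence, a small timer or background thread would still be needed.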

I tried to read your code, but some information is missing, so could you add more details:

1) The indentation is messed up, and I'm not sure whether anything was lost when you copied the code.

2) What code are you posting, the code that runs in another thread after doGet?

3) Could you also add the variable declarations? Are these types thread-safe (e.g. ConcurrentHashMap)?

4) I'm not sure we have all the information about fileHasBeenPrinted. It also seems to be a boolean, which is not thread-safe.

5) You talk about a "synchronized" function, but you did not show it.

EDIT

If the code you posted is a synchronized method, that means that when many requests arrive, only one of them is executed at any given time. It also seems that the 60-second wait is always called (this is not entirely clear because of the indentation, but I think the 60-second wait happens whether or not the file is written). So you are tying up the synchronized method for 60 seconds before another thread (request) can be processed. That may explain why you do not get a file after exactly 20 requests, since more than 20 requests can arrive within those 60 seconds.


Source: https://habr.com/ru/post/1493648/

