char[] to byte[] for optimizing network output (Java)

I just want to share an experience report from InfoQ. The author claims that if you convert the String to a byte[] in the servlet, it will increase QPS (queries per second). The code example shows a comparison:

Before

  private static String content = "…94k…";

  protected void doGet(HttpServletRequest request, HttpServletResponse response)
          throws ServletException, IOException {
      response.getWriter().print(content);
  }

After

  private static String content = "…94k…";
  private static byte[] bytes = content.getBytes();

  protected void doGet(HttpServletRequest request, HttpServletResponse response)
          throws ServletException, IOException {
      response.getOutputStream().write(bytes);
  }

Result before

  • page size (K) 94
  • max QPS 1800

Result after

  • page size (K) 94
  • max QPS 3500

Can someone explain why this is an optimization? I believe the result is real.

UPDATE

In case I was being misleading, I should explain that the original presentation uses this only as an example. What they actually did was rework the Velocity template engine, but that source code is a bit long.

In fact, the presentation did not explain in detail how they did it, but I found a clue.

In ASTText.java, they cached a byte[] ctext instead of a char[] ctext, which greatly improves performance!

Given the numbers above, that makes a lot of sense, right?

(But they must also have reworked the Node interface: a Writer cannot write byte[], which means an OutputStream is used instead!)

Because, as Perception pointed out, the Writer ultimately delegates to a StreamEncoder, and the StreamEncoder first converts the char[] to a byte[] before passing it to the OutputStream that does the actual writing. You can verify this yourself in the JDK source code. Since the render method is called every time the page is displayed, the cost savings are significant.
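To make the delegation above concrete, here is a small self-contained sketch (class and method names are mine, not from the presentation). The helper encodes a char[] through the Writer path, which internally goes through a StreamEncoder and pays the char-to-byte conversion on every call, whereas the pre-encoded byte[] pays it exactly once:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

public class WriterPathDemo {
    // Writer path: OutputStreamWriter (backed internally by a StreamEncoder)
    // converts the char[] to bytes on every invocation.
    static byte[] encodeViaWriter(char[] chars) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        Writer w = new OutputStreamWriter(out, StandardCharsets.UTF_8);
        w.write(chars);
        w.flush();
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        char[] page = "some large page content".toCharArray();

        // Pre-encoded bytes: the char -> byte conversion is paid once, up front.
        byte[] cached = new String(page).getBytes(StandardCharsets.UTF_8);

        // The Writer path produces identical bytes, but repeats the
        // conversion work on every request.
        byte[] perRequest = encodeViaWriter(page);

        System.out.println(java.util.Arrays.equals(cached, perRequest)); // prints "true"
    }
}
```

Both paths produce the same bytes; the difference is only where (and how often) the encoding work happens.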

ASTText.java (original)

  public class ASTText extends SimpleNode {
      private char[] ctext;

      /** @param id */
      public ASTText(int id) {
          super(id);
      }

      /**
       * @param p
       * @param id
       */
      public ASTText(Parser p, int id) {
          super(p, id);
      }

      /** @see org.apache.velocity.runtime.parser.node.SimpleNode#jjtAccept(org.apache.velocity.runtime.parser.node.ParserVisitor, java.lang.Object) */
      public Object jjtAccept(ParserVisitor visitor, Object data) {
          return visitor.visit(this, data);
      }

      /** @see org.apache.velocity.runtime.parser.node.SimpleNode#init(org.apache.velocity.context.InternalContextAdapter, java.lang.Object) */
      public Object init(InternalContextAdapter context, Object data)
              throws TemplateInitException {
          Token t = getFirstToken();
          String text = NodeUtils.tokenLiteral(t);
          ctext = text.toCharArray();
          return data;
      }

      /** @see org.apache.velocity.runtime.parser.node.SimpleNode#render(org.apache.velocity.context.InternalContextAdapter, java.io.Writer) */
      public boolean render(InternalContextAdapter context, Writer writer)
              throws IOException {
          if (context.getAllowRendering()) {
              writer.write(ctext);
          }
          return true;
      }
  }
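For contrast, here is a minimal self-contained sketch of the design change described above — caching a pre-encoded byte[] and rendering to an OutputStream instead of a Writer. This is not the actual Velocity code; the Node interface and class names here are hypothetical simplifications:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical simplified Node interface: render targets an OutputStream,
// since a Writer cannot write byte[].
interface Node {
    boolean render(OutputStream out) throws IOException;
}

// Sketch of an ASTText-like node that encodes its text exactly once.
class ByteCachingTextNode implements Node {
    private final byte[] btext;

    ByteCachingTextNode(String text) {
        // The char -> byte conversion happens here, at init time,
        // not on every render call.
        this.btext = text.getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public boolean render(OutputStream out) throws IOException {
        out.write(btext); // no per-request encoding cost
        return true;
    }
}
```

The trade-off is that the cached bytes are tied to one charset, so the response encoding must match the one used at init time.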
1 answer

Aside from the difference in output methods, the second example avoids the overhead of converting the String to bytes on every request, because the conversion is done once up front. That said, these scenarios are not very realistic: the dynamic nature of web applications rarely lets you pre-convert your entire data model into byte streams. And in any serious architecture you would not be writing directly to the HTTP output stream like this anyway.
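In practice there is a middle ground between "pre-convert everything" and "encode everything per request": pre-encode the static parts of a page once and encode only the dynamic values at request time. A small illustrative sketch (names are mine, not from the answer):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Sketch: static page fragments are encoded once at class-load time;
// only the dynamic value pays the per-request encoding cost.
public class MixedPage {
    private static final byte[] HEADER =
            "<html><body>Hello, ".getBytes(StandardCharsets.UTF_8);
    private static final byte[] FOOTER =
            "!</body></html>".getBytes(StandardCharsets.UTF_8);

    public static void render(OutputStream out, String userName) throws IOException {
        out.write(HEADER);                                    // cached bytes
        out.write(userName.getBytes(StandardCharsets.UTF_8)); // per-request encode
        out.write(FOOTER);                                    // cached bytes
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        render(out, "Bob");
        System.out.println(out.toString("UTF-8"));
        // prints "<html><body>Hello, Bob!</body></html>"
    }
}
```

This is essentially what the Velocity change described in the question does: the template's literal text is static and can be cached as bytes, while the interpolated values cannot.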


Source: https://habr.com/ru/post/895045/

