OpenFire 3.10.0: chunked transfer encoding

I’m using BOSH (from Strophe on Chrome 42) into an OpenFire 3.10.0 server. Small packets work fine, but once packet sizes grow to around 27K, we start to see OpenFire returning Transfer-Encoding: chunked. That works (although it is not supposed to be happening, based on recent fixes). However, once the packets get a bit larger, we frequently see Chrome reporting ERR_INVALID_CHUNKED_ENCODING. From what I understand, this suggests that Chrome thinks the chunks are not being properly terminated with CRLF.

I saw OF-885 in 3.10.0 and suspect it may be related.

I’m not a developer myself, but I have filed it for investigation:

OF-908

A partial workaround for this issue might be enabling compression. That would reduce the size of larger packets, and thus remove the need for chunked encoding.

Compression is already being used. (The client accepts gzip encoding.)

But unless I’m missing something, there should never be a reason for the server to use chunked encoding, right? The server always knows the size of the packet being returned, so it can always set the buffer size and Content-Length. Am I missing something?
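
To make the expectation concrete, here’s roughly the pattern I have in mind (a minimal sketch with a plain Java servlet; BoshResponseServlet and buildBoshBody are made-up names, not Openfire code):

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class BoshResponseServlet extends HttpServlet {

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // The fully serialized BOSH <body/> is known before anything is sent.
            byte[] payload = buildBoshBody().getBytes(StandardCharsets.UTF_8);

            resp.setContentType("text/xml; charset=UTF-8");
            // Declaring the exact byte count before writing lets the container
            // emit a Content-Length header instead of falling back to
            // Transfer-Encoding: chunked when its output buffer overflows.
            resp.setContentLength(payload.length);
            resp.getOutputStream().write(payload);
        }

        private String buildBoshBody() {
            // Placeholder for however the server serializes its queued stanzas.
            return "<body xmlns='http://jabber.org/protocol/httpbind'/>";
        }
    }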

In any case, the reduced performance from chunked encoding matters less to me right now than the fact that Chrome considers the chunked encoding returned by OpenFire invalid in some way. I’ve seen several online references to servers that do not properly terminate chunks with CRLF. Other browsers seem tolerant of this, but Chrome apparently refuses to adapt.
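
For reference, this is the framing Chrome expects (a sketch in Java of the standard chunked format; ChunkedWriter is a hypothetical name, not code from OpenFire or Jetty):

    import java.io.IOException;
    import java.io.OutputStream;
    import java.nio.charset.StandardCharsets;

    public final class ChunkedWriter {

        private static final byte[] CRLF = "\r\n".getBytes(StandardCharsets.US_ASCII);

        // One chunk is "<size in hex> CRLF <data> CRLF". Chrome rejects the
        // whole response (ERR_INVALID_CHUNKED_ENCODING) if either CRLF is
        // missing or misplaced.
        static void writeChunk(OutputStream out, byte[] data) throws IOException {
            out.write(Integer.toHexString(data.length).getBytes(StandardCharsets.US_ASCII));
            out.write(CRLF);
            out.write(data);
            out.write(CRLF); // the trailing CRLF after the data is mandatory
        }

        // The body ends with a zero-length chunk followed by a blank line.
        static void writeLastChunk(OutputStream out) throws IOException {
            out.write("0\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
        }
    }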

As it stands, even though my individual packets are modest in length, they can bunch up: over BOSH, OpenFire groups them together and returns them within a single HTTP response, and that larger payload (30 to 60K, containing 2 or 3 messages) gets chunked encoding.

You’re absolutely right - chunked encoding should not need to occur. I’m working on having this fixed. As Openfire uses a third-party library, it’s probably going to be hard to adapt the formatting of the chunks (the termination). If at all possible, I’d like to avoid going down that path.

I did some tests with up to 200k of data - that appears to work fine with the modifications I have now. Stay tuned.

Lastly: 60k of compressed data, modest? … Are you sending binary data?

Yes. Using chunked encoding is not, per se, the problem. The problem seems to be that Chrome doesn’t like the formatting of the chunked response.

And, yes, we’re developing an extension that does something similar to in-band file transfer when out-of-band mechanisms are not available. It is not exactly the same as the standard file-transfer XEP, but very similar: the message bodies contain Base64-encoded binary data. We can control the maximum size of these application-level chunks, but unfortunately, even when we make the chunk size smaller, OpenFire bundles several of them into a single BOSH response, leading to the same issue.
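
For context, the client-side chunking works roughly like this (a minimal sketch, not our actual extension code; Base64Chunker and maxRawBytes are made-up names):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Base64;
    import java.util.List;

    public final class Base64Chunker {

        // Splits binary data into independently decodable Base64 chunks.
        // maxRawBytes should be a multiple of 3 so that no chunk carries
        // padding mid-stream (e.g. 12288 raw bytes -> 16384 Base64 chars).
        static List<String> toChunks(byte[] data, int maxRawBytes) {
            List<String> chunks = new ArrayList<>();
            Base64.Encoder encoder = Base64.getEncoder();
            for (int offset = 0; offset < data.length; offset += maxRawBytes) {
                int end = Math.min(offset + maxRawBytes, data.length);
                chunks.add(encoder.encodeToString(Arrays.copyOfRange(data, offset, end)));
            }
            return chunks;
        }
    }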

I might be on to something now. Could you try running without compression? Although our code nicely sets a Content-Length (thus preventing chunked encoding), the optional compression filter modifies the data (and thus the content length). In my test setup, I’ve seen no chunked encoding with responses as large as 30 kB.
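
To make that mechanism concrete, here’s a self-contained sketch of the arithmetic (illustrative only, not Openfire’s filter code):

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.util.zip.GZIPOutputStream;

    public class ContentLengthDemo {
        public static void main(String[] args) throws IOException {
            // Stand-in for a large serialized BOSH body.
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 500; i++) {
                sb.append("<message><body>some payload</body></message>");
            }
            byte[] body = sb.toString().getBytes(StandardCharsets.UTF_8);

            ByteArrayOutputStream compressed = new ByteArrayOutputStream();
            try (GZIPOutputStream gzip = new GZIPOutputStream(compressed)) {
                gzip.write(body);
            }

            // A Content-Length computed from the uncompressed body no longer
            // matches what the compression filter actually puts on the wire,
            // so the container must drop the header and use chunked encoding.
            System.out.println("declared Content-Length: " + body.length);
            System.out.println("bytes on the wire:       " + compressed.size());
        }
    }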

I’m aware that running without compression might not be a feasible work-around for everyone - but I’d love to hear if this at least prevents chunked encoding in every setup.

Yes, indeed. If I reconfigure OpenFire to disable compression, I can confirm that I never see chunked encoding and everything works well.

Of course, we still need to resolve this, because our client will be used by people running their own OpenFire servers, and we don’t want to have to make them reconfigure their servers before our client will work. As you said, however, this test is helpful in confirming your theory.

Well, I may have spoken too soon. Even with client compression turned off on my 3.10.0 OpenFire server, I’m now seeing Chrome clients reporting errors, and I can see that chunked encoding is being used. Pasted below is the dump from the Chrome network inspector; note the Transfer-Encoding header in the response.

  Remote Address: 52.1.209.19:7070
  Request URL: http://xmpp.sevogle.com:7070/http-bind/
  Request Method: POST
  Status Code: 200 OK

  Response Headers:
  Access-Control-Allow-Headers: Overwrite, Destination, Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control
  Access-Control-Allow-Methods: PROPFIND, PROPPATCH, COPY, MOVE, DELETE, MKCOL, LOCK, UNLOCK, PUT, GETLIB, VERSION-CONTROL, CHECKIN, CHECKOUT, UNCHECKOUT, REPORT, UPDATE, CANCELUPLOAD, HEAD, OPTIONS, GET, POST
  Access-Control-Allow-Origin: *
  Access-Control-Max-Age: 86400
  Content-Type: text/xml; charset=UTF-8
  Date: Mon, 04 May 2015 19:10:55 GMT
  Server: Jetty(9.2.z-SNAPSHOT)
  Transfer-Encoding: chunked

  Request Headers:
  Accept: */*
  Accept-Encoding: gzip, deflate
  Accept-Language: en-US,en;q=0.8
  Connection: keep-alive
  Content-Length: 83
  Content-Type: text/xml; charset=UTF-8
  Host: xmpp.sevogle.com:7070
  Origin: http://127.0.0.1:8888
  Referer: http://127.0.0.1:8888/index.html
  User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36

  Request Payload

The prior patch addressed this issue for asynchronous responses. An additional patch was applied a few days ago to prevent chunked encoding for synchronous responses as well. The fix has been applied to both the 3.10.x branch and master.

I just downloaded and installed the latest nightly build. Indeed, it no longer uses chunked encoding. But now long packets are triggering ERR_CONTENT_LENGTH_MISMATCH errors.
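
If it’s useful: one plausible source of such a mismatch is a Content-Length computed before the body is transformed, e.g. before compression. Here’s a sketch of the ordering that avoids both failure modes (purely illustrative; this is not the actual patch, and CompressedResponseWriter is a made-up name):

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.zip.GZIPOutputStream;

    import javax.servlet.http.HttpServletResponse;

    public final class CompressedResponseWriter {

        static void write(HttpServletResponse resp, byte[] uncompressed) throws IOException {
            // Compress first, so the final byte count is known...
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            try (GZIPOutputStream gzip = new GZIPOutputStream(buffer)) {
                gzip.write(uncompressed);
            }
            byte[] compressed = buffer.toByteArray();

            resp.setHeader("Content-Encoding", "gzip");
            // ...then declare the length of the bytes actually sent. If this
            // still reflected the uncompressed size, the client would see
            // ERR_CONTENT_LENGTH_MISMATCH when the body ends early.
            resp.setContentLength(compressed.length);
            resp.getOutputStream().write(compressed);
        }
    }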

Any update on this? This is a blocker for us.