Abnormal behavior of the certificate subsystem in Openfire and Spark

Dear colleagues,

I have a Jabber environment with the following properties.

  1. There is an MS AD forest with a single domain “DC=domain, DC=local”.
  2. There is an MS CA.
  3. The XMPP domain name is “domain.local”.
  4. There are two Openfire servers in a cluster with the following FQDNs: “server1.domain.local” and “server2.domain.local”.
  5. Load balancing is based on SRV records in the DNS zone.

I observe the following abnormal situation.

Openfire’s SSL certificate subsystem requires (why?) that the certificate has a DN with “CN=xmpp_domain_name”, not “CN=host_FQDN”, while clients (browsers for the admin console, Gajim for messaging) require “CN=host_FQDN” (which, obviously, is normal).

I partially solved this problem by using certificates whose DN contains both names: “CN=host_FQDN, CN=xmpp_domain_name”. With such certificates the Openfire servers, browsers, and Gajim all work fine over SSL. A simple way to check which names a certificate actually carries is sketched below.
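
Just for illustration, here is a small Java sketch (plain JDK APIs, not Openfire code) that prints the subject DN and any SubjectAlternativeNames of a certificate file, so you can see whether both the host FQDN and the XMPP domain are present; the file name “server1.crt” is only an assumed example:

import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import java.util.List;

public class ShowCertNames {
    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        try (FileInputStream in = new FileInputStream("server1.crt")) {
            X509Certificate cert = (X509Certificate) cf.generateCertificate(in);
            // Full subject DN, e.g. "CN=server1.domain.local, CN=domain.local, ..."
            System.out.println("Subject DN: " + cert.getSubjectX500Principal().getName());
            // Subject Alternative Names; each entry is [type, value], type 2 = dNSName
            if (cert.getSubjectAlternativeNames() != null) {
                for (List<?> san : cert.getSubjectAlternativeNames()) {
                    System.out.println("SAN: type=" + san.get(0) + " value=" + san.get(1));
                }
            }
        }
    }
}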

But Spark (unless the option to ignore an incorrect SSL certificate name is enabled) says: “Hostname verification of certificate failed”.

It seems that the developers of Openfire and Spark assume that xmpp_domain_name MUST equal host_FQDN.

I have one question: when will the developers fix this abnormal situation?

Evgeniy

But maybe Gajim (with a fresh profile) still shows a warning first and then you have to add an exception for the certificate? Spark doesn’t have an exception dialog mechanism, so it either allows the connection or not.

XMPP clients have to log in to the XMPP domain, not the host. Certificates are generated for the XMPP domain, so the login ‘server’ should match what is in the certificate. Since you try to log in to the XMPP host, you get the mismatch error. You may try putting the XMPP domain name into Spark’s “Domain” field. Though it might not work and you might need SRV records for the XMPP service, like these: Check DNS SRV records for XMPP

Thanks for your reply.

  1. About Gajim. Before testing, I completely removed Gajim along with its profile. It works correctly with CN=host_FQDN, not CN=domain.

  2. About Spark. According to RFC 2782, “A DNS RR for specifying the location of services (DNS SRV)”, clients MUST connect to the host, NOT to the domain.

  3. I think the developers could make Spark look at all CNs in the DN, not only the first one; moreover, any browser used for the web admin console requires CN=host_FQDN (some require it to be the first CN in the DN). See the sketch after this list.
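
For illustration only (this is not Spark’s actual code), collecting every CN from a subject DN instead of stopping at the first one is straightforward with plain JDK APIs; the example DN below is an assumption matching my setup:

import java.util.ArrayList;
import java.util.List;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;
import javax.security.auth.x500.X500Principal;

public class AllCommonNames {
    static List<String> commonNames(X500Principal subject) throws Exception {
        List<String> cns = new ArrayList<>();
        // Walk every RDN of the DN and keep all CN values,
        // e.g. both "server1.domain.local" and "domain.local".
        for (Rdn rdn : new LdapName(subject.getName()).getRdns()) {
            if ("CN".equalsIgnoreCase(rdn.getType())) {
                cns.add(rdn.getValue().toString());
            }
        }
        return cns;
    }

    public static void main(String[] args) throws Exception {
        X500Principal subject =
            new X500Principal("CN=server1.domain.local, CN=domain.local, O=Example");
        System.out.println(commonNames(subject)); // prints both CN values
    }
}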

Evgeniy

That’s right. The SRV record points to the host. E.g. when I query igniterealtime.org for the XMPP service, I see this SRV record:

_xmpp-client._tcp.igniterealtime.org 5222 xmpp.igniterealtime.org

Which means that when an XMPP client (or anyone) queries for the xmpp-client service on the igniterealtime.org domain (FQDN), it will be pointed to the host xmpp.igniterealtime.org, where the server actually is. So in Spark I put igniterealtime.org into its Domain field, and the SRV record resolves everything for me and points me to the server. In XMPP your username is user@domain, not user@host. You can log in as user@host (using an IP or some other way), but that’s not standard and can create problems with some aspects of XMPP.
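
If it helps, here is a rough Java sketch of that SRV lookup using the JDK’s JNDI DNS provider (just an illustration of the mechanism, not what Spark or Smack actually do internally):

import java.util.Hashtable;
import javax.naming.NamingEnumeration;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class XmppSrvLookup {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put("java.naming.factory.initial", "com.sun.jndi.dns.DnsContextFactory");
        DirContext ctx = new InitialDirContext(env);
        // Query the SRV record for the xmpp-client service of the domain.
        Attributes attrs = ctx.getAttributes("_xmpp-client._tcp.igniterealtime.org",
                                             new String[] { "SRV" });
        Attribute srv = attrs.get("SRV");
        if (srv != null) {
            // Each value looks like: "priority weight port target",
            // e.g. "0 0 5222 xmpp.igniterealtime.org."
            NamingEnumeration<?> values = srv.getAll();
            while (values.hasMore()) {
                System.out.println(values.next());
            }
        }
        ctx.close();
    }
}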

Maybe you are right and Gajim checks both the domain and the host in the certificate and allows the connection. I can’t answer about Spark (I’m not a developer myself, nor do I have much experience with certificates). Spark uses the Smack library for connections, so it might be a restriction coming from Smack itself. And I don’t know which way is actually the proper one in this case.
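
Purely as an illustration of what such a relaxed check might look like (this is not Smack’s or Spark’s real verifier, and the xmppDomain parameter is an assumption), here is a javax.net.ssl.HostnameVerifier sketch that accepts a certificate if any CN or dNSName SAN matches either the connected host or the configured XMPP domain:

import java.security.cert.X509Certificate;
import java.util.ArrayList;
import java.util.List;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;
import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.SSLSession;

public class HostOrDomainVerifier implements HostnameVerifier {
    private final String xmppDomain; // e.g. "domain.local" (assumed)

    public HostOrDomainVerifier(String xmppDomain) {
        this.xmppDomain = xmppDomain;
    }

    @Override
    public boolean verify(String hostname, SSLSession session) {
        try {
            X509Certificate cert = (X509Certificate) session.getPeerCertificates()[0];
            List<String> names = new ArrayList<>();
            // Collect every CN from the subject DN ...
            for (Rdn rdn : new LdapName(cert.getSubjectX500Principal().getName()).getRdns()) {
                if ("CN".equalsIgnoreCase(rdn.getType())) {
                    names.add(rdn.getValue().toString());
                }
            }
            // ... and every dNSName (type 2) from the SubjectAlternativeNames.
            if (cert.getSubjectAlternativeNames() != null) {
                for (List<?> san : cert.getSubjectAlternativeNames()) {
                    if (Integer.valueOf(2).equals(san.get(0))) {
                        names.add(san.get(1).toString());
                    }
                }
            }
            return names.stream().anyMatch(n ->
                n.equalsIgnoreCase(hostname) || n.equalsIgnoreCase(xmppDomain));
        } catch (Exception e) {
            return false; // on any parsing or handshake problem, reject the name
        }
    }
}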