Memory leaks?

Can you post or pm me the contents of the ofpubsubnode table. I am trying to confirm the root of the problem. If it is a very large list, a sample should suffice, along with some information as to the total number of entries.
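
If it is easier, even just the row count plus a handful of representative rows would do, e.g. (assuming you can query the Openfire database directly; the exact table name can vary a little by version):

    SELECT COUNT(*) FROM ofPubsubNode;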

I am having the same problem. A couple of years ago I ran Openfire with no server lock-ups. Does this have something to do with version 3.7.1?

@wroot… where is this PEP setting? I do not see it right off in the server properties; I see all the other xmpp settings, but nothing for PEP.

Will be happy to post any logs if that would help track this issue down.

-Baci

Please provide the info I already requested.

As for the property, it is not there by default, so you just add it yourself via the console.
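
For reference, the property in question should be xmpp.pep.enabled. Add it via the admin console (Server > Server Manager > System Properties) with a value of false:

    xmpp.pep.enabled = false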

I have added the property and now I need to wait and see if memory usage keeps going up.

Thanks! I will grab those logs for you.

What does xmpp.pep do in Openfire? All the information I can find is about the memory leak and that disabling it fixes the problem; however, no one seems to say what it actually does. I will give it a shot since it seems to be the culprit, but any info on what it does would be great so I am not blindly pulling the trigger.

From what I can tell, this is what it does in a nutshell:

"Personal eventing provides a way for a Jabber/XMPP user to send updates or “events” to other users, who are typically contacts in the user’s roster. An event can be anything that a user wants to make known to other people, such as those described in User Geolocation [1], User Mood [2], User Activity [3], and User Tune [4]. While the XMPP Publish-Subscribe [5] extension (“pubsub”) can be used to broadcast such events, the full pubsub protocol is often thought of as complicated and therefore has not been widely implemented. [6] To make publish-subscribe functionality more accessible (especially to instant messaging and presence applications that conform to XMPP IM [7]), this document defines a simplified subset of pubsub that can be followed by instant messaging client and server developers to more easily deploy personal eventing services across the Jabber/XMPP network. We label this subset “Personal Eventing Protocol” or PEP."

Is that pretty much how it's used in Openfire? Thanks in advance for any help.

Yes, that is it. Openfire supports that protocol, though in general one doesn't need it for simple chatting, group chat, or status updates. It is a more advanced event system (something like RSS for XMPP).

Yes, it seems that tune PEP messages are popular with many clients.
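
For anyone wondering what one of those looks like on the wire, a tune update is just a small PEP publish, roughly like the examples in XEP-0163/XEP-0118 (the JID and song details here are made up):

    <iq from='juliet@capulet.lit/balcony' type='set' id='pub1'>
      <pubsub xmlns='http://jabber.org/protocol/pubsub'>
        <publish node='http://jabber.org/protocol/tune'>
          <item>
            <tune xmlns='http://jabber.org/protocol/tune'>
              <artist>Gerald Finzi</artist>
              <title>Introit (Op. 6)</title>
            </tune>
          </item>
        </publish>
      </pubsub>
    </iq>

The server turns that into notifications for interested contacts and, in Openfire, also stores the node/item data, which is presumably why the pubsub tables can grow over time.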

I am also seeing this behaviour; it usually takes about a week. However, disabling PEP has not solved the issue for us.

What other logs should I be looking in?

Openfire 3.7.1

Ubuntu server 11 running on ESXi 4

Spark clients across the board

Thanks for any info.

Hi,

How much memory have you allocated to openfire? How many MUC rooms do you have active? Do you have the archive plugin running?

daryl

2 GB

Maybe 20 rooms

Archiving is on

And now, as soon as I restart the VM, java usage goes up over 90%.

How large are the archive plugin index files? Found in /opt/openfire/archive/index/
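
Something along these lines should show it (adjust the path if your installation lives elsewhere):

    du -sh /opt/openfire/archive/index/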

daryl

/opt/openfire/archive/index/ does not exist.

This is installed on Ubuntu server 11.10

Just another me-too reply. After ~10-14 days, java ends up consuming 99.9% CPU. We’ve got Openfire running on a CentOS release 6.2 (Final) Hyper-V VM with 4 GB RAM and a single CPU: embedded database, Active Directory integration. Search and Kraken IM Gateway 1.1.3b3 are the only plugins installed. We generally have fewer than 150 active sessions on our busiest days, and there are currently 26 group chat rooms. Restarting the service seems to take longer than simply rebooting the OS. After a clean boot, we’ll see java eat away at the CPU for approximately 3-5 minutes, then it backs down to 1-15%. Our Java options are OPENFIRE_OPTS="-Xms256m -Xmx2048m".
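
(For what it's worth, on our RPM-based CentOS install that variable lives in /etc/sysconfig/openfire; I believe Debian/Ubuntu packages use /etc/default/openfire instead.)

    # /etc/sysconfig/openfire -- path is distribution-dependent
    OPENFIRE_OPTS="-Xms256m -Xmx2048m"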

Please let me know if there’s anything I can do to help resolve this issue.

Causing the CPU to race and leaking memory are two different things.

I have verified that there are no more apparent memory leaks in my setup, so it was PEP that was the problem for me. It has been almost a month now and I have not had to restart Openfire, whereas before I had to restart it about once a week.

Me four, I guess.

Found this post after a Google search. I run Openfire at work, and as a (volunteer) member of the NWSChat system I know that Openfire can be more reliable than my experience here suggests.

I have been restarting the process weekly. I’ve just now had a chance to look into the problem: I’ve increased the memory available, will start monitoring Java memory usage, and have disabled PEP.

The symptom is that java runs out of memory. I’ve seen the CPU race as well, but that is probably just java frantically trying to reclaim memory.
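
For the monitoring piece I will probably just watch heap and GC activity with jstat from the JDK (assuming a JDK is installed and the pgrep pattern matches the Openfire process on your box):

    # print heap/GC utilisation for the Openfire JVM every 10 seconds
    jstat -gcutil $(pgrep -f openfire) 10000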

Just commenting here for the log.

Hi, disabling PEP will more than likely resolve your issue; it is disabled on NWSChat.

I would also like to note that, for some reason, Java memory usage has now dropped from a steady 200-300 MB to about 12 MB.