GFI Software

Performance Problems With Kerio Connect 8.3.1 [message #116326] Thu, 25 September 2014 16:55
amedina
Messages: 16
Registered: April 2009
Location: Santo Domingo
Hi All,
This is a problem we were already seeing in earlier versions of Kerio Connect; we recently upgraded to 8.x (currently 8.3.1) to see if that would resolve it. Users constantly report that Outlook freezes with "Not Responding" when they switch between folders. We have about 305 users. Kerio Connect runs on a VMware Linux virtual machine with 8 GB RAM, 2 quad-core processors, and 8 × 600 GB 15K RPM hard disks in RAID 10 (internal storage); users connect to the server at 100 Mbps. I've been monitoring disk usage on the server, and right now it is at 100% (using the "iostat" command; see the attached screenshot).

When I check the disks via the VMware vSphere Client, all the disks appear as "OK", so I don't think there is a physical problem with them. How can I see what the Kerio process is doing in the system, to find out whether it is performing operations that consume all of the disk time? I want to know whether I need to replace my disk subsystem and move to external storage over iSCSI, or whether I can resolve this some other way.
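For attributing disk time to a process, the sysstat package that provides `iostat` also provides `pidstat -d`, which reports per-process disk I/O. Below is a minimal sketch of picking saturated devices out of `iostat -x`-style output; the sample lines are illustrative, not taken from this server:

```shell
# Flag devices whose %util (last column of `iostat -x`) is at or above 90%.
# A captured sample stands in for live output here; on the server you would
# pipe `iostat -x 5` into the same awk filter instead.
sample='Device:  r/s    w/s    rkB/s   wkB/s   await  %util
sda      80.00  150.00 4000.00 9000.00 36.50  100.00
sdb       1.00    2.00   40.00   80.00  0.90    3.50'
echo "$sample" | awk 'NR > 1 && $NF + 0 >= 90 {print $1, "saturated at", $NF "%"}'
# prints: sda saturated at 100.00%
```

Running `pidstat -d 5` alongside this shows whether it is actually the Kerio mailserver process generating the reads and writes on the saturated device, or something else on the VM.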

Please let me know how I can determine the root cause of this performance issue.

Thanks and best regards,
Alberto Medina
Re: Performance Problems With Kerio Connect 8.3.1 [message #116335 is a reply to message #116326] Thu, 25 September 2014 21:45
MarkK
Messages: 342
Registered: April 2007
Sorry, I don't have an answer to the issue, but I have seen the same high disk usage with Windows on RAID 5 internal storage, regardless of the Connect version. Since Connect stores each email as an individual file, it is very disk intensive. I have been wondering how much difference a RAID 10 like yours would make compared with my RAID 5.

So I'm hoping that someone is able to provide a good answer.
Re: Performance Problems With Kerio Connect 8.3.1 [message #116411 is a reply to message #116335] Mon, 29 September 2014 17:38
Maerad
Messages: 275
Registered: August 2013
Some more information would be nice ...

How are the users connected to Kerio with Outlook? Offline or online client? IMAP? Terminal server? Local PCs? If the offline client, where is the cache?

Kerio is a file-based system, meaning there is almost no database in the background, and that makes it slow: when a user refreshes a folder, the server has to load almost all of the information in that directory. And if he has 10k mails in it, and the other 305 users do too ... fun for the server :)

Does anything else run on the server btw?

In our environment we've got 25 users and 2 terminal servers with Outlook. On the TS I installed an SSD for the Outlook Connector offline cache (works like a charm), so the server has almost nothing to do.

If you run all Outlook clients over IMAP, I would suggest switching to KOFF (the Kerio Outlook Connector). That way Outlook builds a local cache and the server only needs to serve the changed data, not whole directories, so the workload shifts to the client side.

If you need to do it all on the server with IMAP, more disks or an iSCSI system wouldn't help that much. With more disks in a RAID 10 you get higher throughput, but your latency goes up too. Not to mention the overhead from virtualisation (even the current VMware and Hyper-V virtual disks are still a performance issue, and even more so with as many accesses as you have). And Kerio needs low latency and fast access to small files. With that many users I would suggest going for an SSD RAID. And with around 80% or more of the user traffic being reads, the wear is pretty low IMHO.
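The small-file latency point can be checked directly. A rough sketch (file count and sizes are illustrative; `mktemp` writes to /tmp here, so point `dir` at the mail store's filesystem to test that instead) that mimics the many-small-files write pattern of a per-message store:

```shell
# Write 200 ~4 KB files (roughly one per "message") into a temp directory,
# then report how long the batch plus a sync took. Comparing this number on
# the spinning RAID vs an SSD shows the small-file latency gap.
dir=$(mktemp -d)
start=$(date +%s)
i=1
while [ "$i" -le 200 ]; do
  head -c 4096 /dev/zero > "$dir/msg$i.eml"
  i=$((i + 1))
done
sync
end=$(date +%s)
n=$(ls "$dir" | wc -l)
echo "wrote $((n + 0)) files in $((end - start))s"
rm -rf "$dir"
```

This is not a proper benchmark (tools like `fio` or `ioping` measure latency far more rigorously), but a large gap between the two storage types on even this crude test would support the SSD suggestion.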