Connect performance tuning (under Linux specifically)
Connect performance tuning [message #144098]
Wed, 03 October 2018 20:20
Bud Durland
Messages: 586 Registered: December 2013 Location: Plattsburgh, NY
One thing we are seeing with the upgrade to 9.2.7 sp3: the new version seems to be even more disk-I/O intensive than the previous one. Our Connect server runs in a Debian 8 VM with 4 CPUs and 12 GB of RAM. Our mail store is approximately 2.7 TB, with roughly 7.5 million files.
Such things are often hard to root-cause, but I'm trying. Running iostat consistently shows a '%iowait' value over 25, which suggests a disk bottleneck to me. I'm not as fluent in Linux as I should be, so I'm looking to others for testing and tweaking advice regarding file system settings (the volume is ext4) and so forth. Our storage is on a SAN, so I'm also asking the vendor whether there's a way to speed that up.
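For anyone who wants to sanity-check the same numbers without babysitting iostat, here is a rough Python sketch of what I'm doing; it just reads the aggregate CPU line from /proc/stat twice and reports the iowait share over the interval (the 5-second window is my own arbitrary choice, nothing Kerio-specific):
[code]
#!/usr/bin/env python3
"""Rough iowait sampler; a minimal sketch, not a replacement for iostat.
Reads the aggregate 'cpu' line from /proc/stat twice and reports what share
of the interval the CPUs spent waiting on I/O."""
import time

def cpu_times():
    with open("/proc/stat") as f:
        # first line is the aggregate 'cpu' line: user nice system idle iowait ...
        fields = [int(v) for v in f.readline().split()[1:]]
    iowait = fields[4]
    return sum(fields), iowait

if __name__ == "__main__":
    total1, iowait1 = cpu_times()
    time.sleep(5)               # sample interval in seconds
    total2, iowait2 = cpu_times()
    pct = 100.0 * (iowait2 - iowait1) / max(total2 - total1, 1)
    print(f"%iowait over the last 5 seconds: {pct:.1f}")
[/code]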
Or maybe version 9.3, with the speedier new file system that was promised a year ago, is on its way.
Re: Connect performance tuning [message #144230 is a reply to message #144098]
Tue, 16 October 2018 10:46
Maerad
Messages: 275 Registered: August 2013
With 9.2.7, encryption was added on Linux - maybe this is turned on? That would increase the I/O with the encrypt/decrypt overhead.
Also, I'm not sure your setup is great for Kerio, even more so with a SAN. 7.5 million files is quite something, and serving that over a SAN connection with its added latency is already bad. And with Kerio using a local file DB instead of something like MySQL, that could become a problem over time.
It depends strongly on the SAN itself (RAID type, SSD cache, type of hard drive, connection, etc.), and for Kerio you would need a SAN with fast read access at the very least: an SSD RAID of some form, or 10-15K SAS with an SSD cache.
And don't forget, it also depends on how the users access the files.
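If you want to put an actual number on that per-file latency instead of guessing, a crude probe like this Python sketch is enough; the store path and sample size are placeholders for whatever your install uses, and the result only means much on a cold cache (the page cache will hide the SAN on repeat runs):
[code]
#!/usr/bin/env python3
"""Crude small-file read-latency probe; a sketch only, not a proper benchmark.
Opens a random sample of files under the mail store and reports the median and
worst per-file latency, which is where SAN round trips tend to show up.
The store path is a placeholder; point it at your own volume."""
import os
import random
import statistics
import time

STORE = "/opt/kerio/mailserver/store"   # placeholder path, adjust to your install
SAMPLE = 500                            # how many files to touch

def sample_files(root, limit):
    pool = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            pool.append(os.path.join(dirpath, name))
        if len(pool) >= limit * 20:      # stop walking once we have a decent pool
            break
    random.shuffle(pool)
    return pool[:limit]

if __name__ == "__main__":
    latencies = []
    for path in sample_files(STORE, SAMPLE):
        start = time.perf_counter()
        try:
            with open(path, "rb") as f:
                f.read(4096)             # first 4 KiB is enough to force a disk hit
        except OSError:
            continue                     # file may have been moved or deleted meanwhile
        latencies.append((time.perf_counter() - start) * 1000)
    if latencies:
        print(f"files sampled:        {len(latencies)}")
        print(f"median read latency:  {statistics.median(latencies):.2f} ms")
        print(f"worst read latency:   {max(latencies):.2f} ms")
[/code]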
Re: Connect performance tuning [message #144232 is a reply to message #144230]
Tue, 16 October 2018 14:20
Bud Durland
Messages: 586 Registered: December 2013 Location: Plattsburgh, NY
Maerad wrote on Tue, 16 October 2018 04:46: With 9.2.7, encryption was added on Linux - maybe this is turned on? That would increase the I/O with encrypt/decrypt.
I wouldn't even know where to look to enable/disable that.
Edit: So I took off my dunce cap, looked in the Admin console, and found it under Advanced Options. Encryption is not enabled for our server.
Quote: Also, I'm not sure your setup is great for Kerio, even more so with a SAN. 7.5 million files is quite something, and serving that over a SAN connection with its added latency is already bad. And with Kerio using a local file DB instead of something like MySQL, that could become a problem over time.
Agreed; I keep hoping for the speedy new message store system that was promised pre-GFI. Until then...
Quote: It depends strongly on the SAN itself (RAID type, SSD cache, type of hard drive, connection, etc.), and for Kerio you would need a SAN with fast read access at the very least: an SSD RAID of some form, or 10-15K SAS with an SSD cache.
That's why we're budgeting for a new Synology FlashStation with enterprise-level SSDs.
[Updated on: Tue, 16 October 2018 15:39]
Re: Connect performance tuning [message #144264 is a reply to message #144232]
Fri, 19 October 2018 14:10
Maerad
Messages: 275 Registered: August 2013
Bud Durland wrote on Tue, 16 October 2018 14:20:
Maerad wrote on Tue, 16 October 2018 04:46: With 9.2.7, encryption was added on Linux - maybe this is turned on? That would increase the I/O with encrypt/decrypt.
Quote: Also, I'm not sure your setup is great for Kerio, even more so with a SAN. 7.5 million files is quite something, and serving that over a SAN connection with its added latency is already bad. And with Kerio using a local file DB instead of something like MySQL, that could become a problem over time.
Agreed; I keep hoping for the speedy new message store system that was promised pre-GFI. Until then...
Quote: It depends strongly on the SAN itself (RAID type, SSD cache, type of hard drive, connection, etc.), and for Kerio you would need a SAN with fast read access at the very least: an SSD RAID of some form, or 10-15K SAS with an SSD cache.
That's why we're budgeting for a new Synology FlashStation with enterprise-level SSDs.
I doubt there will be a speedy new store system in the near future. Maybe they will give you the option to keep the message index etc. not in an SQLite DB per mail folder but centralized in a MySQL DB, like IceWarp does. That would increase the RAM usage, but at least it would be quite a bit faster.
I guess the best way really is to get a new SAN with SSDs. By the way, on SSDs: think twice about the 'enterprise' label. We also got a new server some time ago, and I tested and read some reviews comparing consumer vs. enterprise SSDs (like the Samsung 960 PRO vs. Samsung's enterprise line).
Well, TB written etc. is basically the same for both. There is virtually NO difference between a Pro and an enterprise SSD aside from the price. The only exception I could imagine are the models with capacitors for power-loss protection.
While Kerio runs on a usual RAID 10 with 4x 15K SAS, my ERP server runs 2x Samsung 970 Pro NVMe, bound as a mirror (RAID 1, basically) via Windows Storage Spaces. Even under full load with CrystalDiskMark and 20 GB files, I couldn't measure any impact on the CPU. OK, it's a 24-core server, but still. And a 6 GB/s read rate is awesome :3
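Just to make that index difference concrete, here is a toy Python/SQLite sketch of the two layouts I mean. This is absolutely not Kerio's real on-disk format; the schema, paths and names are invented, and it only shows the access pattern: one central DB handle versus one tiny index file per folder.
[code]
#!/usr/bin/env python3
"""Toy illustration of the two index layouts being discussed: one small SQLite
file per mail folder vs. one centralized index DB. NOT Kerio's real format;
the schema, paths and names are invented for the example."""
import os
import sqlite3
import tempfile

SCHEMA = "CREATE TABLE messages (msg_id INTEGER, folder TEXT, subject TEXT)"

def make_db(path, rows):
    con = sqlite3.connect(path)
    con.execute(SCHEMA)
    con.executemany("INSERT INTO messages VALUES (?, ?, ?)", rows)
    con.commit()
    con.close()

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as root:
        # Centralized layout: every folder's rows live in one DB file.
        central = os.path.join(root, "central-index.db")
        make_db(central, [(1, "INBOX", "hello"), (2, "Sent", "re: hello")])

        # Per-folder layout: each folder carries its own tiny index file.
        inbox_dir = os.path.join(root, "INBOX")
        os.makedirs(inbox_dir)
        per_folder = os.path.join(inbox_dir, "index.db")
        make_db(per_folder, [(1, "INBOX", "hello")])

        # Centralized: one connection, one query, no matter how many folders exist.
        con = sqlite3.connect(central)
        print(con.execute("SELECT msg_id FROM messages WHERE folder = 'INBOX'").fetchall())
        con.close()

        # Per-folder: open and seek one small file per folder. Cheap once, but
        # multiplied by thousands of folders it becomes lots of random I/O.
        con = sqlite3.connect(per_folder)
        print(con.execute("SELECT msg_id FROM messages").fetchall())
        con.close()
[/code]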
Re: Connect performance tuning [message #144265 is a reply to message #144264]
Fri, 19 October 2018 14:24
Bud Durland
Messages: 586 Registered: December 2013 Location: Plattsburgh, NY
Maerad wrote on Fri, 19 October 2018 08:10: Well, TB written etc. is basically the same for both. There is virtually NO difference between a Pro and an enterprise SSD aside from the price. The only exception I could imagine are the models with capacitors for power-loss protection.
While Kerio runs on a usual RAID 10 with 4x 15K SAS, my ERP server runs 2x Samsung 970 Pro NVMe, bound as a mirror (RAID 1, basically) via Windows Storage Spaces. Even under full load with CrystalDiskMark and 20 GB files, I couldn't measure any impact on the CPU. OK, it's a 24-core server, but still. And a 6 GB/s read rate is awesome.
I'm still kinda on the learning curve with the fine details. With SSDs, we are planning around failure being more of a "when" than an "if". Synology has an interesting take on RAID with SSDs that concentrates more parity bits on one drive, on the theory that writes are what kill an SSD, and that doing so will prevent simultaneous multiple-drive failures. I was under the impression that 'enterprise' SSDs used the longer-lived type of flash memory (can't remember the acronym), but again, I'm still learning.
Was the testing you referred to on the SSDs or the SAS drives? I know that 15K drives are becoming much harder to find; I think manufacturers are moving away from them toward SSDs, or 'tiered' storage devices with SSDs plus lower-RPM spinning rust.
Re: Connect performance tuning [message #144266 is a reply to message #144265]
Fri, 19 October 2018 14:48
Maerad
Messages: 275 Registered: August 2013
The test was referring to the SSD drives, of course. For SAS it just works.
I might also be in a bit of a special position here. First of all, I tried power losses: while writing data to the SSD "RAID" with CrystalDiskMark and some other tools, I cut the power. Did this like 10 times in a row. No data lost, no problems with the RAID, everything was fine (Samsung 970 Pro NVMe).
So I decided to use them in my final build for the server (backed up by some reviews - just Google "pro vs. enterprise SSD"). The special position is that I know how much the ERP system writes, and we wouldn't reach the maximum of any SSD even if the load increased by 1,000%. It's mostly data reads.
Also - and for me that's the most important thing - I use Hyper-V Replica. So I have a second server in another fire zone (nothing fancy) running Windows Server with Hyper-V; the main server also runs Hyper-V. Replica syncs all blocks of the virtual HDD to the secondary server in a 30-second window. So even IF the RAID fails, whatever the cause, I still have a near-real-time copy on the secondary server. Both on a UPS, of course. And the usual backups.
I guess your installation is a "bit" bigger than mine; small company here with 30 people or so.
Personally I prefer local, server-side storage and I'm not a fan of SANs, at least if they are not clustered (so if one fails, the other takes over). But that's for my small business here; I don't know how you handle it in bigger environments, at least in practice. I've read and learnt about it, but never had the opportunity in real life.
Re: Connect performance tuning [message #144267 is a reply to message #144266]
Fri, 19 October 2018 15:35
Bud Durland
Messages: 586 Registered: December 2013 Location: Plattsburgh, NY
Kerio's read-to-write ratio is tipped heavily toward reading, so I'm optimistic that the SSDs will be the silver bullet. We're a VMware shop, so the SAN enables high-availability failover of a VM from one host to another. We also have everything replicated near real-time to another site via Zerto.
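If anyone wants to check that read/write mix on their own store, a quick look at /proc/diskstats is enough; here's a minimal Python sketch (the device name is a placeholder, and the counters are cumulative since boot):
[code]
#!/usr/bin/env python3
"""Quick check of the read/write mix on the mail-store device; a sketch only.
In /proc/diskstats, after the device name, field 3 is sectors read and
field 7 is sectors written. The device name below is a placeholder."""

DEVICE = "sdb"   # placeholder: whichever block device holds the mail store

def sectors(device):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            # fields: major minor name reads_completed reads_merged sectors_read
            #         ms_reading writes_completed writes_merged sectors_written ...
            if fields[2] == device:
                return int(fields[5]), int(fields[9])
    raise SystemExit(f"device {device} not found in /proc/diskstats")

if __name__ == "__main__":
    read, written = sectors(DEVICE)
    total = (read + written) or 1
    print(f"sectors read: {read}  sectors written: {written}")
    print(f"read share since boot: {100 * read / total:.1f}%")
[/code]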
[nostalgia mode]I remember when we only had 30 e-mail users. We were using a product called 'NetMail' on our NetWare 6.0 server. Blindingly fast and easy to administer, but not a groupware (shared calendars, to-do, etc.) product.[/nostalgia]
Re: Connect performance tuning [message #146268 is a reply to message #144267]
Wed, 17 July 2019 19:55
AniaDeely
Messages: 1 Registered: July 2019
Hi... server optimisation is a complex science that depends on a whole host of factors, and there is no 'press this button' type of answer, I'm afraid.
As a bare minimum, we would need to know the exact server specifications, including OS, version, RAM, PHP version, MySQL version, and current settings such as the PHP memory limit, MySQL wait_timeout, and a whole host of others.