Re: Performance Issues w/Ledgersmb 1.3.x
- Subject: Re: Performance Issues w/Ledgersmb 1.3.x
- From: Chris Travers <..hidden..>
- Date: Sat, 3 Dec 2011 18:02:14 -0800
Something sounds very wrong here. I have been watching LSMB
performance on various systems and have never seen anything that bad.
On Sat, Dec 3, 2011 at 4:51 PM, Steven Marshall wrote:
> Looking for some help resolving a performance issue with Ledgersmb 1.3.7. I
> originally had an earlier 1.3.x version installed on a Xen VM guest and
> things ran very slowly, so I set up a new server without virtualization,
> but loading Ledgersmb pages is still extremely slow, to the point that it
> is not usable for production. I don't have anything else running on the
> server and have not done any tweaking to Apache2, Perl, Postgresql, or
> openSUSE. Here is my setup:
> Hardware: Poweredge 6400 Dual Xeon 700 Mhz processors w/1 Gb of RAM
> OS: OpenSUSE 12.1
No major warning flags there.
> Applications: Ledgersmb 1.3.7 (Base setup with Postgresql 9.1, Apache2, and
> Perl 5.14.2)
No major warning flags that I know of, though my environment is running
Perl 5.12 and Pg 8.4.
> Network: Currently only using my private network (i.e. 192.168.1.x). There
> is very little traffic on this network and in most cases I am the only one
> on the network.
> I ran the following test:
> Test 1; Connecting to Apache's default html page from laptop to server via
> URL: http://192.168.1.10
> Results: I get Apache's "It works!" page in less than a second
> Test 2; Connecting to Ledgersmb's setup.pl from laptop to server via WiFi
> URL: http://192.168.1.10/ledgersmb/setup.pl
> Results: It takes on average about 13 seconds for this page to open.
> Test 3; Connecting to Ledgersmb's setup.pl directly from server.
> URL: http://localhost/ledgersmb/setup.pl
> Results: It takes on average about 4 seconds for this page to open. Same
> thing if I substitute 192.168.1.10 for localhost in the server's browser.
That's also unexpected.
> Test 4: Traceroute test from laptop to server
> C:\Users\Steven Marshall>tracert 192.168.1.10
> Tracing route to ledgersmb.tekmerge.com [192.168.1.10]
> over a maximum of 30 hops:
> 1 1 ms 2 ms 1 ms ledgersmb.tekmerge.com [192.168.1.10]
> Trace complete.
> Running "top -i" on the server, it doesn't appear to me that the machine
> is being heavily taxed.
What else is going on on the server?
> My initial thought was that something was wrong with
> the router, until I loaded Apache's default page and it came up almost
> instantaneously, so the bottleneck seems to be specific to
> ledgersmb. It also seems specific to connecting from another computer
> rather than directly from the server. I have run these tests from several
> browsers on my laptop, along with my iPad, all with the same results.
> My suspicion is that the bottleneck is Apache waiting for Postgresql to
> respond, but I don't have much to base that on other than
> watching (i.e. tail -f) the Apache logs: there appears to be a delay
> before Apache's response gets written to the log. That said, I don't
> know whether just loading the setup.pl page actually requires a connection
> to the database server. Has anybody else experienced this issue, or does
> anyone have suggestions on how best to troubleshoot it?
Setup.pl's initial load (the login page) doesn't hit PostgreSQL, so we can
rule that part out.
Anything prior to login is basically loading the relevant Perl
libs, rendering the template, and sending it to the browser.
So a few notes on your test cases:
1) There seems to be significant overhead to sending it out over the
network (adding 9 sec. to average load time)
2) You don't see a lot of CPU time, so what are your load averages?
In particular, what percentage of CPU time is spent waiting on I/O?
There's a lot more I/O in the cases where you are having
trouble than in the others. Is something up there?
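To pin down where the extra ~9 seconds over the network goes, you could break a request into phases. A sketch, assuming curl is installed on the laptop; the URL is the one from your tests:

```shell
# Split the request into connect / first-byte / total times.
# A slow time_connect points at the network; a slow gap between
# connect and first byte points at the server side.
curl -o /dev/null -s -w \
  'connect: %{time_connect}s  first-byte: %{time_starttransfer}s  total: %{time_total}s\n' \
  http://192.168.1.10/ledgersmb/setup.pl
```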
Typically I would start by watching top while you are running these
tests, looking at the general CPU stats plus anything going on in the
process list. High wait times would be consistent with some sort of
I/O problem. But something is going on....
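A quick way to get at the load-average and I/O-wait question without staring at top (a sketch; this reads the standard Linux /proc counters):

```shell
# Current 1/5/15-minute load averages.
cat /proc/loadavg
# Cumulative iowait time since boot: field 6 of the "cpu" line in
# /proc/stat, in jiffies. top and vmstat show the same counter as a
# live percentage (the "wa" / "%wa" column).
awk '/^cpu /{print "iowait jiffies:", $6}' /proc/stat
```

If that iowait figure climbs noticeably while a page is loading, the disk is the first place to look.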
If that doesn't turn anything up, I would filter the process list to only
the processes owned by the apache user.
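For that filtering step, something like this works (a sketch; on openSUSE the Apache user is wwwrun, but it is www-data or apache on other distros, so adjust accordingly):

```shell
APACHE_USER=wwwrun                                # openSUSE default -- adjust
top -u "$APACHE_USER"                             # live, interactive view
ps -u "$APACHE_USER" -o pid,%cpu,%mem,stat,cmd    # one-shot snapshot
```

Run the ps snapshot while a slow page load is in flight; a process stuck in state D (uninterruptible sleep) would again point at I/O.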