I currently work for a company that runs senior living, memory care, etc. type facilities. At some of these locations SARA is used for pendant alerts, roam tags, wander guard, door, pull chain, etc. type alerts and notifications for the residents. One of the main features is a phone messaging system that texts the alerts to staff phones when a pendant is triggered, etc.
Generically speaking, those phone messages are delivered to our CUCM server via: SARA (notified) -> NIPA message to IPCelerate -> IPCelerate to CUCM -> Phone
Recently these notifications were randomly getting delayed or dropped. After looking everything over, I noticed the IPCelerate transaction log was almost 2 GB, and clearly the DB had never been maintained since its installation (almost 5 years ago). I was getting ready to go through an annoying (and potentially software-breaking) routine of installing SP1 for 2008, updating .NET, etc. in order to get SSMS installed on a non-SP1 2008 R2 server. However, with some deep googling and some archive.org help, I found this guy:
The only guy on the internets who basically had the same issue was good enough to document it. I'm going to update it here. It's basically correct, but there are a few things to fix so someone less experienced can still make it through.
Keep in mind, these notes are about legacy installations. My version of SARA is 4.4 and this information should apply all the way up to 4.7, maybe 4.8 and up, but I do not have that environment so I cannot confirm. Our IPCelerate version is up to date (as is JTAPI) with our CUCM version (10.5(2.x)). One difference between my situation and "webmaxtor"'s (I believe) is that we have separate servers: one running IPCelerate and another running SARA. IPCelerate is located at our DC along with our CUCM server, and the SARA server is installed at the location.
I’m going to replicate his information below, with my notes added in red.
Before continuing!
* PERFORM A WINDOWS DISK DEFRAGMENTATION FIRST ON THE DRIVE CONTAINING YOUR DB FILES.
* IF YOU HAVE A SEPARATE DISK FOR YOUR DB, GO AHEAD AND DEFRAG YOUR SYSTEM DRIVE AS WELL.
* WHY? Because if you're in this state with a SARA or IPCelerate system, it's because there has been no maintenance plan on these servers. You want, nay, NEED, those transaction log files externally defragmented before you go and try to shrink them.
This is likely dated material as I haven’t had the chance to work with IPCelerate based solutions in years, but the following just helped me out of a jam. The issue was communication between a Status Solutions TAP paging interface, the IPCelerate IPSession server and ultimately Cisco 7925 handsets was delayed.
1. Open up Windows Services and stop the following services:
Apache Tomcat Tomcat5
2. Open command prompt and type osql -E
3. Then type the following commands (in osql, each batch only runs once you type go):
1> use NIPA
2> go
1> backup log NIPA to disk = 'NUL'
2> go
Before you can shrink a transaction log, a backup HAS to be performed.
This here is a ‘fake’ backup as you’re directing the output to a null device.
Note: This may take a few minutes before you receive a message similar to this:
Processed 33136 pages for database 'NIPA', file 'NIPA_log' on file 1.
BACKUP LOG successfully processed 33136 pages in 3.697 seconds (73.423 MB/sec).
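For what it's worth, on SQL Server 2008 and later there's a cleaner alternative to the NUL-device backup: switch the database to the SIMPLE recovery model, which stops the log from piling up pending a log backup at all, then switch back afterward if anyone actually takes log backups. A sketch, assuming the database is named NIPA as above and is currently in FULL recovery:

```sql
-- Switch NIPA to SIMPLE recovery so the log can be truncated
-- without a (real or fake) log backup.
ALTER DATABASE NIPA SET RECOVERY SIMPLE
GO
-- ... perform the shrink (step 9 below) ...
-- Switch back to FULL only if you actually take log backups;
-- on a never-maintained box like this, leaving it SIMPLE is arguably safer.
ALTER DATABASE NIPA SET RECOVERY FULL
GO
```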
4. Once you see this message, type in the following:
1> dbcc loginfo(NIPA)
2> go
This will display a long list, one row per virtual log file; the value to look at is in the Status column (not the far-left column, despite the original write-up), where each row will show either 0 (reusable) or 2 (active).
Example of 2: 2 5570560 224526336 200 0 64 1960000000961100001
Example of 0: 0 5570560 224526336 200 0 64 1960000000961100001
“where you will either see 0 or 2” is just a description. This information isn’t needed specifically for a later step.
Also, this step accesses the log, which can later prevent you from shrinking it due to "all transaction files are in use" type errors. So this step, while good (it gives you an idea how many virtual log files are contained in the single transaction log file, and verifies it is actually there), is out of order when you hit the "repeat" stage. I'll clarify further ahead.
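If you want an actual count of those virtual log files rather than eyeballing the list, you can dump DBCC LOGINFO into a temp table and count rows. This is a sketch assuming the pre-2012 seven-column output (SQL Server 2012 added a RecoveryUnitId column that would break this table definition), and per the warning above, run it at the record-keeping stage, not right before the shrink:

```sql
-- One row per virtual log file (VLF); column layout is for SQL 2008 / 2008 R2.
CREATE TABLE #vlf (
    FileId      int,
    FileSize    bigint,
    StartOffset bigint,
    FSeqNo      int,
    Status      int,        -- 0 = reusable, 2 = active
    Parity      tinyint,
    CreateLSN   numeric(25,0)
)
INSERT INTO #vlf EXEC ('DBCC LOGINFO (NIPA)')
SELECT COUNT(*) AS vlf_count,
       SUM(CASE WHEN Status = 2 THEN 1 ELSE 0 END) AS active_vlfs
FROM #vlf
DROP TABLE #vlf
GO
```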
5. After running this, wait about 5 minutes and then type exit to log out of SQL.
Waiting won't really help, so just log out and go to step 6. People say to wait here because sometimes loginfo is still running in the background and you can't perform the shrink, but just keep going. No need to wait.
6. Go to Windows Services and stop the MSSQLSERVER service.
7. Wait about 2 minutes and then start the MSSQLSERVER service.
8. repeat steps 2 through 4
Don’t repeat steps 2 through 4. Repeat steps 2-3. Running another loginfo will just prevent you from performing the shrink. So go through step 3 and perform the “fake” backup, then move on to step 9.
9. No wait is needed this time; begin typing the following. dbcc shrinkfile(NIPA_log, 10) means: shrink my transaction log file to around 10 MB.
1> dbcc shrinkfile (NIPA_log, 10)
2> go
Note: You should get the following message below with no errors.
Example output with no errors:
DbId FileId CurrentSize MinimumSize UsedPages EstimatedPages
------ ------ ----------- ----------- ---------- --------------
5 2 1303 128 1296 128
(1 row affected)
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
(I deleted a copy and paste that was here as a note)
10. Verify the NIPA_log.LDF file under C:\Program Files\Microsoft SQL Server\MSSQL\Data is roughly 10 MB; if not, repeat step 9.
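You can also verify from inside osql instead of hunting through the file system. DBCC SQLPERF(LOGSPACE) reports the log size in MB and percent used for every database on the instance:

```sql
-- Lists Log Size (MB) and Log Space Used (%) for every database,
-- including NIPA; after the shrink it should read around 10 MB.
DBCC SQLPERF (LOGSPACE)
GO
```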
11. Type exit to quit osql.
12. Start the following services in this order:
Apache Tomcat Tomcat5
1. open sql command line
2. create “fake backup” to null device
3. list entries in transaction file using loginfo. Good to keep a record of.
4. Stop/Restart SQL Server service (aka MSSQLSERVER)
5. Repeat steps 3 & 2, in that order: loginfo first (just for verification and to see the difference in processed time/pages), then the fake backup (remember, loginfo right before the shrink will block it)
6. After repeating step 2 (the fake backup), shrink the transaction log (step 9)
7. Verify the shrink took place.
8. Exit & restart your services in the appropriate order
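For the repeat run, the SQL half of the recap above can also be dropped into a script file and fed to osql non-interactively, which sidesteps the interactive go-prompt dance (the -E and -i switches are standard osql usage; the file name is just my example, and the service stop/start still has to happen between runs):

```sql
-- shrink_nipa_log.sql : run with  osql -E -i shrink_nipa_log.sql
USE NIPA
GO
-- "fake" log backup to the null device
BACKUP LOG NIPA TO DISK = 'NUL'
GO
-- shrink the log file to ~10 MB
DBCC SHRINKFILE (NIPA_log, 10)
GO
```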
If this is a VM, take a snapshot of your server before you start so you have something to go back to if you mess up.
I rebooted my server after all was said and done.
Check your event viewer for any “new” MSSQL errors. There shouldn’t be any.
Check IPCelerate to make sure the services are started. Easiest just to login to the webpage as admin and check connections.
etc. Due diligence.
Final notes: I will say I had very little expectation of this solving our issues (at least not entirely). Also, our transaction log was just over 1.4 GB, which isn't insane or unheard of. However, performing this DB maintenance totally did fix the delays we were experiencing, other issues aside.
Also, if you've ever, EVER, googled for information on shrinking a DB transaction log (or a DB, for that matter), then you'll know there are a shit-ton of DBA know-it-alls who act like using shrink for anything, ever, is a horrible idea that will make your life hell. I'm here to say that is BULLLLLLLLLLLLLLLLSHIT. It's 100% OK to shrink your transaction log files on small DB systems like this, given you've performed other basic maintenance like disk defragmentation. There is very little that can go wrong when shrinking a transaction log file, and if something did go wrong, it's easy to fix. Besides, are you REALLY telling me you're going to migrate and rebuild/reorganize the entire DB associated with that log just to clean it up? Are these people out of their minds?
DBs themselves, on the other hand, no matter the size or utilization, I would never shrink. Ever. The idea of it makes me wanna shit my pants.