Top 5 MySQL Performance Tips for SMBs

Thanks to its security, reliability, robustness, cost efficiency, and open source license, MySQL is widely considered the best option for small and medium-sized organizations.

MySQL was the second most popular relational database management system in 2015, preceded by Oracle and followed by Microsoft SQL Server, according to the DB-Engines rankings. Because it is an open source database system, it is especially well suited to small and medium businesses (SMBs), which normally have very limited budgets to spend carefully. Extravagant expenses can not only deplete available resources but also hurt a company's survival in the marketplace. MySQL is a relational database management system that is regarded as secure and reliable by businesses of all sizes globally.

It offers many great features that make it one of the best database management systems in the world. It is an integral part of the LAMP stack, where the ‘M’ stands for MySQL. It is scalable, secure, and flexible in both installation and operation. A large number of companies, especially small and medium-sized organizations, use this database management system for their applications and services.

If properly tuned and optimized, it delivers excellent results; otherwise, it can easily become a bottleneck for the entire application or service. To keep it performing well, the following tips should be followed closely.

1.     Hardware Optimization

MySQL database server performance depends directly on well-provisioned, well-configured hardware, provided the other parameters meet the required standards. Make sure you have sufficient hardware resources and an optimized hardware configuration so that the MySQL database system can deliver high performance. Hardware optimization can be done both before and after installation. Follow the key points below.

  • Always provision sufficient hardware resources – disk, memory, CPU, and network bandwidth – to meet the needs of every database query and process. A shortage in any of these resources will hurt performance badly.
  • Use enough RAM to hold the working set of the database in memory. This reduces swapping, which can severely degrade performance.
  • Use high-quality hardware and advanced features such as SATA/SSD drives, battery-backed RAM and cache controllers, RAID 10, solid state cards, the XFS file system, high-speed fiber links, and others.
  • Prefer multiple smaller disks and multiple partitions, both logical and physical, to improve the performance and speed of your database and application.
  • Scale up resources immediately if any bottleneck is observed.
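
The memory point above can be sketched as a quick sizing calculation. A widely cited community heuristic (not an official MySQL formula) gives the InnoDB buffer pool roughly 50–80% of RAM depending on whether the host is dedicated to the database; the function name and percentages below are illustrative assumptions.

```python
def suggest_buffer_pool_bytes(total_ram_gb, dedicated=True):
    """Suggest an InnoDB buffer pool size (bytes) from total RAM.

    Heuristic only: ~75% of RAM on a dedicated DB host, ~50% when the
    host also runs the application. Real sizing should be validated
    against the working set and the buffer pool hit rate.
    """
    fraction = 0.75 if dedicated else 0.50
    return int(total_ram_gb * fraction * 1024**3)

# Example: a dedicated 16 GB server suggests a 12 GB buffer pool.
print(suggest_buffer_pool_bytes(16) // 1024**3)  # 12
```

Treat the output as a starting point; monitor swapping and the hit ratio after applying it.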

2.     MySQL Configuration Optimization

Configuration is a very critical part of a MySQL database server and plays a vital role in its performance. Experts suggest that you should neither over-tune nor under-tune the MySQL configuration; assess your requirements and objectives carefully before making changes. Sometimes the default configuration is fine for small businesses, but it does not work for everybody, always. Therefore, be specific about your requirements before optimizing the MySQL configuration. Some basic configuration optimizations that are useful for SMBs are given below.

  • Configure a buffer pool large enough to hold your working data set.
  • Keep the log file size moderate.
  • Define a suitable number of connections for your available resources; excess connections can deplete resources and degrade server performance.
  • Increase the maximum and temporary table sizes to avoid extra disk writes.
  • Configure moderate values for the sort and join buffer sizes.
  • Disable DNS lookups on the server to speed up connection handling.
  • Index the columns you search on, and avoid retrieving excessive rows in queries.
  • Keep optimizing your database tables through close monitoring.
  • Use VARCHAR or ENUM types to save disk space.
  • Use the EXPLAIN keyword on queries to gain deeper insight into query processing.
  • It is good practice to partition large tables into smaller units – for example by zip code ranges using RANGE partitioning – for higher efficiency and processing speed.
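
To verify the buffer pool point above, you can derive a hit ratio from the `Innodb_buffer_pool_read_requests` (logical reads) and `Innodb_buffer_pool_reads` (reads that went to disk) counters reported by `SHOW GLOBAL STATUS`. The sketch below uses made-up counter values; the idea is simply that disk reads should be a tiny fraction of all requests.

```python
def buffer_pool_hit_ratio(status):
    """Compute the InnoDB buffer pool hit ratio (percent) from
    SHOW GLOBAL STATUS counters: the share of read requests that
    were satisfied from memory rather than from disk."""
    requests = status["Innodb_buffer_pool_read_requests"]
    disk_reads = status["Innodb_buffer_pool_reads"]
    if requests == 0:
        return 100.0
    return 100.0 * (1 - disk_reads / requests)

# Sample counter values (illustrative numbers, not from a real server).
sample = {"Innodb_buffer_pool_read_requests": 1_000_000,
          "Innodb_buffer_pool_reads": 5_000}
print(round(buffer_pool_hit_ratio(sample), 2))  # 99.5
```

A consistently low ratio suggests the pool is too small for the working set.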

3.     Correct Selection of Database Storage Engines

Database storage engines handle the table-level operations of the database. MySQL supports many different storage engines; the most common are InnoDB and MyISAM. InnoDB is the default storage engine owing to several powerful features. Choose the one that suits your requirements and performance objectives: MyISAM can be faster for simple, read-heavy workloads, while InnoDB scales more easily and supports transactions, though it can be somewhat slower in those simple cases. I would recommend InnoDB for small business applications owing to its flexibility and its support for row-level locking; many advanced features are also supported by this engine. You can also choose other storage engines if they fulfill your business needs.

4.     Benefit from Opinions of Experts in Industry

The first version of MySQL was released back in 1995; in the twenty years since, the number of MySQL experts has grown tremendously, along with a large number of online forums, discussion groups, technical blogs, and other resources. These forums are a powerful way to get help and guidance from industry experts in the MySQL domain. You can ask those experts about your MySQL issues and get substantial help through platforms such as conferences, seminars, Q&A sessions, and others. So never feel alone on this open source database platform: you can get help from a large number of experts globally, and find solutions for all types of MySQL issues and queries on those forums.

5.     Leverage an Enterprise Level MySQL Monitoring Service

As with any product, service, or machine, you should monitor your MySQL database management system carefully to avert issues that can lead to performance degradation or a complete failure of service. To improve the performance of your database, watch it through an enterprise-level, automated, cloud-based monitoring service.

SiteObservers is one such enterprise-level MySQL monitoring service, offering the powerful features, functions, and capabilities your MySQL database needs. It tracks all the important parameters that affect the performance of your database. The SiteObservers MySQL monitoring service not only tracks performance parameters but also helps find the root causes of problems that occur on your database system. It provides an in-depth perspective on database performance through detailed reports and powerful drill-down functions.

In a nutshell, SiteObservers keeps an eagle eye on every event in your MySQL database management system. To learn more about the SiteObservers enterprise-level free cloud-based MySQL service, click here.


Top 5 Useful Tips on Efficient Network Management

Enterprise-level monitoring, consistent analysis of performance data, rapid corrective measures, and the use of the latest technologies keep your network fit for today's cut-throat competitive business ecosystem.

Business applications accounted for 23% of the entire IT budget of global organizations, according to the Computer Economics survey report for 2015 – about 4% higher than the previous year's figure. A white paper by Vision Solutions, Inc. calculates that the annual loss of a Fortune 500 company due to IT failures exceeds $46 million, including both tangible and intangible costs; according to the same study, brokerage firms in the USA sustain the greatest losses, at over $6.48 million per hour of network failure. A ZDNet study estimated the global annual impact of IT failure at over $3 trillion in 2011. Network support staff at different companies range from 3.8% to 9.1% of the entire IT staff, per a survey conducted by Computer Economics Inc. Meanwhile, an InformationWeek study estimated IT downtime costs for US companies in 2011 and found that a staggering $26.5 billion in revenue is lost annually.

In the shadow of such staggering downtime figures, it is critical for both enterprises and IT professionals to properly devise a network management policy and effectively implement industry best practices to reduce the nasty revenue losses caused by network downtime. In this article, we lay out the top 5 tips for IT managers and technical professionals to manage their IT networks efficiently.

1.       Proactive Network Monitoring

A study by Gartner Research suggests that the market for application performance monitoring (APM) crossed the $2.4 billion mark back in 2013. Improving network performance is not possible unless you closely monitor the different processes, functions, and transactions of your IT networks. More than 63% of organizations believe that automated performance monitoring tools are more productive than consolidating data from other sources, says TRAC Research. The main components liable to contribute to network performance degradation include websites, web applications, mobile apps, servers, automated business processes, business transactions, hardware resources, user experience, raw data analysis, and others. The TRAC Research report suggests that performance visibility has decreased in public cloud environments (61%) compared with private cloud ecosystems (32%), and more than 61% of companies reported a substantial decrease in performance visibility and customer experience after adopting public clouds. TRAC also reveals that sizable organizations rely on application performance monitoring (APM), network performance monitoring (NPM), web performance monitoring (WPM), and wide area network (WAN) optimization tools to improve the user experience on their networks.

In light of the research figures above, it is imperative to implement a unified monitoring platform on your network to create greater value and increase network efficiency. A unified web environment monitoring platform should bring network performance monitoring (NPM), website performance monitoring (WPM), and related capabilities together in one single platform, like the SiteObservers enterprise-level unified free monitoring service.

2.       Regular Performance Review

The second important tip for IT managers and professionals is to review performance data from their network components on a regular basis. Substantial performance-related technical intelligence normally lies in that data; if it is properly analyzed, you can extract deeper, more useful information about the causes of your network's performance degradation. A proper analysis of those issues would also help you identify future threats and risks in the offing within your network elements (NEs). Once you get to the root of the issues, you will be in a much better position to devise strategies for continual improvement of network performance. Aberdeen Group research in 2012 suggested that every one-second delay in website loading causes an 11% reduction in page views, which causes more than a 17% reduction in customer satisfaction and, subsequently, a considerable reduction in conversion rate of about 7%. This can translate into multi-million dollar revenue losses across the global industry. According to KISSmetrics, an online platform that gets 400,000 unique visits per month can lose more than $1.3 million of revenue in a month if its loading time increases by just one second. Meanwhile, the major network performance monitoring tool category – known as network performance monitoring and diagnostics (NPMD) – has achieved consistent success in the past few years, with a market value that crossed the $1.9 billion mark in 2013.

You need to analyze multiple functions and factors such as data packet loss, network latency, security compromises, firewall settings and capacity, the capacity of network resources, loss of connectivity, and variations in network speed. It is also important to note that the performance of an online service based on a website or application does not depend on website performance alone, but also on the many other factors mentioned above. So, to ensure your network performs efficiently, conduct a deeper analysis of its performance and then devise effective strategies to remove performance bottlenecks immediately.
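
The packet loss and latency factors above can be quantified from simple probe data (for example, the counts and round-trip times reported by a ping run). The sketch below is an illustrative calculation; the function name and sample values are assumptions, not a standard tool.

```python
def link_health(sent, received, rtts_ms):
    """Summarize a probe run: packet loss percentage, average
    round-trip time, and jitter (mean absolute difference between
    consecutive RTT samples)."""
    loss_pct = 100.0 * (sent - received) / sent
    avg_rtt = sum(rtts_ms) / len(rtts_ms)
    jitter = (sum(abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:]))
              / (len(rtts_ms) - 1))
    return loss_pct, avg_rtt, jitter

# 100 probes sent, 98 answered; illustrative RTT samples in milliseconds.
loss, avg, jitter = link_health(100, 98, [20.0, 22.0, 21.0, 25.0])
print(loss, avg, round(jitter, 2))  # 2.0 22.0 2.33
```

Tracking these three numbers over time makes degradation visible long before users complain.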

3.       Latest Technology Implementation

Technology plays an instrumental role in today's fiercely competitive marketplace, which is being reshaped by disruptive technologies globally. To achieve a competitive advantage, every entrepreneur tries to use the best technologies available to reduce operational cost, increase performance, create customer value, and achieve the desired business bottom line. New technologies reduce staffing needs and make jobs more technical and skill-oriented, so organizations can obtain better business results through cutting-edge technologies. A few years back, hosting a website or running an automated business process across multiple geographical locations was confined to large corporations; with the advent of cloud computing the entire business scenario changed, and businesses of all sizes became able to use cloud hosting services at very affordable rates. The Internet of Things became possible thanks to reduced computing costs as well as reduced internet bandwidth costs. The latest application-level firewalls, or next-generation firewalls, have changed the landscape of web security without compromising the performance of web environments. Similarly, the bring-your-own-device (BYOD) concept should drastically reduce both the operational and capital costs of organizations in the near future; the BYOD and mobility market is expected to expand massively, crossing the $266.17 billion mark by 2019, according to several recent studies. Meanwhile, new products such as software-defined networks (SDNs) and the latest communication technologies will change the landscape of global business.

So it is always a good idea to implement the latest technologies in your networks – for example, next-generation firewalls, intelligent routers, the fastest transmission networks, the latest wireless technologies, and modern web development technologies – to reduce the company's OPEX and CAPEX and increase return on investment (ROI). Upgrading existing technologies to newer versions and implementing software patches are imperative to keep pace with changes in technology.

4.       Redundant Configuration

Redundant configuration and installation of critical components – both software and hardware – is crucial to creating a reliable network environment in which performance improves tremendously. The major areas where redundant resources should be configured include the core network, transmission links, routers, firewalls, data backups, database servers, and others. Though redundant resources cost more, the results are great for all kinds of businesses, including small, medium, and large companies. It is also important to implement the latest software protocols and technologies that are self-healing in case of performance degradation; they are efficient enough to recover from faults through redundant resources or other embedded mechanisms.

5.       Load balancing & Resource Scaling

Last but not least are resource scaling and load balancing. They depend directly on close monitoring of network performance to identify resource utilization; once the volume of resource use is known, the next step is either to balance the load or scale up the resources to remove performance bottlenecks. Sometimes the load is not properly distributed among the available resources, and serious performance degradation follows. If the available resources are not sufficient, then immediate action is required to scale up resources such as bandwidth, processors, disk space, RAM, the number of servers, redundant links, and other such network components. Proper resource scaling and load balancing will raise the efficiency of your network to the desired level.
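
The load-distribution idea above can be illustrated with a minimal least-connections dispatcher, one of the classic balancing strategies: each new request goes to whichever backend currently holds the fewest active connections. This is a toy sketch with hypothetical backend names, not a production balancer.

```python
class LeastConnectionsBalancer:
    """Toy least-connections dispatcher over a fixed set of backends."""

    def __init__(self, backends):
        # Track the number of active connections per backend.
        self.active = {b: 0 for b in backends}

    def acquire(self):
        # Pick the backend with the fewest active connections.
        backend = min(self.active, key=self.active.get)
        self.active[backend] += 1
        return backend

    def release(self, backend):
        self.active[backend] -= 1

lb = LeastConnectionsBalancer(["app1", "app2"])
first, second = lb.acquire(), lb.acquire()
print(first, second)  # the two requests land on different backends
```

Real balancers add health checks and weighting, but the core decision is this one-line `min`.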

One way or another, all of these tips for managing network performance efficiently rely on close monitoring of the networks. Many unified monitoring services in the marketplace offer enterprise-level network performance monitoring for all components of an entire IT network. SiteObservers is one such enterprise-level unified network performance monitoring service. To learn more about SiteObservers' unified monitoring services, click here.


Top 10 SQL Server Counters to Monitor Closely for an Industry Grade Performance

System resource contention, bad database schema design, bottlenecking, and long-running procedures or queries are the major causes of SQL Server performance issues.

SQL Server is one of the most widely used databases across all kinds of industries around the globe, and maintaining its performance is an important responsibility of the database administrator. Close monitoring of the critical, service-affecting counters of SQL Server not only increases system uptime, efficiency, and effectiveness, but also improves the chances of achieving an organization's desired business goals.

The causes of SQL Server performance degradation fall into three broad categories: bad configuration, shortage of resources, and malfunctioning processes. Specifically, insufficient server resources, bad configuration, excessive query compilations and recompilations, memory bottlenecking, bad execution plans and database schema designs, and CPU pressure can directly create service-affecting impacts on SQL Server performance. Close monitoring of SQL Server's parameters helps diagnose and resolve performance issues quickly, and thus helps improve SQL Server performance in an enterprise ecosystem.

The main SQL Server counters that – if monitored closely – can help improve its performance are given below, with descriptions and recommended value ranges.

1.   Buffer Cache Hit Ratio

This is a very critical counter in all databases, including SQL Server. It represents how often SQL Server finds a desired data page in the buffer cache rather than going to disk for it. It is recommended to maintain a rate above 95% for good SQL Server performance. The counter is closely related to the amount of server memory: if the buffer cache hit ratio falls below the recommended range, quickly increase RAM or check for other issues, because performance degrades rapidly.
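
SQL Server exposes this counter in the sys.dm_os_performance_counters view as a raw value paired with a "base" counter; the percentage is the value divided by its base. A minimal sketch with illustrative counter numbers:

```python
def buffer_cache_hit_ratio(counter_value, counter_base):
    """Convert the raw 'Buffer cache hit ratio' / '... base' pair
    from sys.dm_os_performance_counters into a percentage."""
    return 100.0 * counter_value / counter_base

# Illustrative counter snapshot, not real server output.
ratio = buffer_cache_hit_ratio(970, 1000)
print(ratio, ratio >= 95.0)  # 97.0 True -> within the recommended range
```

Sampling the pair periodically and trending the percentage is more informative than a single reading.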

2.   Batch Requests/Sec

The batch requests per second counter is the number of batches handled by SQL Server in one second; it indicates how busy the SQL Server CPU is. The value of this counter is workload-dependent and varies with parameters such as network link speed, processor capacity, and other server resources. As a rough guide, a typical SQL Server on a 100 Mbps link can handle up to 3,000 batch requests per second. Monitor this counter closely in relation to your server's resources for better insight.

3.   Plan Cache Hit Ratio

This is the percentage of requests served from the plan cache. A higher ratio indicates that your server is working efficiently, without creating new plans for every incoming request; a lower ratio indicates that the server is doing more work than it should. If it is low, find the reason and resolve the issue to restore performance. This counter should also be analyzed alongside the plan cache reuse counter for a better perspective.

4.   SQL Compilation/Sec

This counter indicates the number of times per second SQL Server compiles an execution plan. It should be kept as low as possible; a high value indicates heavy pressure on server resources such as memory and processor. The ratio should also be compared with the batch requests per second value for a deeper perspective: the rule of thumb is that each compilation should serve at least 10 batch requests. A high ratio may also indicate that ad hoc queries are consuming resources excessively and should be rewritten for better performance.
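
The ten-batches-per-compilation rule of thumb above can be turned into a simple check; the 10% threshold and the sample values below are illustrative, not a Microsoft standard.

```python
def compilation_pressure(compilations_per_sec, batch_requests_per_sec):
    """Return the compilations-to-batches ratio and whether it breaks
    the rule of thumb of at most one compilation per ten batches."""
    ratio = compilations_per_sec / batch_requests_per_sec
    return ratio, ratio > 0.10

ratio, too_high = compilation_pressure(50, 1000)
print(ratio, too_high)  # 0.05 False -> healthy
```

If the flag trips consistently, parameterizing ad hoc queries is usually the first fix to try.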

5.   Page Life Expectancy (Sec)

Page life expectancy is the duration, in seconds, that a data page stays in the buffer cache. Higher values are better for SQL Server performance. Many experts believe that any value below 300 seconds is bad for server performance, but this is not a fixed standard either; it is an arbitrary value that depends on the server environment. Monitor this parameter closely to maintain server performance.
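
A popular community refinement of the 300-second guideline (again, a heuristic rather than a Microsoft standard) scales the threshold with buffer pool size: roughly 300 seconds for every 4 GB of buffer pool. A hedged sketch:

```python
def ple_threshold_seconds(buffer_pool_gb):
    """Community heuristic: scale the classic 300-second page life
    expectancy guideline by buffer pool size (300 s per 4 GB)."""
    return 300 * (buffer_pool_gb / 4)

# A 16 GB buffer pool suggests a ~1200-second floor, not 300.
print(ple_threshold_seconds(16))  # 1200.0
```

The point is that a fixed 300 seconds is far too lenient on servers with large amounts of memory.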

6.   Full Scans/Sec

The ‘full scans per second’ counter indicates the number of unrestricted scans the server performs over database tables or indexes. A high value may be caused by missing indexes, requests for too many data records, or very small tables. A sudden increase may be due to an index threshold being reached or some other abnormal condition. Meanwhile, indexes should be defragmented on a regular basis to improve server performance.

7.   Lock Waits/Sec

This counter pertains to the management of concurrent users in the SQL Server environment: lock waits per second is the number of times per second that requests had to wait to acquire a lock on a resource. Ideally its value should be zero, because no request should wait for a resource in an industry-grade SQL Server environment. The lock wait time counter is another useful counter that helps you interpret lock waits per second more clearly. Any increase in this counter should be addressed immediately to keep SQL Server performance high.

8.   Deadlocks/Sec

This counter is closely associated with lock waits per second: deadlocks per second is the number of lock waits per second that resulted in deadlocks. It should ideally be kept at zero. A small value (less than 1 per second) for a very short period may be acceptable, but if it persists for a longer duration you should take immediate action.

9.   Page Splits/Sec

SQL Server splits pages when an insert or update overflows an index page; the number of page splits performed in one second is the page splits per second counter. It should always be kept low to maintain high database server performance, because splitting a page to insert data into a table consumes significant resources. The problem stems from poor table and index configuration. To decrease page splits, modify tables and indexes to reduce non-sequential insertions into pages; you can also use PAD_INDEX and FILLFACTOR to leave more empty space in pages.

10.   Checkpoint Pages/Sec

Dirty pages are flushed back to disk by SQL Server's checkpoint operation, and this counter measures the number of dirty pages flushed to disk per second. It is a workload-dependent value that hinges on several parameters, especially memory. It is recommended to keep this counter as low as possible; any abrupt increase indicates memory pressure. Always monitor this SQL Server parameter closely to maintain high server performance.

Automated SQL Server monitoring not only increases the performance of your SQL Server but also improves the business performance of your organization by raising customer satisfaction. To learn more about an enterprise-level SQL Server monitoring service, click here.


Useful Tips – How to Improve Performance of Your Linux Environment?

Consistent monitoring of hardware resource utilization and of the performance of different processes helps you identify and improve the underperforming areas of your Linux environment.

Linux is one of the most popular operating systems in the public server domain and in enterprise environments, where security, performance, robustness, cost, and reliability matter a great deal. Linux is an open source operating system that was first developed back in 1991 and released under the GNU General Public License (GPL). Since its first release, many different versions and derivatives have been developed; major operating systems based on the Linux platform include Android, Debian, Ubuntu, CentOS, and others. Linux leads the server market with over a 36.72% share, followed by different versions of Unix. A whopping 97% of the supercomputer market runs on Linux, and its mainframe share stands at about 28%, while in the desktop and laptop market Linux has a very low share of about 1.34%. More than 54% of smartphones and other handheld devices run Android, which is a derivative of Linux.

In an industry-grade environment, the performance of the Linux operating system matters a lot, because Linux machines handle many mission-critical processes that directly impact a company's revenue, reputation, and business survival. You can improve the performance of your Linux environment through several activities:

  1. Proper monitoring & correcting of server parameters
  2. Optimizing server configurations
  3. Optimizing third party applications’ configurations

In this article, we discuss some major tips that are handy for improving the performance of a Linux environment; a few of them are listed below.

  • Consistent Monitoring

The first and most important thing is to monitor the performance of your Linux environment on a regular basis, to track any issues or underperformance in its processes and parameters. A few important commands such as sar, vmstat, free, iostat, and top are very useful for manually checking server performance parameters on a regular basis. The performance of your Linux server can also be checked through a cloud-based server monitoring service; SiteObservers is one of the best such server monitoring options available in the marketplace.
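
Alongside commands such as free and vmstat, the same checks can be scripted. The sketch below parses captured `free -m`-style output (the sample figures are illustrative) to compute memory utilization, the kind of number a regular monitoring job would record and alert on.

```python
def memory_used_pct(free_output):
    """Parse the 'Mem:' line of captured `free -m` output and return
    used memory as a percentage of the total."""
    for line in free_output.splitlines():
        if line.startswith("Mem:"):
            fields = line.split()
            total, used = int(fields[1]), int(fields[2])
            return 100.0 * used / total
    raise ValueError("no Mem: line found")

# Captured sample output (illustrative figures).
sample = """              total        used        free
Mem:           7982        3190        4792
Swap:          2047           0        2047"""
print(round(memory_used_pct(sample), 1))  # 40.0
```

Run from cron, a few lines like this give you a crude but useful performance history.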

  • Turn-off Unnecessary Services

The Linux operating system comes with numerous features, capabilities, and services that not everyone uses, so it is important to turn off the services you do not need, such as xfs, autofs, apmd, sendmail, and others. By turning off unnecessary services you can save a lot of RAM and CPU utilization, and subsequently improve server performance considerably.

  • Disable Unused Modules and Control Panels


The Linux server environment supports multiple cloud-based control panels and various modules, such as FrontPage support. Disable all unnecessary modules and control panels, such as Plesk, cPanel, and others; you can re-enable them when required via a few simple steps. By disabling unneeded control panels and software modules, you can improve RAM utilization by as much as 30% or more.

  • Keep System Updated

It is highly recommended to keep Linux kernel parameters and other software up to date to enhance the performance of your Linux environment. Always tune the default Linux parameters to the requirements of your own environment; this can improve server performance substantially. Meanwhile, keep applying newly released patches to boost system performance.

  • Tuning of Network Functions

The Linux environment supports both IPv4 and IPv6, plus other features related to network performance. If you are not using IPv6, disable it, and also tune the Transmission Control Protocol (TCP) to improve network performance, which will subsequently improve the overall performance of your OS environment. You can set larger TCP buffer sizes for that purpose.

  • Appropriate Apache Configuration

It is very important to configure Apache and similar modules properly to improve the performance of the Linux environment. Make sure Apache is using an appropriate amount of memory; you can save up to 40% of RAM by properly configuring Apache's "MinSpareServers" and "StartServers" directives. Also keep Apache performance under regular monitoring, and make the configuration changes that improve it.

  • Proper MySQL Configuration

Databases are fundamental components of the whole Linux (or any OS) environment, and a proper allocation of server resources to them can improve the performance of the entire environment. It is very important to allocate an adequate memory cache size for MySQL to ensure the database does not bog down under heavy request loads. You can also decrease the cache size if it is not required.

Beyond the main tips above, you can improve Linux environment performance through many other small and large activities, such as tuning VirtualBox, disabling the GUI, optimizing cache directories, overclocking memory, optimizing boot speed, reducing system logging activity, and others.

To gain a clearer and deeper perspective on your Linux environment, you can use an enterprise-level monitoring service to monitor the performance of Linux and related software. To learn more about the enterprise-grade free cloud-based Linux monitoring service, click here.


Importance of Server Parameter Performance Monitoring

The performance of a healthy cloud server is determined by the functional status of its parameters, which can only be tracked through cloud server monitoring.

Owning and managing an online server was a big deal a couple of decades back; only large and medium-sized organizations had the privilege of deploying and running one or more servers on their premises. With the advent of the cloud computing era, getting an online server is just a matter of a few dollars and a few minutes. Today's industrial world is projected to have hundreds of millions of online servers located all over the globe, and the number keeps growing. Almost every micro, small, and medium-sized business owns at least one cloud server to host its website or other applications. In such a fiercely competitive online environment, healthy server performance means business revenue, while bad performance means missed business opportunities, or even business losses in extreme cases.

Monitoring server parameters is critical to keeping servers performing well and earning revenue for their respective businesses. Server performance can degrade in numerous areas, causing many issues with application speed and smoothness; some of those issues are listed below:

  • Slow speed or response time
  • Processor overloading
  • High consumption of server memory
  • Locks and mutual exclusions
  • Network communication errors
  • Loss of data records
  • Rejection of requests
  • Breakdown of processes
  • Complete breakdown of server

Proper monitoring of server parameters such as CPU usage, memory usage, disk usage, CPU interrupts, network data transfer, hits per second, active sessions, and others helps an administrator preempt incidents that could disrupt a server's smooth performance. The performance monitoring of server parameters is therefore crucial for SMBs, both technically and commercially. There are many good reasons to monitor server parameters to maintain high performance; the major ones are given below.
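
At its core, the parameter monitoring described above boils down to comparing sampled values against thresholds and alerting on breaches. The sketch below is a minimal illustration; the parameter names and limits are assumptions, not recommended values.

```python
def check_thresholds(samples, limits):
    """Return alert messages for any sampled parameter that exceeds
    its configured limit (illustrative thresholds, not standards)."""
    return [f"ALERT: {name} at {value} exceeds {limits[name]}"
            for name, value in samples.items()
            if name in limits and value > limits[name]]

# One sampling cycle: CPU and disk are over their limits, memory is fine.
samples = {"cpu_pct": 92, "mem_pct": 70, "disk_pct": 88}
limits = {"cpu_pct": 85, "mem_pct": 90, "disk_pct": 80}
print(check_thresholds(samples, limits))
```

A real monitoring service layers scheduling, trending, and notification channels on top of exactly this comparison.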

  • Increase in the business revenue and customer satisfaction
  • Decrease in the maintenance and recovery cost
  • Timely utilization of available technical resources
  • Increased server performance
  • Deep analysis of factors impacting the server performance
  • Accurate root cause analysis (RCA)
  • Reduced server downtime
  • Instant and automated alerts and reports
  • Better security and disaster recovery strategies

In the old days, large enterprises and corporations used proprietary monitoring applications to maintain high server performance, but that was too costly an option for SMBs. Nowadays, many cloud-based server monitoring services are available online, either at very affordable prices or at no cost at all. SiteObservers is one such free all-in-one monitoring service provider, offering numerous kinds of free monitoring services: free website monitoring, free server monitoring, free website uptime monitoring, free application performance monitoring, free web transaction monitoring, and many others. Any small or medium-sized business can benefit from these offerings to enhance its growth.
