Has your process been stopped unexpectedly in Linux? How to troubleshoot unexpected errors?

Every resource on a Linux system has limits, and the kernel keeps an eye on how much of each resource your server processes use. At first a shortage shows up as warning signs; later the kernel simply forces a process to stop without any notice. One thing to keep in mind is that Linux uses physical memory aggressively: beyond the data your applications hold, it fills spare RAM with caches and buffers.

Let us find out the reason for the process stopping unexpectedly:
  1. You have probably come across the term “out of memory” (OOM). It means the kernel killed a task because the system ran out of available memory; the process disappears without any warning.
  2. The kernel logs these events with the message “Out of memory”. Check for such messages in the logs using the command:
Code:
sudo grep -i -r 'out of memory' /var/log/
Here grep searches the log directory recursively (-r) and case-insensitively (-i), going through every file in it, such as /var/log/auth.log.
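On systems that use systemd, the kernel ring buffer is another place these messages show up. As a minimal alternative (assuming your distribution ships journalctl and the util-linux dmesg), you can search the kernel messages directly:
Code:
# Kernel messages via the systemd journal
journalctl -k | grep -i 'out of memory'

# Or straight from the kernel ring buffer, with human-readable timestamps
dmesg -T | grep -i 'out of memory'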

The OOM killer reports the error like this:
Code:
Kernel: Out of memory: Kill process 9163 (mysqld) score 511 or sacrifice child
  3. In this example the killed process is mysqld with PID 9163, and its OOM score when it was killed was 511. The exact wording of the message differs between Linux distributions. An OOM kill usually points to one of a few situations: the process should be reconfigured to use less memory, memory overcommit should be restricted so the kernel refuses allocations it cannot back, or, in the end, the server needs more memory added. A quick way to see how much memory the offending process actually uses is shown below.
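As a small sketch (using mysqld from the log line above purely as an example; substitute your own process name), ps can report the memory of a process by name:
Code:
# Show PID, resident memory (KB), virtual size and command for every mysqld process
ps -C mysqld -o pid,rss,vsz,comm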
Check the current resource usage:

Linux ships with tools that help you identify potential resource shortfalls. Start with the following command:

Code:
free -h
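For reference, here is a purely illustrative output in the older free layout (the exact columns depend on your procps version; newer releases show an "available" column instead of the "-/+ buffers/cache" line, and the figures below are made up to match the numbers discussed next):
Code:
             total       used       free     shared    buffers     cached
Mem:          993M       745M       248M        12M        82M       429M
-/+ buffers/cache:       234M       759M
Swap:         1.0G         0B       1.0G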
In this step you need to understand the difference between used memory, buffers, and cache. The "Mem" line may say that RAM is roughly 75% used, but much of that is actually taken up by the cache.
  1. The kernel keeps recently read disk data in the cache so it can be accessed faster; from an application's point of view, that memory is still free to use.
  2. Notice that free and used memory are listed twice: the second line shows the values adjusted for buffers and cache.
  3. In the example above, only 234 MB of the 993 MB total is genuinely in use. Another useful tool, top, gives even more information, such as per-process statistics, runtime, CPU usage, and memory usage; see the example below.
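As a quick sketch of how you might list the biggest memory consumers non-interactively (both commands are standard procps tools, though flag support can vary slightly between versions):
Code:
# Ten processes using the most memory
ps aux --sort=-%mem | head -n 11

# One batch-mode snapshot from top (press Shift+M inside interactive top to sort by memory)
top -b -n 1 | head -n 20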
Keep an eye on processes that are at risk:
  1. As said previously, whenever the server goes over its memory limit it can bring you more problems: the kernel starts killing processes to get memory back. At runtime the OOM killer assigns each process a score, and the process with the highest score is killed first.
  2. You can check that score in the file /proc/<pid>/oom_score, using the PID of the process in question. To find the PID, use the command:
Code:
ps aux | grep <process name>
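Once you have the PID, a minimal sketch for inspecting the score (and, with care, lowering a critical process's chances of being picked) could look like this; the PID 1234 is just a placeholder:
Code:
# Read the current OOM score of PID 1234
cat /proc/1234/oom_score

# Optionally bias the OOM killer away from this process
# (range is -1000 to 1000; -1000 disables killing it entirely)
echo -500 | sudo tee /proc/1234/oom_score_adj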
Most Linux distributions allow memory overcommit by default, which is exactly what makes the OOM killer step in and interrupt your processes. If this keeps happening, it is worth restricting overcommit so that allocations the kernel cannot back with real memory are refused up front rather than being killed later. Prevention is better than cure! A sketch of the relevant settings follows.
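As a hedged sketch (the sysctl knobs below are standard kernel settings, but test the effect on your workload before making them permanent):
Code:
# Show the current overcommit policy (0 = heuristic, 1 = always allow, 2 = strict)
sysctl vm.overcommit_memory vm.overcommit_ratio

# Switch to strict accounting so oversized allocations fail immediately
sudo sysctl -w vm.overcommit_memory=2
sudo sysctl -w vm.overcommit_ratio=80

# Add the same lines to /etc/sysctl.conf to make the change persistent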