I have two servers located in two different data centers. Both servers handle a large number of concurrent large file transfers, but network performance is poor and degrades further as file size grows. How do I tune TCP under Linux to solve this problem?
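For long-distance links, a common first step is raising the kernel's TCP buffer limits so the window can grow to match a high bandwidth-delay product. A minimal sketch of `/etc/sysctl.conf` settings follows; the specific values are illustrative assumptions, not tuned recommendations for any particular link:

```
# /etc/sysctl.conf — illustrative TCP tuning for high-latency, high-bandwidth links.
# Maximum socket receive/send buffer sizes the kernel will allow (bytes).
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# TCP autotuning limits: min, default, max buffer sizes (bytes).
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# Window scaling lets the TCP window exceed 64 KB (on by default on modern kernels).
net.ipv4.tcp_window_scaling = 1
```

Apply the changes with `sysctl -p` as root, then re-test transfer throughput between the two data centers before tuning further.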
I'm not able to ping from a FreeBSD prison (jail). I can resolve names and use ftp / http ports, but ping and traceroute are disabled. How do I allow applications and users inside a virtualized jail to run the traceroute and ping commands?
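ping and traceroute need raw sockets, which FreeBSD jails deny by default. One approach is to enable the `security.jail.allow_raw_sockets` sysctl on the host (not inside the jail); the snippet below is a sketch and must be run as root on the host:

```
# On the FreeBSD host, allow jailed processes to create raw sockets.
sysctl security.jail.allow_raw_sockets=1

# Make the setting persistent across reboots.
echo 'security.jail.allow_raw_sockets=1' >> /etc/sysctl.conf
```

Note that this relaxes isolation for every jail on the host; newer FreeBSD releases can instead grant the permission per jail via the `allow.raw_sockets` parameter in jail.conf.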
Only software developers legitimately need access to core files, and none of my production web servers requires a core dump. How do I disable core dumps on Debian / CentOS / RHEL / Fedora Linux to save large amounts of disk space?
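Core dumps can be switched off by setting the maximum core file size to zero. A quick sketch for the current shell session (no root required):

```shell
# Disable core dumps for this shell and its children by
# capping the core file size at zero blocks.
ulimit -c 0

# Verify the new limit; prints 0 when core dumps are disabled.
ulimit -c
```

To make this permanent for all users, the usual approach is a `* hard core 0` line in `/etc/security/limits.conf`.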
How do I increase the maximum number of open files under CentOS Linux? How do I open more file descriptors under Linux?
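Before raising any limits, it helps to see the current ones. The sketch below reads the system-wide ceiling and the per-process soft limit on Linux:

```shell
# System-wide maximum number of open file descriptors (all processes combined).
cat /proc/sys/fs/file-max

# Per-process soft limit for the current shell.
ulimit -Sn
```

To raise them, the system-wide ceiling is typically set via `fs.file-max` in `/etc/sysctl.conf`, and per-user limits via `nofile` entries in `/etc/security/limits.conf`; both changes require root.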