Recently I created a simple shell script called backup.sh in the /root/scripts directory to back up a MySQL database and dump it to the /nfs/mysql/ directory. I put a file in /etc/cron.hourly/ (more precisely, I used the ln command to create a soft link there) and it doesn’t run. There was no error in the systemd log or cron log. Here is why my cron job was not working, and how I troubleshot it.
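As a quick sketch of the troubleshooting, here is roughly how to reproduce the setup above and ask run-parts (which drives /etc/cron.hourly/ on Debian/Ubuntu and friends) which jobs it would actually execute. One classic culprit worth checking first: run-parts silently skips file names containing a dot, so a link named backup.sh never runs.

```bash
# Reproduce the setup: symlink the script into /etc/cron.hourly/
ln -s /root/scripts/backup.sh /etc/cron.hourly/backup.sh

# Dry run: print the jobs run-parts would execute without running them.
# backup.sh is missing from the output because run-parts ignores
# names containing a dot (see run-parts(8) on Debian/Ubuntu).
run-parts --test /etc/cron.hourly

# Renaming the link without the .sh extension fixes it
mv /etc/cron.hourly/backup.sh /etc/cron.hourly/backup
run-parts --test /etc/cron.hourly
```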
I recently read that TCP BBR has significantly increased throughput and reduced latency for connections on Google’s internal backbone networks, and has increased google.com and YouTube Web server throughput by 4 percent on average globally – and by more than 14 percent in some countries. The TCP BBR patch needs to be applied to the Linux kernel. The first public release of BBR was here, in September 2016. The patch is available for anyone to download and install. Another option is using Google Cloud Platform (GCP), which turns on this cutting-edge congestion control algorithm by default.
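On a stock Linux box with a 4.9 or later kernel, BBR is already in the tree and no patching is needed; enabling it comes down to two sysctl settings. A minimal sketch, assuming you pair BBR with the fq packet scheduler as Google recommends:

```bash
# Load the BBR module (shipped with Linux 4.9 and later)
modprobe tcp_bbr

# Enable the fq qdisc and switch congestion control to BBR
cat <<'EOF' > /etc/sysctl.d/90-tcp-bbr.conf
net.core.default_qdisc=fq
net.ipv4.tcp_congestion_control=bbr
EOF
sysctl --system

# Verify the running setting; it should print "bbr"
sysctl net.ipv4.tcp_congestion_control
```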
Recently I came across a nifty little tool called pssh to run a single command on multiple Linux / UNIX / BSD servers. You can easily increase your productivity with this SSH tool.
More about pssh
pssh is a command line tool for executing ssh in parallel on a number of hosts. It provides features such as:
- Sending input to all of the processes
- Inputting a password to ssh
- Saving output to files
- IT/sysadmin task automation, such as patching servers
- Timing out and more
Let us see how to install and use pssh on Linux and Unix-like systems.
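To give a taste of day-to-day usage, here is a minimal sketch, assuming a hosts.txt file listing one server per line (note that some distros package the binary as parallel-ssh):

```bash
# Run uptime on every host in hosts.txt as root, printing each
# host's output inline (-i); give up on a host after 60 seconds (-t)
pssh -i -l root -h hosts.txt -t 60 'uptime'

# Typical patching run across a fleet of Debian/Ubuntu boxes;
# -t 0 disables the timeout for the long-running upgrade
pssh -i -l root -h hosts.txt -t 0 'apt-get -y update && apt-get -y upgrade'
```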
It’s always a good idea to keep backups of all of your data in multiple places. Every Linux or Unix sysadmin must master the art of backups if they want to keep their data forever. Most sysadmins recommend and follow the 3-2-1 rule:
- At least three copies of data.
- In two different formats.
- With one of those copies off-site.
Tarsnap is one such off-site backup service. It’s a secure online backup system for UNIX-like systems. The service encrypts your data and stores it in Amazon S3. To use Tarsnap effectively and feel secure about your backups, you need the “Tarsnap Mastery” book by Michael W. Lucas. It is no secret that I’m a big fan of his book series. Let’s see what the book is all about.
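For context before diving into the review, everyday Tarsnap usage looks roughly like this (the key file path, user, machine and archive names below are just examples):

```bash
# One-time setup: register this machine and generate its key
tarsnap-keygen --keyfile /root/tarsnap.key \
    --user you@example.com --machine www1

# Create an encrypted archive of /home; archive names must be
# unique, so stamping them with the date is a common convention
tarsnap -c -f home-$(date +%Y%m%d) /home

# List archives and restore one into the current directory
tarsnap --list-archives
tarsnap -x -f home-20170101
```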
Cloud storage is nothing but an enterprise-level cloud data storage model to store digital data in logical pools, across multiple servers. You can use a hosting company such as Amazon, Google, Rackspace, Dropbox and others to keep your data available and accessible 24×7. You can access data stored on cloud storage via an API, desktop/mobile apps, or web-based systems.
In this post, I’m going to list amazingly awesome open source cloud storage engines that you can use to access and sync your data privately for security and privacy reasons.
Vagrant is a multi-platform command line tool for creating lightweight, reproducible and portable virtual environments. Vagrant acts as a glue layer between different virtualization solutions (software, hardware, PaaS and IaaS) and different configuration management utilities (Puppet, Chef, etc.). Vagrant was started back in 2010 by Mitchell Hashimoto as a side project and later became one of the first products of HashiCorp – the company Mitchell founded.
While officially described as a tool for setting up development environments, Vagrant can be used for a lot of other purposes by non-developers as well:
- Creating demo labs
- Testing configuration management tools
- Speeding up work with tools that are not multi-platform, such as Docker
In this tutorial I’ll show how we can use Vagrant to create a small virtual test lab which we will be able to pass on to our colleagues.
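To give a feel for the workflow before we build the lab, spinning up a throwaway VM takes only a few commands (the "ubuntu/trusty64" base box is just an example from Vagrant Cloud):

```bash
# Generate a Vagrantfile for a public base box
vagrant init ubuntu/trusty64

# Boot the VM under the default provider (VirtualBox) and log in
vagrant up
vagrant ssh

# Throw the whole environment away when done
vagrant destroy -f
```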
You can send visitors to different servers based on the country of their IP address using the Amazon Route 53 cloud-based DNS service. For example, if you have a server in Amsterdam, a server in America, and a server in Singapore, then you can easily route traffic for visitors in Europe to the Amsterdam server, visitors in Asia to the Singapore server, and those in the rest of the world to the American server. This results in various benefits such as:
- Better performance as you are sending web site visitors to their nearest web server.
- Reduced load on origin.
- Geomarketing/online advertising.
- Restricting content to those geolocated in specific countries (I am not a big fan of DRM).
- Potentially lower costs in some cases, and more.
In this post, I will explain how to configure and test GeoDNS using the AWS Route 53 service.
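As a preview, creating a geolocation record with the AWS CLI looks roughly like this; the hosted zone ID, hostname and IP address are placeholders, and the Asia and default ("rest of the world") records follow the same pattern:

```bash
# geo-eu.json: send European visitors to the Amsterdam server
cat > geo-eu.json <<'EOF'
{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "www.example.com",
      "Type": "A",
      "SetIdentifier": "europe",
      "GeoLocation": { "ContinentCode": "EU" },
      "TTL": 300,
      "ResourceRecords": [{ "Value": "203.0.113.10" }]
    }
  }]
}
EOF

# Apply the change to the hosted zone
aws route53 change-resource-record-sets \
    --hosted-zone-id Z1EXAMPLE --change-batch file://geo-eu.json
```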
- How to serve your entire blog using CloudFront.
- DNS settings.
- WordPress settings.
- Documenting limitations of CloudFront.
- Documenting performance improvements.
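As a quick sanity check once the blog is behind CloudFront, something along these lines confirms both the DNS change and the caching behavior (www.example.com stands in for your blog’s hostname):

```bash
# The blog hostname should resolve to a CloudFront distribution,
# e.g. d111111abcdef8.cloudfront.net
dig +short CNAME www.example.com

# CloudFront stamps responses with Via and X-Cache headers
# (X-Cache: Hit from cloudfront on a cache hit)
curl -sI https://www.example.com/ | grep -i -E '^(via|x-cache)'
```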
The HTTP 2xx class of status codes indicates that the action requested by the client was received and processed successfully. HTTP/1.1 200 OK is the standard response for successful HTTP requests; when you type www.cyberciti.biz in the browser you will get this status code. The HTTP/1.1 206 status code allows the client to grab only part of the resource by sending a Range header. This is useful for the following (see the curl demo after the list):
- Understanding http headers and protocol.
- Troubleshooting network problems.
- Troubleshooting large download problems.
- Troubleshooting CDN and origin HTTP server problems.
- Testing the resumption of interrupted downloads using tools like lftp, wget, or telnet.
- Splitting a large file into multiple simultaneous download streams, i.e. downloading a large file in parts.
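Here is the promised demo of a 206 response in action with curl (the file URL is a placeholder; any server that advertises Accept-Ranges: bytes will do):

```bash
# Does the server support range requests at all?
curl -sI https://www.cyberciti.biz/files/test.iso | \
    grep -i -E '^(accept-ranges|content-length)'

# Ask for the first 1024 bytes only; -w prints the status code,
# which should be 206 (Partial Content) rather than 200
curl -s -r 0-1023 -o part1.bin -w '%{http_code}\n' \
    https://www.cyberciti.biz/files/test.iso
```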
Amazon Web Services (AWS) launched a new service called Amazon Glacier. You can use this service for archiving mission-critical data and backups in a reliable way, in enterprise IT or for personal use. The service costs as low as $0.01 (one US penny, one one-hundredth of a dollar) per gigabyte, per month. You can store a lot of data in various geographically distinct facilities, with hardware and data integrity verified irrespective of the length of your retention periods. The first thing that comes to mind is that Glacier would be a good place to back up family photos and videos from my local 12TB NAS.
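A minimal sketch of pushing such a NAS archive into Glacier with the AWS CLI (the vault name and tarball are examples; "-" for --account-id means the current account):

```bash
# Create a vault to hold the photo archives
aws glacier create-vault --account-id - --vault-name family-photos

# Bundle a year's worth of photos and upload it as one archive;
# save the archiveId from the response -- you need it later to
# retrieve or delete the archive
tar -czf photos-2012.tar.gz /nas/photos/2012
aws glacier upload-archive --account-id - --vault-name family-photos \
    --body photos-2012.tar.gz
```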