How to fully automate unattended virt-install?

Let me start by saying what I want to do. I'd like to fully automate, in an unattended way, the building of QEMU/KVM VM images using virt-install. I know that some folks use the GUI tool to do this, or they edit a pre-existing image's XML description, but I want to start from scratch.
I've Googled around, and examples of doing this are hard to find. What I have found is that virt-install is the command to use, and that it can be used interactively with a TTY console attached (you manually answer configuration questions during the install). For a fully automated solution, you can instead supply an answer file (a kickstart file on Red Hat-style distros, or a preseed file, typically preseed.cfg, on Debian/Ubuntu) to provide the answers you'd normally enter manually. The answer file can also specify additional software to install, disk and network configuration, and so on.
I think I've got this mostly working except that the installation hangs shortly after install begins. I think it has something to do with the need (or not) to have a console attached to the install. Here is the virt-install command I am using:
virt-install --connect qemu:///system \
--name vm --ram 128 \
--disk path=./vm.qcow2,size=8,format=qcow2 \
--location 'http://archive.ubuntu.com/ubuntu/dists/trusty/main/installer-amd64/' \
--network user,model=virtio \
--initrd-inject preseed.cfg \
--extra-args="console=tty0 console=ttyS0,115200"
This is the preseed.cfg file (which I cribbed from many examples on the web and in the Ubuntu documentation):
### Localization
# Locale sets language and country.
d-i debian-installer/locale string en_US
# Keyboard selection.
d-i keyboard-configuration/layoutcode string us
d-i keyboard-configuration/modelcode string pc105
d-i keyboard-configuration/variantcode string
### Network configuration
# netcfg will choose an interface that has link if possible. This makes it
# skip displaying a list if there is more than one interface.
d-i netcfg/choose_interface select auto
# Any hostname and domain names assigned from dhcp take precedence over
# values set here. However, setting the values still prevents the questions
# from being shown, even if values come from dhcp.
d-i netcfg/get_hostname string vm
d-i netcfg/get_domain string foobar.net
# Disable that annoying WEP key dialog.
d-i netcfg/wireless_wep string
### Mirror settings
d-i mirror/country string manual
d-i mirror/http/hostname string us.archive.ubuntu.com
d-i mirror/http/directory string /ubuntu
d-i mirror/http/proxy string
### Partitioning
# Encrypt your home directory?
d-i user-setup/encrypt-home boolean false
# Alternatively, you can specify a disk to partition. The device name
# can be given in either devfs or traditional non-devfs format.
d-i partman-auto/disk string /dev/vda
# In addition, you'll need to specify the method to use.
# The presently available methods are: "regular", "lvm" and "crypto"
d-i partman-auto/method string regular
# You can choose from any of the predefined partitioning recipes.
d-i partman-auto/choose_recipe select atomic
# This makes partman automatically partition without confirmation, provided
# that you told it what to do using one of the methods above.
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
### Clock and time zone setup
# Controls whether or not the hardware clock is set to UTC.
d-i clock-setup/utc boolean true
# You may set this to any valid setting for $TZ; see the contents of
# /usr/share/zoneinfo/ for valid values.
d-i time/zone string UTC
### Account setup
# Skip creation of a root account (normal user account will be able to
# use sudo).
d-i passwd/root-login boolean false
# To create a normal user account.
d-i passwd/user-fullname string VMuser
d-i passwd/username string vmuser
# Normal user's password, either in clear text
# or encrypted using an MD5 hash.
d-i passwd/user-password-crypted password CRACKMECRACKM
# This is fairly safe to set, it makes grub install automatically to the MBR
# if no other operating system is detected on the machine.
d-i grub-installer/only_debian boolean true
### Package selection
d-i tasksel/first multiselect standard
# Individual additional packages to install
d-i pkgsel/include string openssh-server
### Finishing up the first stage install
# Avoid that last message about the install being complete.
d-i finish-install/reboot_in_progress note
# How do you want to manage upgrades on this system?
d-i pkgsel/update-policy select none
After all that, when I execute the virt-install command I see:
WARNING Unable to connect to graphical console: virt-viewer not installed. Please install the 'virt-viewer' package.
WARNING No console to launch for the guest, defaulting to --wait -1
Starting install...
Retrieving file linux...
Retrieving file initrd.gz...
Allocating 'virtinst-linux.rCdX0h'
Transferring virtinst-linux.rCdX0h
Allocating 'virtinst-initrd.gz.BbRBMv'
Transferring virtinst-initrd.gz.BbRBMv
Creating domain...
Domain installation still in progress. Waiting for installation to complete.
and it just hangs. If I ^Z it into the background and start virsh, I see the vm in a running state.
I think I'm close, but need to fix it so that:
Install shows complete and virt-install returns to shell.
The new VM is shutdown and I'm left with the image file ready to go.
I think #2 can be accomplished in the preseed.cfg file with some kind of cleanup instructions (still researching this), but any help fixing #1 would be greatly appreciated.

To have virt-install use a Kickstart file to initialize an operating system, you need to pass the ks= argument to the kernel by specifying it via the --extra-args parameter:
--initrd-inject preseed.cfg \
--extra-args="ks=file:/preseed.cfg console=tty0 console=ttyS0,115200"
The above example injects a local Kickstart file onto the guest operating system, to be used for automated installation.
You can also specify ks via HTTP:
--extra-args="ks=http://192.168.1.1/preseed.cfg"
or FTP:
--extra-args="ks=ftp://192.168.1.1/preseed.cfg"
or NFS:
--extra-args="ks=nfs:192.168.1.1:/preseed.cfg"

Related

How do I get Salt Master to apply a basic SLS file to work against a Salt Minion?

I am programming and want to push down code with Salt. I have recently installed Salt minion and Salt master on two CentOS 7.x servers. They are both Salt version 2015.8.7. My salt '*' test.ping worked. This, to me, proves /etc/salt/minion.yml and /etc/salt/master.yml were set up correctly on their respective servers. It proves the services are up and running.
Here are the contents of top.sls:
base:
  '*':
    - core
Here are the contents of core.sls:
{{ salt['runtests_helpers.get_sys_temp_dir_for_path']('testfile') }};
  file:
    - managed
    - source: salt://testfile
When I run
# salt 'fqdnOfSaltMinionServer' state.apply
I get an error like this: "...No Top file or external nodes data matches found... Error: Minions returned with non-zero exit code".
How do I uninstall Salt master from the server that I want to be a Salt minion? How do I get a basic .sls file to work? Ping works. I don't see what is wrong with my top.sls or core.sls files. I have a small, simple text file named testfile. I want to transfer it from the Salt master server to the Salt minion. I don't see what is wrong with my setup.
Are you using the yum/rpm-provided Salt master on CentOS? I was facing a similar issue and had to create a /srv/salt directory on the Salt master server to hold my files (core.sls and testfile in your example) before I could get anywhere.
At least with salt 2016.11.1 (Carbon), this is the default setting (in /etc/salt/master) where the top file must reside:
##### File Server settings #####
##########################################
# Salt runs a lightweight file server written in zeromq to deliver files to
# minions. This file server is built into the master daemon and does not
# require a dedicated port.
# The file server works on environments passed to the master, each environment
# can have multiple root directories, the subdirectories in the multiple file
# roots cannot match, otherwise the downloaded files will not be able to be
# reliably ensured. A base environment is required to house the top file.
# Example:
# file_roots:
#   base:
#     - /srv/salt/
#   dev:
#     - /srv/salt/dev/services
#     - /srv/salt/dev/states
#   prod:
#     - /srv/salt/prod/services
#     - /srv/salt/prod/states
#
#file_roots:
#  base:
#    - /srv/salt
#
As in John's previous answer, putting the top file in /srv/salt is what to do if you have not changed the default in /etc/salt/master.
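For reference, a minimal core.sls that just copies a file to the minion would look something like this (a sketch; the state ID copy_testfile and the target path /tmp/testfile are made up, and testfile must sit under /srv/salt on the master):
# /srv/salt/core.sls (sketch)
copy_testfile:
  file.managed:
    - name: /tmp/testfile
    - source: salt://testfile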

puppet master didn't pass agent hostname/fqdn to enc script

Puppet version: 3.6.2
In order to simplify the management of SSL certificates, our puppet agents all use the same certname, certname=agent.puppet.com.
When the puppet master gets a request from an agent (hostname: web00.xxx.com), it executes the ENC script with the certname as the parameter.
node_terminus = exec
external_nodes = /home/ocean/puppet/conf/bce_puppet_bns
puppet.log:
2015-05-06 09:55:34 +0800 Puppet (debug): Executing '/home/ocean/puppet/conf/bce_puppet_bns agent.puppet.com'
How do I configure the puppet master to pass the agent's real hostname/FQDN to the ENC script, like:
/home/ocean/puppet/conf/bce_puppet_bns web00.xxx.com
Or how can I get the agent's hostname/FQDN in the ENC script?
Don't.
Don't use any info other than $clientcert passed from the agent.
Don't share certificates among different agents.
There are deeply rooted assumptions in Puppet that each agent node has an individual certificate. You will wreak havoc in your infrastructure by trying such stunts.
For example, PuppetDB data is usually grouped by owning agents' certnames. This data will become inconsistent quickly with all agents calling themselves the same, but being quite different of course.
Ensure the puppet master's config says this:
[master]
node_name = facter
Alter auth.conf so that all the sections allow the "agent.puppet.com" cert, like this:
# allow nodes to retrieve their own catalog
path ~ ^/catalog/([^/]+)$
method find
allow $1
allow agent.puppet.com
# allow nodes to retrieve their own node definition
path ~ ^/node/([^/]+)$
method find
allow $1
allow agent.puppet.com
# allow all nodes to access the certificates services
path /certificate_revocation_list/ca
method find
allow *
# allow all nodes to store their own reports
path ~ ^/report/([^/]+)$
method save
allow $1
allow agent.puppet.com
That's just the puppet master <=> client part; Felix is right that if you are using PuppetDB, that would have to be altered too.
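For what it's worth, an ENC script is just an executable that receives the node name as its first argument and prints YAML on stdout; with node_name = facter set as above, that argument is the fact-reported hostname instead of the shared certname. A minimal sketch (the class name is hypothetical):
#!/bin/bash
# bce_puppet_bns (sketch): with node_name = facter, $1 is e.g. web00.xxx.com
node="$1"
cat <<EOF
classes:
  - base
environment: production
EOF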

Apache change CGI interpreter on Windows

I'm trying to run my JScript files and return the result to the client using CGI. But I can't set the first line of my script to #!/usr/bin/cscript.exe, because JScript does not support comments starting with #, and I get an error.
Question: How can I set the path to my CGI interpreter without #!/usr/bin/cscript.exe in the first line of my script?
From my rather dated httpd.conf:
# However, Apache on Windows allows either the Unix behavior above, or can
# use the Registry to match files by extention. The command to execute
# a file of this type is retrieved from the registry by the same method as
# the Windows Explorer would use to handle double-clicking on a file.
# These script actions can be configured from the Windows Explorer View menu,
# 'Folder Options', and reviewing the 'File Types' tab. Clicking the Edit
# button allows you to modify the Actions, of which Apache 1.3 attempts to
# perform the 'Open' Action, and failing that it will try the shebang line.
# This behavior is subject to change in Apache release 2.0.
#
# Each mechanism has its own specific security weaknesses, from the means
# to run a program you didn't intend the website owner to invoke, and the
# best method is a matter of great debate.
#
# To enable this Windows-specific behavior (and therefore -disable- the
# equivalent Unix behavior), uncomment the following directive:
#
#ScriptInterpreterSource registry
So I enabled the ScriptInterpreterSource feature, checked:
ftype JSFile
JSFile=%SystemRoot%\System32\CScript.exe "%1" %*
and used c:\programme\xampp\cgi-bin\jscgi.js containing:
WScript.Echo("Content-Type: text/html\n");
WScript.Echo("OK:", WScript.ScriptFullName, new Date());
successfully. I did not touch other settings like AddHandler, directory, or
ScriptAlias, and I just tested phpinfo.php and printenv.pl to see if this
change wrecked my installation blatantly - no.
You should be much more prudent.
Update wrt comment:
According to the 2.4 docs (search for "ScriptInterpreterSource") the directive is still valid. Are you sure the apache user account associates .js files with cscript.exe?
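Putting it together, the relevant httpd.conf lines would look something like this (a sketch; the cgi-bin path is the one from the example above):
# Look up the CGI interpreter in the Windows registry instead of
# requiring a #! line in the script itself.
ScriptInterpreterSource Registry
ScriptAlias /cgi-bin/ "C:/programme/xampp/cgi-bin/"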

Passing parameters to a SSH client to execute a ForceCommand with parameters

I'm having trouble passing command parameters remotely to a "ForceCommand" program in ssh.
In my remote server I have this configuration in sshd_config:
Match User user_1
ForceCommand /home/user_1/user_1_shell
The user_1_shell program limits the commands the user can execute, in this case, I only allow the user to execute "set_new_mode param1 param2". Any other commands will be ignored.
So I expect that when a client logs in via ssh such as this one:
ssh user_1@remotehost "set_new_mode param1 param2"
When I try it, the user_1_shell program seems to be executed, but the parameter string doesn't seem to be passed.
Maybe, I should be asking, does ForceCommand actually support this?
If yes, any suggestions on how I could make it work?
Thanks.
I found the answer. The remote server captures the parameter string and saves it in "$SSH_ORIGINAL_COMMAND" environment variable.
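So the forced program can dispatch on that variable itself. A minimal sketch of user_1_shell (the path to set_new_mode is hypothetical):
#!/bin/sh
# user_1_shell (sketch): allow only "set_new_mode param1 param2"
case "$SSH_ORIGINAL_COMMAND" in
  "set_new_mode "*)
    set -- $SSH_ORIGINAL_COMMAND  # word-split the original command line
    exec /usr/local/bin/set_new_mode "$2" "$3"
    ;;
  *)
    echo "command not allowed" >&2
    exit 1
    ;;
esac
But note the security caveats in the next answer before trusting SSH_ORIGINAL_COMMAND.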
As already answered, the commandline sent from the ssh client is put into the SSH_ORIGINAL_COMMAND environment variable; only the ForcedCommand is executed.
If you use the information in SSH_ORIGINAL_COMMAND in your ForcedCommand you must take care of security implications. An attacker can augment your command with arbitrary additional commands by sending e.g. ; rm -rf / at the end of the commandline.
This article shows a generic script which can be used to lock down allowed parameters. It also contains links to relevant information.
The described method (the 'only' script) works as follows:
1. Make the 'only' script the ForcedCommand, and give it the allowed command as its parameter. Note that more than one allowed command may be used.
2. Put a .onlyrules file into the home directory of user_1 and fill it with rules (regular expressions) which are matched against the command line sent by the ssh client.
Your example would look like:
Match User user_1
ForceCommand /usr/local/bin/only /home/user_1/user_1_shell
and if, for example, you want to allow as parameters only 'set_new_mode' with exactly two alphanumeric arbitrary parameters the .onlyrules file would look like this:
\:^/home/user_1/user_1_shell set_new_mode [[:alnum:]]\{1,\} [[:alnum:]]\{1,\}$:{p;q}
Note that for sending the command to the server you must use the whole commandline:
/home/user_1/user_1_shell set_new_mode param1 param2
'only' looks up the command on the server and uses its name for matching the rules. If any of these checks fail, the command is not run.

Cron Job Log - How to Log?

I want to know how I can see exactly what the cron jobs are doing on each execution. Where are the log files located? Or can I send the output to my email? I have set the email address to send the log when the cron job runs but I haven't received anything yet.
* * * * * myjob.sh >> /var/log/myjob.log 2>&1
will log all output from the cron job to /var/log/myjob.log
You might use mail to send emails. Most systems will send unhandled cron job output by email to root or the corresponding user.
By default cron logs to /var/log/syslog so you can see cron related entries by using:
grep CRON /var/log/syslog
https://askubuntu.com/questions/56683/where-is-the-cron-crontab-log
Here is my code:
* * * * * your_script_fullpath >> your_log_path 2>&1
There are at least three different types of logging:
The logging BEFORE the program is executed, which only logs IF the cronjob TRIED to execute the command. That one is located in /var/log/syslog, as already mentioned by @Matthew Lock.
The logging of errors AFTER the program tried to execute, which can be sent to an email or to a file, as mentioned by @Spliffster. I prefer logging to a file, because with email you then have a NEW source of problems: checking whether email sending and reception are working properly. Sometimes they are, sometimes they're not. For example, on a simple desktop machine on which you are not interested in configuring an SMTP server, you may prefer logging to a file:
* * * * * COMMAND_ABSOLUTE_PATH > /ABSOLUTE_PATH_TO_LOG 2>&1
I would also consider checking the permissions of /ABSOLUTE_PATH_TO_LOG, and running the command with that user's permissions, just for verification, while you test whether it might be a potential source of problems.
The logging of the program itself, with its own error-handling and logging for tracking purposes.
There are some common sources of problems with cronjobs:
* The ABSOLUTE PATH of the binary to be executed. When you run it from your
shell, it might work, but the cron process seems to use another
environment, and hence it doesn't always find binaries if you don't
use the absolute path.
* The LIBRARIES used by a binary. This is more or less the same as the previous point: make sure that, if you simply put the NAME of the command, it refers to exactly the binary that uses the very same libraries; better yet, check that the binary you refer to by absolute path is the very same one you use in the console directly. Binaries can be found using the locate command, for example:
$ locate python
Be sure that the binary you refer to is the very same binary you call in your shell, or simply test again in your shell using the absolute path that you plan to put in the cronjob.
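A common way to sidestep both points is to set PATH explicitly at the top of the crontab (a sketch; adjust the directory list to your system):
# cron runs with a minimal environment, so declare PATH yourself
PATH=/usr/local/bin:/usr/bin:/bin
* * * * * myjob.sh >> /var/log/myjob.log 2>&1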
Another common source of problems is the syntax in the cronjob. Remember that there are special characters you can use for lists (commas), ranges (dashes), increments of ranges (slashes), etc. Take a look:
http://www.softpanorama.org/Utilities/cron.shtml
On Ubuntu you can enable a cron.log file to contain just the CRON entries.
Uncomment the line that mentions cron in the /etc/rsyslog.d/50-default.conf file:
# Default rules for rsyslog.
#
# For more information see rsyslog.conf(5) and /etc/rsyslog.conf
#
# First some standard log files. Log by facility.
#
auth,authpriv.*                 /var/log/auth.log
*.*;auth,authpriv.none          -/var/log/syslog
#cron.*                         /var/log/cron.log
Save and close the file and then restart the rsyslog service:
sudo systemctl restart rsyslog
You can now see cron log entries in its own file:
sudo tail -f /var/log/cron.log
Sample output:
Jul 18 07:05:01 machine-host-name CRON[13638]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
However, you will not see more information about what scripts were actually run inside /etc/cron.daily or /etc/cron.hourly, unless those scripts direct output to the cron.log (or perhaps to some other log file).
If you want to verify if a crontab is running and not have to search for it in cron.log or syslog, create a crontab that redirects output to a log file of your choice - something like:
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
30 2 * * 1 /usr/local/sbin/certbot-auto renew >> /var/log/le-renew.log 2>&1
Steps taken from: https://www.cyberciti.biz/faq/howto-create-cron-log-file-to-log-crontab-logs-in-ubuntu-linux/
In case you're running some command with sudo, it won't allow it. Sudo needs a tty.
None of the above worked for me, so I had to add MAILTO=[my email] at the top of the crontab file in /etc/cron.d/, and then I found the answer: my cron command was producing errors.
cron already sends the standard output and standard error of every job it runs by mail to the owner of the cron job.
You can use MAILTO=recipient in the crontab file to have the emails sent to a different account.
For this to work, you need to have mail working properly. Delivering to a local mailbox is usually not a problem (in fact, chances are ls -l "$MAIL" will reveal that you have already been receiving some) but getting it off the box and out onto the internet requires the MTA (Postfix, Sendmail, what have you) to be properly configured to connect to the world.
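For example, a crontab that mails a job's output to a non-root account could look like this (a sketch; the address and script path are placeholders):
MAILTO=user@example.com
# m h dom mon dow command
0 3 * * * /usr/local/sbin/nightly-backup.sh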
