For those who are using Ansible and other configuration management tools from Bamboo, the Custom Deployments plugin might come in handy:
https://marketplace.atlassian.com/plugins/com.valens.deployments.bamboo-custom-deployments/server/overview
Sometimes you would like to deploy only to certain machines, or only a particular piece of software.
Bamboo has had variables and customized build plan executions since the beginning, but what about deployments? I often wanted to deploy and pass new parameters to the deployment, either to limit the number of hosts affected in a large cluster, to deploy single components, or to change the target of a deployment.
While a regular deployment will update the entire software release, I wanted to use at least two variables in my Ansible tasks:
- HOSTS: all
- TAGS:
and, most importantly, customize them without having to edit the environment all the time.
The Custom Deployments for Bamboo plugin enables this scenario: assuming you use the regular flow to create a release, you can select a version, fill in your variables and deploy.
In terms of security, you can also filter which variables you would like to expose to the teams: under Bamboo security settings you can define a regular expression, and only matching variables will be shown.
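For illustration, a script task in the deployment environment could forward these variables to Ansible roughly like this (a sketch only: the playbook name site.yml is an assumption, and Bamboo exposes deployment variables to script tasks as environment variables of the form bamboo_<NAME>):
#!/bin/bash
# Forward the custom deployment variables to ansible-playbook:
# --limit restricts the run to a subset of hosts, --tags to certain components.
ARGS=()
[ -n "${bamboo_HOSTS}" ] && ARGS+=(--limit "${bamboo_HOSTS}")
[ -n "${bamboo_TAGS}" ] && ARGS+=(--tags "${bamboo_TAGS}")
ansible-playbook site.yml "${ARGS[@]}"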
Tuesday, November 22, 2016
Thursday, September 15, 2016
Deploy using Bamboo and Ansible, getting started
In order to get started with Ansible and Bamboo (talking about the latest versions, which include the deployment management section) there is very little to do; at least this is what I did:
- get your Bamboo up and running, of course
- prepare one agent with Ansible (pip install ansible, or follow another installation guide)
- prepare a build and locate your build artifacts
- share your artifacts so they are picked up by the release management module of Bamboo
- create a deployment project
Now that you have your artifacts and can prepare software releases, it is time to move to Ansible:
- prepare a repository to store your Ansible code and create the usual folders for Ansible (see the layout sketch after this list)
- commit your playbook, roles, vaults, inventories
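For reference, the usual layout I mean looks roughly like this (the names follow common Ansible conventions and are illustrative, not a requirement):
site.yml          # the main playbook
roles/            # your roles
inventories/      # one inventory file per environment, e.g. qa, production
group_vars/       # shared group variables
vaults/           # encrypted variable files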
As a next step we put things together; the release and the Ansible scripts meet at the environment level:
- check out the Ansible code
- add a task to download the artifacts into a subfolder called "files"
- invoke Ansible - any command that copies files to a remote destination will search under "files" and do the shipping for you (a sketch follows this list).
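Concretely, the script task of the environment could be as small as this (a sketch; the inventory path and playbook name are assumptions from the layout above):
#!/bin/bash
# Runs from the checkout directory; the previous Bamboo task has already
# downloaded the release artifacts into ./files, where Ansible copy tasks look.
ansible-playbook -i inventories/qa site.yml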
To make life easier you can write a small Ansible role that strips version numbers from the release files so that roles can reference them more easily; for example my.software-1.3.rpm could become my.software.rpm.
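The same renaming can also be done in a plain shell step; a minimal sketch, assuming the artifacts land under files/ and follow a name-<version>.rpm scheme:
#!/bin/bash
# Strip the trailing "-<version>" from every rpm, e.g.
# files/my.software-1.3.rpm -> files/my.software.rpm
for f in files/*-[0-9]*.rpm; do
    mv "$f" "$(echo "$f" | sed -E 's/-[0-9][0-9.]*\.rpm$/.rpm/')"
done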
And you are done; now the Ansible fun starts. Personally, I create one Bamboo environment for each inventory file and name them so I can easily trace which Bamboo environment is tied to which inventory.
In case you have sensitive information, use vaults to encrypt files in your Ansible repository; you can keep the password in a file directly on the agent.
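For example, encrypting a secrets file once and letting the agent-side password file unlock it at deploy time could look like this (the paths are illustrative):
# encrypt once and commit the encrypted file to the Ansible repository
ansible-vault encrypt group_vars/all/secrets.yml --vault-password-file ~/.vault_pass
# at deploy time the same password file decrypts it transparently
ansible-playbook -i inventories/qa site.yml --vault-password-file ~/.vault_pass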
Sunday, August 7, 2016
Deployments - Ansible and multiple clusters of servers
I have been digging on the internet for a solution to have Ansible variables per cluster of servers (an environment in Bamboo); by cluster variables I mean some sort of inventory vars, but a bit easier to maintain.
You just put the code (inventory_vars.py) in the plugin folder, e.g. plugins/vars_plugins (where the plugins folder is at the same level as roles).
The plugin will allow you to have the following structure:
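Something along these lines, assuming the inventory files live in a folder one level below the repository root (the plugin resolves cluster_vars relative to the parent of the inventory's folder); the group and host names are just examples:
inventories/
    qa                        # an inventory file
cluster_vars/
    qa/                       # folder named exactly like the inventory file
        group_vars/
            webservers.yml    # vars for group "webservers", for "qa" only
        host_vars/
            web01.yml         # vars for host "web01", for "qa" only
roles/
plugins/
    vars_plugins/
        inventory_vars.py     # the plugin code below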
While the plugin is conceptually very close to inventory variables, I found the original concept complicated to maintain due to the inventory file format; the plugin lets you use regular YAML dictionaries instead. The fact that you can have group vars or host vars per cluster is more of a nice-to-have for me.
In order to use the plugin you need a folder under cluster_vars with exactly the same name as the inventory file.
E.g. if "qa" is my inventory file name, I place a folder with the same name under cluster_vars. There are a few print statements to help with debugging in case you run into trouble; if they annoy you, just comment them out.
# (c) 2016, Iulius Hutuleac
import os

from ansible import errors
from ansible.parsing.dataloader import DataLoader
from ansible.utils.vars import merge_hash
import ansible.constants as C


def vars_file_matches(f, name):
    # A vars file matches if the basename of the file equals 'name',
    # either as-is or with a .yml/.yaml extension.
    return os.path.basename(f) in (name, name + '.yml', name + '.yaml')


def vars_files(vars_dir, name):
    # Recursively collect every file under vars_dir whose name matches.
    files = []
    try:
        candidates = [os.path.join(vars_dir, f) for f in os.listdir(vars_dir)]
    except OSError:
        return files
    for f in candidates:
        if os.path.isfile(f) and vars_file_matches(f, name):
            files.append(f)
        elif os.path.isdir(f):
            files.extend(vars_files(f, name))
    return sorted(files)


class VarsModule(object):

    def __init__(self, inventory):
        self.inventory = inventory
        self.group_cache = {}

    def _load_cluster_vars(self, subdir, name):
        # Shared loader for cluster_vars/<inventory name>/{group_vars,host_vars}.
        inventory = self.inventory
        if inventory.basedir() is None:
            # could happen when the inventory is passed in via the API
            return {}
        inventory_name = os.path.basename(inventory.src())
        # cluster_vars sits one level above the folder holding the inventory file
        basedir = os.path.join(inventory.basedir(), "..")
        vars_dir = os.path.join(basedir, "cluster_vars", inventory_name, subdir)
        matches = vars_files(vars_dir, name)
        print("Files for '%s': %s" % (name, matches))  # debug, comment out if noisy
        if len(matches) > 1:
            raise errors.AnsibleError("Found more than one file for '%s': %s"
                                      % (name, matches))
        results = {}
        loader = DataLoader()
        for path in matches:
            data = loader.load_from_file(path)
            if not isinstance(data, dict):
                raise errors.AnsibleError("%s must be stored as a dictionary/hash" % path)
            if C.DEFAULT_HASH_BEHAVIOUR == "merge":
                # let the file content override earlier results if needed
                results = merge_hash(results, data)
            else:
                results.update(data)
        return results

    def get_group_vars(self, group, vault_password=None):
        """ Get group specific variables. """
        return self._load_cluster_vars("group_vars", group.name)

    def get_host_vars(self, host, vault_password=None):
        """ Get host specific variables. """
        # depending on the Ansible version, host may arrive as a Host object
        name = getattr(host, 'name', host)
        return self._load_cluster_vars("host_vars", name)

    def run(self, host, vault_password=None):
        print("Requested files for host ", host)  # debug
        return {}
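Depending on how you run Ansible, you may also need to point it at the plugin directory; one way, assuming an ansible.cfg at the repository root, is a sketch like this:
[defaults]
# search for vars plugins inside the repository, next to roles
vars_plugins = ./plugins/vars_plugins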
Saturday, August 6, 2016
IntelliJ, Ansible, Vaults and Windows ...
How to integrate ansible-vault in IntelliJ under Windows
Software prerequisites:
- install Cygwin (I placed it directly on the C: drive)
- install pip inside Cygwin: easy_install pip
- inside Cygwin, install ansible-vault using pip: pip install ansible-vault
Now we have the tools in place, we can switch to the setup:
- create a password file, for example under C:\Users\<your user>\ansible-vault-pass:
#!/bin/bash
echo -n "MyStrongPasswordSomething"
As we are on Windows (under Cygwin), Ansible will detect the file as executable and run it in order to obtain the password.
- lastly, create the two new external tools in IntelliJ, configured as follows:
Program:
c:\cygwin64\bin\bash
Parameters example:
-c "/bin/ansible-vault -vvv encrypt $FileName$ --vault-password-file /cygdrive/c/Users/XX/ansible-vault-pass"
Working directory:
$FileDir$
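The second external tool is presumably the decrypt counterpart - an identical setup with decrypt instead of encrypt (a sketch mirroring the values above):
Program:
c:\cygwin64\bin\bash
Parameters example:
-c "/bin/ansible-vault -vvv decrypt $FileName$ --vault-password-file /cygdrive/c/Users/XX/ansible-vault-pass"
Working directory:
$FileDir$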
Tuesday, August 2, 2016
DSL plugin security change
For security reasons I have added the possibility to disable DSL processing when handling source code repositories.
If you would like to prevent DSL execution in a specific job, in order to avoid arbitrary Bamboo task execution, just disable it there.
Bamboo Template Plans compatibility change
Starting with version 3.1.4 I have made the necessary changes to support older versions of Bamboo, back to 5.6.0!
Sunday, July 31, 2016
How to check Groovy binding contents in your DSL
A simple code snippet to add to the Groovy script:
binding.variables.each {
    // mask any binding entry whose key looks like a password before logging;
    // note: contains() is used instead of matches("password"), which would
    // only catch keys that are exactly "password"
    if (it.key.toString().toLowerCase().contains("password")) {
        logger.addBuildLogEntry("Key: " + it.key + " value ***********")
    } else {
        logger.addBuildLogEntry("Key: " + it.key + " value " + it.value)
    }
}
Sample output:
simple 31-Jul-2016 13:09:10 DSL Pre Build Processing starting...
simple 31-Jul-2016 13:09:10 on DESKTOP-6QJ44JS
simple 31-Jul-2016 13:09:11 Key: logger value com.atlassian.bamboo.build.logger.BuildLoggerImpl@1102b86c
simple 31-Jul-2016 13:09:11 Key: dslHelper value org.valens.utils.DslHelper@32ee93bb
simple 31-Jul-2016 13:09:11 Key: planManager value com.atlassian.bamboo.plan.PlanManagerImpl@6c647f5
simple 31-Jul-2016 13:09:11 Key: environmentService value com.atlassian.bamboo.deployments.environments.service.EnvironmentServiceImpl@4a39933
simple 31-Jul-2016 13:09:11 Key: buildContext value com.atlassian.bamboo.v2.build.BuildContextImpl@b1f71b01
simple 31-Jul-2016 13:09:11 Key: SDKPATH value C:\Atlassian\atlassian-plugin-sdk-6.2.9\bin
simple 31-Jul-2016 13:09:11 Key: bambooDelimiterParsingDisabled_0 value true
simple 31-Jul-2016 13:09:11 Key: filter_pattern_option_0 value none
simple 31-Jul-2016 13:09:11 Key: filter_pattern_regex_0 value
simple 31-Jul-2016 13:09:11 Key: changeset_filter_pattern_regex_0 value
simple 31-Jul-2016 13:09:11 Key: repository_common_quietPeriod_enabled_0 value false
simple 31-Jul-2016 13:09:11 Key: repository_common_quietPeriod_period_0 value 10
simple 31-Jul-2016 13:09:11 Key: repository_common_quietPeriod_maxRetries_0 value 5
simple 31-Jul-2016 13:09:11 Key: repository_bitbucket_repository_0 value iuliushutuleac/bamboo-ansible-tasks
simple 31-Jul-2016 13:09:11 Key: repository_bitbucket_scm_0 value GIT
simple 31-Jul-2016 13:09:11 Key: repository_bitbucket_branch_0 value master
simple 31-Jul-2016 13:09:11 Key: repository_bitbucket_showAdvancedOptions_0 value false
simple 31-Jul-2016 13:09:11 Key: repository_bitbucket_commandTimeout_0 value 180
simple 31-Jul-2016 13:09:11 Key: repository_bitbucket_verbose_logs_0 value false
simple 31-Jul-2016 13:09:11 Key: repository_bitbucket_fetch_whole_repository_0 value false
simple 31-Jul-2016 13:09:11 Key: repository_bitbucket_git_useShallowClones_0 value false
simple 31-Jul-2016 13:09:11 Key: repository_bitbucket_git_useSubmodules_0 value false
simple 31-Jul-2016 13:09:11 Key: repository_bitbucket_git_useRemoteAgentCache_0 value false
Tuesday, July 26, 2016
Build templating - getting started - the quick guide
To get started there are two steps to be performed:
- pick a plan which you would like to be a template and tick the "Is Template" flag. This will make the plan appear in the template dropdown of other plans.
- once you have the template enabled, go to the configuration of the other plan, on the Miscellaneous page.
Here you should find a "Template list" section with a dropdown. Select the template name from the dropdown and tick what you would like to be templated.
The replication can be started in a few ways (the result is the same, but depending on your permissions in Bamboo one may be more convenient than another):
- by making changes to the template (saving on the Miscellaneous page)
- by going to the template operations menu
- from System administration / Add-ons / replication operations
As a test I would suggest triggering the replication without any server pause. If you wait a bit you should see the progress in the replication operations.
One important aspect: name your tasks! For templating to work properly, make sure you give your tasks unique names; this ensures the engine can identify which task from the template has been copied over, and that the state of the tasks is maintained between copies.
Another important aspect: replication is not done on the fly, it needs to be triggered.
Monday, July 18, 2016
Task templating process - advanced topics
Due to the complexity of the repository definitions in Bamboo I have implemented essentially two merging models:
- without task reordering - a quite simple case where the merging is split in two phases: first remove almost all tasks from the build configuration, keeping checkout tasks, then bring all other tasks over from the template. This method works fine on simple jobs, where you have to template jobs that check out code at the start.
- with task reordering - in this case the current tasks are backed up, everything is pulled from the template, and when the engine finds a checkout task it tries to find its configuration in the backup.
In terms of timeline, the first method was the original merging solution; the second was implemented only recently. To switch between merge engines, tick the "Attempt to maintain task order" checkbox under configuration.
Saturday, July 9, 2016
Bamboo Templated Plans gets Deployment support; simple variables control templating this time.
New features that will help you keep an eye on your templating:
- reports regarding which templates are used
- live log review for replication progress
- trigger replication from main menu
- secure replication menu from administration page