
What is a CI/CD Pipeline? Tools, Benefits, and an Example
Hostman Team
Technical writer
CI/CD
08.11.2024
Reading time: 10 min

The CI/CD pipeline, short for "continuous integration and continuous deployment (or delivery) pipeline," is a specialized practice for automating the delivery of new software versions to users throughout the development lifecycle.

In simple terms, the CI/CD pipeline automates delivering software updates to users incrementally rather than dividing the product into distinct versions that require long waits between releases. Instead, updates are delivered gradually with each iteration of the codebase. While the stages of development, testing, deployment, and release can be done manually, the true value of the CI/CD pipeline lies in automation.

Diving Deeper into CI/CD

As modern user applications (like taxi services, food delivery, or rental platforms) become central to many companies, the speed of code releases (or app updates) has become a competitive advantage. To enable the fastest possible delivery of a digital product, two core components are essential:

  1. Continuous Integration (CI). Developers frequently merge changes into the main branch using a version control system (such as Git). All changes undergo automated testing, improving the product incrementally rather than in large, disruptive updates. Imagine a development timeline as a line with update points spaced evenly along it, showing consistent progress rather than sudden, clustered changes.

  2. Continuous Deployment (CD). This further extends continuous integration by automatically deploying new code changes to the production environment after the build stage. The aim is clear: to reduce developer workload, minimize human error, and maintain a steady release process.

Tools within the CI/CD pipeline may include code compilers, analyzers, unit tests, data security systems, and a variety of other components useful at all stages of product release.

It’s worth noting that CI/CD is the foundation of DevOps methodology, which automates software build, configuration, and deployment. This approach promotes close collaboration between development and operations teams, effectively integrating their workflows and fostering a culture of streamlined product creation and support.

Frequent, incremental code testing reduces the number of errors and bugs, providing users with the best possible experience. Iterative software development and delivery also accelerate the product’s return on investment and make it easier to create a Minimum Viable Product (MVP). As a result, development costs are reduced, and hypotheses can be tested quickly.

Writing small sections of code alongside automated tests also lessens the cognitive load on developers. Each pipeline stage follows a strict sequence: development comes first, and deployment to the production environment comes last. Testing happens in the later stages, while static code analysis occurs earlier. Notification systems often operate between stages, sending status updates about the pipeline to messaging platforms or email.

Most importantly, the entire process runs automatically. Depending on the specific tool and developer's needs, the pipeline can be triggered with a console command or a timer.
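The gating behavior described above can be sketched in a few lines of Python. This is a toy illustration, not a real CI/CD tool: the stage names and pass/fail results below are invented, and each stage is reduced to a function that returns success or failure.

```python
# A toy pipeline runner: stages execute in a fixed order, and a failure
# at any stage halts everything that follows (hypothetical stage names).

def run_pipeline(stages):
    """Run (name, func) pairs in order; stop at the first failure."""
    completed = []
    for name, stage in stages:
        if not stage():
            print(f"Stage '{name}' failed - pipeline stopped.")
            return completed
        print(f"Stage '{name}' passed.")
        completed.append(name)
    return completed

stages = [
    ("static-analysis", lambda: True),
    ("build",           lambda: True),
    ("unit-tests",      lambda: False),  # a failing test gates deployment
    ("deploy",          lambda: True),   # never reached in this run
]

result = run_pipeline(stages)
print(result)  # deploy is absent: ['static-analysis', 'build']
```

Real tools implement exactly this contract: a red unit-test stage means the deploy stage never runs.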

CI/CD Tools

Many popular Git repository hosting platforms offer built-in systems for streamlining CI/CD processes, such as GitHub Actions, GitLab CI/CD, and Bitbucket Pipelines. Standalone and cloud options include Jenkins, AWS CodePipeline, and Azure DevOps. Each has its own pros and cons, so the choice of tool often comes down to project requirements and team preference. Here are a few tools designed explicitly for organizing CI/CD pipelines:

  • Jenkins is a free, open-source automation server built specifically for continuous integration. Written in Java, Jenkins runs on Windows, macOS, and Unix-like operating systems such as Linux.

  • CircleCI is a CI/CD tool delivered as a web service, enabling complete pipeline automation from code creation to testing and deployment. It integrates with GitHub, GitHub Enterprise, and Bitbucket, triggering builds whenever new code is committed to the repository. Builds run using containers or virtual machines, with automatic parallelization of the pipeline across multiple threads. CircleCI is a paid service, but it has a free option with a single job without parallelization. Open-source projects can receive three additional free containers.

  • TeamCity is a build management and continuous integration server from JetBrains geared toward DevOps teams. It runs in a Java environment, integrates with Visual Studio and JetBrains IDEs, can be installed on both Windows and Linux servers, and has first-class support for .NET projects.

  • Bamboo is a continuous integration server that automates application release management. Developed by Atlassian, Bamboo covers the entire process from build, functional testing, and versioning to release tagging, deployment, and activation of new versions in the production environment.

Stages of CI/CD

The stages of a CI/CD pipeline can vary depending on the product and the specific development team. Still, there is a generally standard sequence of actions that nearly every pipeline follows. Some stages can be skipped or done manually, but this is considered poor practice. Typically, a pipeline can be outlined in seven main steps:

  1. Trigger. The pipeline should start automatically whenever new code is committed to the repository. There are multiple ways to achieve this. For example, a CI/CD tool (such as Jenkins) may "poll" the Git repository, or a "hook" (like Git Webhooks) could send a push notification to the CI/CD tool whenever a developer pushes new code. While manual triggers are possible, automated triggers reduce human error and provide greater reliability.

  2. Code Verification. The CI/CD tool pulls the code from the repository (via a hook or poll), along with details on which commit triggered the pipeline and the steps to be executed. At this stage, static code analysis tools may run to detect errors, halting the pipeline if any issues are found. If everything checks out, the CI/CD process moves forward.

  3. Code Compilation. The CI/CD tool must have access to all necessary build tools for code compilation. For instance, tools like Maven or Gradle might be used for Java applications. Ideally, the build should occur in a clean environment; Docker containers are often used for this purpose.

  4. Unit Testing. A critical part of the pipeline, unit testing involves running specialized libraries for each programming language to test the compiled application. If tests are completed successfully, the pipeline proceeds to the next step. Comprehensive test coverage is essential to ensure all functions and components are tested. Tests should be updated and improved as the codebase grows.

  5. Packaging. Once all tests have passed, the application is packaged into a final "build" for delivery. For Java code, this might be a JAR file, while for Dockerized applications, a Docker image may be created.

  6. Acceptance Testing. This stage verifies that the software meets all specified requirements, either client-specific or based on the developer’s own standards. Acceptance tests, like unit tests, are automated. Requirements and expected outcomes are specified in a format that the system can interpret, allowing them to be automatically tested repeatedly. For example, using a tool like Selenium, functional aspects of the application can be tested, such as verifying whether a user can add a product to a cart on an e-commerce site. Acceptance testing saves time by automating what would otherwise be manual tests.

  7. Delivery and Deployment. At this final stage, the product is ready to be deployed to the client’s production environment. For continuous deployment, a production environment is necessary. This might be a public cloud with its own API or a tool like Spinnaker, which integrates with Kubernetes for container orchestration and works with popular cloud providers such as Google Cloud Platform, AWS, Microsoft Azure, and Oracle Cloud.

This is the endpoint of the pipeline. The next time a developer commits new code to the repository, the process will begin again.
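The seven steps above map naturally onto the stage/job structure of most CI/CD tools. As a hedged sketch, here is how they might be laid out in a GitLab-style YAML file; every script command is a placeholder echo, and a real project would substitute its own analysis, build, test, and deploy commands:

```yaml
stages: [verify, build, test, package, acceptance, deploy]

lint:
  stage: verify
  script:
    - echo "run static analysis here"

compile:
  stage: build
  script:
    - echo "invoke Maven, Gradle, or another build tool here"

unit-tests:
  stage: test
  script:
    - echo "run the unit-test suite here"

package-app:
  stage: package
  script:
    - echo "produce a JAR, Docker image, or other artifact here"

acceptance-tests:
  stage: acceptance
  script:
    - echo "run Selenium or other acceptance tests here"

deploy-prod:
  stage: deploy
  script:
    - echo "push the artifact to the production environment here"
  environment: production
```

Stages run in the listed order, jobs within a stage run in parallel, and a failure in any stage stops the pipeline.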

CI/CD in Practice

Typically, when adding a new feature to a product, a separate branch is created in the version control system (such as Git). Code is written in this branch and tested locally. Once the feature is ready, the developer makes a pull request and asks a senior colleague to review the code before merging it into the main branch. Then, the updated codebase is deployed to the dev environment. All of this is done manually.

If you spend 25 hours on development and 2 hours on deployment, that’s a reasonable ratio. However, if you spend 20 minutes creating a feature and 2 hours deploying it, that’s a problem—your time isn’t being used efficiently.
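These ratios are easy to quantify. A minimal calculation, using the numbers from the example above:

```python
# Share of total time consumed by deployment in each scenario.
def deploy_share(dev_minutes, deploy_minutes):
    return deploy_minutes / (dev_minutes + deploy_minutes)

slow_feature = deploy_share(25 * 60, 2 * 60)  # 25 h development, 2 h deployment
fast_feature = deploy_share(20, 2 * 60)       # 20 min development, 2 h deployment

print(f"{slow_feature:.0%}")  # about 7% of total time spent deploying
print(f"{fast_feature:.0%}")  # about 86% of total time spent deploying
```

In the second scenario, deployment eats roughly 86% of the total time, which is exactly the overhead a pipeline is meant to eliminate.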

At this point, you have two options:

  1. Commit changes to the main branch less frequently, building up larger pull requests. However, reviewing large chunks of code is more challenging.

  2. Set up a CI/CD pipeline to automate building, testing, and deployment.

With the second approach, the process is strictly standardized—any feature only makes it into the final product (main branch) once it has passed through every stage of the pipeline, with no exceptions.

Although this article isn’t intended to teach any specific CI/CD tool, let’s look at a simple example using GitLab CI/CD to illustrate how a pipeline is set up in practice.

Imagine you already have a GitLab repository with project code and want to automate the build and deployment process. In GitLab, automated processes are handled by GitLab Runner—an agent (typically running on a separate machine, VM, or container) that executes pipeline jobs.

The runner's work is described in a YAML file (placed in the repository root as .gitlab-ci.yml) containing detailed instructions for GitLab CI/CD. In this file, you define:

  • The structure and sequence of jobs the runner should execute
  • Branch-based conditions for decision-making during pipeline execution

Here is a basic example of such a file:

build-job:
  stage: build
  script:
    - echo "Hello, $GITLAB_USER_LOGIN!"

test-job1:
  stage: test
  script:
    - echo "This job tests something"

test-job2:
  stage: test
  script:
    - echo "This job tests something, but takes more time than test-job1."
    - echo "After the echo commands complete, it runs the sleep command for 20 seconds"
    - echo "which simulates a test that runs 20 seconds longer than test-job1"
    - sleep 20

deploy-prod:
  stage: deploy
  script:
    - echo "This job deploys something from the $CI_COMMIT_BRANCH branch."
  environment: production

This pipeline contains four jobs: build-job, test-job1, test-job2, and deploy-prod. Everything after echo outputs messages to the GitLab UI console. GitLab provides predefined variables like $GITLAB_USER_LOGIN and $CI_COMMIT_BRANCH, which can be used to display information in the console.

Of course, this pipeline doesn’t perform any actual operations—it only outputs messages to the console. It’s meant to illustrate the structure of a pipeline. The example has three stages: build, test, and deploy, with two jobs executed in the test stage. GitLab’s UI also offers a visual view of the pipeline.
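The example relies on GitLab's default stage list (build, test, deploy). The stages can also be declared explicitly, and the branch-based conditions mentioned earlier are expressed with rules. A sketch, assuming deployment should happen only from a main branch:

```yaml
stages:
  - build
  - test
  - deploy

deploy-prod:
  stage: deploy
  script:
    - echo "This job deploys something from the $CI_COMMIT_BRANCH branch."
  environment: production
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'   # deploy only from the main branch
```

With this rule, commits to feature branches still run the build and test stages, but the deploy job is simply not created.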

[Image: visual view of the pipeline in the GitLab UI. Source: docs.gitlab.com]

As with any CI/CD tool, GitLab has its own documentation, which includes many useful examples and specific guidelines for working with this service. GitHub, for example, offers something similar. Some developers may find it convenient to use a CI/CD tool provided by the same platform hosting their repository, which can simplify the setup.

A Few Tips for CI/CD

  • Version control not only your product’s codebase but also the scripts that define your CI/CD pipeline.
  • Follow the correct sequence of pipeline stages. If you deploy to production before testing, you risk shipping broken code to users.
  • Don’t skip stages in the pipeline, even if it’s tempting. All stages should be executed in sequence. If you have hundreds of tests but a few are blocking the pipeline, fix or remove the problematic tests instead of bypassing them.
  • Avoid manual intervention! Each stage of the pipeline should trigger automatically; otherwise, the DevOps methodology loses its impact.
  • Set up notifications to regularly update you on the CI/CD pipeline status throughout the build, testing, and deployment processes. For example, these notifications could be sent to your messenger app.
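In GitLab CI/CD, for instance, such a notification can be a dedicated job that runs only when something fails. This sketch assumes your messenger's incoming-webhook endpoint is stored in a CI/CD variable named NOTIFY_WEBHOOK_URL (a hypothetical name):

```yaml
notify-failure:
  stage: .post            # built-in stage that runs after all other stages
  when: on_failure        # execute only if an earlier job failed
  script:
    - 'curl -s -X POST -H "Content-Type: application/json" -d "{\"text\": \"Pipeline failed on branch $CI_COMMIT_BRANCH\"}" "$NOTIFY_WEBHOOK_URL"'
```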

Conclusion

This article has covered some general principles of the DevOps methodology, which is grounded in CI/CD pipelines. We looked at several popular tools and services used to automate continuous integration and deployment. While CI/CD tools share many core features, each has unique characteristics. Anyone planning to adopt a DevOps approach in their development processes will need time to get familiar with each tool, understand its nuances, and select the right one.

Similar

Docker

How to Automate Jenkins Setup with Docker

In the modern software development world, Continuous Integration and Continuous Delivery (CI/CD) have become an integral part of the development process. Jenkins, one of the leading CI/CD tools, helps automate application build, testing, and deployment. However, setting up and managing Jenkins can be time-consuming and complex, especially in large projects with many developers and diverse requirements. Docker, containerization, and container orchestration have come to the rescue, offering more efficient and scalable solutions for deploying applications and infrastructure. Docker allows developers to package applications and their dependencies into containers, which can be easily transported and run on any system with Docker installed. Benefits of Using Docker for Automating Jenkins Setup Simplified Installation and Setup: Using Docker to deploy Jenkins eliminates many challenges associated with installing dependencies and setting up the environment. You only need to run a few commands to get a fully functional Jenkins server. Repeatability: With Docker, you can be confident that your environment will always be the same, regardless of where it runs. This eliminates problems associated with different configurations across different servers. Environment Isolation: Docker provides isolation of applications and their dependencies, avoiding conflicts between different projects and services. Scalability: Using Docker and orchestration tools such as Docker Compose or Kubernetes allows Jenkins to be easily scaled by adding or removing agents as needed. Fast Deployment and Recovery: In case of failure or the need for an upgrade, Docker allows you to quickly deploy a new Jenkins container, minimizing downtime and ensuring business continuity. In this article, we will discuss how to automate the setup and deployment of Jenkins using Docker. 
We will cover all the stages, from creating a Docker file and setting up Docker Compose to integrating Jenkins Configuration as Code (JCasC) for automatic Jenkins configuration. As a result, you'll have a complete understanding of the process and a ready-made solution for automating Jenkins in your projects. Prerequisites Before you begin setting up Jenkins with Docker, you need to ensure that you have all the necessary tools and software. In this section, we will discuss the requirements for successfully automating Jenkins and how to install the necessary components. Installing Docker and Docker Compose Docker can be installed on various operating systems, including Linux, macOS, and Windows. Below are the steps for installing Docker on the most popular platforms: Linux (Ubuntu) Update the package list with the command: sudo apt update Install packages for HTTPS support: sudo apt install apt-transport-https ca-certificates curl software-properties-common Add the official Docker GPG key: curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - Add the Docker repository to APT sources: sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" Install Docker: sudo apt install docker-ce Verify Docker is running: sudo systemctl status docker macOS Download and install Docker Desktop from the official website: Docker Desktop for Mac. Follow the on-screen instructions to complete the installation. Windows Download and install Docker Desktop from the official website: Docker Desktop for Windows. Follow the on-screen instructions to complete the installation. Docker Compose is typically installed along with Docker Desktop on macOS and Windows. 
For Linux, it requires separate installation: Download the latest version of Docker Compose: sudo curl -L "https://github.com/docker/compose/releases/download/$(curl -s https://api.github.com/repos/docker/compose/releases/latest | grep -Po '"tag_name": "\K.*?(?=")')/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose Make the downloaded file executable: sudo chmod +x /usr/local/bin/docker-compose Verify the installation: docker-compose --version Docker Hub is a cloud-based repository where you can find and store Docker images. The official Jenkins Docker image is available on Docker Hub and provides a ready-to-use Jenkins server. Go to the Docker Hub website. In the search bar, type Jenkins. Select the official image jenkins/jenkins. The official image is regularly updated and maintained by the community, ensuring a stable and secure environment. Creating a Dockerfile for Jenkins In this chapter, we will explore how to create a Dockerfile for Jenkins that will be used to build a Docker image. We will also discuss how to add configurations and plugins to this image to meet the specific requirements of your project. Structure of a Dockerfile A Dockerfile is a text document containing all the commands that a user could call on the command line to build an image. In each Dockerfile, instructions are used to define a step in the image-building process. The key commands include: FROM: Specifies the base image to create a new image from. RUN: Executes a command in the container. COPY or ADD: Copies files or directories into the container. CMD or ENTRYPOINT: Defines the command that will be executed when the container starts. Basic Dockerfile for Jenkins Let’s start by creating a simple Dockerfile for Jenkins. This file will use the official Jenkins image as the base and add a few necessary plugins. Create a new file named Dockerfile in your project directory. 
Add the following code: FROM jenkins/jenkins:lts RUN jenkins-plugin-cli --plugins workflow-aggregator git EXPOSE 8080 EXPOSE 50000 This basic Dockerfile installs two plugins: workflow-aggregator and git. It also exposes ports 8080 (for the web interface) and 50000 (for connecting Jenkins agents). Adding Configurations and Plugins For more complex configurations, we can add additional steps to the Dockerfile. For example, we can configure Jenkins to automatically use a specific configuration file or add scripts for pre-configuration. Create a jenkins_home directory to store custom configurations. Inside the new directory, create a custom_config.xml file with the required configurations: <?xml version='1.0' encoding='UTF-8'?> <hudson> <numExecutors>2</numExecutors> <mode>NORMAL</mode> <useSecurity>false</useSecurity> <disableRememberMe>false</disableRememberMe> <label></label> <primaryView>All</primaryView> <slaveAgentPort>50000</slaveAgentPort> <securityRealm class='hudson.security.SecurityRealm$None'/> <authorizationStrategy class='hudson.security.AuthorizationStrategy$Unsecured'/> </hudson> Update the Dockerfile as follows: FROM jenkins/jenkins:lts RUN jenkins-plugin-cli --plugins workflow-aggregator git docker-workflow COPY jenkins_home/custom_config.xml /var/jenkins_home/config.xml COPY scripts/init.groovy.d /usr/share/jenkins/ref/init.groovy.d/ EXPOSE 8080 EXPOSE 50000 In this example, we are installing additional plugins, copying the custom configuration file into Jenkins, and adding scripts to the init.groovy.d directory for automatic initialization of Jenkins during its first startup. Docker Compose Setup Docker Compose allows you to define your application's infrastructure as code using YAML files. This simplifies the configuration and deployment process, making it repeatable and easier to manage. Key benefits of using Docker Compose: Ease of Use: Create and manage multi-container applications with a single YAML file. 
Scalability: Easily scale services by adding or removing containers as needed. Convenience for Testing: Ability to run isolated environments for development and testing. Example of docker-compose.yml for Jenkins Let’s create a docker-compose.yml file to deploy Jenkins along with associated services such as a database and Jenkins agent. Create a docker-compose.yml file in your project directory. Add the following code to the file: version: '3.8' services: jenkins: image: jenkins/jenkins:lts container_name: jenkins-server ports: - "8080:8080" - "50000:50000" volumes: - jenkins_home:/var/jenkins_home networks: - jenkins-network jenkins-agent: image: jenkins/inbound-agent container_name: jenkins-agent environment: - JENKINS_URL=http://jenkins-server:8080 - JENKINS_AGENT_NAME=agent - JENKINS_AGENT_WORKDIR=/home/jenkins/agent volumes: - agent_workdir:/home/jenkins/agent depends_on: - jenkins networks: - jenkins-network volumes: jenkins_home: agent_workdir: networks: jenkins-network: This file defines two services: jenkins: The service uses the official Jenkins image. Ports 8080 and 50000 are forwarded for access to the Jenkins web interface and communication with agents. The /var/jenkins_home directory is mounted on the external volume jenkins_home to persist data across container restarts. jenkins-agent: The service uses the Jenkins inbound-agent image. The agent connects to the Jenkins server via the URL specified in the JENKINS_URL environment variable. The agent's working directory is mounted on an external volume agent_workdir. Once you create the docker-compose.yml file, you can start all services with a single command: Navigate to the directory that contains your docker-compose.yml. Run the following command to start all services: docker-compose up -d The -d flag runs the containers in the background. After executing this command, Docker Compose will create and start containers for all services defined in the file. 
You can now check the status of the running containers using the following command: docker-compose ps If everything went well, you should see only the jenkins-server container in the output. Now, let’s set up the Jenkins server and agent. Open a browser and go to http://localhost:8080/. During the first startup, you will see the following message: To retrieve the password, run this command: docker exec -it jenkins-server cat /var/jenkins_home/secrets/initialAdminPassword Copy the password and paste it into the Unlock Jenkins form. This will open a new window with the initial setup. Select Install suggested plugins. After the installation is complete, fill out the form to create an admin user. Accept the default URL and finish the setup. Then, go to Manage Jenkins → Manage Nodes. Click New Node, provide a name for the new node (e.g., "agent"), and select Permanent Agent. Fill in the remaining fields as shown in the screenshot. After creating the agent, a window will open with a command containing the secret for the agent connection. Copy the secret and add it to your docker-compose.yml: environment: - JENKINS_URL=http://jenkins-server:8080 - JENKINS_AGENT_NAME=agent - JENKINS_AGENT_WORKDIR=/home/jenkins/agent - JENKINS_SECRET=<your-secret-here> # Insert the secret here To restart the services, use the following commands and verify that the jenkins-agent container has started: docker-compose downdocker-compose up -d Configuring Jenkins with Code (JCasC) Jenkins Configuration as Code (JCasC) is an approach that allows you to describe the entire Jenkins configuration in a YAML file. It simplifies the automation, maintenance, and portability of Jenkins settings. In this chapter, we will explore how to set up JCasC for automatic Jenkins configuration when the container starts. 
JCasC allows you to describe Jenkins configuration in a single YAML file, which provides the following benefits: Automation: A fully automated Jenkins setup process, eliminating the need for manual configuration. Manageability: Easier management of configurations using version control systems. Documentation: Clear and easily readable documentation of Jenkins configuration. Example of a Jenkins Configuration File First, create the configuration file. Create a file named jenkins.yaml in your project directory. Add the following configuration to the file: jenkins: systemMessage: "Welcome to Jenkins configured as code!" securityRealm: local: allowsSignup: false users: - id: "admin" password: "${JENKINS_ADMIN_PASSWORD}" authorizationStrategy: loggedInUsersCanDoAnything: allowAnonymousRead: false tools: jdk: installations: - name: "OpenJDK 11" home: "/usr/lib/jvm/java-11-openjdk" jobs: - script: > pipeline { agent any stages { stage('Build') { steps { echo 'Building...' } } stage('Test') { steps { echo 'Testing...' } } stage('Deploy') { steps { echo 'Deploying...' } } } } This configuration file defines: System message in the systemMessage block. This string will appear on the Jenkins homepage and can be used to inform users of important information or changes. Local user database and administrator account in the securityRealm block. The field allowsSignup: false disables self-registration of new users. Then, a user with the ID admin is defined, with the password set by the environment variable ${JENKINS_ADMIN_PASSWORD}. Authorization strategy in the authorizationStrategy block. The policy loggedInUsersCanDoAnything allows authenticated users to perform any action, while allowAnonymousRead: false prevents anonymous users from accessing the system. JDK installation in the tools block. In this example, a JDK named OpenJDK 11 is specified with the location /usr/lib/jvm/java-11-openjdk. Pipeline example in the jobs block. 
This pipeline includes three stages: Build, Test, and Deploy, each containing one step that outputs a corresponding message to the console. Integrating JCasC with Docker and Docker Compose Next, we need to integrate our jenkins.yaml configuration file with Docker and Docker Compose so that this configuration is automatically applied when the Jenkins container starts. Update the Dockerfile to copy the configuration file into the container and install the JCasC plugin: FROM jenkins/jenkins:lts RUN jenkins-plugin-cli --plugins configuration-as-code COPY jenkins.yaml /var/jenkins_home/jenkins.yaml EXPOSE 8080 EXPOSE 50000 Update the docker-compose.yml to set environment variables and mount the configuration file. Add the following code in the volumes block: - ./jenkins.yaml:/var/jenkins_home/jenkins.yaml After the volumes block, add a new environment block (if you haven't defined it earlier): environment: - JENKINS_ADMIN_PASSWORD=admin_password Build the new Jenkins image with the JCasC configuration: docker-compose build Run the containers: docker-compose up -d After the containers start, go to your browser at http://localhost:8080 and log in with the administrator account. You should see the system message and the Jenkins configuration applied according to your jenkins.yaml file. A few important notes: The YAML files docker-compose.yml and jenkins.yaml might seem similar at first glance but serve completely different purposes. The file in Docker Compose describes the services and containers needed to run Jenkins and its environment, while the file in JCasC describes the Jenkins configuration itself, including plugin installation, user settings, security, system settings, and jobs. The .yml and .yaml extensions are variations of the same YAML file format. They are interchangeable and supported by various tools and libraries for working with YAML. 
The choice of format depends largely on historical community preferences; in Docker documentation, you will more often encounter examples with the .yml extension, while in JCasC documentation, .yaml is more common. The pipeline example provided below only outputs messages at each stage with no useful payload. This example is for demonstrating structure and basic concepts, but it does not prevent Jenkins from successfully applying the configuration. We will not dive into more complex and practical structures. jenkins.yaml describes the static configuration and is not intended to define the details of a specific CI/CD process for a particular project. For that purpose, you can use the Jenkinsfile, which offers flexibility for defining specific CI/CD steps and integrating with version control systems. We will discuss this in more detail in the next chapter. Key Concepts of Jobs in JCasC Jobs are a section of the configuration file that allows you to define and configure build tasks using code. This block includes the following: Description of Build Tasks: This section describes all aspects of a job, including its type, stages, triggers, and execution steps. Types of Jobs: There are different types of jobs in Jenkins, such as freestyle projects, pipelines, and multiconfiguration projects. In JCasC, pipelines are typically used because they provide a more flexible and powerful approach to automation. Declarative Syntax: Pipelines are usually described using declarative syntax, simplifying understanding and editing. Example Breakdown: pipeline: The main block that defines the pipeline job. agent any: Specifies that the pipeline can run on any available Jenkins agent. stages: The block that contains the pipeline stages. A stage is a step in the process. 
Additional Features: Triggers: You can add triggers to make the job run automatically under certain conditions, such as on a schedule or when a commit is made to a repository: triggers { cron('H 4/* 0 0 1-5') } Post-Conditions: You can add post-conditions to execute steps after the pipeline finishes, such as sending notifications or archiving artifacts. Parameters: You can define parameters for a job to make it configurable at runtime: parameters { string(name: 'BRANCH_NAME', defaultValue: 'main', description: 'Branch to build') } Automating Jenkins Deployment in Docker with JCasC Using Scripts for Automatic Deployment Use Bash scripts to automate the installation, updating, and running Jenkins containers. Leverage Jenkins Configuration as Code (JCasC) to automate Jenkins configuration. Script Examples Script for Deploying Jenkins in Docker: #!/bin/bash # Jenkins Parameters JENKINS_IMAGE="jenkins/jenkins:lts" CONTAINER_NAME="jenkins-server" JENKINS_PORT="8080" JENKINS_AGENT_PORT="50000" VOLUME_NAME="jenkins_home" CONFIG_DIR="$(pwd)/jenkins_configuration" # Create a volume to store Jenkins data docker volume create $VOLUME_NAME # Run Jenkins container with JCasC docker run -d \ --name $CONTAINER_NAME \ -p $JENKINS_PORT:8080 \ -p $JENKINS_AGENT_PORT:50000 \ -v $VOLUME_NAME:/var/jenkins_home \ -v $CONFIG_DIR:/var/jenkins_home/casc_configs \ -e CASC_JENKINS_CONFIG=/var/jenkins_home/casc_configs \ $JENKINS_IMAGE The JCasC configuration file jenkins.yaml was discussed earlier. Setting Up a CI/CD Pipeline for Jenkins Updates To set up a CI/CD pipeline, follow these steps: Open Jenkins and go to the home page. Click on Create Item. Enter a name for the new item, select Pipeline, and click OK. If this section is missing, you need to install the plugin in Jenkins. Go to Manage Jenkins → Manage Plugins. In the Available Plugins tab, search for Pipeline and install the Pipeline plugin. Similarly, install the Git Push plugin. After installation, go back to Create Item. 
Next, select Pipeline, and under Definition, choose Pipeline script from SCM. Select Git as the SCM and add the URL of your repository; if it is private, also add the credentials. In the Branch Specifier field, specify the branch that contains the Jenkinsfile (e.g., */main). Note that the Jenkinsfile should be created without an extension; if it is located in a subdirectory, specify that in the Script Path field. Click Save.

Example of a Jenkinsfile:

pipeline {
    agent any

    environment {
        JENKINS_CONTAINER_NAME = 'new-jenkins-server'
        JENKINS_IMAGE = 'jenkins/jenkins:lts'
        JENKINS_PORT = '8080'
        JENKINS_VOLUME = 'jenkins_home'
    }

    stages {
        stage('Setup Docker') {
            steps {
                script {
                    // Install Docker on the server if it is not installed
                    sh '''
                        if ! [ -x "$(command -v docker)" ]; then
                          curl -fsSL https://get.docker.com -o get-docker.sh
                          sh get-docker.sh
                        fi
                    '''
                }
            }
        }

        stage('Pull Jenkins Docker Image') {
            steps {
                script {
                    // Pull the latest Jenkins image
                    sh "docker pull ${JENKINS_IMAGE}"
                }
            }
        }

        stage('Cleanup Old Jenkins Container') {
            steps {
                script {
                    // Stop and remove the old container if it exists
                    def existingContainer = sh(
                        script: "docker ps -a -q -f name=${JENKINS_CONTAINER_NAME}",
                        returnStdout: true
                    ).trim()
                    if (existingContainer) {
                        echo "Stopping and removing existing container ${JENKINS_CONTAINER_NAME}..."
                        sh "docker stop ${existingContainer} || true"
                        sh "docker rm -f ${existingContainer} || true"
                    } else {
                        echo "No existing container with name ${JENKINS_CONTAINER_NAME} found."
                    }
                }
            }
        }

        stage('Run Jenkins Container') {
            steps {
                script {
                    // Run the Jenkins container with port binding and volume mounting
                    sh '''
                        docker run -d --name ${JENKINS_CONTAINER_NAME} \
                          -p ${JENKINS_PORT}:8080 \
                          -p 50000:50000 \
                          -v ${JENKINS_VOLUME}:/var/jenkins_home \
                          ${JENKINS_IMAGE}
                    '''
                }
            }
        }

        stage('Configure Jenkins (Optional)') {
            steps {
                script {
                    // Additional Jenkins configuration via Groovy scripts or the REST API
                    sh '''
                        # Example script for performing initial Jenkins setup
                        curl -X POST http://localhost:${JENKINS_PORT}/scriptText \
                          --data-urlencode 'script=println("Jenkins is running!")'
                    '''
                }
            }
        }
    }

    post {
        always {
            echo "Jenkins setup and deployment process completed."
        }
    }
}

On the page of your new pipeline, click Build Now, then go to Console Output. On successful completion, you should see the corresponding output.

For this pipeline, we used the following files.

Dockerfile:

FROM jenkins/jenkins:lts
USER root
RUN apt-get update && apt-get install -y docker.io

docker-compose.yml:

version: '3.7'

services:
  jenkins:
    build: .
    ports:
      - "8081:8080"
      - "50001:50000"
    volumes:
      - jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - JAVA_OPTS=-Djenkins.install.runSetupWizard=false
    networks:
      - jenkins-network

volumes:
  jenkins_home:

networks:
  jenkins-network:

Ports 8081 and 50001 are used here so that the newly deployed Jenkins can occupy ports 8080 and 50000, respectively. This means the main Jenkins instance, from which the pipeline is running, is currently available at http://localhost:8081/. One way to check whether Jenkins has been deployed is to open http://localhost:8080/, as specified in the pipeline. Since this is a fresh image, a welcome page with authentication will appear.

Conclusion

Automating the deployment, updates, and backups of Jenkins is crucial for ensuring the reliability and security of CI/CD processes. Modern tooling enhances this process with a variety of useful features and resources.
If you're further interested in exploring Jenkins capabilities, we recommend the following resources that can assist with automating deployments:

- Official Jenkins website
- Jenkins Configuration as Code documentation
- Pipeline Syntax
30 January 2025 · 19 min to read
Git

Installing and Using GitLab Runner

GitLab Runner is an agent application designed to launch and automatically run CI/CD processes in GitLab. GitLab Runner executes jobs from the .gitlab-ci.yml file located in the root directory of your project.

Runner can be installed either on the same server as GitLab or separately; the main requirement is network connectivity between GitLab Runner and the GitLab server. You can install GitLab Runner on operating systems such as Linux, Windows, and macOS, and it also supports running in a Docker container. In this article, we will install GitLab Runner in Docker and run a test project.

Prerequisites

To install GitLab Runner, you will need:

- A cloud server or a virtual machine running Linux. You can use any Linux distribution compatible with Docker.
- Docker installed.
- An account on gitlab.com, as well as a pre-prepared project.

You can install Docker manually (we have a step-by-step guide for Ubuntu) or automatically from the Marketplace when creating a new Hostman server.

Installing GitLab Runner Using Docker

First, connect to the server where Docker is installed. Create a Docker volume in which we will store the configuration. A Docker volume is a file system for persistent storage: data in a volume is stored separately from the container, so the volume and its data remain if you restart, stop, or delete the container.

The command to create a volume named runner1 is:

docker volume create runner1

Next, launch a container from the gitlab-runner image:

docker run -d --name gitlab-runner1 --restart always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v runner1:/etc/gitlab-runner \
    gitlab/gitlab-runner:latest

Check the status of the container and make sure that it is running (Up):

docker ps

This completes the installation of GitLab Runner. The next step is to register the Runner.

Registering GitLab Runner

Once Runner has been installed, it must be registered.
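Registration can be done interactively, as walked through below, or non-interactively by passing the same answers as command-line flags to the register command. The following is a hedged sketch of the non-interactive form: the URL, token, and tag values are placeholders, and the script only prints the assembled command for review rather than executing it.

```shell
#!/bin/sh
# Placeholder values -- replace with your GitLab URL, registration token, and tags.
GITLAB_URL="https://gitlab.com"
REG_TOKEN="<your-registration-token>"
TAGS="test1"

# Assemble the same answers the interactive prompts would collect.
REGISTER_CMD="docker run --rm -v runner1:/etc/gitlab-runner gitlab/gitlab-runner:latest register \
  --non-interactive \
  --url $GITLAB_URL \
  --registration-token $REG_TOKEN \
  --executor docker \
  --docker-image python:3.10-slim \
  --tag-list $TAGS \
  --description 'docker runner'"

# Print the command for review; remove the echo (or pipe to sh) to actually run it.
echo "$REGISTER_CMD"
```

This form is convenient for provisioning scripts, since no terminal interaction is required; the interactive walkthrough below covers what each of these values means.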
Without registration, Runner cannot execute jobs. Launch the gitlab-runner container and execute the register command:

docker run --rm -it -v runner1:/etc/gitlab-runner gitlab/gitlab-runner:latest register

You will be prompted to enter the URL of the GitLab server:

- If you are using a self-hosted GitLab instance installed on a separate server, use the address of your GitLab instance. For example, if your project is located at https://gitlab.test1.com/projects/testproject, the URL is https://gitlab.test1.com.
- If your projects are stored on GitLab.com, the URL is https://gitlab.com.

Next, you will need to enter the registration token. To get the token, open the GitLab web interface, select the project, then go to Settings → CI/CD on the left. Find the Runners section, expand it, and you will find the token in the Actions menu (three dots).

Next, you'll be prompted to enter a description for this Runner. You can skip it by leaving it blank.

Now set the tags. Tags are labels that determine which runner will be used when running jobs. You can enter one tag or several, separated by commas.

When entering a maintenance note, you can add information for other developers, for example technical details about the server. You can also skip this step.

Select an executor, i.e., the environment for launching the pipeline. We will choose docker. In this case, the pipeline will run in Docker containers, and the containers will be deleted upon completion.

At the last step, select the Docker image to use in the container where the pipeline will run. As an example, let's choose the python:3.10-slim image.

Once you are done registering the Runner, it will be displayed in the project settings, in the Runners section.

Using GitLab Runner to Start a Pipeline

To use Runner to run a pipeline, you need to create a file called .gitlab-ci.yml.
You can create the file directly in the root directory of the project or in the GitLab web interface:

- On the project's main page, click Set up CI/CD (this button is only visible before you set up CI/CD for the first time).
- Click Configure pipeline.

When you first set up a pipeline, GitLab provides the basic pipeline syntax. In our example, we use a Python project, namely a script that tests the speed of an Internet connection. If the script executes successfully, the output should display information about incoming and outgoing connections:

Your Download speed is 95.8 Mbit/s
Your Upload speed is 110.1 Mbit/s

The pipeline syntax will look like this:

image: python:3.10-slim

default:
  tags:
    - test1

before_script:
  - pip3 install -r requirements.txt

run:
  script:
    - python3 check_internet_speed.py

To assign the previously created Runner to this project, you need to add:

default:
  tags:
    - test1

Here, test1 is the tag of the Runner we created. With this tag, the pipeline will be executed on the Runner that is assigned the test1 tag.

Save the changes to the file (make a commit) and launch the pipeline. If you look at the job execution log, you can see at the very beginning of the output that the GitLab Runner is used. The full output of the entire pipeline is shown in the screenshot below.

Conclusion

In this tutorial, we have installed and configured GitLab Runner, assigned it to a project, and launched the pipeline.
27 May 2024 · 5 min to read
