Converting CICD from Bash to Python Scripts

Calling this Phase 1. I've switched the build machine philosophy from
using a dedicated Digital Ocean droplet per architecture to using one
large build machine and the +package-linux Earthly target, which
produces .deb and .rpm packages for both amd64 and arm64/aarch64.

The script to create and delete the build machine has been migrated
to Python. I feel like the error handling is better, and the delete
function now works from the specific ID of the running build machine
instead of its name. Deleting by name would, in rare circumstances,
fail when more than one machine with the same name existed, leaving
duplicate machines running and driving Digital Ocean costs well above
normal.
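
At its core, the new delete path just reads the droplet ID that create
saved and issues a DELETE against that exact droplet. A minimal sketch
of the idea (the real implementation, with its config handling, is in
the diff below):

import aiohttp

async def delete_by_id(token: str, droplet_id: int) -> None:
    # DELETE /v2/droplets/{id} targets exactly one droplet; no name
    # lookup, so duplicate names can't make the operation ambiguous.
    url = f"https://api.digitalocean.com/v2/droplets/{droplet_id}"
    headers = {"Authorization": f"Bearer {token}"}
    async with aiohttp.ClientSession() as session:
        async with session.delete(url, headers=headers) as resp:
            if resp.status != 204:  # DigitalOcean returns 204 No Content on success
                raise RuntimeError(await resp.text())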

Lastly, moving the .deb and .rpm packages from the build machine
to the build orchestrator for creating and signing the repositories
now uses the GitLab CICD artifact system instead of SCP. This switch
will allow us to include the packages in the release records and
maybe streamline the Python and Crates distribution jobs in a
later phase of this project.
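
Since artifacts are restored into the consuming job's checkout, the
repo-building side can pick the packages up as plain local files under
target/packages/. A rough sketch of what that consumption could look
like once repo building moves to Python too (collect_packages is
hypothetical, not part of this commit):

from pathlib import Path

def collect_packages(root: str = "target/packages") -> tuple[list[Path], list[Path]]:
    # GitLab restores the package job's artifacts into the working
    # directory, so the .deb and .rpm files are ordinary files here.
    pkg_dir = Path(root)
    return sorted(pkg_dir.glob("*.deb")), sorted(pkg_dir.glob("*.rpm"))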

Changes are made in the Dry Run section of the CICD config for
testing, which will start in a few minutes and will probably result
in a bunch of failed pipelines and tweaking, because there's just no
way I got all of this right on the first try.
TC Johnson 2025-03-16 11:08:58 -05:00
parent 09f7210979
commit ea0c3b6469
12 changed files with 476 additions and 126 deletions

.gitignore

@@ -76,3 +76,181 @@ perf.data.old
# Earthly temporary build output
.tmp-earthly-out/
###############################################################################
### Python
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# UV
# Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
#uv.lock
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/latest/usage/project/#working-with-version-control
.pdm.toml
.pdm-python
.pdm-build/
# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
# Ruff stuff:
.ruff_cache/
# PyPI configuration file
.pypirc


@@ -261,13 +261,11 @@ dryrun_create_build_machines:
tags:
- build-orchestration
script:
- bash scripts/cicd/build-orchestration/build-machine-ctrl.sh create amd64-deb
- bash scripts/cicd/build-orchestration/build-machine-ctrl.sh create arm64-deb
- bash scripts/cicd/build-orchestration/build-machine-ctrl.sh create amd64-rpm
- uv run scripts/cicd-python/main.py --create-build-machine
rules:
- if: $CI_COMMIT_MESSAGE =~ /\[ci dryrun]/
dryrun_package_amd64_deb:
dryrun_package_linux:
stage: build_packages
needs:
- dryrun_create_build_machines
@@ -275,34 +273,10 @@ dryrun_package_amd64_deb:
- build-amd64-deb
script:
- earthly bootstrap
- earthly +package-linux-amd64-deb
- bash scripts/cicd/build-machine/scp-amd64-debs-to-orchestrator.sh
rules:
- if: $CI_COMMIT_MESSAGE =~ /\[ci dryrun]/
dryrun_package_arm64_deb:
stage: build_packages
needs:
- dryrun_create_build_machines
tags:
- build-arm64-deb
script:
- earthly bootstrap
- earthly +package-linux-arm64-deb
- bash scripts/cicd/build-machine/scp-arm64-debs-to-orchestrator.sh
rules:
- if: $CI_COMMIT_MESSAGE =~ /\[ci dryrun]/
dryrun_package_amd64_rpm:
stage: build_packages
needs:
- dryrun_create_build_machines
tags:
- build-amd64-rpm
script:
- earthly bootstrap
- earthly +package-linux-amd64-rpm
- bash scripts/cicd/build-machine/scp-amd64-rpms-to-orchestrator.sh
- earthly +package-linux
artifacts:
paths:
- target/packages/*
rules:
- if: $CI_COMMIT_MESSAGE =~ /\[ci dryrun]/
@@ -315,7 +289,6 @@ dryrun_publish_crates:
script:
- vlt login
- vlt run --command="cargo publish -p veilid-tools --dry-run"
- vlt run --command="cargo publish -p veilid-core --dry-run"
rules:
- if: $CI_COMMIT_MESSAGE =~ /\[ci dryrun]/
@@ -338,8 +311,11 @@ dryrun_build_repositories:
SECURE_FILES_DOWNLOAD_PATH: './'
script:
- curl --silent "https://gitlab.com/gitlab-org/incubation-engineering/mobile-devops/download-secure-files/-/raw/main/installer" | bash
- cp scripts/cicd/build-orchestration/generate-release.sh ~
- bash scripts/cicd/build-orchestration/distribute-packages.sh
- cp scripts/cicd/build-orchestration/rpm-repo-building/Dockerfile ~/rpm-build-container
- cp scripts/cicd/build-orchestration/rpm-repo-building/repobuild.sh ~/rpm-build-container
- cp scripts/cicd/build-orchestration/generate-stable-release.sh ~
- bash scripts/cicd/build-orchestration/distribute-stable-packages.sh
dependencies:
- dryrun_package_linux
rules:
- if: $CI_COMMIT_MESSAGE =~ /\[ci dryrun]/
@@ -361,9 +337,7 @@ dryrun_delete_build_machines:
tags:
- build-orchestration
script:
- bash scripts/cicd/build-orchestration/build-machine-ctrl.sh delete amd64-deb
- bash scripts/cicd/build-orchestration/build-machine-ctrl.sh delete arm64-deb
- bash scripts/cicd/build-orchestration/build-machine-ctrl.sh delete amd64-rpm
- uv run scripts/cicd-python/main.py --delete-build-machine
rules:
- if: $CI_COMMIT_MESSAGE =~ /\[ci dryrun]/


@@ -1 +0,0 @@
DO_API_TOKEN=dop_v1_4cce22b1171e09c37b5a191ab42dde8004b4cb699bf924e5f479d8c9764fb36e

scripts/cicd-python/.gitignore

@@ -0,0 +1,178 @@
###############################################################################
### Python
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
**/__pycache__/
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# UV
# Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
#uv.lock
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/latest/usage/project/#working-with-version-control
.pdm.toml
.pdm-python
.pdm-build/
# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
# Ruff stuff:
.ruff_cache/
# PyPI configuration file
.pypirc


@@ -0,0 +1,8 @@
{
    "droplet_config": {
        "name": "build-server-tmp",
        "image": 181171505,
        "size": "c2-16vcpu-32gb"
    },
    "droplet_id": 482837155
}


@@ -1,38 +0,0 @@
#!/usr/bin/env python3
import os
import sys
from dotenv import load_dotenv
import argparse
import asyncio
from utils.test_credentials import test_api_credentials
from utils.droplets import create_droplet, delete_droplet

if __name__ == "__main__":
    # Load environment variables from the .env file.
    load_dotenv()
    token = os.getenv("DO_API_TOKEN")
    if not token:
        print("Error: DO_API_TOKEN environment variable not found. Please set it in the .env file.", file=sys.stderr)
        sys.exit(1)

    # Set up command-line argument parsing.
    parser = argparse.ArgumentParser(description="DigitalOcean API Utility")
    subparsers = parser.add_subparsers(dest="command", required=True)
    subparsers.add_parser("test-credentials", help="Test DigitalOcean API credentials")
    create_parser = subparsers.add_parser("create", help="Create a droplet")
    create_parser.add_argument("droplet_type", help="Type of droplet to create (e.g., amd64-deb)")
    delete_parser = subparsers.add_parser("delete", help="Delete a droplet")
    delete_parser.add_argument("droplet_type", help="Type of droplet to delete (e.g., amd64-deb)")
    args = parser.parse_args()

    if args.command == "test-credentials":
        asyncio.run(test_api_credentials(token))
    elif args.command == "create":
        asyncio.run(create_droplet(token, args.droplet_type))
    elif args.command == "delete":
        asyncio.run(delete_droplet(token, args.droplet_type))


@@ -1,20 +1,29 @@
import aiohttp
import asyncio
import sys
import json
# Define droplet configurations for different droplet types.
DROPLET_CONFIGS = {
"amd64-deb": {
"name": "build-server-amd64-deb-tmp",
"image": 179066895,
"size": "c2-16vcpu-32gb"
},
}
CONFIG_FILE = "config.json"
async def create_droplet(token: str, droplet_type: str) -> None:
config = DROPLET_CONFIGS.get(droplet_type)
if not config:
print(f"Droplet type '{droplet_type}' not recognized.", file=sys.stderr)
# Load config from file
def load_config():
try:
with open(CONFIG_FILE, "r") as f:
return json.load(f)
except (FileNotFoundError, json.JSONDecodeError):
return {}
# Save config to file
def save_config(config):
with open(CONFIG_FILE, "w") as f:
json.dump(config, f, indent=4)
async def create_build_machine(token: str) -> None:
config = load_config()
droplet_config = config.get("droplet_config", {})
if not droplet_config:
print("Droplet configuration not found.", file=sys.stderr)
sys.exit(1)
headers = {
@@ -23,10 +32,10 @@ async def create_droplet(token: str, droplet_type: str) -> None:
}
create_url = "https://api.digitalocean.com/v2/droplets"
payload = {
"name": config["name"],
"region": "nyc1", # Changed default region to "ncy1"
"size": config["size"],
"image": config["image"],
"name": droplet_config["name"],
"region": "nyc1",
"size": droplet_config["size"],
"image": droplet_config["image"],
"backups": False,
}
@@ -42,17 +51,23 @@ async def create_droplet(token: str, droplet_type: str) -> None:
print("No droplet information returned.", file=sys.stderr)
sys.exit("No droplet information returned.")
droplet_id = droplet.get("id")
print(f"Droplet creation initiated. Droplet ID: {droplet_id}")
print(f"Droplet created. Droplet ID: {droplet_id}")
# Poll for droplet status until it becomes "active"
# Save droplet ID to config
config["droplet_id"] = droplet_id
save_config(config)
print("Droplet ID saved to config.")
# Poll every 10 seconds for droplet status until it becomes "active"
status = droplet.get("status", "new")
droplet_url = f"https://api.digitalocean.com/v2/droplets/{droplet_id}"
while status != "active":
await asyncio.sleep(2)
await asyncio.sleep(10)
async with session.get(droplet_url, headers=headers) as poll_resp:
if poll_resp.status != 200:
error_text = await poll_resp.text()
print(f"Error polling droplet status: {error_text}", file=sys.stderr)
print(f"Error polling droplet status: {error_text}",
file=sys.stderr)
sys.exit(error_text)
droplet_data = await poll_resp.json()
droplet = droplet_data.get("droplet")
@@ -60,7 +75,8 @@ async def create_droplet(token: str, droplet_type: str) -> None:
status = droplet.get("status", status)
print(f"Droplet status: {status}")
else:
print("Droplet data missing in polling response", file=sys.stderr)
print("Droplet data missing in polling response",
file=sys.stderr)
sys.exit("Droplet data missing in polling response")
print("Droplet is up and running.")
@@ -68,46 +84,36 @@ async def create_droplet(token: str, droplet_type: str) -> None:
async with session.get(droplet_url, headers=headers) as final_resp:
if final_resp.status != 200:
error_text = await final_resp.text()
print(f"Error retrieving droplet information: {error_text}", file=sys.stderr)
print(f"Error retrieving droplet information: {error_text}",
file=sys.stderr)
sys.exit(error_text)
final_data = await final_resp.json()
print("Droplet Information:")
print(final_data)
async def delete_droplet(token: str, droplet_type: str) -> None:
config = DROPLET_CONFIGS.get(droplet_type)
if not config:
print(f"Droplet type '{droplet_type}' not recognized.", file=sys.stderr)
sys.exit(1)
async def delete_build_machine(token: str) -> None:
config = load_config()
droplet_id = config.get("droplet_id")
if not droplet_id:
print("No droplet ID found in config.", file=sys.stderr)
return
headers = {
"Authorization": f"Bearer {token}",
"Content-Type": "application/json",
}
droplets_url = "https://api.digitalocean.com/v2/droplets"
delete_url = f"https://api.digitalocean.com/v2/droplets/{droplet_id}"
async with aiohttp.ClientSession() as session:
async with session.get(droplets_url, headers=headers) as resp:
if resp.status != 200:
async with session.delete(delete_url, headers=headers) as resp:
if resp.status != 204:
error_text = await resp.text()
print(f"Error retrieving droplets: {error_text}", file=sys.stderr)
print(f"Error deleting droplet: {error_text}", file=sys.stderr)
sys.exit(error_text)
data = await resp.json()
droplets = data.get("droplets", [])
target_droplet = None
for droplet in droplets:
if droplet.get("name") == config["name"]:
target_droplet = droplet
break
if not target_droplet:
print(f"No droplet found with name '{config['name']}'.")
return
print(f"Droplet {droplet_id} deleted successfully.")
droplet_id = target_droplet.get("id")
delete_url = f"https://api.digitalocean.com/v2/droplets/{droplet_id}"
async with session.delete(delete_url, headers=headers) as delete_resp:
if delete_resp.status != 204:
error_text = await delete_resp.text()
print(f"Error deleting droplet: {error_text}", file=sys.stderr)
sys.exit(error_text)
print(f"Droplet '{config['name']}' deleted successfully.")
# Remove droplet ID from config
config.pop("droplet_id", None)
save_config(config)
print("Droplet ID removed from config.")


@@ -0,0 +1,8 @@
import subprocess  # Unused for now; later phases will shell out to the repo tooling through it.


def build_deb_repo():
    # Placeholder until repo building moves off the bash scripts.
    print("Creating and signing .deb package repository.")


def build_rpm_repo():
    # Placeholder until repo building moves off the bash scripts.
    print("Creating and signing .rpm package repository.")


@@ -0,0 +1,37 @@
#!/usr/bin/env python3
import os
import sys
import argparse
import asyncio
from dotenv import load_dotenv
from utils.build_machine_control import create_build_machine, delete_build_machine
from utils.test_credentials import test_api_credentials
from utils.repos_builder import build_deb_repo, build_rpm_repo

if __name__ == "__main__":
    # Load environment variables from the .env file.
    load_dotenv()
    token = os.getenv("DO_API_TOKEN")
    if not token:
        print("Error: DO_API_TOKEN environment variable not found. Please set it in the .env file.", file=sys.stderr)
        sys.exit(1)

    # Set up command-line argument parsing.
    parser = argparse.ArgumentParser(description="Veilid compiling and releasing utility")
    parser.add_argument("--create-build-machine", action="store_true", help="Create a build machine")
    parser.add_argument("--delete-build-machine", action="store_true", help="Delete the created build machine")
    parser.add_argument("--build-deb-repo", action="store_true", help="Creates and signs .deb repository")
    parser.add_argument("--build-rpm-repo", action="store_true", help="Creates and signs .rpm repository")
    parser.add_argument("--test-api-credentials", action="store_true", help="Test DigitalOcean API credentials")
    args = parser.parse_args()

    if args.create_build_machine:
        asyncio.run(create_build_machine(token))
    elif args.delete_build_machine:
        asyncio.run(delete_build_machine(token))
    elif args.build_deb_repo:
        # The repo builders are synchronous functions, so call them directly.
        build_deb_repo()
    elif args.build_rpm_repo:
        build_rpm_repo()
    elif args.test_api_credentials:
        asyncio.run(test_api_credentials(token))