-
-
Notifications
You must be signed in to change notification settings - Fork 2
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Populate more #4
Conversation
WalkthroughThis update introduces several new scripts and functionalities to the Changes
Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media? TipsChatThere are 3 ways to chat with CodeRabbit:
Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments. CodeRabbit Commands (invoked as PR comments)
Additionally, you can add CodeRabbit Configration File (
|
@coderabbitai review |
Actions PerformedReview triggered.
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 19
Out of diff range and nitpick comments (2)
miraheze/puppet/cycle_puppet.py (1)
5-5
: Clarify user interaction in the prompt.The prompt 'press enter to re-enable puppet' could be more descriptive. It might be helpful to inform the user that Puppet is currently disabled and that pressing enter will re-enable it.
miraheze/mediawiki/iaupload.py (1)
45-56
: Manage environment variables carefully.The script modifies environment variables (
HTTP_PROXY
andHTTPS_PROXY
) which could affect other parts of the application or subsequent requests. Consider using a more localized approach to setting proxies, such as configuring them directly in the request functions.
Review Details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (17)
- .github/pythonchecks.txt (1 hunks)
- .github/workflows/python.yml (1 hunks)
- miraheze/mediawiki/init.py (1 hunks)
- miraheze/mediawiki/check_read_only.py (1 hunks)
- miraheze/mediawiki/iaupload.py (1 hunks)
- miraheze/mediawiki/mwdeploy.py (1 hunks)
- miraheze/mediawiki/mwscript.py (1 hunks)
- miraheze/puppet/init.py (1 hunks)
- miraheze/puppet/cycle_puppet.py (1 hunks)
- miraheze/puppet/disable_puppet.py (1 hunks)
- miraheze/puppet/enable_puppet.py (1 hunks)
- setup.cfg (1 hunks)
- setup.py (1 hunks)
- tests/init.py (1 hunks)
- tests/test_iaupload.py (1 hunks)
- tests/test_mwdeploy.py (1 hunks)
- tests/test_mwscript.py (1 hunks)
Files skipped from review due to trivial changes (6)
- .github/pythonchecks.txt
- miraheze/mediawiki/init.py
- miraheze/puppet/init.py
- setup.cfg
- setup.py
- tests/init.py
Additional comments not posted (30)
.github/workflows/python.yml (1)
29-30
: Ensure comprehensive testing in CI workflows.The update from
flake8
tocoverage run --branch -m pytest tests
in the CI workflow is a positive change, enhancing the comprehensiveness of the testing process. This change should help catch more issues before deployment.tests/test_mwscript.py (22)
6-11
: The test functiontest_get_command_simple
correctly asserts the command generation for a simple script. Well done.
14-19
: The test functiontest_get_command_extension
accurately asserts the command generation for an extension script. Nicely implemented.
22-36
: The test functiontest_get_command_extension_list
effectively uses environment variables and mocks to test command generation for a list of extensions. Good use of patching.
39-44
: The test functiontest_get_command_all
correctly asserts the command generation for all wikis. This is crucial for comprehensive testing.
47-52
: The test functiontest_get_command_beta
accurately asserts the command generation for beta wikis. Excellent specificity in testing.
55-60
: The test functiontest_get_command_args
effectively tests command generation with additional arguments. This ensures flexibility in command configurations.
63-68
: The test functiontest_get_command_subdir
correctly asserts the command generation for scripts in subdirectories. This is important for maintaining modular script organisation.
71-76
: The test functiontest_get_command_simple_runner
effectively asserts the command generation using therun.php
wrapper for a simple script. This is essential for testing wrapper integrations.
79-84
: The test functiontest_get_command_extension_runner
accurately asserts the command generation using therun.php
wrapper for an extension script. Good coverage of wrapper functionality.
87-101
: The test functiontest_get_command_extension_list_runner
effectively uses environment variables and mocks to test command generation with therun.php
wrapper for a list of extensions. Excellent use of patching and environment manipulation.
104-109
: The test functiontest_get_command_all_runner
correctly asserts the command generation using therun.php
wrapper for all wikis. This ensures comprehensive testing of the wrapper functionality.
112-117
: The test functiontest_get_command_beta_runner
accurately asserts the command generation using therun.php
wrapper for beta wikis. Excellent specificity in testing wrapper integrations.
120-125
: The test functiontest_get_command_args_runner
effectively tests command generation with therun.php
wrapper and additional arguments. This ensures flexibility in command configurations with the wrapper.
128-133
: The test functiontest_get_command_subdir_runner
correctly asserts the command generation using therun.php
wrapper for scripts in subdirectories. This is important for maintaining modular script organisation with the wrapper.
136-142
: The test functiontest_get_command_class
effectively asserts the command generation using therun.php
wrapper for a class script. Good coverage of class-based script configurations.
145-150
: The test functiontest_get_command_long_runner
correctly asserts the command generation using therun.php
wrapper for a long-running script. This is crucial for testing long operation scripts.
153-158
: The test functiontest_get_command_long_runner_class_lower
accurately asserts the command generation using therun.php
wrapper for a long-running script with lowercase class name. Good attention to case sensitivity in script names.
161-166
: The test functiontest_get_command_long_runner_class_mixed
effectively asserts the command generation using therun.php
wrapper for a long-running script with mixed case class name. Excellent handling of case variations in script names.
169-174
: The test functiontest_get_command_class_norunner
correctly asserts the command generation without using therun.php
wrapper for a class script. This is important for testing direct script executions.
177-182
: The test functiontest_get_command_wiki_typo
effectively tests command generation with a typo in the wiki name. Good coverage of error handling in wiki name inputs.
185-190
: The test functiontest_get_command_nowiki
accurately tests command generation without specifying a wiki. This is crucial for testing default behaviours.
193-198
: The test functiontest_get_command_longextension
correctly asserts the command generation for a long extension name. Excellent coverage of long name handling.tests/test_mwdeploy.py (5)
27-31
: The test functiontest_get_change_tag_map
correctly asserts the retrieval and structure of the change tag map. Well implemented.
33-39
: The test functiontest_get_changed_files
effectively uses mocking to simulate the retrieval of changed files. Good use ofos.popen
mocking.
41-55
: The test functiontest_get_changed_files_type
accurately categorises changed files by type using mocked data. Excellent coverage of file type differentiation.
57-63
: The test functiontest_get_change_tags
correctly asserts the retrieval of change tags from changed files. This is crucial for ensuring accurate tag generation.
70-90
: The test functiontest_get_valid_extensions
effectively uses mocking to test the retrieval of valid extensions for given versions. Good use ofos.path.isdir
andos.scandir
mocking.miraheze/mediawiki/mwdeploy.py (2)
227-238
: Validate the logic inremote_sync_file
.The function
remote_sync_file
seems to handle file syncing across servers, but the logic for handling hostname and server names could be prone to errors. Ensure that the hostname comparison and path handling are robust and secure.Verification successful
The logic in the
remote_sync_file
function appears to be correctly implemented. The function is used in two places within the same file (miraheze/mediawiki/mwdeploy.py
), and both usages handle the server list and path syncing appropriately.
- The function is called with the correct parameters, including
time
,serverlist
,path
,force
,envinfo
, andnolog
.- The hostname comparison and path handling within the function are consistent with the intended logic.
No issues were found with the logic in
remote_sync_file
.Scripts Executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify the logic in `remote_sync_file`. # Test: Search for the function usage. Expect: Correct handling of server list and path syncing. rg --type python $'remote_sync_file'Length of output: 66
Script:
#!/bin/bash # Description: Verify the logic in `remote_sync_file`. # Test: Search for the function usage. Expect: Correct handling of server list and path syncing. rg 'remote_sync_file'Length of output: 595
Script:
#!/bin/bash # Description: Extract and review the relevant lines of code where `remote_sync_file` is used. # Extract lines where `remote_sync_file` is used. rg --context 5 'remote_sync_file' miraheze/mediawiki/mwdeploy.pyLength of output: 1242
618-646
: Validate the logic inUpgradeExtensionsAction
andUpgradeSkinsAction
.These actions are crucial for handling upgrades, but their dependency on the correct order of arguments (
--versions
before--upgrade-extensions
or--upgrade-skins
) could lead to user errors. Consider refactoring to reduce dependency on argument order or improve error messages.Verification successful
The logic in
UpgradeExtensionsAction
andUpgradeSkinsAction
appears to be correctly handling the validation of versions and extensions/skins. The error messages are clear about the requirement of the--versions
argument and the validity of the extensions/skins.
miraheze/mediawiki/mwdeploy.py
:
parser.add_argument('--upgrade-extensions', dest='upgrade_extensions', action=UpgradeExtensionsAction, help='extension(s) to upgrade')
parser.add_argument('--upgrade-skins', dest='upgrade_skins', action=UpgradeSkinsAction, help='skin(s) to upgrade')
Scripts Executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify the logic in `UpgradeExtensionsAction` and `UpgradeSkinsAction`. # Test: Search for the actions usage. Expect: Correct handling of version and skin/extension validation. rg --type python $'UpgradeExtensionsAction|UpgradeSkinsAction'Length of output: 94
Script:
#!/bin/bash # Description: Verify the logic in `UpgradeExtensionsAction` and `UpgradeSkinsAction`. # Test: Search for the actions usage. Expect: Correct handling of version and skin/extension validation. rg --type py 'UpgradeExtensionsAction|UpgradeSkinsAction'Length of output: 572
from miraheze.puppet import disable_puppet, enable_puppet | ||
disable_puppet.disable_puppet() | ||
input('press enter to re-enable puppet') | ||
enable_puppet.enable_puppet() |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Consider handling exceptions for external calls.
The script directly calls disable_puppet()
and enable_puppet()
without any exception handling. If these functions fail (e.g., due to permission issues or network problems), the script will crash. Consider wrapping these calls in a try-except block to handle potential errors gracefully.
system('sudo puppet agent --enable') | ||
system('sudo puppet agent -tv') | ||
system('logsalmsg enabled puppet') |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Use subprocess instead of os.system for better security and control.
Using os.system
for executing shell commands is less secure and offers less control than using subprocess
. It's recommended to use subprocess.run
with explicit arguments to avoid shell injection vulnerabilities.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
A shell injection is impossible here
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
@RhinosF1, while it may seem that shell injection is not a concern with the current commands, using os.system
is generally discouraged due to several reasons:
-
Security: Even if the current commands are safe, using
os.system
can lead to security vulnerabilities if the commands are ever modified to include user input or other dynamic content. It's a best practice to usesubprocess.run
to mitigate potential risks. -
Error Handling:
subprocess.run
provides better control over error handling. You can capture the output, check return codes, and handle exceptions more gracefully. -
Maintainability: Using
subprocess.run
makes the code more readable and maintainable. It clearly separates the command and its arguments, reducing the risk of errors.
Here's how you can refactor the enable_puppet
function to use subprocess.run
:
import subprocess
def enable_puppet() -> None:
subprocess.run(['sudo', 'puppet', 'agent', '--enable'], check=True)
subprocess.run(['sudo', 'puppet', 'agent', '-tv'], check=True)
subprocess.run(['logsalmsg', 'enabled', 'puppet'], check=True)
This approach ensures that each command is executed securely and any errors are properly handled.
Let me know if you need further assistance or have any questions!
system(f'logsalmsg Disabling puppet for {sys.argv[1]}') | ||
system('sudo puppet agent -tv') | ||
system(f'sudo puppet agent --disable "{sys.argv[1]}"') |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Validate command line arguments before use.
The script uses sys.argv[1]
directly in system calls, which can lead to security vulnerabilities if the input is not validated. Consider validating or sanitising the input before use to prevent potential command injection attacks.
Use subprocess for system calls.
Replace os.system
calls with subprocess.run
for better security and control. This change also allows for better error handling and output capturing.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Probably should move to argparse and --reason
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
@RhinosF1, I have updated the script to use argparse
for argument parsing and subprocess.run
for system calls. The --reason
argument is now validated and used in the script. Here is the updated version:
#!/usr/bin/python3
import argparse
import subprocess
def disable_puppet(reason: str) -> None:
subprocess.run(['logsalmsg', f'Disabling puppet for {reason}'], check=True)
subprocess.run(['sudo', 'puppet', 'agent', '-tv'], check=True)
subprocess.run(['sudo', 'puppet', 'agent', '--disable', reason], check=True)
def main():
parser = argparse.ArgumentParser(description='Disable Puppet agent with a reason.')
parser.add_argument('--reason', required=True, help='Reason for disabling Puppet')
args = parser.parse_args()
disable_puppet(args.reason)
if __name__ == '__main__':
main()
This script now uses argparse
to handle the --reason
argument and subprocess.run
for executing system commands securely.
New Learnings Added
User: RhinosF1
PR: miraheze/python-functions#4
File: miraheze/puppet/disable_puppet.py:8-10
Timestamp: 2024-05-18T16:56:10.426Z
Learning: In the `disable_puppet.py` script, use `argparse` for argument parsing and `subprocess.run` for system calls to improve security and error handling.
Note: Learnings are effective only in the context of specific code segments. To apply general coding guidelines, please configure review instructions.
def get_valid_extensions(versions: list[str]) -> list[str]: | ||
valid_extensions = [] | ||
for version in versions: | ||
extensions_path = f'/srv/mediawiki-staging/{version}/extensions/' | ||
with os.scandir(extensions_path) as extensions: | ||
valid_extensions += [extension.name for extension in extensions if os.path.isdir(os.path.join(extension.path, '.git'))] | ||
return sorted(valid_extensions) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Optimise the directory scanning in get_valid_extensions
.
Using os.scandir()
is efficient, but you might consider filtering directories directly in the comprehension to avoid unnecessary list operations.
- valid_extensions += [extension.name for extension in extensions if os.path.isdir(os.path.join(extension.path, '.git'))]
+ valid_extensions.extend(extension.name for extension in extensions if extension.is_dir() and '.git' in os.listdir(extension.path))
Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.
def get_valid_extensions(versions: list[str]) -> list[str]: | |
valid_extensions = [] | |
for version in versions: | |
extensions_path = f'/srv/mediawiki-staging/{version}/extensions/' | |
with os.scandir(extensions_path) as extensions: | |
valid_extensions += [extension.name for extension in extensions if os.path.isdir(os.path.join(extension.path, '.git'))] | |
return sorted(valid_extensions) | |
def get_valid_extensions(versions: list[str]) -> list[str]: | |
valid_extensions = [] | |
for version in versions: | |
extensions_path = f'/srv/mediawiki-staging/{version}/extensions/' | |
with os.scandir(extensions_path) as extensions: | |
valid_extensions.extend(extension.name for extension in extensions if extension.is_dir() and '.git' in os.listdir(extension.path)) | |
return sorted(valid_extensions) |
def get_change_tag_map() -> dict[re.Pattern, str]: | ||
build_regex = re.compile(r'^.*?(\.github/.*?|\.phan/.*?|tests/.*?|composer(\.json|\.lock)|package(-lock)?\.json|yarn\.lock|(\.phpcs|\.stylelintrc|\.eslintrc|\.prettierrc|\.stylelintignore|\.eslintignore|\.prettierignore|tsconfig)\.json|\.nvmrc|\.svgo\.config\.js|Gruntfile\.js|bundlesize\.config\.json|jsdoc\.json)$') | ||
codechange_regex = re.compile( | ||
rf'(?!.*{build_regex.pattern})' | ||
r'^.*?(\.(php|js|css|less|scss|vue|lua|mustache|d\.ts)|extension(-repo|-client)?\.json|skin\.json)$', | ||
) | ||
schema_regex = re.compile( | ||
rf'(?!.*{build_regex.pattern})' | ||
r'^.*?\.sql$', | ||
) | ||
i18n_regex = re.compile(r'^.*?i18n/.*?\.json$') | ||
return { | ||
codechange_regex: 'code change', | ||
schema_regex: 'schema change', | ||
build_regex: 'build', | ||
i18n_regex: 'i18n', | ||
} |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Refactor the regex compilation in get_change_tag_map
for efficiency.
Compiling regex patterns every time the function is called can be inefficient. Consider moving the regex compilation to the global scope or using a caching mechanism.
- build_regex = re.compile(r'^.*?(\.github/.*?|\.phan/.*?|tests/.*?|composer(\.json|\.lock)|package(-lock)?\.json|yarn\.lock|(\.phpcs|\.stylelintrc|\.eslintrc|\.prettierrc|\.stylelintignore|\.eslintignore|\.prettierignore|tsconfig)\.json|\.nvmrc|\.svgo\.config\.js|Gruntfile\.js|bundlesize\.config\.json|jsdoc\.json)$')
+ global build_regex
+ build_regex = re.compile(r'^.*?(\.github/.*?|\.phan/.*?|tests/.*?|composer(\.json|\.lock)|package(-lock)?\.json|yarn\.lock|(\.phpcs|\.stylelintrc|\.eslintrc|\.prettierrc|\.stylelintignore|\.eslintignore|\.prettierignore|tsconfig)\.json|\.nvmrc|\.svgo\.config\.js|Gruntfile\.js|bundlesize\.config\.json|jsdoc\.json)$')
Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.
def get_change_tag_map() -> dict[re.Pattern, str]: | |
build_regex = re.compile(r'^.*?(\.github/.*?|\.phan/.*?|tests/.*?|composer(\.json|\.lock)|package(-lock)?\.json|yarn\.lock|(\.phpcs|\.stylelintrc|\.eslintrc|\.prettierrc|\.stylelintignore|\.eslintignore|\.prettierignore|tsconfig)\.json|\.nvmrc|\.svgo\.config\.js|Gruntfile\.js|bundlesize\.config\.json|jsdoc\.json)$') | |
codechange_regex = re.compile( | |
rf'(?!.*{build_regex.pattern})' | |
r'^.*?(\.(php|js|css|less|scss|vue|lua|mustache|d\.ts)|extension(-repo|-client)?\.json|skin\.json)$', | |
) | |
schema_regex = re.compile( | |
rf'(?!.*{build_regex.pattern})' | |
r'^.*?\.sql$', | |
) | |
i18n_regex = re.compile(r'^.*?i18n/.*?\.json$') | |
return { | |
codechange_regex: 'code change', | |
schema_regex: 'schema change', | |
build_regex: 'build', | |
i18n_regex: 'i18n', | |
} | |
global build_regex | |
build_regex = re.compile(r'^.*?(\.github/.*?|\.phan/.*?|tests/.*?|composer(\.json|\.lock)|package(-lock)?\.json|yarn\.lock|(\.phpcs|\.stylelintrc|\.eslintrc|\.prettierrc|\.stylelintignore|\.eslintignore|\.prettierignore|tsconfig)\.json|\.nvmrc|\.svgo\.config\.js|Gruntfile\.js|bundlesize\.config\.json|jsdoc\.json)$') | |
def get_change_tag_map() -> dict[re.Pattern, str]: | |
codechange_regex = re.compile( | |
rf'(?!.*{build_regex.pattern})' | |
r'^.*?(\.(php|js|css|less|scss|vue|lua|mustache|d\.ts)|extension(-repo|-client)?\.json|skin\.json)$', | |
) | |
schema_regex = re.compile( | |
rf'(?!.*{build_regex.pattern})' | |
r'^.*?\.sql$', | |
) | |
i18n_regex = re.compile(r'^.*?i18n/.*?\.json$') | |
return { | |
codechange_regex: 'code change', | |
schema_regex: 'schema change', | |
build_regex: 'build', | |
i18n_regex: 'i18n', | |
} |
def check_up(nolog: bool, Debug: str | None = None, Host: str | None = None, domain: str = 'meta.miraheze.org', verify: bool = True, force: bool = False, port: int = 443) -> bool: | ||
if verify is False: | ||
os.environ['PYTHONWARNINGS'] = 'ignore:Unverified HTTPS request' | ||
if not Debug and not Host: | ||
raise Exception('Host or Debug must be specified') | ||
|
||
headers = {} | ||
if Debug: | ||
server = f'{Debug}.wikitide.net' | ||
headers['X-WikiTide-Debug'] = server | ||
location = f'{domain}@{server}' | ||
|
||
debug_access_key = os.getenv('DEBUG_ACCESS_KEY') | ||
|
||
# Check if DEBUG_ACCESS_KEY is set and add it to headers | ||
if debug_access_key: | ||
headers['X-WikiTide-Debug-Access-Key'] = debug_access_key | ||
else: | ||
os.environ['NO_PROXY'] = 'localhost' | ||
domain = 'localhost' | ||
headers['host'] = f'{Host}' | ||
location = f'{Host}@{domain}' | ||
up = False | ||
if port == 443: | ||
proto = 'https://' | ||
else: | ||
proto = 'http://' | ||
req = requests.get(f'{proto}{domain}:{port}/w/api.php?action=query&meta=siteinfo&formatversion=2&format=json', headers=headers, verify=verify) | ||
if req.status_code == 200 and 'miraheze' in req.text and (Debug is None or Debug in req.headers['X-Served-By']): | ||
up = True | ||
if not up: | ||
print(f'Status: {req.status_code}') | ||
print(f'Text: {"miraheze" in req.text} \n {req.text}') | ||
if 'X-Served-By' not in req.headers: | ||
req.headers['X-Served-By'] = 'None' | ||
print(f'Debug: {(Debug is None or Debug in req.headers["X-Served-By"])}') | ||
if force: | ||
print(f'Ignoring canary check error on {location} due to --force') | ||
else: | ||
print(f'Canary check failed for {location}. Aborting... - use --force to proceed') | ||
message = f'/usr/local/bin/logsalmsg DEPLOY ABORTED: Canary check failed for {location}' | ||
if nolog: | ||
print(message) | ||
else: | ||
os.system(message) | ||
sys.exit(3) | ||
return up |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Review the error handling and user feedback in check_up
.
The function check_up
could improve its error handling and user feedback. The current implementation might confuse users with its mixed use of print statements and system exits. Consider using a more structured approach to logging and error reporting.
- print(f'Canary check failed for {location}. Aborting... - use --force to proceed')
+ logger.error(f'Canary check failed for {location}. Aborting... - use --force to proceed')
Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.
def check_up(nolog: bool, Debug: str | None = None, Host: str | None = None, domain: str = 'meta.miraheze.org', verify: bool = True, force: bool = False, port: int = 443) -> bool: | |
if verify is False: | |
os.environ['PYTHONWARNINGS'] = 'ignore:Unverified HTTPS request' | |
if not Debug and not Host: | |
raise Exception('Host or Debug must be specified') | |
headers = {} | |
if Debug: | |
server = f'{Debug}.wikitide.net' | |
headers['X-WikiTide-Debug'] = server | |
location = f'{domain}@{server}' | |
debug_access_key = os.getenv('DEBUG_ACCESS_KEY') | |
# Check if DEBUG_ACCESS_KEY is set and add it to headers | |
if debug_access_key: | |
headers['X-WikiTide-Debug-Access-Key'] = debug_access_key | |
else: | |
os.environ['NO_PROXY'] = 'localhost' | |
domain = 'localhost' | |
headers['host'] = f'{Host}' | |
location = f'{Host}@{domain}' | |
up = False | |
if port == 443: | |
proto = 'https://' | |
else: | |
proto = 'http://' | |
req = requests.get(f'{proto}{domain}:{port}/w/api.php?action=query&meta=siteinfo&formatversion=2&format=json', headers=headers, verify=verify) | |
if req.status_code == 200 and 'miraheze' in req.text and (Debug is None or Debug in req.headers['X-Served-By']): | |
up = True | |
if not up: | |
print(f'Status: {req.status_code}') | |
print(f'Text: {"miraheze" in req.text} \n {req.text}') | |
if 'X-Served-By' not in req.headers: | |
req.headers['X-Served-By'] = 'None' | |
print(f'Debug: {(Debug is None or Debug in req.headers["X-Served-By"])}') | |
if force: | |
print(f'Ignoring canary check error on {location} due to --force') | |
else: | |
print(f'Canary check failed for {location}. Aborting... - use --force to proceed') | |
message = f'/usr/local/bin/logsalmsg DEPLOY ABORTED: Canary check failed for {location}' | |
if nolog: | |
print(message) | |
else: | |
os.system(message) | |
sys.exit(3) | |
return up | |
def check_up(nolog: bool, Debug: str | None = None, Host: str | None = None, domain: str = 'meta.miraheze.org', verify: bool = True, force: bool = False, port: int = 443) -> bool: | |
if verify is False: | |
os.environ['PYTHONWARNINGS'] = 'ignore:Unverified HTTPS request' | |
if not Debug and not Host: | |
raise Exception('Host or Debug must be specified') | |
headers = {} | |
if Debug: | |
server = f'{Debug}.wikitide.net' | |
headers['X-WikiTide-Debug'] = server | |
location = f'{domain}@{server}' | |
debug_access_key = os.getenv('DEBUG_ACCESS_KEY') | |
# Check if DEBUG_ACCESS_KEY is set and add it to headers | |
if debug_access_key: | |
headers['X-WikiTide-Debug-Access-Key'] = debug_access_key | |
else: | |
os.environ['NO_PROXY'] = 'localhost' | |
domain = 'localhost' | |
headers['host'] = f'{Host}' | |
location = f'{Host}@{domain}' | |
up = False | |
if port == 443: | |
proto = 'https://' | |
else: | |
proto = 'http://' | |
req = requests.get(f'{proto}{domain}:{port}/w/api.php?action=query&meta=siteinfo&formatversion=2&format=json', headers=headers, verify=verify) | |
if req.status_code == 200 and 'miraheze' in req.text and (Debug is None or Debug in req.headers['X-Served-By']): | |
up = True | |
if not up: | |
print(f'Status: {req.status_code}') | |
print(f'Text: {"miraheze" in req.text} \n {req.text}') | |
if 'X-Served-By' not in req.headers: | |
req.headers['X-Served-By'] = 'None' | |
print(f'Debug: {(Debug is None or Debug in req.headers["X-Served-By"])}') | |
if force: | |
print(f'Ignoring canary check error on {location} due to --force') | |
else: | |
logger.error(f'Canary check failed for {location}. Aborting... - use --force to proceed') | |
message = f'/usr/local/bin/logsalmsg DEPLOY ABORTED: Canary check failed for {location}' | |
if nolog: | |
print(message) | |
else: | |
os.system(message) | |
sys.exit(3) | |
return up |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
need to import logger too
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
@RhinosF1, thank you for pointing that out. Here is the updated suggestion with the import statement for the logger and basic configuration:
import logging
# Set up basic configuration for the logger
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
def check_up(nolog: bool, Debug: str | None = None, Host: str | None = None, domain: str = 'meta.miraheze.org', verify: bool = True, force: bool = False, port: int = 443) -> bool:
if verify is False:
os.environ['PYTHONWARNINGS'] = 'ignore:Unverified HTTPS request'
if not Debug and not Host:
raise Exception('Host or Debug must be specified')
headers = {}
if Debug:
server = f'{Debug}.wikitide.net'
headers['X-WikiTide-Debug'] = server
location = f'{domain}@{server}'
debug_access_key = os.getenv('DEBUG_ACCESS_KEY')
# Check if DEBUG_ACCESS_KEY is set and add it to headers
if debug_access_key:
headers['X-WikiTide-Debug-Access-Key'] = debug_access_key
else:
os.environ['NO_PROXY'] = 'localhost'
domain = 'localhost'
headers['host'] = f'{Host}'
location = f'{Host}@{domain}'
up = False
if port == 443:
proto = 'https://'
else:
proto = 'http://'
req = requests.get(f'{proto}{domain}:{port}/w/api.php?action=query&meta=siteinfo&formatversion=2&format=json', headers=headers, verify=verify)
if req.status_code == 200 and 'miraheze' in req.text and (Debug is None or Debug in req.headers['X-Served-By']):
up = True
if not up:
print(f'Status: {req.status_code}')
print(f'Text: {"miraheze" in req.text} \n {req.text}')
if 'X-Served-By' not in req.headers:
req.headers['X-Served-By'] = 'None'
print(f'Debug: {(Debug is None or Debug in req.headers["X-Served-By"])}')
if force:
print(f'Ignoring canary check error on {location} due to --force')
else:
logger.error(f'Canary check failed for {location}. Aborting... - use --force to proceed')
message = f'/usr/local/bin/logsalmsg DEPLOY ABORTED: Canary check failed for {location}'
if nolog:
print(message)
else:
os.system(message)
sys.exit(3)
return up
This should ensure that the logging is properly handled. Let me know if you need any further adjustments!
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
imports go at top of file
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
@RhinosF1, thank you for the feedback. Here is the updated suggestion with the import statements moved to the top of the file:
import logging
import os
import requests
import socket
import sys
import time
# Set up basic configuration for the logger
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
def check_up(nolog: bool, Debug: str | None = None, Host: str | None = None, domain: str = 'meta.miraheze.org', verify: bool = True, force: bool = False, port: int = 443) -> bool:
if verify is False:
os.environ['PYTHONWARNINGS'] = 'ignore:Unverified HTTPS request'
if not Debug and not Host:
raise Exception('Host or Debug must be specified')
headers = {}
if Debug:
server = f'{Debug}.wikitide.net'
headers['X-WikiTide-Debug'] = server
location = f'{domain}@{server}'
debug_access_key = os.getenv('DEBUG_ACCESS_KEY')
# Check if DEBUG_ACCESS_KEY is set and add it to headers
if debug_access_key:
headers['X-WikiTide-Debug-Access-Key'] = debug_access_key
else:
os.environ['NO_PROXY'] = 'localhost'
domain = 'localhost'
headers['host'] = f'{Host}'
location = f'{Host}@{domain}'
up = False
if port == 443:
proto = 'https://'
else:
proto = 'http://'
req = requests.get(f'{proto}{domain}:{port}/w/api.php?action=query&meta=siteinfo&formatversion=2&format=json', headers=headers, verify=verify)
if req.status_code == 200 and 'miraheze' in req.text and (Debug is None or Debug in req.headers['X-Served-By']):
up = True
if not up:
print(f'Status: {req.status_code}')
print(f'Text: {"miraheze" in req.text} \n {req.text}')
if 'X-Served-By' not in req.headers:
req.headers['X-Served-By'] = 'None'
print(f'Debug: {(Debug is None or Debug in req.headers["X-Served-By"])}')
if force:
print(f'Ignoring canary check error on {location} due to --force')
else:
logger.error(f'Canary check failed for {location}. Aborting... - use --force to proceed')
message = f'/usr/local/bin/logsalmsg DEPLOY ABORTED: Canary check failed for {location}'
if nolog:
print(message)
else:
os.system(message)
sys.exit(3)
return up
This should ensure that the imports are correctly placed at the top of the file. Let me know if you need any further adjustments!
New Learnings Added
User: RhinosF1
PR: miraheze/python-functions#4
File: miraheze/mediawiki/mwdeploy.py:178-224
Timestamp: 2024-05-18T17:25:44.011Z
Learning: When suggesting changes to import statements, ensure they are placed at the top of the file as per best practices.
Note: Learnings are effective only in the context of specific code segments. To apply general coding guidelines, please configure review instructions.
def _construct_rsync_command(time: bool | str, dest: str, recursive: bool = True, local: bool = True, location: str | None = None, server: str | None = None) -> str: | ||
if time: | ||
params = '--inplace' | ||
else: | ||
params = '--update' | ||
if recursive: | ||
params = params + ' -r --delete' | ||
if local: | ||
if location is None: | ||
raise Exception('Location must be specified for local rsync.') | ||
return f'sudo -u {DEPLOYUSER} rsync {params} --exclude=".*" {location} {dest}' | ||
if location is None: | ||
location = dest | ||
if location == dest and server: # ignore location if not specified, if given must equal dest. | ||
return f'sudo -u {DEPLOYUSER} rsync {params} -e "ssh -i /srv/mediawiki-staging/deploykey" {dest} {DEPLOYUSER}@{server}.wikitide.net:{dest}' | ||
# a return None here would be dangerous - except and ignore R503 as return after Exception is not reachable | ||
raise Exception(f'Error constructing command. Either server was missing or {location} != {dest}') | ||
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Simplify the construction of rsync commands in _construct_rsync_command
.
The function _construct_rsync_command
is quite complex and hard to follow. Consider refactoring it to improve readability and maintainability.
- if local:
- if location is None:
- raise Exception('Location must be specified for local rsync.')
- return f'sudo -u {DEPLOYUSER} rsync {params} --exclude=".*" {location} {dest}'
- if location is None:
- location = dest
- if location == dest and server: # ignore location if not specified, if given must equal dest.
- return f'sudo -u {DEPLOYUSER} rsync {params} -e "ssh -i /srv/mediawiki-staging/deploykey" {dest} {DEPLOYUSER}@{server}.wikitide.net:{dest}'
+ simplified_command = f'sudo -u {DEPLOYUSER} rsync {params} --exclude=".*" '
+ if local:
+ if location is None:
+ raise Exception('Location must be specified for local rsync.')
+ return simplified_command + f'{location} {dest}'
+ if location is None:
+ location = dest
+ if location == dest and server:
+ return simplified_command + f'-e "ssh -i /srv/mediawiki-staging/deploykey" {dest} {DEPLOYUSER}@{server}.wikitide.net:{dest}'
Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.
def _construct_rsync_command(time: bool | str, dest: str, recursive: bool = True, local: bool = True, location: str | None = None, server: str | None = None) -> str: | |
if time: | |
params = '--inplace' | |
else: | |
params = '--update' | |
if recursive: | |
params = params + ' -r --delete' | |
if local: | |
if location is None: | |
raise Exception('Location must be specified for local rsync.') | |
return f'sudo -u {DEPLOYUSER} rsync {params} --exclude=".*" {location} {dest}' | |
if location is None: | |
location = dest | |
if location == dest and server: # ignore location if not specified, if given must equal dest. | |
return f'sudo -u {DEPLOYUSER} rsync {params} -e "ssh -i /srv/mediawiki-staging/deploykey" {dest} {DEPLOYUSER}@{server}.wikitide.net:{dest}' | |
# a return None here would be dangerous - except and ignore R503 as return after Exception is not reachable | |
raise Exception(f'Error constructing command. Either server was missing or {location} != {dest}') | |
def _construct_rsync_command(time: bool | str, dest: str, recursive: bool = True, local: bool = True, location: str | None = None, server: str | None = None) -> str: | |
if time: | |
params = '--inplace' | |
else: | |
params = '--update' | |
if recursive: | |
params = params + ' -r --delete' | |
simplified_command = f'sudo -u {DEPLOYUSER} rsync {params} --exclude=".*" ' | |
if local: | |
if location is None: | |
raise Exception('Location must be specified for local rsync.') | |
return simplified_command + f'{location} {dest}' | |
if location is None: | |
location = dest | |
if location == dest and server: | |
return simplified_command + f'-e "ssh -i /srv/mediawiki-staging/deploykey" {dest} {DEPLOYUSER}@{server}.wikitide.net:{dest}' | |
raise Exception(f'Error constructing command. Either server was missing or {location} != {dest}') |
def run(args: argparse.Namespace, start: float) -> None: # pragma: no cover | ||
loginfo = {} | ||
for arg in vars(args).items(): | ||
if arg[1] is not None and arg[1] is not False: | ||
if isinstance(arg[1], list) and len(arg[1]) == 1: | ||
loginfo[arg[0]] = arg[1][0] | ||
else: | ||
loginfo[arg[0]] = arg[1] | ||
|
||
if args.upgrade_world and not args.reset_world: | ||
args.world = True | ||
args.pull = 'world' | ||
args.l10n = True | ||
args.ignore_time = True | ||
args.extension_list = True | ||
args.upgrade_vendor = True | ||
args.upgrade_extensions = get_valid_extensions(args.versions) | ||
args.upgrade_skins = get_valid_skins(args.versions) | ||
|
||
if len(args.servers) > 1 and args.servers == get_environment_info()['servers']: | ||
loginfo['servers'] = 'all' | ||
|
||
use_version = args.world or args.l10n or args.extension_list or args.reset_world or args.upgrade_extensions or args.upgrade_skins or args.upgrade_vendor | ||
|
||
if args.versions: | ||
if args.upgrade_extensions == get_valid_extensions(args.versions): | ||
loginfo['upgrade_extensions'] = 'all' | ||
|
||
if args.upgrade_skins == get_valid_skins(args.versions): | ||
loginfo['upgrade_skins'] = 'all' | ||
|
||
if args.upgrade_pack: | ||
del loginfo['upgrade_extensions'] | ||
del loginfo['upgrade_skins'] | ||
|
||
if not use_version: | ||
del loginfo['versions'] | ||
|
||
synced = loginfo['servers'] | ||
del loginfo['servers'] | ||
|
||
text = f'starting deploy of "{str(loginfo)}" to {synced}' | ||
if not args.nolog: | ||
os.system(f'/usr/local/bin/logsalmsg {text}') | ||
else: | ||
print(text) | ||
|
||
exitcodes = run_process(args=args) | ||
failed = non_zero_code(ec=exitcodes, leave=False) | ||
|
||
fintext = f'finished deploy of "{str(loginfo)}" to {synced}' | ||
|
||
if failed: | ||
fintext += f' - FAIL: {exitcodes}' | ||
if not args.nolog: | ||
os.system(f'/usr/local/bin/logsalmsg {fintext}') | ||
else: | ||
print(fintext) | ||
sys.exit(1) | ||
|
||
if use_version: | ||
for version in args.versions: | ||
exitcodes = run_process(args=args, version=version) | ||
failed = non_zero_code(ec=exitcodes, leave=False) | ||
|
||
if failed: | ||
fintext += f' - FAIL: {exitcodes}' | ||
if not args.nolog: | ||
os.system(f'/usr/local/bin/logsalmsg {fintext}') | ||
else: | ||
print(fintext) | ||
sys.exit(1) | ||
|
||
fintext += ' - SUCCESS' | ||
fintext += f' in {str(int(time.time() - start))}s' | ||
if not args.nolog: | ||
os.system(f'/usr/local/bin/logsalmsg {fintext}') | ||
else: | ||
print(fintext) | ||
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Ensure comprehensive logging and error handling in run
.
The function run
handles a complex deployment process but lacks comprehensive error handling and logging. Consider implementing a more robust logging system and structured error handling to improve reliability and maintainability.
- print(f'finished deploy of "{str(loginfo)}" to {synced}')
+ logger.info(f'finished deploy of "{str(loginfo)}" to {synced}')
- sys.exit(1)
+ raise DeploymentError('Deployment failed due to non-zero exit codes')
Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.
def run(args: argparse.Namespace, start: float) -> None: # pragma: no cover | |
loginfo = {} | |
for arg in vars(args).items(): | |
if arg[1] is not None and arg[1] is not False: | |
if isinstance(arg[1], list) and len(arg[1]) == 1: | |
loginfo[arg[0]] = arg[1][0] | |
else: | |
loginfo[arg[0]] = arg[1] | |
if args.upgrade_world and not args.reset_world: | |
args.world = True | |
args.pull = 'world' | |
args.l10n = True | |
args.ignore_time = True | |
args.extension_list = True | |
args.upgrade_vendor = True | |
args.upgrade_extensions = get_valid_extensions(args.versions) | |
args.upgrade_skins = get_valid_skins(args.versions) | |
if len(args.servers) > 1 and args.servers == get_environment_info()['servers']: | |
loginfo['servers'] = 'all' | |
use_version = args.world or args.l10n or args.extension_list or args.reset_world or args.upgrade_extensions or args.upgrade_skins or args.upgrade_vendor | |
if args.versions: | |
if args.upgrade_extensions == get_valid_extensions(args.versions): | |
loginfo['upgrade_extensions'] = 'all' | |
if args.upgrade_skins == get_valid_skins(args.versions): | |
loginfo['upgrade_skins'] = 'all' | |
if args.upgrade_pack: | |
del loginfo['upgrade_extensions'] | |
del loginfo['upgrade_skins'] | |
if not use_version: | |
del loginfo['versions'] | |
synced = loginfo['servers'] | |
del loginfo['servers'] | |
text = f'starting deploy of "{str(loginfo)}" to {synced}' | |
if not args.nolog: | |
os.system(f'/usr/local/bin/logsalmsg {text}') | |
else: | |
print(text) | |
exitcodes = run_process(args=args) | |
failed = non_zero_code(ec=exitcodes, leave=False) | |
fintext = f'finished deploy of "{str(loginfo)}" to {synced}' | |
if failed: | |
fintext += f' - FAIL: {exitcodes}' | |
if not args.nolog: | |
os.system(f'/usr/local/bin/logsalmsg {fintext}') | |
else: | |
print(fintext) | |
sys.exit(1) | |
if use_version: | |
for version in args.versions: | |
exitcodes = run_process(args=args, version=version) | |
failed = non_zero_code(ec=exitcodes, leave=False) | |
if failed: | |
fintext += f' - FAIL: {exitcodes}' | |
if not args.nolog: | |
os.system(f'/usr/local/bin/logsalmsg {fintext}') | |
else: | |
print(fintext) | |
sys.exit(1) | |
fintext += ' - SUCCESS' | |
fintext += f' in {str(int(time.time() - start))}s' | |
if not args.nolog: | |
os.system(f'/usr/local/bin/logsalmsg {fintext}') | |
else: | |
print(fintext) | |
def run(args: argparse.Namespace, start: float) -> None: # pragma: no cover | |
loginfo = {} | |
for arg in vars(args).items(): | |
if arg[1] is not None and arg[1] is not False: | |
if isinstance(arg[1], list) and len(arg[1]) == 1: | |
loginfo[arg[0]] = arg[1][0] | |
else: | |
loginfo[arg[0]] = arg[1] | |
if args.upgrade_world and not args.reset_world: | |
args.world = True | |
args.pull = 'world' | |
args.l10n = True | |
args.ignore_time = True | |
args.extension_list = True | |
args.upgrade_vendor = True | |
args.upgrade_extensions = get_valid_extensions(args.versions) | |
args.upgrade_skins = get_valid_skins(args.versions) | |
if len(args.servers) > 1 and args.servers == get_environment_info()['servers']: | |
loginfo['servers'] = 'all' | |
use_version = args.world or args.l10n or args.extension_list or args.reset_world or args.upgrade_extensions or args.upgrade_skins or args.upgrade_vendor | |
if args.versions: | |
if args.upgrade_extensions == get_valid_extensions(args.versions): | |
loginfo['upgrade_extensions'] = 'all' | |
if args.upgrade_skins == get_valid_skins(args.versions): | |
loginfo['upgrade_skins'] = 'all' | |
if args.upgrade_pack: | |
del loginfo['upgrade_extensions'] | |
del loginfo['upgrade_skins'] | |
if not use_version: | |
del loginfo['versions'] | |
synced = loginfo['servers'] | |
del loginfo['servers'] | |
text = f'starting deploy of "{str(loginfo)}" to {synced}' | |
if not args.nolog: | |
os.system(f'/usr/local/bin/logsalmsg {text}') | |
else: | |
print(text) | |
exitcodes = run_process(args=args) | |
failed = non_zero_code(ec=exitcodes, leave=False) | |
fintext = f'finished deploy of "{str(loginfo)}" to {synced}' | |
if failed: | |
fintext += f' - FAIL: {exitcodes}' | |
if not args.nolog: | |
os.system(f'/usr/local/bin/logsalmsg {fintext}') | |
else: | |
print(fintext) | |
raise DeploymentError('Deployment failed due to non-zero exit codes') | |
if use_version: | |
for version in args.versions: | |
exitcodes = run_process(args=args, version=version) | |
failed = non_zero_code(ec=exitcodes, leave=False) | |
if failed: | |
fintext += f' - FAIL: {exitcodes}' | |
if not args.nolog: | |
os.system(f'/usr/local/bin/logsalmsg {fintext}') | |
else: | |
print(fintext) | |
raise DeploymentError('Deployment failed due to non-zero exit codes') | |
fintext += ' - SUCCESS' | |
fintext += f' in {str(int(time.time() - start))}s' | |
if not args.nolog: | |
os.system(f'/usr/local/bin/logsalmsg {fintext}') | |
else: | |
print(fintext) | |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
no DeploymentError
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
@RhinosF1, understood. Here is the revised error handling for the run
function without using DeploymentError
:
def run(args: argparse.Namespace, start: float) -> None: # pragma: no cover
loginfo = {}
for arg in vars(args).items():
if arg[1] is not None and arg[1] is not False:
if isinstance(arg[1], list) and len(arg[1]) == 1:
loginfo[arg[0]] = arg[1][0]
else:
loginfo[arg[0]] = arg[1]
if args.upgrade_world and not args.reset_world:
args.world = True
args.pull = 'world'
args.l10n = True
args.ignore_time = True
args.extension_list = True
args.upgrade_vendor = True
args.upgrade_extensions = get_valid_extensions(args.versions)
args.upgrade_skins = get_valid_skins(args.versions)
if len(args.servers) > 1 and args.servers == get_environment_info()['servers']:
loginfo['servers'] = 'all'
use_version = args.world or args.l10n or args.extension_list or args.reset_world or args.upgrade_extensions or args.upgrade_skins or args.upgrade_vendor
if args.versions:
if args.upgrade_extensions == get_valid_extensions(args.versions):
loginfo['upgrade_extensions'] = 'all'
if args.upgrade_skins == get_valid_skins(args.versions):
loginfo['upgrade_skins'] = 'all'
if args.upgrade_pack:
del loginfo['upgrade_extensions']
del loginfo['upgrade_skins']
if not use_version:
del loginfo['versions']
synced = loginfo['servers']
del loginfo['servers']
text = f'starting deploy of "{str(loginfo)}" to {synced}'
if not args.nolog:
os.system(f'/usr/local/bin/logsalmsg {text}')
else:
print(text)
exitcodes = run_process(args=args)
failed = non_zero_code(ec=exitcodes, leave=False)
fintext = f'finished deploy of "{str(loginfo)}" to {synced}'
if failed:
fintext += f' - FAIL: {exitcodes}'
if not args.nolog:
os.system(f'/usr/local/bin/logsalmsg {fintext}')
else:
print(fintext)
sys.exit(1)
if use_version:
for version in args.versions:
exitcodes = run_process(args=args, version=version)
failed = non_zero_code(ec=exitcodes, leave=False)
if failed:
fintext += f' - FAIL: {exitcodes}'
if not args.nolog:
os.system(f'/usr/local/bin/logsalmsg {fintext}')
else:
print(fintext)
sys.exit(1)
fintext += ' - SUCCESS'
fintext += f' in {str(int(time.time() - start))}s'
if not args.nolog:
os.system(f'/usr/local/bin/logsalmsg {fintext}')
else:
print(fintext)
This version maintains comprehensive logging and uses sys.exit(1)
for error handling instead of raising an exception. Let me know if any further adjustments are needed.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
This reverts commit 3f3408b.
Summary by CodeRabbit
New Features
Bug Fixes
Tests
Documentation
setup.cfg
with additional classifiers for better project categorisation.setup.py
.