Update qa-scenario-2a (SOC-10460) #3711

Merged: 1 commit, Nov 29, 2019
17 changes: 16 additions & 1 deletion jenkins/ci.suse.de/cloud-mkphyscloud-qa-scenario-2a.yaml
@@ -152,7 +152,10 @@
- string:
name: commands
default: addupdaterepo prepareinstallcrowbar runupdate bootstrapcrowbar installcrowbar allocate waitcloud setup_aliases
description: All the steps that needs to be completed to have cloud installed
description: All the steps that need to be completed to have the cloud installed. When deploying with SSL, add "install_ca_certificates" after the "setup_aliases" command
choices:
- addupdaterepo prepareinstallcrowbar runupdate bootstrapcrowbar installcrowbar allocate waitcloud setup_aliases
- addupdaterepo prepareinstallcrowbar runupdate bootstrapcrowbar installcrowbar allocate waitcloud setup_aliases install_ca_certificates
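For a rough sense of how this parameter is consumed (an assumption; the actual runner is not part of this diff), the chosen value is a space-separated step list that a driver script can walk in order:

#!/usr/bin/env bash
# Hypothetical sketch of a per-step runner for the "commands" parameter.
# "run_step" is a placeholder name, not the real mkphyscloud entry point.
set -e
commands="addupdaterepo prepareinstallcrowbar runupdate bootstrapcrowbar installcrowbar allocate waitcloud setup_aliases install_ca_certificates"
for step in $commands; do
    echo ">>> running step: $step"
    # run_step "$step"
done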

- string:
name: want_test_updates
@@ -220,7 +223,19 @@

ret=0

# copy CA files
if [[ $ssl_type = "ssl" ]]; then
ssh root@crowbar$hw_number "mkdir ssl-certs"
scp -r /home/jenkins/ssl-certs/qa$hw_number root@crowbar$hw_number:/root/ssl-certs/
fi

ssh root@$admin "
# update certificate file paths
if [[ $ssl_type = "ssl" ]]; then
sed -i -e "s,##certfile##,/etc/cloud/ssl/qa$hw_number/qa$hw_number.cloud.suse.de.crt," scenario.yml
sed -i -e "s,##keyfile##,/etc/cloud/ssl/qa$hw_number/qa$hw_number.cloud.suse.de.pem," scenario.yml
sed -i -e "s,##cafile##,/etc/cloud/ssl/qa$hw_number/SUSE_CA_suse.de.chain.crt," scenario.yml
fi
export cloud=$cloud ;
export hw_number=$hw_number ;
export sbd_ip=$sbd_ip ;
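The sed calls above fill in the ##certfile##, ##keyfile## and ##cafile## placeholder tokens in scenario.yml. A standalone sketch of the same substitution, assuming hw_number=5 and a minimal fragment layout (the real scenario.yml context is not shown in this diff):

hw_number=5
printf 'certfile: ##certfile##\nkeyfile: ##keyfile##\ncafile: ##cafile##\n' > /tmp/scenario-frag.yml
sed -i \
    -e "s,##certfile##,/etc/cloud/ssl/qa$hw_number/qa$hw_number.cloud.suse.de.crt," \
    -e "s,##keyfile##,/etc/cloud/ssl/qa$hw_number/qa$hw_number.cloud.suse.de.pem," \
    -e "s,##cafile##,/etc/cloud/ssl/qa$hw_number/SUSE_CA_suse.de.chain.crt," \
    /tmp/scenario-frag.yml
cat /tmp/scenario-frag.yml
# certfile: /etc/cloud/ssl/qa5/qa5.cloud.suse.de.crt
# keyfile: /etc/cloud/ssl/qa5/qa5.cloud.suse.de.pem
# cafile: /etc/cloud/ssl/qa5/SUSE_CA_suse.de.chain.crt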
85 changes: 31 additions & 54 deletions scripts/scenarios/cloud8/qa/no-ssl/qa-scenario-2a.yaml
@@ -1,5 +1,5 @@
---
# 2a - 8 nodes, HA (SBD 3x2), KVM x 1
# 2a - 7 nodes, HA (SBD 3x2), KVM x 1
proposals:
- barclamp: pacemaker
name: services
@@ -14,20 +14,27 @@ proposals:
"@@controller2@@":
devices:
- "@@sbd_device_services@@"
"@@controller3@@":
devices:
- "@@sbd_device_services@@"
per_node:
nodes:
"@@controller1@@":
params: ''
"@@controller2@@":
params: ''
"@@controller3@@":
params: ''
deployment:
elements:
pacemaker-cluster-member:
- "@@controller1@@"
- "@@controller2@@"
- "@@controller3@@"
hawk-server:
- "@@controller1@@"
- "@@controller2@@"
- "@@controller3@@"

- barclamp: pacemaker
name: data
@@ -42,57 +49,30 @@
"@@data2@@":
devices:
- "@@sbd_device_data@@"
"@@data3@@":
devices:
- "@@sbd_device_data@@"
per_node:
nodes:
"@@data1@@":
params: ''
"@@data2@@":
params: ''
"@@data3@@":
params: ''
deployment:
elements:
pacemaker-cluster-member:
- "@@data1@@"
- "@@data2@@"
- "@@data3@@"
hawk-server:
- "@@data1@@"
- "@@data2@@"

- barclamp: pacemaker
name: network
Member:
why did we lose a dedicated network cluster?

Contributor Author:
We only have enough nodes for 2 clusters (with 3 nodes each). I can remove data and add network back if you think that gives better coverage for the scenario.

Member:
No. Let's leave it like this if there are no more nodes available.

attributes:
stonith:
mode: sbd
sbd:
nodes:
"@@network1@@":
devices:
- "@@sbd_device_network@@"
"@@network2@@":
devices:
- "@@sbd_device_network@@"
per_node:
nodes:
"@@network1@@":
params: ''
"@@network2@@":
params: ''
deployment:
elements:
pacemaker-cluster-member:
- "@@network1@@"
- "@@network2@@"
hawk-server:
- "@@network1@@"
- "@@network2@@"
- "@@data3@@"

- barclamp: database
attributes:
sql_engine: postgresql
ha:
storage:
shared:
device: ##shared_nfs_for_database##
fstype: nfs
deployment:
elements:
database-server:
@@ -151,11 +131,21 @@ proposals:
- barclamp: cinder
attributes:
volumes:
- backend_driver: nfs
backend_name: nfs
nfs:
nfs_shares: ##cinder-storage-shares##
nfs_snapshot: true
- backend_driver: netapp
backend_name: netapp
netapp:
nfs_shares: ''
netapp_vfiler: ''
netapp_volume_list: ''
storage_family: ontap_cluster
storage_protocol: iscsi
vserver: 'cloud-openstack-svm '
netapp_server_hostname: ##netapp_server##
netapp_server_port: 80
netapp_login: admin
netapp_password: ##netapp_password##
netapp_transport_type: http
max_over_subscription_ratio: 20
deployment:
elements:
cinder-controller:
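Once the barclamp is applied, the netapp volume entry above should translate into a NetApp backend section in cinder.conf. A hedged sketch of the expected result, with option names taken from the upstream cinder NetApp driver (the section name and Crowbar's exact rendering are assumptions, and the ## tokens are left unexpanded):

# Write an example of the expected backend stanza to a scratch file.
cat > /tmp/cinder-netapp.conf.example <<'EOF'
[netapp]
volume_backend_name = netapp
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
netapp_vserver = cloud-openstack-svm
netapp_server_hostname = ##netapp_server##
netapp_server_port = 80
netapp_login = admin
netapp_password = ##netapp_password##
netapp_transport_type = http
max_over_subscription_ratio = 20
EOF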
@@ -178,7 +168,7 @@
neutron-server:
- cluster:services
neutron-network:
- cluster:network
- cluster:services

- barclamp: nova
attributes:
@@ -222,19 +212,6 @@
heat-server:
- cluster:services

- barclamp: ceilometer
Contributor:
Why are you removing Ceilometer from the scenario?

Contributor Author (@gosipyan) commented on Sep 23, 2019:
During prechecks, if Monasca is missing, the upgrade won't go any further unless I remove Ceilometer. Since that case is already covered in qa-scenario-8a (which deploys Monasca), I decided to remove it from here. The other option would be to deploy Monasca, but since there aren't enough nodes available, I think that would be complicated.

Contributor:
Ah, ok. I was wondering about that because I created dedicated precheck clean scenarios in #3682, which is a little...starved for reviews... :-)

What do we do about this? Merge the dedicated scenarios, or remove all precheck breakers from the existing ones (in that case there'd be more things to change)? I for one would prefer having dedicated upgrade scenarios in order to continue testing the barclamps that are still supported on Cloud 8 for Cloud 8, rather than disabling them for regular Cloud 8 (i.e. in non-upgrade situations) as well.

Contributor Author:
I wanted to do two things at once: fix the scenarios that are broken in general for SOC8, and make them work for upgrade so we don't need duplicates. I expect the scenarios under cloud8 to run only for upgrade purposes; I don't think we would use them for anything else.

Contributor:
Ok. Then I'll +1 this for now, since I have none of the non-precheck fixes in my pull request. We may need a follow-up to take care of Aodh, Trove, and more Ceilometer occurrences as well though (unless someone has taken care of that in the time since I created #3682; haven't checked).

attributes:
deployment:
elements:
ceilometer-agent:
- "@@compute-kvm@@"
ceilometer-agent-hyperv: []
ceilometer-central:
- cluster:services
ceilometer-server:
- cluster:services
ceilometer-swift-proxy-middleware: []

- barclamp: manila
attributes:
default_share_type: default
58 changes: 16 additions & 42 deletions scripts/scenarios/cloud8/qa/ssl-insecure/qa-scenario-2a.yaml
@@ -1,5 +1,5 @@
---
# 2a - 8 nodes, HA (SBD 3x2), KVM x 1
# 2a - 7 nodes, HA (SBD 3x2), KVM x 1
proposals:
- barclamp: pacemaker
name: services
@@ -14,20 +14,27 @@ proposals:
"@@controller2@@":
devices:
- "@@sbd_device_services@@"
"@@controller3@@":
devices:
- "@@sbd_device_services@@"
per_node:
nodes:
"@@controller1@@":
params: ''
"@@controller2@@":
params: ''
"@@controller3@@":
params: ''
deployment:
elements:
pacemaker-cluster-member:
- "@@controller1@@"
- "@@controller2@@"
- "@@controller3@@"
hawk-server:
- "@@controller1@@"
- "@@controller2@@"
- "@@controller3@@"

- barclamp: pacemaker
name: data
@@ -42,70 +49,37 @@ proposals:
"@@data2@@":
devices:
- "@@sbd_device_data@@"
"@@data3@@":
devices:
- "@@sbd_device_data@@"
per_node:
nodes:
"@@data1@@":
params: ''
"@@data2@@":
params: ''
"@@data3@@":
params: ''
deployment:
elements:
pacemaker-cluster-member:
- "@@data1@@"
- "@@data2@@"
- "@@data3@@"
hawk-server:
- "@@data1@@"
- "@@data2@@"

- barclamp: pacemaker
name: network
attributes:
stonith:
mode: sbd
sbd:
nodes:
"@@network1@@":
devices:
- "@@sbd_device_network@@"
"@@network2@@":
devices:
- "@@sbd_device_network@@"
per_node:
nodes:
"@@network1@@":
params: ''
"@@network2@@":
params: ''
deployment:
elements:
pacemaker-cluster-member:
- "@@network1@@"
- "@@network2@@"
hawk-server:
- "@@network1@@"
- "@@network2@@"
- "@@data3@@"

- barclamp: database
attributes:
ha:
storage:
shared:
device: ##shared_nfs_for_database##
fstype: nfs
options: nfsvers=3
deployment:
elements:
database-server:
- cluster:data

- barclamp: rabbitmq
attributes:
ha:
storage:
shared:
device: ##shared_nfs_for_rabbitmq##
fstype: nfs
options: nfsvers=3
client:
enable_notifications: true
deployment:
@@ -208,7 +182,7 @@ proposals:
neutron-server:
- cluster:services
neutron-network:
- cluster:network
- cluster:services

- barclamp: nova
attributes: