Update qa-scenario-2a (SOC-10460) #3711
@@ -1,5 +1,5 @@
---
# 2a - 8 nodes, HA (SBD 3x2), KVM x 1
# 2a - 7 nodes, HA (SBD 3x2), KVM x 1
proposals:
- barclamp: pacemaker
  name: services

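The header comment encodes the node layout: with the dedicated network cluster dropped (see the later hunks and the review discussion below), the 7 nodes break down as two SBD-fenced 3-node clusters plus one KVM compute node. A sketch assembled from the node placeholders used in this diff (not taken verbatim from the scenario file):

# 7 nodes = 3 + 3 + 1
services cluster: "@@controller1@@", "@@controller2@@", "@@controller3@@"  # SBD via @@sbd_device_services@@
data cluster:     "@@data1@@", "@@data2@@", "@@data3@@"                    # SBD via @@sbd_device_data@@
compute:          "@@compute-kvm@@"                                        # KVM x 1
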
@@ -14,20 +14,27 @@ proposals:
          "@@controller2@@":
            devices:
            - "@@sbd_device_services@@"
          "@@controller3@@":
            devices:
            - "@@sbd_device_services@@"
      per_node:
        nodes:
          "@@controller1@@":
            params: ''
          "@@controller2@@":
            params: ''
          "@@controller3@@":
            params: ''
  deployment:
    elements:
      pacemaker-cluster-member:
      - "@@controller1@@"
      - "@@controller2@@"
      - "@@controller3@@"
      hawk-server:
      - "@@controller1@@"
      - "@@controller2@@"
      - "@@controller3@@"

- barclamp: pacemaker
  name: data

@@ -42,57 +49,30 @@ proposals:
          "@@data2@@":
            devices:
            - "@@sbd_device_data@@"
          "@@data3@@":
            devices:
            - "@@sbd_device_data@@"
      per_node:
        nodes:
          "@@data1@@":
            params: ''
          "@@data2@@":
            params: ''
          "@@data3@@":
            params: ''
  deployment:
    elements:
      pacemaker-cluster-member:
      - "@@data1@@"
      - "@@data2@@"
      - "@@data3@@"
      hawk-server:
      - "@@data1@@"
      - "@@data2@@"

- barclamp: pacemaker
  name: network
  attributes:
    stonith:
      mode: sbd
      sbd:
        nodes:
          "@@network1@@":
            devices:
            - "@@sbd_device_network@@"
          "@@network2@@":
            devices:
            - "@@sbd_device_network@@"
      per_node:
        nodes:
          "@@network1@@":
            params: ''
          "@@network2@@":
            params: ''
  deployment:
    elements:
      pacemaker-cluster-member:
      - "@@network1@@"
      - "@@network2@@"
      hawk-server:
      - "@@network1@@"
      - "@@network2@@"
      - "@@data3@@"

- barclamp: database
  attributes:
    sql_engine: postgresql
    ha:
      storage:
        shared:
          device: ##shared_nfs_for_database##
          fstype: nfs
  deployment:
    elements:
      database-server:

@@ -151,11 +131,21 @@ proposals:
- barclamp: cinder
  attributes:
    volumes:
    - backend_driver: nfs
      backend_name: nfs
      nfs:
        nfs_shares: ##cinder-storage-shares##
        nfs_snapshot: true
    - backend_driver: netapp
      backend_name: netapp
      netapp:
        nfs_shares: ''
        netapp_vfiler: ''
        netapp_volume_list: ''
        storage_family: ontap_cluster
        storage_protocol: iscsi
        vserver: 'cloud-openstack-svm '
        netapp_server_hostname: ##netapp_server##
        netapp_server_port: 80
        netapp_login: admin
        netapp_password: ##netapp_password##
        netapp_transport_type: http
        max_over_subscription_ratio: 20
  deployment:
    elements:
      cinder-controller:

@@ -178,7 +168,7 @@ proposals:
      neutron-server:
      - cluster:services
      neutron-network:
      - cluster:network
      - cluster:services

- barclamp: nova
  attributes:

@@ -222,19 +212,6 @@ proposals:
      heat-server:
      - cluster:services

- barclamp: ceilometer
  attributes:
  deployment:
    elements:
      ceilometer-agent:
      - "@@compute-kvm@@"
      ceilometer-agent-hyperv: []
      ceilometer-central:
      - cluster:services
      ceilometer-server:
      - cluster:services
      ceilometer-swift-proxy-middleware: []

Review discussion on the removed ceilometer proposal:

Reviewer: Why are you removing Ceilometer from the scenario?

Author: During the prechecks, the upgrade won't go any further if Monasca is missing, unless I remove Ceilometer. Since that case is already covered by qa-scenario-8a (which deploys Monasca), I decided to remove it here. The other option would be to deploy Monasca, but with not enough nodes available I think that would be complicated.

Reviewer: Ah, ok. I was wondering about that because I created dedicated precheck-clean scenarios in #3682, which is a little... starved for reviews... :-) What do we do about this? Merge the dedicated scenarios, or remove all precheck breakers from the existing ones (in that case there would be more things to change)? I for one would prefer having dedicated upgrade scenarios, in order to keep testing the barclamps that are still supported on Cloud 8 for Cloud 8, rather than disabling them for regular Cloud 8 (i.e. in non-upgrade situations) as well.

Author: I wanted to do two things at once: fix the broken scenario in general for SOC8 and make it work for the upgrade, so we don't need a duplicate.

Reviewer: Ok. Then I'll +1 this for now, since I have none of the non-precheck fixes in my pull request. We may need a follow-up to take care of Aodh, Trove and more Ceilometer occurrences as well, though (unless someone has taken care of that in the time since I created #3682; haven't checked).

- barclamp: manila
  attributes:
    default_share_type: default

Review discussion on the cluster layout:

Reviewer: Why did we lose the dedicated network cluster?

Author: We only have enough nodes for two clusters (with 3 nodes each). I can remove the data cluster and add the network cluster back if you think that gives better coverage for the scenario.

Reviewer: No. Let's leave it like this if there are no more nodes available.
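
For reference, a minimal sketch of the layout this change converges on, assembled from the hunks above: two 3-node clusters (services and data), no dedicated network cluster, and the neutron-network role moved onto the services cluster. Only the relevant deployment elements are shown, and the neutron proposal header itself is inferred from the roles in the hunk rather than copied from the file:

# Sketch only, derived from this diff, not the verbatim scenario file
proposals:
- barclamp: pacemaker
  name: services
  deployment:
    elements:
      pacemaker-cluster-member:
      - "@@controller1@@"
      - "@@controller2@@"
      - "@@controller3@@"
- barclamp: pacemaker
  name: data
  deployment:
    elements:
      pacemaker-cluster-member:
      - "@@data1@@"
      - "@@data2@@"
      - "@@data3@@"
- barclamp: neutron
  deployment:
    elements:
      neutron-server:
      - cluster:services
      neutron-network:
      - cluster:services   # previously cluster:network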