
Topic: Heartbeat does not start applications

  1. #1
    Registered user Avatar of Huhn Hur Tu
    Member since
    Nov 2003
    Location
    Karlsruhe
    Posts
    2,256

    Heartbeat does not start applications

    Since I had apparently gotten some things mixed up in my configuration, I started over from scratch. But I now have the problem that Heartbeat does not start mysql.

    Server 1 eth1 192.168.1.17 Debian Lenny stable
    Server 2 eth1 192.168.1.18 Debian Lenny stable
    Virtuelle IP 192.168.1.253

    /etc/network/interfaces Server1

    Code:
    # The loopback network interface
    auto lo
    iface lo inet loopback
    
    # The primary network interface
    allow-hotplug eth1
    iface eth1 inet dhcp
    
    auto eth0
    iface eth0 inet static
      address 192.168.200.1
      network 192.168.200.0
      netmask 255.255.255.0
      broadcast 192.168.200.255
    
    auto lo:0
    iface lo:0 inet static
      address 192.168.1.253
      netmask 255.255.255.255
      pre-up sysctl -p > /dev/null
    Server2

    Code:
    # The loopback network interface
    auto lo
    iface lo inet loopback
    
    # The primary network interface
    allow-hotplug eth1
    iface eth1 inet dhcp
    
    
    auto eth0
    iface eth0 inet static
      address 192.168.200.2
      network 192.168.200.0
      netmask 255.255.255.0
      broadcast 192.168.200.255
    
    auto lo:0
    iface lo:0 inet static
      address 192.168.1.253
      netmask 255.255.255.255
      pre-up sysctl -p > /dev/null
    Server1
    /etc/ha.d/ha.cf

    Code:
    debugfile /var/log/ha-debug
    logfile /var/log/ha-log
    logfacility local0
    bcast eth1
    mcast eth1 225.0.0.1 694 1 0
    ucast eth1 192.168.1.18
    crm yes
    keepalive 5
    warntime 10
    deadtime 120
    initdead 120
    auto_failback off
    node server1
    node server2
    respawn hacluster /usr/lib/heartbeat/ipfail
    apiauth ipfail gid=haclient uid=hacluster
    Server1/2
    /etc/ha.d/haresources
    Does only one node have to appear there as the first word (server1)? Or, on each machine, the respective other node, or ...?

    Code:
    server1 \
            IPaddr2::192.168.1.253/24/eth0/192.168.1.255 mysql
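To the question above: as far as I know, the first word in haresources is the node name (as reported by `uname -n`) of the node that should preferably run the resource group, and the file must be identical on both nodes; it is not rewritten per machine. A sketch, assuming server1 is the preferred node:

```
# /etc/ha.d/haresources -- identical copy on BOTH nodes
# first word = uname -n of the preferred node, then the resource list
server1 \
        IPaddr2::192.168.1.253/24/eth0/192.168.1.255 mysql
```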
    /etc/ha.d/authkeys
    Server1/2

    Code:
    auth 3
    
    #1 sha1 passworthiereingeben
    #2 md5 passworthiereingeben
    3 crc
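A side note on the authkeys above (not the cause of the mysql problem): `crc` provides integrity checking only, no authentication. On a dedicated heartbeat link it works, but the commented-out sha1 variant is generally recommended; either way the file must be identical on both nodes and readable only by root (chmod 600). A sketch with an assumed placeholder secret:

```
# /etc/ha.d/authkeys -- identical on both nodes, chmod 600 /etc/ha.d/authkeys
auth 1
1 sha1 ReplaceThisWithYourOwnSecret
```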

    Using the script

    Code:
    /usr/lib/heartbeat/haresources2cib.py /etc/ha.d/haresources
    I converted the haresources into the cib.xml; beforehand, of course, I deleted the cib.xml and the cib.xml.sig.

    cib.xml

    Code:
    <cib admin_epoch="0" epoch="1" have_quorum="false" ignore_dtd="false" num_peers="2" cib_feature_revision="2.0" generated="false" num_updates="4" cib-last-written="Wed Mar 10 10:55:06 2010">
       <configuration>
         <crm_config>
           <cluster_property_set id="cib-bootstrap-options">
             <attributes>
               <nvpair id="cib-bootstrap-options-symmetric-cluster" name="symmetric-cluster" value="true"/>
               <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="stop"/>
               <nvpair id="cib-bootstrap-options-default-resource-stickiness" name="default-resource-stickiness" value="0"/>
               <nvpair id="cib-bootstrap-options-default-resource-failure-stickiness" name="default-resource-failure-stickiness" value="0"/>
               <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="false"/>
               <nvpair id="cib-bootstrap-options-stonith-action" name="stonith-action" value="reboot"/>
               <nvpair id="cib-bootstrap-options-startup-fencing" name="startup-fencing" value="true"/>
               <nvpair id="cib-bootstrap-options-stop-orphan-resources" name="stop-orphan-resources" value="true"/>
               <nvpair id="cib-bootstrap-options-stop-orphan-actions" name="stop-orphan-actions" value="true"/>
               <nvpair id="cib-bootstrap-options-remove-after-stop" name="remove-after-stop" value="false"/>
               <nvpair id="cib-bootstrap-options-short-resource-names" name="short-resource-names" value="true"/>
               <nvpair id="cib-bootstrap-options-transition-idle-timeout" name="transition-idle-timeout" value="5min"/>
               <nvpair id="cib-bootstrap-options-default-action-timeout" name="default-action-timeout" value="20s"/>
               <nvpair id="cib-bootstrap-options-is-managed-default" name="is-managed-default" value="true"/>
               <nvpair id="cib-bootstrap-options-cluster-delay" name="cluster-delay" value="60s"/>
               <nvpair id="cib-bootstrap-options-pe-error-series-max" name="pe-error-series-max" value="-1"/>
               <nvpair id="cib-bootstrap-options-pe-warn-series-max" name="pe-warn-series-max" value="-1"/>
               <nvpair id="cib-bootstrap-options-pe-input-series-max" name="pe-input-series-max" value="-1"/>
               <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="2.1.3-node: 552305612591183b1628baa5bc6e903e0f1e26a3"/>
             </attributes>
           </cluster_property_set>
         </crm_config>
         <nodes>
           <node id="a8cc6b4a-9b5d-409f-8654-8e2a90cbf1e0" uname="server2" type="normal"/>
           <node id="7d026928-b718-4d28-b3e6-22ee3cd9ab50" uname="server1" type="normal"/>
         </nodes>
         <resources>
           <group id="group_1">
             <primitive class="ocf" id="IPaddr2_1" provider="heartbeat" type="IPaddr2">
               <operations>
                 <op id="IPaddr2_1_mon" interval="5s" name="monitor" timeout="5s"/>
               </operations>
               <instance_attributes id="IPaddr2_1_inst_attr">
                 <attributes>
                   <nvpair id="IPaddr2_1_attr_0" name="ip" value="192.168.1.253"/>
                   <nvpair id="IPaddr2_1_attr_1" name="nic" value="24"/>
                   <nvpair id="IPaddr2_1_attr_2" name="cidr_netmask" value="eth0"/>
                   <nvpair id="IPaddr2_1_attr_3" name="broadcast" value="192.168.1.255"/>
                 </attributes>
               </instance_attributes>
             </primitive>
             <primitive class="ocf" id="mysql_2" provider="heartbeat" type="mysql">
               <operations>
                 <op id="mysql_2_mon" interval="120s" name="monitor" timeout="60s"/>
               </operations>
             </primitive>
           </group>
         </resources>
         <constraints>
           <rsc_location id="rsc_location_group_1" rsc="group_1">
             <rule id="prefered_location_group_1" score="100">
               <expression attribute="#uname" id="prefered_location_group_1_expr" operation="eq" value="server1"/>
             </rule>
           </rsc_location>
         </constraints>
       </configuration>
     </cib>
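Comparing this cib.xml with the "Invalid netmask specification [eth0]" messages in the log below: the converter has written the `nic` and `cidr_netmask` values swapped (`nic="24"`, `cidr_netmask="eth0"`), matching the haresources order `.../24/eth0/...`. A corrected fragment of the IPaddr2 instance attributes would presumably look like this (a sketch; the generated ids are kept as-is):

```xml
<instance_attributes id="IPaddr2_1_inst_attr">
  <attributes>
    <nvpair id="IPaddr2_1_attr_0" name="ip" value="192.168.1.253"/>
    <nvpair id="IPaddr2_1_attr_1" name="nic" value="eth0"/>
    <nvpair id="IPaddr2_1_attr_2" name="cidr_netmask" value="24"/>
    <nvpair id="IPaddr2_1_attr_3" name="broadcast" value="192.168.1.255"/>
  </attributes>
</instance_attributes>
```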
    /var/log/messages

    Code:
    Mar 10 10:55:03 server2 cib: [2398]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
    Mar 10 10:55:03 server2 cib: [2398]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: /var/lib/heartbeat/crm/cib.xml.sig.last)
    Mar 10 10:55:03 server2 crmd: [2393]: info: do_cib_control: CIB connection established
    Mar 10 10:55:03 server2 crmd: [2393]: info: register_with_ha: Hostname: server2
    Mar 10 10:55:03 server2 cib: [2389]: info: cib_client_status_callback: Status update: Client server2/cib now has status [join]
    Mar 10 10:55:03 server2 cib: [2389]: info: cib_client_status_callback: Status update: Client server2/cib now has status [online]
    Mar 10 10:55:03 server2 cib: [2389]: info: cib_null_callback: Setting cib_diff_notify callbacks for mgmtd: on
    Mar 10 10:55:03 server2 cib: [2389]: info: cib_null_callback: Setting cib_refresh_notify callbacks for crmd: on
    Mar 10 10:55:03 server2 cib: [2398]: info: write_cib_contents: Wrote version 0.1.1 of the CIB to disk (digest: 94547266cef61ee87acef8a5225147a3)
    Mar 10 10:55:03 server2 cib: [2398]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
    Mar 10 10:55:03 server2 cib: [2398]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: /var/lib/heartbeat/crm/cib.xml.sig.last)
    Mar 10 10:55:03 server2 crmd: [2393]: info: register_with_ha: UUID: a8cc6b4a-9b5d-409f-8654-8e2a90cbf1e0
    Mar 10 10:55:03 server2 cib: [2389]: info: cib_client_status_callback: Status update: Client server1/cib now has status [online]
    Mar 10 10:55:04 server2 crmd: [2393]: info: populate_cib_nodes: Requesting the list of configured nodes
    Mar 10 10:55:05 server2 mgmtd: [2394]: info: Started.
    Mar 10 10:55:05 server2 crmd: [2393]: notice: populate_cib_nodes: Node: server2 (uuid: a8cc6b4a-9b5d-409f-8654-8e2a90cbf1e0)
    Mar 10 10:55:05 server2 ccm: [2388]: info: Break tie for 2 nodes cluster
    Mar 10 10:55:05 server2 cib: [2389]: info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
    Mar 10 10:55:05 server2 cib: [2389]: info: mem_handle_event: instance=1, nodes=1, new=1, lost=0, n_idx=0, new_idx=0, old_idx=3
    Mar 10 10:55:05 server2 cib: [2389]: info: cib_ccm_msg_callback: PEER: server2
    Mar 10 10:55:06 server2 crmd: [2393]: notice: populate_cib_nodes: Node: server1 (uuid: 7d026928-b718-4d28-b3e6-22ee3cd9ab50)
    Mar 10 10:55:06 server2 crmd: [2393]: info: do_ha_control: Connected to Heartbeat
    Mar 10 10:55:06 server2 crmd: [2393]: info: do_ccm_control: CCM connection established... waiting for first callback
    Mar 10 10:55:06 server2 crmd: [2393]: info: do_started: Delaying start, CCM (0000000000100000) not connected
    Mar 10 10:55:06 server2 crmd: [2393]: info: crmd_init: Starting crmd's mainloop
    Mar 10 10:55:06 server2 crmd: [2393]: notice: crmd_client_status_callback: Status update: Client server2/crmd now has status [online]
    Mar 10 10:55:06 server2 crmd: [2393]: notice: crmd_client_status_callback: Status update: Client server2/crmd now has status [online]
    Mar 10 10:55:07 server2 heartbeat: [2321]: WARN: 1 lost packet(s) for [server1] [19:21]
    Mar 10 10:55:07 server2 heartbeat: [2321]: info: No pkts missing from server1!
    Mar 10 10:55:07 server2 cib: [2389]: info: mem_handle_event: Got an event OC_EV_MS_INVALID from ccm
    Mar 10 10:55:07 server2 crmd: [2393]: notice: crmd_client_status_callback: Status update: Client server1/crmd now has status [offline]
    Mar 10 10:55:07 server2 crmd: [2393]: notice: crmd_client_status_callback: Status update: Client server1/crmd now has status [online]
    Mar 10 10:55:07 server2 cib: [2389]: info: mem_handle_event: no mbr_track info
    Mar 10 10:55:07 server2 cib: [2389]: info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
    Mar 10 10:55:07 server2 cib: [2389]: info: mem_handle_event: instance=2, nodes=2, new=1, lost=0, n_idx=0, new_idx=2, old_idx=4
    Mar 10 10:55:07 server2 cib: [2389]: info: cib_ccm_msg_callback: PEER: server2
    Mar 10 10:55:07 server2 cib: [2389]: info: cib_ccm_msg_callback: PEER: server1
    Mar 10 10:55:07 server2 crmd: [2393]: info: do_started: Delaying start, CCM (0000000000100000) not connected
    Mar 10 10:55:07 server2 crmd: [2393]: info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
    Mar 10 10:55:07 server2 crmd: [2393]: info: mem_handle_event: instance=1, nodes=1, new=1, lost=0, n_idx=0, new_idx=0, old_idx=3
    Mar 10 10:55:07 server2 crmd: [2393]: info: crmd_ccm_msg_callback: Quorum (re)attained after event=NEW MEMBERSHIP (id=1)
    Mar 10 10:55:07 server2 crmd: [2393]: info: mem_handle_event: Got an event OC_EV_MS_INVALID from ccm
    Mar 10 10:55:07 server2 crmd: [2393]: info: mem_handle_event: no mbr_track info
    Mar 10 10:55:07 server2 crmd: [2393]: info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
    Mar 10 10:55:07 server2 crmd: [2393]: info: mem_handle_event: instance=2, nodes=2, new=1, lost=0, n_idx=0, new_idx=2, old_idx=4
    Mar 10 10:55:07 server2 crmd: [2393]: info: crmd_ccm_msg_callback: Quorum (re)attained after event=NEW MEMBERSHIP (id=2)
    Mar 10 10:55:07 server2 crmd: [2393]: info: ccm_event_detail: NEW MEMBERSHIP: trans=2, nodes=2, new=1, lost=0 n_idx=0, new_idx=2, old_idx=4
    Mar 10 10:55:07 server2 crmd: [2393]: info: ccm_event_detail: #011CURRENT: server2 [nodeid=1, born=1]
    Mar 10 10:55:07 server2 crmd: [2393]: info: ccm_event_detail: #011CURRENT: server1 [nodeid=0, born=2]
    Mar 10 10:55:07 server2 crmd: [2393]: info: ccm_event_detail: #011NEW:     server1 [nodeid=0, born=2]
    Mar 10 10:55:07 server2 crmd: [2393]: info: do_started: The local CRM is operational
    Mar 10 10:55:07 server2 crmd: [2393]: info: ccm_event_detail: NEW MEMBERSHIP: trans=1, nodes=1, new=1, lost=0 n_idx=0, new_idx=0, old_idx=3
    Mar 10 10:55:07 server2 crmd: [2393]: info: ccm_event_detail: #011CURRENT: server2 [nodeid=1, born=1]
    Mar 10 10:55:07 server2 crmd: [2393]: info: ccm_event_detail: #011NEW:     server2 [nodeid=1, born=1]
    Mar 10 10:55:07 server2 crmd: [2393]: info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_CCM_CALLBACK origin=do_started ]
    Mar 10 10:55:07 server2 heartbeat: [2321]: WARN: 1 lost packet(s) for [server1] [22:24]
    Mar 10 10:55:07 server2 heartbeat: [2321]: info: No pkts missing from server1!
    Mar 10 10:55:10 server2 attrd: [2392]: info: main: Starting mainloop...
    Mar 10 10:57:08 server2 crmd: [2393]: info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped!
    Mar 10 10:57:08 server2 crmd: [2393]: WARN: do_log: [[FSA]] Input I_DC_TIMEOUT from crm_timer_popped() received in state (S_PENDING)
    Mar 10 10:57:08 server2 crmd: [2393]: info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
    Mar 10 10:57:08 server2 crmd: [2393]: info: do_election_count_vote: Updated voted hash for server2 to vote
    Mar 10 10:57:08 server2 crmd: [2393]: info: do_election_count_vote: Election ignore: our vote (server2)
    Mar 10 10:57:08 server2 crmd: [2393]: info: do_election_check: Still waiting on 1 non-votes (2 total)
    Mar 10 10:57:08 server2 crmd: [2393]: info: do_election_count_vote: Updated voted hash for server1 to no-vote
    Mar 10 10:57:08 server2 crmd: [2393]: info: do_election_count_vote: Election ignore: no-vote from server1
    Mar 10 10:57:08 server2 crmd: [2393]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
    Mar 10 10:57:08 server2 crmd: [2393]: info: start_subsystem: Starting sub-system "tengine"
    Mar 10 10:57:08 server2 crmd: [2393]: info: start_subsystem: Starting sub-system "pengine"
    Mar 10 10:57:08 server2 crmd: [2393]: info: do_dc_takeover: Taking over DC status for this partition
    Mar 10 10:57:09 server2 cib: [2389]: info: cib_process_readwrite: We are now in R/W mode
    Mar 10 10:57:09 server2 crmd: [2393]: info: join_make_offer: Making join offers based on membership 2
    Mar 10 10:57:09 server2 crmd: [2393]: info: do_dc_join_offer_all: join-1: Waiting on 2 outstanding join acks
    Mar 10 10:57:09 server2 tengine: [2406]: info: G_main_add_SignalHandler: Added signal handler for signal 15
    Mar 10 10:57:09 server2 tengine: [2406]: info: G_main_add_TriggerHandler: Added signal manual handler
    Mar 10 10:57:09 server2 tengine: [2406]: info: G_main_add_TriggerHandler: Added signal manual handler
    Mar 10 10:57:09 server2 tengine: [2406]: info: te_init: Registering TE UUID: 87bd86cf-74fd-4cc0-8b99-7f543e3bccc4
    Mar 10 10:57:09 server2 tengine: [2406]: info: set_graph_functions: Setting custom graph functions
    Mar 10 10:57:09 server2 tengine: [2406]: info: unpack_graph: Unpacked transition -1: 0 actions in 0 synapses
    Mar 10 10:57:09 server2 tengine: [2406]: info: te_init: Starting tengine
    Mar 10 10:57:09 server2 tengine: [2406]: info: te_connect_stonith: Attempting connection to fencing daemon...
    Mar 10 10:57:09 server2 cib: [2389]: info: cib_null_callback: Setting cib_diff_notify callbacks for tengine: on
    Mar 10 10:57:09 server2 pengine: [2407]: info: G_main_add_SignalHandler: Added signal handler for signal 15
    Mar 10 10:57:09 server2 pengine: [2407]: info: pe_init: Starting pengine
    Mar 10 10:57:09 server2 crmd: [2393]: info: update_dc: Set DC to server2 (2.0)
    Mar 10 10:57:10 server2 tengine: [2406]: info: te_connect_stonith: Connected
    Mar 10 10:57:10 server2 crmd: [2393]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
    Mar 10 10:57:10 server2 crmd: [2393]: info: do_state_transition: All 2 cluster nodes responded to the join offer.
    Mar 10 10:57:10 server2 crmd: [2393]: info: do_dc_join_finalize: join-1: Asking server1 for its copy of the CIB
    Mar 10 10:57:12 server2 cib: [2389]: info: cib_replace_notify: Replaced: 0.1.3 -> 0.1.4 from <null>
    Mar 10 10:57:12 server2 crmd: [2393]: info: populate_cib_nodes: Requesting the list of configured nodes
    Mar 10 10:57:12 server2 crmd: [2393]: notice: populate_cib_nodes: Node: server2 (uuid: a8cc6b4a-9b5d-409f-8654-8e2a90cbf1e0)
    Mar 10 10:57:13 server2 crmd: [2393]: notice: populate_cib_nodes: Node: server1 (uuid: 7d026928-b718-4d28-b3e6-22ee3cd9ab50)
    Mar 10 10:57:13 server2 crmd: [2393]: info: update_attrd: Connecting to attrd...
    Mar 10 10:57:13 server2 crmd: [2393]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
    Mar 10 10:57:13 server2 crmd: [2393]: info: update_dc: Unset DC server2
    Mar 10 10:57:13 server2 attrd: [2392]: info: attrd_local_callback: Sending full refresh
    Mar 10 10:57:13 server2 crmd: [2393]: info: do_election_count_vote: Updated voted hash for server2 to vote
    Mar 10 10:57:13 server2 crmd: [2393]: info: do_election_count_vote: Election ignore: our vote (server2)
    Mar 10 10:57:13 server2 crmd: [2393]: info: do_election_check: Still waiting on 1 non-votes (2 total)
    Mar 10 10:57:14 server2 crmd: [2393]: info: do_election_count_vote: Updated voted hash for server1 to no-vote
    Mar 10 10:57:14 server2 crmd: [2393]: info: do_election_count_vote: Election ignore: no-vote from server1
    Mar 10 10:57:14 server2 crmd: [2393]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
    Mar 10 10:57:14 server2 crmd: [2393]: info: start_subsystem: Starting sub-system "tengine"
    Mar 10 10:57:14 server2 crmd: [2393]: WARN: start_subsystem: Client tengine already running as pid 2406
    Mar 10 10:57:14 server2 crmd: [2393]: info: start_subsystem: Starting sub-system "pengine"
    Mar 10 10:57:14 server2 crmd: [2393]: WARN: start_subsystem: Client pengine already running as pid 2407
    Mar 10 10:57:14 server2 crmd: [2393]: info: do_dc_takeover: Taking over DC status for this partition
    Mar 10 10:57:14 server2 cib: [2389]: info: cib_process_readwrite: We are now in R/O mode
    Mar 10 10:57:14 server2 cib: [2389]: info: cib_process_readwrite: We are now in R/W mode
    Mar 10 10:57:14 server2 crmd: [2393]: info: do_dc_join_offer_all: join-2: Waiting on 2 outstanding join acks
    Mar 10 10:57:15 server2 crmd: [2393]: info: update_dc: Set DC to server2 (2.0)
    Mar 10 10:57:16 server2 crmd: [2393]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
    Mar 10 10:57:16 server2 crmd: [2393]: info: do_state_transition: All 2 cluster nodes responded to the join offer.
    Mar 10 10:57:16 server2 attrd: [2392]: info: attrd_local_callback: Sending full refresh
    Mar 10 10:57:16 server2 cib: [2389]: info: sync_our_cib: Syncing CIB to all peers
    Mar 10 10:57:16 server2 crmd: [2393]: info: update_dc: Set DC to server2 (2.0)
    Mar 10 10:57:17 server2 crmd: [2393]: info: do_dc_join_ack: join-2: Updating node state to member for server2
    Mar 10 10:57:18 server2 crmd: [2393]: info: do_dc_join_ack: join-2: Updating node state to member for server1
    Mar 10 10:57:18 server2 crmd: [2393]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
    Mar 10 10:57:18 server2 crmd: [2393]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
    Mar 10 10:57:18 server2 tengine: [2406]: info: update_abort_priority: Abort priority upgraded to 1000000
    Mar 10 10:57:18 server2 tengine: [2406]: info: update_abort_priority: 'DC Takeover' abort superceeded
    Mar 10 10:57:18 server2 pengine: [2407]: info: determine_online_status: Node server1 is online
    Mar 10 10:57:18 server2 pengine: [2407]: info: determine_online_status: Node server2 is online
    Mar 10 10:57:18 server2 pengine: [2407]: notice: group_print: Resource Group: group_1
    Mar 10 10:57:18 server2 pengine: [2407]: notice: native_print:     IPaddr2_1#011(heartbeat::ocf:IPaddr2):#011Stopped
    Mar 10 10:57:18 server2 pengine: [2407]: notice: native_print:     mysql_2#011(heartbeat::ocf:mysql):#011Stopped
    Mar 10 10:57:18 server2 pengine: [2407]: notice: StartRsc:  server1#011Start IPaddr2_1
    Mar 10 10:57:18 server2 pengine: [2407]: notice: RecurringOp: server1#011   IPaddr2_1_monitor_5000
    Mar 10 10:57:18 server2 pengine: [2407]: notice: StartRsc:  server1#011Start mysql_2
    Mar 10 10:57:18 server2 pengine: [2407]: notice: RecurringOp: server1#011   mysql_2_monitor_120000
    Mar 10 10:57:18 server2 crmd: [2393]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=route_message ]
    Mar 10 10:57:18 server2 tengine: [2406]: info: unpack_graph: Unpacked transition 0: 13 actions in 13 synapses
    Mar 10 10:57:18 server2 tengine: [2406]: info: send_rsc_command: Initiating action 3: IPaddr2_1_monitor_0 on server2
    Mar 10 10:57:18 server2 tengine: [2406]: info: send_rsc_command: Initiating action 6: IPaddr2_1_monitor_0 on server1
    Mar 10 10:57:18 server2 tengine: [2406]: info: send_rsc_command: Initiating action 4: mysql_2_monitor_0 on server2
    Mar 10 10:57:18 server2 tengine: [2406]: info: send_rsc_command: Initiating action 7: mysql_2_monitor_0 on server1
    Mar 10 10:57:18 server2 crmd: [2393]: info: do_lrm_rsc_op: Performing op=IPaddr2_1_monitor_0 key=3:0:87bd86cf-74fd-4cc0-8b99-7f543e3bccc4)
    Mar 10 10:57:18 server2 lrmd: [2390]: info: rsc:IPaddr2_1: monitor
    Mar 10 10:57:18 server2 crmd: [2393]: info: do_lrm_rsc_op: Performing op=mysql_2_monitor_0 key=4:0:87bd86cf-74fd-4cc0-8b99-7f543e3bccc4)
    Mar 10 10:57:18 server2 lrmd: [2390]: info: rsc:mysql_2: monitor
    Mar 10 10:57:18 server2 pengine: [2407]: info: process_pe_message: Transition 0: PEngine Input stored in: /var/lib/heartbeat/pengine/pe-input-11.bz2
    Mar 10 10:57:18 server2 crmd: [2393]: info: process_lrm_event: LRM operation mysql_2_monitor_0 (call=3, rc=7) complete
    Mar 10 10:57:18 server2 tengine: [2406]: info: match_graph_event: Action mysql_2_monitor_0 (4) confirmed on server2 (rc=0)
    Mar 10 10:57:18 server2 lrmd: [2390]: info: RA output: (IPaddr2_1:monitor:stderr) Invalid netmask specification [eth0]#012/usr/lib/heartbeat/findif version 2.1.3 Copyright Alan Robertson#012#012Usage: /usr/lib/heartbeat/findif [-C]#012Options:#012    -C: Output netmask as the number of bits rather than as 4 octets.#012Environment variables:#012OCF_RESKEY_ip#011#011 ip address (mandatory!)#012OCF_RESKEY_cidr_netmask netmask of interface#012OCF_RESKEY_broadcast#011 broadcast address for interface#012OCF_RESKEY_nic#011#011 interface to assign to
    Mar 10 10:57:18 server2 crmd: [2393]: info: process_lrm_event: LRM operation IPaddr2_1_monitor_0 (call=2, rc=2) complete
    Mar 10 10:57:18 server2 tengine: [2406]: info: status_from_rc: Re-mapping op status to LRM_OP_ERROR for rc=2
    Mar 10 10:57:18 server2 tengine: [2406]: WARN: status_from_rc: Action monitor on server2 failed (target: 7 vs. rc: 2): Error
    Mar 10 10:57:18 server2 tengine: [2406]: info: update_abort_priority: Abort priority upgraded to 1
    Mar 10 10:57:18 server2 tengine: [2406]: info: update_abort_priority: Abort action 0 superceeded by 2
    Mar 10 10:57:18 server2 tengine: [2406]: info: match_graph_event: Action IPaddr2_1_monitor_0 (3) confirmed on server2 (rc=4)
    Mar 10 10:57:19 server2 tengine: [2406]: info: match_graph_event: Action mysql_2_monitor_0 (7) confirmed on server1 (rc=0)
    Mar 10 10:57:19 server2 tengine: [2406]: info: status_from_rc: Re-mapping op status to LRM_OP_ERROR for rc=2
    Mar 10 10:57:19 server2 tengine: [2406]: WARN: status_from_rc: Action monitor on server1 failed (target: 7 vs. rc: 2): Error
    Mar 10 10:57:19 server2 tengine: [2406]: info: match_graph_event: Action IPaddr2_1_monitor_0 (6) confirmed on server1 (rc=4)
    Mar 10 10:57:19 server2 tengine: [2406]: info: run_graph: ====================================================
    Mar 10 10:57:19 server2 tengine: [2406]: notice: run_graph: Transition 0: (Complete=4, Pending=0, Fired=0, Skipped=7, Incomplete=2)
    Mar 10 10:57:19 server2 crmd: [2393]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_IPC_MESSAGE origin=route_message ]
    Mar 10 10:57:19 server2 crmd: [2393]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
    Mar 10 10:57:19 server2 pengine: [2407]: info: determine_online_status: Node server1 is online
    Mar 10 10:57:19 server2 pengine: [2407]: WARN: unpack_rsc_op: Processing failed op IPaddr2_1_monitor_0 on server1: Error
    Mar 10 10:57:19 server2 pengine: [2407]: info: determine_online_status: Node server2 is online
    Mar 10 10:57:19 server2 pengine: [2407]: WARN: unpack_rsc_op: Processing failed op IPaddr2_1_monitor_0 on server2: Error
    Mar 10 10:57:19 server2 pengine: [2407]: notice: group_print: Resource Group: group_1
    Mar 10 10:57:19 server2 pengine: [2407]: notice: native_print:     IPaddr2_1#011(heartbeat::ocf:IPaddr2)
    Mar 10 10:57:19 server2 pengine: [2407]: notice: native_print: #0110 : server1
    Mar 10 10:57:19 server2 pengine: [2407]: notice: native_print: #0111 : server2
    Mar 10 10:57:19 server2 pengine: [2407]: notice: native_print:     mysql_2#011(heartbeat::ocf:mysql):#011Stopped
    Mar 10 10:57:19 server2 pengine: [2407]: notice: StopRsc:   server1#011Stop IPaddr2_1
    Mar 10 10:57:19 server2 pengine: [2407]: notice: StopRsc:   server2#011Stop IPaddr2_1
    Mar 10 10:57:19 server2 pengine: [2407]: notice: StartRsc:  server1#011Start IPaddr2_1
    Mar 10 10:57:19 server2 pengine: [2407]: notice: RecurringOp: server1#011   IPaddr2_1_monitor_5000
    Mar 10 10:57:19 server2 pengine: [2407]: notice: StartRsc:  server1#011Start mysql_2
    Mar 10 10:57:19 server2 pengine: [2407]: notice: RecurringOp: server1#011   mysql_2_monitor_120000
    Mar 10 10:57:19 server2 crmd: [2393]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=route_message ]
    Mar 10 10:57:19 server2 tengine: [2406]: info: unpack_graph: Unpacked transition 1: 14 actions in 14 synapses
    Mar 10 10:57:19 server2 tengine: [2406]: info: te_pseudo_action: Pseudo action 13 fired and confirmed
    Mar 10 10:57:19 server2 tengine: [2406]: info: te_pseudo_action: Pseudo action 16 fired and confirmed
    Mar 10 10:57:19 server2 tengine: [2406]: info: send_rsc_command: Initiating action 4: probe_complete on server2
    Mar 10 10:57:19 server2 tengine: [2406]: info: send_rsc_command: Initiating action 5: probe_complete on server1
    Mar 10 10:57:19 server2 tengine: [2406]: info: send_rsc_command: Initiating action 1: IPaddr2_1_stop_0 on server1
    Mar 10 10:57:19 server2 tengine: [2406]: info: send_rsc_command: Initiating action 2: IPaddr2_1_stop_0 on server2
    Mar 10 10:57:19 server2 crmd: [2393]: info: do_lrm_rsc_op: Performing op=IPaddr2_1_stop_0 key=2:1:87bd86cf-74fd-4cc0-8b99-7f543e3bccc4)
    Mar 10 10:57:19 server2 lrmd: [2390]: info: rsc:IPaddr2_1: stop
    Mar 10 10:57:19 server2 tengine: [2406]: info: extract_event: Aborting on transient_attributes changes for a8cc6b4a-9b5d-409f-8654-8e2a90cbf1e0
    Mar 10 10:57:19 server2 tengine: [2406]: info: update_abort_priority: Abort priority upgraded to 1000000
    Mar 10 10:57:19 server2 tengine: [2406]: info: update_abort_priority: Abort action 0 superceeded by 2
    Mar 10 10:57:19 server2 lrmd: [2390]: info: RA output: (IPaddr2_1:stop:stderr) Invalid netmask specification [eth0]
    Mar 10 10:57:19 server2 lrmd: [2390]: info: RA output: (IPaddr2_1:stop:stderr) #012/usr/lib/heartbeat/findif version 2.1.3 Copyright Alan Robertson#012#012Usage: /usr/lib/heartbeat/findif [-C]#012Options:#012    -C: Output netmask as the number of bits rather than as 4 octets.#012Environment variables:#012OCF_RESKEY_ip#011#011 ip address (mandatory!)#012OCF_RESKEY_cidr_netmask netmask of interface#012OCF_RESKEY_broadcast#011 broadcast address for interface#012OCF_RESKEY_nic#011#011 interface to assign to
    Mar 10 10:57:19 server2 crmd: [2393]: info: process_lrm_event: LRM operation IPaddr2_1_stop_0 (call=4, rc=2) complete
    Mar 10 10:57:19 server2 tengine: [2406]: info: status_from_rc: Re-mapping op status to LRM_OP_ERROR for rc=2
    Mar 10 10:57:19 server2 tengine: [2406]: WARN: status_from_rc: Action stop on server2 failed (target: <null> vs. rc: 2): Error
    Mar 10 10:57:19 server2 tengine: [2406]: WARN: update_failcount: Updating failcount for IPaddr2_1 on a8cc6b4a-9b5d-409f-8654-8e2a90cbf1e0 after failed stop: rc=2
    Mar 10 10:57:19 server2 tengine: [2406]: info: match_graph_event: Action IPaddr2_1_stop_0 (2) confirmed on server2 (rc=4)
    Mar 10 10:57:19 server2 tengine: [2406]: info: extract_event: Aborting on transient_attributes changes for a8cc6b4a-9b5d-409f-8654-8e2a90cbf1e0
    Mar 10 10:57:21 server2 tengine: [2406]: info: extract_event: Aborting on transient_attributes changes for 7d026928-b718-4d28-b3e6-22ee3cd9ab50
    Mar 10 10:57:21 server2 tengine: [2406]: info: status_from_rc: Re-mapping op status to LRM_OP_ERROR for rc=2
    Mar 10 10:57:21 server2 tengine: [2406]: WARN: status_from_rc: Action stop on server1 failed (target: <null> vs. rc: 2): Error
    Mar 10 10:57:21 server2 tengine: [2406]: WARN: update_failcount: Updating failcount for IPaddr2_1 on 7d026928-b718-4d28-b3e6-22ee3cd9ab50 after failed stop: rc=2
    Mar 10 10:57:21 server2 tengine: [2406]: info: match_graph_event: Action IPaddr2_1_stop_0 (1) confirmed on server1 (rc=4)
    Mar 10 10:57:21 server2 tengine: [2406]: info: run_graph: ====================================================
    Mar 10 10:57:21 server2 tengine: [2406]: notice: run_graph: Transition 1: (Complete=6, Pending=0, Fired=0, Skipped=8, Incomplete=0)
    Mar 10 10:57:21 server2 crmd: [2393]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_IPC_MESSAGE origin=route_message ]
    Mar 10 10:57:21 server2 crmd: [2393]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
    Mar 10 10:57:21 server2 pengine: [2407]: info: determine_online_status: Node server1 is online
    Mar 10 10:57:21 server2 pengine: [2407]: WARN: unpack_rsc_op: Processing failed op IPaddr2_1_monitor_0 on server1: Error
    Mar 10 10:57:21 server2 pengine: [2407]: WARN: unpack_rsc_op: Processing failed op IPaddr2_1_stop_0 on server1: Error
    Mar 10 10:57:21 server2 pengine: [2407]: WARN: unpack_rsc_op: Compatability handling for failed op IPaddr2_1_stop_0 on server1
    Mar 10 10:57:21 server2 pengine: [2407]: info: determine_online_status: Node server2 is online
    Mar 10 10:57:21 server2 pengine: [2407]: WARN: unpack_rsc_op: Processing failed op IPaddr2_1_monitor_0 on server2: Error
    Mar 10 10:57:21 server2 pengine: [2407]: WARN: unpack_rsc_op: Processing failed op IPaddr2_1_stop_0 on server2: Error
    Mar 10 10:57:21 server2 pengine: [2407]: WARN: unpack_rsc_op: Compatability handling for failed op IPaddr2_1_stop_0 on server2
    Mar 10 10:57:21 server2 pengine: [2407]: info: native_add_running: resource IPaddr2_1 isnt managed
    Mar 10 10:57:21 server2 pengine: [2407]: notice: group_print: Resource Group: group_1
    Mar 10 10:57:21 server2 pengine: [2407]: notice: native_print:     IPaddr2_1#011(heartbeat::ocf:IPaddr2)
    Mar 10 10:57:21 server2 pengine: [2407]: notice: native_print: #0110 : server1
    Mar 10 10:57:21 server2 pengine: [2407]: notice: native_print: #0111 : server2
    Mar 10 10:57:21 server2 pengine: [2407]: notice: native_print:     mysql_2#011(heartbeat::ocf:mysql):#011Stopped
    Mar 10 10:57:21 server2 pengine: [2407]: WARN: native_color: Resource IPaddr2_1 cannot run anywhere
    Mar 10 10:57:21 server2 pengine: [2407]: WARN: native_color: Resource mysql_2 cannot run anywhere
    Mar 10 10:57:21 server2 pengine: [2407]: WARN: custom_action: Action IPaddr2_1_stop_0 (unmanaged)
    Mar 10 10:57:21 server2 pengine: [2407]: WARN: custom_action: Action IPaddr2_1_stop_0 (unmanaged)
    Mar 10 10:57:21 server2 crmd: [2393]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=route_message ]
    Mar 10 10:57:21 server2 tengine: [2406]: info: unpack_graph: Unpacked transition 2: 0 actions in 0 synapses
    Mar 10 10:57:21 server2 tengine: [2406]: info: run_graph: Transition 2: (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0)
    Mar 10 10:57:21 server2 tengine: [2406]: info: notify_crmd: Transition 2 status: te_complete - <null>
    Mar 10 10:57:21 server2 crmd: [2393]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_IPC_MESSAGE origin=route_message ]
    Mar 10 10:57:21 server2 pengine: [2407]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
    Mar 10 10:57:21 server2 tengine: [2406]: info: extract_event: Aborting on transient_attributes changes for 7d026928-b718-4d28-b3e6-22ee3cd9ab50
    Mar 10 10:57:21 server2 tengine: [2406]: info: update_abort_priority: Abort priority upgraded to 1000000
    Mar 10 10:57:21 server2 crmd: [2393]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_IPC_MESSAGE origin=route_message ]
    Mar 10 10:57:21 server2 crmd: [2393]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
    Mar 10 10:57:21 server2 pengine: [2407]: info: determine_online_status: Node server1 is online
    Mar 10 10:57:21 server2 pengine: [2407]: WARN: unpack_rsc_op: Processing failed op IPaddr2_1_monitor_0 on server1: Error
    Mar 10 10:57:21 server2 pengine: [2407]: WARN: unpack_rsc_op: Processing failed op IPaddr2_1_stop_0 on server1: Error
    Mar 10 10:57:21 server2 pengine: [2407]: WARN: unpack_rsc_op: Compatability handling for failed op IPaddr2_1_stop_0 on server1
    Mar 10 10:57:21 server2 pengine: [2407]: info: determine_online_status: Node server2 is online
    Mar 10 10:57:21 server2 pengine: [2407]: WARN: unpack_rsc_op: Processing failed op IPaddr2_1_monitor_0 on server2: Error
    Mar 10 10:57:21 server2 pengine: [2407]: WARN: unpack_rsc_op: Processing failed op IPaddr2_1_stop_0 on server2: Error
    Mar 10 10:57:21 server2 pengine: [2407]: WARN: unpack_rsc_op: Compatability handling for failed op IPaddr2_1_stop_0 on server2
    Mar 10 10:57:21 server2 pengine: [2407]: info: native_add_running: resource IPaddr2_1 isnt managed
    Mar 10 10:57:21 server2 pengine: [2407]: notice: group_print: Resource Group: group_1
    Mar 10 10:57:21 server2 pengine: [2407]: notice: native_print:     IPaddr2_1#011(heartbeat::ocf:IPaddr2)
    Mar 10 10:57:21 server2 pengine: [2407]: notice: native_print: #0110 : server1
    Mar 10 10:57:21 server2 pengine: [2407]: notice: native_print: #0111 : server2
    Mar 10 10:57:21 server2 pengine: [2407]: notice: native_print:     mysql_2#011(heartbeat::ocf:mysql):#011Stopped
    Mar 10 10:57:21 server2 pengine: [2407]: WARN: native_color: Resource IPaddr2_1 cannot run anywhere
    Mar 10 10:57:21 server2 pengine: [2407]: WARN: native_color: Resource mysql_2 cannot run anywhere
    Mar 10 10:57:21 server2 pengine: [2407]: WARN: custom_action: Action IPaddr2_1_stop_0 (unmanaged)
    Mar 10 10:57:21 server2 pengine: [2407]: WARN: custom_action: Action IPaddr2_1_stop_0 (unmanaged)
    Mar 10 10:57:21 server2 crmd: [2393]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=route_message ]
    Mar 10 10:57:21 server2 tengine: [2406]: info: unpack_graph: Unpacked transition 3: 0 actions in 0 synapses
    Mar 10 10:57:21 server2 tengine: [2406]: info: run_graph: Transition 3: (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0)
    Mar 10 10:57:21 server2 tengine: [2406]: info: notify_crmd: Transition 3 status: te_complete - <null>
    Mar 10 10:57:21 server2 crmd: [2393]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_IPC_MESSAGE origin=route_message ]
    Mar 10 10:57:21 server2 pengine: [2407]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.


    What the hell am I doing wrong?


    Regards, Stefan
    There are two worlds between wanting to be anonymous and not wanting to sell your data. When you choose a paid service, it is usually because you want to pay for that service rather than be the product yourself.


  2. #2
    Registered user Huhn Hur Tu (Karlsruhe, joined Nov 2003, 2,256 posts)
    OK, I found one thing.


    Code:
    server2:~# crm_verify -LV
    crm_verify[2457]: 2010/03/10_11:21:08 ERROR: unpack_rsc_op: Remapping IPaddr2_1_monitor_0 (rc=2) on server1 to an ERROR
    crm_verify[2457]: 2010/03/10_11:21:08 WARN: unpack_rsc_op: Processing failed op IPaddr2_1_monitor_0 on server1: Error
    crm_verify[2457]: 2010/03/10_11:21:08 ERROR: unpack_rsc_op: Remapping IPaddr2_1_stop_0 (rc=2) on server1 to an ERROR
    crm_verify[2457]: 2010/03/10_11:21:08 WARN: unpack_rsc_op: Processing failed op IPaddr2_1_stop_0 on server1: Error
    crm_verify[2457]: 2010/03/10_11:21:08 WARN: unpack_rsc_op: Compatability handling for failed op IPaddr2_1_stop_0 on server1
    crm_verify[2457]: 2010/03/10_11:21:08 ERROR: unpack_rsc_op: Remapping IPaddr2_1_monitor_0 (rc=2) on server2 to an ERROR
    crm_verify[2457]: 2010/03/10_11:21:08 WARN: unpack_rsc_op: Processing failed op IPaddr2_1_monitor_0 on server2: Error
    crm_verify[2457]: 2010/03/10_11:21:08 ERROR: unpack_rsc_op: Remapping IPaddr2_1_stop_0 (rc=2) on server2 to an ERROR
    crm_verify[2457]: 2010/03/10_11:21:08 WARN: unpack_rsc_op: Processing failed op IPaddr2_1_stop_0 on server2: Error
    crm_verify[2457]: 2010/03/10_11:21:08 WARN: unpack_rsc_op: Compatability handling for failed op IPaddr2_1_stop_0 on server2
    crm_verify[2457]: 2010/03/10_11:21:08 WARN: native_color: Resource IPaddr2_1 cannot run anywhere
    crm_verify[2457]: 2010/03/10_11:21:08 WARN: native_color: Resource mysql_2 cannot run anywhere
    crm_verify[2457]: 2010/03/10_11:21:08 ERROR: native_create_actions: Attempting recovery of resource IPaddr2_1
    crm_verify[2457]: 2010/03/10_11:21:08 WARN: custom_action: Action IPaddr2_1_stop_0 (unmanaged)
    crm_verify[2457]: 2010/03/10_11:21:08 WARN: custom_action: Action IPaddr2_1_stop_0 (unmanaged)
    Warnings found during check: config may not be valid
    It would be nice if someone could help me pick this apart.


    Regards, Stefan


  3. #3
    Registered user Huhn Hur Tu (Karlsruhe, joined Nov 2003, 2,256 posts)
    In haresources I swapped eth0 for eth1.

    crm_verify -LV now reports:


    Code:
    server2:~# crm_verify -LV
    crm_verify[2479]: 2010/03/10_12:10:16 ERROR: unpack_rsc_op: Remapping IPaddr2_1_monitor_0 (rc=2) on server2 to an ERROR
    crm_verify[2479]: 2010/03/10_12:10:16 WARN: unpack_rsc_op: Processing failed op IPaddr2_1_monitor_0 on server2: Error
    crm_verify[2479]: 2010/03/10_12:10:16 ERROR: unpack_rsc_op: Remapping IPaddr2_1_stop_0 (rc=2) on server2 to an ERROR
    crm_verify[2479]: 2010/03/10_12:10:16 WARN: unpack_rsc_op: Processing failed op IPaddr2_1_stop_0 on server2: Error
    crm_verify[2479]: 2010/03/10_12:10:16 WARN: unpack_rsc_op: Compatability handling for failed op IPaddr2_1_stop_0 on server2
    crm_verify[2479]: 2010/03/10_12:10:16 ERROR: unpack_rsc_op: Remapping IPaddr2_1_monitor_0 (rc=2) on server1 to an ERROR
    crm_verify[2479]: 2010/03/10_12:10:16 WARN: unpack_rsc_op: Processing failed op IPaddr2_1_monitor_0 on server1: Error
    crm_verify[2479]: 2010/03/10_12:10:16 ERROR: unpack_rsc_op: Remapping IPaddr2_1_stop_0 (rc=2) on server1 to an ERROR
    crm_verify[2479]: 2010/03/10_12:10:16 WARN: unpack_rsc_op: Processing failed op IPaddr2_1_stop_0 on server1: Error
    crm_verify[2479]: 2010/03/10_12:10:16 WARN: unpack_rsc_op: Compatability handling for failed op IPaddr2_1_stop_0 on server1
    crm_verify[2479]: 2010/03/10_12:10:16 WARN: native_color: Resource IPaddr2_1 cannot run anywhere
    crm_verify[2479]: 2010/03/10_12:10:16 WARN: native_color: Resource mysql_2 cannot run anywhere
    crm_verify[2479]: 2010/03/10_12:10:16 ERROR: native_create_actions: Attempting recovery of resource IPaddr2_1
    crm_verify[2479]: 2010/03/10_12:10:16 WARN: custom_action: Action IPaddr2_1_stop_0 (unmanaged)
    crm_verify[2479]: 2010/03/10_12:10:16 WARN: custom_action: Action IPaddr2_1_stop_0 (unmanaged)
    Warnings found during check: config may not be valid


  4. #4
    Registered user Radab (joined Mar 2010, 18 posts)
    So, I found one error. You have:
    Code:
    <op id="IPaddr2_1_mon" interval="5s" name="monitor" timeout="5s"/>
    interval 5s with timeout 5s: change these values to interval 5s and timeout 10s, and that error is already gone.
    The mysql resource cannot work as it stands, because the required configuration details are missing. You can, however, use the lsb class instead of ocf.
    Try this:
    For the IP
    Code:
    <primitive class="ocf" id="IP" provider="heartbeat" type="IPaddr2">
            <meta_attributes id="IP-meta_attributes">
              <nvpair id="IP-meta_attributes-target-role" name="target-role" value="started"/>
            </meta_attributes>
            <operations id="IP-operations">
              <op id="IP-op-monitor-10s" interval="10s" name="monitor" timeout="20s"/>
            </operations>
            <instance_attributes id="IP-instance_attributes">
              <nvpair id="IP-instance_attributes-ip" name="ip" value="192.168.1.253"/>
              <nvpair id="IP-instance_attributes-nic" name="nic" value="eth0"/>
              <nvpair id="IP-instance_attributes-cidr_netmask" name="cidr_netmask" value="24"/>
            </instance_attributes>
          </primitive>
    For MySQL
    Code:
    <primitive class="lsb" id="Mysql" type="mysql">
            <meta_attributes id="Mysql-meta_attributes">
              <nvpair id="Mysql-meta_attributes-target-role" name="target-role" value="started"/>
            </meta_attributes>
            <operations id="Mysql-operations">
              <op id="Mysql-op-monitor-15" interval="15" name="monitor" start-delay="16" timeout="20"/>
            </operations>
          </primitive>
    For the resource location
    Code:
    <rsc_location id="rsc_location_group_1" node="server1" rsc="group_1" score="100"/>
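    One assumption behind the lsb class above: the init script must be LSB-compliant, meaning status returns exit code 0 while the daemon is running and 3 when it is stopped, otherwise the monitor op reports nonsense. A quick manual check on each node (the usual Debian script path; verify it on your system):
    Code:
    /etc/init.d/mysql status; echo "rc=$?"   # expect rc=0 while running
    /etc/init.d/mysql stop
    /etc/init.d/mysql status; echo "rc=$?"   # expect rc=3 when stopped
    /etc/init.d/mysql start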
    In ha.cf, change
    Code:
    node server1
    node server2
    to
    Code:
    node server1 server2
    and change crm from yes to on.
    One more thing I found: you do not need the script step:
    Code:
    /usr/lib/heartbeat/haresources2cib.py /etc/ha.d/haresources
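    Once the configuration is fixed, note that the failed monitor/stop records stay in the CIB status section until you clear them, which is why crm_verify keeps warning. Clearing them should work with crm_resource cleanup (Heartbeat 2.1.x syntax from memory; check crm_resource --help on your build):
    Code:
    crm_resource -C -r IPaddr2_1 -H server1
    crm_resource -C -r IPaddr2_1 -H server2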
    Last edited by Radab (10.03.10, 12:03)

  5. #5
    Registered user Huhn Hur Tu (Karlsruhe, joined Nov 2003, 2,256 posts)
    MySQL still does not start.
    I did get hold of a monitor after all and installed the GUI on one machine. It is a bit confusing: both machines are shown as stopped, even though heartbeat is running on both.

    Code:
    heartbeat OK [pid 2770 et al] is running on server1 [server1]...
    
    
     2770 ?        S<Ls   0:00 heartbeat: master control process
     2773 ?        S<L    0:00  \_ heartbeat: FIFO reader
     2774 ?        S<L    0:00  \_ heartbeat: write: bcast eth1
     2775 ?        S<L    0:00  \_ heartbeat: read: bcast eth1
     2776 ?        S<L    0:00  \_ heartbeat: write: mcast eth1
     2777 ?        S<L    0:00  \_ heartbeat: read: mcast eth1
     2778 ?        S<L    0:00  \_ heartbeat: write: ucast eth1
     2779 ?        S<L    0:00  \_ heartbeat: read: ucast eth1
     2782 ?        S<     0:00  \_ /usr/lib/heartbeat/ccm
     2783 ?        S<     0:00  \_ /usr/lib/heartbeat/cib
     2784 ?        S<     0:00  \_ /usr/lib/heartbeat/lrmd -r
     2785 ?        S<L    0:00  \_ /usr/lib/heartbeat/stonithd
     2786 ?        S<     0:00  \_ /usr/lib/heartbeat/attrd
     2787 ?        S<     0:00  \_ /usr/lib/heartbeat/crmd
     2788 ?        S<     0:01  \_ /usr/lib/heartbeat/mgmtd -v

    Regards, Stefan
    Code:
    debugfile /var/log/ha-debug
    logfile /var/log/ha-log
    logfacility local0
    bcast eth1
    mcast eth1 225.0.0.1 694 1 0
    ucast eth1 192.168.1.18
    crm on
    keepalive 5
    warntime 10
    deadtime 120
    initdead 120
    auto_failback off
    node server1 server2
    #apiauth mgmtd uid=root
    #respawn         root    /usr/lib/heartbeat/mgmtd -v
    respawn hacluster /usr/lib/heartbeat/ipfail
    apiauth ipfail gid=haclient uid=hacluster
    Code:
     <cib admin_epoch="0" epoch="1" generated="false" have_quorum="true" ignore_dtd="false" num_peers="0" cib_feature_revision="2.0" num_updates="3" cib-last-written="Wed Mar 10 13:21:03 2010" ccm_transition="1">
       <configuration>
         <crm_config>
           <cluster_property_set id="cib-bootstrap-options">
             <attributes>
               <nvpair id="cib-bootstrap-options-symmetric-cluster" name="symmetric-cluster" value="true"/>
               <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="stop"/>
               <nvpair id="cib-bootstrap-options-default-resource-stickiness" name="default-resource-stickiness" value="0"/>
               <nvpair id="cib-bootstrap-options-default-resource-failure-stickiness" name="default-resource-failure-stickiness" value="0"/>
               <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="false"/>
               <nvpair id="cib-bootstrap-options-stonith-action" name="stonith-action" value="reboot"/>
               <nvpair id="cib-bootstrap-options-startup-fencing" name="startup-fencing" value="true"/>
               <nvpair id="cib-bootstrap-options-stop-orphan-resources" name="stop-orphan-resources" value="true"/>
               <nvpair id="cib-bootstrap-options-stop-orphan-actions" name="stop-orphan-actions" value="true"/>
               <nvpair id="cib-bootstrap-options-remove-after-stop" name="remove-after-stop" value="false"/>
               <nvpair id="cib-bootstrap-options-short-resource-names" name="short-resource-names" value="true"/>
               <nvpair id="cib-bootstrap-options-transition-idle-timeout" name="transition-idle-timeout" value="5min"/>
               <nvpair id="cib-bootstrap-options-default-action-timeout" name="default-action-timeout" value="20s"/>
               <nvpair id="cib-bootstrap-options-is-managed-default" name="is-managed-default" value="true"/>
               <nvpair id="cib-bootstrap-options-cluster-delay" name="cluster-delay" value="60s"/>
               <nvpair id="cib-bootstrap-options-pe-error-series-max" name="pe-error-series-max" value="-1"/>
               <nvpair id="cib-bootstrap-options-pe-warn-series-max" name="pe-warn-series-max" value="-1"/>
               <nvpair id="cib-bootstrap-options-pe-input-series-max" name="pe-input-series-max" value="-1"/>
               <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="2.1.3-node: 552305612591183b1628baa5bc6e903e0f1e26a3"/>
             </attributes>
           </cluster_property_set>
         </crm_config>
         <nodes>
           <node id="a8cc6b4a-9b5d-409f-8654-8e2a90cbf1e0" uname="server2" type="normal"/>
           <node id="7d026928-b718-4d28-b3e6-22ee3cd9ab50" uname="server1" type="normal"/>
         </nodes>
         <resources>
           <group id="group_1">
             <primitive class="ocf" id="IP" provider="heartbeat" type="IPaddr2">
              <meta_attributes id="IP-meta_attributes">
               <nvpair id="IP-meta_attributes-target-role" name="target-role" value="started"/>
                </meta_attributes>
                <operations id="IP-operations">
                 <op id="IP-op-monitor-10s" interval="10s" name="monitor" timeout="20s"/>
                </operations>
                <instance_attributes id="IP-instance_attributes">
                <nvpair id="IP-instance_attributes-ip" name="ip" value="192.168.1.253"/>
                <nvpair id="IP-instance_attributes-nic" name="nic" value="eth0"/>
                <nvpair id="IP-instance_attributes-cidr_netmask" name="cidr_netmask" value="24"/>
                </instance_attributes>
          </primitive>
          <primitive class="lsb" id="Mysql" type="mysql">
             <meta_attributes id="Mysql-meta_attributes">
               <nvpair id="Mysql-meta_attributes-target-role" name="target-role" value="started"/>
               </meta_attributes>
                <operations id="Mysql-operations">
               <op id="Mysql-op-monitor-15" interval="15" name="monitor" start-delay="16" timeout="20"/>
               </operations>
          </primitive>
           </group>
         </resources>
         <constraints>
             <rsc_location id="rsc_location_group_1" node="server1" rsc="group_1" score="100"/>
             <rule id="prefered_location_group_1" score="100">
               <expression attribute="#uname" id="prefered_location_group_1_expr" operation="eq" value="server1"/>
    Code:
    crm_verify[2685]: 2010/03/10_13:42:12 ERROR: unpack_rsc_op: Remapping IPaddr2_1_monitor_0 (rc=2) on server2 to an ERROR
    crm_verify[2685]: 2010/03/10_13:42:12 WARN: unpack_rsc_op: Processing failed op IPaddr2_1_monitor_0 on server2: Error
    crm_verify[2685]: 2010/03/10_13:42:12 ERROR: unpack_rsc_op: Remapping IPaddr2_1_stop_0 (rc=2) on server2 to an ERROR
    crm_verify[2685]: 2010/03/10_13:42:12 WARN: unpack_rsc_op: Processing failed op IPaddr2_1_stop_0 on server2: Error
    crm_verify[2685]: 2010/03/10_13:42:12 WARN: unpack_rsc_op: Compatability handling for failed op IPaddr2_1_stop_0 on server2
    crm_verify[2685]: 2010/03/10_13:42:12 WARN: native_color: Resource IPaddr2_1 cannot run anywhere
    crm_verify[2685]: 2010/03/10_13:42:12 WARN: native_color: Resource mysql_2 cannot run anywhere
    crm_verify[2685]: 2010/03/10_13:42:12 WARN: custom_action: Action IPaddr2_1_stop_0 (unmanaged)
    crm_verify[2685]: 2010/03/10_13:42:12 WARN: should_dump_action: action 4 (IPaddr2_1_stop_0) was for an unmanaged resource (IPaddr2_1)
    crm_verify[2685]: 2010/03/10_13:42:12 WARN: should_dump_action: action 4 (IPaddr2_1_stop_0) was for an unmanaged resource (IPaddr2_1)
    crm_verify[2685]: 2010/03/10_13:42:12 WARN: should_dump_action: action 4 (IPaddr2_1_stop_0) was for an unmanaged resource (IPaddr2_1)
    Warnings found during check: config may not be valid
    Last edited by Huhn Hur Tu (10.03.10, 13:02)


  6. #6
    Registered user Radab (joined Mar 2010, 18 posts)
    Good, so you do not need the monitor.
    Enter the following command in a console on your PC:
    Code:
    ssh -C -X root@192.168.1.156
    (adjust IP and username). Then, in the console of the server you just connected to, type hb_gui and press Enter.

    Before you do that, set the password for hacluster:
    Code:
    passwd hacluster #then set a password, on both servers
    Once you are logged in to the GUI (the heartbeat GUI is not good, but fine for testing), both servers should be recognizable as nodes, i.e. listed. Then the communication between the two is already in order.
    Delete all resources and rsc_locations.
    Then first try to create an IP resource via the GUI; if that works, create an rsc_location for the IP. If that works, give me a quick heads-up.

  7. #7
    Registered user Huhn Hur Tu (Karlsruhe, joined Nov 2003, 2,256 posts)
    10characters!!1
    Last edited by Huhn Hur Tu (10.03.10, 14:10)


  8. #8
    Registered user Huhn Hur Tu (Karlsruhe, joined Nov 2003, 2,256 posts)
    I started with the IP.
    The thing complains:

    Update does not conform to the DTD in /usr/share/heartbeat/crm.dtd

    no matter which changes I make.


  9. #9
    Registered user Huhn Hur Tu (Karlsruhe, joined Nov 2003, 2,256 posts)
    Now it no longer complains.
    I deleted the group myself, and since then both servers are listed as running, one as DC and the other normal.

    How do I create the group here? I know it sounds like "take me by the hand", but... please just tell me.

    Regards, Stefan


    Here is the cib.xml after I threw everything out.

    Oh, and the GUI showed me an interesting error:
    the nic and cidr values given in haresources were the wrong way round. So it had to be

    server1 \
    IPaddr2::192.168.1.253/24/eth1/192.168.1.255
    instead of
    server1 \
    IPaddr2::192.168.1.253/eth1/24/192.168.1.255
    Code:
     
    <cib generated="true" admin_epoch="0" epoch="1" have_quorum="true" ignore_dtd="false" num_peers="2" ccm_transition="2" cib_feature_revision="2.0" dc_uuid="a8cc6b4a-9b5d-409f-8654-8e2a90cbf1e0" num_updates="4" cib-last-written="Wed Mar 10 15:31:07 2010">
       <configuration>
         <crm_config>
           <cluster_property_set id="cib-bootstrap-options">
             <attributes>
               <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="2.1.3-node: 552305612591183b1628baa5bc6e903e0f1e26a3"/>
             </attributes>
           </cluster_property_set>
         </crm_config>
         <nodes>
           <node id="a8cc6b4a-9b5d-409f-8654-8e2a90cbf1e0" uname="server2" type="normal"/>
           <node id="7d026928-b718-4d28-b3e6-22ee3cd9ab50" uname="server1" type="normal"/>
         </nodes>
         <resources/>
         <constraints/>
       </configuration>
     </cib>

    I have moved the haresources file out of the way for now, so that it cannot influence any of this.

    And switching between master and standby from the GUI now works; before, the thing always hung.
    Last edited by Huhn Hur Tu (10.03.10, 14:41)


  10. #10
    Registered user Radab (joined Mar 2010, 18 posts)
    See, it works after all.
    Whether creating a group with the heartbeat GUI works, I do not know; the GUI is old and has many bugs.
    First create a single IP resource with a location.
    Once that works and also switches over, do the following:
    Wipe both servers.
    Do a minimal Debian installation and install Pacemaker and the Pacemaker GUI.
    I described how to do that here:
    http://linuxforen.de/forums/showthre...=266719&page=2

    Then you have a decent GUI that also lets you inspect the XML.
    That way you quickly see what goes into the XML and how it has to be structured there.
    I reinstalled my servers a good 10-13 times. Now everything runs perfectly, though.
    You always pay your dues, but once it runs it is a great feeling.

  11. #11
    Linuxer HBtux ($HOME, joined Oct 2009, 315 posts)
    Quote from Huhn Hur Tu:
    auto eth0
    iface eth0 inet static
    address 192.168.200.1
    network 192.168.200.0
    netmask 255.255.255.0
    broadcast 192.168.200.255

    auto lo:0
    iface lo:0 inet static
    address 192.168.1.253
    netmask 255.255.255.255
    pre-up sysctl -p > /dev/null
    Is the IP 192.168.1.253 also on the interface lo:0?
    That IP would then be assigned twice...

    Quote from Huhn Hur Tu:
    bcast eth1
    mcast eth1 225.0.0.1 694 1 0
    ucast eth1 192.168.1.18
    Your heartbeat communicates over three parallel paths within the cluster
    (bcast, mcast, and ucast).
    Three variants in parallel over a single interface is surely fairly pointless.
    One variant per interface is completely sufficient.
    Sending everything three times only creates unnecessary load.
    Best regards
    HBtux
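    A minimal communication setup in ha.cf would then look like this (one path only, here ucast; the peer IP has to be adjusted on each node):
    Code:
    # keep exactly one of bcast / mcast / ucast
    ucast eth1 192.168.1.18   # on server1; on server2 point it at 192.168.1.17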

  12. #12
    Registered user Huhn Hur Tu (Karlsruhe, joined Nov 2003, 2,256 posts)
    So, I have removed lo:0 for now.
    If I create an IP "native", i.e. not in a group, without further parameters, the resource starts right away.
    Same with mysql, but I want MySQL to be kept alive not only on one side but on both, because of the master-master replication.
    Only the requests should switch over in case of failure.
    So basically everything works, just not yet the way it should :-)
    For this step I started with an empty cib.xml and then, in hb_gui under linux_ha, set everything at the very top under Configurations and Advanced to the defaults. After that my two servers were active and I could switch active and standby. That is where things stand now.

    Regards, Stefan

    It looks like Pacemaker will have to wait. But this weekend I am at the Linux day in Chemnitz, where the first talk is all about HA; let's see what comes of that.


  13. #13
    Registered user Huhn Hur Tu (Karlsruhe, joined Nov 2003, 2,256 posts)
    Quote from HBtux:
    Your heartbeat communicates over three parallel paths within the cluster
    (bcast, mcast, and ucast).
    Three variants in parallel over a single interface is surely fairly pointless.
    One variant per interface is completely sufficient.
    Sending everything three times only creates unnecessary load.
    So that means I only need one of the three?

    Regards, Stefan


  14. #14
    Registered user Huhn Hur Tu (Karlsruhe, joined Nov 2003, 2,256 posts)
    To put my question more clearly: how do I tell Heartbeat to start some services on both servers and others only on the active one?
    Does ldirectord happen to do that?

    Regards, Stefan

    P.S.
    The virtual IP / MySQL / filesystem run on whichever node is currently active.


  15. #15
    Registered user Radab (joined Mar 2010, 18 posts)
    So, when a resource is supposed to exist twice (on both nodes), that is called a multi-state resource. DRBD, for example, is one: master on one node, slave on the other.
    Which filesystem do you want to use?
    Should both nodes access the MySQL data simultaneously?

    For the communication in the cluster you only need one of the three (bcast/ucast/mcast).

    That IP/filesystem/MySQL are running is already good. Via the GUI or by hand?
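    A sketch of what a clone for MySQL could look like in the DTD-era CIB used here. This is untested, and the exact clone attribute names may differ between Heartbeat 2.1.x releases, so treat it only as a starting point:
    Code:
    <clone id="Mysql-clone">
      <instance_attributes id="Mysql-clone-ia">
        <attributes>
          <nvpair id="Mysql-clone-max" name="clone_max" value="2"/>
          <nvpair id="Mysql-clone-node-max" name="clone_node_max" value="1"/>
        </attributes>
      </instance_attributes>
      <primitive class="lsb" id="Mysql-child" type="mysql">
        <operations id="Mysql-child-ops">
          <op id="Mysql-child-op-monitor-15" interval="15" name="monitor" timeout="20"/>
        </operations>
      </primitive>
    </clone>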
