DES in JavaScript

// Read the salt and password from the CRYPT form, hash them,
// and write the result back into the form fields.
function dP(){
  salt=document.CRYPT.Salt.value;
  pw_salt=this.crypt(salt,document.CRYPT.PW.value);

  document.CRYPT.ENC_PW.value=pw_salt[0];
  document.CRYPT.Salt.value=pw_salt[1];
  return false;
}

function bTU(b){
      value=Math.floor(b);
      return (value>=0?value:value+256);
}
function fBTI(b,offset){
      value=this.byteToUnsigned(b[offset++]);
      value|=(this.byteToUnsigned(b[offset++])<<8);
      value|=(this.byteToUnsigned(b[offset++])<<16);
      value|=(this.byteToUnsigned(b[offset++])<<24);
      return value;
}
function iTFB(iValue,b,offset){
      b[offset++]=((iValue)&0xff);
      b[offset++]=((iValue>>>8)&0xff);
      b[offset++]=((iValue>>>16)&0xff);
      b[offset++]=((iValue>>>24)&0xff);
}
function P_P(a,b,n,m,results){
      t=((a>>>n)^b)&m;
      a^=t<<n;
      b^=t;
      results[0]=a;
      results[1]=b;
}
function H_P(a,n,m){
      t=((a<<(16-n))^a)&m;
      a=a^t^(t>>>(16-n));
      return a;
}
function d_s_k(key){
      schedule=new Array(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0);
      c=this.fourBytesToInt(key,0);
      d=this.fourBytesToInt(key,4);
      results=new Array(0,0);
      this.PERM_OP(d,c,4,0x0f0f0f0f,results);
      d=results[0];c=results[1];
      c=this.HPERM_OP(c,-2,0xcccc0000);
      d=this.HPERM_OP(d,-2,0xcccc0000);
      this.PERM_OP(d,c,1,0x55555555,results);
      d=results[0];c=results[1];
      this.PERM_OP(c,d,8,0x00ff00ff,results);
      c=results[0];d=results[1];
      this.PERM_OP(d,c,1,0x55555555,results);
      d=results[0];c=results[1];
      d=(((d&0x000000ff)<<16)|(d&0x0000ff00)|((d&0x00ff0000)>>>16)|((c&0xf0000000)>>>4));
      c&=0x0fffffff;
      s=0;t=0;
      j=0;
      for(i=0;i<this.ITERATIONS;i++){
         if(this.shifts2[i]){
            c=(c>>>2)|(c<<26);
            d=(d>>>2)|(d<<26);
         }else{
            c=(c>>>1)|(c<<27);
            d=(d>>>1)|(d<<27);
         }
         c&=0x0fffffff;
         d&=0x0fffffff;
         s=this.skb[0][c&0x3f]|this.skb[1][((c>>>6)&0x03)|((c>>>7)&0x3c)]|this.skb[2][((c>>>13)&0x0f)|((c>>>14)&0x30)]|this.skb[3][((c>>>20)&0x01)|((c>>>21)&0x06)|((c>>>22)&0x38)];
         t=this.skb[4][d&0x3f]|this.skb[5][((d>>>7)&0x03)|((d>>>8)&0x3c)]|this.skb[6][(d>>>15)&0x3f]|this.skb[7][((d>>>21)&0x0f)|((d>>>22)&0x30)];
         schedule[j++]=((t<< 16)|(s&0x0000ffff))&0xffffffff;
         s=((s>>>16)|(t&0xffff0000));
         s=(s<<4)|(s>>>28);
         schedule[j++]=s&0xffffffff;
      }
      return schedule;
}
function D_E(L,R,S,E0,E1,s){
      v=R^(R>>>16);
      u=v&E0;
      v=v&E1;
      u=(u^(u<<16))^R^s[S];
      t=(v^(v<<16))^R^s[S+1];
      t=(t>>>4)|(t<<28);
      L^=this.SPtrans[1][t&0x3f]|this.SPtrans[3][(t>>>8)&0x3f]|this.SPtrans[5][(t>>>16)&0x3f]|this.SPtrans[7][(t>>>24)&0x3f]|this.SPtrans[0][u&0x3f]|this.SPtrans[2][(u>>>8)&0x3f]|this.SPtrans[4][(u>>>16)&0x3f]|this.SPtrans[6][(u>>>24)&0x3f];
      return L;
}
function bdy(schedule,Eswap0,Eswap1) {
  left=0;
  right=0;
  t=0;
      for(j=0;j<25;j++){
         for(i=0;i<this.ITERATIONS*2;i+=4){
            left=this.D_ENCRYPT(left, right,i,Eswap0,Eswap1,schedule);
            right=this.D_ENCRYPT(right,left,i+2,Eswap0,Eswap1,schedule);
         }
         t=left; 
         left=right; 
         right=t;
      }
      t=right;
      right=(left>>>1)|(left<<31);
      left=(t>>>1)|(t<<31);
      left&=0xffffffff;
      right&=0xffffffff;
      results=new Array(0,0);
      this.PERM_OP(right,left,1,0x55555555,results)
      right=results[0];left=results[1];
      this.PERM_OP(left,right,8,0x00ff00ff,results)
      left=results[0];right=results[1];
      this.PERM_OP(right,left,2,0x33333333,results)
      right=results[0];left=results[1];
      this.PERM_OP(left,right,16,0x0000ffff,results);
      left=results[0];right=results[1];
      this.PERM_OP(right,left,4,0x0f0f0f0f,results);
      right=results[0];left=results[1];
      out=new Array(0,0);
      out[0]=left;out[1]=right;
      return out;
}
function rC(){ return this.GOODCHARS[Math.floor(64*Math.random())]}
function cript(salt,original){
  if(salt.length>=2) salt=salt.substring(0,2);
  while(salt.length<2) salt+=this.randChar();
  re=new RegExp("[^./a-zA-Z0-9]","g");
  if(re.test(salt)) salt=this.randChar()+this.randChar();
  charZero=salt.charAt(0)+"";
  charOne=salt.charAt(1)+"";
  ccZ=charZero.charCodeAt(0);
  ccO=charOne.charCodeAt(0);
  buffer=charZero+charOne+"           ";
      Eswap0=this.con_salt[ccZ];
      Eswap1=this.con_salt[ccO]<<4;
      key=new Array(0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0);
      for(i=0;i<key.length&&i<original.length;i++){
         iChar=original.charCodeAt(i);
         key[i]=iChar<<1;
      }
      schedule=this.des_set_key(key);
      out=this.body(schedule,Eswap0,Eswap1);
      b=new Array(0,0,0,0,0,0,0,0,0);
      this.intToFourBytes(out[0],b,0);
      this.intToFourBytes(out[1],b,4);
      b[8]=0;
      for(i=2,y=0,u=0x80;i<13;i++){
         for(j=0,c=0;j<6;j++){
            c<<=1;
            if((b[y]&u)!=0) c|=1;
            u>>>=1;
            if(u==0){
               y++;
               u=0x80;
            }
            buffer=buffer.substring(0,i)+String.fromCharCode(this.cov_2char[c])+buffer.substring(i+1,buffer.length);
         }
      }
  ret=new Array(buffer,salt);
      return ret;
}

function Crypt() {
this.ITERATIONS=16;
this.GOODCHARS=new Array(
  ".","/","0","1","2","3","4","5","6","7",
  "8","9","A","B","C","D","E","F","G","H",
  "I","J","K","L","M","N","O","P","Q","R",
  "S","T","U","V","W","X","Y","Z","a","b",
  "c","d","e","f","g","h","i","j","k","l",
  "m","n","o","p","q","r","s","t","u","v",
  "w","x","y","z");
this.con_salt=new Array(
  0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
  0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
  0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
  0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
  0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
  0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x01,
  0x02,0x03,0x04,0x05,0x06,0x07,0x08,0x09,
  0x0A,0x0B,0x05,0x06,0x07,0x08,0x09,0x0A,
  0x0B,0x0C,0x0D,0x0E,0x0F,0x10,0x11,0x12,
  0x13,0x14,0x15,0x16,0x17,0x18,0x19,0x1A,
  0x1B,0x1C,0x1D,0x1E,0x1F,0x20,0x21,0x22,
  0x23,0x24,0x25,0x20,0x21,0x22,0x23,0x24,
  0x25,0x26,0x27,0x28,0x29,0x2A,0x2B,0x2C,
  0x2D,0x2E,0x2F,0x30,0x31,0x32,0x33,0x34,
  0x35,0x36,0x37,0x38,0x39,0x3A,0x3B,0x3C,
  0x3D,0x3E,0x3F,0x00,0x00,0x00,0x00,0x00 );
this.shifts2=new Array(
  false,false,true,true,true,true,true,true,
  false,true, true,true,true,true,true,false );
this.skb=new Array(0,0,0,0,0,0,0,0);
  this.skb[0]=new Array(
         0x00000000,0x00000010,0x20000000,0x20000010,
         0x00010000,0x00010010,0x20010000,0x20010010,
         0x00000800,0x00000810,0x20000800,0x20000810,
         0x00010800,0x00010810,0x20010800,0x20010810,
         0x00000020,0x00000030,0x20000020,0x20000030,
         0x00010020,0x00010030,0x20010020,0x20010030,
         0x00000820,0x00000830,0x20000820,0x20000830,
         0x00010820,0x00010830,0x20010820,0x20010830,
         0x00080000,0x00080010,0x20080000,0x20080010,
         0x00090000,0x00090010,0x20090000,0x20090010,
         0x00080800,0x00080810,0x20080800,0x20080810,
         0x00090800,0x00090810,0x20090800,0x20090810,
         0x00080020,0x00080030,0x20080020,0x20080030,
         0x00090020,0x00090030,0x20090020,0x20090030,
         0x00080820,0x00080830,0x20080820,0x20080830,
         0x00090820,0x00090830,0x20090820,0x20090830 );
  this.skb[1]=new Array(
         0x00000000,0x02000000,0x00002000,0x02002000,
         0x00200000,0x02200000,0x00202000,0x02202000,
         0x00000004,0x02000004,0x00002004,0x02002004,
         0x00200004,0x02200004,0x00202004,0x02202004,
         0x00000400,0x02000400,0x00002400,0x02002400,
         0x00200400,0x02200400,0x00202400,0x02202400,
         0x00000404,0x02000404,0x00002404,0x02002404,
         0x00200404,0x02200404,0x00202404,0x02202404,
         0x10000000,0x12000000,0x10002000,0x12002000,
         0x10200000,0x12200000,0x10202000,0x12202000,
         0x10000004,0x12000004,0x10002004,0x12002004,
         0x10200004,0x12200004,0x10202004,0x12202004,
         0x10000400,0x12000400,0x10002400,0x12002400,
         0x10200400,0x12200400,0x10202400,0x12202400,
         0x10000404,0x12000404,0x10002404,0x12002404,
         0x10200404,0x12200404,0x10202404,0x12202404 );
  this.skb[2]=new Array(
         0x00000000,0x00000001,0x00040000,0x00040001,
         0x01000000,0x01000001,0x01040000,0x01040001,
         0x00000002,0x00000003,0x00040002,0x00040003,
         0x01000002,0x01000003,0x01040002,0x01040003,
         0x00000200,0x00000201,0x00040200,0x00040201,
         0x01000200,0x01000201,0x01040200,0x01040201,
         0x00000202,0x00000203,0x00040202,0x00040203,
         0x01000202,0x01000203,0x01040202,0x01040203,
         0x08000000,0x08000001,0x08040000,0x08040001,
         0x09000000,0x09000001,0x09040000,0x09040001,
         0x08000002,0x08000003,0x08040002,0x08040003,
         0x09000002,0x09000003,0x09040002,0x09040003,
         0x08000200,0x08000201,0x08040200,0x08040201,
         0x09000200,0x09000201,0x09040200,0x09040201,
         0x08000202,0x08000203,0x08040202,0x08040203,
         0x09000202,0x09000203,0x09040202,0x09040203 );
  this.skb[3]=new Array(
         0x00000000,0x00100000,0x00000100,0x00100100,
         0x00000008,0x00100008,0x00000108,0x00100108,
         0x00001000,0x00101000,0x00001100,0x00101100,
         0x00001008,0x00101008,0x00001108,0x00101108,
         0x04000000,0x04100000,0x04000100,0x04100100,
         0x04000008,0x04100008,0x04000108,0x04100108,
         0x04001000,0x04101000,0x04001100,0x04101100,
         0x04001008,0x04101008,0x04001108,0x04101108,
         0x00020000,0x00120000,0x00020100,0x00120100,
         0x00020008,0x00120008,0x00020108,0x00120108,
         0x00021000,0x00121000,0x00021100,0x00121100,
         0x00021008,0x00121008,0x00021108,0x00121108,
         0x04020000,0x04120000,0x04020100,0x04120100,
         0x04020008,0x04120008,0x04020108,0x04120108,
         0x04021000,0x04121000,0x04021100,0x04121100,
         0x04021008,0x04121008,0x04021108,0x04121108 );
  this.skb[4]=new Array(
         0x00000000,0x10000000,0x00010000,0x10010000,
         0x00000004,0x10000004,0x00010004,0x10010004,
         0x20000000,0x30000000,0x20010000,0x30010000,
         0x20000004,0x30000004,0x20010004,0x30010004,
         0x00100000,0x10100000,0x00110000,0x10110000,
         0x00100004,0x10100004,0x00110004,0x10110004,
         0x20100000,0x30100000,0x20110000,0x30110000,
         0x20100004,0x30100004,0x20110004,0x30110004,
         0x00001000,0x10001000,0x00011000,0x10011000,
         0x00001004,0x10001004,0x00011004,0x10011004,
         0x20001000,0x30001000,0x20011000,0x30011000,
         0x20001004,0x30001004,0x20011004,0x30011004,
         0x00101000,0x10101000,0x00111000,0x10111000,
         0x00101004,0x10101004,0x00111004,0x10111004,
         0x20101000,0x30101000,0x20111000,0x30111000,
         0x20101004,0x30101004,0x20111004,0x30111004 );
  this.skb[5]=new Array(
         0x00000000,0x08000000,0x00000008,0x08000008,
         0x00000400,0x08000400,0x00000408,0x08000408,
         0x00020000,0x08020000,0x00020008,0x08020008,
         0x00020400,0x08020400,0x00020408,0x08020408,
         0x00000001,0x08000001,0x00000009,0x08000009,
         0x00000401,0x08000401,0x00000409,0x08000409,
         0x00020001,0x08020001,0x00020009,0x08020009,
         0x00020401,0x08020401,0x00020409,0x08020409,
         0x02000000,0x0A000000,0x02000008,0x0A000008,
         0x02000400,0x0A000400,0x02000408,0x0A000408,
         0x02020000,0x0A020000,0x02020008,0x0A020008,
         0x02020400,0x0A020400,0x02020408,0x0A020408,
         0x02000001,0x0A000001,0x02000009,0x0A000009,
         0x02000401,0x0A000401,0x02000409,0x0A000409,
         0x02020001,0x0A020001,0x02020009,0x0A020009,
         0x02020401,0x0A020401,0x02020409,0x0A020409 );
  this.skb[6]=new Array(
         0x00000000,0x00000100,0x00080000,0x00080100,
         0x01000000,0x01000100,0x01080000,0x01080100,
         0x00000010,0x00000110,0x00080010,0x00080110,
         0x01000010,0x01000110,0x01080010,0x01080110,
         0x00200000,0x00200100,0x00280000,0x00280100,
         0x01200000,0x01200100,0x01280000,0x01280100,
         0x00200010,0x00200110,0x00280010,0x00280110,
         0x01200010,0x01200110,0x01280010,0x01280110,
         0x00000200,0x00000300,0x00080200,0x00080300,
         0x01000200,0x01000300,0x01080200,0x01080300,
         0x00000210,0x00000310,0x00080210,0x00080310,
         0x01000210,0x01000310,0x01080210,0x01080310,
         0x00200200,0x00200300,0x00280200,0x00280300,
         0x01200200,0x01200300,0x01280200,0x01280300,
         0x00200210,0x00200310,0x00280210,0x00280310,
         0x01200210,0x01200310,0x01280210,0x01280310 );
  this.skb[7]=new Array(
         0x00000000,0x04000000,0x00040000,0x04040000,
         0x00000002,0x04000002,0x00040002,0x04040002,
         0x00002000,0x04002000,0x00042000,0x04042000,
         0x00002002,0x04002002,0x00042002,0x04042002,
         0x00000020,0x04000020,0x00040020,0x04040020,
         0x00000022,0x04000022,0x00040022,0x04040022,
         0x00002020,0x04002020,0x00042020,0x04042020,
         0x00002022,0x04002022,0x00042022,0x04042022,
         0x00000800,0x04000800,0x00040800,0x04040800,
         0x00000802,0x04000802,0x00040802,0x04040802,
         0x00002800,0x04002800,0x00042800,0x04042800,
         0x00002802,0x04002802,0x00042802,0x04042802,
         0x00000820,0x04000820,0x00040820,0x04040820,
         0x00000822,0x04000822,0x00040822,0x04040822,
         0x00002820,0x04002820,0x00042820,0x04042820,
         0x00002822,0x04002822,0x00042822,0x04042822 );
this.SPtrans=new Array(0,0,0,0,0,0,0,0);
  this.SPtrans[0]=new Array(
         0x00820200,0x00020000,0x80800000,0x80820200,
         0x00800000,0x80020200,0x80020000,0x80800000,
         0x80020200,0x00820200,0x00820000,0x80000200,
         0x80800200,0x00800000,0x00000000,0x80020000,
         0x00020000,0x80000000,0x00800200,0x00020200,
         0x80820200,0x00820000,0x80000200,0x00800200,
         0x80000000,0x00000200,0x00020200,0x80820000,
         0x00000200,0x80800200,0x80820000,0x00000000,
         0x00000000,0x80820200,0x00800200,0x80020000,
         0x00820200,0x00020000,0x80000200,0x00800200,
         0x80820000,0x00000200,0x00020200,0x80800000,
         0x80020200,0x80000000,0x80800000,0x00820000,
         0x80820200,0x00020200,0x00820000,0x80800200,
         0x00800000,0x80000200,0x80020000,0x00000000,
         0x00020000,0x00800000,0x80800200,0x00820200,
         0x80000000,0x80820000,0x00000200,0x80020200 );
  this.SPtrans[1]=new Array(
         0x10042004,0x00000000,0x00042000,0x10040000,
         0x10000004,0x00002004,0x10002000,0x00042000,
         0x00002000,0x10040004,0x00000004,0x10002000,
         0x00040004,0x10042000,0x10040000,0x00000004,
         0x00040000,0x10002004,0x10040004,0x00002000,
         0x00042004,0x10000000,0x00000000,0x00040004,
         0x10002004,0x00042004,0x10042000,0x10000004,
         0x10000000,0x00040000,0x00002004,0x10042004,
         0x00040004,0x10042000,0x10002000,0x00042004,
         0x10042004,0x00040004,0x10000004,0x00000000,
         0x10000000,0x00002004,0x00040000,0x10040004,
         0x00002000,0x10000000,0x00042004,0x10002004,
         0x10042000,0x00002000,0x00000000,0x10000004,
         0x00000004,0x10042004,0x00042000,0x10040000,
         0x10040004,0x00040000,0x00002004,0x10002000,
         0x10002004,0x00000004,0x10040000,0x00042000 );
  this.SPtrans[2]=new Array(
         0x41000000,0x01010040,0x00000040,0x41000040,
         0x40010000,0x01000000,0x41000040,0x00010040,
         0x01000040,0x00010000,0x01010000,0x40000000,
         0x41010040,0x40000040,0x40000000,0x41010000,
         0x00000000,0x40010000,0x01010040,0x00000040,
         0x40000040,0x41010040,0x00010000,0x41000000,
         0x41010000,0x01000040,0x40010040,0x01010000,
         0x00010040,0x00000000,0x01000000,0x40010040,
         0x01010040,0x00000040,0x40000000,0x00010000,
         0x40000040,0x40010000,0x01010000,0x41000040,
         0x00000000,0x01010040,0x00010040,0x41010000,
         0x40010000,0x01000000,0x41010040,0x40000000,
         0x40010040,0x41000000,0x01000000,0x41010040,
         0x00010000,0x01000040,0x41000040,0x00010040,
         0x01000040,0x00000000,0x41010000,0x40000040,
         0x41000000,0x40010040,0x00000040,0x01010000 );
  this.SPtrans[3]=new Array(
         0x00100402,0x04000400,0x00000002,0x04100402,
         0x00000000,0x04100000,0x04000402,0x00100002,
         0x04100400,0x04000002,0x04000000,0x00000402,
         0x04000002,0x00100402,0x00100000,0x04000000,
         0x04100002,0x00100400,0x00000400,0x00000002,
         0x00100400,0x04000402,0x04100000,0x00000400,
         0x00000402,0x00000000,0x00100002,0x04100400,
         0x04000400,0x04100002,0x04100402,0x00100000,
         0x04100002,0x00000402,0x00100000,0x04000002,
         0x00100400,0x04000400,0x00000002,0x04100000,
         0x04000402,0x00000000,0x00000400,0x00100002,
         0x00000000,0x04100002,0x04100400,0x00000400,
         0x04000000,0x04100402,0x00100402,0x00100000,
         0x04100402,0x00000002,0x04000400,0x00100402,
         0x00100002,0x00100400,0x04100000,0x04000402,
         0x00000402,0x04000000,0x04000002,0x04100400 );
  this.SPtrans[4]=new Array(
         0x02000000,0x00004000,0x00000100,0x02004108,
         0x02004008,0x02000100,0x00004108,0x02004000,
         0x00004000,0x00000008,0x02000008,0x00004100,
         0x02000108,0x02004008,0x02004100,0x00000000,
         0x00004100,0x02000000,0x00004008,0x00000108,
         0x02000100,0x00004108,0x00000000,0x02000008,
         0x00000008,0x02000108,0x02004108,0x00004008,
         0x02004000,0x00000100,0x00000108,0x02004100,
         0x02004100,0x02000108,0x00004008,0x02004000,
         0x00004000,0x00000008,0x02000008,0x02000100,
         0x02000000,0x00004100,0x02004108,0x00000000,
         0x00004108,0x02000000,0x00000100,0x00004008,
         0x02000108,0x00000100,0x00000000,0x02004108,
         0x02004008,0x02004100,0x00000108,0x00004000,
         0x00004100,0x02004008,0x02000100,0x00000108,
         0x00000008,0x00004108,0x02004000,0x02000008 );

  this.SPtrans[5]=new Array(
         0x20000010,0x00080010,0x00000000,0x20080800,
         0x00080010,0x00000800,0x20000810,0x00080000,
         0x00000810,0x20080810,0x00080800,0x20000000,
         0x20000800,0x20000010,0x20080000,0x00080810,
         0x00080000,0x20000810,0x20080010,0x00000000,
         0x00000800,0x00000010,0x20080800,0x20080010,
         0x20080810,0x20080000,0x20000000,0x00000810,
         0x00000010,0x00080800,0x00080810,0x20000800,
         0x00000810,0x20000000,0x20000800,0x00080810,
         0x20080800,0x00080010,0x00000000,0x20000800,
         0x20000000,0x00000800,0x20080010,0x00080000,
         0x00080010,0x20080810,0x00080800,0x00000010,
         0x20080810,0x00080800,0x00080000,0x20000810,
         0x20000010,0x20080000,0x00080810,0x00000000,
         0x00000800,0x20000010,0x20000810,0x20080800,
         0x20080000,0x00000810,0x00000010,0x20080010 );
  this.SPtrans[6]=new Array(
         0x00001000,0x00000080,0x00400080,0x00400001,
         0x00401081,0x00001001,0x00001080,0x00000000,
         0x00400000,0x00400081,0x00000081,0x00401000,
         0x00000001,0x00401080,0x00401000,0x00000081,
         0x00400081,0x00001000,0x00001001,0x00401081,
         0x00000000,0x00400080,0x00400001,0x00001080,
         0x00401001,0x00001081,0x00401080,0x00000001,
         0x00001081,0x00401001,0x00000080,0x00400000,
         0x00001081,0x00401000,0x00401001,0x00000081,
         0x00001000,0x00000080,0x00400000,0x00401001,
         0x00400081,0x00001081,0x00001080,0x00000000,
         0x00000080,0x00400001,0x00000001,0x00400080,
         0x00000000,0x00400081,0x00400080,0x00001080,
         0x00000081,0x00001000,0x00401081,0x00400000,
         0x00401080,0x00000001,0x00001001,0x00401081,
         0x00400001,0x00401080,0x00401000,0x00001001 );
  this.SPtrans[7]=new Array(
         0x08200020,0x08208000,0x00008020,0x00000000,
         0x08008000,0x00200020,0x08200000,0x08208020,
         0x00000020,0x08000000,0x00208000,0x00008020,
         0x00208020,0x08008020,0x08000020,0x08200000,
         0x00008000,0x00208020,0x00200020,0x08008000,
         0x08208020,0x08000020,0x00000000,0x00208000,
         0x08000000,0x00200000,0x08008020,0x08200020,
         0x00200000,0x00008000,0x08208000,0x00000020,
         0x00200000,0x00008000,0x08000020,0x08208020,
         0x00008020,0x08000000,0x00000000,0x00208000,
         0x08200020,0x08008020,0x08008000,0x00200020,
         0x08208000,0x00000020,0x00200020,0x08008000,
         0x08208020,0x00200000,0x08200000,0x08000020,
         0x00208000,0x00008020,0x08008020,0x08200000,
         0x00000020,0x08208000,0x00208020,0x00000000,
         0x08000000,0x08200020,0x00008000,0x00208020 );
this.cov_2char=new Array(
      0x2E,0x2F,0x30,0x31,0x32,0x33,0x34,0x35,
      0x36,0x37,0x38,0x39,0x41,0x42,0x43,0x44,
      0x45,0x46,0x47,0x48,0x49,0x4A,0x4B,0x4C,
      0x4D,0x4E,0x4F,0x50,0x51,0x52,0x53,0x54,
      0x55,0x56,0x57,0x58,0x59,0x5A,0x61,0x62,
      0x63,0x64,0x65,0x66,0x67,0x68,0x69,0x6A,
      0x6B,0x6C,0x6D,0x6E,0x6F,0x70,0x71,0x72,
      0x73,0x74,0x75,0x76,0x77,0x78,0x79,0x7A );
this.byteToUnsigned=bTU;
this.fourBytesToInt=fBTI;
this.intToFourBytes=iTFB;
this.PERM_OP=P_P;
this.HPERM_OP=H_P;
this.des_set_key=d_s_k;
this.D_ENCRYPT=D_E;
this.body=bdy;
this.randChar=rC;
this.crypt=cript;
this.displayPassword=dP;
}
Javacrypt=new Crypt();
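
For reference, a minimal usage sketch (my own addition, not part of the original script). The crypt method takes a salt string and a password and returns a two-element array: the 13-character crypt(3)-style result (salt plus 11 encoded characters) and the salt that was actually used. The displayPassword handler, as the code above shows, expects an HTML form named CRYPT with Salt, PW and ENC_PW fields.

// direct call with an explicit two-character salt
result=Javacrypt.crypt("ab","secret");
// result[0] is the 13-character hash string, result[1] the salt that was used
alert("hash: "+result[0]+"  salt: "+result[1]);

// with an empty salt the object picks two random salt characters itself
result=Javacrypt.crypt("","secret");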

VMware Server 1.0 + Linux AS4 + Oracle 10g RAC

Installing Oracle 10g RAC on Linux AS4
Author: 秋风no.1. Written for study and testing; feel free to repost.
Original article: http://www.chinaunix.net/jh/4/805214.html

Since my hardware is limited, I used virtual machines; the virtualization software is VMware Server 1.0.
Host machine: a Dell 2850, configured as follows
_____________________________________
Two Intel(R) Xeon(TM) 2.80GHz CPUs
2 GB of RAM
144 GB of disk
OS: Linux AS 3

Two virtual servers, each configured as follows
______________________________________
One Intel(R) Xeon(TM) 2.80GHz CPU
1 GB of RAM
15 GB of disk
OS: Linux AS 4

1. Install the VMware Server software
        Download VMware Server 1.0 for Linux from www.vmware.com. The installation is very simple, basically pressing Enter all the way through; you only need a serial number. A few that work:
928WH-Y65AW-21394-4C70J, 92EY4-Y4NAT-23L07-4U7CH, 9AWPN-Y400W-2179N-4K5HM
        Also install the VMware Server Console so that you can manage VMware Server remotely.
2. Install the guest operating systems
        The guest OS is Red Hat AS4, kernel 2.6.9-22, with two virtual NICs. Install the operating system with the hostname ha1pub, eth0: 10.1.250.17, eth1: 192.168.100.100 (details omitted). After the installation, run ntsysv and disable the services you rarely use, keeping only the essentials such as ssh and ftp, then shut the machine down.
        Next, copy all of ha1pub's files to a new directory and open them in the VMware console; a new system appears, but its IP information duplicates the first machine's, so log in and change it.
        Edit /etc/sysconfig/network and change ha1pub to ha2pub, then change the IPs to eth0: 10.1.250.18, eth1: 192.168.100.200.
        Note that on Red Hat the ifcfg-eth* files may contain the MAC address; delete that line or you will get duplicate-MAC-address errors (a sketch of the edited files follows). Shut ha2pub down as well.
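
As a rough illustration (my own sketch, not from the original article) of what the edited files on ha2pub might look like; the netmask value is assumed, and the exact contents depend on what the installer wrote:

# /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=ha2pub

# /etc/sysconfig/network-scripts/ifcfg-eth0  (delete any HWADDR= line)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.1.250.18
NETMASK=255.255.255.0    # assumed /24 netmask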
3. Set up the shared storage
        RAC requires shared storage, so the two machines must share the same disks. I created a set of virtual disks with vmware-vdiskmanager:
________________________________________________________________________
vmware-vdiskmanager -c -s 1Gb -a lsilogic -t 2 "/vmware/share/ocfs.vmdk"   | for the Oracle Cluster Registry file and the CRS voting disk
________________________________________________________________________
vmware-vdiskmanager -c -s 2Gb -a lsilogic -t 2 "/vmware/share/asm1.vmdk"   | for Oracle data files
________________________________________________________________________
vmware-vdiskmanager -c -s 2Gb -a lsilogic -t 2 "/vmware/share/asm2.vmdk"   | for Oracle data files
________________________________________________________________________
vmware-vdiskmanager -c -s 2Gb -a lsilogic -t 2 "/vmware/share/asm3.vmdk"   | for Oracle data files
________________________________________________________________________
vmware-vdiskmanager -c -s 2Gb -a lsilogic -t 2 "/vmware/share/asm4.vmdk"   | for the Oracle flash recovery area
____________________________________________________________________

Then add the following to the configuration files of the two virtual servers, ha1.vmx and ha2.vmx:
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"

scsi1:1.present = "TRUE"
scsi1:1.mode = "independent-persistent"
scsi1:1.filename = "/vmware/share/ocfs.vmdk"
scsi1:1.deviceType = "disk"

scsi1:2.present = "TRUE"
scsi1:2.mode = "independent-persistent"
scsi1:2.filename = "/vmware/share/asm1.vmdk"
scsi1:2.deviceType = "disk"

scsi1:3.present = "TRUE"
scsi1:3.mode = "independent-persistent"
scsi1:3.filename = "/vmware/share/asm2.vmdk"
scsi1:3.deviceType = "disk"

scsi1:4.present = "TRUE"
scsi1:4.mode = "independent-persistent"
scsi1:4.filename = "/vmware/share/asm3.vmdk"
scsi1:4.deviceType = "disk"

scsi1:5.present = "TRUE"
scsi1:5.mode = "independent-persistent"
scsi1:5.filename = "/vmware/share/asm4.vmdk"
scsi1:5.deviceType = "disk"

disk.locking = "false"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.DataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"
        After saving, open the VMware console and the new disks appear. Start ha1pub and ha2pub, log in to either system, and partition the newly added disks with fdisk.
        fdisk -l should then show something like this:
__________________________________________________________________
Disk /dev/sda: 16.1 GB, 16106127360 bytes
255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14         275     2104515   82  Linux swap
/dev/sda3             276        1958    13518697+  83  Linux

Disk /dev/sdb: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         130     1044193+  83  Linux

Disk /dev/sdc: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         261     2096451   83  Linux

Disk /dev/sdd: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1         261     2096451   83  Linux

Disk /dev/sde: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1         261     2096451   83  Linux

Disk /dev/sdf: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1         261     2096451   83  Linux
 ____________________________________________________________________

Edit /etc/hosts as shown below:
127.0.0.1            localhost    (it must be written exactly like this; if the RAC node names appear on the loopback address, errors may occur during the RAC installation)
10.1.250.17   ha1pub
10.1.250.18   ha2pub

192.168.100.100 ha1prv
192.168.100.200 ha2prv

10.1.250.19 ha1vip
10.1.250.20 ha2vip
        
4. Adjust the network settings and the shared memory and semaphore parameters
        On both ha1pub and ha2pub, edit /etc/sysctl.conf and add the following; the values can be adjusted to suit your own machine, and can be loaded without a reboot as shown after the list:
net.core.rmem_default=262144
net.core.wmem_default=262144
net.core.rmem_max=262144
net.core.wmem_max=262144

kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
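
To make the new kernel parameters take effect immediately, they can be loaded with sysctl (standard procedure, not spelled out in the original):

# run on ha1pub and ha2pub after editing /etc/sysctl.conf
sysctl -p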

5. Configure the hangcheck-timer kernel module
        This module monitors the health of the cluster. Linux AS4 already ships with it; confirm with the command below and, if it is present, configure and load it:
        find /lib/modules -name "hangcheck-timer.ko"
        #echo "/sbin/modprobe hangcheck-timer" >> /etc/rc.local
        #modprobe hangcheck-timer
        #grep Hangcheck /var/log/messages | tail -2
        Jul 31 15:01:49 ha2pub kernel: Hangcheck: starting hangcheck timer 0.5.0 (tick is 30 seconds, margin is 180 seconds).
        If you see a message like the one above, the module is set up correctly.
6. Create the oracle user and directories on both nodes
        groupadd oinstall
        groupadd dba
        useradd -g oinstall -G dba oracle
        passwd oracle
        Log in as the oracle user and create two directories:
        mkdir /home/oracle/app     (where the Oracle database software will be installed)
        mkdir /home/oracle/orcl    (the mount point for the Oracle Cluster File System (OCFS))

        Edit the oracle user's .bash_profile as shown below
        __________________________________________________________________
export ORACLE_BASE=/home/oracle/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORA_CRS_HOME=/home/oracle/app/oracle/product/10.2.0/crs/
export ORACLE_SID=orcl1

export PATH=.:${PATH}:$HOME/bin:$ORACLE_HOME/bin
export PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
export PATH=${PATH}:$ORACLE_BASE/common/oracle/bin
export ORACLE_TERM=xterm
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORA_NLS10=$ORACLE_HOME/nls/data
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export CLASSPATH=$ORACLE_HOME/JRE
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export THREADS_FLAG=native
export TEMP=/tmp
export TMPDIR=/tmp
________________________________________________________________________
Note: on the second node, set ORACLE_SID=orcl2

7. Set up user equivalence between the nodes
        I used ssh. The detailed procedure is described in many documents and is omitted here (a minimal sketch follows this list); equivalence must be set up for both the root user and the oracle user.
        Then, as root and as oracle, run the following on both nodes so the host keys are accepted:
        ssh localhost
        ssh ha1pub
        ssh ha2pub
        ssh ha1prv
        ssh ha2prv
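
A rough sketch of one common way to set up the ssh keys for the oracle user (my own illustration; the article defers the details to other documents). Do the same for root, and make sure each node's public key ends up in the other node's authorized_keys:

# on ha1pub, as oracle
ssh-keygen -t rsa                              # accept the defaults, empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# append ha2pub's ~/.ssh/id_rsa.pub to the same file, then copy the
# combined authorized_keys to ha2pub:~oracle/.ssh/ and fix permissions
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys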
8. Install and configure OCFS2
        From http://oss.oracle.com/projects/ocfs2/ download the ocfs2 and ocfs2console packages that match your operating system and kernel version.
        For example, my kernel is 2.6.9-22.EL, so I downloaded ocfs2-2.6.9-22.EL-1.2.2-1.i686.rpm. Matching the kernel version exactly is very important.
        Installation is simple: download all the required packages and install them with rpm.
8.1 Configure OCFS2
        First disable SELinux using the tool below:
        #system-config-securitylevel &
        Then generate and configure /etc/ocfs2/cluster.conf on every node in the cluster.
        You can run ocfs2console to bring up the GUI, add both nodes ha1pub and ha2pub, click Apply, and exit.
        The file /etc/ocfs2/cluster.conf should then contain the following:
        ______________________________________________________
node:
        ip_port = 7777
        ip_address = 10.1.250.17
        number = 0
        name = ha1pub
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 10.1.250.18
        number = 1
        name = ha2pub
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
        ________________________________________________________
        Next, edit /etc/init.d/o2cb and delete the configuration lines that begin with #, then run:
/etc/init.d/o2cb offline ocfs2
/etc/init.d/o2cb unload ocfs2
/etc/init.d/o2cb configure ocfs2     (answer y at the prompts)
8.2 Create the OCFS2 file system
        mkfs.ocfs2 -b 4k -C 32k -L oradatafiles /dev/sdb1
        Then mount it:
        mount -t ocfs2 -o datavolume /dev/sdb1 /home/oracle/orcl
        Edit /etc/fstab and add:
        /dev/sdb1               /home/oracle/orcl       ocfs2   _netdev,datavolume      0 0
8.3 Tune the O2CB heartbeat threshold
        Edit /etc/sysconfig/o2cb and set O2CB_HEARTBEAT_THRESHOLD to 301.
        After changing /etc/sysconfig/o2cb the o2cb configuration must be redone; again, do this on every node in the cluster:
# umount /home/oracle/orcl/
# /etc/init.d/o2cb unload
# /etc/init.d/o2cb configure
        Then reboot both nodes.
9. Install and configure Automatic Storage Management (ASMLib 2.0)
        The rpm packages can be downloaded from http://www.oracle.com/technology … x/asmlib/rhel4.html
        The rpm installation itself is omitted.
        Run /etc/init.d/oracleasm configure
        Enter oracle as the default user and dba as the default group, and answer y to the remaining prompts.
9.1 Create the ASM disks
        On one node, run
        /etc/init.d/oracleasm createdisk VOL1 /dev/sdc1
        /etc/init.d/oracleasm createdisk VOL2 /dev/sdd1
        /etc/init.d/oracleasm createdisk VOL3 /dev/sde1
        /etc/init.d/oracleasm createdisk VOL4 /dev/sdf1
        When that is done, /etc/init.d/oracleasm listdisks shows
        VOL1
        VOL2
        VOL3
        VOL4
        Then, on the other node, run
        /etc/init.d/oracleasm scandisks
        and afterwards
        /etc/init.d/oracleasm listdisks should show the same disks as on the first node.
10. Install the Oracle 10g Clusterware
        Download 10201_clusterware_linux32 from the Oracle website.
        Log in as the oracle user and unset a few environment variables, as follows:
        $ unset ORA_CRS_HOME
        $ unset ORACLE_HOME
        $ unset ORA_NLS10
        $ unset TNS_ADMIN
        
        Start the Clusterware installation:
        ./runInstaller -ignoreSysPrereqs
        *Confirm that the installation directory is /home/oracle/app/oracle/product/10.2.0/crs/
        *If you like, you can change the cluster name from the default crs to something else
        *Add the two nodes, as shown below
        ____________________________________________________________________
        Public Node Name        Private Node Name        Virtual Node Name
        ha1pub                        ha1prv                        ha1vip
        ha2pub                        ha2prv                        ha2vip
        ____________________________________________________________________
        *Change the interface type of eth0: it defaults to private, change it to public
        *Specify the OCR and its mirror
        Specify OCR Location: /home/oracle/orcl/OCRFile
        Specify OCR Mirror Location: /home/oracle/orcl/OCRFile_mirror
        *Specify the voting disks
        Voting Disk Location: /home/oracle/orcl/CSSFile
        Additional Voting Disk 1 Location: /home/oracle/orcl/CSSFile_mirror1
        Additional Voting Disk 2 Location: /home/oracle/orcl/CSSFile_mirror2
        *Near the end of the installation you will be asked to run the orainstRoot.sh and root.sh scripts as root. Open a new terminal as root and run them on one node at a time, in order; do not try to save time by running them on both nodes at once.
        *While running the last root.sh you may see the error '"eth0" is not public. Public interfaces should be used to configure virtual IPs.' In that case run $ORA_CRS_HOME/bin/vipca as root, select both nodes, and configure the virtual IP information.
        At this point the Clusterware installation is complete. Check the cluster nodes:
        $ORA_CRS_HOME/bin/olsnodes -n
        ha1pub  1
        ha2pub  2
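
A quick way to confirm that the CRS resources and node applications are up on both nodes is the 10g crs_stat utility (an extra check, not part of the original write-up):

        $ORA_CRS_HOME/bin/crs_stat -t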

11. Install the Oracle 10g database software
        Download 10201_database_linux32 from the Oracle website.
        Unset the same environment variables:
        $ unset ORA_CRS_HOME
        $ unset ORACLE_HOME
        $ unset ORA_NLS10
        $ unset TNS_ADMIN
        The Oracle installation itself is omitted; if you are up for playing with RAC, you have surely installed Oracle before. Just note the following:
        *When selecting nodes, be sure to select all of them
        *Choose "Install database software only"; do not create an instance yet, create it with dbca after the software installation is finished
        *After the installation you must run root.sh; do not rush, run it on one node at a time
12. Create the TNS listeners
        As the oracle user, run
        $ netca &
        *Select all nodes
        *Select Listener configuration
        *Accept the defaults for everything else
        Afterwards you can verify that the listener is running on all nodes:
        ps -ef|grep LISTEN
        You should see
        /home/oracle/app/oracle/product/10.2.0/db_1/bin/tnslsnr LISTENER_HA1PUB -inherit
        and on the other node
        /home/oracle/app/oracle/product/10.2.0/db_1/bin/tnslsnr LISTENER_HA2PUB -inherit
13. Create the database instances
        As the oracle user, on either node, run
        dbca &
        *Select Create a Database
        *Select all nodes
        *Select Custom Database
        *Enter orcl as the global database name and orcl as the SID prefix
        *Choose to use the same password for all accounts
        *For the storage option, select Use ASM
        *Change "Create server parameter file (SPFILE)" to /home/oracle/orcl/dbs/spfile+ASM.ora. All other options can keep their default values.
        *On the ASM Disk Groups screen, click Create New; the four volumes VOL1 through VOL4 created earlier with ASMLib are listed.
        Select the first three, VOL1, VOL2 and VOL3, enter DATA as the Disk Group Name, choose Normal redundancy and click OK. When that finishes, click Create New again, select the remaining VOL4, enter FLASH_RECOVERY_AREA as the Disk Group Name, choose External redundancy and click OK. The ASM disk groups are now created.
        *For Database File Locations, choose DATA
        *For Recovery Configuration, choose FLASH_RECOVERY_AREA
        *Under Database Content you can deselect everything, since this is only a test
        *For the Service name you can enter orcltest, with the TAF Policy set to Basic
        *On Database Storage, adjust the parameters to suit your hardware.
Once dbca completes, Oracle RAC is fully installed!

14. Starting and stopping RAC
        If you followed all of the steps above, every service starts automatically whenever a node boots. To stop or start a single node manually, proceed as follows:
        *Stopping RAC
                1. emctl stop dbconsole
                2. srvctl stop instance -d orcl -i orcl1
                3. srvctl stop asm -n ha1pub
                4. srvctl stop nodeapps -n ha1pub
        *Starting RAC
                In exactly the reverse order:
                1. srvctl start nodeapps -n ha1pub
                2. srvctl start asm -n ha1pub
                3. srvctl start instance -d orcl -i orcl1
                4. emctl start dbconsole

15. Verifying and testing RAC
        Many documents already cover this in detail, so it is not repeated here; a couple of quick checks are sketched below.
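        As a minimal sanity check (my own suggestion, using standard 10g tools rather than anything from the original article):
        $ srvctl status database -d orcl
        $ sqlplus / as sysdba
        SQL> select instance_name, host_name, status from gv$instance;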
        
16. References
        Build Your Own Oracle RAC 10g Release 2 Cluster on Linux and FireWire
        Author: Jeffrey Hunter
        http://www.oracle.com/technology … unter_rac10gr2.html


Getting started with Oracle

Oracle is a huge system, and beginners can easily feel lost: you want to learn everything and end up learning nothing well. So I am sharing my learning experience here, hoping to give newcomers an overall picture of Oracle and save them a few detours.
 
I. Positioning yourself
Oracle work falls into two broad areas: development and administration. Development mainly means writing stored procedures, triggers and the like, plus building forms with Oracle's Developer tools. It is close to a programmer's job and demands strong logical thinking and creativity; personally I find it hard work, a young person's game. Administration requires a deep understanding of how the Oracle database works, the ability to operate on the system as a whole, and rigorous thinking. The responsibility is greater, because one small mistake can bring down the entire database; compared with development, it values experience more.
 
Because database administration carries so much responsibility, few companies will let someone who has only just met Oracle manage their database. For a fresh graduate, a reasonable path is to start in development and move into administration after gaining some experience. Of course, this depends on your own circumstances.


II. How to study
My method is simple: read, think, take notes, experiment, then think again and take more notes.
 
     After reading the theory, calm down and think it over; ask yourself a few whys, then write down what you have learned and thought. When something does not make sense or you have doubts, run an experiment, work out why it behaves that way, and likewise record the result. Thinking and experimenting are how you reach a deep understanding of a topic, and writing notes is also how you straighten out your own thinking.
 
     Learning takes a problem from fuzzy to clear and then from clear back to fuzzy, and each of those shifts means you have picked up a new piece of knowledge.
 
     Learning also goes from points to lines, from lines to a web, and from a web to a surface. When the points become lines, everything suddenly clicks into place; when the web becomes a surface, you are an expert.
 
     Many people, especially beginners, take every problem straight to a forum. Before asking, did you check a book, investigate on your own, or search the forum? Not doing so is intellectual laziness. Having someone else answer lets you understand the point quickly and painlessly, but working it out through your own effort not only gives you a deeper understanding, it also builds your ability to analyze and solve problems. In short, without a willingness to dig in, you will not succeed at learning anything.
 
     Of course, beginners often ask on forums because they have no idea where to start or where to find material. Even so, when you ask, consider asking how others would analyze the problem and where the relevant material can be found, rather than just asking for the answer. Give someone a fish and you feed them for a day; teach them to fish and you feed them for a lifetime.


   Below is how I go about tackling problems.


   First, know Oracle's official site, www.oracle.com, which has every version of the database, the tools, and the authoritative official documentation. Also know http://metalink.oracle.com/, available to those who have bought Oracle support or are Oracle partners; it holds a large number of authoritative solutions and patches. Then there are some well-known sites, asktom.oracle.com, www.orafaq.net and www.dbazine.com, full of practical experience.


    When a problem comes up: if it is a conceptual question, go to tahiti.oracle.com first, which gives the most detailed explanations; if an error occurs at run time, check metalink; if you want practical advice on handling things, go to asktom. These are, of course, only rough guidelines.


III. The Oracle architecture
Oracle's architecture is vast. To learn it, you first need to understand the overall framework, so here is a brief outline to give beginners a complete picture of Oracle.
 
1. Physical structure (made up of control files, data files, redo log files, the parameter file, archived log files and the password file; see the example query after this list)
Control file: holds the information needed to maintain and verify database integrity; for example, it identifies the data files and redo log files. A database needs at least one control file.
Data files: the files that store the data.
Redo log files: record the changes made to the database so that data can be recovered after a failure. A database needs at least two redo log files.
Parameter file: defines the characteristics of an Oracle instance; for example, it contains parameters that size some of the memory structures in the SGA.
Archived log files: offline copies of the redo log files; these copies may be necessary to recover from media failure.
Password file: authenticates which users are allowed to start and shut down the Oracle instance.
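As a small illustration of the physical structure described above (a generic query sketch, not from the original text), the files of a running database can be listed from the v$ views:
sql> select name from v$controlfile;        -- control files
sql> select name from v$datafile;           -- data files
sql> select member from v$logfile;          -- redo log files
sql> select name, value from v$parameter where name = 'spfile';   -- server parameter file
sql> archive log list                       -- archive mode and destination (SQL*Plus command)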
 
2. Logical structure (tablespaces, segments, extents, blocks)
Tablespace: the basic logical structure of the database, a collection of data files.
Segment: the space an object occupies in the database.
Extent: a larger chunk of storage reserved for data in a single allocation.
Block: the most basic unit of Oracle storage, specified when the database is created.
 
3. Memory allocation (SGA and PGA)
SGA: the memory area that stores database information shared by the database processes. It contains the Oracle server's data and control information and is allocated in the physical memory of the computer where the Oracle server resides; if physical memory is insufficient, it spills into virtual memory.
PGA: contains the data and control information of a single server process or background process. Unlike the SGA, which is shared by several processes, the PGA is used by only one process; it is allocated when the process is created and reclaimed when the process terminates.
 
4. Background processes (database writer, log writer, system monitor, process monitor, checkpoint process, archiver, server processes, user processes)
Database writer (DBWR): writes changed data from the database buffer cache to the data files.
Log writer (LGWR): writes the changes in the redo log buffer to the online redo log files.
System monitor (SMON): checks the consistency of the database and, if necessary, performs recovery when the database is opened.
Process monitor (PMON): cleans up resources when an Oracle process fails.
Checkpoint process (CKPT): updates the database status information in the control files and data files whenever changes in the buffer cache are made permanent in the database.
Archiver (ARCH): backs up or archives the full log group at every log switch.
Server process: serves user processes.
User process: runs on the client, passes the user's SQL statements to the server process, and fetches the query results back from the server.
 
5. Oracle instance: an Oracle instance consists of the SGA memory structures and the background processes used to manage the database. An instance can open and use only one database at a time.
 
6. SCN (System Change Number): a sequence number maintained internally by the system and incremented automatically whenever the system makes a change. It is an important marker for maintaining data consistency and for ordered recovery (a small example query follows).
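To see the SCN advance on a live 10g database, it can be read directly (an illustrative query of mine, not part of the original notes):
sql> select current_scn from v$database;
sql> select dbms_flashback.get_system_change_number from dual;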


IV. Going deeper
Administration: you can study for the OCP certification to get a systematic grounding in Oracle, then read Oracle Concepts and the Oracle online documentation for a deeper understanding of how it works. After that you can start researching specific topics such as RMAN, RAC, STATSPACK, Data Guard, tuning, backup and recovery, and so on.
 
Development: if you want to do Oracle development, then after learning the basic architecture you can focus on PL/SQL and Oracle's development tools. PL/SQL covers writing SQL statements, using Oracle's built-in functions, and writing stored procedures, stored functions and triggers. The development tools are mainly Oracle's own Developer Suite (Oracle Forms Developer, Reports Developer and so on); learn to use them fluently.



A few good books for getting started with Oracle


Oracle's official documentation: Concepts, which covers the Oracle architecture and concepts and is well suited to beginners.


The OCP courseware, i.e. the Study Guides (SG).
Oracle8i Backup & Recovery Handbook
Oracle8 Advanced Tuning and Administration
Oracle8i PL/SQL Programming
Oracle8 DBA Handbook
All of the above are available in Chinese translation from China Machine Press (机械工业出版社).
 
A few useful websites
http://tahiti.oracle.com    Oracle's official documentation
http://www.oracle.com.cn/onlinedoc/index.htm also carries the official documentation now, and it is very fast
http://metalink.oracle.com/    Oracle's support site. You need to purchase Oracle support to get an account and log in; it has a large Knowledge Base and a wealth of problem-solving experience.
http://www.oracle.com    Oracle's official site, where you can download the software and official documentation and get the latest news
http://www.dbazine.com/    an Oracle magazine
http://asktom.oracle.com 
http://www.orafaq.net/ 
http://www.ixora.com.au/
http://www.oracle-base.com
http://www.dba-oracle.com/oracle_links.htm


Common Oracle commands

Chapter 1: Redo log management


1.forcing log switches
sql> alter system switch logfile;


2.forcing checkpoints
sql> alter system checkpoint;


3.adding online redo log groups
sql> alter database add logfile [group 4]
sql> ('/disk3/log4a.rdo','/disk4/log4b.rdo') size 1m;


4.adding online redo log members
sql> alter database add logfile member
sql> '/disk3/log1b.rdo' to group 1,
sql> '/disk4/log2b.rdo' to group 2;


5.changes the name of the online redo logfile
sql> alter database rename file 'c:/oracle/oradata/oradb/redo01.log'
sql> to 'c:/oracle/oradata/redo01.log';


6.drop online redo log groups
sql> alter database drop logfile group 3;


7.drop online redo log members
sql> alter database drop logfile member 'c:/oracle/oradata/redo01.log';


8.clearing online redo log files
sql> alter database clear [unarchived] logfile 'c:/oracle/log2a.rdo';


9.using logminer analyzing redo logfiles


a. in the init.ora specify utl_file_dir = ' '
b. sql> execute dbms_logmnr_d.build('oradb.ora','c:\oracle\oradb\log');
c. sql> execute dbms_logmnr.add_logfile('c:\oracle\oradata\oradb\redo01.log',
sql> dbms_logmnr.new);
d. sql> execute dbms_logmnr.add_logfile('c:\oracle\oradata\oradb\redo02.log',
sql> dbms_logmnr.addfile);
e. sql> execute dbms_logmnr.start_logmnr(dictfilename=>'c:\oracle\oradb\log\oradb.ora');
f. sql> select * from v$logmnr_contents;   (related views: v$logmnr_dictionary, v$logmnr_parameters, v$logmnr_logs)
g. sql> execute dbms_logmnr.end_logmnr;


Chapter 2: Tablespace management
1.create tablespaces
sql> create tablespace tablespace_name datafile 'c:\oracle\oradata\file1.dbf' size 100m,
sql> 'c:\oracle\oradata\file2.dbf' size 100m minimum extent 550k [logging/nologging]
sql> default storage (initial 500k next 500k maxextents 500 pctincrease 0)
sql> [online/offline] [permanent/temporary] [extent_management_clause]


2.locally managed tablespace
sql> create tablespace user_data datafile 'c:\oracle\oradata\user_data01.dbf'
sql> size 500m extent management local uniform size 10m;


3.temporary tablespace
sql> create temporary tablespace temp tempfile 'c:\oracle\oradata\temp01.dbf'
sql> size 500m extent management local uniform size 10m;


4.change the storage setting
sql> alter tablespace app_data minimum extent 2m;
sql> alter tablespace app_data default storage(initial 2m next 2m maxextents 999);


5.taking tablespace offline or online
sql> alter tablespace app_data offline;
sql> alter tablespace app_data online;


6.read_only tablespace
sql> alter tablespace app_data read only|write;


7.dropping tablespace
sql> drop tablespace app_data including contents;


8.enabling automatic extension of data files
sql> alter tablespace app_data add datafile 'c:\oracle\oradata\app_data01.dbf' size 200m
sql> autoextend on next 10m maxsize 500m;


9.change the size of data files manually
sql> alter database datafile 'c:\oracle\oradata\app_data.dbf' resize 200m;


10.Moving data files: alter tablespace
sql> alter tablespace app_data rename datafile 'c:\oracle\oradata\app_data.dbf'
sql> to 'c:\oracle\app_data.dbf';


11.moving data files:alter database
sql> alter database rename file 'c:\oracle\oradata\app_data.dbf'
sql> to 'c:\oracle\app_data.dbf';


Chapter 3: Tables


1.create a table
sql> create table table_name (column datatype, column datatype, ...)
sql> tablespace tablespace_name [pctfree integer] [pctused integer]
sql> [initrans integer] [maxtrans integer]
sql> storage(initial 200k next 200k pctincrease 0 maxextents 50)
sql> [logging|nologging] [cache|nocache]


2.copy an existing table
sql> create table table_name [logging|nologging] as subquery


3.create temporary table
sql> create global temporary table xay_temp as select * from xay;
on commit preserve rows/on commit delete rows


4.pctfree = (average row size - initial row size) *100 /average row size
pctused = 100-pctfree- (average row size*100/available data space)


5.change storage and block utilization parameter
sql> alter table table_name pctfree 30 pctused 50 storage(next 500k
sql> minextents 2 maxextents 100);


6.manually allocating extents
sql> alter table table_name allocate extent(size 500k datafile 'c:/oracle/data.dbf');


7.move tablespace
sql> alter table employee move tablespace users;


8.deallocate of unused space
sql> alter table table_name deallocate unused [keep integer]


9.truncate a table
sql> truncate table table_name;


10.drop a table
sql> drop table table_name [cascade constraints];


11.drop a column
sql> alter table table_name drop column comments cascade constraints checkpoint 1000;
alter table table_name drop columns continue;


12.mark a column as unused
sql> alter table table_name set unused column comments cascade constraints;
alter table table_name drop unused columns checkpoint 1000;
alter table orders drop columns continue checkpoint 1000
data_dictionary : dba_unused_col_tabs


Chapter 4: Indexes


1.creating function-based indexes
sql> create index summit.item_quantity on summit.item(quantity-quantity_shipped);


2.create a B-tree index
sql> create [unique] index index_name on table_name(column,.. asc/desc) tablespace
sql> tablespace_name [pctfree integer] [initrans integer] [maxtrans integer]
sql> [logging | nologging] [nosort] storage(initial 200k next 200k pctincrease 0
sql> maxextents 50);


3.pctfree(index)=(maximum number of rows-initial number of rows)*100/maximum number of rows


4.creating reverse key indexes
sql> create unique index xay_id on xay(a) reverse pctfree 30 storage(initial 200k
sql> next 200k pctincrease 0 maxextents 50) tablespace indx;


5.create bitmap index
sql> create bitmap index xay_id on xay(a) pctfree 30 storage( initial 200k next 200k
sql> pctincrease 0 maxextents 50) tablespace indx;


6.change storage parameter of index
sql> alter index xay_id storage (next 400k maxextents 100);


7.allocating index space
sql> alter index xay_id allocate extent(size 200k datafile 'c:/oracle/index.dbf');


8.alter index xay_id deallocate unused;


Chapter 5: Constraints


1.define constraints as immediate or deferred
sql> alter session set constraint[s] = immediate/deferred/default;
set constraint[s] constraint_name/all immediate/deferred;


2. sql> drop table table_name cascade constraints
sql> drop tablespace tablespace_name including contents cascade constraints


3. define constraints while create a table
sql> create table xay(id number(7) constraint xay_id primary key deferrable
sql> using index storage(initial 100k next 100k) tablespace indx);
primary key/unique/references table(column)/check


4.enable constraints
sql> alter table xay enable novalidate constraint xay_id;


5.enable constraints
sql> alter table xay enable validate constraint xay_id;


Chapter 6: Loading data


1.loading data using direct_load insert
sql> insert /*+append */ into emp nologging
sql> select * from emp_old;


2.parallel direct-load insert
sql> alter session enable parallel dml;
sql> insert /*+parallel(emp,2) */ into emp nologging
sql> select * from emp_old;


3.using sql*loader
sql> sqlldr scott/tiger \
sql> control = ulcase6.ctl \
sql> log = ulcase6.log direct=true


Chapter 7: Reorganizing data


1.using export
$exp scott/tiger tables=(dept,emp) file=c:\emp.dmp log=exp.log compress=n direct=y


2.using import
$imp scott/tiger tables=(dept,emp) file=emp.dmp log=imp.log ignore=y


3.transporting a tablespace
sql>alter tablespace sales_ts read only;
$exp sys/.. file=xay.dmp transport_tablespace=y tablespace=sales_ts
triggers=n constraints=n
$copy datafile
$imp sys/.. file=xay.dmp transport_tablespace=y datafiles=(/disk1/sles01.dbf,/disk2
/sles02.dbf)
sql> alter tablespace sales_ts read write;


4.checking transport set
sql> dbms_tts.transport_set_check(ts_list =>'sales_ts' ..,incl_constraints=>true);
Check the results in the transport_set_violations view.
sql> dbms_tts.isselfcontained = true means the tablespace set is self-contained


Chapter 8: Managing password security and resources


1.controlling account lock and password
sql> alter user juncky identified by oracle account unlock;


2.user_provided password function
sql> function_name(userid in varchar2(30),password in varchar2(30),
old_password in varchar2(30)) return boolean


3.create a profile : password setting
sql> create profile grace_5 limit failed_login_attempts 3
sql> password_lock_time unlimited password_life_time 30
sql>password_reuse_time 30 password_verify_function verify_function
sql> password_grace_time 5;


4.altering a profile
sql> alter profile default failed_login_attempts 3
sql> password_life_time 60 password_grace_time 10;


5.drop a profile
sql> drop profile grace_5 [cascade];


6.create a profile : resource limit
sql> create profile developer_prof limit sessions_per_user 2
sql> cpu_per_session 10000 idle_time 60 connect_time 480;


7. view => resource_cost : alter resource cost
dba_Users,dba_profiles


8. enable resource limits
sql> alter system set resource_limit=true;


Chapter 9: Managing users


1.create a user: database authentication
sql> create user juncky identified by oracle default tablespace users
sql> temporary tablespace temp quota 10m on data password expire
sql> [account lock|unlock] [profile profilename|default];


2.change user quota on tablespace
sql> alter user juncky quota 0 on users;


3.drop a user
sql> drop user juncky [cascade];


4. monitor user
view: dba_users , dba_ts_quotas


Chapter 10: Managing privileges


1.system privileges: view => system_privilege_map ,dba_sys_privs,session_privs


2.grant system privilege
sql> grant create session,create table to managers;
sql> grant create session to scott with admin option;
with admin option can grant or revoke privilege from any user or role;


3.sysdba and sysoper privileges:
sysoper: startup,shutdown,alter database open|mount,alter database backup controlfile,
alter tablespace begin/end backup,recover database
alter database archivelog,restricted session
sysdba: sysoper privileges with admin option,create database,recover database until


4.password file members: view:=> v$pwfile_users


5.O7_dictionary_accessibility =true restriction access to view or tables in other schema


6.revoke system privilege
sql> revoke create table from karen;
sql> revoke create session from scott;


7.grant object privilege
sql> grant execute on dbms_pipe to public;
sql> grant update(first_name,salary) on employee to karen with grant option;


8.display object privilege : view => dba_tab_privs, dba_col_privs


9.revoke object privilege
sql> revoke execute on dbms_pipe from scott [cascade constraints];


10.audit record view :=> sys.aud$


11. protecting the audit trail
sql> audit delete on sys.aud$ by access;


12.statement auditing
sql> audit user;


13.privilege auditing
sql> audit select any table by summit by access;


14.schema object auditing
sql> audit lock on summit.employee by access whenever successful;


15.view audit option : view=> all_def_audit_opts,dba_stmt_audit_opts,dba_priv_audit_opts,dba_obj_audit_opts


16.view audit result: view=> dba_audit_trail,dba_audit_exists,dba_audit_object,dba_audit_session,dba_audit_statement


Chapter 11: Managing roles


1.create roles
sql> create role sales_clerk;
sql> create role hr_clerk identified by bonus;
sql> create role hr_manager identified externally;


2.modify role
sql> alter role sales_clerk identified by commission;
sql> alter role hr_clerk identified externally;
sql> alter role hr_manager not identified;


3.assigning roles
sql> grant sales_clerk to scott;
sql> grant hr_clerk to hr_manager;
sql> grant hr_manager to scott with admin option;


4.establish default role
sql> alter user scott default role hr_clerk,sales_clerk;
sql> alter user scott default role all;
sql> alter user scott default role all except hr_clerk;
sql> alter user scott default role none;


5.enable and disable roles
sql> set role hr_clerk;
sql> set role sales_clerk identified by commission;
sql> set role all except sales_clerk;
sql> set role none;


6.remove role from user
sql> revoke sales_clerk from scott;
sql> revoke hr_manager from public;


7.remove role
sql> drop role hr_manager;


8.display role information
view: =>dba_roles,dba_role_privs,role_role_privs,dba_sys_privs,role_sys_privs,role_tab_privs,session_roles


Chapter 12: BACKUP and RECOVERY


1. v$sga,v$instance,v$process,v$bgprocess,v$database,v$datafile,v$sgastat


2. RMAN needs dbwr_io_slaves or backup_tape_io_slaves and large_pool_size to be set


3. Monitoring Parallel Rollback
> v$fast_start_servers , v$fast_start_transactions


4.perform a closed database backup (noarchivelog)
> shutdown immediate
> cp files /backup/
> startup


5.restore to a different location
> connect system/manager as sysdba
> startup mount
> alter database rename file '/disk1/../user.dbf' to '/disk2/../user.dbf';
> alter database open;


6.recover syntax
–recover a mounted database
>recover database;
>recover datafile '/disk1/data/df2.dbf';
>alter database recover database;
–recover an opened database
>recover tablespace user_data;
>recover datafile 2;
>alter database recover datafile 2;


7.how to apply redo log files automatically
>set autorecovery on
>recover automatic datafile 4;


8.complete recovery:
–method 1 (mounted database)
>copy c:\backup\user.dbf c:\oradata\user.dbf
>startup mount
>recover datafile 'c:\oradata\user.dbf';
>alter database open;
–method 2 (opened database, initially opened, not system or rollback datafile)
>copy c:\backup\user.dbf c:\oradata\user.dbf (alter tablespace user_data offline first)
>recover datafile 'c:\oradata\user.dbf' or
>recover tablespace user_data;
>alter database datafile 'c:\oradata\user.dbf' online or
>alter tablespace user_data online;
–method 3 (opened database, initially closed, not system or rollback datafile)
>startup mount
>alter database datafile 'c:\oradata\user.dbf' offline;
>alter database open;
>copy c:\backup\user.dbf d:\oradata\user.dbf
>alter database rename file 'c:\oradata\user.dbf' to 'd:\oradata\user.dbf';
>recover datafile 'd:\oradata\user.dbf' or recover tablespace user_data;
>alter tablespace user_data online;
–method 4 (loss of data file with no backup and all archive logs available)
>alter tablespace user_data offline immediate;
>alter database create datafile 'd:\oradata\user.dbf' as 'c:\oradata\user.dbf';
>recover tablespace user_data;
>alter tablespace user_data online;


9.perform an open database backup
> alter tablespace user_data begin backup;
> copy files /backup/
> alter database datafile '/c:/../data.dbf' end backup;
> alter system switch logfile;


10.backup a control file
> alter database backup controlfile to 'control1.bkp';
> alter database backup controlfile to trace;


11.recovery (noarchivelog mode)
> shutdown abort
> cp files
> startup


12.recovery of file in backup mode
>alter database datafile 2 end backup;


13.clearing redo log file
>alter database clear unarchived logfile group 1;
>alter database clear unarchived logfile group 1 unrecoverable datafile;


14.redo log recovery
>alter database add logfile group 3 'c:\oradata\redo03.log' size 1000k;
>alter database drop logfile group 1;
>alter database open;
or >cp c:\oradata\redo02.log c:\oradata\redo01.log
>alter database clear logfile 'c:\oradata\redo01.log';


Joost - Internet TV around the clock

Software: Joost (version: 0.10.3)
Category: Internet multimedia
Type: Freeware (9.7 MB)

[Editor: 王國淵]

In recent years network infrastructure has grown rapidly: home connections have moved from dial-up modems, ADSL, and cable up to the latest FTTB. As the available bandwidth keeps increasing, many multimedia applications can be delivered over the network, and Internet TV is a good example. To enjoy Internet TV, of course, you need suitable playback software, and Joost is an option worth trying.

Joost is a free Internet TV application. Once it is installed, its built-in channel selector lets you watch hundreds of online channels, far more than any cable operator offers. News, sports, movies, and cartoons can all be found in this wide-ranging lineup, so you can watch whatever you like and no longer have to settle for the old, repetitive programs and films on broadcast TV.

The Internet reaches everywhere, so online TV stations come in every imaginable variety. One advantage of Joost is that its built-in channel list keeps expanding, giving you new choices every day so it never gets stale. If you want diverse programming, or even foreign-language news, Joost is a free and convenient choice.


Download: http://www.joost.com/getjoost.html

Testing Concurrent Programs

Writing correct concurrent programs is harder than writing sequential ones. This is because the set of potential risks and failure modes is larger – anything that can go wrong in a sequential program can also go wrong in a concurrent one, and with concurrency come additional hazards not present in sequential programs, such as race conditions, data races, deadlocks, missed signals, and livelock.


Testing concurrent programs is also harder than testing sequential ones. This is trivially true: tests for concurrent programs are themselves concurrent programs. But it is also true for another reason: the failure modes of concurrent programs are less predictable and repeatable than for sequential programs. Failures in sequential programs are deterministic; if a sequential program fails with a given set of inputs and initial state, it will fail every time. Failures in concurrent programs, on the other hand, tend to be rare probabilistic events.


Because of this, reproducing failures in concurrent programs can be maddeningly difficult. Not only might the failure be rare, and therefore not manifest itself frequently, but it might not occur at all in certain platform configurations, so the bug that happens daily at your customer’s site might never happen at all in your test lab. Further, attempts to debug or monitor the program can introduce timing or synchronization artifacts that prevent the bug from appearing at all. As in Heisenberg’s uncertainty principle, observing the state of the system may in fact change it.


So, given all this depressing news, how are we supposed to ensure that concurrent programs work properly? The same way we manage complexity in any other engineering endeavor – attempt to isolate the complexity.


Structuring programs to limit concurrent interactions


It is possible to write functioning programs entirely with public, static variables. Mind you, it’s not a good idea, but it can be done – it’s just harder, and more fragile. The value of encapsulation is that it makes it possible to analyze the behavior of a portion of a program without having to review the code for the entire program.


Similarly, encapsulating concurrent interactions in a few places, such as workflow managers, resource pools, work queues, and other concurrent objects, makes it simpler to analyze and test concurrent programs. Once the concurrent interactions are encapsulated, you can focus the majority of your testing efforts on the concurrency mechanisms themselves.


Concurrency mechanisms, such as shared work queues, often act as conduits for moving objects from one thread to another. These mechanisms contain sufficient synchronization to protect the integrity of their internal data structures, but the objects being passed in and out belong to the application, not the work queue, and the application is responsible for the thread-safety of these objects. You can make these domain objects thread-safe (making them immutable is often the easiest and most reliable way to do so), but there is often another option: make them effectively immutable.


Effectively immutable objects are those which are not necessarily immutable by design – they may have mutable state – but which the program treats as if they were immutable after they are published where they might be accessed by other threads. In other words, once you put a mutable object into a shared data structure, where other threads might then have access to it, make sure that it is not modified again by any thread. The judicious use of immutability and effective immutability limits the range of potentially incorrect concurrent actions by restricting mutability to a few core classes that can be strenuously unit-tested.


Listing 1 shows an example of how effective immutability can greatly simplify testing. The client code submits a request to a work manager, in this case an Executor, to factor a large number. The calculation is represented as a Callable<BigInteger[]>, and the Executor returns a Future<BigInteger[]> representing the calculation. The client code then waits on the Future for the result.


The FactorTask class is immutable, and therefore thread-safe, so no additional testing is required to prevent unwanted concurrent interactions. But FactorTask returns an array, and arrays are mutable. Shared mutable state needs to be guarded with synchronization, but because the application code is structured so that once the array of BigIntegers is returned by the FactorTask its contents are never modified, the client and task code can “piggyback” on the synchronization implicit in the Executor framework and do not need to provide additional synchronization when accessing the array of factors. If it were possible that any thread might modify the contents of the array of factors after it was created, this technique would not work.


ExecutorService exec = …

class FactorTask implements Callable<BigInteger[]> {
    private final BigInteger number;

    public FactorTask(BigInteger number) {
        this.number = number;
    }

    public BigInteger[] call() throws Exception {
        return factorNumber(number);
    }
}

Future<BigInteger[]> future = exec.submit(new FactorTask(number));
// do some stuff
BigInteger[] factors = future.get();



This technique can be combined with nearly all the concurrency mechanisms in the class library, including Executor, BlockingQueue, and ConcurrentMap – by only passing effectively immutable objects to these facilities (and returning effectively immutable objects from callbacks), you can avoid much of the complexity of creating and testing thread-safe classes.
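
As a concrete illustration of this hand-off style, here is a minimal sketch (not from the original article) of passing effectively immutable snapshots through a BlockingQueue: the producer builds a list, publishes an unmodifiable view of it, and never touches it again, so the consumer can read it without any additional locking. The class name SnapshotHandOff and the item strings are purely illustrative.

import java.util.*;
import java.util.concurrent.*;

// Sketch: hand effectively immutable snapshots from a producer thread to a
// consumer thread through a BlockingQueue. The queue's internal synchronization
// is all the synchronization the application needs, because the lists are never
// modified after they are published.
public class SnapshotHandOff {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<List<String>> queue = new LinkedBlockingQueue<>();

        Thread producer = new Thread(() -> {
            for (int batch = 0; batch < 3; batch++) {
                List<String> snapshot = new ArrayList<>();
                for (int i = 0; i < 5; i++) {
                    snapshot.add("item-" + batch + "-" + i);
                }
                // Publish an unmodifiable view and never touch `snapshot` again:
                // from here on the list is effectively immutable.
                queue.add(Collections.unmodifiableList(snapshot));
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int batch = 0; batch < 3; batch++) {
                    List<String> snapshot = queue.take();
                    System.out.println("received " + snapshot.size() + " items: " + snapshot);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}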


Testing concurrent building blocks


Once you’ve isolated concurrent interactions to a handful of components, you can focus your testing efforts on those components. Since testing concurrent code is difficult, you should expect to spend more time designing and executing concurrent tests than you do for sequential ones.


The following are some of the factors to consider when designing and running tests for concurrent classes. They, as well as others, are covered in much greater detail in the Testing chapter of Java Concurrency in Practice.



  • Tests are probabilistic. Because the failures you are searching for are probabilistic ones, your test cases are (at best) probabilistic as well. To give yourself the maximum chance of running across the right conditions to trigger a failure, you should plan to run concurrent tests for much longer than you would run equivalent sequential tests.

  • Explore more of the state space. Running tests for a longer time is not going to find the problem if you are simply retrying the same inputs and the same initial state over and over again. You want to explore more of the state space, which, with concurrent programs, includes temporal considerations as well. For example, if testing insertion and removal in a queue, you’ll want to explore all the relative timings and orderings with which the two operations might be initiated.

  • Explore more interleavings. The scheduler may preempt a thread at any time, but most of the time short synchronized blocks will run to completion without preemption. This limits the likelihood that race conditions will be disclosed, as other (potentially undersynchronized) code is less likely to run while another thread is in the middle of a synchronized block. Tools like ConTest can randomly introduce yield() calls into synchronized blocks to explore more possible interleavings.

  • Match thread count to the platform. If you run only as many threads as you have processors, threads will rarely be preempted by the scheduler, reducing the number of potential interactions between active and waiting threads. Similarly, if you run many more threads than you have processors, you reduce the number of potential interactions between active threads. Tailoring thread count so that the number of runnable threads at any time is a small multiple of the processor count will often result in a more interesting variety of interleavings.

  • Avoid introducing timing or synchronization artifacts. Tests for concurrent data structures often involve having some threads inserting elements while other threads remove them, and asserting things like “everything that went in came out”, “nothing that didn’t go in came out”, and “everything came out in the right order.” The obvious way to code such tests involves maintaining data structures shared across the test threads, which will themselves require synchronization. But if the test program does its own synchronization, it may perturb the timing or scheduling with which the component being tested runs, masking potential negative interactions. One way around this, sketched after this list, is to have each test thread accumulate an order-insensitive checksum locally and combine the checksums only after the run completes.
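
To make the last point concrete, here is a minimal sketch (not from the article) of a put/take test in the style described in Java Concurrency in Practice: a CyclicBarrier lines up the start and finish so the producer and consumer threads actually overlap, each thread accumulates an order-insensitive checksum locally, and the only shared test-side state is a pair of atomic sums that are compared after the run. The class and method names are illustrative.

import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a put/take test for a bounded queue: checksums are accumulated
// per thread and only combined after the barrier, so the test adds almost no
// synchronization of its own while the queue is being exercised.
public class PutTakeSketch {
    static final int PAIRS = 4, PER_THREAD = 100_000;
    static final BlockingQueue<Integer> queue = new LinkedBlockingQueue<>(10);
    static final AtomicLong putSum = new AtomicLong(), takeSum = new AtomicLong();
    // PAIRS producer threads + PAIRS consumer threads + the main thread
    static final CyclicBarrier barrier = new CyclicBarrier(PAIRS * 2 + 1);

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newCachedThreadPool();
        for (int i = 0; i < PAIRS; i++) {
            pool.execute(PutTakeSketch::producer);
            pool.execute(PutTakeSketch::consumer);
        }
        barrier.await();            // release all workers at (roughly) the same time
        barrier.await();            // wait for all workers to finish
        pool.shutdown();
        // Everything that went in came out, and nothing else did.
        if (putSum.get() != takeSum.get())
            throw new AssertionError("checksums differ: " + putSum + " vs " + takeSum);
        System.out.println("OK, checksum = " + putSum.get());
    }

    static void producer() {
        try {
            long sum = 0;
            int value = (int) System.nanoTime();   // cheap per-thread seed
            barrier.await();
            for (int i = 0; i < PER_THREAD; i++) {
                queue.put(value);
                sum += value;
                value = xorShift(value);
            }
            putSum.addAndGet(sum);                 // combine only after the work is done
            barrier.await();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    static void consumer() {
        try {
            long sum = 0;
            barrier.await();
            for (int i = 0; i < PER_THREAD; i++) {
                sum += queue.take();
            }
            takeSum.addAndGet(sum);
            barrier.await();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    static int xorShift(int y) {                   // order-insensitive "random" values
        y ^= (y << 6); y ^= (y >>> 21); y ^= (y << 7);
        return y;
    }
}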

All this sounds like a lot of work, and it is. But by limiting the scope of concurrent interactions to a few widely-used, well-tested components, you greatly reduce the amount of effort required to test an application. And, by reusing existing tested library components, such as the classes in the java.util.concurrent package, you further reduce your testing burden.


Concurrency Testing in JAVA Applications


Testing and optimizing Java code to handle concurrent activities is difficult without test automation. Even with test automation, being able to correlate the test activity from the client side to observations of thread, memory, object and database connection use on the server side is difficult at best. In this article, Frank Cohen describes methods for concurrency testing in Java applications and shows a new technique to correlate what a Java application server is doing on the server side while a load test automation tool drives a test on the client side.


Introduction


This article picks up where Brian Goetz’s article Testing Concurrent Programs ends. Goetz describes the need to use concurrency mechanisms in Java programs and gives tips for testing those mechanisms. For instance, Goetz’s article introduces a technique to sidestep Java’s inherent concurrency hazards around mutable and static objects by shielding mutable objects behind immutable wrappers. Goetz says “all this sounds like a lot of work, and it is.” At this point in your concurrency testing, an investigation into test automation tools may help your effort.


Concurrency testing is well known, but hard to do


Most IT managers agree that concurrency testing is the appropriate way to determine many performance bottlenecks, resource contention issues, and service interruptions. However, in my experience few ever do concurrency testing because the available test tools are not satisfactory.


Consider the minimum functions a test automation tool needs to support to expose concurrency issues:



  • Unit test of a specific function.

  • Variations on the input data the unit test sends to a function.

  • Business flows through system functions, built as sequences of functional unit tests.

  • Operating the business flows in concurrently running threads. Additionally, in large-scale concurrency tests the flows may need to be operated across multiple test machines.

  • Variations in the mix of business flows in the concurrently running threads. For instance, a first mix may run 20% of the business flows through a create-a-new-user flow while 80% operate a sign-in flow, and a second test reverses the mix to 80% create and 20% sign-in; a minimal sketch of driving such a mix appears after this list.

  • Variations in the test parameters based on usage factors. For instance, changing the test parameters based on the time-of-day or end-of-quarter.
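
To make the flow-mix idea concrete, here is a minimal sketch (not tied to TestMaker or any other tool) that drives two hypothetical business flows, createNewUserFlow and signInFlow, from a pool of virtual-user threads at a configurable 20/80 mix; the flow bodies are stubs standing in for sequences of functional unit tests.

import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: run a mix of business flows concurrently. Each virtual user picks a
// flow according to the configured ratio, so changing the mix (e.g. from
// 20/80 to 80/20) is a one-line change.
public class FlowMixLoadTest {
    static final AtomicInteger createCount = new AtomicInteger();
    static final AtomicInteger signInCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        int virtualUsers = 40;          // concurrent threads
        int iterations = 50;            // flows per virtual user
        double createRatio = 0.20;      // 20% create-a-new-user, 80% sign-in

        ExecutorService pool = Executors.newFixedThreadPool(virtualUsers);
        for (int u = 0; u < virtualUsers; u++) {
            pool.execute(() -> {
                for (int i = 0; i < iterations; i++) {
                    if (ThreadLocalRandom.current().nextDouble() < createRatio) {
                        createNewUserFlow();
                    } else {
                        signInFlow();
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
        System.out.printf("create flows: %d, sign-in flows: %d%n",
                createCount.get(), signInCount.get());
    }

    // Stand-ins for sequences of functional unit tests against the system under test.
    static void createNewUserFlow() { createCount.incrementAndGet(); }
    static void signInFlow()        { signInCount.incrementAndGet(); }
}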

There is no concurrency testing heaven here. You can never create a deterministic test to uncover a non-deterministic concurrency issue. The goal instead is to get close to the problem and to get lucky. Operating a concurrency test with as many of the above operational parameters as possible will not guarantee that the test surfaces concurrency issues, but it works the odds in your favor of running across the right conditions to trigger a failure.


From Unit Tests to Concurrency Tests


There are some things you should expect in today’s test automation tools. For instance, many test automation tools are now using unit tests in ways that aid in surfacing concurrency problems. Each unit test operates a particular method or class. Stringing unit tests together into a sequence forms a functional test of a business flow.



You should also expect the test tool to provide libraries that embellish unit tests with protocol handlers that speak the native protocols of the application or service under test, including HTTP, SOAP, Ajax, REST, Email, and custom network protocols. Additionally, you should expect the test tool to embellish functional unit tests with a Data Production Library (DPL) that provides test data as input to the unit test and data to validate the service or class response.
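
As a rough illustration of these two expectations, here is a sketch (not TestMaker code) of a data-driven functional unit test that uses Java's built-in HTTP client as the protocol handler and a small in-memory table as a stand-in for a Data Production Library; the endpoint URL, parameter names, and expected status codes are all hypothetical.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

// Sketch: a data-driven functional unit test. Each row of the "DPL" supplies
// the input sent to the service and the response status expected back.
public class SignInUnitTest {
    record Row(String user, String password, int expectedStatus) {}

    // Stand-in for a Data Production Library: input data plus validation data.
    static final List<Row> DPL = List.of(
            new Row("alice", "correct-password", 200),
            new Row("alice", "wrong-password",   401),
            new Row("",      "",                 400));

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        for (Row row : DPL) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/signin"))   // hypothetical endpoint
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(
                            "user=" + row.user() + "&password=" + row.password()))
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() != row.expectedStatus()) {
                throw new AssertionError("expected " + row.expectedStatus()
                        + " but got " + response.statusCode() + " for user " + row.user());
            }
        }
        System.out.println("all " + DPL.size() + " data variations passed");
    }
}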


It is reasonable to find a test automation framework wrapping unit tests into functional tests that may be operated concurrently to identify concurrency issues in an application. I coined the term TestScenario in my open-source test tool (TestMaker) to name this wrapper.



A TestScenario operates the unit tests as a functional test by operating a sequence of unit tests once, as a load test by operating sequences of the unit tests in concurrently running threads, and as service monitors by running the unit tests periodically. The TestScenario gives us the flexibility to try multiple sequences of business flows concurrently.


The TestScenario defines which TestNodes will operate the test. I coined the term TestNode for the distributed test environment operating on a remote machine or a rack of machines in a QA lab. When the TestScenario begins, it moves the unit test and any other resource files to the defined TestNodes. The unit test speaks the native protocols of the application under test. The TestNodes provide the unit test data through the DPL, operate the unit tests, and log the results.


As the TestScenario operates the test, a monitor watches the application host and logs usage and resource statistics. The test tools then correlate the monitored data to the unit test operation to identify performance bottlenecks and available optimizations.


The key to using a test automation platform for concurrency testing is to identify what is happening on the server side. I find it valuable to observe at least the following:



  • Java Virtual Machine statistics (heap, memory, thread, object instantiations, and garbage collection)

  • Service call statistics (time to serve request, external service and database query times, properties settings, and service interface request values and schema)

  • Subsystem statistics (if the system uses a database then I want to see the transaction logs, input parameters, and query operations)

Eventually all of these observations come with the manual expense of correlating the observed data to what the test automation tool is doing on the client side. Recently I began finding agent and agentless tools that monitor server-side activity in a way that can be correlated, snapshot by snapshot, to the test operation. One such agent-based tool is Glassbox.


Tools like Glassbox watch an application server receive requests, operate objects and EJBs, communicate with databases, and interact with message buses. These tools provide a service interface that test environments can use to automate the correlations. For instance, Glassbox uses the Java Management Extensions (JMX) interface to communicate performance observations.
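
Glassbox's own MBeans and interfaces are not reproduced here, but as a minimal sketch of the kind of JVM statistics listed above, the standard java.lang.management platform MXBeans can be sampled in-process (or exposed remotely over JMX) without any agent: heap use, thread counts, and cumulative garbage-collection time are all available.

import java.lang.management.*;

// Sketch: sample basic JVM statistics (heap, threads, GC) through the standard
// platform MXBeans. A monitoring tool would expose the same beans remotely over JMX.
public class JvmStatsSampler {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        for (int sample = 0; sample < 5; sample++) {
            MemoryUsage heap = memory.getHeapMemoryUsage();
            long gcTimeMillis = 0;
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                gcTimeMillis += gc.getCollectionTime();
            }
            System.out.printf("heap used=%dKB committed=%dKB threads=%d peak=%d gcTime=%dms%n",
                    heap.getUsed() / 1024, heap.getCommitted() / 1024,
                    threads.getThreadCount(), threads.getPeakThreadCount(), gcTimeMillis);
            Thread.sleep(1000);   // sample once per second
        }
    }
}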


Glassbox installs an agent named the Glassbox Inspector to monitor traffic through the Java virtual machine and send data to an analysis application called Glassbox Troubleshooter. The system diagnoses problems by constructing a virtual application model describing the application components and how they connect to each other. The model is a directed graph in which each node represents a component or resource within the application and each edge represents a usage relationship. Using aspects, Glassbox discovers the components dynamically as the application runs and constructs the model automatically.


The model allows Glassbox to go beyond simple correlation analysis to trace causality through the system. For example, a correlation analysis might reveal that a slow operation occurred at the same time the database was running slowly overall, suggesting a database tuning problem. Because the virtual application model allows walking the usage relationships in both directions, the model reveals that the slow database was in fact caused by a long-running report query and identifies that as the cause.


I recently used Glassbox to observe the Sun Blueprints Java Petstore and correlated the observations to a load test. Some things became clear. While I had thought the browser-to-server communication would lead to performance bottlenecks in parsing the incoming requests and marshalling the responses, I found that communication between the EJBs and the database more often caused the biggest performance hotspots.



I was able to operate a load test of the PetStore and identify the slowest performing functions by correlating the load test to an alert for slow database operations.


Integration gives correlation capability


Some of the test automation tools and server-side monitors are going further, providing correlations between the test activity and thread- and object-level observations of what happens in the back end.


For instance, most test automation tools produce a Scalability Index chart that shows Transaction Per Second (TPS) ratings for increasing levels of concurrent requests to a service. The following chart shows the TPS values for two use cases.



In the above chart the red bar shows the service operating at 0.17 TPS when the test use case runs for 10 minutes at 20 concurrent virtual users (threads). The blue bar shows the service operating at 0.08 TPS at 40 concurrent virtual users. I modified the chart to also show a Performance Alert when Glassbox observes an operation that falls outside of the normal time to complete that operation. (Glassbox uses properties files to set the parameters for normal operation times.) Clicking on the Performance Alert button brings you directly to the Glassbox Inspector for an explanation of the problem and the details needed to understand and solve the performance problem.


While concurrency testing is often arduous, difficult to stage, and lengthy, there is a new crop of test automation tools that makes developing and operating concurrency tests relatively easy. Combining these test tools with monitoring software enables correlating load-testing activity on the front end with thread, object, and service activity to surface concurrency issues. This is not guaranteed to surface every concurrency issue, but it works the odds in your favor of running across the right conditions to trigger a failure and remediate it.


Resources


Brian Goetz – Testing Concurrent Programs


PushToTest TestMaker – open-source test automation


Glassbox – open-source troubleshooting agent for Java applications


Toward a Benchmark for Multi-Threaded Testing Tools

Shield Defense - super base defender

Software: Shield Defense (version: N/A)
Category: Action game
Type: Freeware ()

[Editor: 宗文]

In this game the player uses the base's protective paddle to bounce enemy projectiles back and attack the enemies with their own fire. The early enemies move slowly and carry weak weapons, so they are easy to handle, but as the game progresses their numbers, firepower, and movement speed all increase, and the player needs sharp eyes and quick hands to survive the increasingly difficult levels.

The money earned by destroying enemies can be spent on weapons that raise attack power or defense, or on repairing the base; after upgrading, the player is much better equipped to fight off the stubborn enemy forces. Some of the special upgrades are very handy: for example, after buying the "Sticky Shield", the paddle, which normally only bounces enemy bullets back, can catch one enemy bullet and fire it at any time, making it much easier to aim and greatly improving your efficiency at wiping out the enemy.

Controls: move the paddle with the mouse; the left mouse button fires the bullet held by the shield (only available after purchase).

Download: