
Search Results


  • Oracle CRSCTL Commands List

    Linux SQL Oracle Oracle Database Oracle ASM Oracle RAC Oracle Golden Gate Enterprise Manager MySQL PostgreSQL AWS Home

    A comprehensive list of CRSCTL commands for managing Oracle Clusterware. The CRSCTL utility allows you to administer cluster resources. Here are a few quick commands to help you administer an Oracle RAC cluster.

    Check Cluster Status

    Status of the upper & lower stack:
    ./crsctl check crs

    Status of the upper stack:
    ./crsctl check cluster

    Cluster status on all nodes:
    ./crsctl check cluster -all

    Cluster status on a specific node:
    ./crsctl check cluster -n rac2

    Check Cluster Nodes

    Check cluster services in table format:
    ./crsctl status resource -t

    Check the status of clusterware nodes / services:
    ./crsctl status server -f

    Check cluster nodes:
    olsnodes -n
    oraracn1    1
    oraracn2    2

    Stop Grid Cluster

    Stop HAS on the current node:
    ./crsctl stop has

    Stop HAS on a remote node:
    ./crsctl stop has -n rac2

    Stop the entire cluster on all nodes:
    ./crsctl stop cluster -all

    Stop the cluster (CRS + HAS) on a remote node:
    ./crsctl stop cluster -n rac2

    Start Grid Cluster

    Start HAS on the current node:
    ./crsctl start has

    Start HAS on a remote node:
    ./crsctl start has -n rac2

    Start the entire cluster on all nodes:
    ./crsctl start cluster -all

    Start the cluster (CRS + HAS) on a remote node:
    ./crsctl start cluster -n rac2

    Enable / Disable Cluster Auto Start

    crsctl disable has
    crsctl enable has
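    The per-node checks above are easy to wrap in a small script. A minimal sketch: the CRSCTL variable and the check_cluster function name are our own additions, and the echo stub at the bottom exists only so the sketch can run on a machine without Grid Infrastructure installed.

```shell
#!/bin/sh
# check_cluster: run basic Clusterware health checks for each node name
# passed as an argument. CRSCTL defaults to the real binary but can be
# overridden, which is how the dry run below works.
CRSCTL="${CRSCTL:-crsctl}"

check_cluster() {
    "$CRSCTL" check crs                      # upper & lower stack, local node
    for node in "$@"; do
        "$CRSCTL" check cluster -n "$node"   # per-node cluster check
    done
}

# Dry run with echo standing in for crsctl; on a real node, drop this line.
CRSCTL=echo
check_cluster rac1 rac2
```

    On a real cluster node you would leave CRSCTL at its default and simply call `check_cluster rac1 rac2`.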

  • Oracle Database Hot Backup and Recovery

    Perform seamless hot backups and recovery for Oracle databases. This article demonstrates the Oracle database hot backup and recovery process. Please note that this method is no longer used in real time, as RMAN does a far better job at database backup & recovery. This is a good-to-know activity, but do not implement it in real time; knowing how the Oracle hot backup and recovery process works helps you understand Oracle RMAN better.

    Hot Backup Overview

    Taking a backup while the database is up and running is called a hot backup. During a hot backup the database is in a fuzzy state, and users can still perform transactions, which makes the backup inconsistent. Whenever we place a tablespace or database in begin backup mode, the following happens:
    - The corresponding datafile headers are frozen, i.e. the CKPT process will not update the latest SCN.
    - The body of the datafile is still active, i.e. DBWRn keeps writing dirty blocks to the datafiles.
    - After end backup, the datafile headers are unfrozen and the CKPT process immediately updates the latest SCN by taking that information from the controlfiles.

    During a hot backup you will observe a lot of redo generated, because Oracle copies the entire data block as a redo entry into the log buffer. This is to avoid fractured blocks. A block fracture occurs when a block is being read by the backup and written to at the same time by DBWR. Because the OS (usually) reads blocks at a different rate than Oracle, your OS copy will pull pieces of an Oracle block at a time. What if the OS copy pulls half a block, and while that is happening, the block is changed by DBWR? When the OS copy pulls the second half of the block, it ends up with mismatched halves, which Oracle would not know how to reconcile.

    This is also why the SCN in the datafile header does not change when a tablespace enters hot backup mode. The current SCNs are recorded in redo, but not in the datafile. This ensures that Oracle will always recover over the datafile contents with redo entries. When recovery occurs, the fractured datafile block is replaced with a complete block from redo, making it whole again.

    Note: the database must be in archivelog mode to perform a hot backup.

    Taking hot backup

    Put the database in begin backup mode:
    alter database begin backup;

    Copy all database-related files to a backup location. Create a location on the OS where you can copy the backup files, e.g. /u02/db_backup/hot_bkp_12_jan/. Get the locations of the datafiles, controlfiles and redolog files:
    select name from v$datafile;
    select name from v$controlfile;
    select member from v$logfile;

    Copy the above files to the backup location using the Linux cp command, then put the DB in end backup mode:
    alter database end backup;

    Take a manual controlfile backup:
    alter database backup controlfile to '/u02/db_backup/hot_bkp/ctrolfile_DB_level_bkp.ctl';
    alter database backup controlfile to trace as '/u02/db_backup/hot_bkp/ctrolfile_trace_bkp.ctl';

    Back up the archive logs generated between begin backup and end backup:
    SELECT THREAD#,SEQUENCE#,NAME FROM V$ARCHIVED_LOG;
    Copy the above archive log files to the backup location using the Linux cp command.

    Important queries

    Sometimes you want to check which tablespaces inside the database are currently kept in BACKUP mode. Use the below query to get all the tablespaces in ACTIVE mode (meaning they are under BEGIN BACKUP):

    SELECT t.name AS "TB_NAME", d.file# AS "DF#", d.name AS "DF_NAME", b.status
    FROM V$DATAFILE d, V$TABLESPACE t, V$BACKUP b
    WHERE d.TS#=t.TS# AND b.FILE#=d.FILE# AND b.STATUS='ACTIVE'
    /

    The following sample output shows that the example and users tablespaces currently have ACTIVE status:

    TB_NAME       DF#        DF_NAME                              STATUS
    ------------- ---------- ------------------------------------ ------
    EXAMPLE       7          /oracle/oradata/proddb/example01.dbf ACTIVE
    USERS         8          /oracle/oradata/proddb/users01.dbf   ACTIVE

    Parameter file recovery

    There are three scenarios for losing the parameter file and recovering it using hot backups.

    Scenario 1: when you have a parameter file backup

    Let us simulate a failure. We assume the hot backup is already taken and the DB is up and running.
    - Delete both pfile and spfile under the $ORACLE_HOME/dbs location.
    - Connect to sqlplus and shut down the database.
    - Now start the database; it should throw the below error:
    ORA-01078: failure in processing system parameters
    LRM-00109: could not open parameter file '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/initproddb.ora'

    Recovering the parameter file:
    - Exit sqlplus.
    - Go to the hot backup location.
    - Copy the parameter files (pfile & spfile) to the $ORACLE_HOME/dbs location.
    - Connect to sqlplus and start your database!

    Scenario 2: when you do not have a parameter file backup

    If we do not have a hot backup of the parameter files, follow the below method to recover the parameter file:
    - Go to the alert log location (generally /u01/app/oracle/diag/rdbms/testdb/testdb/trace).
    - Cat the alert log file (cat alert_testdb.log).
    - Find the last time the database was started.
    - Copy all the non-default parameters into a notepad.
    - Create a pfile under the $ORACLE_HOME/dbs location. The file name should be init<SID>.ora.
    - Paste all contents from the notepad.
    - Start the database!

    Scenario 3: when you do not have a parameter file backup (11g onwards)

    From 11g onwards, you can recreate the parameter file as long as the database instance is up and running. Even if you lose the parameter file, while the instance is still running you can recreate it from memory:
    CREATE PFILE FROM MEMORY;

    Control file recovery

    Simulate a failure (we assume a hot backup is already taken). Without shutting down the database, delete the database control files at the OS level:
    SQL> select name from v$controlfile;
    # rm -rf ...

    Scenario 1: steps for control file incomplete recovery
    SQL> shutdown immediate
    SQL> !cp /u03/hotbkp/*.ctl /datafiles/prod/
    SQL> startup mount
    SQL> recover database using backup controlfile until cancel;
    SQL> alter database open resetlogs;

    Scenario 2: steps for control file complete recovery
    SQL> alter database backup controlfile to trace as '/u03/hotbkp/control_bkp.trc';
    SQL> shutdown immediate
    Go to the udump location and copy the first create controlfile script to a file called control.sql
    SQL> startup nomount
    SQL> @control.sql
    SQL> alter database open;

    Note: control files created using the above procedure contain no SCN. In this situation the server process writes the latest SCN to the control files by taking the information from the datafile headers.

    Redolog recovery

    Let's try to recover our database after the loss of the redolog files. Simulate a failure by deleting the redologs at the OS level:
    SQL> select member from v$logfile;
    # rm -rf ...

    Recover the redo log files in archivelog mode:
    SQL> shutdown immediate
    SQL> startup mount
    SQL> recover database until cancel;
    SQL> alter database open resetlogs;

    What does resetlogs do?
    - Creates new redolog files at the OS level (location and size are taken from the controlfile) if they do not already exist.
    - Resets the log sequence number (LSN) to 1, 2, 3, etc. for the created files.

    Whenever the database is opened with the resetlogs option, we say the database has entered a new incarnation. Once the database is in a new incarnation, the backups taken until then are no longer useful. So whenever we perform an incomplete recovery, we need to take a full backup of the database immediately. We can find the previous incarnation information of a database with the below query:
    select resetlogs_change#, resetlogs_time from v$database;

    Tablespace recovery

    Delete all the data files of a particular tablespace and follow the below recovery steps:
    SQL> alter tablespace mydata offline;
    SQL> !cp /u03/hotbkp/mydata01.dbf /datafiles/prod
    SQL> recover tablespace mydata;
    SQL> alter tablespace mydata online;

    Datafile recovery

    Delete only one data file of a tablespace and follow the below steps to recover it:
    SQL> alter database datafile '/datafiles/prod/mydata01.dbf' offline;
    SQL> !cp /u03/hotbkp/mydata01.dbf /datafiles/prod
    SQL> recover datafile '/datafiles/prod/mydata01.dbf';
    SQL> alter database datafile '/datafiles/prod/mydata01.dbf' online;

    System tablespace recovery

    Delete all the data files of the system tablespace and use the below steps to recover it:
    SQL> shut immediate
    SQL> !cp /u03/hotbkp/system01.dbf /datafiles/prod
    SQL> startup mount
    SQL> recover tablespace system;
    SQL> alter database open;

    Entire database recovery

    Let us simulate a failure (make sure you have already taken the database backup). Delete all the data files, control files and redo log files associated with the database:
    SQL> select name from v$controlfile;
    SQL> select name from v$datafile;
    SQL> select member from v$logfile;

    Follow the below steps to perform entire database recovery:
    SQL> shut immediate (or shut abort)
    SQL> !cp /u03/hotbkp/*.dbf /datafiles/prod
    SQL> startup mount
    SQL> recover database;
    SQL> alter database open;

    Note: we can drop a single datafile using the below command:
    SQL> alter database datafile '/datafiles/prod/mydata01.dbf' offline drop;
    When we use the above command it deletes the file at the OS level, but the data dictionary is not updated, and we can never get that file back even if we have a backup. So don't use this in real time.

    Datafile recovery without backup

    Even if you do not have a hot backup of the database or of a particular data file, you can still recover the entire data file, but you need all the archive logs from the day the data file was first created. To simulate this type of failure:
    - Create a tablespace.
    - Create a user with one table inside the new tablespace.
    - Delete the data file at the OS level.

    Now use the below commands to recover the data file (note: we do not have any hot backup):
    SQL> alter database datafile 6 offline;
    SQL> alter database create datafile '/u01/app/oracle/oradata/clone1/test01.dbf' as '/u01/app/oracle/oradata/clone1/test01.dbf';
    SQL> recover tablespace test;
    SQL> alter tablespace test online;

    Enjoy!
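    The copy steps under "Taking hot backup" above can be sketched as a small script. This is our own illustrative helper, not from the article: given a file listing datafile paths (as you would spool from `select name from v$datafile;`), it copies each one to a backup directory. The copy_backup name and the /tmp demo paths are assumptions; the demo uses throwaway files standing in for datafiles.

```shell
#!/bin/sh
# copy_backup: copy every file listed (one path per line) in a list file
# to a backup directory. In real use the list would be spooled from
# v$datafile while the database is in begin backup mode.
copy_backup() {
    list="$1"    # file containing one datafile path per line
    dest="$2"    # backup destination directory
    mkdir -p "$dest"
    while IFS= read -r f; do
        [ -f "$f" ] && cp "$f" "$dest"/
    done < "$list"
}

# Demo with throwaway files standing in for datafiles
mkdir -p /tmp/demo_df
printf 'data one' > /tmp/demo_df/users01.dbf
printf 'data two' > /tmp/demo_df/mydata01.dbf
ls /tmp/demo_df/*.dbf > /tmp/demo_df/list.txt
copy_backup /tmp/demo_df/list.txt /tmp/demo_bkp
ls /tmp/demo_bkp
```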

  • Shell Script to Check File or Directory

    Verify if a path is a file or directory with a shell script. A Linux shell script lets the user check whether a given input path is a file or a directory. To achieve this we use the -f and -d test operators.

    vi check_file_directory.sh

    #!/bin/bash
    echo "Enter the file name: "
    read file
    if [ -f "$file" ]
    then
        echo "$file ---> It is an ORDINARY FILE."
    elif [ -d "$file" ]
    then
        echo "$file ---> It is a DIRECTORY."
    else
        echo "$file ---> It is something else."
    fi

    More ways to use the -f and -d operators

    Check if a directory exists in a shell script:
    [ -d "/path/to/dir" ] && echo "Directory /path/to/dir exists."

    Check if a file exists:
    FILE=/etc/resolv.conf
    test -f "$FILE" && echo "$FILE exists."
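    The same checks can also be packaged as a small reusable function that prints a one-word classification instead of a sentence. This variant (the path_type name) is our own, not from the article:

```shell
#!/bin/sh
# path_type: classify a path as "file", "directory", or "other"
# using the same -f and -d test operators as the script above.
path_type() {
    if [ -f "$1" ]; then
        echo "file"
    elif [ -d "$1" ]; then
        echo "directory"
    else
        echo "other"
    fi
}

path_type /tmp            # directory
path_type /no/such/path   # other
```

    Because the function writes a single word to stdout, its result is easy to capture with command substitution, e.g. `kind=$(path_type /tmp)`.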

  • RMAN Incremental Backup & Recovery

    Optimize backup and recovery with RMAN incremental strategies. In this article we will look at RMAN incremental backups and how to perform database recovery using them.

    Take RMAN Incremental Backup

    Connect to the target DB and catalog, then take a level 0 backup:
    RMAN> backup incremental level 0 database plus archivelog;

    Once the backup is completed, check the backup tag via the below command:
    RMAN> list backup of database summary;
    TAG20170115T113749 --> level 0 backup tag

    Create a new user & table:
    SQL> create user ogr identified by ogr;
    SQL> grant connect, resource, create session to ogr;
    SQL> conn ogr/ogr
    SQL> create table test(serial number(2), name varchar2(5));
    SQL> insert into test values(1,'one');
    SQL> insert into test values(2,'Two');
    SQL> insert into test values(3,'Three');
    SQL> insert into test values(4,'Four');
    SQL> commit;

    Trigger a DB level 1 backup:
    RMAN> backup incremental level 1 database plus archivelog;

    Once the backup is completed, check the backup tag via the below command:
    RMAN> list backup of database summary;
    TAG20170115T114127 --> level 1 backup tag

    Simulate Failure

    Delete all the datafiles from the server:
    SQL> select name from v$datafile;
    # rm -rf ...

    Start Database Recovery

    Kill the DB instance if it is running (shut abort, or kill pmon at the OS level). Start the DB instance and take it to mount stage, then connect to RMAN and issue the below commands:
    run {
      RESTORE DATABASE from tag TAG20170115T113749;
      RECOVER DATABASE from tag TAG20170115T114127;
      RECOVER DATABASE;
      sql 'ALTER DATABASE OPEN';
    }

  • Oracle Home Cloning

    Clone Oracle Home for easy replication across environments. Have you ever imagined how easy it would be if you could just clone an existing installation of Oracle software from one server to another without performing a fresh installation? You have definitely heard of Oracle database cloning, but you can even clone an Oracle Home without performing a fresh installation. It makes your life easy if you are planning to perform Oracle software installation on multiple servers at once.

    Benefits
    - One-time installation; clone the installation to all other servers
    - Saves time and is simple to perform
    - No need to fire the Oracle installer on all servers

    Important: I would never use this method for a production server installation, as I want that to be a clean installation. This method is good for single instance DBs, not for RAC DBs. Always make sure the source and destination servers have the same setup and all pre-requisites completed.

    Cloning Steps

    On the source machine, stop all the databases, the listener and all other Oracle services running out of the ORACLE_HOME you want to clone:
    SQL> shut immediate;
    $ lsnrctl stop listener

    Go to the ORACLE_HOME location and come up one directory:
    cd $ORACLE_HOME
    cd ..

    Create a tar file of the ORACLE_HOME directory. In our example, the Oracle software binaries are installed under the 12.2.0 directory:
    tar -cvf oracle_home.tar 12.2.0

    Copy the above tar file onto the target server's ORACLE_HOME location. This ORACLE_HOME location can be different from the source ORACLE_HOME location. Un-tar the file:
    tar -xvf oracle_home.tar

    Make sure the ORACLE_HOME on the new server is set to the correct path:
    env | grep ORA

    Run the clone perl script. Make sure to give the proper ORACLE_HOME and ORACLE_BASE locations:
    $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/clone/bin/clone.pl ORACLE_BASE="/ora/app/oracle/" ORACLE_HOME="/ora/app/oracle/clone_home/12.2.0" OSDBA_GROUP=dba OSOPER_GROUP=oper -defaultHomeName

    Now you can create a database using DBCA from the cloned ORACLE_HOME:
    dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbName oraclo -sid oraclo -createAsContainerDatabase false -sysPassword sys -systemPassword sys -emConfiguration NONE -datafileDestination /ora/data/oracle/db_files -storageType FS -characterSet AL32UTF8 -totalMemory 2048 -recoveryAreaDestination /ora/app/oracle/fast_recovery_area
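    The tar-copy-untar portion of the steps above can be rehearsed end to end. A dry-run sketch of our own using throwaway /tmp directories in place of real Oracle Home paths (SRC_BASE, DST_BASE and the placeholder file are purely illustrative; in real life step 2 would be an scp between servers):

```shell
#!/bin/sh
# Sketch of the tar -> copy -> untar flow, runnable anywhere.
SRC_BASE=/tmp/src_oracle        # stands in for the source server path
DST_BASE=/tmp/dst_oracle        # stands in for the target server path
HOME_DIR=12.2.0                 # directory holding the Oracle binaries

# Fake a tiny "Oracle Home" on the source side
mkdir -p "$SRC_BASE/$HOME_DIR/bin"
echo 'oracle binary placeholder' > "$SRC_BASE/$HOME_DIR/bin/oracle"

# 1. Tar the home from one directory above it
cd "$SRC_BASE" && tar -cf oracle_home.tar "$HOME_DIR"

# 2. "Copy" the tar to the target (scp between servers in real life)
mkdir -p "$DST_BASE"
cp "$SRC_BASE/oracle_home.tar" "$DST_BASE/"

# 3. Un-tar on the target
cd "$DST_BASE" && tar -xf oracle_home.tar
ls "$DST_BASE/$HOME_DIR/bin"
```

    Tarring from one directory above the home keeps the top-level directory name (12.2.0) inside the archive, so extraction recreates the same layout on the target.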

  • Oracle Data Guard Protection Modes

    Learn about protection modes in Oracle Data Guard for disaster recovery. A Data Guard configuration always runs in one of three data protection modes (also called redo transport rules): MAX PROTECTION, MAX AVAILABILITY and MAX PERFORMANCE. By default, the protection mode is MAX PERFORMANCE.

    MAX PERFORMANCE uses ASYNC redo transport, while the other protection modes use SYNC redo transport. Looking at MAX PROTECTION and MAX AVAILABILITY, we can say that the MAX PROTECTION mode is rarely used in real time; the main reason is that if the standby is unavailable, the primary will shut down. The protection modes you will mostly use are MAX PERFORMANCE and MAX AVAILABILITY!

    Switch from Max Performance to Max Availability Protection Mode

    Verify the broker configuration, check that it's enabled and make sure log apply is on:
    dgmgrl sys/oracle@proddb
    show configuration
    show database proddb
    show database proddb_st
    edit database proddb_st set state=apply-on;

    Change the LNS mode from ASYNC to SYNC and raise the protection mode:
    EDIT DATABASE proddb_st SET PROPERTY LogXptMode='SYNC';
    EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;

    Switch from Max Availability to Max Performance Protection Mode

    Verify the broker configuration, check that it's enabled and make sure log apply is on:
    dgmgrl sys/oracle@proddb
    show configuration
    show database proddb
    show database proddb_st
    edit database proddb_st set state=apply-on;

    Change the LNS mode from SYNC to ASYNC and lower the protection mode:
    EDIT DATABASE proddb_st SET PROPERTY LogXptMode='ASYNC';
    EDIT CONFIGURATION SET PROTECTION MODE AS MaxPerformance;

  • MS SQL Server | DBA Genesis Support

    MS SQL Server to Oracle Replication Using Golden Gate

    In this project, we will perform single table DML replication from MS SQL Server (on Windows 10) to an Oracle database (on Linux) using...

  • Oracle External Tables

    Access external data sources using Oracle external tables. Oracle's SQL*Loader engine allows you to query external tables that are stored in flat files. When I say flat files, I literally mean files stored at the OS level: yes, you can query a flat file that lives outside of the database. The ORACLE_LOADER driver is used to query external tables stored in any format in an external file.

    Note: "any format" means all the formats that SQL*Loader can read.
    Note: you can only query an external table; no DML operations are allowed.

    Create flat files

    For our example, let us create one file and save it in .txt format on the database server. Create a my_regions.txt file and copy-paste the below contents:

    1,US
    2,UK
    3,AUS
    4,IND
    5,UAE

    Save and close the file.
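    The flat file can also be created with a heredoc instead of an editor. A small sketch of our own; it writes to /tmp, whereas the article assumes /home/oracle (the directory your Oracle directory object points at):

```shell
#!/bin/sh
# Create the sample my_regions.txt flat file with a quoted heredoc,
# which writes the lines exactly as typed (no variable expansion).
cat > /tmp/my_regions.txt <<'EOF'
1,US
2,UK
3,AUS
4,IND
5,UAE
EOF
wc -l /tmp/my_regions.txt   # 5 rows
```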
    Create directory object

    We need to create a directory object inside the Oracle database that points to the OS location of the my_regions.txt file:

    CREATE OR REPLACE DIRECTORY my_ext_tab AS '/home/oracle';

    Create external table

    Inside the Oracle database, we need to create an external table that will query data from the above file:

    CREATE TABLE my_regions
    (
      region_id   number(1),
      region_name varchar2(20)
    )
    ORGANIZATION EXTERNAL
    (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY my_ext_tab
      ACCESS PARAMETERS
      (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
        MISSING FIELD VALUES ARE NULL
        (
          region_id   CHAR(1),
          region_name CHAR(5)
        )
      )
      LOCATION ('my_regions.txt')
    )
    PARALLEL 5
    REJECT LIMIT UNLIMITED;

    Query external table

    Once the external table is created, you can query it like a normal table:
    SELECT * FROM my_regions;

    Create view on external table

    Once you are able to query the external table, you can even create a view on it:
    CREATE OR REPLACE VIEW my_regions_view AS SELECT * FROM my_regions WHERE region_name LIKE 'U%';
    SELECT * FROM my_regions_view;

    Load operation log

    By default, a log of load operations is created in the same directory as the load files.
    In the same location where you saved the my_regions.txt file, there will be a log file created for the load operation:

    LOG file opened at 01/02/16 07:17:59

    Field Definitions for table MY_REGIONS
      Record format DELIMITED BY NEWLINE
      Data in file has same endianness as the platform
      Rows with all null fields are accepted

      Fields in Data Source:
        REGION_ID    CHAR (1)  Terminated by ","  Trim whitespace same as SQL Loader
        REGION_NAME  CHAR (5)  Terminated by ","  Trim whitespace same as SQL Loader

    A similar entry (with a new timestamp) is appended to the log each time the external table is queried.

  • Database Normalization

    Master database normalization for efficient and organized data storage. Database normalization is the process of refining data in accordance with a series of normal forms. This is done to reduce data redundancy and improve data integrity. The process divides large tables into small tables and links them using relationships. The concept of normalization was invented by Edgar Codd, who introduced the First Normal Form before moving ahead with other normal forms such as the Second and Third Normal Forms.

    There are further enhancements to the theory of normalization and it is still being developed. There is even a 6th Normal Form, but in most practical scenarios normalization achieves its best shape in the 3rd Normal Form.

    Key terms
    - Column - Attribute
    - Row - Tuple
    - Table - Relation
    - Entity - Any real-world object that makes sense

    Step by Step Normalization Example

    Let us look at a library table (Books_Main_Table) that maintains all the books they rent out in one single table. Now let us push this data through the various normal forms and see how we can refine it.

    1NF - First Normal Form

    The rules of the first normal form are:
    - Each table cell should contain a single/atomic value
    - Every record in the table must be unique

    As per the 1NF rules, our Books_Main_Table looks good. Before we proceed with 2NF and 3NF, we need to understand key columns.

    Key / non-key columns

    Any column (or group of columns) in a table which can uniquely identify each row is known as a key column. Examples:
    - Phone number
    - Email id
    - Student roll number
    - Employee id

    These are columns that will always remain unique for every record inside the table. Such columns are known as key columns; any column apart from the key columns is known as a non-key column.

    Primary key

    A primary key is a single column value which uniquely identifies each record in a table. In an RDBMS, a primary key must satisfy the below:
    - A primary key must be unique
    - A primary key cannot be null
    - Every record must have a primary key value

    Composite key

    Sometimes it's hard to identify unique records with one single column. In such cases, we can have two or more columns that together uniquely identify each record in a table. Such columns are known as a composite key. For example:
    - Name + Address
    - First Name + DOB + Father Name

    Now that we know about key / non-key columns, let us move to 2NF.

    2NF - Second Normal Form

    The rules of the second normal form are:
    - The table must be in 1NF
    - Every non-key attribute must be fully dependent on the key attributes

    We see that our Books_Main_Table does not have any primary key; in such cases we have to introduce a new key column like Membership ID. To bring Books_Main_Table into 2NF, we need to see how the columns are closely related:
    - A Membership ID has a salutation, name, and address
    - A Membership ID has books issued in its name

    With this logic in mind, we divide our Books_Main_Table into two tables. Membership ID now appears in both tables, but in Membership_Details_Table it is a primary key column, while in Books_Issued_Table it is a non-key column.

    Foreign Key

    So far we have seen the primary key and the composite key. A foreign key refers to a primary key column of another table. This helps in connecting two tables (and defines a relation between them). A foreign key must satisfy the below:
    - The foreign key column name can be different from the primary key column name
    - Unlike a primary key, it need not be unique (see Books_Issued_Table above)
    - A foreign key column can be null, even though a primary key column cannot

    Reason for a foreign key: when a user tries to insert a record into Books_Issued_Table and no such Membership ID exists in Membership_Details_Table, the insert is rejected and the database throws an error. This is how we maintain data integrity in an RDBMS.

    3NF - Third Normal Form

    The rules of the third normal form are:
    - Data must be in 2NF
    - No transitive functional dependencies

    What is a transitive dependency? In simple terms, if changing a non-key column causes any other non-key column to change, it is called a transitive dependency. In our example, if we change the Full Name of the customer, it might change the Salutation.

    Final 3NF Tables

    To move the Membership_Details_Table into 3NF, we further divide it, splitting the Salutation out into a new Salutation_Table.

    Assignment

    If you look at the Books_Issued_Table, it still does not have a key column. What do you think should be the key column for the Books_Issued_Table? Or do we need to introduce a new column?

    Further Read
    - Boyce Codd Normal Form (BCNF)
    - Fifth Normal Form (5NF)
    - Sixth Normal Form (6NF)

  • Linux Project - Monitor Server Disk Space

    Build a script to monitor and report server disk usage. In this Linux project we will write a shell script that monitors disk space and appends the info to a file every 20 minutes.

    Setting up Oracle Linux 7 on Virtual Box

    Follow these detailed steps for the exact process we consistently use to create a virtual machine (VM) and practice Oracle on Oracle VirtualBox: Step-by-Step Guide: Setting Up Oracle Linux 7 on Oracle VirtualBox.

    How to Find Disk Free Space?

    We use the below command to find the free disk space on Linux:
    df -h

    Script to Log Disk Space

    Creating a script to log disk space simplifies monitoring and helps manage storage effectively, ensuring timely updates on disk utilization.

    Create a scripts folder to store and organise various scripts:
    mkdir /tmp/scripts

    Create the storage space log; this file will store the storage information:
    touch /tmp/scripts/storage_space.log

    Set permissions for the log file:
    chmod 777 /tmp/scripts/storage_space.log

    Save the below shell script as /tmp/scripts/get_storage.sh. It executes df -h to retrieve free disk space information and appends the results to storage_space.log with a timestamp for each entry:

    #!/bin/bash
    # To find the free disk space and save it in a log file
    echo "********************************************" >> /tmp/scripts/storage_space.log
    date >> /tmp/scripts/storage_space.log
    echo "********************************************" >> /tmp/scripts/storage_space.log
    df -h >> /tmp/scripts/storage_space.log
    # To insert a space between each log entry
    echo >> /tmp/scripts/storage_space.log

    Set permissions for the shell script:
    chmod 777 /tmp/scripts/get_storage.sh

    Schedule Script via crontab

    Create a crontab entry to automate script execution at 20-minute intervals:
    crontab -e
    */20 * * * * /tmp/scripts/get_storage.sh
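    A possible extension of the logging script (our own addition, not from the article): alert when any filesystem crosses a usage threshold. df -P prints exactly one line per filesystem, which makes it safe to parse in scripts; the disk_alert function name and the 90% default are our own choices:

```shell
#!/bin/sh
# disk_alert: print a WARNING line for every filesystem at or above a
# usage threshold (default 90%). Parses the portable output of df -P.
disk_alert() {
    limit="${1:-90}"
    df -P | awk -v limit="$limit" 'NR > 1 {
        use = $5              # capacity column, e.g. "42%"
        sub(/%/, "", use)     # strip the percent sign
        if (use + 0 >= limit)
            printf "WARNING: %s is %s%% full (mounted on %s)\n", $1, use, $6
    }'
}

disk_alert 90   # report any filesystem 90% full or more
```

    The same crontab technique shown above could run this instead of (or in addition to) the logger, mailing or logging only when a warning line is produced.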

  • ASM Related Background Process

    Understand the background processes supporting Oracle ASM. The Oracle ASM instance is built on the same architecture as an Oracle database instance, and most of the ASM background processes are the same as the database background processes.

    ASM Background Process in DB Instance

    In an Oracle database that uses ASM disks, two new background processes exist:
    - RBAL
    - ASMB

    ASMB performs the communication between the database and the ASM instance. RBAL performs the opening and closing of the disks in the disk groups on behalf of the Oracle database. This RBAL is the same process as in the ASM instance, but here it performs a different function.

    To find the ASM background processes in an Oracle DB instance, connect to the Oracle database and issue the below query:
    SQL> select sid, serial#, process, name, description from v$session join v$bgprocess using(paddr);

    Note the ASMB and RBAL processes in the above list. You can even query using the process id at OS level (ps -ef|grep ).

    ASM Background Process in ASM Instance

    Oracle introduced two new background processes, first in the 10g version:
    - RBAL
    - ARBn

    RBAL performs rebalancing when a disk is added or removed. ARBn performs the actual extent movement between the disks in the diskgroup.

    To find the ASM background processes inside the ASM instance, connect to the ASM instance and issue the below query:
    sqlplus / as sysasm
    SQL> select sid, serial#, process, name, description from v$session join v$bgprocess using(paddr);

    Note: the ARBn process is started only when a rebalance operation is happening inside a diskgroup.
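    To spot these processes at the OS level, the ps listing can be filtered for the ASM background process names. A small helper of our own (the find_asm_bg name and the canned sample lines are illustrative); it reads ps-style lines on stdin so it can be tried without a running ASM instance:

```shell
#!/bin/sh
# find_asm_bg: filter process-listing lines for ASM background processes
# (rbal, asmb, arb0..arb9). Real usage on an ASM host: ps -ef | find_asm_bg
find_asm_bg() {
    grep -E 'asm_(rbal|asmb|arb[0-9])'
}

# Canned example of what ps -ef lines might look like on an ASM host
printf 'oracle 2301 1 0 asm_rbal_+ASM\noracle 2305 1 0 asm_asmb_+ASM\noracle 9999 1 0 ora_pmon_PROD\n' | find_asm_bg
```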

  • Shell Script to Accept User Input

    Learn to take user input dynamically in shell scripts. A Linux bash script allows you to accept user input and read what the user types on the screen. This is achieved by the Linux read command within the script. In the below script, we ask the user for their favourite colour and echo back the input they typed on the screen.

    vi user_input_fav_col.sh

    #!/bin/bash
    echo -n "What is your favourite colour : "
    read answer
    echo "oh! you like $answer!"

    More ways to use user input in bash scripts:
    - Ask the user to input their name and output "Welcome " + username
    - Ask the user to input their first & last name and output "Hello " + first + last name
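    A possible answer to the first exercise above (our own sketch): read a name and print a welcome line. Wrapping it in a function lets you feed input from a pipe as easily as from the keyboard:

```shell
#!/bin/sh
# greet: prompt for a name on stdin and print a welcome line.
greet() {
    printf "Enter your name: "
    read name
    echo "Welcome $name"
}

# Interactive use: just call `greet` and type a name. Piped use:
echo "Alice" | greet
```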
