
Thursday, November 24, 2011

Applying an ODM Model to new data in Oracle – Part 2

This is the second of a two part blog posting on using an Oracle Data Mining model to apply it to, or score, new data. The first part looked at how you can score data using the DBMS_DATA_MINING.APPLY procedure in a batch type process.

This second part looks at how you can apply or score the new data, using our ODM model, in a real-time mode, scoring a single record at a time.


The PREDICTION SQL function can be used in many different ways. The following examples illustrate the main ways of using it. Again we will be using the same data set, with the data in our NEW_DATA_TO_SCORE table.

The syntax of the function is

PREDICTION (model_name USING attribute_list)

Example 1 – Real-time Prediction Calculation

In this example we will select a record and calculate its predicted value. The function will return the predicted value that has the highest probability.

SELECT cust_id, prediction(clas_decision_tree USING *)
FROM   new_data_to_score
WHERE  cust_id = 103001;

   CUST_ID PREDICTION(CLAS_DECISION_TREEUSING*)
---------- ------------------------------------
    103001                                    0

So the predicted class value is 0 (zero), and this has a higher probability than a class value of 1.

We can compare and check this result against the result that was produced using the DBMS_DATA_MINING.APPLY procedure (see previous blog post).

SQL> select * from new_data_scored
  2  where cust_id = 103001;

   CUST_ID PREDICTION PROBABILITY
---------- ---------- -----------
    103001          0           1
    103001          1           0

Here we can see that the class value of 0 has a probability of 1 (100%) and the class value of 1 has a probability of 0 (0%).

Example 2 – Selecting top 10 Customers with Class value of 1

For this we are selecting from our NEW_DATA_TO_SCORE table. We want to find the records that have a predicted class value of 1, and we only want to return the first 10 of these (a variation that picks the 10 with the highest probability is shown below the query).

SELECT cust_id
FROM   new_data_to_score
WHERE  PREDICTION(clas_decision_tree USING *) = 1
AND    rownum <= 10;
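
Note that ROWNUM on its own simply returns the first 10 matching records. If you specifically want the 10 customers with the strongest predictions, one possible variation (not part of the original example) is to order by the PREDICTION_PROBABILITY function, which is introduced in Example 3, before applying the ROWNUM filter:

SELECT cust_id
FROM   (SELECT cust_id
        FROM   new_data_to_score
        WHERE  PREDICTION(clas_decision_tree USING *) = 1
        ORDER  BY PREDICTION_PROBABILITY(clas_decision_tree USING *) DESC)
WHERE  rownum <= 10;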


Example 3 – Selecting records based on Prediction value and Probability

For this example we want to find out what countries the customers come from, where the prediction is 0 (won't take up the offer) and the probability of this occurring is 1 (100%). This example introduces the PREDICTION_PROBABILITY function, which allows us to use the probability strength of the prediction.

select country_name, count(*)
from   new_data_to_score
where  prediction(clas_decision_tree using *) = 0
and    prediction_probability (clas_decision_tree using *) = 1
group by country_name
order by count(*) asc;

COUNTRY_NAME                               COUNT(*)
---------------------------------------- ----------
Brazil                                            1
China                                             1
Saudi Arabia                                      1
Australia                                         1
Turkey                                            1
New Zealand                                       1
Italy                                             5
Argentina                                        12
United States of America                        293

The examples that I have given above are only the basic examples of using the PREDICTION function. There are a number of other uses that include the PREDICTION_COST, PREDICTION_SET and PREDICTION_DETAILS functions. Examples of these will be covered in a later blog post.

Monday, November 21, 2011

Applying an ODM Model to new data in Oracle – Part 1

This is the first of a two part blog posting on using an Oracle Data Mining model to apply it to, or score, new data. This first part looks at how you can score data using the DBMS_DATA_MINING.APPLY procedure in a batch type process.

The second part will be posted in a couple of days and will look at how you can apply or score the new data, using our ODM model, in a real-time mode, scoring a single record at a time.


Instead of applying the model to data as it is captured, you may need to apply a model to a large number of records at the same time. To perform this bulk processing we can use the APPLY procedure that is part of the DBMS_DATA_MINING package. The format of the procedure is

DBMS_DATA_MINING.APPLY (
      model_name           IN VARCHAR2,
      data_table_name      IN VARCHAR2,
      case_id_column_name  IN VARCHAR2,
      result_table_name    IN VARCHAR2,
      data_schema_name     IN VARCHAR2 DEFAULT NULL);

Parameter Name        Description
Model_Name            The name of your data mining model.
Data_Table_Name       The source data for the model. This can be a table or a view.
Case_Id_Column_Name   The attribute that gives uniqueness for each record. This could be the Primary Key, or if the PK contains more than one column then a new attribute is needed.
Result_Table_Name     The name of the table where the results will be stored.
Data_Schema_Name      The schema name for the source data.

The main condition for applying the model is that the source table (DATA_TABLE_NAME) needs to have the same structure as the table that was used when creating the model.

Also the data needs to be pre-processed in the same way as the training data, to ensure that the data in each attribute/feature has the same formatting.

When you use the APPLY procedure it does not update the original data/table, but creates a new table (RESULT_TABLE_NAME) with a structure that is dependent on what the underlying DM algorithm is. The following gives the Result Table description for the main DM algorithms:

For Classification algorithms

case_id      VARCHAR2/NUMBER
prediction   NUMBER / VARCHAR2  -- depending on the target data type
probability  NUMBER

For Regression

case_id     VARCHAR2/NUMBER
prediction  NUMBER

For Clustering

case_id      VARCHAR2/NUMBER
cluster_id   NUMBER
probability  NUMBER

Example / Case Study

My last few blog posts on ODM have covered most of the APIs for building and transferring models. We will be using the same data set in these posts. The following code uses the same data and models to illustrate how we can use the DBMS_DATA_MINING.APPLY procedure to perform a bulk scoring of data.

In my previous post we used the EXPORT and IMPORT procedures to move a model from one database (Test) to another database (Production). The following example uses the model in Production to score new data. I have set up a sample of data (NEW_DATA_TO_SCORE) from the SH schema, using the same set of attributes as was used to create the model (MINING_DATA_BUILD_V). This data set contains 1,500 records.

Name                                 Null?    Type
------------------------------------ -------- ------------
CUST_ID                              NOT NULL NUMBER
CUST_GENDER                          NOT NULL CHAR(1)
AGE                                           NUMBER
CUST_MARITAL_STATUS                           VARCHAR2(20)
COUNTRY_NAME                         NOT NULL VARCHAR2(40)
CUST_INCOME_LEVEL                             VARCHAR2(30)
EDUCATION                                     VARCHAR2(21)
OCCUPATION                                    VARCHAR2(21)
HOUSEHOLD_SIZE                                VARCHAR2(21)
YRS_RESIDENCE                                 NUMBER
AFFINITY_CARD                                 NUMBER(10)
BULK_PACK_DISKETTES                           NUMBER(10)
FLAT_PANEL_MONITOR                            NUMBER(10)
HOME_THEATER_PACKAGE                          NUMBER(10)
BOOKKEEPING_APPLICATION                       NUMBER(10)
PRINTER_SUPPLIES                              NUMBER(10)
Y_BOX_GAMES                                   NUMBER(10)
OS_DOC_SET_KANJI                              NUMBER(10)

SQL> select count(*) from new_data_to_score;

  COUNT(*)
----------
      1500

The next step is to run the DBMS_DATA_MINING.APPLY procedure. The parameters that we need to feed into this procedure are

Parameter Name        Value
Model_Name            CLAS_DECISION_TREE   -- we imported this model from our test database
Data_Table_Name       NEW_DATA_TO_SCORE    -- the table containing the new data to be scored
Case_Id_Column_Name   CUST_ID              -- this is the PK
Result_Table_Name     NEW_DATA_SCORED      -- new table that will be created, containing the Prediction and Probability

The NEW_DATA_SCORED table will contain 2 records for each record in the source data (NEW_DATA_TO_SCORE). For each record in NEW_DATA_TO_SCORE we will have one record for each of the target values (0 or 1), along with the probability for each target value. So for our NEW_DATA_TO_SCORE table, which contains 1,500 records, we will get 3,000 records in the NEW_DATA_SCORED table.

To apply the model to the new data we run:

BEGIN
  dbms_data_mining.apply(
    model_name          => 'CLAS_DECISION_TREE',
    data_table_name     => 'NEW_DATA_TO_SCORE',
    case_id_column_name => 'CUST_ID',
    result_table_name   => 'NEW_DATA_SCORED');
END;
/

This takes 1 second to run on my laptop, so this apply/scoring of new data is really quick.

The new table NEW_DATA_SCORED has the following description

Name                            Null?    Type
------------------------------- -------- -------
CUST_ID                         NOT NULL NUMBER
PREDICTION                               NUMBER
PROBABILITY                              NUMBER

SQL> select count(*) from NEW_DATA_SCORED;

  COUNT(*)
----------
      3000

We can now look at the prediction and the probabilities

SQL> select * from NEW_DATA_SCORED where rownum <=12;

   CUST_ID PREDICTION PROBABILITY
---------- ---------- -----------
    103001          0           1
    103001          1           0
    103002          0  .956521739
    103002          1  .043478261
    103003          0  .673387097
    103003          1  .326612903
    103004          0  .673387097
    103004          1  .326612903
    103005          1  .767241379
    103005          0  .232758621
    103006          0           1
    103006          1           0

12 rows selected.
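
Since each customer appears twice in NEW_DATA_SCORED, it can be handy to reduce this to one row per customer, keeping the target value with the highest probability. A minimal sketch of one way of doing this (not part of the original post) is:

select cust_id, prediction, probability
from   (select cust_id, prediction, probability,
               row_number() over (partition by cust_id order by probability desc) as rn
        from   new_data_scored)
where  rn = 1
order  by cust_id;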

Wednesday, November 16, 2011

My UKOUG Conference 2011 Schedule

UK Oracle User Group Conference 2011

The UKOUG conference will be in a couple of weeks. I have my flights and hotel booked, and I’ve just finished selecting my agenda of presentations. I really enjoy this conference as it serves many purposes including, finding new directions Oracle is taking, new product features, some upskilling/training, confirming that the approaches that I have been using on projects are valid, getting lots of hints and tips, etc.

One thing that I always try to do, and I strongly encourage everyone (in particular first timers) to do, is to go to 1 session every day that is on a topic or product that you know (nearly) nothing about. You might discover that you know more than you think, or you may learn something new that can be fed into some project on your return or over the next 12 months.

My agenda for the conference currently looks very busy, and in between these sessions there is the exhibition hall, meetings with old and new friends, meetings with product/business unit managers, asking people to write articles for Oracle Scene, checking out possible presenters to come to Ireland for our conference in March 2012, etc. Then there is my presentation on the Wednesday afternoon.


I’ll miss most of the Oak Table event on the Sunday but I hope to make it in time for

16:40-17:30 : Performance & High Availability Panel Session


9:20-9:50 : Keynote by Mark Sunday, Oracle (H1)
10:00-10:45 : The Future of BI & Oracle roadmap, Mike Durran, Oracle (H5)
11:05-12:05 : Implementing Interactive Maps with OBIEE 11g, Antony Heljula, Peak Indicators (H10A)
12:15-13:15 : OBI 11g Analysis & Reporting New Features, Mark Rittman (8A)
14:30-15:15 : Master Data Management – What is it & how to make it work – Robert Barnett, Hub Solutions Designs (H10A)
16:20-17:35 : Dummies Guide to Oracle ADF, Grant Ronald, Oracle, (Media Suite)
16:35-18:30 : The DB Time Performance Method, Graham Wood, Oracle (H8A)
17:45-18:30 : Performance & Stability with Oracle 11g SQL Plan Management, Doug Burns (H1)
17:45-18:30 : Experiences in Virtualization, Michael Doherty (H10A)
19:45-20:45 : Exhibition Welcome Drinks
20:45-Late : Focus Pubs


9:00-11:00 : Next Generation BI Architectures Masterclass, Andrew Bond, Oracle (H10B)
10:10-10:55 : Who’s afraid of Analytic Functions, Alex Nuijten, Maxima (H5)
11:15-12:15 : Analysing Your Data with Analytic Functions, Carl Dudley, (H9)
11:25-13:25 : Using a Physical Standby to Minimize Downtime for DB Release or Server Change, Michael Abbey, Pythian (Media Suite)
14:40-15:25 : How not to make the headlines, Mark Clewett, Hitachi (H10A)
14:40-15:25 : APEX Back to Basics, Paul Broughton, APEX Evangelists (H9)
15:35-16:20 : Can People be identified in the database, Pete Finnigan (H1)
16:40-18:35 : OTN Hands-on Workshop, Todd Trichler, Oracle (H8A)
17:50-18:35 : SQL Developer Data Modeler as a replacement for Oracle Designer, Paul Bainbridge, Fujitsu, (H8B)
18:45-19:45 : Keynote : Future of Enterprise Software and Oracle, Ray Wang, Constellation Research (H1)
20:00-Late : Evening Social & Networking


9:00-10:00 : Oracle 11g Database: Automatic Parallelism, Joel Goodman, Oracle (H9)
9:00-10:00 : Big Data: Learn how to predict the future, Keith Laker, Oracle (H8B)
10:10-10:55 : All about indexes – What to index, when and how, Mark Bobak, ProQuest (H5)
11:20-12:30 : Using Application Express to Build Highly Accessible Products, Anthony Rayner, Oracle (H8A)
12:30-13:30 : Practical uses for APEX Dictionary, John Scott, APEX Evangelists (H8A)
15:20-16:05 : How to deploy your Oracle Data Miner 11g R2 Workflows in a Live Environment – Me  (H7B)
16:15-17:00 : Next Generation Data Warehousing, Kulvinder Hari, Oracle (H8A)
16:15-17:00 : Beyond RTFM and WTF Message Moments. Introducing a new standard: Oracle Fusion Applications User Assistance, Ultan O’Broin (Executive Room 7)

I know I have some overlapping sessions, but I will decide on the day which of these I will attend.

As you can see, I will be following the BI stream mainly, with a few sessions from the Database and Development streams too.

This year there is a smart phone app to help us organise our agenda, meetings, etc. The only downside is that the app does not import the agenda that I created on the website, so I have to do it again. Maybe for next year they will have an import agenda feature.

New UKOUG mobile app – Launched October 2011

Wednesday, November 9, 2011

ODM–PL/SQL API for Exporting & Importing Models

In a previous blog post I talked about how you can take a copy of a workflow developed in Oracle Data Miner, and load it into a new schema.
When your data mining project gets to a mature stage and you need to productionalise the data mining process and model updates, you will need to use a different set of tools.

As you gather more and more data and cases, you will be updating/refreshing your models to reflect this new data. The new updated data mining model needs to be moved from the development/test environment to the production environment. As with all things in IT, we would like to automate this updating of the model in production.
There are a number of database features and packages that we can use to automate the update and it involves the setting up of some scripts on the development/test database and also on the production database.

These steps include:

  • Creation of a directory on the development/test database
  • Exporting of the updated Data Mining model
  • Copying of the exported Data Mining model to the production server
  • Removing the existing Data Mining model from production
  • Importing of the new Data Mining model.
  • Rename the imported model to the standard name

The DBMS_DATA_MINING PL/SQL package has 2 procedures that allow us to export a model and to import a model. These procedures are an API to Oracle Data Pump. The procedure to export a model is DBMS_DATA_MINING.EXPORT_MODEL and the procedure to import a model is DBMS_DATA_MINING.IMPORT_MODEL. The parameters to these procedures are what you would expect to use if you were using Data Pump directly, but they have been tailored for the data mining models.

Lets start with listing the models that we have in our development/test schema:

SQL> connect dmuser2/dmuser2
SQL> SELECT model_name FROM user_mining_models;


Create/define the directory on the server where the models will be exported to.

CREATE OR REPLACE DIRECTORY DataMiningDir AS 'c:\app\Data_Mining_Exports';

The schema you are using will need to have the CREATE ANY DIRECTORY privilege.

Now we can export our model. In this example we are going to export the Decision Tree model (CLAS_DT_1_6).

The procedure has the following structure:

DBMS_DATA_MINING.EXPORT_MODEL (
     filename     IN VARCHAR2,
     directory    IN VARCHAR2,
     model_filter IN VARCHAR2 DEFAULT NULL,
     filesize     IN VARCHAR2 DEFAULT NULL,
     operation    IN VARCHAR2 DEFAULT NULL,
     remote_link  IN VARCHAR2 DEFAULT NULL,
     jobname      IN VARCHAR2 DEFAULT NULL);

If we wanted to export all the models into a file called Exported_DM_Models, we would run:

DBMS_DATA_MINING.EXPORT_MODEL('Exported_DM_Models', 'DataMiningDir');

If we just wanted to export our Decision Tree model to file Exported_CLASS_DT_Model, we would run:

DBMS_DATA_MINING.EXPORT_MODEL('Exported_CLASS_DT_Model', 'DataMiningDir', 'name in (''CLAS_DT_1_6'')');

Before you can load the new updated data mining model into your production database, we need to drop the existing model. Before we do this we need to ensure that it is done when the model is not in use, so it would be advisable to schedule the dropping of the model during a quiet time, like before or after the nightly backups/processes.


Warning : When importing the data mining model, you need to import into a tablespace that has the same name as the tablespace in the development/test database.  If the USERS tablespace is used in the development/test database, then the model will be imported into the USERS tablespace in the production database.

Hint : Create a DATAMINING tablespace in your development/test and production databases. This tablespace can be used solely for data mining purposes.

To import the decision tree model we exported previously, we would run

DBMS_DATA_MINING.IMPORT_MODEL('Exported_CLASS_DT_Model', 'DataMiningDir', 'name = ''CLAS_DT_1_6''', 'IMPORT', null, null, 'dmuser2:dmuser3');

We now have the new updated data mining model loaded into the production database.

The final step before we can start using the new updated model in our production database is to rename the imported model to the standard name that is being used in the production database.
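
A minimal sketch of what this final step might look like, using the DBMS_DATA_MINING.DROP_MODEL and RENAME_MODEL procedures (the production model name CLAS_DECISION_TREE is only an illustrative assumption):

BEGIN
   -- remove the old production model (if it has not already been dropped)
   DBMS_DATA_MINING.DROP_MODEL('CLAS_DECISION_TREE');
   -- give the newly imported model the standard production name
   DBMS_DATA_MINING.RENAME_MODEL('CLAS_DT_1_6', 'CLAS_DECISION_TREE');
END;
/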


Scheduling of these steps
We can wrap most of this up into stored procedures and schedule it to run on a semi-regular basis, using the DBMS_JOB package. The following example schedules a procedure that controls the importing, dropping and renaming of the models.

DBMS_JOB.SUBMIT(jobnum, 'import_new_data_mining_model;', trunc(sysdate), 'ADD_MONTHS(TRUNC(SYSDATE), 1)');

This submits a job that runs the procedure to import the new data mining models immediately, and then runs it every month (jobnum is an OUT parameter that returns the job number assigned to the job).

Thursday, November 3, 2011

ODM 11.2 Data Dictionary Views.

The Oracle 11.2 database contains the following Oracle Data Mining views. These allow you to query the database for the metadata relating to what Data Mining models you have, what their configurations are and what data is involved.


ALL_MINING_MODELS

Describes the high level information about the data mining models in the database. Related views include DBA_MINING_MODELS and USER_MINING_MODELS.

Attribute        Data Type          Description
OWNER            VARCHAR2(30) NN    Owner of the mining model
MODEL_NAME       VARCHAR2(30) NN    Name of the mining model
MINING_FUNCTION  VARCHAR2(30)       Data mining function used by the model
ALGORITHM        VARCHAR2(30)       Algorithm used by the model
CREATION_DATE    DATE NN            Date the model was created
BUILD_DURATION   NUMBER             Time in seconds for the model build process
MODEL_SIZE       NUMBER             Size of the model in MBytes
COMMENTS         VARCHAR2(4000)
Let's query my DMUSER2 data mining schema. This was created during a previous post, where we exported some ODM models from one schema and loaded them into the DMUSER2 schema.

SELECT model_name, mining_function, algorithm, build_duration, model_size
FROM   all_mining_models;

MODEL_NAME     MINING_FUNCTION  ALGORITHM                  BUILD_DURATION MODEL_SIZE
-------------  ---------------- -------------------------- -------------- ----------
CLAS_SVM_1_6   CLASSIFICATION    SUPPORT_VECTOR_MACHINES                     3      .1515
CLAS_DT_1_6    CLASSIFICATION    DECISION_TREE                               2      .0842
CLAS_GLM_1_6   CLASSIFICATION    GENERALIZED_LINEAR_MODEL                    3      .0877
CLAS_NB_1_6    CLASSIFICATION    NAIVE_BAYES                                 2      .0459


ALL_MINING_MODEL_ATTRIBUTES

Describes the attributes of the data mining models. Related views are DBA_MINING_MODEL_ATTRIBUTES and USER_MINING_MODEL_ATTRIBUTES.

Attribute        Data Type          Description
OWNER            VARCHAR2(30) NN    Owner of the mining model
MODEL_NAME       VARCHAR2(30) NN    Name of the mining model
ATTRIBUTE_NAME   VARCHAR2(30) NN    Name of the attribute
ATTRIBUTE_TYPE   VARCHAR2(11)       Logical type of the attribute:
                                    NUMERICAL - numeric data
                                    CATEGORICAL - character data
DATA_TYPE        VARCHAR2(12)       Data type of the attribute
DATA_LENGTH      NUMBER             Length of the data type
DATA_PRECISION   NUMBER             Precision of a fixed point number
DATA_SCALE       NUMBER             Scale of the fixed point number
USAGE_TYPE       VARCHAR2(8)        Indicates whether the attribute was used to create the model (ACTIVE) or not (INACTIVE)
TARGET           VARCHAR2(3)        Indicates whether the attribute is the target

If we take one of our data mining models that was listed above, we can select what attributes are used by that model:

SELECT attribute_name, attribute_type, usage_type, target
FROM   all_mining_model_attributes
WHERE  model_name = 'CLAS_DT_1_6';

ATTRIBUTE_NAME                 ATTRIBUTE_T USAGE_TY TAR
------------------------------ ----------- -------- ---
AGE                            NUMERICAL   ACTIVE   NO
Y_BOX_GAMES                    NUMERICAL   ACTIVE   NO

The first thing to note here is that all the attributes are listed as ACTIVE. This is the default and will be the case for all attributes for all the algorithms, so we can ignore this attribute in our queries, but it is good to check just in case.

The second thing to note is that the AFFINITY_CARD attribute (the last row in the full listing) has a TARGET value of YES. This is the target attribute used by the classification algorithm.


ALL_MINING_MODEL_SETTINGS

Describes the settings of the data mining models. The settings associated with a model are algorithm dependent. The setting values can be provided as input to the model build process. Alternatively, a separate settings table can be used. If no setting values are defined or provided, then the algorithm will use its default settings.

Attribute      Data Type          Description
OWNER          VARCHAR2(30) NN    Owner of the mining model
MODEL_NAME     VARCHAR2(30) NN    Name of the mining model
SETTING_NAME   VARCHAR2(30) NN    Name of the setting
SETTING_VALUE  VARCHAR2(4000)     Value of the setting
SETTING_TYPE   VARCHAR2(7)        Indicates whether the default value (DEFAULT) or a user specified value (INPUT) is used by the model

Let's take our previous example of the CLAS_DT_1_6 model and query the database to see what the settings are.

column setting_value format a30
select setting_name, setting_value, setting_type
from   all_mining_model_settings
where model_name = 'CLAS_DT_1_6';

SETTING_NAME            SETTING_VALUE                SETTING
----------------------- ---------------------------- -------
ALGO_NAME               ALGO_DECISION_TREE           INPUT
PREP_AUTO               ON                           INPUT
TREE_TERM_MINPCT_NODE   .05                          INPUT
TREE_TERM_MINREC_SPLIT  20                           INPUT
TREE_TERM_MINPCT_SPLIT  .1                           INPUT
TREE_TERM_MAX_DEPTH     7                            INPUT
TREE_TERM_MINREC_NODE   10                           INPUT

Monday, October 31, 2011

ODM 11.2–Data Mining PL/SQL Packages

The Oracle 11.2 database contains 3 PL/SQL packages that allow you to perform all (well almost all) of your data mining functions.

So instead of using the Oracle Data Miner tool, you can write some PL/SQL code that will allow you to do the same things.

Before you can start using these PL/SQL packages you need to ensure that the schema that you are going to use has been setup with the following:

  • Create a schema or use an existing one
  • Grant the schema all the data mining privileges: see my earlier posting on how to setup an Oracle schema for data mining – Click here and YouTube video
  • Grant all necessary privileges to the data that you will be using for data mining

The first PL/SQL package that you will use is DBMS_DATA_MINING_TRANSFORM. This PL/SQL package allows you to transform the data to make it suitable for data mining. There are a number of routines in this package that allow you to transform the data, but depending on the data you may need to write your own code to perform the transformations. When you apply your data model to the test or apply data sets, ODM will automatically take the transformations defined using this package and apply them to the new data sets.
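
As a flavour of this package, the following is a minimal sketch (not from the original post) of min-max normalising the numeric attributes of the demo build data. It assumes the MINING_DATA_BUILD_V demo view; the NORM_DEFS table and the normalised view name are illustrative choices only.

BEGIN
   -- table to hold the normalisation definitions
   dbms_data_mining_transform.create_norm_lin('norm_defs');

   -- work out min-max normalisation values for the numeric columns,
   -- excluding the case id and the target
   dbms_data_mining_transform.insert_norm_lin_minmax(
      norm_table_name => 'norm_defs',
      data_table_name => 'MINING_DATA_BUILD_V',
      exclude_list    => dbms_data_mining_transform.column_list('CUST_ID', 'AFFINITY_CARD'),
      round_num       => 6);

   -- create a view that presents the normalised data
   dbms_data_mining_transform.xform_norm_lin(
      norm_table_name => 'norm_defs',
      data_table_name => 'MINING_DATA_BUILD_V',
      xform_view_name => 'mining_data_build_norm_v');
END;
/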

The second PL/SQL package is DBMS_DATA_MINING. This is the main data mining PL/SQL package. It contains functions to allow you to:

  • To create a Model (see the sketch after this list)
  • Describe the Model
  • Exporting and importing of Models
  • Computing costs and text metrics for classification Models
  • Applying the Model to new data
  • Administration of Models, like dropping, renaming, etc
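
To give a flavour of the first of these, here is a hedged sketch of building a Decision Tree classification model with DBMS_DATA_MINING.CREATE_MODEL. It assumes the MINING_DATA_BUILD_V demo view; the settings table name and model name are illustrative only.

-- a settings table that tells ODM which algorithm to use and to auto-prepare the data
CREATE TABLE dt_settings (
   setting_name  VARCHAR2(30),
   setting_value VARCHAR2(4000));

INSERT INTO dt_settings VALUES (dbms_data_mining.algo_name, dbms_data_mining.algo_decision_tree);
INSERT INTO dt_settings VALUES (dbms_data_mining.prep_auto, dbms_data_mining.prep_auto_on);
COMMIT;

BEGIN
   dbms_data_mining.create_model(
      model_name          => 'CLAS_DT_EXAMPLE',        -- illustrative model name
      mining_function     => dbms_data_mining.classification,
      data_table_name     => 'MINING_DATA_BUILD_V',
      case_id_column_name => 'CUST_ID',
      target_column_name  => 'AFFINITY_CARD',
      settings_table_name => 'dt_settings');
END;
/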

The next (and last) PL/SQL package is DBMS_PREDICTIVE_ANALYTICS. The routines included in this package allow you to prepare data, build a model, score a model and return the results of model scoring. The routines include EXPLAIN, which ranks attributes in order of influence in explaining a target column; PREDICT, which predicts the value of a target attribute based on values in the input data; and PROFILE, which generates rules that describe the cases from the input data.
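
For example, a minimal sketch of the EXPLAIN routine against the demo build data might look like the following (the result table name is an illustrative choice):

BEGIN
   dbms_predictive_analytics.explain(
      data_table_name     => 'MINING_DATA_BUILD_V',
      explain_column_name => 'AFFINITY_CARD',
      result_table_name   => 'pa_explain_results');
END;
/

-- the result table ranks the attributes by their influence on AFFINITY_CARD
SELECT * FROM pa_explain_results ORDER BY rank;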

Over the coming weeks I will have separate blog posts on each of these PL/SQL packages. These will cover the functions that are part of each packages and will include some examples of using the package and functions.

Saturday, October 29, 2011

ODM PL/SQL API 11.2 New Features

The PL/SQL API interface for Oracle Data Miner has had a number of new features added in 11.2. These are listed below, along with the new API features added with the 11.1 release.

  • Support for Native Transactional Data with Association Rules: you can build association rule models without first transforming the transactional data.
  • SVM class weights specified with CLAS_WEIGHTS_TABLE_NAME: including the GLM class weights
  • FORCE argument to DROP_MODEL: you can now force a drop model operation even if a serious system error has interrupted the model build process
  • GET_MODEL_DETAILS_SVM has a new REVERSE_COEF parameter: you can obtain the transformed attribute coefficients used internally by an SVM model by setting the new REVERSE_COEF parameter to 1

11.1g API New Features

  • Mining Model schema objects: in previous releases, DM models were implemented as a collection of tables and metadata within the DMSYS schema. In 11.1, models are implemented as data dictionary objects in the SYS schema. A new set of DD views presents DM models and their properties
  • Automatic and Embedded Data Preparation: previously data preparation was the responsibility of the user. Now it can be automated
  • Scoping of Nested Data: supports nested data types for both categorical and numerical data. Most algorithms require multi-record case data to be presented as columns of nested rows, each containing an attribute name/value pair. ODM processes each nested row as a separate attribute.
  • Standardised Handling of Sparse Data & Missing Values: standardised across all algorithms.
  • Generalised Linear Models: has a new algorithm and supports classification (logistic regression) and regression (linear regression)
  • New SQL Data Mining Function: PREDICTION_BOUNDS has been introduced for Generalised Linear Models. This returns the confidence bounds on predicted values (regression models) or predicted probabilities (classification)
  • Enhanced Support for Cost-Sensitive Decision Making: a cost matrix can be added to or removed from a model using DBMS_DATA_MINING.ADD_COST_MATRIX and DBMS_DATA_MINING.REMOVE_COST_MATRIX (see the sketch below).
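
As an illustration of the cost matrix calls (the cost matrix table, its contents and the model name CLAS_DT_1_6 are illustrative assumptions; ODM expects the columns ACTUAL_TARGET_VALUE, PREDICTED_TARGET_VALUE and COST):

CREATE TABLE clas_cost_matrix (
   actual_target_value    NUMBER,
   predicted_target_value NUMBER,
   cost                   NUMBER);

-- make a missed '1' four times more costly than a missed '0'
INSERT INTO clas_cost_matrix VALUES (0, 0, 0);
INSERT INTO clas_cost_matrix VALUES (0, 1, 1);
INSERT INTO clas_cost_matrix VALUES (1, 0, 4);
INSERT INTO clas_cost_matrix VALUES (1, 1, 0);
COMMIT;

BEGIN
   dbms_data_mining.add_cost_matrix('CLAS_DT_1_6', 'clas_cost_matrix');
   -- and to take it back off the model
   dbms_data_mining.remove_cost_matrix('CLAS_DT_1_6');
END;
/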

Wednesday, October 19, 2011

ODM API Demos in PL/SQL (& Java)

If you have been using Oracle Data Miner to develop your data mining workflows and models, at some point you will want to move away from the tool and start using the ODM APIs.

Oracle Data Mining provides a PL/SQL API and a Java API for creating supervised and unsupervised data mining models. The two APIs are fully interoperable, so that a model can be created with one API and then modified or applied using the other API.

I will cover the Java APIs in a later post, so watch out for that.

To help you get started with using the APIs there are a number of demo PL/SQL programs available. These were available as part of the the pre-11.2g version of the tool. But they don’t seem to packaged up with the 11.2 (SQL Developer 3) application.

The following table gives a list of the PL/SQL demo programs that are available. Although these were part of the pre-11.2g tool, they still seem to work on your 11.2g database.

You can download a zip of these files from here.

The sample PL/SQL programs illustrate each of the algorithms supported by Oracle Data Mining. They include examples of data transformations appropriate for each algorithm.


I will be exploring the main APIs, how to set them up, the parameters, etc.,  over the next few weeks, so check back for these posts.

Wednesday, October 12, 2011

SQL Developer 3.1 EA & Bug

The new/updated SQL Developer 3.1 Early Adopter has just been released.

For the Data Miner there are no major changes; it appears that there have been some bug fixes and some minor enhancements to some parts.

The main ODM features, apart from bug fixes, in this release include:

  • Globalization support, including translated error messages and GUI for all languages supported by SQL Developer
  • Improved accessibility features including the addition of a Structure navigator that lists all the nodes and links displayed in a workflow

Bug / Feature

After unzipping the download I opened SQL Developer. With each new release you will have to upgrade the existing ODM repository. The easiest way of doing this is to open the ODM connections pane and double click on one of your ODM schemas. SQL Developer will then run the necessary scripts to upgrade the repository.

I discovered a bug/feature with the SQL Developer 3.1 EA1 upgrade script. The repository upgrade does not complete and an error is reported.

I logged this error on the ODM forum on OTN. Mark Kelly, who is the Development Manager for ODM and monitors the ODM forum, and his team were quickly onto investigating the error. Mark has posted an update on the ODM forum and has provided a script that needs to be run before you upgrade your existing repository.

You can download the pre-upgrade script from here.

If you don’t have an existing repository then you don’t have to run the script.

Check out the message on the ODM forum.


How to Upgrade SQL Developer & ODM

You will have to download the new SQL Developer 3.1 EA install files.

  • Unzip this into your SQL Developer directory
  • Create a shortcut for  sqldeveloper.exe on your desktop and relabel it SQL Developer 3.1 EA
  • Double-click this short cut


  • You should be presented with the above window. Select the Yes button to migrate your previous install settings
  • SQL Developer should now open and contain all your previous connections

If you have an existing ODM repository, you need to run the pre-upgrade script (see above) at this point 

  • You will now have to upgrade the ODM repository in the database. The simplest way of doing this is to allow SQL Developer to run the necessary scripts.
  • From the View Menu, select Oracle Data Miner –> Connections
  • In the ODM Connections pane double click one of your ODM schemas. Enter the username and password and click OK


  • You will then be prompted to migrate/update the ODM repository to the new version. Click Yes.
  • Enter the SYS username and Password


  • Click Start button, to start the migrate/upgrade scripts
  • On my laptop this migrate/upgrade step took less than 1 minute
  • The upgrade is now finished and you can start using ODM.

ODM – SQL Developer 3.1 EA – Release Notes

The ODM release notes can be found at

Thursday, September 29, 2011

Check out Oracle Data Miner at OOW 11

If you are at Oracle Open World (OOW11) and you have an interest in Oracle Data Miner, check out the following presentation sessions:
In addition to these sessions there are also the following Hands-On Labs, where you can get your hands dirty with the tool.
Do let me know if I have missed a session so that I can update the list.
Unfortunately I’m not attending OOW11, so let me know what the sessions are like.

And tell Charlie that I sent you

Tuesday, September 13, 2011

Next Generation Analytics–Oracle BIWA TechCast

The Oracle BIWA SIG, which is part of the IOUG, will be having a tech cast on Wednesday 14th September 12:00 PM - 1:00 PM CDT  (between 6pm and 7pm in Ireland)

It is titled 'Building Next-Generation Predictive Analytics Applications using Oracle Data Mining'.

You can register for this by visiting

This presentation will cover how the Oracle Database has become a predictive analytics (PA) platform for next-generation applications and will include several examples including:

  • Oracle Fusion Human Capital Management (HCM) Predictive Workforce
  • Oracle Adaptive Access Manager for fraud detection
  • Oracle Communications Industry Model
  • Oracle Complex Event Processing, and others

The presentation will be interspersed with Oracle Data Mining demos and PA examples where possible.

“Predictive analytics help you make better decisions by uncovering patterns and relationships hidden in the data. This new information generates competitive advantage. Oracle has invested heavily to "move the algorithms to the data" rather than current approaches. Oracle Data Mining provides 12 in-database algorithms that mine star schemas, structured, unstructured, transactional, and spatial data. Exadata, delivering 10x-100x faster performance, combined with OBIEE for dashboards and drill-down deliver an unbeatable in-database analytical platform that undergirds next-generation “predictive” analytics applications. This webcast will show you how to get started.”

Saturday, August 6, 2011

New Frontiers for Oracle Data Miner

Oracle Data Miner functionality is now well established and proven over the years, in particular with the release of the ODM 11gR2 version of the tool. But how will Oracle Data Miner develop into the future?

There are 4 main paths or Frontiers for future developments for Oracle Data Miner:

Oracle Data Miner Tool

The new ODM 11gR2 tool is a major development over the previous version of the tool, with the introduction of workflows and some added functionality for some of the features. The tool is now comparable with the likes of SAS Enterprise Miner and SPSS.

But the new tool is not complete and still needs a bit of fine tuning of most of the features, in particular with the usability and interactions. Some of the colour schemes need to be looked at, or users should be allowed to select their own colours.

Apart from the usability improvements, another major development that is needed is the ability to translate the workflow and the underlying database objects into usable code. This code can then be incorporated into our applications and other tools. The tool does allow you to produce shell code of the nodes, but there is still a lot of effort needed to make this usable. Under the previous version of the tool there were features available in JDeveloper and SQL Developer to produce packaged code that was easy to include in our applications.

“A lot done – More to do”

Oracle Applications

Over the past couple of months there have been a few postings on how Oracle Data Miner (11gR2) has been, or will be, incorporated into various Oracle Applications, for example Oracle Fusion Human Capital Management and Oracle Real Time Decisions (RTD). Watch out for other applications that will be including Oracle Data Miner.

“A bit done – Lots more to do”

Oracle Business Intelligence

One of the most common places where ODM can be used is with OBIEE. OBIEE is the core engine for the delivery of the BI needs of an organisation. OBIEE coordinates the gathering of data from various sources, the defining of the business measures and then the delivery of this information in various forms to the users. Oracle Data Miner can be included in this process and can add significant value to the BI needs and reports.

“A lot done – Need to publicise more”

Customized Projects

Most data mining projects are independent of the various Applications and BI requirements. They are projects that are hoping to achieve a competitive insight into their organisational data. Over time, as the success of some pilot projects becomes known, the need for more data mining projects will increase. This will lead to organisations having a core data mining team to support these projects. With this, the team will need tools to support them in the delivery of their projects. This is where OBIEE and the Oracle Fusion Apps will become increasingly important.

“A lot done – more to do”

Wednesday, July 20, 2011

Data Exploration using Oracle Data Miner 11gR2

Before beginning any data mining task we need to perform some data investigation. This will allow us to explore the data and to gain a better understanding of the data values. We can discover a lot by doing this, and it can help us to identify areas for improvement in the source applications, as well as identifying data that does not contribute to our business problem (this is called feature reduction) and data that needs reformatting into a number of additional features (feature creation). A simple example of this is a date of birth field: on its own it provides no real value, but by creating a number of additional attributes (features) from it we can determine what age group each customer fits into.
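
As an illustration of this kind of feature creation (the CUSTOMERS table and DATE_OF_BIRTH column used here are hypothetical, and the age bands are arbitrary), a derived age and age group could be built with something like:

SELECT cust_id,
       TRUNC(MONTHS_BETWEEN(SYSDATE, date_of_birth) / 12) AS age,
       CASE
          WHEN MONTHS_BETWEEN(SYSDATE, date_of_birth) / 12 < 30 THEN 'Under 30'
          WHEN MONTHS_BETWEEN(SYSDATE, date_of_birth) / 12 < 40 THEN '30s'
          WHEN MONTHS_BETWEEN(SYSDATE, date_of_birth) / 12 < 50 THEN '40s'
          ELSE '50 and over'
       END AS age_group
FROM   customers;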

As with most of the interface in Oracle Data Miner 11gR2, there is a new Data Exploration interface. In this blog post I will talk you through how to set-up and use the new Data Exploration interface and show you how you can use the data exploration features to gain an understanding of the data before you begin using the data mining algorithms.

The examples given here are based on my previous blog posts and we will use the same sample data sets, that were set-up as part of the install and configuration.

See my other blog post and videos on installing and setting up Oracle Data Miner.

Data Set-up

Before we can begin the data exploration we need to identify the data we are going to use. To do this we need to select the Data tab from the Component Palette, and then select Data Source.

To create the Data Node on our workflow we need to click and drag the Data Source onto the workflow. Select the MINING_DATA_BUILD_V view and select all the data.

The next step is to create the Explore Data node on our workflow. From the Data tab in the Component Palette, select and drag the Explore Data node onto the workflow. Now we need to link the Data node to the Explore Data node.


Right-click on the Explore Data node and click Run. This will make the ODM tool go to the database and analyse the data that is specified in our Data node. The analysis results will be used in the Explore Data node.

Exploring the Data

When the Explore Data node has finished we can look at the data it has generated. Right-click the Explore Data node and select View Data.


A lot of statistical information has been generated for each of the attributes in our Data node. In addition to the statistical information we also get a histogram of the attribute distributions.

We can work through each attribute taking the statistical data and the histograms to build up a picture of the data.

The data we are using is for an Electronics Goods store.

A few interesting things in the data are:

  • 90% of the data comes from the United States of America
  • PRINTER_SUPPLIES attribute only has one value. We can eliminate this from our data set as it will not contribute to the data mining algorithms
  • Similarly for OS_DOC_SET_KANJI, which also has only one value

The histograms are based on a predetermined number of bins. This is initially set to 10, but you may need to change this value up or down to see if a pattern exists in the data.

An example of this is if we select AGE and set the number of bins to 10. We get a nice histogram showing that most of our customers are in the 31 to 46 age range. So maybe we should be concentrating on these.


Now if we change the number of bins to 25, we get a completely different picture of what is going on in the data.

To change the number of bins we need to go to the Workflow pane and select the Property Inspector. Scroll down to the Histogram section and change the Numerical Bins to 25. You then need to rerun the Explore Data node.


Now we can see that there are a number of age groups that stand out more than others. If we look at the 31 to 46 age range in the first histogram, we can see that there is not much change between each of the age bins. But when we look at the second histogram, with 25 bins, for the same age range, we get a very different view of the data. In this second histogram we see that the ages of the customers vary a lot. What does this mean? Well, it can mean lots of different things, and it all depends on the business scenario. In our example we are looking at an electronic goods store. What we can deduce from this second histogram is that there is a small number of customers up to about age 23. Then there is an increase. Is this due to people having obtained their main job after school and having some disposable income? This peak is followed by a drop off in customers, followed by another peak, drop off, peak, drop off, etc. Maybe we can build a profile of our customers based on their age, just like our financial organisations do to determine what products to sell to us based on our age and life stage.

Conclusions on the data

From this histogram we can maybe categorise the customers into the following groups:

• Early 20s – out of education, first job, disposable income
• Late 20s to early 30s – settling down, own home
• Late 30s – maybe kids, so have less disposable income
• 40s – maybe people are trading up and need new equipment. Or maybe the kids have now turned into teenagers and are encouraging their parents to buy up-to-date equipment.
• Late 50s – These could be empty nesters where their children have left home, maybe setting up home by themselves and their parents are building things for their home. Or maybe the parents are treating themselves with new equipment as they have more disposable income
• 60s + – parents and grand-parents buying equipment for their children and grand-children. Or maybe we have very techie people who have just retired
• 70+ – we have a drop off here.

As you can see, we can discover a lot in the data by changing the number of bins and examining the results. The important part of this examination is trying to relate what you are seeing in the graphical representation of the data on the screen back to the type of business we are examining. A lot can be discovered, but you will have to spend some time looking for it.

ODM 11gR2 Extra Data Exploration Functionality

In ODM 11gR2 we now have an extra feature for our data analysis. We can now produce histograms that are grouped by one of the other attributes. Typically this would be the Target or Class attribute, but you can also use it with the other attributes.

To use this extra feature, double click on the Explore Data node. The Group By drop down lets you select the attribute you want to group the other attributes by.


Using our example data, the target variable is AFFINITY_CARD. Select this in the drop down and run the Explore Data node again. When you look at the newly generated histograms you will now see that each bin has two colours. If you hover the mouse over each coloured part you will be able to get the number of records in each group. You can use other attributes, such as CUST_GENDER, COUNTRY_NAME, etc. Only use the attributes where it would make sense to analyse the data in this way.


This is a powerful new feature that allows you to gain a deeper level of insight into the data you are analysing.

Brendan Tierney

Monday, July 18, 2011

VirtaThon Presentation

Today I gave my VirtaThon presentation on the new Oracle Data Miner 11gR2 tool.

It was an interesting experience as VirtaThon was a virtual conference. The organisation and administration of the conference was excellent.

I had over 25 participants for my presentation, including Carolyn Hamm who has written a book on using Oracle Data Miner 10g.  She seemed to enjoy my presentation as she was asking for more at the end, but we had run out of time.

The presentation was an unusual but interesting experience. All the participants were muted, so I could not hear anyone or be asked questions as the presentation progressed. I was not able to judge the body language or facial expressions to work out how the presentation was going.

I was sitting in my living room when giving the presentation and spent almost an hour talking to myself. At times the concentration levels dipped and I had to refocus and use some visualisation to help me concentrate.

The presentation was divided into 2 parts. The first part was a presentation consisting of some background to ODM, how to get set up and running with ODM, and finally a discussion of some of the new features. This first part took approx. 30 minutes, which surprised me as during my rehearsals it was taking 16 minutes. The second part of the presentation was a demo of using ODM to create a workflow, generating a classification model and then applying this model to some new data. During my rehearsals this was taking approx. 40 minutes.

I only had 50-55 minutes for my VirtaThon presentation so after my presentation I had 20-25 minutes for the demo. So I had to get through the demo quickly and I had to cut out a discussion of how the data exploration functionality in ODM can be used to get an insight into the data before you start using the data mining features. I will put together a blog post and video of this in a couple of weeks time that will explain it in more detail.

I managed to finish at 49 minutes, which left 6 minutes for questions. There were only a couple of questions, but plenty of Thank Yous along with Good Presentation, which is always good to hear.

Thank you to everyone who attended the presentation and to the organisers of VirtaThon.

Brendan Tierney

Friday, July 15, 2011

My Presentation at VirtaThon 2011

I will be giving a presentation on the Oracle Data Miner New Features at the online conference VirtaThon, on Monday 18th July.

VirtaThon is a FREE 6 day conference, with 2 parallel sessions, featuring world leading speakers on Oracle, Java and MySQL.

Previously attendance at the conference cost $100, which was good value considering the quality of the speakers. But this year it is Free.

The VirtaThon conference runs from 16th July to 21st July

The schedule is available at

To sign up to attend some or all of the sessions go to

Attend4FREE! Jul 16-21: 6 Days of Expert+ Sessions #VirtaThon The Online Conference for the Oracle, Java & MySQL Domains

Friday, July 8, 2011

Exporting & Importing Oracle Data Miner (11gR2) Workflows

As with all development environments, there will be a need to move your code from one schema to another or from one database to another.

With Oracle Data Miner 11gR2, we have the same requirement. In our case it is not just individual procedures or packages, we have a workflow consisting of a number of nodes. With each node we may have a number of steps or functions that are applied to the data.

Exporting an ODM (11gR2) Workflow

In the Data Miner navigator, right-click the name of the workflow that you want to export.

The Save dialog opens. Specify a location on your computer where the workflow will be saved as an XML file.

The default name for the file is workflow_name.xml, where workflow_name is the name of the workflow. You can change the name and location of the file.


Importing an ODM (11gR2) Workflow

Before you import your ODM workflow, you need to make sure that you have access to the same data that is specified in the workflow.

All tables/views are prefixed with the schema where the table/view resides.

You may want to import the data into the new schema or ensure that the new schema has the necessary grants.

Open the connection in ODM.

Select the project under which you want to import the workflow, or create a new project.

Right click the Project and select Import Workflow.

Search for the XML export file of the workflow.

Preserve the objects during the import.

When you have all the data and the ODM workflow imported, you will need to run the entire workflow to ensure that you have everything setup correctly.

It will also create the models in the new schema.

Data encoding in Workflow

All of the tables and views used as data sources in the exported workflow must reside in the new account.

The account from which the workflow was exported is encoded in the exported workflow. For example, say the workflow was exported from the account DMUSER and contains a data source node based on MINING_DATA_BUILD_V. If you import the workflow into a different account (that is, an account that is not DMUSER) and try to run the workflow, the data source node fails because the workflow is looking for DMUSER.MINING_DATA_BUILD_V.

To solve this problem, right-click the data node (MINING_DATA_BUILD_V in this example) and select Define Data Wizard. A message appears indicating that DMUSER.MINING_DATA_BUILD_V does not exist in the available tables/views. Click OK and then select MINING_DATA_BUILD_V in the current account.


I have created a video of this blog. It illustrates how you can Export a workflow and Import the workflow into a new schema.

ODM 11gR2 - Exporting and Importing ODM Workflows

Make sure to check out my other Oracle Data Miner (11gR2) videos.

Friday, May 27, 2011

Creating ODM Schemas & Repository for ODM 11g R2

Before you can start using the Oracle Data Miner features that are now available in SQL Developer 3, there are a few steps you need to perform. This post will walk you through these steps and I have put together a video which goes into more detail. The video is available on my YouTube channel.

Oracle Data Miner 11g R2 : Creating ODM User & Repository video

I will be posting more How To type videos over the coming weeks and months. Each video will focus on one particular feature within the new Oracle Data Mining tool.

The following steps are necessary before you can start using the ODM tool.

Set up of Oracle Data Miner tabs

To get the ODM tabs to display in SQL Developer, you need to go to the View menu and select the following from the Data Miner submenu

  • Data Miner Connections
  • Workflow Jobs
  • Property Inspector


Create an ODM Schema

There are two main ways to create a Schema. The first and simplest way is to use SQL Developer. To do this you need to create a connection to SYS. Right click on the Other Users option and select Create User.

The second option is to use SQL*Plus to create the user. Using either method, you need to grant the Connect & Resource privileges to the user (a sketch of this is shown below).
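
A minimal sketch of the SQL*Plus route might look like the following (the schema name, password and tablespace names are illustrative assumptions; adjust them for your own environment):

CREATE USER dmuser IDENTIFIED BY dmuser
   DEFAULT TABLESPACE users
   TEMPORARY TABLESPACE temp
   QUOTA UNLIMITED ON users;

GRANT connect, resource TO dmuser;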

Create the Repository

Before you can start using Oracle Data Mining, you need to create an Oracle Data Miner Repository in the database. Again there are two ways to do this. The simplest is to use the inbuilt functionality in SQL Developer. In the Oracle Data Miner Connections tab, double click on the ODM schema you have just created. SQL Developer will check the database to see if the ODM Repository exists. If it does not, SQL Developer will create the repository for you, but you will need to provide the SYS password.

The other way to create the repository is to run the installodmr.sql script that is available in the 'datamining' directory.

@installodmr.sql <default tablespace> <temp tablespace>

example:   @installodmr.sql USER TEMP

Create another ODM Schema

It is typical that you would need to have more than one schema for your data mining work. After creating the new Oracle schema, the next step is to grant the schema the privileges to use the Data Mining Repository. The script to do this is called

usergrants.sql <DM Schema>

example:    @usergrants.sql DMUSER

Hint: The schema name needs to be in upper case. 

IMPORTANT: The last grant statement in the script may give an error. If this occurs then it is due to an invalid hidden character on the line. If you do a cut and paste of the grant statement and execute this statement, everything should run fine.

If you want the demo data to be created for this new ODM schema, then you need to run

@instdemodata.sql <DM Schema>

example:    @instdemodata.sql DMUSER

All of these scripts can be found in the SQL Developer directories.