
Tuesday, December 16, 2014

ODMr 4.1 EA1 Repository Upgrade

If you are downloading EA1 of SQL Developer, which includes Oracle Data Miner (ODMr), and you intend to use Oracle Data Miner, then you will need to upgrade the ODMr Repository.

You could do it the hard way and run the upgrade repository SQL scripts that are located in the ...\sqldeveloper-4.1.0.17.29-no-jre\sqldeveloper\dataminer\scripts directory.
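For a rough idea of what the hard way looks like: the scripts are run as SYS, and the exact script names and parameters vary between releases, so check the scripts directory (and any readme in it) first. As a hedged sketch, a fresh repository install takes the default and temporary tablespaces for the repository account, something like:

sqlplus sys@orcl as sysdba

SQL> @installodmr.sql USERS TEMP

The upgrade scripts in the same directory follow a similar pattern.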

Or you could do it the easy way and let the inbuilt functionality in Oracle Data Miner do it for you.

To do it the easy way all you need to do is open the ODMr Connections window and then double-click on one of your ODM connections.

ODMr will check the version of the repository you have installed and, if needed, it will prompt you about upgrading the repository. Select Yes and you will be prompted to enter the SYS password (so talk kindly with your DBA and get them to enter the password for you). Then click on the Start button. This will kick off the ODMr Repository Upgrade scripts.

NB: Make sure you have a backup of your workflows before you do this. A little incident happened to me during the SQL Dev / ODMr 4.0 upgrade back in September 2013 where all my workflows disappeared. You can imagine how happy I was about that. Since then the ODMr team have added some functionality to ensure something like this doesn't happen again. But you never know.

To backup your ODMr workflows use the Export Workflow option.

When the repository upgrade has finished you will get a 'Task Complete Successfully' message in the upgrade window. Click on the Close button and away you go with the updated version.

Check out this blog post for details of what is new in ODMr 4.1.

Friday, December 12, 2014

Oracle Data Miner (SQL Dev) 4.1 EA1

A few days ago the first Early Adopter release of SQL Developer 4.1 (EA1) was made available. You can go ahead and download it from here, and make sure to check out the blog post by Jeff Smith on some install and setup steps required around the latest version of Java.

I've been using SQL Developer since its very first release, so getting my hands on a new release is very exciting. There are lots and lots of new features in the tool. Again, check out the blog posts by Jeff Smith and Kris Rice on some of these new features. I really like the new DBA screens :-) But this screen really needs some scroll bars, as not everything fits on my screen. So Jeff and Kris, if you are reading this, can you add some scroll bars?

Sqldev4 1

In addition they have been working on a "new" SQL*Plus called SDSQL. This is a command line tool that is supposed to be bigger and better than SQL*Plus, while still giving us a command line way to run our scripts and demos. To download and install the tool, go here.

As you know I'm a bit of an Oracle Data Miner/Mining fan. There are no new in-database features this time, but there are a lot of new features in the GUI tool (aka ODMr), along with some improvements and bug fixes. Here is a list of the ODMr 4.1 EA1 new and updated features (taken from the ODMr Help in SQL Dev).

JSON Data Support for Oracle Database 12.1.0.2 and above

In response to the growing popularity of JSON data and its use in Big Data configurations, Data Miner now provides an easy-to-use JSON Query node. The JSON Query node allows you to select and aggregate JSON data without entering any SQL commands. The JSON Query node opens up all of the existing Data Miner features for use with JSON data. The enhancements include the following (a short SQL sketch of the underlying database features follows the list):

Data Source Node

Automatically identifies columns containing JSON data by identifying those with the IS_JSON constraint.

Generates a JSON schema for any selected column that contains JSON data.

Imports a JSON schema for a given column.

JSON schema viewer.

Create Table Node

Ability to select a column to be typed as JSON.

Generates JSON schema in the same manner as the Data Source node.

JSON Data Type

Columns can be specifically typed as JSON data.

JSON Query Node

Ability to utilize any of the selection and aggregation features without having to enter SQL commands.

Ability to select data from a graphical layout of the JSON schema, making data selection as easy as it is with scalar relational data columns.

Ability to partially select JSON data as standard relational scalar data while leaving other parts of the same JSON document as JSON data.

Ability to aggregate JSON data in combination with relational data. Includes the Sub-Group By option, used to generate nested data that can be passed into mining model build nodes.
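To put the list above in some context, here is a minimal SQL sketch of the 12.1.0.2 database features these enhancements build on (the table, constraint and JSON attribute names are made up for illustration). The IS JSON check constraint is what the Data Source node looks for, and JSON_TABLE is the kind of projection the JSON Query node saves you from writing by hand.

-- A column flagged as JSON via an IS JSON check constraint;
-- this is what the Data Source node detects automatically.
CREATE TABLE customer_events (
  event_id  NUMBER PRIMARY KEY,
  event_doc CLOB CONSTRAINT event_doc_json CHECK (event_doc IS JSON));

-- The kind of relational projection of JSON data that the JSON
-- Query node generates for you, written here by hand.
SELECT e.event_id, jt.cust_id, jt.amount
FROM   customer_events e,
       JSON_TABLE(e.event_doc, '$'
         COLUMNS (cust_id NUMBER PATH '$.custId',
                  amount  NUMBER PATH '$.amount')) jt;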

General Improvements

Improved database session management, resulting in fewer database sessions being generated and a more responsive user interface.

Filter Columns Node

Combined primary Editor and associated advanced panel to improve usability.

Explore Data Node

Allows multiple row selection to provide group chart display.

Classification Build Node

Automatically filters out rows where the Target column contains NULLs or all spaces. A warning is issued to the user, but the Model build continues.

Workflow

Enhanced workflows to ensure that Loading, Reloading, Stopping and Saving operations no longer block the UI.

Online Help

Revised the Online Help to adhere to a topic-based framework.

Selected Bug Fixes (does not include 4.0 patch release fixes)

GLM Model Algorithm Settings: Added GLM feature identification sampling option (Oracle Database 12.1 and above).

Filter Rows Node: Custom Expression Editor not showing all possible available columns.

WebEx Display Issues: Fixed problems affecting the display of the Data Miner UI through WebEx conferencing.


Denny Wong of the ODM team in Oracle has made available a tutorial on importing JSON data for use with ODMr. Check it out here.

I've been told there will be a couple of tutorials on the new features coming out (from the ODMr team) over the next few weeks. So keep an eye out for these.


Check out my blog post on what you need to do to get started/using ODMr 4.1 EA1.

Friday, December 5, 2014

UKOUG 2015 Conferences

The UKOUG annual conferences commence on Sunday 7th December and run until Wednesday 10th.

Like previous years there are two conferences, one called TECH15 and the other called APPS15. You can probably guess what each conference is about!

This year these conferences are being held at the same time and in the same venue. But they are separate conferences!

This year I've been very lucky (or very unlucky) to have 3 presentations at these conferences. Two of these will be part of the TECH15 conference and one will be part of the APPS15 conference.

Just in case you are interested in what I'm presenting about and might want to attend, here is the list with the room numbers.

Monday

10:30-11:20 : Oracle Advanced Analytics in Oracle Fusion Apps & Beyond (Apps) (Room : Ex1)

11:30-12:20 : Predictive Queries in Oracle 12c (TECH) (Room : Hall 6)

Wednesday

11:30-12:20 : What are they thinking? With APEX and Oracle Data Miner. (TECH) (Room : Ex4)

(this is a joint presentation with Roel Hartman)

Yes, on the Monday I have 2 back-to-back presentations with a 10 minute gap to get from one side of the conference centre to the other :-( I'm not looking forward to that transition, but I'm sure it will be fine.

Friday, November 7, 2014

ODMr : Graph Node: Zooming in on Graphs

When Oracle Data Miner (ODMr) 4.0 (which is part of SQL Developer) came out back in late 2013, a number of new features were added to the tool. One of these was a Graph node that allows us to create various graphs and charts, including Line, Scatter, Bar, Histogram and Box plot.

I've been using this node recently to produce graphs, and particularly scatter plots. I've been using the scatter plots to graph the Actual values in a data set against the Predicted values that were generated by ODMr. In this scenario I had a separate data set for training my ODM data mining models and another testing data set for, well, testing how well the model performed against unseen data.

In general the graphs produced by the Graph node look good and give you the information that you need. But what I found was that as you increase the size of the data set, the scatter plot can look a bit messy. This is in part due to the size of the square used to represent a data point. As the volume of data increases, your scatter plot can just look like a coloured-in area of blue squares. This is illustrated in the following image.

Graph node 1

What I discovered today is that you can zoom in on this graph to explore different regions and data points on it. To do this you need to select an area within the x-axis and y-axis ranges. When you do this you will see a box form on your graph, selecting the area that you indicate by moving your mouse. After you have finished selecting the area, the Graph node will zoom into this part of the graph and show the data points. For example, if I select the area up to about 1000 on the x-axis and 1000 on the y-axis, I will get the following.

Graph node 2

Again, if I select a similar area of about 350 on the x-axis and 400 on the y-axis, I get the following zoomed area.

Graph node 3

You can keep zooming in on various areas.

At some point you will have finished zooming in and will want to return to the original graph. To zoom back out, all you need to do is click on the graph. When you do this you will go back to the previous view of the graph. You can keep doing this until you get back to the original graph. Alternatively, you can continue zooming in and out on various parts of the graph.

Hopefully you will find this feature useful.

Wednesday, September 17, 2014

Analytics Hands on Labs at OOW 14

I had a previous blog post listing the various Oracle Advanced Analytics sessions/presentations at Oracle Open World 2014.

After trawling through the list of Hands-on Labs, it was disappointing to see that there were no Oracle Data Mining or Oracle R Enterprise hands-on labs this year.

But there is a hands-on lab that looks at how to use the new Oracle Big Data SQL feature (announced over the summer).

Here is the abstract for the session.

Data warehouses contain the critical data required for managing and running organizations. Increasingly, Hadoop and NoSQL databases are capturing additional information—such as web logs, social media, and weather data that can augment the warehouse—enabling users to uncover new insights and opportunities. This hands-on lab illustrates how Oracle Big Data SQL is used to unify these environments. First you will learn how to securely access these big data sources from Oracle Database 12c. Then you will utilize Oracle’s analytical SQL across all your data, regardless of where it resides. Welcome to Oracle’s new big data management system!

There will be a lab session each day for this session and I will certainly be doing my best to get to one of these.

Date | Time | Location | Hands-on-Lab Session Title
Monday 29th Sept. | 11:45-12:45 | Hotel Nikko - Peninsula | Oracle Big Data SQL: Unified SQL Analysis Across the Big Data Platform [HOL9348]
Tuesday 30th Sept. | 15:45-16:45 | Hotel Nikko - Peninsula | Oracle Big Data SQL: Unified SQL Analysis Across the Big Data Platform [HOL9348]
Wednesday 1st Oct. | 13:15-14:15 | Hotel Nikko - Peninsula | Oracle Big Data SQL: Unified SQL Analysis Across the Big Data Platform [HOL9348]
Thursday 2nd Oct. | 11:30-12:30 | Hotel Nikko - Peninsula | Oracle Big Data SQL: Unified SQL Analysis Across the Big Data Platform [HOL9348]

If any new hands-on-labs appear that are related to the Big Data and Advanced Analytics areas/options I will update the above table.

Some other Hands-on-Labs that you might be interested in include:

Date | Time | Location | Hands-on-Lab Session Title
Monday 29th Sept. | 17:45-18:45 | Hotel Nikko - Peninsula | Oracle NoSQL Database for Application Developers [HOL9349]
Tuesday 30th Sept. | 10:15-11:10 | Hotel Nikko - Peninsula | Oracle NoSQL Database for Application Developers [HOL9349]
Tuesday 30th Sept. | 15:45-16:45 | Hotel Nikko - Nikko Ballroom III | Oracle Data Integrator 12c New Features Deep Dive [HOL9439]
Tuesday 30th Sept. | 17:15-18:15 | Hotel Nikko - Nikko Ballroom III | Oracle Data Integrator for Big Data [HOL9414]
Wednesday 1st Oct. | 13:15-14:15 | Hotel Nikko - Mendocino I/II | Set Up a Hadoop 2 Cluster with Oracle Solaris Zones, Oracle Solaris ZFS, and Unified Archive [HOL2086]
Wednesday 1st Oct. | 14:45-15:45 | Hotel Nikko - Peninsula | Oracle NoSQL Database for Administrators [HOL9327]
Thursday 2nd Oct. | 14:30-15:30 | Hotel Nikko - Peninsula | Oracle NoSQL Database for Administrators [HOL9327]

Thursday, March 20, 2014

Issues with using the latest release of ODM

The title of this blog post makes it sound more dramatic than it actually is.

The reason for this blog post is down to me receiving a recent comment on the blog, plus having received numerous emails and a recent OTN Discussion Forum topic for Oracle Data Mining.

The main thing that they have in common is that if I use the latest version of Oracle Data Mining (ODM) it tells me that I need to upgrade my ODM Repository. What impact will this have?

The ODM Repository stores lots of information about the workflows you create using the (free) Oracle Data Mining tool that comes as part of SQL Developer. Yes, you do have to pay for the OAA option, so is it really free? Well, some parts are, like the Explore node and the Graph node.

If you download and want to use the latest version of the ODM tool, or you want to try it out before rolling it out to others, then you will need to upgrade your ODM repository.

And this is the problem that people are facing.

If you upgrade, then the ODM Repository is updated to work with the latest version of the ODM tool. But what happens to everyone else who is using the previous release of the tool? The answer is that they can no longer use ODM against their database.

Why is that? Well, the version of the tool is tied to a version of the Repository. If you upgrade to the newer tool and repository, then your older versions of the ODM tool no longer work.

The result of all of this is that you cannot have a mixture of versions of the ODM tool (SQL Developer) being used in your team/company.

There is a very simple solution to all of this. Everyone uses the same version of the ODM tool (i.e. the same version of SQL Developer). For example, your team might be using SQL Dev 4, which was released last December. But in early March there was a new patch release (4.0.1). In order to use this new version of the tool, all of your team needs to start using it at the same time. The first person to use it will be prompted to migrate the ODM repository. This is done automatically once you enter the password for SYS.

But in some teams this is not possible to do; you want to try out the tool to see that it works correctly before getting others to use it. The way around this is to have a separate database to use for your testing. You can easily copy across your workflows and ODM objects to the test database.

This might not be possible for everyone, so what can you do? One way is to create a Virtual Machine and try it out on your own desktop.

The answer to this problem is not ideal, but hopefully you have a better idea of why things are happening this way and what you can or cannot do about it.

Like I said at the top of this blog post, the title is a bit more dramatic than is really the case :-)


My next blog post will be on another question I've been asked a few times: 'When I go to use the ODM tool it tells me that the Oracle Text feature of Oracle needs to be enabled'.

Monday, March 3, 2014

OUG Ireland

The annual OUG Ireland Conference (or special event) will be on Tuesday 11th March. Actually, this year there are sessions spread over 2 days, for the first time ever in the 10+ year history of OUG Ireland. In addition to 2 days of sessions there are 7 streams of presentations on the Tuesday, and then there is RAC Attack, for the first time in Ireland.

The main conference event is on Tuesday 11th March in the DCC in Dublin. Things kick off at 9:20 with Debra Lilley welcoming everyone to the event. Then Jon Paul from Oracle in Ireland will do the opening keynote. Then we can break into the 7 streams, with lots of local case studies and some well-known speakers from around the world, including many Oracle ACEs and ACE Directors (my presentation is at 12:15).

The day ends with 2 keynote presentations: one focused on the Apps streams (Nadia Bendjedou, Oracle) and a separate keynote for the Tech streams (by Tom Kyte).

Throughout the day there will be a RAC Attack event. Look out for their tables in the exhibition hall. Again there will be some well-known experts from around the world who will be on hand to help you get RAC set up and running on your own laptop, answer your questions and engage in lots of discussions about all things Oracle. The RAC Attack Ninjas will include Osama Mustafa, Philippe Fierens, Marcin Przepiorowski, Martin Bach and Tim Hall. Some of these are giving presentations throughout the day, so when they are not presenting you will find them at the RAC Attack table. Even if you are not going to install RAC, drop by and have a chat with them.

On Wednesday 12th March the OUG Ireland Conference ventures into a second day of sessions. This will be a full day of topics by Tom Kyte. This is certainly a day not to be missed. As they say, places are limited, so book your place today.


Click on the following image to view the agenda for the 2 days and to book your place on the 11th and 12th March.


I hope to see you there and make sure you say hello to me.

Friday, December 13, 2013

Upgrading to SQL Dev 4 & Oracle Data Miner 4

The production release of SQL Developer 4 and Oracle Data Miner 4 has just been released. If you are like me you will want to upgrade and start using this latest release. For me I particularly want to be using the new Oracle Data Miner 4.  Over the past (almost) 6 months I’ve been working with the Early Adopter versions (EAs) with some degree of frustration. So hopefully it will be all working now.

To download the production version of SQL Developer 4 that includes Oracle Data Miner, go here.

The following are the steps that I followed to get SQL Developer installed and to migrate my Oracle Data Miner Repository.  I’m running a 12.1c Oracle Database.

1. Download and unzip the SQL Developer software. Go to the \sqldeveloper folder to locate the sqldeveloper.exe file. I created a shortcut on my desktop for this. When ready, run this file.

2. As SQL Developer is opening you will get the typical splash screen, and at some point you will be asked about migrating your preferences from your previous release. In my case I'm migrating from EA1. I selected Yes.

image

After a few more seconds SQL Developer should open with all your previous settings.

image

3. Now to update and migrate your existing Oracle Data Mining Repository to the new version. To start this process, go to the Tools menu and then select Data Miner –> Make Visible.

image

This will open the Oracle Data Miner Connections tab and the Workflow Jobs tab. If you don’t do this step then your Oracle Data Miner workflows may not run.

4. Double click on one of your schemas in the Data Miner Connection tab.

image

5. Before you upgrade your repository it is advisable to take a full backup of your database and to export your workflows, just in case anything happens during the Repository upgrade. I cannot stress this enough, because during a previous upgrade my repository got wiped and I had to rely on my backups.

6. The version of the repository will be checked and, if it needs updating, you will get the following window. I’m migrating from EA1, so you might get a slightly different message. It all depends on what version you were previously using. Select Yes.

image

7. Next you will need to give the SYS password (or talk nicely to your DBA). Then you will get a warning about disconnecting your session from the repository. Click OK.

image

Then you can click on the Start button.

image

Everything should finish after a few minutes.

image

8. Open one of your workflows and run it to make sure all is OK.

 

Based on my initial few hours of working with the production version of SQL Developer 4 and Oracle Data Miner 4, it seems to run a lot quicker than the Early Adopter versions.

Watch out for some blog posts over the coming weeks about some of the new features that are available in SQL Developer 4. Like my previous blog posts, the new posts will be how-to type articles.

Wednesday, December 11, 2013

Running PL/SQL Procedures in Parallel

As your data volumes increase, particularly as you evolve into the big data world, you will start to see that your Oracle Data Mining scoring functions take longer and longer. Applying an Oracle Data Mining model to new data is a very quick process. The models are what Oracle calls first class objects in the database. This basically means that they run very quickly with very little overhead.

But as the data volumes increase, you will start to see that your Apply process, or scoring of the data, will take longer and longer. As with all OLTP or OLAP environments, as the data grows you will start to use other in-database features to help your code run quicker. One example of this is the Parallel option.

You can use the Parallel option to run your Oracle Data Mining functions in real-time and in batch processing mode. The examples given below show you how you can do this.

Let us first start with some basics. What are the typical commands necessary to set up our schema or objects to use Parallel? The following commands are examples of what we can use.

ALTER SESSION ENABLE PARALLEL DML;
ALTER TABLE table_name PARALLEL (DEGREE 8);
ALTER TABLE table_name NOPARALLEL;
CREATE TABLE … PARALLEL degree …
ALTER  TABLE … PARALLEL degree …
CREATE INDEX … PARALLEL degree …
ALTER  INDEX … PARALLEL degree …

You can enable parallel operations at the session level with the following statements.

ALTER SESSION ENABLE PARALLEL DDL;
ALTER SESSION ENABLE PARALLEL DML;
ALTER SESSION ENABLE PARALLEL QUERY;

For tables that have a degree of 1, you can force parallel operations by using the FORCE option.

ALTER SESSION FORCE PARALLEL QUERY PARALLEL 2;

You can disable parallel processing with the following session statements.

ALTER SESSION DISABLE PARALLEL DDL;
ALTER SESSION DISABLE PARALLEL DML;
ALTER SESSION DISABLE PARALLEL QUERY;

We can also tell the database what degree of parallelism to use:


ALTER SESSION FORCE PARALLEL DDL PARALLEL 32;
ALTER SESSION FORCE PARALLEL DML PARALLEL 32;
ALTER SESSION FORCE PARALLEL QUERY PARALLEL 32;


 


Using your Oracle Data Mining model in real-time using Parallel


When you want to use your Oracle Data Mining model in real-time, on one record or a set of records, you will be using the PREDICTION and PREDICTION_PROBABILITY functions. The following example shows how a Classification model is applied to some data in a view called MINING_DATA_APPLY_V.


column prob format 99.99999
SELECT cust_id,
       PREDICTION(DEMO_CLASS_DT_MODEL USING *)  Pred,
       PREDICTION_PROBABILITY(DEMO_CLASS_DT_MODEL USING *) Prob
FROM   mining_data_apply_v
WHERE  rownum <= 18
/


   CUST_ID       PRED      PROB
---------- ---------- ---------
    100574          0    .63415
    100577          1    .73663
    100586          0    .95219
    100593          0    .60061
    100598          0    .95219
    100599          0    .95219
    100601          1    .73663
    100603          0    .95219
    100612          1    .73663
    100619          0    .95219
    100621          1    .73663
    100626          1    .73663
    100627          0    .95219
    100628          0    .95219
    100633          1    .73663
    100640          0    .95219
    100648          1    .73663
    100650          0    .60061


If the volume of data warrants the use of the Parallel option then we can add the necessary hint to the above query as illustrated in the example below.


SELECT /*+ PARALLEL(mining_data_apply_v, 4) */
       cust_id,
       PREDICTION(DEMO_CLASS_DT_MODEL USING *)  Pred,
       PREDICTION_PROBABILITY(DEMO_CLASS_DT_MODEL USING *) Prob
FROM   mining_data_apply_v
WHERE  rownum <= 18
/


If you turn on autotrace you will see that Parallel was used. So you should now be able to use your Oracle Data Mining models to work on a very large number of records, and by adjusting the degree of parallelism you can see performance improvements.


Using your Oracle Data Mining model in Batch mode using Parallel


When you want to perform some batch scoring of your data using your Oracle Data Mining model, you would typically use the APPLY procedure that is part of the DBMS_DATA_MINING package. But the problem with using a procedure or function is that you cannot give it a hint to tell it to use the parallel option. So unless you have the table(s) set up with parallel and/or the session set to use parallel, your batch scoring will not run in parallel.


So how can you get your DBMS_DATA_MINING batch scoring to run in parallel?


The answer is that you can use the DBMS_PARALLEL_EXECUTE package. The following steps walk you through what you need to do to use the DBMS_PARALLEL_EXECUTE package to run your Oracle Data Mining scoring in parallel.


The first step required is for you to put the scoring code into a stored procedure that DBMS_PARALLEL_EXECUTE can call once per chunk. Two things to note here. First, DBMS_PARALLEL_EXECUTE passes the boundaries of each chunk to your statement through the :start_id and :end_id bind variables, so the procedure needs to accept these. Second, DBMS_DATA_MINING.APPLY always scores a complete table and cannot be restricted to a rowid range, so to score chunk by chunk the procedure below uses the PREDICTION functions (the row-level equivalent of APPLY) and appends the results for each chunk to the output table. The following code shows how our DEMO_CLASS_DT_MODEL can be used to score the NEW_DATA_TO_SCORE table in a stored procedure called SCORE_DATA.


create or replace procedure score_data (
   p_start_rowid  in rowid,
   p_end_rowid    in rowid)
is
begin
   -- Score only the rows in this chunk's rowid range and append the
   -- results to NEW_DATA_SCORED, which is assumed to have been
   -- created with (cust_id, prediction, probability) columns.
   insert into new_data_scored (cust_id, prediction, probability)
   select cust_id,
          prediction(DEMO_CLASS_DT_MODEL using *),
          prediction_probability(DEMO_CLASS_DT_MODEL using *)
   from   new_data_to_score
   where  rowid between p_start_rowid and p_end_rowid;

   commit;
end;
/


Next we need to create a Parallel Task for the DBMS_PARALLEL_EXECUTE package. In the following example this is called ODM_SCORE_DATA.


-- Create the task.
BEGIN
   DBMS_PARALLEL_EXECUTE.CREATE_TASK('ODM_SCORE_DATA');
END;
/


Next we need to define the Parallel Workload Chunk details. In this example the table is chunked by ROWID.

-- Chunk the table by ROWID; TRUE means the last parameter is a
-- row count (approx. 100 rows per chunk) rather than a block count.
BEGIN
   DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID('ODM_SCORE_DATA', 'DMUSER', 'NEW_DATA_TO_SCORE', true, 100);
END;
/

The scheduled jobs take an unassigned workload chunk, process it and then move on to the next unassigned chunk.

Now you are ready to execute the stored procedure for your Oracle Data Mining model, in parallel with a degree of 10.

DECLARE
   l_sql_stmt   varchar2(200);
BEGIN
   -- The statement must contain the :start_id and :end_id
   -- placeholders, which DBMS_PARALLEL_EXECUTE binds to each
   -- chunk's rowid range.
   l_sql_stmt := 'begin score_data(:start_id, :end_id); end;';

   DBMS_PARALLEL_EXECUTE.RUN_TASK('ODM_SCORE_DATA', l_sql_stmt, DBMS_SQL.NATIVE,
                                  parallel_level => 10);
END;
/
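While the task is running you can keep an eye on its progress using the DBMS_PARALLEL_EXECUTE status function and data dictionary views, for example:

-- Check the overall task status.
SELECT DBMS_PARALLEL_EXECUTE.TASK_STATUS('ODM_SCORE_DATA') FROM dual;

-- See how many chunks are unassigned, processing, done or in error.
SELECT status, COUNT(*)
FROM   user_parallel_execute_chunks
WHERE  task_name = 'ODM_SCORE_DATA'
GROUP  BY status;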


When everything is finished you can then clean up and remove the task using



BEGIN
   dbms_parallel_execute.drop_task('ODM_SCORE_DATA');
END;
/


 

NOTE: The schema that will be running the above code will need the necessary privileges to use DBMS_SCHEDULER (DBMS_PARALLEL_EXECUTE runs the chunks as scheduler jobs), for example:


grant create job to dmuser;

Wednesday, October 9, 2013

How to Fix Odd Layout Behaviour in ODM v4 EA1 & EA2

If you have been using the last versions of the Oracle Data Miner tool that were released as part of SQL Developer 4 EA1 and EA2, you may have noticed that the layout of your worksheets and other areas is inconsistent each time you open ODM.

This can be very frustrating, as you have to rearrange the layout of the worksheets, the Properties inspector, the ODM connections tab, etc.

This “feature” seems to be linked to when you installed the new version of the software. When you open SQL Dev 4 EA1 & EA2 you are asked if you would like to migrate your settings. If you selected Yes to this, then it is this that is causing the problem.

How do you fix this?

1. Export your SQL Developer Connections. To do this go to the Connections tab, right click on Connections and select Export from the menu.

image

This will create an XML file. Save it to your desktop so that you can find it easily later.

2. Locate the AppData directory for SQL Developer. This depends on your environment. If you are using Windows for your client then the AppData directory will be located somewhere like the following.

C:\Users\<your username>\AppData\Roaming\SQL Developer

3. Rename the System4.0… directory. In the AppData\SQL Developer directory there is a directory called something like ‘system4.0.0.12.84’ if you have installed and migrated to SQL Dev 4 EA2. Rename this directory to another name, e.g. ‘system4.0.0.12.84_old’. This effectively deletes all your settings for SQL Developer 4.

4. Start SQL Dev 4. When you start SQL Developer 4 EA1 or EA2 it will be like you are running the software for the first time. But this time, when you are asked if you want to migrate your settings from an earlier version, select No. This will create a clean system folder.

5. Import your Connections. When SQL Developer opens you can right click on the Connections link and select Import Connections (see image above). Then enter the name and location of the file you created in step 1 above.

 

After completing all of the above steps, the unusual layout you got when opening ODM should now be fixed.

Tuesday, September 24, 2013

Adding Oracle Data Miner to OBIEE

Oracle Data Miner is a very powerful tool that provides advanced machine learning algorithms that are embedded in the Oracle database. By using Oracle Data Miner you do not have to use another tool, from another vendor, to do your data mining. You can do everything in the database, ensuring that the security of your data is maintained and use all the performance functionality that comes with the database.
To add to the advanced insights that you can get from using ODM, you can combine ODM with your OBIEE dashboards to gain a deeper level of insight into your data. This is the combination of data mining techniques and visualization techniques.
The purpose of this blog post is to show you the steps involved in adding an ODM model to your OBIEE dashboards. Lots of people have been asking for the details of how to do it, so here it is.
The following example is based on a presentation that I have given a few times (OUG Ireland, UKOUG, OOW) with Antony Heljula.
1. Export & Import the ODM model
If your data mining analysis and development was completed in a different database to where your OBIEE data resides, then you will need to move the ODM model from the ODM/development database to the OBIEE database.
ODM provides two PL/SQL procedures to allow you to easily move your ODM model. These procedures are part of the DBMS_DATA_MINING package. To export a model you use the DBMS_DATA_MINING.EXPORT_MODEL procedure. Similarly, to import your (exported) ODM model you use the DBMS_DATA_MINING.IMPORT_MODEL procedure.
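As a hedged sketch of what these calls look like (the directory object is assumed to exist and be accessible to the schema, the export may append a sequence number to the file name, and both procedures have further optional parameters, so check the DBMS_DATA_MINING documentation for the details):

-- On the development database: export the model to a dump file.
BEGIN
  dbms_data_mining.export_model(
    filename     => 'clas_decision_tree.dmp',
    directory    => 'DATA_PUMP_DIR',
    model_filter => 'name = ''CLAS_DECISION_TREE''');
END;
/

-- On the OBIEE database: import the model from the copied dump file.
BEGIN
  dbms_data_mining.import_model(
    filename     => 'clas_decision_tree.dmp',
    directory    => 'DATA_PUMP_DIR',
    model_filter => 'name = ''CLAS_DECISION_TREE''');
END;
/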
2. Create a view that uses the ODM model
You can create a view that uses the PREDICTION and PREDICTION_PROBABILITY functions to apply the imported ODM model to your data. For example, the following view is used to score our customer data, making a prediction of whether each customer is going to churn and giving the probability that this prediction is correct.
-- Example view name; the model is the CLAS_DECISION_TREE imported above.
CREATE OR REPLACE VIEW customer_withdraw_v AS
SELECT st_pk,
       prediction(clas_decision_tree using *) WITHDRAW_PREDICTION,
       prediction_probability(clas_decision_tree using *) WITHDRAW_PROBABILITY
FROM   CUSTOMER_DATA;

clip_image002
3. Import the view into the Physical layer of the BI Repository (RPD)
The view was then imported into the Physical layer of the BI Repository (RPD), where it was joined on primary key to the other customer tables (we had one record per customer in the view). With the tables joined, we can use the prediction columns to filter the customer data, for example to filter all the customers who are likely to churn (WITHDRAW_PREDICTION = ‘N’).
clip_image002[11]
clip_image002[13]
4. Add the new columns to the Business Model layer
The new prediction columns were then mapped into the Business Model layer, where they could be incorporated into various relevant calculations, e.g. % Withdrawals Predicted, and then subsequently presented to the end users for reporting.
clip_image002[9]
5. Add to your Dashboards
The Withdraw prediction columns could then be published on the BI Dashboards, where they could be used to filter the data content. In the example below, the user has chosen to show data only for those customers who are predicted to Withdraw with a probability rating of >70%.
clip_image002[5]

Thursday, September 19, 2013

Nested Tables (and Data) in Oracle & ODM

Oracle Data Mining uses nested data types/tables to store some of its data. Oracle Data Mining creates a number of tables/objects that contain nested data when it is preparing data for input to the data mining algorithms and when outputting certain results from the algorithms. In Oracle 11.2g there are two nested data types used, and in Oracle 12.1c we get an additional two. These are set up when you install the Oracle Data Miner Repository. If you log into SQL*Plus or SQL Developer you can describe them like any other table or object.

DM_NESTED_NUMERICALS

image

DM_NESTED_CATEGORICALS

image

The following two Nested data types are only available in 12.1c

DM_NESTED_BINARY_DOUBLES

image

DM_NESTED_BINARY_FLOATS

image
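For reference, describing the two 11.2g types gives something like the following (a sketch, so run the describes in your own schema to confirm). Each type is simply a nested table of attribute name/value pairs, and the 12.1c additions are identical except that the value column is a BINARY_DOUBLE or BINARY_FLOAT.

SQL> desc DM_NESTED_NUMERICALS
DM_NESTED_NUMERICALS TABLE OF DM_NESTED_NUMERICAL
  Name            Type
  --------------  --------------
  ATTRIBUTE_NAME  VARCHAR2(4000)
  VALUE           NUMBER

SQL> desc DM_NESTED_CATEGORICALS
DM_NESTED_CATEGORICALS TABLE OF DM_NESTED_CATEGORICAL
  Name            Type
  --------------  --------------
  ATTRIBUTE_NAME  VARCHAR2(4000)
  VALUE           VARCHAR2(4000)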

These nested data types are used by Oracle Data Miner in preparing data for input to the data mining algorithms and for producing some of the outputs from the algorithms.

Creating your own Nested Tables

To create your own nested data types and nested tables you need to perform steps similar to those illustrated below. These steps show you how to define a data type, create a nested table, insert data into the nested table and select the data from the nested table.

1. Set up the Object Type

Create a Type object that defines the structure of the data. In these examples we want to capture the products and quantity purchased by a customer.

create type CUST_ORDER as object
(product_id     varchar2(6),
quantity_sold  number(6));
/

2. Create a Type as a Table

Now you need to create a Type as a table.

create type cust_orders_type as table of CUST_ORDER;
/

3. Create the table using the Nested Data

Now you can create the nested table.

create table customer_orders_nested (
cust_id       number(6) primary key,
order_date    date,
sales_person  varchar2(30),
c_order       CUST_ORDERS_TYPE)
NESTED TABLE c_order STORE AS c_order_table;

4. Insert a Record and Query

This insert statement shows you how to insert one record into the nested column.

insert into customer_orders_nested
values (1, sysdate, 'BT', CUST_ORDERS_TYPE(cust_order('P1', 2)) );

When we select the data from the table we get

select * from customer_orders_nested;

   CUST_ID ORDER_DAT SALES_PERSON
---------- --------- ------------------------------
C_ORDER(PRODUCT_ID, QUANTITY_SOLD)
-----------------------------------------------------
         1 19-SEP-13 BT
CUST_ORDERS_TYPE(CUST_ORDER('P1', 2))

It can be a bit difficult to read the data in the nested column so we can convert the nested column into a table to display the results in a better way

select cust_id, order_date, sales_person, product_id, quantity_sold
from customer_orders_nested, table(c_order);


   CUST_ID ORDER_DAT SALES_PERSON                   PRODUC QUANTITY_SOLD
---------- --------- ------------------------------ ------ -------------
         1 19-SEP-13 BT                             P1                 2


5. Insert many Nested Data items & Query

To insert many entries into the nested column you can do this

insert into customer_orders_nested
values (2, sysdate, 'BT2', CUST_ORDERS_TYPE(CUST_ORDER('P2', 2), CUST_ORDER('P3',3)));

When we do a Select * we get

   CUST_ID ORDER_DAT SALES_PERSON
---------- --------- ------------------------------
C_ORDER(PRODUCT_ID, QUANTITY_SOLD)
-------------------------------------------------------------
         1 19-SEP-13 BT
CUST_ORDERS_TYPE(CUST_ORDER('P1', 2))

         2 19-SEP-13 BT2
CUST_ORDERS_TYPE(CUST_ORDER('P2', 2), CUST_ORDER('P3', 3))

Again it is not easy to read the data in the nested column, so if we convert it to a table we now get a row displayed for each entry in the nested column.

select cust_id, order_date, sales_person, product_id, quantity_sold
from customer_orders_nested, table(c_order);

   CUST_ID ORDER_DAT SALES_PERSON                   PRODUC QUANTITY_SOLD
---------- --------- ------------------------------ ------ -------------
         1 19-SEP-13 BT                             P1                 2
         2 19-SEP-13 BT2                            P2                 2
         2 19-SEP-13 BT2                            P3                 3

Friday, September 13, 2013

Upgrading ODMr and SQL Dev for EA2

The Early Adopter 2 release of Oracle SQL Developer was made available yesterday (Thursday 12th Sept). To install this new version of the tool, including Oracle Data Miner, you can follow the instructions below.

  • Go to the EA2 download page and download the EA2 release
  • Unzip the EA2 download
  • Create a new shortcut that point to the sqldeveloper.exe
  • Start SQL Developer EA2
  • You will be prompted for the location of the Java JDK. On my VM it was C:\Program Files\Java\jdk1.7.0_25.
  • Next you are prompted about importing your settings from the previous version. Select Yes.
  • After the settings have been imported, SQL Developer will open and you are now able to enjoy it.

Oracle Data Miner

  • For Oracle Data Miner you need to make the option visible by selecting Tools->Data Miner->Make Visible. This will open the ODM connection tabs along with a couple of others. I’m running the following on a 12.1c database.
  • Open one of your ODM connections by double clicking on it.
  • ODM will check the version of the ODM repository in the DB. You will be prompted to upgrade the ODM repository to the latest version. Click on the Yes button.

image

  • Enter the SYS password, or talk nicely to your DBA.

image

  • Then click the Start button to start the ODM repository upgrade.

image

  • This will take anything from a minute to 10 minutes, depending on the location of the DB and your network.
  • When everything is finished you can close the window and start using Oracle Data Miner by opening an existing workflow or by creating a new one.

Tuesday, August 20, 2013

Schema Table Filtering for Oracle Data Miner

If you have been using Oracle Data Miner, which is part of SQL Developer 4 and SQL Developer 3, you will notice that your schema can get filled up with various tables that are created by your workflows. The following image gives an example.

image

These tables can include details of the various algorithms used and their settings, sample tables that were created by the various nodes, etc. Basically they contain all the information that was set up by each node. Not every node in your workflow will create a table, but a lot do, in particular if you have set the Cache or Sample options in the Properties tab.

In most cases you do not need to be aware of or use most of these tables.

So how do I hide them, so that my schema table listing only shows me the main tables in my schema? By main tables, I mean the tables that you would expect to have in your schema before you started using Oracle Data Miner.

The answer to this question is to apply filters to your tables in SQL Developer. To do this go to your schema in the Connections tab. Expand to get the full list of schema objects and then right click on Tables. You should get a menu like the following.

image

Select Apply Filter from the menu and the Apply Filter window will open. Here you can create filters to apply to the tables in your schema.

To filter out the Oracle Data Miner related tables you will need to exclude tables that begin with DM$ and ODMR$. The following image shows these filters.

image

When these filters are applied we only get our schema tables.

image

There are two additional filters you may want to consider. The first of these is for the tables that begin with OUTPUT. These are tables that are created when a node sends the output from running a model to a table, or some other scenario where the output is sent to a table. In reality this is bad naming and we should use a name that is more meaningful and reflects the contents of the table. But sometimes you just want to spool the outputs to a table and the name is not important. I have an additional filter to hide these tables (see below).

With SQL Developer 4, Oracle Data Miner seems to generate IOTs, as we can see in the above image. Again another filter can be created to exclude these from the list.

Here is the full list of filters.

image
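If you want the same tidy listing outside of SQL Developer, a rough SQL equivalent of these filters (adjust the patterns to suit your own schema) is:

SELECT table_name
FROM   user_tables
WHERE  table_name NOT LIKE 'DM$%'
AND    table_name NOT LIKE 'ODMR$%'
AND    table_name NOT LIKE 'OUTPUT%'
AND    table_name NOT LIKE 'SYS_IOT%'
ORDER  BY table_name;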

Tuesday, August 6, 2013

Deprecated ODM features in 12c

With the release of the Oracle 12c Database there have been some changes to the Advanced Analytics option. Most of the new features, both in the database and in the ODM tool, have been documented in previous blog posts.
But what has been removed from the Advanced Analytics option, and what is no longer supported?
The first of these is the Java API that Oracle supplied many, many years ago. They have been saying for a few years now, since the release of 11.2g, that these Java APIs are no longer supported. The documentation states this, and the demo scripts are not included in the latest SQL Developer 4. Instead of using the Java APIs you can use the in-database SQL functions and procedures.
One of the in-database DM algorithms was the Adaptive Bayes Network (ABN). Although this was de-supported in 11.2g of the database, it was still in the database. This was to give customers who were still using it time to migrate to the other algorithms. In 12c the ABN algorithm is no longer in the database. Before you upgrade your 11.2.x Oracle database to 12c, you will need to drop any ABN models that you have in your database.

Thursday, July 25, 2013

12c New Data Mining functions

With the release of Oracle 12c we get new functions/procedures and some updated ones for Oracle Data Miner that is part of the Advanced Analytics option.

The following are the new functions/procedures and the functions/procedures that have been updated in 12c, with a link to the 12c Documentation that explains what they do.

  • CLUSTER_DETAILS is a new function that predicts cluster membership for each row. It can use a pre-defined clustering model or perform dynamic clustering. The function returns an XML string that describes the predicted cluster or a specified cluster.

  • CLUSTER_DISTANCE is a new function that predicts cluster membership for each row. It can use a pre-defined clustering model or perform dynamic clustering. The function returns the raw distance between each row and the centroid of either the predicted cluster or a specified cluster.

  • CLUSTER_ID has been enhanced so that it can either use a pre-defined clustering model or perform dynamic clustering.

  • CLUSTER_PROBABILITY has been enhanced so that it can either use a pre-defined clustering model or perform dynamic clustering. The data type of the return value has been changed from NUMBER to BINARY_DOUBLE.

  • CLUSTER_SET has been enhanced so that it can either use a pre-defined clustering model or perform dynamic clustering. The data type of the returned probability has been changed from NUMBER to BINARY_DOUBLE.

  • FEATURE_DETAILS is a new function that predicts feature matches for each row. It can use a pre-defined feature extraction model or perform dynamic feature extraction. The function returns an XML string that describes the predicted feature or a specified feature.

  • FEATURE_ID has been enhanced so that it can either use a pre-defined feature extraction model or perform dynamic feature extraction.

  • FEATURE_SET has been enhanced so that it can either use a pre-defined feature extraction model or perform dynamic feature extraction. The data type of the returned probability has been changed from NUMBER to BINARY_DOUBLE.

  • FEATURE_VALUE has been enhanced so that it can either use a pre-defined feature extraction model or perform dynamic feature extraction. The data type of the return value has been changed from NUMBER to BINARY_DOUBLE.

  • PREDICTION has been enhanced so that it can either use a pre-defined predictive model or perform dynamic prediction.

  • PREDICTION_BOUNDS now returns the upper and lower bounds of the prediction as the BINARY_DOUBLE data type. It previously returned these values as the NUMBER data type.

  • PREDICTION_COST has been enhanced so that it can either use a pre-defined predictive model or perform dynamic prediction. The data type of the returned cost has been changed from NUMBER to BINARY_DOUBLE.

  • PREDICTION_DETAILS has been enhanced so that it can either use a pre-defined predictive model or perform dynamic prediction.

  • PREDICTION_PROBABILITY has been enhanced so that it can either use a pre-defined predictive model or perform dynamic prediction. The data type of the returned probability has been changed from NUMBER to BINARY_DOUBLE.

  • PREDICTION_SET has been enhanced so that it can either use a pre-defined predictive model or perform dynamic prediction. The data type of the returned probability has been changed from NUMBER to BINARY_DOUBLE.
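As a hedged illustration of the dynamic (model-free) usage that several of these functions now support (the view name is borrowed from my earlier examples, so substitute your own data, and check the 12c SQL Language Reference for the full analytic syntax):

-- Dynamic clustering: no pre-built model is needed; the database
-- builds a transient model over the rows and then discards it.
SELECT cust_id,
       CLUSTER_ID(INTO 4 USING *) OVER () cluster_num
FROM   mining_data_apply_v;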

Tuesday, July 23, 2013

Oracle Data Miner New Features (SQL Dev 4)

With the release of the new Oracle 12c Database and SQL Developer 4 we have a range of new Oracle Data Miner features. Some of these are embedded in the database and are only available in 12c. Check out my previous blog post on these new features.

In this blog post I will look at the new Oracle Data Miner features that come with the ODM tool in SQL Developer 4.

The new features of the Oracle Data Miner tool can be grouped into 2 categories. The first category contains the new features that are available to all users of the tool (11.2g and 12c). The second category contains the new features that are only available in 12c. The new features in each of these categories are explained below.

Category 1 – Common new features for 11.2g and 12c Database users

There is a new View Data feature that allows you to drill down to view the customer object and to view nested tables.

A new Graph node allows you to create graphs such as line, bar, scatter and box plots for data at any stage of a workflow. You can specify any of the attributes from the data source for the graphs. You don’t seem to be limited in the number of graphs you can create.

image

A new SQL node. This is a welcome addition, as there have been many times that I’ve needed to write some SQL or PL/SQL to do a specific piece of processing on the data that was not available with the other nodes. There are really 2 important elements to this SQL node. The first is that you can write SQL and PL/SQL code to do whatever processing you want. But you can only apply it to the Data node you are connected to.

image

The second is that you can use it to call ORE code. This allows you to use the power of R, and the extensive range of packages available for it, to expand the analytic functionality available in the database. If there is some particular function that you cannot do in Oracle and it is available in R, you can now embed this function/code as an ORE object in the database. It can then be called using SQL.

WARNING: this particular feature will only work if you have ORE installed on your 11.2.0.3g or 12.1c database
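As a hedged sketch of what such a call looks like (this assumes an R function has already been saved in the ORE script repository under the made-up name 'myRFunc'; see the ORE documentation for the exact interfaces and parameter handling):

-- Run the stored R script against a cursor of rows; the third
-- argument is a query that just describes the shape of the output.
SELECT *
FROM   table(rqTableEval(
         cursor(SELECT * FROM mining_data_apply_v),  -- input data
         NULL,                                       -- no parameter cursor
         'SELECT 1 cust_id, 1 score FROM dual',      -- output format
         'myRFunc'));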

New Model Build node features include node-level text specifications for text transformations, display of the heuristic rules responsible for excluding predictor columns, and the ability to control the amount of classification and regression test results that are generated. I’ll be covering these in later blog posts.

New Workflow SQL Script Deployment features. Up to now I found the workflow SQL script to be of limited use. The development team have put a lot of work into generating a proper script that can be used by developers and DBAs. But there are still some limitations. The script will run the workflow automatically in the database without having to use the ODM tool. But it can only be run in the schema in which the workflow was generated. You will still have to do a lot of coding (although a lot less than you used to) to get your ODM models and workflows to run in another schema or database.

image

This will output the script to a file buried deep somewhere inside your SQL Developer directory. Unfortunately in the EA1 release the size of this location field is small and scrolling has not been enabled, so you cannot (currently) scroll to the end of the field to see the actual location. You can edit this location to use a different, shorter location.

image

Maybe this will be fixed for the official release.

Category 2 – New features for 12c Database users.

Now for the new features that are only visible when you are running ODM / SQL Dev 4 against a 12c database. No configuration changes are needed. The ODM tool checks what version of the database you are logging into and will then present the available features based on that version.

New Predictive Query nodes allow you to build nodes based on the new transient-model feature in 12c called Predictive Queries (PQs). In SQL Developer we get 3 additional types of Predictive Queries. These can be used for Anomaly Detection, Clustering and Feature Extraction.

image

It is important to remember that the underlying models produced by these PQs do not persist in the database after the query has executed. The model is created, used on the data and then deleted.

The Clustering node has the new algorithm Expectation Maximization in addition to the existing algorithms of K-Means and O-Cluster.

image

The Feature Extraction node has the new algorithm called Principal Component Analysis in addition to the existing Non-Negative Matrix Factorization algorithm.

image

Text Transformations are now built into the model build nodes. These text transformations will be part of the Automatic Data Processing steps for the model build nodes. This is illustrated in the above images.

The Generalized Linear Model that is part of the Classification node has a Feature Selection option in the Algorithm Settings. The default setting is Ridge Regression, but now there is an additional option of using Feature Selection (a SQL sketch of this setting follows the image below).

image
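If you are building the equivalent GLM model in SQL rather than in the tool, this corresponds to a model setting. A hedged sketch (the setting names are from the 12c DBMS_DATA_MINING documentation as best I recall, and the table name is made up, so verify before relying on it):

-- Settings table enabling GLM feature selection instead of the
-- default ridge regression.
CREATE TABLE glm_settings (
  setting_name  VARCHAR2(30),
  setting_value VARCHAR2(4000));

INSERT INTO glm_settings VALUES ('ALGO_NAME', 'ALGO_GENERALIZED_LINEAR_MODEL');
INSERT INTO glm_settings VALUES ('GLMS_FTR_SELECTION', 'GLMS_FTR_SELECTION_ENABLE');
COMMIT;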

Prediction Result Explanations give the scoring details used to explain why the prediction was made.

 

Look out for blog posts on each of these new features.

Thursday, July 18, 2013

Upgrading your ODM Repository for SQL Dev 4

For those of you using Oracle Data Miner (ODM), which is part of SQL Developer: now that Oracle has finally released SQL Developer 4 you might want to upgrade to this new release. There are a lot of new features. Some of these are available for 11.2g and 12.1c databases and some are only available for 12.1c users.

I will have another blog post soon on the new Oracle Data Miner (ODM) features that are available in SQL Developer 4.

The instructions given below are what I did to upgrade so that I could use the new ODM tool/SQL Developer 4.

Step 1 – Install SQL Developer 4 : I have another blog post on what this involves, so check it out and complete those steps before you continue with the rest of the steps below.

Step 2 – Make ODM Visible : After SQL Developer 4 opens you should see all your migrated connections. To make ODM visible you need to click on the Tools menu, select Oracle Data Miner and then Make Visible. This will open a number of tabs on the left hand side of SQL Developer. These will include Data Miner (connections), Workflow Structure and Workflow Jobs.

image

Step 3 – Open an ODM Connection : Take one of your ODM connections and double click on it. SQL Developer 4 / ODM will check what version of the ODM repository exists in your database. If this is your first time connecting from SQL Developer 4, you will be told that you need to upgrade your repository.

SNAGHTML19755c5

Step 4 – Upgrade the ODM Repository : Select the Yes button on the Upgrade Repository window. You will then be asked for the SYS password. If you do not have access to this you can talk nicely to your DBA and ask them to enter the password for you.

SNAGHTML198e42e

You may or may not get a warning message like the following. Just click OK to continue.

SNAGHTML199f5cb

Step 5 – Start the Repository Upgrade : When the Migrate Data Miner Repository window opens, just click the Start button.

SNAGHTML19b0a35

This might be a good time to go off and make yourself a coffee. The upgrade process took approx. 8 minutes on my laptop. If you were running this on a server located somewhere else then the script might take a little bit longer to run!

The progress bar will let you know how things are progressing. It also gives some messages to let you know what stage of the process it is at.

SNAGHTML19f591f

Step 6 – All finished : When the Repository Migration has finished you will get a window with a message saying Task Successfully Complete. Click on the Close button to close this window.

SNAGHTML1a0ffe9

Step 7 – Open an Existing Workflow : Just to make sure that everything has worked with the install and ODM Repository migration, open one of your existing workflows. If it opens then everything should be OK.

When you open the workflow, the new Workflow Editor tab opens on the right hand side of SQL Developer. This seems to have replaced the Component Palette we had with the previous version of the ODM tool. Expand the headings under the Workflow Editor to see the different nodes that are available. Most of these are the same, but we have 2 new nodes under the Data section: Graph and SQL Query. I’ll have more on these in another post or posts.

image