Which three data file formats can be loaded using Oracle Database Actions?

Oracle Database Actions provides a web-based interface with development, data, administration, and monitoring tools, and lets you load or access data from local files, from cloud storage, or from remote databases.

On the Database Actions Data Load page you can choose to load data from a file on your local device, from cloud storage, or from a database. You can also choose to explore the data in your Oracle Autonomous Database. See The Data Load Page for detailed information and the steps for loading data using Database Actions.


(Illustration: the Database Actions Data Load page, database_actions_load_data.png)

See Connect with Built-in Oracle Database Actions for information on accessing Oracle Database Actions.

The values you provide for username and password depend on the Cloud Object Storage service you are using:

  • Oracle Cloud Infrastructure Object Storage: username is your Oracle Cloud Infrastructure user name and password is your Oracle Cloud Infrastructure auth token. See Working with Auth Tokens.

  • Oracle Cloud Infrastructure Object Storage Classic: username is your Oracle Cloud Infrastructure Classic user name and password is your Oracle Cloud Infrastructure Classic password.

This operation stores the credentials in the database in an encrypted format. You can use any name for the credential. This step is required only once, unless your object store credentials change; after you store the credentials you can use the same credential name for all data loads.
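For example, a credential named DEF_CRED_NAME (the name used by the examples below) can be created with the DBMS_CLOUD.CREATE_CREDENTIAL procedure; the username and password values here are placeholders you would replace with your own:

```sql
-- Sketch: store an object-store credential once.
-- DEF_CRED_NAME matches the credential_name used by the
-- COPY_DATA and COPY_COLLECTION examples in this document.
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'DEF_CRED_NAME',
    username        => 'adb_user@example.com',   -- placeholder
    password        => 'auth_token_or_password'  -- placeholder
  );
END;
/
```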

Parent topic: Load Data from Files in the Cloud

Load Data from Text Files

Learn how to load data from text files in the cloud to your Autonomous Database using the DBMS_CLOUD.COPY_DATA procedure.

The source file in this example, channels.txt, has the following data:

S,Direct Sales,Direct
T,Tele Sales,Direct
C,Catalog,Indirect
I,Internet,Indirect
P,Partners,Others

  1. Store your Cloud Object Storage credential using the DBMS_CLOUD.CREATE_CREDENTIAL procedure. See Create Credentials for more details.
  2. Create the table that will contain the data. For example:

    CREATE TABLE CHANNELS
       (channel_id CHAR(1),
        channel_desc VARCHAR2(20),
        channel_class VARCHAR2(20)
       );
    /

  3. Load data into the table using the procedure DBMS_CLOUD.COPY_DATA. For example:

    BEGIN
     DBMS_CLOUD.COPY_DATA(
        table_name =>'CHANNELS',
        credential_name =>'DEF_CRED_NAME',
        file_uri_list =>'https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/idthydc0kinr/mybucket/channels.txt',
        format => json_object('delimiter' value ',')
     );
    END;
    /

    The parameters are:

    • table_name: is the target table’s name.

    • credential_name: is the name of the credential created in the previous step.

    • file_uri_list: is a comma-delimited list of the source files you want to load.

      In this example, file_uri_list is an Oracle Cloud Infrastructure Swift URI that specifies the channels.txt file in the mybucket bucket in the us-phoenix-1 region. (idthydc0kinr is the object storage namespace in which the bucket resides.) For information about the supported URI formats, see Cloud Object Storage URI Formats.

    • format: defines the options you specify to describe the format of the source file. For information about the format options you can specify, see Format Parameter.

    For more detailed information, see COPY_DATA Procedure.
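After the load completes, a quick way to confirm the rows arrived is to query the target table (a sketch; the rows returned depend on your source file):

```sql
-- With the sample channels.txt above, this returns the five channel rows.
SELECT channel_id, channel_desc, channel_class
  FROM CHANNELS
 ORDER BY channel_id;
```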

Parent topic: Load Data from Files in the Cloud

Load a JSON File of Delimited Documents into a Collection

Learn how to load a JSON file of delimited documents into a collection in your Autonomous Database using the DBMS_CLOUD.COPY_COLLECTION procedure.

This example loads JSON values from a line-delimited file and uses the JSON file myCollection.json. Each value, that is, each line, is loaded into a collection on your Autonomous Database as a single document.

Here is an example of such a file. It has three lines, with one object per line. Each of those objects gets loaded as a separate JSON document.

{ "name" : "apple", "count": 20 }
{ "name" : "orange", "count": 42 }
{ "name" : "pear", "count": 10 }

Procedure:

  1. Store your Cloud Object Storage credential using the DBMS_CLOUD.CREATE_CREDENTIAL procedure. See Create Credentials for more details.
  2. Load data into a collection using the procedure DBMS_CLOUD.COPY_COLLECTION. For example:

    BEGIN
      DBMS_CLOUD.COPY_COLLECTION(
        collection_name =>'fruit',
        credential_name =>'DEF_CRED_NAME',
        file_uri_list =>'https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-string/b/fruit_bucket/o/myCollection.json',
        format => json_object('recorddelimiter' value '''\n''')
      );
    END;
    /

    The parameters are:

    • collection_name: is the target collection’s name.

    • credential_name: is the name of the credential created in the previous step.

    • file_uri_list: is a comma-delimited list of the source files you want to load.

      In this example, file_uri_list is an Oracle Cloud Infrastructure Object Storage URI that specifies the myCollection.json file in the fruit_bucket bucket in the us-ashburn-1 region. For information about the supported URI formats, see Cloud Object Storage URI Formats.

    • format: defines the options you specify to describe the format of the source file. Only the format options supported for loading JSON data can be used here; specifying any other format option results in an error. For information about the format options you can specify, see Format Parameter.

    For more detailed information, see COPY_COLLECTION Procedure.
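Once loaded, the documents can be inspected from SQL. This sketch assumes the collection's backing table is named fruit and stores each document in a JSON_DOCUMENT column, the usual layout for SODA collections on Autonomous Database; check your own collection's metadata if the names differ:

```sql
-- Inspect the loaded documents (assumes the default SODA table layout).
SELECT json_serialize(JSON_DOCUMENT) AS doc
  FROM "fruit";
```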

Parent topic: Load Data from Files in the Cloud

Load an Array of JSON Documents into a Collection

Learn how to load an array of JSON documents into a collection in your Autonomous Database using the DBMS_CLOUD.COPY_COLLECTION procedure.

This example uses the JSON file fruit_array.json. The following shows the contents of the file fruit_array.json:

[{"name" : "apple", "count": 20 },
 {"name" : "orange", "count": 42 },
 {"name" : "pear", "count": 10 }]

Procedure:

  1. Store your Cloud Object Storage credential using the DBMS_CLOUD.CREATE_CREDENTIAL procedure. See Create Credentials for more details.
  2. Load data into a collection using the procedure DBMS_CLOUD.COPY_COLLECTION. For example:

    BEGIN
      DBMS_CLOUD.COPY_COLLECTION(
        collection_name => 'fruits',
        credential_name => 'DEF_CRED_NAME',
        file_uri_list => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-string/b/json/o/fruit_array.json',
        format => '{"recorddelimiter" : "0x''01''", "unpackarrays" : "TRUE", "maxdocsize" : "10240000"}'
      );
    END;
    /

    In this example, you load a single JSON value which occupies the whole file, so there is no need to specify a record delimiter. To indicate that there is no record delimiter, you can use a character that does not occur in the input file. For example, you can use the value 0x''01'' because this character does not occur directly in JSON text.

    When the unpackarrays parameter of the format value is set to TRUE, the array of documents is loaded as individual documents rather than as an entire array. The unpacking of array elements is, however, limited to a single level; if there are nested arrays in the documents, those arrays are not unpacked.

    The parameters are:

    • collection_name: is the target collection’s name.

    • credential_name: is the name of the credential created in the previous step.

    • file_uri_list: is a comma-delimited list of the source files you want to load.

      In this example, file_uri_list is an Oracle Cloud Infrastructure Object Storage URI that specifies the fruit_array.json file in the json bucket in the us-ashburn-1 region. For information about the supported URI formats, see Cloud Object Storage URI Formats.

    • format: defines the options you specify to describe the format of the source file. Only the format options supported for loading JSON data can be used here; specifying any other format option results in an error. For information about the format options you can specify, see Format Parameter.

    Loading fruit_array.json with DBMS_CLOUD.COPY_COLLECTION using the format option unpackarrays makes the procedure recognize array values in the source. Therefore, instead of loading the data as a single document, as it would by default, the data is loaded into the collection fruits with each value in the array as a single document.

    For more detailed information, see COPY_COLLECTION Procedure.

Parent topic: Load Data from Files in the Cloud

Monitor and Troubleshoot Data Loading

All data load operations done using the PL/SQL package DBMS_CLOUD are logged in the tables dba_load_operations and user_load_operations:

  • dba_load_operations: shows all load operations.

  • user_load_operations: shows the load operations in your schema.

Query these tables to see information about ongoing and completed data loads. For example, the following SELECT statement with a WHERE clause predicate on the TYPE column shows load operations with the type COPY:

SELECT table_name, owner_name, type, status, start_time, update_time, logfile_table, badfile_table
   FROM user_load_operations WHERE type = 'COPY';

TABLE_NAME OWNER_NAME TYPE STATUS    START_TIME                          UPDATE_TIME                         LOGFILE_TABLE BADFILE_TABLE
---------- ---------- ---- --------- ----------------------------------- ----------------------------------- ------------- -------------
CHANNELS   SH         COPY COMPLETED 04-MAR-21 07.38.30.522711000 AM GMT 04-MAR-21 07.38.30.522711000 AM GMT COPY$1_LOG    COPY$1_BAD

The LOGFILE_TABLE column shows the name of the table you can query to look at the log of a load operation. For example, the following query shows the log of the load operation:

select * from COPY$21_LOG;

The BADFILE_TABLE column shows the name of the table you can query to look at the rows that got errors during loading. For example, the following query shows the rejected records for the load operation:

select * from COPY$21_BAD;
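When the log and bad tables are no longer needed, the DBMS_CLOUD.DELETE_ALL_OPERATIONS procedure clears the logged operations for your schema; a sketch, where the type filter shown is optional:

```sql
-- Remove logged data load operations of type COPY for the current schema.
BEGIN
  DBMS_CLOUD.DELETE_ALL_OPERATIONS(type => 'COPY');
END;
/
```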

What types of files are included in the Oracle Database?

Tablespaces and Tablespace Datafiles. Oracle uses a tablespace to house the following kinds of structures: database object structures (tables, indexes, packages, procedures, triggers, and so on), rollback segments, and temporary sort segments.

What are the 3 physical components of the Oracle Database?

Three basic components are required for the recovery of Oracle Database:

  • Data files
  • Redo logs
  • Control files

What three typical data types models are covered by Oracle converged database?

Unlike single-purpose databases, Oracle's converged database supports JSON, XML, relational, spatial, graph, IoT, text, and blockchain data with full joins, transactions, and other critical SQL features enterprises rely on (Figure 2).

Which 4 file formats are supported when loading data from cloud storage?

The source data can be in any of the following formats: Avro, comma-separated values (CSV), and JSON (newline-delimited).
