DBD::Teradata

Perl DBI Driver for Teradata®

 

Description

Perl DBI driver for Teradata.

NOTE: This site describes the GPL (formerly commercial) version of DBD::Teradata. The CPAN version provides a limited subset of the GPL version's features:

Feature                        12.001 (GPL)    1.50 (CPAN)
CLI Adapter                    Optional        Mandatory
Pure Perl Capable              Yes             No
Array Binding                  Yes             No
Fastload                       Yes             No
Multiload                      Yes             No
Fastexport                     Yes             No
PM/API                         Yes             No
Remote Console                 Yes             No
BIGINT/DECIMAL(38) Support     Yes             No

As of version 12.001, DBD::Teradata is freely provided under the terms of the GNU General Public License version 2. Refer to the LICENSE section below for details.

New Features/Enhancements for Version 12.001

Support for BIGINT and large precision DECIMAL

BIGINT columns and parameters are now supported, as are DECIMAL precisions up to 38.

New handle attributes

The following attributes have been added:
  • tdat_has_bignum connection handle metadata attribute, which indicates whether the connection supports BIGINT and large precision DECIMAL.

  NOTE: Updatable cursor support has been deprecated for Teradata database versions R6.2 or higher.

    Prerequisites

    The following Perl modules or runtime libraries are required to use DBD::Teradata:
    • Perl 5.8 (5.8.6+ preferred)
    • DBI 1.42 (1.52+ preferred)
    • For encrypted logon support:

    • For CLI Adapter use:

      • the appropriate Teradata CLI bundle for your platform; the CLI bundle for most platforms is now freely available.

    Installation

    Unix/Linux Installation

    cd to wherever you've unbundled the DBD-Teradata-XXX.tar.gz file, and then cd to the DBD-Teradata-XXX directory. Run the usual Perl package installation process, namely
    perl Makefile.PL
    make
    make install
    
    Note that the current test suite is rather large and has numerous dependencies, and thus there is currently no "make test". Refer to the Testing section for testing details.

    If you wish to install the CLI adapter, cd to '../DBD-Teradata-Cli-XXX' from the current DBD::Teradata install directory.

    The following libraries are used by DBD::Teradata::Cli:

    • libcliv2.so
    • libtdusr.so
    • libtdgss.so

    The following header files are required to build DBD::Teradata::Cli:

    • parcel.h
    • dbcarea.h
    • coperr.h
    • coptypes.h

    The following files are used by the CLI library for initialization and reference:

    • errmsg.cat
    • clispb.dat

    Additionally, if using TTU 8.0, the tdicu package must be installed, and the TD_ICU_DATA environment variable properly defined.

    NOTE: In order to use the CLI adapter, the various TTU libraries must be installed in the standard location (usually /usr/lib), or LD_LIBRARY_PATH must include the location of the libtdgss.so library.

    You'll need a compiler (preferably, though not necessarily, the same compiler Perl was built with). If you've installed the CLI libraries and/or header files in a non-standard location, or are using "special" versions (e.g., 64-bit), which are usually installed to a /usr/lib subdirectory, then prior to installation you can create TDAT_DBD_CLI_LIB and TDAT_DBD_CLI_INC environment variables pointing to the full path where the libraries and headers are located. Alternatively (and preferably), you can create symbolic links in the /usr/lib and /usr/include directories pointing to the alternate installation locations.

    Once again, run the usual Perl package installation process, namely

    perl Makefile.PL
    make
    make install
    
    There is no test suite exclusively for the CLI adapter at present. The DBD::Teradata test.pl script will by default use DBD::Teradata::Cli if it is installed, unless the -l 0 command line option is specified (run perl t/test.pl -h from the DBD-Teradata-XXX directory to see the list of command line options).

    Note that make will generate various messages from the platform compiler/linker as it builds the XS portion of DBD::Teradata::Cli. If header or library files are not found during the compile/link, the install process may fail.

    If you need to install DBD::Teradata::Cli on several platforms with identical (or nearly identical) hardware, operating system, and Perl versions, you can use PPM (available from ActiveState) to create an installable binary distribution of DBD::Teradata::Cli and the generated binaries on the build platform, and then install them via

    ppm install DBD-Teradata-Cli.ppd
    
    on other machines that do not have a compiler installed. The process for building the PPM is described at Creating PPM Packages at the ActiveState website.

    NOTE: This process has not yet been tested by Presicient.

    Also, if you used non-standard locations for your CLI library files on the target platforms, make sure they use the same directory name as the build platform, or have /usr/lib symbolic links created.

    Microsoft® Windows Installation

    cd to wherever you've unbundled the DBD-Teradata-XXX.tar.gz file, and then cd to the DBD-Teradata-XXX directory. Run the usual Perl package installation process, namely
    perl Makefile.PL
    nmake
    nmake install
    
    nmake is a part of the Visual Studio toolset; a free standalone version is available.

    Note that the current test suite is rather large and has numerous dependencies, and thus there is currently no "make test". Refer to the Testing section for testing details.

    If you wish to install the CLI adapter, cd to '../DBD-Teradata-Cli-XXX' from the current DBD::Teradata install directory.

    The Windows platform presents some additional installation challenges. The CLI client libraries and header files must be installed (from the Teradata Call Level Interface Version 2 Developer's Kit for Windows, aka the WinCLI Developer's Kit), along with an appropriate compiler (the same type used to build your copy of Perl, e.g., Visual Studio for ActiveState Perl releases).

    ExtUtils::MakeMaker needs both the *.lib and *.dll files to build the XS portion of DBD::Teradata::Cli. While the Teradata Client installation does copy the DLL files to the %SystemRoot%\SYSTEM32 directory, where most other general purpose DLLs reside, it keeps the *.lib and header files in its own installation directory path. E.g., if you installed the Teradata Client software to C:\Program Files\NCR\Teradata Client, then the *.lib files will be located in C:\Program Files\NCR\Teradata Client\cli\lib, and the header files in C:\Program Files\NCR\Teradata Client\cli\inc.

    To successfully build DBD::Teradata::Cli, you'll need to make sure that the <install-path>\cli\inc directory has been added to your INCLUDE environment variable. In addition, you'll need to create a new environment variable TDAT_DBD_CLI_LIB set to the <install-path>\cli\lib directory prior to nmake'ing DBD::Teradata::Cli. Most importantly, IF YOUR TERADATA CLIENT INSTALL PATH CONTAINS SPACES, TDAT_DBD_CLI_LIB MUST BE SET TO THE ABBREVIATED PATHNAME. You can determine that value using the "dir /X" command. If TDAT_DBD_CLI_LIB is not set, DBD::Teradata::Cli will default to 'C:\\PROGRA~1\\NCR\\TERADA~1\\cli\\lib' (the abbreviated form of 'C:\Program Files\NCR\Teradata Client\cli\lib').

    Also note that 'nmake' will generate various messages from the Windows compiler/linker as it builds the XS portion of DBD::Teradata::Cli. If header or library files are not found during the compile/link, the install process may fail.

    If you need to install DBD::Teradata::Cli on several platforms with identical (or nearly identical) hardware, operating system, and Perl versions, you can use PPM (available from ActiveState) to create an installable binary distribution of DBD::Teradata::Cli, including the XS generated libraries, on the build platform, and then install them via

    ppm install DBD-Teradata-Cli.ppd
    
    on other machines that do not have a compiler installed. The process for building the PPM is described at Creating PPM Packages at the ActiveState website.

    NOTE: This process has not yet been tested by Presicient.

    DBD::Teradata::Cli Restrictions/Limitations

    All the CLI operational considerations described in "Teradata Call-Level Interface Version 2 Reference for Network-Attached Systems" apply.

    Presicient recommends using the latest TTU 12 CLI libraries, regardless of the version of Teradata server to which you will be connecting.

    CLI is *not* threadsafe on Unix platforms prior to TTU 8.0.

    There appears to be an issue with Linux and TTU 8.0 which causes the CLI adapter to fail at initialization with an ICULOADERROR; TTU 8.1 appears to function properly.

    Rewindable cursors aren't supported, and updatable cursors may have some issues with regard to updating the last bufferful of a resultset. Both should be considered unsupported at present.

    Double buffering is disabled in CLI mode, which may cause some performance degradation for large result sets.

    The connection DSN must use a named server address, as CLI does not support numeric IP address specification. Existing applications which specified a fully qualified "COPn" hostname will be properly modified by DBD::Teradata to trim the COPn suffix before passing the hostname to CLI.
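    The COPn trimming just described can be sketched with a simple substitution. This is an illustration of the documented behavior only, not the driver's actual code; trim_copn is a hypothetical helper:

```perl
use strict;
use warnings;

# Illustration only (not the driver's actual code): strip a "COPn"
# suffix from the first label of a fully qualified hostname before
# it would be handed to CLI.
sub trim_copn {
    my $host = shift;
    $host =~ s/cop\d+(?=\.|$)//i;
    return $host;
}

print trim_copn('tdatcop1'), "\n";              # prints: tdat
print trim_copn('tdatcop2.mydomain.com'), "\n"; # prints: tdat.mydomain.com
```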

    Support for large request sizes appears to be limited to SQL sessions (at least through V2R6.0.x), thus the RequestSize parameter for tdat_UtilitySetup() is currently ignored for CLI based connections.

    Usage Guide

    Data-Source Name

    The dsn string passed to DBI->connect() must be of the following form:

         dbi:Teradata:host[:port][;dsnattrs]
    

    where

    • host is a TCP/IP address in human-readable or (deprecated) dotted-decimal format,
    • port is an optional TCP/IP port number to use (default is 1025, the most common value)
    • dsnattrs is an optional set of connection attributes (See below)

    DBD::Teradata supports the random COPx selection algorithm. To enable it, set an environment variable named for the host, assigned the maximum COP number for that host; e.g., if the server TDAT has 4 gateway addresses, then on Windows the variable would be set as

    set TDAT=4
    
    and on *nix
    export TDAT=4
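
    The DSN grammar above can also be assembled programmatically. The following is a minimal sketch; build_dsn is a hypothetical helper, not part of DBD::Teradata:

```perl
use strict;
use warnings;

# Hypothetical helper (not part of DBD::Teradata): assemble a
# dbi:Teradata DSN from a host, an optional port, and optional
# semicolon-separated name=value attribute pairs.
sub build_dsn {
    my ($host, $port, %attrs) = @_;
    my $dsn = "dbi:Teradata:$host";
    $dsn .= ":$port" if defined $port;
    $dsn .= ";$_=$attrs{$_}" for sort keys %attrs;
    return $dsn;
}

print build_dsn('dbc', 1025, DATABASE => 'foobar', MODE => 'ANSI'), "\n";
# prints: dbi:Teradata:dbc:1025;DATABASE=foobar;MODE=ANSI
```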
    

    DSN Attributes

    The following driver-specific attributes may be specified in the DBI->connect() dsn string, as name=value pairs, appended to the hostname and optional port, and separated by semicolons:

    DSN Attribute   Equivalent connect() Attribute   DSN Format
    ACCOUNT         N/A                              ACCOUNT=accountstring
                    Note: will be appended to the password string, or will
                    replace any existing accountstring in the current
                    password string. Enclosing quotation marks should be omitted.
    CHARSET         tdat_charset                     CHARSET=ASCII | UTF8
    CLI             tdat_no_cli                      CLI=YES | NO | 0 | 1
                    Note: the DSN attribute has the opposite effect of its
                    connect() attribute.
    COMPATIBLE      tdat_compatible                  COMPATIBLE=version
    DATABASE        tdat_database                    DATABASE=name
                    Note: this attribute is only applied to DBC/SQL sessions.
    MODE            tdat_mode                        MODE=ANSI | TERADATA | DEFAULT
    REQBUFSZ        tdat_reqsize                     REQBUFSZ=bytes
    RSPBUFSZ        tdat_respsize                    RSPBUFSZ=bytes

    If both a DSN attribute and its equivalent connect() attribute are present, the DSN attribute overrides the connect() attribute.

    Example:

    my $dbh = DBI->connect('dbi:Teradata:dbc:1025;DATABASE=foobar;CHARSET=UTF8;MODE=ANSI;RSPBUFSZ=1048576', ...)
    
    would logon with a UTF8 character set, in ANSI transaction mode, with a default 1 Meg response buffer size, and set the default database to foobar upon logon.

    Connections, Sessions And Transactions

    Multiple connections to a Teradata database are supported. Please note that

    • Applications using the tdat_UtilitySetup interface specify the number of sessions to use as an attribute to that method.

    • DBI's current support for threading does not support sharing connections between threads. Hence, if you intend to use a connection in a thread, the connection must be created in that thread. For example, rather than creating a connection pool from which any thread can select a random connection, you must create a thread pool, each with its own connection, and select a thread as needed. However, you may find the DBIx::Threaded module useful for passing connections amongst threads.

    • Various other factors may limit the number of connections which may be established by an application (e.g., open file descriptor count limits)

    Connections may be made in either Teradata or ANSI mode. To specify the desired mode, the tdat_mode attribute can be supplied during connect(), set to either 'ANSI', 'TERADATA', or 'DEFAULT'. If no mode is specified, or the mode is set to 'DEFAULT', then the current DBMS default mode will be used. Refer to the Teradata SQL documents for all the differences between ANSI and Teradata behavior.

    RunStartup execution and session reconnection are not supported.

    Teradata account strings can be provided by simply appending a single comma, followed by the single-quoted account string, to the password string, e.g.,

         use DBI;
         my $dbh = DBI->connect(
              "dbi:Teradata:some.host.com",
              "user",
              "passwd,'\$H&Lmyaccount'")
              or die "Cannot connect\n";
         # more DBI calls...
    
    The following features are supported:

    • Multiple open statements on a single connection
    • Updatable cursors
    • Stored procedures
    • transaction-spanning read-only cursors (via tdat_keepresp)
    • Remote Console sessions
    • Fastload
    • EXPORT (aka Fastexport)
    • MLOAD (aka Multiload)
    • MONITOR (aka PM/API)

    In the event of unexpected session disconnection (e.g., network failure, or a session that has been forced off the database), some platforms (notably UNIXen) may receive a SIGPIPE signal. Currently, DBD::Teradata does not catch this signal, and the application may die unexpectedly if it does not catch it.

    Encrypted Logon Performance

    In pure Perl mode, encrypted logons require complex Math::BigInt computations which can take significant time (several seconds) to execute. While Math::BigInt extensions exist which could significantly improve encryption performance, there appear to be library incompatibilities which cause various spurious failures, including failure of socket libraries, and sudden application exit without any error indication. As an alternative, DBD::Teradata implements a weaker form of encryption as the default behavior; however, stronger encryption can be enforced by setting the TDAT_DBD_SLOWNC environment variable to a nonzero value. Be advised that setting this flag will add several seconds to logon times.

    Connection Character Sets

    DBD::Teradata provides UNICODE support via the UTF8 character set. To use UNICODE, an application may explicitly set the tdat_charset connection attribute to 'UTF8' at connect(). If no character set is specified, the Teradata system default will be used; the application can determine it after connection by referencing the $dbh->{tdat_charset} attribute or by calling the $dbh->tdat_CharSet() driver-specific method.

    The current connection character set can be queried using either the $dbh->tdat_CharSet() or $sth->tdat_CharSet() driver-specific methods. The returned value is the current character set string in uppercase, usually either 'ASCII' or 'UTF8'.

    While DBD::Teradata will perform the necessary translations internally, application writers should be aware of the following when using UTF8:

    • Database object names are currently limited to LATIN1 (aka ASCII)

    • DBD::Teradata will process VARTEXT input/output in the same character set as the current connection, for both normal and utility processing. Assuming the ChopBlanks statement attribute is not enabled, the character length of fixed length CHAR fields in VARTEXT output will be the proper fixed length, but the byte length will vary depending on the codepoints in the string.

    • Utility connections generated by tdat_UtilitySetup use the same character set as the "master" connection.

    • The PRECISION attribute for returned data specifies the length in characters for CHAR and VARCHAR data types.

    • Additional statement attributes tdat_CHARSET and tdat_CASESPECIFIC are available on SQL connections to Teradata V2R5.1 or higher for most data returning statements.

    • The precision specified for CHAR and VARCHAR parameter definitions in USING clauses is the length in bytes

    • Any PRECISION attribute supplied when binding CHAR or VARCHAR parameters is interpreted as the length in bytes

    • Fixed length CHAR fields in records returned in raw output mode are stored unaltered in the form returned by the database. Therefore, when the records are later used, applications must be aware of the appropriate byte lengths to apply when

      • decomposing the records into individual fields
      • specifying the precision of a fixed length CHAR parameter definition in USING clauses (including for Loopback Fastload or Multiload jobs, which use rawmode output from a source fastexport job)
      • specifying a PRECISION attribute for a fixed length CHAR parameter binding

      Note that the character-length to byte-length translation applies to both the UNICODE (3 bytes per character) and LATIN (2 bytes per character) character sets.

    • Applications which frequently process fixed length CHAR fields in UTF8 connections are strongly advised to explicitly cast CHAR fields to VARCHAR within the SQL where possible, in order to avoid the extra overhead required to adjust the returned string lengths. This advice is especially important when

      • using raw mode output as input to another application (to simplify the application logic)
      • the returned fields will be predominantly single byte characters (in order to efficiently compress the data transferred between client and server by avoiding the extraneous pad characters required for fixed length CHAR fields).

    • Please review the Perl UNICODE Introduction documentation for guidance on when and how to encode/decode UTF8 strings.

    • Also review the Teradata RDBMS SQL Reference - Data Types and Literals manual, chapter 5, "Character Data Types", section "Teradata Character Strings and Client Physical Bytes" for details on how UTF8 data is transferred between clients and the database system.

    • Also refer to the "Teradata Multinational Character Sets" manual for character set encoding and collation details.
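    The character-to-byte length rule above (3 bytes per character for UNICODE, 2 for LATIN) can be captured in a small helper. char_byte_precision is hypothetical, shown for illustration only:

```perl
use strict;
use warnings;

# Hypothetical helper illustrating the rule above: the byte precision
# to declare for a fixed length CHAR(n) parameter is the character
# length times a per-character factor (UNICODE => 3, LATIN => 2).
my %bytes_per_char = ( UNICODE => 3, LATIN => 2 );

sub char_byte_precision {
    my ($char_len, $charset) = @_;
    my $factor = $bytes_per_char{uc $charset};
    $factor = 1 unless defined $factor;    # assume single-byte otherwise
    return $char_len * $factor;
}

print char_byte_precision(10, 'UNICODE'), "\n";  # prints: 30
print char_byte_precision(10, 'LATIN'), "\n";    # prints: 20
```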

    Data Types

    The following list maps DBI defined data types to their Teradata equivalent (if applicable):

    DBI Data Type                      Teradata Data Type
    SQL_CHAR                           CHAR
    SQL_NUMERIC                        DECIMAL
    SQL_DECIMAL                        DECIMAL
    SQL_INTEGER                        INTEGER
    SQL_SMALLINT                       SMALLINT
    SQL_FLOAT                          FLOAT
    SQL_REAL                           FLOAT
    SQL_DOUBLE                         FLOAT
    SQL_VARCHAR                        VARCHAR
    SQL_DATE                           DATE
    SQL_TIME                           TIME
    SQL_TIMESTAMP                      TIMESTAMP
    SQL_LONGVARCHAR                    LONG VARCHAR
    SQL_BINARY                         BYTE
    SQL_VARBINARY                      VARBYTE
    SQL_LONGVARBINARY                  LONG VARBYTE
    SQL_BIGINT                         N/A
    SQL_TINYINT                        BYTEINT
    SQL_WCHAR                          N/A
    SQL_WVARCHAR                       N/A
    SQL_WLONGVARCHAR                   N/A
    SQL_BIT                            N/A
    SQL_INTERVAL_DAY                   INTERVAL DAY
    SQL_INTERVAL_DAY_TO_HOUR           INTERVAL DAY TO HOUR
    SQL_INTERVAL_DAY_TO_MINUTE         INTERVAL DAY TO MINUTE
    SQL_INTERVAL_DAY_TO_SECOND         INTERVAL DAY TO SECOND
    SQL_INTERVAL_HOUR                  INTERVAL HOUR
    SQL_INTERVAL_HOUR_TO_MINUTE        INTERVAL HOUR TO MINUTE
    SQL_INTERVAL_HOUR_TO_SECOND        INTERVAL HOUR TO SECOND
    SQL_INTERVAL_MINUTE                INTERVAL MINUTE
    SQL_INTERVAL_MINUTE_TO_SECOND      INTERVAL MINUTE TO SECOND
    SQL_INTERVAL_MONTH                 INTERVAL MONTH
    SQL_INTERVAL_SECOND                INTERVAL SECOND
    SQL_INTERVAL_YEAR                  INTERVAL YEAR
    SQL_INTERVAL_YEAR_TO_MONTH         INTERVAL YEAR TO MONTH
    SQL_TYPE_TIMESTAMP_WITH_TIMEZONE   TIMESTAMP WITH TIME ZONE
    SQL_TYPE_TIME_WITH_TIMEZONE        TIME WITH TIME ZONE

    NOTE: Teradata treats TIME, TIMESTAMP, and INTERVAL types externally as CHAR types.

    prepare() Optimization

    Non-data-returning statements on SQL sessions are not fully prepared on the DBMS. Instead, some limited parsing of SQL statements (including multi-statement requests) is performed to determine whether any data-returning statements are included; if there are none, the various metadata structures are synthesized locally and returned immediately (with the exception of REPLACE and CREATE MACRO or PROCEDURE). This optimization reduces the processing burden on the DBMS, and helps reduce associated delays in the client, especially when using $dbh->do().

    This change may impact code that relied on the prepare() for detecting access or syntax errors. Existing code that is adversely impacted can disable this optimization by setting the tdat_compatible database handle attribute to '2.0' or earlier.

    Parameterized SQL

    DBD::Teradata supports both USING clauses and '?' placeholders to implement parameterized SQL; however, they cannot be mixed in the same request. When using '?' placeholders, all parameter datatypes are assumed to be VARCHAR unless explicit datatypes are specified via a bind_param() or bind_param_array() call.

    Bound parameter types and values can be inspected via the ParamTypes, ParamValues, and ParamArrays statement handle attribute (per the DBI specification).

    Note that statements which define parameters via a USING clause may use named placeholders in the bind_param() or bind_param_array() calls, e.g.

    	my $sth = $dbh->prepare('USING (param1 int, param2 char(100))
    		SELECT * FROM myTable
    		WHERE col1 = :param1 AND col2 LIKE :param2');
    
    	$sth->bind_param(':param1', 1234);
    	$sth->bind_param(':param2', 'sdrgsdfgsdfgsdfg');
    	$sth->execute();
    
    Note the inclusion of the leading colon in the parameter names.
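
    The named-placeholder convention can be illustrated by naively extracting the declared names from a USING clause. This is an illustration only, not the driver's actual parser, and the simple regex does not handle every type syntax:

```perl
use strict;
use warnings;

# Illustration only (not the driver's actual parser): extract the
# parameter names declared in a USING clause, in declaration order,
# in the leading-colon form used by bind_param(). The naive [^)]*
# capture stops at the first ')', but the names still come out right
# for simple declarations.
sub using_param_names {
    my $sql = shift;
    return () unless $sql =~ /\bUSING\s*\(([^)]*)\)/is;
    my @names;
    for my $decl (split /,/, $1) {
        push @names, ":$1" if $decl =~ /^\s*(\w+)/;
    }
    return @names;
}

my @names = using_param_names(
    'USING (param1 int, param2 char(100)) SELECT * FROM myTable');
print "@names\n";   # prints: :param1 :param2
```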

    Multi-Statement and MACRO Requests

    Multi-statement and MACRO execution requests are supported (Stored procedures are discussed below).

    Reporting the results of multi-statement and MACRO requests presents additional issues. Refer to the Driver Specific Attributes section below for detailed descriptions of relevant statement handle attributes. The driver behavior is augmented as follows:

    • All DBI statements will have an associated tdat_stmt_info statement handle attribute. (Note that DBI's notion of a statement is equivalent to a Teradata request, which may contain more than one SQL statement. For the purposes of this discussion, the Teradata definitions of request and statement will be used; when referring to DBI's definition of a statement, the term "DBI statement" will be used.) tdat_stmt_info returns an arrayref of hashrefs, with each array entry indexed by its associated statement number within a Teradata request. Please note that the DBMS starts statement numbering at 1, not zero; thus, loop constructs that scan the statement info array should start their index values at 1. The hashref has several keys, described in the following sections and in the Driver-Specific Attributes section.

    • The rowcount value returned by $sth->execute() or $dbh->do() for multi-statement and MACRO requests may not include the results of all statements. The tdat_stmt_info attribute should be inspected to determine actual activity counts and the success/failure of individual statements.

    • For multi-statement and MACRO requests which do not return rows (i.e., do not include SELECT statements), the fetchrow_XXX() statement handle method will always return an undef result. The activity type, activity count, and warning messages of an individual statement can be queried via the ActivityType, ActivityCount and Warning keys in the statement's hashref in the array returned by the tdat_stmt_info attribute. The value returned by $sth->execute() or $dbh->do() will indicate the sum of activity counts of all statements.

    • Multi-statement and MACRO requests which include a single SELECT statement are handled exactly like a single SELECT statement. The value returned by $sth->execute() or $dbh->do() will indicate the sum of activity counts of all statements up to and including the SELECT statement. Note that if non-SELECT statements sequentially follow the SELECT statement, their attributes should not be queried until all SELECTed rows have been fetched, since the results of the succeeding statements are not reported by the DBMS until all the rows have been returned.

    • Multi-statement and MACRO requests which include multiple SELECT statements require special handling when fetching results. The value returned by $sth->execute() or $dbh->do() will indicate the sum of activity counts of all statements up to and including the first SELECT statement. The tdat_stmt_info attributes still apply as for single-SELECT multi-statement or MACRO requests. However, the column (and summary) information for all SELECT statements is included in the NAME, TYPE, PRECISION, SCALE, and NULLABLE DBI statement handle attributes, and each fetched row will include fields for all SELECT statements, but only the fields for the current SELECT statement will be valid. All fields for non-current SELECTs will be set to undef. In order to identify the SELECT statement that a fetchrow_XXX() call is processing:
      • the tdat_stmt_num attribute can be queried to get the current statement number
      • the starting index of column information for the current statement can be retrieved via the key StartsAt in the hashref located at the current statement's index in the array returned by the tdat_stmt_info attribute.
      • the ending index of column information for the current statement can be retrieved via the key EndsAt in the hashref located at the current statement's index in the array returned by the tdat_stmt_info attribute.

    • Generic applications can test the tdat_more_results read-only attribute to determine if a statement handle has additional results to report when any of the fetch functions return undef.

    • Since ANSI mode will continue processing statements after a preceding statement reports an error, the disposition of an individual statement can be queried through the ErrorCode and ErrorMessage keys of the statement's hashref in the tdat_stmt_info array.

    An example of processing multi-SELECT requests:

    
    $sth = $dbh->prepare('SELECT user; SELECT date; SELECT time;');
    $names = $sth->{NAME};
    $types = $sth->{TYPE};
    $precisions = $sth->{PRECISION};
    $scales = $sth->{SCALE};
    $stmt_info = $sth->{tdat_stmt_info};
    
    $sth->execute;
    $currstmt = -1;
    while ($sth->{tdat_more_results}) {
        while (@row = $sth->fetchrow_array) {
            if ($currstmt != $sth->{tdat_stmt_num}) {
                print "\n\n";
                $currstmt = $sth->{tdat_stmt_num};
                $stmthash = $stmt_info->[$currstmt];
                $starts_at = $stmthash->{StartsAt};
                $ends_at = $stmthash->{EndsAt};
                for ($i = $starts_at; $i <= $ends_at; $i++) {
                    print "$$names[$i] ";
                }
                print "\n";
            }
            for ($i = $starts_at; $i <= $ends_at; $i++) {
                print "$row[$i] ";
            }
            print "\n";
        }
    }
    
    

    Summarized SELECT Requests

    Like multi-statement and MACRO requests, reporting the results of summarized SELECT requests requires special processing. Refer to the Driver Specific Attributes section below for detailed descriptions of relevant statement handle attributes. The driver behavior is augmented as follows:

    • Like multi-SELECT statement requests, summarized SELECT statements will include all summary columns in the DBI attribute and row data arrays. The summary columns in the rowdata array returned by fetchrow_XXX() will be set to undef until a summary row is returned by the DBMS.

    • When a summary row is fetched, the IsSummary attribute of the current statement hashref (stored at the current statement number index within the arrayref returned by the tdat_stmt_info statement handle attribute) returns the summary row number of the current statement; for regular rows, it is set to undef.

    • the current statement hashref also includes SummaryStarts and SummaryEnds attributes, which return arrays (indexed by summary row number) of starting and ending indexes, respectively, within the DBI attribute and row data arrays for each summary row (You're probably confused at this point, so review the example below).

    • the current statement hashref includes a SummaryPosition attribute, which returns an arrayref of the column numbers associated with each summary field within the current statement. NOTE: SummaryPosition information is not available until after the execute() method has been called and a summary row has been fetched.

    • the current statement hashref includes a SummaryPosStart attribute, which returns an arrayref, indexed by summary row number, of the starting index within the SummaryPosition array for the current summary row. NOTE: SummaryPosStart information is not available until after the execute() method has been called and a summary row has been fetched.

    An example of processing summarized SELECT:

    
    $sth = $dbh->prepare('SELECT Name FROM Employees WITH AVG(Salary), SUM(Salary)');
    $names = $sth->{NAME};
    $types = $sth->{TYPE};
    $precisions = $sth->{PRECISION};
    $scales = $sth->{SCALE};
    $stmt_info = $sth->{tdat_stmt_info};
    
    $sth->execute();
    $currstmt = -1;
    while (@row = $sth->fetchrow_array()) {
        if ($currstmt != $sth->{tdat_stmt_num}) {
    #
    #    new stmt, get its info
    #
            print "\n\n";
            $currstmt = $sth->{tdat_stmt_num};
            $stmthash = $stmt_info->[$currstmt];
            $starts_at = $stmthash->{StartsAt};
            $ends_at = $stmthash->{EndsAt};
            $sumstarts = $stmthash->{SummaryStarts};
            $sumends = $stmthash->{SummaryEnds};
            for ($i = $starts_at; $i <= $ends_at; $i++) {
                print "$$names[$i] ";    # print the column names
            }
            print "\n";
        }
        $sumrow = $stmthash->{IsSummary};    # summary row number, or undef
        if (defined($sumrow)) {
    #
    #    got a summary row, space it
    #    NOTE: this example uses simple tabs to space summary fields;
    #    in practice, a more rigorous method to precisely align summaries with their
    #    respective columns would be used
    #
            $sumpos = $stmthash->{SummaryPosition};
            $sumposst = $stmthash->{SummaryPosStart};
            print "\n-----------------------------------------------------\n";
            for ($i = $$sumstarts[$sumrow], $j = $$sumposst[$sumrow];
                $i <= $$sumends[$sumrow]; $i++, $j++) {
                print "\t" x $$sumpos[$j];    # tab each column for alignment
                print "$$names[$i]: $row[$i]\n";
            }
        }
        else {
    #
    #    regular row, just print the values
    #
            for ($i = $starts_at; $i <= $ends_at; $i++) {
                print "$row[$i] ";
            }
            print "\n";
        }
    }
    
    

    Using execute_array()

    DBD::Teradata provides support for parameter array binding for SQL sessions. For target database versions prior to V2R6.0, the default implementation simply iterates over the parameter arrays, executing a single SQL request per parameter tuple until all tuples have been consumed.

    For database versions V2R6.0 and higher, the bulk data interface is used to deliver as many parameter tuples as can fit in the currently defined request buffer size (as specified via the tdat_reqsize handle attribute, default 64000 bytes). By using request buffer sizes of up to 1 megabyte, a large number of parameter tuples can be delivered and processed by the database in a single request, yielding a very significant performance improvement.
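
    As a rough illustration of why larger request buffers help, the number of tuples delivered per request is bounded by the buffer size divided by the per-tuple byte size. The 200-byte tuple below is an arbitrary example; real packing also carries per-row overhead:

```perl
use strict;
use warnings;

# Rough illustration: how many parameter tuples fit per request at the
# default (64000 byte) and maximum (1 MB) request buffer sizes.
# The 200-byte tuple size is an arbitrary example; actual packing
# includes per-row overhead, so real counts are somewhat lower.
sub tuples_per_request {
    my ($bufsize, $tuple_bytes) = @_;
    return int($bufsize / $tuple_bytes);
}

printf "default buffer: %d tuples/request\n", tuples_per_request(64000, 200);
printf "1 MB buffer:    %d tuples/request\n", tuples_per_request(1_048_576, 200);
```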

    However, there are some special considerations when using the array binding interface:

    • execute_array() and execute_for_fetch() are not supported for non-SQL sessions, including fastload and multiload. For the latter, use the tdat_UtilitySetup() interface instead.

    • execute_array() and execute_for_fetch() are not currently supported for data-returning (e.g., SELECT) statements.

    • bind_param_array() may be called with named parameters if the associated SQL statement defines parameters via a USING clause.

    • when specifying the ArrayTupleFetch => $sourcesth attribute, the specified source statement handle cannot be from the same DBI connection as the target statement.

    • The behavior of the bulk interface when one or more parameter tuples cause an error depends on the current transaction semantics, AutoCommit setting, and Teradata database version:

      For Teradata versions prior to V2R6.0:

      Teradata mode, AutoCommit ON: Each tuple is individually committed. All supplied tuples are sent to the database (except in the event of connection failure), and each tuple reports either a rowcount or failure indication in the tuple status array.

      Teradata mode, AutoCommit OFF: If any tuple fails, the effects of all prior tuples are rolled back, and no further tuples are sent to the database. The tuple status for all tuples prior to the failing tuple is set to -2 to indicate the effect of the tuple has been rolled back, and the failed tuple's entry will report the error. The failed tuple entry will be the last entry in the tuple status array. The application should explicitly rollback() in this instance.

      ANSI mode, AutoCommit ON: Each tuple is individually committed. All supplied tuples are sent to the database (except in the event of connection failure), and each tuple reports either a rowcount or failure indication in the tuple status array.

      ANSI mode, AutoCommit OFF: All supplied tuples are sent to the database (except in the event of connection failure), and each tuple reports either a rowcount or failure indication in the tuple status array. The application is responsible for explicitly committing after the request has been processed.

      For Teradata versions V2R6.0 and above:

      Teradata mode, AutoCommit ON: If any tuple fails, the effects of some tuples may be rolled back. All tuples are sent to the database. Since multiple tuples are sent in a single request, and execute_array() may require sending multiple requests, if the failure occurs in the Nth request, then the tuples in requests 1 to N-1 will report their successful tuple status, and all tuples in the failing request will report a status of -2, except for the failed tuple, which will specify the failure code. This pattern may continue from request N + 1 until the last request.

      Teradata mode, AutoCommit OFF: If any tuple fails, the effects of all tuples are rolled back, and no further tuples are sent to the database. The tuple status for all tuples prior to the failing tuple is set to -2 to indicate the effect of the tuple has been rolled back, and the failed tuple's entry will report the error. The failed tuple entry will be the last entry in the tuple status array. The application should explicitly rollback() in this instance.

      ANSI mode, AutoCommit ON: If any tuple fails, the effects of some tuples may be rolled back. All tuples are sent to the database. Since multiple tuples are sent in a single request, and execute_array() may require sending multiple requests, if the failure occurs in the Nth request, then the tuples in requests 1 to N-1 will report their successful tuple status, and all tuples in the failing request will report a status of -2 to indicate the effect of the tuple has been rolled back, except for the failed tuple, which will specify the failure code. This pattern may continue from request N + 1 until the last request.

      ANSI mode, AutoCommit OFF: All supplied tuples are sent to the database (except in the event of connection failure), and each tuple reports either a rowcount or failure indication in the tuple status array. The application is responsible for explicitly committing after the request has been processed.

    • Multistatement requests are not supported. (This restriction is imposed by Teradata, not by DBD::Teradata).

    • Explicit parameter binding via bind_param_array() will cause both ParamValues and ParamArrays attributes to be populated with arrayrefs.

    • When using tdat_vartext_in mode to load data from a vartext file, and explicitly binding parameters via bind_param_array(), the ParamValues and ParamArrays attributes will be populated with the split() version of the vartext records, rather than the single vartext input record(s).
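    As a minimal sketch of the bulk interface described above (the DSN, credentials, table name, and column layout are hypothetical examples), parameter arrays can be bound and dispatched in a single execute_array() call:

    ```perl
    use DBI;

    # hypothetical DSN, credentials, and table layout
    my $dbh = DBI->connect('dbi:Teradata:somehost', 'user', 'password',
        { RaiseError => 1 });

    my $sth = $dbh->prepare(
        'USING (col1 INTEGER, col2 VARCHAR(30)) ' .
        'INSERT INTO mytable VALUES(:col1, :col2)');

    # bind whole columns of values at once
    $sth->bind_param_array(1, [ 1, 2, 3 ]);
    $sth->bind_param_array(2, [ 'first', 'second', 'third' ]);

    # each @tuple_status entry is a rowcount, -2 (rolled back), or
    # an arrayref describing the failure for that tuple
    my @tuple_status;
    my $tuples = $sth->execute_array({ ArrayTupleStatus => \@tuple_status });
    print defined($tuples) ? "$tuples tuples processed\n"
                           : "bulk insert failed\n";
    ```

    Inspect @tuple_status after the call to determine per-tuple outcomes, per the mode/AutoCommit behavior tables above.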

    Utility Support

    FASTLOAD, FASTEXPORT, MLOAD, MONITOR and DBCCONS sessions are supported.
    NOTE: These features expose fragile aspects of the DBMS. Applications using this functionality should be rigorously tested against non-production systems before deployment. In addition, DBD::Teradata's implementation of these utilities does not support restartability, as no event logging is performed. In the case of Export, this should be harmless with respect to the source system, as Export is a read-only operation. Likewise, since Fastload can only be applied to empty tables, any failure can be recovered by simply dropping and recreating the target table. Multiload can be recovered via the "RELEASE MLOAD" statement; in fact, DBD::Teradata performs the RELEASE MLOAD internally whenever it encounters a processing error during Mload.

    ETL Utility Interface

    The driver specific tdat_UtilitySetup() connection handle method has been provided to simplify the process of using the load and export utilities from DBD::Teradata. To use the tdat_UtilitySetup() interface:

    1. logon a control session with the tdat_lsn set to zero, and NO tdat_utility specified.

    2. execute the tdat_UtilitySetup driver-specific function on the control session, passing an attribute hash containing the defined attributes.

    3. The value returned is the number of rows loaded/exported if no errors occurred, in which case the errorlog tables will have been DROP'ed. A negative return value indicates the number of errors reported, and the errorlog tables are NOT dropped. Note that the positive return value may be less than the number of rows that were sent; this difference is the number of duplicate or missing rows.

    4. If the return value is negative, select the rowcounts from the errortables via the control session, and optionally report them to the user.

    5. The CheckpointCallback subroutine should accept the following parameters:

      • $function - string indicating the current function ('INIT', 'CHECKPOINT', 'FINISH')

      • $rowcount - number of sessions logged on (at INIT), or running count of rows transferred

      • $ctxt - the Context attribute value (if any) provided in the tdat_UtilitySetup attribute hash

      The CheckpointCallback (if provided) will be called

      • When the tdat_UtilitySetup has successfully logged on and prepared the utility sessions for loading/exporting, but before any data has actually been transferred. The provided rowcount and function parameters will be the number of utility sessions logged on, and the string 'INIT', respectively.

      • Each time the checkpoint number of rows has been transferred. The rowcount and function parameters will be the running total of rows loaded or exported, and the string 'CHECKPOINT', respectively.

      • When the load or export operation has completed, in which case the rowcount and function parameters will be the total rows transferred and the string 'FINISH', respectively.

    6. When a subroutine is specified for the Source or Target attributes, it will be passed the following parameters:

      • $function - a string indicating the function to perform, either 'INIT', 'CHECKPOINT', 'MOREDATA', or 'FINISH'

      • $sth - a statement handle used to bind input data, or retrieve exported data

      • $sessiontag - a number used to indicate the session for which the subroutine is being called

      • $maxrows - the maximum number of rows the function should attempt to either supply or consume (in order to avoid exceeding the defined CHECKPOINT value); -1 indicates unlimited (i.e., no checkpoint)

      • $ctxt - the Context attribute value (if any) provided in the tdat_UtilitySetup attribute hash

      When $function is 'MOREDATA', the Source callback should return the number of rows applied to the input statement handle (via bind_param(), or tdat_BindParamArray()), or zero when source data is exhausted. Return values for 'INIT', 'CHECKPOINT', 'FINISH', and for 'MOREDATA' for the Target callback, should be some non-zero integer. Returning undef from either Source or Target callback will cause tdat_UtilitySetup to abort.
      The Source (or Target) subroutine will be called

      • during initialization

      • during a checkpoint

      • whenever the fastload/mload needs more data, or the export has more data to output.

      • whenever some event causes the utility to terminate (successfully or otherwise)
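    The steps above can be sketched as follows. This is a hedged outline, not a definitive implementation: the DSN, credentials, the Utility attribute name, and the supply_rows() helper are illustrative assumptions (only Source, Context, and CheckpointCallback are documented here); consult the utility detail pages for the actual attribute hash.

    ```perl
    use DBI;

    # 1. logon a control session: tdat_lsn => 0, no tdat_utility
    my $dbh = DBI->connect('dbi:Teradata:somehost', 'user', 'password',
        { RaiseError => 1, tdat_lsn => 0 });

    # Source callback, using the parameter list described above
    sub source_cb {
        my ($function, $sth, $sessiontag, $maxrows, $ctxt) = @_;
        return 1 unless $function eq 'MOREDATA';
        # apply up to $maxrows rows to the input statement handle,
        # returning the number applied, or zero when data is exhausted
        return supply_rows($sth, $maxrows);    # hypothetical helper
    }

    # 2. invoke the driver-specific method on the control session
    my $rowcount = $dbh->func({
        Utility => 'FASTLOAD',       # assumed attribute name
        Source  => \&source_cb,
        Context => { loaded => 0 },
        }, 'tdat_UtilitySetup');

    # 3./4. positive => rows loaded (errorlog tables dropped);
    # negative => error count (errorlog tables retained for inspection)
    die 'load reported ' . -$rowcount . " errors\n" if $rowcount < 0;
    print "$rowcount rows loaded\n";
    ```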

    Review the following pages for detailed discussions of using tdat_UtilitySetup:

    NOTE:
    To date, only basic FASTEXPORT operations have been tested. Parameterized and/or multistatement queries remain to be tested.

    MONITOR (PM/API)

    To use MONITOR from this driver:

    1. logon a session with AutoCommit set to 0, and tdat_utility set to 'MONITOR'

    2. prepare each PM/PC statement you intend to use.

    3. call bind_param() to bind any parameters for whichever statement you intend to execute.
      DBD::Teradata internally maps the correct parameter type information for each PM/PC statement, so explicit type information is no longer required in bind_param(). However, the provided parameter values must be compatible with the required parameter types. Raw mode should NOT be used.

    4. execute the desired statement.

    5. fetch any returned data. The tdat_stmt_info and tdat_stmt_num can be used as described above for regular multistatement SQL requests.

    6. process the returned data as needed.

    7. repeat as needed, or simply logoff when done.
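    The MONITOR steps above might look like the following sketch; the DSN, credentials, PM/PC statement text, and parameter values are illustrative examples only (see the Performance Monitor Reference for exact syntax and parameters).

    ```perl
    use DBI;

    # 1. AutoCommit off, tdat_utility => 'MONITOR'
    my $dbh = DBI->connect('dbi:Teradata:somehost', 'user', 'password',
        { RaiseError => 1, AutoCommit => 0, tdat_utility => 'MONITOR' });

    # 2. prepare the PM/PC statement
    my $sth = $dbh->prepare('MONITOR SESSION');

    # 3. bind parameters; the driver maps the parameter types internally,
    # so no explicit TYPE information is needed (values are assumptions)
    $sth->bind_param(1, 1);        # host id
    $sth->bind_param(2, 'user');   # user name
    $sth->bind_param(3, 0);        # session number

    # 4./5./6. execute and process the returned data
    $sth->execute;
    while (my $row = $sth->fetchrow_arrayref) {
        print join(',', map { defined($_) ? $_ : 'NULL' } @$row), "\n";
    }

    # 7. logoff when done
    $dbh->disconnect;
    ```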

    Due to the lack of explicit placeholder syntax, PM API programmers need to review the Teradata RDBMS Performance Monitor Reference document and the montest() subroutine in the TdTestPMAPI.pm module included in the DBD::Teradata installation package for a better understanding of which PM API statements require parameters, what data types are expected, and which statements return multiple results.

    Note that, due to the additional fields that are optionally returned for the MONITOR SESSION request in the various Teradata releases, the number of fields defined in $sth->{NUM_OF_FIELDS} and associated metadata attributes may be larger than the number actually returned. The additional fields will be returned as undef in the returned row in the event the older version of MONITOR SESSION has been requested. Check the returned VersionId field of the first result statement to determine how many fields to expect in the successive result rows.

    Remote Console (DBCCONS)

    DBD::Teradata supports the Remote Console interface, providing programmatic support for the various console utilities, including:

    • aborthost
    • checktable
    • config
    • dbscontrol
    • dip
    • ferret
    • filer
    • gtwglobal
    • lokdisp
    • dumplocklog
    • qryconfig
    • qrysessn
    • reconfig
    • rcvmanager
    • schmon
    • showlocks
    • sysinit
    • rebuild
    • tpccons
    • updatespace
    • vprocmanager

    (Refer to the Teradata RDBMS Utilities, volumes 1 and 2, for details on each of these utilities)

    To use these utilities via remote console, you must first configure your DBMS to permit selective execution of the utility by

    1. Create a CONSOLE database

    2. Create a null MACRO within the CONSOLE database with the same name as the utility you wish to run, and GRANT EXECUTE access to any users you wish to have remote console access to the utility, e.g.,

      CREATE MACRO CONSOLE.gtwglobal() (;);
      GRANT EXECUTE ON CONSOLE.gtwglobal TO some_user;
      

    Using the remote console capability requires some special programming considerations. Most importantly, a new Prompt attribute has been added to the tdat_stmt_info statement attribute structure which indicates whether the remote console protocol has solicited user input. Refer to the TdTestConsole.pm test module included in the DBD::Teradata bundle for an example.

    Note that several console utilities will output ANSI terminal escape sequences, which may present an undesirable display behavior.

    Finally, be aware that some console utilities operate in full duplex mode, whereby they issue PROMPT parcels before they have completely sent all of their output display data (a protocol artifact from early support for serial line ANSI terminals). For console utilities which provide such dynamic display updates, handling of input requires

    • using the non-blocking tdat_FirstAvailList and tdat_Realize driver specific functions to execute and fetch on the console connection

    • aborting any outstanding request posted to the console session prior to sending any prompted data to the console connection

    • properly keeping track of when a PROMPT has been received and is outstanding

    See the TdTestConsole.pm module included with DBD::Teradata bundle.

    Double Buffering

    Double buffering (i.e., issuing a CONTINUE to the DBMS while the application is still fetching data from the last received set of rowdata) is supported, and is the default behavior. However, once a session executes a SELECT...FOR CURSOR statement, double buffering is disabled for all queries in the session. Double buffering is not supported for CLI adapter connections.

    Using Updatable Cursors

    Note: Updatable cursor support is deprecated for Teradata versions R6.2 or higher.

    • Cursor syntax is only supported in ANSI mode (i.e., tdat_mode => 'ANSI' during connect).

    • To open a cursor for positioned operations, the FOR CURSOR clause must be appended to the cursor SELECT statement.

    • Cursor names are generated internally; DECLARE CURSOR syntax is not supported. The internally generated cursor name can be retrieved via the CursorName statement handle attribute.

    • To apply an update or delete at the current position for a cursor, the WHERE CURRENT OF $sth->{CursorName} clause must be appended to the UPDATE or DELETE statement.

    • All the restrictions of updatable cursors described in the SQL Preprocessor manual also apply. Especially note that commit or rollback will implicitly close all updatable cursors, and all read-only cursors not prepared with the tdat_keepresp attribute enabled!

    • The current row of the cursor will be invalidated whenever a DELETE...WHERE CURRENT is successfully executed on the cursor. The cursor must be explicitly advanced via any of the fetch() functions before continuing with positioned operations.

    • Remember to turn off AutoCommit mode when using updatable cursors!

    • Note that using updatable cursors may adversely impact performance of other non-cursor queries concurrently or subsequently opened on the same connection, since supporting positioned updates precludes the ability to double-buffer responses (as described above).

    • Due to an apparent anomaly in the PREPARE of positioned statements by the DBMS, positioned statements must be prepare()'d either

      1. before any updatable cursor statement is execute()'d

      2. after the associated cursor statement has been execute()'d and fetch()'d

    A simple example:

    $cursth = $dbh->prepare('SELECT * FROM mytable WHERE col1 > 12345 FOR CURSOR');
    $updsth = $dbh->prepare("UPDATE mytable SET col3 = 'Invalid' WHERE CURRENT OF $cursth->{CursorName}");
    $cursth->execute;
    while ($cursth->fetch) {
        $updsth->execute;
    }
    $dbh->commit;
    

    Stored Procedures

    • A minimum Teradata Release V2R4.0 is required for stored procedure support.

    • DBD::Teradata supports large CREATE/REPLACE PROCEDURE statements up to approx. 6 megabytes in length.

    • The statement level attributes tdat_sp_save and tdat_sp_print are used to enable or disable saving of procedure text, and console PRINT statements, respectively. Note that Teradata has deprecated the PRINT statement as of V2R5.0.1, and PRINT statements are now always ignored.

    • CREATE/REPLACE PROCEDURE is a data-returning statement. Applications should check the statement Warning attribute to determine if any compilation errors occurred. In the event of compilation errors, the individual errors are available in single column rows which can be retrieved by simply fetch'ing on the statement handle and displaying the single column of each row. E.g.,

      $sth = $dbh->prepare(
      'CREATE PROCEDURE DbiSPTest(IN Parent INTEGER, OUT Child INTEGER,
          INOUT Sibling integer, IN CommentString CHAR(20))
      BEGIN
          Declare Level Integer;
          Set Level = Parent;
          DELETE FROM SPLTEST All;
          WHILE Level < Parent + Sibling DO
              Insert into spltest values(:level, :CommentString);
              Set level = level + 1;
          END WHILE;
          Set Child = Level;
      END;', { tdat_sp_save => 1 });
      $sth->execute;
      $stmtinfo = $sth->{tdat_stmt_info};
      $stmthash = $$stmtinfo[1];
      if ($$stmthash{Warning}) {
          print $$stmthash{Warning}, "\n";
          while ($row = $sth->fetchrow_arrayref) {
              print $$row[0], "\n";
          }
      }
      
    • USING clauses are not supported with CALL statements; only placeholders should be used for parameters.

    • Placeholders are required for INOUT parameters. IN parameters may be either placeholders, or literals. OUT parameters must be specified by parameter name.

    • The parameter number supplied to bind_param() and bind_param_inout() is the ordinal placeholder position, starting from 1.

    • The column number supplied to bind_col() is the ordinal position of the output value in the returned row data (i.e., position in the parameter list, after excluding IN parameters). E.g., assume a stored procedure defined as

      CREATE PROCEDURE exampleProc(IN parm1 INTEGER,
      	OUT parm2 INTEGER, INOUT parm3 INTEGER, IN parm4 INTEGER)
      
      and invoked with
      $sth = $dbh->prepare('CALL exampleProc(10, parm2, ?, ?)');
      
      Then parameter bindings would be
      $sth->bind_param_inout(1, \$parm3);
      $sth->bind_param(2, $p4);
      $sth->bind_col(1, \$parm2);
      
      Alternately:
      $sth->bind_param(1, $parm3);
      $sth->bind_param(2, $p4);
      $sth->bind_col(1, \$parm2);
      $sth->bind_col(2, \$parm3);
      
    • IN parameters must be bound via bind_param()

    • INOUT parameters may be bound via bind_param_inout(), or by separately binding the input via bind_param(), then binding output via bind_col() as illustrated above.

    • OUT parameters must be bound via bind_col()

    • Alternately, the $sth->fetchrow_XXX() functions can be used to retrieve the OUT and INOUT values; the associated statement attributes (NAME, TYPE, etc.) will be defined only for the set of OUT and INOUT parameters, and IN parameters will not be included in the NUM_OF_FIELDS count.

    • Errors may not be returned for improperly specified parameters (e.g., supplying a placeholder for an OUT parameter), resulting in improper or unexpected behavior. Generic applications should execute a HELP PROCEDURE statement to determine parameter IN/OUT attributes when constructing CALL statements.

    Error Handling

    Warnings are returned in the statement handle tdat_stmt_info attribute in the Warning field that can be queried to retrieve warning messages.

    Transaction behavior with respect to errors differs between ANSI and Teradata modes. Review the Teradata SQL documents for details.

    In ANSI mode, multistatement and MACRO requests can complete with one or more of the statements returning an error; the statement info hashes returned for the statement handle should be inspected after execute() to determine if any errors occurred.

    Diagnostics

    DBI provides the trace() function to enable various levels of trace information. DBD::Teradata uses this trace level to report its internal operation, as well.

    • If the trace level is unset, or set to zero, no diagnostic reporting is performed.

    • If trace level is set to 1, some limited diagnostic reporting is performed. This trace level is useful for informational (as opposed to debugging) purposes.

    • If trace level is set to 2 or higher, detailed level diagnostic reporting is performed. Hex dumps of sent and received parcel streams and message headers will be included if an environment variable TDAT_DBD_DEBUG is set to a non-zero value prior to calling DBI->connect(). This level should be used whenever a potential driver bug is believed to exist, and the resulting report should be included when the bug is reported (assuming the data stream doesn't include sensitive information). PLEASE DON'T SEND DIAGNOSTIC DUMPS THAT INCLUDE CONFIDENTIAL OR SENSITIVE INFORMATION!! Instead, try to reproduce the problem using dummy data.

    Driver-Specific Attributes

    There are some additional attributes that the user can either supply to various DBI calls, or query on database or statement handles:

    tdat_active

    Read-only on connection handle
    When non-zero, indicates the connection handle has results which need to be "Realized". Refer to the discussion on tdat_FirstAvailList for details. Note that this is different from the DBI's 'Active' attribute.

    tdat_bufsize

    Deprecated; use tdat_reqsize and tdat_respsize instead.
    Read/write on connection or statement handle; statement handle inherits default from parent connection.

    Specifies the maximum request and response buffer size in bytes. See tdat_reqsize and tdat_respsize for usage details.

    tdat_CASESPECIFIC

    Read-only on statement handle
    Returns an arrayref of returned column case sensitivity indicators. If an entry is true, then the associated column has CASE SPECIFIC sensitivity; otherwise, the column is not case sensitive (or is not a character column). Only applies to UTF8 connections to a V2R5.1 or higher Teradata server.

    tdat_CHARSET

    Read-only on statement handle
    Returns an arrayref of returned column character set indicators. For character string columns, entries may be 'ASCII', 'LATIN', 'UNICODE', 'KANJISJIS, 'GRAPHIC', 'KANJI', or 'Unknown'; entries are undef for non-character columns. Only applies to UTF8 connections to a V2R5.1 or higher Teradata server.

    tdat_charset

    Write on connect(), Read-only on connection handle
    Determines the connection character set. May be any of 'ASCII', 'UTF8', or 'EBCDIC'. If not defined, the Teradata system default is used. Can be queried after connection to determine the current connection character set.

    tdat_compatible

    Write-only connection or statement attribute.
    Used to establish a minimum version compatibility. When set to a driver version string, e.g., tdat_compatible => '1.12', causes certain behaviors that may have changed since that release level to be restored. (Currently only applies to result value of $sth->execute() and $dbh->do(), or the optimization of $dbh->prepare() for non-data returning statements).

    tdat_database

    Write-only connection attribute.
    Used to establish the default database after logon. Only applied to DBC/SQL sessions. Causes a "DATABASE name" statement to be issued immediately after logon. Note that, if the DATABASE statement fails, the session will be logged off and an error returned.
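    For example (the DSN, credentials, and database name are hypothetical), a connection that should start in a given default database might be sketched as:

    ```perl
    use DBI;

    # 'sandbox_db' is a hypothetical database name; a failing
    # DATABASE statement logs the session off and returns an error
    my $dbh = DBI->connect('dbi:Teradata:somehost', 'user', 'password', {
        RaiseError    => 1,
        tdat_database => 'sandbox_db',   # issues "DATABASE sandbox_db" after logon
    }) or die "Connect failed: $DBI::errstr\n";
    ```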

    tdat_FORMAT

    Read-only on statement handle.
    Returns an arrayref of returned column format specification strings, as specified by FORMAT qualifiers in either SELECT statements or various DDL statements.

    tdat_formatted

    Statement handle attribute, set on prepare.
    When set to a non-zero value, causes result values to be returned in DBMS formatted form (i.e., uses FIELD mode instead of record mode requests). Only effective on DBC/SQL sessions.

    tdat_has_bignum

    Read-only connect() attribute
    Indicates that the connection supports BIGINT and large precision DECIMAL datatypes.

    tdat_hostid

    Read-only on connection handle
    Returns the host group ID to which the session has been connected.

    tdat_keepresp

    Write-only prepare() attribute.
    When set to a non-zero value, causes a KEEPRESP parcel to be issued with the request to the DBMS. This is useful for

    • executing the EXPORT'd query on the control session of a fastexport (Note that the new tdat_UtilitySetup() interface for EXPORT eliminates the need for the application to specify this attribute).
    • executing transaction-spanning read-only cursors (SELECTs without a FOR CURSOR suffix).

    The latter case permits an application to execute a SELECT statement as a read-only cursor, which remains open after subsequent commit or rollback operations, either via AutoCommit'ed INSERT/UPDATE/DELETE/etc. statements, or by explicit commit() or rollback() calls. NOTE that the application must explicitly finish() the associated statement handle, even after all rows have been retrieved from the cursor; otherwise the cursor will remain open, consuming both client and server resources, until the associated connection has been disconnected.

    Example:

    my $selsth = $dbh->prepare('SELECT * from alltypetst', { tdat_keepresp => 1 });
    my $updsth = $dbh->prepare('UPDATE alltypetst SET col2 = 1 WHERE col1 = ?');
    my $row;
    $selsth->execute;
    while ($row = $selsth->fetchrow_arrayref) {
        if ($$row[0]%100 == 0) {
            $updsth->execute($$row[0]);
            $dbh->commit;
        }
    }
    $selsth->finish;
    
    

    tdat_lsn

    Write-only connect() attribute, Read-only on connection handle.
    When specified on connect():
    • if specified with a value of zero, causes the session to allocate an LSN from the DBMS, which can be queried after successful connection using the tdat_lsn attribute.
    • if specified with a non-zero value, causes the session to associate with the provided LSN value.
    If not specified during connect(), no LSN action is performed, and querying tdat_lsn after connection will return undef. After connect(), the LSN value can be queried via the database handle tdat_lsn attribute.
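    A sketch of both forms (hypothetical DSN and credentials): a control session allocating a new LSN, then a second session associating with it:

    ```perl
    use DBI;

    # tdat_lsn => 0: allocate a new LSN from the DBMS
    my $ctldbh = DBI->connect('dbi:Teradata:somehost', 'user', 'password',
        { RaiseError => 1, tdat_lsn => 0 });
    my $lsn = $ctldbh->{tdat_lsn};    # query the allocated LSN

    # tdat_lsn => $lsn: associate this session with the existing LSN
    my $dbh2 = DBI->connect('dbi:Teradata:somehost', 'user', 'password',
        { RaiseError => 1, tdat_lsn => $lsn });
    ```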

    tdat_mlmask

    Write-only on statement handle
    A scalar bitmask, or an arrayref of bitmasks, used with MLOAD utility sessions to indicate which MLOAD jobs a given input record is associated with. Refer to the Multiload detail page for a detailed description and example code.

    tdat_mode

    Set at connect(), connection handle attribute.
    Sets the session mode, either 'ANSI', 'TERADATA', or 'DEFAULT', upon connection. If not specified, or set to DEFAULT, the current DBMS default mode is used. After connection, the application can query the attribute to determine which mode the session is operating in.

    tdat_more_results

    Read-only statement handle attribute.
    Indicates if there are more results to retrieve from a statement handle. When a fetch operation returns undef, a non-zero tdat_more_results value indicates more Teradata statements are available for fetching on the statement handle.
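    A fetch loop that drains every statement of a multistatement request might be sketched as follows ($dbh is an existing SQL session; the request text is a hypothetical example):

    ```perl
    # when a fetch returns undef, check tdat_more_results to see
    # whether further Teradata statements remain to be fetched
    my $sth = $dbh->prepare('SELECT * FROM tab1; SELECT * FROM tab2;');
    $sth->execute;
    do {
        while (my $row = $sth->fetchrow_arrayref) {
            print "stmt $sth->{tdat_stmt_num}: @$row\n";
        }
    } while ($sth->{tdat_more_results});
    ```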

    tdat_no_bigint

    Write-only connect() attribute
    Causes DBD::Teradata to not use Math::BigInt for decimal data conversions; floating point conversions will be used instead.

    tdat_no_cli

    Write-only connect() attribute
    Causes DBD::Teradata to not use the CLI adapter (i.e., use the pure Perl implementation).

    tdat_progress

    Additional attribute value for execute_array().
    Specifies an arrayref containing a rowcount increment and a callback. Used by execute_array() to provide a progress reporting mechanism, e.g.,
    $sth->execute_array({
    	ArrayTupleStatus => \@stsary,
    	tdat_progress => [ 100, \&report_progress ]
    	});
    
    sub report_progress {
    	print "\r Sent ", shift, '...';
    }
    

    tdat_raw

    Deprecated; use tdat_raw_in and tdat_raw_out instead.
    Write-only on statement handle creation; read-only on statement handle thereafter.

    When set to either RecordMode or IndicatorMode in the attributes hash provided to a $dbh->prepare() call, causes the resulting DBI statement handle to both output rowdata and accept input parameter data in Teradata binary import/export format. Specifying RecordMode indicates data is provided without the NULL indicator bits; IndicatorMode indicates data is provided with indicator bits.
    NOTE: tdat_raw is equivalent to setting both tdat_raw_in and tdat_raw_out

    tdat_raw_in

    Write-only on statement handle creation; read-only on statement handle thereafter.
    When set to either RecordMode or IndicatorMode in the attributes hash provided to a $dbh->prepare() call, causes the resulting DBI statement handle to accept the input parameter data in Teradata binary import/export format:
    <2 byte length><(optional) N bytes of indicators><N bytes of data><newline>
    
    Specifying RecordMode indicates data is provided without the NULL indicator bits; IndicatorMode indicates data is provided with indicator bits.

    Each row of parameter data should be bound as SQL_VARBINARY type. This attribute is intended to provide a faster path for import operations by avoiding the translation from internal Perl datatypes. E.g.,

    open (FLIMPORT, 'fload.data') || die "Can't open import data file: $!\n";
    binmode FLIMPORT;
    
    $sth = $dbh->prepare('USING (col1 integer, col2 char(20), col3 float, col4 varchar(100)) '
        . 'INSERT INTO MyTable VALUES(:col1, :col2, :col3, :col4);',
        { tdat_raw_in => 'IndicatorMode' });
    
    while (sysread(FLIMPORT, $lenbuf, 2)) {
        $len = unpack('S', $lenbuf);             # row length is a 2-byte integer
        sysread(FLIMPORT, $buffer, $len + 1);    # remember the newline!
        $buffer = pack('SA*', $len, $buffer);
        $sth->bind_param(1, $buffer, {
            TYPE => SQL_VARBINARY,
            PRECISION => length($buffer)
        });
        $sth->execute;
    }
    

    tdat_raw_out

    Write-only on statement handle creation; read-only on statement handle thereafter.
    When set to either RecordMode or IndicatorMode in the attributes hash provided to a $dbh->prepare() call, causes the resulting DBI statement handle to output rowdata in Teradata binary import/export format. Specifying RecordMode indicates data is provided without the NULL indicator bits; IndicatorMode indicates data is provided with indicator bits.

    Returned row data will be returned as a single SQL_VARBINARY result column. This attribute is intended to provide a faster path for export operations by avoiding the translation to internal Perl datatypes. E.g.,

    open (FLEXPORT, '>fload.data') || die "Can't open export data file: $!\n";
    binmode FLEXPORT;
    
    $sth = $dbh->prepare('SELECT * FROM MyTable',
        { 'tdat_raw_out' => 'IndicatorMode' });
    
    $sth->execute();
    
    print FLEXPORT $row->[0]
    	while ($row = $sth->fetchrow_arrayref());
    close FLEXPORT;
    
    

    tdat_reqsize

    Read/write on connection or statement handle; statement handle inherits default from parent connection.
    Specifies the maximum request buffer size in bytes. Used primarily for execute_array() to limit or expand the number of parameter tuples sent to the database in each request; also used for FASTLOAD and MLOAD connections (via the RequestSize attribute). For SQL connections, only relevant to Teradata V2R6.0 or above, and only for execute_array(). Default value is 64256; may be set to any value between 64256 and 1,048,000.

    tdat_respsize

    Read/write on connection or statement handle; statement handle inherits default from parent connection.
    Specifies the maximum response buffer size in bytes. Used to expand the amount of data which can be returned from the database in each request. Only relevant to Teradata V2R6.0 or above. Default value is 64256; may be set to any value between 64256 and 1,048,000.

    tdat_sessno

    Read-only on connection handle.
    Returns the database assigned session number.

    tdat_sp_print

    Statement handle attribute, set on prepare.
    Sets console behavior for CREATE/REPLACE PROCEDURE statements. Any non-zero value enables console PRINTs; if zero or undefined, console PRINTing is disabled. NOTE: Teradata has deprecated the use of PRINT statements, and this option is ignored as of Teradata V2R5.0.2.

    tdat_sp_save

    Statement handle attribute, set on prepare.
    Sets stored procedure text save behavior for CREATE/REPLACE PROCEDURE statements. Any non-zero value (or leaving the attribute undefined) enables saving stored procedure text; zero disables text saving.

    tdat_stmt_num

    Read-only on statement handle.
    Returns the number of the current statement within the request associated with the statement handle. Applies only for the fetchrow_XXX() statement handle method; for requests which do not include SELECT statements, the returned value is the statement number of the last statement in the request.

    tdat_stmt_info

    Read-only on statement handle.
    Returns an arrayref of hashrefs of Teradata statement information for each Teradata statement within the request associated with the DBI statement handle. Not valid on EXPORT or PM/PC sessions.
    Please note that the DBMS starts statement numbering with 1, not zero; thus, loop constructs used to scan the statement info array should start their index values at 1. The following attributes are included in each statement's hashref:

    • ActivityType: type of activity ('Select', 'Insert', 'Update', etc.) of the statement.
    • ActivityCount: number of rows affected by the statement.
    • Warning: any warning message associated with the statement. Returns undef if none.
    • ErrorCode: DBMS error code reported if the statement failed. Returns undef if none.
    • ErrorMessage: DBMS error message text reported if the associated statement failed. Returns undef if none.
    • StartsAt: starting index of a statement's returned column info or data within the DBI column info and data arrays (NAME, PRECISION, etc., as well as the results of fetchrow_XXX()). Each attribute and rowdata array includes entries for all columns of all SELECT statements within a request. In order to isolate the array entries which apply to the statement currently being fetched from, use the result of $sth->{'tdat_stmt_num'} to index into the information and data arrayrefs. See the Multi-Statement And MACRO Requests section above for details. For non-SELECT statements, undef is returned.
    • EndsAt: the (inclusive) ending index of a statement's returned column attribute and data within the DBI attribute and row data arrays. This does NOT include any summary columns information generated by the statement. For non-SELECT statements, undef is returned.
    • IsSummary: current summary row number for the statement, if any, or undef if not a summarized SELECT statement, or if the current row is not a summary row. The returned value is used to index into the arrays returned by SummaryStarts and SummaryEnds to locate the field values and attributes for the specified summary row.
    • SummaryPosition: arrayref of the column numbers associated with the summary fields in each summary row. Set to undef for non-SELECT or non-summarized statements. SummaryPosition information is not available until after the execute() method has been called and a summary row has been fetched.
    • SummaryPosStart: arrayref, indexed by summary row number, of the starting index within the SummaryPosition array for each summary row. Set to undef for non-SELECT or non-summarized statements. SummaryPosStart information is not available until after the execute() method has been called and a summary row has been fetched.
    • SummaryStarts: array of starting indexes within the DBI attribute and row data arrays for a statement's summary column info and data. Set to undef for non-SELECT or non-summarized statements. When processing a summarized statement, an application
      • retrieves the current statement's hashref from the arrayref returned by the tdat_stmt_info statement handle attribute
      • checks the IsSummary attribute of the current statement hashref
      • retrieves the SummaryStarts and SummaryEnds arrays from the current statement hashref
      • uses the current summary row number (from the IsSummary attribute) to get the starting and ending indexes (inclusive) of column attribute and row data from the SummaryStarts and SummaryEnds arrays
      • iterates through the DBI attribute and row data arrays using the retrieved start and end indexes.
    • SummaryEnds: array of ending indexes within the DBI attribute and row data arrays for a statement's summary column info and data. Set to undef for non-SELECT or non-summarized statements.
    • Prompt: for remote console sessions only, set to 1 when the console utility returns a PROMPT parcel, i.e., requests input.

    An example use of these attributes:

    $sth = $dbh->prepare("INSERT INTO table VALUES(?,?,?,?); "
    . "UPDATE table2 SET col1 = 'another value' WHERE col1 = 'some value';");
    
    $rows = $sth->execute(1, 2, 3, 4);
    $stmtcnt = $sth->{'tdat_stmt_num'};    # no SELECT, so returns number of last stmt
    $stmt_info = $sth->{'tdat_stmt_info'};
    for ($i = 1; $i <= $stmtcnt; $i++) {    # statement numbering starts at 1
        $stmthash = $$stmt_info[$i];
        $activity = $$stmthash{'ActivityType'};
        print "Statement $i failed: Error ", $$stmthash{ErrorCode}, ': ',
            $$stmthash{ErrorMessage}, "\n" and next
            if ($$stmthash{ErrorCode});
        $stmtrows = $$stmthash{'ActivityCount'};
    
        $warn = $$stmthash{'Warning'};
        if ($warn) {
            print "Statement $i: $warn\n";
        }
        print "$activity at statement $i affected $stmtrows rows.\n";
    }
    

    tdat_TITLE

    Read-only on statement handle.
    Returns an arrayref of returned column titles, as specified by the TITLE qualifiers in SELECT statements, or various DDL statements. Defaults to the column name if no title is reported by the database.

    tdat_TYPESTR

    Read-only on statement handle.
    Returns an arrayref of returned column type information as a string, e.g., "DECIMAL(10,5)".

    tdat_uses_cli

    Read-only on connection handle.
    If "true", indicates the connection is using the CLI adapter.

    tdat_utility

    Write-only connect() attribute; read-only on connection handle.
    When specified on connect(), the specified string is used as the logon partition for the session. If not specified, the default value is 'DBC/SQL'. Valid values are 'DBC/SQL', 'FASTLOAD', 'MLOAD', 'EXPORT', 'MONITOR', and 'DBCCONS'.

    tdat_vartext_in

    Write-only on statement handle creation; read-only on statement handle thereafter.
    When set to a single character in the attributes hash provided to a $dbh->prepare() call, causes the resulting DBI statement handle to accept the input parameter data in Teradata VARTEXT format (i.e., records consisting of character string fields separated by the specified separator character). Any parameter data provided either via explicit bind() operations, or provided with execute()/execute_array(), will be treated as a single string to be split along the specified separator character boundaries. Only a single parameter value should be bound in this case.

    tdat_vartext_out

    Write-only on statement handle creation; read-only on statement handle thereafter.
    When set to a single character in the attributes hash provided to a $dbh->prepare() call, causes the resulting DBI statement handle to output rowdata in Teradata VARTEXT format (i.e., records consisting of character string fields separated by the specified separator character). The columns of each row returned by any fetch() operation will be concatenated into a single string - separated by the specified separator character - and returned as the first and only column of row data.
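Independent of the driver, the VARTEXT record shape is just separator-joined character fields. A minimal plain-Perl sketch of composing such a record (as tdat_vartext_in expects) and splitting one back apart (as tdat_vartext_out returns); the field values here are illustrative only:

```perl
use strict;
use warnings;

my $sep = '|';

# Compose a record the way tdat_vartext_in expects: character
# string fields joined by the separator character.
my @fields = ('100', 'some string', '123.45', '10/01/01');
my $record = join($sep, @fields);

# Split a tdat_vartext_out-style record back into fields; the
# separator must be regex-escaped, since split() takes a pattern.
my @cols = split(/\Q$sep\E/, $record);

print "$record\n";   # prints "100|some string|123.45|10/01/01"
```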

    tdat_version

    Read-only on connection handle.
    Returns the Teradata version for the connection as a string, e.g., "V2R.05.01.00.23".

    tdat_versnum

    Read-only on connection handle.
    Returns the Teradata version as an integer number of the form
    	(major_release * 1,000,000) + (minor_release * 10,000) + (maint_release * 100) + emergency_release
    
    E.g., "V2R6.0.1.17" would be 6000117.
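The encoding above can be sketched in plain Perl; the helper subroutine names here are illustrative, not part of the driver:

```perl
use strict;
use warnings;

# Encode release components into a tdat_versnum-style integer.
sub versnum {
    my ($major, $minor, $maint, $emerg) = @_;
    return $major * 1_000_000 + $minor * 10_000 + $maint * 100 + $emerg;
}

# Decode a version number back into its components.
sub vers_parts {
    my $v = shift;
    return (int($v / 1_000_000), int($v / 10_000) % 100,
            int($v / 100) % 100, $v % 100);
}

print versnum(6, 0, 1, 17), "\n";   # V2R6.0.1.17 => prints 6000117
```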

    Driver-Specific Functions

    NOTE: the BindColArray, BindParamArray, FirstAvailable, FirstAvailList, and Realize functions have been deprecated and replaced with tdat_BindColArray, tdat_BindParamArray, tdat_FirstAvailable, tdat_FirstAvailList, and tdat_Realize, respectively, in order to properly conform to DBI's naming rules. While these functions are still available via the old names, the "tdat_" prefix should be used in all new code. In addition, the tdat_BindParamArray function has been deprecated; the new official DBI array binding interfaces should be used instead.

    Also note that these methods are installed, and may be called directly; the $handle->func(...'Function') syntax is no longer required, though still supported.

    tdat_BindParamArray
    $i = $sth->tdat_BindParamArray($param_num, \@param_ary);

    Deprecated; use the $sth->bind_param_array() standard API instead.

    $param_num is the number of the parameter to be bound, and \@param_ary is an arrayref that will contain the parameter values. Values need not be instantiated until just prior to the execute() call.

    • Upon execute, the entire set of parameter values is supplied to the DBMS in a single request.
    • For statements with multiple parameters, any bound parameter arrays with fewer elements than the longest bound parameter array will cause a NULL value to be used for the unsupplied parameter array elements.
    • If some parameters are bound to scalar values, the scalar value will be used for each data row.
    • If the total set of bound parameter array values exceeds the maximum request message size, an error is returned requesting the user reduce the number of parameter array values.
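The NULL-padding and scalar-reuse rules above can be illustrated in plain Perl; this snippet merely mimics the described behavior and is not driver code:

```perl
use strict;
use warnings;

# Bound parameters: two arrays of unequal length, plus a scalar
# that is reused for every row.
my @bound = ( [ 1, 2, 3 ], [ 'a', 'b' ], 'fixed' );

# The longest bound array determines the number of tuples sent.
my $rows = 0;
for my $p (@bound) {
    $rows = @$p if ref($p) eq 'ARRAY' && @$p > $rows;
}

# Expand row by row: short arrays yield undef (NULL) past their
# end; scalar values repeat on every row.
my @tuples;
for my $r (0 .. $rows - 1) {
    push @tuples, [ map { ref($_) eq 'ARRAY' ? $_->[$r] : $_ } @bound ];
}
# @tuples: [1,'a','fixed'], [2,'b','fixed'], [3,undef,'fixed']
```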

    tdat_BindColArray
    $i = $sth->tdat_BindColArray($column_num, \@colary, $max_rows);

    $column_num is the number of the column to be bound, \@colary is an arrayref that will receive the column values, and $max_rows is the maximum number of rows the application expects to be returned per fetch().
    This function allows a single fetch() operation to return multiple rows of data.

    tdat_CharSet
    $charset = $dbh->tdat_CharSet();
    $charset = $sth->tdat_CharSet();

    Returns the current connection character set, usually either 'ASCII' or 'UTF8'.

    Non-blocking Execution Control Methods

    In order to make this driver useful for high-performance ETL applications, support for multiple concurrent sessions is needed. Unfortunately, native DBI doesn't currently support the type of asynchronous session interaction needed to efficiently move data to/from an MPP database system. To address this need, the following functions have been provided:

    tdat_FirstAvailable
    $i = $drh->tdat_FirstAvailable(\@dbh_ary, $timeout);

    \@dbh_ary is an arrayref of database handles or file descriptor numbers, and $timeout is a timeout specification (in seconds, -1 or undef indicate infinite wait). Returns the index of the first session or file descriptor within the supplied database handle array that is ready to be serviced. If none of the sessions or file descriptors is ready for service, it waits up to the timeout number of seconds (or forever if timeout is -1 or undef) for a session to become ready. Returns undef if no sessions are ready in the specified timeout.

    tdat_FirstAvailList
    @ary = $drh->tdat_FirstAvailList(\@dbh_ary, $timeout);

    \@dbh_ary is an arrayref of database handles or file descriptor numbers, and $timeout is a timeout specification (in seconds, -1 or undef indicate infinite wait). Returns an array of indexes of sessions and file descriptor numbers within the supplied database handle array that are ready to be serviced. If none of the sessions or file descriptors is ready for service, it waits up to the timeout number of seconds (or forever if timeout is -1 or undef) for a session or file descriptor to become ready. Returns undef if no sessions or file descriptors are ready in the specified timeout.
    NOTE: This function is useful for more evenly distributing the workload across multiple sessions when all sessions respond at nearly the same time. Using tdat_FirstAvailable() in that situation tends to favor the first 1 or 2 sessions in the list, thus underusing the remaining sessions.
    NOTES:

    1. tdat_FirstAvailList() will include any inactive connection handle specified in the supplied handle array, rather than only those which have completed an operation and are waiting to be "Realized". Use the tdat_active connection handle attribute (see below) to test if the connection actually needs to be Realized.
    2. tdat_FirstAvailList() allows file descriptor numbers to be included in the supplied array of database handles, in order to support interleaved I/O operations between DBI and non-DBI objects.

    tdat_Realize
    $i = $sth->tdat_Realize();

    Realizes the results of a non-blocking statement execution. tdat_FirstAvailable and tdat_FirstAvailList only wait for and report that a session is ready; they do not process the results on the session. tdat_Realize performs the actual processing of the database response, including returning the success or failure of the operation, and any returned rows.

    An example use of these functions to bulkload a table:

    my $drh;
    my @dbhlist;
    my @sthlist;
    my $sesscount = 10;
    open(IMPORT, $infile) || die "Can't open import file: $!\n";
    binmode IMPORT;
    
    for (my $i = 0; $i < $sesscount; $i++) {
        $dbhlist[$i] = DBI->connect("dbi:Teradata:dbc", "dbc", "dbc");
        $drh = $dbhlist[$i]->{Driver} unless defined($drh);
    }
    
    # prepare each session and prime it with its first record
    for (my $i = 0; $i < $sesscount; $i++) {
        $sthlist[$i] = $dbhlist[$i]->prepare(
            'USING (col1 INTEGER, col2 CHAR(30), col3 DECIMAL(9,2), col4 DATE) ' .
            'INSERT INTO mytable VALUES(?, ?, ?, ?)', {
            tdat_nowait => 1,
            tdat_raw_in => 'IndicatorMode'
        });
        sysread(IMPORT, $buffer, $len);
        $sthlist[$i]->bind_param(1, $buffer);
        $sthlist[$i]->execute();
    }
    
    while (sysread(IMPORT, $buffer, $len)) {
        $i = $drh->tdat_FirstAvailable(\@dbhlist, -1);
        $rowcnt = $sthlist[$i]->tdat_Realize();
        print STDERR " ** INSERT failed: " . $sthlist[$i]->errstr() . "\n"
            unless defined($rowcnt);
        $sthlist[$i]->bind_param(1, $buffer);
        $sthlist[$i]->execute();
    }
    
    # drain the final outstanding request on each session
    for (my $n = 0; $n < $sesscount; $n++) {
        $i = $drh->tdat_FirstAvailable(\@dbhlist, -1);
        $rowcnt = $sthlist[$i]->tdat_Realize();
        print STDERR " ** INSERT failed: " . $sthlist[$i]->errstr() . "\n"
            unless defined($rowcnt);
        $sthlist[$i]->finish();
    }
    

    tdat_ServerSideVartext
    $vartext_sql = $dbh->tdat_ServerSideVartext($sql [ , $separator ])

    Converts the provided SQL SELECT statement into a query in which all returned columns are concatenated into a single return column, separated by the specified $separator string (default '|'). Using this technique, the performance of a VARTEXT output query (including for fastexport use) may be significantly improved, as the various client-side conversions required for, e.g., large DECIMAL columns are no longer needed. On failure, returns undef, with error information available via $dbh->err and $dbh->errstr.

    The input query is subject to the following limitations:

    • The resulting concatenated column cannot exceed 64000 bytes in length.

    • The returned column list cannot include any USING variables or '?' placeholders.

    • The specified query must be a SELECT statement without any prepended USING, LOCKING, or WITH clauses.

    • The returned column list can only include a single wildcard ('*') specifier, and it must be either the first, last, or only column list element.

    • The returned column list cannot include BLOB or CLOB entries.

    • In Teradata transaction mode, a tdat_ServerSideVartext failure may cause any currently open transaction to be rolled back (due to the possible failure of the HELP COLUMN query).

    The returned $vartext_sql will expand the returned column list into individual elements, with appropriate COALESCE() and TRIM() wrappers, and casting to VARCHAR where needed, all concatenated with the separator string.

    Note that the parse applied to the input $sql is relatively simple, and may cause unexpected translation failures, primarily due to a HELP COLUMN query sent to the database to retrieve the true returned column list types. To avoid translation failures, if possible, remove leading LOCKING, USING, and WITH clauses, and any clauses which trail the FROM clause, from the query before attempting to translate, and then restore the removed clauses after the translation. To verify the trimmed query can be translated, simply apply it with a HELP COLUMN query using your favorite tool (Perl, BTEQ, etc.).

    Also note that, if the resulting query is subsequently used with DBD::Teradata to generate a vartext file upon execution, the usual tdat_vartext_out statement attribute must still be applied (in order to emit the results with newlines; note that no additional separators will be applied to the output, since the output appears as a single column to DBD::Teradata).

    Example: Assuming a table defined as

    CREATE TABLE DBITST.alltypetst(
          col1 INTEGER NOT NULL,
          col2 SMALLINT NOT NULL,
          col3 BYTEINT,
          col4 CHAR(20),
          col5 VARCHAR(100),
          col6 FLOAT,
          col7 DECIMAL(2,1),
          col8 DECIMAL(4,2),
          col9 DECIMAL(8,4),
          col10 DECIMAL(14,5),
          col11 DATE FORMAT 'YY/MM/DD',
          col12 TIME(6),
          col13 TIMESTAMP(0))
    UNIQUE PRIMARY INDEX ( col1 );
    
    then the following query
    my $vtsql = $dbh->tdat_ServerSideVartext(
    	"SELECT CURRENT_TIMESTAMP, 'hello world', a.*
    	FROM alltypetst a
    	WHERE a.col1 > 12345
    	ORDER BY a.col3", '|');
    
    would produce the following SQL:
    select (TRIM(LEADING FROM COALESCE((CURRENT_TIMESTAMP(VARCHAR(36))), '')) || '|' ||
    COALESCE('hello world', '') || '|' ||
    TRIM(LEADING FROM col1) || '|' ||
    TRIM(LEADING FROM col2) || '|' ||
    TRIM(LEADING FROM COALESCE(col3, '')) || '|' ||
    COALESCE(col4, '') || '|' ||
    COALESCE(col5, '') || '|' ||
    TRIM(LEADING FROM COALESCE(col6, '')) || '|' ||
    TRIM(LEADING FROM COALESCE(col7, '')) || '|' ||
    TRIM(LEADING FROM COALESCE(col8, '')) || '|' ||
    TRIM(LEADING FROM COALESCE(col9, '')) || '|' ||
    TRIM(LEADING FROM COALESCE(col10, '')) || '|' ||
    TRIM(LEADING FROM COALESCE((col11(VARCHAR(10))), '')) || '|' ||
    TRIM(LEADING FROM COALESCE((col12(VARCHAR(20))), '')) || '|' ||
    TRIM(LEADING FROM COALESCE((col13(VARCHAR(30))), ''))
    ) as vartext FROM alltypetst a
    WHERE a.col1 > 12345
    ORDER BY a.col3
    
    A simple performance test on a minimal client (WinXP, 2.4 GHz x86) indicates the server side vartext is nearly 5X faster than the tdat_vartext_out equivalent. (as usual, YMMV)

    tdat_UtilitySetup
    $i = $sth->tdat_UtilitySetup(\%utility_attributes);

    As described in the Utility section, this function encapsulates much of the processing for FASTLOAD, MLOAD, and EXPORT utility applications. The attributes parameter includes the following attributes:

    Attribute (Required/Optional; default): Description
    
    • Checkpoint (Optional; default 1,000,000): Number of rows to process between checkpoints.
    • CheckpointCallback (Optional; default none): Ref to a subroutine to be called when a checkpoint event occurs.
    • Context (Optional; default none): Any value the application wishes to pass through to the callbacks; most often a hashref with various application-specific control attributes.
    • ErrorLimit (Optional; default 1,000,000): Maximum number of errors to allow before terminating a FASTLOAD or MLOAD.
    • LogTables (Optional): Arrayref of error tables for FASTLOAD or MLOAD.
    • Loopback (Optional; default none): Specifies a SQL SELECT statement to be used as the source of data for either the FASTLOAD or MLOAD utility. This attribute will result in an EXPORT job being logged on, and a matching EXPORT session generated for each FASTLOAD/MLOAD session to provide data. Note that this attribute requires the MP attribute to be enabled.
    • MP (Optional; default none): When set to a nonzero value, causes the utility to operate in multiprocess mode, whereby a separate process is fork()'ed for each utility session. In some environments, this can improve performance (especially on SMP platforms). Note that this mode is not yet available on Windows platforms, due to issues with the fork() emulation as implemented; MP can be used with Microsoft Services for UNIX 3.0, as it provides a true fork() implementation.
    • Report (Optional; default none): A callback subroutine reference to receive status messages as the utility processing progresses.
    • RequestSize (Optional; FASTLOAD and MLOAD only; default 64256): Maximum request buffer size in bytes. For Teradata versions V2R6.0 and above, this value may be increased up to 1,048,000 in order to transfer a larger number of tuples to the database in a single request. The value is silently ignored if less than 64256, greater than 1,048,000, the Teradata version is below V2R6.0, or the connections are CLI adapter based.
    • Sessions (Optional; default is the lesser of the number of AMPs in the DBMS, or 24): Number of utility sessions to logon.
    • Source (Required for FASTLOAD and MLOAD): May be a
      • ref to a subroutine to be called to import data,
      • filename description for either VARTEXT or FASTLOAD formatted files (see below), or
      • DBI connection handle of a control session to be used for the EXPORT job when the Loopback attribute is specified (see below).
      The filename description is
          < VARTEXT 'c' | INDICDATA | DATA > filename
      where 'c' is the character to be used as a field separator.
    • SQL (Required): SQL statement to be applied (INSERT for FASTLOAD, SELECT for EXPORT, or an arrayref of multiple statements for MLOAD). NOTE: Placeholders (i.e., '?' parameters) are not allowed.
    • SourceFields (Required for MLOAD): A USING clause that defines the format of the source data.
    • Target (Required for EXPORT): Either a subroutine ref to be called to export data for EXPORT, or a filename description (as defined for Source above).
    • Utility (Required): Utility to be invoked (either FASTLOAD, MLOAD, or EXPORT).

    Refer to the individual Fastload, Export, or Multiload pages for detailed examples on using tdat_UtilitySetup.

    Testing

    Several test scripts are provided. The primary test script, test.pl, attempts to exercise all the available functionality, but may also be limited to selected individual classes of tests. Additionally, from the command line, diagnostics may be enabled; the number of sessions used for utilities may be adjusted; threaded tests may be enabled or disabled; the CLI adapter may be enabled or disabled; and/or the operational database version may be specified.

    Note that several local files will be created, and various tables, macros, and procedures will be created on the target database system. The userid used for testing will need various privileges in order to complete all of the tests; in addition, the remote console tests require the CONSOLE database to exist with the DBSCONTROL empty macro in it, and execute privilege granted to the test userid; likewise, the PM/API tests require that the user have the various MONITOR privileges. "test.pl -h" produces the following information:

    test.pl [options] [ hostname userid password[,account] [default_database] ]
    where [options] are any number of instances of
            -h : print this message
            -n : do all normal SQL tests
            -f : do fastload tests
            -m : do multiload tests
            -x : do fastexport tests
            -p : do PMAPI tests
            -c : do remote console tests
            -u : do utility loopback tests
            -s count : set max sessions for utilities (default 2)
            -d logfile : turn on diagnostic tracing and log to logfile
            -t [2|1|0] : only/enable/disable thread testing (default enabled)
            -l [0|1] : use CLI adapter (default on)
            -v  : force behavior for specified integer Teradata version
                    (e.g., 6000127 eq 'V2R6.0.1.27')
    
    Default is all tests, no trace, 2 sessions, enable thread testing,
    CLI adapter enabled.
    
    If no host/user/password are given, then the environment variables
    TDAT_DBD_DSN, TDAT_DBD_USER, and TDAT_DBD_PASSWORD are used.
    
    Example:
    
    perl test.pl -n -f -p -d bugtest.txt localhost dbitst dbitst
    
    will use the localhost, user dbitst password dbitst and perform
    only SQL, fastload, and PMAPI tests, logging traces to bugtest.txt.
    
    In addition to the primary test.pl script, several smaller scripts are provided to test individual classes of functionality; refer to the various test*.pl and TdTest*.pm files in the /t directory. Finally, a simple script, logonoff.pl, performs a logon and executes a few small queries, as a quick sanity check.

    Conformance

    DBD::Teradata 8.103 requires a minimum Perl version of 5.8.0, and a minimum DBI version of 1.40. It was tested with Perl Versions 5.8.6 and 5.8.8, and DBI version 1.54.

    The following DBI functions are not yet supported:

    DBI->data_sources()
    $dbh->prepare_cached()
    $dbh->type_info()
    
    Also be advised that using either selectall_arrayref() or fetchall_arrayref() is probably a bad idea unless you know the number of rows returned is reasonably small.

    DBD::Teradata has been successfully tested with the DBI::PurePerl capability introduced in DBI 1.24.

    Threaded applications should consider using the latest versions of threads and threads::shared, available on CPAN.

    Change History

    Refer to the Changes.txt file included in the software bundle.

    Tips & Tricks

    • Review the various included test scripts and modules for examples of using the various features of DBD::Teradata.

    • Keep in mind that some DDL statements may return errors that are actually acceptable in some cases, e.g., a "precautionary" DROP TABLE returning a 3807 error if the table doesn't exist.

    • For optimal performance when bulkloading via SQL sessions, turn off AutoCommit and explicitly commit() at periodic intervals.

    • If you need to re-execute a previously prepared and executed data-returning statement before the returned rowset has been completely consumed, you must call $sth->finish() first.

    • Consider setting both PrintError and RaiseError to zero during DBI->connect(), and explicitly checking for errors yourself; otherwise, you may exit unexpectedly or get spurious error messages in the output when you're just doing a "precautionary" DROP on a non-existent database object.

    • SHOW commands return carriage returns (i.e., "\r") where newlines ("\n") would normally be expected; remember to apply a quick substitute to the returned data before displaying.

    • DBIx::Chart can be a useful tool for doing quick data visualizations directly from your SQL.

    • SQL::Preproc provides the ability to use embedded SQL without the additional DBI wrappers, and has been tested with Teradata.

    LICENSE

    As of version 12.001, permission to use, modify, and redistribute DBD::Teradata is provided under the terms of the Gnu General Public License version 2.

    		    GNU GENERAL PUBLIC LICENSE
    		       Version 2, June 1991
    
     Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
     51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
     Everyone is permitted to copy and distribute verbatim copies
     of this license document, but changing it is not allowed.
    
    			    Preamble
    
      The licenses for most software are designed to take away your
    freedom to share and change it.  By contrast, the GNU General Public
    License is intended to guarantee your freedom to share and change free
    software--to make sure the software is free for all its users.  This
    General Public License applies to most of the Free Software
    Foundation's software and to any other program whose authors commit to
    using it.  (Some other Free Software Foundation software is covered by
    the GNU Lesser General Public License instead.)  You can apply it to
    your programs, too.
    
      When we speak of free software, we are referring to freedom, not
    price.  Our General Public Licenses are designed to make sure that you
    have the freedom to distribute copies of free software (and charge for
    this service if you wish), that you receive source code or can get it
    if you want it, that you can change the software or use pieces of it
    in new free programs; and that you know you can do these things.
    
      To protect your rights, we need to make restrictions that forbid
    anyone to deny you these rights or to ask you to surrender the rights.
    These restrictions translate to certain responsibilities for you if you
    distribute copies of the software, or if you modify it.
    
      For example, if you distribute copies of such a program, whether
    gratis or for a fee, you must give the recipients all the rights that
    you have.  You must make sure that they, too, receive or can get the
    source code.  And you must show them these terms so they know their
    rights.
    
      We protect your rights with two steps: (1) copyright the software, and
    (2) offer you this license which gives you legal permission to copy,
    distribute and/or modify the software.
    
      Also, for each author's protection and ours, we want to make certain
    that everyone understands that there is no warranty for this free
    software.  If the software is modified by someone else and passed on, we
    want its recipients to know that what they have is not the original, so
    that any problems introduced by others will not reflect on the original
    authors' reputations.
    
      Finally, any free program is threatened constantly by software
    patents.  We wish to avoid the danger that redistributors of a free
    program will individually obtain patent licenses, in effect making the
    program proprietary.  To prevent this, we have made it clear that any
    patent must be licensed for everyone's free use or not licensed at all.
    
      The precise terms and conditions for copying, distribution and
    modification follow.
    
    		    GNU GENERAL PUBLIC LICENSE
       TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
    
      0. This License applies to any program or other work which contains
    a notice placed by the copyright holder saying it may be distributed
    under the terms of this General Public License.  The "Program", below,
    refers to any such program or work, and a "work based on the Program"
    means either the Program or any derivative work under copyright law:
    that is to say, a work containing the Program or a portion of it,
    either verbatim or with modifications and/or translated into another
    language.  (Hereinafter, translation is included without limitation in
    the term "modification".)  Each licensee is addressed as "you".
    
    Activities other than copying, distribution and modification are not
    covered by this License; they are outside its scope.  The act of
    running the Program is not restricted, and the output from the Program
    is covered only if its contents constitute a work based on the
    Program (independent of having been made by running the Program).
    Whether that is true depends on what the Program does.
    
      1. You may copy and distribute verbatim copies of the Program's
    source code as you receive it, in any medium, provided that you
    conspicuously and appropriately publish on each copy an appropriate
    copyright notice and disclaimer of warranty; keep intact all the
    notices that refer to this License and to the absence of any warranty;
    and give any other recipients of the Program a copy of this License
    along with the Program.
    
    You may charge a fee for the physical act of transferring a copy, and
    you may at your option offer warranty protection in exchange for a fee.
    
      2. You may modify your copy or copies of the Program or any portion
    of it, thus forming a work based on the Program, and copy and
    distribute such modifications or work under the terms of Section 1
    above, provided that you also meet all of these conditions:
    
        a) You must cause the modified files to carry prominent notices
        stating that you changed the files and the date of any change.
    
        b) You must cause any work that you distribute or publish, that in
        whole or in part contains or is derived from the Program or any
        part thereof, to be licensed as a whole at no charge to all third
        parties under the terms of this License.
    
        c) If the modified program normally reads commands interactively
        when run, you must cause it, when started running for such
        interactive use in the most ordinary way, to print or display an
        announcement including an appropriate copyright notice and a
        notice that there is no warranty (or else, saying that you provide
        a warranty) and that users may redistribute the program under
        these conditions, and telling the user how to view a copy of this
        License.  (Exception: if the Program itself is interactive but
        does not normally print such an announcement, your work based on
        the Program is not required to print an announcement.)
    
    These requirements apply to the modified work as a whole.  If
    identifiable sections of that work are not derived from the Program,
    and can be reasonably considered independent and separate works in
    themselves, then this License, and its terms, do not apply to those
    sections when you distribute them as separate works.  But when you
    distribute the same sections as part of a whole which is a work based
    on the Program, the distribution of the whole must be on the terms of
    this License, whose permissions for other licensees extend to the
    entire whole, and thus to each and every part regardless of who wrote it.
    
    Thus, it is not the intent of this section to claim rights or contest
    your rights to work written entirely by you; rather, the intent is to
    exercise the right to control the distribution of derivative or
    collective works based on the Program.
    
    In addition, mere aggregation of another work not based on the Program
    with the Program (or with a work based on the Program) on a volume of
    a storage or distribution medium does not bring the other work under
    the scope of this License.
    
      3. You may copy and distribute the Program (or a work based on it,
    under Section 2) in object code or executable form under the terms of
    Sections 1 and 2 above provided that you also do one of the following:
    
        a) Accompany it with the complete corresponding machine-readable
        source code, which must be distributed under the terms of Sections
        1 and 2 above on a medium customarily used for software interchange; or,
    
        b) Accompany it with a written offer, valid for at least three
        years, to give any third party, for a charge no more than your
        cost of physically performing source distribution, a complete
        machine-readable copy of the corresponding source code, to be
        distributed under the terms of Sections 1 and 2 above on a medium
        customarily used for software interchange; or,
    
        c) Accompany it with the information you received as to the offer
        to distribute corresponding source code.  (This alternative is
        allowed only for noncommercial distribution and only if you
        received the program in object code or executable form with such
        an offer, in accord with Subsection b above.)
    
    The source code for a work means the preferred form of the work for
    making modifications to it.  For an executable work, complete source
    code means all the source code for all modules it contains, plus any
    associated interface definition files, plus the scripts used to
    control compilation and installation of the executable.  However, as a
    special exception, the source code distributed need not include
    anything that is normally distributed (in either source or binary
    form) with the major components (compiler, kernel, and so on) of the
    operating system on which the executable runs, unless that component
    itself accompanies the executable.
    
    If distribution of executable or object code is made by offering
    access to copy from a designated place, then offering equivalent
    access to copy the source code from the same place counts as
    distribution of the source code, even though third parties are not
    compelled to copy the source along with the object code.
    
      4. You may not copy, modify, sublicense, or distribute the Program
    except as expressly provided under this License.  Any attempt
    otherwise to copy, modify, sublicense or distribute the Program is
    void, and will automatically terminate your rights under this License.
    However, parties who have received copies, or rights, from you under
    this License will not have their licenses terminated so long as such
    parties remain in full compliance.
    
      5. You are not required to accept this License, since you have not
    signed it.  However, nothing else grants you permission to modify or
    distribute the Program or its derivative works.  These actions are
    prohibited by law if you do not accept this License.  Therefore, by
    modifying or distributing the Program (or any work based on the
    Program), you indicate your acceptance of this License to do so, and
    all its terms and conditions for copying, distributing or modifying
    the Program or works based on it.
    
      6. Each time you redistribute the Program (or any work based on the
    Program), the recipient automatically receives a license from the
    original licensor to copy, distribute or modify the Program subject to
    these terms and conditions.  You may not impose any further
    restrictions on the recipients' exercise of the rights granted herein.
    You are not responsible for enforcing compliance by third parties to
    this License.
    
      7. If, as a consequence of a court judgment or allegation of patent
    infringement or for any other reason (not limited to patent issues),
    conditions are imposed on you (whether by court order, agreement or
    otherwise) that contradict the conditions of this License, they do not
    excuse you from the conditions of this License.  If you cannot
    distribute so as to satisfy simultaneously your obligations under this
    License and any other pertinent obligations, then as a consequence you
    may not distribute the Program at all.  For example, if a patent
    license would not permit royalty-free redistribution of the Program by
    all those who receive copies directly or indirectly through you, then
    the only way you could satisfy both it and this License would be to
    refrain entirely from distribution of the Program.
    
    If any portion of this section is held invalid or unenforceable under
    any particular circumstance, the balance of the section is intended to
    apply and the section as a whole is intended to apply in other
    circumstances.
    
    It is not the purpose of this section to induce you to infringe any
    patents or other property right claims or to contest validity of any
    such claims; this section has the sole purpose of protecting the
    integrity of the free software distribution system, which is
    implemented by public license practices.  Many people have made
    generous contributions to the wide range of software distributed
    through that system in reliance on consistent application of that
    system; it is up to the author/donor to decide if he or she is willing
    to distribute software through any other system and a licensee cannot
    impose that choice.
    
    This section is intended to make thoroughly clear what is believed to
    be a consequence of the rest of this License.
    
      8. If the distribution and/or use of the Program is restricted in
    certain countries either by patents or by copyrighted interfaces, the
    original copyright holder who places the Program under this License
    may add an explicit geographical distribution limitation excluding
    those countries, so that distribution is permitted only in or among
    countries not thus excluded.  In such case, this License incorporates
    the limitation as if written in the body of this License.
    
      9. The Free Software Foundation may publish revised and/or new versions
    of the General Public License from time to time.  Such new versions will
    be similar in spirit to the present version, but may differ in detail to
    address new problems or concerns.
    
    Each version is given a distinguishing version number.  If the Program
    specifies a version number of this License which applies to it and "any
    later version", you have the option of following the terms and conditions
    either of that version or of any later version published by the Free
    Software Foundation.  If the Program does not specify a version number of
    this License, you may choose any version ever published by the Free Software
    Foundation.
    
      10. If you wish to incorporate parts of the Program into other free
    programs whose distribution conditions are different, write to the author
    to ask for permission.  For software which is copyrighted by the Free
    Software Foundation, write to the Free Software Foundation; we sometimes
    make exceptions for this.  Our decision will be guided by the two goals
    of preserving the free status of all derivatives of our free software and
    of promoting the sharing and reuse of software generally.
    
    			    NO WARRANTY
    
      11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
    FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
    OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
    PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
    OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
    MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
    TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
    PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
    REPAIR OR CORRECTION.
    
      12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
    WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
    REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
    INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
    OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
    TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
    YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
    PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
    POSSIBILITY OF SUCH DAMAGES.
    
    

    Support

    DBD::Teradata is provided as is. Support questions may be directed to Presicient. Note that Presicient has discontinued development of, and support for, its Teradata products; it may, however, at its sole discretion, provide support for individual issues.

    TO DO List

    Refer to the TODO.txt file included in the software bundle.

    NOTE: The list should not be considered a commitment to implement any of the listed items; it is provided only to inform and to solicit feedback.

    Copyright

    Copyright (c) 2001-2009 Presicient Corp., USA


  • Teradata® is a registered trademark of Teradata Corporation.