10 Don't Assume LOAD DATA Knows More than It Does


and escape character settings. If your input doesn't match those assumptions, you need to tell

MySQL about it.

When in doubt, check the contents of your datafile using a hex dump program or other utility

that displays a visible representation of whitespace characters like tab, carriage return, and

linefeed. Under Unix, the od program can display file contents in a variety of formats. If you

don't have od or some comparable utility, the transfer directory of the recipes distribution

contains hex dumpers written in Perl and Python (hexdump.pl and hexdump.py), as well as a

couple of programs that display printable representations of all characters of a file (see.pl and

see.py). You may find them useful for examining files to see what they really contain. In some

cases, you may be surprised to discover that a file's contents are different than you think. This

is in fact quite likely if the file has been transferred from one machine to another:

An FTP transfer between machines running different operating systems typically

translates line endings to those that are appropriate for the destination machine if the

transfer is performed in text mode rather than in binary (image) mode. Suppose you

have tab-delimited linefeed-terminated records in a datafile that load into MySQL on a

Unix system just fine using the default LOAD DATA settings. If you copy the file to a

Windows machine with FTP using a text transfer mode, the linefeeds probably will be

converted to carriage return/linefeed pairs. On that machine, the file will not load

properly with the same LOAD DATA statement, because its contents will have been

changed. Does MySQL have any way of knowing that? No. So it's up to you to tell it,

by adding a LINES TERMINATED BY '\r\n' clause to the statement. Transfers

between any two systems with dissimilar default line endings can cause these

changes. For example, a Macintosh file containing carriage returns may contain

linefeeds after transfer to a Unix system. You should either account for such changes

with a LINES TERMINATED BY clause that reflects the modified line-ending sequence,

or transfer the file in binary mode so that its contents do not change.

Datafiles pasted into email messages often do not survive intact. Mail software may

wrap (break) long lines or convert line-ending sequences. If you must transfer a

datafile by email, it's best sent as an attachment.
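A quick way to see such differences for yourself is to fabricate a small sample and inspect it with od (a sketch; the sample file and its contents are invented for illustration):

```shell
# One tab-delimited record with a "Windows-style" CRLF line ending.
printf 'a\tb\r\n' > sample.txt
# od -c prints a symbol for each byte, so \t, \r, and \n become visible.
od -c sample.txt
```

On a file transferred in text mode from Windows, each line shows a \r before the \n; a native Unix file shows only \n.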

10.11 Skipping Datafile Lines

10.11.1 Problem

You want LOAD DATA to skip over the first line or lines of your datafile before starting to load data.


10.11.2 Solution

Tell LOAD DATA how many lines to ignore.

10.11.3 Discussion

To skip over the first n lines of a datafile, add an IGNORE n LINES clause to the LOAD DATA

statement. For example, if a tab-delimited file begins with a line consisting of column headers,

you can skip it like this:

mysql> LOAD DATA LOCAL INFILE 'mytbl.txt' INTO TABLE mytbl IGNORE 1 LINES;
As of MySQL 4.0.2, mysqlimport supports an --ignore-lines=n option that has the same effect.

IGNORE is often useful with files generated by external sources. For example, FileMaker Pro

can export data in what it calls merge format, which is essentially CSV format with an initial

line of column labels. The following statement would be appropriate for skipping the labels in a

merge file created by FileMaker Pro under Mac OS that has carriage return line endings:

mysql> LOAD DATA LOCAL INFILE 'mydata.txt' INTO TABLE mytbl
    -> FIELDS TERMINATED BY ','
    -> LINES TERMINATED BY '\r'
    -> IGNORE 1 LINES;
Note that importing a FileMaker Pro file often is not actually this easy. For example, if it

contains dates, they may not be in a format that MySQL likes. You'll need to preprocess your

file first or postprocess it after loading it. (See Recipe 10.41.)

10.12 Specifying Input Column Order

10.12.1 Problem

The columns in your datafile aren't in the same order as the columns in the table into which

you're loading the file.

10.12.2 Solution

Tell LOAD DATA how to match up the table and the file by indicating which table columns

correspond to the datafile columns.

10.12.3 Discussion

LOAD DATA assumes the columns in the datafile have the same order as the columns in the

table. If that's not true, you can specify a list to indicate which table columns the datafile

columns should be loaded into. Suppose your table has columns a, b, and c, but successive

columns in the datafile correspond to columns b, c, and a. You can load the file like this:

mysql> LOAD DATA LOCAL INFILE 'mytbl.txt' INTO TABLE mytbl (b, c, a);

The equivalent mysqlimport statement uses the --columns option to specify the column list:

% mysqlimport --local --columns=b,c,a cookbook mytbl.txt

The --columns option for mysqlimport was introduced in MySQL 3.23.17. If you have an older

version, you must either use LOAD DATA directly or preprocess your datafile to rearrange the

file's columns into the order in which they occur in the table. (See Recipe 10.20 for a utility

that can do this.)
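In the meantime, a generic column shuffle is easy with awk (a sketch assuming tab-delimited input; for the b, c, a file above, printing fields 3, 1, 2 yields table order a, b, c, and the output filename is illustrative):

```shell
# Rearrange a tab-delimited file whose columns are b, c, a
# into table order a, b, c.
awk -F'\t' 'BEGIN { OFS = FS } { print $3, $1, $2 }' mytbl.txt > mytbl-reordered.txt
```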

10.13 Skipping Datafile Columns

10.13.1 Problem

Your datafile contains columns that should be ignored rather than loaded into the table.

10.13.2 Solution

That's not a problem if the columns are at the ends of the input lines. Otherwise, you'll need

to preprocess the datafile before loading it.

10.13.3 Discussion

Extra columns that occur at the end of input lines are easy to handle. If a line contains more

columns than are in the table, LOAD DATA just ignores them (though it may indicate a nonzero

warning count).

Skipping columns in the middle of lines is a bit more involved. Suppose you want to load

information from a Unix password file, /etc/passwd, which contains lines in the following
format:

account:password:UID:GID:GECOS:directory:shell
Suppose also that you don't want to bother loading the password column. A table to hold the

information in the other columns looks like this (the comments label each field):

CREATE TABLE passwd
(
 account   CHAR(8),    # login name
 uid       INT,        # user ID
 gid       INT,        # group ID
 gecos     CHAR(60),   # name, phone, office, etc.
 directory CHAR(60),   # home directory
 shell     CHAR(60)    # command interpreter
);

To load the file, we need to specify that the column delimiter is a colon, which is easily

handled with a FIELDS clause:

FIELDS TERMINATED BY ':'
However, we must also tell LOAD DATA to skip the second field that contains the password.

That's a problem, because LOAD DATA always wants to load successive columns from the

datafile. You can tell it which table column each datafile column corresponds to, but you can't

tell it to skip columns in the file. To deal with this difficulty, we can preprocess the input file

into a temporary file that doesn't contain the password value, then load the temporary file.

Under Unix, you can use the cut utility to extract the columns that you want, like this:

% cut -d":" -f1,3- /etc/passwd > passwd.txt

The -d option specifies a field delimiter of : and the -f option indicates that you want to cut

column one and all columns from the third to the end of the line. The effect is to cut all but the

second column. (Run man cut for more information about the cut command.) Then use LOAD

DATA to import the resulting passwd.txt file into the passwd table like this:

mysql> LOAD DATA LOCAL INFILE 'passwd.txt' INTO TABLE passwd
    -> FIELDS TERMINATED BY ':';
The corresponding mysqlimport command is:

% mysqlimport --local --fields-terminated-by=":" cookbook passwd.txt

10.13.4 See Also

cut always displays output columns in the same order they occur in the file, no matter what

order you use when you list them with the -f option. (For example, cut -f1,2,3 and cut -f3,2,1

produce the same output.) Recipe 10.20 discusses a utility that can pull out and display

columns in any order.
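As an interim alternative to cut, awk can both drop and reorder fields in a single pass (a sketch; the field numbers follow the /etc/passwd layout shown earlier):

```shell
# Emit every /etc/passwd field except the password (field 2),
# preserving the original field order.
awk -F: 'BEGIN { OFS = ":" } { print $1, $3, $4, $5, $6, $7 }' /etc/passwd > passwd.txt
```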

10.14 Exporting Query Results from MySQL

10.14.1 Problem

You want to export the result of a query from MySQL into a file or another program.

10.14.2 Solution

Use the SELECT ... INTO OUTFILE statement or redirect the output of the mysql program.

10.14.3 Discussion

MySQL provides a SELECT ... INTO OUTFILE statement that exports a query result directly

into a file on the server host. Another way to export a query, if you want to capture the result

on the client host instead, is to redirect the output of the mysql program. These methods have

different strengths and weaknesses, so you should get to know them both and apply

whichever one best suits a given situation.

10.14.4 Exporting with the SELECT ... INTO OUTFILE Statement

The syntax for this statement combines a regular SELECT with INTO OUTFILE filename at

the end. The default output format is the same as for LOAD DATA, so the following statement

exports the passwd table into /tmp/passwd.txt as a tab-delimited, linefeed-terminated file:

mysql> SELECT * FROM passwd INTO OUTFILE '/tmp/passwd.txt';

You can change the output format using options similar to those used with LOAD DATA that

indicate how to quote and delimit columns and records. To export the passwd table in CSV

format with CRLF-terminated lines, use this statement:

mysql> SELECT * FROM passwd INTO OUTFILE '/tmp/passwd.txt'
    -> FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    -> LINES TERMINATED BY '\r\n';

SELECT ... INTO OUTFILE has the following properties:

The output file is created directly by the MySQL server, so the filename should indicate

where you want the file to be written on the server host. There is no LOCAL version of

the statement analogous to the LOCAL version of LOAD DATA.

You must have the MySQL FILE privilege to execute the SELECT ... INTO statement.

The output file must not already exist. This prevents MySQL from clobbering files that

may be important.

You should have a login account on the server host or some way to retrieve the file

from that host. Otherwise, SELECT ... INTO OUTFILE likely will be of no value to you.

Under Unix, the file is created world readable and is owned by the MySQL server. This

means that although you'll be able to read the file, you may not be able to delete it.

10.14.5 Using the mysql Client to Export Data

Because SELECT ... INTO OUTFILE writes the datafile on the server host, you cannot use it

unless your MySQL account has the FILE privilege. To export data into a local file, you must

use some other strategy. If all you require is tab-delimited output, you can do a "poor-man's

export" by executing a SELECT statement with the mysql program and redirecting the output

to a file. That way you can write query results into a file on your local host without the FILE

privilege. Here's an example that exports the login name and command interpreter columns

from the passwd table created earlier in this chapter:

% mysql -e "SELECT account, shell FROM passwd" -N cookbook > shells.txt

The -e option specifies the query to execute, and -N tells mysql not to write the row of

column names that normally precedes query output. The latter option was added in MySQL

3.22.20; if your version is older than that, you can achieve the same end by telling mysql to

be "really silent" with the -ss option instead:

% mysql -e "SELECT account, shell FROM passwd" -ss cookbook > shells.txt

Note that NULL values are written as the string "NULL". Some sort of postprocessing may be

necessary to convert them, depending on what you want to do with the output file.
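For instance, if the file will later be reloaded with LOAD DATA, you might convert bare NULL fields to \N, the sequence LOAD DATA interprets as SQL NULL (a sketch assuming tab-delimited output; the filenames are illustrative):

```shell
# Turn fields consisting solely of the word NULL into \N.
awk -F'\t' 'BEGIN { OFS = FS }
  { for (i = 1; i <= NF; i++) if ($i == "NULL") $i = "\\N"; print }' \
  shells.txt > shells-fixed.txt
```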

It's possible to produce output in formats other than tab-delimited by sending the query result

into a post-processing filter that converts tabs to something else. For example, to use hash

marks as delimiters, convert all tabs to # characters (TAB indicates where you type a tab

character in the command):

% mysql -N -e "your query here" cookbook | sed -e "s/TAB/#/g" > output_file

You can also use tr for this purpose, though the syntax may vary for different implementations

of this utility. The command looks like this for Mac OS X or RedHat Linux:

% mysql -N -e "your query here" cookbook | tr "\t" "#" > output_file


The mysql commands just shown use -N or -ss to suppress column labels from appearing in

the output. Under some circumstances, it may be useful to include the labels. (For example,

they may be useful when importing the file later.) If so, omit the label-suppression option

from the command. In this respect, exporting query results with mysql is more flexible than

SELECT ... INTO OUTFILE because the latter cannot produce output that includes column labels.


10.14.6 See Also

Another way to export query results to a file on the client host is by using the

mysql_to_text.pl utility described in Recipe 10.18. That program has options that allow you to

specify the output format explicitly. To export a query result as an Excel spreadsheet or for

use with FileMaker Pro, see Recipe 10.40 and Recipe 10.41.

10.15 Exporting Tables as Raw Data

10.15.1 Problem

You want to export an entire table to a file.

10.15.2 Solution

Use the mysqldump program with the --tab option.

10.15.3 Discussion

The mysqldump program is used to copy or back up tables and databases. It can write table

output either as a raw datafile, or as a set of INSERT statements that recreate the records in

the table. The former capability is described here, the latter in Recipe 10.16 and Recipe 10.17.

To dump a table as a datafile, you must specify a --tab option that indicates the directory

where you want the MySQL server to write the file. (The directory must already exist; the

server won't create it.) For example, to dump the states table from the cookbook database

to a file in the /tmp directory, use a command like this:

% mysqldump --no-create-info --tab=/tmp cookbook states

mysqldump creates a datafile using the table name plus a .txt suffix, so this command will

write a file named /tmp/states.txt. This form of mysqldump is in some respects the command-line equivalent of SELECT ... INTO OUTFILE. For example, it writes out a table as a datafile

on the server host, and you must have the FILE privilege to use it. See Recipe 10.14 for a list

of general properties of SELECT ... INTO OUTFILE.

If you omit the --no-create-info option, mysqldump also will create a file /tmp/states.sql that

contains the CREATE TABLE statement for the table. (The latter file will be owned by you,

unlike the datafile, which is owned by the server.)

You can name multiple tables after the database name, in which case mysqldump writes

output files for each of them. If you don't name any tables, mysqldump writes output for

every table in the database.

mysqldump creates datafiles in tab-delimited, linefeed-terminated format by default. To

control the output format, use the --fields-enclosed-by, --fields-terminated-by, and --lines-terminated-by options (that is, the same options that mysqlimport understands as format

specifiers). For example, to write the states table in CSV format with CRLF line endings, use

this command:

% mysqldump --no-create-info --tab=/tmp \

--fields-enclosed-by="\"" --fields-terminated-by="," \

--lines-terminated-by="\r\n" cookbook states

A datafile exported this way can be imported using LOAD DATA or mysqlimport. Be sure to use

matching format specifiers when importing if you didn't dump the table using the default format.


10.16 Exporting Table Contents or Definitions in SQL Format

10.16.1 Problem

You want to export tables or databases as SQL statements to make them easier to import later.


10.16.2 Solution

Use the mysqldump program without the --tab option.

10.16.3 Discussion

As discussed in Recipe 10.15, mysqldump causes the MySQL server to write tables as raw

datafiles on the server host when it's invoked with the --tab option. If you omit --tab, the

server formats the table records as INSERT statements and returns them to mysqldump.

You can also generate the CREATE TABLE statement for each table. This provides a

convenient form of output that you can capture in a file and use later to recreate a table or

tables. It's common to use such dump files as backups or for copying tables to another MySQL

server. This section discusses how to save dump output in a file; Recipe 10.17 shows how to

send it directly to another server over the network.

To export a table in SQL format to a file, use a command like this:

% mysqldump cookbook states > dump.txt

That creates an output file dump.txt that contains both the CREATE TABLE statement and a

set of INSERT statements:

# MySQL dump 8.16
#
# Host: localhost    Database: cookbook
#--------------------------------------------------------
# Server version

#
# Table structure for table 'states'
#

CREATE TABLE states (
  name varchar(30) NOT NULL default '',
  abbrev char(2) NOT NULL default '',
  statehood date default NULL,
  pop bigint(20) default NULL,
  PRIMARY KEY (abbrev)
);

#
# Dumping data for table 'states'
#

INSERT INTO states VALUES (...);

To dump multiple tables, name them all following the database name argument. To dump an

entire database, don't name any tables after the database. If you want to dump all tables in

all databases, invoke mysqldump like this:

% mysqldump --all-databases > dump.txt

In this case, the output file also will include CREATE DATABASE and USE db_name statements

at appropriate places so that when you read in the file later, each table will be created in the

proper database. The --all-databases option is available as of MySQL 3.23.12.

Other options are available to control the output format:

--no-create-info

Suppress the CREATE TABLE statements. Use this option when you want to dump table contents only.

--no-data

Suppress the INSERT statements. Use this option when you want to dump table definitions only.

--add-drop-table

Precede each CREATE TABLE statement with a DROP TABLE statement. This is useful for generating a file that you can

use later to recreate tables from scratch.

--no-create-db

Suppress the CREATE DATABASE statements that the --all-databases option normally produces.

Suppose now that you've used mysqldump to create a SQL-format dump file. How do you

import the file back into MySQL? One common mistake at this point is to use mysqlimport.

After all, it's logical to assume that if mysqldump exports tables, mysqlimport must import

them. Right? Sorry, no. That might be logical, but it's not always correct. It's true that if you

use the --tab option with mysqldump, you can import the resulting datafiles with mysqlimport.

But if you dump a SQL-format file, mysqlimport won't process it properly. Use the mysql

program instead. The way you do this depends on what's in the dump file. If you dumped

multiple databases using --all-databases, the file will contain the appropriate USE db_name

statements to select the databases to which each table belongs, and you need no database

argument on the command line:

% mysql < dump.txt

If you dumped tables from a single database, you'll need to tell mysql which database to

import them into:

% mysql cookbook < dump.txt

Note that with this second import command, it's possible to load the tables into a different

database than the one from which they came originally. You can use this fact, for example, to

create copies of a table or tables in a test database to use for trying out some data-manipulating statements that you're debugging, without worrying about affecting the original tables.


10.17 Copying Tables or Databases to Another Server

10.17.1 Problem

You want to copy tables or databases from one MySQL server to another.

10.17.2 Solution

Use mysqldump and mysql together, connected by a pipe.

10.17.3 Discussion

SQL-format output from mysqldump can be used to copy tables or databases from one server

to another. Suppose you want to copy the states table from the cookbook database on the

local host to the cb database on the host other-host.com. One way to do this is to dump the

output into a file (as described in Recipe 10.16):

% mysqldump cookbook states > dump.txt

Then copy dump.txt to other-host.com and run the following command there to import the

table into that server's cb database:

% mysql cb < dump.txt

Another way to accomplish this without using an intermediary file is to send the output of

mysqldump directly over the network to the remote MySQL server. If you can connect to both

servers from the host where the cookbook database resides, use this command:

% mysqldump cookbook states | mysql -h other-host.com cb

The mysqldump half of the command connects to the local server and writes the dump output

to the pipe. The mysql half of the command connects to the remote MySQL server on other-host.com. It reads the pipe for input and sends each statement to the other-host.com server.

If you cannot connect directly to the remote server using mysql from your local host, send the

dump output into a pipe that uses ssh to invoke mysql remotely on other-host.com:

% mysqldump cookbook states | ssh other-host.com mysql cb

ssh connects to other-host.com and launches mysql there. Then it reads the mysqldump

output from the pipe and passes it to the remote mysql process. Using ssh can be useful when

you want to send a dump over the network to a machine that has the MySQL port blocked by

a firewall but that allows connections on the SSH port.

If you don't have access to ssh, you may be able to use rsh instead. However, rsh is insecure,

so ssh is much preferred.

To copy multiple tables over the network, name them all following the database argument of

the mysqldump command. To copy an entire database, don't specify any table names after the

database name. mysqldump will dump all the tables contained in the database.

If you're thinking about invoking mysqldump with the --all-databases option to send all your

databases to another server, consider that the output will include the tables in the mysql

database that contains the grant tables. If the remote server has a different user population,

you probably don't want to replace that server's grant tables!

10.18 Writing Your Own Export Programs

10.18.1 Problem

MySQL's built-in export capabilities don't suffice.

10.18.2 Solution

Write your own utilities.

10.18.3 Discussion

When existing software doesn't do what you want, you can write your own programs to export

data. This section shows how to write a Perl script, mysql_to_text.pl, that executes an

arbitrary query and exports it in the format you specify. It writes output to the client host and

can include a row of column labels (neither of which SELECT ... INTO OUTFILE can do). It

produces multiple output formats more easily than by using mysql with a postprocessor, and it

writes to the client host, unlike mysqldump, which can write only SQL-format output to the client.


mysql_to_text.pl is based on the Text::CSV_XS module, which you'll need to obtain if it's not

installed on your system. Once it's installed, you can read the documentation like so:

% perldoc Text::CSV_XS

This module is convenient because all you have to do is provide an array of column values,

and it will package them up into a properly formatted output line. This makes it relatively

trivial to convert query output to CSV format. But the real benefit of using the Text::CSV_XS

module is that it's configurable; you can tell it what kind of delimiter and quote characters to

use. This means that although the module produces CSV format by default, you can configure

it to write a variety of output formats. For example, if you set the delimiter to tab and the

quote character to undef, Text::CSV_XS generates tab-delimited output. We'll take

advantage of that flexibility in this section for writing mysql_to_text.pl, and later in Recipe

10.19 to write a file-processing utility that converts files from one format to another.

mysql_to_text.pl accepts several command-line options. Some of these are for specifying

MySQL connection parameters (such as --user, --password, and --host). You're already
