Moving data to SQL, CSV, Pandas etc.


CSV

This uses the standard library csv module:

"""Export to CSV."""
import sys
import csv
from dbfread import DBF

table = DBF('files/people.dbf')
writer = csv.writer(sys.stdout)

writer.writerow(table.field_names)
for record in table:
    writer.writerow(list(record.values()))

The output is:

NAME,BIRTHDATE
Alice,1987-03-01
Bob,1980-11-12

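Since dbfread records behave like dictionaries, csv.DictWriter is also a natural fit. A minimal sketch, using sample records in place of an actual DBF file (the field names and values here are assumptions taken from the example output above):

```python
import csv
import io

# Sample records shaped like those dbfread yields
# (field names and data are assumptions from the example above).
records = [
    {'NAME': 'Alice', 'BIRTHDATE': '1987-03-01'},
    {'NAME': 'Bob', 'BIRTHDATE': '1980-11-12'},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=['NAME', 'BIRTHDATE'])
writer.writeheader()
writer.writerows(records)
```

With a real table the same pattern should work as DictWriter(f, fieldnames=table.field_names) followed by writer.writerows(table), since each record is dict-like.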
Pandas Data Frames

Load the contents of a DBF file into a Pandas data frame:

from dbfread import DBF
from pandas import DataFrame

dbf = DBF('files/people.dbf')
frame = DataFrame(iter(dbf))
print(frame)


This will print:

    BIRTHDATE   NAME
0  1987-03-01  Alice
1  1980-11-12    Bob

The iter() is required. Without it Pandas will not realize that it can iterate over the table.

Pandas will create a new list internally before converting the records to a data frame, so all of the records will be loaded into memory at once. There seems to be no way around this at the moment.
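If a table is too large to hold in memory as a single frame, one workaround is to build a separate data frame per chunk of records. A sketch of the chunking itself, using itertools.islice with a plain generator standing in for the DBF object (the records are hypothetical):

```python
from itertools import islice

def chunks(iterable, size):
    """Yield successive lists of up to `size` items from `iterable`."""
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

# A generator standing in for DBF('files/people.dbf') (hypothetical records).
records = ({'NAME': 'P%d' % i} for i in range(5))

# Each chunk could be passed to DataFrame() on its own.
sizes = [len(chunk) for chunk in chunks(records, 2)]
```

Each chunk can then be converted with DataFrame(chunk) and processed or written out before the next one is read, which bounds memory use at one chunk's worth of records.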

dataset (SQL)

The dataset package makes it easy to move data to a modern database. Here’s how you can insert the people table into an SQLite database:

Convert a DBF file to an SQLite table. This requires the dataset package:

import dataset
from dbfread import DBF

# Change to "dataset.connect('people.sqlite')" if you want a file.
db = dataset.connect('sqlite:///:memory:')
table = db['people']

for record in DBF('files/people.dbf', lowernames=True):
    table.insert(record)

# Select and print a record just to show that it worked.
print(table.find_one(name='Alice'))

(This also creates the schema.)
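For comparison, the same insert can be done with the standard library's sqlite3 module, at the cost of declaring the schema by hand. A sketch with sample records standing in for the DBF file (the column names and data are assumptions based on the example above):

```python
import sqlite3

# Records as dbfread would yield them with lowernames=True (sample data).
records = [
    {'name': 'Alice', 'birthdate': '1987-03-01'},
    {'name': 'Bob', 'birthdate': '1980-11-12'},
]

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE people (name TEXT, birthdate TEXT)')
db.executemany(
    'INSERT INTO people (name, birthdate) VALUES (:name, :birthdate)',
    records,
)

# Select a record just to show that it worked.
row = db.execute(
    "SELECT name, birthdate FROM people WHERE name = 'Alice'"
).fetchone()
```

The convenience dataset adds is exactly the part done by hand here: it inspects the first record and creates the table and columns for you.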


You can use the included example program dbf2sqlite to insert tables into an SQLite database:

dbf2sqlite -o example.sqlite table1.dbf table2.dbf

This will create one table for each DBF file. You can also omit the -o example.sqlite option to have the SQL printed directly to stdout.

If you get character encoding errors you can pass --encoding to override the encoding, for example:

dbf2sqlite --encoding=latin1 ...
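The errors come from decoding the raw field bytes with the wrong codec. A minimal standard-library illustration of what the override changes (the byte string is a made-up example):

```python
# A latin1-encoded name as it might appear in a DBF field (made-up bytes).
raw = b'Andr\xe9'

# raw.decode('utf-8') would raise UnicodeDecodeError here,
# because the lone 0xE9 byte is not valid UTF-8.
name = raw.decode('latin1')
```

The DBF class accepts the same kind of override directly via its encoding argument, e.g. DBF('people.dbf', encoding='latin1').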