# Bulk Import Admin Panel
CSV import wizard with intelligent field mapping, duplicate detection, and real-time progress tracking
## Overview
The Bulk Import admin panel provides a guided, 5-step wizard for importing large datasets via CSV files. It supports all major object types (customers, vendors, employees, accounts, items, dimensions, fixed assets, and transactions) with intelligent field mapping, duplicate detection, and real-time progress tracking.
## Supported Object Types
| Object Type | Required Fields | Duplicate Detection |
|---|---|---|
| Customer | Name, email | Tax ID, then name (case-insensitive) |
| Vendor | Name, email, payment terms | Tax ID, then name (case-insensitive) |
| Employee | First name, last name, work email, hire date | Work email |
| Account | Account number, name, type, normal balance | Account number |
| Item | Item number, item name | Item number |
| Dimension Type | Name, display name, applies to | Name |
| Dimension Value | Dimension type, code, name | Type + code |
| Fixed Asset | Asset number, name, entity, cost, useful life, in-service date | Asset number + entity |
| Transaction | Transaction type, entity | Type + reference number + party |
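As an illustration, the customer lookup order in the table above (tax ID first, then case-insensitive name) could be sketched as follows. The function and record shapes here are hypothetical, not the panel's actual code:

```python
def find_duplicate_customer(record, existing):
    """Illustrative duplicate check: match on tax ID first, then fall
    back to a case-insensitive name comparison (assumed record shape)."""
    tax_id = record.get("tax_id")
    if tax_id:
        for candidate in existing:
            if candidate.get("tax_id") == tax_id:
                return candidate
    name = (record.get("name") or "").strip().lower()
    for candidate in existing:
        if (candidate.get("name") or "").strip().lower() == name:
            return candidate
    return None
```

The same two-pass pattern applies to vendors; other object types use the single keys listed in the table.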
## The 5-Step Wizard
### Step 1: Select Type
Choose the object type you want to import and select the target legal entity. The entity selection determines where the imported data will be scoped.
### Step 2: Upload CSV
Upload your CSV file by dragging and dropping or browsing. The system:
- Parses the CSV instantly for a preview
- Shows the first 50 rows in a scrollable table
- Displays the total row count and detected column headers
- Uploads the file to secure cloud storage for processing
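The parse-and-preview step can be sketched with Python's standard `csv` module; the return shape here is an assumption for illustration:

```python
import csv
import io

def preview_csv(text, limit=50):
    """Parse CSV text and return the detected headers, the first
    `limit` rows for preview, and the total data-row count.
    (Illustrative sketch of Step 2, not the actual implementation.)"""
    reader = csv.reader(io.StringIO(text))
    headers = next(reader, [])
    rows = [r for r in reader if r]  # skip blank lines
    return {"headers": headers, "preview": rows[:limit], "total": len(rows)}
```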
### Step 3: Field Mapping
This is where the wizard maps your CSV columns to the target fields:
- Auto-mapping: Column headers are automatically matched using intelligent normalization (lowercasing, stripping whitespace, replacing spaces with underscores)
- Required fields: Highlighted and must be mapped or given a default value
- Optional fields: Can be left unmapped
- Default values: Set fallback values for unmapped required fields (e.g., default currency for all records)
- Metadata catch-all: Toggle to store all unmapped CSV columns as metadata -- this preserves legacy IDs and custom fields from your source system without schema changes
- Mapping templates: Save and reuse mapping configurations for recurring imports
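The auto-mapping normalization described above (lowercasing, stripping whitespace, replacing spaces with underscores) amounts to something like this sketch; the function names are illustrative:

```python
def normalize(header):
    """Normalize a header for matching: lowercase, trim, spaces to underscores."""
    return header.strip().lower().replace(" ", "_")

def auto_map(csv_headers, target_fields):
    """Match CSV headers to target fields after normalizing both sides.
    Unmatched headers are simply left out of the mapping."""
    targets = {normalize(f): f for f in target_fields}
    mapping = {}
    for header in csv_headers:
        key = normalize(header)
        if key in targets:
            mapping[header] = targets[key]
    return mapping
```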
### Step 4: Review
A summary of all settings before processing:
- Object type and target entity
- File information and row count
- Complete field mapping table (CSV column to target field)
- Default values and import options
- Duplicate handling mode (skip, update, or error)
### Step 5: Progress
Real-time monitoring during import processing:
- Progress bar with percentage
- Status badge (pending, validating, importing, completed, failed)
- Live counters: imported, updated, skipped, errors
- Error list with row numbers, field names, and record identifiers for easy cross-referencing
- Duration display on completion
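A client polling for progress might receive a payload shaped like this sketch; the field names and the percentage formula are assumptions, not the panel's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class ImportProgress:
    """Hypothetical progress payload mirroring the counters above."""
    status: str = "pending"  # pending | validating | importing | completed | failed
    total: int = 0
    imported: int = 0
    updated: int = 0
    skipped: int = 0
    errors: list = field(default_factory=list)

    @property
    def percent(self):
        """Percentage of rows handled so far (imported, updated, skipped, or errored)."""
        done = self.imported + self.updated + self.skipped + len(self.errors)
        return round(100 * done / self.total, 1) if self.total else 0.0
```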
## Import History
The import history page lists all past imports with:
- Status, record counts, and processing duration
- Filter by object type
- Pagination (20 per page)
- Color-coded status badges
- Quick link to start a new import
## Address Support
Three object types support importing addresses via flat CSV columns:
Customer addresses use `billing_` and `shipping_` prefixed columns: `billing_address_line_1`, `billing_city`, `billing_state`, `billing_postal_code`, `billing_country`, and the matching `shipping_address_line_1`, `shipping_city`, etc.
Vendor addresses use `billing_` (primary) and `shipping_` (remit-to) prefixed columns with the same field pattern.
Employee addresses use `home_` and `mailing_` prefixed columns: `home_address_line_1`, `home_city`, `home_state`, etc., and the matching `mailing_address_line_1`, `mailing_city`, etc.
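Collapsing the flat prefixed columns into nested address records could look like the sketch below; the helper and the exact field list are assumptions based on the column pattern above:

```python
# Assumed address field names; the actual schema may differ.
ADDRESS_FIELDS = ["address_line_1", "address_line_2", "city",
                  "state", "postal_code", "country"]

def extract_addresses(row, prefixes):
    """Collect flat prefixed columns (e.g. billing_city) into one
    nested address dict per prefix. Hypothetical helper mirroring
    the column pattern described above."""
    out = {}
    for prefix in prefixes:
        address = {f: row[f"{prefix}_{f}"]
                   for f in ADDRESS_FIELDS if f"{prefix}_{f}" in row}
        if address:
            out[prefix] = address
    return out
```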
## Metadata Catch-All
When the "Store unmapped columns as metadata" option is enabled:
- All CSV columns not mapped to target fields are identified
- Values from these columns are collected for each row
- The data is stored as structured metadata on each record
This is especially useful for data migration, as it preserves information from the source system (legacy IDs, custom fields, internal codes) without requiring any schema changes. The data is available for reference and can be used for future lookups.
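The catch-all step amounts to collecting whatever was not mapped; a minimal sketch, assuming rows are dicts keyed by CSV column:

```python
def collect_metadata(row, mapped_columns):
    """Gather values from CSV columns that were not mapped to target
    fields, skipping empties, so they can be stored as metadata on
    the record. (Illustrative sketch, not the actual implementation.)"""
    return {col: val for col, val in row.items()
            if col not in mapped_columns and val not in (None, "")}
```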
## Transaction Import: Flexible Party Resolution
When importing transactions, parties (vendors, customers, employees) can be identified by multiple identifier types:
Vendor resolution (checked in order):
- Internal ID
- Vendor name (case-insensitive)
- Global vendor ID
- External vendor ID (from metadata)
- Tax ID
Customer and employee resolution follow similar patterns with their respective identifiers.
Account resolution accepts either account number or external account ID.
All reference data is pre-cached at import start for fast lookups during processing.
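The ordered fallback over pre-cached lookups might be sketched like this; the cache keys and function shape are assumptions for illustration:

```python
def resolve_vendor(value, cache):
    """Try vendor identifiers in the documented order: internal ID,
    name (case-insensitive), global ID, external ID, tax ID.
    `cache` maps identifier type to a lookup dict built once at
    import start. (Hypothetical sketch of the resolution order.)"""
    lookups = [
        cache["by_id"].get(value),
        cache["by_name"].get(str(value).strip().lower()),
        cache["by_global_id"].get(value),
        cache["by_external_id"].get(value),
        cache["by_tax_id"].get(value),
    ]
    return next((v for v in lookups if v is not None), None)
```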
## Mapping Templates
Reusable mapping configurations save time for recurring imports:
- Save a mapping after configuring it in Step 3
- Load a saved template to auto-populate mappings for future imports
- Delete templates that are no longer needed
- Templates are scoped per object type
## Error Handling
### Validation Errors
Each error includes:
- Row number for CSV cross-reference
- Field name causing the error
- Descriptive error message
- Record identifier (e.g., company name) for quick identification
### Batch Safety
Each batch of records is processed within a database transaction. If any record in a batch fails fatally, that batch rolls back while other batches remain unaffected.
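The batch-per-transaction pattern can be sketched with `sqlite3` (used here only for illustration; the actual database layer is not specified in this document):

```python
import sqlite3

def import_batches(conn, batches, insert_one):
    """Process each batch inside its own transaction. A fatal failure
    rolls back only that batch; earlier and later batches are
    unaffected. Returns the indices of failed batches.
    (Sketch of the batch-safety behavior described above.)"""
    failed = []
    for i, batch in enumerate(batches):
        try:
            with conn:  # commits on success, rolls back on exception
                for record in batch:
                    insert_one(conn, record)
        except sqlite3.Error:
            failed.append(i)
    return failed
```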
### Error Limits
- First 100 errors are stored and displayed
- First 100 warnings are stored
- First 1,000 created/updated record IDs are tracked
- Processing continues after validation errors (does not abort on individual failures)
## Key Design Principles
| Principle | Benefit |
|---|---|
| Memory-efficient streaming | A 500,000-row CSV uses the same memory as a 500-row CSV |
| Name-based duplicate detection | Works reliably with bulk CSV data where emails may be shared |
| Entity-level field injection | Select the legal entity once; it is applied to every record |
| Metadata preservation | Unmapped columns are stored, not discarded |
| Batch-level transactions | One bad record does not corrupt the entire import |
| Reusable templates | Save time on recurring import workflows |