Powerful, memory-efficient bulk data processing for CSV, Excel, and JSON files with streaming, validation, transformation, and performance monitoring.
- Memory-Efficient Streaming - Process large files without memory issues
- Multi-Format Support - CSV, Excel, and JSON file processing
- Data Validation - Schema, format, and custom validation rules
- Data Transformation - Cleaning, mapping, and conversion
- Progress Tracking - Real-time progress monitoring and resumable processing
- Error Handling - Comprehensive error detection and recovery
- Performance Monitoring - Built-in performance analytics and optimization
- Batch Processing - Optimized batch operations for large datasets
 
Perfect for developers who need:
- Enterprise-grade data processing with memory-efficient streaming for large files
- Production-ready data pipelines with comprehensive validation and transformation
- Scalable data processing for handling millions of records efficiently
- Real-time progress monitoring with resumable processing capabilities
- Multi-format data support for CSV, Excel, and JSON files
- Advanced error handling with rollback and recovery mechanisms
- Performance optimization with built-in monitoring and analytics
- Developer-friendly APIs with intuitive configuration and event handling
 
```bash
npm install @prathammahajan/csv-bulk-processor
```

```javascript
const BulkProcessor = require('@prathammahajan/csv-bulk-processor');
// Create processor with configuration
const processor = new BulkProcessor({
  streaming: { enabled: true, chunkSize: 1000 },
  validation: { enabled: true },
  transformation: { enabled: true },
  progress: { enabled: true }
});
// Process a file with progress tracking (call within an async function to use await)
processor.on('progress', (data) => {
  console.log(`Processed ${data.recordsProcessed} records`);
});
const result = await processor.processFile('data.csv');
console.log(`Processed ${result.result.recordsProcessed} records`);
console.log(`Processing time: ${result.processingTime}ms`);
console.log(`Performance: ${result.analytics.metrics.throughput.recordsPerSecond} records/sec`);
```

```javascript
const processor = new BulkProcessor();
// Process CSV file
const csvResult = await processor.processFile('data.csv');
// Process JSON file
const jsonResult = await processor.processFile('data.json');
// Process Excel file
const excelResult = await processor.processFile('data.xlsx');
```

```javascript
const processor = new BulkProcessor();
processor.on('progress', (data) => {
  console.log(`Progress: ${data.recordsProcessed} records processed`);
});
processor.on('complete', (data) => {
  console.log('Processing completed!');
});
const result = await processor.processFile('large-file.csv');
```

```javascript
const processor = new BulkProcessor({
  validation: {
    enabled: true,
    schema: {
      name: { type: 'string', required: true },
      email: { type: 'email', required: true },
      age: { type: 'number', min: 0, max: 120 }
    }
  }
});
const result = await processor.processFile('data.csv');
// Validation errors will be in result.result.errors
```

```javascript
const processor = new BulkProcessor({
  transformation: {
    enabled: true,
    mapping: {
      'Full Name': 'name',
      'Email Address': 'email',
      'User Age': 'age'
    },
    cleaning: {
      trimStrings: true,
      normalizeDates: true
    }
  }
});
const result = await processor.processFile('data.csv');
```

```javascript
const processor = new BulkProcessor({
  streaming: {
    enabled: true,           // Enable streaming for large files
    chunkSize: 1000,        // Records per chunk
    memoryLimit: '100MB'    // Memory limit
  },
  validation: {
    enabled: true,          // Enable validation
    schema: { /* schema */ },
    format: true,           // Format validation
    business: true          // Business rules
  },
  transformation: {
    enabled: true,          // Enable transformation
    mapping: { /* mapping */ },
    cleaning: true,         // Data cleaning
    conversion: true        // Type conversion
  },
  progress: {
    enabled: true,          // Enable progress tracking
    tracking: true,         // Real-time tracking
    resumable: true         // Resumable processing
  },
  error: {
    enabled: true,          // Enable error handling
    recovery: true,         // Error recovery
    rollback: true          // Rollback on errors
  },
  performance: {
    enabled: true,          // Enable performance monitoring
    monitoring: true,       // Real-time monitoring
    optimization: true      // Performance optimization
  }
});
```

```javascript
const processor = new BulkProcessor({
  streaming: { 
    enabled: true, 
    chunkSize: 5000,
    memoryLimit: '500MB'
  },
  progress: { enabled: true }
});
// Process a 10GB CSV file efficiently
const result = await processor.processFile('huge-dataset.csv');
```

```javascript
const processor = new BulkProcessor({
  transformation: {
    enabled: true,
    mapping: {
      'Customer Name': 'name',
      'Email': 'email',
      'Phone Number': 'phone'
    },
    cleaning: {
      trimStrings: true,
      normalizeDates: true,
      removeEmpty: true
    },
    conversion: {
      'age': 'number',
      'isActive': 'boolean',
      'createdAt': 'date'
    }
  }
});
const result = await processor.processFile('customer-data.csv');
```

```javascript
const processor = new BulkProcessor({
  error: {
    enabled: true,
    recovery: true,
    rollback: true,
    retryAttempts: 3
  }
});
processor.on('error', (error) => {
  console.error('Processing error:', error);
});
processor.on('validation-error', (error) => {
  console.error('Validation error:', error);
});
const result = await processor.processFile('data.csv');
```

```javascript
const processor = new BulkProcessor({
  performance: { enabled: true }
});
const result = await processor.processFile('data.csv');
console.log('Performance Metrics:', result.analytics.metrics);
// {
//   throughput: { recordsPerSecond: 1500 },
//   performance: { averageProcessingTime: 2.5 },
//   memory: { peakUsage: '45MB' },
//   error: { errorRate: 0.02 }
// }
```

| Format | Extension | Features |
|---|---|---|
| CSV | .csv | Headers, custom delimiters, encoding |
| Excel | .xlsx, .xls | Multiple sheets, formulas, formatting |
| JSON | .json | Arrays, objects, streaming support |
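The table notes custom delimiters and encodings for CSV input. A minimal sketch, assuming the optional second argument to `processFile` accepts per-run parsing overrides; the option names below are illustrative assumptions, not confirmed by the package docs:

```javascript
const BulkProcessor = require('@prathammahajan/csv-bulk-processor');

const processor = new BulkProcessor({ streaming: { enabled: true } });

// Hypothetical option names: check the package documentation for the real ones
const result = await processor.processFile('data.csv', {
  delimiter: ';',    // assumed option: parse semicolon-separated values
  encoding: 'latin1' // assumed option: read a non-UTF-8 source file
});
```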
| Method | Description |
|---|---|
| `processFile(filePath, options?)` | Process a file with full pipeline |
| `getSupportedFormats()` | Get list of supported file formats |
| `getMemoryUsage()` | Get current memory usage |
| `getPerformanceMetrics()` | Get performance metrics |
| `updateConfiguration(config)` | Update processor configuration |
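A minimal sketch exercising the utility methods from the table above; the return shapes aren't documented here, so the logged output is illustrative only:

```javascript
const BulkProcessor = require('@prathammahajan/csv-bulk-processor');

const processor = new BulkProcessor({ performance: { enabled: true } });

// List the formats this processor can handle
console.log(processor.getSupportedFormats());

// Swap in new settings without constructing a new processor
processor.updateConfiguration({ streaming: { enabled: true, chunkSize: 2000 } });

const result = await processor.processFile('data.csv');

// Inspect resource usage and timing after the run
console.log('Memory:', processor.getMemoryUsage());
console.log('Metrics:', processor.getPerformanceMetrics());
```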
| Event | Description | Data |
|---|---|---|
| `progress` | Processing progress update | `{ recordsProcessed, batchSize }` |
| `complete` | Processing completed | `{ result, analytics, processingTime }` |
| `error` | Processing error occurred | `{ error, record, timestamp }` |
| `validation-error` | Validation error occurred | `{ error, field, record }` |
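A sketch wiring all four events on one processor, destructuring each payload according to the Data column above:

```javascript
const BulkProcessor = require('@prathammahajan/csv-bulk-processor');

const processor = new BulkProcessor();

processor.on('progress', ({ recordsProcessed, batchSize }) => {
  console.log(`Progress: ${recordsProcessed} records (batch of ${batchSize})`);
});

processor.on('validation-error', ({ error, field, record }) => {
  console.warn(`Validation failed on field "${field}":`, error);
});

processor.on('error', ({ error, record, timestamp }) => {
  console.error(`Error at ${timestamp}:`, error);
});

processor.on('complete', ({ result, analytics, processingTime }) => {
  console.log(`Done: ${result.recordsProcessed} records in ${processingTime}ms`);
});

await processor.processFile('data.csv');
```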
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
 
This project is licensed under the MIT License - see the LICENSE file for details.
Ideal for:
- Data migration projects requiring efficient processing of large datasets
- ETL pipelines with validation, transformation, and error handling
- Data import/export systems for enterprise applications
- Analytics platforms processing large volumes of data
- API development with bulk data processing capabilities
- Microservices architecture with data processing components
- Real-time data processing with streaming and progress tracking
- Startup MVPs needing production-ready data processing solutions
 
- Issues: GitHub Issues
- Documentation: GitHub Wiki
- Discussions: GitHub Discussions

Made with ❤️ by Pratham Mahajan