πŸš€ CSV Bulk Processor - Data Import/Export Suite

Powerful, memory-efficient bulk data processing for CSV, Excel, and JSON files with streaming, validation, transformation, and performance monitoring.

SEO Keywords: CSV bulk processor, data import export, file streaming, data validation, data transformation, bulk data processing, Node.js data processing, enterprise data pipeline, scalable data processing, production-ready data processor

✨ Features

  • πŸ”„ Memory-Efficient Streaming - Process large files without memory issues
  • πŸ“Š Multi-Format Support - CSV, Excel, and JSON file processing
  • βœ… Data Validation - Schema, format, and custom validation rules
  • πŸ”§ Data Transformation - Cleaning, mapping, and conversion
  • πŸ“ˆ Progress Tracking - Real-time progress monitoring and resumable processing
  • πŸ›‘οΈ Error Handling - Comprehensive error detection and recovery
  • ⚑ Performance Monitoring - Built-in performance analytics and optimization
  • 🎯 Batch Processing - Optimized batch operations for large datasets

🎯 Why Choose This Bulk Processor?

Perfect for developers who need:

  • Enterprise-grade data processing with memory-efficient streaming for large files
  • Production-ready data pipelines with comprehensive validation and transformation
  • Scalable data processing for handling millions of records efficiently
  • Real-time progress monitoring with resumable processing capabilities
  • Multi-format data support for CSV, Excel, and JSON files
  • Advanced error handling with rollback and recovery mechanisms
  • Performance optimization with built-in monitoring and analytics
  • Developer-friendly APIs with intuitive configuration and event handling

πŸ“¦ Installation

npm install @prathammahajan/csv-bulk-processor

πŸš€ Quick Start

const BulkProcessor = require('@prathammahajan/csv-bulk-processor');

// Create processor with configuration
const processor = new BulkProcessor({
  streaming: { enabled: true, chunkSize: 1000 },
  validation: { enabled: true },
  transformation: { enabled: true },
  progress: { enabled: true }
});

// Process a file with progress tracking
processor.on('progress', (data) => {
  console.log(`Processed ${data.recordsProcessed} records`);
});

// Run inside an async function (CommonJS does not support top-level await)
const result = await processor.processFile('data.csv');

console.log(`βœ… Processed ${result.result.recordsProcessed} records`);
console.log(`⏱️ Processing time: ${result.processingTime}ms`);
console.log(`πŸ“Š Performance: ${result.analytics.metrics.throughput.recordsPerSecond} records/sec`);

πŸ“‹ Basic Usage

Simple File Processing

const processor = new BulkProcessor();

// Process CSV file
const csvResult = await processor.processFile('data.csv');

// Process JSON file
const jsonResult = await processor.processFile('data.json');

// Process Excel file
const excelResult = await processor.processFile('data.xlsx');

Progress Tracking

const processor = new BulkProcessor();

processor.on('progress', (data) => {
  console.log(`Progress: ${data.recordsProcessed} records processed`);
});

processor.on('complete', (data) => {
  console.log('Processing completed!');
});

const result = await processor.processFile('large-file.csv');

Data Validation

const processor = new BulkProcessor({
  validation: {
    enabled: true,
    schema: {
      name: { type: 'string', required: true },
      email: { type: 'email', required: true },
      age: { type: 'number', min: 0, max: 120 }
    }
  }
});

const result = await processor.processFile('data.csv');
// Validation errors will be in result.result.errors
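
Once processing finishes, you can inspect the collected validation failures. A minimal sketch, assuming each entry in result.result.errors mirrors the { error, field, record } payload of the validation-error event documented below:

// Hedged sketch: assumes result.result.errors is an array of
// { error, field, record } objects (shape mirrors the validation-error event).
const { errors } = result.result;
if (errors && errors.length > 0) {
  console.warn(`${errors.length} records failed validation`);
  for (const e of errors) {
    console.warn(`Field "${e.field}": ${e.error}`);
  }
}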

Data Transformation

const processor = new BulkProcessor({
  transformation: {
    enabled: true,
    mapping: {
      'Full Name': 'name',
      'Email Address': 'email',
      'User Age': 'age'
    },
    cleaning: {
      trimStrings: true,
      normalizeDates: true
    }
  }
});

const result = await processor.processFile('data.csv');

πŸ”§ Configuration

const processor = new BulkProcessor({
  streaming: {
    enabled: true,           // Enable streaming for large files
    chunkSize: 1000,        // Records per chunk
    memoryLimit: '100MB'    // Memory limit
  },
  validation: {
    enabled: true,          // Enable validation
    schema: { /* schema */ },
    format: true,           // Format validation
    business: true          // Business rules
  },
  transformation: {
    enabled: true,          // Enable transformation
    mapping: { /* mapping */ },
    cleaning: true,         // Data cleaning
    conversion: true        // Type conversion
  },
  progress: {
    enabled: true,          // Enable progress tracking
    tracking: true,         // Real-time tracking
    resumable: true         // Resumable processing
  },
  error: {
    enabled: true,          // Enable error handling
    recovery: true,         // Error recovery
    rollback: true          // Rollback on errors
  },
  performance: {
    enabled: true,          // Enable performance monitoring
    monitoring: true,       // Real-time monitoring
    optimization: true      // Performance optimization
  }
});
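
Configuration can also be adjusted after construction with updateConfiguration(config) (see the API reference below). A minimal sketch, assuming the method merges the supplied partial options into the existing configuration:

// Hedged sketch: merge semantics are not documented, so this assumes
// updateConfiguration() merges partial options into the current config.
processor.updateConfiguration({
  streaming: { chunkSize: 2000 },   // raise the chunk size at runtime
  performance: { enabled: false }   // turn off performance monitoring
});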

🎯 Advanced Examples

Large File Processing with Streaming

const processor = new BulkProcessor({
  streaming: { 
    enabled: true, 
    chunkSize: 5000,
    memoryLimit: '500MB'
  },
  progress: { enabled: true }
});

// Process a 10GB CSV file efficiently
const result = await processor.processFile('huge-dataset.csv');
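
For very large files it can be useful to watch memory alongside progress. A hedged sketch combining the progress event with getMemoryUsage() from the API reference; register the listener before calling processFile. The return shape of getMemoryUsage() is not documented, so it is logged verbatim:

// Hedged sketch: getMemoryUsage() is listed in the API reference,
// but its return shape is undocumented, so log whatever it returns.
processor.on('progress', (data) => {
  console.log(`Processed ${data.recordsProcessed} records`);
  console.log('Memory:', processor.getMemoryUsage());
});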

Complex Data Transformation

const processor = new BulkProcessor({
  transformation: {
    enabled: true,
    mapping: {
      'Customer Name': 'name',
      'Email': 'email',
      'Phone Number': 'phone'
    },
    cleaning: {
      trimStrings: true,
      normalizeDates: true,
      removeEmpty: true
    },
    conversion: {
      'age': 'number',
      'isActive': 'boolean',
      'createdAt': 'date'
    }
  }
});

const result = await processor.processFile('customer-data.csv');
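
To make the mapping, cleaning, and conversion concrete, here is an illustrative before/after for a single record. The data is hypothetical, and the output assumes mapping renames source headers to the given keys, trimStrings strips whitespace, and conversion casts string values to the listed types:

// Input CSV row (hypothetical data):
//   Customer Name,Email,Phone Number,age,isActive,createdAt
//   "  Jane Doe  ",jane@example.com,555-0100,42,true,2024-01-15
//
// Assumed transformed record, per the config above:
//   { name: 'Jane Doe', email: 'jane@example.com', phone: '555-0100',
//     age: 42, isActive: true, createdAt: new Date('2024-01-15') }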

Error Handling and Recovery

const processor = new BulkProcessor({
  error: {
    enabled: true,
    recovery: true,
    rollback: true,
    retryAttempts: 3
  }
});

processor.on('error', (error) => {
  console.error('Processing error:', error);
});

processor.on('validation-error', (error) => {
  console.error('Validation error:', error);
});

const result = await processor.processFile('data.csv');

πŸ“Š Performance Monitoring

const processor = new BulkProcessor({
  performance: { enabled: true }
});

const result = await processor.processFile('data.csv');

console.log('Performance Metrics:', result.analytics.metrics);
// {
//   throughput: { recordsPerSecond: 1500 },
//   performance: { averageProcessingTime: 2.5 },
//   memory: { peakUsage: '45MB' },
//   error: { errorRate: 0.02 }
// }
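
Metrics can also be pulled on demand with getPerformanceMetrics() and getMemoryUsage() from the API reference below. A minimal sketch, assuming getPerformanceMetrics() returns the same shape as result.analytics.metrics above:

// Hedged sketch: assumes getPerformanceMetrics() matches the
// analytics.metrics shape shown in the comment above.
const metrics = processor.getPerformanceMetrics();
console.log('Throughput:', metrics.throughput.recordsPerSecond, 'records/sec');
console.log('Memory:', processor.getMemoryUsage());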

πŸ› οΈ Supported File Formats

| Format | Extension | Features |
| ------ | --------- | -------- |
| CSV | `.csv` | Headers, custom delimiters, encoding |
| Excel | `.xlsx`, `.xls` | Multiple sheets, formulas, formatting |
| JSON | `.json` | Arrays, objects, streaming support |
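
Format support can also be checked at runtime with getSupportedFormats() (see the API reference below). A minimal sketch, assuming it returns an array of extension strings:

// Hedged sketch: assumes getSupportedFormats() returns something like
// ['.csv', '.xlsx', '.xls', '.json'].
const formats = processor.getSupportedFormats();
console.log('Supported formats:', formats);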

πŸ“ API Reference

BulkProcessor Methods

| Method | Description |
| ------ | ----------- |
| `processFile(filePath, options?)` | Process a file with the full pipeline |
| `getSupportedFormats()` | Get the list of supported file formats |
| `getMemoryUsage()` | Get current memory usage |
| `getPerformanceMetrics()` | Get performance metrics |
| `updateConfiguration(config)` | Update the processor configuration |

Events

| Event | Description | Data |
| ----- | ----------- | ---- |
| `progress` | Processing progress update | `{ recordsProcessed, batchSize }` |
| `complete` | Processing completed | `{ result, analytics, processingTime }` |
| `error` | Processing error occurred | `{ error, record, timestamp }` |
| `validation-error` | Validation error occurred | `{ error, field, record }` |
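
Putting the four events together, a typical listener setup looks like this (the payload keys follow the Data column above):

const processor = new BulkProcessor();

processor.on('progress', ({ recordsProcessed, batchSize }) =>
  console.log(`Progress: ${recordsProcessed} records (batch of ${batchSize})`));
processor.on('complete', ({ processingTime }) =>
  console.log(`Done in ${processingTime}ms`));
processor.on('error', ({ error, record }) =>
  console.error('Processing error:', error, record));
processor.on('validation-error', ({ error, field }) =>
  console.error(`Validation failed on "${field}":`, error));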

🀝 Contributing

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

πŸ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸš€ Use Cases & Applications

Ideal for:

  • Data migration projects requiring efficient processing of large datasets
  • ETL pipelines with validation, transformation, and error handling (see the sketch after this list)
  • Data import/export systems for enterprise applications
  • Analytics platforms processing large volumes of data
  • API development with bulk data processing capabilities
  • Microservices architecture with data processing components
  • Real-time data processing with streaming and progress tracking
  • Startup MVPs needing production-ready data processing solutions
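
As referenced in the ETL bullet above, a minimal end-to-end sketch combining the documented validation and transformation options into one pipeline run. File names and the schema are illustrative:

const BulkProcessor = require('@prathammahajan/csv-bulk-processor');

async function runEtl() {
  const processor = new BulkProcessor({
    streaming: { enabled: true, chunkSize: 1000 },
    validation: { enabled: true, schema: { email: { type: 'email', required: true } } },
    transformation: { enabled: true, cleaning: { trimStrings: true } }
  });

  const result = await processor.processFile('input.csv'); // illustrative path
  console.log(`Loaded ${result.result.recordsProcessed} records`);
  return result;
}

runEtl().catch(console.error);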

πŸ” SEO & Discoverability

Search Terms: CSV bulk processor, data import export, file streaming, data validation, data transformation, bulk data processing, Node.js data processing, enterprise data pipeline, scalable data processing, production-ready data processor, CSV parser, Excel processor, JSON processor, data pipeline, ETL processing, data migration, file processing, streaming data, memory efficient processing, data validation engine, data transformation engine, progress tracking, error handling, performance monitoring, batch processing

πŸ™ Support


Made with ❀️ by Pratham Mahajan
