DevToys.PocoCsv.Core 2.0.5

Install via the .NET CLI:

    dotnet add package DevToys.PocoCsv.Core --version 2.0.5

Install via the NuGet Package Manager Console in Visual Studio:

    NuGet\Install-Package DevToys.PocoCsv.Core -Version 2.0.5

For projects that support PackageReference, copy this XML node into the project file:

    <PackageReference Include="DevToys.PocoCsv.Core" Version="2.0.5" />

Install via Paket:

    paket add DevToys.PocoCsv.Core --version 2.0.5

For F# Interactive and Polyglot Notebooks, reference the package with the #r directive:

    #r "nuget: DevToys.PocoCsv.Core, 2.0.5"

For Cake, reference the package as an addin or tool:

    // Install DevToys.PocoCsv.Core as a Cake Addin
    #addin nuget:?package=DevToys.PocoCsv.Core&version=2.0.5

    // Install DevToys.PocoCsv.Core as a Cake Tool
    #tool nuget:?package=DevToys.PocoCsv.Core&version=2.0.5

DevToys.PocoCsv.Core

One of the fastest CSV reader/deserializers available.

DevToys.PocoCsv.Core is a class library for reading and writing CSV. It contains the CsvStreamReader and CsvStreamWriter classes and the serialization classes CsvReader<T> and CsvWriter<T>.

Read/write and serialize/deserialize data to and from CSV.

  • RFC 4180 compliant.
  • Auto separator detection.
  • Auto line feed/break detection.
  • Sequential read with ReadAsEnumerable().
  • CSV schema retrieval with CsvUtils.GetCsvSchema().
  • DataTable import and export.
  • Deserializer / serializer.
  • Stream reader / writer.
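CsvUtils.GetCsvSchema() can be used to inspect an unknown file before writing a POCO for it. A minimal sketch; the exact signature and return type of GetCsvSchema are assumptions here and may differ per package version:

```csharp
// Sketch only: GetCsvSchema's exact signature and return type may differ
// per version. It is assumed to sample a number of rows and return
// per-column schema information.
var _schema = CsvUtils.GetCsvSchema(@"C:\Temp\data.csv", sampleRows: 10);
foreach (var _column in _schema)
{
    Console.WriteLine(_column); // e.g. column name and inferred type
}
```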

CsvStreamReader

    string file = @"C:\Temp\data.csv";
    using (CsvStreamReader _reader = new CsvStreamReader(file))
    {
        _reader.Separator = ',';
        while (!_reader.EndOfStream)
        {
            string[] _values = _reader.ReadCsvLine(); // fields of the current line
        }
    }

or

    string file = @"C:\Temp\data.csv";
    using (CsvStreamReader _reader = new CsvStreamReader(file))
    {
        _reader.Separator = ',';
        foreach (string[] items in _reader.ReadAsEnumerable())
        {
            // process items here
        }
    }

CsvStreamWriter

    string file = @"D:\Temp\test.csv";
    using (CsvStreamWriter _writer = new CsvStreamWriter(file))
    {
        var _line = new string[] { "Row 1", "Row A,A", "Row 3", "Row B" };
        _writer.WriteCsvLine(_line); // "Row A,A" contains the separator and is quoted automatically
    }

CsvReader<T>

This reader is faster than CsvStreamReader; it is optimized to deserialize rows into objects.

    public class Data
    {
        [Column(Index = 0)]
        public string Column1 { get; set; }

        [Column(Index = 1)]
        public string Column2 { get; set; }

        [Column(Index = 2)]
        public string Column3 { get; set; }

        [Column(Index = 5)]
        public string Column5 { get; set; }
    }
    
    string file = @"D:\Temp\data.csv";

    using (CsvReader<Data> _reader = new(file) { Separator = ',' })
    {        
        _reader.Culture = CultureInfo.GetCultureInfo("en-us");
        _reader.Open();
        _reader.SkipHeader();
        var _data = _reader.ReadAsEnumerable().Where(p => p.Column1.Contains("16"));
        var _materialized = _data.ToList();
    }    
Methods / Properties:

Item Description
BufferSize Stream buffer size. Default: 1024.
Close() Closes the CSV stream reader.
Culture Sets the default culture for decimal / double conversions etc. For more complex conversions use the ICustomCsvParse interface.
CurrentLine Returns the current line number.
DetectEncodingFromByteOrderMarks Indicates whether to look for byte order marks at the beginning of the file.
DetectSeparator() Auto-detects the separator (looks for commonly used separators in the first 10 lines).
Dispose() Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.
EmptyLineBehaviour Behaviour for empty lines: DefaultInstance returns a new instance of T (default); NullValue returns null for the object.
Encoding The character encoding to use.
EndOfStream Returns true when the end of the stream is reached. Use this with Read() / Skip() or a partial ReadAsEnumerable().
Errors Returns a list of errors when HasErrors returns true.
Flush() Flushes all buffers.
HasErrors Indicates there are errors.
Last(int rows) Seeks the CSV document for the last x entries. This is much faster than IEnumerable.Last().
MoveToStart() Moves the reader to the start position. Skip() and Take() alter the position; use MoveToStart() to reset it.
Open() Opens the reader.
Read() Reads the current row into T and advances the reader to the next row.
ReadAsEnumerable() Reads and deserializes one CSV line per iteration, which allows querying very large files. It starts from the current position; if you used Skip(), Read() or SkipHeader(), the current position is determined by those methods.
Separator Sets the separator to use (default ',').
Skip(int rows) Advances the reader to the next row without interpreting it. This is much faster than IEnumerable.Skip().
SkipHeader() Ensures the stream is at the start, then skips the first row.
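Combining the members above, a row-by-row loop with error checking might look like this. A sketch reusing the Data class and file variable from the example above:

```csharp
using (CsvReader<Data> _reader = new(file) { Separator = ',' })
{
    _reader.Open();
    _reader.SkipHeader();           // ensure start, then skip the header row
    while (!_reader.EndOfStream)
    {
        Data _row = _reader.Read(); // reads the current row into T and advances
        // process _row here
    }
    if (_reader.HasErrors)          // report lines that failed to deserialize
    {
        foreach (var _error in _reader.Errors)
        {
            Console.WriteLine(_error);
        }
    }
}
```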

CsvWriter<T>

This writer is faster than CsvStreamWriter; it is optimized to serialize objects into rows.

    public class Data
    {
        [Column(Index = 0)]
        public string Column1 { get; set; }

        [Column(Index = 1)]
        public string Column2 { get; set; }

        [Column(Index = 2)]
        public string Column3 { get; set; }

        [Column(Index = 5)]
        public string Column5 { get; set; }
    }


    private IEnumerable<Data> LargeData()
    {
        for (int ii = 0; ii < 10000000; ii++)
        {
            Data _line = new()
            {
                Column1 = "bij",
                Column2 = "100",
                Column3 = "test",
                Column5 = $"{ii}",
            };
            yield return _line;
        }
    }
    
    
    string file = @"D:\largedata.csv";
    using (CsvWriter<Data> _writer = new(file) { Separator = ',', Append = true })
    {
        _writer.Culture = CultureInfo.GetCultureInfo("en-us");
        _writer.Open();
        _writer.Write(LargeData()); // consumes the enumerable while writing
    }
      

Methods / Properties:

Item Description
Open() Opens the writer.
WriteHeader() Writes a header with the property names of T.
Write(IEnumerable<T> rows) Writes data to CSV while consuming rows.
Flush() Flushes all buffers.
Separator Sets the separator to use (default ',').
CRLFMode Determines which mode to use for new lines: CR + LF (Windows), CR (carriage return; Mac OS before X), or LF (line feed; Unix / Mac OS X).
NullValueBehaviour Determines what to do when writing null objects: Skip (ignore the object) or EmptyLine (write an empty line).
Culture Sets the default culture for decimal / double conversions etc. For more complex conversions use the ICustomCsvParse interface.
Encoding The character encoding to use.

ColumnAttribute

The column attribute defines the properties to be serialized or deserialized.

Item Description
Index Defines the index position within the CSV document. Index numbers can be skipped: the reader ignores the skipped columns, while the writer emits them as empty columns.
Header Defines the header text. This property only applies to CsvWriter; if not specified, the property name is used.
OutputFormat Applies a string format, depending on the property type. This property is for CsvWriter only.
OutputNullValue Defines the value to write as a default for null. This property is for CsvWriter only.
CustomParserType Allows custom parsing of values to a specific type.
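As an illustration, OutputFormat and OutputNullValue might be combined like this. The Order class and the "yyyy-MM-dd" format string are hypothetical examples, not part of the library:

```csharp
public class Order
{
    [Column(Index = 0)]
    public int Id { get; set; }

    // CsvWriter<Order> writes the date using the given .NET format string.
    [Column(Index = 1, OutputFormat = "yyyy-MM-dd")]
    public DateTime OrderDate { get; set; }

    // Null amounts are written as [NULL] instead of an empty field.
    [Column(Index = 2, OutputNullValue = "[NULL]")]
    public decimal? Amount { get; set; }
}
```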

CustomParserType

CustomParserType allows the Reader<T> and Writer<T> to use custom parsing for a specific field.


    public sealed class ParseBoolean : ICustomCsvParse<bool?>
    {
        // for CsvReader
        public bool? Read(StringBuilder value)
        {
            switch (value.ToString().ToLower())
            {
                case "on":
                case "true":
                case "yes":
                case "1":
                    return true;
                case "off":
                case "false":
                case "no":
                case "0":
                    return false;
            }
            return null;
        }

        // for CsvWriter
        public string Write(bool? value)
        {
            if (value.HasValue)
            {
                if (value == true)
                {
                    return "1";
                }
                return "0";
            }
            return string.Empty;
        }
    }

    public class ParsePrice : ICustomCsvParse<Decimal>
    {
        private CultureInfo _culture;

        public ParsePrice()
        {
            _culture = CultureInfo.GetCultureInfo("en-us");
        }

        public Decimal Read(StringBuilder value) => Decimal.Parse(value.ToString(), _culture);

        public string Write(Decimal value) => value.ToString(_culture);
    }


    public sealed class CsvPreParseTestObject
    {
        [Column(Index = 0, CustomParserType = typeof(ParseBoolean) )]
        public Boolean? IsOk { get; set; }

        [Column(Index = 1)]
        public string Name { get; set; }

        [Column(Index = 3, CustomParserType = typeof(ParsePrice))]
        public Decimal Price { get; set; }
    }


    using (var _reader = new CsvReader<CsvPreParseTestObject>(_file))
    {
        _reader.Open();
        _reader.Skip(); // Skip header.
        var _rows = _reader.ReadAsEnumerable().ToArray(); // Materialize.
    }

Custom parsers run as a singleton per specified column within a specific Reader<T>.

CsvAttribute

The CsvAttribute can set defaults for CustomParserType. These CustomParserTypes are applied to all properties of that specific type until they are overruled at property level.


    public class Parsestring : ICustomCsvParse<string>
    {
        public string Read(StringBuilder value)
        {
            return value.ToString();
        }
        public string Write(string value)
        {
            return value;
        }
    }

    [Csv( DefaultCustomParserTypeString = typeof(Parsestring))]
    public class CsvAllTypes
    {
        [Column(Index = 0, OutputFormat = "", OutputNullValue = "")]
        public string _stringValue { get; set; }

        [Column(Index = 35, OutputFormat = "", OutputNullValue = "")]
        public string _stringValue2 { get; set; }

        [Column(Index = 1, CustomParserType = typeof(ParseGuid), OutputFormat = "", OutputNullValue = "")]
        public Guid _GuidValue { get; set; }
   }

Other Examples


    public class Data
    {
        [Column(Index = 0)]
        public string Collumn1 { get; set; }
        
        [Column(Index = 1)]
        public string Collumn2 { get; set; }
        
        [Column(Index = 2, Header = "Test" )]
        public byte[] Collumn3 { get; set; }

        [Column(Index = 3)]
        public DateTime TestDateTime { get; set; }
        
        [Column(Index = 4)]
        public DateTime? TestDateTimeNull { get; set; }

        [Column(Index = 5)]
        public Int32 TestInt { get; set; }
        
        [Column(Index = 6, OutputNullValue = "[NULL]")]
        public Int32? TestIntNull { get; set; }
    }

 
    private IEnumerable<Data> GetTestData()
    {
        yield return new Data
        {
            Collumn1 = "01", 
            Collumn2 = "AA",
            Collumn3 = new byte[3] { 2, 4, 6 },
            TestDateTime = DateTime.Now,
            TestDateTimeNull = DateTime.Now,
            TestInt = 100,
            TestIntNull = 200
        };
        yield return new Data
        {
            Collumn1 = "01",
            Collumn2 = "AA",
            Collumn3 = new byte[3] { 2, 4, 6 },
            TestDateTime = DateTime.Now,
            TestDateTimeNull = DateTime.Now,
            TestInt = 100,
            TestIntNull = 200
        };
        yield return new Data
        {
            Collumn1 = "04",
            Collumn2 = "BB",
            Collumn3 = new byte[3] { 8, 9, 10 },
            TestDateTime = DateTime.Now,
            TestDateTimeNull = null,
            TestInt = 300,
            TestIntNull = null
        };
    }

    public static string StreamToString(Stream stream)
    {
        using (StreamReader reader = new StreamReader(stream, Encoding.UTF8))
        {
            stream.Position = 0;
            return reader.ReadToEnd();
        }
    }


    List<Data> _result = new List<Data>();

    using (MemoryStream _stream = new MemoryStream())
    {
        using (CsvWriter<Data> _csvWriter = new CsvWriter<Data>(_stream))
        using (CsvReader<Data> _csvReader = new CsvReader<Data>(_stream))
        {
            _csvWriter.Separator = ';';
            _csvWriter.Open();
            _csvWriter.WriteHeader();
            _csvWriter.Write(GetTestData());

            _csvReader.Open();
            _csvReader.DetectSeparator(); // Auto detect separator.
            _csvReader.Skip(); // Skip header. 
            _result = _csvReader.ReadAsEnumerable().Where(p => p.Collumn2 == "AA").ToList();
        }
    }


    string _result;

    using (MemoryStream _stream = new MemoryStream())
    {
        using (CsvWriter<Data> _csvWriter = new CsvWriter<Data>(_stream))
        {
            _csvWriter.Separator = ',';
            _csvWriter.Open();
            _csvWriter.WriteHeader();
            _csvWriter.Write(GetTestData());

            _result = StreamToString(_stream);
        }
    }    

Sampling only a few rows without reading the entire CSV.


    List<CsvSimple> _result1;
    List<CsvSimple> _result2;

    string file = @"D:\largedata.csv";

    using (CsvReader<CsvSimple> _reader = new CsvReader<CsvSimple>(file))
    {
        _reader.Open();

        _reader.Skip(); // skip the Header row.

        // Materializes 20 records but returns 10.
        _result1 = _reader.ReadAsEnumerable().Skip(10).Take(10).ToList(); 
        
        // Materialize only 10 records.
        _reader.Skip(10);
        _result2 = _reader.ReadAsEnumerable().Take(10).ToList();

        // Take last 10 records. Without serializing everything before it.
        _result1 = _reader.Last(10).ToList();
    }

Note that Skip and Take advance the reader to the next position: after the calls above, executing another _reader.ReadAsEnumerable().Where(p => p...).ToList() will query from position 21.

Use MoveToStart() to move the reader back to the starting position.

_reader.Skip() is different from _reader.ReadAsEnumerable().Skip(): the first does not materialize to T and is faster.
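The position behaviour can be sketched as follows, assuming the CsvSimple type and file variable from the example above:

```csharp
using (CsvReader<CsvSimple> _reader = new CsvReader<CsvSimple>(file))
{
    _reader.Open();
    _reader.Skip();                                   // skip the header row
    var _first10 = _reader.ReadAsEnumerable().Take(10).ToList();

    _reader.MoveToStart();                            // reset to position 0
    _reader.Skip();                                   // skip the header row again
    var _all = _reader.ReadAsEnumerable().ToList();   // starts from row 1, not row 11
}
```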

DataTable Import / Export


    // Import
    var _file = @"C:\data.csv";
    var _table = new DataTable();
    _table.ImportCsv(_file, ',', true);

    // Export
    _file = @"C:\data2.csv";
    _table.ExportCsv(_file, ',');

Compatible and computed target frameworks:

  • .NET: net5.0, net6.0 and net7.0 are compatible; net8.0 and the platform-specific TFMs (android, ios, maccatalyst, macos, tvos, windows, browser) were computed.
  • .NET Core: netcoreapp3.0 and netcoreapp3.1 are compatible.

Included target frameworks (in package):
  • .NETCoreApp 3.0

    • No dependencies.
  • .NETCoreApp 3.1

    • No dependencies.
  • net5.0

    • No dependencies.
  • net6.0

    • No dependencies.
  • net7.0

    • No dependencies.


Release Notes

2.0.5
- Small improvements.

2.0.4
- Critical Bugfix: Escaped separator not correctly handled.
- Small performance improvements.

2.0.3
- Bugfix: deserializing with a lower column count than in the CSV.

2.0.2
- Bugfix: not properly reading escaped double quotes.
- Minor improvements

2.0
- Improved CsvWriter<T> speed.
- Extended ICustomCsvParser<T> to be supported by the CsvWriter<T> as well.
- ICustomCsvParser<T>.Parse() has been removed.
- Added Read() and Write() to ICustomCsvParser<T>
- Refactored CsvReader<T> and CsvWriter<T>
- Introduced CsvAttribute; with this attribute, defaults for ICustomCsvParser can be set at class level.

1.7.53
- Improved CsvStreamReader speed.
- Added ReadAsEnumerable() to CsvStreamReader.

1.7.51
- Added DataTable extensions ImportCsv / ExportCsv

1.7.1
- Changed ICustomCsvParse to the generic ICustomCsvParse<T>

1.7
- Added CustomParserType to ColumnAttribute

1.6.3
- Added NullValueBehaviour to CsvWriter<T>
- Added CurrentLine to Reader
- Added LineNumber to Error log
- Added Flush() to Reader<T> and Writer<T>
- Refactored UnitTests in GitHub code Demo Tests and Validate Tests.

1.6.2
- Minor bugfix with CR only ending.

1.6.1
- Fixed bug with AutoDetectSeparator.
- Added EmptyLineBehaviour to CsvReader<T>
- Refactoring

1.6.0
- Added Last(int rows) function to Reader<T>.
- Added IEnumerable<CsvReadError> Errors to CsvReader<T>.
- Fixed Skip() counter.
- Correct handling of CR, LF and CRLF in CsvStreamReader and CsvReader<T>:
   -  \r = CR (Carriage Return) → used as a new line character in Mac OS before X
   -  \n = LF (Line Feed) → used as a new line character in Unix / Mac OS X
   -  \r\n = CR + LF → used as a new line character in Windows
- Added CRLFMode to CsvStreamWriter and CsvWriter<T>

1.5.8
- Minor Improvements
- Added Skip() to CsvStreamReader
- Changed EndOfStream behaviour

1.5.7
- Small improvements

1.5.1
- Updated Readme
- Fixed bug with Skip(rows)
- Fixed small bug with ReadAsEnumerable() always started at position 0.

1.5
- Correct handling Null Types for Reader

1.4.5
- Refactoring
- Removed DynamicReader and DynamicWriter

1.4.2
- Another performance improvement for Reader

1.4
- Performance improvements for Writer.
- Added OutputFormat to ColumnAttribute

1.3.8
- Performance improvement for Reader

1.3.2
- Bug fixes

1.3
- Improved constructors to support all parameters for underlying StreamReader and StreamWriters.
- Added Skip() to CsvReader (to be used in combination Read())
- Added WriteHeader() to CsvWriter()
- Added Header to Column attribute to be used by the CsvWriter
- GetCsvSeparator() / DetectSeparator() detect more exotic separators.
- Added byte[] to base64 serialization to CsvReader and CsvWriter

1.2
- Added single Read() function.
- Rows() now marked as obsolete.
- Added ReadAsEnumerable() as replacement for Rows()
- Added GetCsvSeparator(int sampleRows) to CsvStreamReader()
- Added DetectSeparator() to CsvReader()

1.1.5
- Bug Fixes

1.1.4
- Added CsvUtils static class including some special Csv functions to use.

1.1.3
- Added CsvWriterDynamic

1.1.1
- Added CsvReaderDynamic

1.1.0
- Speed optimizations (using delegates instead of reflection)

1.0.5
- Read/Write Stream csv lines into a poco object.
- Query / Read / Write large csv files.