From wpf-dev-pack
Optimizes .NET standard I/O and file operations using buffered streams, large buffers, async access, and SequentialScan for high-throughput data processing and competitive programming.
Install: `npx claudepluginhub christian289/dotnet-with-claudecode --plugin wpf-dev-pack`
A guide to .NET APIs for optimizing large-scale data input/output.
Quick Reference: See QUICKREF.md for essential patterns at a glance.
| API | Purpose |
|---|---|
| `Console.OpenStandardInput()` | Buffered stream input |
| `Console.OpenStandardOutput()` | Buffered stream output |
| `BufferedStream` | Stream buffering |
| `FileOptions.Asynchronous` | Async file I/O |
```csharp
// Use buffered streams directly for large I/O
using var inputStream = Console.OpenStandardInput();
using var outputStream = Console.OpenStandardOutput();
using var reader = new StreamReader(inputStream, bufferSize: 65536);
using var writer = new StreamWriter(outputStream, bufferSize: 65536);

// Disable automatic flushing for performance
writer.AutoFlush = false;

string? line;
while ((line = reader.ReadLine()) is not null)
{
    writer.WriteLine(ProcessLine(line));
}

// Flush manually at the end
writer.Flush();
```
```csharp
using System.Text;

// High-speed input
using var reader = new StreamReader(
    Console.OpenStandardInput(),
    Encoding.ASCII,
    bufferSize: 65536);

// High-speed output
using var writer = new StreamWriter(
    Console.OpenStandardOutput(),
    Encoding.ASCII,
    bufferSize: 65536);

// Collect large output in a StringBuilder and write it once
var sb = new StringBuilder();
for (int i = 0; i < 100000; i++)
{
    sb.AppendLine(i.ToString());
}
writer.Write(sb);
writer.Flush();
```
```csharp
// Use a larger buffer than the default (4KB)
const int bufferSize = 64 * 1024; // 64KB
using var fileStream = new FileStream(
    path,
    FileMode.Open,
    FileAccess.Read,
    FileShare.Read,
    bufferSize: bufferSize);
```
```csharp
// Open the file with the async option
using var fileStream = new FileStream(
    path,
    FileMode.Open,
    FileAccess.Read,
    FileShare.Read,
    bufferSize: 4096,
    options: FileOptions.Asynchronous);

var buffer = new byte[4096];
int bytesRead = await fileStream.ReadAsync(buffer);
```
```csharp
// Hint to the OS that the file will be read sequentially
using var fileStream = new FileStream(
    path,
    FileMode.Open,
    FileAccess.Read,
    FileShare.Read,
    bufferSize: 64 * 1024,
    options: FileOptions.SequentialScan);
```
```csharp
// Direct offset access without file position management (.NET 6+)
using var handle = File.OpenHandle(path, FileMode.Open, FileAccess.Read);
var buffer = new byte[4096];
long offset = 1000;
int bytesRead = RandomAccess.Read(handle, buffer, offset);

// Async version
bytesRead = await RandomAccess.ReadAsync(handle, buffer, offset);
```
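Because `RandomAccess` takes an explicit offset instead of mutating a shared stream position, a single handle can serve concurrent reads without locking. A minimal sketch of that pattern, assuming .NET 6+ (the temp-file setup, chunk size, and class name are illustrative):

```csharp
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

class ParallelChunkRead
{
    static void Main()
    {
        string path = Path.GetTempFileName();            // throwaway demo file
        File.WriteAllBytes(path, new byte[1024 * 1024]); // 1 MB of zeros

        const int chunkSize = 64 * 1024;
        using var handle = File.OpenHandle(path, FileMode.Open, FileAccess.Read);
        long length = RandomAccess.GetLength(handle);
        long totalRead = 0;

        // Each iteration reads at its own offset; no shared position to synchronize.
        int chunkCount = (int)((length + chunkSize - 1) / chunkSize);
        Parallel.For(0, chunkCount, i =>
        {
            var buffer = new byte[chunkSize];
            long chunkStart = (long)i * chunkSize;
            int want = (int)Math.Min(chunkSize, length - chunkStart);
            int got = 0;
            // A single call may return fewer bytes than requested, so loop.
            while (got < want)
            {
                int read = RandomAccess.Read(
                    handle, buffer.AsSpan(got, want - got), chunkStart + got);
                if (read == 0) break; // unexpected EOF
                got += read;
            }
            Interlocked.Add(ref totalRead, got);
        });

        Console.WriteLine(totalRead); // every byte accounted for: 1048576
        File.Delete(path);
    }
}
```

The same shape works with `RandomAccess.ReadAsync` and `Task.WhenAll` when the workload is I/O-bound rather than CPU-bound.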
```csharp
using System.Collections.Generic;
using System.IO;
using System.Runtime.CompilerServices;
using System.Threading;

public async IAsyncEnumerable<byte[]> ReadChunksAsync(
    string path,
    int chunkSize = 64 * 1024,
    [EnumeratorCancellation] CancellationToken ct = default)
{
    await using var stream = new FileStream(
        path,
        FileMode.Open,
        FileAccess.Read,
        FileShare.Read,
        bufferSize: chunkSize,
        options: FileOptions.Asynchronous | FileOptions.SequentialScan);

    var buffer = new byte[chunkSize];
    int bytesRead;
    while ((bytesRead = await stream.ReadAsync(buffer, ct)) > 0)
    {
        if (bytesRead == chunkSize)
        {
            // Hand the full buffer to the caller and allocate a fresh one
            yield return buffer;
            buffer = new byte[chunkSize];
        }
        else
        {
            // Partial chunk: copy only the bytes actually read
            yield return buffer[..bytesRead];
        }
    }
}
```
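A minimal consumer of the chunked reader above, shown as a self-contained program (the temp-file setup and byte counting are illustrative; the reader itself is a compact copy of the method shown above):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.CompilerServices;
using System.Threading;
using System.Threading.Tasks;

class ChunkDemo
{
    static async IAsyncEnumerable<byte[]> ReadChunksAsync(
        string path, int chunkSize = 64 * 1024,
        [EnumeratorCancellation] CancellationToken ct = default)
    {
        await using var stream = new FileStream(path, FileMode.Open, FileAccess.Read,
            FileShare.Read, chunkSize, FileOptions.Asynchronous | FileOptions.SequentialScan);
        var buffer = new byte[chunkSize];
        int bytesRead;
        while ((bytesRead = await stream.ReadAsync(buffer, ct)) > 0)
        {
            if (bytesRead == chunkSize)
            {
                yield return buffer;
                buffer = new byte[chunkSize]; // fresh buffer for the next chunk
            }
            else
            {
                yield return buffer[..bytesRead]; // final partial chunk
            }
        }
    }

    static async Task Main()
    {
        string path = Path.GetTempFileName();
        File.WriteAllBytes(path, new byte[200_000]); // throwaway input

        long total = 0;
        await foreach (var chunk in ReadChunksAsync(path))
            total += chunk.Length;

        Console.WriteLine(total); // 200000
        File.Delete(path);
    }
}
```

`await foreach` keeps only one chunk in flight at a time, so memory use stays bounded by `chunkSize` regardless of file size.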
```csharp
using System.IO.MemoryMappedFiles;

// Map a large file into memory
using var mmf = MemoryMappedFile.CreateFromFile(path, FileMode.Open);
using var accessor = mmf.CreateViewAccessor();

// Direct memory access
byte value = accessor.ReadByte(position);
accessor.Write(position, newValue);
```
| Method | Relative Performance | Use Case |
|---|---|---|
| `Console.ReadLine()` | 1x (baseline) | General |
| `StreamReader` (default buffer) | 2x | Large data |
| `StreamReader` (64KB buffer) | 3-5x | Large data |
| `MemoryMappedFile` | 5-10x | Very large data |
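The multipliers above vary with hardware, runtime version, and data shape, so they are best treated as rough guidance. A simple way to measure locally, as a sketch (file size, buffer choices, and the warm-up pass are illustrative, and `Stopwatch` timings of this kind are noisy compared to a proper harness such as BenchmarkDotNet):

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Text;

class BufferBenchmark
{
    static void Main()
    {
        // Throwaway input: 500k numbered lines (~3 MB)
        string path = Path.GetTempFileName();
        File.WriteAllLines(path, Enumerable.Range(0, 500_000).Select(i => i.ToString()));

        // Warm the OS file cache once so both runs compare buffer size, not disk speed.
        TimeRead(path, 1024);

        Console.WriteLine($"1KB buffer:  {TimeRead(path, 1024)} ms");
        Console.WriteLine($"64KB buffer: {TimeRead(path, 64 * 1024)} ms");
        File.Delete(path);
    }

    static long TimeRead(string path, int bufferSize)
    {
        var sw = Stopwatch.StartNew();
        using var reader = new StreamReader(path, Encoding.UTF8,
            detectEncodingFromByteOrderMarks: true, bufferSize);
        while (reader.ReadLine() is not null) { }
        return sw.ElapsedMilliseconds;
    }
}
```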
```csharp
// Read UTF-8 without a BOM
using var reader = new StreamReader(
    stream,
    new UTF8Encoding(encoderShouldEmitUTF8Identifier: false));
```
```csharp
// Improve performance with AutoFlush = false
writer.AutoFlush = false;

// Flush manually after important data
writer.Flush();
```