Stowage 2.0.2-pre.4
See the version list below for details.
dotnet add package Stowage --version 2.0.2-pre.4
NuGet\Install-Package Stowage -Version 2.0.2-pre.4
<PackageReference Include="Stowage" Version="2.0.2-pre.4" />
paket add Stowage --version 2.0.2-pre.4
#r "nuget: Stowage, 2.0.2-pre.4"
// Install Stowage as a Cake Addin
#addin nuget:?package=Stowage&version=2.0.2-pre.4&prerelease

// Install Stowage as a Cake Tool
#tool nuget:?package=Stowage&version=2.0.2-pre.4&prerelease
Stowage
This documentation is for Stowage v2 which is a major redesign. Version 1 documentation can be found here.
Stowage is a bloat-free .NET cloud storage kit that supports, at minimum, the major cloud providers.
- Independent. Provides an independent implementation of the cloud storage APIs, because official corporate SDKs shouldn't be the single source of truth.
- Readable. Official SDKs like the ones for AWS, Google, or Azure are overengineered and unreadable. Some are autogenerated, look just bad and foreign to the .NET ecosystem, and some won't even compile without custom rituals.
- Beautiful. Designed to fit into the .NET ecosystem, not the other way around.
- Rich. Provides maximum functionality, plus a humanly possible way to easily extend it with new functionality without waiting for new SDK releases.
- Embeddable. Has zero external dependencies and relies only on built-in .NET APIs. Official SDKs often have a very deep dependency tree, causing large binary sizes and endless runtime conflicts. This one is a single .NET .dll with no dependencies whatsoever.
- Cross Cloud. Same API. Any cloud. Best decisions made for you. It's like iPhone vs Windows Phone.
- Cross Tested. It's not just cross cloud but also cross tested (I don't know what else to call it). The tests verify that all cloud providers behave exactly the same across method calls: they validate arguments the same way, throw the same exceptions in the same situations, and support the same set of functionality. Sounds simple, but it's rare to find in a library. And it matters - otherwise, what's the point of a generic API if you still need to write a lot of `if()`s (or pattern matching)?
This library originally came out of frustration while working on my other library, Storage.Net. While it's OK, most of the time I had to deal with SDK incompatibilities, breaking changes, oddities, and slowness, whereas most of the time users need something simple that just works.
Getting Started
Right, time to gear up. We'll do it step by step. First, you need to install the package.
Simplest case: using the local disk and writing the text "I'm a page!!!" to a file called "pagefile.sys" at the root of drive C:

```csharp
using Stowage;

using(IFileStorage fs = Files.Of.LocalDisk("c:\\")) {
   await fs.WriteText("pagefile.sys", "I'm a page!!!");
}
```
This is local disk, yeah? But what about cloud storage, like Azure Blob Storage? Piece of cake:
```csharp
using Stowage;

using(IFileStorage fs = Files.Of.AzureBlobStorage("accountName", "accountKey", "containerName")) {
   var entries = await fs.Ls();
}
```
Streaming
Streaming is a first-class feature. The streaming is real, with no workarounds or in-memory buffers, so you can upload and download files of virtually unlimited size. Most official SDKs do not support streaming at all - surprisingly, even the cloud leader's .NET SDK doesn't. Each requires some crippled-down version of a stream: either knowing the length beforehand, or buffering it all in memory. I don't. I stream like a stream.
Proper streaming support also means that you can transform streams as you write to them or read from them - something that is not available in the native SDKs. For instance gzipping, encryption, or anything else.
Streaming is also fully compatible with both the synchronous and asynchronous APIs.
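To make the transformation point concrete, here is a minimal sketch that gzips data on the way up by wrapping the storage stream (it uses `OpenWrite`, described in the Writing section below; the path and payload are made up):

```csharp
using System.IO.Compression;
using System.Text;
using Stowage;

IFileStorage fs = Files.Of.LocalDisk("c:\\tmp");

// bytes are compressed as they pass through GZipStream and streamed
// straight to storage - nothing is buffered in memory
using(Stream raw = await fs.OpenWrite("/data.json.gz"))
using(var gz = new GZipStream(raw, CompressionMode.Compress)) {
   await gz.WriteAsync(Encoding.UTF8.GetBytes("{\"hello\":\"world\"}"));
}
```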
Details/Documentation
Whenever a method appears here, assume it belongs to the `IFileStorage` interface unless specified otherwise.
Listing/Browsing
Use `.Ls()` (short for "list") - very easy to remember! Everyone knows what `ls` does, right? Optionally, it can list entries recursively.
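A quick sketch (the `recurse` parameter name and the `IOEntry` result type are assumptions taken from IntelliSense, not guaranteed here):

```csharp
IFileStorage fs = ...;

// list entries at the root
IReadOnlyCollection<IOEntry> roots = await fs.Ls();

// list everything recursively (parameter name is an assumption)
IReadOnlyCollection<IOEntry> everything = await fs.Ls("/", recurse: true);
```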
Reading
The core method for reading is `Stream OpenRead(IOPath path)`, which returns a stream for the given file path. `Stream` is the lowest-level data structure. Other helper methods, such as `ReadText`, rely on this method by default. Just have a quick look:
```csharp
IFileStorage fs = ...;
Stream target = ...;

// copy to another stream
using Stream s = await fs.OpenRead("/myfile.txt");

// synchronous copy:
s.CopyTo(target);

// or alternatively, asynchronous copy (preferred):
await s.CopyToAsync(target);

// if you just need text:
string content = await fs.ReadText("/myfile.txt");
```
Of course there are more overloaded methods you can take advantage of.
Writing
The main method, `Stream OpenWrite(IOPath path, ...)`, opens (or creates) a file for writing. It returns a real writeable stream that you can write to and close afterwards. It behaves like a stream because it is a stream.
There are other overloads that support writing text etc.
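A minimal sketch of both flavours (paths and payload are made up):

```csharp
IFileStorage fs = ...;

// stream raw bytes; the stream flushes on dispose
using(Stream s = await fs.OpenWrite("/myfile.bin")) {
   await s.WriteAsync(new byte[] { 1, 2, 3 });
}

// or use a text helper for the simple case
await fs.WriteText("/myfile.txt", "hello");
```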
Destroying
`Rm(IOPath path)` trashes files or folders (or both), with an option to do it recursively!
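For instance (the `recurse` parameter name is an assumption; check IntelliSense):

```csharp
IFileStorage fs = ...;

// delete a single file
await fs.Rm("/myfile.txt");

// delete a folder and everything under it
await fs.Rm("/myfolder/", recurse: true);
```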
Other
There are other useful utility methods:
- `bool Exists(IOPath path)` checks for file existence. It is supposed to be really efficient, hence a separate method.
- `Ren` renames files and folders.
- More are coming - check the `IFileStorage` interface to stay up to date.
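A quick sketch of both (the `Ren` argument order - old path, then new path - is an assumption):

```csharp
IFileStorage fs = ...;

// check before renaming
if(await fs.Exists("/report.csv")) {
   await fs.Ren("/report.csv", "/report-old.csv");
}
```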
Supported Storage Systems (Built-In)
- Local Disk Directory (`Files.Of.LocalDisk(...)`).
- In-Memory (`Files.Of.InternalMemory(...)`).
- AWS S3 (`Files.Of.AmazonS3(...)`).
- Minio (`Files.Of.Minio(...)`).
- DigitalOcean Spaces (`Files.Of.DigitalOceanSpaces(...)`).
- Azure Blob Storage / Data Lake Gen 2 (`Files.Of.AzureBlobStorage(...)`).
- Google Cloud Storage (`Files.Of.GoogleCloudStorage(...)`).
- Databricks DBFS (`Files.Of.DatabricksDbfs(...)`).
Instantiation instructions are in the code documentation (IntelliSense) - I prefer that to writing everything out here.
Below are some details worth mentioning.
AWS S3
In AWS, the path addressing style is the following: `/bucket/path/object`.
`Ls` on the root folder returns the list of buckets in the AWS account, whether you have access to them or not.
Authentication
The most usual way to authenticate with S3 is to use the following method:
```csharp
IFileStorage storage = Files.Of.AmazonS3(key, secret, region);
```
These are what Amazon calls "long-term" credentials. If you are using STS, the same method overload allows you to pass a `sessionToken`.
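For example (a sketch; confirm the exact overload in IntelliSense):

```csharp
IFileStorage storage = Files.Of.AmazonS3(key, secret, region, sessionToken);
```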
Another way to authenticate is using a CLI profile. This is useful when your machine is already authenticated using the aws CLI, awsume, or similar tools that write credentials and configuration to `~/.aws/credentials` and `~/.aws/config`.
You only need to pass the profile name (and only if it's not a default one):
```csharp
IFileStorage storage = Files.Of.AmazonS3FromCliProfile();
```
This method has other optional parameters, such as `regionName`, which can be specified or overridden if not found in the CLI configuration.
Minio
Minio essentially uses the standard S3 protocol, but the addressing style is slightly different. There is a helper extension that somewhat simplifies Minio authentication:

```csharp
IFileStorage storage = Files.Of.Minio(endpoint, key, secret);
```
Azure Blob Storage
In Azure Blob Storage, the path addressing style is the following: `/container/path/object`.
Note that there is no storage account in the path, mostly because Shared Key authentication is storage account scoped, not tenant scoped.
`Ls` on the root folder returns the list of containers in the storage account.
Authentication
Azure provider supports authentication with Shared Key:
```csharp
IFileStorage storage = Files.Of.AzureBlobStorage(accountName, sharedKey);
```
Since v2, authentication with Entra ID service principals is supported too:

```csharp
IFileStorage storage = Files.Of.AzureBlobStorage(
   accountName,
   new ClientSecretCredential(tenantId, clientId, clientSecret));
```
Interactive authentication with user credentials and managed identities are not yet supported, but watch this space.
Emulator
The Azure emulator is supported; just use the `AzureBlobStorageWithLocalEmulator()` method to connect to it.
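For example:

```csharp
IFileStorage storage = Files.Of.AzureBlobStorageWithLocalEmulator();
```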
Connection Strings
You can also use connection strings, which are useful when the implementation type is unknown beforehand, should be configurable, or when you just don't want to write an implementation factory yourself. To create storage from a connection string, use the following method:
```csharp
IFileStorage storage = Files.Of.ConnectionString(connectionString);
```
Connection strings have the following format: `<prefix>://<parameters>`. The prefix is the implementation type, like `disk`, `s3` and so on, and the parameters are implementation specific.
```mermaid
mindmap
  root((CS))
    AWS S3
      prefix
        s3
      connection types
        AWS CLI profiles
          examples
            default profile
              s31["`s3://`"]
              s32["`s3://profile=default`"]
            specific profile
              s33["`s3://profile=name`"]
          optional parameters
            region
              if not specified, must be in cli profile
        using keys
          examples
            s34["`s3://keyId=...;key=...;region=...`"]
    local disk
      prefix
        disk
      connection types
        entire disk
          disk2["`disk://`"]
        specific directory
          disk1["`disk://path=localPath`"]
    in-memory
    azure blobs
    DBFS
```
Extending
There are many ways to extend functionality:
- Documentation. You might think it's not extending anything, but if a user is not aware of some functionality, it effectively doesn't exist. Documenting it makes it available, hence extending. You must be slightly mad to follow my style of writing though.
- New functionality. Adding utility methods like copying files inside or between accounts, automatic JSON serialisation etc. is always good. Look at the `IFileStorage` interface and `PolyfilledFileStorage`. In most cases these two files are enough to add pure business logic. Not counting unit tests. Which you must write. Otherwise it's easier to do the whole thing myself. Which, in my experience, is what will happen.
- Native optimisations. Some functionality is generic, and some depends on a specific cloud provider. For instance, one can copy a file by downloading it locally and uploading it with a new name, or utilise a native REST call that accepts source and target file names, if one exists. This involves digging deeper into the specific provider's API.
When contributing a new provider, it's far preferable to embed its code in the library, provided that:
- there are no extra NuGet dependencies.
- it's cross-platform.
I'm a strong advocate of simplicity, and I am not going to repeat the mistake of turning this into a NuGet dependency-tree hell!
Who?
- Used by:
- databricks-sql-cli - Unofficial Databricks SQL management console.
- Pocket Bricks - Databricks client for Android.
- Stowage Explorer - an experimental explorer project.
- Featured in The .NET MAUI Podcast, episode 98.
- Blog post Exploring Stowage.
Related Projects
- RCLONE - cross-platform open-source cloud sync tool.
- Storage.Net - the roots of this project.
Contributing
You are welcome to contribute in any form, though I wouldn't bother, especially financially. Don't bother buying me a coffee - I can do that myself real cheap. During my years of OSS development, everyone I know (including myself) has only lost money. Why am I still doing this? Probably because it's just cool and I'm enjoying it.
Product | Compatible and additional computed target framework versions |
---|---|
.NET | net6.0 is compatible. net6.0-android, net6.0-ios, net6.0-maccatalyst, net6.0-macos, net6.0-tvos and net6.0-windows were computed. net7.0, net7.0-android, net7.0-ios, net7.0-maccatalyst, net7.0-macos, net7.0-tvos and net7.0-windows were computed. net8.0 is compatible. net8.0-android, net8.0-browser, net8.0-ios, net8.0-maccatalyst, net8.0-macos, net8.0-tvos and net8.0-windows were computed. |
- net6.0: No dependencies.
- net8.0: No dependencies.
NuGet packages (1)
Showing the top 1 NuGet packages that depend on Stowage:
Package | Downloads |
---|---|
DeltaIO (Pure, managed, super fast Delta Lake implementation in .NET.) | |
GitHub repositories (1)
Showing the top 1 popular GitHub repositories that depend on Stowage:
Repository | Stars |
---|---|
aloneguid/parquet-dotnet (Fully managed Apache Parquet implementation) | |
Version | Downloads | Last updated |
---|---|---|
2.1.0-pre.2 | 112 | 11/8/2024 |
2.1.0-pre.1 | 35 | 11/8/2024 |
2.0.2-pre.4 | 34 | 11/6/2024 |
2.0.2-pre.3 | 128 | 6/12/2024 |
2.0.2-pre.2 | 51 | 6/11/2024 |
2.0.2-pre.1 | 46 | 6/11/2024 |
2.0.1 | 8,192 | 5/31/2024 |
2.0.0 | 372 | 4/16/2024 |
2.0.0-pre.8 | 106 | 3/27/2024 |
2.0.0-pre.7 | 378 | 12/12/2023 |
2.0.0-pre.6 | 108 | 12/6/2023 |
2.0.0-pre.5 | 80 | 12/4/2023 |
2.0.0-pre.4 | 73 | 12/4/2023 |
2.0.0-pre.3 | 71 | 12/4/2023 |
2.0.0-pre.2 | 79 | 12/4/2023 |
2.0.0-pre.1 | 76 | 12/1/2023 |
1.5.1 | 589 | 11/23/2023 |
1.5.0 | 132 | 11/22/2023 |
1.4.0 | 163 | 11/15/2023 |
1.3.0 | 135 | 11/15/2023 |
1.2.7 | 474 | 9/4/2023 |
1.2.6 | 9,510 | 2/23/2023 |
1.2.5 | 517 | 1/17/2023 |
1.2.4 | 2,387 | 7/26/2022 |
1.2.3 | 402 | 7/26/2022 |
1.2.2 | 452 | 6/27/2022 |
1.2.1 | 427 | 6/23/2022 |
1.2.0 | 444 | 6/13/2022 |
1.1.9 | 577 | 5/20/2022 |
1.1.8 | 404 | 5/20/2022 |
1.1.7 | 566 | 4/4/2022 |
1.1.6 | 448 | 3/24/2022 |
1.1.5 | 565 | 2/22/2022 |
1.1.4 | 452 | 2/11/2022 |
1.1.3 | 428 | 2/11/2022 |
1.1.2 | 466 | 1/28/2022 |
1.1.1 | 447 | 1/28/2022 |
1.1.0 | 450 | 1/28/2022 |
1.0.8 | 290 | 12/23/2021 |
1.0.7 | 275 | 12/14/2021 |
1.0.6 | 343 | 11/10/2021 |
1.0.5 | 499 | 9/29/2021 |
1.0.4 | 313 | 9/28/2021 |
1.0.3 | 333 | 9/27/2021 |
1.0.2 | 294 | 9/24/2021 |
1.0.1 | 338 | 8/17/2021 |
1.0.0 | 362 | 8/17/2021 |
1.0.0-alpha-04 | 275 | 6/3/2021 |
1.0.0-alpha-01 | 256 | 6/3/2021 |