Products

Simply transfer, manage and share files in a reinvented, revolutionary way.

Server & Cluster
  • OS & Arch

    Linux: amd64, arm64, ppc64le, s390x

    macOS: amd64, arm64

    Windows: amd64, arm64

  • Server Binary

    The server is a single binary that runs anywhere with no external dependencies. All features are built in, including the transfer engine, the restful API engine and the clustering engine, as well as all other features, which can be switched on/off with global or individual settings at the resource level.

    The server binary can also be used as the client command line to run transfers. With a local share in the spec, it needs database access to authenticate and authorize access to the share; without a local share in the spec, no database access is needed, but the absolute path of a local folder/file/link is required.
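
    For illustration, a minimal Go sketch of what the two client-mode specs might look like. The field names (direction, remote, share, path) are assumptions; the authoritative transfer spec is defined in the OpenAPI document.

      package main

      import (
          "encoding/json"
          "fmt"
      )

      // Hypothetical transfer spec fields, for illustration only; the real
      // field names live in the OpenAPI document.
      type TransferSpec struct {
          Direction string `json:"direction"`       // "download" or "upload" (assumed)
          Remote    string `json:"remote"`          // remote server or cluster DNS name (assumed)
          Share     string `json:"share,omitempty"` // local share; requires database access
          Path      string `json:"path,omitempty"`  // absolute local path; no database needed
      }

      func main() {
          // With a local share, the server authenticates and authorizes the
          // share against the database.
          withShare := TransferSpec{Direction: "upload", Remote: "filash.example.com", Share: "projects"}
          // Without a share, the absolute path of a local folder/file/link
          // is required and no database access is needed.
          withPath := TransferSpec{Direction: "upload", Remote: "filash.example.com", Path: "/data/projects"}

          for _, spec := range []TransferSpec{withShare, withPath} {
              b, _ := json.Marshal(spec)
              fmt.Println(string(b))
          }
      }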

    It uses the UDP protocol for transfers to achieve the highest potential speed regardless of distance or packet loss rate, and an http restful API to manage the resources hosted on the server, such as users, storages, shares, policies, transfers and events. Both are highly configurable for various use cases, especially multi-tenant cloud SaaS environments, through global or individual settings on each resource.

    It can run the transfer engine and the restful API engine at the same time, or either one standalone. The transfer engine can be configured to accept transfer requests from remote servers/clusters or clients, and also to initiate transfers uploading to or downloading from remote servers/clusters. If configured only to initiate transfers, it can be used as a client-like server, with or without the full restful API features; this mode is named the Filash Pylon.

  • Cluster

    An unlimited number of servers with different licenses can connect to the same database instance to form a multi-role cluster, within which all servers share the same set of resources. API-requested server-to-server (Pylon) transfers are distributed with a certain level of load balancing (the faster server picks up quicker) across the whole cluster.

    The cluster can grow linearly from a single server, with more servers connected along the way at any time.

    The cluster automatically elects its own alpha server to run the background cleanup jobs and avoid conflicts. Whenever the alpha server goes offline, a new alpha is elected and runs the jobs until the old alpha comes back online.

    The IP addresses of the servers in one cluster can be added to the same DNS name, which then resolves to multiple IP addresses, so the cluster-aware transfer client or Pylon can load balance itself (the faster server accepts quicker).
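
    As a small illustration of this client-side load balancing, a cluster-aware client can resolve the shared DNS name to all member addresses before racing its connections. A minimal Go sketch, with cluster.example.com as a placeholder name:

      package main

      import (
          "fmt"
          "net"
      )

      func main() {
          // One DNS name for the cluster resolves to the IP addresses of all
          // member servers ("cluster.example.com" is a placeholder).
          addrs, err := net.LookupHost("cluster.example.com")
          if err != nil {
              fmt.Println("lookup failed:", err)
              return
          }
          // A cluster-aware client or Pylon can try every address and let
          // the faster server accept first.
          for _, addr := range addrs {
              fmt.Println("cluster member:", addr)
          }
      }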

    In a multi-role cluster, some servers may 1. run only the restful API engine, 2. run only background file jobs, 3. initiate but not accept transfers (Pylon), 4. run everything, or many more feature combinations, creating different roles on different hardware configurations.

  • Ports

    The transfer engine uses a single UDP port, configurable but 55001 by default, to listen for all high-speed file transfer requests from the remote side on Linux and macOS. On Windows, a sequential range of UDP ports, configurable but starting from 55001 by default, needs to be open if the server is configured to accept concurrent transfers.

    The restful API engine uses a single TCP port with either http or https on all platforms to listen for restful API requests, configurable but 55000 by default.

  • Concurrency

    The transfer engine has two configurable settings to throttle the total number of simultaneously running transfers:

    1. the maximum number of transfers that can be accepted simultaneously,

    2. the total number of simultaneous sessions, counting both accepted and Pylon (initiated) transfer sessions.
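
    A minimal Go sketch of how the two throttles might interact; the setting names below are assumptions for illustration:

      package main

      import "fmt"

      // Hypothetical names for the two throttles, for illustration only.
      type ConcurrencySettings struct {
          MaxAccepted      int // maximum number of accepted transfers
          MaxTotalSessions int // accepted plus Pylon (initiated) sessions
      }

      // admit sketches the decision: an accepted session must fit under both
      // limits, while an initiated (Pylon) session only counts toward the total.
      func admit(s ConcurrencySettings, accepted, initiated int, isAccepted bool) bool {
          if isAccepted && accepted >= s.MaxAccepted {
              return false
          }
          return accepted+initiated < s.MaxTotalSessions
      }

      func main() {
          s := ConcurrencySettings{MaxAccepted: 8, MaxTotalSessions: 16}
          fmt.Println(admit(s, 8, 0, true))   // false: accepted limit reached
          fmt.Println(admit(s, 4, 12, false)) // false: total session limit reached
      }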

  • JSON everywhere

    The server is built around the easy, human-readable JSON format as the main and only interface data format, for the most convenient integration. Besides the restful API, which uses JSON already, the command line arguments accept JSON, the global and individual settings are JSON, and the license content is JSON.

  • JWT token authentication/authorization

    Either basic username/password or a JWT token can be used for server authentication, and the JWT token can also carry authorization information to grant access to particular resources beyond the policies.

    JWT tokens are verified with one of the public keys uploaded to the user's /keys resource on the server. With an unlimited number of public keys allowed, users can issue and sign tokens with the corresponding private key on site, without copying it around to other environments, avoiding security breaches.

    Together with policies, a JWT token carrying additional temporary authorization information enables the most flexible integration use cases, such as a public share link with a short expiration.

    Both SSH and OpenSSL key pairs are supported.
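
    A minimal sketch, using the Go golang-jwt library and an OpenSSL-style PEM RSA key, of issuing such a token on site with the private key whose public counterpart was uploaded to /keys. The authorization claim names are assumptions, not the documented schema:

      package main

      import (
          "fmt"
          "os"
          "time"

          "github.com/golang-jwt/jwt/v5"
      )

      func main() {
          // Load the PEM-encoded private key; the matching public key has
          // been uploaded to the user's /keys resource on the server.
          pemBytes, err := os.ReadFile("filash_rsa.pem")
          if err != nil {
              panic(err)
          }
          key, err := jwt.ParseRSAPrivateKeyFromPEM(pemBytes)
          if err != nil {
              panic(err)
          }

          // The "share" and "permissions" claim names are illustrative
          // assumptions; the real claim schema is in the OpenAPI document.
          claims := jwt.MapClaims{
              "sub":         "alice",
              "exp":         time.Now().Add(15 * time.Minute).Unix(), // short-lived public share link
              "share":       "public-downloads",
              "permissions": []string{"read", "list"},
          }
          token, err := jwt.NewWithClaims(jwt.SigningMethodRS256, claims).SignedString(key)
          if err != nil {
              panic(err)
          }
          fmt.Println(token)
      }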

  • Resources

    The server hosts resources such as Users, Storages, Shares, Policies, Files, Jobs, Transfers and Events. All resources are domainized for secure multi-tenant/domain resource access segregation; all users except super users can only access resources in their own domain unless granted access.

    User IDs are global, so if granted access, users of one domain can access another domain's resources.

    Users have 3 roles: super, admin and regular. Super users can only be created with the command line on site; after creation they can manage all resources on the server via the restful API, locally or remotely. Admin users are created by super users in particular domains and manage all resources in their domain. Regular users can only access the resources in their domain.

    With a JWT token, the user role can be promoted or demoted temporarily; for instance, a regular user can be promoted to admin for a short period of time (the JWT expiration) to perform a cleanup job or similar. A user can also be permanently promoted or demoted by changing the role via the restful API.

    The Storages resource currently supports two types of storage: local storage locations and S3-compatible object storage.

    For local storage locations, either paths on a local disk or mounted locations are supported. For security reasons, only super users can create a local storage resource for a particular domain.

    For S3-compatible object storage, only super users can create an S3 storage without credential information for a domain, which means it will use the EC2 machine's assumed role to access S3 buckets, while admin users can create an S3 storage for the domain with either an access key/secret or an assume-role ARN string provided.
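
    A hedged Go sketch of an admin user creating an S3 storage through the restful API; the /storages path and the JSON field names are assumptions for illustration, and 55000 is the default API port from the Ports section:

      package main

      import (
          "bytes"
          "fmt"
          "net/http"
      )

      func main() {
          // Hypothetical S3 storage resource body; the field names are
          // illustrative, not the documented schema. An admin user supplies
          // an access key/secret or an assume-role ARN, while a super user
          // may omit credentials to rely on the EC2 machine's assumed role.
          body := []byte(`{
            "name": "archive",
            "type": "s3",
            "bucket": "my-archive-bucket",
            "assume_role_arn": "arn:aws:iam::123456789012:role/filash-archive"
          }`)

          // The path "/storages" and the bearer token are placeholders.
          req, err := http.NewRequest("POST", "https://filash.example.com:55000/storages", bytes.NewReader(body))
          if err != nil {
              panic(err)
          }
          req.Header.Set("Content-Type", "application/json")
          req.Header.Set("Authorization", "Bearer <admin-jwt>")

          resp, err := http.DefaultClient.Do(req)
          if err != nil {
              panic(err)
          }
          defer resp.Body.Close()
          fmt.Println("status:", resp.Status)
      }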

    A storage can have its own settings regarding authentication, storage, job and transfer.

    Shares are the resource that the transfer spec uses to find files. Either a folder or a file in the storage resource can be created as a share. A soft link can also be created as a share, as long as its target is within the storage.

    A share can have its own settings regarding authentication, share, job and transfer.

    Policies are permanent file access permissions that apply to shares. Without permissions, only admin users have full access to all the shares of the domain, while regular users need permissions to gain access. Super users have full access to everything.

    File access permissions can be granted either permanently with policies, or temporarily with a JWT token carrying permissions. All applicable permissions from both the policies and the JWT token are granted.
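
    A tiny Go sketch of that combination rule, merging permanent policy permissions with temporary JWT token permissions (the permission names are illustrative):

      package main

      import "fmt"

      // union sketches the documented behavior: permissions granted by a
      // permanent policy and by a temporary JWT token are combined, and all
      // applicable permissions apply.
      func union(policy, token []string) []string {
          seen := map[string]bool{}
          var out []string
          for _, p := range append(policy, token...) {
              if !seen[p] {
                  seen[p] = true
                  out = append(out, p)
              }
          }
          return out
      }

      func main() {
          policyPerms := []string{"read", "list"} // permanent, from a policy
          tokenPerms := []string{"write"}         // temporary, from a JWT token
          fmt.Println(union(policyPerms, tokenPerms)) // [read list write]
      }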

    Files can be managed and searched synchronously, or asynchronously via file jobs run in the background, for instance periodically deleting long-existing (expired) files to free up storage space, or periodically searching to update the summary of the total number of folders, files and links.

    Either a Storage or a Share resource can be specified to access the Files features for super and admin users, while regular users must specify a Share.

    Besides the high-speed file transfer feature, users can also create an empty file or upload a file of limited size via the Files restful API as a shortcut.

    Jobs are for long-running or repeated file operations in the background, such as periodic file deletion, or a long search or summarization of a large, complex directory structure.

    Be careful when using file deletion jobs: files are deleted permanently with no recovery, unless at least one thorough backup is secured or the S3 bucket has versioning turned on to possibly recover at least old versions.

    A repeated job spec has an interval and an end time. If a JWT token is used, the token expiration is considered together with the end time.
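
    For illustration, a hypothetical periodic deletion job spec with an interval and an end time, sketched in Go; every field name here is an assumption:

      package main

      import (
          "encoding/json"
          "fmt"
          "time"
      )

      // Hypothetical job spec fields, for illustration only.
      type JobSpec struct {
          Type      string `json:"type"`       // e.g. "delete" (assumed)
          Share     string `json:"share"`      // share to operate on (assumed)
          OlderThan string `json:"older_than"` // delete files older than this (assumed)
          Interval  string `json:"interval"`   // repeat interval
          EndTime   string `json:"end_time"`   // stop repeating after this time
      }

      func main() {
          spec := JobSpec{
              Type:      "delete",
              Share:     "scratch",
              OlderThan: "720h", // roughly 30 days
              Interval:  "24h",
              EndTime:   time.Now().AddDate(0, 3, 0).Format(time.RFC3339),
          }
          b, _ := json.MarshalIndent(spec, "", "  ")
          fmt.Println(string(b))
      }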

    Excessive use of search jobs will degrade the server's performance, because each search job run traverses the entire directory structure specified in the job spec. A recommended good practice is to use dedicated servers in the cluster to run file jobs only.

    Transfers can be posted, canceled, deleted and updated locally, and monitored, aborted and updated with max_rate (to cap the maximum speed) from both the local and remote side, all through the restful API.

    Each transfer can be separated into multiple sessions, each of which transfers a segmented part of the file set. Within a cluster, sessions are distributed with a certain level of load balancing (the faster server picks up quicker) across the entire cluster.

    Live progress of each transfer or session is available via restful API calls, with various progress numbers reported, such as bytes and the number of folders/files/links found and transferred. The API call can also be "followed", using HTTP chunked encoding, to continuously report the progress of a transfer or session.
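
    A minimal Go sketch of "following" a transfer's progress; the endpoint path, the follow query parameter and the one-JSON-document-per-line framing are assumptions:

      package main

      import (
          "bufio"
          "fmt"
          "net/http"
      )

      func main() {
          // Hypothetical follow endpoint; the path and query parameter are
          // placeholders for whatever the OpenAPI document defines.
          resp, err := http.Get("https://filash.example.com:55000/transfers/42?follow=true")
          if err != nil {
              panic(err)
          }
          defer resp.Body.Close()

          // net/http decodes chunked encoding transparently, so the stream
          // can be read record by record as the server reports progress.
          scanner := bufio.NewScanner(resp.Body)
          for scanner.Scan() {
              fmt.Println("progress:", scanner.Text()) // bytes, folders/files/links found and transferred, ...
          }
      }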

    If any error occurs during the transfer and retry parameters are present in the transfer spec, the transfer is retried until it succeeds or runs out of retries.

    Transfers can run in either of two levels of sync (repeating) modes, transfer level or session level, with an end time. At the transfer level, all sessions need to be aborted or finished before the next run of the entire transfer after the specified interval. At the session level, each session is kicked off individually after the specified interval since its last run. The interval itself also has two modes: the time span since the last enqueued-at time, or since the last aborted/finished-at time.

    If a JWT token is used for sync transfers, the token expiration is considered together with the end time in the sync spec.

    Events of restful API calls with the POST, PUT and DELETE methods on all resources, as well as transfer activities, are recorded to keep a trackable history of server activities.

    learn more...

  • Background jobs

    There are several types of background jobs running periodically to do cleanup chores according to the retention settings, or to kick, retry or expire file jobs and transfers according to their specs.

    In a cluster, only the elected alpha server runs the background jobs, to avoid conflicts and unnecessary excessive database calls.

  • Settings

    There are 5 levels of flexible settings to fit various use cases, especially SaaS.

    1. License settings: all kinds of feature combinations can be licensed for partnership or package plans.

    2. Global settings: every setting can be given a maximum value in the global settings, applied to the entire server to cap/balance the feature capabilities for end users.

    3. User settings: throttle the maximum number of JWT token verification public keys, unlimited by default.

    4. Storage settings: each storage can be created/updated with a settings section to cap/balance the feature capabilities.

    5. Share settings: each share can also be capped/balanced with individual settings.

  • Logging

    Multi-level JSON format logging is applied globally, to a log file, syslogd or stdout. Individual transfers can specify a log level different from the global one for troubleshooting purposes.

  • Key pairs

    All users, including super, admin and regular users, can upload an unlimited number of public keys to the server for authenticating/authorizing the JWT tokens issued with the corresponding private keys.

    Giving each environment its own key pairs is a best practice that is highly recommended in production, to avoid copying private keys around environments.

  • Database

    Etcd is used as the main database, either standalone, running within the server binary, or as an external cluster of etcd instances for a Filash clustering environment.

    Etcd is cloud-native by design, especially for Kubernetes. It is a strongly consistent, distributed key-value store that provides a reliable way to store data that needs to be accessed by a distributed system or cluster of machines, which is why Filash chose it as the main database for both permanent resource metadata and transient transfer progress records, with the highest data consistency guaranteed.
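
    For context, a minimal sketch with the official etcd Go client (go.etcd.io/etcd/client/v3) of the strongly consistent put/get such a database provides; the endpoints and key layout are placeholders, not Filash's actual schema:

      package main

      import (
          "context"
          "fmt"
          "time"

          clientv3 "go.etcd.io/etcd/client/v3"
      )

      func main() {
          // Connect to an external etcd cluster (the endpoints are placeholders).
          cli, err := clientv3.New(clientv3.Config{
              Endpoints:   []string{"etcd-1:2379", "etcd-2:2379", "etcd-3:2379"},
              DialTimeout: 5 * time.Second,
          })
          if err != nil {
              panic(err)
          }
          defer cli.Close()

          ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
          defer cancel()

          // Strongly consistent put/get; the key below is illustrative only.
          if _, err := cli.Put(ctx, "/example/transfers/42/progress", `{"bytes": 1048576}`); err != nil {
              panic(err)
          }
          resp, err := cli.Get(ctx, "/example/transfers/42/progress")
          if err != nil {
              panic(err)
          }
          for _, kv := range resp.Kvs {
              fmt.Printf("%s = %s\n", kv.Key, kv.Value)
          }
      }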

  • Security

    Both the client and remote side can use certificates with a custom CA to authenticate Filash instances, for use cases with high security requirements or environments that need multiple clusters for different organizations.

  • License

    Each Filash server requires a license with a unique license ID to operate against other servers, either standalone or in a cluster. Conflicting license IDs are reported as an error whenever Pylon transfers see duplicate license IDs used on both sides.

    The license is a JWT token issued and signed by filash.io with an expiration datetime according to the subscription: the last EOD of each month for monthly plans, or the last EOD of December for yearly plans, with or without settings restricting the features of the server or cluster.

    Filash provides free tier licenses that expire at the end of each year, and if no license is provisioned at all, the server generates a temporary license with a 24-hour expiration and the free tier feature set, for a convenient test drive.
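
    Since the license is itself a JWT, its claims can be decoded locally for inspection without verifying the signature. A minimal Go sketch with the golang-jwt library; the claim names are assumptions:

      package main

      import (
          "fmt"

          "github.com/golang-jwt/jwt/v5"
      )

      func main() {
          // "<license-jwt>" is a placeholder for the provisioned license token.
          licenseToken := "<license-jwt>"

          claims := jwt.MapClaims{}
          // ParseUnverified decodes the header and claims without checking
          // the signature, which is fine for local inspection only.
          if _, _, err := jwt.NewParser().ParseUnverified(licenseToken, claims); err != nil {
              fmt.Println("decode failed:", err)
              return
          }
          fmt.Println("license id:", claims["jti"]) // claim name assumed
          fmt.Println("expires at:", claims["exp"])
      }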

  • Learn more detailed specifications in the OpenAPI document ...

Client Command Line & Pylon
  • OS & Arch

    Linux: amd64, arm64, ppc64le, s390x

    macOS: amd64, arm64

    Windows: amd64, arm64

  • Command line

    The client command line is also a single binary, with no external dependencies required, running on all major OSs and architectures.

    The client command line runs either on a user's laptop/desktop or on a server with no Filash server installed, to upload or download any accessible folders/files/links.

    No license is required whatsoever; it is totally free.

    The command line accepts the transfer spec as a whole JSON string, its only argument, to perform a download or upload. The exact same JSON string can also be used for Pylon (server-to-server) transfers when submitted through the restful API.

    Custom transfer settings are accepted in the settings section of the transfer spec, such as increasing the default numbers (the number of CPU threads) of finders, readers or writers to transfer millions of small files at the highest speed.
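
    A minimal Go sketch of invoking the command line with the transfer spec as the single JSON argument, including a settings section; the binary name, spec fields and settings keys are all assumptions:

      package main

      import (
          "fmt"
          "os/exec"
      )

      func main() {
          // The whole transfer spec is passed as one JSON string, the only
          // argument. The binary name "filash" and every field below are
          // illustrative assumptions.
          spec := `{
            "direction": "upload",
            "remote": "filash.example.com",
            "share": "projects",
            "settings": {"finders": 8, "readers": 16, "writers": 16}
          }`

          out, err := exec.Command("filash", spec).CombinedOutput()
          fmt.Println(string(out))
          if err != nil {
              fmt.Println("transfer failed:", err)
          }
      }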

    Ctrl+C aborts the transfer command gracefully, as fast as possible, while multiple strokes terminate the command forcefully, possibly leaving the remote side to time out in 15 (or as configured) seconds.

    The server binary has the exact same behaviour when used as a client, so with the server already installed, the client command line is not necessary at all.

  • Pylon (server as client side)

    The server on the client side, named Pylon, only initiates 'download from' or 'upload to' transfers to the remote side and doesn't accept transfers initiated from the remote side, while it still has all the features the server does, which makes it perfect for web browser or desktop application integrations.

    Pylon behaves exactly like the server, with restful APIs scheduling transfers, managing resources, running background jobs and so on. When integrated on the client side with a web or desktop application, it transparently behaves like a full-featured server in the background on your laptop.

    It needs a unique license to operate, like the server, and has a tiered package pricing plan that can cost far less than the servers.

  • Logging

    The command line outputs JSON-formatted log lines to stdout by default, whereas log_path can be specified to redirect them to a log file or syslogd. Setting log_level can be useful for troubleshooting purposes.

  • Security

    Both the client and server side can use certificates with a custom CA to authenticate Filash instances for stricter security requirements.

    Besides the certificate authenticating the Filash instance, server resources are protected by username/password or JWT token.

  • Speed by AI

    Thanks to the AI-powered rate control algorithm, no parameters about the network conditions, such as a target rate, are needed whatsoever to achieve the highest potential transfer speed, regardless of distances as far as GEO satellites or packet loss rates over 50%. A max speed can still be set at the global, storage, share and transfer levels for throttling purposes.

  • Learn more detailed transfer spec specifications in the OpenAPI document ...

SDK Shared Library
  • OS & Arch

    Android: amd64, arm64

    iOS: arm64, iPhone simulator on amd64

  • C/C++ shared library

    The SDK is a single-binary C/C++ shared library built from the client's code base for the corresponding mobile platform.

    Any programming language that can use a C/C++ shared library can easily import and use it, with no external dependencies needed.

    There are only two functions. One starts the transfer, with five string arguments: the log path (optional), the certificate content (optional), the private key content (optional), the passphrase (optional) and the transfer spec. The other aborts the current transfer, if any.

    Settings in the transfer spec can still be used to fine-tune or throttle the transfer performance.
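
    A minimal cgo sketch of calling the two functions from Go; the symbol names, the library name and the empty-string convention for the optional arguments are assumptions, since only the argument list is described here:

      package main

      /*
      #cgo LDFLAGS: -lfilash
      #include <stdlib.h>

      // Hypothetical declarations: the SDK exposes one function that starts
      // a transfer with five string arguments and one that aborts it, but
      // these symbol names are assumed, not the SDK's actual names.
      extern int  filash_start_transfer(const char *log_path, const char *cert,
                                        const char *key, const char *passphrase,
                                        const char *transfer_spec);
      extern void filash_abort_transfer(void);
      */
      import "C"

      import (
          "fmt"
          "unsafe"
      )

      func main() {
          // Optional arguments are passed as empty strings here (assumed).
          spec := C.CString(`{"direction": "download", "share": "photos"}`)
          empty := C.CString("")
          defer C.free(unsafe.Pointer(spec))
          defer C.free(unsafe.Pointer(empty))

          rc := C.filash_start_transfer(empty, empty, empty, empty, spec)
          fmt.Println("start returned:", rc)

          // Abort the current transfer, if any.
          C.filash_abort_transfer()
      }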

    No license is required whatsoever; it is totally free.

  • Custom build SDK

    Custom builds of either the server/cluster or the client command line/Pylon are available on demand for platforms other than those listed on this site, and custom feature development is available on demand as well for Lydiksen partners.