Fix typos
rex4539 committed Apr 21, 2020
1 parent 41851e8 commit b4d6bc9
Showing 11 changed files with 29 additions and 29 deletions.
4 changes: 2 additions & 2 deletions API_CORE.md
@@ -12,7 +12,7 @@ The `core` API is the programmatic interface for IPFS, it defines the method sig

 # Table of Contents

-TODo
+TODO

 ## Required for compliant IPFS implementation

@@ -83,7 +83,7 @@ TODo
 - tail


-## Tooling on top of the Core + Extentions
+## Tooling on top of the Core + Extensions

 > Everything defined here is optional, and might be specific to the implementation details (like running on the command line).
4 changes: 2 additions & 2 deletions ARCHITECTURE.md
@@ -112,7 +112,7 @@ The Routing Sytem is an interface that is satisfied by various kinds of implemen

 See more in the [libp2p specs](https://github.com/libp2p/specs).

-## 3.3 Block Exchange -- transfering content-addressed data
+## 3.3 Block Exchange -- transferring content-addressed data

 The IPFS **Block Exchange** takes care of negotiating bulk data transfers. Once nodes know each other -- and are connected -- the exchange protocols govern how the transfer of content-addressed blocks occurs.

@@ -175,7 +175,7 @@ The IPFS **naming** layer -- or IPNS -- handles the creation of:

 IPNS is based on [SFS](http://en.wikipedia.org/wiki/Self-certifying_File_System). It is a PKI namespace -- a name is simply the hash of a public key. Whoever controls the private key controls the name. Records are signed by the private key and distributed anywhere (in IPFS, via the routing system). This is an egalitarian way to assign mutable names in the internet at large, without any centralization whatsoever, or certificate authorities.

-See more in the namin spec (TODO).
+See more in the naming spec (TODO).

 # 4. Applications and Datastructures -- on top of IPFS
2 changes: 1 addition & 1 deletion BITSWAP.md
@@ -113,7 +113,7 @@ Task workers watch the message queues, dequeue a waiting message, and send it to

 ## Network

-The network is the abstraction representing all Bitswap peers that are connected to us by one or more hops. Bitswap messages flow in and out of the network. This is where a game-theoretical analysis of Bitswap becomes relevant – in an arbitrary network we must assume that all of our peers are rational and self-interested, and we act accordingly. Work along these lines can be found in the [research-bitswap respository](https://github.com/ipfs/research-bitswap), with a preliminary game-theoretical analysis currently in-progress [here](https://github.com/ipfs/research-bitswap/blob/docs/strategy_analysis/analysis/prelim_strategy_analysis.pdf).
+The network is the abstraction representing all Bitswap peers that are connected to us by one or more hops. Bitswap messages flow in and out of the network. This is where a game-theoretical analysis of Bitswap becomes relevant – in an arbitrary network we must assume that all of our peers are rational and self-interested, and we act accordingly. Work along these lines can be found in the [research-bitswap repository](https://github.com/ipfs/research-bitswap), with a preliminary game-theoretical analysis currently in-progress [here](https://github.com/ipfs/research-bitswap/blob/docs/strategy_analysis/analysis/prelim_strategy_analysis.pdf).

 # Implementation Details
8 changes: 4 additions & 4 deletions IMPORTERS_EXPORTERS.md
@@ -30,7 +30,7 @@ Lots of discussions around this topic, some of them here:

 Importing data into IPFS can be done in a variety of ways. These are use-case specific, produce different datastructures, produce different graph topologies, and so on. These are not strictly needed in an IPFS implementation, but definitely make it more useful.

-These data importing primitivies are really just tools on top of IPLD, meaning that these can be generic and separate from IPFS itself.
+These data importing primitives are really just tools on top of IPLD, meaning that these can be generic and separate from IPFS itself.

 Essentially, data importing is divided into two parts:

@@ -52,10 +52,10 @@ Essentially, data importing is divided into two parts:

 ## Requirements

-These are a set of requirements (or guidelines) of the expectations that need to be fullfilled for a layout or a splitter:
+These are a set of requirements (or guidelines) of the expectations that need to be fulfilled for a layout or a splitter:

 - a layout should expose an API encoder/decoder like, that is, able to convert data to its format and convert it back to the original format
-- a layout should contain a clear umnambiguous representation of the data that gets converted to its format
+- a layout should contain a clear unambiguous representation of the data that gets converted to its format
 - a layout can leverage one or more splitting strategies, applying the best strategy depending on the data format (dedicated format chunking)
 - a splitter can be:
   - agnostic - chunks any data format in the same way
@@ -77,7 +77,7 @@ These are a set of requirements (or guidelines) of the expectations that need to
 Importer
 ```

-- `chunkers or splitters` algorithms that read a stream and produce a series of chunks. for our purposes should be deterministic on the stream. divided into:
+- `chunkers or splitters` algorithms that read a stream and produce a series of chunks. for our purposes should be deterministic on the stream. divided into:
   - `universal chunkers` which work on any streams given to them. (eg size, rabin, etc). should work roughly equally well across inputs.
   - `specific chunkers` which work on specific types of files (tar splitter, mp4 splitter, etc). special purpose but super useful for big files and special types of data.
 - `layouts or topologies` graph topologies (eg balanced vs trickledag vs ext4, ... etc)
10 changes: 5 additions & 5 deletions IPNS.md
@@ -25,7 +25,7 @@ All things considered, the IPFS naming layer is responsible for the creation of:

 ## Introduction

-Each time a file is modified, its content address changes. As a consequence, the address previously used for getting that file needs to be updated by who is using it. As this is not pratical, IPNS was created to solve the problem.
+Each time a file is modified, its content address changes. As a consequence, the address previously used for getting that file needs to be updated by who is using it. As this is not practical, IPNS was created to solve the problem.

 IPNS is based on [SFS](http://en.wikipedia.org/wiki/Self-certifying_File_System). It consists of a PKI namespace, where a name is simply the hash of a public key. As a result, whoever controls the private key has full control over the name. Accordingly, records are signed by the private key and then distributed across the network (in IPFS, via the routing system). This is an egalitarian way to assign mutable names on the Internet at large, without any centralization whatsoever, or certificate authorities.

@@ -53,7 +53,7 @@ An IPNS record is a data structure containing the following fields:
 - 7. **ttl** (uint64)
   - A hint for how long the record should be cached before going back to, for instance the DHT, in order to check if it has been updated.

-These records are stored locally, as well as spread accross the network, in order to be accessible to everyone. For storing this structured data, we use [Protocol Buffers](https://github.com/google/protobuf), which is a language-neutral, platform neutral extensible mechanism for serializing structured data.
+These records are stored locally, as well as spread across the network, in order to be accessible to everyone. For storing this structured data, we use [Protocol Buffers](https://github.com/google/protobuf), which is a language-neutral, platform neutral extensible mechanism for serializing structured data.

 ```
 message IpnsEntry {
@@ -79,13 +79,13 @@ message IpnsEntry {

 Taking into consideration a p2p network, each peer should be able to publish IPNS records to the network, as well as to resolve the IPNS records published by other peers.

-When a node intends to publish a record to the network, an IPNS record needs to be created first. The node needs to have a previously generated assymetric key pair to create the record according to the datastructure previously specified. It is important pointing out that the record needs to be uniquely identified in the network. As a result, the record identifier should be a hash of the public key used to sign the record.
+When a node intends to publish a record to the network, an IPNS record needs to be created first. The node needs to have a previously generated asymmetric key pair to create the record according to the datastructure previously specified. It is important pointing out that the record needs to be uniquely identified in the network. As a result, the record identifier should be a hash of the public key used to sign the record.

 As an IPNS record may be updated during its lifetime, a versioning related logic is needed during the publish process. As a consequence, the record must be stored locally, in order to enable the publisher to understand which is the most recent record published. Accordingly, before creating the record, the node must verify if a previous version of the record exists, and update the sequence value for the new record being created.

 Once the record is created, it is ready to be spread through the network. This way, a peer can use whatever routing system it supports to make the record accessible to the remaining peers of the network.

-On the other side, each peer must be able to get a record published by another node. It only needs to have the unique identifier used to publish the record to the network. Taking into account the routing system being used, we may obtain a set of occurences of the record from the network. In this case, records can be compared using the sequence number, in order to obtain the most recent one.
+On the other side, each peer must be able to get a record published by another node. It only needs to have the unique identifier used to publish the record to the network. Taking into account the routing system being used, we may obtain a set of occurrences of the record from the network. In this case, records can be compared using the sequence number, in order to obtain the most recent one.

 As soon as the node has the most recent record, the signature and the validity must be verified, in order to conclude that the record is still valid and not compromised.

@@ -120,4 +120,4 @@ The routing record is spread across the network according to the available routi

 **Key format:** `/ipns/BINARY_ID`

-The two routing systems currenty available in IPFS are the `DHT` and `pubsub`. As the `pubsub` topics must be `utf-8` for interoperability among different implementations
+The two routing systems currently available in IPFS are the `DHT` and `pubsub`. As the `pubsub` topics must be `utf-8` for interoperability among different implementations
10 changes: 5 additions & 5 deletions KEYSTORE.md
@@ -33,7 +33,7 @@ in the directory should be readonly, by the owner `400`.

 ### Interface
 Several additions and modifications will need to be made to the ipfs toolchain to
-accomodate the changes. First, the creation of two subcommands `ipfs key` and
+accommodate the changes. First, the creation of two subcommands `ipfs key` and
 `ipfs crypt`:

 ```
@@ -148,7 +148,7 @@ OPTIONS:

 DESCRIPTION:
-    'ipfs crypt encrypt' is a command used to encypt data so that only holders of a certain
+    'ipfs crypt encrypt' is a command used to encrypt data so that only holders of a certain
     key can read it.
 ```
@@ -206,7 +206,7 @@ does not linger in memory.

 #### Unixfs

-- new node types, 'encrypted' and 'signed', probably shouldnt be in unixfs, just understood by it
+- new node types, 'encrypted' and 'signed', probably shouldn't be in unixfs, just understood by it
 - if new node types are not unixfs nodes, special consideration must be given to the interop

 - DagReader needs to be able to access keystore to seamlessly stream encrypted data we have keys for
@@ -217,7 +217,7 @@ does not linger in memory.
 - DagBuilderHelper needs to be able to encrypt blocks
 - Dag Nodes should be generated like normal, then encrypted, and their parents should
   link to the hash of the encrypted node
-- DagBuilderParams should have extra parameters to acommodate creating a DBH that encrypts the blocks
+- DagBuilderParams should have extra parameters to accommodate creating a DBH that encrypts the blocks

 #### New 'Encrypt' package

@@ -230,7 +230,7 @@ public key chosen and stored in the Encrypted DAG structure.

 Note: One option is to simply add it to the key interface.

 ### Structures
-Some tenative mockups (in json) of the new DAG structures for signing and encrypting
+Some tentative mockups (in json) of the new DAG structures for signing and encrypting

 Signed DAG:
 ```
4 changes: 2 additions & 2 deletions MERKLE_DAG.md
@@ -41,7 +41,7 @@ The format has two parts, the logical format, and the serialized format.

 ### Logical Format

-The merkledag format defines two parts, `Nodes` and `Links` between nodes. `Nodes` embed `Links` in their `Link Segement` (or link table).
+The merkledag format defines two parts, `Nodes` and `Links` between nodes. `Nodes` embed `Links` in their `Link Segment` (or link table).

 A node is divided in two parts:
 - a `Link Segment` which contains all the links.
@@ -112,7 +112,7 @@ In a sense, IPFS is a "web of data-structures", with the merkledag as the common

 The merkledag is a type of Linked-Data. The links do not follow the standard URI format, and instead opt for a more general and flexible UNIX filesystem path format, but the power is all there. One can trivially map formats like JSON-LD directly onto IPFS (IPFS-LD), making IPFS applications capable of using the full-power of the semantic web.

-A powerful result of content (and identity) addressing is that linked data definitions can be distributed directly with the content itself, and do not need to be served from the original location. This enables the creation of Linked Data defintions, specs, and applications which can operate faster (no need to fetch it over the network), disconnected, or even completely offline.
+A powerful result of content (and identity) addressing is that linked data definitions can be distributed directly with the content itself, and do not need to be served from the original location. This enables the creation of Linked Data definitions, specs, and applications which can operate faster (no need to fetch it over the network), disconnected, or even completely offline.

 ## Merkledag Notation
4 changes: 2 additions & 2 deletions README.md
@@ -12,7 +12,7 @@

 We use the following label system to identify the state of each spec:

-- ![](https://img.shields.io/badge/status-wip-orange.svg?style=flat-square) - A work-in-progress, possibly to describe an idea before actually commiting to a full draft of the spec.
+- ![](https://img.shields.io/badge/status-wip-orange.svg?style=flat-square) - A work-in-progress, possibly to describe an idea before actually committing to a full draft of the spec.
 - ![](https://img.shields.io/badge/status-draft-yellow.svg?style=flat-square) - A draft that is ready to review. It should be implementable.
 - ![](https://img.shields.io/badge/status-reliable-green.svg?style=flat-square) - A spec that has been adopted (implemented) and can be used as a reference point to learn how the system works.
 - ![](https://img.shields.io/badge/status-stable-brightgreen.svg?style=flat-square) - We consider this spec to close to final, it might be improved but the system it specifies should not change fundamentally.
@@ -54,7 +54,7 @@ The specs contained in this repository are:
 - [Bitswap](./BITSWAP.md) - BitTorrent-inspired exchange
 - **Key Management:**
   - [KeyStore](./KEYSTORE.md) - Key management on IPFS
-  - [KeyChain](./KEYCHAIN.md) - Distribution of cryptographic Artificats
+  - [KeyChain](./KEYCHAIN.md) - Distribution of cryptographic Artifacts
 - **Networking layer:**
   - [libp2p](https://github.com/libp2p/specs) - libp2p is a modular and extensible network stack, built and use by IPFS, but that it can be reused as a standalone project. Covers:
 - **Records, Naming and Record Systems:**
2 changes: 1 addition & 1 deletion REPO.md
@@ -90,7 +90,7 @@ Keys are structured using the [multikey](https://github.com/jbenet/multikey) for

 The node's `config` (configuration) is a tree of variables, used to configure various aspects of operation. For example:
 - the set of bootstrap peers IPFS uses to connect to the network
 - the Swarm, API, and Gateway network listen addresses
-- the Datastore configuration regarding the contruction and operation of the on-disk storage system.
+- the Datastore configuration regarding the construction and operation of the on-disk storage system.

 There is a set of properties, which are mandatory for the repo usage. Those are `Addresses`, `Discovery`, `Bootstrap`, `Identity`, `Datastore` and `Keychain`.
6 changes: 3 additions & 3 deletions REPO_FS.md
@@ -47,9 +47,9 @@ This spec defines `fs-repo` version `1`, its formats, and semantics.
 `./api` is a file that exists to denote an API endpoint to listen to.
 - It MAY exist even if the endpoint is no longer live (i.e. it is a _stale_ or left-over `./api` file).

-In the presence of an `./api` file, ipfs tools (eg go-ipfs `ipfs daemon`) MUST attempt to delegate to the endpoint, and MAY remove the file if resonably certain the file is stale. (e.g. endpoint is local, but no process is live)
+In the presence of an `./api` file, ipfs tools (eg go-ipfs `ipfs daemon`) MUST attempt to delegate to the endpoint, and MAY remove the file if reasonably certain the file is stale. (e.g. endpoint is local, but no process is live)

-The `./api` file is used in conjunction with the `repo.lock`. Clients may opt to use the api service, or wait until the process holding `repo.lock` exits. The file's content is the api endoint as a [multiaddr](https://github.com/jbenet/multiaddr)
+The `./api` file is used in conjunction with the `repo.lock`. Clients may opt to use the api service, or wait until the process holding `repo.lock` exits. The file's content is the api endpoint as a [multiaddr](https://github.com/jbenet/multiaddr)

 ```
 > cat .ipfs/api
@@ -107,7 +107,7 @@ configuration variables. It MUST only be changed while holding the

 ### hooks/

-The `hooks` directory contains exectuable scripts to be called on specific
+The `hooks` directory contains executable scripts to be called on specific
 events to alter ipfs node behavior.

 Currently available hooks:
4 changes: 2 additions & 2 deletions UNIXFS.md
@@ -107,7 +107,7 @@ UnixFS currently supports two optional metadata fields:
 - For ergonomic reasons a surface API of an encoder must allow fractional 0 as input, while at the same time must ensure it is stripped from the final structure before encoding, satisfying the above constraints.

 - Implementations interpreting the mtime metadata in order to apply it within a non-IPFS target must observe the following:
-  - If the target supports a distinction between `unspecified` and `0`/`1970-01-01T00:00:00Z`, the distinction must be preserverd within the target. E.g. if no `mtime` structure is available, a web gateway must **not** render a `Last-Modified:` header.
+  - If the target supports a distinction between `unspecified` and `0`/`1970-01-01T00:00:00Z`, the distinction must be preserved within the target. E.g. if no `mtime` structure is available, a web gateway must **not** render a `Last-Modified:` header.
   - If the target requires an mtime ( e.g. a FUSE interface ) and no `mtime` is supplied OR the supplied `mtime` falls outside of the targets accepted range:
     - When no `mtime` is specified or the resulting `UnixTime` is negative: implementations must assume `0`/`1970-01-01T00:00:00Z` ( note that such values are not merely academic: e.g. the OpenVMS epoch is `1858-11-17T00:00:00Z` )
     - When the resulting `UnixTime` is larger than the targets range ( e.g. 32bit vs 64bit mismatch ) implementations must assume the highest possible value in the targets range ( in most cases that would be `2038-01-19T03:14:07Z` )
@@ -225,7 +225,7 @@ the "usual" positive values easy to eyeball. The varint representing the time of
 #### FractionalNanoseconds
 Fractional values are effectively a random number in the range 1 ~ 999,999,999. Such values will exceed
 2^28 nanoseconds ( 268,435,456 ) in most cases. Therefore, the fractional part is represented as a 4-byte
-`fixed32`, [as per google's recommendation](https://developers.google.com/protocol-buffers/docs/proto#scalar).
+`fixed32`, [as per Google's recommendation](https://developers.google.com/protocol-buffers/docs/proto#scalar).

 [multihash]: https://tools.ietf.org/html/draft-multiformats-multihash-00
 [CID]: https://docs.ipfs.io/guides/concepts/cid/
