Introduction


IPFS networks come in two kinds:

  1. Public
  2. Private

Most commercial applications, especially enterprise solutions, need full control over their data. For them the public IPFS network is unsuitable, and building a private IPFS network is often a hard requirement.

In this article we walk through the process of creating a private IPFS network:

we will create a private IPFS network with an IPFS cluster on top of it for data replication.

IPFS itself does not replicate data between nodes. To replicate data in an IPFS network there are two options:

  1. Filecoin
  2. IPFS-Cluster

In this article we use IPFS-Cluster.

We will build the private network on three virtual machines. Here is the relevant reference documentation:

  1. IPFS: A protocol and network designed to create a content-addressable, peer-to-peer method of storing and sharing hypermedia in a distributed file system. Read more

  2. Private IPFS: peers in a private network share a single secret key, and all peers can then communicate only with other members of the private IPFS network. Read more.

  3. IPFS-Cluster: an IPFS cluster is a standalone program with a CLI client that allocates, replicates, and tracks pins across a set of IPFS daemons. IPFS-Cluster uses a leader-based consensus algorithm, Raft, to coordinate storage of a pinset, distributing the set of data across the participating nodes.

    • A cluster peer application: ipfs-cluster-service, to be run along with go-ipfs.
    • A client CLI application: ipfs-cluster-ctl, which allows easily interacting with the peer's HTTP API.
    • An additional "follower" peer application: ipfs-cluster-follow, focused on simplifying the process of configuring and running follower peers.

    Read more

Note that:

  1. Private networking is a built-in feature of core IPFS, while IPFS-Cluster is a separate application.
  2. IPFS and IPFS-Cluster are installed as separate packages and started as separate processes.
  3. IPFS and IPFS-Cluster have different peer IDs and API endpoints, and use different ports.
  4. The IPFS-Cluster daemon depends on the IPFS daemon: start the IPFS daemon first, then the IPFS-Cluster daemon.
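For instance, once both daemons are installed and running (they are set up in the steps below), you can verify this separation yourself. A quick sanity check, assuming the default ports listed in the next section:

ipfs id -f='<id>'                     # IPFS peer ID
ipfs-cluster-service id               # IPFS-Cluster peer ID, different from the above
ss -ltn | grep -E '4001|5001|9096'    # each daemon listens on its own ports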

Setting Up a Private IPFS Network


By default, IPFS and IPFS-Cluster use the following ports:

IPFS:

  4001 – communication with other nodes
  5001 – API server
  8080 – gateway server

IPFS-Cluster:

  9094 – HTTP API endpoint
  9095 – IPFS proxy endpoint
  9096 – cluster swarm, used for communication between cluster nodes

We will use three freshly created virtual machines (in my case from DigitalOcean) running Ubuntu Linux 16.04, with the command line as the main tool for installing the necessary packages and settings. Depending on your cloud provider (AWS, Azure, Google, etc.), you may need to look at some additional settings, like firewall or security group configuration, to let your peers see each other.
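For example, on Ubuntu with ufw you might open just the ports the peers need to reach each other. A sketch under the default port assignments above; translate it into security-group rules on AWS and similar providers:

sudo ufw allow 4001/tcp    # IPFS swarm
sudo ufw allow 9096/tcp    # IPFS-Cluster swarm
# 5001 (API), 8080 (gateway) and 9094/9095 (cluster API/proxy) are best left
# bound to localhost or restricted to trusted IPs rather than exposed publicly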

Let’s suppose that we have three VMs with the following IP addresses:

  Node0: 192.168.10.1
  Node1: 192.168.10.2
  Node2: 192.168.10.3

Let’s start with the zero node (Node0) which will be our bootstrap node.

Step 1: Install Go

First of all, let’s install Go as we will need it during our deployment process. Update Linux packages and dependencies:

sudo apt-get update
sudo apt-get -y upgrade

Download the latest Go release and unpack it:

wget https://dl.google.com/go/go1.11.4.linux-amd64.tar.gz
sudo tar -xvf go1.11.4.linux-amd64.tar.gz
sudo mv go /usr/local

Create a path for Go and set the environment variables. 1. Create the folder:

mkdir $HOME/gopath

Open the .bashrc file and add three variables, GOROOT, GOPATH, and PATH, to its end. Open the file:

sudo nano $HOME/.bashrc

Insert to the end of the .bashrc file:

export GOROOT=/usr/local/go
export GOPATH=$HOME/gopath
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

2. Reload the .bashrc file and check the Go version:

source ~/.bashrc
go version

Step 2: Install IPFS

We will install the latest version of go-ipfs. At the moment of writing this article, it was v0.4.18 for Linux. You can check for the latest version here: https://dist.ipfs.io/#go-ipfs

Download IPFS, unzip tar file, move unzipped folder under bin and initialise IPFS node:

wget https://dist.ipfs.io/go-ipfs/v0.4.18/go-ipfs_v0.4.18_linux-amd64.tar.gz
tar xvfz go-ipfs_v0.4.18_linux-amd64.tar.gz
sudo mv go-ipfs/ipfs /usr/local/bin/ipfs
ipfs init
ipfs version

Repeat steps 1 and 2 for all your VMs.

Step 3: Create the Private Network

Once you have Go and IPFS installed on all of your nodes, run the following command to install the swarm key generation utility. The swarm key allows us to create a private network and tells network peers to communicate only with peers who share this secret key.

This command should be run only on your Node0. We generate swarm.key on the bootstrap node and then just copy it to the rest of the nodes.

go get -u github.com/Kubuxu/go-ipfs-swarm-key-gen/ipfs-swarm-key-gen

Now run this utility on your first node to generate swarm.key under the .ipfs folder:

ipfs-swarm-key-gen > ~/.ipfs/swarm.key

Copy the generated swarm.key file to the IPFS directory (~/.ipfs) of each node participating in the private network.
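For example, from Node0 you could push the key to the other two machines, a sketch assuming root SSH access and the sample IPs above:

scp ~/.ipfs/swarm.key root@192.168.10.2:~/.ipfs/swarm.key
scp ~/.ipfs/swarm.key root@192.168.10.3:~/.ipfs/swarm.key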

Step 4: Bootstrap the IPFS Nodes

First of all, remove the default bootstrap node entries on all the nodes you have created:

ipfs bootstrap rm --all

Add the address of your bootstrap node to each of the nodes, including the bootstrap node itself:

ipfs bootstrap add /ip4/192.168.10.1/tcp/4001/ipfs/QmQVvZEmvjhYgsyEC7NvMn8EWf131EcgTXFFJQYGSz4Y83

Change the IP part (192.168.10.1) to your Node0 machine's IP. The last part is the peer ID, which is generated when you initialise your peer (ipfs init). You can see it above where it shows “peer identity:

QmQVvZEmvjhYgsyEC7NvMn8EWf131EcgTXFFJQYGSz4Y83

or if you run the *ipfs id* command in the console. So, you need to change the IP and peer ID according to your Node0. Do this for all of your nodes.

We also need to set the environment variable “LIBP2P_FORCE_PNET” to force our network into private mode:

export LIBP2P_FORCE_PNET=1
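An exported variable only lives for the current session; to persist it across logins, use the same .bashrc approach as in Step 1:

echo 'export LIBP2P_FORCE_PNET=1' >> ~/.bashrc
source ~/.bashrc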
   

Configuring IP for communication

Inside the .ipfs folder there is a “config” file. It contains many settings, including the network addresses our IPFS nodes will work on. Open this config file and find “Addresses”. It will look like this:

1 "Addresses"``: {
   
2 "API"``: ``"/ip4/192.168.10.1/tcp/5001"``,
   
3 "Announce"``: [],
   
4 "Gateway"``: ``"/ip4/192.168.10.1/tcp/8080"``,
   
5 "NoAnnounce"``: [],
   
6 "Swarm"``: [
   
7 "/ip4/0.0.0.0/tcp/4001"``,
   
8 "/ip6/::/tcp/4001"
   
9 ]
   
10 },
   

The IP mentioned in API is the one IPFS binds to for communication. By default it is localhost (127.0.0.1), so to enable our nodes to “see” each other we need to set this parameter to each node's own IP. The Gateway parameter is for access from a browser.
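Instead of editing the file by hand, you can also set these fields with the ipfs config command; shown here for Node1 from our example, adjust the IP per node:

ipfs config Addresses.API /ip4/192.168.10.2/tcp/5001
ipfs config Addresses.Gateway /ip4/192.168.10.2/tcp/8080
# restart the daemon afterwards for the changes to take effect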

Step 5: Start the Nodes and Test

We are done with all the configuration; now it is time to start all the nodes and see whether everything went well and they have joined the private network. Run the IPFS daemon on all of your nodes:

ipfs daemon

Now let’s add the file from one of the nodes and try to access it from another.

mkdir test-files
echo "hello IPFS" > file.txt
ipfs add file.txt

Take the printed hash and try to cat the file from another node:

ipfs cat QmZULkCELmmk5XNfCgTnCyFgAVxBRBXyDHGGMVoLFLiXEN

You should see the contents of the file added on the first node. To check and be sure that we have a private network, we can try to access our file by its CID from a public IPFS gateway. You can choose one of the public gateways from this list: https://ipfs.github.io/public-gateway-checker.

If you did everything right, the file won't be accessible. Also, you can run the *ipfs swarm peers* command, and it will display a list of the peers in the network it's connected to. In our example, each peer sees two others.
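One way to check this from the command line (a sketch; ipfs.io is one of the public gateways, and the request should time out because the CID exists only inside the private network):

curl -m 30 https://ipfs.io/ipfs/QmZULkCELmmk5XNfCgTnCyFgAVxBRBXyDHGGMVoLFLiXEN
ipfs swarm peers    # on any node: should list exactly the two other peers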

Step 6: Run the IPFS Daemon as a Service

For the IPFS daemon to keep running even after we exit the console session, we will create a systemd service. Before we do so, stop/kill your IPFS daemon. Create a file for the new service:

sudo nano /etc/systemd/system/ipfs.service

And add to it the following settings:

[Unit]
Description=IPFS Daemon
After=syslog.target network.target remote-fs.target nss-lookup.target

[Service]
Type=simple
ExecStart=/usr/local/bin/ipfs daemon --enable-namesys-pubsub
User=root

[Install]
WantedBy=multi-user.target

Save and close the file. Apply the new service.

sudo systemctl daemon-reload
sudo systemctl enable ipfs
sudo systemctl start ipfs
sudo systemctl status ipfs

Reboot your system and check that the IPFS daemon is active and running; then you can again try to add a file on one node and access it from another.
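A quick way to confirm the service survived the reboot (assuming the unit name ipfs used above):

systemctl is-active ipfs              # should print "active"
journalctl -u ipfs -n 20 --no-pager   # recent daemon log lines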

We have completed the part about creating a private IPFS network and running its daemons as services. At this phase, you should have three IPFS nodes organized into one private network. Now let's create our IPFS-Cluster on top of it for data replication.

Deploying IPFS-Cluster


Now that we have a private IPFS network, we can deploy IPFS-Cluster on top of it for automated data replication and better management of our data.

There are two ways to organize an IPFS cluster: the first is to set a fixed peerset (you will not be able to grow the cluster with more peers after creation); the other is to bootstrap nodes (you can add new peers after the cluster is created). We use the second approach.

IPFS-Cluster includes two components:

  • ipfs-cluster-service, mostly to initialize a cluster peer and run its daemon
  • ipfs-cluster-ctl, for managing nodes and data across the cluster

Step 1: Install IPFS-Cluster

There are many ways to install IPFS-Cluster. In this manual, we use the install-from-source method. You can see all the available methods here.

Run the following commands in your terminal to install the IPFS-Cluster components:

git clone https://github.com/ipfs/ipfs-cluster.git $GOPATH/src/github.com/ipfs/ipfs-cluster
cd $GOPATH/src/github.com/ipfs/ipfs-cluster
make install

Check that the installation succeeded by running:

ipfs-cluster-service --version
ipfs-cluster-ctl --version

Repeat this step for all of your nodes.

Step 2: Generate and Set the CLUSTER_SECRET Variable

Now we need to generate CLUSTER_SECRET and set it as an environment variable for all peers participating in the cluster. Sharing the same CLUSTER_SECRET lets peers understand that they are part of one IPFS-Cluster. We will generate this key on the zero node and then copy it to all other nodes. On your first node, run the following commands:

export CLUSTER_SECRET=$(od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n')
echo $CLUSTER_SECRET

You should see something like this:

9a420ec947512b8836d8eb46e1c56fdb746ab8a78015b9821e6b46b38344038f

For CLUSTER_SECRET not to disappear after you exit the console session, you must add it as a permanent environment variable to the .bashrc file. Copy the key printed by the echo command and add it to the end of the .bashrc file on all of your nodes.

It should look like this:

export CLUSTER_SECRET=9a420ec947512b8836d8eb46e1c56fdb746ab8a78015b9821e6b46b38344038f

And don’t forget to reload your .bashrc file:

source ~/.bashrc
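If you have SSH access between the machines, one way to propagate the secret from Node0 is a small loop, sketched here under the assumption of root logins and the sample IPs from above:

for ip in 192.168.10.2 192.168.10.3; do
  ssh root@$ip "echo 'export CLUSTER_SECRET=$CLUSTER_SECRET' >> ~/.bashrc"
done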
   

Step 3: Initialize and Start the Cluster

After we have installed the IPFS-Cluster service and set a CLUSTER_SECRET environment variable, we are ready to initialize and start the first cluster peer (Node0).

Note: make sure that your IPFS daemon is running before you start the ipfs-cluster-service daemon. To initialize the cluster peer, run:

ipfs-cluster-service init

To start the cluster peer, run:

ipfs-cluster-service daemon

You should see output like this in the console:

INFO cluster: IPFS Cluster is ready cluster.go:461

Now open a new console window and connect to your second VM (node1). Note: make sure that your IPFS daemon is running before you start the ipfs-cluster-service daemon.

You need to install the IPFS-Cluster components and set the CLUSTER_SECRET environment variable (copied from node0), as we did for our first node. Run the following commands to initialise IPFS-Cluster and bootstrap it to node0:

ipfs-cluster-service init
ipfs-cluster-service daemon --bootstrap /ip4/192.168.10.1/tcp/9096/ipfs/QmZjSoXUQgJ9tutP1rXjjNYwTrRM9QPhmD9GHVjbtgWxEn

Change the IP part (192.168.10.1) to your Node0 machine's IP. The last part is the cluster peer ID, which is generated when you initialise your cluster peer (ipfs-cluster-service init). Bear in mind that it should be the IPFS-Cluster peer ID, not an IPFS peer ID.

You can run the *ipfs-cluster-service id* command in the console to get it. You need to change the IP and cluster peer ID according to your Node0. Do this for all of your nodes. To check that we have two peers in our cluster, run:

ipfs-cluster-ctl peers ls

And you should see the list of cluster peers:

node1> ipfs-cluster-ctl peers ls
QmYFYwnFUkjFhJcSJJGN72wwedZnpQQ4aNpAtPZt8g5fCd | Sees 1 other peers
  Addresses:
    - /ip4/127.0.0.1/tcp/10096/ipfs/QmYFYwnFUkjFhJcSJJGN72wwedZnpQQ4aNpAtPZt8g5fCd
    - /ip4/192.168.1.3/tcp/10096/ipfs/QmYFYwnFUkjFhJcSJJGN72wwedZnpQQ4aNpAtPZt8g5fCd
  IPFS: Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
    - /ip4/127.0.0.1/tcp/4001/ipfs/Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
    - /ip4/192.168.1.3/tcp/4001/ipfs/Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
QmZjSoXUQgJ9tutP1rXjjNYwTrRM9QPhmD9GHVjbtgWxEn | Sees 1 other peers
  Addresses:
    - /ip4/127.0.0.1/tcp/9096/ipfs/QmZjSoXUQgJ9tutP1rXjjNYwTrRM9QPhmD9GHVjbtgWxEn
    - /ip4/192.168.1.2/tcp/9096/ipfs/QmZjSoXUQgJ9tutP1rXjjNYwTrRM9QPhmD9GHVjbtgWxEn
  IPFS: Qmbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
    - /ip4/127.0.0.1/tcp/4001/ipfs/Qmbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
    - /ip4/192.168.1.2/tcp/4001/ipfs/Qmbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb

Repeat this step for the third node and any other nodes you want to join to the cluster.

Step 4: Run the IPFS-Cluster Daemon as a Service

For the IPFS-Cluster daemon to keep running even after we close the console session, we will create a systemd service for it. Run the following command to create a file for the IPFS-Cluster system service:

sudo nano /etc/systemd/system/ipfs-cluster.service

And insert into it:

[Unit]
Description=IPFS-Cluster Daemon
Requires=ipfs.service
After=syslog.target network.target remote-fs.target nss-lookup.target ipfs.service

[Service]
Type=simple
ExecStart=/home/ubuntu/gopath/bin/ipfs-cluster-service daemon
User=root

[Install]
WantedBy=multi-user.target

Apply the new service and run it:

sudo systemctl daemon-reload
sudo systemctl enable ipfs-cluster
sudo systemctl start ipfs-cluster
sudo systemctl status ipfs-cluster

Reboot your machine and check that both IPFS and IPFS-Cluster services are running.

Step 5: Test IPFS-Cluster and Data Replication

To test data replication, create a file and add it to the cluster:

echo "hello cluster" > myfile.txt
ipfs-cluster-ctl add myfile.txt

Take the CID of the recently added file and check its status:

ipfs-cluster-ctl status CID

You should see that this file has been PINNED on all cluster nodes.
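By default, IPFS-Cluster pins everything on every peer. If you would rather bound the replication factor, here is a hedged sketch: the replication_factor_min / replication_factor_max keys in the cluster section of ~/.ipfs-cluster/service.json and the flags below are taken from the ipfs-cluster documentation, so verify them against your installed version:

ipfs-cluster-ctl pin add --replication-min 2 --replication-max 2 CID
ipfs-cluster-ctl status CID    # shows exactly which peers hold the pin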

Summary


Are you wondering how you can apply this IPFS tutorial to support your real-life needs? This article describes how we started with an internal PoC and ended up with a real prototype allowing us to share files on the blockchain with IPFS securely.

If you have any questions regarding IPFS networks and their potential use for data replication and secure data sharing, don’t hesitate to get in touch!
