
Developing with the Hadoop HDFS Java API

The steps for developing against the Hadoop HDFS Java API are as follows.

I. Setting up the Hadoop Eclipse plugin

Reference: https://www.programmersought.com/article/26674946880/
1. Download hadoop-3.3.1.tar.gz
https://hadoop.apache.org/
https://hadoop.apache.org/docs/stable/

2. Download apache-ant-1.10.11-bin.tar.gz
https://ant.apache.org/bindownload.cgi

3. Download eclipse-jee-indigo-SR2-win32-x86_64.zip
https://www.eclipse.org/downloads/packages/release/indigo/sr2

4. Extract the archives and set environment variables
1) Extract the following files

hadoop-3.3.1.tar.gz, apache-ant-1.10.11-bin.tar.gz, and eclipse-jee-indigo-SR2-win32-x86_64.zip

2) Set the environment variables

HADOOP_HOME=D:\04_Source\tool\hadoop-3.3.1
ANT_HOME=D:\04_Source\tool\apache-ant-1.10.11
Add %ANT_HOME%\bin and %HADOOP_HOME%\bin to PATH
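
A quick sanity check after setting the variables (run in a new terminal; on Windows, hadoop version may additionally need winutils.exe and JAVA_HOME configured):

ant -version
hadoop version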

5. Download the eclipse-hadoop3x project and adjust its settings
1) Download the eclipse-hadoop3x project from GitHub

https://github.com/Woooosz/eclipse-hadoop3x

2) In ivy/libraries.properties, change

hadoop.version=2.6.0
commons-lang.version=2.6
slf4j-api.version=1.7.25
slf4j-log4j12.version=1.7.25
guava.version=11.0.2
netty.version=3.10.5.Final

to the versions matching the Hadoop release

hadoop.version=3.3.1
commons-lang.version=3.7
slf4j-api.version=1.7.30
slf4j-log4j12.version=1.7.30
guava.version=27.0-jre
netty.version=3.10.6.Final

3) Edit src\contrib\eclipse-plugin\build.xml
a. Change

<target name="compile" depends="init, ivy-retrieve-common" unless="skip.contrib">

to

<target name="compile" unless="skip.contrib">

b. Below this line

<copy file="${hadoop.home}/libexec/share/hadoop/common/lib/htrace-core4-${htrace.version}.jar" todir="${build.dir}/lib" verbose="true"/>

add

<copy file="${hadoop.home}/libexec/share/hadoop/common/lib/woodstox-core-5.0.3.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/libexec/share/hadoop/common/lib/stax2-api-3.1.4.jar" todir="${build.dir}/lib" verbose="true"/>

c. Change

<fileset dir="${hadoop.home}/libexec/share/hadoop/mapreduce">
<fileset dir="${hadoop.home}/libexec/share/hadoop/hdfs">
<fileset dir="${hadoop.home}/libexec/share/hadoop/common">
...
<fileset dir="${hadoop.home}/libexec/share/hadoop/mapreduce">
<fileset dir="${hadoop.home}/libexec/share/hadoop/common">
<fileset dir="${hadoop.home}/libexec/share/hadoop/hdfs">
<fileset dir="${hadoop.home}/libexec/share/hadoop/yarn">
...
<copy file="${hadoop.home}/libexec/share/hadoop/common/lib/protobuf-java-${protobuf.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/libexec/share/hadoop/common/lib/log4j-${log4j.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/libexec/share/hadoop/common/lib/commons-cli-${commons-cli.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/libexec/share/hadoop/common/lib/commons-configuration2-${commons-configuration.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/libexec/share/hadoop/common/lib/commons-lang-${commons-lang.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/libexec/share/hadoop/common/lib/commons-collections-${commons-collections.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/libexec/share/hadoop/common/lib/jackson-core-asl-${jackson.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/libexec/share/hadoop/common/lib/jackson-mapper-asl-${jackson.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/libexec/share/hadoop/common/lib/slf4j-log4j12-${slf4j-log4j12.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/libexec/share/hadoop/common/lib/slf4j-api-${slf4j-api.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/libexec/share/hadoop/common/lib/guava-${guava.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/libexec/share/hadoop/common/lib/hadoop-auth-${hadoop.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/libexec/share/hadoop/common/lib/commons-cli-${commons-cli.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/libexec/share/hadoop/common/lib/netty-${netty.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/libexec/share/hadoop/common/lib/htrace-core4-${htrace.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/libexec/share/hadoop/common/lib/woodstox-core-5.0.3.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/libexec/share/hadoop/common/lib/stax2-api-3.1.4.jar" todir="${build.dir}/lib" verbose="true"/>
...
lib/woodstox-core-5.0.3.jar,
lib/stax2-api-3.1.4.jar,

to the corresponding actual paths in the Hadoop distribution

<fileset dir="${hadoop.home}/share/hadoop/mapreduce">
<fileset dir="${hadoop.home}/share/hadoop/hdfs">
<fileset dir="${hadoop.home}/share/hadoop/common">
...
<fileset dir="${hadoop.home}/share/hadoop/mapreduce">
<fileset dir="${hadoop.home}/share/hadoop/common">
<fileset dir="${hadoop.home}/share/hadoop/hdfs">
<fileset dir="${hadoop.home}/share/hadoop/yarn">
...
<copy file="${hadoop.home}/share/hadoop/common/lib/protobuf-java-${protobuf.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/share/hadoop/common/lib/log4j-${log4j.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/share/hadoop/common/lib/commons-cli-${commons-cli.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/share/hadoop/common/lib/commons-configuration2-${commons-configuration.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/share/hadoop/common/lib/commons-lang3-${commons-lang.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/share/hadoop/common/lib/commons-collections-${commons-collections.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/share/hadoop/common/lib/jackson-core-asl-${jackson.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/share/hadoop/common/lib/jackson-mapper-asl-${jackson.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/share/hadoop/common/lib/slf4j-log4j12-${slf4j-log4j12.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/share/hadoop/common/lib/slf4j-api-${slf4j-api.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/share/hadoop/common/lib/guava-${guava.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/share/hadoop/common/lib/hadoop-auth-${hadoop.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/share/hadoop/common/lib/commons-cli-${commons-cli.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/share/hadoop/common/lib/netty-${netty.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/share/hadoop/common/lib/htrace-core4-${htrace.version}.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/share/hadoop/common/lib/woodstox-core-5.3.0.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.home}/share/hadoop/common/lib/stax2-api-4.2.1.jar" todir="${build.dir}/lib" verbose="true"/>
...
lib/woodstox-core-5.3.0.jar,
lib/stax2-api-4.2.1.jar,

4) Also in src\contrib\eclipse-plugin\build.xml, change

<javac
encoding="${build.encoding}"
srcdir="${src.dir}"
includes="**/*.java"
destdir="${build.classes}"
debug="${javac.debug}"
deprecation="${javac.deprecation}">

to

<javac
encoding="${build.encoding}"
srcdir="${src.dir}"
includes="**/*.java"
destdir="${build.classes}"
debug="${javac.debug}"
deprecation="${javac.deprecation}"
includeantruntime="false"
>

6. Create the directory

eclipse-hadoop3x\build\contrib\eclipse-plugin\classes

7. Build the eclipse-hadoop3x project
Switch to the eclipse-hadoop3x\src\contrib\eclipse-plugin directory
and run the following command

ant jar -Dversion=3.3.1 -Declipse.home=D:\Tool\eclipse\eclipse-indigo -Dhadoop.home=D:\04_Source\tool\hadoop-3.3.1

8. Use the Hadoop plugin
1) Put hadoop-eclipse-plugin-3.3.1.jar into the Eclipse dropins directory
2) Restart Eclipse
3) In Eclipse
-> Window -> Open Perspective -> Other… -> Map/Reduce
-> New Hadoop location… -> in theory a dialog should open to configure the Hadoop connection, but it failed here… this part still needs to be confirmed

II. Developing with the HDFS Java API

1. Complete the Hadoop installation first – see

Installing Hadoop 3.3.1 on Ubuntu 20.04

2. Make sure the hostname of the Hadoop machine installed in the previous post resolves to the same IP on the test machine.
Map it in /etc/hosts to an IP the program can reach (test with telnet ubuntu-VirtualBox 9000):

192.168.208.3 ubuntu-VirtualBox

3. Build the Eclipse project; the program is as follows:

package com.tssco.hadoop;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.io.IOUtils;

public class HDFSAPITest {
 public static void main(String[] args) throws IOException, InterruptedException, URISyntaxException {
   Configuration conf = new Configuration();
   FileSystem fs = FileSystem.get(new URI("hdfs://ubuntu-VirtualBox:9000"), conf, "root");
   // Create a directory
   Path path = new Path("/data");
   boolean exists = fs.exists(path);
   System.out.println("1. Does the path /data exist: " + exists);
   if (!exists) {
      fs.mkdirs(path);
   }

   // List the files
   RemoteIterator<LocatedFileStatus> listFiles = fs.listFiles(path, false);
   System.out.println("2. File list");
   while (listFiles.hasNext()) {
      LocatedFileStatus next = listFiles.next();
      System.out.println(next.getPath());
      System.out.println(next.getReplication());
      BlockLocation[] blockLocations = next.getBlockLocations();
      for (BlockLocation bl : blockLocations) {
         System.out.println("\tblock location: " + bl + ", size: " + bl.getLength());
      }
   }

   System.out.println("3. File status");
   FileStatus[] listStatus = fs.listStatus(new Path("/"));
   for (FileStatus fst : listStatus) {
      System.out.println("****** " + fst + " ******");
      System.out.println("\t\tis directory: " + fst.isDirectory());
      System.out.println("\t\tis file: " + fst.isFile());
      System.out.println("\t\tblock size: " + fst.getBlockSize());
   }

   // Upload a file to /data
   System.out.println("4. Upload file");
   FileInputStream in = new FileInputStream(new File("D:\\04_Source\\test2.txt"));
   FSDataOutputStream out = fs.create(new Path("/data/test2.txt"));
   IOUtils.copyBytes(in, out, 4096, true); // true closes both streams when done

   // Download the file
   System.out.println("5. Download file");
   FSDataInputStream fsin = fs.open(new Path("/data/test2.txt"));
   FileOutputStream fsout = new FileOutputStream(new File("D:\\test2.txt"));
   IOUtils.copyBytes(fsin, fsout, 4096, true); // true closes both streams when done

   fs.close();
 }
}

4. Copy the jar files from the four subdirectories common, hdfs, mapreduce, and yarn under /usr/local/hadoop/share/hadoop into the project's jar directory

5. Execution results
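
After running it, the upload can be double-checked from the Hadoop host with the FileSystem shell (a quick check, using the paths from the program above):

hadoop fs -ls /data
hadoop fs -cat /data/test2.txt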


Installing Hadoop 3.3.1 on Ubuntu 20.04

Detailed steps for installing Hadoop 3.3.1 on Ubuntu 20.04:

1. Download hadoop-3.3.1.tar.gz
https://hadoop.apache.org/
https://hadoop.apache.org/docs/stable/

2. Install Hadoop

tar xvf hadoop-3.3.1.tar.gz
sudo mv hadoop-3.3.1 /usr/local
sudo mv /usr/local/hadoop-3.3.1 /usr/local/hadoop

3. Install JDK 8 or 9

tar xvf jdk-8u291-linux-x64.tar.gz
sudo mv jdk1.8.0_291 /usr/local
sudo mv /usr/local/jdk1.8.0_291 /usr/local/jdk
-- the following command is not recommended
sudo apt install openjdk-8-jdk-headless

4. Adjust the configuration files
1) sudo vi /etc/profile
Add

export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
alias hadoopdir="cd /usr/local/hadoop"

Then run the following command

source /etc/profile

2) vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh
Add

JAVA_HOME=/usr/local/jdk

3) vi /usr/local/hadoop/etc/hadoop/core-site.xml
Add

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://ubuntu-VirtualBox:9000</value>
<description>NameNode_URI</description>
</property>
</configuration>

4) vi /usr/local/hadoop/etc/hadoop/hdfs-site.xml
Add

<configuration>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///usr/local/hadoop/data/datanode</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///usr/local/hadoop/data/namenode</value>
</property>
<property>
<name>dfs.namenode.http-address</name>
<value>ubuntu-VirtualBox:50070</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>ubuntu-VirtualBox:50090</value>
</property> 
</configuration>

5) vi /usr/local/hadoop/etc/hadoop/yarn-site.xml
Add

<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>ubuntu-VirtualBox:8025</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>ubuntu-VirtualBox:8030</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>ubuntu-VirtualBox:8050</value>
</property>
</configuration>

5. Format HDFS
Run as root

hadoop namenode -format

If it fails, reset and re-run like this

stop-all.sh
cd /usr/local/hadoop
rm -rf data/ logs/
hadoop namenode -format

6. Start Hadoop
start-all.sh

The following errors appeared

root@ubuntu-VirtualBox:/home/ubuntu# start-all.sh
Starting namenodes on [ubuntu-VirtualBox]
ubuntu-VirtualBox: Warning: Permanently added 'ubuntu-virtualbox,10.0.2.15' (ECDSA) to the list of known hosts.
ubuntu-VirtualBox: root@ubuntu-virtualbox: Permission denied (publickey,password).
Starting datanodes
localhost: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
localhost: root@localhost: Permission denied (publickey,password).
Starting secondary namenodes [ubuntu-VirtualBox]
ubuntu-VirtualBox: root@ubuntu-virtualbox: Permission denied (publickey,password).
Starting resourcemanager
Starting nodemanagers
localhost: root@localhost: Permission denied (publickey,password).

Fix: set up passwordless SSH login

cd /root/.ssh
rm -rf *
ssh-keygen -t rsa
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
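
Before restarting Hadoop, it is worth confirming that passwordless login actually works (a quick check, hostnames as above):

ssh localhost
ssh ubuntu-VirtualBox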

References:
https://codertw.com/%E5%89%8D%E7%AB%AF%E9%96%8B%E7%99%BC/393790/
https://blog.csdn.net/qq_44166946/article/details/109808363

7. Disable the firewall
systemctl stop ufw
systemctl disable ufw

8. Stop Hadoop
stop-all.sh

9. Check the Hadoop processes
jps
The following Java processes should appear

29218 DataNode
29475 SecondaryNameNode
29687 ResourceManager
33707 Jps
29900 NodeManager
29020 NameNode

10. List HDFS files
hadoop fs -ls /

11. For HDFS (distributed file system) shell commands, see
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/FileSystemShell.html
URI: hdfs://namenode:namenodePort/parent/child, or on the cluster itself just /parent/child
(assuming the configuration specifies namenode:namenodePort)
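
A few commonly used FileSystem shell commands as examples (the paths are illustrative):

hadoop fs -mkdir /data
hadoop fs -put test2.txt /data/
hadoop fs -get /data/test2.txt ./test2_copy.txt
hadoop fs -rm /data/test2.txt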


Installing GitLab and Jenkins in Docker on Ubuntu

These are my notes on installing GitLab and Jenkins in Docker on Ubuntu; refer to them if interested.

1. Install GitLab

sudo mkdir -p /docker_data
sudo docker network create mycicd
docker network ls
docker run -d --name gitlab \
--hostname host.example.com \
-p 8088:8088 \
-p 8002:22 \
--network mycicd \
--privileged --restart always \
-v /docker_data/gitlab/data:/var/opt/gitlab \
-v /docker_data/gitlab/config:/etc/gitlab \
-v /docker_data/gitlab/logs:/var/log/gitlab \
gitlab/gitlab-ce

2. Modify the GitLab configuration

docker exec -it gitlab bash
vi /etc/gitlab/gitlab.rb
# external_url 'GENERATED_EXTERNAL_URL'
external_url 'http://host.example.com:8088'

# nginx['listen_port'] = nil
nginx['listen_port'] = 8088

# gitlab_rails['gitlab_shell_ssh_port'] = 22
gitlab_rails['gitlab_shell_ssh_port'] = 8002
docker restart gitlab
docker ps |grep gitlab
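
Note: instead of restarting the whole container, the new settings can also be applied from inside it (an alternative, assuming the standard omnibus image):

docker exec -it gitlab gitlab-ctl reconfigure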

Note: for another machine to connect, add a host.example.com-to-IP mapping in its hosts file.

3. According to posts online, http://host.example.com:8088 can be logged into with the root password,
but I could not get in, so in the end I went into the gitlab container and changed the root password directly:
docker exec -it gitlab bash
Once inside the container, run
gitlab-rails console -e production
to enter the Rails console:

user = User.where(id: 1).first
user.password = '88888888'
user.password_confirmation = '88888888'
user.save
exit

4. Install Jenkins
(1) Set up the directory and file permissions
mkdir -p /docker_data/jenkins/
chown -R 1000:1000 /docker_data/jenkins/
chmod 655 /root/.kube/config

(2)vi Dockerfile

FROM jenkins/jenkins:lts
USER root
RUN apt-get update \
&& apt-get install -y apt-utils \
&& apt-get install -y sudo \
&& apt-get install -y libltdl7 \
&& rm -rf /var/lib/apt/lists/*
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
USER jenkins

Build the image:

docker build -t jenkins:sudo .

Reference: https://hub.docker.com/r/jenkins/jenkins

(3) Start Jenkins
vi jenkins.sh

docker run \
--name jenkins \
-d --restart always \
-p 8080:8080 -p 50000:50000 \
--network mycicd \
-v /etc/localtime:/etc/localtime:ro \
-v /docker_data/jenkins:/var/jenkins_home \
-v $(which docker):/usr/bin/docker \
-v $(which kubectl):/usr/bin/kubectl \
-v /root/.kube:/var/jenkins_home/.kube \
-v /var/run/docker.sock:/var/run/docker.sock \
-e JAVA_OPTS=-Duser.timezone=Asia/Taipei \
jenkins:sudo

Run the following to start Jenkins

chmod +x jenkins.sh
./jenkins.sh

Check the startup result

docker exec -it jenkins bash
sudo docker ps
kubectl get nodes
kubectl run nginx --image=nginx

Open http://host.example.com:8080/ in a browser.
The site's initial password is in this file:
cat /docker_data/jenkins/secrets/initialAdminPassword
Then set up the account, e.g.
it / 8888

Problem encountered:
jenkins@0665230a6768:/$ docker ps
docker: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by docker)
=> The approaches found online have not solved this yet, so it is skipped for now
https://www.cnblogs.com/kevingrace/p/8744417.html
https://blog.csdn.net/wangying202/article/details/113178159

1. Install make and gcc
sudo apt-get update
sudo apt-get install make
sudo apt install build-essential
sudo apt-get install python3
sudo apt-get install python
sudo apt-get install gawk
sudo apt-get install bison

2. The following commands did not work

sudo wget http://ftp.gnu.org/gnu/glibc/glibc-2.32.tar.gz
sudo tar -xvf glibc-2.32.tar.gz
cd glibc-2.32
sudo mkdir build
cd build
sudo ../configure --prefix=/usr --disable-profile --enable-add-ons --with-headers=/usr/include --with-binutils=/usr/bin
sudo make -j 8

Reference: https://github.com/jenkinsci/docker/blob/master/README.md


Self-hosting a Harbor Registry and using it as the k8s image source

The steps for self-hosting a Harbor Registry and setting it as the image source for k8s are as follows.

1. Install docker-compose
sudo apt install docker-compose

2. Download Harbor

wget https://github.com/goharbor/harbor/releases/download/v2.2.3/harbor-offline-installer-v2.2.3.tgz
tar xvf harbor-offline-installer-v2.2.3.tgz

3. Install Harbor on Docker

1) Create the directory

mkdir -p /docker_data
mv harbor /docker_data

cd /docker_data/harbor
tree .

2) Edit the installer configuration

cp harbor.yml.tmpl harbor.yml
vi harbor.yml

# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: 192.168.56.3

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 81

# https related config
#https:
  # https port for harbor, default is 443
  #port: 443
  # The path of cert and key files for nginx
  #certificate: /your/certificate/path
  #private_key: /your/private/key/path

3) Prepare the configuration

./prepare --help
./prepare --with-trivy --with-chartmuseum

4) Run the installation

./install.sh

4. Open the Harbor web UI

The account is "admin"; the default password is "Harbor12345".
Create a project named testproj.

5. Configure Docker for the registry

1) Since the registry does not use SSL, the Docker daemon must be configured with insecure-registries before images can be pushed.

Edit /etc/docker/daemon.json and add insecure-registries:

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "insecure-registries": ["192.168.56.3:81"]
}

2) Restart the Docker service

systemctl daemon-reload
systemctl restart docker
docker-compose stop
docker-compose up -d

3) Confirm the registry address

docker info |grep -A1 Insecure

6. Log in to the image registry

docker login -u admin 192.168.56.3:81
The default password is "Harbor12345".

7. Push an image to the registry

docker pull nginx
docker tag nginx:latest 192.168.56.3:81/testproj/nginx:v1
docker push 192.168.56.3:81/testproj/nginx:v1

8. Create a k8s cluster pod from the self-hosted registry

1) kubectl create deployment nginx --image=192.168.56.3:81/testproj/nginx:v1

2) Confirm the pod was pulled from the self-hosted registry

kubectl describe pod/nginx-65979d9ddb-xmmgg

9. Due to a bug, the Harbor services do not start properly at boot; this can be handled with a systemd service

cd /etc/systemd/system

vi harbor.service

[Unit]
Description=Harbor
After=docker.service systemd-networkd.service systemd-resolved.service
Requires=docker.service
Documentation=http://github.com/vmware/harbor
[Service]
Type=simple
Restart=on-failure
RestartSec=5
ExecStart=/usr/bin/docker-compose -f /docker_data/harbor/docker-compose.yml up
ExecStop=/usr/bin/docker-compose -f /docker_data/harbor/docker-compose.yml down
[Install]
WantedBy=multi-user.target

systemctl daemon-reload; systemctl enable harbor.service
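
Then start the service and confirm it is running (assuming the unit file above):

systemctl start harbor.service
systemctl status harbor.service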


Installing WordPress on Kubernetes (k8s)

Installing WordPress on Kubernetes (k8s), with the database and related files persisted to host paths.
The steps are below.

1. Set the DB environment variables

1) vi mydb_env

MYSQL_ROOT_PASSWORD=Redhat1!
TZ="Asia/Taipei"

2) Create the configmap

kubectl create cm mydb-env --from-env-file=mydb_env

3) Check the configmap

kubectl describe cm mydb-env

2. Set the DB password as a secret

1) Create the secret

kubectl create secret generic mydb-pwd --from-literal=MYSQL_ROOT_PASSWORD=Redhat1!

2) Check the secret

kubectl describe secret mydb-pwd
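
To verify what was stored, the value can be decoded back out of the secret (a quick check; the jsonpath expression is standard kubectl):

kubectl get secret mydb-pwd -o jsonpath='{.data.MYSQL_ROOT_PASSWORD}' | base64 -d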

3. Create the mydb deployment; it fails to start because no DB password is specified yet

kubectl create deployment mydb --image=mariadb --port 3306

4. Set the environment variables

1) Save the mydb deployment definition to a file

kubectl get deployments mydb -o yaml > deployment_mydb.yaml

2) Set the environment variables

vi deployment_mydb.yaml

      containers:
      - image: mariadb
        imagePullPolicy: Always
        name: mariadb
        ports:
        - containerPort: 3306
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              key: MYSQL_ROOT_PASSWORD
              name: mydb-pwd
        - name: TZ
          valueFrom:
            configMapKeyRef:
              key: TZ
              name: mydb-env

3) Recreate the deployment

kubectl delete deployment mydb
kubectl apply -f deployment_mydb.yaml

Note: you can also edit in place with kubectl edit deployments mydb

5. Put the database on the host server

1) On the host server create /data/db (mkdir /data/db)

2) Set hostPath: /data/db

   vi deployment_mydb.yaml

      containers:
      - args:
        - --character-set-server=utf8mb4
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              key: MYSQL_ROOT_PASSWORD
              name: mydb-pwd
        - name: TZ
          valueFrom:
            configMapKeyRef:
              key: TZ
              name: mydb-env
        image: mariadb
        imagePullPolicy: Always
        name: mariadb
        ports:
        - containerPort: 3306
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
          - mountPath: /var/lib/mysql
            name: mydb-data
      volumes:
      - name: mydb-data
        hostPath:
          path: /data/db
          type: Directory

3) Recreate the deployment

kubectl delete deployment mydb
kubectl apply -f deployment_mydb.yaml

6. Enter the pod and create the database wp

kubectl exec -it pod/mydb-5b9dbbbf54-6k8wl -- bash
mysql -uroot -p
create database wp;

7. Create the mydb service (optional)

1) kubectl expose deployment mydb --port=3306
2) kubectl get svc => defaults to ClusterIP
3) kubectl edit svc mydb => change the type from ClusterIP to NodePort

8. Set the WordPress environment variables

1) vi wordpress_env

WORDPRESS_DB_NAME=wp
WORDPRESS_DB_USER=root
WORDPRESS_DB_HOST=mydb
WORDPRESS_DB_PASSWORD=Redhat1!
ServerName=localhost

2) Create the configmap

kubectl create cm wordpress-env --from-env-file=wordpress_env

kubectl get cm wordpress-env
kubectl describe configmaps wordpress-env

9. Create the WordPress application

1) Create the myweb deployment

kubectl create deployment myweb --image=wordpress

2) Set the environment variables

      containers:
      - image: wordpress
        imagePullPolicy: Always
        name: wordpress
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        env:
        - name: WORDPRESS_DB_NAME
          valueFrom:
            configMapKeyRef:
              key: WORDPRESS_DB_NAME
              name: wordpress-env
        - name: WORDPRESS_DB_USER
          valueFrom:
            configMapKeyRef:
              key: WORDPRESS_DB_USER
              name: wordpress-env
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            configMapKeyRef:
              key: WORDPRESS_DB_PASSWORD
              name: wordpress-env
        - name: ServerName
          valueFrom:
            configMapKeyRef:
              key: ServerName
              name: wordpress-env
        - name: WORDPRESS_DB_HOST
          valueFrom:
            configMapKeyRef:
              key: WORDPRESS_DB_HOST
              name: wordpress-env

10. Put the files on the host server

1) On the host server create /data/wordpress (mkdir /data/wordpress)

2) Set hostPath: /data/wordpress
kubectl edit deployment myweb

      containers:
      - env:
        - name: WORDPRESS_DB_NAME
          valueFrom:
            configMapKeyRef:
              key: WORDPRESS_DB_NAME
              name: wordpress-env
        - name: WORDPRESS_DB_USER
          valueFrom:
            configMapKeyRef:
              key: WORDPRESS_DB_USER
              name: wordpress-env
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            configMapKeyRef:
              key: WORDPRESS_DB_PASSWORD
              name: wordpress-env
        - name: ServerName
          valueFrom:
            configMapKeyRef:
              key: ServerName
              name: wordpress-env
        - name: WORDPRESS_DB_HOST
          valueFrom:
            configMapKeyRef:
              key: WORDPRESS_DB_HOST
              name: wordpress-env
        image: wordpress
        imagePullPolicy: Always
        name: wordpress
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/www/html
          name: myweb-data
      volumes:
      - name: myweb-data
        hostPath:
          path: /data/wordpress
          type: Directory

11. Create the WordPress service

kubectl expose deployment myweb --port=80
kubectl get svc => defaults to ClusterIP
kubectl edit svc myweb => change the type from ClusterIP to NodePort

12. View the WordPress site
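
To find where to browse, look up the NodePort assigned to the service (the actual port is assigned by k8s and will differ):

kubectl get svc myweb
# then open http://<node-ip>:<nodePort>/ in a browser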


An Overview of GitOps

There are several potential issues that GitOps is designed to address:
1. Manual deployments make problems hard to trace => with Kubernetes, yaml files can be kept and maintained in Git
2. Configuring pipelines through a GUI is convenient, but the true running state is hard to observe fully from the GUI
3. Access control is also a hard problem

The GitOps concepts are explained below.

I. Three core GitOps concepts

  • Audited change management of source code: who made which change, and when, can always be looked up
  • Declarative data definition of systems: e.g. Kubernetes yaml deployment files
  • Control-loop configuration management of systems: keep the current state matching the desired state

II. Four GitOps principles

  • The entire system described declaratively
  • The canonical desired system state versioned in Git: each version of the system state is also kept in Git for tracking
  • Approved changes that can be automatically applied to the system
  • Software agents to ensure correctness & alert on divergence: software agents ensure correctness and warn about drift

III. The main GitOps flow

  • The Git repository is split into two parts
    • Application source code => the Application Repo
    • Configuration files => the Config Repo
  • Versions no longer use the latest tag; a commit hash tag is used instead
  • The architecture below is only conceptual; it will be adapted to the actual use case

Reference: https://openpracticelibrary.com/practice/gitops/


An Introduction to DevOps

DevOps means Development & Operations working together. The main points:

I. DevOps culture
1. Deploy fast, get feedback fast
2. Test the waters

II. DevOps foundations
1. Source code management
2. Continuous integration (CI)
3. Continuous delivery (CD)
4. Monitoring & feedback
5. Rapid innovation

III. DevOps goals
1. Improved deployment frequency
2. Faster time to market
3. Lower failure rate of new releases
4. Shortened lead time between fixes
5. Faster mean time to recovery

IV. A feasible system architecture for DevOps built on Kubernetes (k8s)


Installing k8s (Kubernetes) v1.21.2 on Ubuntu 21.04

Having finally gotten past the firewall blocking described in the previous post, here is a record of the steps that made installing k8s (Kubernetes) v1.21.2 on Ubuntu 21.04 succeed.

Installing k8s (Kubernetes) v1.21.2 on Ubuntu 21.04: problems encountered and how they were handled

1. Install the packages apt-transport-https, ca-certificates, and curl

sudo apt-get update
Hit a site certificate problem, so the --allow... options were added
sudo apt-get update --allow-unauthenticated --allow-insecure-repositories
sudo apt-get install -y apt-transport-https ca-certificates curl

2. Install kubelet, kubeadm, and kubectl
1) Set up the key for the https://packages.cloud.google.com site

curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/kubernetes-archive-keyring.gpg > /dev/null

2) Configure the kubernetes apt source

echo "deb [arch=amd64 trusted=yes allow-insecure=yes allow-weak=yes allow-downgrade-to-insecure=yes check-valid-until=no] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
or
echo "deb [arch=amd64 trusted=yes allow-insecure=yes allow-weak=yes allow-downgrade-to-insecure=yes check-valid-until=no] https://packages.cloud.google.com/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Note: the following message appeared

E: Failed to fetch https://apt.kubernetes.io/dists/kubernetes-xenial/main/binary-amd64/Packages  Certificate verification failed: The certificate is NOT trusted. The certificate chain uses insecure algorithm. Could not handshake: Error in the certificate verification. [IP: 34.107.204.206 443]
E: Some index files failed to download. They have been ignored, or old ones used instead.

Fix: vi /etc/apt/apt.conf.d/99verify-peer.conf; if the value is true, change it to false

Acquire { https::Verify-Peer "false"; }

3. Swap is to be managed by k8s, so it must be turned off

swapoff -a
sed -e '/swap/ s/^#*/#/' -i /etc/fstab
free -m

4. Initialize the master to create the k8s cluster
1) The init command
sudo kubeadm init --pod-network-cidr 10.5.0.0/16 --v=5

Finally, the success screen:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.2.15:6443 --token xyovgm.p0z428rgi7flwi6a \
--discovery-token-ca-cert-hash sha256:8893a38d20d191ad63714b030926cb904bc468290710237d49d6d11f14a92b48

2) This part must be executed

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

3) Record this token; it is needed to join worker nodes

kubeadm join 10.0.2.15:6443 --token xyovgm.p0z428rgi7flwi6a \
--discovery-token-ca-cert-hash sha256:8893a38d20d191ad63714b030926cb904bc468290710237d49d6d11f14a92b48

5. Set up shell auto-completion

sudo apt-get install -y bash-completion
echo "source /etc/bash_completion" >> ~/.bashrc
echo "source <(kubectl completion bash)" >> ~/.bashrc
source <(kubectl completion bash)

6. If there are not enough machines, allow pods to schedule on the master

kubectl taint nodes --all node-role.kubernetes.io/master-

Result: node/ubuntu-virtualbox untainted

7. List the nodes
kubectl get nodes
kubectl get nodes -o wide
Result: (STATUS is NotReady)

8. Show node details
kubectl describe node ubuntu-virtualbox

9. Set up the overlay network with flannel
1) wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

2) vi kube-flannel.yml

Modify
net-conf.json: |
  {
    "Network": "10.5.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }

The Network value must match the pod-network-cidr given to kubeadm init

3) Create the overlay network
kubectl apply -f kube-flannel.yml
Result: (screenshot)

4) Check the pod status
kubectl get pods --all-namespaces
Result: (screenshot)

Note: the flannel image could not be downloaded because the company blocks quay.io, so this step stayed stuck with an image-pull error STATUS. The workaround was to pull the image on an external machine and bring it to the internal server:
On the external machine:

docker pull quay.io/coreos/flannel:v0.14.0-amd64
docker image save quay.io/coreos/flannel:v0.14.0-amd64 -o flannel.tar

-- if it is already running as a container, this also works
docker export --output="flannel.tar" flannel

On the internal server:

docker load -i flannel.tar

5) List the nodes
kubectl get nodes -o wide
Result: (STATUS is Ready)

10. Create an nginx pod

kubectl run nginx --image=nginx

Note: for Kubernetes 1.18 and later, use the following command instead

kubectl create deployment nginx-web --image=nginx

so that the deployment and replicaset are created together; check with kubectl get all

kubectl get pods -o wide
Result: (screenshot)

11. Change the service type to NodePort

kubectl get deployment -o yaml nginx > nginx.yaml
kubectl expose -f nginx.yaml --port=80 --type NodePort
or
kubectl edit svc nginx
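
Alternatively, the service type can be switched non-interactively with a patch (a sketch; the service name is taken from the expose step above):

kubectl patch svc nginx -p '{"spec":{"type":"NodePort"}}'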

12. Check the service's mapped port
kubectl get svc
Result: (screenshot)

13. Deploy an nginx deployment
Reference: https://kubernetes.io/zh/docs/tasks/run-application/run-stateless-application-deployment/

1) vi nginx_deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

2) Create the deployment
kubectl apply -f nginx_deployment.yaml

3) Check the deployment, pods, and svc
kubectl describe deployment nginx-deployment
kubectl get pods -o wide
kubectl get svc -o wide
Result: (screenshot)

4) List the Pods created by the deployment
kubectl get pods -l app=nginx
Result: (screenshot)

5) Describe a specific pod
kubectl describe pod <pod-name>

14. Scale the nginx deployment pods from 2 up to 3
1) Change replicas to 3 in nginx_deployment.yaml

2) Re-apply the deployment

kubectl apply -f nginx_deployment.yaml

3) Check the deployment's pod count: now 3
kubectl get pods -l app=nginx
Result: (screenshot)

15. Scale the nginx deployment pods from 3 down to 2
kubectl scale deployment nginx-deployment --replicas=2
Check the result
kubectl get pods -l app=nginx
Result: (screenshot)


Installing k8s (Kubernetes) v1.21.2 on Ubuntu 21.04: problems encountered and how they were handled

These are the problems I hit while installing k8s on Ubuntu, and my fixes,
including how I worked around the company firewall blocking some sites.
Since v1.21.2 still has unresolved issues, I will try downgrading and switching to CentOS,
so there will be new posts with further results. Stay tuned…

I. Method 1:

Inside the company network, installation hit the firewall blocking https://packages.cloud.google.com/apt/, so I eventually moved on to method 2 and then method 3.
1. Install the packages apt-transport-https, ca-certificates, and curl
sudo apt-get update

=> Hit a site certificate problem, so the --allow... options were added
sudo apt-get update --allow-unauthenticated --allow-insecure-repositories

sudo apt-get install -y apt-transport-https ca-certificates curl

2. Install kubelet, kubeadm, and kubectl
1) Set up the key for the https://packages.cloud.google.com site
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | gpg --dearmor | sudo tee /usr/share/keyrings/kubernetes-archive-keyring.gpg > /dev/null
2) Configure the kubernetes apt source
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

=> If the command above still has problems, try adding trusted=yes allow-insecure=yes allow-weak=yes allow-downgrade-to-insecure=yes check-valid-until=no inside deb […]
sudo apt-get update --allow-unauthenticated --allow-insecure-repositories
sudo apt-get install -y kubelet kubeadm kubectl
=> This step could not download because the firewall blocks https://packages.cloud.google.com/apt/, so method 2 was tried next

II. Method 2:

Install with the snap tool (if snap is not installed, run apt install snap first)
Reference: https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
1. Install kubelet, kubeadm, and kubectl
snap install kubectl --classic
kubectl version --client

snap install kubeadm --classic
kubeadm version

snap install kubelet --classic
kubelet --version

2. Swap is to be managed by k8s, so it must be turned off
swapoff -a
sed -e '/swap/ s/^#*/#/' -i /etc/fstab
free -m

3. Initialize the master to create the k8s cluster
kubeadm init --pod-network-cidr 10.5.0.0/16

=> The following error appeared and I did not know how to fix it, so I switched to method 3
root@ubuntu-VirtualBox:~/snap# kubeadm init --pod-network-cidr 10.5.0.0/16
[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-socat]: socat not found in system path
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileExisting-conntrack]: conntrack not found in system path
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
root@ubuntu-VirtualBox:~/snap# systemctl enable kubelet.service
Failed to enable unit: Unit file kubelet.service does not exist.

Fix for this error: [ERROR FileExisting-conntrack]: conntrack not found in system path
=> apt-get install conntrack
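
The socat preflight warning in the same output can be cleared the same way (optional):

apt-get install socat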

III. Method 3:

References:
http://kimiwublog.blogspot.com/2017/05/kubernetes.html
https://milexz.pixnet.net/blog/post/228096329-%E3%80%90k8s%E3%80%91kubernetes%E7%92%B0%E5%A2%83%E6%9E%B6%E8%A8%ADby-kubeadm
https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
https://www.downloadkubernetes.com/

1. Install kubectl
1) curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

=> Got this error: curl: (60) SSL certificate problem: EE certificate key too weak
Fix: Ubuntu 20.04 sets the minimum TLS version to 1.2, so the verification fails.
Edit /etc/ssl/openssl.cnf and add the following under oid_section = new_oids

openssl_conf = default_conf
[default_conf]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
MinProtocol = TLSv1.1
CipherString = DEFAULT@SECLEVEL=1

2) curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(<kubectl.sha256) kubectl" | sha256sum --check

3) sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version

2. Install kubeadm
1) curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubeadm"

2) curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubeadm.sha256"
echo "$(<kubeadm.sha256) kubeadm" | sha256sum --check

3) sudo install -o root -g root -m 0755 kubeadm /usr/local/bin/kubeadm
kubeadm version

3. Install kubelet
1) curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubelet"

2) curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubelet.sha256"
echo "$(<kubelet.sha256) kubelet" | sha256sum --check

3) sudo install -o root -g root -m 0755 kubelet /usr/local/bin/kubelet
kubelet --version

4. Swap is to be managed by k8s, so it must be turned off

sudo swapoff -a
sudo sed -e '/swap/ s/^#*/#/' -i /etc/fstab
free -m

5. Initialize the master to create the k8s cluster
sudo kubeadm init --pod-network-cidr 10.5.0.0/16

The following errors appeared:
[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-socat]: socat not found in system path
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Explanation of the fixes:

1) [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
=> see items 7) and 8) of section 6 below

2) [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
=> see item 8) of section 6 below

3) The following errors occurred because kubeadm had been run in this environment before, so the old settings must be cleared with kubeadm reset
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists

4) To see the stack trace of this error execute with --v=5 or higher

6. Following the guide at https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/, review the whole environment again
1) Confirm the MAC address and product_uuid are unique on every node; since only the first node is being built, this step can be skipped

ifconfig -a
sudo cat /sys/class/dmi/id/product_uuid

2)Check network adapters

3) Use iptables and confirm the bridged traffic module is loaded

lsmod | grep br_netfilter
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sudo sysctl --system
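
To confirm the settings took effect (a quick check):

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables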

4) Confirm the required ports are not already in use

A. Control-plane node(s)

Protocol Direction Port Range  Purpose                  Used By
TCP      Inbound   6443*       Kubernetes API server    All
TCP      Inbound   2379-2380   etcd server client API   kube-apiserver, etcd
TCP      Inbound   10250       kubelet API              Self, Control plane
TCP      Inbound   10251       kube-scheduler           Self
TCP      Inbound   10252       kube-controller-manager  Self

B. Worker node(s)

Protocol Direction Port Range   Purpose           Used By
TCP      Inbound   10250        kubelet API       Self, Control plane
TCP      Inbound   30000-32767  NodePort Services All
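
Whether a given port is already in use can be checked with, for example (illustrative):

ss -tlnp | grep -E '6443|10250'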

5) Any of the following runtimes can be used

  • Docker /var/run/dockershim.sock
  • containerd /run/containerd/containerd.sock
  • CRI-O /var/run/crio/crio.sock

6) Pre-pull the kubeadm images
sudo kubeadm config images pull

Note: kubeadm init prints this hint itself: [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

7) Change Docker's cgroup driver to systemd
Reference: https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/

A. sudo mkdir /etc/docker (only if the path does not exist yet)

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

B. Restart the Docker service
systemctl daemon-reload && systemctl restart docker && systemctl enable kubelet.service

C. Confirm the cgroup driver in use
docker info | grep "Cgroup"
=> Cgroup Driver: systemd

8) Create the kubelet service and set its cgroup driver to systemd
A. sudo vi /etc/systemd/system/kubelet.service

[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/bin/kubelet \
  --v=2 \
  --cgroup-driver=systemd \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target

B. Restart the kubelet service
systemctl daemon-reload && systemctl restart kubelet

C. If there is no kubelet.service to start from, this one can be downloaded and tried
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/master/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service

D. Check the kubelet.service startup log; if startup fails, look here first for the cause
journalctl -xeu kubelet => view the startup log, or journalctl -f -u kubelet

E. If the cgroup driver still cannot be set, see

https://www.cnblogs.com/hellxz/p/kubelet-cgroup-driver-different-from-docker.html
Check on the worker nodes the file /var/lib/kubelet/kubeadm-flags.env and, in KUBELET_KUBEADM_ARGS, whether you have the --cgroup-driver=cgroupfs flag. Change it to systemd and kubelet will start working again.

9) Create or edit the file kubeadm-config.yaml => in my tests, though, this had no effect on getting the kubelet cgroup driver set to systemd

kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta2
kubernetesVersion: v1.21.0
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd

7. Initialize the master again to create the k8s cluster
1) Clear the previous settings with kubeadm reset
kubeadm reset

2) Pre-pull the images
kubeadm config images pull

3) Confirm kubelet.service starts successfully
systemctl start kubelet.service or systemctl restart kubelet.service
systemctl status kubelet.service

4) Clear all virtual services
ipvsadm --clear

5) The following all still failed
sudo kubeadm init --pod-network-cidr 10.5.0.0/16
sudo kubeadm init --kubernetes-version=v1.21.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap --v=5
sudo kubeadm init --kubernetes-version=v1.21.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=all --v=5
sudo kubeadm init --config kubeadm-config.yml --v=5

The error: Error execution phase wait-control-plane

=> The following was tried with no result either; this is as far as I got for now
sudo vi /etc/ufw/sysctl.conf
# 2021.07.08
Add
net/bridge/bridge-nf-call-ip6tables = 1
net/bridge/bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-arptables = 1

References:
https://docs.nvidia.com/datacenter/cloud-native/kubernetes/install-k8s.html
https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/kubelet-integration/
https://www.cnblogs.com/horizonli/p/10855666.html
https://www.qikqiak.com/k8s-book/docs/16.%E7%94%A8%20kubeadm%20%E6%90%AD%E5%BB%BA%E9%9B%86%E7%BE%A4%E7%8E%AF%E5%A2%83.html
https://jimmysong.io/kubernetes-handbook/cloud-native/cloud-native-local-quick-start.html


Installing SSH on Ubuntu

A freshly installed Ubuntu has no SSH service by default, so it must be installed manually. Steps:
1. Install the OpenSSH server
1) Install
sudo apt-get install openssh-server -y
2) Log in with ssh
ssh username@ip, ssh username@hostname, or just ssh ip / ssh hostname

2. To change the default SSH port
1) Set the port to listen on
sudo apt-get install nano -y
sudo nano /etc/ssh/sshd_config
Change

Port 22

to

Port 1337
2) Open the new port in the firewall
sudo ufw allow 1337
sudo service ssh restart or sudo systemctl restart ssh
3) Log in with ssh
ssh username@ip -p1337, ssh username@hostname -p1337, or just ssh ip -p1337 / ssh hostname -p1337
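
To confirm sshd is actually listening on the new port (a quick check):

sudo ss -tlnp | grep ssh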