author     Xiang Dai <long0dai@foxmail.com>   2020-05-11 10:37:33 +0800
committer  Xiang Dai <long0dai@foxmail.com>   2020-06-04 10:27:54 +0800
commit     1d770c11d03ea1985e61575cba5c83d2a9ca5623 (patch)
tree       9743e2159d6cbf68e3f1203e1a5e6bf5e9ed91dd
parent     Merge pull request #1746 from daixiang0/cache (diff)
download   kubeedge-1d770c11d03ea1985e61575cba5c83d2a9ca5623.tar.gz
Lint: cleanup white noise
Signed-off-by: Xiang Dai <long0dai@foxmail.com>
-rw-r--r--  README.md  4
-rw-r--r--  build/crd-samples/devices/CC2650-device-instance.yaml  2
-rw-r--r--  build/crd-samples/devices/CC2650-device-model.yaml  6
-rw-r--r--  build/crd-samples/devices/led-light-device-model.yaml  2
-rw-r--r--  build/edge/README.md  2
-rw-r--r--  build/edge/README_zh.md  2
-rw-r--r--  build/edge/kubernetes/04-deployment-edgenode.yaml  20
-rwxr-xr-x  build/edge/run_daemon.sh  28
-rw-r--r--  cloud/README.md  6
-rw-r--r--  cloud/cmd/cloudcore/app/server.go  4
-rw-r--r--  cloud/pkg/edgecontroller/OWNERS  2
-rw-r--r--  docs/getting-started/community-membership.md  4
-rw-r--r--  docs/getting-started/getting-started.md  12
-rw-r--r--  docs/getting-started/release_package.md  48
-rw-r--r--  docs/getting-started/support.md  4
-rw-r--r--  docs/guides/bluetooth_mapper_e2e_guide.md  11
-rw-r--r--  docs/guides/device_crd_guide.md  34
-rw-r--r--  docs/guides/edgemesh_test_env_guide.md  4
-rw-r--r--  docs/guides/message_topics.md  16
-rw-r--r--  docs/guides/try_kubeedge_with_ief.md  2
-rw-r--r--  docs/guides/unit_test_guide.md  52
-rw-r--r--  docs/images/KubeEdge_arch.vsdx  bin  226633 -> 226626 bytes
-rw-r--r--  docs/images/reliable-message-delivery/reliablemessage-workflow.PNG  bin  37730 -> 37720 bytes
-rw-r--r--  docs/index.rst  4
-rw-r--r--  docs/mappers/bluetooth_mapper.md  6
-rw-r--r--  docs/modules/beehive.md  72
-rw-r--r--  docs/modules/cloud/controller.md  10
-rw-r--r--  docs/modules/cloud/device_controller.md  12
-rw-r--r--  docs/modules/edge/devicetwin.md  268
-rw-r--r--  docs/modules/edge/edged.md  28
-rw-r--r--  docs/modules/edge/edgehub.md  18
-rw-r--r--  docs/modules/edge/eventbus.md  2
-rw-r--r--  docs/modules/edge/metamanager.md  10
-rw-r--r--  docs/modules/edgesite.md  16
-rw-r--r--  docs/proposals/EdgeSite.md  22
-rw-r--r--  docs/proposals/configuration.md  50
-rw-r--r--  docs/proposals/cri.md  38
-rw-r--r--  docs/proposals/device-crd.md  2
-rw-r--r--  docs/proposals/edgemesh-design.md  6
-rw-r--r--  docs/proposals/keadm-scope.md  16
-rw-r--r--  docs/proposals/mapper-design.md  10
-rw-r--r--  docs/proposals/quic-design.md  6
-rw-r--r--  docs/proposals/reliable-message-delivery.md  70
-rw-r--r--  docs/setup/cross-compilation.md  8
-rw-r--r--  docs/setup/deploy-edge-node.md  2
-rw-r--r--  docs/setup/kubeedge_install_keadm.md  92
-rw-r--r--  docs/setup/kubeedge_install_source.md  4
-rw-r--r--  docs/setup/kubeedge_run.md  12
-rw-r--r--  docs/setup/memfootprint-test-setup.md  2
-rw-r--r--  edge/cmd/edgecore/app/server.go  20
-rw-r--r--  edge/hack/install_docker_for_raspbian.sh  2
-rw-r--r--  edge/test/README.md  26
-rwxr-xr-x  edge/test/integration/docs/README.md  30
-rw-r--r--  edgemesh/tools/initContainer/createImg.sh  2
-rw-r--r--  edgemesh/tools/initContainer/rpm/Dockerfile  2
-rw-r--r--  edgemesh/tools/initContainer/script/edgemesh-iptables.sh  60
-rw-r--r--  edgesite/cmd/edgesite/app/server.go  6
-rw-r--r--  hack/lib/golang.sh  30
-rw-r--r--  hack/lib/lint.sh  2
-rwxr-xr-x  hack/verify-golang.sh  4
-rw-r--r--  keadm/cmd/keadm/app/cmd/cmd.go  2
-rw-r--r--  keadm/cmd/keadm/app/cmd/edge/join.go  4
-rw-r--r--  mappers/bluetooth_mapper/deployment.yaml  2
-rw-r--r--  mappers/modbus_mapper/Makefile  2
-rw-r--r--  mappers/modbus_mapper/deployment.yaml  6
-rw-r--r--  mappers/modbus_mapper/dpl/deviceProfile.json  2
-rw-r--r--  mappers/modbus_mapper/src/devicetwin.js  6
-rw-r--r--  mappers/modbus_mapper/src/index.js  6
-rw-r--r--  mappers/modbus_mapper/src/watchfile.js  6
-rw-r--r--  staging/src/github.com/kubeedge/beehive/Makefile  4
-rw-r--r--  staging/src/github.com/kubeedge/viaduct/README.md  2
-rw-r--r--  staging/src/github.com/kubeedge/viaduct/examples/chat/README.md  4
-rw-r--r--  staging/src/github.com/kubeedge/viaduct/examples/mirror/README.md  2
-rw-r--r--  tests/e2e/mapper/bluetooth/README.md  2
74 files changed, 644 insertions, 641 deletions
diff --git a/README.md b/README.md
index cfba00ce6..b766c4f25 100644
--- a/README.md
+++ b/README.md
@@ -8,10 +8,10 @@
<img src="./docs/images/kubeedge-logo-only.png">
KubeEdge is built upon Kubernetes and extends native containerized application orchestration and device management to hosts at the Edge.
-It consists of cloud part and edge part, provides core infrastructure support for networking, application deployment and metadata synchronization
+It consists of cloud part and edge part, provides core infrastructure support for networking, application deployment and metadata synchronization
between cloud and edge. It also supports **MQTT** which enables edge devices to access through edge nodes.
-With KubeEdge it is easy to get and deploy existing complicated machine learning, image recognition, event processing and other high level applications to the Edge.
+With KubeEdge it is easy to get and deploy existing complicated machine learning, image recognition, event processing and other high level applications to the Edge.
With business logic running at the Edge, much larger volumes of data can be secured & processed locally where the data is produced.
With data processed at the Edge, the responsiveness is increased dramatically and data privacy is protected.
diff --git a/build/crd-samples/devices/CC2650-device-instance.yaml b/build/crd-samples/devices/CC2650-device-instance.yaml
index c87fa5158..11e7ec9e0 100644
--- a/build/crd-samples/devices/CC2650-device-instance.yaml
+++ b/build/crd-samples/devices/CC2650-device-instance.yaml
@@ -1,4 +1,4 @@
-
+
apiVersion: devices.kubeedge.io/v1alpha1
kind: Device
metadata:
diff --git a/build/crd-samples/devices/CC2650-device-model.yaml b/build/crd-samples/devices/CC2650-device-model.yaml
index c8e6473c2..9bae4cc03 100644
--- a/build/crd-samples/devices/CC2650-device-model.yaml
+++ b/build/crd-samples/devices/CC2650-device-model.yaml
@@ -31,7 +31,7 @@ spec:
accessMode: ReadWrite
defaultValue: 0
- name: io-config
- description: register activation of io-config
+ description: register activation of io-config
type:
int:
accessMode: ReadWrite
@@ -57,7 +57,7 @@ spec:
bluetooth:
characteristicUUID: f000aa0204514000b000000000000000
dataWrite:
- "ON": [1] #Here "ON" refers to the value of the property "temperature-enable" and [1] refers to the corresponding []byte value to be written into the device when the value of temperature-enable is "ON"
+ "ON": [1] #Here "ON" refers to the value of the property "temperature-enable" and [1] refers to the corresponding []byte value to be written into the device when the value of temperature-enable is "ON"
"OFF": [0]
- propertyName: io-config-initialize
bluetooth:
@@ -72,7 +72,7 @@ spec:
bluetooth:
characteristicUUID: f000aa6504514000b000000000000000
dataWrite:
- "Red": [1] #Here "Red" refers to the value of the property "io-data" and [1] refers to the corresponding []byte value to be written into the device when the value of io-data is "Red"
+ "Red": [1] #Here "Red" refers to the value of the property "io-data" and [1] refers to the corresponding []byte value to be written into the device when the value of io-data is "Red"
"Green": [2]
"RedGreen": [3]
"Buzzer": [4]
diff --git a/build/crd-samples/devices/led-light-device-model.yaml b/build/crd-samples/devices/led-light-device-model.yaml
index 6f057380b..1eab09423 100644
--- a/build/crd-samples/devices/led-light-device-model.yaml
+++ b/build/crd-samples/devices/led-light-device-model.yaml
@@ -13,7 +13,7 @@ spec:
accessMode: ReadWrite
defaultValue: 'OFF'
- name: gpio-pin-number
- description: Indicates the GPIO pin to which LED is connected
+ description: Indicates the GPIO pin to which LED is connected
type:
int:
accessMode: ReadOnly
diff --git a/build/edge/README.md b/build/edge/README.md
index 62fcd76a0..acb708cf8 100644
--- a/build/edge/README.md
+++ b/build/edge/README.md
@@ -34,7 +34,7 @@ container and MQTT Broker, so make sure that docker engine listening on
qemu_arch=x86_64 \
certpath=/etc/kubeedge/certs \
certfile=/etc/kubeedge/certs/edge.crt \
- keyfile=/etc/kubeedge/certs/edge.key
+ keyfile=/etc/kubeedge/certs/edge.key
```
+ Build image
diff --git a/build/edge/README_zh.md b/build/edge/README_zh.md
index d4bb72702..19f1723c7 100644
--- a/build/edge/README_zh.md
+++ b/build/edge/README_zh.md
@@ -33,7 +33,7 @@
qemu_arch=x86_64 \
certpath=/etc/kubeedge/certs \
certfile=/etc/kubeedge/certs/edge.crt \
- keyfile=/etc/kubeedge/certs/edge.key
+ keyfile=/etc/kubeedge/certs/edge.key
````
+ 编译容器镜像
diff --git a/build/edge/kubernetes/04-deployment-edgenode.yaml b/build/edge/kubernetes/04-deployment-edgenode.yaml
index ad3cba4b0..d3ad2447c 100644
--- a/build/edge/kubernetes/04-deployment-edgenode.yaml
+++ b/build/edge/kubernetes/04-deployment-edgenode.yaml
@@ -30,7 +30,7 @@ spec:
requests:
cpu: 100m
memory: 512Mi
- env:
+ env:
- name: DOCKER_HOST
value: tcp://localhost:2375
volumeMounts:
@@ -39,15 +39,15 @@ spec:
- name: conf
mountPath: /etc/kubeedge/edge/conf
- name: dind-daemon
- securityContext:
+ securityContext:
privileged: true
- image: docker:dind
- resources:
- requests:
- cpu: 20m
- memory: 512Mi
- volumeMounts:
- - name: docker-graph-storage
+ image: docker:dind
+ resources:
+ requests:
+ cpu: 20m
+ memory: 512Mi
+ volumeMounts:
+ - name: docker-graph-storage
mountPath: /var/lib/docker
volumes:
- name: certs
@@ -56,5 +56,5 @@ spec:
- name: conf
configMap:
name: edgenodeconf
- - name: docker-graph-storage
+ - name: docker-graph-storage
emptyDir: {}
diff --git a/build/edge/run_daemon.sh b/build/edge/run_daemon.sh
index cfc6e7dc4..14022da54 100755
--- a/build/edge/run_daemon.sh
+++ b/build/edge/run_daemon.sh
@@ -42,7 +42,7 @@ docker_prepare(){
if [ ! -d /etc/kubeedge/certs ] || [ ! -e /etc/kubeedge/certs/edge.crt ] || [ ! -e /etc/kubeedge/certs/edge.key ]; then
mkdir -p /etc/kubeedge/certs
echo "Certificate does not exist"
- exit -1
+ exit -1
fi
if [ ! -d /var/lib/kubeedge ]; then
@@ -64,30 +64,30 @@ docker_prepare(){
if [ ! -d ${CERTPATH} ] || [ ! -e ${CERTFILE} ] || [ ! -e ${KEYFILE} ]; then
mkdir -p ${CERTPATH}
echo "Certificate does not exist"
- exit -1
+ exit -1
fi
if [[ -z $(which docker-compose) ]]; then
curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
- fi
+ fi
echo "Container runtime environment check passed."
}
docker_set(){
# This script accepts the following parameters:
- #
+ #
# * cloudhub
# * edgename
# * edgecore_image
# * arch
# * qemu_arch
# * certpath
- # * certfile
- # * keyfile
+ # * certfile
+ # * keyfile
#
# Example
- #
+ #
# ./run_daemon.sh set \
# cloudhub=0.0.0.0:10000 \
# edgename=edge-node \
@@ -96,7 +96,7 @@ docker_set(){
# qemu_arch=x86_64 \
# certpath=/etc/kubeedge/certs \
# certfile=/etc/kubeedge/certs/edge.crt \
- # keyfile=/etc/kubeedge/certs/edge.key
+ # keyfile=/etc/kubeedge/certs/edge.key
ARGS=$@
@@ -135,19 +135,19 @@ docker_up(){
}
docker_down(){
- docker-compose down
+ docker-compose down
}
docker_only_run_edge(){
# This script accepts the following parameters:
- #
+ #
# * mqtt
# * edgename
# * cloudhub
# * image
- #
+ #
# Example
- #
+ #
# ./run_daemon.sh only_run_edge mqtt=0.0.0.0:1883 cloudhub=0.0.0.0:10000 edgename=edge-node image="kubeedge/edgecore:latest"
ARGS=$@
@@ -161,7 +161,7 @@ docker_only_run_edge(){
mqtt=${mqtt:-"0.0.0.0:1883"}
cloudhub=${cloudhub:-"0.0.0.0:10000"}
edgename=${edgename:-$(hostname)}
- edgehubWebsocketUrl=wss://${cloudhub}/e632aba927ea4ac2b575ec1603d56f10/${edgename}/events
+ edgehubWebsocketUrl=wss://${cloudhub}/e632aba927ea4ac2b575ec1603d56f10/${edgename}/events
image=${image:-"kubeedge/edgecore:latest"}
containername=${containername:-"edgecore"}
@@ -188,7 +188,7 @@ prepare_qemu(){
rm -rf tmp
mkdir -p tmp
-
+
pushd tmp &&
curl -L -o qemu-${QEMU_ARCH}-static.tar.gz https://github.com/multiarch/qemu-user-static/releases/download/$QEMU_VERSION/qemu-${QEMU_ARCH}-static.tar.gz && tar xzf qemu-${QEMU_ARCH}-static.tar.gz &&
popd
diff --git a/cloud/README.md b/cloud/README.md
index 54250d591..fa85ce3bd 100644
--- a/cloud/README.md
+++ b/cloud/README.md
@@ -2,7 +2,7 @@ This section contains the source code for KubeEdge cloud side components
## KubeEdge Cloud
-At the cloud side, there are two major components: EdgeController and CloudHub.
+At the cloud side, there are two major components: EdgeController and CloudHub.
EdgeController is an extended Kubernetes controller. It watches nodes and pods against APIServer for the cluster.
-Upon changes in nodes/pods, KubeEdge will convert the pod/node binding info. in the format of node -- pods.
-This way, an edge node can obtain pods targeted for itself. It enhances efficiency and reduces the network bandwidth requirement between cloud & edge.
+Upon changes in nodes/pods, KubeEdge will convert the pod/node binding info. in the format of node -- pods.
+This way, an edge node can obtain pods targeted for itself. It enhances efficiency and reduces the network bandwidth requirement between cloud & edge.
diff --git a/cloud/cmd/cloudcore/app/server.go b/cloud/cmd/cloudcore/app/server.go
index 93e21cb8f..704db146f 100644
--- a/cloud/cmd/cloudcore/app/server.go
+++ b/cloud/cmd/cloudcore/app/server.go
@@ -33,8 +33,8 @@ func NewCloudCoreCommand() *cobra.Command {
Use: "cloudcore",
Long: `CloudCore is the core cloud part of KubeEdge, which contains three modules: cloudhub,
edgecontroller, and devicecontroller. Cloudhub is a web server responsible for watching changes at the cloud side,
-caching and sending messages to EdgeHub. EdgeController is an extended kubernetes controller which manages
-edge nodes and pods metadata so that the data can be targeted to a specific edge node. DeviceController is an extended
+caching and sending messages to EdgeHub. EdgeController is an extended kubernetes controller which manages
+edge nodes and pods metadata so that the data can be targeted to a specific edge node. DeviceController is an extended
kubernetes controller which manages devices so that the device metadata/status date can be synced between edge and cloud.`,
Run: func(cmd *cobra.Command, args []string) {
verflag.PrintAndExitIfRequested()
diff --git a/cloud/pkg/edgecontroller/OWNERS b/cloud/pkg/edgecontroller/OWNERS
index 28d58e1a2..0e447b49e 100644
--- a/cloud/pkg/edgecontroller/OWNERS
+++ b/cloud/pkg/edgecontroller/OWNERS
@@ -7,4 +7,4 @@ reviewers:
- anyushun
- fisherxu
- kadisi
- - kuramal
+ - kuramal
diff --git a/docs/getting-started/community-membership.md b/docs/getting-started/community-membership.md
index 86520bf8c..2966a659a 100644
--- a/docs/getting-started/community-membership.md
+++ b/docs/getting-started/community-membership.md
@@ -17,7 +17,7 @@ This document gives a brief overview of the KubeEdge community roles with the re
## Member
Members are active participants in the community who contribute by authoring PRs,
-reviewing issues/PRs or participate in community discussions on slack/mailing list.
+reviewing issues/PRs or participate in community discussions on slack/mailing list.
### Requirements
@@ -44,7 +44,7 @@ reviewing issues/PRs or participate in community discussions on slack/mailing li
## Approver
Approvers are active members who have good experience and knowledge of the domain.
-They have actively participated in the issue/PR reviews and have identified relevant issues during review.
+They have actively participated in the issue/PR reviews and have identified relevant issues during review.
### Requirements
diff --git a/docs/getting-started/getting-started.md b/docs/getting-started/getting-started.md
index 21bd026e4..d46270e45 100644
--- a/docs/getting-started/getting-started.md
+++ b/docs/getting-started/getting-started.md
@@ -3,12 +3,12 @@
KubeEdge is an open source system for extending native containerized application orchestration capabilities to hosts at Edge.
### Why KubeEdge?
-Learn about KubeEdge and the KubeEdge Mission [here](../modules/kubeedge.md)
+Learn about KubeEdge and the KubeEdge Mission [here](../modules/kubeedge.md)
-### First Steps
-To get the most out of KubeEdge, start by reviewing a few introductory topics:
+### First Steps
+To get the most out of KubeEdge, start by reviewing a few introductory topics:
- Quick Start - [Install KubeEdge with keadm](../setup/kubeedge_install_keadm.md)
- [Start developing KubeEdge](../setup/develop_kubeedge.md)
-- [Integrate with IEF](../guides/try_kubeedge_with_ief.md) - Integrate with the Intelligent Edge Fabric cloud
-- [Contributing](contribute.md) - Contribute to KubeEdge
-- [Troubleshooting](../troubleshooting/troubleshooting.md) - Troubleshoot commonly occurring issues. GitHub issues are [here](https://github.com/kubeedge/kubeedge/issues)
+- [Integrate with IEF](../guides/try_kubeedge_with_ief.md) - Integrate with the Intelligent Edge Fabric cloud
+- [Contributing](contribute.md) - Contribute to KubeEdge
+- [Troubleshooting](../troubleshooting/troubleshooting.md) - Troubleshoot commonly occurring issues. GitHub issues are [here](https://github.com/kubeedge/kubeedge/issues)
diff --git a/docs/getting-started/release_package.md b/docs/getting-started/release_package.md
index c61fc4293..e15cf4d97 100644
--- a/docs/getting-started/release_package.md
+++ b/docs/getting-started/release_package.md
@@ -8,7 +8,7 @@
+ [Creating cluster with kubeadm](<https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/>)
-+ KubeEdge supports https connection to Kubernetes apiserver.
++ KubeEdge supports https connection to Kubernetes apiserver.
Enter the path to kubeconfig file in controller.yaml
```yaml
@@ -17,7 +17,7 @@
...
kubeconfig: "path_to_kubeconfig_file" #Enter path to kubeconfig file to enable https connection to k8s apiserver
```
-
+
+ (Optional) KubeEdge also supports insecure http connection to Kubernetes apiserver for testing, debugging cases.
Please follow below steps to enable http port in Kubernetes apiserver.
@@ -36,20 +36,20 @@
```
## Cloud Vm
-
+
**Note**:execute the below commands as root user
```shell
VERSION="v0.3.0"
OS="linux"
ARCH="amd64"
curl -L "https://github.com/kubeedge/kubeedge/releases/download/${VERSION}/kubeedge-${VERSION}-${OS}-${ARCH}.tar.gz" --output kubeedge-${VERSION}-${OS}-${ARCH}.tar.gz && tar -xf kubeedge-${VERSION}-${OS}-${ARCH}.tar.gz -C /etc
-
+
```
-
+
### Generate Certificates
-
+
RootCA certificate and a cert/key pair is required to have a setup for KubeEdge. Same cert/key pair can be used in both cloud and edge.
-
+
```shell
wget -L https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/tools/certgen.sh
# make script executable
@@ -57,21 +57,21 @@
bash -x ./certgen.sh genCertAndKey edge
```
**NOTE:** The cert/key will be generated in the `/etc/kubeedge/ca` and `/etc/kubeedge/certs` respectively.
-
+
+ The path to the generated certificates should be updated in `etc/kubeedge/cloud/conf/controller.yaml`. Please update the correct paths for the following :
+ cloudhub.ca
+ cloudhub.cert
+ cloudhub.key
-
+
+ Create DeviceModel and Device CRDs.
-
+
```shell
wget -L https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/devices/devices_v1alpha1_devicemodel.yaml
kubectl create -f devices_v1alpha1_devicemodel.yaml
wget -L https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/devices/devices_v1alpha1_device.yaml
kubectl create -f devices_v1alpha1_device.yaml
- ```
-
+ ```
+
+ Create ClusterObjectSync and ObjectSync CRDs which used in reliable message delivery.
```shell
wget -L https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/reliablesyncs/cluster_objectsync_v1alpha1.yaml
@@ -79,9 +79,9 @@
wget -L https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/crds/reliablesyncs/objectsync_v1alpha1.yaml
kubectl create -f objectsync_v1alpha1.yaml
```
-
+
+ Run cloud
-
+
```shell
cd /etc/kubeedge/cloud
# run cloudcore
@@ -95,33 +95,33 @@
based on the runtime to be used at edge
**NOTE:** scp kubeedge folder from cloud vm to edge vm
-
+
```shell
In cloud
scp -r /etc/kubeedge root@edgeip:/etc
```
### Configuring MQTT mode
-
+
The Edge part of KubeEdge uses MQTT for communication between deviceTwin and devices. KubeEdge supports 3 MQTT modes:
1) internalMqttMode: internal mqtt broker is enabled.
2) bothMqttMode: internal as well as external broker are enabled.
3) externalMqttMode: only external broker is enabled.
-
+
Use mode field in [edge.yaml](https://github.com/kubeedge/kubeedge/blob/master/edge/conf/edge.yaml#L4) to select the desired mode.
-
+
To use KubeEdge in double mqtt or external mode, you need to make sure that [mosquitto](https://mosquitto.org/) or [emqx edge](https://www.emqx.io/downloads/edge) is installed on the edge node as an MQTT Broker.
-
+
+ We have provided a sample node.json to add a node in kubernetes. Please make sure edge-node is added in kubernetes. Run below steps to add edge-node.
-
+
+ Deploy node
```shell
wget -L https://raw.githubusercontent.com/kubeedge/kubeedge/master/build/node.json
- #Modify the node.json` file and change `metadata.name` to the name of the edge node
+ #Modify the node.json` file and change `metadata.name` to the name of the edge node
kubectl apply -f node.json
```
+ Modify the `/etc/kubeedge/edge/conf/edge.yaml` configuration file
+ Replace `edgehub.websocket.certfile` and `edgehub.websocket.keyfile` with your own certificate path
- + Update the IP address of the master in the `websocket.url` field.
+ + Update the IP address of the master in the `websocket.url` field.
+ replace `edge-node` with edge node name in edge.yaml for the below fields :
+ `websocket:URL`
+ `controller:node-id`
@@ -136,7 +136,7 @@
+ `runtime-request-timeout: 2`
+ `podsandbox-image: k8s.gcr.io/pause`
+ `kubelet-root-dir: /var/run/kubelet/`
- + Run edge
+ + Run edge
```shell
# run edgecore
# `conf/` should be in the same directory as the cloned KubeEdge repository
@@ -145,7 +145,7 @@
./edgecore
# or
nohup ./edgecore > edgecore.log 2>&1 &
-
+
```
**Note**: Running edgecore on ARM based processors,follow the above steps as mentioned for Edge Vm
```shell
diff --git a/docs/getting-started/support.md b/docs/getting-started/support.md
index 99538ecca..e4371f000 100644
--- a/docs/getting-started/support.md
+++ b/docs/getting-started/support.md
@@ -5,10 +5,10 @@ If you need support, start with the [troubleshooting guide](../troubleshooting/t
## Community
-**Slack channel:**
+**Slack channel:**
We use Slack for public discussions. To chat with us or the rest of the community, join us in the [KubeEdge Slack](https://kubeedge.slack.com) team channel #general. To sign up, use our Slack inviter link [here](https://join.slack.com/t/kubeedge/shared_invite/enQtNDg1MjAwMDI0MTgyLTQ1NzliNzYwNWU5MWYxOTdmNDZjZjI2YWE2NDRlYjdiZGYxZGUwYzkzZWI2NGZjZWRkZDVlZDQwZWI0MzM1Yzc).
-**Mailing List**
+**Mailing List**
Please sign up on our [mailing list](https://groups.google.com/forum/#!forum/kubeedge)
diff --git a/docs/guides/bluetooth_mapper_e2e_guide.md b/docs/guides/bluetooth_mapper_e2e_guide.md
index 03981d338..073363161 100644
--- a/docs/guides/bluetooth_mapper_e2e_guide.md
+++ b/docs/guides/bluetooth_mapper_e2e_guide.md
@@ -1,7 +1,7 @@
# Bluetooth Mapper End to End Test Setup Guide
The test setup required for running the end to end test of bluetooth mapper requires two separate machines in bluetooth range.
-The paypal/gatt package used for bluetooth mapper makes use of hci interface for bluetooth communication. Out of two machines specified,
+The paypal/gatt package used for bluetooth mapper makes use of hci interface for bluetooth communication. Out of two machines specified,
one is used for running bluetooth mapper and other is used for running a test server which publishes data that the mapper use for processing.
The test server created here is also using the paypal/gatt package.
@@ -20,11 +20,11 @@ The test server created here is also using the paypal/gatt package.
1. Copy devices folder in tests/e2e/stubs and keep it in path TESTSERVER/src/github.com in first machine.
2. Update the following in devices/mockserver.go
-
+
1. package devices -> package main
2. import "github.com/kubeedge/kubeedge/tests/stubs/devices/services" to "github.com/devices/services"
-
-3. Build the binary using
+
+3. Build the binary using
`go build mockserver.go`
4. Run the server using
`sudo ./mockserver -logtostderr -duration=<specify duration for which test server should be running>`
@@ -33,7 +33,6 @@ The test server created here is also using the paypal/gatt package.
This runs your test server which publishes data for the mapper to process.
-
- \ No newline at end of file
+
diff --git a/docs/guides/device_crd_guide.md b/docs/guides/device_crd_guide.md
index 06914a9ea..980a82c30 100644
--- a/docs/guides/device_crd_guide.md
+++ b/docs/guides/device_crd_guide.md
@@ -2,8 +2,8 @@
KubeEdge supports device management with the help of Kubernetes [CRDs](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) and a Device Mapper (explained below) corresponding to the device being used.
We currently manage devices from the cloud and synchronize the device updates between edge nodes and cloud, with the help of device controller and device twin modules.
-
-
+
+
## Device Model
A `device model` describes the device properties exposed by the device and property visitors to access these properties. A device model is like a reusable template using which many devices can be created and managed.
@@ -13,17 +13,17 @@ Details on device model definition can be found [here](https://github.com/kubeed
A sample device model can be found [here](https://github.com/kubeedge/kubeedge/blob/master/docs/proposals/device-crd.md#device-model-sample)
-## Device Instance
-
+## Device Instance
+
A `device` instance represents an actual device object. It is like an instantiation of the `device model` and references properties defined in the model. The device spec is static while the device status contains dynamically changing data like the desired state of a device property and the state reported by the device.
-
+
Details on device instance definition can be found [here](https://github.com/kubeedge/kubeedge/blob/master/docs/proposals/device-crd.md#device-instance-type-definition).
-
+
A sample device model can be found [here](https://github.com/kubeedge/kubeedge/blob/master/docs/proposals/device-crd.md#device-instance-sample).
## Device Mapper
-
+
Mapper is an application that is used to connect and and control devices. Following are the responsibilities of mapper:
1) Scan and connect to the device.
2) Report the actual state of twin-attributes of device.
@@ -32,17 +32,17 @@ A sample device model can be found [here](https://github.com/kubeedge/kubeedge/b
5) Convert readings from device to format accepted by KubeEdge.
6) Schedule actions on the device.
7) Check health of the device.
-
+
Mapper can be specific to a protocol where standards are defined i.e Bluetooth, Zigbee, etc or specific to a device if it a custom protocol.
-
+
Mapper design details can be found [here](https://github.com/kubeedge/kubeedge/blob/master/docs/proposals/mapper-design.md#mapper-design)
-
- An example of a mapper application created to support bluetooth protocol can be found [here](https://github.com/kubeedge/kubeedge/tree/master/device/bluetooth_mapper#bluetooth-mapper)
-
-
+
+ An example of a mapper application created to support bluetooth protocol can be found [here](https://github.com/kubeedge/kubeedge/tree/master/device/bluetooth_mapper#bluetooth-mapper)
+
+
## Usage of Device CRD
-The following are the steps to
+The following are the steps to
1. Create a device model in the cloud node.
@@ -56,16 +56,16 @@ The following are the steps to
kubectl apply -f <path to device instance yaml>
```
- Note: Creation of device instance will also lead to the creation of a config map which will contain information about the devices which are required by the mapper applications
+ Note: Creation of device instance will also lead to the creation of a config map which will contain information about the devices which are required by the mapper applications
The name of the config map will be as follows: device-profile-config-< edge node name >. The updation of the config map is handled internally by the device controller.
3. Run the mapper application corresponding to your protocol.
4. Edit the status section of the device instance yaml created in step 2 and apply the yaml to change the state of device twin. This change will be reflected at the edge, through the device controller
and device twin modules. Based on the updated value of device twin at the edge the mapper will be able to perform its operation on the device.
-
+
5. The reported values of the device twin are updated by the mapper application at the edge and this data is synced back to the cloud by the device controller. User can view the update at the cloud by checking his device instance object.
```shel
- Note: Sample device model and device instance for a few protocols can be found at $GOPATH/src/github.com/kubeedge/kubeedge/build/crd-samples/devices
+ Note: Sample device model and device instance for a few protocols can be found at $GOPATH/src/github.com/kubeedge/kubeedge/build/crd-samples/devices
``` \ No newline at end of file
diff --git a/docs/guides/edgemesh_test_env_guide.md b/docs/guides/edgemesh_test_env_guide.md
index 6dfb0551d..b69278c44 100644
--- a/docs/guides/edgemesh_test_env_guide.md
+++ b/docs/guides/edgemesh_test_env_guide.md
@@ -46,7 +46,7 @@ Copy the deployment.yaml from the above link in cloud host, and run
```bash
$ kubectl create -f deployment.yaml
deployment.apps/nginx-deployment created
-```
+```
Check the pod is up and is running state, as we could see the pod is running on edge node b
```bash
@@ -126,4 +126,4 @@ Commercial support is available at
</html>
```
>* EdgeMesh supports both Host Networking and Container Networking
->* If you ever used EdgeMesh of old version, check your iptables rules. It might affect your test result. \ No newline at end of file
+>* If you ever used EdgeMesh of old version, check your iptables rules. It might affect your test result. \ No newline at end of file
diff --git a/docs/guides/message_topics.md b/docs/guides/message_topics.md
index 6148e27f0..5ab180abb 100644
--- a/docs/guides/message_topics.md
+++ b/docs/guides/message_topics.md
@@ -3,7 +3,7 @@ KubeEdge uses MQTT for communication between deviceTwin and devices/apps.
EventBus can be started in multiple MQTT modes and acts as an interface for sending/receiving messages on relevant MQTT topics.
The purpose of this document is to describe the topics which KubeEdge uses for communication.
-Please read Beehive [documentation](../modules/beehive.md) for understanding about message format used by KubeEdge.
+Please read Beehive [documentation](../modules/beehive.md) for understanding about message format used by KubeEdge.
## Subscribe Topics
On starting EventBus, it subscribes to these 5 topics:
@@ -13,7 +13,7 @@ On starting EventBus, it subscribes to these 5 topics:
3. "$hw/events/device/+/twin/+"
4. "$hw/events/upload/#"
5. "SYS/dis/upload_records"
-```
+```
If the the message is received on first 3 topics, the message is sent to deviceTwin, else the message is sent to cloud via edgeHub.
@@ -21,15 +21,15 @@ We will focus on the message expected on the first 3 topics.
1. `"$hw/events/node/+/membership/get"`:
This topics is used to get membership details of a node i.e the devices that are associated with the node.
-The response of the message is published on `"$hw/events/node/+/membership/get/result"` topic.
+The response of the message is published on `"$hw/events/node/+/membership/get/result"` topic.
2. `"$hw/events/device/+/state/update`":
-This topic is used to update the state of the device. + symbol can be replaced with ID of the device whose state is to be updated.
+This topic is used to update the state of the device. + symbol can be replaced with ID of the device whose state is to be updated.
3. `"$hw/events/device/+/twin/+"`:
-The two + symbols can be replaced by the deviceID on whose twin the operation is to be performed and any one of(update,cloud_updated,get) respectively.
+The two + symbols can be replaced by the deviceID on whose twin the operation is to be performed and any one of(update,cloud_updated,get) respectively.
-Following is the explanation of the three suffix used:
-1. `update`: this suffix is used to update the twin for the deviceID.
-2. `cloud_updated`: this suffix is used to sync the twin status between edge and cloud.
+Following is the explanation of the three suffix used:
+1. `update`: this suffix is used to update the twin for the deviceID.
+2. `cloud_updated`: this suffix is used to sync the twin status between edge and cloud.
3. `get`: is used to get twin status of a device. The response is published on `"$hw/events/device/+/twin/get/result"` topic.
diff --git a/docs/guides/try_kubeedge_with_ief.md b/docs/guides/try_kubeedge_with_ief.md
index 57c5b8d42..b5d711d29 100644
--- a/docs/guides/try_kubeedge_with_ief.md
+++ b/docs/guides/try_kubeedge_with_ief.md
@@ -1,4 +1,4 @@
-# Try KubeEdge with HuaweiCloud (IEF)
+# Try KubeEdge with HuaweiCloud (IEF)
## [Intelligent EdgeFabric (IEF)](https://www.huaweicloud.com/product/ief.html)
**Note:** The HuaweiCloud IEF is only available in China now.
diff --git a/docs/guides/unit_test_guide.md b/docs/guides/unit_test_guide.md
index bb28e8af2..a1733e18f 100644
--- a/docs/guides/unit_test_guide.md
+++ b/docs/guides/unit_test_guide.md
@@ -1,56 +1,56 @@
# Unit Test Guide
The purpose of this document is to give introduction about unit tests and to help contributors in writing unit tests.
-## Unit Test
-
-Read this [article](http://softwaretestingfundamentals.com/unit-testing/) for a simple introduction about unit tests and benefits of unit testing. Go has its own built-in package called testing and command called ```go test```.
+## Unit Test
+
+Read this [article](http://softwaretestingfundamentals.com/unit-testing/) for a simple introduction about unit tests and benefits of unit testing. Go has its own built-in package called testing and command called ```go test```.
For more detailed information on golang's builtin testing package read this [document](https://golang.org/pkg/testing/]).
-
-## Mocks
+
+## Mocks
The object which needs to be tested may have dependencies on other objects. To confine the behavior of the object under test, replacement of the other objects by mocks that simulate the behavior of the real objects is necessary.
Read this [article](https://medium.com/@piraveenaparalogarajah/what-is-mocking-in-testing-d4b0f2dbe20a) for more information on mocks.
-
+
GoMock is a mocking framework for Go programming language.
Read [godoc](https://godoc.org/github.com/golang/mock/gomock) for more information about gomock.
-
+
Mock for an interface can be automatically generated using [GoMocks](https://github.com/golang/mock) mockgen package.
-
+
**Note** There is gomock package in kubeedge vendor directory without mockgen. Please use mockgen package of tagged version ***v1.1.1*** of [GoMocks github repository](https://github.com/golang/mock) to install mockgen and generate mocks. Using higher version may cause errors/panics during execution of you tests.
There is gomock package in kubeedge vendor directory without mockgen. Please use mockgen package of tagged version ***v1.1.1*** of [GoMocks github repository](https://github.com/golang/mock) to install mockgen and generate mocks. Using higher version may cause errors/panics during execution of you tests.
Read this [article](https://blog.codecentric.de/en/2017/08/gomock-tutorial/) for a short tutorial of usage of gomock and mockgen.
-
-## Ginkgo
-
+
+## Ginkgo
+
[Ginkgo](https://onsi.github.io/ginkgo/) is one of the most popular framework for writing tests in go.
-
+
Read [godoc](https://godoc.org/github.com/onsi/ginkgo) for more information about ginkgo.
-
+
See a [sample](https://github.com/kubeedge/kubeedge/blob/master/edge/pkg/metamanager/dao/meta_test.go) in kubeedge where go builtin package testing and gomock is used for writing unit tests.
See a [sample](https://github.com/kubeedge/kubeedge/blob/master/edge/pkg/devicetwin/dtmodule/dtmodule_test.go) in kubeedge where ginkgo is used for testing.
-## Writing UT using GoMock
+## Writing UT using GoMock
-### Example : metamanager/dao/meta.go
+### Example : metamanager/dao/meta.go
After reading the code of meta.go, we can find that there are 3 interfaces of beego which are used. They are [Ormer](https://github.com/kubeedge/kubeedge/blob/master/vendor/github.com/astaxie/beego/orm/types.go), [QuerySeter](https://github.com/kubeedge/kubeedge/blob/master/vendor/github.com/astaxie/beego/orm/types.go) and [RawSeter](https://github.com/kubeedge/kubeedge/blob/master/vendor/github.com/astaxie/beego/orm/types.go).
We need to create fake implementations of these interfaces so that we do not rely on the original implementation of this interface and their function calls.
-Following are the steps for creating fake/mock implementation of Ormer, initializing it and replacing the original with fake.
+Following are the steps for creating fake/mock implementation of Ormer, initializing it and replacing the original with fake.
-1. Create directory mocks/beego.
+1. Create directory mocks/beego.
2. use mockgen to generate fake implementation of the Ormer interface
```shell
mockgen -destination=mocks/beego/fake_ormer.go -package=beego github.com/astaxie/beego/orm Ormer
```
-- `destination` : where you want to create the fake implementation.
-- `package` : package of the created fake implementation file
-- `github.com/astaxie/beego/orm` : the package where interface definition is there
+- `destination` : where you want to create the fake implementation.
+- `package` : package of the created fake implementation file
+- `github.com/astaxie/beego/orm` : the package where interface definition is there
- `Ormer` : generate mocks for this interface
3. Initialize mocks in your test file. eg meta_test.go
@@ -58,19 +58,19 @@ mockgen -destination=mocks/beego/fake_ormer.go -package=beego github.com/astaxie
mockCtrl := gomock.NewController(t)
defer mockCtrl.Finish()
ormerMock = beego.NewMockOrmer(mockCtrl)
-```
+```
-4. ormermock is now a fake implementation of Ormer interface. We can make any function in ormermock return any value you want.
+4. ormermock is now a fake implementation of Ormer interface. We can make any function in ormermock return any value you want.
-5. replace the real Ormer implementation with this fake implementation. DBAccess is variable to type Ormer which we will replace with mock implemention
+5. replace the real Ormer implementation with this fake implementation. DBAccess is variable to type Ormer which we will replace with mock implemention
```shell
dbm.DBAccess = ormerMock
-```
+```
-6. If we want Insert function of ormer interface which has return types as (int64,err) to return (1 nil), it can be done in 1 line in your test file using gomock.
+6. If we want Insert function of ormer interface which has return types as (int64,err) to return (1 nil), it can be done in 1 line in your test file using gomock.
```shell
ormerMock.EXPECT().Insert(gomock.Any()).Return(int64(1), nil).Times(1)
-```
+```
``Expect()`` : is to tell that a function of ormermock will be called.
diff --git a/docs/images/KubeEdge_arch.vsdx b/docs/images/KubeEdge_arch.vsdx
index 9cb0f2fd9..f59ace88a 100644
--- a/docs/images/KubeEdge_arch.vsdx
+++ b/docs/images/KubeEdge_arch.vsdx
Binary files differ
diff --git a/docs/images/reliable-message-delivery/reliablemessage-workflow.PNG b/docs/images/reliable-message-delivery/reliablemessage-workflow.PNG
index dc3ea47c5..6004dd260 100644
--- a/docs/images/reliable-message-delivery/reliablemessage-workflow.PNG
+++ b/docs/images/reliable-message-delivery/reliablemessage-workflow.PNG
Binary files differ
diff --git a/docs/index.rst b/docs/index.rst
index 9cdb8965a..9e5c1f05f 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -12,7 +12,7 @@ Welcome to KubeEdge's documentation!
:align: right
:target: https://kubeedge.io
-KubeEdge is an open source system for extending native containerized
+KubeEdge is an open source system for extending native containerized
application orchestration capabilities to hosts at Edge.
.. toctree::
@@ -21,7 +21,7 @@ application orchestration capabilities to hosts at Edge.
.. toctree::
:maxdepth: 2
:caption: Getting Started
-
+
getting-started/getting-started
getting-started/contribute.md
getting-started/roadmap.md
diff --git a/docs/mappers/bluetooth_mapper.md b/docs/mappers/bluetooth_mapper.md
index 7d072bf39..019d268ed 100644
--- a/docs/mappers/bluetooth_mapper.md
+++ b/docs/mappers/bluetooth_mapper.md
@@ -159,7 +159,11 @@ perform the event once.
### Configuration File
+<<<<<<< HEAD
The user can give the configurations specific to the bluetooth device using configurations provided in the configuration file present at $GOPATH/src/github.com/kubeedge/kubeedge/mappers/bluetooth_mapper/configuration/config.yaml.
+=======
+ The user can give the configurations specific to the bluetooth device using configurations provided in the configuration file present at $GOPATH/src/github.com/kubeedge/kubeedge/device/bluetooth_mapper/configuration/config.yaml.
+>>>>>>> Lint: cleanup white noise
The details provided in the configuration file are used by action-manager module, scheduler module, watcher module, the data-converter module and the controller.
**Example:** Given below is the instructions using which user can create their own configuration file, for their device.
@@ -324,4 +328,4 @@ then it returns an error message.
{
"name": "temperature" #name of schedule to be deleted
}
- ] \ No newline at end of file
+ ]
diff --git a/docs/modules/beehive.md b/docs/modules/beehive.md
index 88a608ecb..509a4af15 100644
--- a/docs/modules/beehive.md
+++ b/docs/modules/beehive.md
@@ -1,6 +1,6 @@
# Beehive
-## Beehive Overview
+## Beehive Overview
Beehive is a messaging framework based on go-channels for communication between modules of KubeEdge. A module registered with beehive can communicate with other beehive modules if the name with which other beehive module is registered or the name of the group of the module is known.
Beehive supports following module operations:
@@ -9,106 +9,106 @@ Beehive supports following module operations:
2. Add Module to a group
3. CleanUp (remove a module from beehive core and all groups)
-Beehive supports following message operations:
+Beehive supports following message operations:
1. Send to a module/group
2. Receive by a module
3. Send Sync to a module/group
4. Send Response to a sync message
-## Message Format
+## Message Format
-Message has 3 parts
+Message has 3 parts
- 1. Header:
+ 1. Header:
1. ID: message ID (string)
2. ParentID: if it is a response to a sync message then parentID exists (string)
3. TimeStamp: time when message was generated (int)
4. Sync: flag to indicate if message is of type sync (bool)
- 2. Route:
+ 2. Route:
1. Source: origin of message (string)
2. Group: the group to which the message has to be broadcasted (string)
3. Operation: what’s the operation on the resource (string)
4. Resource: the resource to operate on (string)
3. Content: content of the message (interface{})
-
-## Register Module
+
+## Register Module
1. On starting edgecore, each module tries to register itself with the beehive core.
-2. Beehive core maintains a map named modules which has module name as key and implementation of module interface as value.
+2. Beehive core maintains a map named modules which has module name as key and implementation of module interface as value.
3. When a module tries to register itself with beehive core, beehive core checks from already loaded modules.yaml config file to check if the module is enabled. If it is enabled, it is added in the modules map or else it is added in the disabled modules map.
-## Channel Context Structure Fields
+## Channel Context Structure Fields
-### (_Important for understanding beehive operations_)
+### (_Important for understanding beehive operations_)
1. **channels:** channels is a map of string(key) which is name of module and chan(value) of message which will used to send message to the respective module.
2. **chsLock:** lock for channels map
3. **typeChannels:** typeChannels is a map of string(key)which is group name and (map of string(key) to chan(value) of message ) (value) which is map of name of each module in the group to the channels of corresponding module.
-4. **typeChsLock:** lock for typeChannels map
+4. **typeChsLock:** lock for typeChannels map
5. **anonChannels:** anonChannels is a map of string(parentid) to chan(value) of message which will be used for sending response for a sync message.
6. **anonChsLock:** lock for anonChannels map
-## Module Operations
+## Module Operations
-### Add Module
+### Add Module
1. Add module operation first creates a new channel of message type.
-2. Then the module name(key) and its channel(value) is added in the channels map of channel context structure.
-3. Eg: add edged module
+2. Then the module name(key) and its channel(value) is added in the channels map of channel context structure.
+3. Eg: add edged module
```
coreContext.Addmodule(“edged”)
-```
-### Add Module to Group
+```
+### Add Module to Group
1. addModuleGroup first gets the channel of a module from the channels map.
2. Then the module and its channel is added in the typeChannels map where key is the group and in the value is a map in which (key is module name and value is the channel).
-3. Eg: add edged in edged group. Here 1st edged is module name and 2nd edged is the group name.
+3. Eg: add edged in edged group. Here 1st edged is module name and 2nd edged is the group name.
```
coreContext.AddModuleGroup(“edged”,”edged”)
```
-### CleanUp
+### CleanUp
1. CleanUp deletes the module from channels map and deletes the module from all groups(typeChannels map).
2. Then the channel associated with the module is closed.
-3. Eg: CleanUp edged module
+3. Eg: CleanUp edged module
```
coreContext.CleanUp(“edged”)
```
-## Message Operations
+## Message Operations
-### Send to a Module
+### Send to a Module
1. Send gets the channel of a module from channels map.
-2. Then the message is put on the channel.
-3. Eg: send message to edged.
+2. Then the message is put on the channel.
+3. Eg: send message to edged.
```
-coreContext.Send(“edged”,message)
-```
+coreContext.Send(“edged”,message)
+```
-### Send to a Group
+### Send to a Group
1. SendToGroup gets all modules(map) from the typeChannels map.
2. Then it iterates over the map and sends the message on the channels of all modules in the map.
-3. Eg: message to be sent to all modules in edged group.
+3. Eg: message to be sent to all modules in edged group.
```
coreContext.SendToGroup(“edged”,message) message will be sent to all modules in edged group.
```
-### Receive by a Module
+### Receive by a Module
1. Receive gets the channel of a module from channels map.
2. Then it waits for a message to arrive on that channel and returns the message. Error is returned if there is any.
-3. Eg: receive message for edged module
+3. Eg: receive message for edged module
```go
msg, err := coreContext.Receive("edged")
```
-### SendSync to a Module
+### SendSync to a Module
1. SendSync takes 3 parameters, (module, message and timeout duration)
2. SendSync first gets the channel of the module from the channels map.
@@ -116,25 +116,25 @@ msg, err := coreContext.Receive("edged")
4. Then a new channel of message is created and is added in anonChannels map where key is the messageID.
5. Then it waits for the message(response) to be received on the anonChannel it created till timeout.
6. If message is received before timeout, message is returned with nil error or else timeout error is returned.
-7. Eg: send sync to edged with timeout duration 60 seconds
+7. Eg: send sync to edged with timeout duration 60 seconds
```go
response, err := coreContext.SendSync("edged",message,60*time.Second)
```
-### SendSync to a Group
+### SendSync to a Group
1. Get the list of modules from typeChannels map for the group.
2. Create a channel of message with size equal to the number of modules in that group and put in anonChannels map as value with key as messageID.
3. Send the message on channels of all the modules.
4. Wait till timeout. If the length of anonChannel = no of modules in that group, check if all the messages in the channel have parentID = messageID. If no return error else return nil error.
5. If timeout is reached,return timeout error.
-6. Eg: send sync message to edged group with timeout duration 60 seconds
+6. Eg: send sync message to edged group with timeout duration 60 seconds
```go
err := coreContext.SendToGroupSync("edged",message,60*time.Second)
```
-### SendResp to a sync message
+### SendResp to a sync message
1. SendResp is used to send response for a sync message.
2. The messageID for which response is sent needs to be in the parentID of the response message.
diff --git a/docs/modules/cloud/controller.md b/docs/modules/cloud/controller.md
index 60fd51515..95c0fb2c1 100644
--- a/docs/modules/cloud/controller.md
+++ b/docs/modules/cloud/controller.md
@@ -4,9 +4,9 @@
## Edge Controller Overview
EdgeController is the bridge between Kubernetes Api-Server and edgecore
-
+
## Operations Performed By Edge Controller
-
+
The following are the functions performed by Edge controller :-
- Downstream Controller: Sync add/update/delete event to edgecore from K8s Api-server
- Upstream Controller: Sync watch and Update status of resource and events(node, pod and configmap) to K8s-Api-server and also subscribe message from edgecore
@@ -15,7 +15,7 @@
## Downstream Controller:
### Sync add/update/delete event to edge
-
+
- Downstream controller: Watches K8S-Api-server and sends updates to edgecore via cloudHub
- Sync (pod, configmap, secret) add/update/delete event to edge via cloudHub
- Creates Respective manager (pod, configmap, secret) for handling events by calling manager interface
@@ -41,9 +41,9 @@
- **HostIP**: IP address of the host to which pod is assigned
- **PodIp**: IP address allocated to the Pod
- **QosClass**: Assigned to the pod based on resource requirement
-
+
![Upstream Controller](../../images/edgecontroller/UpstreamController.png)
-
+
## Controller Manager:
### Creates manager Interface and implements ConfigmapManager, LocationCache and podManager
diff --git a/docs/modules/cloud/device_controller.md b/docs/modules/cloud/device_controller.md
index 1efd1a1cd..31e3f677e 100644
--- a/docs/modules/cloud/device_controller.md
+++ b/docs/modules/cloud/device_controller.md
@@ -3,7 +3,7 @@
## Device Controller Overview
The device controller is the cloud component of KubeEdge which is responsible for device management. Device management in KubeEdge is implemented by making use of Kubernetes
- [Custom Resource Definitions (CRDs)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) to describe device metadata/status and device controller to synchronize these device updates between edge and cloud.
+ [Custom Resource Definitions (CRDs)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) to describe device metadata/status and device controller to synchronize these device updates between edge and cloud.
The device controller starts two separate goroutines called `upstream controller` and `downstream controller`. These are not separate controllers as such but named here for clarity.
The device controller makes use of device model and device instance to implement device management :
@@ -15,10 +15,10 @@ The device controller makes use of device model and device instance to implement
**Note**: Sample device model and device instance for a few protocols can be found at $GOPATH/src/github.com/kubeedge/kubeedge/build/crd-samples/devices
![Device Model](../../images/device-crd/device-crd-model.png)
-
-
+
+
## Operations Performed By Device Controller
-
+
The following are the functions performed by the device controller :-
- **Downstream Controller**: Synchronize the device updates from the cloud to the edge node, by watching on K8S API server
- **Upstream Controller**: Synchronize the device updates from the edge node to the cloud using device twin component
@@ -32,9 +32,9 @@ actions that the upstream controller can take:
| Update Type | Action |
|------------------------------- |---------------------------------------------- |
|Device Twin Reported State Updated | The controller patches the reported state of the device twin property in the cloud. |
-
+
![Device Upstream Controller](../../images/device-crd/device-upstream-controller.png)
-
+
### Syncing Reported Device Twin Property Update From Edge To Cloud
The mapper watches devices for updates and reports them to the event bus via the MQTT broker. The event bus sends the reported state of the device to the device twin which stores it locally and then syncs the updates to the cloud. The device controller watches for device updates from the edge ( via the cloudhub ) and updates the reported state in the cloud.
diff --git a/docs/modules/edge/devicetwin.md b/docs/modules/edge/devicetwin.md
index ad0db3ef2..66201ae4f 100644
--- a/docs/modules/edge/devicetwin.md
+++ b/docs/modules/edge/devicetwin.md
@@ -3,52 +3,52 @@
## Overview
-DeviceTwin module is responsible for storing device status, dealing with device attributes, handling device twin operations, creating a membership
+DeviceTwin module is responsible for storing device status, dealing with device attributes, handling device twin operations, creating a membership
between the edge device and edge node, syncing device status to the cloud and syncing the device twin information between edge and cloud.
-It also provides query interfaces for applications. Device twin consists of four sub modules (namely membership module, communication
+It also provides query interfaces for applications. Device twin consists of four sub modules (namely membership module, communication
module, device module and device twin module) to perform the responsibilities of device twin module.
-
-
+
+
## Operations Performed By Device Twin Controller
-
+
The following are the functions performed by device twin controller :-
-
+
- Sync metadata to/from db ( Sqlite )
- Register and Start Sub Modules
- - Distribute message to Sub Modules
+ - Distribute message to Sub Modules
- Health Check
### Sync Metadata to/from db ( Sqlite )
-
+
For all devices managed by the edge node , the device twin performs the below operations :-
- It checks if the device in the device twin context (the list of devices are stored inside the device twin context), if not it adds a mutex to the context.
- Query device from database
- Query device attribute from database
- Query device twin from database
- - Combine the device, device attribute and device twin data together into a single structure and stores it in the device twin context.
-
-
+ - Combine the device, device attribute and device twin data together into a single structure and stores it in the device twin context.
+
+
### Register and Start Sub Modules
-Registers the four device twin modules and starts them as separate go routines
+Registers the four device twin modules and starts them as separate go routines
### Distribute Message To Sub Modules
-
+
1. Continuously listen for any device twin message in the beehive framework.
2. Send the received message to the communication module of device twin
3. Classify the message according to the message source, i.e. whether the message is from eventBus, edgeManager or edgeHub,
-and fills the action module map of the module (ActionModuleMap is a map of action to module)
+and fills the action module map of the module (ActionModuleMap is a map of action to module)
4. Send the message to the required device twin module
-
-### Health Check
-
+
+### Health Check
+
The device twin controller periodically ( every 60 s ) sends ping messages to submodules. Each of the submodules updates the timestamp in a map for itself once it receives a ping.
The controller checks if the timestamp for a module is more than 2 minutes old and restarts the submodule if true.
-
+
## Modules
@@ -70,43 +70,43 @@ The major functions performed by this module are:-
2. Receive the messages sent to membership module
3. For each message the action message is read and the corresponding function is called
4. Receive heartbeat from the heartbeat channel and send a heartbeat to the controller
-
+
The following are the action callbacks which can be performed by the membership module :-
- dealMembershipGet
- dealMembershipUpdated
- dealMembershipDetail
-**dealMembershipGet**: dealMembershipGet() gets the information about the devices associated with the particular edge node, from the cache.
+**dealMembershipGet**: dealMembershipGet() gets the information about the devices associated with the particular edge node, from the cache.
- The eventbus first receives a message on its subscribed topic (membership-get topic).
-- This message arrives at the devicetwin controller, which further sends the message to membership module.
-- The membership module gets the devices associated with the edge node from the cache (context) and sends the information to the communication module.
+- This message arrives at the devicetwin controller, which further sends the message to membership module.
+- The membership module gets the devices associated with the edge node from the cache (context) and sends the information to the communication module.
It also handles errors that may arise while performing the aforementioned process and sends the error to the communication module instead of device details.
-- The communication module sends the information to the eventbus component which further publishes the result on the
- specified MQTT topic (get membership result topic).
+- The communication module sends the information to the eventbus component which further publishes the result on the
+ specified MQTT topic (get membership result topic).
![Membership Get()](../../images/devicetwin/membership-get.png)
-**dealMembershipUpdated**: dealMembershipUpdated() updates the membership details of the node.
+**dealMembershipUpdated**: dealMembershipUpdated() updates the membership details of the node.
It adds the devices, that were newly added, to the edge group and removes the devices, that were removed,
- from the edge group and updates device details, if they have been altered or updated.
-- The edgehub module receives the membership update message from the cloud and forwards the message
-to devicetwin controller which further forwards it to the membership module.
-- The membership module adds devices that are newly added, removes devices that have been recently
-deleted and also updates the devices that were already existing in the database as well as in the cache.
-- After updating the details of the devices a message is sent to the communication module of the device twin, which sends the message to eventbus module to be published on the given MQTT topic.
-
+ from the edge group and updates device details, if they have been altered or updated.
+- The edgehub module receives the membership update message from the cloud and forwards the message
+to devicetwin controller which further forwards it to the membership module.
+- The membership module adds devices that are newly added, removes devices that have been recently
+deleted and also updates the devices that were already existing in the database as well as in the cache.
+- After updating the details of the devices a message is sent to the communication module of the device twin, which sends the message to eventbus module to be published on the given MQTT topic.
+
![Membership Update](../../images/devicetwin/membership-update.png)
-
-
+
+
**dealMembershipDetail**: dealMembershipDetail() provides the membership details of the edge node, including information
- about the devices associated with the edge node, after removing the membership details of
- recently removed devices.
-- The eventbus module receives the message that arrives on the subscribed topic,the message is then forwarded to the
-devicetwin controller which further forwards it to the membership module.
-- The membership module adds devices that are mentioned in the message, removes
-devices that that are not present in the cache.
+ about the devices associated with the edge node, after removing the membership details of
+ recently removed devices.
+- The eventbus module receives the message that arrives on the subscribed topic; the message is then forwarded to the
+devicetwin controller which further forwards it to the membership module.
+- The membership module adds devices that are mentioned in the message, removes
+devices that are not present in the cache.
- After updating the details of the devices a message is sent to the communication module of the device twin.
![Membership Detail](../../images/devicetwin/membership-detail.png)
@@ -114,7 +114,7 @@ devices that that are not present in the cache.
### Twin Module
-The main responsibility of the twin module is to deal with all the device twin related operations. It can perform
+The main responsibility of the twin module is to deal with all the device twin related operations. It can perform
operations like device twin update, device twin get and device twin sync-to-cloud.
The major functions performed by this module are:-
@@ -123,40 +123,40 @@ The major functions performed by this module are:-
2. Receive the messages sent to twin module
3. For each message the action message is read and the corresponding function is called
4. Receive heartbeat from the heartbeat channel and send a heartbeat to the controller
-
+
The following are the action callbacks which can be performed by the twin module :-
- dealTwinUpdate
- dealTwinGet
- dealTwinSync
-
-**dealTwinUpdate**: dealTwinUpdate() updates the device twin information for a particular device.
-- The devicetwin update message can either be received by edgehub module from the cloud or from
-the MQTT broker through the eventbus component (mapper will publish a message on the device twin update topic) .
-- The message is then sent to the device twin controller from where it is sent to the device twin module.
-- The twin module updates the twin value in the database and sends the update result message to the communication module.
+
+**dealTwinUpdate**: dealTwinUpdate() updates the device twin information for a particular device.
+- The devicetwin update message can either be received by edgehub module from the cloud or from
+the MQTT broker through the eventbus component (the mapper publishes a message on the device twin update topic).
+- The message is then sent to the device twin controller from where it is sent to the device twin module.
+- The twin module updates the twin value in the database and sends the update result message to the communication module.
- The communication module will in turn send the publish message to the MQTT broker through the eventbus.
-
+
![Device Twin Update](../../images/devicetwin/devicetwin-update.png)
-
-
-**dealTwinGet**: dealTwinGet() provides the device twin information for a particular device.
-- The eventbus component receives the message that arrives on the subscribed twin get topic and forwards the message to devicetwin controller, which further sends the message to twin module.
+
+
+**dealTwinGet**: dealTwinGet() provides the device twin information for a particular device.
+- The eventbus component receives the message that arrives on the subscribed twin get topic and forwards the message to devicetwin controller, which further sends the message to twin module.
- The twin module gets the devicetwin related information for the particular device and sends it to the communication module, it also handles errors that arise when the device is not found or if any internal problem occurs.
-- The communication module sends the information to the eventbus component, which publishes the result on the topic specified .
-
+- The communication module sends the information to the eventbus component, which publishes the result on the specified topic.
+
![Device Twin Get](../../images/devicetwin/devicetwin-get.png)
**dealTwinSync**: dealTwinSync() syncs the device twin information to the cloud.
- The eventbus module receives the message on the subscribed twin cloud sync topic.
- - This message is then sent to the devicetwin controller from where it is sent to the twin module.
- - The twin module then syncs the twin information present in the database and sends the synced twin results to the communication module.
- - The communication module further sends the information to edgehub component which will in turn send the updates to the cloud through the websocket connection.
+ - This message is then sent to the devicetwin controller from where it is sent to the twin module.
+ - The twin module then syncs the twin information present in the database and sends the synced twin results to the communication module.
+ - The communication module further sends the information to edgehub component which will in turn send the updates to the cloud through the websocket connection.
- This function also performs operations like publishing the updated twin details document, delta of the device twin as well as the update result (in case there is some error) to a specified topic through the communication module,
which sends the data to edgehub, which will send it to eventbus which publishes on the MQTT broker.
-
- ![Sync to Cloud](../../images/devicetwin/sync-to-cloud.png)
+
+ ![Sync to Cloud](../../images/devicetwin/sync-to-cloud.png)
### Communication Module
@@ -175,42 +175,42 @@ The following are the action callbacks which can be performed by the communicati
- dealSendToCloud
- dealSendToEdge
- dealLifeCycle
- - dealConfirm
-
-
+ - dealConfirm
+
+
**dealSendToCloud**: dealSendToCloud() is used to send data to the cloudHub component.
This function first ensures that the cloud is connected, then sends the message to the edgeHub module (through the beehive framework),
which in turn will forward the message to the cloud (through the websocket connection).
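A rough sketch of the dealSendToCloud flow just described; the connectivity flag, `Message` type and `send` callback are illustrative placeholders, not the real beehive API:

```go
package sketch

import "errors"

// Message stands in for the beehive message type exchanged between modules.
type Message struct {
	Router  string
	Content interface{}
}

// sendToCloud mirrors dealSendToCloud: verify the cloud is reachable,
// then hand the message to edgehub, which owns the connection to the cloud.
func sendToCloud(cloudConnected bool, send func(module string, msg Message) error, msg Message) error {
	if !cloudConnected {
		return errors.New("cloud is not connected, cannot send to cloud")
	}
	return send("edgehub", msg)
}
```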
**dealSendToEdge**: dealSendToEdge() is used to send data to the other modules present at the edge.
This function sends the message received to the edgeHub module using beehive framework.
- The edgeHub module after receiving the message will send it to the required recipient.
-
-**dealLifeCycle**: dealLifeCycle() checks if the cloud is connected and the state of the twin is disconnected, it then changes the status
+ The edgeHub module after receiving the message will send it to the required recipient.
+
+**dealLifeCycle**: dealLifeCycle() checks whether the cloud is connected and the state of the twin is disconnected; if so, it changes the status
to connected and sends the node details to edgehub. If the cloud is disconnected, it sets the state of the twin
- as disconnected.
+ as disconnected.
+
+**dealConfirm**: dealConfirm() is used to confirm the event. It checks whether the type of the message is right and
+ then deletes the id from the confirm map.
-**dealConfirm**: dealConfirm() is used to confirm the event. It checks whether the type of the message is right and
- then deletes the id from the confirm map.
-
### Device Module
-The main responsibility of the device module is to perform the device related operations like dealing with device state updates
+The main responsibility of the device module is to perform the device related operations like dealing with device state updates
and device attribute updates.
-
+
The major functions performed by this module are :-
1. Initialize action callback map (which is a map of action(string) to the callback function that performs the requested action)
2. Receive the messages sent to device module
3. For each message the action message is read and the corresponding function is called
4. Receive heartbeat from the heartbeat channel and send a heartbeat to the controller
-
+
The following are the action callbacks which can be performed by the device module :-
- dealDeviceUpdated
- dealDeviceStateUpdate
-
+
**dealDeviceUpdated**: dealDeviceUpdated() deals with the operations to be performed when a device attribute update is encountered.
It applies the changes to the device attributes, such as addition, update and deletion of attributes
in the database. It also sends the result of the device attribute update to be published to the eventbus component.
@@ -218,22 +218,22 @@ The following are the action callbacks which can be performed by the device modu
- The edgehub component sends the message to the device twin controller which forwards the message to the device module.
- The device module updates the device attribute details in the database, after which it sends the result of the device attribute update to be published
to the eventbus component through the communicate module of devicetwin. The eventbus component further publishes the result on the specified topic.
-
+
![Device Update](../../images/devicetwin/device-update.png)
-
+
**dealDeviceStateUpdate**: dealDeviceStateUpdate() deals with the operations to be performed when a device status update is encountered.
It updates the state of the device as well as the last online time of the device in the database.
- It also sends the update state result, through the communication module, to the cloud through the edgehub module and to the eventbus module which in turn
+ It also sends the update state result, through the communication module, to the cloud through the edgehub module and to the eventbus module which in turn
publishes the result on the specified topic of the MQTT broker.
- The device state update is initiated by publishing a message on the specified topic, which is subscribed to by the eventbus component.
- The eventbus component sends the message to the device twin controller which forwards the message to the device module.
- The device module updates the state of the device as well as the last online time of the device in the database.
-- The device module then sends the result of the device state update to the eventbus component and edgehub component through the communicate module of devicetwin. The eventbus component further publishes the result on the specified topic, while the
+- The device module then sends the result of the device state update to the eventbus component and edgehub component through the communicate module of devicetwin. The eventbus component further publishes the result on the specified topic, while the
edgehub component sends the device status update to the cloud.
![Device State Update](../../images/devicetwin/device-state-update.png)
-
-
+
+
## Tables
DeviceTwin module creates three tables in the database, namely :-
@@ -243,20 +243,20 @@ DeviceTwin module creates three tables in the database, namely :-
- Device Twin Table
-### Device Table
+### Device Table
Device table contains the data regarding the devices added to a particular edge node.
-The following are the columns present in the device table :
+The following are the columns present in the device table :
-|Column Name | Description |
-|---|---|
-| **ID** | This field indicates the id assigned to the device |
-| **Name** | This field indicates the name of the device |
-| **Description** | This field indicates the description of the device |
-| **State** | This field indicates the state of the device |
-| **LastOnline** | This fields indicates when the device was last online |
+|Column Name | Description |
+|---|---|
+| **ID** | This field indicates the id assigned to the device |
+| **Name** | This field indicates the name of the device |
+| **Description** | This field indicates the description of the device |
+| **State** | This field indicates the state of the device |
+| **LastOnline** | This field indicates when the device was last online |
-**Operations Performed :-**
+**Operations Performed :-**
The following are the operations that can be performed on this data :-
@@ -268,23 +268,23 @@ The following are the operations that can be performed on this data :-
- **Update Device Fields**: Updates multiple fields in the device table
-- **Query Device**: Queries a device from the device table
+- **Query Device**: Queries a device from the device table
- **Query Device All**: Displays all the devices present in the device table
- **Update Device Multi**: Updates multiple columns of multiple devices in the device table
-- **Add Device Trans**: Inserts device, device attribute and device twin in a single transaction, if any of these operations fail,
- then it rolls back the other insertions
+- **Add Device Trans**: Inserts device, device attribute and device twin in a single transaction; if any of these operations fails,
+ then it rolls back the other insertions
-- **Delete Device Trans**: Deletes device, device attribute and device twin in a single transaction, if any of these operations fail,
- then it rolls back the other deletions
+- **Delete Device Trans**: Deletes device, device attribute and device twin in a single transaction; if any of these operations fails,
+ then it rolls back the other deletions
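To make the transactional behaviour of Add Device Trans concrete, here is a minimal sketch using Go's database/sql; the table and column names are invented for the example, and this is not the project's actual data-access layer:

```go
package sketch

import "database/sql"

// addDeviceTrans sketches the "Add Device Trans" behaviour: insert the
// device, its attribute and its twin in one transaction and roll back
// everything if any insert fails.
func addDeviceTrans(db *sql.DB, deviceID, name string) (err error) {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer func() {
		if err != nil {
			_ = tx.Rollback() // undo earlier inserts on any failure
		}
	}()
	if _, err = tx.Exec(`INSERT INTO device (id, name) VALUES (?, ?)`, deviceID, name); err != nil {
		return err
	}
	if _, err = tx.Exec(`INSERT INTO device_attr (deviceid) VALUES (?)`, deviceID); err != nil {
		return err
	}
	if _, err = tx.Exec(`INSERT INTO device_twin (deviceid) VALUES (?)`, deviceID); err != nil {
		return err
	}
	return tx.Commit()
}
```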
-### Device Attribute Table
+### Device Attribute Table
Device attribute table contains the data regarding the device attributes associated with a particular device in the edge node.
-The following are the columns present in the device attribute table :
+The following are the columns present in the device attribute table :
| Column Name | Description |
|----------------|--------------------------|
@@ -298,67 +298,67 @@ The following are the columns present in the device attribute table :
| **Metadata** | This field describes the metadata associated with the device attribute |
-**Operations Performed :-**
+**Operations Performed :-**
-The following are the operations that can be performed on this data :
+The following are the operations that can be performed on this data :
- **Save Device Attr**: Inserts a device attribute in the device attribute table
-
+
- **Delete Device Attr By ID**: Deletes a device attribute by its ID from the device attribute table
- **Delete Device Attr**: Deletes a device attribute from the device attribute table by filtering based on device id and device name
-
+
- **Update Device Attr Field**: Updates a single field in the device attribute table
-
+
- **Update Device Attr Fields**: Updates multiple fields in the device attribute table
-
- - **Query Device Attr**: Queries a device attribute from the device attribute table
-
+
+ - **Query Device Attr**: Queries a device attribute from the device attribute table
+
- **Update Device Attr Multi**: Updates multiple columns of multiple device attributes in the device attribute table
-
+
- **Delete Device Attr Trans**: Inserts device attributes, deletes device attributes and updates device attributes in a single transaction.
-
-### Device Twin Table
+
+### Device Twin Table
Device twin table contains the data related to the device twin associated with a particular device in the edge node.
-The following are the columns present in the device twin table :
+The following are the columns present in the device twin table :
-| Column Name | Description |
-|---|---|
-| **ID** | This field indicates the id assigned to the device twin |
-| **DeviceID** | This field indicates the device id of the device associated with this device twin |
-| **Name** | This field indicates the name of the device twin |
-| **Description** | This field indicates the description of the device twin |
-| **Expected** | This field indicates the expected value of the device |
-| **Actual** | This field indicates the actual value of the device |
-| **ExpectedMeta** | This field indicates the metadata associated with the expected value of the device |
-| **ActualMeta** | This field indicates the metadata associated with the actual value of the device |
-| **ExpectedVersion** | This field indicates the version of the expected value of the device |
-| **ActualVersion** | This field indicates the version of the actual value of the device |
-| **Optional** | This fields indicates whether the device twin is optional or not |
-| **AttrType** | This fields indicates the type of attribute that is referred to |
-| **Metadata** | This fields describes the metadata associated with the device twin |
+| Column Name | Description |
+|---|---|
+| **ID** | This field indicates the id assigned to the device twin |
+| **DeviceID** | This field indicates the device id of the device associated with this device twin |
+| **Name** | This field indicates the name of the device twin |
+| **Description** | This field indicates the description of the device twin |
+| **Expected** | This field indicates the expected value of the device |
+| **Actual** | This field indicates the actual value of the device |
+| **ExpectedMeta** | This field indicates the metadata associated with the expected value of the device |
+| **ActualMeta** | This field indicates the metadata associated with the actual value of the device |
+| **ExpectedVersion** | This field indicates the version of the expected value of the device |
+| **ActualVersion** | This field indicates the version of the actual value of the device |
+| **Optional** | This field indicates whether the device twin is optional or not |
+| **AttrType** | This field indicates the type of attribute that is referred to |
+| **Metadata** | This field describes the metadata associated with the device twin |
-**Operations Performed :-**
+**Operations Performed :-**
The following are the operations that can be performed on this data :-
-
+
- **Save Device Twin**: Inserts a device twin in the device twin table
-
+
- **Delete Device Twin By Device ID**: Deletes a device twin by its ID from the device twin table
- **Delete Device Twin**: Deletes a device twin from the device twin table by filtering based on device id and device name
-
+
- **Update Device Twin Field**: Updates a single field in the device twin table
-
+
- **Update Device Twin Fields**: Updates multiple fields in the device twin table
-
- - **Query Device Twin**: Queries a device twin from the device twin table
-
+
+ - **Query Device Twin**: Queries a device twin from the device twin table
+
- **Update Device Twin Multi**: Updates multiple columns of multiple device twins in the device twin table
-
+
- **Delete Device Twin Trans**: Inserts device twins, deletes device twins and updates device twins in a single transaction.
-
+
diff --git a/docs/modules/edge/edged.md b/docs/modules/edge/edged.md
index 029d00d0a..6872d1709 100644
--- a/docs/modules/edge/edged.md
+++ b/docs/modules/edge/edged.md
@@ -8,7 +8,7 @@ Docker container runtime is currently supported for container and image manageme
There are many modules which work in tandem to achieve edged's functionalities.
-![EdgeD OverAll](../../images/edged/edged-overall.png)
+![EdgeD OverAll](../../images/edged/edged-overall.png)
*Fig 1: EdgeD Functionalities*
@@ -23,15 +23,15 @@ Its primary jobs are as follows:
- Keeps separate cache for config map and secrets respectively.
- Regular cleanup of orphaned pods
-![Pod Addition Flow](../../images/edged/pod-addition-flow.png)
+![Pod Addition Flow](../../images/edged/pod-addition-flow.png)
*Fig 2: Pod Addition Flow*
-![Pod Deletion Flow](../../images/edged/pod-deletion-flow.png)
+![Pod Deletion Flow](../../images/edged/pod-deletion-flow.png)
*Fig 3: Pod Deletion Flow*
-![Pod Updation Flow](../../images/edged/pod-update-flow.png)
+![Pod Updation Flow](../../images/edged/pod-update-flow.png)
*Fig 4: Pod Updation Flow*
@@ -39,7 +39,7 @@ Its primary jobs are as follows:
This module helps in monitoring pod status for edged. Every second, using probes for liveness and readiness, it updates the information with the pod status manager for every pod.
-![PLEG Design](../../images/edged/pleg-flow.png)
+![PLEG Design](../../images/edged/pleg-flow.png)
*Fig 5: PLEG at EdgeD*
@@ -48,8 +48,8 @@ This module helps in monitoring pod status for edged. Every second, using probe'
Container Runtime Interface (CRI) – a plugin interface which enables edged to use a wide variety of container runtimes, without the need to recompile and also support multiple runtimes like docker, containerd, cri-o etc
#### Why CRI for edge?
-Currently kubeedge edged supports only docker runtime using the legacy dockertools.
-+ CRI support for multiple container runtime in kubeedge is needed due to below mentioned factors
+Currently kubeedge edged supports only docker runtime using the legacy dockertools.
++ CRI support for multiple container runtimes in kubeedge is needed due to the below mentioned factors
+ Include CRI support as in kubernetes kubelet to support containerd, cri-o etc
+ Continue with docker runtime support using legacy dockertools until CRI support for the same is available i.e. support
@@ -60,7 +60,7 @@ Currently kubeedge edged supports only docker runtime using the legacy dockertoo
+ Customer can run light weight container runtime on resource constrained edge node that cannot run the existing docker runtime
+ Customer has the option to choose from multiple container runtimes on his edge platform
-![CRI Design](../../images/edged/edged-cri.png)
+![CRI Design](../../images/edged/edged-cri.png)
*Fig 6: CRI at EdgeD*
@@ -70,7 +70,7 @@ At edged, Secrets are handled separately. For its operations like addition, dele
Using these interfaces, secrets are updated in cache store.
Below flow diagram explains the message flow.
-![Secret Message Handling](../../images/edged/secret-handling.png)
+![Secret Message Handling](../../images/edged/secret-handling.png)
*Fig 7: Secret Message Handling at EdgeD*
@@ -78,13 +78,13 @@ Also edged uses MetaClient module to fetch secret from Metamanager (if available
Hence the subsequent query for the same secret key will be answered by Metamanager itself, reducing the response delay.
The below flow diagram shows how a secret is fetched from metamanager and the cloud, and how the secret is saved in metamanager.
-![Query Secret](../../images/edged/query-secret-from-edged.png)
+![Query Secret](../../images/edged/query-secret-from-edged.png)
*Fig 8: Query Secret by EdgeD*
## Probe Management
-Probe management creates to probes for readiness and liveness respectively for pods to monitor the containers. Readiness probe helps by monitoring when the pod has reached to running state. Liveness probe helps in monitoring the health of pods, if they are up or down.
+Probe management creates two probes, for readiness and liveness respectively, for pods to monitor the containers. The readiness probe helps by monitoring when the pod has reached the running state. The liveness probe helps in monitoring the health of pods, whether they are up or down.
As explained earlier, PLEG module uses its services.
@@ -93,7 +93,7 @@ At edged, ConfigMap are also handled separately. For its operations like additio
Using these interfaces, configMaps are updated in cache store.
Below flow diagram explains the message flow.
-![ConfigMap Message Handling](../../images/edged/configmap-handling.png)
+![ConfigMap Message Handling](../../images/edged/configmap-handling.png)
*Fig 9: ConfigMap Message Handling at EdgeD*
@@ -101,7 +101,7 @@ Also edged uses MetaClient module to fetch configmap from Metamanager (if availa
Hence the subsequent query for the same configmap key will be answered by Metamanager itself, reducing the response delay.
The below flow diagram shows how a configmap is fetched from metamanager and the cloud, and how the configmap is saved in metamanager.
-![Query Configmaps](../../images/edged/query-configmap-from-edged.png)
+![Query Configmaps](../../images/edged/query-configmap-from-edged.png)
*Fig 10: Query Configmaps by EdgeD*
@@ -119,7 +119,7 @@ The policy for garbage collecting images we apply takes two factors into conside
Status manager is an independent edge routine, which collects pod statuses every 10 seconds and forwards this information to the cloud using the metaclient interface.
-![Status Manager Flow](../../images/edged/pod-status-manger-flow.png)
+![Status Manager Flow](../../images/edged/pod-status-manger-flow.png)
*Fig 11: Status Manager Flow*
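A bare-bones sketch of such a 10-second status loop; the `collect` and `send` hooks are hypothetical stand-ins for the status manager's pod cache and the metaclient call:

```go
package sketch

import "time"

// PodStatus is a stand-in for the pod status payload sent to the cloud.
type PodStatus struct {
	Name  string
	Phase string
}

// runStatusManager mirrors the described loop: every 10 seconds collect
// pod statuses and forward them through a metaclient-style sender.
func runStatusManager(collect func() []PodStatus, send func([]PodStatus) error, stop <-chan struct{}) {
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			if err := send(collect()); err != nil {
				continue // a failed sync is simply retried on the next tick
			}
		case <-stop:
			return
		}
	}
}
```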
diff --git a/docs/modules/edge/edgehub.md b/docs/modules/edge/edgehub.md
index 42af6a110..1c1575423 100644
--- a/docs/modules/edge/edgehub.md
+++ b/docs/modules/edge/edgehub.md
@@ -12,7 +12,7 @@ The main functions performed by edgehub are :-
- Keep Alive
- Publish Client Info
-- Route to Cloud
+- Route to Cloud
- Route to Edge
@@ -25,24 +25,24 @@ A keep-alive message or heartbeat is sent to cloudHub after every heartbeatPerio
- The main responsibility of publish client info is to inform the other groups or modules regarding the status of connection to the cloud.
-- It sends a beehive message to all groups (namely metaGroup, twinGroup and busGroup), informing them whether cloud is connected or disconnected.
+- It sends a beehive message to all groups (namely metaGroup, twinGroup and busGroup), informing them whether cloud is connected or disconnected.
-## Route To Cloud
+## Route To Cloud
The main responsibility of route to cloud is to receive from the other modules (through beehive framework), all the
messages that are to be sent to the cloud, and send them to cloudHub through the websocket connection.
-
+
The major steps involved in this process are as follows :-
-1. Continuously receive messages from beehive Context
-2. Send that message to cloudHub
+1. Continuously receive messages from beehive Context
+2. Send that message to cloudHub
3. If the message received is a sync message then :
-
+
3.1 If response is received on syncChannel then it creates a map[string] chan containing the messageID of the message as key
-
+
3.2 It waits for one heartbeat period to receive a response on the channel created, if it does not receive any response on the channel within the specified time then it times out.
-
+
3.3 The response received on the channel is sent back to the module using the SendResponse() function.
![Route to Cloud](../../images/edgehub/route-to-cloud.png)
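A condensed sketch of the sync-message wait described in steps 3.1 to 3.3; the channel map, response type and timeout handling are simplified stand-ins rather than the actual edgehub code:

```go
package sketch

import (
	"errors"
	"sync"
	"time"
)

// syncChannels maps a message ID to the channel on which its response is
// expected, mirroring the map[string]chan described in step 3.1.
var (
	mu           sync.Mutex
	syncChannels = map[string]chan interface{}{}
)

// deliverResponse is called when a response for msgID arrives from the cloud.
func deliverResponse(msgID string, resp interface{}) {
	mu.Lock()
	ch, ok := syncChannels[msgID]
	mu.Unlock()
	if ok {
		ch <- resp
	}
}

// waitForResponse registers a channel for msgID and waits at most one
// heartbeat period for the reply, timing out otherwise (step 3.2).
func waitForResponse(msgID string, heartbeat time.Duration) (interface{}, error) {
	ch := make(chan interface{}, 1)
	mu.Lock()
	syncChannels[msgID] = ch
	mu.Unlock()
	defer func() {
		mu.Lock()
		delete(syncChannels, msgID)
		mu.Unlock()
	}()

	select {
	case resp := <-ch:
		return resp, nil // handed back to the caller via SendResponse in the real flow
	case <-time.After(heartbeat):
		return nil, errors.New("timed out waiting for sync response")
	}
}
```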
diff --git a/docs/modules/edge/eventbus.md b/docs/modules/edge/eventbus.md
index 9af814a73..fa51bfa15 100644
--- a/docs/modules/edge/eventbus.md
+++ b/docs/modules/edge/eventbus.md
@@ -4,7 +4,7 @@ Eventbus acts as an interface for sending/receiving messages on mqtt topics.
It supports 3 kinds of mode:
- internalMqttMode
-- externalMqttMode
+- externalMqttMode
- bothMqttMode
## Topic
eventbus subscribes to the following topics:
diff --git a/docs/modules/edge/metamanager.md b/docs/modules/edge/metamanager.md
index 7728daa83..bc0965446 100644
--- a/docs/modules/edge/metamanager.md
+++ b/docs/modules/edge/metamanager.md
@@ -28,14 +28,14 @@ which sends it back to the cloud.
## Update Operation
`Update` operations can happen on objects at the cloud/edge.
-The update message flow is similar to an insert operation. Additionally, metamanager checks if the resource being updated has changed locally.
-If there is a delta, only then the update is stored locally and the message is
+The update message flow is similar to an insert operation. Additionally, metamanager checks if the resource being updated has changed locally.
+If there is a delta, only then the update is stored locally and the message is
passed to edged and a response is sent back to the cloud.
![Update Operation](../../images/metamanager/meta-update.png)
## Delete Operation
-`Delete` operations are triggered when objects like pods are deleted from the
+`Delete` operations are triggered when objects like pods are deleted from the
cloud.
![Delete Operation](../../images/metamanager/meta-delete.png)
@@ -43,7 +43,7 @@ cloud.
## Query Operation
`Query` operations let you query for metadata either locally at the edge or for some remote resources like config maps/secrets from the cloud. edged queries this
metadata from metamanager which further handles local/remote query processing and
-returns the response back to edged. A Message resource can be broken into 3 parts
+returns the response back to edged. A Message resource can be broken into 3 parts
(resKey,resType,resId) based on separator ‘/’.
![Query Operation](../../images/metamanager/meta-query.png)
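For example, a resource string in that form can be split with a small helper like the one below (a sketch only; real resource strings may carry extra or optional segments):

```go
package sketch

import (
	"fmt"
	"strings"
)

// parseResource splits a message resource of the form
// "<resKey>/<resType>/<resId>" into its three parts, as described above.
func parseResource(resource string) (resKey, resType, resID string, err error) {
	parts := strings.Split(resource, "/")
	if len(parts) != 3 {
		return "", "", "", fmt.Errorf("unexpected resource format: %q", resource)
	}
	return parts[0], parts[1], parts[2], nil
}
```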
@@ -58,7 +58,7 @@ like remote query to the cloud.
## MetaSync Operation
`MetaSync` operation messages are periodically sent by metamanager to sync the status of the
-pods running on the edge node. The sync interval is configurable in `conf/edge.yaml`
+pods running on the edge node. The sync interval is configurable in `conf/edge.yaml`
( defaults to `60` seconds ).
```yaml
diff --git a/docs/modules/edgesite.md b/docs/modules/edgesite.md
index df3f47ff2..a8eaecf38 100644
--- a/docs/modules/edgesite.md
+++ b/docs/modules/edgesite.md
@@ -21,7 +21,7 @@ There are scenarios user need to run a standalone Kubernetes cluster at edge to
In some IOT scenarios, users need to deploy a fully controlled edge environment and run it offline.
For these use cases, a standalone, fully controlled, lightweight Edge cluster is required.
-By integrating KubeEdge and standard Kubernetes, this EdgeSite enables customers to run an efficient kubernetes cluster for Edge/IOT computing.
+By integrating KubeEdge and standard Kubernetes, EdgeSite enables customers to run an efficient kubernetes cluster for Edge/IOT computing.
## Assumptions
@@ -62,7 +62,7 @@ With the integration, the following can be enabled
+ [Creating cluster with kubeadm](<https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/>)
-+ KubeEdge supports https connection to Kubernetes apiserver.
++ KubeEdge supports https connection to Kubernetes apiserver.
Enter the path to kubeconfig file in controller.yaml
```yaml
@@ -71,7 +71,7 @@ With the integration, the following can be enabled
...
kubeconfig: "path_to_kubeconfig_file" #Enter path to kubeconfig file to enable https connection to k8s apiserver
```
-
+
+ (Optional) KubeEdge also supports insecure http connection to Kubernetes apiserver for testing, debugging cases.
Please follow below steps to enable http port in Kubernetes apiserver.
@@ -117,7 +117,7 @@ With the integration, the following can be enabled
Modify [edgeSite.yaml](https://github.com/kubeedge/kubeedge/blob/master/edgesite/conf/edgeSite.yaml) configuration file, with the IP address of K8S API server
+ Configure K8S (API Server)
-
+
Replace `localhost` at `controller.kube.master` with the IP address
```yaml
@@ -132,7 +132,7 @@ Modify [edgeSite.yaml](https://github.com/kubeedge/kubeedge/blob/master/edgesite
Replace `edge-node` with a unique edge id/name in the below fields :
- `controller.kube.node-id`
- `controller.edged.hostname-override`
-
+
```yaml
controller:
kube:
@@ -180,11 +180,11 @@ Modify [edgeSite.yaml](https://github.com/kubeedge/kubeedge/blob/master/edgesite
### Deploy EdgeSite (Worker) Node to K8S Cluster
-We have provided a sample node.json to add a node in kubernetes. Please make sure edgesite (worker) node is added to k8s api-server.
+We have provided a sample node.json to add a node in kubernetes. Please make sure edgesite (worker) node is added to k8s api-server.
Run below steps:
+ Modify node.json
-
+
Replace `edge-node` in the [node.json](https://github.com/kubeedge/kubeedge/blob/master/build/node.json#L5) file with the id/name of the edgesite node. The ID/Name should be the same as used earlier while updating `edgesite.yaml`
```json
@@ -196,7 +196,7 @@ Run below steps:
```
+ Add node in K8S API server
-
+
In the console execute the below command
```shell
diff --git a/docs/proposals/EdgeSite.md b/docs/proposals/EdgeSite.md
index aa9fca4ba..b364154b8 100644
--- a/docs/proposals/EdgeSite.md
+++ b/docs/proposals/EdgeSite.md
@@ -1,7 +1,7 @@
---
title: EdgeSite Design
status: implementable
-authors:
+authors:
- "@cindyxing"
approvers:
- "@qizha"
@@ -11,10 +11,10 @@ approvers:
# EdgeSite: Standalone Cluster at edge
## Abstract
-In Edge computing, there are scenarios where customers would like to have a whole cluster installed at edge location. As a result,
-admins/users can leverage the local control plane to implement management functionalities and take advantages of all edge computing's benefits.
+In Edge computing, there are scenarios where customers would like to have a whole cluster installed at edge location. As a result,
+admins/users can leverage the local control plane to implement management functionalities and take advantage of all edge computing's benefits.
-This design doc is to enable customers deploy and run lightweight clusters at edge.
+This design doc is to enable customers to deploy and run lightweight clusters at the edge.
## Motivation
There are scenarios where users need to run a standalone Kubernetes cluster at the edge to get full control and improve the offline scheduling capability. There are two such scenarios:
@@ -31,8 +31,8 @@ For these use cases, a standalone, full controlled, light weight Edge cluster is
By integrating KubeEdge and standard Kubernetes, this proposal enables customers to run an efficient kubernetes cluster for Edge/IOT computing. User can also leverage other smaller Kubernetes implementation such as K3S to make the footprint even smaller.
## Assumptions
-Here we assume a cluster is deployed at edge location including the management control plane.
-For the management control plane to manage some scale of edge worker nodes, the hosting master node needs to have sufficient resources.
+Here we assume a cluster is deployed at the edge location, including the management control plane.
+For the management control plane to manage some scale of edge worker nodes, the hosting master node needs to have sufficient resources.
The assumptions are
1. EdgeSite cluster master node is of no less than 2 CPUs and no less than 1GB memory
2. If high availability is required, 2-3 master nodes are needed at different edge locations
@@ -50,10 +50,10 @@ With the integration, the following can be enabled
3. Edge worker node autonomy in case of network disconnection/reconnection
4. All benefits of edge computing including latency, data locality, etc.
-## Protocol
-K8s client library interface will be used. The edgecontroller on each edge node only watches against k8s types for the node itself.
+## Protocol
+K8s client library interface will be used. The edgecontroller on each edge node only watches against k8s types for the node itself.
-The informer programming model will be used between EdgeController and APIServer.
+The informer programming model will be used between EdgeController and APIServer.
For example:
```go
@@ -71,7 +71,7 @@ informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
})
```
-And the data can be written to the client side store.
+And the data can be written to the client side store.
## Work Items
1. Port current EdgeController code to KubeEdge agent side
@@ -81,6 +81,6 @@ And the data can be written to the client side store.
For lightweight etcd, we keep etcdv3 implementation and remove v2; and some other items.
4. Lightweight kubeproxy on edgecore
-5. E2E
+5. E2E
diff --git a/docs/proposals/configuration.md b/docs/proposals/configuration.md
index 1593f1fa0..71b21364d 100644
--- a/docs/proposals/configuration.md
+++ b/docs/proposals/configuration.md
@@ -4,7 +4,7 @@ title: KubeEdge Component Config Proposal
authors:
- "@kadisi"
- "@fisherxu"
-
+
approvers:
- "@kevin-wangzefeng"
- "@sids-b"
@@ -33,7 +33,7 @@ status: implemented
* [Use keadm to install and configure KubeEdge components](#use-keadm-to-install-and-configure-kubeedge-components)
* [Task list tracking](#task-list-tracking)
-# KubeEdge Component Config Proposal
+# KubeEdge Component Config Proposal
## Terminology
@@ -41,9 +41,9 @@ status: implemented
* **KubeEdge modules:** refers to modules e.g. cloudhub, edgecontroller, devicecontroller, devicetwin, edged, edgehub, eventbus, metamanager, servicebus, etc.
-## Proposal
+## Proposal
-Currently, KubeEdge components' configuration files are in the conf directory at the same level and have 3 configuration files, it is difficult to maintain and extend.
+Currently, KubeEdge components keep their configuration files in a conf directory at the same level, and each component has 3 configuration files, which makes them difficult to maintain and extend.
KubeEdge uses the beehive package to analyse configuration files; when the program is running, it will print a lot of logs first. When we add subcommands to the program, such as `--version`, it will still print a lot of configuration information and then output the version information.
@@ -59,7 +59,7 @@ We recommend referring to the kubernetes component config api design to redesign
* Start the KubeEdge component with the --config flag set to the path of the component's config file. The component will then load its config from this file; if the --config flag is not set, the component will read a default configuration file.
-* Configuration file's definition refers to the kubernetes component config api design, which needs to be with a version number for future version management.
+* The configuration file's definition refers to the kubernetes component config api design, and needs to carry a version number for future version management.
* Need to abstract the apis for KubeEdge component configuration file and defined in `pkg/apis/{components}/` dir.
@@ -69,20 +69,20 @@ We recommend referring to the kubernetes component config api design to redesign
* New configuration files should consider backward compatibility in future upgrades
-* Support conversion of 3 old configfiles to one new configfile.
-
+* Support conversion of 3 old configfiles to one new configfile.
+
Take cloudcore as an example: cloudcore currently has 3 configfiles (`controller.yaml`, `logging.yaml`, `modules.yaml`), and we need a way to convert those three old configuration files into one new configuration file.
-## Principle
+## Principle
* **Backward compatibility**
`keadm` provides subcommands for conversion
-* **Forward compatibility**
-
- For configuration file, support addition/deprecattion of some fields, **Modify field not allowed**.
+* **Forward compatibility**
+
+ For configuration files, support addition/deprecation of some fields; **modifying a field is not allowed**.
Configuration need a version field.
@@ -90,7 +90,7 @@ We recommend referring to the kubernetes component config api design to redesign
### KubeEdge component config apis definition
-#### meta config apis
+#### meta config apis
defined in `pkg/apis/meta/v1alpha1/types.go`
@@ -136,7 +136,7 @@ const (
```
-#### cloudcore config apis
+#### cloudcore config apis
defined in `pkg/apis/cloudcore/v1alpha1/types.go`
@@ -834,9 +834,9 @@ default load `/etc/kubeedge/config/edgesite.yaml` configfile
With the `--minconfig` flag, users can easily get a minimal configuration as a reference. It's useful to users that are new to KubeEdge, and they can modify/create their own configs accordingly. This configuration is suitable for beginners.
-* cloudcore
+* cloudcore
-`# cloudcore --defaultconfig`
+`# cloudcore --defaultconfig`
```yaml
@@ -921,7 +921,7 @@ modules:
```
-`# cloudcore --minconfig`
+`# cloudcore --minconfig`
```yaml
@@ -950,7 +950,7 @@ modules:
* edgecore
-`# edgecore --defaultconfig`
+`# edgecore --defaultconfig`
```yaml
@@ -1023,12 +1023,12 @@ modules:
contextSendModule: websocket
enable: true
podStatusSyncInterval: 60
- servicebus:
- enable: false
+ servicebus:
+ enable: false
```
-`# edgecore --minconfig`
+`# edgecore --minconfig`
```yaml
@@ -1076,7 +1076,7 @@ modules:
* edgesite
-`# edgesite --defaultconfig`
+`# edgesite --defaultconfig`
```yaml
@@ -1161,7 +1161,7 @@ modules:
```
-`# edgesite --minconfig`
+`# edgesite --minconfig`
```yaml
@@ -1206,11 +1206,11 @@ We use the second option, because:
* If the component supports the old configuration file, it will add configuration-compatibility logic inside the component. We might as well let keadm do this, such as:
```
-keadm convertconfig --component=<cloudcore,edgecore,edgesite> --srcdir=<old config dir> --desdir=<new config dir>
+keadm convertconfig --component=<cloudcore,edgecore,edgesite> --srcdir=<old config dir> --desdir=<new config dir>
```
-`srcdir` flag set the dir of the old 2 configfiles.
+The `srcdir` flag sets the dir of the 2 old configfiles.
The `desdir` flag sets the dir of the new configfile. If `despath` is not set, keadm only prints the new config, and the user can create the config file from that printed info.
keadm first loads the two old configfiles and creates the new config for each component.
@@ -1218,7 +1218,7 @@ keadm first load the old two configfiles and create the new config for each comp
We can gradually abandon this command after the release of several stable versions.
-### new config file need version number
+### new config file needs a version number
Just like kubernetes component config, KubeEdge component config needs `apiVersion` to define the config version schema.
diff --git a/docs/proposals/cri.md b/docs/proposals/cri.md
index fecd0bd9d..d11d86381 100644
--- a/docs/proposals/cri.md
+++ b/docs/proposals/cri.md
@@ -20,17 +20,17 @@ status: implementable
* [Non\-goals](#non-goals)
* [Proposal](#proposal)
* [Use Cases](#use-cases)
- * [High Level Design](#high-level-design)
+ * [High Level Design](#high-level-design)
* [Edged with CRI support](#edged-with-cri-support)
- * [Low Level Design](#low-level-design)
+ * [Low Level Design](#low-level-design)
* [Configuration parameters](#configuration-parameters)
* [Data structure modifications](#data-structure-modifications)
* [Edged object creation modifications](#edged-object-creation-modifications)
* [Runtime dependent module modifications](#runtime-dependent-module-modifications)
* [Runtime dependent functional modifications](#runtime-dependent-functional-modifications)
- * [Open Questions](#open-questions)
-
-
+ * [Open Questions](#open-questions)
+
+
## Motivation
This proposal addresses the Container Runtime Interface support in edged to enable the following
1. Support light weight container runtimes on resource constrained edge node which are unable to run the existing docker runtime
@@ -48,8 +48,8 @@ CRI support in edged must:
## Proposal
Currently Kubernetes kubelet CRI supports container runtimes like containerd, cri-o etc and support for docker runtime is
-provided using dockershim as well. However going forward even docker runtime will be supported through only CRI. However
-currently kubeedge edged supports only docker runtime using the legacy dockertools. Hence we propose to support multiple
+provided using dockershim as well. However, going forward even the docker runtime will be supported only through CRI. Meanwhile,
+kubeedge edged currently supports only the docker runtime using the legacy dockertools. Hence we propose to support multiple
container runtimes in kubeedge edged as follows
1. Include CRI support as in kubernetes kubelet to support containerd, cri-o etc
2. Continue with docker runtime support using legacy dockertools until CRI support for the same is available i.e. support
@@ -114,7 +114,7 @@ type edged struct {
### Edged object creation modifications
The existing newEdged() function needs to be modified to include creating the CRI runtime object based on the runtime type, including
-creations of objects for runtime and image services. However the existing edged does not provide the support for all the
+creation of objects for runtime and image services. However, the existing edged does not provide support for all the
parameters required to create the CRI runtime object and default parameters need to be considered for the same like Image GC manager, Container GC manager, Volume manager and container lifecycle manager (clcm)
```go
@@ -123,18 +123,18 @@ parameters required to create the CRI runtime object and default parameters need
func newEdged() (*edged, error) {
conf := getConfig()
......
-
+
switch based on runtimeType {
case DockerContainerRuntime:
Create runtime based on docker tools
Set containerRuntimeName to DockerContainerRuntime
Initialize Container GC, Image GC and Volume Plugin Manager accordingly
-
+
case RemoteContainerRuntime:
Set remoteImageEndpoint same as remoteRuntimeEndpoint if not explicitly specified
Initialize the following required for initializing remote runtime
containerRefManager
- httpClient
+ httpClient
runtimeService
imageService
clcm
@@ -189,13 +189,13 @@ func (e *edged) Start(c *context.Context) {
case DockerContainerRuntime:
Initialize volume manager based on dockertools
Initialize PLEG based on dockertools
-
+
case RemoteContainerRuntime:
Initialize volume manager based on remote runtime
Initialize PLEG based on remote runtime
}
....
-
+
}
```
@@ -209,7 +209,7 @@ func (e *edged) initializeModules() error {
switch based on runtime type {
case DockerContainerRuntime:
Start with docker runtime
-
+
case RemoteContainerRuntime:
Start with remote runtime
....
@@ -220,7 +220,7 @@ func (e *edged) consumePodAddition(namespacedName *types.NamespacedName) error {
case DockerContainerRuntime:
Ensure image exists for docker runtime
Start pod with docker runtime
-
+
case RemoteContainerRuntime:
Get current status from pod cache
Sync pod with remote runtime
@@ -232,11 +232,11 @@ func (e *edged) consumePodDeletion(namespacedName *types.NamespacedName) error {
switch based on runtime type {
case DockerContainerRuntime:
TerminatePod with docker runtime
-
+
case RemoteContainerRuntime:
KillPod with remote runtime
}
-
+
....
}
@@ -252,7 +252,7 @@ func (e *edged) HandlePodCleanups() error {
switch switch based on runtime type {
case DockerContainerRuntime:
GetPods for docker runtime
-
+
case RemoteContainerRuntime:
GetPods for remote runtime
}
@@ -277,7 +277,7 @@ func (gl *GenericLifecycle) updatePodStatus(pod *v1.Pod) error {
Get pod status based on remote/docker runtime
Convert to API pod status for remote runtime
Set pod status phase for remote runtime
-
+
....
}
```
diff --git a/docs/proposals/device-crd.md b/docs/proposals/device-crd.md
index 1e020baa1..ae7ffd112 100644
--- a/docs/proposals/device-crd.md
+++ b/docs/proposals/device-crd.md
@@ -21,7 +21,7 @@ status: implementable
* [Non\-goals](#non-goals)
* [Proposal](#proposal)
* [Use Cases](#use-cases)
- * [Design Details](#design-details)
+ * [Design Details](#design-details)
* [CRD API Group and Version](#crd-api-group-and-version)
* [Device model CRD](#device-model-crd)
* [Device model type definition](#device-model-type-definition)
diff --git a/docs/proposals/edgemesh-design.md b/docs/proposals/edgemesh-design.md
index 0875b39a8..ebebb2000 100644
--- a/docs/proposals/edgemesh-design.md
+++ b/docs/proposals/edgemesh-design.md
@@ -89,7 +89,7 @@ Since Router fetches rules from DB, in later versions it can be started as a dif
### Router Low-level Design
### Providers
-Providers are plugins to reach a service running on edge or on cloud. For example (ServiceBus, EventBus running in edgecore)
+Providers are plugins to reach a service running at the edge or in the cloud, for example ServiceBus or EventBus running in edgecore.
Providers can be classified into two types
1) Source
2) Target
@@ -135,7 +135,7 @@ type Source interface {
RegisterListener(rule model.Rule, res model.Map, handler func(interface{})) error
// UnregisterListener is used to unregister a listener when rule is deleted
UnregisterListener(rule model.Rule, res map[string]interface{})
- // Callback is function for sending response/ACK if required for a request
+ // Callback is the function for sending a response/ACK if required for a request
Callback(map[string]interface{})
}
```
@@ -157,7 +157,7 @@ Each provider that wants to register as a Target should implement following inte
type Target interface {
// Name returns name of the provider
Name() string
- // Forward is used to forward the request to target
+ // Forward is used to forward the request to target
Forward(map[string]interface{}, interface{}, func(map[string]interface{})) error
}
```
diff --git a/docs/proposals/keadm-scope.md b/docs/proposals/keadm-scope.md
index 09646e675..c036f7642 100644
--- a/docs/proposals/keadm-scope.md
+++ b/docs/proposals/keadm-scope.md
@@ -63,7 +63,7 @@ For edge, commands shall be:
│ Please give us feedback at: │
│ https://github.com/kubeedge/kubeedge/issues │
└──────────────────────────────────────────────────────────┘
-
+
Create a two-machine cluster with one cloud node
(which controls the edge node cluster), and one edge node
(where native containerized application, in the form of
@@ -148,7 +148,7 @@ keadm reset --k8sserverip 10.20.30.40:8080
Flags:
-h, --help help for reset
-k, --k8sserverip string IP:Port address of cloud components host/VM
-
+
```
### keadm join --help
@@ -159,8 +159,8 @@ Flags:
It checks if the pre-requisites are installed already,
If not installed, this command will help in download,
install and execute on the host.
-It will also connect with cloud component to receieve
-further instructions and forward telemetry data from
+It will also connect with cloud component to receive
+further instructions and forward telemetry data from
devices to cloud
Usage:
@@ -198,7 +198,7 @@ Flags:
`keadm init`
- What is it?
* This command will be responsible to bring up KubeEdge cloud components like edge-controller and K8S (using kubeadm)
-
+
- What shall be its scope ?
1. Check version of OS and install subsequently the required pre-requisites using supported steps. Currently we will support **ONLY** (Ubuntu & CentOS)
2. Check and install all the pre-requisites before executing edge-controller, which are
@@ -220,7 +220,7 @@ Flags:
7. start edge-controller
`keadm reset`
- - What is it?
+ - What is it?
* This command will be responsible to bring down KubeEdge cloud components edge-controller and call `kubeadm reset` (to stop K8S)
- What shall be its scope ?
@@ -231,7 +231,7 @@ Flags:
### Worker Node (at the Edge) commands
`keadm join`
- - What is it?
+ - What is it?
* This command will be responsible to install pre-requisites and make modifications needed for KubeEdge edge component (edgecore) and start it
- What shall be its scope ?
@@ -246,7 +246,7 @@ Flags:
6. Create `$GOPATH/src/github.com/kubeedge/kubeedge/edge/conf/edge.yaml`
* Use `--cloudcoreip` flag to update the `websocket.url` field.
* Use `--edgenodeid` flags value to update `controller.node-id`,`edged.hostname-override` field.
- 7. Register or add node to K8S cluster, Using Flag `-k` or `--k8sserverip` value to connect with the api-server.
+ 7. Register or add the node to the K8S cluster, using the `-k` or `--k8sserverip` flag value to connect with the api-server.
* Create `node.json` file and update it with `-i` or `--edgenodeid` flags value in `metadata.name` field.
* Apply it using `curl` command to api-server
diff --git a/docs/proposals/mapper-design.md b/docs/proposals/mapper-design.md
index bd797d763..501a5f1c5 100644
--- a/docs/proposals/mapper-design.md
+++ b/docs/proposals/mapper-design.md
@@ -38,7 +38,7 @@ Mapper can be specific to a protocol where standards are defined i.e Bluetooth,
All devices can be connected and controlled by drivers provided by their vendor.
But the messages from the device need to be translated into a format understood by KubeEdge.
Also there should be a way to control the devices from the platform. Mapper is the application that interfaces between KubeEdge and devices.
-There should be a standard design for mappers supported by KubeEdge for keeping them generic and easy to use.
+There should be a standard design for mappers supported by KubeEdge for keeping them generic and easy to use.
### Goals
* A generic way to support multiple devices of different protocols by having a standard design for mappers provided by KubeEdge.
@@ -47,7 +47,7 @@ There should be a standard design for mappers supported by KubeEdge for keeping
### Non-goals
* Impose restriction on users to follow this design while writing applications for their device.
-* Have a single application that supports multiple devices of different protocols.
+* Have a single application that supports multiple devices of different protocols.
### User cases
1) Manage expected/actual state of a device.
@@ -94,13 +94,13 @@ type Schedule struct{
// can be made corresponding to name to stop the schedule.
Name string
// Frequency is the time in milliseconds after which these actions are to be performed
- Frequency int
+ Frequency int
// Actions is list of Actions to be performed in this schedule
Actions []Action
}
```
-**3) Watcher**: Watcher has 3 responsibilities:
+**3) Watcher**: Watcher has 3 responsibilities:
a) To scan devices(wireless)/wait for device to turn on(wired) and connect to the correct device once it is Online/In-Range. It can use MAC address or any unique address provided by devices. In case of wired devices, GPIO can be an option.
@@ -110,7 +110,7 @@ c) To report the actual state of twin attributes.
**4) Data-Converter**: Data received from the devices can be in complex formats. eg: HexDecimal with bytes shuffled. This data cannot be directly understood by KubeEdge.
The responsibility of data-converter is to convert the readings into a format understood by KubeEdge.
-Many protocols have a standard defined for the reading returned by the device. Hence a common/configurable logic can be used.
+Many protocols have a standard defined for the reading returned by the device. Hence a common/configurable logic can be used.
**5) Health-Checker**: Health-Checker can be used to periodically report the state of the device to KubeEdge.
This can be an optional component as not all devices support health-checking. In the future it can be extended to report battery state and malfunctioning once kubeedge supports these attributes.
diff --git a/docs/proposals/quic-design.md b/docs/proposals/quic-design.md
index a8f70b27b..c461dbe35 100644
--- a/docs/proposals/quic-design.md
+++ b/docs/proposals/quic-design.md
@@ -33,7 +33,7 @@ In edge scenarios, network connectivity could be unstable. With TCP + TLS, it be
## Configuration of kubeedge with websocket/quic
### Start the websocket server only
-1. User edit controller.yaml
+1. User edit controller.yaml
```yaml
cloudhub:
protocol_websocket: true # enable websocket protocol
@@ -116,7 +116,7 @@ In edge scenarios, network connectivity could be unstable. With TCP + TLS, it be
```
2. Run the edgecore, and it starts to connect to cloudhub through the websocket protocol.
-### edgehub connect to cloudhub through quic
+### edgehub connect to cloudhub through quic
1. User edit edge.yaml
```yaml
quic:
@@ -137,5 +137,5 @@ In edge scenarios, network connectivity could be unstable. With TCP + TLS, it be
project-id: e632aba927ea4ac2b575ec1603d56f10
node-id: edge-node
```
-
+
2. Run the edgecore, and it starts to connect to cloudhub through the quic protocol.
diff --git a/docs/proposals/reliable-message-delivery.md b/docs/proposals/reliable-message-delivery.md
index 43840cc4f..7f710e84e 100644
--- a/docs/proposals/reliable-message-delivery.md
+++ b/docs/proposals/reliable-message-delivery.md
@@ -17,11 +17,11 @@ status: Implememted
## Motivation
-At present, the message delivery mechanism with ACK is not complete. Unstable networks
+At present, the message delivery mechanism with ACK is not complete. Unstable networks
between cloud and edge can result in frequent disconnection of edge nodes.
-If cloudcore or edgecore is restarted or offline for a while, this can result in
-loss of messages sent to edge nodes which can’t be reached temporarily. Without new events successfully
-delivered to the edge, this will cause inconsistency between cloud and edge.
+If cloudcore or edgecore is restarted or offline for a while, this can result in
+loss of messages sent to edge nodes which can’t be reached temporarily. Without new events successfully
+delivered to the edge, this will cause inconsistency between cloud and edge.
This proposal addresses this problem and thus improves reliable message delivery.
### Goals
@@ -35,16 +35,16 @@ This proposal addresses this problem thus improve the reliable message delivery.
## Proposal
-Currently all the messages from the controllers go via the channel queue (which uses beehive context for messaging)
-to the cloudhub. The cloudhub then uses the configured protocol server (websocket/quic) to send the data to edge nodes.
-The proposal is to introduce node-level sending message queues in cloudhub, and use the ACK message
+Currently all the messages from the controllers go via the channel queue (which uses beehive context for messaging)
+to the cloudhub. The cloudhub then uses the configured protocol server (websocket/quic) to send the data to edge nodes.
+The proposal is to introduce node-level sending message queues in cloudhub, and use the ACK message
returned from edge nodes to ensure that messages are delivered reliably.
### Use Cases
-- If cloudcore is restarted or offline for a while, whenever the cloudcore is back online,
+- If cloudcore is restarted or offline for a while, whenever the cloudcore is back online,
send the latest event to the edge node (if there is any update to be sent).
-- If edgenode is restarted or offline for a while, whenever the node is back online,
+- If edgenode is restarted or offline for a while, whenever the node is back online,
cloudcore will send the latest event to make it up to date.
## Design Details
@@ -57,38 +57,38 @@ There are three types of message delivery mechanisms:
- Exactly-Once
- At-Least-Once
-The existing implementation (without this proposal) in KubeEdge is
+The existing implementation (without this proposal) in KubeEdge is
the first approach “At-Most-Once”, which is unreliable.
-The second approach “Exactly-Once” is very expensive and exhibits the worst performance,
-although it provides guaranteed delivery with no message loss or duplication.
-Since KubeEdge follows Kubernetes’ eventual consistency design principles,
+The second approach “Exactly-Once” is very expensive and exhibits the worst performance,
+although it provides guaranteed delivery with no message loss or duplication.
+Since KubeEdge follows Kubernetes’ eventual consistency design principles,
it is not a problem for the edge to receive the same message repeatedly, as long as the message is the latest one.
In this proposal, “At-Least-Once” is the proposed mechanism.
### At-Least-Once Delivery
-Shown below is a design using MessageQueue and ACKs to ensure that
+Shown below is a design using MessageQueue and ACKs to ensure that
the messages are delivered from the cloud to the edge.
<img src="../images/reliable-message-delivery/reliablemessage-workflow.PNG">
- We use a K8s CRD to store the latest resourceVersion of the resource that has been sent
- successfully to edge. When cloudcore restarts or starts normally,
+ successfully to edge. When cloudcore restarts or starts normally,
it will check the resourceVersion to avoid sending old messages.
-
-- EdgeController and devicecontroller send the messages to the Cloudhub, and MessageDispatcher will send messages
+
+- EdgeController and devicecontroller send the messages to the Cloudhub, and MessageDispatcher will send messages
to the corresponding NodeMessageQueue according to the node name in the message.
- CloudHub will sequentially send data from the NodeMessageQueue to the corresponding edge node,
and will also store the message ID in an ACK channel. When the ACK message from the edge node is received,
the ACK channel will trigger saving the message resourceVersion to K8s as a CRD, and send the next message.
-
-- When the edgecore receives the message, it will first save the message to the local datastore and
+
+- When the edgecore receives the message, it will first save the message to the local datastore and
then return an ACK message to the cloud.
-- If cloudhub does not receive an ACK message within the interval, it will resend the message up to 5 times.
+- If cloudhub does not receive an ACK message within the interval, it will resend the message up to 5 times.
If all 5 retries fail, cloudhub will discard the event. SyncController will handle these failed events (see the sketch after this list).
- Even if the edge node receives the message, the returned ACK message may be lost during transmission.
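To summarize the flow above in code, here is a simplified Go sketch of one node's send loop: send a message, wait for its ACK on a channel with a timeout, and retry up to 5 times before discarding (leaving the failed event to SyncController). The names, timeout and transport hooks are illustrative assumptions, not CloudHub's real implementation.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

type message struct {
	ID   string
	Body string
}

// sendWithAck sends one message and waits for its ACK, retrying up to 5 times.
// send and acks stand in for the protocol server and the per-message ACK channel.
func sendWithAck(msg message, send func(message) error, acks <-chan string, interval time.Duration) error {
	for attempt := 1; attempt <= 5; attempt++ {
		if err := send(msg); err != nil {
			continue // transport error: count as a failed attempt
		}
		select {
		case id := <-acks:
			if id == msg.ID {
				// Here the real design would persist the resourceVersion to the CRD
				// and move on to the next message in the node queue.
				return nil
			}
		case <-time.After(interval):
			fmt.Printf("no ACK for %s, retry %d/5\n", msg.ID, attempt)
		}
	}
	return errors.New("discarding event after 5 failed retries")
}

func main() {
	acks := make(chan string, 1)
	send := func(m message) error {
		go func() { acks <- m.ID }() // pretend the edge answers immediately
		return nil
	}
	if err := sendWithAck(message{ID: "42", Body: "pod update"}, send, acks, time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("message 42 acknowledged")
	}
}
```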
@@ -96,7 +96,7 @@ If all 5 retries fail, cloudhub will discard the event. SyncController will hand
### SyncController
-SyncController will periodically compare the saved objects' resourceVersion with the objects in K8s,
+SyncController will periodically compare the saved objects' resourceVersion with the objects in K8s,
and then trigger events such as retry and deletion.
When cloudhub adds events to the nodeMessageQueue, they will be compared with the corresponding object in the nodeMessageQueue.
@@ -106,12 +106,12 @@ If the object in nodeMessageQueue is newer, it will directly discard these event
### Message Queue
-When each edge node successfully connects to the cloud, a message queue will be created,
+When each edge node successfully connects to the cloud, a message queue will be created,
which will cache all the messages sent to the edge node.
We use the [workQueue](https://github.com/kubernetes/client-go/blob/master/util/workqueue/rate_limiting_queue.go) and
- [cacheStore](https://github.com/kubernetes/client-go/blob/master/tools/cache/store.go) from [kubernetes/client-go](https://github.com/kubernetes/client-go)
-to implement the message queue and object storage. With Kubernetes workQueue,
+ [cacheStore](https://github.com/kubernetes/client-go/blob/master/tools/cache/store.go) from [kubernetes/client-go](https://github.com/kubernetes/client-go)
+to implement the message queue and object storage. With Kubernetes workQueue,
duplicate events will be merged to improve the transmission efficiency.
- Add message to the queue:
@@ -149,7 +149,7 @@ AckMessage.Operation = "response"
We use K8s CRD to save the resourceVersion of objects that have been successfully persisted to the edge.
We designed two types of CRD to save the resourceVersion. ClusterObjectSync is used to save the cluster
-scoped object and ObjectSync is used to save the namespace scoped object.
+scoped object and ObjectSync is used to save the namespace scoped object.
Their names consist of the related node name and object UUID.
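As a small illustration of that naming scheme, a Go helper that derives a sync-object name from the node name and object UID follows; the exact separator used by KubeEdge is not shown in this excerpt, so the `"."` here is an assumption.

```go
package main

import "fmt"

// objectSyncName builds the per-node sync object name described above.
// The separator is assumed; the real scheme may differ.
func objectSyncName(nodeName, objectUID string) string {
	return fmt.Sprintf("%s.%s", nodeName, objectUID)
}

func main() {
	fmt.Println(objectSyncName("edge-node-1", "9a1b2c3d-0000-1111-2222-333344445555"))
}
```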
#### The ClusterObjectSync
@@ -197,8 +197,8 @@ type ClusterObjectSync struct {
// ObjectSyncSpec stores the details of objects that sent to the edge.
type ObjectSyncSpec struct {
- // Required: ObjectGroupVerion is the group and version of the object
- // that was successfully sent to the edge node.
+ // Required: ObjectGroupVerion is the group and version of the object
+ // that was successfully sent to the edge node.
ObjectGroupVerion string `json:"objectGroupVerion,omitempty"`
// Required: ObjectKind is the type of the object
// that was successfully sent to the edge node.
@@ -222,17 +222,17 @@ type ObjectSyncStatus struct {
- When cloudcore restarts or starts normally, it will check the resourceVersion to avoid sending old messages.
-- During cloudcore restart, if some objects are deleted, the delete event may be lost at this time.
-The SyncController will handle this situation. The object GC mechanism is needed here to ensure the deletion:
+- During cloudcore restart, if some objects are deleted, the delete event may be lost at this time.
+The SyncController will handle this situation. The object GC mechanism is needed here to ensure the deletion:
compare whether the objects stored in the CRDs still exist in K8s. If not, SyncController will generate & send a delete event
to the edge and delete the object in the CRD when the ACK is received (see the sketch below).
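Below is a hedged Go sketch of that GC comparison: walk the stored sync records, check whether each object still exists in K8s, and emit a delete event when it does not. The record type, the existence check and the event emission are placeholders for illustration, not the SyncController's actual code.

```go
package main

import "fmt"

// syncRecord is a stand-in for a persisted ObjectSync/ClusterObjectSync entry.
type syncRecord struct {
	NodeName        string
	ObjectKey       string // e.g. namespace/name
	ResourceVersion string
}

// gcDeletedObjects emits a delete event for every stored object that no longer exists in K8s.
// existsInK8s and sendDeleteEvent abstract the API-server lookup and the CloudHub send path.
func gcDeletedObjects(records []syncRecord, existsInK8s func(string) bool, sendDeleteEvent func(syncRecord)) {
	for _, r := range records {
		if !existsInK8s(r.ObjectKey) {
			// Object vanished while cloudcore was down: propagate the deletion to the edge.
			sendDeleteEvent(r)
			// The real design would also delete the CRD entry once the ACK arrives.
		}
	}
}

func main() {
	records := []syncRecord{
		{NodeName: "edge-node-1", ObjectKey: "default/nginx", ResourceVersion: "120"},
		{NodeName: "edge-node-1", ObjectKey: "default/redis", ResourceVersion: "98"},
	}
	live := map[string]bool{"default/nginx": true} // redis was deleted during the restart
	gcDeletedObjects(records,
		func(k string) bool { return live[k] },
		func(r syncRecord) { fmt.Println("send delete event for", r.ObjectKey, "to", r.NodeName) })
}
```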
### EdgeCore restart
-- When edgecore restarts or is offline for a while, the node message queue will cache all the messages;
+- When edgecore restarts or is offline for a while, the node message queue will cache all the messages;
whenever the node is back online, the messages will be sent.
-- When the edge node is offline, cloudhub will stop sending messages and not retry until
+- When the edge node is offline, cloudhub will stop sending messages and not retry until
the edge node is back online.
### EdgeNode deleted
@@ -241,7 +241,7 @@ the edge node is back online.
## Performance
-We need to run performance tests after introducing the reliability feature and publish the difference
+We need to run performance tests after introducing the reliability feature and publish the difference
in the results. Reliability is associated with a cost which a user needs to bear.
The following are the optimizations already considered.
@@ -251,8 +251,8 @@ The following are the optimizations already considered.
As we propose to use the Kubernetes workQueue to implement the NodeMessageQueue: only the message key will be queued.
The message data is fetched only when it’s ready to be sent.
-When a message is already queued (with its index), a follow-up message for the same k8s object (e.g. a pod update)
-will only refresh the message body in the cache. Thus, when cloudcore proceeds with the sending, the latest message data is
+When a message is already queued (with its index), a follow-up message for the same k8s object (e.g. a pod update)
+will only refresh the message body in the cache. Thus, when cloudcore proceeds with the sending, the latest message data is
sent (no duplicated sending operations on the same message).
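Here is a minimal sketch of that key-only queueing pattern with client-go's workqueue and cache store (requires a k8s.io/client-go dependency), under the assumption that the message key is the object key: only keys are queued, so duplicate updates collapse, and the full body is fetched from the store when it is about to be sent.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/util/workqueue"
)

// edgeMessage is a stand-in for a beehive message destined for one node.
type edgeMessage struct {
	Key  string // e.g. resource key of the k8s object
	Body string
}

func main() {
	// The store holds the latest body per key; the queue holds only keys, so duplicates collapse.
	store := cache.NewStore(func(obj interface{}) (string, error) {
		return obj.(edgeMessage).Key, nil
	})
	queue := workqueue.New()

	enqueue := func(m edgeMessage) {
		_ = store.Add(m) // Add overwrites: a newer update refreshes the cached body
		queue.Add(m.Key)
	}

	// Two updates for the same pod: only one key ends up queued, with the latest body cached.
	enqueue(edgeMessage{Key: "default/nginx", Body: "pod spec v1"})
	enqueue(edgeMessage{Key: "default/nginx", Body: "pod spec v2"})
	queue.ShutDown() // Get drains the remaining items, then reports shutdown

	for {
		key, shutdown := queue.Get()
		if shutdown {
			return
		}
		if obj, exists, _ := store.GetByKey(key.(string)); exists {
			fmt.Println("sending latest body for", key, ":", obj.(edgeMessage).Body)
		}
		queue.Done(key)
	}
}
```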
### Lazy creation of NodeMessageQueues
@@ -261,7 +261,7 @@ The NodeMessageQueue will only be created when an edge node is first connected t
### Stop sending and retries when node disconnected
-When an edge node is offline, cloudcore will stop meaningless sending and retries,
+When an edge node is offline, cloudcore will stop meaningless sending and retries,
cache the messages and wait to resume when the node is back.
In the long term, we may release NodeMessageQueues that have been holding for a period
diff --git a/docs/setup/cross-compilation.md b/docs/setup/cross-compilation.md
index 9e11743d4..126fafe46 100644
--- a/docs/setup/cross-compilation.md
+++ b/docs/setup/cross-compilation.md
@@ -2,7 +2,7 @@
In most cases, when you try to compile the KubeEdge edgecore on a Raspberry Pi or any other device, you may run out of memory. In that case, it is advisable to cross-compile the edgecore binary and transfer it to your edge device.
-## For ARM Architecture from x86 Architecture
+## For ARM Architecture from x86 Architecture
Clone KubeEdge
@@ -14,7 +14,7 @@ cd $GOPATH/src/github.com/kubeedge/kubeedge/edge
sudo apt-get install gcc-arm-linux-gnueabi
export GOARCH=arm
export GOOS="linux"
-export GOARM=6 # Please specify the appropriate ARM version of your device
+export GOARM=6 # Please specify the appropriate ARM version of your device
export CGO_ENABLED=1
export CC=arm-linux-gnueabi-gcc
make edgecore
@@ -41,11 +41,11 @@ Thread model: posix
gcc version 6.3.0 20170516 (Raspbian 6.3.0-18+rpi1+deb9u1)
```
-If you see that Target has been defined as
+If you see that Target has been defined as
```
Target: arm-linux-gnueabihf
```
-in that case, export CC as
+in that case, export CC as
```
arm-linux-gnueabihf-gcc rather than arm-linux-gnueabi-gcc
```
diff --git a/docs/setup/deploy-edge-node.md b/docs/setup/deploy-edge-node.md
index 6f0f2cf06..ac85aff7d 100644
--- a/docs/setup/deploy-edge-node.md
+++ b/docs/setup/deploy-edge-node.md
@@ -23,7 +23,7 @@
}
```
-**Note:**
+**Note:**
1. The `metadata.name` must match edgecore's config `modules.edged.hostnameOverride`.
2. Make sure role is set to edge for the node. For this a key of the form `"node-role.kubernetes.io/edge"` must be present in `metadata.labels`.
diff --git a/docs/setup/kubeedge_install_keadm.md b/docs/setup/kubeedge_install_keadm.md
index cc6b8da17..024f314e3 100644
--- a/docs/setup/kubeedge_install_keadm.md
+++ b/docs/setup/kubeedge_install_keadm.md
@@ -1,6 +1,6 @@
# Setup from KubeEdge Installer
-Keadm is used to install the cloud and edge components of KubeEdge. It is not responsible for installing K8s and runtime,
+Keadm is used to install the cloud and edge components of KubeEdge. It is not responsible for installing K8s and runtime,
so users must first install a K8s master on the cloud side and a container runtime on the edge side, or use an existing cluster.
Please refer [kubernetes-compatibility](https://github.com/kubeedge/kubeedge#kubernetes-compatibility) to get **Kubernetes compatibility** and determine what version of Kubernetes would be installed.
@@ -27,7 +27,7 @@ There are currently two ways to get keadm
- Building from source
1. Download the source code.
-
+
```shell
git clone https://github.com/kubeedge/kubeedge.git $GOPATH/src/github.com/kubeedge/kubeedge
cd $GOPATH/src/github.com/kubeedge/kubeedge
@@ -41,42 +41,42 @@ There are currently two ways to get keadm
```
2. If you used `go get`, the `keadm` binary is available in `$GOPATH/bin/`
-
+
If you compiled from source, the `keadm` binary is in `$GOPATH/src/github.com/kubeedge/kubeedge/_output/local/bin/`
## Setup Cloud Side (KubeEdge Master Node)
By default, ports '10000' and '10002' in your cloudcore need to be accessible for your edge nodes.
-**Note**: '10002' is only needed since the 1.3 release.
+**Note**: '10002' is only needed since the 1.3 release.
`keadm init` will install cloudcore, generate the certs and install the CRDs. It also provides a flag by which a specific version can be set.
1. Execute `keadm init`: keadm needs super user rights (or root rights) to run successfully.
Command flags
-
+
The optional flags with this command are mentioned below
```shell
"keadm init" command install KubeEdge's master node (on the cloud) component.
It checks if the Kubernetes Master are installed already,
If not installed, please install the Kubernetes first.
-
+
Usage:
keadm init [flags]
-
+
Examples:
-
+
keadm init
-
+
- This command will download and install the default version of KubeEdge cloud component
-
+
keadm init --kubeedge-version=1.2.0 --kube-config=/root/.kube/config
-
+
- kube-config is the absolute path of kubeconfig which used to secure connectivity between cloudcore and kube-apiserver
-
-
+
+
Flags:
--advertise-address string Use this key to set SANs in certificate of cloudcore. eg: 10.10.102.78,10.10.102.79
-h, --help help for init
@@ -85,7 +85,7 @@ By default ports '10000' and '10002' in your cloudcore needs to be accessible fo
--master string Use this key to set K8s master address, eg: http://127.0.0.1:8080
```
-**IMPORTANT NOTE:**
+**IMPORTANT NOTE:**
1. At least one of kubeconfig or master must be configured correctly, so that it can be used to verify the version and other info of the k8s cluster.
1. `--advertise-address`(only needed since 1.3 release) is the address exposed by the cloud side (will be added to the SANs of the CloudCore certificate), the default value is the local IP
@@ -100,7 +100,7 @@ Sample execution output:
Kubernetes version verification passed, KubeEdge installation will start...
...
KubeEdge cloudcore is running, For logs visit: /var/log/kubeedge/cloudcore.log
-```
+```
## (**Only Needed in Pre 1.3 Release**) Manually copy certs.tgz from cloud host to edge host(s)
@@ -153,23 +153,23 @@ Execute `keadm join <flags>`
```shell
"keadm join" command bootstraps KubeEdge's worker node (at the edge) component.
- It will also connect with cloud component to receive
- further instructions and forward telemetry data from
+ It will also connect with cloud component to receive
+ further instructions and forward telemetry data from
devices to cloud
-
+
Usage:
keadm join [flags]
-
+
Examples:
-
+
keadm join --cloudcore-ipport=<ip:port address> --edgenode-name=<unique string as edge identifier>
-
+
- For this command --cloudcore-ipport flag is a required option
- This command will download and install the default version of pre-requisites and KubeEdge
-
+
keadm join --cloudcore-ipport=10.20.30.40:10000 --edgenode-name=testing123 --kubeedge-version=1.2.0
-
-
+
+
Flags:
--certPath string The certPath used by edgecore, the default value is /etc/kubeedge/certs (default "/etc/kubeedge/certs")
-s, --certport string The port where to apply for the edge certificate
@@ -182,10 +182,10 @@ Execute `keadm join <flags>`
-t, --token string Used for edge to apply for the certificate
```
-**IMPORTANT NOTE:**
-1. For this command `--cloudcore-ipport` flag is a mandatory flag
+**IMPORTANT NOTE:**
+1. For this command `--cloudcore-ipport` flag is a mandatory flag.
1. If you want to apply for the edge node certificate automatically, `--token` is needed.
-1. The KubeEdge version used on the cloud and edge sides should be the same.
+1. The KubeEdge version used on the cloud and edge sides should be the same.
Examples:
@@ -208,15 +208,15 @@ KubeEdge edgecore is running, For logs visit: /var/log/kubeedge/edgecore.log
### Steps
1. **Install CNI plugin:**
- - Download CNI plugin release and extract it:
-
+ - Download CNI plugin release and extract it:
+
```
- $ wget https://github.com/containernetworking/plugins/releases/download/v0.8.2/cni-plugins-linux-amd64-v0.8.2.tgz
-
+ $ wget https://github.com/containernetworking/plugins/releases/download/v0.8.2/cni-plugins-linux-amd64-v0.8.2.tgz
+
# Extract the tarball
$ mkdir cni
$ tar -zxvf cni-plugins-linux-amd64-v0.8.2.tgz -C cni
-
+
$ mkdir -p /opt/cni/bin
$ cp ./cni/* /opt/cni/bin/
```
@@ -224,8 +224,8 @@ KubeEdge edgecore is running, For logs visit: /var/log/kubeedge/edgecore.log
- Configure cni plugin
```
- $ mkdir -p /etc/cni/net.d/
-
+ $ mkdir -p /etc/cni/net.d/
+
$ cat >/etc/cni/net.d/bridge.conf <<EOF
{
"cniVersion": "0.3.1",
@@ -241,20 +241,20 @@ KubeEdge edgecore is running, For logs visit: /var/log/kubeedge/edgecore.log
{ "dst": "0.0.0.0/0" }
]
}
- }
+ }
EOF
```
-
-1. **Setup VM runtime:**
+
+1. **Setup VM runtime:**
Use script [`hack/setup-vmruntime.sh`](/hack/setup-vmruntime.sh) to set up VM runtime. It makes use of Arktos Runtime release to start three containers:
-
+
vmruntime_vms
vmruntime_libvirt
vmruntime_virtlet
1. **Start edgecore service and join the cluster:**
- The step is similar to provisioning containers, with `remote-runtime-endpoint` specified.
+ The step is similar to provisioning containers, with `remote-runtime-endpoint` specified.
Examples:
@@ -282,7 +282,7 @@ spec:
requests:
cpu: "3"
memory: "200Mi"
- ```
+ ```
Then use `kubectl create -f vm.yaml` to create VM pod on the edge node. You should see the workload on master:
@@ -292,16 +292,16 @@ On master:
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
testvm 1/1 Running 0 38s 10.88.0.18 testnodevm3 <none> <none>
- ```
-
+ ```
+
On the edge worker node: either ssh into the VM instance or `virsh list` can verify the VM is created and running:
-
+
```shell
Id Name State
----------------------------------------------------
1 virtlet-10628888-2584-testvm running
- ```
-
+ ```
+
## Reset KubeEdge Master and Worker nodes
@@ -364,7 +364,7 @@ Flags:
kubectl create -f https://raw.githubusercontent.com/kubeedge/kubeedge/<kubeEdge Version>/build/crds/devices/devices_v1alpha1_device.yaml
kubectl create -f https://raw.githubusercontent.com/kubeedge/kubeedge/<kubeEdge Version>/build/crds/devices/devices_v1alpha1_devicemodel.yaml
```
-
+
Also, create ClusterObjectSync and ObjectSync CRDs which are used in reliable message delivery.
```shell
diff --git a/docs/setup/kubeedge_install_source.md b/docs/setup/kubeedge_install_source.md
index 676629507..e6ded2745 100644
--- a/docs/setup/kubeedge_install_source.md
+++ b/docs/setup/kubeedge_install_source.md
@@ -30,7 +30,7 @@ The cert/ key will be generated in the `/etc/kubeedge/ca` and `/etc/kubeedge/cer
#### Generate Certificates for support `kubectl logs` command
-+ First, you need to make sure you can find the Kubernetes ca.crt and ca.key files. If you set up your Kubernetes cluster with `kubeadm`,
++ First, you need to make sure you can find the Kubernetes ca.crt and ca.key files. If you set up your Kubernetes cluster with `kubeadm`,
those files will be in the `/etc/kubernetes/pki/` dir.
+ Second, set the `CLOUDCOREIPS` env variable. It is set to specify the IP addresses of all cloudcore
@@ -42,7 +42,7 @@ export CLOUDCOREIPS="172.20.12.45 172.20.12.46"
+ Third
```bash
-$GOPATH/src/github.com/kubeedge/kubeedge/build/tools/certgen.sh stream
+$GOPATH/src/github.com/kubeedge/kubeedge/build/tools/certgen.sh stream
```
+ Fourth
diff --git a/docs/setup/kubeedge_run.md b/docs/setup/kubeedge_run.md
index f70421181..88f0702c9 100644
--- a/docs/setup/kubeedge_run.md
+++ b/docs/setup/kubeedge_run.md
@@ -57,21 +57,21 @@ nohup ./edgecore > edgecore.log 2>&1 &
```
If you have set up using systemctl
-
+
Run edgecore with systemd
-
+
It is also possible to start the edgecore with systemd. If you want, you could use the example systemd-unit-file.
-
+
```shell
sudo ln build/tools/edgecore.service /etc/systemd/system/edgecore.service
sudo systemctl daemon-reload
sudo systemctl start edgecore
```
-
+
**Note:** Please fix __ExecStart__ path in edgecore.service. Do __NOT__ use relative path, use absolute path instead.
-
+
If you also want autostart, you have to execute this, too:
-
+
```shell
sudo systemctl enable edgecore
```
diff --git a/docs/setup/memfootprint-test-setup.md b/docs/setup/memfootprint-test-setup.md
index a433d0f54..854ad78a3 100644
--- a/docs/setup/memfootprint-test-setup.md
+++ b/docs/setup/memfootprint-test-setup.md
@@ -18,7 +18,7 @@ After deployment and provisioning of KubeEdge cloud and edge components in 2 VM'
### Test setup
-![KubeEdge Test Setup](../../docs/images/memfootprint-img/perftestsetup_diagram.PNG)
+![KubeEdge Test Setup](../../docs/images/memfootprint-img/perftestsetup_diagram.PNG)
*Fig 1: KubeEdge Test Setup*
diff --git a/edge/cmd/edgecore/app/server.go b/edge/cmd/edgecore/app/server.go
index b1810989d..264c1f6da 100644
--- a/edge/cmd/edgecore/app/server.go
+++ b/edge/cmd/edgecore/app/server.go
@@ -38,16 +38,16 @@ func NewEdgeCoreCommand() *cobra.Command {
opts := options.NewEdgeCoreOptions()
cmd := &cobra.Command{
Use: "edgecore",
- Long: `Edgecore is the core edge part of KubeEdge, which contains six modules: devicetwin, edged,
-edgehub, eventbus, metamanager, and servicebus. DeviceTwin is responsible for storing device status
-and syncing device status to the cloud. It also provides query interfaces for applications. Edged is an
-agent that runs on edge nodes and manages containerized applications and devices. Edgehub is a web socket
-client responsible for interacting with Cloud Service for the edge computing (like Edge Controller as in the KubeEdge
-Architecture). This includes syncing cloud-side resource updates to the edge, and reporting
-edge-side host and device status changes to the cloud. EventBus is an MQTT client to interact with MQTT
-servers (mosquitto), offering publish and subscribe capabilities to other components. MetaManager
-is the message processor between edged and edgehub. It is also responsible for storing/retrieving metadata
-to/from a lightweight database (SQLite). ServiceBus is an HTTP client to interact with HTTP servers (REST),
+ Long: `Edgecore is the core edge part of KubeEdge, which contains six modules: devicetwin, edged,
+edgehub, eventbus, metamanager, and servicebus. DeviceTwin is responsible for storing device status
+and syncing device status to the cloud. It also provides query interfaces for applications. Edged is an
+agent that runs on edge nodes and manages containerized applications and devices. Edgehub is a web socket
+client responsible for interacting with Cloud Service for the edge computing (like Edge Controller as in the KubeEdge
+Architecture). This includes syncing cloud-side resource updates to the edge, and reporting
+edge-side host and device status changes to the cloud. EventBus is an MQTT client to interact with MQTT
+servers (mosquitto), offering publish and subscribe capabilities to other components. MetaManager
+is the message processor between edged and edgehub. It is also responsible for storing/retrieving metadata
+to/from a lightweight database (SQLite). ServiceBus is an HTTP client to interact with HTTP servers (REST),
offering HTTP client capabilities to components of cloud to reach HTTP servers running at edge. `,
Run: func(cmd *cobra.Command, args []string) {
verflag.PrintAndExitIfRequested()
diff --git a/edge/hack/install_docker_for_raspbian.sh b/edge/hack/install_docker_for_raspbian.sh
index 281394d1c..43b91a3b8 100644
--- a/edge/hack/install_docker_for_raspbian.sh
+++ b/edge/hack/install_docker_for_raspbian.sh
@@ -2,5 +2,5 @@
apt-get update
apt-get install -y apt-transport-https ca-certificates curl gnupg2 software-properties-common
curl -fsSL https://download.docker.com/linux/raspbian/gpg | apt-key add -
-echo "deb [arch=armhf] https://download.docker.com/linux/raspbian stretch stable" | tee /etc/apt/sources.list.d/docker.list
+echo "deb [arch=armhf] https://download.docker.com/linux/raspbian stretch stable" | tee /etc/apt/sources.list.d/docker.list
apt-get update && apt-get install -y docker-ce docker-ce-cli containerd.io \ No newline at end of file
diff --git a/edge/test/README.md b/edge/test/README.md
index cc84a7de2..8ea0cb27a 100644
--- a/edge/test/README.md
+++ b/edge/test/README.md
@@ -9,14 +9,14 @@
## Overview About testManager Module
-* testManager is a utility module to mimic the cloud and push messages for different kinds of actions that could happen from the cloud. Typical device, node and application lifecycle management functions are expected to be performed in the cloud and pushed to the edge node. These functions commonly encompass configurations related to
-
+* testManager is a utility module to mimic the cloud and push messages for different kinds of actions that could happen from the cloud. Typical device, node and application lifecycle management functions are expected to be performed in the cloud and pushed to the edge node. These functions commonly encompass configurations related to
+
- Kubernetes Secrets and Configuration Maps.
- Application deployment/sync
- Binding devices to edge nodes via memberships.
- Syncing of different resources between cloud and edge ( like app status, device status etc)
- Node sync, etc..
-
+
The info below can help users understand how to use the testManager for testing KubeEdge.
* The testManager module starts an HTTP server on port 12345 to let users interact with KubeEdge and perform operations which would typically be performed from the cloud.
@@ -26,19 +26,19 @@ It exposes its API's to do the following
- /devices : Bind a Device to kubeedge node.
- /secrets : Configure secrets on kubeedge node.
- /configmaps : Configure configmaps on kubeedge node.
-
+
Using the above APIs, a user can perform resource operations against a running edge node.
-
-testManager facilitates validating the capabilities of the edge platform by performing **curl** operations against a running edge node.
-The following sections will explain the procedure to test KubeEdge with testManager.
+testManager facilitates validating the capabilities of the edge platform by performing **curl** operations against a running edge node.
+
+The following sections will explain the procedure to test KubeEdge with testManager.
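For readers who prefer Go over curl, here is a tiny client sketch that exercises a testManager endpoint on port 12345. The minimal pod payload and the helper name are hypothetical; the authoritative request shapes are the curl examples in the sections below.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

// postToTestManager sends a JSON payload to one of the testManager endpoints
// (e.g. /pod, /devices, /secrets, /configmaps) exposed on port 12345.
func postToTestManager(endpoint string, payload []byte) error {
	resp, err := http.Post("http://127.0.0.1:12345"+endpoint, "application/json", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
	return nil
}

func main() {
	// Minimal, hypothetical pod manifest; see the "Add Pod" curl example below for the real payload.
	pod := []byte(`{"apiVersion":"v1","kind":"Pod","metadata":{"name":"nginx"},"spec":{"containers":[{"name":"nginx","image":"nginx"}]}}`)
	if err := postToTestManager("/pod", pod); err != nil {
		fmt.Println("request failed:", err)
	}
}
```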
## Test with `TestManager` Module
### Compile
```shell
-# generate the `edgecore` binary
+# generate the `edgecore` binary
make
# or
make edgecore
@@ -147,9 +147,9 @@ curl -X PUT \
"state": "online"
}]
}'
-```
+```
-#### Verify the DB
+#### Verify the DB
```bash
# Enter the database
sqlite3 edge.db
@@ -171,7 +171,7 @@ curl -X DELETE \
"state": "online"
}]
}'
-```
+```
#### Add Pod
```bash
@@ -197,7 +197,7 @@ curl -i -v -X POST http://127.0.0.1:12345/pod -d '{
```
#### Query Pods
```bash
-curl -i -v -X GET http://127.0.0.1:12345/pod
+curl -i -v -X GET http://127.0.0.1:12345/pod
#or (To display response in json format)
@@ -215,7 +215,7 @@ select * from meta;
# or you can check the pod container using `docker ps`
```
-#### Remove Pod
+#### Remove Pod
```bash
curl -i -v -X DELETE http://127.0.0.1:12345/pod -d '{
"apiVersion": "v1",
diff --git a/edge/test/integration/docs/README.md b/edge/test/integration/docs/README.md
index 1b3b24617..436a8717f 100755
--- a/edge/test/integration/docs/README.md
+++ b/edge/test/integration/docs/README.md
@@ -2,7 +2,7 @@
- [Background](#Background)
- [Integration test framework features](#Integration-test-framework-features)
- - [Folder Structure](#Folder-structure-of-Integration-tests)
+ - [Folder Structure](#Folder-structure-of-Integration-tests)
- [Sample Testcase](#Sample-Testcase)
- [Configurations](#Configurations)
- [Run Tests](#Run-Tests)
@@ -54,7 +54,7 @@ It("TC_TEST_EBUS_7: change the device status to unknown from eventbus", func() {
return deviceState
}, "60s", "2s").Should(Equal("unknown"), "Device state is not unknown within specified time")
Client.Disconnect(1)
-})
+})
```
## Configurations
##### Modify the test configurations accordingly in the below mentioned file
@@ -70,7 +70,7 @@ cat >config.json<<END
mqttEndpoint: Specify the mqttEndpoint accordingly to run the integration tests on an internal or external MQTT server.
testManager: testManager will listen and serve requests on http://127.0.0.1:12345
-edgedEndpoint: edgedEndpoint will listen and serve requests on http://127.0.0.1:10255
+edgedEndpoint: edgedEndpoint will listen and serve requests on http://127.0.0.1:10255
image_url: Specify the docker image name/image URLs for application deployments on the edge node.
```
## Run Tests
@@ -79,33 +79,33 @@ image_url: Specify the docker Image Name/Image URL's for application deployments
* Integration test scripts are written in a way that a user can run all test suites together, run individual test suites, or run only the failed test cases.
**Run all test suites:**
-```shell
+```shell
cd $GOPATH/src/github.com/kubeedge/kubeedge/edge
-
+
1. bash -x test/integration/scripts/compile.sh
2. bash test/integration/scripts/fast_test.sh
-
+
Above 2 commands will ensure you run all the tests.
```
**Run Individual Test Suite:**
-```shell
+```shell
cd $GOPATH/src/github.com/kubeedge/kubeedge/edge
-
+
Ex:
1. bash -x test/integration/scripts/compile.sh <device>
- 2. bash test/integration/scripts/fast_test.sh <device>
- #or
+ 2. bash test/integration/scripts/fast_test.sh <device>
+ #or
1. bash -x test/integration/scripts/compile.sh <appdeployment>
- 2. bash test/integration/scripts/fast_test.sh <appdeployment>
+ 2. bash test/integration/scripts/fast_test.sh <appdeployment>
```
**Run Failed Test:**
-```shell
+```shell
cd $GOPATH/src/github.com/kubeedge/kubeedge/edge
-
- Ex:
- bash test/integration/scripts/fast_test.sh <device> -ginkgo.focus="Failed test case ID/Name"
+
+ Ex:
+ bash test/integration/scripts/fast_test.sh <device> -ginkgo.focus="Failed test case ID/Name"
```
## Test Logs
##### Integration test logs
diff --git a/edgemesh/tools/initContainer/createImg.sh b/edgemesh/tools/initContainer/createImg.sh
index eb1676cff..8b7c485be 100644
--- a/edgemesh/tools/initContainer/createImg.sh
+++ b/edgemesh/tools/initContainer/createImg.sh
@@ -26,7 +26,7 @@ if command -v docker > /dev/null 2>&1 ; then
#docker build
docker build -t edgemesh_init .
# delete iptables script
- rm ./edgemesh-iptables.sh
+ rm ./edgemesh-iptables.sh
else
echo 'the docker command is not found!!'
exit 1
diff --git a/edgemesh/tools/initContainer/rpm/Dockerfile b/edgemesh/tools/initContainer/rpm/Dockerfile
index 2fbf21657..2f3122c7d 100644
--- a/edgemesh/tools/initContainer/rpm/Dockerfile
+++ b/edgemesh/tools/initContainer/rpm/Dockerfile
@@ -3,5 +3,5 @@ FROM centos:latest
ADD edgemesh-iptables.sh /usr/local/bin
RUN yum -y update && yum install -y iproute iptables
-
+
ENTRYPOINT ["/usr/local/bin/edgemesh-iptables.sh"]
diff --git a/edgemesh/tools/initContainer/script/edgemesh-iptables.sh b/edgemesh/tools/initContainer/script/edgemesh-iptables.sh
index 1447ef7fb..86a7dc582 100644
--- a/edgemesh/tools/initContainer/script/edgemesh-iptables.sh
+++ b/edgemesh/tools/initContainer/script/edgemesh-iptables.sh
@@ -16,10 +16,10 @@ function usage() {
echo ' -h: for some help'
}
-# network namespace
+# network namespace
NETMODE=
-# get the container network mode
+# get the container network mode
function getContainerNetMode() {
if ip link |grep docker0 > /dev/null; then
echo 'this is the host mode,share with net namespace with host'
@@ -38,7 +38,7 @@ function isValidIP() {
true
else
false
- fi
+ fi
}
function isIPv4() {
@@ -68,34 +68,34 @@ function bridgeNetMode() {
echo 'this func used for bridge net mode'
# get default route
default_route=$(ip route show |grep default |awk '{print $3}')
-
+
#clear EDGEMESH chain and rule,if exist
iptables -t nat -D OUTPUT -p tcp -j EDGEMESH_OUTBOUND 2>/dev/null
iptables -t nat -D OUTPUT -p udp --dport "53" -j EDGEMESH_OUTBOUND_DNS 2>/dev/null
iptables -t nat -F EDGEMESH_OUTBOUND 2>/dev/null
iptables -t nat -X EDGEMESH_OUTBOUND 2>/dev/null
-
+
iptables -t nat -F EDGEMESH_OUTBOUND_REDIRECT 2>/dev/null
iptables -t nat -X EDGEMESH_OUTBOUND_REDIRECT 2>/dev/null
-
+
iptables -t nat -F EDGEMESH_OUTBOUND_DNS 2>/dev/null
iptables -t nat -X EDGEMESH_OUTBOUND_DNS 2>/dev/null
-
+
# make chain for edgemesh hijacking
iptables -t nat -N EDGEMESH_OUTBOUND_REDIRECT
iptables -t nat -A EDGEMESH_OUTBOUND_REDIRECT -p tcp -j DNAT --to-destination "${default_route}:${EDGEMESH_PROXY_PORT}"
iptables -t nat -N EDGEMESH_OUTBOUND
iptables -t nat -A OUTPUT -p tcp -j EDGEMESH_OUTBOUND
-
+
# support dns use udp for dest port 53
iptables -t nat -N EDGEMESH_OUTBOUND_DNS
iptables -t nat -A EDGEMESH_OUTBOUND_DNS -j DNAT --to-destination "${default_route}"
iptables -t nat -A OUTPUT -p udp --dport "53" -j EDGEMESH_OUTBOUND_DNS
-
+
# exclude traffic for some ports, including special ports such as 22
iptables -t nat -A EDGEMESH_OUTBOUND -p tcp --dport "22" -j RETURN
- if [ -n "${EDGEMESH_EXCLUDE_PORT}" ]; then
- for port in "${port_exclude_list[@]}"; do
+ if [ -n "${EDGEMESH_EXCLUDE_PORT}" ]; then
+ for port in "${port_exclude_list[@]}"; do
iptables -t nat -A EDGEMESH_OUTBOUND -p tcp --dport "${port}" -j RETURN
done
fi
@@ -105,10 +105,10 @@ function bridgeNetMode() {
iptables -t nat -A EDGEMESH_OUTBOUND -d "${ip}" -j RETURN
done
fi
-
+
# Redirect app callback to itself via Service IP (default not redirected)
get_local_IP=$(ip addr |grep inet|grep -v inet6|awk '{print $2}'|tr -d "addr:")
-
+
for LOCAL_IP in $get_local_IP; do
ele=${LOCAL_IP%$splt}
echo "LOCAL_IP: $LOCAL_IP , $ele"
@@ -118,7 +118,7 @@ function bridgeNetMode() {
done
# loopback traffic
iptables -t nat -A EDGEMESH_OUTBOUND -d 127.0.0.1/32 -j RETURN
-
+
# hijacking
if [ ${#ipv4_include_list[@]} -gt 0 ]; then
# include Ips and ports are *
@@ -131,11 +131,11 @@ function bridgeNetMode() {
done
fi
if [ "${EDGEMESH_HIJACK_PORT}" != "*" ]; then
- for port in "${port_include_list[@]}"; do
+ for port in "${port_include_list[@]}"; do
iptables -t nat -A EDGEMESH_OUTBOUND -p tcp --dport "${port}" -j EDGEMESH_OUTBOUND_REDIRECT
done
fi
-
+
iptables -t nat -A EDGEMESH_OUTBOUND -j RETURN
fi
fi
@@ -158,7 +158,7 @@ EDGEMESH_EXCLUDE_PORT=${EXCLUDE_PORT-}
function main() {
getContainerNetMode
-
+
while getopts ":p:i:t:b:c:h" opt; do
case ${opt} in
p)
@@ -167,13 +167,13 @@ function main() {
i)
EDGEMESH_HIJACK_IP=${OPTARG}
;;
- t)
+ t)
EDGEMESH_HIJACK_PORT=${OPTARG}
;;
b)
EDGEMESH_EXCLUDE_IP=${OPTARG}
;;
- c)
+ c)
EDGEMESH_EXCLUDE_PORT=${OPTARG}
;;
h)
@@ -187,7 +187,7 @@ function main() {
;;
esac
done
-
+
echo "EdgeMesh iptables configration:"
echo "====================================="
echo "Container Network mode is: ${NETMODE}"
@@ -197,7 +197,7 @@ function main() {
echo "EDGEMESH_HIJACK_PORT=${EDGEMESH_HIJACK_PORT-"*"}"
echo "EDGEMESH_EXCLUDE_IP=${EDGEMESH_EXCLUDE_IP-}"
echo "EDGEMESH_EXCLUDE_PORT=${EDGEMESH_EXCLUDE_PORT-}"
-
+
# parse parameter
IFS=',' read -ra EXCLUDE_IP <<< "${EDGEMESH_EXCLUDE_IP}"
IFS=',' read -ra INCLUDE_IP <<< "${EDGEMESH_HIJACK_IP}"
@@ -212,7 +212,7 @@ function main() {
fi
fi
done
-
+
if [ "${EDGEMESH_HIJACK_IP}" == "*" ]; then
ipv4_include_list=("*")
ipv6_include_list=("*")
@@ -225,10 +225,10 @@ function main() {
elif isIPv6 "$r"; then
ipv6_include_list+=("$range")
fi
- fi
+ fi
done
fi
-
+
IFS=',' read -ra INCLUDE_PORT <<< "${EDGEMESH_HIJACK_PORT}"
IFS=',' read -ra EXCLUDE_PORT <<< "${EDGEMESH_EXCLUDE_PORT}"
if [ "${EDGEMESH_HIJACK_PORT}" != "*" ]; then
@@ -236,20 +236,20 @@ function main() {
port_include_list+=("$port")
done
fi
-
- if [ -n "${EDGEMESH_EXCLUDE_PORT}" ]; then
- for port in "${EXCLUDE_PORT[@]}"; do
+
+ if [ -n "${EDGEMESH_EXCLUDE_PORT}" ]; then
+ for port in "${EXCLUDE_PORT[@]}"; do
port_exclude_list+=("$port")
done
fi
-
+
echo "ipv4_include_list : ${ipv4_include_list[@]}"
echo "ipv4_exclude_list : ${ipv4_exclude_list[@]}"
echo "port_include_list : ${port_include_list[@]}"
echo "port_exclude_list : ${port_exclude_list[@]}"
-
+
# bridge mode(port map) container network
- if [ "${NETMODE}" = "OTHER" ]; then
+ if [ "${NETMODE}" = "OTHER" ]; then
echo " ${NETMODE} iptables configration"
bridgeNetMode
# if set ipv6 option
diff --git a/edgesite/cmd/edgesite/app/server.go b/edgesite/cmd/edgesite/app/server.go
index 3fe603a75..7d7366741 100644
--- a/edgesite/cmd/edgesite/app/server.go
+++ b/edgesite/cmd/edgesite/app/server.go
@@ -28,9 +28,9 @@ func NewEdgeSiteCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "edgesite",
Long: `EdgeSite helps running lightweight clusters at edge, which contains three modules: edgecontroller,
-metamanager, and edged. EdgeController is an extended kubernetes controller which manages edge nodes and pods metadata
-so that the data can be targeted to a specific edge node. MetaManager is the message processor between edged and edgehub.
-It is also responsible for storing/retrieving metadata to/from a lightweight database (SQLite).Edged is an agent that
+metamanager, and edged. EdgeController is an extended kubernetes controller which manages edge nodes and pods metadata
+so that the data can be targeted to a specific edge node. MetaManager is the message processor between edged and edgehub.
+It is also responsible for storing/retrieving metadata to/from a lightweight database (SQLite).Edged is an agent that
runs on edge nodes and manages containerized applications.`,
Run: func(cmd *cobra.Command, args []string) {
verflag.PrintAndExitIfRequested()
diff --git a/hack/lib/golang.sh b/hack/lib/golang.sh
index 0c51e83f3..8a9a13fb7 100644
--- a/hack/lib/golang.sh
+++ b/hack/lib/golang.sh
@@ -19,7 +19,6 @@
# KubeEdge Authors:
# To Get Detail Version Info for KubeEdge Project
-#set -x
set -o errexit
set -o nounset
set -o pipefail
@@ -136,7 +135,7 @@ kubeedge::check::env() {
errors+="GOPATH environment value not set"
fi
- # check other env
+ # check other env
# check the length of errors
if [[ ${#errors[@]} -ne 0 ]] ; then
@@ -178,7 +177,7 @@ kubeedge::golang::get_all_targets() {
}
kubeedge::golang::get_all_binares() {
- local -a binares
+ local -a binares
for bt in "${ALL_BINARIES_AND_TARGETS[@]}" ; do
binares+=("${bt%%:*}")
done
@@ -195,11 +194,11 @@ kubeedge::golang::build_binaries() {
for binArg in "$@"; do
targets+=("$(kubeedge::golang::get_target_by_binary $binArg)")
done
-
+
if [[ ${#targets[@]} -eq 0 ]]; then
targets=("${KUBEEDGE_ALL_TARGETS[@]}")
fi
-
+
local -a binaries
while IFS="" read -r binary; do binaries+=("$binary"); done < <(kubeedge::golang::binaries_from_targets "${targets[@]}")
@@ -227,11 +226,11 @@ kubeedge::golang::is_cross_build_binary() {
local key=$1
for bin in "${KUBEEDGE_ALL_CROSS_BINARIES[@]}" ; do
if [ "${bin}" == "${key}" ]; then
- echo ${YES}
+ echo ${YES}
return
fi
done
- echo ${NO}
+ echo ${NO}
}
KUBEEDGE_ALL_CROSS_GOARMS=(
@@ -243,17 +242,16 @@ kubeedge::golang::is_supported_goarm() {
local key=$1
for value in ${KUBEEDGE_ALL_CROSS_GOARMS[@]} ; do
if [ "${value}" == "${key}" ]; then
- echo ${YES}
+ echo ${YES}
return
fi
done
- echo ${NO}
+ echo ${NO}
}
kubeedge::golang::cross_build_place_binaries() {
kubeedge::check::env
-
- set -x
+
local -a targets=()
local goarm=${goarm:-${KUBEEDGE_ALL_CROSS_GOARMS[0]}}
@@ -275,10 +273,10 @@ kubeedge::golang::cross_build_place_binaries() {
targets+=("$(kubeedge::golang::get_target_by_binary $bin)")
done
fi
-
+
if [ "$(kubeedge::golang::is_supported_goarm ${goarm})" == "${NO}" ]; then
echo "GOARM${goarm} does not support cross build"
- exit 1
+ exit 1
fi
local -a binaries
@@ -312,11 +310,11 @@ kubeedge::golang::is_small_build_binary() {
local key=$1
for bin in "${KUBEEDGE_ALL_SMALL_BINARIES[@]}" ; do
if [ "${bin}" == "${key}" ]; then
- echo ${YES}
+ echo ${YES}
return
fi
done
- echo ${NO}
+ echo ${NO}
}
kubeedge::golang::small_build_place_binaries() {
@@ -336,7 +334,7 @@ kubeedge::golang::small_build_place_binaries() {
targets+=("$(kubeedge::golang::get_target_by_binary $bin)")
done
fi
-
+
local -a binaries
while IFS="" read -r binary; do binaries+=("$binary"); done < <(kubeedge::golang::binaries_from_targets "${targets[@]}")
diff --git a/hack/lib/lint.sh b/hack/lib/lint.sh
index b99e4bb34..692f75018 100644
--- a/hack/lib/lint.sh
+++ b/hack/lib/lint.sh
@@ -22,6 +22,8 @@ set -o pipefail
kubeedge::lint::check() {
cd ${KUBEEDGE_ROOT}
+ git diff --cached --name-only master | grep -Ev "externalversions|fake|vendor" | xargs sed -i 's/[ \t]*$//'
+ [[ $(git diff --name-only) ]] && return 1
golangci-lint run
gofmt -l -w staging
}
diff --git a/hack/verify-golang.sh b/hack/verify-golang.sh
index e8aa3fa54..b4ff025d7 100755
--- a/hack/verify-golang.sh
+++ b/hack/verify-golang.sh
@@ -23,7 +23,7 @@ set -o pipefail
# The root of the build/dist directory
KUBEEDGE_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd -P)"
-echo "go detail version: $(go version)"
+echo "go detail version: $(go version)"
goversion=$(go version |awk -F ' ' '{printf $3}' |sed 's/go//g')
@@ -32,7 +32,7 @@ echo "go version: $goversion"
X=$(echo $goversion|awk -F '.' '{printf $1}')
Y=$(echo $goversion|awk -F '.' '{printf $2}')
-if [ $X -lt 1 ] ; then
+if [ $X -lt 1 ] ; then
echo "go major version must >= 1, now is $X"
exit 1
fi
diff --git a/keadm/cmd/keadm/app/cmd/cmd.go b/keadm/cmd/keadm/app/cmd/cmd.go
index 201e8549d..4743210b7 100644
--- a/keadm/cmd/keadm/app/cmd/cmd.go
+++ b/keadm/cmd/keadm/app/cmd/cmd.go
@@ -34,7 +34,7 @@ var (
| Please give us feedback at: |
| https://github.com/kubeedge/kubeedge/issues |
+----------------------------------------------------------+
-
+
Create a cluster with cloud node
(which controls the edge node cluster), and edge nodes
(where native containerized application, in the form of
diff --git a/keadm/cmd/keadm/app/cmd/edge/join.go b/keadm/cmd/keadm/app/cmd/edge/join.go
index d6d4d2b3b..ca5a1ea74 100644
--- a/keadm/cmd/keadm/app/cmd/edge/join.go
+++ b/keadm/cmd/keadm/app/cmd/edge/join.go
@@ -30,8 +30,8 @@ import (
var (
edgeJoinLongDescription = `
"keadm join" command bootstraps KubeEdge's worker node (at the edge) component.
-It will also connect with cloud component to receive
-further instructions and forward telemetry data from
+It will also connect with cloud component to receive
+further instructions and forward telemetry data from
devices to cloud
`
edgeJoinExample = `
diff --git a/mappers/bluetooth_mapper/deployment.yaml b/mappers/bluetooth_mapper/deployment.yaml
index 3d236ec84..883b78dbe 100644
--- a/mappers/bluetooth_mapper/deployment.yaml
+++ b/mappers/bluetooth_mapper/deployment.yaml
@@ -23,7 +23,7 @@ spec:
- name: config-volume
mountPath: /opt/kubeedge/
nodeSelector:
- bluetooth: "true"
+ bluetooth: "true"
volumes:
- name: config-volume
configMap:
diff --git a/mappers/modbus_mapper/Makefile b/mappers/modbus_mapper/Makefile
index 9be3cb8d8..5b1fecffa 100644
--- a/mappers/modbus_mapper/Makefile
+++ b/mappers/modbus_mapper/Makefile
@@ -1,5 +1,5 @@
#make modbus_mapper
.PHONY: default modbus_mapper
-modbus_mapper:
+modbus_mapper:
cd src && npm install --unsafe-perm=true
docker build -t modbus_mapper:v1.0 .
diff --git a/mappers/modbus_mapper/deployment.yaml b/mappers/modbus_mapper/deployment.yaml
index 30a536796..a024235a7 100644
--- a/mappers/modbus_mapper/deployment.yaml
+++ b/mappers/modbus_mapper/deployment.yaml
@@ -16,7 +16,7 @@ spec:
containers:
- name: modbus-mapper-container
image: <your_dockerhub_username>/modbus_mapper:v1.0
- env:
+ env:
- name: CONNECTOR_MQTT_PORT
value: "1883"
- name: CONNECTOR_MQTT_IP
@@ -30,10 +30,10 @@ spec:
- name: dpl-config-volume
mountPath: /opt/src/dpl
nodeSelector:
- modbus: "true"
+ modbus: "true"
volumes:
- name: dpl-config-volume
configMap:
name: device-profile-config-<edge_node_name>
restartPolicy: Always
- \ No newline at end of file
+
diff --git a/mappers/modbus_mapper/dpl/deviceProfile.json b/mappers/modbus_mapper/dpl/deviceProfile.json
index 00082d319..78401c0b7 100644
--- a/mappers/modbus_mapper/dpl/deviceProfile.json
+++ b/mappers/modbus_mapper/dpl/deviceProfile.json
@@ -4,7 +4,7 @@
"name": "modbus-mock-instance-01",
"model": "modbus-mock-model",
"protocol": "modbus-tcp-01"
- }],
+ }],
"deviceModels": [{
"properties": [{
"name": "temperature",
diff --git a/mappers/modbus_mapper/src/devicetwin.js b/mappers/modbus_mapper/src/devicetwin.js
index 6f15d5425..5f8dc3fc5 100644
--- a/mappers/modbus_mapper/src/devicetwin.js
+++ b/mappers/modbus_mapper/src/devicetwin.js
@@ -9,7 +9,7 @@ class DeviceTwin {
constructor(mqttClient) {
this.mqttClient = mqttClient;
}
-
+
// transferType transfer data according to the dpl configuration
transferType(visitor, property, data, callback) {
let transData;
@@ -88,7 +88,7 @@ class DeviceTwin {
default:
logger.error('unknown dataType: ', property.dataType);
callback(null);
- break;
+ break;
}
}
@@ -219,7 +219,7 @@ class DeviceTwin {
return;
}
if (!deviceTwin.hasOwnProperty('actual') ||
- (deviceTwin.hasOwnProperty('expected') && deviceTwin.expected.hasOwnProperty('metadata') && deviceTwin.actual.hasOwnProperty('metadata') &&
+ (deviceTwin.hasOwnProperty('expected') && deviceTwin.expected.hasOwnProperty('metadata') && deviceTwin.actual.hasOwnProperty('metadata') &&
deviceTwin.expected.metadata.timestamp > deviceTwin.actual.metadata.timestamp &&
deviceTwin.expected.value !== deviceTwin.actual.value)) {
callback(deviceTwin.expected.value);
diff --git a/mappers/modbus_mapper/src/index.js b/mappers/modbus_mapper/src/index.js
index da2af8dae..2edc685e0 100644
--- a/mappers/modbus_mapper/src/index.js
+++ b/mappers/modbus_mapper/src/index.js
@@ -46,7 +46,7 @@ async.series([
}
});
},
-
+
//load dpl first time
function(callback) {
WatchFiles.loadDpl(options.dpl_name, (devInsMap, devModMap, devProMap, modVistrMap)=>{
@@ -167,7 +167,7 @@ async.series([
}
});
});
- }
+ }
} catch (err) {
logger.error('failed to change devicetwin of device[%s], err: ', deviceID, err);
}
@@ -247,7 +247,7 @@ WatchFiles.watchChange(path.join(__dirname, 'dpl'), ()=>{
});
}
});
- callback();
+ callback();
}
],function(err) {
if (err) {
diff --git a/mappers/modbus_mapper/src/watchfile.js b/mappers/modbus_mapper/src/watchfile.js
index 1cf13c675..6c5340492 100644
--- a/mappers/modbus_mapper/src/watchfile.js
+++ b/mappers/modbus_mapper/src/watchfile.js
@@ -80,7 +80,7 @@ function buildMaps(dplConfigs, i) {
} else {
logger.error('failed to find model[%s] for deviceid', dplConfigs.deviceModels[i].model);
}
-
+
let foundPro = dplConfigs.protocols.findIndex((element)=>{
return element.name === dplConfigs.deviceInstances[i].protocol;
});
@@ -88,7 +88,7 @@ function buildMaps(dplConfigs, i) {
devPro.set(dplConfigs.deviceInstances[i].id, dplConfigs.protocols[foundMod]);
} else {
logger.error('failed to find protocol[%s] for deviceid', dplConfigs.deviceModels[i].protocol);
- }
+ }
}
// buildVisitorMaps build map[model-property-protocol]propertyVisitor
@@ -100,7 +100,7 @@ function buildVisitorMaps(dplConfigs, i, j) {
modVisitr.set(util.format('%s-%s-%s', dplConfigs.propertyVisitors[foundVisitor].modelName, dplConfigs.propertyVisitors[foundVisitor].propertyName, dplConfigs.propertyVisitors[foundVisitor].protocol), dplConfigs.propertyVisitors[foundVisitor]);
} else {
logger.error('failed to find visitor for model[%s], property[%s]', dplConfigs.deviceModels[i].name, dplConfigs.deviceModels[i].properties[j].name);
- }
+ }
}
module.exports = {watchChange, loadDpl, loadConfig};
diff --git a/staging/src/github.com/kubeedge/beehive/Makefile b/staging/src/github.com/kubeedge/beehive/Makefile
index 4f95ca8ff..7bfc85e09 100644
--- a/staging/src/github.com/kubeedge/beehive/Makefile
+++ b/staging/src/github.com/kubeedge/beehive/Makefile
@@ -46,8 +46,8 @@ test: ## test case
# https://deepzz.com/post/study-golang-test.html
# https://deepzz.com/post/the-command-flag-of-go-test.html
benchmark: ## run benchmarks tests
- @go test ${TESTFLAGS} $(filter-out ${INTEGRATION_PACKAGE},${PACKAGES}) -bench . -run Benchmark
-
+ @go test ${TESTFLAGS} $(filter-out ${INTEGRATION_PACKAGE},${PACKAGES}) -bench . -run Benchmark
+
coverage: ## generate coverprofiles from the unit tests, except tests that require root
@rm -f coverage.txt
@go test -i ${TESTFLAGS} $(filter-out ${INTEGRATION_PACKAGE},${PACKAGES}) 2> /dev/null
diff --git a/staging/src/github.com/kubeedge/viaduct/README.md b/staging/src/github.com/kubeedge/viaduct/README.md
index bdd9186a5..8fcc6a07b 100644
--- a/staging/src/github.com/kubeedge/viaduct/README.md
+++ b/staging/src/github.com/kubeedge/viaduct/README.md
@@ -1,7 +1,7 @@
# Viaduct
The viaduct is a bridge that carries a road across a valley; the valley is the gap between cloud and edge, and the bridge is the connection across the gap.
# Overview
-Viaduct uses protobuf 3.0 to serialize the messages defined in beehive and provides
+Viaduct uses protobuf 3.0 to serialize the messages defined in beehive and provides
APIs for connection and message operations.
By now, Viaduct supports websocket (gorilla websocket) and quic (quic-go) as the basic transport protocols.
diff --git a/staging/src/github.com/kubeedge/viaduct/examples/chat/README.md b/staging/src/github.com/kubeedge/viaduct/examples/chat/README.md
index 13156c8d4..3db1b2187 100644
--- a/staging/src/github.com/kubeedge/viaduct/examples/chat/README.md
+++ b/staging/src/github.com/kubeedge/viaduct/examples/chat/README.md
@@ -1,4 +1,4 @@
-# Generate Certificates
+# Generate Certificates
A CA certificate and a cert/key pair are required to set up examples/chat. The same cert/key pair can be used in both server and client.
# Generate Root Key
@@ -10,7 +10,7 @@ ca certificate and a cert/key pair is required to have a setup for examples/chat
# Generate csr, Fill required details after running the command
openssl req -new -key chat.key -out chat.csr
# Generate Certificate
- openssl x509 -req -in chat.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out chat.crt -days 500 -sha256
+ openssl x509 -req -in chat.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out chat.crt -days 500 -sha256
# How to Run
diff --git a/staging/src/github.com/kubeedge/viaduct/examples/mirror/README.md b/staging/src/github.com/kubeedge/viaduct/examples/mirror/README.md
index 92c892ccd..792b53bdd 100644
--- a/staging/src/github.com/kubeedge/viaduct/examples/mirror/README.md
+++ b/staging/src/github.com/kubeedge/viaduct/examples/mirror/README.md
@@ -15,7 +15,7 @@
- start server
./mirror --cmd-type=server --type=websocket --addr=localhost:9890
-
+
- start client
./mirror --cmd-type=client --type=websocket --addr=wss://localhost:9890/test
diff --git a/tests/e2e/mapper/bluetooth/README.md b/tests/e2e/mapper/bluetooth/README.md
index 596d84990..1d57d55a7 100644
--- a/tests/e2e/mapper/bluetooth/README.md
+++ b/tests/e2e/mapper/bluetooth/README.md
@@ -1,3 +1,3 @@
-# BLUETOOTH MAPPER E2E
+# BLUETOOTH MAPPER E2E
For running e2e tests for bluetooth mapper follow instructions given [here](../../../../docs/guides/bluetooth_mapper_e2e_guide.md) \ No newline at end of file