Friday, February 19, 2021

GIT - Knowledge Base

Git 2.17.1



Goals:

  • Have a list of useful Git commands, along with GitHub basic concepts.

Concepts:
  • GitHub:
    • A hosting service for Git repositories that adds collaboration features such as pull requests, issues, and actions on top of Git.
  • GitHub Commit/Push:
    • A commit records a snapshot of the staged changes in the local repository; a push uploads local commits to the remote repository on GitHub.
  • GitHub Pull Request:
    • A request to merge the changes from one branch (or fork) into another, allowing review and discussion before the merge happens.
  • GitHub Merge:
    • The operation that integrates the changes from one branch into another, typically performed when a pull request is approved.
  • GitHub Squash:
    • Squash is a Git feature that lets a developer simplify the history by merging sequential commits into one. You choose a base commit and fold the changes from the following commits into it, ending up with a single commit (the base commit) that contains all of those changes.
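    • E.g., to squash the last three commits into one using an interactive rebase (a minimal sketch; the commit count is illustrative):
      • git rebase -i HEAD~3
        • In the editor, keep `pick` on the first (base) commit, change the other lines to `squash` (or `s`), save, and then edit the resulting commit message.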


Diagrams:





GIT Commands:

  • Show the Git software version:
    • git --version

  • Show the current branch and the modified files in the working directory, along with what is staged for your next commit:
    • git status

  • Check Your Current Git Configuration:
    • git config --global credential.helper
      • This command will show you the credential helper that Git is using. If you see "manager," it means Git is using a credential manager to store your GitHub token.

  • If you've stored GitHub credentials using the git config command and want to remove them:
    • List Your Git Configuration:
      • git config --list
        • This command will display all the Git configuration settings, including any credentials.
    • Locate the Credential Configuration in the output of the command above. The specific configuration may vary depending on how you've set up your Git credentials.
  • Store Credentials Permanently (Plaintext):
    • git config --global credential.helper store
      • This method saves your credentials in plaintext in a file (~/.git-credentials by default). It is simple but not secure, as anyone with access to your user account can read the file.
      • The next time you perform a Git operation that requires authentication (e.g., git pull or git push), Git will prompt for your username and password (or token) and then save them for future use.
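      • The stored file has roughly this format, one line per host (a sketch; the username and token are placeholders):
        • https://<username>:<token>@github.com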
  • Store Credentials Temporarily (In Memory):
    • git config --global credential.helper 'cache --timeout=3600'
      • This method caches your credentials in memory for the number of seconds given by --timeout (3600 = 1 hour); without the option, the default is 15 minutes.
  • Per-Repository Credential Files:
    • cd /path/to/your/repo
    • git config credential.helper 'store --file ~/.git_repo_credentials'
      • If you want different credentials for different repositories, you can configure a credential helper for each repository.
      • This saves credentials to a custom file for just that repository, rather than globally.
  • Remove the Credential Configuration:
    • git config --global --unset <config-name>
      • E.g.:  git config --global --unset credential.helper
    • Verify Removal:
      • git config --list
        • Make sure the configurations you removed are no longer present in the output.
        • After successfully removing the credentials, Git won't use the stored credentials for GitHub authentication, and you'll be prompted to enter your credentials when needed.
    • Windows OS:
      • Open the "Credentials Manager"
        • Click on "Windows Credentials" button
        • Delete the GitHub entries under Generic Credentials

  • Show the URL a local Git repository was originally cloned from:
    • When Offline:
      • git config --get remote.origin.url
      • OR
      • git remote get-url origin
    • When Online and authenticated with GitHub:
      • git remote show origin

  • Check which user Git is using to commit your changes:
    • git config --list | grep user.name
    • git config --list | grep user.email

  • Change the username or email that Git is using:
    • git config --global user.name "Your Name"
    • git config --global user.email "your_email@example.com"

  • Show the commit ID:
    • git rev-parse HEAD
    • git rev-parse --short HEAD

  • Show commit logs:
    • git log

  • Show the current commit:
    • git log | head -n 1

  • Show the Difference between two commits:
    • git diff <first-branch-name-or-commit-id> <second-branch-name-or-commit-id> -- <filename>
    • git diff develop origin/master -- README.md

  • How to Revert a single file:
    • git checkout [commit-ID] -- path/to/file
    • git checkout [commit ID]~1 -- path/to/file

  • How to Revert a single commit:
    • git revert <commit-hash>
    • git revert HEAD~2

  • How to Restore working tree files:
    • git restore path/to/file
    • The command can also be used to restore the content in the index with --staged, or restore both the working tree and the index with --staged --worktree
      • To restore all files in the current directory
        • git restore .

    • List your branches. A (*) will appear next to the currently active branch:
      • git branch
      • git branch -a

    • Switch to another branch and check it out into your working directory:
      • git checkout <branch-name> [--force]

    • Fetch down all the branches from the Git remote:
      • git fetch [alias]

    • Stashing the changes:
      • git stash takes your uncommitted changes (both staged and unstaged), saves them away for later use, and then reverts them from your working copy. Run git status to see the dirty state, then run git stash to stash the changes:
        • git stash
        • git stash list
          • stash@{0}: WIP on my-branch: 1234567 Refactoring some code
          • stash@{1}: WIP on my-branch: 1234567 Refactoring some code
      • Re-applying your stashed changes:
        • The git stash pop removes the changes from your stash and re-applies them to your working copy.
          • git stash pop
        • You can choose which stash to re-apply like this:
          • git stash pop stash@{1}
        • If you want to re-apply the changes and keep them in your stash:
          • git stash apply stash@{0}
        • Use git stash show to view a summary of a stash:
          • git stash show
          • git stash show stash@{0}
        • You can also use the -p or --patch options to see the full diff of a stash:
          • git stash show -p

    • Commit your changes:
      • git add .
      • git commit -m "commit message"


    • Pull the updated code:
      • git pull
      • If the pull fails because of local file changes, you need to handle them first. See the Stash information above.
        • git stash
      • If you want to ignore and lose all changes that you made to that local repository, then you can:
        • git fetch origin
        • git reset --hard
        • git clean -f
        • git pull

    • Making a Git push from a detached head:
      • git branch new-branch-name
      • git push -u origin new-branch-name

    • Create a new branch from a commit ID:
      • Navigate to the commit in question, and then click on the <> button next to the commit in your history. This will show the web interface for browsing that particular commit's snapshot of the repository.
      • Click the down arrow on this button to show the dropdown. You can create a new branch simply by typing in the name of the new branch in the search field.
      • Click on the "Create branch: ..." link at the bottom of this dropdown, and a new branch should be created.

    • Git Reset vs Git Revert vs Git Restore
      • git-revert:  is about making a new commit that reverts the changes made by other commits.
      • git-restore:  is about restoring files in the working tree from either the index or another commit. This command does not update your branch. The command can also be used to restore files in the index from another commit.
      • git-reset:  is about updating your branch, moving the tip in order to add or remove commits from the branch. This operation changes the commit history. This command can also be used to restore the index, overlapping with git restore.
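      • Typical invocations (a sketch; the commit references are illustrative):
        • git revert <commit-hash>   (creates a new commit that undoes <commit-hash>)
        • git restore --source=HEAD~1 -- path/to/file   (restores one file from the previous commit without moving the branch)
        • git reset --hard HEAD~1   (moves the branch tip back one commit and discards the changes)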

    • Retrieve an entire repository from a hosted location via URL:
      • git clone <REPO-URL>

    • Create, Commit and Push a Tag:
      • git tag <tag-name> -a
      • git push origin <tag-name>
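      • E.g., an annotated release tag (the tag name and message are illustrative):
        • git tag -a v1.0.0 -m "Release 1.0.0"
        • git push origin v1.0.0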

    • Open a repository in browser using GitHub CLI:
      • gh repo view -w


    GitHub Workflow:

    A workflow run is made up of one or more jobs, which run in parallel by default. To run jobs sequentially, you can define dependencies on other jobs using the jobs.<job_id>.needs keyword.
    Each job runs in a runner environment specified by runs-on.
    You can run unlimited jobs as long as you are within the workflow usage limits.
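    A minimal workflow sketch showing runs-on and a sequential dependency declared with needs (the file name, job names, and steps are illustrative, not from a real project):

      # .github/workflows/example.yml
      name: example
      on: [push]
      jobs:
        build:
          runs-on: ubuntu-latest
          steps:
            - uses: actions/checkout@v4
            - run: echo "build"
        deploy:
          needs: build        # runs only after the build job succeeds
          runs-on: ubuntu-latest
          steps:
            - run: echo "deploy"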

    • Runners:
      • GitHub UI -> Your Organizations ->  <organization-name> -> Settings -> Actions -> Runners
      • Create a new Runner -> GitHub-hosted runner
        • Name: <our-runner-name> (e.g: ubuntu-22.04-4cores-16gb)
        • Image: Ubuntu
        • Ubuntu Version: 22.04
        • Size: 4-cores 16GB RAM ...
        • Auto-scaling - Maximum Job Concurrency: 2
        • Groups: <our-runner-group>
        • Networking - No public IP address



    Troubleshooting:
    • "remote: Repository not found":
      • Remove all the github.com credential details from the system.
        • For macOS:
          • Delete the github.com password from Keychain Access.
        • For Windows:
          • Delete the credentials from Credential Manager.



    References:

    Sunday, August 30, 2020

    Programming ATTiny85 Microcontroller on Arduino

    MacOS 10.15.6
    Windows 10
    Arduino 1.8.13


    Goals:

    • Install drivers and libraries to develop firmware for the ATTiny85 Microcontroller using Arduino IDE, including the ATTiny85 chip alone (uploading firmware using Arduino Uno) and also ATTiny85 Dev Board.



    ATTINY85 Features:
    • PB3 - OWOWOD - One Wire / One Way Output for Debugging library. It allows you to output text from the ATtiny85 (or similar) microcontroller through a USB-to-Serial or TTL converter to the computer screen, using a COM port monitoring tool.
    • Watchdog timer
    • The reset pin can also be used as a (weak) I/O pin.
    • Specifications:
      • CPU Architecture and Speed: 8-bit RISC architecture, 1 MIPS @ 1 MHz
      • CPU Frequency: 0-8 MHz calibrated internal R-C oscillator
      • Operating Voltage Range: +1.8V to +5.5V (ATTINY85V); +2.7V to +5.5V (ATTINY85) (+6.0V being the absolute maximum supply voltage)
      • GPIO Ports: 6 GPIO pins in total
      • Interrupts: one external interrupt on INT0 – PB2 - Pin 7
      • Timers: one 8-bit Timer/Counter with compare modes; one 8-bit high-speed Timer/Counter
      • PWM: 3 PWM pins (PB0, PB1 and PB4)
      • Maximum DC Current per I/O Pin: 40 mA
      • Maximum DC Current through VCC and GND Pins: 200 mA
      • SPI: one SPI communication channel with pins MOSI – GPIO5, MISO – GPIO6, SCK – GPIO7
      • I2C: one I2C channel
      • Operating Temperature: -55ºC to +125ºC
      • ADC: 10-bit ADC on 4 pins (PB2, PB3, PB4 and PB5)
      • DAC: not available
      • Enhanced USART Module: 1 channel
      • SRAM: 512 bytes
      • Flash (Program Memory): 8K bytes [10,000 write/erase cycles]
      • EEPROM: 512 bytes
      • Comparator: one analog comparator with input pins AIN0 – GPIO5 and AIN1 – GPIO6
      • Low Power Consumption: Active Mode: 300 μA at 1 MHz, 1.8V; Power-down Mode: 0.1 μA at 1.8V

    Install:

    • By default the Arduino IDE does not support the ATtiny85, so we need to add an ATtiny board definition URL to the Arduino Boards Manager:
      • Arduino -> Preferences -> Additional Boards Manager URLs
        • Copy & paste the following URL (if you already have a board manager URL just add a comma before pasting):
          • https://raw.githubusercontent.com/damellis/attiny/ide-1.6.x-boards-manager/package_damellis_attiny_index.json
        • Also add the URL to support digistump:
          • http://digistump.com/package_digistump_index.json
      • Restart the Arduino IDE
    • Install the ATtiny Board Package:
      • Tools -> Board -> Boards Manager
        • Install attiny board package




    • Install Sensor Libraries:
      • Sensor NRF24L01:
        • NRFLite Library:
          • Arduino -> Sketch -> Include Library -> Library Manager
            • Install NRFLite library by Dave Parson


    • Prepare Arduino Uno to upload the code to the ATTINY85 microcontroller:
      • Set the Arduino Uno Into ISP Mode:
        •  We will need to "prep" the Arduino fist by uploading the ISP sketch to it.
          • Arduino IDE select File -> Examples -> 11. Arduino ISP-->ArduinoISP
          • The ISP sketch should open and upload it to your Arduino Uno
      • Arduino - Attiny85 wiring:

    • Making the ATtiny85 Arduino Compatible:
      • It's required to burn the Arduino bootloader onto the chip to make sure the chip will accept any programs uploaded via the Arduino IDE.
        • Tools -> Board scroll to the bottom select ATtiny25/45/85
        • Tools -> Processor -> ATtiny85
        • Tools -> Clock -> 8 MHz (internal)
        • Tools -> Programmer -> Arduino as ISP
        • Check that all wiring, capacitor, and board selections are correct
        • Finally select Burn Bootloader
    • Uploading the Sketch to Attiny85 chip:
      • Tools -> Board scroll to the bottom select ATtiny25/45/85
      • Tools -> Processor -> ATtiny85
      • Tools -> Clock -> 8 MHz (internal)
      • Tools -> Programmer -> Arduino as ISP
      • Upload the sketch


    Testing:


    References:





    Wednesday, May 27, 2020

    Install MicroK8s on MacOS or Ubuntu

    MacOS v10.15.4 (Catalina)
    Ubuntu 20.04.4 (Focal)
    MicroK8s v1.24


    Goals:
    • Install and run a Kubernetes Cluster on MacOS or on Ubuntu using MicroK8s.


    Install:
    • Before installing Microk8s:
      • Make sure the machine's hostname does not contain capital letters or underscores; these are not valid in a Kubernetes node name and will cause node registration to fail.
      • Make sure the host network is configured correctly: ping a site on the internet (e.g. www.google.com), ping the gateway, and ping the DNS server configured on the machine.
    • MacOS:
      • Download and install Multipass, a VM manager for running Ubuntu, along with the other packages required by MicroK8s:
      • Open a terminal window:
        • brew install ubuntu/microk8s/microk8s
        • microk8s install
    • Ubuntu:
      • Open a terminal window to install MicroK8s. The --channel parameter is optional.
        • sudo apt update
        • sudo snap install microk8s --classic --channel=1.24/stable
      • Change permissions to run MicroK8s without sudo:
        • sudo usermod -aG microk8s $USER
        • sudo chown -f -R $USER ~/.kube
        • sudo shutdown -r now
      • [Optional] You may need to configure the Ubuntu firewall to allow pod-to-pod and pod-to-internet communications:
        • sudo ufw allow in on cni0
        • sudo ufw allow out on cni0
        • sudo ufw default allow routed
    • The following steps are the same for both OS
    • Check the MicroK8s status:
      • microk8s status --wait-ready
    • Enable MicroK8s Addons:
      • microk8s enable rbac dns ha-cluster

      • Wait until you see that the new addons we just installed are already enabled:
        • microk8s status --wait-ready

    • Install and configure a default kubernetes Storage Class:
      • So far, we have not yet installed a Storage Class for our MicroK8s installation.
      • Depending on the installation method, your kubernetes cluster may be deployed with an existing StorageClass that is marked as default. This default StorageClass is then used to dynamically provision storage for PersistentVolumeClaims that do not require any specific storage class. However, the pre-installed default StorageClass may not fit well with your expected workload.
      • To see the Storage Classes available, run the command:
        • kubectl get storageclasses
      • Option 1 - The MicroK8s default storage class:
        • Pros: Less cpu and memory resources are used by kubernetes;
        • Cons: We can't include these persistent volumes in the Velero Backup
        • How to install:
          • microk8s enable hostpath-storage
      • Option 2 - The Openebs storage classes (recommended):
        • Pros: We can include these persistent volumes in the Velero Backup;
        • Cons: More cpu and memory resources will be used by kubernetes.
        • How to install:
          • [Optional] sudo apt install open-iscsi
          • sudo systemctl enable iscsid
          • microk8s enable openebs
        • Mark a StorageClass as default:
          • microk8s kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
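          • To verify, run `microk8s kubectl get storageclasses` again; the chosen class should now show "(default)" next to its name.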

    • [Optional] Enable Kubernetes Dashboard:
      • microk8s enable dashboard

      • Again, wait until you see that the dashboard addon we just installed is already enabled:
        • microk8s status --wait-ready
    • Assuming that Role-Based Access Control (RBAC) is enabled in your microk8s installation, we need to create an Administrative Service Account.
      • Create the administrative service account in the kube-system namespace:
        • microk8s kubectl -n kube-system create serviceaccount admin-user 
      • Grant permissions to administrative service account:
        • microk8s kubectl create clusterrolebinding --clusterrole=cluster-admin --serviceaccount=kube-system:admin-user admin-user-rolebinding 
      • Create the access token:
        • microk8s kubectl -n kube-system create token admin-user
    • We can access the Dashboard using its Cluster IP address:
      • The kubernetes-dashboard service in the kube-system namespace has a Cluster IP. To get the Cluster IP and port of kubernetes-dashboard we need to run the command below:
        • microk8s kubectl get endpoints -A | grep kubernetes-dashboard
      • Point your browser to  https://<kubernetes-dashboard-endpoint-ip>:<port>
    • At this point you can access kubernetes Dashboard only on your cluster host machine. That's because your cluster host machine can reach the cluster internal IPs. 
    • However, if we need to access the Dashboard from another machine we have two options to do that. We can expose the Dashboard as an external service or we can configure Kubernetes Ingress to redirect external requests to the internal service. 
    • Option 1 - Expose Dashboard as an external Service: 
      • In order to expose the Dashboard to external access we need to do some extra configuration. One downside of this approach is that you have to use the Firefox browser to access the Dashboard (other browsers may refuse the Dashboard's self-signed certificate).
        • microk8s kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort","ports":[{"port":443,"nodePort":31234}]}}'
      • After this command, the Kubernetes Dashboard is exposed in your microk8s installation on port 31234. Note that Service NodePorts can only be in a "special" port range, 30000~32767 by default. 
        • Point your Firefox browser to  https://<kubernetes-host-machine>:31234
    • Option 2 - Configure Kubernetes Ingress Addon:
      • microk8s enable ingress
      • Once again, wait until you see that the Ingress addon we just installed is already enabled:
        • microk8s status --wait-ready
      • Create a file (eg. ingress-dashboard-config.yml) with the content below. Remember to use the right port for the Dashboard in the content below (at the last line of the file), the port number we got from `microk8s kubectl get endpoints`  command above:
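        • A minimal sketch of what such a manifest might look like (assuming the NGINX Ingress addon, which MicroK8s registers under the `public` ingress class, and the kubernetes-dashboard service listening on port 443; adjust the names and the port to your cluster):

          apiVersion: networking.k8s.io/v1
          kind: Ingress
          metadata:
            name: kubernetes-dashboard-ingress
            namespace: kube-system
            annotations:
              nginx.ingress.kubernetes.io/rewrite-target: /$2
              nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
          spec:
            ingressClassName: public
            rules:
              - http:
                  paths:
                    - path: /dashboard(/|$)(.*)
                      pathType: ImplementationSpecific
                      backend:
                        service:
                          name: kubernetes-dashboard
                          port:
                            number: 443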

      • microk8s kubectl apply -f ingress-dashboard-config.yml 
      • Point your browser (anyone) to  https://<kubernetes-host-machine>/dashboard/

     

    • And finally we can log in to the Dashboard using the token created for the admin-user:

    • [Optional] Enable MicroK8s built-in insecure registry. The registry is hosted within the Kubernetes cluster and is exposed as a NodePort service on port 32000 of the localhost. The size of the registry should be >= 20Gi:
      • microk8s enable registry:size=20Gi
        • Enabling the private registry
        • Applying registry manifest
        • namespace/container-registry created
        • persistentvolumeclaim/registry-claim created
        • deployment.apps/registry created
        • service/registry created
        • The registry is enabled
        • The size of the persistent volume is 20Gi
      • To disable the built-in registry:
        • microk8s disable registry
        • microk8s disable storage:destroy-storage
      • Pushing to this insecure registry may fail in some versions of Docker unless the daemon is explicitly configured to trust it. To address this, on Docker Desktop open Preferences -> Docker Engine and add the following configuration; on Linux, edit /etc/docker/daemon.json:
        • sudo nano /etc/docker/daemon.json
          • { "insecure-registries" : ["localhost:32000"] }
        • Click the `Apply & Restart` button (Docker Desktop) or restart the Docker daemon (Linux)

    • [Optional] Use a custom network domain for Kubernetes, other than cluster.local
      • Edit the CoreDNS configmap:
        • microk8s kubectl -n kube-system edit configmap coredns
      • Add this line below to the CoreDNS configmap:
        • rewrite name substring my-custom-domain.com cluster.local
      • Restart the CoreDNS service, or restart the MicroK8s.

    • Check the MicroK8s installation:
      • microk8s inspect
        • Important: Pay attention to possible `configuration needed` messages.
      • microk8s config
      • microk8s kubectl cluster-info
      • microk8s kubectl version

      • microk8s kubectl get nodes
        • Check the status of the node. It must be "Ready".
      • microk8s kubectl top nodes
        • Check the CPU and Memory usage
      • microk8s kubectl top pods -n kube-system
        • Check the CPU and Memory usage of the system pods
    • Create an alias for kubectl:
      • sudo snap alias microk8s.kubectl kubectl
      • sudo snap alias microk8s.kubectl k8s
    • Ubuntu only:
      • Install Docker:
        • sudo snap install docker
        • Alternatively, install using apt-get
        • Change permissions to run Docker without sudo:
          • sudo groupadd docker
          • sudo usermod -aG docker $USER
        • Or, in case you executed docker using sudo before:
          • sudo chown "$USER":"$USER" /home/"$USER"/.docker -R
          • sudo chmod g+rwx "$HOME/.docker" -R
        • sudo chown root:docker /var/run/docker.sock
        • docker version
        • docker-compose version
        • sudo reboot

    Tests:
    • Check the Kubernetes DNS service:
      • microk8s kubectl get endpoints
      • microk8s kubectl -n kube-system get pod
      • microk8s kubectl -n kube-system get svc kube-dns
      • microk8s kubectl -n kube-system get endpoints kube-dns

    • Create a simple Pod to use as a DNS test environment:
      • microk8s kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
      • microk8s kubectl exec -i -t dnsutils -- nslookup kubernetes.default
      • microk8s kubectl exec -i -t dnsutils -- nslookup www.google.com
      • If we have the SERVFAIL in the response of the last command, then we need to reconfigure the Kubernetes DNS:
        • Check the local DNS configuration first. Take a look inside the resolv.conf file. (See Customizing DNS Service and Known issues below for more information)
          • microk8s kubectl exec -ti dnsutils -- cat /etc/resolv.conf
        • Troubleshooting - Check for errors on the DNS pod:
          • microk8s kubectl logs --namespace=kube-system -l k8s-app=kube-dns

          • Check definition of CoreDNS:
            • microk8s kubectl get configmap -n kube-system coredns -o yaml
          • Fetch your actual DNS using the command:
            • nmcli dev show 2>/dev/null | grep DNS | sed 's/^.*:\s*//'
          • Change forward address in CoreDNS config map from default (8.8.8.8 8.8.4.4) to your actual DNS:
            • microk8s kubectl -n kube-system edit configmap coredns
              • Change the following line, in the `Corefile` section, from this (press the <i> key to enter insert mode):
                • forward . 8.8.8.8 8.8.4.4
              • To this:
                • forward . your.dns.ips.here.separated.by.space
              • And save (using <Esc> and <:> <x> <Enter> keys)
            • After saving the changes, it may take up to a minute or two for Kubernetes to propagate these changes to the CoreDNS pods
            • Now test the DNS resolution again:
              • microk8s kubectl exec -i -t dnsutils -- nslookup www.google.com

      • When the DNS is running:
        • If we have for example a service called  my-service  running on a namespace called  my-namespace, and the domain name for our cluster is  cluster.local, then the service can be accessed with the address:
          • my-service.my-namespace.svc.cluster.local
        • If a Pod in the default namespace has the IP address 172.17.0.3, and the domain name for our cluster is  cluster.local, then the Pod has a DNS name:
          • 172-17-0-3.default.pod.cluster.local
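        • You can confirm these names resolve from inside the cluster using the dnsutils Pod created above (the service name here is illustrative):
          • microk8s kubectl exec -i -t dnsutils -- nslookup my-service.my-namespace.svc.cluster.local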
    • Deploy an application:
      • microk8s kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
        • deployment.apps/kubernetes-bootcamp created
    • See the application Pod status
      • microk8s kubectl get pods
    NAME                                   READY   STATUS    RESTARTS   AGE
    kubernetes-bootcamp-6f6656d949-z82zm   1/1     Running   0          1m2s
      • microk8s kubectl describe pod/<pod-id>
    ...
    Events:
      Type    Reason     Age    From                  Message
      ----    ------     ----   ----                  -------
      Normal  Scheduled  2m15s  default-scheduler     Successfully assigned default/kubernetes-bootcamp-6f6656d949-z82zm to microk8s-vm
      Normal  Pulling    2m13s  kubelet, microk8s-vm  Pulling image "gcr.io/google-samples/kubernetes-bootcamp:v1"
      Normal  Pulled     108s   kubelet, microk8s-vm  Successfully pulled image "gcr.io/google-samples/kubernetes-bootcamp:v1"
      Normal  Created    108s   kubelet, microk8s-vm  Created container kubernetes-bootcamp
      Normal  Started    108s   kubelet, microk8s-vm  Started container kubernetes-bootcamp

    • Delete the deployment:
      • microk8s kubectl delete deployment kubernetes-bootcamp
    • [Attention - Don't do this if you installed the Kubernetes Ingress Addon] Deploy NGINX server:
      • microk8s kubectl create deployment nginx --image=nginx
      • microk8s kubectl get deployments
      • microk8s kubectl describe deployment nginx
      • microk8s kubectl create service nodeport nginx --tcp=80:80
      • microk8s kubectl get svc
        • service/nginx        NodePort    10.152.183.133   <none>        80:30618/TCP   7s
      • Open the nginx web page on a browser or using curl / wget:
        • http://<kubernetes-host-machine>:<nginx-service-port>
      • Delete the NGINX deployment
        • microk8s kubectl delete deployment nginx

    Configure Access to Multiple Clusters (Work-in-progress):
    • Kubeconfig file:
      • echo $KUBECONFIG
      • Use multiple kubeconfig files at the same time and view merged config
        • export KUBECONFIG=~/.kube/config:~/.kube/config2 
    • Show merged kubeconfig settings:
      • microk8s kubectl config view
      • microk8s config
      • microk8s kubectl config view --raw
    • Display list of contexts:
      • microk8s kubectl config get-contexts
    • Display the current context:
      • microk8s kubectl config current-context
    • Get a list of users:
      • microk8s kubectl config view -o jsonpath='{.users[*].name}'
    • To add a new cluster, we need to add a user entry that supports basic auth; it will be used when connecting with the kubeconfig:
      • microk8s kubectl config set-credentials cluster-admin --username=admin --password=<password>
    • Set the default context to <context-name>:
      • microk8s kubectl config use-context <context-name>
    • Set a context utilizing a specific username and namespace:
      • microk8s kubectl config set-context <context-name> --user=<username> --namespace=<namespace>

    Add another Node to the MicroK8s Cluster:
    • The MicroK8s instance on which this command is run will be the master of the cluster and will host the Kubernetes control plane:
      • microk8s add-node
        • Join node with: microk8s join 192.168.64.2:25000/19c8a4677a2f03ea738749e9baecec88
    • The `add-node` command prints a microk8s join command which should be executed on the MicroK8s instance that you wish to join to the cluster:
      • microk8s join 192.168.64.2:25000/19c8a4677a2f03ea738749e9baecec88
    • Joining a node to the cluster should only take a few seconds. Afterwards you should be able to see that the node has joined by running the command below on the master:
      • microk8s kubectl get nodes
        • NAME            STATUS   ROLES    AGE    VERSION
        • 192.168.1.110   Ready    <none>   35s    v1.18.2-41+4706dd1a7d2b25
        • microk8s-vm     Ready    <none>   2d2h   v1.18.2-41+b5cdb79a4060a3
    • Running the `get nodes` command on the secondary (leaf) node will return the message:
      • This MicroK8s deployment is acting as a node in a cluster. Please use the microk8s kubectl on the master.
    • Notes:
      • The pods already running on the secondary (leaf) node will be stopped and will not be restarted anywhere (neither on the leaf node nor on the master).
      • The Kubernetes Config file is saved by default at /var/snap/microk8s/current/credentials/client.config file on the master node.
      • How to show the default port used by the cluster:
        • cat /var/snap/microk8s/current/credentials/client.config | grep server
          • server: https://127.0.0.1:16443
    • ToDo: Configure k8s command to access the remote Master.
    • To remove a node from the cluster, we have two steps. First, use the command below on the master:
      • microk8s remove-node <node-name>
      • Then use the following command on Node:
        • microk8s leave
          • Stopped.
          • Started.
          • Enabling pod scheduling
          • node/ubuntu already uncordoned
    • Verify the remaining cluster nodes on the master:
      • microk8s kubectl get nodes

    See other important commands at Kubernetes Knowledge Base


    Clean-up
    • Remove Microk8s on Ubuntu:
      • sudo snap remove microk8s
      • sudo snap saved
      • sudo snap forget <snapshot-set-id>
    • Remove Microk8s on macOS:
      • brew uninstall ubuntu/microk8s/microk8s


    References:

    Friday, May 8, 2020

    Create Multi-arch Docker Images using Docker BuildX CLI plugin

    Software:
    • MacOS 10.15.4  64-bit (Catalina)
    • Ubuntu Server 19.04  64-bit (eoan)
    • Raspbian GNU/Linux 10  32-bit (buster)
    • Docker/Buildx v0.3.1-tp-docker

    Hardware:
    • MacBook:
      • MacOS 64-bit OS
      • Docker 19.03.8
    • Raspberry Pi 1, 2, 3 or 4:
      • Raspbian 10 32-bit OS
      • Docker 18.09.1
    • Raspberry Pi 3 or 4:
      • Ubuntu Server 64-bit OS
      • Docker 19.03.6

    Goals:
    • Verify that Docker images are platform dependent (single-arch).
    • Learn how to build multi-arch docker images that can be used on several platforms (e.g. amd64, arm64, arm).

    Prerequisites:
    • An externally-accessible insecure Docker Registry running on the MacOS machine. See how to configure it here
    • Configure Docker to use experimental features
    • Configure Docker to work with insecure-registry

    Build a simple Docker Image:
    • Let's start creating a simple Dockerfile to build an image:
      • mkdir ~/docker-test
      • cd ~/docker-test
      • nano ./Dockerfile
        • FROM ubuntu:18.04
        • CMD ["sleep", "infinity"]
    • Build an image on a MacBook and tag as AMD64:
      • docker login 192.168.1.107:5000
      • docker build -t 192.168.1.107:5000/docker-test:amd64 .
      • Push the image to the private docker registry:
        • docker push 192.168.1.107:5000/docker-test:amd64
      • Try to run a container based on this image:
        • Machine: MacBook
          • docker run --entrypoint bash -it 192.168.1.107:5000/docker-test:amd64
            • root@d465832164be:/# uname -m
            • x86_64
        • Machine: Raspberry Pi running Ubuntu 64-bit OS
          • docker run --entrypoint bash -it 192.168.1.107:5000/docker-test:amd64
            • standard_init_linux.go:211: exec user process caused "exec format error"
        • Machine: Raspberry Pi running Raspbian 32-bit OS
          • docker run --entrypoint bash -it 192.168.1.107:5000/docker-test:amd64
            • standard_init_linux.go:207: exec user process caused "exec format error"
    • Build an image on Raspberry Pi running Raspbian 32-bit OS and tag as ARM32
      • Copy the Dockerfile to a folder on Raspberry Pi 32-bit OS machine and `cd` to it:
        • docker login 192.168.1.107:5000
        • docker build -t 192.168.1.107:5000/docker-test:arm32 .
      • Push the image to the private docker registry:
        • docker push 192.168.1.107:5000/docker-test:arm32
      • Try to run a container based on this image:
        • Machine: MacBook
          • docker run --entrypoint bash -it 192.168.1.107:5000/docker-test:arm32
            • root@ead4869f18b9:/# uname -m
            • armv7l
        • Machine: Raspberry Pi running Ubuntu 64-bit OS
          • docker run --entrypoint bash -it 192.168.1.107:5000/docker-test:arm32
            • root@95b80ff7d56f:/# uname -m
            • aarch64
        • Machine: Raspberry Pi running Raspbian 32-bit OS
          • docker run --entrypoint bash -it 192.168.1.107:5000/docker-test:arm32
            • root@d8bbc13939e5:/# uname -m
            • armv7l
    • Build an image on Raspberry Pi running Ubuntu 64-bit OS and tag as ARM64:
      • Copy the Dockerfile to a folder on the Raspberry Pi running Ubuntu 64-bit OS and `cd` to it:
        • docker login 192.168.1.107:5000
        • docker build -t 192.168.1.107:5000/docker-test:arm64 .
      • Push the image to the private docker registry:
        • docker push 192.168.1.107:5000/docker-test:arm64
      • Try to run a container based on this image:
        • Machine: MacBook
          • docker run --entrypoint bash -it 192.168.1.107:5000/docker-test:arm64
            • root@feb84b4dfc5c:/# uname -m
            • aarch64
        • Machine: Raspberry Pi running Ubuntu 64-bit OS
          • docker run --entrypoint bash -it 192.168.1.107:5000/docker-test:arm64
            • root@c9e0aa8eb98e:/# uname -m
            • aarch64
        • Machine: Raspberry Pi running Raspbian 32-bit OS
          • docker run --entrypoint bash -it 192.168.1.107:5000/docker-test:arm64
            • standard_init_linux.go:207: exec user process caused "exec format error"
    • Listing Docker images:
      • docker images | grep 192.168.1.107
        • 192.168.1.107:5000/docker-test  arm32  51c74aa10614   46.7MB
        • 192.168.1.107:5000/docker-test  arm64  c6dd32c882f6    57.7MB
        • 192.168.1.107:5000/docker-test  amd64  32458c475b0e  64.2MB
      • docker image inspect 192.168.1.107:5000/docker-test:arm32 | grep Architecture
        • "Architecture": "arm",
      • docker image inspect 192.168.1.107:5000/docker-test:arm64 | grep Architecture
        • "Architecture": "arm64",
      • docker image inspect 192.168.1.107:5000/docker-test:amd64 | grep Architecture
        • "Architecture": "amd64",
    • Conclusions:
      • The images built with `docker build` command are platform dependent.
      • An image built on ARM32 platform CAN be used on ARM64 platform.
      • An image built on ARM64 platform can NOT be used on ARM32 platform. 
      • The Docker Desktop (for MacOS and Windows) has QEMU emulation for the arm/v6, arm/v7 and arm64 Docker images, so it can run images for many platforms regardless of the platform the image was built for.


    Build Multi-arch simple Docker Images:
    • Let's start creating a simple Dockerfile to build an image:
      • mkdir ~/docker-test-multiarch
      • cd ~/docker-test-multiarch
      • nano ./Dockerfile
        • FROM ubuntu:18.04
        • ARG TARGETPLATFORM
        • ARG BUILDPLATFORM
        • RUN echo "I am running on $BUILDPLATFORM, building for $TARGETPLATFORM"
        • CMD ["sleep", "infinity"]
    • Build multi-architecture images on a MacBook or Linux:
      • cd ~/docker-test-multiarch
      • Log in on the registry server:
        • docker login -u <registry-user> [registry_url]
          • enter the registry password
      • Create a new instance of an isolated builder:
        • docker buildx create --name multiarch-builder --platform linux/amd64,linux/arm64
      • Switch to the new builder instance. Build commands invoked after this command will run on the specified builder:
        • docker buildx use multiarch-builder
      • [Optional] docker buildx ls
      • [Optional] docker buildx inspect multiarch-builder
      • Build the images for the desired platforms (architectures):
        • docker buildx build -t <registry-url>/<image-name>:<tag> [-f Dockerfile] --platform linux/amd64,linux/arm64 --push .
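        • To confirm the result is a multi-arch image, inspect the manifest list that was pushed to the registry (the image reference is the one used in the build command):
          • docker buildx imagetools inspect <registry-url>/<image-name>:<tag>
            • The output should list one manifest per requested platform (e.g. linux/amd64 and linux/arm64).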


      • Cleanup the environment:
        • docker buildx use default
        • docker buildx stop multiarch-builder
        • docker buildx rm multiarch-builder
     
    References: