Monday, September 18, 2023

View SSL Certificate Information

 Linux or MacOS


We can use the openssl command-line tool to check information about an SSL certificate in a bash terminal. Here's how we can do it:


View Certificate Information:

To view the details of an SSL certificate in a file, we can use the following command:
  • openssl x509 -in  <certificate-file-name>  -text -noout

Check Certificate Expiry:

To quickly check the expiry date of a certificate, we can use:
  • openssl x509 -in  <certificate-file-name>  -enddate -noout

Verify SSL Connection:

If we want to verify the SSL connection of a website, we can use the following command:
  • openssl s_client -connect <domain-name>:443
This command will initiate an SSL connection to the given domain on port 443 (the default HTTPS port) and display detailed information about the certificate, the certificate chain, and the SSL handshake.
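
If we only need the validity dates and identity fields rather than the full handshake output, we can pipe the server certificate into openssl x509 (a sketch; example.com and port 443 are placeholders):
  • echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -dates -subject -issuer
This closes the connection immediately and prints only the notBefore/notAfter dates, the subject, and the issuer of the presented certificate.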

Check Certificate Format:

To check the format of a certificate file, we can use the following command:
  • openssl x509 -in  <certificate-file-name>  -text -noout
If the certificate is in PEM format, this command will display the certificate details. If the certificate is not in PEM format, we might get an error indicating that the input file could not be loaded.

Check Private Key Format:

To check the format of a private key file, we can use the following command:
  • openssl rsa -in <private-key-file-name> -check
If the private key is in PEM format, this command will display the private key details. If the private key is not in PEM format, we might get an error indicating that the input file could not be loaded.
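
A related check is whether a private key actually matches a given certificate. One common approach (a sketch, using placeholder file names) is to compare the hash of the modulus of each; the two outputs should be identical:
  • openssl x509 -noout -modulus -in certificate.pem | openssl md5
  • openssl rsa -noout -modulus -in private-key.pem | openssl md5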

Check CSR (Certificate Signing Request) Format:

To check the format of a CSR file, we can use the following command:
  • openssl req -in  <csr-file-name>   -text -noout
If the CSR is in PEM format, this command will display the CSR details. If the CSR is not in PEM format, we might get an error indicating that the input file could not be loaded.

Remember that these commands are intended to provide information about the format of the files and their content. If we encounter errors or unexpected outputs, it's possible that the files are corrupted or not in the expected format.
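
If such an error suggests the file is DER-encoded rather than PEM, we can convert it and re-run the checks above (a sketch; the file names are placeholders):
  • openssl x509 -in certificate.der -inform DER -out certificate.pem -outform PEM
  • openssl rsa -in private-key.der -inform DER -out private-key.pem -outform PEM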

Check .pfx File Contents:

To check the format of a .pfx file, we can use the following command:
  • openssl pkcs12 -info -in  <pfx-file-name>
This command will provide detailed information about the contents of the .pfx file, including the certificate(s) and any additional information.

Check the Certificate Chain in the .pfx File:

We can use the following command to view the certificate chain in our .pfx file:
  • openssl pkcs12 -in  <pfx-file-name>  -clcerts -nokeys -out certificate-chain.pem
This command will extract the client (leaf) certificate from the .pfx file and save it in a PEM-encoded file (certificate-chain.pem); to also extract the CA certificates that complete the chain, run the same command with -cacerts in place of -clcerts.

Check the Private Key in the .pfx File:

To verify the private key within the .pfx file, we can use the following command:
  • openssl pkcs12 -in  <pfx-file-name>  -nocerts -nodes | openssl rsa -check
This command extracts the private key from the .pfx file and checks its validity.

Check .pfx Expiry Date:

To check the expiry date of the certificate inside the .pfx file, we can use the following command:
  • openssl pkcs12 -in <pfx-file-name> -clcerts -nokeys | openssl x509 -noout -enddate
This command will display the expiry date of the certificate.

Check .pfx Password:

If the .pfx file is password-protected, we will need to enter the correct password to access its contents.
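
For scripted checks, the password can be supplied non-interactively instead of being typed at the prompt (a sketch; the password value and variable name are placeholders):
  • openssl pkcs12 -info -in <pfx-file-name> -noout -passin pass:MyPfxPassword
  • openssl pkcs12 -info -in <pfx-file-name> -noout -passin env:PFX_PASSWORD
The env: form reads the password from the named environment variable, which keeps it off the command line itself.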


SSL Certificate Checker Tools:

There are online SSL certificate checker tools that can help you retrieve the CA certificate chain associated with your SSL certificate. Tools like "SSL Checker" or "SSL Shopper" can display the full certificate chain, including the root and intermediate certificates.

Saturday, August 14, 2021

Backup and Restore Kubernetes resources using Velero and MinIO

MacOS 11.5
MicroK8s 1.20.9
MinIO
Velero 1.6.3


Info:

  • MinIO is a high-performance object storage server released under the GNU AGPL v3.0 license. It is API compatible with the Amazon S3 cloud storage service. It can handle unstructured data such as photos, videos, log files, backups, and container images, with a maximum supported object size of (currently) 5TB.
  • Velero is an open source tool to safely backup and restore, perform disaster recovery, and migrate Kubernetes cluster resources and persistent volumes.
  • MicroK8s is the simplest production-grade upstream Kubernetes. Lightweight and focused. Single command install on Linux, Windows and macOS. Made for devops, great for edge, appliances and IoT. Full high availability Kubernetes with autonomous clusters.

Install:

  • MinIO will store the kubernetes backup files.
  • Run standalone MinIO server on Docker:
    • Create a local folder to store the minio data:
      • mkdir /Users/marcus/minio/data
    • docker container create -p 9000:9000 -p 9001:9001 --name minio -v /Users/marcus/minio/data:/data -e "MINIO_ROOT_USER=myminioaccesskey" -e "MINIO_ROOT_PASSWORD=myminiosecretkey" minio/minio:latest server /data --console-address ":9001"
    • OR, using the minio default user / password (minioadmin / minioadmin):
      • docker container create -p 9000:9000 -p 9001:9001 --name minio -v /Users/marcus/minio/data:/data minio/minio:RELEASE.2021-09-09T21-37-07Z server /data --console-address ":9001"
    • docker container start minio
  • Velero consists of:
    • A server that runs on the kubernetes cluster
    • A command-line client that runs locally
  • Install Velero server:
    • ssh into the machine where the kubernetes cluster is running.
      • cd ~/Downloads
      • wget https://github.com/vmware-tanzu/velero/releases/download/v1.6.3/velero-v1.6.3-linux-amd64.tar.gz
      • tar -xvzf velero-v1.6.3-linux-amd64.tar.gz
      • cd velero-v1.6.3-linux-amd64/
      • sudo mv velero /usr/local/bin/
      • export KUBECONFIG=/var/snap/microk8s/current/credentials/client.config
      • nano cred-velero and add the MinIO access keys in the AWS credentials-file format that Velero expects (the bucket and region are passed on the velero install command line below, not in this file):
        • [default]
        • aws_access_key_id=myminioaccesskey
        • aws_secret_access_key=myminiosecretkey
      • velero install --default-volumes-to-restic --use-restic --provider aws --bucket velero --plugins velero/velero-plugin-for-aws:v1.0.0 --secret-file ./cred-velero --snapshot-location-config region=minio --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://<host-ip>:9000
        • Attention: Do NOT use 127.0.0.1 or localhost in the s3Url parameter, because Velero (running inside the cluster) and MinIO are not on the same host, so the loopback address would not reach MinIO.
      • Wait until the velero pod is running:
        • kubectl -n velero get pods
      • velero version

  • Patch the `hostPath` to be compatible with microk8s (issue 4035):
    • kubectl -n velero patch daemonset restic -p '{"spec":{"template":{"spec":{"volumes":[{"name":"host-pods","hostPath":{"path":"/var/snap/microk8s/common/var/lib/kubelet/pods"}}]}}}}'
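  • Before testing, we can confirm that MinIO is reachable and that the Velero pods came up (a sketch; replace <host-ip> with the MinIO host address):
    • curl -I http://<host-ip>:9000/minio/health/live
      • MinIO's liveness endpoint returns HTTP 200 when the server is up.
    • kubectl -n velero get pods
      • The velero deployment pod and the restic daemonset pods should reach the Running state.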

Testing:

  • MinIO console:
    • http://127.0.0.1:9001
      • user: myminioaccesskey
      • password: myminiosecretkey

    • Create a new bucket named  `velero` to store backup files:
      • Buckets -> Create Bucket
  • Velero Backup:
    • velero backup create bkp-test --include-namespaces=test
    • velero get backup
    • velero backup describe bkp-test [--details]
    • velero backup logs bkp-test
  • Simulate an error by deleting the entire namespace:
    • kubectl delete ns test
    • kubectl get ns
  • Velero Restore:
    • velero restore create --from-backup bkp-test
    • velero get restore
    • kubectl get ns
    • kubectl -n test get pod
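  • Velero Scheduled Backup (optional):
    • Beyond one-off backups, Velero can run backups on a schedule; a minimal sketch (the schedule name, cron expression and TTL are arbitrary examples):
      • velero schedule create daily-test --schedule="0 2 * * *" --include-namespaces test --ttl 720h
      • velero schedule get
      • velero restore create --from-schedule daily-test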


Troubleshooting:
  • Check the Velero deployment log:
    • kubectl logs deployment/velero -n velero
  • Check the contents of the Velero secret:
    • kubectl get secrets -n velero cloud-credentials -o jsonpath="{.data.cloud}" | base64 -d

Uninstalling Velero:
  • kubectl delete namespace/velero clusterrolebinding/velero
  • kubectl delete crds -l component=velero

Uninstalling Minio:
  • docker stop minio
  • docker rm minio

Thursday, June 17, 2021

A GIT Branching Strategy and Release Management

MacOS 11.4
Git 2.17.1



Goals:

  • Present a development model to build software that is explicitly versioned, including the branching strategy and the release strategy.
  • Present a set of procedures that every team member has to follow in order to come to a managed software development process.





Branching Strategy:

  • The central repo holds two main branches with an infinite lifetime, master and develop:

    • origin/master or origin/main
      • The main branch where the source code of HEAD always reflects a production-ready state.
    • origin/develop
      • The main branch where the source code of HEAD always reflects a state with the latest delivered development changes for the next release. This is where any automatic nightly builds are built from.
  • Next to the main branches master and develop, this development model uses a variety of supporting branches. Unlike the main branches, these branches always have a limited lifetime, since they will be removed eventually. Each of these branches has a specific purpose and is bound to strict rules:
    • feature branches
      • Used to develop new features for the upcoming or a distant future release.
      • It exists as long as the feature is in development. 
      • May branch off from develop and must merge back into develop after passing the developer tests.
      • Git commands:
        • git checkout develop
        • git checkout -b <feature-branch-name>
    • release branches
      • Support preparation of a new production release.
      • May branch off of develop and must merge back into develop and master.
      • Branch naming convention: release-*
    • hotfix branches
      • Arise from the necessity to act immediately upon an undesired state of a live production version.
      • May branch off from master and must merge back into master and develop branches.
      • Branch naming convention: hotfix-*
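  • A typical feature branch lifecycle, from creation to merging back into develop, looks roughly like this (a sketch; my-feature is a placeholder name and --no-ff keeps the feature visible as a single merge commit):
    • git checkout develop
    • git pull origin develop
    • git checkout -b feature/my-feature
    • (develop and commit on the feature branch, then run the developer tests)
    • git checkout develop
    • git merge --no-ff feature/my-feature
    • git branch -d feature/my-feature
    • git push origin develop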



  • Pull Requests (PRs) allow developers to create whatever branches they want without polluting the main fork of the repository.
    • Include the ability to request a review from another developer.
    • Reviews allow collaborators to comment on the changes proposed in pull requests, approve the changes, or request further changes before the pull request is merged.
    • A review has three possible statuses:
      • Comment: Submit general feedback without explicitly approving the changes or requesting additional changes.
      • Request changes: Submit feedback that must be addressed before the pull request can be merged.
      • Approve: Submit feedback and approve merging the changes proposed in the pull request.
    • Repository administrators can require that all pull requests are approved before being merged.

Release Management:

  • Using a dedicated branch to prepare releases makes it possible for one team to polish the current release while another team continues working on features for the next release. It also creates well-defined phases of development and we can actually see it in the structure of the repository.
  • Once develop branch has acquired enough features for a release (or a predetermined release date is approaching), we fork a release branch off of develop.
    • Creating this branch starts the next release cycle, so no new features can be added after this point—only bug fixes, documentation generation, and other release-oriented tasks should go in this branch (rather than on the develop branch).
    • Git commands to create a release branch:
      • git checkout develop
      • git checkout -b release/<x>.<y>.<z>
  • Finishing a release branch:
    • Once the release is ready to ship, the release branch gets merged into master and tagged with a version number.
    • Each time changes are merged back into master, this is a new production release by definition. Theoretically, we could use a Git hook script to automatically build and roll out our software to our production servers every time there is a commit on master.
    • Git commands to merge release branch and create a tag:
      • git checkout master
      • git merge release/<x>.<y>.<z>
      • git tag <tag-name> -a
      • git push origin <tag-name>
    • In addition, the release branch should be merged back into develop branch, which may have progressed since the release process was initiated. It’s important to merge back into develop because critical updates may have been added to the release branch and they need to be accessible to future features. This step may well lead to a merge conflict.
    • The release branch may then be removed; see the sketch below.
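    • Git commands to merge the release branch back into develop and remove it (a sketch; version numbers are placeholders):
      • git checkout develop
      • git merge --no-ff release/<x>.<y>.<z>
      • git branch -d release/<x>.<y>.<z>
      • git push origin develop
      • git push origin --delete release/<x>.<y>.<z>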




Hotfix branches:

  • Hotfix branches are used to quickly patch production releases.
  • They are a lot like release branches and feature branches except they're based on master instead of develop. This is the only branch that should fork directly off of master.
  • As soon as the fix is complete, it should be merged into both master and develop, and master should be tagged with an updated version number.
  • Having a dedicated line of development for bug fixes lets the team address issues without interrupting the rest of the workflow or waiting for the next release cycle. We can think of hotfix branches as ad hoc release branches that work directly with master.
  • Git commands to create a hotfix branch:
    • git checkout master
    • git checkout -b hotfix/<x>.<y>.<k>
  • Similar to finishing a release branch, a hotfix branch gets merged into both master and develop branches.
  • Git commands to merge a hotfix branch to master:
    • git checkout master
    • git merge hotfix/<x>.<y>.<k>
  • Git commands to merge a hotfix branch to develop. This step may well lead to a merge conflict.
    • git checkout develop
    • git merge hotfix/<x>.<y>.<k>
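  • Git commands to tag master and remove the hotfix branch afterwards (a sketch; version numbers are placeholders):
    • git checkout master
    • git tag <x>.<y>.<k> -a
    • git push origin master <x>.<y>.<k>
    • git branch -d hotfix/<x>.<y>.<k>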


References:

  • A successful Git branching model
  • Gitflow Workflow 
  • A Branching and Releasing Strategy That Fits GitHub Flow 
  • Github Releases 
  • Github Tags 
  • Semantic Versioning 
  • NPM Docs - About Semantic Versioning

    Set Up a Private Git Server

    Ubuntu 20.04
    GIT 2.17.1



    Goals:
    • Create a private Git repository server without the restrictions of the providers free plans.
    • Replicate the state of an origin repository, including all the branches (including master) and all the tags as well.


    Install:
    • SSH keys:
      • If you have a  ~/.ssh  folder but don't have a public key on it (e.g: id_rsa.pub):
        • Generate a public key from the private key using the command below:
          • ssh-keygen -y -f  /home/ubuntu/.ssh/id_rsa  >  /home/ubuntu/.ssh/id_rsa.pub
      • Show the ssh public key. It will be used later during the installation.
          • cat /home/ubuntu/.ssh/id_rsa.pub
    • Git Server:
      • Install Git server:
        • sudo apt update
        • sudo apt install git
      • Create a git user and a base repository folder:
        • sudo useradd -r -m -U -d /home/git -s /bin/bash git
        • sudo su - git
          • mkdir ~/.ssh
          • chmod 0700 ~/.ssh
          • touch ~/.ssh/authorized_keys
          • chmod 0600 ~/.ssh/authorized_keys
          • Copy the content of the file  /home/ubuntu/.ssh/id_rsa.pub  to the file  /home/git/.ssh/authorized_keys. Also add the public keys of any users you want to access your private git server.
            • nano ~/.ssh/authorized_keys
          • cd /home/git
          • Create the base repository folder (optional):
            • mkdir /home/git/private-repo
    • Mirror an existing Git repository:
      • Go to the git base directory and clone the existing repo using https. One advantage of using https is that we don't need to have a firewall rule to allow ssh port traffic:
        • cd /home/git/private-repo
        • git clone --mirror https://github.com/velosomarcus/aws-kubectl.git
          • Inform username and password to do the mirror over https.
      • Open a terminal on another computer to test the access to the private repo. The ssh public key of the computer/user should be added to the  /home/git/.ssh/authorized_keys  of the private Git Server computer:
        • cd /home/ubuntu
        • git clone git@192.168.1.105:private-repo/aws-kubectl.git
        • Expected output:
          • Cloning into 'aws-kubectl'...
      • After that, every time we want to update the mirror repo we need to:
        • sudo su - git
        • cd /home/git/private-repo/aws-kubectl.git
        • git remote update
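      • To keep the mirror in sync without running git remote update by hand, a cron entry for the git user can do it periodically (a sketch; the hourly schedule and repository path are assumptions):
        • crontab -e   (as the git user)
        • 0 * * * * cd /home/git/private-repo/aws-kubectl.git && git remote update --prune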

    Testing:
    • Create a new empty repository:
      • Open a terminal on the Git Server machine.
        • sudo su - git 
      • Create an empty repo:
        • git init --bare /home/git/private-repo/project-name.git
        • Expected output:
          • Initialized empty Git repository in /home/git/private-repo/project-name.git/
      • Configuring a local Git Repository, potentially on another machine:
        •  cd /path/to/local/project
        • git init .
      • Add the git remote to your local repository:
        • git remote add origin git@192.168.1.105:private-repo/project-name.git
      • Create a test file:
        • touch test_file
        • git add .
        • git commit -m "Initial commit"
        • git push -u origin master
        • Expected output:
          • Counting objects: 3, done.
          • Writing objects: 100% (3/3), 218 bytes | 218.00 KiB/s, done.
          • Total 3 (delta 0), reused 0 (delta 0)
          • To 192.168.1.105:private-repo/project-name.git
          •  * [new branch]      master -> master
          • Branch 'master' set up to track remote branch 'master' from 'origin'.
      • It is important to note that the remote repository must exist before you add the git remote to your local repository.
      • To be able to push the local git changes to the private Git server you’ll need to add your local user ssh public key to the remote `git` user’s  `/home/git/.ssh/authorized_keys` file. To add a new collaborator, just copy its public ssh key to the `git` authorized_keys file.

    Thursday, April 22, 2021

    Install Ubuntu OS on the VirtualBox VM Server

    Windows 10
    Virtual Box
    Ubuntu Server 20.04


    Goals:
    • Install Ubuntu on a virtual machine running on another OS, like Windows or MacOS.

    Install:
    • Download VirtualBox.
    • Install Virtual Box.
    • Configure a VirtualBox Virtual Machine for Ubuntu:
      • In VirtualBox click New
      • Set a Name for the virtual machine (e.g. Ubuntu 20.04.2)
      • Set the Type as Linux and the Version as Ubuntu (64-bit)
      • Set the VM's Memory size:  (e.g. 4GB)
      • Select Create a virtual hard disk now, then Create
      • Check the default VDI is selected
      • Select Dynamically allocated or Fixed for the virtual hard disk size and Create (zzz...)
      • Click on the virtual machine created
      • Click on Settings -> System -> Processor
      • Set the number of processors:  (e.g. 3)
      • Click on Display -> Screen -> Video Memory
      • Set the Video Memory: (e.g. 64MB)
      • Click Ok
    • Install the Ubuntu OS:
      • Select the VM created above
      • Click Settings
        • Click Storage
          • Select Controller IDE
          • In the Attributes pane click the disc icon next to IDE Secondary Master
          • Click Choose a disk file and browse for the Ubuntu ISO

          • Click OK to add the ISO then OK to finish
        • Click Network
          •  In the Adapter 1 tab,  on the Attached to combo, select Bridged Adapter and click OK.


    • Install VirtualBox Extension Pack on Windows:
      • Select Tools -> Preferences:
        • In the Preferences window, go to the Extensions section. Click the Add a new package button to add the extension pack.
        • Browse the file you have downloaded
        • Install it
    • Configure Copy and Paste:
      • Select the VM
      • Select General -> Advanced -> Shared Clipboard

    • Boot Ubuntu in the Virtual Machine:
      • Select the VM and click Start.
    • Identify the folder where VirtualBox will look for VirtualBox.xml:
      • Menu -> File -> Preferences
      • Select General -> Default Machine Folder
      • OR:
      • Create an environment variable (set globally, or for the current user of the host) called VBOX_USER_HOME
    • Configure Display Resolution:
      • Open a terminal window:
        • sudo apt-get install virtualbox-guest-dkms
      • Restart Ubuntu:
        • sudo reboot
      • Open Settings -> Displays -> Change Resolution



    Extra Tools:
    • How to add an existing VirtualBox VM to the VirtualBox Manager:
      • Click on 'Tools' and then click on 'Add' button
      • Choose the path to the '.vbox' file and click on 'Open' button.

    • How to increase the size of the VirtualBox disk drive:
      • First we have to stop (powered off) the VM.
      • Increase the size using a terminal window:
        • vboxmanage modifyhd "/Users/marcus/VirtualBox VMs/ubuntu2004/ubuntu2004.vdi" --resize <new-size-in-MB>
      • Or, we can use the VirtualBox GUI:
        • menu File -> Virtual Media Manager, then double click a virtual hard disk in the list and use the “Size” slider at the bottom of the window to change its size. Click “Apply” when you’re done.
      • We will still have to enlarge the partition on the disk to take advantage of the additional space. The partition remains the same size even while the disk size increases. We might use the linux  gparted  application to extend the partition size, if it's not a LVM partition.

    • How to convert dynamically sized VirtualBox VDI hard drive to fixed size:
      • Find the location of your actual VDI HD:
      • Open a terminal window and clone your hard drive to a fixed size:
        • vboxmanage clonehd "/Users/marcus/VirtualBox VMs/ubuntu2004/ubuntu2004.vdi" "/Volumes/Seagate Exp/ubuntu2004-fixed.vdi" --variant Fixed
          • 0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
          • Clone medium created in format 'VDI'. UUID: ...
      • Open the virtual box manager, go to settings for your machine, remove the old (dynamically sized) drive from the SATA controller for this machine and click Ok.
      • Run the following command in a terminal window:
        • vboxmanage list hdds
        • Make sure to have a backup of your old drive file, just in case. The next step will permanently delete this file.
        • Remove your old drive from the list by running the command:
          • vboxmanage closemedium disk <UUID> --delete
      • Copy your new fixed size VDI HD to the same directory of the old one:
        • cp "/Volumes/Seagate Exp/ubuntu2004-fixed.vdi" "/Users/marcus/VirtualBox VMs/ubuntu2004/ubuntu2004.vdi"
      • Go to the VirtualBox Manager and add the new fixed size VDI HD to the SATA controller for your VM.
      • Start your VM with the new fixed size VDI HD.

    • How to Resize and Extend a LVM Partition:
      • Using a terminal window:
        • sudo pvs
        • df -h
        • sudo lvextend -r -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
        • sudo reboot
        • df -h



    • How to Clone a Snapshot:



      • VirtualBox’s  clonehd  will not let you clone a snapshot other than the base. This is because snapshots are simply “difference lists” between the previous snapshot and the current state. Whenever you take a snapshot, VirtualBox will “freeze” the current state as a read-only VDI file and create a new VDI where it saves only the blocks that the VM has written to after the snapshot.
      • In order to clone a snapshot VirtualBox will need to first consolidate all the changes between all the previous snapshots.
      • What we can do is force VirtualBox to consolidate “Current State” back into the base snapshot "Snapshot 1" (“ubuntu2004.vdi” in this example) by discarding the base snapshot:
        • Open the “Snapshots” tab for the VM whose state you want to clone, select the first snapshot “Snapshot 1” and choose “delete” from the toolbar. What really happens is that VirtualBox pretends to remove “Snapshot 1” but it actually removes “Current State” after copying all the changes in “Current State” back into “Snapshot 1”. You now have a base snapshot which contains the state in “Current State” which you can clone.

    • How to Get rid of cloud-init, when installing Ubuntu Server:
      • echo 'datasource_list: [ None ]' | sudo -s tee /etc/cloud/cloud.cfg.d/90_dpkg.cfg
      • sudo apt-get purge cloud-init
      • sudo rm -rf /etc/cloud/
      • sudo rm -rf /var/lib/cloud/
      • sudo shutdown -r now


    Saturday, March 20, 2021

    Go Lang - Installation and Basic Tests

    MacOS 11.2.3 - Big Sur
    Go Language 1.16.2 - darwin/amd64




    Goals:
    • Start learning Go Language from scratch.

    Information:

    • Basic commands:
      • go version
      • go env
        • $GOPATH controls where the Go compiler and tools will look for source code.
      • go help <command>
      • go get <lib-name> - automatically install dependencies (e.g:  go get -u github.com/gen2brain/dlgs)
      • go run <file.go> - this command first compiles the specified file(s), then executes the resulting binary.
      • go build <file.go> - command to compile our code into an executable binary.
      • go install <file.go> - command works like `go build`, except instead of putting the binary file in the source code folder, it installs it to $GOPATH/bin.
      • godoc -http=:6060 - extracts and generates documentation for Go programs. With the -http flag, it runs as a web server and presents the documentation as a web page.
    • Basic rules:
      • A function that returns values must declare it, the type declaration follows the function name. Every function that declares a return type, must end with a return statement.
    func greeting() string {
        return "Hello world"
    }

      • Functions imported from another package are always namespaced with the package name (e.g: lib.SomeFunction() )
      • Visible External Functions, Variables and Methods starts with Capital Letter.
      • In any `.go` file, the first non-comment line is a package declaration. The package declaration is mandatory. If the file is in a subfolder of the project, its package must have the name of the subfolder. Every `.go` file in the same folder must have the same package name.
      • By convention, 3rd-party packages are named after their repository URL. For example, a xpto library hosted on Github would be imported as "github.com/<user-or-company-name>/xpto". The application is in xpto.go. This file declares its package as main, and defines a main() function. This tells the compiler to generate an executable from the file. The package can import from the standard library, and from our lib package specified by its path.
    /$GOPATH
    |--- /src
          |--- /github.com
                |--- /<user-or-company-name>
                      |--- /xpto
                      |   |__ xpto.go
                      |--- /lib
                          |__ util.go

      • A variable must be declared as a specific type before a value can be assigned to it. Once declared, a variable may only be assigned values of its declared type. Go also provides an operator, :=, that combines declaration and assignment in the same statement.
      • Our Hello World program declares its package as main, and contains a special function main(). That tells Go to compile this code as an executable program, with the entry point at main(). The function main() has no arguments and no return value.


    Install:

    • MacOS
      • Download and install it from here.
        • Open the package file you downloaded and follow the prompts to install Go.
        • The package installs the Go distribution to /usr/local/go.
        • The package should put the /usr/local/go/bin directory in your PATH environment variable.
        • You may need to restart any open Terminal sessions for the change to take effect.
        • Verify the version of Go:
          • go version
    • Linux
      • Download and install:
        • cd
        • wget -c https://dl.google.com/go/go1.16.2.linux-amd64.tar.gz -O - | sudo tar -xz -C /usr/local
        • Add the location of the Go directory to the $PATH environment variable:
          • nano ~/.profile
            • export PATH=$PATH:/usr/local/go/bin
        • Load the new PATH environment variable into the current shell session:
          • source ~/.profile
        • Verify the version of Go:
          • go version

    Testing:
    • Example 1 - Hello World
    hello/hello.go
    // A "Hello World" program that prints a greeting with the current time.
    package main

    import (
    "fmt"
    "time"
    )

    // greeting returns a pleasant, semi-useful greeting.
    func greeting() string {
    return "Hello world, the time is: " + time.Now().String()
    }

    func main() {
    fmt.Println(greeting())
    }
    • Configure the environment:
      • export GOPATH=/home/ubuntu/go
      • export GO111MODULE=auto
      • export GOROOT=/usr/local/go
    • Run the Hello World example:
      • cd $GOPATH
      • cd hello
      • go run hello.go

    • Example 2 - Call a Lib Function
    projectpath/main.go
    package main

    import (
        "fmt"
        "projectpath/lib"
    )

    func main() {
        fmt.Println(lib.Test())
    }

    projectpath/lib/util.go
    package lib

    import "time"

    func Test() string {
        return time.Now().String()
    }
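
    • Run Example 2 with Go modules (a sketch; since Go 1.16 modules are the default, and the module name projectpath must match the import path used in main.go):
      • cd /path/to/projectpath
      • go mod init projectpath
      • go run .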

    Friday, February 19, 2021

    GIT - Knowledge Base

    Git 2.17.1



    Goals:

    • Have a list of useful Git commands, along with GitHub basic concepts.

    Concepts:
    • GitHub
    • GitHub Commit/Push:
    • GitHub Pull Request:
    • GitHub Merge:
    • GitHub Squash:
      • Git Squash is a Git feature that allows a dev to simplify the Git tree by merging sequential commits into one another. Basically, you start by choosing a base commit and merging all changes from the next commits into this one. This essentially makes it the same as having all the changes you made in several commits in just one commit—the base commit.


    Diagrams:





    GIT Commands:

    • Show the Git software version:
      • git --version

    • Show the actual branch and modified files in a working directory, staged for your next commit:
      • git status

    • Check Your Current Git Configuration:
      • git config --global credential.helper
        • This command will show you the credential helper that Git is using. If you see "manager," it means Git is using a credential manager to store your GitHub token.

    • If you've stored GitHub credentials using the git config command and want to remove them:
      • List Your Git Configuration:
        • git config --list
          • This command will display all the Git configuration settings, including any credentials.
      • Locate the Credential Configuration in the output of the command above. The specific configuration may vary depending on how you've set up your Git credentials.
      • Remove the Credential Configuration:
        • git config --global --unset <config-name>
          • E.g.:  git config --global --unset credential.helper
      • Verify Removal:
        • git config --list
          • Make sure the configurations you removed are no longer present in the output.
          • After successfully removing the credentials, Git won't use the stored credentials for GitHub authentication, and you'll be prompted to enter your credentials when needed.
      • Windows OS:
        • Open the "Credentials Manager"
          • Click on "Windows Credentials" button
          • Delete the GitHub entries under Generic Credentials

    • Show the URL that it originally cloned a local Git repository from:
      • When Offline:
        • git config --get remote.origin.url
        • OR
        • git remote get-url origin
      • When Online and authenticated with Github:
        • git remote show origin

    • Check which user Git is using to commit your changes:
      • git config --list | grep user.name
      • git config --list | grep user.email

    • Change the username or email that Git is using:
      • git config --global user.name "Your Name"
      • git config --global user.email "your_email@example.com"

    • Show the commit ID:
      • git rev-parse HEAD
      • git rev-parse --short HEAD

    • Show commit logs:
      • git log

    • Show the current commit:
      • git log | head -n 1

    • Show the Difference between two commits:
      • git diff <first-branch-name-or-commit-id> <second-branch-name-or-commit-id> -- <filename>
      • git diff develop origin/master -- README.md

    • How to Revert a single file:
      • git checkout [commit-ID] -- path/to/file
      • git checkout [commit ID]~1 -- path/to/file

    • How to Revert a single commit:
      • git revert <commit-hash>
      • git revert HEAD~2

    • How to Restore working tree files:
      • git restore path/to/file
      • The command can also be used to restore the content in the index with --staged, or restore both the working tree and the index with --staged --worktree
        • To restore all files in the current directory
          • git restore .

      • List your branches. A (*) will appear next to the currently active branch:
        • git branch
        • git branch -a

      • Switch to another branch and check it out into your working directory:
        • git checkout <branch-name> [--force]

      • Fetch down all the branches from the Git remote:
        • git fetch [alias]

      • Stashing the changes:
        • The git stash command takes your uncommitted changes, both staged and unstaged, saves them away for later use, and then reverts them from your working copy. Run git status to see the dirty state, then run git stash to stash the changes:
          • git stash
          • git stash list
            • stash@{0}: WIP on my-branch: 1234567 Refactoring some code
            • stash@{1}: WIP on my-branch: 1234567 Refactoring some code
        • Re-applying your stashed changes:
          • The git stash pop removes the changes from your stash and re-applies them to your working copy.
            • git stash pop
          • You can choose which stash to re-apply like this:
            • git stash pop stash@{1}
          • If you want to re-apply the changes and keep them in your stash:
            • git stash apply stash@{0}
          • Use git stash show to view a summary of a stash:
            • git stash show
            • git stash show stash@{0}
          • You can also use the -p or --patch options to see the full diff of a stash:
            • git stash show -p
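        • Dropping stashes you no longer need (a sketch; git stash drop removes a single entry, git stash clear removes them all):
          • git stash drop stash@{1}
          • git stash clear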

      • Commit your changes:
        • git add .
        • git commit -m "commit message"


      • Pull the updated code:
        • git pull
        • If it's failing due to some file changes, you need to handle that. See the Stash information above.
          • git stash
        • If you want to ignore and lose all changes that you made to that local repository, then you can:
          • git fetch origin
          • git reset --hard
          • git clean -f
          • git pull

      • Making a Git push from a detached head:
        • git branch new-branch-name
        • git push -u origin new-branch-name

      • Create a new branch from a commit ID:
        • Navigate to the commit in question, and then click on the <> button next to the commit in your history. This will show the web interface for browsing that particular commit's snapshot of the repository.
        • Click the down arrow on this button to show the dropdown. You can create a new branch simply by typing in the name of the new branch in the search field.
        • Click on the "Create branch: ..." link at the bottom of this dropdown, and a new branch should be created.

      • Git Reset vs Git Revert vs Git Restore
        • git-revert:  is about making a new commit that reverts the changes made by other commits.
        • git-restore:  is about restoring files in the working tree from either the index or another commit. This command does not update your branch. The command can also be used to restore files in the index from another commit.
        • git-reset:  is about updating your branch, moving the tip in order to add or remove commits from the branch. This operation changes the commit history. This command can also be used to restore the index, overlapping with git restore.
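        • A small scratch-repo sketch to contrast the three (the /tmp/demo path, file name and commit messages are arbitrary):
          • git init /tmp/demo && cd /tmp/demo
          • echo one > file.txt && git add . && git commit -m "c1"
          • echo two >> file.txt && git add . && git commit -m "c2"
          • echo wip >> file.txt
          • git restore file.txt            (discards only the uncommitted change)
          • git revert --no-edit HEAD       (adds a new commit that undoes "c2"; history is preserved)
          • git reset --hard HEAD~1         (moves the branch tip back one commit; history is rewritten)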

      • Retrieve an entire repository from a hosted location via URL:
        • git clone <REPO-URL>

      • Create and Push a Tag:
        • git tag <tag-name> -a
        • git push origin <tag-name>

      • Open a repository in browser using GitHub CLI:
        • gh repo view -w


      GitHub Workflow:

      A workflow run is made up of one or more jobs, which run in parallel by default. To run jobs sequentially, you can define dependencies on other jobs using the jobs.<job_id>.needs keyword.
      Each job runs in a runner environment specified by runs-on.
      You can run unlimited jobs as long as you are within the workflow usage limits.
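
      A minimal workflow sketch illustrating jobs.<job_id>.needs (the file name, job names and steps are arbitrary examples):

      # .github/workflows/ci.yml
      name: ci
      on: [push]
      jobs:
        build:
          runs-on: ubuntu-latest
          steps:
            - uses: actions/checkout@v3
            - run: echo "building"
        test:
          needs: build          # waits for the build job to succeed
          runs-on: ubuntu-latest
          steps:
            - run: echo "testing"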

      • Runners:
        • GitHub UI -> Your Organizations ->  <organization-name> -> Settings -> Actions -> Runners
        • Create a new Runner -> GitHub-hosted runner
          • Name: <our-runner-name> (e.g: ubuntu-22.04-4cores-16gb)
          • Image: Ubuntu
          • Ubuntu Version: 22.04
          • Size: 4-cores 16GB RAM ...
          • Auto-scaling - Maximum Job Concurrency: 2
          • Groups: <our-runner-group>
          • Networking - No public IP address



      Troubleshooting:
      • "remote: Repository not found":
        • Remove all the github.com credential details from the system.
          • For mac
            • Delete the github.com password from the Keychain Access.
          • For windows
            • Delete the credentials from Credential Manager.


