About Knowledge Base

This repo is intended to serve as a knowledge base for myself. The content is a collection from various sources over a long time. I try to be as transparent as possible with the original sources and link to them whenever possible.

The content might be opinionated ;)

During the last years I have read many blog posts, articles, GitHub readmes... I would like to thank all these people who share their knowledge and therefore their hard work. I truly believe that free and open knowledge is the only way to real innovation. This should extend from education to research to final products.

Currently I am sorting out all my notes from the past years and trying to bring them into a logical, easy-to-search order.

This project will forever be a work in progress.

About Me

I consider myself a software engineer with an interest in the whole application lifecycle. This goes from the evaluation before the first line of code to the final deployment and operation. I am interested in the software as well as the hardware, from memory to server infrastructure. While coding is the part I like the most, I always want to know the hows and whys.

About the structure

I try to keep it as organized as possible so that things are easy to find.

About Feedback

I am always happy to receive constructive feedback.

Disclaimer

While I try to be as precise as possible with the content and the sources, there is always something one can easily overlook. So if you are missing your mention or found incorrect data, please raise an issue here: https://github.com/schlumpfit/knowledge-base/issues.

Knowledge Base Index

This site is a continuous work in progress.

Currently I am sorting out all my notes, private repos and knowledge collected over the past years and summing them up in a structured way. This is why sometimes the structure already exists but without text, since I did not bring the notes into a presentable form yet :)

The content on these pages ranges from personal summaries/interpretations and links to resources to code snippets and manuals. The content has no claim to completeness, but only covers topics I have worked with.

Application Lifecycle

A summary of all the steps included in a typical application lifecycle. This ranges from the qualification before the first line of code to requirements, architecture, development, verification, versioning, deployment and observability, up to the documentation.

For more details see: Application Lifecycle

Documentation

I already mentioned documentation as part of the application lifecycle. But because I think it is such an important topic, and also valid for infrastructure, guidelines etc., I have listed it here again.

For more details see: Documentation

Infrastructure

Every piece of software needs some hardware to run on. This can be embedded devices, on-prem servers or some cloud instances. Even serverless deployments in the cloud need servers ;).

This also includes virtualization and containerization, infrastructure as code and workload orchestrators like kubernetes or nomad.

For more details see: Infrastructure

Network

From the socket to the network.

For more details see: Network

Cryptography

A very hyped topic I am not very deep into, but I sum up the basics like PKI, TLS and SSH. No blockchain.

For more details see Cryptography

Application Development and Lifecycle

Everything that is part of an application lifecycle.

Agile and DevOps

Some general thoughts on what agile and DevOps mean for the application lifecycle.

See DevOps

Qualification and Evaluation

The higher the criticality or impact of the planned new tool or application, the more emphasis should be put on the tool qualification.

When working in safety-related environments this is a must-have, but for normal tools it is usually also a good starting point.

For more details see: Qualification and Evaluation

Requirements

See Requirements

Architecture

See Architecture

Development

See: Application Development

Verification

While in the past the focus was heavily on pure feature development, today there is a noticeable shift to the left. This means that people and organizations now care much more about quality assurance through static code analysis, unit, integration and system tests, as well as supply chain and dependency analysis.

For more info see: Verification

Versioning

See Versioning

Deployment

See Deployment

Observability

See Observability

Documentation

See Documentation

General Thoughts

Some thoughts about what agile software development aims to offer and how the DevOps culture fits into this.

DevOps and Agile

In my words, agile software development means taking an adaptive, incremental approach toward a final product.

Smaller increments lead to a more fine-grained feedback loop, but only if they are published in some way. This is where DevOps steps in: automating and accelerating the process from the code to the final deployment.

Sources:

Value

Only visible when it hits users/customers.

Qualification

Some very basic questions that should always be answered:

  • What benefit does the new tool bring (if it is not for learning purposes)?
  • Is there maybe an existing tool or library for the same use case?
  • Is there maybe an existing tool or library where I could contribute?
  • Is the tool or library in question still up to date and maintained?
  • How much will it take to create the new application?
  • How much will it take to maintain the new application, and who is going to do it?

For more critical or safety relevant applications this can/must be extended.

A good Summary: https://www.mathworks.com/content/dam/mathworks/tag-team/Objects/m/61793_CMR10-16.pdf

Requirements

Write requirements in a human readable format to express what is expected from a certain tool or application, and link the implemented code to the requirements.

There is a whole lot more to this topic and it often is a pain point during software development.

Software Architecture

What are my problems? What can solve these problems?

Agile does not mean dumping design; rather, it brings in more design through an iterative design approach.

Separation of concerns

A precondition for the iterative approach: fail early, but stay able to adapt early.

  • Do only one thing, but do it well
  • Decouple modules
  • Decouple dependencies

Micro services

Modular monolith vs micro service

Interprocess communication

  • Start in a single repo

Pro

  • Bootstrap development and deployment workflow
  • Scalable

Cons

  • Introducing latency
  • Introducing complexity
  • Harder deployments

Risk-storming

Attack-graphing

Threat modelling

Architecture Decision Record

Service oriented architecture

Monolith

Multiple responsibilities (UI, API, Logic, Authorization, Authentication) within a single application.

Pro:

  • One source
  • Little overhead
  • Easy testing

Cons:

  • Hard to scale independently
  • Big applications can suffer from too frequent code changes from different teams

Microservices

The big monolith is split up into smaller applications with a separation of concerns.

Pro:

  • Scalability (If the services are set up the correct way)
  • Flexibility in terms of programming language, framework, deployment
  • Easier separation into work teams

Cons:

  • More complex network setup
  • Higher administrative effort to provide micro services
  • Strong dependence on contracts between services, which can only be tested during integration

Contracts

Contracts are the promises between microservices about what happens when an RPC is sent from one service to another. They can be seen as the definition of a function's arguments and return value.

So calling a service that does not stick to the previously agreed contract is like calling a function with the wrong arguments, which typically leads to failure.

See: https://docs.pact.io/

Communication

Monolith: Inter-process communication

Micro services: Network/RPC

Service to Service access (East-West Traffic).

Mitigation for microservices:

  • Software defined network: Automate all VLANs/firewall rules.
  • Service mesh: Flat network, sidecar pattern with mTLS certificates.
  • Auto scheduling:
    • Results in unknown IPs and Ports
      • Service Discovery

Application Development

  • Perform some basic qualification before creating anything new
  • Use Version Control (Git)
  • Add a README with a getting started and contribution section
  • Structure your code so it can be easily understood by others
  • Follow best practices of your language or organization
  • Add unit tests
  • Add static code analysis
  • Add dependency analysis
  • Add a build pipeline

Think about the future of the application. While small scripts and new frameworks are typically very fast to get started with, they can often slow down development a lot later on because of missing functionality. In my opinion the ideal setup is a bigger, proven framework with the ability to opt out of everything that is not needed in the first place. This might still slow down the development process in the beginning, depending on the knowledge of the framework at hand, but can provide many battle-tested features and good community support in the future.

12-Factor App

CLI

API

Remote Procedure Call.

Verification

Static code analysis

Dependency and Supply Chain Analysis

Testing

Unit

Integration

System

Static code analysis

Testing

TBD

Why

Unit

Integration

System

Fuzzing

Dependency and Supply chain analysis

SBOM

A “software bill of materials” (SBOM) is a nested inventory, a list of ingredients that make up software components.

See Sbom
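
An SBOM does not have to be written by hand. A minimal sketch, assuming the open-source syft CLI and CycloneDX as output format (the image name is just an example):

syft alpine:3.19 -o cyclonedx-json > sbom.cdx.json # scan an image and emit a CycloneDX SBOM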

Vulnerabilities

License Scan

Software bill of materials

device          \
library/tool     -- Component \
application     /              \
                                \
owned           \                \
3rd party        -- Component      -- Project/Product
open source     /                /
                                /
licenses        \              /
vulnerabilities  -- Component /
state           /

CDX Composition

  • Generate Micro SBOMs manually or automatically.
  • Collect all micro SBOMs
  • Collect all components
  • Remove duplications
  • Reflect the dependency graph (a merge sketch follows below)
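
For the collect and merge steps there is tooling as well. A sketch assuming the CycloneDX CLI with two hypothetical input files:

cyclonedx merge --input-files service-a.cdx.json service-b.cdx.json --output-file product.cdx.json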

Versioning

Purpose

Machine Readable

Human Readable

Observability

A measure of how well the state of a service or application can be described from the outside.

Typical flow of data and actions:

Application <- Instrumentation -> Telemetry <- Observability <- Analysis -> Actions

3 Pillars of Observability

| Pillar  | Format           | Purpose      | Description           |
|---------|------------------|--------------|-----------------------|
| Metrics | Machine readable | Detect       | Do I have a problem?  |
| Tracing | Machine readable | Troubleshoot | Where is the problem? |
| Logging | Human readable   | Pinpoint     | What is the problem?  |

Notes:

  • Do not use print and do not re-invent the wheel; there is an existing library already
  • Think about the audience and the use case of the data to collect
  • Only collect data that brings value and is manageable
  • Choose the right format and be consistent

Data formats

  • Human readable: plaintext
  • Machine readable: structured text (JSON) or structured data (bytestreams, protobuf, binlogs, pflog); see the example below
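
For example, the same log event in both formats (a minimal sketch; the field names are my own):

echo "$(date -Is) INFO user logged in"                                     # human readable
printf '{"ts":"%s","level":"info","msg":"user logged in"}\n' "$(date -Is)" # machine readable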

Logging

A time series of log events written as log messages to a logbook (stdout, database, collector).

Human readable format.

Event

Describes some state at a distinct point in time.

  • Immutable
  • Timestamped
  • Categorized
  • Discrete
  • Record

Metrics

Numeric values of measured data at a given time, recorded at a fixed interval and used for historical visualization and alerting.

The machine readable format does not change and usually consists of:

  • Metric name
  • Timestamp
  • Labels with the measured data (see the example below)
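
As an example, here is how a Prometheus-style metric could be scraped; the endpoint and metric name assume a node exporter listening on port 9100, which is not part of the original notes:

curl -s http://localhost:9100/metrics | grep '^node_cpu_seconds_total' | head -n 3
# Each line carries the metric name, its labels and the measured value, e.g.:
# node_cpu_seconds_total{cpu="0",mode="idle"} 123456.78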

Traces

Scoped series of (distributed) events and their duration.

Analysis

  • Graphical
  • Automated alerting

CPU Load

On Linux systems one can read kernel metrics via:

cat /proc/stat

The values are measured in USER_HZ, which can be obtained by running getconf CLK_TCK but typically defaults to 100. So each value is a counter of 1/100ths of a second since the boot time btime, which is given as a Unix epoch timestamp.

#!/bin/bash

cpu_last=($(head -n1 /proc/stat)) # Initial read so the first delta is valid
sleep 1

while :; do
  cpu_now=($(head -n1 /proc/stat)) # Get the "cpu" line which is the total of all cores

  cpu_sum="${cpu_now[*]:1}" # Skip the first column ("cpu")
  cpu_sum=$((${cpu_sum// /+})) # Add all columns to get the total

  cpu_sum_last="${cpu_last[*]:1}"
  cpu_sum_last=$((${cpu_sum_last// /+}))

  # Calculate the delta between two reads for each column
  cpu_delta=$((cpu_sum - cpu_sum_last))
  user_delta=$((cpu_now[1] - cpu_last[1])) # Time spent in user mode (CPU bound)
  nice_delta=$((cpu_now[2] - cpu_last[2])) # Time spent in user mode with low priority (CPU bound)
  system_delta=$((cpu_now[3] - cpu_last[3])) # Time spent in system mode (CPU bound)
  idle_delta=$((cpu_now[4] - cpu_last[4])) # Time spent in the idle task (idling)
  iowait_delta=$((cpu_now[5] - cpu_last[5])) # Time waiting for I/O to complete (network/disk bound)
  irq_delta=$((cpu_now[6] - cpu_last[6])) # Time serving hardware interrupts
  softirq_delta=$((cpu_now[7] - cpu_last[7])) # Time serving software interrupts
  steal_delta=$((cpu_now[8] - cpu_last[8])) # Time stolen by a guest VM
  guest_delta=$((cpu_now[9] - cpu_last[9])) # Time spent running a virtual CPU for a guest VM
  guest_nice_delta=$((cpu_now[10] - cpu_last[10])) # Time spent running a virtual CPU with low priority

  cpu_used=$((cpu_delta - idle_delta)) # Total time spent doing something
  cpu_usage=$((100 * cpu_used / cpu_delta)) # Calculate the percentage

  # Keep for the next delta
  cpu_last=("${cpu_now[@]}")

  echo "CPU usage at $cpu_usage%"
  sleep 1
done

File sizes

| Amount | Name      | Equals To       | Size (in bytes)           |
|--------|-----------|-----------------|---------------------------|
| 1      | Bit       | 1 bit           | 1/8                       |
| 1      | Nibble    | 4 bits          | 1/2                       |
| 1      | Byte      | 8 bits          | 1                         |
| 1      | Kilobyte  | 1024 bytes      | 1024                      |
| 1      | Megabyte  | 1024 kilobytes  | 1048576                   |
| 1      | Gigabyte  | 1024 megabytes  | 1073741824                |
| 1      | Terabyte  | 1024 gigabytes  | 1099511627776             |
| 1      | Petabyte  | 1024 terabytes  | 1125899906842624          |
| 1      | Exabyte   | 1024 petabytes  | 1152921504606846976       |
| 1      | Zettabyte | 1024 exabytes   | 1180591620717411303424    |
| 1      | Yottabyte | 1024 zettabytes | 1208925819614629174706176 |
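
To convert between raw byte counts and human readable sizes, GNU coreutils ships numfmt (iec selects the 1024-based units from the table above):

numfmt --to=iec 1073741824 # -> 1.0G
numfmt --from=iec 1G       # -> 1073741824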

Documentation

⚠ You need documentation!

While it takes time to write usable documentation and a lot of effort to keep it updated, I think there is no way around documentation.

There are tons of tools, frameworks, apps etc. that make it easy to write and publish documentation in the first place, but usually that is a one-time job: after the documentation is created, it is forgotten.

Ideas to mitigate these problems during software development:

  • Only document the non-obvious
  • Treat documentation as code
    • Use any VCS
    • Write tests
  • Choose a format that is easy to learn and use
  • Keep the documentation close to the sources
    • Generate code documentation from source code
    • Keep additional documentation in the same repository

Readme

Add a Readme.md file to every repository or application.

Chapters

This is a layout idea for an example application.

See: Example Documentation

In code documentation

TBD

Additional documentation

TBD

Visualization as code

Tools and workflows

Mdbook

The page you are viewing is made using mdbook with GitHub Pages.

https://crates.io/crates/mdbook

cargo install mdbook 
mdbook init knowledge-base
cd knowledge-base
mdbook serve

mdbook-toc

Autogenerate a table of contents for every page.

https://crates.io/crates/mdbook-toc

cargo install mdbook-toc

Add the following block to book.toml

[preprocessor.toc]
command = "mdbook-toc"
renderer = ["html"]

mdbook-pagetoc

Display a toc on the right side of the page.

https://github.com/schlumpfit/mdBook-pagetoc

Add these three files:

  • ./style.css
  • ./sidebar.js
  • ./theme/index.hbs

Raw Files:

Add the following block to book.toml

[output.html]
additional-css = ["style.css"]
additional-js  = ["sidebar.js"]

mdbook-linkcheck

Check if all links are pointing somewhere.

https://crates.io/crates/mdbook-linkcheck

Add the following block to book.toml

[output.linkcheck]
follow-web-links = true

[output.linkcheck.http-headers]
'crates\.io' = ["Accept: text/html"]

Example Application

This is a short description of the purpose or use case of the application.

Getting started

Installation

Download the binary from XYZ:

curl ...

Configuration

Optional chapter to describe basic configuration required and a link to more detailed description.

Execution

Run the following command to perform some action:

echo "Hello"

User Manual

Detailed description or links to the resources.

  • Setup recommendations
  • Production guide
  • API documentation
  • CLI documentation ...

Changelog

Contribution

Bug tracking and planning

  • Link to Jira, Github issues ...

Requirements

  • List of basic requirements or link.

Architecture

  • Overview
  • Design decisions

Development

  • Link to sources and artifacts
  • Coding Guidelines

Verification

  • Coverage
  • Traceability

CI/CD

  • Build instructions
  • Deployment instructions

Appendix

Infrastructure

Network

Network Namespaces

Service Discovery

Consul

Proxies

Nginx

Traefik

Automation

Immutable Infrastructure

Provisioning

Virtualisation

Docker

QEMU

Bare Metal

Workload Orchestration

Kubernetes

Nomad

Packer

Terraform

See: Terraform

Configuration

Ansible

Chef

Secrets Management

Vault

Networking

The structure of this overview follows the OSI layers.

Physical

https://www.ieee802.org/3/

10Base-T

802.3i-1990 - IEEE Standard for Local and Metropolitan Area Networks.

100Base-T

1000Base-T

Single Pair Ethernet

  • 802.3cg (10Base-T1)
  • 802.3bw (100Base-T1)
  • 802.3bp (1000Base-T1)
  • 802.3ch (Multi-Gig Automotive Ethernet)

Ethernet Frame

Min length: 64 bytes
Max length: MTU (defaults to 1518, or 1522 with a VLAN tag)

If the Ethernet header plus the Ethernet payload is less than 64 bytes, the frame is padded. This is typically done by the Ethernet driver / network card. Packet filters (tcpdump) see outgoing packets before they go through the hardware, and therefore cannot see the padding. Wireshark also cuts off the FCS.
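
To look at real frames including the link-level header, tcpdump with -e is enough (the interface name is an example):

sudo tcpdump -i eth0 -e -c 5 # -e prints addresses and EtherType, -c stops after 5 frames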

IEEE 802.3 Ethernet Header

|    7 bytes    |    1 byte     |    6 bytes    |    6 bytes    |    2 bytes    | 46-1500 bytes |    4 bytes    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|    Preamble   | Start Frame   |  Destination  |    Source     |  Ether type/  |    Payload    |      FCS      |
|               | Delimiter     |    Address    |    Address    |  Size         |               |      CRC      |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Network

IPv4

Address: 192.168.0.1
Subnet mask: 255.255.255.0
Network: 192.168.0.0/24

Address:

  • dotted decimal: 192.168.0.1
  • binary: 11000000.10101000.00000000.00000001
  • network byte order: big endian

1 byte = 8 bits:

most significant bit               least significant bit
|                                  |
1   |1   |1   |1   |1   |1   |1   |1
----|----|----|----|----|----|----|----
128 |64  |32  |16  |8   |4   |2   |1

Subnet Mask

Internet /0   00000000.00000000.00000000.00000000  0.0.0.0
Class A  /1   10000000.00000000.00000000.00000000  128.0.0.0
...
Class A  /8   11111111.00000000.00000000.00000000  255.0.0.0
Class B  /9   11111111.10000000.00000000.00000000  255.128.0.0
...
Class B  /16  11111111.11111111.00000000.00000000  255.255.0.0
Class C  /17  11111111.11111111.10000000.00000000  255.255.128.0
...
Class C  /24  11111111.11111111.11111111.00000000  255.255.255.0
Hosts    /25  11111111.11111111.11111111.10000000  255.255.255.128
Hosts    /26  11111111.11111111.11111111.11000000  255.255.255.192
Hosts    /27  11111111.11111111.11111111.11100000  255.255.255.224
Hosts    /28  11111111.11111111.11111111.11110000  255.255.255.240
Hosts    /29  11111111.11111111.11111111.11111000  255.255.255.248
Hosts    /30  11111111.11111111.11111111.11111100  255.255.255.252
Hosts    /31  11111111.11111111.11111111.11111110  255.255.255.254
Hosts    /32  11111111.11111111.11111111.11111111  255.255.255.255

/24 -> 255.255.255.0 -> 8 host bits -> 254 host addresses
/16 -> 255.255.0.0 -> 16 host bits -> 65534 host addresses
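
The host count follows directly from the number of host bits; a quick sanity check in bash:

prefix=24
# Usable host addresses: 2^(32 - prefix) - 2 (network and broadcast address)
# Note: this does not hold for /31 and /32
echo $(( (1 << (32 - prefix)) - 2 )) # -> 254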

Classless Interdomain Routing (CIDR): /20 -> 255.255.240.0

Network

IPv6

Transport

UDP

TCP

Session

Presentation

Application

Socket Programming

Vlans

MacVlan

VXLan

DHCP and DNS

DHCP

Client

Check whether we use systemd-networkd or NetworkManager:

networkctl status
journalctl -u NetworkManager.service | grep DHCP

Check lease

systemd-networkd:

cat /var/lib/dhclient/*.lease

Remove old leases and request new offer:

dhclient -r -v eth0 && rm /var/lib/dhcp/dhclient.* ; dhclient -v eth0

NetworkManager

cat /var/lib/NetworkManager/*.lease 

DNS

Debugging

dig @name-server.fqdn record-name.to.lookup

systemctl is-active systemd-resolved
systemctl is-active named # bind9
resolvectl status
systemd-resolve --status
less /etc/resolv.conf

sudo resolvectl flush-caches
sudo systemd-resolve --flush-caches

nsupdate

https://www.rfc-editor.org/rfc/rfc2136

sudo apt install bind9-dnsutils

Add the key file ./<domain>.key:

key "<your domain>.fqdn.key" {
  algorithm hmac-sha512;
  secret "<key>";
};

Connect via nsupdate:

nsupdate -k ./<domain>.key

Connect to the nameserver in the opened prompt:

> server <name-server>.fqdn
>

Add a CNAME:

  • "cname.to.add.fqdn.": CNAME/alias to add.
  • "3600": TTL in seconds (not hops).
  • "CNAME": Record Type.
  • "target.to.point.to.fqdn.": Target where the cname should point to.

> update add cname.to.add.fqdn. 3600 CNAME target.to.point.to.fqdn.
> show
> send

Delete a record:

> update delete name.to.delete.fqdn.
> show
> send

Use e.g. dig to confirm the new CNAME.

Terraform

Terraform is an open-source infrastructure as code software tool that enables you to safely and predictably create, change, and improve infrastructure.

Install terraform: https://developer.hashicorp.com/terraform/downloads

Getting started

terraform init
terraform workspace list
terraform workspace select <workspace>
terraform validate
terraform state list
terraform plan
terraform apply
terraform show
terraform destroy

Debug

terraform console
terraform import aws_<resource>.<name> arn:aws....
terraform state show
terraform state rm aws_<resource>.<name>
TF_LOG=<TRACE|DEBUG|INFO|WARN|ERROR>
TF_LOG_PATH=./tf.log
validation {
   condition = <https://developer.hashicorp.com/terraform/language/expressions/custom-conditions#condition-expressions>
   error_message = "Some message to display"
}

Best Practices

Google's Best Practices

  • Use explicit versions instead of latest or ~> ..
  • Add a Readme.md to each project and module
  • Stay DRY
  • Use descriptive names
  • Plan locally, apply from VCS
  • Use workspaces to separate network, middleware, application with read access to upper layer

Git

! Do not commit state files to VCS

Run terraform fmt -recursive before commit.

Add .gitignore:

**/.terraform/*
*.tfstate
*.tfstate.*
*override.tf.*
.terraformrc
terraform.rc
.terraform.lock.hcl

Structure

-- SERVICE-DIRECTORY/
   -- modules/
      -- <service-name>/
         -- main.tf
         -- variables.tf
         -- outputs.tf
         -- provider.tf
         -- README
   -- environments/
      -- dev/
         -- backend.tf
         -- main.tf
      -- prod/
         -- backend.tf
         -- main.tf

! Only use workspaces to keep multiple versions of the same infrastructure, like dev/prod or feature-based

Separate variables by workspace:

vars.tf # Var definition and defaults for default workspace
workspace1.tfvars
workspace2.tfvars

Use workspace interpolation

terraform workspace select <workspace>
terraform plan -var-file="$(terraform workspace show).tfvars"

State

JFrog Remote State and Locking Provider

terraform {
  backend "remote" {
    hostname     = "my_artifactory_domain.org"
    organization = "backend repository name"
    workspaces {
      prefix = "my-prefix-"
    }
  }
}

Resolve state drift

Terraform assumes a 1-to-1 mapping between the real world and its state.

This assumption can break in practice, for example when a service goes down, or due to other automation or manual interactions.

With unchanged sources, a terraform plan should result in no actions required.

If there are actions although you know the current sources should definitely match the real state, the drift has to be resolved.
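
A sketch of the usual options, assuming a Terraform version that supports -refresh-only (the resource address and ID below are examples):

terraform plan -refresh-only  # review what changed outside of the code
terraform apply -refresh-only # accept the real world as the new state
terraform state rm aws_instance.example                   # forget a resource that is gone
terraform import aws_instance.example i-0123456789abcdef0 # re-adopt an existing resource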

Graph

terraform graph | dot -Tsvg > graph.svg

AWS

https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html

  • Disable root access keys
  • Add MFA device to root account
  • Add admin group and user
  • Use the least privilege model

Generate SSH Key

resource "aws_key_pair" "ssh_keys" {
  for_each = {
    key_name1: "ssh-rsa ..."
  }

  key_name   = each.key
  public_key = each.value
}

Policies

https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html

https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_testing-policies.html

  • Deny always wins over Allow

EKS

Check https://calculator.aws/#/addService

aws eks --region eu-central-1 update-kubeconfig --name cluster1
kubectl get svc

VPC

Do not overlap cidr_blocks with other sites or accounts

Internet Gateway

Route Table

ip route add 0.0.0.0/0 via <internet gateway>

Subnet

Route Table Association

Security Group

EC2 Instance

Cryptography

Symmetric

  • One or more parties share the exact same key for en-/decryption
  • A second secure channel is required to distribute the key
  • Typically faster than asymmetric encryption

Algorithms: AES, TwoFish, 3DES
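
A minimal symmetric example with the openssl CLI (the file names are placeholders):

openssl enc -aes-256-cbc -pbkdf2 -salt -in secret.txt -out secret.enc # encrypt with a shared passphrase
openssl enc -d -aes-256-cbc -pbkdf2 -in secret.enc -out secret.dec    # decrypt with the same passphrase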

Asymmetric

Post-Quantum

Public Key Infrastructure

Backup

  • En-/decryption keys: Yes - otherwise data loss
  • Signing keys: No - keeps integrity; exactly one person or org can sign. Arguable.

Threshold cryptography

  • Shamir's secret sharing: Split the secret into pieces and store them in different places (sketch below)
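
A sketch using the ssss package (an assumption; any Shamir implementation works), splitting into 5 shares of which any 3 reconstruct the secret:

echo "my-secret" | ssss-split -t 3 -n 5 # prints 5 shares
ssss-combine -t 3                       # prompts for any 3 shares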

Architecture

Different keys for signing and encryption

root -> key

Timestamping

Certificate

Electronic Identity Card

Links a public key to your name.

Contains:

  • DN: Unique name of owner
  • Serial: Unique serial number
  • Start: start date of validity
  • End: end date of validity
  • CRL: certificate revocation list
  • Key: Public Key
  • CA DN: Unique name of the certificate authority that signed the certificate (see the example below)
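
These fields can be printed with the openssl CLI (cert.pem is a placeholder):

openssl x509 -in cert.pem -noout -subject -serial -startdate -enddate -issuer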

OCSP

Online Certificate Status Protocol
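
A hedged example of querying a responder with the openssl CLI (the responder URL and file names are placeholders):

openssl ocsp -issuer ca.pem -cert cert.pem -url http://ocsp.example-ca.com -resp_text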

X.509

Certificate Authority

GlobalSign WebTrust DigiSign

Registration Authority

Resources

  • https://jamielinux.com/docs/openssl-certificate-authority/

TLS1.3

The Transport Layer Security (TLS) Protocol Version 1.3: https://www.rfc-editor.org/rfc/rfc8446
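
To check whether a server actually negotiates TLS 1.3 (the host is an example):

openssl s_client -connect example.org:443 -tls1_3 </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'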

Explanation

The Illustrated TLS 1.3 Connection

Tools

Resources

SSH

Port forwarding

This can be handy if certain ports are blocked by the firewall.

# Forward local port 3000 to remote service running on port 3001
ssh -L 3000:localhost:3001 remote-host

# Expose your local port 5432 on the remote host (reverse forward)
ssh -R 5432:localhost:5432 remote-host

# Act as a jump host so that remote-host can reach remote-db:
# first bring remote-db:5432 to your machine, then expose it on remote-host
ssh -L 5432:localhost:5432 remote-db
ssh -R 5432:localhost:5432 remote-host

SSH signed certificates
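
Instead of distributing individual public keys, an SSH CA signs user keys and the servers trust the CA. A minimal sketch (key names, principal and validity are examples):

ssh-keygen -t ed25519 -f user_ca                                      # create the CA key pair
ssh-keygen -s user_ca -I alice -n alice -V +52w ~/.ssh/id_ed25519.pub # issue a certificate for user alice
# On the server: set "TrustedUserCAKeys /etc/ssh/user_ca.pub" in sshd_config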