CoreOS
This page was last modified on 7 March 2017, at 13:35.
| | |
|---|---|
| OS family | Unix-like |
| Working state | In development |
| Source model | Open source |
| Initial release | October 3, 2013 |
| Latest release | 1298.5.0[1] / February 28, 2017 |
| Marketing target | Servers and computer clusters |
| Platforms | x86-64 |
| Kernel type | Monolithic (Linux kernel) |
| License | Apache License 2.0 |
| Official website | coreos.com |
CoreOS is an open-source operating system based on the Linux kernel, designed for building easily and flexibly scalable clusters. The operating system provides only the minimal functionality needed to deploy applications inside software containers, together with mechanisms for service discovery and configuration sharing. CoreOS is a minimalist distribution (about 136 MB), based on Chrome OS, which in turn is based on Gentoo.
CoreOS can be divided into the following parts:
- systemd - manages local services on the machines in a cluster
- Docker - provides service isolation, although its use is, in principle, optional
- etcd - stores the cluster configuration in a distributed fashion
- fleet - provides distributed service management (a "superstructure" on top of systemd)
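As a sketch of how these pieces fit together, a fleet unit is an ordinary systemd unit with an optional [X-Fleet] section for scheduling constraints; the service name and container command here are hypothetical:

```ini
# hello.service - a minimal fleet unit (hypothetical example)
[Unit]
Description=Hello World container

[Service]
ExecStart=/usr/bin/docker run --rm --name hello busybox /bin/sh -c "while true; do echo Hello; sleep 1; done"
ExecStop=/usr/bin/docker stop -t 1 hello

[X-Fleet]
# never schedule two instances of this service on the same machine
Conflicts=hello@*.service
```

Such a unit would be submitted to the cluster with `fleetctl start hello.service` and inspected with `fleetctl list-units`.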
Overview
CoreOS provides no package manager as a way of distributing payload applications, requiring instead that all applications run inside their containers. Serving as a single control host, a CoreOS instance uses the underlying operating-system-level virtualization features of the Linux kernel to create and configure multiple containers that perform as isolated Linux systems. That way, resource partitioning between containers is performed through multiple isolated userspace instances, instead of using a hypervisor and providing full-fledged virtual machines. This approach relies on the Linux kernel's cgroups and namespaces functionality.
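The cgroups and namespaces mentioned above are plain kernel primitives, visible on any Linux host without any container runtime installed; a quick look, as a sketch:

```shell
# Containers on CoreOS are ordinary Linux processes placed into their own
# namespaces and cgroups; any Linux host exposes these kernel primitives
# directly under /proc:
ls /proc/self/ns        # namespaces the current shell lives in (mnt, pid, net, ...)
cat /proc/self/cgroup   # cgroup hierarchies the shell is accounted under
```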
Initially, CoreOS exclusively used Docker as a component providing an additional layer of abstraction and interface.
CoreOS uses ebuild scripts from Gentoo Linux for automated compilation of its system components.
Update distribution
CoreOS achieves additional security and reliability of its operating system updates by employing FastPatch as a dual-partition scheme for the read-only part of its installation, meaning that the updates are performed as a whole and installed onto a passive secondary boot partition that becomes active upon a reboot or kexec. This approach avoids possible issues arising from updating only certain parts of the operating system, ensures easy rollbacks to a known-to-be-stable version of the operating system, and allows each boot partition to be signed for additional security.
To ensure that only a certain part of the cluster reboots at once when the operating system updates are applied, preserving that way the resources required for running deployed applications, CoreOS provides locksmith as a reboot manager. Using locksmith, it is possible to select between different update strategies that are determined by how the reboots are performed as the last step in applying updates; for example, it may be configured how many cluster members are allowed to reboot simultaneously. Internally, locksmith operates as the daemon that runs on cluster members, while the command-line utility manages configuration parameters.
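The update strategy is normally selected through cloud-config; a minimal sketch, using the `etcd-lock` strategy from CoreOS's documented set (`reboot`, `etcd-lock`, `best-effort`, `off`):

```yaml
#cloud-config
coreos:
  update:
    # take an etcd-coordinated lock before rebooting, so only a limited
    # number of cluster members restart at any one time
    reboot-strategy: etcd-lock
```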
Cluster infrastructure
CoreOS provides etcd, a daemon that runs across all computers in a cluster and provides a dynamic configuration registry, allowing various configuration data to be easily and reliably shared between the cluster members. Since the key–value data stored within etcd is automatically distributed and replicated, with automated master election and consensus establishment using the Raft algorithm, all changes in stored data are reflected across the entire cluster, while the achieved redundancy prevents failures of single cluster members from causing data loss. Besides configuration management, etcd also provides service discovery by allowing deployed applications to announce themselves and the services they offer. Communication with etcd is performed through an exposed REST-based API, which internally uses JSON on top of HTTP; the API may be used directly, or indirectly through etcdctl, a specialized command-line utility also supplied by CoreOS.
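A hypothetical session against a local etcd v2 endpoint might look as follows; the live calls need a running etcd and are therefore shown commented out, while the JSON handling below them runs in any shell:

```shell
# write and read a key through etcdctl (requires a running etcd):
#   etcdctl set /services/web/host 10.0.0.2
#   etcdctl get /services/web/host
# the same read through the REST API:
#   curl -L http://127.0.0.1:2379/v2/keys/services/web/host

# a v2 response is JSON; a value can be pulled out with sed, in the same
# spirit as the discovery-token extraction in the install script:
RESPONSE='{"action":"get","node":{"key":"/services/web/host","value":"10.0.0.2"}}'
VALUE=$(echo "$RESPONSE" | sed 's|.*"value":"\([^"]*\)".*|\1|')
echo "$VALUE"
```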
Deployment
When running on dedicated hardware, CoreOS can be either permanently installed to local storage, such as a hard disk drive (HDD) or solid-state drive (SSD), or booted remotely over a network using the Preboot Execution Environment (PXE) in general, or iPXE as one of its implementations. CoreOS also supports deployments on various hardware virtualization platforms, including Amazon EC2, DigitalOcean, Google Compute Engine, Microsoft Azure, OpenStack, QEMU/KVM, Vagrant and VMware.
CoreOS can also be deployed through its commercial distribution called Tectonic, which additionally integrates Google's Kubernetes as a cluster management utility. As of April 2015, Tectonic was planned to be offered as beta software to select customers. Furthermore, CoreOS provides Flannel as a component implementing an overlay network required primarily for integration with Kubernetes.
Installation
CoreOS can be set up in different ways, but for experimental purposes the easiest is via Vagrant:
- Install the latest versions of Git, VirtualBox and Vagrant.
- Go to the project folder and rename the user-data.sample file to user-data, and config.rb.sample to config.rb.
- Go to discovery.etcd.io/new and copy the URL that appears; then, following the instructions in user-data, uncomment the discovery line and replace its URL with the copied one.
- From the folder, run vagrant up.
- Wait for the base image to download.
- Run vagrant ssh.
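The configuration steps above can be sketched as a small shell function; the repository URL and file names follow the coreos-vagrant conventions, while the helper name and the exact indentation of the discovery line are assumptions:

```shell
# clone the upstream Vagrant environment first:
#   git clone https://github.com/coreos/coreos-vagrant.git && cd coreos-vagrant

prepare_configs() {            # copy the sample configs and inject the token
  local dir="$1" token_url="$2"
  cp "$dir/user-data.sample" "$dir/user-data"
  cp "$dir/config.rb.sample" "$dir/config.rb"
  # uncomment the discovery line and substitute the fresh token URL
  sed -i "s|^ *# *discovery:.*|    discovery: $token_url|" "$dir/user-data"
}

# prepare_configs . "https://discovery.etcd.io/<your-token>"
# vagrant up && vagrant ssh
```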
VirtualBox (Linux) installation
The following script is based on the official CoreOS installation instructions:
#!/bin/bash

USAGE="Usage: $0 -h | [-n name-prefix] [-s size]
This script creates a cluster of SIZE VirtualBox VMs using default parameters.
Feel free to modify it.
Options:
    -h    This help
    -n    Name prefix for VMs (default: core)
    -s    Size of cluster (default: 3)
CREATEVDI_OPTS will be passed to the create-coreos-vdi script,
which creates a CoreOS VDI image to be used with VirtualBox.
"
PREFIX="core"
SIZE=3

: ${CREATEVDI_OPTS:="-V stable"}
while getopts "n:s:h" OPTION
do
  case $OPTION in
    n) PREFIX="$OPTARG" ;;
    s) SIZE=$OPTARG ;;
    h) echo "$USAGE"; exit;;
    *) exit 1;;
  esac
done

# disk image downloading
: ${IMAGE:=coreos_prod.vdi}
echo $IMAGE
if [ ! -f $IMAGE ]
then
  if [ ! -f "create-coreos-vdi" ]
  then
    wget https://raw.githubusercontent.com/coreos/scripts/master/contrib/create-coreos-vdi
    chmod +x create-coreos-vdi
  fi
  source ./create-coreos-vdi $CREATEVDI_OPTS
  mv $VDI_IMAGE $IMAGE
fi

# token generation
TOKEN=$(curl -s "https://discovery.etcd.io/new?size=$SIZE" | sed 's|.*/||')

# cloud-config template
CLOUD_CONFIG='#cloud-config
users:
  - name: "user"
    passwd: "$1$53YHkhSo$bvjpI.GPhDuC8pUfqAlrT."
    groups:
      - "sudo"
      - "docker"
    ssh-authorized-keys:
      - "<SSH_KEY>"
ssh_authorized_keys:
  - <SSH_KEY>
hostname: <HOSTNAME>
coreos:
  etcd2:
    advertise-client-urls: http://<IP>:2379,http://<IP>:4001
    initial-advertise-peer-urls: http://<IP>:2380
    discovery: https://discovery.etcd.io/<TOKEN>
    listen-peer-urls: http://<IP>:2380,http://<IP>:7001
    listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
    - name: static.network
      runtime: true
      content: |
        [Match]
        Name=enp0s3

        [Network]
        DNS=192.168.15.1
        Address=<IP>/24
        Gateway=192.168.15.1
#'

# adding SSH keys
while read l; do
  if [ -z "$SSH_KEY" ]; then
    SSH_KEY="$l"
  else
    SSH_KEY="$SSH_KEY
  - $l"
  fi
done < ~/.ssh/id_rsa.pub

# filling config
CLOUD_CONFIG="${CLOUD_CONFIG//<SSH_KEY>/${SSH_KEY}}"
CLOUD_CONFIG="${CLOUD_CONFIG//<TOKEN>/${TOKEN}}"

# configuring the cluster NAT network
VBoxManage natnetwork remove --netname ${PREFIX}_net
VBoxManage natnetwork add --netname ${PREFIX}_net --network "192.168.15.0/24" --dhcp off --enable

# VM creation
for i in $(seq 1 $SIZE)
do
  NAME=$PREFIX$i
  CLOUD_CONFIG_x=$CLOUD_CONFIG
  public_ipv4="192.168.15.$((100+$i))"

  # filling config
  CLOUD_CONFIG_x="${CLOUD_CONFIG_x//<HOSTNAME>/${NAME}}"
  CLOUD_CONFIG_x="${CLOUD_CONFIG_x//<IP>/${public_ipv4}}"

  WORKDIR="tmp.${RANDOM}"
  mkdir "$WORKDIR"

  CONFIG_DIR="${WORKDIR}/openstack/latest"
  CONFIG_FILE="${CONFIG_DIR}/user_data"
  CONFIGDRIVE_FILE="$NAME.iso"

  mkdir -p "$CONFIG_DIR"
  cat > ${CONFIG_FILE} << EOL
${CLOUD_CONFIG_x}
EOL

  # config-drive creation
  mkisofs -R -V config-2 -o "$CONFIGDRIVE_FILE" "$WORKDIR"
  rm -rf $WORKDIR

  # cloning the VDI
  VBoxManage clonehd $IMAGE $NAME.vdi
  VBoxManage modifyhd $NAME.vdi --resize 10240

  # VM configuration
  VBoxManage createvm --name $NAME --ostype Linux26_64 --register
  VBoxManage modifyvm $NAME --memory 1024
  # network configuration
  VBoxManage modifyvm $NAME --nic1 natnetwork
  VBoxManage modifyvm $NAME --nat-network1 ${PREFIX}_net
  VBoxManage natnetwork modify --netname ${PREFIX}_net --port-forward-4 "ssh_$NAME:tcp:[]:$((1022+$i)):[$public_ipv4]:22"
  VBoxManage modifyvm $NAME --nic2 nat
  # port forwarding
  VBoxManage modifyvm $NAME --natpf2 "ssh_$NAME,tcp,127.0.0.1,$((2200+$i)),,22"
  # attaching the VDI
  VBoxManage storagectl $NAME --name "SATA Controller" --add sata --controller IntelAHCI
  VBoxManage storageattach $NAME --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium $NAME.vdi
  # attaching the config-drive
  VBoxManage storagectl $NAME --name "IDE Controller" --add ide
  VBoxManage storageattach $NAME --storagectl "IDE Controller" --port 0 --device 0 --type dvddrive --medium $NAME.iso
done
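Once the VMs are up, the port-forwarding rules set by the script determine how to reach them: host port 1022+i forwards over the NAT network, and host port 2200+i over the per-VM NAT interface. A small helper for the latter, as a sketch (the `ssh_port` name is an assumption; "user" is the account created by the cloud-config above):

```shell
ssh_port() {              # host port that the script forwards to VM $1's port 22
  echo $((2200 + $1))
}

# ssh -p "$(ssh_port 1)" user@127.0.0.1    # log in to core1
```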
See also
- Application virtualization - software technology that encapsulates application software from the operating system on which it is executed
- Comparison of application virtualization software - various portable and scripting language virtual machines
- Comparison of platform virtualization software - various emulators and hypervisors, which emulate whole physical computers
- LXC (Linux Containers) - an environment for running multiple isolated Linux systems (containers) on a single Linux control host
- Operating-system-level virtualization - implementations based on an operating system kernel's support for multiple isolated userspace instances
- Software as a service - a software licensing and delivery model that hosts software centrally and licenses it on a subscription basis
- Virtualization - a general concept of providing virtual versions of computer hardware platforms, operating systems, storage devices, etc.
External links
- First glimpse at CoreOS, September 3, 2013, by Sébastien Han
- CoreOS: Linux for the cloud and the datacenter, ZDNet, July 2, 2014, by Steven J. Vaughan-Nichols
- What's CoreOS? An existential threat to Linux vendors, InfoWorld, October 9, 2014, by Matt Asay
- Understanding CoreOS distributed architecture, March 4, 2015, a talk to Alex Polvi by Aaron Delp and Brian Gracely
- CoreOS fleet architecture, August 26, 2014, by Brian Waldon et al.
- Running CoreOS on Google Compute Engine, May 23, 2014
- CoreOS moves from Btrfs to Ext4 + OverlayFS, Phoronix, January 18, 2015, by Michael Larabel
- Containers and persistent data, LWN.net, May 28, 2015, by Josh Berkus
License
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License, and code samples are licensed under the Apache 2.0 License. See Terms of Use for details.