The formal state of an Arvo instance is an event history, as a linked list of nouns from first to last. The history starts with a bootstrap sequence that delivers Arvo itself, first as an inscrutable kernel, then as the self-compiling source for that kernel. (Whitepaper)
When Arvo starts for the first time, how does it work? The boot output itself tells you something about that process; as of Vere 2.12/Arvo 412K, the boot sequence output looks like this:
Fake ship ~fes:
```
~urbit 2.12
boot: home is fes
loom: mapped 2048MB
lite: arvo formula 2a2274c9
lite: core 4bb376f0
lite: final state 4bb376f0
boot: downloading pill https://bootstrap.urbit.org/urbit-v2.12.pill
boot: parsing %solid pill
dock: pace (live): configured at fes/.bin/pace
vere: binary copy succeeded
loom: mapped 2048MB
boot: protected loom
live: logical boot
boot: installed 661 jets
---------------- playback starting ----------------
pier: replaying events 1-14
arvo: metamorphosis
clay: kernel updated
clay: rebuilding %base after kernel update
gall: installing %acme
gall: installing %azimuth
gall: installing %dbug
gall: installing %dojo
gall: installing %eth-watcher
gall: installing %hood
drum: link [~fes %dojo]
kiln: boot
gall: installing %herm
gall: installing %lens
gall: installing %ping
gall: installing %spider
gall: installing %talk-ui
Not running %settings-store yet, got %poke
gall: installing %docket
gall: installing %treaty
gall: installing %hark-store
gall: installing %hark-system-hook
gall: installing %settings
gall: installing %settings-store
gall: installing %storage
gall: installing %reel
gall: installing %bait
gall: installing %vitals
gall: installing %growl
docket: fetching %http glob for %talk desk
docket: fetching %http glob for %garden desk
docket: fetching %http glob for %talk desk
docket: fetching %http glob for %garden desk
docket: fetching %http glob for %webterm desk
docket: fetching %http glob for %landscape desk
gall: installing %metadata-store
gall: installing %contact-store
gall: installing %chat-store
gall: installing %graph-store
gall: installing %group-store
%group-store: on-init
gall: installing %invite-store
gall: installing %s3-store
gall: installing %chat-hook
gall: installing %chat-view
gall: installing %clock
gall: installing %contact-hook
gall: installing %contact-pull-hook
gall: installing %contact-push-hook
gall: installing %contact-view
gall: installing %dm-hook
gall: installing %graph-pull-hook
gall: installing %graph-push-hook
gall: installing %group-pull-hook
gall: installing %group-push-hook
gall: installing %group-view
gall: installing %hark-chat-hook
gall: installing %hark-graph-hook
gall: installing %hark-group-hook
gall: installing %hark-invite-hook
gall: installing %invite-hook
gall: installing %invite-view
gall: installing %launch
gall: installing %metadata-hook
gall: installing %metadata-pull-hook
gall: installing %metadata-push-hook
gall: installing %observe-hook
gall: installing %sane
gall: installing %weather
gall: not running %file-server yet, got %poke
%group-store: on-peek on path /y/groups
%group-store: on-watch on path /groups
%group-store: on-watch on path /groups
%group-store: on-watch on path /groups
%group-store: on-watch on path /groups
%group-store: on-watch on path /groups
docket: fetching %http glob for %groups desk
gall: installing %groups
gall: installing %chat
gall: installing %contacts
%contacts: on-init
gall: installing %heap
gall: installing %diary
gall: installing %hark
gall: installing %notify
gall: installing %groups-ui
gall: installing %grouper
%contacts: on-poke with mark %noun
%contacts: on-agent on wire /migrate, %poke-ack
[%agent-giving-on-system-duct %diary %fact]
pier: (14): play: done
---------------- playback complete ----------------
vere: checking version compatibility
loom: image backup complete
lick init mkdir fes/.urb/dev
ames: live on 31592 (localhost only)
conn: listening on fes/.urb/conn.sock
lick: %born failure;
http: web interface live on http://localhost:8080
http: loopback live on http://localhost:12321
pier (25): live
docket: fetching %http glob for %garden desk
ames: metamorphosis; ~zod is your neighbor
~fes:dojo>
```
Live ship (comet):
```
~urbit 2.12
boot: home is /home/neal/comet-412
loom: mapped 2048MB
lite: arvo formula 2a2274c9
lite: core 4bb376f0
lite: final state 4bb376f0
Downloading pill https://bootstrap.urbit.org/urbit-v2.12.pill
Mining a comet. May take up to an hour.
If you want to boot faster, get an Urbit identity.
Found comet ~mipped-pinlug-loshec-tastun--tabfen-bitwex-norsul-wanzod
boot: verifying keys
Getting sponsor
boot: retrieving galaxy table
boot: retrieving network domains
boot: retrieving keys for sponsor ~wanzod
boot: retrieving keys for sponsor ~zod
boot: parsing %solid pill
pace (live): configured at /home/neal/comet-412/.bin/pace
vere: binary copy succeeded
loom: mapped 2048MB
boot: protected loom
logical boot
boot: installed 661 jets
---------------- playback starting ----------------
pier: replaying events 1-14
arvo: metamorphosis
gall: not running %azimuth yet, got %poke
arvo: kernel updated
clay: rebuilding %base after kernel update
gall: installing %acme
gall: installing %azimuth
gall: installing %dbug
gall: installing %dojo
gall: installing %eth-watcher
gall: installing %hood
link [~mipped-pinlug-loshec-tastun--tabfen-bitwex-norsul-wanzod %dojo]
kiln: boot
gall: installing %herm
gall: installing %lens
gall: installing %ping
gall: installing %spider
gall: installing %talk-ui
gall: not running %settings-store yet, got %poke
gall: installing %docket
gall: installing %treaty
gall: installing %hark-store
gall: installing %hark-system-hook
gall: installing %settings
gall: installing %settings-store
gall: installing %storage
gall: installing %reel
gall: installing %bait
gall: installing %vitals
gall: installing %growl
docket: fetching %http glob for %talk desk
docket: fetching %http glob for %garden desk
docket: fetching %http glob for %talk desk
docket: fetching %http glob for %garden desk
docket: fetching %http glob for %webterm desk
docket: fetching %http glob for %landscape desk
gall: installing %metadata-store
gall: installing %contact-store
gall: installing %chat-store
gall: installing %graph-store
gall: installing %group-store
%group-store: on-init
gall: installing %invite-store
gall: installing %s3-store
gall: installing %chat-hook
gall: installing %chat-view
gall: installing %clock
gall: installing %contact-hook
gall: installing %contact-pull-hook
gall: installing %contact-push-hook
gall: installing %contact-view
gall: installing %dm-hook
gall: installing %graph-pull-hook
gall: installing %graph-push-hook
gall: installing %group-pull-hook
gall: installing %group-push-hook
gall: installing %group-view
gall: installing %hark-chat-hook
gall: installing %hark-graph-hook
gall: installing %hark-group-hook
gall: installing %hark-invite-hook
gall: installing %invite-hook
gall: installing %invite-view
gall: installing %launch
gall: installing %metadata-hook
gall: installing %metadata-pull-hook
gall: installing %metadata-push-hook
gall: installing %observe-hook
gall: installing %sane
gall: installing %weather
gall: not running %file-server yet, got %poke
%group-store: on-peek on path /y/groups
%group-store: on-watch on path /groups
%group-store: on-watch on path /groups
%group-store: on-watch on path /groups
%group-store: on-watch on path /groups
%group-store: on-watch on path /groups
docket: fetching %http glob for %groups desk
gall: installing %groups
gall: installing %chat
gall: installing %contacts
%contacts: on-init
gall: installing %heap
gall: installing %diary
gall: installing %hark
gall: installing %notify
gall: installing %groups-ui
gall: installing %grouper
%contacts: on-poke with mark %noun
%contacts: on-agent on wire /migrate, %poke-ack
[%agent-giving-on-system-duct %diary %fact]
pier: (14): play: done
---------------- playback complete ----------------
vere: checking version compatibility
loom: image backup complete
lick init mkdir /home/neal/comet-412/.urb/dev
ames: live on 52253
conn: listening on /home/neal/comet-412/.urb/conn.sock
lick: %born failure;
http: web interface live on http://localhost:8081
http: loopback live on http://localhost:12322
pier (25): live
docket: fetching %http glob for %garden desk
ames: czar zod.urbit.org: ip .35.247.119.159
ames: metamorphosis; ~zod is your neighbor
ames: czar at zod.ur
~mipped_wanzod:dojo>
```
At the 10,000-foot level, we can break the current boot process into a few discrete stages:
- Runtime startup
- Boot sequence (pill)
- Arvo larval phase
- Arvo main sequence
- Userspace startup (a lot of that output results from userspace slogs)
In this lesson, we will examine each of these steps.
Runtime Startup
To start an Urbit ship for the first time, you have to provide a ship name and the corresponding private key. It's easiest to demonstrate this with a moon, using the values obtained from `|moon`. After allocating memory, the logical boot process proceeds.
The runtime spawns the king (`king.c`) and indirectly the serf (`serf.c`) processes. These will both run for the lifetime of the Urbit process.
- The serf is the Nock runtime. It tracks the current state of Arvo as a noun, updating the state by poking it with nouns. It informs the king of the new state.
- Vere provides a standard serf, formerly known as the `urbit-worker` process.
- Sword (née Ares) can be used as a serf in its capacity as a Nock interpreter, but requires I/O driver support to fill that role for Urbit completely.
- The king manages snapshots of Arvo's state and interfaces with Unix.
- Vere is the only Urbit king currently.
- King Haskell was an alternative king process that was dropped for maintenance reasons.
The serf only ever talks to the king, while the king talks with both the serf and Unix.
When the runtime begins, it drops into `vere/main.c` and checks the command-line options and commands. `main()` has to decide what to do (i.e., which command) and then sets global flags accordingly. If this is a first-time boot or a restart of a pier, then `main()` starts the king with `u3_king_commence()`. (In general, `main.c` isn't very Urbit-y; it's a fairly orthodox C startup file.)
The first thing the king does is use `vere/dawn.c` to retrieve the state of Ethereum and the claimed ship's identity. If this can be verified, then the sponsor chain is retrieved and preparations are made for the Arvo bootstrap sequence.
```
boot: verifying keys
Getting sponsor
boot: retrieving galaxy table
boot: retrieving network domains
boot: retrieving keys for sponsor ~wanzod
boot: retrieving keys for sponsor ~zod
```
- See how a comet is mined in `vere/dawn.c:u3_dawn_come`.
You can see `u3v_wish` present in several places, demonstrating the Arvo `++wish` evaluation arm.
The boot sequence is set up in `vere/pier.c:u3_pier_boot()`, triggered by the king immediately after the `dawn.c` call. There are some runtime boilerplate issues to resolve, such as creating the `/.urb` folder for the event log and the loom. Snapshots are made and replays are checked, &c.
Finally, the king hooks up the bootstrap from the supplied pill (`vere/king.c:_king_boot_ivory` → `noun/serial.c:u3s_cue_xeno`) and starts the main event loop (`uv_run`). `uv_run` is actually a loop handler from `libuv`, not a part of Urbit proper. It provides asynchronous I/O, which makes sense since every event in Urbit either comes from or results in a Unix system call.
Regarding `libuv`:
libuv's name and logo stand for "Unicorn Velociraptor", where:
- U or Unicorn is a reference to universal and multi-platform.
- V or Velociraptor is a reference to velocity and high-performance.
Pill I
When you boot a ship, you need all the parts of the boot sequence that are not unique to your ship, as well as your private keys, up-to-date information about the PKI, some entropy, and so on. The runtime provides some of this information. The pill is then the recipe for the bootstrap sequence. The bootstrap sequence is how you get to an Arvo kernel. Once you have an Arvo kernel, you can compute in the normal event timeline.
A big part of the practical complexity is obtaining identity and keys from Azimuth. You need your own keys, of course, but you also need the public keys of anyone you need to talk to. So you start with the galaxy table (hard-coded) and build the sponsorship chain by construction. Then you can get the rest from an Ethereum node.
We also want to avoid booting into an invalid state.
The pill contains:
- A list of Nock events to create an Arvo kernel.
- A list of Arvo events to follow once the Arvo kernel has been created.
- A list of userspace events that follow that setup.
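To fix the idea, here is a rough Hoon sketch of that three-part shape. This is a simplification for illustration only; the real `+$pill` type lives in `/lib/pill` and differs in detail:

```hoon
::  hypothetical sketch: a pill as three event lists.
::  the actual +$pill mold in /lib/pill is richer than this.
+$  pill-sketch
  $:  boot-ova=(list *)          ::  Nock events that produce an Arvo kernel
      kernel-ova=(list *)        ::  Arvo events run once the kernel exists
      userspace-ova=(list *)     ::  events that set up userspace
  ==
```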
There are three main pill types:
- An ivory pill is a runtime support pill compiled into the binary. It produces just the `%zuse` core for use by Vere's I/O process. (This prevents needing to redefine parts of the Hoon standard library inside Vere.)

  ```
  .ivory/pill +ivory %base
  ```
- A brass pill is a complete bootstrap sequence in which the vanes are recompiled against a target `%base` desk (the first argument).

  ```
  .brass/pill +brass %base
  ```
A brass pill is a recipe for a complete bootstrap sequence, starting with a bootstrap Hoon compiler as a Nock formula. It compiles a Hoon compiler from source, then uses it to compile everything else in the kernel. (~master-morzod)
For instance, the developer pill is produced as a brass pill.
- A solid pill is a kernel developer expedient, which doesn't recompile the vanes the way a brass pill does. (A hedged dojo invocation for it appears after this list.)
- A baby pill is a minimalist pill, like the one you produced for `ca04`'s homework. (~wicdev-wisryt walks through the process of creating a baby pill here↗.)
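For parity with the ivory and brass examples above, the solid pill also has a dojo invocation. The exact arguments of the `+solid` generator vary between releases, so treat this as a hedged sketch and check the generator before relying on it:

```
.solid/pill +solid %base
```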
Bootstrapping
Before we plug the newborn node into the network, we feed it a series of bootstrap or “larval” packets that prepare it for adult life as a packet transceiver on the public network. The larval sequence is private, solving the secret delivery problem, and can contain as much code as we like. (Whitepaper)
If Nock is a frozen function from nouns to nouns, then we can define an OS on top of it. That OS, Arvo, guarantees that the state of your ship is a pure function of the things that have happened to it. The event log is a linked list of events, operated on by what the whitepaper calls a “functional BIOS”: `[2 [0 3] [0 2]]`.
So the formula starts with the first event. What's in the first event? It performs the bootstrap of Arvo itself, then loops to take the remaining events off one at a time.
When we boot a ship, the runtime implements this directly using `u3v_boot`, which is given a list and runs the formula from a solid pill.
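You can watch the lifecycle formula operate on a toy “event log” directly in the dojo. This is only a sketch: the first “event” here is a trivial stand-in formula rather than `++aeon`, and the remaining “events” are dummy atoms:

```
> .*([[0 1] 42 43] [2 [0 3] [0 2]])
[42 43]
```

The formula `[0 1]` at the head of the log simply returns its subject, and the lifecycle formula hands it the rest of the log (`[42 43]`) as that subject, which is exactly the structure `++aeon` relies on below.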
An `+$ovum` is a pair of a `wire` (routing data) and a `card`. A `+$card` is a raw event datum: a pair of a `term` tag and an arbitrary noun.
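As a sketch of the shapes involved (simplified from the definitions in `/sys/arvo`; the example event is illustrative, not normative):

```hoon
+$  wire  path              ::  routing data: where the event came from
+$  card  (pair term *)     ::  raw event datum: a tagged noun
+$  ovum  [=wire =card]     ::  a routed event
::  e.g. a terminal keystroke arrives as something like
::  [//term/1 %belt %ret ~]
```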
The First Five Events
The `++eden` core supplies the first five events to create the event series that will result in Arvo and its lifecycle function.
- Event One, `++aeon`. Start the event loop.
- Event Two, `++boot`. Bootstrap an `arvo` kernel from source.
- Event Three, `++fate`. Produce the Hoon bootstrap compiler.
- Event Four, `++hoon`. Produce the compiler source.
- Event Five, `++arvo`. Produce the kernel source.
Event One: ++aeon
```hoon
++  aeon
  ^-  *
  =>  *log=[boot=* tale=*]
  !=
  =+  [arvo epic]=.*(tale.log boot.log)
  |-  ^-  *
  ?@  epic  arvo
  %=  $
    epic  +.epic
    arvo  .*([arvo -.epic] [%9 2 %10 [6 %0 3] %0 2])
  ==
```
`++aeon` is the first function run on any ship. The gate on the outer edge of Arvo is retrieved, then all of the events are run in a list-processing loop, invoking Arvo on each one. This is Hoon code used only to produce raw Nock via `!=` zaptis. The `=>` tisgar asserts that we expect the subject (outside the Nock) to look like the first thing in the event log plus the rest of the log. This is the first event of the log, and the subject it expects is the rest of the log (which will be evaluated using `[2 [0 3] [0 2]]`). So `boot.log` is Event Two and `tale.log` is Events Three through infinity.

This produces `arvo`, the stateless kernel, and `epic`, the rest of the log. The formula in Event Two can take as many events as it needs from the sequence to construct `arvo`, then the incremental process can continue. If `epic` is an atom `~` sig (the null terminator), then `arvo` is ready. Otherwise, the new `arvo` is the result of calling `arvo` on `-.epic`, the next event.
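The formula `[%9 2 %10 [6 %0 3] %0 2]` that `++aeon` applies to each event is the standard “slam a gate” idiom: take the core at slot 2 of the subject, replace its sample (slot 6) with the value at slot 3, and fire its `$` arm. You can verify this in the dojo with an ordinary gate standing in for Arvo (the `%` literals are just atoms, so they are written plainly here):

```
> .*([|=(a=@ +(a)) 41] [9 2 10 [6 0 3] 0 2])
42
```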
Event Two: ++boot
```hoon
++  boot
  ^-  *
  =>  *log=[fate=* hoon=@ arvo=@ epic=*]
  !=
  ::
  ::  activate the compiler gate.  the product of this formula
  ::  is smaller than the formula.  so you might think we should
  ::  save the gate itself rather than the formula producing it.
  ::  but we have to run the formula at runtime, to register jets.
  ::
  ::  as always, we have to use raw nock as we have no type.
  ::  the gate is in fact ++ride.
  ::
  ~>  %slog.[0 leaf+"1-b"]
  =/  compiler-gate  .*(0 fate.log)
  ::
  ::  compile the compiler source, producing (pair span nock).
  ::  the compiler ignores its input so we use a trivial type.
  ::
  ~>  %slog.[0 leaf+"1-c (compiling compiler, wait a few minutes)"]
  =/  compiler-tool
    ~>  %bout
    .*([compiler-gate noun/hoon.log] [%9 2 %10 [6 %0 3] %0 2])
  ::
  ::  switch to the second-generation compiler.  we want to be
  ::  able to generate matching reflection nouns even if the
  ::  language changes -- the first-generation formula will
  ::  generate last-generation spans for `!>`, etc.
  ::
  ~>  %slog.[0 leaf+"1-d"]
  =.  compiler-gate  ~>(%bout .*(0 +.compiler-tool))
  ::
  ::  get the span (type) of the kernel core, which is the context
  ::  of the compiler gate.  we just compiled the compiler,
  ::  so we know the span (type) of the compiler gate.  its
  ::  context is at tree address `+>` (ie, `+7` or Lisp `cddr`).
  ::  we use the compiler again to infer this trivial program.
  ::
  ~>  %slog.[0 leaf+"1-e"]
  =/  kernel-span
    ~>  %bout
    -:.*([compiler-gate -.compiler-tool '+>'] [%9 2 %10 [6 %0 3] %0 2])
  ::
  ::  compile the arvo source against the kernel core.
  ::
  ~>  %slog.[0 leaf+"1-f"]
  =/  kernel-tool
    ~>  %bout
    .*([compiler-gate kernel-span arvo.log] [%9 2 %10 [6 %0 3] %0 2])
  ::
  ::  create the arvo kernel, whose subject is the kernel core.
  ::
  ~>  %slog.[0 leaf+"1-g"]
  ~>  %bout
  [.*(+>.compiler-gate +.kernel-tool) epic.log]
--
```
The next event noun bootstraps a kernel from its source (`arvo`).
Event Three: ++fate
The next noun (event) supplies the Hoon bootstrap compiler as a Nock formula.
Details of this process are supplied in `/lib/pill`.
Event Four: ++hoon
Next we produce the compiler source.
Details of this process are supplied in `/lib/pill`.
Event Five: ++arvo
Then we produce the kernel source.
Details of this process are supplied in `/lib/pill`.
/lib/pill
- To see how the events are created, let's take a look at the `++brass` arm in `/lib/pill`. This uses vase mode (see e.g. `++swat`, which is a delayed `++slap` in a trap) to produce the cores and events. (A tiny dojo illustration of vase mode follows.)
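If vase mode is unfamiliar, here is a small dojo sketch of the underlying pieces: `ream` parses Hoon source text into an AST, and `slap` evaluates that AST against a vase (here, a vase of the dojo subject) to produce a new vase; `swat` defers the same work inside a trap:

```
> q:(slap !>(.) (ream '(add 2 3)'))
5
```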
Lifecycle
Once you have all of this, you have completed the lifecycle evaluation of the bootstrap sequence and can run the rest of the event log.
Arvo enters the larval phase during the boot sequence and during certain OTAs. An OTA can change just `%lull`/`%zuse`/a vane, which doesn't touch Arvo; or it can change `/sys/hoon` or `/sys/arvo`, in which case it needs to handle an Arvo upgrade. At a high level, we build the new Arvo, compile Hoon and Arvo, gather all persistent and ephemeral state plus any new upgrade state, and hand that to the new Arvo (the new world).
The larval core is the outermost core in `/sys/arvo`. When you first bootstrap, the core that the runtime talks to is this larval core. It is designed to accumulate preconditions and then metamorphose into the adult Arvo. It needs the current time, entropy, the identity, and the standard library.
Symmetry breaking---the event that defines the identity of the computer---is exempt from this requirement. Once identity is established, it can't be updated. If you want a new identity, create a new instance. (Whitepaper)
The larval stage was introduced into the boot and upgrade sequence as a way to solve a practical problem of self-reference. If the ship's identity is not yet known, then what should happen? If you bunt the ship type, then everything is `~zod`. If you use a `(unit @p)`, then all code using `our` becomes cumbersome. So while identity is injected early in the kernel's life, at this point it hasn't happened yet: it takes place when the kernel acquires identity and entropy and sheds the larval core. This is called “breaking symmetry” because prior to that point every Urbit is identical. (This was not always true, as Joe Bryan notes in this talk↗ at 10:21ff.) The larval stage performs the following steps in order:
- The standard library, `zuse`, is installed.
- Entropy is added.
- Identity is added.
- Metamorphosis into the next stage of Arvo takes place.
```hoon
|%
++  load                                  ::  +4
  |=  hir=heir
  ?:  ?=(%grub -.hir)
    ~>(%mean.'arvo: larval reboot' !!)    ::  XX support
  (^load hir)
::
++  peek  _~                              ::  +22
++  poke                                  ::  +23
  |=  [now=@da ovo=ovum]
  ^-  ^
  ~|  poke/p.card.ovo
  =/  wip
    ~>  %mean.'arvo: bad wisp'
    ;;(wisp card.ovo)
  ::
  =.  ..poke
    ?-  -.wip
      %verb  ..poke(lac ?~(p.wip !lac u.p.wip))
      %wack  ..poke(eny `p.wip)
      %what  ..poke(gub (what gub p.wip))
      %whom  ..poke(who ~|(%whom-once ?>(?=(~ who) `p.wip)))
    ::
        %wyrd
      ?.  (sane:wyrd kel.p.wip)
        ~>(%mean.'wyrd: insane' !!)
      %-  %+  need:wyrd  kel.p.wip
          ^-  wynn
          :*  hoon/hoon-version
              arvo/arvo
              ?~  lul  ~
              :-  lull/;;(@ud q:(slap $:u.lul limb/%lull))
              ?~  zus  ~
              [zuse/;;(@ud q:(slap $:u.zus limb/%zuse)) ~]
          ==
      ..poke(ver `p.wip)
    ==
  ::
  ::  upgrade once we've accumulated necessary state
  ::
  ?~  hir=(molt now gub)
    [~ ..poke]
  ~>  %slog.[0 leaf+"arvo: metamorphosis"]
  (load u.hir)
::
++  wish                                  ::  +10
  |=  txt=*
  q:(slap ?~(zus pit $:u.zus) (ream ;;(@t txt)))
--
```
- `++load`/`+4` passes through to the new-world Arvo, because the larval stage is a trivial core wrapped around the mature core.
- `++wish`/`+10` evaluates as normal.
- `++peek`/`+22` produces the type of `~`; i.e., every scry blocks.
- `++poke`/`+23` handles the upgrade-related events. Larval events are actually quite simple:
```hoon
+$  wisp
  $%  $>(?(%verb %what) waif)      ::  update from files (event from anywhere)
      $>(?(%wack %wyrd) wasp)      ::  iterate entropy (event from runtime)
      [%whom p=ship]               ::  acquire identity (frozen after boot)
  ==
```
++le
`++le` is the Arvo event-loop engine. It provides an `++abet`-pattern-driven core for building worklists. (A minimal sketch of the pattern appears at the end of this subsection.)

- `++peek` handles any reads; in the larval stage these block.
- `++poke` is how you compute an event, as normal.
- `++load` is how Arvo transfers state to a future version of itself.
- `++wish` is of course the `urbit eval` wrapper and generates no events.

See also `++what`, the update engine, which handles a kernel update and continuation from the worklist.
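Since the `++abet` pattern is load-bearing here, a minimal toy version of the shape (emphatically not `++le` itself) may help: a door accumulates a list of effects alongside some state, `++emit` queues an effect, and `++abet` finalizes by handing back the effects and the new state together:

```hoon
::  toy abet-pattern core: illustrative only, not Arvo's ++le
|_  [moves=(list @t) count=@ud]
+*  this  .
++  abet  [(flop moves) count]            ::  finalize: [effects state]
++  emit  |=(m=@t this(moves [m moves]))  ::  queue one effect
++  poke                                  ::  handle one "event"
  |=  n=@ud
  =.  count  (add count n)
  (emit 'count updated')
--
```

Each poke mutates the core and queues moves; only `++abet` flattens that accumulated work into a result, which is the same worklist discipline `++le` applies to real Arvo events.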
Metamorphosis
Metamorphosis means producing the parent core, the mature Arvo. This takes place through `++load`, which hands itself forward.

The larval stage's upgrade mechanism is just a pass-through; anything else crashes.

`++molt` takes care of the process in a practical way by gathering the known system information and packaging it for `++load`.
When bootstrapping is done, the runtime strips off the gate to access the real Arvo core. (This is something of a manual trick.) This is how Arvo's structural interface is accessed, through the hardcoded arm addresses.
- ~master-morzod, “Annotation on the Boot Process”↗
- ~master-morzod, ~lagrev-nocfep “Dev Chat: Joe Bryan on the Boot Sequence”↗
Main Sequence (Mature Arvo)
Once the outer larval core has been shed, the system is back in the Arvo main sequence (`ca04`).
Vanes & Userspace
`%lull` is compiled against `..part`, the first half of Arvo.
Shutdown
In a general sense, Urbit is only aware of the world while it lives. But of course, on a real machine the ship will be shut down, migrated, and run on different runtimes.
- What happens when you run `|exit`? Trace that process out and back into the king for a graceful shutdown.