This design document describes the minimum setup and architecture for creating
an NMS (Network Management System) using SlapOS. It will describe the administrative
layer and the required hardware and software that need to be installed.
It will also provide information on the user layer and its technical requirements.
For a more detailed overview of SlapOS itself, its concept and system design,
please refer to the SlapOS
architecture design document.
The installation is generic, automated and independent of how the SlapOS
network will eventually be used. It requires at least two computers and
at most one day to install and set up.
This section describes the minimal setup required for a SlapOS network.
A SlapOS network consists of two layers.
The administrative layer consists of two nodes: one running the SlapOS Master
(COMP-ROOT) and a first network node (COMP-0), which provides connectivity
between all nodes in the network via the Re6st Registry and a Frontend (Apache).
All other nodes in a network form the user layer: simple nodes that, once Re6st and SlapOS are installed,
wait for the Master to instruct them which software to install and instantiate for users.
Note that during installation, the SlapOS Master node starts out as
a regular formatted node (with just the SlapOS Kernel). The COMP-ROOT node
is only created through a software called SlapProxy (a minimal SlapOS Master),
which is run during the installation of a SlapOS Master, and which installs
and instantiates the actual SlapOS Master/ERP5 Cloud Engine along with
a Frontend (Apache) for accessing it. This means
that SlapOS is used to deploy SlapOS recursively and that there are in fact
two networks - the one during installation, with SlapProxy acting as
Master and the first node being COMP-ROOT, and
the second, actual network of COMP-ROOT and COMP-0,1,2,3...
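This recursive bootstrap can be sketched as a command sequence. The software release URL and computer name below are placeholders, and the exact commands and flags depend on the SlapOS version in use - this is an illustrative sketch, not the authoritative installation procedure:

```shell
# On the future COMP-ROOT node, after installing the SlapOS node software:

# 1. Start SlapProxy, the minimal SlapOS Master used only during bootstrap
#    (it reads its configuration, e.g. from /etc/opt/slapos/slapos.cfg).
slapos proxy start &

# 2. Tell the proxy to install the ERP5-based SlapOS Master software release
#    on this computer (URL and computer id are placeholders).
slapos supply https://example.com/slapos-master/software.cfg local_computer

# 3. Request an instance of the SlapOS Master plus its Frontend (Apache).
slapos request slapos-master https://example.com/slapos-master/software.cfg

# 4. Run the build and instantiation loops until both succeed; once the
#    ERP5-based Master is up, SlapProxy is retired.
slapos node software
slapos node instance
```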
The administrative layer is described in the following section.
The administrative layer provides the essential
services required to deploy and manage any type of SlapOS cloud
(Edge, Home, Distributed, Datacenter-based), including an NMS (Network Management System).
It consists of a Master node (COMP-ROOT)
and a first slave node (COMP-0), which provides the services the Master needs to manage the network
and to communicate between its nodes. At the very minimum these services are a Frontend (Apache)
and a Re6st Registry. In the case of an NMS, a Simcard DB service is also required.
The Registry connects nodes within the network and handles communication between
all nodes over IPv6, while the Frontend provides "user-friendly" IPv4 addresses for accessing the network.
The SimcardDB maintains a register of issued simcards, which LTE stacks regularly query for updates.
The COMP-ROOT will host a Frontend (Apache) and the SlapOS Master (ERP5 Cloud Engine)
and requires at a minimum:
The installation is generic, fully automated, and takes about 20 minutes if
installed from cache, plus another 20-40 minutes for running the configurator. It will install SlapOS
Proxy and provide three access points on ports 80, 443 and 5443.
Note that this machine must have a public IPv4 address, or a private IPv4 address reachable
from COMP-0 (and users). The above server should be sufficient to run a network
of 80-160 actual computers (nodes), which then use computer partitions
to provide instances of software releases called "hosting subscriptions" to users.
Each hosting subscription (for example ERP5 or LTE) is in turn run through
several URL-based services - "several" because complex services tend to require multiple software components and
additional services to provide the final service. Running ERP5, for example,
requires a Frontend Apache process (for HTTP connections), a relational database
(MariaDB), an object database (NEO or ZEO), Memcached and Cloudooo.
Other software solutions might require even more. LTE can be deployed in
a Default Mode or as a cluster (eNB, ePC, IMS, MBMS), in which case it will occupy
four computer partitions, each service being requested individually.
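Requesting such a service from the Master could look like the following sketch. The software release URL, instance name and parameter name are placeholders, not the actual values used in a real deployment:

```shell
# Request an ERP5 hosting subscription; the Master resolves this single
# request into the full set of services (Apache frontend, MariaDB,
# NEO/ZEO, Memcached, Cloudooo) across the available partitions.
# URL, instance name and parameter below are placeholders.
slapos request my-erp5 https://example.com/erp5/software.cfg \
  --parameters site-id=erp5
```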
As mentioned above, initially there are two different types of Masters being
used: SlapProxy (minimalistic) and the eventual SlapOS Master (ERP5-based).
This is done in order to manage the deployment of a SlapOS Master in the
same way as managing any other kind of instance, a recursive approach by
which a minimalistic Master (SlapProxy) can deploy another ERP5-based Master.
For smaller use cases, like running only a single node, SlapProxy alone
could be sufficient (the Nexedi Webrunner
is an example of a SlapProxy (IDE) being used to deploy a software instance
on a single machine for a single user). However, in case of a larger network
of nodes and using SlapOS for cloud orchestration and computing, SlapProxy
is used at startup only and retired once the actual SlapOS Master is up and running.
Two software components will eventually run on the SlapOS Master: the ERP5 Cloud Engine itself (for
user management, deployment of services, usage accounting and capacity management) and a Frontend (Apache) for accessing the Dashboard.
If a valid
SSL wildcard (!) certificate and IPv6
are available, deployment will require about an hour. The Frontend (Apache) will communicate over
ports 80 and 443 and the registry on port 19201. The minimum requirements for this machine are:
COMP-0 contacts the COMP-ROOT over port 5443 and also requires a public IPv4 address, or a private IPv4 address reachable by
the SlapOS Master and users.
With the above specifications, about 8 computers, which equates to about 800
partitions (instantiated software services), can be provided through URLs (services in SlapOS are all provided over HTTPS).
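The capacity figure above follows from simple partition arithmetic. The ~100 partitions-per-node ratio is an assumption derived from the "8 computers, about 800 partitions" statement, not a fixed SlapOS limit:

```shell
# Rough capacity estimate behind one COMP-0.
# Assumption: ~100 partitions per node, from the 8-computers/800-partitions ratio.
PARTITIONS_PER_NODE=100
NODES=8
echo "$((NODES * PARTITIONS_PER_NODE)) partitions"   # 800 partitions
```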
A standard SlapOS network installation requires two software releases to be installed on COMP-0; for
an NMS, a third is mandatory.
The first is the Re6st Registry, a
service that maintains a register of nodes on the network and issues
new network access tokens. It is also required to ensure that IPv6 is available
throughout the network, as all internal data exchange and access to
partitions is done over IPv6.
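Joining a node to the registry could look like the following sketch. The registry URL and token are placeholders, and the exact `re6st-conf` option names may vary across re6st versions:

```shell
# On a new node: obtain an IPv6 address and join the Re6st network using a
# token previously issued by the Re6st Registry (URL and token are placeholders).
re6st-conf --registry http://registry.example.com/ --token 0123456789abcdef
```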
The second required software is a Frontend (Apache), used to make a node
accessible as well as to connect with other services such as
the monitor. Since all connections need to be secured, and any number of
nodes will result in a significant number of domains and URLs being issued,
a wildcard SSL certificate is necessary for the Frontend (Apache). The Frontend
is also required because users will access HTTP services running on remote
nodes over IPv6, with the Frontend handling some browser limitations
when accessing distributed services (e.g. accessing HTTPS with valid certificates,
access via IPv4, CORS, etc.).
The last required software on an NMS is the SimcardDB, a NEO database
from which users can request slaves (simcard ids). LTE stacks are associated with a (clustered) SimcardDB and
regularly query it for updates in order to propagate new simcard ids to the LTE stack.
The user layer is described in the following section. While a minimal setup does not require
any nodes beyond COMP-0, SlapOS is a solution for orchestrating large networks, so in a typical SlapOS network
there will always be a user layer providing software instances to users.
All nodes in the user layer are set up in the same way. After IPv6 connectivity is
established (via Re6st), SlapOS is installed, during which the node is also associated with a SlapOS Master.
Once a node is installed and formatted, the Master can request the installation of specific software releases
and then provide instances of these releases to users.
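A user-layer node setup could be sketched as follows. The node name and master URL are placeholders, and the exact flags depend on the SlapOS version:

```shell
# Register the node with the SlapOS Master and write its slapos.cfg
# (node name and master URL are placeholders).
slapos node register COMP-1 --master-url https://slap.example.com/

# Format the node: create users, partition directories and network
# interfaces for the computer partitions.
slapos node format --now

# From here on, the node polls the Master: build the requested software
# releases, then instantiate them into partitions for users.
slapos node software
slapos node instance
```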
Hardware requirements depend on the software instances being run on the nodes. For simple services,
a machine similar to the one used for COMP-0 will be sufficient, whereas for more complex software running multiple services (such as ERP5 or
LTE) the COMP-ROOT requirements are a better indicator of what will be necessary.
During installation a node can (and should) also be registered on the network
(using a token obtained from the Re6st Registry). Afterwards it is up to the Master
to decide which software will be installed and to whom it will be provided. For example, a network may consist of nodes spread
across different continents, with a node on each continent providing the same software. The Master could then decide to provide
instances of ERP5 from the node closest to the respective user.