robert-sanfeliu edited this page Feb 10, 2026 · 4 revisions

NebulOuS

NebulOuS is an innovative Meta Operating System designed to facilitate secure and efficient application provisioning and reconfiguration within the cloud computing continuum. The system brokers resources from a diverse cloud continuum, spanning from the far edge through intermediate high-capacity processing nodes to public and private cloud offerings. NebulOuS handles deployments ranging from container-based and VM-based to "raw" bare metal. In addition to "serverful" approaches, NebulOuS also embraces the serverless paradigm, empowering users to optimize function execution.

There are three primary actors that play crucial roles in NebulOuS:

  • Resource Provider: This entity offers computational resources to be brokered by NebulOuS. Examples include telecom providers offering compute resources from their edge locations and organizational units making private resources available for intra- or inter-organizational collaborative infrastructure sharing.
  • Organization Admin: Responsible for managing the resources (provided by the Resource Providers) to be brokered by NebulOuS. This involves registering these resources inside NebulOuS and defining organization-wide preferences regarding their usage (e.g., prefer AWS over Google Cloud).
  • Application Owner: Assumes the role of a cloud continuum consumer (typically a software developer or DevOps engineer) and uses NebulOuS to deploy workloads in the cloud-to-edge continuum. For each application to be deployed, the user provides its service graph and infrastructure requirements using the Open Application Model (OAM), along with the application's QoS requirements and the goals and constraints for deciding on its optimal deployment.
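
To make the Application Owner's input concrete, an OAM application description might look like the following sketch. It uses the standard KubeVela OAM `Application` schema; the application name, component, image, and resource values are hypothetical placeholders, and the NebulOuS-specific requirement and constraint fields are not shown here.

```yaml
# Illustrative OAM application sketch (names and values are placeholders).
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: image-detector            # hypothetical application name
spec:
  components:
    - name: detector-api          # hypothetical component of the service graph
      type: webservice
      properties:
        image: registry.example.com/detector:1.0   # placeholder image
        cpu: "2"                  # infrastructure requirement hints
        memory: 4Gi
      traits:
        - type: scaler
          properties:
            replicas: 2           # initial instance count; NebulOuS may adjust it
```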

The purpose of NebulOuS is to decide the optimal deployment of each application using the resources it brokers, perform that deployment, monitor the application's execution, and search for a new optimal deployment in the event of a QoS violation.

The process for deciding the optimal deployment of an application considers three inputs: i) the available resources brokered by NebulOuS, ii) the organization-wide preferences regarding the different computing providers expressed by the Organization Admin, and iii) the requirements and constraints imposed by the application. With this information, NebulOuS decides on the number of computing nodes necessary for deploying the application, their hardware specifications (CPU/RAM), the location of each node, and the node where each application component is to be deployed. Moreover, for applications that allow scaling in/out, NebulOuS also decides on the number of instances of each application component.

Once an optimized deployment is determined, NebulOuS executes the steps necessary to materialize it: it automatically instantiates the required computing nodes in the cloud, creates a secure, isolated virtual network that interconnects all computing nodes regardless of their location (cloud/fog/edge), and deploys each of the application components. After the application is deployed, NebulOuS continually monitors the Quality of Service (QoS) required by its components (as specified by their SLAs) and takes the actions necessary to remediate QoS violations by re-executing the optimization-deployment cycle. QoS monitoring relies on two sources of metrics: infrastructure metrics and application metrics.
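
The placement decision described above can be sketched in miniature. The snippet below is a deliberately naive greedy assignment of components to candidate nodes, honoring CPU/RAM requirements and the Organization Admin's provider ranking; all names are illustrative, and the actual NebulOuS optimizer solves a far richer problem.

```python
# Toy sketch of a placement decision: assign each component to the
# best-ranked node that satisfies its CPU/RAM requirements.
# All names, values, and the greedy strategy are illustrative only.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    provider: str
    cpu: int   # available vCPUs
    ram: int   # available RAM in GiB

@dataclass
class Component:
    name: str
    cpu: int   # required vCPUs
    ram: int   # required RAM in GiB

def place(components, nodes, provider_rank):
    """Greedily map each component to a fitting node, preferring providers
    with a lower rank number (as set by the Organization Admin)."""
    placement = {}
    for comp in components:
        candidates = [n for n in nodes if n.cpu >= comp.cpu and n.ram >= comp.ram]
        if not candidates:
            raise RuntimeError(f"no node fits component {comp.name}")
        best = min(candidates, key=lambda n: provider_rank.get(n.provider, 99))
        best.cpu -= comp.cpu   # reserve the resources on the chosen node
        best.ram -= comp.ram
        placement[comp.name] = best.name
    return placement

nodes = [Node("edge-1", "telco-edge", 4, 8), Node("vm-1", "aws", 8, 16)]
components = [Component("api", 2, 4), Component("worker", 4, 8)]
rank = {"aws": 1, "telco-edge": 2}   # the admin prefers AWS
result = place(components, nodes, rank)
print(result)
```

A real optimizer would also weigh node locations, scaling decisions, and application constraints jointly instead of placing components one at a time.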

Infrastructure metrics include network latency, CPU usage, RAM usage, and similar measurements. These metrics are collected automatically by NebulOuS.

Application-specific metrics are tightly coupled with the application's goal. For instance, for an application component that exposes a REST interface, these might include the time to process a request, the number of requests per second, and so on. These metrics must be provided by the application itself (implemented in its code). However, NebulOuS offers the application developer documentation and a toolkit that ease the collection of this kind of metrics.
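
As a rough illustration of what reporting such a metric could involve, the snippet below builds a (topic, JSON payload) pair for a single metric sample. The topic naming scheme and payload shape shown here are assumptions made for this sketch, not the actual NebulOuS message format; an application would then publish the payload to the platform's broker.

```python
# Sketch of building a custom application-metric message.
# The topic scheme and payload fields are hypothetical, for illustration only.
import json
import time

def metric_message(application: str, metric: str, value: float, ts=None):
    """Build a (topic, payload) pair for one metric sample."""
    topic = f"monitoring.{application}.{metric}"  # hypothetical topic scheme
    payload = json.dumps({
        "application": application,
        "metric": metric,
        "value": value,
        "timestamp": ts if ts is not None else time.time(),
    })
    return topic, payload

topic, payload = metric_message("image-detector", "request_latency_ms",
                                42.5, ts=1700000000.0)
print(topic)
print(payload)
```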

Besides service-oriented applications, NebulOuS offers specialized mechanisms for handling IoT data-processing pipelines, workflows, and serverless functions.

Architecture overview

[Figure: NebulOuS architecture overview]

NebulOuS operates through two connected environments: a Control Plane Cluster, which provides global management and orchestration, and the Application Clusters, created on demand to host user applications across cloud, fog, and edge resources. Both environments rely on Kubernetes technologies and cooperate through the NebulOuS messaging middleware.

The NebulOuS control plane consists of all platform-wide coordination components common to all applications, which have been fully containerized and deployed on a dedicated Kubernetes cluster. This environment hosts the services that govern the brokerage of resources, reconfiguration processes, SLA handling, deployment orchestration, and user interaction. Control-plane components and NebulOuS components deployed within the application clusters communicate asynchronously via the NebulOuS EXN Middleware, implemented using an ActiveMQ broker exposed over AMQP (Advanced Message Queuing Protocol). Topics and message flows follow NebulOuS-defined conventions that allow loosely coupled interactions between orchestrators, brokers, monitoring services, and optimization modules.

The control plane is responsible for providing the UI and API endpoints for resource registration, policy definition, and application specification. It orchestrates the resource brokerage and ranking mechanisms while initiating and overseeing the creation of application clusters. In addition, it generates SLAs and their associated smart contracts, coordinates application adaptation through the Optimizer Controller, and supervises the cluster lifecycle, including handling cloud-edge resource onboarding.

Once the control plane is deployed, NebulOuS enables users to deploy and execute containerized applications across the cloud-edge continuum. Eligible resources, whether cloud VMs, service-provider edge nodes, or user-edge devices, are onboarded and provisioned by the platform and can then be used to form application clusters.
These clusters are Kubernetes-based environments that can be deployed using either standard Kubernetes distributions or the lightweight K3s distribution. They are automatically instantiated by the Execution Adapter and managed throughout their lifecycle by the Deployment Manager. An application cluster may consist of cloud-only nodes, edge-only nodes, or mixed topologies combining both. The creation of distributed Kubernetes/K3s clusters that federate heterogeneous resources across different administrative domains remains a key advancement of NebulOuS.

All application cluster nodes are interconnected through an encrypted overlay network established by the Overlay Network Manager, ensuring secure and seamless communication. During installation, NebulOuS deploys a CRI (Container Runtime Interface)-compatible container runtime together with Cilium, leveraging its WireGuard-based tunnels for intra-cluster encryption, with Kube-flannel offered as an alternative CNI (Container Network Interface) option. This is followed by the installation of Kubernetes/K3s itself, the Event Management System agents, and the Overlay Networking agents. Finally, the system installs KubeVela and Policy Controllers, along with the newly relocated components: AI-driven Anomaly Detection, Security & Privacy Manager, Smart Contract Encapsulator runtime logic, Forecasters, Prediction Orchestrator, Metric Persistor, and the Timeseries Database.
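
For reference, the WireGuard-based intra-cluster encryption mentioned above corresponds to Cilium's upstream transparent-encryption feature, which is enabled through Helm values such as the following. This is the standard Cilium configuration; the exact values applied by the NebulOuS installer may differ.

```yaml
# Upstream Cilium Helm values enabling WireGuard transparent encryption.
encryption:
  enabled: true
  type: wireguard
```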
