Saturday, August 25, 2018

NB-IoT

Throughout parts one and two I discussed the concepts of IoT and (I)IoT, (big) data analytics, data placement and the triggering of workflows, and took a more detailed look at LPWAN and LTE (cellular) type networks. I also included a cheat sheet highlighting six of the most common (and upcoming) (I)IoT networks, including their main characteristics and features. Today I’d like to focus on the individual networks mentioned, such as Sigfox, LoRa and NB-IoT, and talk a bit more about their background and future potential.
Since all the network types covered below are part of the earlier published IoT networking technology cheat sheet, I’ve included it in this post as well.
Sigfox
In 2009 Sigfox started to build the first modern LPWAN network (France), which quickly took off, especially within Europe, and it continues to grow by the day. I recently spoke to some of the Dutch Sigfox representatives and their story was quite impressive, as were their numbers.
And while today’s LPWAN networks have a lot in common with their predecessors, one of the biggest differences is today’s online integration, which makes (near) real-time monitoring possible, with all the benefits that come with it. That is also the main reason these types of networks have become so popular in such a short time frame, and the same goes for some of the other networks as well.
Sigfox is best used for (extremely) low bandwidth applications, especially where energy consumption, or the lack of it, plays a critical role. As mentioned in the cheat sheet as well, it is a proprietary standard, which has its pros and cons. Like LoRa, it is a completely separate network designed exclusively for IoT purposes. It operates at 868 MHz in Europe and at 900 MHz throughout the USA.
Europe (including the UK) has by far the broadest coverage to date, and as mentioned is still expanding on a daily basis. In the USA Sigfox is still heavily under construction as you can clearly see on their website as well. Parts of Chicago, Miami, San Diego and San Francisco do already have solid coverage – more to come, I’m sure. As far as some of the more technical details go, have a look at the earlier mentioned cheat sheet below or here.
LoRa
LoRa stands for Long Range and is one of the better-known LPWANs, or Low Power Wide Area Networks. Data throughput rates range from 0.3 to 50 kbit/s, as described in the LoRa(WAN) R1.0 open standard for the IoT, which broadens its range of use-cases beyond the extreme low power, low battery consumption niche of Sigfox, for example. When it comes to low power consumption, LoRa uses something called an Adaptive Data Rate (ADR) algorithm to optimize and control battery life and network capacity. In short: the LoRaWAN network server manages the data rate for each connected sensor via an algorithm. A unique approach.
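To make the ADR idea concrete, here is a rough, hypothetical sketch of the kind of decision a LoRaWAN network server makes: compare the best recent uplink SNR against what the current data rate needs, then raise the data rate (and afterwards lower TX power) while margin remains. The SNR thresholds, the 10 dB installation margin and the 3 dB step size are illustrative assumptions, not values from the specification.

```python
# Required SNR (dB) per data rate index DR0..DR5 (illustrative EU868-style
# figures; treat these as assumed constants, not normative values).
REQUIRED_SNR = {0: -20.0, 1: -17.5, 2: -15.0, 3: -12.5, 4: -10.0, 5: -7.5}
MARGIN_DB = 10.0  # assumed installation margin (tuning knob)

def adr_adjust(current_dr, current_tx_dbm, recent_snrs,
               max_dr=5, min_tx_dbm=2, tx_step_db=3):
    """Return (new_dr, new_tx_dbm) for one ADR decision."""
    snr_max = max(recent_snrs)                      # best of the last N uplinks
    margin = snr_max - REQUIRED_SNR[current_dr] - MARGIN_DB
    steps = int(margin // 3)                        # assumed 3 dB per step
    dr, tx = current_dr, current_tx_dbm
    while steps > 0 and dr < max_dr:                # first raise the data rate
        dr += 1
        steps -= 1
    while steps > 0 and tx - tx_step_db >= min_tx_dbm:  # then lower TX power
        tx -= tx_step_db
        steps -= 1
    return dr, tx
```

For a device at DR0 with strong recent uplinks, e.g. `adr_adjust(0, 14, [-5.0, -8.0, -6.5])`, the server would step it up to a faster data rate, saving airtime and battery on every subsequent transmission.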
When there is talk of LoRa, LoRaWAN is almost always mentioned in the same sentence, which does tend to cause some confusion from time to time. LoRa is a technology developed by the chip manufacturer Semtech, which is also why it’s not considered an open standard, but proprietary, like Sigfox. LoRa refers to the wireless modulation allowing low power communication – think of it as the physical layer. LoRaWAN, on the other hand, refers to the networking protocol that makes use of the above-mentioned LoRa (Semtech) physical components, enabling medium to long-range, low power communication – I hope that makes sense.
Back in June 2016 The Netherlands became the first country to have a nationwide LoRa network for IoT purposes, rolled out by KPN, by the way. Nationwide coverage (or close to it) of LoRa can also be found in Belgium, France and big parts of Italy, with Germany soon to follow. Other parts of the world where LoRa is being deployed include, but are not limited to: the USA, New Zealand, Australia, Japan, India (Tata Communications) and more. One advantage that LoRa (again, the physical part) has over other LPWAN offerings is the option to also deploy private networks, which is one of the main reasons it is so widespread.
NB-IoT
NB-IoT is another 3GPP Rel. 13 proposal, like LTE Cat 0 and 1, though it doesn’t have to operate inside the LTE spectrum itself. Its main characteristics are highlighted in the cheat sheet as well. It’s still fairly new; in fact, the requirements of the 3GPP Rel. 13 standard for NB-IoT were only finalized in 2016. Note that there is also a cellular, LTE-based LPWAN option named LTE-M, likewise based on the 3GPP standard, which will be discussed later.
While NB-IoT doesn’t use the LTE air interface directly, it can make use of existing LTE base stations/networks. Resource blocks can be allocated for NB-IoT purposes, either ‘in-band’ or in the so-called LTE ‘guard-bands’. The question is whether current LTE carriers will do/allow this, since reducing the number of available LTE resource blocks automatically means less capacity, and thus less income, on the LTE side. It will come down to which option is most profitable and future-proof, though that can be a tough question to answer.
For all this to work (the reuse of LTE resource blocks), different software and a couple of additional modifications will be needed, and in the case of Alcatel-based infrastructures new hardware will need to be installed as well, meaning an upfront investment that could turn out to be costly for the carriers involved. As a second option, unused 200 kHz bands previously used for GSM networks could also be leveraged/reused for NB-IoT purposes. So, depending on which type of network (GSM or LTE) has a bigger presence in a certain area, country or even continent, one option will probably be preferred over the other.
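To put a number on the in-band capacity trade-off discussed above: an NB-IoT carrier occupies 180 kHz, which is exactly the width of one LTE physical resource block (PRB). A quick back-of-the-envelope calculation, using the standard PRB counts per LTE channel bandwidth, shows how little LTE capacity a single in-band carrier actually costs:

```python
# One in-band NB-IoT carrier is 180 kHz wide = one LTE PRB, so each
# in-band carrier removes exactly one PRB from the LTE capacity pool.
PRBS_PER_BANDWIDTH_MHZ = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

def lte_capacity_lost(bandwidth_mhz, nbiot_carriers=1):
    """Fraction of LTE PRBs given up to in-band NB-IoT carriers."""
    prbs = PRBS_PER_BANDWIDTH_MHZ[bandwidth_mhz]
    return nbiot_carriers / prbs

# For a common 10 MHz LTE carrier, one in-band NB-IoT carrier costs
# 1/50 = 2% of the LTE resource blocks.
loss = lte_capacity_lost(10)
```

So the raw capacity sacrifice is small (2% on a 10 MHz carrier); the carriers’ hesitation is more about the software/hardware investment than about spectrum.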
Within an NB-IoT network all data is sent directly to the main server(s), so no gateways are needed, unlike with most of the other (narrowband) IoT networks out there. No gateways means fewer infrastructure components, which equals lower upfront investments. Next to that, ‘NB-IoT only’ chips will be simpler and cheaper to manufacture than LTE chips – the overall component costs are lower.
Even though there are still a lot of uncertainties, pros and cons to the various network types available today – and still in development – quite a few (big) companies are interested and investing in NB-IoT type networks, including Huawei, Ericsson, Qualcomm and Vodafone.
LTE-M
Next to NB-IoT (as previously discussed), the 3rd Generation Partnership Project (3GPP) also introduced LTE-M, which is short for LTE-MTC (Machine-Type Communication), also known and referred to as Cat-M. It is another narrowband, lower-power network option that can co-exist with other LTE services. Together with NB-IoT it expands the LTE technology portfolio to support an even wider range of low-power IoT use cases. An LTE-M network should also be able to offer up to 10 years of battery life on a 5 Wh battery.
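The 10-years-on-5-Wh claim translates into a concrete average power budget, which is easy to work out (ignoring battery self-discharge and ageing):

```python
# What "10 years on a 5 Wh battery" implies: the average power draw
# the device must stay under over its whole lifetime.
def average_power_budget_uw(battery_wh, years):
    hours = years * 365.25 * 24          # ~8766 hours per year
    return battery_wh / hours * 1e6      # watts -> microwatts

budget = average_power_budget_uw(5, 10)  # ~57 microwatts on average
```

Roughly 57 µW on average, which illustrates why such devices must spend almost all of their time in deep sleep between short transmissions.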
One of the main advantages of LTE-M over NB-IoT is that it is compatible with existing LTE networks. The only thing LTE carriers will have to take care of is a relatively simple software upgrade – at least on paper. Besides that, LTE-M is also seen as more secure and can handle higher data rates as well. Because existing LTE/4G networks are used and the maximum data rate of an average LTE-M device is ‘only’ around 100 kbit/s, the network won’t be heavily utilized. Also, carriers can offer pricing closer to 2G service plans instead of 4G. As a result, LTE-M is often seen as superior to NB-IoT.
Both LTE-M and NB-IoT are part of 3GPP Release 13. Release 14 is supposed to bring new capabilities, such as single-cell multicast, to both eMTC and NB-IoT, enabling easy over-the-air firmware upgrades as well as enhanced device positioning for asset location tracking.
LTE-M and NB-IoT are both being rolled out as we speak and offer (very) limited availability today. If you are looking for a low-power network for instant use, have a look at LoRa and/or Sigfox as discussed earlier, depending on your geographical location. While both are still under heavy development as well, they do already cover large parts of Europe and smaller parts of the US and Asia, to name just a few regions.
Cat 1 and Cat 0
LTE Cat 1 expands on the 4G portfolio (though without offering the same features and/or benefits – read on) and is currently the only full LTE-based IoT standard available today, taking full advantage of the broad range and wide coverage LTE networks have to offer – fully standardized as part of 3GPP Release 8. While it doesn’t offer the same performance as 3G networks (which is also why it was never seen as a relevant broadband service in Europe), it is an excellent fit for low bandwidth and browser-based IoT applications. As 3G technology is slowly being deprecated (also referred to as ‘3G sunsets’) world-wide, it is expected that Cat 1 will take its place. Think of Cat 1 as an early, potentially attractive alternative for IoT applications over LTE.
LTE Cat 0 is part of 3GPP Release 12 and is considered the foundation for the earlier mentioned LTE-M standard. Unlike Cat 1, Cat 0 has been designed with IoT in mind; as such, it offers lower data rates for both upload and download, aimed at reducing power consumption, which at the same time makes it cheaper. Complexity, compared to Cat 1, has been reduced by over 50% by including only one receiver antenna and support for half-duplex operation. Where Cat 1 might take the place of 3G once it ‘sunsets’, LTE-M (with some help from Cat 0) will probably do the same for 2G in the (near) future.
Cheat sheet
Since the earlier published IoT networking technology cheat sheet contains a lot of additional (more technical) information, I’ve included it here as well. Read the accompanying blogpost for some more detailed information.
Future notes
Which technology or network type will prevail in the future is (very) hard to predict. In fact, there’s no real reason why they should be mutually exclusive; they don’t have to be. The fact that LTE networks have such broad reach globally, and that they can be used to provide NB-IoT and LTE-M networks with relative ease, could pose a threat to LPWAN networks – especially when companies like Verizon and AT&T are the ones pushing the technology. Though the same can be said for LoRa as well: companies such as IBM and Cisco are showing immense interest, as are CSPs like Swisscom and KPN.
On the other hand, with the LTE/cellular companies focusing on the high-end market, so to speak, and the LPWAN providers focusing on the lower to mid-market range, mainly in the form of sensor-based data transport, there could be room for both. It also depends on which part of the world, or in some cases which country, you reside in. If we look at Europe, the LPWAN networks (LoRa in particular) are widely deployed already, while standards like NB-IoT and LTE-M and other closely related variants have yet to be implemented. In the US and other parts of the world, it’s the direct opposite.
Other factors might become more prominent going forward as well: the rise of 5G (standards have yet to be defined as of 2018), which is supposed to be operational for commercial use in 2020, or at least parts of it; the types of IoT devices and applications used, in terms of bandwidth and throughput; but also capabilities around signal penetration and security. In the end, it will all depend on the use-case at hand. Since different network technologies have different characteristics, the range of (I)IoT is too broad to fit ‘just’ one or two networks. Perhaps it’s not fair to say that every IoT application has its own unique requirements, but in some cases a statement like that is not that far off.
All in all, lots of pros, cons and other considerations. If you haven’t done so already, make sure to check out parts one and two of this series as well to get the complete picture.

Tuesday, March 20, 2018

OSM

OSM is delivering an open source Management and Orchestration (MANO) stack aligned with ETSI NFV Information Models. As an operator-led community, OSM is offering a production-quality open source MANO stack that meets the requirements of commercial NFV networks.

Ref: ETSI

Saturday, March 10, 2018

Network Slicing in 5G

5G is no longer something that is coming down the line, it is here and it will impact consumers and enterprises alike. In simple terms, 5G will be up to 100x faster than current 4G and 10x faster than the broadband connectivity that we are used to. With this speed the promise of other technology trends like IoT, AR, VR, Edge Computing and more really become possible.

One of the distinct key features of the 5G system architecture is network slicing. 4G supported certain aspects of this with the functionality for dedicated core networks. Compared to this, 5G network slicing is a more powerful concept that includes the whole PLMN (Public Land Mobile Network). Within the scope of the 3GPP 5G system architecture, a network slice refers to the set of 3GPP-defined features and functionalities that together form a complete PLMN for providing services to UEs. Network slicing allows for the controlled composition of a PLMN from the specified network functions, with their specifics and provided services, that are required for a specific usage scenario.
Earlier system architectures enabled what was typically a single deployment of a PLMN to provide all features, capabilities and services required for all wanted usage scenarios. Many of the capabilities and features provided by the single, common deployment were in fact required for only a subset of the PLMN’s users/UEs. Network slicing enables the network operator to deploy multiple, independent PLMNs, each customized by instantiating only the features, capabilities and services required to satisfy the subset of served users/UEs or the needs of a related business customer.
The very abstract representation below shows an example of a PLMN deploying four network slices, each including all that is necessary to form a complete PLMN. The two network slices for smartphones demonstrate that an operator may deploy multiple network slices with exactly the same system features, capabilities and services, but dedicated to different business segments, each possibly providing different capacity in terms of the number of UEs and data traffic. The other slices show that network slices can also be differentiated by the system features, capabilities and services they provide. The M2M network slice could, for example, offer UE battery power saving features unsuitable for the smartphone slices, as those features imply latencies not acceptable for typical smartphone usage.
The service-based architecture, together with softwarization and virtualization, provides the agility that enables an operator to respond to customer needs much more quickly. Dedicated and customized network slices can be deployed with the functions, features, availability and capacity as needed; typically, such deployments will be based on a service level agreement. Further, an operator may benefit from applying virtualization, platforms and management infrastructure commonly to both 3GPP-specific capabilities and other network capabilities not defined by 3GPP, but which a network operator may need or want to deploy in its network or administrative domain. This allows for a flexible assignment of the same resources as needs and priorities change over time.
Deployments of both the smaller scope of the 3GPP-defined functionality and the larger scope of all that is deployed within an operator’s administrative domain are commonly termed a “network”. Because of this ambiguity, and since the term “slicing” is used in industry and academia for slicing of virtually any kind of (network) resources, it is important to emphasize that the 3GPP system architecture specifications define network slicing only within the scope of 3GPP-specified resources, i.e. what specifically composes a PLMN. This doesn’t hinder a PLMN network slice deployment from using, for example, sliced transport network resources; note, however, that the latter is fully independent of the scope of the 3GPP system architecture description. Pursuing the example further, PLMN slices can be deployed with as well as without sliced transport network resources.
The above figure presents more specifics of 3GPP network slicing. In that figure, network slice #3 is a straightforward deployment where all network functions serve a single network slice only. The figure also shows how a UE can receive service from multiple network slices, #1 and #2. In such deployments, there are network functions common to a set of slices, including the Access and Mobility Management Function (AMF) and the related Policy Control Function (PCF) and NF Repository Function (NRF). This is because there is a single access control and mobility management instance per UE that is responsible for all services of that UE. The user plane services, specifically the data services, can be obtained via multiple, separate network slices. In the figure, slice #1 provides the UE with data services for Data Network #1, and slice #2 for Data Network #2. Those slices and their data services are independent of each other, apart from the interaction with the common access and mobility control that applies to all services of the user/UE. This makes it possible to tailor each slice for, e.g., different QoS data services or different application functions, all determined by means of the policy control framework.
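As a toy illustration of the figure’s main point – one common mobility-management anchor per UE, with separate user-plane sessions per slice – the relationship can be modelled in a few lines. The class shapes and all names here are purely illustrative, not 3GPP interfaces:

```python
# Toy model: a UE has a single mobility-management context (the common
# AMF), but independent data sessions via multiple network slices, each
# tied to its own data network. Illustration only, not a 3GPP API.
class Slice:
    def __init__(self, name, data_network):
        self.name, self.data_network = name, data_network

class AMF:                        # single access/mobility anchor per UE
    def __init__(self):
        self.sessions = {}        # ue_id -> {slice name: data network}

    def establish_session(self, ue_id, nw_slice):
        self.sessions.setdefault(ue_id, {})[nw_slice.name] = nw_slice.data_network

amf = AMF()
amf.establish_session("ue-1", Slice("slice-1", "DN-1"))
amf.establish_session("ue-1", Slice("slice-2", "DN-2"))
# ue-1 now has two independent data services, via slice #1 and slice #2,
# under one common mobility-management context.
```

The single `AMF` instance shared by both sessions mirrors the common access and mobility control in the figure, while each slice carries its own, independent data service.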
The above discussion has highlighted one of the advancements of the 3GPP system architecture introduced with Phase 1 of 5G. Studies concerning Phase 2 of 5G will begin in the first quarter of 2018.

References & specifications

  1. TS 23.501 – System Architecture for the 5G System; Stage 2
  2. TS 23.502 – Procedures for the 5G System; Stage 2
  3. TS 23.503 – Policy and Charging Control Framework for the 5G System; Stage 2

Monday, December 25, 2017

Storage concepts

Storage is found in many parts of the OpenStack cloud environment. It is important to understand the distinction between ephemeral storage and persistent storage:
  • Ephemeral storage - If you only deploy OpenStack Compute service (nova), by default your users do not have access to any form of persistent storage. The disks associated with VMs are ephemeral, meaning that from the user’s point of view they disappear when a virtual machine is terminated.
  • Persistent storage - Persistent storage means that the storage resource outlives any other resource and is always available, regardless of the state of a running instance.
OpenStack clouds explicitly support three types of persistent storage: Object Storage, Block Storage, and File-based storage.

Object storage

Object storage is implemented in OpenStack by the Object Storage service (swift). Users access binary objects through a REST API. If your intended users need to archive or manage large datasets, you should provide them with Object Storage service. Additional benefits include:
  • OpenStack can store your virtual machine (VM) images inside of an Object Storage system, as an alternative to storing the images on a file system.
  • It integrates with OpenStack Identity and works with the OpenStack Dashboard.
  • Better support for distributed deployments across multiple datacenters through support for asynchronous eventual consistency replication.
You should consider using the OpenStack Object Storage service if you eventually plan on distributing your storage cluster across multiple data centers, if you need unified accounts for your users for both compute and object storage, or if you want to control your object storage with the OpenStack Dashboard. For more information, see the Swift project page.

Block storage

Block storage is implemented in OpenStack by the Block Storage service (cinder). Because these volumes are persistent, they can be detached from one instance and re-attached to another instance and the data remains intact.
The Block Storage service supports multiple back ends in the form of drivers. Your choice of a storage back end must be supported by a block storage driver.
Most block storage drivers allow the instance to have direct access to the underlying storage hardware’s block device. This helps increase the overall read/write IO. However, support for utilizing files as volumes is also well established, with full support for NFS, GlusterFS and others.
These drivers work a little differently than a traditional block storage driver. On an NFS or GlusterFS file system, a single file is created and then mapped as a virtual volume into the instance. This mapping and translation is similar to how OpenStack utilizes QEMU’s file-based virtual machines stored in /var/lib/nova/instances.

Differences between storage types

Table. OpenStack storage explains the differences between OpenStack storage types.

Commodity storage technologies

There are various commodity storage back end technologies available. Depending on your cloud user’s needs, you can implement one or many of these technologies in different combinations.

Ceph

Ceph is a scalable storage solution that replicates data across commodity storage nodes.
Ceph utilises an object storage mechanism for data storage and exposes the data via different types of storage interfaces to the end user. It supports interfaces for:
  • Object storage
  • Block storage
  • File-system access
Ceph provides support for the same Object Storage API as swift and can be used as a back end for the Block Storage service (cinder) as well as back-end storage for glance images.
Ceph supports thin provisioning implemented using copy-on-write. This can be useful when booting from volume because a new volume can be provisioned very quickly. Ceph also supports keystone-based authentication (as of version 0.56), so it can be a seamless swap in for the default OpenStack swift implementation.
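The thin provisioning mentioned above is what makes boot-from-volume fast: a clone initially just references its parent image and copies a block only when that block is first written. A minimal sketch of the copy-on-write idea (an illustration of the mechanism, not the Ceph/RBD API):

```python
# Copy-on-write volume sketch: reads fall through to the parent until a
# block has been written locally; cloning therefore copies no data.
class CowVolume:
    def __init__(self, parent=None):
        self.parent = parent
        self.blocks = {}                 # only blocks written to THIS volume

    def read(self, idx):
        if idx in self.blocks:
            return self.blocks[idx]
        return self.parent.read(idx) if self.parent else b"\x00"

    def write(self, idx, data):
        self.blocks[idx] = data          # copy-on-write: touch only this copy

base = CowVolume()
base.write(0, b"boot-image")
clone = CowVolume(parent=base)           # "instant" clone: nothing copied
clone.write(1, b"instance-data")         # diverges only where written
```

Creating `clone` costs nothing up front, which is why a new bootable volume can be provisioned almost instantly from a golden image.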
Ceph’s advantages include:
  • The administrator has more fine-grained control over data distribution and replication strategies.
  • Consolidation of object storage and block storage.
  • Fast provisioning of boot-from-volume instances using thin provisioning.
  • Support for the distributed file-system interface CephFS.
You should consider Ceph if you want to manage your object and block storage within a single system, or if you want to support fast boot-from-volume.

Gluster

A distributed shared file system. As of Gluster version 3.3, you can use Gluster to consolidate your object storage and file storage into one unified file and object storage solution, which is called Gluster For OpenStack (GFO). GFO uses a customized version of swift that enables Gluster to be used as the back-end storage.
The main reason to use GFO rather than swift is if you also want to support a distributed file system, either to support shared storage live migration or to provide it as a separate service to your end users. If you want to manage your object and file storage within a single system, you should consider GFO.

LVM

The Logical Volume Manager (LVM) is a Linux-based system that provides an abstraction layer on top of physical disks to expose logical volumes to the operating system. The LVM back-end implements block storage as LVM logical partitions.
On each host that will house block storage, an administrator must initially create a volume group dedicated to Block Storage volumes. Blocks are created from LVM logical volumes.
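As a rough model of that setup: a volume group simply pools capacity out of which logical volumes are carved. Real deployments drive this with `vgcreate`/`lvcreate`; the Python below only illustrates the bookkeeping the back end relies on:

```python
# Toy model of an LVM volume group dedicated to Block Storage: logical
# volumes are allocated out of the group's pooled capacity.
class VolumeGroup:
    def __init__(self, size_gb):
        self.free_gb = size_gb
        self.volumes = {}                # volume name -> size in GB

    def create_lv(self, name, size_gb):
        if size_gb > self.free_gb:
            raise ValueError("insufficient free space in volume group")
        self.free_gb -= size_gb
        self.volumes[name] = size_gb

vg = VolumeGroup(100)                    # VG dedicated to Block Storage
vg.create_lv("cinder-vol-1", 40)
vg.create_lv("cinder-vol-2", 20)         # 40 GB remains free
```

The volume names are hypothetical; the point is simply that every Block Storage volume on the host consumes a slice of the one pre-created volume group.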

iSCSI

Internet Small Computer Systems Interface (iSCSI) is a network protocol that operates on top of the Transmission Control Protocol (TCP) for linking data storage devices. It transports data between an iSCSI initiator on a server and an iSCSI target on a storage device.
iSCSI is suitable for cloud environments that use the Block Storage service to support applications or file sharing systems. Network connectivity can be achieved at a lower cost compared to other storage back-end technologies, since iSCSI does not require host bus adaptors (HBAs) or storage-specific network devices.

NFS

Network File System (NFS) is a file system protocol that allows a user or administrator to mount a file system on a server. File clients can access mounted file systems through Remote Procedure Calls (RPC).
The benefits of NFS are its low implementation cost, due to shared NICs and traditional network components, and a simpler configuration and setup process.
For more information on configuring Block Storage to use NFS storage, see Configure an NFS storage back end in the OpenStack Administrator Guide.

Sheepdog

Sheepdog is a userspace distributed storage system. Sheepdog scales to several hundred nodes, and has powerful virtual disk management features like snapshot, cloning, rollback and thin provisioning.
It is essentially an object storage system that manages disks and aggregates the space and performance of disks linearly at hyper scale on commodity hardware in a smart way. On top of its object store, Sheepdog provides an elastic volume service and an HTTP service. Sheepdog does not require a specific kernel version and can work nicely with xattr-supported file systems.

ZFS

The Solaris iSCSI driver for OpenStack Block Storage implements blocks as ZFS entities. ZFS is a file system that also has the functionality of a volume manager. This is unlike on a Linux system, where there is a separation of volume manager (LVM) and file system (such as ext3, ext4, XFS, and Btrfs). ZFS has a number of advantages over ext4, including improved data-integrity checking.
The ZFS back end for OpenStack Block Storage supports only Solaris-based systems, such as Illumos. While there is a Linux port of ZFS, it is not included in any of the standard Linux distributions, and it has not been tested with OpenStack Block Storage. As with LVM, ZFS does not provide replication across hosts on its own; you need to add a replication solution on top of ZFS if your cloud needs to be able to handle storage-node failures.

Ref: https://docs.openstack.org/arch-design/

Sunday, December 17, 2017

Hyper-Threading

Hyper-threading was Intel’s first attempt to bring parallel computation to consumer PCs. It debuted on desktop CPUs with the Pentium 4 HT back in 2002. The Pentium 4s of the day featured just a single CPU core, so they could really only perform one task at a time, even if they were able to switch between tasks quickly enough that it seemed like multitasking. Hyper-threading attempted to make up for that.

A single physical CPU core with hyper-threading appears as two logical CPUs to an operating system. The CPU is still a single CPU, so it’s a little bit of a cheat: while the operating system sees two CPUs for each core, the actual CPU hardware has only a single set of execution resources per core. In other words, the CPU pretends it has more cores than it does, and uses its own logic to speed up program execution.
Hyper-threading allows the two logical CPU cores to share physical execution resources. This can speed things up somewhat—if one virtual CPU is stalled and waiting, the other virtual CPU can borrow its execution resources. Hyper-threading can help speed your system up, but it’s nowhere near as good as having actual additional cores.
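That “borrowing while stalled” behaviour can be made concrete with a toy scheduler: two instruction streams share one execution unit, and a stall in one stream overlaps with useful work in the other. This is a simplification for intuition only, not how real SMT hardware works:

```python
# Toy SMT model: each stream is a list of "work" or "stall" slots.
# Per cycle, stalls in both streams progress for free, but only ONE
# "work" slot can be issued, because there is a single execution unit.
def cycles_single(stream):
    return len(stream)                   # alone, every stall cycle is wasted

def cycles_smt(s1, s2):
    a, b = list(s1), list(s2)
    cycles = 0
    while a or b:
        cycles += 1
        issued = False
        for s in (a, b):
            if not s:
                continue
            if s[0] == "stall":
                s.pop(0)                 # a stall just waits out the cycle
            elif not issued:
                s.pop(0)                 # one work slot uses the unit
                issued = True            # the other stream must wait
    return cycles

stally = ["work", "stall", "work"]
shared = cycles_smt(stally, stally)      # 4 cycles vs 6 run back-to-back
no_gain = cycles_smt(["work"] * 3, ["work"] * 3)  # 6 cycles: no benefit
```

Two stall-heavy streams finish in 4 cycles instead of the 6 they would take back-to-back, while two all-work streams see no benefit at all, which matches the point above: hyper-threading helps when one logical CPU is waiting, but is nowhere near two real cores.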

Saturday, December 2, 2017

NFV (Cloud Native)

This is basically the separation of applications from data, using stateless machines to process services. In this phase, all compute resources are pooled for higher reliability. The cloud session load balancer (CSLB) and cloud session database (CSDB) jointly work to evenly distribute services and allow the distribution, processing, and data layers to scale separately. Additionally, the software architecture and services are reconstructed so that automatic deployment, O&M, scaling, and gray upgrades can be performed for each individual service.

The distribution, processing, and data layers are divided to separate applications from data. The microservice architecture is introduced to hasten service delivery and enhance system security.

Distribution layer: The CSLB allows for services and interfaces to have their own independent IP addresses so that service flows can be evenly distributed and VMs can be automatically scaled.

Processing layer: Processes use load sharing, and are stateless and pooled to ensure high system availability and on-demand service provisioning.
Data layer: The CSDB, a distributed memory database for a cloud-based environment with x86 servers, is used to support service scaling while ensuring carrier-grade reliability and service experience.

Microservice architecture: The microservice management architecture is used to allow
developers to develop and manage applications via individual microservices. The architecture
also provides diverse functions to ensure service security and reliability.

What is NFV?

One line definition..

"Decoupling of software from hardware is called NFV"

NFV decouples network functions from dedicated hardware and deploys these network
functions on general-purpose x86 servers, storage, and network devices. On an NFV network,
hardware resources are abstracted into pools and carriers can rapidly roll out services using the
resources from these pools. Additionally, an NFV network allows for elastic scaling and
automated O&M.

In the telecom world, it is ETSI that governs the NFV framework this time, unlike the usual 3GPP-led standards.



What is Hypervisor?

A hypervisor is the virtualization software layer between physical servers and operating systems. It takes the role of the virtual machine monitor (VMM) and allows multiple operating systems and applications to share the hardware. Mainstream hypervisors include the open-source KVM and Xen, Microsoft Hyper-V, and VMware ESXi.