28 results for Ubiquitous Computing, Pervasive Computing, Internet of Things, Cloud Computing


Relevance: 100.00%

Abstract:

The Massive Internet of Things is expected to play a crucial role in Beyond 5G (B5G) wireless communication systems, offering seamless connectivity among heterogeneous devices without human intervention. However, given the exponential proliferation of smart devices, IoT networks relying solely on terrestrial infrastructure may not fully meet the demanding IoT requirements in terms of bandwidth and connectivity, especially in areas where terrestrial infrastructures are not economically viable. To unleash the full potential of 5G and B5G networks and enable seamless connectivity everywhere, the 3GPP envisions the integration of Non-Terrestrial Networks (NTNs) with terrestrial ones starting from Release 17. However, this integration requires modifications to the 5G standard to ensure reliable communications despite typical satellite channel impairments. In this framework, this thesis proposes techniques at the Physical and Medium Access Control layers that require minimal adaptations to the current NB-IoT standard for operation via NTN. First, the satellite channel impairments are evaluated and a detailed link budget analysis is provided. Then, analyses at the link and system levels are conducted. In the former case, a novel algorithm leveraging time-frequency analysis is proposed to detect orthogonal preambles and estimate the signals' arrival time. In addition, the effects of collisions on the detection probability and Bit Error Rate are investigated, and Non-Orthogonal Multiple Access approaches are proposed for the random access and data phases. The system-level analysis evaluates the performance of random access under congestion. Various access parameters are tested in different satellite scenarios, and the performance is measured in terms of access probability and the time required to complete the procedure. Finally, a heuristic algorithm is proposed to jointly design the access and data phases, determining the number of satellite passages, the Random Access Periodicity, and the number of uplink repetitions that maximize the system's spectral efficiency.
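
To give a concrete flavour of the link budget analysis mentioned above, the following sketch computes free-space path loss and the resulting C/N0 for a hypothetical LEO NB-IoT uplink. It is illustrative only: all numbers (EIRP, carrier frequency, slant range, G/T, losses) are assumptions, not values from the thesis.

```python
import math

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB for a given slant range and carrier frequency."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

def uplink_cn0_dbhz(eirp_dbw: float, fspl_db: float, extra_losses_db: float,
                    gt_dbk: float) -> float:
    """Carrier-to-noise-density ratio: C/N0 = EIRP - losses + G/T - 10*log10(k)."""
    boltzmann_dbw = -228.6  # 10*log10(Boltzmann constant), dBW/(K*Hz)
    return eirp_dbw - fspl_db - extra_losses_db + gt_dbk - boltzmann_dbw

# Illustrative numbers (assumptions): LEO satellite at low elevation, S-band uplink.
slant_range_m = 1_000e3          # worst-case slant range
carrier_hz = 2e9                 # 2 GHz carrier
fspl = free_space_path_loss_db(slant_range_m, carrier_hz)
cn0 = uplink_cn0_dbhz(eirp_dbw=-7.0,       # ~23 dBm UE EIRP
                      fspl_db=fspl,
                      extra_losses_db=3.0,  # atmospheric + polarization margin
                      gt_dbk=1.1)           # assumed satellite G/T
print(f"FSPL = {fspl:.1f} dB, C/N0 = {cn0:.1f} dBHz")
```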

Relevance: 100.00%

Abstract:

In the next-generation Internet of Things, the overhead introduced by grant-based multiple access protocols may engulf the access network as a consequence of the proliferation of connected devices. Grant-free access protocols are therefore gaining increasing interest to support massive multiple access. In addition to scalability requirements, new demands have emerged for massive multiple access, including latency and reliability. The challenges envisaged for future wireless communication networks, particularly in the context of massive access, include: i) a very large population of low-power devices transmitting short packets; ii) an ever-increasing scalability requirement; iii) a mild fixed maximum latency requirement; iv) a non-trivial requirement on reliability. To this aim, we suggest the joint use of grant-free access protocols, massive MIMO at the base station, framed schemes that let the contention start and end within a frame, and successive interference cancellation techniques at the base station. In essence, this approach is encapsulated in the concept of coded random access with massive MIMO processing. These schemes can be explored from various angles, spanning the protocol stack from the physical (PHY) to the medium access control (MAC) layer. In this thesis, we delve into both of these layers, examining topics ranging from symbol-level signal processing to successive interference cancellation-based scheduling strategies. In parallel with proposing new schemes, our work includes a theoretical analysis aimed at providing valuable system design guidelines. As a main theoretical outcome, we propose a novel joint PHY and MAC layer design based on density evolution on sparse graphs.
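
As a minimal sketch of the coded random access idea described above, the following simulation runs one frame of repetition slotted ALOHA with idealized successive interference cancellation. The frame size, number of users, and repetition degree are arbitrary assumptions, and the model deliberately ignores PHY-layer effects such as massive MIMO processing.

```python
import random

def simulate_irsa_frame(num_users: int, num_slots: int, degree: int = 2) -> int:
    """One frame of repetition slotted ALOHA with ideal successive interference
    cancellation (SIC). Each user sends `degree` replicas in randomly chosen
    slots; singleton slots are decoded and their replicas removed, iteratively.
    Returns the number of decoded users."""
    slots = [set() for _ in range(num_slots)]   # slot -> undecoded replicas
    user_slots = []
    for u in range(num_users):
        chosen = random.sample(range(num_slots), degree)
        user_slots.append(chosen)
        for s in chosen:
            slots[s].add(u)

    decoded = set()
    progress = True
    while progress:
        progress = False
        for s in range(num_slots):
            if len(slots[s]) == 1:              # singleton slot: decode that user
                u = next(iter(slots[s]))
                decoded.add(u)
                for s2 in user_slots[u]:        # SIC: cancel all of its replicas
                    slots[s2].discard(u)
                progress = True
    return len(decoded)

# Example: offered load G = 50 users / 100 slots = 0.5
random.seed(0)
trials = [simulate_irsa_frame(50, 100) for _ in range(200)]
print("mean decoded users per frame:", sum(trials) / len(trials))
```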

Relevance: 100.00%

Abstract:

Electronic applications are nowadays converging under the umbrella of the cloud computing vision. The future ecosystem of information and communication technology will integrate clouds of portable clients and embedded devices exchanging information, through the Internet layer, with processing clusters of servers, data centers and high-performance computing systems. Even though society at large is ready to embrace this revolution, there is another side to the story. Portable devices require batteries to work far from power plugs, and battery capacity does not scale as the increasing power requirements do. At the other end, processing clusters, such as data centers and server farms, are built upon the integration of thousands of multiprocessors. For each of them, the technology scaling of the last decade has produced a dramatic increase in power density with significant spatial and temporal variability. This leads to power and temperature hot-spots, which may cause non-uniform ageing and accelerated chip failure. Furthermore, all the heat removed from the silicon translates into high cooling costs. Moreover, trends in the ICT carbon footprint show that the run-time power consumption of the whole spectrum of devices accounts for a significant slice of worldwide carbon emissions. This thesis work embraces the full ICT ecosystem and its dynamic power consumption concerns by describing a set of new and promising system-level resource management techniques to reduce power consumption and related issues for two corner cases: Mobile Devices and High Performance Computing.
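
A minimal, purely illustrative sketch of run-time, system-level power management in the spirit described above is shown below: a threshold-based DVFS governor that trades frequency for power under utilization and thermal constraints. The thresholds and P-states are assumptions, not the techniques proposed in the thesis.

```python
# Hypothetical P-states (frequency levels); values are illustrative assumptions.
FREQ_LEVELS_GHZ = [1.2, 1.8, 2.4, 3.0]

def next_frequency(current_idx: int, utilization: float, temp_c: float) -> int:
    """Pick the next P-state index from CPU utilization and die temperature."""
    if temp_c > 85.0:                 # thermal cap: always throttle down
        return max(current_idx - 1, 0)
    if utilization > 0.8:             # busy: step up if possible
        return min(current_idx + 1, len(FREQ_LEVELS_GHZ) - 1)
    if utilization < 0.3:             # mostly idle: step down to save power
        return max(current_idx - 1, 0)
    return current_idx                # otherwise hold the current level

# Example control loop over a trace of (utilization, temperature) samples.
trace = [(0.9, 60), (0.95, 75), (0.85, 88), (0.2, 70), (0.1, 55)]
idx = 0
for util, temp in trace:
    idx = next_frequency(idx, util, temp)
    print(f"util={util:.2f} temp={temp}C -> {FREQ_LEVELS_GHZ[idx]} GHz")
```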

Relevance: 100.00%

Abstract:

The pervasive availability of connected devices in every industrial and societal sector is pushing for an evolution of the well-established cloud computing model. The emerging paradigm of the cloud continuum embraces this decentralization trend and envisions virtualized computing resources physically located between traditional datacenters and data sources. By totally or partially executing closer to the network edge, applications can react more quickly to events, thus enabling advanced forms of automation and intelligence. However, these applications also induce new data-intensive workloads with low-latency constraints that require the adoption of specialized resources, such as high-performance communication options (e.g., RDMA, DPDK, XDP). Unfortunately, cloud providers still struggle to integrate these options into their infrastructures. This risks undermining the principle of generality that underlies the economies of scale of cloud computing, by forcing developers to tailor their code to low-level APIs, non-standard programming models, and static execution environments. This thesis proposes a novel system architecture to empower cloud platforms across the whole cloud continuum with Network Acceleration as a Service (NAaaS). To provide commodity yet efficient access to acceleration, this architecture defines a layer of agnostic high-performance I/O APIs, exposed to applications and clearly separated from the heterogeneous protocols, interfaces, and hardware devices that implement it. A novel system component embodies this decoupling by offering a set of agnostic OS features to applications: memory management for zero-copy transfers, asynchronous I/O processing, and efficient packet scheduling. This thesis also explores the design space of possible implementations of this architecture by proposing two reference middleware systems and by adopting them to support interactive use cases in the cloud continuum: a serverless platform and an Industry 4.0 scenario. A detailed discussion and a thorough performance evaluation demonstrate that the proposed architecture is suitable for enabling the easy-to-use, flexible integration of modern network acceleration into next-generation cloud platforms.
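
The decoupling between application-facing I/O APIs and the underlying acceleration backends can be pictured with the following sketch: applications code against a backend-agnostic asynchronous channel while a concrete backend (a plain TCP fallback here; RDMA/DPDK/XDP in an accelerated deployment) implements it. All class and method names are hypothetical, not the APIs proposed in the thesis.

```python
from abc import ABC, abstractmethod
import asyncio

class AcceleratedChannel(ABC):
    """Backend-agnostic message channel exposed to applications."""

    @abstractmethod
    async def send(self, payload: bytes) -> None: ...

    @abstractmethod
    async def recv(self) -> bytes: ...

class TcpChannel(AcceleratedChannel):
    """Fallback backend built on ordinary TCP sockets via asyncio."""

    def __init__(self, reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
        self._reader, self._writer = reader, writer

    async def send(self, payload: bytes) -> None:
        # simple length-prefixed framing
        self._writer.write(len(payload).to_bytes(4, "big") + payload)
        await self._writer.drain()

    async def recv(self) -> bytes:
        size = int.from_bytes(await self._reader.readexactly(4), "big")
        return await self._reader.readexactly(size)

async def echo_client(host: str = "127.0.0.1", port: int = 9000) -> None:
    # (assumes an echo server is listening at host:port)
    reader, writer = await asyncio.open_connection(host, port)
    chan: AcceleratedChannel = TcpChannel(reader, writer)  # backend chosen at runtime
    await chan.send(b"hello")
    print(await chan.recv())
```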

Relevance: 100.00%

Abstract:

Nowadays, in ubiquitous computing scenarios, users increasingly expect to access online content and services through any device at hand, regardless of their physical location, and to personalize and tailor content and service access to their own requirements. The coordinated provisioning of content tailored to user context and preferences, and the support for mobile multimodal and multichannel interactions, are of paramount importance in providing users with truly effective ubiquitous support. However, so far the intrinsic heterogeneity of these scenarios and the lack of an integrated approach have led to proposals that are either too vertical or practically unusable, resulting in poor and non-versatile support platforms for ubiquitous computing. This work investigates and promotes design principles to help cope with these ever-changing and inherently dynamic scenarios. By following the outlined principles, we have designed and implemented a middleware platform to support the provisioning of ubiquitous mobile services and content. To prove the viability of our approach, we have realized and stress-tested on top of our platform a number of different, highly complex and heterogeneous content and service provisioning scenarios. The encouraging results are pushing our research further, towards a platform that not only dynamically supports novel ubiquitous application scenarios by tailoring extremely diverse services and content to heterogeneous user needs, but is also able to reconfigure and adapt itself in order to provide truly optimized and tailored support for ubiquitous service provisioning.

Relevance: 100.00%

Abstract:

The term Ambient Intelligence (AmI) refers to a vision of the future of the information society in which smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence hidden in the network connecting these devices. This promotes the creation of pervasive environments that improve the quality of life of the occupants and enhance the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require close cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors are embedded in the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes which can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy scavenging modules). WSNs promise to revolutionize the interactions between the physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed. Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce the interference on the physical phenomena being sensed and allows easy, low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In particular, in video surveillance it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, which has the potential to drastically reduce the required bandwidth (e.g., by transmitting compressed images only when something "interesting" is detected). The energy cost of image processing must, however, be carefully minimized. Imaging plays an important role in sensing devices for ambient intelligence.
Computer vision can, for instance, be used to recognise persons and objects and to recognise behaviour such as illness and rioting. Having a wireless camera as a camera mote opens the way to distributed scene analysis: multiple eyes see more than one, and a camera system that can observe a scene from multiple directions can overcome occlusion problems and describe objects in their true 3D appearance. In real time, these approaches are a recently opened field of research. In this thesis we pay attention to the realities of hardware/software technologies and the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, outlined below. Although the design of a sensor network and its sensor nodes is strictly application dependent, a number of constraints should almost always be considered. Among them:
• Small form factor, to reduce node intrusiveness.
• Low power consumption, to reduce battery size and extend node lifetime.
• Low cost, for widespread diffusion.
These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, on which only simple data processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through the adoption of ad hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance. Low Power Video Sensor Nodes and Video Processing Algorithms: in comparison to scalar sensors, such as temperature, pressure, humidity, velocity, and acceleration sensors, vision sensors generate much higher-bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes. We have designed and developed wireless video sensor nodes focusing on small size and flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first one is based on a low-cost, low-power FPGA+microcontroller system-on-chip; the second one is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate in a continuous fashion with a Li-Polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally, are presented.
Featuring such intelligence, these nodes are able to cope with tasks such as recognising unattended bags in airports or persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data. Multimodal surveillance: in several setups the use of wired video cameras may not be possible. For this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community. Pyroelectric InfraRed (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and can possibly result in continuous operation of the node. Being low-cost, passive (thus low-power) and small in form factor, PIR sensors are well suited to WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation of standalone distributed cameras. We have used an adaptive controller based on Model Predictive Control (MPC) to improve system performance, outperforming naive power management policies.
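
A minimal sketch of the PIR-triggered duty cycling idea is given below: the PIR sensor and the residual energy level gate the camera and radio activity. The energy model and thresholds are illustrative assumptions, not the MPC-based controller developed in this work.

```python
from dataclasses import dataclass

@dataclass
class NodeState:
    battery_mj: float          # residual energy, millijoules
    camera_on: bool = False

CAPTURE_COST_MJ = 500.0        # assumed energy cost of one capture + transmit
LOW_ENERGY_MJ = 2_000.0        # below this level, ignore PIR triggers

def step(state: NodeState, pir_triggered: bool, harvested_mj: float) -> NodeState:
    """Advance the node by one control period."""
    state.battery_mj += harvested_mj
    fire = pir_triggered and state.battery_mj > LOW_ENERGY_MJ
    state.camera_on = fire
    if fire:                               # wake camera + radio only on motion
        state.battery_mj -= CAPTURE_COST_MJ
    return state

# Example: alternating motion events with modest solar harvesting.
node = NodeState(battery_mj=3_000.0)
for t, motion in enumerate([False, True, True, False, True]):
    node = step(node, motion, harvested_mj=100.0)
    print(f"t={t} motion={motion} camera_on={node.camera_on} "
          f"battery={node.battery_mj:.0f} mJ")
```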

Relevance: 100.00%

Abstract:

Ambient Intelligence (AmI) envisions a world where smart electronic environments are aware of and responsive to their context. People moving into these settings engage many computational devices and systems simultaneously, even if they are not aware of their presence. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. The dependence on a large number of fixed and mobile sensors embedded into the environment makes Wireless Sensor Networks one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes, simple devices that typically embed a low-power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy scavenging modules). Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. In order to handle the large amount of data generated by a WSN, several multi-sensor data fusion techniques have been developed. The aim of multi-sensor data fusion is to combine data to achieve better accuracy and inferences than could be achieved by the use of a single sensor alone. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Multimodal Surveillance and Activity Recognition. Novel techniques to handle data from a network of low-cost, low-power Pyroelectric InfraRed (PIR) sensors are presented. Such techniques allow the detection of the number of people moving in the environment, their direction of movement and their position. We discuss how a mesh of PIR sensors can be integrated with a video surveillance system to increase its performance in people tracking. Furthermore, we embed a PIR sensor within the design of a Wireless Video Sensor Node (WVSN) to extend its lifetime. Activity recognition is a fundamental block in natural interfaces. A challenging objective is to design an activity recognition system able to exploit a redundant but unreliable WSN. We present our work in building a novel activity recognition architecture for such a dynamic system. The architecture has a hierarchical structure where simple nodes perform gesture classification and a high-level meta-classifier fuses a changing number of classifier outputs. We demonstrate the benefits of such an architecture in terms of increased recognition performance and fault and noise robustness. Furthermore, we show how network lifetime can be extended through a performance-power trade-off. Smart objects can enhance the user experience within smart environments. We present our work in extending the capabilities of the Smart Micrel Cube (SMCube), a smart object used as a tangible interface within a tangible computing framework, through the development of a gesture recognition algorithm suitable for this device with limited computational power. Finally, the development of activity recognition techniques can greatly benefit from the availability of shared datasets. We report our experience in building a dataset for activity recognition. This dataset is freely available to the scientific community for research purposes and can be used as a testbench for developing, testing and comparing different activity recognition techniques.
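
As an illustration of the hierarchical architecture described above, the following sketch fuses a changing number of per-node gesture labels with a simple majority-vote meta-classifier; the stub node classifiers and gesture names are purely hypothetical and stand in for the actual per-node classifiers.

```python
from collections import Counter
from typing import Optional

GESTURES = ["shake", "tilt", "circle"]

def node_classifier(node_id: int, features: list[float]) -> Optional[str]:
    """Stand-in for a per-node gesture classifier; returns None if the node is
    offline or its reading is unreliable."""
    if not features:
        return None
    # trivially map the mean feature value onto a gesture class
    return GESTURES[int(sum(features) / len(features)) % len(GESTURES)]

def meta_classifier(local_labels: list[Optional[str]]) -> Optional[str]:
    """Fuse a changing number of node outputs by majority vote, ignoring
    missing votes; robust to individual node failures."""
    votes = Counter(lbl for lbl in local_labels if lbl is not None)
    if not votes:
        return None
    return votes.most_common(1)[0][0]

# Example: three nodes report, one node is down.
readings = {0: [1.2, 0.8], 1: [1.1, 1.0], 2: []}
labels = [node_classifier(n, feats) for n, feats in readings.items()]
print("local:", labels, "-> fused:", meta_classifier(labels))
```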

Relevance: 100.00%

Abstract:

The OPERA experiment aims at the direct observation of ν_mu -> ν_tau oscillations in the CNGS (CERN Neutrinos to Gran Sasso) neutrino beam produced at CERN; since the ν_e contamination in the CNGS beam is low, OPERA will also be able to study the sub-dominant oscillation channel ν_mu -> ν_e. OPERA is a large-scale hybrid apparatus divided into two supermodules, each equipped with electronic detectors, an iron spectrometer and a highly segmented ~0.7 kton target section made of Emulsion Cloud Chamber (ECC) units. During my research work in the Bologna laboratory I took part in the set-up of the automatic scanning microscopes, studying and tuning the scanning system performance and efficiency with emulsions exposed to a test beam at CERN in 2007. Once the triggered bricks were distributed to the collaboration laboratories, my work centered on the procedure used for the localization and reconstruction of neutrino events.

Relevance: 100.00%

Abstract:

Thermal infrared (IR, 10.5–12.5 μm) images from the Meteosat Visible and Infrared Imager (MVIRI) of cold cloud episodes (cloud top brightness temperature < 241 K) are used as a proxy of precipitating clouds to derive a warm season (May-August) climatology of their coherency, duration, span, and speed over Europe and the Mediterranean. The analysis focuses on the 30°-54°N, 15°W-40°E domain in May-August 1996-2005. Harmonic analysis using discrete Fourier transforms is applied, together with a statistical analysis and an investigation of the diurnal cycle. The objective of this study is to make available a set of results on the propagation dynamics of cloud systems, with the aim of assisting numerical modellers in improving summer convection parameterization. The zonal propagation of cold cloud systems is accompanied by a weak meridional component confined to narrow latitude belts. The persistence of cold clouds over the area evidences the role of orography: the Pyrenees, the Alps, the Balkans and Anatolia. A diurnal oscillation is found, with a maximum marking the initiation of convection in the lee of the mountains and shifting from about 1400 UTC at 40°E to 1800 UTC at 0°. A moderate eastward propagation of the frequency maximum from all mountain chains across the domain exists, and the diurnal maxima are completely suppressed west of 5°W. The mean power spectrum of the cold cloud frequency distribution evidences a period of one day all over Europe, disappearing over the ocean (west of 10°W). Other maxima are found at periods of 6 to 10 days in the longitudes from 15°W to 0° and indicate the activity of the westerlies, with frontal passages over the continent. Longer-period activity (from 15 up to 30 days) is stronger around 10°W and from 5°W to 15°E and is likely related to the influence of the Madden-Julian Oscillation. The maxima of the diurnal signal are in phase with the presence of elevated terrain and with land masses. A median zonal phase speed of 16.1 m/s is found for all events ≥ 1000 km and ≥ 20 h, and a full set of results divided by years and recurrence categories is also presented.
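
The diurnal harmonic analysis mentioned above can be sketched as follows: the amplitude and phase of the 24-hour component of an hourly cold cloud frequency series are extracted with a discrete Fourier transform. The input series here is synthetic, not data from the study.

```python
import numpy as np

hours_per_day, n_days = 24, 10
t = np.arange(hours_per_day * n_days)                      # hourly samples
# synthetic frequency series: mean + diurnal cycle peaking near 16 UTC + noise
series = (0.3 + 0.1 * np.cos(2 * np.pi * (t - 16) / 24)
          + 0.02 * np.random.randn(t.size))

spectrum = np.fft.rfft(series)
freqs = np.fft.rfftfreq(series.size, d=1.0)                # cycles per hour
diurnal_idx = np.argmin(np.abs(freqs - 1.0 / 24.0))        # bin closest to 1/24 h^-1

amplitude = 2.0 * np.abs(spectrum[diurnal_idx]) / series.size
phase_hour = (-np.angle(spectrum[diurnal_idx]) * 24.0 / (2.0 * np.pi)) % 24.0
print(f"diurnal amplitude ~ {amplitude:.3f}, maximum near {phase_hour:.1f} UTC")
```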

Relevance: 100.00%

Abstract:

Progress in the miniaturization of electronic components and in the design of wireless systems has paved the way towards ubiquitous and pervasive communications, enabling anywhere, anytime connectivity. Wireless devices located on, inside, and around the human body are becoming commonly used, leading to the class of body-centric communications. The presence of the body, with all its peculiar characteristics, has to be properly taken into account in the development and design of wireless networks in this context. This thesis addresses various aspects of body-centric communications, with the aim of investigating the network performance achievable in different scenarios. The main original contributions pertain to the performance evaluation of Wireless Body Area Networks (WBANs) at the Medium Access Control layer: the application of Link Adaptation to these networks is proposed, Carrier Sense Multiple Access with Collision Avoidance algorithms used for WBANs are extensively investigated, and coexistence with other wireless systems is examined. Then, an analytical model for interference in wireless access networks is developed, which can be applied to the study of communication between devices located on humans and fixed nodes of an external infrastructure. Finally, results of experimental activities on the investigation of human mobility and sociality are presented.
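
As a rough illustration of the CSMA/CA behaviour investigated for WBANs, the following sketch simulates slotted binary exponential backoff among a handful of nodes and reports slot utilisation and collision rate. The parameters follow no particular standard; they are illustrative assumptions only.

```python
import random

def simulate_csma_ca(num_nodes: int = 8, num_slots: int = 10_000,
                     cw_min: int = 8, cw_max: int = 64, seed: int = 1):
    """Slotted CSMA/CA with binary exponential backoff (idealized channel)."""
    rng = random.Random(seed)
    backoff = [rng.randrange(cw_min) for _ in range(num_nodes)]
    cw = [cw_min] * num_nodes
    successes = collisions = 0
    for _ in range(num_slots):
        ready = [n for n in range(num_nodes) if backoff[n] == 0]
        if len(ready) == 1:                      # exactly one transmitter: success
            n = ready[0]
            successes += 1
            cw[n] = cw_min
            backoff[n] = rng.randrange(cw[n])
        elif len(ready) > 1:                     # collision: double contention windows
            collisions += 1
            for n in ready:
                cw[n] = min(cw[n] * 2, cw_max)
                backoff[n] = rng.randrange(cw[n])
        for n in range(num_nodes):               # remaining nodes count down
            if backoff[n] > 0:
                backoff[n] -= 1
    return successes / num_slots, collisions / num_slots

throughput, collision_rate = simulate_csma_ca()
print(f"slot utilisation ~ {throughput:.2f}, collision rate ~ {collision_rate:.2f}")
```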

Relevance: 100.00%

Abstract:

In this work, in-situ measurements of aerosol chemical composition, particle number size distribution, cloud-relevant properties and ground-based cloud observations were combined with high-resolution satellite sea surface chlorophyll-a concentrations and air mass back-trajectory data to investigate the impact of marine biota on aerosol physico-chemical and cloud properties. Studies were performed over the North-Eastern Atlantic Ocean, the central Mediterranean Sea, and the Arctic Ocean, deploying both multi-year datasets and short time-scale observations. All the data were chosen to be representative of the marine atmosphere, reducing any anthropogenic input to a minimum. A relationship between the patterns of marine biological activity and the time evolution of marine aerosol properties was observed under a variety of aspects, from chemical composition to number concentration and size distribution, up to the most cloud-relevant properties. At short time scales (1-2 months), the aerosol properties tend to respond to biological activity variations with a delay of about one to three weeks. This delay should be considered in model applications that use chlorophyll-a to predict marine aerosol properties at high temporal resolution. The impact of oceanic biological activity on the microphysical properties of marine stratiform clouds is also evidenced by our analysis over the Eastern North Atlantic Ocean. Such clouds tend to have a higher number of smaller cloud droplets in periods of high biological activity with respect to quiescent periods. This confirms the possibility of feedback interactions within the biota-aerosol-cloud-climate system. Achieving a better characterization of the time and space relationships linking oceanic biological activity to marine aerosol composition and properties may significantly improve our future capability of predicting the chemical composition of the marine atmosphere, potentially contributing to reducing the uncertainty of future climate predictions through a better understanding of the natural climate system.
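
The one-to-three-week delay mentioned above could, for instance, be estimated with a lagged correlation analysis such as the following sketch, which finds the lag maximising the Pearson correlation between a chlorophyll-a series and an aerosol property series (both synthetic here, with an assumed 14-day response).

```python
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(365)
# synthetic daily chlorophyll-a: seasonal cycle + noise
chl = 1.0 + 0.5 * np.sin(2 * np.pi * days / 365) + 0.1 * rng.standard_normal(days.size)
true_lag = 14                                       # aerosol responds ~2 weeks later
aerosol = np.roll(chl, true_lag) + 0.1 * rng.standard_normal(days.size)

def lagged_corr(x: np.ndarray, y: np.ndarray, lag: int) -> float:
    """Pearson correlation between x(t) and y(t + lag)."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    return float(np.corrcoef(x, y)[0, 1])

best = max(range(0, 40), key=lambda lag: lagged_corr(chl, aerosol, lag))
print(f"best lag ~ {best} days")
```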

Relevance: 100.00%

Abstract:

The Ǧābirian corpus was a receiver of ancient Greek ideas and, at the same time, a source of knowledge for the later Greek-speaking world, in particular for medieval Byzantine alchemy. Both aspects are explored in the dissertation with respect to the notion of nature. After a general introduction to the corpus and the sciences described in it, particular attention is devoted to an anonymous Byzantine text, The Work of Four Elements, which was probably influenced by the Ǧābirian Books of Seventy. These texts exemplify how, in the theory of Ǧābirian science, things are constructed from four natures (hot, cold, moist and dry), whose balance defines what a thing is. By changing the balance of natures, one can transmute any metal into gold, which is perfectly proportioned in terms of natures. Ǧābir presents the art of dyeing metals gold in the Books of Seven Metals, which, along with chrysopoetic recipes, also include medical recipes and theoretical content such as the theories of the four humours, properties, and talismans. Moreover, Ǧābir postulated a substrate that does not change in itself and continues to exist when natures move in and out of things. Such primary existence is called the fifth nature, an additional principle to the four natures. This key concept of the Ǧābirian theory, which has so far been underexplored, is discussed through the textual and critical analysis of various unedited sources: the Books of Seven Metals and the Book of the Fifth Nature. This study confirms that the fifth nature was probably derived from ancient Greek philosophical concepts such as the Empedoclean particles, the Aristotelian fifth element and the Stoic pneuma. Thus, this research indicates the importance of the Ǧābirian corpus both in the history of alchemy and in the history of philosophy.

Relevance: 100.00%

Abstract:

The advent of Bitcoin suggested a disintermediated economy in which Internet users can take part directly. The conceptual disruption brought about by this Internet of Money (IoM) mirrors the cross-industry impacts of blockchain and distributed ledger technologies (DLTs). While related instances of non-centralisation thwart regulatory efforts to establish accountability, in the financial domain further challenges arise from the presence in the IoM of two seemingly opposing traits: anonymity and transparency. Indeed, DLTs are often described as architecturally transparent, but the perceived anonymity of cryptocurrency transfers fuels fears of illicit exploitation. This is a primary concern for the framework to prevent money laundering and the financing of terrorism and proliferation (AML/CFT/CPF), and a top priority both globally and at the EU level. Nevertheless, the anonymous and transparent features of the IoM are far from clear-cut, and the same is true for its levels of disintermediation and non-centralisation. Almost fifteen years after the first Bitcoin transaction, the IoM today comprises a diverse set of socio-technical ecosystems. Building on an analysis of their phenomenology, this dissertation shows that there is more to their traits of anonymity and transparency than it may seem, and that these features range across a spectrum of combinations and degrees. In this context, trade-offs can be evaluated by referring to techno-legal benchmarks, established through socio-technical assessments grounded on teleological interpretation. Against this backdrop, this work provides framework-level recommendations for the EU to respond to the twofold nature of the IoM legitimately and effectively. The methodology embraces the mutual interaction between regulation and technology when drafting regulation whose compliance can be eased by design. This approach mitigates the risk of overfitting in a fast-changing environment, while acknowledging specificities in compliance with the risk-based approach that sits at the core of the AML/CFT/CPF regime.