980 results for Point cloud processing
Abstract:
Current trends in broadband mobile networks point towards placing different capabilities at the edge of the mobile network in a centralised way. On one hand, the split of the eNB between baseband processing units and remote radio heads makes it possible to process some of the protocols in centralised premises, likely with virtualised resources. On the other hand, mobile edge computing uses processing and storage capabilities close to the air interface in order to deploy optimised services with minimum delay. The confluence of both trends is a hot topic in the definition of future 5G networks. The full centralisation of both technologies in cloud data centres imposes stringent requirements on the fronthaul connections in terms of throughput and latency. Therefore, cells with limited network access would not be able to offer these types of services. This paper proposes a solution for these cases, based on the placement of processing and storage capabilities close to the remote units, which is especially well suited for the deployment of clusters of small cells. The proposed cloud-enabled small cells include a highly efficient microserver with a limited set of virtualised resources offered to the cluster of small cells. As a result, a light data centre is created and shared for deploying centralised eNB and mobile edge computing functionalities. The paper covers the proposed architecture, with special focus on the integration of both aspects, and possible scenarios of application.
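To make the throughput constraint concrete, here is a back-of-the-envelope Python sketch of the raw I/Q bit rate that a fully centralised deployment pushes onto the fronthaul; the figures are typical LTE/CPRI values, not numbers taken from the paper:

```python
# Back-of-the-envelope fronthaul load for one fully centralised cell.
# Illustrative only: the parameter values below are typical LTE/CPRI
# figures, not taken from the paper.

def cpri_rate_bps(sample_rate=30.72e6,     # 20 MHz LTE sampling rate
                  iq_width_bits=15,        # bits per I (and per Q) sample
                  n_antennas=1,
                  control_overhead=16/15,  # CPRI control words
                  line_coding=10/8):       # 8b/10b line code
    """Raw I/Q fronthaul bit rate for one antenna-carrier stream."""
    return (sample_rate * 2 * iq_width_bits * n_antennas
            * control_overhead * line_coding)

# A single 20 MHz antenna stream already needs ~1.23 Gbit/s of fronthaul,
# which is why cells with limited network access cannot be fully centralised.
print(f"{cpri_rate_bps() / 1e9:.2f} Gbit/s")
```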
Abstract:
With the proliferation of new mobile devices and applications, the demand for ubiquitous wireless services has increased dramatically in recent years. The explosive growth in wireless traffic requires wireless networks to be scalable, so that they can be efficiently extended to meet communication demands. In a wireless network, the interference power typically grows with the number of devices unless they are coordinated, yet large-scale coordination is difficult due to the low-bandwidth, high-latency interfaces between access points (APs) in traditional wireless networks. To address this challenge, the cloud radio access network (C-RAN) has been proposed, where a pool of baseband units (BBUs) is connected to distributed remote radio heads (RRHs) via high-bandwidth, low-latency links (i.e., the fronthaul) and is responsible for all the baseband processing. However, insufficient fronthaul link capacity may limit the scale of a C-RAN and prevent it from fully utilizing the benefits of centralized baseband processing. As a result, fronthaul link capacity becomes the bottleneck in the scalability of C-RAN. This dissertation explores scalable C-RAN designs that tackle this challenge. In the first part of the dissertation, we investigate the scalability issues in existing wireless networks and propose a novel time-reversal (TR) based scalable wireless network in which the interference power is naturally mitigated by the focusing effects of TR communications, without coordination among APs or terminal devices (TDs). Owing to this property, the system can easily be extended to serve more TDs. Motivated by the suitability of TR communications for scalable wireless networking, in the second part we apply TR-based communications to the C-RAN and identify the TR tunneling effect, which alleviates the fronthaul traffic load caused by growing numbers of TDs. We further design waveforming schemes to optimize the downlink and uplink transmissions in the TR-based C-RAN, which are shown to improve downlink and uplink transmission accuracy. Consequently, the fronthaul traffic load is further alleviated by reducing the re-transmissions caused by transmission errors. Moreover, inspired by the TR-based C-RAN, we propose a compressive quantization scheme for the uplink of multi-antenna C-RAN, so that more antennas can be utilized within the limited fronthaul capacity; this provides rich spatial diversity with which massive numbers of TDs can be served more efficiently.
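The focusing effect the dissertation builds on is commonly realised by pre-filtering each transmission with the time-reversed, conjugated channel impulse response, so that multipath components add coherently only at the intended receiver. A minimal numpy sketch with a synthetic channel (the tap count and fading model are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic multipath channel impulse response (illustrative, not from
# the dissertation): L taps of complex Gaussian fading.
L = 32
h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2 * L)

# Time-reversal prefilter: time-reversed, conjugated, power-normalised CIR.
g = np.conj(h[::-1]) / np.linalg.norm(h)

# The end-to-end response g * h concentrates energy in a single peak,
# which is the focusing effect that mitigates interference without
# coordination among APs or TDs.
effective = np.convolve(g, h)
peak_idx = np.argmax(np.abs(effective))
peak = np.abs(effective[peak_idx])
sidelobe = np.max(np.abs(np.delete(effective, peak_idx)))
print(f"focusing gain: peak/sidelobe = {peak / sidelobe:.1f}x")
```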
Abstract:
Part 5: Service Orientation in Collaborative Networks
Abstract:
Honey is rich in sugars, dominated by fructose and glucose, which makes it prone to crystallize during storage. Owing to this composition, the anhydrous glass transition temperature of honey is so low that honey is difficult to dry on its own, and a drying aid (filler) is needed. Maltodextrin is a common drying aid used in drying sugar-rich foods. The present study examines the processing of honey powder by vacuum drying and the impact of the drying process and formulation on the stability of the powder. To achieve these objectives, a series of experiments was carried out: investigation of the properties of maltodextrin DE 10, and study of the effects of drying temperature, total solids concentration, DE value, maltodextrin concentration and anti-caking agent on honey powder processing and stability. Maltodextrin provides a more stable glass than lower molecular weight sugars. Dynamic Dew Point Isotherm (DDI) data could be used to determine the amorphous content of a system: the area under the first derivative of the DDI curve equals the amount of water needed by the amorphous material to crystallize. The drying temperature affected the amorphous content of vacuum-dried honey powder; higher temperatures appeared to yield powder with a larger amorphous fraction. The maltodextrin ratio affected the stability of honey powder more significantly than the total solids concentration, DE value or drying temperature. The critical water activity of honey powder was lower than the water activity corresponding to the BET monolayer water content. Addition of an anti-caking agent increased the stability and flowability of honey powder, and addition of calcium stearate could inhibit collapse of the powder during storage.
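The DDI-based estimate can be illustrated numerically: integrate the first derivative of the sorption curve across the crystallisation step. A minimal Python sketch with synthetic data (the curve shape and values are assumptions, not the thesis measurements):

```python
import numpy as np

# Synthetic DDI-style sorption data (illustrative values only): water
# content recorded against water activity, with a sigmoid step where the
# amorphous fraction takes up water and then crystallises.
aw = np.linspace(0.05, 0.80, 200)                     # water activity
wc = 2.0 + 8.0 / (1.0 + np.exp(-(aw - 0.45) / 0.04))  # g water / 100 g solids

# First derivative of the sorption curve with respect to water activity.
dwc = np.gradient(wc, aw)

# Area under the first-derivative peak ~ water needed by the amorphous
# material to crystallise, i.e. the amorphous-content estimate derived
# from DDI data (trapezoidal integration).
area = float(np.sum(0.5 * (dwc[1:] + dwc[:-1]) * np.diff(aw)))
print(f"water absorbed across the transition: {area:.2f} g/100 g solids")
```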
Abstract:
Companies operating in the wood processing industry need to increase their productivity by implementing automation technologies in their production systems: increasing global competition and rising raw material prices challenge their competitiveness. Yet too extensive automation brings risks such as deteriorated situation awareness and operator deskilling. The concept of Levels of Automation is generally seen as a means to achieve a balanced task allocation between the operators' skills and competences and the automation technology relieving humans of repetitive or hazardous work. The aim of this thesis was to examine to what extent existing methods for assessing Levels of Automation in production processes are applicable in the wood processing industry, with a focus on improving the competitiveness of its production systems. This was done by answering the following research questions (RQ): RQ1: Which method is most appropriate for measuring Levels of Automation in the wood processing industry? RQ2: How can the measurement of Levels of Automation contribute to improved competitiveness of the wood processing industry's production processes? To answer RQ1, literature reviews were used to identify the main characteristics of the wood processing industry affecting its automation potential, as well as appropriate assessment methods for Levels of Automation. When selecting the most suitable method, factors such as relevance to the target industry, application complexity and the operational level the method addresses were important. The DYNAMO++ method, which covers both a rather quantitative technical-physical dimension and a more qualitative social-cognitive dimension, was judged most appropriate against these factors. To answer RQ2, a case study was undertaken at a major Swedish manufacturer of interior wood products to point out how measuring Levels of Automation can contribute to improved competitiveness in the wood processing industry. The focus was on the task level on the shop floor, and concrete improvement suggestions were elaborated after applying the measurement method. The main aspects considered for generalization were improvements in ergonomics in process design and cognitive support tools for shop-floor personnel through task standardization. Furthermore, the difficulty of automating grading and sorting processes, due to the heterogeneous material properties of wood, argues for a suitable arrangement of human intervention options in work task allocation. Applying a modified version of DYNAMO++ in the case study revealed its pros and cons: high operator involvement in the improvement process, but a distinct predisposition towards assembly systems.
Abstract:
In the last decades, Artificial Intelligence has witnessed multiple breakthroughs in deep learning. In particular, purely data-driven approaches have enabled a wide variety of successful applications due to the large availability of data. Nonetheless, the integration of prior knowledge is still required to compensate for specific issues such as lack of generalization from limited data, fairness, robustness, and biases. In this thesis, we analyze methodologies for integrating knowledge into deep learning models in the field of Natural Language Processing (NLP). We start by remarking on the importance of knowledge integration, highlight the possible shortcomings of existing approaches, and investigate the implications of integrating unstructured textual knowledge. We introduce Unstructured Knowledge Integration (UKI) as the process of integrating unstructured knowledge into machine learning models, and discuss UKI in the field of NLP, where knowledge is represented in natural language. We identify UKI as a complex process comprising multiple sub-processes, different knowledge types, and knowledge integration properties to guarantee. We discuss the challenges of integrating unstructured textual knowledge and draw connections with well-known research areas in NLP. We provide a unified vision of structured knowledge extraction (KE) and UKI by identifying KE as a sub-process of UKI. We investigate challenging scenarios where structured knowledge is not a feasible prior assumption and formulate each task from the point of view of UKI, adopting simple yet effective neural architectures and discussing the challenges of such an approach. Finally, we identify KE as a form of symbolic representation; from this perspective, we argue for sophisticated UKI processes that verify the validity of knowledge integration, and we foresee frameworks combining symbolic and sub-symbolic representations for learning as a solution.
Abstract:
The deployment of ultra-dense networks is one of the most promising solutions for managing the co-channel interference that affects the latest wireless communication systems, especially in hotspots. To meet the requirements of the use cases and the immense amount of traffic generated in these scenarios, 5G ultra-dense networks are being deployed using various technologies, such as distributed antenna systems (DAS) and the cloud radio access network (C-RAN). In these centralized densification schemes, virtualized baseband processing units coordinate the distributed access points and manage the available network resources. In particular, link adaptation techniques are fundamental to overall system operation and performance. The core of this dissertation is an analysis and comparison of dynamic and adaptive methods for modulation and coding scheme (MCS) selection applied to the latest mobile telecommunications standards. A novel algorithm based on proportional-integral-derivative (PID) controller principles and a block error rate (BLER) target is proposed. Tests were conducted in a 4G and 5G system-level laboratory and, by means of a channel emulator, performance was evaluated for different channel models and target BLERs. Furthermore, due to the intrinsic sectorization of the end-user distribution in the investigated scenario, a preliminary analysis of the joint application of user grouping algorithms with multi-antenna and multi-user techniques was performed. In conclusion, the importance and impact of other fundamental physical layer operations, such as channel estimation and power control, on the overall end-to-end system behavior and performance are highlighted.
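The PID-plus-BLER-target idea can be sketched as an outer-loop link adaptation routine. The Python sketch below is a minimal illustration: the MCS table size, controller gains and target are placeholders, not the values used in the dissertation:

```python
class PidMcsSelector:
    """Outer-loop link adaptation: drive the measured BLER towards a
    target by nudging the MCS index with a PID controller (sketch)."""

    def __init__(self, n_mcs=28, bler_target=0.10, kp=8.0, ki=2.0, kd=1.0):
        self.n_mcs = n_mcs
        self.target = bler_target
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0
        self.mcs = n_mcs // 2  # start mid-table

    def update(self, measured_bler):
        # Positive error -> BLER below target -> room for a higher MCS.
        error = self.target - measured_bler
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        step = self.kp * error + self.ki * self.integral + self.kd * derivative
        self.mcs = int(max(0, min(self.n_mcs - 1, round(self.mcs + step))))
        return self.mcs

# Example: BLER measured from ACK/NACK statistics drives the selection.
sel = PidMcsSelector()
for bler in [0.30, 0.20, 0.12, 0.08, 0.05]:
    print(sel.update(bler))
```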
Abstract:
The pervasive availability of connected devices in every industrial and societal sector is pushing for an evolution of the well-established cloud computing model. The emerging paradigm of the cloud continuum embraces this decentralization trend and envisions virtualized computing resources physically located between traditional datacenters and data sources. By executing totally or partially closer to the network edge, applications can react more quickly to events, enabling advanced forms of automation and intelligence. However, these applications also induce new data-intensive workloads with low-latency constraints that require the adoption of specialized resources, such as high-performance communication options (e.g., RDMA, DPDK, XDP). Unfortunately, cloud providers still struggle to integrate these options into their infrastructures, which risks undermining the principle of generality that underlies the cloud computing economy of scale by forcing developers to tailor their code to low-level APIs, non-standard programming models, and static execution environments. This thesis proposes a novel system architecture to empower cloud platforms across the whole cloud continuum with Network Acceleration as a Service (NAaaS). To provide commodity yet efficient access to acceleration, this architecture defines a layer of agnostic high-performance I/O APIs, exposed to applications and clearly separated from the heterogeneous protocols, interfaces, and hardware devices that implement it. A novel system component embodies this decoupling by offering a set of agnostic OS features to applications: memory management for zero-copy transfers, asynchronous I/O processing, and efficient packet scheduling. The thesis also explores the design space of possible implementations of this architecture by proposing two reference middleware systems and by adopting them to support interactive use cases in the cloud continuum: a serverless platform and an Industry 4.0 scenario. A detailed discussion and a thorough performance evaluation demonstrate that the proposed architecture enables the easy-to-use, flexible integration of modern network acceleration into next-generation cloud platforms.
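The decoupling can be pictured as a backend-agnostic I/O interface that applications code against, while RDMA/DPDK/XDP specifics live in interchangeable backends. The Python sketch below is hypothetical; all names are invented for illustration and do not reflect the thesis' actual APIs:

```python
from abc import ABC, abstractmethod

class AcceleratedChannel(ABC):
    """Backend-agnostic high-performance I/O channel (hypothetical API).
    Applications see only this interface; RDMA, DPDK or XDP details
    live in the backend implementations."""

    @abstractmethod
    def register_memory(self, size: int) -> memoryview:
        """Pin a buffer once so the backend can do zero-copy transfers."""

    @abstractmethod
    def send_async(self, buf: memoryview) -> None:
        """Enqueue a transfer; completion is polled, never blocked on."""

    @abstractmethod
    def poll_completions(self, max_events: int = 32) -> int:
        """Reap completed transfers (asynchronous I/O processing)."""

class LoopbackChannel(AcceleratedChannel):
    """Trivial in-process backend, standing in for an accelerated one."""
    def __init__(self):
        self._pending = 0
    def register_memory(self, size):
        return memoryview(bytearray(size))
    def send_async(self, buf):
        self._pending += 1
    def poll_completions(self, max_events=32):
        done = min(self._pending, max_events)
        self._pending -= done
        return done

chan: AcceleratedChannel = LoopbackChannel()
buf = chan.register_memory(4096)
chan.send_async(buf)
print(chan.poll_completions())  # -> 1
```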
Abstract:
In recent years, developed countries have turned their attention to clean and renewable energy, such as wind and wave energy, which can be converted into electrical power. Companies and academic groups worldwide are investigating several wave energy concepts today. Accordingly, this thesis studies the numerical simulation of the dynamic response of wave energy converters (WECs) subjected to ocean waves. The study considers a two-body point absorber (2BPA) and an oscillating surge wave energy converter (OSWEC). The first aim is to mesh the bodies of these WECs in order to calculate their hydrostatic properties, using the axiMesh.m and Mesh.m functions provided by NEMOH. The second aim is to calculate the first-order hydrodynamic coefficients of the WECs using the NEMOH BEM solver and to study the ability of this method to eliminate irregular frequencies. The third aim is to generate a *.h5 file for the 2BPA and OSWEC devices containing all the hydrodynamic data; BEMIO, a pre- and post-processing tool developed as part of WEC-Sim, is used to create the *.h5 files. The final goal is to run the Wave Energy Converter Simulator (WEC-Sim) to simulate the dynamic responses of the WECs studied in this thesis and to estimate their power performance at different sites in the Mediterranean Sea and the North Sea. The hydrodynamic data obtained by the NEMOH BEM solver for the 2BPA and OSWEC devices are imported into WEC-Sim using BEMIO. Lastly, the power matrices and annual energy production (AEP) of the WECs are estimated for sites in the Sea of Sicily, Sea of Sardinia, Adriatic Sea, Tyrrhenian Sea, and North Sea. For this purpose, NEMOH and WEC-Sim remain among the most practical tools for numerically estimating the power generation of WECs.
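Since WEC-Sim consumes the BEMIO-generated *.h5 database, a quick way to sanity-check one is to walk its tree. The Python sketch below does so with h5py without assuming exact group names, which vary between BEMIO versions; the file name is a placeholder:

```python
import h5py

# Inspect the hydrodynamic database that BEMIO wrote for WEC-Sim.
# Rather than hard-coding group paths, simply walk the tree and list
# every dataset (file name below is a placeholder).
with h5py.File("2bpa_hydro.h5", "r") as f:
    def describe(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")
    f.visititems(describe)
    # Typical entries include the wave frequencies and, per body, the
    # added-mass and radiation-damping coefficients produced by NEMOH.
```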
Abstract:
The aim of this study was to evaluate fat substitutes in the processing of sausages prepared with surimi made from piramutaba filleting waste. The formulation ingredients were mixed with the fat substitutes according to a 2^(4-1) fractional factorial design, in which the independent variables manioc starch (Ms), hydrogenated soy fat (F), texturized soybean protein (Tsp) and carrageenan (Cg) were evaluated against the responses pH, texture (Tx), raw batter stability (RBS) and water holding capacity (WHC) of the sausage. The fat substitutes were evaluated in 11 formulations, and the results showed that the greatest effects on the responses came from Ms, F and Cg; Tsp was therefore eliminated from the formulation. To find the best formulation for processing piramutaba sausage, a full 2^3 factorial design was carried out to evaluate the concentrations of the fat substitutes over a wider range. The optimum concentrations of fat substitutes in the sausage formulation were carrageenan (0.51%), manioc starch (1.45%) and fat (1.2%).
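The 2^(4-1) design can be reproduced programmatically as a half fraction of the full 2^4 factorial. The Python sketch below uses the common generator Cg = Ms·F·Tsp; this choice of defining relation is an assumption, as the paper does not state it:

```python
from itertools import product

# Half-fraction 2^(4-1) design for the four fat-substitute factors.
# Generator: the fourth factor is aliased with the three-way interaction
# (Cg = Ms*F*Tsp); whether the authors used this generator is an assumption.
factors = ("Ms", "F", "Tsp", "Cg")
runs = []
for ms, f, tsp in product((-1, 1), repeat=3):
    cg = ms * f * tsp  # defining relation
    runs.append((ms, f, tsp, cg))

print(" run  " + "  ".join(f"{name:>4}" for name in factors))
for i, run in enumerate(runs, 1):
    print(f"{i:>4}  " + "  ".join(f"{level:>4d}" for level in run))
# 8 runs instead of 16: all main effects remain estimable, at the cost
# of aliasing each of them with a three-factor interaction.
```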
Abstract:
Balsamic vinegar (BV) is a typical and valuable Italian product, appreciated worldwide for its characteristic flavors and potential health benefits. Several studies have been conducted to assess the physicochemical and microbial composition of BV, as well as its beneficial properties. Due to widely disseminated claims of antioxidant, antihypertensive and antiglycemic properties, BV is a known target for fraud and adulteration. Product authentication, certifying its origin (region or country) and thus the processing conditions, is therefore a growing concern. Striving for fraud reduction as well as quality and safety assurance, reliable analytical strategies to rapidly evaluate BV quality are of great interest, also from an economic point of view. This work employs silica plate laser desorption/ionization mass spectrometry (SP-LDI-MS) for fast chemical profiling of commercial BV samples with protected geographical indication (PGI) and for identification of samples adulterated with low-priced vinegars, namely apple, alcohol and red/white wine vinegars.
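Although the abstract does not name a chemometric pipeline, authentication tasks like this are commonly cast as multivariate classification of the spectral fingerprints. A hedged Python sketch with synthetic stand-in data follows; PCA followed by LDA is one conventional choice, not necessarily the one used in this work:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Synthetic stand-in for SP-LDI-MS fingerprints: rows = samples,
# columns = intensities at m/z bins (values fabricated for shape only).
authentic = rng.normal(0.0, 1.0, size=(20, 500))
adulterated = rng.normal(0.4, 1.0, size=(20, 500))
X = np.vstack([authentic, adulterated])
y = np.array([0] * 20 + [1] * 20)  # 0 = PGI, 1 = adulterated

# Compress the spectra, then separate the classes in the reduced space.
scores = PCA(n_components=5).fit_transform(X)
clf = LinearDiscriminantAnalysis().fit(scores, y)
print(f"training accuracy: {clf.score(scores, y):.2f}")
```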
Abstract:
To investigate central auditory processing in children with unilateral stroke and to verify whether the hemisphere affected by the lesion influenced auditory competence. Twenty-three children (13 male) between 7 and 16 years old were evaluated with speech-in-noise tests (auditory closure), the dichotic digit test and staggered spondaic word test (selective attention), and the pitch pattern and duration pattern sequence tests (temporal processing); their results were compared with those of control children. Auditory competence was established according to performance in auditory analysis ability. Performance was similar between groups in auditory closure ability, with pronounced deficits in selective attention and temporal processing abilities. Most children with stroke showed auditory ability impaired to a moderate degree. Children with stroke showed deficits in auditory processing, and the degree of impairment was not related to the hemisphere affected by the lesion.
Abstract:
The aim of this research was to analyze temporal auditory processing and phonological awareness in school-age children with benign childhood epilepsy with centrotemporal spikes (BECTS). The patient group (GI) consisted of 13 children diagnosed with BECTS; the control group (GII) consisted of 17 healthy children. After neurological and peripheral audiological assessment, the children underwent behavioral auditory evaluation and phonological awareness assessment. The procedures applied were the Gaps-in-Noise (GIN) test, the Duration Pattern test, and the Phonological Awareness test (PCF). Results were compared between the groups, and a correlation analysis was performed between the temporal tasks and phonological awareness performance. GII performed significantly better than the children with BECTS (GI) in both the GIN and Duration Pattern tests (P < 0.001). GI performed significantly worse in all four categories of phonological awareness assessed: syllabic (P = 0.001), phonemic (P = 0.006), rhyme (P = 0.015) and alliteration (P = 0.010). Statistical analysis showed a significant positive correlation between the phonological awareness assessment and the Duration Pattern test (P < 0.001). From these results, it was concluded that children with BECTS may have difficulties in temporal resolution, temporal ordering, and phonological awareness skills, and a correlation was observed between auditory temporal processing and phonological awareness in the studied sample.
Abstract:
The formation of biofilms by Enterococcus faecalis and Enterococcus faecium isolated from ricotta processing on stainless steel coupons was evaluated, and the effect of cleaning and sanitization procedures in controlling these biofilms was determined. Biofilm formation was observed while varying the incubation temperature (7, 25 and 39°C) and time (0, 1, 2, 4, 6 and 8 days). At 7°C, the counts of E. faecalis and E. faecium remained below 2 log10 CFU/cm². At 25 and 39°C, after 1 day, the counts of E. faecalis and E. faecium were 5.75 and 6.07 log10 CFU/cm², respectively, which is characteristic of biofilm formation. The tested sanitation procedures, (a) acid-anionic tensioactive cleaning, (b) anionic tensioactive cleaning + sanitizer and (c) acid-anionic tensioactive cleaning + sanitizer, were all effective in removing the biofilms, reducing the counts to below 0.4 log10 CFU/cm². The sanitizer biguanide was the least effective, and peracetic acid was the most effective. These studies reveal the ability of enterococci to form biofilms and the importance of the cleaning step and the type of sanitizer in sanitation processes for the effective removal of biofilms.
Abstract:
Trees from tropical montane cloud forest (TMCF) display very dynamic patterns of water use. They are capable of transporting water downwards to the soil during leaf-wetting events, likely a consequence of foliar water uptake (FWU), as well as of high rates of night-time transpiration (E_night) during drier nights. These two processes might represent important water losses and gains for the plant, but little is known about the environmental factors controlling these fluxes. We evaluated how contrasting atmospheric and soil water conditions control the diurnal, nocturnal and seasonal dynamics of sap flow in Drimys brasiliensis (Miers), a common Neotropical cloud forest species. We monitored the seasonal variation of soil water content, micrometeorological conditions and sap flow of D. brasiliensis trees in the field during wet and dry seasons, and conducted a greenhouse experiment exposing D. brasiliensis saplings under contrasting soil water conditions to deuterium-labelled fog water. We found that at night D. brasiliensis shows heightened stomatal sensitivity to soil drought and vapour pressure deficit, which reduces night-time water loss. Leaf-wetting events had a strong suppressive effect on tree transpiration (E). Foliar water uptake increased in magnitude with drier soil and longer leaf-wetting events. The difference between diurnal and nocturnal stomatal behaviour in D. brasiliensis could be attributed to optimization of carbon gain when leaves are dry and minimization of nocturnal water loss. Leaf-wetting events, on the other hand, seem important to the water balance of D. brasiliensis, especially during soil droughts, both by suppressing transpiration (E) and as a small additional water supply through FWU. Our results suggest that a decrease in leaf-wetting events in TMCF might increase water loss and decrease water gains in D. brasiliensis, which could compromise its ecophysiological performance and survival during dry periods.