12 results for WIDE-RANGE CURRENT MEASUREMENT
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Advances in wireless networking and content delivery systems are enabling new challenging provisioning scenarios where a growing number of users access multimedia services, e.g., audio/video streaming, while moving among different points of attachment to the Internet, possibly with different connectivity technologies, e.g., Wi-Fi, Bluetooth, and cellular 3G. This calls for novel middleware capable of dynamically personalizing service provisioning to the characteristics of client environments, in particular to discontinuities in wireless resource availability due to handoffs. This dissertation proposes a novel middleware solution, called MUM, that performs effective and context-aware handoff management to transparently avoid service interruptions during both horizontal and vertical handoffs. To achieve this goal, MUM exploits full visibility of the wireless connections available in client localities and their handoff implementations (handoff awareness), of service quality requirements and handoff-related quality degradations (QoS awareness), and of the network topology and resources available in current/future localities (location awareness). The design and implementation of all the main MUM components, along with extensive field trials of the realized middleware architecture, confirmed the validity of the proposed full context-aware handoff management approach. In particular, the reported experimental results demonstrate that MUM can effectively maintain service continuity for a wide range of different multimedia services by exploiting handoff prediction mechanisms, adaptive buffering and pre-fetching techniques, and proactive re-addressing/re-binding.
Abstract:
The term Ambient Intelligence (AmI) refers to a vision of the future of the information society where smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday life activities, tasks and rituals in an easy, natural way, using information and intelligence that is hidden in the network connecting these devices. This promotes the creation of pervasive environments improving the quality of life of the occupants and enhancing the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors is embedded into the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes which can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy scavenger modules). WSNs promise to revolutionize the interactions between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed.
Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce the interference on the physical phenomena sensed and allows easy and low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth that must be efficiently used to increase the degree of local 'understanding' of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before it is transmitted due to the scarce bandwidth capability of radio interfaces. In particular, in video surveillance, it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, which have the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something 'interesting' is detected). The energy cost of image processing must however be carefully minimized. Imaging plays an important role in sensing devices for ambient intelligence. Computer vision can for instance be used for recognising persons and objects and recognising behaviour such as illness and rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis. Multiple eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and could describe objects in their true 3D appearance. Real-time implementations of these approaches are a recently opened field of research.
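The bandwidth-saving policy mentioned above (transmit compressed images only when something 'interesting' is detected) can be illustrated with a minimal frame-differencing trigger. This is a hedged sketch, not the thesis's actual pipeline: the pixel and frame thresholds, and the flattened-grayscale frame representation, are illustrative assumptions.

```python
# Minimal sketch of a source-side "transmit only when interesting" policy:
# a frame is sent only when enough pixels change with respect to the
# previous frame. Threshold values are illustrative assumptions.

def changed_fraction(prev, curr, pixel_thresh=25):
    """Fraction of pixels whose absolute change exceeds pixel_thresh."""
    changed = sum(1 for p, c in zip(prev, curr) if abs(p - c) > pixel_thresh)
    return changed / len(curr)

def should_transmit(prev, curr, frame_thresh=0.05):
    """Trigger transmission when more than 5% of pixels changed (assumed policy)."""
    return changed_fraction(prev, curr) > frame_thresh

# Usage: 8-bit grayscale frames flattened to lists.
background = [10] * 100
static     = [11] * 100              # sensor noise only: stays silent
intruder   = [10] * 90 + [200] * 10  # 10% of pixels changed: transmit

assert not should_transmit(background, static)
assert should_transmit(background, intruder)
```

On a real node the same decision would of course run on the raw sensor buffer before any compression, which is exactly what makes the radio savings possible.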
In this thesis we pay attention to the realities of hardware/software technologies and the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, which are outlined below. Although the design of the sensor network and its sensor nodes is strictly application dependent, a number of constraints should almost always be considered. Among them:
• Small form factor to reduce node intrusiveness.
• Low power consumption to reduce battery size and to extend node lifetime.
• Low cost for a widespread diffusion.
These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with few kilobytes of RAM and tens of kilobytes of program memory, with which only simple data processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through the adoption of ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance. Low Power Video Sensor Nodes and Video Processing Algorithms: In comparison to scalar sensors, such as temperature, pressure, humidity, velocity, and acceleration sensors, vision sensors generate much higher bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes.
We have designed and developed wireless video sensor nodes focusing on small size and flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first one is based on a low-cost, low-power FPGA+microcontroller system-on-chip; the second one is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate in a continuous fashion with a Li-Polymer battery pack and solar panel. We present novel low-power, low-cost video sensor nodes which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally. Featuring such intelligence, these nodes are able to cope with tasks such as recognition of unattended bags in airports or persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data. Multimodal surveillance: In several setups the use of wired video cameras may not be possible. For this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community.
Pyroelectric Infra-Red (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and can possibly result in continuous operation of the node. Being low-cost, passive (thus low-power) and presenting a limited form factor, PIR sensors are well suited for WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation of standalone distributed cameras. We have used an adaptive controller based on Model Predictive Control (MPC) to improve system performance, outperforming naive power management policies.
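The energy-level-dependent trigger described above can be sketched as a toy control loop: a PIR event wakes the camera and radio only when the battery level is high enough. Everything numeric here (threshold, harvest rate, per-period costs) is an assumed illustrative model, not the thesis's actual controller or its MPC formulation.

```python
# Sketch of an energy-level-dependent PIR trigger (illustrative model):
# the PIR event wakes the camera and radio only when the battery level is
# above a minimum threshold, so the node degrades gracefully instead of
# draining itself.

def camera_enabled(battery_level, pir_event, min_level=0.3):
    """Wake camera+radio on a PIR event only if enough energy remains."""
    return pir_event and battery_level >= min_level

def step(battery, pir_event, harvest=0.02, idle_cost=0.005, active_cost=0.10):
    """One control period of an assumed energy model (units: battery fraction)."""
    active = camera_enabled(battery, pir_event)
    battery += harvest - idle_cost - (active_cost if active else 0.0)
    return min(max(battery, 0.0), 1.0), active

battery, active = step(0.5, pir_event=True)   # enough energy: camera runs
assert active
battery2, active2 = step(0.1, pir_event=True)  # low energy: stay asleep
assert not active2
```

An MPC controller would replace the fixed `min_level` with a decision optimized over a predicted horizon of harvest and activity, which is what lets it outperform this kind of naive fixed-threshold policy.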
Abstract:
The surprising discovery of the X(3872) resonance by the Belle experiment in 2003, and its subsequent confirmation by BaBar, CDF and D0, opened up a new chapter of QCD studies and puzzles. Since then, detailed experimental and theoretical studies have been performed in an attempt to determine and explain the properties of this state. At the end of 2009 the world's largest and highest-energy particle accelerator, the Large Hadron Collider (LHC), started its operations at the CERN laboratories in Geneva. One of the main experiments at the LHC is CMS (Compact Muon Solenoid), a general-purpose detector designed to address a wide range of physical phenomena, in particular the search for the Higgs boson, the only still unconfirmed element of the Standard Model (SM) of particle interactions, and new physics beyond the SM itself. Even if CMS has been designed to study high-energy events, its high-resolution central tracker and superior muon spectrometer make it an optimal tool to study the X(3872) state. This thesis presents the results of a series of studies on the X(3872) state performed with the CMS experiment. Already with the first year's worth of data, a clear peak for the X(3872) was identified, and the measurement of the cross-section ratio with respect to the Psi(2S) was performed. With the increased statistics collected during 2011 it has been possible to study, in bins of transverse momentum, the cross-section ratio between the X(3872) and the Psi(2S) and to separate their prompt and non-prompt components.
Abstract:
Flood disasters are a major cause of fatalities and economic losses, and several studies indicate that global flood risk is currently increasing. In order to reduce and mitigate the impact of river flood disasters, the current trend is to integrate existing structural defences with non-structural measures. This calls for a wider application of advanced hydraulic models for flood hazard and risk mapping, engineering design, and flood forecasting systems. Within this framework, two different hydraulic models for large-scale analysis of flood events have been developed. The two models, named CA2D and IFD-GGA, adopt an integrated approach based on the diffusive shallow water equations and a simplified finite volume scheme. The models are also designed for massive code parallelization, which is of key importance in reducing run times in large-scale and high-detail applications. The two models were first applied to several numerical cases, to test the reliability and accuracy of different model versions. Then, the most effective versions were applied to different real flood events and flood scenarios. The IFD-GGA model showed serious problems that prevented further applications. On the contrary, the CA2D model proved to be fast and robust, and able to reproduce 1D and 2D flow processes in terms of water depth and velocity. In most applications the accuracy of the model results was good and adequate for large-scale analysis. Where complex flow processes occurred, local errors were observed, due to the model approximations. However, they did not compromise the correct representation of the overall flow processes. In conclusion, the CA2D model can be a valuable tool for the simulation of a wide range of flood event types, including lowland and flash flood events.
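The core of a diffusive shallow water scheme of the kind described above can be sketched in one dimension: interface discharges follow a Manning-type law driven by the free-surface slope, and depths are updated with an explicit finite volume balance. This is a generic textbook-style sketch under assumed grid, roughness and time-step values, not the CA2D or IFD-GGA code.

```python
import math

# Minimal 1D diffusive-wave update (simplified finite volume, explicit in
# time). Grid spacing, Manning coefficient and time step are illustrative
# assumptions chosen to keep the explicit step stable.

def diffusive_wave_step(h, z, dx, dt, n_manning=0.03):
    """One explicit update of water depths h over bed elevations z."""
    flux = [0.0] * (len(h) + 1)  # interface unit discharges (m^2/s)
    for i in range(1, len(h)):
        # free-surface slope between neighbouring cells
        slope = ((h[i - 1] + z[i - 1]) - (h[i] + z[i])) / dx
        h_up = h[i - 1] if slope > 0 else h[i]  # upwind depth
        if h_up > 0 and slope != 0:
            # Manning-based unit discharge, signed by the slope direction
            flux[i] = math.copysign(
                (h_up ** (5.0 / 3.0) / n_manning) * math.sqrt(abs(slope)), slope)
    # finite volume balance, depths clipped at zero (dry cells)
    return [max(hi + dt / dx * (flux[i] - flux[i + 1]), 0.0)
            for i, hi in enumerate(h)]

# Usage: a small mound of water spreading over a flat bed.
h_new = diffusive_wave_step([0.0, 1.0, 0.0], [0.0, 0.0, 0.0], dx=10.0, dt=0.1)
assert h_new[1] < 1.0 and h_new[0] > 0 and h_new[2] > 0   # water spreads out
assert abs(sum(h_new) - 1.0) < 1e-9                        # mass is conserved
```

The diffusive approximation drops the inertial terms of the full shallow water equations, which is what makes per-cell updates this simple and, as the abstract notes, well suited to massive parallelization.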
Abstract:
This dissertation presents the theory and the activity that led to the construction of a high-voltage, high-frequency arbitrary waveform voltage generator. The generator has been specifically designed to supply power to a wide range of plasma actuators. The system has been completely designed, manufactured and tested at the Department of Electrical, Electronic and Information Engineering of the University of Bologna. The generator structure is based on the single-phase cascaded H-bridge multilevel topology and comprises 24 elementary units that are series-connected in order to form the typical staircase output voltage waveform of a multilevel converter. The total number of voltage levels that can be produced by the generator is 49. Each level is 600 V, making the output peak-to-peak voltage equal to 28.8 kV. The large number of levels provides high resolution with respect to the output voltage, thus making it possible to generate arbitrary waveforms. The maximum frequency of operation is 20 kHz. A study of the relevant literature shows that this is the first time that a cascaded multilevel converter of such dimensions has been constructed. Isolation and control challenges had to be solved for the realization of the system. The biggest problem of the current technology in power supplies for plasma actuators is load matching. Resonant converters are the most widely used power supplies and are seriously affected by this problem. The manufactured generator completely solves this issue, providing consistent voltage output independently of the connected load. This fact is very important when executing tests and when comparing results, because all measurements should be comparable and not dependent on matching issues. The use of the multilevel converter to supply power to a plasma actuator is a real technological breakthrough that has provided, and will continue to provide, very significant experimental results.
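The staircase figures quoted above follow directly from the topology: N series-connected H-bridge cells, each able to output -Vdc, 0 or +Vdc, yield 2N + 1 distinct levels and a 2N·Vdc peak-to-peak swing. A few lines suffice to check the reported numbers (24 cells, 600 V per level):

```python
# Level arithmetic for a symmetric cascaded H-bridge chain: each of the
# n_cells units contributes -vdc, 0 or +vdc, so the reachable outputs are
# k * vdc for k in [-n_cells, n_cells].

def cascaded_hbridge_levels(n_cells, vdc):
    levels = sorted({k * vdc for k in range(-n_cells, n_cells + 1)})
    return len(levels), levels[-1] - levels[0]  # (level count, peak-to-peak V)

n_levels, vpp = cascaded_hbridge_levels(n_cells=24, vdc=600)
assert n_levels == 49       # 2*24 + 1 levels, as reported
assert vpp == 28_800        # 28.8 kV peak-to-peak, as reported
```

This also shows why adding cells pays off quickly: each extra unit adds two levels, so resolution grows linearly with the (series-isolated) hardware count.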
Abstract:
The common European value added tax system favours economic character and purpose in defining which economic operators are subject to VAT. A special regime is, however, provided for public-law bodies that, in addition to their principal institutional activity, carry out an activity of an economic nature. Under Article 13 of Directive 2006/112/EC of 28 November 2006, states, regions, provinces, municipalities and other bodies governed by public law, in respect of the activities or transactions in which they engage as public authorities, are not regarded as taxable persons for VAT purposes, even where they collect dues, fees, contributions or payments in connection with those activities. The current European rules on economic activities carried out by public bodies, besides being inadequate for the present economic context, risk becoming a factor that negatively affects the effectiveness of the VAT model and the conduct of public bodies. The thesis proposes an alternative model that reverses the current approach of the VAT Directive so as to treat as taxable persons, as a rule, public bodies that carry out objectively economic activities, even when acting in their capacity as public authorities.
Abstract:
Epoxy resins are mainly produced by reacting bisphenol A with epichlorohydrin. Growing concerns about the negative health effects of bisphenol A are urging researchers to find alternatives. In this work diphenolic acid is suggested, as it derives from levulinic acid, obtained from renewable resources. Nevertheless, it is also synthesized from phenol, from fossil resources, which in the present work has been substituted by plant-based phenols. Two interesting derivatives were identified: diphenolic acid from catechol and from resorcinol. Epichlorohydrin, on the other hand, is highly carcinogenic and volatile, leading to a tremendous risk of exposure. Thus, two alternative approaches have been investigated and compared with epichlorohydrin. The resulting resins have been characterized to find an appropriate application, as epoxies are commonly used for a wide range of products, ranging from composite materials for boats to films for food cans. Self-curing capacity was observed for the resin deriving from diphenolic acid from catechol. The glycidyl ether of the diphenolic acid from resorcinol, a fully renewable compound, was cured in isothermal and non-isothermal tests tracked by DSC. Two aliphatic amines were used, namely 1,4-butanediamine and 1,6-hexamethylenediamine, in order to determine the effect of chain length on the curing of an epoxy-amine system and to determine the kinetic parameters. The latter are crucial for planning any industrial application. Both diamines demonstrated superior properties compared to traditional bisphenol A-amine systems.
Abstract:
Research on multilingual communication and knowledge management in companies has so far focused on multinationals or on SMEs undergoing globalization. The present research instead concerns SMEs in historically multilingual areas, in order to study whether the habit of using different languages on the local market may represent a competitive advantage. The thesis illustrates a multi-method study conducted in 2012-2013 in Alto Adige/Südtirol (South Tyrol). The dataset consists of 443 valid responses to an online questionnaire and 23 interviews with local managers and entrepreneurs. The questions aimed to understand how South Tyrolean companies face the challenge of multilingualism, with particular attention to the following areas: multilingual communication, documentation, translation and terminology. The results outline a general picture of the multilingualism strategies applied in South Tyrol, highlighting their strengths and weaknesses. Despite the presence of multilingual staff, the potential competitive advantage deriving from it is not fully exploited: companies address the markets where their own language is spoken (Italian-run companies target the national market, German-speaking ones Austria and Germany). Internal communication is multilingual only where it is indispensable. "Do-it-yourself" translations offer the illusion of managing different languages, but the quality level remains limited. Texts are often translated by internal staff without specific skills. Cooperation with external translators likewise reveals a failure to obtain the maximum return on investment. The thesis proposes practical recommendations aimed at optimizing current processes and maximizing the yield of the available resources in order to meet the challenge of multilingual management and communication.
The recommendations do not require significant economic investments and are easily transferable to other multilingual/border regions, as well as to other SMEs employing multilingual staff. They may therefore prove useful to a large number of companies.
Abstract:
A possible future scenario for the water injection (WI) application has been explored as an advanced strategy for modern GDI engines. The aim is to verify whether the PWI (Port Water Injection) and DWI (Direct Water Injection) architectures can replace current fuel enrichment strategies to limit turbine inlet temperatures (TiT) and the engine's knock tendency. In this way, it might be possible to extend the stoichiometric mixture condition over the entire engine map, meeting possible future restrictions on the use of AES (Auxiliary Emission Strategies) and future emission limitations. The research was first addressed through a comprehensive assessment of the state of the art of the technology and the main effects of the chemical-physical water properties. Then, detailed chemical kinetics simulations were performed in order to compute the effects of WI on combustion development and auto-ignition. The latter represents an important methodological step for accurate numerical combustion simulations. Water injection was then analysed in detail for a PWI system, through an experimental campaign for macroscopic and microscopic injector characterization inside a test chamber. The collected data were used to perform a numerical validation of the spray models, obtaining excellent matching in terms of particle size and droplet velocity distributions. Finally, a wide range of three-dimensional CFD simulations of a virtual high-bmep engine were realized and compared, also exploring different engine designs and water/fuel injection strategies under non-reacting and reacting flow conditions. From the latter it was found that, thanks to the introduction of water, for both PWI and DWI systems it could be possible to obtain an increase in the target performance and an optimization of the bsfc (Brake Specific Fuel Consumption), while lowering the engine knock risk at the same time; the TiT target, however, was barely achieved, and only for one DWI configuration.
Abstract:
Innovation in several industrial sectors has recently been characterized by the need to reduce the operative temperature for either economic or environmental reasons. Promising technological solutions require the acquisition of fundamental knowledge to produce safe and robust systems. In this sense, reactive systems often represent the bottleneck. For these reasons, this work focused on the integration of chemical (i.e., detailed kinetic mechanism) and physical (i.e., computational fluid dynamics) models. A theory-based kinetic mechanism mimicking the behaviour of oxygenated fuels and their intermediates under oxidative conditions over a wide range of temperatures and pressures was developed. Its validity was tested against experimental data collected in this work by using the heat flux burner, as well as measurements retrieved from the current literature. Besides, estimations deriving from existing models considered as benchmarks in the combustion field were compared with the newly generated mechanism. The latter was found to be the most accurate for the investigated conditions and fuels. The most influential species and reactions in the combustion of butyl acetate were identified. The corresponding thermodynamic parameters and rate coefficients were quantified through ab initio calculations. A reduced detailed kinetic mechanism was produced and implemented in an open-source computational fluid dynamics model, first to characterize pool fires caused by the accidental release of aviation fuel and liquefied natural gas. Eventually, partial oxidation processes involving light alkenes were optimized following the quick, fair, and smooth (QFS) paradigm. The proposed procedure represents a comprehensive and multidisciplinary approach for the construction and validation of accurate models, allowing for the characterization of developing industrial sectors and techniques.
Abstract:
Network monitoring is of paramount importance for effective network management: it allows operators to constantly observe the network's behavior to ensure it is working as intended, and it can trigger both automated and manual remediation procedures in case of failures and anomalies. The concept of SDN decouples the control logic from the legacy network infrastructure to perform centralized control on multiple switches in the network; in this context, the responsibility of switches is only to forward packets according to the flow control instructions provided by the controller. However, as current SDN switches only expose simple per-port and per-flow counters, the controller has to do almost all the processing to determine the network state, which causes significant communication overhead and excessive latency for monitoring purposes. The absence of programmability in the data plane of SDN prompted the advent of programmable switches, which allow developers to customize the data-plane pipeline and implement novel programs operating directly in the switches. This means that certain monitoring tasks can be offloaded to programmable data planes, to perform fine-grained monitoring even at very high packet processing speeds. Given the central importance of network monitoring exploiting programmable data planes, the goal of this thesis is to enable a wide range of monitoring tasks in programmable switches, with a specific focus on the ones equipped with programmable ASICs. Indeed, most network monitoring solutions available in the literature do not take the computational and memory constraints of programmable switches into due account, preventing, de facto, their successful implementation in commodity switches. This thesis shows that, within those constraints, network monitoring tasks can indeed be executed in programmable switches.
Our evaluations show that the contributions in this thesis could be used by network administrators as well as network security engineers to better understand the network status based on different monitoring metrics, and thus prevent network infrastructure and service outages.
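A concrete example of the memory-constrained monitoring tasks discussed above is compact per-flow counting: probabilistic structures such as a count-min sketch fit the few-kilobyte register budget of a programmable ASIC while bounding over-estimation. The sketch below is a generic Python illustration of that data structure (not code from the thesis); the depth and width are illustrative assumptions, and a real data plane would use simple hardware hashes rather than SHA-256.

```python
import hashlib

# Count-min sketch: d rows of w counters; each key is hashed to one
# counter per row. Estimates never under-count and over-count only on
# hash collisions, which is the trade-off that makes the structure fit
# a switch's tiny on-chip memory.

class CountMinSketch:
    def __init__(self, depth=4, width=64):
        self.depth, self.width = depth, width
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, row, key):
        digest = hashlib.sha256(f"{row}:{key}".encode()).digest()
        return int.from_bytes(digest[:4], "big") % self.width

    def add(self, key, count=1):
        for row in range(self.depth):
            self.table[row][self._index(row, key)] += count

    def estimate(self, key):
        # Minimum over rows: never under-estimates, may over-estimate.
        return min(self.table[row][self._index(row, key)]
                   for row in range(self.depth))

# Usage: counting packets per flow (flow keys are illustrative).
cms = CountMinSketch()
for _ in range(100):
    cms.add("10.0.0.1->10.0.0.2")
cms.add("10.0.0.3->10.0.0.4", 7)
assert cms.estimate("10.0.0.1->10.0.0.2") >= 100
assert cms.estimate("10.0.0.3->10.0.0.4") >= 7
```

The same update pattern (hash, index a register array, add) maps naturally onto a programmable switch pipeline, which is why sketches are a staple of in-switch monitoring.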
Abstract:
The recent widespread use of social media platforms and web services has led to a vast amount of behavioral data that can be used to model socio-technical systems. A significant part of this data can be represented as graphs or networks, which have become the prevalent mathematical framework for studying the structure and dynamics of complex interacting systems. However, analyzing and understanding these data presents new challenges due to their increasing complexity and diversity. For instance, the characterization of real-world networks includes the need to account for their temporal dimension, together with incorporating higher-order interactions beyond the traditional pairwise formalism. The ongoing growth of AI has led to the integration of traditional graph mining techniques with representation learning and low-dimensional embeddings of networks to address current challenges. These methods capture the underlying similarities and geometry of graph-shaped data, generating latent representations that enable the resolution of various tasks, such as link prediction, node classification, and graph clustering. As these techniques gain popularity, there is also growing concern about their responsible use. In particular, there has been an increased emphasis on addressing the limitations of interpretability in graph representation learning. This thesis contributes to the advancement of knowledge in the field of graph representation learning and has potential applications in a wide range of complex systems domains. We initially focus on forecasting problems related to face-to-face contact networks with time-varying graph embeddings. Then, we study hyperedge prediction and reconstruction with simplicial complex embeddings. Finally, we analyze the problem of interpreting latent dimensions in node embeddings for graphs.
The proposed models are extensively evaluated in multiple experimental settings, and the results demonstrate their effectiveness and reliability, achieving state-of-the-art performance and providing valuable insights into the properties of the learned representations.
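The link-prediction task mentioned in the abstract above can be illustrated in miniature: assign each node a vector and rank candidate edges by the dot product of the incident nodes' vectors. As a deliberately simple stand-in for a learned embedding, this sketch uses each node's adjacency row as its vector (so the score counts common neighbours); real models learn low-dimensional latent vectors, but the scoring step is the same. The toy graph is an illustrative assumption.

```python
# Embedding-based link prediction in miniature: score a candidate edge
# (u, v) by the dot product of the two nodes' vectors.

def embed(adj):
    """Trivial 'embedding': node u -> its adjacency row over sorted nodes."""
    nodes = sorted(adj)
    return {u: [1.0 if v in nbrs else 0.0 for v in nodes]
            for u, nbrs in adj.items()}

def link_score(emb, u, v):
    """Dot product of the two node vectors; here = common-neighbour count."""
    return sum(a * b for a, b in zip(emb[u], emb[v]))

# Toy graph: 'a' and 'd' are not connected but share two neighbours,
# so an embedding-based ranker flags (a, d) as a likely missing link.
adj = {"a": {"b", "c"}, "b": {"a", "c", "d"},
       "c": {"a", "b", "d"}, "d": {"b", "c"}}
emb = embed(adj)
assert link_score(emb, "a", "d") == 2.0  # two shared neighbours: b and c
assert link_score(emb, "a", "b") == 1.0  # one shared neighbour: c
```

Swapping the adjacency rows for learned low-dimensional vectors (from a node2vec-style or spectral method) changes only `embed`, not the ranking machinery, which is why the latent-representation framing covers so many downstream tasks.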