15 results for Distributed Array
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Beamforming entails joint processing of multiple signals received or transmitted by an array of antennas. This thesis addresses the implementation of beamforming in two distinct systems, namely a distributed network of independent sensors and a broad-band multi-beam satellite network. With the rising popularity of wireless sensors, scientists are taking advantage of the flexibility of these devices, which come with very low implementation costs. Simplicity, however, is intertwined with scarce power resources, which must be carefully rationed to ensure successful measurement campaigns throughout the whole duration of the application. In this scenario, distributed beamforming is a cooperative communication technique that allows the nodes in the network to emulate a virtual antenna array, seeking power gains on the order of the size of the network itself when required to deliver a common message signal to the receiver. To achieve a desired beamforming configuration, however, all nodes in the network must agree upon the same phase reference, which is challenging in a distributed set-up where all devices are independent. The first part of this thesis presents new algorithms for phase alignment, which prove to be more energy efficient than existing solutions. With the ever-growing demand for broad-band connectivity, satellite systems have great potential to guarantee service where terrestrial systems cannot penetrate. In order to satisfy the constantly increasing demand for throughput, satellites are equipped with multi-fed reflector antennas to resolve spatially separated signals. However, increasing the number of feeds on the payload burdens the link between the satellite and the gateway with an extensive amount of signaling, and possibly calls for much more expensive multiple-gateway infrastructures.
This thesis focuses on an on-board non-adaptive signal processing scheme, denoted Coarse Beamforming, whose objective is to reduce the communication load on the link between the ground station and the space segment.
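The phase-alignment problem mentioned above is often attacked with iterative feedback schemes. As an illustrative sketch only (not the specific algorithms proposed in the thesis), the following implements the classic one-bit-feedback random search: at each step every node applies a small random phase perturbation, and the receiver broadcasts a single bit indicating whether the received signal strength improved, in which case the perturbations are kept; otherwise the nodes revert.

```python
import cmath
import random

def received_amplitude(phases):
    """Amplitude of the superposition of unit-power node signals."""
    return abs(sum(cmath.exp(1j * p) for p in phases))

def one_bit_feedback(n_nodes=10, iters=2000, delta=0.2, seed=1):
    """Distributed phase alignment driven by 1-bit receiver feedback."""
    rng = random.Random(seed)
    # Unknown channel rotations make the initial phases effectively random.
    phases = [rng.uniform(0, 2 * cmath.pi) for _ in range(n_nodes)]
    best = received_amplitude(phases)
    history = [best]
    for _ in range(iters):
        # Each node perturbs its phase independently by a small amount.
        trial = [p + rng.uniform(-delta, delta) for p in phases]
        amp = received_amplitude(trial)
        if amp > best:  # feedback bit says "improved": keep the perturbation
            phases, best = trial, amp
        history.append(best)  # otherwise all nodes revert
    return best, history
```

Because perturbations are kept only when the received amplitude improves, the achieved beamforming gain is non-decreasing and, in simulation, approaches the fully coherent limit of `n_nodes`.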
Abstract:
The organization of the nervous and immune systems is characterized by obvious differences and striking parallels. Both systems need to relay information across very short and very long distances. The nervous system communicates over both long and short ranges primarily by means of more or less hardwired intercellular connections, consisting of axons, dendrites, and synapses. Long-range communication in the immune system occurs mainly via the ordered and guided migration of immune cells and systemically acting soluble factors such as antibodies, cytokines, and chemokines. Its short-range communication either is mediated by locally acting soluble factors or transpires during direct cell–cell contact across specialized areas called “immunological synapses” (Kerschensteiner et al., 2003). These parallels in intercellular communication are complemented by a complex array of factors that induce cell growth and differentiation: these factors in the immune system are called cytokines; in the nervous system, they are called neurotrophic factors. Neither the cytokines nor the neurotrophic factors appear to be completely exclusive to either system (Neumann et al., 2002). In particular, mounting evidence indicates that some of the most potent members of the neurotrophin family, for example nerve growth factor (NGF) and brain-derived neurotrophic factor (BDNF), act on or are produced by immune cells (Kerschensteiner et al., 1999). There are, however, other neurotrophic factors, for example insulin-like growth factor-1 (IGF-1), that can behave similarly (Kermer et al., 2000). These factors may allow the two systems to “cross-talk” and eventually may provide a molecular explanation for the reports that inflammation after central nervous system (CNS) injury has beneficial effects (Moalem et al., 1999).
In order to shed more light on this cross-talk, transcription factors modulating mu-opioid receptor (MOPr) expression in neurons and immune cells are investigated here. More precisely, I focused my attention on IGF-I modulation of MOPr in neurons and on T-cell receptor induction of MOPr expression in T-lymphocytes. Three different opioid receptors [mu (MOPr), delta (DOPr), and kappa (KOPr)] belonging to the G-protein coupled receptor super-family have been cloned. They are activated by structurally related exogenous opioids or endogenous opioid peptides, and contribute to the regulation of several functions including pain transmission, respiration, cardiac and gastrointestinal functions, and immune response (Zollner and Stein 2007). MOPr is expressed mainly in the central nervous system, where it regulates morphine-induced analgesia, tolerance and dependence (Mayer and Hollt 2006). Recently, induction of MOPr expression in different immune cells by cytokines has been reported (Kraus et al., 2001; Kraus et al., 2003). The human mu-opioid receptor gene (OPRM1) promoter is of the TATA-less type and has clusters of potential binding sites for different transcription factors (Law et al. 2004). Several studies, primarily focused on the upstream region of the OPRM1 promoter, have investigated transcriptional regulation of MOPr expression. Presently, however, it is still not completely clear how positive and negative transcription regulators cooperatively coordinate cell- or tissue-specific transcription of the OPRM1 gene, and how specific growth factors influence its expression. IGF-I and its receptors are widely distributed throughout the nervous system during development, and their involvement in neurogenesis has been extensively investigated (Arsenijevic et al. 1998; van Golen and Feldman 2000). As previously mentioned, such neurotrophic factors can also be produced by and/or act on immune cells (Kerschensteiner et al., 2003).
Most of the physiologic effects of IGF-I are mediated by the type I IGF surface receptor which, after ligand binding-induced autophosphorylation, associates with specific adaptor proteins and activates different second messengers (Bondy and Cheng 2004). These include: phosphatidylinositol 3-kinase, mitogen-activated protein kinase (Vincent and Feldman 2002; Di Toro et al. 2005) and members of the Janus kinase (JAK)/STAT3 signalling pathway (Zong et al. 2000; Yadav et al. 2005). REST plays a complex role in neuronal cells by differentially repressing target gene expression (Lunyak et al. 2004; Coulson 2005; Ballas and Mandel 2005). REST expression decreases during neurogenesis, but has been detected in the adult rat brain (Palm et al. 1998) and is up-regulated in response to global ischemia (Calderone et al. 2003) and induction of epilepsy (Spencer et al. 2006). Thus, the REST concentration seems to influence its function and the expression of neuronal genes, and may have different effects in embryonic and differentiated neurons (Su et al. 2004; Sun et al. 2005). In a previous study, REST was elevated during the early stages of neural induction by IGF-I in neuroblastoma cells. REST may contribute to the down-regulation of genes not yet required by the differentiation program, but its expression decreases after five days of treatment to allow for the acquisition of neural phenotypes. Di Toro et al. proposed a model in which the extent of neurite outgrowth in differentiating neuroblastoma cells was affected by the disappearance of REST (Di Toro et al. 2005). The human mu-opioid receptor gene (OPRM1) promoter contains a DNA sequence binding the repressor element 1 silencing transcription factor (REST) that is implicated in transcriptional repression. 
Therefore, in the first part of this thesis, I investigated whether insulin-like growth factor I (IGF-I), which affects various aspects of neuronal induction and maturation, regulates OPRM1 transcription in neuronal cells in the context of the potential influence of REST. A series of OPRM1-luciferase promoter/reporter constructs were transfected into two neuronal cell models, neuroblastoma-derived SH-SY5Y cells and PC12 cells. In the former, endogenous levels of human mu-opioid receptor (hMOPr) mRNA were evaluated by real-time PCR. IGF-I up-regulated OPRM1 transcription in PC12 cells lacking REST, in SH-SY5Y cells transfected with constructs deficient in the REST DNA-binding element, and when REST was down-regulated in retinoic-acid-differentiated cells. IGF-I activates the signal transducer and activator of transcription-3 (STAT3) signaling pathway, and this transcription factor, binding to the STAT1/3 DNA element located in the promoter, increases OPRM1 transcription. The T-cell receptor (TCR) recognizes peptide antigens displayed in the context of the major histocompatibility complex (MHC) and gives rise to potent, branched intracellular signalling that converts naïve T-cells into mature effectors, thus significantly contributing to the genesis of a specific immune response. In the second part of my work I exposed wild-type Jurkat CD4+ T-cells to a mixture of CD3 and CD28 antibodies in order to fully activate the TCR and study whether its signalling influences OPRM1 expression. TCR engagement determined a significant induction of OPRM1 expression through the activation of the transcription factors AP-1, NF-kB and NFAT. Finally, I investigated MOPr turnover once the receptor is expressed on the T-cell outer membrane: DAMGO induced MOPr internalisation and recycling, whereas morphine did not.
Overall, from the data collected in this thesis we can conclude that a reduction in REST is a critical switch enabling IGF-I to up-regulate human MOPr, a finding that helps clarify how human MOPr expression is regulated in neuronal cells, and that TCR engagement up-regulates OPRM1 transcription in T-cells. My results, showing that neurotrophic factors and TCR engagement, as has been reported for cytokines, up-regulate OPRM1 in both neurons and immune cells, suggest an important role for MOPr as a molecular bridge between neurons and immune cells; MOPr could therefore play a key role in the cross-talk between the immune and nervous systems, and in particular in the balance between pro-inflammatory and pro-nociceptive stimuli on one side and analgesic and neuroprotective effects on the other.
Abstract:
The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of the present thesis is a distributed measurement system to be installed in Medium Voltage power networks, together with the methods developed to analyze the data acquired by the measurement system itself and to monitor power quality. Chapter 2 illustrates the increasing interest towards power quality in electrical systems, reporting the international research activity on the problem and the relevant standards and guidelines issued. The quality of the voltage provided by utilities and influenced by customers at the various points of a network emerged as a concern only in recent years, in particular as a consequence of energy market liberalization. Traditionally, the quality of the delivered energy has been associated mostly with its continuity; hence reliability was the main characteristic to be ensured for power systems. Nowadays, the number and duration of interruptions are the “quality indicators” commonly perceived by most customers; for this reason, a short section is also dedicated to network reliability and its regulation. In this context it should be noted that, although the measurement system developed during the research activity belongs to the field of power quality evaluation systems, the information registered in real time by its remote stations can be used to improve system reliability too. Given the vast scenario of power-quality-degrading phenomena that can occur in distribution networks, the study has been focused on electromagnetic transients affecting line voltages.
The outcome of this study has been the design and realization of a distributed measurement system that continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component, and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system and must be detected before protection equipment intervenes. An important conclusion is that the method can improve the reliability of the monitored network, since knowing the location of a fault allows the energy manager to reduce as much as possible both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study is structured as follows: chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. Then the state of the art concerning methods to detect and locate faults in distribution networks is presented. Finally, attention is paid to the particular technique adopted for this purpose in the thesis, and to the methods developed on the basis of that approach. Chapter 4 reports the configuration of the distribution networks on which the fault location method has been applied by means of simulations, as well as the results obtained case by case. In this way the performance of the location procedure is tested, first under ideal and then under realistic operating conditions.
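Locating a disturbance source from timestamps registered at distributed stations is commonly done with a two-ended traveling-wave formulation. As a hedged illustration (a standard textbook relation, not necessarily the exact method developed in the thesis): if the transient wavefront reaches two stations, a known line length L apart, at times t1 and t2, and propagates at speed v, the fault distance from station 1 is d1 = (L + v(t1 - t2)) / 2.

```python
def locate_fault(t1, t2, line_length, wave_speed):
    """Distance of the fault from station 1 (metres), from the arrival
    times (seconds) of the transient wavefront at the two line ends."""
    d1 = (line_length + wave_speed * (t1 - t2)) / 2.0
    if not 0.0 <= d1 <= line_length:
        raise ValueError("arrival times inconsistent with this line")
    return d1

# Example: 10 km line, propagation speed ~2e8 m/s, fault 3 km from station 1.
t1 = 3_000 / 2e8   # wavefront reaches station 1 after 15 microseconds
t2 = 7_000 / 2e8   # and station 2 after 35 microseconds
print(locate_fault(t1, t2, 10_000, 2e8))  # ≈ 3000 m from station 1
```

Note how only event timestamps, not waveforms, need to be exchanged, which keeps the communication load of the distributed stations low.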
Chapter 5 presents the measurement system designed to implement the transient detection and fault location method. The hardware belonging to the measurement chain of every acquisition channel in the remote stations is described. Then the global measurement system is characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty on the estimated position of the fault in the network under test. Finally, this parameter is computed according to the Guide to the Expression of Uncertainty in Measurement, by means of a numerical procedure. The last chapter describes a device, designed and realized during the PhD activity, intended to substitute the commercial capacitive voltage divider in the conditioning block of the measurement chain. This study aimed at providing an alternative to the transducer in use, with equivalent performance and lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the method much more feasible to apply.
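The numerical evaluation of combined uncertainty mentioned above can follow the Monte Carlo approach of GUM Supplement 1. The sketch below is a simplified, hypothetical example (the thesis's actual uncertainty budget covers the full measurement chain): it propagates assumed Gaussian uncertainties in the propagation speed and in the measured arrival-time difference into a fault-position estimate of the form d = (L + v·Δt)/2.

```python
import random
import statistics

def monte_carlo_fault_uncertainty(n=20_000, seed=7):
    """GUM Supplement 1 style Monte Carlo propagation of input
    uncertainties into the fault-position estimate d = (L + v*dt)/2.
    All numerical values are illustrative assumptions."""
    rng = random.Random(seed)
    L = 10_000.0                 # line length [m], assumed exactly known
    v0, u_v = 2.0e8, 1.0e6       # wave speed and its std. uncertainty [m/s]
    dt0, u_dt = 1.0e-6, 1.0e-8   # arrival-time difference and uncertainty [s]
    samples = []
    for _ in range(n):
        v = rng.gauss(v0, u_v)   # draw each input from its distribution
        dt = rng.gauss(dt0, u_dt)
        samples.append((L + v * dt) / 2.0)
    # Mean and standard deviation of the output distribution give the
    # estimate and its combined standard uncertainty.
    return statistics.fmean(samples), statistics.stdev(samples)
```

For these assumed inputs the first-order analytic combined uncertainty is sqrt((dt0/2 · u_v)² + (v0/2 · u_dt)²) ≈ 1.1 m around a nominal position of 5100 m, which the Monte Carlo result reproduces.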
Abstract:
Recent progress in microelectronics and wireless communications has enabled the development of low-cost, low-power, multifunctional sensors, allowing the birth of a new type of network named wireless sensor networks (WSNs). The main features of such networks are: the nodes can be positioned randomly over a given field with high density; each node operates both as a sensor (for the collection of environmental data) and as a transceiver (for the transmission of information towards the data retrieval point); and the nodes have limited energy resources. The use of wireless communications and the small size of the nodes make this type of network suitable for a large number of applications. For example, sensor nodes can be used to monitor a high-risk region, such as the area near a volcano; in a hospital they could be used to monitor the physical condition of patients. For each of these possible application scenarios, it is necessary to guarantee a trade-off between energy consumption and communication reliability. The thesis investigates the use of WSNs in two possible scenarios and for each of them proposes a solution to the related problems in light of this trade-off. The first scenario considers a network with a high number of nodes, deployed over a given geographical area without detailed planning, that have to transmit data towards a coordinator node, named the sink, which we assume to be located onboard an unmanned aerial vehicle (UAV). This is a practical example of reachback communication, characterized by a high density of nodes that have to transmit data reliably and efficiently towards a far receiver. It is assumed that each node transmits a common shared message directly to the receiver onboard the UAV whenever it receives a broadcast message (triggered, for example, by the vehicle), and that the communication channels between the local nodes and the receiver are subject to fading and noise.
The receiver onboard the UAV must be able to fuse the weak and noisy signals in a coherent way to receive the data reliably. A cooperative diversity concept is proposed as an effective solution to the reachback problem. In particular, a spread-spectrum (SS) transmission scheme is considered in conjunction with a fusion center that can exploit cooperative diversity without requiring stringent synchronization between nodes. The idea consists of simultaneous transmission of the common message by the nodes and Rake reception at the fusion center. The proposed solution is mainly motivated by two goals: the necessity of having simple nodes (to this aim the computational complexity is moved to the receiver onboard the UAV), and the importance of guaranteeing high levels of energy efficiency, thus increasing the network lifetime. The proposed scheme is analyzed in order to better understand the effectiveness of the approach. The performance metrics considered are both the theoretical limit on the maximum amount of data that can be collected by the receiver and the error probability with a given modulation scheme. Since we deal with a WSN, both metrics are evaluated taking into consideration the energy efficiency of the network. The second scenario considers the use of a chain network for the detection of fires, using nodes that have the double function of sensors and routers. The first function is the monitoring of a temperature parameter, which allows a local binary decision on the absence or presence of a target (fire). The second is that each node receives the decision made by the previous node of the chain, compares it with the one deriving from its own observation of the phenomenon, and transmits the result to the next node. The chain ends at the sink node, which transmits the received decision to the user.
In this network the goals are to limit the throughput on each sensor-to-sensor link and to minimize the probability of error at the last stage of the chain. This is a typical scenario of distributed detection. To obtain good performance it is necessary to define fusion rules by which each node summarizes its local observations and the decisions of the previous nodes into a final decision that is transmitted to the next node. WSNs have also been studied from a practical point of view, describing both the main characteristics of the IEEE 802.15.4 standard and two commercial WSN platforms. Using a commercial WSN platform, an agricultural application was realized and tested in a six-month on-field experimentation.
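The chain topology described above is a classic serial (tandem) distributed-detection network. A minimal sketch of one possible fusion rule, with hypothetical thresholds (not the rules derived in the thesis): each node lowers its local detection threshold when the upstream node has already declared a fire, so only one bit travels per link while upstream evidence still influences downstream decisions.

```python
def chain_decision(observations, t_high=2.0, t_low=1.0):
    """Serial decision fusion along a chain of fire sensors.

    Each node compares its scalar observation (e.g. temperature rise)
    against a threshold that depends on the one-bit decision received
    from the previous node: an upstream "fire" declaration lowers the
    local threshold (a simple two-threshold fusion rule).  Returns the
    decision delivered by the last node of the chain to the sink.
    """
    decision = 0  # the first node has no upstream decision
    for x in observations:
        threshold = t_low if decision == 1 else t_high
        decision = 1 if x >= threshold else 0
    return decision

# Strong evidence at the first node propagates through nodes that see
# only moderate evidence; moderate readings alone raise no alarm.
print(chain_decision([2.5, 1.2, 1.1, 1.5]))  # -> 1
print(chain_decision([1.2, 1.1, 1.5, 0.4]))  # -> 0
```

Per-link throughput is exactly one bit regardless of the chain length, which matches the stated goal of limiting sensor-to-sensor traffic.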
Abstract:
With business environments no longer confined to geographical borders, the new wave of digital technologies has given organizations an enormous opportunity to bring together their distributed workforce and develop the ability to work together despite being apart (Prasad & Akhilesh, 2002). Presupposing creativity to be a social process, this thesis questions how that process occurs when the configuration of the team is substantially modified. Very little is known about the impact of interpersonal relationships on creativity (Kurtzberg & Amabile, 2001). In order to analyse the ways in which the creative process may develop, we ought to take into consideration the fact that participants are dealing with a quite atypical situation. Firstly, in these cases socialization takes place amongst individuals belonging to a geographically dispersed workplace, where interpersonal relationships are mediated by the computer and where trust must be developed among persons who have never met one another. Participants not only have multiple addresses and locations, but above all different nationalities, cultures, attitudes, thoughts, working patterns, and languages. Therefore, the central research question of this thesis is as follows: “How does the creative process unfold in globally distributed teams?” With a qualitative approach, we used the case study of the Business Unit of Volvo 3P, an arm of the Volvo Group. Throughout this research, we interviewed seven teams engaged in the development of a new product in the chassis and cab areas for the brands Volvo and Renault Trucks, teams that were geographically distributed across Brazil, Sweden, France and India. Our research suggests that corporate values, alongside intrinsic motivation and the task itself, lay down the necessary foundations for the development of the creative process in globally distributed teams (GDTs).
Abstract:
The term Ambient Intelligence (AmI) refers to a vision of the future of the information society where smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence hidden in the network connecting these devices. This promotes the creation of pervasive environments, improving the quality of life of the occupants and enhancing the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors are embedded in the environment, wireless sensor networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes that can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy-scavenging modules). WSNs promise to revolutionize the interactions between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed.
Sensor nodes are inherently resource-constrained systems, with very low power consumption and small size requirements, which enables them to reduce the interference on the physical phenomena sensed and allows easy, low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local “understanding” of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts, such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In particular, in video surveillance, it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, which has the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something ‘interesting’ is detected). The energy cost of image processing must however be carefully minimized. Imaging can and does play an important role in sensing devices for ambient intelligence. Computer vision can, for instance, be used for recognizing persons and objects and for recognizing behaviour such as illness and rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis. More eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and could describe objects in their true 3D appearance. Real-time implementations of these approaches are a recently opened field of research.
In this thesis we pay attention to the realities of hardware/software technologies and to the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, outlined below. Although the design of a sensor network and its sensor nodes is strictly application dependent, a number of constraints should almost always be considered. Among them:
• Small form factor, to reduce node intrusiveness.
• Low power consumption, to reduce battery size and to extend node lifetime.
• Low cost, for widespread diffusion.
These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, with which only simple data processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through the adoption of ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance. Low Power Video Sensor Nodes and Video Processing Algorithms: in comparison to scalar sensors, such as temperature, pressure, humidity, velocity, and acceleration sensors, vision sensors generate much higher bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes.
We have designed and developed wireless video sensor nodes focusing on small size and on flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first is based on a low-cost, low-power FPGA+microcontroller system-on-chip; the second is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate continuously with a Li-Polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes are presented which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally. Featuring such intelligence, these nodes are able to cope with tasks such as the recognition of unattended bags in airports or of persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data. Multimodal surveillance: in several setups the use of wired video cameras may not be possible; for this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community.
Pyroelectric Infra-Red (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and can possibly result in continuous operation of the node. Being low-cost, passive (thus low-power) and presenting a limited form factor, PIR sensors are well suited to WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation of standalone distributed cameras. We have used an adaptive controller based on Model Predictive Control (MPC) to improve the system's performance, outperforming naive power management policies.
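The energy-level-dependent PIR trigger can be sketched as a simple gating policy; the threshold values and the hysteresis rule below are illustrative assumptions standing in for the MPC controller actually used in this work.

```python
def camera_trigger_policy(events, wake_level=0.5, sleep_level=0.2):
    """Energy-dependent gating of a video sensor node by a PIR trigger.

    `events` is a list of (pir_fired, battery_level) pairs, with the
    battery level normalised to [0, 1].  The camera wakes only when the
    PIR fires while the node is not energy-inhibited; once the battery
    falls below `sleep_level` the camera stays off until the level
    recovers above `wake_level` (hysteresis, so the node does not
    oscillate around a single threshold).  Returns the time steps at
    which the camera was activated.
    """
    activations = []
    inhibited = False  # True while the battery is considered too low
    for t, (pir_fired, battery) in enumerate(events):
        if battery < sleep_level:
            inhibited = True           # enter low-energy mode
        elif battery > wake_level:
            inhibited = False          # solar recharge restored the margin
        if pir_fired and not inhibited:
            activations.append(t)      # wake camera and radio
    return activations

# PIR events while the battery drains and then recharges via solar panel:
events = [(True, 0.9), (False, 0.6), (True, 0.3),
          (True, 0.15), (True, 0.35), (True, 0.8)]
print(camera_trigger_policy(events))  # -> [0, 2, 5]
```

The triggers at low-but-not-critical battery (step 2) are honoured, while triggers during the inhibited stretch (steps 3 and 4) are suppressed until the battery recovers.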
Abstract:
This thesis presents several data processing and compression techniques capable of addressing the strict requirements of wireless sensor networks. After a general overview of sensor networks, the energy problem is introduced, dividing the different energy reduction approaches according to the subsystem they try to optimize. To manage the complexity brought by these techniques, a quick overview of the most common middleware for WSNs is given, describing in detail SPINE2, a framework for data processing in the node environment. The focus then shifts to in-network aggregation techniques, used to reduce the data sent by the network nodes and so prolong the network lifetime as long as possible. Among the several techniques, the most promising approach is Compressive Sensing (CS). To investigate this technique, a practical implementation of the algorithm is compared against a simpler aggregation scheme, deriving a mixed algorithm able to successfully reduce the power consumption. The analysis then moves from compression implemented on single nodes to CS for signal ensembles, trying to exploit the correlations among sensors and nodes to improve compression and reconstruction quality. The two main techniques for signal ensembles, Distributed CS (DCS) and Kronecker CS (KCS), are introduced and compared against a common set of data gathered from real deployments. The best trade-off between reconstruction quality and power consumption is then investigated. The use of CS is also addressed when the signal of interest is sampled at a sub-Nyquist rate, evaluating the reconstruction performance. Finally, group-sparsity CS (GS-CS) is compared to another well-known technique for the reconstruction of signals from a highly sub-sampled version. These two frameworks are again compared against a real data set, and an insightful analysis of the trade-off between reconstruction quality and lifetime is given.
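Compressive Sensing recovers a sparse signal from far fewer random linear measurements than its ambient dimension, which is why a node can transmit m measurements instead of n samples. A deliberately minimal sketch (a 1-sparse signal and a single matching-pursuit step, not the DCS/KCS ensemble algorithms studied in the thesis) shows the core idea:

```python
import random

def measure(phi, x):
    """y = Phi @ x, written out for plain Python lists (m rows, n columns)."""
    return [sum(row[j] * x[j] for j in range(len(x))) for row in phi]

def recover_1sparse(phi, y):
    """One matching-pursuit step: pick the column of Phi most correlated
    with the measurements, then estimate its coefficient.  Exact for a
    1-sparse signal when the columns are sufficiently incoherent."""
    m, n = len(phi), len(phi[0])
    corr = [sum(phi[i][j] * y[i] for i in range(m)) for j in range(n)]
    j_hat = max(range(n), key=lambda j: abs(corr[j]))
    energy = sum(phi[i][j_hat] ** 2 for i in range(m))
    x_hat = [0.0] * n
    x_hat[j_hat] = corr[j_hat] / energy
    return x_hat

rng = random.Random(3)
m, n = 24, 32                 # 24 random measurements of a length-32 signal
phi = [[rng.choice((-1.0, 1.0)) for _ in range(n)] for _ in range(m)]
x = [0.0] * n
x[5] = 4.2                    # unknown 1-sparse signal at the sensor
y = measure(phi, x)           # only the m < n values in y are transmitted
x_hat = recover_1sparse(phi, y)
```

Real deployments use k-sparse signals and full greedy or convex solvers (e.g. OMP or basis pursuit); the energy saving comes from transmitting m rather than n values, shifting the reconstruction cost to the sink.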
Abstract:
The wide diffusion of cheap, small, and portable sensors integrated in a large variety of devices, together with the availability of almost ubiquitous Internet connectivity, makes it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if properly and promptly analyzed, can be exploited to build new intelligent and pervasive services with the potential to improve people's quality of life in a variety of cross-cutting domains such as entertainment, health care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to the provisioning of differentiated quality of service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers, and we present Quasit, its prototype implementation, offering a scalable and extensible platform that researchers can use to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing, by performing a large experimental study on the prototype of our novel LAAR dynamic replication technique.
Our modeling, prototyping, and experimental work demonstrates that, by providing data distribution and processing middleware with application-level knowledge of the different quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
Abstract:
Modern software systems, in particular distributed ones, are everywhere around us and underpin our everyday activities. Hence, guaranteeing their correctness, consistency, and safety is of paramount importance, yet their complexity makes the verification of such properties very challenging. It is natural to expect that these systems are reliable and, above all, usable. i) In order to be reliable, compositional models of software systems need to account for consistent dynamic reconfiguration, i.e., changing at runtime the communication patterns of a program. ii) In order to be useful, compositional models of software systems need to account for interaction, which can be seen as communication patterns among components that collaborate to achieve a common task. The aim of the Ph.D. was to develop powerful techniques based on formal methods for the verification of correctness, consistency, and safety properties related to dynamic reconfiguration and communication in complex distributed systems. In particular, static analysis techniques based on types and type systems appeared to be an adequate methodology, considering their success in guaranteeing not only basic safety properties but also more sophisticated ones, like deadlock or livelock freedom in a concurrent setting. The main contributions of this dissertation are twofold. i) On the components side: we design types and a type system for a concurrent object-oriented calculus, to statically ensure consistency of dynamic reconfigurations related to modifications of communication patterns in a program during execution time. ii) On the communication side: we study advanced safety properties related to communication in complex distributed systems, like deadlock-freedom, livelock-freedom, and progress. Most importantly, we exploit an encoding of types and terms of a typical distributed language, the session π-calculus, into the standard typed π-calculus, in order to understand their expressive power.
Abstract:
The thesis analyses the hydrodynamics induced by an array of Wave Energy Converters (WECs), from both an experimental and a numerical point of view. WECs can be considered an innovative solution able to contribute to the green energy supply and, at the same time, to protect the rear coastal area under marine spatial planning considerations. This research activity essentially arises from this combined concept. The WEC under examination is a floating device belonging to the Wave Activated Bodies (WAB) class. Experimental tests were performed at Aalborg University at different scales and layouts, and the performance of the models was analysed under a variety of irregular wave attacks. The numerical simulations were performed with the codes MIKE 21 BW and ANSYS-AQWA. Experimental results were also used to calibrate the numerical parameters and/or were directly compared to numerical results, in order to extend the experimental database. Results of the research activity are summarized in terms of device performance and guidelines for a future wave farm installation. The device length should be "tuned" based on the local climate conditions. The wave transmission behind the devices is rather high, suggesting that the tested layout should be considered as a module of a wave farm installation. Indications on the minimum inter-distance among the devices are provided. Furthermore, a CALM mooring system leads to lower wave transmission and also larger power production than a spread mooring. The two numerical codes have different potentialities: the hydrodynamics around single and multiple devices is obtained with MIKE 21 BW, while wave loads and motions for a single moored device are derived from ANSYS-AQWA. Combining the experimental and numerical results, it is suggested, for both coastal protection and energy production, to adopt a staggered layout, which maximises the device density and minimises the marine space required for the installation.
Abstract:
A Smart City is a high-performance urban context, where citizens live independently and are more aware of the surrounding opportunities, thanks to forward-looking development of economy, politics, governance, mobility, and environment. ICT infrastructures play a key role in this new research field, being also a means for society to allow new ideas to prosper and new, more efficient approaches to be developed. The aim of this work is to research and develop novel solutions, here called smart services, in order to solve several upcoming problems and known issues in urban areas and, more generally, in the context of modern society. A specific focus is placed on smart governance and on the privacy issues that have arisen in the cellular age.
Abstract:
In this thesis work we explore and discuss the properties of the gamma-ray sources included in the first Fermi-LAT catalog of sources above 10 GeV (1FHL), considering both blazars and the non-negligible fraction of still unassociated gamma-ray sources (UGS, 13%). We perform a statistical analysis of a complete sample of hard gamma-ray sources included in the 1FHL catalog, mostly composed of HSP blazars, and we present new VLBI observations of the faintest members of the sample. The new VLBI data, complemented by an extensive search of the archives for brighter sources, are essential to gather as large a sample as possible for assessing the significance of the correlation between the radio and very-high-energy (E > 100 GeV) emission bands. After characterizing the statistical properties of HSP blazars and UGS, we use a complementary approach, focusing on an intensive multi-frequency VLBI and gamma-ray observing campaign carried out for one of the most remarkable and closest HSP blazars, Markarian 421.