708 results for Environments
Abstract:
This work presents exact, hybrid algorithms for mixed resource Allocation and Scheduling problems; in general terms, these consist in assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of Embedded System Design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to effectively exploit the platform parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part explained by the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods, while compensating for their weaknesses. In this work, we first consider an Allocation and Scheduling problem over the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic-guided search.
Next, we tackle Allocation and Scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address Allocation and Scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict detection method. The proposed approaches achieve good results on problems of practical size, thus demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
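As an illustration of the decomposition idea above, the following sketch shows a toy logic-based Benders-style loop in Python: a master enumerates task-to-processor assignments, a scheduling subproblem checks a deadline, and infeasible assignments are excluded with no-good cuts. The instance data, the greedy list scheduler, and the cut form are illustrative simplifications, not the thesis's actual CP/OR machinery.

```python
from itertools import product

# Toy instance (illustrative data, not from the thesis): four tasks with
# durations, a diamond-shaped precedence graph, two processors, a deadline.
durations = {"a": 3, "b": 2, "c": 4, "d": 2}
precedence = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
processors = [0, 1]
deadline = 9

def makespan(assign):
    """Greedy list schedule (a stand-in for the CP scheduling subproblem)."""
    finish = {}
    free = {p: 0 for p in processors}
    for t in durations:  # insertion order is already topological here
        preds = [u for (u, v) in precedence if v == t]
        est = max([finish[u] for u in preds] + [free[assign[t]]])
        finish[t] = est + durations[t]
        free[assign[t]] = finish[t]
    return max(finish.values())

def master(cuts):
    """Enumerate assignments cheapest-first (cost = max processor load,
    a relaxation of the makespan), skipping no-good cuts."""
    cands = []
    for combo in product(processors, repeat=len(durations)):
        assign = tuple(zip(sorted(durations), combo))
        if assign in cuts:
            continue
        load = {p: 0 for p in processors}
        for t, p in assign:
            load[p] += durations[t]
        cands.append((max(load.values()), assign))
    cands.sort()
    return cands[0][1] if cands else None

# Benders-style loop: solve master, check the subproblem, cut on failure.
cuts, solution = set(), None
while True:
    assign = master(cuts)
    if assign is None:
        break  # no feasible assignment at all
    if makespan(dict(assign)) <= deadline:
        solution = dict(assign)
        break
    cuts.add(assign)  # no-good cut: forbid this exact assignment
```

On this toy instance the loop terminates with tasks a and b on processor 0, c and d on processor 1, meeting the deadline with makespan 9.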
Abstract:
The transition of the boundary layer from strongly oceanic to continentally influenced was investigated in two tropical coastal forest regions of the Americas, where air masses coming from the sea are transported over the continent. Two field campaigns were carried out: in Costa Rica (CR; July 1996) and in Suriname (April 1998). In CR, rainwater was collected in the north-eastern lowlands (about 10°25' N, 84°W), in which carboxylates, inorganic anions, Ca2+, K+, NH4+ and Mg2+ were later measured. The samples were collected at 5 different sites along the wind direction, 1, 20, 60, 60 and 80 km from the coast. In Suriname (Sipaliwini, 2°02' N, 56°08' W), organic acids were sampled from the gas phase about 550 km from the coast, along with O3 and CO. The samples were analysed by ion chromatography and capillary electrophoresis. Morning entrainment of the nocturnal residual layer and of air from the lower free troposphere was the main source of HCOOH and CH3COOH in the daytime mixed layer. It was shown that local production of these acids by chemical reactions played a minor role and that direct emission was negligible. From the two field campaigns it follows that the concentrations of the secondary compounds HCOOH, CH3COOH, ozone and CO in the daytime mixed layer were determined by import, which holds for both the rainy and dry seasons up to distances of 550 km from the coast.
Abstract:
In this study, the depositional conditions and stratigraphic architecture of Neogene (Tortonian, 11–6.7 Ma) sediments of southern central Crete were analysed. In order to improve the resolution of paleoclimatic data, new methods were applied to quantify environmental parameters and to increase the chronostratigraphic resolution in shallow-water sediments. A relationship between the paleoenvironmental change observed on Crete and global processes was established, and a depositional model was developed. Based on a detailed analysis of the distribution of non-geniculate coralline red algae, index values for water temperature and water depth were established and tested against the distribution patterns of benthic foraminifera and symbiont-bearing corals. Calcite-shelled bivalves were sampled from the Algarve coast (southern Portugal) and central Crete, and their 87Sr/86Sr ratios were measured. A high-resolution chronostratigraphy was developed based on the correlation between fluctuations in Sr ratios in the measured sections and in a late Miocene global seawater Sr isotope reference curve. Applying this method, a time frame was established to compare paleoenvironmental data from southern central Crete with global information on climate change reflected in oxygen isotope data. The comparison between paleotemperature data based on red algae and global oxygen isotope data showed that the employed index values reflect global change in temperature. The data indicate a warm interval during the earliest Tortonian, a second short warm interval between 10 and 9.5 Ma, a longer climatic optimum between 9 and 8 Ma, and an interval of increasing temperatures in the latest Tortonian. The distribution of coral reefs and carpets shows that during the warm intervals the depositional environment became tropical, while temperate climates prevailed during the cold interval.
Since relative tectonic movements after the initial half-graben formation in the early Tortonian were low in southern central Crete, the sedimentary successions strongly respond to global sea-level fluctuations. A characteristic sedimentary succession formed during a third-order sea-level cycle: it comprises mixed siliciclastic-limestone deposits formed during sea-level fall and lowstand, homogeneous red algal deposits formed during sea-level rise, and coral carpets formed during late rise and highstand. Individual beds in the succession reflect glacioeustatic fluctuations that are most prominent in the mixed siliciclastic-limestone interval. These results confirm that sedimentary successions deposited at the critical threshold between temperate and tropical environments develop characteristic changes in depositional systems and biotic associations that can be used to assemble paleoclimatic datasets.
Abstract:
The objective of this Ph.D. thesis is to lay the foundations of an all-embracing link analysis procedure that may form a general reference scheme for the future state of the art of RF/microwave link design: it is basically meant as a circuit-level simulation of an entire radio link, with the (generally multiple) transmitting and receiving antennas examined by EM analysis. In this way the influence of mutual couplings on the frequency-dependent near-field and far-field performance of each element is fully accounted for. The set of transmitters is treated as a single nonlinear system loaded by the multiport antenna, and is analyzed by nonlinear circuit techniques. In order to establish the connection between transmitters and receivers, the far fields incident on the receivers are evaluated by EM analysis and are combined by extending an available Ray Tracing technique to the link study. EM theory is used to describe the receiving array as a linear active multiport network. Link performance in terms of bit error rate (BER) is eventually verified a posteriori by a fast system-level algorithm. In order to validate the proposed approach, four heterogeneous application contexts are provided. A complete MIMO link design in a realistic propagation scenario constitutes the reference case study. The second context regards the design, optimization and testing of various types of rectennas for power generation from common RF sources. Finally, two types of radio identification tags, at X-band and V-band respectively, are designed and implemented. In all cases, exhaustive nonlinear/electromagnetic co-simulation and co-design is shown to be essential for accurate system performance prediction.
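Since link performance is ultimately judged by BER, a minimal Monte Carlo sketch of system-level BER estimation may help fix ideas; it assumes a plain BPSK-over-AWGN model, not the full nonlinear/EM co-simulated link of the thesis.

```python
import math
import random

def simulate_ber(ebn0_db, n_bits=200_000, seed=1):
    """Monte Carlo BER of BPSK over AWGN (illustrative stand-in for a fast
    system-level link-verification step)."""
    random.seed(seed)
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))  # noise std for unit-energy symbols
    errors = 0
    for _ in range(n_bits):
        bit = random.getrandbits(1)
        tx = 1.0 if bit else -1.0            # BPSK mapping
        rx = tx + random.gauss(0.0, sigma)   # AWGN channel
        errors += (rx > 0) != bool(bit)      # hard-decision detection
    return errors / n_bits

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

for ebn0_db in (0, 4, 8):
    sim = simulate_ber(ebn0_db)
    theory = q_func(math.sqrt(2 * 10 ** (ebn0_db / 10)))
    print(f"Eb/N0 = {ebn0_db} dB  sim = {sim:.5f}  theory = {theory:.5f}")
```

The simulated rates converge to the analytical Q(sqrt(2 Eb/N0)) curve as the bit count grows.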
Abstract:
The Eifel volcanism is part of the Central European Volcanic Province (CEVP) and is located in the Rhenish Massif, close to the Rhine and Leine Grabens. The Quaternary Eifel volcanism appears to be related to mantle plume activity. However, the causes of the Tertiary Hocheifel volcanism remain debated. We present geochronological, geochemical and isotope data to assess the geotectonic setting of the Tertiary Eifel volcanism. Based on 40Ar/39Ar dating, we were able to identify two periods in the Hocheifel activity: from 43.6 to 39.0 Ma and from 37.5 to 35.0 Ma. We also show that the pre-rifting volcanism in the northernmost Upper Rhine Graben (59 to 47 Ma) closely precedes the Hocheifel volcanic activity. In addition, the volcanism propagated from south to north within the older phase of the Hocheifel activity. At the time of Hocheifel volcanism, the tectonic activity in the Hocheifel was controlled by stress field conditions identical to those of the Upper Rhine Graben. Therefore, magma generation in the Hocheifel appears to be caused by decompression due to Middle to Late Eocene extension. Our geochemical data indicate that the Hocheifel magmas were produced by partial melting of a garnet peridotite at 75-90 km depth. We also show that crustal contamination is minor, although the magmas erupted through a relatively thick continental lithosphere. Sr, Nd and Pb isotopic compositions suggest that the source of the Hocheifel magmas is a mixture of depleted FOZO- or HIMU-like material and enriched EM2-like material. The Tertiary Hocheifel and the Quaternary Eifel lavas appear to have a common enriched end-member. However, the other sources are likely to be distinct. In addition, the Hocheifel lavas share a depleted component with the other Tertiary CEVP lavas.
Although the Tertiary Hocheifel and the Quaternary Eifel lavas appear to originate from different sources, the potential involvement of a FOZO-like component would indicate the contribution of deep mantle material. Thus, on the basis of the geochemical and isotope data, we cannot rule out the involvement of plume-type material in the Hocheifel magmas. The Ko’olau Scientific Drilling Project (KSDP) was initiated in order to evaluate the long-term evolution of Ko’olau volcano and obtain information about the Hawaiian mantle plume. High-precision Pb triple spike data, as well as Sr and Nd isotope data on KSDP lavas and Honolulu Volcanics (HVS), reveal compositional source variations during Ko’olau growth. Pb isotopic compositions indicate that at least three Pb end-members are present in Ko’olau lavas. Changes in the contributions of each component are recorded in the Pb, Sr and Nd isotope stratigraphy. The radiogenic component is present, in variable proportions, in all three stages of Ko’olau growth. It shows affinities with the least radiogenic “Kea-lo8” lavas present in Mauna Kea. The first unradiogenic component was present in the main shield stage of Ko’olau growth, but its contribution decreased with time. It has EM1-type characteristics and corresponds to the “Ko’olau” component of the Hawaiian mantle plume. The second unradiogenic end-member, so far only sampled by Honolulu lavas, has isotopic characteristics similar to those of a depleted mantle. However, these are different from those of the recent Pacific lithosphere (EPR MORB), indicating that the HVS are not derived from a MORB-related source. We suggest, instead, that the HVS result from melting of plume material. Thus the evolution of a single Hawaiian volcano records the geochemical and isotopic changes within the Hawaiian plume.
Abstract:
Many industries and academic institutions share the vision that an appropriate use of information originating from the environment may add value to services in multiple domains and may help humans deal with the growing information overload which often seems to jeopardize our lives. It is also clear that information sharing and mutual understanding between software agents may impact complex processes where many actors (humans and machines) are involved, leading to relevant socioeconomic benefits. Starting from these two inputs, architectural and technological solutions to enable “environment-related cooperative digital services” are explored here. The proposed analysis starts from the consideration that our environment is a physical space where diversity is a major value. On the other hand, diversity is detrimental to common technological solutions and is an obstacle to mutual understanding. An appropriate environment abstraction and a shared information model are needed to provide the required levels of interoperability in our heterogeneous habitat. This thesis reviews several approaches to support environment-related applications and intends to demonstrate that smart-space-based, ontology-driven, information-sharing platforms may become a flexible and powerful solution to support interoperable services in virtually any domain and even in cross-domain scenarios. It also shows that semantic technologies can be fruitfully applied not only to represent application domain knowledge. For example, semantic modeling of Human-Computer Interaction may support interaction interoperability and the transformation of interaction primitives into actions, and the thesis shows how smart-space-based platforms driven by an interaction ontology may enable natural and flexible ways of accessing resources and services, e.g., with gestures.
An ontology for computational flow execution has also been built to represent abstract computation, with the goal of exploring new ways of scheduling computation flows with smart-space-based semantic platforms.
Abstract:
This thesis analyses problems related to the applicability, in business environments, of Process Mining tools and techniques. The first contribution is a presentation of the state of the art of Process Mining and a characterization of companies in terms of their "process awareness". The work continues by identifying the circumstances where problems can emerge: data preparation, actual mining, and results interpretation. Other problems are the configuration of parameters by non-expert users and computational complexity. We concentrate on two possible scenarios: "batch" and "on-line" Process Mining. Concerning batch Process Mining, we first investigate the data preparation problem and propose a solution for the identification of the "case-ids" whenever this field is not explicitly indicated. After that, we concentrate on problems at mining time and propose a generalization of a well-known control-flow discovery algorithm in order to exploit non-instantaneous events. The usage of interval-based recording leads to an important performance improvement. Later on, we report our work on parameter configuration for non-expert users. We present two approaches to select the "best" parameter configuration: one is completely autonomous; the other requires human interaction to navigate a hierarchy of candidate models. Concerning data interpretation and results evaluation, we propose two metrics: a model-to-model metric and a model-to-log metric. Finally, we present an automatic approach for the extension of a control-flow model with social information, in order to simplify the analysis of these perspectives. The second part of this thesis deals with control-flow discovery algorithms in on-line settings. We propose a formal definition of the problem and two baseline approaches.
Two actual mining algorithms are proposed: the first adapts a frequency counting algorithm to the control-flow discovery problem; the second is a framework of models that can be used for different kinds of streams (stationary versus evolving).
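The frequency-counting idea behind on-line control-flow discovery can be sketched as follows; this is a generic Lossy Counting scheme applied to directly-follows pairs of an event stream, an illustrative stand-in rather than the thesis's actual algorithm.

```python
from math import ceil

class LossyDirectlyFollows:
    """Approximate counting of directly-follows pairs over an event stream,
    using the Lossy Counting bound: any pair with true frequency >= s*n is
    reported when queried with support s (given error parameter epsilon)."""

    def __init__(self, epsilon=0.01):
        self.eps = epsilon
        self.width = ceil(1 / epsilon)  # bucket width
        self.n = 0                      # number of pairs observed
        self.counts = {}                # (a, b) -> [count, max_error]
        self.last = {}                  # case_id -> last activity seen

    def observe(self, case_id, activity):
        """Consume one event; count the (previous, current) pair per case."""
        if case_id in self.last:
            pair = (self.last[case_id], activity)
            self.n += 1
            bucket = ceil(self.n / self.width)
            if pair in self.counts:
                self.counts[pair][0] += 1
            else:
                self.counts[pair] = [1, bucket - 1]
            if self.n % self.width == 0:  # periodic pruning step
                self.counts = {p: ce for p, ce in self.counts.items()
                               if ce[0] + ce[1] > bucket}
        self.last[case_id] = activity

    def frequent(self, support):
        """Pairs with estimated relative frequency at least `support`."""
        threshold = (support - self.eps) * self.n
        return {p for p, (c, _) in self.counts.items() if c >= threshold}

# Usage: replay a stream of 50 cases, each with trace <a, b, c>.
miner = LossyDirectlyFollows(epsilon=0.1)
for i in range(50):
    for act in ("a", "b", "c"):
        miner.observe(f"case{i}", act)
```

Querying `miner.frequent(0.4)` then returns the two directly-follows relations ("a", "b") and ("b", "c"), from which a control-flow model could be assembled.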
Abstract:
Automatically recognizing faces captured under uncontrolled environments has been a challenging topic for decades. In this work, we investigate cohort score normalization, which has been widely used in biometric verification, as a means to improve the robustness of face recognition under challenging environments. In particular, we introduce cohort score normalization into the undersampled face recognition problem. Further, we develop an effective cohort normalization method specifically for the unconstrained face pair matching problem. Extensive experiments conducted on several well-known face databases demonstrate the effectiveness of cohort normalization in these challenging scenarios. In addition, to give a proper understanding of cohort behavior, we study the impact of the number and quality of cohort samples on the normalization performance. The experimental results show that a bigger cohort set gives more stable and often better results up to a point, after which the performance saturates, and that cohort samples of different quality indeed produce different normalization performance. Recognizing faces that have undergone alterations is another challenging problem for current face recognition algorithms. Face image alterations can be roughly classified into two categories: unintentional alterations (e.g., geometric transformations introduced by the acquisition device) and intentional alterations (e.g., plastic surgery). We study the impact of these alterations on face recognition accuracy. Our results show that state-of-the-art algorithms are able to overcome limited digital alterations but are sensitive to more relevant modifications. Further, we develop two useful descriptors for detecting those alterations which can significantly affect recognition performance. In the end, we propose to use the Structural Similarity (SSIM) quality map to detect and model variations due to plastic surgery.
Extensive experiments conducted on a plastic surgery face database demonstrate the potential of the SSIM map for matching face images after surgery.
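A minimal sketch of the basic cohort-normalization idea may be useful here: a T-norm style score transform, which is one common instance of the family (the thesis develops more specialized methods for the problems above).

```python
import statistics

def cohort_normalize(raw_score, cohort_scores):
    """Center and scale a raw match score by its cohort statistics
    (T-norm style cohort score normalization, shown for illustration)."""
    mu = statistics.fmean(cohort_scores)
    sigma = statistics.stdev(cohort_scores)
    return (raw_score - mu) / sigma

# Hypothetical scores: the same cohort judges two raw match scores.
# A score far above its cohort maps to a large normalized value.
strong = cohort_normalize(5.0, [1.0, 2.0, 3.0])  # well above the cohort
weak = cohort_normalize(2.5, [1.0, 2.0, 3.0])    # close to the cohort mean
```

Here `strong` evaluates to 3.0 and `weak` to 0.5, so the normalized scores make the two matches directly comparable relative to the cohort.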
Abstract:
The wide diffusion of cheap, small, and portable sensors integrated in an unprecedentedly large variety of devices, together with the availability of almost ubiquitous Internet connectivity, makes it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if analyzed properly and in a timely manner, can be exploited to build new intelligent and pervasive services that have the potential to improve people's quality of life in a variety of domains such as entertainment, health care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to the provisioning of differentiated quality of service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and by identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and present Quasit, its prototype implementation, offering a scalable and extensible platform that can be used by researchers to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing by performing a large experimental study on the prototype of our novel LAAR dynamic replication technique.
Our modeling, prototyping, and experimental work demonstrates that, by providing data distribution and processing middleware with application-level knowledge of the different quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
Abstract:
Through the use of the Cloud Foundry "stack" concept, a new form of isolation is provided to applications running on the PaaS, together with a new deployment feature that can easily scale on distributed systems, in both public and private clouds.
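As a minimal sketch of how a stack and a scaled deployment might be declared in a Cloud Foundry application manifest; the application name and the stack name are illustrative and depend on the stacks installed on the target platform.

```yaml
applications:
- name: my-app          # hypothetical application name
  stack: cflinuxfs4     # selects the root filesystem the app is isolated on
  instances: 3          # horizontal scaling across the platform's cells
  memory: 256M
```

Pushing with such a manifest lets the platform place the instances on its distributed runtime, whether the foundation is public or private.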
Abstract:
Methane is the most abundant reduced organic compound in the atmosphere. As the strongest known long-lived greenhouse gas after water vapour and carbon dioxide, methane perturbs the radiation balance of Earth’s atmosphere. The abiotic formation of methane requires ultraviolet irradiation of organic matter or takes place in locations with high temperature and/or pressure, e.g. during biomass burning or serpentinisation of olivine, under hydrothermal conditions in the deep ocean or below tectonic plates. Biotic methane was traditionally thought to be formed only by methanogens under strictly anaerobic conditions, such as in wetland soils, rice paddies and agricultural waste. In this dissertation several chemical pathways are described which lead to the formation of methane under aerobic and ambient conditions. Organic precursor compounds such as ascorbic acid and methionine were shown to release methane in a chemical system including ferrihydrite and hydrogen peroxide in aqueous solution. Moreover, stable carbon isotope labelling experiments showed that the thio-methyl group of methionine was the carbon precursor of the methane produced. Methionine, a compound that plays an important role in transmethylation processes in plants, was also applied to living plants. Stable carbon isotope labelling experiments clearly verified that methionine acts as a precursor compound for the methane from plants. Further experiments in which the electron transport chain was inhibited suggest that methane generation is located in the mitochondria of the plants. The abiotic formation of methane was shown for several soil samples. Important environmental parameters such as temperature, UV irradiation and moisture were identified as controls on methane formation. The organic content of the sample, as well as water and hydrogen peroxide, might also play a major role in the formation of methane from soils.
Based on these results, a novel scheme was developed that includes both biotic and chemical sources of methane in the pedosphere.
Abstract:
In the field of safety-critical embedded systems, the design process for applications is highly complex. For a given hardware architecture, electronic control units can be upgraded so that all existing processes and signals execute on time. The timing requirements are strict and must be met in every periodic recurrence of the processes, since guaranteeing the parallel execution is of utmost importance. Existing approaches can quickly compute design alternatives, but they do not guarantee that the cost of the necessary hardware changes is minimal. We present an approach that computes cost-minimal solutions to the problem which satisfy all timing constraints. Our algorithm uses linear programming with column generation, embedded in a tree structure, to provide lower and upper bounds during the optimization process. Through a decomposition of the main problem, the complex constraints guaranteeing periodic execution shift into independent subproblems, which are formulated as integer linear programs. Both the analyses of process execution and the methods of signal transmission are examined, and linearized representations are given. Furthermore, we present a new formulation for fixed-priority execution that additionally computes worst-case process response times, which are needed in scenarios where timing constraints are imposed on subsets of processes and signals. We demonstrate the applicability of our methods by analysing instances that contain process structures from real applications. Our results show that lower bounds can be computed quickly in order to prove the optimality of heuristic solutions.
When delivering optimal solutions with response times, our new formulation compares favourably with other approaches in terms of runtime. The best results are obtained with a hybrid approach that combines heuristic start solutions, preprocessing, and a heuristic phase followed by a short exact computation phase.
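The worst-case response times mentioned above are classically obtained with an iterative fixed-point recurrence; the sketch below shows that standard recurrence in Python, as an illustrative stand-in for the linearized ILP formulation developed in the thesis.

```python
from math import ceil

def response_times(tasks):
    """Worst-case response times under fixed-priority preemptive scheduling,
    via the standard recurrence R = C_i + sum_j ceil(R / T_j) * C_j over all
    higher-priority tasks j. `tasks` is a list of (C, T) pairs sorted by
    decreasing priority, with deadlines equal to periods. A task whose
    response time exceeds its period is reported as None (unschedulable)."""
    rts = []
    for i, (c_i, t_i) in enumerate(tasks):
        r = c_i
        while True:
            # Interference from all higher-priority tasks within window r.
            r_next = c_i + sum(ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
            if r_next == r or r_next > t_i:
                break
            r = r_next
        rts.append(r_next if r_next <= t_i else None)
    return rts
```

For example, `response_times([(1, 4), (2, 6), (3, 12)])` yields `[1, 3, 10]`, proving that task set schedulable, while `response_times([(3, 4), (3, 6)])` yields `[3, None]` because the low-priority task misses its period.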
Abstract:
In this master's thesis I evaluated the performance of an Ultra-Wide Bandwidth (UWB) radar system for indoor environment mapping. In particular, I used a statistical Bayesian approach which is able to combine all the measurements collected by the radar, including system non-idealities such as the error on the estimated antenna pointing direction or on the estimated radar position. First, I verified through simulations that the system was able to provide a sufficiently accurate reconstruction of the surrounding environment despite the limitations imposed by UWB technology. In fact, the emission of UWB pulses is limited in terms of transmitted power by international regulations. Motivated by the promising results obtained through simulations, I subsequently carried out a measurement campaign in a real indoor environment using a commercial UWB device. The obtained results showed that the UWB radar system is capable of providing an accurate reconstruction of indoor environments even when adopting non-directional antennas.
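A common way to realize such Bayesian map building is a log-odds occupancy grid, sketched below as a generic illustration: each range measurement lowers the occupancy belief of traversed cells and raises it for the hit cell. This simplified model omits the antenna-pointing and position error terms that the thesis accounts for.

```python
import math

def logodds_update(grid, cells_free, cell_hit, l_free=-0.4, l_occ=0.85):
    """Bayesian (log-odds) occupancy update for one range measurement:
    cells the pulse traversed become more likely free, the reflecting
    cell becomes more likely occupied. The increments are illustrative."""
    for c in cells_free:
        grid[c] = grid.get(c, 0.0) + l_free
    grid[cell_hit] = grid.get(cell_hit, 0.0) + l_occ

def occupancy_prob(grid, cell):
    """Convert accumulated log-odds back to an occupancy probability."""
    l = grid.get(cell, 0.0)  # unseen cells stay at log-odds 0 (p = 0.5)
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# Usage: five consistent measurements through (0,0) and (1,1) ending at (3,4).
grid = {}
for _ in range(5):
    logodds_update(grid, [(0, 0), (1, 1)], (3, 4))
```

After the five updates, the hit cell's occupancy probability approaches 1 while traversed cells drop well below 0.5, and never-observed cells remain at the uninformed prior of 0.5.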
Abstract:
This work aims to give an overview of the problems related to the security of communication between the internal components of vehicles and of the solutions available today. Starting with a general description of the car's internal network, we analyse its access points and discuss the damage caused by illicit tampering with it. We then examine whether such attacks can be prevented, reviewing the available solutions and focusing in particular on cryptographic modules and their applications. Finally, we present the practical implementation of an authentication protocol between ECUs and a mathematical proof of its security.
Abstract:
This thesis investigates one-dimensional random walks in random environment whose transition probabilities might have an infinite variance. The ergodicity of the dynamical system "from the point of view of the particle" is proved under the assumptions of transitivity and the existence of an absolutely continuous steady state on the space of environments. We show that, if the average of the local drift over the environments is summable and null, then the RWRE is recurrent. We provide an example satisfying all the hypotheses.
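A simple simulation may illustrate the setting: freeze one environment of site-dependent right-step probabilities, symmetric around 1/2 (such bounded symmetric environments are recurrent by Solomon's classical criterion), and run the walk in it. The bounded environment here is only an illustrative toy case, unlike the possibly infinite-variance environments studied in the thesis.

```python
import random

def rwre_path(p_right, steps, seed=0):
    """Walk `steps` steps in a frozen 1-D random environment, where
    p_right(x) gives the probability of stepping right at site x."""
    rng = random.Random(seed)
    x, path = 0, [0]
    for _ in range(steps):
        x += 1 if rng.random() < p_right(x) else -1
        path.append(x)
    return path

# Illustrative environment: right-step probabilities drawn once per site
# (and then frozen), uniform in [0.3, 0.7], hence symmetric around 1/2.
site_rng = random.Random(42)
env = {}
def p_right(x):
    if x not in env:
        env[x] = 0.5 + site_rng.uniform(-0.2, 0.2)
    return env[x]

path = rwre_path(p_right, 20_000)
returns_to_origin = sum(1 for y in path if y == 0)
```

Running longer and longer walks in such an environment, the count of returns to the origin keeps growing, the qualitative signature of recurrence.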