980 results for "dynamic set"
Abstract:
This thesis presents several data processing and compression techniques capable of addressing the strict requirements of wireless sensor networks (WSNs). After a general overview of sensor networks, the energy problem is introduced, classifying the different energy reduction approaches according to the subsystem they try to optimize. To manage the complexity brought by these techniques, a quick overview of the most common middlewares for WSNs is given, describing in detail SPINE2, a framework for data processing in the node environment. The focus then shifts to in-network aggregation techniques, used to reduce the data sent by the network nodes and thereby prolong the network lifetime as long as possible. Among these techniques, the most promising approach is Compressive Sensing (CS). To investigate this technique, a practical implementation of the algorithm is compared against a simpler aggregation scheme, deriving a mixed algorithm able to successfully reduce the power consumption. The analysis then moves from compression implemented on single nodes to CS for signal ensembles, exploiting the correlations among sensors and nodes to improve compression and reconstruction quality. The two main techniques for signal ensembles, Distributed CS (DCS) and Kronecker CS (KCS), are introduced and compared on a common set of data gathered from real deployments. The best trade-off between reconstruction quality and power consumption is then investigated. The usage of CS is also addressed when the signal of interest is sampled at a sub-Nyquist rate, evaluating the reconstruction performance. Finally, group-sparsity CS (GS-CS) is compared to another well-known technique for the reconstruction of signals from a highly sub-sampled version. These two frameworks are again compared on a real data set, and an analysis of the trade-off between reconstruction quality and lifetime is given.
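The measurement/recovery cycle at the core of CS can be sketched in a few lines. The following Python snippet is a hedged illustration with invented dimensions, not the thesis implementation: a sparse signal is compressed by a random projection, as a sensor node would do before transmission, and recovered at the sink with orthogonal matching pursuit (OMP).

```python
# Minimal compressive-sensing sketch (hypothetical parameters, not the
# thesis implementation): sample a sparse signal with a random Gaussian
# matrix and reconstruct it with orthogonal matching pursuit (OMP).
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                    # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrix
y = Phi @ x                                      # compressed samples sent by a node

def omp(Phi, y, k):
    """Greedy OMP recovery of a k-sparse signal."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the column most correlated with the residual
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

print("reconstruction error:", np.linalg.norm(omp(Phi, y, k) - x))
```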
Abstract:
The new generation of multicore processors opens new perspectives for the design of embedded systems. Multiprocessing, however, poses new challenges to the scheduling of real-time applications, in which ever-increasing computational demands are constantly flanked by the need to meet critical time constraints. Many research works have contributed to this field by introducing new advanced scheduling algorithms. However, although many of these works have solidly demonstrated their effectiveness, the actual support for multiprocessor real-time scheduling offered by current operating systems is still very limited. This dissertation deals with implementation aspects of real-time schedulers in modern embedded multiprocessor systems. The first contribution is an open-source scheduling framework capable of realizing complex multiprocessor scheduling policies, such as G-EDF, on conventional operating systems by exploiting only their native scheduler from user space. A set of experimental evaluations compares the proposed solution to other research projects that pursue the same goals by means of kernel modifications, highlighting comparable scheduling performance. The principles that underpin the operation of the framework, originally designed for symmetric multiprocessors, have been further extended, first to asymmetric multiprocessors, which are subject to major restrictions such as the lack of support for task migrations, and later to re-programmable hardware architectures (FPGAs). In the latter case, this work introduces a scheduling accelerator, which offloads most of the scheduling operations to the hardware and exhibits extremely low scheduling jitter. The realization of a portable scheduling framework posed many interesting software challenges, one of which was timekeeping. In this regard, a further contribution is a novel data structure, called the addressable binary heap (ABH). The ABH, conceptually a pointer-based implementation of a binary heap, shows very good average- and worst-case performance when addressing the problem of tick-less timekeeping with high-resolution timers.
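To make the timekeeping contribution more concrete, the following sketch shows a simplified addressable min-heap in Python. It is an array-based stand-in for the pointer-based ABH described above, not the thesis's implementation; the property it illustrates is that every armed timer keeps a handle, so cancellation and reprogramming stay O(log n), while the root always yields the next deadline to program a one-shot, tick-less timer.

```python
# Simplified sketch of an addressable min-heap for tick-less timekeeping
# (array-based stand-in for the pointer-based ABH; handles are assumed unique).
import math

class AddressableHeap:
    def __init__(self):
        self.a = []            # entries: [expiry, handle]
        self.pos = {}          # handle -> index in self.a

    def _swap(self, i, j):
        self.a[i], self.a[j] = self.a[j], self.a[i]
        self.pos[self.a[i][1]], self.pos[self.a[j][1]] = i, j

    def _sift_up(self, i):
        while i > 0 and self.a[i][0] < self.a[(i - 1) // 2][0]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self.a)
        while True:
            c = min((j for j in (2 * i + 1, 2 * i + 2) if j < n),
                    key=lambda j: self.a[j][0], default=None)
            if c is None or self.a[i][0] <= self.a[c][0]:
                return
            self._swap(i, c)
            i = c

    def insert(self, expiry, handle):
        self.a.append([expiry, handle])
        self.pos[handle] = len(self.a) - 1
        self._sift_up(len(self.a) - 1)

    def cancel(self, handle):           # addressability: O(log n) removal
        i = self.pos.pop(handle)
        last = self.a.pop()
        if i < len(self.a):
            self.a[i] = last
            self.pos[last[1]] = i
            self._sift_up(i)
            self._sift_down(i)

    def next_expiry(self):              # deadline for the one-shot hardware timer
        return self.a[0][0] if self.a else math.inf
```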
Abstract:
Nuclear magnetic resonance (NMR) is a versatile technique that relies on spin-bearing nuclei. Since its discovery, NMR has become an indispensable tool in countless applications in physics, chemistry, biology, and medicine. The main problem of NMR is its low sensitivity, caused by the very small energy splitting at room temperature. For proton spins, which possess the largest gyromagnetic ratio, the degree of polarization is only ~7*10^(-5) even in the largest available magnetic fields (24 T). Given this low inherent polarization, a theoretical sensitivity gain of more than 10^4 is therefore possible. In this work, various technical aspects and different polarizing agents for Dynamic Nuclear Polarization (DNP) were investigated. The technical development of the mobile setup comprises the use of a new Halbach magnet, the construction of new probe heads, and the automation of the experiments by means of a LabVIEW-based program. Furthermore, two new polarizing agents with special properties were tested for Overhauser and low-temperature DNP. In addition, the feasibility of NMR experiments on heteronuclei (19F and 13C) in the mobile setup at 0.35 T was demonstrated. These results highlight the potential of the DNP polarization technique when heteronuclei with a small gyromagnetic ratio have to be polarized. The gain in sensitivity should enable many new applications, especially in medicine.
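The low polarization figure quoted above follows directly from the Boltzmann distribution across the Zeeman splitting. A quick numerical check (physical constants only; temperature assumed near 300 K) reproduces the quoted order of magnitude:

```python
# Back-of-the-envelope check of the thermal polarization quoted above:
# P = tanh(gamma * hbar * B / (2 * k_B * T)) for protons at B = 24 T.
import numpy as np

gamma_p = 2.675e8      # proton gyromagnetic ratio, rad s^-1 T^-1
hbar    = 1.0546e-34   # J s
k_B     = 1.3807e-23   # J K^-1
B, T    = 24.0, 300.0  # field (T) and temperature (K, assumed)

P = np.tanh(gamma_p * hbar * B / (2 * k_B * T))
print(f"thermal proton polarization: {P:.1e}")   # on the order of 10^-5..10^-4
```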
Abstract:
In many industries, for example the automotive industry, digital mock-ups are used to verify the design and the function of a product on a virtual prototype. One use case is the verification of safety clearances between individual components, the so-called distance analysis. For selected components, engineers determine whether these maintain a prescribed safety distance to the surrounding components, both at rest and during a motion. If components fall below the safety distance, their shape or position must be changed. To this end it is important to know exactly which regions of the components violate the safety distance.

In this work we present a solution for the real-time computation of all regions between two geometric objects that fall below the safety distance. Each object is given as a set of primitives (e.g., triangles). For every instant at which a transformation is applied to one of the objects, we compute the set of all primitives that fall below the safety distance and call it the set of all tolerance-violating primitives. We present a complete solution, which can be divided into the following three major topics.

In the first part of this work we study algorithms that check, for two triangles, whether they are tolerance-violating. We present several approaches for triangle-triangle tolerance tests and show that dedicated tolerance tests perform considerably better than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for the computation of all tolerance-violating primitives, our dual-space approach proves to be the fastest.

The second part of this work deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure composed of a flat hierarchical structure and several uniform grids. To guarantee efficient running times, it is essential to account for the required safety distance in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs to be tested. Moreover, we develop strategies for recognizing primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. In our benchmarks we show that our solutions are able to compute, in real time, all tolerance-violating primitives between two complex geometric objects consisting of many hundreds of thousands of primitives each.

In the third part we present a novel, memory-optimized data structure for managing the cell contents of the uniform grids used before, which we call Shrubs. Previous approaches to memory optimization of uniform grids mainly rely on hashing methods, which, however, do not reduce the memory consumption of the cell contents. In our application, neighboring cells often have similar contents. Our approach is able to losslessly compress the cell contents of a uniform grid, exploiting these redundancies, to one fifth of the original size and to decompress them at runtime.

Finally we show how our solution for the computation of all tolerance-violating primitives can be applied in practice. Besides pure distance analysis, we present applications to several path-planning problems.
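As an illustration of the filtering strategies summarized above, the following hedged Python sketch shows two cheap conservative tests (not the thesis's dual-space test): an inflated-bounding-box check that safely rejects pairs, and a bounding-sphere check that safely accepts pairs as tolerance-violating, so the expensive exact triangle-triangle test only runs on the remainder.

```python
# Two cheap filters around a clearance d (illustrative, not the dual-space test).
import numpy as np

def aabb_reject(tri_a, tri_b, d):
    """True if the AABB of tri_a, inflated by d, misses the AABB of tri_b:
    the pair can then never violate the safety distance d."""
    lo_a, hi_a = tri_a.min(axis=0) - d, tri_a.max(axis=0) + d
    lo_b, hi_b = tri_b.min(axis=0), tri_b.max(axis=0)
    return np.any(hi_a < lo_b) or np.any(hi_b < lo_a)

def sphere_accept(tri_a, tri_b, d):
    """True if even the farthest point pair is closer than d (center distance
    plus both radii): the pair certainly violates the tolerance."""
    c_a, c_b = tri_a.mean(axis=0), tri_b.mean(axis=0)
    r_a = np.linalg.norm(tri_a - c_a, axis=1).max()
    r_b = np.linalg.norm(tri_b - c_b, axis=1).max()
    return np.linalg.norm(c_a - c_b) + r_a + r_b < d

tri_a = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
tri_b = np.array([[0.1, 0, 0.2], [1, 0, 0.2], [0, 1, 0.2]])
d = 0.5
if not aabb_reject(tri_a, tri_b, d) and not sphere_accept(tri_a, tri_b, d):
    print("exact triangle-triangle tolerance test needed")
```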
Abstract:
With the outlook of improving seismic vulnerability assessment for the city of Bishkek (Kyrgyzstan), the global dynamic behaviour of four nine-storey reinforced-concrete large-panel buildings is studied in the elastic regime. The four buildings were built during the Soviet era within a serial production system. Since they all belong to the same series, they have very similar geometries both in plan and in height. Firstly, ambient vibration measurements are performed in the four buildings. The data analysis, consisting of discrete Fourier transform, modal analysis (frequency domain decomposition), and deconvolution interferometry, yields the modal characteristics and an estimate of the linear impulse response function for the structures of the four buildings. Then, finite element models are set up for all four buildings and the results of the numerical modal analysis are compared with the experimental ones. The numerical models are finally calibrated against the first three global modes, after which their results match the experimental ones with an error of less than 20%.
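The core of the frequency domain decomposition step can be shown compactly. The sketch below uses synthetic two-channel data with one assumed 2.5 Hz mode (not the Bishkek recordings): it builds the cross-spectral density matrix with SciPy and picks modal frequencies from the peaks of its first singular value.

```python
# Minimal frequency-domain-decomposition (FDD) sketch on synthetic data.
import numpy as np
from scipy.signal import csd

fs = 100.0
t = np.arange(0, 600, 1 / fs)
rng = np.random.default_rng(1)
mode = np.sin(2 * np.pi * 2.5 * t)               # hypothetical 2.5 Hz mode
x = np.vstack([mode + 0.5 * rng.standard_normal(t.size),
               0.8 * mode + 0.5 * rng.standard_normal(t.size)])

n_ch = x.shape[0]
f, _ = csd(x[0], x[0], fs=fs, nperseg=1024)      # frequency axis
G = np.zeros((f.size, n_ch, n_ch), dtype=complex)
for i in range(n_ch):
    for j in range(n_ch):
        _, G[:, i, j] = csd(x[i], x[j], fs=fs, nperseg=1024)

s1 = np.linalg.svd(G, compute_uv=False)[:, 0]    # first singular value per frequency
print(f"dominant frequency: {f[np.argmax(s1)]:.2f} Hz")   # ~2.5 Hz
```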
Abstract:
The purpose of this study was to assess the performance of a new motion correction algorithm. Twenty-five dynamic MR mammography (MRM) data sets and 25 contrast-enhanced three-dimensional peripheral MR angiographic (MRA) data sets, affected by patient motion of varying severity, were selected retrospectively from routine examinations. Anonymized data were registered by a new experimental elastic motion correction algorithm. The algorithm works by computing a similarity measure for the two volumes that takes into account expected signal changes due to the presence of a contrast agent while penalizing other signal changes caused by patient motion. A conjugate gradient method is used to find the set of motion parameters that maximizes the similarity measure across the entire volume. Images before and after correction were visually evaluated and scored by experienced radiologists with respect to reduction of motion, improvement of image quality, disappearance of existing lesions, and creation of artifactual lesions. The correction was found to improve image quality (76% for MRM and 96% for MRA) and diagnosability (60% for MRM and 96% for MRA).
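The optimization loop described above can be illustrated schematically. The sketch below is deliberately reduced to a 2-D rigid translation with a plain sum-of-squared-differences cost (the actual algorithm is elastic and uses a contrast-aware similarity measure); SciPy's conjugate gradient optimizer recovers the motion parameters that align two frames.

```python
# Schematic registration sketch: estimate a 2-D shift between two frames
# by minimizing an SSD cost with a conjugate-gradient optimizer.
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

rng = np.random.default_rng(2)
pre = rng.random((64, 64))
post = nd_shift(pre, (1.5, -2.0), order=1) + 0.01 * rng.standard_normal((64, 64))

def cost(p):
    """Penalize signal changes explained by motion (not by contrast)."""
    moved = nd_shift(post, -p, order=1)
    return np.mean((moved - pre) ** 2)

res = minimize(cost, x0=np.zeros(2), method="CG")
print("estimated shift:", res.x)        # ~ [1.5, -2.0]
```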
Abstract:
Particulate matter (PM) emissions standards set by the US Environmental Protection Agency (EPA) have become increasingly stringent over the years; the EPA limit for PM from heavy-duty diesel engines was reduced to 0.01 g/bhp-hr for the year 2010. Heavy-duty diesel engines therefore make use of an aftertreatment filtration device, the Diesel Particulate Filter (DPF). DPFs are highly efficient in filtering PM (known as soot) and are an integral part of the 2010 heavy-duty diesel aftertreatment system. PM accumulates in the DPF as the exhaust gas flows through it and must periodically be removed by oxidation for the filter to keep functioning efficiently. This oxidation process is known as regeneration. There are two types of regeneration processes: active regeneration (oxidation of PM by external means) and passive oxidation (oxidation of PM by internal means). Active regeneration typically occurs at high temperatures, about 500-600 °C, well above normal diesel exhaust temperatures; the exhaust temperature therefore has to be raised with the help of external devices such as a Diesel Oxidation Catalyst (DOC) or a fuel burner, and O2 then oxidizes the PM, producing CO2 as the oxidation product. In passive oxidation, one route of regeneration uses NO2, which oxidizes the PM producing NO and CO2 as oxidation products; this process occurs at lower temperatures (200-400 °C) than active regeneration. Generally, DPF substrate walls are washcoated with catalyst material, which is observed to increase the rate of PM oxidation. The goal of this research is to develop a simple mathematical model to simulate PM depletion during the active regeneration process in a DPF (catalyzed and non-catalyzed). A simple, zero-dimensional kinetic model was developed in MATLAB. The experimental data required for calibration were obtained from active regeneration experiments performed on PM-loaded mini DPFs in an automated flow reactor; the DPFs were loaded with PM from the exhaust of a commercial heavy-duty diesel engine. The model was calibrated to these data, and numerical gradient-based optimization techniques were used to estimate its kinetic parameters.
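A minimal version of such a zero-dimensional kinetic model fits in a few lines. In the sketch below (written in Python rather than MATLAB, with placeholder Arrhenius constants instead of the calibrated values), the retained PM mass decays at an Arrhenius rate scaled by the O2 mole fraction.

```python
# Zero-dimensional PM oxidation sketch: dm/dt = -k0 * exp(-Ea/(R*T)) * y_O2 * m.
# All kinetic constants below are assumed placeholders, not calibrated values.
import numpy as np
from scipy.integrate import solve_ivp

k0, Ea, R = 1.0e7, 1.5e5, 8.314     # 1/s, J/mol, J/(mol K); assumed values
T, y_O2 = 873.0, 0.10               # ~600 C active regeneration; O2 mole fraction

def dmdt(t, m):
    return -k0 * np.exp(-Ea / (R * T)) * y_O2 * m

sol = solve_ivp(dmdt, (0.0, 1200.0), [5.0])      # 5 g initial PM load
print(f"PM left after 20 min: {sol.y[0, -1]:.2f} g")
```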
Abstract:
Interactive ray tracing of non-trivial scenes is just becoming feasible on a single graphics processing unit (GPU). Recent work in this area focuses on building effective acceleration structures that work well under the constraints of current GPUs. Most approaches are targeted at static scenes and only allow navigation through the virtual scene; so far, support for dynamic scenes has not been considered for GPU implementations. We have developed a GPU-based ray tracing system for dynamic scenes consisting of a set of individual objects. Each object may move independently, but its geometry and topology are static.
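The design implied above is a two-level scheme: a static acceleration structure per object, with only the object transforms changing per frame. The following hedged CPU-side sketch (the `bvh.intersect` query is a hypothetical stand-in for the GPU traversal) shows how a ray is transformed into each object's space instead of rebuilding any structure:

```python
# Two-level traversal sketch for rigidly moving objects: per-object static
# structures are reused; only the ray is transformed per frame.
import numpy as np

class RigidObject:
    def __init__(self, static_bvh, transform):
        self.bvh = static_bvh                        # built once; geometry is static
        self.world_to_obj = np.linalg.inv(transform) # 4x4 homogeneous transform

    def intersect(self, origin, direction):
        # Transform the ray into object space instead of rebuilding the structure.
        o = (self.world_to_obj @ np.append(origin, 1.0))[:3]
        d = self.world_to_obj[:3, :3] @ direction
        return self.bvh.intersect(o, d)              # hypothetical static-BVH query

def trace(scene, origin, direction):
    """Return the nearest hit (a dict with key 't') over all objects, or None."""
    hits = (obj.intersect(origin, direction) for obj in scene)
    return min((h for h in hits if h is not None),
               key=lambda h: h["t"], default=None)
```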
Abstract:
Current advanced cloud infrastructure management solutions allow scheduling actions that dynamically change the number of running virtual machines (VMs). This approach, however, does not guarantee that the scheduled number of VMs will properly handle the actual user-generated workload, especially if user utilization patterns change. We propose using a dynamically generated scaling model for the VMs containing the services of a distributed application, able to react to variations in the number of application users. We answer the following question: how can we dynamically decide how many services of each type are needed to handle a larger workload within the same time constraints? We describe a mechanism for dynamically composing the SLAs that control the scaling of distributed services, combining data analysis mechanisms with application benchmarking over multiple VM configurations. By processing the data sets generated by multiple application benchmarks, we discover a set of service monitoring metrics able to predict critical Service Level Agreement (SLA) parameters. By combining this set of predictor metrics with a heuristic for selecting appropriate scaling-out paths for the services of distributed applications, we show how SLA scaling rules can be inferred and then used to control the runtime scale-in and scale-out of distributed services. We validate our architecture and models by performing scaling experiments with a distributed application representative of enterprise-class information systems, and show how dynamically generated SLAs can be successfully used to control the management of distributed service scaling.
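The inference of a scaling rule from benchmark data can be illustrated with a toy example. The sketch below (all numbers invented) fits response time against request rate for each benchmarked VM count and selects the smallest count whose predicted latency stays within the SLA limit.

```python
# Toy inference of a scale-out rule from benchmark data (made-up numbers).
import numpy as np

# benchmark samples: (vm_count, requests/s, response time in ms)
bench = np.array([(1, 50, 120), (1, 100, 260), (2, 100, 130),
                  (2, 200, 270), (4, 200, 140), (4, 400, 280)], float)

models = {}                              # vm_count -> linear fit (slope, intercept)
for n in np.unique(bench[:, 0]):
    rows = bench[bench[:, 0] == n]
    models[int(n)] = np.polyfit(rows[:, 1], rows[:, 2], 1)

def vms_needed(rate, sla_ms):
    """Smallest benchmarked VM count whose predicted latency meets the SLA."""
    for n in sorted(models):
        if np.polyval(models[n], rate) <= sla_ms:
            return n
    return None                          # workload exceeds the benchmarked range

print(vms_needed(rate=180, sla_ms=250))  # -> 2 under these made-up numbers
```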
Abstract:
The African great lakes are of utmost importance for the local economy (fishing) and essential to the survival of the local people. During the past decades, these lakes have experienced fast changes in ecosystem structure and functioning, and their future evolution is a major concern. In this study, a set of one-dimensional lake models is evaluated for the first time for Lake Kivu (2.28°S, 28.98°E), East Africa. The unique limnology of this meromictic lake, with the importance of salinity and subsurface springs in a tropical high-altitude climate, presents a worthy challenge to the seven models involved in the Lake Model Intercomparison Project (LakeMIP). Meteorological observations from two automatic weather stations are used to drive the models, whereas a unique dataset containing over 150 temperature profiles recorded since 2002 is used to assess the models' performance. Simulations are performed over the freshwater layer only (60 m) and over the average lake depth (240 m), since salinity increases with depth below 60 m in Lake Kivu and some lake models do not account for the influence of salinity upon lake stratification. All models are able to reproduce the mixing seasonality in Lake Kivu, as well as the magnitude and seasonal cycle of the lake enthalpy change. Differences between the models can be ascribed to variations in the treatment of the radiative forcing and the computation of the turbulent heat fluxes. Fluctuations in wind velocity and solar radiation explain the inter-annual variability of observed water column temperatures. The good agreement between the deep simulations and the observed meromictic stratification also shows that a subset of models is able to account for the salinity- and geothermal-induced effects upon deep-water stratification. Finally, based on the strengths and weaknesses discerned in this study, an informed choice of a one-dimensional lake model for a given research purpose becomes possible.
Abstract:
Most statistical analysis, in both theory and practice, is concerned with static models: models with a proposed set of parameters whose values are fixed across observational units. Static models implicitly assume that the quantified relationships remain the same across the design space of the data. While this is reasonable under many circumstances, it can be a dangerous assumption when dealing with sequentially ordered data. The mere passage of time always brings fresh considerations, and the interrelationships among parameters, or subsets of parameters, may need to be continually revised. When data are gathered sequentially, dynamic interim monitoring may be useful, as new subject-specific parameters are introduced with each new observational unit. Sequential imputation via dynamic hierarchical models is an efficient strategy for handling missing data and analyzing longitudinal studies. Dynamic conditional independence models offer a flexible framework that exploits the Bayesian updating scheme to capture the evolution of both population and individual effects over time. While static models often describe aggregate information well, they often do not reflect conflicts in the information at the individual level; dynamic models prove advantageous over static models in capturing both individual and aggregate trends. Computations for such models can be carried out via the Gibbs sampler. An application to a small-sample, repeated-measures, normally distributed growth-curve data set is presented.
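As a minimal illustration of the Gibbs computations mentioned above, the sketch below samples a one-level normal model with conjugate priors (a deliberately reduced stand-in for the full dynamic hierarchical model), alternating draws of the mean given the precision and of the precision given the mean.

```python
# Toy Gibbs sampler for a normal model with conjugate priors.
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(2.0, 1.5, size=40)            # synthetic growth-curve residuals
n, ybar = y.size, y.mean()
mu0, tau0, a0, b0 = 0.0, 1e-2, 1.0, 1.0      # vague conjugate priors

mu, tau = 0.0, 1.0
draws = []
for _ in range(5000):
    # mu | tau, y  ~  Normal(posterior mean, 1/posterior precision)
    prec = tau0 + n * tau
    mu = rng.normal((tau0 * mu0 + n * tau * ybar) / prec, prec ** -0.5)
    # tau | mu, y  ~  Gamma(a0 + n/2, rate = b0 + SS/2); numpy uses scale = 1/rate
    tau = rng.gamma(a0 + n / 2, 1.0 / (b0 + 0.5 * np.sum((y - mu) ** 2)))
    draws.append(mu)

print("posterior mean of mu:", np.mean(draws[1000:]))   # ~2.0
```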
Abstract:
One-dimensional dynamic computer simulation was employed to investigate the separation and the change in migration order of ketoconazole enantiomers at low pH in the presence of increasing amounts of (2-hydroxypropyl)-β-cyclodextrin (OHP-β-CD). The 1:1 interaction of ketoconazole with the neutral cyclodextrin was simulated under real experimental conditions and by varying the input parameters for complex mobilities and complexation constants. Simulation results obtained with experimentally determined apparent ionic mobilities, complex mobilities, and complexation constants were found to compare well with the calculated separation selectivity and the experimental data. The simulation data revealed that the migration order of the ketoconazole enantiomers at low OHP-β-CD concentrations (i.e., below the migration order inversion) is essentially determined by the difference in complexation constants, and at high OHP-β-CD concentrations (i.e., above the inversion) by the difference in complex mobilities. Furthermore, simulations with the complex mobilities set to zero provided data that mimic the migration order and separation obtained with an immobilized chiral selector. For the studied CEC configuration, no migration order inversion is predicted, and separations are shown to be quicker, with reduced electrophoretic transport, in comparison to migration in free solution. The presented data illustrate that dynamic computer simulation is a valuable tool for studying the electrokinetic migration and separation of enantiomers in the presence of a complexing agent.
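The inversion mechanism described above follows from the standard expression for the effective mobility of an analyte in a 1:1 complexation equilibrium. The toy computation below (with invented mobilities and constants, not the measured ketoconazole values) shows the migration order flipping as the selector concentration increases:

```python
# Effective mobility under 1:1 complexation:
#   mu_eff = (mu_free + mu_cplx * K * C) / (1 + K * C)
# At low C the complexation constants K dominate; at high C the complex
# mobilities dominate, which can invert the enantiomer migration order.
import numpy as np

mu_free = 20.0                 # free-drug mobility (arbitrary units)
K = np.array([800.0, 1000.0])  # complexation constants, enantiomers R/S (assumed)
mu_cplx = np.array([5.0, 6.5]) # complex mobilities (assumed)

for C in [1e-4, 1e-3, 1e-2]:   # selector concentration, mol/L
    mu_eff = (mu_free + mu_cplx * K * C) / (1 + K * C)
    order = "R first" if mu_eff[0] > mu_eff[1] else "S first"
    print(f"C={C:.0e}: mu_eff={mu_eff.round(2)}, {order}")
```

With these numbers the order is "R first" at 10^-4 and 10^-3 mol/L but "S first" at 10^-2 mol/L, reproducing the inversion qualitatively.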
Abstract:
Geostrophic surface velocities can be derived from the gradients of the mean dynamic topography, i.e., the difference between the mean sea surface and the geoid. Independently observed mean dynamic topography data are therefore valuable input parameters and constraints for ocean circulation models. For a successful fit to observational dynamic topography data, not only the mean dynamic topography on the particular ocean model grid is required, but also information about its inverse covariance matrix. The calculation of the mean dynamic topography from satellite-based gravity field models and altimetric sea surface height measurements, however, is not straightforward. For this purpose, we previously developed an integrated approach to combining these two different observation groups in a consistent way without using the common filter approaches (Becker et al. in J Geodyn 59(60):99-110, 2012, doi:10.1016/j.jog.2011.07.0069; Becker in Konsistente Kombination von Schwerefeld, Altimetrie und hydrographischen Daten zur Modellierung der dynamischen Ozeantopographie, 2012, http://nbn-resolving.de/nbn:de:hbz:5n-29199). This combination method considers the full spectral range of the observations and allows the direct determination of the normal equations (i.e., the inverse of the error covariance matrix) of the mean dynamic topography on arbitrary grids, which is one of the requirements for ocean data assimilation. In this paper, we report progress through the selection and improved processing of altimetric data sets. We focus on the preprocessing steps applied to along-track altimetry data from Jason-1 and Envisat to obtain a mean sea surface profile. During this procedure, a rigorous variance propagation is accomplished, so that, for the first time, the full covariance matrix of the mean sea surface is available. The combination of the mean profile and a combined GRACE/GOCE gravity field model yields a mean dynamic topography model for the North Atlantic Ocean that is characterized by a defined set of assumptions. We show that including the geodetically derived mean dynamic topography with its full error structure in a 3D stationary inverse ocean model improves the modeled oceanographic features over previous estimates.
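At a single grid point, the basic combination and variance propagation can be written down directly. The sketch below uses made-up numbers, with simple covariance addition of independent observation groups standing in for the paper's rigorous propagation; it computes the mean dynamic topography, its covariance, and the corresponding normal-equation matrix:

```python
# Per-grid-point sketch: MDT = mean sea surface - geoid, with error
# covariances of independent observation groups adding up.
import numpy as np

mss   = np.array([44.10, 44.35])         # altimetric mean sea surface, m (made up)
geoid = np.array([43.60, 43.95])         # GRACE/GOCE geoid heights, m (made up)
C_mss   = np.diag([4e-4, 4e-4])          # error covariance of the MSS, m^2
C_geoid = np.diag([9e-4, 9e-4])          # error covariance of the geoid, m^2

mdt = mss - geoid                        # mean dynamic topography
C_mdt = C_mss + C_geoid                  # propagated covariance (independence assumed)
N = np.linalg.inv(C_mdt)                 # normal-equation matrix for assimilation
print(mdt, np.sqrt(np.diag(C_mdt)))
```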
Abstract:
Calving is a major mechanism of ice discharge of the Antarctic and Greenland ice sheets, and a change in calving front position affects the entire stress regime of marine-terminating glaciers. The representation of calving front dynamics in a 2-D or 3-D ice sheet model remains non-trivial. Here, we present the theoretical and technical framework for a level-set method, an implicit boundary tracking scheme, which we implement into the Ice Sheet System Model (ISSM). This scheme allows us to study the dynamic response of a drainage basin to user-defined calving rates. We apply the method to Jakobshavn Isbræ, a major marine-terminating outlet glacier of the West Greenland Ice Sheet. The model robustly reproduces the high sensitivity of the glacier to calving, and we find that enhanced calving triggers significant acceleration of the ice stream. Upstream acceleration is sustained through a combination of mechanisms, while both lateral stress and ice influx act to stabilize the ice stream. This study provides new insights into the ongoing changes occurring at Jakobshavn Isbræ and emphasizes that the incorporation of moving boundaries and dynamic lateral effects, not captured in flow-line models, is key for realistic model projections of sea level rise on centennial timescales.
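The essence of the level-set scheme can be shown in one dimension. In the toy sketch below (invented velocities and grid; ISSM's implementation is 2-D/3-D and far more elaborate), the zero crossing of the level-set function marks the calving front, which retreats when the calving rate exceeds the ice velocity.

```python
# 1-D level-set toy: advect phi with front speed v = v_ice - calving rate
# using a first-order upwind step; the zero crossing of phi is the front.
import numpy as np

dx, dt = 100.0, 5.0e3                 # grid spacing (m), time step (s); CFL = 0.5
x = np.arange(0.0, 5.0e4, dx)
phi = x - 3.0e4                       # signed distance; front starts at x = 30 km
v_ice, calving = 0.02, 0.03           # m/s (assumed); calving outpaces ice flow
v = v_ice - calving                   # v < 0: the front retreats upstream

for _ in range(50):
    # upwind gradient: forward difference for v < 0, backward for v > 0
    grad = (np.roll(phi, -1) - phi) / dx if v < 0 else (phi - np.roll(phi, 1)) / dx
    phi = phi - dt * v * grad
    phi[0], phi[-1] = phi[1], phi[-2] # crude boundary handling

front = x[np.argmin(np.abs(phi))]
print(f"front position after 50 steps: {front/1e3:.1f} km")  # ~27.5 km (retreated)
```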
Abstract:
Models are an effective tool for systems and software design, allowing software architects to abstract away irrelevant details. These qualities are also useful for the technical management of networks, systems and software, such as those that compose service-oriented architectures. Models can provide a set of well-defined abstractions over the distributed heterogeneous service infrastructure that enable its automated management. We propose to use the managed system as a source of dynamically generated runtime models, and to decompose management processes into a composition of model transformations. We have created an autonomic service deployment and configuration architecture that obtains, analyzes, and transforms system models to apply the required actions, while remaining oblivious to the low-level details. An instrumentation layer automatically builds these models and translates the planned management actions into operations on the system. We illustrate these concepts with a distributed service update operation.