917 results for dynamic time warping
Abstract:
Background: Most mortality atlases show static maps from count data aggregated over time. This procedure has several methodological problems and serious limitations for decision making in public health. The evaluation of health outcomes, including mortality, should be approached from a dynamic time perspective that is specific to each gender and age group. At present, studies in Spain do not provide a dynamic image of the population's mortality status from a spatio-temporal point of view. The aim of this paper is to describe the spatial distribution of all-cause mortality in small areas of Andalusia (Southern Spain) and its evolution over time from 1981 to 2006. Methods: A small-area ecological study was devised using the municipality as the unit of analysis. Two spatio-temporal hierarchical Bayesian models were estimated for each age group and gender. One of these was used to estimate the specific mortality rate, together with its time trend, and the other to estimate the specific rate ratio for each municipality compared with Spain as a whole. Results: More than 97% of the municipalities showed a diminishing or flat mortality trend in all gender and age groups. In 2006, over 95% of municipalities showed male and female specific mortality rates similar to or significantly lower than the Spanish rates for all age groups below 65. Municipalities in Western Andalusia systematically showed a significant excess of male and female mortality from 1981 to 2006, but only in age groups over 65. Conclusions: The study shows a dynamic geographical distribution of mortality, with a different pattern for each year, gender and age group. This information will contribute towards a reflection on the past, present and future of mortality in Andalusia.
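To make the two estimated quantities concrete, here is a minimal numpy sketch of the crude versions of what the models smooth: the specific mortality rate and the rate ratio of each municipality against the national reference. All counts, person-years and reference rates below are hypothetical, and the hierarchical Bayesian smoothing itself is not reproduced.

```python
import numpy as np

# Hypothetical data: rows are municipalities, columns are years.
deaths = np.array([[12, 15, 9], [30, 28, 33]])
person_years = np.array([[10e3, 10.2e3, 10.4e3],
                         [22e3, 22.1e3, 22.3e3]])

# Assumed national reference rate for each year (deaths per person-year).
national_rate = np.array([1.2e-3, 1.15e-3, 1.1e-3])

specific_rate = deaths / person_years        # crude specific mortality rate
expected = person_years * national_rate      # expected deaths at national rate
rate_ratio = deaths / expected               # ratio vs the country as a whole

print(specific_rate)
print(rate_ratio)
```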
Abstract:
We study the relationship between topological scales and dynamic time scales in complex networks. The analysis is based on the full dynamics towards synchronization of a system of coupled oscillators. In the synchronization process, modular structures corresponding to well-defined communities of nodes emerge in different time scales, ordered in a hierarchical way. The analysis also provides a useful connection between synchronization dynamics, complex networks topology, and spectral graph analysis.
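A minimal sketch of the kind of dynamics the abstract analyzes: Kuramoto-type phase oscillators coupled through a network, whose pairwise coherence develops block structure as communities synchronize at their own time scales. The network, coupling strength and integration scheme are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10
A = (rng.random((N, N)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                  # symmetric, no self-loops

theta = rng.uniform(0, 2 * np.pi, N)            # initial phases
omega = rng.normal(0, 1, N)                     # natural frequencies
K, dt = 2.0, 0.01

for _ in range(5000):                           # Euler integration
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + K * coupling)

# Pairwise phase coherence: values near 1 mark synchronized pairs, and its
# block structure at intermediate times reveals the community hierarchy.
rho = np.cos(theta[:, None] - theta[None, :])
print(rho.round(2))
```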
Abstract:
For general home monitoring, a system should automatically interpret people's actions. The system should be non-intrusive and able to deal with cluttered backgrounds and loose clothing. An approach based on spatio-temporal local features and a Bag-of-Words (BoW) model is proposed for single-person action recognition from combined intensity and depth images. To restore the temporal structure lost in the traditional BoW method, a dynamic time alignment technique with temporal binning is applied in this work, which has not previously been implemented in the literature for human action recognition on depth imagery. A novel human action dataset with depth data has been created using two Microsoft Kinect sensors. The ReadingAct dataset contains 20 subjects and 19 actions for a total of 2340 videos. To investigate the effect of using depth images and the proposed method, testing was conducted on three depth datasets, and the proposed method was compared to traditional Bag-of-Words methods. Results showed that the proposed method improves recognition accuracy when depth is added to the conventional intensity data, and has advantages when dealing with long actions.
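As an illustration of dynamic time alignment over temporally binned BoW histograms, here is a hedged sketch: a plain dynamic-time-warping distance between two sequences of per-bin histograms. The feature extraction and classifier of the paper are not reproduced; the function name, codebook size and dummy histograms are assumptions for the example.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """DTW over sequences of histograms with Euclidean local cost."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(1)
video_a = rng.random((8, 50))    # 8 temporal bins, 50-word codebook (dummy)
video_b = rng.random((11, 50))   # a longer action, same codebook
print(dtw_distance(video_a, video_b))
```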
Abstract:
The work for the present thesis started in California, during my semester as an exchange student overseas. California is known worldwide for its seismicity and for its effort in the earthquake engineering research field. For this reason, I immediately found interesting the proposal of the Structural Dynamics professor, Maria Q. Feng, to work on a pushover analysis of the existing Jamboree Road Overcrossing bridge. Concrete is a popular building material in California, and for the most part it serves its functions well. However, concrete is inherently brittle and performs poorly during earthquakes if not reinforced properly. The San Fernando Earthquake of 1971 dramatically demonstrated this characteristic. Shortly thereafter, code writers revised the design provisions for new concrete buildings so as to provide adequate ductility to resist strong ground shaking. There remain, nonetheless, millions of square feet of non-ductile concrete buildings in California. The purpose of this work is to perform a pushover analysis and compare the results with those of a nonlinear time-history analysis of an existing bridge located in Southern California. The analyses have been executed with the software OpenSees, the Open System for Earthquake Engineering Simulation. The Jamboree Road Overcrossing is classified as a Standard Ordinary Bridge: the JRO is a typical three-span continuous cast-in-place prestressed post-tensioned box girder. The total length of the bridge is 366 ft, and the heights of the two bents are 26.41 ft and 28.41 ft, respectively. Both the pushover analysis and the nonlinear time-history analysis require a model that accounts for the nonlinearities of the system; in order to execute nonlinear analyses of highway bridges, it is essential to incorporate an accurate model of the material behavior. It has been observed, after the occurrence of destructive earthquakes, that among the most damaged elements of highway bridges are the columns. To evaluate the performance of bridge columns during seismic events, an adequate model of the column must be incorporated. Part of the work of the present thesis is, in fact, dedicated to the modeling of the bents. Different types of nonlinear elements have been studied and modeled, with emphasis on the determination and location of the plasticity zone length. Furthermore, different models for the concrete and steel materials have been considered, and the parameters that define the constitutive laws of the different materials have been selected with care. The work is structured into four chapters; a brief overview of their content follows. The first chapter introduces the concepts related to capacity design, the current philosophy of seismic design. Furthermore, nonlinear analyses, both static (pushover) and dynamic (time-history), are presented. The final paragraph concludes with a short description of how to determine the seismic demand at a specific site, according to the latest design criteria in California. The second chapter deals with the formulation of force-based finite elements and the issues regarding the objectivity of the response in the nonlinear field. Both concentrated- and distributed-plasticity elements are discussed in detail. The third chapter presents the existing structure, the software used (OpenSees), and the modeling assumptions and issues. The creation of the nonlinear model represents a central part of this work. Nonlinear material constitutive laws for concrete and reinforcing steel are discussed in detail, as are the different scenarios employed in modeling the columns. Finally, the results of the pushover analysis are presented in chapter four. Capacity curves are examined for the different model scenarios used, and the failure modes of concrete and steel are discussed. The capacity curve is converted into a capacity spectrum and intersected with the design spectrum. In the last paragraph, the results of the nonlinear time-history analyses are compared with those of the pushover analysis.
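As a pointer to the conversion mentioned in chapter four, the sketch below applies the standard modal transformation Sa = (V/W)/alpha1 and Sd = d_roof/(Gamma1 * phi_roof) to a capacity curve. All structural numbers are placeholders, not values from the JRO model.

```python
import numpy as np

W = 5000.0            # seismic weight (kip), hypothetical
alpha1 = 0.85         # first-mode mass participation factor (assumed)
gamma1 = 1.3          # first-mode participation factor (assumed)
phi_roof = 1.0        # mode-shape amplitude at the monitored node

base_shear = np.array([0.0, 400.0, 900.0, 1100.0, 1150.0])   # V (kip)
roof_disp = np.array([0.0, 0.5, 1.5, 3.0, 5.0])              # d_roof (in)

Sa = (base_shear / W) / alpha1        # spectral acceleration (g)
Sd = roof_disp / (gamma1 * phi_roof)  # spectral displacement (in)
print(np.column_stack([Sd, Sa]))      # capacity spectrum in ADRS format
```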
Abstract:
In the present work, the pressure dependence of molecular dynamics was investigated by means of 2H-NMR and viscosity measurements. The low-molecular-weight organic glass former ortho-terphenyl (OTP) was chosen for the measurements, since the large number of existing studies allows it to be regarded as a model substance. In addition, measurements on salol were carried out. The investigations covered a wide pressure and temperature range, extending from the melt deep into the supercooled liquid. For experimental reasons, this range was always reached by increasing the pressure. Under pressure, both substances showed behavior very similar to that observed under temperature variation at ambient pressure. On a molecular-dynamics time scale from 10^-9 s to 10^2 s, a pressure-temperature-time superposition principle was therefore discussed using OTP as an example. In addition, a temperature-density scaling with rho*T^(-1/4) was carried out successfully; this corresponds to a purely repulsive potential varying as r^(-12±3). To decide whether the widths of the distributions of mean rotational correlation times are affected by pressure variation, results from other experimental methods were also drawn upon. Taking all measurement results together, both a temperature and a pressure dependence of the distribution width can be confirmed. For the evaluation of viscosity data, a procedure was presented that permits a quantitative statement about the fragility index of supercooled liquids even when the measurements do not extend down to the glass-transition temperature Tg. The evaluation of the pressure-dependent viscosity data of OTP and salol shows a very distinct pressure-dependent behavior of the fragility index for the two glass formers. OTP first shows a slight decrease and then again an increase of the fragility index; this result is also supported by simulation data taken from the literature. Salol, by contrast, first shows a clear increase and then a decrease of the fragility index. The different behavior of the two glass formers, which have a similar fragility index at ambient pressure, is attributed to the hydrogen bonds within salol.
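The viscosity-evaluation idea can be illustrated with a small sketch: fit a Vogel-Fulcher-Tammann law to log-viscosity data that stop above Tg, extrapolate to the conventional eta(Tg) = 10^12 Pa s, and evaluate the fragility index m = B*Tg/(Tg - T0)^2 there. This is a generic VFT-based stand-in with made-up data points, not the thesis' exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_eta_vft(T, A, B, T0):
    return A + B / (T - T0)                  # log10(eta), VFT form

T_data = np.array([260.0, 280.0, 300.0, 330.0, 370.0])   # K (made up)
log_eta = np.array([9.0, 5.0, 2.8, 0.8, -0.6])           # log10(Pa*s)

(A, B, T0), _ = curve_fit(log_eta_vft, T_data, log_eta,
                          p0=(-5.0, 600.0, 200.0))

# Tg from log10 eta(Tg) = 12  =>  Tg = T0 + B / (12 - A)
Tg = T0 + B / (12.0 - A)
m = B * Tg / (Tg - T0) ** 2                  # fragility index at Tg
print(Tg, m)
```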
Abstract:
This work concerns the seismic analysis of an existing reinforced concrete structure equipped with a special bracing device. The main objective of the research is to provide a simple procedure for designing the lateral bracing system in such a way that the actual behavior of the structure matches a desired, pre-defined objective curve. Great attention is devoted to the internal actions produced by the braces in the structural elements. The device used is the crescent-shaped brace. This is a special type of bracing because its banana-like geometry allows the designer more control over the stiffness of the structure, especially under cyclic behavior, unlike conventional bracing, which resists only through its axial stiffness. The device has been installed in a hospital in Italy; however, it has not been exposed to any ground motion so far. Different analysis methods, such as static pushover and dynamic time-history, have been used in the analysis of the structure.
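The stiffness-matching step of such a procedure can be reduced to a simple gap computation, sketched below with purely hypothetical numbers: the braces must supply the difference between the initial slope of the objective curve and that of the bare frame. The mapping from this target to the crescent-shaped brace geometry is specific to the work and not reproduced here.

```python
# All values are hypothetical placeholders, not the building's properties.
target_initial_stiffness = 80.0   # kN/mm, slope of the objective curve
bare_frame_stiffness = 55.0       # kN/mm, from a model of the unbraced frame

k_brace_required = target_initial_stiffness - bare_frame_stiffness
print(f"lateral stiffness to be provided by braces: {k_brace_required} kN/mm")
```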
Abstract:
Presents a simulation study of the costing of police custody operations at a UK police force. The custody operation incorporates the arrest, booking-in, interview, detention and court appearance activities. The Activity Based Costing (ABC) approach is used as a framework to show how costs are generated by the three "drivers" of cost, activity and resource. These relate, respectively, to the design efficiency of the process, the timing and mix of demand on the process, and the cost of resources used to undertake the process. The use of discrete-event simulation allows the incorporation of dynamic (time-dependent) and stochastic (variability) elements in the cost analysis. This enables both the amount and timing of the use of capacity, and the generation of cost, to be established. The concept of committed and flexible resources directs management decisions to the redeployment of unused capacity or, alternatively, the identification of additional capacity requirements.
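A minimal sketch of the idea using the simpy discrete-event library: detainees queue for a custody officer, and the time the resource is actually held is converted to cost at an assumed hourly rate, so both the amount and the timing of cost generation fall out of the simulation. Process times, the arrival pattern and the cost rate are invented for the example.

```python
import simpy

HOURLY_COST_CUSTODY_OFFICER = 30.0   # assumed resource cost rate
costs = []

def custody(env, name, officer):
    with officer.request() as req:
        yield req                       # queue for a custody officer
        start = env.now
        yield env.timeout(0.75)         # booking-in takes 45 min (assumed)
        costs.append((name, (env.now - start) * HOURLY_COST_CUSTODY_OFFICER))
    yield env.timeout(6.0)              # detention; officer not held

def arrivals(env, officer):
    for i in range(5):
        env.process(custody(env, f"detainee-{i}", officer))
        yield env.timeout(1.0)          # one arrest per hour (assumed)

env = simpy.Environment()
officer = simpy.Resource(env, capacity=1)
env.process(arrivals(env, officer))
env.run()
print(costs)
```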
Abstract:
Atomistic molecular dynamics provides powerful and flexible tools for the prediction and analysis of molecular and macromolecular systems. Specifically, it provides a means by which we can measure theoretically what cannot be measured experimentally: the dynamic time-evolution of complex systems comprising atoms and molecules. It is particularly suitable for the simulation and analysis of the otherwise inaccessible details of MHC-peptide interaction and, on a larger scale, the simulation of the immune synapse. Progress has been relatively tentative, yet the emergence of truly high-performance computing and the development of coarse-grained simulation now offer us the hope of accurately predicting thermodynamic parameters and of simulating not merely a handful of proteins but larger, longer systems comprising thousands of protein molecules and the cellular-scale structures they form. We exemplify this within the context of immunoinformatics.
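A toy sketch of the dynamic time-evolution referred to above: velocity-Verlet integration of a few Lennard-Jones particles in reduced units. Real MHC-peptide simulations rely on full force fields and dedicated MD engines; every number here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
# Eight particles on a small cubic lattice (reduced LJ units).
pos = 1.5 * np.array([[i, j, k] for i in (0, 1)
                      for j in (0, 1) for k in (0, 1)], float)
vel = 0.05 * rng.standard_normal((8, 3))
dt, mass = 0.001, 1.0

def lj_forces(pos):
    rij = pos[:, None, :] - pos[None, :, :]      # pairwise separation vectors
    r2 = (rij ** 2).sum(-1)
    np.fill_diagonal(r2, np.inf)                 # exclude self-interaction
    inv6 = r2 ** -3
    fmag = 24.0 * (2.0 * inv6 ** 2 - inv6) / r2  # |F|/r for the LJ potential
    return (fmag[:, :, None] * rij).sum(axis=1)

f = lj_forces(pos)
for _ in range(1000):                            # velocity-Verlet loop
    pos += vel * dt + 0.5 * (f / mass) * dt ** 2
    f_new = lj_forces(pos)
    vel += 0.5 * (f + f_new) / mass * dt
    f = f_new
print(pos.round(3))
```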
Abstract:
A novel approach to automatic ECG analysis based on a scale-space signal representation is proposed. The approach uses a curvature scale-space (CSS) representation to locate the main ECG waveform limits and peaks, and may be used either to correct the results of other ECG analysis techniques or independently. Moreover, dynamic matching of ECG CSS representations provides robust preliminary recognition of ECG abnormalities, which has been confirmed by experimental results.
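A hedged sketch of a curvature scale-space style representation for a 1-D trace: smooth with Gaussians of increasing width and track zero-crossings of the second derivative, which delimit peaks and waveform boundaries across scales. The paper's exact CSS construction and the dynamic matching step are not reproduced; the signal is synthetic.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

t = np.linspace(0.0, 1.0, 500)
# Synthetic ECG-like trace: a sharp peak with a shallow preceding dip.
signal = (np.exp(-((t - 0.5) / 0.02) ** 2)
          - 0.2 * np.exp(-((t - 0.4) / 0.05) ** 2))

for sigma in (2.0, 5.0, 10.0):                        # coarser and coarser scales
    d2 = gaussian_filter1d(signal, sigma, order=2)    # smoothed 2nd derivative
    zc = np.where(np.diff(np.signbit(d2)))[0]         # zero-crossing indices
    print(sigma, zc)                                  # one CSS "slice" per scale
```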
Abstract:
Leafy greens are an essential part of a healthy diet. Because of their health benefits, production and consumption of leafy greens have increased considerably in the U.S. in the last few decades. However, leafy greens have also been associated with a large number of foodborne disease outbreaks in the last few years. The overall goal of this dissertation was to use the current knowledge of predictive models and available data to understand the growth, survival, and death of enteric pathogens in leafy greens at the pre- and post-harvest levels. Temperature plays a major role in the growth and death of bacteria in foods. A growth-death model was developed for Salmonella and Listeria monocytogenes in leafy greens for the varying temperature conditions typically encountered in the supply chain. The developed growth-death models were validated using experimental dynamic time-temperature profiles available in the literature. Furthermore, these growth-death models for Salmonella and Listeria monocytogenes, and a similar model for E. coli O157:H7, were used to predict the growth of these pathogens in leafy greens during transportation without temperature control. Refrigeration of leafy greens serves to increase shelf-life and mitigate bacterial growth but, at the same time, storage of foods at lower temperatures increases the storage cost. Nonlinear programming was used to optimize the storage temperature of leafy greens during the supply chain while minimizing the storage cost and maintaining the desired levels of sensory quality and microbial safety. Most of the outbreaks associated with consumption of leafy greens contaminated with E. coli O157:H7 in the U.S. have occurred during July-November. A dynamic system model, consisting of subsystems and inputs (soil, irrigation, cattle, wildlife, and rainfall) and simulating a farm in a major leafy-greens-producing area of California, was developed. The model was simulated incorporating the events of planting, irrigation, harvesting, ground preparation for the new crop, contamination of soil and plants, and survival of E. coli O157:H7. The predictions of this system model are in agreement with the seasonality of the outbreaks. This dissertation utilized growth, survival, and death models of enteric pathogens in leafy greens during production and the supply chain.
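As an illustration of driving a primary growth model with a dynamic time-temperature profile, the sketch below couples a logistic primary model to a Ratkowsky square-root secondary model and integrates it over an assumed transport profile. The parameters b and Tmin, the profile and the counts are illustrative, not the fitted values from the dissertation.

```python
# Ratkowsky secondary model: sqrt(mu) = b * (T - Tmin) above Tmin.
b, Tmin = 0.02, 2.0                    # assumed parameters
N, Nmax = 1e2, 1e8                     # initial and maximum counts (CFU/g)
dt = 0.1                               # Euler step (h)

# Assumed truck-transport profile: (temperature C, duration h).
profile = [(4.0, 8.0), (15.0, 4.0), (25.0, 2.0)]
for temp, hours in profile:
    mu = (b * max(temp - Tmin, 0.0)) ** 2          # specific growth rate, 1/h
    for _ in range(int(hours / dt)):
        N += dt * mu * N * (1.0 - N / Nmax)        # logistic growth step
print(f"final count: {N:.2e} CFU/g")
```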
Abstract:
Recent literature has proved that many classical pricing models (Black and Scholes, Heston, etc.) and risk measures (VaR, CVaR, etc.) may lead to "pathological meaningless situations", since traders can build sequences of portfolios whose risk level tends to -infinity and whose expected return tends to +infinity, i.e., (risk = -infinity, return = +infinity). Such a sequence of strategies may be called a "good deal". This paper focuses on the risk measures VaR and CVaR and analyzes this caveat in a discrete-time complete pricing model. Under quite general conditions, the explicit expression of a good deal is given, and its sensitivity with respect to some possible measurement errors is provided too. We point out that a critical property is the absence of short sales. In such a case we first construct a "shadow riskless asset" (SRA) without short sales, and then the good deal is given by borrowing more and more money so as to invest in the SRA. It is also shown that the SRA is of interest in itself, even if there are short-selling restrictions.
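The mechanism is easy to reproduce numerically. In the made-up example below, an SRA-like payoff whose worst case still beats the funding cost is leveraged up, and the empirical VaR and CVaR fall without bound while the expected profit grows, i.e. (risk, return) -> (-infinity, +infinity).

```python
import numpy as np

rng = np.random.default_rng(3)
payoff = 1.05 + 0.02 * rng.random(10_000)   # SRA-like payoff: minimum > 1.05
funding = 1.02                              # cost of each borrowed unit

for leverage in (1.0, 10.0, 100.0):
    pnl = leverage * (payoff - funding)     # profit of the leveraged position
    losses = -pnl
    var95 = np.quantile(losses, 0.95)       # empirical VaR at 95%
    cvar95 = losses[losses >= var95].mean() # empirical CVaR at 95%
    print(leverage, pnl.mean(), var95, cvar95)
```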
Abstract:
High-level parallel languages offer a simple way for application programmers to specify parallelism in a form that easily scales with problem size, leaving the scheduling of the tasks onto processors to be performed at runtime. Therefore, if the underlying system cannot efficiently execute those applications on the available cores, the benefits will be lost. In this paper, we consider how to schedule highly heterogeneous parallel applications that require real-time performance guarantees on multicore processors. The paper proposes a novel scheduling approach that combines the global Earliest Deadline First (EDF) scheduler with a priority-aware work-stealing load balancing scheme, which enables parallel real-time tasks to be executed on more than one processor at a given time instant. Experimental results demonstrate the better scalability and lower scheduling overhead of the proposed approach compared to an existing real-time deadline-oriented scheduling class for the Linux kernel.
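A simplified sketch of the combination (not the actual Linux scheduling class): each core keeps an EDF-ordered heap of ready subtasks, and an idle core steals the earliest-deadline entry among the other cores rather than an arbitrary one. The task names and workload are invented for the example.

```python
import heapq

queues = {0: [], 1: []}                     # per-core EDF priority queues

def release(core, deadline, name):
    heapq.heappush(queues[core], (deadline, name))

def pick_next(core):
    if queues[core]:
        return heapq.heappop(queues[core])
    # Priority-aware steal: take the earliest-deadline job among other cores.
    victims = [(q[0], c) for c, q in queues.items() if c != core and q]
    if not victims:
        return None
    _, victim = min(victims)
    return heapq.heappop(queues[victim])

release(0, deadline=10, name="task-A.1")
release(0, deadline=4, name="task-A.2")
print(pick_next(1))                         # idle core 1 steals (4, 'task-A.2')
```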
Abstract:
Replication is a proven concept for increasing the availability of distributed systems. However, actively replicating every software component in distributed embedded systems may not be a feasible approach: not only are the available resources often limited, but the imposed overhead could also significantly degrade the system's performance. The paper proposes heuristics to dynamically determine which components to replicate based on their significance to the system as a whole, the consequent number of passive replicas, and where to place those replicas in the network. The results show that the proposed heuristics achieve a reasonably higher system availability than static offline decisions when lower replication ratios are imposed due to resource or cost limitations. The paper also introduces a novel approach to coordinating the activation of passive replicas in interdependent distributed environments. The proposed distributed coordination model reduces the complexity of the needed interactions among nodes and converges to a globally acceptable solution faster than a traditional centralised approach.
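A toy sketch of a significance-driven heuristic in this spirit: spend a limited replica budget greedily on the most significant components, granting an extra passive replica to the most critical ones. The significance scores, the budget and the allocation rule are hypothetical, and placement across network nodes is omitted.

```python
# Hypothetical component significance scores in [0, 1].
components = {"ctrl-loop": 0.9, "logger": 0.2, "comms": 0.7, "ui": 0.1}
budget = 4                                   # total passive replicas allowed

plan = {}
for name, significance in sorted(components.items(),
                                 key=lambda kv: kv[1], reverse=True):
    if budget == 0:
        break                                # budget exhausted: stop replicating
    replicas = min(budget, 1 + int(significance >= 0.8))  # extra if critical
    plan[name] = replicas
    budget -= replicas
print(plan)   # {'ctrl-loop': 2, 'comms': 1, 'logger': 1}
```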
Abstract:
Replication is a proven concept for increasing the availability of distributed systems. However, actively replicating every software component in distributed embedded systems may not be a feasible approach: not only are the available resources often limited, but the imposed overhead could also significantly degrade the system's performance. This paper proposes heuristics to dynamically determine which components to replicate based on their significance to the system as a whole, the consequent number of passive replicas, and where to place those replicas in the network. The activation of passive replicas is coordinated through a fast-convergence protocol that reduces the complexity of the needed interactions among nodes until a new collective global service solution is determined.
Abstract:
This paper proposes a dynamic scheduler that supports the coexistence of guaranteed and non-guaranteed bandwidth servers in order to handle overloads of soft tasks efficiently, by making additional capacity available from two sources: (i) residual capacity allocated but left unused when jobs complete in less than their budgeted execution time; and (ii) capacity stolen from inactive non-isolated servers used to schedule best-effort jobs. The effectiveness of the proposed approach in reducing the mean tardiness of periodic jobs is demonstrated through extensive simulations. The achieved results become even more significant when the tasks' computation times have a large variance.
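Source (i) can be illustrated in a few lines: when jobs complete below their budgeted execution time, the leftover budget accumulates as residual capacity that an overloaded soft task may consume before its next replenishment. The numbers and the Server class are illustrative only.

```python
class Server:
    """Toy bandwidth server: tracks the budgeted execution time per period."""
    def __init__(self, budget):
        self.budget = budget

    def run_job(self, actual_time):
        # Budget not consumed by an early-completing job becomes residual
        # capacity available for reclaiming by other servers.
        return max(self.budget - actual_time, 0.0)

reclaimable = 0.0
for server, used in [(Server(5.0), 3.2), (Server(2.0), 2.0)]:
    reclaimable += server.run_job(used)      # 1.8 units released in total

overrun = 1.5                    # extra demand from an overloaded soft task
served_from_residual = min(overrun, reclaimable)
print(served_from_residual)      # 1.5: overload absorbed by residual capacity
```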