Abstract:
This dissertation describes an approach for developing a real-time simulation of working mobile vehicles based on multibody modeling. Multibody modeling allows a comprehensive description of the constrained motion of the mechanical systems involved and permits real-time solution of the equations of motion. By carefully selecting the multibody formulation method, it is possible to increase the accuracy of the multibody model while still solving the equations of motion in real time. In this study, a multibody procedure based on semi-recursive and augmented Lagrangian methods for real-time dynamic simulation is studied in detail. In the semi-recursive approach, a velocity transformation matrix is introduced to map the dependent coordinates into relative (joint) coordinates, which reduces the number of generalized coordinates. The augmented Lagrangian method is based on the use of global coordinates, and constraints are enforced through an iterative process. A multibody system can be modelled using either rigid or flexible bodies. When using flexible bodies, the system can be described with a floating frame of reference formulation, in which the required deformation modes can be obtained from a finite element model. As the finite element model typically involves a large number of degrees of freedom, a reduced set of deformation modes can be obtained by employing model order reduction methods such as Guyan reduction, the Craig-Bampton method, and Krylov subspaces, as shown in this study. The constrained motion of working mobile vehicles is actuated by forces from hydraulic actuators. In this study, the hydraulic system is modelled using lumped fluid theory, in which the hydraulic circuit is divided into discrete volumes and pressure wave propagation in the hoses and pipes is neglected. Contact modeling is divided into two stages: contact detection and contact response. Contact detection determines when and where contact occurs, and contact response provides the force acting at the collision point. The friction between tire and ground is modelled using the LuGre friction model, which describes the frictional force between two surfaces. Typically, the equations of motion are solved in full matrix format, where the sparsity of the matrices is not exploited. Increasing the number of bodies and constraint equations makes the system matrices large and sparse in structure. To increase computational efficiency, a technique for the solution of sparse matrices is proposed in this dissertation and its implementation is demonstrated. To assess computing efficiency, the augmented Lagrangian and semi-recursive methods are implemented employing the sparse matrix technique. The numerical examples show that the proposed approach is applicable and produces appropriate results within the real-time limit.
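To make the sparse-matrix idea concrete, the sketch below assembles the constrained equations of motion as a sparse saddle-point (KKT) system and solves it with SciPy's sparse LU factorization. This is a minimal illustration under generic assumptions (a mass matrix M, constraint Jacobian Phi_q, generalized forces Q, and acceleration-level constraint right-hand side gamma), not the dissertation's semi-recursive or augmented Lagrangian implementation; all names are illustrative.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def constrained_accelerations(M, Phi_q, Q, gamma):
    """Solve the sparse saddle-point system of a constrained multibody system:
        [ M       Phi_q^T ] [qdd]   [ Q     ]
        [ Phi_q   0       ] [lam] = [ gamma ]
    M     : (n x n) sparse mass matrix
    Phi_q : (m x n) sparse constraint Jacobian
    Q     : (n,)  generalized force vector
    gamma : (m,)  right-hand side of the acceleration-level constraints
    Returns the accelerations qdd and the Lagrange multipliers lam."""
    n = M.shape[0]
    K = sp.bmat([[M, Phi_q.T], [Phi_q, None]], format="csc")  # sparse KKT matrix
    sol = spla.spsolve(K, np.concatenate([Q, gamma]))          # sparse LU solve
    return sol[:n], sol[n:]

# Tiny example: two unit masses joined by the constraint q1 - q2 = 0.
M = sp.identity(2, format="csc")
Phi_q = sp.csc_matrix(np.array([[1.0, -1.0]]))
qdd, lam = constrained_accelerations(M, Phi_q, Q=np.array([1.0, 0.0]), gamma=np.array([0.0]))
print(qdd, lam)   # both masses accelerate together at 0.5
```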
Abstract:
This study examines how a U.S.-based, international MBA school has been able to achieve competitive advantage within a relatively short period of time. A framework is built to show how dynamic capability theory and value co-creation theory are connected, and to understand how the school's dynamic capabilities have enabled value co-creation between the school and its students, leading to its competitive advantage. The data collection method followed a qualitative single-case study with a process perspective. Seven semi-structured interviews were conducted in September and October of 2015; one current employee of the MBA school was interviewed, with the other six being graduates and/or former employees of the MBA school. In addition, the researcher has worked as a recruiter at the MBA school, which helped in building bridges between the empirical findings and forming them into a coherent whole. Data analysis was conducted by first identifying themes from the interviews, after which a narrative was written and a causal network model was built. Thus, a combination of thematic analysis, narrative analysis, and grounded theory was used as the data analysis method. This study finds that value co-creation is enabled by the dynamic capabilities of the MBA school; conversely, the capabilities would not be dynamic if value co-creation did not take place. Thus, this study shows that even though the two theories represent different levels of analysis, they are intertwined and together can help explain competitive advantage. The MBA case school's dynamic capabilities are identified as its sales and marketing capabilities and its international market creation capabilities. The study therefore finds that the MBA school does not co-create value only with existing students (customers) in the school setting; instead, most of the value co-creation happens between the school and the student cohorts (network) already in the recruiting phase. Therefore, as a theoretical implication, the network should be considered part of the context. The main value created seems to lie in the MBA case school's international setting and networks. MBA schools around the world can learn from this study: schools should try to find their own niche and specialize, based on their own values and capabilities. With a differentiating focus and unique, practical content, schools can and should be well marketed and proactively sold in order to receive more student applications and enhance competitive advantage. Even though an MBA school can effectively be treated as a business, as the study shows, the main emphasis should still be on providing quality education. Good content with efficient marketing can be the winning combination for an MBA school.
Abstract:
Highly dynamic systems, often considered resilient systems, are characterised by abiotic and biotic processes under continuous and strong changes in space and time. Because of this variability, detecting overlapping anthropogenic stress is challenging. Coastal areas harbour dynamic ecosystems in the form of open sandy beaches, which cover the vast majority of the world's ice-free coastline. These ecosystems are currently threatened by increasing human-induced pressure, including the mass development of opportunistic macroalgae (mainly Chlorophyta, so-called green tides) resulting from the eutrophication of coastal waters. The ecological impact of opportunistic macroalgal blooms (green tides, and blooms formed by other opportunistic taxa) has long been evaluated within sheltered and non-tidal ecosystems. Little is known, however, about how more dynamic ecosystems, such as open macrotidal sandy beaches, respond to such stress. This thesis assesses the effects of anthropogenic stress on the structure and functioning of highly dynamic ecosystems, using sandy beaches impacted by green tides as a case study. The thesis is based on four field studies, which analyse natural sandy sediment benthic community dynamics over several temporal (from monthly to multi-year) and spatial (from local to regional) scales. In this thesis, I report long-lasting responses of sandy beach benthic invertebrate communities to green tides, across thousands of kilometres and over seven years, and highlight more pronounced responses of zoobenthos living in exposed sandy beaches compared to semi-exposed sands. Within exposed sandy sediments, and across a vertical scale (from inshore to nearshore sandy habitats), I also demonstrate that the effect of algal mats on intertidal benthic invertebrate communities is more pronounced than that on subtidal benthic invertebrate assemblages or on flatfish communities. Focussing on small-scale variation in the most affected faunal group (i.e. benthic invertebrates living on the low shore), this thesis reveals a decrease in overall beta-diversity along a eutrophication gradient manifested in the form of green tides, as well as the increasing importance of biological variables in explaining the ecological variability of sandy beach macrobenthic assemblages along the same gradient. To illustrate the processes associated with the structural shifts observed where green tides occurred, I investigated the effects of high biomasses of opportunistic macroalgae (Ulva spp.) on the trophic structure and functioning of sandy beaches. This work reveals a progressive simplification of sandy beach food web structure and a modification of energy pathways over time, through direct and indirect effects of Ulva mats on several trophic levels. Through this thesis I demonstrate that highly dynamic systems respond differently (e.g. a shift in δ13C, but not in δ15N) and more subtly (e.g. no mass mortality of benthos was found) to anthropogenic stress than has previously been shown for more sheltered and non-tidal systems. Obtaining these results would not have been possible without the approach used throughout this work; I therefore present a framework coupling field investigations with analytical approaches to describe shifts in highly variable ecosystems under human-induced stress.
Abstract:
According to Diener (1984), the three primary components of subjective well-being (SWB) are high life satisfaction (LS), frequent positive affect (PA), and infrequent negative affect (NA). The present dissertation extends previous research and theorizing on SWB by testing an innovative framework developed by Shmotkin (2005) in which SWB is conceptualized as an agentic process that promotes and maintains positive functioning. Two key components of Shmotkin's framework were explored in a longitudinal study of university students. In Part 1, SWB was examined as an integrated system of components organized within individuals. Using cluster analysis, five distinct configurations of LS, PA, and NA were identified at each wave. Individuals' SWB configurations were moderately stable over time, with the highest and lowest stabilities observed among participants characterized by "high SWB" and "low SWB" configurations, respectively. Changes in SWB configurations toward a high SWB pattern, and stability among participants already characterized by high SWB, coincided with better than expected mental, physical, and interpersonal functioning over time. More positive levels of functioning and improvements in functioning over time discriminated among SWB configurations. However, prospective effects of SWB configurations on subsequent functioning were not observed. In Part 2, subjective temporal perspective "trajectories" were examined based on individuals' ratings of their past, present, and anticipated future LS. Upward subjective LS trajectories were normative at each wave. Cross-sectional analyses revealed consistent associations between upward subjective trajectories and lower levels of LS, as well as less positive mental, physical, and interpersonal functioning. Upward subjective LS trajectories were biased with respect to both underestimation of past LS and overestimation of future LS, demonstrating their illusory nature. Further, whereas more negative retrospective bias was associated with greater current distress and dysfunction, more positive prospective bias was associated with less positive functioning in the future. Prospective relations, however, were not consistently observed. Thus, a steep upward subjective LS trajectory appeared to be a form of wishful thinking rather than an adaptive form of self-enhancement. Major limitations and important directions for future research are considered. Implications for Shmotkin's (2005) framework, and for research on SWB more generally, are also discussed.
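As a toy illustration of the cluster-analytic step described above (not the dissertation's data or procedure), the sketch below standardizes hypothetical LS, PA, and NA scores and partitions participants into five configurations with scikit-learn's KMeans; all values and names are made up.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical wave-1 scores: columns are life satisfaction (LS),
# positive affect (PA), and negative affect (NA) for each participant.
rng = np.random.default_rng(0)
scores = rng.normal(size=(300, 3))

z = StandardScaler().fit_transform(scores)                    # standardize LS, PA, NA
model = KMeans(n_clusters=5, n_init=10, random_state=0).fit(z)

# Each participant is assigned to one of five SWB configurations; the cluster
# centroids describe the prototypical LS/PA/NA profile of each configuration.
print(model.cluster_centers_)
print(np.bincount(model.labels_))   # configuration sizes
```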
Abstract:
The aim of this thesis is to price options on equity index futures with an application to standard options on S&P 500 futures traded on the Chicago Mercantile Exchange. Our methodology is based on stochastic dynamic programming, which can accommodate European as well as American options. The model accommodates dividends from the underlying asset. It also captures the optimal exercise strategy and the fair value of the option. This approach is an alternative to available numerical pricing methods such as binomial trees, finite differences, and ad-hoc numerical approximation techniques. Our numerical and empirical investigations demonstrate convergence, robustness, and efficiency. We use this methodology to value exchange-listed options. The European option premiums thus obtained are compared to Black's closed-form formula. They are accurate to four digits. The American option premiums also have a similar level of accuracy compared to premiums obtained using finite differences and binomial trees with a large number of time steps. The proposed model accounts for deterministic, seasonally varying dividend yield. In pricing futures options, we discover that what matters is the sum of the dividend yields over the life of the futures contract and not their distribution.
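For reference, a minimal implementation of Black's (1976) closed-form formula, the benchmark against which the European premiums above are compared; the input values in the example are illustrative, not taken from the thesis.

```python
from math import log, sqrt, exp
from statistics import NormalDist

def black76(F, K, T, r, sigma, call=True):
    """Black's (1976) formula for a European option on a futures contract.
    F: futures price, K: strike, T: time to expiry (years),
    r: continuously compounded risk-free rate, sigma: volatility."""
    N = NormalDist().cdf
    d1 = (log(F / K) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    if call:
        return exp(-r * T) * (F * N(d1) - K * N(d2))
    return exp(-r * T) * (K * N(-d2) - F * N(-d1))

# Illustrative values only: an at-the-money call on an index futures contract.
print(black76(F=1000.0, K=1000.0, T=0.5, r=0.03, sigma=0.2))
```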
Characterizing Dynamic Optimization Benchmarks for the Comparison of Multi-Modal Tracking Algorithms
Abstract:
Population-based metaheuristics, such as particle swarm optimization (PSO), have been employed to solve many real-world optimization problems. Although it is often sufficient to find a single solution to these problems, there are cases where identifying multiple, diverse solutions can be beneficial or even required. Some of these problems are further complicated by a change in their objective function over time. This type of optimization is referred to as dynamic, multi-modal optimization. Algorithms which exploit multiple optima in a search space are known as niching algorithms. Although numerous dynamic niching algorithms have been developed, their performance is often measured solely on their ability to find a single, global optimum. Furthermore, the comparisons often use synthetic benchmarks whose landscape characteristics are generally limited and unknown. This thesis provides a landscape analysis of the dynamic benchmark functions commonly developed for multi-modal optimization. The benchmark analysis results reveal that the mechanisms responsible for dynamism in the current dynamic benchmarks do not significantly affect landscape features, suggesting a lack of representation of problems whose landscape features vary over time. This analysis is used in a comparison of current niching algorithms to identify the effects that specific landscape features have on niching performance. Two performance metrics are proposed to measure both the scalability and the accuracy of the niching algorithms. The algorithm comparison results indicate which algorithms are best suited to a variety of dynamic environments. The comparison also examines each algorithm in terms of its niching behaviour and analyzes the range of, and trade-off between, scalability and accuracy when tuning the algorithms' respective parameters. These results contribute to the understanding of current niching techniques as well as the problem features that ultimately dictate their success.
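To illustrate what a dynamic, multi-modal benchmark looks like, here is a minimal moving-peaks-style function in which an environment change shifts the peaks over time; it is a generic sketch, not one of the specific benchmarks analyzed in the thesis.

```python
import numpy as np

class MovingPeaks:
    """A minimal moving-peaks-style dynamic, multi-modal benchmark (illustrative
    only; parameters and ranges are made up)."""
    def __init__(self, n_peaks=5, dim=2, seed=0):
        self.rng = np.random.default_rng(seed)
        self.centers = self.rng.uniform(0, 100, size=(n_peaks, dim))
        self.heights = self.rng.uniform(30, 70, size=n_peaks)
        self.widths = self.rng.uniform(1, 12, size=n_peaks)

    def __call__(self, x):
        # Fitness is the height of the tallest cone-shaped peak at x.
        d = np.linalg.norm(self.centers - x, axis=1)
        return np.max(self.heights - self.widths * d)

    def change(self, severity=1.0):
        # An environment change shifts every peak, altering the landscape over time.
        self.centers += self.rng.normal(scale=severity, size=self.centers.shape)

f = MovingPeaks()
x = np.array([50.0, 50.0])
print(f(x)); f.change(); print(f(x))   # same point, different fitness after the change
```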
Abstract:
We characterize the solution to a model of consumption smoothing using financing under non-commitment and savings. We show that, under certain conditions, these two different instruments complement each other perfectly. If the rate of time preference is equal to the interest rate on savings, perfect smoothing can be achieved in finite time. We also show that, when random revenues are generated by periodic investments in capital through a concave production function, the level of smoothing achieved through financial contracts can influence the productive investment efficiency. As long as financial contracts cannot achieve perfect smoothing, productive investment will be used as a complementary smoothing device.
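As a point of reference for the claim about the rate of time preference and the interest rate, the standard consumption-smoothing benchmark (not the paper's specific contracting model) can be written as:

```latex
% Standard consumption-smoothing benchmark (illustrative, not the paper's model):
% with discount factor \beta and savings interest rate r, the Euler equation is
\[
  u'(c_t) = \beta (1+r)\, \mathbb{E}_t\!\left[ u'(c_{t+1}) \right],
\]
% so when the rate of time preference equals the interest rate, \beta(1+r)=1,
% marginal utility is a martingale and consumption is smoothed over time:
\[
  \beta (1+r) = 1 \;\Longrightarrow\; u'(c_t) = \mathbb{E}_t\!\left[ u'(c_{t+1}) \right].
\]
```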
Abstract:
This paper employs the one-sector Real Business Cycle model as a testing ground for four different procedures to estimate Dynamic Stochastic General Equilibrium (DSGE) models. The procedures are: 1) Maximum Likelihood, with and without measurement errors and incorporating Bayesian priors, 2) Generalized Method of Moments, 3) Simulated Method of Moments, and 4) Indirect Inference. Monte Carlo analysis indicates that all procedures deliver reasonably good estimates under the null hypothesis. However, there are substantial differences in statistical and computational efficiency in the small samples currently available to estimate DSGE models. GMM and SMM appear to be more robust to misspecification than the alternative procedures. The implications of the stochastic singularity of DSGE models for each estimation method are fully discussed.
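As a compact illustration of the Simulated Method of Moments logic used in such comparisons, the sketch below estimates the parameters of a toy AR(1) "model economy" by matching simulated moments to data moments; it is a hypothetical example with an identity weighting matrix, not the paper's actual Monte Carlo design.

```python
import numpy as np
from scipy.optimize import minimize

# Toy SMM: estimate the persistence rho and shock s.d. sigma of an AR(1) by
# matching the variance and first autocovariance of simulated series to those
# of the data. A DSGE application would simulate the full model instead.

def ar1(rho, shocks):
    y = np.zeros(len(shocks))
    for t in range(1, len(shocks)):
        y[t] = rho * y[t - 1] + shocks[t]
    return y

def moments(y):
    return np.array([np.var(y), np.cov(y[:-1], y[1:])[0, 1]])

rng = np.random.default_rng(0)
data = ar1(0.9, rng.normal(scale=1.0, size=500))   # pretend this is observed data
m_data = moments(data)

# Fix the simulation shocks once (common random numbers) so the objective is smooth.
E = np.random.default_rng(1).normal(size=(20, 500))

def smm_objective(theta):
    rho, sigma = theta
    sims = np.array([moments(ar1(rho, sigma * e)) for e in E])
    g = m_data - sims.mean(axis=0)                  # moment discrepancies
    return g @ g                                    # identity weighting matrix

est = minimize(smm_objective, x0=[0.5, 0.5], method="Nelder-Mead")
print(est.x)   # SMM estimates of (rho, sigma)
```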
Abstract:
In transportation studies, route choice models describe a traveler's selection of a path from an origin to a destination. More precisely, the task is to find, in a network composed of arcs and nodes, the sequence of arcs connecting two nodes according to given criteria. In this work we apply dynamic programming to represent the choice process, treating the choice of a path as a sequence of arc choices. In addition, we use approximation techniques from dynamic programming to represent imperfect knowledge of the network state, in particular for arcs far from the current position. More precisely, each time a traveler reaches an intersection, they consider the utility of a certain number of upcoming arcs, and an estimate is then made for the remainder of the path to the destination. The route choice model is implemented within a discrete-event traffic simulation model. The resulting model is tested on a model of a real road network to study its performance.
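The sketch below illustrates the dynamic-programming core of link-by-link route choice on a made-up network: the value of each node is its best cost-to-go to the destination, and at every intersection the next arc minimizes link cost plus the downstream value. Deterministic costs stand in here for the utilities and approximations described above; this is not the thesis's implementation.

```python
import math

# Hypothetical network: node -> {next node: link cost}.
links = {
    "A": {"B": 2.0, "C": 5.0},
    "B": {"C": 1.0, "D": 4.0},
    "C": {"D": 1.5},
    "D": {},
}
dest = "D"

# Backward dynamic programming: V[n] is the cost-to-go from node n to dest.
V = {n: math.inf for n in links}
V[dest] = 0.0
for _ in range(len(links)):                      # Bellman iterations
    for n, out in links.items():
        if out:
            V[n] = min(c + V[m] for m, c in out.items())

# At each intersection, choose the arc minimizing link cost plus downstream value.
policy = {n: min(out, key=lambda m: out[m] + V[m]) for n, out in links.items() if out}
print(V, policy)   # expected cost-to-go and chosen next link at each node
```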
Abstract:
Observing the execution of JavaScript applications is usually done by instrumenting an industrial virtual machine (VM) or by performing an ad hoc and complex source-to-source translation. This thesis presents an alternative based on layering virtual machines. Our approach performs a source-to-source translation of a program during its execution in order to expose its low-level operations through a flexible object model. These low-level operations can then be redefined at runtime in order to observe them. To limit the performance penalty introduced, our approach exploits the fast original operations of the underlying VM where possible and applies just-in-time compilation techniques in the layered VM. Our implementation, Photon, is on average 19% faster than a modern interpreter, and between 19× and 56× slower on average than the just-in-time compilers used in popular web browsers. This thesis therefore shows that VM layering is a competitive alternative to modifying a modern JavaScript interpreter when applied to runtime observation of object operations and function calls.
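As a loose, language-shifted analogue of redefining low-level object operations for observation (Photon itself operates on JavaScript through source-to-source translation), the Python sketch below intercepts an object's property reads and writes at runtime; all names are illustrative.

```python
# Observing low-level object operations by redefining them at runtime
# (a Python analogue, not Photon's actual JavaScript mechanism).
class Observed:
    """Wraps any object and logs every attribute read and write."""
    def __init__(self, target):
        object.__setattr__(self, "_target", target)

    def __getattr__(self, name):
        value = getattr(object.__getattribute__(self, "_target"), name)
        print(f"get {name!r} -> {value!r}")
        return value

    def __setattr__(self, name, value):
        print(f"set {name!r} = {value!r}")
        setattr(object.__getattribute__(self, "_target"), name, value)

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Observed(Point(1, 2))
p.x          # logs the property read
p.y = 5      # logs the property write
```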
Abstract:
Software is increasingly complex, and its development is often carried out by dispersed and changing teams. Moreover, most software today is reused rather than developed from scratch. The comprehension task, inherent to maintenance tasks, consists in analyzing several dimensions of the software in parallel. The time dimension enters the software at two levels: the software changes during its evolution and during its execution. These changes take on a particular meaning when they are analyzed together with other dimensions of the software. Analyzing multidimensional data is a difficult problem, but some methods make it possible to work around this difficulty. Semi-automatic approaches, such as software visualization, allow the user to intervene during the analysis to explore and guide the search for information. In a first stage of the thesis, we apply visualization techniques to better understand the dynamics of software during evolution and execution. Changes over time are represented by heat maps, so the same graphical representation is used to visualize changes during evolution and changes during execution. Another category of approaches that help in understanding certain dynamic aspects of software relies on heuristics. In a second stage of the thesis, we address the identification of phases during evolution or during execution using the same approach. In this context, the premise is that there is an inherent coherence in the events that makes it possible to isolate subsets as phases. This coherence hypothesis is then defined specifically for code change events (evolution) or state change events (execution). The objective of the thesis is to study the unification of these two time dimensions, evolution and execution. This reflects our aim of bringing together two research areas that address the same category of problems, but from two different perspectives.
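A minimal sketch of the coherence premise for phase identification: consecutive events are grouped into the same phase while they remain similar, and a new phase starts when similarity drops. The events, the similarity measure, and the threshold below are illustrative, not the thesis's definition.

```python
# Phase identification by coherence: group consecutive events while they stay
# similar, start a new phase when similarity drops below a threshold.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def phases(events, threshold=0.3):
    """events: list of sets of touched entities (files changed, objects mutated, ...)."""
    groups, current = [], [events[0]]
    for prev, ev in zip(events, events[1:]):
        if jaccard(prev, ev) >= threshold:
            current.append(ev)
        else:                      # coherence broken: close the phase
            groups.append(current)
            current = [ev]
    groups.append(current)
    return groups

changes = [{"a.c", "b.c"}, {"a.c"}, {"a.c", "c.c"}, {"ui.c"}, {"ui.c", "menu.c"}]
print([len(p) for p in phases(changes)])   # two phases: core work, then UI work
```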
Inference for nonparametric high-frequency estimators with an application to time variation in betas
Abstract:
We consider the problem of conducting inference on nonparametric high-frequency estimators without knowing their asymptotic variances. We prove that a multivariate subsampling method achieves this goal under general conditions that were not previously available in the literature. We suggest a procedure for a data-driven choice of the bandwidth parameters. Our simulation study indicates that the subsampling method is much more robust than the plug-in method based on the asymptotic expression for the variance. Importantly, the subsampling method reliably estimates the variability of the Two Scale estimator even when its parameters are chosen to minimize the finite-sample Mean Squared Error; in contrast, the plug-in estimator substantially underestimates the sampling uncertainty. By construction, the subsampling method delivers estimates of the variance-covariance matrices that are always positive semi-definite. We use the subsampling method to study the dynamics of financial betas of six stocks on the NYSE. We document significant variation in betas within the year 2006, and find that tick data capture more variation in betas than data sampled at moderate frequencies such as every five or twenty minutes. To capture this variation we estimate a simple dynamic model for betas. The variance estimation is also important for the correction of the errors-in-variables bias in such models. We find that the bias corrections are substantial, and that betas are more persistent than the naive estimators would lead one to believe.
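For intuition, the sketch below shows the generic subsampling recipe for gauging an estimator's sampling variance from overlapping blocks (assuming a root-n convergence rate); it is a textbook simplification, not the paper's multivariate high-frequency construction.

```python
import numpy as np

def subsampling_variance(x, estimator, b):
    """Generic subsampling recipe: recompute the estimator on overlapping blocks
    of length b, and use the rescaled spread of the block estimates to gauge the
    sampling variance of the full-sample estimator (root-n rate assumed)."""
    n = len(x)
    theta_full = estimator(x)
    theta_blocks = np.array([estimator(x[i:i + b]) for i in range(n - b + 1)])
    sigma2_hat = b * np.mean((theta_blocks - theta_full) ** 2)   # asymptotic variance
    return sigma2_hat / n                                        # Var(theta_full)

# Illustrative check on the sample mean of simulated returns.
rng = np.random.default_rng(0)
returns = rng.normal(scale=0.01, size=2000)
var_of_mean = subsampling_variance(returns, np.mean, b=100)
print(var_of_mean, returns.var() / len(returns))   # similar orders of magnitude
```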
Abstract:
Context awareness, dynamic reconfiguration at runtime and heterogeneity are key characteristics of future distributed systems, particularly in ubiquitous and mobile computing scenarios. The main contributions of this dissertation are theoretical as well as architectural concepts facilitating information exchange and fusion in heterogeneous and dynamic distributed environments. Our main focus is on bridging the heterogeneity issues and, at the same time, considering uncertain, imprecise and unreliable sensor information in information fusion and reasoning approaches. A domain ontology is used to establish a common vocabulary for the exchanged information. We thereby explicitly support different representations for the same kind of information and provide Inter-Representation Operations that convert between them. Special account is taken of the conversion of associated meta-data that express uncertainty and impreciseness. The Unscented Transformation, for example, is applied to propagate Gaussian normal distributions across highly non-linear Inter-Representation Operations. Uncertain sensor information is fused using the Dempster-Shafer Theory of Evidence as it allows explicit modelling of partial and complete ignorance. We also show how to incorporate the Dempster-Shafer Theory of Evidence into probabilistic reasoning schemes such as Hidden Markov Models in order to be able to consider the uncertainty of sensor information when deriving high-level information from low-level data. For all these concepts we provide architectural support as a guideline for developers of innovative information exchange and fusion infrastructures that are particularly targeted at heterogeneous dynamic environments. Two case studies serve as proof of concept. The first case study focuses on heterogeneous autonomous robots that have to spontaneously form a cooperative team in order to achieve a common goal. The second case study is concerned with an approach for user activity recognition which serves as baseline for a context-aware adaptive application. Both case studies demonstrate the viability and strengths of the proposed solution and emphasize that the Dempster-Shafer Theory of Evidence should be preferred to pure probability theory in applications involving non-linear Inter-Representation Operations.
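As a small illustration of the Dempster-Shafer machinery referred to above, the sketch below combines two mass functions over a two-activity frame with Dempster's rule, with the mass assigned to the full frame representing explicitly modelled ignorance; the sensors, activities, and numbers are made up, not the dissertation's implementation.

```python
from itertools import product

# Dempster's rule of combination for two mass functions over the frame
# {"walking", "sitting"}. Focal elements are frozensets; the full frame
# carries the explicitly modelled ignorance.
FRAME = frozenset({"walking", "sitting"})

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb                          # mass on contradictory evidence
    return {s: v / (1.0 - conflict) for s, v in combined.items()}  # renormalize

m_sensor1 = {frozenset({"walking"}): 0.6, FRAME: 0.4}    # 0.4 = ignorance
m_sensor2 = {frozenset({"walking"}): 0.5, frozenset({"sitting"}): 0.2, FRAME: 0.3}
print(combine(m_sensor1, m_sensor2))
```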
Abstract:
Temporal changes in odor concentration are vitally important to many animals orienting and navigating in their environment. How are such temporal changes detected? Within the scope of the present work, an accurate stimulation and analysis system was developed to examine the dynamics of the physiological properties of Drosophila melanogaster olfactory receptor organs. Subsequently, a new method for delivering odor stimuli was tested and used to present the first dynamic characterization of olfactory receptors at the level of single neurons. Initially, recordings of the whole antenna were conducted while stimulating with different odors. The odor delivery system allowed the dynamic characterization of the whole fly antenna, including its sensilla and receptor neurons. Based on the electroantennogram data obtained, a new odor delivery method called the digital sequence method was developed. In addition, the degree of accuracy was enhanced, initially using electroantennograms and later using recordings of odorant receptor cells at the single-sensillum level. This work shows for the first time that different odors evoke different responses within one neuron depending on the chemical structure of the odor. The present work offers new insights into the dynamic properties of olfactory transduction in Drosophila melanogaster and describes the time-dependent parameters underlying these properties.
Abstract:
The increasing interconnection of information and communication systems leads to further growth in complexity and thus also to a further increase in security vulnerabilities. Classical protection mechanisms such as firewall systems and anti-malware solutions have long ceased to offer adequate protection against intrusion attempts into IT infrastructures. Intrusion Detection Systems (IDS) have established themselves as a very effective instrument for protection against cyber attacks. Such systems collect and analyze information from network components and hosts in order to detect unusual behaviour and security violations automatically. While signature-based approaches can only detect already known attack patterns, anomaly-based IDS are also able to recognize new, previously unknown attacks (zero-day attacks) at an early stage. The core problem of intrusion detection systems, however, lies in the optimal processing of the vast amount of network data and in the development of an adaptive detection model that works in real time. To address these challenges, this dissertation provides a framework consisting of two main parts. The first part, called OptiFilter, uses a dynamic queuing concept to process the continuously arriving network data, continuously assembles network connections, and exports structured input data for the IDS. The second part is an adaptive classifier comprising a classifier model based on an Enhanced Growing Hierarchical Self-Organizing Map (EGHSOM), a model of normal network behaviour (NNB), and an update model. In OptiFilter, tcpdump and SNMP traps are used to continuously aggregate network packets and host events. These aggregated network packets and host events are further analyzed and transformed into connection vectors. To improve the detection rate of the adaptive classifier, the GHSOM artificial neural network is studied intensively and substantially extended. Different approaches are proposed and discussed in this dissertation: a classification-confidence margin threshold is defined to uncover unknown malicious connections, the stability of the growing topology is increased through novel approaches for initializing the weight vectors and by strengthening the winner neurons, and a self-adaptive procedure is introduced to keep the model continuously up to date. Moreover, the main task of the NNB model is to examine further the unknown connections detected by the EGHSOM and to verify whether they are normal. However, network traffic changes constantly due to the concept drift phenomenon, which leads to non-stationary network data in real time. This phenomenon is handled by the update model: the EGHSOM model can effectively detect new anomalies, and the NNB model adapts optimally to the changes in the network data. In the experimental studies the framework showed promising results. In the first experiment the framework was evaluated in offline mode: OptiFilter was evaluated with offline, synthetic, and realistic data, and the adaptive classifier was assessed with 10-fold cross-validation to estimate its accuracy.
In the second experiment the framework was installed on a 1 to 10 GB network link and evaluated online in real time. OptiFilter successfully transformed the vast amount of network data into structured connection vectors, and the adaptive classifier classified them precisely. The comparative study between the developed framework and other well-known IDS approaches shows that the proposed IDS framework outperforms all the other approaches. This can be attributed to the following key points: the processing of the collected network data, the achievement of the best performance (e.g. overall accuracy), the detection of unknown connections, and the development of a real-time intrusion detection model.
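As a small illustration of the classification-confidence margin threshold idea, the sketch below accepts a class label only when the gap between the best and second-best class score exceeds a margin, and otherwise flags the connection as unknown for further inspection; the classes, scores, and margin are illustrative, not the EGHSOM's actual output.

```python
import numpy as np

def classify_with_margin(scores, classes, margin=0.2):
    """Accept the top class only if it beats the runner-up by at least `margin`;
    otherwise flag the connection as unknown (to be checked, e.g., by a
    normal-behaviour model)."""
    order = np.argsort(scores)[::-1]
    best, second = scores[order[0]], scores[order[1]]
    if best - second < margin:
        return "unknown"                    # low confidence: hand over for further checks
    return classes[order[0]]

classes = ["normal", "dos", "probe"]
print(classify_with_margin(np.array([0.52, 0.47, 0.01]), classes))   # -> "unknown"
print(classify_with_margin(np.array([0.90, 0.07, 0.03]), classes))   # -> "normal"
```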