943 results for Complex systems prediction
Abstract:
The formation of a market price for an asset can be understood as the superposition of the individual actions of the market participants, which cumulatively generate supply and demand. This is comparable to the emergence of macroscopic properties in statistical physics, which arise from microscopic interactions between the system components involved. The distribution of price changes in financial markets differs markedly from a Gaussian distribution. This leads to empirical peculiarities of the price process, which include, besides the scaling behavior, non-trivial correlation functions and temporally clustered volatility. The present work focuses on the analysis of financial market time series and the correlations they contain. A new method for quantifying pattern-based complex correlations of a time series is developed. With this methodology, significant evidence is found that typical behavioral patterns of financial market participants manifest themselves on short time scales, i.e. that the reaction to a given price history is not purely random, but rather that similar price histories provoke similar reactions. Building on the study of complex correlations in financial market time series, the question is addressed of which properties change at the transition from a positive to a negative trend. An empirical quantification by means of rescaling yields the result that, independently of the time scale considered, new price extrema are accompanied by an increase in transaction volume and a reduction of the time intervals between transactions. These dependencies exhibit characteristics that are also found in other complex systems in nature, and in physical systems in particular. Over nine orders of magnitude in time these properties are also independent of the market analyzed: trends that persist for only seconds show the same characteristics as trends on time scales of months. This opens up the possibility of learning more about financial market bubbles and their collapses, since trends on small time scales occur far more frequently. In addition, a Monte Carlo based simulation of the financial market is analyzed and extended in order to reproduce the empirical properties and to gain insight into their causes, which lie partly in the financial market microstructure and partly in the risk aversion of the trading participants. For the computationally intensive methods, a substantial reduction in computing time is achieved by parallelization on a graphics card architecture. To demonstrate the wide range of applications of graphics cards, a standard model of statistical physics, the Ising model, is also ported to the graphics card with significant runtime gains. Partial results of this work have been published in [PGPS07, PPS08, Pre11, PVPS09b, PVPS09a, PS09, PS10a, SBF+10, BVP10, Pre10, PS10b, PSS10, SBF+11, PB10].
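As an illustration of the standard model mentioned at the end of the abstract, here is a minimal Python sketch of a Metropolis update for the two-dimensional Ising model; it does not reproduce the GPU port or the financial market simulation from the thesis, and the lattice size and temperature are chosen purely for demonstration.

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep over a 2D Ising lattice with periodic boundaries."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # Sum of the four nearest neighbours (periodic boundary conditions).
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb          # energy change if spin (i, j) is flipped
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(32, 32))        # illustrative 32x32 lattice
for _ in range(200):
    metropolis_sweep(spins, beta=0.44, rng=rng)   # beta close to the critical point
print("magnetization per spin:", spins.mean())
```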
Abstract:
Our generation of computational scientists is living in an exciting time: not only do we get to pioneer important algorithms and computations, we also get to set standards on how computational research should be conducted and published. From Euclid’s reasoning and Galileo’s experiments, it took hundreds of years for the theoretical and experimental branches of science to develop standards for publication and peer review. Computational science, rightly regarded as the third branch, can walk the same road much faster. The success and credibility of science are anchored in the willingness of scientists to expose their ideas and results to independent testing and replication by other scientists. This requires the complete and open exchange of data, procedures and materials. The idea of “replication by other scientists” applied to computations is more commonly known as “reproducible research”. In this context, the journal “EAI Endorsed Transactions on Performance & Modeling, Simulation, Experimentation and Complex Systems” had the exciting and original idea of letting scientists submit, together with the article, the computational materials (software, data, etc.) that were used to produce its contents. The goal of this procedure is to allow the scientific community to verify the content of the paper by reproducing it on the platform, independently of the chosen operating system, to confirm or invalidate it, and above all to allow its reuse to produce new results. This procedure is of little use, however, without a minimum of methodological support: the raw data sets and the software are difficult to exploit without the logic that guided their use or their production. This led us to conclude that, in addition to the data sets and the software, one further element must be provided: the workflow that ties them all together.
Abstract:
Java Enterprise Applications (JEAs) are complex systems composed using various technologies that in turn rely on languages other than Java, such as XML or SQL. Given the complexity of these applications, the need to reverse engineer them in order to support further development becomes critical. In this paper we show how it is possible to split a system into layers and how it is possible to interpret the distance between application elements in order to support the refactoring of JEAs. The purpose of this paper is to explore ways to provide suggestions about the refactoring operations to perform on the code by evaluating the distance between layers and the elements belonging to those layers. We split JEAs into layers by considering the kinds and purposes of the elements composing the application. We measure the distance between elements using the notion of the shortest path in a graph. We also present how to enrich the interpretation of the distance value with enterprise pattern detection in order to refine the suggestions about modifications to perform on the code.
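As a rough illustration of the distance notion described above, the following Python sketch computes shortest path lengths between elements of a toy dependency graph; the element names, edges, and their layer interpretation are hypothetical and not taken from the paper.

```python
from collections import deque

# Hypothetical dependency graph of a small JEA: nodes are application elements,
# edges connect elements that reference each other (names are illustrative only).
graph = {
    "CustomerServlet": ["CustomerService"],
    "CustomerService": ["CustomerServlet", "CustomerDAO"],
    "CustomerDAO":     ["CustomerService", "customers.sql"],
    "customers.sql":   ["CustomerDAO"],
}

def distance(graph, source, target):
    """Length of the shortest path between two elements (BFS), or None if unreachable."""
    seen, queue = {source}, deque([(source, 0)])
    while queue:
        node, d = queue.popleft()
        if node == target:
            return d
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None

# A large distance between a presentation-layer and a data-layer element can hint
# at indirection that a refactoring might want to preserve or reduce.
print(distance(graph, "CustomerServlet", "customers.sql"))  # -> 3
```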
Abstract:
Simulation techniques are almost indispensable in the analysis of complex systems. Material and related information flow processes in logistics often possess such complexity. Further problems arise as the processes change over time and pose a Big Data problem as well. To cope with these issues, adaptive simulations are used more and more frequently. This paper presents a few relevant advanced simulation models and introduces a novel model structure which unifies the modelling of geometrical relations and time processes. In this way the process structure and its geometric relations can be handled in a well understandable and transparent way. The capabilities and applicability of the model are also presented via a demonstration example.
Abstract:
Kriging-based optimization relying on noisy evaluations of complex systems has recently motivated contributions from various research communities. Five strategies have been implemented in the DiceOptim package. The corresponding functions constitute a user-friendly tool for solving expensive noisy optimization problems in a sequential framework, while offering some flexibility for advanced users. Moreover, the implementation is done in a unified environment, making this package a useful device for studying the relative performance of existing approaches depending on the experimental setup. An overview of the package structure and interface is provided, as well as a description of the strategies and some insight into the implementation challenges and the proposed solutions. The strategies are compared to some existing optimization packages on analytical test functions and show promising performance.
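DiceOptim itself is an R package, so the following Python sketch only illustrates the general idea behind Kriging-based sequential optimization of a noisy function (a Gaussian process surrogate plus an expected improvement criterion); it is not one of the five strategies from the package, and the objective, kernel, and budget are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

def noisy_objective(x, rng):
    """Toy expensive function observed with noise (illustrative only)."""
    return np.sin(3 * x) + 0.1 * x**2 + rng.normal(scale=0.1)

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(8, 1))                     # initial design
y = np.array([noisy_objective(x[0], rng) for x in X])

kernel = Matern(nu=2.5) + WhiteKernel()                 # WhiteKernel models the observation noise
candidates = np.linspace(-2, 2, 401).reshape(-1, 1)

for _ in range(15):                                     # sequential design loop
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    mu, sd = gp.predict(candidates, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement (minimization)
    x_next = candidates[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, noisy_objective(x_next[0], rng))

print("best noisy observation:", y.min())
```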
Abstract:
Neurodegeneration in Parkinson's disease dementia (PDD) and dementia with Lewy bodies (DLB) affects cortical and subcortical networks involved in saccade generation. We therefore expected impairments in saccade performance in both disorders. In order to improve the pathophysiological understanding and to investigate the usefulness of saccades for differential diagnosis, saccades were tested in age- and education-matched patients with PDD (n = 20) and DLB (n = 20), Alzheimer's disease (n = 22) and Parkinson's disease (n = 24), and controls (n = 24). Reflexive (gap, overlap) and complex saccades (prediction, decision and antisaccade) were tested with electro-oculography. PDD and DLB patients had similar impairment in all tasks (P > 0.05, not significant). Compared with controls, they were impaired in both reflexive saccade execution (gap and overlap latencies, P < 0.0001; gains, P < 0.004) and complex saccade performance (target prediction, P < 0.0001; error decisions, P < 0.003; error antisaccades, P < 0.0001). Patients with Alzheimer's disease were only impaired in complex saccade performance (Alzheimer's disease versus controls: target prediction, P < 0.001; error decisions, P < 0.0001; error antisaccades, P < 0.0001), but not in reflexive saccade execution (for all, P > 0.05). Patients with Parkinson's disease had, compared with controls, similar complex saccade performance (for all, P > 0.05) and only minimal impairment in reflexive tasks, i.e. hypometric gain in the gap task (P = 0.04). Impaired saccade execution in reflexive tasks allowed discrimination between DLB and Alzheimer's disease (sensitivity ≥ 60%, specificity ≥ 77%) and between PDD and Parkinson's disease (sensitivity ≥ 60%, specificity ≥ 88%) when ±1.5 standard deviations was used for group discrimination. We conclude that impairments in reflexive saccades may be helpful for differential diagnosis and are minimal when either cortical (Alzheimer's disease) or nigrostriatal neurodegeneration (Parkinson's disease) exists alone; however, they become prominent with combined cortical and subcortical neurodegeneration in PDD and DLB. The similarities in saccade performance in PDD and DLB underline the overlap between these conditions and underscore differences from Alzheimer's disease and Parkinson's disease.
Abstract:
We highlight the progress, current status, and open challenges of QCD-driven physics, in theory and in experiment. We discuss how the strong interaction is intimately connected to a broad sweep of physical problems, in settings ranging from astrophysics and cosmology to strongly coupled, complex systems in particle and condensed-matter physics, as well as to searches for physics beyond the Standard Model. We also discuss how success in describing the strong interaction impacts other fields, and, in turn, how such subjects can impact studies of the strong interaction. In the course of the work we offer a perspective on the many research streams which flow into and out of QCD, as well as a vision for future developments.
Abstract:
The new computing paradigm known as cognitive computing attempts to imitate the human capabilities of learning, problem solving, and considering things in context. To do so, an application (a cognitive system) must learn from its environment (e.g., by interacting with various interfaces). These interfaces can run the gamut from sensors to humans to databases. Accessing data through such interfaces allows the system to conduct cognitive tasks that can support humans in decision-making or problem-solving processes. Cognitive systems can be integrated into various domains (e.g., medicine or insurance). For example, a cognitive system in a city can collect data, learn from various data sources, and then attempt to connect these sources to provide real-time optimization of subsystems within the city (e.g., the transportation system). In this study, we provide a methodology for integrating a cognitive system that allows data to be verbalized, making the causalities and hypotheses generated by the cognitive system more understandable to humans. We abstract a city subsystem, the passenger flow of a taxi company, by applying fuzzy cognitive maps (FCMs). FCMs can be used as a mathematical tool for modeling complex systems built from directed graphs, with concepts (e.g., policies, events, and/or domains) as nodes and causalities as edges. As a verbalization technique we introduce the restriction-centered theory of reasoning (RCT). RCT addresses the imprecision inherent in language by introducing restrictions. With this underlying combinatorial design, our approach can handle large data sets from complex systems and make the output understandable to humans.
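The following is a minimal sketch of how a fuzzy cognitive map can be iterated in Python, assuming hypothetical city-related concepts and invented causal weights; it illustrates the FCM mechanics described above, not the study's actual taxi passenger flow model or the RCT verbalization step.

```python
import numpy as np

def sigmoid(x, lam=1.0):
    """Squashing function keeping concept activations in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-lam * x))

# Hypothetical concepts (names are illustrative, not taken from the study):
concepts = ["rain", "traffic congestion", "taxi demand", "passenger wait time"]

# W[i, j] = causal weight of concept i on concept j (signed, in [-1, 1]).
W = np.array([
    [0.0, 0.6, 0.5, 0.0],   # rain increases congestion and taxi demand
    [0.0, 0.0, 0.2, 0.7],   # congestion slightly raises demand, strongly raises wait time
    [0.0, 0.3, 0.0, 0.4],   # demand feeds back into congestion and wait time
    [0.0, 0.0, 0.0, 0.0],   # wait time is treated as an output concept here
])

state = np.array([1.0, 0.2, 0.3, 0.1])       # initial activation of each concept
for _ in range(25):                          # iterate until the map settles
    state = sigmoid(W.T @ state + state)     # common FCM rule: weighted inputs plus self-memory

for name, value in zip(concepts, state):
    print(f"{name:25s} {value:.3f}")
```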
Abstract:
The paper revives a theoretical definition of party coherence as being composed of two basic elements, cohesion and factionalism, to propose and apply a novel empirical measure based on spin physics. The simultaneous analysis of both components using a single measurement concept is applied to data representing the political beliefs of candidates in the Swiss general elections of 2003 and 2007, proposing a connection between the coherence of the beliefs party members hold and the assessment of parties being at risk of splitting. We also compare our measure with established polarization measures and demonstrate its advantage with respect to multi-dimensional data that lack clear structure. Furthermore, we outline how our analysis supports the distinction between bottom-up and top-down mechanisms of party splitting. In this way, we are able to turn the intuition of coherence into a well-defined quantitative concept that, in addition, offers a methodological basis for comparative research on party coherence. Our work serves as an example of how a complex systems approach can offer a new perspective on a long-standing issue in political science.
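As a loose illustration of the spin physics intuition, the sketch below codes candidates' positions on binary policy questions as ±1 spins and computes their mean pairwise alignment within a party; this toy score is not the paper's measure, which captures cohesion and factionalism jointly, and the data are invented for demonstration.

```python
import numpy as np

# Hypothetical data: rows are candidates of one party, columns are binary policy
# questions coded as +1 (agree) / -1 (disagree). Values are illustrative only.
beliefs = np.array([
    [ 1,  1, -1,  1,  1],
    [ 1,  1, -1,  1, -1],
    [ 1, -1, -1,  1,  1],
    [-1, -1,  1, -1, -1],   # one candidate who disagrees with most of the party
])

def mean_pairwise_alignment(spins):
    """Average overlap over all candidate pairs; +1 = full agreement, -1 = full opposition."""
    n = spins.shape[0]
    overlaps = [
        np.mean(spins[i] * spins[j])
        for i in range(n) for j in range(i + 1, n)
    ]
    return float(np.mean(overlaps))

print("alignment:", mean_pairwise_alignment(beliefs))
```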