906 results for Complex systems prediction


Relevance:

80.00%

Publisher:

Abstract:

The term Ambient Intelligence (AmI) refers to a vision of the future information society in which smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence hidden in the network connecting these devices. This promotes the creation of pervasive environments that improve the quality of life of the occupants and enhance the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors is embedded in the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes which can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontroller, FPGA, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy-scavenging modules). WSNs promise to revolutionize the interaction between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed. Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce the interference on the sensed physical phenomena and allows easy, low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In particular, in video surveillance it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, which has the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something "interesting" is detected). The energy cost of image processing must, however, be carefully minimized. Imaging thus can and does play an important role in sensing devices for ambient intelligence.
Computer vision can, for instance, be used for recognising persons and objects and for recognising behaviour such as illness and rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis. More eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and could describe objects in their true 3D appearance. In real time, these approaches constitute a recently opened field of research. In this thesis we pay attention to the realities of hardware/software technologies and to the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, outlined below. Although the design of a sensor network and its sensor nodes is strictly application dependent, a number of constraints should almost always be considered, among them: • a small form factor, to reduce node intrusiveness; • low power consumption, to reduce battery size and extend node lifetime; • low cost, for widespread diffusion. These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, with which only simple data processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low-Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance. Low-Power Video Sensor Nodes and Video Processing Algorithms: In comparison to scalar sensors, such as temperature, pressure, humidity, velocity and acceleration sensors, vision sensors generate much higher-bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes. We have designed and developed wireless video sensor nodes focusing on small size and flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first is based on a low-cost, low-power FPGA + microcontroller system-on-chip; the second is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate continuously with a Li-polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally, are presented.
Featuring such intelligence, these nodes would be able to cope with tasks such as recognising unattended bags in airports or persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data. Multimodal Surveillance: In several setups the use of wired video cameras may not be possible; for this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network and distributed surveillance communities. Pyroelectric Infra-Red (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and can possibly result in continuous operation of the node. Being low-cost, passive (thus low-power) and presenting a limited form factor, PIR sensors are well suited for WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation of standalone distributed cameras. We have used an adaptive controller based on Model Predictive Control (MPC) to improve system performance, outperforming naive power management policies.
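
To illustrate the kind of energy-level-dependent triggering described above, here is a minimal sketch in Python. The class name, per-slot energy costs and state-of-charge threshold are illustrative assumptions, not the node hardware or the MPC controller developed in the thesis; a real MPC policy would optimise over a prediction horizon rather than compare against a fixed floor.

```python
# Minimal sketch, assuming a simplified energy model: a PIR event wakes the camera
# and radio only when the battery state of charge (SoC) allows it; otherwise the node
# keeps sleeping and only samples the PIR. All constants are illustrative.

class EnergyAwareVideoNode:
    def __init__(self, capacity_mj=10_000.0):
        self.capacity_mj = capacity_mj   # battery capacity in millijoules (illustrative)
        self.energy_mj = capacity_mj     # current charge
        self.cost_sleep_mj = 0.05        # per-slot cost while sleeping
        self.cost_pir_mj = 0.2           # per-slot cost of sampling the PIR sensor
        self.cost_capture_mj = 50.0      # cost of one camera capture + radio burst

    def soc(self):
        return self.energy_mj / self.capacity_mj

    def step(self, pir_motion, harvested_mj):
        """One time slot: harvest energy, then decide whether to service a PIR event."""
        self.energy_mj = min(self.capacity_mj, self.energy_mj + harvested_mj)
        # Energy-level-dependent trigger: only service a PIR event when the SoC is
        # above a floor; below it, the node conserves energy for basic sensing.
        if pir_motion and self.soc() > 0.30:
            self.energy_mj -= self.cost_capture_mj
            return "capture+transmit"
        self.energy_mj -= self.cost_pir_mj + self.cost_sleep_mj
        return "sleep"


if __name__ == "__main__":
    import random
    random.seed(1)
    node = EnergyAwareVideoNode()
    for slot in range(10):
        action = node.step(pir_motion=random.random() < 0.3, harvested_mj=5.0)
        print(f"slot {slot:2d}: {action:17s} SoC={node.soc():.3f}")
```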

Relevance:

80.00%

Publisher:

Abstract:

In the past, investigations of biological and model systems have shown that amorphous calcium carbonate occurs as an unstable intermediate in the formation of crystalline CaCO3 structures. Little is known about its role in the CaCO3 precipitation process, and it is assumed to serve as a CaCO3 reservoir for the subsequent crystalline products. The exact reaction pathway leading to the formation of amorphous calcium carbonate (ACC) is not known. The aim of this work was to develop a precipitation technique that allows the formation kinetics of ACC to be followed by light scattering. Precipitation experiments show that precipitation under non-turbulent conditions leads to the formation of amorphous calcium carbonate. Depending on the amount of base and alkyl carbonate, used either in equivalent amounts or in excess relative to the calcium ion concentration, two different precipitation products are obtained. These are characterised with respect to their chemical composition and their thermal and mechanical properties. In both cases an amorphous CaCO3 with a water content of 0.5 mol of water per mole of CaCO3 is obtained. The in situ generation of carbonate leads to the formation of spherical amorphous calcium carbonate, which shows a certain tendency towards coacervation. The temperature dependence of the particle radius observed at identical reaction conversion could be interpreted by assuming a miscibility gap with a lower critical solution temperature. We therefore propose a mechanism for the formation of amorphous calcium carbonate via binodal liquid-liquid phase separation. After a short nucleation period, liquid droplets of hydrated CaCO3 can grow and then, as a consequence of continuous loss of water, solidify into a glassy state, thereby forming amorphous calcium carbonate. This model is supported by the growth kinetics, which were followed by light scattering and SAXS. In the precipitation experiments, two different growth laws of the particle radii are observed, depending on the reaction conditions: fast release of carbonate leads to parabolic growth of the radii, whereas slow release of carbonate leads to linear growth of the radii. These dependencies can be interpreted within the framework of the known kinetics of liquid-liquid phase separation. Furthermore, the influence of double-hydrophilic block copolymers (PEO-PMAA) on the particle size and on the formation kinetics of amorphous calcium carbonate is investigated. Two different block copolymers, differing in the length of the PEO block, are used. In the precipitation experiment, the block copolymer, present at very low concentrations, leads to the stabilisation of smaller particles; the block copolymer with the longer PEO unit is more efficient. The results can be interpreted by assuming adsorption of the polymer onto the particle surface. The influence of the double-hydrophilic block copolymers on the formation of ACC indicates that amorphous calcium carbonate plays a more complex role than merely that of a calcium carbonate reservoir for the later growth of crystalline products. To understand the effect of polymer additives, one must therefore consider not only their interaction with the crystals formed towards the end, but also their influence on the formation of the amorphous calcium carbonate. The newly developed method also offers the possibility of investigating the influence of more complex polymers, e.g. extracted proteins, on the formation of the amorphous precursor.
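
The two growth regimes mentioned above can be restated compactly as follows; this is a generic sketch of the proportionalities only, with no fitted constants or mechanistic assignment implied beyond what the abstract states.

```latex
% Two limiting growth laws for the particle radius R(t), as reported above:
\begin{align}
  \text{fast carbonate release:}\quad & R(t)^{2} \propto t
    \quad\Longleftrightarrow\quad R(t) \propto t^{1/2} \quad\text{(parabolic)},\\
  \text{slow carbonate release:}\quad & R(t) \propto t \quad\text{(linear)}.
\end{align}
```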

Relevance:

80.00%

Publisher:

Abstract:

Being basic ingredients of numerous daily-life products with significant industrial importance, as well as basic building blocks for biomaterials, charged hydrogels continue to pose a series of unanswered challenges for scientists even after decades of practical applications and intensive research efforts. Despite a rather simple internal structure, it is mainly the unique combination of short- and long-range forces which renders scientific investigations of their characteristic properties quite difficult. Hence, early on, computer simulations were used to link analytical theory and empirical experiments, bridging the gap between the simplifying assumptions of the models and the complexity of real-world measurements. Due to the immense numerical effort, even for high-performance supercomputers, system sizes and time scales were rather restricted until recently; only now has it become possible to also simulate a network of charged macromolecules. This is the topic of the present thesis, which investigates one of the fundamental and at the same time highly fascinating phenomena of polymer research: the swelling behaviour of polyelectrolyte networks. To this end an extensible simulation package for research on soft matter systems, ESPResSo for short, was created, which puts particular emphasis on mesoscopic bead-spring models of complex systems. Highly efficient algorithms and a consistent parallelization reduce the computation time needed for solving the equations of motion, even in the case of long-range electrostatics and large numbers of particles, making it possible to tackle expensive calculations and applications. Nevertheless, the program has a modular and simple structure, enabling a continuous process of adding new potentials, interactions, degrees of freedom, ensembles and integrators, while staying easily accessible for newcomers thanks to a Tcl-script steering level controlling the C-implemented simulation core. Numerous analysis routines provide means to investigate system properties and observables on the fly. Even though analytical theories have agreed on the modelling of networks in past years, our numerical MD simulations show that, even for simple model systems, fundamental theoretical assumptions no longer apply except in a small parameter regime, preventing correct predictions of observables. Applying a "microscopic" analysis of the isolated contributions of individual system components, one of the particular strengths of computer simulations, it was then possible to describe the behaviour of charged polymer networks at swelling equilibrium, in good solvent and close to the Theta-point, by introducing appropriate model modifications. This became possible by enhancing known simple scaling arguments with components deemed crucial in our detailed study, through which a generalized model could be constructed. With this model, agreement between the final system volume of swollen polyelectrolyte gels and the results of the computer simulations could be shown over the entire investigated range of parameters, for different network sizes, charge fractions, and interaction strengths. In addition, the "cell under tension" was presented as a self-regulating approach for predicting the amount of swelling based only on the system parameters used. Without the need for measured observables as input, minimizing the free energy alone already allows the equilibrium behaviour to be determined.
In poor solvent the shape of the network chains changes considerably, as their hydrophobicity now counteracts the repulsion of the like-charged monomers and drives the polyelectrolytes towards collapse. Depending on the chosen parameters a fragile balance emerges, giving rise to fascinating geometrical structures such as the so-called pearl necklaces. This behaviour, known from single-chain polyelectrolytes under similar environmental conditions and also theoretically predicted, could be detected for the first time for networks as well. An analysis of the total structure factors confirmed initial evidence for the existence of such structures found in experimental results.
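
As a rough illustration of the kind of free-energy minimisation mentioned above, a textbook-style scaling balance between chain elasticity and counterion translational entropy reads as follows. This is a generic sketch, not the generalised model constructed in the thesis; the symbols and the specific terms retained are assumptions for illustration only.

```latex
% Generic scaling sketch: elastic free energy of N_ch network strands of N monomers
% (end-to-end distance R, Kuhn length b) plus ideal-gas entropy of N_c counterions
% confined to the gel volume V (v_0 a reference monomer volume). Minimising F(V)
% with R ~ V^{1/3} gives an estimate of the equilibrium swelling degree.
\begin{equation}
  \frac{F(V)}{k_\mathrm{B} T} \;\approx\;
  \underbrace{\frac{3}{2}\, N_\mathrm{ch}\, \frac{R^{2}}{N b^{2}}}_{\text{chain elasticity}}
  \;+\;
  \underbrace{N_\mathrm{c} \left[\ln\!\left(\frac{N_\mathrm{c}\, v_0}{V}\right) - 1\right]}_{\text{counterion entropy}},
  \qquad R \sim V^{1/3}.
\end{equation}
```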

Relevance:

80.00%

Publisher:

Abstract:

Significant interest in nanotechnology is stimulated by the fact that materials exhibit qualitative changes of properties when their dimensions approach "finite sizes". The quantization of electronic, optical and acoustic energies at the nanoscale provides novel functions, with interest spanning from electronics and photonics to biology. The present dissertation involves the application of Brillouin light scattering (BLS) to quantify and utilize material displacements for probing the phononic and elastic properties of structured systems with dimensions comparable to the wavelength of visible light. The interplay of wave propagation with materials exhibiting spatial inhomogeneities at sub-micron length scales provides information not only about elastic properties but also about structural organization at those length scales. In addition, the vector nature of the scattering wavevector q allows the directional dependence of thermomechanical properties to be addressed. To meet this goal, one-dimensional confined nanostructures and a biological system possessing high hierarchical organization were investigated. These applications extend the capabilities of BLS from a characterization tool for thin films to a method for unraveling intriguing phononic properties in more complex systems.
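
For reference, the basic relations behind a BLS measurement are the standard textbook forms below (not specific to this dissertation): the probed phonon wavevector is set by the scattering geometry and the laser wavelength, and the measured frequency shift yields the acoustic velocity.

```latex
% Magnitude of the probed phonon wavevector q for light of vacuum wavelength \lambda
% scattered through angle \theta inside a medium of refractive index n, and the
% resulting Brillouin frequency shift f for an acoustic mode of velocity v:
\begin{equation}
  q = \frac{4 \pi n}{\lambda} \sin\!\left(\frac{\theta}{2}\right),
  \qquad
  f = \frac{v\, q}{2\pi}
  \quad\Longrightarrow\quad
  f = \frac{2 n v}{\lambda} \;\;\text{in backscattering } (\theta = 180^{\circ}).
\end{equation}
```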

Relevance:

80.00%

Publisher:

Abstract:

The formation of a market price for an asset can be viewed as the superposition of the individual actions of the market participants, which cumulatively generate supply and demand. This is comparable to statistical physics, where macroscopic properties emerge from microscopic interactions between the system components involved. The distribution of price changes on financial markets differs markedly from a Gaussian distribution. This leads to empirical peculiarities of the price process, which include, besides the scaling behaviour, non-trivial correlation functions and temporally clustered volatility. This thesis focuses on the analysis of financial market time series and the correlations they contain. A new method for quantifying pattern-based complex correlations of a time series is developed. Using this methodology, significant evidence is found that typical behavioural patterns of financial market participants manifest themselves on short time scales, i.e. that the reaction to a given price history is not purely random; rather, similar price histories provoke similar reactions. Starting from the study of complex correlations in financial market time series, the question is addressed of which properties change when a positive trend switches to a negative trend. An empirical quantification by means of rescaling yields the result that, independently of the time scale considered, new price extrema are accompanied by an increase in transaction volume and a reduction of the time intervals between transactions. These dependencies exhibit characteristics that are also found in other complex systems in nature, and in physical systems in particular. Over nine orders of magnitude in time, these properties are also independent of the analysed market: trends that last only seconds show the same characteristics as trends on time scales of months. This opens up the possibility of learning more about financial market bubbles and their collapse, since trends on short time scales occur far more frequently. In addition, a Monte Carlo-based simulation of the financial market is analysed and extended in order to reproduce the empirical properties and to gain insight into their causes, which are to be sought partly in the financial market microstructure and partly in the risk aversion of the trading participants. For the computationally intensive methods, a substantial reduction of computing time can be achieved through parallelisation on a graphics card architecture. To demonstrate the wide range of applications of graphics cards, a standard model of statistical physics, the Ising model, is also ported to the graphics card with significant runtime advantages. Partial results of this work are published in [PGPS07, PPS08, Pre11, PVPS09b, PVPS09a, PS09, PS10a, SBF+10, BVP10, Pre10, PS10b, PSS10, SBF+11, PB10].
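
A minimal sketch of the idea of pattern-conditioned statistics on a return series is given below. The window length, the binarisation into up/down moves and the shuffled surrogate used as a "no memory" baseline are illustrative assumptions, not the estimator developed in the thesis.

```python
# Minimal sketch: condition the next return on the sign pattern of the last k returns
# and compare against a shuffled surrogate. Purely illustrative; not the thesis method.

from collections import defaultdict
import random

def pattern_conditional_means(returns, k=3):
    """Average next return conditioned on the sign pattern of the previous k returns."""
    sums, counts = defaultdict(float), defaultdict(int)
    signs = [1 if r > 0 else -1 for r in returns]
    for i in range(k, len(returns)):
        pattern = tuple(signs[i - k:i])
        sums[pattern] += returns[i]
        counts[pattern] += 1
    return {p: sums[p] / counts[p] for p in sums}

if __name__ == "__main__":
    random.seed(0)
    rets = [random.gauss(0, 1e-3) for _ in range(50_000)]      # toy i.i.d. returns
    original = pattern_conditional_means(rets)
    shuffled = pattern_conditional_means(random.sample(rets, len(rets)))
    # For real data, systematic differences between the two columns would indicate
    # that similar price histories tend to provoke similar reactions.
    for p in sorted(original):
        print(p, f"{original[p]:+.2e}", f"{shuffled[p]:+.2e}")
```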

Relevance:

80.00%

Publisher:

Abstract:

Our generation of computational scientists is living in an exciting time: not only do we get to pioneer important algorithms and computations, we also get to set standards on how computational research should be conducted and published. From Euclid's reasoning and Galileo's experiments, it took hundreds of years for the theoretical and experimental branches of science to develop standards for publication and peer review. Computational science, rightly regarded as the third branch, can walk the same road much faster. The success and credibility of science are anchored in the willingness of scientists to expose their ideas and results to independent testing and replication by other scientists. This requires the complete and open exchange of data, procedures and materials. The idea of "replication by other scientists" in reference to computations is more commonly known as "reproducible research". In this context, the journal "EAI Endorsed Transactions on Performance & Modeling, Simulation, Experimentation and Complex Systems" had the exciting and original idea of enabling scientists to submit, together with the article, the computational materials (software, data, etc.) used to produce its contents. The goal of this procedure is to allow the scientific community to verify the content of the paper by reproducing it on the platform, independently of the chosen OS, to confirm or invalidate it, and above all to allow its reuse to produce new results. This procedure is of little help, however, without a minimum of methodological support. In fact, the raw data sets and the software are difficult to exploit without the logic that guided their use or their production. This led us to conclude that, in addition to the data sets and the software, one further element must be provided: the workflow that links them all together.

Relevance:

80.00%

Publisher:

Abstract:

Java Enterprise Applications (JEAs) are complex systems composed using various technologies that in turn rely on languages other than Java, such as XML or SQL. Given the complexity of these applications, the need to reverse engineer them in order to support further development becomes critical. In this paper we show how it is possible to split a system into layers and how the distance between application elements can be interpreted in order to support the refactoring of JEAs. The purpose of this paper is to explore ways to provide suggestions about the refactoring operations to perform on the code by evaluating the distance between layers and the elements belonging to those layers. We split JEAs into layers by considering the kinds and purposes of the elements composing the application. We measure the distance between elements by using the notion of the shortest path in a graph. We also present how to enrich the interpretation of the distance value with enterprise pattern detection in order to refine the suggestions about modifications to perform on the code.
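
A minimal sketch of the distance notion named above: application elements become graph nodes, dependencies become edges, and the distance between two elements is the shortest-path length. The element and layer names below are hypothetical examples; the paper's actual model extraction and metrics are not reproduced here.

```python
# Minimal sketch: measure the distance between application elements as the
# shortest-path length (BFS) in an undirected dependency graph. All names are
# hypothetical illustrations, not taken from the paper.

from collections import deque

def shortest_path_length(graph, source, target):
    """Breadth-first search; returns the number of edges on a shortest path, or None."""
    if source == target:
        return 0
    seen, queue = {source}, deque([(source, 0)])
    while queue:
        node, dist = queue.popleft()
        for neighbour in graph.get(node, ()):
            if neighbour == target:
                return dist + 1
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None  # elements are disconnected

if __name__ == "__main__":
    # Hypothetical dependency graph spanning presentation, service and persistence layers.
    dependencies = {
        "CustomerPage": ["CustomerService"],
        "CustomerService": ["CustomerPage", "CustomerDAO", "OrderService"],
        "OrderService": ["CustomerService", "OrderDAO"],
        "CustomerDAO": ["CustomerService"],
        "OrderDAO": ["OrderService"],
    }
    # A large distance between a presentation element and a persistence element it
    # ultimately reaches may hint at indirection worth reviewing during refactoring.
    print(shortest_path_length(dependencies, "CustomerPage", "OrderDAO"))  # -> 3
```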

Relevance:

80.00%

Publisher:

Abstract:

Simulation techniques are almost indispensable in the analysis of complex systems. Material and related information flow processes in logistics often possess such complexity. Further problems arise as the processes change over time and pose a Big Data problem as well. To cope with these issues, adaptive simulations are used more and more frequently. This paper presents a few relevant advanced simulation models and introduces a novel model structure which unifies the modelling of geometrical relations and time processes. In this way the process structure and its geometric relations can be handled in an easily understandable and transparent manner. The capabilities and applicability of the model are also presented via a demonstration example.

Relevance:

80.00%

Publisher:

Abstract:

Kriging-based optimization relying on noisy evaluations of complex systems has recently motivated contributions from various research communities. Five strategies have been implemented in the DiceOptim package. The corresponding functions constitute a user-friendly tool for solving expensive noisy optimization problems in a sequential framework, while offering some flexibility for advanced users. Moreover, the implementation is done in a unified environment, making this package a useful device for studying the relative performance of existing approaches depending on the experimental setup. An overview of the package structure and interface is provided, as well as a description of the strategies and some insight into the implementation challenges and the proposed solutions. The strategies are compared to some existing optimization packages on analytical test functions and show promising performance.
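
For context, the classical noise-free Expected Improvement criterion that Kriging-based sequential strategies build on is recalled below in its standard textbook form; the noisy-evaluation variants implemented in DiceOptim modify this criterion and are not reproduced here.

```latex
% Expected Improvement (minimisation) at a candidate point x, given the Kriging
% posterior mean \mu(x), posterior standard deviation \sigma(x), and the current
% best observed value f_{\min}; \Phi and \varphi are the standard normal CDF and PDF.
\begin{equation}
  \mathrm{EI}(x) = \bigl(f_{\min} - \mu(x)\bigr)\,
    \Phi\!\left(\frac{f_{\min} - \mu(x)}{\sigma(x)}\right)
    + \sigma(x)\,
    \varphi\!\left(\frac{f_{\min} - \mu(x)}{\sigma(x)}\right).
\end{equation}
```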

Relevance:

80.00%

Publisher:

Abstract:

Neurodegeneration in Parkinson's disease dementia (PDD) and dementia with Lewy bodies (DLB) affects cortical and subcortical networks involved in saccade generation. We therefore expected impairments in saccade performance in both disorders. In order to improve the pathophysiological understanding and to investigate the usefulness of saccades for differential diagnosis, saccades were tested in age- and education-matched patients with PDD (n = 20) and DLB (n = 20), Alzheimer's disease (n = 22) and Parkinson's disease (n = 24), and controls (n = 24). Reflexive (gap, overlap) and complex saccades (prediction, decision and antisaccade) were tested with electro-oculography. PDD and DLB patients had similar impairment in all tasks (P > 0.05, not significant). Compared with controls, they were impaired in both reflexive saccade execution (gap and overlap latencies, P < 0.0001; gains, P < 0.004) and complex saccade performance (target prediction, P < 0.0001; error decisions, P < 0.003; error antisaccades, P < 0.0001). Patients with Alzheimer's disease were only impaired in complex saccade performance (Alzheimer's disease versus controls: target prediction, P < 0.001; error decisions, P < 0.0001; error antisaccades, P < 0.0001), but not in reflexive saccade execution (for all, P > 0.05). Patients with Parkinson's disease had, compared with controls, similar complex saccade performance (for all, P > 0.05) and only minimal impairment in reflexive tasks, i.e. hypometric gain in the gap task (P = 0.04). Impaired saccade execution in reflexive tasks allowed discrimination between DLB and Alzheimer's disease (sensitivity ≥ 60%, specificity ≥ 77%) and between PDD and Parkinson's disease (sensitivity ≥ 60%, specificity ≥ 88%) when ±1.5 standard deviations was used for group discrimination. We conclude that impairments in reflexive saccades may be helpful for differential diagnosis and are minimal when either cortical (Alzheimer's disease) or nigrostriatal neurodegeneration (Parkinson's disease) exists alone; however, they become prominent with combined cortical and subcortical neurodegeneration in PDD and DLB. The similarities in saccade performance in PDD and DLB underline the overlap between these conditions and underscore differences from Alzheimer's disease and Parkinson's disease.