899 results for Hyperbolic Dynamic System


Relevance:

80.00%

Publisher:

Abstract:

Product design decisions can have a significant impact on the financial and operational performance of manufacturing companies. Good analysis of the financial impact of design decisions is therefore required if the profitability of the business is to be maximised. The product design process can be viewed as a chain of decisions which links decisions about the concept to decisions about the detail. The idea of decision chains can be extended to include the design and operation of the 'downstream' business processes which manufacture and support the product. These chains of decisions are not independent but are interrelated in a complex manner. Dealing with the interdependencies requires a modelling approach which represents all the chains of decisions, to a level of detail not normally considered in the analysis of product design. The operational, control and financial elements of a manufacturing business constitute a dynamic system: these elements interact with each other and with external elements (i.e. customers and suppliers). Analysing the chain of decisions in such an environment requires the application of simulation techniques, not just to any one area of interest but to the whole business, i.e. an enterprise simulation. To investigate the capability and viability of enterprise simulation, an experimental 'Whole Business Simulation' system has been developed. This system combines specialist simulation elements with standard operational applications software packages to create a model that incorporates all the key elements of a manufacturing business, including its customers and suppliers. By means of a series of experiments, the performance of this system was compared with a range of existing analysis tools (i.e. DFX, capacity calculation, a shop floor simulator, and a business planner driven by a shop floor simulator).

Relevance:

80.00%

Publisher:

Abstract:

Novel molecular complexity measures are designed based on quantum molecular kinematics. The Hamiltonian matrix, constructed in a quasi-topological approximation, describes the temporal evolution of the modelled electronic system and determines the time derivatives of the dynamic quantities. This makes it possible to define average quantum kinematic characteristics closely related to the curvatures of the electron paths, in particular the torsion, which reflects the chirality of the dynamic system. Special attention has been given to the computational scheme for this chirality measure. Calculations on realistic molecular systems demonstrate reasonable behaviour of the proposed molecular complexity indices.
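The abstract does not spell out how the Hamiltonian matrix "determines the time derivatives"; the conventional route in matrix quantum mechanics is the Heisenberg/Ehrenfest relation d⟨A⟩/dt = (i/ħ)⟨[H, A]⟩. A minimal numpy sketch of that relation, with a toy Hamiltonian and observable invented for illustration (not the authors' quasi-topological scheme):

```python
import numpy as np

def expectation_derivative(H, A, psi):
    """d<A>/dt = (i/hbar) <psi|[H, A]|psi>, with hbar set to 1."""
    commutator = H @ A - A @ H
    return (1j * (psi.conj() @ commutator @ psi)).real

# Toy 3x3 real symmetric Hamiltonian and a diagonal observable, purely illustrative.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
H = (M + M.T) / 2
A = np.diag([1.0, 0.0, -1.0])
psi = np.array([1.0, 1j, 0.0]) / np.sqrt(2)   # normalised complex state

print(expectation_derivative(H, A, psi))
```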

Relevance:

80.00%

Publisher:

Abstract:

Methods for the calculation of complexity have been investigated as a possible alternative for analyzing the dynamics of molecular systems. “Computational mechanics” is the approach chosen to describe emergent behavior in molecular systems that evolve in time. A novel algorithm has been developed for the symbolization of a continuous physical trajectory of a dynamic system. A method for calculating statistical complexity has been implemented and tested on representative systems. It is shown that the computational mechanics approach is suitable for analyzing the dynamic complexity of molecular systems and offers new insight into the process.
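The abstract names the two computational steps, symbolization and statistical complexity, without detail. A minimal sketch of one common realisation (not necessarily the authors' algorithm) bins the continuous trajectory into a discrete alphabet and measures the Shannon entropy of symbol blocks; statistical complexity proper requires reconstructing causal states (ε-machines), for which these block statistics are only the raw material:

```python
import numpy as np
from collections import Counter

def symbolize(trajectory, n_bins=4):
    """Map a continuous 1-D trajectory onto a discrete alphabet by quantile binning."""
    edges = np.quantile(trajectory, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(trajectory, edges)

def block_entropy(symbols, block_len=3):
    """Shannon entropy (bits) of overlapping symbol blocks."""
    blocks = [tuple(symbols[i:i + block_len])
              for i in range(len(symbols) - block_len + 1)]
    counts = np.array(list(Counter(blocks).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# Toy continuous trajectory: the logistic map in its chaotic regime.
x = np.empty(10_000)
x[0] = 0.4
for i in range(1, len(x)):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

s = symbolize(x)
print(f"block entropy: {block_entropy(s):.3f} bits")
```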

Relevance:

80.00%

Publisher:

Abstract:

This article considers a set of questions connected with the analysis, estimation, and structural-parametric optimization of dynamic systems. The connection of such problems with the control of beams of trajectories is emphasized. Special attention is given to a review and analysis of the research carried out to date, stressing its constructiveness and applied orientation. The efficiency of the developed algorithms and software is demonstrated on problems of modeling and optimizing the output beam characteristics of linear resonance accelerators.

Relevance:

80.00%

Publisher:

Abstract:

Limited literature on parameter estimation of dynamic systems has been identified as the central reason for the lack of parametric bounds in chaotic time series. The literature shows that a chaotic system displays a sensitive dependence on initial conditions, and our study reveals that the behavior of a chaotic system is also sensitive to changes in parameter values. Parameter estimation techniques could therefore make it possible to establish parametric bounds on the nonlinear dynamic system underlying a given time series, which in turn can improve predictability. By extracting the relationship between parametric bounds and predictability, we implemented chaos-based models for improving prediction in time series. This study describes work done to establish bounds on a set of unknown parameters. Our results reveal that by establishing parametric bounds it is possible to improve the predictability of any time series, even though the dynamics or the mathematical model of that series is not known a priori. In our attempt to improve the predictability of various time series, we have established bounds for a set of unknown parameters. These are: (i) the embedding dimension needed to unfold a set of observations in the phase space, (ii) the time delay to use for a series, (iii) the number of neighborhood points to use while avoiding the detection of false neighbors, and (iv) the degree of the local polynomial used to build numerical interpolation functions from one region to another. Using these bounds, we are able to obtain better predictability in chaotic time series than previously reported. In addition, the developments of this dissertation establish a theoretical framework for investigating predictability in time series from the system-dynamics point of view. In closing, our procedure significantly reduces computer resource usage, as the search method is refined and efficient. Finally, the uniqueness of our method lies in its ability to extract the chaotic dynamics inherent in a nonlinear time series by observing its values.
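Two of the bounded parameters, the embedding dimension and the time delay, enter through the standard delay-coordinate (Takens) embedding used to unfold a scalar series in phase space. A minimal sketch, with the dimension and delay fixed by hand for illustration rather than derived from the estimated bounds:

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Reconstruct phase-space vectors (s_t, s_{t+tau}, ..., s_{t+(dim-1)tau})."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

# Toy scalar series: x-coordinate of the Henon map (a standard chaotic benchmark).
x = np.empty(5000)
x[0], y = 0.1, 0.1
for i in range(1, len(x)):
    x[i] = 1.0 - 1.4 * x[i - 1] ** 2 + y
    y = 0.3 * x[i - 1]

# In the dissertation, these two values would come from the estimated bounds.
vectors = delay_embed(x, dim=2, tau=1)
print(vectors.shape)  # (4999, 2)
```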

Relevance:

80.00%

Publisher:

Abstract:

This is an investigation of the development of a numerical assessment method for the hydrodynamic performance of an oscillating water column (OWC) wave energy converter. In the research, a systematic study has been carried out on how the hydrodynamic problem can be solved and represented reliably, focusing on the interaction of the waves with the structure and with the internal water surface. These phenomena are extensively examined numerically to show how the hydrodynamic parameters can be reliably obtained and used for the OWC performance assessment. In studying the dynamic system, a two-body system is used for the OWC wave energy converter: the first body is the device itself, and the second body is an imaginary “piston,” which replaces part of the water at the internal water surface in the water column. One advantage of the two-body system for an OWC wave energy converter is its direct physical representation, so the relevant mathematical expressions and the numerical simulation can be straightforward. That is, the main hydrodynamic parameters can be assessed using the boundary element method for the potential flow in the frequency domain, and the relevant parameters are transformed directly from the frequency domain to the time domain for the two-body system. However, as shown in the research, an appropriate representation of the “imaginary” piston is very important, especially when the relevant parameters have to be transformed from the frequency domain to the time domain for further analysis. The examples given in the research show that correctly transforming the parameters from the frequency domain to the time domain can be a vital factor for a successful numerical simulation.
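The frequency-to-time transformation referred to here is conventionally carried out through the retardation (memory) kernel of the Cummins equation, K(t) = (2/π) ∫₀^∞ B(ω) cos(ωt) dω, where B(ω) is the radiation damping obtained from the frequency-domain BEM solution. A minimal numerical sketch, with an invented damping curve standing in for the BEM output:

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal-rule integral of y over the grid x."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def retardation_kernel(omega, B, t):
    """K(t) = (2/pi) * integral of B(w) cos(w t) dw, truncated numerically."""
    return np.array([(2.0 / np.pi) * trapezoid(B * np.cos(omega * ti), omega)
                     for ti in t])

# Invented radiation-damping curve standing in for frequency-domain BEM output.
omega = np.linspace(0.01, 10.0, 2000)    # wave frequencies, rad/s
B = 1e4 * omega**2 * np.exp(-omega)      # radiation damping, illustrative only

t = np.linspace(0.0, 20.0, 200)          # time, s
K = retardation_kernel(omega, B, t)      # convolution kernel for the time-domain model
print(K[:5])
```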

Relevance:

80.00%

Publisher:

Abstract:

A lightweight Java application suite has been developed and deployed, allowing collaborative learning between students and tutors at remote locations. Students can engage in group activities online and also collaborate with tutors. A generic Java framework has been developed and applied to electronics, computing and mathematics education. The applications are, respectively: (a) a digital circuit simulator, which allows students to collaborate in building simple or complex electronic circuits; (b) a Java programming environment whose paradigm is behaviour-based robotics; and (c) a differential equation solver useful in modelling any complex, nonlinear dynamic system. Each student sees a common shared window, to which text or graphical objects may be added and which can then be shared online. A built-in chat room supports collaborative dialogue. Students can work either in collaborative groups or in teams as directed by the tutor. This paper summarises the technical architecture of the system as well as the pedagogical implications of the suite. A report of student evaluation, distilled from use over a period of twelve months, is also presented. We intend this suite to facilitate learning between groups at one or many institutions and to facilitate international collaboration. We also intend to use the suite as a tool to research the establishment and behaviour of collaborative learning groups. We shall make our software freely available to interested researchers.
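The numerical core of a differential equation solver such as application (c) is a time-stepping integration scheme. A minimal sketch of the classical fourth-order Runge-Kutta step (shown in Python for brevity, though the suite itself is written in Java), applied to a standard nonlinear test system rather than to any model from the paper:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def van_der_pol(t, y, mu=2.0):
    """Van der Pol oscillator: a standard nonlinear dynamic system."""
    return np.array([y[1], mu * (1 - y[0] ** 2) * y[1] - y[0]])

y, t, h = np.array([2.0, 0.0]), 0.0, 0.01
for _ in range(1000):
    y = rk4_step(van_der_pol, t, y, h)
    t += h
print(t, y)
```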

Relevance:

80.00%

Publisher:

Abstract:

Leafy greens are an essential part of a healthy diet. Because of their health benefits, the production and consumption of leafy greens have increased considerably in the U.S. in the last few decades. However, leafy greens have also been associated with a large number of foodborne disease outbreaks in the last few years. The overall goal of this dissertation was to use the current knowledge of predictive models and available data to understand the growth, survival, and death of enteric pathogens in leafy greens at the pre- and post-harvest levels. Temperature plays a major role in the growth and death of bacteria in foods. A growth-death model was developed for Salmonella and Listeria monocytogenes in leafy greens for the varying temperature conditions typically encountered in the supply chain. The developed growth-death models were validated using experimental dynamic time-temperature profiles available in the literature. Furthermore, these growth-death models for Salmonella and Listeria monocytogenes, and a similar model for E. coli O157:H7, were used to predict the growth of these pathogens in leafy greens during transportation without temperature control. Refrigeration of leafy greens serves to increase shelf life and mitigate bacterial growth, but storage of foods at lower temperatures increases the storage cost. Nonlinear programming was used to optimize the storage temperature of leafy greens during the supply chain, minimizing the storage cost while maintaining the desired levels of sensory quality and microbial safety. Most of the outbreaks associated with the consumption of leafy greens contaminated with E. coli O157:H7 in the U.S. have occurred during July-November. A dynamic system model consisting of subsystems and inputs (soil, irrigation, cattle, wildlife, and rainfall), simulating a farm in a major leafy-greens-producing area of California, was developed. The model was simulated incorporating the events of planting, irrigation, harvesting, ground preparation for the new crop, contamination of soil and plants, and survival of E. coli O157:H7. The predictions of this system model are in agreement with the seasonality of outbreaks. This dissertation thus utilized growth, survival, and death models of enteric pathogens in leafy greens during production and the supply chain.
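The abstract does not give the model equations. A common construction for microbial growth under a dynamic temperature profile (a sketch under that assumption, not the dissertation's fitted model) couples a logistic primary model with a square-root (Ratkowsky) secondary model and integrates over the time-temperature history:

```python
import numpy as np

def sqrt_model(T, b=0.023, T_min=1.5):
    """Ratkowsky square-root secondary model: mu_max (1/h) vs temperature (deg C).
    b and T_min are illustrative values, not fitted parameters."""
    return max(b * (T - T_min), 0.0) ** 2

def simulate_growth(temp_of_t, hours=48.0, dt=0.1, y0=2.0, y_max=9.0):
    """Euler integration of logistic growth in log10 CFU/g over a
    dynamic time-temperature profile temp_of_t(t)."""
    t, y = 0.0, y0
    while t < hours:
        mu = sqrt_model(temp_of_t(t))                          # rate at current T
        y += dt * (mu / np.log(10)) * (1.0 - 10 ** (y - y_max))
        t += dt
    return y

# Illustrative supply-chain profile: 4 C storage with an 8-hour excursion to 20 C.
profile = lambda t: 20.0 if 12.0 <= t < 20.0 else 4.0
print(f"log10 count after 48 h: {simulate_growth(profile):.2f}")
```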

Relevance:

80.00%

Publisher:

Abstract:

Since the end of the Cold War, recurring civil conflicts have been the dominant form of violent armed conflict in the world, accounting for 70% of the conflicts active between 2000 and 2013. The duration and intensity of episodes within recurring conflicts in Africa exhibit four behaviors characteristic of archetypal dynamic system structures. The overarching questions asked in this study are whether these patterns are robustly correlated with fundamental concepts of resiliency in dynamic systems that scale from micro to macro levels; whether they are consistent with theoretical risk factors and causal mechanisms; and what the policy implications are. Econometric analysis and dynamic systems modeling of 36 conflicts in Africa between 1989 and 2014 are combined with process tracing in a case study of Somalia to evaluate correlations between state characteristics, peace operations and foreign aid and the likelihood of observed conflict patterns, to test hypothesized causal mechanisms across scales, and to develop policy recommendations for increasing human security while decreasing the resiliency of belligerents. The findings are that observed conflict patterns scale from micro to macro levels; are strongly correlated with state characteristics that proxy a mix of cooperative (e.g., gender equality) and coercive (e.g., security forces) conflict-balancing mechanisms; and are weakly correlated with UN and regional peace operations and humanitarian aid. Interactions between peace operations and aid interventions that affect conflict persistence at micro levels are not seen in the macro-level analysis, owing to interdependent micro-level feedback mechanisms, sequencing, and lagged effects. This study finds that the dynamic system structures associated with observed conflict patterns contain tipping points between balancing mechanisms at the interface of micro-macro interactions, determined as much by how intervention policies are designed and implemented as by what they are. The policy implications are that reducing the risk of conflict persistence requires that peace operations and aid interventions (1) simultaneously increase transparency, promote inclusivity (with emphasis on gender equality), and empower local civilian involvement in accountability measures at the local level; (2) build bridges to integrate horizontally and vertically across levels; and (3) pave pathways towards conflict transformation mechanisms and justice that scale from the individual to the community, regional, and national levels.

Relevance:

80.00%

Publisher:

Abstract:

Scottish sandstone buildings are now suffering the long-term effects of salt-crystallisation damage, owing in part to the repeated deposition of de-icing salts during winter months. The use of de-icing salts is necessary in order to maintain safe road and pavement conditions during cold weather, but their use comes at a price. Sodium chloride (NaCl), which is used as the primary de-icing salt throughout the country, is known to be damaging to sandstone masonry. However, a range of alternative, commercially available de-icing salts exists, and it is unknown what effect these salts have on porous building materials such as sandstone. In order to protect our built heritage against salt-induced decay, it is vital to understand the effects of these different salts on the range of sandstone types found within the historic buildings of Scotland. Eleven common types of sandstone were characterised using a suite of methods in order to understand their mineralogy, pore structure and response to moisture movement, the vital properties that govern a stone's response to weathering and decay. The sandstones were then subjected to a range of durability tests designed to measure their resistance to various weathering processes. Three salt crystallisation tests were undertaken on the sandstones over a range of 16 to 50 cycles, testing their durability to NaCl, CaCl2, MgCl2 and a chloride blend salt. Samples were analysed primarily by measuring their dry weight loss after each cycle, visually after each cycle, and by other complementary methods in order to understand their changing response to moisture uptake after salt treatment. Salt crystallisation was identified as the primary mechanism of decay for each salt, with the extent of damage in each sandstone influenced by environmental conditions and the pore-grain properties of the stone. The damage recorded in the salt crystallisation tests was ultimately caused by the generation of high crystallisation pressures within the confined pore networks of each stone. Stone- and test-specific parameters controlled the location and magnitude of damage, with the amount of micro-pores, their spatial distribution, the water absorption coefficient and the drying efficiency of each stone identified as the most important stone-specific properties influencing salt-induced decay. Strong correlations were found between the dry weight loss of NaCl-treated samples and the proportion of pores <1µm in diameter. Crystallisation pressures are known to scale inversely with pore size, while the spatial distribution of these micro-pores is thought to influence the rate, overall extent and type of decay within the stone by concentrating crystallisation pressures in specific regions of the stone. The water absorption determines the total amount of moisture entering the stone, which represents the total void space available for salt crystallisation. The drying parameters, on the other hand, ultimately control the distribution of salt crystallisation. Stones characterised by a combination of a high proportion of micro-pores, high water absorption values and slow drying kinetics were shown to be most vulnerable to NaCl-induced decay. CaCl2 and MgCl2 are shown to have similar crystallisation behaviour, forming thin crystalline sheets under low relative humidity and/or high temperature conditions. Distinct differences in their behaviour, influenced by test-specific criteria, were identified.
The location of MgCl2 crystallisation close to the stone surface, influenced by prolonged drying under moderate-temperature conditions, was identified as the main factor causing substantial dry weight loss in specific stone types. CaCl2 solutions remained unaffected under these conditions and only crystallised under high temperatures. Homogeneous crystallisation of CaCl2 throughout the stone produced greater internal change, with little dry weight loss recorded. NaCl formed distinctive isometric hopper crystals that caused damage through the non-equilibrium growth of salts in trapped regions of the stone. Damage was sustained as granular decay and contour scaling across most stone types. The pore network and hydric properties of the stones continually evolve in response to salt crystallisation, creating a dynamic system whereby the initial, known properties of clean quarried stone will not continually govern the processes of salt crystallisation, nor can they continually predict the stone's susceptibility to salt-induced decay.

Relevance:

80.00%

Publisher:

Abstract:

This thesis reports on an investigation of the feasibility and usefulness of incorporating dynamic management facilities for managing sensed context data in a distributed context-aware mobile application. The investigation focuses on reducing the work required to integrate new sensed context streams into an existing context-aware architecture. Current architectures require integration work for each new stream and each new context encountered. This mode of operation is acceptable for current fixed architectures; however, as systems become more mobile, the number of discoverable streams increases. Without the ability to discover and use these new streams, the functionality of any given device will be limited to the streams it knows how to decode. The integration of new streams requires that the sensed context data be understood by the current application. If a new source provides data of a type that an application currently requires, then the new source should be connected to the application without any prior knowledge of the source. If the type is similar and can be converted, then this stream too should be appropriated by the application. Such applications are based on portable devices (phones, PDAs) running semi-autonomous services that use data from sensors connected to the devices, plus data exchanged with other such devices and remote servers. They must handle input from a variety of sensors, refining the data locally and managing its communication from the device under volatile and unpredictable network conditions. The choice to focus on locally connected sensory input allows for the introduction of privacy and access controls; this local control can determine how the information is communicated to others. The investigation evaluates three approaches to sensor data management. The first system is characterised by static management based on prepended metadata. This was the reference system: developed for a mobile system, it processed data according to the attached metadata, and the code that performed the processing was static. The second system was developed to move away from static processing and introduce greater freedom in handling the data stream, resulting in a heavyweight approach. This approach pushed the processing of the data into a number of networked nodes rather than the monolithic design of the previous system. By creating a separate communication channel for the metadata, it is possible to be more flexible with the amount and type of data transmitted. The final system pulled the benefits of the other two together: by providing a small management class that loads a separate handler based on the incoming data, dynamism was maximised whilst maintaining ease of code understanding. The three systems were then compared to highlight their ability to dynamically manage new sensed context. The evaluation took two approaches. The first is a quantitative analysis of the code to understand the relative complexity of the three systems, carried out by evaluating what changes to each system a new context involved. The second takes a qualitative view of the work required of the software engineer to reconfigure the systems to support a new data stream. The evaluation highlights the scenarios in which each of the three systems is most suited. There is always a trade-off in the development of a system, and the three approaches highlight this fact.
The creation of a statically bound system can be quick to develop but may need to be completely re-written if the requirements move too far. Alternatively, a highly dynamic system may be able to cope with new requirements, but the developer time needed to create such a system may be greater than that needed to create several simpler systems.
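The final system's "small management class that loads a separate handler based on the incoming data" is essentially a registry-plus-dispatch pattern. A minimal Python sketch of that pattern (the thesis's actual classes and stream formats are not given, so the names and handlers below are invented for illustration):

```python
from typing import Callable, Dict

class StreamManager:
    """Small management class: routes sensed-context data to a handler
    selected by the metadata accompanying each stream."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[bytes], object]] = {}

    def register(self, data_type: str, handler: Callable[[bytes], object]) -> None:
        """A new stream type becomes usable without touching existing code."""
        self._handlers[data_type] = handler

    def dispatch(self, metadata: dict, payload: bytes) -> object:
        handler = self._handlers.get(metadata["type"])
        if handler is None:
            raise LookupError(f"no handler for stream type {metadata['type']!r}")
        return handler(payload)

# Invented handlers for two hypothetical sensed-context streams.
manager = StreamManager()
manager.register("gps", lambda raw: tuple(map(float, raw.decode().split(","))))
manager.register("temperature", lambda raw: float(raw.decode()))

print(manager.dispatch({"type": "gps"}, b"55.86,-4.25"))   # (55.86, -4.25)
print(manager.dispatch({"type": "temperature"}, b"18.5"))  # 18.5
```

Registering a handler is the only change a new stream requires, which is the dynamism the thesis contrasts with the statically bound reference system.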

Relevance:

80.00%

Publisher:

Abstract:

This article evaluates the causal relationship between knowledge management and technological innovation capabilities, and the effect of this relationship on the operational results of the textile sector in the city of Medellín. The system dynamics methodology was employed, with scenario simulation to assess the current conditions of the sector's organisations in terms of accumulation of knowledge and capabilities. The information was obtained through interviews with experts and access to specialised information on the sector. The results show that an improvement in the relationship between knowledge management and technological innovation generates an increase of approximately 15% in the sector's operating revenue. Likewise, it was found that as the common variables of interest (organisational strategies, communication channels, training, culture, and actions to strengthen R&D) approach their desired values, the accumulation of knowledge and of technological innovation capabilities reaches the target values.
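System dynamics models of this kind reduce to stocks integrated over inflows and outflows. A minimal stock-flow sketch of a knowledge-accumulation loop driving revenue (all variables and coefficients are invented for illustration, not taken from the article):

```python
# Minimal stock-flow sketch: knowledge feeds capability, capability feeds revenue.
# All coefficients are invented for illustration.

def simulate(months=60, dt=1.0):
    knowledge = 10.0    # stock: accumulated organisational knowledge
    capability = 5.0    # stock: technological innovation capability
    revenue = 100.0     # operating revenue index
    for _ in range(int(months / dt)):
        learning = 0.8                     # inflow: training, communication channels
        obsolescence = 0.02 * knowledge    # outflow: knowledge decays over time
        knowledge += dt * (learning - obsolescence)
        capability += dt * (0.05 * knowledge - 0.03 * capability)
        revenue *= 1.0 + dt * 0.001 * capability
    return knowledge, capability, revenue

print(simulate())
```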

Relevance:

80.00%

Publisher:

Abstract:

The electricity sector is undergoing major changes both in management and in the market. One of the keys accelerating this change is the ever-greater penetration of Distributed Energy Resources (DER), which are giving the user a greater role in the management of the electrical system. The complexity of the scenario foreseen in the near future demands that grid equipment be able to interact in a much more dynamic system than at present, where the connection interface must be endowed with the necessary intelligence and communication capability so that the whole system can be managed effectively. We are currently witnessing the transition from the traditional electrical system model towards a new, active and intelligent system known as the Smart Grid. This thesis presents the study of an Intelligent Electronic Device (IED) aimed at providing solutions for the needs arising from the evolution of the electrical system, capable of being integrated into current and future grid equipment and contributing functionality, and therefore added value, to these systems. To situate the needs of these IEDs, an extensive background study has been carried out, beginning with an analysis of the historical evolution of these systems, the characteristics of the electrical interconnection they must control, and the various functions and solutions they must provide, finally arriving at a review of the current state of the art. Within this background, a regulatory review is also carried out, at the international and national levels, necessary to position the work with respect to the various requirements these devices must meet. The specifications and considerations necessary for the design are then presented, together with its multifunctional architecture. At this point in the work, some original design approaches are proposed, relating to the architecture of the IED and to how data should be synchronised depending on the nature of the events and the different functionalities. The development of the system continues with the design of the different subsystems that compose it, where some novel algorithms are presented, such as the anti-islanding approach with weighted multiple detection. Once the architecture and functions of the IED had been designed, the development of a prototype based on a hardware platform is presented. The necessary requirements are analysed, and the choice of a high-performance embedded platform including a processor and an FPGA is justified. The developed prototype is subjected to a Class A test protocol, according to the IEC 61000-4-30 and IEC 62586-2 standards, to verify the monitoring of parameters. Various tests are also presented in which the delays involved in the protection-related algorithms were estimated. Finally, a real test scenario is discussed, within the context of a project of the Spanish National Research Plan, where this prototype was integrated into an inverter, endowing it with the intelligence necessary for a future Smart Grid context.

Relevance:

80.00%

Publisher:

Abstract:

Cardiovascular diseases (CVDs) have reached epidemic proportions in the US and worldwide, with serious consequences in terms of human suffering and economic impact. More than one third of American adults suffer from CVDs, and the total direct and indirect costs of CVDs exceed $500 billion per year. There is therefore an urgent need to develop noninvasive diagnostic methods, to design minimally invasive assist devices, and to develop economical and easy-to-use monitoring systems for cardiovascular diseases. In order to achieve these goals, it is necessary to gain a better understanding of the subsystems that constitute the cardiovascular system. The aorta is one such subsystem, and its role in cardiovascular functioning has been underestimated. Traditionally, the aorta and its branches have been viewed as resistive conduits connected to an active pump (the left ventricle of the heart). However, this perception fails to explain many observed physiological results. My goal in this thesis is to demonstrate the subtle but important role of the aorta as a system, with a focus on the wave dynamics in the aorta.

The operation of a healthy heart is based on an optimized balance between its pumping characteristics and the hemodynamics of the aorta and its vascular branches. This delicate balance between the aorta and the heart can be impaired by aging, smoking, or disease. The heart generates pulsatile flow that produces pressure and flow waves as it enters the compliant aorta. These aortic waves propagate and reflect from reflection sites (bifurcations and tapering). They can act constructively and assist the blood circulation, or they may act destructively, promoting disease or initiating sudden cardiac death. These waves also carry information about diseases of the heart, vascular disease, and the coupling of heart and aorta. In order to elucidate the role of the aorta as a dynamic system, this study investigates the interplay between the dominant wave-dynamic parameters: heart rate, aortic compliance (wave speed), and the locations of reflection sites. Both computational and experimental approaches have been used in this research, and in some cases the results are further explained using theoretical models.
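The abstract treats aortic compliance and wave speed as two sides of the same parameter; the conventional link (standard in arterial wave mechanics, though not stated in the abstract) is the Moens-Korteweg relation:

```latex
% Moens-Korteweg pulse wave velocity
c = \sqrt{\frac{E h}{2 \rho R}}
% E: Young's modulus of the wall, h: wall thickness,
% \rho: blood density, R: vessel radius.
```

Under this relation, increased aortic rigidity (larger E) raises the wave speed c and thus shifts the timing of reflected waves, which is consistent with finding (iii) below, that the optimum heart rate moves to a higher value as aortic rigidity increases.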

The main findings of this study are as follows: (i) developing a physiologically realistic outflow boundary condition for blood flow modeling in a compliant vasculature; (ii) demonstrating that pulse pressure as a single index cannot predict the true level of pulsatile workload on the left ventricle; (iii) proving that there is an optimum heart rate in which the pulsatile workload of the heart is minimized and that the optimum heart rate shifts to a higher value as aortic rigidity increases; (iv) introducing a simple bio-inspired device for correction and optimization of aortic wave reflection that reduces the workload on the heart; (v) deriving a non-dimensional number that can predict the optimum wave dynamic state in a mammalian cardiovascular system; (vi) demonstrating that waves can create a pumping effect in the aorta; (vii) introducing a system parameter and a new medical index, Intrinsic Frequency, that can be used for noninvasive diagnosis of heart and vascular diseases; and (viii) proposing a new medical hypothesis for sudden cardiac death in young athletes.

Relevance:

80.00%

Publisher:

Abstract:

Biofilms, microbial forms of association, are responsible for generating, accelerating and/or inducing the corrosion process. The damage generated in the petroleum industry by this type of corrosion is significant, demanding major investment for its control. The aim of this study was to evaluate, using antibiogram-type tests, the effects of extracts of Jatropha curcas and of the essential oil of Lippia gracilis Schauer on microorganisms isolated from water samples and, thereafter, to select the most effective natural product for further evaluation of biofilms formed in a dynamic system. The extracts of J. curcas did not completely inhibit microbial growth in the antibiogram-type tests, so the essential oil of L. gracilis Schauer, the most effective product, was selected for the remaining tests. A standard essential-oil concentration of 20 μL was chosen and established for the evaluation of the biofilms and of the corrosion rate. The biocidal effect was determined by microbial counts of five types of microorganisms: aerobic bacteria, iron-precipitating bacteria, total anaerobes, sulphate-reducing bacteria (SRB) and fungi. The corrosion rate was measured by mass loss. Molecular identification and scanning electron microscopy (SEM) were also performed. The data showed a reduction to zero of the most probable number (MPN) of iron-precipitating bacteria and SRB after 115 and 113 minutes of contact, respectively. Fungi were also inhibited, with the number of colony-forming units (CFU) reduced to zero after 74 minutes of exposure. For aerobic and anaerobic bacteria, however, there was no significant difference with time of exposure to the essential oil, the counts remaining constant. The corrosion rate was also influenced by the presence of the oil. The essential oil of L. gracilis was shown to be potentially effective.