905 results for Dynamic systems
Abstract:
We study the evolution of structural defects in AlxGa1-xN films (with x=0.0-0.6) bombarded with kilo-electron-volt heavy ions at 77 and 300 K. We use a combination of Rutherford backscattering/channeling spectrometry and cross-sectional transmission electron microscopy. Results show that an increase in Al content not only strongly enhances dynamic annealing processes but can also change the main features of the amorphization behavior. In particular, the damage buildup behavior at 300 K is essentially similar for all the AlGaN films studied. Ion-beam-produced disorder at 300 K accumulates preferentially in the crystal bulk region up to a certain saturation level (~50%-60% relative disorder). Bombardment at 300 K above a critical fluence results in a rapid increase in damage from the saturation level up to complete disordering, with a buried amorphous layer nucleating in the crystal bulk. However, at 77 K, the saturation effect of lattice disorder in the bulk occurs only for x ≳ 0.1. Based on the analysis of these results for AlGaN and previously reported data for InGaN, we discuss physical mechanisms of the susceptibility of group-III nitrides to ion-beam-induced disordering and to the crystalline-to-amorphous phase transition. (C) 2004 American Institute of Physics.
Abstract:
Purpose: Although manufacturers of bicycle power monitoring devices SRM and Power Tap (PT) claim accuracy to within 2.5%, there are limited scientific data available in support. The purpose of this investigation was to assess the accuracy of SRM and PT under different conditions. Methods: First, 19 SRM were calibrated, raced for 11 months, and retested using a dynamic CALRIG (50-1000 W at 100 rpm). Second, using the same procedure, five PT were repeat tested on alternate days. Third, the most accurate SRM and PT were tested for the influence of cadence (60, 80, 100, 120 rpm), temperature (8 and 21 °C) and time (1 h at ~300 W) on accuracy. Finally, the same SRM and PT were downloaded and compared after random cadence and gear surges using the CALRIG and on a training ride. Results: The mean error scores for SRM and PT factory calibration over a range of 50-1000 W were 2.3 ± 4.9% and -2.5 ± 0.5%, respectively. A second set of trials provided stable results for 15 calibrated SRM after 11 months (-0.8 ± 1.7%), and follow-up testing of all PT units confirmed these findings (-2.7 ± 0.1%). Accuracy for SRM and PT was not largely influenced by time and cadence; however, power output readings were noticeably influenced by temperature (5.2% for SRM and 8.4% for PT). During field trials, SRM average and max power were 4.8% and 7.3% lower, respectively, compared with PT. Conclusions: When operated according to manufacturers' instructions, both SRM and PT offer the coach, athlete, and sport scientist the ability to accurately monitor power output in the lab and the field. Calibration procedures matching performance tests (duration, power, cadence, and temperature) are, however, advised as the error associated with each unit may vary.
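The mean signed percentage error reported in the abstract can be computed as below; the sample readings here are hypothetical, not data from the study.

```python
def mean_error_pct(measured, reference):
    """Mean signed percentage error of a power meter's readings
    against a calibration-rig reference (hypothetical values)."""
    errs = [100.0 * (m - r) / r for m, r in zip(measured, reference)]
    return sum(errs) / len(errs)

# e.g. meter reads 102 W and 198 W when the rig applies 100 W and 200 W
print(mean_error_pct([102, 198], [100, 200]))  # +2.0% and -1.0% average to +0.5%
```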
Abstract:
We introduce a novel way of measuring the entropy of a set of values undergoing changes. Such a measure becomes useful when analyzing the temporal development of an algorithm designed to numerically update a collection of values such as artificial neural network weights undergoing adjustments during learning. We measure the entropy as a function of the phase-space of the values, i.e. their magnitude and velocity of change, using a method based on the abstract measure of entropy introduced by the philosopher Rudolf Carnap. By constructing a time-dynamic two-dimensional Voronoi diagram using Voronoi cell generators with coordinates of value- and value-velocity (change of magnitude), the entropy becomes a function of the cell areas. We term this measure teleonomic entropy since it can be used to describe changes in any end-directed (teleonomic) system. The usefulness of the method is illustrated when comparing the different approaches of two search algorithms, a learning artificial neural network and a population of discovering agents. (C) 2004 Elsevier Inc. All rights reserved.
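The paper's measure is built on Voronoi cell areas in (value, velocity) phase space; a much simpler stand-in, sketched below under that reading, bins the phase space on a coarse grid and computes Shannon entropy over cell occupancy. All names and the grid bounds are illustrative assumptions, not the authors' construction.

```python
import math
from collections import Counter

def phase_space_entropy(values, prev_values, bins=10, lo=-1.0, hi=1.0):
    """Shannon entropy of (value, velocity) points binned on a coarse grid.

    Velocity is the change in magnitude between successive updates,
    mirroring the value/value-velocity coordinates in the abstract.
    """
    width = (hi - lo) / bins
    cells = Counter()
    for v, p in zip(values, prev_values):
        vel = v - p
        i = min(bins - 1, max(0, int((v - lo) / width)))
        j = min(bins - 1, max(0, int((vel - lo) / width)))
        cells[(i, j)] += 1
    n = sum(cells.values())
    return -sum((c / n) * math.log2(c / n) for c in cells.values())
```

A converged set of network weights (all values equal, zero velocity) collapses into one cell and gives zero entropy; weights still spread across phase space give a higher value.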
Abstract:
In liquid-liquid dispersion systems, the dynamic change of the interfacial properties between the two immiscible liquids plays an important role in both the emulsification process and emulsion stabilization. In this paper, experimentally measured dynamic interfacial tensions of 1-chlorobutane in the aqueous solutions of various random copolymers of polyvinyl acetate and polyvinyl alcohol (PVAA) are presented. Theoretical analyses of these results suggest that the adsorption of the polymer molecules is controlled neither by the bulk diffusion process nor by the activation energy barrier for adsorption, but by the conformation of the polymer molecules. Based on the concept of critical concentration of condensation for polymer adsorption, as well as the observation that the rate at which the dynamic interfacial tension changes does not correlate with the PVAA's ability to stabilize a single drop, it is postulated that the main stabilization mechanism for the PVAAs is steric hindrance, not the Gibbs-Marangoni effect offered by small-molecule surfactants.
Abstract:
Granulation is one of the fundamental operations in particulate processing and has a very ancient history and widespread use. Much fundamental particle science has occurred in the last two decades to help understand the underlying phenomena. Yet, until recently the development of granulation systems was mostly based on popular practice. The use of process systems approaches to the integrated understanding of these operations is providing improved insight into the complex nature of the processes. Improved mathematical representations, new solution techniques and the application of the models to industrial processes are yielding better designs, improved optimisation and tighter control of these systems. The parallel development of advanced instrumentation and the use of inferential approaches provide real-time access to system parameters necessary for improvements in operation. The use of advanced models to help develop real-time plant diagnostic systems provides further evidence of the utility of process system approaches to granulation processes. This paper highlights some of those aspects of granulation. (c) 2005 Elsevier Ltd. All rights reserved.
Abstract:
A systematic goal-driven top-down modelling methodology is proposed that is capable of developing a multiscale model of a process system for given diagnostic purposes. The diagnostic goal-set and the symptoms are extracted from HAZOP analysis results, where the possible actions to be performed in a fault situation are also described. The multiscale dynamic model is realized in the form of a hierarchical coloured Petri net by using a novel substitution place-transition pair. Multiscale simulation that focuses automatically on the fault areas is used to predict the effect of the proposed preventive actions. The notions and procedures are illustrated on some simple case studies including a heat exchanger network and a more complex wet granulation process.
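The model above is a hierarchical coloured Petri net; as a minimal sketch of the underlying place-transition semantics only (the colouring, hierarchy, and substitution pairs are well beyond this), a transition fires when its input places hold enough tokens. The two-place "heat exchanger" net is a hypothetical toy, not a case study from the paper.

```python
def enabled(marking, transition):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    """Fire an enabled transition: consume input tokens, produce outputs."""
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

# Hypothetical net: 'heat' moves a token from cold_stream to hot_stream,
# a toy stand-in for one unit in a heat exchanger network.
heat = {"in": {"cold_stream": 1}, "out": {"hot_stream": 1}}
```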
Abstract:
This paper presents a new method for producing a functional-structural plant model that simulates response to different growth conditions, yet does not require detailed knowledge of underlying physiology. The example used to present this method is the modelling of the mountain birch tree. This new functional-structural modelling approach is based on linking an L-system representation of the dynamic structure of the plant with a canonical mathematical model of plant function. Growth indicated by the canonical model is allocated to the structural model according to probabilistic growth rules, such as rules for the placement and length of new shoots, which were derived from an analysis of architectural data. The main advantage of the approach is that it is relatively simple compared to the prevalent process-based functional-structural plant models and does not require a detailed understanding of underlying physiological processes, yet it is able to capture important aspects of plant function and adaptability, unlike simple empirical models. This approach, combining canonical modelling, architectural analysis and L-systems, thus fills the important role of providing an intermediate level of abstraction between the two extremes of deeply mechanistic process-based modelling and purely empirical modelling. We also investigated the relative importance of various aspects of this integrated modelling approach by analysing the sensitivity of the standard birch model to a number of variations in its parameters, functions and algorithms. The results show that using light as the sole factor determining the structural location of new growth gives satisfactory results. Including the influence of additional regulating factors made little difference to global characteristics of the emergent architecture. Changing the form of the probability functions and using alternative methods for choosing the sites of new growth also had little effect. (c) 2004 Elsevier B.V. All rights reserved.
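The structural half of the approach is an L-system: strings of modules rewritten in parallel by production rules. The sketch below shows only the classic deterministic rewriting mechanism with Lindenmayer's textbook algae rules; the birch model in the paper uses probabilistic rules driven by a canonical growth model, which this does not reproduce.

```python
def lsystem(axiom, rules, steps):
    """Expand an L-system axiom by applying rewrite rules in parallel.

    Symbols with no rule are copied unchanged at each step.
    """
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's classic deterministic example: A -> AB, B -> A
algae = {"A": "AB", "B": "A"}
```

Successive expansions grow as A, AB, ABA, ABAAB, ..., with lengths following the Fibonacci sequence.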
Abstract:
The notorious "dimensionality curse" is a well-known phenomenon for multi-dimensional indexes attempting to scale up to high dimensions. One well-known approach to overcoming degradation in performance with respect to increasing dimensions is to reduce the dimensionality of the original dataset before constructing the index. However, identifying the correlation among the dimensions and effectively reducing them are challenging tasks. In this paper, we present an adaptive Multi-level Mahalanobis-based Dimensionality Reduction (MMDR) technique for high-dimensional indexing. Our MMDR technique has four notable features compared to existing methods. First, it discovers elliptical clusters for more effective dimensionality reduction by using only the low-dimensional subspaces. Second, data points in the different axis systems are indexed using a single B+-tree. Third, our technique is highly scalable in terms of data size and dimension. Finally, it is also dynamic and adaptive to insertions. An extensive performance study was conducted using both real and synthetic datasets, and the results show that our technique not only achieves higher precision, but also enables queries to be processed efficiently. Copyright Springer-Verlag 2005
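Elliptical clusters are what make the Mahalanobis metric the natural distance here: it scales each direction by the cluster's covariance. A minimal 2-D sketch, with a hand-inverted covariance matrix (the full MMDR indexing machinery is not attempted), might look like:

```python
def mahalanobis_2d(point, mean, cov):
    """Mahalanobis distance of a 2-D point from a cluster (mean, covariance).

    For an elliptical cluster this down-weights displacement along the
    cluster's long axis, unlike plain Euclidean distance.
    """
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))  # 2x2 matrix inverse
    dx, dy = point[0] - mean[0], point[1] - mean[1]
    # squared distance = [dx dy] . cov^-1 . [dx dy]^T
    d2 = (dx * (inv[0][0] * dx + inv[0][1] * dy)
          + dy * (inv[1][0] * dx + inv[1][1] * dy))
    return d2 ** 0.5
```

With the identity covariance this reduces to Euclidean distance; with variance 4 along x, a point two units away along x is only one Mahalanobis unit from the mean.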
Abstract:
Tropical deforestation is the major contemporary threat to global biodiversity, because a diminishing extent of tropical forests supports the majority of the Earth's biodiversity. Forest clearing is often spatially concentrated in regions where human land use pressures, either planned or unplanned, increase the likelihood of deforestation. However, it is not a random process, but often moves in waves originating from settled areas. We investigate the spatial dynamics of land cover change in a tropical deforestation hotspot in the Colombian Amazon. We apply a forest cover zoning approach which permitted: calculation of colonization speed; comparative spatial analysis of patterns of deforestation and regeneration; analysis of spatial patterns of mature and recently regenerated forests; and the identification of local-level hotspots experiencing the fastest deforestation or regeneration. The colonization frontline moved at an average of 0.84 km yr(-1) from 1989 to 2002, resulting in the clearing of 3400 ha yr(-1) of forests beyond the 90% forest cover line. The dynamics of forest clearing varied across the colonization front according to the amount of forest in the landscape, but was spatially concentrated in well-defined 'local hotspots' of deforestation and forest regeneration. Behind the deforestation front, the transformed landscape mosaic is composed of cropping and grazing lands interspersed with mature forest fragments and patches of recently regenerated forests. We discuss the implications of the patterns of forest loss and fragmentation for biodiversity conservation within a framework of dynamic conservation planning.
Abstract:
Dynamic binary translation is the process of translating, modifying and rewriting executable (binary) code from one machine to another at run-time. This process of low-level re-engineering consists of a reverse engineering phase followed by a forward engineering phase. UQDBT, the University of Queensland Dynamic Binary Translator, is a machine-adaptable translator. Adaptability is provided through the specification of properties of machines and their instruction sets, allowing the support of different pairs of source and target machines. Most binary translators are closely bound to a pair of machines, making analyses and code hard to reuse. Like most virtual machines, UQDBT performs generic optimizations that apply to a variety of machines. Frequently executed code is translated to native code by the use of edge weight instrumentation, which makes UQDBT converge more quickly than systems based on instruction speculation. In this paper, we describe the architecture and run-time feedback optimizations performed by the UQDBT system, and provide results obtained on the x86 and SPARC® platforms.
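Edge weight instrumentation can be illustrated with a toy interpreter: count how often each control-flow edge executes and, once an edge crosses a hotness threshold, mark its target block as translated to native code. The threshold, block representation, and `run` function below are all assumptions for illustration, not UQDBT's actual design.

```python
from collections import Counter

HOT_THRESHOLD = 3  # assumed value; UQDBT's real threshold is not given here

def run(blocks, start, steps):
    """Interpret basic blocks, counting executions of each control-flow
    edge; when an edge becomes hot, its target is 'translated'."""
    translated, edges = set(), Counter()
    pc = start
    for _ in range(steps):
        nxt = blocks[pc]              # each block simply names its successor
        edges[(pc, nxt)] += 1
        if edges[(pc, nxt)] >= HOT_THRESHOLD:
            translated.add(nxt)       # hot target: translate once, reuse after
        pc = nxt
    return translated
```

Running a tight two-block loop quickly pushes both edges over the threshold, so both blocks end up translated, which is the convergence behaviour the abstract contrasts with instruction speculation.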
Abstract:
An appreciation of the physical mechanisms which cause observed seismicity complexity is fundamental to the understanding of the temporal behaviour of faults and single slip events. Numerical simulation of fault slip can provide insights into fault processes by allowing exploration of parameter spaces which influence microscopic and macroscopic physics of processes which may lead towards an answer to those questions. Particle-based models such as the Lattice Solid Model have been used previously for the simulation of stick-slip dynamics of faults, although mainly in two dimensions. Recent increases in the power of computers and the ability to use the power of parallel computer systems have made it possible to extend particle-based fault simulations to three dimensions. In this paper a particle-based numerical model of a rough planar fault embedded between two elastic blocks in three dimensions is presented. A very simple friction law without any rate dependency and no spatial heterogeneity in the intrinsic coefficient of friction is used in the model. To simulate earthquake dynamics the model is sheared in a direction parallel to the fault plane with a constant velocity at the driving edges. Spontaneous slip occurs on the fault when the shear stress is large enough to overcome the frictional forces on the fault. Slip events with a wide range of event sizes are observed. Investigation of the temporal evolution and spatial distribution of slip during each event shows a high degree of variability between the events. In some of the larger events highly complex slip patterns are observed.
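Stick-slip dynamics of this kind is often introduced with a single spring-slider: a block loaded through a spring by a constant-velocity driver slips when the spring force exceeds static friction, then re-sticks at the kinetic level. The sketch below is that textbook one-dimensional toy, with made-up parameters; the paper's three-dimensional particle-based fault model is far richer.

```python
def spring_slider(steps=2000, k=1.0, v_drive=0.001,
                  mu_static=0.5, mu_kinetic=0.3, normal=1.0):
    """Minimal 1-D stick-slip model driven at constant velocity.

    Returns the stress drop of each slip event; the rate-independent
    friction law (static/kinetic only) loosely mirrors the simple
    friction law used in the abstract's fault model.
    """
    load, slips = 0.0, []
    for _ in range(steps):
        load += k * v_drive                      # steady shear loading
        if load >= mu_static * normal:           # static strength exceeded
            slips.append(load - mu_kinetic * normal)  # stress drop
            load = mu_kinetic * normal           # re-stick at kinetic level
    return slips
```

With a single block and no heterogeneity the events are periodic and equal-sized; the complexity and wide event-size range described in the abstract only emerge once many interacting elements share the fault.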
Abstract:
Traditional real-time control systems are tightly integrated into the industrial processes they govern. Now, however, there is increasing interest in networked control systems. These provide greater flexibility and cost savings by allowing real-time controllers to interact with industrial processes over existing communications networks. New data packet queuing protocols are currently being developed to enable precise real-time control over a network with variable propagation delays. We show how one such protocol was formally modelled using timed automata, and how model checking was used to reveal subtle aspects of the control system's dynamic behaviour.
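The flavour of such model checking can be suggested with a breadth-first search over the state space of a toy packet-queue model with a bounded delivery delay, checking that an overflow state is unreachable. This discretized sketch is entirely hypothetical and far cruder than timed-automata verification with real-valued clocks.

```python
def check_bound(capacity, max_delay, horizon):
    """Explore a toy queue model breadth-first: each tick a packet may
    nondeterministically arrive, and a departure is forced once
    max_delay ticks have passed.  Returns False if the queue can
    exceed `capacity` within `horizon` ticks."""
    start = (0, 0)                         # (queue length, ticks since departure)
    seen, frontier = {start}, [start]
    for _ in range(horizon):
        nxt = set()
        for q, age in frontier:
            for arrive in (0, 1):          # nondeterministic arrival
                q2, age2 = q + arrive, age + 1
                if q2 > 0 and age2 >= max_delay:
                    q2, age2 = q2 - 1, 0   # delay bound forces a departure
                if q2 > capacity:
                    return False           # overflow state is reachable
                nxt.add((q2, age2))
        frontier = list(nxt - seen)        # explore only newly seen states
        seen |= nxt
    return True
```

Tightening the delay bound to one tick keeps the queue empty, while a two-tick bound lets worst-case arrivals outpace departures and overflow a small queue; it is exactly this kind of subtle, schedule-dependent behaviour that model checking exposes.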
Abstract:
Business environments have become exceedingly dynamic and competitive in recent times. This dynamism is manifested in the form of changing process requirements and time constraints. Workflow technology is currently one of the most promising fields of research in business process automation. However, workflow systems to date do not provide the flexibility necessary to support the dynamic nature of business processes. In this paper we primarily discuss the issues and challenges related to managing change and time in workflows representing dynamic business processes. We also present an analysis of workflow modifications and provide feasibility considerations for the automation of this process.
Abstract:
This paper presents a new approach to improving the effectiveness of autonomous systems that deal with dynamic environments. The basis of the approach is to find repeating patterns of behavior in the dynamic elements of the system, and then to use predictions of the repeating elements to better plan goal-directed behavior. It is a layered approach involving classifying, modeling, predicting and exploiting. Classifying involves using observations to place the moving elements into previously defined classes. Modeling involves recording features of the behavior on a coarse-grained grid. Exploitation is achieved by integrating predictions from the model into the behavior selection module to improve the utility of the robot's actions. This is in contrast to typical approaches that use the model to select between different strategies or plays. Three methods of adaptation to the dynamic features of the environment are explored. The effectiveness of each method is determined using statistical tests over a number of repeated experiments. The work is presented in the context of predicting opponent behavior in the highly dynamic and multi-agent robot soccer domain (RoboCup).
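The model-and-predict layers can be sketched as a coarse-grained grid of observed cell-to-cell transitions, from which the most frequent next cell is predicted. The class below is a hypothetical illustration of that idea, not the authors' implementation.

```python
from collections import Counter, defaultdict

class GridPredictor:
    """Coarse-grained behavior model: record observed cell-to-cell moves
    of a dynamic element, then predict its most likely next cell."""

    def __init__(self, cell=1.0):
        self.cell = cell                      # grid resolution (assumed)
        self.moves = defaultdict(Counter)     # cell -> counts of next cells

    def _bin(self, pos):
        return (int(pos[0] // self.cell), int(pos[1] // self.cell))

    def observe(self, path):
        """Record each transition along an observed trajectory."""
        cells = [self._bin(p) for p in path]
        for a, b in zip(cells, cells[1:]):
            self.moves[a][b] += 1

    def predict(self, pos):
        """Most frequently observed next cell, or None if unseen."""
        counts = self.moves[self._bin(pos)]
        return counts.most_common(1)[0][0] if counts else None
```

A behavior-selection module could then weight actions by where an opponent is predicted to move, rather than only by where it currently is.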