966 results for Dynamic processes
Abstract:
The development of electrophoretic computer models and their use for the simulation of electrophoretic processes has increased significantly during the last few years. Recently, GENTRANS and SIMUL5 were extended with algorithms that describe chemical equilibria between solutes and a buffer additive in a fast 1:1 interaction process, an approach that enables simulation of the electrophoretic separation of enantiomers. For acidic cationic systems with sodium and H3O(+) as leading and terminating components, respectively, acetic acid as counter component, charged weak bases as samples, and a neutral CD as chiral selector, the new codes were used to investigate the dynamics of isotachophoretic adjustment of enantiomers, enantiomer separation, boundaries between enantiomers and between an enantiomer and a buffer constituent of like charge, and zone stability. The impact of leader pH, selector concentration, free mobility of the weak base, mobilities of the formed complexes and complexation constants could thereby be elucidated. For selected examples with methadone enantiomers as analytes and (2-hydroxypropyl)-β-CD as selector, simulated zone patterns were found to compare well with those monitored experimentally in capillary setups with two conductivity detectors or an absorbance and a conductivity detector. Simulation represents an elegant way to provide insight into the formation of isotachophoretic boundaries and zone stability in the presence of complexation equilibria that was hitherto inaccessible.
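As background to such fast 1:1 complexation models, the effective mobility of an analyte interacting rapidly with a selector present at concentration [C] is commonly described by the standard expression below (given here for orientation; it is not necessarily the exact form implemented in GENTRANS or SIMUL5):

\[
\mu_{\mathrm{eff}} \;=\; \frac{\mu_{\mathrm{free}} + \mu_{\mathrm{complex}}\,K\,[\mathrm{C}]}{1 + K\,[\mathrm{C}]}
\]

where K is the complexation constant. Enantiomer separation arises because the two enantiomers generally have different K values (and possibly different complex mobilities), so their effective mobilities diverge as the selector concentration is varied.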
Abstract:
Ore-forming and geoenvironmental systems commonly involve coupled fluid flow and chemical reaction processes. Advanced numerical methods and computational modeling have become indispensable tools for simulating such processes in recent years. This enables many hitherto unsolvable geoscience problems to be addressed using numerical methods and computational modeling approaches. For example, computational modeling has been successfully used to solve ore-forming and mine-site contamination/remediation problems, in which fluid flow and geochemical processes play important roles in the controlling dynamic mechanisms. The main purpose of this paper is to present a generalized overview of: (1) the various classes and models associated with fluid flow/chemically reacting systems, in order to highlight possible opportunities and developments for the future; (2) some more general issues that need attention in the development of computational models and codes for simulating ore-forming and geoenvironmental systems; (3) the progress achieved in geochemical modeling over the past 50 years or so; (4) the general methodology for modeling ore-forming and geoenvironmental systems; and (5) the future development directions associated with modeling of ore-forming and geoenvironmental systems.
Abstract:
Dynamic penetrometer data obtained with the Nimrod penetrometer (MARUM). Data are presented as (i) penetration depth (including for different layers if present), (ii) measured deceleration, and (iii) estimated quasi-static bearing capacity, including the range of uncertainty due to the processing method. Lat/Long coordinates are given.
Abstract:
Offshore wind farms are beginning to form part of coastal and marine landscapes located in dynamic surroundings. An integral management model must therefore be applied to achieve not only the technical and economic viability of the project but also respect for the environment. Amongst other aspects, the latter calls for an analysis of the possible impact these facilities may have on littoral processes, which requires knowing the differences between littoral processes prior and subsequent to the facility's construction. The maritime climate, the composition of the coast, and the layout and characteristics of the facility's components need to be known, particularly the foundations, as they are the main obstacles that waves and currents meet. This article first addresses the aspects of an offshore wind farm that are relevant to analysing how it affects littoral dynamics and, because of their importance in this study, pays special attention to foundations. Coastal erosion due to this type of facility is then examined. The main conclusion is that, whilst some opinions claim the coast is not affected by the presence of this kind of facility because the distances from the site to the coast and between the wind turbine generators themselves are long, the impact must be analysed in each specific case, at least until experience proves otherwise and criteria are adopted in this respect.
Abstract:
Models are an effective tool for systems and software design. They allow software architects to abstract from non-relevant details. Those qualities are also useful for the technical management of networks, systems and software, such as those that compose service-oriented architectures. Models can provide a set of well-defined abstractions over the distributed heterogeneous service infrastructure that enable its automated management. We propose to use the managed system as a source of dynamically generated runtime models, and to decompose management processes into a composition of model transformations. We have created an autonomic service deployment and configuration architecture that obtains, analyzes, and transforms system models to apply the required actions, while remaining oblivious to the low-level details. An instrumentation layer automatically builds these models and translates the planned management actions into operations on the system. We illustrate these concepts with a distributed service update operation.
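A minimal sketch of the "management as model transformations" idea follows: a runtime model of deployed services is transformed into a target model, and the difference between the two is interpreted as abstract management actions. All class and field names are illustrative assumptions, not identifiers from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceModel:
    name: str
    version: str
    node: str

@dataclass
class SystemModel:
    services: dict = field(default_factory=dict)   # name -> ServiceModel

def transform_update(model: SystemModel, name: str, new_version: str) -> SystemModel:
    """Model transformation: produce the target model for a service update."""
    target = SystemModel(services=dict(model.services))
    svc = target.services[name]
    target.services[name] = ServiceModel(svc.name, new_version, svc.node)
    return target

def plan_actions(current: SystemModel, target: SystemModel):
    """Compare models and emit abstract management actions (the planning step)."""
    actions = []
    for name, tgt in target.services.items():
        cur = current.services.get(name)
        if cur is None:
            actions.append(("deploy", name, tgt.version, tgt.node))
        elif cur.version != tgt.version:
            actions.append(("update", name, cur.version, tgt.version, tgt.node))
    return actions

# Usage: an instrumentation layer would build `current` from the running system
current = SystemModel({"billing": ServiceModel("billing", "1.2", "node-a")})
target = transform_update(current, "billing", "1.3")
print(plan_actions(current, target))   # [('update', 'billing', '1.2', '1.3', 'node-a')]
```

In this sketch the interpretation of the emitted action tuples back onto the running system is left abstract, mirroring the separation between model-level planning and the low-level instrumentation layer described above.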
Abstract:
Received signal strength (RSS)-based localization systems usually rely on a calibration process that aims at characterizing the propagation channel. However, due to changing environmental dynamics, the behavior of the channel may change after some time; thus, recalibration processes are necessary to maintain the positioning accuracy. This paper proposes a dynamic calibration method to initially calibrate and subsequently update the parameters of the propagation channel model using a Least Mean Squares approach. The method assumes that each anchor node in the localization infrastructure is characterized by its own propagation channel model. In practice, a set of sniffers is used to collect RSS samples, which are used to automatically calibrate each channel model by iteratively minimizing the positioning error. The proposed method is validated through numerical simulation, showing that the positioning error of the mobile nodes is effectively reduced. Furthermore, the method has a very low computational cost; therefore, it can be used in real-time operation on resource-constrained wireless nodes.
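A minimal sketch of this kind of per-anchor calibration is shown below, assuming a log-distance path-loss model and a normalized LMS update driven by the RSS prediction error at known sniffer positions. The class name, parameter values and the exact error criterion are illustrative assumptions; the paper's method iteratively minimizes the positioning error of the mobile nodes rather than the raw RSS error.

```python
import numpy as np

class AnchorChannelModel:
    """Per-anchor log-distance path-loss model: RSS(d) = p0 - 10*n*log10(d/d0)."""

    def __init__(self, p0=-40.0, n=2.0, d0=1.0, mu=0.1):
        self.p0 = p0    # reference RSS at distance d0 (dBm)
        self.n = n      # path-loss exponent
        self.d0 = d0    # reference distance (m)
        self.mu = mu    # LMS step size

    def predict(self, d):
        return self.p0 - 10.0 * self.n * np.log10(d / self.d0)

    def lms_update(self, d, rss_measured):
        """One normalized LMS step adjusting (p0, n) from a single RSS sample."""
        x = -10.0 * np.log10(d / self.d0)          # regressor multiplying n
        err = rss_measured - (self.p0 + self.n * x)
        norm = 1.0 + x * x                         # normalization keeps the step stable
        self.p0 += self.mu * err / norm
        self.n += self.mu * err * x / norm
        return err

# Usage: sniffers at known positions supply (distance, RSS) pairs for one anchor
model = AnchorChannelModel()
rng = np.random.default_rng(0)
for _ in range(1000):
    d = rng.uniform(1.0, 30.0)
    rss = -38.0 - 10.0 * 2.3 * np.log10(d) + rng.normal(0.0, 2.0)  # synthetic channel
    model.lms_update(d, rss)
print(round(model.p0, 1), round(model.n, 2))  # drifts toward the synthetic (-38, 2.3)
```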
Abstract:
The microstructural evolution of an AZ31 rolled sheet during dynamic deformation at strain rates of ∼10³ s⁻¹ has been investigated by electron backscatter diffraction, X-ray and neutron diffraction. The influence of orientation on the predominant deformation mechanisms and on the recovery processes taking place during deformation has been systematically examined. The results have been compared with those corresponding to the same alloy tested quasi-statically under equivalent conditions. It has been found that strain rate dramatically enhances the activation of extension twinning, while contraction and secondary twinning are not significantly influenced. The polarity of extension twinning is even reversed in some grains under selected testing conditions. Significant grain subdivision by the formation of geometrically necessary boundaries (GNBs) takes place during both quasi-static and dynamic deformation of this AZ31 alloy. It is remarkable that GNBs of high misorientations form even at the highest strain rates. The phenomenon of recovery has been found to be orientation dependent.
Abstract:
The proposal highlights certain design strategies and a case study that can link the material urban space to emerging digital realms. The composite nature of urban spaces (material/digital) is understood as an opportunity to reconfigure public urban spaces without high-cost, difficult-to-apply interventions and, furthermore, to reactivate them by inserting dynamic, interactive and playful conditions that engage people and re-establish their relations to the cities. The structuring of coexisting and interconnected material and digital aspects in public urban spaces is proposed through the implementation of hybridization processes. Hybrid spaces can fascinate and provoke the public, and especially younger people, to get involved and interact with physical aspects of urban public spaces as well as digital representations or interpretations of those. Digital game design in urban public spaces can be understood as a tool that allows architects to understand and configure hybrids of material and digital conceptions and to project them as one, as an inseparable totality. Digital technologies have for a long time now intervened in our perception of traditional dipoles such as subject–environment. Architects, especially in the past, have been responsible for material mediations and tangible interfaces that permit subjects to relate to their physical environments in a controlled and regulated manner; nowadays, however, architects are compelled to embody in design the transition that is happening in all aspects of everyday life, that is, from material to digital realities. In addition, the disjunctive relation of material and digital realms is giving way, and architects are now faced with the challenge posed by the merging of both in a single, all-inclusive reality. The case study is a design project for a game implemented simultaneously in a specific urban space and on the internet. The project was developed within the spring semester course New Media in Architecture at the Department of Architecture, Democritus University of Thrace, Greece, and is situated in the city of Xanthi. Composite cities can use design strategies and technological tools to configure augmented and appealing urban spaces that articulate and connect different realms in a single engaging reality.
Abstract:
Dynamic combinatorial libraries are mixtures of compounds that exist in a dynamic equilibrium and can be driven to compositional self-adaptation via selective binding of a specific assembly of certain components to a molecular target. We present here an extension of this initial concept to dynamic libraries that consist of two levels, the first formed by the coordination of terpyridine-based ligands to the transition metal template, and the second by imine formation with the aldehyde substituents on the terpyridine moieties. Dialdehyde 7 has been synthesized, converted into a variety of ligands, oxime ethers L1–L3 and acyl hydrazones L4–L7, and subsequently into the corresponding cobalt complexes. A typical complex, Co(L2)2(2+), is shown to engage in rapid exchange with a competing ligand L1 and with another complex, Co(L2)2(2+), in 30% acetonitrile/water at pH 7.0 and 25°C. The exchange in the corresponding Co(III) complexes is shown to be much slower. Imine exchange in the acyl hydrazone complexes (L4–L7) is strongly controlled by pH and temperature. The two types of exchange, ligand and imine, can thus be used as independent equilibrium processes controlled by different types of external intervention, i.e., via oxidation/reduction of the metal template and/or a change in the pH/temperature of the medium. The resulting double-level dynamic libraries are therefore termed orthogonal, in analogy with orthogonal protecting groups in organic synthesis. Sample libraries of this type have been synthesized and showed the complete expected set of components by electrospray ionization MS.
Abstract:
The dichotomy between two groups of workers on neuroelectrical activity is retarding progress. To study the interrelations between neuronal unit spike activity and compound field potentials of cell populations is both unfashionable and technically challenging. Neither of the mutual disparagements is justified: that spikes are to higher functions as the alphabet is to Shakespeare and that slow field potentials are irrelevant epiphenomena. Spikes are not the basis of the neural code but of multiple codes that coexist with nonspike codes. Field potentials are mainly information-rich signs of underlying processes, but sometimes they are also signals for neighboring cells, that is, they exert influence. This paper concerns opportunities for new research with many channels of wide-band (spike and slow wave) recording. A wealth of structure in time and three-dimensional space is different at each scale—micro-, meso-, and macroactivity. The depth of our ignorance is emphasized to underline the opportunities for uncovering new principles. We cannot currently estimate the relative importance of spikes and synaptic communication vs. extrasynaptic graded signals. In spite of a preponderance of literature on the former, we must consider the latter as probably important. We are in a primitive stage of looking at the time series of wide-band voltages in the compound, local field, potentials and of choosing descriptors that discriminate appropriately among brain loci, states (functions), stages (ontogeny, senescence), and taxa (evolution). This is not surprising, since the brains in higher species are surely the most complex systems known. They must be the greatest reservoir of new discoveries in nature. The complexity should not deter us, but a dose of humility can stimulate the flow of imaginative juices.
Abstract:
Theories of image segmentation suggest that the human visual system may use two distinct processes to segregate figure from background: a local process that uses local feature contrasts to mark borders of coherent regions and a global process that groups similar features over a larger spatial scale. We performed psychophysical experiments to determine whether and to what extent the global similarity process contributes to image segmentation by motion and color. Our results show that for color, as well as for motion, segmentation occurs first by an integrative process on a coarse spatial scale, demonstrating that for both modalities the global process is faster than one based on local feature contrasts. Segmentation by motion builds up over time, whereas segmentation by color does not, indicating a fundamental difference between the modalities. Our data suggest that segmentation by motion proceeds first via a cooperative linking over space of local motion signals, generating almost immediate perceptual coherence even of physically incoherent signals. This global segmentation process occurs faster than the detection of absolute motion, providing further evidence for the existence of two motion processes with distinct dynamic properties.
Abstract:
In the present paper, the endogenous theory of time preference is extended to analyze those processes of capital accumulation and changes in environmental quality that are dynamically optimum with respect to the intertemporal preference ordering of the representative individual of the society in question. The analysis is carried out within the conceptual framework of the dynamic analysis of environmental quality, as developed by a number of economists for specific cases of the fisheries and forestry commons. The duality principles on intertemporal preference ordering and capital accumulation are extended to the situation where processes of capital accumulation are subject to the Penrose effect, which exhibits a marginal decrease in the effect of investment in private and social overhead capital upon the rate at which capital is accumulated. The dynamically optimum time-path of economic activities is characterized by the proportionality of two systems of imputed, or efficient, prices, one associated with the given intertemporal ordering and the other associated with processes of accumulation of private and social overhead capital. It is shown in particular that the dynamic optimality of processes of capital accumulation involving both private and social overhead capital is characterized by conditions identical with those involving private capital, with the role of social overhead capital exhibited only indirectly.
Abstract:
Microtubule asters forming the mitotic spindle are assembled around two centrosomes through the process of dynamic instability in which microtubules alternate between growing and shrinking states. By modifying the dynamics of this assembly process, cell cycle enzymes, such as cdc2 cyclin kinases, regulate length distributions in the asters. It is believed that the same enzymes control the number of assembled microtubules by changing the "nucleating activity" of the centrosomes. Here we show that assembly of microtubule asters may be strongly altered by effects connected with diffusion of tubulin monomers. Theoretical analysis of a simple model describing assembly of microtubule asters clearly shows the existence of a region surrounding the centrosome depleted in GTP tubulin. The number of assembled microtubules may in some cases be limited by this depletion effect rather than by the number of available nucleation sites on the centrosome.
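To illustrate the kind of depletion effect described above, consider the simplest limit in which the centrosome with its growing microtubule tips is treated as an absorbing sphere of radius a in a bath of GTP tubulin at far-field concentration c_∞ with diffusion coefficient D (an idealization for orientation only, not necessarily the paper's exact model). The steady-state diffusion equation then gives

\[
c(r) = c_\infty \left( 1 - \frac{a}{r} \right), \qquad J = 4 \pi D a \, c_\infty ,
\]

so the monomer concentration falls toward zero close to the centrosome and the diffusive supply rate J, rather than the number of available nucleation sites, can set the upper limit on how many microtubules elongate.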
Abstract:
Abstract. Speckle is being used as a characterization tool for the analysis of the dynamics of slow-varying phenomena occurring in biological and industrial samples at the surface or near-surface regions. The retrieved data take the form of a sequence of speckle images. These images contain information about the inner dynamics of the biological or physical process taking place in the sample. Principal component analysis (PCA) is able to split the original data set into a collection of classes. These classes are related to processes showing different dynamics. In addition, statistical descriptors of speckle images are used to retrieve information on the characteristics of the sample. These statistical descriptors can be calculated in almost real time and provide a fast monitoring of the sample. On the other hand, PCA requires a longer computation time, but the results contain more information related to spatial–temporal patterns associated to the process under analysis. This contribution merges both descriptions and uses PCA as a preprocessing tool to obtain a collection of filtered images, where statistical descriptors are evaluated on each of them. The method applies to slow-varying biological and industrial processes.
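The sketch below illustrates the merged approach under simple assumptions: PCA (via SVD of the time-by-pixels data matrix) splits a speckle image sequence into component-filtered sequences, and statistical descriptors are then evaluated on each of them. The particular descriptors used here (temporal standard deviation, mean frame-to-frame activity, variance fraction) are illustrative choices, not necessarily those of the paper.

```python
import numpy as np

def pca_filtered_descriptors(frames, n_components=3):
    """PCA-filter a speckle sequence and compute descriptors per component.

    frames: array of shape (T, H, W), one speckle image per time step.
    """
    T, H, W = frames.shape
    X = frames.reshape(T, H * W).astype(float)
    X -= X.mean(axis=0)                       # remove the per-pixel temporal mean

    # PCA via SVD of the (time x pixels) data matrix
    U, S, Vt = np.linalg.svd(X, full_matrices=False)

    descriptors = []
    for k in range(n_components):
        # Sequence reconstructed from component k only: the "filtered images"
        Xk = np.outer(U[:, k] * S[k], Vt[k]).reshape(T, H, W)
        descriptors.append({
            "component": k,
            "temporal_std": float(Xk.std(axis=0).mean()),
            "activity": float(np.abs(np.diff(Xk, axis=0)).mean()),
            "variance_fraction": float(S[k] ** 2 / (S ** 2).sum()),
        })
    return descriptors

# Usage with a synthetic 50-frame sequence of 64x64 speckle-like images
rng = np.random.default_rng(1)
seq = rng.random((50, 64, 64))
for d in pca_filtered_descriptors(seq):
    print(d)
```

As noted above, the descriptors are cheap enough for near-real-time monitoring, while the SVD step carries the heavier cost of the PCA preprocessing.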
Abstract:
In this work, we present a systematic method for the optimal development of bioprocesses that relies on the combined use of simulation packages and optimization tools. One of the main advantages of our method is that it allows for the simultaneous optimization of all the individual components of a bioprocess, including the main upstream and downstream units. The design task is mathematically formulated as a mixed-integer dynamic optimization (MIDO) problem, which is solved by a decomposition method that iterates between primal and master sub-problems. The primal dynamic optimization problem optimizes the operating conditions, bioreactor kinetics and equipment sizes, whereas the master level entails the solution of a tailored mixed-integer linear programming (MILP) model that decides on the values of the integer variables (i.e., the number of units in parallel and topological decisions). The dynamic optimization primal sub-problems are solved via a sequential approach that integrates the process simulator SuperPro Designer® with an external NLP solver implemented in Matlab®. The capabilities of the proposed methodology are illustrated through its application to a typical fermentation process and to the production of the amino acid L-lysine.
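The toy sketch below illustrates the primal-master decomposition pattern only: a "primal" step optimizes a continuous operating variable (here a batch time) for a fixed integer decision (number of parallel fermenters), and a "master" step proposes the next integer configuration. The cost model and the enumeration-based master are stand-in assumptions; the actual method couples SuperPro Designer with an external NLP solver for the dynamic primal and solves a tailored MILP as the master.

```python
import numpy as np

def solve_primal(n_units):
    """Best batch time and total cost for a fixed number of parallel units (toy model)."""
    t = np.linspace(0.5, 24.0, 1000)                            # candidate batch times (h)
    capital = 120.0 * n_units                                   # annualized equipment cost
    operating = 15.0 * n_units * t                              # utilities grow with time and units
    downstream = 900.0 / (n_units * (1.0 - np.exp(-0.4 * t)))   # falls as conversion saturates
    cost = capital + operating + downstream
    i = int(np.argmin(cost))
    return float(t[i]), float(cost[i])

def solve_master(tried, y_max=5):
    """Stand-in for the MILP master: return the next untried integer decision."""
    for y in range(1, y_max + 1):
        if y not in tried:
            return y
    return None

best, tried, y = (None, None, np.inf), {}, 1
while y is not None:
    t_opt, cost = solve_primal(y)       # primal: continuous decisions for fixed integers
    tried[y] = cost
    if cost < best[2]:
        best = (y, t_opt, cost)
    y = solve_master(tried)             # master: propose the next integer configuration

print(f"best design: {best[0]} parallel units, batch time {best[1]:.1f} h, cost {best[2]:.0f}")
```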