883 results for Nonlinear correlation coefficients


Relevance:

30.00%

Publisher:

Abstract:

The Armington Assumption in the context of multi-regional CGE models is commonly interpreted as follows: the same commodity from different origins is an imperfect substitute for the commodity from other origins. In this paper, a static spatial CGE model that is compatible with this assumption and explicitly considers the transport sector and regional price differentials is formulated. Trade coefficients, which are derived endogenously from the optimization behavior of firms and households, are shown to take the form of a potential function. To investigate how the elasticity of substitution affects equilibrium solutions, a simpler version of the model that incorporates three regions and two sectors (besides the transport sector) is introduced. Results indicate that: (1) if commodities produced in different regions are perfect substitutes, regional economies will be either autarkic or completely symmetric, and (2) if they are imperfect substitutes, the impact of the elasticity on the price equilibrium system as well as on the trade coefficients will be nonlinear and sometimes very sensitive.
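The Armington structure summarized above has a standard CES reading: each origin's trade coefficient is a price-weighted expenditure share whose concentration is governed by the elasticity of substitution. A minimal sketch of that mechanism follows; the prices and elasticity values are illustrative only and are not taken from the paper.

```python
import math

def armington_trade_shares(delivered_prices, sigma):
    """Trade coefficients implied by CES (Armington) demand.

    delivered_prices: origin prices inclusive of transport cost.
    sigma: elasticity of substitution between origins (sigma > 1).
    Returns the expenditure share sourced from each origin.
    """
    weights = [p ** (1.0 - sigma) for p in delivered_prices]
    total = sum(weights)
    return [w / total for w in weights]

# Imperfect substitutes: every origin retains a positive share.
low = armington_trade_shares([1.0, 1.1, 1.2], sigma=4.0)
# Near-perfect substitutes: demand concentrates on the cheapest origin.
high = armington_trade_shares([1.0, 1.1, 1.2], sigma=60.0)
```

As the elasticity grows, the shares collapse onto the cheapest origin, which mirrors the degenerate (autarkic or symmetric) outcomes the abstract reports for perfect substitutes.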

Relevance:

30.00%

Publisher:

Abstract:

On the basis of the BBGKY hierarchy of equations an expression is derived for the response of a fully ionized plasma to a strong, high-frequency electric field in the limit of infinite ion mass. It is found that even in this limit the ion-ion correlation function is substantially affected by the field. The corrections to earlier nonlinear results for the current density appear to be quite essential. The validity of the model introduced by Dawson and Oberman to study the response to a vanishingly small field is confirmed for larger values of the field when the correct expression for the ion-ion correlations is introduced; the model by itself does not yield such an expression. The results are of interest for the heating of the plasma and for the propagation of a strong electromagnetic wave through the plasma. The theory seems to be valid for any field intensity for which the plasma is stable.

Relevance:

30.00%

Publisher:

Abstract:

Offshore wind energy is one of the energy resources with the greatest potential, as it can help reduce the consumption of fossil fuels and cover energy demand worldwide. Offshore wind turbine concepts are based either on fixed structures such as jackets or on floating platforms, whether a semisubmersible or a TLP. Offshore wind energy is expected to play an important role in the energy production profile of the coming years; wind turbines must therefore be made more reliable and cost-effective to be competitive with other energy sources. Floating structures can experience resonant motions in sea states with long wave periods. These motions reduce their operability and can cause damage to the electrical components of the turbines and to the blades, as well as to risers and moorings. The heave response can be reduced by different means: (1) increasing the damping of the system, (2) keeping the heave period outside the range of the wave energy, and (3) reducing the vertical excitation forces. A typical device used to achieve this reduction is the heave plate. Heave plates are plates used in the offshore industry for their hydrodynamic characteristics, since they increase the added mass and damping of the system. In a conventional hydrodynamic analysis, a structure subjected to waves with given characteristics is considered and the linear loads are evaluated using potential theory. The viscous damping, which plays a crucial role in the resonant response of the system, is an input to the analysis. This thesis focuses mainly on the prediction of the viscous damping and added mass of the heave plates used in floating wind turbines.
In the calculations, the hydrodynamic forces were obtained in order to study how the hydrodynamic coefficients of added mass and damping vary with the KC number, which characterizes the amplitude of the motion relative to the diameter of the disc. The influence on the hydrodynamic coefficients of the mean distance of the heave plate to the free surface or to the seabed was also investigated. In this process, a new model describing the work done by the damping in terms of the enstrophy is presented. This new approach is able to provide a direct correlation between local vortex shedding and the global damping force. The analysis also includes the study of the effects of the heave-plate geometry, and examines the sensitivity of the hydrodynamic coefficients to the porosity of the plate. A novel heave-plate design based on fractal theory was also analyzed experimentally and compared with experimental data obtained by other authors. A solver based on the finite volume method was used to solve the Navier-Stokes equations. The solver uses the OpenFOAM (Open source Field Operation And Manipulation) libraries to solve an incompressible multiphase problem, using the VOF (volume of fluid) technique to capture the motion of the free surface. The numerical results were compared with experimental results obtained in the hydrodynamic test basins of the Universidad Politécnica de Madrid (CEHINAV) and CEHIPAR in Madrid, as well as with other experiments carried out at the School of Mechanical Engineering of the University of Western Australia. The main results are presented below: 1.
For small KC values, the added mass and damping coefficients increase as the disc approaches the seabed. When the disc oscillates close to the free surface, the dependence of the hydrodynamic coefficients is stronger because of the influence of the free-surface motion. 2. The cases analyzed show the existence of a critical KC value at which the trend of the hydrodynamic coefficients changes; this critical value depends on the distance to the seabed or to the free surface. 3. The physical behavior of the flow for KC values close to the critical value was studied through an analysis of the vorticity field. 4. Introducing porosity into the disc reduces the added mass for the KC values studied, but porosity was found to increase the damping coefficient as the amplitude of the motion increases, reaching a maximum damping for a disc with 10% porosity. 5. The numerical and experimental results for the discs with a skirt show that this type of geometry increases the added mass compared with the solid disc, but considerably reduces the damping coefficient. 6. A novel heave-plate design based on fractal theory was studied experimentally at different submergences and compared with experimental data obtained by other authors. The results show an unclear behavior of the coefficients, and this design should therefore be studied further. ABSTRACT Offshore wind energy is one of the promising resources which can reduce fossil fuel consumption and cover worldwide energy demand. Offshore wind turbine concepts are based on either a fixed structure such as a jacket or a floating platform such as a semisubmersible, spar or tension leg platform.
Floating offshore wind turbines have the potential to be an important part of the energy production profile in the coming years. In order to accomplish this wind integration, these wind turbines need to be made more reliable and cost-efficient to be competitive with other sources of energy. Floating offshore artifacts, such as oil rigs and wind turbines, may experience resonant heave motions in sea states with long peak periods. These heave resonances may increase the system downtime and cause damage to the system components as well as to risers and mooring systems. The heave resonant response may be reduced by different means: (1) increasing the damping of the system, (2) keeping the natural heave period outside the range of the wave energy, and (3) reducing the heave excitation forces. A typical example of how to accomplish this reduction is the heave plate. Heave plates are used in the offshore industry due to their hydrodynamic characteristics, i.e., increased added mass and damping. Conventional offshore hydrodynamic analysis considers a structure in waves, and evaluates the linear and nonlinear loads using potential theory. Viscous damping, which is expected to play a crucial role in the resonant response, is an empirical input to the analysis, and is not explicitly calculated. The present research has been mainly focused on the prediction of viscous damping and added mass of floating offshore wind turbine heave plates. In the calculations, the hydrodynamic forces have been measured in order to compute how the hydrodynamic coefficients of added mass and damping vary with the KC number, which characterises the amplitude of heave motion relative to the diameter of the disc. In addition, the influence on the hydrodynamic coefficients when the heave plate is oscillating close to the free surface or the seabed has been investigated. In this process, a new model describing the work done by damping in terms of the flow enstrophy is described herein.
This new approach is able to provide a direct correlation between the local vortex shedding processes and the global damping force. The analysis also includes the study of different edge geometries, and examines the sensitivity of the damping and added mass coefficients to the porosity of the plate. A novel porous heave plate based on fractal theory has also been proposed, tested experimentally and compared with experimental data obtained by other authors for plates with similar porosity. A numerical solver of the Navier-Stokes equations, based on the finite volume technique, has been applied. It uses the open-source libraries of OpenFOAM (Open source Field Operation And Manipulation) to solve two incompressible, isothermal, immiscible fluids using a VOF (volume of fluid) phase-fraction based interface capturing approach, with optional mesh motion and mesh topology changes including adaptive re-meshing. Numerical results have been compared with experiments conducted at the Technical University of Madrid (CEHINAV) and CEHIPAR model basins in Madrid, and with others performed at the School of Mechanical Engineering of The University of Western Australia. A brief summary of the main results is presented below: 1. At low KC numbers, a systematic increase in added mass and damping, corresponding to an increase in seabed proximity, is observed. Specifically, for the cases when the heave plate is oscillating closer to the free surface, the hydrodynamic coefficients are strongly influenced by the free surface. 2. As seen in experiments, a critical KC, at which the linear trend of the hydrodynamic coefficients with KC is disrupted and which depends on the seabed or free surface distance, has been found. 3. The physical behavior of the flow around the critical KC has been explained through an analysis of the flow vorticity field. 4.
The porosity of the heave plates reduces the added mass for the studied porosity at all KC numbers, but the porous heave plates are found to increase the damping coefficient with increasing amplitude of oscillation, achieving a maximum damping coefficient for the heave plate with 10% porosity in the entire KC range. 5. Another concept considered in this work has been the heave plate with flaps. Numerical and experimental results show that using discs with flaps will increase added mass when compared to the plain plate, but may also significantly reduce damping. 6. A novel heave plate design based on fractal theory has been tested experimentally for different submergences and compared with experimental data obtained by other authors for porous plates. Results show an unclear behavior of the coefficients, and the design should therefore be studied further. Future work is necessary in order to address a series of open questions focusing on 3D effects, optimization of the heave plate shapes, etc.
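The two quantities at the heart of the abstract, the KC number and the added mass and damping coefficients identified from a forced-heave force record, can be sketched with a generic Fourier-decomposition approach. This is not the thesis' actual post-processing; the motion law z(t) = a·sin(ωt), the KC definition 2πa/D, and all numbers below are assumptions for illustration.

```python
import math

def keulegan_carpenter(amplitude, diameter):
    """KC number: motion amplitude relative to the disc diameter (2*pi*a/D)."""
    return 2.0 * math.pi * amplitude / diameter

def hydro_coefficients(force, a, omega, dt):
    """Recover added mass and linear damping from a forced-heave force record,
    assuming z(t) = a*sin(omega*t) sampled over an integer number of periods."""
    n = len(force)
    in_phase = 2.0 / n * sum(f * math.sin(omega * k * dt) for k, f in enumerate(force))
    quadrature = 2.0 / n * sum(f * math.cos(omega * k * dt) for k, f in enumerate(force))
    added_mass = -in_phase / (a * omega ** 2)   # component in phase with acceleration
    damping = quadrature / (a * omega)          # component in phase with velocity
    return added_mass, damping

# Synthetic check: a force built from known coefficients is recovered.
a, omega, dt, n = 0.5, 2.0 * math.pi, 0.001, 1000
force = [120.0 * (-a * omega ** 2 * math.sin(omega * k * dt))
         + 35.0 * (a * omega * math.cos(omega * k * dt)) for k in range(n)]
ma, c = hydro_coefficients(force, a, omega, dt)
```

The same projection is the usual way forced-oscillation experiments separate the inertial and dissipative parts of the measured force.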

Relevance:

30.00%

Publisher:

Abstract:

The use of fixed-point arithmetic is a widespread design choice in systems with tight area, power or performance constraints. To produce implementations where costs are minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths must be carried out. Finding the optimal combination of fixed-point word-lengths for a given system is an NP-hard combinatorial problem to which designers devote between 25 and 50% of the design cycle. Reconfigurable hardware platforms, such as FPGAs, also benefit from the advantages of fixed-point arithmetic, since it compensates for the lower clock frequencies and less efficient hardware usage of these platforms with respect to ASICs. As FPGAs become popular for scientific computing, designs grow in size and complexity to the point where they can no longer be handled efficiently by current signal and quantization-noise modelling and word-length optimization techniques. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them: Techniques based on interval extensions have made it possible to obtain very accurate signal and quantization-noise propagation models in systems with nonlinear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a modern technique based on statistical Modified Affine Arithmetic (MAA) in order to model systems containing control-flow structures. Our methodology generates the different execution paths automatically, determines the regions of the input domain that will exercise each of them, and extracts the statistical moments of the system from those partial solutions.
We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in certain case studies with nonlinear operators deviates by only 0.04% from the reference values obtained by simulation. A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the systems under study grows, which leads to scalability problems. To address this problem we present a clustered noise-injection technique that groups the signals of the system, introduces the noise sources for each group separately, and finally combines the results of each of them. In this way the number of noise sources is kept under control at all times and, as a result, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, with the goal of keeping the results as accurate as possible. This thesis also addresses the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. To this end we present two new techniques that explore the reduction of execution time from different angles: First, the interpolative method applies a simple but accurate interpolator to estimate the sensitivity of each signal, which is then used during the optimization stage.
Second, the incremental method revolves around the fact that, although it is strictly necessary to maintain a given confidence interval for the final results of our search, we can use more relaxed confidence levels, which translates into fewer trials per simulation, in the initial stages of the search, when we are still far from the optimized solutions. Through these two approaches we show that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this book presents HOPLITE, an automated, flexible and modular quantization infrastructure that includes the implementation of the above techniques and is publicly available. Its goal is to offer developers and researchers a common environment for easily prototyping and verifying new quantization methodologies. We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its operation. We also show, through a simple example, how to connect new extensions to the tool through the existing interfaces in order to expand and improve the capabilities of HOPLITE. ABSTRACT Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time.
Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of the hardware platform with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and we present new methodologies for each of them: The techniques based on extensions of intervals have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures. We show the good accuracy of our approach, which in some case studies with non-linear operators shows a 0.04% deviation with respect to the simulation-based reference values. A known drawback of the techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems.
To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise terms in each group independently and then combines the results at the end. In this way, the number of noise sources in the system at a given time is controlled and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This Ph.D. Thesis also covers the development of methodologies for word-length optimization based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that explore the reduction of the execution times by approaching the problem in two different ways: First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a certain confidence level in the simulations for the final results of the optimization process, we can use more relaxed levels, which in turn implies using a considerably smaller number of samples, in the initial stages of the process, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy techniques can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular framework for quantization that includes the implementation of the previous techniques and is provided for public access. The aim is to offer a common ground for developers and researchers to easily prototype and verify new techniques for system modelling and word-length optimization.
We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its execution. We also show, through an example, how new extensions to the flow should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
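The incremental idea, relaxed Monte-Carlo confidence during the early greedy steps and a stricter re-check at the end, can be caricatured on a toy fixed-point problem. Everything below (the 4-tap dot product, the sample counts, the error budget) is an illustrative assumption and is unrelated to HOPLITE's actual API or algorithms.

```python
import random

def quantize(x, frac_bits):
    """Round x to a fixed-point grid with frac_bits fractional bits."""
    step = 2.0 ** -frac_bits
    return round(x / step) * step

def max_error(frac_bits, n_samples, rng):
    """Monte-Carlo estimate of the worst-case error of a toy 4-tap dot product."""
    coeffs = [0.25, -0.5, 0.125, 0.75]
    worst = 0.0
    for _ in range(n_samples):
        xs = [rng.uniform(-1, 1) for _ in coeffs]
        exact = sum(c * x for c, x in zip(coeffs, xs))
        quant = sum(quantize(c * x, frac_bits) for c, x in zip(coeffs, xs))
        worst = max(worst, abs(exact - quant))
    return worst

def greedy_wordlength(budget, rng):
    """Shrink fractional bits greedily while a cheap Monte-Carlo screen stays
    within budget; re-verify the final choice with a larger sample count."""
    bits = 24
    while bits > 1 and max_error(bits - 1, 200, rng) <= budget:  # relaxed screen
        bits -= 1
    while max_error(bits, 2000, rng) > budget:  # stricter final check
        bits += 1
    return bits

bits = greedy_wordlength(budget=1e-3, rng=random.Random(1))
```

The cheap screen does most of the descent with few samples; only the candidate solution pays for a high-confidence simulation, which is the essence of the speed-up described above.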

Relevance:

30.00%

Publisher:

Abstract:

Fluorescein-labeled oligodeoxynucleotides (oligos) were introduced into cultured rat myoblasts, and their molecular movements inside the nucleus were studied by fluorescence correlation spectroscopy (FCS) and fluorescence recovery after photobleaching (FRAP). FCS revealed that a large fraction of both intranuclear oligo(dT) (43%) and oligo(dA) (77%) moves rapidly with a diffusion coefficient of 4 × 10⁻⁷ cm²/s. Interestingly, this rate of intranuclear oligo movement is similar to their diffusion rates measured in aqueous solution. In addition, we detected a large fraction (45%) of the intranuclear oligo(dT), but not oligo(dA), diffusing at slower rates (≤1 × 10⁻⁷ cm²/s). The amount of this slower-moving oligo(dT) was greatly reduced if the oligo(dT) was prehybridized in solution with (unlabeled) oligo(dA) prior to introduction to cells, presumably because the oligo(dT) was then unavailable for subsequent hybridization to endogenous poly(A) RNA. The FCS-measured diffusion rate for much of the slower oligo(dT) population approximated the diffusion rate in aqueous solution of oligo(dT) hybridized to a large polyadenylated RNA (1.0 × 10⁻⁷ cm²/s). Moreover, this intranuclear movement rate falls within the range of calculated diffusion rates for an average-sized heterogeneous nuclear ribonucleoprotein particle in aqueous solution. A subfraction of oligo(dT) (15%) moved over 10-fold more slowly, suggesting it was bound to very large macromolecular complexes. Average diffusion coefficients obtained from FRAP experiments were in agreement with the FCS data. These results demonstrate that oligos can move about within the nucleus at rates comparable to those in aqueous solution and further suggest that this is true for large ribonucleoprotein complexes as well.

Relevance:

30.00%

Publisher:

Abstract:

The onset of measles vaccination in England and Wales in 1968 coincided with a marked drop in the temporal correlation of epidemic patterns between major cities. We analyze a variety of hypotheses for the mechanisms driving this change. Straightforward stochastic models suggest that the interaction between a lowered susceptible population (and hence increased demographic noise) and nonlinear dynamics is sufficient to cause the observed drop in correlation. The decorrelation of epidemics could potentially lessen the chance of global extinction and so inhibit attempts at measles eradication.
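The proposed mechanism, demographic noise interacting with nonlinear epidemic dynamics, can be illustrated with a toy discrete-time stochastic SIR model for two uncoupled cities, together with the correlation measure used to compare them. The model and all parameters below are illustrative assumptions, not the paper's fitted measles model.

```python
import math
import random

def stochastic_sir(beta, gamma, mu, n, steps, seed):
    """Crude discrete-time stochastic SIR with births (demographic noise).
    Returns the per-step infective counts. Toy parameters, not fitted data."""
    rng = random.Random(seed)
    s, i = n // 2, 20
    series = []
    for _ in range(steps):
        new_inf = sum(1 for _ in range(s) if rng.random() < beta * i / n)
        rec = sum(1 for _ in range(i) if rng.random() < gamma)
        births = sum(1 for _ in range(n) if rng.random() < mu)
        s = max(s - new_inf + births, 0)
        i = max(i + new_inf - rec, 0)
        series.append(i)
    return series

def pearson(x, y):
    """Temporal correlation between two epidemic time series."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

city_a = stochastic_sir(0.8, 0.3, 0.0005, 2000, 52, seed=1)
city_b = stochastic_sir(0.8, 0.3, 0.0005, 2000, 52, seed=2)
r = pearson(city_a, city_b)
```

Shrinking the susceptible pool (as vaccination does) amplifies the relative demographic noise in such simulations, which is the decorrelation route the abstract analyzes.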

Relevance:

30.00%

Publisher:

Abstract:

Purpose: To evaluate the correlation of the magnitude of corneal toricity and power vector components of both corneal surfaces measured with a Scheimpflug photography-based system. Methods: A total of 117 healthy normal eyes of 117 subjects selected randomly with ages ranging from 7 to 80 years were included. All eyes received an anterior segment and corneal analysis with the Sirius system (CSO) evaluating the anterior and posterior mean toricity for 3 and 7 mm (aAST and pAST). The vector components J0 and J45 as well as the overall strength blur (B) were calculated for each keratometric measurement using the procedure defined by Thibos and Horner. Results: The coefficient of correlation between aAST and pAST was 0.52 and 0.62 and the mean anteroposterior ratio for toricity was 0.46 ± 0.39 and 0.57 ± 0.75 for 3 and 7 mm, respectively. These ratios correlated significantly with aAST, anterior corneal J0, and manifest refraction J0 (r ≥ 0.39, P < 0.01). The coefficient of correlation was 0.69 and 0.81 between anterior and posterior J0 for 3 and 7 mm, respectively. For J45, the coefficients were 0.62 and 0.71, respectively. The linear regression analysis revealed that the pAST and power vectors could be predicted from the anterior corneal data (R2 ≥ 0.40, P < 0.01). Conclusions: The toricity and astigmatic power vector components of the posterior corneal surface in the human healthy eye are related to those of the anterior and therefore can be predicted consistently from the anterior toricity and astigmatic power vectors.
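The Thibos-Horner power-vector decomposition used in the study has a simple closed form: M = S + C/2, J0 = −(C/2)cos 2α, J45 = −(C/2)sin 2α, with overall blur strength B = √(M² + J0² + J45²). A sketch (negative-cylinder convention assumed):

```python
import math

def power_vectors(cyl, axis_deg):
    """Thibos-Horner astigmatic power-vector components (J0, J45).

    cyl: cylinder power in dioptres (negative-cylinder convention),
    axis_deg: cylinder axis in degrees.
    """
    axis = math.radians(axis_deg)
    j0 = -(cyl / 2.0) * math.cos(2.0 * axis)
    j45 = -(cyl / 2.0) * math.sin(2.0 * axis)
    return j0, j45

def blur_strength(sphere, cyl, axis_deg):
    """Overall blur strength B = sqrt(M^2 + J0^2 + J45^2), with M = S + C/2."""
    m = sphere + cyl / 2.0
    j0, j45 = power_vectors(cyl, axis_deg)
    return math.sqrt(m * m + j0 * j0 + j45 * j45)
```

Expressing both corneal surfaces in (J0, J45) coordinates is what makes the anterior-posterior correlations in the abstract directly comparable across axes.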

Relevance:

30.00%

Publisher:

Abstract:

This paper studies stability properties of linear optimization problems with finitely many variables and an arbitrary number of constraints, when only the left-hand-side coefficients can be perturbed. The coefficients of the constraints are assumed to be continuous functions with respect to an index ranging over a certain compact Hausdorff topological space, and these properties are preserved by the admissible perturbations. In more detail, the paper analyzes the continuity properties of the feasible set, the optimal set and the optimal value, as well as the preservation of desirable properties (boundedness, uniqueness) of the feasible and optimal sets under sufficiently small perturbations.

Relevance:

30.00%

Publisher:

Abstract:

In this paper we investigate the solvability of the Neumann problem (1.1) involving the critical Sobolev exponents on the right-hand side of the equation and in the boundary condition. It is assumed that the coefficients Q and P are smooth. We examine the combined effect of the mean curvature of the boundary ∂Ω and the shape of the graphs of the coefficients Q and P on the existence of solutions of problem (1.1). (C) 2003 Published by Elsevier Inc.

Relevance:

30.00%

Publisher:

Abstract:

The RAFT-CLD-T methodology is demonstrated to be applicable not only to 1-substituted monomers such as styrene and acrylates, but also to 1,1-disubstituted monomers such as MMA. The chain length of the terminating macromolecules is controlled by CPDB in MMA bulk free radical polymerization at 80 °C. The evolution of the chain-length-dependent termination rate coefficient, k_t(i,i), was constructed in a step-wise fashion, since the MMA/CPDB system displays hybrid behavior (between conventional and living free radical polymerization), resulting in initial high molecular weight polymers formed at low RAFT agent concentrations. The obtained CLD of k_t in MMA polymerizations is compatible with the composite model for chain-length-dependent termination. For the initial chain-length regime, up to a degree of polymerization of 100, k_t decreases with α (in the expression k_t(i,i) = k_t(0)·i^(−α)) being close to 0.65 at 80 °C. At chain lengths exceeding 100, the decrease is less pronounced (affording an α of 0.15 at 80 °C). However, the data are best represented by a continuously decreasing nonlinear functionality, implying a chain-length-dependent α.
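The composite model referred to above can be written down directly, using the exponents quoted in the abstract (0.65 below and 0.15 above a crossover chain length of 100 at 80 °C) and matching the two power laws at the crossover. The prefactor k_t(0) below is an arbitrary placeholder, not a value from the paper.

```python
def kt_composite(i, kt0, alpha_s=0.65, alpha_l=0.15, i_c=100):
    """Composite model for chain-length-dependent termination:
    k_t(i,i) = kt0 * i^(-alpha_s) up to the crossover length i_c, then a
    long-chain power law with exponent alpha_l, matched at i_c for continuity."""
    if i <= i_c:
        return kt0 * i ** (-alpha_s)
    return kt0 * i_c ** (alpha_l - alpha_s) * i ** (-alpha_l)
```

The prefactor i_c^(α_l − α_s) makes the two branches agree at i = i_c, so the modelled k_t decays steeply for short chains and gently thereafter, as described in the abstract.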

Relevance:

30.00%

Publisher:

Abstract:

Water-sampler equilibrium partitioning coefficients and aqueous boundary layer mass transfer coefficients for atrazine, diuron, hexazinone and fluometuron onto C18 and SDB-RPS Empore disk-based aquatic passive samplers have been determined experimentally under a laminar flow regime (Re = 5400). The method involved accelerating the time to equilibrium of the samplers by exposing them to three water concentrations, decreasing stepwise to 50% and then 25% of the original concentration. Assuming first-order Fickian kinetics across a rate-limiting aqueous boundary layer, both parameters are determined computationally by unconstrained nonlinear optimization. In addition, a method of estimating mass transfer coefficients, and therefore sampling rates, using the dimensionless Sherwood correlation developed for laminar flow over a flat plate is applied. For each of the herbicides, this correlation is validated to within 40% of the experimental data. The study demonstrates that for trace concentrations (below 0.1 μg/L) and these flow conditions, a naked Empore disk performs well as an integrative sampler over short deployments (up to 7 days) for the range of polar herbicides investigated. The SDB-RPS disk allows a longer integrative period than the C18 disk due to its higher sorbent mass and/or its more polar sorbent chemistry. This work also suggests that for certain passive sampler designs, empirical estimation of sampling rates may be possible using correlations that have been available in the chemical engineering literature for some time.
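The two ingredients of the analysis, first-order uptake across a rate-limiting aqueous boundary layer and the laminar flat-plate Sherwood correlation Sh = 0.664 Re^(1/2) Sc^(1/3), can be sketched as follows. The disk length scale, flow speed, diffusivity and partitioning values below are illustrative assumptions, not the study's fitted parameters.

```python
import math

def mass_transfer_coefficient(u, length, nu, diff):
    """Boundary-layer mass transfer coefficient k = Sh*D/L from the laminar
    flat-plate correlation Sh = 0.664 * Re^(1/2) * Sc^(1/3)."""
    re = u * length / nu          # Reynolds number
    sc = nu / diff                # Schmidt number
    sh = 0.664 * math.sqrt(re) * sc ** (1.0 / 3.0)
    return sh * diff / length

def sampler_uptake(c_w, k_w, area, v_s, k_sw, t, steps=10000):
    """First-order Fickian uptake of a disk sampler: dCs/dt =
    (k_w*A/V_s)*(C_w - C_s/K_sw), integrated by explicit Euler.
    C_s tends to the equilibrium value K_sw * C_w."""
    c_s, dt = 0.0, t / steps
    for _ in range(steps):
        c_s += dt * (k_w * area / v_s) * (c_w - c_s / k_sw)
    return c_s
```

In the short-deployment (integrative) regime the uptake is nearly linear in time with sampling rate k_w·A, which is why the Sherwood-based estimate of k_w translates directly into a sampling-rate estimate.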

Relevance:

30.00%

Publisher:

Abstract:

The particle size of the bed sediments in or on many natural streams, alluvial fans, laboratory flumes, irrigation canals and mine waste deltas varies exponentially with distance along the stream. A plot of the available worldwide exponential bed particle size diminution coefficient data against stream length is presented which shows that all the data lie within a single narrow band extending over virtually the whole range of stream lengths and bed sediment particle sizes found on Earth. This correlation applies to both natural and artificial flows with both sand and gravel beds, irrespective of either the solids concentration or whether normal or reverse sorting occurs. This strongly suggests that there are common mechanisms underlying the exponential diminution of bed particles in subaerial aqueous flows of all kinds. Thus existing models of sorting and abrasion applicable to some such flows may be applicable to others. A comparison of exponential laboratory abrasion and field diminution coefficients suggests that abrasion is unlikely to be significant in gravel and sand bed streams shorter than about 10 km to 100 km, and about 500 km, respectively. Copyright (C) 1999 John Wiley & Sons, Ltd.
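The exponential diminution law D(x) = D₀·exp(−a·x) underlying the correlation above is estimated from size-distance data by a log-linear regression. A sketch on synthetic data; the 0.05 per km coefficient and the 32 mm initial size are made-up illustration values.

```python
import math

def diminution_coefficient(distances_km, sizes_mm):
    """Least-squares estimate of the exponential diminution coefficient a in
    D(x) = D0 * exp(-a*x), fitted on log-transformed particle sizes."""
    xs, ys = distances_km, [math.log(s) for s in sizes_mm]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # per km

# Synthetic downstream fining profile with a known coefficient of 0.05 /km.
dists = [0, 10, 20, 40, 80]
sizes = [32 * math.exp(-0.05 * x) for x in dists]
coeff = diminution_coefficient(dists, sizes)
```

Plotting such fitted coefficients against stream length on log axes is what produces the single narrow band the abstract describes.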

Relevance:

30.00%

Publisher:

Abstract:

Objective: This study aimed to explore methods of assessing interactions between neuronal sources using MEG beamformers. Beamformer methodology, however, is based on the assumption of no long-term linear interdependencies between sources [Van Veen BD, van Drongelen W, Yuchtman M, Suzuki A. Localization of brain electrical activity via linearly constrained minimum variance spatial filtering. IEEE Trans Biomed Eng 1997;44:867-80; Robinson SE, Vrba J. Functional neuroimaging by synthetic aperture magnetometry (SAM). In: Recent advances in Biomagnetism. Sendai: Tohoku University Press; 1999. p. 302-5]. Although such long-term correlations are not efficient and should not be anticipated in a healthy brain [Friston KJ. The labile brain. I. Neuronal transients and nonlinear coupling. Philos Trans R Soc Lond B Biol Sci 2000;355:215-36], transient correlations appear to underlie functional cortical coordination [Singer W. Neuronal synchrony: a versatile code for the definition of relations? Neuron 1999;49-65; Rodriguez E, George N, Lachaux J, Martinerie J, Renault B, Varela F. Perception's shadow: long-distance synchronization of human brain activity. Nature 1999;397:430-3; Bressler SL, Kelso J. Cortical coordination dynamics and cognition. Trends Cogn Sci 2001;5:26-36]. Methods: Two periodic sources were simulated, and the effects of transient source correlation on the spatial and temporal performance of the MEG beamformer were examined. Subsequently, the interdependencies of the reconstructed sources were investigated using coherence and phase-synchronization analysis based on Mutual Information. Finally, two interacting nonlinear systems served as neuronal sources, and their phase interdependencies were studied under realistic measurement conditions. Results: Both the spatial and the temporal beamformer source reconstructions were accurate as long as the transient source correlation did not exceed 30-40% of the duration of the beamformer analysis. In addition, the interdependencies of periodic sources were preserved by the beamformer, and phase synchronization of interacting nonlinear sources could be detected. Conclusions: MEG beamformer methods, in conjunction with analysis of source interdependencies, can provide accurate spatial and temporal descriptions of interactions between linear and nonlinear neuronal sources. Significance: The proposed methods can be used for the study of interactions between neuronal sources. © 2005 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
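The abstract's phase-synchronization analysis is based on Mutual Information; a simpler, commonly used stand-in is the phase-locking value computed from Hilbert-transform instantaneous phases, which conveys the same idea. The sketch below is illustrative only — the signal frequencies and noise parameters are hypothetical, not taken from the study:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-locking value between two signals: 0 = no phase sync, 1 = perfect."""
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Hypothetical 10 Hz "sources": one pair with a fixed phase lag,
# one pair with a random-walk phase drift (desynchronized).
fs = 1000.0
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
a = np.sin(2 * np.pi * 10 * t)
b = np.sin(2 * np.pi * 10 * t + 0.5)  # constant lag -> PLV close to 1
c = np.sin(2 * np.pi * 10 * t + np.cumsum(rng.normal(0, 0.3, t.size)))

print(phase_locking_value(a, b))  # high (phase-locked pair)
print(phase_locking_value(a, c))  # much lower (drifting pair)
```

Applied to beamformer-reconstructed source time courses, such a measure would detect the transient couplings the study targets; the Mutual-Information variant used by the authors additionally captures nonlinear phase relationships.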

Relevance:

30.00%

Publisher:

Abstract:

The thesis presents experimental results for shell-side transfer coefficients and pressure drops across four different tube banks, using small-scale models with yawed tubes, as found in many types of heat exchangers, boilers and nuclear reactors. The tube banks investigated have a staggered tube layout on a rotated square pitch, with a 1.25 pitch-to-diameter ratio. The angle of attack was varied between 45° and 90°. An extensive range of Reynolds numbers, from 0.5 to 12,600, covering the so-called laminar, transition and turbulent flow regimes, was investigated. A diffusion-controlled electrochemical mass-transfer technique was employed to measure mass transfer coefficients; the heat transfer coefficients may then be readily obtained from the mass transfer values by applying the well-established Chilton-Colburn analogy. The results for the normal tube bank, which forms the base case for the study of inclined tube banks, show close agreement with previous work. The transfer coefficients and pressure drops of the inclined tube banks are compared with results from the ideal normal tube bank to examine the effect of inclination angle on heat transfer and pressure drop. The row-by-row variation of the transfer coefficients and the entrance and exit effects have also been investigated, and an auxiliary investigation has been carried out on the role of natural convection. A preliminary correlation of transfer coefficients and pressure drops against yaw angle has been attempted. The results are discussed in the light of the few existing theoretical treatments and experimental data for these situations, and recommendations are made for future work.
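The Chilton-Colburn analogy invoked above equates the heat- and mass-transfer j-factors, so a Sherwood number measured electrochemically converts to a Nusselt number as Nu = Sh · (Pr/Sc)^(1/3). A minimal sketch of that conversion, with illustrative (hypothetical) property values, not figures from the thesis:

```python
def heat_from_mass_transfer(Sh, Pr, Sc):
    """Chilton-Colburn analogy: j_H = j_D  =>  Nu = Sh * (Pr / Sc)**(1/3)."""
    return Sh * (Pr / Sc) ** (1.0 / 3.0)

# Hypothetical values for illustration:
Sh = 120.0   # Sherwood number from the electrochemical measurement
Sc = 1500.0  # Schmidt number of a typical electrolyte solution
Pr = 7.0     # Prandtl number of water near room temperature
Nu = heat_from_mass_transfer(Sh, Pr, Sc)  # roughly 20 for these inputs
```

The large Schmidt number of electrolyte systems is why the measured Sherwood numbers are much larger than the inferred Nusselt numbers; the analogy holds where the (Pr/Sc)^(1/3) scaling of the boundary layers applies.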

Relevance:

30.00%

Publisher:

Abstract:

We investigate a digital back-propagation simplification method to enable computationally efficient digital nonlinearity compensation for coherently detected 112 Gb/s polarization-multiplexed quadrature phase-shift keying (PM-QPSK) transmission over a 1,600 km link (20 × 80 km) with no inline compensation. Through numerical simulation, we report up to an 80% reduction in the number of back-propagation steps required for nonlinear compensation, in comparison with the standard back-propagation algorithm. The method takes into account the correlation between adjacent symbols at a given instant using a weighted-average approach, together with optimization of the position of the nonlinear compensator stage, to enable practical digital back-propagation.
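The core of any such scheme is the inverted split-step Fourier method: dispersion and loss are undone in the frequency domain and the Kerr nonlinear phase rotation is undone in the time domain, with the weighted average over neighbouring samples letting fewer, longer steps be used. The sketch below illustrates that structure only; it is not the authors' exact algorithm, and all parameter names and values are assumptions:

```python
import numpy as np

def simplified_dbp(rx, n_steps, span_len, alpha, beta2, gamma, fs, weights):
    """Illustrative digital back-propagation for one fiber span.

    Symmetric split-step with sign-inverted channel parameters:
      alpha  - fiber loss (1/km), beta2 - GVD (ps^2/km, here unit-free),
      gamma  - Kerr coefficient, weights - window averaging adjacent-sample
               powers (the 'correlated'/weighted-average simplification).
    """
    n = rx.size
    f = np.fft.fftfreq(n, 1 / fs)
    dz = span_len / n_steps
    # Inverse linear operator for half a step (loss and dispersion undone)
    half_lin = np.exp((alpha / 2 - 1j * beta2 / 2 * (2 * np.pi * f) ** 2) * dz / 2)
    x = rx.copy()
    for _ in range(n_steps):
        x = np.fft.ifft(np.fft.fft(x) * half_lin)
        # Weighted average of neighbouring powers instead of the
        # instantaneous power alone -> fewer steps tolerated.
        power = np.convolve(np.abs(x) ** 2, weights, mode="same")
        x = x * np.exp(1j * gamma * power * dz)  # sign-inverted Kerr rotation
        x = np.fft.ifft(np.fft.fft(x) * half_lin)
    return x

# Sanity demo: with gamma = 0 the routine reduces to ideal dispersion
# compensation, so a linearly dispersed pulse is recovered exactly.
fs, n, span, beta2 = 1.0, 64, 1.0, 1.0
t = np.arange(n) / fs
pulse = np.exp(-((t - n / 2) ** 2) / 8).astype(complex)
w = 2 * np.pi * np.fft.fftfreq(n, 1 / fs)
dispersed = np.fft.ifft(np.fft.fft(pulse) * np.exp(1j * beta2 / 2 * w ** 2 * span))
recovered = simplified_dbp(dispersed, 4, span, 0.0, beta2, 0.0, fs, np.array([1.0]))
```

In a full receiver this routine would run once per span (or, with the simplification, once per several spans), with the weights and the position of the nonlinear stage optimized numerically as the abstract describes.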