979 results for approximate calculation of sums


Relevance:

100.00%

Publisher:

Abstract:

Electronic properties of disordered binary alloys are studied via the calculation of the average Density of States (DOS) in two and three dimensions. We propose a new approximate scheme that allows for the inclusion of local order effects in finite geometries and extrapolates the behavior of infinite systems following finite-size scaling ideas. We particularly investigate the limit of the Quantum Site Percolation regime described by a tight-binding Hamiltonian. This limit was chosen to probe the role of short range order (SRO) properties under extreme conditions. The method is numerically highly efficient and asymptotically exact in important limits, predicting the correct DOS structure as a function of the SRO parameters. Magnetic field effects can also be included in our model to study the interplay of local order and the shifted quantum interference driven by the field. The average DOS is highly sensitive to changes in the SRO properties and striking effects are observed when a magnetic field is applied near the segregated regime. The new effects observed are twofold: there is a reduction of the band width and the formation of a gap in the middle of the band, both as a consequence of destructive interference of electronic paths and the loss of coherence for particular values of the magnetic field. The above phenomena are periodic in the magnetic flux. For other limits that imply strong localization, the magnetic field produces minor changes in the structure of the average DOS. © World Scientific Publishing Company.
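
As an illustration of the kind of calculation described above, the following is a minimal brute-force sketch: a tight-binding Hamiltonian on a site-diluted square lattice, with a magnetic field entered through Peierls phases, averaged over disorder realizations to give a DOS histogram. It is not the finite-size-scaling scheme of the paper; the lattice size, occupation probability and all other parameter values are illustrative.

```python
import numpy as np

def site_percolation_dos(L=20, p=0.7, t=1.0, flux=0.0, realizations=20,
                         bins=101, rng=np.random.default_rng(0)):
    """Average DOS of a 2D quantum site-percolation model on an L x L square
    lattice: sites are occupied with probability p and nearest neighbours are
    coupled by a hopping t.  A perpendicular magnetic field enters through
    Peierls phases (flux in units of the flux quantum per plaquette)."""
    edges = np.linspace(-4.5 * t, 4.5 * t, bins + 1)
    hist = np.zeros(bins)
    for _ in range(realizations):
        occ = rng.random((L, L)) < p
        idx = -np.ones((L, L), dtype=int)
        idx[occ] = np.arange(occ.sum())
        H = np.zeros((occ.sum(), occ.sum()), dtype=complex)
        for x in range(L):
            for y in range(L):
                if not occ[x, y]:
                    continue
                # hop in +x direction (Landau gauge: phase depends on y)
                if x + 1 < L and occ[x + 1, y]:
                    phase = np.exp(2j * np.pi * flux * y)
                    H[idx[x, y], idx[x + 1, y]] = -t * phase
                    H[idx[x + 1, y], idx[x, y]] = -t * np.conj(phase)
                # hop in +y direction (no phase in this gauge)
                if y + 1 < L and occ[x, y + 1]:
                    H[idx[x, y], idx[x, y + 1]] = -t
                    H[idx[x, y + 1], idx[x, y]] = -t
        eigs = np.linalg.eigvalsh(H)
        hist += np.histogram(eigs, bins=edges)[0]
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist / hist.sum() / (edges[1] - edges[0])

energies, dos = site_percolation_dos()
```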

Relevance:

100.00%

Publisher:

Abstract:

Squeeze film damping effects naturally occur if structures are subjected to loading situations such that a very thin film of fluid is trapped within structural joints, interfaces, etc. An accurate estimate of squeeze film effects is important to predict the performance of dynamic structures. Starting from the linearized Reynolds equation, which governs the fluid behavior, coupled with the structural domain modeled by the Kirchhoff plate equation, the effects of nondimensional parameters on the damped natural frequencies are presented using boundary characteristic orthogonal functions. For this purpose, the nondimensional coupled partial differential equations are obtained using the Rayleigh-Ritz method and the weak formulation, and are solved using polynomial and sinusoidal boundary characteristic orthogonal functions for the structural and fluid domains, respectively. In order to apply the present approach to complex geometries, a two-dimensional isoparametric coupled finite element is developed based on Reissner-Mindlin plate theory and the linearized Reynolds equation. The coupling between fluid and structure is handled by considering the pressure forces and structural surface velocities on the boundaries. The effects of the driving parameters on the frequency response functions are investigated. As the next logical step, an analytical method for the solution of squeeze film damping, based upon a Green's function approach to the nonlinear Reynolds equation for an elastic plate, is studied. This allows the modal damping and stiffness forces to be calculated rapidly for various boundary conditions. The nonlinear Reynolds equation is divided into multiple linear non-homogeneous Helmholtz equations, which can then be solved using the presented approach. Approximate mode shapes of a rectangular elastic plate are used, enabling calculation of the damping ratio and frequency shift as well as the complex resistant pressure. Moreover, the theoretical results are correlated and compared with experimental results, both from the literature and from in-house experimental procedures, including comparison against viscoelastic dampers.
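
For reference, the coupled field equations described above can be written compactly as follows; the symbols (plate rigidity D, plate density rho_p and thickness t_p, ambient pressure p_a, nominal gap h_0, viscosity mu) are generic and the signs depend on orientation conventions, so this is a sketch of the standard formulation rather than the exact equations of the study.

```latex
% Kirchhoff plate driven by the film pressure (signs depend on orientation)
D\,\nabla^{4}w + \rho_{p} t_{p}\,\frac{\partial^{2} w}{\partial t^{2}} = -(p - p_{a}),
\qquad
D = \frac{E\,t_{p}^{3}}{12\,(1-\nu^{2})}

% isothermal Reynolds equation for the trapped fluid film
\nabla\cdot\!\left(\frac{p\,h^{3}}{12\,\mu}\,\nabla p\right) = \frac{\partial (p\,h)}{\partial t}

% linearisation about p = p_a, h = h_0, with h = h_0 - w and \psi = (p - p_a)/p_a
\nabla^{2}\psi = \frac{12\,\mu}{p_{a} h_{0}^{2}}\,
\frac{\partial}{\partial t}\!\left(\psi - \frac{w}{h_{0}}\right)
```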

Relevance:

100.00%

Publisher:

Abstract:

Pelvic osteotomies improve containment of the femoral head in cases of developmental dysplasia of the hip or in femoroacetabular impingement due to acetabular retroversion. In the evolution of osteotomies, the Ganz Periacetabular Osteotomy (PAO) is among the complex reorientation osteotomies and allows for complete mobilization of the acetabulum without compromising the integrity of the pelvic ring. For the complex reorientation osteotomies, preoperative planning of the required acetabular correction is an important step, owing to the need to understand the three-dimensional (3D) relationship between acetabulum and femur. Traditionally, planning was performed using conventional radiographs in different projections, reducing the 3D problem to a two-dimensional one. Known disturbance variables, mainly tilt and rotation of the pelvis, make assessment by these means approximate at best. The advent of greater computing power and new imaging techniques opened the way for more sophisticated means of preoperative planning. Apart from analysis of acetabular geometry on conventional x-rays by sophisticated software applications, more accurate assessment of coverage and congruency, and thus of the amount of correction necessary, can be performed on multiplanar CT images. With the further evolution of computer-assisted orthopaedic surgery, especially the ability to generate 3D models from CT data, examiners became able to simulate the in vivo situation in a virtual in vitro setting. Based on this ability, different techniques have been described. They basically all employ virtual definition of an acetabular fragment. Subsequently, reorientation can be simulated using either 3D calculation of standard parameters of femoroacetabular morphology, or joint contact pressures, or a combination of both. Other techniques employ patient-specific implants, templates or cutting guides to achieve the goal of safe periacetabular osteotomies. This chapter gives an overview of the available techniques for planning of periacetabular osteotomy.
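
The virtual reorientation step can be pictured as a rigid rotation of the segmented acetabular fragment about a chosen centre; a minimal sketch using scipy, with entirely hypothetical coordinates and correction angle:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# hypothetical segmented acetabular-fragment points (mm, pelvic coordinates)
fragment = np.array([[35.0, -10.0, 5.0],
                     [38.0,  -6.0, 9.0],
                     [33.0,  -2.0, 12.0]])
hip_centre = np.array([30.0, -8.0, 0.0])   # assumed rotation centre

# hypothetical planned correction: 20 degrees about the z-axis
correction = R.from_euler("z", 20, degrees=True)

# reorient the fragment about the hip centre, as a planning tool would
reoriented = correction.apply(fragment - hip_centre) + hip_centre
print(reoriented)
```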

Relevance:

100.00%

Publisher:

Abstract:

We conducted an integrated paleomagnetic and rock magnetic study on cores recovered from Ocean Drilling Program Sites 1276 and 1277 in the Newfoundland Basin. Stable components of magnetization are determined from Cretaceous sedimentary and basement cores after detailed thermal and alternating-field demagnetization. Results from a series of rock magnetic measurements corroborate the demagnetization behavior and show that titanomagnetites are the main magnetic carrier. In view of the normal polarity of magnetization and the radiometric dates for the sills at Site 1276 (~98 and ~105 Ma, both within the Cretaceous Normal Superchron) and for a gabbro intrusion in peridotite at Site 1277 (~126 Ma, Chron M1), our results suggest that the primary Cretaceous magnetization is likely retained in these rocks. The overall magnetic inclination of lithologic Unit 2 in Hole 1277A between 143 and 180 meters below seafloor is 38°, implying significant (~35° counterclockwise, viewed to the north) rotation of the basement around a horizontal axis parallel to the rift axis (010°). The paleomagnetic rotational estimates should help refine models for the tectonic evolution of the basement. The mean inclinations for Sites 1276 and 1277 rocks imply paleolatitudes of 30.3° ± 5.1° and 22.9° ± 12.0°, respectively, with the latter presumably influenced by tectonic rotation. These values are consistent with those inferred from the mid-Cretaceous reference poles for North America, suggesting that the inclination determinations are reliable and consistent with drill sites located on the North American plate since at least the mid-Cretaceous. The combined paleolatitude results from Leg 210 sites indicate that the Newfoundland Basin was some 1800 km south of its current position in the mid-Cretaceous. Assuming a constant rate of motion, the paleolatitude data suggest a rate of 12.1 mm/yr for the interval from ~130 Ma (Site 1276 age) to the present, and 19.6 mm/yr for the interval from 126 Ma (Site 1277 age) to the present. The paleolatitude and rotational data from this study are consistent with the possibility that Site 1276 may have passed over the Canary and Madeira hotspots that formed the Newfoundland Seamounts in the mid-Cretaceous.
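
The paleolatitude and drift-rate figures quoted above follow from the standard geocentric axial dipole relation tan I = 2 tan λ and simple kinematics; a small sketch with illustrative inputs (not the Leg 210 measurements):

```python
import math

def paleolatitude(inclination_deg):
    """Geocentric axial dipole relation: tan(I) = 2 tan(lambda)."""
    return math.degrees(math.atan(math.tan(math.radians(inclination_deg)) / 2.0))

def drift_rate_mm_per_yr(delta_latitude_deg, age_myr):
    """Average northward drift rate implied by a paleolatitude offset,
    taking ~111.32 km per degree of latitude."""
    return delta_latitude_deg * 111.32e6 / (age_myr * 1.0e6)   # mm per year

# illustrative values only, not the measured Leg 210 inclinations
print(paleolatitude(49.0))              # ~29.9 degrees
print(drift_rate_mm_per_yr(15.0, 130))  # ~12.8 mm/yr
```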

Relevance:

100.00%

Publisher:

Abstract:

RESUMEN La dispersión del amoniaco (NH3) emitido por fuentes agrícolas en medias distancias, y su posterior deposición en el suelo y la vegetación, pueden llevar a la degradación de ecosistemas vulnerables y a la acidificación de los suelos. La deposición de NH3 suele ser mayor junto a la fuente emisora, por lo que los impactos negativos de dichas emisiones son generalmente mayores en esas zonas. Bajo la legislación comunitaria, varios estados miembros emplean modelos de dispersión inversa para estimar los impactos de las emisiones en las proximidades de las zonas naturales de especial conservación. Una revisión reciente de métodos para evaluar impactos de NH3 en distancias medias recomendaba la comparación de diferentes modelos para identificar diferencias importantes entre los métodos empleados por los distintos países de la UE. En base a esta recomendación, esta tesis doctoral compara y evalúa las predicciones de las concentraciones atmosféricas de NH3 de varios modelos bajo condiciones, tanto reales como hipotéticas, que plantean un potencial impacto sobre ecosistemas (incluidos aquellos bajo condiciones de clima Mediterráneo). En este sentido, se procedió además a la comparación y evaluación de varias técnicas de modelización inversa para inferir emisiones de NH3. Finalmente, se ha desarrollado un modelo matemático simple para calcular las concentraciones de NH3 y la velocidad de deposición de NH3 en ecosistemas vulnerables cercanos a una fuente emisora. La comparativa de modelos supuso la evaluación de cuatro modelos de dispersión (ADMS 4.1; AERMOD v07026; OPS-st v3.0.3 y LADD v2010) en un amplio rango de casos hipotéticos (dispersión de NH3 procedente de distintos tipos de fuentes agrícolas de emisión). La menor diferencia entre las concentraciones medias estimadas por los distintos modelos se obtuvo para escenarios simples. La convergencia entre las predicciones de los modelos fue mínima para el escenario relativo a la dispersión de NH3 procedente de un establo ventilado mecánicamente. En este caso, el modelo ADMS predijo concentraciones significativamente menores que los otros modelos. Una explicación de estas diferencias podríamos encontrarla en la interacción de diferentes “penachos” y “capas límite” durante el proceso de parametrización. Los cuatro modelos de dispersión fueron empleados para dos casos reales de dispersión de NH3: una granja de cerdos en Falster (Dinamarca) y otra en Carolina del Norte (EEUU). Las concentraciones medias anuales estimadas por los modelos fueron similares para el caso americano (emisión de granjas ventiladas de forma natural y balsa de purines). La comparación de las predicciones de los modelos con concentraciones medias anuales medidas in situ, así como la aplicación de los criterios establecidos para la aceptación estadística de los modelos, permitió concluir que los cuatro modelos se comportaron aceptablemente para este escenario. No ocurrió lo mismo en el caso danés (nave ventilada mecánicamente), en donde el modelo LADD no dio buenos resultados debido a la ausencia de procesos de “sobreelevacion de penacho” (plume-rise). Los modelos de dispersión dan a menudo pobres resultados en condiciones de baja velocidad de viento debido a que la teoría de dispersión en la que se basan no es aplicable en estas condiciones. 
En situaciones de frecuente descenso en la velocidad del viento, la actual guía de modelización propone usar un modelo que sea eficaz bajo dichas condiciones, máxime cuando se realice una valoración que tenga como objeto establecer una política de regularización. Esto puede no ser siempre posible debido a datos meteorológicos insuficientes, en cuyo caso la única opción sería utilizar un modelo más común, como la versión avanzada de los modelos Gausianos ADMS o AERMOD. Con el objetivo de evaluar la idoneidad de estos modelos para condiciones de bajas velocidades de viento, ambos modelos fueron utilizados en un caso con condiciones Mediterráneas. Lo que supone sucesivos periodos de baja velocidad del viento. El estudio se centró en la dispersión de NH3 procedente de una granja de cerdos en Segovia (España central). Para ello la concentración de NH3 media mensual fue medida en 21 localizaciones en torno a la granja. Se realizaron también medidas de concentración de alta resolución en una única localización durante una campaña de una semana. En este caso, se evaluaron dos estrategias para mejorar la respuesta del modelo ante bajas velocidades del viento. La primera se basó en “no zero wind” (NZW), que sustituyó periodos de calma con el mínimo límite de velocidad del viento y “accumulated calm emissions” (ACE), que forzaban al modelo a calcular las emisiones totales en un periodo de calma y la siguiente hora de no-calma. Debido a las importantes incertidumbres en los datos de entrada del modelo (inputs) (tasa de emisión de NH3, velocidad de salida de la fuente, parámetros de la capa límite, etc.), se utilizó el mismo caso para evaluar la incertidumbre en la predicción del modelo y valorar como dicha incertidumbre puede ser considerada en evaluaciones del modelo. Un modelo dinámico de emisión, modificado para el caso de clima Mediterráneo, fue empleado para estimar la variabilidad temporal en las emisiones de NH3. Así mismo, se realizó una comparativa utilizando las emisiones dinámicas y la tasa constante de emisión. La incertidumbre predicha asociada a la incertidumbre de los inputs fue de 67-98% del valor medio para el modelo ADMS y entre 53-83% del valor medio para AERMOD. La mayoría de esta incertidumbre se debió a la incertidumbre del ratio de emisión en la fuente (50%), seguida por la de las condiciones meteorológicas (10-20%) y aquella asociada a las velocidades de salida (5-10%). El modelo AERMOD predijo mayores concentraciones que ADMS y existieron más simulaciones que alcanzaron los criterios de aceptabilidad cuando se compararon las predicciones con las concentraciones medias anuales medidas. Sin embargo, las predicciones del modelo ADMS se correlacionaron espacialmente mejor con las mediciones. El uso de valores dinámicos de emisión estimados mejoró el comportamiento de ADMS, haciendo empeorar el de AERMOD. La aplicación de estrategias destinadas a mejorar el comportamiento de este último tuvo efectos contradictorios similares. Con el objeto de comparar distintas técnicas de modelización inversa, varios modelos (ADMS, LADD y WindTrax) fueron empleados para un caso no agrícola, una colonia de pingüinos en la Antártida. Este caso fue empleado para el estudio debido a que suponía la oportunidad de obtener el primer factor de emisión experimental para una colonia de pingüinos antárticos. Además las condiciones eran propicias desde el punto de vista de la casi total ausencia de concentraciones ambiente (background). 
Tras el trabajo de modelización existió una concordancia suficiente entre las estimaciones obtenidas por los tres modelos. De este modo se pudo definir un factor de emisión para la colonia de 1.23 g NH3 por pareja criadora por día (con un rango de incertidumbre de 0.8-2.54 g NH3 por pareja criadora por día). Posteriores aplicaciones de técnicas de modelización inversa para casos agrícolas mostraron también una buena concordancia estadística entre las emisiones estimadas por los distintos modelos. Con todo ello, es posible concluir que la modelización inversa es una técnica robusta para estimar tasas de emisión de NH3. Los modelos de selección (screening) permiten obtener una estimación rápida y aproximada de los impactos medioambientales, siendo una herramienta útil para evaluaciones de impactos en tanto que permiten eliminar casos que presentan un riesgo potencial de daño bajo. De esta forma, los recursos de modelización pueden destinarse a casos en donde la posibilidad de daño es mayor. El modelo de Cálculo Simple de los Límites de Impacto de Amoniaco (SCAIL) se desarrolló para obtener una estimación de la concentración media de NH3 y de la tasa de deposición seca asociadas a una fuente agrícola. Esta técnica de selección, basada en el modelo LADD, fue evaluada y calibrada con diferentes bases de datos y, finalmente, validada utilizando medidas independientes de concentraciones realizadas cerca de las fuentes. En general, SCAIL dio buenos resultados de acuerdo con los criterios estadísticos establecidos. Este trabajo ha permitido definir situaciones en las que las concentraciones predichas por los modelos de dispersión son similares, frente a otras en las que las predicciones difieren notablemente entre modelos. Algunos modelos no están diseñados para simular determinados escenarios, en tanto que no incluyen procesos relevantes o están más allá de los límites de su aplicabilidad. Un ejemplo es el modelo LADD, que no es aplicable a fuentes con velocidad de salida significativa debido a que no incluye una parametrización de la sobreelevación del penacho. La evaluación de un esquema simple que combina la sobreelevación del penacho y una turbulencia aumentada en la fuente mejoró el comportamiento del modelo; sin embargo, son necesarias más pruebas para avanzar en este sentido. Incluso modelos que son aplicables y que incluyen los procesos relevantes no siempre dan predicciones similares, siendo las razones de ello aún desconocidas. Por ejemplo, AERMOD predice mayores concentraciones que ADMS para la dispersión de NH3 procedente de naves de ganado ventiladas mecánicamente. Existe evidencia que sugiere que el modelo ADMS infraestima las concentraciones en estas situaciones debido a un límite elevado de velocidad de viento. Por el contrario, existen evidencias de que AERMOD sobreestima las concentraciones debido a sobreestimaciones a bajas velocidades de viento. Sin embargo, una modificación simple del preprocesador meteorológico parece mejorar notablemente el comportamiento del modelo. Es de gran importancia que estas diferencias entre las predicciones de los modelos sean consideradas en los procesos de evaluación regulatoria por parte de los organismos competentes. Esto puede realizarse mediante la aplicación del modelo más adecuado para cada caso o, mejor aún, mediante modelos múltiples o híbridos.
ABSTRACT Short-range atmospheric dispersion of ammonia (NH3) emitted by agricultural sources and its subsequent deposition to soil and vegetation can lead to the degradation of sensitive ecosystems and acidification of the soil. Atmospheric concentrations and dry deposition rates of NH3 are generally highest near the emission source and so environmental impacts to sensitive ecosystems are often largest at these locations. Under European legislation, several member states use short-range atmospheric dispersion models to estimate the impact of ammonia emissions on nearby designated nature conservation sites. A recent review of assessment methods for short-range impacts of NH3 recommended an intercomparison of the different models to identify whether there are notable differences to the assessment approaches used in different European countries. Based on this recommendation, this thesis compares and evaluates the atmospheric concentration predictions of several models used in these impact assessments for various real and hypothetical scenarios, including Mediterranean meteorological conditions. In addition, various inverse dispersion modelling techniques for the estimation of NH3 emissions rates are also compared and evaluated and a simple screening model to calculate the NH3 concentration and dry deposition rate at a sensitive ecosystem located close to an NH3 source was developed. The model intercomparison evaluated four atmospheric dispersion models (ADMS 4.1; AERMOD v07026; OPS-st v3.0.3 and LADD v2010) for a range of hypothetical case studies representing the atmospheric dispersion from several agricultural NH3 source types. The best agreement between the mean annual concentration predictions of the models was found for simple scenarios with area and volume sources. The agreement between the predictions of the models was worst for the scenario representing the dispersion from a mechanically ventilated livestock house, for which ADMS predicted significantly smaller concentrations than the other models. The reason for these differences appears to be due to the interaction of different plume-rise and boundary layer parameterisations. All four dispersion models were applied to two real case studies of dispersion of NH3 from pig farms in Falster (Denmark) and North Carolina (USA). The mean annual concentration predictions of the models were similar for the USA case study (emissions from naturally ventilated pig houses and a slurry lagoon). The comparison of model predictions with mean annual measured concentrations and the application of established statistical model acceptability criteria concluded that all four models performed acceptably for this case study. This was not the case for the Danish case study (mechanically ventilated pig house) for which the LADD model did not perform acceptably due to the lack of plume-rise processes in the model. Regulatory dispersion models often perform poorly in low wind speed conditions due to the model dispersion theory being inapplicable at low wind speeds. For situations with frequent low wind speed periods, current modelling guidance for regulatory assessments is to use a model that can handle these conditions in an acceptable way. This may not always be possible due to insufficient meteorological data and so the only option may be to carry out the assessment using a more common regulatory model, such as the advanced Gaussian models ADMS or AERMOD. 
In order to assess the suitability of these models for low wind conditions, they were applied to a Mediterranean case study that included many periods of low wind speed. The case study was the dispersion of NH3 emitted by a pig farm in Segovia, Central Spain, for which mean monthly atmospheric NH3 concentration measurements were made at 21 locations surrounding the farm as well as high-temporal-resolution concentration measurements at one location during a one-week campaign. Two strategies to improve the model performance for low wind speed conditions were tested. These were ‘no zero wind’ (NZW), which replaced calm periods with the minimum threshold wind speed of the model, and ‘accumulated calm emissions’ (ACE), which forced the model to emit the total emissions of a calm period during the first subsequent non-calm hour. Due to large uncertainties in the model input data (NH3 emission rates, source exit velocities, boundary layer parameters), the case study was also used to assess model prediction uncertainty and how this uncertainty can be taken into account in model evaluations. A dynamic emission model modified for the Mediterranean climate was used to estimate the temporal variability in NH3 emission rates, and a comparison was made between the simulations using the dynamic emissions and a constant emission rate. Prediction uncertainty due to model input uncertainty was 67-98% of the mean value for ADMS and between 53-83% of the mean value for AERMOD. Most of this uncertainty was due to source emission rate uncertainty (~50%), followed by uncertainty in the meteorological conditions (~10-20%) and uncertainty in exit velocities (~5-10%). AERMOD predicted higher concentrations than ADMS, and more of the simulations met the model acceptability criteria when compared with the annual mean measured concentrations. However, the ADMS predictions were better correlated spatially with the measurements. The use of dynamic emission estimates improved the performance of ADMS but worsened the performance of AERMOD, and the application of strategies to improve model performance had similarly contradictory effects. In order to compare different inverse modelling techniques, several models (ADMS, LADD and WindTrax) were applied to a non-agricultural case study of a penguin colony in Antarctica. This case study was used since it gave the opportunity to provide the first experimentally-derived emission factor for an Antarctic penguin colony and also had the advantage of negligible background concentrations. There was sufficient agreement between the emission estimates obtained from the three models to define an emission factor for the penguin colony (1.23 g NH3 per breeding pair per day with an uncertainty range of 0.8-2.54 g NH3 per breeding pair per day). This emission estimate compared favourably to the value obtained using a simple micrometeorological technique (aerodynamic gradient) of 0.98 g ammonia per breeding pair per day (95% confidence interval: 0.2-2.4 g ammonia per breeding pair per day). Further application of the inverse modelling techniques for a range of agricultural case studies also demonstrated good agreement between the emission estimates. It is concluded, therefore, that inverse dispersion modelling is a robust technique for estimating NH3 emission rates.
Screening models that can provide a quick and approximate estimate of environmental impacts are a useful tool for impact assessments because they can be used to filter out cases that potentially have a minimal environmental impact, allowing resources to be focussed on more potentially damaging cases. The Simple Calculation of Ammonia Impact Limits (SCAIL) model was developed as a screening model to provide an estimate of the mean NH3 concentration and dry deposition rate downwind of an agricultural source. This screening tool, based on the LADD model, was evaluated and calibrated with several experimental datasets and then validated using independent concentration measurements made near sources. Overall, SCAIL performed acceptably according to established statistical criteria. This work has identified situations where the concentration predictions of dispersion models are similar and other situations where the predictions are significantly different. Some models are simply not designed to simulate certain scenarios since they do not include the relevant processes or are beyond the limits of their applicability. An example is the LADD model, which is not applicable to sources with significant exit velocity since the model does not include a plume-rise parameterisation. The testing of a simple scheme combining a momentum-driven plume rise and increased turbulence at the source improved model performance, but more testing is required. Even models that are applicable and include the relevant processes do not always give similar predictions, and the reasons for this need to be investigated. AERMOD, for example, predicts higher concentrations than ADMS for dispersion from mechanically ventilated livestock housing. There is evidence to suggest that ADMS underestimates concentrations in these situations due to a high wind speed threshold. Conversely, there is also evidence that AERMOD overestimates concentrations in these situations due to overestimation at low wind speeds. However, a simple modification to the meteorological pre-processor appears to improve the performance of the model. It is important that these differences between the predictions of these models are taken into account in regulatory assessments. This can be done by applying the most suitable model for the assessment in question or, better still, using multiple or hybrid models.
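
Because the predicted concentration is linear in the emission rate in these dispersion models, inverse dispersion modelling can be reduced to scaling a unit-emission forward run to the measurements. The sketch below uses a textbook Gaussian plume with hand-picked dispersion parameters and made-up measurements; it is only a schematic of the idea, not ADMS, AERMOD, OPS, LADD or WindTrax.

```python
import numpy as np

def gaussian_plume(q, y, z, u, h_s, sigma_y, sigma_z):
    """Concentration (g/m3) from a point source of strength q (g/s) at stack
    height h_s in wind speed u, using the textbook Gaussian plume formula with
    ground reflection.  sigma_y and sigma_z are supplied directly here; a real
    regulatory model derives them from stability and downwind distance."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - h_s)**2 / (2 * sigma_z**2)) +
                np.exp(-(z + h_s)**2 / (2 * sigma_z**2)))
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# forward run with a unit emission rate (1 g NH3/s) at hypothetical receptors
receptor_y = np.array([0.0, 20.0, 40.0, 80.0])   # crosswind offsets, m
c_unit = gaussian_plume(1.0, receptor_y, z=1.5, u=3.0, h_s=4.0,
                        sigma_y=35.0, sigma_z=18.0)

# hypothetical measured concentrations at the same receptors (g/m3)
c_meas = np.array([8.0e-5, 7.0e-5, 4.0e-5, 6.0e-6])

# concentrations are linear in q, so the least-squares inverse estimate is
q_hat = np.dot(c_unit, c_meas) / np.dot(c_unit, c_unit)
print(f"inferred emission rate: {q_hat:.2f} g NH3/s")
```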

Relevance:

100.00%

Publisher:

Abstract:

The calculation of the effective delayed neutron fraction, βeff, with Monte Carlo codes is a complex task due to the requirement of properly considering the adjoint weighting of delayed neutrons. Nevertheless, several techniques have been proposed to circumvent this difficulty and obtain accurate Monte Carlo results for βeff without the need of explicitly determining the adjoint flux. In this paper, we review some of these techniques; namely, we have analyzed two variants of what we call the k-eigenvalue technique and other techniques based on different interpretations of the physical meaning of the adjoint weighting. To test the validity of all these techniques we have implemented them with the MCNPX code and benchmarked them against a range of critical and subcritical systems for which either experimental or deterministic values of βeff are available. Furthermore, several nuclear data libraries have been used in order to assess the impact of the uncertainty in nuclear data on the calculated value of βeff.
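
One of the k-eigenvalue variants mentioned above can be read as running the eigenvalue calculation twice, with all neutrons and with prompt neutrons only, and taking βeff ≈ 1 − k_p/k. The sketch below shows that estimate with first-order propagation of the statistical uncertainties; it is a simplified reading of the technique, not the MCNPX implementation, and the numbers are illustrative.

```python
import math

def beta_eff_prompt_method(k_total, sigma_k_total, k_prompt, sigma_k_prompt):
    """Estimate beta_eff ~ 1 - k_prompt / k_total from two Monte Carlo
    k-eigenvalue runs (all neutrons vs. prompt neutrons only), with
    first-order propagation of the statistical uncertainties."""
    ratio = k_prompt / k_total
    beta = 1.0 - ratio
    sigma = ratio * math.sqrt((sigma_k_prompt / k_prompt) ** 2 +
                              (sigma_k_total / k_total) ** 2)
    return beta, sigma

# illustrative numbers, not taken from any benchmark in the paper
beta, sigma = beta_eff_prompt_method(1.00012, 0.00005, 0.99335, 0.00005)
print(f"beta_eff = {beta:.5f} +/- {sigma:.5f}")
```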

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we give two infinite families of explicit exact formulas that generalize Jacobi’s (1829) 4 and 8 squares identities to 4n² or 4n(n + 1) squares, respectively, without using cusp forms. Our 24 squares identity leads to a different formula for Ramanujan’s tau function τ(n), when n is odd. These results arise in the setting of Jacobi elliptic functions, Jacobi continued fractions, Hankel or Turánian determinants, Fourier series, Lambert series, inclusion/exclusion, Laplace expansion formula for determinants, and Schur functions. We have also obtained many additional infinite families of identities in this same setting that are analogous to the η-function identities in appendix I of Macdonald’s work [Macdonald, I. G. (1972) Invent. Math. 15, 91–143]. A special case of our methods yields a proof of the two conjectured [Kac, V. G. and Wakimoto, M. (1994) in Progress in Mathematics, eds. Brylinski, J.-L., Brylinski, R., Guillemin, V. & Kac, V. (Birkhäuser Boston, Boston, MA), Vol. 123, pp. 415–456] identities involving representing a positive integer by sums of 4n² or 4n(n + 1) triangular numbers, respectively. Our 16 and 24 squares identities were originally obtained via multiple basic hypergeometric series, Gustafson’s Cℓ nonterminating ₆φ₅ summation theorem, and Andrews’ basic hypergeometric series proof of Jacobi’s 4 and 8 squares identities. We have (elsewhere) applied symmetry and Schur function techniques to this original approach to prove the existence of similar infinite families of sums of squares identities for n² or n(n + 1) squares, respectively. Our sums of more than 8 squares identities are not the same as the formulas of Mathews (1895), Glaisher (1907), Ramanujan (1916), Mordell (1917, 1919), Hardy (1918, 1920), Kac and Wakimoto, and many others.
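
As a concrete instance of the classical identities being generalized here, Jacobi’s four-square theorem states that the number of ordered, signed representations of n as a sum of four squares is eight times the sum of the divisors of n not divisible by 4; a brute-force check:

```python
def r4(n):
    """Number of (a, b, c, d) in Z^4 with a^2 + b^2 + c^2 + d^2 = n."""
    m = int(n ** 0.5) + 1
    rng = range(-m, m + 1)
    return sum(1 for a in rng for b in rng for c in rng for d in rng
               if a * a + b * b + c * c + d * d == n)

def jacobi_r4(n):
    """Jacobi (1829): r4(n) = 8 * sum of divisors of n not divisible by 4."""
    return 8 * sum(d for d in range(1, n + 1) if n % d == 0 and d % 4 != 0)

for n in range(1, 21):
    assert r4(n) == jacobi_r4(n)
print("Jacobi's four-square identity verified for n = 1..20")
```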

Relevance:

100.00%

Publisher:

Abstract:

We introduce a model of a nonlinear double-barrier structure to describe in a simple way the effects of electron-electron scattering while remaining analytically tractable. The model is based on a generalized effective-mass equation where a nonlinear local field interaction is introduced to account for these inelastic scattering phenomena. Resonance peaks seen in the transmission coefficient spectra for the linear case appear shifted to higher energies depending on the magnitude of the nonlinear coupling. Our results are in good agreement with self-consistent solutions of the Schrödinger and Poisson equations. The calculation procedure is very fast, which makes our technique a good candidate for a rapid approximate analysis of these structures.
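
For the linear (non-interacting) limit referred to above, the transmission coefficient of a double rectangular barrier can be computed with a standard piecewise-constant transfer-matrix scheme; the sketch below uses units ħ = 2m = 1 and illustrative barrier parameters, and is not the nonlinear model of the paper.

```python
import numpy as np

def transmission(E, regions):
    """Transmission through a 1D piecewise-constant potential profile.
    `regions` is a list of (V, width) pairs between two semi-infinite leads
    at V = 0.  Units hbar = 2m = 1, so k = sqrt(E - V) (imaginary in a barrier)."""
    def iface(k_left, k_right):
        # backward transfer matrix across an interface:
        # (A, B) on the left expressed in terms of (A, B) on the right
        r = k_right / k_left
        return 0.5 * np.array([[1 + r, 1 - r],
                               [1 - r, 1 + r]], dtype=complex)
    def prop(k, d):
        # backward propagation of the amplitudes across a region of width d
        return np.array([[np.exp(-1j * k * d), 0],
                         [0, np.exp(1j * k * d)]], dtype=complex)
    k_lead = np.sqrt(complex(E))
    M = np.eye(2, dtype=complex)
    k_prev = k_lead
    for V, d in regions:
        k = np.sqrt(complex(E - V))
        M = M @ iface(k_prev, k) @ prop(k, d)
        k_prev = k
    M = M @ iface(k_prev, k_lead)
    return 1.0 / abs(M[0, 0]) ** 2

# illustrative symmetric double barrier: barrier / well / barrier
structure = [(1.0, 2.0), (0.0, 6.0), (1.0, 2.0)]
energies = np.linspace(0.01, 1.5, 300)
T = np.array([transmission(E, structure) for E in energies])
# resonance peaks in T(E) mark the quasi-bound states of the central well
```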

Relevance:

100.00%

Publisher:

Abstract:

The ‘leading coordinate’ approach to computing an approximate reaction pathway, with subsequent determination of the true minimum energy profile, is applied to a two-proton chain transfer model based on the chromophore and its surrounding moieties within the green fluorescent protein (GFP). Using an ab initio quantum chemical method, a number of different relaxed energy profiles are found for several plausible guesses at leading coordinates. The results obtained for different trial leading coordinates are rationalized through the calculation of a two-dimensional relaxed potential energy surface (PES) for the system. Analysis of the 2-D relaxed PES reveals that two of the trial pathways are entirely spurious, while two others contain useful information and can be used to furnish starting points for successful saddle-point searches. Implications for selection of trial leading coordinates in this class of proton chain transfer reactions are discussed, and a simple diagnostic function is proposed for revealing whether or not a relaxed pathway based on a trial leading coordinate is likely to furnish useful information.
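
The ‘leading coordinate’ idea can be illustrated on a two-dimensional model surface: one coordinate is stepped while the other is relaxed at each step, giving a relaxed energy profile. The sketch below uses an arbitrary analytic potential, not the GFP ab initio surface.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def V(q1, q2):
    """Arbitrary two-dimensional model potential with two minima."""
    return (q1**2 - 1.0)**2 + 2.0 * (q2 - 0.3 * q1)**2 + 0.2 * q1 * q2

# step the trial leading coordinate q1 and relax q2 at every step
q1_grid = np.linspace(-1.3, 1.3, 41)
profile = []
for q1 in q1_grid:
    res = minimize_scalar(lambda q2: V(q1, q2), bounds=(-3.0, 3.0), method="bounded")
    profile.append((q1, res.x, res.fun))

# the relaxed profile V(q1, q2_relaxed(q1)); its maximum only approximates the
# true saddle point and can be spurious if q1 is a poor leading coordinate
energies = [e for _, _, e in profile]
```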

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE: To assess the effect of using different risk calculation tools on how general practitioners and practice nurses evaluate the risk of coronary heart disease with clinical data routinely available in patients' records. DESIGN: Subjective estimates of the risk of coronary heart disease and the results of four different methods of risk calculation were compared with each other and with a reference standard calculated with the Framingham equation; calculations were based on a sample of patients' records, randomly selected from groups at risk of coronary heart disease. SETTING: General practices in central England. PARTICIPANTS: 18 general practitioners and 18 practice nurses. MAIN OUTCOME MEASURES: Agreement of the results of risk estimation and risk calculation with the reference calculation; agreement of general practitioners with practice nurses; sensitivity and specificity of the different methods of risk calculation in detecting patients at high or low risk of coronary heart disease. RESULTS: Only a minority of patients' records contained all of the risk factors required for the formal calculation of the risk of coronary heart disease (concentrations of high density lipoprotein (HDL) cholesterol were present in only 21%). Agreement of risk calculations with the reference standard was moderate (kappa = 0.33 to 0.65 for practice nurses and 0.33 to 0.65 for general practitioners, depending on the calculation tool), showing a trend towards underestimation of risk. Moderate agreement was seen between the risks calculated by general practitioners and practice nurses for the same patients (kappa = 0.47 to 0.58). The British charts gave the most sensitive results for risk of coronary heart disease (practice nurses 79%, general practitioners 80%), and they also gave the most specific results for practice nurses (100%), whereas the Sheffield table was the most specific method for general practitioners (89%). CONCLUSIONS: Routine calculation of the risk of coronary heart disease in primary care is hampered by poor availability of data on risk factors. General practitioners and practice nurses are able to evaluate the risk of coronary heart disease with only moderate accuracy. Data on risk factors need to be collected systematically to allow the use of the most appropriate calculation tools.
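
The agreement and accuracy statistics quoted above come from simple 2×2 cross-tabulations; a sketch with made-up counts (not the study data):

```python
def cohen_kappa(table):
    """Cohen's kappa for a 2x2 agreement table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    po = (a + d) / n
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
    return (po - pe) / (1 - pe)

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity and specificity of a risk-calculation method against the
    reference (Framingham) classification of high vs. low risk."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical counts: rows = tool classification, columns = reference standard
print(cohen_kappa([[30, 10], [8, 52]]))          # agreement beyond chance, ~0.62
print(sensitivity_specificity(30, 8, 52, 10))    # approximately (0.79, 0.84)
```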

Relevance:

100.00%

Publisher:

Abstract:

This thesis covers both experimental and computational investigations into the dynamic behaviour of mechanical seals. The literature survey shows no investigations of the effect of vibration on mechanical seals of the type common in the various process industries. Typical seal designs are discussed. A form of Reynolds' equation has been developed that permits the calculation of stiffness and damping coefficients for the fluid film. The dynamics of the mechanical seal floating ring have been investigated using approximate formulae, and it has been shown that the floating ring will behave as a rigid body. Some elements, such as the radial damping due to the fluid film, are small and may be neglected. The equations of motion of the floating ring have been developed using the significant elements, and a solution technique is described. The stiffness and damping coefficients of nitrile rubber o-rings have been obtained. These show a wide variation, with a constant stiffness up to 60 Hz. The importance of the effect of temperature on the properties is discussed. An unsuccessful test rig is described in the appendices. The dynamic behaviour of a mechanical seal has been investigated experimentally, including the effect of changes of speed, sealed pressure and seal geometry. The results, as expected, show that high vibration levels result in both high leakage and high seal temperatures. Computer programs have been developed to solve Reynolds' equation and the equations of motion. Two solution techniques were developed for the latter program; the unsuccessful one is described in the appendices. Some stability problems were encountered, but despite these the solution shows good agreement with some of the experimental conditions. Possible reasons for the discrepancies are discussed. Various suggestions for future work in this field are given. These include combining the programs and more extensive experimental and computer modelling.
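
Treating the floating ring as a rigid body supported by the fluid film and the o-ring reduces its axial motion to a single-degree-of-freedom equation of motion; the sketch below integrates such an equation under prescribed seat vibration, with entirely hypothetical stiffness and damping values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# hypothetical lumped parameters for the floating ring (SI units)
m = 0.05                   # ring mass, kg
k = 2.0e6 + 5.0e4          # fluid-film + o-ring stiffness, N/m
c = 150.0 + 40.0           # fluid-film + o-ring damping, N s/m
amp, freq = 20e-6, 50.0    # seat vibration amplitude (m) and frequency (Hz)
w = 2 * np.pi * freq

def rhs(t, y):
    x, v = y                                                # ring displacement, velocity
    xs, vs = amp * np.sin(w * t), amp * w * np.cos(w * t)   # seat motion and velocity
    force = -k * (x - xs) - c * (v - vs)                    # film + o-ring reaction
    return [v, force / m]

sol = solve_ivp(rhs, (0.0, 0.2), [0.0, 0.0], max_step=1e-4)
ring_response = sol.y[0]
# if the ring tracks the seat closely, the sealing gap (x - xs) stays small,
# which is what limits leakage under vibration
```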

Relevance:

100.00%

Publisher:

Abstract:

This article shows the social importance of the subsistence minimum in Georgia and presents the methodology for its calculation. We propose ways of improving the calculation of the subsistence minimum in Georgia and of extending it to other developing countries. The weights of food and non-food expenditures in the subsistence minimum basket are essential in these calculations. The daily cost of the minimum food basket has been calculated as well. Average consumer expenditure on food and the share of other expenditures are examined over time. Our methodology for calculating the subsistence minimum is applied to the case of Georgia; however, it can be used for similar purposes with data from other developing countries where social stability has been achieved and social inequalities need to be addressed. ACM Computing Classification System (1998): H.5.3, J.1, J.4, G.3.
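
The core arithmetic of such a methodology is usually to cost the minimum food basket and divide by the weight of food in total subsistence spending, so that non-food needs are imputed; a sketch with made-up figures (not the Georgian data):

```python
# hypothetical minimum food basket: (item, kg per day, price per kg in GEL)
food_basket = [
    ("bread",    0.250, 2.40),
    ("potatoes", 0.200, 1.10),
    ("milk",     0.300, 3.20),
    ("beans",    0.050, 4.00),
]

food_share = 0.70          # assumed weight of food in total subsistence spending
days_per_month = 30

daily_food_cost = sum(qty * price for _, qty, price in food_basket)
monthly_food_cost = daily_food_cost * days_per_month
subsistence_minimum = monthly_food_cost / food_share   # food plus imputed non-food

print(f"daily food basket:   {daily_food_cost:.2f} GEL")
print(f"subsistence minimum: {subsistence_minimum:.2f} GEL per month")
```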

Relevance:

100.00%

Publisher:

Abstract:

In this study, the authors propose simple methods to evaluate the achievable rates and outage probability of a cognitive radio (CR) link that take into account imperfect spectrum sensing. In the considered system, the CR transmitter and receiver correlatively sense and dynamically exploit the spectrum pool via dynamic frequency hopping. Under imperfect spectrum sensing, false alarms and missed detections occur, which cause impulsive interference arising from collisions due to the simultaneous spectrum access of primary and cognitive users. That makes it very challenging to evaluate the achievable rates. By first examining the static link, where the channel is assumed to be constant over time, they show that the achievable rate using a Gaussian input can be calculated accurately through a simple series representation. In the second part of this study, they extend the calculation of the achievable rate to wireless fading environments. To take into account the effect of fading, they introduce a piecewise-linear curve-fitting method to approximate the instantaneous achievable rate curve as a combination of linear segments. It is then demonstrated that the ergodic achievable rate in fast fading and the outage probability in slow fading can be calculated to any given accuracy level.
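
The curve-fitting idea can be sketched directly: approximate C(γ) = log2(1 + γ) by straight-line segments anchored on the exact curve and average the approximation over fading samples. The breakpoints and the Rayleigh-fading assumption below are illustrative, not the interference-limited model of the paper.

```python
import numpy as np

def piecewise_linear_rate(snr, breakpoints):
    """Approximate C(snr) = log2(1 + snr) by straight-line segments whose
    endpoints lie on the exact curve at the given SNR breakpoints.
    np.interp clamps beyond the outermost breakpoints, so keep the range wide."""
    bp = np.asarray(breakpoints)
    cap = np.log2(1.0 + bp)
    return np.interp(snr, bp, cap)

# ergodic rate over Rayleigh fading at 10 dB average SNR (Monte Carlo check)
rng = np.random.default_rng(1)
avg_snr = 10.0 ** (10.0 / 10.0)
snr_samples = rng.exponential(avg_snr, size=200_000)   # |h|^2 is exponential

breakpoints = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0]
approx = piecewise_linear_rate(snr_samples, breakpoints).mean()
exact = np.log2(1.0 + snr_samples).mean()
print(f"ergodic rate: exact {exact:.3f} bit/s/Hz, piecewise-linear {approx:.3f}")
```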

Relevance:

100.00%

Publisher:

Abstract:

Aït-Sahalia (2002) introduced a method to estimate transitional probability densities of diffusion processes by means of Hermite expansions with coefficients determined by means of Taylor series. This note describes a numerical procedure to find these coefficients based on the calculation of moments. One advantage of this procedure is that it can be used effectively when the mathematical operations required to find closed-form expressions for these coefficients are otherwise infeasible.
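
The moment-based route can be illustrated with a Gram-Charlier-type Hermite series: simulate the transition, standardize it, estimate E[He_j(Z)] from sample moments and rebuild an approximate density. The sketch below does this for an Ornstein-Uhlenbeck process; it is a simplified stand-in for the Aït-Sahalia expansion, not a reimplementation of it.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(0)

# simulate transitions of an Ornstein-Uhlenbeck process over a horizon dt
kappa, theta, sigma, x0 = 2.0, 0.1, 0.3, 0.0
dt, steps, n = 0.25, 50, 200_000
h = dt / steps
x = np.full(n, x0)
for _ in range(steps):
    x = x + kappa * (theta - x) * h + sigma * math.sqrt(h) * rng.standard_normal(n)

# standardize the simulated transition and estimate the Hermite coefficients
# c_j = E[He_j(Z)] / j! from sample moments (probabilists' Hermite polynomials)
z = (x - x.mean()) / x.std()
order = 6
coeffs = np.array([He.hermeval(z, np.eye(order + 1)[j]).mean() / math.factorial(j)
                   for j in range(order + 1)])

def density(z_grid):
    """Approximate standardized transition density: phi(z) * sum_j c_j He_j(z)."""
    phi = np.exp(-0.5 * z_grid**2) / math.sqrt(2 * math.pi)
    return phi * He.hermeval(z_grid, coeffs)

grid = np.linspace(-4, 4, 81)
approx_pdf = density(grid)
```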

Relevance:

100.00%

Publisher:

Abstract:

Housing affordability is gaining increasing prominence in the Australian socioeconomic landscape, despite strong economic growth and prosperity, and it is a major consideration for any new development. It is, however, multi-dimensional, complex and interwoven. One factor widely held to impact housing affordability is holding costs. Although holding costs are only one contributor, the nature and extent of their impact requires clarification; they are certainly more multifarious than a simple calculation of the interest or opportunity cost of holding land. For example, preliminary analysis suggests that even small shifts in the regulatory assessment period can significantly affect housing affordability. Other costs associated with “holding” also impact housing affordability, although these costs cannot always be easily identified. Nevertheless, the real impact is ultimately felt by those who can least afford it: new home buyers, who can relatively easily be pushed into the realms of un-affordability.
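
A back-of-the-envelope version of the interest/opportunity-cost component of holding costs, and its sensitivity to the length of the regulatory assessment period, can be written down directly; all figures below are purely illustrative.

```python
def holding_cost(land_value, annual_rate, months_held):
    """Compound interest / opportunity cost of holding land prior to sale."""
    return land_value * ((1 + annual_rate) ** (months_held / 12.0) - 1)

land_value = 250_000      # hypothetical land value per lot, $
annual_rate = 0.07        # hypothetical cost of capital

# sensitivity of the holding cost per lot to the assessment period length
for months in (6, 12, 18, 24):
    extra = holding_cost(land_value, annual_rate, months)
    print(f"{months:2d} months held: holding cost ~ ${extra:,.0f}")
```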