894 results for Numerical approximation and analysis
Abstract:
Results from aircraft and surface observations provided evidence for the existence of mesoscale circulations over the Boreal Ecosystem-Atmosphere Study (BOREAS) domain. Using an integrated approach that combined analytical modeling, numerical modeling, and data analysis, we found that mesoscale circulations make substantial contributions to the total heat budget over the BOREAS domain. This effect is largest when the synoptic flow is relatively weak, yet it persists under less favorable conditions, as shown by the case study presented here. While further analysis is warranted to document this effect, the existence of mesoscale flow is not surprising, since it is related to the presence of landscape patches, including lakes, whose size is on the order of the local Rossby radius and which exhibit spatial differences in maximum sensible heat flux of about 300 W m−2. We have also analyzed the vertical temperature profile simulated in our case study, as well as high-resolution soundings, and found vertical profiles of temperature change above the boundary layer height that we attribute in part to mesoscale contributions. We conclude that in regions with organized landscapes, such as BOREAS, even with relatively strong synoptic winds, dynamical scaling criteria should be used to assess whether mesoscale effects should be parameterized or explicitly resolved in numerical models of the atmosphere.
Abstract:
Geomagnetic activity has long been known to exhibit approximately 27-day periodicity, resulting from solar wind structures repeating each solar rotation. Thus a very simple near-Earth solar wind forecast is 27-day persistence, wherein the near-Earth solar wind conditions today are assumed to be identical to those 27 days previously. Effective use of such a persistence model as a forecast tool, however, requires its performance and uncertainty to be fully characterized. The first half of this study determines which solar wind parameters can be reliably forecast by persistence and how the forecast skill varies with the solar cycle. The second half of the study shows how persistence can provide a useful benchmark for more sophisticated forecast schemes, namely physics-based numerical models. Point-by-point assessment methods, such as correlation and mean-square error, find persistence skill comparable to numerical models during solar minimum, despite the 27-day lead time of persistence forecasts, versus 2–5 days for numerical schemes. At solar maximum, however, the dynamic nature of the corona means 27-day persistence is no longer a good approximation, and skill scores suggest persistence is out-performed by numerical models for almost all solar wind parameters. But point-by-point assessment techniques are not always a reliable indicator of usefulness as a forecast tool. An event-based assessment method, which focuses on key solar wind structures, finds persistence to be the most valuable forecast throughout the solar cycle. This reiterates the fact that the means of assessing the “best” forecast model must be specifically tailored to its intended use.
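A 27-day persistence forecast of this kind is straightforward to construct and score. The sketch below is illustrative only: it uses a synthetic daily solar wind speed series rather than real observations, and simply demonstrates the point-by-point assessment mentioned in the abstract (correlation and mean-square error between the persistence forecast and the "observed" values).

```python
# Illustrative sketch (not the paper's code): a 27-day persistence forecast of a
# daily-averaged solar wind parameter, scored point-by-point with correlation and
# mean-square error. The data here are synthetic; in practice one would use
# daily-averaged near-Earth solar wind observations.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observed" daily solar wind speed: a recurrent 27-day structure
# plus transient noise, loosely mimicking corotating streams.
days = np.arange(2000)
observed = 400 + 80 * np.sin(2 * np.pi * days / 27.0) + 30 * rng.normal(size=days.size)

lead = 27                          # persistence lead time in days
forecast = observed[:-lead]        # the value 27 days earlier ...
target = observed[lead:]           # ... used as the forecast for today

mse = np.mean((forecast - target) ** 2)
corr = np.corrcoef(forecast, target)[0, 1]
print(f"27-day persistence: MSE = {mse:.1f} (km/s)^2, correlation = {corr:.2f}")
```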
Abstract:
In this paper we develop and apply methods for the spectral analysis of non-selfadjoint tridiagonal infinite and finite random matrices, and for the spectral analysis of analogous deterministic matrices which are pseudo-ergodic in the sense of E. B. Davies (Commun. Math. Phys. 216 (2001), 687–704). As a major application to illustrate our methods we focus on the “hopping sign model” introduced by J. Feinberg and A. Zee (Phys. Rev. E 59 (1999), 6433–6443), in which the main objects of study are random tridiagonal matrices which have zeros on the main diagonal and random ±1's as the other entries. We explore the relationship between spectral sets in the finite and infinite matrix cases, and between the semi-infinite and bi-infinite matrix cases, for example showing that the numerical range and p-norm ε-pseudospectra (ε > 0, p ∈ [1, ∞]) of the random finite matrices converge almost surely to their infinite matrix counterparts, and that the finite matrix spectra are contained in the infinite matrix spectrum Σ. We also propose a sequence of inclusion sets for Σ which we show is convergent to Σ, with the nth element of the sequence computable by calculating smallest singular values of (large numbers of) n×n matrices. We propose similar convergent approximations for the 2-norm ε-pseudospectra of the infinite random matrices, these approximations sandwiching the infinite matrix pseudospectra from above and below.
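As a rough illustration of the kind of computation involved (not the authors' algorithm or inclusion sets), the following sketch builds a finite section of the hopping sign model, zeros on the diagonal and random ±1 off-diagonal entries, and probes its ε-pseudospectrum by evaluating the smallest singular value of A − zI on a grid of points z. The matrix size, grid, and tolerance are arbitrary choices made here for illustration.

```python
# Minimal numerical sketch: a point z belongs to the eps-pseudospectrum of a
# finite n x n section A when the smallest singular value of (A - z I) is < eps.
import numpy as np

def hopping_sign_matrix(n, rng):
    """Random tridiagonal matrix: zeros on the diagonal, random +/-1 off-diagonals."""
    sub = rng.choice([-1.0, 1.0], size=n - 1)
    sup = rng.choice([-1.0, 1.0], size=n - 1)
    return np.diag(sub, -1) + np.diag(sup, 1)

def smallest_singular_value(A, z):
    """Smallest singular value of A - z*I."""
    n = A.shape[0]
    return np.linalg.svd(A - z * np.eye(n), compute_uv=False)[-1]

rng = np.random.default_rng(1)
A = hopping_sign_matrix(100, rng)

# Scan a small grid in the complex plane and flag pseudospectral points.
eps = 1e-2
xs = np.linspace(-2.5, 2.5, 31)
ys = np.linspace(-2.5, 2.5, 31)
in_pseudospectrum = [
    (x, y) for x in xs for y in ys
    if smallest_singular_value(A, complex(x, y)) < eps
]
print(f"{len(in_pseudospectrum)} grid points lie in the {eps}-pseudospectrum")
```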
Abstract:
The quantification of uncertainty is an increasingly popular topic, with clear importance for climate change policy. However, uncertainty assessments are open to a range of interpretations, each of which may lead to a different policy recommendation. In the EQUIP project researchers from the UK climate modelling, statistical modelling, and impacts communities worked together on ‘end-to-end’ uncertainty assessments of climate change and its impacts. Here, we use an experiment in peer review amongst project members to assess variation in the assessment of uncertainties between EQUIP researchers. We find overall agreement on key sources of uncertainty but a large variation in the assessment of the methods used for uncertainty assessment. Results show that communication aimed at specialists makes the methods used harder to assess. There is also evidence of individual bias, which is partially attributable to disciplinary backgrounds. However, varying views on the methods used to quantify uncertainty did not preclude consensus on the consequential results produced using those methods. Based on our analysis, we make recommendations for developing and presenting statements on climate and its impacts. These include the use of a common uncertainty reporting format in order to make assumptions clear; presentation of results in terms of processes and trade-offs rather than only numerical ranges; and reporting multiple assessments of uncertainty in order to elucidate a more complete picture of impacts and their uncertainties. This in turn implies research should be done by teams of people with a range of backgrounds and time for interaction and discussion, with fewer but more comprehensive outputs in which the range of opinions is recorded.
Abstract:
To develop targeted methods for treating bacterial infections, the feasibility of using glycoside derivatives of the antibacterial compound L-α-aminoethylphosphonic acid (L-AEP) has been investigated. These derivatives are hypothesized to be taken up by bacterial cells via carbohydrate uptake mechanisms and then hydrolysed in situ by bacterial glycosidase enzymes to selectively afford L-AEP. Therefore, the synthesis and analysis of ten glycoside derivatives of L-AEP, for selective targeting of specific bacteria, is reported. The ability of these derivatives to inhibit the growth of a panel of Gram-negative bacteria in two different media is discussed. β-Glycosides (12a) and (12b), which contained L-AEP linked to glucose or galactose via a carbamate linkage, inhibited growth of a range of organisms, with the best MICs being <0.75 mg/ml; for most species the inhibition was closely related to the hydrolysis of the equivalent chromogenic glycosides. This suggests that for (12a) and (12b), release of L-AEP was indeed dependent upon the presence of the respective glycosidase enzyme.
Abstract:
Purpose - The purpose of this paper is to develop a novel unstructured simulation approach for injection molding processes described by the Hele-Shaw model. Design/methodology/approach - The scheme involves dual dynamic meshes with active and inactive cells determined from an initial background pointset. The quasi-static pressure solution in each timestep for this evolving unstructured mesh system is approximated using a control volume finite element method formulation coupled to a corresponding modified volume of fluid method. The flow is considered to be isothermal and non-Newtonian. Findings - Supporting numerical tests and performance studies for polystyrene described by Carreau, Cross, Ellis and Power-law fluid models are conducted. Results for the present method are shown to be comparable to those from other methods for both Newtonian fluid and polystyrene fluid injected in different mold geometries. Research limitations/implications - With respect to the methodology, the background pointset infers a mesh that is dynamically reconstructed here, and there are a number of efficiency issues and improvements that would be relevant to industrial applications. For instance, one can use the pointset to construct special bases and invoke a so-called "meshless" scheme using the basis. This would require some interesting strategies to deal with the dynamic point enrichment of the moving front that could benefit from the present front treatment strategy. There are also issues related to mass conservation and fill-time errors that might be addressed by introducing suitable projections. The general question of "rate of convergence" of these schemes requires analysis. Numerical results here suggest first-order accuracy and are consistent with the approximations made, but theoretical results are not available yet for these methods. Originality/value - This novel unstructured simulation approach involves dual meshes with active and inactive cells determined from an initial background pointset: local active dual patches are constructed "on-the-fly" for each "active point" to form a dynamic virtual mesh of active elements that evolves with the moving interface.
Abstract:
The critical behavior of the stochastic susceptible-infected-recovered model on a square lattice is obtained by numerical simulations and finite-size scaling. The order parameter as well as the distribution in the number of recovered individuals is determined as a function of the infection rate for several values of the system size. The analysis around criticality is obtained by exploring the close relationship between the present model and standard percolation theory. The quantity UP, equal to the order parameter P multiplied by the ratio U between the second moment and the squared first moment of the size distribution, is shown to have, for a square system, a universal value 1.0167(1) that is the same for site and bond percolation, further confirming that the SIR model is also in the percolation class.
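For readers unfamiliar with the model, the sketch below simulates a simplified stochastic SIR process on a square lattice. The update rule, lattice size, and boundary conditions are illustrative assumptions rather than the authors' exact protocol, and no finite-size-scaling analysis or computation of the ratio U is attempted.

```python
# Schematic Monte Carlo sketch of a stochastic SIR model on a square lattice.
# States: 0 = susceptible, 1 = infected, 2 = recovered. At each step a random
# infected site is picked; with probability p it attempts to infect a randomly
# chosen neighbour (succeeding if that neighbour is susceptible), otherwise it
# recovers. The final density of recovered sites plays the role of the order
# parameter.
import numpy as np

def run_sir(L, p, rng):
    lattice = np.zeros((L, L), dtype=np.int8)
    lattice[L // 2, L // 2] = 1                       # single seed infection
    infected = [(L // 2, L // 2)]
    while infected:
        idx = rng.integers(len(infected))
        i, j = infected[idx]
        if rng.random() < p:                          # attempted infection
            di, dj = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.integers(4)]
            ni, nj = (i + di) % L, (j + dj) % L       # periodic boundaries
            if lattice[ni, nj] == 0:
                lattice[ni, nj] = 1
                infected.append((ni, nj))
        else:                                         # recovery
            lattice[i, j] = 2
            infected[idx] = infected[-1]              # swap-remove from the list
            infected.pop()
    return np.mean(lattice == 2)                      # fraction of recovered sites

rng = np.random.default_rng(2)
for p in (0.4, 0.5, 0.6):
    rho = np.mean([run_sir(64, p, rng) for _ in range(20)])
    print(f"infection probability {p}: mean recovered fraction {rho:.3f}")
```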
Abstract:
The angular distributions for elastic scattering and breakup of halo nuclei are analysed using a near-side/far-side decomposition within the framework of the dynamical eikonal approximation. This analysis is performed for ¹¹Be impinging on Pb at 69 MeV/nucleon. These distributions exhibit very similar features. In particular, they are both near-side dominated, as expected for Coulomb-dominated reactions. The general shape of these distributions is sensitive mostly to the projectile-target interactions, but is also affected by the extension of the halo. This suggests that the elastic scattering is not affected by a loss of flux towards the breakup channel.
Abstract:
We discuss the generalized eigenvalue problem for computing energies and matrix elements in lattice gauge theory, including effective theories such as HQET. It is analyzed how the extracted effective energies and matrix elements converge when the time separations are made large. This suggests a particularly efficient application of the method for which we can prove that corrections vanish asymptotically as exp(−(E_{N+1} − E_n)t). The gap E_{N+1} − E_n can be made large by increasing the number N of interpolating fields in the correlation matrix. We also show how excited state matrix elements can be extracted such that contaminations from all other states disappear exponentially in time. As a demonstration we present numerical results for the extraction of ground state and excited B-meson masses and decay constants in the static approximation and to order 1/m_b in HQET.
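The generalized eigenvalue method itself is easy to demonstrate on synthetic data. In the toy sketch below, a correlation matrix is built from an invented spectrum and invented overlaps, the GEVP C(t) v_n = λ_n(t, t0) C(t0) v_n is solved with SciPy, and effective energies are read off from eigenvalue ratios; every number here is made up for illustration and has nothing to do with the lattice data of the paper.

```python
# Toy sketch of the generalized eigenvalue method (GEVP) on a synthetic
# correlation matrix, not the authors' lattice code.
import numpy as np
from scipy.linalg import eigh

energies = np.array([0.5, 0.9, 1.4, 2.0])            # invented state energies E_n
rng = np.random.default_rng(3)
psi = rng.normal(size=(3, energies.size))             # invented overlaps of 3 fields

def corr(t):
    """Correlation matrix C_ij(t) = sum_n psi_in psi_jn exp(-E_n t)."""
    return (psi * np.exp(-energies * t)) @ psi.T

t0 = 1
for t in range(2, 6):
    # Solve C(t) v = lambda C(t0) v; eigenvalues sorted in descending order so
    # that index 0 corresponds to the ground state.
    lam_t = eigh(corr(t), corr(t0), eigvals_only=True)[::-1]
    lam_t1 = eigh(corr(t + 1), corr(t0), eigvals_only=True)[::-1]
    e_eff = np.log(lam_t / lam_t1)                     # effective energies E_n^eff(t)
    print(f"t = {t}: effective energies {np.round(e_eff, 3)}")
```

As t grows, the printed effective energies approach the three lowest invented energies, which is the convergence behaviour the abstract analyses.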
Abstract:
For a fixed family F of graphs, an F-packing in a graph G is a set of pairwise vertex-disjoint subgraphs of G, each isomorphic to an element of F. Finding an F-packing that maximizes the number of covered edges is a natural generalization of the maximum matching problem, which is just F = {K_2}. In this paper we provide new approximation algorithms and hardness results for the K_r-packing problem, where K_r = {K_2, K_3, ..., K_r}. We show that already for r = 3 the K_r-packing problem is APX-complete, and, in fact, we show that it remains so even for graphs with maximum degree 4. On the positive side, we give an approximation algorithm with approximation ratio at most 2 for every fixed r. For r = 3, 4, 5 we obtain better approximations. For r = 3 we obtain a simple 3/2-approximation, achieving a known ratio that follows from a more involved algorithm of Halldorsson. For r = 4, we obtain a (3/2 + ε)-approximation, and for r = 5 we obtain a (25/14 + ε)-approximation.
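To make the objective concrete, here is a naive greedy construction of a {K_2, K_3}-packing that counts covered edges on a toy graph. It is emphatically not the approximation algorithm of the paper and carries no ratio guarantee; it only illustrates the notion of vertex-disjoint edges and triangles covering edges.

```python
# Naive greedy illustration of the {K_2, K_3}-packing objective: pick vertex-
# disjoint triangles first (each covers 3 edges), then vertex-disjoint edges
# (each covers 1 edge), and report the number of covered edges.
import itertools

def greedy_k3_k2_packing(vertices, edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    used, covered = set(), 0
    # Pass 1: take any triangle whose vertices are all still free.
    for u, v, w in itertools.combinations(vertices, 3):
        if {u, v, w} & used:
            continue
        if v in adj[u] and w in adj[u] and w in adj[v]:
            used |= {u, v, w}
            covered += 3
    # Pass 2: take any remaining edge with both endpoints free.
    for u, v in edges:
        if u not in used and v not in used:
            used |= {u, v}
            covered += 1
    return covered

# Toy graph: two vertex-disjoint triangles plus a pendant edge.
V = range(7)
E = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (5, 6)]
print("edges covered by the greedy packing:", greedy_k3_k2_packing(V, E))
```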
Abstract:
This work aims at combining the postulates of Chaos theory and the classification and predictive capability of Artificial Neural Networks in the field of financial time series prediction. Chaos theory provides valuable qualitative and quantitative tools for deciding on the predictability of a chaotic system. Quantitative measurements based on Chaos theory are used to decide a priori whether a time series, or a portion of a time series, is predictable, while qualitative tools based on Chaos theory are used to provide further observations and analysis on the predictability in cases where the measurements give negative answers. Phase space reconstruction is achieved by time delay embedding, resulting in multiple embedded vectors. The suggested cognitive approach is inspired by the capability of some chartists to predict the direction of an index by looking at the price time series. Thus, in this work, the calculation of the embedding dimension and the separation in Takens' embedding theorem for phase space reconstruction is not limited to False Nearest Neighbor, Differential Entropy, or any other specific method; rather, this work is interested in all embedding dimensions and separations, which are regarded as the different ways in which different chartists look at a time series, based on their expectations. Prior to the prediction, the embedded vectors of the phase space are classified with Fuzzy-ART; then, for each class, a back-propagation Neural Network is trained to predict the last element of each vector, with all previous elements of the vector used as features.
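A minimal sketch of the time-delay embedding step is given below, under the assumption that the series is a scalar sequence and that several (m, tau) pairs are explored as described. The Fuzzy-ART classification and back-propagation training stages are not reproduced, and a logistic-map series stands in for a price index.

```python
# Time-delay (Takens) embedding: a scalar series x(t) is turned into vectors
# [x(t), x(t + tau), ..., x(t + (m - 1) * tau)] for an embedding dimension m
# and separation tau. As in the abstract, the last element of each vector is
# the prediction target and the preceding elements are the features.
import numpy as np

def delay_embed(series, m, tau):
    """Return the matrix of delay vectors for embedding dimension m and delay tau."""
    n = len(series) - (m - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (m, tau)")
    return np.column_stack([series[i * tau : i * tau + n] for i in range(m)])

# Toy chaotic series (logistic map) standing in for a price index.
x = np.empty(500)
x[0] = 0.4
for i in range(1, x.size):
    x[i] = 3.9 * x[i - 1] * (1 - x[i - 1])

# Several (m, tau) pairs, treated as different "ways of looking" at the series.
for m, tau in [(3, 1), (4, 2), (5, 3)]:
    vectors = delay_embed(x, m, tau)
    features, target = vectors[:, :-1], vectors[:, -1]
    print(f"m = {m}, tau = {tau}: {vectors.shape[0]} embedded vectors, "
          f"{features.shape[1]} features per vector")
```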
Abstract:
This doctoral dissertation, based on empirical data collected from 50 mothers in Brazil (n = 30) and the USA (n = 20), aims to provide a better understanding of food waste in the low-income context. The thesis comprises three articles which, combined, meet the objectives of identifying the antecedents of food waste and outlining a typology of food wasters. Additionally, it contextualizes global food waste, and one chapter proposes a future agenda for consumer-level food waste studies. Household food waste, as a research topic, offers an opportunity for academic work in marketing to meet criteria of social, managerial, and public policy relevance. The first study describes the factors behind the so-called "food waste paradox", identifying and analysing food waste in budget-constrained families while presenting the food consumption itinerary and the antecedents of waste. This first article, based on data collected from Brazilian families, also illustrates the role of cultural norms in increasing waste, such as preparing abundant food to show hospitality or to avoid being perceived as poor. In the second article, a grounded theory highlights the role of affection and abundance in household food waste. To enrich the theoretical contributions, this second study presents a framework with six dimensions of food waste (1. Affection; 2. Abundance; 3. Multiplicity of choices; 4. Convenience; 5. Procrastination; 6. Unplanned routine). Based on empirical data collected from American families, this study offers new explanations, such as how an abundant stock of comfort foods, a way both to boost one's own positive emotions and to show affection to children, can generate more food waste. In short, the second article identifies a negative consequence of affection and food abundance in the family context and presents a theoretically relevant framework. Finally, the third article, drawing on the data set of the previous studies and on new data collected from ten families, proposes a behavioural typology of food waste, an original contribution to consumer behaviour studies. The identification of five types of food wasters, (1) Affectionate mothers; (2) Abundant cooks; (3) Leftover wasters; (4) Procrastinators; (5) Versatile mothers, contributes to theory, while potential implications for nutrition educators and public agents are explored from the results. To explain the characteristics of each of the five identified types, aspects of the Brazilian and North American samples, which show similarities in food waste behaviour, are compared. Perceived waste levels by country are also compared. In sum, the findings of the three articles can help maximize the results of awareness campaigns aimed at mitigating food waste and offer ideas for retailers interested in sustainability initiatives. More broadly, the results presented can also be applied to strengthen hunger-relief programmes and nutrition education projects run by the public sector or NGOs.
Abstract:
The aim of the present study was to compare heart rate variability (HRV) at rest and during exercise using a temporal series obtained with the Polar S810i monitor and a signal from a LYNX® signal conditioner (BIO EMG 1000 model) with a channel configured for the acquisition of ECG signals. Fifteen healthy subjects aged 20.9 ± 1.4 years were analyzed. The subjects remained at rest for 20 min and performed exercise for another 20 min, with the workload selected to achieve 60% of submaximal heart rate. RR series were obtained for each individual with the Polar S810i instrument and from the ECG acquired with the biological signal conditioner. The HRV indices (rMSSD, pNN50, LFnu, HFnu, and LF/HF) were calculated after signal processing and analysis. The unpaired Student t-test and the intraclass correlation coefficient were used for data analysis. No statistically significant differences were observed when comparing the values obtained with the two devices for HRV at rest or during exercise. The intraclass correlation coefficient demonstrated satisfactory correlation between the values obtained by the devices at rest (pNN50 = 0.994; rMSSD = 0.995; LFnu = 0.978; HFnu = 0.978; LF/HF = 0.982) and during exercise (pNN50 = 0.869; rMSSD = 0.929; LFnu = 0.973; HFnu = 0.973; LF/HF = 0.942). HRV values calculated from the temporal series obtained with the Polar S810i instrument appear to be as reliable as those obtained by processing the ECG signal captured with a signal conditioner.
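As a hedged illustration of two of the indices named above (not the study's software), the snippet below computes rMSSD and pNN50 from a short synthetic RR-interval series in milliseconds; the frequency-domain indices (LFnu, HFnu, LF/HF) require a spectral estimate of the RR tachogram and are omitted here.

```python
# Time-domain HRV indices from an RR-interval series (values in milliseconds).
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences, in ms."""
    diffs = np.diff(np.asarray(rr_ms, dtype=float))
    return np.sqrt(np.mean(diffs ** 2))

def pnn50(rr_ms):
    """Percentage of successive RR-interval differences larger than 50 ms."""
    diffs = np.abs(np.diff(np.asarray(rr_ms, dtype=float)))
    return 100.0 * np.mean(diffs > 50)

# Example with a short synthetic RR series (ms), for illustration only.
rr = [812, 790, 835, 860, 798, 775, 820, 845, 810, 795]
print(f"rMSSD = {rmssd(rr):.1f} ms, pNN50 = {pnn50(rr):.1f} %")
```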
Abstract:
An enantioselective high-performance liquid chromatographic method for the analysis of carvedilol in plasma and urine was developed and validated using (-)-menthyl chloroformate (MCF) as a derivatizing reagent. Chloroform was used for extraction, and analysis was performed by HPLC on a C18 column with a fluorescence detector. The quantitation limit was 0.25 ng/ml for S(-)-carvedilol in plasma and 0.5 ng/ml for R(+)-carvedilol in plasma and for both enantiomers in urine. The method was applied to the study of enantioselectivity in the pharmacokinetics of carvedilol administered in a multiple-dose regimen (25 mg/12 h) to a hypertensive elderly female patient. The data obtained demonstrated higher plasma levels for R(+)-carvedilol (AUCss 75.64 vs 37.29 ng/ml). The enantiomeric ratio R(+)/S(-) was 2.03 for plasma and 1.49 for urine (Ae0-12 17.4 vs 11.7 pg).