Abstract:
In CoDaWork’05, we presented an application of discriminant function analysis (DFA) to four different compositional datasets and modelled the first canonical variable with a segmented regression model, based solely on an observation about the scatter plots. In this paper, multiple linear regressions are applied to different datasets to confirm the validity of the proposed model. In addition to dating unknown tephras by calibration, as discussed previously, we propose a second method that maps an unknown tephra either onto a sample of the reference set or onto a missing sample between consecutive reference samples. The application of these methodologies is demonstrated with both simulated and real datasets. The new methodology provides an alternative approach that is more acceptable to geologists, whose focus is on associating an unknown tephra with the relevant eruptive event rather than estimating its age. Keywords: Tephrochronology; Segmented regression
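The segmented regression model referred to above can be sketched as a grid search over candidate breakpoints. The one-breakpoint form, the synthetic data and the breakpoint grid below are illustrative assumptions, not the paper's actual tephra datasets:

```python
import numpy as np

def fit_segmented(x, y, candidates):
    """Grid-search fit of a one-breakpoint segmented linear regression,
    y = b0 + b1*x + b2*max(x - c, 0), keeping the candidate breakpoint c
    with the smallest residual sum of squares."""
    best = None
    for c in candidates:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - c, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        if best is None or rss < best[2]:
            best = (c, beta, rss)
    return best

# Synthetic data whose slope changes from 0.5 to 2.5 at x = 5
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
y = 1.0 + 0.5 * x + 2.0 * np.maximum(x - 5.0, 0.0) + rng.normal(0.0, 0.1, x.size)
c_hat, beta_hat, rss = fit_segmented(x, y, np.linspace(1.0, 9.0, 81))
```

With the low noise level used here, the estimated breakpoint `c_hat` lands on the grid point nearest the true change at x = 5.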
Abstract:
This paper describes the basis of citation auctions as a new approach to selecting scientific papers for publication. Our main idea is to use an auction to select papers for publication in which, differently from the state of the art, bids consist of the number of citations a scientist expects the paper to receive if it is published. Hence, the citation auction is the selection process itself, and no reviewers are involved. The benefits of the proposed approach are twofold. First, the cost of refereeing is either totally eliminated or significantly reduced, because the citation-auction process does not require prior understanding of a paper's content to judge the quality of its contribution. Moreover, because the method does not prejudge the content of a paper, it increases the openness of publication to new ideas. Second, scientists become much more committed to the quality of their papers, paying close attention to distributing and explaining their papers in detail to maximize the number of citations they receive. Sample analyses of the number of citations collected by papers published in 1999-2004 for one journal, and in 2003-2005 for a series of conferences (in a totally different discipline), obtained via Google Scholar, are provided. Finally, a simple simulation of an auction is given to outline the behaviour of the citation-auction approach.
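A minimal sketch of such an auction simulation, assuming the simplest plausible selection rule (the k highest citation bids win a publication slot); the bid values and the `run_citation_auction` helper are invented for illustration:

```python
import random

def run_citation_auction(bids, slots):
    """Accept the `slots` papers with the highest citation bids.

    Ties are broken by submission order. Returns accepted paper indices."""
    ranked = sorted(range(len(bids)), key=lambda i: (-bids[i], i))
    return sorted(ranked[:slots])

random.seed(1)
bids = [random.randint(0, 50) for _ in range(10)]   # expected citations bid per paper
accepted = run_citation_auction(bids, slots=3)
```

A fuller simulation would also model the penalty for papers that fail to attract the citations they bid, which is what commits authors to realistic bids.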
Abstract:
A simple extended finite field nuclear relaxation procedure for calculating vibrational contributions to degenerate four-wave mixing (also known as the intensity-dependent refractive index) is presented. As a by-product, one also obtains the static vibrationally averaged linear polarizability, as well as the first and second hyperpolarizabilities. The methodology is validated by illustrative calculations on the water molecule. Further possible extensions are suggested.
Abstract:
In the static field limit, the vibrational hyperpolarizability consists of two contributions, due to (1) the shift in the equilibrium geometry (known as nuclear relaxation) and (2) the change in the shape of the potential energy surface (known as curvature). Simple finite field methods have previously been developed for evaluating these static field contributions, and also for determining the effect of nuclear relaxation on dynamic vibrational hyperpolarizabilities in the infinite frequency approximation. In this paper the finite field approach is extended to include, within the infinite frequency approximation, the effect of curvature on the major dynamic nonlinear optical processes.
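The finite field idea underlying the two abstracts above can be illustrated in one dimension: electrical properties are extracted as numerical derivatives of the energy with respect to an applied field. The model energy, its coefficients and the field step below are invented for the sketch; a real calculation would differentiate ab initio energies:

```python
def finite_field_props(E, h):
    """Extract mu, alpha, beta, gamma from an energy function E(field) by
    central finite differences, assuming the 1-D expansion
    E(F) = E(0) - mu*F - (1/2)*alpha*F**2 - (1/6)*beta*F**3 - (1/24)*gamma*F**4."""
    e0 = E(0.0)
    ep, em = E(h), E(-h)
    e2p, e2m = E(2 * h), E(-2 * h)
    mu = -(ep - em) / (2 * h)                              # first derivative
    alpha = -(ep - 2 * e0 + em) / h**2                     # second derivative
    beta = -(e2p - 2 * ep + 2 * em - e2m) / (2 * h**3)     # third derivative
    gamma = -(e2p - 4 * ep + 6 * e0 - 4 * em + e2m) / h**4 # fourth derivative
    return mu, alpha, beta, gamma

# Model energy with known mu = 1.5, alpha = 9.0, beta = 12.0, gamma = 48.0
def model_energy(F):
    return 2.0 - 1.5 * F - 0.5 * 9.0 * F**2 - (12.0 / 6) * F**3 - (48.0 / 24) * F**4

mu, alpha, beta, gamma = finite_field_props(model_energy, 1e-2)
```

Because the model energy is a quartic, the third- and fourth-derivative stencils recover beta and gamma essentially exactly; mu and alpha carry the usual O(h²) truncation error.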
Abstract:
Many existing mechanical systems exhibit functionally perceptible vibratory behaviour, which becomes apparent under transient excitations. The vibrations generated normally persist after the transient (residual vibrations) and can have negative effects on the design function of the mechanism. The main objective of the method proposed in this thesis is the synthesis of motion laws that reduce residual vibrations. Additionally, the generated signals satisfy two user-defined conditions (called functional requirements). The method is based on the relationship between the frequency content of a transient signal and the residual vibration it generates, according to the damping of the system. Building on this relationship, and exploiting properties of the Fourier transform, the generation of motion laws by time-domain convolution of pulses is proposed. The resulting laws consist of concatenated segments of algebraic polynomials, which facilitates their implementation in numerical environments by means of B-spline curves.
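The key relationship between a pulse's frequency content and residual vibration can be sketched for an undamped single-frequency analogue: convolving two rectangular pulses of length T yields a profile whose spectrum vanishes at 1/T, so a resonance at that frequency receives no excitation. The time step, pulse length and 2 Hz resonance below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

dt = 1e-3
T = 0.5                               # rectangular pulse length: spectral zero at 1/T = 2 Hz
box = np.ones(int(T / dt))
profile = np.convolve(box, box)       # triangular profile: the 2 Hz zero is doubled
profile /= profile.sum() * dt         # normalise to unit area

t = np.arange(profile.size) * dt
def magnitude(f):
    """|Fourier transform| of the profile at frequency f, by direct summation."""
    return abs(np.sum(profile * np.exp(-2j * np.pi * f * t)) * dt)
```

Evaluating `magnitude(2.0)` confirms that the 2 Hz component vanishes, so a lightly damped mode at that frequency is left with essentially no residual vibration, while `magnitude(1.0)` remains of order one.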
Abstract:
AEA Technology has provided an assessment of the probability of α-mode containment failure for the Sizewell B PWR. After a preliminary review of the available methodologies, it was decided to use the probabilistic approach described in the paper, based on an extension of the methodology developed by Theofanous et al. (Nucl. Sci. Eng. 97 (1987) 259–325). The input to the assessment comprises 12 probability distributions; the bases for the quantification of these distributions are discussed. The α-mode assessment performed for the Sizewell B PWR has demonstrated the practicality of the event-tree method with input data represented by probability distributions. The assessment itself has drawn attention to a number of topics that may be plant- and sequence-dependent, and has indicated the importance of melt relocation scenarios. On the basis of current information, the α-mode failure probability following an accident that leads to core melt relocation to the lower head has been assessed for the Sizewell B PWR as a few parts in 10 000. This assessment is the first to consider elevated pressures (6 MPa and 15 MPa) besides atmospheric pressure, but the results suggest only a modest sensitivity to system pressure.
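A toy version of an event-tree assessment with distributional inputs can be sketched by Monte Carlo: each draw samples the conditional probability of every stage and multiplies them along the failure path. The three uniform stage distributions below are invented placeholders, not the 12 Sizewell B distributions:

```python
import random

def sample_failure_prob(rng, stages):
    """One Monte Carlo draw: sample each stage's conditional probability from
    its (here uniform) distribution and multiply along the failure path."""
    p = 1.0
    for lo, hi in stages:
        p *= rng.uniform(lo, hi)
    return p

rng = random.Random(42)
stages = [(0.05, 0.2), (0.1, 0.5), (0.01, 0.1)]    # placeholder stage distributions
draws = sorted(sample_failure_prob(rng, stages) for _ in range(10000))
median = draws[len(draws) // 2]                     # summary of the output distribution
```

Repeating the draws yields a full distribution for the end-state probability rather than a single point estimate, which is the practical advantage the abstract highlights.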
Abstract:
It has been generally accepted that the method of moments (MoM) variogram, which has been widely applied in soil science, requires about 100 sites at an appropriate interval apart to describe the variation adequately. This sample size is often larger than can be afforded for soil surveys of agricultural fields or contaminated sites. Furthermore, it might be a much larger sample size than is needed where the scale of variation is large. A possible alternative in such situations is the residual maximum likelihood (REML) variogram, because fewer data appear to be required. The REML method is parametric and is considered reliable where there is trend in the data, because it is based on generalized increments that filter trend out, and only the covariance parameters are estimated. Previous research has suggested that fewer data are needed to compute a reliable variogram using a maximum likelihood approach such as REML; however, the results can vary according to the nature of the spatial variation. There remain issues to examine: how many fewer data can be used, how the sampling sites should be distributed over the site of interest, and how different degrees of spatial variation affect the data requirements. The soil of four field sites of different size, physiography, parent material and soil type was sampled intensively, and MoM and REML variograms were calculated for clay content. The data were then sub-sampled to give different sample sizes and distributions of sites, and the variograms were computed again. The model parameters for the sets of variograms for each site were used for cross-validation. Predictions based on REML variograms were generally more accurate than those from MoM variograms with fewer than 100 sampling sites.
A sample size of around 50 sites at an appropriate distance apart, possibly determined from variograms of ancillary data, appears adequate to compute REML variograms for kriging soil properties for precision agriculture and contaminated sites. (C) 2007 Elsevier B.V. All rights reserved.
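The method-of-moments (Matheron) variogram that the abstract compares against REML can be sketched directly from its defining formula; the transect, the linear test field and the lag tolerance below are illustrative choices:

```python
import numpy as np

def mom_variogram(coords, values, lags, tol=0.4):
    """Method-of-moments (Matheron) semivariogram estimate:
    gamma(h) = sum over the N(h) pairs whose separation is within
    h +/- tol of (z_i - z_j)**2, divided by 2 N(h)."""
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    sq = (values[:, None] - values[None, :]) ** 2
    out = []
    for h in lags:
        mask = np.triu(np.abs(d - h) <= tol, k=1)   # each pair counted once
        n = int(mask.sum())
        out.append(sq[mask].sum() / (2 * n) if n else np.nan)
    return np.array(out)

# Deterministic check: on a linear field z = 0.1 x along a transect,
# gamma(h) = (0.1 h)**2 / 2 exactly.
x = np.arange(60.0)
z = 0.1 * x
g = mom_variogram(x[:, None], z, lags=[1.0, 5.0, 10.0])
```

The linear field is chosen because its variogram is known in closed form, making the estimator hand-checkable; real soil data would of course show a sill rather than unbounded growth.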
Abstract:
There is a growing interest in using stochastic parametrizations in numerical weather and climate prediction models. Previously, Palmer (2001) outlined the issues that give rise to the need for a stochastic parametrization and the forms such a parametrization could take. In this article a method is presented that uses a comparison between a standard-resolution version and a high-resolution version of the same model to gain information relevant for a stochastic parametrization in that model. A correction term that could be used in a stochastic parametrization is derived from the thermodynamic equations of both models. The origin of the components of this term is discussed. It is found that the component related to unresolved wave-wave interactions is important and can act to compensate for large parametrized tendencies. The correction term is not proportional to the parametrized tendency. Finally, it is explained how the correction term could be used to give information about the shape of the random distribution to be used in a stochastic parametrization. Copyright © 2009 Royal Meteorological Society
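The comparison of model versions can be caricatured in a few lines: coarse-grain a high-resolution tendency onto the low-resolution grid and take its difference from the low-resolution model's own tendency as the correction term. The fields, the block-averaging operator and the 10% underestimate below are invented for illustration:

```python
import numpy as np

def coarse_grain(field, factor):
    """Block-average a 1-D high-resolution field onto a grid `factor` times coarser."""
    return field.reshape(-1, factor).mean(axis=1)

rng = np.random.default_rng(3)
hires_tendency = rng.normal(size=512)      # stand-in for the high-resolution tendency
target = coarse_grain(hires_tendency, 8)   # what the coarse model "should" produce
lowres_tendency = 0.9 * target             # imagine the coarse model underestimates by 10%
correction = target - lowres_tendency      # samples for the stochastic term's distribution
```

The empirical distribution of `correction` is the kind of information the article says could shape the random distribution used in a stochastic parametrization.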
Abstract:
In this paper we show stability and convergence for a novel Galerkin boundary element method approach to the impedance boundary value problem for the Helmholtz equation in a half-plane with piecewise constant boundary data. This problem models, for example, outdoor sound propagation over inhomogeneous flat terrain. To achieve a good approximation with a relatively low number of degrees of freedom we employ a graded mesh with smaller elements adjacent to discontinuities in impedance, and a special set of basis functions for the Galerkin method so that, on each element, the approximation space consists of polynomials (of degree $\nu$) multiplied by traces of plane waves on the boundary. In the case where the impedance is constant outside an interval $[a,b]$, which only requires the discretization of $[a,b]$, we show theoretically and experimentally that the $L_2$ error in computing the acoustic field on $[a,b]$ is ${\cal O}(\log^{\nu+3/2}|k(b-a)| M^{-(\nu+1)})$, where $M$ is the number of degrees of freedom and $k$ is the wavenumber. This indicates that the proposed method is especially commendable for large intervals or a high wavenumber. In a final section we sketch how the same methodology extends to more general scattering problems.
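The graded mesh described above is commonly realised by a power-law grading of the nodes toward the discontinuity; the grading exponent and interval below are illustrative, not the paper's prescription:

```python
import numpy as np

def graded_mesh(a, b, m, q):
    """m + 1 mesh points on [a, b], graded toward a:
    x_j = a + (b - a) * (j / m)**q, so q > 1 clusters small elements
    next to the impedance discontinuity at a (q = 1 is uniform)."""
    j = np.arange(m + 1)
    return a + (b - a) * (j / m) ** q

x = graded_mesh(0.0, 1.0, 8, 3)
widths = np.diff(x)     # element widths grow away from the graded endpoint
```

In practice one such grading would be applied at each impedance discontinuity, with the plane-wave-enriched polynomial basis defined element by element.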
Abstract:
We propose a novel method for scoring the accuracy of protein binding site predictions – the Binding-site Distance Test (BDT) score. Recently, the Matthews Correlation Coefficient (MCC) has been used to evaluate binding site predictions, both by developers of new methods and by the assessors of the community-wide prediction experiment – CASP8. Whilst it is a rigorous scoring method, the MCC does not take into account the actual 3D distance of the predicted residues from the observed binding site. Thus, an incorrectly predicted site that is nevertheless close to the observed binding site will obtain an identical score to the same number of non-binding residues predicted at random. The MCC is also somewhat affected by the subjectivity of determining observed binding residues and the ambiguity of choosing distance cutoffs. By contrast, the BDT method produces continuous scores ranging between 0 and 1, relating to the distance between the predicted and observed residues. Residues predicted close to the binding site score higher than those more distant, providing a better reflection of the true accuracy of predictions. The CASP8 function predictions were evaluated using both the MCC and BDT methods and the scores were compared. The BDT scores were found to correlate strongly with the MCC scores, whilst being less susceptible to the subjectivity of defining binding residues. We therefore suggest that this new simple score is a potentially more robust method for future evaluations of protein-ligand binding site predictions.
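The idea of a continuous, distance-aware score can be sketched as follows; the 1/(1 + d/d0) weighting and the d0 value are invented stand-ins, not the published BDT formula:

```python
import math

def distance_score(predicted, observed, d0=4.0):
    """Average over predicted residues of 1 / (1 + d/d0), where d is the
    distance (in angstroms) to the nearest observed binding residue.
    Scores lie in (0, 1]; an exact prediction scores 1."""
    if not predicted:
        return 0.0
    total = 0.0
    for p in predicted:
        d = min(math.dist(p, o) for o in observed)
        total += 1.0 / (1.0 + d / d0)
    return total / len(predicted)

observed = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]         # toy binding-site coordinates
exact = distance_score(observed, observed)            # perfect prediction
near = distance_score([(3.0, 0.0, 0.0)], observed)    # close miss still scores well
far = distance_score([(40.0, 0.0, 0.0)], observed)    # distant miss scores near zero
```

Unlike a binary overlap measure, the near miss and the distant miss receive different scores here, which is exactly the property the abstract argues the MCC lacks.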
Abstract:
Quasi-Newton-Raphson minimization and conjugate gradient minimization have been used to solve the crystal structures of famotidine form B and capsaicin from X-ray powder diffraction data and to characterize the χ² agreement surfaces. One million quasi-Newton-Raphson minimizations found the famotidine global minimum with a frequency of ca 1 in 5000 and the capsaicin global minimum with a frequency of ca 1 in 10 000. These results, which are corroborated by conjugate gradient minimization, demonstrate the existence of numerous pathways from some of the highest points on these χ² agreement surfaces to the respective global minima, which are passable using only downhill moves. This important observation has significant ramifications for the development of improved structure determination algorithms.
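The multi-start experiment the abstract describes can be mimicked on a toy objective: run a local minimizer from many random starting points and record how often the global minimum is reached. The double-well function, plain gradient descent and all parameters below are illustrative substitutes for the χ² surfaces and the quasi-Newton-Raphson minimizer:

```python
import random

def f(x):
    return (x * x - 1.0) ** 2 + 0.3 * x      # double well; global minimum near x = -1.04

def grad(x):
    return 4.0 * x * (x * x - 1.0) + 0.3

def descend(x, lr=0.01, steps=2000):
    """Plain gradient descent: a stand-in for the quasi-Newton local minimizer."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

rng = random.Random(0)
ends = [descend(rng.uniform(-2.0, 2.0)) for _ in range(200)]
frequency = sum(e < 0 for e in ends) / len(ends)   # fraction reaching the global minimum
```

On this surface roughly half the starts reach the global minimum because the basins are comparable in size; the crystallographic surfaces in the abstract are far rougher, hence hit rates of only ca 1 in 5000.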