49 results for Filter selection paper

in CentAUR: Central Archive, University of Reading - UK


Relevance:

30.00%

Publisher:

Abstract:

Symmetry restrictions on Raman selection rules can be obtained, quite generally, by considering a Raman allowed transition as the result of two successive dipole allowed transitions, and imposing the usual symmetry restrictions on the dipole transitions. This leads to the same results as the more familiar polarizability theory, but the vibration-rotation selection rules are easier to obtain by this argument. The selection rules for symmetric top molecules involving the (+l) and (-l) components of a degenerate vibrational level with first-order Coriolis splitting are derived in this paper. It is shown that these selection rules depend on the order of the highest-fold symmetry axis Cn, being different for molecules with n=3, n=4, or n ≥ 5; moreover the selection rules are different again for molecules belonging to the point groups Dnd with n even, and Sm with m/2 even, for which the highest-fold symmetry axes Cn and Sm are related by m=2n. Finally it is shown that an apparent anomaly between the observed Raman and infra-red vibration-rotation spectra of the allene molecule is resolved when the correct selection rules are used, and a value for the A rotational constant of allene is derived without making use of the zeta sum rule.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, the mixed logit (ML) model, estimated using Bayesian methods, was employed to examine willingness-to-pay (WTP) for bread produced with reduced levels of pesticides so as to improve environmental quality, using data generated by a choice experiment. Model comparison used the marginal likelihood, which is preferable for Bayesian model comparison and testing. Models containing constant and random parameters for a number of distributions were considered, along with models in ‘preference space’ and ‘WTP space’ as well as those allowing for misreporting. We found: strong support for the ML estimated in WTP space; little support for fixing the price coefficient, a common practice advocated and adopted in the environmental economics literature; and weak evidence for misreporting.
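The preference-space versus WTP-space distinction can be illustrated with a small simulation. In preference space, WTP for an attribute is the ratio of the attribute coefficient to (minus) the price coefficient, computed draw by draw. The sketch below is illustrative only, not the paper's model; all distributions and parameter values are invented:

```python
import math
import random
import statistics

def simulate_wtp(n=10000, seed=1):
    """Draws of the WTP implied by preference-space random coefficients:
    WTP = attribute coefficient / (negative of the price coefficient).
    Distributions and parameter values here are purely illustrative."""
    rng = random.Random(seed)
    wtp = []
    for _ in range(n):
        beta_attr = rng.gauss(2.0, 0.5)              # random taste for the attribute
        beta_price = -math.exp(rng.gauss(0.0, 0.2))  # price coefficient, always < 0
        wtp.append(beta_attr / -beta_price)
    return wtp

wtp = simulate_wtp()
```

Specifying the WTP distribution directly ("WTP space") sidesteps the heavy-tailed ratio that arises in preference space when the price coefficient can get close to zero; here the lognormal price coefficient is bounded away from zero, so the ratio stays well behaved.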

Relevance:

30.00%

Publisher:

Abstract:

This paper considers the process of Participatory Varietal Selection (PVS) and presents approaches and ideas based on PVS activities conducted on upland rice throughout Ghana between 1997 and 2003. In particular the role of informal seed systems in PVS is investigated and implications for PVS design are identified. PVS programmes were conducted in two main agroecological zones, Forest and Savannah, with 1,578 and 1,143 mm of annual rainfall, respectively, and between 40 and 100 varieties tested at each site. In the Savannah zone IR12979-24-1 was officially released and in the Forest zone IDSA 85 was widely accepted by farmers. Two surveys were conducted in an area of the Forest zone to study mechanisms of spread. Here small amounts (1-2 kg) of seed of selected varieties had been given to 94 farmers. In 2002, 37% of 2,289 farmers in communities surveyed had already grown a PVS variety and had obtained seed via informal mechanisms from other farmers, i.e. through gift, exchange or purchase. A modified approach for PVS is presented which enables important issues identified in the paper to be accommodated. These issues include: utilising existing seed spread mechanisms; facilitating formal release of acceptable varieties; assessing post-harvest traits; and the need for PVS to be an ongoing and sustainable process.

Relevance:

30.00%

Publisher:

Abstract:

The steadily accumulating literature on technical efficiency in fisheries attests to the importance of efficiency as an indicator of fleet condition and as an object of management concern. In this paper, we extend previous work by presenting a Bayesian hierarchical approach that yields both efficiency estimates and, as a byproduct of the estimation algorithm, probabilistic rankings of the relative technical efficiencies of fishing boats. The estimation algorithm is based on recent advances in Markov Chain Monte Carlo (MCMC) methods—Gibbs sampling, in particular—which have not been widely used in fisheries economics. We apply the method to a sample of 10,865 boat trips in the US Pacific hake (or whiting) fishery during 1987–2003. We uncover systematic differences between efficiency rankings based on sample mean efficiency estimates and those that exploit the full posterior distributions of boat efficiencies to estimate the probability that a given boat has the highest true mean efficiency.
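The distinction between mean-based and probabilistic rankings can be sketched as follows: given posterior draws of each boat's mean efficiency (e.g. retained Gibbs-sampling iterations), the probability that a boat is the most efficient is simply the fraction of joint draws in which it comes out on top. The draws below are invented for illustration:

```python
# Hypothetical posterior draws (e.g. retained Gibbs iterations) of mean
# technical efficiency for three boats; the values are invented.
draws = {
    "A": [0.80, 0.82, 0.78, 0.85, 0.79],
    "B": [0.81, 0.70, 0.90, 0.60, 0.95],
    "C": [0.75, 0.76, 0.74, 0.77, 0.75],
}

def posterior_mean(xs):
    return sum(xs) / len(xs)

def prob_best(draws):
    """P(boat is the most efficient) = share of joint posterior draws
    in which that boat's efficiency draw is the largest."""
    boats = list(draws)
    n_draws = len(next(iter(draws.values())))
    wins = {b: 0 for b in boats}
    for i in range(n_draws):
        best = max(boats, key=lambda b: draws[b][i])
        wins[best] += 1
    return {b: wins[b] / n_draws for b in boats}
```

In this toy example boat A has the higher posterior mean (0.808 vs 0.792), yet boat B is most often the best across joint draws, exactly the kind of discrepancy between the two rankings that the paper reports.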

Relevance:

30.00%

Publisher:

Abstract:

A number of authors have proposed clinical trial designs involving the comparison of several experimental treatments with a control treatment in two or more stages. At the end of the first stage, the most promising experimental treatment is selected, and all other experimental treatments are dropped from the trial. Provided it is good enough, the selected experimental treatment is then compared with the control treatment in one or more subsequent stages. The analysis of data from such a trial is problematic because of the treatment selection and the possibility of stopping at interim analyses. These aspects lead to bias in the maximum-likelihood estimate of the advantage of the selected experimental treatment over the control and to inaccurate coverage for the associated confidence interval. In this paper, we evaluate the bias of the maximum-likelihood estimate and propose a bias-adjusted estimate. We also propose an approach to the construction of a confidence region for the vector of advantages of the experimental treatments over the control based on an ordering of the sample space. These regions are shown to have accurate coverage, although they are also shown to be necessarily unbounded. Confidence intervals for the advantage of the selected treatment are obtained from the confidence regions and are shown to have more accurate coverage than the standard confidence interval based upon the maximum-likelihood estimate and its asymptotic standard error.

Relevance:

30.00%

Publisher:

Abstract:

Most statistical methodology for phase III clinical trials focuses on the comparison of a single experimental treatment with a control. An increasing desire to reduce the time before regulatory approval of a new drug is sought has led to development of two-stage or sequential designs for trials that combine the definitive analysis associated with phase III with the treatment selection element of a phase II study. In this paper we consider a trial in which the most promising of a number of experimental treatments is selected at the first interim analysis. This considerably reduces the computational load associated with the construction of stopping boundaries compared to the approach proposed by Follman, Proschan and Geller (Biometrics 1994; 50: 325-336). The computational requirement does not exceed that for the sequential comparison of a single experimental treatment with a control. Existing methods are extended in two ways. First, the use of the efficient score as a test statistic makes the analysis of binary, normal or failure-time data, as well as adjustment for covariates or stratification straightforward. Second, the question of trial power is also considered, enabling the determination of sample size required to give specified power. Copyright © 2003 John Wiley & Sons, Ltd.

Relevance:

30.00%

Publisher:

Abstract:

Over the last decade, there has been increasing circumstantial evidence for the action of natural selection in the genome, arising largely from molecular genetic surveys of large numbers of markers. In nonmodel organisms without densely mapped markers, a frequently used method is to identify loci that have unusually high or low levels of genetic differentiation, or low genetic diversity relative to other populations. The paper by Makinen et al. (2008a) in this issue of Molecular Ecology reports the results of a survey of microsatellite allele frequencies at more than 100 loci in seven populations of the three-spined stickleback (Gasterosteus aculeatus). They show that a microsatellite locus and two indel markers located within the intron of the Eda gene, known to control the number of lateral plates in the stickleback (Fig. 1), tend to be much more highly genetically differentiated than other loci, a finding that is consistent with the action of local selection. They identify a further two independent candidates for local selection, and, most intriguingly, they further suggest that up to 15% of their loci may provide evidence of balancing selection.
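The outlier logic described here can be sketched with the simplest two-population F_ST estimator (a textbook formula, not the statistic used by Makinen et al.): loci under local selection should show unusually high differentiation relative to the neutral background.

```python
def fst(p1, p2):
    """F_ST for one biallelic locus in two populations, from allele
    frequencies: (H_T - H_S) / H_T, with H = 2p(1-p)."""
    p_bar = (p1 + p2) / 2
    h_t = 2 * p_bar * (1 - p_bar)                        # total heterozygosity
    h_s = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2    # mean within-population
    return 0.0 if h_t == 0 else (h_t - h_s) / h_t

# Illustrative allele frequencies: most loci drift together, one diverges
# sharply, as a locus under local selection would.
loci = [(0.48, 0.52), (0.30, 0.35), (0.70, 0.66), (0.90, 0.10)]
values = [fst(p1, p2) for p1, p2 in loci]
```

The divergent locus (0.90 vs 0.10) has F_ST ≈ 0.64, orders of magnitude above the background, while balancing selection would show the opposite signature: F_ST unusually close to zero.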

Relevance:

30.00%

Publisher:

Abstract:

This paper considers the problem of estimation when one of a number of populations, assumed normal with known common variance, is selected on the basis of it having the largest observed mean. Conditional on selection of the population, the observed mean is a biased estimate of the true mean. This problem arises in the analysis of clinical trials in which selection is made between a number of experimental treatments that are compared with each other either with or without an additional control treatment. Attempts to obtain approximately unbiased estimates in this setting have been proposed by Shen [2001. An improved method of evaluating drug effect in a multiple dose clinical trial. Statist. Medicine 20, 1913–1929] and Stallard and Todd [2005. Point estimates and confidence regions for sequential trials involving selection. J. Statist. Plann. Inference 135, 402–419]. This paper explores the problem in the simple setting in which two experimental treatments are compared in a single analysis. It is shown that in this case the estimate of Stallard and Todd is the maximum-likelihood estimate (m.l.e.), and this is compared with the estimate proposed by Shen. In particular, it is shown that the m.l.e. has infinite expectation whatever the true value of the mean being estimated. We show that there is no conditionally unbiased estimator, and propose a new family of approximately conditionally unbiased estimators, comparing these with the estimators suggested by Shen.
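The conditional bias at the heart of this problem is easy to demonstrate by simulation. Below, two experimental treatments have the same true mean; selecting the larger observed mean and reporting it unadjusted systematically overestimates the true effect (sample counts and parameter values are illustrative):

```python
import random
import statistics

def selected_mean_bias(true_mean=0.0, sd=1.0, n_sims=20000, seed=7):
    """Average of the observed mean of whichever of two equally good
    treatments happens to look better -- the naive, biased estimate."""
    rng = random.Random(seed)
    selected = [max(rng.gauss(true_mean, sd), rng.gauss(true_mean, sd))
                for _ in range(n_sims)]
    return statistics.mean(selected)

bias = selected_mean_bias() - 0.0  # true mean is 0, so this is pure bias
```

For two independent normals with common mean, the expected bias of the selected mean is sd/√π ≈ 0.56 standard deviations. The paper's stronger point is that no conditionally unbiased estimator exists at all, so only approximately unbiased corrections are available.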

Relevance:

30.00%

Publisher:

Abstract:

Background: Selecting the highest quality 3D model of a protein structure from a number of alternatives remains an important challenge in the field of structural bioinformatics. Many Model Quality Assessment Programs (MQAPs) have been developed which adopt various strategies in order to tackle this problem, ranging from the so called "true" MQAPs capable of producing a single energy score based on a single model, to methods which rely on structural comparisons of multiple models or additional information from meta-servers. However, it is clear that no current method can consistently separate the highest accuracy models from the lowest. In this paper, a number of the top performing MQAP methods are benchmarked in the context of the potential value that they add to protein fold recognition. Two novel methods are also described: ModSSEA, which is based on the alignment of predicted secondary structure elements, and ModFOLD, which combines several true MQAP methods using an artificial neural network. Results: The ModSSEA method is found to be an effective model quality assessment program for ranking multiple models from many servers, however further accuracy can be gained by using the consensus approach of ModFOLD. The ModFOLD method is shown to significantly outperform the true MQAPs tested and is competitive with methods which make use of clustering or additional information from multiple servers. Several of the true MQAPs are also shown to add value to most individual fold recognition servers by improving model selection, when applied as a post filter in order to re-rank models. Conclusion: MQAPs should be benchmarked appropriately for the practical context in which they are intended to be used. Clustering based methods are the top performing MQAPs where many models are available from many servers; however, they often do not add value to individual fold recognition servers when limited models are available. Conversely, the true MQAP methods tested can often be used as effective post filters for re-ranking a few models from individual fold recognition servers, and further improvements can be achieved using a consensus of these methods.
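ModFOLD's combination step uses a trained neural network, whose details are in the paper; the consensus idea itself can be sketched with a much cruder stand-in that averages rank-normalised scores from several hypothetical MQAPs:

```python
def consensus_rank(scores):
    """scores: {method: {model: quality score, higher is better}}.
    Rank-normalise each method's scores to [0, 1], average across
    methods, and return the models ordered best-first. A crude
    stand-in for a learned combination such as ModFOLD's network."""
    models = sorted(next(iter(scores.values())))
    combined = dict.fromkeys(models, 0.0)
    for per_method in scores.values():
        ranked = sorted(models, key=lambda m: per_method[m])
        for rank, m in enumerate(ranked):
            combined[m] += rank / (len(models) - 1)
    return sorted(models, key=lambda m: -combined[m])

scores = {  # invented scores from three hypothetical MQAPs
    "mqap1": {"model_a": 0.9, "model_b": 0.5, "model_c": 0.1},
    "mqap2": {"model_a": 0.4, "model_b": 0.8, "model_c": 0.3},
    "mqap3": {"model_a": 0.7, "model_b": 0.6, "model_c": 0.2},
}
```

Even though mqap2 prefers model_b, the consensus puts model_a first because two of the three methods rank it top, which is the basic robustness argument for consensus over any single true MQAP.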

Relevance:

30.00%

Publisher:

Abstract:

Two-component systems capable of self-assembling into soft gel-phase materials are of considerable interest due to their tunability and versatility. This paper investigates two-component gels based on a combination of an L-lysine-based dendron and a rigid diamine spacer (1,4-diaminobenzene or 1,4-diaminocyclohexane). The networked gelator was investigated using thermal measurements, circular dichroism, NMR spectroscopy and small angle neutron scattering (SANS), giving insight into the macroscopic properties, nanostructure and molecular-scale organisation. Surprisingly, all of these techniques confirmed that irrespective of the molar ratio of the components employed, the "solid-like" gel network always consisted of a 1:1 mixture of dendron/diamine. Additionally, the gel network was able to tolerate a significant excess of diamine in the "liquid-like" phase before being disrupted. In the light of this observation, we investigated the ability of the gel network structure to evolve from mixtures of different aromatic diamines present in excess. We found that these two-component gels assembled in a component-selective manner, with the dendron preferentially recognising 1,4-diaminobenzene (>70%) when similar competitor diamines (1,2- and 1,3-diaminobenzene) are present. Furthermore, NMR relaxation measurements demonstrated that the gel based on 1,4-diaminobenzene was better able to form a selective ternary complex with pyrene than the gel based on 1,4-diaminocyclohexane, indicative of controlled and selective pi-pi interactions within a three-component assembly. As such, the results in this paper demonstrate how component selection processes in two-component gel systems can control hierarchical self-assembly.

Relevance:

30.00%

Publisher:

Abstract:

The VISIR instrument for the European Southern Observatory (ESO) Very Large Telescope (VLT) is a thermal-infrared imager and spectrometer currently being developed by the French Service d'Astrophysique of CEA Saclay and the Dutch NFRA ASTRON Dwingeloo consortium. This cryogenic instrument will employ precision infrared bandpass filters in the N-band (λ = 7.5-14 µm) and Q-band (λ = 16-28 µm) mid-IR atmospheric windows to study interstellar and circumstellar environments crucial for star and planetary formation theories. As the filters in these mid-IR wavelength ranges are of interest to many astronomical cryogenic instruments, a worldwide astronomical filter consortium was set up with participation from 12 different institutes, each requiring instrument-specific filter operating environments and optical metrology. This paper describes the design and fabrication methods used to manufacture these astronomical consortium filters, including the rationale for the selection of multilayer coating designs, the temperature-dependent optical properties of the filter materials, and FTIR spectral measurements showing the changes in passband and blocking performance on cooling to <50 K. We also describe the development of a 7-14 µm broadband antireflection coating deposited on Ge lenses and KRS-5 grisms for cryogenic operation at 40 K.

Relevance:

30.00%

Publisher:

Abstract:

Supplier selection has a great impact on supply chain management. The quality of supplier selection also affects the profitability of organisations working in the supply chain. As suppliers can provide a variety of services and customers demand a higher quality of service provision, organisations face challenges in making the right choice of supplier for the right needs. The existing methods for supplier selection, such as data envelopment analysis (DEA) and the analytic hierarchy process (AHP), can automatically perform selection of competitive suppliers and further decide the winning supplier(s). However, these methods are not capable of determining the right selection criteria, which should be derived from the business strategy. The ontology model described in this paper integrates the strengths of DEA and AHP with new mechanisms which ensure that the right supplier is selected by the right criteria for the right customer's needs.
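The AHP side of such a model reduces, at its core, to deriving priority weights from a pairwise comparison matrix of criteria. A minimal sketch using the geometric-mean approximation (the criteria and judgements below are invented, not from the paper):

```python
import math

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise comparison matrix via the
    geometric-mean (logarithmic least squares) approximation."""
    gm = [math.prod(row) ** (1 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Illustrative judgements: price is 3x as important as quality,
# 5x as important as delivery; quality is 3x as important as delivery.
matrix = [
    [1,     3,     5],   # price
    [1 / 3, 1,     3],   # quality
    [1 / 5, 1 / 3, 1],   # delivery
]
weights = ahp_weights(matrix)  # roughly [0.64, 0.26, 0.10]
```

DEA would then score candidate suppliers against these weighted criteria; the paper's contribution is the ontology layer that decides which criteria enter the matrix in the first place.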

Relevance:

30.00%

Publisher:

Abstract:

This paper describes the integration of an Utkin observer with the unscented Kalman filter, investigates the performance of the combined observer, termed the unscented Utkin observer, and compares it with an unscented Kalman filter. Simulation tests are performed using a model of a single link robot arm with a revolute elastic joint rotating in a vertical plane. The results indicate that the unscented Utkin observer outperforms the unscented Kalman filter.
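The "unscented" part of both observers refers to the unscented transform: instead of linearising the nonlinearity, it is probed with a small set of deterministically chosen sigma points. A scalar sketch with the standard textbook weights (parameter values are illustrative; this is not the paper's robot-arm model):

```python
import math

def unscented_transform(mean, var, f, alpha=0.1, beta=2.0, kappa=0.0):
    """Propagate a scalar N(mean, var) through a nonlinearity f using
    sigma points, returning the transformed mean and variance."""
    n = 1
    lam = alpha**2 * (n + kappa) - n
    spread = math.sqrt((n + lam) * var)
    sigmas = [mean, mean + spread, mean - spread]
    wm = [lam / (n + lam), 0.5 / (n + lam), 0.5 / (n + lam)]  # mean weights
    wc = [wm[0] + (1 - alpha**2 + beta), wm[1], wm[2]]        # covariance weights
    ys = [f(x) for x in sigmas]
    y_mean = sum(w * y for w, y in zip(wm, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(wc, ys))
    return y_mean, y_var

# For f(x) = x^2 with x ~ N(0, 1), the transform recovers E[x^2] = 1.
m, v = unscented_transform(0.0, 1.0, lambda x: x**2)
```

In the UKF, these propagated means and variances replace the Jacobian-based linearisation step of the extended Kalman filter.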

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we present a feature selection approach based on Gabor wavelet features and boosting for face verification. By convolution with a group of Gabor wavelets, the original images are transformed into vectors of Gabor wavelet features. Then, for each individual, a small set of significant features is selected by the boosting algorithm from the large set of Gabor wavelet features. The experimental results show that the approach successfully selects meaningful and explainable features for face verification. The experiments also suggest that common characteristics such as the eyes, nose and mouth may not be as important as unique characteristics when the training set is small; when the training set is large, both the unique and the common characteristics are important.
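The selection mechanism here is the standard one in boosting: each round of AdaBoost picks the single feature (one threshold stump per Gabor coefficient) that best reduces weighted error, so the features used by the final ensemble form the selected set. A toy sketch on made-up data, not the paper's Gabor pipeline:

```python
import math

def adaboost_select(X, y, rounds=3):
    """Toy AdaBoost with threshold stumps over labels in {-1, +1}.
    Returns the feature index chosen in each boosting round -- these
    indices are the 'selected' features."""
    n = len(y)
    w = [1.0 / n] * n
    chosen = []
    for _ in range(rounds):
        best = None  # (weighted error, feature, threshold, polarity, preds)
        for j in range(len(X[0])):
            for t in sorted({x[j] for x in X}):
                for pol in (1, -1):
                    preds = [pol if x[j] >= t else -pol for x in X]
                    err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
                    if best is None or err < best[0]:
                        best = (err, j, t, pol, preds)
        err, j, t, pol, preds = best
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-12))
        w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, y, preds)]
        total = sum(w)
        w = [wi / total for wi in w]
        chosen.append(j)
    return chosen

# Feature 0 separates the classes; feature 1 is uninformative noise.
X = [[0.1, 5], [0.2, 1], [0.8, 5], [0.9, 1]]
y = [-1, -1, 1, 1]
```

On this separable toy data the informative feature is re-selected every round; on real data the reweighting pushes later rounds toward hard examples, so successive rounds pick complementary features, which is how a small discriminative subset emerges.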

Relevance:

30.00%

Publisher:

Abstract:

A new distributed spam filter system based on mobile agents is proposed in this paper. We introduce the application of mobile agent technology to the spam filter system. The system architecture, the work process and the pivotal technologies of the distributed spam filter system based on mobile agents, as well as the Naive Bayesian filtering method, are described in detail. The experimental results indicate that the system can prevent spam emails effectively.
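The Naive Bayesian filter at the core of such a system is compact enough to sketch directly; the distributed, mobile-agent aspects are the paper's contribution and are not shown here. The training messages below are invented:

```python
import math
from collections import Counter

def train(labelled_docs):
    """labelled_docs: list of (tokens, label), label in {'spam', 'ham'}."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    doc_counts = Counter()
    for tokens, label in labelled_docs:
        word_counts[label].update(tokens)
        doc_counts[label] += 1
    vocab = set(word_counts["spam"]) | set(word_counts["ham"])
    return word_counts, doc_counts, vocab

def classify(tokens, word_counts, doc_counts, vocab):
    """Most probable label under multinomial Naive Bayes with Laplace
    (add-one) smoothing; computed in log space to avoid underflow."""
    total_docs = sum(doc_counts.values())
    best, best_logp = None, -math.inf
    for label in word_counts:
        logp = math.log(doc_counts[label] / total_docs)  # prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for t in tokens:
            logp += math.log((word_counts[label][t] + 1) / denom)
        if logp > best_logp:
            best, best_logp = label, logp
    return best

mail = [(["win", "money", "now"], "spam"), (["free", "money"], "spam"),
        (["meeting", "tomorrow"], "ham"), (["project", "report", "tomorrow"], "ham")]
model = train(mail)
```

In a distributed deployment, the trained counts are exactly the kind of compact state a mobile agent could carry between mail servers.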