994 results for Comparison principle
Abstract:
While the Probability Ranking Principle for Information Retrieval provides the basis for formal models, it makes a very strong assumption regarding the dependence between documents. However, it has been observed that in real situations this assumption does not always hold. In this paper we propose a reformulation of the Probability Ranking Principle based on quantum theory. Quantum probability theory naturally includes interference effects between events. We posit that this interference captures the dependency between judgements of document relevance. The outcome is a more sophisticated principle, the Quantum Probability Ranking Principle, which provides a more sensitive ranking that caters for interference/dependence between documents' relevance.
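As a minimal illustration of the interference the QPRP draws on, in standard quantum probability notation (our sketch, not the paper's derivation): where the classical PRP adds relevance probabilities, quantum probability adds complex amplitudes, which produces a cross term.

```latex
% Classical: probabilities of two exclusive relevance events add directly.
% Quantum: amplitudes add, so an interference (cross) term appears.
\[
  P_{AB} = \lvert \psi_A + \psi_B \rvert^2
         = P_A + P_B + 2\sqrt{P_A P_B}\,\cos\theta_{AB},
\]
% where \theta_{AB} is the phase difference between the amplitudes
% \psi_A and \psi_B. The term 2\sqrt{P_A P_B}\cos\theta_{AB} is the
% interference that can model dependence between document relevances.
```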
Abstract:
Integer ambiguity resolution is an indispensable procedure for all high precision GNSS applications. The correctness of the estimated integer ambiguities is the key to achieving highly reliable positioning, but the solution cannot be validated with classical hypothesis testing methods. The integer aperture estimation theory unifies all existing ambiguity validation tests and provides a new perspective from which to review existing methods, enabling a better understanding of the ambiguity validation problem. This contribution analyses two simple but efficient ambiguity validation tests, the ratio test and the difference test, from three aspects: acceptance region, probability basis and numerical results. The major contributions of this paper can be summarized as follows: (1) The ratio test acceptance region is an overlap of ellipsoids, while the difference test acceptance region is an overlap of half-spaces. (2) The probability basis of these two popular tests is analysed for the first time: the difference test is an approximation to the optimal integer aperture estimator, while the ratio test follows an exponential relationship in probability. (3) The limitations of the two tests are identified for the first time: both may underestimate the failure risk if the model is not strong enough or the float ambiguities fall in a particular region. (4) Extensive numerical results are used to compare the performance of the two tests. The simulation results show that the ratio test outperforms the difference test in some models, while the difference test performs better in others. In particular, in the medium baseline kinematic model the difference test outperforms the ratio test; this superiority is independent of frequency number, observation noise and satellite geometry, but depends on the success rate and the failure rate tolerance, with a smaller failure rate tolerance leading to a larger performance discrepancy.
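A minimal sketch of the two tests as they are commonly stated in the ambiguity validation literature (variable names, the handling of the covariance matrix and the threshold values are illustrative assumptions, not the paper's):

```python
import numpy as np

def sq_norm(x, Qinv):
    """Squared weighted norm ||x||^2_Q = x^T Q^{-1} x of an ambiguity residual."""
    return float(x @ Qinv @ x)

def ratio_test(a_float, a_best, a_second, Qinv, c=2.0):
    """Accept the fixed solution when the second-best/best residual ratio
    exceeds a critical value c; the acceptance region is bounded by
    ellipsoids, as the abstract notes."""
    return sq_norm(a_float - a_second, Qinv) / sq_norm(a_float - a_best, Qinv) >= c

def difference_test(a_float, a_best, a_second, Qinv, d=15.0):
    """Accept when the residual difference exceeds a tolerance d; the
    acceptance region is an intersection (overlap) of half-spaces."""
    return sq_norm(a_float - a_second, Qinv) - sq_norm(a_float - a_best, Qinv) >= d
```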
Abstract:
Introduction: Natural product provenance is important in the food, beverage and pharmaceutical industries, for consumer confidence and because of its health implications. Raman spectroscopy has powerful molecular fingerprinting abilities, and the sharp peaks of Surface Enhanced Raman Spectroscopy (SERS) allow distinction between minimally different molecules, so it should be suitable for this purpose. Methods: Naturally caffeinated beverages containing Guarana extract and coffee, with the Red Bull energy drink as a synthetic caffeinated beverage for comparison (20 µL each), were reacted 1:1 with gold nanoparticles functionalised with an anti-caffeine antibody (ab15221) for 10 minutes, air dried and analysed in a micro-Raman instrument. The spectral data were processed using Principal Component Analysis (PCA). Results: The PCA showed that Guarana-sourced caffeine varied significantly from synthetic caffeine (Red Bull) on component 1 (containing 76.4% of the variance in the data); see Figure 1. The coffee-containing beverages, in particular Robert Timms (instant coffee), were very similar on component 1, although the barista espresso showed minor variance on this component. Both coffee-sourced caffeine samples differed from Red Bull on component 2 (20% of the variance). [Figure 1: PCA comparing a naturally caffeinated beverage containing Guarana with coffee.] Discussion: PCA is an unsupervised multivariate statistical method that determines patterns within data. Figure 1 shows that caffeine in Guarana is notably different from synthetic caffeine. Other researchers have revealed that caffeine in Guarana plants is complexed with tannins. In Figure 1, naturally sourced/lightly processed caffeine (Monster Energy, espresso) is more inherently different from synthetic (Red Bull)/highly processed (Robert Timms) caffeine, which is consistent with this finding and demonstrates the technique's applicability. Guarana provenance is important because it is still largely hand produced and its demand is escalating with recognition of its benefits. This could be a powerful technique for Guarana provenance, and may extend to other industries where provenance/authentication is required, e.g. the wine or natural pharmaceutical industries.
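A minimal PCA sketch of the kind of analysis described, using scikit-learn; the file name and input layout are hypothetical stand-ins for the study's spectral data:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical input: one baseline-corrected SERS spectrum per row,
# columns are intensities at successive Raman shifts (cm^-1).
spectra = np.loadtxt("sers_spectra.csv", delimiter=",")

pca = PCA(n_components=2)
scores = pca.fit_transform(spectra)    # sample coordinates on PC1/PC2
print(pca.explained_variance_ratio_)   # the study reports ~76.4% on PC1

# A scatter of scores[:, 0] vs scores[:, 1] is what Figure 1 shows:
# Guarana-sourced caffeine separating from synthetic caffeine on PC1.
```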
Abstract:
Objective: The study aimed to examine the difference in response rates between opt-out and opt-in participant recruitment in a population-based study of heavy-vehicle drivers involved in a police-attended crash. Methods: Two approaches to subject recruitment were implemented in two different states over a 14-week period, and the response rates for the two approaches (opt-out versus opt-in recruitment) were compared. Results: Based on the eligible and contactable drivers, the response rates were 54% for the opt-out group and 16% for the opt-in group. Conclusions and Implications: The opt-in recruitment strategy (which was a consequence of one jurisdiction's interpretation of the national Privacy Act at the time) resulted in an insufficient and potentially biased sample for the purposes of conducting research into risk factors for heavy-vehicle crashes. Australia's national Privacy Act 1988 has had a long history of inconsistent practices by state and territory government departments and ethical review committees. These inconsistencies can have profound effects on the validity of research, as shown by the significantly different response rates reported in this study. It is hoped that a more unified interpretation of the Privacy Act across the states and territories, as proposed under the soon-to-be-released Australian Privacy Principles, will reduce the recruitment challenges outlined in this study.
Abstract:
Protocols for bioassessment often relate changes in summary metrics that describe aspects of biotic assemblage structure and function to environmental stress. Biotic assessment using multimetric indices now forms the basis for setting regulatory standards for stream quality and for a range of other goals related to water resource management in the USA and elsewhere. Biotic metrics are typically interpreted with reference to the expected natural state to evaluate whether a site is degraded, so it is critical that natural variation in biotic metrics along environmental gradients is adequately accounted for in order to quantify human disturbance-induced change. A common approach used in the Index of Biotic Integrity (IBI) is to examine scatter plots of variation in a given metric along a single stream-size surrogate and fit a line (drawn by eye) to form the upper bound, and hence define the maximum likely value of the metric at a site with given environmental characteristics (termed the 'maximum species richness line', MSRL). In this paper we examine whether the use of a single environmental descriptor and the MSRL is appropriate for defining the reference condition for a biotic metric (fish species richness) and for detecting human disturbance gradients in rivers of south-eastern Queensland, Australia. We compare the accuracy and precision of the MSRL approach based on single environmental predictors with three regression-based prediction methods (simple linear regression (SLR), generalised linear modelling and regression tree modelling) that use, either singly or in combination, a set of landscape- and local-scale environmental variables as predictors of species richness. We compared the frequency of classification errors from each method against set biocriteria and contrasted the ability of each method to accurately reflect human disturbance gradients at a large set of test sites. The results of this study suggest that the MSRL, based upon variation in a single environmental descriptor, could not accurately predict species richness at minimally disturbed sites when compared with SLRs based on equivalent environmental variables. Regression-based modelling incorporating multiple environmental variables as predictors explained natural variation in species richness more accurately than simple models using single environmental predictors. Prediction error arising from the MSRL was substantially higher than for the regression methods and led to an increased frequency of Type I errors (incorrectly classifying a site as disturbed). We suggest that the problems with the MSRL arise from its inherent scoring procedure and from its restriction to predicting variation in the dependent variable along a single environmental gradient.
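A hedged sketch contrasting the two benchmark types on synthetic data; the eye-fitted MSRL is approximated here by shifting a regression line upward until it envelopes the reference sites, so everything below is illustrative, not the paper's data or method:

```python
import numpy as np

rng = np.random.default_rng(0)
log_area = rng.uniform(0.0, 4.0, 50)                   # stream-size surrogate
richness = 2 + 3 * log_area + rng.normal(0, 1.5, 50)   # reference-site richness

# SLR benchmark: expected richness at a site of this size.
slope, intercept = np.polyfit(log_area, richness, 1)
expected = intercept + slope * log_area

# Crude MSRL stand-in: raise the line until it bounds every reference site.
msrl = expected + (richness - expected).max()

# Scoring test sites against the (higher) MSRL benchmark flags more of them
# as below expectation, which is how an inflated Type I error rate arises.
```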
Abstract:
1. Biodiversity, water quality and ecosystem processes in streams are known to be influenced by the terrestrial landscape over a range of spatial and temporal scales. Lumped attributes (i.e. per cent land use) are often used to characterise the condition of the catchment; however, they are not spatially explicit and do not account for the disproportionate influence of land located near the stream or connected to it by overland flow. 2. We compared seven landscape representation metrics to determine whether accounting for the spatial proximity and hydrological effects of land use explains additional variability in indicators of stream ecosystem health. The landscape metrics included the following: a lumped metric, four inverse-distance-weighted (IDW) metrics based on distance to the stream or to the survey site, and two modified IDW metrics that also accounted for the level of hydrologic activity (HA-IDW). Ecosystem health data were obtained from the Ecological Health Monitoring Programme in Southeast Queensland, Australia, and included measures of fish, invertebrates, physicochemistry and nutrients collected during two seasons over 4 years. Linear models were fitted to the stream indicators and landscape metrics, by season, and compared using an information-theoretic approach. 3. Although no single metric was most suitable for modelling all stream indicators, lumped metrics rarely performed as well as the other metric types. Metrics based on proximity to the stream (IDW and HA-IDW) were more suitable for modelling fish indicators, while the HA-IDW metric based on proximity to the survey site generally outperformed the others for invertebrates, irrespective of season. There was consistent support for metrics based on proximity to the survey site (IDW or HA-IDW) for all physicochemical indicators during the dry season, while an HA-IDW metric based on proximity to the stream was suitable for five of the six physicochemical indicators in the post-wet season. Only one nutrient indicator was tested, and the results showed that catchment area had a significant effect on the relationship between land use metrics and algal stable isotope ratios in both seasons. 4. Spatially explicit methods of landscape representation can clearly improve the predictive ability of many empirical models currently used to study the relationship between landscape, habitat and stream condition. A comparison of different metrics may provide clues about causal pathways and mechanistic processes behind correlative relationships and could be used to target restoration efforts strategically.
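A minimal sketch of how a lumped metric relates to the IDW and HA-IDW metrics; cell values, distances and the exact weighting function are illustrative assumptions, not the study's specification:

```python
import numpy as np

def idw_landuse(land_use, distance, power=1.0, activity=None):
    """Inverse-distance-weighted proportion of a land-use class.

    land_use : 0/1 flags marking catchment cells of the class of interest
    distance : each cell's distance to the stream (or to the survey site)
    activity : optional hydrologic-activity weights (the HA-IDW variants)
    """
    w = 1.0 / (np.asarray(distance) + 1.0) ** power   # +1 avoids div by zero
    if activity is not None:
        w = w * np.asarray(activity)
    lu = np.asarray(land_use, dtype=float)
    return float((w * lu).sum() / w.sum())

# With power=0 and no activity weights, every cell gets equal weight and
# the result reduces to the lumped metric, i.e. plain per cent land use.
```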
Abstract:
Water management is vital for mine sites, both for production and for sustainability-related issues. Effective water management is a complex task since the role of water on mine sites is multifaceted. Computer models are tools that represent mine site water interactions and can be used by mine sites to inform or evaluate their water management strategies. Several types of model can be used to represent mine site water interactions. This paper presents three such models: an operational model, an aggregated systems model and a generic systems model. For each model the paper provides a description and an example, followed by an analysis of its advantages and disadvantages. The paper hypothesises that, since no model is optimal for all situations, each model should be applied where it is most appropriate for the scale of water interactions being investigated: unit (operational), inter-site (aggregated systems) or intra-site (generic systems).
Abstract:
The nonlinear stability analysis introduced by Chen and Haughton [1] is employed to study the full nonlinear stability of the non-homogeneous spherically symmetric deformation of an elastic thick-walled sphere. The shell is composed of an arbitrary homogeneous, incompressible elastic material. The stability criterion ultimately requires the solution of a third-order nonlinear ordinary differential equation. Numerical calculations performed for a wide variety of well-known incompressible materials are then compared with existing bifurcation results and are found to be identical. Further analysis and comparison between stability and bifurcation are conducted for the case of thin shells and we prove by direct calculation that the two criteria are identical for all modes and all materials.
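A hedged sketch of how such a stability criterion can be evaluated numerically: reduce the third-order nonlinear ODE to a first-order system and integrate across the shell wall with SciPy. The right-hand side F below is a placeholder, not the paper's equation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def F(r, y, dy, d2y):
    """Placeholder for the criterion's third-order right-hand side y''' = F."""
    return -(y * d2y + dy) / r            # illustrative nonlinearity only

def system(r, u):
    # State vector u = (y, y', y''); return (y', y'', y''').
    y, dy, d2y = u
    return [dy, d2y, F(r, y, dy, d2y)]

# Integrate from the inner radius a to the outer radius b of the shell.
a, b = 1.0, 2.0
sol = solve_ivp(system, (a, b), y0=[1.0, 0.0, 0.0], dense_output=True)
print(sol.y[0, -1])                       # outer-boundary value of y
```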
Abstract:
A cylindrical magnetron system and a hybrid inductively coupled plasma-assisted magnetron deposition system were examined experimentally in terms of their discharge characteristics, with a view to stressing the enhanced controllability of the hybrid system. The comparative study showed that the hybrid magnetron + inductively coupled plasma system is a flexible, powerful, and convenient tool with certain advantages over cylindrical dc magnetrons. In particular, the hybrid system features more linear current-voltage characteristics and the possibility of bias-independent control of the discharge current.
Abstract:
PURPOSE To compare diffusion-weighted functional magnetic resonance imaging (DfMRI), a novel alternative to blood oxygenation level-dependent (BOLD) contrast, with BOLD imaging in a functional MRI experiment. MATERIALS AND METHODS Nine participants viewed contrast-reversing (7.5 Hz) black-and-white checkerboard stimuli in block and event-related paradigms. DfMRI (b = 1800 s/mm²) and BOLD sequences were acquired. Four parameters describing the observed signal were assessed for the different paradigms and sequences: percent signal change, spatial extent of the activation, the Euclidean distance between peak voxel locations, and the time-to-peak (TTP) of the best-fitting impulse response. RESULTS The BOLD conditions showed a higher percent signal change relative to DfMRI; however, event-related DfMRI showed the strongest group activation (t = 21.23, P < 0.0005). Activation was more diffuse and spatially closer to the BOLD response for DfMRI when the block design was used. Event-related DfMRI showed the shortest TTP (4.4 +/- 0.88 sec). CONCLUSION The hemodynamic contribution to DfMRI may increase with the use of block designs.
Abstract:
Background The sequencing, de novo assembly and annotation of transcriptome datasets generated with next generation sequencing (NGS) have enabled biologists to answer genomic questions in non-model species with unprecedented ease. Reliable and accurate de novo assembly and annotation of transcriptomes is, however, a critically important step for transcriptome assemblies generated from short read sequences. Typical benchmarks for assembly and annotation reliability have been performed with model species. To address the reliability and accuracy of de novo transcriptome assembly in non-model species, we generated an RNAseq dataset for an intertidal gastropod mollusc, Nerita melanotragus, and compared the assemblies produced by four de novo transcriptome assemblers (Velvet, Oases, Geneious and Trinity) on a number of quality metrics and on redundancy. Results Transcriptome sequencing on the Ion Torrent PGM™ produced 1,883,624 raw reads with a mean length of 133 base pairs (bp). The Trinity and Oases assemblers produced the best assemblies on all quality metrics, including fewer contigs, higher N50, greater average contig length, and contigs of greater length overall. The BLAST and annotation success of our assemblies was not high, with only 15-19% of contigs assigned a putative function. Conclusions We believe that any improvement in annotation success for gastropod species will require more gastropod genome sequences, and in particular an increase in mollusc protein sequences in public databases. Overall, this paper demonstrates that reliable and accurate de novo transcriptome assemblies can be generated from short read sequencers with the right assembly algorithms. Keywords: Nerita melanotragus; de novo assembly; transcriptome; heat shock protein; Ion Torrent
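For reference, the N50 metric used to compare the assemblies can be computed as below; this is the standard definition, not code from the paper:

```python
def n50(contig_lengths):
    """Smallest contig length L such that contigs of length >= L
    contain at least half of all assembled bases."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if 2 * running >= total:
            return length

print(n50([100, 200, 300, 400]))  # -> 300: 400+300 covers half of 1000 bases
```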
Abstract:
Dual-energy X-ray absorptiometry (DXA) and the isotope dilution technique have been used as reference methods to validate estimates of body composition by simple field techniques; however, very few studies have compared these two methods. We compared the estimates of body composition by DXA and the isotope dilution (¹⁸O) technique in apparently healthy Indian men and women (aged 19–70 years, n 152, 48 % men) with a wide range of BMI (14–40 kg/m²). Isotopic enrichment was assessed by isotope ratio mass spectrometry. The agreement between the estimates of body composition measured by the two techniques was assessed by the Bland–Altman method. The mean age and BMI were 37 (SD 15) years and 23·3 (SD 5·1) kg/m², respectively, for men and 37 (SD 14) years and 24·1 (SD 5·8) kg/m², respectively, for women. Compared with the isotope dilution technique, DXA estimates of fat-free mass were higher by about 7 (95 % CI 6, 9) %, estimates of fat mass were lower by about 21 (95 % CI −18, −23) %, and estimates of body fat percentage (BF%) were lower by about 7·4 (95 % CI −8·2, −6·6) %. The Bland–Altman analysis showed wide limits of agreement, indicating poor agreement between the methods. The bias in the estimates of BF% was higher at the lower values of BF%. Thus, the two commonly used reference methods showed substantial differences in the estimates of body composition, with wide limits of agreement. As the estimates of body composition are method-dependent, the two methods cannot be used interchangeably.
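A minimal sketch of the Bland–Altman agreement computation used here; the values are illustrative, not the study's measurements:

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between two methods."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# e.g. body fat % for the same subjects by DXA vs isotope dilution:
dxa = [22.1, 30.4, 18.9, 27.5]
iso = [29.0, 37.5, 26.2, 34.1]
bias, limits = bland_altman(dxa, iso)   # negative bias: DXA reads lower
```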
Abstract:
The foliage of a plant performs vital functions. As such, leaf models need to be developed for modelling plant architecture from a set of scattered data captured using a scanning device. The leaf model can be used for purely visual purposes or as part of a further model, such as a fluid movement model or a biological process model. For these reasons, an accurate mathematical representation of the surface and boundary is required. This paper compares three approaches for fitting a continuously differentiable surface through a set of scanned data points from a leaf surface with a technique already used for reconstructing leaf surfaces. The techniques considered are discrete smoothing D2-splines [R. Arcangeli, M. C. Lopez de Silanes, and J. J. Torrens, Multidimensional Minimising Splines, Springer, 2004], the thin plate spline finite element smoother [S. Roberts, M. Hegland, and I. Altas, Approximation of a Thin Plate Spline Smoother Using Continuous Piecewise Polynomial Functions, SIAM, 1 (2003), pp. 208–234] and the radial basis function Clough-Tocher method [M. Oqielat, I. Turner, and J. Belward, A Hybrid Clough-Tocher Method for Surface Fitting with Application to Leaf Data, Appl. Math. Modelling, 33 (2009), pp. 2582–2595]. Numerical results show that discrete smoothing D2-splines produce reconstructed leaf surfaces which better represent the original physical leaf.
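A hedged sketch of one of the compared approaches, a smoothed thin plate spline fit, using SciPy's standard radial-basis implementation rather than the finite element smoother of Roberts et al.; the point cloud is synthetic, standing in for the scanned leaf points:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic stand-in for scanned (x, y, z) leaf-surface points.
rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 1.0, (200, 2))
z = np.sin(3 * xy[:, 0]) * np.cos(2 * xy[:, 1]) + rng.normal(0, 0.01, 200)

# Thin plate spline with a small smoothing parameter: a continuously
# differentiable surface fitted through noisy scan data.
surface = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=1e-3)
print(surface(np.array([[0.5, 0.5], [0.2, 0.8]])))  # evaluate the fit
```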
Abstract:
Particle number concentrations vary significantly with environment and, in this study, we attempt to assess the significance of these differences. Towards this aim, we reviewed 85 papers that reported particle number concentration levels at 126 sites covering different environments. We grouped the results into eight categories according to measurement location: road tunnel, on-road, roadside, street canyon, urban, urban background, rural, and clean background. Median values were calculated for each category. The review was restricted to papers that presented concentrations numerically. The majority of the reports were based on either CPC or SMPS measurements, with a limited number of papers reporting results from both instruments at the same site; hence there was some overlap between the CPC and SMPS measurement sites. Most of the studies reported multiple measurements at a given study site, while some studies included results from more than one site. From these reports, the overall median value for each location category was calculated...
Abstract:
Differences in the levels of risk perceived by cyclists and car drivers may contribute to the dangers in their interactions. Levels of perceived risk have been shown to vary with personal and environmental factors and between countries. Cycling rates in France are higher than in Australia, particularly among women. This study investigated whether cultural differences between France and Australia are reflected in the risks perceived by experienced adult cyclists and drivers in the two countries. In online surveys, regular cyclists (France 336, Australia 444) and drivers (France 92, Australia 151) were asked to rate the level of risk in six situations: failure to yield; going through a red light; not signalling when turning; swerving; tail-gating; and not checking traffic. The effects of the type of interacting vehicle and of participant type on perceived risk were similar in France and Australia. However, the influence of responsibility for the risky behaviour differed according to participant type, type of situation and nationality. When the bicycle rider committed the road rule violation, Australian cyclists and drivers gave higher risk ratings than French cyclists and drivers. In both countries, cyclists rated themselves significantly higher than drivers on the perceived control and overconfidence subscales of the perceived skill measure. French cyclists rated themselves higher than Australian cyclists on these scales, which could account for the overall lower perceived risk levels when interacting with a bike. Australian cyclists rated themselves significantly lower than drivers on the incompetence subscale, but French cyclists rated themselves higher than drivers. In both countries, incompetence scores were positively related to levels of perceived risk. Weekly cycling time was associated with perceived risk in Australia but not in France. Frequency of traffic violations was not associated with perceived risk in either country. In conclusion, levels of perceived risk differed between drivers and cyclists in both countries and were influenced by the type of interacting vehicle, experience and perceived skill. However, some differences between the results from the two countries merit further investigation to shed light on potential improvements in safety and cycling participation.