942 results for Subpixel precision


Relevance:

20.00%

Publisher:

Abstract:

Annual counts of migrating raptors at fixed observation points are a widespread practice, and changes in numbers counted over time, adjusted for survey effort, are commonly used as indices of trends in population size. Unmodeled year-to-year variation in detectability may introduce bias, reduce the precision of trend estimates, and reduce the power to detect trends. We conducted dependent double-observer surveys at the annual fall raptor migration count at Lucky Peak, Idaho, in 2009 and 2010 and applied Huggins closed-capture removal models and information-theoretic model selection to determine the relative importance of factors affecting detectability. The most parsimonious model included effects of observer team identity, distance, species, and day of the season. We then simulated 30 years of counts with heterogeneous individual detectability, a population decline (λ = 0.964), and unexplained random variation in the number of available birds. Imperfect detectability did not bias trend estimation and increased the time required to achieve 80% power by less than 11%. Results suggested that availability is a greater source of variance in annual counts than detectability; thus, efforts to account for availability would improve the monitoring value of migration counts. According to our models, long-term trends in observer efficiency or migratory flight distance may introduce substantial bias into trend estimates. Estimating detectability with a novel count protocol such as our double-observer method is just one potential means of controlling such effects. The traditional approach of modeling the effects of covariates and adjusting the index may also be effective if ancillary data are collected consistently.
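
The simulation design described above translates directly into a short numerical experiment. The Python sketch below mimics the setup in spirit only: the starting population, detection probability, and noise levels are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

years = np.arange(30)
lam = 0.964                      # annual population multiplier (declining trend)
N0 = 5000                        # initial number of available birds (assumed)

# Available birds: deterministic decline plus unexplained random variation
available = N0 * lam**years * rng.lognormal(0.0, 0.15, size=years.size)

# Imperfect detection: each available bird is detected independently
p_detect = 0.7                   # illustrative detection probability
counts = rng.binomial(available.astype(int), p_detect)

# Trend estimate: the slope of log(counts) vs year recovers log(lambda),
# illustrating why a *constant* imperfect detectability does not bias the trend
slope, intercept = np.polyfit(years, np.log(counts), 1)
print(f"true log-lambda {np.log(lam):.4f}, estimated {slope:.4f}")
```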

Relevance:

20.00%

Publisher:

Abstract:

We introduce quantum sensing schemes for measuring very weak forces with a single trapped ion. They use the spin-motional coupling induced by the laser-ion interaction to transfer the relevant force information to the spin degree of freedom. The force estimation is therefore carried out simply by observing the Ramsey-type oscillations of the ion spin states. Three quantum probes are considered, represented by systems obeying the Jaynes-Cummings, quantum Rabi (in 1D) and Jahn-Teller (in 2D) models. By using dynamical decoupling schemes in the Jaynes-Cummings and Jahn-Teller models, our force sensing protocols can be made robust to the spin dephasing caused by thermal and magnetic field fluctuations. In the quantum-Rabi probe, the residual spin-phonon coupling vanishes, which makes this sensing protocol naturally robust to thermally induced spin dephasing. We show that the proposed techniques can be used to sense the axial and transverse components of the force with a sensitivity beyond the yN/√Hz range, i.e., in the xN/√Hz range (xennonewton, 10^−27 N). The Jahn-Teller protocol, in particular, can be used to implement a two-channel vector spectrum analyzer for measuring ultra-low voltages.
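
At its core, the readout reduces to fitting the frequency of a Ramsey fringe, since that frequency encodes the force. The toy Python sketch below illustrates only that final fitting step; the signal model, frequency, and noise level are invented for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Toy Ramsey-type signal: the spin excitation oscillates at a frequency
# assumed proportional to the force-induced coupling.
def ramsey(t, omega, amp, offset):
    return offset + amp * np.cos(omega * t)

t = np.linspace(0, 5e-3, 200)          # free-evolution times (s), illustrative
omega_true = 2 * np.pi * 800.0         # rad/s, set by the unknown force (toy value)
signal = ramsey(t, omega_true, 0.5, 0.5) + rng.normal(0, 0.02, t.size)

popt, pcov = curve_fit(ramsey, t, signal, p0=[2 * np.pi * 700, 0.5, 0.5])
print(f"estimated frequency: {popt[0] / (2 * np.pi):.1f} Hz "
      f"(+/- {np.sqrt(pcov[0, 0]) / (2 * np.pi):.1f} Hz)")
```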

Relevance:

10.00%

Publisher:

Abstract:

The ability of agents and services to automatically locate and interact with unknown partners is a goal for both the semantic web and web services. This "serendipitous interoperability" is hindered by the lack of an explicit means of describing what services (or agents) are able to do, that is, their capabilities. At present, informal descriptions of what services can do are found in "documentation" elements; or they are somehow encoded in operation names and signatures. We show, by reference to existing service examples, how ambiguous and imprecise capability descriptions hamper the attainment of automated interoperability goals in the open, global web environment. In this paper we propose a structured, machine-readable description of capabilities, which may help to increase the recall and precision of service discovery mechanisms. Our capability description draws on previous work in capability and process modeling and allows the incorporation of external classification schemes. The capability description is presented as a conceptual meta model. The model supports conceptual queries and can be used as an extension to the DAML-S Service Profile.
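
To make the idea of a structured, machine-readable capability concrete, here is a hypothetical Python sketch of what such a record might carry: an action, the object it acts on, inputs and outputs, and links to external classification schemes. All field names and the classification code are illustrative inventions, not the paper's meta model or the DAML-S vocabulary.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified rendering of a structured capability description.
@dataclass
class Capability:
    action: str                      # what the service does, e.g. "book"
    object_type: str                 # what it acts on, e.g. "Flight"
    inputs: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)
    classification: dict[str, str] = field(default_factory=dict)

book_flight = Capability(
    action="book",
    object_type="Flight",
    inputs=["Itinerary", "PaymentDetails"],
    outputs=["BookingConfirmation"],
    classification={"UNSPSC": "78111502"},  # illustrative code reference
)
print(book_flight)
```

A discovery mechanism could then match on the structured fields (action, object type, classification code) rather than on free-text documentation or operation names.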

Relevance:

10.00%

Publisher:

Abstract:

Mechanical harmonic transmissions are a relatively new kind of drive with several unusual features. For example, they can provide a reduction ratio of up to 500:1 in one stage, have a very small tooth module compared to conventional drives, and have a very large number of teeth (up to 1000) on a flexible gear. While manufacturing methods for conventional drives are well developed, fabrication of large-size harmonic drives presents a challenge. For example, how does one fabricate a thin shell 1.7 m in diameter with a wall thickness of 30 mm, having high-precision external teeth at one end and internal splines at the other? Such a shell is so flexible that conventional fabrication methods become unsuitable. In this paper, special fabrication methods are discussed that can be used for manufacturing large-size harmonic drive components. They include electro-slag welding and refining, the use of special expandable devices to locate and hold a flexible gear, welding the peripheral parts of disks with wear-resistant materials with subsequent machining, and others. These fabrication methods proved to be effective, and harmonic drives built with these innovative technologies have been installed on heavy metallurgical equipment and successfully tested.
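
The 500:1 single-stage ratio and the 1000-tooth flexible gear quoted above are linked by the standard harmonic drive kinematics: with the circular spline fixed, the ratio is Zf / (Zc − Zf). A minimal Python check, assuming the usual two-tooth difference between the spline and the flexible gear:

```python
def harmonic_reduction_ratio(flexspline_teeth: int, circular_spline_teeth: int) -> float:
    """Single-stage harmonic drive ratio with a fixed circular spline:
    ratio = Zf / (Zc - Zf)."""
    return flexspline_teeth / (circular_spline_teeth - flexspline_teeth)

# A 1000-tooth flexspline meshing with a 1002-tooth circular spline
# gives the 500:1 single-stage reduction mentioned above.
print(harmonic_reduction_ratio(1000, 1002))  # 500.0
```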

Relevance:

10.00%

Publisher:

Abstract:

Background: The accurate measurement of cardiac output (CO) is vital in guiding the treatment of critically ill patients. Invasive or minimally invasive measurement of CO is not without inherent risks to the patient. Skilled Intensive Care Unit (ICU) nursing staff are in an ideal position to assess changes in CO following therapeutic measures. The USCOM (Ultrasonic Cardiac Output Monitor) device is a non-invasive CO monitor whose clinical utility and ease of use require testing. Objectives: To compare cardiac output measurement using a non-invasive ultrasonic device (USCOM) operated by a non-echocardiographically trained ICU Registered Nurse (RN) with the conventional pulmonary artery catheter (PAC) using both thermodilution and Fick methods. Design: Prospective observational study. Setting and participants: Between April 2006 and March 2007, we evaluated 30 spontaneously breathing patients requiring PAC for assessment of heart failure and/or pulmonary hypertension at a tertiary-level cardiothoracic hospital. Methods: USCOM CO was compared with thermodilution measurements via PAC and with CO estimated using a modified Fick equation. The PAC was inserted by a medical officer, and all USCOM measurements were performed by a senior ICU nurse. Mean values, bias and precision, and mean percentage difference between measures were determined to compare methods. The intra-class correlation statistic was also used to assess agreement. The USCOM time to measure was recorded to assess the learning curve for USCOM use by an ICU RN, and a line of best fit was used to describe the operator learning curve. Results: In 24 of 30 (80%) patients studied, CO measures were obtained. In 6 of 30 (20%) patients, an adequate USCOM signal was not achieved. The mean differences (± standard deviation) between USCOM and PAC, USCOM and Fick, and Fick and PAC CO were small: −0.34 ± 0.52 L/min, −0.33 ± 0.90 L/min and −0.25 ± 0.63 L/min respectively, across a range of outputs from 2.6 L/min to 7.2 L/min. The percent limits of agreement (LOA) were −34.6% to 17.8% for USCOM and PAC, −49.8% to 34.1% for USCOM and Fick, and −36.4% to 23.7% for PAC and Fick. Signal acquisition time reduced on average by 0.6 min per measure, to less than 10 min at the end of the study. Conclusions: In 80% of our cohort, USCOM, PAC and Fick measures of CO all showed clinically acceptable agreement, and the learning curve for operation of the non-invasive USCOM device by an ICU RN was found to be satisfactorily short. Further work is required in patients receiving positive pressure ventilation.
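
The agreement statistics quoted above (bias, precision as the SD of the differences, and limits of agreement) follow the usual Bland-Altman recipe. A minimal Python sketch, with made-up numbers rather than the study data:

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias, precision (SD of differences) and 95% limits of agreement."""
    a, b = np.asarray(method_a), np.asarray(method_b)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    # Percentage error relative to the mean CO of the two methods
    percent_error = 1.96 * sd / ((a.mean() + b.mean()) / 2) * 100
    return bias, sd, loa, percent_error

# Illustrative CO values (L/min) only -- not the study data
uscom = np.array([3.1, 4.2, 5.0, 6.1, 4.8])
pac   = np.array([3.4, 4.6, 5.2, 6.5, 5.3])
print(bland_altman(uscom, pac))
```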

Relevance:

10.00%

Publisher:

Abstract:

In Australia, the Queensland fruit fly (B. tryoni) is the most destructive insect pest of horticulture, attacking nearly all fruit and vegetable crops. This project researched and prototyped a system for monitoring fruit flies so that authorities can be alerted when a fly enters a crop more efficiently than with current methods. This paper presents our sensor platform design as well as a fruit fly detection and recognition algorithm based on machine vision techniques. Our experiments showed that the designed trap and sensor platform is capable of capturing quality fly images, that the invasive flies can be successfully detected, and that the average precision of Queensland fruit fly recognition is 80%.
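
The reported 80% figure is a precision metric over the recognizer's outputs. For concreteness, a trivial Python rendering of the underlying ratio, with invented counts (the paper does not give the raw true/false positive numbers):

```python
def precision(true_positives: int, false_positives: int) -> float:
    """Precision = TP / (TP + FP); the study reports ~80% for B. tryoni."""
    return true_positives / (true_positives + false_positives)

# Illustrative counts only: 40 correct fly identifications, 10 spurious ones
print(precision(40, 10))  # 0.8
```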

Relevance:

10.00%

Publisher:

Abstract:

Here we search for evidence of the existence of a sub-chondritic 142Nd/144Nd reservoir that balances the Nd isotope chemistry of the Earth relative to chondrites. If present, it may reside in the source region of deeply sourced mantle plume material. We suggest that lavas from Hawai'i with coupled elevations in 186Os/188Os and 187Os/188Os, from Iceland that represent mixing of upper mantle and lower mantle components, and from Gough with sub-chondritic 143Nd/144Nd and high 207Pb/206Pb, are favorable samples that could reflect mantle sources that have interacted with an Early-Enriched Reservoir (EER) with sub-chondritic 142Nd/144Nd. High-precision Nd isotope analyses of basalts from Hawai'i, Iceland and Gough demonstrate no discernible 142Nd/144Nd deviation from terrestrial standards. These data are consistent with previous high-precision Nd isotope analyses of recent mantle-derived samples and demonstrate that no mantle-derived material to date provides evidence for the existence of an EER in the mantle. We then evaluate mass balance in the Earth with respect to both 142Nd/144Nd and 143Nd/144Nd. The Nd isotope systematics of EERs are modeled for different sizes and timings of formation relative to ε143Nd estimates of the reservoirs in the μ142Nd = 0 Earth, where μ142Nd = ((measured 142Nd/144Nd / terrestrial standard 142Nd/144Nd) − 1) × 10^6 and the μ142Nd = 0 Earth is the proportion of the silicate Earth with 142Nd/144Nd indistinguishable from the terrestrial standard. The models indicate that it is not possible to balance the Earth with respect to both 142Nd/144Nd and 143Nd/144Nd unless the μ142Nd = 0 Earth has an ε143Nd within error of the present-day depleted mid-ocean ridge basalt mantle source (DMM). The 4567 Myr 142Nd–143Nd isochron for the Earth intersects μ142Nd = 0 at an ε143Nd of +8 ± 2, providing a minimum ε143Nd for the μ142Nd = 0 Earth. The high ε143Nd of the μ142Nd = 0 Earth is confirmed by the Nd isotope systematics of Archean mantle-derived rocks, which consistently have positive ε143Nd. If the EER formed early after solar system formation (0–70 Ma), continental crust and DMM can be complementary reservoirs with respect to Nd isotopes, with no requirement for significant additional reservoirs. If the EER formed after 70 Ma, then the μ142Nd = 0 Earth must have a bulk ε143Nd more radiogenic than DMM, and additional high-ε143Nd material is required to balance the Nd isotope systematics of the Earth.
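
The μ and ε notations above are simply normalized deviations from reference ratios, in parts per 10^6 and parts per 10^4 respectively. A small Python sketch; the reference values are commonly tabulated approximations (JNdi-1 standard and present-day CHUR), not values reported in this study:

```python
# mu and epsilon notation for Nd isotope deviations.
def mu_142(measured, standard=1.141837):    # approx. 142Nd/144Nd of JNdi-1
    """Deviation from the terrestrial standard in parts per million."""
    return (measured / standard - 1) * 1e6

def epsilon_143(measured, chur=0.512630):   # approx. present-day CHUR 143Nd/144Nd
    """Deviation from CHUR in parts per ten thousand."""
    return (measured / chur - 1) * 1e4

print(mu_142(1.141840))       # a few ppm above the standard
print(epsilon_143(0.513074))  # ~ +8.7, broadly DMM-like
```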

Relevance:

10.00%

Publisher:

Abstract:

An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by the user profiles. Learning to use the user profiles effectively is one of the most challenging tasks when developing an IF system. With the document selection criteria better defined based on users' needs, filtering large streams of information can be more efficient and effective. To learn the user profiles, term-based approaches have been widely used in the IF community because of their simplicity and directness, and they are relatively well established. However, these approaches have problems when dealing with polysemy and synonymy, which often lead to an information overload problem. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving the effectiveness of IF systems. On the other hand, pattern discovery from large data streams is not computationally efficient, and these approaches must also deal with low-frequency pattern issues. The measures used by data mining techniques (for example, "support" and "confidence") to learn the profile have turned out to be unsuitable for filtering, as they can lead to a mismatch problem. This thesis uses rough-set-based (term-based) reasoning and a pattern mining approach as a unified framework for information filtering to overcome the aforementioned problems. The system consists of two stages: a topic filtering stage and a pattern mining stage. The topic filtering stage is intended to minimize information overload by filtering out the most likely irrelevant information based on the user profiles. A novel user-profile learning method and a theoretical model of threshold setting have been developed using rough set decision theory. The second stage (pattern mining) aims at solving the problem of information mismatch. This stage is precision-oriented. A new document-ranking function has been derived by exploiting the patterns in the pattern taxonomy, so that the most likely relevant documents are assigned higher scores. Because relatively few documents remain after the first stage, the computational cost is markedly reduced; at the same time, pattern discovery yields more accurate results. The overall performance of the system is improved significantly. The new two-stage information filtering model has been evaluated by extensive experiments based on well-known IR benchmarking processes, using the latest version of the Reuters dataset, the Reuters Corpus Volume 1 (RCV1). The performance of the new two-stage model was compared with both term-based and data-mining-based IF models. The results demonstrate that the proposed information filtering system significantly outperforms the other IF systems, such as the traditional Rocchio IF model and state-of-the-art term-based models including BM25, Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
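
The two-stage architecture is easy to caricature in code: a recall-oriented topic filter prunes the stream, then a precision-oriented pattern-based ranker orders what remains. The Python sketch below is purely schematic, with toy scorers standing in for the rough-set threshold model and the pattern-taxonomy ranking function described above:

```python
# Schematic two-stage filter: stage 1 discards likely-irrelevant documents
# cheaply (recall-oriented); stage 2 ranks the survivors (precision-oriented).
def two_stage_filter(documents, topic_score, pattern_score, threshold):
    stage1 = [d for d in documents if topic_score(d) >= threshold]
    return sorted(stage1, key=pattern_score, reverse=True)

docs = ["mining gold shares", "data mining patterns", "pattern taxonomy model"]
relevant_terms = {"data", "mining", "pattern", "taxonomy"}   # toy profile
topic = lambda d: len(relevant_terms & set(d.split())) / len(d.split())
pattern = lambda d: d.count("pattern")   # stand-in for taxonomy-based ranking
print(two_stage_filter(docs, topic, pattern, threshold=0.5))
```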

Relevance:

10.00%

Publisher:

Abstract:

We developed orthogonal least-squares techniques for fitting crystalline lens shapes, and used the bootstrap method to determine the uncertainties associated with the estimated vertex radii of curvature and asphericities of five different models. Three existing models were investigated, including one that uses two separate conics for the anterior and posterior surfaces, and two whole-lens models based on a modulated hyperbolic cosine function and on a generalized conic function. Two new models were proposed: one that uses two interdependent conics, and a polynomial-based whole-lens model. The models were used to describe the in vitro shape for a data set of twenty human lenses with ages 7–82 years. The two-conic-surfaces model (7 mm zone diameter) and the interdependent-surfaces model had significantly lower merit functions than the other three models for the data set, indicating that they can most likely describe human lens shape over a wide age range better than the other models (although the two-conic-surfaces model is unable to describe the lens equatorial region). Considerable differences were found between some models regarding estimates of radii of curvature and surface asphericities. The hyperbolic cosine model and the new polynomial-based whole-lens model had the best precision in determining the radii of curvature and surface asphericities of the five models considered. Most models found a significant increase in anterior, but not posterior, radius of curvature with age. Most models found a wide scatter of asphericities, which were usually positive and not significantly related to age. As the interdependent-surfaces model had a lower merit function than the three whole-lens models, there is further scope to develop an accurate model of the complete shape of human lenses of all ages. The results highlight the continued difficulty in selecting an appropriate model for crystalline lens shape.
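
Each conic surface ultimately reduces to fitting a vertex radius R and asphericity Q in the standard conic sag equation z(y) = y² / (R(1 + √(1 − (1+Q)y²/R²))). The Python sketch below fits synthetic data and uses ordinary least squares for brevity; the paper itself uses orthogonal (distance-based) least squares, and all numbers here are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def conic_sag(y, R, Q):
    """Sag of a conic surface with vertex radius R and asphericity Q."""
    # Clip guards against stray negative arguments during optimisation.
    return y**2 / (R * (1 + np.sqrt(np.clip(1 - (1 + Q) * y**2 / R**2,
                                            1e-12, None))))

# Synthetic anterior-lens-like profile: R = 10 mm, Q = -0.5, plus noise
rng = np.random.default_rng(0)
y = np.linspace(-3.5, 3.5, 50)                 # 7 mm zone diameter
z = conic_sag(y, 10.0, -0.5) + rng.normal(0, 0.005, y.size)

(R_fit, Q_fit), _ = curve_fit(conic_sag, y, z, p0=[9.0, 0.0])
print(f"vertex radius {R_fit:.2f} mm, asphericity Q {Q_fit:.2f}")
```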

Relevance:

10.00%

Publisher:

Abstract:

Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of A^(−α/2)b, where A ∈ ℝ^(n×n) is a large, sparse symmetric positive definite matrix and b ∈ ℝ^n is a vector. In particular, we focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and for the solution of fractional-in-space partial differential equations. Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics, as the sparse structure can be exploited, typically through the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically require the solution of dense linear systems, which is impractical for fine discretisations. In this thesis, novel applications of Krylov subspace approximations to matrix functions for both of these problems are investigated. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition A = LL^T is, essentially, a 'square root' of the precision matrix A. Therefore, we can replace the usual sampling method, which forms x = L^(−T)z, with x = A^(−1/2)z, where z is a vector of independent and identically distributed standard normal random variables. Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form ϕ_n = A^(−α/2)b, where A is the finite difference approximation to the Laplacian. Hence both applications require the approximation of f(A)b, where f(t) = t^(−α/2) and A is sparse. In this thesis we compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for approximating matrix functions of this form. A number of new and novel results are presented. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation and give conditions under which the finite difference discretisation can be replaced by other methods for discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form A^(−α/2)b and investigate stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to f(A)b. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and for approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matérn random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
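
As a concrete instance of the central kernel f(A)b with f(t) = t^(−α/2), here is a plain (unrestarted, no shift-and-invert) Lanczos approximation in Python, one of the several variants the thesis compares. The test matrix is a 1D finite-difference Laplacian; the size, subspace dimension, and fractional power are arbitrary illustrative choices:

```python
import numpy as np
from scipy.sparse import diags
from scipy.linalg import eigh_tridiagonal

def lanczos_matfunc(A, b, m, f):
    """m-step Lanczos approximation to f(A) b for symmetric A:
    f(A) b ~= ||b|| Q_m f(T_m) e_1. No reorthogonalisation (sketch only)."""
    n = b.size
    Q = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    q = b / np.linalg.norm(b)
    Q[:, 0] = q
    q_prev = np.zeros(n)
    for j in range(m):
        w = A @ q - (beta[j - 1] * q_prev if j > 0 else 0)
        alpha[j] = q @ w
        w -= alpha[j] * q
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]
            Q[:, j + 1] = q
    theta, S = eigh_tridiagonal(alpha, beta)     # eigendecomposition of T_m
    e1 = np.zeros(m)
    e1[0] = 1.0
    return np.linalg.norm(b) * Q @ (S @ (f(theta) * (S.T @ e1)))

# 1D finite-difference Laplacian (SPD) and f(t) = t^(-alpha/2)
n, alpha_frac = 200, 1.5
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)).tocsr()
b = np.ones(n)
x = lanczos_matfunc(A, b, m=50, f=lambda t: t ** (-alpha_frac / 2))
print(x[:3])
```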

Relevance:

10.00%

Publisher:

Abstract:

The population Monte Carlo algorithm is an iterative importance sampling scheme for solving static problems. We examine the population Monte Carlo algorithm in a simplified setting, a single step of the general algorithm, and study a fundamental problem that occurs in applying importance sampling to high-dimensional problems. The precision of the computed estimate in the simplified setting is measured by the asymptotic variance of the estimate under conditions on the importance function. We demonstrate the exponential growth of the asymptotic variance with the dimension and show that the optimal covariance matrix for the importance function can be estimated in special cases.
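
The exponential degradation with dimension is easy to reproduce empirically. The Python sketch below uses an overdispersed Gaussian proposal against a standard Gaussian target and tracks the effective sample size, a standard proxy for the variance of the importance sampling estimate; the target, proposal, and sample size are illustrative choices, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Importance-sample a d-dimensional standard normal target with an
# overdispersed N(0, 1.5^2 I) proposal: the weight variance (and the
# drop in effective sample size) grows exponentially in d.
for d in (1, 5, 20, 50):
    x = rng.normal(0.0, 1.5, size=(n, d))
    # log target - log proposal, up to additive constants that cancel below
    log_w = -0.5 * (x**2).sum(axis=1) + 0.5 * ((x / 1.5)**2).sum(axis=1)
    w = np.exp(log_w - log_w.max())          # stabilise before exponentiating
    ess = w.sum()**2 / (w**2).sum()          # effective sample size
    print(f"d={d:3d}  effective sample size {ess:10.1f} of {n}")
```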

Relevance:

10.00%

Publisher:

Abstract:

Lamb wave propagation in composite materials has been studied extensively since it was first observed in 1982. In this paper, we show a procedure for simulating the propagation of Lamb waves in composite laminates using a two-dimensional model in ANSYS. This is done by simulating the Lamb waves propagating along the plane of the structure in the form of a time-dependent force excitation. An 8-layer carbon fibre reinforced plastic (CFRP) laminate is modelled as a transversely isotropic and dissipative medium, and the effect of flaws is analyzed with respect to defects induced between various layers of the composite laminate. This effort is the basis for the future development of a 3D model for similar applications.
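
In such simulations the time-dependent force excitation is commonly a windowed tone burst applied at a node. A minimal Python sketch of generating one; the centre frequency, cycle count, and sampling rate are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def hann_tone_burst(fc=200e3, cycles=5, fs=20e6):
    """N-cycle sine tone burst under a Hann window, a common Lamb-wave
    excitation signal (all parameter values here are illustrative)."""
    t = np.arange(0, cycles / fc, 1 / fs)
    return t, np.sin(2 * np.pi * fc * t) * np.hanning(t.size)

t, force = hann_tone_burst()
# Each (t, force) sample pair would be applied as a load step in the FE model.
print(t.size, force.max())
```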

Relevance:

10.00%

Publisher:

Abstract:

With the phenomenal growth of electronic data and information, there are many demands for the development of efficient and effective systems (tools) to perform data mining tasks on multidimensional databases. Association rules describe associations between items in the same transaction (intra) or in different transactions (inter). Association mining attempts to find interesting or useful association rules in databases: this is the crucial issue for the application of data mining in the real world. Association mining can be used in many application areas, such as the discovery of associations between customers' locations and shopping behaviours in market basket analysis. Association mining includes two phases. The first phase, called pattern mining, is the discovery of frequent patterns. The second phase, called rule generation, is the discovery of interesting and useful association rules among the discovered patterns. The first phase, however, often takes a long time to find all frequent patterns, and these also include much noise. The second phase is also a time-consuming activity that can generate many redundant rules. To improve the quality of association mining in databases, this thesis provides an alternative technique, granule-based association mining, for knowledge discovery in databases, where a granule refers to a predicate that describes the common features of a group of transactions. The new technique first transfers transaction databases into basic decision tables, then uses multi-tier structures to integrate pattern mining and rule generation in one phase for both intra- and inter-transaction association rule mining. To evaluate the proposed technique, this research defines the concept of meaningless rules by considering the correlations between data dimensions for intra-transaction association rule mining. It also uses precision to evaluate the effectiveness of inter-transaction association rules. The experimental results show that the proposed technique is promising.
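
A granule, as used above, is a predicate covering the group of transactions that share the same values on selected condition attributes. A small Python sketch of building granules from a toy transaction table; the attribute names and data are invented, and the multi-tier structure itself is not reproduced here:

```python
from collections import defaultdict

def granulate(transactions, attributes):
    """Group transactions into granules: transactions with identical values
    on the chosen condition attributes fall under the same predicate."""
    granules = defaultdict(list)
    for tid, record in transactions.items():
        key = tuple(record[a] for a in attributes)   # the granule's predicate
        granules[key].append(tid)
    return dict(granules)

transactions = {
    1: {"location": "city",  "buys_bread": True,  "buys_milk": True},
    2: {"location": "city",  "buys_bread": True,  "buys_milk": False},
    3: {"location": "rural", "buys_bread": False, "buys_milk": True},
}
print(granulate(transactions, ["location", "buys_bread"]))
```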

Relevance:

10.00%

Publisher:

Abstract:

The increasing diversity of the Internet has created a vast number of multilingual resources on the Web. A huge number of these documents are written in languages other than English. Consequently, the demand for searching in non-English languages is growing exponentially, and it is desirable that a search engine can search for information over collections of documents in other languages. This research investigates techniques for developing high-quality Chinese information retrieval systems. A distinctive feature of Chinese text is that a Chinese document is a sequence of Chinese characters with no space or boundary between Chinese words. This feature makes Chinese information retrieval more difficult, since a retrieved document containing the query term as a sequence of Chinese characters may not actually be relevant to the query, because that character sequence may not be a valid Chinese word in the document. On the other hand, a document that is actually relevant may not be retrieved because it does not contain the query sequence but contains other relevant words. In this research, we propose two approaches to deal with these problems. In the first approach, we propose a hybrid Chinese information retrieval model that incorporates word-based techniques into the traditional character-based techniques. The aim of this approach is to investigate the influence of Chinese segmentation on the performance of Chinese information retrieval. Two ranking methods are proposed to rank retrieved documents based on relevancy to the query, calculated by combining character-based ranking and word-based ranking. Our experimental results show that Chinese segmentation can improve the performance of Chinese information retrieval, but the improvement is not significant if it incorporates only Chinese segmentation with the traditional character-based approach. In the second approach, we propose a novel query expansion method that applies text mining techniques to find the most relevant words with which to extend the query. Unlike most existing query expansion methods, which generally select highly frequent indexing terms from the retrieved documents to expand the query, our approach utilizes text mining techniques to find patterns in the retrieved documents that highly correlate with the query term, and then uses the relevant words in those patterns to expand the original query. This research project develops and implements a Chinese information retrieval system for evaluating the proposed approaches. There are two stages in the experiments. The first stage investigates whether high-accuracy segmentation can improve Chinese information retrieval. In the second stage, the text-mining-based query expansion approach is implemented, and a further experiment compares its performance with the standard Rocchio approach. The NTCIR5 Chinese collections are used in the experiments. The experimental results show that by incorporating the text-mining-based query expansion with the hybrid model, significant improvement is achieved in both precision and recall.
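
The hybrid model's core move is to interpolate a character-based relevance score with a word-based score computed over segmented text. The Python sketch below is schematic only: the two scorers, the interpolation weight, and the whitespace "segmenter" are placeholders (real Chinese text needs a proper word segmenter), and none of it reproduces the thesis's actual ranking functions:

```python
# Interpolate a character-based score with a word-based (segmented) score.
def hybrid_score(doc, query, char_score, word_score, weight=0.5):
    return weight * char_score(doc, query) + (1 - weight) * word_score(doc, query)

def char_score(doc, query):
    """Toy character overlap: fraction of query characters found in the doc."""
    return len(set(doc) & set(query)) / len(set(query))

def word_score(doc, query, segment=lambda s: s.split()):
    """Toy word overlap; `segment` is a stand-in for a Chinese word segmenter."""
    q = set(segment(query))
    return len(set(segment(doc)) & q) / len(q)

print(hybrid_score("information retrieval system", "information system",
                   char_score, word_score))
```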

Relevance:

10.00%

Publisher:

Abstract:

Objective: To determine whether there are clinical and public health dilemmas resulting from the reproducibility of routine vitamin D assays. Methods: Blinded agreement studies were conducted in eight clinical laboratories in Australasia and Canada using two commonly used assays to measure serum 25-hydroxyvitamin D (25(OH)D) levels (the DiaSorin radioimmunoassay (RIA) and the DiaSorin LIAISON® assay). Results: Only one laboratory measured 25(OH)D with excellent precision. Replicate 25(OH)D measurements varied by up to 97%, and 15% of paired results differed by more than 50%. Thirteen percent of subjects received one result indicating insufficiency (25–50 nmol/L) and another suggesting adequacy (>50 nmol/L). Agreement ranged from poor to excellent for laboratories using the manual RIA, while the precision of the semi-automated LIAISON® system was consistently poor. Conclusions: Recent interest in the relevance of vitamin D to human health has increased the demand for 25(OH)D testing and the associated costs. Our results suggest clinicians and public health authorities are making decisions about treatment or changes to public health policy based on imprecise data. Clinicians, researchers and policy makers should be made aware of the imprecision of current 25(OH)D testing so that they exercise caution when using these assays in clinical practice, and when interpreting the findings of epidemiological studies based on vitamin D levels measured with these assays. Development of a rapid, reproducible, accurate and robust assay should be a priority, given the interest in population-based screening programs and in research to inform public health policy about the amount of sun exposure required for human health. In the interim, 25(OH)D results should routinely include a statement of measurement uncertainty.