236 results for Precision Xtra®


Relevance: 10.00%

Abstract:

A method for the determination of tricyclazole in water using solid-phase extraction (SPE) and high-performance liquid chromatography (HPLC) with UV detection at 230 nm, with a mobile phase of acetonitrile:water (20:80, v/v), was developed. Two types of solid-phase sorbent were compared: the C18 sorbent of the Supelclean ENVI-18 cartridge and the styrene-divinylbenzene copolymer sorbent of the Sep-Pak PS2-Plus cartridge. The Sep-Pak PS2-Plus cartridges proved more suitable for extracting tricyclazole from water samples than the Supelclean ENVI-18 cartridges, and with this cartridge both methanol and ethyl acetate gave good elution results. The method was validated with good linearity and a limit of detection of 0.008 µg L-1, based on a 500-fold concentration through the SPE procedure. Recoveries were stable at around 80%, and precision ranged from 1.1% to 6.0% across the fortified concentrations. The validated method was also applied to measure tricyclazole concentrations in real paddy water.
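As a rough illustration of how recovery and precision figures of this kind are typically derived from replicate fortified samples, the following Python sketch computes mean recovery and relative standard deviation (RSD); the spike level and measured concentrations are invented for illustration, not taken from the study.

```python
import statistics

# Hypothetical measured concentrations (µg/L) from replicate water samples
# fortified at a nominal 0.10 µg/L; values are invented for illustration.
spiked_level = 0.10
measured = [0.081, 0.079, 0.083, 0.080, 0.078]

recoveries = [100.0 * m / spiked_level for m in measured]
mean_recovery = statistics.mean(recoveries)

# Precision is commonly reported as the relative standard deviation (RSD).
rsd = 100.0 * statistics.stdev(measured) / statistics.mean(measured)

print(f"Mean recovery: {mean_recovery:.1f}%")   # ~80%, as in the abstract
print(f"Precision (RSD): {rsd:.1f}%")
```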

Relevance: 10.00%

Abstract:

Anti-cancer drug-loaded nanoparticles (NPs), and the encapsulation of NPs in colon-targeted delivery systems, show potential for increasing the local drug concentration in the colon, leading to improved treatment of colorectal cancer. To investigate the potential of NP-based strategies for colon-specific delivery, two formulations, free Eudragit® NPs and enteric-coated NP-loaded chitosan–hypromellose microcapsules (MCs), were fluorescently labelled and their tissue distribution in mice after oral administration was monitored by multispectral small-animal imaging. The free NPs showed a shorter transit time through the mouse digestive tract than the MCs, with extensive excretion of NPs in faeces at 5 h. Conversely, the MCs showed complete NP release in the lower region of the mouse small intestine at 8 h post-administration. Overall, encapsulation of NPs in MCs resulted in a higher colonic NP intensity from 8 h to 24 h post-administration compared to the free NPs, owing to a NP ‘guarding’ effect of the MCs during transit along the mouse gastrointestinal tract, which decreased NP excretion in faeces. These imaging data revealed that this widely utilised colon-targeting MC formulation lacked site-precision in releasing its NP load in the colon, but the increased residence time of the NPs in the lower gastrointestinal tract suggests that it is still useful for localised release of chemotherapeutics compared with NP administration alone. In addition, both formulations resided in the stomach of mice at considerable concentrations over 24 h. Adhesion of NP- or MC-based oral delivery systems to the gastric mucosa may therefore be problematic for colon-specific delivery of the cargo and should be carefully investigated in any full evaluation of particulate delivery systems.

Relevance: 10.00%

Abstract:

Within online learning communities, receiving timely and meaningful insights into the quality of learning activities is an important part of an effective educational experience. Commonly adopted methods – such as the Community of Inquiry framework – rely on manual coding of online discussion transcripts, which is a costly and time-consuming process. Several efforts are underway to enable automated classification of online discussion messages using supervised machine learning, which would allow real-time analysis of interactions occurring within online learning communities. This paper investigates the importance of incorporating features that utilise the structure of online discussions for classifying "cognitive presence" – the central dimension of the Community of Inquiry framework, focusing on the quality of students' critical thinking within online learning communities. We implemented a Conditional Random Field classification solution that incorporates structural features of the discussions. Our approach improves classification accuracy by 5.8% over existing techniques when tested on the same dataset, with precision and recall of 0.630 and 0.504, respectively.
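As a rough sketch of how a discussion thread can be treated as a sequence-labelling problem with structural features, the snippet below uses the sklearn-crfsuite library; the feature set, label names and library choice are illustrative assumptions, not the paper's implementation.

```python
# pip install sklearn-crfsuite
import sklearn_crfsuite

def message_features(thread, i):
    """Structural features for the i-th message in a discussion thread.
    The feature choices here are illustrative, not the paper's exact set."""
    msg = thread[i]
    feats = {
        "position_in_thread": i,
        "is_thread_starter": i == 0,
        "reply_depth": msg["depth"],
        "num_words": len(msg["text"].split()),
    }
    if i > 0:  # structural context: the message being replied to
        feats["prev_depth"] = thread[i - 1]["depth"]
    return feats

# Each thread is a sequence of messages; labels are cognitive-presence phases.
threads = [[{"text": "How do we even start this problem?", "depth": 0},
            {"text": "Maybe compare the two approaches first...", "depth": 1}]]
labels = [["triggering", "exploration"]]

X = [[message_features(t, i) for i in range(len(t))] for t in threads]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=100)
crf.fit(X, labels)
print(crf.predict(X))
```

The CRF's advantage over per-message classifiers is that it scores label transitions along the thread, so the predicted phase of one message can inform its neighbours.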

Relevance: 10.00%

Abstract:

Natural nanopatterned surfaces (nNPS) present on insect wings have demonstrated bactericidal activity [1, 2]. Fabricated nanopatterned surfaces (fNPS) derived from characterisation of these wings have also shown superior bactericidal activity [2]. However, bactericidal NPS topologies vary in both the geometry and the chemical characteristics of the individual features across different insects and fabricated surfaces, making it difficult to ascertain the optimum geometrical parameters underlying bactericidal activity. This situation calls for the adoption of new and emerging techniques capable of fabricating and characterising structures comparable to nNPS from biocompatible materials. In this research, a CAD-drawn nNPS representing an area of 10 μm × 10 μm was fabricated on fused silica glass with a Nanoscribe Photonic Professional GT 3D laser lithography system using two-photon polymerization lithography. The glass was cleaned three times with acetone and isopropyl alcohol, and a drop of IP-DIP photoresist from Nanoscribe GmbH was cast onto the glass slide prior to patterning. The photosensitive IP-DIP resist was polymerized with high precision using a 780 nm laser to form the surface nanopatterns. Both moving-beam fixed-sample (MBFS) and fixed-beam moving-sample (FBMS) fabrication approaches were tested to determine the better approach for precise fabrication of the required nanotopological pattern. The laser power was also varied from 3 mW to 10 mW to determine the optimum power for polymerizing the photoresist when fabricating fNPS...

Relevance: 10.00%

Abstract:

Aerial surveys conducted using manned or unmanned aircraft with customized camera payloads can generate large numbers of images. Manual review of these images to extract data is prohibitive in terms of time and financial resources, providing a strong incentive to automate the process using computer vision systems. Potential applications for such automated systems include surveillance and monitoring, precision agriculture, law enforcement, asset inspection, and wildlife assessment. In this paper, we present an efficient machine learning system for automating the detection of marine species in aerial imagery. The effectiveness of our approach can be credited to the combination of a well-suited region proposal method and the use of Deep Convolutional Neural Networks (DCNNs). In comparison to previous algorithms designed for the same purpose, we have been able to dramatically improve recall to more than 80% and improve precision to 27% by using DCNNs as the core approach.
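A minimal sketch of the propose-then-classify pattern the abstract describes, using a generic pretrained CNN as the classifier; the tiling heuristic, file name and confidence threshold are placeholders, not the authors' region proposal method or trained network.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Placeholder proposal step: the paper uses a well-suited region proposal
# method; here we simply tile the image into fixed-size windows.
def tile_proposals(width, height, win=224, stride=224):
    for x in range(0, width - win + 1, stride):
        for y in range(0, height - win + 1, stride):
            yield (x, y, x + win, y + win)

classifier = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
classifier.eval()
preprocess = T.Compose([T.Resize(224), T.ToTensor()])

image = Image.open("aerial_frame.jpg").convert("RGB")  # hypothetical file
detections = []
with torch.no_grad():
    for box in tile_proposals(*image.size):
        crop = preprocess(image.crop(box)).unsqueeze(0)
        probs = torch.softmax(classifier(crop), dim=1)
        score, cls = probs.max(dim=1)
        if score.item() > 0.5:  # arbitrary confidence threshold
            detections.append((box, cls.item(), score.item()))
print(detections)
```

In practice the classifier would be fine-tuned on labelled marine-species crops; the pretrained ImageNet head above only stands in for that step.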

Relevance: 10.00%

Abstract:

Cancer is fundamentally a genomic disease caused by mutations or rearrangements in the DNA or epigenetic machinery of a patient. An emerging field in cancer treatment targets key aberrations arising from the mutational landscape of an individual patient's disease rather than employing a cancer-wide cytotoxic therapy approach. In prostate cancer in particular, where there is an observed variation in response to standard treatments between patients with disease of a similar pathological stage and grade, mutation-directed treatment may grow to be a viable tool for clinicians seeking to tailor more effective treatments. This review describes a number of mutations across multiple forms of cancer that have been successfully antagonised by targeted therapeutics, covering their identification, the development of targeted compounds to combat them, and the emergence of resistance to these therapies. It then examines these same mutations in the treatment and management of prostate cancer: the prevalence of targetable mutations in prostate cancer, recent clinical trials of targeted agents, and the potential and limitations of their use.

Relevance: 10.00%

Abstract:

In this paper, a new high-precision focused word sense disambiguation (WSD) approach is proposed, which not only attempts to identify the proper sense for a word but also provides a probabilistic evaluation of the identification confidence at the same time. A novel Instance Knowledge Network (IKN) is built to generate and maintain semantic knowledge at the word, type synonym set and instance levels. Related algorithms based on graph matching are developed to train the IKN with probabilistic knowledge and to use it for probabilistic word sense disambiguation. Based on the Senseval-3 all-words task, we run extensive experiments to show the performance gains across different precision ranges and the soundness of the probability-based automatic confidence evaluation of disambiguation. We combine our WSD algorithm with each of the five best-performing WSD algorithms from the Senseval-3 all-words task; the combined algorithms all outperform their original counterparts.
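A toy sketch of the kind of confidence-gated combination the abstract alludes to: defer to a base WSD system unless the high-precision system is sufficiently confident. The threshold, function signatures and sense keys below are invented for illustration; the paper's actual combination mechanism may differ.

```python
def combine_wsd(word, context, precise_wsd, base_wsd, threshold=0.9):
    """Use the high-precision system's sense only when its own
    confidence estimate clears the threshold; otherwise fall back
    to the base system. Both systems are placeholders here.
    """
    sense, confidence = precise_wsd(word, context)  # (sense, probability)
    if sense is not None and confidence >= threshold:
        return sense
    return base_wsd(word, context)

# Hypothetical usage with stub systems and WordNet-style sense keys:
precise = lambda w, c: ("bank%1:17:01::", 0.95)
base = lambda w, c: "bank%1:14:00::"
print(combine_wsd("bank", "erosion along the river bank", precise, base))
```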

Relevance: 10.00%

Abstract:

Australian farmers have used precision agriculture technology for many years, via ground-based and satellite systems. However, these systems require vehicles to cover a wide area, which can be time-consuming and cost-ineffective, and satellite imagery may not be accurate enough for analysis. Low-cost unmanned aerial vehicles (UAVs) present an effective method of analysing large plots of agricultural land. As a UAV can travel long distances and fly over multiple plots, it allows more data to be captured by a sampling device, such as a multispectral camera, and analysed thereafter. This would allow farmers to analyse the health of their crops and focus their efforts on the areas that need attention. This project evaluates a multispectral camera for use on a UAV in agricultural applications.
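Crop-health analysis from multispectral imagery commonly uses vegetation indices such as NDVI, computed from the red and near-infrared bands. The abstract does not state which index the project used, so the numpy sketch below is a generic illustration with synthetic band data.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Values near 1 indicate dense healthy vegetation; near 0, bare soil."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Tiny synthetic 2x2 band images standing in for camera output.
nir_band = np.array([[200, 180], [60, 220]], dtype=np.uint16)
red_band = np.array([[50, 60], [55, 40]], dtype=np.uint16)

index = ndvi(nir_band, red_band)
print(np.round(index, 2))                        # per-pixel NDVI
print("stressed pixels:", np.sum(index < 0.3))   # arbitrary stress threshold
```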

Relevance: 10.00%

Abstract:

Developing accurate and reliable crop detection algorithms is an important step towards harvesting automation in horticulture. This paper presents a novel approach to visual detection of highly occluded fruits. We use a conditional random field (CRF) on multi-spectral image data (colour and near-infrared reflectance, NIR) to model two classes: crop and background. To describe these two classes, we explore a range of visual-texture features, including local binary patterns, histograms of oriented gradients, and learned auto-encoder features. The proposed methods are evaluated using hand-labelled images from a dataset captured on a commercial capsicum farm. Experimental results are presented, and performance is evaluated in terms of the Area Under the Curve (AUC) of the precision-recall curves. Our current results achieve a maximum performance of 0.81 AUC when combining all of the texture features in conjunction with colour information.
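For reference, the area under a precision-recall curve of the kind used for evaluation here can be computed with scikit-learn as follows; the labels and scores below are synthetic stand-ins for per-pixel ground truth and classifier output.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, auc

# Synthetic per-pixel ground truth (1 = crop, 0 = background) and
# classifier scores, standing in for CRF marginals on real images.
y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 1])
y_score = np.array([0.9, 0.8, 0.7, 0.65, 0.5, 0.4, 0.35, 0.3, 0.2, 0.85])

precision, recall, _ = precision_recall_curve(y_true, y_score)
pr_auc = auc(recall, precision)
print(f"Precision-recall AUC: {pr_auc:.2f}")
```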

Relevance: 10.00%

Abstract:

Commercial environments may receive only a fraction of the genetic gains for growth rate predicted from the selection environment. This fraction results from undesirable genotype-by-environment interactions (G x E) and is measured by the genetic correlation (r(g)) of growth between environments. Rapid estimates of genetic correlation achieved in one generation are notoriously difficult to obtain with precision. A new design is proposed in which genetic correlations can be estimated using artificial mating with cryopreserved semen and unfertilised eggs stripped from a single female. We compare a traditional phenotypic analysis of growth to a threshold model in which only the largest fish are genotyped for sire identification. The threshold model was robust to differences in family mortality of up to 30%. The design is unique in that it negates potential re-ranking of families caused by an interaction between common maternal environmental effects and the growing environment. It is suitable for rapid assessment of G x E over one generation, with a true genetic correlation of 0.70 yielding standard errors as low as 0.07. Different design scenarios were tested for bias and accuracy across a range of heritability values, numbers of half-sib families, numbers of progeny within each full-sib family, numbers of fish genotyped and stocked, differing family survival rates, and various simulated genetic correlation levels.
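A stripped-down sketch of the style of simulation such a design study implies: draw family effects for two environments from a bivariate normal with a chosen true genetic correlation, add residual noise for progeny, and estimate the correlation from family means. All parameter values are invented, and this is not the paper's simulation code (which uses a threshold/genotyping design rather than full phenotyping).

```python
import numpy as np

rng = np.random.default_rng(1)
n_families, n_progeny = 100, 30
true_rg, sigma_family, sigma_resid = 0.70, 1.0, 2.0

# Family (sire) effects in the two environments, correlated at true_rg.
cov = sigma_family**2 * np.array([[1.0, true_rg], [true_rg, 1.0]])
family_effects = rng.multivariate_normal([0.0, 0.0], cov, size=n_families)

# Progeny phenotypes: family effect plus residual, separately per environment.
means = np.empty((n_families, 2))
for env in range(2):
    phenos = (family_effects[:, env][:, None]
              + rng.normal(0.0, sigma_resid, size=(n_families, n_progeny)))
    means[:, env] = phenos.mean(axis=1)

# Correlation of family means approximates (an attenuated version of) rg.
est = np.corrcoef(means[:, 0], means[:, 1])[0, 1]
print(f"true rg = {true_rg}, estimated from family means = {est:.2f}")
```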

Relevance: 10.00%

Abstract:

We consider estimating total pollutant load from frequent flow data and less frequent concentration data. Numerous load estimation methods are available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported, which makes interpretation difficult and renders the estimation of trends or the determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates that minimize the biases and make use of informative predictive variables. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized rating-curve approach with additional predictors that capture unique features in the flow data, such as the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. Adding this information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors, and the method can also incorporate measurement error incurred through the sampling of flow. We illustrate this approach for two rivers delivering to the Great Barrier Reef, Queensland, Australia. One dataset, from the Burdekin River, consists of total suspended sediment (TSS), nitrogen oxides (NOx) and gauged flow for 1997; the other, from the Tully River, covers July 2000 to June 2008. For NOx in the Burdekin, the new estimates are very similar to the ratio estimates even when there is no relationship between concentration and flow. For the Tully dataset, however, incorporating the additional predictive variables, namely the discounted flow and the flow phase (rising or receding), substantially improved the model fit and thus the certainty with which the load is estimated.
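At its core, the load estimate is a numerical integration of flow times predicted concentration over time. A minimal sketch, with interval length, flows and predicted concentrations invented for illustration:

```python
import numpy as np

# Flow (m^3/s) at regular 10-minute intervals, and concentrations (mg/L)
# predicted at the same time points by some rating-curve model.
dt = 600.0  # seconds per interval
flow = np.array([12.0, 35.0, 80.0, 55.0, 20.0])
pred_conc = np.array([5.0, 9.0, 14.0, 10.0, 6.0])

# m^3/s * mg/L equals g/s, so flow * conc * dt sums to grams;
# dividing by 1e6 converts to tonnes.
load_tonnes = np.sum(flow * pred_conc * dt) / 1e6
print(f"Estimated load: {load_tonnes:.3f} t")
```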

Relevance: 10.00%

Abstract:

Numerous load estimation methods are available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported, which makes interpretation difficult and renders the estimation of trends or the determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized in four steps:
(i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;
(ii) output the predicted flow rates, as in (i), at the concentration sampling times, if the corresponding flow rates were not collected;
(iii) establish a predictive model for the concentration data that incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and
(iv) sum the products of the predicted flow and the predicted concentration over the regular time intervals to obtain an estimate of the load.
The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in model errors resulting from intensive sampling during floods. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors, and the method can also incorporate measurement error incurred through the sampling of flow. We illustrate this approach using concentrations of total suspended sediment (TSS) and nitrogen oxides (NOx) and gauged flow data from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from 2 to 10 times, indicating severe bias. As expected, the traditional average and extrapolation methods produce much higher estimates than those obtained when sampling bias is taken into account.
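A compact sketch of step (iii), a rating-curve style concentration model with extra flow-derived predictors. The predictor set and data are illustrative, and the "discounted flow" is computed here as a simple exponentially discounted running sum, which is an assumption about its form rather than the paper's exact definition.

```python
import numpy as np

def discounted_flow(flow, discount=0.95):
    """Exponentially discounted cumulative flow, a rough proxy for
    constituent exhaustion over a flood event (form assumed here)."""
    out = np.empty_like(flow)
    acc = 0.0
    for i, q in enumerate(flow):
        acc = discount * acc + q
        out[i] = acc
    return out

flow = np.array([10.0, 40.0, 90.0, 70.0, 30.0, 15.0])
conc = np.array([4.0, 12.0, 18.0, 11.0, 6.0, 4.5])  # sampled concentrations

rising = np.r_[1.0, np.diff(flow) > 0].astype(float)  # hydrograph phase
X = np.column_stack([np.ones_like(flow),       # intercept
                     np.log(flow),             # classic rating-curve term
                     rising,                   # rise vs. recession indicator
                     discounted_flow(flow)])   # exhaustion proxy
y = np.log(conc)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # fit log-linear model
pred_conc = np.exp(X @ beta)                   # back-transformed predictions
print(np.round(pred_conc, 1))
```

The fitted model would then be used to predict concentration at every regular time interval in step (iii), feeding the flow-times-concentration summation of step (iv).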

Relevance: 10.00%

Abstract:

In treatment comparison experiments, the treatment responses are often correlated with concomitant variables that can be measured before or at the beginning of the experiment. In this article, we propose schemes for assigning experimental units that may greatly improve the efficiency of the comparison in such situations. The proposed schemes are based on general ranked set sampling. Their relative efficiency and cost-effectiveness are studied and compared. Some of the proposed schemes are found to be always more efficient than the traditional simple random assignment scheme at the same total cost. Numerical studies show promising results for the proposed schemes.
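A small sketch of the classical ranked set sampling step such schemes build on: draw k sets of k units, rank each set by the concomitant variable, and keep the i-th ranked unit from the i-th set. The assignment step layered on top of this varies by scheme and is not shown; the units and concomitant below are hypothetical.

```python
import random

def ranked_set_sample(units, concomitant, k, rng=random.Random(0)):
    """One RSS cycle: partition k*k shuffled units into k sets; from the
    i-th set, keep the unit with the i-th smallest concomitant value."""
    pool = list(units)
    rng.shuffle(pool)
    selected = []
    for i in range(k):
        judged = sorted(pool[i * k:(i + 1) * k], key=concomitant)
        selected.append(judged[i])  # i-th order statistic from set i
    return selected

# Hypothetical units with a cheap-to-measure concomitant (e.g. baseline weight).
units = [{"id": j, "baseline": random.Random(j).gauss(50, 10)}
         for j in range(9)]
sample = ranked_set_sample(units, lambda u: u["baseline"], k=3)
print([u["id"] for u in sample])
```

The selected units span the range of the concomitant variable by construction, which is what drives the efficiency gain over simple random assignment.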

Relevance: 10.00%

Abstract:

This paper proposes solutions to three issues pertaining to the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov chain Monte Carlo (MCMC) sampling techniques, and the related label-switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via the Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test-based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Zmix can accurately estimate the number of components, posterior parameter estimates and allocation probabilities given a sufficiently large sample size. The results reflect uncertainty in the final model and report the range of candidate models and their respective estimated probabilities from a single run. Label switching is resolved with Zswitch, a computationally lightweight method developed for overfitted mixtures that exploits the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies illustrate Zmix and Zswitch, along with three case studies from the literature. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.
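The overfitting idea rests on a sparse Dirichlet prior over the mixture weights: with concentration parameters well below one, superfluous components are pushed toward zero weight. A quick numpy illustration of that prior behaviour (the α value and number of components are illustrative, not Zmix's defaults):

```python
import numpy as np

rng = np.random.default_rng(7)
k_fit = 10      # deliberately overfitted number of components
alpha = 0.01    # small concentration -> extra groups get ~zero weight

draws = rng.dirichlet(np.full(k_fit, alpha), size=5)
for w in draws:
    # Most mass concentrates on a few components; the rest are negligible.
    print(np.round(np.sort(w)[::-1], 3))
```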

Relevance: 10.00%

Abstract:

Pseudo-marginal methods such as the grouped independence Metropolis-Hastings (GIMH) and Markov chain within Metropolis (MCWM) algorithms have been introduced in the literature as an approach for performing Bayesian inference in latent variable models. These methods replace intractable likelihood calculations with unbiased estimates within Markov chain Monte Carlo algorithms. The GIMH method has the posterior of interest as its limiting distribution, but suffers from poor mixing if it is too computationally intensive to obtain high-precision likelihood estimates. The MCWM algorithm has better mixing properties but less theoretical support. In this paper we propose to use Gaussian processes (GPs) to accelerate the GIMH method, using a short pilot run of MCWM to train the GP. Our new method, GP-GIMH, is illustrated on simulated data from a stochastic volatility model and a gene network model.
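A bare-bones sketch of the pseudo-marginal mechanism common to these methods: a Metropolis-Hastings chain in which the intractable likelihood is replaced by an unbiased Monte Carlo estimate, with the current state's estimate reused (GIMH-style) rather than refreshed at every iteration (which would be MCWM). The toy latent-variable model, flat prior and tuning values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(1.0, 1.0, size=50)

def lik_estimate(theta, n_mc=20):
    """Unbiased (toy) likelihood estimate: average over Monte Carlo draws
    of a latent variable z ~ N(0, 0.5). The model is invented here."""
    z = rng.normal(0.0, 0.5, size=n_mc)
    per_z = [np.exp(np.sum(-0.5 * (data - (theta + zi)) ** 2)) for zi in z]
    return np.mean(per_z)

theta, lhat = 0.0, lik_estimate(0.0)
samples = []
for _ in range(2000):
    prop = theta + rng.normal(0.0, 0.2)  # symmetric random-walk proposal
    lhat_prop = lik_estimate(prop)
    # GIMH: reuse lhat for the current state instead of re-estimating it;
    # with a flat prior, the acceptance ratio is just the estimate ratio,
    # and the chain targets the exact posterior despite the noisy estimates.
    if rng.uniform() < lhat_prop / max(lhat, 1e-300):
        theta, lhat = prop, lhat_prop
    samples.append(theta)
print(f"posterior mean estimate: {np.mean(samples[500:]):.2f}")
```

The GP acceleration proposed in the paper would replace repeated calls to `lik_estimate` with predictions from a Gaussian process trained on a short MCWM pilot run.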