832 results for accuracy analysis
Abstract:
The cortisol awakening response (CAR) is typically measured in the domestic setting. Moderate inaccuracy in sample timing has been shown to result in erroneous CAR estimates, and such inaccuracy has been shown to partially explain inconsistency in the CAR literature. The need for more reliable measurement of the CAR has recently been highlighted in expert consensus guidelines, which pointed out that less than 6% of published studies provided electronic monitoring of saliva sampling time in the post-awakening period. Analyses of a merged data set of published studies from our laboratory are presented. To qualify for selection, both the time of awakening and collection of the first sample must have been verified by electronic monitoring, and sampling must have commenced within 15 min of awakening. Participants (n = 128) were young (median age of 20 years) and healthy. Cortisol values were determined over the 45 min post-awakening period on 215 sampling days. On 127 days, the delay between verified awakening and collection of the first sample was less than 3 min (‘no delay’ group); on 45 days there was a delay of 4–6 min (‘short delay’ group); on 43 days the delay was 7–15 min (‘moderate delay’ group). Cortisol values for verified sampling times mapped accurately onto the typical post-awakening cortisol growth curve, regardless of whether sampling deviated from the desired protocol timings. This supports incorporating rather than excluding delayed data (up to 15 min) in CAR analyses. For this population the fitted cortisol growth curve equation predicted a mean cortisol awakening level of 6 nmol/l (±1 for 95% CI) and a mean CAR rise of 6 nmol/l (±2 for 95% CI). We also modelled the relationship between real delay and CAR magnitude when the CAR is calculated erroneously by incorrectly assuming adherence to protocol time. Findings supported a curvilinear hypothesis for the effect of sample delay on the CAR: short delays of 4–6 min between awakening and commencement of saliva sampling resulted in an overestimated CAR, whereas moderate delays of 7–15 min were associated with an underestimated CAR. The findings emphasize the need to employ electronic monitoring of sampling accuracy when measuring the CAR in the domestic setting.
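As a minimal, hedged illustration of the growth-curve idea in this abstract, the Python sketch below pools samples by their verified time since awakening, fits a simple quadratic curve, and reads off an awakening level and CAR rise. The data points, the quadratic form and the use of numpy are assumptions for illustration only, not the study's fitted model or measurements.

```python
import numpy as np

# Hypothetical (verified minutes since awakening, cortisol nmol/l) pairs,
# standing in for pooled samples indexed by their verified sampling time.
minutes = np.array([0, 2, 5, 9, 14, 15, 22, 30, 33, 41, 45, 48], dtype=float)
cortisol = np.array([5.8, 6.1, 6.9, 8.0, 9.3, 9.5, 10.8, 11.9, 12.0, 11.6, 11.2, 10.9])

# Quadratic growth curve: cortisol(t) ~= b2*t^2 + b1*t + b0
b2, b1, b0 = np.polyfit(minutes, cortisol, deg=2)

awakening_level = b0                          # predicted level at t = 0
t_peak = -b1 / (2.0 * b2)                     # vertex of the fitted parabola
peak_level = np.polyval([b2, b1, b0], t_peak)
car_rise = peak_level - awakening_level

print(f"awakening level ~ {awakening_level:.1f} nmol/l, "
      f"CAR rise ~ {car_rise:.1f} nmol/l (peak near {t_peak:.0f} min)")
```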
Abstract:
Molecular probe-based methods (fluorescence in situ hybridisation, or FISH, and next-generation sequencing, or NGS) have proved successful in improving both the efficiency and accuracy of identifying microorganisms, especially those that lack distinct morphological features, such as picoplankton. However, FISH methods have the major drawback that they can only identify one or a few species at a time because of the limited number of available fluorochromes that can be added to the probe. Although the length of sequence that can be obtained is continually improving, NGS still requires a great deal of handling time, its analysis can still take months, and with a PCR step it will always be sensitive to natural enzyme inhibitors. With DNA microarrays, it is possible to identify large numbers of taxa on a single glass slide, the so-called phylochip, which can be semi-quantitative. This review details the major steps in probe design, the design and production of a phylochip, and validation of the array. Finally, major microarray studies of the phytoplankton community are reviewed to demonstrate the scope of the method.
Abstract:
A compositional multivariate approach is used to analyse regional-scale soil geochemical data obtained as part of the Tellus Project generated by the Geological Survey of Northern Ireland (GSNI). The multi-element total concentration data comprise XRF analyses of 6862 rural soil samples collected at 20 cm depth on a non-aligned grid at one site per 2 km². Censored data were imputed using published detection limits. Using these imputed values for 46 elements (including LOI), each soil sample site was assigned to the regional geology map provided by GSNI, initially using the dominant lithology for the map polygon. Northern Ireland includes a diversity of geology representing a stratigraphic record from the Mesoproterozoic up to and including the Palaeogene. However, the advance of ice sheets and their meltwaters over the last 100,000 years has left at least 80% of the bedrock covered by superficial deposits, including glacial till and post-glacial alluvium and peat. The question is to what extent the soil geochemistry reflects the underlying geology or the superficial deposits. To address this, the geochemical data were transformed using centred log ratios (clr) to observe the requirements of compositional data analysis and avoid closure issues. Compositional multivariate techniques, including compositional principal component analysis (PCA) and minimum/maximum autocorrelation factor (MAF) analysis, were then used to determine the influence of the underlying geology on the soil geochemistry signature. PCA showed that 72% of the variation was captured by the first four principal components (PCs), implying “significant” structure in the data. Analysis of variance showed that only 10 PCs were necessary to classify the soil geochemical data. To improve on PCA by using the spatial relationships in the data, a classification based on MAF analysis was undertaken using the first 6 dominant factors. Understanding the relationship between soil geochemistry and superficial deposits is important for environmental monitoring of fragile ecosystems such as peat. To explore whether peat cover could be predicted from the classification, the lithology designation was adapted to include the presence of peat, based on GSNI superficial deposit polygons, and linear discriminant analysis (LDA) was undertaken. Prediction accuracy for the LDA classification improved from 60.98% based on PCA using 10 principal components to 64.73% using MAF based on the 6 most dominant factors. The misclassification of peat may reflect degradation of peat-covered areas since the creation of the superficial deposit classification. Further work will examine the influence of underlying lithologies on elemental concentrations in peat and the effect of this on the classification analysis.
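A minimal sketch of the compositional pipeline described above (clr transform, PCA for dimension reduction, then LDA classification against mapped lithology), assuming scikit-learn and placeholder data rather than the Tellus workflow itself:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def clr(x):
    """Centred log-ratio transform applied row-wise to a (samples x elements)
    matrix of strictly positive concentrations."""
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

rng = np.random.default_rng(0)
concentrations = rng.lognormal(mean=1.0, sigma=0.5, size=(200, 46))  # placeholder data
lithology = rng.integers(0, 5, size=200)                             # placeholder classes

# clr -> first 10 principal components -> cross-validated LDA accuracy
scores = PCA(n_components=10).fit_transform(clr(concentrations))
accuracy = cross_val_score(LinearDiscriminantAnalysis(), scores, lithology, cv=5).mean()
print(f"cross-validated LDA accuracy on PC scores: {accuracy:.2f}")
```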
Abstract:
Schistosomiasis is a chronic and debilitating disease caused by blood flukes (digenetic trematodes) of the genus Schistosoma. Schistosomes are sexually dimorphic and exhibit dramatic morphological changes during a complex lifecycle, which requires subtle gene regulatory mechanisms to fulfil these complex biological processes. In the current study, a 41,982-feature custom DNA microarray, representing the most comprehensive probe coverage for any schistosome transcriptome study, was designed based on public-domain and local databases to explore differential gene expression in S. japonicum. We found that approximately one tenth of the total annotated genes in the S. japonicum genome are differentially expressed between adult males and females. In general, genes associated with the cytoskeleton and with motor and neuronal activities were readily expressed in male adult worms, whereas genes involved in amino acid metabolism, nucleotide biosynthesis, gluconeogenesis, glycosylation, cell cycle processes, DNA synthesis and genome fidelity and stability were enriched in females. Further, miRNA target sites within these gene sets were predicted, suggesting a scenario whereby miRNAs regulate these sex-biased genes. The study significantly expands our knowledge of the expression and regulatory characteristics of sex-biased genes in schistosomes with high accuracy. The data provide a better appreciation of the biological and physiological features of male and female schistosome parasites, which may lead to novel vaccine targets and the development of new therapeutic interventions.
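The differential expression step could, in principle, be sketched as per-probe two-sample tests with false discovery rate control. The Python sketch below assumes placeholder data, simple t-tests and a Benjamini-Hochberg correction; it stands in for, rather than reproduces, the study's microarray analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
male = rng.normal(8.0, 1.0, size=(41982, 4))    # probes x male replicates (placeholder)
female = rng.normal(8.0, 1.0, size=(41982, 4))  # probes x female replicates (placeholder)

# Per-probe two-sample t-test between sexes
t, p = stats.ttest_ind(male, female, axis=1)

# Benjamini-Hochberg false discovery rate control
order = np.argsort(p)
ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
q = np.minimum.accumulate(ranked[::-1])[::-1]
significant = np.zeros(len(p), dtype=bool)
significant[order] = q < 0.05

print(f"{significant.sum()} probes called sex-biased at FDR < 0.05")
```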
Abstract:
A small-scale sample nuclear waste package, consisting of a 28 mm diameter uranium penny encased in grout, was imaged by absorption contrast radiography using a single pulse exposure from an X-ray source driven by a high-power laser. The Vulcan laser was used to deliver a focused pulse of photons to a tantalum foil, in order to generate a bright burst of highly penetrating X-rays (with energy >500 keV) with a source size of <0.5 mm. BAS-TR and BAS-SR image plates were used for image capture, alongside a newly developed Thallium-doped Caesium Iodide scintillator-based detector coupled to CCD chips. The uranium penny was clearly resolved to sub-mm accuracy over a 30 cm² scan area from a single-shot acquisition. In addition, neutron generation was demonstrated in situ with the X-ray beam in a single shot, thus demonstrating the potential for multi-modal criticality testing of waste materials. This feasibility study successfully demonstrated non-destructive radiography of encapsulated, high-density nuclear material. With recent developments of high-power laser systems towards 10 Hz operation, a laser-driven multi-modal beamline for waste monitoring applications is envisioned.
Abstract:
The availability of BRAF inhibitors has given metastatic melanoma patients an effective new treatment choice, and molecular testing to determine the presence or absence of a BRAF codon 600 mutation is pivotal in the clinical management of these patients. This molecular test must be performed accurately and appropriately to ensure that the patient receives the most suitable treatment in a timely manner. Laboratories have introduced such testing; however, some experience low sample throughput, making it critical that an external quality assurance programme is available to help promote a high standard of testing and reporting and to provide an educational aspect for BRAF molecular testing. Laboratories took part in three rounds of external quality assessment (EQA) during a 12-month period, giving participants a measure of the accuracy of their genotyping, clinical interpretation of the result and experience in testing a range of different samples. Formalin-fixed paraffin-embedded tissue sections from malignant melanoma patients were distributed to participants for BRAF molecular testing. The standard of testing was generally high, but distribution of a mutation other than the most common, p.(Val600Glu), highlighted concerns with the detection or reporting of rarer mutations. The main issues raised in the interpretation of the results were the importance of clear, unambiguous interpretation tailored to the patient and the understanding that the treatment differs from that given in other stratified medicine programmes. The variability in reporting and the wide range of methodologies used indicate a continuing need for EQA in this field.
Abstract:
INTRODUCTION: The dichotomization of non-small cell lung carcinoma (NSCLC) subtype into squamous cell carcinoma (SQCC) and adenocarcinoma (ADC) has become important in recent years and is increasingly required with regard to management. The aim of this study was to determine the utility of a panel of commercially available antibodies in refining the diagnosis on small biopsies, and also to determine whether cytologic material is suitable for somatic EGFR genotyping, in a prospectively analyzed series of patients undergoing investigation for suspected lung cancer. METHODS: Thirty-two consecutive cases of NSCLC were first tested using a panel comprising cytokeratin 5/6, P63, thyroid transcription factor-1, 34betaE12, and a D-PAS stain for mucin, to determine their value in refining the diagnosis of NSCLC. After this test phase, two further pathologists independently reviewed the cases using a refined panel that excluded 34betaE12 because of its low specificity for SQCC, and refinement of diagnosis and concordance were assessed. Ten cases of ADC, including eight derived from cytologic samples, were sent for EGFR mutation analysis. RESULTS: The diagnosis was refined to either SQCC or ADC in 65% of cases of NSCLC in the test phase. This included 10 of 13 cases in which cell pellets had been prepared from transbronchial needle aspirates. Validation by two further pathologists with varying expertise in lung pathology confirmed increased refinement and concordance of diagnosis. All samples were adequate for analysis, and all showed a wild-type EGFR genotype. CONCLUSION: A panel comprising cytokeratin 5/6, P63, thyroid transcription factor-1, and a D-PAS stain for mucin increases diagnostic accuracy and agreement between pathologists when refining a diagnosis of NSCLC to SQCC or ADC. These small samples, even cell pellets derived from transbronchial needle aspirates, appear to be adequate for EGFR mutation analysis.
Abstract:
There has been increasing interest in the development of new methods that use Pareto optimality to deal with multi-objective criteria (for example, accuracy and time complexity). Once one has developed an approach to a problem of interest, the question becomes how to compare it with the state of the art. In machine learning, algorithms are typically evaluated by comparing their performance on different data sets by means of statistical tests. The standard tests used for this purpose can consider neither multiple performance measures jointly nor multiple competitors at once. The aim of this paper is to resolve these issues by developing statistical procedures that account for multiple competing measures at the same time and compare multiple algorithms altogether. In particular, we develop two tests: a frequentist procedure based on the generalized likelihood-ratio test and a Bayesian procedure based on a multinomial-Dirichlet conjugate model. We further extend them by discovering conditional independences among measures to reduce the number of parameters of such models, as the number of studied cases in such comparisons is usually very small. Data from a comparison among general-purpose classifiers are used to show a practical application of our tests.
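To make the Bayesian procedure concrete, the sketch below shows a multinomial-Dirichlet model of the general kind described: joint win/loss outcomes of two algorithms on two measures are counted across data sets and a Dirichlet posterior is sampled. The counts, the uniform prior and the dominance query are illustrative assumptions, not the paper's exact test.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical counts over the 4 joint outcomes across data sets:
# (A wins both measures, A wins accuracy only, A wins time only, A loses both)
counts = np.array([11, 4, 3, 2])
prior = np.ones(4)                      # uniform Dirichlet(1,1,1,1) prior

# Conjugacy: posterior is Dirichlet(counts + prior); sample it directly.
posterior_draws = rng.dirichlet(counts + prior, size=100_000)

# Posterior probability that "A wins both" is more probable than "A loses both".
p_dominates = (posterior_draws[:, 0] > posterior_draws[:, 3]).mean()
print(f"P(A dominates B on both measures | data) ~ {p_dominates:.3f}")
```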
Abstract:
This paper examines the use of trajectory distance measures and clustering techniques to define normal and abnormal trajectories in the context of pedestrian tracking in public spaces. In order to detect abnormal trajectories, what is meant by a normal trajectory in a given scene is first defined; every trajectory that deviates from this normality is then classified as abnormal. By combining Dynamic Time Warping and a modified K-Means algorithm for arbitrary-length data series, we have developed an algorithm for trajectory clustering and abnormality detection. The final system achieves an overall accuracy of 83% and 75% when tested on two different standard datasets.
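A rough sketch of the two ingredients named above, a DTW distance for trajectories of different lengths and a medoid-based clustering loop standing in for the modified K-Means, is given below; the clustering details and the abnormality threshold are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def dtw(a, b):
    """DTW distance between two trajectories given as (n, 2) arrays of x-y points."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def cluster(trajs, k=2, iters=10, seed=0):
    """K-medoid-style clustering under DTW; returns medoid indices and labels."""
    rng = np.random.default_rng(seed)
    medoids = list(rng.choice(len(trajs), size=k, replace=False))
    labels = [0] * len(trajs)
    for _ in range(iters):
        labels = [int(np.argmin([dtw(t, trajs[m]) for m in medoids])) for t in trajs]
        for c in range(k):
            members = [i for i, lab in enumerate(labels) if lab == c]
            if members:
                # new medoid = member minimising total DTW distance to the others
                totals = [sum(dtw(trajs[i], trajs[j]) for j in members) for i in members]
                medoids[c] = members[int(np.argmin(totals))]
    return medoids, labels

def is_abnormal(traj, trajs, medoids, threshold):
    """Flag a trajectory whose DTW distance to every cluster medoid exceeds a threshold."""
    return min(dtw(traj, trajs[m]) for m in medoids) > threshold
```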
Abstract:
Increasing research has highlighted the effects of changing climates on the occurrence and prevalence of toxigenic Aspergillus species producing aflatoxins. There is concern about the toxicological effects on human health and animal productivity following acute and chronic exposure, which may affect the future ability to provide safe and sufficient food globally. Considerable research has focused on the detection of these toxins in agricultural products for human and animal consumption, based on the physicochemical and biochemical properties of the aflatoxin compounds. As improvements in food security continue, more regulations for acceptable levels of aflatoxins have arisen globally, the most stringent in Europe. These regulations are important for developing countries, as aflatoxin occurrence is high and significantly affects international trade and the economy. In developed countries, analytical approaches have become highly sophisticated, capable of attaining results with high precision and accuracy suitable for regulatory laboratories. Regrettably, many countries that are affected by aflatoxin contamination do not have the resources for high-tech HPLC and MS instrumentation and require more affordable, yet robust and equally accurate, alternatives that can be used by producers, processors and traders in emerging economies. It is especially important that companies wishing to exploit the opportunities offered by lucrative but highly regulated markets in the developed world have access to analytical methods that will ensure that their exports meet their customers' quality and safety requirements.
This work evaluates the ToxiMet system as an alternative approach to UPLC–MS/MS for the detection and determination of aflatoxins relative to current European regulatory standards. Four commodities were analysed for natural aflatoxin contamination: rice grain, cracked maize and maize flour, peanut paste, and dried distillers grains. For B1 and total aflatoxin determination, the qualitative correlation (above or below the regulatory limit) was good for all commodities, with the exception of the dried distillers grain samples for B1, for which no calibration existed. For B1, the quantitative R² correlations were 0.92, 0.92, 0.88 (<250 μg/kg) and 0.7 for the rice, maize, peanut and dried distillers grain samples respectively, whereas for total aflatoxins the quantitative correlations were 0.92, 0.94, 0.88 and 0.91. The ToxiMet system could be used as an alternative for aflatoxin analysis under current legislation, but some consideration should be given to aflatoxin M1 regulatory levels for these commodities, given the high levels detected in this study, especially for maize and peanuts.
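As a hedged illustration of how such a method comparison can be quantified, the sketch below computes an R² value and the above/below-limit agreement for paired measurements; the numbers and the regulatory limit used here are placeholders, not the study's data.

```python
import numpy as np

reference = np.array([0.5, 1.2, 2.0, 3.8, 4.5, 6.0, 9.1, 12.3])  # reference method, ug/kg (placeholder)
candidate = np.array([0.6, 1.0, 2.3, 3.5, 4.9, 5.4, 9.8, 11.7])  # candidate method, ug/kg (placeholder)
limit = 4.0                                                       # assumed regulatory limit

# R^2 of a simple linear calibration of candidate against reference
slope, intercept = np.polyfit(reference, candidate, 1)
predicted = slope * reference + intercept
r2 = 1 - np.sum((candidate - predicted) ** 2) / np.sum((candidate - candidate.mean()) ** 2)

# Qualitative agreement: do both methods call the sample above/below the limit?
qualitative_agreement = np.mean((reference > limit) == (candidate > limit))
print(f"R^2 = {r2:.2f}, above/below-limit agreement = {qualitative_agreement:.0%}")
```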
Abstract:
Safety on public transport is a major concern for the relevant authorities. We address this issue by proposing an automated surveillance platform which combines data from video, infrared and pressure sensors. Data homogenisation and integration are achieved by a distributed architecture based on communication middleware that resolves interconnection issues, thereby enabling data modelling. A common-sense knowledge base models and encodes knowledge about public-transport platforms and the actions and activities of passengers. Trajectory data from passengers are modelled as a time series of human activities. Common-sense knowledge and rules are then applied to detect inconsistencies or errors in the data interpretation. Lastly, the rationality that characterises human behaviour is also captured through a bottom-up Hierarchical Task Network planner that, along with common sense, corrects misinterpretations to explain passenger behaviour. The system is validated using a simulated bus saloon scenario as a case study. Eighteen video sequences were recorded with up to six passengers. Four metrics were used to evaluate performance. The system, with an accuracy greater than 90% for each of the four metrics, was found to outperform a rule-based system and a system containing planning alone.
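The common-sense consistency check can be illustrated with a toy rule over an activity time series, as in the sketch below; the activity labels and the boarding/alighting rule are assumptions standing in for the platform's knowledge base and planner, not its actual rules.

```python
from dataclasses import dataclass

@dataclass
class Event:
    time_s: float
    passenger: str
    activity: str   # e.g. "boarding", "walking", "seated", "alighting"

def inconsistencies(events):
    """Flag activities recorded for a passenger before any 'boarding' event or
    after an 'alighting' event, i.e. likely misinterpretations of the sensor data."""
    boarded, alighted, flagged = set(), set(), []
    for ev in sorted(events, key=lambda e: e.time_s):
        if ev.activity == "boarding":
            boarded.add(ev.passenger)
        elif ev.passenger not in boarded or ev.passenger in alighted:
            flagged.append(ev)
        if ev.activity == "alighting":
            alighted.add(ev.passenger)
    return flagged
```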
Abstract:
There is little consensus regarding how verticality (social power, dominance, and status) is related to accurate interpersonal perception. The relation could be either positive or negative, and there could be many causal processes at play. The present article discusses the theoretical possibilities and presents a meta-analysis of this question. In studies using a standard test of interpersonal accuracy, higher socioeconomic status (SES) predicted higher accuracy defined as accurate inference about the meanings of cues; also, higher experimentally manipulated vertical position predicted higher accuracy defined as accurate recall of others’ words. In addition, although personality dominance did not predict accurate inference overall, the type of personality dominance did, such that empathic/responsible dominance had a positive relation and egoistic/aggressive dominance had a negative relation to accuracy. In studies involving live interaction, higher experimentally manipulated vertical position produced lower accuracy defined as accurate inference about cues; however, methodological problems place this result in doubt.
Abstract:
Data mining can be defined as the extraction of implicit, previously unknown, and potentially useful information from data. Numerous researchers have been developing security technology and exploring new methods to detect cyber-attacks with the DARPA 1998 dataset for intrusion detection and the modified versions of this dataset, KDDCup99 and NSL-KDD, but until now no one has examined the performance of the Top 10 data mining algorithms selected by experts in data mining. The classification learning algorithms compared in this thesis are C4.5, CART, k-NN and Naïve Bayes. The performance of these algorithms is compared in terms of accuracy, error rate and average cost on modified versions of the NSL-KDD train and test datasets, where the instances are classified into normal and four cyber-attack categories: DoS, Probing, R2L and U2R. Additionally, the most important features for detecting cyber-attacks across all categories and within each category are evaluated with Weka's Attribute Evaluator and ranked according to Information Gain. The results show that the classification algorithm with the best performance on the dataset is the k-NN algorithm. The most important features for detecting cyber-attacks are basic features such as the number of seconds of a network connection, the protocol used for the connection, the network service used, the normal or error status of the connection and the number of data bytes sent. The most important features for detecting DoS, Probing and R2L attacks are basic features and the least important are content features, unlike U2R attacks, for which content features are the most important.
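The following Python sketch illustrates this kind of comparison with scikit-learn stand-ins (CART via a decision tree, k-NN, Gaussian Naïve Bayes; C4.5 has no direct scikit-learn equivalent), using mutual information as a proxy for Weka's Information Gain ranking. The feature matrix and labels are random placeholders for the NSL-KDD data, which would need to be loaded and encoded separately.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 41))      # placeholder for 41 NSL-KDD features
y = rng.integers(0, 5, size=1000)    # normal + DoS, Probing, R2L, U2R

models = {
    "CART": DecisionTreeClassifier(),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "NaiveBayes": GaussianNB(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()   # cross-validated accuracy
    print(f"{name}: accuracy {acc:.3f}")

# Rank features by mutual information with the class label
ranking = np.argsort(mutual_info_classif(X, y))[::-1]
print("top 5 feature indices:", ranking[:5])
```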
Abstract:
During the epoch when the first collapsed structures formed (6 < z < 50) our Universe went through an extended period of changes. Some of the radiation from the first stars and accreting black holes in those structures escaped and changed the state of the Intergalactic Medium (IGM). The era of this global phase change, in which the state of the IGM was transformed from cold and neutral to warm and ionized, is called the Epoch of Reionization. In this thesis we focus on numerical methods to calculate the effects of this escaping radiation. We start by considering the performance of the cosmological radiative transfer code C2-Ray. We find that although this code efficiently and accurately solves for the changes in the ionized fractions, it can yield inaccurate results for the temperature changes. We introduce two new elements to improve the code. The first element, an adaptive time step algorithm, quickly determines an optimal time step by considering only the computational cells relevant for this determination. The second element, asynchronous evolution, allows different cells to evolve with different time steps. An important constituent of methods to calculate the effects of ionizing radiation is the transport of photons through the computational domain, or "ray tracing". We devise a novel ray-tracing method called PYRAMID which uses a new geometry, the pyramidal geometry. This geometry shares properties with both the standard Cartesian and spherical geometries, making it on the one hand easy to use in conjunction with a Cartesian grid and on the other hand ideally suited to tracing radiation from a radially emitting source. A time-dependent photoionization calculation requires not only tracing the path of photons but also solving the coupled set of photoionization and thermal equations. Several different solvers for these equations are in use in cosmological radiative transfer codes. We conduct a detailed and quantitative comparison of four different standard solvers in which we evaluate how their accuracy depends on the choice of time step. This comparison shows that their performance can be characterized by two simple parameters and that the C2-Ray solver generally performs best.
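A highly simplified sketch of the adaptive time-step idea follows: it integrates the ionized fraction of a single cell under a photoionization rate and recombination, shrinking the step until the change per step stays below a tolerance. The rates, tolerance and step-halving strategy are illustrative assumptions; C2-Ray's actual scheme is more sophisticated.

```python
def step(x, ne, gamma, alpha, dt):
    """One explicit update of the ionized fraction x of a single cell:
    photoionization of neutrals minus recombination of ions."""
    dx_dt = gamma * (1.0 - x) - alpha * ne * x
    return x + dx_dt * dt

def evolve(x, ne, gamma, alpha, t_total, tol=0.05):
    """Advance x over t_total, halving the trial time step until the change
    per step is below `tol` and x stays within [0, 1]."""
    t = 0.0
    while t < t_total:
        dt = t_total - t
        while True:
            x_new = step(x, ne, gamma, alpha, dt)
            if abs(x_new - x) <= tol and 0.0 <= x_new <= 1.0:
                break
            dt *= 0.5
        x, t = x_new, t + dt
    return x

# Example with arbitrary illustrative numbers (SI-like units, not calibrated):
print(evolve(x=1e-4, ne=1e-3, gamma=1e-12, alpha=2.6e-13, t_total=3.15e11))
```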