941 results for Biodosimetry errors
Abstract:
Townsend's primary and secondary ionization coefficients α/p and γ were determined in nitrogen over a wide range of E/p (100-1000 V cm⁻¹ Torr⁻¹) and p (0.4 to 12 Torr at 0 °C) using the pressure-variation technique. This technique, together with the Gosseries method for evaluating the ionization coefficients, appears more suitable at higher values of E/p, since the errors in these coefficients can be minimized by a suitable choice of p and d, thus eliminating the non-equilibrium ionization condition.
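These coefficients enter through the standard Townsend current-growth relation I = I0·exp(αd) / (1 − γ(exp(αd) − 1)). As a rough illustration only (a direct nonlinear fit, not the Gosseries evaluation used in the paper), with all numbers invented:

```python
# Illustrative only: fit the Townsend growth relation
# I = I0*exp(alpha*d) / (1 - gamma*(exp(alpha*d) - 1))
# to synthetic current-vs-gap data at fixed E/p. Not the paper's procedure.
import numpy as np
from scipy.optimize import curve_fit

def townsend_growth(d, i0, alpha, gamma):
    e = np.exp(alpha * d)
    return i0 * e / (1.0 - gamma * (e - 1.0))

d_cm = np.array([0.2, 0.4, 0.6, 0.8, 1.0])           # gap lengths (assumed)
i_meas = townsend_growth(d_cm, 1.0, 4.0, 0.005)      # synthetic "currents"

popt, _ = curve_fit(townsend_growth, d_cm, i_meas, p0=[1.0, 3.5, 0.003])
print(f"alpha = {popt[1]:.3f} /cm, gamma = {popt[2]:.4f}")
```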
Abstract:
New Zealand's Greenhouse Gas Inventory (the NZ Inventory) currently estimates methane (CH4) emissions from anaerobic dairy effluent ponds by: (1) determining the total pond volume across New Zealand; (2) dividing this volume by depth to obtain the total pond surface area; and (3) multiplying this area by an observational average CH4 flux. Unfortunately, a mathematically erroneous determination of pond volume has led to an imbalanced equation, and a geometry error was made when scaling up the observational CH4 flux. Furthermore, even if these errors are corrected, the nationwide estimate still hinges on field data from a study that used a debatable method to measure pond CH4 emissions at a single site, as well as a potentially inaccurate estimation of the amount of organic waste anaerobically treated. The development of a new methodology is therefore critically needed.
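For concreteness, the three-step scale-up criticised above amounts to arithmetic like the following; every input value here is a placeholder, not an Inventory figure:

```python
# The Inventory's three-step scale-up as plain arithmetic.
# All input values are placeholders, not Inventory figures.
total_pond_volume_m3 = 12_000_000      # step 1: national pond volume (assumed)
mean_pond_depth_m = 3.0                # divisor used to infer surface area
ch4_flux_g_per_m2_day = 50.0           # step 3: observational mean flux (assumed)

surface_area_m2 = total_pond_volume_m3 / mean_pond_depth_m          # step 2
annual_ch4_t = surface_area_m2 * ch4_flux_g_per_m2_day * 365 / 1e6  # g -> t
print(f"estimated CH4: {annual_ch4_t:,.0f} t/yr")
```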
Abstract:
The expressionist head of a young man emerges from the dark shadows. His face is a long oval, with full lips and strongly flared nostrils, framed by black hair and a small black beard.
Abstract:
Pressurised hot water extraction (PHWE) exploits the unique temperature-dependent solvent properties of water, minimising the use of harmful organic solvents. Water is an environmentally friendly, cheap and easily available extraction medium. The effects of temperature, pressure and extraction time in PHWE have often been studied, but here the emphasis was on other parameters important for the extraction, most notably the dimensions of the extraction vessel and the stability and solubility of the analytes to be extracted. Non-linear data analysis and self-organising maps were employed to obtain correlations between the parameters studied, the recoveries and the relative errors.

First, PHWE was combined on-line with liquid chromatography-gas chromatography (LC-GC), and the system was applied to the extraction and analysis of polycyclic aromatic hydrocarbons (PAHs) in sediment. The method is of superior sensitivity compared with the traditional methods, and only a small 10 mg sample was required for analysis. The commercial extraction vessels were replaced by laboratory-made stainless steel vessels because of problems that arose with the former; the performance of the laboratory-made vessels was comparable to that of the commercial ones.

In an investigation of the effect of thermal desorption in PHWE, it was found that at lower temperatures (200 °C and 250 °C) the effect of thermal desorption is smaller than the effect of the solvating property of hot water. At 300 °C, however, thermal desorption is the main mechanism.

The effect of the geometry of the extraction vessel on recoveries was studied with five specially constructed extraction vessels. In addition to the vessel geometry, the sediment packing style and the direction of water flow through the vessel were investigated. The geometry of the vessel was found to have only a minor effect on the recoveries, and the same was true of the packing style and the flow direction. These are welcome results, because these parameters do not have to be carefully optimised before the start of extractions.

Liquid-liquid extraction (LLE) and solid-phase extraction (SPE) were compared as trapping techniques for PHWE. LLE was more robust than SPE and provided better recoveries and repeatabilities. Problems related to blocking of the Tenax trap and unrepeatable trapping of the analytes were encountered in SPE. Thus, although LLE is more labour-intensive, it can be recommended over SPE.

The stabilities of the PAHs in aqueous solutions were measured using a batch-type reaction vessel. Degradation was observed at 300 °C even with the shortest heating time; ketones, quinones and other oxidation products were observed. Although the conditions of the stability studies differed considerably from the extraction conditions in PHWE, the results indicate that the risk of analyte degradation must be taken into account in PHWE.

The aqueous solubilities of acenaphthene, anthracene and pyrene were measured, first below and then above the melting points of the analytes. Measurements below the melting point were made to check that the equipment was working, and the results were compared with those obtained earlier; good agreement was found between the measured and literature values. A new saturation cell was constructed for the solubility measurements above the melting point, because the flow-through saturation cell could not be used there. An exponential relationship was found between temperature and the solubilities measured for pyrene and anthracene.
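A minimal sketch of fitting such an exponential solubility-temperature relationship, with invented values standing in for the measured solubilities:

```python
# Fit an exponential solubility-temperature relationship of the kind
# reported above. Solubility values below are invented stand-ins.
import numpy as np
from scipy.optimize import curve_fit

T = np.array([100.0, 150.0, 200.0, 250.0, 300.0])    # deg C (assumed)
S = 2e-3 * np.exp(0.035 * T) * np.array([1.05, 0.97, 1.02, 0.99, 1.01])

def expo(T, a, b):
    return a * np.exp(b * T)

(a, b), _ = curve_fit(expo, T, S, p0=[1e-3, 0.03])
print(f"S(T) ~= {a:.2e} * exp({b:.4f} * T)")
```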
Abstract:
Digital elevation models (DEMs) have been an important topic in geography and the surveying sciences for decades, owing to their geomorphological importance as the reference surface for gravitation-driven material flow as well as their wide range of uses and applications. When a DEM is used in terrain analysis, for example in automatic drainage basin delineation, errors of the model accumulate in the analysis results. Investigation of this phenomenon is known as error propagation analysis, which has a direct influence on decision-making based on interpretations and applications of terrain analysis. Additionally, it may have an indirect influence on data acquisition and DEM generation. The focus of the thesis was on fine toposcale DEMs, which are typically represented on a 5-50 m grid and used at application scales of 1:10 000-1:50 000.

The thesis presents a three-step framework for investigating error propagation in DEM-based terrain analysis. The framework includes methods for visualising the morphological gross errors of DEMs, exploring the statistical and spatial characteristics of the DEM error, performing analytical and simulation-based error propagation analyses, and interpreting the error propagation results. The DEM error model was built using geostatistical methods.

The results show that appropriate and exhaustive reporting of the various aspects of fine toposcale DEM error is a complex task. This is due to the high number of outliers in the error distribution and to morphological gross errors, which are detectable with the presented visualisation methods. In addition, a global characterisation of DEM error is a gross generalisation of reality, owing to the small extent of the areas in which the assumption of stationarity is not violated. This was shown using an exhaustive high-quality reference DEM based on airborne laser scanning, together with local semivariogram analysis. The error propagation analysis revealed that, as expected, an increase in the DEM vertical error increases the error in surface derivatives. However, contrary to expectations, the spatial autocorrelation of the model appears to have varying effects on the error propagation analysis depending on the application. The use of a spatially uncorrelated DEM error model has been considered a 'worst-case scenario', but this opinion is now challenged, because none of the DEM derivatives investigated in the study had maximum variation with spatially uncorrelated random error. A significant performance improvement was achieved in simulation-based error propagation analysis by applying process convolution in generating realisations of the DEM error model. In addition, a typology of uncertainty in drainage basin delineations is presented.
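A minimal sketch of the simulation-based approach, assuming an invented DEM: spatially autocorrelated error realisations are generated by process convolution (white noise smoothed with a Gaussian kernel) and propagated into slope:

```python
# Monte Carlo error propagation for DEM-derived slope. The DEM, error
# magnitude and correlation range are all invented for illustration.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
dem = rng.normal(100.0, 5.0, (200, 200))   # placeholder DEM on a 5 m grid
cell = 5.0                                 # cell size (m)
sigma_z = 1.0                              # DEM vertical error std (m, assumed)
corr_cells = 4                             # error correlation range (cells, assumed)

def slope_deg(z):
    gy, gx = np.gradient(z, cell)
    return np.degrees(np.arctan(np.hypot(gx, gy)))

runs = []
for _ in range(100):
    # process convolution: smooth white noise, rescale to target error std
    noise = gaussian_filter(rng.standard_normal(dem.shape), corr_cells)
    noise *= sigma_z / noise.std()
    runs.append(slope_deg(dem + noise))

slope_sd = np.std(runs, axis=0)            # per-cell slope uncertainty
print(f"mean slope uncertainty: {slope_sd.mean():.2f} degrees")
```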
Abstract:
Microarrays are high-throughput biological assays that allow the screening of thousands of genes for their expression. The main idea behind microarrays is to compute for each gene a unique signal that is directly proportional to the quantity of mRNA that was hybridized on the chip. The large number of steps, and the errors associated with each step, make the generated expression signal noisy. As a result, microarray data need to be carefully pre-processed before their analysis can be assumed to lead to reliable and biologically relevant conclusions. This thesis focuses on developing methods for improving the gene signal and further utilizing this improved signal for higher-level analysis. To achieve this, first, approaches for designing microarray experiments using various optimality criteria, considering both biological and technical replicates, are described. A carefully designed experiment leads to a signal with low noise, as the effect of unwanted variation is minimized and the precision of the estimates of the parameters of interest is maximized. Second, a system for improving the gene signal by using three scans at varying scanner sensitivities is developed. A novel Bayesian latent intensity model is then applied to these three sets of expression values, corresponding to the three scans, to estimate the suitably calibrated true signal of the genes. Third, a novel image segmentation approach that segregates the fluorescent signal from undesired noise is developed using an additional dye, SYBR green RNA II. This technique helps identify signal arising only from the hybridized DNA, while signal corresponding to dust, scratches, spilled dye and other noise is excluded. Fourth, an integrated statistical model is developed in which signal correction, systematic array effects, dye effects and differential expression are modelled jointly, as opposed to a sequential application of several methods of analysis. The methods described here have been tested only for cDNA microarrays but can, with some modifications, also be applied to other high-throughput technologies. Keywords: high-throughput technology, microarray, cDNA, multiple scans, Bayesian hierarchical models, image analysis, experimental design, MCMC, WinBUGS.
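Not the thesis's Bayesian latent intensity model, but a simple moment-based toy that conveys the multi-scan idea: three scans at increasing sensitivity are placed on a common scale and combined while discarding saturated readings. All gains, noise levels and the saturation ceiling are invented:

```python
# Moment-based toy for combining multiple scans; not the thesis's model.
import numpy as np

rng = np.random.default_rng(1)
true = rng.lognormal(6, 1, 2000)               # latent spot intensities
gains = [1.0, 3.0, 9.0]                        # unknown scanner gains (assumed)
SAT = 65535.0                                  # 16-bit scanner ceiling
scans = [np.minimum(g * true * rng.lognormal(0, 0.05, true.size), SAT)
         for g in gains]

ref = scans[0]                                 # least sensitive scan as reference
est = np.zeros_like(true)
n_used = np.zeros_like(true)
for s in scans:
    ok = s < SAT                               # drop censored (saturated) spots
    gain_hat = np.median(s[ok] / ref[ok])      # relative gain vs reference
    est[ok] += s[ok] / gain_hat
    n_used[ok] += 1
calibrated = est / n_used                      # combined, calibrated signal
```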
Abstract:
In this paper we investigate the effectiveness of class-specific sparse codes in the context of discriminative action classification. The bag-of-words representation is widely used in activity recognition to encode features, and although it yields state-of-the-art performance with several feature descriptors, it still suffers from large quantization errors that reduce overall performance. Recently proposed sparse representation methods have been shown to effectively represent features as a linear combination of an overcomplete dictionary by minimizing the reconstruction error. In contrast to most sparse representation methods, which focus on Sparse-Reconstruction-based Classification (SRC), this paper focuses on discriminative classification using an SVM, by constructing class-specific sparse codes for motion and appearance separately. Experimental results demonstrate that separate motion- and appearance-specific sparse coefficients provide the most effective and discriminative representation for each class, compared to a single set of class-specific sparse coefficients.
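A hedged sketch of such a pipeline (not the authors' exact implementation): per-class dictionaries are learned, descriptors are sparse-coded against the concatenated dictionary, and the codes feed a linear SVM. Descriptors, dictionary sizes and parameters are placeholders:

```python
# Class-specific sparse coding + discriminative SVM, sketched with sklearn.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_train = rng.standard_normal((300, 64))    # stand-in motion/appearance descriptors
y_train = rng.integers(0, 3, 300)           # three action classes (placeholder)

# Learn one small dictionary per class, then concatenate the atoms.
dicts = []
for c in np.unique(y_train):
    dl = MiniBatchDictionaryLearning(n_components=20, random_state=0)
    dl.fit(X_train[y_train == c])
    dicts.append(dl.components_)
D = np.vstack(dicts)                        # class-specific dictionary

coder = SparseCoder(dictionary=D, transform_algorithm="lasso_lars",
                    transform_alpha=0.1)
codes = coder.transform(X_train)            # sparse codes as SVM features

clf = LinearSVC().fit(codes, y_train)       # discriminative classifier
```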
Abstract:
Our attention has been drawn to lapsi and errors in a recent publication in this journal concerning Cricotopus Wulp (Diptera: Chironomidae) (Drayson et al., 2015).
Abstract:
Much of our understanding and management of ecological processes requires knowledge of the distribution and abundance of species. Reliable abundance or density estimates are essential for managing both threatened and invasive populations, yet are often challenging to obtain. Recent and emerging technological advances, particularly in unmanned aerial vehicles (UAVs), provide exciting opportunities to overcome these challenges in ecological surveillance. UAVs can provide automated, cost-effective surveillance and offer repeat surveys for pest incursions at an invasion front. They can capitalise on manoeuvrability and advanced imagery options to detect species that are cryptic due to behaviour, life history or inaccessible habitat. UAVs may also cause less disturbance, in magnitude and duration, for sensitive fauna than other survey methods such as transect counting by humans or sniffer dogs.

The surveillance approach depends upon the particular ecological context and the objective. For example, animal, plant and microbial target species differ in their movement, spread and observability. Lag times may exist between a pest species' presence at a site and its detectability, prompting a need for repeat surveys. Operationally, however, the frequency and coverage of UAV surveys may be limited by financial and other constraints, leading to errors in estimating species occurrence or density.

We use simulation modelling to investigate how movement ecology should influence fine-scale decisions regarding ecological surveillance using UAVs. Movement and dispersal parameter choices allow contrasts between locally mobile but slow-dispersing populations, and species that are locally more static but invasive at the landscape scale. We find that low and slow UAV flights may offer the best monitoring strategy for predicting local population densities in transects, but that the consequent reduction in overall area sampled may sacrifice the ability to reliably predict regional population density. Alternative flight plans may perform better, but this also depends on movement ecology and the magnitude of the relative detection errors for different flight choices. Simulated investigations such as this will become increasingly useful to reveal how the spatio-temporal extent and resolution of UAV monitoring should be adjusted to reduce observation errors and thus provide better population estimates, maximising the efficacy and efficiency of unmanned aerial surveys.
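A toy version of this kind of simulation, with all parameters invented: animals are scattered over a region, strip transects are "flown" with an imperfect detection probability, and the corrected transect estimate is compared with the true density:

```python
# Toy strip-transect survey with imperfect detection; all values invented.
import numpy as np

rng = np.random.default_rng(42)
region = 1000.0                               # square region side (m)
n_animals = 400
pos = rng.uniform(0, region, (n_animals, 2))  # true animal locations

strip_w = 50.0                                # transect strip width (m, assumed)
n_strips = 5
p_detect = 0.7                                # per-animal detection prob (assumed)

x0s = rng.uniform(0, region - strip_w, n_strips)   # strip left edges
detected = 0
for x0 in x0s:
    inside = (pos[:, 0] >= x0) & (pos[:, 0] < x0 + strip_w)
    detected += rng.binomial(inside.sum(), p_detect)

surveyed = n_strips * strip_w * region        # total area flown (may overlap)
d_hat = detected / (surveyed * p_detect)      # detection-corrected density
d_true = n_animals / region ** 2
print(f"true {d_true:.2e}, estimated {d_hat:.2e} animals per m^2")
```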
Abstract:
This thesis studies the human gene expression space using high-throughput gene expression data from DNA microarrays. In molecular biology, high-throughput techniques allow numerical measurement of the expression of tens of thousands of genes simultaneously. In a single study, such data are traditionally obtained from a limited number of sample types with a small number of replicates. For organism-wide analysis, these data have been largely unavailable, and the global structure of the human transcriptome has remained unknown.

This thesis introduces a human transcriptome map of different biological entities and an analysis of its general structure. The map is constructed from gene expression data from the two largest public microarray data repositories, GEO and ArrayExpress. The creation of this map contributed to the development of ArrayExpress by identifying and retrofitting previously unusable and missing data and by improving access to its data. It also contributed to the creation of several new tools for microarray data manipulation and to the establishment of data exchange between GEO and ArrayExpress. The data integration for the global map required the creation of a large new ontology of human cell types, disease states, organism parts and cell lines. The ontology was used in a new text-mining and decision-tree based method for the automatic conversion of human-readable free-text microarray data annotations into a categorised format. Data comparability, and the minimisation of the systematic measurement errors characteristic of each laboratory in this large cross-laboratory integrated dataset, were ensured by computing a range of microarray data quality metrics and excluding incomparable data. The structure of the global map of human gene expression was then explored by principal component analysis and hierarchical clustering, using heuristics and help from another purpose-built sample ontology.

A preface and motivation for the construction and analysis of a global map of human gene expression are given by the analysis of two microarray datasets of human malignant melanoma. The analysis of these sets incorporates an indirect comparison of statistical methods for finding differentially expressed genes and points to the need to study gene expression on a global level.
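The exploratory step lends itself to a short sketch: PCA to reduce the expression matrix, then hierarchical clustering of the samples in principal-component space. The matrix below is random noise standing in for the integrated GEO/ArrayExpress data:

```python
# PCA followed by hierarchical clustering of samples; placeholder data.
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
expr = rng.standard_normal((100, 5000))          # samples x genes (placeholder)

pcs = PCA(n_components=10).fit_transform(expr)   # dimensionality reduction
Z = linkage(pcs, method="average")               # agglomerative clustering
labels = fcluster(Z, t=5, criterion="maxclust")  # cut tree into 5 groups
print(np.bincount(labels)[1:])                   # cluster sizes
```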
Abstract:
Delay- and disruption-tolerant networks (DTNs) are computer networks in which round-trip delays and error rates are high and disconnections frequent. Examples of these extreme networks are space communications, sensor networks, connecting rural villages to the Internet, and even interconnecting commodity portable wireless devices and mobile phones. Basic elements of delay-tolerant networks are store-and-forward message transfer resembling traditional mail delivery, opportunistic and intermittent routing, and an extensible cross-region resource naming service. Individual nodes of the network take an active part in routing the traffic and provide in-network storage for application data that flows through the network. Application architectures for delay-tolerant networks also differ from those used in traditional networks. It has become feasible to design applications that are network-aware and opportunistic, taking advantage of different network connection speeds and capabilities; this may change some of the basic paradigms of network application design. DTN protocols also support the design of applications that depend on processes persisting across reboots and power failures, and they could be applicable to traditional networks wherever high tolerance to delays or errors is desired. It is apparent that challenged networks also challenge the traditional, strictly layered model of network application design. This thesis provides an extensive introduction to delay-tolerant networking concepts and applications. Most attention is given to the challenging problems of routing and application architecture. Finally, future prospects of DTN applications and implementations are envisioned through recent research results and an interview with an active researcher of DTN networks.
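The store-and-forward element can be caricatured in a few lines: each node buffers messages and copies them to whichever node it happens to meet (epidemic-style flooding), so delivery requires no end-to-end path. The contact schedule below is random and invented:

```python
# Epidemic-style store-and-forward relay over random opportunistic contacts.
import random

random.seed(0)
n_nodes = 10
buffers = [set() for _ in range(n_nodes)]
buffers[0].add("msg-1")                      # message originates at node 0

for t in range(200):
    a, b = random.sample(range(n_nodes), 2)  # an opportunistic contact
    buffers[a] |= buffers[b]                 # exchange stored messages
    buffers[b] |= buffers[a]
    if "msg-1" in buffers[n_nodes - 1]:
        print(f"delivered to node {n_nodes - 1} after {t + 1} contacts")
        break
```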
Abstract:
Among the different methods, the transmission-line or impedance-tube method has been the most popular for the experimental evaluation of the acoustical impedance of any termination. The current state of the method involves extrapolation of the measured data to the reflecting surface, or exact location of the pressure maxima, both of which are known to be rather tricky. The present paper discusses a method which makes use of the positions of the pressure minima and the values of the standing-wave ratio at these points. Lippert's concept of enveloping curves has been extended. The use of Smith or Beranek charts, with their inherent inaccuracy, has been altogether avoided. The existing formulas for the impedance have been corrected. Incidentally, certain other errors in the current literature have also been brought to light. (Subject Classification: 85.20.)
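A minimal sketch of the minima-based calculation, using the textbook phase convention (first pressure minimum at distance d1 from the sample gives a reflection-coefficient phase of 2·k·d1 − π); this may differ from the paper's corrected formulas, and all inputs are assumed:

```python
# Normalized impedance from the standing-wave ratio and first-minimum position.
import numpy as np

f = 500.0        # frequency (Hz), assumed
c = 343.0        # speed of sound (m/s)
swr = 3.0        # standing-wave ratio at the minimum (assumed)
d1 = 0.10        # distance of first pressure minimum from the sample (m)

k = 2 * np.pi * f / c
R_mag = (swr - 1) / (swr + 1)        # reflection coefficient magnitude
phi = 2 * k * d1 - np.pi             # reflection coefficient phase (textbook)
R = R_mag * np.exp(1j * phi)
z = (1 + R) / (1 - R)                # normalized surface impedance
print(f"z = {z.real:.3f} + {z.imag:.3f}j")
```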
Automatic detection of diabetic foot complications with infrared thermography by asymmetric analysis
Abstract:
Early identification of diabetic foot complications and their precursors is essential in preventing their devastating consequences, such as foot infection and amputation. Frequent, automatic risk assessment by an intelligent telemedicine system might be feasible and cost-effective, and infrared thermography is a promising modality for such a system. The temperature differences between corresponding areas on the contralateral feet are the clinically significant parameters. This asymmetric analysis is hindered by (1) foot segmentation errors, especially when the foot temperature and the ambient temperature are comparable, and (2) differences in shape and size between the contralateral feet due to deformities or minor amputations. To circumvent the first problem, we used a color image and a thermal image acquired synchronously. Foot regions, detected in the color image, were rigidly registered to the thermal image. This resulted in 97.8% ± 1.1% sensitivity and 98.4% ± 0.5% specificity over 76 high-risk diabetic patients, with manual annotation as a reference. Nonrigid landmark-based registration with B-splines solved the second problem: corresponding points in the two feet could be found regardless of the shapes and sizes of the feet. With that, the temperature difference between the left and right feet could be obtained.
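Once both feet are registered to a common frame, the clinically significant parameter reduces to a per-pixel difference. A minimal sketch, assuming already-registered left/right temperature maps and an illustrative asymmetry threshold:

```python
# Per-pixel asymmetry between registered foot temperature maps.
# The maps and the 2.2 C threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
left = rng.normal(30.0, 1.0, (240, 120))     # registered map, deg C (placeholder)
right = rng.normal(30.5, 1.0, (240, 120))

diff = left - np.fliplr(right)               # mirror right foot, then subtract
hotspots = np.abs(diff) > 2.2                # asymmetry threshold (assumed)
print(f"{hotspots.mean():.1%} of pixels exceed the threshold")
```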
Abstract:
PURPOSE To examine longitudinal changes in choroidal thickness and axial length in a population of children with a range of refractive errors. METHODS One hundred and one children (41 myopes and 60 nonmyopes) aged 10 to 15 years participated in this prospective, observational longitudinal study. For each child, 6-month measures of choroidal thickness (using enhanced depth imaging optical coherence tomography) and axial ocular biometry were collected four times over an 18-month period. Linear mixed models were used to examine the longitudinal changes in choroidal thickness and the relationship between changes in choroidal thickness and axial eye growth over the study period. RESULTS A significant group mean increase in subfoveal choroidal thickness was observed over 18 months (mean increase 13 ± 22 μm, P < 0.001). Myopic children exhibited significantly thinner choroids compared with nonmyopic children (P < 0.001), although there was no significant time by refractive group interaction (P = 0.46), indicating similar changes in choroidal thickness over time in myopes and nonmyopes. However, a significant association between the change in choroidal thickness and the change in axial length over time was found (P < 0.001, β = −0.14). Children showing faster axial eye growth exhibited significantly less choroidal thickening over time compared with children showing slower axial eye growth. CONCLUSIONS A significant increase in choroidal thickness occurs over an 18-month period in normal 10- to 15-year-old children. Children undergoing faster axial eye growth exhibited less thickening and, in some cases, a thinning of the choroid. These findings support a potential role for the choroid in the mechanisms regulating eye growth in childhood.
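A hedged sketch of a linear mixed-model analysis of this design using statsmodels, with a random intercept per child; variable names and the synthetic data are placeholders, not the study's:

```python
# Random-intercept mixed model for repeated choroidal thickness measures.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n, visits = 101, 4                             # 101 children, 4 visits
df = pd.DataFrame({
    "child": np.repeat(np.arange(n), visits),
    "visit": np.tile(np.arange(visits) * 0.5, n),      # years from baseline
    "myope": np.repeat(rng.integers(0, 2, n), visits),
})
df["chor_um"] = (330 - 20 * df["myope"] + 8 * df["visit"]
                 + rng.normal(0, 15, len(df)))          # synthetic thickness

model = smf.mixedlm("chor_um ~ visit * myope", df, groups=df["child"])
print(model.fit().summary())
```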
Abstract:
• Evidence from cross-sectional studies [1,2] suggests that choroidal thickness (ChT) varies with age and refractive error in childhood. However, to date there have been no longitudinal studies examining changes in pediatric ChT.
• In this prospective study, the longitudinal changes in ChT and its relationship with eye growth were examined in a population of normal children with a range of refractive errors.