884 results for Fully automated
Abstract:
Purpose The aim of the study was to determine the association, agreement, and detection capability of manual, semiautomated, and fully automated methods of corneal nerve fiber length (CNFL) quantification of the human corneal subbasal nerve plexus (SNP). Methods Thirty-three participants with diabetes and 17 healthy controls underwent laser scanning corneal confocal microscopy. Eight central images of the SNP were selected for each participant and analyzed using manual (CCMetrics), semiautomated (NeuronJ), and fully automated (ACCMetrics) software to quantify the CNFL. Results For the entire cohort, mean CNFL values quantified by CCMetrics, NeuronJ, and ACCMetrics were 17.4 ± 4.3 mm/mm², 16.0 ± 3.9 mm/mm², and 16.5 ± 3.6 mm/mm², respectively (P < 0.01). CNFL values quantified using CCMetrics were significantly higher than those obtained by NeuronJ and ACCMetrics (P < 0.05). The 3 methods were highly correlated (correlation coefficients 0.87–0.98, P < 0.01). The intraclass correlation coefficients were 0.87 for ACCMetrics versus NeuronJ and 0.86 for ACCMetrics versus CCMetrics. Bland–Altman plots showed good agreement between the manual, semiautomated, and fully automated analyses of CNFL. A small underestimation of CNFL by ACCMetrics was observed as the amount of nerve tissue increased. All 3 methods were able to detect CNFL depletion in diabetic participants (P < 0.05) and in those with peripheral neuropathy as defined by the Toronto criteria, compared with healthy controls (P < 0.05). Conclusions Automated quantification of CNFL provides comparable neuropathy detection ability to manual and semiautomated methods. Because of its speed, objectivity, and consistency, fully automated analysis of CNFL might be advantageous in studies of diabetic neuropathy.
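The Bland–Altman agreement analysis reported above can be illustrated with a minimal sketch; the paired CNFL values below are hypothetical stand-ins, not the study's data.

```python
import numpy as np

# Hypothetical CNFL measurements (mm/mm^2) from a manual and an automated method
manual    = np.array([17.1, 18.4, 16.2, 19.0, 15.8, 17.7, 16.9, 18.1])
automated = np.array([16.5, 17.6, 15.9, 18.1, 15.5, 17.2, 16.4, 17.3])

# Bland-Altman: plot per-pair mean against difference; summarize with the
# bias (mean difference) and the 95% limits of agreement
means = (manual + automated) / 2
diffs = manual - automated
bias = diffs.mean()
loa_low = bias - 1.96 * diffs.std(ddof=1)
loa_high = bias + 1.96 * diffs.std(ddof=1)
```

A positive bias here would correspond to the automated method reading slightly lower than the manual one, the direction of the small underestimation the abstract describes.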
Abstract:
In this thesis, different techniques for image analysis of high density microarrays have been investigated. Most of the existing image analysis techniques require prior knowledge of image specific parameters and direct user intervention for microarray image quantification. The objective of this research work was to develop a fully automated image analysis method capable of accurately quantifying the intensity information from high density microarray images. The method should be robust against noise and contamination that commonly occur in different stages of microarray development.
Abstract:
Metabolic stable isotope labeling is increasingly employed for accurate protein (and metabolite) quantitation using mass spectrometry (MS). It provides sample-specific isotopologues that can be used to facilitate comparative analysis of two or more samples. Stable Isotope Labeling by Amino acids in Cell culture (SILAC) has been used for almost a decade in proteomic research and analytical software solutions have been established that provide an easy and integrated workflow for elucidating sample abundance ratios for most MS data formats. While SILAC is a discrete labeling method using specific amino acids, global metabolic stable isotope labeling using isotopes such as (15)N labels the entire element content of the sample, i.e. for (15)N the entire peptide backbone in addition to all nitrogen-containing side chains. Although global metabolic labeling can deliver advantages with regard to isotope incorporation and costs, the requirements for data analysis are more demanding because, for instance for polypeptides, the mass difference introduced by the label depends on the amino acid composition. Consequently, there has been less progress on the automation of the data processing and mining steps for this type of protein quantitation. Here, we present a new integrated software solution for the quantitative analysis of protein expression in differential samples and show the benefits of high-resolution MS data in quantitative proteomic analyses.
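The composition-dependent mass difference described above can be made concrete: under global (15)N labeling, the shift equals the peptide's nitrogen count times the 15N–14N mass difference. The sketch below assumes full incorporation and counts one backbone nitrogen per residue plus side-chain nitrogens; the peptide strings are arbitrary examples.

```python
# Side-chain nitrogen atoms per residue (one-letter codes); every residue
# additionally contributes one backbone nitrogen
SIDE_CHAIN_N = {"K": 1, "R": 3, "H": 2, "N": 1, "Q": 1, "W": 1}
DELTA_15N = 0.997035  # mass difference 15N - 14N in Da

def n15_mass_shift(peptide: str) -> float:
    """Mass shift of a fully 15N-labeled peptide relative to its 14N form."""
    n_atoms = sum(1 + SIDE_CHAIN_N.get(aa, 0) for aa in peptide)
    return n_atoms * DELTA_15N

# Same length, different composition -> different label-induced mass shift,
# unlike SILAC where the shift per labeled residue is fixed
shift_lys = n15_mass_shift("SAMPLEK")  # one Lys: 8 nitrogens
shift_arg = n15_mass_shift("SAMPLER")  # one Arg: 10 nitrogens
```

This is why pairing light and heavy isotopologues requires knowing (or inferring) the amino acid composition, which complicates automated quantitation relative to SILAC.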
Abstract:
BACKGROUND: In contrast to RIA, recently available ELISAs provide the potential for fully automated analysis of adiponectin. To date, studies reporting on the diagnostic characteristics of ELISAs and investigating the relationship between ELISA- and RIA-based methods are rare. METHODS: Thus, we established and evaluated a fully automated platform (BEP 2000; Dade-Behring, Switzerland) for determination of adiponectin levels in serum by two different ELISA methods (competitive human adiponectin ELISA; high sensitivity human adiponectin sandwich ELISA; both Biovendor, Czech Republic). Further, as a reference method, we also employed a human adiponectin RIA (Linco Research, USA). Samples from 150 patients routinely presenting to our cardiology unit were tested. RESULTS: ELISA measurements could be accomplished in less than 3 h, whereas the RIA required 24 h. The ELISAs were evaluated for precision, analytical sensitivity and specificity, linearity on dilution and spiking recovery. In the investigated patients, type 2 diabetes, higher age and male gender were significantly associated with lower serum adiponectin concentrations. Correlations between the ELISA methods and the RIA were strong (competitive ELISA, r=0.82; sandwich ELISA, r=0.92; both p<0.001). However, Deming regression and Bland-Altman analysis indicated lack of agreement of the 3 methods, preventing direct comparison of results. The equations of the regression lines are: Competitive ELISA = 1.48 x RIA - 0.88; High sensitivity sandwich ELISA = 0.77 x RIA + 1.01. CONCLUSIONS: Fully automated measurement of adiponectin by ELISA is feasible and substantially more rapid than RIA. The investigated ELISA test systems seem to exhibit analytical characteristics allowing for clinical application. In addition, there is a strong correlation between the ELISA methods and RIA. These findings might promote a more widespread use of adiponectin measurements in clinical research.
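The reported regression lines can be applied directly to translate a RIA result into the scale of either ELISA. The coefficients below are those stated in the abstract; the function names and the example RIA value are illustrative, and units are whatever the assays share.

```python
def competitive_elisa_from_ria(ria: float) -> float:
    # Deming regression line reported in the abstract
    return 1.48 * ria - 0.88

def sandwich_elisa_from_ria(ria: float) -> float:
    # Deming regression line reported in the abstract
    return 0.77 * ria + 1.01

ria_value = 10.0  # hypothetical adiponectin result from the RIA
comp = competitive_elisa_from_ria(ria_value)  # 13.92
sand = sandwich_elisa_from_ria(ria_value)     # 8.71
```

That the two slopes sit on opposite sides of 1 is exactly the lack of agreement the Bland-Altman analysis flagged: the three methods cannot be compared without such a conversion.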
Abstract:
The study aim was to determine whether using automated side loader (ASL) trucks in higher proportions compared to other types of trucks for residential waste collection results in lower injury rates (from all causes). The primary hypothesis was that the risk of injury was lower for workers who work with ASL trucks than for workers who work with other types of trucks used in residential waste collection. To test this hypothesis, data were collected from one of the nation's largest companies in the solid waste management industry. Different local operating units (i.e. facilities) in the company used different types of trucks to varying degrees, which created a special opportunity to examine refuse collection injuries and illnesses and the risk reduction potential of ASL trucks.

The study design was ecological and analyzed end-of-year data provided by the company for calendar year 2007. During 2007, there were a total of 345 facilities which provided residential services. Each facility represented one observation.

The dependent variable, injury and illness rate, was defined as a facility's total case incidence rate (TCIR) recorded in accordance with federal OSHA requirements for the year 2007. The TCIR is the rate of total recordable injury and illness cases per 100 full-time workers. The independent variable, percent of ASL trucks, was calculated by dividing the number of ASL trucks by the total number of residential trucks at each facility.

Multiple linear regression models were estimated for the impact of the percent of ASL trucks on TCIR per facility. Adjusted analyses included three covariates: median number of hours worked per week for residential workers; median number of months of work experience for residential workers; and median age of residential workers. All analyses were performed with the statistical software, Stata IC (version 11.0).

The analyses included three approaches to classifying exposure, percent of ASL trucks. The first approach included two levels of exposure: (1) 0% and (2) >0 - <100%. The second approach included three levels of exposure: (1) 0%, (2) ≥ 1 - < 100%, and (3) 100%. The third approach included six levels of exposure to improve detection of a dose-response relationship: (1) 0%, (2) 1 to <25%, (3) 25 to <50%, (4) 50 to <75%, (5) 75 to <100%, and (6) 100%. None of the relationships between injury and illness rate and percent ASL trucks exposure levels was statistically significant (i.e., p<0.05), even after adjustment for all three covariates.

In summary, the present study found some indication of a risk reduction impact of ASL trucks, but the effect was not statistically significant. The covariates demonstrated a varied yet more modest impact on the injury and illness rate, but again, none of the relationships between injury and illness rate and the covariates was statistically significant (i.e., p<0.05). However, as an ecological study, the present study has the limitations inherent in such designs and warrants replication in an individual-level cohort design. Stronger conclusions are not warranted.
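The adjusted model described above (TCIR regressed on percent ASL plus three covariates) can be sketched with ordinary least squares in numpy. The study used Stata; the data below are synthetic and purely illustrative of the model's form, not the company's records.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 345  # one observation per facility, as in the study

# Synthetic facility-level variables (illustrative only)
pct_asl = rng.uniform(0, 100, n)   # percent of ASL trucks at the facility
hours   = rng.normal(45, 5, n)     # median weekly hours, residential workers
months  = rng.normal(60, 20, n)    # median months of work experience
age     = rng.normal(40, 8, n)     # median worker age
tcir    = 6.0 - 0.01 * pct_asl + rng.normal(0, 2, n)  # injury/illness rate

# Design matrix with an intercept column; solve OLS by least squares
X = np.column_stack([np.ones(n), pct_asl, hours, months, age])
beta, *_ = np.linalg.lstsq(X, tcir, rcond=None)
asl_coef = beta[1]  # estimated change in TCIR per percentage point of ASL trucks
```

In the actual study this coefficient pointed toward risk reduction but did not reach significance; a full analysis would also report standard errors and p-values, which Stata's `regress` provides directly.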
Abstract:
A joint research effort to develop an efficient method for automated identification and quantification of ores [1], based on Reflected Light Microscopy (RLM) in the VNIR realm (Fig. 1), provides an alternative to modern SEM-based equipment used by geometallurgists, but for roughly 1/10th of the price.
Abstract:
A fully automated and online artifact removal method for the electroencephalogram (EEG) is developed for use in brain-computer interfacing. The method (FORCe) is based upon a novel combination of wavelet decomposition, independent component analysis, and thresholding. FORCe is able to operate on a small channel set during online EEG acquisition and does not require additional signals (e.g. electrooculogram signals). Evaluation of FORCe is performed offline on EEG recorded from 13 BCI participants with cerebral palsy (CP) and online with three healthy participants. The method outperforms the state-of-the-art automated artifact removal methods Lagged auto-mutual information clustering (LAMIC) and Fully automated statistical thresholding (FASTER), and is able to remove a wide range of artifact types including blink, electromyogram (EMG), and electrooculogram (EOG) artifacts.
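FORCe combines wavelet decomposition, ICA, and thresholding; the sketch below illustrates only the wavelet-thresholding ingredient, using a one-level Haar transform in plain numpy. It is a toy illustration of the idea, not the authors' implementation, and the threshold value is arbitrary.

```python
import numpy as np

def haar_threshold(signal: np.ndarray, thresh: float) -> np.ndarray:
    """One-level Haar wavelet decomposition, hard-threshold the detail
    coefficients (where brief high-amplitude artifacts concentrate),
    then reconstruct the signal."""
    x = signal[: len(signal) // 2 * 2]          # truncate to even length
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass coefficients
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass coefficients
    detail[np.abs(detail) > thresh] = 0.0       # suppress artifact-like spikes
    out = np.empty_like(x, dtype=float)
    out[0::2] = (approx + detail) / np.sqrt(2)  # inverse Haar transform
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

# A slow oscillation with one sharp blink-like spike at sample 100
t = np.linspace(0, 1, 256)
eeg = np.sin(2 * np.pi * 3 * t)
eeg[100] += 50.0
cleaned = haar_threshold(eeg, thresh=5.0)
```

Away from the spike the detail coefficients fall below the threshold, so the slow activity is reconstructed exactly; at the spike the detail coefficient is zeroed and the transient is attenuated. FORCe applies this kind of thresholding within a multi-level decomposition and in combination with ICA.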
Abstract:
A fully automated, versatile Temperature Programmed Desorption (TPD), Temperature Programmed Reaction (TPR) and Evolved Gas Analysis (EGA) system has been designed and fabricated. The system consists of a micro-reactor which can be evacuated to 10−6 torr and can be heated from 30 to 750°C at a rate of 5 to 30°C per minute. The gas evolved from the reactor is analysed by a quadrupole mass spectrometer (1–300 amu). Data on each of the mass scans and the temperature at a given time are acquired by a PC/AT system to generate thermograms. The functioning of the system is exemplified by the temperature programmed desorption (TPD) of oxygen from YBa2Cu3−xCoxO7±δ, catalytic ammonia oxidation to NO over YBa2Cu3O7−δ and anaerobic oxidation of methanol to CO2, CO and H2O over YBa2Cu3O7−δ (Y123) and PrBa2Cu3O7−δ (Pr123) systems.
Abstract:
The validation of a fully automated dissolved Ni monitor for in situ estuarine studies is presented, based on adsorptive cathodic stripping voltammetry (AdCSV). Dissolved Ni concentrations were determined following on-line filtration and UV digestion, and addition of an AdCSV ligand (dimethyl glyoxime) and pH buffer (N-2-hydroxyethylpiperazine-N′-2-ethanesulphonic acid). The technique is capable of up to six fully quantified Ni measurements per hour. The automated in situ methodology was applied successfully during two surveys on the Tamar estuary (south west Britain). The strongly varying sample matrix encountered in the estuarine system did not present analytical interferences, and each sample was quantified using internal standard additions. Up to 37 Ni measurements were performed during each survey, which involved 13 h of continuous sampling and analysis. The high resolution data from the winter and summer tidal cycle studies allowed a thorough interpretation of the biogeochemical processes in the studied estuarine system.
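Quantification by standard additions, as used for each estuarine sample above, extrapolates a line fitted to signal versus added analyte; the unknown concentration is the magnitude of the x-intercept. The numbers below are hypothetical AdCSV peak currents, not survey data.

```python
import numpy as np

# Standard-addition calibration: instrument response vs. added Ni standard (nM)
added = np.array([0.0, 5.0, 10.0, 15.0])     # Ni standard added to the sample
signal = np.array([12.0, 22.0, 32.0, 42.0])  # hypothetical AdCSV peak currents

# Fit a straight line; the sample concentration is the x-axis intercept
# magnitude, i.e. intercept / slope
slope, intercept = np.polyfit(added, signal, 1)
ni_conc = intercept / slope  # nM of Ni in the original sample
```

Because each sample carries its own calibration, this approach tolerates the strongly varying estuarine matrix the abstract describes: matrix effects scale the slope and intercept together, leaving the ratio unchanged.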
Abstract:
As a by-product of the 'information revolution' which is currently unfolding, lifetimes of man (and indeed computer) hours are being allocated for the automated and intelligent interpretation of data. This is particularly true in medical and clinical settings, where research into machine-assisted diagnosis of physiological conditions gains momentum daily. Of the conditions which have been addressed, however, automated classification of allergy has not been investigated, even though the number of allergic persons is rising, and undiagnosed allergies can have fatal consequences. On the basis of the observations of allergists who conduct oral food challenges (OFCs), activity-based analyses of allergy tests were performed. Algorithms were investigated and validated by a pilot study which verified that accelerometer-based inquiry of human movements is particularly well-suited for objective appraisal of activity. However, when these analyses were applied to OFCs, accelerometer-based investigations were found to provide very poor separation between allergic and non-allergic persons, and it was concluded that the avenues explored in this thesis are inadequate for the classification of allergy. Heart rate variability (HRV) analysis is known to provide very significant diagnostic information for many conditions. Owing to this, electrocardiograms (ECGs) were recorded during OFCs for the purpose of assessing the effect that allergy induces on HRV features. It was found that with appropriate analysis, excellent separation between allergic and non-allergic subjects can be obtained. These results were, however, obtained with manual QRS annotations, and these are not a viable methodology for real-time diagnostic applications. Even so, this was the first work which has categorically correlated changes in HRV features to the onset of allergic events, and manual annotations yield undeniable affirmation of this.
Fostered by the successful results which were obtained with manual classifications, automatic QRS detection algorithms were investigated to facilitate the fully automated classification of allergy. The results which were obtained by this process are very promising. Most importantly, the work that is presented in this thesis did not obtain any false positive classifications. This is a most desirable result for OFC classification, as it allows complete confidence to be attributed to classifications of allergy. Furthermore, these results could be particularly advantageous in clinical settings, as machine-based classification can detect the onset of allergy, which can allow for early termination of OFCs. Consequently, machine-based monitoring of OFCs has in this work been shown to possess the capacity to significantly and safely advance the current clinical state of the art in allergy diagnosis.
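HRV features of the kind analyzed above are computed from the series of RR intervals between successive QRS complexes. A minimal sketch of two standard time-domain features follows; the RR values are hypothetical, and the thesis does not specify which features it used.

```python
import numpy as np

def hrv_features(rr_ms: np.ndarray) -> dict:
    """Two standard time-domain HRV features from RR intervals in ms."""
    diffs = np.diff(rr_ms)
    return {
        "SDNN": rr_ms.std(ddof=1),              # overall variability
        "RMSSD": np.sqrt(np.mean(diffs ** 2)),  # beat-to-beat variability
    }

# Hypothetical RR series (ms), e.g. derived from QRS annotations of an ECG
rr = np.array([800.0, 810.0, 790.0, 805.0, 795.0, 815.0, 800.0])
feats = hrv_features(rr)
```

In a fully automated pipeline, the manual QRS annotations would be replaced by a QRS detector, and features like these would feed the allergic/non-allergic classifier.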
Abstract:
The second round of the community-wide initiative Critical Assessment of automated Structure Determination of Proteins by NMR (CASD-NMR-2013) comprised ten blind target datasets, consisting of unprocessed spectral data, assigned chemical shift lists and unassigned NOESY peak and RDC lists, that were made available in both curated (i.e. manually refined) and un-curated (i.e. automatically generated) form. Ten structure calculation programs, using fully automated protocols only, generated a total of 164 three-dimensional structures (entries) for the ten targets, sometimes using both curated and un-curated lists to generate multiple entries for a single target. The accuracy of the entries could be established by comparing them to the corresponding manually solved structure of each target, which was not available at the time the data were provided. Across the entire data set, 71% of all entries submitted achieved an accuracy relative to the reference NMR structure better than 1.5 Å. Methods based on NOESY peak lists achieved even better results, with up to 100% of the entries within the 1.5 Å threshold for some programs. However, some methods did not converge for some targets using un-curated NOESY peak lists. Over 90% of the entries achieved an accuracy better than the more relaxed threshold of 2.5 Å that was used in the previous CASD-NMR-2010 round. Comparisons between entries generated with un-curated versus curated peaks show only marginal improvements for the latter in those cases where both calculations converged.
Abstract:
Tissue microarray (TMA) is a high throughput analysis tool to identify new diagnostic and prognostic markers in human cancers. However, standard automated methods for tumour detection on both routine histochemical and immunohistochemistry (IHC) images remain underdeveloped. This paper presents a robust automated tumour cell segmentation model which can be applied to both routine histochemical tissue slides and IHC slides and performs finer pixel-based segmentation, in comparison with the blob- or area-based segmentation of existing approaches. The presented technique greatly improves the process of TMA construction and plays an important role in automated IHC quantification in biomarker analysis, where excluding stroma areas is critical. With the finest pixel-based evaluation (instead of area-based or object-based), the experimental results show that the proposed method achieves 80% and 78% accuracy on two different types of pathological virtual slides, i.e., routine histochemical H&E and IHC images, respectively. The presented technique greatly reduces labor-intensive workloads for pathologists, speeds up the process of TMA construction, and provides a possibility for fully automated IHC quantification.
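The pixel-based evaluation used above simply scores the fraction of pixels where the predicted tumour mask agrees with the ground truth, rather than matching blobs or regions. A minimal sketch with hypothetical binary masks:

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of pixels where the predicted mask matches the ground truth."""
    return float((pred == truth).mean())

# Tiny hypothetical binary masks (1 = tumour pixel, 0 = stroma/background)
truth = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 0, 0]])
pred  = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 1, 0]])
acc = pixel_accuracy(pred, truth)  # 10 of 12 pixels agree
```

Pixel-level scoring penalizes every misclassified pixel, including stray stroma pixels inside a detected region, which blob- or area-based evaluation can hide.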