24 results for automated lexical analysis

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance: 90.00%

Abstract:

For acutely lethal influenza infections, the relative pathogenic contributions of direct viral damage to lung epithelium versus dysregulated immunity remain unresolved. Here, we take a top-down systems approach to this question. Multigene transcriptional signatures from infected lungs suggested that elevated activation of inflammatory signaling networks distinguished lethal from sublethal infections. Flow cytometry and gene expression analysis involving isolated cell subpopulations from infected lungs showed that neutrophil influx largely accounted for the predictive transcriptional signature. Automated imaging analysis, together with these gene expression and flow data, identified a chemokine-driven feedforward circuit involving proinflammatory neutrophils potently driven by poorly contained lethal viruses. Consistent with these data, attenuation, but not ablation, of the neutrophil-driven response increased survival without changing viral spread. These findings establish the primacy of damaging innate inflammation in at least some forms of influenza-induced lethality and provide a roadmap for the systematic dissection of infection-associated pathology.

Relevance: 80.00%

Abstract:

Background Chronic obstructive pulmonary disease (COPD) is a respiratory inflammatory condition with autoimmune features, including IgG autoantibodies. In this study we analyze the complexity of the autoantibody response and reveal the nature of the antigens that are recognized by autoantibodies in COPD patients. Methods An array of 1827 gridded immunogenic peptide clones was established and screened with sera from 17 COPD patients and 60 healthy controls. Protein arrays were evaluated both by visual inspection and by a recently developed computer-aided image analysis technique, with which we computed the intensity value for each peptide clone and each serum and calculated the area under the receiver operating characteristic curve (AUC) for each clone for the separation of COPD sera from control sera. Results By visual evaluation we detected 381 peptide clones that reacted with autoantibodies of COPD patients, including 17 clones that reacted with more than 60% of the COPD sera and seven clones that reacted with more than 90% of the COPD sera. The comparison of COPD sera and controls by the automated image analysis system identified 212 peptide clones with informative AUC values. By in silico sequence analysis we found an enrichment of sequence motifs previously associated with immunogenicity. Conclusion The identification of a rather complex humoral immune response in COPD patients supports the idea of COPD as a disease with strong autoimmune features. The identification of novel immunogenic antigens is a first step towards a better understanding of the autoimmune component of COPD.
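The per-clone AUC described above can be computed rank-wise; it is equivalent to the normalized Mann-Whitney U statistic. A minimal sketch under that assumption; the spot intensities below are invented for illustration, not the study's data:

```python
def clone_auc(case_vals, control_vals):
    """Rank-based AUC (equivalent to the Mann-Whitney U statistic):
    the probability that a randomly chosen COPD serum yields a higher
    spot intensity for this clone than a randomly chosen control serum."""
    wins = 0.0
    for case in case_vals:
        for ctrl in control_vals:
            if case > ctrl:
                wins += 1.0
            elif case == ctrl:
                wins += 0.5  # ties count half
    return wins / (len(case_vals) * len(control_vals))

# Hypothetical spot intensities for a single peptide clone
copd_sera = [0.9, 0.8, 0.75, 0.6]
control_sera = [0.3, 0.5, 0.65, 0.55]
auc = clone_auc(copd_sera, control_sera)  # 15 of 16 pairs favor COPD
```

A clone whose AUC is near 0.5 carries no information for separating the two groups; "informative AUC values" correspond to clones well above (or below) that chance level.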

Relevance: 80.00%

Abstract:

Vegetation phenology is an important indicator of climate change and climate variability and it is strongly connected to biospheric–atmospheric gas exchange. We aimed to evaluate the applicability of phenological information derived from digital imagery for the interpretation of CO2 exchange measurements. For the years 2005–2007 we analyzed the seasonal phenological development of two temperate mixed forests using tower-based imagery from standard RGB cameras. Phenological information was jointly analyzed with gross primary productivity (GPP) derived from net ecosystem exchange data. Automated image analysis provided reliable information on vegetation developmental stages of beech and ash trees covering all seasons. A phenological index derived from image color values was strongly correlated with GPP, with a significant mean time lag of several days for ash trees and several weeks for beech trees in early summer (May to mid-July). Leaf emergence dates for the dominant tree species partly explained the temporal behaviour of spring GPP but were also masked by local meteorological conditions. We conclude that digital cameras at flux measurement sites not only provide an objective measure of the physiological state of a forest canopy at high temporal and spatial resolutions, but also complement CO2 and water exchange measurements, improving our knowledge of ecosystem processes.
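A common way to derive a phenological index from RGB camera images is the green chromatic coordinate, G/(R+G+B), averaged over a canopy region of interest. The abstract does not name its exact index, so the sketch below assumes this widely used formulation; the pixel values are hypothetical:

```python
def green_chromatic_coordinate(rgb_pixels):
    """Mean green chromatic coordinate G / (R + G + B) over a canopy
    region of interest; a standard greenness index for tower cameras."""
    total = 0.0
    for r, g, b in rgb_pixels:
        s = r + g + b
        if s > 0:
            total += g / s
    return total / len(rgb_pixels)

# Hypothetical ROI pixels: leaf-out shifts the index upward in spring
winter_roi = [(120, 110, 100), (130, 115, 105)]
summer_roi = [(80, 140, 60), (90, 150, 70)]
gcc_winter = green_chromatic_coordinate(winter_roi)
gcc_summer = green_chromatic_coordinate(summer_roi)
```

Tracking this index through the season yields the greenness curve that can then be lag-correlated against daily GPP.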

Relevance: 80.00%

Abstract:

Imprecise manipulation of source code (semi-parsing) is useful for tasks such as robust parsing, error recovery, lexical analysis, and rapid development of parsers for data extraction. An island grammar precisely defines only a subset of a language syntax (islands), while the rest of the syntax (water) is defined imprecisely. Usually, water is defined as the negation of islands. Albeit simple, such a definition of water is naive and impedes composition of islands. When developing an island grammar, sooner or later a programmer has to create water tailored to each individual island. Such an approach is fragile, however, because water can change with any change of a grammar. It is time-consuming, because water is defined manually by a programmer and not automatically. Finally, an island surrounded by water cannot be reused because water has to be defined for every grammar individually. In this paper we propose a new technique of island parsing - bounded seas. Bounded seas are composable, robust, reusable and easy to use because island-specific water is created automatically. We integrated bounded seas into a parser combinator framework as a demonstration of their composability and reusability.

Relevance: 80.00%

Abstract:

Abstract Imprecise manipulation of source code (semi-parsing) is useful for tasks such as robust parsing, error recovery, lexical analysis, and rapid development of parsers for data extraction. An island grammar precisely defines only a subset of a language syntax (islands), while the rest of the syntax (water) is defined imprecisely. Usually, water is defined as the negation of islands. Albeit simple, such a definition of water is naive and impedes composition of islands. When developing an island grammar, sooner or later a language engineer has to create water tailored to each individual island. Such an approach is fragile, because water can change with any change of a grammar. It is time-consuming, because water is defined manually by an engineer and not automatically. Finally, an island surrounded by water cannot be reused because water has to be defined for every grammar individually. In this paper we propose a new technique of island parsing - bounded seas. Bounded seas are composable, robust, reusable and easy to use because island-specific water is created automatically. Our work focuses on applications of island parsing to data extraction from source code. We have integrated bounded seas into a parser combinator framework as a demonstration of their composability and reusability.
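The island/water split can be illustrated very loosely in a few lines: a pattern defines the islands precisely, and everything between matches is water that gets skipped without a hand-written rule. This is a simplified stand-in for the idea, not the authors' parser combinator implementation (real bounded seas derive the water's boundaries from the surrounding grammar context, which is what makes them composable):

```python
import re

def extract_islands(source, island_pattern):
    """Match 'islands' precisely with a pattern; everything between
    matches is 'water' that is skipped automatically. A loose analogy
    only: bounded seas compute water boundaries from grammar context."""
    return [m.group(1) for m in re.finditer(island_pattern, source)]

# Hypothetical data-extraction task: pull method names out of messy input
code = "junk junk def foo(): pass ??? def bar(x): return x %% noise"
names = extract_islands(code, r"def\s+(\w+)\s*\(")  # islands: def-headers
```

Note what the regex approach cannot do and the paper addresses: nesting one island inside another, or reusing an island in a second grammar, forces the water definition to be reworked by hand.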

Relevance: 40.00%

Abstract:

BACKGROUND Multiple breath washout (MBW) derived Scond is an established index of ventilation inhomogeneity. Time-consuming post hoc calculation of the expirogram's slope of alveolar phase III (SIII) and the lack of available software have hampered widespread application of Scond. METHODS Seventy-two school-aged children (45 with cystic fibrosis; CF) performed three nitrogen MBW tests. We tested a new automated algorithm for Scond analysis (Scondauto), which comprised breath selection for SIII detection, calculation, and reporting of test quality. We compared Scondauto to (i) standard Scond analysis (Scondmanual) with manual breath selection and (ii) pragmatic Scond analysis including all breaths (Scondall). Primary outcomes were success rate, agreement between the different Scond protocols, and Scond fitting quality (linear regression R²). RESULTS Average Scondauto (0.06 for CF and 0.01 for controls) was not different from Scondmanual (0.06 for CF and 0.01 for controls) and showed comparable fitting quality (R² 0.53 for CF and 0.13 for controls vs. R² 0.54 for CF and 0.13 for controls). Scondall was similar in CF and controls but with inferior fitting quality compared to Scondauto and Scondmanual. CONCLUSIONS Automated Scond calculation is feasible and produces robust results comparable to the standard manual Scond calculation. This algorithm provides a valid, fast and objective tool for regular use, even in children. Pediatr Pulmonol. © 2014 Wiley Periodicals, Inc.
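Scond is conventionally the slope of a linear regression of normalized SIII against lung turnover over the selected breaths, and the fitting quality quoted above is the regression R². A sketch under that convention; the breath data are hypothetical:

```python
def fit_scond(turnovers, siii_values):
    """Least-squares line through (lung turnover, normalized SIII)
    points: the slope is Scond and R² reports the fitting quality."""
    n = len(turnovers)
    mx = sum(turnovers) / n
    my = sum(siii_values) / n
    sxx = sum((x - mx) ** 2 for x in turnovers)
    sxy = sum((x - mx) * (y - my) for x, y in zip(turnovers, siii_values))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2
                 for x, y in zip(turnovers, siii_values))
    ss_tot = sum((y - my) ** 2 for y in siii_values)
    r2 = 1.0 - ss_res / ss_tot
    return slope, r2

# Hypothetical breaths between 1.5 and 6.0 lung turnovers
to = [1.5, 2.0, 3.0, 4.0, 5.0, 6.0]
siii = [0.10, 0.13, 0.19, 0.25, 0.31, 0.37]
scond, r2 = fit_scond(to, siii)
```

The automated algorithm's contribution is upstream of this fit: selecting which breaths yield a detectable SIII, so that the regression is run only on valid points.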

Relevance: 40.00%

Abstract:

BACKGROUND Precise detection of volume change allows better estimation of the biological behavior of lung nodules. Postprocessing tools with automated detection, segmentation, and volumetric analysis of lung nodules may expedite radiological workflows and give additional confidence to radiologists. PURPOSE To compare two postprocessing software algorithms (LMS Lung, Median Technologies; LungCARE®, Siemens) for CT volumetric measurement and to analyze the effect of a soft (B30) and a hard reconstruction filter (B70) on automated volume measurement. MATERIAL AND METHODS Between January 2010 and April 2010, 45 patients with a total of 113 pulmonary nodules were included. The CT exam was performed on a 64-row multidetector CT scanner (Somatom Sensation, Siemens, Erlangen, Germany) with the following parameters: collimation, 24 x 1.2 mm; pitch, 1.15; voltage, 120 kVp; reference tube current-time product, 100 mAs. Automated volumetric measurement of each lung nodule was performed with the two postprocessing algorithms on both reconstruction filters (B30 and B70). The average relative volume measurement difference (VME%) and the limits of agreement between the two methods were used for comparison. RESULTS With soft reconstruction filters, the LMS system produced mean nodule volumes that were 34.1% (P < 0.0001) larger than those produced by the LungCARE® system. The VME% was 42.2%, with limits of agreement between -53.9% and 138.4%. Volume measurements with the soft filter (B30) were significantly larger than with the hard filter (B70): 11.2% for LMS and 1.6% for LungCARE®, respectively (both P < 0.05). LMS measured greater volumes than LungCARE® with both filters: 13.6% for the soft and 3.8% for the hard filter (P < 0.01 and P > 0.05, respectively).
CONCLUSION There is substantial inter-software (LMS/LungCARE®) as well as intra-software (B30/B70) variability in lung nodule volume measurement; it is therefore mandatory to use the same software with the same reconstruction filter for the follow-up of lung nodule volume.
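The limits of agreement quoted above follow the standard Bland-Altman construction: the mean of the per-nodule relative differences plus or minus 1.96 standard deviations. A sketch with hypothetical nodule volumes; the numbers are not the study's data:

```python
def bland_altman_relative(vol_a, vol_b):
    """Per-nodule relative volume differences (in %, normalized to the
    pair mean) with Bland-Altman mean bias and 95% limits of agreement
    (mean ± 1.96 SD of the differences)."""
    diffs = [200.0 * (a - b) / (a + b) for a, b in zip(vol_a, vol_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = (sum((d - mean) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return mean, mean - 1.96 * sd, mean + 1.96 * sd

# Hypothetical nodule volumes (mm³) from the two software packages
lms = [120.0, 250.0, 90.0, 400.0]
lungcare = [100.0, 200.0, 80.0, 310.0]
bias, lo, hi = bland_altman_relative(lms, lungcare)
```

Wide limits of agreement, as in the study (-53.9% to 138.4%), mean a single nodule's volume can differ greatly between the two packages even when the mean bias looks moderate, which is why follow-up must not switch software or filter.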

Relevance: 40.00%

Abstract:

PURPOSE Quantification of retinal layers using automated segmentation of optical coherence tomography (OCT) images allows for longitudinal studies of retinal and neurological disorders in mice. The purpose of this study was to compare the performance of automated retinal layer segmentation algorithms with data from manual segmentation in mice using the Spectralis OCT. METHODS Spectral domain OCT images from 55 mice of three different strains were analyzed in total. The OCT scans from 22 C57Bl/6, 22 BALBc, and 11 C3A.Cg-Pde6b+Prph2Rd2/J mice were automatically segmented using three commercially available automated retinal segmentation algorithms and compared to manual segmentation. RESULTS Fully automated segmentation performed well in mice and showed coefficients of variation (CV) below 5% for total retinal volume. However, all three automated segmentation algorithms yielded much thicker total retinal thickness values compared to manual segmentation data (P < 0.0001) due to segmentation errors at the basement membrane. CONCLUSIONS Whereas the automated retinal segmentation algorithms performed well for the inner layers, the retinal pigment epithelium (RPE) was delineated within the sclera, leading to consistently thicker measurements of the photoreceptor layer and the total retina. TRANSLATIONAL RELEVANCE The introduction of spectral domain OCT allows for accurate imaging of the mouse retina. Exact quantification of retinal layer thicknesses in mice is important to study layers of interest under various pathological conditions.

Relevance: 40.00%

Abstract:

AMS-14C applications often require the analysis of small samples. Such is the case for atmospheric aerosols, where frequently only a small amount of sample is available. The ion beam physics group at ETH Zurich has designed an Automated Graphitization Equipment (AGE III) for routine graphite production for AMS analysis from organic samples of approximately 1 mg. In this study, we explore the potential use of the AGE III for graphitization of particulate carbon collected on quartz filters. To test the methodology, samples of reference materials and blanks of different sizes were prepared in the AGE III and the graphite was analyzed in a MICADAS AMS (ETH) system. The graphite samples prepared in the AGE III showed recovery yields higher than 80% and reproducible 14C values for masses ranging from 50 to 300 µg. Reproducible radiocarbon values were also obtained for small aerosol filter samples that had been graphitized in the AGE III. As a case study, the tested methodology was applied to PM10 samples collected in two urban cities in Mexico in order to compare the source apportionment of biomass and fossil fuel combustion. The obtained 14C data showed that carbonaceous aerosols from Mexico City have a much lower biogenic signature than those from the smaller city of Cuernavaca.
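Source apportionment from aerosol 14C data typically uses a two-source model: fossil carbon contains no 14C, so the biogenic fraction of total carbon is the sample's fraction modern divided by the fraction modern of purely contemporary material. The sketch below assumes that model; the reference endmember of 1.06 and the sample values are illustrative assumptions, not values from the study:

```python
def biogenic_fraction(f14c_sample, f14c_contemporary=1.06):
    """Two-source apportionment: fossil carbon is 14C-free, so the
    biogenic fraction of total carbon is the sample's fraction modern
    divided by a contemporary-biomass reference (assumed endmember)."""
    return min(f14c_sample / f14c_contemporary, 1.0)

# Hypothetical fraction-modern results for PM10 from the two cities
f_bio_mexico_city = biogenic_fraction(0.45)
f_bio_cuernavaca = biogenic_fraction(0.70)
```

The fossil fraction is simply one minus the biogenic fraction, which is how a "lower biogenic signature" translates into a larger fossil fuel contribution.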

Relevance: 30.00%

Abstract:

This paper describes informatics for cross-sample analysis with comprehensive two-dimensional gas chromatography (GCxGC) and high-resolution mass spectrometry (HRMS). GCxGC-HRMS analysis produces large data sets that are rich with information, but highly complex. The size of the data and the volume of information require automated processing for comprehensive cross-sample analysis, but the complexity poses a challenge for developing robust methods. The approach developed here analyzes GCxGC-HRMS data from multiple samples to extract a feature template that comprehensively captures the pattern of peaks detected in the retention-time plane. Then, for each sample chromatogram, the template is geometrically transformed to align with the detected peak pattern and generate a set of feature measurements for cross-sample analyses such as sample classification and biomarker discovery. The approach avoids the intractable problem of comprehensive peak matching by using a few reliable peaks for alignment and peak-based retention-plane windows to define comprehensive features that can be reliably matched for cross-sample analysis. The informatics are demonstrated with a set of 18 samples from breast-cancer tumors, each from a different individual, six each for Grades 1-3. The features allow classification that matches grading by a cancer pathologist with 78% success in leave-one-out cross-validation experiments. The HRMS signatures of the features of interest can be examined to determine elemental compositions and identify compounds.

Relevance: 30.00%

Abstract:

MRI-based medical image analysis for brain tumor studies is gaining attention due to an increased need for efficient and objective evaluation of large amounts of data. While the pioneering approaches applying automated methods to the analysis of brain tumor images date back almost two decades, current methods are becoming more mature and coming closer to routine clinical application. This review aims to provide a comprehensive overview, beginning with a brief introduction to brain tumors and their imaging. We then review the state of the art in segmentation, registration and modeling related to tumor-bearing brain images, with a focus on gliomas. The objective in segmentation is to outline the tumor, including its sub-compartments and surrounding tissues, while the main challenge in registration and modeling is the handling of morphological changes caused by the tumor. The qualities of different approaches are discussed with a focus on methods that can be applied to standard clinical imaging protocols. Finally, a critical assessment of the current state is performed and future developments and trends are addressed, giving special attention to recent developments in radiological tumor assessment guidelines.

Relevance: 30.00%

Abstract:

Behavioral studies suggest that women and men differ in the strategic elaboration of verbally encoded information especially in the absence of external task demand. However, measuring such covert processing requires other than behavioral data. The present study used event-related potentials to compare sexes in lower and higher order semantic processing during the passive reading of semantically related and unrelated word pairs. Women and men showed the same early context effect in the P1-N1 transition period. This finding indicates that the initial lexical-semantic access is similar in men and women. In contrast, sexes differed in higher order semantic processing. Women showed an earlier and longer lasting context effect in the N400 accompanied by larger signal strength in temporal networks similarly recruited by men and women. The results suggest that women spontaneously conduct a deeper semantic analysis. This leads to faster processing of related words in the active neural networks as reflected in a shorter stability of the N400 map in women. Taken together, the findings demonstrate that there is a selective sex difference in the controlled semantic analysis during passive word reading that is not reflected in different functional organization but in the depth of processing.

Relevance: 30.00%

Abstract:

BACKGROUND: In contrast to RIA, recently available ELISAs provide the potential for fully automated analysis of adiponectin. To date, studies reporting on the diagnostic characteristics of ELISAs and investigating the relationship between ELISA- and RIA-based methods are rare. METHODS: We therefore established and evaluated a fully automated platform (BEP 2000; Dade-Behring, Switzerland) for determination of adiponectin levels in serum by two different ELISA methods (competitive human adiponectin ELISA; high-sensitivity human adiponectin sandwich ELISA; both Biovendor, Czech Republic). As a reference method, we also employed a human adiponectin RIA (Linco Research, USA). Samples from 150 patients routinely presenting to our cardiology unit were tested. RESULTS: ELISA measurements could be accomplished in less than 3 h, whereas the RIA required 24 h. The ELISAs were evaluated for precision, analytical sensitivity and specificity, linearity on dilution, and spiking recovery. In the investigated patients, type 2 diabetes, higher age and male gender were significantly associated with lower serum adiponectin concentrations. Correlations between the ELISA methods and the RIA were strong (competitive ELISA, r = 0.82; sandwich ELISA, r = 0.92; both p < 0.001). However, Deming regression and Bland-Altman analysis indicated a lack of agreement among the three methods, preventing direct comparison of results. The equations of the regression lines are: competitive ELISA = 1.48 × RIA - 0.88; high-sensitivity sandwich ELISA = 0.77 × RIA + 1.01. CONCLUSIONS: Fully automated measurement of adiponectin by ELISA is feasible and substantially more rapid than RIA. The investigated ELISA test systems seem to exhibit analytical characteristics allowing for clinical application. In addition, there is a strong correlation between the ELISA methods and RIA. These findings might promote a more widespread use of adiponectin measurements in clinical research.
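Deming regression, used above to compare each ELISA with the RIA, allows for measurement error in both methods rather than only in the dependent variable; with an error-variance ratio of 1 it has a closed-form slope. A sketch under that assumption with hypothetical adiponectin values:

```python
import math

def deming(x, y, lam=1.0):
    """Deming regression: errors allowed in both methods; lam is the
    ratio of their error variances (1.0 assumes equal error).
    Returns (slope, intercept) of the y-method on the x-method."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    slope = (syy - lam * sxx
             + math.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)
             ) / (2 * sxy)
    return slope, my - slope * mx

# Hypothetical adiponectin values: an ELISA with proportional bias vs. RIA
ria = [2.0, 4.0, 6.0, 8.0, 10.0]
elisa = [2.1, 5.1, 8.1, 11.1, 14.1]  # exactly 1.5 * RIA - 0.9
slope, intercept = deming(ria, elisa)
```

A slope far from 1 or an intercept far from 0, as in the study's regression equations, is exactly what "lack of agreement despite strong correlation" means: the methods rank patients similarly but their absolute values cannot be interchanged.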

Relevance: 30.00%

Abstract:

OBJECTIVES: To determine the accuracy of automated vessel-segmentation software for vessel-diameter measurements based on three-dimensional contrast-enhanced magnetic resonance angiography (3D-MRA). METHODS: In 10 patients with high-grade carotid stenosis, automated measurements of both carotid arteries were obtained with 3D-MRA by two independent investigators and compared with manual measurements obtained by digital subtraction angiography (DSA) and 2D maximum-intensity projection (2D-MIP) based on MRA and duplex ultrasonography (US). In 42 patients undergoing carotid endarterectomy (CEA), intraoperative measurements (IOP) were compared with postoperative 3D-MRA and US. RESULTS: Mean interoperator variability was 8% for measurements by DSA and 11% by 2D-MIP, but there was no interoperator variability with the automated 3D-MRA analysis. Good correlations were found between DSA (standard of reference), manual 2D-MIP (rP = 0.6) and automated 3D-MRA (rP = 0.8). Excellent correlations were found between IOP, 3D-MRA (rP = 0.93) and US (rP = 0.83). CONCLUSION: Automated 3D-MRA-based vessel segmentation and quantification result in accurate measurements of extracerebral-vessel dimensions.