12 results for operational modal analysis

at Université de Lausanne, Switzerland


Relevance:

40.00%

Publisher:

Abstract:

Introduction: The field of connectomic research is growing rapidly, driven by methodological advances in structural neuroimaging at many spatial scales. In particular, progress in diffusion MRI data acquisition and processing has made macroscopic structural connectivity maps available in vivo through connectome mapping pipelines (Hagmann et al., 2008), producing so-called connectomes (Hagmann, 2005; Sporns et al., 2005). These exhibit both spatial and topological information that constrain functional imaging studies and are relevant to their interpretation. The need has grown for a special-purpose software tool that supports both clinical researchers and neuroscientists in investigating such connectome data.

Methods: We developed the ConnectomeViewer, a powerful, extensible software tool for visualization and analysis in connectomic research. It uses the newly defined, container-like Connectome File Format, which specifies networks (GraphML), surfaces (Gifti), volumes (Nifti), track data (TrackVis), and metadata. Using Python as the programming language makes the tool cross-platform and gives it access to a multitude of scientific libraries.

Results: Thanks to a flexible plugin architecture, functionality can easily be extended for specific purposes. The following features are already implemented:
* Ready use of libraries, e.g. for complex network analysis (NetworkX) and data plotting (Matplotlib). More brain connectivity measures will be implemented in a future release (Rubinov et al., 2009).
* 3D view of networks, with node positioning based on the corresponding ROI surface patch; other layouts are possible.
* Picking functionality to select nodes and edges, retrieve additional node information (ConnectomeWiki), and toggle surface representations.
* Interactive thresholding and modality selection of edge properties using filters.
* Storage of arbitrary metadata for networks, allowing e.g. group-based analysis or meta-analysis.
* A Python shell for scripting (see the sketch after this abstract); application data is exposed and can be modified or used for further post-processing.
* Visualization pipelines composed of filters and modules with Mayavi (Ramachandran et al., 2008).
* An interface to TrackVis to visualize track data; selected nodes are converted to ROIs for fiber filtering.

The Connectome Mapping Pipeline (Hagmann et al., 2008) was used to process 20 healthy subjects into an average connectome dataset. The figures show the ConnectomeViewer user interface with this dataset, displaying connections that occur in all 20 subjects. The dataset is freely available from the homepage (connectomeviewer.org).

Conclusions: The ConnectomeViewer is a cross-platform, open-source software tool that provides extensive visualization and analysis capabilities for connectomic research. It has a modular architecture, integrates the relevant data types, and is completely scriptable. Visit www.connectomics.org to get involved as a user or developer.
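The scripting access described above can be illustrated with a few lines of NetworkX. The sketch below is not taken from the ConnectomeViewer codebase; the file name and the 'density' edge attribute are assumptions, standing in for whatever GraphML network a given Connectome File actually contains.

```python
# Minimal sketch of the kind of scripting the ConnectomeViewer's Python shell
# enables. The file name and the edge attribute name are hypothetical; a real
# Connectome File carries GraphML networks, as the abstract describes.
import networkx as nx

# Load a connectome network stored as GraphML (one of the CFF payloads).
G = nx.read_graphml("average_connectome.graphml")

# Interactive thresholding, done programmatically: keep only edges whose
# (hypothetical) 'density' attribute exceeds a cutoff.
threshold = 0.1
weak = [(u, v) for u, v, d in G.edges(data=True)
        if float(d.get("density", 0.0)) < threshold]
G.remove_edges_from(weak)

# Complex network analysis with NetworkX, as the abstract mentions.
print("nodes:", G.number_of_nodes())
print("edges:", G.number_of_edges())
print("average clustering:", nx.average_clustering(G))
```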

Relevance:

30.00%

Publisher:

Abstract:

This guide introduces Data Envelopment Analysis (DEA), a performance measurement technique, in a way that is appropriate for decision makers with little or no background in economics and operational research. The use of mathematics is kept to a minimum. The guide therefore adopts a strongly practical approach, allowing decision makers to conduct their own efficiency analysis and to interpret the results easily. DEA helps decision makers in the following ways (a small worked sketch follows this list):
- By calculating an efficiency score, it indicates whether a firm is efficient or has capacity for improvement.
- By setting target values for inputs and outputs, it calculates by how much input must be decreased or output increased for the firm to become efficient.
- By identifying the nature of returns to scale, it indicates whether a firm should decrease or increase its scale (or size) in order to minimize its average cost.
- By identifying a set of benchmarks, it specifies which other firms' processes should be analysed in order for the firm to improve its own practices.
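Although the guide itself keeps the mathematics to a minimum, the efficiency score in the first bullet is, in the standard input-oriented CCR formulation, the optimum of a small linear program. The following sketch solves it with SciPy on made-up data for four firms with two inputs and one output; it illustrates the general DEA idea rather than the guide's own procedure.

```python
# Sketch of an input-oriented CCR DEA efficiency score, solved as a linear
# program with SciPy. The input/output data are made up for illustration.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 6.0, 4.0],       # inputs:  rows = inputs, cols = firms
              [5.0, 4.0, 8.0, 6.0]])
Y = np.array([[10.0, 12.0, 20.0, 11.0]])  # outputs: rows = outputs, cols = firms

def ccr_efficiency(o):
    """Efficiency of firm o: minimize theta such that some peer combination
    lambda uses at most theta times firm o's inputs and produces at least
    firm o's outputs."""
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1 .. lambda_n]
    c = np.zeros(1 + n)
    c[0] = 1.0                                 # minimize theta
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, o]; A_ub[:m, 1:] = X   # sum_j lambda_j x_j <= theta x_o
    A_ub[m:, 1:] = -Y;      b_ub[m:] = -Y[:, o]  # sum_j lambda_j y_j >= y_o
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
    return res.fun

for o in range(X.shape[1]):
    print(f"firm {o}: efficiency = {ccr_efficiency(o):.3f}")
```

A score of 1 places a firm on the efficient frontier; a score of, say, 0.8 means its inputs could in principle be scaled down to 80% while maintaining output, which is exactly the target-setting use described above.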

Relevance:

30.00%

Publisher:

Abstract:

Several lines of evidence suggest that astrocytes release small transmitter molecules, peptides, and protein factors via regulated exocytosis, implying that they function as specialized neurosecretory cells. However, very little is known about the molecular and functional properties of regulated secretion in astrocytes in the adult brain. Establishing these properties is central to understanding the communication mode(s) of these cells and their role(s) in the control of synaptic function and of cerebral blood flow. In this study, we set up a high-resolution confocal microscopy approach to distinguish protein expression in astrocytic structures from that in neighboring synaptic terminals in adult brain tissue. This approach was applied to investigate the expression pattern of core SNARE proteins for vesicle fusion in the dentate gyrus and CA1 regions of the mouse hippocampus. Our comparative analysis shows that astrocytes abundantly express, in their cell body and main processes, all three protein partners necessary to form an operational SNARE complex, but not in the same isoforms expressed in neighboring synaptic terminals. Thus, SNAP25 and VAMP2 are absent from astrocytic processes and typically concentrated in terminals, while SNAP23 and VAMP3 show the opposite expression pattern. Syntaxin 1 is present in both synaptic terminals and astrocytes. These data support the view that astrocytes in the adult hippocampus can communicate via regulated exocytosis, and also indicate that astrocytic exocytosis may differ in its properties from action potential-dependent exocytosis at neuronal synapses, as it relies on a distinctive set of SNARE proteins.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Disease-management programs may enhance the quality of care provided to patients with chronic diseases such as chronic obstructive pulmonary disease (COPD). The aim of this systematic review was to assess the effectiveness of COPD disease-management programs. METHODS: We conducted a computerized search of MEDLINE, EMBASE, CINAHL, PsycINFO, and the Cochrane Library (CENTRAL) for studies evaluating interventions that met our operational definition of disease management: patient education, 2 or more different intervention components, 2 or more health care professionals actively involved in patients' care, and an intervention lasting 12 months or more. Programs conducted in hospital only and those targeting patients receiving palliative care were excluded. Two reviewers evaluated 12,749 titles and fully reviewed 139 articles; data from 13 of these studies were included and extracted. The clinical outcomes considered were all-cause mortality, lung function, exercise capacity (walking distance), health-related quality of life, symptoms, COPD exacerbations, and health care use. A meta-analysis of exercise capacity and all-cause mortality was performed using random-effects models. RESULTS: The included studies were 9 randomized controlled trials, 1 controlled trial, and 3 uncontrolled before-after trials. The disease-management programs studied significantly improved exercise capacity (32.2 m; 95% confidence interval [CI], 4.1-60.3), decreased the risk of hospitalization, and moderately improved health-related quality of life. All-cause mortality did not differ between groups (pooled odds ratio 0.84; 95% CI, 0.54-1.40). CONCLUSION: COPD disease-management programs modestly improved exercise capacity and health-related quality of life and reduced hospital admissions, but not all-cause mortality. Future studies should explore which specific elements or characteristics of these programs bring the greatest benefit.
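The pooled results above come from random-effects models; a standard way to compute such a pooled estimate is the DerSimonian-Laird method, sketched below on made-up study-level mean differences and variances (not the review's actual data).

```python
# DerSimonian-Laird random-effects pooling, the standard computation behind
# pooled estimates such as the 32.2 m walking-distance gain. The study-level
# effects and variances below are made up for illustration.
import math

effects   = [25.0, 40.0, 10.0, 55.0]     # mean differences (metres), hypothetical
variances = [100.0, 225.0, 144.0, 400.0]

k = len(effects)
w = [1.0 / v for v in variances]         # fixed-effect (inverse-variance) weights
fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

# Cochran's Q and the DL estimate of between-study variance tau^2.
Q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (k - 1)) / C)

# Random-effects weights, pooled effect, and 95% CI.
w_re = [1.0 / (v + tau2) for v in variances]
pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
se = math.sqrt(1.0 / sum(w_re))
print(f"pooled effect = {pooled:.1f} m, "
      f"95% CI ({pooled - 1.96 * se:.1f}, {pooled + 1.96 * se:.1f})")
```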

Relevance:

30.00%

Publisher:

Abstract:

The research reported in this series of articles aimed at (1) automating the search of questioned ink specimens in ink reference collections and (2) evaluating the strength of ink evidence in a transparent and balanced manner. These aims require that ink samples be analysed in an accurate and reproducible way and compared in an objective and automated way, the latter requirement being due to the large number of comparisons necessary in both scenarios. A research programme was designed to (a) develop a standard methodology for analysing ink samples in a reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in forensic contexts. This report focuses on the last of the three stages of the research programme. The calibration and acquisition process and the mathematical comparison algorithms were described in previous papers [C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science - Part I: Development of a quality assurance process for forensic ink analysis by HPTLC, Forensic Sci. Int. 185 (2009) 29-37; C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science - Part II: Development and testing of mathematical algorithms for the automatic comparison of ink samples analysed by HPTLC, Forensic Sci. Int. 185 (2009) 38-50]. In this paper, the benefits and challenges of the proposed concepts are tested in two forensic contexts: (1) ink identification and (2) ink evidential value assessment. The results show that different algorithms are better suited to different tasks. This research shows that it is possible to build digital ink libraries using the most commonly used ink analytical technique, high-performance thin layer chromatography (HPTLC), despite its reputation for lacking reproducibility. More importantly, it is possible to assign evidential value to ink evidence in a transparent way using a probabilistic model, and therefore to move away from the traditional subjective approach, which is based entirely on experts' opinion and is usually not very informative. While there is room for improvement, this report demonstrates the significant gains over the traditional subjective approach in both the search of ink specimens in ink databases and the interpretation of their evidential value.
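The comparison algorithms themselves are detailed in Part II of the series; purely as an illustration of what comparing ink samples "automatically and objectively" can look like, the sketch below scores two digitised HPTLC intensity profiles with a Pearson correlation, a common similarity measure for such data. Both the profiles and the choice of measure are assumptions, not the papers' actual method.

```python
# Illustrative comparison of two digitised HPTLC intensity profiles using
# Pearson correlation. This is one common similarity measure for such data,
# not the specific algorithms of Neumann & Margot (Part II); the profiles
# are made up.
import numpy as np

questioned = np.array([0.1, 0.4, 0.9, 0.5, 0.2, 0.1, 0.3])
reference  = np.array([0.1, 0.5, 0.8, 0.5, 0.2, 0.2, 0.3])

def similarity(a, b):
    """Pearson correlation between two intensity profiles, in [-1, 1]."""
    return float(np.corrcoef(a, b)[0, 1])

print(f"similarity = {similarity(questioned, reference):.3f}")
# Searching a digital ink library then amounts to ranking all reference
# profiles by such a score and inspecting the best-scoring candidates.
```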

Relevance:

30.00%

Publisher:

Abstract:

One of the key challenges in the field of nanoparticle (NP) analysis is producing reliable and reproducible characterisation data for nanomaterials. This study examines reproducibility using a relatively new but rapidly adopted technique, Nanoparticle Tracking Analysis (NTA), across a range of particle sizes and materials in several different media. It describes the development of the protocol and presents both the data and the analysis of results obtained from 12 laboratories, mostly based in Europe and primarily QualityNano members. QualityNano is an EU FP7-funded Research Infrastructure that integrates 28 European analytical and experimental facilities in nanotechnology, medicine and the natural sciences, with the goal of developing and implementing best practice and quality in all aspects of nanosafety assessment. The study shows how the development of the protocol led to highly reproducible results amongst participants. The parameter measured throughout is the modal particle size.
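The modal particle size is simply the most frequent size in the measured distribution. As a minimal illustration (with made-up particle sizes, not the interlaboratory data), the mode can be read off a binned size distribution like so:

```python
# Reading off the modal particle size from a binned size distribution,
# essentially what an NTA instrument reports. The sizes are made up.
import numpy as np

sizes_nm = np.array([95, 102, 98, 110, 101, 99, 103, 97, 100, 104,
                     150, 99, 101, 98, 102, 100, 96, 103, 105, 99])

counts, edges = np.histogram(sizes_nm, bins=np.arange(90, 160, 5))
modal_bin = int(np.argmax(counts))
mode_nm = 0.5 * (edges[modal_bin] + edges[modal_bin + 1])
print(f"modal particle size = {mode_nm:.0f} nm")
```

Unlike the mean, the mode is largely insensitive to a small tail of large particles or aggregates (the 150 nm value above), which makes it a convenient parameter for an interlaboratory reproducibility comparison.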

Relevance:

30.00%

Publisher:

Abstract:

CE is a powerful analytical tool for separating intact biomolecules such as proteins. Coupling CE with TOF/MS yields a very promising method for detecting and identifying proteins in different matrices. This paper describes an efficient, rapid, and simple CE-ESI-TOF/MS procedure for the analysis of endogenous human growth hormone and recombinant human growth hormone without sample preparation. Operational factors were optimized using an experimental design, and the method was successfully applied to distinguish endogenous from recombinant human growth hormone in unknown samples.
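The abstract does not specify which experimental design was used; as a generic illustration of optimizing operational factors, the sketch below enumerates a full-factorial design over hypothetical CE-ESI-TOF/MS factors and levels.

```python
# Minimal sketch of enumerating a full-factorial experimental design for
# operational factors, the general approach the abstract alludes to.
# Factor names and levels are hypothetical.
from itertools import product

factors = {
    "capillary_voltage_kV": [15, 20, 25],
    "sheath_flow_uL_min":   [3, 5],
    "buffer_pH":            [8.5, 9.5],
}

runs = list(product(*factors.values()))
for i, run in enumerate(runs, 1):
    print(i, dict(zip(factors, run)))
print(f"{len(runs)} runs in the full-factorial design")
```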

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVES: This is the first meta-analysis on the efficacy of composite resin restorations in anterior teeth. Its objective was to verify whether specific material classes, tooth-conditioning methods, and operational procedures influence the outcome of Class III and Class IV restorations. MATERIAL AND METHODS: The SCOPUS and PubMed databases were searched for clinical trials on anterior resin composites, without restriction on year of publication. The inclusion criteria were: (1) a prospective clinical trial with at least 2 years of observation; (2) a minimum of 20 restorations at the last recall; (3) a reported drop-out rate; (4) a report of the operative technique and materials used in the trial; and (5) use of Ryge or modified Ryge evaluation criteria. For the statistical analysis, a linear mixed model with random effects was used to account for heterogeneity between the studies. p-values below 0.05 were considered significant. RESULTS: Of the 84 clinical trials, 21 studies met the inclusion criteria: 14 for Class III restorations, 6 for Class IV restorations, and 1 for closure of diastemata; the latter was included in the Class IV group. Twelve of the 21 studies started before 1991 and 18 before 2001. The estimated median overall success rate (without replacement) after 10 years was 95% for Class III composite resin restorations and 90% for Class IV restorations. The main reason for the replacement of Class IV restorations was bulk fracture, which occurred significantly more frequently with microfilled composites than with hybrid and macrofilled composites. Caries adjacent to restorations was infrequent in most studies and accounted for only about 2.5% of all replaced restorations after 10 years, irrespective of cavity class. Class III restorations with glass ionomer derivatives suffered significantly more loss of anatomical form than fillings with other types of material. When the enamel was acid-etched and no bonding agent was applied, significantly more restorations showed marginal staining and detectable margins than with enamel etching plus enamel bonding or the total-etch technique; restorations placed with self-etching systems fell between these two outcomes. Bevelling of the enamel was associated with significantly less deterioration of anatomical form than no bevelling, but not with less marginal staining or fewer detectable margins. The type of isolation (absolute/relative) had a statistically significant influence on marginal caries, which, however, might be a random finding.
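A common way to compare such figures across studies is to convert a cumulative success rate into an equivalent annual failure rate (AFR), assuming a constant yearly failure probability. The abstract does not report AFRs; the short sketch below derives them from the 10-year success rates quoted above.

```python
# Converting the reported 10-year success rates into an equivalent annual
# failure rate (AFR), a standard summary in restorative dentistry, under
# the assumption of a constant yearly failure probability:
#   (1 - AFR) ** years = success_rate
def annual_failure_rate(success_rate, years=10):
    return 1.0 - success_rate ** (1.0 / years)

for cavity_class, success in [("Class III", 0.95), ("Class IV", 0.90)]:
    print(f"{cavity_class}: AFR = {annual_failure_rate(success):.2%} per year")
```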

Relevance:

30.00%

Publisher:

Abstract:

DNA is nowadays swabbed routinely to investigate serious and volume crimes, but research remains scarce on the criteria that may affect the success rate of DNA swabs taken on different surfaces and in different situations. To investigate these criteria under fully operational conditions, we considered the DNA analysis results of 4772 swabs taken in volume crime cases by the forensic unit of a police department in Western Switzerland over a 2.5-year period (2012-2014). A representative, random sample of 1236 swab analyses was extensively examined and coded, describing several criteria such as whether the swabbing was performed at the scene or in the lab, the zone of the scene where it was performed, the kind of object or surface that was swabbed, whether the target specimen was a touch surface or a biological fluid, and whether the swab targeted a single surface or combined different surfaces. The impact of each criterion, and of their combination, was assessed against the success rate of DNA analysis, measured through the quality of the resulting profile and whether the profile produced a hit in the national database. Results show that some situations, such as swabs taken on door and window handles, have a higher success rate than the average swab. Conversely, other situations lead to a marked decrease in the success rate, which should discourage further analysis of such swabs. Results also confirm that targeting a DNA swab on a single surface is preferable to swabbing different surfaces with the intent of aggregating cells deposited by the offender. Such results help predict the chance that the analysis of a swab taken in a given situation will lead to a positive result. The study can therefore inform an evidence-based approach to decision-making at the crime scene (what to swab or not) and at the triage step (what to analyse or not), thus helping to save resources and increase the efficiency of forensic science efforts.
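Analytically, the impact assessment described above amounts to cross-tabulating coded swab criteria against a binary analysis outcome. The sketch below shows this kind of per-criterion success-rate computation on a few made-up records; the column names and values are hypothetical, not the study's coding scheme.

```python
# Per-criterion success rates from coded swab records, the kind of
# cross-tabulation the study describes. Records and column names are
# hypothetical.
import pandas as pd

swabs = pd.DataFrame({
    "surface":  ["door_handle", "window_handle", "steering_wheel",
                 "door_handle", "tool", "window_handle"],
    "specimen": ["touch", "touch", "touch", "blood", "touch", "touch"],
    "hit":      [1, 1, 0, 1, 0, 1],   # 1 = usable profile / database hit
})

# Success rate per surface type, compared with the overall rate.
overall = swabs["hit"].mean()
by_surface = swabs.groupby("surface")["hit"].agg(["mean", "count"])
print(f"overall success rate: {overall:.2f}")
print(by_surface)
```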

Relevance:

30.00%

Publisher:

Abstract:

The number of qualitative research methods has grown substantially over the last twenty years, both in the social sciences and, more recently, in the health sciences. This growth came with questions about the quality criteria needed to evaluate such work, and numerous guidelines have been published. These guidelines contain many discrepancies, however, both in their vocabulary and in their construction, and many expert evaluators decry the absence of consensual, reliable evaluation tools. The authors present the results of an evaluation of 58 existing guidelines in 4 major health science fields (medicine and epidemiology; nursing and health education; social sciences and public health; psychology/psychiatry, research methods and organization) by expert users (article reviewers, experts allocating funds, editors, etc.). The results yield a toolbox of 12 consensual criteria with the definitions given by expert users, and indicate in which disciplinary fields each criterion is considered more or less essential. Nevertheless, the authors highlight the limits of the criteria's comparability once one focuses on their specific definitions. They conclude that each criterion in the toolbox must be explicated in order to reach a broader consensus and to identify definitions that are consensual across all the fields examined and easy to operationalize.