990 results for sample processing
Abstract:
Purpose: Cross-sectional imaging techniques have become pioneering tools in forensic medicine. The involvement of radiographers and the training of "forensic radiographers" improve the quality of radiological examinations and facilitate the implementation of techniques such as sample collection and post-mortem angiography. Methods and Materials: Over a period of three months, five radiographers with clinical experience underwent special training to learn procedures dedicated to forensic imaging. These procedures involved: (I) acquisition of MDCT data, (II) sample collection for toxicological or histological analyses by CT-guided biopsies and liquid sampling, (III) post-mortem angiography, and (IV) post-processing of all acquired data. For post-mortem angiography, the radiographers were in charge of preparing the perfusion device and the investigated body: cannulas were inserted into the femoral vessels and connected to the machine. During angiography, the radiographers had to synchronize the perfusion with the CT acquisitions. Results: All five radiographers acquired the skills needed to become "forensic radiographers". They were able to perform post-mortem MDCT, sample collection, post-mortem angiography and post-processing of the acquired data on their own. Most problems were observed in the preparation of the body for post-mortem angiography. Conclusion: Our experience shows that radiographers are able to perform high-quality examinations after a short period of training. Their collaboration is well accepted by the forensic team, and given the increasing number of radiological examinations in forensic departments, it would make no sense to exclude radiographers from the forensic-radiological team.
Abstract:
In ¹H magnetic resonance spectroscopy, macromolecule signals underlie the metabolite signals, and knowing their contribution is necessary for reliable metabolite quantification. When macromolecule signals are measured using an inversion-recovery pulse sequence, special care must be taken to correctly remove residual metabolite signals in order to obtain a pure macromolecule spectrum. Furthermore, since a single spectrum is commonly used for quantification in multiple experiments, the impact on metabolite quantification of potential macromolecule signal variability, due to regional differences or pathologies, has to be assessed. In this study, we introduced a novel method to post-process measured macromolecule signals that offers a flexible and robust way of removing residual metabolite signals. This method was applied to investigate regional differences in mouse brain macromolecule signals that may affect metabolite quantification when not taken into account. Since no significant differences in metabolite quantification were detected, it was concluded that a single macromolecule spectrum can generally be used for the quantification of healthy mouse brain spectra. In contrast, the study of a mouse model of human glioma showed several alterations of the macromolecule spectrum, including, but not limited to, increased mobile lipid signals, which had to be taken into account to avoid significant metabolite quantification errors.
Abstract:
Metacaspases are cysteine peptidases that could play a role similar to that of caspases in the cell death programme of plants, fungi and protozoa. The human protozoan parasite Leishmania major expresses a single metacaspase (LmjMCA) harbouring a central domain with the catalytic dyad histidine and cysteine, as found in caspases. In this study, we investigated the processing sites important for the maturation of the LmjMCA catalytic domain, the cellular localization of LmjMCA polypeptides, and the functional role of the catalytic domain in the cell death pathway of Leishmania parasites. Although the LmjMCA polypeptide precursor harbours a functional mitochondrial localization signal (MLS), we determined that LmjMCA polypeptides are mainly localized in the cytoplasm. Under stress conditions, LmjMCA precursor forms were extensively processed into soluble forms containing the catalytic domain. This domain was sufficient to enhance the sensitivity of parasites to hydrogen peroxide by impairing the mitochondrion. These data provide experimental evidence of the importance of LmjMCA processing into an active catalytic domain and of its role in disrupting the mitochondrion, which could be relevant to the design of new drugs to fight leishmaniasis and likely other protozoan parasitic diseases.
Abstract:
The objective of this work was to evaluate the chemical and physical characteristics of grains of soybean (Glycine max) cultivars for food processing. The cultivars evaluated were: grain-type BRS 133 and BRS 258; and food-type BRS 213 (null lipoxygenases), BRS 267 (vegetable-type) and BRS 216 (small grain size). The BRS 267 and BRS 216 cultivars showed higher protein content, indicating potentially superior nutritional value. The BRS 213 cultivar showed the lowest lipoxygenase activity, and BRS 267 the lowest hexanal content; these characteristics can improve soyfood flavor. After cooking, grains of the BRS 267 cultivar presented a higher content of aglycones (the more biologically active form of isoflavones) and of oleic acid, which makes it suitable for functional foods and gives it better stability during processing; it also showed high contents of fructose, glutamic acid and alanine, compounds related to the mild flavor of soybean. Because of its large grain size, BRS 267 is suitable for tofu and edamame, while the small-grained BRS 216 is good for natto and for soybean sprout production. The BRS 216 and BRS 213 cultivars presented shorter cooking times, which may help reduce processing costs.
Abstract:
Validation is the main bottleneck preventing the adoption of many medical image processing algorithms in clinical practice. In the classical approach, an a-posteriori analysis is performed based on some objective metrics. In this work, a different approach based on Petri Nets (PN) is proposed. The basic idea consists in predicting the accuracy that will result from a given processing, based on the characterization of the sources of inaccuracy of the system. Here we propose a proof of concept in the scenario of a diffusion imaging analysis pipeline. A PN is built after the detection of the possible sources of inaccuracy. By integrating the first qualitative insights based on the PN with quantitative measures, it is possible to optimize the PN itself and to predict the inaccuracy of the system in a different setting. Results show that the proposed model provides good prediction performance and suggests the optimal processing approach.
Abstract:
The objective of this study was to determine the minimum number of plants per plot that must be sampled in experiments with sugarcane (Saccharum officinarum) full-sib families in order to provide an effective estimation of genetic and phenotypic parameters of yield-related traits. The data were collected in a randomized complete block design with 18 sugarcane full-sib families and 6 replicates, with 20 plants per plot. The sample size was determined using resampling techniques with replacement, followed by the estimation of genetic and phenotypic parameters. Sample-size estimates varied according to the evaluated parameter and trait. The resampling method permits an efficient comparison of the effects of sample size on the estimation of genetic and phenotypic parameters. A sample of 16 plants per plot, or 96 individuals per family, was sufficient to obtain good estimates for all the traits evaluated. However, for Brix, if sample separation by trait were possible, ten plants per plot would be sufficient for an efficient estimate.
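A minimal sketch of the resampling-with-replacement idea described above, kept deliberately simple: it only shows how the stability of a per-plot estimate can be tracked as the number of sampled plants grows. The per-plant values are simulated and the procedure is not the authors' actual estimation of genetic and phenotypic parameters.

```python
# Sketch: bootstrap resampling of plants within a plot to see how the
# precision of a trait estimate changes with sample size (hypothetical data).
import numpy as np

rng = np.random.default_rng(42)
plot = rng.normal(loc=18.0, scale=2.5, size=20)   # e.g. Brix for the 20 plants of one plot

def sd_of_resampled_mean(values, n_sampled, n_boot=1000):
    """Standard deviation of the plot mean estimated from n_sampled plants."""
    means = [rng.choice(values, size=n_sampled, replace=True).mean()
             for _ in range(n_boot)]
    return float(np.std(means))

for n in (4, 8, 12, 16, 20):
    print(f"{n:2d} plants/plot -> SD of estimated mean = {sd_of_resampled_mean(plot, n):.3f}")
```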
Abstract:
A crucial step in the life cycle of arenaviruses is the biosynthesis of the mature fusion-active viral envelope glycoprotein (GP) that is essential for virus-host cell attachment and entry. The maturation of the arenavirus GP precursor (GPC) critically depends on proteolytic processing by the cellular proprotein convertase (PC) subtilisin kexin isozyme-1 (SKI-1)/site-1 protease (S1P). Here we undertook a molecular characterization of the SKI-1/S1P processing of the GPCs of the prototypic arenavirus lymphocytic choriomeningitis virus (LCMV) and the pathogenic Lassa virus (LASV). Previous studies showed that the GPC of LASV undergoes processing in the endoplasmic reticulum (ER)/cis-Golgi compartment, whereas the LCMV GPC is cleaved in a late Golgi compartment. Herein we confirm these findings and provide evidence that the SKI-1/S1P recognition site RRLL, present in the SKI-1/S1P prodomain and the LASV GPC, but not in the LCMV GPC, is crucial for the processing of the LASV GPC in the ER/cis-Golgi compartment. Our structure-function analysis revealed that the cleavage of arenavirus GPCs, but not of cellular substrates, critically depends on the autoprocessing of SKI-1/S1P, suggesting differences in the processing of cellular and viral substrates. Deletion mutagenesis showed that the transmembrane and intracellular domains of SKI-1/S1P are dispensable for arenavirus GPC processing. The expression of a soluble form of the protease in SKI-1/S1P-deficient cells resulted in the efficient processing of arenavirus GPCs and rescued productive virus infection. However, exogenous soluble SKI-1/S1P was unable to process LCMV and LASV GPCs displayed at the surface of SKI-1/S1P-deficient cells, indicating that GPC processing occurs in an intracellular compartment. In sum, our study reveals important differences in the SKI-1/S1P processing of viral and cellular substrates.
Abstract:
Phenomena with a constrained sample space appear frequently in practice. This is the case, for example, with strictly positive data, or with compositional data such as percentages or proportions. If the natural measure of difference is not the absolute one, simple algebraic properties show that it is more convenient to work with a geometry different from the usual Euclidean geometry in real space, and with a measure different from the usual Lebesgue measure, leading to alternative models that better fit the phenomenon under study. The general approach is presented and illustrated using the normal distribution, both on the positive real line and on the D-part simplex. The original ideas of McAlister in his 1879 introduction of the lognormal distribution are recovered and updated.
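Because the abstract argues for working in a geometry adapted to the constrained sample space, a small sketch may help; it assumes the standard additive log-ratio (alr) coordinates of Aitchison geometry (one common choice, not necessarily the authors' exact representation) and fits a normal model to hypothetical compositions mapped to real space.

```python
# Sketch: map D-part compositions to R^(D-1) with the alr transform and
# describe them there with an ordinary normal model (hypothetical data).
import numpy as np

def alr(x):
    """Additive log-ratio transform; rows are compositions summing to 1."""
    x = np.asarray(x, dtype=float)
    return np.log(x[:, :-1] / x[:, -1:])   # last part taken as the reference

comps = np.array([[0.60, 0.30, 0.10],
                  [0.50, 0.35, 0.15],
                  [0.70, 0.20, 0.10]])
y = alr(comps)
mu, cov = y.mean(axis=0), np.cov(y, rowvar=False)
print("mean in alr coordinates:", mu)
print("covariance in alr coordinates:\n", cov)
```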
Abstract:
BACKGROUND: PCR has the potential to detect and precisely quantify specific DNA sequences, but it is not yet often used as a fully quantitative method. A number of data collection and processing strategies have been described for the implementation of quantitative PCR. However, they can be experimentally cumbersome, their relative performances have not been evaluated systematically, and they often remain poorly validated statistically and/or experimentally. In this study, we evaluated the performance of known methods and compared them with newly developed data processing strategies in terms of resolution, precision and robustness. RESULTS: Our results indicate that simple methods that do not rely on the estimation of the efficiency of the PCR amplification may provide reproducible and sensitive data, but that they do not quantify DNA with precision. Other evaluated methods, based on sigmoidal or exponential curve fitting, were generally of both poor resolution and precision. A statistical analysis of the parameters that influence efficiency indicated that efficiency depends mostly on the selected amplicon and, to a lesser extent, on the particular biological sample analyzed. We therefore devised various strategies based on individual or averaged efficiency values, which were used to assess the regulated expression of several genes in response to a growth factor. CONCLUSION: Overall, qPCR data analysis methods differ significantly in their performance, and this analysis identifies methods that provide DNA quantification estimates of high precision, robustness and reliability. These methods allow reliable estimation of relative expression ratios of two-fold or higher, and our analysis provides an estimate of the number of biological samples that have to be analyzed to achieve a given precision.
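The abstract contrasts strategies that do and do not estimate amplification efficiency; the sketch below shows one efficiency-corrected relative expression calculation in that spirit, using the widely used ratio E_target^ΔCq(target) / E_ref^ΔCq(ref). The gene roles, efficiencies and Cq differences are illustrative assumptions, not values from the study.

```python
# Sketch: efficiency-corrected relative expression ratio (hypothetical numbers).
def relative_expression(e_target, dcq_target, e_ref, dcq_ref):
    """Ratio = E_target**dCq_target / E_ref**dCq_ref, with dCq = Cq(control) - Cq(treated)
    and E the per-cycle amplification factor (2.0 corresponds to 100 % efficiency)."""
    return (e_target ** dcq_target) / (e_ref ** dcq_ref)

# Target gene induced by a growth factor, normalized against a reference gene.
ratio = relative_expression(e_target=1.95, dcq_target=3.1,
                            e_ref=2.00, dcq_ref=0.2)
print(f"relative expression ratio ~ {ratio:.2f}")
```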
Abstract:
A recent publication in this journal [Neumann et al., Forensic Sci. Int. 212 (2011) 32-46] presented the results of a field study that revealed the information provided by fingermarks that were not processed in a forensic science laboratory. In their study, the authors were interested in the usefulness of this additional information in order to determine whether such fingermarks would have been worth submitting to the fingermark processing workflow. Taking these ideas as a starting point, this communication places the fingermark in the context of a case brought before a court and examines the question of whether or not to process a fingermark from a decision-theoretic point of view. The decision-theoretic framework presented provides an answer to this question in the form of a quantified expression of the expected value of information (EVOI) associated with the processed fingermark, which can then be compared with the cost of processing the mark.
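The EVOI logic can be illustrated with a toy Bayesian decision problem: compute the best expected utility without processing the mark, then the expected utility when the decision can depend on the processing outcome, and compare the difference with the processing cost. All probabilities, utilities and the cost below are hypothetical and do not reproduce the framework of the communication.

```python
# Sketch: expected value of information for processing a fingermark (toy numbers).
import numpy as np

prior = np.array([0.5, 0.5])               # P(state): [mark from person of interest, not]
utility = np.array([[10.0, -50.0],         # utility[d, s] for decision d in state s
                    [-5.0,   0.0]])        # d = 0: act on the mark, d = 1: do not
likelihood = np.array([[0.70, 0.05],       # P(outcome | state); outcome 0: correspondence reported
                       [0.30, 0.95]])      # outcome 1: no correspondence reported

eu_without = (utility @ prior).max()       # best decision on prior knowledge only

eu_with = 0.0
for o in range(likelihood.shape[0]):
    p_o = likelihood[o] @ prior            # marginal probability of outcome o
    posterior = likelihood[o] * prior / p_o
    eu_with += p_o * (utility @ posterior).max()   # best decision given outcome o

evoi = eu_with - eu_without
cost = 2.0                                 # hypothetical cost of processing the mark
print(f"EVOI = {evoi:.2f}; worth processing: {evoi > cost}")
```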
Abstract:
BACKGROUND: Dried blood spot (DBS) sampling has gained popularity in the bioanalytical community as an alternative to conventional plasma sampling, as it provides numerous benefits in terms of sample collection and logistics. The aim of this work was to show that these advantages can be coupled with a simple and cost-effective sample pretreatment, followed by rapid LC-MS/MS analysis, for the quantitation of 15 benzodiazepines, six metabolites and three Z-drugs. For this purpose, a simplified offline procedure was developed that consisted of letting a 5-µl DBS infuse directly into 100 µl of MeOH in a conventional LC vial. RESULTS: The parameters related to the DBS pretreatment, such as extraction time and internal standard addition, were investigated and optimized, demonstrating that passive infusion in a regular LC vial was sufficient to quantitatively extract the analytes of interest. The method was validated according to international criteria over the therapeutic concentration ranges of the selected compounds. CONCLUSION: The presented strategy proved efficient for the rapid analysis of the selected drugs. Indeed, the offline sample preparation was reduced to a minimum, using a small amount of organic solvent and consumables, without affecting the accuracy of the method. Thus, this approach enables simple and rapid DBS analysis, even with a non-DBS-dedicated autosampler, while lowering costs and environmental impact.
Abstract:
A wide range of modelling algorithms is used by ecologists, conservation practitioners, and others to predict species ranges from point locality data. Unfortunately, the amount of data available is limited for many taxa and regions, making it essential to quantify the sensitivity of these algorithms to sample size. This is the first study to address this need by rigorously evaluating a broad suite of algorithms with independent presence-absence data from multiple species and regions. We evaluated predictions from 12 algorithms for 46 species (from six different regions of the world) at three sample sizes (100, 30, and 10 records). We used data from natural history collections to run the models, and evaluated the quality of model predictions with the area under the receiver operating characteristic curve (AUC). With decreasing sample size, model accuracy decreased and variability increased across species and between models. Novel modelling methods that incorporate both interactions between predictor variables and complex response shapes (i.e. GBM, MARS-INT, BRUTO) performed better than most methods at large sample sizes but not at the smallest sample sizes. Other algorithms were much less sensitive to sample size, including an algorithm based on maximum entropy (MAXENT) that had among the best predictive power across all sample sizes. Relative to other algorithms, a distance metric algorithm (DOMAIN) and a genetic algorithm (OM-GARP) had intermediate performance at the largest sample size and among the best performance at the lowest sample size. No algorithm predicted consistently well with small sample sizes (n < 30); this should encourage highly conservative use of predictions based on small sample sizes and restrict their use to exploratory modelling.
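The evaluation design (train on progressively smaller subsamples of occurrence records, score on independent presence-absence data with AUC) can be sketched as below; a plain logistic model stands in for the twelve algorithms of the study, the data are simulated, and scikit-learn is assumed to be available.

```python
# Sketch: sensitivity of model AUC to training sample size (simulated data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                          # environmental predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500)) > 0
X_eval, y_eval = X[300:], y[300:]                      # independent evaluation set

for n in (100, 30, 10):
    idx = rng.choice(300, size=n, replace=False)       # training records
    if len(np.unique(y[idx])) < 2:                     # skip degenerate draws
        continue
    model = LogisticRegression().fit(X[idx], y[idx])
    auc = roc_auc_score(y_eval, model.predict_proba(X_eval)[:, 1])
    print(f"n = {n:3d} training records -> AUC = {auc:.2f}")
```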