131 results for transformed data
Abstract:
OBJECTIVES: Age- and height-adjusted spirometric lung function of South Asian children is lower than that of white children. It is unclear whether this is purely genetic or partly explained by the environment. In this study, we assessed whether cultural factors, socioeconomic status, intrauterine growth, environmental exposures, or a family and personal history of wheeze contribute to explaining the ethnic differences in spirometric lung function. METHODS: We studied children aged 9 to 14 years from a population-based cohort, including 1088 white children and 275 UK-born South Asians. Log-transformed spirometric data were analyzed using multiple linear regressions, adjusting for anthropometric factors. Five additional models adjusted for (1) cultural factors, (2) indicators of socioeconomic status, (3) perinatal data reflecting intrauterine growth, (4) environmental exposures, and (5) personal and family history of wheeze. RESULTS: Height- and gender-adjusted forced vital capacity (FVC) and forced expiratory volume in 1 second (FEV1) were lower in South Asian than in white children (relative differences of -11% and -9%, respectively; P < .001), but PEF and FEF50 were similar (P ≥ .5). FEV1/FVC was higher in South Asians (1.8%, P < .001). These differences remained largely unchanged in all 5 alternative models. CONCLUSIONS: Our study confirmed important differences in lung volumes between South Asian and white children. These were not attenuated after adjustment for cultural and socioeconomic factors and intrauterine growth, nor were they explained by differences in environmental exposures or by a personal or family history of wheeze. This suggests that the differences in lung function may be mainly genetic in origin. The implication is that ethnicity-specific predicted values remain important, specifically for South Asian children.
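As a brief, hypothetical illustration of how a regression on log-transformed lung function yields the "relative difference" estimates reported above: a coefficient β estimated on the log scale back-transforms to a percentage difference of exp(β) − 1. The coefficient value below is invented for illustration only.

```python
import math

def relative_difference(beta: float) -> float:
    """Convert a regression coefficient on a log-transformed outcome
    into a relative (percentage) difference between groups."""
    return (math.exp(beta) - 1.0) * 100.0

# A hypothetical ethnicity coefficient of -0.1165 on log(FVC)
# corresponds to roughly an 11% lower FVC:
print(round(relative_difference(-0.1165), 1))  # → -11.0
```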
Abstract:
Indirect calorimetry based on respiratory exchange measurement has been used successfully since the beginning of the century to estimate heat production (energy expenditure) in human subjects and animals. The errors inherent in this classical technique can stem from various sources: 1) the model of calculation and its assumptions, 2) the calorimetric factors used, 3) technical factors, and 4) human factors. The physiological and biochemical factors influencing the interpretation of calorimetric data include changes in the size of the bicarbonate and urea pools and the accumulation or loss (via breath, urine, or sweat) of intermediary metabolites (gluconeogenesis, ketogenesis). More recently, respiratory gas exchange data have been used to estimate substrate utilization rates in various physiological and metabolic situations (fasting, the postprandial state, etc.). It should be recalled that indirect calorimetry provides an index of overall substrate disappearance rates, which is incorrectly assumed to be equivalent to substrate "oxidation" rates. Unfortunately, there is no adequate gold standard for validating whole-body substrate "oxidation" rates; this contrasts with the "validation" of heat production by indirect calorimetry through direct calorimetry under strict thermal equilibrium conditions. Tracer techniques using stable (or radioactive) isotopes represent an independent way of assessing substrate utilization rates. When carbohydrate metabolism is measured with both techniques, indirect calorimetry generally provides glucose "oxidation" rates consistent with those from isotopic tracers, but only when certain metabolic processes (such as gluconeogenesis and lipogenesis) are minimal and/or when the respiratory quotient is not at the extremes of the physiological range.
However, it is believed that tracer techniques underestimate true glucose "oxidation" rates because they fail to account for glycogenolysis in the tissue storing glucose, since this glucose escapes the systemic circulation. A major advantage of isotopic techniques is their ability to estimate (given certain assumptions) various metabolic processes (such as gluconeogenesis) in a noninvasive way. Furthermore, when a fourth substrate (such as ethanol) is administered in addition to the 3 macronutrients, isotopic quantification of substrate "oxidation" allows one to eliminate the inherent assumptions made by indirect calorimetry. In conclusion, isotopic tracer techniques and indirect calorimetry should be considered complementary, in particular because the tracer techniques require the measurement of carbon dioxide production obtained by indirect calorimetry. However, it should be kept in mind that the assessment of substrate oxidation by indirect calorimetry may involve large errors, particularly over short periods of time. By indirect calorimetry, energy expenditure (heat production) is calculated with substantially less error than substrate oxidation rates.
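As a hedged sketch of the classical calculations this abstract refers to (not taken from the publication itself): the abbreviated Weir equation converts measured VO2 and VCO2 into energy expenditure, and Frayn-type stoichiometric equations yield net carbohydrate and fat disappearance rates. The coefficients below are the commonly cited ones, and the gas-exchange values are invented resting figures.

```python
def weir_kcal_per_min(vo2_l_min: float, vco2_l_min: float) -> float:
    """Abbreviated Weir equation (urinary nitrogen term omitted).
    Inputs in L/min; returns kcal/min."""
    return 3.941 * vo2_l_min + 1.106 * vco2_l_min

def frayn_substrate_g_min(vo2, vco2, n_urinary=0.0):
    """Frayn-type equations for net substrate disappearance (g/min);
    n_urinary is urinary nitrogen excretion in g/min."""
    cho = 4.55 * vco2 - 3.21 * vo2 - 2.87 * n_urinary
    fat = 1.67 * (vo2 - vco2) - 1.92 * n_urinary
    return cho, fat

# Invented resting values: VO2 = 0.25 L/min, VCO2 = 0.20 L/min (RQ = 0.8)
ee = weir_kcal_per_min(0.25, 0.20)        # ≈ 1.21 kcal/min (≈ 1740 kcal/day)
cho, fat = frayn_substrate_g_min(0.25, 0.20)
```

Note how small measurement errors in VO2 or VCO2 propagate far more strongly into the substrate terms (large opposing coefficients) than into the energy term, which is the abstract's closing point.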
Abstract:
The assessment of medical technologies has to answer several questions, ranging from safety and effectiveness to complex economic, social, and health policy issues. The type of data needed to carry out such an evaluation depends on the specific questions to be answered, as well as on the stage of development of a technology. Basically, two types of data may be distinguished: (a) general demographic, administrative, or financial data that have not been collected specifically for technology assessment; (b) data collected with respect either to a specific technology or to a disease or medical problem. On the basis of a pilot inquiry in Europe and bibliographic research, the following categories of type (b) databases have been identified: registries, clinical databases, banks of factual and bibliographic knowledge, and expert systems. Examples of each category are discussed briefly. The following aims for further research and practical goals are proposed: criteria for the minimal data set required, improvements to registries and clinical data banks, and development of an international clearinghouse to enhance the diffusion of information on both existing databases and available reports on medical technology assessments.
Abstract:
The present review will briefly summarize the interplay between coagulation and inflammation, highlighting possible effects of direct inhibition of factor Xa and thrombin beyond anticoagulation. Additionally, the rationale for the use of the new direct oral anticoagulants (DOACs) for indications such as cancer-associated venous thromboembolism (CAT), mechanical heart valves, thrombotic antiphospholipid syndrome (APS), and heparin-induced thrombocytopenia (HIT) will be explored. Published data on patients with cancer or mechanical heart valves treated with DOACs will be discussed, as well as planned studies in APS and HIT. Although the published evidence is at present insufficient to recommend DOACs for the above-mentioned indications, there are good arguments in favor of clinical trials investigating their efficacy in these contexts. Direct inhibition of factor Xa or thrombin may also reveal interesting effects beyond anticoagulation.
Multimodel inference and multimodel averaging in empirical modeling of occupational exposure levels.
Abstract:
Empirical modeling of exposure levels has been popular for identifying exposure determinants in occupational hygiene. Traditional data-driven methods used to choose a model on which to base inferences have typically not accounted for the uncertainty linked to the process of selecting the final model. Several new approaches propose making statistical inferences from a set of plausible models rather than from a single model regarded as 'best'. This paper introduces the multimodel averaging approach described in the monograph by Burnham and Anderson. In their approach, a set of plausible models is defined a priori by taking into account the sample size and previous knowledge of variables that influence exposure levels. The Akaike information criterion is then calculated to evaluate the relative support of the data for each model, expressed as an Akaike weight, interpreted as the probability that the model is the best approximating model given the model set. The model weights can then be used to rank models, quantify the evidence favoring one over another, perform multimodel prediction, estimate the relative influence of the potential predictors, and estimate multimodel-averaged effects of determinants. The whole approach is illustrated with the analysis of a data set of 1500 volatile organic compound exposure levels collected by the Institute for Work and Health (Lausanne, Switzerland) over 20 years, each concentration having been divided by the relevant Swiss occupational exposure limit and log-transformed before analysis. Multimodel inference represents a promising procedure for modeling exposure levels: it incorporates the notion that several models can be supported by the data and permits evaluation, to a certain extent, of model selection uncertainty, which is seldom mentioned in current practice.
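The Akaike-weight computation described above can be sketched as follows. Following Burnham and Anderson, each model's weight is w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2), where Δ_i is the difference between model i's AIC and the smallest AIC in the set. The AIC values below are hypothetical.

```python
import math

def akaike_weights(aics):
    """Akaike weights from a list of AIC values: the relative likelihood
    exp(-delta/2) of each model, normalized to sum to 1."""
    a_min = min(aics)
    rel = [math.exp(-(a - a_min) / 2.0) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AICs for three candidate exposure models
w = akaike_weights([100.0, 102.0, 110.0])
# w[0] is the probability that model 1 is the best approximating
# model in the set; a model-averaged prediction is sum(w_i * pred_i).
```

Subtracting the minimum AIC before exponentiating is not just cosmetic: it prevents underflow when AIC values are large, without changing the normalized weights.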
Abstract:
As part of a collaborative project on the epidemiology of craniofacial anomalies, funded by the National Institute of Dental and Craniofacial Research and channeled through the Human Genetics Programme of the World Health Organization, the International Perinatal Database of Typical Orofacial Clefts (IPDTOC) was established in 2003. IPDTOC is collecting case-by-case information on cleft lip with or without cleft palate and on cleft palate alone from birth defects registries contributing to at least one of three collaborative organizations: European Surveillance Systems of Congenital Anomalies (EUROCAT) in Europe, National Birth Defects Prevention Network (NBDPN) in the United States, and International Clearinghouse for Birth Defects Surveillance and Research (ICBDSR) worldwide. Analysis of the collected information is performed centrally at the ICBDSR Centre in Rome, Italy, to maximize the comparability of results. The present paper, the first of a series, reports data on the prevalence of cleft lip with or without cleft palate from 54 registries in 30 countries over at least 1 complete year during the period 2000 to 2005. Thus, the denominator comprises more than 7.5 million births. A total of 7704 cases of cleft lip with or without cleft palate (7141 live births, 237 stillbirths, 301 terminations of pregnancy, and 25 with pregnancy outcome unknown) were available. The overall prevalence of cleft lip with or without cleft palate was 9.92 per 10,000. The prevalence of cleft lip was 3.28 per 10,000, and that of cleft lip and palate was 6.64 per 10,000. There were 5918 cases (76.8%) that were isolated, 1224 (15.9%) that had malformations in other systems, and 562 (7.3%) that occurred as part of recognized syndromes. Cases with greater dysmorphological severity of cleft lip with or without cleft palate were more likely to include malformations of other systems.
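The prevalence figures above are simple rates per 10,000 births. A minimal sketch of the computation follows; the abstract gives only "more than 7.5 million births" as the denominator, so the 7.766 million used below is an assumed figure chosen to be consistent with the reported 9.92 per 10,000, not a value from the paper.

```python
def prevalence_per_10000(cases: int, births: int) -> float:
    """Birth prevalence expressed per 10,000 births."""
    return cases / births * 10_000

# 7704 reported cases; denominator is an assumption (see lead-in above)
print(round(prevalence_per_10000(7704, 7_766_000), 2))  # → 9.92
```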
Abstract:
Within the framework of a retrospective study of the incidence of hip fractures in the canton of Vaud (Switzerland), all cases of hip fracture occurring among the resident population in 1986 and treated in the hospitals of the canton were identified from among five different information sources. Relevant data were then extracted from the medical records. At least two sources of information were used to identify cases in each hospital, among them the statistics of the Swiss Hospital Association (VESKA). These statistics were available for 9 of the 18 hospitals in the canton that participated in the study. The number of cases identified from the VESKA statistics was compared to the total number of cases for each hospital. For the 9 hospitals the number of cases in the VESKA statistics was 407, whereas, after having excluded diagnoses that were actually "status after fracture" and double entries, the total for these hospitals was 392, that is 4% less than the VESKA statistics indicate. It is concluded that the VESKA statistics provide a good approximation of the actual number of cases treated in these hospitals, with a tendency to overestimate this number. In order to use these statistics for calculating incidence figures, however, it is imperative that a greater proportion of all hospitals (50% presently in the canton, 35% nationwide) participate in these statistics.
Abstract:
The HUPO Proteomics Standards Initiative has developed several standardized data formats to facilitate data sharing in mass spectrometry (MS)-based proteomics. These allow researchers to report their complete results in a unified way. However, at present, there is no format to describe the final qualitative and quantitative results of proteomics and metabolomics experiments in a simple tabular form. Many downstream analysis use cases are concerned only with the final results of an experiment and require an easily accessible format compatible with tools such as Microsoft Excel or R. We developed the mzTab file format for MS-based proteomics and metabolomics results to meet this need. mzTab is intended as a lightweight supplement to the existing standard XML-based file formats (mzML, mzIdentML, mzQuantML), providing a comprehensive summary, similar in concept to the supplemental material of a scientific publication. mzTab files can contain protein, peptide, and small molecule identifications together with experimental metadata and basic quantitative information. The format is not intended to store the complete experimental evidence but provides mechanisms to report results at different levels of detail, ranging from a simple summary of the final results to a representation of the results including the experimental design. This format is ideally suited to making MS-based proteomics and metabolomics results available to a wider biological community outside the field of MS. Several software tools for proteomics and metabolomics have already adopted the format as an output format. The comprehensive mzTab specification document and extensive additional documentation can be found online.
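As a hedged sketch (not from the publication itself) of how mzTab's tab-delimited, row-prefix layout can be consumed with nothing beyond a spreadsheet-style parser, the fragment below assumes the mzTab 1.0 row prefixes MTD (metadata), PRH (protein header), and PRT (protein rows); the accessions, description, and score column are invented example values, and the string is not a complete valid file.

```python
import csv
import io

# Minimal, illustrative mzTab fragment (invented values, incomplete file)
MZTAB = """MTD\tmzTab-version\t1.0.0
PRH\taccession\tdescription\tbest_search_engine_score[1]
PRT\tP12345\tExample protein\t0.001
PRT\tQ67890\tAnother protein\t0.05
"""

def read_protein_rows(text):
    """Collect PRT rows as dicts keyed by the PRH header columns."""
    header = None
    rows = []
    for fields in csv.reader(io.StringIO(text), delimiter="\t"):
        if not fields:
            continue
        if fields[0] == "PRH":
            header = fields[1:]
        elif fields[0] == "PRT" and header:
            rows.append(dict(zip(header, fields[1:])))
    return rows

rows = read_protein_rows(MZTAB)
```

Because each line is self-describing via its prefix, tools can skip sections they do not understand, which is what makes the format readable in Excel or R without a dedicated library.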
Abstract:
Penicillin tolerance is an incompletely understood phenomenon that allows bacteria to resist drug-induced killing. Tolerance was studied with independent Streptococcus gordonii mutants generated by cyclic exposure to 500 times the MIC of penicillin. Parent cultures lost 4 to 5 log10 CFU/ml of viable counts per 24 h. In contrast, each of four independent mutant cultures lost ≤2 log10 CFU/ml per 24 h. The mutants had unchanged penicillin-binding proteins but contained increased amounts of two proteins with respective masses of ca. 50 and 45 kDa. One mutant (Tol1) was further characterized. The two proteins showing increased levels were homologous to the arginine deiminase and ornithine carbamoyltransferase of other gram-positive bacteria and were encoded by an operon that was >80% similar to the arginine deiminase (arc) operon of these organisms. Partial nucleotide sequencing and insertional inactivation of the S. gordonii arc locus indicated that tolerance was not a direct consequence of arc alteration. On the other hand, genetic transformation of tolerance by Tol1 DNA always conferred arc deregulation. In nontolerant recipients, arc was repressed during exponential growth and up-regulated during postexponential growth. In tolerant transformants, arc was constitutively expressed. Tol1 DNA transformed tolerance at the same rate as transformation of a point mutation (10^-2 to 10^-3). The tolerance mutation mapped to a specific chromosomal fragment but was physically distant from arc. Importantly, arc deregulation was observed in most (6 of 10) additional independent penicillin-tolerant mutants. Thus, although not exclusive, the association between arc deregulation and tolerance was not fortuitous.
Since penicillin selection mimicked the antibiotic pressure operating in the clinical environment, arc deregulation might be an important correlate of naturally occurring tolerance and help in understanding the mechanism(s) underlying this clinically problematic phenotype.
Abstract:
"Data mining" is an attractive set of methods and techniques that has enjoyed tremendous popularity in recent years, especially in the field of marketing. The recent development of crime analysis, or criminal intelligence, raises problems to which it is tempting to apply these methods and techniques. The potential and place of data mining in the context of crime analysis must be better defined in order to guide its application. This reflection is conducted within the framework of the intelligence produced by systems for the systematic detection and monitoring of repetitive crime, known as operational monitoring processes. Their operation requires the existence of patterns inscribed in the data, justified by situational approaches in criminology. With this theoretical background in hand, the main challenge is to explore the possibilities of detecting these patterns with data mining methods and techniques. To meet this objective, research is currently being conducted in Switzerland through an interdisciplinary approach combining forensic, criminological, and computational knowledge.