955 results for Synthetic and analytic methods
Abstract:
OBJECTIVE: To assess total free-living energy expenditure (EE) in Gambian farmers with two independent methods, and to determine the most realistic free-living EE and physical activity in order to establish energy requirements for rural populations in developing countries. DESIGN: In this cross-sectional study two methods were applied at the same time. SETTING: Three rural villages and Dunn Nutrition Centre Keneba, MRC, The Gambia. SUBJECTS: Eight healthy, male subjects were recruited from three rural Gambian villages in the sub-Sahelian area (age: 25 +/- 4 y; weight: 61.2 +/- 10.1 kg; height: 169.5 +/- 6.5 cm; body mass index: 21.2 +/- 2.5 kg/m2). INTERVENTION: We assessed free-living EE with two inconspicuous and independent methods: the first one used doubly labeled water (DLW) (2H2 18O) over a period of 12 days, whereas the second one was based on continuous heart rate (HR) measurements on two to three days using individual regression lines (HR vs EE) established by indirect calorimetry in a respiration chamber. Isotopic dilution of deuterium (2H2O) was also used to assess total body water and hence fat-free mass (FFM). RESULTS: EE assessed by DLW was found to be 3880 +/- 994 kcal/day (16.2 +/- 4.2 MJ/day). Expressed per unit body weight, the EE averaged 64.2 +/- 9.3 kcal/kg/d (269 +/- 38 kJ/kg/d). These results were consistent with the EE results assessed by HR: 3847 +/- 605 kcal/d (16.1 +/- 2.5 MJ/d) or 63.4 +/- 8.2 kcal/kg/d (265 +/- 34 kJ/kg/d). The physical activity index, expressed as a multiple of basal metabolic rate (BMR), averaged 2.40 +/- 0.41 (DLW) or 2.40 +/- 0.28 (HR). CONCLUSIONS: These findings suggest an extremely high level of physical activity in Gambian men during intense agricultural work (wet season). This contrasts with the relative food shortage previously reported during the harvesting period. We conclude that the assessment of EE during the agricultural season in non-industrialized countries needs further investigation in order to obtain information on the energy requirements of these populations. For this purpose, the use of the DLW and HR methods has been shown to be useful and complementary.
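As a rough illustration of the heart-rate method described above, the sketch below fits an individual HR-vs-EE regression line from hypothetical calorimetry data and applies it to a simulated day of heart-rate recordings to obtain daily EE and a physical activity index (EE/BMR). All numbers, including the assumed BMR, are invented for illustration; the actual protocol (e.g. flex-HR handling of resting periods) is not reproduced here.

```python
import numpy as np

# Hypothetical calibration points from a respiration chamber (indirect calorimetry)
hr_calib = np.array([55, 70, 90, 110, 130])      # heart rate, beats/min
ee_calib = np.array([1.1, 1.6, 2.8, 4.3, 6.0])   # energy expenditure, kcal/min

# Individual regression line: EE = a * HR + b
a, b = np.polyfit(hr_calib, ee_calib, 1)

# Hypothetical minute-by-minute free-living HR recording (one day)
hr_day = np.random.default_rng(0).normal(95, 20, size=24 * 60).clip(45, 170)

ee_day = float(np.sum(a * hr_day + b))   # kcal/day
bmr = 1600.0                             # assumed basal metabolic rate, kcal/day
print(f"EE ~ {ee_day:.0f} kcal/day, physical activity index ~ {ee_day / bmr:.2f} x BMR")
```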
Abstract:
The chemical and isotopic compositions of clay minerals such as illite and chlorite are commonly used to quantify diagenetic and low-grade metamorphic conditions, an approach that is also used in the present study of the Monte Perdido thrust fault from the South Pyrenean fold-and-thrust belt. The Monte Perdido thrust fault is a shallow thrust juxtaposing upper Cretaceous-Paleocene platform carbonates and Lower Eocene marls and turbidites from the Jaca basin. The core zone of the fault, about 6 m thick, consists of intensely deformed clay-bearing rocks bounded by major shear surfaces. Illite and chlorite are the main hydrous minerals in the fault zone. Illite is oriented along cleavage planes while chlorite formed along shear veins (<50 μm in thickness). Authigenic chlorite provides essential information about the origin of fluids and their temperature. The δ18O and δD values of newly formed chlorite support equilibration with sedimentary interstitial water, directly derived from the local hanging wall and footwall during deformation. Given the absence of large-scale fluid flow, the mineralization observed in the thrust faults records the P-T conditions of thrust activity. Temperatures of chlorite formation of about 240 °C are obtained via two independent methods: chlorite compositional thermometers and oxygen isotope fractionation between cogenetic chlorite and quartz. Burial depth conditions of 7 km are determined for the Monte Perdido thrust reactivation by coupling the calculated temperatures with fluid inclusion isochores. The present study demonstrates that both isotopic and thermodynamic methods applied to clay minerals formed in thrust faults are useful to help constrain diagenetic and low-grade metamorphic conditions.
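For readers unfamiliar with oxygen-isotope thermometry, the sketch below shows the generic calculation: a quartz-chlorite fractionation equation of the form 1000 ln α = A·10^6/T² + B is inverted for temperature from measured δ18O values. The coefficients A and B and the δ18O values are placeholders for illustration, not the calibration or data used in this study.

```python
import math

def fractionation_temperature(d18O_quartz, d18O_chlorite, A=3.0, B=-2.0):
    """Temperature (degrees C) from quartz-chlorite oxygen isotope fractionation.

    Uses the generic form 1000*ln(alpha) ~ d18O_qtz - d18O_chl = A*1e6/T**2 + B
    (T in kelvin). A and B are hypothetical calibration constants.
    """
    delta = d18O_quartz - d18O_chlorite          # approximates 1000 ln(alpha)
    t_kelvin = math.sqrt(A * 1e6 / (delta - B))
    return t_kelvin - 273.15

# Hypothetical measured values (per mil, VSMOW)
print(f"T ~ {fractionation_temperature(18.5, 6.0):.0f} degrees C")
```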
Abstract:
Aims and objectives: This study aimed to determine the discriminant validity and the test-retest reliability of a questionnaire testing the impact of evidence-based medicine (EBM) training on doctors' knowledge and skills. Methods: Questionnaires were sent electronically to all doctors working as residents and chief residents in two French-speaking hospital networks in Switzerland. Participants completed the questionnaire twice, within a 4-week interval. Discriminant validity was examined by comparing doctors' performance according to their reported previous EBM training. Test-retest reliability was determined from the proportion of agreement between the two sessions of the questionnaire, Cohen's kappa, and the 'uniform kappa'. Results: The participation rate was 9.8% for the first session and 7.1% for the second. Performance increased with the level of doctors' previous training in EBM. The observed proportion of agreement between the two sessions was over 70% for 14/19 questions, and the 'uniform kappa' was above 0.60 for 15/19 questions. Conclusion: The discriminant validity and test-retest reliability of the questionnaire were satisfactory. The low participation rate did not prevent the study from achieving its aims.
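A minimal sketch of the agreement statistics mentioned above, for one questionnaire item answered at two sessions: the observed proportion of agreement and Cohen's kappa are computed from invented responses (the 'uniform kappa' variant is not shown).

```python
from collections import Counter

# Invented answers to one item at the two sessions (same 10 respondents)
session1 = ["A", "B", "A", "C", "A", "B", "A", "A", "C", "B"]
session2 = ["A", "B", "A", "A", "A", "B", "B", "A", "C", "B"]

n = len(session1)
p_observed = sum(r1 == r2 for r1, r2 in zip(session1, session2)) / n

# Chance agreement from the marginal answer frequencies of each session
c1, c2 = Counter(session1), Counter(session2)
p_chance = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n**2

kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"proportion of agreement = {p_observed:.2f}, Cohen's kappa = {kappa:.2f}")
```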
Abstract:
The development of forensic intelligence relies on the expression of suitable models that better represent the contribution of forensic intelligence in relation to the criminal justice system, policing and security. Such models assist in comparing and evaluating methods and new technologies, provide transparency and foster the development of new applications. Interestingly, strong similarities between two separate projects focusing on specific forensic science areas were recently observed. These observations have led to the induction of a general model (Part I) that could guide the use of any forensic science case data in an intelligence perspective. The present article builds upon this general approach by focusing on decisional and organisational issues. The article investigates the comparison process and evaluation system that lie at the heart of the forensic intelligence framework, advocating scientific decision criteria and a structured but flexible and dynamic architecture. These building blocks are crucial and clearly lie within the expertise of forensic scientists. However, they are only part of the problem. Forensic intelligence includes other blocks, with their respective interactions, decision points and tensions (e.g. regarding how to guide detection and how to integrate forensic information with other information). Formalising these blocks identifies many questions and potential answers. Addressing these questions is essential for the progress of the discipline. Such a process requires clarifying the role and place of the forensic scientist within the whole process and their relationship to other stakeholders.
Abstract:
BACKGROUND: Practice guidelines for examining febrile patients presenting upon returning from the tropics were developed to assist primary care physicians in decision making. Because of the low level of evidence available in this field, there was a need to validate them and assess their feasibility in the context they were designed for. OBJECTIVES: The objectives of the study were to (1) evaluate physicians' adherence to the recommendations; (2) investigate reasons for non-adherence; and (3) ensure good clinical outcomes for patients, the ultimate goal being to improve the quality of the guidelines, in particular to tailor them to the needs of the target audience and population. METHODS: Physicians consulting the guidelines on the Internet (www.fevertravel.ch) were invited to participate in the study. Navigation through the decision chart was automatically recorded, including diagnostic tests performed, initial and final diagnoses, and clinical outcomes. The reasons for non-adherence were investigated and qualitative feedback was collected. RESULTS: A total of 539 physician/patient pairs were included in this study. Full adherence to the guidelines was observed in 29% of the cases. The figure-specific adherence rate was 54.8%. The main reasons for non-adherence were as follows: no repetition of malaria tests (111/352) and no presumptive antibiotic treatment for febrile diarrhea (64/153) or abdominal pain without leukocytosis (46/101). Overall, 20% of diversions from the guidelines were considered reasonable because there was an alternative presumptive diagnosis or the symptoms were mild, which means that the corrected adherence rate per case was 40.6% and the corrected adherence rate per figure was 61.7%. No death was recorded, and all complications could be attributed to the underlying illness rather than to adherence to the guidelines. CONCLUSIONS: These guidelines proved to be feasible and useful and led to good clinical outcomes. Almost one third of physicians strictly adhered to the guidelines. Other physicians used the guidelines as a reminder of specific diagnoses but ultimately diverged from the proposed course of action. These diversions should be scrutinized for further refinement of the guidelines to better fit physician and patient needs.
Abstract:
Reliable estimates of heavy-truck volumes are important in a number of transportation applications. Estimates of truck volumes are necessary for pavement design and pavement management. Truck volumes are important in traffic safety. The number of trucks on the road also influences roadway capacity and traffic operations. Additionally, heavy vehicles pollute at higher rates than passenger vehicles. Consequently, reliable estimates of heavy-truck vehicle miles traveled (VMT) are important in creating accurate inventories of on-road emissions. This research evaluated three different methods to calculate heavy-truck annual average daily traffic (AADT) which can subsequently be used to estimate vehicle miles traveled (VMT). Traffic data from continuous count stations provided by the Iowa DOT were used to estimate AADT for two different truck groups (single-unit and multi-unit) using the three methods. The first method developed monthly and daily expansion factors for each truck group. The second and third methods created general expansion factors for all vehicles. Accuracy of the three methods was compared using n-fold cross-validation. In n-fold cross-validation, data are split into n partitions, and data from the nth partition are used to validate the remaining data. A comparison of the accuracy of the three methods was made using the estimates of prediction error obtained from cross-validation. The prediction error was determined by averaging the squared error between the estimated AADT and the actual AADT. Overall, the prediction error was the lowest for the method that developed expansion factors separately for the different truck groups for both single- and multi-unit trucks. This indicates that use of expansion factors specific to heavy trucks results in better estimates of AADT, and, subsequently, VMT, than using aggregate expansion factors and applying a percentage of trucks. Monthly, daily, and weekly traffic patterns were also evaluated. Significant variation exists in the temporal and seasonal patterns of heavy trucks as compared to passenger vehicles. This suggests that the use of aggregate expansion factors fails to adequately describe truck travel patterns.
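A compact sketch of the expansion-factor idea for one truck group: monthly and day-of-week factors are derived from hypothetical continuous-count data and applied to a short-duration count to estimate AADT. The column names and data frame are assumptions, not the Iowa DOT data, and the n-fold cross-validation step is omitted.

```python
import pandas as pd

# Hypothetical continuous-count data: one row per station-day (single-unit trucks)
counts = pd.DataFrame({
    "station":   [1, 1, 1, 1, 2, 2, 2, 2],
    "month":     [1, 1, 7, 7, 1, 1, 7, 7],
    "weekday":   [0, 5, 0, 5, 0, 5, 0, 5],   # 0 = Monday, 5 = Saturday
    "su_trucks": [120, 80, 150, 95, 200, 130, 240, 160],
})

aadt = counts["su_trucks"].mean()   # average daily traffic over all count days
month_factor = aadt / counts.groupby("month")["su_trucks"].mean()
dow_factor = aadt / counts.groupby("weekday")["su_trucks"].mean()

# Expand a hypothetical 24-hour count taken on a Saturday (weekday 5) in July (month 7)
short_count = 110
estimate = short_count * month_factor[7] * dow_factor[5]
print(f"estimated single-unit truck AADT ~ {estimate:.0f}")
```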
Abstract:
A number of experimental methods have been reported for estimating the number of genes in a genome, or the closely related coding density of a genome, defined as the fraction of base pairs in codons. Recently, DNA sequence data representative of the genome as a whole have become available for several organisms, making the problem of estimating coding density amenable to sequence analytic methods. Estimates of coding density for a single genome vary widely, so that methods with characterized error bounds have become increasingly desirable. We present a method to estimate the protein coding density in a corpus of DNA sequence data, in which a ‘coding statistic’ is calculated for a large number of windows of the sequence under study, and the distribution of the statistic is decomposed into two normal distributions, assumed to be the distributions of the coding statistic in the coding and noncoding fractions of the sequence windows. The accuracy of the method is evaluated using known data and application is made to the yeast chromosome III sequence and to C.elegans cosmid sequences. It can also be applied to fragmentary data, for example a collection of short sequences determined in the course of STS mapping.
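A minimal sketch of the decomposition idea on simulated data: a per-window coding statistic is modelled as a mixture of two normal distributions, and the mixing weight of the higher-mean component is read off as the coding-density estimate. This uses scikit-learn's GaussianMixture rather than the authors' own fitting procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Simulated coding statistic: 30% of windows drawn from a "coding" distribution
stat = np.concatenate([
    rng.normal(0.0, 1.0, size=700),   # noncoding windows
    rng.normal(3.0, 1.0, size=300),   # coding windows
]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(stat)
coding = int(np.argmax(gm.means_.ravel()))   # assume the coding component has the higher mean
print(f"estimated coding density ~ {gm.weights_[coding]:.2f}")
```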
Abstract:
BACKGROUND: Abiotrophia and Granulicatella species, previously referred to as nutritionally variant streptococci (NVS), are significant causative agents of endocarditis and bacteraemia. In this study, we reviewed the clinical manifestations of infections due to A. defectiva and Granulicatella species that occurred at our institution between 1998 and 2004. METHODS: The analysis included all strains of NVS that were isolated from blood cultures or vascular graft specimens. All strains were identified by 16S rRNA sequence analysis. Patients' medical charts were reviewed for each case of infection. RESULTS: Eleven strains of NVS were isolated during the 6-year period. Identification of the strains by 16S rRNA showed 2 genogroups: Abiotrophia defectiva (3) and Granulicatella adiacens (6) or "para-adiacens" (2). The three A. defectiva strains were isolated from immunocompetent patients with endovascular infections, whereas 7 of 8 Granulicatella spp. strains were isolated from immunosuppressed patients, mainly febrile neutropenic patients. We report the first case of "G. para-adiacens" bacteraemia in the setting of febrile neutropenia. CONCLUSION: We propose that Granulicatella spp. be considered as a possible agent of bacteraemia in neutropenic patients.
Abstract:
Concrete curing is closely related to cement hydration, microstructure development, and concrete performance. Application of a liquid membrane-forming curing compound is among the most widely used curing methods for concrete pavements and bridge decks. Curing compounds are economical, easy to apply, and maintenance free. However, limited research has been done to investigate the effectiveness of different curing compounds and their application technologies. No reliable standard testing method is available to evaluate the effectiveness of curing, especially of field concrete curing. The present research investigates the effects of curing compound materials and application technologies on concrete properties, especially on the properties of surface concrete. This report presents a literature review of curing technology, with an emphasis on curing compounds, and the experimental results from the first part of this research—the lab investigation. In the lab investigation, three curing compounds were selected and applied to mortar specimens at three different times after casting. Two application methods, single- and double-layer applications, were employed. Moisture content, conductivity, sorptivity, and degree of hydration were measured at different depths of the specimens. Flexural and compressive strength of the specimens were also tested. Statistical analysis was conducted to examine the relationships between these material properties. The research results indicate that application of a curing compound significantly increased moisture content and degree of cement hydration and reduced sorptivity of the near-surface-area concrete. For given concrete materials and mix proportions, the optimal application time of curing compounds depended primarily upon the weather conditions. If a sufficient amount of a high-efficiency-index curing compound was uniformly applied, no double-layer application was necessary. Among all test methods applied, the sorptivity test is the most sensitive, providing a good indication of the subtle changes in microstructure of the near-surface-area concrete caused by different curing materials and application methods. Sorptivity measurement is closely related to moisture content and degree of hydration. The research results have established a baseline for and provided insight into the further development of testing procedures for evaluating curing compounds in the field. Recommendations are provided for further field study.
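Since sorptivity is central to the findings above, here is an illustrative calculation (not the report's procedure): sorptivity is commonly taken as the slope of cumulative absorption plotted against the square root of elapsed time, i = S·√t. The measurements below are invented.

```python
import numpy as np

t_min = np.array([1, 5, 10, 20, 30, 60])                     # elapsed time, minutes
absorption = np.array([0.05, 0.11, 0.16, 0.22, 0.27, 0.38])  # cumulative absorption, mm

# Sorptivity S is the slope of absorption versus sqrt(time): i = S * sqrt(t)
S, intercept = np.polyfit(np.sqrt(t_min), absorption, 1)
print(f"sorptivity ~ {S:.3f} mm/min^0.5")
```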
Abstract:
Predicting which species will occur together in the future, and where, remains one of the greatest challenges in ecology, and requires a sound understanding of how the abiotic and biotic environments interact with dispersal processes and history across scales. Biotic interactions and their dynamics influence species' relationships to climate, and this also has important implications for predicting future distributions of species. It is already well accepted that biotic interactions shape species' spatial distributions at local spatial extents, but the role of these interactions beyond local extents (e.g. 10 km² to global extents) is usually dismissed as unimportant. In this review we consolidate evidence for how biotic interactions shape species distributions beyond local extents and review methods for integrating biotic interactions into species distribution modelling tools. Drawing upon evidence from contemporary and palaeoecological studies of individual species ranges, functional groups, and species richness patterns, we show that biotic interactions have clearly left their mark on species distributions and realised assemblages of species across all spatial extents. We demonstrate this with examples from within and across trophic groups. A range of species distribution modelling tools is available to quantify species-environment relationships and predict species occurrence, such as: (i) integrating pairwise dependencies, (ii) using integrative predictors, and (iii) hybridising species distribution models (SDMs) with dynamic models. These methods have typically only been applied to interacting pairs of species at a single time, require a priori ecological knowledge about which species interact, and due to data paucity must assume that biotic interactions are constant in space and time. To better inform the future development of these models across spatial scales, we call for accelerated collection of spatially and temporally explicit species data. Ideally, these data should be sampled to reflect variation in the underlying environment across large spatial extents, and at fine spatial resolution. Simplified ecosystems where there are relatively few interacting species and sometimes a wealth of existing ecosystem monitoring data (e.g. arctic, alpine or island habitats) offer settings where the development of modelling tools that account for biotic interactions may be less difficult than elsewhere.
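One simple way to realise the "integrative predictor" idea mentioned above is to add the occurrence of an interacting species as a covariate in an otherwise standard SDM. The sketch below does this with logistic regression on simulated presence/absence data; all variable names and effect sizes are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 500
temperature = rng.normal(10, 4, n)
precipitation = rng.normal(800, 200, n)
competitor_present = rng.integers(0, 2, n)

# Simulated truth: the species favours warm sites but is excluded by the competitor
p = 1 / (1 + np.exp(-(0.4 * (temperature - 10) - 1.5 * competitor_present)))
presence = (rng.random(n) < p).astype(int)

X = np.column_stack([temperature, precipitation, competitor_present])
sdm = LogisticRegression(max_iter=1000).fit(X, presence)
print(dict(zip(["temperature", "precipitation", "competitor"], sdm.coef_[0].round(2))))
```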
Abstract:
OBJECTIVE: This study was undertaken to determine the delay of extubation attributable to ventilator-associated pneumonia (VAP) in comparison to other complications and the complexity of surgery after repair of congenital heart lesions in neonates and children. METHODS: Cohort study in a pediatric intensive care unit of a tertiary referral center. All patients who had cardiac operations during a 22-month period and who survived surgery were eligible (n = 272, median age 1.3 years). The primary outcome was time to successful extubation. The primary variable of interest was VAP. Surgical procedures were classified according to complexity. Cox proportional hazards models were calculated to adjust for confounding. Potential confounders comprised other known risk factors for delayed extubation. RESULTS: Median time to extubation was 3 days. VAP occurred in 26 patients (9.6%). The rate of VAP was not associated with complexity of surgery (P = 0.22) or cardiopulmonary bypass (P = 0.23). The adjusted analysis revealed further factors associated with delayed extubation: other respiratory complications (n = 28; chylothorax, airway stenosis, diaphragm paresis), prolonged inotropic support (n = 48, 17.6%), and the need for secondary surgery (n = 51, 18.8%; e.g., re-operation, secondary closure of the thorax). Older age promoted early extubation. The median delay of extubation attributable to VAP was 3.7 days (hazard ratio HR = 0.29, 95% CI 0.18-0.49), exceeding the effect size of secondary surgery (HR = 0.48) and other respiratory complications (HR = 0.50). CONCLUSION: VAP accounts for a major delay of extubation in pediatric cardiac surgery.
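An assumed analysis sketch, not the study's code: a Cox proportional hazards model for time to successful extubation with VAP and age as covariates, fitted with the lifelines library on a made-up data frame. In such a model a hazard ratio below 1 for VAP corresponds to later extubation.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Made-up data frame: one row per patient
df = pd.DataFrame({
    "days_to_extubation": [2, 3, 7, 4, 10, 3, 6, 12, 2, 5],
    "extubated":          [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],   # event indicator
    "vap":                [0, 1, 1, 0, 1, 0, 0, 0, 0, 0],
    "age_years":          [0.1, 1.3, 0.5, 4.0, 0.2, 6.0, 2.5, 0.3, 8.0, 1.0],
})

cph = CoxPHFitter(penalizer=0.1)   # small penalty keeps the toy fit stable
cph.fit(df, duration_col="days_to_extubation", event_col="extubated")
print(cph.summary[["coef", "exp(coef)"]])
```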
Abstract:
INTRODUCTION: Crevasse accidents can lead to severe injuries and even death, but little is known about their epidemiology and mortality. METHODS: We retrospectively reviewed helicopter-based emergency services rescue missions for crevasse victims in Switzerland between 2000 and 2010. Demographic and epidemiological data were collected. Injury severity was graded according to the National Advisory Committee for Aeronautics (NACA) score. RESULTS: A total of 415 victims of crevasse falls were included in the study. The mean victim age was 40 years (SD 13) (range 6-75), 84% were male, and 67% were foreigners. The absolute number of victims was much higher during the months of March, April, July, and August, amounting to 73% of all victims; 77% of victims were practicing mountaineering or ski touring. The mean depth of fall was 16.5 m (SD 9.0) (range 1-35). Overall on-site mortality was 11%, and it was higher during the ski season than the ski off-season (14% vs. 7%; P=0.01), for foreigners (14% vs. 5%; P=0.01), and with greater mean depth of fall (22 vs. 15 m; P=0.01). The NACA score was ≥4 for 22% of the victims, indicating a potential or overt threat to life, but 24% of the victims were uninjured (NACA 0). Multivariable analyses revealed that depth of the fall, summer season, and snowshoeing were associated with higher NACA scores, whereas depth of the fall, snowshoeing, and foreign origin, but not season, were associated with higher risk of death. CONCLUSION: The clinical spectrum of injuries sustained by the 415 patients in this study ranged from benign to life-threatening. Death occurred in 11% of victims and seems to be determined primarily by the depth of the fall.
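For illustration only (not the paper's analysis), a multivariable logistic regression of on-site death on depth of fall, season, and snowshoeing could be fitted as below with statsmodels on simulated data; all variable names, coefficients, and data are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 415
df = pd.DataFrame({
    "depth_m": rng.uniform(1, 35, n),
    "ski_season": rng.integers(0, 2, n),
    "snowshoeing": rng.integers(0, 2, n),
})
# Simulated truth: mortality driven mainly by depth of fall
logit_p = -4 + 0.12 * df["depth_m"] + 0.8 * df["snowshoeing"]
df["death"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("death ~ depth_m + ski_season + snowshoeing", data=df).fit(disp=0)
print(np.exp(model.params))   # odds ratios
```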
Abstract:
Introduction: Nonoperative treatment of displaced midshaft clavicle fractures is associated with a higher nonunion rate than previously reported. Moreover, its occurrence can compromise shoulder function. The aim of this study was to evaluate the outcome of surgical treatment of symptomatic midshaft clavicle delayed unions and nonunions. Methods: Between 1999 and 2008, 19 clavicle delayed unions and nonunions were treated by open reduction and reconstructive plate fixation with augmentation by autologous bone graft. Iliac bone graft was used in 15 atrophic cases, and graft from the callus was used in 4 hypertrophic nonunions. There were 14 men and 5 women, with an average age of 41 years (range, 19 to 59 years) at the time of surgery. No patient had undergone previous surgery, and all complained of shoulder pain. Delayed unions and nonunions were defined as non-healing after 3 and 6 months, respectively. The mean time to surgery was 8 months (range, 4 to 23 months). All patients were clinically evaluated pre- and postoperatively and imaged with standard radiographs until complete healing. Results: After a mean time of 3 months (range, 2 to 7 months), all fractures were completely healed. All patients reported full range of motion at the time of last follow-up. Nine patients (47%) reported slight shoulder pain, but all returned to their previous professional activities after a mean time of 3 months (range, 1 to 8 months). We reported 12 (63%) minor complications: 6 (32%) cases of plate-related discomfort, which resolved after hardware removal; two (11%) of scar numbness; two (11%) of adhesive capsulitis with spontaneous complete recovery; and two (11%) of AC-joint pain treated successfully with local corticosteroid injections. Conclusion: Surgical treatment of delayed unions and nonunions of midshaft clavicle fractures yields satisfactory results and a high union rate. However, 50% of the patients may still complain of slight residual shoulder pain.
Abstract:
For a wide range of environmental, hydrological, and engineering applications there is a fast-growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove to be inadequate in complex environments. For a typical crosshole georadar survey the potential improvement in resolution when using waveform-based approaches instead of ray-based approaches is in the range of one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While in exploration seismology waveform tomographic imaging has become well established over the past two decades, it is still comparatively underdeveloped in the georadar domain despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes to synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments. To this end, the general motivation of my thesis is the evaluation of the robustness and limitations of waveform inversion algorithms for crosshole georadar data in order to apply such schemes to a wide range of real-world problems.

One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in reality. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem. Therefore, accurate knowledge of the source wavelet is critically important for successful application of such schemes. Relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, and is able to provide remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and electrical conductivity as well as significant ambient noise in the recorded data.
Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when directly incorporating the wavelet estimation into the inverse problem.

Another critical issue with crosshole georadar waveform inversion schemes which clearly needs to be investigated is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters. This is crucial since in reality, these parameters are known to be frequency-dependent and complex and thus recorded georadar data may show significant dispersive behaviour. In particular, in the presence of water, there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency dependent over the GPR frequency range, due to a variety of relaxation processes. The second part of my thesis is therefore dedicated to the evaluation of the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure that is partially able to account for the frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and has the ability to provide adequate tomographic reconstructions.
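A highly simplified sketch of one possible deconvolution-based wavelet estimate (assumptions throughout; this is not the thesis algorithm): recorded traces are spectrally divided by traces simulated with the current model estimate, a water-level term stabilises the division, and the per-trace estimates are averaged. Real schemes iterate this jointly with the waveform inversion.

```python
import numpy as np

def estimate_wavelet(observed, simulated, water_level=1e-3):
    """observed, simulated: arrays of shape (n_traces, n_samples). Returns a wavelet estimate."""
    O = np.fft.rfft(observed, axis=1)
    S = np.fft.rfft(simulated, axis=1)
    denom = np.abs(S) ** 2
    denom = np.maximum(denom, water_level * denom.max())   # water-level stabilisation
    # Least-squares spectral division, averaged over traces
    W = np.mean(O * np.conj(S) / denom, axis=0)
    return np.fft.irfft(W, n=observed.shape[1])

# Tiny sanity check: if simulated equals observed, the recovered wavelet is a spike at zero lag
traces = np.random.default_rng(4).normal(size=(8, 256))
w = estimate_wavelet(traces, traces)
print(int(np.argmax(np.abs(w))))   # ~0
```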
Abstract:
BACKGROUND: Acute coronary syndromes (ACS) in very young patients have been poorly described. We therefore evaluated ACS in patients aged 35 years and younger. METHODS: In this prospective cohort study, 76 hospitals treating ACS in Switzerland enrolled 28,778 patients with ACS between January 1, 1997, and October 1, 2008. The ACS definition included ST-segment elevation myocardial infarction (STEMI), non-ST-segment elevation myocardial infarction (NSTEMI), and unstable angina (UA). RESULTS: 195 patients (0.7%) were 35 years old or younger. Compared to patients >35 years, these patients were more likely to present with chest pain (91.6% vs. 83.7%; P=0.003) and less likely to have heart failure (Killip class II to IV in 5.2% vs. 23.0%; P<0.001). STEMI was more prevalent in younger than in older patients (73.1% vs. 58.3%; P<0.001). Smoking, family history of CAD, and/or dyslipidemia were important cardiovascular risk factors in young patients (prevalence 77.2%, 55.0%, and 44.0%, respectively). The prevalence of overweight among young patients with ACS was high (57.8%). Cocaine abuse was associated with ACS in some young patients. Compared to older patients, young patients were more likely to receive early percutaneous coronary interventions and had better outcomes with fewer major adverse cardiac events. CONCLUSIONS: Young patients with ACS differed from older patients in that they more often presented with STEMI, received early aggressive treatment, and had favourable outcomes. Primary prevention of smoking, dyslipidemia and overweight should be more aggressively promoted in adolescence.