23 results for magnitude-based inferences

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

80.00%

Publisher:

Abstract:

Blood aspiration is a significant forensic finding. In this study, we examined the value of postmortem computed tomography (CT) imaging in evaluating findings of blood aspiration. We selected 37 cases with autopsy evidence of blood in the lungs and/or in the airways previously submitted to total-body CT scanning. The CT images were retrospectively analyzed. In one case with pulmonary blood aspiration, biopsy specimens were obtained under CT guidance for histological examination. In six cases, CT detected pulmonary abnormalities suggestive of blood aspiration that were not mentioned in the autopsy reports. CT reconstructions provided additional data about the distribution and extent of aspiration. In one needle-biopsied case, the pulmonary specimens showed blood in the alveoli. We suggest the use of CT imaging as a tool complementary to traditional techniques in cases of blood aspiration to avoid misdiagnosis, to guide the investigation of lung tissue, and to allow for more evidence-based inferences on the cause of death.

Relevance:

80.00%

Publisher:

Abstract:

The Implicit Association Test (IAT) had already gained the status of a prominent assessment procedure before its psychometric properties and underlying task structure were understood. The present critique addresses five major problems that arise when the IAT is used for diagnostic inferences: (1) the asymmetry of causal and diagnostic inferences; (2) the viability of the underlying association model; (3) the lack of a testable model underlying IAT-based inferences; (4) the difficulties of interpreting difference scores; and (5) the susceptibility of the IAT to deliberate faking and strategic processing. Based on a theoretical reflection of these issues, and a comprehensive survey of published IAT studies, it is concluded that a number of uncontrolled factors can produce (or reduce) significant IAT scores independently of the personality attribute that is supposed to be captured by the IAT procedure.
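
The difficulties with difference scores mentioned in point (4) are easiest to see in how an IAT effect is conventionally computed. The sketch below implements a simplified D-like score (mean latency difference between the incompatible and compatible blocks divided by the pooled standard deviation); it follows the general logic of the widely used D measure but omits error penalties and trial-exclusion rules, so it should be read as an illustration of the scoring idea rather than the procedure used in any particular study. The latencies are synthetic.

```python
import numpy as np

def iat_d_score(compatible_rt_ms, incompatible_rt_ms):
    """Simplified IAT difference score: mean latency difference between the
    incompatible and compatible blocks, divided by the pooled standard
    deviation of all latencies. Error penalties and trial exclusions used in
    full scoring algorithms are deliberately omitted here."""
    compatible = np.asarray(compatible_rt_ms, dtype=float)
    incompatible = np.asarray(incompatible_rt_ms, dtype=float)
    pooled_sd = np.std(np.concatenate([compatible, incompatible]), ddof=1)
    return (incompatible.mean() - compatible.mean()) / pooled_sd

# Hypothetical response latencies (ms) for one participant
rng = np.random.default_rng(0)
compatible = rng.normal(700, 120, size=60)     # compatible block
incompatible = rng.normal(820, 150, size=60)   # incompatible block
print(f"D-like score: {iat_d_score(compatible, incompatible):.2f}")
```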

Relevance:

40.00%

Publisher:

Abstract:

Relatively little is known about past cold-season temperature variability in high-Alpine regions because of a lack of natural cold-season temperature proxies as well as under-representation of high-altitude sites in meteorological, early-instrumental and documentary data sources. Recent studies have shown that chrysophyte stomatocysts, or simply cysts (sub-fossil algal remains of Chrysophyceae and Synurophyceae), are among the very few natural proxies that can be used to reconstruct cold-season temperatures. This study presents a quantitative, high-resolution (5-year), cold-season (Oct–May) temperature reconstruction based on sub-fossil chrysophyte stomatocysts in the annually laminated (varved) sediments of high-Alpine Lake Silvaplana, SE Switzerland (1,789 m a.s.l.), since AD 1500. We first explore the method used to translate an ecologically meaningful variable based on a biological proxy into a simple climate variable. A transfer function was applied to reconstruct the ‘date of spring mixing’ from cyst assemblages. Next, statistical regression models were tested to convert the reconstructed ‘dates of spring mixing’ into cold-season surface air temperatures with associated errors. The strengths and weaknesses of this approach are thoroughly tested. One much-debated, basic assumption for reconstructions (‘stationarity’), which states that only the environmental variable of interest has influenced cyst assemblages and the influence of confounding variables is negligible over time, is addressed in detail. Our inferences show that past cold-season air-temperature fluctuations were substantial and larger than those of other temperature reconstructions for Europe and the Alpine region. Interestingly, in this study, recent cold-season temperatures only just exceed those of previous, multi-decadal warm phases since AD 1500. These findings highlight the importance of local studies to assess natural climate variability at high altitudes.
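
As a rough illustration of the two-step approach described above (not the authors' calibration), the sketch below treats the transfer-function output, reconstructed 'dates of spring mixing', as given and then fits a simple linear regression converting those dates into cold-season air temperatures with a standard error of prediction. All data and coefficients are synthetic.

```python
import numpy as np

# Synthetic calibration data: instrumental cold-season (Oct-May) temperature
# versus the date of spring mixing (day of year) inferred from cyst assemblages.
rng = np.random.default_rng(42)
spring_mixing_doy = rng.uniform(110, 160, size=40)
cold_season_temp = 2.0 - 0.05 * (spring_mixing_doy - 130) + rng.normal(0, 0.4, 40)

# Step 2 of the approach: regress temperature on the reconstructed mixing date.
slope, intercept = np.polyfit(spring_mixing_doy, cold_season_temp, 1)
predicted = intercept + slope * spring_mixing_doy
residual_se = np.sqrt(np.sum((cold_season_temp - predicted) ** 2)
                      / (len(cold_season_temp) - 2))

# Apply the regression to a reconstructed (pre-instrumental) mixing date.
reconstructed_doy = 148.0
print(f"Reconstructed Oct-May temperature: "
      f"{intercept + slope * reconstructed_doy:.2f} +/- {residual_se:.2f} degC")
```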

Relevance:

30.00%

Publisher:

Abstract:

CD8 T cells play a key role in mediating protective immunity against selected pathogens after vaccination. Understanding the mechanism of this protection is dependent upon definition of the heterogeneity and complexity of cellular immune responses generated by different vaccines. Here, we identify previously unrecognized subsets of CD8 T cells based upon analysis of gene-expression patterns within single cells and show that they are differentially induced by different vaccines. Three prime-boost vector combinations encoding HIV Env stimulated antigen-specific CD8 T-cell populations of similar magnitude, phenotype, and functionality. Remarkably, however, analysis of single-cell gene-expression profiles enabled discrimination of a majority of central memory (CM) and effector memory (EM) CD8 T cells elicited by the three vaccines. Subsets of T cells could be defined based on their expression of Eomes, Cxcr3, and Ccr7, or Klrk1, Klrg1, and Ccr5 in CM and EM cells, respectively. Of CM cells elicited by DNA prime-recombinant adenoviral (rAd) boost vectors, 67% were Eomes(-) Ccr7(+) Cxcr3(-), in contrast to only 7% and 2% stimulated by rAd5-rAd5 or rAd5-LCMV, respectively. Of EM cells elicited by DNA-rAd, 74% were Klrk1(-) Klrg1(-) Ccr5(-) compared with only 26% and 20% for rAd5-rAd5 or rAd5-LCMV. Definition by single-cell gene profiling of specific CM and EM CD8 T-cell subsets that are differentially induced by different gene-based vaccines will facilitate the design and evaluation of vaccines, as well as advance our understanding of mechanisms of protective immunity.
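
A minimal sketch of how such marker-defined subsets could be tabulated from single-cell data, assuming a hypothetical table with one row per cell, boolean columns for detectable expression of each gene, and a column naming the vaccine regimen; this illustrates only the grouping logic, not the authors' analysis pipeline.

```python
import numpy as np
import pandas as pd

# Hypothetical single-cell table: each row is a central memory CD8 T cell with
# boolean calls for detectable expression of three genes and the vaccine regimen.
rng = np.random.default_rng(1)
n = 300
cells = pd.DataFrame({
    "vaccine": rng.choice(["DNA-rAd5", "rAd5-rAd5", "rAd5-LCMV"], size=n),
    "Eomes": rng.random(n) < 0.3,
    "Ccr7": rng.random(n) < 0.6,
    "Cxcr3": rng.random(n) < 0.4,
})

# Fraction of CM cells per vaccine falling in the Eomes- Ccr7+ Cxcr3- subset.
subset = ~cells["Eomes"] & cells["Ccr7"] & ~cells["Cxcr3"]
fractions = cells.assign(in_subset=subset).groupby("vaccine")["in_subset"].mean()
print((100 * fractions).round(1).astype(str) + " %")
```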

Relevance:

30.00%

Publisher:

Abstract:

The volcanic aerosol plume resulting from the Eyjafjallajökull eruption in Iceland in April and May 2010 was detected in clear layers above Switzerland during two periods (17–19 April 2010 and 16–19 May 2010). In-situ measurements of the airborne volcanic plume were performed both within ground-based monitoring networks and with a research aircraft up to an altitude of 6000 m a.s.l. The wide range of aerosol and gas phase parameters studied at the high altitude research station Jungfraujoch (3580 m a.s.l.) allowed for an in-depth characterization of the detected volcanic aerosol. Both the data from the Jungfraujoch and the aircraft vertical profiles showed a consistent volcanic ash mode in the aerosol volume size distribution with a mean optical diameter around 3 ± 0.3 μm. These particles were found to have an average chemical composition very similar to the trachyandesite-like composition of rock samples collected near the volcano. Furthermore, chemical processing of volcanic sulfur dioxide into sulfate clearly contributed to the accumulation mode of the aerosol at the Jungfraujoch. The combination of these in-situ data and plume dispersion modeling results showed that a significant portion of the first volcanic aerosol plume reaching Switzerland on 17 April 2010 did not reach the Jungfraujoch directly, but was first dispersed and diluted in the planetary boundary layer. The maximum PM10 mass concentrations at the Jungfraujoch reached 30 μg m−3 and 70 μg m−3 (10-min mean values) during the April and May episodes, respectively. Even low-altitude monitoring stations registered up to 45 μg m−3 of volcanic ash related PM10 (Basel, Northwestern Switzerland, 18/19 April 2010). The flights with the research aircraft on 17 April 2010 showed one order of magnitude higher number concentrations over the northern Swiss plateau compared to the Jungfraujoch, and a mass concentration of 320 (200–520) μg m−3 on 18 May 2010 over the northwestern Swiss plateau. The presented data significantly contributed to the time-critical assessment of the local ash layer properties during the initial eruption phase. Furthermore, dispersion models benefited from the detailed information on the volcanic aerosol size distribution and its chemical composition.

Relevance:

30.00%

Publisher:

Abstract:

Background: We present a compendium of N-ethyl-N-nitrosourea (ENU)-induced mouse mutations, identified in our laboratory over a period of 10 years either on the basis of phenotype or whole genome and/or whole exome sequencing, and archived in the Mutagenetix database. Our purpose is threefold: 1) to formally describe many point mutations, including those that were not previously disclosed in peer-reviewed publications; 2) to assess the characteristics of these mutations; and 3) to estimate the likelihood that a missense mutation induced by ENU will create a detectable phenotype. Findings: In the context of an ENU mutagenesis program for C57BL/6J mice, a total of 185 phenotypes were tracked to mutations in 129 genes. In addition, 402 incidental mutations were identified and predicted to affect 390 genes. As previously reported, ENU shows strand asymmetry in its induction of mutations, particularly favoring T to A rather than A to T in the sense strand of coding regions and splice junctions. Some amino acid substitutions are far more likely to be damaging than others, and some are far more likely to be observed. Indeed, from among a total of 494 non-synonymous coding mutations, ENU was observed to create only 114 of the 182 possible amino acid substitutions that single base changes can achieve. Based on differences in overt null allele frequencies observed in phenotypic vs. non-phenotypic mutation sets, we infer that ENU-induced missense mutations create detectable phenotype only about 1 in 4.7 times. While the remaining mutations may not be functionally neutral, they are, on average, beneath the limits of detection of the phenotypic assays we applied. Conclusions: Collectively, these mutations add to our understanding of the chemical specificity of ENU, the types of amino acid substitutions it creates, and its efficiency in causing phenovariance. Our data support the validity of computational algorithms for the prediction of damage caused by amino acid substitutions, and may lead to refined predictions as to whether specific amino acid changes are responsible for observed phenotypes. These data form the basis for closer in silico estimations of the number of genes mutated to a state of phenovariance by ENU within a population of G3 mice.
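
One hedged reading of how an estimate of this kind can be derived (the abstract does not spell out the authors' calculation, so the formula and all numbers below are purely illustrative): if overt null alleles are assumed to always produce a phenotype while missense mutations do so with probability p, then the enrichment of null alleles in the phenotypic mutation set relative to their overall frequency lets p be solved for.

```python
# Illustrative back-of-the-envelope estimate, NOT the paper's calculation.
# Assumptions: overt null alleles are always phenotypic; missense mutations
# are phenotypic with unknown probability p; the incidental mutation set
# approximates the overall spectrum of induced mutations.

null_frac_overall = 0.10     # hypothetical fraction of nulls among all induced mutations
null_frac_phenotypic = 0.35  # hypothetical fraction of nulls among phenotypic mutations

# In the phenotypic set: f = n / (n + (1 - n) * p), so solve for p.
n = null_frac_overall
f = null_frac_phenotypic
p = n * (1 - f) / (f * (1 - n))

print(f"Estimated probability that a missense mutation is phenotypic: {p:.2f}")
print(f"i.e. roughly 1 in {1 / p:.1f} missense mutations")
```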

Relevance:

30.00%

Publisher:

Abstract:

To assess bone mineral density (BMD) in idiopathic calcium nephrolithiasis, dual-energy x-ray absorptiometry was performed at lumbar spine, upper femur (femoral neck, Ward's triangle, and total area), distal tibial diaphysis, and distal tibial epiphysis in 110 male idiopathic calcium stone formers (ICSF; 49 with and 61 without hypercalciuria on a free-choice diet). Results were compared with those obtained in 234 healthy male controls, using (1) noncorrected BMD, (2) BMD corrected for age, height, and BMI, and (3) a skeletal score based on a tercile distribution of BMD values at the following four sites: lumbar spine, Ward's triangle, tibial diaphysis, and tibial epiphysis. After correction, BMD, and therefore also the skeletal score, tended to be lower in the stone formers than in controls at five of the six measurement sites, that is, lumbar spine, upper femur, Ward's triangle, tibial diaphysis, and tibial epiphysis, the limit of significance being reached for the last two sites, with no difference between hypercalciuric (HCSF) and normocalciuric stone formers (NCSF). Estimated current daily calcium intake was significantly lower in patients (616 +/- 499 mg/24 h, mean +/- SEM) than in controls (773 +/- 532, p = 0.02). Of 17 patients who in the past had received a low-calcium diet for at least 1 year, 10 had a low skeletal score (4-6) whereas only 1 had a high score (10-12; p = 0.037). Of the 12 stone formers in the study with skeletal score 4 (i.e., the lowest), 8 had experienced one or more fractures of any kind in the past versus only 19 of the remaining 77 patients with skeletal score 5-12 (p = 0.01). (ABSTRACT TRUNCATED AT 250 WORDS)
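
A sketch of how a tercile-based skeletal score of this kind could be computed, assuming (as described) one point of 1-3 per site according to the tercile of the control distribution, summed over four sites to give a score between 4 and 12. The cut-offs and measurements below are synthetic, not the study's reference values.

```python
import numpy as np

SITES = ["lumbar_spine", "wards_triangle", "tibial_diaphysis", "tibial_epiphysis"]

def tercile_cutoffs(control_bmd):
    """Return the 33.3rd and 66.7th percentiles of the control BMD values."""
    return np.percentile(control_bmd, [100 / 3, 200 / 3])

def skeletal_score(patient_bmd, cutoffs_by_site):
    """Assign 1 (lowest tercile) to 3 (highest tercile) per site and sum."""
    score = 0
    for site in SITES:
        lo, hi = cutoffs_by_site[site]
        value = patient_bmd[site]
        score += 1 if value < lo else (2 if value < hi else 3)
    return score  # ranges from 4 (all lowest terciles) to 12 (all highest)

# Synthetic control distributions (234 controls per site) and one synthetic patient
rng = np.random.default_rng(3)
cutoffs = {site: tercile_cutoffs(rng.normal(1.0, 0.12, 234)) for site in SITES}
patient = {site: rng.normal(0.92, 0.10) for site in SITES}
print("Skeletal score:", skeletal_score(patient, cutoffs))
```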

Relevance:

30.00%

Publisher:

Abstract:

The mid-Holocene (6 kyr BP; thousand years before present) is a key period to study the consistency between model results and proxy-based reconstruction data as it corresponds to a standard test for models and a reasonable number of proxy-based records is available. Taking advantage of this relatively large amount of information, we have compared a compilation of 50 air and sea surface temperature reconstructions with the results of three simulations performed with general circulation models and one carried out with LOVECLIM, a model of intermediate complexity. The conclusions derived from this analysis confirm that models and data agree on the large-scale spatial pattern but the models underestimate the magnitude of some observed changes and that large discrepancies are observed at the local scale. To further investigate the origin of those inconsistencies, we have constrained LOVECLIM to follow the signal recorded by the proxies selected in the compilation using a data-assimilation method based on a particle filter. In one simulation, all the 50 proxy-based records are used while in the other two only the continental or oceanic proxy-based records constrain the model results. As expected, data assimilation improves the consistency between model results and the reconstructions. In particular, this is achieved in a robust way in all the experiments through a strengthening of the westerlies at midlatitude that warms up northern Europe. Furthermore, the comparison of the LOVECLIM simulations with and without data assimilation has also objectively identified 16 proxy-based paleoclimate records whose reconstructed signal is incompatible either with the signal recorded by some other proxy-based records or with model physics.
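
The data-assimilation step described above relies on a particle filter. The sketch below shows the generic bootstrap version of the idea, weighting an ensemble of model states by their likelihood given a proxy-based record and resampling, on a toy scalar "temperature anomaly" model; it illustrates the mechanics only and is not LOVECLIM or the authors' implementation. All numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

n_particles = 500
n_steps = 50
proxy_error_sd = 0.3                                         # assumed proxy uncertainty
truth = np.cumsum(rng.normal(0, 0.1, n_steps))               # synthetic "true" anomaly
proxy_obs = truth + rng.normal(0, proxy_error_sd, n_steps)   # synthetic proxy record

particles = rng.normal(0, 0.5, n_particles)   # ensemble of model states
analysis_mean = np.empty(n_steps)

for t in range(n_steps):
    # Model step: each particle evolves with its own stochastic forcing.
    particles = particles + rng.normal(0, 0.1, n_particles)

    # Weight particles by their likelihood given the proxy observation.
    weights = np.exp(-0.5 * ((proxy_obs[t] - particles) / proxy_error_sd) ** 2)
    weights /= weights.sum()
    analysis_mean[t] = np.sum(weights * particles)

    # Resample particles in proportion to their weights (bootstrap filter).
    idx = rng.choice(n_particles, size=n_particles, p=weights)
    particles = particles[idx]

print("RMSE of raw proxy record vs truth    :",
      np.sqrt(np.mean((proxy_obs - truth) ** 2)).round(3))
print("RMSE of assimilated ensemble mean    :",
      np.sqrt(np.mean((analysis_mean - truth) ** 2)).round(3))
```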

Relevance:

30.00%

Publisher:

Abstract:

The flood seasonality of catchments in Switzerland is likely to change under climate change because of anticipated alterations of precipitation as well as snow accumulation and melt. Information on this change is crucial, for example, for flood protection policies or regional flood frequency analysis. We analysed projected changes in mean annual and maximum floods of a 22-year period for 189 catchments in Switzerland and two scenario periods in the 21st century based on an ensemble of climate scenarios. The flood seasonality was analysed with directional statistics that allow assessing both changes in the mean date on which a flood occurs and changes in the strength of the seasonality. We found that the simulated change in flood seasonality is a function of the change in flow regime type. If snow accumulation and melt are important in a catchment during the control period, then the anticipated change in flood seasonality is most pronounced. Decreasing summer precipitation in the scenarios additionally affects the flood seasonality (mean date of flood occurrence) and leads to a decreasing strength of seasonality, that is, a higher temporal variability in most cases. The magnitudes of mean annual floods and more clearly of maximum floods (in a 22-year period) are expected to increase in the future because of changes in flood-generating processes and scaled extreme precipitation. Southern alpine catchments show a different signal, though: the simulated mean annual floods decrease in the far future, that is, at the end of the 21st century.
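
The directional statistics mentioned above can be summarized compactly: each flood date is mapped onto an angle on the annual circle, the circular mean gives the mean date of flood occurrence, and the length of the mean resultant vector measures the strength of the seasonality. A minimal sketch with synthetic flood dates (not the study's data or software):

```python
import numpy as np

def flood_seasonality(days_of_year, year_length=365.25):
    """Circular mean date and strength of seasonality from flood dates.

    Returns the mean day of flood occurrence and the mean resultant length R
    (R close to 1: strongly seasonal; R close to 0: no preferred season).
    """
    angles = 2 * np.pi * np.asarray(days_of_year, dtype=float) / year_length
    x, y = np.mean(np.cos(angles)), np.mean(np.sin(angles))
    mean_angle = np.arctan2(y, x) % (2 * np.pi)
    mean_day = mean_angle * year_length / (2 * np.pi)
    strength = np.hypot(x, y)
    return mean_day, strength

# Synthetic annual maximum flood dates (day of year) clustered in early summer
rng = np.random.default_rng(11)
flood_days = rng.normal(170, 20, size=22) % 365.25
mean_day, R = flood_seasonality(flood_days)
print(f"Mean date of flood occurrence: day {mean_day:.0f}, seasonality R = {R:.2f}")
```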

Relevance:

30.00%

Publisher:

Abstract:

The ActiGraph accelerometer is commonly used to measure physical activity in children. Count cut-off points are needed when using accelerometer data to determine the time a person spent in moderate or vigorous physical activity. For the GT3X accelerometer no cut-off points for young children have been published yet. The aim of the current study was thus to develop and validate count cut-off points for young children. Thirty-two children aged 5 to 9 years performed four locomotor and four play activities. Activity classification into the light-, moderate- or vigorous-intensity category was based on energy expenditure measurements with indirect calorimetry. Vertical axis as well as vector magnitude cut-off points were determined through receiver operating characteristic curve analyses with the data of two thirds of the study group and validated with the data of the remaining third. The vertical axis cut-off points were 133 counts per 5 sec for moderate to vigorous physical activity (MVPA), 193 counts for vigorous physical activity (VPA) at a metabolic threshold of 5 MET, and 233 counts for VPA at 6 MET. The vector magnitude cut-off points were 246 counts per 5 sec for MVPA, 316 counts for VPA at 5 MET and 381 counts for VPA at 6 MET. When validated, the current cut-off points generally showed high recognition rates for each category, high sensitivity and specificity values and moderate agreement in terms of the Kappa statistic. These results were similar for vertical axis and vector magnitude cut-off points. The current cut-off points adequately reflect MVPA and VPA in young children. Cut-off points based on vector magnitude counts did not appear to reflect the intensity categories better than cut-off points based on vertical axis counts alone.
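
A hedged sketch of the cut-off derivation described above: counts per 5-s epoch are compared against a criterion label from indirect calorimetry, an ROC curve is computed on a development subset, a count threshold is picked (here by maximizing sensitivity + specificity - 1, the Youden index, which may differ from the study's criterion), and the threshold is then checked on the held-out third. All data are synthetic.

```python
import numpy as np
from sklearn.metrics import roc_curve, cohen_kappa_score

rng = np.random.default_rng(5)

# Synthetic epochs: accelerometer counts per 5 s and a calorimetry-based
# label (1 = moderate-to-vigorous activity, 0 = light activity).
counts = np.concatenate([rng.gamma(3, 30, 600), rng.gamma(6, 45, 400)])
is_mvpa = np.concatenate([np.zeros(600, dtype=int), np.ones(400, dtype=int)])

# Two-thirds development / one-third validation split.
idx = rng.permutation(len(counts))
dev, val = idx[: 2 * len(idx) // 3], idx[2 * len(idx) // 3:]

# ROC analysis on the development set; pick the Youden-optimal count cut-off.
fpr, tpr, thresholds = roc_curve(is_mvpa[dev], counts[dev])
cutoff = thresholds[np.argmax(tpr - fpr)]

# Validate the cut-off on the remaining third.
predicted = (counts[val] >= cutoff).astype(int)
sensitivity = np.mean(predicted[is_mvpa[val] == 1] == 1)
specificity = np.mean(predicted[is_mvpa[val] == 0] == 0)
kappa = cohen_kappa_score(is_mvpa[val], predicted)
print(f"cut-off = {cutoff:.0f} counts/5 s, sensitivity = {sensitivity:.2f}, "
      f"specificity = {specificity:.2f}, kappa = {kappa:.2f}")
```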

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: To extend the capabilities of the Cone Location and Magnitude Index algorithm to include a combination of topographic information from the anterior and posterior corneal surfaces and corneal thickness measurements to further improve our ability to correctly identify keratoconus using this new index: ConeLocationMagnitudeIndex_X. DESIGN: Retrospective case-control study. METHODS: Three independent data sets were analyzed: one development and two validation. The AnteriorCornealPower index was calculated to stratify the keratoconus data from mild to severe. The ConeLocationMagnitudeIndex algorithm was applied to all tomography data collected using a dual Scheimpflug-Placido-based tomographer. The ConeLocationMagnitudeIndex_X formula, resulting from analysis of the Development set, was used to determine the logistic regression model that best separates keratoconus from normal and was applied to all data sets to calculate PercentProbabilityKeratoconus_X. The sensitivity/specificity of PercentProbabilityKeratoconus_X was compared with the original PercentProbabilityKeratoconus, which only uses anterior axial data. RESULTS: The AnteriorCornealPower severity distribution for the combined data sets is 136 mild, 12 moderate, and 7 severe. The logistic regression model generated for ConeLocationMagnitudeIndex_X produces complete separation for the Development set. Validation Set 1 has one false negative and Validation Set 2 has one false positive. The overall sensitivity/specificity results for the logistic model produced using the ConeLocationMagnitudeIndex_X algorithm are 99.4% and 99.6%, respectively. The overall sensitivity/specificity results for using the original ConeLocationMagnitudeIndex algorithm are 89.2% and 98.8%, respectively. CONCLUSIONS: ConeLocationMagnitudeIndex_X provides a robust index that can detect the presence or absence of a keratoconic pattern in corneal tomography maps with improved sensitivity/specificity from the original anterior surface-only ConeLocationMagnitudeIndex algorithm.
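
As a rough sketch of the classification step (not the ConeLocationMagnitudeIndex_X formula itself), the snippet below fits a logistic regression to a handful of hypothetical tomography-derived features, anterior cone magnitude, posterior cone magnitude and minimum pachymetry, on a development set and reports sensitivity/specificity on a validation set. The feature names and data are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)

def make_eyes(n, keratoconus):
    """Hypothetical tomography features: anterior cone magnitude (D),
    posterior cone magnitude (D), minimum corneal thickness (um)."""
    if keratoconus:
        return np.column_stack([rng.normal(4.0, 1.5, n),
                                rng.normal(0.9, 0.3, n),
                                rng.normal(470, 30, n)])
    return np.column_stack([rng.normal(0.8, 0.4, n),
                            rng.normal(0.2, 0.1, n),
                            rng.normal(545, 25, n)])

X_dev = np.vstack([make_eyes(80, True), make_eyes(120, False)])
y_dev = np.array([1] * 80 + [0] * 120)
X_val = np.vstack([make_eyes(40, True), make_eyes(60, False)])
y_val = np.array([1] * 40 + [0] * 60)

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_dev, y_dev)
pred = model.predict(X_val)
sensitivity = np.mean(pred[y_val == 1] == 1)
specificity = np.mean(pred[y_val == 0] == 0)
print(f"validation sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```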

Relevance:

30.00%

Publisher:

Abstract:

In natural hazard research, risk is defined as a function of (1) the probability of occurrence of a hazardous process, and (2) the assessment of the related extent of damage, defined by the value of the exposed elements at risk and their physical vulnerability. To date, various studies have been undertaken to determine vulnerability values for objects exposed to geomorphic hazards such as mountain torrents. Yet, many studies only provide rough estimates for vulnerability values based on proxies for process intensities. However, the deduced vulnerability functions proposed in the literature show a wide range, in particular with respect to medium and high process magnitudes. In our study, we compare vulnerability functions for torrent processes derived from studies in test sites located in the Austrian Alps and in Taiwan. Based on this comparison, we identify needs for future research to enhance mountain hazard risk management, with a particular focus on the question of vulnerability at the catchment scale.
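
The risk definition given at the start of the abstract can be written out explicitly. A minimal sketch, assuming a hypothetical intensity-dependent vulnerability function for a building exposed to torrent processes; the functional form and numbers are illustrative and not one of the curves derived in the study.

```python
import math

def vulnerability(intensity_m):
    """Hypothetical vulnerability curve: expected degree of loss (0-1) as a
    function of torrent process intensity, here taken as deposition depth in m."""
    return 1.0 - math.exp(-0.8 * intensity_m)   # illustrative saturating form

def risk(prob_occurrence, element_value, intensity_m):
    """Risk = probability of the hazardous process x value of the exposed
    element x its physical vulnerability at the given process intensity."""
    return prob_occurrence * element_value * vulnerability(intensity_m)

# Example: a 1-in-100-year event depositing 1.5 m of material at a building
# worth 500,000 (arbitrary currency units).
print(f"Annual expected loss: {risk(0.01, 500_000, 1.5):,.0f}")
```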

Relevance:

30.00%

Publisher:

Abstract:

Tropical wetlands are estimated to represent about 50% of the natural wetland methane (CH4) emissions and explain a large fraction of the observed CH4 variability on timescales ranging from glacial–interglacial cycles to the currently observed year-to-year variability. Despite their importance, however, tropical wetlands are poorly represented in global models aiming to predict global CH4 emissions. This publication documents a first step in the development of a process-based model of CH4 emissions from tropical floodplains for global applications. For this purpose, the LPX-Bern Dynamic Global Vegetation Model (LPX hereafter) was slightly modified to represent floodplain hydrology, vegetation and associated CH4 emissions. The extent of tropical floodplains was prescribed using output from the spatially explicit hydrology model PCR-GLOBWB. We introduced new plant functional types (PFTs) that explicitly represent floodplain vegetation. The PFT parameterizations were evaluated against available remote-sensing data sets (GLC2000 land cover and MODIS Net Primary Productivity). Simulated CH4 flux densities were evaluated against field observations and regional flux inventories. Simulated CH4 emissions at Amazon Basin scale were compared to model simulations performed in the WETCHIMP intercomparison project. We found that LPX reproduces the average magnitude of observed net CH4 flux densities for the Amazon Basin. However, the model does not reproduce the variability between sites or between years within a site. Unfortunately, site information is too limited to confirm or refute some model features. At the Amazon Basin scale, our results underline the large uncertainty in the magnitude of wetland CH4 emissions. Sensitivity analyses gave insights into the main drivers of floodplain CH4 emission and their associated uncertainties. In particular, uncertainties in floodplain extent (i.e., difference between GLC2000 and PCR-GLOBWB output) modulate the simulated emissions by a factor of about 2. Our best estimates, using PCR-GLOBWB in combination with GLC2000, lead to simulated Amazon-integrated emissions of 44.4 ± 4.8 Tg yr−1. Additionally, the LPX emissions are highly sensitive to vegetation distribution. Two simulations with the same mean PFT cover, but different spatial distributions of grasslands within the basin, modulated emissions by about 20%. Correcting the LPX-simulated NPP using MODIS reduces the Amazon emissions by 11.3%. Finally, due to an intrinsic limitation of LPX to account for seasonality in floodplain extent, the model failed to reproduce the full dynamics in CH4 emissions, but we propose solutions to this issue. The interannual variability (IAV) of the emissions increases by 90% if the IAV in floodplain extent is accounted for, but still remains lower than in most of the WETCHIMP models. While our model includes more mechanisms specific to tropical floodplains, we were unable to reduce the uncertainty in the magnitude of wetland CH4 emissions of the Amazon Basin. Our results helped identify and prioritize directions towards more accurate estimates of tropical CH4 emissions, and they stress the need for more research to constrain floodplain CH4 emissions and their temporal variability, even before including other fundamental mechanisms such as floating macrophytes or lateral water fluxes.

Relevance:

30.00%

Publisher:

Abstract:

Tree rings offer one of the few possibilities to empirically quantify and reconstruct forest growth dynamics over years to millennia. Contemporaneously with the growing scientific community employing tree-ring parameters, recent research has suggested that commonly applied sampling designs (i.e. how and which trees are selected for dendrochronological sampling) may introduce considerable biases in quantifications of forest responses to environmental change. To date, a systematic assessment of the consequences of sampling design on dendroecological and dendroclimatological conclusions has not yet been performed. Here, we investigate potential biases by sampling a large population of trees and replicating diverse sampling designs. This is achieved by retroactively subsetting the population and specifically testing for biases emerging for climate reconstruction, growth response to climate variability, long-term growth trends, and quantification of forest productivity. We find that commonly applied sampling designs can impart systematic biases of varying magnitude to any type of tree-ring-based investigation, independent of the total number of samples considered. Quantifications of forest growth and productivity are particularly susceptible to biases, whereas growth responses to short-term climate variability are less affected by the choice of sampling design. The world's most frequently applied sampling design, focusing on dominant trees only, can bias absolute growth rates by up to 459% and trends in excess of 200%. Our findings challenge paradigms, where a subset of samples is typically considered to be representative for the entire population. The only two sampling strategies meeting the requirements for all types of investigations are the (i) sampling of all individuals within a fixed area; and (ii) fully randomized selection of trees. This result advocates the consistent implementation of a widely applicable sampling design to simultaneously reduce uncertainties in tree-ring-based quantifications of forest growth and increase the comparability of datasets beyond individual studies, investigators, laboratories, and geographical boundaries.
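
The core comparison, subsetting a fully measured population according to different sampling designs and contrasting the resulting growth estimates, can be illustrated with a toy simulation. The sketch below contrasts a "dominant trees only" design with fully randomized selection on a synthetic stand in which ring width scales with tree size; the magnitude of the resulting bias is arbitrary and only meant to show the mechanism, not to reproduce the study's figures.

```python
import numpy as np

rng = np.random.default_rng(21)

# Synthetic stand: stem diameters (cm) and ring widths (mm) that, by
# construction, increase with tree size (the source of the sampling bias).
n_trees = 1000
diameter = rng.lognormal(mean=3.2, sigma=0.4, size=n_trees)
ring_width = np.clip(0.02 * diameter + rng.normal(0, 0.3, n_trees), 0.05, None)

true_mean = ring_width.mean()

# Design 1: sample only the dominant trees (largest 20% by diameter).
dominant = ring_width[np.argsort(diameter)[-n_trees // 5:]]

# Design 2: fully randomized selection of the same number of trees.
random_sample = rng.choice(ring_width, size=n_trees // 5, replace=False)

for name, sample in [("dominant-only", dominant), ("randomized", random_sample)]:
    bias = 100 * (sample.mean() - true_mean) / true_mean
    print(f"{name:14s} mean ring width = {sample.mean():.2f} mm "
          f"({bias:+.0f}% vs population mean)")
```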