910 results for Probabilistic interpretation
Abstract:
Estimating the time since discharge of a spent cartridge or a firearm can be useful in criminal situations involving firearms. The analysis of volatile gunshot residue remaining after shooting, using solid-phase microextraction (SPME) followed by gas chromatography (GC), has been proposed to meet this objective. However, current interpretative models suffer from several conceptual drawbacks that render them inadequate for assessing the evidential value of a given measurement. This paper aims to fill this gap by proposing a logical approach based on the assessment of likelihood ratios. A probabilistic model was developed and applied to a hypothetical scenario in which alternative hypotheses about the discharge time of a spent cartridge found at a crime scene were put forward. To estimate the parameters required to implement this solution, a non-linear regression model was proposed and applied to real published data. The proposed approach proved to be a valuable method for interpreting aging-related data.
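To make the likelihood-ratio idea concrete, here is a minimal Python sketch assuming a hypothetical exponential aging model with Gaussian measurement noise; the parameters A, K and SIGMA, the observed peak and the candidate discharge times are all illustrative, not values from the paper.

```python
import math

# Hypothetical exponential aging model for a volatile GSR compound:
# expected signal mu(t) = A * exp(-K * t), with Gaussian measurement noise.
A, K, SIGMA = 100.0, 0.05, 8.0   # amplitude, decay rate (1/h), noise SD

def expected_signal(t_hours: float) -> float:
    return A * math.exp(-K * t_hours)

def likelihood(y: float, t_hours: float) -> float:
    """Gaussian density of observing signal y if discharge was t_hours ago."""
    mu = expected_signal(t_hours)
    return math.exp(-0.5 * ((y - mu) / SIGMA) ** 2) / (SIGMA * math.sqrt(2 * math.pi))

y_obs = 40.0   # measured SPME-GC peak (arbitrary units)
t_h1 = 12.0    # H1: cartridge discharged ~12 h before collection
t_h2 = 48.0    # H2: cartridge discharged ~48 h before collection

lr = likelihood(y_obs, t_h1) / likelihood(y_obs, t_h2)
print(f"LR = {lr:.0f}")  # LR > 1 supports H1 over H2 by that factor
```

In a real case the aging model and its parameters would come from the non-linear regression fitted to published decay data, and the reported LR would be accompanied by its sensitivity to those estimates.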
Abstract:
The scalar sector of the effective low-energy six-dimensional Kaluza-Klein theory is seen to represent an anisotropic fluid composed of two perfect fluids if the extra space metric has a Euclidean signature, or a perfect fluid of geometric strings if it has an indefinite signature. The Einstein field equations with such fluids can be explicitly integrated when the four-dimensional space-time has two commuting Killing vectors.
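For reference, this is the generic stress-energy tensor of an anisotropic fluid (written here in the −+++ signature, with distinct pressures along and orthogonal to a spacelike direction s^μ); the paper's specific identification of ρ, p∥ and p⊥ with the Kaluza-Klein scalar fields is not reproduced here.

```latex
% Generic anisotropic-fluid stress-energy tensor (assumed conventions):
T_{\mu\nu} = (\rho + p_\perp)\, u_\mu u_\nu + p_\perp\, g_{\mu\nu}
           + \left( p_\parallel - p_\perp \right) s_\mu s_\nu ,
\qquad u^\mu u_\mu = -1, \quad s^\mu s_\mu = 1, \quad u^\mu s_\mu = 0 .
```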
Abstract:
We assessed whether fasting modifies the prognostic value of triglyceride measurements for the risk of myocardial infarction (MI). Analyses used mixed-effect models and Poisson regression. After confounders were controlled for, fasting triglyceride levels were, on average, 0.122 mmol/L lower than nonfasting levels. Each 2-fold increase in the latest triglyceride level was associated with a 38% increase in MI risk (relative rate, 1.38; 95% confidence interval, 1.26-1.51); fasting status did not modify this association. Our results suggest that it may not be necessary to restrict analyses to fasting measurements when considering MI risk.
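As a quick arithmetic illustration of how a per-doubling relative rate compounds, here is a hedged Python sketch; only the 1.38 figure comes from the abstract, and the log-linear extrapolation to other fold changes is an assumption.

```python
import math

rr_per_doubling = 1.38  # relative MI rate per 2-fold triglyceride increase

def relative_rate(fold_change: float) -> float:
    """Relative rate implied by a given fold change in triglycerides,
    assuming a log-linear model: RR = rr_per_doubling ** log2(fold_change)."""
    return rr_per_doubling ** math.log2(fold_change)

print(relative_rate(2.0))  # 1.38 by construction
print(relative_rate(4.0))  # ~1.90: two doublings compound multiplicatively
print(relative_rate(1.5))  # ~1.21 for a 50% increase
```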
Abstract:
Aim Conservation strategies need predictions that capture spatial community composition and structure. Currently, the methods used to generate these predictions generally focus on deterministic processes and omit important stochastic processes and other unexplained variation in model outputs. Here we test a novel approach to community modelling that accounts for this variation, and we determine how well it reproduces observed properties of alpine butterfly communities. Location The western Swiss Alps. Methods We propose a new approach to processing probabilistic predictions derived from stacked species distribution models (S-SDMs) in order to predict, and assess the uncertainty in, predictions of community properties. We test the utility of our novel approach against a traditional threshold-based approach, using mountain butterfly communities spanning a large elevation gradient as a case study, and evaluate the ability of our approach to model the species richness and phylogenetic diversity of communities. Results S-SDMs reproduced the observed decrease in phylogenetic diversity and species richness with elevation, a signature of environmental filtering. The prediction accuracy of community properties varied along the environmental gradient: variability in predictions of species richness was higher at low elevation, whereas it was lower there for phylogenetic diversity. Our approach allowed us to map the variability in species richness and phylogenetic diversity projections. Main conclusion Using our probabilistic approach to process species distribution model outputs to reconstruct communities furnishes an improved picture of the range of possible assemblage realisations under similar environmental conditions given stochastic processes, and helps inform managers of the uncertainty in the modelling results.
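A minimal sketch of the probabilistic stacking idea, assuming hypothetical per-species occurrence probabilities: richness is sampled from independent Bernoulli draws instead of counting species above a threshold, which yields a distribution (and hence an uncertainty estimate) rather than a single number.

```python
import random

# Hypothetical per-species occurrence probabilities at one site (e.g. SDM outputs).
p_occ = [0.9, 0.6, 0.35, 0.2, 0.05]

def richness_draws(probs, n_draws=10_000, seed=42):
    """Sample species richness from independent Bernoulli occurrence draws,
    yielding a distribution instead of a single thresholded count."""
    rng = random.Random(seed)
    return [sum(1 for p in probs if rng.random() < p) for _ in range(n_draws)]

draws = richness_draws(p_occ)
mean = sum(draws) / len(draws)
print(f"expected richness ~ {mean:.2f}")                              # near sum(p_occ) = 2.10
print(f"threshold (p > 0.5) richness = {sum(p > 0.5 for p in p_occ)}")  # 2
# The spread of `draws` (e.g. its 5th-95th percentiles) is the uncertainty
# that a single threshold-based prediction hides.
```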
Abstract:
The International Society for Clinical Densitometry (ISCD) and the International Osteoporosis Foundation (IOF) convened the FRAX(®) Position Development Conference (PDC) in Bucharest, Romania, on November 14, 2010, following a two-day joint meeting of the ISCD and IOF on the "Interpretation and Use of FRAX(®) in Clinical Practice." These three days of critical discussion and debate, led by a panel of international experts from the ISCD, IOF and dedicated task forces, have clarified a number of important issues pertaining to the interpretation and implementation of FRAX(®) in clinical practice. The Official Positions resulting from the PDC are intended to enhance the quality and clinical utility of fracture risk assessment worldwide. Since the field of skeletal assessment is still evolving rapidly, some clinically important issues addressed at the PDCs are not associated with robust medical evidence. Accordingly, some Official Positions are based largely on expert opinion. Despite limitations inherent in such a process, the ISCD and IOF believe it is important to provide clinicians and technologists with the best distillation of current knowledge in the discipline of bone densitometry and provide an important focus for the scientific community to consider. This report describes the methodology and results of the ISCD-IOF PDC dedicated to FRAX(®).
Abstract:
This paper analyses and discusses arguments that have emerged from a recent discussion about the proper assessment of the evidential value of correspondences observed between the characteristics of a crime stain and those of a sample from a suspect when (i) the latter individual is found as a result of a database search and (ii) the remaining database members are excluded as potential sources (because of different analytical characteristics). Using a graphical probability approach (i.e., Bayesian networks), this paper clarifies that there is no need to (i) introduce a correction factor equal to the size of the searched database (i.e., to reduce a likelihood ratio), nor to (ii) adopt a propositional level not directly related to the suspect matching the crime stain (i.e., a proposition of the kind 'some person in (outside) the database is the source of the crime stain' rather than 'the suspect (some other person) is the source of the crime stain'). The present research thus confirms the existing literature on the topic, which has repeatedly demonstrated that requirements (i) and (ii) should not be a cause of concern.
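A toy numeric sketch of the disputed "database correction" in Python; the values of gamma and n are invented for illustration, and the closing comment restates the abstract's conclusion rather than deriving it.

```python
# Toy numbers, purely illustrative: a database of n profiles is searched,
# the suspect's profile matches the crime stain, and the other n - 1
# database members are excluded as sources.
gamma = 1e-6   # random match probability of the corresponding characteristics
n = 10_000     # size of the searched database

lr_classic = 1 / gamma         # source-level likelihood ratio: 1,000,000
lr_corrected = lr_classic / n  # the disputed "divide by n" correction: 100

print(f"classic LR:  {lr_classic:,.0f}")
print(f"'corrected': {lr_corrected:,.0f}")
# Per the abstract (and the Bayesian-network analyses it confirms), excluding
# the other n - 1 members only removes candidate sources, so no such division
# is warranted; if anything, the search slightly strengthens the evidence.
```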
Abstract:
Research in autophagy continues to accelerate,(1) and as a result many new scientists are entering the field. Accordingly, it is important to establish a standard set of criteria for monitoring macroautophagy in different organisms. Recent reviews have described the range of assays that have been used for this purpose.(2,3) There are many useful and convenient methods that can be used to monitor macroautophagy in yeast, but relatively few in other model systems, and there is much confusion regarding acceptable methods to measure macroautophagy in higher eukaryotes. A key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers of autophagosomes versus those that measure flux through the autophagy pathway; thus, a block in macroautophagy that results in autophagosome accumulation needs to be differentiated from fully functional autophagy that includes delivery to, and degradation within, lysosomes (in most higher eukaryotes) or the vacuole (in plants and fungi). Here, we present a set of guidelines for the selection and interpretation of the methods that can be used by investigators who are attempting to examine macroautophagy and related processes, as well as by reviewers who need to provide realistic and reasonable critiques of papers that investigate these processes. This set of guidelines is not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to verify an autophagic response.
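As a minimal illustration of the numbers-versus-flux distinction made above, consider a hedged Python sketch of the common LC3-II turnover comparison (levels measured with and without a lysosomal inhibitor such as bafilomycin A1); the numbers are made up.

```python
# Illustrative flux calculation: LC3-II measured with and without a
# lysosomal inhibitor. Values are hypothetical, normalized units.
lc3_ii_untreated = 1.0   # steady-state LC3-II, no inhibitor
lc3_ii_inhibited = 3.5   # LC3-II accumulates when degradation is blocked

flux = lc3_ii_inhibited - lc3_ii_untreated
print(f"autophagic flux ~ {flux:.1f}")
# A block downstream of autophagosome formation would show high LC3-II in
# both conditions but little difference between them, i.e. low flux, even
# though autophagosome numbers are elevated.
```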
Abstract:
The research reported in this series of articles aimed at (1) automating the search for questioned ink specimens in ink reference collections and (2) evaluating the strength of ink evidence in a transparent and balanced manner. These aims require that ink samples be analysed in an accurate and reproducible way and compared in an objective and automated way; the latter requirement is due to the large number of comparisons that are necessary in both scenarios. A research programme was designed to (a) develop a standard methodology for analysing ink samples in a reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in forensic contexts. This report focuses on the last of the three stages of the research programme. The calibration and acquisition process and the mathematical comparison algorithms were described in previous papers [C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part I: Development of a quality assurance process for forensic ink analysis by HPTLC, Forensic Sci. Int. 185 (2009) 29-37; C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part II: Development and testing of mathematical algorithms for the automatic comparison of ink samples analysed by HPTLC, Forensic Sci. Int. 185 (2009) 38-50]. In this paper, the benefits and challenges of the proposed concepts are tested in two forensic contexts: (1) ink identification and (2) ink evidential value assessment. The results show that different algorithms are better suited to different tasks. This research shows that it is possible to build digital ink libraries using the most commonly used ink analytical technique, i.e. high-performance thin-layer chromatography, despite its reputation for lacking reproducibility. More importantly, it is possible to assign evidential value to ink evidence in a transparent way using a probabilistic model. It is therefore possible to move away from the traditional subjective approach, which is entirely based on experts' opinions and is usually not very informative. While there is room for improvement, this report demonstrates the significant gains obtained over the traditional subjective approach for the search of ink specimens in ink databases and the interpretation of their evidential value.
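To illustrate what an automatic, objective profile comparison might look like, here is a hedged Python sketch using a simple Pearson correlation between two hypothetical HPTLC densitometric traces; this is only one metric of the general kind evaluated in the cited Part II paper, not the authors' specific algorithm.

```python
import math

def pearson(x, y):
    """Pearson correlation between two intensity profiles of equal length."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical densitometric traces of a questioned and a reference ink.
questioned = [0.1, 0.4, 0.9, 0.5, 0.2, 0.1]
reference  = [0.1, 0.5, 0.8, 0.5, 0.3, 0.1]
print(f"similarity = {pearson(questioned, reference):.3f}")
```

In a library search, such a score would be computed against every reference entry and the best-ranked candidates returned; evidential value assessment additionally requires modelling the score distributions under same-source and different-source hypotheses.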
Abstract:
This work deals with the elaboration of flood hazard maps. These maps reflect the areas prone to floods based on the effects of Hurricane Mitch in the Municipality of Jucuarán, El Salvador. Stream channels located in the coastal range in the SE of El Salvador flow into the Pacific Ocean and generate alluvial fans. Communities often inhabit these fans and can be affected by floods. The geomorphology of these stream basins is characterized by small areas, steep slopes, well-developed regolith and extensive deforestation. These features play a key role in the generation of flash floods. The zone lacks comprehensive rainfall data and gauging stations, and the most detailed topographic maps are on a scale of 1:25 000. Given that this scale was not sufficiently detailed, we used aerial photographs enlarged to a scale of 1:8000. The effects of Hurricane Mitch mapped on these photographs were regarded as the reference event. The flood maps have a dual purpose: (1) community emergency plans and (2) regional land-use planning carried out by local authorities. The geomorphological method is based on mapping the geomorphological evidence (alluvial fans, preferential stream channels, erosion and sedimentation, man-made terraces). Following the interpretation of the photographs, this information was validated in the field and complemented by eyewitness reports such as the height of water and flow typology. In addition, community workshops were organized to obtain information about the evolution and impact of the phenomena. The superimposition of this information enabled us to obtain a comprehensive geomorphological map. Another aim of the study was the calculation of peak discharge using the Manning and paleohydraulic methods and estimates based on geomorphological criteria. The results were compared with those obtained using the rational method, and significant differences in the order of magnitude of the calculated discharges were noted. The rational method underestimated the discharges owing to the short and discontinuous rainfall records, which prevent the application of probabilistic equations. The Manning method yields a wide range of results because of its dependence on the roughness coefficient. The paleohydraulic method yielded higher values than the rational and Manning methods; it should be pointed out, however, that bigger boulders could have been moved had they existed. These discharge values are still lower than the geomorphological estimates, which are much closer to reality. The flood hazard maps were derived from the comprehensive geomorphological map. Three categories of hazard were established (very high, high and moderate) using flood energy, water height and flow velocity deduced from geomorphological evidence and eyewitness reports.
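For concreteness, here is a hedged Python sketch of the Manning estimate mentioned above; the cross-section values are hypothetical, and the loop shows the sensitivity to the roughness coefficient n that the abstract notes.

```python
def manning_discharge(area_m2, hydraulic_radius_m, slope, n_roughness):
    """Peak discharge (m^3/s) from the Manning equation:
    Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    return (1.0 / n_roughness) * area_m2 * hydraulic_radius_m ** (2 / 3) * slope ** 0.5

# Hypothetical cross-section reconstructed from flood marks; note how the
# result swings with the chosen roughness coefficient.
for n in (0.035, 0.050, 0.080):
    q = manning_discharge(area_m2=25.0, hydraulic_radius_m=1.8, slope=0.02, n_roughness=n)
    print(f"n = {n:.3f}  ->  Q ~ {q:.0f} m^3/s")
```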
Abstract:
This article extends the existing discussion in the literature on probabilistic inference and decision making with respect to continuous hypotheses, which are prevalent in forensic toxicology. As its main aim, this research investigates the properties of a widely followed approach for quantifying the level of toxic substances in blood samples and compares this procedure with a Bayesian probabilistic approach. As an example, attention is confined to the presence of toxic substances, such as THC, in blood from car drivers. In this context, the interpretation of results from laboratory analyses needs to take into account legal requirements for establishing the 'presence' of target substances in blood. In the first part, the performance of the proposed Bayesian model for the estimation of an unknown parameter (here, the amount of a toxic substance) is illustrated and compared with the currently used method. In the second part, the model is used to approach, in a rational way, the decision component of the problem, that is, judicial questions of the kind 'Is the quantity of THC measured in the blood over the legal threshold of 1.5 μg/L?'. This is illustrated through a practical example.
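A minimal Python sketch of the decision question, assuming a Gaussian measurement model with a locally flat prior so that the posterior for the true concentration is approximately Normal(y, sigma^2); the measurement and uncertainty values are illustrative, and only the 1.5 μg/L threshold is quoted from the abstract.

```python
import math

def normal_cdf(z: float) -> float:
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

y = 1.7        # mean of replicate THC measurements (ug/L), illustrative
sigma = 0.15   # standard uncertainty of the measurement (ug/L), illustrative
threshold = 1.5

# With a locally flat prior, theta | data ~ Normal(y, sigma^2), so:
p_over = 1 - normal_cdf((threshold - y) / sigma)
print(f"P(theta > {threshold} ug/L | data) ~ {p_over:.3f}")
# A decision-theoretic step would compare this probability with a cut-off
# fixed by the losses attached to the two kinds of wrong decision.
```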
Abstract:
This research consisted of five laboratory experiments designed to address two objectives in an integrated analysis: (1) to discriminate between the symbol Stop Ahead warning sign and a small set of other signs (which included the word-legend Stop Ahead sign); and (2) to analyze sign detection, recognizability, and processing characteristics by drivers. A set of 16 signs was used in each of three experiments. A tachistoscope was used to display each sign image to a respondent for a brief interval in a controlled viewing experiment. The first experiment was designed to test detection of a sign in the driver's visual field; the second, the driver's ability to recognize a given sign in the visual field; and the third, the speed and accuracy of a driver's response to each sign as a command to perform a driving action. A fourth experiment tested the meanings drivers associated with an eight-sign subset of the 16 signs used in the first three experiments. A fifth experiment required all participants to select which (if any) signs they considered appropriate for use on two scale-model county road intersections. The conclusions are that word-legend Stop Ahead signs are more effective driver communication devices than symbol Stop Ahead signs; that it is helpful to drivers to have a word plate supplementing the symbol sign if a symbol sign is used; and that the guidance in the Manual on Uniform Traffic Control Devices on the placement of advance warning signs should not supplant engineering judgment in providing proper sign communication at an intersection.
Abstract:
The objective of this report is to provide Iowa county engineers and highway maintenance personnel with procedures that will allow them to efficiently and effectively interpret and repair or avoid landslides. The research provides an overview of basic slope stability analyses that can be used to diagnose the cause and effect associated with a slope failure. Field evidence for identifying active or potential slope stability problems is outlined. A survey of county engineers provided data for presenting a slope stability risk map for the state of Iowa. Areas of high risk are along the western border and in the southeastern portion of the state; these regions contain deep to moderately deep loess. The central portion of the state is a low-risk area where the surficial soils are glacial till or thin loess over till. In this region, landslides appear to occur predominantly in backslopes along deeply incised major rivers, such as the Des Moines River, or in foreslopes. The south-central portion of the state is an area of medium risk where failures are associated with steep backslopes and improperly compacted foreslopes. Soil shear strength data compiled from Iowa DOT and consulting engineers' files were correlated with geologic parent materials, and mean values of shear strength parameters and unit weights were computed for glacial till, friable loess, plastic loess and local alluvium. Statistical tests demonstrate that friction angles and unit weights differ significantly, but in some cases effective-stress cohesion intercept and undrained shear strength data do not; moreover, these two parameters show a high degree of variability. The shear strength and unit weight data are used in slope stability analyses for both drained and undrained conditions to generate curves that can be used for a preliminary evaluation of the relative stability of slopes in the four materials. Reconnaissance trips to over fifty active and repaired landslides in Iowa suggest that, in general, landslides in Iowa are relatively shallow [i.e., failure surfaces less than 6 ft (2 m) deep] and are either translational or shallow rotational. Two foreslope and two backslope failure case histories provide additional insights into slope stability problems and repairs in Iowa. These include the observation that embankment soils compacted to less than 95% relative density show a marked strength decrease compared with soils at or above that density. Foreslopes constructed of soils derived from shale exhibit loss of strength as a result of weathering. In some situations, multiple causes of instability can be discerned from back-analyses with the slope stability program XSTABL. In areas where the stratigraphy consists of loess over till or till over bedrock, the geologic contacts act as surfaces of groundwater accumulation that contribute to slope instability.
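As an example of the kind of preliminary check the report's stability curves support, here is a hedged Python sketch of the classic infinite-slope factor of safety for a dry, shallow translational failure; the soil parameters are placeholders, not the report's mean values.

```python
import math

def infinite_slope_fs(c_kpa, phi_deg, gamma_kn_m3, depth_m, beta_deg):
    """Factor of safety of an infinite slope, dry case:
    FS = [c + gamma*z*cos^2(beta)*tan(phi)] / [gamma*z*sin(beta)*cos(beta)]."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    normal_stress = gamma_kn_m3 * depth_m * math.cos(beta) ** 2
    shear_stress = gamma_kn_m3 * depth_m * math.sin(beta) * math.cos(beta)
    return (c_kpa + normal_stress * math.tan(phi)) / shear_stress

# Placeholder friable-loess parameters and a ~2 m failure surface, matching
# the shallow failures the reconnaissance observed. FS < ~1.3 would flag
# the slope for a more detailed analysis.
fs = infinite_slope_fs(c_kpa=5.0, phi_deg=28.0, gamma_kn_m3=18.0, depth_m=2.0, beta_deg=30.0)
print(f"FS = {fs:.2f}")
```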
Abstract:
This contract extension was granted to analyze data obtained in the original contract period at a level of detail neither called for in the original contract nor permitted by the time constraints of the original contract schedule. These further analyses focused on two primary questions: 1. What sources of variation can be isolated within the overall pattern of driver recognition errors reported previously for the 16 signs tested in Project HR-256? 2. Were there systematic relations among data on the placement of signs in a simulated signing exercise and data on the respondents' ability to detect the presence of a sign in a visual field, their ability to recognize quickly and correctly a sign shown to them, or the speed with which these same persons could respond to a sign for a driver decision?