997 results for Injury Prediction.


Relevance: 20.00%

Abstract:

The spared nerve injury (SNI) model mimics human neuropathic pain related to peripheral nerve injury and is based on an invasive but simple surgical procedure. Since its first description in 2000, it has been developed extensively. It produces robust, reliable and long-lasting neuropathic pain-like behaviour (allodynia and hyperalgesia) and allows injured and non-injured neuronal populations in the same spinal ganglion to be studied. In addition, variants of the SNI model have been developed in rats, mice and neonatal/young rodents, opening several possible angles of analysis. The purpose of this chapter is therefore to provide detailed guidance on the SNI model and its variants, highlighting the specificities of its surgery and behavioural testing.

Relevance: 20.00%

Abstract:

In this thesis, we study the use of prediction markets for technology assessment. We focus in particular on their ability to assess complex issues, the design constraints required for such applications, and their efficacy compared with traditional techniques. To achieve this, we followed a design science research paradigm, iteratively developing, instantiating, evaluating and refining the design of our artifacts. This allowed us to make multiple contributions, both practical and theoretical. We first showed that prediction markets are adequate for properly assessing complex issues. We also developed a typology of design factors and design propositions for using these markets in a technology assessment context. We then showed that they can solve some issues related to the R&D portfolio management process, and we proposed a roadmap for their implementation. Finally, by comparing the instantiation and results of a multi-criteria decision method and a prediction market, we showed that the latter is more efficient while offering similar results. We also proposed a framework for comparing forecasting methods, to identify requirements based on contingency factors. In conclusion, our research opens a new field of application for prediction markets and should help hasten their adoption by enterprises.
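
Prediction markets aggregate dispersed judgments into prices that can be read as probability estimates. As an illustration of the general mechanism only (not the market design used in the thesis), the sketch below implements a logarithmic market scoring rule (LMSR); the liquidity parameter `b`, the outcome labels and the trade size are assumptions.

```python
import math

def lmsr_cost(quantities, b):
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b):
    """Instantaneous prices, interpretable as probability estimates."""
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

def trade_cost(quantities, outcome, shares, b):
    """Cost of buying `shares` of one outcome at the current market state."""
    after = list(quantities)
    after[outcome] += shares
    return lmsr_cost(after, b) - lmsr_cost(quantities, b)

# Two-outcome market: "technology X reaches its milestone" vs. "it does not".
b = 100.0                       # liquidity parameter (assumed)
q = [0.0, 0.0]                  # outstanding shares per outcome
print(lmsr_prices(q, b))        # [0.5, 0.5] before any trades
cost = trade_cost(q, 0, 40, b)  # a trader buys 40 "yes" shares
q[0] += 40
print(round(cost, 2), [round(p, 3) for p in lmsr_prices(q, b)])
```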

Relevance: 20.00%

Abstract:

Prognosis after severe traumatic brain injury (TBI) is determined by the severity of the initial injury and by secondary cerebral damage, the main determinants of which are brain ischemia and oedema. Traumatic brain injury is a heterogeneous disease. Head CT scanning is essential for evaluating the initial type of injury and the severity of brain oedema. A standardised approach based on the prevention and treatment of secondary cerebral damage is the only effective therapeutic strategy for severe TBI. We review the classification, pathophysiology and treatment of secondary cerebral damage after severe TBI and discuss the management of intracranial hypertension, cerebral perfusion pressure and brain ischemia.
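
Cerebral perfusion pressure management rests on the standard relation CPP = MAP - ICP. A minimal sketch of that arithmetic, with purely illustrative (assumed) values:

```python
def cerebral_perfusion_pressure(map_mmhg, icp_mmhg):
    """CPP = mean arterial pressure - intracranial pressure (mmHg)."""
    return map_mmhg - icp_mmhg

# Illustrative values only: MAP 90 mmHg with ICP 25 mmHg gives CPP 65 mmHg,
# i.e. intracranial hypertension eroding the perfusion pressure reserve.
print(cerebral_perfusion_pressure(90, 25))
```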

Relevance: 20.00%

Abstract:

Antemortem demonstration of ischemia has proved elusive in head injury because regional CBF reductions may represent hypoperfusion appropriately coupled to hypometabolism. Fifteen patients underwent positron emission tomography within 24 hours of head injury to map cerebral blood flow (CBF), cerebral oxygen metabolism (CMRO2) and oxygen extraction fraction (OEF). We estimated the volume of ischemic brain (IBV) and used the standard deviation of the OEF distribution to estimate the efficiency of coupling between CBF and CMRO2. The IBV in patients was significantly higher than in controls (67 +/- 69 vs. 2 +/- 3 mL; P < 0.01). The coexistence of relative ischemia and hyperemia in some patients implies mismatching of perfusion to oxygen use. Whereas the saturation of jugular bulb blood (SjO2) correlated with the IBV (r = 0.8, P < 0.01), SjO2 values of 50% were only reached at an IBV of 170 +/- 63 mL (mean +/- 95% CI), which equates to 13 +/- 5% of the brain. Increases in IBV correlated with a poor Glasgow Outcome Score 6 months after injury (rho = -0.6, P < 0.05). These results suggest significant ischemia within the first day after head injury. The ischemic burden represented by this "traumatic penumbra" is poorly detected by bedside clinical monitors and has significant associations with outcome.
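
The coupling between flow and metabolism can be expressed through the Fick relation CMRO2 = CBF x CaO2 x OEF, so OEF = CMRO2 / (CBF x CaO2). The sketch below estimates an ischemic volume by counting voxels whose OEF exceeds a threshold; the threshold, arterial oxygen content, voxel size and synthetic maps are assumptions, not the definitions used in the study.

```python
import numpy as np

def oxygen_extraction_fraction(cmro2, cbf, cao2):
    """OEF from the Fick relation CMRO2 = CBF * CaO2 * OEF.

    cmro2: ml O2 / 100 g / min, cbf: ml / 100 g / min,
    cao2: ml O2 / ml blood (arterial oxygen content).
    """
    return cmro2 / (cbf * cao2)

def ischemic_volume_ml(cmro2_map, cbf_map, cao2, oef_threshold, voxel_ml):
    """Volume of voxels whose OEF exceeds an (assumed) ischemic threshold."""
    oef = oxygen_extraction_fraction(cmro2_map, cbf_map, cao2)
    return float(np.count_nonzero(oef > oef_threshold)) * voxel_ml

# Synthetic CBF and CMRO2 maps for illustration only.
rng = np.random.default_rng(0)
cbf = rng.normal(40.0, 10.0, size=10_000).clip(5.0)    # ml/100 g/min
cmro2 = rng.normal(2.5, 0.6, size=10_000).clip(0.5)    # ml O2/100 g/min
print(ischemic_volume_ml(cmro2, cbf, cao2=0.18,
                         oef_threshold=0.75, voxel_ml=0.05))
```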

Relevance: 20.00%

Abstract:

Soil CO2 emission shows high spatial variability because it depends strongly on soil properties. The purposes of this study were to (i) characterize the spatial variability of soil respiration and related properties, (ii) evaluate the accuracy of the ordinary kriging method and of sequential Gaussian simulation, and (iii) evaluate the uncertainty in predicting the spatial variability of soil CO2 emission and other properties using sequential Gaussian simulations. The study was conducted in a sugarcane area, using a regular sampling grid with 141 points, at which soil CO2 emission, soil temperature, air-filled pore space, soil organic matter and soil bulk density were evaluated. All variables showed a spatial dependence structure. Soil CO2 emission was positively correlated with organic matter (r = 0.25, p < 0.05) and air-filled pore space (r = 0.27, p < 0.01) and negatively correlated with soil bulk density (r = -0.41, p < 0.01). However, when the estimated spatial values were considered, air-filled pore space was the variable mainly responsible for the spatial characteristics of soil respiration, with a correlation of 0.26 (p < 0.01). For all variables, individual simulations reproduced the cumulative distribution functions and variograms better than ordinary kriging and E-type estimates. The greatest uncertainties in predicting soil CO2 emission were associated with the areas with the highest estimated values, which produced estimates from 0.18 to 1.85 t CO2 ha-1, depending on the scenario considered. Knowledge of the uncertainties associated with the different scenarios can be used in greenhouse gas inventories to provide conservative estimates of the potential emission of these gases.
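
Both ordinary kriging and sequential Gaussian simulation rest on a fitted variogram model. The sketch below shows the ordinary kriging step for a single target location; it is not the geostatistical software used in the study, and the spherical variogram parameters and observation values are synthetic assumptions.

```python
import numpy as np

def spherical_variogram(h, nugget, sill, range_):
    """Spherical variogram model (parameters assumed, not fitted to the study's data)."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / range_ - 0.5 * (h / range_) ** 3)
    return np.where(h >= range_, sill, np.where(h == 0.0, 0.0, g))

def ordinary_kriging(coords, values, target, gamma):
    """Ordinary kriging prediction and kriging variance at one target location."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))          # kriging system with unbiasedness row
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(coords - target, axis=1))
    w = np.linalg.solve(A, b)            # weights plus Lagrange multiplier
    return float(w[:n] @ values), float(w @ b)

# Toy grid of soil CO2 emission observations (synthetic values).
coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
values = np.array([2.1, 2.8, 1.9, 3.2])
gamma = lambda h: spherical_variogram(h, nugget=0.1, sill=0.6, range_=25.0)
print(ordinary_kriging(coords, values, np.array([5.0, 5.0]), gamma))
```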

Relevance: 20.00%

Abstract:

Substantial collective flow is observed in collisions between lead nuclei at the Large Hadron Collider (LHC), as evidenced by the azimuthal correlations in the transverse momentum distributions of the produced particles. Our calculations indicate that the global v1 flow, which at RHIC peaked at negative rapidities (the so-called third flow component or antiflow), will at the LHC turn toward forward rapidities (to the same side and direction as the projectile residue). This can potentially provide a sensitive barometer for estimating the pressure and transport properties of the quark-gluon plasma. Our calculations also take into account initial-state center-of-mass rapidity fluctuations and demonstrate that these are crucial for v1 simulations. To better study the transverse-momentum dependence of the flow, we suggest a new "symmetrized" v1S(pt) function, and we also propose a new method to disentangle the global v1 flow from the contribution generated by random fluctuations in the initial state. This will enhance the possibilities of studying collective global v1 flow both in the STAR Beam Energy Scan program and at the LHC.
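
Directed flow is extracted from the azimuthal distribution of produced particles relative to the event plane, v1 = <cos(phi - Psi)>. The sketch below is only a toy estimator on synthetic particles with an assumed linear v1(y); it is neither the hydrodynamic calculation nor the symmetrized v1S(pt) observable proposed in the paper.

```python
import numpy as np

def directed_flow_v1(phi, psi_rp):
    """Directed flow v1 = <cos(phi - Psi_RP)> over the particles passed in."""
    return float(np.mean(np.cos(phi - psi_rp)))

def v1_vs_rapidity(phi, y, psi_rp, y_edges):
    """v1 in rapidity bins, the basic observable behind the antiflow signal."""
    bins = np.digitize(y, y_edges) - 1
    return [directed_flow_v1(phi[bins == i], psi_rp)
            for i in range(len(y_edges) - 1)]

# Synthetic "event" with an assumed v1(y) = 0.02 * y modulation, sampled by
# acceptance-rejection from dN/dphi ~ 1 + 2 v1(y) cos(phi - Psi), Psi = 0.
rng = np.random.default_rng(1)
y = rng.uniform(-2.0, 2.0, 200_000)
phi = rng.uniform(0.0, 2.0 * np.pi, 200_000)
keep = rng.uniform(0.0, 1.0, 200_000) < 0.5 * (1.0 + 2.0 * 0.02 * y * np.cos(phi))
print(np.round(v1_vs_rapidity(phi[keep], y[keep], 0.0, np.linspace(-2, 2, 9)), 3))
```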

Relevance: 20.00%

Abstract:

The likelihood of significant infant exposure to drugs through breast milk is poorly defined, given the difficulty of conducting pharmacokinetic (PK) studies. Using fluoxetine (FX) as an example, we conducted a proof-of-principle study applying population PK (popPK) modeling and simulation to estimate drug exposure in infants through breast milk. We simulated data for 1,000 mother-infant pairs, conservatively assuming that FX clearance in an infant is 20% of the allometrically adjusted adult value. The model-generated estimate of the milk-to-plasma ratio for FX (mean: 0.59) was consistent with values reported in other studies. The median infant-to-mother ratio of FX steady-state plasma concentrations predicted by the simulation was 8.5%. Although the disposition of the active metabolite, norfluoxetine, could not be modeled, popPK-informed simulation may be valid for other drugs, particularly those without active metabolites, thereby providing a practical alternative to conventional PK studies for exposure risk assessment in this population.
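
The exposure estimate rests on simple building blocks: allometric scaling of clearance, steady-state concentration = dosing rate / clearance, and infant dose = maternal plasma concentration x milk-to-plasma ratio x milk intake. The sketch below strings those together with assumed illustrative parameters; they are not the fitted popPK estimates of the study, and oral bioavailability in the infant is ignored.

```python
def allometric_clearance(adult_cl_l_h, weight_kg, adult_weight_kg=70.0,
                         exponent=0.75, maturation_fraction=0.2):
    """Allometrically scaled clearance, reduced to a fraction of the adult
    value to reflect immature elimination (20 %, as assumed in the abstract)."""
    scaled = adult_cl_l_h * (weight_kg / adult_weight_kg) ** exponent
    return scaled * maturation_fraction

def steady_state_conc(dose_rate_mg_h, clearance_l_h):
    """Css = dosing rate / clearance."""
    return dose_rate_mg_h / clearance_l_h

# Assumed illustrative parameters (not the study's popPK estimates).
maternal_cl = 40.0                                           # L/h
maternal_css = steady_state_conc(20.0 / 24.0, maternal_cl)   # 20 mg/day dose
milk_to_plasma = 0.59               # ratio reported in the abstract
milk_intake_l_day = 0.15 * 4.0      # 150 mL/kg/day for a 4 kg infant
infant_dose_rate = maternal_css * milk_to_plasma * milk_intake_l_day / 24.0
infant_cl = allometric_clearance(maternal_cl, weight_kg=4.0)
infant_css = steady_state_conc(infant_dose_rate, infant_cl)
print(round(100.0 * infant_css / maternal_css, 1), "% infant-to-mother Css ratio")
```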

Relevance: 20.00%

Abstract:

Urinary indices are classically believed to allow differentiation of transient (or pre-renal) acute kidney injury (AKI) from persistent (or acute tubular necrosis) AKI. However, the data validating urinalysis in critically ill patients are weak. In the previous issue of Critical Care, Pons and colleagues demonstrate in a multicenter observational study that the fractional excretions of sodium and urea, as well as urine-to-plasma ratios, performed poorly as diagnostic tests to separate these entities. This study confirms the limited diagnostic and prognostic ability of urine testing. Together with other work, it raises more fundamental questions about the value, meaning and pathophysiologic validity of the pre-renal AKI paradigm and suggests that AKI (like all other forms of organ injury) is a continuum of injury that cannot be neatly divided into functional (pre-renal or transient) and structural (acute tubular necrosis or persistent) forms.
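
The indices in question are fractional excretions, e.g. FENa = (urine Na x plasma creatinine) / (plasma Na x urine creatinine) x 100. A minimal sketch of that calculation follows; the input values are illustrative and the cut-offs in the comment are the textbook ones, not thresholds from the study under discussion.

```python
def fractional_excretion(urine_x, plasma_x, urine_cr, plasma_cr):
    """Fractional excretion of a solute X, in percent:
    FEx = (U_x * P_cr) / (P_x * U_cr) * 100."""
    return 100.0 * (urine_x * plasma_cr) / (plasma_x * urine_cr)

# Illustrative values only (mmol/L for Na and urea, micromol/L for creatinine).
fena = fractional_excretion(urine_x=20.0, plasma_x=140.0,
                            urine_cr=8000.0, plasma_cr=120.0)
feurea = fractional_excretion(urine_x=250.0, plasma_x=10.0,
                              urine_cr=8000.0, plasma_cr=120.0)
print(round(fena, 2), round(feurea, 1))
# Textbook reading: FENa < 1 % or FEUrea < 35 % suggests a "pre-renal" state --
# exactly the dichotomy whose diagnostic value the commentary questions.
```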

Relevance: 20.00%

Abstract:

Background: A patient's chest pain raises concern about possible coronary heart disease (CHD). An easy-to-use clinical prediction rule has been derived from the TOPIC study in Lausanne. Our objective was to validate this clinical score for ruling out CHD in primary care patients with chest pain. Methods: This secondary analysis used data collected from a one-year follow-up cohort study of patients attending 76 GPs in Germany. Patients attending their GP with chest pain were questioned about their age, gender, duration of chest pain (1-60 min), sternal pain location, pain increase with exertion, absence of a tenderness point at palpation, cardiovascular risk factors, and personal history of cardiovascular disease. The area under the ROC curve, sensitivity and specificity of the Lausanne CHD score were calculated for patients with full data. Results: 1190 patients were included. Full data were available for 509 patients (42.8%). Missing data were not related to having CHD (p = 0.397) or to having a cardiovascular risk factor (p = 0.275). 76 patients (14.9%) were diagnosed with CHD. The prevalence of CHD was 68/344 (19.8%), 2/62 (3.2%) and 6/103 (5.8%) in the high-, intermediate- and low-risk categories, respectively. The area under the ROC curve was 72.9 (95% CI 66.8-78.9). Ruling out patients at low risk had a sensitivity of 92.1% (95% CI 83.0-96.7) and a specificity of 22.4% (95% CI 18.6-26.7). Conclusion: The Lausanne CHD score shows reasonably good sensitivity and can be used to rule out coronary events in patients with chest pain. Patients at risk of CHD for other, rarer reasons should nevertheless also be investigated.
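
The headline operating characteristics follow directly from the reported category counts: ruling out only the low-risk category misses 6 of the 76 CHD cases and clears 97 of the 433 patients without CHD. A quick check of that arithmetic:

```python
# Category counts reported in the abstract: (CHD cases, category size).
categories = {"high": (68, 344), "intermediate": (2, 62), "low": (6, 103)}

cases = sum(c for c, _ in categories.values())          # 76
non_cases = sum(n - c for c, n in categories.values())  # 433

# Rule-out strategy: only the low-risk category is classified as "no CHD".
missed_cases, low_n = categories["low"]
sensitivity = (cases - missed_cases) / cases            # 70/76  = 92.1 %
specificity = (low_n - missed_cases) / non_cases        # 97/433 = 22.4 %
print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
```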

Relevance: 20.00%

Abstract:

BACKGROUND: Chest pain can be caused by various conditions, with life-threatening cardiac disease being of greatest concern. Prediction scores to rule out coronary artery disease have been developed for use in emergency settings. We developed and validated a simple prediction rule for use in primary care. METHODS: We conducted a cross-sectional diagnostic study in 74 primary care practices in Germany. Primary care physicians recruited all consecutive patients who presented with chest pain (n = 1249) and recorded symptoms and findings for each patient (derivation cohort). An independent expert panel reviewed follow-up data obtained at six weeks and six months on symptoms, investigations, hospital admissions and medications to determine the presence or absence of coronary artery disease. Adjusted odds ratios of relevant variables were used to develop a prediction rule. We calculated measures of diagnostic accuracy for different cut-off values of the prediction score using data derived from another prospective primary care study (validation cohort). RESULTS: The prediction rule contained five determinants (age/sex, known vascular disease, patient assumes pain is of cardiac origin, pain is worse during exercise, and pain is not reproducible by palpation), with the score ranging from 0 to 5 points. The area under the receiver operating characteristic curve was 0.87 (95% confidence interval [CI] 0.83-0.91) for the derivation cohort and 0.90 (95% CI 0.87-0.93) for the validation cohort. The best overall discrimination was with a cut-off value of 3 (positive result 3-5 points; negative result ≤2 points), which had a sensitivity of 87.1% (95% CI 79.9%-94.2%) and a specificity of 80.8% (95% CI 77.6%-83.9%). INTERPRETATION: The prediction rule for coronary artery disease in primary care proved to be robust in the validation cohort. It can help to rule out coronary artery disease in patients presenting with chest pain in primary care.
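
With one point per determinant, the score can be computed directly from the five binary findings. The age/sex thresholds below (55 years or older for men, 65 or older for women) are the commonly published cut-offs for this rule and should be treated as an assumption here, since the abstract does not state them.

```python
def chest_pain_score(age, sex, known_vascular_disease,
                     patient_assumes_cardiac, worse_on_exercise,
                     reproducible_by_palpation):
    """Sum of the five determinants described in the abstract (0-5 points)."""
    age_sex_point = (sex == "m" and age >= 55) or (sex == "f" and age >= 65)
    return sum([
        age_sex_point,
        known_vascular_disease,
        patient_assumes_cardiac,
        worse_on_exercise,
        not reproducible_by_palpation,   # pain NOT reproducible scores a point
    ])

def cad_unlikely(score, cutoff=3):
    """Negative result at the published cut-off: score of 2 points or less."""
    return score < cutoff

score = chest_pain_score(age=62, sex="m", known_vascular_disease=False,
                         patient_assumes_cardiac=True, worse_on_exercise=False,
                         reproducible_by_palpation=True)
print(score, cad_unlikely(score))   # 2 points -> coronary disease unlikely
```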

Relevance: 20.00%

Abstract:

Soil information is needed for managing the agricultural environment. The aim of this study was to apply artificial neural networks (ANNs) to the prediction of soil classes, using orbital remote sensing products, terrain attributes derived from a digital elevation model and local geology information as data sources. This approach to digital soil mapping was evaluated in an area with a high degree of lithologic diversity in the Serra do Mar. The neural network simulator used in this study was JavaNNS, with the backpropagation learning algorithm. For soil class prediction, different combinations of the selected discriminant variables were tested: elevation, slope, aspect, curvature, plan curvature, profile curvature, topographic index, solar radiation, LS topographic factor, local geology information, and clay mineral indices, iron oxide indices and the normalized difference vegetation index (NDVI) derived from a Landsat-7 Enhanced Thematic Mapper Plus (ETM+) image. Among the tested sets, the best results were obtained when all discriminant variables were combined with geological information (overall accuracy 93.2-95.6%, Kappa index 0.924-0.951, for set 13). Excluding the variable profile curvature (set 12), overall accuracy ranged from 93.9 to 95.4% and the Kappa index from 0.932 to 0.948. The maps based on the neural network classifier were consistent and similar to conventional soil maps drawn for the study area, although with more spatial detail. The results show the potential of ANNs for soil class prediction in mountainous areas with lithological diversity.
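
The classification step can be illustrated with any backpropagation-trained multilayer perceptron. The sketch below uses scikit-learn's MLPClassifier standing in for JavaNNS, with synthetic placeholder data; the number of classes, features, hidden units and cross-validation scheme are all assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Columns stand for the discriminant variables listed above (elevation, slope,
# aspect, curvatures, topographic index, solar radiation, LS factor, geology,
# clay/iron indices, NDVI); the values here are synthetic placeholders.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 13))
y = rng.integers(0, 5, size=500)          # five hypothetical soil classes

# Backpropagation-trained multilayer perceptron with standardized inputs.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
)
scores = cross_val_score(model, X, y, cv=5)   # overall accuracy per fold
print(np.round(scores, 3))
# A Kappa index could be added via sklearn.metrics.cohen_kappa_score on a
# held-out split, mirroring the accuracy/Kappa pair reported in the abstract.
```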

Relevance: 20.00%

Abstract:

Visible and near infrared (vis-NIR) spectroscopy is widely used to estimate soil properties. The objective of this study was to evaluate the combined effect of moisture content (MC) and modeling algorithm on the prediction of soil organic carbon (SOC) and pH. Partial least squares (PLS) regression and an artificial neural network (ANN) for modeling SOC and pH at different MC levels were compared in terms of prediction performance. A total of 270 soil samples were used. Before spectral measurement, dry soil samples were weighed to determine the amount of water to be added by weight to achieve the specified gravimetric MC levels of 5, 10, 15, 20 and 25%. A fiber-optic vis-NIR spectrophotometer (350-2500 nm) was used to measure the spectra of the soil samples in diffuse reflectance mode. Spectra preprocessing and PLS regression were carried out using Unscrambler® software; Statistica® software was used for ANN modeling. The best prediction result for SOC was obtained with the ANN (RMSEP = 0.82% and RPD = 4.23) for soil samples at 25% MC. The best prediction results for pH were obtained with PLS for dry soil samples (RMSEP = 0.65 and RPD = 1.68) and for soil samples at 10% MC (RMSEP = 0.61 and RPD = 1.71). Whereas the ANN showed better performance for SOC prediction at all MC levels, PLS showed better predictive accuracy for pH at all MC levels except 25% MC. Therefore, based on the data set used in this study, the ANN is recommended for the analysis of SOC at all MC levels, whereas PLS is recommended for the analysis of pH at MC levels below 20%.
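
The evaluation metrics are straightforward to reproduce: RMSEP is the root mean square error on the validation set and RPD is the standard deviation of the reference values divided by RMSEP. A minimal PLS sketch on synthetic spectra follows, with scikit-learn standing in for Unscrambler®; the number of latent variables, the spectral resolution and the data themselves are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    diff = np.asarray(y_true) - np.asarray(y_pred).ravel()
    return float(np.sqrt(np.mean(diff ** 2)))

def rpd(y_true, y_pred):
    """Ratio of performance to deviation: SD of reference values / RMSEP."""
    return float(np.std(y_true, ddof=1) / rmsep(y_true, y_pred))

# Synthetic stand-in for 270 spectra and an SOC reference value per sample.
rng = np.random.default_rng(0)
spectra = rng.normal(size=(270, 500))
soc = spectra[:, :10].sum(axis=1) * 0.1 + rng.normal(scale=0.2, size=270)

X_cal, X_val, y_cal, y_val = train_test_split(spectra, soc, test_size=0.3,
                                              random_state=0)
pls = PLSRegression(n_components=8).fit(X_cal, y_cal)
y_hat = pls.predict(X_val)
print(f"RMSEP = {rmsep(y_val, y_hat):.2f}, RPD = {rpd(y_val, y_hat):.2f}")
```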