908 results for probability and reinforcement proportion
Abstract:
The saphenous vein is the conduit of choice in bypass graft procedures. Haemodynamic factors play a major role in the development of intimal hyperplasia (IH) and subsequent bypass failure. To evaluate the potential protective effect of external reinforcement against such failure, we developed an ex vivo model for the perfusion of segments of human saphenous veins under arterial shear stress. In veins submitted to pulsatile high pressure (mean pressure of 100 mmHg) for 3 or 7 days, the use of an external macroporous polyester mesh 1) prevented the dilatation of the vessel, 2) decreased the development of IH, 3) reduced the apoptosis of smooth muscle cells and the subsequent fibrosis of the media layer, and 4) prevented the remodelling of the extracellular matrix through the up-regulation of matrix metalloproteinases (MMP-2, MMP-9) and plasminogen activator type I. The data show that, in an experimental ex vivo setting, an external scaffold decreases IH and maintains the integrity of veins exposed to arterial pressure, via an increase in shear stress and a decrease in wall tension that likely contribute to triggering selective molecular and cellular changes.
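The link claimed between preventing dilatation and lowering wall tension is the law of Laplace for a thin-walled cylindrical vessel; the relation below is standard background, not something spelled out in the abstract.

```latex
% Law of Laplace for a thin-walled cylinder (standard background, not quoted from the study):
\[
  T = P\,r, \qquad \sigma = \frac{P\,r}{h},
\]
% where $T$ is circumferential wall tension, $\sigma$ wall stress, $P$ the transmural pressure,
% $r$ the vessel radius and $h$ the wall thickness. An external mesh that limits $r$ at a given
% arterial pressure $P$ therefore lowers the tension borne by the vein wall.
```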
Abstract:
This study aimed to develop a hip screening tool that combines relevant clinical risk factors (CRFs) and quantitative ultrasound (QUS) at the heel to determine the 10-yr probability of hip fractures in elderly women. The EPISEM database, comprising approximately 13,000 women 70 yr of age, was derived from two population-based white European cohorts in France and Switzerland. All women had baseline data on CRFs and a baseline measurement of the stiffness index (SI) derived from QUS at the heel. Women were followed prospectively to identify incident fractures. Multivariate analysis was performed to determine the CRFs that contributed significantly to hip fracture risk, and these were used to generate a CRF score. Gradients of risk (GR; RR/SD change) and areas under receiver operating characteristic curves (AUC) were calculated for the CRF score, SI, and a score combining both. The 10-yr probability of hip fracture was computed for the combined model. Three hundred seven hip fractures were observed over a mean follow-up of 3.2 yr. In addition to SI, significant CRFs for hip fracture were body mass index (BMI), history of fracture, an impaired chair test, history of a recent fall, current cigarette smoking, and diabetes mellitus. The average GR for hip fracture was 2.10 per SD with the combined SI + CRF score, compared with a GR of 1.77 with SI alone and of 1.52 with the CRF score alone. Thus, the use of CRFs enhanced the predictive value of SI alone. For example, in a woman 80 yr of age, the presence of two to four CRFs increased the probability of hip fracture from 16.9% to 26.6% and from 52.6% to 70.5% for SI Z-scores of +2 and -3, respectively. The combined use of CRFs and QUS SI is a promising tool to assess hip fracture probability in elderly women, especially when access to DXA is limited.
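A gradient of risk is a relative risk per standard deviation, so risk scales multiplicatively as GR^z for a score lying z SD toward the high-risk end. The short sketch below compares the three scores on that basis; only the GR values come from the abstract, the 2-SD case is a hypothetical example.

```python
# Minimal illustration of how a gradient of risk (GR, relative risk per SD) scales risk.
# The GR values come from the abstract; the 2-SD case is a hypothetical example.

def relative_risk(gr: float, z_sd: float) -> float:
    """Relative risk for a score lying z_sd standard deviations toward higher risk."""
    return gr ** z_sd

for label, gr in [("SI + CRF score", 2.10), ("SI alone", 1.77), ("CRF score alone", 1.52)]:
    print(f"{label}: relative risk at 2 SD = {relative_risk(gr, 2.0):.2f}")
```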
Abstract:
To permit the tracking of turbulent flow structures in an Eulerian frame from single-point measurements, we make use of a generalization of conventional two-dimensional quadrant analysis to three-dimensional octants. We characterize flow structures using the sequences of these octants and show how significance may be attached to particular sequences using statistical null models. We analyze an example experiment and show how a particular dominant flow structure can be identified from the conditional probability of octant sequences. The frequency of this structure corresponds to the dominant peak in the velocity spectra, and it exerts a high proportion of the total shear stress. We link this structure explicitly to the propensity for sediment entrainment and show that greater insight into sediment entrainment can be obtained by disaggregating those octants that occur within the identified macroturbulence structure from those that do not. Hence, this work goes beyond critiques of Reynolds stress approaches to bed load entrainment that highlight the importance of outward interactions, to identifying and prioritizing the quadrants/octants that define particular flow structures. Key Points: (1) a new method for analysing single-point velocity data is presented; (2) flow structures are identified by a sequence of flow states (termed octants); (3) the identified structure exerts high stresses and causes bed-load entrainment.
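As a rough illustration of the octant idea (a sketch, not the authors' implementation; the synthetic velocity records and helper names are assumptions), each sample of a single-point record can be assigned to one of eight sign-octants of (u', v', w'), and structures then appear as sequences of octants whose conditional probabilities can be compared against a null model.

```python
import numpy as np

# Sketch of octant classification for single-point velocity data (not the authors' code).

def octant_sequence(u, v, w):
    """Label each sample 0-7 from the signs of the fluctuations (u', v', w')."""
    u, v, w = (np.asarray(x, dtype=float) for x in (u, v, w))
    up, vp, wp = u - u.mean(), v - v.mean(), w - w.mean()  # fluctuations about the mean
    # Encode the sign pattern as a 3-bit integer: a set bit means a positive fluctuation.
    return ((up > 0).astype(int) << 2) | ((vp > 0).astype(int) << 1) | (wp > 0).astype(int)

def collapse_runs(labels):
    """Collapse consecutive repeats so only the sequence of distinct octants remains."""
    labels = np.asarray(labels)
    return labels[np.concatenate(([True], labels[1:] != labels[:-1]))]

# Synthetic (hypothetical) velocity records stand in for a measured time series.
rng = np.random.default_rng(0)
u, v, w = rng.normal(size=(3, 1000))
seq = collapse_runs(octant_sequence(u, v, w))

# Conditional probabilities of octant-to-octant transitions; significance would be judged
# against a statistical null model, as described in the abstract.
trans = np.zeros((8, 8), dtype=int)
np.add.at(trans, (seq[:-1], seq[1:]), 1)
cond_prob = trans / trans.sum(axis=1, keepdims=True)
```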
Abstract:
Background and aims: Few studies have examined whether subjective experiences during first cannabis use are related to other illicit drug (OID) use. This study investigated this topic. Methods: Baseline data from a representative sample of young Swiss men were obtained from an ongoing Cohort Study on Substance Use Risk Factors (N = 5753). Logistic regressions were performed to examine the relationships of cannabis use and of subjective experiences during first cannabis use with 15 OID. Results: Positive experiences increased the likelihood of using hallucinogens (hallucinogens, salvia divinorum, spice; p ≤ 0.015), stimulants (speed, ecstasy, cocaine, amphetamines/methamphetamines; p ≤ 0.006), and also poppers, research chemicals, GHB/GBL, and crystal meth (p ≤ 0.049). Sniffed drugs (poppers, solvents for sniffing) and "hard" drugs (heroin, ketamine, research chemicals, GHB/GBL, and crystal meth) were more likely to be used by participants who experienced negative feelings on first use of cannabis (p ≤ 0.034). Conclusion: Subjective feelings seemed to amplify the association of cannabis with OID. The risk increased for drugs with effects resembling the feelings experienced on first cannabis use. Negative experiences should also be a concern, as they were associated with increased risk of using the "hardest" illicit drugs.
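The analysis described is a set of logistic regressions; a minimal sketch of one such model is given below, with synthetic data and hypothetical variable names standing in for the cohort data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative sketch only: synthetic data and variable names, not the cohort's data.
# One logistic regression of an "other illicit drug" outcome on indicators of positive
# and negative subjective experiences during first cannabis use.

rng = np.random.default_rng(1)
n = 500
positive_first_use = rng.integers(0, 2, n)
negative_first_use = rng.integers(0, 2, n)
# Synthetic outcome: positive experiences raise the log-odds of later hallucinogen use.
logits = -2.0 + 1.0 * positive_first_use + 0.3 * negative_first_use
used_hallucinogens = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

X = sm.add_constant(pd.DataFrame({"positive_first_use": positive_first_use,
                                  "negative_first_use": negative_first_use}))
fit = sm.Logit(used_hallucinogens, X).fit(disp=0)
print(np.exp(fit.params))  # odds ratios for each predictor
```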
Abstract:
Adipose tissue (AT) is distributed as large differentiated masses and as smaller depots covering vessels and organs, as well as interspersed within them. The differences in the types and sizes of its cells make AT one of the most dispersed and complex organs. Lipid storage is partly shared by other tissues such as muscle and liver. We intended to obtain an approximate estimation of the size of the lipid reserves stored outside the main fat depots. Both male and female rats were made overweight by 4 weeks of feeding a cafeteria diet. Total lipid content was analyzed in brain, liver, gastrocnemius muscle, four white AT sites (subcutaneous, perigonadal, retroperitoneal and mesenteric), two brown AT sites (interscapular and perirenal), and in a pool of the rest of organs and tissues (after discarding gut contents). Organ lipid content was estimated and tabulated for each individual rat. Food intake was measured daily. There was a surprisingly high proportion of lipid not accounted for by the main macroscopic AT sites, even when brain, liver and the main brown AT sites were discounted. Muscle contained about 8% of body lipids, liver 1-1.4%, the four white AT sites 28-63% of body lipid, and the rest of the body (including muscle) 38-44%. There was a good correlation between AT lipid and body lipid, but lipid in "other organs" was also highly correlated with body lipid; brain lipid was not. Irrespective of dietary intake, accumulation of body fat was uniform both in the main lipid storage and handling organs (large masses of AT, but also liver and muscle) and in the "rest" of the tissues. These storage sites, in specialized (adipose) or non-specialized (liver, muscle) tissues, reacted in parallel to the hyperlipidic diet challenge. We postulate that body lipid stores are handled and regulated coordinately, with more centralized and global mechanisms than usually assumed.
Abstract:
In this study we used market settlement prices of European call options on stock index futures to extract the implied probability distribution function (PDF). The method produces a PDF of the returns of the underlying asset at the expiration date from the implied volatility smile. With this method, the assumption of a lognormal distribution (Black-Scholes model) is tested. The market view of the asset price dynamics can then be used for various purposes (hedging, speculation). We used the so-called smoothing approach for implied PDF extraction presented by Shimko (1993). In our analysis we obtained implied volatility smiles from index futures markets (S&P 500 and DAX indices) and standardized them. The method introduced by Breeden and Litzenberger (1978) was then used for PDF extraction. The results show significant deviations from the assumption of lognormal returns for S&P 500 options, while DAX options mostly fit the lognormal distribution. A subjective view of the PDF that deviates from the market-implied one can be used to form a strategy, as discussed in the last section.
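The workflow described (Shimko-style smoothing of the smile, then Breeden and Litzenberger's result that the risk-neutral density is the discounted second derivative of the call price with respect to strike, f(K) = e^{rT} ∂²C/∂K²) can be sketched as follows; the market snapshot, strike grid and quadratic smoothing are illustrative assumptions, not the paper's data or exact fitting procedure.

```python
import numpy as np
from scipy.stats import norm

# Sketch of implied-PDF extraction (hypothetical inputs, not the paper's data):
# 1) smooth implied vols across strikes, 2) map back to call prices (Black-76, as the
# underlying is a futures), 3) Breeden-Litzenberger: f(K) = exp(r*T) * d2C/dK2.

def black76_call(F, K, T, r, sigma):
    """European call on a futures under the Black (1976) model."""
    d1 = (np.log(F / K) + 0.5 * sigma**2 * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return np.exp(-r * T) * (F * norm.cdf(d1) - K * norm.cdf(d2))

# Hypothetical market snapshot: futures level, rate, maturity and an observed vol smile.
F, r, T = 4000.0, 0.02, 0.25
strikes_obs = np.array([3200.0, 3600.0, 4000.0, 4400.0, 4800.0])
vols_obs = np.array([0.28, 0.24, 0.21, 0.20, 0.205])

# Shimko (1993) fitted a smooth quadratic to the smile; a quadratic fit stands in for that here.
smile = np.polynomial.Polynomial.fit(strikes_obs, vols_obs, deg=2)

K = np.linspace(3000.0, 5200.0, 551)
C = black76_call(F, K, T, r, smile(K))

dK = K[1] - K[0]
pdf = np.exp(r * T) * np.gradient(np.gradient(C, dK), dK)  # risk-neutral density at expiry
print(pdf.sum() * dK)  # mass inside the strike grid; approaches 1 as the grid widens
```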
Abstract:
The solitary pulmonary nodule is a common radiographic finding that is frequently detected incidentally. The investigation of this entity remains complex, since the characteristics of benign and malignant processes overlap in the differential diagnosis. Currently, many strategies are available to evaluate solitary pulmonary nodules, with the main objective of characterizing benign lesions as well as possible, while avoiding exposing patients to the risks inherent in invasive methods and correctly detecting cases of lung cancer so that potentially curative treatment is not delayed. This first part of the study focuses on the epidemiology, the morphological evaluation, and the methods used to determine the likelihood of cancer in cases of an indeterminate solitary pulmonary nodule.
Abstract:
The speed of traveling fronts for a two-dimensional model of a delayed reaction-dispersal process is derived analytically and from simulations of molecular dynamics. We show that the one-dimensional (1D) and two-dimensional (2D) versions of a given kernel do not always yield the same speed. It is also shown that the speeds of time-delayed fronts may be higher than those predicted by the corresponding non-delayed models. This result is shown for systems with peaked dispersal kernels, which lead to ballistic transport.
Abstract:
The inferior colliculus is a primary relay for the processing of auditory information in the brainstem. The inferior colliculus is also part of the so-called brain aversion system, as animals learn to switch off the electrical stimulation of this structure. The purpose of the present study was to determine whether associative learning occurs between the aversion induced by electrical stimulation of the inferior colliculus and visual and auditory warning stimuli. Rats implanted with electrodes in the central nucleus of the inferior colliculus were placed inside an open field and thresholds for the escape response to electrical stimulation of the inferior colliculus were determined. The rats were then placed inside a shuttle-box and submitted to a two-way avoidance paradigm. Electrical stimulation of the inferior colliculus at the escape threshold (98.12 ± 6.15 µA, peak-to-peak) was used as negative reinforcement and light or tone as the warning stimulus. Each session consisted of 50 trials and was divided into two segments of 25 trials in order to determine the learning rate of the animals during the sessions. The rats learned to avoid the inferior colliculus stimulation when light was used as the warning stimulus (latencies of 13.25 ± 0.60 s and 8.63 ± 0.93 s and frequencies of 12.5 ± 2.04 and 19.62 ± 1.65 in the first and second halves of the sessions, respectively; P < 0.01 in both cases). No significant changes in latencies (14.75 ± 1.63 and 12.75 ± 1.44 s) or frequencies of responses (8.75 ± 1.20 and 11.25 ± 1.13) were seen when tone was used as the warning stimulus (P > 0.05 in both cases). Taken together, the present results suggest that rats learn to avoid inferior colliculus stimulation when light is used as the warning stimulus, but this learning does not occur when the neutral stimulus used is an acoustic one. Electrical stimulation of the inferior colliculus may disturb the transmission of the stimulus to be conditioned from the inferior colliculus to higher brain structures such as the amygdala.
Abstract:
We performed a quantitative analysis of the M and P cell mosaics of the common-marmoset retina. Ganglion cells were labeled retrogradely from optic nerve deposits of Biocytin. The labeling was visualized using horseradish peroxidase (HRP) histochemistry with 3,3'-diaminobenzidine as the chromogen. M and P cells were morphologically similar to those found in Old- and New-World primates. Measurements were performed on well-stained cells from 4 retinas of different animals. We analyzed separate mosaics for inner and outer M and P cells at increasing distances from the fovea (2.5-9 mm of eccentricity) to estimate cell density, proportion, and dendritic coverage. M cell density decreased towards the retinal periphery in all quadrants. M cell density was higher in the nasal quadrant than in other retinal regions at similar eccentricities, reaching about 740 cells/mm² at 2.5 mm of temporal eccentricity, and representing 8-14% of all ganglion cells. P cell density increased from peripheral to more central regions, reaching about 5540 cells/mm² at 2.5 mm of temporal eccentricity. P cells represented a smaller proportion of all ganglion cells in the nasal quadrant than in other quadrants, and their numbers increased towards central retinal regions. The M cell coverage factor ranged from 5 to 12, and the P cell coverage factor ranged from 1 to 3 in the nasal quadrant and from 5 to 12 in the other quadrants. These results show that central and peripheral retinal regions differ in terms of cell class proportions and dendritic coverage, and their properties do not result from simply scaling down cell density. Therefore, differences in functional properties between central and peripheral vision should take these distinct regional retinal characteristics into account.
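The coverage factors quoted are the product of local cell density and dendritic-field area; the arithmetic is sketched below with a hypothetical field diameter (only the density figure is taken from the abstract).

```python
import math

# Coverage factor = cell density (cells/mm^2) x dendritic-field area (mm^2 per cell).
# The 740 cells/mm^2 figure is from the abstract; the 100 um field diameter is hypothetical.

def coverage_factor(density_cells_per_mm2: float, field_diameter_um: float) -> float:
    radius_mm = (field_diameter_um / 1000.0) / 2.0
    return density_cells_per_mm2 * math.pi * radius_mm**2

print(coverage_factor(740.0, 100.0))  # ~5.8: each retinal point covered by ~6 dendritic fields
```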
Abstract:
The Feedback-Related Negativity (FRN) is thought to reflect the dopaminergic prediction error signal sent from subcortical areas to the ACC (i.e., a bottom-up signal). Two studies were conducted in order to test a new model of FRN generation, which includes direct modulating influences of the medial PFC (i.e., top-down signals) on the ACC at the time of the FRN. Study 1 examined the effects of one's sense of control (top-down) and of informative cues (bottom-up) on FRN measures. In Study 2, sense of control and instruction-based (top-down) and probability-based (bottom-up) expectations were manipulated to test the proposed model. The results suggest that any influences of the medial PFC on the activity of the ACC that occur in the context of incentive tasks are not direct. The FRN was shown to be sensitive to salient stimulus characteristics. The results of this dissertation partially support reinforcement learning theory, in that the FRN is a marker of the prediction error signal from subcortical areas. However, the pattern of results outlined here suggests that prediction errors are based on salient stimulus characteristics and are not reward specific. A second goal of this dissertation was to examine whether ACC activity, measured through the FRN, is altered in individuals at risk for problem-gambling behaviour (PG). Individuals in this group were more sensitive to the valence of the outcome in a gambling task than not-at-risk individuals, suggesting that gambling contexts increase the sensitivity of the reward system to outcome valence in individuals at risk for PG. Furthermore, at-risk participants showed an increased sensitivity to reward characteristics and a decreased response to loss outcomes. This contrasts with those not at risk, whose FRNs were sensitive to losses. As the results did not replicate previous research showing attenuated FRNs in pathological gamblers, it is likely that the size and timing of the FRN do not change gradually with increasing risk of maladaptive behaviour. Instead, changes in ACC activity reflected by the FRN can, in general, be observed only after behaviour becomes clinically maladaptive, or through comparison between different types of gain/loss outcomes.
Abstract:
We assess the predictive ability of three VPIN metrics on the basis of two highly volatile market events in China, and examine the association between VPIN and toxic-induced volatility through conditional probability analysis and multiple regression. We examine the dynamic relationship between VPIN and high-frequency liquidity using vector autoregression (VAR) models, Granger causality tests, and impulse response analysis. Our results suggest that Bulk Volume VPIN has the best risk-warning effect among the major VPIN metrics. VPIN is positively associated with market volatility induced by toxic information flow. Most importantly, we document a positive feedback effect between VPIN and high-frequency liquidity, in which a negative liquidity shock pushes up VPIN, which in turn leads to a further liquidity drain. Our study provides empirical evidence of an intrinsic game between informed traders and market makers when facing toxic information in the high-frequency trading world.
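The VAR, Granger-causality and impulse-response machinery referred to above can be sketched with statsmodels on synthetic series; the column names, lag settings and toy data-generating process below are assumptions, not the paper's specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Sketch of the VAR / Granger causality / impulse response workflow on synthetic data
# (stand-ins for bucketed VPIN and a high-frequency liquidity measure).

rng = np.random.default_rng(42)
n = 500
liquidity = rng.normal(size=n)
vpin = 0.3 + 0.1 * rng.normal(size=n) - 0.05 * np.roll(liquidity, 1)  # toy feedback channel
data = pd.DataFrame({"vpin": vpin[1:], "liquidity": liquidity[1:]})

model = VAR(data)
results = model.fit(2)  # a fixed small lag order, chosen only for this sketch

# Does liquidity Granger-cause VPIN, and vice versa?
print(results.test_causality("vpin", ["liquidity"], kind="f").summary())
print(results.test_causality("liquidity", ["vpin"], kind="f").summary())

# Impulse responses trace how a shock to one series propagates to the other.
irf = results.irf(10)
irf.plot(orth=True)
```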
Flippable Pairs and Subset Comparisons in Comparative Probability Orderings and Related Simple Games
Abstract:
We show that every additively representable comparative probability order on n atoms is determined by at least n - 1 binary subset comparisons. We show that there are many orders of this kind, not just the lexicographic order. These results provide answers to two questions of Fishburn et al. (2002). We also study the flip relation on the class of all comparative probability orders introduced by Maclagan. We generalise an important theorem of Fishburn, Pekeč and Reeds by showing that in any minimal set of comparisons that determine a comparative probability order, all comparisons are flippable. By calculating the characteristics of the flip relation for n = 6, we discover that the regions in the corresponding hyperplane arrangement can have no more than 13 faces and that there are 20 regions with 13 faces. All the neighbours of the 20 comparative probability orders which correspond to those regions are representable. Finally, we define a class of simple games with complete desirability relation whose strong desirability relation is acyclic, and we show that the flip relation carries all the information about these games. We show that for n = 6 these games are weighted majority games.
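For readers outside this literature, the central term can be unpacked as follows; this is a standard definition given as background, not a quotation from the paper.

```latex
% Standard background definition (not quoted from the paper).
A comparative probability order $\succeq$ on the subsets of $[n]=\{1,\dots,n\}$ is
\emph{additively representable} if there is a probability vector $(p_1,\dots,p_n)$ such that
\[
  A \succeq B \iff \sum_{i \in A} p_i \;\ge\; \sum_{i \in B} p_i .
\]
A binary subset comparison is a single statement of the form $A \succ B$; the first result
above says that no additively representable order on $n$ atoms is pinned down by fewer than
$n-1$ such comparisons.
```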
Abstract:
This paper builds on the assumption that countries behave in such a way as to improve, via their economic strength, the probability that they will attain the hegemonic position on the world stage. The quest for hegemony is modeled as a game, with countries being differentiated initially only by some endowment which yields a pollution-free flow of income. A country's level of pollution is assumed to be directly related to its economic strength, as measured by its level of production. Two types of countries are distinguished: richly-endowed countries, for whom the return on their endowment is greater than the return they can expect from winning the hegemony race, and poorly-endowed countries, who can expect a greater return from winning the race than from their endowment. We show that in a symmetric world of poorly-endowed countries the equilibrium level of emissions is larger than in a symmetric world of richly-endowed countries: the former, being less well endowed to begin with, try harder to win the race. In an asymmetric world composed of both types of countries, the poorly-endowed countries pollute more than the richly-endowed countries. Numerical simulations show that if the number of richly-endowed countries is increased while keeping the total number of countries constant, the equilibrium level of global emissions decreases; if the lot of the poorly-endowed countries is improved by increasing their initial endowment while keeping that of the richly-endowed countries constant, global pollution decreases; increasing the endowments of both types of countries in the same proportion, and hence increasing the average endowment in that proportion, decreases global pollution; and redistributing from the richly-endowed to the poorly-endowed while keeping the average endowment constant will in general result in an increase in the equilibrium level of global pollution.