48 results for Boundary value problems on manifolds
at Université de Lausanne, Switzerland
Abstract:
We introduce an algebraic operator framework to study discounted penalty functions in renewal risk models. For inter-arrival and claim size distributions with rational Laplace transform, the usual integral equation is transformed into a boundary value problem, which is solved by symbolic techniques. The factorization of the differential operator can be lifted to the level of boundary value problems, amounting to iteratively solving first-order problems. This leads to an explicit expression for the Gerber-Shiu function in terms of the penalty function.
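The lifting of the factorization to the level of boundary value problems can be sketched on a toy constant-coefficient example (illustrative only; the actual operators in the paper arise from the rational Laplace transforms of the inter-arrival and claim size distributions):

```latex
\begin{align*}
T &= (D-a)(D-b) = D^2 - (a+b)\,D + ab, \qquad D = \frac{d}{dx},\\[4pt]
Tu = f \;&\Longleftrightarrow\;
\begin{cases}
(D-a)\,v = f, & \text{first-order problem for } v,\\
(D-b)\,u = v, & \text{first-order problem for } u,
\end{cases}
\end{align*}
```

so that a second-order problem is solved by iterating two first-order ones, each inheriting appropriate boundary conditions.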
Abstract:
S100B is a prognostic factor for melanoma, as elevated levels correlate with disease progression and poor outcome. We determined its prognostic value based on updated information using serial determinations in stage IIb/III melanoma patients. 211 patients who participated in the EORTC 18952 trial, evaluating the efficacy of adjuvant intermediate doses of interferon α2b (IFN) versus observation, entered a corollary study. Over a period of 36 months, 918 serum samples were collected. The Cox time-dependent model was used to assess the prognostic value of the latest (most recent) S100B determination. At first measurement, 178 patients had S100B values <0.2 μg/l and 33 had values ≥0.2 μg/l. Within the first group, 61 patients later showed an increased S100B value (≥0.2 μg/l). An increased value of S100B, either initially or during follow-up, was associated with worse distant metastasis-free survival (DMFS); the hazard ratio (HR) of S100B ≥0.2 versus S100B <0.2 was 5.57 (95% confidence interval (CI) 3.81-8.16), P < 0.0001, after adjustment for stage, number of lymph nodes and sex. In stage IIb patients, the HR adjusted for sex was 2.14 (95% CI 0.71-6.42), whereas in stage III, the HR adjusted for stage, number of lymph nodes and sex was 6.76 (95% CI 4.50-10.16). Similar results were observed for overall survival (OS). Serial determination of S100B in stage IIb-III melanoma is a strong independent prognostic marker, even stronger than stage and number of positive lymph nodes. The prognostic impact of S100B ≥0.2 μg/l is more pronounced in stage III disease than in stage IIb.
Abstract:
INTRODUCTION: The influence of specific health problems on health-related quality of life (HRQoL) in childhood cancer survivors is unknown. We compared HRQoL between survivors of childhood cancer and their siblings, determined factors associated with HRQoL, and investigated the influence of chronic health problems on HRQoL. METHODS: Within the Swiss Childhood Cancer Survivor Study, we sent a questionnaire to all survivors (≥16 years) registered in the Swiss Childhood Cancer Registry who survived >5 years and were diagnosed between 1976 and 2005 at age <16 years. Siblings received similar questionnaires. We assessed HRQoL using the Short Form-36 (SF-36). Health problems from a standard questionnaire were classified into overweight, vision impairment, hearing, memory, digestive, musculoskeletal or neurological, and thyroid problems. RESULTS: The sample included 1,593 survivors and 695 siblings. Survivors scored significantly lower than siblings in physical function, role limitation, general health, and the Physical Component Summary (PCS). A lower PCS score was associated with a diagnosis of central nervous system tumor, retinoblastoma or bone tumor, having had surgery, cranio-spinal irradiation, or bone marrow transplantation. A lower Mental Component Summary score was associated with older age. All health problems decreased HRQoL on all scales. Most affected were survivors reporting memory problems and musculoskeletal or neurological problems. Health problems had the biggest impact on physical functioning, general health, and energy and vitality. CONCLUSIONS: In this study, we showed the negative impact of specific chronic health problems on survivors' HRQoL. IMPLICATIONS FOR CANCER SURVIVORS: Therapeutic preventive measures, risk-targeted follow-up, and interventions might help decrease health problems and, consequently, improve survivors' quality of life.
Abstract:
The investigation of perceptual and cognitive functions with non-invasive brain imaging methods critically depends on the careful selection of stimuli for use in experiments. For example, it must be verified that any observed effects follow from the parameter of interest (e.g. semantic category) rather than from other low-level physical features (e.g. luminance, or spectral properties); otherwise, interpretation of the results is confounded. Often, researchers circumvent this issue by including additional control conditions or tasks, both of which are flawed and also prolong experiments. Here, we present some new approaches for controlling classes of stimuli intended for use in cognitive neuroscience; these methods can, however, be readily extrapolated to other applications and stimulus modalities. Our approach comprises two levels. The first level aims at equalizing individual stimuli in terms of their mean luminance: each data point in a stimulus is adjusted relative to a standard value computed across the stimulus battery. The second level analyzes two populations of stimuli along their spectral properties (i.e. spatial frequency), using a dissimilarity metric equal to the root mean square of the distance between the two populations as a function of spatial frequency along the x- and y-dimensions of the image. Randomized permutations are then used to minimize, in a completely data-driven manner, the spectral differences between image sets. While another paper in this issue applies these methods to acoustic stimuli (Aeschlimann et al., Brain Topogr 2008), we illustrate the approach here in detail for complex visual stimuli.
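The two-level control described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code: the function names are hypothetical, the luminance adjustment is a simple additive shift to the battery-wide mean, and the dissimilarity is taken as the RMS distance between the mean 2D amplitude spectra of the two populations.

```python
import numpy as np

def equalize_mean_luminance(images):
    """Level 1 (sketch): shift each image so its mean luminance equals
    the mean computed across the whole stimulus battery."""
    target = np.mean([img.mean() for img in images])
    return [img + (target - img.mean()) for img in images]

def spectral_rms_distance(pop_a, pop_b):
    """Level 2 (sketch): RMS distance between the mean 2D amplitude
    spectra of two stimulus populations, over spatial frequencies
    along the x- and y-dimensions."""
    spec_a = np.mean([np.abs(np.fft.fft2(img)) for img in pop_a], axis=0)
    spec_b = np.mean([np.abs(np.fft.fft2(img)) for img in pop_b], axis=0)
    return np.sqrt(np.mean((spec_a - spec_b) ** 2))
```

In the data-driven step, one would repeatedly permute images between candidate sets and keep the assignment that minimizes `spectral_rms_distance`.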
Abstract:
Early consumption of full servings of alcohol and early experience of drunkenness have been linked with alcohol-related harm in adolescence, as well as with adult health and social problems. On the basis of secondary analysis of country-level prevalence data, the present study explored the current pattern of drinking and drunkenness among 15- and 16-year-old adolescents in 40 European and North American countries. Data from the 2006 Health Behaviour in School-aged Children survey and the European School Survey Project on Alcohol and other Drugs were used. The potential role of alcohol control and policy measures in explaining variance in drinking patterns across countries was also examined. Policy measures and data on adult consumption patterns were taken from the WHO Global Information System on Alcohol and Health, Eurostat, and the indicator of alcohol control policy strength developed by Brand DA, Saisana M, Rynn LA et al. [(2007) Comparative analysis of alcohol control policies in 30 countries. PLoS Med 4:e151]. We found a non-significant trend whereby higher prices and stronger alcohol controls were associated with a lower proportion of weekly drinking but a higher proportion of drunkenness. It is important that future research explores the causal relationships between alcohol policy measures and alcohol consumption patterns, to determine whether strict policies do in fact have a beneficial effect on drinking patterns or, rather, lead to rebellion and an increased prevalence of binge drinking.
Abstract:
Performing publicly has become increasingly important in a variety of professions. This condition is associated with performance anxiety in almost all performers. Whereas some performers successfully cope with this anxiety, for others it represents a major problem and even threatens their career. Musicians, and especially music students, were shown to be particularly affected by performance anxiety. Therefore, the goal of this PhD thesis was to gain a better understanding of performance anxiety in university music students. More precisely, the first part of this thesis aimed at increasing knowledge on the occurrence, the experience, and the management of performance anxiety (Article 1). The second part aimed at investigating the hypothesis that there is an underlying hyperventilation problem in musicians with a high level of anxiety before a performance. This hypothesis was addressed in two ways: firstly, by investigating the association between the negative affective dimension of music performance anxiety (MPA) and self-perceived physiological symptoms that are known to co-occur with hyperventilation (Article 2), and secondly, by analyzing this association on the physiological level before a private (audience-free) and a public performance (Article 3). Article 4 places some key variables of Article 3 in a larger context by jointly analyzing the phases before, during, and after performing. The main results of the self-report data show (a) that stage fright is experienced as a problem by one-third of the surveyed students, (b) that the students express a considerable need for more help to better cope with it, and (c) that there is a positive association between negative feelings of MPA and self-reported hyperventilation complaints before performing. This latter finding was confirmed on the physiological level by a tendency of particularly high performance-anxious musicians to hyperventilate.
Furthermore, the psycho-physiological activation increased from a private to a public performance, and was higher during the performances than before or after them. The physiological activation was largely independent of the MPA score. Finally, there was a low response coherence between the actual physiological activation and the self-reports of instantaneous anxiety, tension, and perceived physiological activation. Given the high proportion of music students who consider stage fright a problem, and given the need for more help to better cope with it, a better understanding of this phenomenon and its inclusion in the educational process is fundamental to preventing future occupational problems. On the physiological level, breathing exercises might be a good means to decrease - but also to increase - the arousal associated with a public performance, in order to meet the optimal level of arousal needed for a good performance.
Abstract:
X-ray is a technology used for numerous applications in the medical field. X-ray projection produces a 2-dimensional (2D) grey-level texture from a 3-dimensional (3D) object. Until now, no clear demonstration or correlation has positioned 2D texture analysis as a valid indirect evaluation of the 3D microarchitecture. TBS is a new texture parameter based on the measurement of the experimental variogram; it evaluates the variation between grey levels of the 2D image. The aim of this study was to evaluate the correlations between 3D bone microarchitecture parameters, evaluated from μCT reconstructions, and the TBS value calculated on 2D projected images. 30 dried human cadaveric vertebrae were acquired on a micro-scanner (eXplorer Locus, GE) at an isotropic resolution of 93 μm. 3D vertebral body models were used, with the following 3D microarchitecture parameters: bone volume fraction (BV/TV), trabecular thickness (TbTh), trabecular spacing (TbSp), trabecular number (TbN) and connectivity density (ConnD). 3D-to-2D projections were computed by applying the Beer-Lambert law at X-ray energies of 50, 100 and 150 keV. TBS was assessed on the 2D projected images. Correlations between TBS and the 3D microarchitecture parameters were evaluated using linear regression analysis. A paired t-test was used to assess the effect of X-ray energy on TBS. Multiple linear regressions (backward) were used to evaluate relationships between TBS and the 3D microarchitecture parameters using a bootstrap process. BV/TV of the sample ranged from 18.5 to 37.6%, with an average value of 28.8%. Correlation analysis showed that TBS was strongly correlated with ConnD (0.856 ≤ r ≤ 0.862; p < 0.001) and with TbN (0.805 ≤ r ≤ 0.810; p < 0.001), and negatively correlated with TbSp (−0.714 ≤ r ≤ −0.726; p < 0.001), regardless of X-ray energy. Results show that lower TBS values are related to "degraded" microarchitecture, with low ConnD, low TbN and high TbSp; the opposite is also true.
X-ray energy had no effect on TBS, nor on the correlations between TBS and the 3D microarchitecture parameters. In this study, we demonstrated that TBS was significantly correlated with the 3D microarchitecture parameters ConnD and TbN, and negatively correlated with TbSp, regardless of the X-ray energy used. This article is part of a Special Issue entitled ECTS 2011. Disclosure of interest: None declared.
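The 3D-to-2D projection step can be sketched as follows. This is an assumption-laden illustration, not the study's implementation: `project_beer_lambert` is a hypothetical helper, and in practice the attenuation coefficients would depend on the chosen X-ray energy (50, 100 or 150 keV).

```python
import numpy as np

def project_beer_lambert(mu_volume, dz=1.0, i0=1.0):
    """Project a 3D volume of linear attenuation coefficients onto a
    2D grey-level image via the Beer-Lambert law:
        I = I0 * exp(-integral of mu along the ray path).
    Rays are assumed parallel to the z-axis (axis 2); dz is the voxel
    thickness along that axis."""
    optical_depth = mu_volume.sum(axis=2) * dz  # discrete line integral
    return i0 * np.exp(-optical_depth)
```

TBS would then be computed from the experimental variogram of the resulting 2D grey-level image.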
Abstract:
We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in 2D polar coordinates. An important application of this method and its extensions will be the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh, which can be arbitrarily heterogeneous, consisting of two or more concentric rings representing the fluid in the center and the surrounding porous medium. The spatial discretization is based on a Chebyshev expansion in the radial direction and a Fourier expansion in the azimuthal direction, combined with a Runge-Kutta integration scheme for the time evolution. A domain decomposition method based on the method of characteristics is used to match the fluid-solid boundary conditions. This multi-domain approach allows for significant reductions in the number of grid points in the azimuthal direction for the inner grid domain, and thus for corresponding increases in the time step and enhancements of computational efficiency. The viability and accuracy of the proposed method have been rigorously tested and verified through comparisons with analytical solutions, as well as with results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. Finally, the proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is adequately handled.
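The azimuthal part of such a discretization rests on the standard Fourier spectral derivative, which can be sketched as follows. This is a minimal illustration of the principle, not the published solver; the full scheme couples it with a Chebyshev radial grid, Runge-Kutta time stepping, and characteristics-based domain matching.

```python
import numpy as np

def fourier_derivative(u):
    """Spectral derivative of a periodic signal sampled at n equispaced
    points on [0, 2*pi): differentiate by multiplying each Fourier mode
    by i*k in the frequency domain, then transform back."""
    n = len(u)
    k = np.fft.fftfreq(n, d=1.0 / n)  # integer wavenumbers 0..n/2-1, -n/2..-1
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
```

For smooth periodic fields this derivative converges spectrally, which is what makes coarse azimuthal grids viable in the inner domain.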
Abstract:
Rate of metabolism and body temperature were studied between -6°C and 38°C in the common pipistrelle bat Pipistrellus pipistrellus (Vespertilionidae), a European species lying close to the lower end of the mammalian size range (body mass 4.9±0.8 g, N=28). Individuals only occasionally maintained a normothermic body temperature, averaging 35.4±1.1°C (N=4), and often entered torpor during metabolic runs. The thermoneutral zone was found above 33°C, and basal rate of metabolism averaged 7.6±0.8 mL O2 h(-1) (N=28), which is 69% of the value predicted on the basis of body mass. Minimal wet thermal conductance was 161% of the expected value. During torpor, the rate of metabolism was related exponentially to body temperature, with a Q10 value of 2.57. Torpid bats showed intermittent ventilation, with the frequency of ventilatory cycles increasing exponentially with body temperature. Basal rate of metabolism (BMR) varied significantly with season and body temperature, but not with body mass; it was lower before the hibernation period than during the summer. The patterns observed are generally consistent with those exhibited by other vespertilionids of temperate regions. However, divergences occur with previous measurements on European pipistrelles, and the causes of the seasonal variation in BMR, which has only rarely been investigated among vespertilionids, remain to be examined.
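The exponential Q10 relation reported for torpid metabolism states that the rate changes by a factor of Q10 for every 10°C change in body temperature, i.e. R(T2) = R(T1) * Q10^((T2 - T1)/10). A minimal sketch using the reported Q10 = 2.57 (the function name is hypothetical):

```python
def q10_rate(rate_ref, t_ref, t_new, q10=2.57):
    """Metabolic rate at body temperature t_new, given the rate at t_ref,
    under the exponential Q10 relation. The default Q10 = 2.57 is the
    value reported for torpid P. pipistrellus."""
    return rate_ref * q10 ** ((t_new - t_ref) / 10.0)
```

For example, a 10°C rise in body temperature multiplies the torpid rate by 2.57, and a 10°C drop divides it by the same factor.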
Abstract:
The purpose of this paper is to examine CSR practices and their implementation in the context of French professional sports clubs. In doing so, it analyses the link between the governance of sports clubs and CSR, which is viewed as a component of governance expanded to stakeholders and contributing to the creation of shared value. Drawing on interview data with key stakeholders of four professional sports clubs (football and basketball) and on secondary material, the study sheds light on the determinants, the implementation, and the impact of CSR on the governance of the professional clubs under examination.
Abstract:
BACKGROUND: While reduction of DUP (duration of untreated psychosis) is a key goal in early intervention strategies, the predictive value of DUP on outcome has been questioned. We planned this study in order to explore the impact of three different definitions of "treatment initiation" on the predictive value of DUP on outcome in an early psychosis sample. METHODS: 221 early psychosis patients aged 18-35 were followed up prospectively over 36 months. DUP was measured using three definitions of treatment onset: initiation of antipsychotic medication (DUP1); engagement in a specialized programme (DUP2); and the combination of engagement in a specialized programme and adherence to medication (DUP3). RESULTS: 10% of patients never reached the criteria for DUP3 and therefore were never adequately treated over the 36-month period of care. While DUP1 and DUP2 had limited predictive value on outcome, DUP3, based on a more restrictive definition of treatment onset, was a better predictor of positive and negative symptoms, as well as of functional outcome, at 12, 24 and 36 months. Globally, DUP3 explained 2 to 5 times more of the variance than DUP1 and DUP2, with effect sizes falling in the medium range according to Cohen. CONCLUSIONS: The limited predictive value of DUP on outcome in previous studies may be linked to definitions that do not take adherence to treatment into account. While our results need replication, they suggest that efforts to reduce DUP should continue, aiming both at early detection and at the development of engagement strategies.
Abstract:
In his timely article, Cherniss offers his vision for the future of "Emotional Intelligence" (EI). However, his goal of clarifying the concept by distinguishing definitions from models, and his support for "Emotional and Social Competence" (ESC) models, will, in our opinion, not make the field advance. To be upfront, we agree that emotions are important for effective decision-making, leadership, performance and the like; however, at this time, EI and ESC have not yet demonstrated incremental validity over and above IQ and personality tests in meta-analyses (Harms & Credé, 2009; Van Rooy & Viswesvaran, 2004). If there is a future for EI, we see it in the ability model of Mayer, Salovey and associates (e.g., Mayer, Caruso, & Salovey, 2000), which detractors and supporters agree holds the most promise (Antonakis, Ashkanasy, & Dasborough, 2009; Zeidner, Roberts, & Matthews, 2008). With its use of quasi-objective scoring measures, the ability model grounds EI in existing frameworks of intelligence, thus differentiating itself from ESC models and their self-rated trait inventories. In fact, we do not see the value of ESC models: they overlap too much with current personality models to offer anything new for science and practice (Zeidner et al., 2008). In this commentary we raise three concerns with Cherniss's suggestions for ESC models: (1) there are important conceptual problems in both the definition of ESC and the distinction of ESC from EI; (2) Cherniss's interpretation of neuroscience findings as supporting the constructs of EI and ESC is outdated; and (3) his interpretation of the famous marshmallow experiment as indicating the existence of ESCs is flawed. Building on the promise of ability models, we conclude by providing suggestions to improve research in EI.
Abstract:
Indirect calorimetry based on respiratory exchange measurement has been successfully used since the beginning of the century to obtain an estimate of heat production (energy expenditure) in human subjects and animals. The errors inherent to this classical technique can stem from various sources: 1) the model of calculation and its assumptions, 2) the calorimetric factors used, 3) technical factors and 4) human factors. The physiological and biochemical factors influencing the interpretation of calorimetric data include a change in the size of the bicarbonate and urea pools and the accumulation or loss (via breath, urine or sweat) of intermediary metabolites (gluconeogenesis, ketogenesis). More recently, respiratory gas exchange data have been used to estimate substrate utilization rates in various physiological and metabolic situations (fasting, post-prandial state, etc.). It should be recalled that indirect calorimetry provides an index of overall substrate disappearance rates, which is incorrectly assumed to be equivalent to substrate "oxidation" rates. Unfortunately, there is no adequate gold standard to validate whole-body substrate "oxidation" rates; this contrasts with the "validation" of heat production by indirect calorimetry through the use of direct calorimetry under strict thermal equilibrium conditions. Tracer techniques using stable (or radioactive) isotopes represent an independent way of assessing substrate utilization rates. When carbohydrate metabolism is measured with both techniques, indirect calorimetry generally provides glucose "oxidation" rates consistent with those from isotopic tracers, but only when certain metabolic processes (such as gluconeogenesis and lipogenesis) are minimal and/or when the respiratory quotients are not at the extremes of the physiological range.
However, it is believed that the tracer techniques underestimate true glucose "oxidation" rates due to the failure to account for glycogenolysis in the tissue storing glucose, since this escapes the systemic circulation. A major advantage of isotopic techniques is that they are able to estimate (given certain assumptions) various metabolic processes (such as gluconeogenesis) in a noninvasive way. Furthermore, when, in addition to the 3 macronutrients, a fourth substrate is administered (such as ethanol), isotopic quantification of substrate "oxidation" allows one to eliminate the inherent assumptions made by indirect calorimetry. In conclusion, isotopic tracer techniques and indirect calorimetry should be considered complementary techniques, in particular since the tracer techniques require the measurement of carbon dioxide production obtained by indirect calorimetry. However, it should be kept in mind that the assessment of substrate oxidation by indirect calorimetry may involve large errors, in particular over short periods of time. By indirect calorimetry, energy expenditure (heat production) is calculated with substantially less error than substrate oxidation rates.
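As an illustration of how heat production is derived from respiratory gas exchange, the classical abbreviated Weir equation gives energy expenditure from O2 consumption and CO2 production. The abstract does not state which calorimetric factors the authors used; this sketch uses the standard Weir coefficients, and the function names are hypothetical.

```python
def weir_energy_expenditure(vo2_l_min, vco2_l_min):
    """Abbreviated Weir equation: energy expenditure (kcal/min) from
    oxygen consumption and carbon dioxide production (both in L/min).
    The choice of such calorimetric factors is one of the error
    sources listed in the text."""
    return 3.941 * vo2_l_min + 1.106 * vco2_l_min

def respiratory_quotient(vo2_l_min, vco2_l_min):
    """RQ = VCO2 / VO2. Values near the extremes of the physiological
    range flag conditions (e.g. net lipogenesis or gluconeogenesis)
    where substrate 'oxidation' estimates become unreliable."""
    return vco2_l_min / vo2_l_min
```

For resting values of, say, VO2 = 0.25 L/min and VCO2 = 0.20 L/min, this yields about 1.2 kcal/min at an RQ of 0.8.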