33 results for Maximizing
Abstract:
Animals can often coordinate their actions to achieve mutually beneficial outcomes. However, this can result in a social dilemma when uncertainty about the behavior of partners creates multiple fitness peaks. Strategies that minimize risk ("risk dominant") rather than maximize reward ("payoff dominant") are favored in economic models when individuals learn behaviors that increase their payoffs. Specifically, such strategies are shown to be "stochastically stable" (a refinement of evolutionary stability). Here, we extend the notion of stochastic stability to biological models of continuous phenotypes at a mutation-selection-drift balance. This allows us to make a unique prediction for long-term evolution in games with multiple equilibria. We show how genetic relatedness due to limited dispersal, scaled to account for local competition, can crucially affect the stochastically stable outcome of coordination games. We find that positive relatedness (weak local competition) increases the chance that the payoff-dominant strategy is stochastically stable, even when it is not risk dominant. Conversely, negative relatedness (strong local competition) increases the chance that strategies evolve that are neither payoff nor risk dominant. Extending our results to large multiplayer coordination games, we find that negative relatedness can create competition so extreme that the game effectively changes into a hawk-dove game and a stochastically stable polymorphism between the alternative strategies evolves. These results demonstrate the usefulness of stochastic stability in characterizing the long-term evolution of continuous phenotypes: the outcomes of multiplayer games can be reduced to the generic equilibria of two-player games, and the effect of spatial structure can be analyzed readily.
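For readers unfamiliar with the two dominance notions, the following is a minimal sketch (illustrative only, not the paper's continuous-phenotype model) of how payoff and risk dominance are checked in a symmetric two-player, two-strategy coordination game; the payoff values follow the classic stag hunt and are assumptions for illustration:

```python
# Symmetric 2x2 coordination game: payoff[i][j] is the row player's payoff
# when playing strategy i against strategy j (0 = "reward-seeking", 1 = "safe").
payoff = [[4.0, 0.0],
          [3.0, 3.0]]

a, b = payoff[0][0], payoff[0][1]
c, d = payoff[1][0], payoff[1][1]

# Both (0,0) and (1,1) are strict Nash equilibria when a > c and d > b.
assert a > c and d > b

# Payoff dominance: the equilibrium giving both players the higher payoff.
payoff_dominant = 0 if a > d else 1

# Risk dominance (Harsanyi-Selten): compare the deviation losses; for a
# symmetric game this reduces to comparing a - c with d - b.
risk_dominant = 0 if (a - c) > (d - b) else 1

print("payoff dominant:", payoff_dominant)  # 0 (the high-reward equilibrium)
print("risk dominant:", risk_dominant)      # 1 (the safe equilibrium)
```

With these numbers the two criteria disagree, which is exactly the tension between reward-maximizing and risk-minimizing strategies discussed in the abstract.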
Abstract:
Imatinib is the standard of care for patients with advanced metastatic gastrointestinal stromal tumors (GIST), and is also approved for adjuvant treatment in patients at substantial risk of relapse. Studies have shown that maximizing benefit from imatinib depends on long-term administration at recommended doses. Pharmacokinetic (PK) and pharmacodynamic factors, adherence, and drug-drug interactions can affect exposure to imatinib and impact clinical outcomes. This article reviews the relevance of these factors to imatinib's clinical activity and response in the context of what has been demonstrated in chronic myelogenous leukemia (CML), and in light of new data correlating imatinib exposure to response in patients with GIST. Because of the wide inter-patient variability in drug exposure with imatinib in both CML and GIST, blood level testing (BLT) may play a role in investigating instances of suboptimal response, unusually severe toxicities, drug-drug interactions, and suspected non-adherence. Published clinical data in CML and in GIST were considered, including data from a PK substudy of the B2222 trial correlating imatinib blood levels with clinical responses in patients with GIST. Imatinib trough plasma levels <1100 ng/mL were associated with lower rates of objective response and faster development of progressive disease in patients with GIST. These findings have been supported by other analyses correlating free imatinib (unbound) levels with response. These results suggest a future application for imatinib BLT in predicting and optimizing therapeutic response. Nevertheless, early estimates of threshold imatinib blood levels must be confirmed prospectively in future studies and elaborated for different patient subgroups.
Abstract:
New directly acting antivirals (DAAs) that inhibit hepatitis C virus (HCV) replication are increasingly used for the treatment of chronic hepatitis C. A marked pharmacokinetic variability and a high potential for drug-drug interactions between DAAs and numerous drug classes have been identified. In addition, ribavirin (RBV), commonly associated with hemolytic anemia, often requires dose adjustment, advocating for therapeutic drug monitoring (TDM) in patients under combined antiviral therapy. However, an assay for the simultaneous analysis of RBV and DAAs constitutes an analytical challenge because of the large differences in polarity among these drugs, ranging from hydrophilic (RBV) to highly lipophilic (telaprevir [TVR]). Moreover, TVR is characterized by erratic behavior on standard octadecyl-based reversed-phase column chromatography and must be separated from VRT-127394, its inactive C-21 epimer metabolite. We have developed a convenient assay employing simple plasma protein precipitation, followed by high-performance liquid chromatography coupled to tandem mass spectrometry (HPLC-MS/MS) for the simultaneous determination of levels of RBV, boceprevir, and TVR, as well as its metabolite VRT-127394, in plasma. This new, simple, rapid, and robust HPLC-MS/MS assay offers an efficient method of real-time TDM aimed at maximizing efficacy while minimizing the toxicity of antiviral therapy.
Abstract:
PURPOSE: Extensive multilobar cortical dysplasia in infants commonly presents with catastrophic epilepsy and poses a therapeutic challenge with respect to control of epilepsy, brain development, and psychosocial outcome. Experience with surgical treatment of these lesions is limited and often not very encouraging, and surgery carries a higher operative risk than in older children and adults. METHODS: Two infants were evaluated for surgical control of catastrophic epilepsy present since birth, accompanied by a significant psychomotor developmental delay. Magnetic resonance imaging showed multilobar cortical dysplasia (temporoparietooccipital) with a good electroclinical correlation. They were treated with a temporal lobectomy and posterior (parietooccipital) disconnection. RESULTS: Both infants had an excellent postoperative recovery and, at follow-up evaluation (1.5 and 3.5 years), had complete seizure control with a definite "catch-up" in both motor and cognitive development. No long-term complications have been detected to date. CONCLUSIONS: The incorporation of disconnective techniques in the surgery for extensive multilobar cortical dysplasia in infants has made it possible to achieve excellent seizure outcomes by maximizing the extent of surgical treatment to include the entire epileptogenic zone. These techniques decrease perioperative morbidity and, we believe, would decrease the potential for long-term complications associated with large brain excisions.
Abstract:
We characterize the value function of maximizing the total discounted utility of dividend payments for a compound Poisson insurance risk model when strictly positive transaction costs are included, leading to an impulse control problem. We illustrate that well-known simple strategies can be optimal in the case of exponential claim amounts. Finally, we develop a numerical procedure to deal with general claim amount distributions.
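One common way to write the value function described here, as a sketch with assumed notation (x is the initial surplus, the supremum runs over admissible impulse strategies with payment times τ_i and gross payment amounts ζ_i, k > 0 is the fixed transaction cost, u the utility function, δ the discount rate, and T the time of ruin):

```latex
V(x) \;=\; \sup_{\{(\tau_i,\,\zeta_i)\}}
\mathbb{E}_x\!\left[\, \sum_{i\,:\,\tau_i < T} e^{-\delta \tau_i}\,
u\!\left(\zeta_i - k\right) \right]
```

The fixed cost k makes continuous dividend payment suboptimal, which is why the problem takes the form of an impulse control problem with discrete payment times.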
Abstract:
The results of numerous economic games suggest that humans behave more cooperatively than would be expected if they were maximizing selfish interests. It has been argued that this is because individuals gain satisfaction from the success of others, and that such prosocial preferences require a novel evolutionary explanation. However, in previous games, imperfect behavior would automatically lead to an increase in cooperation, making it impossible to decouple any form of mistake or error from prosocial cooperative decisions. Here we empirically test between these alternatives by decoupling imperfect behavior from prosocial preferences in modified versions of the public goods game, in which individuals would maximize their selfish gain by completely (100%) cooperating. We found that, although this led to higher levels of cooperation, it did not lead to full cooperation, and individuals still perceived their group mates as competitors. This is inconsistent with either selfish or prosocial preferences, suggesting that the most parsimonious explanation is imperfect behavior triggered by psychological drives that can prevent both complete defection and complete cooperation. More generally, our results illustrate the caution that must be exercised when interpreting the evolutionary implications of economic experiments, especially the absolute level of cooperation in a particular treatment.
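As a concrete illustration of the design described above, here is a minimal sketch of a linear public goods game in which the multiplier is chosen so that full contribution maximizes each player's own earnings; the parameter values are hypothetical, not those of the experiment:

```python
def payoff(contributions, endowment=20.0, multiplier=6.0):
    """Earnings of each player in a linear public goods game.

    Each player keeps (endowment - own contribution) and receives an equal
    share of the multiplied public pot.
    """
    n = len(contributions)
    pot = multiplier * sum(contributions)
    return [endowment - c + pot / n for c in contributions]

# Standard game: multiplier < group size, so the marginal return per unit
# contributed (1.5 / 4 = 0.375 here) is below 1 and contributing nothing
# maximizes selfish earnings.
print(payoff([0, 0, 0, 0], multiplier=1.5))

# Modified game: multiplier > group size (marginal return 6 / 4 = 1.5 > 1),
# so full contribution maximizes selfish earnings as well as the group total.
print(payoff([20, 20, 20, 20], multiplier=6.0))
```

In such a modified game, any contribution below 100% reduces a player's own earnings, so deviations from full cooperation can be attributed to imperfect behavior rather than to selfish payoff maximization.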
Abstract:
Cross-hole radar tomography is a useful tool for mapping shallow subsurface electrical properties, namely dielectric permittivity and electrical conductivity. Common practice is to invert cross-hole radar data with ray-based tomographic algorithms using first-arrival traveltimes and first-cycle amplitudes. However, the resolution of conventional ray-based inversion schemes for cross-hole ground-penetrating radar (GPR) is limited because only a fraction of the information contained in the radar data is used. The resolution can be improved significantly by using a full-waveform inversion that considers the entire waveform, or significant parts thereof. A recently developed 2D time-domain vectorial full-waveform crosshole radar inversion code has been modified in the present study to allow optimized acquisition setups that reduce the acquisition time and computational costs significantly. This is achieved by minimizing the number of transmitter points and maximizing the number of receiver positions. The improved algorithm was employed to invert cross-hole GPR data acquired within a gravel aquifer (4-10 m depth) in the Thur valley, Switzerland. The simulated traces of the final model obtained by the full-waveform inversion fit the observed traces very well in the lower part of the section and reasonably well in the upper part of the section. Compared to the ray-based inversion, the results from the full-waveform inversion show significantly higher-resolution images. Borehole logs were acquired on either side, 2.5 m away from the cross-hole plane. There is a good correspondence between the conductivity tomograms and the natural gamma logs at the boundary of the gravel layer and the underlying lacustrine clay deposits. Using existing petrophysical models, the inversion results and neutron-neutron logs were converted to porosity. Without any additional calibration, the values obtained from the converted neutron-neutron logs and the permittivity results are very close, and similar vertical variations can be observed. In both cases, the full-waveform inversion provides additional information about the subsurface. Due to the presence of the water table and the associated refracted/reflected waves, the upper traces are not well fitted, and the upper 2 m of the permittivity and conductivity tomograms are not reliably reconstructed because the unsaturated zone is not incorporated into the inversion domain.
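In generic form (a sketch of the standard least-squares formulation, not necessarily the exact objective of the cited code), full-waveform inversion seeks the permittivity field ε(x) and conductivity field σ(x) that minimize the misfit between observed and simulated traces over the whole recorded waveform:

```latex
\min_{\varepsilon(\mathbf{x}),\,\sigma(\mathbf{x})}\;
\sum_{s}\sum_{r}\int_{0}^{T}
\left| E^{\mathrm{obs}}_{s,r}(t) - E^{\mathrm{sim}}_{s,r}(t;\varepsilon,\sigma) \right|^{2}\,\mathrm{d}t
```

Here s and r index transmitter and receiver positions and E denotes the recorded trace; ray-based schemes instead fit only the first-arrival time and first-cycle amplitude of each trace, which is why they use far less of the data.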
Abstract:
Background: Exposure to fine particulate matter air pollutants (PM2.5) affects heart rate variability parameters and levels of serum proteins associated with inflammation, hemostasis and thrombosis. This study investigated the sources potentially responsible for cardiovascular and hematological effects in highway patrol troopers. Results: Nine healthy young non-smoking male troopers working from 3 PM to midnight were studied on four consecutive days during their shift and the following night. Sources of in-vehicle PM2.5 were identified with variance-maximizing rotational principal factor analysis of PM2.5 components and associated pollutants. Two source models were calculated. The in-vehicle PM2.5 sources identified were 1) crustal material, 2) wear of steel automotive components, 3) gasoline combustion, and 4) speed-changing traffic with engine emissions and brake wear. In one model, sources 1 and 2 collapsed into a single source. Source factor scores were compared to cardiac and blood parameters measured ten and fifteen hours, respectively, after each shift. The "speed-change" factor was significantly associated with mean heart cycle length (MCL, +7% per standard deviation increase in the factor score), heart rate variability (+16%), supraventricular ectopic beats (+39%), % neutrophils (+7%), % lymphocytes (-10%), mean red blood cell volume (MCV, +1%), von Willebrand Factor (+9%), blood urea nitrogen (+7%), and protein C (-11%). The "crustal" factor (but not the "collapsed" source) was associated with MCL (+3%) and serum uric acid concentrations (+5%). Controlling for potential confounders had little influence on the effect estimates. Conclusion: PM2.5 originating from speed-changing traffic modulates the autonomic control of the heart rhythm, increases the frequency of premature supraventricular beats and elicits proinflammatory and pro-thrombotic responses in healthy young men.
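A minimal sketch of the source-identification step (variance-maximizing, i.e. varimax-rotated, factor analysis of the pollutant matrix) using scikit-learn; the placeholder data, variable names and number of factors are assumptions for illustration only:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# X: one row per sampled shift, one column per measured PM2.5 component or
# associated pollutant (hypothetical placeholder data).
rng = np.random.default_rng(0)
X = rng.lognormal(size=(36, 12))

# Standardize, then fit a factor model with a varimax rotation so that each
# factor loads strongly on a small subset of species (a "source profile").
Xz = StandardScaler().fit_transform(X)
fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
scores = fa.fit_transform(Xz)      # factor scores per shift (source strengths)
loadings = fa.components_          # loadings: factors x species

# The factor scores can then be related to the health endpoints (e.g. mean
# heart cycle length, ectopic beat counts) measured after each shift.
print(loadings.shape, scores.shape)
```

The rotation is what makes the factors interpretable as pollution sources: each rotated factor concentrates its variance on a few chemical species characteristic of one source.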
Abstract:
Structure of the thesis: In the first article, I focus on the context in which the Homo Economicus was constructed - i.e., the conception of economic actors as fully rational, informed, egocentric, and profit-maximizing. I argue that the Homo Economicus theory was developed in a specific societal context with specific (partly tacit) values and norms. These norms have implicitly influenced the behavior of economic actors and have framed the interpretation of the Homo Economicus. Different factors, however, have weakened this implicit influence of the broader societal values and norms on economic actors. The result is an unbridled interpretation and application of the values and norms of the Homo Economicus in the business environment, and perhaps also in the broader society. In the second article, I show that the morality of many economic actors relies on isomorphism, i.e., the attempt to fit into the group by adopting the moral norms surrounding them. In consequence, if the norms prevailing in a specific group or context (such as a specific region or a specific industry) change, it can be expected that actors with an 'isomorphism morality' will also adapt their ethical thinking and their behavior, for the 'better' or for the 'worse'. The article further describes the process through which corporations could emancipate themselves from the ethical norms prevailing in the broader society, and therefore develop an institution with specific norms and values. These norms mainly rely on mainstream business theories praising the economic actor's self-interest and neglecting moral reasoning. Moreover, because of isomorphism morality, many economic actors have changed their perception of ethics, and have abandoned the values prevailing in the broader society in order to adopt those of the economic theory. Finally, isomorphism morality also implies that these economic actors will change their morality again if the institutional context changes. The third article highlights the role and responsibility of business scholars in promoting a systematic reflection and self-critique of the business system, and develops alternative models to fill the moral void of the business institution and address its inherent legitimacy crisis. Indeed, the current business institution relies on assumptions such as scientific neutrality and specialization, which seem at least partly challenged by two factors. First, self-fulfilling prophecies provide scholars with an important (even if sometimes undesired) normative influence over practical life. Second, the increasing complexity of today's (socio-political) world and the interactions between the different elements constituting our society question the strong specialization of science. For instance, economic theories are not unrelated to psychology or sociology, and economic actors influence socio-political structures and processes, e.g., through lobbying (Dobbs, 2006; Rondinelli, 2002), or through marketing, which changes not only the way we consume, but more generally tries to instill a specific lifestyle (Cova, 2004; M. K. Hogg & Michell, 1996; McCracken, 1988; Muniz & O'Guinn, 2001). In consequence, business scholars are key actors in shaping both tomorrow's economic world and its broader context. A greater awareness of this influence might be a first step toward an increased feeling of civic responsibility and accountability for the models and theories developed or taught in business schools.
Abstract:
The molecular and isotopic chemistry of organic residues from archaeological potsherds was used to obtain further insight into the dietary trends and economies at the Constance lake-shore Neolithic settlements. The archaeological organic residues from the Early Late Neolithic (3922-3902 BC) site of Hornstaad-Hornle IA/Germany are, at present, the oldest archaeological samples analysed at the Institute of Mineralogy and Geochemistry of the University of Lausanne. The approach includes 13C/12C and 15N/14N ratios of the bulk organic residues, fatty acid distributions and 13C/12C ratios of individual fatty acids. The results are compared with those obtained from the Neolithic (3384-3370 BC) settlement of Arbon Bleiche 3/Switzerland, more than 500 years younger, and with samples of modern vegetable oils and fat of animals that had been fed exclusively on C3 forage grasses. The overall fatty acid composition (C9 to C24 range, with maxima at C14 and C16), the bulk 13C/12C and 15N/14N ratios (delta13C, delta15N) and the 13C/12C ratios of palmitic (C16:0), stearic (C18:0) and oleic (C18:1) acids of the organic residues indicate that most of the studied samples (25 of 47 samples and 5 of 41 samples in the delta13C18:0 vs. delta13C16:0 and delta13C18:0 vs. delta13C18:1 diagrams, respectively) from the Hornstaad-Hornle IA and Arbon Bleiche 3 sherds contain fat residues of pre-industrial ruminant milk and of young suckling calf/lamb adipose. These data provide direct proof of milk and meat (mainly from young suckling calves) consumption and of farming practices for sustainable dairying in Neolithic villages in central Europe around 4000 BC.
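For reference, the delta notation used above reports an isotope ratio relative to an international standard (VPDB for carbon, atmospheric N2 for nitrogen), expressed in per mil:

```latex
\delta^{13}\mathrm{C} \;=\;
\left( \frac{\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\mathrm{sample}}}
            {\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\mathrm{standard}}} - 1 \right)
\times 1000
```

δ15N is defined analogously from the 15N/14N ratio; compound-specific values such as δ13C18:0 apply the same definition to a single fatty acid rather than to the bulk residue.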
Abstract:
The institutional regimes framework has previously been applied to the institutional conditions that support or hinder the sustainability of housing stocks. This resource-based approach identifies the actors across different sectors that have an interest in housing, how they use housing, the mechanisms affecting their use (public policy, use rights, contracts, etc.) and the effects of their uses on the sustainability of housing within the context of the built environment. The potential of the institutional regimes framework is explored here for its suitability to address the many considerations of housing resilience. By identifying all the goods and services offered by the resource 'housing stock', researchers and decision-makers could improve the resilience of housing by better accounting for the ecosystem services used by housing, decreasing the vulnerability of housing to disturbances, and maximizing recovery and reorganization following a disturbance. The institutional regimes framework is found to be a promising tool for addressing housing resilience. Further questions are raised for translating this conceptual framework into a practical application underpinned with empirical data.
Abstract:
Synthesis report: This thesis consists of three essays on optimal dividend strategies; each essay corresponds to a chapter. The first two essays were written in collaboration with Professors Hans Ulrich Gerber and Elias S. W. Shiu and have been published; see Gerber et al. (2006b) and Gerber et al. (2008). The third essay was written in collaboration with Professor Hans Ulrich Gerber. The problem of optimal dividend strategies goes back to de Finetti (1957). It is stated as follows: given the surplus of a company, determine the optimal strategy for distributing dividends. The criterion used is to maximize the sum of the discounted dividends paid to shareholders until the ruin of the company. Since de Finetti (1957), the problem has taken several forms and has been solved for different models. In the classical model of ruin theory, the problem was solved by Gerber (1969) and, more recently and using a different approach, by Azcue and Muler (2005) and Schmidli (2008). In the classical model, there is a continuous and constant inflow of money, while the outflows are random: they follow a jump process, namely a compound Poisson process. A good example of such a model is the surplus of an insurance company, where the inflows and outflows are the premiums and the claims, respectively. The first graph of Figure 1 illustrates an example. In this thesis, only barrier strategies are considered: whenever the surplus exceeds the barrier level b, the excess is distributed to shareholders as dividends. The second graph of Figure 1 shows the same surplus example when a barrier at level b is introduced, and the third graph of that figure shows the cumulative dividends. Chapter 1: "Maximizing dividends without bankruptcy". In this first essay, the optimal barriers are computed for different claim-amount distributions according to two criteria: I) the optimal barrier is computed using the usual criterion, which is to maximize the expectation of the discounted dividends until ruin; II) the optimal barrier is computed using a second criterion, which is to maximize the expectation of the difference between the discounted dividends until ruin and the deficit at ruin. This essay is inspired by Dickson and Waters (2004), whose idea is to have the shareholders bear the deficit at ruin. This is all the more relevant in the case of an insurance company, whose ruin must be avoided. In the example of Figure 1, the deficit at ruin is denoted R. Numerical examples allow us to compare the optimal barrier levels in situations I and II. This idea of adding a penalty at the time of ruin was generalized in Gerber et al. (2006a). Chapter 2: "Methods for estimating the optimal dividend barrier and the probability of ruin". In this second essay, because in practice one never has complete information about the claim-amount distribution, it is assumed that only its first moments are known. The essay develops and examines methods for approximating, in this situation, the optimal barrier level according to the usual criterion (case I above). The "de Vylder" and "diffusion" approximations are explained and examined; some of these approximations use the first two, three or four moments. Numerical examples allow us to compare the approximations of the optimal barrier level, not only with the exact values but also with each other. Chapter 3: "Optimal dividends with incomplete information". In this third and final essay, we again consider methods for approximating the optimal barrier level when only the first moments of the jump-amount distribution are known. This time, the dual model is considered. As in the classical model, there is a continuous flow in one direction and a jump process in the other; unlike the classical model, the gains follow a compound Poisson process while the losses are constant and continuous; see Figure 2. Such a model would be suitable for a pension fund or a company specializing in discoveries or inventions. Both the "de Vylder" and "diffusion" approximations and the new "gamma" and "gamma process" approximations are explained and analyzed. These new approximations appear to give better results in some cases.
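A compact statement of the usual optimality criterion (case I above), written as a sketch with assumed notation (X(t) is the surplus under a dividend barrier at level b, D(t) the cumulative dividends paid, δ the force of interest, and T the time of ruin):

```latex
V(x;b) \;=\; \mathbb{E}_x\!\left[ \int_{0}^{T} e^{-\delta t}\,\mathrm{d}D(t) \right],
\qquad
b^{*} \;=\; \operatorname*{arg\,max}_{b \ge 0} \; V(x;b)
```

Criterion II instead maximizes the expectation of the discounted dividends minus the discounted deficit at ruin, i.e. V(x;b) is replaced by E_x[∫_0^T e^{-δt} dD(t) - e^{-δT}|X(T)|], where |X(T)| is the deficit at ruin (denoted R in Figure 1).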
Abstract:
Discussion on improving the power of genome-wide association studies to identify candidate variants and genes is generally centered on issues of maximizing sample size; less attention is given to the role of phenotype definition and ascertainment. The authors used genome-wide data from patients infected with human immunodeficiency virus type 1 (HIV-1) to assess whether differences in type of population (622 seroconverters vs. 636 seroprevalent subjects) or the number of measurements available for defining the phenotype resulted in differences in the effect sizes of associations between single nucleotide polymorphisms and the phenotype, HIV-1 viral load at set point. The effect estimate for the top 100 single nucleotide polymorphisms was 0.092 (95% confidence interval: 0.074, 0.110) log(10) viral load (log(10) copies of HIV-1 per mL of blood) greater in seroconverters than in seroprevalent subjects. The difference was even larger when the authors focused on chromosome 6 variants (0.153 log(10) viral load) or on variants that achieved genome-wide significance (0.232 log(10) viral load). The estimates of the genetic effects tended to be slightly larger when more viral load measurements were available, particularly among seroconverters and for variants that achieved genome-wide significance. Differences in phenotype definition and ascertainment may affect the estimated magnitude of genetic effects and should be considered in optimizing power for discovering new associations.
Abstract:
BACKGROUND: Regional rates of hospitalization for ambulatory care sensitive conditions (ACSC) are used to compare the availability and quality of ambulatory care, but the risk adjustment for population health status is often minimal. The objectives of the study were to examine the impact of more extensive risk adjustment on regional comparisons and to investigate the relationship between various area-level factors and the properly adjusted rates. METHODS: Our study is an observational study based on routine data of 2 million anonymized insured persons in 26 Swiss cantons followed over one or two years. A negative binomial regression was modeled with increasingly detailed information on health status (age and gender only, inpatient diagnoses, outpatient conditions inferred from dispensed drugs, and frequency of physician visits). Hospitalizations for ACSC were identified from principal diagnoses detecting 19 conditions, with an updated list of ICD-10 diagnostic codes. Co-morbidities and surgical procedures were used as exclusion criteria to improve the specificity of the detection of potentially avoidable hospitalizations (PAH). The impact of the adjustment approaches was measured by changes in the standardized ratios calculated with and without other data besides age and gender. RESULTS: 25% of cases identified by inpatient main diagnoses were removed by applying the exclusion criteria. Cantonal ACSC hospitalization rates varied from 1.4 to 8.9 per 1,000 insured per year. Morbidity inferred from diagnoses and drugs dramatically increased the predictive performance, with the greatest effect found for conditions linked to an ACSC. More physician visits were associated with fewer PAH, although very high users were at greater risk and subjects who had not consulted were at negligible risk. By maximizing health status adjustment, two thirds of the cantons changed their adjusted ratio by more than 10 percent. Cantonal variations remained substantial but unexplained by supply or demand. CONCLUSION: Additional adjustment for health status is required when using ACSC to monitor ambulatory care. Drug-inferred morbidities are a promising approach.
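A minimal sketch of the adjustment step, assuming a person-level table with the hypothetical file and column names below, using a negative binomial GLM in statsmodels with person-years as an exposure offset:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per insured person; all names below are placeholder assumptions:
#   acsc           - count of ACSC hospitalizations after exclusion criteria
#   years          - person-years of follow-up (1 or 2)
#   age, sex       - demographics
#   inpatient_dx   - morbidity score from inpatient diagnoses
#   drug_morbidity - morbidity score inferred from dispensed drugs
#   n_visits       - number of physician visits
#   canton         - canton of residence
df = pd.read_csv("insured.csv")
offset = np.log(df["years"])

# Minimal adjustment: age and gender only.
base = smf.glm("acsc ~ age + sex", data=df,
               family=sm.families.NegativeBinomial(), offset=offset).fit()

# Extended adjustment: add diagnosis- and drug-inferred morbidity and visits.
full = smf.glm("acsc ~ age + sex + inpatient_dx + drug_morbidity + n_visits",
               data=df,
               family=sm.families.NegativeBinomial(), offset=offset).fit()

# Cantonal standardized ratios (observed / expected) under each adjustment;
# comparing smr_base with smr_full shows how much the extra health-status
# information moves each canton's ratio.
df["exp_base"] = base.predict(df, offset=offset)
df["exp_full"] = full.predict(df, offset=offset)
ratios = df.groupby("canton")[["acsc", "exp_base", "exp_full"]].sum()
ratios["smr_base"] = ratios["acsc"] / ratios["exp_base"]
ratios["smr_full"] = ratios["acsc"] / ratios["exp_full"]
print(ratios[["smr_base", "smr_full"]])
```

This is only an illustrative setup of the general approach; the study's actual model specification, morbidity groupings and exclusion logic are more detailed than shown here.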
Abstract:
Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
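A minimal sketch of the error-model construction described above, under simplifying assumptions: curves are discretized on a common time grid, FPCA is approximated by a PCA of the discretized responses, and a random-forest regressor (my choice here, not necessarily the method used in the paper) maps the proxy responses to the FPCA scores of the exact responses:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Learning set: realizations for which both the proxy and the exact solver
# were run; rows are response curves on a common time grid (placeholder data).
n_learn, n_new, n_t = 50, 500, 200
proxy_learn = rng.normal(size=(n_learn, n_t))
exact_learn = proxy_learn + 0.1 * rng.normal(size=(n_learn, n_t))
proxy_new = rng.normal(size=(n_new, n_t))          # proxy-only realizations

# FPCA (approximated by PCA of the discretized curves): keep the components
# retaining most of the variability, which reduces the dimensionality.
pca_exact = PCA(n_components=0.99).fit(exact_learn)
scores_exact = pca_exact.transform(exact_learn)
pca_proxy = PCA(n_components=0.99).fit(proxy_learn)
scores_proxy_learn = pca_proxy.transform(proxy_learn)

# Error model: learn the map from proxy scores to exact scores.
error_model = RandomForestRegressor(random_state=0)
error_model.fit(scores_proxy_learn, scores_exact)

# Predict the exact response of new realizations from the proxy alone.
scores_pred = error_model.predict(pca_proxy.transform(proxy_new))
exact_pred = pca_exact.inverse_transform(scores_pred)
print(exact_pred.shape)  # (n_new, n_t)
```

Because the model is built directly on the quantity of interest, the residuals between predicted and exact curves on the learning set also give the diagnostic mentioned in the abstract: they indicate whether the learning set is informative enough and how faithful the proxy is to the exact model.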