905 results for Incomplete Block-designs
Abstract:
Title V of the Social Security Act is the longest-standing public health legislation in American history. Enacted in 1935, Title V is a federal-state partnership that promotes and improves maternal and child health (MCH). According to each state's unique needs, Title V supports a spectrum of services, from infrastructure-building services such as quality assurance and policy development, to gap-filling direct health care services. Title V resources are directed towards MCH priority populations: pregnant women, mothers, infants, women of reproductive age, children and adolescents, and children and youth with special health care needs.
Abstract:
Synthesis report. This thesis consists of three essays on optimal dividend strategies. Each essay corresponds to a chapter. The first two essays were written in collaboration with Professors Hans Ulrich Gerber and Elias S. W. Shiu and have been published; see Gerber et al. (2006b) and Gerber et al. (2008). The third essay was written in collaboration with Professor Hans Ulrich Gerber. The problem of optimal dividend strategies goes back to de Finetti (1957). It is posed as follows: given the surplus of a company, determine the optimal strategy for distributing dividends. The criterion used is to maximize the sum of the discounted dividends paid to shareholders until the company's ruin. Since de Finetti (1957), the problem has taken several forms and has been solved for different models. In the classical model of ruin theory, the problem was solved by Gerber (1969) and, more recently, using a different approach, by Azcue and Muler (2005) and Schmidli (2008). In the classical model, money flows in continuously at a constant rate, while the outflows are random: they follow a jump process, namely a compound Poisson process. A good example of such a model is the surplus of an insurance company, whose inflows and outflows are the premiums and the claims, respectively. The first graph of Figure 1 illustrates an example. In this thesis, only barrier strategies are considered: whenever the surplus exceeds the barrier level b, the excess is distributed to shareholders as dividends. The second graph of Figure 1 shows the same surplus example when a barrier at level b is introduced, and the third graph of the figure shows the cumulative dividends.
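The barrier strategy just described lends itself to a short Monte Carlo sketch. The following is a minimal illustration, not code from the thesis; the premium rate, Poisson claim rate, exponential claim sizes, discount rate, and barrier level are all arbitrary assumed parameters:

```python
import math
import random

def simulate_discounted_dividends(x0=5.0, b=10.0, c=1.5, lam=1.0,
                                  claim_mean=1.0, delta=0.05,
                                  horizon=200.0, rng=None):
    """One sample path of the classical risk model under a barrier strategy.

    The surplus grows at premium rate c; claims arrive as a compound
    Poisson process with rate lam and exponential sizes. Whenever the
    surplus would exceed the barrier b, the excess is paid out as
    dividends. Returns the discounted dividends paid until ruin (or
    until the time horizon, whichever comes first). All parameter
    values are illustrative assumptions.
    """
    rng = rng or random.Random(0)
    t = 0.0
    dividends = max(x0 - b, 0.0)          # lump sum at time 0 if x0 > b
    surplus = min(x0, b)
    while t < horizon:
        wait = rng.expovariate(lam)       # time until the next claim
        time_to_barrier = (b - surplus) / c
        if wait <= time_to_barrier:
            surplus += c * wait           # barrier not reached yet
        else:
            # From the moment the surplus hits b until the claim,
            # dividends are paid continuously at rate c; discount them:
            # integral of c * exp(-delta * s) ds over [start, t + wait].
            start = t + time_to_barrier
            dividends += c * (math.exp(-delta * start) -
                              math.exp(-delta * (t + wait))) / delta
            surplus = b
        t += wait
        surplus -= rng.expovariate(1.0 / claim_mean)   # claim amount
        if surplus < 0:
            break                          # ruin; |surplus| is the deficit
    return dividends

def estimate_value(b, n=2000):
    """Monte Carlo estimate of the expected discounted dividends for barrier b."""
    return sum(simulate_discounted_dividends(b=b, rng=random.Random(i))
               for i in range(n)) / n
```

Comparing `estimate_value(b)` over a grid of barrier levels gives a crude empirical picture of the optimization problem that the thesis treats analytically.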
Chapter 1: "Maximizing dividends without bankruptcy". In this first essay, optimal barriers are computed for different claim-amount distributions under two criteria: I) the optimal barrier is computed using the usual criterion, which maximizes the expected discounted dividends until ruin; II) the optimal barrier is computed using a second criterion, which maximizes the expected difference between the discounted dividends until ruin and the deficit at ruin. This essay is inspired by Dickson and Waters (2004), whose idea is to make the shareholders bear the deficit at ruin. This is all the more appropriate for an insurance company, whose ruin must be avoided. In the example of Figure 1, the deficit at ruin is denoted R. Numerical examples allow us to compare the optimal barrier levels in situations I and II. This idea of adding a penalty at ruin was generalized in Gerber et al. (2006a).

Chapter 2: "Methods for estimating the optimal dividend barrier and the probability of ruin". In this second essay, since in practice one never has all the necessary information about the claim-amount distribution, only the first moments of this distribution are assumed to be known. The essay develops and examines methods for approximating, in this situation, the optimal barrier level under the usual criterion (case I above). The "de Vylder" and "diffusion" approximations are explained and examined; some of these approximations use two, three, or four of the first moments. Numerical examples allow us to compare the approximations of the optimal barrier level, not only with the exact values but also with one another.
Chapter 3: "Optimal dividends with incomplete information". In this third and final essay, we return to methods for approximating the optimal barrier level when only the first moments of the jump-amount distribution are known. This time, the dual model is considered. As in the classical model, there is a continuous flow in one direction and a jump process in the other; but, in contrast to the classical model, the gains follow a compound Poisson process while the losses are constant and continuous; see Figure 2. Such a model would suit a pension fund or a company specializing in discoveries or inventions. The "de Vylder" and "diffusion" approximations, as well as the new "gamma" and "gamma process" approximations, are explained and analyzed. The new approximations appear to give better results in some cases.
Abstract:
The study area, located north of Konya (Central Turkey), is composed of Silurian to Cretaceous metamorphosed rocks. The lower unit of the oldest formation (Silurian-Early Permian) is mostly made up of Silurian-Early Carboniferous metacarbonates. These rocks pass laterally and vertically into Devonian-Early Permian series with continental margin, shallow water, and pelagic characteristics. They are intruded by, or juxtaposed with, different kinds of metamagmatic rocks, which show MORB, continental arc, and within-plate characteristics. The Palaeozoic units are covered unconformably by Triassic-Cretaceous metasedimentary units. All these rocks are overthrust by Mesozoic ophiolites. The Palaeozoic sequence can be seen as a northern Palaeotethys passive, then active, margin. The northward subduction of the Palaeotethys ocean during Carboniferous-Triassic times induced the development of a magmatic arc and fore-arc sequence (Carboniferous-Permian). Before Early Triassic (?Late Permian) time, the fore-arc sequence was uplifted above sea level and eroded. The Triassic sequences are regarded as marking the onset of back-arc opening and the detachment of the Anatolian Konya block from the active Eurasian margin. Finally, a suture zone formed during the Carnian between the Konya region and the Menderes-Tauride Cimmerian block due to the closing of Palaeotethys. This geodynamic evolution can be correlated with the evolution of the Karaburun sequence in western Turkey.
Abstract:
When certain control parameters of nerve cell models are varied, complex bifurcation structures develop in which the available dynamical behaviors appear classified in blocks, according to criteria of dynamical likeness. This block-structured dynamics may be a clue to understanding how activated neurons encode information in the spike trains of their action potentials.
Abstract:
Developing a vaccine against the human immunodeficiency virus (HIV) poses an exceptional challenge. There are no documented cases of immune-mediated clearance of HIV from an infected individual, and no known correlates of immune protection. Although nonhuman primate models of lentivirus infection have provided valuable data about HIV pathogenesis, such models do not predict HIV vaccine efficacy in humans. The combined lack of a predictive animal model and undefined biomarkers of immune protection against HIV necessitate that vaccines to this pathogen be tested directly in clinical trials. Adaptive clinical trial designs can accelerate vaccine development by rapidly screening out poor vaccines while extending the evaluation of efficacious ones, improving the characterization of promising vaccine candidates and the identification of correlates of immune protection.
Abstract:
It is well established that cancer cells can recruit CD11b(+) myeloid cells to promote tumor angiogenesis and tumor growth. Increasing interest has emerged in the identification of subpopulations of tumor-infiltrating CD11b(+) myeloid cells using flow cytometry techniques. In the literature, however, discrepancies exist on the phenotype of these cells (Coffelt et al., Am J Pathol 2010;176:1564-1576). Since flow cytometry analysis requires particular precautions for accurate sample preparation and reliable data acquisition, analysis, and interpretation, some discrepancies might be due to technical reasons rather than biological grounds. We used the syngeneic orthotopic 4T1 mammary tumor model in immunocompetent BALB/c mice to analyze and compare the phenotype of CD11b(+) myeloid cells isolated from peripheral blood and from tumors, using six-color flow cytometry. We report here that nonspecific antibody binding through Fc receptors and the presence of dead cells and cell doublets in tumor-derived samples combine to generate artifacts in the phenotype of tumor-infiltrating CD11b(+) subpopulations. We show that the heterogeneity of tumor-infiltrating CD11b(+) subpopulations analyzed without particular precautions was greatly reduced upon Fc block treatment and the exclusion of dead cells and cell doublets. Phenotyping of tumor-infiltrating CD11b(+) cells was particularly sensitive to these parameters compared with circulating CD11b(+) cells. Taken together, our results identify Fc block treatment and the exclusion of dead cells and cell doublets as simple but crucial steps for the proper analysis of tumor-infiltrating CD11b(+) cell populations.
Abstract:
We report a case of neonatal lupus erythematosus (NLE) with congenital heart block and severe myocardial failure, which was followed from the 25th week of gestation because of fetal bradycardia. The child was delivered at the 37th week of gestation by elective cesarean section because of echocardiographically documented heart enlargement, pericardial effusion and moderate insufficiency of the mitral and tricuspid valves. In spite of immediate pacing, intubation and supportive treatment, the newborn developed progressive heart failure. Echocardiography showed endocarditis of the mitral valve and diffuse myocarditis. The heart failure resolved under steroid treatment. Our experience supports the early use of steroids in treating myocarditis due to NLE. Intrauterine steroid treatment in the presence of fetal hydrops and congenital heart block is discussed.
Abstract:
The objective of this work was to evaluate the effect of drought on genetic parameters and breeding values of cassava. The experiments were carried out in a completely randomized block design with three replicates, under field conditions with (WD) or without (FI) water deficit. Yield of storage roots (RoY), shoot yield (ShY), and starch yield (StY), as well as the number of roots (NR) and root dry matter content (DMC), were evaluated in 47 cassava accessions. Significant differences were observed among accessions; according to heritability, these differences were mostly of a genetic nature. Heritability estimates for genotypic effects ranged from 0.25±0.12 (NR) to 0.60±0.18 (DMC) for WD, and from 0.51±0.17 (NR) to 0.80±0.21 (RoY and StY) for FI, as a consequence of the greater environmental influence under WD. Selective accuracy was lower under WD, ranging from 0.71 (NR) to 0.89 (RoY, DMC, and StY). However, genetic gains were quite high, ranging from 24.43% (DMC) to 113.41% (StY) under WD, and from 8.5% (DMC) to 75.70% (StY) under FI. These genetic parameters may be useful for defining which selection strategies, breeding methods, and experimental designs are more suitable for obtaining cassava genetic gains for drought tolerance.
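The genetic parameters quoted above (heritability of genotypic effects, selective accuracy, and percentage genetic gain) follow standard quantitative-genetics definitions, which can be sketched as follows. The formulas are the textbook ones, not the paper's exact mixed-model estimators, and the numbers used below are purely illustrative:

```python
def broad_sense_heritability(var_g, var_e, n_reps):
    """Heritability of genotype means: h2 = Vg / (Vg + Ve / r),
    where Vg and Ve are genotypic and residual variance components
    and r is the number of replicates."""
    return var_g / (var_g + var_e / n_reps)

def selective_accuracy(h2):
    """Accuracy of selection on genotype means: the correlation between
    predicted and true genotypic values, sqrt(h2) in this simple setting."""
    return h2 ** 0.5

def genetic_gain_pct(selected_mean, overall_mean, h2):
    """Expected gain from selection as a percentage of the trial mean:
    GS% = 100 * h2 * (Xs - X) / X, with Xs the mean of the selected
    accessions and X the overall mean."""
    return 100.0 * h2 * (selected_mean - overall_mean) / overall_mean
```

With three replicates (as in the trials above) and made-up variance components, these functions reproduce the familiar pattern the abstract reports: a larger residual variance (stronger environmental influence, as under water deficit) lowers both heritability and selective accuracy.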
Abstract:
General Introduction. These three chapters, while fairly independent of each other, study economic situations in incomplete-contract settings. They are the product both of the academic freedom my advisors granted me, and in this sense they reflect my personal interests, and of their engaged feedback. The content of each chapter can be summarized as follows:

Chapter 1: Inefficient durable-goods monopolies. In this chapter we study the efficiency of an infinite-horizon durable-goods monopoly model with a finite number of buyers. We find that, while all pure-strategy Markov Perfect Equilibria (MPE) are efficient, there also exist previously unstudied inefficient MPE in which high-valuation buyers randomize their purchase decision while trying to benefit from the low prices offered once a critical mass has purchased. Real-time delay, an unusual monopoly distortion, is the result of this attrition behavior. We conclude that neither technological constraints nor concern for reputation is necessary to explain inefficiency in monopolized durable-goods markets.

Chapter 2: Downstream mergers and producer's capacity choice: why bake a larger pie when getting a smaller slice? In this chapter we study the effect of downstream horizontal mergers on the upstream producer's capacity choice. Contrary to conventional wisdom, we find a non-monotonic relationship: horizontal mergers induce a higher upstream capacity if the cost of capacity is low, and a lower upstream capacity if this cost is high. We explain this result by decomposing the total effect into two competing effects: a change in hold-up and a change in bargaining erosion.

Chapter 3: Contract bargaining with multiple agents. In this chapter we study a bargaining game between a principal and N agents when the utility of each agent depends on all agents' trades with the principal.
We show, using the Potential, that equilibrium payoffs coincide with the Shapley value of the underlying coalitional game with an appropriately defined characteristic function, which under common assumptions coincides with the principal's equilibrium profit in the offer game. Since the problem accounts for differences in information and in agents' conjectures, the outcome can be either efficient (e.g. public contracting) or inefficient (e.g. passive beliefs).
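The Shapley value invoked above can be computed directly from its definition for a small game: each player's value is the average of their marginal contributions over all orderings of the players. A minimal sketch with an arbitrary illustrative characteristic function, not the one induced by the chapter's bargaining model:

```python
from itertools import permutations

def shapley_value(players, v):
    """Shapley value by brute force: phi_i is the average of
    v(S + {i}) - v(S) over the predecessor sets S induced by all
    orderings of the players. Feasible only for small games."""
    phi = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: phi[p] / len(orderings) for p in players}

# Illustrative 3-player majority game: a coalition is worth 1
# if and only if it contains at least 2 players.
v = lambda s: 1.0 if len(s) >= 2 else 0.0
phi = shapley_value(["a", "b", "c"], v)   # symmetric players: 1/3 each
```

By efficiency of the Shapley value, the payoffs sum to v of the grand coalition, which is the property that lets the thesis identify them with the principal's equilibrium profit.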
Abstract:
Sudoku problems are among the best-known and most enjoyed pastimes, with a never-diminishing popularity, but over the last few years these problems have gone from an entertainment to an interesting research area; a doubly interesting one, in fact. On the one hand, Sudoku problems, being a variant of Gerechte Designs and Latin Squares, are actively used for experimental design, as in [8, 44, 39, 9]. On the other hand, Sudoku problems, simple as they seem, are really hard structured combinatorial search problems, and thanks to their characteristics and behavior they can be used as benchmark problems for refining and testing solving algorithms and approaches. Moreover, thanks to their rich inner structure, their study can contribute more than the study of random problems to our goal of solving real-world problems and applications and of understanding the characteristics that make problems hard to solve. In this work we use two techniques for modeling and solving Sudoku problems, namely, Constraint Satisfaction Problem (CSP) and Satisfiability Problem (SAT) approaches. To this end we define the Generalized Sudoku Problem (GSP), where regions can be rectangular, problems can be of any order, and the existence of a solution is not guaranteed. With respect to worst-case complexity, we prove that GSP with block regions of m rows and n columns with m = n is NP-complete. To study the empirical hardness of GSP, we define a series of instance generators that differ in the level of balance they guarantee among the constraints of the problem, by finely controlling how the holes are distributed over the cells of the GSP. Experimentally, we show that the more balanced the constraints, the higher the complexity of solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), a problem that GSP generalizes.
Finally, we provide a study of the correlation between backbone variables (variables that take the same value in all solutions of an instance) and the hardness of GSP.
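The constraint structure of a GSP instance (rows, columns, and m-by-n blocks as all-different constraints, with holes to fill) can be sketched with a plain backtracking solver. This is an illustration of the problem definition only; the thesis works with dedicated CSP and SAT encodings and solvers:

```python
def solve_gsp(grid, m, n):
    """Backtracking solver for a Generalized Sudoku of order m*n:
    every row, every column, and every block of m rows by n columns
    must contain each value in 1..m*n exactly once. Holes are marked
    with 0. Mutates grid and returns it solved, or None if the
    instance has no solution (GSP does not guarantee one)."""
    N = m * n

    def ok(r, c, val):
        # val must not already appear in the row, column, or block.
        if any(grid[r][j] == val for j in range(N)):
            return False
        if any(grid[i][c] == val for i in range(N)):
            return False
        br, bc = (r // m) * m, (c // n) * n
        return all(grid[i][j] != val
                   for i in range(br, br + m)
                   for j in range(bc, bc + n))

    for r in range(N):
        for c in range(N):
            if grid[r][c] == 0:
                for val in range(1, N + 1):
                    if ok(r, c, val):
                        grid[r][c] = val
                        if solve_gsp(grid, m, n) is not None:
                            return grid
                        grid[r][c] = 0     # backtrack
                return None                # hole with no consistent value
    return grid                            # no holes left: solved
```

How the holes are placed in such a grid is exactly what the generators above control; balanced hole distributions leave this kind of search with fewer easy propagations, which is one intuition for the hardness results reported.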
Abstract:
BACKGROUND: The past three decades have seen rapid improvements in the diagnosis and treatment of most cancers, and the most important contributor has been research. Progress in rare cancers has been slower, not least because of the challenges of undertaking research. SETTINGS: The International Rare Cancers Initiative (IRCI) is a partnership which aims to stimulate and facilitate the development of international clinical trials for patients with rare cancers. It is focused on interventional, usually randomized, clinical trials with the clear goal of improving outcomes for patients. The key challenges are organisational and methodological. A multi-disciplinary workshop to review the methods used in IRCI portfolio trials was held in Amsterdam in September 2013. Other as-yet unrealised methods were also discussed. RESULTS: The IRCI trials are each presented to exemplify possible approaches to designing credible trials in rare cancers. Researchers may consider these for use in future trials and understand the choices made for each design. INTERPRETATION: Trials can be designed using a wide array of possibilities. There is no 'one size fits all' solution. In order to make progress in rare diseases, decisions to change practice will have to be based on less direct evidence from clinical trials than in more common diseases.