6 results for Degree of contribution

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

100.00%

Publisher:

Abstract:

The main aim of this Ph.D. dissertation is the study of clustering dependent data by means of copula functions, with particular emphasis on microarray data. Copula functions are a popular multivariate modeling tool in every field where multivariate dependence is of great interest, yet their use in clustering has not yet been investigated. The first part of this work reviews the literature on clustering methods, copula functions and microarray experiments. Attention focuses on the K–means (Hartigan, 1975; Hartigan and Wong, 1979), hierarchical (Everitt, 1974) and model–based (Fraley and Raftery, 1998, 1999, 2000, 2007) clustering techniques, whose performance is compared. Then the probabilistic interpretation of Sklar's theorem (Sklar, 1959), estimation methods for copulas such as Inference for Margins (Joe and Xu, 1996), and the Archimedean and Elliptical copula families are presented. Finally, applications of clustering methods and copulas to genetic and microarray experiments are highlighted. The second part contains the original contribution proposed. A simulation study evaluates the performance of the K–means and hierarchical bottom–up clustering methods in identifying clusters according to the dependence structure of the data generating process. Different simulations are performed by varying several conditions (e.g., the kind of margins (distinct, overlapping or nested) and the value of the dependence parameter), and the results are evaluated by means of different performance measures. In light of the simulation results and of the limits of the two investigated clustering methods, a new clustering algorithm based on copula functions ('CoClust' in brief) is proposed. The basic idea, the iterative procedure of the CoClust and a description of the R functions written for it, together with their output, are given.
The CoClust algorithm is tested on simulated data (varying the number of clusters, the copula models, the dependence parameter value and the degree of overlap of the margins) and compared with model–based clustering by means of different performance measures, such as the percentage of well–identified numbers of clusters and the non-rejection percentage of H0 on the dependence parameter. It is shown that the CoClust algorithm overcomes all the observed limits of the other investigated clustering techniques and is able to identify clusters according to the dependence structure of the data, independently of the degree of overlap of the margins and of the strength of the dependence. The CoClust uses a criterion based on the maximized log–likelihood function of the copula and can virtually account for any possible dependence relationship between observations. Several distinctive characteristics of the CoClust are shown, e.g. its ability to identify the true number of clusters and the fact that it does not require a starting classification. Finally, the CoClust algorithm is applied to the real microarray data of Hedenfalk et al. (2001), both to the gene expressions observed in three different cancer samples and to the columns (tumor samples) of the whole data matrix.
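The criterion sketched in the abstract, scoring a grouping of observations by the maximized copula log–likelihood, can be illustrated with a minimal example. This is not the author's CoClust R implementation; it only shows, for a hypothetical bivariate Gaussian copula with known dependence parameter rho, why a pairing of observations that respects the dependence structure scores higher than a shuffled one.

```python
# Minimal sketch (not the CoClust R code): score candidate pairings of
# observations by the log-likelihood of a bivariate Gaussian copula.
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8                                  # hypothetical dependence parameter
cov = [[1.0, rho], [rho, 1.0]]
# Latent normal scores of 500 dependent observation pairs.
z = rng.multivariate_normal([0.0, 0.0], cov, size=500)

def gaussian_copula_loglik(q, rho):
    """Log-likelihood of a bivariate Gaussian copula, with q the normal
    scores (what the normal quantile of the uniform margins would give)."""
    det = 1.0 - rho ** 2
    quad = rho ** 2 * (q[:, 0] ** 2 + q[:, 1] ** 2) \
        - 2.0 * rho * q[:, 0] * q[:, 1]
    return float(np.sum(-0.5 * np.log(det) - quad / (2.0 * det)))

# A pairing that respects the dependence structure scores higher than a
# shuffled, independence-like pairing -- the kind of comparison a
# copula-likelihood clustering criterion exploits.
ll_paired = gaussian_copula_loglik(z, rho)
z_shuffled = np.column_stack([z[:, 0], rng.permutation(z[:, 1])])
ll_shuffled = gaussian_copula_loglik(z_shuffled, rho)
```

In a clustering context the same score would be maximized over candidate allocations of observations to clusters, with the copula parameter re-estimated at each step.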

Relevance:

100.00%

Publisher:

Abstract:

Mathematical models of the knee joint are important tools with both theoretical and practical applications. They are used by researchers to fully understand the stabilizing role of the components of the joint, by engineers as an aid for prosthetic design, by surgeons when planning an operation or during the operation itself, and by orthopedists for diagnosis and rehabilitation purposes. The principal aims of knee models are to reproduce the restraining function of each structure of the joint and to replicate the relative motion of the bones which constitute the joint itself. Clearly, the first aim is instrumental to the second. However, the standard procedures for the dynamic modelling of the knee tend to be more focused on the second aspect: the motion of the joint is correctly replicated, but the stabilizing role of the articular components is somehow lost. A first contribution of this dissertation is the definition of a novel approach, called the sequential approach, for the dynamic modelling of the knee. The procedure makes it possible to develop increasingly sophisticated models of the joint through a succession of steps, starting from a first simple model of its passive motion. The fundamental characteristic of the proposed procedure is that the results obtained at each step do not worsen those already obtained at previous steps, thus preserving the restraining function of the knee structures. The models stemming from the first two steps of the sequential approach are then presented. The result of the first step is a model of the passive motion of the knee, inclusive of the patello-femoral joint. Kinematical and anatomical considerations lead to the definition of a one-degree-of-freedom rigid-link mechanism whose members represent specific components of the joint. The result of the second step is a stiffness model of the knee, obtained from the first one by following the rules of the proposed procedure.
Both models have been identified from experimental data by means of an optimization procedure, and their simulated motions have then been compared to the experimental ones. Both models accurately reproduce the motion of the joint under the corresponding loading conditions. Moreover, the sequential approach ensures that the results obtained at the first step are not worsened at the second: the stiffness model reproduces the passive motion of the knee with the same accuracy as the previous, simpler model. The procedure proved successful and is thus promising for the definition of more complex models which could also include the effect of muscular forces.
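The identification step the abstract mentions, fitting model parameters to experimental data through an optimization procedure, can be sketched generically. The toy model below (a single linear stiffness k with a rest angle theta0, both hypothetical) is not the dissertation's knee model; it only illustrates the least-squares fitting of simulated to measured responses.

```python
# Generic sketch of least-squares parameter identification, not the
# dissertation's actual knee model: a hypothetical torsional-spring model
# torque = k * (theta - theta0), with k and theta0 to be identified.
import numpy as np

rng = np.random.default_rng(1)
theta = np.linspace(0.0, 1.2, 60)            # joint angle samples (rad)
# Synthetic "experimental" torques: true k = 35.0, true theta0 = 0.1,
# plus measurement noise.
torque_exp = 35.0 * (theta - 0.1) + rng.normal(0.0, 0.5, theta.size)

# Rewrite the model as torque = k * theta + c, so that ordinary linear
# least squares identifies both parameters; then theta0 = -c / k.
A = np.column_stack([theta, np.ones_like(theta)])
(k, c), *_ = np.linalg.lstsq(A, torque_exp, rcond=None)
theta0 = -c / k
residual = torque_exp - k * (theta - theta0)  # fit error per sample
```

A real identification would replace the linear model with the simulated response of the rigid-link mechanism and minimize the same kind of squared-error cost, typically with a nonlinear optimizer.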

Relevance:

100.00%

Publisher:

Abstract:

The interaction between disciplines in the study of human population history is of primary importance, profiting from the biological and cultural characteristics of humankind. In fact, data from genetics, linguistics, archaeology and cultural anthropology can be combined to allow a broader research perspective. This multidisciplinary approach is here applied to the study of the prehistory of sub-Saharan African populations: in this continent, where Homo sapiens began its evolution and diversification, understanding the patterns of human variation has crucial relevance. In this dissertation, molecular data are interpreted and complemented with a major contribution from linguistics: linguistic data are compared to the genetic data, and the research questions are contextualized within a linguistic perspective. In the four articles proposed, we analyze Y chromosome SNP and STR profiles and full mtDNA genomes on a representative number of samples to investigate key questions of African human variability. These questions address i) the amount of genetic variation on a continental scale and the effects of the widespread migration of Bantu speakers, ii) the extent of ancient population structure that has been lost in present-day populations, iii) the colonization of the southern edge of the continent together with the degree of population contact/replacement, and iv) the prehistory of the diverse Khoisan ethnolinguistic groups, which have traditionally been understudied despite representing one of the most ancient divergences of the modern human phylogeny. Our results uncover a deep level of genetic structure within the continent and a multilayered pattern of contact between populations. These case studies represent a valuable contribution to the debate on our prehistory and open up further research threads.

Relevance:

100.00%

Publisher:

Abstract:

The development of High-Integrity Real-Time Systems carries high human, material and schedule costs. Factoring functional, reusable logic in the application favors incremental development and contains costs. Yet achieving incrementality in the timing behavior is a much harder problem: complex features at all levels of the execution stack, aimed at boosting average-case performance, exhibit timing behavior that is highly dependent on execution history, which wrecks time composability, and incrementality with it. Our goal here is to restore time composability to the execution stack, working bottom-up across it. We first characterize time composability without making assumptions on the system architecture or on the software deployed to it. We then focus on the role played by the real-time operating system in this pursuit. Initially we consider single-core processors and, becoming progressively less permissive about the admissible hardware features, devise solutions that restore a convincing degree of time composability. To show what can be achieved in practice, we developed TiCOS, an ARINC-compliant kernel, and re-designed ORK+, a kernel for Ada Ravenscar runtimes. In that work we added support for limited preemption to ORK+, an absolute first among real-world kernels. Our implementation allows resource sharing to coexist with limited-preemptive scheduling, which extends the state of the art. We then turn our attention to multicore architectures, first considering partitioned systems, for which we achieve results close to those obtained for single-core processors. Subsequently, we move away from the over-provisioning of such systems and consider less restrictive uses of homogeneous multiprocessors, where the scheduling algorithm is the key to high schedulable utilization. To that end we single out RUN, a promising baseline, and extend it into SPRINT, which supports sporadic task sets and hence better matches real-world industrial needs.
To corroborate our results, we present findings from real-world case studies from the avionics industry.
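Limited-preemptive scheduling, which the abstract credits ORK+ with supporting, lets a running low-priority task defer preemption for a bounded region. A toy discrete-time sketch (not TiCOS or ORK+ code; all task parameters are hypothetical) shows how a deferral region long enough to cover a job's remaining execution removes a preemption that a fully preemptive scheduler would incur:

```python
# Toy fixed-priority simulation (hypothetical parameters, not kernel code):
# one high-priority periodic task and one low-priority job; np_region is
# the number of time units the low task may keep running after a
# high-priority release (0 = fully preemptive scheduling).
def simulate(horizon, hi_period, hi_wcet, lo_wcet, np_region):
    """Return how many times the low-priority job is preempted."""
    preemptions, hi_left, lo_left, defer, running = 0, 0, lo_wcet, 0, None
    for t in range(horizon):
        if t % hi_period == 0:                 # high-priority job release
            hi_left = hi_wcet
            if running == "lo" and lo_left > 0:
                defer = np_region              # low task defers the preemption
        if hi_left > 0 and defer == 0:
            if running == "lo" and lo_left > 0:
                preemptions += 1               # low task actually preempted
            running = "hi"
            hi_left -= 1
        elif lo_left > 0:
            running = "lo"
            lo_left -= 1
            defer = max(0, defer - 1)
            if lo_left == 0:
                defer = 0                      # region ends with the job
        else:
            running = None
    return preemptions
```

With a deferral region of 0 the low-priority job is preempted once by the high-priority release; with a region covering its remaining execution it completes first and the high-priority job runs afterwards, delayed by a bounded amount, which is the trade-off limited preemption makes.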

Relevance:

100.00%

Publisher:

Abstract:

In recent decades, medical malpractice has been framed as one of the most critical issues for healthcare providers and health policy, holding a central role in both the policy agenda and the public debate. The Law and Economics literature has devoted much attention to medical malpractice and to the investigation of the impact of malpractice reforms. Nonetheless, some reforms, such as schedules of noneconomic damages, have received far less empirical study, and their effects remain highly debated. The present work seeks to contribute to the study of medical malpractice and of schedules of noneconomic damages in a civil law country with a public national health system, using Italy as a case study. Besides considering schedules and exploiting a quasi-experimental setting, the novelty of our contribution consists in the inclusion of the performance of the judiciary (measured as courts' civil backlog) in the empirical analysis. The empirical analysis is twofold. First, it investigates how limiting compensation for pain and suffering through schedules affects the malpractice insurance market, in terms of the presence of private insurers and of the premiums charged. Second, it examines whether, and to what extent, healthcare providers react to the implementation of this policy in terms of both the levels and the composition of the medical treatments offered. Our findings show that the introduction of schedules increases the presence of insurers only in inefficient courts, while it does not produce significant effects on the premiums paid. Judicial inefficiency is attractive to insurers at average values of schedule penetration of the market, with the positive impact of inefficiency increasing as the territorial coverage of schedules increases. Moreover, the implementation of schedules tends to reduce the use of defensive practices on the part of clinicians, but the magnitude of this impact is ultimately determined by the actual degree of backlog of the court implementing the schedules.

Relevance:

100.00%

Publisher:

Abstract:

In the present thesis I study the contribution of inventory management to firm value from a risk management perspective. I find a significant contribution of inventories to the value of risk management, especially through the operating-flexibility channel. In contrast, I do not find evidence supporting the view of inventories as a reserve of liquidity. Inventories substitute, albeit imperfectly, for derivatives or cash holdings. The substitution between hedging with derivatives and hedging with inventories is moderated by the correlation between cash flow and the underlying asset of the derivative contract, and hedge ratios increase with the effectiveness of derivatives. The decision to hedge with cash holdings or with inventories is strongly influenced by the degree of complementarity between production factors and by cash flow volatility. In addition, I provide a risk-management-based explanation of the secular substitution between inventories and cash holdings documented, among others, by Bates et al. (2009, Journal of Finance). In a sample of U.S. firms between 1980 and 2006, I empirically confirm the negative relation between inventories and cash, and provide evidence on the poor performance of investment-cash-flow sensitivities as a measure of financial constraints also in the case of inventory investment. This result can be explained by firms' scarce reliance on inventories as a reserve of liquidity. Finally, as an extension of my study, I confront the theoretical predictions of a model on the integrated management of inventories, trade credit and cash holdings with empirical data.