951 results for one-meson-exchange: independent-particle shell model
Abstract:
Self-categorization theory is a social psychology theory dealing with the relation between the individual and the group. It explains group behaviour through the conception of self and others as members of social categories, and through the attribution of the categories' prototypical characteristics to individuals. Hence, it is a theory of the individual that is intended to explain collective phenomena. Situations involving a large number of non-trivially interacting individuals typically generate complex collective behaviours, which are difficult to anticipate on the basis of individual behaviour alone. Computer simulation of such systems is a reliable way of systematically exploring the dynamics of the collective behaviour as a function of the individual specifications.
In this thesis, we present a formal model of the part of self-categorization theory known as the metacontrast principle. Given the distribution of a set of individuals on one or several comparison dimensions, the model generates categories and their associated prototypes. We show that the model behaves coherently with respect to the theory and is able to replicate experimental data concerning various group phenomena, for example polarization. Moreover, it makes it possible to describe systematically the predictions of the theory from which it is derived, especially in situations that have not yet been studied experimentally. At the collective level, several dynamics can be observed, among them convergence towards consensus, towards fragmentation, or towards the emergence of extreme attitudes. We also study the effect of the social network on the dynamics and show that, except for the convergence speed, which increases as the mean distances on the network decrease, the observed convergence types depend little on the chosen network. We further note that individuals located at the border of the groups (whether in the social network or spatially) have a decisive influence on the outcome of the dynamics. In addition, the model can be used as an automatic classification algorithm. It identifies prototypes around which groups are built. Prototypes are positioned so as to accentuate the groups' typical characteristics and are not necessarily central. Finally, if the pixels of an image are considered as individuals in a three-dimensional colour space, the model provides a filter that can attenuate noise, help with object detection, and simulate perception biases such as chromatic induction.
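The metacontrast principle lends itself to a compact computational illustration. The sketch below (Python, with hypothetical attitude positions and category assignments, and assuming the standard formulation of the metacontrast ratio as mean inter-category distance divided by mean intra-category distance) shows how prototypes can be picked out as the members with the highest ratio; it illustrates the general principle, not the thesis model itself.

```python
import numpy as np

def metacontrast_ratios(positions, labels):
    """Metacontrast ratio per individual: mean distance to out-group members
    divided by mean distance to in-group members (standard formulation;
    illustrative only)."""
    positions = np.asarray(positions, dtype=float)
    labels = np.asarray(labels)
    ratios = np.empty(len(positions))
    for i, (x, g) in enumerate(zip(positions, labels)):
        same = (labels == g)
        same[i] = False                       # exclude self from the in-group mean
        d = np.abs(positions - x)
        intra = d[same].mean()
        inter = d[labels != g].mean()
        ratios[i] = inter / intra if intra > 0 else np.inf
    return ratios

# Hypothetical attitudes of eight individuals on one comparison dimension,
# already split into two categories.
attitudes = [1.0, 1.5, 2.0, 2.5, 6.0, 6.5, 7.0, 8.0]
groups    = [0,   0,   0,   0,   1,   1,   1,   1  ]

mcr = metacontrast_ratios(attitudes, groups)
for g in (0, 1):
    members = np.where(np.array(groups) == g)[0]
    proto = members[np.argmax(mcr[members])]
    print(f"group {g}: prototype at position {attitudes[proto]}")
# Prototypes are pulled away from the out-group rather than sitting at the
# group mean, illustrating the accentuation of typical group characteristics.
```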
Abstract:
Background: Breast cancer mortality has experienced important changes over the last century. Breast cancer occurs in the presence of other competing risks which can influence breast cancer incidence and mortality trends. The aims of the present work are: 1) to assess the impact of breast cancer deaths on all-cause mortality in Catalonia (Spain), by age and birth cohort, and 2) to estimate the risk of death from causes other than breast cancer, one of the inputs needed to model the breast cancer mortality reduction due to screening or therapeutic interventions. Methods: The multi-decrement life table methodology was used. First, all-cause mortality probabilities were obtained by age and cohort. Then the mortality probability for breast cancer was subtracted from the all-cause mortality probabilities to obtain cohort life tables for causes other than breast cancer. These life tables, on the one hand, provide an estimate of the risk of dying from competing risks and, on the other hand, make it possible to assess the impact of breast cancer deaths on all-cause mortality using the ratio of the probability of death from causes other than breast cancer to the all-cause probability of death. Results: There was an increasing impact of breast cancer on mortality in the first part of the 20th century, with a peak for cohorts born in 1945–54 in the 40–49 age groups (for which approximately 24% of mortality was due to breast cancer). Even though for cohorts born after 1955 there was only information for women under 50, it is also important to note that the impact of breast cancer on all-cause mortality decreased for those cohorts. Conclusion: We have quantified the effect of removing breast cancer mortality in different age groups and birth cohorts. Our results are consistent with US findings. We have also obtained an estimate of the risk of dying from competing causes, which will be used to assess the effect of mammography screening on breast cancer mortality in Catalonia.
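As an illustration of the multi-decrement step described above, the following sketch (Python, with made-up probabilities rather than the Catalan data) subtracts the breast cancer mortality probability from the all-cause probability for each age group of a cohort and computes the ratio used to assess the impact of breast cancer on all-cause mortality.

```python
# Hypothetical cohort mortality probabilities per 5-year age group (not real data).
age_groups = ["30-34", "35-39", "40-44", "45-49"]
q_all = [0.004, 0.006, 0.009, 0.014]   # all-cause probability of death in the interval
q_bc  = [0.001, 0.002, 0.002, 0.003]   # breast cancer probability of death in the interval

for age, qa, qb in zip(age_groups, q_all, q_bc):
    q_other = qa - qb                   # probability of death from competing causes
    ratio = q_other / qa                # share of all-cause mortality not due to breast cancer
    print(f"{age}: q_other = {q_other:.4f}, "
          f"{100 * (1 - ratio):.1f}% of deaths attributable to breast cancer")
```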
Abstract:
Our efforts are directed towards understanding the coscheduling mechanism in a NOW system when a parallel job is executed jointly with local workloads, balancing parallel performance against local interactive response. Explicit and implicit coscheduling techniques have been implemented in a PVM-Linux NOW (or cluster). However, dynamic coscheduling remains an open question when parallel jobs are executed in a non-dedicated cluster. A basic model for dynamic coscheduling in cluster systems is presented in this paper, and a dynamic coscheduling algorithm for this model is proposed. The applicability of this algorithm has been demonstrated and its performance analysed by simulation. Finally, a new tool (named Monito) for monitoring the different message queues in such environments is presented. The main aim of implementing this facility is to provide a means of capturing the bottlenecks and overheads of the communication system in a PVM-Linux cluster.
Abstract:
The objective of this thesis is to study wavelets and their role in turbulence applications. Under scrutiny in the thesis is the intermittency in turbulence models. Wavelets are used as a mathematical tool to study the intermittent activity that turbulence models produce. The first section introduces wavelets and wavelet transforms in general as a mathematical tool. Moreover, the basic properties of turbulence are discussed and classical methods for modeling turbulent flows are explained. Wavelets are applied both to model turbulence and to analyze turbulent signals. The model studied here is the GOY (Gledzer 1973, Ohkitani & Yamada 1989) shell model of turbulence, which is a popular model for explaining intermittency based on the cascade of kinetic energy. The goal is to introduce a better quantification method for the intermittency obtained in a shell model. Wavelets are localized in both space (time) and scale; they are therefore suitable candidates for the study of the singular bursts that interrupt the calm periods of the energy flow through the various scales. The study concerns two questions, namely the frequency of occurrence and the intensity of the singular bursts at various Reynolds numbers. The results indicate that singularities become more local as the Reynolds number increases. The singularities also become more local when the shell number is increased at a given Reynolds number. The study revealed that the singular bursts are more frequent at Re ~ 10^7 than in the cases with lower Re. The intermittency of bursts was similar for the cases with Re ~ 10^6 and Re ~ 10^5, but for the case with Re ~ 10^4 the bursts occurred after long waiting times in a different fashion, so that they could not be scaled with the higher-Re cases.
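As a rough illustration of how wavelet localisation in both time and scale helps to pick out intermittent bursts, the sketch below (Python/NumPy only, with a synthetic signal rather than GOY shell-model output) convolves a signal with Ricker (Mexican-hat) wavelets at several scales and flags coefficients that exceed a simple threshold.

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet with scale parameter a."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt_ricker(signal, scales):
    """Continuous wavelet transform by direct convolution (illustrative only)."""
    out = np.empty((len(scales), len(signal)))
    for k, a in enumerate(scales):
        w = ricker(min(10 * int(a), len(signal)), a)
        out[k] = np.convolve(signal, w, mode="same")
    return out

# Synthetic 'calm' signal interrupted by two localised bursts.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2048)
signal = 0.1 * rng.standard_normal(t.size)
signal[500:510] += 3.0
signal[1500:1503] += 5.0

coeffs = cwt_ricker(signal, scales=[2, 4, 8, 16])
burst_mask = np.abs(coeffs) > 5 * np.abs(coeffs).std()   # crude burst detector
print("burst locations (finest scale):", np.flatnonzero(burst_mask[0])[:10])
```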
Abstract:
The goal of this work was to design and implement production optimization for a combined heat and power plant. The optimization criterion is the profitability of production. The aim was to create an optimization model that takes into account, in particular, changes in the district heat consumption forecast and fluctuations in the electricity spot price. From the production point of view, the most essential criterion is satisfying the district heating load, estimated on the basis of the consumption forecast, as efficiently and economically as possible. The most significant criteria for electricity production turned out to be the predictability of electricity production and the maximization of production within the framework set by the electricity spot price. The optimization program is not intended to be connected directly to the power plant's operation control system; instead, it is intended to be a separate tool for the operation planner. The actual operation planning is often influenced by more diverse planning criteria than production profitability alone. The weightings of these different criteria are not handled by the program; they are decided by the operation planner. The result is an optimization program that calculates the total revenues of the selected production alternatives on the basis of different district heat consumption forecasts and electricity spot price forecasts.
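As a schematic illustration of the kind of calculation such a tool performs, the sketch below (Python, with hypothetical plant parameters, heat forecasts, and spot prices, not the actual plant model) evaluates the hourly margin of a simple back-pressure CHP unit that follows the forecast district heat load and sells the resulting electricity at the forecast spot price.

```python
# Hypothetical figures for illustration only.
heat_forecast = [120.0, 110.0, 95.0, 130.0]   # MW of district heat per hour
spot_forecast = [35.0, 30.0, 28.0, 45.0]      # EUR/MWh electricity
power_to_heat = 0.55                          # back-pressure ratio (assumed)
fuel_price = 18.0                             # EUR/MWh of fuel (assumed)
total_efficiency = 0.88                       # (heat + power) / fuel (assumed)

total_margin = 0.0
for heat, price in zip(heat_forecast, spot_forecast):
    power = power_to_heat * heat                 # MWh of electricity produced
    fuel = (heat + power) / total_efficiency     # MWh of fuel burned
    margin = power * price - fuel * fuel_price   # heat revenue omitted for brevity
    total_margin += margin
    print(f"heat {heat:5.1f} MW, power {power:5.1f} MW, hourly margin {margin:8.1f} EUR")

print(f"total margin over the horizon: {total_margin:.1f} EUR")
```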
Abstract:
One of the model organisms most widely used in genetic experimentation is Drosophila melanogaster, since its ease of genetic manipulation and its simplicity make it possible to study biological processes with multiple applications in different fields, such as embryonic development and morphogenesis. Morphogenesis is one of the most important events during embryonic development; it allows the formation of the different tissues and organs and depends on gene expression and on the activation and coordination of different signalling pathways. Understanding how these processes are coordinated is fundamental to understanding how an organ is formed. The main objective of this Bachelor's Final Project is therefore to identify new genes involved in the formation of the tracheal system (our model organ) by means of a functional mini-screen of genes expressed in the trachea, as well as to generate tools for studying the FGF/Bnl signalling pathway during tracheal system remodelling using the knock-in technique. To do this, candidate genes expressed in the trachea at the larval stage were selected with the support of the Drosophila melanogaster Genes and Genomes database (modENCODE Tissue Expression Data). Once identified, their possible function in tracheal development was studied by silencing them with the UAS-Gal4 system. We found that Vein (CG10491), CG17098, No Ocelli (CG4491) and Peptidase (CG4017) show various phenotypes affecting the formation of the tracheoblasts. We also found that Vein, a ligand of the EGF pathway, is required for the proliferation and survival of the tracheal cells of the air sac. Finally, the generation of a knock-in in the branchless (bnl) gene was initiated. To this end, the 5' and 3' regions of exon 2 of the bnl gene were amplified and their targeted cloning into the destination vector pTV-Cherry was started. This technique will generate tools that will make it possible to understand the function of the bnl gene during tracheal system remodelling.
Abstract:
Particulate nanostructures are increasingly used for analytical purposes. Such particles are often generated by chemical synthesis from non-renewable raw materials. Generating uniform nanoscale particles is challenging, and particle surfaces must be modified to make the particles biocompatible and water-soluble. Usually nanoparticles are functionalized with binding molecules (e.g., antibodies or their fragments) and, if needed, a label substance. Overall, producing nanoparticles for use in bioaffinity assays is a multistep process requiring several manufacturing and purification steps. This study describes a biological method of generating functionalized protein-based nanoparticles with specific binding activity on the particle surface and label activity inside the particles. Traditional chemical bioconjugation of the particle and the specific binding molecules is replaced by genetic fusion of the binding molecule gene and the particle backbone gene. The particle shell, together with the binding moieties, is synthesized from generic raw materials by bacteria, and fermentation is combined with a simple purification method based on inclusion bodies. The label activity is introduced during purification. The process results in particles that are ready to use as reagents in bioaffinity assays. Apoferritin was used as the particle body, and the system was demonstrated using three different binding moieties: a small protein, a peptide, and a single-chain Fv antibody fragment, which represents a complex protein including a disulfide bridge. When needed, Eu3+ was used as the label substance. The results showed that the production system yielded pure protein preparations, and the particles were of homogeneous size when visualized with transmission electron microscopy. The passively introduced label was stably associated with the particles, and the binding molecules genetically fused to the particle specifically bound their target molecules. The functionality of the particles in bioaffinity assays was successfully demonstrated with two types of assays: as labels and in a particle-enhanced agglutination assay. This biological production procedure has many advantages that make the process especially suited for applications with frequent and recurring requirements for homogeneous functional particles. The production process of ready, functional and water-soluble particles follows the principles of "green chemistry" and is upscalable, fast and cost-effective.
Abstract:
This thesis examined the current state of product discontinuation activities and the use of a product lifecycle management process model in a large telecom operator company. The goal of the work was to investigate which factors slow down discontinuations and how this slowness affects the costs of phase-outs. A further aim was to find out how the product lifecycle process model is used in the case company, particularly with respect to product discontinuation, and how its use can be beneficial. To answer these questions, ten product managers in the business customer and production units who had participated in product discontinuations were interviewed, product information was obtained from several experts in the company, and theoretical background on, among other things, reference models and product discontinuation strategies was gathered from the literature. Reasons for slow product discontinuations were listed. It was concluded that many of the problems could be addressed by more efficient product and customer data management, identification of the products to be discontinued, closer monitoring of mature products, and additional training for the sales force. For some products, a longer discontinuation period can increase the costs related to the discontinuation, but this is not the case for all products. It was also found that some old products cannot be fully replaced by new products because of the risks related to the new products. The results further showed that the process model is not used in all product discontinuation cases. The reasons for this need to be clarified in further studies so that the model can be improved.
Abstract:
Diabetes is a rapidly increasing worldwide problem, characterised by defective metabolism of glucose that causes long-term dysfunction and failure of various organs. The most common complication of diabetes is diabetic retinopathy (DR), which is one of the primary causes of blindness and visual impairment in adults. The rapid increase of diabetes pushes the limits of current DR screening capabilities, for which digital imaging of the eye fundus (retinal imaging) and automatic or semi-automatic image analysis algorithms provide a potential solution. In this work, the use of colour in the detection of diabetic retinopathy is statistically studied using a supervised algorithm based on one-class classification and Gaussian mixture model estimation. The presented algorithm distinguishes a certain diabetic lesion type from all other possible objects in eye fundus images by estimating only the probability density function of that lesion type. For training and ground truth estimation, the algorithm combines manual annotations of several experts, for which the best practices were experimentally selected. The use of colour in the detection of diabetic retinopathy was quantitatively evaluated by assessing the algorithm's performance in experiments on colour space selection, illuminance and colour correction, and background class information. Another contribution of this work is a benchmarking framework for eye fundus image analysis algorithms, needed for the development of automatic DR detection algorithms. The benchmarking framework provides guidelines on how to construct a benchmarking database that comprises true patient images, ground truth, and an evaluation protocol. The evaluation is based on standard receiver operating characteristic analysis and follows medical decision-making practice, providing protocols for image- and pixel-based evaluations. During the work, two public medical image databases with ground truth were published: DIARETDB0 and DIARETDB1. The framework, the DR databases, and the final algorithm are made public on the web to set baseline results for automatic detection of diabetic retinopathy. Although deviating from the general context of the thesis, a simple and effective optic disc localisation method is also presented; the optic disc localisation is discussed since normal eye fundus structures are fundamental in the characterisation of DR.
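A minimal sketch of the one-class classification idea described above, using scikit-learn's GaussianMixture on made-up RGB pixel values (the actual algorithm, colour spaces, and expert-annotation fusion in the thesis are more involved): a mixture model is fitted to pixels of the lesion class only, and new pixels are accepted or rejected by thresholding their log-density.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Hypothetical training pixels (RGB in [0, 1]) annotated as a given lesion type.
lesion_pixels = rng.normal(loc=[0.55, 0.25, 0.20], scale=0.04, size=(500, 3))

# Estimate the lesion-class density with a Gaussian mixture (one-class setup:
# no background pixels are used for training).
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(lesion_pixels)

# Choose a threshold from the training data, e.g. the 5th percentile of log-density.
threshold = np.percentile(gmm.score_samples(lesion_pixels), 5)

# Classify new pixels: above the threshold -> looks like the lesion type.
new_pixels = np.array([[0.56, 0.26, 0.21],   # plausible lesion colour
                       [0.10, 0.60, 0.30]])  # clearly different colour
is_lesion = gmm.score_samples(new_pixels) >= threshold
print(is_lesion)   # expected: [ True False ]
```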
Abstract:
Mathematical models often contain parameters that need to be calibrated from measured data. The emergence of efficient Markov Chain Monte Carlo (MCMC) methods has made the Bayesian approach a standard tool in quantifying the uncertainty in the parameters. With MCMC, the parameter estimation problem can be solved in a fully statistical manner, and the whole distribution of the parameters can be explored, instead of obtaining point estimates and using, e.g., Gaussian approximations. In this thesis, MCMC methods are applied to parameter estimation problems in chemical reaction engineering, population ecology, and climate modeling. Motivated by the climate model experiments, the methods are developed further to make them more suitable for problems where the model is computationally intensive. After the parameters are estimated, one can start to use the model for various tasks. Two such tasks are studied in this thesis: optimal design of experiments, where the task is to design the next measurements so that the parameter uncertainty is minimized, and model-based optimization, where a model-based quantity, such as the product yield in a chemical reaction model, is optimized. In this thesis, novel ways to perform these tasks are developed, based on the output of MCMC parameter estimation. A separate topic is dynamical state estimation, where the task is to estimate the dynamically changing model state, instead of static parameters. For example, in numerical weather prediction, an estimate of the state of the atmosphere must constantly be updated based on the recently obtained measurements. In this thesis, a novel hybrid state estimation method is developed, which combines elements from deterministic and random sampling methods.
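A minimal random-walk Metropolis sketch of the Bayesian parameter estimation step described above (Python/NumPy, with a toy exponential-decay model and synthetic data; the thesis uses more elaborate adaptive MCMC methods and real models):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model y = exp(-k * t) observed with Gaussian noise; estimate k.
t = np.linspace(0, 5, 30)
k_true, sigma = 0.8, 0.05
y = np.exp(-k_true * t) + sigma * rng.standard_normal(t.size)

def log_post(k):
    """Log-posterior with a flat prior on k > 0 and known noise level."""
    if k <= 0:
        return -np.inf
    resid = y - np.exp(-k * t)
    return -0.5 * np.sum(resid**2) / sigma**2

chain = np.empty(5000)
k, lp = 1.0, log_post(1.0)
for i in range(chain.size):
    k_prop = k + 0.05 * rng.standard_normal()     # random-walk proposal
    lp_prop = log_post(k_prop)
    if np.log(rng.random()) < lp_prop - lp:       # Metropolis acceptance rule
        k, lp = k_prop, lp_prop
    chain[i] = k

burn = chain[1000:]                               # discard burn-in
print(f"posterior mean {burn.mean():.3f}, 95% interval "
      f"[{np.percentile(burn, 2.5):.3f}, {np.percentile(burn, 97.5):.3f}]")
```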
Abstract:
Traditional econometric approaches to modeling the dynamics of equity and commodity markets have made great progress in the past decades. However, they assume rationality among the economic agents and do not capture the dynamics that produce extreme events (black swans) arising from deviations from the rationality assumption. The purpose of this study is to simulate the dynamics of silver markets by using the novel computational market dynamics approach. To this end, daily closing prices of spot silver from 1 March 2000 to 1 March 2013 have been simulated with the Jabłonska-Capasso-Morale (JCM) model. The maximum likelihood approach has been employed to calibrate the model to the acquired data. Statistical analysis of the simulated series with respect to the actual one has been conducted to evaluate model performance. The model captures well the animal-spirits dynamics present in the data under evaluation.
Abstract:
Inflation targeting: the conventional analysis and an alternative model. This article has two aims. The first is to present a formal model of the monetary policy generally identified as "inflation targeting", in which the central bank's instrument of intervention is the short-run nominal interest rate. The second is to discuss and criticize the theoretical assumptions of the model, especially the concepts of "natural rate of interest" and of potential output embodied in the "augmented Phillips curve", and to present a more realistic approach to inflation targeting which does not rely on the hypotheses above and in which inflation targeting is based on the control of the real rate of interest.
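For readers unfamiliar with the conventional setup being criticised, one common textbook formalisation of inflation targeting (an IS curve, an expectations-augmented Phillips curve, and an interest rate rule built around a natural rate of interest r* and potential output y*) looks roughly as follows; this is a generic illustration, not necessarily the exact specification used in the article.

```latex
% Generic three-equation formalisation of inflation targeting
% (illustrative textbook version, not necessarily the article's model).
\begin{align}
  y_t - y^* &= -\alpha\,(i_t - \pi^e_t - r^*) + \varepsilon_t
      && \text{(IS curve)} \\
  \pi_t &= \pi^e_t + \beta\,(y_t - y^*) + u_t
      && \text{(augmented Phillips curve)} \\
  i_t &= r^* + \pi^T + \theta_\pi(\pi_t - \pi^T) + \theta_y(y_t - y^*)
      && \text{(interest rate rule)}
\end{align}
```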
Abstract:
Hydrolysis catalysed by concentrated acid makes it possible to produce valuable sugars from lignocellulose. The acid acting as the catalyst can be reused in the hydrolysis if it can be separated from the sugars without neutralization. The aim of this bachelor's thesis was to determine whether the acid retardation technique is suitable for the fractionation of concentrated-acid hydrolysates. The acid retardation technique was compared with the electrolyte exclusion technique. The literature part of the work covered the theory of acid retardation and electrolyte exclusion and reviewed studies related to both techniques. In the experimental part, batch chromatography experiments were carried out using a synthetic feed solution containing sulphuric acid, acetic acid, glucose, and xylose. Four different anion exchange resins and one cation exchange resin were used as separation materials. Based on the experiments, the effects of the anion exchange resin type and the column loading on the separation result achievable with the acid retardation technique were investigated, and electrolyte exclusion was compared with acid retardation. According to the results, sulphuric acid was diluted with the acid retardation technique by up to a factor of 20 compared with the solution fed into the chromatography column, regardless of the column loading and the anion exchange resin. Because of this dilution of the sulphuric acid, acid retardation was not suitable for the fractionation of lignocellulose-based concentrated-acid hydrolysates. With the electrolyte exclusion technique, the dilution of sulphuric acid was significantly smaller, and electrolyte exclusion was therefore found to be better suited than acid retardation for the fractionation of lignocellulose-based concentrated-acid hydrolysates.
Abstract:
The purpose of this thesis is to extend bootstrap theory to panel data models. Panel data are obtained by observing several statistical units over several time periods. Their double dimension, individual and temporal, makes it possible to control for unobservable heterogeneity between individuals and between time periods, and therefore to carry out richer studies than with time series or cross-sectional data. The advantage of the bootstrap is that it yields more precise inference than classical asymptotic theory, or makes inference possible at all in the presence of a nuisance parameter. The method consists in drawing random samples that resemble the analysis sample as closely as possible. The statistical object of interest is estimated on each of these random samples, and the set of estimated values is used to perform inference. The literature contains some applications of the bootstrap to panel data without rigorous theoretical justification or under strong assumptions. This thesis proposes a bootstrap method better suited to panel data. The three chapters analyse its validity and its application. The first chapter postulates a simple model with a single parameter and addresses the theoretical properties of the estimator of the mean. We show that the double resampling we propose, which accounts for both the individual dimension and the temporal dimension, is valid with these models. Resampling only in the individual dimension is not valid in the presence of temporal heterogeneity. Resampling only in the temporal dimension is not valid in the presence of individual heterogeneity. The second chapter extends the first to the linear panel regression model. Three types of regressors are considered: individual characteristics, temporal characteristics, and regressors that vary over both time and individuals. Using a two-way error components model, the ordinary least squares estimator, and the residual bootstrap, we show that resampling in the individual dimension alone is valid for inference on the coefficients associated with regressors that vary only across individuals. Resampling in the temporal dimension is valid only for the sub-vector of parameters associated with regressors that vary only over time. Double resampling, for its part, is valid for inference on the whole parameter vector. The third chapter re-examines the difference-in-differences estimation exercise of Bertrand, Duflo and Mullainathan (2004). This estimator is commonly used in the literature to evaluate the impact of public policies. The empirical exercise uses panel data from the Current Population Survey on women's wages in the 50 states of the United States from 1979 to 1999. Pseudo-intervention (placebo) policy variables are generated at the state level, and the tests are expected to conclude that these placebo policies have no effect on women's wages. Bertrand, Duflo and Mullainathan (2004) show that failing to account for heterogeneity and temporal dependence leads to substantial size distortions of the tests when evaluating the impact of public policies using panel data.
One of the recommended solutions is to use the bootstrap method. The double resampling method developed in this thesis corrects the test size problem and therefore makes it possible to evaluate the impact of public policies correctly.
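A schematic version of the double resampling scheme studied in the thesis (Python/NumPy, on a toy balanced panel; the actual validity results concern specific error-component models and statistics): individuals and time periods are independently resampled with replacement, and the statistic of interest is recomputed on each crossed resample.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy balanced panel: N individuals observed over T periods, with
# individual effects, time effects, and idiosyncratic noise.
N, T = 50, 20
alpha = rng.standard_normal(N)[:, None]     # individual heterogeneity
gamma = rng.standard_normal(T)[None, :]     # temporal heterogeneity
y = 1.0 + alpha + gamma + rng.standard_normal((N, T))   # N x T panel

def double_resample_mean(y, n_boot=2000):
    """Bootstrap the panel mean by resampling individuals AND time periods."""
    N, T = y.shape
    stats = np.empty(n_boot)
    for b in range(n_boot):
        i_idx = rng.integers(0, N, size=N)   # draw individuals with replacement
        t_idx = rng.integers(0, T, size=T)   # draw time periods with replacement
        stats[b] = y[np.ix_(i_idx, t_idx)].mean()
    return stats

boot = double_resample_mean(y)
print(f"sample mean {y.mean():.3f}, bootstrap std. error {boot.std(ddof=1):.3f}")
```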
Abstract:
This thesis is entitled "Modelling and Analysis of Recurrent Event Data with Multiple Causes". Survival data is a term used for data that measure the time to occurrence of an event. In survival studies, the time to occurrence of an event is generally referred to as the lifetime. Recurrent event data are commonly encountered in longitudinal studies when individuals are followed to observe the repeated occurrences of certain events. In many practical situations, individuals under study are exposed to failure from more than one cause, and the eventual failure can be attributed to exactly one of these causes. The proposed model is useful in real-life situations for studying the effect of covariates on recurrences of certain events due to different causes. In Chapter 3, an additive hazards model for gap time distributions of recurrent event data with multiple causes was introduced, and the parameter estimation and asymptotic properties were discussed. In Chapter 4, a shared frailty model for the analysis of bivariate competing risks data was presented, and estimation procedures for the shared gamma frailty model, with and without covariates, using the EM algorithm were discussed. In Chapter 6, two nonparametric estimators for the bivariate survivor function of paired recurrent event data were developed. The asymptotic properties of the estimators were studied, the proposed estimators were applied to a real-life data set, and simulation studies were carried out to assess the efficiency of the proposed estimators.
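To make the shared frailty idea concrete, the sketch below (Python/NumPy, with constant cause-specific baseline hazards and a frailty variance chosen only for illustration) simulates bivariate competing-risks data in which a common gamma-distributed frailty multiplies both members' cause-specific hazards, inducing dependence between the paired failure times.

```python
import numpy as np

rng = np.random.default_rng(4)

n_pairs = 10000
theta = 0.5                      # frailty variance (assumed)
lam = np.array([0.03, 0.01])     # constant baseline hazards for causes 1 and 2 (assumed)

# Shared gamma frailty with mean 1 and variance theta, common to both members of a pair.
z = rng.gamma(shape=1 / theta, scale=theta, size=n_pairs)

def simulate_member(z):
    """Failure time and cause for one member, conditional on the pair's frailty."""
    rate = z * lam.sum()                      # total hazard, multiplied by the frailty
    time = rng.exponential(1.0 / rate)        # exponential lifetime under constant hazard
    cause = 1 + (rng.random(z.size) > lam[0] / lam.sum()).astype(int)
    return time, cause

t1, c1 = simulate_member(z)
t2, c2 = simulate_member(z)

# Dependence induced by the shared frailty shows up as correlated failure times.
print(f"correlation of paired failure times: {np.corrcoef(t1, t2)[0, 1]:.2f}")
print(f"cause-1 proportion: {(c1 == 1).mean():.2f} (expected {lam[0] / lam.sum():.2f})")
```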