992 results for random loss


Relevance: 100.00%

Abstract:

Duplications and rearrangements of coding genes are major themes in the evolution of mitochondrial genomes, bearing important consequences for the function of mitochondria and the fitness of organisms. Yu et al. (BMC Genomics 2008, 9: 477) reported the complete mt genome sequence of the oyster Crassostrea hongkongensis (16,475 bp) and found that a DNA segment containing four tRNA genes (trnK(1), trnC, trnQ(1) and trnN), a duplicated rRNA gene (rrnS) and a split rRNA gene (rrnL5') was absent compared with the genomes of two other Crassostrea species. It was suggested that the absence was a novel case of "tandem duplication-random loss" with evolutionary significance. We independently sequenced the complete mt genomes of three C. hongkongensis individuals, all of which were 18,622 bp and contained the segment that was missing in Yu et al.'s sequence. Further, we designed primers, verified sequences and demonstrated that the sequence loss in Yu et al.'s study was an artifact caused by placing primers in a duplicated region. The duplication and split of ribosomal RNA genes are unique to Crassostrea oysters and are not lost in C. hongkongensis. Our study highlights the need for caution when amplifying and sequencing through duplicated regions of the genome.

Relevance: 70.00%

Abstract:

Motivated by new and innovative rental business models, this paper develops a novel discrete-time model of a rental operation with random loss of inventory due to customer use. The inventory level is chosen before the start of a finite rental season, and customers not immediately served are lost. Our analysis framework uses stochastic comparisons of sample paths to derive structural results that hold with good generality for demands, rental durations, and rental unit lifetimes. Considering different "recirculation" rules, i.e., which rental unit to choose to meet each demand, we prove the concavity of the expected profit function and identify the optimal recirculation rule. A numerical study clarifies when considering rental unit loss and recirculation rules matters most for the inventory decision: accounting for rental unit loss can increase the expected profit by 7% for a single season and becomes even more important as the time horizon lengthens. We also observe that the optimal inventory level is non-monotonic in the loss probability. Finally, we show that choosing the optimal recirculation rule over another simple policy allows more rental units to be profitably added, and the profit-maximizing service level increases by up to 6 percentage points.
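The discrete-time dynamics described above can be made concrete with a toy simulation. Everything below is an illustrative assumption rather than the paper's actual model (a fixed rental duration, uniform per-period demand, a single loss probability applied at return), but it shows the mechanics: inventory fixed before the season, lost sales, and random loss of units through use.

```python
import random

def simulate_season(n_units, T, rental_len, loss_prob, max_demand, seed=0):
    """One rental season, in the spirit of the model above: inventory is
    fixed before the season, unserved customers are lost, and each unit
    is lost with probability loss_prob every time it is used."""
    rng = random.Random(seed)
    idle = n_units
    due = [0] * (T + rental_len)        # units scheduled to return each period
    served = lost_sales = 0
    for t in range(T):
        # returning units survive their rental with probability 1 - loss_prob
        idle += sum(1 for _ in range(due[t]) if rng.random() >= loss_prob)
        demand = rng.randint(0, max_demand)
        k = min(idle, demand)           # serve what we can, lose the rest
        served += k
        lost_sales += demand - k
        idle -= k
        due[t + rental_len] += k        # these units come back later
    return served, lost_sales

served, lost = simulate_season(n_units=5, T=50, rental_len=3,
                               loss_prob=0.05, max_demand=3)
```

A recirculation rule would additionally decide which idle unit serves each customer (e.g., most-used first vs. least-used first); here all idle units are interchangeable, so that decision is not modeled.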

Relevance: 60.00%

Abstract:

We present entire sequences of two hymenopteran mitochondrial genomes and the major portion of three others. We combined these data with nine previously sequenced hymenopteran mitochondrial genomes. This allowed us to infer and analyze the evolution of the 67 mitochondrial gene rearrangements so far found in this order. All of these rearrangements involve tRNA genes, and four also involve larger (protein-coding or ribosomal RNA) genes. We find that the vast majority of mitochondrial gene rearrangements are independently derived. A maximum of four of these rearrangements represent shared, derived organizations, whereas three are convergently derived. The remaining mitochondrial gene rearrangements represent new mitochondrial genome organizations. These data are consistent with the proposal that there are an enormous number of alternative mitochondrial genome organizations possible and that mitochondrial genome organization is, for the most part, selectively neutral. Nevertheless, some mitochondrial genes appear less mobile than others. Genes close to the noncoding region are generally more mobile, but only marginally so. Some mitochondrial genes rearrange in a pattern consistent with the duplication/random loss model, but more mitochondrial genes move in a pattern inconsistent with this model. An increased rate of mitochondrial gene rearrangement is not tightly associated with the evolution of parasitism. Although parasitic lineages tend to have more mitochondrial gene rearrangements than nonparasitic lineages, there are exceptions (e.g., Orussus and Schlettererius). It is likely that only a small proportion of the total number of mitochondrial gene rearrangements that have occurred during the evolution of the Hymenoptera have been sampled in the present study.

Relevance: 60.00%

Abstract:

We have constructed plasmids to be used for in vitro signature-tagged mutagenesis (STM) of Campylobacter jejuni and used these to generate STM libraries in three different strains. Statistical analysis of the transposon insertion sites in the C. jejuni NCTC 11168 chromosome and the plasmids of strain 81-176 indicated that their distribution was not uniform. Visual inspection of the distribution suggested that deviation from uniformity was not due to preferential integration of the transposon into a limited number of hot spots but rather that there was a bias towards insertions around the origin. We screened pools of mutants from the STM libraries for their ability to colonize the ceca of 2-week-old chickens harboring a standardized gut flora. We observed high-frequency random loss of colonization-proficient mutants. When cohoused birds were individually inoculated with different tagged mutants, random loss of colonization-proficient mutants was similarly observed, as was extensive bird-to-bird transmission of mutants. This indicates that the nature of campylobacter colonization in chickens is complex and dynamic, and we hypothesize that bottlenecks in the colonization process and between-bird transmission account for these observations.

Relevance: 60.00%

Abstract:

This doctoral thesis consists of three chapters dealing with large-scale portfolio choice and risk measurement. The first chapter addresses the problem of estimation error in large portfolios within the mean-variance framework. The second chapter explores the importance of currency risk for portfolios of domestic assets and studies the links between the stability of large portfolio weights and currency risk. Finally, under the assumption that the decision maker is pessimistic, the third chapter derives the risk premium, a measure of pessimism, and proposes a methodology for estimating the derived measures. The first chapter improves optimal portfolio choice within Markowitz's (1952) mean-variance framework. This is motivated by the very disappointing results obtained when the mean and variance are replaced by their sample estimates, a problem that is amplified when the number of assets is large and the sample covariance matrix is singular or nearly so. In this chapter, we examine four regularization techniques for stabilizing the inverse of the covariance matrix: ridge, spectral cut-off, Landweber-Fridman, and LARS Lasso. Each method involves a tuning parameter that must be selected. The main contribution of this part is a purely data-driven method for selecting the regularization parameter optimally, i.e., so as to minimize the expected utility loss. Specifically, we derive a cross-validation criterion that takes the same form for all four regularization methods. The resulting regularized rules are then compared with the plug-in rule that uses the data directly and with the naive 1/N strategy, in terms of expected utility loss and Sharpe ratio.
Performance is measured both in-sample and out-of-sample for various sample sizes and numbers of assets. The simulations and the empirical illustration show that regularizing the covariance matrix significantly improves the data-based Markowitz rule and outperforms the naive portfolio, especially when the estimation error problem is severe. In the second chapter, we investigate the extent to which optimal and stable portfolios of domestic assets can reduce or eliminate currency risk. To this end, we use monthly returns on 48 US industries over the period 1976-2008. To address the instability inherent in large portfolios, we adopt the spectral cut-off regularization method. This yields a family of optimal and stable portfolios, allowing investors to choose different percentages of principal components (or degrees of stability). Our empirical tests are based on an international asset pricing model (IAPM) in which currency risk is decomposed into two factors, representing the currencies of industrialized countries on the one hand and those of emerging countries on the other. Our results indicate that currency risk is priced and time-varying for stable minimum-risk portfolios. Moreover, these strategies significantly reduce exposure to exchange-rate risk, while the contribution of the currency risk premium remains unchanged on average. Optimal portfolio weights are an alternative to market-capitalization weights; this chapter therefore complements the literature showing that the currency risk premium matters at both the industry and the country level in most countries.
In the last chapter, we derive a risk-premium measure for rank-dependent preferences and propose a measure of the degree of pessimism, given a distortion function. The measures introduced generalize the risk premium derived under expected utility theory, which is frequently violated in both experimental and real-world settings. Within the broad family of preferences considered, particular attention is paid to CVaR (conditional value-at-risk). This risk measure is increasingly used in portfolio construction and is recommended as a complement to the VaR (value-at-risk) used since 1996 by the Basel Committee. In addition, we provide the statistical framework needed for inference on the proposed measures. Finally, the properties of the proposed estimators are assessed through a Monte Carlo study and an empirical illustration using daily US stock market returns over the period 2000-2011.
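Of the four regularization schemes named in the first chapter (ridge, spectral cut-off, Landweber-Fridman, LARS Lasso), ridge is the simplest to illustrate. The sketch below stabilizes the inverse sample covariance with a fixed penalty `lam` on synthetic data; the chapter's actual contribution, a cross-validated data-driven choice of this parameter, is not reproduced here.

```python
import numpy as np

def ridge_mv_weights(returns, gamma, lam):
    """Mean-variance weights w = (1/gamma) * (Sigma + lam*I)^{-1} mu.
    The ridge term lam*I stabilizes the inverse of a near-singular
    sample covariance (many assets, few observations)."""
    mu = returns.mean(axis=0)
    sigma = np.cov(returns, rowvar=False)
    n = sigma.shape[0]
    return np.linalg.solve(sigma + lam * np.eye(n), mu) / gamma

rng = np.random.default_rng(0)
T, N = 60, 50                    # barely more observations than assets
R = rng.normal(0.01, 0.05, size=(T, N))   # synthetic return panel
w = ridge_mv_weights(R, gamma=5.0, lam=0.1)
```

As lam grows the solution shrinks towards (1/gamma) * mu / lam, so the penalty trades estimation variance against bias, which is exactly why its value needs to be chosen from the data.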

Relevance: 60.00%

Abstract:

The complete mitochondrial DNA sequence was determined for the Australian freshwater crayfish Cherax destructor (Crustacea: Decapoda: Parastacidae). The 15,895-bp genome is circular with the same gene composition as that found in other metazoans. However, we report a novel gene arrangement with respect to the putative arthropod ancestral gene order and all other arthropod mitochondrial genomes sequenced to date. It is apparent that 11 genes have been translocated (ND1, ND4, ND4L, Cyt b, srRNA, and tRNAs Ser(UGA), Leu(CUN), Ile, Cys, Pro, and Val), two of which have also undergone inversions (tRNAs Pro and Val). The ‘duplication/random loss’ mechanism is a plausible model for the observed translocations, while ‘intramitochondrial recombination’ may account for the gene inversions. In addition, the arrangement of rRNA genes is incompatible with current mitochondrial transcription models, and suggests that a different transcription mechanism may operate in C. destructor.
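Operationally, the duplication/random loss model says that a block of genes is tandemly duplicated and one redundant copy of each gene is then deleted at random, which can leave the survivors in a new order. A toy sketch (gene names and block boundaries are illustrative, not taken from the C. destructor genome):

```python
import random

def duplication_random_loss(order, start, length, seed=0):
    """Tandemly duplicate order[start:start+length], then delete one of
    the two copies of each duplicated gene uniformly at random.
    Assumes gene names in `order` are unique."""
    rng = random.Random(seed)
    block = order[start:start + length]
    # tandem duplication: the block is repeated immediately after itself
    dup = order[:start + length] + block + order[start + length:]
    keep_second = {g: rng.random() < 0.5 for g in block}
    out, seen_once = [], set()
    for g in dup:
        if g not in block:
            out.append(g)               # untouched genes survive as-is
        elif g not in seen_once:
            seen_once.add(g)
            if not keep_second[g]:
                out.append(g)           # keep the first copy
        elif keep_second[g]:
            out.append(g)               # keep the second copy

    return out

genes = ["ND1", "trnP", "trnV", "srRNA", "ND4", "CytB"]
new_order = duplication_random_loss(genes, start=1, length=3)
```

Because first and second copies are kept independently per gene, the surviving genes can end up interleaved in a new arrangement while the gene content is preserved, which is the signature of the model.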

Relevance: 60.00%

Abstract:

The complete mitochondrial DNA sequence was determined for the Australian giant crab Pseudocarcinus gigas (Crustacea: Decapoda: Menippidae) and the giant freshwater shrimp Macrobrachium rosenbergii (Crustacea: Decapoda: Palaemonidae). The Pse. gigas and M. rosenbergii mitochondrial genomes are circular molecules, 15,515 and 15,772 bp in length, respectively, and have the same gene composition as found in other metazoans. The gene arrangement of M. rosenbergii corresponds with that of the presumed ancestral arthropod gene order, represented by Limulus polyphemus, except for the position of the tRNALeu(UUR) gene. The Pse. gigas gene arrangement corresponds exactly with that reported for another brachyuran, Portunus trituberculatus, and differs from the M. rosenbergii gene order only in the position of the tRNAHis gene. Given the relative positions of intergenic noncoding nucleotides, the "duplication/random loss" model appears to be the most plausible mechanism for the translocation of this gene. These data represent the first caridean and only the second brachyuran complete mtDNA sequences, and provide a source of information that will facilitate surveys of intraspecific variation within these commercially important decapod species.

Relevance: 60.00%

Abstract:

Several cases have been described in the literature where genetic polymorphism appears to be shared between a pair of species. Here we examine the distribution of times to random loss of shared polymorphism in the context of the neutral Wright–Fisher model. Order statistics are used to obtain the distribution of times to loss of a shared polymorphism based on Kimura’s solution to the diffusion approximation of the Wright–Fisher model. In a single species, the expected absorption time for a neutral allele having an initial allele frequency of ½ is 2.77 N generations. If two species initially share a polymorphism, that shared polymorphism is lost as soon as either of the two species undergoes fixation. The loss of a shared polymorphism thus occurs sooner than loss of polymorphism in a single species and has an expected time of 1.7 N generations. Molecular sequences of genes with shared polymorphism may be characterized by the count of the number of sites that segregate in both species for the same nucleotides (or amino acids). The distribution of the expected numbers of these shared polymorphic sites is also obtained. Shared polymorphism appears to be more likely at genetic loci that have an unusually large number of segregating alleles, and the neutral coalescent proves to be very useful in determining the probability of shared allelic lineages expected by chance. These results are related to examples of shared polymorphism in the literature.
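The 2.77 N and 1.7 N figures can be checked by direct simulation of the neutral Wright–Fisher model. The sketch below resamples 2N gene copies binomially each generation (parameters are illustrative); a shared polymorphism persists only until the first of two independent populations reaches fixation or loss, i.e., the minimum of two absorption times.

```python
import numpy as np

def absorption_time(n_copies, p0, rng):
    """Generations until a neutral allele starting at frequency p0 is
    fixed or lost in a Wright-Fisher population of n_copies gene copies."""
    i = int(p0 * n_copies)
    t = 0
    while 0 < i < n_copies:
        i = rng.binomial(n_copies, i / n_copies)   # one generation of drift
        t += 1
    return t

rng = np.random.default_rng(42)
N, reps = 100, 1000                     # diploid N, so 2N gene copies
single = [absorption_time(2 * N, 0.5, rng) for _ in range(reps)]
# a shared polymorphism is lost as soon as EITHER species fixes
shared = [min(absorption_time(2 * N, 0.5, rng),
              absorption_time(2 * N, 0.5, rng)) for _ in range(reps)]
```

With these parameters the single-species mean should come out near 2.77 N = 277 generations and the shared-polymorphism mean near 1.7 N = 170, up to Monte Carlo error.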

Relevance: 30.00%

Abstract:

Robust facial expression recognition (FER) under occluded face conditions is challenging. It requires robust feature extraction algorithms and investigation of how different types of occlusion affect recognition performance. Previous FER studies in this area have been limited: they have focused on recovery strategies for the loss of local texture information, tested only a few types of occlusion, and predominantly used a matched train-test strategy. This paper proposes a robust approach that employs a Monte Carlo algorithm to extract a set of Gabor-based part-face templates from gallery images and converts these templates into template match distance features. The resulting feature vectors are robust to occlusion because occluded parts are covered by some, but not all, of the random templates. The method is evaluated using facial images with occluded regions around the eyes and the mouth, randomly placed occlusion patches of different sizes, and near-realistic occlusion of the eyes with clear and solid glasses. Both matched and mismatched train-test strategies are adopted to analyze the effects of such occlusion. Overall recognition performance and the performance for each facial expression are investigated. Experimental results on the Cohn-Kanade and JAFFE databases demonstrate the high robustness and fast processing speed of our approach, and provide useful insight into the effects of occlusion on FER. The parameter sensitivity results demonstrate a certain level of robustness to changes in the orientation and scale of the Gabor filters, the size of the templates, and the occlusion ratio. Performance comparisons with previous approaches show that the proposed method is more robust to occlusion, with smaller accuracy reductions under occlusion of the eyes or mouth.

Relevance: 30.00%

Abstract:

Recent advances suggest that encoding images through Symmetric Positive Definite (SPD) matrices and then interpreting such matrices as points on Riemannian manifolds can lead to increased classification performance. Taking into account manifold geometry is typically done via (1) embedding the manifolds in tangent spaces, or (2) embedding into Reproducing Kernel Hilbert Spaces (RKHS). While embedding into tangent spaces allows the use of existing Euclidean-based learning algorithms, manifold shape is only approximated which can cause loss of discriminatory information. The RKHS approach retains more of the manifold structure, but may require non-trivial effort to kernelise Euclidean-based learning algorithms. In contrast to the above approaches, in this paper we offer a novel solution that allows SPD matrices to be used with unmodified Euclidean-based learning algorithms, with the true manifold shape well-preserved. Specifically, we propose to project SPD matrices using a set of random projection hyperplanes over RKHS into a random projection space, which leads to representing each matrix as a vector of projection coefficients. Experiments on face recognition, person re-identification and texture classification show that the proposed approach outperforms several recent methods, such as Tensor Sparse Coding, Histogram Plus Epitome, Riemannian Locality Preserving Projection and Relational Divergence Classification.
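The general recipe (map each SPD matrix into a linear space, then represent it by its coefficients against random hyperplanes) can be sketched as follows. For simplicity this uses a log-Euclidean embedding in place of the paper's RKHS construction, so it is an illustrative simplification rather than the proposed method.

```python
import numpy as np

def spd_log_vec(M):
    """Log-Euclidean embedding: the matrix logarithm of an SPD matrix,
    computed via its eigendecomposition and flattened to a vector."""
    w, V = np.linalg.eigh(M)
    return (V * np.log(w) @ V.T).ravel()

def random_projection_features(mats, k, seed=0):
    """Represent each SPD matrix by k coefficients against random
    Gaussian hyperplanes drawn in the log-Euclidean space."""
    rng = np.random.default_rng(seed)
    X = np.stack([spd_log_vec(M) for M in mats])
    P = rng.normal(size=(X.shape[1], k)) / np.sqrt(k)
    return X @ P                      # one k-vector per input matrix

rng = np.random.default_rng(1)
A = rng.normal(size=(10, 5, 5))
spds = [a @ a.T + 5 * np.eye(5) for a in A]   # random SPD matrices
F = random_projection_features(spds, k=8)
```

The resulting fixed-length vectors can be fed to any unmodified Euclidean-based classifier, which is the practical point of the approach described above.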

Relevance: 30.00%

Abstract:

In this article we consider a finite queue whose arrivals are controlled by the random early detection (RED) algorithm, one of the most prominent congestion avoidance schemes in Internet routers. The aggregate arrival stream from the population of transmission control protocol sources is modeled locally as a stationary renewal or Markov-modulated Poisson process with a general packet length distribution. We study the exact dynamics of this queue, establish stability and rates of convergence to the stationary distribution, and obtain the packet loss probability and the waiting time distribution. We then extend these results to a two-traffic-class case in which each arrival stream is a renewal process. However, computing the performance indices for this system becomes computationally prohibitive. Thus, in the latter half of the article, we approximate the dynamics of the average queue length process asymptotically via an ordinary differential equation and estimate the error term via a diffusion approximation. We use these results to obtain approximate transient and stationary performance measures for the system. Finally, we provide computational examples showing the accuracy of these approximations.
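For readers unfamiliar with RED, its basic control law (as in Floyd and Jacobson's gateway algorithm, omitting the count-based correction between drops) is a piecewise-linear drop probability driven by an exponentially weighted moving average of the queue length:

```python
def red_drop_prob(avg, min_th, max_th, max_p):
    """Piecewise-linear RED drop/marking probability as a function of
    the averaged queue length: 0 below min_th, 1 at or above max_th,
    linear in between."""
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    return max_p * (avg - min_th) / (max_th - min_th)

def update_avg(avg, q, w=0.002):
    """EWMA of the instantaneous queue length q with weight w."""
    return (1 - w) * avg + w * q
```

It is this state-dependent thinning of the arrival stream that makes the queue dynamics studied above non-trivial even for renewal input.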

Relevance: 30.00%

Abstract:

We show that globally declining fisheries catch trends cannot be explained by random processes and are consistent with declining stock abundance trends. Future projections are inherently uncertain but may provide a benchmark against which to assess the effectiveness of conservation measures. Marine reserves and fisheries closures are among those measures and can be equally effective in tropical and temperate areas—but must be combined with catch-, effort-, and gear restrictions to meet global conservation objectives.

Relevance: 30.00%

Abstract:

The relationship between retention loss in single-crystal PbTiO3 ferroelectric thin films and leakage currents is demonstrated by piezoresponse and conductive atomic force microscopy measurements. It was found that polarization reversal in the absence of an electric field followed a stretched exponential behavior 1 - exp[-(t/k)^d] with exponent d > 1, which is distinct from a dispersive random-walk process with d < 1. The latter has been observed in polycrystalline films, for which retention loss was associated with grain boundaries. The leakage current exhibits power-law scaling at short length scales, which depends strongly on the applied electric field. Additional information on the microstructure, which helps explain the presence of the leakage currents, is provided by high-resolution transmission electron microscopy analysis.

Relevance: 30.00%

Abstract:

It is common practice to initiate supplemental feeding in newborns if body weight decreases by 7-10% in the first few days after birth (7-10% rule). Standard hospital procedure is to initiate intravenous therapy once a woman is admitted to give birth. However, little is known about the relationship between intrapartum intravenous therapy and the amount of weight loss in the newborn. The present research was undertaken in order to determine what factors contribute to weight loss in a newborn, and to examine the relationship between the practice of intravenous intrapartum therapy and the extent of weight loss post-birth. Using a cross-sectional design with a systematic random sample of 100 mother-baby dyads, we examined properties of delivery that have the potential to impact weight loss in the newborn, including method of delivery, parity, duration of labour, volume of intravenous therapy, feeding method, and birth attendant. This study indicated that the volume of intravenous therapy and method of delivery are significant predictors of weight loss in the newborn (R2=15.5, p<0.01). ROC curve analysis identified an intravenous volume cut-point of 1225 ml that would elicit a high measure of sensitivity (91.3%), and demonstrated significant Kappa agreement (p<0.01) with excess newborn weight loss. It was concluded that infusion of intravenous therapy and natural birth delivery are discriminant factors that influence excess weight loss in newborn infants. Acknowledgement of these factors should be considered in clinical practice.