Abstract:
In this project we have investigated new ways of modelling and analysing the human vasculature from medical images. The research was divided into two main areas: cerebral vasculature analysis and coronary artery modelling. Regarding cerebral vasculature analysis, we have studied cerebral aneurysms, the internal carotid artery and the Circle of Willis (CoW). Aneurysms are abnormal vessel enlargements that can rupture, causing severe cerebral damage or death. Understanding this pathology, together with its virtual treatment and image-based diagnosis and prognosis, requires identification and detailed measurement of the aneurysms. In this context, we have proposed two automatic aneurysm isolation methods to separate the abnormal part of the vessel from the healthy part, in order to homogenize and speed up the processing pipeline usually employed to study this pathology [Cardenes2011TMI, arrabide2011MedPhys]. The results obtained from both methods have also been compared and validated in [Cardenes2012MBEC]. A second important task here was the analysis of the internal carotid artery [Bogunovic2011Media] and the automatic labelling of the CoW [Bogunovic2011MICCAI, Bogunovic2012TMI]. The second area of research covers the study of coronary arteries, especially coronary bifurcations, because that is where the formation of atherosclerotic plaque is most common and where intervention is most challenging. We therefore proposed a novel modelling method based on Computed Tomography Angiography (CTA) images, combined with Conventional Coronary Angiography (CCA), to obtain realistic vascular models of coronary bifurcations, presented in [Cardenes2011MICCAI] and fully validated, including phantom experiments, in [Cardene2013MedPhys]. The realistic models obtained with this method are being used to simulate stenting procedures and to investigate hemodynamic variables in coronary bifurcations in the works submitted as [Morlachi2012, Chiastra2012].
Additionally, preliminary work on reconstructing the coronary tree from rotational angiography was published in [Cardenes2012ISBI].
Abstract:
Arising from either retrotransposition or genomic duplication of functional genes, pseudogenes are “genomic fossils” valuable for exploring the dynamics and evolution of genes and genomes. Pseudogene identification is an important problem in computational genomics, and is also critical for obtaining an accurate picture of a genome’s structure and function. However, no consensus computational scheme for defining and detecting pseudogenes has been developed thus far. As part of the ENCyclopedia Of DNA Elements (ENCODE) project, we have compared several distinct pseudogene annotation strategies and found that different approaches and parameters often resulted in rather distinct sets of pseudogenes. We subsequently developed a consensus approach for annotating pseudogenes (derived from protein coding genes) in the ENCODE regions, resulting in 201 pseudogenes, two-thirds of which originated from retrotransposition. A survey of orthologs for these pseudogenes in 28 vertebrate genomes showed that a significant fraction (∼80%) of the processed pseudogenes are primate-specific sequences, highlighting the increasing retrotransposition activity in primates. Analysis of sequence conservation and variation also demonstrated that most pseudogenes evolve neutrally, and processed pseudogenes appear to have lost their coding potential immediately or soon after their emergence. In order to explore the functional implication of pseudogene prevalence, we have extensively examined the transcriptional activity of the ENCODE pseudogenes. We performed a systematic series of pseudogene-specific RACE analyses. These, together with complementary evidence derived from tiling microarrays and high throughput sequencing, demonstrated that at least a fifth of the 201 pseudogenes are transcribed in one or more cell lines or tissues.
Abstract:
This paper presents a technique to estimate and model patient-specific pulsatility of cerebral aneurysms over one cardiac cycle, using 3D rotational X-ray angiography (3DRA) acquisitions. Aneurysm pulsation is modeled as a time-varying spline tensor field representing the deformation applied to a reference volume image, thus producing the instantaneous morphology at each time point in the cardiac cycle. The estimated deformation is obtained by matching multiple simulated projections of the deforming volume to their corresponding original projections. A weighting scheme is introduced to account for the relevance of each original projection for the selected time point. The wide coverage of the projections, together with the weighting scheme, ensures motion consistency in all directions. The technique has been tested on digital and physical phantoms that are realistic and clinically relevant in terms of geometry, pulsation and imaging conditions. Results from digital phantom experiments demonstrate that the proposed technique is able to recover subvoxel pulsation with an error lower than 10% of the maximum pulsation in most cases. The experiments with the physical phantom demonstrated the feasibility of pulsation estimation, as well as the identification of different pulsation regions, under clinical conditions.
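The projection weighting idea can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual weighting function: it assumes a Gaussian kernel over the circular distance between each projection's cardiac phase and the selected time point (the function name, the `sigma` parameter and the phase-in-[0,1) convention are all assumptions made here).

```python
import math

def projection_weights(phases, target, sigma=0.1):
    """Weight each projection by how close its cardiac phase is to the
    target phase, using a Gaussian kernel on circular phase distance."""
    weights = []
    for p in phases:
        d = abs(p - target)
        d = min(d, 1.0 - d)  # the cardiac cycle wraps around: phase 0.95 is close to 0.0
        weights.append(math.exp(-d * d / (2.0 * sigma * sigma)))
    total = sum(weights)
    return [w / total for w in weights]  # normalize so the weights sum to 1
```

Projections acquired at phases near the selected time point then dominate the matching term, while distant ones contribute little, which is one simple way to realize "relevance of each original projection for the selected time point".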
Abstract:
Exact closed-form expressions are obtained for the outage probability of maximal ratio combining in η-μ fading channels with antenna correlation and co-channel interference. The scenario considered in this work assumes the joint presence of background white Gaussian noise and independent Rayleigh-faded interferers with arbitrary powers. Outage probability results are obtained through an appropriate generalization of the moment-generating function of the η-μ fading distribution, for which new closed-form expressions are provided.
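The object being computed can be written down explicitly. As a sketch of the setup only (the notation below is assumed here, not taken from the paper): with combined desired-signal power S, noise power N_0 and L independent Rayleigh-faded interferers with instantaneous powers I_k, the outage probability at SINR threshold γ_th and the moment-generating function in question are

```latex
P_{\mathrm{out}}
  = \Pr\!\left[\frac{S}{N_0 + \sum_{k=1}^{L} I_k} < \gamma_{\mathrm{th}}\right],
\qquad
\mathcal{M}_S(s) = \mathbb{E}\!\left[e^{-sS}\right].
```

Because each Rayleigh-faded interferer power I_k is exponentially distributed, the outage probability conditioned on the interference can be averaged over the I_k analytically, which is why a closed form for the (generalized) MGF of the η-μ distribution is the key ingredient.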
Abstract:
User generated content shared in online communities is often described using collaborative tagging systems where users assign labels to content resources. As a result, a folksonomy emerges that relates a number of tags with the resources they label and the users that have used them. In this paper we analyze the folksonomy of Freesound, an online audio clip sharing site which contains more than two million users and 150,000 user-contributed sound samples covering a wide variety of sounds. By following methodologies taken from similar studies, we compute some metrics that characterize the folksonomy both at the global level and at the tag level. In this manner, we are able to better understand the behavior of the folksonomy as a whole, and also obtain some indicators that can be used as metadata for describing tags themselves. We expect that such a methodology for characterizing folksonomies can be useful to support processes such as tag recommendation or automatic annotation of online resources.
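Folksonomy metrics of this kind are computed over (user, resource, tag) triples. A minimal sketch, with toy data and metric names invented for illustration (the paper's actual metric set is not reproduced here):

```python
from collections import Counter, defaultdict

# Toy folksonomy: (user, resource, tag) assignments.
# Hypothetical data, not taken from Freesound itself.
assignments = [
    ("u1", "clip1", "rain"), ("u1", "clip1", "field-recording"),
    ("u2", "clip1", "rain"), ("u2", "clip2", "synth"),
    ("u3", "clip2", "synth"), ("u3", "clip3", "rain"),
]

def tag_frequency(assignments):
    """Global-level metric: how often each tag is used overall."""
    return Counter(tag for _, _, tag in assignments)

def tag_generality(assignments):
    """Tag-level metric: number of distinct resources each tag labels."""
    resources = defaultdict(set)
    for _, resource, tag in assignments:
        resources[tag].add(resource)
    return {tag: len(res) for tag, res in resources.items()}

freq = tag_frequency(assignments)   # e.g. "rain" used 3 times
gen = tag_generality(assignments)   # e.g. "rain" labels 2 distinct clips
```

Tag-level indicators like these can serve directly as metadata about tags themselves, e.g. to rank candidate tags in a recommender.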
Abstract:
When continuous data are coded to categorical variables, two types of coding are possible: crisp coding in the form of indicator, or dummy, variables with values either 0 or 1; or fuzzy coding where each observation is transformed to a set of "degrees of membership" between 0 and 1, using so-called membership functions. It is well known that the correspondence analysis of crisp coded data, namely multiple correspondence analysis, yields principal inertias (eigenvalues) that considerably underestimate the quality of the solution in a low-dimensional space. Since the crisp data only code the categories to which each individual case belongs, an alternative measure of fit is simply to count how well these categories are predicted by the solution. Another approach is to consider multiple correspondence analysis equivalently as the analysis of the Burt matrix (i.e., the matrix of all two-way cross-tabulations of the categorical variables), and then perform a joint correspondence analysis to fit just the off-diagonal tables of the Burt matrix - the measure of fit is then computed as the quality of explaining these tables only. The correspondence analysis of fuzzy coded data, called "fuzzy multiple correspondence analysis", suffers from the same problem, albeit attenuated. Again, one can count how many correct predictions are made of the categories which have highest degree of membership. But here one can also defuzzify the results of the analysis to obtain estimated values of the original data, and then calculate a measure of fit in the familiar percentage form, thanks to the resultant orthogonal decomposition of variance. Furthermore, if one thinks of fuzzy multiple correspondence analysis as explaining the two-way associations between variables, a fuzzy Burt matrix can be computed and the same strategy as in the crisp case can be applied to analyse the off-diagonal part of this matrix.
In this paper these alternative measures of fit are defined and applied to a data set of continuous meteorological variables, which are coded crisply and fuzzily into three categories. Measuring the fit is further discussed when the data set consists of a mixture of discrete and continuous variables.
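The crisp/fuzzy coding contrast can be made concrete with a small sketch. This assumes triangular membership functions anchored at three hinge points, a common choice for three-category fuzzy coding; the paper's exact membership functions and cut points are not reproduced here, so the values below are purely illustrative:

```python
def crisp_code(x, cut1, cut2):
    """Crisp (indicator/dummy) coding into three categories."""
    if x < cut1:
        return [1.0, 0.0, 0.0]
    if x < cut2:
        return [0.0, 1.0, 0.0]
    return [0.0, 0.0, 1.0]

def fuzzy_code(x, low, mid, high):
    """Fuzzy coding with triangular membership functions anchored at three
    hinge points; the degrees of membership always sum to 1."""
    if x <= low:
        return [1.0, 0.0, 0.0]
    if x >= high:
        return [0.0, 0.0, 1.0]
    if x <= mid:
        m = (x - low) / (mid - low)
        return [1.0 - m, m, 0.0]
    m = (x - mid) / (high - mid)
    return [0.0, 1.0 - m, m]

def defuzzify(memberships, hinges):
    """Reverse transform: a membership-weighted average of the hinge values
    recovers the original continuous value inside [low, high]."""
    return sum(m * h for m, h in zip(memberships, hinges))
```

For example, a value of 15 with hinges (10, 20, 30) is coded fuzzily as [0.5, 0.5, 0], and defuzzifying that vector recovers 15 exactly; this invertibility is what allows a percentage-of-variance measure of fit for the fuzzy analysis, whereas crisp 0/1 coding discards the within-category information.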
Abstract:
We address the performance optimization problem in a single-station multiclass queueing network with changeover times by means of the achievable region approach. This approach seeks to obtain performance bounds and scheduling policies from the solution of a mathematical program over a relaxation of the system's performance region. Relaxed formulations (including linear, convex, nonconvex and positive semidefinite constraints) of this region are developed by formulating equilibrium relations satisfied by the system, with the help of Palm calculus. Our contributions include: (1) new constraints formulating equilibrium relations on server dynamics; (2) a flow conservation interpretation of the constraints previously derived by the potential function method; (3) new positive semidefinite constraints; (4) new work decomposition laws for single-station multiclass queueing networks, which yield new convex constraints; (5) a unified buffer occupancy method of performance analysis obtained from the constraints; (6) heuristic scheduling policies from the solution of the relaxations.
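The flavour of the achievable region approach can be illustrated on the simplest possible instance. The sketch below uses standard nonpreemptive multiclass M/M/1 priority-queue formulas with no changeover times, so it is far simpler than the systems treated in the paper: the two priority orderings of a two-class station are the extreme points ("vertices") of the achievable region, the work decomposition (conservation) law sum_i rho_i * W_i = const holds at both, and a weighted-cost optimum is attained at the vertex picked by the classical c-mu rule.

```python
def priority_waits(lams, mus, order):
    """Mean waiting times in a nonpreemptive multiclass M/M/1 queue when
    classes are served with strict priorities in the given order."""
    # Mean residual work in service: W0 = sum_i lambda_i * E[S_i^2] / 2,
    # with E[S^2] = 2 / mu^2 for exponential service times.
    W0 = sum(l * (2.0 / m**2) for l, m in zip(lams, mus)) / 2.0
    rho = [l / m for l, m in zip(lams, mus)]
    W = [0.0] * len(lams)
    acc = 0.0
    for k in order:                      # highest priority class first
        prev = acc
        acc += rho[k]
        W[k] = W0 / ((1.0 - prev) * (1.0 - acc))
    return W

# Illustrative two-class data (arrival rates, service rates, holding costs).
lams, mus, costs = (0.5, 0.3), (2.0, 1.0), (4.0, 1.0)

def weighted_cost(order):
    W = priority_waits(lams, mus, order)
    return sum(c * l * w for c, l, w in zip(costs, lams, W))

# The c-mu rule (serve the class with the largest c_i * mu_i first) picks
# class 0 here, since 4*2 > 1*1.
best = min([(0, 1), (1, 0)], key=weighted_cost)
```

The conservation law is exactly the kind of linear equilibrium relation that, in the general case, becomes a constraint of the mathematical program over the relaxed performance region.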
Abstract:
We show the equivalence between the use of correspondence analysis (CA) of concatenated tables and the application of a particular version of conjoint analysis called categorical conjoint measurement (CCM). The connection is established using canonical correlation (CC). The second part introduces the interaction effects in all three variants of the analysis and shows how to pass between the results of each analysis.
Abstract:
The Ney is an end-blown flute mainly used in Makam music. Although a score representation based on an extension of Western music notation has been in use since the beginning of the 20th century, actual Ney music cannot be fully represented by the written score because of its rich articulation repertoire. The Ney is still taught and transmitted orally in Turkey; because of that, performance has a distinct and important role in Ney music. Signal analysis of Ney performances is therefore crucial for understanding the actual music. Another important aspect, which is also part of the performance, is the articulations that performers apply. In Makam music in Turkey, none of the articulations are taught or even named by teachers. Articulations on the Ney are valuable for understanding the real performance. Since articulations are not taught and their places are not marked in the score, the choice and character of each articulation is unique to each performer, which also makes each performance unique. Our method analyzes audio recordings of well-known Turkish Ney players. To obtain our analysis data, we analyzed audio files of 8 different performers, ranging from 1920 to 2000.
Abstract:
Monitoring thunderstorm activity is an essential part of operational weather surveillance given their potential hazards, including lightning, hail, heavy rainfall, strong winds or even tornadoes. This study has two main objectives: firstly, the description of a methodology, based on radar and total lightning data, to characterise thunderstorms in real-time; secondly, the application of this methodology to 66 thunderstorms that affected Catalonia (NE Spain) in the summer of 2006. An object-oriented tracking procedure is employed, where different observation data types generate four different types of objects (radar 1-km CAPPI reflectivity composites, radar reflectivity volumetric data, cloud-to-ground lightning data and intra-cloud lightning data). In the framework proposed, these objects are the building blocks of a higher level object, the thunderstorm. The methodology is demonstrated with a dataset of thunderstorms whose main characteristics, along the complete life cycle of the convective structures (development, maturity and dissipation), are described statistically. The development and dissipation stages present similar durations in most cases examined. On the contrary, the duration of the maturity phase is much more variable and related to the thunderstorm intensity, defined here in terms of lightning flash rate. Most of the activity of IC and CG flashes is registered in the maturity stage. In the development stage few CG flashes are observed (2% to 5%), while in the dissipation phase a few more CG flashes can be observed (10% to 15%). Additionally, a selection of thunderstorms is used to examine general life cycle patterns, obtained from the analysis of normalized (with respect to thunderstorm total duration and maximum value of variables considered) thunderstorm parameters.
Among other findings, the study indicates that the normalized duration of the three stages of thunderstorm life cycle is similar in most thunderstorms, with the longest duration corresponding to the maturity stage (approximately 80% of the total time).
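The normalization used for comparing life cycles can be sketched directly from the description above (normalize time by total duration and each variable by its maximum). The helper below is illustrative, not the study's code:

```python
def normalize_life_cycle(times, values):
    """Normalize a thunderstorm parameter series: time is rescaled to [0, 1]
    over the storm's total duration, values are divided by their maximum."""
    t0, t1 = times[0], times[-1]
    vmax = max(values)
    norm_t = [(t - t0) / (t1 - t0) for t in times]
    norm_v = [v / vmax for v in values]
    return norm_t, norm_v

# e.g. hypothetical flash-rate observations at minutes 0, 10, 20 and 40 of a storm
nt, nv = normalize_life_cycle([0, 10, 20, 40], [2.0, 8.0, 4.0, 1.0])
```

After this rescaling, storms of very different durations and intensities can be overlaid on common axes, which is what makes statements like "the maturity stage covers approximately 80% of the total time" comparable across the sample.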
Abstract:
Multiexponential decays may contain time-constants differing by several orders of magnitude. In such cases, uniform sampling results in very long records featuring a high degree of oversampling at the final part of the transient. Here, we analyze a nonlinear time scale transformation to reduce the total number of samples with minimum signal distortion, achieving an important reduction in the computational cost of subsequent analyses. We propose a time-varying filter whose length is optimized for minimum mean square error.
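A common concrete choice for such a nonlinear time scale is logarithmic spacing, which samples densely where the fast time constants live and sparsely in the slow tail. The sketch below is illustrative only; the paper analyzes a specific transformation and an optimized time-varying filter, which this toy version does not reproduce:

```python
import math

def log_resample_times(t_min, t_max, n):
    """n logarithmically spaced sample times between t_min and t_max:
    dense at the start of the transient, sparse in the oversampled tail."""
    ratio = t_max / t_min
    return [t_min * ratio ** (k / (n - 1)) for k in range(n)]

def multiexp(t, terms):
    """Multiexponential decay: sum_i a_i * exp(-t / tau_i)."""
    return sum(a * math.exp(-t / tau) for a, tau in terms)

# Time constants three orders of magnitude apart, covered by only 50 samples.
times = log_resample_times(1e-3, 10.0, 50)
signal = [multiexp(t, [(1.0, 0.01), (0.5, 1.0)]) for t in times]
```

Uniform sampling at the rate needed for the 0.01 s component would require thousands of samples over the same 10 s window; the logarithmic scale covers both components with a few dozen.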
Abstract:
The process of free reserves in a non-life insurance portfolio as defined in the classical model of risk theory is modified by the introduction of dividend policies that set maximum levels for the accumulation of reserves. The first part of the work formulates the quantification of the dividend payments via the expectation of their current value under different hypotheses. The second part presents a solution based on a system of linear equations for discrete dividend payments in the case of a constant dividend barrier, illustrated by solving a specific case.
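The kind of linear system involved can be shown on a toy version of the model. The sketch below uses a deliberately simplified discrete risk process, not the paper's model: reserves move up or down by 1 each period, ruin absorbs, and at the barrier b any up-move pays a dividend of 1. The expected present value of dividends V(u), for initial reserve u, then satisfies a tridiagonal linear system (V(u) = v[p V(u+1) + q V(u-1)] below the barrier, with the ruin state contributing 0), solved here by Gaussian elimination:

```python
def dividend_values(b, p, v):
    """Expected discounted dividends V(u), u = 0..b, in a toy +/-1 risk
    model with discount factor v, up-probability p and dividend barrier b."""
    q = 1.0 - p
    n = b + 1
    A = [[0.0] * n for _ in range(n)]
    rhs = [0.0] * n
    for u in range(n):
        A[u][u] += 1.0
        if u < b:
            A[u][u + 1] -= v * p      # up-move to reserve u + 1
        else:
            A[u][u] -= v * p          # at the barrier: stay at b, pay dividend 1
            rhs[u] = v * p
        if u > 0:
            A[u][u - 1] -= v * q      # down-move; from u = 0 it means ruin (value 0)
    return gauss_solve(A, rhs)

def gauss_solve(A, rhs):
    """Plain Gaussian elimination; the system matrix is diagonally dominant."""
    n = len(A)
    for i in range(n):
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            for k in range(i, n):
                A[j][k] -= f * A[i][k]
            rhs[j] -= f * rhs[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = rhs[i] - sum(A[i][k] * x[k] for k in range(i + 1, n))
        x[i] = s / A[i][i]
    return x

V = dividend_values(b=3, p=0.6, v=0.95)
```

As expected, V is increasing in the initial reserve: starting further from ruin means dividends are collected sooner and ruin comes later.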
Abstract:
Visual inspection remains the most frequently applied method for detecting treatment effects in single-case designs. The advantages and limitations of visual inference are here discussed in relation to other procedures for assessing intervention effectiveness. The first part of the paper reviews previous research on visual analysis, paying special attention to the validation of visual analysts' decisions, inter-judge agreement, and false alarm and omission rates. The most relevant factors affecting visual inspection (i.e., effect size, autocorrelation, data variability, and analysts' expertise) are highlighted and incorporated into an empirical simulation study with the aim of providing further evidence about the reliability of visual analysis. Our results concur with previous studies that have reported the relationship between serial dependence and increased Type I error rates. Participants with greater experience appeared to be more conservative and used more consistent criteria when assessing graphed data. Nonetheless, the decisions made by both professionals and students did not match the simulated data features sufficiently, and we also found low intra-judge agreement, thus suggesting that visual inspection should be complemented by other methods when assessing treatment effectiveness.
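The data generation behind such a simulation study can be sketched generically: a level change at the phase boundary plus AR(1) errors, so that effect size and serial dependence can be manipulated independently. This is an illustrative sketch, not the study's actual simulation code; the function name and parameters are assumptions:

```python
import random

def simulate_ab_series(n_a, n_b, effect, phi, sigma=1.0, seed=1):
    """AB single-case series: y_t = phase level + e_t, where the errors
    follow an AR(1) process e_t = phi * e_{t-1} + white noise
    (phi controls the serial dependence of the graphed data)."""
    rng = random.Random(seed)
    e = 0.0
    y = []
    for t in range(n_a + n_b):
        level = effect if t >= n_a else 0.0   # intervention raises the level in phase B
        e = phi * e + rng.gauss(0.0, sigma)
        y.append(level + e)
    return y

# 20 baseline points, 20 intervention points, effect of 5 units, autocorrelation 0.3
y = simulate_ab_series(n_a=20, n_b=20, effect=5.0, phi=0.3)
```

With phi > 0, runs of consecutive high or low points appear even when effect = 0, which is exactly the pattern that inflates Type I error rates when series are judged by eye.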
Abstract:
BACKGROUND: Scientists have long been trying to understand the molecular mechanisms of diseases in order to design preventive and therapeutic strategies. For some diseases, it has become evident that it is not enough to obtain a catalogue of the disease-related genes: one must also uncover how disruptions of molecular networks in the cell give rise to disease phenotypes. Moreover, with the unprecedented wealth of information available, even obtaining such a catalogue is extremely difficult.
PRINCIPAL FINDINGS: We developed a comprehensive gene-disease association database by integrating associations from several sources that cover different biomedical aspects of diseases. In particular, we focus on the current knowledge of human genetic diseases including Mendelian, complex and environmental diseases. To assess the concept of modularity of human diseases, we performed a systematic study of the emergent properties of human gene-disease networks by means of network topology and functional annotation analysis. The results indicate a highly shared genetic origin of human diseases and show that for most diseases, including Mendelian, complex and environmental diseases, functional modules exist. Moreover, a core set of biological pathways is found to be associated with most human diseases. We obtained similar results when studying clusters of diseases, suggesting that related diseases might arise due to dysfunction of common biological processes in the cell.
CONCLUSIONS: For the first time, we include Mendelian, complex and environmental diseases in an integrated gene-disease association database and show that the concept of modularity applies to all of them. We furthermore provide a functional analysis of disease-related modules providing important new biological insights, which might not be discovered when considering each of the gene-disease association repositories independently. Hence, we present a suitable framework for the study of how genetic and environmental factors, such as drugs, contribute to diseases.
AVAILABILITY: The gene-disease networks used in this study and part of the analysis are available at http://ibi.imim.es/DisGeNET/DisGeNETweb.html#Download
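The network construction underlying such an analysis can be sketched simply: a bipartite gene-disease network is projected onto diseases, linking two diseases by the genes they share. The toy associations below are hypothetical, not DisGeNET records:

```python
from itertools import combinations

# Toy gene-disease associations (hypothetical, for illustration only).
associations = {
    "disease_A": {"TP53", "BRCA1", "EGFR"},
    "disease_B": {"TP53", "EGFR", "KRAS"},
    "disease_C": {"INS", "LEP"},
}

def shared_gene_network(assoc):
    """Project the bipartite gene-disease network onto diseases:
    two diseases are linked when they share at least one associated gene,
    with the edge weighted by the number of shared genes."""
    edges = {}
    for d1, d2 in combinations(sorted(assoc), 2):
        shared = assoc[d1] & assoc[d2]
        if shared:
            edges[(d1, d2)] = len(shared)
    return edges

edges = shared_gene_network(associations)
```

High connectivity in this projected disease-disease network is one signature of the "highly shared genetic origin" described above, and its connected clusters are natural candidates for the disease modules the study analyzes functionally.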