Abstract:
We propose a complete application capable of tracking multiple objects in an environment monitored by multiple cameras. The system has been developed specifically for sport games and has been evaluated in a real association-football stadium. Each target is tracked with a local importance-sampling particle filter in each camera, but the final estimate is obtained by combining information from the other cameras using a modified unscented Kalman filter algorithm. Multicamera integration enables the system to compensate for bad measurements or occlusions in some cameras thanks to the views provided by the others. The resulting algorithm yields a more accurate system with a lower failure rate. (C) 2009 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.3114605]
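Not from the paper, but as a rough illustration of the fusion idea: a minimal sketch in which each camera's tracker reports a position estimate with a covariance, and the estimates are combined by inverse-covariance weighting. The function name and toy numbers are ours; the paper's modified unscented Kalman filter is considerably more elaborate.

```python
import numpy as np

def fuse_camera_estimates(means, covs):
    """Fuse per-camera position estimates by inverse-covariance
    (information) weighting; a camera suffering an occlusion reports a
    large covariance and therefore contributes little to the fused state."""
    info = sum(np.linalg.inv(P) for P in covs)                    # total information
    info_mean = sum(np.linalg.inv(P) @ m for m, P in zip(means, covs))
    fused_cov = np.linalg.inv(info)
    return fused_cov @ info_mean, fused_cov

# Two cameras agree; a third is occluded (inflated covariance).
means = [np.array([10.2, 5.1]), np.array([10.0, 5.0]), np.array([14.0, 9.0])]
covs  = [np.eye(2) * 0.2, np.eye(2) * 0.3, np.eye(2) * 50.0]
state, cov = fuse_camera_estimates(means, covs)
print(state)  # close to the two consistent cameras
```

The occluded camera is effectively down-weighted rather than discarded, which is the intuition behind the compensation described above.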
Abstract:
Aims
Our aim was to test the predictive value and clinical applicability of high-sensitivity assayed troponin I for incident cardiovascular events in a general middle-aged European population.
Methods and results
High-sensitivity assayed troponin I was measured in the Scottish Heart Health Extended Cohort (n = 15 340), with 2171 cardiovascular events (including acute coronary heart disease and probable ischaemic strokes), 714 coronary deaths (25% of all deaths), 1980 myocardial infarctions, and 797 strokes of all kinds over an average of 20 years of follow-up. The detection rate above the limit of detection (LoD) was 74.8% in the overall population: 82.6% in men and 67.0% in women. Troponin I assayed by the high-sensitivity method was associated with future cardiovascular risk after full adjustment, such that individuals in the fourth category had 2.5 times the risk compared with those without detectable troponin I (P < 0.0001). These associations remained significant even for those individuals in whom troponin I was not detectable by contemporary-sensitivity assays. Addition of troponin I levels to clinical variables led to significant increases in risk prediction, with significant improvement of the c-statistic (P < 0.0001) and net reclassification (P < 0.0001). Thresholds of 4.7 pg/mL in women and 7.0 pg/mL in men are suggested to detect individuals at high risk for future cardiovascular events.
Conclusion
Troponin I, measured with a high-sensitivity assay, is an independent predictor of cardiovascular events and might support the selection of at-risk individuals.
Abstract:
A new domain-specific reconfigurable sub-pixel interpolation architecture for multi-standard video Motion Estimation (ME) is presented. The mixed use of parallel and serial-input FIR filters achieves a high throughput rate and efficient silicon utilisation. Flexibility is achieved through a multiplexed reconfigurable data-path controlled by a selection signal. Silicon design studies show that the architecture can be implemented using 34.8K gates, with area and performance that compare very favourably with existing fixed solutions based solely on the H.264 standard. © 2008 IEEE.
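For context, the kind of kernel such a data-path must support: the standard H.264 six-tap half-pel luma interpolation filter, shown below as a minimal software sketch. The actual architecture implements this and the kernels of other standards in reconfigurable hardware.

```python
import numpy as np

def h264_half_pel(row, i):
    """Standard H.264 six-tap half-pel luma filter (1, -5, 20, 20, -5, 1)/32,
    applied between integer positions i and i+1 of a padded sample row."""
    taps = np.array([1, -5, 20, 20, -5, 1])
    acc = int(taps @ row[i - 2:i + 4])
    return int(np.clip((acc + 16) >> 5, 0, 255))   # rounding and clipping

row = np.array([90, 92, 100, 120, 118, 110, 95, 90])
print(h264_half_pel(row, 3))  # half-pel sample between row[3] and row[4]
```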
Abstract:
Mathematical models are useful tools for the simulation, evaluation, optimal operation, and control of solar cells and proton exchange membrane fuel cells (PEMFCs). To identify the model parameters of these two types of cells efficiently, a biogeography-based optimization algorithm with mutation strategies (BBO-M) is proposed. BBO-M uses the structure of the biogeography-based optimization (BBO) algorithm, and both a mutation operator motivated by the differential evolution (DE) algorithm and chaos theory are incorporated into the BBO structure to improve the global search capability of the algorithm. Numerical experiments have been conducted on ten benchmark functions with 50 dimensions, and the results show that BBO-M produces solutions of high quality and has a fast convergence rate. The proposed BBO-M is then applied to the model parameter estimation of the two types of cells. The experimental results clearly demonstrate the power of the proposed BBO-M in estimating the model parameters of both solar cells and fuel cells.
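A minimal sketch of the hybrid idea, assuming a basic BBO migration loop with a DE/rand/1-style mutation; the population size, rates, and the omission of the chaos component are our simplifications, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def bbo_m(f, dim=10, pop=30, iters=200, F=0.5, p_mut=0.1):
    """Minimal BBO with a DE-style mutation (illustrative only).
    Migration copies decision variables from fitter habitats;
    mutation perturbs with a scaled difference of random habitats."""
    X = rng.uniform(-5, 5, (pop, dim))
    for _ in range(iters):
        fit = np.array([f(x) for x in X])
        X = X[np.argsort(fit)]                  # best habitats first
        mu = np.linspace(1, 0, pop)             # emigration rates
        lam = 1 - mu                            # immigration rates
        newX = X.copy()
        for i in range(pop):
            for d in range(dim):
                if rng.random() < lam[i]:       # immigrate variable d
                    src = rng.choice(pop, p=mu / mu.sum())
                    newX[i, d] = X[src, d]
                if rng.random() < p_mut:        # DE/rand/1-style mutation
                    a, b, c = rng.choice(pop, 3, replace=False)
                    newX[i, d] = X[a, d] + F * (X[b, d] - X[c, d])
        newX[0] = X[0]                          # elitism
        X = newX
    fit = np.array([f(x) for x in X])
    return X[fit.argmin()], fit.min()

best, val = bbo_m(lambda x: np.sum(x ** 2))     # sphere benchmark
print(val)
```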
Abstract:
Stiffness values in geotechnical structures can range over many orders of magnitude at relatively small operational strains. The strain levels at which soil stiffness changes most dramatically are typically in the range 0.01-0.1%; moreover, soils do not exhibit linear stress-strain behaviour even at small strains. Knowledge of the in situ stiffness at small strain is important in geotechnical numerical modelling and design. The stress-strain regime of cut slopes is complex, as the principal stress directions differ at different positions along the potential failure plane. For example, loading may be primarily in extension near the toe of the slope, while compressive loading is predominant at the crest. Cuttings in heavily overconsolidated clays are known to be susceptible to progressive failure and subsequent strain softening, in which progressive yielding propagates from the toe towards the crest of the slope over time. In order to gain a better understanding of the rate of softening, it would be advantageous to measure changes in small-strain stiffness in the field.
Abstract:
In the last few years, the number of systems and devices that use voice-based interaction has grown significantly. For continued use of these systems, the interface must be reliable and pleasant in order to provide an optimal user experience. However, there are currently very few studies that evaluate how pleasant a voice is, from a perceptual point of view, when the final application is a speech-based interface. In this paper we present an objective definition of voice pleasantness, based on the composition of a representative feature subset, and a new automatic system for voice pleasantness classification and intensity estimation. Our study is based on a database composed of European Portuguese female voices, but the methodology can be extended to male voices or to other languages. In the objective performance evaluation, the system achieved a 9.1% error rate for voice pleasantness classification and a 15.7% error rate for voice pleasantness intensity estimation.
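To make the two-stage setup concrete, here is a minimal sketch assuming generic per-utterance acoustic features and synthetic labels; the feature set, the SVM-based models, and all numbers are placeholders rather than the paper's actual pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(1)

# Placeholder acoustic feature matrix (e.g. pitch, jitter, spectral
# statistics per utterance) and perceptual annotations; real features
# and labels would come from listening tests as in the paper.
X = rng.normal(size=(200, 12))
pleasant = rng.integers(0, 2, 200)          # binary pleasantness label
intensity = rng.uniform(0, 1, 200)          # pleasantness intensity score

clf = make_pipeline(StandardScaler(), SVC()).fit(X, pleasant)   # classification stage
reg = make_pipeline(StandardScaler(), SVR()).fit(X, intensity)  # intensity stage

x_new = rng.normal(size=(1, 12))
print(clf.predict(x_new), reg.predict(x_new))
```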
Abstract:
Restriction site-associated DNA sequencing (RADseq) provides researchers with the ability to record genetic polymorphism across thousands of loci for nonmodel organisms, potentially revolutionizing the field of molecular ecology. However, as with other genotyping methods, RADseq is prone to a number of sources of error that can have substantial consequences for population genetic inferences, and these have received only limited attention in terms of the estimation and reporting of genotyping error rates. Here we use individual sample replicates, under the expectation of identical genotypes, to quantify genotyping error in the absence of a reference genome. We then use sample replicates to (i) optimize de novo assembly parameters within the program Stacks, by minimizing error and maximizing the retrieval of informative loci; and (ii) quantify error rates for loci, alleles and single-nucleotide polymorphisms. As an empirical example, we use a double-digest RAD data set of a nonmodel plant species, Berberis alpina, collected from high-altitude mountains in Mexico.
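A minimal sketch of the replicate-based error estimate, assuming genotype calls are available per locus for both replicates; the data representation and toy values are ours, whereas the paper computes analogous rates for loci, alleles, and SNPs within Stacks.

```python
def replicate_error_rate(genos_a, genos_b):
    """Locus-level genotyping error from a pair of sample replicates:
    the fraction of loci called in both replicates whose genotypes
    differ. Missing calls (None) are excluded from the comparison."""
    shared = [(a, b) for a, b in zip(genos_a, genos_b)
              if a is not None and b is not None]
    mismatches = sum(a != b for a, b in shared)
    return mismatches / len(shared)

# Genotypes at five loci for the same individual sequenced twice.
rep1 = ["AA", "AG", "GG", None, "CT"]
rep2 = ["AA", "AA", "GG", "TT", "CT"]
print(replicate_error_rate(rep1, rep2))  # 0.25: one mismatch in four shared loci
```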
Abstract:
The dispersal process, by which individuals or other dispersing agents such as gametes or seeds move from birthplace to a new settlement locality, has important consequences for the dynamics of genes, individuals, and species. Many of the questions addressed by ecology and evolutionary biology require a good understanding of species' dispersal patterns. Much effort has thus been devoted to overcoming the difficulties associated with dispersal measurement. In this context, genetic tools have long been the focus of intensive research, providing a great variety of potential solutions to measuring dispersal. This methodological diversity is reviewed here to help (molecular) ecologists find their way toward dispersal inference and interpretation and to stimulate further developments.
Abstract:
This study offers an overview of the interactions and links between exchange rate volatility and international trade. The objective of this work is therefore to present this relationship theoretically, and then to examine empirically the existence of a causal relationship between international trade and exchange rate variability. The literature on the question is, on the whole, contradictory and supports several controversies that do not allow clear conclusions about the relationship in question. We push this research a little further by re-examining the evidence for Canada and by offering an empirical investigation of the possible existence of a significant impact of volatility on the disaggregated flows of Canada's sectoral exports to its partner, the United States. We examine the empirical response of five Canadian export sectors to variations in the real effective exchange rate between Canada and the United States. However, our results do not allow us to conclude that there is a significant impact of exchange rate volatility on the disaggregated sectoral exports destined for the United States. Overall, even though the estimated coefficients of the risk variable are negative in every sector, we conclude that volatility does not appear to have a statistically significant impact on the real volume of Canada's exports to the United States.
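To illustrate the kind of regression implied (not the study's actual specification), a minimal sketch using a rolling standard deviation of log exchange-rate changes as the volatility proxy for one export sector; the data and variable construction are synthetic placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Illustrative monthly data: log real exchange rate and log exports of
# one Canadian sector to the US; real series would be used instead.
n = 240
log_rer = np.cumsum(rng.normal(0, 0.02, n))
volatility = pd.Series(log_rer).diff().rolling(12).std()   # rolling-s.d. proxy
log_exports = 0.5 * log_rer - 0.3 * volatility.fillna(0) + rng.normal(0, 0.05, n)

df = pd.DataFrame({"log_exports": log_exports,
                   "log_rer": log_rer,
                   "volatility": volatility}).dropna()
ols = sm.OLS(df["log_exports"],
             sm.add_constant(df[["log_rer", "volatility"]])).fit()
print(ols.params["volatility"])   # sign and size of the volatility effect
```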
Abstract:
In recent years, molecular evolution has sought to characterize the variation and intensity of selection through the ratio of the non-synonymous to the synonymous substitution rate (dN/dS). This measure, dN/dS, has made it possible to study the history of the variation in the intensity of selection over time and to detect episodes of positive selection. The links between selection and variation in effective population size, however, interfere with these measurements. Comparative methods, for their part, make it possible to measure correlations between quantitative traits along a phylogeny. They are also used to test hypotheses about the correlated evolution of life-history traits, and can be employed to study correlations between life-history traits, body mass, substitution rates, or dN/dS. Here we propose an approach combining a comparative method based on the principle of independent contrasts with a model of molecular evolution, in a Bayesian probabilistic framework. By integrating, along a phylogeny, over the ancestral reconstructions of the traits and of dN/dS, we estimate the covariances between traits, as well as between traits and the parameters of the molecular evolution model. A hierarchical model was implemented in the coevol software, published during this master's work. This model allows the simultaneous analysis of several genes without losing the power provided by the full set of sequences. Parallelization of the computations gives the freedom to scale the model up to the genome level. We study placental mammals, for which many complete genomes and phenotypic measurements are available. In light of life-history theory, our method should make it possible to characterize the involvement of groups of genes in the biological processes linked to the phenotypes studied.
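The building block of the comparative side is Felsenstein's standardized independent contrast; a minimal sketch follows, with toy trait values and branch lengths. The coevol model integrates over ancestral states along the whole phylogeny rather than computing contrasts directly.

```python
import numpy as np

def contrast(x1, x2, v1, v2):
    """Felsenstein's standardized independent contrast for two sister
    lineages with trait values x1, x2 and branch lengths v1, v2."""
    return (x1 - x2) / np.sqrt(v1 + v2)

# Contrasts in log body mass and in dN/dS across sister pairs can then
# be correlated; under Brownian motion they are independent draws.
mass_contrast = contrast(np.log(4.2), np.log(1.1), 0.3, 0.5)
dnds_contrast = contrast(0.21, 0.12, 0.3, 0.5)
print(mass_contrast, dnds_contrast)
```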
Abstract:
We consider two new approaches to nonparametric estimation of the leverage effect. The first approach uses stock prices alone. The second approach uses the data on stock prices as well as a certain volatility instrument, such as the CBOE volatility index (VIX) or the Black-Scholes implied volatility. The theoretical justification for the instrument-based estimator relies on a certain invariance property, which can be exploited when high-frequency data are available. The price-only estimator is more robust since it is valid under weaker assumptions. However, in the presence of a valid volatility instrument, the price-only estimator is inefficient, as the instrument-based estimator has a faster rate of convergence. We consider two empirical applications, in which we study the relationship between the leverage effect and the debt-to-equity ratio, credit risk, and illiquidity.
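As a rough illustration of the instrument-based idea (not the paper's estimator), a minimal sketch that proxies the leverage effect by the covariance between high-frequency returns and increments of a volatility instrument; the simulated series are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)

def leverage_proxy(returns, vol_instrument):
    """Naive proxy for the leverage effect: covariance between
    high-frequency returns and increments of a volatility instrument
    (e.g. the VIX or a Black-Scholes implied volatility)."""
    dvol = np.diff(vol_instrument)
    return np.cov(returns[1:], dvol)[0, 1]

# Simulated data with built-in negative price-volatility dependence.
n = 2000
vol = 0.2 + np.cumsum(rng.normal(0, 0.001, n))
ret = rng.normal(0, 0.01, n) - 5.0 * np.concatenate([[0.0], np.diff(vol)])
print(leverage_proxy(ret, vol))   # negative: the leverage effect
```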
Abstract:
So far, in the bivariate set-up, the analysis of lifetime (failure time) data with multiple causes of failure has been done by treating each cause of failure separately, with failures from other causes considered as independent censoring. This approach is unrealistic in many situations. For example, in the analysis of mortality data on married couples, one would be interested in comparing the hazards for the same cause of death as well as in checking whether death due to one cause is more important for the partners' risk of death from other causes. In reliability analysis, one often has systems with more than one component, and many systems, subsystems, and components have more than one cause of failure. Design of high-reliability systems generally requires that the individual system components have extremely high reliability even after long periods of time. Knowledge of the failure behaviour of a component can lead to savings in its cost of production and maintenance and, in some cases, to the preservation of human life. For the purpose of improving reliability, it is necessary to identify the cause of failure down to the component level. By treating each cause of failure separately, with failures from other causes considered as independent censoring, the analysis of lifetime data would be incomplete. Motivated by this, we introduce a new approach for the analysis of bivariate competing risk data using the bivariate vector hazard rate of Johnson and Kotz (1975).
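For reference, the vector hazard rate of Johnson and Kotz (1975) can be written componentwise in terms of the joint survival function; a standard formulation is:

```latex
h(t_1, t_2) = \left( -\frac{\partial}{\partial t_1} \log S(t_1, t_2),\;
                     -\frac{\partial}{\partial t_2} \log S(t_1, t_2) \right),
\qquad S(t_1, t_2) = P(T_1 > t_1,\, T_2 > t_2).
```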
Abstract:
This thesis investigates the potential use of zero-crossing information for speech sample estimation. It provides a new method to estimate speech samples using composite zero-crossings. A simple linear interpolation technique is developed for this purpose. By using this method, the A/D converter can be avoided in a speech coder. The newly proposed zero-crossing sampling theory is supported with results of computer simulations using real speech data. The thesis also presents two methods for voiced/unvoiced classification. One of these methods is based on a distance measure which is a function of the short-time zero-crossing rate and the short-time energy of the signal. The other is based on the attractor dimension and entropy of the signal. Of these two methods, the first is simpler and requires very few computations compared to the other. This method is used in a later chapter to design an enhanced Adaptive Transform Coder. The later part of the thesis addresses a few problems in Adaptive Transform Coding and presents an improved ATC. The transform coefficient with maximum amplitude is treated as 'side information'. This enables more accurate bit assignment and step-size computation. A new bit reassignment scheme is also introduced in this work. Finally, an ATC which switches between the Discrete Cosine Transform and the Discrete Walsh-Hadamard Transform for voiced and unvoiced speech segments, respectively, is presented. Simulation results are provided to show the improved performance of the coder.
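A minimal sketch of the ingredients of the first classifier, using the short-time zero-crossing rate and short-time energy with illustrative thresholds; the thesis combines the two features into a distance measure rather than thresholding them independently as done here.

```python
import numpy as np

def voiced_unvoiced(frame, zcr_thresh=0.15, energy_thresh=0.01):
    """Frame-level voiced/unvoiced decision from the short-time
    zero-crossing rate and short-time energy: voiced speech tends to
    have low ZCR and high energy, unvoiced speech the opposite."""
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2   # crossings per sample
    energy = np.mean(frame ** 2)                         # short-time energy
    return "voiced" if zcr < zcr_thresh and energy > energy_thresh else "unvoiced"

t = np.linspace(0, 0.02, 320)                            # 20 ms frame at 16 kHz
voiced_frame = 0.5 * np.sin(2 * np.pi * 120 * t)         # pitched tone
unvoiced_frame = 0.05 * np.random.default_rng(4).normal(size=320)
print(voiced_unvoiced(voiced_frame), voiced_unvoiced(unvoiced_frame))
```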
Abstract:
This work projects photoluminescence (PL) as an alternative technique to estimate the order of magnitude of the resistivity of zinc oxide (ZnO) thin films. ZnO thin films, deposited using chemical spray pyrolysis (CSP) with varying deposition parameters (solvent, spray rate, pH of the precursor, and so forth), have been used for this study. Variation in the deposition conditions has a tremendous impact on the luminescence properties as well as on the resistivity. Two emissions could be recorded for all samples: the near-band-edge emission (NBE) at 380 nm and the deep-level emission (DLE) at ~500 nm, which are competing in nature. It is observed that the ratio of the intensities of DLE to NBE (I_DLE/I_NBE) can be reduced by controlling oxygen incorporation in the sample. Resistivity measurements indicate that restricting oxygen incorporation reduces resistivity considerably. The variation of I_DLE/I_NBE and of resistivity for samples prepared under different deposition conditions is similar in nature, and I_DLE/I_NBE was always lower than the resistivity by an order of magnitude for all samples. Thus, from PL measurements alone, the order of magnitude of the resistivity of the samples can be estimated.
Abstract:
Software systems are progressively being deployed in many facets of human life. The failure of such systems has an assorted impact on their customers. The fundamental aspect that supports a software system is a focus on quality. Reliability describes the ability of a system to function in a specified environment for a specified period of time, and it is used to measure quality objectively. Evaluation of the reliability of a computing system involves the computation of both hardware and software reliability. Most earlier works focused on software reliability with no consideration of hardware parts, or vice versa. However, a complete estimation of the reliability of a computing system requires these two elements to be considered together, and thus demands a combined approach. The present work focuses on this and presents a model for evaluating the reliability of a computing system. The method involves identifying the failure data for hardware components and software components, and building a model based on these data to predict reliability. To develop such a model, the focus is on systems based on Open Source Software, since there is an increasing trend towards its use and only a few studies have been reported on the modelling and measurement of the reliability of such products. The present work includes a thorough study of the role of Free and Open Source Software, an evaluation of reliability growth models, and an integrated model for the prediction of the reliability of a computational system. The developed model has been compared with existing models and its usefulness is discussed.
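As one concrete example of a reliability growth model of the kind evaluated here (not necessarily the thesis's integrated model), a minimal sketch fitting the Goel-Okumoto NHPP model to illustrative cumulative failure counts:

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Mean cumulative number of failures by time t under the
    Goel-Okumoto NHPP software reliability growth model."""
    return a * (1 - np.exp(-b * t))

# Illustrative cumulative failure counts over twelve weeks of testing.
weeks = np.arange(1, 13)
failures = np.array([5, 9, 13, 16, 18, 20, 21, 22, 23, 23, 24, 24])

(a, b), _ = curve_fit(goel_okumoto, weeks, failures, p0=(30, 0.1))
# Reliability over the next week: probability of no failure in (t, t+1].
reliability_1wk = np.exp(-(goel_okumoto(weeks[-1] + 1, a, b)
                           - goel_okumoto(weeks[-1], a, b)))
print(a, b, reliability_1wk)   # expected total faults, detection rate, R(1 week)
```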