Abstract:
Estimating glomerular filtration in the elderly, while accounting for the additional difficulty of assessing their muscle mass, is challenging and particularly important for drug prescribing. The plasma creatinine level depends both on the renal and extra-renal elimination fractions and on muscle mass. Currently, various formulas based mainly on the creatinine value are used to estimate glomerular filtration. Nevertheless, because of the fraction eliminated by tubular and intestinal routes, creatinine clearance generally overestimates the glomerular filtration rate (GFR). The aim of this study is to verify the reliability of some currently used markers and algorithms of renal function, and to evaluate the additional benefit of taking into account muscle mass measured by bioimpedance in an elderly population (> 70 years) with chronic impaired renal function based on MDRD eGFR (CKD stages III-IV). In this study we compare 5 equations developed to estimate renal function, based respectively on serum creatinine (Cockcroft and MDRD), cystatin C (Larsson), creatinine combined with beta-trace protein (White), and creatinine adjusted for muscle mass obtained by bioimpedance analysis (MacDonald). Bioimpedance is a commonly used method for estimating body composition based on the passive electrical properties and the geometry of biological tissues. It allows estimation of the relative volumes of different tissues or fluids in the body, such as total body water, muscle mass (= lean mass) and body fat mass. In an elderly population of an internal medicine ward, and using single-shot inulin clearance as the gold standard, we evaluated the algorithms of Cockcroft (GFR CKC), MDRD, Larsson (cystatin C, GFR CYS), White (beta-trace protein, GFR BTP) and MacDonald (GFR ALM, muscle mass by bioimpedance). The results showed that GFR (mean ± SD) measured with inulin and calculated with the algorithms was, respectively: 34.9±20 ml/min for inulin, 46.7±18.5 ml/min for CKC, 47.2±23 ml/min for CYS, 54.4±18.2 ml/min for BTP, 49±15.9 ml/min for MDRD and 32.9±27.2 ml/min for ALM. The ROC curves comparing sensitivity and specificity, with the area under the curve (AUC) and 95% confidence interval, gave respectively: CKC 0.68 (0.55-0.81), MDRD 0.76 (0.64-0.87), cystatin C 0.82 (0.72-0.92), BTP 0.75 (0.63-0.87), ALM 0.65 (0.52-0.78). In conclusion, the algorithms compared in this study overestimate GFR in this elderly, hospitalized population with multiple comorbidities and CKD stage III-IV. The use of bioelectrical impedance to reduce the error of creatinine-based GFR estimation did not provide any significant contribution; on the contrary, it performed worse than the other equations. In fact, in this study 75% of patients changed CKD class with MacDonald (creatinine and muscle mass), versus 49% with CYS (cystatin C), 56% with MDRD, 52% with Cockcroft and 65% with BTP. The best results were obtained with Larsson (CYS C) and the Cockcroft formula.
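Of the creatinine-based equations compared above, Cockcroft-Gault is the simplest. A minimal sketch of its textbook form is given below for orientation only; the exact variant, units and correction factors used in the study are not stated in the abstract, so the constants and the 0.85 female correction here are the commonly quoted values, not the study's implementation.

    # Textbook Cockcroft-Gault creatinine clearance (ml/min); illustrative only,
    # not necessarily the exact variant used in the study described above.
    def cockcroft_gault(age_years: float, weight_kg: float,
                        serum_creatinine_mg_dl: float, female: bool) -> float:
        crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
        return crcl * 0.85 if female else crcl

    # Hypothetical example: 78-year-old woman, 62 kg, serum creatinine 1.4 mg/dl
    print(round(cockcroft_gault(78, 62, 1.4, female=True), 1))  # ~32.4 ml/min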
Abstract:
We provide an incremental quantile estimator for non-stationary streaming data, proposing a method for the simultaneous estimation of multiple quantiles corresponding to given probability levels. Because memory is limited, it is not feasible to compute the quantiles by storing the data, so the quantiles must be estimated as the data pass by; this is useful, for example, in network measurement. To minimize the mean-squared error of the estimation we use a parabolic approximation, and for comparison we simulate the results for different numbers of runs using both linear and parabolic approximations.
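The abstract does not spell out the update rule. As a point of reference, a minimal memoryless quantile estimator of the stochastic-approximation kind looks like the sketch below; the step size and target probability are illustrative assumptions, and the linear update shown is a generic baseline, not the paper's parabolic scheme.

    # Minimal incremental quantile estimator (stochastic-approximation style).
    # Generic linear-update illustration, not the paper's parabolic scheme.
    def update_quantile(q: float, x: float, p: float, step: float = 0.05) -> float:
        # Move the estimate up when the sample exceeds it, down otherwise,
        # with asymmetric steps so q settles near the p-quantile.
        return q + step * p if x > q else q - step * (1.0 - p)

    import random
    random.seed(0)
    q = 0.0
    for _ in range(100_000):
        q = update_quantile(q, random.gauss(0.0, 1.0), p=0.9)
    print(round(q, 2))  # should hover near the true 0.9-quantile (~1.28)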
Abstract:
Resveratrol has been shown to have beneficial effects on diseases related to oxidant and/or inflammatory processes and to extend the lifespan of simple organisms, including rodents. The objective of the present study was to estimate the dietary intake of resveratrol and piceid (R&P) present in foods, and to identify the principal dietary sources of these compounds in the Spanish adult population. For this purpose, a food composition database (FCDB) of R&P in Spanish foods was compiled. The study included 40 685 subjects aged 35-64 years from northern and southern regions of Spain who were included in the European Prospective Investigation into Cancer and Nutrition (EPIC)-Spain cohort. Usual food intake was assessed by personal interviews using a computerised version of a validated diet history method. An FCDB with 160 items was compiled. The estimated median and mean R&P intakes were 100 and 933 μg/d, respectively. Approximately 32 % of the population did not consume R&P. The most abundant of the four stilbenes studied was trans-piceid (53·6 %), followed by trans-resveratrol (20·9 %), cis-piceid (19·3 %) and cis-resveratrol (6·2 %). The most important sources of R&P were wine (98·4 %) and grapes and grape juice (1·6 %), whereas peanuts, pistachios and berries contributed less than 0·01 %. For this reason the pattern of R&P intake was similar to that of wine. This is the first time that R&P intake has been estimated in a Mediterranean country.
Abstract:
This study focuses on international diversification from the perspective of a Finnish investor. A second objective is to examine whether new covariance matrix estimators improve the optimization of the minimum-variance portfolio. In addition to the ordinary sample covariance matrix, two shrinkage estimators and a flexible multivariate GARCH(1,1) model are used in the optimization. The data consist of Dow Jones industry indices and the OMX-H portfolio index. The international diversification strategy is implemented using an industry approach, and the portfolios are optimized using twelve components. The data cover the years 1996-2005, i.e. 120 monthly observations. The performance of the constructed portfolios is measured with the Sharpe ratio. According to the results, there is no statistically significant difference between the risk-adjusted returns of the internationally diversified investments and the domestic portfolio. Nor does the use of the new covariance matrix estimators add statistically significant value compared with portfolio optimization based on the sample covariance matrix.
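For orientation, the minimum-variance optimization and the idea of a shrinkage covariance estimator can be sketched as below. The fixed shrinkage intensity and the simulated returns are assumptions for illustration; the study's estimators (and the GARCH model) are not reproduced here.

    import numpy as np

    def min_variance_weights(cov: np.ndarray) -> np.ndarray:
        """Unconstrained minimum-variance weights w = C^-1 1 / (1' C^-1 1)."""
        ones = np.ones(cov.shape[0])
        w = np.linalg.solve(cov, ones)
        return w / w.sum()

    def shrink_covariance(sample_cov: np.ndarray, intensity: float = 0.2) -> np.ndarray:
        """Shrink the sample covariance toward a scaled identity target.
        The fixed intensity is an illustrative assumption; shrinkage estimators
        such as Ledoit-Wolf choose it from the data."""
        n = sample_cov.shape[0]
        target = np.eye(n) * np.trace(sample_cov) / n
        return (1 - intensity) * sample_cov + intensity * target

    # Simulated example with 120 monthly returns on 12 components, matching the
    # study design in size only.
    rng = np.random.default_rng(0)
    returns = rng.normal(0.005, 0.05, size=(120, 12))
    w = min_variance_weights(shrink_covariance(np.cov(returns, rowvar=False)))
    print(w.round(3), w.sum())  # weights sum to 1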
Abstract:
In lentic water bodies, such as lakes, the water temperature near the surface typically increases during the day and decreases during the night as a consequence of the diurnal radiative forcing (solar and infrared radiation). These temperature variations penetrate vertically into the water, transported mainly by heat conduction enhanced by eddy diffusion, which may vary due to atmospheric conditions, surface wave breaking, and the internal dynamics of the water body. These two processes can be described in terms of an effective thermal diffusivity, which can be estimated experimentally. However, the transparency of the water (depending on turbidity) also allows solar radiation to penetrate below the surface into the water body, where it is locally absorbed (either by the water or by the deployed sensors). This process makes the estimation of effective thermal diffusivity from experimental water temperature profiles more difficult. In this study, we analyze water temperature profiles in a lake with the aim of showing that assessing the role played by radiative forcing is necessary to estimate the effective thermal diffusivity. To this end we investigate diurnal water temperature fluctuations with depth. We try to quantify the effect of locally absorbed radiation and assess the impact of atmospheric conditions (wind speed, net radiation) on the estimation of the thermal diffusivity. The whole analysis is based on the results of fiber-optic distributed temperature sensing, which allows temperature profiles in the water and near the water surface to be measured at unprecedentedly high spatial resolution (∼4 mm).
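The two competing processes described above can be made concrete with a toy one-dimensional model: diurnal forcing diffusing downward with an effective diffusivity, plus solar radiation absorbed at depth following Beer-Lambert decay. All parameter values below (diffusivity, extinction coefficient, surface flux) are assumptions for illustration, not the lake's measured values.

    import numpy as np

    # Toy 1-D sketch: diffusion of the diurnal signal plus in-water absorption
    # of solar radiation. Parameter values are illustrative assumptions only.
    dz, dt = 0.01, 10.0              # grid spacing (m), time step (s)
    z = np.arange(0.0, 2.0, dz)      # depth (m)
    kappa = 1e-6                     # effective thermal diffusivity (m^2/s)
    mu = 1.5                         # light extinction coefficient (1/m)
    rho_cp = 4.18e6                  # volumetric heat capacity of water (J/m^3/K)
    T = np.full(z.size, 20.0)        # initial temperature (degC)

    for step in range(int(24 * 3600 / dt)):          # one diurnal cycle
        t = step * dt
        # Diurnal shortwave flux at the surface (W/m^2), zero at night.
        I0 = max(0.0, 800.0 * np.sin(2 * np.pi * (t / 86400.0 - 0.25)))
        absorbed = I0 * mu * np.exp(-mu * z) / rho_cp  # local heating rate (K/s)
        lap = np.zeros_like(T)
        lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2
        T = T + dt * (kappa * lap + absorbed)
        T[0], T[-1] = T[1], T[-2]                     # zero-flux boundaries (sketch)

    print(round(T[0] - T[-1], 2))  # near-surface warms more than depth over the day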
Abstract:
The τ-function and the η-function are phenomenological models that are widely used in the context of timing interceptive actions and collision avoidance, respectively. Both models were previously considered to be unrelated to each other: τ is a decreasing function that provides an estimate of time-to-contact (ttc) in the early phase of an object approach; in contrast, η has a maximum before ttc. Furthermore, it is not clear how either function could be implemented at the neuronal level in a biophysically plausible fashion. Here we propose a new framework, the corrected modified Tau function, capable of predicting both τ-type and η-type responses. The outstanding property of our new framework is its resilience to noise. We show that it can be derived from a firing-rate equation and that, like η, it serves to describe the response curves of collision-sensitive neurons. Furthermore, we show that it predicts the psychophysical performance of subjects determining ttc. Our new framework is thus validated successfully against published and novel experimental data. Within the framework, links between τ-type and η-type neurons are established. It could therefore serve as a model for explaining the co-occurrence of such neurons in the brain.
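For readers unfamiliar with τ, the sketch below computes its standard optical definition, the ratio of the angle subtended by the approaching object to its rate of expansion, which for a constant-speed approach equals the true time-to-contact. This is the textbook quantity only; it is not the corrected modified Tau function proposed in the abstract.

    import math

    def optical_tau(distance: float, size: float, speed: float) -> float:
        """Standard tau = theta / (d theta / dt) for an object of physical
        `size` at `distance`, approaching at constant `speed`. Small-angle
        algebra reduces this to distance / speed."""
        theta = 2 * math.atan(size / (2 * distance))
        # d(theta)/dt when the distance decreases at rate `speed`
        dtheta_dt = speed * size / (distance**2 + (size / 2) ** 2)
        return theta / dtheta_dt

    # Illustrative values: 0.5 m object, 10 m away, closing at 5 m/s -> tau ~ 2 s
    print(round(optical_tau(10.0, 0.5, 5.0), 3))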
Abstract:
Introduction: « Osteo-Mobile Vaud » is a mobile osteoporosis (OP) screening program. Women > 60 years living in the Vaud region will be offered OP screening with new equipment installed in a bus. The main goal is to evaluate fracture risk by combining clinical risk factors (CRF) with information extracted from a single DXA examination: bone mineral density (BMD), vertebral fracture assessment (VFA), and micro-architecture (MA) evaluation. MA can now be evaluated in daily practice with the Trabecular Bone Score (TBS), a novel grey-level texture measurement reflecting bone MA, based on experimental variograms of 2D projection images. TBS is very simple to obtain by reanalyzing a lumbar DXA scan, and it has proven diagnostic and prognostic value, partially independent of CRF and BMD. A 5-year follow-up is planned. Method: The Osteo-Mobile Vaud cohort (1500 women, > 60 years, living in the Vaud region) started in July 2010. CRF for OP, lumbar spine and hip BMD, VFA by DXA and MA evaluation by TBS are recorded. Preliminary results are reported. Results: As of July 31st, we had evaluated 510 women: mean age 67 years, BMI 26 kg/m². 72 women had one or more fragility fractures, and 39 had a grade 2/3 vertebral fracture (VFx). TBS decreases with age (-0.005 per year, p<0.001) and with BMI (-0.011 per kg/m², p<0.001). The correlation between BMD and site-matched TBS is low (r=0.4, p<0.001). For the lowest T-score BMD, the odds ratios (OR, 95% CI) for VFx grade 2/3 and clinical OP fracture are 1.8 (1.1-2.9) and 2.3 (1.5-3.4). For TBS, the age-, BMI- and BMD-adjusted ORs (per SD decrease) for VFx grade 2/3 and clinical OP fracture are 1.9 (1.2-3.0) and 1.8 (1.2-2.7). The added value of TBS was independent of lumbar spine BMD and of the lowest T-score (femoral neck, total hip or lumbar spine). Conclusion: As in the already published studies, these preliminary results confirm the partial independence of TBS from BMD. More importantly, the combination of TBS and BMD may significantly improve the identification of women with prevalent OP fractures. For the first time we can obtain complementary information about fracture (VFA), density (BMD), and micro-architecture (TBS) from a single, cheap device with low ionizing radiation: DXA. The value of this information in a screening program will be evaluated.
Abstract:
MOTIVATION: Comparative analyses of gene expression data from different species have become an important component of the study of molecular evolution. Thus methods are needed to estimate evolutionary distances between expression profiles, as well as a neutral reference to estimate selective pressure. Divergence between expression profiles of homologous genes is often calculated with Pearson's or Euclidean distance. Neutral divergence is usually inferred from randomized data. Despite being widely used, neither of these two steps has been well studied. Here, we analyze these methods formally and on real data, highlight their limitations and propose improvements. RESULTS: It has been demonstrated that Pearson's distance, in contrast to Euclidean distance, leads to underestimation of the expression similarity between homologous genes with a conserved uniform pattern of expression. Here, we first extend this study to genes with conserved, but specific pattern of expression. Surprisingly, we find that both Pearson's and Euclidean distances used as a measure of expression similarity between genes depend on the expression specificity of those genes. We also show that the Euclidean distance depends strongly on data normalization. Next, we show that the randomization procedure that is widely used to estimate the rate of neutral evolution is biased when broadly expressed genes are abundant in the data. To overcome this problem, we propose a novel randomization procedure that is unbiased with respect to expression profiles present in the datasets. Applying our method to the mouse and human gene expression data suggests significant gene expression conservation between these species. CONTACT: marc.robinson-rechavi@unil.ch; sven.bergmann@unil.ch SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
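As a concrete point of reference for the two distances discussed above, the sketch below applies their textbook definitions to one pair of expression profiles; it is not the authors' pipeline. The example shows the key behavioral difference: Pearson's distance ignores scale, whereas Euclidean distance depends on normalization.

    import numpy as np

    def pearson_distance(x: np.ndarray, y: np.ndarray) -> float:
        """1 - Pearson correlation between two expression profiles."""
        return 1.0 - np.corrcoef(x, y)[0, 1]

    def euclidean_distance(x: np.ndarray, y: np.ndarray) -> float:
        return float(np.linalg.norm(x - y))

    # Two hypothetical profiles over 6 tissues; y is a scaled copy of x.
    x = np.array([1.0, 2.0, 8.0, 2.0, 1.0, 1.0])
    y = 3.0 * x
    print(pearson_distance(x, y))    # ~0 -> identical shape, scale ignored
    print(euclidean_distance(x, y))  # > 0 -> sensitive to scale/normalization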
Abstract:
Gas-liquid mass transfer is an important issue in the design and operation of many chemical unit operations. Despite its importance, the evaluation of gas-liquid mass transfer is not straightforward due to the complex nature of the phenomena involved. In this thesis, gas-liquid mass transfer was evaluated in three different gas-liquid reactors in the traditional way by measuring the volumetric mass transfer coefficient (kLa). The studied reactors were a bubble column with a T-junction two-phase nozzle for gas dispersion, an industrial-scale bubble column reactor for the oxidation of tetrahydroanthrahydroquinone, and a concurrent downflow structured bed. The main drawback of this approach is that the obtained correlations give only the average volumetric mass transfer coefficient, which depends on average conditions. Moreover, the obtained correlations are valid only for the studied geometry and for the chemical system used in the measurements. In principle, a more fundamental approach is to estimate the interfacial area available for mass transfer from bubble size distributions obtained by solving population balance equations. This approach has been used in this thesis by developing a population balance model for a bubble column together with phenomenological models for bubble breakage and coalescence. The parameters of the bubble breakage rate and coalescence rate models were estimated by comparing the measured and calculated bubble sizes. The coalescence models always have at least one experimental parameter, because bubble coalescence depends on liquid composition in a way which is difficult to evaluate using known physical properties. The coalescence properties of some model solutions were evaluated by measuring the time that a bubble rests at the free gas-liquid interface before coalescing (the so-called persistence time or rest time). The measured persistence times range from 10 ms up to 15 s depending on the solution. Coalescence was never found to be instantaneous: the bubble oscillates up and down at the interface at least a couple of times before coalescence takes place. The measured persistence times were compared to coalescence times obtained by parameter fitting using measured bubble size distributions in a bubble column and a bubble column population balance model. For short persistence times, the persistence and coalescence times are in good agreement. For longer persistence times, however, the persistence times are at least an order of magnitude longer than the corresponding coalescence times from parameter fitting. This discrepancy may be attributed to uncertainties in the estimation of energy dissipation rates, collision rates and mechanisms, and contact times of the bubbles.
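The thesis abstract does not give its working equations. As a reminder of how kLa is commonly obtained in the traditional way mentioned above, the sketch below fits the classical dynamic (gassing-in) model dC/dt = kLa (C* - C), whose solution is ln((C* - C0)/(C* - C)) = kLa·t. The concentration trace and saturation level are made-up illustration data, not measurements from the studied reactors.

    import numpy as np

    # Classical dynamic method for the volumetric mass transfer coefficient kLa:
    # dC/dt = kLa * (C_sat - C)  =>  ln((C_sat - C0) / (C_sat - C)) = kLa * t
    def fit_kla(t: np.ndarray, c: np.ndarray, c_sat: float) -> float:
        y = np.log((c_sat - c[0]) / (c_sat - c))
        # Least-squares slope of y versus t through the origin.
        return float(np.dot(t, y) / np.dot(t, t))

    # Made-up dissolved-oxygen trace (mg/L) generated with kLa = 0.02 1/s.
    t = np.linspace(0.0, 200.0, 21)
    c_sat, c0 = 8.0, 1.0
    c = c_sat - (c_sat - c0) * np.exp(-0.02 * t)
    print(round(fit_kla(t, c, c_sat), 4))  # recovers ~0.02 1/s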
Abstract:
Software engineering is criticized as not being engineering or a 'well-developed' science at all. Software engineers seem not to know exactly how long their projects will last, what they will cost, and whether the software will work properly after release. Measurements have to be taken in software projects to improve this situation. It is of limited use to only collect metrics afterwards; the values of the relevant metrics have to be predicted, too. The predictions (i.e. estimates) form the basis for proper project management. One of the most painful problems in software projects is effort estimation. It has a clear and central effect on other project attributes like cost and schedule, and on product attributes like size and quality. Effort estimation can be used for several purposes. In this thesis only effort estimation in software projects for project management purposes is discussed. There is a short introduction to measurement issues, and some metrics relevant in the estimation context are presented. Effort estimation methods are covered quite broadly. The main new contribution of this thesis is the new estimation model that has been created. It makes use of the basic concepts of Function Point Analysis, but avoids the problems and pitfalls found in that method. It is relatively easy to use and learn. Effort estimation accuracy has improved significantly after taking this model into use. A major innovation related to the new estimation model is the identified need for hierarchical software size measurement; the author of this thesis has developed a three-level solution for the estimation model. All currently used size metrics are static in nature, but this new proposed metric is dynamic: it makes use of the increased understanding of the nature of the work as specification and design work proceed, and thus 'grows up' along with the software project. Effort estimation model development is not possible without gathering and analyzing history data. However, there are many problems with data in software engineering; a major roadblock is the amount and quality of the data available. This thesis shows some useful techniques that have been successful in gathering and analyzing the data needed. An estimation process is needed to ensure that methods are used in a proper way, that estimates are stored, reported and analyzed properly, and that they are used for project management activities. A higher-level mechanism called a measurement framework is also introduced briefly. The purpose of the framework is to define and maintain a measurement or estimation process. Without a proper framework, the estimation capability of an organization declines; it requires effort even to maintain an achieved level of estimation accuracy. Estimation results from several successive releases are analyzed, and it is clearly seen that the new estimation model works and that the estimation improvement actions have been successful. The calibration of the hierarchical model is a critical activity; an example is shown to shed more light on the calibration and the model itself. There are also remarks about the sensitivity of the model. Finally, an example of usage is shown.
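The thesis's own model and its hierarchical size metric are not spelled out in the abstract. Purely as background for the Function Point Analysis concepts it builds on, the sketch below computes an unadjusted function point count with the conventional average-complexity weights and converts it to effort with a made-up productivity figure; both the flat weights and the hours-per-FP value are assumptions for illustration.

    # Unadjusted Function Point count with conventional average-complexity weights.
    # The productivity figure is a made-up illustration, not an industry constant.
    FP_WEIGHTS = {
        "external_inputs": 4,
        "external_outputs": 5,
        "external_inquiries": 4,
        "internal_logical_files": 10,
        "external_interface_files": 7,
    }

    def unadjusted_function_points(counts: dict) -> int:
        return sum(FP_WEIGHTS[kind] * n for kind, n in counts.items())

    def effort_hours(fp: int, hours_per_fp: float = 8.0) -> float:
        """Convert size to effort with an assumed productivity (hours per FP)."""
        return fp * hours_per_fp

    counts = {"external_inputs": 12, "external_outputs": 7,
              "external_inquiries": 5, "internal_logical_files": 4,
              "external_interface_files": 2}
    fp = unadjusted_function_points(counts)
    print(fp, effort_hours(fp))  # 157 FP -> 1256.0 hours under the assumed rate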
Abstract:
Observers are often required to adjust their actions to objects that change their speed. However, no evidence for a direct sense of acceleration has been found so far. Instead, observers seem to detect changes in velocity within a temporal window when confronted with motion in the frontal plane (2D motion). Furthermore, recent studies suggest that motion-in-depth is detected by tracking changes of position in depth. Therefore, in order to sense acceleration in depth, a kind of second-order computation would have to be carried out by the visual system. In two experiments, we show that observers misperceive the acceleration of head-on approaches, at least within the ranges we used [600-800 ms], resulting in an overestimation of arrival time. Regardless of the viewing condition (monocular only, or monocular and binocular), the response pattern conformed to a constant velocity strategy. However, when binocular information was available, the overestimation was greatly reduced.
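To make the "constant velocity strategy" concrete, the sketch below compares the first-order time-to-contact estimate d/v with the true arrival time of an accelerating head-on approach; the numbers are arbitrary, not the stimulus parameters of the experiments. When the object accelerates, the first-order estimate overestimates arrival time, which is the pattern described above.

    import math

    def ttc_constant_velocity(d: float, v: float) -> float:
        """First-order estimate: ignore acceleration."""
        return d / v

    def ttc_true(d: float, v: float, a: float) -> float:
        """Solve d = v*t + 0.5*a*t**2 for the positive root."""
        if a == 0:
            return d / v
        return (-v + math.sqrt(v * v + 2 * a * d)) / a

    # Arbitrary example: object 6 m away, closing at 8 m/s, accelerating at 4 m/s^2.
    d, v, a = 6.0, 8.0, 4.0
    print(round(ttc_constant_velocity(d, v), 3))  # 0.75 s (overestimate)
    print(round(ttc_true(d, v, a), 3))            # ~0.646 s (actual arrival)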
Abstract:
The optimization of most pesticide and fertilizer applications is based on overall grove conditions. Recently, Wei [9, 10] used a terrestrial LIDAR to measure tree height, width and volume, developing a set of experiments to evaluate the repeatability and accuracy of the measurements and obtaining a coefficient of variation of 5.4% and a relative error of 4.4% in the estimation of the volume, but without real-time capabilities. In this work we propose a measurement system based on a ground laser scanner to estimate the volume of the trees and then extrapolate their foliage surface in real time. Tests with pear trees demonstrated that the relation between the volume and the foliage can be interpreted as linear, with a coefficient of correlation (R) of 0.81, and that the foliar surface can be estimated with an average error of less than 5 %.
Abstract:
Background: During the late 1990s the chance of surviving breast cancer increased. Changes in survival functions reflect a mixture of effects: both the introduction of adjuvant treatments and early screening with mammography played a role in the decline in mortality. Evaluating the contribution of these interventions using mathematical models requires survival functions before and after their introduction. Furthermore, the required survival functions may differ by age group and are related to disease stage at diagnosis. Sometimes detailed information is not available, as was the case for the region of Catalonia (Spain); one may then derive the functions using information from other geographical areas. This work presents the methodology used to estimate age- and stage-specific Catalan breast cancer survival functions from scarce Catalan survival data by adapting the age- and stage-specific US functions. Methods: Cubic splines were used to smooth the data and obtain continuous hazard rate functions. We then fitted a Poisson model, with time as a covariate, to derive hazard ratios. The hazard ratios were then applied to US survival functions detailed by age and stage to obtain the Catalan estimates. Results: We started by estimating the hazard ratios for Catalonia versus the USA before and after the introduction of screening. The hazard ratios were then multiplied by the age- and stage-specific breast cancer hazard rates from the USA to obtain the Catalan hazard rates. We also compared breast cancer survival in Catalonia and the USA in two time periods, before cancer control interventions (USA 1975–79, Catalonia 1980–89) and after (USA and Catalonia 1990–2001). Survival in Catalonia in the 1980–89 period was worse than in the USA during 1975–79, but the differences disappeared in 1990–2001. Conclusion: Our results suggest that access to better treatments and quality of care contributed to large improvements in survival in Catalonia. In addition, we obtained detailed breast cancer survival functions that will be used for modeling the effect of screening and adjuvant treatments in Catalonia.
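The core adaptation step described above, carrying an estimated hazard ratio over to a reference set of hazard rates, can be written compactly. The sketch below applies an assumed HR to a made-up reference hazard, which under proportional hazards is equivalent to raising the reference survival function to the power HR; the values are illustrative only, not the Catalan or US estimates.

    import numpy as np

    def adapted_survival(ref_hazard: np.ndarray, hr: float) -> np.ndarray:
        """Survival after scaling a reference hazard rate by a hazard ratio:
        S(t) = exp(-HR * cumulative reference hazard) = S_ref(t) ** HR."""
        cum_hazard = np.cumsum(ref_hazard)
        return np.exp(-hr * cum_hazard)

    # Made-up annual reference hazard over 10 years and an assumed HR of 1.3.
    ref_hazard = np.full(10, 0.03)
    s_ref = np.exp(-np.cumsum(ref_hazard))
    s_adapted = adapted_survival(ref_hazard, hr=1.3)
    print(s_ref[-1].round(3), s_adapted[-1].round(3))  # 0.741 vs 0.677
    print(np.allclose(s_adapted, s_ref ** 1.3))        # True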
Abstract:
Estimation of the dimensions of fluvial geobodies from core data is a notoriously difficult problem in reservoir modeling. To improve such estimates and, hence, reduce uncertainty in geomodels, data on dunes, unit bars, cross-bar channels, and compound bars and their associated deposits are presented herein from the sand-bed braided South Saskatchewan River, Canada. These data are used to test models that relate the scale of the formative bed forms to the dimensions of the preserved deposits and, therefore, provide an insight into how such deposits may be preserved over geologic time. The preservation of bed-form geometry is quantified by comparing the alluvial architecture above and below the maximum erosion depth of the modern channel deposits. This comparison shows that there is no significant difference in the mean set thickness of dune cross-strata above and below the basal erosion surface of the contemporary channel, suggesting that dimensional relationships between dune deposits and the formative bed-form dimensions are likely to be valid for both recent and older deposits. The data show that estimates of mean bankfull flow depth derived from dune, unit bar, and cross-bar channel deposits are all very similar. Thus, using all of these metrics together can provide a useful check that all components and scales of the alluvial architecture have been identified correctly when building reservoir models. The data also highlight several practical issues with identifying and applying data relating to cross-strata. For example, the deposits of unit bars were found to be severely truncated in length and width, with only approximately 10% of the mean bar-form length remaining, making identification in section difficult. For similar reasons, the deposits of compound bars were found to be especially difficult to recognize, and hence estimates of channel depth based on this method may be problematic. Where only core data are available (i.e., no outcrop data exist), formative flow depths are suggested to be best reconstructed using cross-strata formed by dunes. However, theoretical relationships between the distribution of set thicknesses and formative dune height are found to result in slight overestimates of the latter and, hence, of the mean bankfull flow depths derived from these measurements. This article illustrates that the preservation of fluvial cross-strata and, thus, the paleohydraulic inferences that can be drawn from them, are a function of the ratio of the size and migration rate of bed forms to the time scale of aggradation and channel migration. These factors must thus be considered when deciding on appropriate length:thickness ratios for the purposes of object-based modeling in reservoir characterization.
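The kind of paleohydraulic back-calculation discussed above can be sketched as follows. The scaling factors used (mean dune height roughly three times the mean cross-set thickness, and bankfull flow depth roughly six to ten times dune height) are commonly quoted rules of thumb included purely to show the structure of the calculation, not the calibrated relationships tested in the article.

    # Illustrative paleohydraulic back-calculation from dune cross-set thicknesses.
    # The scaling factors below are commonly quoted rules of thumb, used here only
    # to show the shape of the calculation, not the article's calibrations.
    def estimate_flow_depth(set_thicknesses_m,
                            height_per_set: float = 3.0,
                            depth_per_height=(6.0, 10.0)):
        mean_set = sum(set_thicknesses_m) / len(set_thicknesses_m)
        dune_height = height_per_set * mean_set
        return dune_height, tuple(f * dune_height for f in depth_per_height)

    # Hypothetical core measurements of cross-set thickness (m).
    sets = [0.18, 0.22, 0.15, 0.25, 0.20]
    height, (depth_lo, depth_hi) = estimate_flow_depth(sets)
    print(round(height, 2), round(depth_lo, 1), round(depth_hi, 1))
    # mean set 0.20 m -> dune height ~0.60 m -> bankfull depth ~3.6-6.0 m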