186 results for Efficient estimation
Abstract:
The MDRD (Modification of Diet in Renal Disease) equation enables glomerular filtration rate (GFR) estimation from serum creatinine alone. The laboratory can therefore report an estimated GFR (eGFR) with each serum creatinine assessment, thereby increasing the recognition of renal failure. The predictive performance of the MDRD equation is better for GFR < 60 ml/min/1.73 m2. Normal or near-normal renal function is often underestimated by this equation. Overall, MDRD provides more reliable estimates of renal function than the Cockcroft-Gault (C-G) formula, but both lack precision. MDRD is not superior to C-G for drug dosing. Because it is indexed to 1.73 m2, the MDRD eGFR has to be adjusted back to the patient's actual body surface area for drug dosing. In addition, C-G has the advantage of greater simplicity and a longer history of use.
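For reference, a minimal sketch of how such an estimate and the body-surface-area back-adjustment might be computed. The 4-variable MDRD coefficients and the Du Bois BSA formula used below are standard published forms assumed here, not quoted from the abstract:

```python
def mdrd_egfr(scr_mg_dl, age, female, black):
    """4-variable MDRD estimate in ml/min/1.73 m2 (assumed standard coefficients)."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def du_bois_bsa(weight_kg, height_cm):
    """Du Bois body surface area in m2 (assumed formula, not from the abstract)."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

def egfr_for_dosing(scr_mg_dl, age, female, black, weight_kg, height_cm):
    """Back-adjust the indexed eGFR to the patient's actual BSA, as the abstract recommends."""
    indexed = mdrd_egfr(scr_mg_dl, age, female, black)
    return indexed * du_bois_bsa(weight_kg, height_cm) / 1.73

# Example: 70-year-old woman, creatinine 1.2 mg/dl, 60 kg, 160 cm
print(round(egfr_for_dosing(1.2, 70, True, False, 60, 160), 1))
```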
Abstract:
In this paper, we propose a new paradigm for carrying out the registration task with a dense deformation field derived from the optical flow model and the active contour method. The proposed framework merges different tasks such as segmentation, regularization, incorporation of prior knowledge and registration into a single framework. The active contour model is at the core of our framework, even if it is used in a different way than in standard approaches. Indeed, active contours are a well-known technique for image segmentation. This technique consists in finding the curve that minimizes an energy functional designed to be minimal when the curve has reached the object contours. In that way, we obtain accurate and smooth segmentation results. So far, the active contour model has been used to segment objects lying in images using boundary-based, region-based or shape-based information. Our registration technique benefits from all these families of active contours to determine a dense deformation field defined on the whole image. A well-suited application of our model is atlas registration in medical imaging, which consists in automatically delineating anatomical structures. We present results on 2D synthetic images to show the performance of our non-rigid deformation field based on a natural registration term. We also present registration results on real 3D medical data with a large space-occupying tumor that substantially deforms the surrounding structures, which constitutes a highly challenging problem.
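The abstract does not give the framework's equations. As a rough illustration of the general idea of estimating a dense deformation field from an optical-flow-style matching term with smoothing regularization (a demons-like scheme, which is an assumption here and not the authors' active-contour formulation), one might write:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def register_dense(fixed, moving, n_iter=200, sigma=2.0, step=1.0):
    """Demons-like estimation of a dense 2D deformation field (illustrative sketch only)."""
    fixed = np.asarray(fixed, float)
    moving = np.asarray(moving, float)
    uy = np.zeros(fixed.shape)
    ux = np.zeros(fixed.shape)
    yy, xx = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
    for _ in range(n_iter):
        warped = map_coordinates(moving, [yy + uy, xx + ux], order=1, mode='nearest')
        diff = warped - fixed                      # optical-flow style matching term
        gy, gx = np.gradient(warped)
        denom = gy**2 + gx**2 + diff**2 + 1e-8
        uy -= step * diff * gy / denom             # displacement update along the image gradient
        ux -= step * diff * gx / denom
        uy = gaussian_filter(uy, sigma)            # regularization: smooth the deformation field
        ux = gaussian_filter(ux, sigma)
    return uy, ux                                  # dense deformation field over the whole image
```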
Abstract:
Functional magnetic resonance imaging studies have indicated that efficient feature search (FS) and inefficient conjunction search (CS) activate partially distinct frontoparietal cortical networks. However, it remains a matter of debate whether the differences in these networks reflect differences in the early processing during FS and CS. In addition, the relationship between the differences in the networks and spatial shifts of attention also remains unknown. We examined these issues by applying a spatio-temporal analysis method to high-resolution visual event-related potentials (ERPs) and investigated how spatio-temporal activation patterns differ for FS and CS tasks. Within the first 450 msec after stimulus onset, scalp potential distributions (ERP maps) revealed 7 different electric field configurations for each search task. Configuration changes occurred simultaneously in the two tasks, suggesting that contributing processes were not significantly delayed in one task compared to the other. Despite this high spatial and temporal correlation, two ERP maps (120-190 and 250-300 msec) differed between the FS and CS. Lateralized distributions were observed only in the ERP map at 250-300 msec for the FS. This distribution corresponds to that previously described as the N2pc component (a negativity in the time range of the N2 complex over posterior electrodes of the hemisphere contralateral to the target hemifield), which has been associated with the focusing of attention onto potential target items in the search display. Thus, our results indicate that the cortical networks involved in feature and conjunction searching partially differ as early as 120 msec after stimulus onset and that the differences between the networks employed during the early stages of FS and CS are not necessarily caused by spatial attention shifts.
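As context for the N2pc measure mentioned above, here is a minimal sketch of how a contralateral-minus-ipsilateral posterior difference could be computed from epoch arrays; the array layout, channel indices and function name are assumptions for illustration, only the 250-300 msec window and the contralateral/ipsilateral logic come from the abstract:

```python
import numpy as np

def n2pc(epochs, target_side, left_po_idx, right_po_idx, times, window=(0.25, 0.30)):
    """Mean contralateral-minus-ipsilateral amplitude over posterior electrodes.

    epochs: array (n_trials, n_channels, n_times); target_side: 'L' or 'R' per trial.
    """
    mask = (times >= window[0]) & (times <= window[1])
    diffs = []
    for trial, side in zip(epochs, target_side):
        contra = trial[right_po_idx] if side == 'L' else trial[left_po_idx]
        ipsi = trial[left_po_idx] if side == 'L' else trial[right_po_idx]
        diffs.append(contra[:, mask].mean() - ipsi[:, mask].mean())
    return float(np.mean(diffs))
```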
Abstract:
A new formula for glomerular filtration rate estimation in the pediatric population aged 2 to 18 years has been developed by the University Unit of Pediatric Nephrology. This Quadratic formula, accessible online, allows pediatricians to adjust drug dosages and/or follow up renal function more precisely and easily.
Abstract:
Conservation of the function of open reading frames recently identified in fungal genome projects can be assessed by complementation of deletion mutants of putative Saccharomyces cerevisiae orthologs. A parallel complementation assay expressing the homologous wild-type S. cerevisiae gene is generally performed as a positive control. However, we and others have found that complementation can fail in this case. We investigated the specific cases of the S. cerevisiae essential genes TBF1 and TIM54. Heterologous complementation with the Candida glabrata TBF1 or TIM54 gene was successful using the constitutive promoters TDH3 and TEF. In contrast, homologous complementation with the S. cerevisiae TBF1 or TIM54 genes failed using these promoters and was successful only with the natural promoters of these genes. The reduced growth rate of S. cerevisiae complemented with C. glabrata TBF1 or TIM54 suggested diminished functionality of the heterologous proteins compared with the homologous proteins. The requirement of the homologous gene for its natural promoter was alleviated for TBF1 when complementation was assayed in the absence of sporulation and germination, and for TIM54 when two regions of the protein presumably responsible for a unique translocation pathway of the TIM54 protein into the mitochondrial membrane were deleted. Our results demonstrate that the use of different promoters may prove necessary to obtain successful complementation, with use of the natural promoter being the best approach for homologous complementation.
Abstract:
Modern dietary habits are characterized by high sodium and low potassium intakes, each of which is correlated with a higher risk of hypertension. In this study, we examined whether long-term variations in sodium and potassium intake induce lasting changes in the plasma concentration of circulating steroids by developing a mathematical model of steroidogenesis in mice. One prediction of this model was that mice increase their plasma progesterone levels specifically in response to potassium depletion. This prediction was confirmed by measurements in both male mice and men. Further investigation showed that progesterone regulates renal potassium handling in both males and females under potassium restriction, independent of its role in reproduction. The increase in progesterone production by male mice was time dependent and correlated with decreased urinary potassium content. The progesterone-dependent ability to efficiently retain potassium was due to an RU486-sensitive (RU486 is a progesterone receptor antagonist) stimulation of the colonic H+,K+-ATPase (also known as the non-gastric or type 2 H+,K+-ATPase) in the kidney. Thus, in males, a specific progesterone concentration profile induced by chronic potassium restriction regulates potassium balance.
Abstract:
The clinical demand for a device to monitor Blood Pressure (BP) in ambulatory scenarios with minimal use of inflation cuffs is increasing. Based on the so-called Pulse Wave Velocity (PWV) principle, this paper introduces and evaluates a novel concept of BP monitor that can be fully integrated within a chest sensor. After a preliminary calibration, the sensor provides non-occlusive beat-by-beat estimations of Mean Arterial Pressure (MAP) by measuring the Pulse Transit Time (PTT) of arterial pressure pulses travelling from the ascending aorta towards the subcutaneous vasculature of the chest. In a cohort of 15 healthy male subjects, a total of 462 simultaneous readings consisting of reference MAP and chest PTT were acquired. Each subject was recorded on three different days: D, D+3 and D+14. Overall, the implemented protocol induced MAP values ranging from 80 ± 6 mmHg at baseline to 107 ± 9 mmHg during isometric handgrip maneuvers. Agreement between reference and chest-sensor MAP values was tested using the intraclass correlation coefficient (ICC = 0.78) and Bland-Altman analysis (mean error = 0.7 mmHg, standard deviation = 5.1 mmHg). The cumulative percentage of MAP values provided by the chest sensor falling within ±5 mmHg of the reference MAP readings was 70%, within ±10 mmHg 91%, and within ±15 mmHg 98%. These results indicate that the chest sensor complies with the British Hypertension Society (BHS) requirements for Grade A BP monitors when applied to MAP readings. Grade A performance was maintained even two weeks after the initial subject-dependent calibration. In conclusion, this paper introduces a sensor and a calibration strategy to perform MAP measurements at the chest. The encouraging performance of the presented technique paves the way towards an ambulatory-compliant, continuous and non-occlusive BP monitoring system.
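The abstract does not specify the calibration model. A minimal sketch assuming a simple linear calibration of MAP against inverse PTT, together with the Bland-Altman summary and BHS-style within-threshold percentages reported above, might look like this (the calibration form is an assumption, not the paper's method):

```python
import numpy as np

def calibrate(ptt_s, map_mmHg):
    """Fit an assumed linear calibration MAP ~ a * (1/PTT) + b (illustrative only)."""
    a, b = np.polyfit(1.0 / np.asarray(ptt_s), np.asarray(map_mmHg), 1)
    return a, b

def evaluate(a, b, ptt_s, map_ref):
    """Compare calibrated estimates against reference MAP readings."""
    est = a / np.asarray(ptt_s) + b
    err = est - np.asarray(map_ref)
    return {
        "mean_error_mmHg": err.mean(),
        "sd_error_mmHg": err.std(ddof=1),                    # Bland-Altman style summary
        "pct_within_5": 100 * np.mean(np.abs(err) <= 5),
        "pct_within_10": 100 * np.mean(np.abs(err) <= 10),
        "pct_within_15": 100 * np.mean(np.abs(err) <= 15),   # BHS-style cumulative percentages
    }
```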
Abstract:
Part I of this series of articles focused on the construction of graphical probabilistic inference procedures, at various levels of detail, for assessing the evidential value of gunshot residue (GSR) particle evidence. The proposed models, in the form of Bayesian networks, address the issues of background presence of GSR particles, analytical performance (i.e., the efficiency of evidence searching and analysis procedures) and contamination. The use and practical implementation of Bayesian networks for case pre-assessment is also discussed. This paper, Part II, concentrates on Bayesian parameter estimation. This topic complements Part I in that it offers means for producing estimates usable for the numerical specification of the proposed probabilistic graphical models. Bayesian estimation procedures are the primary focus because they allow the scientist to combine his or her prior knowledge about the problem of interest with newly acquired experimental data. The present paper also considers further topics, such as the sensitivity of the likelihood ratio to uncertainty in parameters and the study of likelihood ratio values obtained for members of particular populations (e.g., individuals with or without exposure to GSR).
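As a generic illustration of the kind of Bayesian parameter estimation discussed (not the specific models or data of the paper), one could update a Beta prior on, say, the background-presence probability of GSR particles with survey counts and then propagate the posterior uncertainty into a likelihood ratio; all numbers below are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical survey: k individuals out of n with background GSR particles (illustrative counts).
n, k = 120, 7
alpha0, beta0 = 1.0, 1.0                  # uniform Beta prior encoding weak prior knowledge

posterior = stats.beta(alpha0 + k, beta0 + n - k)
print("posterior mean of background probability:", posterior.mean())

# Sensitivity of a simple likelihood ratio LR = p_exposed / p_background to parameter uncertainty:
p_exposed = 0.85                          # assumed probability of finding particles after exposure
samples = posterior.rvs(size=10_000, random_state=0)
lr = p_exposed / samples
print("LR 2.5-97.5 percentile interval:", np.percentile(lr, [2.5, 97.5]))
```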
Abstract:
The recommended dietary allowances of many expert committees (UK DHSS 1979, FAO/WHO/UNU 1985, USA NRC 1989) have set out the extra energy requirements necessary to support lactation on the basis of an efficiency of 80 per cent for human milk production. The metabolic efficiency of milk synthesis can be derived from measurements of resting energy expenditure in lactating women and in a matched control group of non-pregnant, non-lactating women. The results of the present study in Gambian women, as well as a review of human studies on energy expenditure during lactation performed in different countries, suggest an efficiency of human milk synthesis greater than the value currently used by expert committees. We propose that an average figure of 95 per cent would be more appropriate for calculating the energy cost of human lactation.
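To make the arithmetic concrete (the milk energy output figure below is illustrative and not taken from the study), the assumed synthetic efficiency changes the estimated extra energy requirement as follows:

```python
# Illustrative only: assume a milk energy output of about 2.0 MJ/day.
milk_energy_output_mj = 2.0

for efficiency in (0.80, 0.95):
    extra_requirement = milk_energy_output_mj / efficiency   # energy cost of synthesizing the milk
    print(f"efficiency {efficiency:.0%}: extra requirement {extra_requirement:.2f} MJ/day")
# The higher efficiency proposed in the abstract lowers the estimated extra energy cost of lactation.
```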
Abstract:
Biochemical systems are commonly modelled by systems of ordinary differential equations (ODEs). A particular class of such models, called S-systems, has recently gained popularity in biochemical system modelling. The parameters of an S-system are usually estimated from time-course profiles. However, finding these estimates is a difficult computational problem. Moreover, although several methods have recently been proposed to solve this problem for ideal profiles, relatively little progress has been reported for noisy profiles. We describe a special feature of a Newton-flow optimisation problem associated with S-system parameter estimation. This enables us to significantly reduce the search space, and it also lends itself to parameter estimation for noisy data. We illustrate the applicability of our method by applying it to noisy time-course data synthetically produced from previously published 4- and 30-dimensional S-systems. In addition, we propose an extension of our method that allows the detection of network topologies for small S-systems. In summary, we introduce a new method for estimating S-system parameters from time-course profiles, show that its performance compares favorably with competing methods for ideal profiles, and demonstrate that it also allows the determination of parameters for noisy profiles.
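For readers unfamiliar with the form, an S-system is a set of power-law ODEs, dx_i/dt = alpha_i * prod_j x_j^g_ij - beta_i * prod_j x_j^h_ij. A minimal sketch of simulating one and fitting its parameters to time-course data by generic least squares (a plain baseline, not the Newton-flow method of the paper) might be:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def s_system_rhs(t, x, alpha, g, beta, h):
    """S-system ODEs: dx_i/dt = alpha_i * prod_j x_j**g_ij - beta_i * prod_j x_j**h_ij."""
    x = np.maximum(x, 1e-9)                     # keep the power laws well defined
    return alpha * np.prod(x ** g, axis=1) - beta * np.prod(x ** h, axis=1)

def simulate(params, x0, t_eval, n):
    """Integrate the S-system for a flat parameter vector [alpha, beta, g, h]."""
    alpha, beta = params[:n], params[n:2 * n]
    g = params[2 * n:2 * n + n * n].reshape(n, n)
    h = params[2 * n + n * n:].reshape(n, n)
    sol = solve_ivp(s_system_rhs, (t_eval[0], t_eval[-1]), x0,
                    t_eval=t_eval, args=(alpha, g, beta, h), rtol=1e-6)
    return sol.y.T

def fit(observed, x0, t_eval, init_params, n):
    """Generic least-squares fit of all S-system parameters to (possibly noisy) profiles."""
    resid = lambda p: (simulate(p, x0, t_eval, n) - observed).ravel()
    return least_squares(resid, init_params).x
```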
Abstract:
To date, state-of-the-art seismic material parameter estimates from multi-component sea-bed seismic data are based on the assumption that the sea-bed consists of a fully elastic half-space. In reality, however, the shallow sea-bed generally consists of soft, unconsolidated sediments that are characterized by strong to very strong seismic attenuation. To explore the potential implications, we apply a state-of-the-art elastic decomposition algorithm to synthetic data for a range of canonical sea-bed models consisting of a viscoelastic half-space of varying attenuation. We find that in the presence of strong seismic attenuation, as quantified by Q-values of 10 or less, significant errors arise in the conventional elastic estimation of seismic properties. Tests on synthetic data indicate that these errors can be largely avoided by accounting for the inherent attenuation of the seafloor when estimating the seismic parameters. This can be achieved by replacing the real-valued expressions for the elastic moduli in the governing equations of the parameter estimation with their complex-valued viscoelastic equivalents. The practical application of our parameter estimation procedure yields realistic estimates of the elastic seismic material properties of the shallow sea-bed, whereas the corresponding Q-estimates appear to be biased towards values that are too low, particularly for S-waves. Given that the estimation of inelastic material parameters is notoriously difficult, particularly in the immediate vicinity of the sea-bed, this is expected to be of interest and importance for civil and ocean engineering purposes.
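The replacement described above amounts to giving each modulus an imaginary part controlled by Q. A minimal sketch of this substitution, using the common first-order approximation M* ~ M * (1 + i/Q) (the approximation and the numerical values are assumptions here, not formulas quoted from the paper):

```python
import numpy as np

def viscoelastic_moduli(vp, vs, rho, qp, qs):
    """Replace real elastic moduli by complex viscoelastic equivalents, M* ~ M * (1 + 1j/Q)."""
    mu = rho * vs**2                       # real shear modulus
    lam = rho * vp**2 - 2.0 * mu           # real Lame parameter lambda
    mu_c = mu * (1.0 + 1j / qs)
    # Attenuate the P-modulus (lambda + 2*mu) with Qp, then recover a complex lambda:
    p_mod_c = (lam + 2.0 * mu) * (1.0 + 1j / qp)
    lam_c = p_mod_c - 2.0 * mu_c
    return lam_c, mu_c

# Soft, strongly attenuating sea-bed sediment (illustrative values):
lam_c, mu_c = viscoelastic_moduli(vp=1600.0, vs=200.0, rho=1700.0, qp=20.0, qs=10.0)
print(lam_c, mu_c)
```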
Abstract:
In Switzerland, the annual cost of damage caused by natural hazards is considerable and has been increasing for several years despite the introduction of protective measures. Mainly driven by material damage, the majority of this cost is borne by building insurance companies as far as damage to buildings is concerned. In many European countries, governments and insurance companies have begun to frame their prevention strategies in terms of reducing vulnerability. In Switzerland, since 2004, the cost of damage due to natural hazards has exceeded the cost of damage due to fire, the traditional line of business of the cantonal insurance companies (ECA). This situation, which has many strategic implications for public risk management, results in particular from a fire prevention policy conducted effectively over several years, notably through the reduction of the vulnerability of buildings. By drawing on actuarial data and developing analysis tools, the thesis seeks to illustrate the relevance of such an approach when applied to damage caused by natural hazards. It examines the place of insurance and its involvement in targeted prevention of natural disasters. Integrated risk management requires a sound understanding of all risk parameters. The first part of the thesis is therefore devoted to the theoretical development of the key concepts that influence risk management, such as hazard, vulnerability, exposure and damage. The literature on this subject, very prolific in recent years, was reviewed and put into perspective in the context of this study, namely building insurance. Among the risk parameters, the thesis shows that vulnerability is a factor that can be influenced effectively in order to limit the cost of damage to buildings. This is first confirmed through the development of an analysis method, which led to a tool for estimating flood damage to buildings. The tool, designed for building insurers and, where appropriate, owners, comprises several steps, namely:
- assessment of vulnerability and damage potential;
- proposals for remedial and risk-reduction measures derived from an analysis of the costs generated by a potential flood;
- adaptation of a global strategy in high-risk areas based on the elements at risk.
The final part of the thesis is devoted to the study of a hail event in order to provide a better understanding of damage to buildings and their structure. For this, two samples were selected and analysed from the claims data available to the study. The results, both at the level of the insured portfolio and at the level of individual claims, allow the identification of new trends. A second objective of the study was to develop a hail model based on the available data. The model simulates a random distribution of intensities and, coupled with a risk model, provides a simulation of damage costs for the study area considered. The perspectives of this work allow a sharper focus on the role of insurance and on its needs in terms of prevention.
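The abstract describes the hail model only at a high level. As a generic illustration of coupling a simulated intensity distribution with a vulnerability curve to obtain portfolio damage costs (all distributions, parameters and portfolio values below are assumptions, not the thesis model), one might write:

```python
import numpy as np

rng = np.random.default_rng(0)

def vulnerability(intensity_cm):
    """Assumed damage-ratio curve: share of building value lost as a function of hail intensity."""
    return np.clip(0.02 * (intensity_cm - 1.0), 0.0, 1.0)

def simulate_event_costs(building_values, n_events=10_000):
    """Monte Carlo hail-damage simulation for a portfolio of insured buildings (illustrative)."""
    costs = np.empty(n_events)
    for i in range(n_events):
        # Random hail intensity per building, drawn from an assumed gamma distribution (cm):
        intensity = rng.gamma(shape=2.0, scale=1.0, size=building_values.size)
        costs[i] = np.sum(vulnerability(intensity) * building_values)
    return costs

portfolio = rng.uniform(200_000, 2_000_000, size=500)   # hypothetical building values (CHF)
costs = simulate_event_costs(portfolio)
print("mean event cost:", costs.mean(), "95th percentile:", np.percentile(costs, 95))
```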