866 results for Homeostasis Model Assessment
Abstract:
Phosphate homeostasis was studied in a monocotyledonous model plant through the characterization of the PHO1 gene family in rice (Oryza sativa). Bioinformatics and phylogenetic analysis showed that the rice genome has three PHO1 homologs, which cluster with the Arabidopsis (Arabidopsis thaliana) AtPHO1 and AtPHO1;H1, the only two genes known to be involved in root-to-shoot transfer of phosphate. In contrast to the Arabidopsis PHO1 gene family, all three rice PHO1 genes have a cis-natural antisense transcript located at the 5′ end of the genes. Strand-specific quantitative reverse transcription-PCR analyses revealed distinct patterns of expression for sense and antisense transcripts of all three genes, both at the level of tissue expression and in response to nutrient stress. OsPHO1;2 was the most abundantly expressed gene in roots, for both sense and antisense transcripts. However, while the OsPHO1;2 sense transcript remained relatively stable under various nutrient deficiencies, the antisense transcript was highly induced by inorganic phosphate (Pi) deficiency. Characterization of Ospho1;1 and Ospho1;2 insertion mutants revealed that only Ospho1;2 mutants had defects in Pi homeostasis, namely a strong reduction in Pi transfer from root to shoot, accompanied by low shoot and high root Pi. Our data identify OsPHO1;2 as playing a key role in the transfer of Pi from roots to shoots in rice and indicate that this gene could be regulated by its cis-natural antisense transcript. Furthermore, phylogenetic analysis of PHO1 homologs in monocotyledons and dicotyledons revealed the emergence of a distinct clade of PHO1 genes in dicotyledons, which includes members with roles other than long-distance Pi transport.
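The abstract does not state how the strand-specific qRT-PCR signals were quantified; a common choice for this kind of relative quantification is the 2^(-ΔΔCt) method. Below is a minimal Python sketch of that calculation; all Ct values are hypothetical, and the gene names in the comments are only for orientation.

```python
import numpy as np

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^-ddCt: fold change of a target transcript (e.g. the OsPHO1;2
    antisense transcript) versus a reference gene, relative to a control
    condition. All inputs are raw Ct values."""
    d_ct_treated = ct_target - ct_ref              # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: Pi-deficient roots vs. Pi-sufficient controls
print(relative_expression(22.1, 18.0, 25.3, 18.2))  # -> 8.0-fold induction
```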
Abstract:
Background: Demand for home care services has increased considerably, along with the growing complexity of cases and variability among resources and providers. Designing services that guarantee co-ordination and integration across providers and levels of care is of paramount importance. The aim of this study is to determine the effectiveness of a new case-management-based home care delivery model which has been implemented in Andalusia (Spain). Methods: Quasi-experimental, controlled, non-randomised, multi-centre study on the population receiving home care services, comparing the outcomes of the new model, which included nurse-led case management, versus the conventional one. Primary endpoints: functional status, satisfaction and use of healthcare resources. Secondary endpoints: recruitment and caregiver burden, mortality, institutionalisation, quality of life and family function. Analyses were performed at baseline and at two, six and twelve months. A bivariate analysis was conducted with Student's t-test, the Mann-Whitney U test, and the chi-squared test. Kaplan-Meier and log-rank tests were performed to compare survival and institutionalisation. A multivariate analysis was performed to pinpoint factors that impact on improvement of functional ability. Results: Baseline differences in functional capacity, significantly lower in the intervention group (RR: 1.52; 95% CI: 1.05-2.21; p = 0.0016), disappeared at six months (RR: 1.31; 95% CI: 0.87-1.98; p = 0.178). At six months, caregiver burden showed a slight reduction in the intervention group, whereas it increased notably in the control group (baseline Zarit test: 57.06, 95% CI: 54.77-59.34 vs. 60.50, 95% CI: 53.63-67.37; p = 0.264; Zarit test at six months: 53.79, 95% CI: 49.67-57.92 vs. 66.26, 95% CI: 60.66-71.86; p = 0.002). Patients in the intervention group received more physiotherapy (7.92, 95% CI: 5.22-10.62 vs. 3.24, 95% CI: 1.37-5.31; p = 0.0001) and, on average, required fewer home care visits (9.40, 95% CI: 7.89-10.92 vs. 11.30, 95% CI: 9.10-14.54). No differences were found in terms of frequency of visits to A&E or hospital re-admissions. Furthermore, patients in the control group perceived higher levels of satisfaction (16.88, 95% CI: 16.32-17.43; range: 0-21, vs. 14.65, 95% CI: 13.61-15.68; p = 0.001). Conclusion: A home care service model that includes nurse-led case management streamlines access to healthcare services and resources, while impacting positively on patients' functional ability and caregiver burden, with increased levels of satisfaction.
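The survival and institutionalisation comparison described above (Kaplan-Meier curves plus a log-rank test) can be sketched in Python with the lifelines package. All arrays below are random placeholders, not the study's data or code.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Placeholder follow-up times (months, up to 12) and death indicators (0/1)
t_int, d_int = rng.uniform(1, 12, 80), rng.integers(0, 2, 80)   # intervention
t_ctl, d_ctl = rng.uniform(1, 12, 80), rng.integers(0, 2, 80)   # control

# Kaplan-Meier curve for one arm
km = KaplanMeierFitter()
km.fit(t_int, event_observed=d_int, label="case management")
print(km.median_survival_time_)

# Log-rank test comparing the two survival curves, as in the study
res = logrank_test(t_int, t_ctl, event_observed_A=d_int, event_observed_B=d_ctl)
print(res.p_value)
```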
Abstract:
This study is part of an ongoing collaborative effort between the medical and the signal processing communities to promote research on applying standard Automatic Speech Recognition (ASR) techniques to the automatic diagnosis of patients with severe obstructive sleep apnoea (OSA). Early detection of severe apnoea cases is important so that patients can receive early treatment, and effective ASR-based detection could dramatically cut medical testing time. Working with a carefully designed speech database of healthy and apnoea subjects, we describe an acoustic search for distinctive apnoea voice characteristics. We also study abnormal nasalization in OSA patients by modelling vowels in nasal and non-nasal phonetic contexts using Gaussian Mixture Model (GMM) pattern recognition on speech spectra. Finally, we present experimental findings regarding the discriminative power of GMMs applied to severe apnoea detection. We achieved an 81% correct classification rate, which is very promising and underpins the interest in this line of inquiry.
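As a hedged illustration of the GMM pattern recognition step described above, one GMM per class can be trained on spectral feature vectors and a recording classified by the average log-likelihood ratio of its frames. Feature extraction and the study's speech database are not reproduced here; the scikit-learn sketch below uses random placeholder features.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Placeholder spectral feature vectors (e.g. MFCCs), one row per speech frame
X_apnoea  = rng.normal(0.5, 1.0, size=(500, 13))
X_control = rng.normal(0.0, 1.0, size=(500, 13))

# One GMM per class, as in typical GMM-based condition modelling
gmm_apnoea  = GaussianMixture(n_components=8, random_state=0).fit(X_apnoea)
gmm_control = GaussianMixture(n_components=8, random_state=0).fit(X_control)

def classify(frames):
    """Average per-frame log-likelihood ratio over one subject's recording."""
    llr = gmm_apnoea.score_samples(frames) - gmm_control.score_samples(frames)
    return "apnoea" if llr.mean() > 0 else "control"

print(classify(rng.normal(0.5, 1.0, size=(200, 13))))  # -> "apnoea"
```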
Abstract:
The simultaneous recording of scalp electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) can provide unique insights into the dynamics of human brain function, and the increased functional sensitivity offered by ultra-high field fMRI opens exciting perspectives for the future of this multimodal approach. However, simultaneous recordings are susceptible to various types of artifacts, many of which scale with magnetic field strength and can seriously compromise both EEG and fMRI data quality in recordings above 3 T. The aim of the present study was to implement and characterize an optimized setup for simultaneous EEG-fMRI in humans at 7 T. The effects of EEG cable length and geometry for signal transmission between the cap and amplifiers were assessed in a phantom model, with specific attention to noise contributions from the MR scanner cold heads. Cable shortening (down to 12 cm from cap to amplifiers) and bundling effectively reduced environment noise by up to 84% in average power and 91% in inter-channel power variability. Subject safety was assessed and confirmed via numerical simulations of RF power distribution and temperature measurements on a phantom model, building on the limited existing literature at ultra-high field. MRI data degradation effects due to the EEG system were characterized via B0 and B1+ field mapping on a human volunteer, demonstrating important, although not prohibitive, B1 disruption effects. With the optimized setup, simultaneous EEG-fMRI acquisitions were performed on 5 healthy volunteers undergoing two visual paradigms: an eyes-open/eyes-closed task and a visual evoked potential (VEP) paradigm using reversing-checkerboard stimulation. EEG data exhibited clear occipital alpha modulation and average VEPs, respectively, with concomitant BOLD signal changes. On a single-trial level, alpha power variations could be observed with relative confidence on all trials; VEP detection was more limited, although statistically significant responses could be detected in more than 50% of trials for every subject. Overall, we conclude that the proposed setup is well suited for simultaneous EEG-fMRI at 7 T.
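The quoted reductions in average noise power and inter-channel power variability correspond to simple per-channel power statistics; a minimal sketch with synthetic channel data follows (the study's actual preprocessing is not reproduced, and the noise levels are placeholders).

```python
import numpy as np

def noise_stats(eeg):
    """eeg: channels x samples array of noise-only recordings.
    Returns average power across channels and its inter-channel variability."""
    power = np.mean(eeg ** 2, axis=1)          # per-channel mean power
    return power.mean(), power.std()

rng = np.random.default_rng(2)
long_cables  = rng.normal(0, 10.0, size=(32, 5000))   # placeholder noise
short_cables = rng.normal(0,  4.0, size=(32, 5000))   # shortened, bundled

p_long, v_long = noise_stats(long_cables)
p_short, v_short = noise_stats(short_cables)
print(f"power reduction: {1 - p_short / p_long:.0%}")
print(f"variability reduction: {1 - v_short / v_long:.0%}")
```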
Abstract:
BACKGROUND/OBJECTIVES: (1) To cross-validate tetra- (4-BIA) and octopolar (8-BIA) bioelectrical impedance analysis vs dual-energy X-ray absorptiometry (DXA) for the assessment of total and appendicular body composition and (2) to evaluate the accuracy of external 4-BIA algorithms for the prediction of total body composition, in a representative sample of Swiss children. SUBJECTS/METHODS: A representative sample of 333 Swiss children aged 6-13 years from the Kinder-Sportstudie (KISS) (ISRCTN15360785). Whole-body fat-free mass (FFM) and appendicular lean tissue mass were measured with DXA. Body resistance (R) was measured at 50 kHz with 4-BIA and segmental body resistance at 5, 50, 250 and 500 kHz with 8-BIA. The resistance index (RI) was calculated as height²/R. Selection of predictors (gender, age, weight, RI4 and RI8) for BIA algorithms was performed using bootstrapped stepwise linear regression on 1000 samples. We calculated 95% confidence intervals (CI) of regression coefficients and measures of model fit using bootstrap analysis. Limits of agreement were used as measures of interchangeability of BIA with DXA. RESULTS: 8-BIA was more accurate than 4-BIA for the assessment of FFM (root mean square error (RMSE) = 0.90 kg (95% CI 0.82-0.98) vs 1.12 kg (1.01-1.24); limits of agreement −1.80 to 1.80 kg vs −2.24 to 2.24 kg). 8-BIA also gave accurate estimates of appendicular body composition, with RMSE ≤ 0.10 kg for arms and ≤ 0.24 kg for legs. All external 4-BIA algorithms performed poorly, with substantial negative proportional bias (r ≥ 0.48, P < 0.001). CONCLUSIONS: In a representative sample of young Swiss children (1) 8-BIA was superior to 4-BIA for the prediction of FFM, (2) external 4-BIA algorithms gave biased predictions of FFM and (3) 8-BIA was an accurate predictor of segmental body composition.
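The resistance index and the agreement statistics reported above follow standard formulas (RI = height²/R, RMSE against DXA, Bland-Altman limits of agreement). The sketch below illustrates them on synthetic data, not the KISS cohort; the regression coefficients linking RI to FFM are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
height_cm  = rng.normal(140, 10, 100)          # placeholder anthropometry
resistance = rng.normal(600, 50, 100)          # ohms at 50 kHz
ri = height_cm ** 2 / resistance               # resistance index, cm^2/ohm

# Invented calibration: "measured" DXA FFM and a BIA prediction from RI
ffm_dxa = 0.04 * ri + 2.0 + rng.normal(0, 0.9, 100)   # kg
ffm_bia = 0.04 * ri + 2.0                             # kg

rmse = np.sqrt(np.mean((ffm_bia - ffm_dxa) ** 2))
diff = ffm_bia - ffm_dxa
loa = (diff.mean() - 1.96 * diff.std(), diff.mean() + 1.96 * diff.std())
print(f"RMSE = {rmse:.2f} kg, limits of agreement = {loa[0]:.2f} to {loa[1]:.2f} kg")
```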
Abstract:
In this study, hypothalamic activation was induced by dehydration-induced anorexia (DIA) and overnight food suppression (OFS) in female rats. Assessment of the hypothalamic response to these challenges by manganese-enhanced MRI showed increased neuronal activity in the paraventricular nuclei (PVN) and lateral hypothalamus (LH), both areas known to be involved in the regulation of food intake. The effects of DIA and OFS were compared by generating T-score maps. Increased neuronal activation was detected in the PVN and LH of DIA rats relative to OFS rats. In addition, the neurochemical profiles of the PVN and LH were measured by ¹H MRS at 14.1 T. Significant increases in metabolite levels were measured in DIA and OFS rats relative to control rats: statistically significant increases in γ-aminobutyric acid were found in DIA (p = 0.0007) and OFS (p < 0.001) rats, and lactate increased significantly in DIA (p = 0.03), but not in OFS, rats. This work shows that manganese-enhanced MRI coupled to ¹H MRS at high field is a promising noninvasive method for the investigation of the neural pathways and mechanisms involved in the control of food intake, in the autonomic and endocrine control of energy metabolism, and in the regulation of body weight.
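A T-score map of the kind used above to compare DIA and OFS can be computed voxelwise as a two-sample t statistic. The sketch below assumes a Welch-type formula and synthetic image arrays; the study's actual pipeline is not described in the abstract.

```python
import numpy as np

def t_score_map(group_a, group_b):
    """Voxelwise two-sample t-score (Welch) between two groups of 3-D images,
    each array shaped (subjects, x, y, z) -- one common way to build T-maps."""
    ma, mb = group_a.mean(0), group_b.mean(0)
    va, vb = group_a.var(0, ddof=1), group_b.var(0, ddof=1)
    na, nb = len(group_a), len(group_b)
    return (ma - mb) / np.sqrt(va / na + vb / nb)

rng = np.random.default_rng(4)
dia = rng.normal(1.2, 0.3, size=(8, 16, 16, 8))   # placeholder MEMRI signal
ofs = rng.normal(1.0, 0.3, size=(8, 16, 16, 8))
print(t_score_map(dia, ofs).max())
```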
Abstract:
Antiretroviral therapy has been associated with side effects, either from the drug itself or in conjunction with the effects of human immunodeficiency virus infection. Here, we evaluated the side effects of the protease inhibitor (PI) indinavir in hamsters consuming a normal or high-fat diet. Indinavir treatment increased the hamster death rate and resulted in an increase in triglyceride, cholesterol and glucose serum levels and a reduction in anti-oxLDL auto-antibodies. The treatment led to histopathological alterations of the kidney and the heart. These results suggest that hamsters are an interesting model for the study of the side effects of antiretroviral drugs, such as PIs.
Abstract:
Neglecting health effects from indoor pollutant emissions and exposure, as currently done in Life Cycle Assessment (LCA), may result in product or process optimizations at the expense of workers' or consumers' health. To close this gap, methods for considering indoor exposure to chemicals are needed to complement the methods for outdoor human exposure assessment already in use. This paper summarizes the work of an international expert group on the integration of human indoor and outdoor exposure in LCA, within the UNEP/SETAC Life Cycle Initiative. A new methodological framework is proposed for a general procedure to include human-health effects from indoor exposure in LCA. Exposure models from occupational hygiene and household indoor air quality studies and practices are critically reviewed and recommendations are provided on the appropriateness of various model alternatives in the context of LCA. A single-compartment box model is recommended for use as a default in LCA, enabling one to screen occupational and household exposures consistent with the existing models to assess outdoor emission in a multimedia environment. An initial set of model parameter values was collected. The comparison between indoor and outdoor human exposure per unit of emission shows that for many pollutants, intake per unit of indoor emission may be several orders of magnitude higher than for outdoor emissions. It is concluded that indoor exposure should be routinely addressed within LCA.
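At steady state, the recommended single-compartment box model reduces to a well-mixed concentration C = E/(N·V) and an intake fraction iF = n·IR/(N·V), which also makes the indoor-vs-outdoor gap plausible: with typical room parameters iF is on the order of 10⁻² to 10⁻¹, far above typical outdoor values. Below is a minimal sketch with illustrative parameter values; these are assumptions, not the expert group's dataset.

```python
# Steady-state single-compartment (one-box) indoor model, as recommended
# above as the LCA default. All parameter values are illustrative only.
emission_rate = 1.0e-3      # E, g/h, pollutant emitted indoors
room_volume   = 50.0        # V, m^3
air_exchange  = 0.5         # N, 1/h, air changes per hour
inhalation    = 0.8         # IR, m^3/h per person
occupants     = 2           # n, persons exposed

# Well-mixed steady-state concentration: C = E / (N * V)
concentration = emission_rate / (air_exchange * room_volume)   # g/m^3

# Intake fraction: inhaled mass per unit mass emitted, iF = n * IR / (N * V)
intake_fraction = occupants * inhalation / (air_exchange * room_volume)
print(f"C = {concentration:.2e} g/m3, iF = {intake_fraction:.3f}")
```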
Abstract:
Background: Physical activity (PA) and related energy expenditure (EE) are often assessed by means of a single technique. Because of inherent limitations, single techniques may not allow for an accurate assessment of both PA and related EE. The aim of this study was to develop a model to accurately assess common PA types and durations, and thus EE, in free-living conditions by combining data from a global positioning system (GPS) and 2 accelerometers. Methods: Forty-one volunteers participated in the study. First, a model was developed and adjusted to measured EE with a first group of subjects (Protocol I, n = 12) who performed 6 structured and supervised PA types. Then, the model was validated over 2 experimental phases with 2 groups (n = 12 and n = 17) performing scheduled (Protocol I) and spontaneous common activities in real-life conditions (Protocol II). Predicted EE was compared with actual EE as measured by portable indirect calorimetry. Results: In Protocol I, the performed PA types could be recognized with little error. The duration of each PA type could be predicted to within 1 minute. Measured and predicted EE were strongly associated (r = .97, P < .001). Conclusion: Combining GPS and 2 accelerometers allows for an accurate assessment of PA and EE in free-living situations.
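A hedged sketch of the sensor-fusion idea: label each minute from GPS speed and accelerometer counts, then sum per-activity EE rates over the labelled durations. The decision thresholds and EE rates below are invented for illustration; the study's actual model was fitted to indirect calorimetry and is not reproduced here.

```python
import numpy as np

# Hypothetical per-activity EE rates (kcal/min)
EE_RATE = {"rest": 1.2, "walk": 4.0, "run": 10.0, "bike": 7.0}

def label_minute(gps_speed_kmh, accel_counts):
    """Toy decision rules combining GPS speed and accelerometer output."""
    if gps_speed_kmh > 12 and accel_counts < 2000:
        return "bike"                  # fast but low body acceleration
    if gps_speed_kmh > 7:
        return "run"
    if accel_counts > 500:
        return "walk"
    return "rest"

speeds = np.array([0.0, 5.0, 9.0, 15.0, 0.5])      # km/h, one value per minute
counts = np.array([100, 1500, 4000, 1200, 200])    # accelerometer counts

labels = [label_minute(s, c) for s, c in zip(speeds, counts)]
print(labels, sum(EE_RATE[l] for l in labels), "kcal over 5 min")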
Abstract:
In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and empirical logit approaches, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with that of its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in an approximately threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model. Moreover, the extent of adjusting for error is influenced by the number and forms of covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model.
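A two-part calibration model of the kind described can be sketched with statsmodels: a logistic part for the probability of any consumption on the recall day and a linear part for the log amount among consumers, whose product gives the calibrated intake fed to the hazard model. All data, covariates, and coefficients below are synthetic placeholders, not EPIC values.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 1000
ffq = rng.gamma(2.0, 50.0, n)                     # FFQ-reported intake (g/day)
# Synthetic truth: probability of any intake on the recall day rises with FFQ
eaten = rng.random(n) < 1 / (1 + np.exp(-(ffq - 80) / 40))
recall = np.where(eaten,
                  np.exp(0.8 * np.log(ffq + 1) + rng.normal(0, 0.4, n)), 0.0)

X = sm.add_constant(ffq)

# Part 1: probability of non-zero intake on the 24-hour recall
p_model = sm.Logit(eaten.astype(float), X).fit(disp=0)
p_hat = p_model.predict(X)

# Part 2: expected amount among consumers (log scale for heteroscedasticity)
mask = recall > 0
a_model = sm.OLS(np.log(recall[mask]), X[mask]).fit()
amount_hat = np.exp(a_model.predict(X) + a_model.mse_resid / 2)  # lognormal back-transform

calibrated = p_hat * amount_hat   # E[short-term intake | FFQ]
print(calibrated[:5].round(1))
```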
Abstract:
Summary:
1. Measuring health literacy in Switzerland: a review of six surveys
   1.1 Comparison of questionnaires
   1.2 Measures of health literacy in Switzerland
   1.3 Discussion of Swiss data on HL
   1.4 Description of the six surveys:
       1.4.1 Current health trends and health literacy in the Swiss population (gfs-UNIVOX)
       1.4.2 Nutrition, physical exercise and body weight: opinions and perceptions of the Swiss population (USI)
       1.4.3 Health Literacy in Switzerland (ISPMZ)
       1.4.4 Swiss Health Survey (SHS)
       1.4.5 Survey of Health, Ageing and Retirement in Europe (SHARE)
       1.4.6 Adult Literacy and Life Skills Survey (ALL)
2. Economic costs of low health literacy in Switzerland: a rough calculation
Appendix: Screenshots of the cost model
Abstract:
In this paper, we perform a societal and economic risk assessment for debris flows at the regional scale for lower Valtellina, Northern Italy. We apply a simple empirical debris-flow model, FLOW-R, which couples a probabilistic flow routing algorithm with an energy line approach, providing the relative probability of transit and the maximum kinetic energy for each cell. By assessing the vulnerability of people and of other exposed elements (buildings, public facilities, crops, woods, communication lines), and their economic value, we calculated the expected annual losses both in terms of lives (societal risk) and goods (direct economic risk). For societal risk assessment, we distinguish between day and night scenarios. The distribution of people at different moments of the day was considered, accounting for occupational and recreational activities, to provide a more realistic assessment of risk. Market studies were performed in order to assign a realistic economic value to goods, structures, and lifelines. As terrain unit, a 20 m x 20 m cell was used, in accordance with data availability and the spatial resolution required for a risk assessment at this scale. Societal risk for the whole area amounts to 1.98 and 4.22 deaths/year for the day and night scenarios, respectively, with a maximum of 0.013 deaths/year/cell. Economic risk for goods amounts to 1,760,291 €/year, with a maximum of 13,814 €/year/cell.
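Per-cell expected annual loss is the product of hazard probability, vulnerability, and exposed value (or exposed persons for societal risk). The sketch below applies this on a synthetic 20 m raster; all parameter fields are random placeholders, not the Valtellina data.

```python
import numpy as np

rng = np.random.default_rng(6)
shape = (100, 100)                               # 20 m x 20 m raster cells

p_transit     = rng.beta(0.5, 20.0, shape)       # annual prob. of debris-flow transit
vulnerability = rng.uniform(0.0, 0.6, shape)     # fraction of value lost if hit
value_eur     = rng.uniform(0.0, 5.0e5, shape)   # exposed economic value per cell
people        = rng.poisson(0.05, shape)         # expected occupants (night scenario)

# Expected annual loss per cell = hazard x vulnerability x exposure
econ_risk   = p_transit * vulnerability * value_eur   # EUR/year per cell
social_risk = p_transit * vulnerability * people      # deaths/year per cell

print(f"economic risk: {econ_risk.sum():,.0f} EUR/year (max {econ_risk.max():,.0f}/cell)")
print(f"societal risk: {social_risk.sum():.2f} deaths/year")
```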
Abstract:
Computed tomography (CT) is an imaging technique in which interest has kept growing since its introduction in the early 1970s. In the clinical environment, this imaging system has become the gold-standard modality because of its high sensitivity in producing accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionizing radiation on the population. To ensure a benefit-risk balance that works in favor of the patient, it is important to balance image quality and dose in order to avoid unnecessary patient exposure.

If this balance is important for adults, it should be an absolute priority for children undergoing CT examinations, especially for patients suffering from diseases requiring several follow-up examinations over their lifetime. Indeed, children and young adults are more sensitive to ionizing radiation and have a longer life expectancy than adults. For this population, the risk of developing a radiation-induced cancer, whose latency period can exceed 20 years, is significantly higher than for adults. Assuming that each examination is justified, it becomes a priority to optimize CT acquisition protocols in order to minimize the dose delivered to the patient. CT technology has been advancing at a rapid pace, and since 2009 new iterative image reconstruction techniques, called statistical iterative reconstructions, have been introduced to decrease patient exposure and improve image quality.

The goal of the present work was to determine the potential of statistical iterative reconstructions to reduce, as much as possible, the dose delivered during CT examinations of children and young adults while maintaining an image quality that allows diagnosis, in order to propose optimized protocols.

The optimization step requires evaluating both the delivered dose and the image quality useful for diagnosis. While the dose is estimated using CT indices (CTDIvol and DLP), the particularity of this research was to use two radically different approaches to evaluate image quality. The first, "physical" approach computes physical metrics (SD, MTF, NPS, etc.) measured on phantoms under well-defined conditions. Although this approach has limitations, because it does not take the radiologist's perception into account, it enables fast and simple characterization of certain image properties. The second, "clinical" approach was based on the evaluation of anatomical structures (diagnostic criteria) present on patient images. Radiologists involved in the assessment step were asked to score the diagnostic quality of these structures using a simple rating scale. This approach is relatively complicated to implement and time-consuming; nevertheless, it has the advantage of being very close to radiologists' practice and can be considered a reference method.

Among the main results, this work showed that the statistical iterative reconstruction algorithms studied in the clinic (ASIR and VEO) have a strong potential to reduce CT dose (by up to 90%). However, by their mechanisms, they modify the appearance of the image, introducing a change in texture that may affect the quality of the diagnosis. By comparing the results of the "clinical" and "physical" approaches, it was shown that this change in texture corresponds to a modification of the noise frequency spectrum, whose analysis makes it possible to anticipate or avoid a loss of diagnostic quality. This work also demonstrates that integrating these new reconstruction techniques into the clinic cannot be done simply on the basis of protocols designed for conventional reconstructions. The conclusions of this work, together with the image quality tools developed, can also guide future studies in the field of image quality, such as texture analysis or model observers for CT.
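The noise power spectrum analysis used above to anticipate texture changes can be sketched as a standard ensemble NPS estimate over detrended noise ROIs; the pixel size and noise patches below are placeholders, not the thesis data.

```python
import numpy as np

def nps_2d(noise_rois, pixel_mm=0.5):
    """Ensemble 2-D noise power spectrum from detrended noise ROIs
    (noise_rois: n x N x N array), the metric used to track texture shifts."""
    n, N, _ = noise_rois.shape
    detrended = noise_rois - noise_rois.mean(axis=(1, 2), keepdims=True)
    spectra = np.abs(np.fft.fft2(detrended)) ** 2
    return (pixel_mm ** 2 / (N * N)) * spectra.mean(axis=0)

rng = np.random.default_rng(7)
rois = rng.normal(0, 10, size=(50, 64, 64))    # placeholder noise patches
nps = nps_2d(rois)
print(nps.sum() * (1 / (64 * 0.5)) ** 2)       # integrates back to ~variance (100)
```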
Abstract:
BACKGROUND: Physicians need a specific risk-stratification tool to facilitate safe and cost-effective approaches to the management of patients with cancer and acute pulmonary embolism (PE). The objective of this study was to develop a simple risk score for predicting 30-day mortality in patients with PE and cancer by using measures readily obtained at the time of PE diagnosis. METHODS: Investigators randomly allocated 1,556 consecutive patients with cancer and acute PE from the international multicenter Registro Informatizado de la Enfermedad TromboEmbólica to derivation (67%) and internal validation (33%) samples. The external validation cohort for this study consisted of 261 patients with cancer and acute PE. Investigators compared 30-day all-cause mortality and nonfatal adverse medical outcomes across the derivation and two validation samples. RESULTS: In the derivation sample, multivariable analyses produced the risk score, which contained six variables: age > 80 years, heart rate ≥ 110/min, systolic BP < 100 mm Hg, body weight < 60 kg, recent immobility, and presence of metastases. In the internal validation cohort (n = 508), the 22.2% of patients (113 of 508) classified as low risk by the prognostic model had a 30-day mortality of 4.4% (95% CI, 0.6%-8.2%) compared with 29.9% (95% CI, 25.4%-34.4%) in the high-risk group. In the external validation cohort, the 18% of patients (47 of 261) classified as low risk by the prognostic model had a 30-day mortality of 0%, compared with 19.6% (95% CI, 14.3%-25.0%) in the high-risk group. CONCLUSIONS: The developed clinical prediction rule accurately identifies low-risk patients with cancer and acute PE.
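The abstract names the six predictors but not their weights; purely for illustration, the sketch below assigns one point per predictor and treats zero points as low risk. This scoring is an assumption, not the published rule.

```python
def pe_cancer_risk_score(age, heart_rate, sbp, weight_kg,
                         recent_immobility, metastases):
    """Hypothetical scoring of the six predictors named above: one point
    each (the published weights are not reproduced here)."""
    points = sum([
        age > 80,
        heart_rate >= 110,        # beats/min
        sbp < 100,                # systolic BP, mm Hg
        weight_kg < 60,
        bool(recent_immobility),
        bool(metastases),
    ])
    return points, ("low risk" if points == 0 else "high risk")

print(pe_cancer_risk_score(72, 95, 120, 68, False, False))  # (0, 'low risk')
print(pe_cancer_risk_score(84, 115, 90, 55, True, True))    # (6, 'high risk')
```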
Abstract:
In the context of Systems Biology, computer simulations of gene regulatory networks provide a powerful tool to validate hypotheses and to explore possible system behaviors. Nevertheless, modeling a system poses challenges of its own; in particular, the model calibration step is often difficult due to insufficient data. For developmental systems, for example, mostly qualitative data describing the developmental trajectory are available, while common calibration techniques rely on high-resolution quantitative data. Focusing on the calibration of differential equation models for developmental systems, this study investigates different approaches to utilizing the available data to overcome these difficulties. More specifically, the fact that developmental processes are hierarchically organized is exploited to increase convergence rates of the calibration process as well as to save computation time. Using a gene regulatory network model for stem cell homeostasis in Arabidopsis thaliana, the performance of the different approaches is evaluated, documenting considerable gains provided by the proposed hierarchical approach.
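A minimal sketch of the hierarchical idea: calibrate the upstream module of a toy two-gene ODE model first, then freeze its parameter while fitting the downstream one. The model, rates, and data below are invented to illustrate the staged calibration only, not the Arabidopsis network.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize

t = np.linspace(0, 10, 20)

def upstream(x, t, k1):                 # gene A: constant production, decay
    return k1 - 0.5 * x

def downstream(y, t, k1, k2):           # gene B driven by A's steady level
    a = k1 / 0.5
    return k2 * a - 0.3 * y

# Synthetic trajectories standing in for the (qualitative) calibration data
data_a = odeint(upstream, 0.0, t, args=(1.0,)).ravel()
data_b = odeint(downstream, 0.0, t, args=(1.0, 0.4)).ravel()

# Stage 1: fit the upstream module alone
fit1 = minimize(lambda p: np.sum((odeint(upstream, 0.0, t, args=(p[0],)).ravel()
                                  - data_a) ** 2), x0=[0.2])
# Stage 2: freeze k1 and fit only the downstream parameter (hierarchical step)
k1 = fit1.x[0]
fit2 = minimize(lambda p: np.sum((odeint(downstream, 0.0, t, args=(k1, p[0])).ravel()
                                  - data_b) ** 2), x0=[0.1])
print(k1, fit2.x[0])   # should recover ~1.0 and ~0.4
```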