863 results for Liver Function Test
Abstract:
The current study investigated the exculpatory value of alibi evidence when presented together with various types of incriminating evidence. Previous research has reported that alibi evidence can weaken the effects of DNA evidence and eyewitness identification. The present study assessed the effectiveness of alibi evidence in counteracting a defendant's confession (Experiment 1) and eyewitness evidence (Experiment 2). In Experiment 1, three levels of alibi evidence (none, weak, strong) were combined with three levels of confession evidence (voluntary, elicited under low pressure, elicited under high pressure). Results indicated significant main effects of confession and alibi and an alibi-by-confession interaction. Among participants exposed to a high-pressure confession, those in the strong-alibi condition rendered lower guilt estimates than those in the no-alibi condition. In Experiment 2, three levels of alibi were combined with two levels of eyewitness evidence (bad view, good view). A main effect of alibi was obtained, but no interaction between alibi and eyewitness evidence. An explanation of this pattern is based in part on the Story Model (Pennington & Hastie, 1992) and a novel “culpability threshold” model of juror decision-making. The Story Model suggests that jurors generate verdict stories (interpretations of events consistent with a guilty or not-guilty verdict) based on trial evidence. If the evidence in favor of guilt exceeds jurors' threshold for perceiving culpability, jurors will fail to properly consider exonerating evidence. However, when the strength of incriminating evidence does not exceed the jurors' threshold, they are likely to give appropriate consideration to exculpatory evidence in their decisions. Presentation of a reliable confession in Experiment 1 exceeded jurors' culpability threshold and rendered alibi evidence largely irrelevant.
In contrast, presentation of a high-pressure confession failed to exceed jurors' culpability threshold, so jurors turned to alibi evidence in their decisions. Similarly, in the second experiment, eyewitness evidence (in general) was not strong enough to surpass the culpability threshold, and thus jurors incorporated alibi evidence in their decisions. A third study is planned to further test this “culpability threshold” model, explore various types of alibi evidence, and clarify when exculpatory evidence will sufficiently weaken the prosecution's “story.”
Abstract:
The purpose of this study was threefold: first, to investigate variables associated with learning and performance as measured by the National Council Licensure Examination for Registered Nurses (NCLEX-RN); second, to validate the predictive value of the Assessment Technologies Institute (ATI) achievement exit exam; and lastly, to provide a model that could be used to predict performance on the NCLEX-RN, with implications for admission and curriculum development. The study was based on school learning theory, which implies that acquisition in school learning is a function of aptitude (pre-admission measures), opportunity to learn, and quality of instruction (program measures). Data were from 298 graduates of an associate degree nursing program in the Southeastern United States. Of the 298 graduates, 142 were Hispanic, 87 were Black, non-Hispanic, 54 were White, non-Hispanic, and 15 were reported as Other. The graduates took the NCLEX-RN for the first time during the years 2003–2005. The study used a predictive, correlational design that relied upon retrospective data. Point biserial correlations and chi-square analyses were used to investigate relationships between 19 selected predictor variables and the dichotomous criterion variable, NCLEX-RN. The correlation and chi-square findings indicated that men did better on the NCLEX-RN than women; Blacks had the highest failure rates, followed by Hispanics; older students were more likely to pass the exam than younger students; and students who passed the exam started and completed the nursing program with a higher grade point average than those who failed the exam. Using logistic regression, five statistical models that used variables associated with learning and student performance on the NCLEX-RN were tested against a model adapted from Bloom's (1976) and Carroll's (1963) school learning theories.
The derived model was: NCLEX-RN success = f(Nurse Entrance Test score, advanced medical-surgical nursing course grade). The model demonstrates that student performance on the NCLEX-RN can be predicted by one pre-admission measure and one program measure. The Assessment Technologies Institute achievement exit exam (an outcome measure) had no predictive value for student performance on the NCLEX-RN. The developed model accurately predicted 94% of students' successful performance on the NCLEX-RN.
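A two-predictor logistic model of this shape is simple to sketch in code. The snippet below is illustrative only: the standardized scores, the pass/fail labels, and the plain gradient-ascent fit are hypothetical stand-ins, not the study's data or analysis software.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.05, epochs=5000):
    """Fit a logistic model by gradient ascent on the Bernoulli log-likelihood."""
    w = [0.0] * (len(X[0]) + 1)  # intercept + one weight per predictor
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = yi - sigmoid(z)  # gradient term for this observation
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def predict(w, xi):
    """Predicted probability of passing for one (entrance test, course grade) pair."""
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))

# Hypothetical records: (standardized entrance-test score, standardized course grade)
X = [(-1.2, -0.8), (-0.5, -1.0), (0.1, 0.4), (0.8, 0.9), (1.3, 1.1), (-0.9, 0.2)]
y = [0, 0, 1, 1, 1, 0]  # 1 = passed the NCLEX-RN

w = fit_logistic(X, y)
print(predict(w, (1.0, 1.0)) > 0.5)   # strong scores -> predicted pass
print(predict(w, (-1.0, -1.0)) < 0.5) # weak scores -> predicted fail
```
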
Abstract:
Understanding the language of one’s cultural environment is important for effective communication and function. As such, students entering U.S. schools from foreign countries are given access to English to Speakers of Other Languages (ESOL) programs and are referred to as English Language Learner (ELL) students. This dissertation examined the correlation of the ELL ACCESS Composite Performance Level (CPL) score with the End of Course Tests (EOCTs) and the Georgia High School Graduation Tests (GHSGTs) in the four content areas (language arts, mathematics, science, and social studies). A premise of this study was that English language proficiency is critical to meeting or exceeding state and county assessment standards. A quantitative descriptive research design was used with cross-sectional archival data from a secondary source. There were 148 participants from school years 2011-2012 to 2013-2014 in Grades 9-12. A Pearson product-moment correlation was run to assess the relationship between the ACCESS CPL (independent variable) and the EOCT and GHSGT scores (dependent variables). The findings showed a positive correlation between ACCESS CPL scores and EOCT scores, where language arts showed a strong positive correlation and mathematics showed a weak positive correlation. There was also a positive correlation between ACCESS CPL scores and GHSGT scores, where language arts showed a weak positive correlation. The results of this study indicated that there is a relationship between the stated variables: ACCESS CPL, EOCT, and GHSGT. The results also showed positive correlations of varying degrees for each grade level. Although the relationships were slight, the null hypotheses for Research Questions 1 and 2 were rejected.
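A Pearson product-moment correlation of the kind run here is straightforward to compute directly. The sketch below uses made-up ACCESS CPL and language-arts EOCT score pairs purely for illustration; the variable names and values are assumptions, not the study's data.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical ACCESS CPL scores paired with language-arts EOCT scores
cpl = [2.1, 3.4, 4.0, 4.8, 5.5, 6.0]
eoct = [410, 430, 455, 462, 480, 497]
print(round(pearson_r(cpl, eoct), 3))  # close to +1: a strong positive correlation
```
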
Abstract:
Many classical as well as modern optimization techniques exist. One such modern method, belonging to the field of swarm intelligence, is ant colony optimization. This relatively new approach uses artificial ants and is inspired by the way real ants search for food. In this thesis, a novel ant colony optimization technique for continuous domains was developed. The goal was to provide improvements in computing time and robustness when compared to other optimization algorithms. Optimization function spaces can have extreme topologies and are therefore difficult to optimize. The proposed method effectively searched the domain and solved difficult single-objective optimization problems. The developed algorithm was run on numerous classic test cases for both single- and multi-objective problems. The results demonstrate that the method is robust and stable, and that the number of objective function evaluations is comparable to other optimization algorithms.
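The thesis's specific algorithm is not given in the abstract, but the general idea of continuous-domain ant colony optimization (in the spirit of published ACO_R variants: ants sampling Gaussian kernels centred on an archive of good solutions) can be sketched as follows. All parameter values and the sphere test function are illustrative assumptions, not the thesis's method.

```python
import random

def sphere(x):
    """Classic single-objective test function; global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def aco_continuous(f, dim=2, bounds=(-5.0, 5.0), archive_size=10,
                   ants=20, iterations=200, xi=0.85, seed=1):
    """Continuous ACO sketch: each ant samples a new point from a Gaussian
    centred on an archive solution, with spread set by the archive's dispersion."""
    rng = random.Random(seed)
    lo, hi = bounds
    archive = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(archive_size)]
    archive.sort(key=f)  # best solution first
    for _ in range(iterations):
        new_pts = []
        for _ in range(ants):
            guide = archive[rng.randrange(archive_size // 2)]  # bias toward better solutions
            pt = []
            for d in range(dim):
                # spread per dimension: mean absolute distance of the archive to the guide
                sigma = xi * sum(abs(s[d] - guide[d]) for s in archive) / (archive_size - 1)
                pt.append(min(hi, max(lo, rng.gauss(guide[d], sigma + 1e-12))))
            new_pts.append(pt)
        archive = sorted(archive + new_pts, key=f)[:archive_size]
    return archive[0]

best = aco_continuous(sphere)
print(sphere(best))  # objective value at the best solution found, near 0
```
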
Abstract:
Introduction and Research Objectives: Pediatric obesity has reached epidemic proportions in the United States. In the critical care setting, obesity has yet to be fully studied. We sought to evaluate the effects of obesity in children admitted to hospitals from trauma centers using the Kids' Inpatient Database (KID) for 2009. Methods: The study examined inpatient admissions of pediatric trauma patients in 2009 using the Kids' Inpatient Database (KID). Patients (n=27,599) were selected from the KID based on Age (AGE>1) and Admission Type (ATYPE=5) and assessed on Race, Sex, Length of Stay (LOS), Number of Diagnoses and Procedures, Severity of Illness (SOI), Risk of Mortality (ROM), Co-morbidities, and Intubation by comparing obese and non-obese cohorts. The chi-square test and Student's t-test were used to analyze the data. All variables were weighted to obtain national estimates. Results: The overall prevalence of obesity (those coded as having obesity as a co-morbidity) was 1.6%, with significantly higher prevalence among Blacks (1.8%), Hispanics (2.3%), and Native Americans (4.1%; p<0.001). Obesity was more prevalent among females (2.4% vs 1.2%; p<.001). Overall mortality in the cohort was 4.8%. Obesity prevalence was significantly lower among children who died during hospitalization (0.5% vs 1.6%; p<0.002). However, obese children had significantly longer LOS, a greater number of diagnoses, more procedures, and greater-than-expected loss of function due to SOI when compared with the non-obese cohort (p<.001). Deficiency anemia, diabetes, hypertension, liver disease, and fluid and electrolyte disorders were all strongly associated with the presence of obesity (p<.005). The rate of intubation was similar between the obese and non-obese cohorts. Conclusion: Our study using the KID national database found that obese children admitted from trauma centers have higher morbidity and LOS but lower mortality.
Racial and gender inequalities in obesity prevalence are consistent with previous reports.
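Prevalence comparisons such as the female-vs-male one above reduce to a chi-square test on a 2x2 table. A minimal sketch, with hypothetical counts chosen only to mirror the reported 2.4% vs 1.2% rates (not the actual weighted KID sample):

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for a 2x2 table [[a, b], [c, d]],
    without a continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: 240/10000 obese females (2.4%) vs 120/10000 obese males (1.2%)
stat = chi_square_2x2(240, 9760, 120, 9880)
print(round(stat, 1))  # well above 3.84, the df=1 critical value at alpha = .05
```
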
Abstract:
The transducer function mu for contrast perception describes the nonlinear mapping of stimulus contrast onto an internal response. Under a signal detection theory approach, the transducer model of contrast perception states that the internal response elicited by a stimulus of contrast c is a random variable with mean mu(c). Using this approach, we derive the formal relations between the transducer function, the threshold-versus-contrast (TvC) function, and the psychometric functions for contrast detection and discrimination in 2AFC tasks. We show that the mathematical form of the TvC function is determined only by mu, and that the psychometric functions for detection and discrimination have a common mathematical form with common parameters emanating from, and only from, the transducer function mu and the form of the distribution of the internal responses. We discuss the theoretical and practical implications of these relations, which bear on the tenability of certain mathematical forms for the psychometric function and on the suitability of empirical approaches to model validation. We also present the results of a comprehensive test of these relations using two alternative forms of the transducer model: a three-parameter version that renders logistic psychometric functions and a five-parameter version using Foley's variant of the Naka-Rushton equation as the transducer function. Our results support the validity of the formal relations implied by the general transducer model, and the two versions that were contrasted account for our data equally well.
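Under an equal-variance Gaussian version of this transducer model, the 2AFC psychometric functions follow directly from mu: the observer picks the interval with the larger internal response, so the proportion correct is Phi of the mean-response difference over sigma*sqrt(2). A minimal sketch, with illustrative (not fitted) parameter values for a Foley-style Naka-Rushton transducer:

```python
import math

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mu(c, a=30.0, p=2.4, q=2.0, z=0.1):
    """Transducer: a Foley-style variant of the Naka-Rushton equation.
    Parameter values here are illustrative assumptions, not fitted."""
    return a * c ** p / (c ** q + z ** q)

def p_correct_2afc(c_ped, dc, sigma=1.0):
    """2AFC proportion correct for discriminating contrast c_ped + dc from c_ped,
    assuming equal-variance Gaussian internal responses:
    Pc = Phi((mu(c+dc) - mu(c)) / (sigma * sqrt(2)))."""
    return Phi((mu(c_ped + dc) - mu(c_ped)) / (sigma * math.sqrt(2.0)))

# Detection is the special case of a zero-contrast pedestal
print(p_correct_2afc(0.0, 0.001))  # near chance (0.5) for a very faint target
print(p_correct_2afc(0.0, 0.5))    # near 1.0 for a high-contrast target
```
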
Abstract:
The objective of this study was to verify the association between selected mobility items of the International Classification of Functioning, Disability and Health (ICF) and the Gross Motor Function Measure (GMFM-88) and 1-minute walk test (1MWT), and to determine whether motor impairment influences quality of life in children with cerebral palsy (CP), using the Paediatric Quality of Life Inventory (PedsQL 4.0, child and parent versions). The study included 22 children with spastic cerebral palsy, classified in levels I, II, and III of the Gross Motor Function Classification System (GMFCS), with a mean age of 9.9 years. Of the participants, seven were level I, eight were level II, and seven were level III. All of the children and adolescents were rated using the ICF checklist (mobility items), the GMFM-88, the 1-minute walk test, and the PedsQL 4.0 questionnaires for children and parents. A strong correlation was observed between the GMFM-88 and the ICF checklist (mobility items), but the correlation between the GMFM-88 and the 1-minute walk test (1MWT) was moderate. The correlation between the walk test and the ICF checklist (mobility items) was also moderate. The correlation between the PedsQL 4.0 questionnaires for children and parents was weak, as was the correlation of both with the GMFM-88, the ICF (mobility items), and the walk test. The lack of interrelation between physical function tests and quality of life indicates that, regardless of the severity of the motor impairment and the difficulty with mobility, children and adolescents with spastic CP at GMFCS functional levels I, II, and III, and their parents, have varied opinions regarding the perception of well-being and life satisfaction.
Abstract:
Experiments at Jefferson Lab have been conducted to extract the nucleon spin-dependent structure functions over a wide kinematic range. Higher moments of these quantities provide tests of QCD sum rules and predictions of chiral perturbation theory ($\chi$PT). While precise measurements of $g_{1}^n$, $g_{2}^n$, and $g_1^p$ have been extensively performed, the data of $g_2^p$ remain scarce. Discrepancies were found between existing data related to $g_2$ and theoretical predictions. Results on the proton at large $Q^2$ show a significant deviation from the Burkhardt-Cottingham sum rule, while results for the neutron generally follow this sum rule. The next-to-leading order $\chi$PT calculations exhibit discrepancy with data on the longitudinal-transverse polarizability $\delta_{LT}^n$. Further measurements of the proton spin structure function $g_2^p$ are desired to understand these discrepancies.
Experiment E08-027 (g2p) was conducted at Jefferson Lab in experimental Hall A in 2012. Inclusive measurements were performed with a polarized electron beam and a polarized ammonia target to obtain the proton spin-dependent structure function $g_2^p$ in the low-$Q^2$ region ($0.02<Q^2<0.2$ GeV$^2$) for the first time. The results can be used to test the Burkhardt-Cottingham sum rule, and they also allow us to extract the longitudinal-transverse spin polarizability of the proton, which will provide a benchmark test of $\chi$PT calculations. This thesis presents and discusses the very preliminary results of the transverse asymmetry and the spin-dependent structure functions $g_1^p$ and $g_2^p$ from the data analysis of the g2p experiment.
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; this fact, coupled with its popularity, makes CT currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms, (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
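For reference, the simplest of these metrics, CNR, is just the lesion-to-background contrast divided by the background noise. A minimal sketch with made-up HU samples (not the study's images):

```python
import statistics

def cnr(lesion_pixels, background_pixels):
    """Contrast-to-noise ratio: |mean(lesion) - mean(background)| divided by
    the standard deviation of the background ROI."""
    contrast = abs(statistics.mean(lesion_pixels) - statistics.mean(background_pixels))
    return contrast / statistics.pstdev(background_pixels)

# Hypothetical HU samples from a lesion ROI and a nearby background ROI
lesion = [52, 48, 50, 51, 49, 53]
background = [40, 38, 42, 41, 39, 40]
print(round(cnr(lesion, background), 2))
```

Note that CNR, unlike the observer models, ignores the spatial frequency content of both the lesion and the noise, which is one reason it can fail to track human performance across reconstruction algorithms.
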
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
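The ensemble NPS estimate referred to above is, in essence, the average squared Fourier magnitude of mean-subtracted noise ROIs, scaled by pixel area over ROI area. A minimal sketch for square ROIs (the thesis's method additionally handles irregularly shaped ROIs, which this sketch does not), using synthetic white noise in place of scanner data:

```python
import cmath
import random

def nps_2d(rois, pixel=0.5):
    """Ensemble noise power spectrum of square, zero-mean noise ROIs:
    NPS(u, v) = (pixel area / ROI area) * <|DFT of detrended ROI|^2>."""
    n = len(rois[0])
    acc = [[0.0] * n for _ in range(n)]
    for roi in rois:
        mean = sum(map(sum, roi)) / (n * n)
        detrended = [[v - mean for v in row] for row in roi]  # removes the DC/background term
        for u in range(n):
            for v in range(n):
                s = sum(detrended[x][y] * cmath.exp(-2j * cmath.pi * (u * x + v * y) / n)
                        for x in range(n) for y in range(n))
                acc[u][v] += abs(s) ** 2
    scale = (pixel * pixel) / (n * n * len(rois))
    return [[a * scale for a in row] for row in acc]

# Hypothetical ensemble: 50 8x8 ROIs of white noise with standard deviation 10 HU
rng = random.Random(0)
rois = [[[rng.gauss(0, 10) for _ in range(8)] for _ in range(8)] for _ in range(50)]
nps = nps_2d(rois)
# Sanity check: integrating the NPS over frequency recovers the pixel variance,
# so sum(nps) / (n^2 * pixel^2) should be close to 100 HU^2 for this input
print(sum(sum(row) for row in nps) / (8 * 8 * 0.5 * 0.5))
```
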
To move beyond assessing noise properties in textured phantoms toward assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Also, lesion-less images were reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
The Galway Bay wave energy test site promises to be a vital resource for wave energy researchers and developers. As part of the development of this site, a floating power system is being developed to provide power and data acquisition capabilities, including its function as a local grid connection, allowing for the connection of up to three wave energy converter devices. This work shows results from scaled physical model testing and numerical modelling of the floating power system and an oscillating water column connected with an umbilical. Results from this study will be used to influence further scaled testing as well as the full scale design and build of the floating power system in Galway Bay.
Abstract:
A small portion of cellular glycogen is transported to and degraded in lysosomes by acid α-glucosidase (GAA) in mammals, but it is unclear why and how glycogen is transported to the lysosomes. Stbd1 has recently been proposed to participate in glycogen trafficking to lysosomes. However, our previous study demonstrated that knockdown of Stbd1 in GAA knock-out mice did not alter lysosomal glycogen storage in skeletal muscles. To further determine whether Stbd1 participates in glycogen transport to lysosomes, we generated GAA/Stbd1 double knock-out mice. In fasted double knock-out mice, glycogen accumulation in skeletal and cardiac muscles was not affected, but glycogen content in liver was reduced by nearly 73% at 3 months of age and by 60% at 13 months as compared with GAA knock-out mice, indicating that the transport of glycogen to lysosomes was suppressed in liver by the loss of Stbd1. Exogenous expression of human Stbd1 in double knock-out mice restored the liver lysosomal glycogen content to the level of GAA knock-out mice, as did a mutant lacking the Atg8 family interacting motif (AIM) and another mutant that contains only the N-terminal 24 hydrophobic segment and the C-terminal starch binding domain (CBM20) interlinked by an HA tag. Our results demonstrate that Stbd1 plays a dominant role in glycogen transport to lysosomes in liver and that the N-terminal transmembrane region and the C-terminal CBM20 domain are critical for this function.
Abstract:
Improving the representation of the hydrological cycle in Atmospheric General Circulation Models (AGCMs) is one of the main challenges in modeling the Earth's climate system. One way to evaluate model performance is to simulate the transport of water isotopes. Among those available, tritium (HTO) is an extremely valuable tracer, because its content in the different reservoirs involved in the water cycle (stratosphere, troposphere, ocean) varies by orders of magnitude. Previous work incorporated natural tritium into LMDZ-iso, a version of the LMDZ general circulation model enhanced by water isotope diagnostics. Here, for the first time, the anthropogenic tritium injected by each of the atmospheric nuclear-bomb tests between 1945 and 1980 has been estimated and implemented in the model; this creates an opportunity to evaluate certain aspects of LMDZ over several decades by following the bomb-tritium transient signal through the hydrological cycle. Simulations of tritium in water vapor and precipitation for the period 1950-2008, with both natural and anthropogenic components, are presented in this study. LMDZ-iso satisfactorily reproduces the general shape of the temporal evolution of tritium. However, LMDZ-iso simulates too high a bomb-tritium peak followed by too strong a decrease of tritium in precipitation. The overly diffusive vertical advection in AGCMs crucially affects the residence time of tritium in the stratosphere. This insight into model performance demonstrates that the implementation of tritium in an AGCM provides a new and valuable test of the modeled atmospheric transport, complementing water stable isotope modeling.
Abstract:
Background: As the global population is ageing, studying cognitive impairments, including dementia, one of the leading causes of disability in old age worldwide, is of fundamental importance to public health. As a major transition in older age, the complex impacts of the duration, timing, and voluntariness of retirement on health are an important focus for future policy changes. Longer retirement periods, as well as leaving the workforce early, have been associated with poorer health, including reduced cognitive functioning. These associations are hypothesized to differ based on gender, as well as on pre-retirement educational and occupational experiences, and on post-retirement social factors and health conditions. Methods: A cross-sectional study was conducted to determine the relationship between the duration and timing of retirement and cognitive function, using data from the five sites of the International Mobility in Aging Study (IMIAS). Cognitive function was assessed using Leganes Cognitive Test (LCT) scores in 2012. Data were analyzed using multiple linear regressions. Analyses were also done separately by site/region (Canada, Latin America, and Albania). Robustness checks included an analysis of cognitive change from 2012 to 2014 and of the effect of voluntariness of retirement on cognitive function. An instrumental variable (IV) approach was also applied to the cross-sectional and longitudinal analyses as a robustness check to address the potential endogeneity of the retirement variable. Results: Descriptive statistics highlight differences between men and women, as well as between sites. In the linear regression analysis, there was no relationship between timing or duration of retirement and cognitive function in 2012 when adjusting for site/region. There was no association between retirement characteristics and cognitive function in site/region-stratified analyses.
In the IV analysis, longer retirement and on-time or late retirement were associated with lower cognitive function among men. In the IV analysis, there is no relationship between retirement characteristics and cognitive function among women. Conclusions: While the results of the thesis suggest a negative effect of retirement on cognitive function, especially among men, the relationship remains uncertain. A lack of statistical power prevents drawing conclusions from the site/region-specific and site-adjusted analyses in both the linear and IV regressions.
Resumo:
Preeclampsia (PE) is a pregnancy complication characterized by new-onset hypertension and proteinuria after 20 weeks of gestation. However, subclinical renal dysfunction may be apparent earlier in gestation, prior to the clinical presentation of PE. Although the maternal syndrome of PE resolves early postpartum, women with a history of PE are at higher risk of renal dysfunction later in life. Mineral metabolism, such as phosphate balance, is heavily dependent on renal function, yet phosphate handling in women with a history of PE is largely unknown. To investigate whether women with a history of PE exhibit changes in phosphate metabolism compared to healthy parous women, a phosphate loading test was used. Women with or without a history of PE, who were 6 months to 5 years postpartum, were recruited for this study. Blood and urine samples were collected before and after oral dosing with a 500 mg phosphate solution. Biochemical markers of phosphate metabolism and renal function were evaluated. To assess differences in renal function alteration between first-trimester women who were or were not destined to develop PE, plasma cystatin C concentration was analysed. After phosphate loading, women with a history of PE had significantly elevated serum phosphate at both 1 and 2 hours, while controls had a higher urine phosphate:urine creatinine excretion ratio at 1 hour than women with a history of PE. Women with a history of PE had no changes in intact parathyroid hormone (iPTH) concentration throughout the study period, whereas controls had elevated iPTH at 1 hour from baseline. In terms of renal function in the first trimester, there was no difference in plasma cystatin C concentration between women who were or were not destined to develop PE. The elevation of serum phosphate in women with a history of PE could be due to a delay in phosphate excretion. Prolonged elevation of serum phosphate can have serious consequences later in life. 
Thus, an oral phosphate challenge may serve as a useful method for early screening of altered phosphate metabolism and renal function.