838 results for linear mixed-effects models
Abstract:
Background and Objectives: African American (AA) women are disproportionately affected by hypertension (HTN). The aim of this randomized controlled trial was to evaluate the effectiveness of a 6-week culturally tailored educational intervention for AA women with primary HTN living in rural Northeast Texas. Methods: Sixty AA women, aged 29 to 86 years (M = 57.98 ± 12.37), with primary HTN were recruited from four rural locations and randomized to an intervention group (n = 30) or a wait-list control group (n = 30) to determine the effectiveness of the intervention on knowledge, attitudes, beliefs, social support, adherence to a hypertension regimen, and blood pressure (BP) control. Survey and BP measurements were collected at baseline, 3 weeks, 6 weeks (post-intervention), and 6 months post-intervention. Culturally tailored educational classes were provided for 90 minutes once a week for 6 weeks in two local churches and a community center. The wait-list control group received usual care and was offered the education at the conclusion of data collection six months post-intervention. Linear mixed models were used to test for differences between the groups. Results: A significant overall main effect of time was found for systolic blood pressure, F(3, 174) = 11.104, p < .001, and diastolic blood pressure, F(3, 174) = 4.781, p = .003, in both groups. Age was a significant covariate for diastolic blood pressure, F(1, 56) = 6.798, p = .012: participants 57 years or older (n = 30) had lower diastolic BPs than participants younger than 57 (n = 30). No significant differences were found between groups on knowledge, adherence, or attitudes. Participants with lower incomes had significantly less knowledge about HBP prevention (r = .036, p = .006). Conclusion: AA women who participated in the 6-week intervention program demonstrated a significant decrease in BP over a 6-month period regardless of whether they were in the intervention or control group. These rural AA women had relatively good knowledge of HTN and reported an average level of compliance compared to other populations. Satisfaction with the program was high and there was no attrition, suggesting that AA women will participate in research studies that are culturally tailored to them, held in familiar community locations, and conducted by a trusted person with whom they can identify. Future studies using a different program with larger sample sizes are warranted to try to decrease the high level of HTN-related complications in AA women.
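A minimal sketch of the kind of repeated-measures linear mixed model described in this abstract, using Python's statsmodels; the input file and column names (sbp, group, time, age, subject_id) are hypothetical illustrations, not the study's actual data or code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per visit.
df = pd.read_csv("bp_long.csv")

# Random intercept per participant; fixed effects for group, time,
# their interaction, and age as a covariate (mirroring the abstract).
model = smf.mixedlm("sbp ~ C(group) * C(time) + age",
                    data=df, groups=df["subject_id"])
result = model.fit()
print(result.summary())
```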
Abstract:
Human papillomavirus (HPV) is a necessary cause of cervical cancer and is also strongly associated with anal cancer. While factors such as CD4+ cell count, HIV RNA viral load, smoking status, and cytological screening results have been identified as risk factors for infection with high-risk HPV types and the associated cancers, much less is known about the association between those risk factors and infection with low-risk HPV types and anogenital warts. In this dissertation, a public dataset (release P09) obtained from the Women's Interagency HIV Study (WIHS) was used to examine the effects of those risk factors on the size of the largest anal wart in HIV-infected women in the United States. Linear mixed modeling was used to address this research question. The prevalence of anal warts at baseline for WIHS participants was higher than in other populations. The incidence of anal warts in HIV-infected women was significantly higher than in HIV-uninfected women [4.15 cases per 100 person-years (95% CI: 3.83–4.77) vs. 1.30 cases per 100 person-years (95% CI: 1.00–1.58), respectively]. There appeared to be an inverse association between the size of the largest anal wart and CD4+ cell count at the baseline visit; however, it was not statistically significant. There was no association between the size of the largest anal wart and CD4+ cell count or HIV RNA viral load over time among HIV-infected women. There was also no association between the size of the largest anal wart and current smoking over time in HIV-infected women, even though smokers had larger warts at baseline than non-smokers. Finally, even though a woman with Pap smear results of ASCUS/LGSIL was found to have a larger anal wart than a woman with normal cervical Pap smear results, the relationship between the size of the largest anal wart and cervical Pap smear results over time remains unclear. Although the associations between these risk factors and the size of the largest anal wart over time in HIV-infected women could not be firmly established, this dissertation poses several questions concerning anal wart development for further exploration: (1) the role of immune function (i.e., CD4+ cell count); (2) the role of smoking status and its interaction with other risk factors (e.g., CD4+ cell count or HIV RNA viral load); (3) the molecular mechanism of smoking's effect on anal warts over time; (4) the potential for developing a screening program using anal Pap smears in HIV-infected women; and (5) the cost-effectiveness and efficacy of such a screening program in this high-risk population.
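For readers unfamiliar with the incidence figures quoted above, here is a small sketch of how a rate per 100 person-years and its 95% CI can be computed from event counts, using the standard log-normal approximation for a Poisson count; the counts below are invented for illustration, not WIHS data.

```python
import math

def incidence_rate_ci(events: int, person_years: float, z: float = 1.96):
    """Rate per 100 person-years with a 95% CI on the log scale."""
    rate = events / person_years * 100
    se_log = 1 / math.sqrt(events)      # SE of log(rate) for a Poisson count
    lo = rate * math.exp(-z * se_log)
    hi = rate * math.exp(z * se_log)
    return rate, lo, hi

print(incidence_rate_ci(events=250, person_years=6000))
# -> roughly (4.17, 3.68, 4.72) per 100 person-years
```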
Abstract:
Background. Research has shown that elevations of only 10 mmHg in diastolic blood pressure (BP) and 5 mmHg in systolic BP are associated with substantial (as large as 50%) increases in the risk of cardiovascular disease, a leading cause of death worldwide. Epidemiological studies have found that particulate matter (PM) increases blood pressure, and proposed biological mechanisms suggest that the organic fraction of PM may contribute to this increase. To understand which components of PM may contribute to increased BP, this study focuses on diesel particulate matter (DPM) and polycyclic aromatic hydrocarbons (PAHs). To our knowledge, there have been only four epidemiological studies on BP and DPM, and no epidemiological studies on BP and PAHs. Objective. Our objective was to evaluate the association between prevalent hypertension and two ambient exposures, DPM and PAHs, in the Mano a Mano cohort. Methods. The Mano a Mano cohort, established by the M.D. Anderson Cancer Center in 2001, comprises individuals of Mexican origin residing in Houston, TX. Using geographic information systems, we linked modeled annual estimates of PAHs and DPM at the census tract level from the U.S. Environmental Protection Agency's National-Scale Air Toxics Assessment to the residential addresses of cohort members. Mixed-effects logistic regression models were applied to determine associations between DPM and PAHs and hypertension while adjusting for confounders. Results. Ambient levels of DPM, categorized into quartiles, were not statistically associated with hypertension and did not indicate a dose-response relationship. Ambient levels of PAHs, categorized into quartiles, were not associated with hypertension, but did indicate a dose-response relationship in multiple models (for example, Q2: OR = 0.98; 95% CI, 0.73–1.31; Q3: OR = 1.08; 95% CI, 0.82–1.41; Q4: OR = 1.26; 95% CI, 0.94–1.70). Conclusion. This is the first assessment of the relationship between ambient levels of PAHs and hypertension, and it is among the few studies investigating the association between ambient levels of DPM and hypertension. Future analyses are warranted to explore the effects of DPM and PAHs using different exposure categorizations in order to clarify their relationships with hypertension.
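A hedged sketch of a mixed-effects logistic regression with a random intercept per census tract, in the spirit of the models described above, using statsmodels' Bayesian mixed GLM; the file and column names (htn, dpm_q, age, tract) are assumptions for illustration, not the study's variables.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("cohort.csv")  # hypothetical analysis file

model = BinomialBayesMixedGLM.from_formula(
    "htn ~ C(dpm_q) + age",      # exposure quartiles plus a confounder
    {"tract": "0 + C(tract)"},   # random intercept per census tract
    data=df,
)
result = model.fit_vb()          # variational Bayes fit
print(result.summary())
```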
New methods for quantification and analysis of quantitative real-time polymerase chain reaction data
Abstract:
Quantitative real-time polymerase chain reaction (qPCR) is a sensitive gene quantitation method that has been widely used in the biological and biomedical fields. The currently used methods for qPCR data analysis, including the threshold cycle (CT) method and linear and non-linear model-fitting methods, all require subtracting background fluorescence. However, the removal of background fluorescence is usually inaccurate and can therefore distort results. Here, we propose a new method, the taking-difference linear regression method, to overcome this limitation. Briefly, for each pair of consecutive PCR cycles, we subtracted the fluorescence of the earlier cycle from that of the later cycle, transforming n cycles of raw data into n-1 cycles of difference data. Linear regression was then applied to the natural logarithm of the transformed data, and amplification efficiencies and initial DNA molecule numbers were calculated for each PCR run. To evaluate the new method, we compared it, in terms of accuracy and precision, with the original linear regression method under three background corrections: the mean of cycles 1-3, the mean of cycles 3-7, and the minimum fluorescence. Three criteria (threshold identification, max R2, and max slope) were employed to select the target data points. Considering that PCR data are time series, we also applied linear mixed models. Collectively, when the threshold identification criterion was applied and the linear mixed model was adopted, the taking-difference linear regression method was superior, giving accurate estimates of initial DNA amounts and reasonable estimates of PCR amplification efficiencies. When the max R2 and max slope criteria were used, the original linear regression method gave accurate estimates of initial DNA amounts. Overall, the taking-difference linear regression method avoids the error introduced by subtracting an unknown background and is thus theoretically more accurate and reliable. The method is easy to perform, and the taking-difference strategy can be extended to all current methods for qPCR data analysis.
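A minimal sketch of the taking-difference idea described above: differencing consecutive cycles cancels a constant background, so a linear regression on the log-differences recovers the amplification efficiency and initial signal. This is a simplified illustration under an idealized exponential-phase model; the dissertation's full procedure (threshold identification, mixed models) is not reproduced here.

```python
import numpy as np

def taking_difference_fit(fluor):
    """fluor: raw fluorescence per cycle (exponential phase).
    Returns (efficiency factor E, estimated initial signal F0)."""
    fluor = np.asarray(fluor, dtype=float)
    diffs = np.diff(fluor)            # removes any constant background
    k = np.arange(len(diffs))         # cycle index of each difference
    slope, intercept = np.polyfit(k, np.log(diffs), 1)
    e = np.exp(slope)                 # per-cycle amplification factor
    f0 = np.exp(intercept) / (e - 1)  # since diff_k = F0 * (E - 1) * E**k
    return e, f0

# Toy data: F0 = 2.0, E = 1.9, constant background 5.0
cycles = np.arange(12)
fluor = 2.0 * 1.9 ** cycles + 5.0
print(taking_difference_fit(fluor))   # ~(1.9, 2.0)
```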
Abstract:
Maximizing data quality may be especially difficult in trauma-related clinical research. Strategies are needed to improve data quality and to assess the impact of data quality on clinical predictive models. This study had two objectives. The first was to compare missing data between two multi-center trauma transfusion studies: a retrospective study (RS) using medical chart data with minimal data quality review, and the PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study with standardized quality assurance. The second was to assess the impact of missing data on clinical prediction algorithms by evaluating blood transfusion prediction models using PROMMTT data. RS (2005-06) and PROMMTT (2009-10) investigated trauma patients receiving ≥ 1 unit of red blood cells (RBC) at ten Level I trauma centers. Missing data were compared for 33 variables collected in both studies using mixed-effects logistic regression (including random intercepts for study site). Massive transfusion (MT) patients received ≥ 10 RBC units within 24 h of admission. Correct classification percentages for three MT prediction models were evaluated using complete-case analysis and multiple imputation based on the multivariate normal distribution. A sensitivity analysis for missing data estimated the upper and lower bounds of correct classification under best- and worst-case assumptions about the missing values. Most variables (17/33 = 52%) had <1% missing data in both RS and PROMMTT. Of the remaining variables, 50% demonstrated less missingness in PROMMTT, 25% had less missingness in RS, and 25% were similar between studies. Missing percentages for MT prediction variables in PROMMTT ranged from 2.2% (heart rate) to 45% (respiratory rate). For variables missing >1%, study site was associated with missingness (all p ≤ 0.021). Survival time predicted missingness for 50% of RS and 60% of PROMMTT variables. Complete-case proportions for the MT models ranged from 41% to 88%. Complete-case analysis and multiple imputation produced similar correct-classification results. Sensitivity-analysis upper-lower bound ranges for the three MT models were 59-63%, 36-46%, and 46-58%. Prospective collection of ten-fold more variables with data quality assurance reduced overall missing data. Study site and patient survival were associated with missingness, suggesting that data were not missing completely at random and that complete-case analysis may lead to biased results. Evaluating clinical prediction model accuracy may be misleading in the presence of missing data, especially with many predictor variables. The proposed sensitivity analysis, estimating correct classification under upper (best-case) and lower (worst-case) bounds, may be more informative than multiple imputation, which here provided results similar to complete-case analysis.
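A simple sketch of the upper/lower-bound idea described above: complete cases contribute their observed accuracy, while cases with missing predictors are assumed all correctly classified (best case) or all misclassified (worst case). The counts below are invented for illustration, not study data, and this is a simplified reading of the abstract's sensitivity analysis.

```python
def classification_bounds(correct_complete: int, n_complete: int, n_missing: int):
    """Best/worst-case correct-classification proportions."""
    n_total = n_complete + n_missing
    lower = correct_complete / n_total                # worst case: all missing wrong
    upper = (correct_complete + n_missing) / n_total  # best case: all missing right
    return lower, upper

print(classification_bounds(correct_complete=350, n_complete=400, n_missing=200))
# -> (0.583..., 0.916...)
```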
Abstract:
One of the main drawbacks of wind energy is its intermittent generation, which depends greatly on environmental conditions. Wind power forecasting has proven to be an effective tool for facilitating wind power integration from both the technical and the economic perspective. Indeed, system operators and energy traders benefit from forecasting techniques, because reducing the inherent uncertainty of wind power allows them to adopt optimal decisions. Wind power integration imposes new challenges as higher penetration levels are attained. Wind power ramp forecasting is an example of such a recent topic of interest. The term ramp refers to a large and rapid variation (1-4 hours) observed in the wind power output of a wind farm or portfolio. Ramp events can be driven by a broad range of meteorological processes occurring at different temporal and spatial scales, from the passage of large-scale frontal systems to local processes such as thunderstorms and thermally driven flows. Ramp events may also be conditioned by features of the wind-to-power conversion process, such as the non-linear turbine power curve, yaw misalignment, wind turbine shut-down, and the aerodynamic interaction between the wind turbines of a wind farm (wake effect). This work is devoted to wind power ramp forecasting, with special focus on the connection between the global scale and ramp events observed at the wind farm level, within a point-forecasting framework. Time-series-based models were implemented for very short-term prediction, characterized by prediction horizons of up to six hours ahead. As a first step, a methodology to characterize ramps within a wind power time series was proposed. The so-called ramp function is based on the wavelet transform and provides a continuous index of ramp intensity at each time step; the underlying idea is that ramps are characterized by high power-output gradients evaluated at different time scales. A number of state-of-the-art time-series models were considered, namely linear autoregressive (AR) models, varying-coefficient models (VCMs), and artificial neural networks (ANNs), allowing us to gain insight into how model complexity contributes to the accuracy of wind power time-series modelling. The models were trained by minimizing the mean squared error, and the final set-up of each model was determined through cross-validation. To investigate the contribution of the global scale to wind power ramp forecasting, a methodology was proposed to identify features in raw atmospheric data that are relevant for explaining wind power ramp events. It is based on two techniques: principal component analysis (PCA) for atmospheric data compression and mutual information (MI) for assessing non-linear dependence between variables. The methodology was applied to reanalysis data generated with a general circulation model (GCM), yielding explanatory variables meaningful for ramp forecasting that were used as exogenous inputs to the forecasting models. The study covered two wind farms located in Spain. All the models outperformed the reference model (persistence) during both ramp and non-ramp situations. Adding atmospheric information yielded further improvements of a similar order, especially during ramp-down events. Results also suggested different levels of connection between ramp occurrence at the wind farm level and the global scale.
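A simplified stand-in for the wavelet-based ramp function described above: for each time step, take the largest absolute power gradient across a set of time scales. This mimics the multi-scale idea under stated assumptions but does not reproduce the authors' exact wavelet formulation.

```python
import numpy as np

def ramp_index(power, scales=(1, 2, 4, 6)):
    """power: wind power series (e.g., hourly). Returns one ramp index per step."""
    power = np.asarray(power, dtype=float)
    index = np.zeros(len(power))
    for s in scales:
        grad = np.zeros(len(power))
        grad[s:] = np.abs(power[s:] - power[:-s]) / s  # gradient at scale s
        index = np.maximum(index, grad)                # strongest scale wins
    return index
```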
Abstract:
Mediterranean dehesas are one of the European natural habitat types of Community interest (Directive 92/43/EEC), associated with high diversity levels and providers of important goods and services. In this work, the contribution of trees and the influence of grazing on pasture alpha diversity were studied in a dehesa in Central Spain. We analyzed richness and Shannon-Wiener (SW) indexes of the herbaceous layer under 16 holm oak trees (64 sampling units distributed across two orientations and two distances to the trunk) in four different grazing management zones (differing in grazing species and stocking rate). Floristic composition by species or morphospecies and species abundance were recorded for each sampling unit. Linear mixed models (LMMs) and generalized linear mixed models (GLMMs) were used to study the relationships between alpha diversity measures and the independent factors. The crown-edge zone showed the highest values of richness and the SW index. No significant differences were found between orientations under crown influence. Grazing management had a significant effect on richness and SW measures, especially the grazing species (cattle or sheep). This is a preliminary quantification and analysis of the interaction between the tree stratum and grazing management on herbaceous diversity in a year of extreme climatic conditions.
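For reference, the Shannon-Wiener index used above is computed from the relative abundances within a sampling unit; a small sketch with made-up counts:

```python
import numpy as np

def shannon_wiener(abundances) -> float:
    """H' = -sum(p_i * ln p_i) over species relative abundances."""
    p = np.asarray(abundances, dtype=float)
    p = p[p > 0] / p.sum()        # relative abundance of each species
    return float(-(p * np.log(p)).sum())

print(shannon_wiener([12, 7, 3, 1]))   # richness = 4, H' ~ 1.10
```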
Abstract:
This PhD dissertation is framed within the emergent fields of Reverse Logistics and Closed-Loop Supply Chain (CLSC) management. This subarea of supply chain management has gained researchers' and practitioners' attention over the last 15 years, becoming a fully recognized subdiscipline of Operations Management. More specifically, among all the activities included within the CLSC area, this dissertation focuses on direct reuse. Its main contribution to current knowledge is twofold. First, a framework for the so-called reuse CLSC is developed. This conceptual model is grounded in a set of six case studies conducted by the author in real industrial settings, and it has been contrasted with the existing literature as well as with academic and professional experts on the topic. The framework encompasses four building blocks. In the first block, a typology for reusable articles is put forward, distinguishing between Returnable Transport Items (RTI), Reusable Packaging Materials (RPM), and Reusable Products (RP). In the second block, the common characteristics that make reuse CLSCs difficult to manage from a logistical standpoint are identified, namely fleet shrinkage, significant investment, and limited visibility. In the third block, the main problems arising in the management of reuse CLSCs are analyzed: (1) defining fleet size, (2) controlling cycle time and promoting article rotation, (3) controlling the return rate and preventing shrinkage, (4) defining purchase policies for new articles, (5) planning and controlling reconditioning activities, and (6) balancing inventory between depots. Finally, in the fourth block, solutions to these issues are developed. First, problems (2) and (3) are addressed through a comparative analysis of alternative strategies for controlling cycle time and return rate. Second, a methodology for calculating the required fleet size is elaborated (problem (1)); this methodology is valid for different configurations of the physical flows in the reuse CLSC. Likewise, directions are indicated for developing a similar method for defining purchase policies for new articles (problem (4)). The second main contribution of this dissertation is embedded in the solutions part (block 4) of the conceptual framework and comprises a two-level decision problem integrating two mixed-integer linear programming (MILP) models, formulated and solved to optimality using AIMMS as the modeling language, CPLEX as the solver, and an Excel spreadsheet for data input and output presentation. The results obtained are analyzed in order to measure, in a client-supplier system, the economic impact of two alternative control strategies (recovery policies) in the context of reuse. In addition, the models support decision-making regarding the selection of the appropriate recovery policy given the characteristics of the demand pattern and the structure of the relevant costs in the system. The triangulation of methods used in this thesis made it possible to address the same research topic with different approaches, thereby strengthening the robustness of the results obtained.
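An illustrative toy MILP in the spirit of the fleet-sizing problem described above, written with Python's PuLP rather than the AIMMS/CPLEX stack used in the thesis; the parameters (demand, cycle time, return rate) and the single constraint are invented for illustration and do not reproduce the dissertation's models.

```python
import pulp

demand_per_day = 120   # articles issued per day (assumed)
cycle_days = 5         # days an article spends in one loop (assumed)
return_rate = 0.9      # fraction of articles returned per cycle (assumed)

prob = pulp.LpProblem("fleet_sizing", pulp.LpMinimize)
fleet = pulp.LpVariable("fleet", lowBound=0, cat="Integer")

prob += fleet                                        # objective: smallest fleet
# Returned articles must cover the demand issued over one cycle;
# the return rate inflates the fleet to absorb shrinkage.
prob += fleet * return_rate >= demand_per_day * cycle_days

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(fleet))   # -> 667.0
```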
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Patient outcomes in transplantation would improve if dosing of immunosuppressive agents were individualized. The aim of this study was to develop a population pharmacokinetic model of tacrolimus in adult liver transplant recipients and to test this model in individualizing therapy. Population analysis was performed on data from 68 patients. Estimates were sought for apparent clearance (CL/F) and apparent volume of distribution (V/F) using the nonlinear mixed-effects modeling program NONMEM. Factors screened for influence on these parameters were weight, age, sex, transplant type, biliary reconstructive procedure, postoperative day, days of therapy, liver function test results, creatinine clearance, hematocrit, corticosteroid dose, and interacting drugs. The predictive performance of the developed model was evaluated through Bayesian forecasting in an independent cohort of 36 patients. No linear correlation existed between tacrolimus dosage and trough concentration (r(2) = 0.005). Mean individual Bayesian estimates for CL/F and V/F were 26.5 +/- 8.2 (SD) L/hr and 399 +/- 185 L, respectively. CL/F was greater in patients with normal liver function, V/F increased with patient weight, and CL/F decreased with increasing hematocrit. Based on the derived model, a 70-kg patient with an aspartate aminotransferase (AST) level less than 70 U/L would require a tacrolimus dose of 4.7 mg twice daily to achieve a steady-state trough concentration of 10 ng/mL, whereas a 50-kg patient with an AST level greater than 70 U/L would require a dose of 2.6 mg. Marked interindividual variability (43% to 93%) and residual random error (3.3 ng/mL) were observed. Predictions made using the final model were reasonably unbiased (0.56 ng/mL) but imprecise (4.8 ng/mL). The pharmacokinetic information obtained will assist in tacrolimus dosing; however, further investigation into the reasons for the pharmacokinetic variability of tacrolimus is required.
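A hedged back-of-envelope check of the dosing example above, using a standard one-compartment oral model at steady state. CL/F and V/F are the typical values reported in the abstract; the absorption rate constant ka is an assumption, since it is not reported here, and the abstract's actual covariate model is not reproduced.

```python
import math

def ss_trough_ng_ml(dose_mg, cl=26.5, v=399.0, ka=1.5, tau=12.0):
    """Steady-state trough for twice-daily oral dosing (apparent CL/F, V/F)."""
    ke = cl / v                                        # elimination rate, 1/h
    acc_ke = math.exp(-ke * tau) / (1 - math.exp(-ke * tau))
    acc_ka = math.exp(-ka * tau) / (1 - math.exp(-ka * tau))
    trough_mg_l = dose_mg * ka / (v * (ka - ke)) * (acc_ke - acc_ka)
    return trough_mg_l * 1000                          # mg/L -> ng/mL

print(ss_trough_ng_ml(4.7))   # ~10 ng/mL, consistent with the example above
```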
Abstract:
Fundamental principles of precaution are legal maxims that call for preventive actions, perhaps as contingent interim measures while relevant information about causality and harm remains unavailable, to minimize the societal impact of potentially severe or irreversible outcomes. Such principles do not explain how to make choices or how to identify what is protective when incomplete and inconsistent scientific evidence of causation characterizes the potential hazards. Rather, they entrust lower jurisdictions, such as agencies or authorities, to make current decisions while recognizing that future information may contradict the scientific basis that supported the initial decision. After reviewing and synthesizing national and international legal aspects of precautionary principles, this paper addresses the key question: how can society manage potentially severe, irreversible, or serious environmental outcomes when variability, uncertainty, and limited causal knowledge characterize its decision-making? A decision-analytic solution is outlined that focuses on risky decisions and accounts for prior states of information and scientific beliefs that can be updated as subsequent information becomes available. As a practical and established approach to causal reasoning and decision-making under risk, inherent to precautionary decision-making, these (Bayesian) methods help decision-makers and stakeholders because they formally account for probabilistic outcomes and new information, and are consistent and replicable. Rational choice of an action from among various alternatives, defined as a choice that makes preferred consequences more likely, requires accounting for costs, benefits, and the change in risks associated with each candidate action. Decisions under any form of the precautionary principle reviewed must account for the contingent nature of scientific information, creating a link to the decision-analytic principle of the expected value of information (VOI), which shows the relevance of new information relative to the initial (and smaller) set of data on which the decision was based. We exemplify this seemingly simple situation using risk management of BSE. As an integral aspect of causal analysis under risk, the methods developed in this paper permit the addition of non-linear, hormetic dose-response models to the current set of regulatory defaults, such as the linear, non-threshold models. This increase in the number of defaults is an important improvement, because most variants of the precautionary principle require cost-benefit balancing; specifically, enlarging the set of default causal models accounts for beneficial effects at very low doses. We also show, and conclude, that quantitative risk assessment dominates qualitative risk assessment, supporting the extension of the set of default causal models.
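A toy sketch of the value-of-information idea referenced above, computing the expected value of perfect information (EVPI) for a two-action, two-state decision; the states, prior, and payoffs are invented for illustration (e.g., mitigate now versus wait on an uncertain hazard), not taken from the paper's BSE example.

```python
import numpy as np

payoff = np.array([        # rows: actions (act now, wait); cols: states
    [-10.0, -10.0],        # act now: fixed mitigation cost either way
    [  0.0, -50.0],        # wait: free if harmless, costly if harmful
])
p = np.array([0.85, 0.15]) # prior: P(harmless), P(harmful)

expected = payoff @ p                            # expected payoff per action
evpi = payoff.max(axis=0) @ p - expected.max()   # value of knowing the state
print(expected, evpi)      # waiting is optimal a priori; EVPI = 6.0
```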
Abstract:
The pharmacokinetic disposition of metformin in late pregnancy was studied, together with the level of fetal exposure at birth. Blood samples were obtained in the third trimester of pregnancy from women with gestational diabetes or type 2 diabetes, five of whom had a previous diagnosis of polycystic ovary syndrome. A cord blood sample was also obtained at the delivery of some of these women, and at the delivery of others who had been taking metformin during pregnancy but from whom no blood had been taken. Plasma metformin concentrations were assayed by a new, validated, reverse-phase HPLC method. A 2-compartment, extravascular maternal model with transplacental partitioning of drug to a fetal compartment was fitted to the data. Nonlinear mixed-effects modeling was performed in NONMEM using FOCE with INTERACTION. Variability was estimated using logarithmic interindividual and additive residual variance models; the covariance between clearance and volume was modeled simultaneously. Mean metformin concentrations in cord plasma and in maternal plasma were 0.81 (range, 0.1-2.6) mg/L and 1.2 (range, 0.1-2.9) mg/L, respectively. Typical population values (interindividual variability, CV%) for allometrically scaled maternal clearance and volume of distribution were 28 L/h/70 kg (17.1%) and 190 L/70 kg (46.3%), giving a derived population-wide half-life of 5.1 hours. The placental partition coefficient for metformin was 1.07 (36.3%). Neither maternal age nor weight significantly influenced the pharmacokinetics. The variability (SD) of observed concentrations about model-predicted concentrations was 0.32 mg/L. The pharmacokinetics were similar to those in nonpregnant patients and, therefore, no dosage adjustment is warranted. Metformin readily crosses the placenta, exposing the fetus to concentrations approaching those in the maternal circulation. The sequelae of such exposure, e.g., effects on neonatal obesity and insulin resistance, remain unknown.
Abstract:
Background: Written material is often inaccessible for people with aphasia. The format of written material needs to be adapted to enable people with aphasia to read with understanding. Aims: This study aimed to further explore issues raised in Rose, Worrall, and MacKenna (2003) concerning the effects of aphasia-friendly formats on the reading comprehension of people with aphasia. It was hypothesised that people with aphasia would comprehend significantly more paragraphs formatted in an aphasia-friendly manner than control paragraphs. The study also aimed to investigate whether each single aspect of aphasia-friendly formatting (i.e., simplified vocabulary and syntax, large print, increased white space, and pictures) used in isolation would result in increased comprehension compared to control paragraphs. Further aims were to compare the effect of aphasia-friendly formatting with the effects of each single adaptation, and to investigate whether the effects of aphasia-friendly formats were related to aphasia severity. Methods & Procedures: Participants with mild to moderately severe aphasia (N = 9) read a battery of 90 paragraphs and selected the best word or phrase from a choice of four to complete each paragraph. A linear mixed model (p < .05) was used to analyse the differences in reading comprehension with each paragraph format across three reading grade levels. Outcomes & Results: People with aphasia comprehended significantly more aphasia-friendly paragraphs than control paragraphs. They also comprehended significantly more paragraphs with each of the following single adaptations: simplified vocabulary and syntax, large print, and increased white space. Although people with aphasia tended to comprehend more paragraphs with pictures added than control paragraphs, this difference was not significant. No significant correlation between aphasia severity and the effect of aphasia-friendly formatting was found. Conclusion: This study supports the idea that aphasia-friendly formats increase the reading comprehension of people with aphasia. It suggests that adding pictures, particularly Clip Art pictures, may not significantly improve reading comprehension. These findings have implications for all written communication with people with aphasia, both in the clinical setting and in the wider community. Applying these findings may enable people with aphasia to have equal access to written information and to participate in society.
Abstract:
It has long been recognized that demographic structure within a population can significantly affect the likely outcomes of harvest. Many studies have focused on equilibrium dynamics and maximization of the value of the harvest taken. However, in some cases the management objective is to maintain the population at an abundance significantly below the carrying capacity. Achieving such an objective by harvest can be complicated by the presence of significant structure (age or stage) in the target population. In such cases, optimal harvest strategies must account for differences among age- or stage-classes of individuals in their relative contribution to the demography of the population. In addition, structured populations are characterized by transient non-linear dynamics following perturbation, such that even under an equilibrium harvest the population may exhibit significant momentum, increasing or decreasing before cessation of growth. Using simple linear time-invariant models, we show that if harvest levels are set dynamically (e.g., annually), then transient effects can be as important as, or more important than, equilibrium outcomes. We show that setting appropriate harvest rates can be complicated by uncertainty about the demographic structure of the population, or by limited control over the structure of the harvest taken.
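A minimal sketch of the transient behaviour described above: projecting a toy stage-structured population under a constant proportional harvest. The vital rates and the juvenile-heavy initial vector are invented, and this simple linear time-invariant projection stands in for the paper's models.

```python
import numpy as np

A = np.array([[0.0, 1.2, 1.5],   # fecundity of stages 2 and 3
              [0.6, 0.0, 0.0],   # survival, stage 1 -> 2
              [0.0, 0.8, 0.7]])  # survival, 2 -> 3 and 3 -> 3

h = 0.1                          # constant proportional harvest rate
n = np.array([100.0, 10.0, 5.0]) # juvenile-heavy initial structure

for t in range(8):
    n = (1 - h) * (A @ n)        # growth, then harvest
    print(t + 1, round(n.sum(), 1))
# Total abundance first dips below its starting value, then climbs back:
# the transient population momentum discussed above.
```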