76 results for gls-regressio
Abstract:
Pós-graduação em Fisiopatologia em Clínica Médica - FMB
Abstract:
Glutamine is an essential nutrient for cancer cell proliferation, especially in the context of citric acid cycle anaplerosis. In this manuscript we present results that collectively demonstrate that, of the three major mammalian glutaminases identified to date, the lesser-studied splice variant of the gene gls, known as Glutaminase C (GAC), is important for tumor metabolism. We show that, although levels of both kidney-type isoforms are elevated in tumor vs. normal tissues, GAC is distinctly mitochondrial. GAC is also the most responsive to the activator inorganic phosphate, whose concentration is thought to be higher in mitochondria subject to hypoxia. Analysis of X-ray crystal structures of GAC in different bound states suggests a mechanism in which tetramerization-induced lifting of a "gating loop" is essential for the phosphate-dependent activation process. Surprisingly, phosphate binds inside the catalytic pocket rather than at the oligomerization interface. Phosphate also mediates substrate entry by competing with glutamate. A greater tendency to oligomerize differentiates GAC from its alternatively spliced isoform, and the cycling of phosphate into and out of the active site distinguishes it from the liver-type isozyme, which is known to be less dependent on this ion.
Abstract:
The objective of this study is to understand the dynamics of the Brazilian corn market by investigating the factors that affect quantities and prices in this market. Unit root tests were carried out using the DF-GLS (Dickey-Fuller Generalized Least Squares) methodology, and cointegration tests followed Johansen (1988). The estimated model, with adjustment through prices, was a Vector Error Correction (VEC) model, identified by the Sims-Bernanke procedure. The study shows that there is strong interaction between the corn and soybean markets, with a relationship of complementarity in supply and substitutability in demand, and that macroeconomic factors such as income and interest rates are important in determining corn prices at the producer and wholesale levels. It is worth noting that external corn prices showed relative importance in the formation of the domestic price of the grain.
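The pipeline this abstract describes (DF-GLS unit-root tests, Johansen cointegration, VEC estimation) can be sketched as follows. This is a hedged illustration with synthetic series and hypothetical column names (corn_price, soy_price), not the study's data, using the arch and statsmodels packages:

```python
# Hedged sketch: DF-GLS unit-root tests plus Johansen cointegration and a
# VEC model, mirroring the abstract's pipeline. All data are synthetic.
import numpy as np
import pandas as pd
from arch.unitroot import DFGLS  # Elliott-Rothenberg-Stock DF-GLS test
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(0)
n = 240  # e.g. 20 years of monthly observations (assumption)
common = np.cumsum(rng.normal(size=n))  # shared stochastic trend
data = pd.DataFrame({
    "corn_price": common + rng.normal(scale=0.5, size=n),
    "soy_price": 0.8 * common + rng.normal(scale=0.5, size=n),
})

# 1) DF-GLS unit-root test on each series (null hypothesis: unit root)
for col in data:
    test = DFGLS(data[col], trend="c")
    print(f"{col}: DF-GLS stat = {test.stat:.2f}, p-value = {test.pvalue:.3f}")

# 2) Johansen cointegration test (trace statistics vs. critical values)
joh = coint_johansen(data, det_order=0, k_ar_diff=2)
print("trace stats:", joh.lr1, "\n95% critical values:", joh.cvt[:, 1])

# 3) VEC model with one cointegrating relation
vecm = VECM(data, k_ar_diff=2, coint_rank=1, deterministic="ci").fit()
print(vecm.summary())
```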
Abstract:
Early Diagnosis of Myocardial Dysfunction in Patients with Hematological Malignancies Submitted to Chemotherapy: Preliminary Results. Background: Given current diagnostic improvements and therapeutic approaches, patients with cancer can now be cured or keep the disease under control; still, chemotherapy may cause heart damage, evolving to congestive heart failure. Recognition of those changes increases the chances of controlling the endpoints; hence, new parameters of cardiac and fluid mechanics analysis have been used to assess myocardial function, pursuing an earlier diagnosis of cardiac alterations. This study aimed to detect early cardiac dysfunction consequent to chemotherapy in patients with hematological malignancies (HM). Methods: Patients with leukemia and lymphoma, submitted to chemotherapy, without known heart disease were studied. Healthy volunteers served as the control group. Conventional 2DE parameters of myocardial function were analyzed. Peak global longitudinal, circumferential, and radial left ventricular (LV) strain were determined by 2D and 3D speckle tracking (STE); peak area strain was measured by 3D STE, and LV torsion, twisting rate, and recoil/recoil rate were assessed by 2D STE. The LV vortex formation time (VFT) during rapid diastolic filling was estimated from 2D mitral valve (MV) planimetry and pulsed Doppler LV inflow as VFT = [4(1 − β)/π] × α³ × LVEF, where 1 − β is the E-wave contribution to the LV stroke volume and α³ is a volumetric variable related to the MV area. The statistical significance level was set at 5%. Results: See Table. Conclusion: Despite the differences between the two groups in LVESV, LVEF, and E′, those parameters are still within the normal range in the patients submitted to chemotherapy; thus, in the clinical setting, they are not so noticeable. The 3D GLS was smaller among the patients, in contrast to the 2D GLS, suggesting that the former variable is more accurate for assessing LV systolic function. The VFT is a dimensionless measure of optimal vortex development inside the LV chamber, reflecting the efficiency of diastolic filling and, consequently, blood ejection. This index was diminished in patients with HM submitted to chemotherapy, indicating an impairment of impulse and thrust, hence appearing to be a very early marker of diastolic and systolic dysfunction in this group.
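For clarity, the VFT formula quoted in the abstract can be written as a small function; the variable names and sample values below are illustrative assumptions, not patient data:

```python
# Hedged sketch of the VFT formula quoted in the abstract:
# VFT = [4(1 - beta)/pi] * alpha**3 * LVEF.
import math

def vortex_formation_time(beta: float, alpha: float, lvef: float) -> float:
    """Dimensionless LV vortex formation time.

    beta  : fraction of LV stroke volume NOT carried by the E wave
            (so 1 - beta is the E-wave contribution)
    alpha : volumetric parameter related to the mitral valve area
    lvef  : left ventricular ejection fraction (0-1)
    """
    return 4.0 * (1.0 - beta) / math.pi * alpha**3 * lvef

# Illustrative values only, chosen for demonstration.
print(vortex_formation_time(beta=0.3, alpha=1.8, lvef=0.60))
```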
Abstract:
Estimating streamflow indices in ungauged basins is a problem that international research has addressed through the so-called PUB initiative (Predictions in Ungauged Basins - IAHS, 2002-2013). Using a study area comprising 61 basins of the southeastern United States, two very different estimation techniques are described and applied: Generalized Least Squares (GLS) regression and Topological kriging (TK). The first considers a set of geomorphoclimatic descriptors of the basins under study and extracts their weights for a linear regression model of the quantiles; the second is a geostatistical method that treats the quantile as a regionalized variable on an areal support (the basin area), accounting for the geographical location and the possibly nested structure of the basins of interest. The two methods were applied to a set of empirical quantiles associated with return periods of 10, 50, 100, and 500 years, with the aim of evaluating the performance of a possible coupling of the two: interpolating the GLS residuals via TK, in jack-knife cross-validation and with different neighborhoods. The procedure performs well, with a Nash-Sutcliffe efficiency of 0.9 for low return periods and stable around 0.8 for the other values, with a worsening trend as the return period increases and essentially unchanged performance as the neighborhood varies. The application showed that the results can improve the performance of the GLS method and are comparable to those of pure TK, confirming the reliability of the geostatistical method for hydrological applications.
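A minimal sketch of the GLS-plus-kriging coupling described above, assuming synthetic basin descriptors and the statsmodels and pykrige packages. True Topological kriging works on areal (basin) supports, so ordinary kriging of the GLS residuals at basin centroids is a deliberate simplification:

```python
# Hedged sketch: fit a GLS regression of a flood quantile on basin
# descriptors, then krige the residuals in leave-one-out (jack-knife) mode.
import numpy as np
import statsmodels.api as sm
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(1)
n = 61                                                   # basins, as in the study
area = rng.uniform(50, 5000, n)                          # hypothetical descriptors
rain = rng.uniform(800, 2000, n)
x, y = rng.uniform(0, 100, n), rng.uniform(0, 100, n)    # basin centroids
q100 = 0.8 * np.log(area) + 0.002 * rain + rng.normal(0, 0.3, n)  # log 100-yr quantile

X = sm.add_constant(np.column_stack([np.log(area), rain]))
gls = sm.GLS(q100, X).fit()     # identity error covariance here; the study
resid = np.asarray(gls.resid)   # builds a full sampling-error covariance

# Jack-knife (leave-one-out) kriging of the GLS residuals
resid_hat = np.empty(n)
idx = np.arange(n)
for i in range(n):
    m = idx != i
    ok = OrdinaryKriging(x[m], y[m], resid[m], variogram_model="spherical")
    z, _ = ok.execute("points", x[i:i + 1], y[i:i + 1])
    resid_hat[i] = float(z[0])

pred = gls.fittedvalues + resid_hat
nse = 1 - np.sum((q100 - pred) ** 2) / np.sum((q100 - q100.mean()) ** 2)
print(f"Nash-Sutcliffe efficiency of the coupled estimate: {nse:.2f}")
```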
Abstract:
The aim of this thesis is to illustrate the evolution of echocardiographic techniques for the early diagnosis of cardiotoxicity. The work presents the echocardiographic imaging modalities used to diagnose cardiotoxicity, from two-dimensional echocardiography to the currently evolving three-dimensional techniques with real-time acquisition. The various diagnostic techniques made available by the echocardiographic examination are analyzed: contrast echocardiography, continuous- and pulsed-wave Doppler, and color Doppler, along with the methods and estimates by which cardiac volumes, indices of myocardial function, can be quantified. The ejection fraction has in fact been, until now, the reference parameter for verifying cardiac injury following antitumor therapy. Cardiotoxicity is identified by a reduction in ejection fraction of ≥5% to <55% with symptoms of heart failure, or an asymptomatic reduction of ≥10% to <55%. However, monitoring this parameter quantifies the damage only once it has already had functional repercussions. In the clinical field, the analysis of cardiac deformation is now becoming established for the early assessment of the onset of cardiotoxicity. The study of cardiac deformation is carried out with a new imaging technique: speckle tracking echocardiography (STE), which allows a quantitative and objective analysis of both global and local myocardial function, since it is independent of the insonation angle, by analyzing the spatial displacement of speckles, points generated by the interaction between ultrasound and myocardial fibers. The main parameters extracted are longitudinal, radial, and circumferential strain, which describe the mechanics of the cardiac muscle deriving from the anatomy of the myocardial fibers. STE, initially developed in 2D, is now also available in 3D, allowing the displacement vector to be evaluated along all three dimensions and no longer only within one plane. A comparison of the two shows that 2D STE exhibits large variability in the measured displacements, whereas 3D STE shows a more uniform pattern, consistent with the normal motion of the cardiac walls. The assessment of global longitudinal strain (GLS), performed by speckle tracking echocardiography, is recognized as a quantitative index of left ventricular function whose reductions are predictive of cardiotoxicity. These reductions are found even at normal ejection fraction values; they therefore constitute a more effective and sensitive indicator of cardiotoxicity and can be used for its early diagnosis.
Abstract:
Aortic dilatation/dissection (AD) can occur spontaneously or in association with genetic syndromes, such as Marfan syndrome (MFS; caused by FBN1 mutations), MFS type 2 and Loeys-Dietz syndrome (associated with TGFBR1/TGFBR2 mutations), and Ehlers-Danlos syndrome (EDS) vascular type (caused by COL3A1 mutations). Although mutations in FBN1 and TGFBR1/TGFBR2 account for the majority of AD cases referred to us for molecular genetic testing, we have obtained negative results for these genes in a large cohort of AD patients, suggesting the involvement of additional genes or acquired factors. In this study we assessed the effect of COL3A1 deletions/duplications in this cohort. Multiplex ligation-dependent probe amplification (MLPA) analysis of 100 unrelated patients identified one hemizygous deletion of the entire COL3A1 gene. Subsequent microarray analyses and sequencing of breakpoints revealed a deletion of 3,408,306 bp at 2q32.1q32.3. This deletion affects not only COL3A1 but also 21 other known genes (GULP1, DIRC1, COL5A2, WDR75, SLC40A1, ASNSD1, ANKAR, OSGEPL1, ORMDL1, LOC100129592, PMS1, MSTN, C2orf88, HIBCH, INPP1, MFSD6, TMEM194B, NAB1, GLS, STAT1, and STAT4), mutations in three of which (COL5A2, SLC40A1, and MSTN) have also been associated with autosomal dominant disorders (EDS classical type, hemochromatosis type 4, and muscle hypertrophy, respectively). Physical and laboratory examinations revealed that true haploinsufficiency of COL3A1, COL5A2, and MSTN, but not that of SLC40A1, leads to a clinical phenotype. Our data not only emphasize the role of COL3A1 in AD patients but also extend the molecular etiology of several disorders by providing hitherto unreported evidence for true haploinsufficiency of the underlying gene.
Abstract:
We apply the efficient unit-root tests of Elliott, Rothenberg, and Stock (1996) and Elliott (1998) to twenty-one real exchange rates, using monthly data for the G-7 countries from the post-Bretton Woods floating exchange rate period. Our results indicate that, for eighteen of the twenty-one real exchange rates, the null hypothesis of a unit root can be rejected at the 10% significance level or better using the Elliott et al. (1996) DF-GLS test. The unit-root null hypothesis is also rejected for one additional real exchange rate when we allow for one endogenously determined break in the time series of the real exchange rate, as in Perron (1997). In all, we find favorable evidence to support long-run purchasing power parity for nineteen of the twenty-one real exchange rates. We also find no strong evidence to suggest that the use of non-U.S. dollar-based real exchange rates tends to produce more favorable results for long-run PPP than the use of U.S. dollar-based real exchange rates, as Lothian (1998) concluded.
Abstract:
Life expectancy has consistently increased over the last 150 years due to improvements in nutrition, medicine, and public health. Several studies found that in many developed countries life expectancy continued to rise following a nearly linear trend, contrary to a common belief that the rate of improvement would decelerate and follow an S-shaped curve. Using samples of countries that exhibited a wide range of economic development levels, we explored the change in life expectancy over time by employing both nonlinear and linear models. We then examined whether there were significant differences in estimates between linear models when assuming an autocorrelated error structure. When data did not have a sigmoidal shape, nonlinear growth models sometimes failed to provide meaningful parameter estimates; the existence of an inflection point and asymptotes made the growth models inflexible with life expectancy data. In the linear models, there was no significant difference in the life expectancy growth rate and future estimates between ordinary least squares (OLS) and generalized least squares (GLS). However, the generalized least squares model was more robust because the data involved time-series variables and the residuals were positively correlated.
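The OLS-versus-GLS comparison described in this abstract can be sketched with statsmodels, whose GLSAR class fits a linear model with AR(p) errors. The data below are synthetic and the parameter values are assumptions:

```python
# Hedged sketch: OLS vs. GLS with an AR(1) error structure on a linear
# life-expectancy trend. GLSAR's iterative fit stands in for the paper's GLS.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
years = np.arange(1950, 2020)
# AR(1) errors: residuals are positively correlated, as the abstract notes.
e = np.zeros(years.size)
for t in range(1, years.size):
    e[t] = 0.7 * e[t - 1] + rng.normal(scale=0.3)
life_exp = 68.0 + 0.2 * (years - years[0]) + e  # hypothetical linear trend

X = sm.add_constant(years - years[0])
ols = sm.OLS(life_exp, X).fit()
gls = sm.GLSAR(life_exp, X, rho=1).iterative_fit(maxiter=10)  # AR(1) errors

print(f"OLS slope:   {ols.params[1]:.4f} (se {ols.bse[1]:.4f})")
print(f"GLSAR slope: {gls.params[1]:.4f} (se {gls.bse[1]:.4f})")
```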
Abstract:
For an adequate assessment of the safety margins of nuclear facilities, e.g. nuclear power plants, it is necessary to consider all possible uncertainties that affect their design, performance and possible accidents. Nuclear data are a source of uncertainty that is involved in neutronics, fuel depletion and activation calculations. These calculations can predict critical response functions during operation and in the event of an accident, such as decay heat after reactor shutdown and the neutron multiplication factor. Thus, the impact of nuclear data uncertainties on these response functions needs to be addressed for a proper evaluation of the safety margins. Methodologies for performing uncertainty propagation calculations need to be implemented in order to analyse the impact of nuclear data uncertainties. It is also necessary to understand the current status of nuclear data and their uncertainties in order to be able to handle this type of data. Great efforts are underway to enhance the European capability to analyse, process and produce covariance data, especially for isotopes that are important for advanced reactors. At the same time, new methodologies and codes are being developed and implemented for using these data and evaluating their impact. These were the objectives of the European ANDES (Accurate Nuclear Data for nuclear Energy Sustainability) project, which provided the framework for the development of this PhD thesis. Accordingly, a review of the state of the art of nuclear data and their uncertainties is first conducted, focusing on three kinds of data: decay data, fission yields and cross sections. A review of the current methodologies for propagating nuclear data uncertainties is also performed.
The Nuclear Engineering Department of UPM has proposed a methodology for propagating uncertainties in depletion calculations, the Hybrid Method, which has been taken as the starting point of this thesis. This methodology has been implemented, developed and extended, and its advantages, drawbacks and limitations have been analysed. It is used in conjunction with the ACAB depletion code and is based on Monte Carlo sampling of variables with uncertainties. Different approaches are presented depending on the cross-section energy structure: one-group, one-group with correlated sampling, and multi-group. Differences and applicability criteria are presented. Sequences have been developed for using nuclear data libraries in different storage formats: ENDF-6 (for evaluated libraries), COVERX (for multi-group libraries of SCALE) and EAF (for activation libraries). A review of the state of the art of fission yield data shows inconsistencies in the uncertainty data, specifically the lack of complete covariance matrices. Furthermore, the international community has expressed a renewed interest in the issue through the Working Party on International Nuclear Data Evaluation Co-operation (WPEC) Subgroup 37 (SG37), which is dedicated to assessing the need for complete nuclear data. This motivates the review, presented here, of the state of the art of methodologies for generating covariance data for fission yields. A Bayesian/generalised least squares (GLS) updating sequence has been selected and implemented to answer this need. Once the Hybrid Method had been implemented, developed and extended, along with the fission yield covariance generation capability, different applications were studied. The fission pulse decay heat problem is tackled first, because of its importance for any event after reactor shutdown and because it is a clean exercise for showing the impact and importance of decay and fission yield data uncertainties in conjunction with the new covariance data. Two fuel cycles of advanced reactors are then studied: the European Facility for Industrial Transmutation (EFIT) and the European Sodium Fast Reactor (ESFR), for which the uncertainties of response functions such as isotopic composition, decay heat and radiotoxicity are addressed. Different nuclear data libraries are used and compared. These applications serve as frameworks for comparing the different approaches of the Hybrid Method, and also for comparing it with other methodologies: Total Monte Carlo (TMC), developed at NRG by A.J. Koning and D. Rochman, and NUDUNA, developed at AREVA GmbH by O. Buss and A. Hoefer. These comparisons reveal the advantages, limitations and range of application of the Hybrid Method.
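The Bayesian/GLS updating sequence named above follows the standard generalised-least-squares constraint update; a minimal sketch with toy numbers (not evaluated nuclear data) is:

```python
# Hedged sketch of a Bayesian/GLS update step: a prior vector (e.g. fission
# yields) with covariance C0 is constrained by measurements m with
# covariance V through a linear sensitivity matrix S. Toy dimensions only.
import numpy as np

def gls_update(x0, C0, S, m, V):
    """Return the posterior mean and covariance of the GLS update."""
    K = C0 @ S.T @ np.linalg.inv(S @ C0 @ S.T + V)  # gain matrix
    x_post = x0 + K @ (m - S @ x0)
    C_post = C0 - K @ S @ C0
    return x_post, C_post

# Toy example: 3 yields with independent 5% prior uncertainty, constrained
# by one measurement saying the yields must sum (nearly) to a known value.
x0 = np.array([0.31, 0.46, 0.25])
C0 = np.diag((0.05 * x0) ** 2)
S = np.ones((1, 3))      # sensitivity: the sum of the yields
m = np.array([1.00])     # measured sum
V = np.array([[1e-6]])   # small measurement variance

x_post, C_post = gls_update(x0, C0, S, m, V)
print("posterior yields:", x_post)
print("posterior covariance (note the induced correlations):\n", C_post)
```

The constraint pulls the yields toward the measured sum and, as expected of a GLS update, introduces off-diagonal (negative) correlations into the posterior covariance, which is exactly the kind of complete covariance matrix the thesis sets out to generate.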
Abstract:
We analyzed the FANTOM2 clone set of 60,770 RIKEN full-length mouse cDNA sequences and 44,122 public mRNA sequences. We developed a new computational procedure to identify and classify the forms of splice variation evident in this data set and organized the results into a publicly accessible database that can be used for future expression array construction, structural genomics, and analyses of the mechanism and regulation of alternative splicing. Statistical analysis shows that at least 41%, and possibly as much as 60%, of multiexon genes in mouse have multiple splice forms. Of the transcription units with multiple splice forms, 49% contain transcripts in which the apparent use of an alternative transcription start (stop) is accompanied by alternative splicing of the initial (terminal) exon. This implies that alternative transcription may frequently induce alternative splicing. The fact that 73% of all exons with splice variation fall within the annotated coding region indicates that most splice variation is likely to affect the protein form. Finally, we compared the set of constitutive (present in all transcripts) exons with the set of cryptic (present only in some transcripts) exons and found statistically significant differences in their length distributions, the nucleotide distributions around their splice junctions, and the frequencies of occurrence of several short sequence motifs.
Abstract:
In this article we investigate the asymptotic and finite-sample properties of predictors of regression models with autocorrelated errors. We prove new theorems on the predictive efficiency of generalized least squares (GLS) and incorrectly structured GLS predictors. We also establish the form of their predictive mean squared errors, as well as the magnitude of these errors relative to each other and to those generated by the ordinary least squares (OLS) predictor. A large simulation study is used to evaluate the finite-sample performance of forecasts generated from models using different corrections for serial correlation.
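A stripped-down version of the kind of simulation this article describes, comparing the predictive mean squared errors of OLS and a correctly structured GLS predictor under AR(1) errors; the design, sample size, and parameter values are assumptions:

```python
# Hedged sketch: one-step-ahead predictive MSE of OLS vs. GLS under AR(1)
# errors, with the GLS predictor built from the known error covariance.
import numpy as np

rng = np.random.default_rng(7)
n, rho, reps = 50, 0.8, 2000

t = np.arange(n + 1)
X_all = np.column_stack([np.ones(n + 1), t])  # intercept + trend regressors
X, x_new = X_all[:n], X_all[n]                # estimation window, forecast point
beta = np.array([1.0, 0.5])

# AR(1) error covariance over the estimation window: Omega[i, j] = rho**|i-j|
Omega_inv = np.linalg.inv(rho ** np.abs(np.subtract.outer(t[:n], t[:n])))

mse_ols = mse_gls = 0.0
for _ in range(reps):
    e = np.empty(n + 1)
    e[0] = rng.normal()  # stationary start
    for s in range(1, n + 1):
        e[s] = rho * e[s - 1] + rng.normal(scale=np.sqrt(1 - rho**2))
    y = X_all @ beta + e

    b_ols = np.linalg.lstsq(X, y[:n], rcond=None)[0]
    b_gls = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y[:n])
    # The GLS (BLUP) forecast also exploits the correlation of the future
    # error with the last in-sample residual: E[e_{n+1} | e_n] = rho * e_n.
    gls_pred = x_new @ b_gls + rho * (y[n - 1] - X[n - 1] @ b_gls)

    mse_ols += (y[n] - x_new @ b_ols) ** 2 / reps
    mse_gls += (y[n] - gls_pred) ** 2 / reps

print(f"one-step predictive MSE  OLS: {mse_ols:.3f}   GLS: {mse_gls:.3f}")
```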
Abstract:
Introduction - Monocytes, with 3 different subsets, are implicated in the initiation and progression of the atherosclerotic plaque, contributing to plaque instability and rupture. Mon1 are the "classical" monocytes with inflammatory action, whilst Mon3 are considered reparative, with fibroblast deposition ability. The function of the newly described Mon2 subset is yet to be fully described. In the PCI era, fewer patients have globally reduced left ventricular ejection fraction post infarction, hence the importance of studying regional wall motion abnormalities and deformation at segmental level using longitudinal strain. Little is known of the role of the 3 monocyte subpopulations in determining global strain in ST-elevation myocardial infarction (STEMI) patients. Methodology - STEMI patients (n = 101, mean age 64 ± 13 years; 69% male) treated with percutaneous revascularisation were recruited within 24 h post infarction. Peripheral blood monocyte subpopulations were enumerated and characterised using flow cytometry after staining for CD14, CD16 and CCR2. Phenotypically, monocyte subpopulations are defined as: CD14++CD16-CCR2+ (Mon1), CD14++CD16+CCR2+ (Mon2) and CD14+CD16++CCR2- (Mon3). Phagocytic activity of monocytes was measured using flow cytometry and a commercial E. coli kit. Transthoracic 2D echocardiography was performed within 7 days and at 6 months post infarct to assess global longitudinal strain (GLS) via speckle tracking. MACE was defined as recurrent acute coronary syndrome and death. Results - STEMI patients with EF ≥50% by Simpson's biplane (n = 52) had GLS assessed. Using multivariate regression analysis, higher counts of Mon1 and Mon2 and the phagocytic activity of Mon2 were significantly associated with GLS (after adjusting for age, time to hospital presentation, and peak troponin levels) (Table 1). At 6 months, the convalescent GLS remained associated with higher counts of Mon1 and Mon2. At one-year follow-up, using multivariate Cox regression analysis, Mon1 and Mon2 counts were independent predictors of MACE in patients with a reduced GLS (n = 21). Conclusion - In patients with normal or mildly impaired EF post infarction, higher counts of Mon1 and Mon2 correlated with GLS within 7 days and at 6 months of remodelling post infarction. Adverse clinical outcomes in patients with reduced convalescent GLS were predicted by Mon1 and Mon2, suggestive of an inflammatory role for the newly identified Mon2 subpopulation. These results imply an important role for monocytes in myocardial healing when assessed by subclinical ventricular function indices.
Abstract:
The purpose of the study was to explore the geography literacy, attitudes, and experiences of Florida International University (FIU) freshman students scoring at the low and high ends of a geography literacy survey. The Geography Literacy and ABC Models formed the conceptual framework. Participants were freshman students enrolled in the Finite Math course at FIU. Since it is often assumed that students who perform poorly on geography assessments have no interest in the subject, testing and interviewing students allowed the researcher to examine this assumption. In Phase I, participants completed the Geography Literacy Survey (GLS), with items taken from the 2010 NAEP Geography Subject Area Assessment. The lowest-scoring 35% and highest-scoring 20% of performers were invited to Phase II, which consisted of semi-structured interviews. A total of 187 students participated in Phase I and 12 in Phase II. The primary research question was: what are the geography attitudes and experiences of freshman students scoring at the low and high ends of a geography literacy survey? The students had positive attitudes regardless of how they performed on the GLS. The study included a quantitative sub-question regarding the students' performance on the GLS, which was equivalent to the performance of 12th-grade students on the NAEP Assessment. There were three qualitative sub-questions, from which the following themes were identified: the students' definition of geography is limited, students recall more out-of-school experiences with geography, and students find geography valuable. In addition, there were five emergent themes: there is a concern regarding a lack of geographical knowledge, rote memorization of geographical content is overemphasized, geographical concepts are related to other subjects, taking the high-school-level AP Human Geography course is powerful, and there is a need for real-world applications of geographical knowledge. As suggestions for practice, the researcher proposed repositioning geography in our schools to avoid misunderstandings, highlighting its interconnectedness with other fields, connecting the material to real-world events and daily decision-making, making research projects meaningful, partnering with local geographers, and offering mandatory geography courses at all educational levels.