86 results for GLS detrending


Relevância:

10.00%

Resumo:

The identification in various proxy records of periods of rapid (decadal scale) climate change over recent millennia, together with the possibility that feedback mechanisms may amplify climate system responses to increasing atmospheric CO2, highlights the importance of a detailed understanding, at high spatial and temporal resolutions, of forcings and feedbacks within the system. Such an understanding has hitherto been limited because the temperate marine environment has lacked an absolute timescale of the kind provided by tree-rings for the terrestrial environment and by corals for the tropical marine environment. Here we present the first annually resolved, multi-centennial (489-year), absolutely dated, shell-based marine master chronology. The chronology has been constructed by detrending and averaging annual growth increment widths in the shells of multiple specimens of the very long-lived bivalve mollusc Arctica islandica, collected from sites to the south and west of the Isle of Man in the Irish Sea. The strength of the common environmental signal expressed in the chronology is fully comparable with equivalent statistics for tree-ring chronologies. Analysis of the ¹⁴C signal in the shells shows no trend in the marine radiocarbon reservoir correction (ΔR), although it may be more variable before ~1750. The δ¹³C signal shows a very significant (R² = 0.456, p < 0.0001) trend due to the ¹³C Suess effect.
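The "strength of the common environmental signal" in such a chronology is conventionally summarised by the mean inter-series correlation (rbar) and the Expressed Population Signal (EPS). A minimal sketch of those statistics, on invented detrended growth indices (this is illustrative, not the authors' code):

```python
import numpy as np

def rbar_eps(indices):
    """Mean inter-series correlation (rbar) and Expressed Population Signal
    (EPS), the usual measures of the common signal shared by the series."""
    n = indices.shape[0]
    R = np.corrcoef(indices)
    rbar = R[np.triu_indices(n, k=1)].mean()
    eps = n * rbar / (1 + (n - 1) * rbar)
    return rbar, eps

# toy detrended growth indices: a shared signal plus series-specific noise
rng = np.random.default_rng(0)
common = rng.normal(0.0, 1.0, 200)
indices = np.array([1.0 + 0.3 * common + rng.normal(0, 0.3, 200)
                    for _ in range(8)])
rbar, eps = rbar_eps(indices)
print(round(rbar, 2), round(eps, 2))
```

With eight series sharing half their variance, rbar comes out near 0.5 and EPS close to 0.9, above the 0.85 threshold often quoted for tree-ring chronologies.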

Relevância:

10.00%

Resumo:

This thesis presents a new approach for mapping air quality, so that this variable of the physical environment can be taken into account in physical or territorial planning. Air quality is not normally considered in territorial planning, mainly because of the complexity of its composition and behaviour and the difficulty of obtaining reliable, validated information. In addition, the wide spatial and temporal variability of air quality measurements makes their territorial consideration difficult and requires georeferenced information, which involves predicting values at places in the territory where no data exist. The thesis develops a geostatistical model for predicting air quality values across a territory. The proposed model interpolates the pollutant concentrations recorded at monitoring stations using ordinary kriging, after a detrending step that removes the local character of the sampled values. The detrending removes the trends in the sampled series caused by temporal and spatial variations in air quality; the transformation of air quality values into site-independent quantities is performed using land-use parameters and other variables characteristic of the local scale. The result is a spatially homogeneous input set, a prerequisite for the correct use of any interpolation algorithm, and of ordinary kriging in particular. After interpolation, a retrending (retransformation) step restores the local character to the final map at places where no monitoring data are available. The Community of Madrid was chosen as the study area because of the availability of real data. 
These data, air quality values and territorial variables, are used at two stages: first, to optimise the selection of the most suitable indicators for the detrending process and to develop each stage of the model; second, to apply the full model and evaluate its predictive power. The model is applied to estimate mean and maximum NO2 values in the study territory. The implementation of the proposed model addresses the territorialisation of air quality data by reducing three factors that are key to its effective integration into territorial planning and the associated decision-making process: uncertainty, the time needed to generate the prediction, and the associated resources (data and costs). The model yields a prediction of the pollutant under analysis within hours, compared with the modelling or analysis periods required by other methodologies. The required resources are minimal: only data from the territory's monitoring stations, which are normally available on the institutional websites of the bodies managing the air quality measurement networks. As for prediction uncertainty, the results of the proposed model are statistically sound, and the mean errors are generally similar to or lower than those obtained with existing methodologies.
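The detrend, interpolate, retrend workflow can be sketched with a minimal ordinary kriging solver. This is an illustrative toy (invented station layout, NO2 values and local-effect terms, and an assumed exponential variogram rather than one fitted from data), not the thesis implementation:

```python
import numpy as np

def ordinary_kriging(xy, z, targets, variogram=lambda h: 1.0 - np.exp(-h / 2.0)):
    """Minimal ordinary kriging: solve the kriging system augmented with a
    Lagrange multiplier for the unbiasedness (weights sum to 1) constraint."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)   # semivariances between stations
    A[n, n] = 0.0
    preds = []
    for p in targets:
        b = np.ones(n + 1)
        b[:n] = variogram(np.linalg.norm(xy - p, axis=1))
        w = np.linalg.solve(A, b)
        preds.append(w[:n] @ z)
    return np.array(preds)

# detrend -> interpolate (-> retrend) as in the thesis workflow
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
no2 = np.array([40.0, 35.0, 30.0, 25.0])    # measured NO2 (hypothetical)
local_effect = np.array([5.0, 0.0, 0.0, -5.0])   # site-specific component
residual = no2 - local_effect                # "homogenized" station values
grid = np.array([[0.5, 0.5]])
pred = ordinary_kriging(stations, residual, grid)
print(pred)   # retrending would then add back the local effect at each grid cell
```

At the centre of this symmetric layout all four weights are equal, so the prediction is simply the mean of the homogenized values, 32.5.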

Relevância:

10.00%

Resumo:

For an adequate assessment of the safety margins of nuclear facilities, e.g. nuclear power plants, it is necessary to consider all the uncertainties that affect their design, performance and accident response. Nuclear data are one such source of uncertainty, entering into neutronics, fuel depletion and activation calculations. These calculations predict response functions that are critical during operation and in the event of an accident, such as decay heat and the neutron multiplication factor. The impact of nuclear data uncertainties on these response functions therefore needs to be addressed for a proper evaluation of the safety margins. Methodologies for uncertainty propagation must be implemented in order to analyse this impact, and it is also necessary to understand the current status of nuclear data and their uncertainties in order to handle such data. Great efforts are under way to enhance the capability to analyse, process and produce covariance data, especially for isotopes of importance in advanced reactors, while new methodologies and codes are being developed and implemented to use these data and evaluate their impact. These were the objectives of the European ANDES (Accurate Nuclear Data for nuclear Energy Sustainability) project, which provided the framework for this PhD thesis. 
Accordingly, a review of the state of the art of nuclear data and their uncertainties is first conducted, focusing on three kinds of data: decay data, fission yields and cross sections, together with a review of the current methodologies for propagating nuclear data uncertainties. The Nuclear Engineering Department (DIN) of UPM has proposed a methodology for propagating uncertainties in depletion calculations, the Hybrid Method, which is taken as the starting point of this thesis. The methodology has been implemented, developed and extended, and its advantages, drawbacks and limitations analysed. It is used in conjunction with the ACAB depletion code and is based on Monte Carlo sampling of the nuclear data with uncertainties. Different approaches are presented depending on the cross-section energy-group structure: one group, one group with correlated sampling, and multigroup, together with their differences and applicability criteria. Sequences have been developed for using nuclear data libraries stored in different formats: ENDF-6 (for evaluated libraries), COVERX (for the multigroup libraries of SCALE) and EAF (for activation libraries). The review of the state of the art of fission yield data identified a lack of uncertainty information, specifically of complete covariance matrices. Given the renewed interest of the international community, expressed through the Working Party on International Nuclear Data Evaluation Co-operation (WPEC) Subgroup 37 (SG37), which assesses needs for improved nuclear data, the methodologies for generating covariance data were reviewed, and a Bayesian/generalised least squares (GLS) updating sequence was selected and implemented to answer this lack of complete covariance matrices for fission yields. 
Once the Hybrid Method had been implemented, developed and extended, along with the fission yield covariance generation capability, different nuclear applications were studied. The fission pulse decay heat problem is tackled first, because of its importance for any event after reactor shutdown and because it is a clean exercise for showing the impact and importance of decay data and fission yield uncertainties in conjunction with the new complete covariance matrices. Two advanced-reactor fuel cycles are then studied: the European Facility for Industrial Transmutation (EFIT) and the European Sodium Fast Reactor (ESFR), for which the impact of nuclear data uncertainties on isotopic composition, decay heat and radiotoxicity is analysed. Different nuclear data libraries are used in these studies, allowing the impact of their uncertainties to be compared. These applications also serve as frameworks for comparing the different approaches of the Hybrid Method, and for comparison with other methodologies for propagating nuclear data uncertainties: Total Monte Carlo (TMC), developed at NRG by A.J. Koning and D. Rochman, and NUDUNA, developed at AREVA GmbH by O. Buss and A. Hoefer. These comparisons reveal the advantages, limitations and range of application of the Hybrid Method.
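The Bayesian/GLS update mentioned here has a standard closed form: a prior vector x with covariance C is combined with auxiliary data y = A x (covariance V) to give a posterior mean and a full posterior covariance matrix. A sketch with toy numbers (the values and the sum-rule constraint are invented, not from the thesis):

```python
import numpy as np

def gls_update(x, C, A, y, V):
    """Bayesian/GLS updating: posterior mean and covariance of x given
    data y with sensitivity matrix A and data covariance V."""
    K = C @ A.T @ np.linalg.inv(A @ C @ A.T + V)   # gain matrix
    x_post = x + K @ (y - A @ x)
    C_post = C - K @ A @ C
    return x_post, C_post

# toy prior: two independent yields, constrained to sum to 2.0 (a mass-balance
# style sum rule, used here only as an illustration)
x = np.array([0.9, 1.2])
C = np.diag([0.04, 0.04])
A = np.array([[1.0, 1.0]])          # "measurement": the sum of the yields
y = np.array([2.0])
V = np.array([[1e-6]])              # the constraint is known very precisely
x_post, C_post = gls_update(x, C, A, y, V)
print(x_post)        # pulled toward satisfying the sum constraint
print(C_post[0, 1])  # the constraint induces a negative covariance
```

This is exactly how a sum rule generates the off-diagonal terms of a complete covariance matrix: yields that were independent a priori become anticorrelated a posteriori.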

Relevância:

10.00%

Resumo:

Models for prediction of oil content as a percentage of dried weight in olive fruits were computed through PLS regression on NIR spectra. Spectral preprocessing was carried out by applying multiplicative signal correction (MSC), the Savitzky–Golay algorithm, standard normal variate correction (SNV), and detrending (D) to the NIR spectra. MSC was the preprocessing technique showing the best performance. Further reduction of variability was performed by applying the Wold method of orthogonal signal correction (OSC). The calibration model achieved an R² of 0.93, a SEPc of 1.42, and an RPD of 3.8. The R² obtained with the validation set remained 0.93, and the SEPc was 1.41.

Relevância:

10.00%

Resumo:

The understanding of the continental carbon budget is essential to predict future climate change. In order to quantify CO₂ and CH₄ fluxes at the regional scale, a measurement system was installed at the former radio tower in Beromünster as part of the Swiss greenhouse gas monitoring network (CarboCount CH). We have been measuring the mixing ratios of CO₂, CH₄ and CO on this tower with sample inlets at 12.5, 44.6, 71.5, 131.6 and 212.5 m above ground level using a cavity ring down spectroscopy (CRDS) analyzer. The first 2-year (December 2012–December 2014) continuous atmospheric record was analyzed for seasonal and diurnal variations and interspecies correlations. In addition, storage fluxes were calculated from the hourly profiles along the tower. The atmospheric growth rates from 2013 to 2014 determined from this 2-year data set were 1.78 ppm yr⁻¹, 9.66 ppb yr⁻¹ and −1.27 ppb yr⁻¹ for CO₂, CH₄ and CO, respectively. After detrending, clear seasonal cycles were detected for CO₂ and CO, whereas CH₄ showed a stable baseline suggesting a net balance between sources and sinks over the course of the year. CO and CO₂ were strongly correlated (r² > 0.75) in winter (DJF), but almost uncorrelated in summer. In winter, anthropogenic emissions dominate the biospheric CO₂ fluxes and the variations in mixing ratios are large due to reduced vertical mixing. The diurnal variations of all species showed distinct cycles in spring and summer, with the lowest sampling level showing the most pronounced diurnal amplitudes. The storage flux estimates exhibited reasonable diurnal shapes for CO₂, but underestimated the strength of the surface sinks during daytime. This seems plausible, keeping in mind that we were only able to calculate the storage fluxes along the profile of the tower but not the flux into or out of this profile, since no Eddy covariance flux measurements were taken at the top of the tower.
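Separating a growth rate from a seasonal cycle, as done before the "after detrending" analysis above, is commonly a least-squares fit of a linear trend plus annual harmonics. A minimal sketch on a synthetic two-year record (numbers invented, loosely echoing the reported ~1.8 ppm yr⁻¹ CO₂ growth rate):

```python
import numpy as np

def trend_and_seasonal(t_years, series):
    """Fit offset + linear trend + one annual harmonic by least squares.
    Returns [offset, growth rate per year, sin amplitude, cos amplitude]."""
    X = np.column_stack([np.ones_like(t_years), t_years,
                         np.sin(2 * np.pi * t_years),
                         np.cos(2 * np.pi * t_years)])
    coef, *_ = np.linalg.lstsq(X, series, rcond=None)
    return coef

# synthetic daily CO2 record over two years
t = np.linspace(0.0, 2.0, 730)
co2 = (395.0 + 1.8 * t + 4.0 * np.cos(2 * np.pi * t)
       + np.random.default_rng(2).normal(0.0, 0.5, t.size))
coef = trend_and_seasonal(t, co2)
print(round(coef[1], 1))   # recovered growth rate, ppm per year
```

Subtracting the fitted offset and trend from the series leaves the detrended seasonal cycle plus residuals.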

Relevância:

10.00%

Resumo:

We demonstrate here that the growth increment variability in the shell of the long-lived bivalve mollusc Arctica islandica can be interpreted as an indicator of marine environmental change in the climatically important North Atlantic shelf seas. Multi-centennial (up to 489-year) chronologies were constructed using five detrending techniques and their characteristics compared. The strength of the common environmental signal expressed in the chronologies was found to be fully comparable with equivalent statistics for tree-ring chronologies. The negative exponential function using truncated increment-width series from which the first thirty years have been removed was chosen as the optimal detrending technique. Chronology indices were compared with the Central England Temperature record and with seawater temperature records from stations close to the study site in the Irish Sea. Statistically significant correlations were found between the chronology indices and (a) mean air temperature for the 14-month period beginning in the January preceding the year of growth, (b) mean seawater temperatures for February–October in the year preceding the year of growth, (c) late summer and autumn air temperatures and sea surface temperatures for the year of growth, and (d) the timing of the autumn decline in SST. Changes through time in the correlations with air and seawater temperatures and changes towards a deeper water origin for the shells in the chronology were interpreted as an indication that shell growth may respond to stratification dynamics.
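The chosen detrending, a negative exponential fitted to truncated increment-width series, can be sketched as follows. This is an illustrative version (log-linear fit of w ~ a·exp(−b·t) without an asymptote term, on invented widths), not the study's code:

```python
import numpy as np

def negexp_detrend(widths, truncate=30):
    """Drop the first `truncate` increments (fast juvenile growth), fit a
    negative exponential w(t) ~ a*exp(-b*t) by log-linear least squares,
    and return dimensionless growth indices width / fitted."""
    w = np.asarray(widths, dtype=float)[truncate:]
    t = np.arange(w.size)
    b, log_a = np.polyfit(t, np.log(w), 1)
    fitted = np.exp(log_a + b * t)
    return w / fitted

# synthetic 120-year increment series with multiplicative noise
rng = np.random.default_rng(3)
t = np.arange(120)
widths = 200.0 * np.exp(-0.02 * t) * rng.normal(1.0, 0.05, t.size)
idx = negexp_detrend(widths)
print(idx.size)   # 90 indices remain after truncating the first 30 years
```

The resulting indices average close to 1, so departures from 1 carry the environmental signal rather than the ontogenetic growth decline.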

Relevância:

10.00%

Resumo:

We analyzed the FANTOM2 clone set of 60,770 RIKEN full-length mouse cDNA sequences and 44,122 public mRNA sequences. We developed a new computational procedure to identify and classify the forms of splice variation evident in this data set and organized the results into a publicly accessible database that can be used for future expression array construction, structural genomics, and analyses of the mechanism and regulation of alternative splicing. Statistical analysis shows that at least 41% and possibly as much as 60% of multiexon genes in mouse have multiple splice forms. Of the transcription units with multiple splice forms, 49% contain transcripts in which the apparent use of an alternative transcription start (stop) is accompanied by alternative splicing of the initial (terminal) exon. This implies that alternative transcription may frequently induce alternative splicing. The fact that 73% of all exons with splice variation fall within the annotated coding region indicates that most splice variation is likely to affect the protein form. Finally, we compared the set of constitutive (present in all transcripts) exons with the set of cryptic (present only in some transcripts) exons and found statistically significant differences in their length distributions, the nucleotide distributions around their splice junctions, and the frequencies of occurrence of several short sequence motifs.
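The constitutive/cryptic distinction used in the comparison above reduces to counting, per gene, in how many transcripts each exon appears. A minimal sketch (hypothetical exon identifiers, not the FANTOM2 pipeline):

```python
from collections import defaultdict

def classify_exons(transcripts):
    """Label an exon 'constitutive' if it appears in every transcript of the
    gene, else 'cryptic'. Transcripts are given as lists of exon identifiers."""
    counts = defaultdict(int)
    for tx in transcripts:
        for exon in set(tx):   # count each exon once per transcript
            counts[exon] += 1
    n = len(transcripts)
    return {e: ("constitutive" if c == n else "cryptic")
            for e, c in counts.items()}

# hypothetical gene with three splice forms sharing exons e1 and e4
gene = [["e1", "e2", "e4"], ["e1", "e3", "e4"], ["e1", "e4"]]
labels = classify_exons(gene)
print(labels)
```

Once labelled, the two exon sets can be compared on length distributions, splice-junction composition and motif frequencies, as in the study.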

Relevância:

10.00%

Resumo:

In this article we investigate the asymptotic and finite-sample properties of predictors of regression models with autocorrelated errors. We prove new theorems associated with the predictive efficiency of generalized least squares (GLS) and incorrectly structured GLS predictors. We also establish the form associated with their predictive mean squared errors as well as the magnitude of these errors relative to each other and to those generated from the ordinary least squares (OLS) predictor. A large simulation study is used to evaluate the finite-sample performance of forecasts generated from models using different corrections for the serial correlation.
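The qualitative finding, that a predictor exploiting the error autocorrelation beats the OLS predictor, is easy to reproduce in a small simulation. This toy uses quasi-differencing with a known AR(1) coefficient and is only a sketch of the idea, not the article's experimental design:

```python
import numpy as np

def simulate(rho=0.7, n=200, trials=500, seed=4):
    """Compare one-step-ahead forecast MSE of OLS vs GLS predictors in
    y = b0 + b1*x + u, with AR(1) errors u_t = rho*u_{t-1} + e_t."""
    rng = np.random.default_rng(seed)
    mse_ols = mse_gls = 0.0
    for _ in range(trials):
        x = rng.normal(size=n + 1)
        e = rng.normal(size=n + 1)
        u = np.zeros(n + 1)
        for t in range(1, n + 1):
            u[t] = rho * u[t - 1] + e[t]
        y = 1.0 + 2.0 * x + u
        X = np.column_stack([np.ones(n), x[:n]])
        b_ols = np.linalg.lstsq(X, y[:n], rcond=None)[0]
        # GLS via quasi-differencing (Cochrane-Orcutt form, rho known)
        Xs = X[1:] - rho * X[:-1]
        ys = y[1:n] - rho * y[:n - 1]
        b_gls = np.linalg.lstsq(Xs, ys, rcond=None)[0]
        # the GLS predictor also forecasts the error: add rho * last residual
        f_ols = b_ols @ [1.0, x[n]]
        f_gls = b_gls @ [1.0, x[n]] + rho * (y[n - 1] - b_gls @ [1.0, x[n - 1]])
        mse_ols += (y[n] - f_ols) ** 2
        mse_gls += (y[n] - f_gls) ** 2
    return mse_ols / trials, mse_gls / trials

m_ols, m_gls = simulate()
print(round(m_ols, 2), round(m_gls, 2))  # GLS forecast MSE should be smaller
```

The gap comes from the rho * residual term: the OLS predictor ignores the predictable part of tomorrow's error.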

Relevância:

10.00%

Resumo:

Introduction - Monocytes, with 3 different subsets, are implicated in the initiation and progression of the atherosclerotic plaque, contributing to plaque instability and rupture. Mon1 are the "classical" monocytes with inflammatory action, whilst Mon3 are considered reparative, with fibroblast deposition ability. The function of the newly described Mon2 subset is yet to be fully described. In the PCI era, fewer patients have globally reduced left ventricular ejection fraction post infarction, hence the importance of studying regional wall motion abnormalities and deformation at segmental levels using longitudinal strain. Little is known of the role of the 3 monocyte subpopulations in determining global strain in ST-elevation myocardial infarction (STEMI) patients. Methodology - STEMI patients (n = 101, mean age 64 ± 13 years; 69% male) treated with percutaneous revascularisation were recruited within 24 h post-infarction. Peripheral blood monocyte subpopulations were enumerated and characterised using flow cytometry after staining for CD14, CD16 and CCR2. Phenotypically, monocyte subpopulations are defined as CD14++CD16-CCR2+ (Mon1), CD14++CD16+CCR2+ (Mon2) and CD14+CD16++CCR2- (Mon3). Phagocytic activity of monocytes was measured using flow cytometry and a commercial E. coli kit. Transthoracic 2D echocardiography was performed within 7 days and at 6 months post infarct to assess global longitudinal strain (GLS) via speckle tracking. MACE was defined as recurrent acute coronary syndrome and death. 
Results - STEMI patients with EF ≥50% by Simpson's biplane (n = 52) had GLS assessed. Using multivariate regression analysis, higher counts of Mon1 and Mon2 and phagocytic activity of Mon2 were significantly associated with GLS (after adjusting for age, time to hospital presentation, and peak troponin levels) (Table 1). At 6 months, the convalescent GLS remained associated with higher counts of Mon1 and Mon2. At one-year follow-up, using multivariate Cox regression analysis, Mon1 and Mon2 counts were independent predictors of MACE in patients with a reduced GLS (n = 21). Conclusion - In patients with normal or mildly impaired EF post infarction, higher counts of Mon1 and Mon2 correlated with GLS within 7 days and at 6 months of remodelling post infarction. Adverse clinical outcomes in patients with reduced convalescent GLS were predicted by Mon1 and Mon2, suggestive of an inflammatory role for the newly identified Mon2 subpopulation. These results imply an important role for monocytes in myocardial healing when assessed by subclinical ventricular function indices.

Relevância:

10.00%

Resumo:

The purpose of the study was to explore the geography literacy, attitudes and experiences of Florida International University (FIU) freshman students scoring at the low and high ends of a geography literacy survey. The Geography Literacy and ABC Models formed the conceptual framework. Participants were freshman students enrolled in the Finite Math course at FIU. Since it is assumed that students who perform poorly on geography assessments do not have an interest in the subject, testing and interviewing students allowed the researcher to explore the assumption. In Phase I, participants completed the Geography Literacy Survey (GLS) with items taken from the 2010 NAEP Geography Subject Area Assessment. The low 35% and high 20% performers were invited for Phase II, which consisted of semi-structured interviews. A total of 187 students participated in Phase I and 12 in Phase II. The primary research question asked was what are the geography attitudes and experiences of freshman students scoring at the low and high ends of a geographical literacy survey? The students had positive attitudes regardless of how they performed on the GLS. The study included a quantitative sub-question regarding the performance of the students on the GLS. The students’ performance on the GLS was equivalent to the performance of 12th grade students from the NAEP Assessment. There were three qualitative sub-questions from which the following themes were identified: the students’ definition of geography is limited, students recall more out of school experiences with geography, and students find geography valuable. In addition, there were five emergent themes: there is a concern regarding a lack of geographical knowledge, rote memorization of geographical content is overemphasized, geographical concepts are related to other subjects, taking the high school level AP Human Geography course is powerful, and there is a need for real-world applications of geographical knowledge. 
The researcher offered suggestions for practice: reposition geography in schools to avoid misunderstandings, highlight its interconnectedness with other fields, connect the material to real-world events and daily decision-making, make research projects meaningful, partner with local geographers, and offer a mandatory geography course at all educational levels.

Relevância:

10.00%

Resumo:

Brazil's weakness in tourism competitiveness is observable in World Tourism Organization data: in 2011 the country fell from 45th to 52nd position, despite leading in the natural resources attribute and ranking 23rd in cultural resources. Considerable interest and effort have therefore been directed at studying the competitiveness of tourism products and destinations. A tourist destination is characterised by a complex, interlinked set of tangible and intangible factors, involving high complexity, high-dimensional data, non-linearity and dynamic behaviour, which makes these processes difficult to model with approaches based on classical statistical techniques. This thesis investigated structural equation models and their algorithms as applied in this area, analysing the complete data-analysis cycle: a confirmatory process for developing and evaluating a holistic model of tourist satisfaction; validation of the structure of the measurement and structural models through multi-group invariance tests; a comparative analysis of the MLE, GLS and ULS estimation methods for modelling satisfaction; and market segmentation of the tourist destination sector using Kohonen self-organising maps, validated with structural equation modelling. Applications were carried out on data from the tourism sector, the main service industry of the State of Rio Grande do Norte, with structural equation models of tourist destination behaviour patterns being theoretically developed and empirically tested. 
The results of the empirical study were based on surveys using systematic random sampling, carried out in Natal-RN between January and March 2013, and provided sound evidence that the proposed theoretical model is satisfactory, with high explanatory and predictive capacity, satisfaction being the most important antecedent of destination loyalty. Moreover, satisfaction mediates between the generation of travel motivation and destination loyalty, and tourists first seek satisfaction with the quality of tourism services and only afterwards with the aspects that influence loyalty. Academic and managerial contributions are presented, and suggestions are given for future work.
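The Kohonen self-organising map used for the market segmentation step can be sketched in a few lines. This is a tiny illustrative SOM on invented survey-like data (two latent respondent profiles), not the thesis implementation or its parameters:

```python
import numpy as np

def train_som(data, grid=(2, 2), epochs=20, seed=5):
    """Tiny Kohonen SOM: units on a small grid compete for each sample;
    the winner and its grid neighbours move toward the sample."""
    rng = np.random.default_rng(seed)
    h, w = grid
    W = rng.normal(0.0, 0.1, (h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs)            # decaying learning rate
        sigma = 2.0 * (1 - epoch / epochs) + 0.5   # shrinking neighbourhood
        for x in rng.permutation(data):
            d = np.linalg.norm(W - x, axis=-1)
            bmu = np.unravel_index(d.argmin(), d.shape)   # best-matching unit
            g = np.exp(-np.linalg.norm(coords - np.array(bmu), axis=-1) ** 2
                       / (2 * sigma ** 2))
            W += lr * g[..., None] * (x - W)
    return W

def segments(W, data):
    """Assign each respondent to its best-matching unit (market segment)."""
    flat = W.reshape(-1, W.shape[-1])
    return np.argmin(np.linalg.norm(data[:, None, :] - flat[None], axis=-1),
                     axis=1)

# invented Likert-style scores for two distinct respondent profiles
rng = np.random.default_rng(6)
grp1 = rng.normal([4.5, 4.0, 4.5], 0.3, (50, 3))   # satisfied profile
grp2 = rng.normal([2.0, 2.5, 2.0], 0.3, (50, 3))   # unsatisfied profile
data = np.vstack([grp1, grp2])
W = train_som(data)
seg = segments(W, data)
print(np.bincount(seg, minlength=4))   # respondents per segment
```

In the thesis workflow, segments obtained this way are then validated by fitting the structural equation model within each group.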

Relevância:

10.00%

Resumo:

In the telecommunications market, the technological transformations of recent decades have combined with a landscape of high-technology companies that characterizes the personal mobile communications sector worldwide. In this context, companies in this sector are increasingly concerned with competitiveness, service offerings, coverage area, pent-up demand and customer loyalty. Consumer-behaviour studies investigate customer satisfaction and loyalty as basic factors in successful, lasting relationships with companies. The complexity of the relationships among the variables involved in assessing customer satisfaction in mobile communications can be properly investigated with multivariate statistical methods. This thesis analysed the causal relationships among the antecedents and consequences associated with customer satisfaction in the mobile communications segment, and developed and validated a behavioural model of customers' use of this service, seeking to explain the relationships among the constructs involved: satisfaction, service quality, perceived value, brand image, loyalty and complaints. A broad theoretical basis was established to assess the strategic importance of the model relating customers' perceptions to satisfaction with the service, and the model's accuracy was evaluated through a comparative analysis of three methods for estimating its parameters, MLE, GLS and ULS, using structural equation modelling.
Applications were carried out in data analysis, empirically testing and evaluating the influence of gender on customer satisfaction in this sector, along with a market segmentation using self-organizing maps and the validation of this process with structural equation modelling. The results of the empirical study produced a good fit for the proposed theoretical model, with evidence of adequate explanatory and predictive power, highlighting the relevance of the causal relationship between satisfaction and loyalty, in line with several studies of mobile communications markets.
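Kohonen self-organizing maps of the kind used for the market segmentation can be sketched in a few lines of numpy. This toy implementation (the grid size, decay schedules and synthetic two-cluster data are illustrative assumptions, not the survey design) shows the core idea: each sample pulls its best-matching unit, and that unit's grid neighbourhood, towards itself:

```python
import numpy as np

def train_som(data, grid=(3, 3), epochs=200, lr0=0.5, sigma0=1.5, seed=0):
    """Train a minimal Kohonen map with a decaying learning rate and a
    shrinking Gaussian neighbourhood on the unit grid."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.normal(size=(rows, cols, data.shape[1]))          # codebook vectors
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)    # grid positions
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)                        # decaying rate
        sigma = sigma0 * np.exp(-t / epochs)                  # shrinking radius
        for x in rng.permutation(data):
            d = np.linalg.norm(w - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)     # best-matching unit
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                       / (2 * sigma ** 2))
            w += lr * g[..., None] * (x - w)                  # neighbourhood update
    return w

def winner(w, x):
    """Map a sample to its BMU grid cell, i.e. its market segment."""
    d = np.linalg.norm(w - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

# Two well-separated synthetic respondent clusters should land on
# different map cells after training.
rng = np.random.default_rng(1)
a = rng.normal([0, 0], 0.1, size=(30, 2))
b = rng.normal([5, 5], 0.1, size=(30, 2))
w = train_som(np.vstack([a, b]))
print(winner(w, a[0]), winner(w, b[0]))
```

In the thesis pipeline the resulting cells would then be treated as candidate segments and validated externally, here with structural equation models fitted per segment.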

Relevância:

10.00%

Publicador:

Resumo:

The Caatinga biome, a semi-arid ecosystem found in northeast Brazil, has a low-rainfall regime and strong seasonality. It has the most alarming climate-change projections in the country, with air temperature rising and rainfall decreasing at rates stronger than the global average predictions. Climate change can have detrimental effects on this biome, reducing vegetation cover and changing its distribution, altering ecosystem functioning and ultimately affecting species diversity. In this context, the purpose of this study is to model the environmental conditions (rainfall and temperature) that influence the productivity of the Caatinga biome and to predict their consequences for vegetation dynamics under future climate-change scenarios. The Enhanced Vegetation Index (EVI) was used to estimate vegetation greenness (presence and density) in the area. Given the strong spatial and temporal autocorrelation as well as the heterogeneity of the data, various GLS models were developed and compared to obtain the model that best reflects the influence of rainfall and temperature on vegetation greenness. Under the new climate-change scenarios applied to the model, changes in the environmental determinants, rainfall and temperature, negatively influenced vegetation greenness in the Caatinga biome. The model was used to create potential vegetation maps of current and future Caatinga cover, assuming a 20% decrease in precipitation and a 1 °C increase in temperature by 2040, a 35% decrease and a 2.5 °C increase in 2041-2070, and a 50% decrease and a 4.5 °C increase in 2071-2100. The results suggest that ecosystem functioning will be affected under the future climate-change scenarios, with vegetation greenness decreasing by 5.9% by 2040, 14.2% by 2070 and 24.3% by the end of the century.
The Caatinga vegetation in lower-altitude areas (most of the biome) will be the most affected by climatic changes.
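The GLS setup described above, regressing a greenness response on rainfall and temperature while accounting for serial correlation in the errors, can be sketched with an AR(1) error covariance. All numbers below are synthetic stand-ins, not the Caatinga EVI series:

```python
import numpy as np

def gls_fit(X, y, rho):
    """GLS estimate b = (X' W X)^-1 X' W y, with W the inverse of an AR(1)
    error covariance Omega[i, j] = rho**|i - j| (the scale factor cancels)."""
    n = len(y)
    Omega = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    W = np.linalg.inv(Omega)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Synthetic monthly series: greenness driven by rainfall and temperature,
# with AR(1) noise of the kind GLS is meant to absorb.
rng = np.random.default_rng(0)
n = 300
rain = rng.gamma(2.0, 40.0, n)            # mm
temp = 26 + 3 * rng.standard_normal(n)    # deg C
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.standard_normal()
evi = 0.1 + 0.002 * rain - 0.01 * temp + 0.05 * e

X = np.column_stack([np.ones(n), rain, temp])
beta = gls_fit(X, evi, rho=0.6)
print(beta)  # close to the true coefficients (0.1, 0.002, -0.01)
```

In practice rho is unknown and would be estimated from OLS residuals (or the model fitted by restricted maximum likelihood), and a spatial term would be added alongside the temporal one, as the study's model-comparison step implies.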

Relevância:

10.00%

Publicador:

Resumo:

We thank Orkney Islands Council for access to Eynhallow and Talisman Energy (UK) Ltd and Marine Scotland for fieldwork and equipment support. Handling and tagging of fulmars was conducted under licences from the British Trust for Ornithology and the UK Home Office. EE was funded by a Marine Alliance for Science and Technology for Scotland/University of Aberdeen College of Life Sciences and Medicine studentship and LQ was supported by a NERC Studentship. Thanks also to the many colleagues who assisted with fieldwork during the project, and to Helen Bailey and Arliss Winship for advice on implementing the state-space model.