833 results for level of detail (LOD)


Relevance: 100.00%

Abstract:

This work investigates the influence of monomeric and polymeric ionic additives on the crystallization of calcium carbonate. The effects of the additives on the morphology and on the phase composition (the relative proportions of the calcium carbonate polymorphs calcite, aragonite and vaterite) were studied both experimentally and theoretically by means of molecular modelling. Using monomeric additives, such as monocarboxylic acids, fundamental mechanisms of the interaction of additives with the growing crystal could be elucidated, and the influence of stereochemistry on the phase selection of calcium carbonate could be examined in detail. The polymeric ionic additives deepen the investigation of the mechanisms found for the monomeric additives; here, too, the influence of stereochemistry could be studied. In addition, various cooperative interactions of the polymers with the crystal and with the underlying surface (in the sense of self-assembled monolayers, SAMs) were identified and explained.

Relevance: 100.00%

Abstract:

This study analyzes the accuracy of the target prices forecast in analysts' reports. We compute a measure of target price forecast accuracy that evaluates the ability of analysts to forecast exactly the ex-ante (unknown) 12-month stock price, and we determine the factors that explain this accuracy. Target price accuracy is negatively related to analyst-specific optimism and to stock-specific risk (measured by volatility and the price-to-book ratio). However, it is positively related to the level of detail of each report, company size and the reputation of the investment bank. Potential conflicts of interest between an analyst and a covered company do not bias forecast accuracy.
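The abstract does not spell out the accuracy measure, so the following is only a minimal illustrative sketch: it scores a 12-month target price by its absolute percentage deviation from the realized price, clipped to [0, 1]. The function name and scoring rule are assumptions, not the paper's definition.

```python
# Hypothetical accuracy measure: 1 = exact forecast, 0 = off by 100% or more.
def target_price_accuracy(target_price: float, realized_price: float) -> float:
    error = abs(target_price - realized_price) / realized_price
    return max(0.0, 1.0 - error)

if __name__ == "__main__":
    # An analyst forecasts 120; the stock trades at 100 after 12 months:
    print(target_price_accuracy(120.0, 100.0))  # 0.8
```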

Relevance: 100.00%

Abstract:

In this paper, we present the Cellular Dynamic Simulator (CDS) for simulating diffusion and chemical reactions within crowded molecular environments. CDS is based on a novel event-driven algorithm specifically designed to calculate precisely the timing of collisions, reactions and other events for each individual molecule in the environment. Generic mesh-based compartments allow the creation or importation of very simple or highly detailed cellular structures in a 3D environment. Multiple levels of compartments and static obstacles can be used to create a dense environment that mimics cellular boundaries and the intracellular space. The CDS algorithm takes into account volume exclusion and molecular crowding, which may affect signaling cascades in small sub-cellular compartments such as dendritic spines. With the CDS, we can simulate simple enzyme reactions, aggregation, channel transport, as well as highly complicated chemical reaction networks of both freely diffusing and membrane-bound multi-protein complexes. The components of the CDS are defined generically, so the simulator can be applied to a wide range of environments in terms of scale and level of detail. Through an initialization GUI, a simple simulation environment can be created and populated within minutes, yet the tool is powerful enough to design complex 3D cellular architectures. The initialization tool allows visual confirmation of the environment construction prior to execution by the simulator. This paper describes the CDS algorithm and its design and implementation, provides an overview of the types of features available, and highlights their utility in demonstrations.
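As a rough illustration of the event-driven idea described above (not the CDS implementation itself), the sketch below keeps each molecule's next event in a priority queue keyed by event time; handling an event reschedules that molecule. Event times here are random placeholders, whereas CDS computes them precisely from the molecular state.

```python
import heapq
import random

def next_event_time(now: float) -> float:
    # Placeholder: a real simulator derives exact collision/reaction times
    # from positions, velocities and rate constants.
    return now + random.expovariate(1.0)

def simulate(n_molecules: int = 5, t_end: float = 3.0) -> None:
    # Priority queue of (event_time, molecule_id), earliest event first.
    queue = [(next_event_time(0.0), mol) for mol in range(n_molecules)]
    heapq.heapify(queue)
    while queue:
        t, mol = heapq.heappop(queue)
        if t > t_end:
            break
        # Handle the event (move, react, collide), then reschedule.
        print(f"t={t:.3f}: event for molecule {mol}")
        heapq.heappush(queue, (next_event_time(t), mol))

if __name__ == "__main__":
    simulate()
```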

Relevance: 100.00%

Abstract:

This paper presents the work carried out by Metro de Madrid and the Railway Technology Research Centre (Polytechnic University of Madrid) to set up rolling stock simulation models with a high level of detail. The features of the SIMPACK simulation tool used to create the models are briefly outlined, and the main characteristics of the models of two of the series modelled, the 7000 and 8000, are explained. Finally, the results obtained from comparing comfort in the 7000 and 8000 series are presented.

Relevance: 100.00%

Abstract:

The project covers the design of a heating and air-conditioning system for a building located in Madrid (Spain) that uses solar energy as its source of heat and electricity. The goals are to achieve the lowest possible energy consumption and to cover it with renewable energy sources. The work includes the calculation of thermal loads and the sizing of both the climatization system and the solar energy (thermal and photovoltaic) facilities. In addition, the main characteristics of a centralized control system are defined, which optimizes the performance of the installation and monitors its operation continuously. The auxiliary systems are designed with a level of detail sufficient to evaluate them from both an energy and an economic point of view. As a fundamental part of the project, conclusions are drawn about the energy savings of the facilities, and the economic viability of the investments is analysed.
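As a purely illustrative aside on the thermal load calculation mentioned above (not the project's actual method), the sketch below computes the simplest load term, steady-state transmission loss Q = U·A·ΔT through one envelope element; all figures are made-up examples.

```python
def transmission_load(u_value: float, area_m2: float,
                      t_inside: float, t_outside: float) -> float:
    """Heat flow in watts through one building envelope element."""
    return u_value * area_m2 * (t_inside - t_outside)

if __name__ == "__main__":
    # Example: 20 m2 wall, U = 0.5 W/(m2*K), 21 C inside, -2 C outside.
    print(transmission_load(0.5, 20.0, 21.0, -2.0))  # 230.0 W
```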

Relevance: 100.00%

Abstract:

Provenance plays a major role when understanding and reusing the methods applied in a scientific experiment, as it provides a record of inputs, the processes carried out and the use and generation of intermediate and final results. In the specific case of in-silico scientific experiments, a large variety of scientific workflow systems (e.g., Wings, Taverna, Galaxy, Vistrails) have been created to support scientists. All of these systems produce some sort of provenance about the executions of the workflows that encode scientific experiments. However, provenance is normally recorded at a very low level of detail, which complicates the understanding of what happened during execution. In this paper we propose an approach to automatically obtain abstractions from low-level provenance data by finding common workflow fragments in workflow execution provenance and relating them to templates. We have tested our approach with a dataset of workflows published by the Wings workflow system. Our results show that by using these kinds of abstractions we can highlight the most common abstract methods used in the executions of a repository, relating different runs and workflow templates with each other.
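To make the fragment-finding idea concrete, here is a toy sketch under the simplifying assumption that each run is recorded as a linear sequence of step names; the paper itself works on workflow graphs, so this only illustrates the counting principle behind the abstractions.

```python
from collections import Counter

def common_fragments(runs, length=2):
    """Count contiguous step fragments of a given length across runs."""
    counts = Counter()
    for run in runs:
        for i in range(len(run) - length + 1):
            counts[tuple(run[i:i + length])] += 1
    return counts

if __name__ == "__main__":
    runs = [
        ["fetch", "clean", "align", "plot"],
        ["fetch", "clean", "align", "report"],
        ["fetch", "clean", "plot"],
    ]
    # ("fetch", "clean") occurs in all runs -> a candidate abstract method.
    print(common_fragments(runs).most_common(3))
```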

Relevance: 100.00%

Abstract:

The world of telecommunications evolves at great speed, in step with the needs of its users. The growing number of services carried over the connections we currently use to reach the Internet (e.g. IPTV), with their high bandwidth requirements, together with newly born services (e.g. OTT), increases both the need for higher user connection speeds and the adoption of new quality-of-service models. Today's fixed and mobile broadband networks must therefore undergo a deep transformation to solve traffic problems and needs efficiently, absorbing the progressive increase in bandwidth while leaving the door open to future improvements. To that end, operators will draw on valuable traffic and user information that leads them to the best decisions, so that the transformations they carry out cover exactly what users demand in the most efficient way possible. From these premises arose the ideas set out as the objectives of this final-year project (PFC): to narrate the deployment of broadband in Spain from its origins to the present, approaching its growth from a socio-technological point of view; continuing from that point, to identify the social and technological tools with which traffic on operators' networks can be forecast in the near future; to present the characteristics of broadband users and of the data traffic they generate, which are critical for operators to plan their networks adequately; and to describe the procedures operators follow so that, once the characteristics of their users are known, the requirements those users demand can be met: QoS and key performance indicators (KPIs). The level of detail is intended to suit an audience without deep knowledge of the subject; apart from some fairly specific parts, this work can be regarded as open to the general public.

Relevance: 100.00%

Abstract:

Video quality assessment remains necessary to define the criteria that characterize a signal meeting the viewing requirements imposed by the user. New technologies, such as stereoscopic 3D video or formats beyond high definition, impose new criteria that must be analysed to achieve the highest possible user satisfaction. Among the problems examined in this doctoral thesis, phenomena were identified that affect different stages of the audiovisual production chain and various types of content. First, the content generation process must be controlled through parameters that prevent visual discomfort and, consequently, visual fatigue, especially for stereoscopic 3D content, both animated and live-action. Second, quality assessment in the video compression stage relies on metrics that are not always adapted to the user's perception. Psychovisual models and visual attention maps make it possible to weight image regions so that greater importance is given to the pixels the user is most likely to focus on. These two areas are connected through the concept of saliency: the capacity of the visual system to characterize an image by weighting the areas that are most attractive to the human eye. In the generation of stereoscopic content, saliency refers mainly to the depth simulated by the optical illusion, measured as the distance from the virtual object to the human eye. In two-dimensional video, however, saliency is based not on depth but on additional elements, such as motion, level of detail, pixel position or the presence of faces, which are the basic factors of the visual attention model developed here. To detect the characteristics of a stereoscopic video sequence most likely to cause visual discomfort, the extensive literature on the subject was reviewed and preliminary subjective tests were carried out with users. These showed that discomfort arises when there is an abrupt change in the distribution of simulated depths in the image, as well as from other degradations such as the so-called "window violation". Further subjective tests, focused on analysing these effects with different depth distributions, sought to pin down the parameters involved. The results show that abrupt changes occur in scenes with high motion and large negative disparities, which interfere with the accommodation and vergence processes of the human eye and increase the time the crystalline lens needs to focus. To improve quality metrics through models adapted to the human visual system, additional subjective tests helped determine the importance of each factor in masking a given degradation. The results show a slight improvement when weighting and visual attention masks are applied, bringing objective quality scores closer to the response of the human eye.
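The weighting idea can be sketched generically (the thesis' actual masks and metrics are not reproduced here): pixels with higher visual-attention weight contribute more to the error measure, as in this assumed saliency-weighted MSE.

```python
import numpy as np

def saliency_weighted_mse(ref: np.ndarray, dist: np.ndarray,
                          saliency: np.ndarray) -> float:
    """MSE with per-pixel weights taken from a saliency map."""
    w = saliency / saliency.sum()          # normalize weights to sum to 1
    diff = ref.astype(float) - dist.astype(float)
    return float((w * diff ** 2).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (64, 64))
    dist = ref + rng.integers(-5, 6, (64, 64))
    saliency = np.full((64, 64), 1e-6)
    saliency[24:40, 24:40] = 1.0           # mock attention on a central region
    print(saliency_weighted_mse(ref, dist, saliency))
```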

Relevance: 100.00%

Abstract:

Electrical networks are evolving towards what are known as "Smart Grids". Smart Grids are composed of electrical substations, which in turn are composed of devices called IEDs (Intelligent Electronic Devices). The design of IEDs is defined by the IEC 61850 standard, which also specifies a Substation Configuration Language (SCL) for defining the configuration of substations and their IEDs. Today this international standard is used not only to design IEDs correctly and ensure their interoperability, but also to design other electrical network devices, such as smart meters. However, although its use is growing, the standard is difficult to understand and handle because of the volume of information it contains and the level of detail it employs, so designing IEDs becomes tedious without software support. To ease the application of IEC 61850 to IED design, tools such as "Visual SCL", "SCL Explorer" and "61850 SCLVisual Design Tool" have been developed. In particular, "61850 SCLVisual Design Tool" is a graphical tool for modelling electrical substations, built with the Eclipse Modeling Framework (EMF) and Epsilon Generative Modeling Technologies (GMT) and developed by the SYST research group of the UPM. The aim of this project is to add a new functionality to "61850 SCLVisual Design Tool": the automatic generation, from the graphical design tool, of a substation configuration file conforming to IEC 61850. This file, called an SCD (Substation Configuration Description) file, is an XML file conforming to the XSD (XML Schema Definition) that defines the SCL configuration language of IEC 61850. Developing this functionality requires studying the SCL substation configuration language, the domain-specific graphical language defined by "61850 SCLVisual Design Tool", the structure of SCD files, and the Epsilon Generation Language (EGL) used for automatic code generation from EMF models.
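For orientation only, the sketch below emits a minimal SCL skeleton with Python's standard library. The element names (SCL, Header, Substation, IED, DataTypeTemplates) and namespace follow the IEC 61850-6 schema, but the real tool generates the file via EGL from EMF models, and a valid SCD requires far more content than shown.

```python
import xml.etree.ElementTree as ET

NS = "http://www.iec.ch/61850/2003/SCL"

def build_scd(substation: str, ied: str) -> ET.ElementTree:
    ET.register_namespace("", NS)  # emit SCL as the default namespace
    scl = ET.Element(f"{{{NS}}}SCL")
    ET.SubElement(scl, f"{{{NS}}}Header", id=substation)
    ET.SubElement(scl, f"{{{NS}}}Substation", name=substation)
    ET.SubElement(scl, f"{{{NS}}}IED", name=ied)
    ET.SubElement(scl, f"{{{NS}}}DataTypeTemplates")
    return ET.ElementTree(scl)

if __name__ == "__main__":
    build_scd("DemoSubstation", "IED1").write(
        "demo.scd", encoding="utf-8", xml_declaration=True)
```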

Relevance: 100.00%

Abstract:

Expressed sequence tags (ESTs) are randomly sequenced cDNA clones. Currently, nearly 3 million human and 2 million mouse ESTs provide valuable resources that enable researchers to investigate the products of gene expression. The EST databases have proven to be useful tools for detecting homologous genes, for exon mapping, revealing differential splicing, etc. With the increasing availability of large amounts of poorly characterised eukaryotic (notably human) genomic sequence, ESTs have now become a vital tool for gene identification, sometimes yielding the only unambiguous evidence for the existence of a gene expression product. However, BLAST-based Web servers available to the general user have not kept pace with these developments and do not provide appropriate tools for querying EST databases with large highly spliced genes, often spanning 50 000–100 000 bases or more. Here we describe Gene2EST (http://woody.embl-heidelberg.de/gene2est/), a server that brings together a set of tools enabling efficient retrieval of ESTs matching large DNA queries and their subsequent analysis. RepeatMasker is used to mask dispersed repetitive sequences (such as Alu elements) in the query, BLAST2 for searching EST databases and Artemis for graphical display of the findings. Gene2EST combines these components into a Web resource targeted at the researcher who wishes to study one or a few genes to a high level of detail.
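The pipeline wired together by such a server can be sketched as follows; the command lines are assumptions based on the classic RepeatMasker and legacy NCBI BLAST interfaces (RepeatMasker writes a masked copy of its input as '<input>.masked'), not the actual Gene2EST internals.

```python
import subprocess

def mask_and_search(query_fasta: str, est_db: str, out_file: str) -> None:
    # 1. Mask dispersed repeats (e.g. Alu elements) in the genomic query.
    subprocess.run(["RepeatMasker", query_fasta], check=True)
    # 2. Search the EST database with the masked query.
    subprocess.run(
        ["blastall", "-p", "blastn", "-d", est_db,
         "-i", f"{query_fasta}.masked", "-o", out_file],
        check=True,
    )

if __name__ == "__main__":
    mask_and_search("gene_region.fa", "est_human", "est_hits.txt")
```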

Relevance: 100.00%

Abstract:

In the light of the growing interest raised by Information Systems Offshore Outsourcing both in the managerial world and in the academic arena, the present work carries out a revision of the research in this area. We have analysed 89 research articles on this topic published in 17 prestigious journals. The analysis deals with aspects such as research methodologies, level of analysis in the studies, data perspective, economic theories used or location of vendors and clients of these services; and it additionally identifies the most frequent topics in this field as well as the most prolific authors and countries. Although other reviews about the research in this area have been published, the present paper achieves a greater level of detail than previous works. The review of the literature in the area could have interesting implications not only for academics but also for business practice.

Relevance: 100.00%

Abstract:

This raster layer represents surface elevation and bathymetry data for the Boston Region, Massachusetts. It was created by merging portions of MassGIS Digital Elevation Model 1:5,000 (2005) data with NOAA Estuarine Bathymetric Digital Elevation Models (30 m.) (1998). DEM data was derived from the digital terrain models that were produced as part of the MassGIS 1:5,000 Black and White Digital Orthophoto imagery project. Cell size is 5 meters by 5 meters. Each cell has a floating-point value, in meters, which represents its elevation above or below sea level.
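A grid like this can be queried in a few lines, assuming the layer is exported in a rasterio-readable format such as GeoTIFF; the file name and coordinates below are hypothetical.

```python
import rasterio

def elevation_at(dem_path: str, x: float, y: float) -> float:
    """Elevation in meters of the 5 m cell containing point (x, y)."""
    with rasterio.open(dem_path) as src:
        row, col = src.index(x, y)           # map coordinates -> grid indices
        return float(src.read(1)[row, col])  # band 1 holds the elevations

if __name__ == "__main__":
    # Coordinates must be given in the raster's own CRS.
    print(elevation_at("boston_dem.tif", 236000.0, 900000.0))
```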

Relevance: 100.00%

Abstract:

Purpose – The purpose of this paper is to outline a seven-phase simulation conceptual modelling procedure that incorporates existing practice and embeds a process reference model (i.e. SCOR). Design/methodology/approach – An extensive review of the simulation and SCM literature identifies a set of requirements for a domain-specific conceptual modelling procedure. The associated design issues for each requirement are discussed, and the utility of SCOR in the process of conceptual modelling is demonstrated using two development cases. Ten key concepts are synthesised and aligned to a general process for conceptual modelling. Further work is outlined to detail, refine and test the procedure with different process reference models in different industrial contexts. Findings – Simulation conceptual modelling is often regarded as the most important yet least understood aspect of a simulation project (Robinson, 2008a). Even today, there has been little research into guidelines to aid in the creation of a conceptual model. Design issues are discussed for building an 'effective' conceptual model, and the domain-specific requirements for modelling supply chains are addressed. The ten key concepts are incorporated to aid in describing the supply chain problem (i.e. the components and relationships that need to be included in the model), the model content (i.e. rules for determining the simplest model boundary and level of detail with which to implement the model) and model validation. Originality/value – The paper addresses Robinson's (2008a) call for research into defining and developing new approaches for conceptual modelling and Manuj et al.'s (2009) discussion on improving the rigour of simulation studies in SCM. It is expected that more detailed guidelines will yield benefits to both expert modellers (e.g. averting typical modelling failures) and novice modellers (e.g. guided practice and less reliance on hopeful intuition).

Relevance: 100.00%

Abstract:

With the exponentially increasing demands on and uses of GIS data visualization systems, in areas such as urban planning, environment and climate change monitoring, weather simulation and hydrographic gauging, research on and applications of geospatial vector and raster data visualization have become prevalent. However, current web GIS techniques are only suitable for static vector and raster data with no dynamic overlay layers. While it is desirable to enable visual exploration of large-scale dynamic vector and raster geospatial data in a web environment, improving the performance between backend datasets and the vector and raster applications remains a challenging technical issue. This dissertation addresses these challenging and so far unimplemented areas: how to provide a large-scale dynamic vector and raster data visualization service with dynamic overlay layers, accessible from various client devices through a standard web browser, and how to make that dynamic service as fast as a static one. To accomplish this, a large-scale dynamic vector and raster data visualization geographic information system based on parallel map tiling, together with a comprehensive performance improvement solution, is proposed, designed and implemented. The solution includes: quadtree-based indexing and parallel map tiling, the Legend String, vector data visualization with dynamic layer overlaying, vector data time series visualization, an algorithm for vector data rendering, an algorithm for raster data re-projection, an algorithm for the elimination of superfluous levels of detail, an algorithm for vector data gridding and re-grouping, and server-side cluster caching of vector and raster data.
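As a minimal sketch of quadtree-style tile addressing (using the common Web Mercator scheme, which is not necessarily the dissertation's exact indexing): at zoom level z the world is divided into a 2^z x 2^z grid, so each extra level quadruples the number of tiles and provides one more level of detail.

```python
import math

def tile_index(lon: float, lat: float, zoom: int) -> tuple:
    """Return the (x, y) tile containing a lon/lat point at a zoom level."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

if __name__ == "__main__":
    print(tile_index(-80.19, 25.76, 12))  # a point in Miami at zoom 12
```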

Relevance: 100.00%

Abstract:

This study aimed to evaluate microclimate changes around the Macau Pilot wind farm (RN, Brazil), located in the municipality of the same name. To achieve this goal, remote sensing techniques based on Landsat 5 TM and Landsat 7 ETM+ imagery were used, making it possible to evaluate surface temperature changes around the park from periods prior to its installation up to the present. To validate the temperatures generated by the model, they were correlated with field measurements and the degree of correlation was assessed, confirming the validity of the satellite-derived data. A characterization of the regional climate was also carried out based on data from the Macau climatological station, enabling an evaluation of climate variability in the study region. After validation of the temperature models, the resulting temperature histograms were analysed; no significant change could be identified visually. However, when the temperature data were analysed at a higher level of detail, a common pattern of behaviour was detected for both periods, and no distinction could be drawn between the pre-operation and post-operation periods of the park. From this result, hypotheses were raised to explain the behaviour of the data: the first concerns the presence of moisture in the soil, and the second the soil composition. To test these hypotheses, digital image processing techniques were applied, combining different RGB band compositions of Landsat 5 and applying a band-ratio procedure to reveal the elements present on the soil surface.
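The standard Landsat 5 TM thermal workflow assumed by studies of this kind can be sketched as follows: digital numbers are converted to spectral radiance and then to at-sensor brightness temperature. K1 and K2 are the published Landsat 5 TM band 6 calibration constants; gain and bias must come from the scene metadata, and the values used below are placeholders.

```python
import numpy as np

K1 = 607.76   # W / (m^2 * sr * um), Landsat 5 TM band 6
K2 = 1260.56  # Kelvin, Landsat 5 TM band 6

def brightness_temperature(dn: np.ndarray, gain: float, bias: float) -> np.ndarray:
    radiance = gain * dn.astype(float) + bias  # DN -> spectral radiance
    return K2 / np.log(K1 / radiance + 1.0)    # radiance -> Kelvin

if __name__ == "__main__":
    dn = np.array([[120, 130], [125, 140]])
    # gain/bias here are placeholders, not values from a real scene header.
    print(brightness_temperature(dn, gain=0.055, bias=1.18))
```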