879 results for INTERVAL MAPS
Abstract:
The purpose of this paper is to analyze the usefulness of traditional indices such as NDVI and NDWI, along with a recently proposed index (NDDI), using merged data for multiple dates, with the aim of obtaining drought information that facilitates analysis for government agencies. In this study we used Landsat 7 ETM+ data for the month of June (2001-2009), which were merged to obtain bands with twice the spatial resolution. The three indices were then calculated from these new bands, yielding drought maps that can enhance the effectiveness of decision making.
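The abstract does not restate the index definitions. A minimal sketch using the commonly cited formulations (NDVI from the red and near-infrared bands, NDWI from the near-infrared and shortwave-infrared bands, and NDDI derived from the two) might look like the following; the band assignments and array names are illustrative assumptions, not details taken from the paper.

import numpy as np

def compute_drought_indices(red, nir, swir, eps=1e-6):
    """Standard normalized-difference indices. Band choices are assumptions:
    for Landsat 7 ETM+, band 3 = red, band 4 = NIR, band 5 = SWIR."""
    ndvi = (nir - red) / (nir + red + eps)      # vegetation greenness
    ndwi = (nir - swir) / (nir + swir + eps)    # vegetation/soil water content
    nddi = (ndvi - ndwi) / (ndvi + ndwi + eps)  # drought index built from the two
    return ndvi, ndwi, nddi

# Illustrative usage with random reflectances standing in for real ETM+ bands
red, nir, swir = (np.random.rand(3, 100, 100) * 0.5 + 0.1)
ndvi, ndwi, nddi = compute_drought_indices(red, nir, swir)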
Abstract:
As one of the most competitive approaches to multi-objective optimization, evolutionary algorithms have been shown to obtain very good results for many real-world multi-objective problems. One of the issues that can affect the performance of these algorithms is uncertainty in the quality of the solutions, which is usually represented as noise in the objective values. Handling noisy objectives in evolutionary multi-objective optimization algorithms is therefore very important and has been gaining attention in recent years. In this paper we present the ?-degree Pareto dominance relation for ordering solutions in multi-objective optimization when the values of the objective functions are given as intervals. Based on this dominance relation, we propose an adaptation of the non-dominated sorting algorithm for ranking the solutions. This ranking method is then used in a standard multi-objective evolutionary algorithm and in a recently proposed multi-objective estimation of distribution algorithm based on joint variable-objective probabilistic modeling, and applied to a set of multi-objective problems with different levels of independent noise. The experimental results show that the proposed solution-ranking method makes it possible to approximate Pareto sets that are considerably better than those obtained with the dominance probability-based ranking method, one of the main methods for noise handling in multi-objective optimization.
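The abstract does not reproduce the formal definition of the dominance relation. As a rough illustration of how interval-valued objectives might be compared, the sketch below uses a possibility-style degree for "interval a is no worse than interval b" and thresholds it by a degree parameter; both the degree formula and the threshold rule are assumptions made for illustration, not the paper's definition.

from typing import Sequence, Tuple

Interval = Tuple[float, float]  # (lower, upper); minimization is assumed

def degree_leq(a: Interval, b: Interval) -> float:
    """Degree to which interval a is less than or equal to interval b.
    This particular possibility-style formula is an illustrative assumption."""
    a_lo, a_hi = a
    b_lo, b_hi = b
    span = (a_hi - a_lo) + (b_hi - b_lo)
    if span == 0.0:                       # both intervals are crisp values
        return 1.0 if a_lo <= b_lo else 0.0
    return max(0.0, min(1.0, (b_hi - a_lo) / span))

def interval_dominates(a: Sequence[Interval], b: Sequence[Interval],
                       alpha: float = 0.5) -> bool:
    """Solution a dominates b if, in every objective, the degree that a is no
    worse than b reaches alpha, and strictly exceeds alpha in at least one."""
    degs = [degree_leq(ai, bi) for ai, bi in zip(a, b)]
    return all(d >= alpha for d in degs) and any(d > alpha for d in degs)

# Illustrative usage: two solutions with two interval-valued objectives each
print(interval_dominates([(0.0, 1.0), (0.2, 0.4)], [(0.8, 1.5), (0.5, 0.9)]))

In the paper, a relation of this kind replaces crisp Pareto dominance inside non-dominated sorting to rank the population.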
Abstract:
This is an invited conference lecture that won the award for best scientific communication.
Abstract:
Mosaics are high-resolution images obtained aerially and used in several scientific research areas, such as environmental monitoring and precision agriculture. Although many high-resolution maps are produced on commercial demand, they can also be acquired with commercially available aerial vehicles, which provide more experimental autonomy and availability. As for mosaicing-based aerial mission planners, there is little, if any, free-of-charge software. Therefore, this paper presents a framework built with open-source tools and libraries as an alternative to commercial tools for carrying out mosaicing tasks.
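The abstract does not describe the planner's internals. As a sketch of the kind of computation a mosaicing mission planner performs, the following generates a simple boustrophedon ("lawn-mower") waypoint grid with forward overlap and sidelap; the function, parameter names, and values are illustrative assumptions rather than the paper's design.

import math

def survey_waypoints(width_m, height_m, footprint_w, footprint_h,
                     overlap=0.7, sidelap=0.6):
    """Boustrophedon waypoint grid over a rectangular area: consecutive images
    overlap by `overlap` along-track and `sidelap` across-track."""
    dx = footprint_w * (1.0 - sidelap)   # spacing between flight lines
    dy = footprint_h * (1.0 - overlap)   # spacing between shots along a line
    n_lines = max(1, math.ceil(width_m / dx) + 1)
    n_shots = max(1, math.ceil(height_m / dy) + 1)
    waypoints = []
    for i in range(n_lines):
        shots = range(n_shots) if i % 2 == 0 else range(n_shots - 1, -1, -1)
        for j in shots:                   # alternate direction on each line
            waypoints.append((min(i * dx, width_m), min(j * dy, height_m)))
    return waypoints

# e.g. a 200 m x 150 m field imaged with a 40 m x 30 m camera footprint
wps = survey_waypoints(200, 150, 40, 30)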
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power, or throughput are heavily constrained. In order to produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and present new methodologies for each of them.
Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values.
A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals of the system, introduces the noise sources for each group independently, and then combines the results. In this way, the number of noise sources present in the system at any given time is kept under control and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible.
This Ph.D. Thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We present two novel techniques that approach the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization. Second, the incremental method revolves around the fact that, although a given confidence level must be guaranteed for the final results of the search, more relaxed levels, and therefore considerably fewer samples per simulation, can be used in the initial stages of the process, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small/medium-sized problems.
Finally, this book introduces HOPLITE, an automated, flexible and modular quantization framework that includes the implementation of the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new methodologies for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
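As a rough illustration of the classical greedy, Monte-Carlo-driven word-length optimization that the thesis accelerates, the sketch below trims fractional word-lengths of a toy datapath while a simulated error estimate stays under a budget. The datapath, error metric, and sample count are assumptions; the thesis's interpolative and incremental refinements (for instance, adapting the number of samples to the required confidence level) are not implemented here.

import random

def quantize(value, frac_bits):
    """Round `value` to a fixed-point grid with `frac_bits` fractional bits."""
    scale = 2 ** frac_bits
    return round(value * scale) / scale

def output_error(word_lengths, n_samples=2000):
    """Monte-Carlo estimate of output MSE for a toy datapath y = a*b + c.
    The datapath and input ranges are illustrative assumptions."""
    err = 0.0
    for _ in range(n_samples):
        a, b, c = (random.uniform(-1, 1) for _ in range(3))
        exact = a * b + c
        quant = quantize(quantize(a, word_lengths['a']) * quantize(b, word_lengths['b'])
                         + quantize(c, word_lengths['c']), word_lengths['y'])
        err += (exact - quant) ** 2
    return err / n_samples

def greedy_wordlength_search(signals, max_mse, start_bits=16):
    """Classical greedy word-length optimization: keep removing one bit at a
    time from any signal whose reduction leaves the error estimate under the
    budget, until no further reduction is possible."""
    wl = {s: start_bits for s in signals}
    improved = True
    while improved:
        improved = False
        for s in signals:
            if wl[s] == 1:
                continue
            wl[s] -= 1
            if output_error(wl) <= max_mse:
                improved = True           # keep the cheaper assignment
            else:
                wl[s] += 1                # revert: budget exceeded
    return wl

word_lengths = greedy_wordlength_search(['a', 'b', 'c', 'y'], max_mse=1e-6)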
Abstract:
Niemann–Pick disease type C (NP-C) is an autosomal recessive lipidosis linked to chromosome 18q11–12, characterized by lysosomal accumulation of unesterified cholesterol and delayed induction of cholesterol-mediated homeostatic responses. This cellular phenotype is identifiable cytologically by filipin staining and biochemically by measurement of low-density lipoprotein-derived cholesterol esterification. The mutant Chinese hamster ovary cell line (CT60), which displays the NP-C cellular phenotype, was used as the recipient for a complementation assay after somatic cell fusions with normal and NP-C murine cells suggested that this Chinese hamster ovary cell line carries an alteration(s) in the hamster homolog(s) of NP-C. To narrow rapidly the candidate interval for NP-C, three overlapping yeast artificial chromosomes (YACs) spanning the 1 centimorgan human NP-C interval were introduced stably into CT60 cells and analyzed for correction of the cellular phenotype. Only YAC 911D5 complemented the NP-C phenotype, as evidenced by cytological and biochemical analyses, whereas no complementation was obtained from the other two YACs within the interval or from a YAC derived from chromosome 7. Fluorescent in situ hybridization indicated that YAC 911D5 was integrated at a single site per CT60 genome. These data substantially narrow the NP-C critical interval and should greatly simplify the identification of the gene responsible in mouse and man. This is the first demonstration of YAC complementation as a valuable adjunct strategy for positional cloning of a human gene.
Abstract:
We use residual-delay maps of observational field data for barometric pressure to demonstrate the structure of latitudinal gradients in nonlinearity in the atmosphere. Nonlinearity is weak and largely lacking in tropical and subtropical sites and increases rapidly into the temperate regions, where the time series also appear to be much noisier. The degree of nonlinearity closely follows the meridional variation of midlatitude storm track frequency. We extract the specific functional form of this nonlinearity, a V shape in the lagged residuals that appears to be a basic feature of midlatitude synoptic weather systems associated with frontal passages. We present evidence that this form arises from the relative time scales of high-pressure versus low-pressure events. Finally, we show that this nonlinear feature is weaker in a well-regarded numerical forecast model (European Centre for Medium-Range Weather Forecasts) because small-scale temporal and spatial variation is smoothed out in the gridded inputs. This is significant in that it allows us to demonstrate how statistical corrections based on the residual-delay map may provide marked increases in local forecast accuracy, especially for severe weather systems.
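The abstract does not spell out how the residual-delay map is constructed; one plausible reading, sketched below purely for illustration, pairs the residuals of a simple linear autoregressive fit with the lagged value of the series, so that nonlinear structure such as the V shape shows up as a systematic pattern in the resulting scatter. The construction, lag, and synthetic data are assumptions.

import numpy as np

def residual_delay_map(x, lag=1):
    """Fit a least-squares linear predictor x[t] ~ a*x[t-lag] + b, take its
    residuals, and return (lagged value, residual) pairs for plotting."""
    x = np.asarray(x, dtype=float)
    past, future = x[:-lag], x[lag:]
    a, b = np.polyfit(past, future, 1)       # linear autoregressive fit
    residuals = future - (a * past + b)
    return past, residuals

# Illustrative usage with synthetic data standing in for barometric pressure
t = np.arange(2000)
pressure = 1013 + 10 * np.sin(2 * np.pi * t / 365) + np.random.randn(2000)
lagged, resid = residual_delay_map(pressure, lag=1)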
Abstract:
Multiple-complete-digest mapping is a DNA mapping technique based on complete-restriction-digest fingerprints of a set of clones that provides highly redundant coverage of the mapping target. The maps assembled from these fingerprints order both the clones and the restriction fragments. Maps are coordinated across three enzymes in the examples presented. Starting with yeast artificial chromosome contigs from the 7q31.3 and 7p14 regions of the human genome, we have produced cosmid-based maps spanning more than one million base pairs. Each yeast artificial chromosome is first subcloned into cosmids at a redundancy of ×15–30. Complete-digest fragments are electrophoresed on agarose gels, poststained, and imaged on a fluorescent scanner. Aberrant clones that are not representative of the underlying genome are rejected in the map construction process. Almost every restriction fragment is ordered, allowing selection of minimal tiling paths with clone-to-clone overlaps of only a few thousand base pairs. These maps demonstrate the practicality of applying the experimental and software-based steps in multiple-complete-digest mapping to a target of significant size and complexity. We present evidence that the maps are sufficiently accurate to validate both the clones selected for sequencing and the sequence assemblies obtained once these clones have been sequenced by a “shotgun” method.
Abstract:
We present a new map showing dimeric kinesin bound to microtubules in the presence of ADP that was obtained by electron cryomicroscopy and image reconstruction. The directly bound monomer (first head) shows a different conformation from one in the more tightly bound empty state. This change in the first head is amplified as a movement of the second (tethered) head, which tilts upward. The atomic coordinates of kinesin·ADP dock into our map so that the tethered head associates with the bound head as in the kinesin dimer structure seen by x-ray crystallography. The new docking orientation avoids problems associated with previous predictions; it puts residues implicated by proteolysis-protection and mutagenesis studies near the microtubule but does not lead to steric interference between the coiled-coil tail and the microtubule surface. The observed conformational changes in the tightly bound states would probably bring some important residues closer to tubulin. As expected from the homology with kinesin, the atomic coordinates of nonclaret disjunctional protein (ncd)·ADP dock in the same orientation into the attached head in a map of microtubules decorated with dimeric ncd·ADP. Our results support the idea that the observed direct interaction between the two heads is important at some stages of the mechanism by which kinesin moves processively along microtubules.
Abstract:
Revealing the layout of cortical maps is important both for understanding the processes involved in their development and for uncovering the mechanisms underlying neural computation. The typical organization of orientation maps in the cat visual cortex is radial; complete orientation cycles are mapped around orientation singularities. In contrast, long linear zones of orientation representation have been detected in the primary visual cortex of the tree shrew. In this study, we searched for the existence of long linear sequences and wide linear zones within orientation preference maps of the cat visual cortex. Optical imaging based on intrinsic signals was used. Long linear sequences and wide linear zones of preferred orientation were occasionally detected along the border between areas 17 and 18, as well as within area 18. Adjacent zones of distinct radial and linear organizations were observed across area 18 of a single hemisphere. However, radial and linear organizations were not necessarily segregated; long (7.5 mm) linear sequences of preferred orientation were found embedded within a typical pinwheel-like organization of orientation. We conclude that, although the radial organization is dominant, perfectly linear organization may develop and perform the processing related to orientation in the cat visual cortex.
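The abstract does not detail how preferred-orientation maps are derived from the intrinsic-signal images. The standard vector-averaging analysis is sketched below as an assumption about the kind of processing involved; the array shapes, stimulus angles, and data are illustrative.

import numpy as np

def orientation_preference_map(responses, angles_deg):
    """Vector-averaging estimate of preferred orientation from single-condition
    images.  `responses` has shape (n_orientations, H, W); orientation is
    periodic in 180 degrees, hence the doubled angle in the complex sum."""
    angles = np.deg2rad(np.asarray(angles_deg))
    z = np.tensordot(np.exp(2j * angles), responses, axes=(0, 0))
    preferred = np.mod(np.angle(z) / 2.0, np.pi)        # radians in [0, pi)
    selectivity = np.abs(z) / responses.sum(axis=0)     # 0 = flat tuning
    return np.rad2deg(preferred), selectivity

# Illustrative usage with random data in place of imaged cortical responses
resp = np.random.rand(8, 64, 64)
pref_deg, sel = orientation_preference_map(resp, np.arange(0, 180, 22.5))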
Abstract:
Objective: To compare the cost effectiveness of two possible modifications to the current UK screening programme: shortening the screening interval from three to two years and extending the age of invitation to a final screen from 64 to 69.
Abstract:
Computational maps are of central importance to a neuronal representation of the outside world. In a map, neighboring neurons respond to similar sensory features. A well-studied example is the computational map of interaural time differences (ITDs), which is essential to sound localization in a variety of species and allows resolution of ITDs on the order of 10 μs. Nevertheless, it is unclear how such an orderly representation of temporal features arises. We address this problem by modeling the ontogenetic development of an ITD map in the laminar nucleus of the barn owl. We show how the owl's ITD map can emerge from the combined action of homosynaptic spike-based Hebbian learning and its propagation along the presynaptic axon. In spike-based Hebbian learning, synaptic strengths are modified according to the timing of pre- and postsynaptic action potentials. In unspecific axonal learning, a synapse's modification gives rise to a factor that propagates along the presynaptic axon and affects the properties of synapses at neighboring neurons. Our results indicate that both Hebbian learning and its presynaptic propagation are necessary for map formation in the laminar nucleus, but the latter can be orders of magnitude weaker than the former. We argue that the algorithm is important for the formation of computational maps, in particular when time plays a key role.
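The abstract describes spike-based Hebbian learning in which synaptic strength changes with the relative timing of pre- and postsynaptic spikes. A generic version of such an update rule is sketched below; the exponential learning window and parameter values are textbook defaults rather than the paper's, and the presynaptic propagation factor that the model adds on top of this rule is not included.

import math

def stdp_weight_change(delta_t, a_plus=0.01, a_minus=0.012,
                       tau_plus=20.0, tau_minus=20.0):
    """Spike-timing-dependent Hebbian update; delta_t = t_post - t_pre in ms.
    Pre-before-post spiking (delta_t > 0) potentiates the synapse, the
    reverse ordering depresses it."""
    if delta_t >= 0:
        return a_plus * math.exp(-delta_t / tau_plus)
    return -a_minus * math.exp(delta_t / tau_minus)

# Illustrative usage: a pre spike 5 ms before a post spike strengthens w,
# the opposite ordering weakens it
w = 0.5
w += stdp_weight_change(5.0)    # potentiation
w += stdp_weight_change(-5.0)   # depression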