638 results for Melastoma-affine Melastomataceae


Relevance:

10.00%

Publisher:

Abstract:

Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies.

In this Ph.D. Thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that exercise each of them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with such control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by as little as 0.04% from simulation-based reference values.

A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals of the system, introduces the noise sources for each group independently, and finally combines the results. In this way the number of noise sources is kept under control at all times and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, in order to keep the results as accurate as possible.

This Thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We present two novel techniques that approach the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a given confidence interval for the final results of the search, we can use more relaxed confidence levels, which imply considerably fewer samples per simulation, in the initial stages of the search, when we are still far from the optimized solutions. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small/medium-sized problems.

Finally, this book introduces HOPLITE, an automated, flexible and modular quantization framework that includes the implementation of the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
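As a hedged illustration of the underlying idea (plain affine arithmetic only, without the ME-gPC and statistical MAA extensions of the thesis; every name below is my own, not HOPLITE's API), quantization noise can be propagated by injecting a fresh noise symbol per rounding operation:

```python
# Plain affine arithmetic: x = x0 + sum_i xi * eps_i with eps_i in [-1, 1].
# Noise symbols shared between forms keep track of correlations, which is
# what makes the approach tighter than ordinary interval arithmetic.
class AffineForm:
    def __init__(self, center, terms=None):
        self.center = float(center)
        self.terms = dict(terms or {})          # noise-symbol id -> coefficient

    def __add__(self, other):
        terms = dict(self.terms)
        for k, v in other.terms.items():
            terms[k] = terms.get(k, 0.0) + v    # shared symbols combine linearly
        return AffineForm(self.center + other.center, terms)

    def radius(self):
        return sum(abs(c) for c in self.terms.values())

    def interval(self):
        r = self.radius()
        return (self.center - r, self.center + r)

def quantize(x, frac_bits, sym):
    """Model rounding a signal to `frac_bits` fractional bits by injecting a
    fresh noise symbol of amplitude 2**-(frac_bits + 1), i.e. half an LSB."""
    terms = dict(x.terms)
    terms[sym] = terms.get(sym, 0.0) + 2.0 ** -(frac_bits + 1)
    return AffineForm(x.center, terms)

a = AffineForm(1.0, {"e1": 0.5})    # a ranges over [0.5, 1.5]
b = quantize(a, 8, "q1")            # add round-off noise for 8 fractional bits
lo, hi = b.interval()
c = a + b                           # correlated: the "e1" coefficients add up
```

The shared symbol "e1" is the point of the technique: `c = a + b` keeps the correlation between `a` and `b` instead of widening the interval as naive interval arithmetic would.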

Relevance:

10.00%

Publisher:

Abstract:

Symmetries have played an important role in a variety of problems in geology and geophysics. A large fraction of studies in mineralogy are devoted to the symmetry properties of crystals. In this paper, however, the emphasis will be on scale-invariant (fractal) symmetries. The earth’s topography is an example of both statistically self-similar and self-affine fractals. Landforms are also associated with drainage networks, which are statistical fractal trees. A universal feature of drainage networks and other growth networks is side branching. Deterministic space-filling networks with side-branching symmetries are illustrated. It is shown that naturally occurring drainage networks have symmetries similar to diffusion-limited aggregation clusters.
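The self-similar and self-affine character of topography mentioned above is usually quantified by a fractal dimension. As a sketch (the function and test data below are my own illustration, not from the paper), a box-counting estimate over a planar point set looks like:

```python
import math
import random

def box_count_dimension(points, sizes):
    """Estimate the box-counting (fractal) dimension of a planar point set:
    count occupied boxes N(s) for each box size s, then fit
    log N(s) ~ -D log s by least squares."""
    logs, logn = [], []
    for s in sizes:
        occupied = {(math.floor(x / s), math.floor(y / s)) for x, y in points}
        logs.append(math.log(s))
        logn.append(math.log(len(occupied)))
    mean_s = sum(logs) / len(logs)
    mean_n = sum(logn) / len(logn)
    slope = (sum((ls - mean_s) * (ln - mean_n) for ls, ln in zip(logs, logn))
             / sum((ls - mean_s) ** 2 for ls in logs))
    return -slope

# Sanity check: a densely filled unit square should have dimension close to 2;
# a coastline or drainage network would fall strictly between 1 and 2.
random.seed(1)
pts = [(random.random(), random.random()) for _ in range(20000)]
D = box_count_dimension(pts, [0.05, 0.1, 0.2])
```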

Relevance:

10.00%

Publisher:

Abstract:

A hyperplane arrangement is a finite set of hyperplanes in a real affine space. An especially important arrangement is the braid arrangement, which is the set of all hyperplanes xi = xj (equivalently, xi - xj = 0) for 1 <= i < j <= n.
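In LaTeX notation, the braid arrangement and its basic counting fact (a standard result, stated here for orientation) read:

```latex
\mathcal{A}_{n-1} \;=\; \bigl\{\, H_{ij} \;:\; 1 \le i < j \le n \,\bigr\},
\qquad
H_{ij} \;=\; \bigl\{\, x \in \mathbb{R}^{n} \;:\; x_i = x_j \,\bigr\},
```

and the complement $\mathbb{R}^{n} \setminus \bigcup_{i<j} H_{ij}$ has exactly $n!$ connected regions, one for each total ordering $x_{\sigma(1)} < x_{\sigma(2)} < \cdots < x_{\sigma(n)}$ of the coordinates.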

Relevance:

10.00%

Publisher:

Abstract:

CysK, one of the O-acetylserine sulfhydrylase (OASS) isoenzymes present in plants and bacteria, is a long-studied enzyme whose physiological role in cysteine synthesis is well established. Recently, further functions apparently unrelated to its enzymatic activity (moonlighting) have been discovered. One of these is the activation of a toxin with tRNase activity, CdiA-CT, involved in the contact-dependent growth inhibition (CDI) system of pathogenic E. coli strains. In this project we studied the role of CysK in the CDI system and the formation of complexes with two different protein partners: CdiA-CT and CysE (serine acetyltransferase, the enzyme that catalyzes the preceding reaction in cysteine biosynthesis). The two complexes have the same spectrofluorimetric features and very similar affinities, but equilibration is slower for the toxin:CysK complex than for the CysE:CysK (cysteine synthase) complex. In both cases the rapid formation of an encounter complex is followed by a conformational rearrangement that leads to a high-affinity complex. The efficiency of formation of the cysteine synthase complex is about 200-fold higher than that of the CysK:toxin complex. An important difference, besides the kinetics of complex formation, is the binding stoichiometry: whereas CysE binds only one of the two active sites of the CysK dimer, in the complex with CdiA-CT both active sites of the enzyme are occupied. Isogenic cells express a peptide inhibitor of the toxin (CdiI) and are therefore resistant to its tRNase activity. However, since CdiI does not affect the formation of the CdiA-CT:CysK complex, CdiA-CT may still play a role in cysteine metabolism, and hence in the fitness of isogenic bacteria, through the binding and inhibition of CysK and the competition with CysE.

The cysteine biosynthetic pathway, a source of precursors of reducing molecules, is very important for bacteria, especially under adverse conditions such as inside macrophages during persistent infections. This metabolic pathway is therefore of interest for the development of new antibiotics; in particular, the two OASS isoforms of enterobacteria, CysK and CysM, are potential targets for the development of new antibacterial molecules. Starting from the analysis of how CysK interacts with its physiological partner and inhibitor CysE, we first studied the interaction of pentapeptides mimicking the C-terminal region of the latter, and on the basis of these data small synthetic ligands were developed. The general structure of these compounds consists of an acidic group and a lipophilic group, separated by a cyclopropane linker that keeps the two groups in a trans conformation, optimal for interaction with the active site of the enzyme. On the basis of these considerations, of in silico docking, and of experimental data obtained with STD-NMR and spectrofluorimetric binding assays, a structure-activity relationship analysis was carried out that progressively led to the optimization of the ligands. The highest-affinity compound obtained so far has a dissociation constant in the nanomolar range for both isoforms and is an excellent starting point for the development of new drugs.
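For nanomolar binders like those described above, titration data from fluorimetric binding assays are typically fit with the exact quadratic binding equation rather than the simple hyperbola. A minimal sketch of that model (generic; the function name, concentrations and Kd below are my own illustration, not values from this study):

```python
import math

def fraction_bound(e_tot, l_tot, kd):
    """Fraction of enzyme occupied by ligand, from the exact (quadratic)
    solution of E + L <=> EL with dissociation constant Kd. Needed instead
    of the simple hyperbola when Kd is comparable to the enzyme
    concentration, as with tight (nanomolar) binders."""
    b = e_tot + l_tot + kd
    el = (b - math.sqrt(b * b - 4.0 * e_tot * l_tot)) / 2.0
    return el / e_tot

# Hypothetical titration point: 50 nM enzyme, 500 nM ligand, Kd = 10 nM
# (all concentrations in nM); the enzyme is close to saturation here.
f = fraction_bound(50.0, 500.0, 10.0)
```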

Relevance:

10.00%

Publisher:

Abstract:

This work addresses the problem of matching two images. Image matching can be either template matching or keypoint matching; in both cases the algorithms locate a region of a first image within a second image. Our group has developed two rotation-, scale- and translation-invariant template matching algorithms, named Ciratefi (Circular, radial and template matching filter) and Forapro (Fourier coefficients of radial and circular projection). The strengths of these algorithms are their invariance to brightness/contrast changes and their robustness to repetitive patterns. In the first part of this thesis, we make Ciratefi invariant to affine transformations, obtaining Aciratefi (Affine-Ciratefi). We have built an image database to compare this algorithm with Asift (Affine scale-invariant feature transform) and Aforapro (Affine-Forapro). Asift is currently considered the best affine-invariant image matching algorithm, and Aforapro was proposed in our Master's dissertation. Our results suggest that Aciratefi outperforms Asift in the combined presence of repetitive patterns, brightness/contrast changes and viewpoint changes. In the second part of this thesis, we build an algorithm for filtering keypoint matches, based on a concept we call geometric coherence. We apply this filtering to the well-known Sift (scale-invariant feature transform) algorithm, on which Asift is based, and evaluate our proposal on the Mikolajczyk image database. The error rates obtained are significantly lower than those of the original Sift.
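The brightness/contrast invariance claimed for these template matchers is classically obtained through zero-mean normalized cross-correlation. A self-contained sketch of that building block (not the Ciratefi implementation itself; names and data are my own):

```python
import math

def ncc(patch, template):
    """Zero-mean normalized cross-correlation of two equally sized grayscale
    patches (flat lists of floats). Invariant to brightness shifts and
    contrast scaling: ncc(a*p + b, t) == ncc(p, t) for any a > 0."""
    n = len(patch)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    den = math.sqrt(sum((p - mp) ** 2 for p in patch)
                    * sum((t - mt) ** 2 for t in template))
    return num / den if den else 0.0

template = [1.0, 2.0, 3.0, 4.0]
patch = [2.0 * v + 10.0 for v in template]  # same pattern, brighter, higher contrast
score = ncc(patch, template)                # close to 1.0 despite the change
```

Sliding this score over every candidate position (and, in Ciratefi's case, over a set of rotations and scales pre-filtered by circular and radial projections) yields the match location.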

Relevance:

10.00%

Publisher:

Abstract:

The "b-learning" methodology is a new teaching scenario, and it requires the creation, adaptation and application of new learning tools aimed at the assimilation of new collaborative competences. Three well-known concepts apply in this context: knowledge spirals, situational leadership and informal learning. The knowledge spiral is a basic concept of knowledge management, based on the idea that knowledge increases when a cycle of four phases is repeated successively: 1) knowledge is created (for instance, by having an idea); 2) the knowledge is encoded into a format in which it can be easily transmitted; 3) the knowledge is adapted so that it is easily comprehensible, and it is put to use; 4) new knowledge is created, which improves the previous knowledge (step 1). Each cycle is a step of a spiral staircase: by going up the staircase, more knowledge is created. Situational leadership, in turn, is based on the idea that each person has a degree of maturity for carrying out a specific task, and that this maturity increases with experience. The teacher (leader) therefore has to adapt the teaching style to the requirements of the student (subordinate); in this way the professional and personal development of the student accelerates, improving both results and satisfaction. This educational strategy, combined with informal learning (in particular the zone of proximal development) and supported by our University's own learning content management system, produces a successful and well-evaluated learning activity in Master's subjects, centred on the collaborative preparation and oral presentation of short, specific topics related to those subjects. The teacher takes the role of a consultant on the selected topic, guiding and supervising the work, often incorporating work done in previous courses, in the manner of a research tutor or a more experienced student.

In this work we report the academic results, the degree of interactivity developed in these collaborative tasks, the corresponding statistics, and the degree of satisfaction expressed by our postgraduate students.

Relevance:

10.00%

Publisher:

Abstract:

This paper is intended to provide conditions for the stability of the strong uniqueness of the optimal solution of a given linear semi-infinite optimization (LSIO) problem, in the sense of maintaining the strong uniqueness property under sufficiently small perturbations of all the data. We consider LSIO problems such that the family of gradients of all the constraints is unbounded, extending earlier results of Nürnberger for continuous LSIO problems, and of Helbig and Todorov for LSIO problems with bounded set of gradients. To do this we characterize the absolutely (affinely) stable problems, i.e., those LSIO problems whose feasible set (its affine hull, respectively) remains constant under sufficiently small perturbations.

Relevance:

10.00%

Publisher:

Abstract:

The multiobjective optimization model studied in this paper deals with simultaneous minimization of finitely many linear functions subject to an arbitrary number of uncertain linear constraints. We first provide a radius of robust feasibility guaranteeing the feasibility of the robust counterpart under affine data parametrization. We then establish dual characterizations of robust solutions of our model that are immunized against data uncertainty by way of characterizing corresponding solutions of robust counterpart of the model. Consequently, we present robust duality theorems relating the value of the robust model with the corresponding value of its dual problem.
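As a generic illustration of the setting (notation my own, not necessarily the paper's), the affine data parametrization of an uncertain constraint and the robust feasible set can be written as:

```latex
a_j(u_j) \;=\; a_j^{0} + \sum_{i=1}^{q} (u_j)_i\, a_j^{i},
\qquad
F \;=\; \Bigl\{\, x \;:\; a_j(u_j)^{\top} x \,\le\, b_j
\ \ \forall\, u_j \in \mathcal{U}_j,\ j \in J \,\Bigr\},
```

where $\mathcal{U}_j$ are the uncertainty sets. A radius of robust feasibility is then, informally, the largest size $\alpha$ of the uncertainty sets $\mathcal{U}_j(\alpha)$ for which $F$ remains nonempty.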

Relevance:

10.00%

Publisher:

Abstract:

In this paper we examine multi-objective linear programming problems in the face of data uncertainty both in the objective function and the constraints. First, we derive a formula for the radius of robust feasibility guaranteeing constraint feasibility for all possible scenarios within a specified uncertainty set under affine data parametrization. We then present numerically tractable optimality conditions for minmax robust weakly efficient solutions, i.e., the weakly efficient solutions of the robust counterpart. We also consider highly robust weakly efficient solutions, i.e., robust feasible solutions which are weakly efficient for any possible instance of the objective matrix within a specified uncertainty set, providing lower bounds for the radius of highly robust efficiency guaranteeing the existence of this type of solutions under affine and rank-1 objective data uncertainty. Finally, we provide numerically tractable optimality conditions for highly robust weakly efficient solutions.

Relevance:

10.00%

Publisher:

Abstract:

We estimate the 'fundamental' component of euro area sovereign bond yield spreads, i.e. the part of bond spreads that can be justified by country-specific economic factors, euro area economic fundamentals, and international influences. The yield spread decomposition is achieved using a multi-market, no-arbitrage affine term structure model with a unique pricing kernel. More specifically, we use the canonical representation proposed by Joslin, Singleton, and Zhu (2011) and introduce, next to standard spanned factors, a set of unspanned macro factors, as in Joslin, Priebsch, and Singleton (2013). The model is applied to yield curve data from Belgium, France, Germany, Italy, and Spain over the period 2005-2013. Overall, our results show that economic fundamentals are the dominant drivers behind sovereign bond spreads. Nevertheless, shocks unrelated to the fundamental component of the spread have played an important role in the dynamics of bond spreads since the intensification of the sovereign debt crisis in the summer of 2011.
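In a standard discrete-time Gaussian affine term structure model (generic textbook notation, not necessarily the exact specification of this paper), bond prices are exponential-affine in the state vector $X_t$ and yields follow:

```latex
P_t^{(n)} \;=\; \exp\!\bigl(\mathcal{A}_n + \mathcal{B}_n^{\top} X_t\bigr),
\qquad
y_t^{(n)} \;=\; -\tfrac{1}{n}\bigl(\mathcal{A}_n + \mathcal{B}_n^{\top} X_t\bigr),
```

with loadings built recursively from $\mathcal{A}_0 = 0$, $\mathcal{B}_0 = 0$ under the risk-neutral dynamics $X_{t+1} = \mu^{\mathbb{Q}} + \Phi^{\mathbb{Q}} X_t + \Sigma\,\varepsilon^{\mathbb{Q}}_{t+1}$ and short rate $r_t = \delta_0 + \delta_1^{\top} X_t$:

```latex
\mathcal{B}_{n+1} \;=\; -\delta_1 + \Phi^{\mathbb{Q}\top}\mathcal{B}_n,
\qquad
\mathcal{A}_{n+1} \;=\; \mathcal{A}_n + \mathcal{B}_n^{\top}\mu^{\mathbb{Q}}
  + \tfrac{1}{2}\,\mathcal{B}_n^{\top}\Sigma\Sigma^{\top}\mathcal{B}_n - \delta_0 .
```

Unspanned macro factors, in this framework, enter the physical dynamics of $X_t$ without appearing in $\mathcal{B}_n$, so they forecast yields without pricing them contemporaneously.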

Relevance:

10.00%

Publisher:

Abstract:

The present-day condition of bipolar glaciation characterized by rapid and large climate fluctuations began at the end of the Pliocene with the intensification of the Northern Hemisphere continental glaciations. The global cooling steps of the late Pliocene have been documented in numerous studies of Ocean Drilling Program (ODP) sites from the Northern Hemisphere. However, the interactions between oceans and between land and ocean during these cooling steps are poorly known. In particular, data from the Southern Hemisphere are lacking. Therefore I investigated the pollen of ODP Site 1082 in the southeast Atlantic Ocean in order to obtain a high-resolution record of vegetation change in Namibia between 3.4 and 1.8 Ma. Four phases of vegetation development are inferred that are connected to global climate change. (1) Before 3 Ma, extensive, rather open grass-rich savannahs with mopane trees existed in Namibia, but the extension of desert and semidesert vegetation was still restricted. (2) Increase of winter rainfall dependent Renosterveld-like vegetation occurred between 3.1 and 2.2 Ma connected to strong advection of polar waters along the Namibian coast and a northward shift of the Polar Front Zone in the Southern Ocean. (3) Climatically induced fluctuations became stronger between 2.7 and 2.2 Ma and semiarid areas extended during glacial periods probably as the result of an increased pole-equator thermal gradient and consequently globally enhanced atmospheric circulation. (4) Aridification and climatic variability further increased after 2.2 Ma, when the Polar Front Zone migrated southward and the influence of Atlantic moisture brought by the westerlies to southern Africa declined. 
It is concluded that the positions of the frontal systems in the Southern Ocean which determine the locations of the high-pressure cells over the South Atlantic and the southern Indian Ocean have a strong influence on the climate of southern Africa in contrast to the climate of northwest and central Africa, which is dominated by the Saharan low-pressure cell.
