950 results for generalized assignment


Relevance:

20.00%

Publisher:

Abstract:

In this work we establish some results in sampling theory for U-invariant subspaces of a separable Hilbert space H, also called atomic subspaces. These spaces generalize the well-known shift-invariant subspaces in L2(R); here the space L2(R) is replaced by H, and the shift operator by U. Taking as data the samples of some related operators, we derive frame expansions allowing the recovery of the elements in Aa. Moreover, we include a frame perturbation-type result for the case in which the samples are affected by jitter error.


We demonstrate the existence of generalized synchronization in systems that act as mediators between two dynamical units that, in turn, show complete synchronization with each other. These are the so-called relay systems. Specifically, we analyze the Lyapunov spectrum of the full system to elucidate when complete and generalized synchronization appear. We show that once a critical coupling strength is reached, complete synchronization emerges between the units to be synchronized and, at the same point, generalized synchronization with the relay system also arises. Next, we use two nonlinear measures based on the distance between phase-space neighbors to quantify generalized synchronization in discretized time series. Finally, we experimentally show the robustness of the phenomenon and of the theoretical tools proposed here to characterize it.
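One practical way to detect generalized synchronization numerically is the auxiliary-system method: drive two identical copies of the response system from different initial conditions and check whether their states converge. The sketch below uses a hypothetical discrete-time driver-response pair built from logistic maps, not the relay configuration studied above; the coupling form and parameter values are assumptions for illustration.

```python
# Auxiliary-system test for generalized synchronization (GS):
# drive two identical replicas of the response system from different
# initial conditions; if their states converge, the response state is a
# function of the driver state, i.e. GS holds.

def logistic(u):
    return 4.0 * u * (1.0 - u)

def step_response(y, x, eps):
    # response system: local logistic dynamics blended with the driver
    return (1.0 - eps) * logistic(y) + eps * x

def auxiliary_system_test(eps, n_steps=500, x0=0.123, y0=0.4, y0_aux=0.9):
    x, y, y_aux = x0, y0, y0_aux
    for _ in range(n_steps):
        y = step_response(y, x, eps)
        y_aux = step_response(y_aux, x, eps)
        x = logistic(x)
    return abs(y - y_aux)   # ~0 when generalized synchronization holds

strong = auxiliary_system_test(eps=0.8)   # contraction: |dg/dy| <= 0.8
weak   = auxiliary_system_test(eps=0.0)   # uncoupled chaotic replicas
```

For `eps = 0.8` the response map is a contraction in `y`, so the replicas collapse onto each other and the distance vanishes; for `eps = 0.0` the replicas are independent chaotic orbits and the distance stays of order one.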


There is controversy regarding the use of the similarity functions proposed in the literature to compare generalized trapezoidal fuzzy numbers since conflicting similarity values are sometimes output for the same pair of fuzzy numbers. In this paper we propose a similarity function aimed at establishing a consensus. It accounts for the different approaches of all the similarity functions. It also has better properties and can easily incorporate new parameters for future improvements. The analysis is carried out on the basis of a large and representative set of pairs of trapezoidal fuzzy numbers.
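For context, one of the classical similarity functions this literature builds on is Chen's measure, which averages the coordinate-wise distances between the four defining points of two trapezoidal fuzzy numbers. The sketch below shows that classical measure only; it is not the consensus function proposed in the paper, and the example numbers are illustrative.

```python
def chen_similarity(a, b):
    """Chen-style similarity for trapezoidal fuzzy numbers
    a = (a1, a2, a3, a4), b = (b1, b2, b3, b4), points in [0, 1]:
    1 minus the mean absolute difference of the defining points."""
    return 1.0 - sum(abs(ai - bi) for ai, bi in zip(a, b)) / 4.0

A = (0.1, 0.2, 0.3, 0.4)
B = (0.1, 0.2, 0.3, 0.4)
C = (0.5, 0.6, 0.7, 0.8)

s_identical = chen_similarity(A, B)   # identical numbers -> 1.0
s_shifted   = chen_similarity(A, C)   # shifted by 0.4    -> 0.6
```

Conflicts between measures of this kind arise because each one weights geometry (distance, area, perimeter, height) differently, which is the motivation for a consensus function.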


In this paper we address a recent reduction method called Proper Generalized Decomposition (PGD), a discretization technique based on a separated representation of the unknown fields that is especially well suited for solving multidimensional parametric equations. Here it is applied to the solution of dynamics problems. We focus on the dynamic analysis of a one-dimensional rod with a unit harmonic load of frequency ω applied at a point of interest. We present the application of the PGD methodology to this problem in order to approximate the displacement field as a sum of separated functions. In addition to the frequency, we treat model parameters associated with the material characteristics as new variables of the problem. Finally, the quality of the results is assessed with an example.
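The separated representation that the PGD seeks can be illustrated numerically. The sketch below discretizes a toy fixed-free rod, solves (K − ω²M)u = f over a frequency grid below the first resonance, and compresses the snapshots with a truncated SVD. A true PGD builds such modes greedily without ever assembling the full parametric solution; the SVD here only demonstrates that the field is well approximated by a short sum of separated functions. All parameter values are illustrative assumptions.

```python
import numpy as np

# 1-D rod, fixed at x = 0, harmonic point load at the free end:
# (K - w^2 M) u(w) = f.  Solve on a frequency grid, then compress the
# snapshots into a separated representation u(x, w) ~ sum_i F_i(x) G_i(w)
# via a truncated SVD (illustration of separability, not a PGD solver).

n = 50                        # interior nodes
L, E, rho = 1.0, 1.0, 1.0     # illustrative unit parameters
h = L / n
# finite-difference stiffness (tridiagonal) and lumped mass
K = (E / h) * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
K[-1, -1] = E / h             # free-end correction
M = rho * h * np.eye(n)
f = np.zeros(n)
f[-1] = 1.0                   # unit load at the tip

omegas = np.linspace(0.1, 1.0, 40)   # below the first resonance (~1.57)
U = np.column_stack([np.linalg.solve(K - w**2 * M, f) for w in omegas])

# separated representation: keep r modes and measure the relative error
s = np.linalg.svd(U, compute_uv=False)
r = 3
rel_err = np.sqrt(np.sum(s[r:]**2) / np.sum(s**2))
```

Because the solution depends smoothly on ω away from resonance, the singular values decay rapidly and a handful of separated modes already captures the parametric field.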


In this paper, fuzzy feedback linearization is used to control nonlinear systems described by Takagi-Sugeno (T-S) fuzzy models. An optimal controller is designed using the linear quadratic regulator (LQR). The well-known weighting-parameters approach is applied to optimize the local and global approximation and modelling capability of the T-S fuzzy model, improving the choice of the performance index and minimizing it. The approach used here can be considered a generalized version of the T-S method. Simulation results indicate the potential, simplicity, and generality of the estimation method and the robustness of the proposed optimal LQR algorithm.
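A common way to combine LQR design with a T-S model is parallel distributed compensation: design a gain for each local linear model and blend the gains with the rule memberships. The sketch below uses a hypothetical two-rule system and a plain fixed-point iteration of the discrete-time Riccati equation; it is not the paper's weighting-parameters approach, and all matrices and memberships are assumptions for illustration.

```python
import numpy as np

# Parallel distributed compensation sketch: an LQR gain is designed for
# each local linear model of a hypothetical two-rule T-S fuzzy system,
# and the applied gain is the membership-weighted blend of the local
# gains.  The discrete-time Riccati equation is iterated to a fixed
# point with plain numpy (no scipy dependency).

def dlqr(A, B, Q, R, iters=2000):
    P = Q.copy()
    for _ in range(iters):  # value iteration of the DARE
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# two local models (rule 1 and rule 2) of a scalar-input plant
A1 = np.array([[1.0, 0.1], [0.0, 1.0]])
A2 = np.array([[1.0, 0.1], [0.2, 1.0]])
B  = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])

K1 = dlqr(A1, B, Q, R)
K2 = dlqr(A2, B, Q, R)

def membership(x1):
    """Hypothetical triangular memberships on the premise variable x1."""
    h1 = max(0.0, min(1.0, 1.0 - abs(x1)))
    return h1, 1.0 - h1

def blended_gain(x1):
    h1, h2 = membership(x1)
    return h1 * K1 + h2 * K2

x = np.array([[0.5], [0.0]])
u = -blended_gain(0.5) @ x     # fuzzy-blended state feedback
```

Each local gain stabilizes its own model; the blend interpolates between them as the premise variable moves across the rule regions.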


The emission of light from each junction in a series-connected multijunction solar cell both complicates and elucidates the understanding of its performance under arbitrary conditions. Bringing together many recent advances in this understanding, we present a general 1-D model to describe luminescent coupling that arises from both voltage-driven electroluminescence and voltage-independent photoluminescence in nonideal junctions that include effects such as Sah-Noyce-Shockley (SNS) recombination with n ≠ 2, Auger recombination, shunt resistance, reverse-bias breakdown, series resistance, and significant dark area losses. The individual junction voltages and currents are experimentally determined from measured optical and electrical inputs and outputs of the device within the context of the model to fit parameters that describe the device's performance under arbitrary input conditions. Techniques to experimentally fit the model are demonstrated for a four-junction inverted metamorphic solar cell, and the predictions of the model are compared with concentrator flash measurements.
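The effect of luminescent coupling on a series-connected stack can be sketched with ideal diodes: radiative recombination in the top junction adds a term proportional to its excess photocurrent to the bottom junction's effective photocurrent. The sketch below is a highly simplified two-junction toy with assumed parameter values; it omits the nonidealities (SNS recombination, shunts, breakdown, series resistance, dark area losses) that the paper's model treats.

```python
import math

# Minimal two-junction sketch of luminescent coupling in a series stack
# (hypothetical ideal diodes, ideality n = 1).  Radiative recombination
# in the top junction re-illuminates the bottom junction with coupling
# efficiency eta, adding eta * (JL1 - J) to its photocurrent when the
# stack carries terminal current density J.

VT = 0.02585        # thermal voltage kT/q at ~300 K [V]

def junction_voltage(JL, J0, J):
    """Ideal-diode junction voltage at current density J (A/cm^2)."""
    return VT * math.log((JL - J) / J0 + 1.0)

def stack_voltage(J, JL1=0.014, JL2=0.012, J01=1e-19, J02=1e-16, eta=0.3):
    V1 = junction_voltage(JL1, J01, J)
    JL2_eff = JL2 + eta * (JL1 - J)     # luminescent coupling term
    V2 = junction_voltage(JL2_eff, J02, J)
    return V1 + V2

# with coupling, the current-limited bottom junction can carry a
# terminal current above its own photocurrent JL2 at positive voltage
V_coupled   = stack_voltage(0.0122)           # J > JL2, still valid
V_uncoupled = stack_voltage(0.0115, eta=0.0)  # must stay below JL2
```

The series constraint forces the same J through both junctions, so the coupling term effectively relaxes the current limit imposed by the dimmer junction.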


Using fixed-point arithmetic is one of the most common design choices for systems where area, power, or throughput are heavily constrained. To produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them.

Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art methodology based on statistical Modified Affine Arithmetic (MAA) in order to model systems that contain control-flow structures. Our methodology generates the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the statistical moments of the system from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with such control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from simulation-based reference values.

A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals in the system, introduces the noise sources for each group independently, and then combines the results. In this way, the number of noise sources in the system at any given time is controlled and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, in order to keep the results as accurate as possible.

This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We present two novel techniques that approach the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization. Second, the incremental method builds on the fact that, although we must guarantee a given confidence interval for the final results of the search, we can use more relaxed confidence levels, and therefore considerably fewer samples per simulation, in the initial stages of the search, when we are still far from the optimized solutions. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small/medium-sized problems.

Finally, this work introduces HOPLITE, an automated, flexible, and modular quantization framework that implements the above techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new quantization methodologies. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions can be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
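A classical greedy word-length search of the kind accelerated in this thesis can be sketched as follows: starting from a uniformly wide assignment, repeatedly shrink the signal whose reduction degrades a Monte-Carlo error estimate the least, while the estimate stays under the bound. The toy 3-tap filter, the bounds, and the sample counts below are illustrative assumptions, not HOPLITE's implementation.

```python
import random

# Greedy word-length search sketch: shrink one word-length at a time,
# always choosing the reduction that leaves the smallest Monte-Carlo
# RMS error, and stop when no single reduction keeps the error under
# the bound.  A toy 3-tap FIR filter stands in for the system.

COEFFS = [0.25, 0.5, 0.25]

def quantize(value, bits):
    scale = 2 ** (bits - 1)          # fixed-point format for [-1, 1)
    return round(value * scale) / scale

def mc_error(wls, n_samples=1500, seed=1):
    """Seeded Monte-Carlo RMS error of the quantized filter output."""
    rng = random.Random(seed)
    err = 0.0
    for _ in range(n_samples):
        x = [rng.uniform(-1, 1) for _ in COEFFS]
        exact = sum(c * v for c, v in zip(COEFFS, x))
        quant = sum(quantize(c * v, b)
                    for (c, v), b in zip(zip(COEFFS, x), wls))
        err += (exact - quant) ** 2
    return (err / n_samples) ** 0.5

def greedy_wordlengths(max_rmse, start_bits=12, min_bits=2):
    wls = [start_bits] * len(COEFFS)
    while True:
        best = None
        for i in range(len(wls)):           # try shrinking each signal
            if wls[i] <= min_bits:
                continue
            trial = wls[:]
            trial[i] -= 1
            e = mc_error(trial)
            if e <= max_rmse and (best is None or e < best[1]):
                best = (i, e)
        if best is None:                    # no feasible reduction left
            return wls
        wls[best[0]] -= 1

wls = greedy_wordlengths(max_rmse=1e-3)
```

Every accepted move is re-validated against the error bound, so the final assignment is feasible by construction; the thesis's interpolative and incremental methods attack the many `mc_error` evaluations this loop performs.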


The assessment of the performance of sailing yachts, and of ships in general, has been an objective for naval architects and sailors since the beginning of the history of navigation. Knowledge has grown from identifying the key factors that influence performance (length, stability, displacement, and sail area) to a much more complete understanding of the complex forces and couplings involved in the equilibrium. Along with this knowledge, the advent of computers has made it possible to perform the associated tasks in a systematic way. This includes the detailed calculation of forces, but also the use of those forces, along with the description of a sailing yacht, to predict its behavior and, ultimately, its performance.

The aim of this investigation is to provide a global and open definition of a set of models and rules to describe and analyze the behavior of a sailing yacht. This is done without restricting the type of yacht or calculation, but rather in a generalized way, capable of handling any situation, whether steady-state or in the time domain. First, a basic definition of the factors that condition the behavior of a sailing yacht is given. Then, a methodology is provided to manage the use of data from different origins for the calculation of forces, always aiming at the solution of the problem. This last part is implemented as a computational tool, PASim, intended to assess the performance of different types of sailing yachts under a wide range of conditions. Several examples then present different uses of PASim, illustrating some of the aspects discussed throughout the definition of the problem and its solution.

Finally, a global structure is presented to provide a virtual representation of the real yacht in which not only the behavior but also its handling is close to the experience of sailors in the real world. This global structure is proposed as the core (a software engine) of a physical yacht simulator, for which a basic specification is provided.
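At its core, a velocity-prediction calculation solves a force equilibrium: find the boat speed at which aerodynamic drive equals hydrodynamic drag. The toy model below (constant drive, quadratic viscous plus quartic wave-like drag, with illustrative coefficients; not PASim's force models) solves this one-degree-of-freedom balance by bisection.

```python
# Toy velocity-prediction equilibrium in the spirit of a VPP: find the
# boat speed V at which sail drive force equals hull drag.  All
# coefficients are illustrative placeholders.

RHO_WATER = 1025.0   # kg/m^3
WETTED_AREA = 8.0    # m^2
CF = 0.003           # skin-friction coefficient (illustrative)
DRIVE_FORCE = 900.0  # N, assumed constant drive at this wind condition

def drag(v):
    """Quadratic viscous drag plus a quartic wave-like term (toy)."""
    visc = 0.5 * RHO_WATER * WETTED_AREA * CF * v ** 2
    wave = 2.0 * v ** 4
    return visc + wave

def equilibrium_speed(drive, lo=0.0, hi=20.0):
    # bisection on the force residual drive - drag(v), which is
    # strictly decreasing in v, so the bracketed root is unique
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if drive - drag(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

v_eq = equilibrium_speed(DRIVE_FORCE)
```

A full VPP solves the same kind of balance simultaneously in several degrees of freedom (heel, leeway, rudder angle), but the fixed-point structure is the same.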


The NMR assignment of 13C, 15N-labeled proteins with the use of triple resonance experiments is limited to molecular weights below ∼25,000 Daltons, mainly because of low sensitivity due to rapid transverse nuclear spin relaxation during the evolution and recording periods. For experiments that exclusively correlate the amide proton (1HN), the amide nitrogen (15N), and 13C atoms, this size limit has previously been extended by additional labeling with deuterium (2H). The present paper shows that the implementation of transverse relaxation-optimized spectroscopy ([15N,1H]-TROSY) into triple resonance experiments results in several-fold improved sensitivity for 2H/13C/15N-labeled proteins and an approximately twofold sensitivity gain for 13C/15N-labeled proteins. Pulse schemes and spectra recorded with deuterated and protonated proteins are presented for the [15N,1H]-TROSY-HNCA and [15N,1H]-TROSY-HNCO experiments. A theoretical analysis of the HNCA experiment shows that the primary TROSY effect is on the transverse relaxation of 15N, which is only slightly affected by deuteration, and predicts sensitivity enhancements that are in close agreement with the experimental data.


The study of passive scalar transport in a turbulent velocity field leads naturally to the notion of generalized flows, which are families of probability distributions on the space of solutions to the associated ordinary differential equations that no longer satisfy the uniqueness theorem for ordinary differential equations. The two most natural regularizations of this problem, namely regularization via adding small molecular diffusion and regularization via smoothing out the velocity field, are considered. White-in-time random velocity fields are used as an example to examine the variety of phenomena that take place when the velocity field is not spatially regular. Three different regimes, characterized by their degrees of compressibility, are isolated in the parameter space. In the regime of intermediate compressibility, the two different regularizations give rise to two different scaling behaviors for the structure functions of the passive scalar. Physically, this means that the scaling depends on the Prandtl number. In the other two regimes, the two different regularizations give rise to the same generalized flows even though the sense of convergence can be very different. The "one force, one solution" principle is established for the scalar field in the weakly compressible regime, and for the difference of the scalar in the strongly compressible regime, which is the regime of inverse cascade. Existence and uniqueness of an invariant measure are also proved in these regimes when the transport equation is suitably forced. Finally, incomplete self-similarity in the sense of Barenblatt and Chorin is established.


Many small bacterial, archaebacterial, and eukaryotic genomes have been sequenced, and the larger eukaryotic genomes are predicted to be completely sequenced within the next decade. In all genomes sequenced to date, a large portion of these organisms’ predicted protein coding regions encode polypeptides of unknown biochemical, biophysical, and/or cellular functions. Three-dimensional structures of these proteins may suggest biochemical or biophysical functions. Here we report the crystal structure of one such protein, MJ0577, from a hyperthermophile, Methanococcus jannaschii, at 1.7-Å resolution. The structure contains a bound ATP, suggesting MJ0577 is an ATPase or an ATP-mediated molecular switch, which we confirm by biochemical experiments. Furthermore, the structure reveals different ATP binding motifs that are shared among many homologous hypothetical proteins in this family. This result indicates that structure-based assignment of molecular function is a viable approach for the large-scale biochemical assignment of proteins and for discovering new motifs, a basic premise of structural genomics.


Filamentous fungi are a large group of diverse and economically important microorganisms. Large-scale gene disruption strategies developed in budding yeast are not applicable to these organisms because of their larger genomes and lower rate of targeted integration (TI) during transformation. We developed transposon-arrayed gene knockouts (TAGKO) to discover genes and simultaneously create gene disruption cassettes for subsequent transformation and mutant analysis. Transposons carrying a bacterial and fungal drug resistance marker are used to mutagenize individual cosmids or entire libraries in vitro. Cosmids are annotated by DNA sequence analysis at the transposon insertion sites, and cosmid inserts are liberated to direct insertional mutagenesis events in the genome. Based on saturation analysis of a cosmid insert and insertions in a fungal cosmid library, we show that TAGKO can be used to rapidly identify and mutate genes. We further show that insertions can create alterations in gene expression, and we have used this approach to investigate an amino acid oxidation pathway in two important fungal phytopathogens.


We report the isolation of generalized transducing phages for Streptomyces species able to transduce chromosomal markers or plasmids between derivatives of Streptomyces coelicolor, the principal genetic model system for this important bacterial genus. We describe four apparently distinct phages (DAH2, DAH4, DAH5, and DAH6) that are capable of transducing multiple chromosomal markers at frequencies ranging from 10^-5 to 10^-9 per plaque-forming unit. The phages contain DNA ranging in size from 93 to 121 kb and mediate linked transfer of genetic loci at neighboring chromosomal sites sufficiently close to be packaged within the same phage particle. The key to our ability to demonstrate transduction by these phages was the establishment of conditions expected to severely reduce superinfection killing during the selection of transductants. The host range of these phages, as measured by the ability to form plaques, extends to species as distantly related as Streptomyces avermitilis and Streptomyces verticillus, which are among the most commercially important species of this genus. Transduction of plasmid DNA between S. coelicolor and S. verticillus was observed at frequencies of ≈10^-4 transductants per colony-forming unit.


Rfp-Y is a second region in the genome of the chicken containing major histocompatibility complex (MHC) class I and II genes. Haplotypes of Rfp-Y assort independently from haplotypes of the B system, a region known to function as a MHC and to be located on chromosome 16 (a microchromosome) with the single nucleolar organizer region (NOR) in the chicken genome. Linkage mapping with reference populations failed to reveal the location of Rfp-Y, leaving Rfp-Y unlinked in a map containing >400 markers. A possible location of Rfp-Y became apparent in studies of chickens trisomic for chromosome 16 when it was noted that the intensity of restriction fragments associated with Rfp-Y increased with increasing copy number of chromosome 16. Further evidence that Rfp-Y might be located on chromosome 16 was obtained when individuals trisomic for chromosome 16 were found to transmit three Rfp-Y haplotypes. Finally, mapping of cosmid cluster III of the molecular map of chicken MHC genes (containing a MHC class II gene and two rRNA genes) to Rfp-Y validated the assignment of Rfp-Y to the MHC/NOR microchromosome. A genetic map can now be drawn for a portion of chicken chromosome 16 with Rfp-Y, encompassing two MHC class I and three MHC class II genes, separated from the B system by a region containing the NOR and exhibiting highly frequent recombination.


Using the mouse delta-opioid receptor cDNA as a probe, we have isolated genomic clones encoding the human mu- and kappa-opioid receptor genes. Their organization appears similar to that of the human delta receptor gene, with exon-intron boundaries located after putative transmembrane domains 1 and 4. The kappa gene was mapped at position q11-12 in human chromosome 8. A full-length cDNA encoding the human kappa-opioid receptor has been isolated. The cloned receptor expressed in COS cells presents a typical kappa 1 pharmacological profile and is negatively coupled to adenylate cyclase. The expression of kappa-opioid receptor mRNA in human brain, as estimated by reverse transcription-polymerase chain reaction, is consistent with the involvement of kappa-opioid receptors in pain perception, neuroendocrine physiology, affective behavior, and cognition. In situ hybridization studies performed on human fetal spinal cord demonstrate the presence of the transcript specifically in lamina II of the dorsal horn. Some divergences in structural, pharmacological, and anatomical properties are noted between the cloned human and rodent receptors.