937 results for Optimal matching analysis.


Relevance: 30.00%

Abstract:

In recent years, Independent Component Analysis (ICA) has proven itself to be a powerful signal-processing technique for solving Blind-Source Separation (BSS) problems in different scientific domains. In the present work, an application of ICA for processing NIR hyperspectral images to detect traces of peanut in wheat flour is presented. Processing was performed without a priori knowledge of the chemical composition of the two food materials. The aim was to extract the source signals of the different chemical components from the initial data set and to use them to determine the distribution of peanut traces in the hyperspectral images. To determine the optimal number of independent components to be extracted, the Random ICA by blocks method was used. This method is based on the repeated calculation of several models using an increasing number of independent components after randomly segmenting the data matrix into two blocks, and then calculating the correlations between the signals extracted from the two blocks. The extracted ICA signals were interpreted and their ability to classify peanut and wheat flour was studied. Finally, all the extracted ICs were used to construct a single synthetic signal that could be used directly with the hyperspectral images to enhance the contrast between the peanut and the wheat flour in a real multi-use industrial environment. Furthermore, feature extraction methods (a connected-components labelling algorithm followed by a flood-fill method to extract object contours) were applied in order to target the spatial location of the peanut traces. A good visualization of the distribution of peanut traces was thus obtained.
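
The block-splitting procedure described above lends itself to a short illustration. Below is a minimal sketch of the "Random ICA by blocks" idea, assuming an unfolded pixels × wavelengths matrix `X` and using scikit-learn's FastICA; the component matching by correlation is a simplified reading of the method, not the authors' exact implementation.

```python
# Minimal sketch of "Random ICA by blocks" for choosing the number of independent
# components, assuming a pixels x wavelengths matrix X of unfolded NIR data.
import numpy as np
from sklearn.decomposition import FastICA

def random_ica_by_blocks(X, max_components=10, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(X.shape[0])
    half = X.shape[0] // 2
    blocks = (X[idx[:half]], X[idx[half:]])          # two random pixel blocks

    scores = {}
    for k in range(2, max_components + 1):
        spectra = []
        for block in blocks:
            ica = FastICA(n_components=k, random_state=seed, max_iter=1000)
            ica.fit(block)
            # columns of mixing_ hold the extracted spectral profiles (wavelength axis)
            spectra.append(ica.mixing_)
        # correlate each spectrum of block 1 with its best match in block 2
        corr = np.corrcoef(spectra[0].T, spectra[1].T)[:k, k:]
        scores[k] = np.mean(np.max(np.abs(corr), axis=1))
    return scores   # the largest k whose score stays high would be retained

# Example with synthetic data:
# X = np.random.rand(5000, 200)
# print(random_ica_by_blocks(X, max_components=6))
```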

Relevance: 30.00%

Abstract:

Polysilicon production costs contribute approximately 25-33% of the overall cost of solar panels and a similar fraction of the total energy invested in their fabrication. Understanding the energy losses and the behaviour of the process temperature is an essential requirement as one moves forward to design and build large-scale polysilicon manufacturing plants. In this paper we present thermal models for two polysilicon production processes, viz., the Siemens process using trichlorosilane (TCS) as precursor and the fluid bed process using silane (monosilane, MS). We validate the models with experimental measurements on prototype laboratory reactors, relating the temperature profiles to product quality. A model sensitivity analysis is also performed, and the effects of some key parameters, such as reactor wall emissivity and gas distributor temperature, on temperature distribution and product quality are examined. The information presented in this paper is useful for a further understanding of the strengths and weaknesses of both deposition technologies, and will help in optimal temperature profiling of these systems aimed at lowering production costs without compromising solar cell quality.
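
As a rough illustration of the kind of sensitivity analysis mentioned above, the following sketch implements a lumped grey-body energy balance for a single heated rod facing the reactor wall. It is not the thermal model of the paper; the geometry, temperatures, emissivities and convective coefficient are illustrative assumptions only.

```python
# Minimal sketch (not the paper's model) of a lumped radiative/convective energy
# balance for one Siemens rod, showing how wall emissivity feeds back into the
# electrical power needed to hold the deposition temperature.
import numpy as np

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def rod_power_demand(T_rod=1400.0, T_wall=400.0, T_gas=600.0,
                     eps_rod=0.7, eps_wall=0.5, h_conv=30.0,
                     rod_area=0.6, wall_area=6.0):
    """Power (W) needed to keep one rod at T_rod (all temperatures in K)."""
    # grey two-surface enclosure: convex rod (1) inside the reactor wall (2)
    rad_resistance = 1.0 / eps_rod + (rod_area / wall_area) * (1.0 / eps_wall - 1.0)
    q_rad = SIGMA * (T_rod**4 - T_wall**4) / rad_resistance   # W per m^2 of rod
    q_conv = h_conv * (T_rod - T_gas)                         # W per m^2 of rod
    return rod_area * (q_rad + q_conv)

# Illustrative sensitivity to wall emissivity:
for e in (0.3, 0.5, 0.8):
    print(e, round(rod_power_demand(eps_wall=e)))
```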

Relevance: 30.00%

Abstract:

Many computer vision and human-computer interaction applications developed in recent years need to evaluate complex and continuous mathematical functions as an essential step toward proper operation. However, rigorous evaluation of such functions often implies a very high computational cost, unacceptable in real-time applications. To alleviate this problem, functions are commonly approximated by simpler piecewise-polynomial representations. Following this idea, we propose a novel, efficient, and practical technique to evaluate complex and continuous functions using a nearly optimal design of two types of piecewise linear approximations in the case of a large budget of evaluation subintervals. To this end, we develop a thorough error analysis that yields asymptotically tight bounds to accurately quantify the approximation performance of both representations. It improves upon previous error estimates and allows the user to control the trade-off between the approximation error and the number of evaluation subintervals. To guarantee real-time operation, the method is suitable for, but not limited to, an efficient implementation in modern Graphics Processing Units (GPUs), where it outperforms previous alternative approaches by exploiting the fixed-function interpolation routines present in their texture units. The proposed technique is a perfect match for any application requiring the evaluation of continuous functions. We have measured its quality and efficiency in detail on several functions, in particular the Gaussian function, because it is extensively used in many areas of computer vision and cybernetics and is expensive to evaluate.
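
Since the central idea is to tabulate a function once and then evaluate it by linear interpolation, a generic sketch may help. This is the plain uniform-subinterval variant, not the paper's optimized subinterval design, and the error-bound comment is the standard interpolation estimate.

```python
# Minimal sketch of approximating an expensive 1-D function with a uniform
# piecewise-linear table; the table lookup is exactly what a GPU texture unit
# performs in fixed-function hardware.
import numpy as np

def build_pwl_table(f, a, b, n_intervals):
    """Precompute n_intervals + 1 samples of f on [a, b]."""
    x = np.linspace(a, b, n_intervals + 1)
    return x, f(x)

def pwl_eval(xq, table):
    x, y = table
    return np.interp(xq, x, y)

# Example: Gaussian on [-4, 4]
gauss = lambda x: np.exp(-0.5 * x**2)
table = build_pwl_table(gauss, -4.0, 4.0, 1024)
xq = np.linspace(-4.0, 4.0, 100_001)
max_err = np.max(np.abs(pwl_eval(xq, table) - gauss(xq)))
# For a function with bounded second derivative, max_err shrinks roughly as
# (b - a)^2 * max|f''| / (8 * n_intervals^2).
print(max_err)
```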

Relevance: 30.00%

Abstract:

The work presented in this document focuses on wireless power transfer, specifically far-field applications. It covers the design, implementation and measurement of a rectenna operating in the ISM band at a frequency of 2.45 GHz. The primary objective of this work is to analyse which parameters affect the conversion efficiency of the RF-DC stage in order to achieve the highest possible conversion efficiency. The analysis is carried out with computer tools, specifically the AWR Microwave Office software, with which source-pull simulations are run to determine the optimal input impedance that must be presented to the RF-DC rectifier stage for maximum conversion efficiency. Once these simulations are completed, a rectenna circuit is physically implemented and source-pull measurements are taken with a MAURY MICROWAVE Wide Matching Range Slide Screw Tuner in order to assess any differences with respect to the simulated results. After the source-pull tests, an input matching network is extrapolated from the measured data, and a rectenna circuit with maximum conversion efficiency is designed and fabricated for a given set of RF input power and DC load values; the efficiency of the designed circuit is then analysed for different RF input powers and DC loads. The HSMS-2820 Schottky diode is used as the rectifying element: Schottky diodes have relatively short switching times and low forward losses, which is essential when working with low RF input power levels. The circuit is implemented on a 0.8 mm thick FR4 substrate to reduce the dielectric losses as much as possible. Several options for the RF filter at the output of the rectifier diode are analysed, and a radial stub is finally chosen because it provides the widest bandwidth. The simulated results are compared with measurements on the rectenna circuit to determine how closely they agree.
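
The efficiency figure that the whole study optimizes is simple enough to state in a few lines. The sketch below computes RF-DC conversion efficiency from a DC voltage measured across a known load and the available RF input power; the numbers are illustrative, not measured values from this work.

```python
# Minimal sketch of the rectenna figure of merit: eta = P_DC / P_RF,in.
def dbm_to_watts(p_dbm):
    return 10.0 ** (p_dbm / 10.0) / 1000.0

def conversion_efficiency(v_dc, r_load, p_rf_dbm):
    """DC power dissipated in the load divided by the available RF input power."""
    p_dc = v_dc ** 2 / r_load
    return p_dc / dbm_to_watts(p_rf_dbm)

# e.g. 0.7 V across a 1 kOhm load with 0 dBm (1 mW) of incident RF power
print(conversion_efficiency(v_dc=0.7, r_load=1e3, p_rf_dbm=0.0))   # 0.49 -> 49 %
```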

Relevance: 30.00%

Abstract:

Intercontinental Ballistic Missiles are capable of placing a nuclear warhead more than 5,000 km away from their launching base. With the lethal power of a nuclear warhead, a whole city could be wiped out by a single weapon, causing millions of deaths. This means that the threat posed to any country by a single ICBM captured by a terrorist group or launched by a 'rogue' state is huge. This threat is increasing as more countries achieve nuclear and advanced launcher capabilities. In order to suppress or at least reduce this threat, the United States created the National Missile Defense System, which involved, among other systems, the development of long-range interceptors whose aim is to destroy incoming ballistic missiles in their midcourse phase. Ballistic Missile Defense is a high-profile topic that has lately been the focus of political controversy, when the U.S. decided to expand the Ballistic Missile Defense system to Europe over the opposition of Russia. However, the technical characteristics of this system are mostly unknown to the general public. The interception of an ICBM using a long-range interceptor missile, as intended within the Ground-Based Missile Defense System of the American National Missile Defense (NMD), implies a series of problems of incredible complexity: - The incoming missile has to be detected almost immediately after launch. - The incoming missile has to be tracked along its trajectory with great accuracy. - The interceptor missile has to implement a fast and accurate guidance algorithm in order to reach the incoming missile as soon as possible. - The Kinetic Kill Vehicle deployed by the interceptor boost vehicle has to be able to detect the reentry vehicle once it has been deployed by the ICBM, when it offers a very low infrared signature, in order to perform a final rendezvous manoeuvre. - The Kinetic Kill Vehicle has to be able to discriminate the reentry vehicle from the surrounding debris and decoys. - The Kinetic Kill Vehicle has to be able to implement an accurate guidance algorithm in order to perform a kinetic interception (direct collision) with the reentry vehicle at relative speeds of more than 10 km/s. All these problems are being dealt with simultaneously by the Ground-Based Missile Defense System, which is developing very complex and expensive sensors, communications and control centers, and long-range interceptors (the Ground-Based Interceptor missile), including a Kinetic Kill Vehicle. Among all the technical challenges involved in this interception scenario, this thesis focuses on the algorithms required for the guidance of the interceptor missile and the Kinetic Kill Vehicle in order to perform the direct collision with the ICBM. These guidance algorithms are analysed in depth in part III, where conventional guidance strategies are reviewed and optimal guidance algorithms are developed for this interception problem. The generation of a realistic simulation of the interception scenario between an ICBM and a Ground-Based Interceptor designed to destroy it was considered necessary in order to compare different guidance strategies with meaningful results. As a consequence, a highly representative simulator of an ICBM and a Kill Vehicle has been implemented, as detailed in part II, and the generation of these simulators has also become one of the purposes of this thesis.
In summary, the main purposes of this thesis are: - To develop a highly representative simulator of an interception scenario between an ICBM and a Kill Vehicle launched from a Ground-Based Interceptor. - To analyse the main existing guidance algorithms, both for the ascent phase and for the terminal phase of the missiles; novel conclusions are obtained from these analyses. - To develop original optimal guidance algorithms for the interception problem. - To compare the results obtained using the different guidance strategies, assess the behaviour of the optimal guidance algorithms, and analyse the feasibility of the Ballistic Missile Defense system in terms of guidance (part IV). As a secondary objective, a general overview of the state of the art in ballistic missiles and anti-ballistic missile defence is provided (part I).
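
As context for the guidance algorithms studied in parts III and IV, the sketch below implements classical true proportional navigation in two dimensions, the conventional baseline against which optimal guidance laws are usually compared. It is not the thesis' own optimal algorithm, and the engagement numbers are illustrative.

```python
# Minimal sketch of planar true proportional navigation (PN): a = N * Vc * LOS_rate.
import numpy as np

def pn_acceleration(r_m, v_m, r_t, v_t, N=4.0):
    """Commanded lateral acceleration (m/s^2) for a pursuer at r_m, v_m
    closing on a target at r_t, v_t."""
    r_rel = r_t - r_m
    v_rel = v_t - v_m
    rng = np.linalg.norm(r_rel)
    closing_speed = -np.dot(r_rel, v_rel) / rng
    # line-of-sight rotation rate for a planar engagement
    los_rate = (r_rel[0] * v_rel[1] - r_rel[1] * v_rel[0]) / rng**2
    return N * closing_speed * los_rate

# Illustrative near-head-on geometry with ~11 km/s closing speed
a_cmd = pn_acceleration(r_m=np.array([0.0, 0.0]), v_m=np.array([6000.0, 50.0]),
                        r_t=np.array([100e3, 2e3]), v_t=np.array([-5000.0, 0.0]))
print(a_cmd)   # commanded lateral acceleration in m/s^2
```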

Relevance: 30.00%

Abstract:

Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. To produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for their lower clock frequencies and less efficient area utilization with respect to ASICs. As FPGAs become commonly used for scientific computation, designs grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on extensions of intervals have made it possible to obtain accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art methodology based on statistical Modified Affine Arithmetic (MAA) in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that exercise each of them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values. A known drawback of the techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals of the system, introduces the noise sources for each group independently, and then combines the partial results. In this way, the number of noise sources active at any given time is kept under control and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, in order to keep the results as accurate as possible. This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We present two novel techniques that reduce the execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization. Second, the incremental method builds on the fact that, although a given confidence level must be guaranteed for the final results of the search, more relaxed levels, and therefore considerably fewer simulation runs, can be used in the initial stages of the search, when we are still far from the optimized solution. With these two approaches we show that the execution time of classical greedy search algorithms can be reduced by factors of up to ×240 for small and medium-sized problems. Finally, this work introduces HOPLITE, an automated, flexible and modular quantization framework that implements the previous techniques and is publicly available. Its aim is to offer developers and researchers a common environment for easily prototyping and verifying new methodologies for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions can be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
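
To make the Monte-Carlo word-length optimization loop concrete, here is a minimal sketch (unrelated to the actual HOPLITE code) of how quantization noise can be measured for a candidate word-length assignment on a toy two-multiplier datapath; a greedy search would call such a routine repeatedly while shrinking word-lengths one signal at a time.

```python
# Minimal sketch of a Monte-Carlo quantization-noise estimate for a toy datapath.
import numpy as np

def q(x, frac_bits):
    """Round-to-nearest fixed-point quantization with frac_bits fractional bits."""
    step = 2.0 ** (-frac_bits)
    return np.round(x / step) * step

def output_noise_power(frac_bits, n_samples=100_000, seed=0):
    """Toy datapath y = a*x1 + b*x2 evaluated in double precision vs. fixed point.
    frac_bits = (input/coefficient word-length, product/sum word-length)."""
    rng = np.random.default_rng(seed)
    x1, x2 = rng.uniform(-1, 1, (2, n_samples))
    a, b = 0.7071, 0.4142
    ref = a * x1 + b * x2
    fix = q(q(a, frac_bits[0]) * q(x1, frac_bits[0]), frac_bits[1]) + \
          q(q(b, frac_bits[0]) * q(x2, frac_bits[0]), frac_bits[1])
    return np.mean((fix - ref) ** 2)

# A greedy word-length search would reduce bits while this stays below a target:
print(output_noise_power(frac_bits=(12, 10)))
```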

Relevance: 30.00%

Abstract:

In this paper a consistent analysis of reinforced concrete (RC) two-dimensional (2-D) structures, namely slab structures subjected to in-plane and out-of-plane forces, is presented. By using this method of analysis, the well-established methodology for dimensioning and verifying RC sections of beam structures is extended to 2-D structures. The validity of the results of the proposed analysis is checked by comparing them with some published experimental test results. Several examples illustrate features of the proposed analysis, such as the influence of the reinforcement layout on the service and ultimate behavior of a slab structure and the non-straightforward problem of optimal dimensioning at a slab point subjected to several loading cases. In these examples, applications of the method to design situations such as multiple steel families and non-orthogonal reinforcement layouts are also discussed.
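
For orientation, the classical Wood-Armer rule below shows the kind of per-point design problem that such a 2-D analysis generalizes: converting slab moments (mx, my, mxy) into design moments for two orthogonal bottom reinforcement directions. This is the standard textbook rule, not the method proposed in the paper, and the input moments are illustrative.

```python
# Minimal sketch of the Wood-Armer design moments for bottom (sagging) reinforcement.
def wood_armer_bottom(mx, my, mxy):
    """Design moments m*_x, m*_y per unit width for bottom steel (kN.m/m)."""
    mxs = mx + abs(mxy)
    mys = my + abs(mxy)
    if mxs < 0.0:                   # x direction needs no bottom steel
        mxs, mys = 0.0, my + abs(mxy * mxy / mx)
    elif mys < 0.0:                 # y direction needs no bottom steel
        mys, mxs = 0.0, mx + abs(mxy * mxy / my)
    return max(mxs, 0.0), max(mys, 0.0)

print(wood_armer_bottom(mx=40.0, my=15.0, mxy=-20.0))   # illustrative values
```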

Relevance: 30.00%

Abstract:

In the maximum parsimony (MP) and minimum evolution (ME) methods of phylogenetic inference, evolutionary trees are constructed by searching for the topology that shows the minimum number of mutational changes required (M) and the smallest sum of branch lengths (S), respectively, whereas in the maximum likelihood (ML) method the topology showing the highest maximum likelihood (A) of observing a given data set is chosen. However, the theoretical basis of the optimization principle remains unclear. We therefore examined the relationships of M, S, and A for the MP, ME, and ML trees with those for the true tree by using computer simulation. The results show that M and S are generally greater for the true tree than for the MP and ME trees when the number of nucleotides examined (n) is relatively small, whereas A is generally lower for the true tree than for the ML tree. This finding indicates that the optimization principle tends to give incorrect topologies when n is small. To deal with this disturbing property of the optimization principle, we suggest that more attention should be given to testing the statistical reliability of an estimated tree rather than to finding the optimal tree with excessive efforts. When a reliability test is conducted, simplified MP, ME, and ML algorithms such as the neighbor-joining method generally give conclusions about phylogenetic inference very similar to those obtained by the more extensive tree search algorithms.
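
The quantity M above (the minimum number of mutational changes a topology requires) can be computed per alignment column with the classical Fitch algorithm. The sketch below is a minimal illustration on a four-taxon tree, not the simulation code used in the study.

```python
# Minimal sketch of the Fitch algorithm: parsimony changes for one alignment column.
def fitch_column(tree, leaf_states):
    """tree: nested tuples of leaf names; leaf_states: dict leaf -> nucleotide."""
    changes = 0

    def post_order(node):
        nonlocal changes
        if isinstance(node, str):                 # leaf
            return {leaf_states[node]}
        left, right = (post_order(child) for child in node)
        common = left & right
        if common:
            return common
        changes += 1                              # one extra substitution needed
        return left | right

    post_order(tree)
    return changes

# ((A,B),(C,D)) with states A=T, B=T, C=G, D=A requires 2 changes
tree = (("A", "B"), ("C", "D"))
print(fitch_column(tree, {"A": "T", "B": "T", "C": "G", "D": "A"}))
```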

Relevance: 30.00%

Abstract:

Divalent metal ions, such as Mg2+, are generally required for tertiary structure formation in RNA. Although the role of Mg2+ binding in RNA-folding equilibria has been studied extensively, little is known about the role of Mg2+ in RNA-folding kinetics. In this paper, we explore the effect of Mg2+ on the rate-limiting step in the kinetic folding pathway of the Tetrahymena ribozyme. Analysis of these data reveals the presence of a Mg2+-stabilized kinetic trap that slows folding at higher Mg2+ concentrations. Thus, the Tetrahymena ribozyme folds with an optimal rate at 2 mM Mg2+, just above the concentration required for stable structure formation. These results suggest that thermodynamic and kinetic folding of RNA are cooptimized at a Mg2+ concentration that is sufficient to stabilize the folded form but low enough to avoid kinetic traps and misfolding.

Relevance: 30.00%

Abstract:

The recent ability to sequence whole genomes allows ready access to all genetic material. The approaches outlined here allow automated analysis of sequence for the synthesis of optimal primers in an automated multiplex oligonucleotide synthesizer (AMOS). The efficiency is such that all ORFs for an organism can be amplified by PCR. The resulting amplicons can be used directly in the construction of DNA arrays or can be cloned for a large variety of functional analyses. These tools allow a replacement of single-gene analysis with a highly efficient whole-genome analysis.
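
As a toy illustration of the kind of per-ORF primer selection such a pipeline automates, the sketch below scans the 5' end of an ORF for a forward primer whose melting temperature falls in a target window, using the simple Wallace rule. The actual AMOS selection criteria and thermodynamic model are not described here, so this is only an assumption-laden stand-in.

```python
# Minimal sketch of picking a forward PCR primer from the 5' end of an ORF.
def wallace_tm(seq):
    """Tm (deg C) = 2*(A+T) + 4*(G+C); a rough rule for short oligos."""
    seq = seq.upper()
    return 2 * (seq.count("A") + seq.count("T")) + 4 * (seq.count("G") + seq.count("C"))

def pick_forward_primer(orf, tm_min=55, tm_max=65, min_len=18, max_len=28):
    for length in range(min_len, max_len + 1):
        primer = orf[:length]
        tm = wallace_tm(primer)
        if tm_min <= tm <= tm_max:
            return primer, tm
    return None

print(pick_forward_primer("ATGGCTAGCAAAGGAGAAGAACTTTTCACTGGAGTTGTC"))
```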

Relevance: 30.00%

Abstract:

Piotr Omenzetter and Simon Hoell’s work within the Lloyd’s Register Foundation Centre for Safety and Reliability Engineering at the University of Aberdeen is supported by Lloyd’s Register Foundation. The Foundation helps to protect life and property by supporting engineering-related education, public engagement and the application of research.

Relevance: 30.00%

Abstract:

Previous analysis of the rules regarding how much more a female should invest in a litter of size C rather than producing a litter with one more offspring revealed an invariance relationship between litter size and the range of resources per offspring in any litter size. The rule is that the range of resources per offspring should be inversely proportional to litter size. Here we present a modification of this rule that relates litter size to the total resources devoted to reproduction at that litter size. The result is that the range of resources devoted to reproduction should be the same for all litter sizes. When parental phenotypes covary linearly with resources devoted to reproduction, then those traits should also show equal ranges within each litter size category (except for litters of one). We tested this prediction by examining the range in body size (=total length) of female mosquito fish (Gambusia hubbsi) at different litter sizes. Because resources devoted to reproduction may take many forms (e.g., nest defense), this prediction may have broad applicability.
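
The modified rule can be stated compactly. Below is a minimal sketch of the arithmetic, under the simplifying assumption that the total allocation R is divided evenly among the C offspring in a litter (r = R/C):

```latex
% If litter size C is optimal whenever resources per offspring lie in
% [r_min(C), r_max(C)], the previously derived rule and its total-allocation form are
\[
  r_{\max}(C) - r_{\min}(C) \;\propto\; \frac{1}{C}
  \quad\Longrightarrow\quad
  R_{\max}(C) - R_{\min}(C) \;=\; C\,\bigl[r_{\max}(C) - r_{\min}(C)\bigr] \;=\; \text{constant},
\]
% so the range of total reproductive allocation, and of any parental trait varying
% linearly with it, is the same for every litter size.
```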

Relevance: 30.00%

Abstract:

We report the genetic organisation of six prophages present in the genome of Lactococcus lactis IL1403. The three larger prophages (36–42 kb), belong to the already described P335 group of temperate phages, whereas the three smaller ones (13–15 kb) are most probably satellites relying on helper phage(s) for multiplication. These data give a new insight into the genetic structure of lactococcal phage populations. P335 temperate phages have variable genomes, sharing homology over only 10–33% of their length. In contrast, virulent phages have highly similar genomes sharing homology over >90% of their length. Further analysis of genetic structure in all known groups of phages active on other bacterial hosts such as Escherichia coli, Bacillus subtilis, Mycobacterium and Streptococcus thermophilus confirmed the existence of two types of genetic structure related to the phage way of life. This might reflect different intensities of horizontal DNA exchange: low among purely virulent phages and high among temperate phages and their lytic homologues. We suggest that the constraints on genetic exchange among purely virulent phages reflect their optimal genetic organisation, adapted to a more specialised and extreme form of parasitism than temperate/lytic phages.

Relevance: 30.00%

Abstract:

Spectral changes in the photocycle of the photoactive yellow protein (PYP) are investigated by using ab initio multiconfigurational second-order perturbation theory at the available structures experimentally determined. Using the dark ground-state crystal structure [Genick, U. K., Soltis, S. M., Kuhn, P., Canestrelli, I. L. & Getzoff, E. D. (1998) Nature (London) 392, 206–209], the ππ* transition to the lowest excited state is related to the typical blue-light absorption observed at 446 nm. The different nature of the second excited state (nπ*) is consistent with the alternative route detected at 395-nm excitation. The results suggest the low-temperature photoproduct PYPHL as the most plausible candidate for the assignment of the cryogenically trapped early intermediate (Genick et al.). We cannot establish, however, a successful correspondence between the theoretical spectrum for the nanosecond time-resolved x-ray structure [Perman, B., Šrajer, V., Ren, Z., Teng, T., Pradervand, C., et al. (1998) Science 279, 1946–1950] and any of the spectroscopic photoproducts known up to date. It is fully confirmed that the colorless light-activated intermediate recorded by millisecond time-resolved crystallography [Genick, U. K., Borgstahl, G. E. O., Ng, K., Ren, Z., Pradervand, C., et al. (1997) Science 275, 1471–1475] is protonated, nicely matching the spectroscopic features of the photoproduct PYPM. The overall contribution demonstrates that a combined analysis of high-level theoretical results and experimental data can be of great value to perform assignments of detected intermediates in a photocycle.

Relevance: 30.00%

Abstract:

We present a method for discovering conserved sequence motifs from families of aligned protein sequences. The method has been implemented as a computer program called emotif (http://motif.stanford.edu/emotif). Given an aligned set of protein sequences, emotif generates a set of motifs with a wide range of specificities and sensitivities. emotif can also generate motifs that describe possible subfamilies of a protein superfamily. A disjunction of such motifs can often represent the entire superfamily with high specificity and sensitivity. We have used emotif to generate sets of motifs from all 7,000 protein alignments in the blocks and prints databases. The resulting database, called identify (http://motif.stanford.edu/identify), contains more than 50,000 motifs. For each alignment, the database contains several motifs whose probability of matching a false positive ranges from 10^-10 to 10^-5. Highly specific motifs are well suited for searching entire proteomes while generating very few false predictions. identify assigns biological functions to 25-30% of all proteins encoded by the Saccharomyces cerevisiae genome and by several bacterial genomes. In particular, identify assigned functions to 172 proteins of unknown function in the yeast genome.
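
The specificity figures quoted above (false-positive probabilities between 10^-10 and 10^-5) can be illustrated with a toy calculation: under an independence assumption, the probability that a random position matches a bracket-style motif is the product of the background frequencies of the allowed residues at each position. The background frequencies and example motif below are illustrative assumptions, not emotif's actual statistics.

```python
# Minimal sketch of a motif false-positive probability under a uniform background.
BACKGROUND = {aa: 0.05 for aa in "ACDEFGHIKLMNPQRSTVWY"}   # toy background frequencies

def motif_false_positive_prob(motif):
    """motif: list of sets of allowed residues; None marks a wildcard position."""
    p = 1.0
    for position in motif:
        if position is None:
            continue                           # wildcard matches anything
        p *= sum(BACKGROUND[aa] for aa in position)
    return p

# e.g. [ILV]-x-x-G-[DE]-S
motif = [set("ILV"), None, None, {"G"}, set("DE"), {"S"}]
print(motif_false_positive_prob(motif))        # 0.15 * 0.05 * 0.10 * 0.05 = 3.75e-05
```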