936 results for germs of holomorphic generalized functions


Resumo:

Con esta tesis “Desarrollo de una Teoría Uniforme de la Difracción para el Análisis de los Campos Electromagnéticos Dispersados y Superficiales sobre un Cilindro” hemos iniciado una nueva línea de investigación que trata de responder a la siguiente pregunta: ¿cuál es la impedancia de superficie que describe una estructura de conductor eléctrico perfecto (PEC) convexa recubierta por un material no conductor? Este tipo de estudios tiene interés hoy en día porque ayudan a predecir el campo electromagnético incidente, radiado o que se propaga sobre estructuras metálicas y localmente convexas que se encuentran recubiertas de algún material dieléctrico, o sobre estructuras metálicas con pérdidas, como por ejemplo se necesita en determinadas aplicaciones aeroespaciales, marítimas o automovilísticas. Además, desde un punto de vista teórico, la caracterización de la impedancia de superficie de una estructura PEC recubierta o no por un dieléctrico es una generalización de varias soluciones que tratan ambos tipos de problemas por separado. En esta tesis se desarrolla una teoría uniforme de la difracción (UTD) para analizar el problema canónico del campo electromagnético dispersado y superficial en un cilindro circular eléctricamente grande con una condición de contorno de impedancia (IBC) para frecuencias altas. Construir una solución basada en UTD para este problema canónico es crucial en el desarrollo de un método UTD para el caso más general de una superficie arbitrariamente convexa, mediante el uso del principio de localización de los campos electromagnéticos a altas frecuencias. Esta tesis doctoral se ha llevado a cabo a través de una serie de hitos que se enumeran a continuación, enfatizando las contribuciones a las que ha dado lugar. Inicialmente se realiza una revisión en profundidad del estado del arte de los métodos asintóticos con numerosas referencias. 
Así, cualquier lector novel puede llegar a conocer la historia de la óptica geométrica (GO) y la teoría geométrica de la difracción (GTD), que dieron lugar al desarrollo de la UTD. Después, se investiga ampliamente la UTD y los trabajos más importantes que pueden encontrarse en la literatura. Así, este capítulo nos coloca en la posición de afirmar que, hasta donde nosotros conocemos, nadie ha intentado antes llevar a cabo una investigación rigurosa sobre la caracterización de la impedancia de superficie de una estructura PEC recubierta por un material dieléctrico, utilizando para ello la UTD. Primero, se desarrolla una UTD para el problema canónico de la dispersión electromagnética de un cilindro circular eléctricamente grande con una IBC uniforme, cuando es iluminado por una onda plana con incidencia oblicua a frecuencias altas. La solución a este problema canónico se construye a partir de una solución exacta mediante una expansión de autofunciones de propagación radial. Entonces, ésta se convierte en una nueva expansión de autofunciones de propagación circunferencial muy apropiada para cilindros grandes, a través de la transformación de Watson. De esta forma, la expresión del campo se reduce a una integral que se evalúa asintóticamente, para altas frecuencias, de manera uniforme. El resultado se expresa según el trazado de rayos descrito en la UTD. La solución es uniforme porque tiene la importante propiedad de mantenerse continua a lo largo de la región de transición, a ambos lados de la superficie del contorno de sombra. Fuera de la región de transición la solución se reduce al campo incidente y reflejado puramente ópticos en la región iluminada del cilindro, y al campo superficial difractado en la región de sombra. Debido a la IBC el campo dispersado contiene una componente contrapolar a causa de un acoplamiento entre las ondas TEz y TMz (donde z es el eje del cilindro). 
Esta componente contrapolar desaparece cuando la incidencia es normal al cilindro, y también en la región iluminada cuando la incidencia es oblicua donde el campo se reduce a la solución de GO. La solución UTD presenta una muy buena exactitud cuando se compara numéricamente con una solución de referencia exacta. A continuación, se desarrolla una IBC efectiva para el cálculo del campo electromagnético dispersado en un cilindro circular PEC recubierto por un dieléctrico e iluminado por una onda plana incidiendo oblicuamente. Para ello se derivan dos impedancias de superficie en relación directa con las ondas creeping y de superficie TM y TE que se excitan en un cilindro recubierto por un material no conductor. Las impedancias de superficie TM y TE están acopladas cuando la incidencia es oblicua, y dependen de la geometría del problema y de los números de onda. Además, se ha derivado una impedancia de superficie constante, aunque con diferente valor cuando el observador se encuentra en la zona iluminada o en la zona de sombra. Después, se presenta una solución UTD para el cálculo de la dispersión de una onda plana con incidencia oblicua sobre un cilindro eléctricamente grande y convexo, mediante la generalización del problema canónico correspondiente al cilindro circular. La solución asintótica es uniforme porque se mantiene continua a lo largo de la región de transición, en las inmediaciones del contorno de sombra, y se reduce a la solución de rayos ópticos en la zona iluminada y a la contribución de las ondas de superficie dentro de la zona de sombra, lejos de la región de transición. Cuando se usa cualquier material no conductor se excita una componente contrapolar que tiende a desaparecer cuando la incidencia es normal al cilindro y en la región iluminada. Se discuten ampliamente las limitaciones de las fórmulas para la impedancia de superficie efectiva, y se compara la solución UTD con otras soluciones de referencia, donde se observa una muy buena concordancia. 
Y en tercer lugar, se presenta una aproximación para una impedancia de superficie efectiva para el cálculo de los campos superficiales en un cilindro circular conductor recubierto por un dieléctrico. Se discuten las principales diferencias que existen entre un cilindro PEC recubierto por un dieléctrico desde un punto de vista riguroso y un cilindro con una IBC. Mientras para un cilindro de impedancia se considera una impedancia de superficie constante o uniforme, para un cilindro conductor recubierto por un dieléctrico se derivan dos impedancias de superficie. Estas impedancias de superficie están asociadas a los modos de ondas creeping TM y TE excitadas en un cilindro, y dependen de la posición y de la orientación del observador y de la fuente. Con esto en mente, se deriva una solución UTD con IBC para los campos superficiales teniendo en cuenta las dependencias de la impedancia de superficie. La expansión asintótica se realiza, mediante la transformación de Watson, sobre la representación en serie de las funciones de Green correspondientes, evitando así calcular las derivadas de orden superior de las integrales de tipo Fock, y dando lugar a una solución rápida y precisa. En los ejemplos numéricos realizados se observa una muy buena precisión cuando el cilindro y la separación entre el observador y la fuente son grandes. Esta solución, junto con el método de los momentos (MoM), se puede aplicar para el cálculo eficiente del acoplamiento mutuo de grandes arrays conformados de antenas de parches. Los métodos propuestos basados en UTD para el cálculo del campo electromagnético dispersado y superficial sobre un cilindro PEC recubierto de dieléctrico con una IBC efectiva suponen un primer paso hacia la generalización de una solución UTD para superficies metálicas convexas arbitrarias cubiertas por un material no conductor e iluminadas por una fuente electromagnética arbitraria. 
ABSTRACT With this thesis “Development of a Uniform Theory of Diffraction for Scattered and Surface Electromagnetic Field Analysis on a Cylinder” we have initiated a line of investigation whose goal is to answer the following question: what is the surface impedance which describes a perfect electric conductor (PEC) convex structure covered by a material coating? These studies are of current and future interest for predicting the electromagnetic (EM) fields incident, radiating or propagating on locally smooth convex parts of highly metallic structures with a material coating, or on lossy metallic surfaces, as for example in aerospace, maritime and automotive applications. Moreover, from a theoretical point of view, the surface impedance characterization of PEC surfaces with or without a material coating represents a generalization of independent solutions for both types of problems. A uniform geometrical theory of diffraction (UTD) is developed in this thesis for analyzing the canonical problem of the EM scattered and surface fields of an electrically large circular cylinder with an impedance boundary condition (IBC) in the high frequency regime, by means of a surface impedance characterization. The construction of a UTD solution for this canonical problem is crucial for the development of the corresponding UTD solution for the more general case of an arbitrary smooth convex surface, via the principle of the localization of high frequency EM fields. The development of the present doctoral thesis has been carried out through a series of landmarks that are enumerated as follows, emphasizing the main contributions that this work has given rise to. Initially, an in-depth review of the state of the art of asymptotic methods is made, where numerous references are given. Thus, any reader may come to know the history of geometrical optics (GO) and the geometrical theory of diffraction (GTD), which led to the development of the UTD. 
Then, the UTD is investigated in depth and the main studies found in the literature are reviewed. This chapter puts us in the position to state that, as far as we know, no one has previously attempted a rigorous investigation of the surface impedance characterization of material-coated PEC convex structures via the UTD. First, a UTD solution is developed for the canonical problem of the EM scattering by an electrically large circular cylinder with a uniform IBC, when it is illuminated by an obliquely incident high frequency plane wave. A solution to this canonical problem is first constructed in terms of an exact formulation involving a radially propagating eigenfunction expansion. The latter is converted into a circumferentially propagating eigenfunction expansion suited for large cylinders, via the Watson transformation, and is expressed as an integral that is subsequently evaluated asymptotically, for high frequencies, in a uniform manner. The resulting solution is then expressed in the desired UTD ray form. This solution is uniform in the sense that it has the important property that it remains continuous across the transition region on either side of the surface shadow boundary. Outside the shadow boundary transition region it recovers the purely ray optical incident and reflected ray fields on the deep lit side of the shadow boundary and the modal surface diffracted ray fields on the deep shadow side. The scattered field is seen to have a cross-polarized component due to the coupling between the TEz and TMz waves (where z is the cylinder axis) resulting from the IBC. Such cross-polarization vanishes for normal incidence on the cylinder, and also in the deep lit region for oblique incidence, where the solution properly reduces to the GO or ray optical solution. This UTD solution is shown to be very accurate by numerical comparison with an exact reference solution. 
Then, an effective IBC is developed for the EM scattered field on a coated PEC circular cylinder illuminated by an obliquely incident plane wave. Two surface impedances are derived in direct relation with the TM and TE surface and creeping wave modes excited on a coated cylinder. The TM and TE surface impedances are coupled at oblique incidence, and depend on the geometry of the problem and the wave numbers. In addition, a constant surface impedance is derived, although with a different value when the observation point lies in the lit or in the shadow region. Then, a UTD solution for the scattering of an obliquely incident plane wave by an electrically large smooth convex coated PEC cylinder is introduced, via a generalization of the canonical circular cylinder problem. The asymptotic solution is uniform because it remains continuous across the transition region, in the vicinity of the shadow boundary, and it recovers the ray optical solution in the deep lit region and the creeping wave formulation within the deep shadow region. When a coating is present, a cross-polar field term is excited, which vanishes at normal incidence and in the deep lit region. The limitations of the effective surface impedance formulas are discussed, and the UTD solution is compared with several reference solutions, with which very good agreement is found. Third, an effective surface impedance approach is introduced for determining surface fields on an electrically large coated metallic circular cylinder. The differences between the analysis of rigorously treated coated metallic cylinders and that of cylinders with an IBC are discussed. While for the impedance cylinder case a single constant or uniform surface impedance is considered, for the coated metallic cylinder case two surface impedances are derived. These are associated with the TM and TE creeping wave modes excited on a cylinder and depend on observation and source positions and orientations. 
With this in mind, a UTD-based method with an IBC is derived for the surface fields, taking into account the surface impedance variation. The asymptotic expansion is performed, via the Watson transformation, over the appropriate series representation of the Green's functions, thus avoiding higher-order derivatives of Fock-type integrals and yielding a fast and accurate solution. Numerical examples reveal very good accuracy for large cylinders when the separation between the observation and source points is large. Thus, this solution, together with the method of moments (MoM), could be applied to the efficient mutual coupling analysis of large conformal microstrip array antennas. The proposed UTD methods for scattered and surface EM field analysis on a coated PEC cylinder with an effective IBC are considered the first steps toward the generalization of a UTD solution for large arbitrarily convex smooth metallic surfaces covered by a material coating and illuminated by an arbitrary EM source.
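For reference, the IBC invoked throughout admits the standard (Leontovich) statement below; this is the usual textbook form and sign convention, not necessarily the exact notation used in the thesis:

```latex
\mathbf{E} - (\hat{n}\cdot\mathbf{E})\,\hat{n} \;=\; Z_s\,\hat{n}\times\mathbf{H} \qquad \text{on } S ,
```

where $Z_s$ is the surface impedance and $\hat{n}$ the outward normal to the surface $S$. The PEC case is recovered in the limit $Z_s \to 0$, while for the coated cylinder $Z_s$ splits into coupled TM and TE values, $Z_s^{\mathrm{TM}}$ and $Z_s^{\mathrm{TE}}$, at oblique incidence, which is the origin of the cross-polar coupling between the TEz and TMz waves discussed above.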

Resumo:

In this paper we present a global overview of the recent study carried out in Spain for the new hazard map, whose final goal is the revision of the Building Code in our country (NCSE-02). The study was carried out by a working group joining experts from the Instituto Geográfico Nacional (IGN) and the Technical University of Madrid (UPM), with the different phases of the work supervised by a committee of national experts from the public institutions involved in seismic hazard. The PSHA (Probabilistic Seismic Hazard Assessment) method has been followed, quantifying the epistemic uncertainties through a logic tree and the aleatory ones, linked to the variability of parameters, by means of probability density functions and Monte Carlo simulations. In a first phase, the inputs were prepared, which essentially are: 1) an update of the project catalogue and its homogenization to Mw; 2) proposal of zoning models and source characterization; 3) calibration of Ground Motion Prediction Equations (GMPEs) with actual data and development of a local model with data collected in Spain for Mw < 5.5. In a second phase, a sensitivity analysis of the different input options on hazard results was carried out in order to establish criteria for defining the branches of the logic tree and their weights. Finally, the hazard estimation was done with the logic tree shown in figure 1, including nodes for quantifying uncertainties corresponding to: 1) the hazard estimation method (zoning and zoneless); 2) zoning models; 3) GMPE combinations used; and 4) the regression method for estimation of source parameters. 
In addition, the aleatory uncertainties corresponding to the magnitude of the events, the recurrence parameters and the maximum magnitude for each zone have also been considered by means of probability density functions and Monte Carlo simulations. The main conclusions of the study are presented here, together with the results obtained in terms of PGA and other spectral accelerations SA(T) for return periods of 475, 975 and 2475 years. Maps of the coefficient of variation (COV) are also presented to give an idea of the zones where the dispersion among results is highest and those where the results are robust.
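The Monte Carlo treatment of aleatory variability described above can be illustrated with a toy sketch: magnitudes drawn from a truncated Gutenberg-Richter law are pushed through a simple ground-motion model with lognormal scatter. Every coefficient below (b-value, magnitude bounds, GMPE constants, sigma) is an illustrative placeholder, not a value calibrated in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Truncated Gutenberg-Richter magnitude sampling via the inverse CDF
# (illustrative b-value and magnitude bounds).
b, m_min, m_max = 1.0, 4.0, 7.0
beta = b * np.log(10.0)

def sample_magnitudes(n):
    u = rng.random(n)
    c = 1.0 - np.exp(-beta * (m_max - m_min))
    return m_min - np.log(1.0 - u * c) / beta

# Toy GMPE: ln PGA (g) as a function of magnitude and distance,
# with lognormal aleatory variability (placeholder coefficients).
def gmpe_ln_pga(m, r_km, sigma=0.6):
    mean = -3.5 + 0.9 * m - 1.2 * np.log(r_km + 10.0)
    return mean + rng.normal(0.0, sigma, size=np.shape(m))

n = 100_000
mags = sample_magnitudes(n)
pga = np.exp(gmpe_ln_pga(mags, r_km=30.0))

# Conditional probability that PGA exceeds 0.1 g given one such event
p_exceed = np.mean(pga > 0.1)
print(f"P(PGA > 0.1 g | event at 30 km) ~ {p_exceed:.3f}")
```

In a full PSHA this conditional exceedance would be integrated over the source recurrence rates and repeated for every branch of the logic tree.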

Resumo:

Los decisores cada vez se enfrentan a problemas más complejos en los que tomar una decisión implica tener que considerar simultáneamente muchos criterios que normalmente son conflictivos entre sí. En la mayoría de los problemas de decisión es necesario considerar criterios económicos, sociales y medioambientales. La Teoría de la Decisión proporciona el marco adecuado para poder ayudar a los decisores a resolver estos problemas de decisión complejos, al permitir considerar conjuntamente la incertidumbre existente sobre las consecuencias de cada alternativa en los diferentes atributos y la imprecisión sobre las preferencias de los decisores. En esta tesis doctoral nos centramos en la imprecisión de las preferencias de los decisores cuando éstas pueden ser representadas mediante una función de utilidad multiatributo aditiva. Por lo tanto, consideramos imprecisión tanto en los pesos como en las funciones de utilidad componentes de cada atributo. Se ha considerado el caso en que la imprecisión puede ser representada por intervalos de valores o bien mediante información ordinal, en lugar de proporcionar valores concretos. En este sentido, hemos propuesto métodos que permiten ordenar las diferentes alternativas basados en los conceptos de intensidad de dominación o intensidad de preferencia, los cuales intentan medir la fuerza con la que cada alternativa es preferida al resto. Para todos los métodos propuestos se ha analizado su comportamiento y se ha comparado con los más relevantes existentes en la literatura científica que pueden ser aplicados para resolver este tipo de problemas. Para ello, se ha realizado un estudio de simulación en el que se han usado dos medidas de eficiencia (hit ratio y coeficiente de correlación de Kendall) para comparar los diferentes métodos. ABSTRACT Decision makers increasingly face complex decision-making problems where they have to simultaneously consider many often conflicting criteria. 
In most decision-making problems it is necessary to consider economic, social and environmental criteria. Decision theory provides an adequate framework for helping decision makers to make complex decisions, as it allows them to jointly consider the uncertainty about the performance of each alternative for each attribute and the imprecision of the decision makers' preferences. In this PhD thesis we focus on the imprecision of the decision maker's preferences when these can be represented by an additive multiattribute utility function. Therefore, we consider imprecision in the weights, as well as in the component utility functions for each attribute. We consider the case in which the imprecision is represented by ranges of values or by ordinal information rather than by precise values. In this respect, we propose methods for ranking alternatives based on notions of dominance intensity, also known as preference intensity, which attempt to measure how strongly each alternative is preferred to the others. The performance of the proposed methods has been analyzed and compared against the leading existing methods that are applicable to this type of problem. For this purpose, we conducted a simulation study using two efficiency measures (hit ratio and Kendall correlation coefficient) to compare the different methods.
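The interval-weight setting above can be sketched in a few lines: admissible weight vectors are sampled from the intervals and renormalized, and a simple dominance-intensity score ranks the alternatives. This is one plausible variant of such a measure, not necessarily the exact ones proposed in the thesis, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Component utilities of three alternatives over three attributes
# (rows = alternatives); purely illustrative values.
U = np.array([[0.8, 0.4, 0.6],
              [0.5, 0.9, 0.3],
              [0.6, 0.5, 0.7]])

# Imprecise weights given as intervals instead of precise values.
w_lo = np.array([0.2, 0.1, 0.2])
w_hi = np.array([0.5, 0.4, 0.5])

# Sample admissible weight vectors and renormalize so each sums to one,
# as the additive multiattribute utility model requires.
n = 20_000
W = rng.uniform(w_lo, w_hi, size=(n, 3))
W /= W.sum(axis=1, keepdims=True)

utils = W @ U.T          # additive utility of each alternative per sample

# Pairwise dominance intensity: average margin by which alternative i
# beats alternative j over the admissible weights.
n_alt = U.shape[0]
D = np.zeros((n_alt, n_alt))
for i in range(n_alt):
    for j in range(n_alt):
        if i != j:
            D[i, j] = np.mean(np.maximum(utils[:, i] - utils[:, j], 0.0))

# Net intensity score and the resulting ranking (best first).
score = D.sum(axis=1) - D.sum(axis=0)
print("ranking:", np.argsort(-score))
```

The hit ratio and Kendall correlation mentioned in the text would then compare such a ranking against the one obtained with the true, precise weights.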

Resumo:

The Laplacian pyramid is a well-known technique for image processing in which local operators of many scales, but identical shape, serve as the basis functions. The properties required of the pyramidal filter produce a family of filters which, in the classical case of a filter of length 5, depends on a single parameter. We pay attention to the Gaussian and fractal behaviour of these basis functions (or filters), and we determine the Gaussian and fractal ranges of this single parameter. These fractal filters lose less energy in every step of the Laplacian pyramid, and we apply this property to obtain threshold values for segmenting soil images and then evaluating their porosity. We also evaluate our results by comparing them with the threshold values of the Otsu algorithm, and conclude that our algorithm produces reliable results.
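A minimal sketch of the one-parameter family of length-5 filters (the classic Burt-Adelson generating kernel, whose free parameter is conventionally named `a`; the symbol used in the paper may differ) and of one Laplacian pyramid level, in 1-D for brevity:

```python
import numpy as np

def kernel_1d(a):
    # Classic 5-tap generating kernel; a = 0.4 gives the near-Gaussian
    # case, and the kernel sums to 1 for every a.
    return np.array([0.25 - a / 2, 0.25, a, 0.25, 0.25 - a / 2])

def reduce_1d(signal, a=0.4):
    # Blur with the 5-tap kernel, then keep every other sample.
    blurred = np.convolve(signal, kernel_1d(a), mode="same")
    return blurred[::2]

def laplacian_level(signal, a=0.4):
    # One pyramid level: detail = signal minus the upsampled coarse version.
    coarse = reduce_1d(signal, a)
    up = np.repeat(coarse, 2)[: len(signal)]  # crude nearest-neighbour expand
    return signal - up, coarse

x = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.1
detail, coarse = laplacian_level(x)
print(len(coarse), len(detail))  # 32 64
```

The energy retained by `detail` at each level is the quantity the paper exploits to derive segmentation thresholds.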

Resumo:

Introducción. La obesidad puede definirse como una enfermedad metabólica crónica de origen multifactorial, lo que provoca trastornos o problemas físicos y psicológicos a la persona, con patologías asociadas que limitan la esperanza de vida y deterioran la calidad de la misma, siendo determinante para sus áreas sociales y laborales. Este trastorno metabólico crónico se caracteriza por una acumulación excesiva de energía en el cuerpo en forma de grasa, lo que lleva a un aumento de peso con respecto al valor esperado por sexo, edad y altura. La gestión y el tratamiento de la obesidad tienen objetivos más amplios que la pérdida de peso e incluyen la reducción del riesgo y la mejora de la salud. Estos pueden ser alcanzados por la pérdida modesta de peso (es decir, 5-10% del peso corporal inicial), la mejora del contenido nutricional de la dieta y un modesto incremento en la actividad física y condición física. La dieta es uno de los métodos más populares para perder peso corporal. El ejercicio es otra alternativa para perder peso corporal. El aumento de ejercicio provoca un desequilibrio cuando se mantiene la ingesta calórica. También tiene ventajas, como la mejora del tono muscular, la capacidad cardiovascular, fuerza y flexibilidad, aumenta el metabolismo basal y mejora el sistema inmunológico. Objetivos. El objetivo de esta tesis es contribuir en un estudio de intervención para aclarar la evolución del peso corporal durante una intervención de dieta y ejercicio. Para ello, se evaluaron los efectos de la edad, sexo, índice de masa corporal inicial y el tipo de tratamiento en las tendencias de pérdida de peso. Otro objetivo de la tesis era crear un modelo de regresión lineal múltiple capaz de predecir la pérdida de peso corporal después del periodo de intervención. 
Y, por último, determinar el efecto sobre la composición corporal (peso corporal, índice de masa corporal, la masa grasa, y la masa libre de grasa) de las diferentes intervenciones basadas en ejercicios (fuerza, resistencia, resistencia combinada con fuerza, y las recomendaciones de actividad física (grupo control)) en combinación con dieta de adultos con sobrepeso y obesidad, después de la intervención, así como los cambios de la composición corporal 3 años más tarde. Diseño de la investigación. Los datos empleados en el análisis de esta tesis son parte del proyecto “Programas de Nutrición y Actividad Física para el tratamiento de la obesidad” (PRONAF). El proyecto PRONAF es un estudio clínico sobre programas de nutrición y actividad física para el sobrepeso y la obesidad, desarrollado en España durante varios años de intervención. Fue diseñado, en parte, para comparar diferentes tipos de intervención, con el objetivo de evaluar su impacto en las dinámicas de pérdida de peso, en personas con sobrepeso y obesidad. Como diseño experimental, el estudio se basó en una restricción calórica, a la que, en algunos casos, se le añadió un protocolo de entrenamiento (fuerza, resistencia, o combinado, en igualdad de volumen e intensidad). Las principales variables para la investigación que comprende esta tesis fueron: el peso corporal y la composición corporal (masa grasa y masa libre de grasa). Conclusiones. En esta tesis, para los programas de pérdida de peso en personas con sobrepeso y obesidad con un 25-30% de la restricción calórica, el peso corporal se redujo significativamente en ambos sexos, sin tener en cuenta la edad y el tipo de tratamiento seguido. Según los resultados del estudio, la pérdida de peso realizada por un individuo (hombre o mujer) durante los seis meses puede ser representada por cualquiera de las cinco funciones (lineal, potencial, exponencial, logarítmica y cuadrática) en ambos sexos, siendo la cuadrática la que tiende a representarlo mejor. 
Además, se puede concluir que la pérdida de peso corporal se ve afectada por el índice de masa corporal inicial y el sexo, siendo mayor para las personas obesas que para las de sobrepeso, que muestran diferencias entre sexos sólo en la condición de sobrepeso. Además, es posible calcular el peso corporal final de cualquier participante involucrado en una intervención utilizando la metodología del proyecto PRONAF sólo conociendo sus variables iniciales de composición corporal. Además, los cuatro tipos de tratamientos tuvieron resultados similares en cambios en la composición corporal al final del período de intervención, con la única excepción de la masa libre de grasa, siendo los grupos de entrenamiento los que la mantuvieron durante la restricción calórica. Por otro lado, sólo el grupo combinado logra mantener la reducción de la masa grasa (%) 3 años después del final de la intervención. ABSTRACT Introduction. Obesity can be defined as a chronic metabolic disease of multifactorial origin, which causes physical and psychological problems for the person, with associated pathologies that limit life expectancy and deteriorate its quality, and which is decisive for the person's social and working life. This chronic metabolic disorder is characterized by an excessive accumulation of energy in the body as fat, leading to increased weight relative to the value expected by sex, age and height. The management and treatment of obesity have wider objectives than weight loss alone and include risk reduction and health improvement. These may be achieved by modest weight loss (i.e. 5-10% of initial body weight), improved nutritional content of the diet and modest increases in physical activity and fitness. Diet is one of the most popular approaches to losing body weight. Exercise is another alternative for losing body weight. Increasing exercise causes an energy imbalance when the caloric intake is maintained. 
It also has advantages such as improved muscle tone, cardiovascular fitness, strength and flexibility, and it increases the basal metabolism and improves the immune system. Objectives. The aim of this thesis is to contribute an intervention study to clarify the evolution of body weight during a diet and exercise intervention. For this, the effects of age, sex, initial body mass index and type of treatment on weight loss tendencies were evaluated. Another objective of the thesis was to create a multiple linear regression model able to predict the body weight loss after the intervention period. And, finally, to determine the effect upon body composition (body weight, body mass index, fat mass, and fat-free mass) of different exercise-based interventions (strength, endurance, combined endurance and strength, and a physical activity recommendations group (control group)) combined with diet in overweight and obese adults, after the intervention, as well as the body composition changes 3 years later. Research Design. The data used in the analysis of this thesis are part of the project "Programas de Nutrición y Actividad Física para el tratamiento de la obesidad" (PRONAF). The PRONAF project is a clinical trial on nutrition and physical activity programs for overweight and obesity, carried out in Spain over several years of intervention. It was designed, in part, to compare different types of intervention, in order to assess their impact on the dynamics of weight loss in overweight and obese people. As an experimental design, the study was based on caloric restriction, to which, in some cases, a training protocol was added (strength, endurance, or combined, at equal volume and intensity). The main research variables of this thesis were: body weight and body composition outcomes (fat mass and fat-free mass). Conclusions. 
In this thesis, for weight loss programs in overweight and obese people with 25-30% caloric restriction, body weight decreased significantly in both sexes, regardless of age and the type of treatment followed. According to the results of the study, the weight loss achieved by an individual (male or female) during the six months can be represented by any of the five functions (linear, power law, exponential, logarithmic and quadratic) in both sexes, with the quadratic tending to represent it best. In addition, it can be concluded that body weight loss is affected by the initial body mass index and by sex, being greater for obese people than for overweight ones, with differences between sexes only in the overweight condition. Moreover, it is possible to calculate the final body weight of any participant engaged in an intervention using the PRONAF Project methodology knowing only their initial body composition variables. Furthermore, the four types of treatment had similar results on body composition changes at the end of the intervention period, with the only exception of fat-free mass, which the training groups maintained during the caloric restriction. On the other hand, only the combined group managed to maintain the fat mass (%) reduction 3 years after the end of the intervention.
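The quadratic weight-loss model singled out above can be illustrated with a short fit. The monthly weights below are synthetic, not PRONAF data:

```python
import numpy as np

# Illustrative six-month weight record (kg), decelerating loss.
months = np.arange(0, 7)
weight = np.array([95.0, 92.1, 89.9, 88.2, 86.9, 86.0, 85.5])

# Quadratic model w(t) = c2*t^2 + c1*t + c0, the functional form the
# study found tends to represent weight loss best.
coeffs = np.polyfit(months, weight, deg=2)
fitted = np.polyval(coeffs, months)

# Goodness of fit via the coefficient of determination R^2.
ss_res = np.sum((weight - fitted) ** 2)
ss_tot = np.sum((weight - weight.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"quadratic fit: c2={coeffs[0]:.3f}, R^2={r2:.4f}")
```

A positive leading coefficient `c2` captures the decelerating loss (large drops early, smaller ones later) that the quadratic form describes.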

Resumo:

Mixtures of polynomials (MoPs) are a non-parametric density estimation technique especially designed for hybrid Bayesian networks with continuous and discrete variables. Algorithms to learn one- and multi-dimensional (marginal) MoPs from data have recently been proposed. In this paper we introduce two methods for learning MoP approximations of conditional densities from data. Both approaches are based on learning MoP approximations of the joint density and the marginal density of the conditioning variables, but they differ as to how the MoP approximation of the quotient of the two densities is found. We illustrate and study the methods using data sampled from known parametric distributions, and we demonstrate their applicability by learning models based on real neuroscience data. Finally, we compare the performance of the proposed methods with an approach for learning mixtures of truncated basis functions (MoTBFs). The empirical results show that the proposed methods generally yield models that are comparable to or significantly better than those found using the MoTBF-based method.
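The joint/marginal quotient construction can be sketched in miniature. Here a single global polynomial smooths a histogram (a crude stand-in for a genuine MoP, which uses piecewise polynomials), and the conditional density at a fixed value of the conditioning variable is obtained by renormalizing a slice of the joint, which plays the role of the division by the marginal:

```python
import numpy as np

rng = np.random.default_rng(2)

# Data from a known parametric distribution: Y = 0.6 X + Gaussian noise.
n = 50_000
x = rng.normal(0.0, 1.0, n)
y = 0.6 * x + rng.normal(0.0, 0.8, n)

def poly_density(samples, grid, deg=6):
    # Histogram-based density estimate smoothed with one polynomial --
    # a crude stand-in for a proper MoP learning algorithm.
    hist, edges = np.histogram(samples, bins=40,
                               range=(grid[0], grid[-1]), density=True)
    mids = 0.5 * (edges[:-1] + edges[1:])
    fit = np.polyval(np.polyfit(mids, hist, deg), grid)
    return np.maximum(fit, 1e-9)       # densities must stay positive

grid = np.linspace(-3.0, 3.0, 121)

# Conditional f(y | x ~ 0): polynomial density of y within a thin slab
# around x = 0, renormalized to integrate to one (trapezoid rule).
slab = np.abs(x) < 0.1
f_slice = poly_density(y[slab], grid)
norm = np.sum(0.5 * (f_slice[1:] + f_slice[:-1]) * np.diff(grid))
f_cond = f_slice / norm

mode = grid[np.argmax(f_cond)]
print(f"estimated mode of f(y | x=0): {mode:.2f}")  # true mode is 0
```

The two methods in the paper differ precisely in how the quotient step is realized as a MoP; this sketch only conveys the overall shape of the computation.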

Resumo:

Esta tesis establece los fundamentos teóricos y diseña una colección abierta de clases C++ denominada VBF (Vector Boolean Functions) para analizar funciones booleanas vectoriales (funciones que asocian un vector booleano a otro vector booleano) desde una perspectiva criptográfica. Esta nueva implementación emplea la librería NTL de Victor Shoup, incorporando nuevos módulos que complementan a las funciones de NTL, adecuándolas para el análisis criptográfico. La clase fundamental que representa una función booleana vectorial se puede inicializar de manera muy flexible mediante diferentes estructuras de datos tales como la Tabla de verdad, la Representación de traza y la Forma algebraica normal entre otras. De esta manera VBF permite evaluar los criterios criptográficos más relevantes de los algoritmos de cifra en bloque y de stream, así como funciones hash: por ejemplo, proporciona la no-linealidad, la distancia lineal, el grado algebraico, las estructuras lineales, la distribución de frecuencias de los valores absolutos del espectro Walsh o del espectro de autocorrelación, entre otros criterios. Adicionalmente, VBF puede llevar a cabo operaciones entre funciones booleanas vectoriales tales como la comprobación de igualdad, la composición, la inversión, la suma, la suma directa, el bricklayering (aplicación paralela de funciones booleanas vectoriales como la empleada en el algoritmo de cifra Rijndael), y la adición de funciones coordenada. La tesis también muestra el empleo de la librería VBF en dos aplicaciones prácticas. Por un lado, se han analizado las características más relevantes de los sistemas de cifra en bloque. Por otro lado, combinando VBF con algoritmos de optimización, se han diseñado funciones booleanas cuyas propiedades criptográficas son las mejores conocidas hasta la fecha. 
ABSTRACT This thesis develops the theoretical foundations and designs an open collection of C++ classes, called VBF, designed for analyzing vector Boolean functions (functions that map a Boolean vector to another Boolean vector) from a cryptographic perspective. This new implementation uses the NTL library from Victor Shoup, adding new modules which complement the existing ones making VBF better suited for cryptography. The fundamental class representing a vector Boolean function can be initialized in a flexible way via several alternative types of data structures such as Truth Table, Trace Representation, Algebraic Normal Form (ANF) among others. This way, VBF allows the evaluation of the most relevant cryptographic criteria for block and stream ciphers as well as for hash functions: for instance, it provides the nonlinearity, the linearity distance, the algebraic degree, the linear structures, the frequency distribution of the absolute values of the Walsh Spectrum or the Autocorrelation Spectrum, among others. In addition, VBF can perform operations such as equality testing, composition, inversion, sum, direct sum, bricklayering (parallel application of vector Boolean functions as employed in Rijndael cipher), and adding coordinate functions of two vector Boolean functions. This thesis also illustrates the use of VBF in two practical applications. On the one hand, the most relevant properties of the existing block ciphers have been analysed. On the other hand, by combining VBF with optimization algorithms, new Boolean functions have been designed which have the best known cryptographic properties up-to-date.
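As an illustration of one of the criteria mentioned above, the sketch below derives the nonlinearity of a Boolean function from its Walsh spectrum. VBF itself is a C++ library; this is an independent Python sketch of the standard textbook computation, not VBF's actual API.

```python
def walsh_spectrum(truth_table):
    """Fast Walsh-Hadamard transform of the +/-1 sign vector of f."""
    w = [1 - 2 * b for b in truth_table]  # bit 0 -> +1, bit 1 -> -1
    h, n = 1, len(w)
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                # butterfly step of the in-place transform
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return w

def nonlinearity(truth_table):
    """Hamming distance from f to the closest affine function."""
    n_vars = (len(truth_table) - 1).bit_length()
    return 2 ** (n_vars - 1) - max(abs(v) for v in walsh_spectrum(truth_table)) // 2

# the bent function f(x1..x4) = x1*x2 XOR x3*x4 attains the maximum
# nonlinearity for 4 variables
tt = []
for x in range(16):
    x1, x2, x3, x4 = (x >> 3) & 1, (x >> 2) & 1, (x >> 1) & 1, x & 1
    tt.append((x1 & x2) ^ (x3 & x4))
print(nonlinearity(tt))  # 6
```

The same spectrum also yields the frequency distribution of absolute Walsh values that the thesis mentions; for a bent function every entry of the spectrum has absolute value 2^(n/2).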


Using fixed-point arithmetic is one of the most common design choices for systems where area, power, or throughput are heavily constrained. To produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which designers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient hardware utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they can no longer be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them.

Techniques based on extensions of intervals have made it possible to obtain very accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art methodology based on statistical Modified Affine Arithmetic (MAA) in order to model systems that contain control-flow structures. Our methodology generates the different execution paths automatically, determines the regions of the input domain that will exercise each of them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with such control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values.

A known drawback of techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals of the system, introduces the noise sources for each group independently, and finally combines the partial results. In this way, the number of noise sources in the system at any given time is kept under control and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, in order to keep the results as accurate as possible.

This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We present two novel techniques that explore the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we must guarantee a given confidence interval for the final results of the search, we can use more relaxed confidence levels, and hence considerably fewer samples per simulation, in the initial stages of the search, when we are still far from the optimized solutions. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be reduced by factors of up to ×240 for small- and medium-sized problems.

Finally, this thesis introduces HOPLITE, an automated, flexible, and modular quantization framework that implements the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new quantization methodologies. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
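The round-off noise that these methodologies model can be illustrated with a minimal Monte-Carlo experiment. This is a generic sketch of the classical q²/12 uniform-rounding-noise model, not the thesis's MAA/ME-gPC machinery or the HOPLITE API:

```python
import numpy as np

def quantize(x, frac_bits):
    # round x to a fixed-point grid with `frac_bits` fractional bits
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

rng = np.random.default_rng(0)
signal = rng.uniform(-1.0, 1.0, 200_000)

for bits in (4, 8, 12):
    noise = signal - quantize(signal, bits)
    q = 2.0 ** -bits  # quantization step
    # classical model: rounding noise is ~uniform on [-q/2, q/2], power q^2/12
    print(bits, float(noise.var()), q * q / 12.0)
```

Each additional fractional bit halves q and thus reduces the noise power by a factor of four, which is the basic trade-off that word-length optimization navigates signal by signal.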


The aim of this Master's thesis is the analysis, design, and development of a robust and reliable human-computer interaction interface based on visual hand-gesture recognition. The implemented functionality is oriented to simulating a classical hardware interaction device, the mouse, by recognizing a specific hand-gesture vocabulary in color video sequences. For this purpose, a prototype hand-gesture recognition system has been designed and implemented, composed of three stages: detection, tracking, and recognition. The system is based on machine-learning methods and pattern-recognition techniques, integrated with other image-processing approaches to achieve high recognition accuracy at a low computational cost. Regarding pattern-recognition techniques, several algorithms and strategies applicable to color images and video sequences have been designed and implemented. These algorithms extract spatial and spatio-temporal features from static and dynamic hand gestures in order to identify them robustly and reliably. Finally, a visual database containing the vocabulary of gestures needed to interact with the computer has been created.
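The detection-tracking-recognition structure described above can be sketched as a minimal pipeline. Every stage function below is a hypothetical placeholder (the abstract does not specify the actual detector, tracker, or classifier), so the sketch only shows how the three stages compose:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    frame_id: int
    bbox: tuple  # (x, y, w, h) of the detected hand region

def detect(frame_id):
    # stand-in for a colour-based hand detector on one video frame
    return Observation(frame_id, (10 + frame_id, 20, 32, 32))

def track(history, obs):
    # stand-in tracker: keep a bounded history of recent observations
    history.append(obs)
    return history[-5:]

def recognize(history):
    # stand-in recogniser: classify the spatio-temporal feature (bbox motion)
    dx = history[-1].bbox[0] - history[0].bbox[0]
    return "swipe_right" if dx > 0 else "static"

history = []
for frame_id in range(5):
    history = track(history, detect(frame_id))
print(recognize(history))  # "swipe_right": the bbox x-coordinate increases
```

The real system would replace each placeholder with the learned detector, tracker, and classifier; the composition, however, stays the same.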


We have investigated the relationships between the apical sorting mechanism using lipid rafts and the soluble N-ethylmaleimide-sensitive factor attachment protein receptor (SNARE) machinery, which is involved in membrane docking and fusion. We first confirmed that anti-α-SNAP antibodies inhibit the apical pathway in Madin–Darby canine kidney (MDCK) cells; in addition, we report that a recombinant SNAP protein stimulates apical transport whereas a SNAP mutant inhibits this transport step. Based on t-SNARE overexpression experiments and the effect of botulinum neurotoxin E, syntaxin 3 and SNAP-23 have been implicated in apical membrane trafficking. Here, we show in permeabilized MDCK cells that anti-syntaxin 3 and anti-SNAP-23 antibodies lower surface delivery of an apical reporter protein. Moreover, using a similar approach, we show that tetanus toxin-insensitive vesicle-associated membrane protein (TI-VAMP; also called VAMP7), a recently described apical v-SNARE, is involved. Furthermore, we show the presence of syntaxin 3 and TI-VAMP in isolated apical carriers. Polarized apical sorting has been postulated to be mediated by the clustering of apical proteins into dynamic sphingolipid-cholesterol rafts. We provide evidence that syntaxin 3 and TI-VAMP are raft-associated. These data support a raft-based mechanism for the sorting not only of apically destined cargo but also of SNAREs that function in apical membrane-docking and fusion events.


Recently, a new method to analyze biological nonstationary stochastic variables has been presented. The method is especially suitable to analyze the variation of one biological variable with respect to changes of another variable. Here, it is illustrated by the change of the pulmonary blood pressure in response to a step change of oxygen concentration in the gas that an animal breathes. The pressure signal is resolved into the sum of a set of oscillatory intrinsic mode functions, which have zero “local mean,” and a final nonoscillatory mode. With this device, we obtain a set of “mean trends,” each of which represents a “mean” in a definitive sense, and together they represent the mean trend systematically with different degrees of oscillatory content. Correspondingly, the oscillatory content of the signal about any mean trend can be represented by a set of partial sums of intrinsic mode functions. When the concept of “indicial response function” is used to describe the change of one variable in response to a step change of another variable, we now have a set of indicial response functions of the mean trends and another set of indicial response functions to describe the energy or intensity of oscillations about each mean trend. Each of these can be represented by an analytic function whose coefficients can be determined by a least-squares curve-fitting procedure. In this way, experimental results are stated sharply by analytic functions.
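The core decomposition idea (oscillatory components with near-zero local mean riding on a non-oscillatory trend) can be illustrated with a much simpler stand-in: a moving-average smoother in place of full sifting into intrinsic mode functions. This is not the authors' method, only a sketch of the trend-plus-oscillation split:

```python
import numpy as np

def moving_average(x, window):
    # simple local-mean estimate of the non-oscillatory trend
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

t = np.linspace(0.0, 10.0, 2000)
signal = 0.5 * t + np.sin(2 * np.pi * 2.0 * t)  # linear trend + 2 Hz oscillation

trend = moving_average(signal, 200)   # window ~ 1 s, i.e. two oscillation periods
oscillation = signal - trend

# away from the window-length edge effects, the oscillatory part has ~zero mean,
# which is the defining property of an intrinsic mode
core = slice(300, 1700)
print(round(float(np.mean(oscillation[core])), 3))  # near 0
```

Replacing the smoother with iterated envelope sifting yields the set of intrinsic mode functions the abstract describes, and partial sums of those modes give the family of "mean trends" with different degrees of oscillatory content.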


Dopamine is a neuromodulator involved in the control of key physiological functions. Dopamine-dependent signal transduction is activated through the interaction with membrane receptors of the seven-transmembrane domain G protein-coupled family. Among them, dopamine D2 receptor is highly expressed in the striatum and the pituitary gland as well as by mesencephalic dopaminergic neurons. Lack of D2 receptors in mice leads to a locomotor parkinsonian-like phenotype and to pituitary tumors. The D2 receptor promoter has characteristics of a housekeeping gene. However, the restricted expression of this gene to particular neurons and cells points to a strict regulation of its expression by cell-specific transcription factors. We demonstrate here that the D2 receptor promoter contains a functional retinoic acid response element. Furthermore, analysis of retinoic acid receptor-null mice supports our finding and shows that in these animals D2 receptor expression is reduced. This finding assigns to retinoids an important role in the control of gene expression in the central nervous system.


Bovine papillomavirus type 1 (BPV-1) induces fibropapillomas in its natural host and can transform fibroblasts in culture. The viral genome is maintained as an episome within fibroblasts, which has allowed extensive genetic analyses of the viral functions required for DNA replication, gene expression, and transformation. Much less is known about BPV-1 gene expression and replication in bovine epithelial cells because the study of the complete viral life cycle requires an experimental system capable of generating a fully differentiated stratified bovine epithelium. Using a combination of organotypic raft cultures and xenografts on nude mice, we have developed a system in which BPV-1 can replicate and produce infectious viral particles. Organotypic cultures were established with bovine keratinocytes plated on a collagen raft containing BPV-1-transformed fibroblasts. These keratinocytes were infected with virus particles isolated from a bovine wart or were transfected with cloned BPV-1 DNA. Several days after the rafts were lifted to the air interface, they were grafted on nude mice. After 6–8 weeks, large xenografts were produced that exhibited a hyperplastic and hyperkeratotic epithelium overlying a large dermal fibroma. These lesions were strikingly similar to a fibropapilloma caused by BPV-1 in the natural host. Amplified viral DNA and capsid antigens were detected in the suprabasal cells of the epithelium. Moreover, infectious virus particles could be isolated from these lesions and quantitated by a focus formation assay on mouse cells in culture. Interestingly, analysis of grafts produced with infected and uninfected fibroblasts indicated that the fibroma component was not required for productive infection or morphological changes characteristic of papillomavirus-infected epithelium. This system will be a powerful tool for the genetic analysis of the roles of the viral gene products in the complete viral life cycle.


5-Lipoxygenase (5LO) plays a pivotal role in cellular leukotriene synthesis. To identify proteins interacting with human 5LO, we used a two-hybrid approach to screen a human lung cDNA library. From a total of 1.5 × 10⁷ yeast transformants, nine independent clones representing three different proteins were isolated and found to interact specifically with 5LO. Four 1.7- to 1.8-kb clones represented a 16-kDa protein named coactosin-like protein for its significant homology with coactosin, a protein found to be associated with actin in Dictyostelium discoideum. Coactosin-like protein thus may provide a link between 5LO and the cytoskeleton. Two other yeast clones of 1.5 kb encoded a partial cDNA of transforming growth factor (TGF) type β receptor-I-associated protein 1, which recently has been reported to associate with the activated form of the TGF β receptor I and may be involved in the TGF β-induced up-regulation of 5LO expression and activity observed in HL-60 and Mono Mac 6 cells. Finally, three identical 2.1-kb clones contained the partial cDNA of a human protein with high homology to a hypothetical helicase K12H4.8 from Caenorhabditis elegans and was consequently named ΔK12H4.8 homologue. Analysis of the predicted amino acid sequence revealed the presence of an RNase III motif and a double-stranded RNA-binding domain, indicative of a protein of nuclear origin. The identification of these 5LO-interacting proteins provides additional approaches to studies of the cellular functions of 5LO.


Alfalfa mosaic virus (AlMV) coat protein is involved in systemic infection of host plants, and a specific mutation in this gene prevents the virus from moving into the upper uninoculated leaves. The coat protein also is required for different viral functions during early and late infection. To study the role of the coat protein in long-distance movement of AlMV independent of other vital functions during virus infection, we cloned the gene encoding the coat protein of AlMV into a tobacco mosaic virus (TMV)-based vector Av. This vector is deficient in long-distance movement and is limited to locally inoculated leaves because of the lack of native TMV coat protein. Expression of AlMV coat protein, directed by the subgenomic promoter of TMV coat protein in Av, supported systemic infection with the chimeric virus in Nicotiana benthamiana, Nicotiana tabacum MD609, and Spinacia oleracea. The host range of TMV was extended to include spinach as a permissive host. Here we report the alteration of a host range by incorporating genetic determinants from another virus.