906 results for numerical computation
Abstract:
Education designed and planned in a clear and objective manner is of paramount importance if universities are to prepare competent professionals for the labor market and, above all, serve the population efficiently. In engineering specifically, laboratory classes are very important for applying theory and developing the student's practical skills. The planning and preparation of laboratories, as well as laboratory equipment and activities, should be developed in a succinct and clear way, showing students how to apply in practice what they have learned in theory, and often why and where it can be used once they become engineers. This work uses MATLAB together with the System Identification Toolbox and Arduino for the identification of linear systems in a Linear Control Lab. MATLAB is a program widely used in engineering for numerical computation, signal processing, graphing, and system identification, among other functions. The introduction to MATLAB, and consequently to system identification using the System Identification Toolbox, is thus relevant to the students' formation: later, whenever they need to identify a system, the basis and the concepts will already have been seen. For this procedure the open-source Arduino platform was used as a data acquisition board and was likewise introduced to the students, offering them a range of software and hardware for learning and steadily broadening the background they carry into their training.
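For a flavor of the identification step, here is a minimal Python sketch of the kind of least-squares fit involved (the work itself uses MATLAB's System Identification Toolbox with Arduino-acquired data; the model, names, and numbers below are illustrative assumptions):

```python
import numpy as np

def fit_first_order(u, y, dt):
    """Least-squares fit of the discrete first-order model
    y[k+1] = a*y[k] + b*u[k] from input/output samples."""
    Phi = np.column_stack([y[:-1], u[:-1]])
    a, b = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
    tau = -dt / np.log(a)        # continuous-time time constant
    gain = b / (1.0 - a)         # steady-state (DC) gain
    return a, b, tau, gain

# Synthetic step-response data standing in for Arduino-acquired samples
dt, n = 0.1, 200
u = np.ones(n)
y = np.zeros(n)
for k in range(n - 1):
    y[k + 1] = 0.9 * y[k] + 0.2 * u[k]   # "true" plant to be identified

print(fit_first_order(u, y, dt))         # recovers a=0.9, b=0.2
```

A first-order fit like this recovers the time constant and steady-state gain, the two quantities a linear control lab typically asks students to identify.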
Abstract:
The application of Isogeometric Analysis (IA) with T-splines [1] demands a partition of the parametric space, C, into a tiling containing T-junctions, called a T-mesh. T-splines are used both for the geometric modelling of the physical domain, D, and as the basis of the numerical approximation. They have the advantage over NURBS of allowing local refinement. In this work we propose a procedure to construct T-spline representations of complex domains in order to apply them to the solution of elliptic PDEs with IA. In previous works [2, 3] we accomplished this task by using a tetrahedral parametrization…
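For context, a T-spline defines the geometry through the standard rational blending form (notation generic, after [1]; not quoted from this abstract):

```latex
S(u,v) \;=\; \frac{\sum_{i=1}^{n} w_i\,\mathbf{P}_i\,B_i(u,v)}{\sum_{i=1}^{n} w_i\,B_i(u,v)},
\qquad
B_i(u,v) \;=\; N_i(u)\,N_i(v),
```

where each blending function B_i(u,v) is a product of univariate B-splines whose local knot vectors are inferred from the T-mesh; it is this local knot structure that makes local refinement possible.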
Abstract:
Abstract. This thesis presents a discussion of a few specific topics regarding the low-velocity impact behaviour of laminated composites. These topics were chosen because of their significance as well as the relatively limited attention they have received so far from the scientific community. The first issue considered is the comparison between the effects induced by a low-velocity impact and by a quasi-static indentation experimental test. An analysis of both test conditions is presented, based on the results of experiments carried out on carbon fibre laminates and on numerical computations by a finite element model. It is shown that both quasi-static and dynamic tests led to qualitatively similar failure patterns; three characteristic contact force thresholds, corresponding to the main steps of damage progression, were identified and found to be equal for impact and indentation. On the other hand, an equal energy absorption resulted in a larger delaminated area in quasi-static than in dynamic tests, while the maximum displacement of the impactor (or indentor) was higher in the case of impact, suggesting probably more severe fibre damage than in indentation. Secondly, the effect of different specimen dimensions and boundary conditions on the impact response was examined. Experimental testing showed that the relationships of delaminated area with two significant impact parameters, the absorbed energy and the maximum contact force, did not depend on the in-plane dimensions or on the support condition of the coupons. The possibility of predicting, by means of a simplified numerical computation, the occurrence of delaminations during a specific impact event is also discussed. A study of the compressive behaviour of impact-damaged laminates is also presented. Unlike most of the available contributions on this subject, the results of compression-after-impact tests on thin laminates are described, in which global specimen buckling was not prevented. Two different quasi-isotropic stacking sequences, as well as two specimen geometries, were considered. It is shown that in the case of rectangular coupons the lay-up can significantly affect the damage induced by impact. Different buckling shapes were observed in laminates with different stacking sequences, in agreement with the results of numerical analysis. In addition, the experiments showed that impact damage can alter the buckling mode of the laminates in certain situations, whereas it did not affect the compressive strength in every case, depending on the buckling shape. Some considerations about the significance of the test method employed are also proposed. Finally, a comprehensive study is presented regarding the influence of pre-existing in-plane loads on the impact response of laminates. Impact events in several conditions, including both tensile and compressive preloads, both uniaxial and biaxial, were analysed by means of numerical finite element simulations; the case of laminates impacted in postbuckling conditions was also considered. The study focused on how the effect of preload varies with the span-to-thickness ratio of the specimen, which was found to be a key parameter. It is shown that a tensile preload has the strongest effect on the peak stresses at low span-to-thickness ratios, leading to a reduction of the minimum impact energy required to initiate damage, whereas this effect tends to disappear as the span-to-thickness ratio increases.
On the other hand, a compressive preload exhibits the most detrimental effects at medium span-to-thickness ratios, at which the laminate compressive strength and the critical instability load are close to each other, while the influence of preload can be negligible for thin plates or even beneficial for very thick plates. The possibility of better explaining the experimental results described in the literature, in view of the present findings, is highlighted. Throughout the thesis the capabilities and limitations of the finite element model, which was implemented in an in-house program, are discussed. The program did not include any damage model of the material. It is shown that, although this kind of analysis can yield accurate results only as long as damage has little effect on the overall mechanical properties of a laminate, it can be helpful in explaining some phenomena and in distinguishing between what can be modelled without taking material degradation into account and what requires an appropriate simulation of damage.
Abstract:
Analysis of recurrent events has been widely discussed in the medical, health services, insurance, and engineering areas in recent years. This research proposes to use a nonhomogeneous Yule process with the proportional intensity assumption to model the hazard function of recurrent events data and the associated risk factors. This method assumes that repeated events occur for each individual, with given covariates, according to a nonhomogeneous Yule process with intensity function λ_x(t) = λ_0(t) · exp(x′β). One of the advantages of using a nonhomogeneous Yule process for recurrent events is that it assumes that the recurrence rate is proportional to the number of events that have occurred up to time t. Maximum likelihood estimation is used to provide estimates of the parameters in the model, and a generalized scoring iterative procedure is applied in the numerical computation. Model comparisons between the proposed method and other existing recurrent-event models are addressed by simulation. An example concerning recurrent myocardial infarction events, compared between two distinct populations, Mexican-Americans and non-Hispanic Whites, in the Corpus Christi Heart Project is examined.
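As background, the objective that such maximum likelihood estimation maximizes has the generic counting-process form below (standard theory, stated here as an assumption rather than quoted from the thesis):

```latex
\ell(\beta, \lambda_0) \;=\; \sum_{i:\, t_i \le \tau} \log \lambda_x(t_i) \;-\; \int_0^{\tau} \lambda_x(s)\, ds,
\qquad
\lambda_x(t \mid \mathcal{H}_t) \;=\; N(t^-)\,\lambda_0(t)\, e^{x'\beta},
```

where N(t−) is the number of events observed so far, capturing the Yule-type assumption that the recurrence rate is proportional to the event count.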
Abstract:
In this paper, a fully automatic goal-oriented hp-adaptive finite element strategy for open-region electromagnetic problems (radiation and scattering) is presented. The methodology leads to exponential rates of convergence in terms of an upper bound of a user-prescribed quantity of interest. Thus, the adaptivity may be guided to provide an optimal error, not globally for the field in the whole finite element domain, but for specific parameters of engineering interest. For instance, the error in the numerical computation of the S-parameters of an antenna array, the field radiated by an antenna, or the radar cross section in given directions can be minimized. The efficiency of the approach is illustrated with several numerical simulations with two-dimensional problem domains. Results include a comparison with the previously developed energy-norm-based hp-adaptivity.
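For context, the upper bound that typically drives goal-oriented adaptivity of this kind is the following (generic dual-weighted notation, an assumption rather than a quotation from the paper): with bilinear form b, Galerkin solution u_hp, and dual solution w of the adjoint problem defined by the quantity of interest Q,

```latex
|Q(u) - Q(u_{hp})| \;=\; |b(u - u_{hp},\, w - w_{hp})| \;\le\; \|u - u_{hp}\|_E \,\|w - w_{hp}\|_E ,
```

so driving down the energy-norm errors of the direct and adjoint solutions simultaneously drives down the error in Q itself.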
Abstract:
In this work, various turbulent solutions of the two-dimensional (2D) and three-dimensional compressible Reynolds-averaged Navier–Stokes (RANS) equations are analyzed using global stability theory. This analysis is motivated by the onset of flow unsteadiness (Hopf bifurcation) for transonic buffet conditions, where moderately high Reynolds numbers and compressibility effects must be considered. The buffet phenomenon involves a complex interaction between the separated flow and a shock wave. The efficient numerical methodology presented in this paper predicts the critical parameters, namely the angle of attack and the Mach and Reynolds numbers beyond which the onset of flow unsteadiness appears. The geometry, a NACA0012 profile, and the flow parameters selected reproduce situations of practical interest for aeronautical applications. The numerical computation is performed in three steps. First, a steady baseflow solution is obtained; second, the Jacobian matrix for the RANS equations based on a finite volume discretization is computed; and finally, the generalized eigenvalue problem is derived when the baseflow is linearly perturbed. The methodology is validated by predicting the 2D Hopf bifurcation for a circular cylinder under laminar flow conditions. This benchmark shows good agreement with previously published computations and experimental data. In the transonic buffet case, the baseflow is computed using the Spalart–Allmaras turbulence model and represents a mean flow where the high-frequency content and length scales of the order of the shear-layer thickness have been averaged. The lower-frequency content is assumed to be decoupled from the high frequencies, thus allowing a stability analysis to be performed on the low-frequency range. In addition, results of the corresponding adjoint problem and the sensitivity map are provided for the first time for the buffet problem. Finally, an extruded three-dimensional geometry of the NACA0012 airfoil, in which all velocity components are considered, was also analyzed as a TriGlobal stability case, and the results were compared to the previous 2D model, confirming that the buffet onset is well detected.
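The final step amounts to a large sparse generalized eigenvalue problem, A q̂ = λ B q̂, usually solved with a shift-invert Arnoldi iteration. A minimal self-contained Python sketch of that step on a stand-in matrix (the real Jacobian comes from the finite volume code; sizes, values, and the shift are illustrative assumptions):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

n = 400
# Stand-in for the Jacobian of the discretized, linearized RANS operator;
# the real matrix is assembled from the finite volume discretization.
A = sp.random(n, n, density=0.02, random_state=0, format='csc') \
    - 5.0 * sp.eye(n, format='csc')
B = sp.eye(n, format='csc')  # mass matrix (identity here for simplicity)

# Shift-invert Arnoldi near the origin; in practice the shift is placed
# near the expected buffet frequency. Eigenvalues crossing into the
# right half-plane signal the Hopf bifurcation (onset of unsteadiness).
vals, vecs = eigs(A, k=8, M=B, sigma=0.0, which='LM')
print(np.sort_complex(vals))
```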
Abstract:
Flow unsteadiness is an important problem in aerodynamic applications. In fact, there are several types of unsteady phenomena that are still at the cutting edge of research in the field; separation at high angles of attack and transonic buffet are two important examples. Global stability analysis can identify the onset of such unstable conditions, providing important information about the instability location in the domain and the frequency of the unstable phenomenon. The methodology computes a base flow averaged state based on a finite volume discretization, followed by the solution of a generalized eigenvalue problem corresponding to the perturbed linearized equations. The numerical computation is performed in three steps: first, a steady solution of the RANS equations is computed; second, the Jacobian matrix that represents the linearized problem is obtained; and finally, the generalized eigenvalue problem is derived and solved with an Arnoldi iterative method. As a first validation test, the technique has been applied to a circular cylinder under laminar conditions in order to detect the onset of von Karman vortex shedding, comparing the results with experiments and with previous calculations. The main part of the study focuses on turbulent and compressible cases. The prediction of the origin and progression of separated flow at high angles of attack has been studied on the NACA0012 airfoil at subsonic and transonic conditions and on an A310 wing section in take-off configuration. For all the analyzed geometries, it has been found that gradual separation generates the appearance of one specific unstable mode at angles of attack always greater than the angle of maximum lift coefficient. In addition, the adjoint problem has been studied to suggest the location of an external force that results in the largest change to the flow field. From the direct and the adjoint analyses, the structural sensitivity map has been computed and the core of the instability has been located. The other important phenomenon analyzed in this work is transonic buffet. In transonic conditions, the interaction between the shock wave and the boundary layer leads to an oscillation of the shock location and, consequently, of the aerodynamic forces. Knowing the critical operational conditions and their origin can be helpful in preventing such fluctuating forces. The instability onset has been computed and compared with the literature. Moreover, results of the corresponding adjoint problem and a sensitivity map have been provided for the first time for the buffet problem, indicating the region that must be modified to create the biggest change in flow field properties.
Because of the large memory consumption required when a 3D case is approached, a domain reduction study has been carried out with the aim of limiting the domain size to the region where the instability is located. The effectiveness of the domain reduction has been evaluated by investigating the change in the Jacobian matrix size; it did not prove very efficient in terms of memory consumption. Since buffet is in general a three-dimensional problem, TriGlobal stability analysis of a 3D geometry can be seen as the real future challenge. As a first approximation to the problem, a study has been carried out on an extruded three-dimensional geometry of the NACA0012 airfoil. The 3D flow computation and the TriGlobal stability analysis have been performed, for the first time on a compressible and turbulent 3D case. The results have been compared with the 2D model, confirming that the buffet onset evaluated in the 2D case is well detected. Moreover, the computation has given an indication of the memory consumption required for a 3D case.
Abstract:
Sabor (Software de Análisis de BOcinas y Reflectores) is a teaching tool used in the school's laboratories for practical sessions in the subject Antenas y Compatibilidad Electromagnética; it gives students a graphical view of what is taught in theory classes about the fields in reflector apertures. This project aims to replace the first Sabor, which has become obsolete because of its operating-system requirements (it runs only on Windows XP on 32-bit computers), and also to add improvements and correct errors detected in the previous version. The project has been developed in MATLAB, a mathematical software package with a high-level language for numerical computation, visualization, and application development, available for the Unix, Windows, Mac OS X, and GNU/Linux platforms. As in the earlier versions, five types of reflector have been implemented: parabolic, offset, Cassegrain, and the two dual-offset configurations, Cassegrain and Gregorian. All were analysed with an ideal cos-q feed, and the results were compared with those of the previous versions of Sabor, namely Sabor 3.0 and the first Sabor. The project consists of four well-differentiated parts: the correct interpretation of the formulas used, taken from the final-year project Sabor3.0 by Francisco Egea Castejón; GUIDE, the graphical user interface development environment of MATLAB, used to create the GUI windows that gather the input data for each reflector and display the results on screen; MATLAB Object Oriented Programming and its properties such as inheritance, which is very useful for saving memory, since a single method can perform the calculations for the different reflector objects by merely changing each object's properties; and, finally, the validation of the results with the help of the previous versions of Sabor, detailed in chapter 5, together with the link to the horns from the final-year project Análisis de Bocinas en Matlab by Javier Montero. In addition, improvements over the old versions include registers that the user can save and load with the different variables, and a .txt file recording the field amplitude with its corresponding theta so that the user can plot it on any data-plotting platform, for example Excel.
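The inheritance idea described above can be pictured with a short sketch; this is illustrative Python (Sabor itself is written in MATLAB, and the class and method names here are invented for the example):

```python
import numpy as np

class Reflector:
    """Base class holding the analysis shared by all reflector types."""
    def __init__(self, diameter, focal_length):
        self.diameter = diameter
        self.focal_length = focal_length

    def feed_pattern(self, theta, q=2.0):
        # Ideal cos-q feed model: E(theta) ~ cos(theta)**q
        return np.cos(theta) ** q

    def directivity(self, wavelength, efficiency=0.7):
        # Common aperture estimate: D = eff * (pi * D / lambda)^2
        return efficiency * (np.pi * self.diameter / wavelength) ** 2

class Parabolic(Reflector):
    pass  # uses the inherited analysis unchanged

class Cassegrain(Reflector):
    def __init__(self, diameter, focal_length, magnification):
        # Equivalent-paraboloid idea: the subreflector magnifies the
        # effective focal length; the inherited methods then apply as-is.
        super().__init__(diameter, focal_length * magnification)

for r in (Parabolic(1.2, 0.5), Cassegrain(1.2, 0.5, 4.0)):
    print(type(r).__name__, r.directivity(wavelength=0.03))
```

One set of methods serves every reflector object, with only the properties changing, which is exactly the memory-saving point made above.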
Abstract:
The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that need to rely on real-time computer vision algorithms. However, video signals are only increasing in size, whereas the performance of single-core processors has somewhat stagnated in recent years. Consequently, new computer vision algorithms will need to be parallel to run on multiple processors and be computationally scalable. One of the most promising classes of processors nowadays can be found in graphics processing units (GPU). These are devices offering a high degree of parallelism, excellent numerical performance, and increasing versatility, which makes them interesting for scientific computing. In this thesis, we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors. However, we show that by parallelizing subtasks and implementing them on a GPU, both applications attain their goals of running at interactive frame rates. In addition, we propose a technique for the fast evaluation of arbitrarily complex functions, specially designed for GPU implementation. First, we explore the application of depth-image-based rendering techniques to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel-camera configuration usual in 3D TV. By using a backward mapping approach with a depth inpainting scheme based on median filters, we show that these techniques are adequate for free-viewpoint video applications. In addition, we show that referring depth information to a global reference system is ill-advised and should be avoided. Then, we propose a background subtraction system based on kernel density estimation techniques. These techniques are very well suited to modelling complex scenes featuring multimodal backgrounds, but have not been so popular due to their huge computational and memory complexity. The proposed system, implemented in real time on a GPU, features novel proposals for dynamic kernel bandwidth estimation for the background model, selective update of the background model, update of the position of reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce computational cost. The results, evaluated on several databases and compared to other state-of-the-art algorithms, demonstrate the high quality and versatility of our proposal. Finally, we propose a general method for the approximation of arbitrarily complex functions using continuous piecewise linear functions, specially formulated for GPU implementation by leveraging the texture filtering units, normally unused for numerical computation. Our proposal features a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method to obtain a quasi-optimal partition of the domain of the function to minimize the approximation error.
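The piecewise-linear idea is easy to sketch. Below is a minimal Python illustration of uniform sampling with the classical error bound for C² functions (the thesis develops a rigorous analysis and a quasi-optimal, nonuniform partition, which this sketch does not reproduce; on a GPU the interpolation itself is evaluated for free by the texture filtering units):

```python
import numpy as np

def pwl_approx(f, a, b, n):
    """Uniform piecewise-linear approximant of f on [a, b] from n samples,
    mimicking what linear texture filtering evaluates in hardware."""
    xs = np.linspace(a, b, n)
    ys = f(xs)
    return lambda x: np.interp(x, xs, ys)

f = np.sin
g = pwl_approx(f, 0.0, np.pi, 33)            # 32 intervals of width h
x = np.linspace(0.0, np.pi, 10001)
err = np.max(np.abs(f(x) - g(x)))

h = np.pi / 32
bound = h ** 2 / 8                           # err <= max|f''| * h^2 / 8, |f''| <= 1
print(err, bound, err <= bound)
```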
Abstract:
This project is based on the study by Jean Schoentgen in which the author characterized vocal microtremor by means of the modulation index and the modulation frequency. In this project, MATLAB is used to compute these parameters, and the resulting data are analysed at the end. The project is divided into three main parts. The first briefly explains basic concepts of the voice and important notions such as physiological tremor, pathological tremor, and vocal jitter, among others; the mathematical concepts used in developing the code are also detailed. This is done so that the reader has the important concepts clear before the development of the code and can thus follow the study more easily; no very extensive explanation of each concept is given, on the understanding that the reader has basic engineering knowledge, and countless books explain each of these concepts more precisely. The second part covers the development of the code. As mentioned above, MATLAB was used: it is employed in most subjects of the degree course, so a good command of it had already been acquired, and it offers very useful toolboxes that ease the mathematical calculations. This part illustrates each stage of the code step by step, along with plots of the voice signal as it passes through each stage. In the last part, the data from all the calculations on the voice recordings are obtained and analysed one by one, compared with those of Jean Schoentgen's study, and the possible differences are analysed.
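One common way to compute the two parameters is to remove the mean F0 (the carrier) from the pitch contour and read the dominant spectral peak; a minimal Python sketch on synthetic data (the project itself is in MATLAB, and Schoentgen's exact definitions may differ in detail from the simple ones used here):

```python
import numpy as np

fs = 100.0                                   # pitch-contour sample rate (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)
f0 = 120.0 * (1 + 0.02 * np.sin(2 * np.pi * 5.0 * t))   # 5 Hz tremor, 2% depth

contour = f0 - np.mean(f0)                   # remove the carrier (mean F0)
spec = np.abs(np.fft.rfft(contour * np.hanning(len(contour))))
freqs = np.fft.rfftfreq(len(contour), 1.0 / fs)

k = np.argmax(spec[1:]) + 1                  # skip the DC bin
mod_freq = freqs[k]                          # modulation frequency estimate
mod_depth = (f0.max() - f0.min()) / (2 * f0.mean())   # simple modulation index
print(mod_freq, mod_depth)                   # ~5.0 Hz, ~0.02
```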
Abstract:
We propose an arithmetic of function intervals as a basis for convenient rigorous numerical computation. Function intervals can be used as mathematical objects in their own right or as enclosures of functions over the reals. We present two areas of application of function interval arithmetic and associated software that implements the arithmetic: (1) Validated ordinary differential equation solving using the AERN library and within the Acumen hybrid system modeling tool. (2) Numerical theorem proving using the PolyPaver prover. © 2014 Springer-Verlag.
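A minimal Python sketch of the core idea, representing a function interval by a pair of bounding functions (AERN's actual representation uses polynomial enclosures with outward rounding, which this toy deliberately omits):

```python
import math

class FunctionInterval:
    """Enclosure of a real function between a lower and an upper bound."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Pointwise interval addition of two enclosures
        return FunctionInterval(lambda x: self.lo(x) + other.lo(x),
                                lambda x: self.hi(x) + other.hi(x))

    def __neg__(self):
        return FunctionInterval(lambda x: -self.hi(x), lambda x: -self.lo(x))

    def width(self, x):
        return self.hi(x) - self.lo(x)

# Enclose sin by a truncated Taylor polynomial plus a rigorous remainder
# bound: |sin(x) - (x - x^3/6)| <= |x|^5 / 120 (Lagrange, |f^(5)| <= 1).
sin_enc = FunctionInterval(lambda x: x - x**3 / 6 - abs(x)**5 / 120,
                           lambda x: x - x**3 / 6 + abs(x)**5 / 120)
x = 0.5
print(sin_enc.lo(x) <= math.sin(x) <= sin_enc.hi(x), sin_enc.width(x))

two_sin = sin_enc + sin_enc      # interval arithmetic on whole functions
print(two_sin.width(x))          # widths add: the enclosure stays rigorous
```

A genuinely rigorous implementation must also round the endpoint arithmetic outward; the sketch shows only the enclosure semantics.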
Abstract:
This article presents the principal results of the doctoral thesis “Direct Operational Methods in the Environment of a Computer Algebra System” by Margarita Spiridonova (Institute of Mathematics and Informatics, BAS), successfully defended before the Specialised Academic Council for Informatics and Mathematical Modelling on 23 March, 2009.
Abstract:
Chemical stratigraphy, or the study of the variation of chemical elements within sedimentary sequences, has gradually become an established tool in the research and correlation of global geologic events. In this paper, 87Sr/86Sr ratios of the Triassic marine carbonates (Muschelkalk facies) of the southeast Iberian Ranges, Iberian Peninsula, are presented and the representative Sr-isotopic curve is constructed for the upper Ladinian interval. The studied stratigraphic succession is 102 meters thick, continuous, and well preserved. Previous paleontological data from macro- and microfossils (ammonites, bivalves, foraminifera, conodonts) and palynological assemblages suggest a Fassanian-Longobardian age (Late Ladinian). Although diagenetic minerals are present in small amounts, the elemental content of the bulk carbonate samples, especially the Sr content, shows major variation that probably reflects palaeoenvironmental changes. The 87Sr/86Sr curve rises from 0.707649 near the base of the section to 0.707741, then declines rapidly to 0.707624, with a final rise to 0.70787 in the upper part. The data up to meter 80 of the studied succession are broadly consistent with 87Sr/86Sr ratios of sequences of similar age and complement those data. Moreover, the sequence stratigraphic framework and its key surfaces, which are difficult to recognise from facies analysis alone, are characterised by combining variations in the Ca, Mg, Mn, Sr, and CaCO3 contents.