933 results for Visualization Of Interval Methods


Relevance:

100.00%

Publisher:

Abstract:

In this paper, we seek to expand the use of direct methods in real-time applications by proposing a vision-based strategy for pose estimation of aerial vehicles. The vast majority of approaches use features to estimate motion. Conversely, the strategy we propose is based on a multi-resolution (MR) implementation of an image registration technique (Inverse Compositional Image Alignment, ICIA) using direct methods. The algorithm is based on an on-board camera in a downward-looking configuration and the assumption of planar scenes. The motion between frames (rotation and translation) is recovered by decomposing the frame-to-frame homography obtained by the ICIA algorithm applied to a patch that covers around 80% of the image. When the visual estimation is required (e.g. during a GPS drop-out), this motion is integrated with the last known estimate of the vehicle's state obtained from the on-board sensors (GPS/IMU), and the subsequent estimates are based only on the vision-based motion estimates. The proposed strategy is tested with real flight data from representative stages of a flight (cruise, landing and take-off), two of which, take-off and landing, are considered critical. The performance of the pose estimation strategy is analyzed by comparing it with the GPS/IMU estimates. Results show correlation between the visual estimates obtained with the MR-ICIA and the GPS/IMU data, demonstrating that the visual estimation can provide a good approximation of the vehicle's state when required (e.g. during GPS drop-outs). In terms of performance, the proposed strategy is able to maintain an estimate of the vehicle's state for more than one minute, at real-time frame rates, based only on visual information.
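As a rough illustration of the frame-to-frame step described above, the sketch below estimates a homography between two consecutive grayscale frames with a direct (intensity-based) alignment and decomposes it into candidate rotations and translations. It is not the authors' MR-ICIA implementation; OpenCV's ECC alignment merely stands in for the ICIA step, and the camera matrix K is a hypothetical calibration.

```python
# A minimal sketch (not the authors' MR-ICIA code): estimate a frame-to-frame
# homography of a planar scene by direct intensity alignment and decompose it
# into candidate rotation/translation/plane-normal triplets with OpenCV.
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],   # hypothetical camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def frame_to_frame_motion(prev_gray, curr_gray):
    # Direct (intensity-based) homography estimation; stands in for ICIA here.
    warp = np.eye(3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, H = cv2.findTransformECC(prev_gray, curr_gray, warp,
                                cv2.MOTION_HOMOGRAPHY, criteria, None, 5)
    # Up to four (R, t, n) candidates; the planar-scene assumption and
    # cheirality checks are used to select the physically valid one.
    _, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    return rotations, translations, normals
```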

Relevance:

100.00%

Publisher:

Abstract:

At present, much of the literature addresses the advantages and disadvantages of different methods of statistical and dynamical downscaling of climate variables projected by climate models. Less attention has been paid to other, indirect variables, like runoff, which play a significant role in evaluating the impact of climate change on hydrological systems. Runoff presents a much greater bias in climate models than other climate variables, like temperature or precipitation. It is therefore very important to identify the methods that minimize bias when downscaling runoff from the gridded results of climate models to the basin scale.
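The abstract does not name the downscaling methods that are compared; purely as one common example of a bias-minimizing step, the sketch below applies empirical quantile mapping so that modelled runoff follows the distribution observed at the basin scale. All variable names are illustrative.

```python
# Empirical quantile mapping (an illustrative bias-correction step, not
# necessarily a method evaluated in this study). Modelled runoff values are
# mapped onto the quantiles of the observed runoff distribution.
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Bias-correct model_future using a historical model/observation pair."""
    q = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model_hist, q)
    obs_q = np.quantile(obs_hist, q)
    # Locate each future value within the historical model distribution,
    # then read off the observed runoff at the same quantile.
    ranks = np.interp(model_future, model_q, q)
    return np.interp(ranks, q, obs_q)
```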

Relevance:

100.00%

Publisher:

Abstract:

The development of a web platform is a complex and interdisciplinary task in which people with different roles, such as project manager, designer or developer, participate. Different usability and User Experience evaluation methods can be used in each stage of the development life cycle, but not all of them have the same influence on the software development process and on the final product or system. This article presents a study of the impact of these methods applied in the context of an e-Learning platform development. The results show that the impact has been strong from a developer's perspective. Developer team members considered that usability and User Experience evaluation mainly allowed them to identify design mistakes, improve the platform's usability and better understand the end users and their needs. Interviews with potential users, clickmaps and scrollmaps were rated as the most useful methods. Finally, these methods were unanimously considered very useful in the context of the entire software development process, comparable only to SCRUM meetings and surpassing the other factors involved.

Relevance:

100.00%

Publisher:

Abstract:

In recent years, Independent Component Analysis (ICA) has proven to be a powerful signal-processing technique for solving Blind Source Separation (BSS) problems in different scientific domains. In the present work, an application of ICA to the processing of NIR hyperspectral images to detect traces of peanut in wheat flour is presented. Processing was performed without a priori knowledge of the chemical composition of the two food materials. The aim was to extract the source signals of the different chemical components from the initial data set and to use them to determine the distribution of peanut traces in the hyperspectral images. To determine the optimal number of independent components to be extracted, the Random ICA by blocks method was used. This method is based on the repeated calculation of several models using an increasing number of independent components after randomly segmenting the data matrix into two blocks, and then calculating the correlations between the signals extracted from the two blocks. The extracted ICA signals were interpreted and their ability to classify peanut and wheat flour was studied. Finally, all the extracted ICs were used to construct a single synthetic signal that could be used directly with the hyperspectral images to enhance the contrast between peanut and wheat flour in a real multi-use industrial environment. Furthermore, feature extraction methods (a connected-components labelling algorithm followed by a flood-fill method to extract object contours) were applied in order to pinpoint the spatial location of the peanut traces. A good visualization of the distribution of peanut traces was thus obtained.
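As a simplified illustration of the kind of processing described (not the authors' pipeline, and using a generic FastICA rather than the Random ICA by blocks procedure), the sketch below unfolds a hypothetical hyperspectral cube, extracts independent components, and applies connected-component labelling to a thresholded component image to locate contiguous regions.

```python
# A minimal sketch: FastICA on an unfolded NIR hyperspectral cube, followed by
# connected-component labelling of a thresholded component score image.
import numpy as np
from sklearn.decomposition import FastICA
from scipy import ndimage

def ica_trace_map(cube, n_components=5, component=0, threshold=0.5):
    """cube: (rows, cols, wavelengths) array of spectra (hypothetical data)."""
    rows, cols, bands = cube.shape
    X = cube.reshape(rows * cols, bands)            # one spectrum per pixel
    ica = FastICA(n_components=n_components, random_state=0)
    scores = ica.fit_transform(X)                   # pixel scores on each IC
    score_img = scores[:, component].reshape(rows, cols)
    # Label connected regions above the threshold (e.g. candidate peanut traces).
    labels, n_regions = ndimage.label(score_img > threshold)
    return score_img, labels, n_regions
```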

Relevance:

100.00%

Publisher:

Abstract:

The main objective of this thesis is the development of radiation pattern synthesis methods for antenna arrays in which a rigorous electromagnetic characterization of the radiating elements and of the existing mutual coupling is performed. This characterization is usually not carried out in the vast majority of synthesis methods found in the literature, essentially for two reasons. On the one hand, it is assumed that the radiation pattern of an antenna array can be approximated by the array factor, which only takes into account the positions of the elements and the excitations applied to them. However, as will be shown in this thesis, a rigorous analysis of the radiating elements and of the mutual coupling between them is often important, since the results obtained can be noticeably different. On the other hand, it is not straightforward to combine an electromagnetic analysis method with a radiation pattern synthesis process. Analysis methods for antenna arrays are usually computationally expensive, since arrays are large structures in terms of wavelengths. In general, the design of an electromagnetic problem comprises several analyses of the structure, depending on the variations of its characteristics, which makes the process very costly. Two methods are used in this thesis for the analysis of coupled arrays. Both are based on the finite element method, domain decomposition and modal analysis of the radiating structure, and both were developed in the research group in which this thesis was carried out. The first is a finite array analysis technique based on the infinite array approximation, suited to large planar arrays with equally spaced elements. The second characterizes the array and the mutual coupling between elements through a spherical mode expansion of the field radiated by each element; it computes the coupling between the different array elements using the translation and rotation properties of spherical modes, and it is able to analyze arrays of arbitrarily distributed elements. Both techniques use a matrix formulation that rigorously characterizes the field radiated by the array, which makes them very well suited for later use in a design tool such as the synthesis methods developed in this thesis. The results obtained by these synthesis techniques, which include rigorous analysis methods, are consequently more accurate. Array synthesis consists of modifying one or several parameters of an antenna array in order to meet certain specifications of its radiation characteristics. Several parameters can be used as optimization variables; the most common are the excitations applied to the elements, but other design parameters, such as the element positions or rotations, can also be modified. The synthesis objectives may be to steer the beam or beams in a given direction or to shape the beam with an arbitrary form. In addition, it is possible to minimize the side lobe level or the ripple in the desired regions, to impose nulls that avoid possible interference, or to reduce the level of the cross-polar component.
The finite array analysis method based on the infinite array approximation treats a finite array as an infinite array with a finite number of excited elements. The non-excited elements are physically present and can have three different terminations: short-circuited, open-circuited or matched. Each of these terminations better simulates a particular real environment in which the array may operate. This analysis method is combined in the thesis with two different radiation pattern synthesis methods. The first is based on linear programming and makes it possible to steer the beam or beams in the desired direction while controlling the side lobes or imposing nulls; it is very efficient and obtains optimal solutions. The same analysis method is also applied to a beam-shaping method, in which an originally non-convex (and hard to solve) problem is transformed into a convex one by imposing symmetry constraints, thus solving a complex problem efficiently. With this method it is possible to design radiation patterns with arbitrarily shaped beams while controlling the ripple of the main lobe as well as the side lobe level. The array analysis method based on the spherical mode expansion is combined in the thesis with three radiation pattern synthesis techniques. First, a beam-shaping synthesis is proposed, based on the phase retrieval method and solved iteratively with convex methods, in which relaxing the constraints of the original problem yields near-optimal solutions efficiently. Two further synthesis methods are proposed, in which the optimization variables are the element positions and the element rotations, respectively. A cost function based on the radiation intensity is defined and minimized iteratively with the gradient method; both methods reduce the side lobe level by minimizing this cost function. The gradient of the cost function is obtained in terms of the optimization variable of each method. The cost function is formed by the rigorous expression of the radiation intensity and by a user-defined weighting function that imposes priorities on the different radiation regions, if so desired. Finally, a method is presented in which integer programming techniques are used to find the discrete phases that generate a radiation pattern as close as possible to the desired one; this yields designs that minimize the fabrication cost. For each of the techniques proposed in the thesis, results with real elements are presented that show the capabilities the methods offer. The results are compared with other methods available in the literature. The importance of taking the real element patterns and the mutual coupling into account in the synthesis process is shown, and the results obtained are compared with commercial software tools.
ABSTRACT: The main objective of this thesis is the development of optimization methods for the radiation pattern synthesis of array antennas in which a rigorous electromagnetic characterization of the radiators and the mutual coupling between them is performed.
The electromagnetic characterization is usually overlooked in most of the synthesis methods available in the literature, mainly for two reasons. On the one hand, it is argued that the radiation pattern of an array is mainly determined by the array factor and that the mutual coupling plays a minor role. As shown in this thesis, the mutual coupling and the rigorous characterization of the array antenna significantly influence the array performance, and taking them into account leads to differences in the results obtained. On the other hand, it is difficult to introduce an analysis procedure into a synthesis technique. The analysis of array antennas is generally computationally expensive, as the structure to analyze is large in terms of wavelengths. A synthesis method requires a large number of analyses, which makes the synthesis problem computationally very expensive or, in some cases, intractable. Two methods have been used in this thesis for the analysis of coupled antenna arrays, both of them developed in the research group in which this thesis was carried out. They are based on the finite element method (FEM), domain decomposition and modal analysis. The first one obtains a finite array characterization from the results of the infinite array approach; it is especially suited to the analysis of large arrays with equispaced elements. The second one characterizes the array elements and the mutual coupling between them with a spherical wave expansion of the field radiated by each element. The mutual coupling is computed using the translation and rotation properties of spherical waves. This method is able to analyze arrays with elements placed in an arbitrary distribution. Both techniques provide a matrix formulation that makes them very suitable for integration into synthesis techniques, so the results obtained from these synthesis methods are very accurate. Array synthesis is the modification of one or several array parameters in order to meet desired specifications of the radiation pattern. The array parameters used as optimization variables are usually the excitation weights applied to the array elements, but other array characteristics can be used as well, such as the element positions or rotations. The desired specifications may be to steer the beam towards a specific direction or to generate shaped beams with arbitrary geometry. Further characteristics can be handled as well, such as minimizing the side lobe level in other radiating regions, minimizing the ripple of the shaped beam, controlling the cross-polar component or imposing nulls on the radiation pattern to avoid possible interference from specific directions. The analysis method based on the infinite array approach considers an infinite array with a finite number of excited elements. The infinite non-excited elements are physically present and may have three different terminations: short-circuited, open-circuited or matched. Each of these terminations better simulates a particular real environment of the array. This method is used in this thesis for the development of two synthesis methods. In the first one, a multi-objective radiation pattern synthesis is presented, in which it is possible to steer the beam or beams in desired directions while minimizing the side lobe level, with the possibility of imposing nulls on the radiation pattern.
This method is very efficient and obtains optimal solutions, as it is based on convex programming. The same analysis method is used in a shaped-beam technique in which an originally non-convex problem is transformed into a convex one by applying symmetry restrictions, thus solving a complex problem efficiently. This method allows the synthesis of shaped-beam radiation patterns while controlling the ripple in the main lobe and the side lobe level. The analysis method based on the spherical wave expansion is applied to different synthesis techniques for the radiation pattern of coupled arrays. A shaped-beam synthesis is presented, in which a convex formulation based on the phase retrieval method is proposed. In this technique, an originally non-convex problem is solved by relaxing it and solving convex problems iteratively. Two methods are proposed based on the gradient method. A cost function is defined involving the radiation intensity of the coupled array and a weighting function that provides more degrees of freedom to the designer. The gradient of the cost function is computed with respect to the element positions in one of them and with respect to the element rotations in the other. The elements are moved or rotated iteratively following the gradient. A highly non-convex problem is solved very efficiently, obtaining very good results that depend on the starting point. Finally, an optimization method is presented in which discrete digital phases are synthesized to provide a radiation pattern as close as possible to the desired one. The problem is solved using integer linear programming, obtaining array designs that greatly reduce fabrication costs. Results are provided for every method, showing the capabilities that the above-mentioned methods offer. The results obtained are compared with available methods in the literature. The importance of introducing a rigorous analysis into the synthesis method is emphasized and the results obtained are compared with commercial software, showing good agreement.
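As a point of reference for the array-factor approximation that the thesis improves upon, the following sketch computes the normalized array factor of a uniform linear array and steers its main beam by applying a progressive phase to the excitations. It is a minimal illustration only: it ignores the real element patterns and the mutual coupling that the thesis characterizes rigorously, and the element count, spacing and steering angle are arbitrary.

```python
# A minimal array-factor sketch: beam steering of a uniform linear array by a
# progressive phase. This is the approximation discussed above; it ignores the
# real element patterns and the mutual coupling treated rigorously in the thesis.
import numpy as np

def steered_array_factor(n_elem=16, d_over_lambda=0.5, steer_deg=20.0):
    kd = 2 * np.pi * d_over_lambda                     # electrical spacing
    theta = np.radians(np.linspace(-90.0, 90.0, 721))  # observation angles
    n = np.arange(n_elem)
    # Progressive phase that points the main beam towards steer_deg.
    weights = np.exp(-1j * n * kd * np.sin(np.radians(steer_deg)))
    af = np.exp(1j * np.outer(np.sin(theta), n) * kd) @ weights
    af_db = 20 * np.log10(np.abs(af) / np.abs(af).max())
    return np.degrees(theta), af_db
```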

Relevance:

100.00%

Publisher:

Abstract:

The analysis of heart rate variability (HRV) uses time series containing the intervals between successive heartbeats in order to assess autonomic regulation of the cardiovascular system. These series are obtained from the analysis of the electrocardiogram (ECG) signal, which can be affected by different types of artifacts, leading to incorrect interpretations of the HRV signals. The classic approach to dealing with these artifacts is to apply correction methods, some of them based on interpolation, substitution or statistical techniques. However, there are few studies that show the accuracy and performance of these correction methods on real HRV signals. This study aims to determine the performance of some linear and non-linear correction methods on HRV signals with induced artifacts, by quantifying their linear and nonlinear HRV parameters. As part of the methodology, ECG signals of rats recorded by telemetry were used to generate real heart rate variability signals free of errors. Missing points (beats) were then simulated in these series in different quantities in order to emulate a real experimental situation as accurately as possible. In order to compare recovery efficiency, deletion (DEL), linear interpolation (LI), cubic spline interpolation (CI), moving average window (MAW) and nonlinear predictive interpolation (NPI) were used as correction methods for the series with induced artifacts. The accuracy of each correction method was assessed by measuring the mean value of the series (AVNN), the standard deviation (SDNN), the root mean square of the differences between successive heartbeats (RMSSD), the Lomb periodogram (LSP), Detrended Fluctuation Analysis (DFA), multiscale entropy (MSE) and symbolic dynamics (SD) on each HRV signal with and without artifacts. The results show that, at low levels of missing points, the performance of all correction techniques is very similar, with very close values for each HRV parameter. However, at higher levels of loss only the NPI method yields HRV parameters with low error values and few significant differences compared with the values calculated for the same signals without missing points.
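As a concrete illustration of the simpler correction methods mentioned (linear and cubic-spline interpolation; the nonlinear predictive interpolation is not reproduced here), the sketch below simulates missing beats in an RR-interval series, fills them in, and recomputes two time-domain HRV parameters. Data and parameter names are hypothetical.

```python
# A minimal sketch: simulate missing beats in an RR-interval series, correct
# them by linear or cubic-spline interpolation, and recompute SDNN and RMSSD.
import numpy as np
from scipy.interpolate import CubicSpline

def drop_beats(rr, fraction=0.1, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(rr), size=int(fraction * len(rr)), replace=False)
    corrupted = rr.astype(float).copy()
    corrupted[idx] = np.nan                  # mark the missing beats
    return corrupted

def fill_missing(rr, kind="linear"):
    t = np.arange(len(rr))
    good = ~np.isnan(rr)
    if kind == "cubic":
        return CubicSpline(t[good], rr[good])(t)
    return np.interp(t, t[good], rr[good])

def sdnn(rr):
    return np.std(rr, ddof=1)                # standard deviation of NN intervals

def rmssd(rr):
    return np.sqrt(np.mean(np.diff(rr) ** 2))
```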

Relevance:

100.00%

Publisher:

Abstract:

The choice of sampling methods to survey saproxylic beetles is a key aspect of assessing conservation strategies for one of the most endangered assemblages in Europe. We evaluated the efficiency of three sampling methods: baited tube traps (TT), window traps in front of a hollow opening (WT), and emergence traps covering tree hollows (ET) to study the richness and diversity of saproxylic beetle assemblages at species and family levels in Mediterranean woodlands. We also examined trap efficiency in reporting ecological diversity and changes in the relative richness and abundance of the species forming trophic guilds: xylophagous, saprophagous/saproxylophagous, xylomycetophagous, predators and commensals. WT and ET were similarly effective in reporting species richness and diversity at species and family levels, and provided an accurate profile of both the flying active and hollow-linked saproxylic beetle assemblages. WT and ET were the most complementary methods, together reporting more than 90% of richness and diversity at both species and family levels. Diversity, richness and abundance of guilds were better characterized by ET, which indicates higher efficiency in outlining the ecological community of saproxylics that inhabit tree hollows. TT were the least effective method at both taxonomic levels, sampling a biased portion of the beetle assemblage attracted to the trapping principle; however, they could be used as a specific method for families such as Bostrichiidae, Biphyllidae, Melyridae, Mycetophagidae or Curculionidae Scolytinae species. Finally, the combination of ET and WT allows a better characterization of saproxylic assemblages in Mediterranean woodland by recording species with different biologies linked to different microhabitat types.
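For readers unfamiliar with the richness and diversity measures compared above, the short sketch below computes species richness and Shannon diversity from a vector of abundance counts (hypothetical data; this is not the study's analysis code).

```python
# Species richness and Shannon diversity from abundance counts per trap type.
import numpy as np

def richness(counts):
    counts = np.asarray(counts, dtype=float)
    return int(np.count_nonzero(counts))

def shannon_diversity(counts):
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

# Hypothetical abundances of five species caught by one trap type:
print(richness([12, 3, 0, 7, 1]), shannon_diversity([12, 3, 0, 7, 1]))
```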

Relevance:

100.00%

Publisher:

Abstract:

Purpose: In this paper the authors aim to show the advantages of using the decomposition method introduced by Adomian to solve Emden's equation, a classical non-linear equation that appears in the study of the thermal behaviour of a spherical cloud and of the gravitational potential of a polytropic fluid at hydrostatic equilibrium. Design/methodology/approach: The authors first review Emden's equation and its possible solutions using the Frobenius and power series methods; then, Adomian polynomials are introduced. Afterwards, Emden's equation is solved using Adomian's decomposition method and, finally, the solution given by Adomian's method is compared with the solutions obtained by the other methods for the cases in which the exact solution is known. Findings: Solving Emden's equation for n in the interval [0, 5] is very interesting for several scientific applications, such as astronomy. However, the exact solution is known only for n=0, n=1 and n=5. The experiments show that Adomian's method achieves an approximate solution that overlaps with the exact solution for n=0 and coincides with the Taylor expansion of the exact solutions for n=1 and n=5. As a result, the authors obtained quite satisfactory results from their proposal. Originality/value: The main classical methods for obtaining approximate solutions of Emden's equation have serious computational drawbacks. The authors present a new, efficient numerical implementation for solving this equation, iteratively constructing the Adomian polynomials, which leads to a solution of Emden's equation that extends the range of variation of the parameter n compared with the solutions given by both the Frobenius and the power series methods.
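A small symbolic sketch of the approach is given below, assuming the standard Lane-Emden form y'' + (2/x)y' + y^n = 0 with y(0)=1, y'(0)=0 (this is not the authors' implementation). The Adomian polynomials for the nonlinearity y^n are built by differentiating with respect to a grouping parameter, and each series term follows from the double integral that inverts the linear operator.

```python
# Adomian decomposition for the Lane-Emden equation (a sketch).
# y_{k+1} = -L^{-1}[A_k], with L^{-1} f = int_0^x x^{-2} int_0^x x^2 f dx dx.
import sympy as sp

x, lam, t, u = sp.symbols('x lambda t u', positive=True)

def adomian_lane_emden(n, terms=5):
    ys = [sp.Integer(1)]                                   # y0 from y(0)=1, y'(0)=0
    for k in range(terms - 1):
        partial = sum(lam**i * ys[i] for i in range(k + 1))
        # Adomian polynomial A_k for the nonlinearity N(y) = y**n
        A_k = sp.diff(partial**n, lam, k).subs(lam, 0) / sp.factorial(k)
        inner = sp.integrate(t**2 * A_k.subs(x, t), (t, 0, u))   # int x^2 A_k dx
        ys.append(sp.expand(-sp.integrate(inner / u**2, (u, 0, x))))
    return sp.expand(sum(ys))

print(adomian_lane_emden(0))   # 1 - x**2/6: matches the exact solution for n=0
print(adomian_lane_emden(1))   # Taylor expansion of sin(x)/x
print(adomian_lane_emden(5))   # Taylor expansion of (1 + x**2/3)**(-1/2)
```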

Relevance:

100.00%

Publisher:

Abstract:

Surface Renewal Theory (SRT) is one of the less familiar models for characterizing fluid-fluid and fluid-fluid-solid reactions, which are of considerable industrial and academic importance. In the present work, an approach to solving the SRT model with numerical methods is presented, enabling visualization of the influence of the different variables that control the overall heterogeneous process. Its use in the classroom allowed the students to reach a thorough understanding of the process.
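As a point of reference for the kind of dependence the numerical resolution makes visible, the sketch below evaluates Danckwerts' classical surface renewal result, in which the liquid-side mass transfer coefficient is k_L = sqrt(D*s); it is an illustration only and not necessarily the exact model formulation solved in the paper.

```python
# Danckwerts' surface renewal result (illustrative): k_L = sqrt(D * s) and the
# average absorption flux for a range of surface renewal rates s.
import numpy as np

D = 2.0e-9           # gas diffusivity in the liquid, m^2/s (assumed value)
c_interface = 1.0    # interfacial concentration, mol/m^3 (assumed value)
c_bulk = 0.0         # bulk liquid concentration, mol/m^3 (assumed value)

for s in (0.1, 1.0, 10.0, 100.0):          # surface renewal rate, 1/s
    k_l = np.sqrt(D * s)                   # mass transfer coefficient, m/s
    flux = k_l * (c_interface - c_bulk)    # average absorption flux, mol/(m^2 s)
    print(f"s = {s:6.1f} 1/s -> k_L = {k_l:.2e} m/s, flux = {flux:.2e} mol/(m^2 s)")
```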

Relevance:

100.00%

Publisher:

Abstract:

"Supersedes Research paper 768 [by W.F. Roeser and H.T. Wensel]."

Relevance:

100.00%

Publisher:

Abstract:

In 7 divisions, 1 and 7 originally issued July 18, 1930, and Nov. 26, 1929; each division with caption title: Symposium of test methods on coarse aggregates.

Relevance:

100.00%

Publisher:

Abstract:

This paper gives a review of recent progress in the design of numerical methods for computing the trajectories (sample paths) of solutions to stochastic differential equations. We give a brief survey of the area, concentrating on a number of application areas where approximations to strong solutions are important, with a particular focus on computational biology, and we provide the analytical tools needed to understand some of the important concepts associated with stochastic processes. We present the stochastic Taylor series expansion as the fundamental mechanism for constructing effective numerical methods, give general results that relate local and global order of convergence, and mention the Magnus expansion as a mechanism for designing methods that preserve the underlying structure of the problem. We also present various classes of explicit and implicit methods for strong solutions, based on the underlying structure of the problem. Finally, we discuss implementation issues relating to maintaining the Brownian path, efficient simulation of stochastic integrals, and variable-step-size implementations based on various types of control.
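The simplest scheme obtained from the stochastic Taylor expansion mentioned above is the Euler-Maruyama method (strong order 0.5). The sketch below applies it to geometric Brownian motion as a generic example; the equation and parameters are illustrative, not taken from the paper.

```python
# Euler-Maruyama for dX = mu*X dt + sigma*X dW (geometric Brownian motion).
import numpy as np

def euler_maruyama(x0=1.0, mu=0.05, sigma=0.2, T=1.0, n_steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    # Keep the Brownian increments so the same path can be reused when
    # comparing against a finer or higher-order strong scheme.
    dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)
    for k in range(n_steps):
        x[k + 1] = x[k] + mu * x[k] * dt + sigma * x[k] * dW[k]
    return x, dW
```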

Relevance:

100.00%

Publisher:

Abstract:

Folates and their derivatives occur as polyglutamates in nature. The multiplicity of forms and the generally low levels in foods make quantitative analysis of folate a difficult task. The assay of folates from foods generally involves three steps: liberation of folates from the cellular matrix; deconjugation from the polyglutamate to the mono- and di-glutamate forms; and detection of the biological activity or chemical concentration of the resulting folates. The detection methods used are the microbiological assay, which relies on the turbidimetric measurement of the growth of Lactobacillus rhamnosus and is by far the most commonly used method; HPLC and LC/MS techniques; and bio-specific procedures. This review describes these methods along with the merits and demerits of each.

Relevance:

100.00%

Publisher:

Abstract:

Non-tree-based ('surrogate') methods have been used to identify instances of lateral genetic transfer in microbial genomes, but agreement among the predictions of different methods can be poor. It has been proposed that this disagreement arises because different surrogate methods are biased towards the detection of certain types of transfer events. This conjecture is supported by a rigorous phylogenetic analysis of 3776 proteins in Escherichia coli K12 MG1655, which maps the ages of transfer events relative to one another.

Relevance:

100.00%

Publisher:

Abstract:

We have developed a sensitive, non-radioactive method to assess the interaction of transcription factors/DNA-binding proteins with DNA. We have modified the traditional radiolabeled DNA gel mobility shift assay to incorporate a DNA probe end-labeled with a Texas Red fluorophore and a DNA-binding protein tagged with the green fluorescent protein, allowing DNA-protein complexation to be monitored precisely by native gel electrophoresis. We have applied this method to the DNA-binding proteins telomere release factor-1 and the sex-determining region-Y, demonstrating that the method is sensitive (able to detect 100 fmol of fluorescently labeled DNA), permits direct visualization of both the DNA probe and the DNA-binding protein, and enables quantitative analysis of DNA-protein complexation and thereby an estimation of the stoichiometry of protein-DNA binding.