963 results for Multiresolution shape analysis
Abstract:
Detailed information about sediment properties and microstructure can be obtained by analyzing digital ultrasonic P-wave seismograms recorded automatically during full-waveform core logging. The physical parameter that predominantly affects elastic wave propagation in water-saturated sediments is the P-wave attenuation coefficient; the related sedimentological parameter is the grain size distribution. A set of high-resolution ultrasonic transmission seismograms (~50-500 kHz) is presented whose signal shape and frequency content indicate downcore variations in grain size. Layers of coarse-grained foraminiferal ooze can be identified by highly attenuated P waves, whereas almost unattenuated waves are recorded in fine-grained intervals of nannofossil ooze. Color-encoded pixel graphics of the seismograms and instantaneous frequencies provide full-waveform images of the lithology and attenuation. A modified spectral difference method is introduced to determine the attenuation coefficient and its power law α = k·fⁿ. Applied to synthetic seismograms derived using a "constant Q" model, even low attenuation coefficients can be quantified. A downcore analysis yields an attenuation log ranging from ~700 dB/m at 400 kHz with a power of n = 1-2 in coarse-grained sands to a few decibels per meter and n ≤ 0.5 in fine-grained clays. A least-squares fit of a second-degree polynomial describes the mutual relationship between mean grain size and the attenuation coefficient. When it is used to predict the mean grain size, almost perfect agreement with the values derived from sedimentological measurements is achieved.
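The spectral-difference estimate and the power-law fit lend themselves to a compact numerical sketch. The following is an illustrative reconstruction, not the authors' code: it estimates α(f) from the log spectral ratio of two transmission seismograms separated by a known travel distance, and fits α = k·fⁿ by least squares in log-log coordinates.

```python
import numpy as np

def attenuation_spectrum(ref, att, dx, fs):
    """Attenuation coefficient alpha(f) in dB/m from the amplitude
    spectra of a reference and an attenuated seismogram separated by
    travel distance dx (m); fs is the sampling rate (Hz)."""
    f = np.fft.rfftfreq(len(ref), d=1.0 / fs)
    a_ref = np.abs(np.fft.rfft(ref))
    a_att = np.abs(np.fft.rfft(att))
    return f, 20.0 / dx * np.log10(a_ref / a_att)

def fit_power_law(f, alpha):
    """Least-squares fit of alpha = k * f**n in log-log coordinates."""
    mask = (f > 0) & (alpha > 0)
    n, log_k = np.polyfit(np.log(f[mask]), np.log(alpha[mask]), 1)
    return np.exp(log_k), n
```

Applied to a synthetic pair built from a known α(f), the fit recovers k and n essentially exactly, which mirrors the paper's validation on "constant Q" synthetics.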
Abstract:
This thesis contributes to the analysis and design of printed reflectarray antennas. The main part of the work focuses on the analysis of dual-offset antennas comprising two reflectarray surfaces, one acting as sub-reflector and the other as main reflector. These configurations introduce additional complexity in several respects compared to conventional dual-offset reflectors, but they offer many degrees of freedom that can be used to improve the electrical performance of the antenna. The thesis is organized in four parts: the development of an analysis technique for dual-reflectarray antennas; a preliminary validation of that methodology using equivalent reflector systems as reference antennas; a more rigorous validation of the software tool by manufacturing and testing a dual-reflectarray antenna demonstrator; and the practical design of dual-reflectarray systems for applications that show the potential of this kind of configuration to scan the beam and to generate contoured beams. In the first part, a general tool has been implemented to analyze high-gain antennas built from two flat reflectarray structures. The classic reflectarray analysis, based on the Method of Moments (MoM) under the local periodicity assumption, is used for both the sub- and main reflectarrays, taking into account the incident angle on each reflectarray element. The incident field on the main reflectarray is computed from the field radiated by all the elements on the sub-reflectarray. Two approaches have been developed: one employs a simple approximation to reduce the computational run time, while the other does not, but in many cases offers improved accuracy. The approximation consists of computing the reflected field on each main-reflectarray element only once for all the fields radiated by the sub-reflectarray elements, assuming the response will be the same because the only difference is a small variation in the angle of incidence.
This approximation is very accurate when the elements on the main reflectarray show relatively small sensitivity to the angle of incidence. An extension of the analysis technique has been implemented to study dual-reflectarray antennas whose main reflectarray is printed on a parabolic or, in general, curved surface. In many dual-reflectarray configurations, the reflectarray elements are in the near field of the feed horn. To account for the near field radiated by the horn, the incident field on each reflectarray element is computed using a spherical mode expansion. In this region the angles of incidence are moderately wide, and they are considered in the analysis of the reflectarray to better calculate the actual incident field on the sub-reflectarray elements. This technique improves the accuracy of the predicted co- and cross-polar patterns and antenna gain compared to ideal feed models. In the second part, as a preliminary validation, the proposed analysis method has been used to design a dual-reflectarray antenna that emulates previous dual-reflector antennas in Ku- and W-bands, including a reflectarray as sub-reflector. The results for the dual-reflectarray antenna compare very well with those of the parabolic reflector with reflectarray sub-reflector: radiation patterns, antenna gain, and efficiency are practically the same when the main parabolic reflector is replaced by a flat reflectarray. The results show that the gain is reduced by only a few tenths of a dB as a result of ohmic losses in the reflectarray. The phase adjustment on two surfaces provided by the dual-reflectarray configuration can be used to improve antenna performance in applications requiring multiple beams, beam scanning, or shaped beams. Third, a very challenging dual-reflectarray antenna demonstrator has been designed, manufactured, and tested for a more rigorous validation of the presented analysis technique.
In the proposed antenna configuration the feed, the sub-reflectarray, and the main reflectarray are in each other's near field, so conventional far-field approximations are not suitable for its analysis. This geometry is used as a benchmark for the proposed analysis tool under very stringent conditions. Some aspects of the analysis technique that improve its accuracy are also discussed. These improvements include a novel method to reduce the cross-polarization that is introduced mainly by grounded patch arrays. It has been verified that cross-polarization in offset reflectarrays can be significantly reduced by properly adjusting the patch dimensions so as to produce an overall cancellation of the cross-polar field. The patch dimensions are adjusted not only to provide the phase distribution required to shape the beam, but also to exploit the zero crossings of the cross-polarization components. The last part of the thesis deals with direct applications of the described technique. It is directly applicable to the design of contoured-beam antennas for DBS applications, where the cross-polarization requirements are very stringent. The beam shaping is achieved by synthesizing the phase distribution on the main reflectarray, while the sub-reflectarray emulates an equivalent hyperbolic sub-reflector. Dual-reflectarray antennas can also scan the beam over small angles about boresight. Two possible architectures for a Ku-band antenna are described, based on a dual planar reflectarray configuration that provides electronic beam scanning over a limited angular range. In the first architecture, beam scanning is achieved by introducing phase control in the elements of the sub-reflectarray while the main reflectarray is passive.
In a second alternative, beam scanning is produced using 1-bit phase control on the main reflectarray, while a passive sub-reflectarray is designed to provide a large focal distance within a compact configuration. The system aims to provide a solution for bi-directional satellite links for emergency communications. In both architectures, the objective is compact optics that can easily be folded and deployed.
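As a flavour of the phase synthesis involved, the sketch below computes the classic phase distribution a single flat reflectarray must introduce to collimate a feed's spherical wave into a pencil beam. This is a textbook relation, not the thesis's dual-reflectarray MoM tool, and the geometry and frequency in the test are arbitrary.

```python
import numpy as np

def required_phase(xy, feed, theta0, phi0, freq):
    """Phase shift (rad, wrapped to [0, 2*pi)) that each element of a
    flat reflectarray in the z = 0 plane must introduce so that the
    spherical wave from the feed leaves as a pencil beam in the
    direction (theta0, phi0)."""
    c0 = 299792458.0
    k0 = 2.0 * np.pi * freq / c0
    xy = np.asarray(xy, float)
    pts = np.column_stack([xy, np.zeros(len(xy))])   # element positions
    d = np.linalg.norm(pts - np.asarray(feed, float), axis=1)  # feed paths
    u = np.sin(theta0) * np.cos(phi0)
    v = np.sin(theta0) * np.sin(phi0)
    return np.mod(k0 * (d - xy[:, 0] * u - xy[:, 1] * v), 2.0 * np.pi)
```

For a broadside beam with a centered feed, the required phase depends only on the element's distance from the array center, as expected by symmetry.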
Abstract:
Zernike polynomials are a well-known set of functions that find many applications in image and pattern characterization because they allow the construction of shape descriptors that are invariant under translation, rotation, and scale changes. The concepts behind them extend to higher-dimensional spaces, making them suitable for describing volumetric data as well. They have been used less than their properties might suggest because of their high computational cost. We present a parallel implementation of 3D Zernike moment analysis, written in C with CUDA extensions, which makes it practical to employ Zernike descriptors in interactive applications, achieving several frames per second on voxel datasets of about 200³ in size. In our contribution, we describe the challenges of implementing 3D Zernike analysis on a general-purpose GPU, including how to deal with numerical inaccuracies arising from the algorithm's high precision demands, and how to handle the large volume of input data so that it does not become a bottleneck for the system.
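The rotation invariance that motivates Zernike-style descriptors can be illustrated in 2D with plain complex moments: a rotation by θ multiplies cₙ by e^{inθ}, so the magnitudes |cₙ| are unchanged. The sketch below shows only that principle; it is not the paper's CUDA implementation of 3D Zernike moments.

```python
import numpy as np

def complex_moment_magnitudes(img, max_order=6):
    """Magnitudes of the complex moments c_n = sum f(x, y) * (x + iy)**n
    over image-centered coordinates; rotating the image by theta
    multiplies c_n by exp(i*n*theta), leaving |c_n| unchanged."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    z = (x - (w - 1) / 2.0) + 1j * (y - (h - 1) / 2.0)
    return np.array([abs(np.sum(img * z ** n))
                     for n in range(1, max_order + 1)])
```

A 90-degree rotation maps a square pixel grid onto itself, so the invariance can be checked exactly on a discrete image.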
Abstract:
Direct-drive inertial confinement fusion consists of illuminating a shell of a cryogenic deuterium-tritium (DT) mixture with many intense laser beams. The capsule is composed of DT gas surrounded by cryogenic DT as fuel. Basic rules are used to define the shell geometry from the aspect ratio, fuel mass, and layer densities. The aspect ratio is defined as the ratio of the DT-ice shell inner radius to the DT shell thickness. We define baseline designs using two aspect ratios (A = 3 and A = 5) that complement the HiPER baseline design (A = 7.7). A low aspect ratio improves the hydrodynamic stability of the imploding shell. The laser pulse shape and ablator thickness are initially defined using the Lindl (1995) ablation-pressure and mass-ablation formulae for direct drive with a CH ablator layer. The in-flight adiabat parameter is close to one during the implosion. The chosen implosion velocities lie between 260 km/s and 365 km/s. More than a thousand calculations were performed for each aspect ratio in order to optimize the laser pulse shape, using the one-dimensional version of the Lagrangian radiation-hydrodynamics code FCI2. We choose implosion velocities for each initial aspect ratio and compute scaled-target family curves for each one to find the self-ignition threshold. We then pick points on each curve that potentially produce high thermonuclear gain and compute shock ignition in the context of the Laser MegaJoule. This systematic analysis reveals many working points which complement previous studies, highlighting baseline designs, in terms of laser intensity and energy, fuel mass, and initial aspect ratio, that are relevant for the Laser MegaJoule.
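The "basic rules" for the shell geometry reduce to one algebraic relation: with A = R_in/ΔR, the ice mass m = (4π/3)ρ[(R_in + ΔR)³ − R_in³] can be inverted for R_in in closed form. A minimal sketch; the DT-ice density and mass used in the test are typical illustrative values, not taken from the paper.

```python
import numpy as np

def shell_geometry(fuel_mass, rho_ice, aspect):
    """Inner radius and thickness (m) of a DT-ice shell with the given
    mass (kg), ice density (kg/m^3) and aspect ratio A = R_in / dR."""
    g = (1.0 + 1.0 / aspect) ** 3 - 1.0
    r_in = (3.0 * fuel_mass / (4.0 * np.pi * rho_ice * g)) ** (1.0 / 3.0)
    return r_in, r_in / aspect
```

Recomputing the mass from the returned geometry closes the loop and checks the inversion.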
Abstract:
We propose a level-set-based variational approach that incorporates shape priors into edge-based and region-based models. The evolution of the active contour depends on both local and global information, and has been implemented using an efficient narrow-band technique. For each boundary pixel we calculate its dynamics according to its gray level, its neighborhood, and geometric properties established by training shapes. We also propose a criterion for shape alignment based on affine transformation using an image normalization procedure. Finally, we illustrate the benefits of our approach on liver segmentation from CT images.
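A minimal level-set evolution, for illustration only: a full-grid explicit update with a constant normal speed, rather than the narrow-band scheme with image-driven and shape-prior terms used in the paper.

```python
import numpy as np

def evolve(phi, speed, h, dt, steps):
    """Explicit update of phi_t + speed * |grad phi| = 0 on the whole
    grid (central differences, adequate for a smooth signed distance)."""
    for _ in range(steps):
        gy, gx = np.gradient(phi, h)
        phi = phi - dt * speed * np.sqrt(gx ** 2 + gy ** 2)
    return phi

# Signed distance to a circle of radius 0.3 on [-1, 1]^2; a constant
# outward normal speed of 1 applied for t = 0.1 should grow the
# zero level set to radius ~0.4.
h = 2.0 / 200
y, x = np.mgrid[-1:1:201j, -1:1:201j]
phi = np.sqrt(x ** 2 + y ** 2) - 0.3
phi = evolve(phi, 1.0, h, 0.002, 50)
area = np.sum(phi < 0) * h * h
```

Measuring the enclosed area recovers the expected radius, a standard sanity check for level-set implementations.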
Abstract:
Quasi-monocrystalline silicon wafers have emerged as a critical innovation in the PV industry, combining the most favourable characteristics of the conventional substrates: the higher solar cell efficiencies of monocrystalline Czochralski-Si (Cz-Si) wafers and the lower cost and full square shape of multicrystalline ones. However, quasi-mono ingot growth can lead to a defect structure different from that of the typical Cz-Si process. Thus, the mechanical properties of the brand-new quasi-mono wafers have been studied for the first time, comparing their strength with that of both mono Cz-Si and typical multicrystalline materials. The study was carried out using the four-line bending test and simulating it by means of FE models. For the analysis, failure stresses were fitted to a three-parameter Weibull distribution. High mechanical strength was found in all cases. Interestingly, the low-quality quasi-mono wafers did not exhibit strength values critical for the PV industry, despite their noticeable density of extended defects.
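The Weibull analysis can be sketched as follows. This is a two-parameter fit via the standard probability-plot linearization, i.e. a simplification of the three-parameter fit used in the paper (the threshold stress is taken as zero), and the sample data in the test are synthetic.

```python
import numpy as np

def weibull_fit(stresses):
    """Weibull modulus m and characteristic strength s0 from failure
    stresses, via the probability-plot linearization
    ln(-ln(1 - F)) = m*ln(s) - m*ln(s0), with F_i = (i - 0.5)/N."""
    s = np.sort(np.asarray(stresses, float))
    n = len(s)
    F = (np.arange(1, n + 1) - 0.5) / n        # median-rank style estimator
    x = np.log(s)
    y = np.log(-np.log(1.0 - F))
    m, b = np.polyfit(x, y, 1)
    return m, np.exp(-b / m)
```

On synthetic failure stresses drawn from a known Weibull law, the fit recovers both parameters closely.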
Abstract:
After the experience gained during the past years, it seems clear that nonlinear analysis of bridges is very important to compute ductility demands and to localize potential hinges. This is especially true for irregular bridges, for which it is not clear whether a linear computation followed by a correction using a behaviour factor is admissible. To simplify the numerical effort, several approximate methods have been proposed. Among them, the so-called Dynamic Plastic Hinge Method, in which an evolutionary shape function is used to reduce the structure to a single-degree-of-freedom system, seems to maintain a good balance between accuracy and simplicity. This paper presents results obtained in a parametric study conducted under the auspices of the PREC-8 European research programme.
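The reduction of a structure to a single-degree-of-freedom system through an assumed shape function can be sketched as below. This follows the generic pushover-style reduction, not the evolutionary shape function of the Dynamic Plastic Hinge Method itself; the mass and shape values in the test are arbitrary.

```python
import numpy as np

def sdof_reduction(masses, shape):
    """Equivalent SDOF mass and participation factor for a lumped-mass
    system, using an assumed shape vector normalized to 1 at the
    control (top) degree of freedom."""
    m = np.asarray(masses, float)
    phi = np.asarray(shape, float)
    phi = phi / phi[-1]                           # normalize at control DOF
    m_star = float(np.sum(m * phi))               # equivalent SDOF mass
    gamma = m_star / float(np.sum(m * phi ** 2))  # participation factor
    return m_star, gamma
```

For three equal masses and a linear shape the closed-form values are m* = 2 and Γ = 9/7, which the sketch reproduces.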
Abstract:
The origin of the modified optical properties of InAs/GaAs quantum dots (QDs) capped with a thin GaAs1−xSbx layer is analyzed in terms of the band structure. To do so, the size, shape, and composition of the QDs and capping layer are determined through cross-sectional scanning tunnelling microscopy and used as input parameters in an 8 × 8 k·p model. As the Sb content is increased, two competing effects determine carrier confinement and the oscillator strength: the increased QD height and reduced strain on one side, and the reduced QD-capping-layer valence band offset on the other. Nevertheless, the observed evolution of the photoluminescence (PL) intensity with Sb cannot be explained in terms of the oscillator strength between ground states, which decreases dramatically for Sb > 16%, where the band alignment becomes type II with the hole wavefunction localized outside the QD in the capping layer. Contrary to this behaviour, the PL intensity in the type-II QDs is similar to (at 15 K), or even larger than (at room temperature), that in the type-I Sb-free reference QDs. This indicates that the PL efficiency is dominated by carrier dynamics, which are altered by the presence of the GaAsSb capping layer. In particular, the presence of Sb leads to enhanced PL thermal stability. From the comparison between the activation energies for thermal quenching of the PL and the modelled band structure, the main carrier escape mechanisms are suggested. In standard GaAs-capped QDs, escape of both electrons and holes to the GaAs barrier is the main PL quenching mechanism. For small to moderate Sb contents (<16%), for which the type-I band alignment is kept, electrons escape to the GaAs barrier and holes escape to the GaAsSb capping layer, where redistribution and retrapping processes can take place. For Sb contents above 16% (type-II region), holes remain in the GaAsSb layer and the escape of electrons from the QD to the GaAs barrier is most likely the dominant PL quenching mechanism.
This means that electrons and holes behave dynamically as uncorrelated pairs in both the type-I and type-II structures.
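Activation energies for thermal quenching of the PL are conventionally extracted with an Arrhenius model, I(T) = I₀ / (1 + Σᵢ aᵢ exp(−Eᵢ/k_BT)), one term per escape channel. The sketch below evaluates that standard model; the coefficients and activation energies used in the test are illustrative, not the paper's fitted values.

```python
import numpy as np

KB_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def pl_intensity(T, i0, a, energies):
    """Arrhenius model for thermal quenching of the integrated PL
    intensity: one thermally activated escape channel per activation
    energy (in eV), with prefactors a_i."""
    T = np.asarray(T, float)
    q = sum(ai * np.exp(-ei / (KB_EV * T)) for ai, ei in zip(a, energies))
    return i0 / (1.0 + q)
```

The model is monotonically quenched with temperature and recovers the low-temperature intensity I₀, which is what makes the extracted Eᵢ comparable with band-structure energy differences.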
Abstract:
A function based on the characteristics of the alpha-particle lines obtained with silicon semiconductor detectors, modified by using cubic splines, is proposed to parametrize the shape of the peaks. A reduction in the number of parameters initially considered in other proposals was carried out in order to improve the stability of the optimization process; it was imposed through the boundary conditions for the cubic-splines term. This function was then able to describe peaks with highly anomalous shapes with respect to those expected from this type of detector. Some criteria were implemented to correctly determine the area of the peaks and their errors. Comparisons with other well-established functions revealed excellent agreement in the final values obtained from both fits. Detailed studies on the reliability of the fitting results were carried out and the application of the function is proposed. Although the aim was to correct anomalies in peak shapes, peaks showing the expected shapes were also well fitted. Accordingly, the validity of the proposal is quite general in the analysis of alpha-particle spectrometry with semiconductor detectors.
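A common analytic reference shape for alpha peaks, though not the specific spline-modified function proposed in the paper, is a Gaussian convolved with a low-energy exponential tail. The sketch below evaluates that shape, normalized so the peak area equals the `area` parameter; the energy scale and widths in the test are illustrative.

```python
import numpy as np
from math import erfc, sqrt

def alpha_peak(x, area, mu, sigma, tau):
    """Gaussian of width sigma convolved with a low-energy exponential
    tail of constant tau (left-tailed EMG); integrates over the full
    axis to `area`."""
    y = np.empty(len(x))
    for i, xi in enumerate(x):
        z = (xi - mu) / (sqrt(2.0) * sigma) + sigma / (sqrt(2.0) * tau)
        y[i] = (area / (2.0 * tau)) * np.exp(
            (xi - mu) / tau + sigma ** 2 / (2.0 * tau ** 2)) * erfc(z)
    return y
```

Because the shape is an exact convolution of normalized factors, its numerical integral reproduces the area parameter and its mean sits at μ − τ, which is how peak areas and centroids are checked in spectrometry fits.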
Abstract:
This thesis presents a theoretical analysis of the operation of magnetic nozzles for plasma space propulsion. The study is based on a two-dimensional, two-fluid model of the supersonic expansion of a hot plasma in a divergent magnetic field. The basic model is extended progressively to include the dominant electron convective terms, the plasma-induced magnetic field, multi-temperature electron populations, and the capability to integrate the plasma flow in the far expansion region. The hyperbolic plasma response is integrated accurately and efficiently with the method of characteristic lines. The 2D plasma expansion is characterized parametrically in terms of the ion magnetization strength, the magnetic field geometry, and the initial plasma profile. Acceleration mechanisms are investigated, showing that the ambipolar electric field converts the internal electron energy into directed ion energy. The diamagnetic electron Hall current, which can be distributed in the plasma volume or localized in a thin current sheet at the jet edge, is shown to be central to the operation of the magnetic nozzle: the repelling magnetic force on this current is responsible for the radial confinement and axial acceleration of the plasma, and the magnetic thrust is the reaction to this force on the magnetic coils of the thruster.
The plasma response exhibits a gradual inward separation of the ion streamtubes from the magnetic streamtubes, which focuses the jet about the nozzle axis, gives rise to the formation of longitudinal currents, and sets the plasma into rotation. The thrust gain obtained in the magnetic nozzle and the radial plasma losses are evaluated as a function of the design parameters. The downstream plasma detachment from the closed magnetic field lines, required for the propulsive application of the magnetic nozzle, is investigated in detail. Three prevailing detachment theories for magnetic nozzles, relying on plasma resistivity, electron inertia, and the plasma-induced magnetic field, are shown to be inadequate for the propulsive magnetic nozzle, as these mechanisms detach the plume outward, increasing its divergence, rather than focusing it as desired. Instead, plasma detachment is shown to occur essentially due to ion inertia and the gradual demagnetization that takes place downstream, which enable the unbounded inward ion separation from the magnetic lines beyond the turning point of the outermost plasma streamline under rather general conditions. The plasma fraction that remains attached to the field and turns back toward the thruster along the magnetic lines is evaluated and shown to be marginal. The plasma-induced magnetic field is shown to increase the divergence of the nozzle and the resulting plasma plume in the propulsive case, and to enhance the demagnetization of the central part of the plasma jet, contrary to existing predictions. The increased demagnetization favors the earlier inward ion separation from the magnetic field. The local current ambipolarity assumption, common to many existing magnetic nozzle models, is critically discussed, showing that it is unsuitable for the study of plasma detachment. A serious mathematical inconsistency in a well-accepted model, related to the acceptance of this assumption, is identified and discussed.
The formation and 2D shape of electric double layers in the plasma expansion are studied with the inclusion of an additional suprathermal electron population in the model. When a double layer forms, its curvature is shown to increase the more peripherally the suprathermal electrons are injected, the lower the magnetic field strength, and the more divergent the magnetic nozzle. The two-electron-temperature plasma is seen to have a greater magnetic-to-total thrust ratio. Notwithstanding, no propulsive advantage of the double layer is found, supporting and reinforcing previous critiques of its proposal as a thrust mechanism. Finally, a general framework of self-similar models of the 2D expansion of an unmagnetized plasma plume into vacuum is presented and discussed. The error associated with the self-similarity assumption is calculated and shown to be small for hypersonic plasma plumes. Three models from the literature are recovered as particular cases of the general framework and compared.
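The conversion of electron internal energy into directed ion energy by the ambipolar field can be illustrated with a minimal 1D relation, assuming isothermal Boltzmann electrons and cold ions — a strong simplification of the thesis's 2D two-fluid model. The propellant mass, electron temperature, and density drop used in the test are illustrative.

```python
import numpy as np

def ion_velocity(n_ratio, te_ev, mi_amu, u0):
    """Ion speed after ambipolar acceleration through a density drop
    n/n0: isothermal Boltzmann electrons give a potential fall of
    (Te/e)*ln(n0/n), which the ions pick up as directed energy."""
    e = 1.602176634e-19       # elementary charge, C
    amu = 1.66053906660e-27   # atomic mass unit, kg
    mi = mi_amu * amu
    return np.sqrt(u0 ** 2 + 2.0 * e * te_ev * np.log(1.0 / n_ratio) / mi)
```

Starting from the sonic (Bohm) speed, a two-decade density drop yields u/c_s = sqrt(1 + 2 ln 100) ≈ 3.2, showing how a modest expansion already produces a strongly supersonic jet.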
Abstract:
The aerodynamic design of trains influences several aspects of high-speed train performance to a very significant degree.
In this situation, considering also that new aerodynamic problems have arisen due to the increase in cruise speed and the lightness of the vehicle, the need for an optimization study of train aerodynamics is evident. Thus, the aerodynamic optimization of the nose shape of a high-speed train is presented in this thesis. This optimization is based on advanced optimization methods; among them, genetic algorithms and the adjoint method have been selected. A theoretical description of their basis, characteristics, and implementation is given in this thesis. This introduction permits understanding the reasons for their selection and the advantages and drawbacks of their application. The genetic algorithms require a geometrical parameterization of every optimal candidate and the generation of a metamodel, or surrogate model, that completes the optimization process. These points are addressed with special attention in the first block of the thesis, which focuses on the methodology followed in this study. The second block concerns the use of these methods to optimize the aerodynamic performance of a high-speed train in several scenarios. These scenarios encompass the most representative operating conditions of high-speed trains, as well as some of the most demanding train aerodynamic problems: head-wind and cross-wind situations in open air, and the entrance of a high-speed train into a tunnel. The genetic algorithms and the adjoint method have been applied to the minimization of the aerodynamic drag on the train under head wind in open air. The comparison of these methods makes it possible to evaluate the methodology and computational cost of each one, as well as the resulting minimization of the aerodynamic drag. Simplicity and robustness, the straightforward realization of a multi-objective optimization, and the capability of searching for a global optimum are the main attributes of genetic algorithms.
However, the requirement to geometrically parameterize every optimal candidate is a significant drawback that is avoided with the adjoint method. This independence from the number of design variables leads to a relevant reduction of the pre-processing and computational cost. Considering cross-wind stability, both methods are used again for the minimization of the side force. In this case, a simplified geometric parameterization of the train nose is adopted, which dramatically reduces the computational cost of the optimization process while still describing the most important geometrical characteristics. This analysis identifies and quantifies the influence of each design variable on the side force on the train. It is observed that the A-pillar roundness is the most relevant design parameter, with a more important effect than the nose length or the train cross-section area. Finally, a third scenario is considered to validate these methods in the aerodynamic optimization of a high-speed train. The entrance of a train into a tunnel is one of the most demanding train aerodynamic problems. The aerodynamic consequences of high-speed trains running in a tunnel essentially reduce to two correlated phenomena: the generation of pressure waves and an increase in aerodynamic drag. This multi-objective optimization problem is solved with genetic algorithms. The result is a Pareto front containing the set of optimal solutions that minimize both objectives.
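The Pareto front mentioned above is simply the set of non-dominated designs. A minimal extraction sketch for two objectives to be minimized (say, drag and pressure-wave peak); the sample points in the test are arbitrary.

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated points for minimization of all
    objectives: a point is dominated if another point is <= in every
    objective and strictly < in at least one."""
    pts = np.asarray(points, float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) &
                           np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep
```

In a genetic algorithm, this filter is applied to the final population (or generation by generation, as in non-dominated sorting) to report the trade-off curve between the two objectives.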
Abstract:
Moment invariants have been thoroughly studied and repeatedly proposed as one of the most powerful tools for 2D shape identification. In this paper a set of such descriptors is proposed whose basis functions are discontinuous at a finite number of points. The goal of using discontinuous functions is to avoid the Gibbs phenomenon and thereby achieve a better approximation capability for discontinuous signals, such as images. Moreover, the proposed set of moments allows the definition of rotation invariants, which is the other main design concern. Translation and scale invariance are achieved by means of standard image normalization. Tests are conducted to evaluate the behavior of these descriptors in noisy environments, where images are corrupted with Gaussian noise at different SNR values. Results are compared with those obtained using Zernike moments, showing that the proposed descriptors match their performance in image retrieval tasks in noisy environments while demanding much less computational power at every stage of the query chain.
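The normalization pipeline mentioned above (raw moments, centering for translation invariance, scaling for scale invariance) can be sketched in a few lines. As a stand-in for the paper's discontinuous basis functions, which are not reproduced here, the sketch uses the classical first Hu invariant to illustrate how a rotation invariant is built from normalized central moments.

```python
def raw_moment(img, p, q):
    """Raw geometric moment m_pq of a gray-scale image given as a list of rows."""
    return sum(img[y][x] * (x ** p) * (y ** q)
               for y in range(len(img)) for x in range(len(img[0])))

def central_moment(img, p, q):
    """Central moment mu_pq: moments taken about the centroid,
    which makes them translation invariant."""
    m00 = raw_moment(img, 0, 0)
    cx = raw_moment(img, 1, 0) / m00
    cy = raw_moment(img, 0, 1) / m00
    return sum(img[y][x] * ((x - cx) ** p) * ((y - cy) ** q)
               for y in range(len(img)) for x in range(len(img[0])))

def eta(img, p, q):
    """Scale-normalized central moment eta_pq."""
    mu00 = central_moment(img, 0, 0)
    return central_moment(img, p, q) / (mu00 ** (1 + (p + q) / 2))

def phi1(img):
    """Hu's first moment invariant: eta_20 + eta_02, invariant under
    translation, scale and rotation of the image."""
    return eta(img, 2, 0) + eta(img, 0, 2)

# A small binary blob and its translated copy yield the same invariant.
blob = [[0, 0, 0, 0, 0],
        [0, 1, 1, 0, 0],
        [0, 1, 1, 0, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0]]
shifted = [[0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 1, 1],
           [0, 0, 0, 1, 1],
           [0, 0, 0, 0, 0]]
assert abs(phi1(blob) - phi1(shifted)) < 1e-12
```

The proposed descriptors follow the same scheme but replace the monomial basis x^p y^q with discontinuous basis functions to avoid the Gibbs phenomenon.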
Abstract:
Background: Gray-scale images make up the bulk of the data in biomedical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers with little or no background in software development, because they take care of otherwise complex tasks; specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to these high-level processing tools is to develop new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time intensive, and likewise not well suited to a researcher with little or no software development experience. Another alternative is to use command line tools that run image processing tasks, store intermediate results on the hard disk, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools provide this kind of processing interface; they are usually quite task specific and do not offer a clear path for turning a prototype shell script into a new command line tool.
Results: The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that makes it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk serves as temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes MIA easy to extend, usually without the need to touch or recompile existing code. Conclusion: In this article we describe the general design of MIA, a general-purpose framework for gray-scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of the high-resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms with shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.
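The string-based plug-in descriptions mentioned above can be illustrated with a minimal registry sketch. The "name:key=value" syntax, the filter names, and the pipeline runner are all hypothetical stand-ins written for illustration; they are not MIA's actual interface or C++ implementation.

```python
# Hypothetical mini-registry: plug-ins are selected and configured from a
# plain string, which is what makes the same description usable from a
# shell script and from a compiled program alike.
FILTERS = {}

def register(name):
    def wrap(fn):
        FILTERS[name] = fn
        return fn
    return wrap

@register("scale")
def scale(pixels, factor="1"):
    f = float(factor)
    return [p * f for p in pixels]

@register("clamp")
def clamp(pixels, lo="0", hi="255"):
    lo, hi = float(lo), float(hi)
    return [min(max(p, lo), hi) for p in pixels]

def parse(description):
    """Turn 'name:key=val,key=val' into a (plug-in, params) pair."""
    name, _, rest = description.partition(":")
    params = dict(kv.split("=") for kv in rest.split(",")) if rest else {}
    return FILTERS[name], params

def run_pipeline(pixels, descriptions):
    """Apply a sequence of string-described filters in order."""
    for desc in descriptions:
        fn, params = parse(desc)
        pixels = fn(pixels, **params)
    return pixels

out = run_pipeline([10, 100, 200], ["scale:factor=2", "clamp:hi=255"])
assert out == [20, 200, 255]
```

Because each stage is named by a string, a shell prototype that chains such descriptions translates directly into a program that parses the same strings, which is the transition path the framework is designed around.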
Abstract:
We present a novel general resource analysis for logic programs based on sized types. Sized types are representations that incorporate structural (shape) information and allow expressing both lower and upper bounds on the size of a set of terms and their subterms at any position and depth. They also allow relating the sizes of terms and subterms occurring at different argument positions in logic predicates. Using these sized types, the resource analysis can infer both lower and upper bounds on the resources used by all the procedures in a program as functions of input term (and subterm) sizes, overcoming limitations of existing analyses and enhancing their precision. Our new resource analysis has been developed within the abstract interpretation framework, as an extension of the sized-types abstract domain, and has been integrated into the Ciao preprocessor, CiaoPP. The abstract domain operations are integrated with the setting up and solving of recurrence equations for inferring both size and resource usage functions. We show that the analysis is an improvement over the previous resource analysis present in CiaoPP and compares well in power with state-of-the-art systems.
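The idea of a resource bound expressed as a function of input size can be illustrated with a toy sketch. The predicate, its cost recurrence, and the helper names below are illustrative stand-ins, not CiaoPP's actual machinery; the example assumes the standard cost model of counting resolution steps for list concatenation, app/3.

```python
def app_steps(n):
    """Resolution steps of app/3 on a first list of length n.
    The recurrence set up by the analysis would be
        steps(0) = 1,  steps(k) = steps(k-1) + 1,
    whose closed form is steps(n) = n + 1."""
    return n + 1

def bounds(cost_fn, size_lo, size_hi):
    """Lower/upper resource bounds when only a size interval is known
    for the input; assumes cost_fn is monotone in the size, which holds
    for app_steps."""
    return cost_fn(size_lo), cost_fn(size_hi)

# If the sized type only guarantees the input list has between 2 and 5
# elements, the analysis can still bracket the cost:
lo, hi = bounds(app_steps, 2, 5)
assert (lo, hi) == (3, 6)
```

The actual analysis derives such closed forms automatically by solving the recurrences within the abstract interpretation framework, and does so simultaneously for lower and upper bounds.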
Abstract:
We present a novel analysis for relating the sizes of terms and subterms occurring at different argument positions in logic predicates. We extend and enrich the concept of sized type as a representation that incorporates structural (shape) information and allows expressing both lower and upper bounds on the size of a set of terms and their subterms at any position and depth, for example, bounds on the length of a list of numbers together with bounds on the values of all of its elements. The analysis is developed using abstract interpretation, and the novel abstract operations are based on setting up and solving recurrence relations between sized types. It has been integrated, together with novel resource usage and cardinality analyses, into the abstract interpretation framework of the Ciao preprocessor, CiaoPP, in order to assess both the accuracy of the new size analysis and its usefulness in the resource usage estimation application. We show that the proposed sized types are a substantial improvement over the previous size analyses present in CiaoPP, and also benefit the resource analysis considerably, allowing the inference of bounds that are equal to or better than those of comparable state-of-the-art systems.
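The list-of-numbers example above can be made concrete with a minimal sized-type sketch: bounds on the list length together with bounds on the values of all elements. The representation and the abstract concatenation operation are illustrative assumptions, not the actual CiaoPP abstract domain.

```python
class SizedList:
    """Toy sized type for lists of numbers: a length interval plus a
    value interval covering every element."""
    def __init__(self, len_lo, len_hi, elem_lo, elem_hi):
        self.len_lo, self.len_hi = len_lo, len_hi
        self.elem_lo, self.elem_hi = elem_lo, elem_hi

    def abstracts(self, xs):
        """Concretization check: does this sized type cover the list xs?"""
        return (self.len_lo <= len(xs) <= self.len_hi and
                all(self.elem_lo <= x <= self.elem_hi for x in xs))

def append_type(a, b):
    """Abstract counterpart of list concatenation: length intervals add,
    element intervals are joined."""
    return SizedList(a.len_lo + b.len_lo, a.len_hi + b.len_hi,
                     min(a.elem_lo, b.elem_lo), max(a.elem_hi, b.elem_hi))

t1 = SizedList(1, 2, 0, 9)    # lists of 1-2 elements with values in [0, 9]
t2 = SizedList(2, 3, 5, 20)   # lists of 2-3 elements with values in [5, 20]
t3 = append_type(t1, t2)      # lists of 3-5 elements with values in [0, 20]
assert t3.abstracts([3, 7, 12])
assert not t3.abstracts([3, 7])   # too short for the combined type
```

In the full analysis such transfer functions are not written by hand: the relations between input and output sized types are set up as recurrence relations and solved, which is what allows bounds at any position and depth of the terms.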