977 results for Full spatial domain computation
Abstract:
Although they seek to understand processes in the same spatial domain, the catchment hydrology and water quality scientific communities are relatively disconnected, and so are their respective models. This is emphasized by an inadequate representation of transport processes in both catchment-scale hydrological and water quality models. While many hydrological models at the catchment scale only account for pressure propagation and not for mass transfer, catchment-scale water quality models are typically limited by overly simplistic representations of flow processes. With the objective of raising awareness of this issue and outlining potential ways forward, we provide a non-technical overview of (1) the importance of hydrology-controlled transport through catchment systems as the link between hydrology and water quality; (2) the limitations of current-generation catchment-scale hydrological and water quality models; (3) the concept of transit times as tools to quantify transport; and (4) the benefits of transit time based formulations of solute transport for catchment-scale hydrological and water quality models. There is emerging evidence that an explicit formulation of transport processes based on the concept of transit times has the potential to improve the understanding of the integrated system dynamics of catchments and to provide a stronger link between catchment-scale hydrological and water quality models.
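For readers unfamiliar with the formalism, the transit-time-based description of solute transport referred to above is commonly written, in its simplest time-invariant form (the notation below is generic and not taken from the abstract), as a convolution of the input concentration with a transit time distribution:

```latex
C_Q(t) = \int_0^{\infty} C_J(t-\tau)\, h(\tau)\, \mathrm{d}\tau ,
\qquad \int_0^{\infty} h(\tau)\, \mathrm{d}\tau = 1 ,
```

where C_J is the concentration of the input flux (e.g., precipitation), C_Q the concentration in discharge, and h(τ) the transit time distribution; time-variant formulations replace h(τ) with a distribution that also depends on t.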
Abstract:
This paper considers the stability of explicit, implicit and Crank-Nicolson schemes for the one-dimensional heat equation on a staggered grid. Furthermore, we consider the cases when both explicit and implicit approximations of the boundary conditions are employed. Why we choose to do this is clearly motivated and arises from solving fluid flow equations with free surfaces when the Reynolds number can be very small, in at least parts of the spatial domain. A comprehensive stability analysis is supplied: a novel result is the precise stability restriction on the Crank-Nicolson method when the boundary conditions are approximated explicitly, that is, at t = nΔt rather than t = (n + 1)Δt. The two-dimensional Navier-Stokes equations were then solved by a marker and cell approach for two simple problems that had analytic solutions. It was found that the stability results provided in this paper were qualitatively very similar, thereby providing insight as to why a Crank-Nicolson approximation of the momentum equations is only conditionally stable. Copyright (C) 2008 John Wiley & Sons, Ltd.
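To make this kind of stability restriction tangible, here is a minimal sketch (a plain forward-time centred-space scheme on a uniform grid, not the authors' staggered-grid formulation) showing that the explicit scheme for u_t = α u_xx is stable only when r = αΔt/Δx² ≤ 1/2:

```python
import numpy as np

def explicit_heat_step(u, r):
    """One forward-time centred-space step of u_t = alpha*u_xx with fixed ends.

    r = alpha*dt/dx**2 must satisfy r <= 0.5 for stability.
    """
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u_new

nx, nsteps = 21, 200
x = np.linspace(0.0, 1.0, nx)
# Smooth initial condition plus a tiny high-frequency perturbation, so that
# the fastest-growing grid mode is excited when the scheme is unstable.
u0 = np.sin(np.pi * x) + 1e-6 * np.cos(np.pi * (nx - 1) * x)

for r in (0.4, 0.6):                       # below / above the limit r = 0.5
    u = u0.copy()
    for _ in range(nsteps):
        u = explicit_heat_step(u, r)
    print(f"r = {r}: max|u| after {nsteps} steps = {np.max(np.abs(u)):.3e}")
```

With r = 0.4 the solution decays smoothly, while with r = 0.6 the highest grid mode grows without bound; the conditional stability analysed in the paper arises in an analogous way when the boundary conditions of the Crank-Nicolson scheme are treated explicitly.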
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Plasma-based X-ray lasers are attractive diagnostic instruments for a variety of potential applications, for example in spectroscopy, microscopy and EUV lithography, owing to their short wavelength and narrow spectral bandwidth. Nevertheless, X-ray lasers are not yet widely used, mainly because of insufficient pulse energy and, for some applications, inadequate beam quality. Significant progress has been made in this respect in recent years. The simultaneous development of pump laser systems and pumping mechanisms has made it possible to operate compact X-ray laser sources at repetition rates of up to 100 Hz. To obtain higher pulse energies, higher beam quality and full spatial coherence at the same time, intensive theoretical and experimental studies have been carried out. In this context, the present work developed an experimental setup for combining two X-ray laser targets, the so-called butterfly configuration. The first X-ray laser serves as a seed for the second X-ray laser medium, which acts as an amplifier (injection seeding). In this way, detrimental effects arising from the amplification of spontaneous emission during the generation of the X-ray laser are avoided. Using the double-pulse grazing-incidence pumping scheme, also developed at GSI, the concept presented here makes it possible for the first time to pump both X-ray laser targets efficiently and with travelling-wave excitation. In a first experimental implementation, amplified silver X-ray laser pulses of 1 µJ at a wavelength of 13.9 nm were generated. From the data obtained, amplification was demonstrated and the lifetime of the population inversion was determined to be 3 ps. In a follow-up experiment, the properties of a molybdenum X-ray laser plasma were investigated in more detail. In addition to the pumping scheme previously used at GSI, a further technique based on an additional pump pulse was employed in this beam time. In both schemes, amplification was demonstrated and the amplifying medium was characterized in time and space. X-ray laser pulses of up to 240 nJ at a wavelength of 18.9 nm were detected. The brilliance of the amplified pulses was about two orders of magnitude above that of the original seed and more than one order of magnitude above the brilliance of an X-ray laser generated from a single target. The concept developed and experimentally verified in this work thus has the potential to produce extremely brilliant plasma-based X-ray lasers with full spatial and temporal coherence. The results discussed in this work are an essential contribution to the development of an X-ray laser intended for spectroscopic studies of highly charged heavy ions. These experiments are planned at the experimental storage ring of GSI and, in the future, at the High-Energy Storage Ring of the FAIR facility.
Abstract:
Synthetic Aperture Radar (SAR) images a target region reflectivity function in the multi-dimensional spatial domain of range and cross-range. SAR synthesizes a large aperture radar in order to achieve finer azimuth resolution than the one provided by any on-board real antenna. Conventional SAR techniques assume a single reflection of transmitted waveforms from targets. Nevertheless, today's new scenes force SAR systems to work in urban environments. Consequently, multiple-bounce returns are added to direct-scatter echoes. We refer to these as ghost images, since they obscure the true target image and lead to poor resolution. By analyzing the quadratic phase error (QPE), this paper demonstrates that Earth's curvature influences the defocusing degree of multipath returns. In addition to the QPE, other parameters such as integrated sidelobe ratio (ISLR), peak sidelobe ratio (PSLR), contrast and entropy provide us with the tools to identify direct-scatter echoes in images containing undesired returns coming from multipath.
Abstract:
Synthetic Aperture Radar (SAR) images a target region reflectivity function in the multi-dimensional spatial domain of range and cross-range. SAR synthesizes a large aperture radar in order to achieve a finer azimuth resolution than the one provided by any on-board real antenna. Conventional SAR techniques assume a single reflection of transmitted waveforms from targets. Nevertheless, today's new scenes force SAR systems to work in urban environments. Consequently, multiple-bounce returns are added to direct-scatter echoes. We refer to these as ghost images, since they obscure the true target image and lead to poor resolution. By analyzing the quadratic phase error (QPE), this paper demonstrates that Earth's curvature influences the defocusing degree of multipath returns. In addition to the QPE, other parameters such as integrated sidelobe ratio (ISLR), peak sidelobe ratio (PSLR), contrast (C) and entropy (E) provide us with the tools to identify direct-scatter echoes in images containing undesired returns coming from multipath.
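As an aside on two of the metrics mentioned, the following generic sketch (not the paper's code) computes PSLR and ISLR from a 1-D compressed pulse, here the sinc-like response of a uniform (rectangular) spectrum, by delimiting the mainlobe at the first minima on either side of the peak:

```python
import numpy as np

def pslr_islr(h):
    """Peak and integrated sidelobe ratios (dB) of a 1-D impulse response.

    The mainlobe is taken to extend to the first local minimum on each side
    of the peak of |h|^2; everything outside it counts as sidelobes.
    """
    p = np.abs(h) ** 2
    k = int(np.argmax(p))
    left = k
    while left > 0 and p[left - 1] < p[left]:
        left -= 1
    right = k
    while right < len(p) - 1 and p[right + 1] < p[right]:
        right += 1
    main = p[left:right + 1]
    side = np.concatenate([p[:left], p[right + 1:]])
    pslr = 10.0 * np.log10(side.max() / p[k])
    islr = 10.0 * np.log10(side.sum() / main.sum())
    return pslr, islr

# Uniform (rectangular) spectrum -> sinc-like compressed pulse
N, pad = 64, 4096
h = np.fft.fftshift(np.fft.ifft(np.ones(N), n=pad))
pslr, islr = pslr_islr(h)
print(f"PSLR = {pslr:.1f} dB, ISLR = {islr:.1f} dB")
# roughly -13 dB and -10 dB, the classical values for a uniform aperture
```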
Abstract:
Synthetic Aperture Radar (SAR) images a target region reflectivity function in the multi-dimensional spatial domain of range and cross-range with a finer azimuth resolution than the one provided by any on-board real antenna. Conventional SAR techniques assume a single reflection of transmitted waveforms from targets. Nevertheless, new uses of Unmanned Aerial Vehicles (UAVs) for civilian-security applications force SAR systems to work in much more complex scenes such as urban environments. Consequently, multiple-bounce returns are additionally superposed on direct-scatter echoes. They are known as ghost images, since they obscure the true target image and lead to poor resolution. This can pose a significant problem in applications related to surveillance and security. In this work, an innovative multipath mitigation technique is presented in which the Time Reversal (TR) concept is applied to SAR images when the target is concealed in clutter, leading to the TR-SAR technique. This way, the effect of multipath is considerably reduced, or even removed, recovering the resolution lost due to multipath propagation. Furthermore, some focusing indicators such as entropy (E), contrast (C) and Rényi entropy (RE) provide us with a good focusing criterion when using TR-SAR.
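A minimal, generic sketch (not taken from the paper) of the image-domain focusing indicators mentioned, Shannon entropy, contrast and Rényi entropy of order alpha, computed on the normalized intensity of a SAR magnitude image; lower entropy and higher contrast indicate better focus:

```python
import numpy as np

def focus_metrics(img, alpha=2.0):
    """Entropy, contrast and Renyi entropy of a (magnitude) SAR image."""
    I = np.abs(img) ** 2                      # pixel intensities
    p = I / I.sum()                           # normalized intensity as a pmf
    p_nz = p[p > 0]
    entropy = -np.sum(p_nz * np.log(p_nz))                 # Shannon entropy
    renyi = np.log(np.sum(p_nz ** alpha)) / (1.0 - alpha)  # Renyi entropy
    contrast = I.std() / I.mean()                          # image contrast
    return entropy, contrast, renyi

# Toy comparison: a 'focused' image (one bright point) versus a 'defocused'
# one with the same total energy smeared over all pixels.
focused = np.zeros((64, 64)); focused[32, 32] = 1.0
defocused = np.full((64, 64), 1.0 / 64.0)
for name, im in [("focused", focused), ("defocused", defocused)]:
    e, c, r = focus_metrics(im)
    print(f"{name:10s} entropy={e:.2f} contrast={c:.2f} renyi={r:.2f}")
```

In this toy comparison the single-point image gives zero entropy and high contrast, while the smeared image gives maximal entropy and zero contrast, which is the behaviour such indicators exploit as a focusing criterion.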
Abstract:
Since the beginning of digital video compression, the uncompressed video source used as input to the encoder and the uncompressed decoded output stream have both used 8 bits to represent each sample, independently of resolution, chroma subsampling scheme, etc. In the same way, video coding standards force encoders to work with 8 bits of precision internally when operating on samples before they are transformed to the frequency domain. However, the H.264 standard, widely used today, allows coding video with more than 8 bits per sample in some of its professionally oriented profiles. When these profiles are used, all work on samples still in the spatial domain is done with the same precision as the input video. This increase in internal precision has the potential to allow more precise predictions, reducing the residual to be encoded and thus increasing coding efficiency for a given bitrate. The goal of this Project is to study, using the PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity) objective video quality metrics, the effects on coding efficiency and performance of using an H.264 10-bit coding/decoding chain compared with a traditional 8-bit chain. To achieve this goal the open source x264 encoder is used, which can encode video with 8 and 10 bits per sample using the H.264 High, High 10, High 4:2:2 and High 4:4:4 Predictive profiles. Given that no proper tools exist for computing PSNR and SSIM values of video with more than 8 bits per sample and chroma subsampling schemes other than 4:2:0, an analysis application written in the C programming language is also developed as part of this Project. This application is able to compute both metrics from two uncompressed video files in the YUV or Y4M format.
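As a brief illustration of how the PSNR computation generalizes beyond 8 bits (a sketch, not the C analysis tool developed in the Project; SSIM is omitted), the only change is the peak value 2^b - 1 used in the numerator:

```python
import numpy as np

def psnr(ref, dist, bit_depth=10):
    """PSNR in dB between two planes of 'bit_depth'-bit samples."""
    ref = ref.astype(np.float64)
    dist = dist.astype(np.float64)
    mse = np.mean((ref - dist) ** 2)
    if mse == 0:
        return float("inf")
    peak = (1 << bit_depth) - 1          # 255 for 8-bit, 1023 for 10-bit
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a 10-bit luma plane and a slightly noisy copy of it.
rng = np.random.default_rng(0)
y = rng.integers(0, 1024, size=(720, 1280), dtype=np.uint16)
noisy = np.clip(y + rng.integers(-4, 5, size=y.shape), 0, 1023)
print(f"PSNR = {psnr(y, noisy, bit_depth=10):.2f} dB")
```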
Abstract:
Partitioning is a common approach to developing mixed-criticality systems, where partitions are isolated from each other both in the temporal and the spatial domain in order to prevent low-criticality subsystems from compromising other subsystems with a higher level of criticality in case of misbehaviour. The advent of many-core processors, on the other hand, opens the way to highly parallel systems in which all partitions can be allocated to dedicated processor cores. This trend will simplify processor scheduling, although other issues such as mutual interference in the temporal domain may arise as a consequence of memory and device sharing. The paper describes an architecture for multi-core partitioned systems including critical subsystems built with the Ada Ravenscar profile. Some implementation issues are discussed, and experience in implementing the ORK kernel on the XtratuM partitioning hypervisor is presented.
Abstract:
In this dissertation a new numerical method for solving Fluid-Structure Interaction (FSI) problems in a Lagrangian framework is developed, where solids with different constitutive laws can undergo very large deformations and fluids are considered to be Newtonian and incompressible. To that end, we first introduce a meshless discretization based on local maximum-entropy interpolants. This allows a spatial domain to be discretized without tessellation, avoiding the limitations of meshes. The Stokes flow problem is then studied. The Galerkin meshless method based on a max-ent scheme suffers from instabilities for this problem, so stabilization techniques are discussed and analyzed. An unconditionally stable method is finally formulated based on a Douglas-Wang stabilization. Then, a Lagrangian formulation of fluid mechanics is derived. This allows us to establish a common framework for fluid and solid domains, so that their interaction can be accounted for naturally. The resulting equations also require stabilization, which is achieved with a technique analogous to that used for the Stokes problem. The fully Lagrangian framework for fluid/solid interaction is completed with simple point-to-point and point-to-surface contact algorithms. The method is finally validated, and some numerical examples show the potential scope of applications.
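To give a flavour of the interpolants involved, the following is a generic 1-D sketch of local maximum-entropy (max-ent) shape functions, not the dissertation's own formulation: p_a(x) is proportional to exp(-beta*(x - x_a)^2 + lam*(x_a - x)), normalized to sum to one, with lam obtained by Newton iteration so that the first-order consistency condition sum_a p_a*(x_a - x) = 0 holds:

```python
import numpy as np

def lme_basis_1d(x, nodes, beta, tol=1e-12, max_iter=50):
    """Local max-entropy shape functions evaluated at point x (1-D nodes)."""
    d = nodes - x                      # x_a - x
    lam = 0.0
    for _ in range(max_iter):
        w = np.exp(-beta * d**2 + lam * d)
        p = w / w.sum()                # normalized weights (partition of unity)
        r = p @ d                      # residual of the consistency condition
        if abs(r) < tol:
            break
        J = p @ d**2 - r**2            # dr/dlam (a variance, hence >= 0)
        lam -= r / J                   # Newton update
    return p

# Partition of unity and linear completeness on a small node set
nodes = np.linspace(0.0, 1.0, 11)
beta = 4.0 / (nodes[1] - nodes[0]) ** 2    # locality parameter
for xq in (0.23, 0.5, 0.87):
    p = lme_basis_1d(xq, nodes, beta)
    print(f"x={xq}: sum p = {p.sum():.6f}, sum p*x_a = {p @ nodes:.6f}")
```

The printed checks illustrate why such interpolants can replace a mesh: the shape functions sum to one and reproduce the coordinate exactly at every evaluation point, without any tessellation of the domain.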
Abstract:
Fluid flow and fabric compaction during vacuum assisted resin infusion (VARI) of composite materials were simulated using a level set-based approach. Fluid infusion through the fiber preform was modeled using Darcy's equations for fluid flow through a porous medium. The stress partition between the fluid and the fiber bed was included by means of Terzaghi's effective stress theory. Tracking of the fluid front during infusion was introduced by means of the level set method. The resulting partial differential equations for the fluid infusion and the evolution of the flow front were discretized and solved approximately using the finite difference method on a uniform grid discretization of the spatial domain. The model was validated against uniaxial VARI experiments through a [0]8 E-glass plain woven preform, with the physical parameters of the model measured independently. The experimental results (in terms of the fabric thickness, pressure and fluid front evolution during filling) were in good agreement with the numerical simulations, showing the potential of the level set method to simulate resin infusion.
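As a much-simplified companion to the model described above, the sketch below (assuming a rigid preform with constant permeability and porosity, so no Terzaghi compaction coupling and no level set tracking; all parameter values are illustrative) integrates the 1-D Darcy flow-front equation dx_f/dt = K*dP/(mu*phi*x_f) with finite differences and compares it with the analytic square-root-of-time filling law:

```python
import numpy as np

# Material / process parameters (illustrative values only)
K   = 2.0e-10      # preform permeability, m^2
phi = 0.5          # porosity
mu  = 0.1          # resin viscosity, Pa.s
dP  = 1.0e5        # pressure difference between inlet and flow front, Pa

def front_position(t_end, dt=0.01, x0=1e-3):
    """Explicit time integration of dx_f/dt = K*dP / (mu*phi*x_f)."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * K * dP / (mu * phi * x)
        t += dt
    return x

t_end = 600.0  # s
x_num = front_position(t_end)
x_ana = np.sqrt(2.0 * K * dP * t_end / (mu * phi))  # analytic 1-D filling law
print(f"front after {t_end:.0f} s: numerical {x_num:.4f} m, analytic {x_ana:.4f} m")
```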
Abstract:
The current approach to developing mixed-criticality systems is by partitioning the hardware resources (processors, memory and I/O devices) among the different applications. Partitions are isolated from each other both in the temporal and the spatial domain, so that low-criticality applications cannot compromise other applications with a higher level of criticality in case of misbehaviour. New architectures based on many-core processors open the way to highly parallel systems in which each partition can be allocated to a set of dedicated processor cores, thus simplifying partition scheduling and temporal separation. Moreover, spatial isolation can also benefit from many-core architectures, by using simpler hardware mechanisms to protect the address spaces of different applications. This paper describes an architecture for many-core embedded partitioned systems, together with some implementation advice for spatial isolation.
Abstract:
Image segmentation is one of the most computationally intensive operations in image processing and computer vision. This is because a large volume of data is involved and many different features have to be extracted from the image data. This thesis is concerned with the investigation of practical issues related to the implementation of several classes of image segmentation algorithms on parallel architectures. The Transputer is used as the basic building block of hardware architectures and Occam is used as the programming language. The segmentation methods chosen for implementation are convolution, for edge-based segmentation; the Split and Merge algorithm, for segmenting non-textured regions; and the Granlund method, for segmentation of textured images. Three different convolution methods have been implemented. The direct method of convolution, carried out in the spatial domain, uses the array architecture. The other two methods, based on convolution in the frequency domain, require the use of the two-dimensional Fourier transform. Parallel implementations of two different Fast Fourier Transform algorithms have been developed, incorporating original solutions. For the Row-Column method the array architecture has been adopted, and for the Vector-Radix method, the pyramid architecture. The texture segmentation algorithm, for which a system-level design is given, demonstrates a further application of the Vector-Radix Fourier transform. A novel concurrent version of the quad-tree based Split and Merge algorithm has been implemented on the pyramid architecture. The performance of the developed parallel implementations is analysed. Many of the obtained speed-up and efficiency measures show values close to their respective theoretical maxima. Where appropriate, comparisons are drawn between different implementations. The thesis concludes with comments on general issues related to the use of the Transputer system as a development tool for image processing applications, and on issues related to the engineering of concurrent image processing applications.
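The equivalence that the frequency-domain convolution methods rely on can be illustrated with a short, generic sketch (unrelated to the Transputer/Occam implementations described): direct circular convolution in the spatial domain matches pointwise multiplication of 2-D FFTs, up to rounding error:

```python
import numpy as np

def circular_convolve_direct(img, kern):
    """Direct (spatial-domain) circular convolution of two equal-sized arrays."""
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(H):
        for j in range(W):
            # wrap-around sum over all kernel positions
            out[i, j] = sum(
                img[(i - m) % H, (j - n) % W] * kern[m, n]
                for m in range(H) for n in range(W)
            )
    return out

def circular_convolve_fft(img, kern):
    """Same circular convolution via the 2-D FFT (convolution theorem)."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kern)))

rng = np.random.default_rng(1)
img = rng.random((16, 16))
kern = np.zeros((16, 16))
kern[:3, :3] = 1.0 / 9.0          # 3x3 box blur embedded in a full-size kernel

direct = circular_convolve_direct(img, kern)
fast = circular_convolve_fft(img, kern)
print("max abs difference:", np.max(np.abs(direct - fast)))   # ~1e-15
```

The direct method costs O(N^2) operations per output pixel, whereas the FFT route costs O(log N) per pixel, which is why the thesis pairs the frequency-domain convolutions with parallel FFT implementations.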
Abstract:
The study utilized the advanced technology provided by automated perimeters to investigate the hypothesis that patients with retinitis pigmentosa behave atypically over the dynamic range and to concurrently determine the influence of extraneous factors on the format of the normal perimetric sensitivity profile. The perimetric processing of some patients with retinitis pigmentosa was considered to be abnormal in the temporal and/or the spatial domain. The standard size III stimulus saturated the central regions and was thus ineffective in detecting early depressions in sensitivity in these areas. When stimulus size was scaled in inverse proportion to the square root of ganglion cell receptive field density (M-scaled), isosensitive profiles did not result, although cortical representation was theoretically equivalent across the visual field. It was conjectured that this was due to variations in ganglion cell characteristics with increasing peripheral angle, most notably spatial summation. It was concluded that the development of perimetric routines incorporating stimulus sizes adjusted in proportion to the coverage factor of retinal ganglion cells would enhance the diagnostic capacity of perimetry. Good general and local correspondence was found between perimetric sensitivity and the available retinal cell counts. Intraocular light scatter, arising both from simulations and from media opacities, depressed perimetric sensitivity. Attenuation was greater centrally for the smaller LED stimuli, whereas the reverse was true for the larger projected stimuli. Prior perimetric experience and pupil size also demonstrated an eccentricity-dependent effect on sensitivity. Practice improved perimetric sensitivity for projected stimuli at eccentricities greater than or equal to 30°, particularly in the superior region. An increase in pupil size for LED stimuli enhanced sensitivity at eccentricities greater than 10°. Conversely, microfluctuations in the accommodative response during perimetric examination and the correction of peripheral refractive error had no significant influence on perimetric sensitivity.
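For clarity, the M-scaling rule referred to above can be written as follows (generic notation, not the thesis's own symbols, and under the assumption that the inverse square-root scaling is applied to the stimulus diameter):

```latex
d(e) = d(0)\,\sqrt{\frac{D(0)}{D(e)}}
\quad\Longrightarrow\quad
d(e)^{2}\, D(e) = d(0)^{2}\, D(0) \;\approx\; \text{const.},
```

where d(e) is the stimulus diameter and D(e) the ganglion cell receptive field density at eccentricity e, so that the expected number of receptive fields covered by the stimulus, and hence its nominal cortical representation, stays constant across the visual field.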