972 results for Non-homogeneous boundary conditions


Relevance: 100.00%

Abstract:

Fully developed laminar slip flow and heat transfer in trapezoidal micro-channels with uniform wall heat flux boundary conditions are studied numerically. Through a coordinate transformation, the governing equations are mapped from the physical plane to a computational domain, and the resulting equations are solved by a finite-difference scheme. The influences of velocity slip and temperature jump on the friction coefficient and Nusselt number are investigated in detail. The calculations also show that the aspect ratio and base angle have a significant effect on flow and heat transfer in trapezoidal micro-channels.
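As a hedged illustration of slip-flow boundary conditions (a 1D parallel-plate analogue, not the paper's 2D trapezoidal solver), the sketch below solves fully developed flow with a first-order velocity-slip condition by finite differences and checks it against the closed-form profile; the slip length and pressure gradient are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: fully developed flow between parallel plates at y = -h, +h
# with first-order velocity slip u_wall = beta * du/dn (all values illustrative).
h, beta = 1.0, 0.1          # half-height, slip length (assumed)
G_over_mu = 1.0             # (-dp/dx)/mu, assumed
N = 101
y = np.linspace(-h, h, N)
dy = y[1] - y[0]

# Tridiagonal system for u'' = -G/mu with slip (Robin) conditions at both walls.
A = np.zeros((N, N))
b = np.full(N, -G_over_mu)
for i in range(1, N - 1):
    A[i, i - 1 : i + 2] = [1.0, -2.0, 1.0]
    b[i] *= dy**2
# Slip at lower wall: u0 = beta*(u1 - u0)/dy  ->  (1 + beta/dy) u0 - (beta/dy) u1 = 0
A[0, 0], A[0, 1], b[0] = 1.0 + beta / dy, -beta / dy, 0.0
A[-1, -1], A[-1, -2], b[-1] = 1.0 + beta / dy, -beta / dy, 0.0
u = np.linalg.solve(A, b)

# Closed-form solution: u = (G/2mu)(h^2 - y^2) + beta*G*h/mu
u_exact = 0.5 * G_over_mu * (h**2 - y**2) + beta * G_over_mu * h
print("max abs error:", np.abs(u - u_exact).max())
print("slip raises mean velocity by factor",
      u.mean() / (G_over_mu * h**2 / 3.0))
```

Velocity slip adds a uniform offset to the parabolic profile, which is why it lowers the friction coefficient in such channels.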

Relevance: 100.00%

Abstract:

I. Wood: Maximal Lp-regularity for the Laplacian on Lipschitz domains, Math. Z. 255 (2007), no. 4, 855-875.

Relevance: 100.00%

Abstract:

We propose that a simple, closed-form mathematical expression--the Wedge-Dipole mapping--provides a concise approximation to the full-field, two-dimensional topographic structure of macaque V1, V2, and V3. A single map function, which we term a map complex, acts as a simultaneous descriptor of all three areas. Quantitative estimation of the Wedge-Dipole parameters is provided via 2DG data of central-field V1 topography and a publicly available data set of full-field macaque V1 and V2 topography. Good quantitative agreement is obtained between the data and the model presented here. The increasing importance of fMRI-based brain imaging motivates the development of more sophisticated two-dimensional models of cortical visuotopy, in contrast to the one-dimensional approximations that have been in common use. One reason is that topography has traditionally supplied an important aspect of "ground truth", or validation, for brain imaging, suggesting that further development of high-resolution fMRI will be facilitated by this data analysis. In addition, several important insights into the nature of cortical topography follow from this work. The presence of anisotropy in the cortical magnification factor is shown to follow mathematically from the shared boundary conditions at the V1-V2 and V2-V3 borders, and therefore may not causally follow from the existence of columnar systems in these areas, as is widely assumed. An application of the Wedge-Dipole model to localizing aspects of visual processing to specific cortical areas--extending previous work correlating the V1 cortical magnification factor with retinal anatomy or visual psychophysics data--is briefly discussed.
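For orientation, the sketch below evaluates a generic log-dipole map of the form w = k log((z + a)/(z + b)), the functional core of Wedge-Dipole-style models, and reports the resulting linear magnification along the horizontal meridian. The parameter values are illustrative assumptions, not the fitted macaque values of the paper.

```python
import numpy as np

# Generic log-dipole visual-to-cortical map: w = k * log((z + a) / (z + b)).
# Parameters below are illustrative, not the paper's fitted values.
k, a, b = 15.0, 0.7, 80.0   # mm scale and eccentricity constants (assumed)

def dipole_map(ecc_deg, polar_rad):
    z = ecc_deg * np.exp(1j * polar_rad)   # visual-field location (complex)
    return k * np.log((z + a) / (z + b))   # cortical location in mm (complex)

# Linear magnification along the horizontal meridian:
# |dw/dz| = k * |1/(z + a) - 1/(z + b)|
ecc = np.array([0.5, 1, 2, 5, 10, 20, 40])
mag = k * np.abs(1.0 / (ecc + a) - 1.0 / (ecc + b))
for e, m in zip(ecc, mag):
    print(f"eccentricity {e:5.1f} deg -> linear magnification {m:6.2f} mm/deg")

w0, w90 = dipole_map(np.array([0.0, 90.0]), 0.0)
print(f"map length along horizontal meridian: {(w90 - w0).real:.1f} mm")
```

The steep decline of magnification with eccentricity is the hallmark behavior such models are designed to capture.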

Relevance: 100.00%

Abstract:

Phase-locked loops (PLLs) are a crucial component in modern communications systems. Comprising a phase detector, linear filter, and controllable oscillator, they are widely used in radio receivers to retrieve the information content from remote signals. As such, they are capable of signal demodulation, phase and carrier recovery, frequency synthesis, and clock synchronization. Continuous-time PLLs are a mature area of study, having been covered in the literature since the early classical work by Viterbi [1] in the 1950s. With the rise of computing in recent decades, discrete-time digital PLLs (DPLLs) are a more recent discipline; most of the published literature dates from the 1990s onwards, with Gardner [2] a pioneer in the area. Our aim in this work is to address the difficulties encountered by Gardner [3] in his investigation of DPLL output phase jitter when additive noise on the input signal is combined with frequency quantization in the local oscillator. The model we use in our novel analysis of the system is also applicable to another of the cases examined by Gardner, namely the DPLL with a delay element integrated in the loop. This gives us the opportunity to study that system in more detail, our analysis providing some unique insights into the variance `dip' observed by Gardner in [3]. We initially provide background on probability theory and stochastic processes, the branches of mathematics underpinning the study of noisy analogue and digital PLLs. We give an overview of classical analogue PLL theory as well as background on both the digital PLL and the circle map, referencing the model proposed by Teplinsky et al. [4, 5]. For our novel work, the case of combined frequency quantization and noisy input from [3] is investigated first numerically, and then analytically as a Markov chain via its Chapman-Kolmogorov equation. The resulting delay equation for the steady-state jitter distribution is treated using two separate asymptotic analyses to obtain approximate solutions. The variance obtained in each case is shown to match the numerical results well. Other properties of the output jitter, such as the mean, are also investigated. In this way, we arrive at a more complete understanding of the interaction between quantization and input noise in the first-order DPLL than is possible using simulation alone. We also carry out an asymptotic analysis of a particular case of the noisy first-order DPLL with delay, previously investigated by Gardner [3], and show that a unique feature of the simulation results, namely the variance `dip' seen for certain levels of input noise, is explained by this analysis. Finally, we look at the second-order DPLL with additive noise, using numerical simulations to examine the effects of low levels of noise on the limit cycles. We show how these effects are similar to those seen in the noise-free loop with non-zero initial conditions.
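As a hedged illustration of the numerical side of such a study (a toy circle-map-style model, not the thesis's actual equations), the sketch below simulates a first-order DPLL whose oscillator frequency correction is quantized to steps of size q while the input carries additive Gaussian noise, and estimates the steady-state phase-jitter variance. Loop gain, step size, frequency offset, and noise levels are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def jitter_variance(sigma, q=1e-3, K=0.1, f_off=2.3e-4, n_steps=200_000):
    """Steady-state phase-jitter variance of a toy first-order DPLL.

    The phase error phi evolves under a sinusoidal phase detector whose
    noisy output drives an oscillator restricted to frequency steps of
    size q; because the offset f_off is not a multiple of q, the loop
    'hunts' between adjacent steps even without noise.
    """
    phi, samples = 0.0, []
    for n in range(n_steps):
        e = np.sin(2 * np.pi * phi) + sigma * rng.standard_normal()
        f_corr = q * np.round(K * e / q)                 # quantized correction
        phi = (phi + f_off - f_corr + 0.5) % 1.0 - 0.5   # wrap to [-0.5, 0.5)
        if n > n_steps // 2:                             # discard transient
            samples.append(phi)
    return np.var(samples)

for sigma in [0.0, 0.05, 0.1, 0.2, 0.4]:
    print(f"input noise sigma = {sigma:4.2f} -> jitter variance = "
          f"{jitter_variance(sigma):.3e}")
```

Sweeping sigma in a model of this kind is the numerical experiment against which Markov-chain predictions of the jitter distribution can be compared.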

Relevance: 100.00%

Abstract:

In this work we introduce a new mathematical tool for the optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, with routing performed in the direction of this vector field at every location; the magnitude of the vector field at each location represents the density of the data traffic transiting that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With this formulation, we introduce mathematical machinery based on partial differential equations closely analogous to Maxwell's equations in electrostatic theory, and show that in order to minimize the cost, the routes should be found from the solution of these partial differential equations. In our formulation, the sensors are sources of information, analogous to the positive charges in electrostatics; the destinations are sinks of information, analogous to negative charges; and the network is analogous to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient). As one application of this vector field model, we offer a scheme for energy-efficient routing. The scheme raises the permittivity coefficient in the parts of the network where nodes have high residual energy, and lowers it where the nodes do not have much energy left. Our simulations show that this method gives a significant increase in network lifetime compared to shortest-path and weighted-shortest-path schemes. Our initial focus is on the case where there is only one destination in the network; we later extend the approach to multiple destinations. With multiple destinations, the network must be partitioned into several areas known as the regions of attraction of the destinations, each destination being responsible for collecting all messages generated in its region of attraction. The difficulty of the optimization problem in this case is how to define the regions of attraction and how much communication load to assign to each destination so as to optimize the performance of the network. We use our vector field model to solve this problem: we define a conservative vector field, which can therefore be written as the gradient of a scalar field (also known as a potential field), and we show that in the optimal assignment of the communication load to the destinations, the value of this potential field is equal at the locations of all the destinations. Another application of the vector field model is finding the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the destination locations, and based on this fact we suggest an algorithm, applied during the design phase of a network, that relocates the destinations to reduce the communication cost. The performance of our proposed schemes is confirmed by several examples and simulation experiments.

In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate to the network in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates and suggest two methods for determining their values. The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers; we make it robust by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce tests of responsiveness for aggregates, which we call the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control; a distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. In the next step we modify CAPM to offer methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it, and observing the response of the aggregate. We offer two methods for conformance testing. In the first, we apply the perturbation tests to SYN packets sent at the start of the TCP 3-way handshake, using the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second, we apply the perturbation tests to TCP data packets, using the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods we use signature-based perturbations, meaning packet drops are performed at a rate given by a function of time. We exploit the analogy between our problem and multiple-access communication to find suitable signatures: specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of this orthogonality, performance does not degrade because of cross-interference between simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.
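A minimal sketch of the electrostatics analogy described above, under assumed parameters: one source (sensor) and one sink (destination) are placed on a grid, div(eps grad phi) = -rho is solved by Jacobi iteration with a spatially varying permittivity eps (set higher where residual energy is assumed high), and routes follow the flux field D = -eps grad phi. Grid size, eps values, and iteration counts are illustrative, not the paper's actual optimization.

```python
import numpy as np

# Solve div(eps * grad(phi)) = -rho on a grid; route along D = -eps * grad(phi).
N = 64
eps = np.ones((N, N))
eps[:, : N // 2] = 4.0        # assumed high-residual-energy half of the network

rho = np.zeros((N, N))
rho[16, 16], rho[48, 48] = +1.0, -1.0   # sensor (source) and destination (sink)

phi = np.zeros((N, N))        # potential, grounded (zero) boundary
for _ in range(20_000):       # Jacobi iteration with face-averaged eps
    e_n = 0.5 * (eps[:-2, 1:-1] + eps[1:-1, 1:-1])
    e_s = 0.5 * (eps[2:, 1:-1] + eps[1:-1, 1:-1])
    e_w = 0.5 * (eps[1:-1, :-2] + eps[1:-1, 1:-1])
    e_e = 0.5 * (eps[1:-1, 2:] + eps[1:-1, 1:-1])
    num = (e_n * phi[:-2, 1:-1] + e_s * phi[2:, 1:-1]
           + e_w * phi[1:-1, :-2] + e_e * phi[1:-1, 2:] + rho[1:-1, 1:-1])
    phi[1:-1, 1:-1] = num / (e_n + e_s + e_w + e_e)

# Flux (routing) field: data flows from source to sink along D = -eps * grad(phi).
gy, gx = np.gradient(phi)
Dx, Dy = -eps * gx, -eps * gy
print("flux magnitude near source:", np.hypot(Dx, Dy)[17, 17].round(4))
```

Raising eps in the energy-rich half attracts more of the flux there, which is the mechanism behind the energy-aware routing scheme.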

Relevance: 100.00%

Abstract:

The paper investigates stochastic processes forced by independent and identically distributed jumps occurring according to a Poisson process. The impact of different distributions of the jump amplitudes is analyzed for processes with linear drift. Exact expressions for the probability density functions are derived for exponential, gamma, and mixed-exponential jump-amplitude distributions, under both natural and reflecting boundary conditions. The mean level-crossing properties are studied in relation to the different jump amplitudes. As an application of these theoretical derivations, the role of different rainfall-depth distributions in an existing stochastic soil water balance model is analyzed. It is shown that the shape of the distribution of daily rainfall depths plays an increasingly relevant role in the soil moisture probability distribution as rainfall frequency decreases, as predicted by future climatic scenarios.
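As a hedged illustration (a standard shot-noise special case, not the paper's full derivations): for linear decay at rate k punctuated by Poisson-rate-lambda jumps with exponential amplitudes of mean 1/gamma, the stationary density is the classical Gamma(lambda/k, 1/gamma) result. The sketch below checks this by Monte Carlo; all parameter values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
lam, k, gamma_ = 0.3, 0.1, 2.0  # jump rate, decay rate, 1/mean amplitude (assumed)

# Simulate x' = -k*x between jumps; jumps ~ Exp(mean 1/gamma_) at Poisson rate lam.
x, t, t_end, samples = 0.0, 0.0, 200_000.0, []
while t < t_end:
    dt = rng.exponential(1.0 / lam)      # waiting time to the next jump
    x *= np.exp(-k * dt)                 # deterministic decay between jumps
    samples.append(x)                    # pre-jump value: by PASTA, a stationary sample
    x += rng.exponential(1.0 / gamma_)   # exponential jump amplitude
    t += dt

# Classical shot-noise result: stationary law is Gamma(shape=lam/k, scale=1/gamma_).
th = stats.gamma(a=lam / k, scale=1.0 / gamma_)
print(f"mean: empirical {np.mean(samples):.3f} vs Gamma {th.mean():.3f}")
print(f"var:  empirical {np.var(samples):.3f} vs Gamma {th.var():.3f}")
```

In the soil-moisture reading of this model, the jumps are rainfall depths and the decay is evapotranspirative loss, so changing the jump distribution reshapes the stationary moisture density.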

Relevance: 100.00%

Abstract:

The goal of this study was to characterize the image quality of our dedicated, quasi-monochromatic-spectrum, cone-beam breast imaging system under scatter-corrected and uncorrected conditions for a variety of breast compositions. CT projections were acquired of a breast phantom containing two concentric sets of acrylic spheres that varied in size (1-8 mm) based on their polar position. The breast phantom was filled with 3 different concentrations of methanol and water, simulating a range of breast densities (0.79-1.0 g/cc); acrylic yarn was sometimes included to simulate the connective tissue of a breast. For each phantom condition, 2D scatter was measured for all projection angles. Scatter-corrected and uncorrected projections were then reconstructed with an iterative ordered-subsets convex algorithm. Reconstructed image quality was characterized using SNR and contrast analysis, followed by a human-observer detection task for the spheres in the different concentric rings. Results show that scatter correction effectively reduces the cupping artifact and improves image contrast and SNR. Results from the observer study indicate that there was no statistical difference in the number or sizes of lesions observed in the scatter-corrected versus uncorrected images for any density. Nonetheless, applying scatter correction for differing breast conditions improves overall image quality.

Relevance: 100.00%

Abstract:

We consider parabolic PDEs with randomly switching boundary conditions. In order to analyze these random PDEs, we consider more general stochastic hybrid systems and prove convergence to, and properties of, a stationary distribution. Applying these general results to the heat equation with randomly switching boundary conditions, we find explicit formulae for various statistics of the solution and obtain almost sure results about its regularity and structure. These results are of particular interest for biological applications as well as for their significant departure from behavior seen in PDEs forced by disparate Gaussian noise. Our general results also have applications to other types of stochastic hybrid systems, such as ODEs with randomly switching right-hand sides.
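A minimal sketch of the kind of system studied here, under assumed parameters: the 1D heat equation with u(0) = 0 fixed and the boundary value at x = 1 switching between 0 and 1 at the jump times of a two-state Markov process, integrated with an explicit finite-difference scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1D heat equation u_t = D u_xx, u(0) = 0; u(1) switches between 0 and 1
# at exponential times (rates r01, r10). All parameters illustrative.
D, N = 1.0, 51
dx = 1.0 / (N - 1)
dt = 0.4 * dx**2 / D              # explicit-scheme stability: dt <= dx^2 / (2D)
r01, r10 = 5.0, 5.0               # switching rates 0->1 and 1->0

u = np.zeros(N)
state, t_next = 0, rng.exponential(1.0 / r01)
t, T_end, midpoint_history = 0.0, 50.0, []
while t < T_end:
    if t >= t_next:               # Markov switch of the right boundary value
        state = 1 - state
        t_next = t + rng.exponential(1.0 / (r10 if state else r01))
    u[0], u[-1] = 0.0, float(state)
    u[1:-1] += dt * D * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    t += dt
    midpoint_history.append(u[N // 2])

# With symmetric rates the stationary mean boundary value is 1/2, so by
# linearity the long-run mean profile is x/2 and the midpoint average -> 0.25.
print("time-averaged midpoint value:", np.mean(midpoint_history).round(3))
```

Moments of the solution are computable in this way, while the almost-sure regularity results of the paper concern properties no single moment captures.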

Relevance: 100.00%

Abstract:

INTRODUCTION: We previously reported models that characterized the synergistic interaction between remifentanil and sevoflurane in blunting responses to verbal and painful stimuli. This preliminary study evaluated the ability of these models to predict a return of responsiveness during emergence from anesthesia and a response to tibial pressure when patients required analgesics in the recovery room. We hypothesized that model predictions would be consistent with observed responses. We also hypothesized that under non-steady-state conditions, accounting for the lag time between sevoflurane effect-site concentration (Ce) and end-tidal (ET) concentration would improve predictions. METHODS: Twenty patients received a sevoflurane, remifentanil, and fentanyl anesthetic. Two model predictions of responsiveness were recorded at emergence: an ET-based and a Ce-based prediction. Similarly, 2 predictions of a response to noxious stimuli were recorded when patients first required analgesics in the recovery room. Model predictions were compared with observations with graphical and temporal analyses. RESULTS: While patients were anesthetized, model predictions indicated a high likelihood that patients would be unresponsive (≥99%). However, after termination of the anesthetic, models exhibited a wide range of predictions at emergence (1%-97%). Although wide, the Ce-based predictions of responsiveness were better distributed over a percentage ranking of observations than the ET-based predictions. For the ET-based model, 45% of the patients awoke within 2 min of the 50% model predicted probability of unresponsiveness and 65% awoke within 4 min. For the Ce-based model, 45% of the patients awoke within 1 min of the 50% model predicted probability of unresponsiveness and 85% awoke within 3.2 min. Predictions of a response to a painful stimulus in the recovery room were similar for the Ce- and ET-based models. DISCUSSION: Results confirmed, in part, our study hypothesis; accounting for the lag time between Ce and ET sevoflurane concentrations improved model predictions of responsiveness but had no effect on predicting a response to a noxious stimulus in the recovery room. These models may be useful in predicting events of clinical interest but large-scale evaluations with numerous patients are needed to better characterize model performance.
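The Ce/ET lag discussed above is conventionally modeled as a first-order transfer, dCe/dt = ke0 (Cet - Ce). The sketch below integrates this relation through an assumed exponential washout of end-tidal sevoflurane to show how the effect-site concentration trails the measured concentration at emergence; the ke0 value, concentrations, and washout time constant are illustrative assumptions, not the study's fitted parameters.

```python
import numpy as np

# First-order effect-site model: dCe/dt = ke0 * (Cet - Ce).
ke0 = 0.24          # 1/min, illustrative effect-site rate constant (assumed)
dt = 0.05           # min
t = np.arange(0, 15, dt)

# Assumed washout: end-tidal sevoflurane falls exponentially after the
# vaporizer is turned off at t = 0 (2.0 vol% -> 0, time constant 2 min).
Cet = 2.0 * np.exp(-t / 2.0)

Ce = np.empty_like(t)
Ce[0] = 2.0                               # start at steady state, Ce = ET
for i in range(1, len(t)):                # forward-Euler integration
    Ce[i] = Ce[i - 1] + dt * ke0 * (Cet[i - 1] - Ce[i - 1])

for minute in (0, 2, 5, 10):
    i = int(minute / dt)
    print(f"t = {minute:2d} min   ET = {Cet[i]:.2f} vol%   Ce = {Ce[i]:.2f} vol%")
```

Because Ce stays above ET during washout, an ET-based prediction of awakening runs systematically early, which is consistent with the reported advantage of the Ce-based model.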

Relevance: 100.00%

Abstract:

The outcomes of both (i) radiation therapy and (ii) preclinical small-animal radiobiology studies depend on the delivery of a known quantity of radiation to a specific and intentional location. Adverse effects can result from these procedures if the dose to the target is too high or too low, and can also result from an incorrect spatial distribution, in which nearby normal healthy tissue can be undesirably damaged by poor radiation delivery techniques. Thus, in mice and humans alike, the spatial dose distributions from radiation sources should be well characterized in terms of the absolute dose quantity, and with pin-point accuracy. When dealing with the steep spatial dose gradients consequential to either (i) high-dose-rate (HDR) brachytherapy or (ii) the small organs and tissue inhomogeneities of mice, obtaining accurate and highly precise dose results can be very challenging, given that commercially available radiation detection tools, such as ion chambers, are often too large for in-vivo use.

In this dissertation two tools are developed and applied for both clinical and preclinical radiation measurement. The first tool is a novel radiation detector for acquiring physical measurements, fabricated from an inorganic nano-crystalline scintillator fixed on an optical fiber terminus. This dosimeter allows the measurement of point doses at sub-millimeter resolution and can be placed in-vivo in humans and small animals. Real-time data is displayed to the user to provide instant quality assurance and dose-rate information. The second tool utilizes an open-source Monte Carlo particle transport code and was applied in small-animal dosimetry studies to calculate organ doses, recommend new techniques of dose prescription in mice, and characterize dose to the murine bone-marrow compartment with micron-scale resolution.

Hardware design changes were implemented to reduce the overall fiber diameter to <0.9 mm for the nano-crystalline scintillator based fiber-optic detector (NanoFOD) system. Lower limits of device sensitivity were found to be approximately 0.05 cGy/s. Herein, this detector was demonstrated to perform quality assurance of clinical 192Ir HDR brachytherapy procedures, providing dose measurements comparable to thermoluminescent dosimeters and accuracy within 20% of the treatment planning software (TPS) for the 27 treatments conducted, with an inter-quartile range of the measured-to-TPS dose ratio of 0.08 (0.94-1.02). After removing contaminant signals (Cerenkov and diode background), calibration of the detector enabled accurate dose measurements for vaginal-applicator brachytherapy procedures. For 192Ir use, the energy response changed by a factor of 2.25 over SDD values of 3 to 9 cm; however, a cap of 0.2 mm thick silver reduced the energy dependence to a factor of 1.25 over the same SDD range, at the cost of reducing overall sensitivity by 33%.

For preclinical measurements, dose accuracy of the NanoFOD was within 1.3% of MOSFET-measured dose values in a cylindrical mouse phantom at 225 kV for x-ray irradiation at angles of 0, 90, 180, and 270°. The NanoFOD exhibited small changes in angular sensitivity, with a coefficient of variation (COV) of 3.6% at 120 kV and 1% at 225 kV. When the NanoFOD was placed alongside a MOSFET in the liver of a sacrificed mouse and treatment was delivered at 225 kV with a 0.3 mm Cu filter, the dose difference was only 1.09% with the 4x4 cm collimator and -0.03% with no collimation. Additionally, the NanoFOD utilized a scintillator of 11 µm thickness to measure small x-ray fields for microbeam radiation therapy (MRT) applications, and achieved 2.7% dose accuracy for the microbeam peak in comparison to radiochromic film. Modest differences in the measured full-width-at-half-maximum lateral dimension of the MRT beam were observed between the NanoFOD (420 µm) and radiochromic film (320 µm), but these differences have been explained mostly as an artifact of the geometry used and volumetric effects in the scintillator material. Characterization of the energy dependence of the yttrium-oxide-based scintillator material was performed in the range of 40-320 kV (2 mm Al filtration), and the maximum device sensitivity was achieved at 100 kV. Tissue-maximum-ratio measurements were carried out on a small-animal x-ray irradiator system at 320 kV and demonstrated an average difference of 0.9% compared to a MOSFET dosimeter over the range of 2.5 to 33 cm depth in tissue-equivalent plastic blocks. Irradiation of the NanoFOD fiber and scintillator material on a 137Cs gamma irradiator to 1600 Gy did not produce any measurable change in light output, suggesting that the NanoFOD system may be re-used without replacement or recalibration over its lifetime.

For small-animal irradiator systems, researchers can deliver a given dose to a target organ by controlling the exposure time. Currently, researchers calculate this exposure time by dividing the total dose they wish to deliver by a single provided dose-rate value, a method that is independent of the target organ. Studies conducted here used Monte Carlo particle transport codes to justify a new method of dose prescription in mice that considers organ-specific doses. Monte Carlo simulations were performed in the Geant4 Application for Tomographic Emission (GATE) toolkit using a MOBY mouse whole-body phantom. The non-homogeneous phantom consisted of 256x256x800 voxels of size 0.145x0.145x0.145 mm3. Differences of up to 20-30% in dose to soft-tissue target organs were demonstrated, and methods for alleviating these errors during whole-body irradiation of mice were suggested, utilizing organ-specific and x-ray-tube-filter-specific dose rates for all irradiations.
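A minimal sketch of the prescription change argued for above: exposure time is the prescribed dose divided by a dose rate, and the proposal is to look that rate up per organ and per tube filter rather than using a single machine-wide value. The dose-rate numbers below are hypothetical placeholders, not values from the dissertation.

```python
# Hypothetical organ- and filter-specific dose rates (cGy/min); placeholder values.
DOSE_RATE = {
    ("liver", "0.3mm_Cu"): 210.0,
    ("lung",  "0.3mm_Cu"): 245.0,
    ("liver", "2.0mm_Al"): 260.0,
}

def exposure_time(dose_cgy: float, organ: str, filt: str) -> float:
    """Exposure time (min) = prescribed dose / organ- and filter-specific rate."""
    return dose_cgy / DOSE_RATE[(organ, filt)]

# A single machine-wide rate can misestimate organ dose on the 20-30% scale
# noted above; compare the organ-specific times for a 400 cGy prescription.
for organ in ("liver", "lung"):
    print(f"{organ}: {exposure_time(400.0, organ, '0.3mm_Cu'):.2f} min")
```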

Monte Carlo analysis was used on 1 µm resolution CT images of a mouse femur and a mouse vertebra to calculate the dose gradients within the bone marrow (BM) compartment of mice for different radiation beam qualities relevant to x-ray and isotope-type irradiators. Results indicated that soft x-ray beams (160 kV at 0.62 mm Cu HVL and 320 kV at 1 mm Cu HVL) lead to a substantially higher dose to BM in close proximity to mineral bone (within about 60 µm) than hard x-ray beams (320 kV at 4 mm Cu HVL) and isotope-based gamma irradiators (137Cs). The average dose increases to the BM in the vertebra for these four radiation beam qualities were found to be 31%, 17%, 8%, and 1%, respectively. Both in-vitro and in-vivo experimental studies confirmed these simulation results, demonstrating that the 320 kV, 1 mm Cu HVL beam caused statistically significant increased killing of BM cells at 6 Gy dose levels in comparison to both the 320 kV, 4 mm Cu HVL and the 662 keV 137Cs beams.

Relevance: 100.00%

Abstract:

This PhD thesis gathers the results of research dealing with the causes of the spatial organisation of periodic vegetations. These landscape structures, featuring regular spotted, labyrinthine or banded patterns of decametric to hectometric scale, and extending over considerable areas on at least three continents, constitute an ideal case study for approaching the endogenous processes that lead to vegetation heterogeneity. The patterns occur over a homogeneous substratum, apart from the vegetation's own feedbacks, and are marked by sharp ecotones and the persistence of a considerable amount of bare soil. A number of models have suggested a possible case of self-organized patterning, in which the overall structure emerges from local interactions between individuals. Those models rest on the interplay of competitive and facilitative effects, relating to soil water consumption and to soil water budget enhancement by vegetation. A general necessary condition for pattern formation is that negative interactions (competition) have a larger range than positive interactions (facilitation). Moreover, all models agree that patterning occurs when vegetation growth decreases, for instance as a result of reduced water availability, domestic grazing or wood cutting, thereby viewing patterns as a self-organised response to environmental constraints. However, the modus operandi of the spatial interactions between individual plants remains largely to be specified.

We carried out field research in South-West Niger, within and around the W Regional Park. Three research lines were explored: (i) a study of the spatial dependency between the vegetation pattern (mapped biovolumes) and the factors of the abiotic environment (soil, relief), on the basis of spectral and cross-spectral analyses with the Fourier transform (1D and 2D); (ii) a broad-scale diachronic study (1956, 1975, 1996) of the influence of aridity and human-induced pressures on vegetation self-patterning, based on the characterisation of patterns in high-resolution remote sensing data via the 2D Fourier transform; (iii) three different approaches to the spatial interactions between individuals: root-system excavation with pulsed air, spatio-temporal monitoring of the soil water budget (gypsum-block method), and water-resource labelling with deuterated water.

We established that periodic vegetations are indeed the result of a self-organisation process, occurring under homogeneous substratum conditions and modulated by climatic and human constraints. A rapid adjustment between vegetation patterning and climate was observed in protected zones: the area and patterning of the periodic vegetations successively progressed and regressed, following drier or wetter climate conditions. Outside protected areas, by contrast, the restoration ability of the vegetation appeared to depend on the degree of exploitation of vegetation resources. These results have important implications for the study of vegetation-climate interactions and the evaluation of ecosystems' carrying capacities. Spatial pattern characterisation in arid vegetations using the Fourier transform of high-resolution remote sensing data should be generalised as a tool for monitoring the state of these ecosystems. Our studies of spatial interaction mechanisms confirmed the existence of a short-range facilitation of the cover on the water resource. However, this facilitation does not seem to act through the commonly invoked infiltration component, but rather on the evaporation rate (halved beneath canopies). This mechanism excludes underground diffusive transfers between bare ground and thickets; if anything, transfers in the opposite direction were indicated by the deuterium labelling. The water budget study and micro-elevation mapping, together with the markedly shallow root-exploitation zone, cast serious doubt on the commonly accepted mechanism of run-off/surface diffusion of water as the key process in long-range competition between plants. An alternative explanation lies in root competition of range exceeding the canopies. This hypothesis finds support both in the excavated root systems, which were shallow and extensive, and in the isotopic labelling, which showed contamination of shrubs located more than 15 m from the labelling zone. The water budget study also evidenced simultaneous, contradictory effects (facilitation/competition) of shrubs on evapotranspiration.
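The pattern-formation condition summarized above (facilitation of shorter range than competition) can be made concrete with a linear-stability sketch: for a difference-of-Gaussians interaction kernel, perturbations of a uniform cover grow fastest at the wavenumber maximizing the kernel's Fourier transform, predicting a periodic pattern at the corresponding wavelength. Kernel amplitudes and ranges below are illustrative assumptions.

```python
import numpy as np

# Interaction kernel: short-range facilitation minus long-range competition.
# Ranges and amplitudes are illustrative (decametric patterns suggest ~10 m).
sigma_f, sigma_c = 3.0, 10.0   # facilitation and competition ranges (m)
A_f, A_c = 1.0, 1.5            # competition stronger overall: w_hat(0) < 0

# Fourier transform of the difference-of-Gaussians kernel (growth-rate proxy):
k = np.linspace(1e-3, 2.0, 4000)
w_hat = (A_f * np.exp(-0.5 * (sigma_f * k) ** 2)
         - A_c * np.exp(-0.5 * (sigma_c * k) ** 2))

k_star = k[np.argmax(w_hat)]
print(f"fastest-growing wavenumber k* = {k_star:.3f} rad/m")
print(f"predicted pattern wavelength  = {2 * np.pi / k_star:.1f} m")
print(f"growth positive at k*? {w_hat.max() > 0}; uniform mode w_hat(0) = "
      f"{A_f - A_c:.2f} < 0")
```

With these values the uniform state is stable to uniform perturbations but unstable at a finite wavenumber, yielding a decametric wavelength of the order reported for such landscapes.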

Relevance: 100.00%

Abstract:

In this paper a computer simulation tool capable of modelling multi-physics processes in complex geometry has been developed and applied to the casting process. The quest for high-quality complex casting components demanded by the aerospace and automobile industries requires more precise numerical modelling techniques, and ones that are generic and modular in their approach to modelling multi-process problems. For such a computer model to be successful in shape casting, the complete casting process needs to be addressed, the major events being:

• Filling of hot liquid metal into a cavity mould
• Solidification and latent heat evolution of liquid metal
• Convection currents generated in liquid metal by thermal gradients
• Deformation of the cast and stress development in solidified metal
• Macroscopic porosity formation

The above phenomena combine the analysis of fluid flow, heat transfer, change of phase and thermal stress development. None of these events can be treated in isolation, as they inexorably interact with each other in a complex way. Conditions such as the design of the running system, the location of feeders and chills, the moulding materials and the types of boundary conditions can all affect the final cast quality, and must be appropriately represented in the model.

Relevance: 100.00%

Abstract:

An integrated fire spread model is presented in this study, including several sub-models representing different phenomena of gaseous and solid combustion. The integrated model comprises the following sub-models: a gaseous combustion model, a thermal radiation model that includes the effects of soot, and a pyrolysis model for charring combustible solids. The gaseous and solid phases are linked together through the boundary conditions of the governing equations for the flow domain and the solid region, respectively. The integrated model is used to simulate a fire spread experiment conducted in a half-scale test compartment. Good qualitative and reasonable quantitative agreement is achieved between the experiment and the numerical predictions.

Relevance: 100.00%

Abstract:

The last few years have seen a substantial increase in geometric complexity for 3D flow simulation. In this paper we describe the challenges in generating computational grids for 3D aerospace configurations and demonstrate the progress made towards an eventual push-button technology from CAD to visualized flow. Special emphasis is given to the interfacing from the grid generator to the flow solver by semi-automatic generation of boundary conditions during the grid generation process. In this regard, once a grid has been generated, push-button operation of most commercial flow solvers has been achieved. This is demonstrated by an ad hoc simulation for the Hopper configuration.

Relevance: 100.00%

Abstract:

We consider the problem of finding the heat distribution and the shape of the liquid fraction during laser welding of a thick steel plate, using the finite-volume CFD package PHYSICA. Since the shape of the keyhole is not known in advance, the following two-step approach has been employed. In the first stage, we determine the geometry of the keyhole for the steady-state case and form an appropriate mesh that includes both the workpiece and the keyhole. In the second stage, we impose the boundary conditions by assigning a temperature to the walls of the keyhole, and find the heat distribution and the shape of the liquid fraction for a given welding speed and material properties. We construct a fairly accurate approximation of the keyhole as a sequence of sliced cones. A formula for finding the initial radius of the keyhole is derived by determining the radius of the vaporisation isotherm for the line heat source. We report on the results of a series of computational experiments for various heat input values and welding velocities.
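One hedged reading of the initial-radius calculation: if the moving line source is modeled with the Rosenthal thin-plate solution, T - T0 = (q / (2 pi k d)) exp(-v x / 2 alpha) K0(v r / 2 alpha), then the initial keyhole radius is the lateral distance at which T reaches the vaporisation temperature. The sketch below solves this with a root finder; all material and process values are illustrative, and the paper's actual formula may differ.

```python
import math
from scipy.special import k0      # modified Bessel function K0
from scipy.optimize import brentq

# Illustrative parameters (not the paper's values):
q = 5.0e3        # absorbed line-source power, W
d = 0.01         # plate thickness, m
kappa = 30.0     # thermal conductivity, W/(m K)
alpha = 8.0e-6   # thermal diffusivity, m^2/s
v = 0.01         # welding speed, m/s
dT_vap = 2800.0  # vaporisation temperature rise above ambient, K

def excess_temperature(r):
    """Rosenthal thin-plate line-source temperature rise at lateral distance r."""
    return q / (2.0 * math.pi * kappa * d) * k0(v * r / (2.0 * alpha))

# Keyhole radius: lateral distance where T - T0 equals the vaporisation rise.
# K0 decreases monotonically, so a simple bracket suffices for brentq.
r0 = brentq(lambda r: excess_temperature(r) - dT_vap, 1e-7, 0.05)
print(f"initial keyhole radius ~ {r0 * 1e3:.2f} mm")
```

With these numbers the radius comes out at a fraction of a millimetre, the expected order of magnitude for a laser keyhole, and it shrinks as the welding speed increases, consistent with the parametric studies reported.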