977 results for CONVECTIVE PARAMETERIZATION
Abstract:
Given a 2-manifold triangular mesh \(M \subset {\mathbb {R}}^3\) with border, a parameterization of \(M\) is a FACE or trimmed surface \(F=\{S,L_0,\ldots,L_m\}\): \(F\) is a connected subset or region of a parametric surface \(S\), bounded by a set of LOOPs \(L_0,\ldots,L_m\) such that each \(L_i \subset S\) is a closed 1-manifold having no intersection with the other \(L_j\) LOOPs. The parametric surface \(S\) is a statistical fit of the mesh \(M\). \(L_0\) is the outermost LOOP bounding \(F\), and \(L_i\) is the LOOP of the i-th hole in \(F\) (if any). The problem of parameterizing triangular meshes is relevant for reverse engineering, tool path planning, feature detection, redesign, etc. State-of-the-art mesh procedures parameterize a rectangular mesh \(M\). To improve on such procedures, we report here the implementation of an algorithm which parameterizes meshes \(M\) presenting holes and concavities. We synthesize a parametric surface \(S \subset {\mathbb {R}}^3\) which approximates a superset of the mesh \(M\). Then, we compute a set of LOOPs trimming \(S\), thereby completing the FACE \(F=\{S,L_0,\ldots,L_m\}\). Our algorithm gives satisfactory results for \(M\) having low Gaussian curvature (i.e., \(M\) being quasi-developable or developable). This assumption is a reasonable one, since \(M\) is the product of manifold segmentation preprocessing. Our algorithm computes: (1) a manifold learning mapping \(\phi : M \rightarrow U \subset {\mathbb {R}}^2\), and (2) an inverse mapping \(S: W \subset {\mathbb {R}}^2 \rightarrow {\mathbb {R}}^3\), with \(W\) being a rectangular grid containing and surpassing \(U\). To compute \(\phi\) we test IsoMap, Laplacian Eigenmaps and Hessian locally linear embedding (best results with HLLE). For the back mapping (NURBS) \(S\), the crucial step is to find a control polyhedron \(P\), which is an extrapolation of \(M\). We calculate \(P\) by extrapolating radial basis functions that interpolate points inside \(\phi(M)\). We successfully test our implementation with several datasets that present concavities and holes and are extremely nondevelopable. Ongoing work is devoted to manifold segmentation that facilitates mesh parameterization.
Abstract:
Strong convective events can produce extreme precipitation, hail, lightning or gusts, potentially inducing severe socio-economic impacts. These events have a relatively small spatial extent and, in most cases, a short lifetime. In this study, a model is developed for estimating convective extreme events from large-scale conditions. It is shown that strong convective events can be characterized by a Weibull distribution of radar-based rainfall with a low shape and a high scale parameter value. A radius of 90 km around a station reporting a convective situation turned out to be suitable. A methodology is developed to estimate the Weibull parameters, and thus the occurrence probability of convective events, from large-scale atmospheric instability and enhanced near-surface humidity, which are usually found on a larger scale than the convective event itself. Here, the probability of occurrence of extreme convective events is estimated from the KO-index, indicating the stability, and the relative humidity at 1000 hPa. Both variables are computed from the ERA-Interim reanalysis. In a first version of the methodology, these two variables are applied to estimate the spatial rainfall distribution and thus the occurrence of a convective event. The developed method shows significant skill in estimating the occurrence of convective events as observed at synoptic stations, by lightning measurements, and in severe weather reports. In order to take frontal influences into account, a scheme for the detection of atmospheric fronts is implemented. While generally higher instability is found in the vicinity of fronts, the skill of this approach is largely unchanged. Additional improvements were achieved by a bias correction and the use of ERA-Interim precipitation. The resulting estimation method is applied to the ERA-Interim period (1979-2014) to establish a ranking of estimated convective extreme events. 
Two strong estimated events that reveal a frontal influence are analysed in detail. As a second application, the method is applied to GCM-based decadal predictions in the period 1979-2014, which were initialized every year. Decadal predictive skill for convective event frequencies over Germany is found for the first 3-4 years after initialization.
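The Weibull characterization above (low shape, high scale for convective rainfall) can be illustrated with a small parameter-estimation sketch. It uses the closed-form moment estimates on log-transformed data (for X ~ Weibull(k, λ), mean(ln X) = ln λ − γ/k and var(ln X) = π²/(6k²)); the sample is synthetic, not the study's radar data:

```python
import numpy as np

# Synthetic "radar rainfall" drawn from a Weibull distribution with a
# low shape (k) and high scale (lam), as described for convective events.
rng = np.random.default_rng(0)
true_k, true_lam = 0.7, 5.0
x = true_lam * rng.weibull(true_k, size=20000)
x = x[x > 0]                        # guard the logarithm

# Moment estimates from log-transformed data:
#   var(ln X) = pi^2 / (6 k^2),  mean(ln X) = ln(lam) - gamma / k
EULER_GAMMA = 0.5772156649015329
lx = np.log(x)
k_hat = np.pi / (np.sqrt(6.0) * lx.std())
lam_hat = np.exp(lx.mean() + EULER_GAMMA / k_hat)
```

A shape estimate below 1 concentrates probability at weak rain while fattening the extreme tail, which is the signature used to flag convective situations.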
Abstract:
Since the majority of the world's population lives in cities, and since this number is expected to increase in the coming years, one of the biggest research challenges is the determination of the risk deriving from high temperatures experienced in urban areas, together with improving responses to climate-related disasters, for example by introducing into the urban context vegetation or built infrastructure that can improve air quality. In this work we investigate how different setups of the boundary and initial conditions imposed on an urban canyon generate different patterns of pollutant dispersion. To do so, we exploit the low computational cost of Reynolds-Averaged Navier-Stokes (RANS) simulations to reproduce the dynamics of an infinite array of two-dimensional square urban canyons. A pollutant is released at street level to mimic the presence of traffic. RANS simulations are run using the k-ɛ closure model, and vertical profiles of significant variables of the urban canyon, namely the velocity, the turbulent kinetic energy, and the concentration, are presented. This is done using the open-source software OpenFOAM, modifying the standard solver simpleFoam to include the concentration equation and the temperature by introducing a buoyancy term in the governing equations. The results of the simulation are validated against experimental results and products of Large-Eddy Simulations (LES) from previous works, showing that the simulation is able to reproduce all the quantities under examination with satisfactory accuracy. Moreover, this comparison shows that although LES is known to be more accurate, albeit more expensive, RANS simulations represent a reliable tool when a smaller computational cost is needed. Overall, this work exploits the low computational cost of RANS simulations to produce multiple scenarios useful for evaluating how the dispersion of a pollutant changes with a modification of key variables, such as the temperature.
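The concentration equation added to simpleFoam is an advection-diffusion balance. Its simplest relative, steady 1-D diffusion with a uniform source between two clean walls, has the analytic solution C(z) = S z (L − z) / (2D) and makes a convenient sanity check for any scalar-transport implementation. A pure NumPy sketch with illustrative values, unrelated to the paper's canyon setup:

```python
import numpy as np

# Steady 1-D diffusion of a scalar with uniform source S and C = 0 at
# both walls: D * C'' + S = 0, solved with a tridiagonal finite-difference
# system. All values are illustrative.
L, D, S, N = 1.0, 0.1, 2.0, 200
dz = L / N
z = np.linspace(0.0, L, N + 1)

# assemble -D * (C[i-1] - 2 C[i] + C[i+1]) / dz^2 = S for interior nodes
A = np.zeros((N - 1, N - 1))
np.fill_diagonal(A, 2.0 * D / dz**2)
idx = np.arange(N - 2)
A[idx, idx + 1] = -D / dz**2
A[idx + 1, idx] = -D / dz**2
b = np.full(N - 1, S)

C = np.zeros(N + 1)                  # Dirichlet walls C(0) = C(L) = 0
C[1:-1] = np.linalg.solve(A, b)

C_exact = S * z * (L - z) / (2.0 * D)
err = np.max(np.abs(C - C_exact))
```

Because the exact solution is quadratic, the central-difference scheme reproduces it to machine precision, so `err` is essentially round-off.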
Abstract:
This work approaches the forced-air cooling of strawberries by numerical simulation. The mathematical model used describes the heat transfer process, based on Fourier's law, in spherical coordinates, simplified to describe a one-dimensional process. To solve the equation of the mathematical model, an algorithm based on the explicit scheme of the finite-difference numerical method was developed and implemented in the scientific computing program MATLAB 6.1. The mathematical model was validated by comparison between theoretical and experimental data for strawberries cooled with forced air. The results showed that it is possible to determine the convective heat transfer coefficient by fitting the numerical to the experimental data. The numerical simulation methodology proved to be a promising decision-support tool for using or developing equipment for the forced-air cooling of spherical fruits.
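An explicit finite-difference scheme of the kind described, one-dimensional radial conduction in a sphere with a convective surface condition −k ∂T/∂r = h (T_s − T∞), can be sketched as follows. This is NumPy rather than MATLAB, and the radius, thermal properties, h, and temperatures are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

# Illustrative parameters for a strawberry-sized sphere (assumptions).
R = 0.015          # sphere radius [m]
alpha = 1.4e-7     # thermal diffusivity [m^2/s]
k = 0.55           # thermal conductivity [W/(m K)]
h = 25.0           # convective heat transfer coefficient [W/(m^2 K)]
T_inf = 1.0        # cooling-air temperature [C]
T0 = 20.0          # initial fruit temperature [C]

N = 50
dr = R / N
dt = 0.1 * dr**2 / alpha          # well inside the explicit stability limit
r = np.linspace(0.0, R, N + 1)
T = np.full(N + 1, T0)

t, t_end = 0.0, 1800.0            # simulate 30 minutes of cooling
while t < t_end:
    Tn = T.copy()
    # interior nodes: dT/dt = alpha * (T'' + (2/r) T')
    T[1:-1] = Tn[1:-1] + alpha * dt * (
        (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2]) / dr**2
        + (2.0 / r[1:-1]) * (Tn[2:] - Tn[:-2]) / (2 * dr)
    )
    # center (r = 0): by symmetry the spherical Laplacian -> 6 (T1 - T0) / dr^2
    T[0] = Tn[0] + alpha * dt * 6.0 * (Tn[1] - Tn[0]) / dr**2
    # surface: -k (T_N - T_{N-1})/dr = h (T_N - T_inf), one-sided difference
    T[-1] = (k / dr * T[-2] + h * T_inf) / (k / dr + h)
    t += dt
```

Fitting h, as the paper does, amounts to rerunning this loop for trial values of `h` until the simulated temperature history matches the measured one.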
Abstract:
Inulin is a fructooligosaccharide found in diverse agricultural products, among them garlic, banana, Jerusalem artichoke, and chicory root. Inulin is generally used in developed countries as a substitute for sugar and/or fat, owing to its characteristics as a functional and dietary food ingredient. Chicory root is usually used as the source and raw material for commercial extraction of inulin. The experiments consisted of drying sliced chicory roots according to a factorial experimental design in a convective dryer that allows the air to pass perpendicularly through the tray. The effective diffusivity (dependent variable) was determined for each experimental combination of the independent variables (air temperature and velocity). The drying curves were fitted by the solution of Fick's second law and by Page's model. The effective diffusivity varied from 3.51 × 10⁻¹⁰ m² s⁻¹ to 1.036 × 10⁻¹⁰ m² s⁻¹. It is concluded that, within the range of values studied, air temperature is the only statistically significant variable. Thus, a first-order mathematical model was obtained, representing the effective-diffusivity behavior as a function of air temperature. The best drying condition corresponded to the trial using the highest drying-air temperature.
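Page's model MR = exp(−k tⁿ), mentioned above, can be fitted by double-log linearization, ln(−ln MR) = ln k + n ln t, which turns the fit into ordinary least squares. A sketch on synthetic moisture-ratio data; the parameter values are illustrative, not the chicory results:

```python
import numpy as np

# Synthetic thin-layer drying curve following Page's model with noise.
rng = np.random.default_rng(1)
t = np.linspace(0.25, 8.0, 30)            # drying time [h]
k_true, n_true = 0.35, 1.1
mr = np.exp(-k_true * t**n_true) * (1 + 0.005 * rng.standard_normal(t.size))

# Double-log linearization: ln(-ln MR) = ln k + n ln t
y = np.log(-np.log(mr))
A = np.column_stack([np.ones_like(t), np.log(t)])
(lnk, n_hat), *_ = np.linalg.lstsq(A, y, rcond=None)
k_hat = np.exp(lnk)
```

The same two-parameter fit, repeated per experimental run, yields the drying constants whose temperature dependence the abstract summarizes with a first-order model.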
Abstract:
BACKGROUND AND OBJECTIVES: Intraoperative hypothermia is a frequent complication, favored by abdominal surgery. The objectives of this study were to assess the efficacy of combining conductive and convective warming methods in preventing hypothermia, and their effects on the postoperative recovery period. METHODS: Forty-three patients of both sexes, aged 18 to 88 years, undergoing xiphopubic laparotomy under general anesthesia with esophageal temperature monitoring, were randomly assigned to two warming groups: COND (n = 24), with a circulating-water mattress at 37°C under the back, and COND + CONV (n = 19), with the same condition plus a forced-air blanket at 42°C over the thorax and upper limbs. Weight, sex, age, and duration of surgery and anesthesia were analyzed, as were the temperatures at anesthetic induction (Mi), at consecutive hours (M1, M2), at the end of surgery (Mfo) and of anesthesia (Mfa), and at admission (Me-REC) and discharge (Ms-REC) from the post-anesthesia care unit (PACU), in addition to the incidence of shivering and complaints of cold in the postoperative period. RESULTS: The groups were similar in all variables analyzed, except for the temperatures at M2, M3, M4, Mfo and Mfa. The COND group showed a temperature decrease from the second hour after anesthetic induction, whereas the COND + CONV group did so only at the fourth hour. In COND, hypothermia was observed at PACU admission and discharge. CONCLUSIONS: Combining warming methods delayed the onset and reduced the intensity of intraoperative hypothermia, but did not reduce the incidence of complaints of cold and shivering.
Abstract:
A study is presented on linearly organized convective systems observed by a C-band meteorological radar in the semi-arid region of Northeast Brazil. Three days (27 to 29 March 1985) are analyzed, with emphasis on investigating the role played by local and large-scale factors in the development of the systems. In the large-scale scenario, the radar coverage area was influenced by a southern upper-air trough on the 27th and by an upper-level cyclonic vortex on the 29th. Near-surface moisture convergence favored convective activity on the 27th and 29th, whereas near-surface moisture divergence inhibited convective activity on the 28th. In the mesoscale scenario, it was observed that diurnal heating is an important factor for the formation of convective cells, together with the determining role of orography in the location of the echoes. In general, the radar images show the convective systems linearly organized over elevated areas, with intense convective cores surrounded by an area of stratiform precipitation. The results indicate that large-scale moisture flux convergence and radiative heating are determining factors in the evolution and development of the echoes in the study area.
Abstract:
This article deals with the modeling of the scavenging processes of particulate sulfate and gaseous sulfur dioxide, emphasizing the synoptic conditions at different sampling sites in order to verify the dominance of in-cloud or below-cloud scavenging processes in the Metropolitan Area of São Paulo (RMSP). Three sampling sites were chosen: GV (Granja Viana) on the RMSP outskirts, and IAG-USP and Mackenzie (RMSP center). Based on the synoptic conditions, a group of events was chosen to which the numerical modeling, a simple scavenging model, was applied. These synoptic conditions were usually convective cloud storms, which are common in the RMSP. The results show that the in-cloud processes were dominant (80%) in the sulfate/sulfur dioxide scavenging, with below-cloud processes accounting for around 20% of the total. Clearly convective events, with total rainfall higher than 20 mm, are better modeled than the stratiform events, with a correlation coefficient of 0.92. There is also a clear association between events presenting higher rainfall amounts and the ratio between the modeled and observed data sets, with a correlation coefficient of 0.63. Additionally, the suburban sampling site, GV, as expected owing to its distance from pollution sources, presents in general a smaller amount of rainwater sulfate (modeled and observed) than the central sampling site, Mackenzie, where the event characterization partially explains the differences in rainfall concentration.
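The below-cloud part of a simple scavenging model of this kind is often reduced to first-order decay of the airborne concentration, dC/dt = −ΛC, with a washout coefficient tied to rain intensity. A minimal sketch; the power-law form Λ = a R^b and its coefficients are generic assumptions, not the study's model:

```python
import numpy as np

def washout(c0, rain_mm_h, hours, a=1e-4, b=0.7):
    """Exponential below-cloud washout of an airborne concentration c0.

    Lambda = a * R**b [1/s] is a commonly used empirical form; the
    coefficients here are illustrative placeholders.
    """
    lam = a * rain_mm_h ** b           # washout coefficient [1/s]
    t = hours * 3600.0                 # exposure time [s]
    return c0 * np.exp(-lam * t)

c_light = washout(10.0, 2.0, 1.0)     # light stratiform rain
c_heavy = washout(10.0, 20.0, 1.0)    # convective downpour
```

The stronger depletion under convective rain is consistent with the abstract's finding that high-rainfall convective events are the better-modeled cases.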
Abstract:
The Low Level Jet (LLJ) observed in the Porto Alegre metropolitan region, Rio Grande do Sul State, Brazil, was analyzed using upper-air observations from 1989-2003 at 00:00 and 12:00 UTC. The LLJ classification criteria proposed by Bonner (1968) and modified by Whiteman et al. (1997) were applied to determine LLJ occurrence. Afterwards, an LLJ event was selected that was one of the most intense observed in summer (01/27/2002 at 12:00 UTC) during the study period. In this study the following tools were used: atmospheric soundings, GOES-8 satellite images, and wind, temperature and specific humidity fields from the GLOBAL, ETA and BRAMS models. Based on the numerical analysis it was possible to verify that the three models overestimated the specific humidity and potential temperature values at the time of LLJ occurrence. The wind speed was underestimated by the models. At 12:00 UTC (the hour the LLJ was detected in the Porto Alegre region), warm and moist air from the north was observed in the study region by the three models, generating conditions for Mesoscale Convective System (MCS) formation and intensification.
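The Bonner (1968) criteria applied above classify a wind profile by its low-level speed maximum and the decrease above it (commonly cited thresholds of 12/16/20 m s⁻¹ with drops of at least 6/8/10 m s⁻¹ up to about 3 km; treat the exact heights and thresholds here as assumptions). A sketch of such a classifier:

```python
import numpy as np

def classify_llj(z, speed, z_max=1500.0, z_top=3000.0):
    """Return Bonner-style LLJ category 1-3 (3 strongest) or None.

    z: heights [m], speed: wind speed [m/s]; thresholds are the commonly
    cited Bonner values and are assumptions of this sketch.
    """
    low = speed[z <= z_max]
    if low.size == 0:
        return None
    vmax = low.max()
    i_max = np.argmax(np.where(z <= z_max, speed, -np.inf))
    above = speed[(z > z[i_max]) & (z <= z_top)]
    drop = vmax - (above.min() if above.size else vmax)
    for i, (v_th, d_th) in enumerate([(20.0, 10.0), (16.0, 8.0), (12.0, 6.0)]):
        if vmax >= v_th and drop >= d_th:
            return 3 - i            # categories 3, 2, 1
    return None

# a profile with a 14 m/s maximum at 500 m falling to 5 m/s aloft
z = np.array([100, 300, 500, 800, 1200, 1800, 2500, 3000.0])
v = np.array([6, 10, 14, 13, 9, 6, 5, 5.0])
cat = classify_llj(z, v)
```

Applied to every sounding in a 1989-2003 archive, a function like this yields the occurrence statistics the abstract describes.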
Abstract:
Context. Abundance variations in moderately metal-rich globular clusters can give clues about the formation and chemical enrichment of globular clusters. Aims. CN, CH, Na, Mg and Al indices are measured in spectra of 89 stars of the template metal-rich globular cluster M71, and implications for internal mixing are discussed. Methods. Stars from the turn-off up to the Red Giant Branch (0.87 < log g < 4.65), observed with the GMOS multi-object spectrograph at the Gemini-North telescope, are analyzed. Radial velocities, colours, effective temperatures, gravities and spectral indices are determined for the sample. Results. Previous findings related to the CN bimodality and the CN-CH anticorrelation in stars of M71 are confirmed. We also find a CN-Na correlation and an Al-Na one, as well as a Mg2-Al anticorrelation. Conclusions. A combination of convective mixing and primordial pollution by AGB or massive stars in the early stages of globular cluster formation is required to explain the observations.
Abstract:
Context. The turbulent pumping effect corresponds to the transport of magnetic flux due to the presence of density and turbulence gradients in convectively unstable layers. In the induction equation it appears as an advective term, and for this reason it is expected to be important in solar and stellar dynamo processes. Aims. We explore the effects of turbulent pumping in a flux-dominated Babcock-Leighton solar dynamo model with a solar-like rotation law. Methods. As a first step, only vertical pumping was considered, through the inclusion of a radial diamagnetic term in the induction equation. In the second step, a latitudinal pumping term was included, and then a near-surface shear was added. Results. The results reveal the importance of the pumping mechanism in solving current limitations of mean-field dynamo modeling, such as the storage of the magnetic flux and the latitudinal distribution of the sunspots. If a meridional flow is assumed to be present only in the upper part of the convection zone, it is the full turbulent pumping that regulates both the period of the solar cycle and the latitudinal distribution of the sunspot activity. In models that consider shear near the surface, a second shell of toroidal field is generated above r = 0.95 R☉ at all latitudes. If the full pumping is also included, the polar toroidal fields are efficiently advected inwards, and the toroidal magnetic activity survives only at the observed latitudes near the equator. With regard to the parity of the magnetic field, only models that combine turbulent pumping with near-surface shear always converge to the dipolar parity. Conclusions. This result suggests that, under the Babcock-Leighton approach, the equatorward motion of the observed magnetic activity is governed by the latitudinal pumping of the toroidal magnetic field rather than by a large-scale coherent meridional flow. 
Our results support the idea that the parity problem is related to the quadrupolar imprint of the meridional flow on the poloidal component of the magnetic field, and that the turbulent pumping contributes positively to washing out this imprint.
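In mean-field language, the pumping discussed above enters the induction equation as an extra advective velocity \(\boldsymbol{\gamma}\) added to the mean flow (a standard textbook form, not transcribed from the paper):

```latex
\frac{\partial \langle \mathbf{B} \rangle}{\partial t}
  = \nabla \times \Big[ \left( \langle \mathbf{U} \rangle + \boldsymbol{\gamma} \right)
      \times \langle \mathbf{B} \rangle
    + \alpha \, \langle \mathbf{B} \rangle
    - \eta_t \, \nabla \times \langle \mathbf{B} \rangle \Big]
```

The radial (diamagnetic) component of \(\boldsymbol{\gamma}\) corresponds to the first modeling step in the abstract, and the latitudinal component to the second.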
Abstract:
In this Letter, we propose a new and model-independent cosmological test for the distance-duality (DD) relation, η = D_L(z)(1 + z)⁻²/D_A(z) = 1, where D_L and D_A are, respectively, the luminosity and angular diameter distances. For D_L we consider two sub-samples of Type Ia supernovae (SNe Ia) taken from the Constitution data, whereas the D_A distances are provided by two samples of galaxy clusters compiled by De Filippis et al. and Bonamente et al. by combining Sunyaev-Zel'dovich effect and X-ray surface brightness measurements. The SNe Ia redshifts of each sub-sample were carefully chosen to coincide with those of the associated galaxy cluster sample (Δz < 0.005), thereby allowing a direct test of the DD relation. Since for very low redshifts D_A(z) ≈ D_L(z), we have tested the DD relation by assuming that η is a function of the redshift, parameterized by two different expressions: η(z) = 1 + η₀z and η(z) = 1 + η₀z/(1 + z), where η₀ is a constant parameter quantifying a possible departure from the strict validity of the reciprocity relation (η₀ = 0). In the best scenario (linear parameterization), we obtain η₀ = −0.28 ± 0.44 (2σ, statistical + systematic errors) for the De Filippis et al. sample (elliptical geometry), a result only marginally compatible with the DD relation. However, for the Bonamente et al. sample (spherical geometry) the constraint is η₀ = −0.42 ± 0.34 (3σ, statistical + systematic errors), which is clearly incompatible with the distance-duality relation.
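The two η(z) parameterizations are easy to compare numerically: both reduce to η = 1 at z = 0, but the linear form grows without bound while the fractional form saturates at 1 + η₀ as z → ∞. A minimal sketch:

```python
def eta_linear(z, eta0):
    """eta(z) = 1 + eta0 * z  (departure grows without bound in z)."""
    return 1.0 + eta0 * z

def eta_frac(z, eta0):
    """eta(z) = 1 + eta0 * z / (1 + z)  (saturates at 1 + eta0)."""
    return 1.0 + eta0 * z / (1.0 + z)
```

At the cluster redshifts used here (z below about 0.9) the two forms differ only mildly, which is why both yield comparable constraints on η₀.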
Abstract:
Investigations of chaotic particle transport by drift waves propagating in the edge plasma of tokamaks with poloidal zonal flow are described. For large-aspect-ratio tokamaks, the influence of radial electric field profiles on convective cells and transport barriers, created by the nonlinear interaction between the poloidal flow and resonant waves, is investigated. For equilibria with edge shear flow, particle transport is seen to be reduced when the electric field shear is reversed. The transport reduction is attributed to the robust invariant tori that occur in nontwist Hamiltonian systems. This mechanism is proposed as an explanation for the transport reduction in the Tokamak Chauffage Alfvén Brésilien [R. M. O. Galvao, Plasma Phys. Controlled Fusion 43, 1181 (2001)] for discharges with a biased electrode at the plasma edge.
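The robust invariant tori invoked above are characteristic of nontwist maps. The standard nontwist map of del-Castillo-Negrete and Morrison is the usual minimal model; the parameter values below are illustrative, not those of the tokamak study:

```python
import numpy as np

def nontwist_map(x, y, a=0.5, b=0.2, n=1000):
    """Iterate the standard nontwist map n times.

    y' = y - b sin(2 pi x);  x' = x + a (1 - y'^2)  (mod 1).
    The nonmonotonic (nontwist) frequency profile a (1 - y^2) is what
    gives rise to the robust shearless invariant tori.
    """
    xs, ys = np.empty(n), np.empty(n)
    for i in range(n):
        y = y - b * np.sin(2.0 * np.pi * x)
        x = (x + a * (1.0 - y * y)) % 1.0
        xs[i], ys[i] = x, y
    return xs, ys

xs, ys = nontwist_map(0.1, 0.3)
```

Plotting many such orbits for different initial conditions reveals the shearless curve that blocks transport between the chaotic regions, the map analogue of the reversed-shear transport barrier.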
Abstract:
Cloud-aerosol interaction is a key issue in the climate system, affecting the water cycle, the weather, and the total energy balance, including the spatial and temporal distribution of latent heat release. Information on the vertical distribution of cloud droplet microphysics and thermodynamic phase as a function of temperature or height can be correlated with details of the aerosol field to provide insight into how these particles affect cloud properties and their consequences for cloud lifetime, precipitation, the water cycle, and the general energy balance. Unfortunately, today's experimental methods still lack the observational tools that can characterize the true evolution of the cloud microphysical, spatial and temporal structure at the cloud-droplet scale, and then link these characteristics to environmental factors and properties of the cloud condensation nuclei. Here we propose and demonstrate a new experimental approach (the cloud scanner instrument) that provides the microphysical information missed in current experiments and remote sensing options. Cloud scanner measurements can be performed from aircraft, ground, or satellite by scanning the side of the clouds from the base to the top, providing us with the unique opportunity of obtaining snapshots of the cloud droplet microphysical and thermodynamic states as a function of height and brightness temperature in clouds at several development stages. The brightness temperature profile of the cloud side can be directly associated with the thermodynamic phase of the droplets to provide information on the glaciation temperature as a function of different ambient conditions, aerosol concentration, and type. An aircraft prototype of the cloud scanner was built and flown in a field campaign in Brazil. 
The CLAIM-3D (3-Dimensional Cloud Aerosol Interaction Mission) satellite concept proposed here combines several techniques to simultaneously measure the vertical profile of cloud microphysics, thermodynamic phase, brightness temperature, and aerosol amount and type in the neighborhood of the clouds. The wide wavelength range and the use of multi-angle polarization measurements proposed for this mission allow us to estimate the availability and characteristics of aerosol particles acting as cloud condensation nuclei, and their effects on the cloud microphysical structure. These results can provide unprecedented details on the response of cloud droplet microphysics to natural and anthropogenic aerosols at the size scale where the interaction really happens.
Abstract:
We report on the event structure and double helicity asymmetry (A_LL) of jet production in longitudinally polarized p + p collisions at √s = 200 GeV. Photons and charged particles were measured by the PHENIX experiment at midrapidity |η| < 0.35, with the requirement of a high-momentum (> 2 GeV/c) photon in the event. Event structure, such as multiplicity, p_T density and thrust in the PHENIX acceptance, was measured and compared with the results from the PYTHIA event generator and the GEANT detector simulation. The shape of jets and the underlying event were well reproduced at this collision energy. For the measurement of jet A_LL, photons and charged particles were clustered with a seed-cone algorithm to obtain the cluster p_T sum (p_T^reco). The effect of the detector response and of the underlying events on p_T^reco was evaluated with the simulation. The production rate of reconstructed jets is satisfactorily reproduced by the next-to-leading-order perturbative quantum chromodynamics jet production cross section. For 4 < p_T^reco < 12 GeV/c, with an average beam polarization of ⟨P⟩ = 49%, we measured A_LL = −0.0014 ± 0.0037(stat) in the lowest p_T^reco bin (4-5 GeV/c) and −0.0181 ± 0.0282(stat) in the highest p_T^reco bin (10-12 GeV/c), with a beam polarization scale error of 9.4% and a p_T scale error of 10%. Jets in the measured p_T^reco range arise primarily from hard-scattered gluons with momentum fraction 0.02 < x < 0.3 according to PYTHIA. The measured A_LL is compared with predictions assuming various ΔG(x) distributions based on the Glück-Reya-Stratmann-Vogelsang parameterization. The present result imposes the limit −1.1 < ∫_{0.02}^{0.3} dx ΔG(x, μ² = 1 GeV²) < 0.4 at 95% confidence level, or ∫_{0.02}^{0.3} dx ΔG(x, μ² = 1 GeV²) < 0.5 at 99% confidence level.