974 results for TEST CASE GENERATION


Relevance: 80.00%

Abstract:

This work aims at dimensional reduction of non-linear isotropic hyperelastic plates in an asymptotically accurate manner. The problem is both geometrically and materially non-linear. The geometric non-linearity is handled by allowing for finite deformations and generalized warping, while the material non-linearity is incorporated through a hyperelastic material model. The development, based on the Variational Asymptotic Method (VAM) with moderate strains and a very small ratio of thickness to the shortest wavelength of the deformation along the plate reference surface as small parameters, begins with three-dimensional (3-D) non-linear elasticity and mathematically splits the analysis into a one-dimensional (1-D) through-the-thickness analysis and a two-dimensional (2-D) plate analysis. The major contributions of this paper are the derivation of closed-form analytical expressions for warping functions and stiffness coefficients and a set of recovery relations to express approximately the 3-D displacement, strain and stress fields. Consistent with the 2-D non-linear constitutive laws, a 2-D plate theory and corresponding finite element program have been developed. The present theory is validated against a standard test case, and the results match well. Distributions of 3-D results are provided for another test case. (c) 2012 Elsevier Ltd. All rights reserved.

Relevance: 80.00%

Abstract:

Flood is one of the most detrimental hydro-meteorological threats to mankind, which calls for very efficient flood assessment models. In this paper, we propose remote sensing based flood assessment using Synthetic Aperture Radar (SAR) imagery because of its imperviousness to unfavourable weather conditions. SAR images, however, suffer from speckle noise. Hence, the SAR image is processed in two stages: speckle removal filtering and image segmentation for flood mapping. The speckle noise is reduced using Lee, Frost and Gamma MAP filters, and a performance comparison of these filters is presented. From the results obtained, we deduce that the Gamma MAP filter is the most reliable. The Gamma MAP filtered image is then segmented using the Gray Level Co-occurrence Matrix (GLCM) and Mean Shift Segmentation (MSS). GLCM is a texture analysis method that separates image pixels into water and non-water groups based on their spectral features, whereas MSS is a gradient ascent method in which segmentation is carried out using both spectral and spatial information. The Kosi river flood is considered as a test case in our study. The segmentation results of both methods are comprehensively analysed, and we conclude that MSS is the more efficient for flood mapping.
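As a rough illustration of the first processing stage, the simplified (additive-form) Lee filter below smooths a synthetic speckled image. The window size, the global noise-variance estimate, and the gamma-distributed speckle model are all assumptions made for this sketch, not details taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=7, noise_var=None):
    """Simplified Lee speckle filter (additive-noise form).

    Each pixel is pulled toward its local mean, with a weight that
    preserves detail where local variance exceeds the noise variance.
    """
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img ** 2, size)
    var = np.maximum(sq_mean - mean ** 2, 0.0)
    if noise_var is None:
        noise_var = float(np.mean(var))  # crude global noise estimate
    weight = var / (var + noise_var + 1e-12)
    return mean + weight * (img - mean)

# Demo on a synthetic speckled image (multiplicative gamma noise model)
rng = np.random.default_rng(0)
clean = np.ones((64, 64))
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
filtered = lee_filter(speckled)
```

On homogeneous regions the weight collapses toward zero and the filter acts as a box average, which is why the speckle variance drops sharply.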

Relevance: 80.00%

Abstract:

A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov minimization scheme is developed for photoacoustic imaging. This approach is based on the least-squares QR (LSQR) decomposition, a well-known dimensionality reduction technique for large systems of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstruction of the initial pressure distribution, enabled by finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of a numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison. (C) 2013 Society of Photo-Optical Instrumentation Engineers (SPIE)
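The link between LSQR and Tikhonov regularization can be sketched as follows: SciPy's `lsqr` exposes the Tikhonov weight directly through its `damp` argument. The toy linear system, its dimensions, and the crude residual-times-norm selection rule below are illustrative assumptions, not the paper's actual method for choosing the optimal parameter.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Small synthetic linear inverse problem (stand-in for the photoacoustic system)
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 50))
x_true = np.zeros(50)
x_true[:5] = 1.0
b = A @ x_true + 0.05 * rng.standard_normal(100)

def tikhonov_lsqr(A, b, lam):
    # lsqr minimizes ||A x - b||^2 + lam^2 ||x||^2 when damp=lam
    return lsqr(A, b, damp=lam)[0]

# Crude L-curve-style selection: minimize residual norm * solution norm
lams = np.logspace(-3, 1, 20)
scores = []
for lam in lams:
    x = tikhonov_lsqr(A, b, lam)
    scores.append(np.linalg.norm(A @ x - b) * np.linalg.norm(x))
lam_opt = lams[int(np.argmin(scores))]
x_hat = tikhonov_lsqr(A, b, lam_opt)
```

Because LSQR only needs matrix-vector products, the same sweep scales to systems far too large for a dense SVD, which is the appeal in imaging problems.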

Relevance: 80.00%

Abstract:

Asymptotically accurate dimensional reduction from three to two dimensions and recovery of the 3-D displacement field of non-prestretched dielectric hyperelastic membranes are carried out using the Variational Asymptotic Method (VAM), with moderate strains and a very small ratio of the membrane thickness to the shortest wavelength of the deformation along the plate reference surface chosen as the small parameters for asymptotic expansion. The present work incorporates large deformations (displacements and rotations), material nonlinearity (hyperelasticity), and electrical effects. It begins with the 3-D nonlinear electroelastic energy and mathematically splits the analysis into a one-dimensional (1-D) through-the-thickness analysis and a 2-D nonlinear plate analysis. The major contribution of this paper is a comprehensive nonlinear through-the-thickness analysis which provides a 2-D energy asymptotically equivalent to the 3-D energy, a 2-D constitutive relation between the 2-D generalized strain and stress tensors for the plate analysis, and a set of recovery relations to express the 3-D displacement field. Analytical expressions are derived for the warping functions and stiffness coefficients. This is the first attempt to integrate an analytical work on an asymptotically accurate nonlinear electro-elastic constitutive relation for a compressible dielectric hyperelastic model with a generalized finite element analysis of plates to provide 3-D displacement fields using VAM. A unified software package `VAMNLM' (Variational Asymptotic Method applied to Non-Linear Material models) was developed to carry out the 1-D non-linear analysis (analytical), the 2-D non-linear finite element analysis and the 3-D recovery analysis. The applicability of the current theory is demonstrated through an actuation test case, for which distributions of 3-D displacements are provided. (C) 2014 Elsevier Ltd. All rights reserved.

Relevance: 80.00%

Abstract:

Atomization is the process of disintegration of a liquid jet into ligaments and subsequently into smaller droplets. A liquid jet injected from a circular orifice into a cross flow of air undergoes atomization primarily due to the interaction of the two phases rather than an intrinsic breakup. Direct numerical simulation of this process, resolving the finest droplets, is computationally very expensive and impractical. In the present study, we resort to multiscale modelling to reduce the computational cost. The primary breakup of the liquid jet is simulated using Gerris, an open source code which employs a Volume-of-Fluid (VOF) algorithm. The smallest droplets formed during primary atomization are modeled as Lagrangian particles. This one-way coupling approach is validated with the simple test case of tracking a particle in a Taylor-Green vortex. The temporal evolution of the liquid jet forming the spray is captured, and the flattening of the cylindrical liquid column prior to breakup is observed. The size distribution of the resultant droplets is presented at different distances downstream of the injection location, and their spatial evolution is analyzed.
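The Taylor-Green validation case can be sketched in a few lines: a tracer advected through the steady 2-D Taylor-Green field should conserve the streamfunction ψ = sin x · sin y along its path, since it follows a closed streamline. The RK4 integrator, step size, and starting point below are assumptions for illustration; the study's one-way coupling inside Gerris is far more involved.

```python
import numpy as np

def velocity(p):
    """Steady 2-D Taylor-Green velocity field: u = sin x cos y, v = -cos x sin y."""
    x, y = p
    return np.array([np.sin(x) * np.cos(y), -np.cos(x) * np.sin(y)])

def rk4_step(p, dt):
    """One classical Runge-Kutta step for the tracer position."""
    k1 = velocity(p)
    k2 = velocity(p + 0.5 * dt * k1)
    k3 = velocity(p + 0.5 * dt * k2)
    k4 = velocity(p + dt * k3)
    return p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

p = np.array([1.0, 0.5])                  # arbitrary starting point
psi0 = np.sin(p[0]) * np.sin(p[1])        # streamfunction at t = 0
for _ in range(2000):                     # integrate to t = 20
    p = rk4_step(p, 0.01)
psi = np.sin(p[0]) * np.sin(p[1])         # should match psi0 closely
```

Checking the drift of ψ is a convenient pass/fail criterion for any particle-tracking scheme before coupling it to a flow solver.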

Relevance: 80.00%

Abstract:

Identification of residue-residue contacts from primary sequence can be used to guide protein structure prediction. Using Escherichia coli CcdB as the test case, we describe an experimental method termed saturation-suppressor mutagenesis to acquire residue contact information. In this methodology, for each of five inactive CcdB mutants, exhaustive screens for suppressors were performed. Proximal suppressors were accurately discriminated from distal suppressors based on their phenotypes when present as single mutants. Experimentally identified putative proximal pairs formed spatial constraints to recover >98% of native-like models of CcdB from a decoy dataset. The suppressor methodology was also applied to the integral membrane protein diacylglycerol kinase A, where the structures determined by X-ray crystallography and NMR were significantly different. Suppressor as well as sequence co-variation data clearly point to the X-ray structure being the functional one adopted in vivo. The methodology is applicable to any macromolecular system for which a convenient phenotypic assay exists.

Relevance: 80.00%

Abstract:

The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first-order quantum electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are here studied in detail, up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps -- but the computation has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation, and can be carried out with the aid of the Reduce algebra manipulation computer program.

The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at threshold annihilation-in-flight, Coulomb divergences appear while infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.

Using the lowest order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.
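The polynomial-versus-rational point can be illustrated with a modern algebra system (SymPy here, standing in for Reduce): a redundant symbol `D` replaces a propagator-like denominator so that intermediate manipulation stays polynomial, and the definition is substituted back only at the end. The expression itself is a made-up stand-in, not one of the actual diagram amplitudes.

```python
import sympy as sp

k2, m = sp.symbols('k2 m', positive=True)
# Redundant variable standing for the rational factor 1/(k2 - m**2):
D = sp.symbols('D')

# All intermediate work is polynomial in k2, m and D -- expansion,
# collection, and cancellation stay cheap and deterministic.
expr = sp.expand(k2 * D + m**2 * D)

# Substitute the definition of D back only at the very end:
result = sp.simplify(expr.subs(D, 1 / (k2 - m**2)))
```

Deferring the substitution keeps the intermediate expressions from blowing up into large rational functions, which is the pitfall the paragraph above warns about.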

Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.

A systematic approach to the Feynman-Brown method of Decomposition of single loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be straightforwardly programmed in the Reduce algebra manipulation language.

The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods which were used to evaluate them -- primarily dispersion techniques -- are briefly discussed.

Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first order correction of the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on obtaining access to adequate computer time and core capacity.

Relevance: 80.00%

Abstract:

The uptake of Cu, Zn, and Cd by fresh water plankton was studied by analyzing samples of water and plankton from six lakes in southern California. Co, Pb, Mn, Fe, Na, K, Mg, Ca, Sr, Ba, and Al were also determined in the plankton samples. Special precautions were taken during sampling and analysis to avoid metal contamination.

The relation between aqueous metal concentrations and the concentrations of metals in plankton was studied by plotting aqueous and plankton metal concentrations vs time and comparing the plots. No plankton metal plot showed the same changes as its corresponding aqueous metal plot, though long-term trends were similar. Thus, passive sorption did not completely explain plankton metal uptake.

The fractions of Cu, Zn, and Cd in lake water which were associated with plankton were calculated and these fractions were less than 1% in every case.

To see whether or not plankton metal uptake could deplete aqueous metal concentrations by measurable amounts (e.g. 20%) in short periods (e.g. less than six days), three integrated rate equations were used as models of plankton metal sorption. Parameters for the equations were taken from actual field measurements. Measurable reductions in concentration within short times were predicted by all three equations when the concentration factor was greater than 10^5. All Cu concentration factors were less than 10^5.
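A hedged back-of-the-envelope version of the concentration-factor argument: at sorption equilibrium, the fraction of aqueous metal taken up by plankton is CF·B / (1 + CF·B) for a concentration factor CF (L/kg) and plankton biomass B (kg/L). The biomass value below is hypothetical, chosen only to show why CF above 10^5 is roughly the threshold for a measurable (about 20%) reduction.

```python
def fraction_removed(cf, biomass):
    """Equilibrium fraction of dissolved metal removed, given a
    concentration factor cf (L/kg) and plankton biomass (kg/L)."""
    return cf * biomass / (1.0 + cf * biomass)

# Hypothetical biomass of 2 mg dry weight per litre (not a field value)
B = 2e-6

high = fraction_removed(1e5, B)  # CF above the paper's threshold, ~0.17
low = fraction_removed(1e4, B)   # CF an order of magnitude lower, ~0.02
```

With these assumed numbers, only the higher concentration factor produces a depletion large enough to stand out against typical analytical uncertainty.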

The role of plankton in regulating metal concentrations was considered in the context of a model of trace metal chemistry in lakes. The model assumes that all particles can be represented by a single solid phase and that the solid phase controls aqueous metal concentrations. A term for the rate of in situ production of particulate matter is included, and primary productivity was used for this parameter. In San Vicente Reservoir, the test case, the rate of in situ production of particulate matter was of the same order of magnitude as the rate of introduction of particulate matter by the influent stream.

Relevance: 80.00%

Abstract:

The purpose of this thesis is to characterize the behavior of the smallest turbulent scales in high Karlovitz number (Ka) premixed flames. These scales are particularly important in the two-way coupling between turbulence and chemistry and better understanding of these scales will support future modeling efforts using large eddy simulations (LES). The smallest turbulent scales are studied by considering the vorticity vector, ω, and its transport equation.

Due to the complexity of turbulent combustion introduced by the wide range of length and time scales, the two-dimensional vortex-flame interaction is first studied as a simplified test case. Numerical and analytical techniques are used to discern the dominant transport terms and their effects on vorticity, based on the initial size and strength of the vortex. This description of the effects of the flame on a vortex provides a foundation for investigating vorticity in turbulent combustion.

Subsequently, enstrophy, ω² = ω · ω, and its transport equation are investigated in premixed turbulent combustion. For this purpose, a series of direct numerical simulations (DNS) of premixed n-heptane/air flames are performed, the conditions of which span a wide range of unburnt Karlovitz numbers and turbulent Reynolds numbers. Theoretical scaling analysis along with the DNS results supports that, at high Karlovitz number, enstrophy transport is controlled by the viscous dissipation and vortex stretching/production terms. As a result, vorticity scales throughout the flame with the inverse of the Kolmogorov time scale, τη, just as in homogeneous isotropic turbulence. As τη is only a function of the viscosity and dissipation rate, this supports the validity of Kolmogorov's first similarity hypothesis for sufficiently high Ka numbers (Ka ≳ 100). These conclusions are in contrast to low Karlovitz number behavior, where dilatation and baroclinic torque have a significant impact on vorticity within the flame. Results are unaffected by the transport model, chemical model, turbulent Reynolds number, and physical configuration.
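The claimed scaling can be made concrete with a two-line calculation: the Kolmogorov time is τη = sqrt(ν/ε), and at high Ka the rms vorticity should scale as 1/τη. The viscosity and dissipation-rate values below are hypothetical, chosen only to illustrate the arithmetic.

```python
import math

def kolmogorov_time(nu, eps):
    """Kolmogorov time scale: tau_eta = sqrt(nu / eps)."""
    return math.sqrt(nu / eps)

# Illustrative (assumed) values for an unburnt mixture, not DNS parameters:
nu = 1.8e-5    # kinematic viscosity, m^2/s
eps = 100.0    # turbulent dissipation rate, m^2/s^3

tau_eta = kolmogorov_time(nu, eps)   # smallest turbulent time scale, s
omega_rms = 1.0 / tau_eta            # expected vorticity magnitude scaling, 1/s
```

The point of the scaling is that τη depends only on ν and ε, so if vorticity tracks 1/τη through the flame, Kolmogorov's first similarity hypothesis carries over to the reacting case.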

Next, the isotropy of vorticity is assessed. It is found that given a sufficiently large value of the Karlovitz number (Ka ≳ 100) the vorticity is isotropic. At lower Karlovitz numbers, anisotropy develops due to the effects of the flame on the vortex stretching/production term. In this case, the local dynamics of vorticity in the strain-rate tensor, S, eigenframe are altered by the flame. At sufficiently high Karlovitz numbers, the dynamics of vorticity in this eigenframe resemble that of homogeneous isotropic turbulence.

Combined, the results of this thesis support that both the magnitude and orientation of vorticity resemble the behavior of homogeneous isotropic turbulence, given a sufficiently high Karlovitz number (Ka ≳ 100). This supports the validity of Kolmogorov's first similarity hypothesis and the hypothesis of local isotropy under these conditions. However, dramatically different behavior is found at lower Karlovitz numbers. These conclusions suggest directions for modeling high Karlovitz number premixed flames using LES. With more accurate models, the design of aircraft combustors and other combustion-based devices may better mitigate the detrimental effects of combustion, from reducing CO2 and soot production to increasing engine efficiency.

Relevance: 80.00%

Abstract:

Determining patterns of population connectivity is critical to the evaluation of marine reserves as recruitment sources for harvested populations. Mutton snapper (Lutjanus analis) is a good test case because the last known major spawning aggregation in U.S. waters was granted no-take status in the Tortugas South Ecological Reserve (TSER) in 2001. To evaluate the TSER population as a recruitment source, we genotyped mutton snapper from the Dry Tortugas, southeast Florida, and from three locations across the Caribbean at eight microsatellite loci. Both F-statistics and individual-based Bayesian analyses indicated that genetic substructure was absent across the five populations. Genetic homogeneity of mutton snapper populations is consistent with its pelagic larval duration of 27 to 37 days and adult behavior of annual migrations to large spawning aggregations. Statistical power of future genetic assessments of mutton snapper population connectivity may benefit from more comprehensive geographic sampling, and perhaps from the development of less polymorphic DNA microsatellite loci. Research where alternative methods are used, such as the transgenerational marking of embryonic otoliths with barium stable isotopes, is also needed on this and other species with diverse life history characteristics to further evaluate the TSER as a recruitment source and to define corridors of population connectivity across the Caribbean and Florida.

Relevance: 80.00%

Abstract:

Testing is a crucial activity in systems development, since a good test run can expose software anomalies that can then be corrected while still in the development process, reducing costs. This dissertation presents a testing tool called SIT (Sistema de Testes, Testing System) that supports the testing of Geographic Information Systems (GIS). GIS are characterized by the use of georeferenced spatial information, which can generate a large number of complex test cases. Traditional testing techniques are divided into functional and structural. In this work, SIT addresses functional testing, focusing on classical techniques such as equivalence partitioning and boundary value analysis. SIT also proposes the use of Fuzzy Logic as a tool that suggests a minimal set of tests to run on a GIS, illustrating the benefits of the tool.
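A minimal sketch of the two classical functional techniques the tool focuses on, equivalence partitioning and boundary value analysis, applied to a hypothetical georeferenced input (a latitude range in degrees); the SIT tool's actual interface and its fuzzy-logic test selection are not reproduced here.

```python
def boundary_values(lo, hi):
    """Boundary value analysis for an integer range [lo, hi]:
    values just outside, on, and just inside each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def equivalence_classes(lo, hi):
    """One representative per equivalence class: below, inside,
    and above the valid range."""
    return {
        "invalid_low": lo - 10,
        "valid": (lo + hi) // 2,
        "invalid_high": hi + 10,
    }

# Hypothetical GIS input: latitude constrained to [-90, 90] degrees
cases = boundary_values(-90, 90) + list(equivalence_classes(-90, 90).values())
```

Even for a single constrained field, the two techniques together yield nine candidate cases; for spatial data with many correlated fields, the combinatorial growth is exactly what motivates a tool that prunes the set.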

Relevance: 80.00%

Abstract:

Predicting and averting the spread of invasive species is a core focus of resource managers in all ecosystems. Patterns of invasion are difficult to forecast, compounded by a lack of user-friendly species distribution model (SDM) tools to help managers focus control efforts. This paper presents a web-based cellular automata hybrid modeling tool developed to study the invasion pattern of lionfish (Pterois volitans/miles) in the western Atlantic, and is a natural extension of our previous lionfish study. Our goal is to make this hybrid SDM tool publicly available and to demonstrate both a test case (P. volitans/miles) and a use case (Caulerpa taxifolia). The software derived from the model, titled Invasionsoft, is unique in its ability to examine multiple default or user-defined parameters and their relation to invasion patterns, and is presented in a rich web browser-based GUI with an integrated results viewer. The beta version is not species-specific and includes a default parameter set tailored to the marine habitat. Invasionsoft is provided as copyright-protected freeware at http://www.invasionsoft.com.
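A stripped-down sketch of the cellular-automata core of such a hybrid SDM: occupied cells seed their four neighbours with a probability weighted by habitat suitability. Grid size, spread probability, and the uniform suitability map are assumptions for illustration; Invasionsoft's actual parameter set is much richer.

```python
import numpy as np

def step(grid, p_spread, suitability, rng):
    """One CA step: each occupied cell seeds its 4-neighbours with
    probability p_spread * suitability of the target cell."""
    new = grid.copy()
    for i, j in np.argwhere(grid == 1):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]:
                if rng.random() < p_spread * suitability[ni, nj]:
                    new[ni, nj] = 1
    return new

rng = np.random.default_rng(42)
suitability = np.ones((50, 50))     # habitat suitability in [0, 1]; uniform here
grid = np.zeros((50, 50), dtype=int)
grid[25, 25] = 1                    # single hypothetical introduction point
for _ in range(30):
    grid = step(grid, 0.4, suitability, rng)
```

Swapping the uniform suitability map for a raster of temperature or depth layers is what turns this toy into a species distribution model hybrid.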

Relevance: 80.00%

Abstract:

The majority of computational studies of confined explosion hazards apply simple and inaccurate combustion models, requiring ad hoc corrections to obtain realistic flame shapes and often predicting an order of magnitude error in the overpressures. This work describes the application of a laminar flamelet model to a series of two-dimensional test cases. The model is computationally efficient, applying an algebraic expression to calculate the flame surface area, an empirical correlation for the laminar flame speed, and a novel unstructured, solution-adaptive numerical grid system that allows important features of the solution to be resolved close to the flame. Accurate flame shapes are predicted, the correct burning rate is predicted near the walls, and an improvement in the predicted overpressures is obtained. However, in these fully turbulent calculations the overpressures are still too high and the flame arrival times too low, indicating the need for a model of the early laminar burning phase. Owing to the computational expense, it is unrealistic to model a laminar flame in the complex geometries involved, and therefore a pragmatic approach is employed which constrains the flame to propagate at the laminar flame speed. Transition to turbulent burning occurs at a specified turbulent Reynolds number. With the laminar phase model included, the predicted flame arrival times increase significantly, but are still too low. However, this has no significant effect on the overpressures, which are predicted accurately for a baffled channel test case where rapid transition occurs once the flame reaches the first pair of baffles. In a channel with obstacles on the centreline, transition is more gradual and the accuracy of the predicted overpressures is reduced. However, although the accuracy is still less than desirable in some cases, it is much better than the order of magnitude error previously expected.

Relevance: 80.00%

Abstract:

An intermittency transport model is proposed for modeling separated-flow transition. The model is based on earlier work on prediction of attached-flow bypass transition and is applied for the first time to model transition in a separation bubble at various degrees of free-stream turbulence. The model has been developed so that it takes into account the entrainment of the surrounding fluid. Experimental investigations suggest that it is this phenomenon which ultimately determines the extent of the separation bubble. Transition onset is determined via a boundary layer correlation based on momentum thickness at the point of separation. The intermittent flow characteristic of the transition process is modeled via an intermittency transport equation. This accounts for both normal and streamwise variation of intermittency and hence models the entrainment of the surrounding flow in a more accurate manner than alternative prescribed intermittency models. The model has been validated against the well-established T3L semicircular leading edge flat plate test case for three different degrees of free-stream turbulence characteristic of turbomachinery blade applications.

Relevance: 80.00%

Abstract:

Aircraft in high-lift configuration shed multiple vortices. These generally merge to form a downstream wake consisting of two counter-rotating vortices of equal strength. Understanding the merger of two co-rotating trailing vortices is important in evaluating the separation criteria for different aircraft to prevent wake vortex hazards during landing and take-off. There is no existing theoretical method on the basis of which such norms can be set. The present study is aimed at gaining a better understanding of the behaviour of wake vortices behind the aircraft. Two-dimensional studies are carried out using the vortex blob method and compared with Bertenyi's experiment. It is shown that inviscid two-dimensional effects are insufficient to explain the observations. Three-dimensional studies, using the vortex filament method, are applied to the same test case. Two Lamb-Oseen profile vortices of the same dimensions and initial separation as in the experiment are allowed to evolve from a straight starting condition until a converged steady flow is achieved. The results obtained show good agreement with the experimental distance to merger. Core radius and separation behaviour are qualitatively similar to experiment, except for rapid initial increases. This may be partially attributable to the choice of filament-element length, and recommended further work includes a convergence study for this parameter. Copyright © 2005 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.
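The inviscid 2-D baseline can be sketched with two point vortices: by classical theory, two equal-strength co-rotating point vortices orbit their midpoint at fixed separation and never merge, consistent with the finding that inviscid two-dimensional effects are insufficient to explain the observed merger. The forward Euler integrator and small time step below are purely for illustration.

```python
import numpy as np

def induced_velocities(z1, z2, gamma):
    """Complex velocity induced on each 2-D point vortex by the other.

    For a vortex of circulation gamma at z0, the velocity field is
    u + i v = conj(gamma / (2*pi*i*(z - z0))).
    """
    v1 = np.conj(gamma / (2j * np.pi * (z1 - z2)))
    v2 = np.conj(gamma / (2j * np.pi * (z2 - z1)))
    return v1, v2

gamma = 1.0                       # equal circulations -> co-rotating pair
z1, z2 = -0.5 + 0j, 0.5 + 0j      # unit initial separation
d0 = abs(z1 - z2)

dt = 1e-3
for _ in range(5000):             # integrate to t = 5 with forward Euler
    v1, v2 = induced_velocities(z1, z2, gamma)
    z1, z2 = z1 + dt * v1, z2 + dt * v2
```

The pair simply rotates: the separation and the midpoint are invariants of the inviscid dynamics, so any observed merger must come from physics this model omits (viscosity, three-dimensionality, finite cores).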