893 results for "Shape optimization method"


Relevance:

30.00%

Publisher:

Abstract:

A method of estimating dissipation rates from a vertically pointing Doppler lidar with high temporal and spatial resolution has been evaluated by comparison with independent measurements derived from a balloon-borne sonic anemometer. This method utilizes the variance of the mean Doppler velocity from a number of sequential samples and requires an estimate of the horizontal wind speed. The noise contribution to the variance can be estimated from the observed signal-to-noise ratio and removed where appropriate. The relative size of the noise variance to the observed variance provides a measure of the confidence in the retrieval. Comparison with in situ dissipation rates derived from the balloon-borne sonic anemometer reveals that this particular Doppler lidar is capable of retrieving dissipation rates over a range of at least three orders of magnitude. This method is most suitable for retrieval of dissipation rates within the convective well-mixed boundary layer, where the scales of motion that the Doppler lidar probes remain well within the inertial subrange. Caution must be applied when estimating dissipation rates in more quiescent conditions. For the particular Doppler lidar described here, the selection of suitably short integration times will permit this method to be applicable in such situations, but at the expense of accuracy in the Doppler velocity estimates. The two case studies presented here suggest that, with profiles every 4 s, reliable estimates of ϵ can be derived to within at least an order of magnitude throughout almost all of the lowest 2 km and, in the convective boundary layer, to within 50%. Increasing the integration time for individual profiles to 30 s can improve the accuracy substantially but potentially confines retrievals to within the convective boundary layer. Therefore, optimization of certain instrument parameters may be required for specific implementations.
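
The variance method described above can be sketched in a few lines. The inertial-subrange prefactor, the value of the Kolmogorov constant, and the definition of the sampled length scale (horizontal wind speed times the sampling interval, via Taylor's frozen-turbulence hypothesis) are assumptions of this illustration, not the paper's exact formulation:

```python
import math

def dissipation_rate(var_obs, var_noise, wind_speed, n_samples, dwell_time,
                     c_k=0.52):
    """Estimate the turbulent kinetic energy dissipation rate (m^2 s^-3)
    from the variance of sequential mean Doppler velocities.

    var_obs    : observed velocity variance (m^2 s^-2)
    var_noise  : variance contributed by instrumental noise (m^2 s^-2)
    wind_speed : horizontal wind speed advecting eddies through the beam (m/s)
    n_samples  : number of sequential velocity samples in the variance
    dwell_time : integration time per sample (s)
    c_k        : assumed one-dimensional Kolmogorov constant
    """
    var_air = max(var_obs - var_noise, 0.0)       # noise-corrected variance
    length = wind_speed * n_samples * dwell_time  # sampled length scale (Taylor's hypothesis)
    return 2.0 * math.pi * (2.0 / (3.0 * c_k)) ** 1.5 * var_air ** 1.5 / length
```

A noise variance approaching the observed variance drives the retrieved value towards zero, mirroring the confidence measure discussed above.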


A Kriging interpolation method is combined with an object-based evaluation measure to assess the ability of the UK Met Office's dispersion and weather prediction models to predict the evolution of a plume of tracer as it was transported across Europe. The object-based evaluation method, SAL, considers aspects of the Structure, Amplitude and Location of the pollutant field. The SAL method is able to quantify errors in the predicted size and shape of the pollutant plume through the structure component, the over- or under-prediction of the pollutant concentrations through the amplitude component, and the position of the pollutant plume through the location component. The quantitative results of the SAL evaluation are similar for both models and agree closely with a subjective visual inspection of the predictions. A negative structure component for both models, throughout the entire 60-hour plume dispersion simulation, indicates that the modelled plumes are too small and/or too peaked compared to the observed plume at all times. The amplitude component for both models is strongly positive at the start of the simulation, indicating that surface concentrations are over-predicted by both models for the first 24 hours, but modelled concentrations are within a factor of 2 of the observations at later times. Finally, for both models, the location component is small for the first 48 hours after the start of the tracer release, indicating that the modelled plumes are situated close to the observed plume early in the simulation, but this plume location error grows at later times. The SAL methodology has also been used to identify differences in the transport of pollution in the dispersion and weather prediction models. The convection scheme in the weather prediction model is found to transport more pollution vertically out of the boundary layer into the free troposphere than the dispersion model convection scheme, resulting in lower pollutant concentrations near the surface and hence a better forecast for this case study.
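
As an illustration of how two of the SAL components are computed, the sketch below evaluates the amplitude component and the centre-of-mass part of the location component for two gridded fields. The grid coordinates, the diagonal normalization, and the omission of the object-dependent structure component are simplifications of this sketch, not the full SAL method:

```python
import numpy as np

def sal_amplitude_location(model, obs, xs, ys, d_max):
    """Amplitude (A) and first location (L1) components of the SAL measure
    for two gridded pollutant fields of shape (len(ys), len(xs)).
    d_max is the normalizing domain diagonal."""
    m_mean, o_mean = model.mean(), obs.mean()
    # Amplitude: normalized difference of domain means, bounded in [-2, 2]
    A = (m_mean - o_mean) / (0.5 * (m_mean + o_mean))
    X, Y = np.meshgrid(xs, ys)
    def centre_of_mass(f):
        w = f / f.sum()
        return np.array([(w * X).sum(), (w * Y).sum()])
    # Location (first part): normalized distance between centres of mass
    L1 = np.linalg.norm(centre_of_mass(model) - centre_of_mass(obs)) / d_max
    return A, L1
```

A positive A indicates over-prediction of the domain-mean concentration, as seen in the first 24 hours of the simulations above.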


In this paper a new system identification algorithm is introduced for Hammerstein systems based on observational input/output data. The nonlinear static function in the Hammerstein system is modelled using a non-uniform rational B-spline (NURB) neural network. The proposed system identification algorithm for this NURB network based Hammerstein system consists of two successive stages. First, the shaping parameters in the NURB network are estimated using a particle swarm optimization (PSO) procedure. Then the remaining parameters are estimated using singular value decomposition (SVD). Numerical examples, including a model-based controller, are utilized to demonstrate the efficacy of the proposed approach. The controller consists of computing the inverse of the nonlinear static function approximated by the NURB network, followed by a linear pole assignment controller.
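
Stage one of the two-stage procedure relies on a particle swarm optimizer. The following is a generic PSO kernel, a sketch of the technique rather than the paper's implementation; the inertia and acceleration coefficients are common textbook values, not values from the paper:

```python
import random

def pso_minimize(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for an objective f over R^dim."""
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]             # personal best positions
    pval = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]      # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # velocity update: inertia + cognitive + social terms
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            val = f(x[i])
            if val < pval[i]:
                pbest[i], pval[i] = x[i][:], val
                if val < gval:
                    gbest, gval = x[i][:], val
    return gbest, gval
```

Usage: `best, val = pso_minimize(lambda p: sum(t * t for t in p), dim=2)` minimizes a simple sphere function; in the paper's setting the objective would be the identification error as a function of the NURB shaping parameters.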


Controllers for feedback substitution schemes demonstrate a trade-off between noise power gain and normalized response time. Using as an example the design of a controller for a radiometric transduction process subject to arbitrary noise power gain and robustness constraints, a Pareto front of optimal controller solutions fulfilling a range of time-domain design objectives can be derived. In this work, we consider designs using a loop shaping design procedure (LSDP). The approach uses linear matrix inequalities to specify a range of objectives and a multi-objective genetic algorithm (MOGA) to optimize the controller weights. A clonal selection algorithm is used to further direct the GA's search towards the Pareto front. We demonstrate that, with the proposed methodology, it is possible to design higher-order controllers with superior performance in terms of response time, noise power gain and robustness.
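
The multi-objective search returns a set of non-dominated designs. Below is a minimal sketch of the dominance filter underlying any Pareto-front extraction; the objective tuples (for instance, response time and noise power gain, both to be minimized) are illustrative assumptions:

```python
def pareto_front(points):
    """Return the non-dominated subset of candidate designs.

    Each point is a tuple of objectives to minimize, e.g.
    (response_time, noise_power_gain).  A point p is dominated if some
    other point q is no worse in every objective and differs from p.
    """
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

In a MOGA this filter (or a ranking based on it) drives selection pressure towards the Pareto front at every generation.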


Liquid clouds play a profound role in the global radiation budget but it is difficult to remotely retrieve their vertical profile. Ordinary narrow field-of-view (FOV) lidars receive a strong return from such clouds but the information is limited to the first few optical depths. Wide-angle multiple-FOV lidars can isolate radiation scattered multiple times before returning to the instrument, often penetrating much deeper into the cloud than the singly scattered signal. These returns potentially contain information on the vertical profile of the extinction coefficient, but are challenging to interpret due to the lack of a fast radiative transfer model for simulating them. This paper describes a variational algorithm that incorporates a fast forward model based on the time-dependent two-stream approximation, and its adjoint. Application of the algorithm to simulated data from a hypothetical airborne three-FOV lidar with a maximum footprint width of 600 m suggests that this approach should be able to retrieve the extinction structure down to an optical depth of around 6, and the total optical depth up to at least 35, depending on the maximum lidar FOV. The convergence behavior of Gauss-Newton and quasi-Newton optimization schemes is compared. We then present results from an application of the algorithm to observations of stratocumulus by the 8-FOV airborne “THOR” lidar. It is demonstrated how the averaging kernel can be used to diagnose the effective vertical resolution of the retrieved profile, and therefore the depth to which information on the vertical structure can be recovered. This work enables exploitation of returns from spaceborne lidar and radar subject to multiple scattering more rigorously than previously possible.
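
The Gauss-Newton scheme mentioned above iterates x_{k+1} = x_k + (J^T J)^{-1} J^T (y - F(x_k)) on a linearized forward model. The toy version below omits the prior and noise covariances a full variational retrieval would carry, and substitutes a simple exponential forward model for the two-stream lidar model:

```python
import numpy as np

def gauss_newton(forward, jacobian, y, x0, iters=20):
    """Unregularized Gauss-Newton iteration for least-squares fitting
    of a nonlinear forward model F(x) to observations y."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = y - forward(x)                     # residual
        J = jacobian(x)                        # Jacobian dF/dx
        x = x + np.linalg.solve(J.T @ J, J.T @ r)
    return x

# Toy forward model: attenuated signal y_i = a * exp(-b * t_i)
t = np.linspace(0.0, 2.0, 20)
def F(x):
    return x[0] * np.exp(-x[1] * t)
def Jac(x):
    return np.column_stack([np.exp(-x[1] * t),
                            -x[0] * t * np.exp(-x[1] * t)])

x_true = np.array([2.0, 1.5])
x_hat = gauss_newton(F, Jac, F(x_true), np.array([1.0, 1.0]))
```

With noise-free synthetic data the iteration recovers the true parameters; a quasi-Newton variant would replace the exact normal-equations step with an approximate curvature update.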


Results are presented of a study of the performance of various track-side railway noise barriers, determined using a two-dimensional numerical boundary element model. The basic model uses monopole sources and has been adapted to allow the sources to exhibit dipole-type radiation characteristics. A comparison of boundary element predictions of the performance of simple barriers and vehicle shapes is made with results obtained using the standard U.K. prediction method. The results obtained from the numerical model indicate that modifying the source to exhibit dipole characteristics becomes more significant as the height of the barrier increases, and suggest that, for any particular shape, absorbent barriers provide much better screening efficiency than their rigid equivalents. The cross-section of the rolling stock significantly affects the performance of rigid barriers. If the position of the upper edge is fixed, the results suggest that simple absorptive barriers provide more effective screening than tilted barriers. The addition of multiple edges to a barrier provides additional insertion loss without any increase in barrier height.
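
Barrier performance in such studies is typically quoted as an insertion loss. As a definition sketch (not the boundary element model itself):

```python
import math

def insertion_loss_db(p_without, p_with):
    """Barrier insertion loss at a receiver, IL = 20*log10(|p0| / |p|),
    where p0 is the sound pressure without the barrier and p with it.
    Positive values mean the barrier attenuates."""
    return 20.0 * math.log10(abs(p_without) / abs(p_with))
```

In a boundary element calculation the two pressures would come from solving the scattering problem with and without the barrier geometry present.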


We propose a new sparse model construction method aimed at maximizing a model’s generalisation capability for a large class of linear-in-the-parameters models. The coordinate descent optimization algorithm is employed with a modified l1-penalized least squares cost function in order to estimate a single parameter and its regularization parameter simultaneously, based on the leave-one-out mean square error (LOOMSE). Our original contribution is to derive a closed form of the optimal LOOMSE regularization parameter for a single-term model, for which we show that the LOOMSE can be computed analytically without actually splitting the data set, leading to a very simple parameter estimation method. We then integrate the new results within the coordinate descent optimization algorithm to update model parameters one at a time for linear-in-the-parameters models. Consequently, a fully automated procedure is achieved without resorting to any other validation data set for iterative model evaluation. Illustrative examples are included to demonstrate the effectiveness of the new approaches.
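
The key identity that lets the LOOMSE be computed without splitting the data is the leave-one-out residual formula for a linear-in-the-parameters fit, e_i^loo = e_i / (1 - h_ii), with h_ii the diagonal of the hat matrix. Below is a sketch for a generic ridge-regularized model; the paper's single-term closed form specializes this idea, and the notation here is an assumption:

```python
import numpy as np

def loomse(Phi, y, lam):
    """Leave-one-out mean square error of a ridge-regularized
    linear-in-the-parameters model, computed in closed form.

    Phi : (n, m) regression (design) matrix
    y   : (n,) targets
    lam : regularization parameter

    Uses e_i^loo = e_i / (1 - h_ii) with the hat matrix
    H = Phi (Phi^T Phi + lam I)^{-1} Phi^T, so no refitting is needed.
    """
    n, m = Phi.shape
    A = Phi.T @ Phi + lam * np.eye(m)
    H = Phi @ np.linalg.solve(A, Phi.T)
    e = y - H @ y                       # ordinary residuals
    e_loo = e / (1.0 - np.diag(H))      # leave-one-out residuals
    return float(np.mean(e_loo ** 2))
```

Because the formula is exact for a fixed lam, the regularization parameter can be tuned against the LOOMSE directly, which is what makes a fully automated procedure possible.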


We have calculated the equilibrium shape of the axially symmetric meniscus along which a spherical bubble contacts a flat liquid surface, by analytically integrating the Young-Laplace equation in the presence of gravity, in the limit of large Bond numbers. This method has the advantage that it provides semi-analytical expressions for key geometrical properties of the bubble in terms of the Bond number. Results are in good overall agreement with experimental data and are consistent with fully numerical (Surface Evolver) calculations. In particular, we are able to describe how the bubble shape changes from hemispherical, with a shallow flat bottom, to lenticular, with a deeper, curved bottom, as the Bond number is decreased.
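
The controlling parameter in this analysis is the Bond number, which compares gravitational to surface-tension forces. As a small sketch (SI units assumed):

```python
def bond_number(rho, g, radius, gamma):
    """Bond number Bo = rho * g * R^2 / gamma for a bubble of radius R
    at a liquid surface of density rho and surface tension gamma.
    Large Bo corresponds to the hemispherical, flat-bottomed regime
    described above; decreasing Bo gives the lenticular shape."""
    return rho * g * radius ** 2 / gamma
```

For a 1 mm bubble on water (rho = 1000 kg/m^3, gamma = 0.072 N/m) this gives Bo of order 0.1, well into the surface-tension-dominated regime.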


Liquid matrix-assisted laser desorption/ionization (MALDI) allows the generation of predominantly multiply charged ions in atmospheric pressure (AP) MALDI ion sources for mass spectrometry (MS) analysis. The charge state distribution of the generated ions and the efficiency of the ion source in generating such ions crucially depend on the desolvation regime of the MALDI plume after desorption in the AP-to-vacuum inlet. Both high temperature and a flow regime with increased residence time of the desorbed plume in the desolvation region promote the generation of multiply charged ions. Without such measures the application of an electric ion extraction field significantly increases the ion signal intensity of singly charged species, while the detection of multiply charged species is less dependent on the extraction field. In general, optimized application of high temperature facilitates the predominant formation and detection of multiply charged rather than singly charged ion species. In this study an experimental setup and optimization strategy are described for liquid AP-MALDI MS which improve the ionization efficiency of selected ion species up to 14 times. In combination with ion mobility separation, the method allows the detection of multiply charged peptide and protein ions for analyte solution concentrations as low as 2 fmol/µL (0.5 µL, i.e. 1 fmol, deposited on the target) with very low sample consumption in the low-nL range.
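
For reference, the m/z at which the multiply charged ions discussed here appear follows directly from the neutral mass and the charge state. A small sketch assuming simple protonation, [M + zH]^z+:

```python
def mz(neutral_mass, z, m_proton=1.007276):
    """m/z (in Th) of a protonated ion [M + zH]^z+ with neutral
    monoisotopic mass M (in Da) and charge state z."""
    return (neutral_mass + z * m_proton) / z
```

Higher charge states compress large proteins into the limited m/z range of the analyzer, which is one practical motivation for promoting multiple charging.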


Background: The method of porosity analysis by water absorption has been carried out by storing the specimens in pure water, but this does not exclude the potential plasticising effect of the water, generating unrealistic porosity values. Objective: The present study evaluated the reliability of this method of porosity analysis in polymethylmethacrylate denture base resins by determining the most satisfactory solution for storage (S), in which the plasticising effect was excluded. Materials and methods: Two specimen shapes (rectangular and maxillary denture base) and two denture base resins, water bath-polymerised (Classico) and microwave-polymerised (Acron MC), were used. Saturated anhydrous calcium chloride solutions (25%, 50%, 75%) and distilled water were used for specimen storage. Sorption isotherms were used to determine S. Porosity factor (PF) and diffusion coefficient (D) were calculated within S and for the groups stored in distilled water. ANOVA and Tukey tests were performed to identify significant differences in PF results, and the Kruskal-Wallis test and Dunn multiple comparison post hoc test for D results (alpha = 0.05). Results: For the Acron MC denture base shape, PF results were 0.24% (S 50%) and 1.37% (distilled water); for the rectangular shape PF was 0.35% (S 75%) and 0.19% (distilled water). For the Classico denture base shape, PF results were 0.54% (S 75%) and 1.21% (distilled water); for the rectangular shape PF was 0.7% (S 50%) and 1.32% (distilled water). PF results were similar in S and distilled water only for the Acron MC rectangular shape (p > 0.05). D results in distilled water were statistically higher than in S for all groups. Conclusions: The results of the study suggest that an adequate storage solution, chosen to exclude the plasticising effect, must be used when measuring porosity by water absorption.
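
The water-absorption porosity measure can be illustrated as a mass-uptake calculation. The abstract does not reproduce the exact definition used in the study, so the volume-based form below, water-filled void volume over specimen volume, is an assumption for illustration only:

```python
def porosity_factor(m_saturated, m_dry, volume_mm3, rho_water=1.0e-3):
    """Porosity factor (%) estimated from water uptake, assuming
    PF = 100 * (m_sat - m_dry) / (rho_water * V).

    m_saturated, m_dry : specimen mass saturated and dry (g)
    volume_mm3         : specimen volume (mm^3)
    rho_water          : water density (g/mm^3)

    This mass-uptake definition is an illustrative assumption, not
    necessarily the formula used in the study.
    """
    return 100.0 * (m_saturated - m_dry) / (rho_water * volume_mm3)
```

The plasticising effect discussed above inflates m_saturated without reflecting true void volume, which is why the choice of storage solution matters.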


We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing N_s elliptical Gaussian sources characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with an accuracy similar to that obtained from the very traditional Astronomical Image Processing System (AIPS) task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to show quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique should be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower on a single processor, depending on the number of sources to be optimized). As in the case of any model fitting performed in the image plane, caution is required in analyzing images constructed from a poorly sampled (u, v) plane.
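
The cross-entropy technique itself is simple to sketch: sample candidate parameter sets from a Gaussian, keep an elite fraction, refit the Gaussian to the elite, and repeat. The generic minimizer below illustrates the idea on an arbitrary objective; the population sizes and smoothing floor are illustrative assumptions, and the image-plane objective of the paper is not reproduced:

```python
import random
import statistics

def cross_entropy_min(f, dim, mu=None, sigma=3.0, n=60, elite=6,
                      iters=40, seed=1):
    """Cross-entropy minimization of f over R^dim with an independent
    Gaussian sampling distribution per coordinate."""
    rng = random.Random(seed)
    mu = list(mu) if mu is not None else [0.0] * dim
    sig = [sigma] * dim
    for _ in range(iters):
        # sample a population, rank it, keep the elite
        pop = [[rng.gauss(mu[d], sig[d]) for d in range(dim)]
               for _ in range(n)]
        pop.sort(key=f)
        best = pop[:elite]
        # refit the sampling distribution to the elite samples
        for d in range(dim):
            col = [p[d] for p in best]
            mu[d] = statistics.fmean(col)
            sig[d] = statistics.stdev(col) + 1e-6  # floor keeps sampling alive
    return mu, f(mu)
```

In the paper's setting the parameter vector would concatenate the six parameters of each Gaussian source and f would be the squared image-plane misfit.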


A new approach for solving the optimal power flow (OPF) problem is established by combining the reduced gradient method and the augmented Lagrangian method with barriers and exploring specific characteristics of the relations between the variables of the OPF problem. Computer simulations on IEEE 14-bus and IEEE 30-bus test systems illustrate the method. (c) 2007 Elsevier Inc. All rights reserved.
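
The augmented Lagrangian part of the approach can be sketched on a toy equality-constrained problem; the reduced gradient machinery and the OPF-specific barrier terms are omitted, and the inner solver here is plain gradient descent:

```python
def augmented_lagrangian(iters=50):
    """Augmented-Lagrangian iteration on a toy problem:
    minimize x1^2 + x2^2 subject to x1 + x2 = 1 (solution (0.5, 0.5)).
    Each outer step minimizes
    L_A(x) = f(x) + lam*c(x) + (rho/2)*c(x)^2
    approximately, then updates the multiplier lam."""
    x = [0.0, 0.0]
    lam, rho = 0.0, 10.0
    for _ in range(iters):
        # inner minimization of L_A by gradient descent
        for _ in range(200):
            c = x[0] + x[1] - 1.0
            g0 = 2.0 * x[0] + lam + rho * c
            g1 = 2.0 * x[1] + lam + rho * c
            x[0] -= 0.02 * g0
            x[1] -= 0.02 * g1
        lam += rho * (x[0] + x[1] - 1.0)   # multiplier update
    return x
```

The multiplier update drives the constraint violation to zero without rho having to grow without bound, which is the practical advantage over a pure penalty method.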


Two fundamental processes usually arise in the production planning of many industries. The first consists of deciding how many final products of each type have to be produced in each period of a planning horizon, the well-known lot sizing problem. The other consists of cutting raw materials in stock in order to produce smaller parts used in the assembly of final products, the well-studied cutting stock problem. In this paper the decision variables of these two problems are made dependent on each other in order to obtain a globally optimal solution. Setups that are typically present in lot sizing problems are relaxed together with integer frequencies of cutting patterns in the cutting problem. Therefore, a large-scale linear optimization problem arises, which is solved exactly by a column generation technique. It is worth noting that this new combined problem still captures the trade-off between storage costs (for final products and parts) and trim losses (in the cutting process). We present some sets of computational tests, analyzed over three different scenarios. These results show that, by combining the problems and using an exact method, it is possible to obtain significant gains when compared to the usual industrial practice, which solves them in sequence. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
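
The pricing step of column generation for the cutting stock part is a knapsack over the piece sizes: given the dual values of the demand constraints, find the cutting pattern of maximum total dual value. Below is a sketch of that standard subproblem; the combined lot-sizing model in the paper adds further structure on top of it:

```python
def price_column(duals, sizes, stock_len):
    """Pricing subproblem of column generation for cutting stock:
    an integer knapsack maximizing the sum of dual values of the
    pieces cut from one stock object of length stock_len.

    Returns (pattern, value), where pattern[i] is the number of
    pieces of sizes[i] in the best pattern.
    """
    # best[l] = max dual value achievable within capacity l
    best = [0.0] * (stock_len + 1)
    choice = [-1] * (stock_len + 1)
    for l in range(1, stock_len + 1):
        for i, s in enumerate(sizes):
            if s <= l and best[l - s] + duals[i] > best[l]:
                best[l] = best[l - s] + duals[i]
                choice[l] = i
    # recover the pattern by walking the choices back
    pattern = [0] * len(sizes)
    l = stock_len
    while l > 0 and choice[l] != -1:
        i = choice[l]
        pattern[i] += 1
        l -= sizes[i]
    return pattern, best[stock_len]
```

A new pattern enters the master linear program whenever its value exceeds the dual of the stock-availability constraint, i.e. whenever its reduced cost is negative.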


The NMR spin coupling parameters, (1)J(N,H) and (2)J(H,H), and the chemical shielding, sigma((15)N), of liquid ammonia are studied from a combined and sequential QM/MM methodology. Monte Carlo simulations are performed to generate statistically uncorrelated configurations that are submitted to density functional theory calculations. Two different Lennard-Jones potentials are used in the liquid simulations. Electronic polarization is included in these two potentials via an iterative procedure, with and without geometry relaxation, and the influence on the calculated properties is analyzed. B3LYP/aug-cc-pVTZ-J calculations were used to compute the (1)J(N,H) constants in the interval of -67.8 to -63.9 Hz, depending on the theoretical model used. These can be compared with the experimental result of -61.6 Hz. For the (2)J(H,H) coupling the theoretical results vary between -10.6 and -13.01 Hz. The indirect experimental result derived from partially deuterated liquid is -11.1 Hz. Inclusion of explicit hydrogen-bonded molecules gives a small but important contribution. The vapor-to-liquid shifts are also considered. This shift is calculated to be negligible for (1)J(N,H), in agreement with experiment. This is rationalized as a cancellation of the geometry relaxation and pure solvent effects. For the chemical shielding, sigma((15)N), calculations at the B3LYP/aug-pcS-3 level show that the vapor-to-liquid chemical shift requires the explicit use of solvent molecules. Considering only one ammonia molecule in an electrostatic embedding gives the wrong sign for the chemical shift, which is corrected only with the use of explicit additional molecules. The best calculated result for the vapor-to-liquid chemical shift, Delta sigma((15)N), is -25.2 ppm, in good agreement with the experimental value of -22.6 ppm.


In this work, we report on the magnetic properties of nickel nanoparticles (NP) in a SiO(2)-C thin film matrix, prepared by a polymeric precursor method, with Ni content x in the 0-10 wt% range. Microstructural analyses of the films showed that the Ni NP are homogeneously distributed in the SiO(2)-C matrix and have a spherical shape with an average diameter of ~10 nm. The magnetic properties reveal features of superparamagnetism, with blocking temperatures T(B) ~ 10 K. The average diameter of the Ni NP, estimated from magnetization measurements, was found to be ~4 nm for the x = 3 wt% Ni sample, in excellent agreement with X-ray diffraction data. M versus H hysteresis loops indicated that the Ni NP are free from a surrounding oxide layer. We have also observed that the coercivity H(C) develops appreciably below T(B), and follows the relationship H(C) ∝ [1 - (T/T(B))^0.5], a feature expected for randomly oriented and non-interacting nanoparticles. The extrapolation of H(C) to 0 K indicates that coercivity decreases with increasing x, suggesting that dipolar interactions may be relevant in films with x > 3 wt% Ni.
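
The quoted coercivity relation is easy to evaluate directly. A small sketch, with illustrative rather than measured values:

```python
def coercivity(t, t_b, h_c0):
    """Coercive field H_C(T) = H_C0 * [1 - (T/T_B)^0.5] for T below the
    blocking temperature T_B, and zero above it (superparamagnetic
    regime).  Units of h_c0 carry through unchanged."""
    if t >= t_b:
        return 0.0
    return h_c0 * (1.0 - (t / t_b) ** 0.5)
```

Extrapolating this expression to T = 0 gives H_C0, the quantity whose decrease with increasing Ni content x is discussed above.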