87 results for High-Order Accurate Scheme


Relevance: 30.00%

Abstract:

This paper discusses the integrated design of parallel manipulators, which exhibit varying dynamics. This characteristic affects machine stability and performance. The design methodology consists of four main steps: (i) system modeling using a flexible multibody technique, (ii) synthesis of reduced-order models suitable for control design, (iii) systematic flexible-model-based input signal design, and (iv) evaluation of some possible machine designs. The novelty of this methodology is that structural flexibilities are taken into consideration during the input signal design, thereby enhancing the standard design process, which mainly considers rigid-body dynamics. The potential of the proposed strategy is exploited for the design evaluation of a two-degree-of-freedom high-speed parallel manipulator. The results are experimentally validated. © 2010 Elsevier Ltd. All rights reserved.
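
Step (ii), reduced-order modeling, is standard enough to sketch. Below is a minimal balanced-truncation example in Python; the random stable 10-state system is only a stand-in, since the paper's flexible multibody manipulator model is not given in the abstract.

```python
# Balanced truncation of a stand-in "flexible" model (not the paper's model).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

rng = np.random.default_rng(0)
n, r = 10, 4                                    # full and reduced orders
A = rng.normal(size=(n, n))
A -= (np.abs(np.linalg.eigvals(A).real).max() + 1) * np.eye(n)  # force stability
B = rng.normal(size=(n, 1))
C = rng.normal(size=(1, n))

# Controllability/observability Gramians: A Wc + Wc A^T + B B^T = 0, etc.
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Balancing transformation from a Cholesky factor and an SVD.
Lc = cholesky(Wc, lower=True)
U, s, _ = svd(Lc.T @ Wo @ Lc)
print("Hankel singular values:", np.round(np.sqrt(s), 3))

T = Lc @ U / s ** 0.25                          # balancing transformation
Ti = np.linalg.inv(T)
Ab, Bb, Cb = Ti @ A @ T, Ti @ B, C @ T
Ar, Br, Cr = Ab[:r, :r], Bb[:r], Cb[:, :r]      # keep the r dominant states

print("DC gain, full vs reduced:",
      (-C @ np.linalg.solve(A, B)).item(),
      (-Cr @ np.linalg.solve(Ar, Br)).item())
```

The Hankel singular values indicate how many states carry most of the input-output energy and therefore guide the choice of reduced order before controller synthesis.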

Relevance: 30.00%

Abstract:

This paper presents a novel adaptive control scheme, with improved convergence rate, for the equalization of harmonic disturbances such as engine noise. First, modifications for improving the convergence speed of the standard filtered-X LMS control are described. Equalization capabilities are then implemented, allowing the independent tuning of harmonics. Finally, by providing the desired order-versus-engine-speed profiles, the pursued sound quality attributes can be achieved. The proposed control scheme is first demonstrated with a simple secondary-path model and then experimentally validated with the aid of a vehicle mockup excited with engine noise. The engine excitation is provided by a real-time sound-quality-equivalent engine simulator. Stationary and transient engine excitations are used to assess the control performance. The results reveal that the proposed controller is capable of large order-level reductions (up to 30 dB) for stationary excitation, which allows a comfortable margin for equalization. The same holds for slow run-ups (>15 s) thanks to the improved convergence rate. This margin, however, gets narrower for shorter run-ups (≤10 s). © 2010 Elsevier Ltd. All rights reserved.
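
For readers unfamiliar with the baseline controller, here is a minimal filtered-X LMS sketch (not the authors' code): a single engine order is cancelled through an assumed FIR secondary path, with the reference filtered through a secondary-path estimate before the LMS update. All signals, path coefficients, and tuning values are invented for illustration.

```python
# Minimal single-order filtered-X LMS sketch with an assumed FIR secondary path.
import numpy as np

fs = 2000                       # sample rate (Hz), arbitrary
f0 = 120                        # engine-order frequency (Hz), arbitrary
n = np.arange(20 * fs)
d = np.sin(2 * np.pi * f0 * n / fs)   # harmonic disturbance at the error mic
x = np.sin(2 * np.pi * f0 * n / fs)   # synchronous reference (e.g. from a tacho)

S = np.array([0.6, 0.3, 0.1])   # secondary path (speaker -> mic), assumed FIR
S_hat = S.copy()                # assumed perfect secondary-path estimate
L = 16                          # adaptive filter length
w = np.zeros(L)
mu = 0.01                       # step size; larger mu speeds convergence
xbuf = np.zeros(L)              # raw reference buffer (newest first)
ybuf = np.zeros(len(S))         # anti-noise buffer for the secondary path
fxbuf = np.zeros(len(S_hat))    # buffer for filtering the reference by S_hat
fx = np.zeros(L)                # filtered-reference buffer for the update

e_hist = []
for k in range(len(n)):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[k]
    y = w @ xbuf                                # anti-noise output
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e = d[k] + S @ ybuf                         # residual at the error mic
    fxbuf = np.roll(fxbuf, 1); fxbuf[0] = x[k]
    fx = np.roll(fx, 1); fx[0] = S_hat @ fxbuf  # filtered reference sample
    w -= mu * e * fx                            # LMS update
    e_hist.append(e)

print("residual power, first vs last second:",
      np.mean(np.square(e_hist[:fs])), np.mean(np.square(e_hist[-fs:])))
```

For equalization as pursued in the paper, the update would drive the error toward a nonzero order-level target (the desired order profile) rather than toward zero.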

Relevance: 30.00%

Abstract:

An accurate estimate of machining time is very important for predicting delivery time and manufacturing costs, and also helps production process planning. Most commercial CAM software systems estimate the machining time in milling operations simply by dividing the entire tool path length by the programmed feed rate. This estimate differs drastically from the real process time because the feed rate is not always constant, due to machine and computer numerical control (CNC) limitations. This study presents a practical mechanistic method for estimating milling time when machining free-form geometries. The method considers a variable called machine response time (MRT), which characterizes the real CNC machine's capacity to move at high feed rates in free-form geometries. MRT is a global performance feature that can be obtained for any type of CNC machine configuration by carrying out a simple test. To validate the methodology, a workpiece was used to generate NC programs for five different types of CNC machines, and a practical industrial case study was also carried out. The results indicated that MRT, and consequently the real machining time, depends on the CNC machine's potential; furthermore, the greater the MRT, the larger the difference between predicted and real milling time. The proposed method achieved an error range from 0.3% to 12% of the real machining time, whereas the CAM estimates erred by 211% to 1244%. The MRT-based process is also suggested as an instrument to help in machine tool benchmarking.
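
The gap between the naive CAM estimate and reality is easy to reproduce. The sketch below compares path-length-over-feed with a per-segment estimate in which every NC block takes at least one machine response time; this block-time model and all numbers are hypothetical, whereas the paper obtains MRT by measuring the real CNC.

```python
# Naive CAM time estimate vs a hypothetical MRT-limited per-segment estimate.
import numpy as np

np.random.seed(1)
seg_len = np.random.uniform(0.05, 2.0, 5000)   # NC segment lengths (mm), free-form
F_prog = 5000.0 / 60.0                         # programmed feed: 5000 mm/min in mm/s

t_cam = seg_len.sum() / F_prog                 # naive CAM estimate (length / feed)

mrt = 0.008                                    # assumed response time per block (s)
# Hypothetical model: each block takes at least `mrt`, so short segments never
# reach the programmed feed rate.
t_real = np.maximum(seg_len / F_prog, mrt).sum()

print(f"CAM estimate : {t_cam:7.1f} s")
print(f"MRT-adjusted : {t_real:7.1f} s  ({100 * (t_real / t_cam - 1):.0f}% longer)")
```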

Relevance: 30.00%

Abstract:

In this paper, the Galerkin method and the Askey-Wiener scheme are used to obtain approximate solutions to the stochastic displacement response of Kirchhoff plates with uncertain parameters. Theoretical and numerical results are presented. The Lax-Milgram lemma is used to express the conditions for existence and uniqueness of the solution. Uncertainties in plate and foundation stiffness are modeled by respecting these conditions, hence using Legendre polynomials indexed in uniform random variables. The space of approximate solutions is built using results on density between the space of continuous functions and Sobolev spaces. Approximate Galerkin solutions are compared with Monte Carlo simulation results, in terms of first- and second-order moments and histograms of the displacement response. Numerical results for two example problems show very fast convergence to the exact solution, with excellent accuracy. The Askey-Wiener Galerkin scheme developed herein is able to reproduce the histogram of the displacement response, and is shown to be a theoretically sound and efficient method for the solution of stochastic problems in engineering. © 2009 Elsevier Ltd. All rights reserved.
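
The core idea (Legendre chaos for uniform random variables) can be illustrated on a single-degree-of-freedom stand-in rather than the paper's Kirchhoff plate: expand the response of a system with uniformly distributed stiffness in Legendre polynomials and check the moments against Monte Carlo. Everything below (the toy problem, its parameters) is an assumption for illustration.

```python
# Minimal Askey-Wiener (Legendre chaos) illustration on a toy problem:
# u(xi) = F / K(xi), with K(xi) = k0 * (1 + 0.3*xi) and xi ~ U(-1, 1).
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

k0, F, P = 10.0, 1.0, 6           # mean stiffness, load, chaos order

def u_exact(xi):
    return F / (k0 * (1.0 + 0.3 * xi))

# Projection: c_n = (2n+1)/2 * integral of u(xi) * P_n(xi) over [-1, 1].
nodes, weights = leggauss(32)
coeffs = np.array([
    (2 * n + 1) / 2.0 * np.sum(weights * u_exact(nodes) *
                               legval(nodes, np.eye(P + 1)[n]))
    for n in range(P + 1)
])

# Moments from orthogonality: E[P_n^2] = 1/(2n+1) for xi ~ U(-1, 1).
mean_pc = coeffs[0]
var_pc = np.sum(coeffs[1:] ** 2 / (2 * np.arange(1, P + 1) + 1))

# Monte Carlo reference.
xi = np.random.default_rng(0).uniform(-1, 1, 200_000)
u = u_exact(xi)
print(f"mean: chaos {mean_pc:.6f} vs MC {u.mean():.6f}")
print(f"var : chaos {var_pc:.3e} vs MC {u.var():.3e}")
```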

Relevance: 30.00%

Abstract:

In order to reduce energy costs, porous refractory ceramics for high-temperature insulation have been subject to increasing demand. Among the techniques used to produce these materials (such as the addition of foaming agents and organic compounds), pore generation via phase transformation presents key advantages, such as easy processing and the absence of toxic volatiles. In this study, this technique was applied to produce porous ceramics by decomposing an aluminum magnesium hydroxycarbonate known as hydrotalcite (Mg₆Al₂(CO₃)(OH)₁₆·4H₂O). It was found that, by using this complex compound, a large fraction of pores can be generated and kept at high temperatures (above 1300 °C) due to the in situ formation of spinel-like phases (MgAl₂O₄). © 2011 Elsevier Ltd and Techna Group S.r.l. All rights reserved.

Relevance: 30.00%

Abstract:

The present paper proposes a flexible consensus scheme for group decision making, which allows one to obtain a consistent collective opinion from information provided by each expert in terms of multigranular fuzzy estimates. It is based on a linguistic hierarchical model with multigranular sets of linguistic terms, and the choice of the most suitable set is a prerogative of each expert. From the human viewpoint, using such a model is advantageous, since it permits each expert to utilize the linguistic terms that most adequately reflect the level of uncertainty intrinsic to his evaluation. From the operational viewpoint, the advantage of using such a model lies in the fact that it allows one to express the linguistic information in a unique domain, without loss of information, during the discussion process. The proposed consensus scheme supposes that the moderator can interfere in the discussion process in different ways: the intervention can be a request to any expert to update his opinion, or an adjustment of the weight of each expert's opinion. An optimal adjustment can be achieved through an optimization procedure that searches for the weights that maximize a corresponding soft consensus index. In order to demonstrate the usefulness of the presented consensus scheme, a technique for multicriteria analysis, based on fuzzy preference relation modeling, is utilized for solving a hypothetical enterprise strategy planning problem, generated with the use of the Balanced Scorecard methodology. © 2009 Elsevier Inc. All rights reserved.
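
The weight-adjustment step can be sketched as a small constrained optimization. In the sketch below, opinions are reduced to plain numbers in [0, 1] (the paper works with multigranular linguistic terms), and the soft consensus index is a simple stand-in, one minus the weighted spread around the collective opinion, not the paper's index.

```python
# Moderator's weight adjustment as constrained maximization of a stand-in
# soft consensus index.
import numpy as np
from scipy.optimize import minimize

opinions = np.array([0.72, 0.68, 0.75, 0.30, 0.70])   # expert 4 is an outlier

def consensus_index(w):
    w = np.asarray(w)
    collective = w @ opinions                  # weighted collective opinion
    return 1.0 - np.sqrt(w @ (opinions - collective) ** 2)

# Maximize the index over the probability simplex, with weights bounded away
# from zero so that no expert is silenced entirely.
res = minimize(lambda w: -consensus_index(w),
               x0=np.full(5, 0.2),
               bounds=[(0.05, 1.0)] * 5,
               constraints=[{'type': 'eq', 'fun': lambda w: w.sum() - 1.0}])

print("optimal weights :", np.round(res.x, 3))
print("consensus index :", round(consensus_index(res.x), 3))
```

As expected, the optimizer pushes the outlier's weight to its lower bound, which is exactly the kind of adjustment the moderator would otherwise negotiate.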

Relevance: 30.00%

Abstract:

The influence of the addition of high-impact polystyrene (HIPS) on polypropylene (PP) photodegradation was studied with blends obtained by extrusion, with and without a styrene-butadiene-styrene (SBS) copolymer (10 wt % with respect to the dispersed phase). The concentrations of HIPS ranged from 10 to 30 wt %. The blends and pure materials were exposed to UV irradiation for periods of up to 15 weeks; their mechanical properties (tensile and impact), fracture surfaces, and melt flow indices were monitored. After 3 weeks of UV exposure, all of the materials presented mechanical properties of the same order of magnitude. However, for exposure times greater than 3 weeks, an increasing concentration of HIPS resulted in better photostability of the PP. These results were explained in light of morphological observations. The increase in photostability was even greater when SBS was added to the blends. It was more difficult to measure the melt flow index of the binary PP/HIPS blends than that of PP for low concentrations of HIPS, most likely because of energy transfer between the blend domains during photodegradation; this phenomenon was not observed for the ternary blends. © 2010 Wiley Periodicals, Inc. J Appl Polym Sci 120: 770-779, 2011

Relevance: 30.00%

Abstract:

Higher-order (2,4) FDTD schemes used for numerical solutions of Maxwell's equations focus on diminishing the truncation errors caused by the Taylor series expansion of the spatial derivatives. These schemes use a larger computational stencil, which generally makes use of two constant coefficients, C1 and C2, for the four-point central-difference operators. In this paper we propose a novel way to diminish these truncation errors, in order to obtain more accurate numerical solutions of Maxwell's equations. For this purpose, we present a method to individually optimize the pair of coefficients, C1 and C2, based on any desired grid size resolution and time-step size. In particular, we are interested in using coarser grid discretizations to be able to simulate electrically large domains. The results of our optimization algorithm show a significant reduction in dispersion error and numerical anisotropy for all modeled grid size resolutions. Numerical simulations of free-space propagation verify the very promising theoretical results, and the model is also shown to perform well in more complex, realistic scenarios.
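
The operator in question is easy to state. The sketch below implements the four-point staggered central difference with the standard Taylor coefficients C1 = 9/8 and C2 = -1/24 and verifies fourth-order convergence on a sine wave; the paper's contribution, tuning C1 and C2 per grid resolution and time step, is not reproduced here.

```python
# Four-point (fourth-order in space) staggered central difference of (2,4) FDTD.
import numpy as np

def d_dx(f, dx, C1=9 / 8, C2=-1 / 24):
    """Derivative at staggered points i+1/2 using a 4-point stencil."""
    return (C1 * (f[2:-1] - f[1:-2]) + C2 * (f[3:] - f[:-3])) / dx

# Check the order of accuracy on a resolved sine wave: the error should drop
# roughly 16x for each doubling of the resolution.
for nppw in (10, 20, 40):                   # points per wavelength
    x = np.arange(0, 4 * np.pi, 2 * np.pi / nppw)
    dx = x[1] - x[0]
    num = d_dx(np.sin(x), dx)
    exact = np.cos(x[1:-2] + dx / 2)        # derivative at the staggered points
    print(f"{nppw:3d} ppw: max error = {np.abs(num - exact).max():.2e}")
```

With the Taylor pair the operator is exact to fourth order at dx -> 0; the paper instead trades some of that asymptotic accuracy for lower dispersion at the coarse resolutions it actually runs.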

Relevance: 30.00%

Abstract:

Leaf wetness duration (LWD) is related to plant disease occurrence and is therefore a key parameter in agrometeorology. As LWD is seldom measured at standard weather stations, it must be estimated in order to ensure the effectiveness of warning systems and the scheduling of chemical disease control. Among the models used to estimate LWD, those that use physical principles of dew formation and dew and/or rain evaporation have shown good portability and sufficiently accurate results for operational use. However, the requirement of net radiation (Rn) is a disadvantage for operational physical models, since this variable is usually not measured over crops or even at standard weather stations. With the objective of proposing a solution for this problem, this study evaluated the ability of four models to estimate hourly Rn and their impact on LWD estimates obtained with a Penman-Monteith approach. A field experiment was carried out in Elora, Ontario, Canada, with measurements of LWD, Rn and other meteorological variables over mowed turfgrass for a 58-day period during the growing season of 2003. Four models for estimating hourly Rn, based on different combinations of incoming solar radiation (Rg), air temperature (T), relative humidity (RH), cloud cover (CC) and cloud height (CH), were evaluated. Measured and estimated hourly Rn values were applied in a Penman-Monteith model to estimate LWD. Correlating measured and estimated Rn, we observed that all models performed well in terms of estimating hourly Rn. However, when cloud data were used, the models overestimated positive Rn and underestimated negative Rn. When only Rg and T were used to estimate hourly Rn, the model underestimated positive Rn and no tendency was observed for negative Rn. The best performance was obtained with Model I, which presented, in general, the smallest mean absolute error (MAE) and the highest C-index. When measured LWD was compared with the Penman-Monteith LWD calculated with measured and estimated Rn, few differences were observed. Both precision and accuracy were high, with the slopes of the relationships ranging from 0.96 to 1.02 and R² from 0.85 to 0.92, resulting in C-indices between 0.87 and 0.93. The LWD mean absolute errors associated with Rn estimates were between 1.0 and 1.5 h, which is sufficient for use in plant disease management schemes.
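
As a rough illustration of the physical-model approach, the sketch below evaluates a Penman-Monteith-type latent heat flux for a wet surface and flags the hour as wet when the flux indicates condensation. This criterion and the textbook constants are one common convention, not necessarily the exact formulation or calibration used in the study.

```python
# Dew-deposition criterion behind physical LWD models: Penman-Monteith-type
# latent heat flux over a wet surface (rs = 0); LE < 0 means condensation.
import numpy as np

def latent_heat_flux(Rn, T, RH, ra=50.0, G=0.0):
    """LE (W/m2). Rn, G in W/m2; T in deg C; RH in %; ra in s/m (assumed)."""
    gamma = 0.066                                   # psychrometric const (kPa/K)
    es = 0.6108 * np.exp(17.27 * T / (T + 237.3))   # saturation vapour pressure (kPa)
    ea = es * RH / 100.0                            # actual vapour pressure (kPa)
    delta = 4098.0 * es / (T + 237.3) ** 2          # slope of es(T) (kPa/K)
    rho_cp = 1.013e3 * 1.2                          # volumetric heat capacity (J/m3/K)
    return (delta * (Rn - G) + rho_cp * (es - ea) / ra) / (delta + gamma)

# A clear night hour: negative net radiation, near-saturated air.
LE = latent_heat_flux(Rn=-60.0, T=12.0, RH=97.0)
print(f"LE = {LE:.1f} W/m2 ->", "wet (dew forming)" if LE < 0 else "dry")
```

This is exactly where the Rn models matter: the sign and magnitude of Rn largely decide whether an hour is classified wet, which is why Rn errors propagate directly into LWD errors.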

Relevance: 30.00%

Abstract:

When building genetic maps, it is necessary to choose from several marker ordering algorithms and criteria, and the choice is not always simple. In this study, we evaluate the efficiency of the algorithms try (TRY), seriation (SER), rapid chain delineation (RCD), recombination counting and ordering (RECORD) and unidirectional growth (UG), as well as the criteria PARF (product of adjacent recombination fractions), SARF (sum of adjacent recombination fractions), SALOD (sum of adjacent LOD scores) and LHMC (likelihood through hidden Markov chains), used with the RIPPLE algorithm for error verification, in the construction of genetic linkage maps. A linkage map of a hypothetical diploid and monoecious plant species was simulated, containing one linkage group and 21 markers with a fixed distance of 3 cM between them. In all, 700 F2 populations were randomly simulated with 100 and 400 individuals, with different combinations of dominant and co-dominant markers, as well as 10 and 20% missing data. The simulations showed that, in the presence of co-dominant markers only, any combination of algorithm and criterion may be used, even for a reduced population size. In the case of a smaller proportion of dominant markers, any of the algorithms and criteria investigated (except SALOD) may be used. In the presence of high proportions of dominant markers and smaller samples (around 100 individuals), the probability of repulsion linkage between them increases and, in this case, the algorithms TRY and SER associated with RIPPLE and the criterion LHMC provide better results. Heredity (2009) 103, 494-502; doi:10.1038/hdy.2009.96; published online 29 July 2009
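
The SARF criterion, for instance, is simple to state: the sum of recombination fractions between adjacent markers, which a good order minimizes. A toy sketch with invented two-point fractions (real values would come from the mapping population):

```python
# SARF ordering criterion on a 5-marker toy example, by exhaustive search.
import numpy as np
from itertools import permutations

# Symmetric matrix of pairwise recombination fractions for markers 0..4,
# generated from a "true" order 0-1-2-3-4 with roughly 3 cM spacing.
rf = np.array([
    [0.00, 0.03, 0.06, 0.09, 0.12],
    [0.03, 0.00, 0.03, 0.06, 0.09],
    [0.06, 0.03, 0.00, 0.03, 0.06],
    [0.09, 0.06, 0.03, 0.00, 0.03],
    [0.12, 0.09, 0.06, 0.03, 0.00],
])

def sarf(order):
    """Sum of adjacent recombination fractions for a candidate order."""
    return sum(rf[a, b] for a, b in zip(order, order[1:]))

best = min(permutations(range(5)), key=sarf)
print("best order:", best, "SARF =", round(sarf(best), 2))
print("true order SARF =", sarf((0, 1, 2, 3, 4)))
```

PARF replaces the sum with a product, SALOD sums adjacent LOD scores instead, and LHMC requires a full multipoint likelihood, which is why it is the costliest but most informative of the four.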

Relevance: 30.00%

Abstract:

The complex interactions among endangered ecosystems, landowners' interests, and different models of land tenure and use constitute an important series of challenges for those seeking to maintain and restore biodiversity and augment the flow of ecosystem services. Over the past 10 years, we have developed a data-based approach to address these challenges and to achieve medium- and large-scale ecological restoration of riparian areas on private lands in the state of São Paulo, southeastern Brazil. Given the varying motivations for ecological restoration and the location of riparian areas within landholdings, environmental zoning of the different riparian areas and best-practice restoration methods were developed for each situation. A total of 32 ongoing projects, covering 527,982 ha, were evaluated in large sugarcane farms and small mixed farms, and six different restoration techniques have been developed to help upscale the effort. Small mixed farms had higher portions of land requiring protection as riparian areas (13.3%), and lower forest cover of riparian areas (18.3%), than large sugarcane farms (10.0% and 36.9%, respectively). In both types of farms, forest fragments required some degree of restoration. Historical anthropogenic degradation has compromised forest ecosystem structure and functioning, despite the high diversity of native tree and shrub species. Notably, land use patterns in riparian areas differed markedly: large sugarcane farms had higher portions of riparian areas occupied by highly mechanized agriculture, abandoned fields, and anthropogenic wet fields created by siltation in water courses, whereas in small mixed-crop farms low- or non-mechanized agriculture and pasturelands predominated. Despite these differences, plantation of native tree species covering the entire area was by far the main restoration method needed, both in large sugarcane farms (76.0%) and in small mixed farms (92.4%), in view of the low resilience of target sites, reduced forest cover, and high fragmentation, all of which limit the potential for autogenic restoration. We propose that plantations be carried out with a high diversity of native species in order to create biologically viable restored forests and to assist long-term biodiversity persistence at the landscape scale. Finally, we propose strategies to integrate the political, socio-economic and methodological aspects needed to upscale restoration efforts in tropical forest regions throughout Latin America and elsewhere. © 2010 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

Choline citrate (CC) and acetylmethionine (AM) are lipotropic drugs used in several pharmaceutical formulations. The objective of this research was to develop and validate a high-performance liquid chromatographic (HPLC) method for the simultaneous determination of CC and AM in injectable solutions, aiming at its application in routine quality-control analysis of these pharmaceutical formulations. The method was validated using a Shim-Pack® C18 (250 × 4.6 mm, 5 µm) column. The mobile phase consisted of 25 mM potassium phosphate buffer solution (pH 5.7, adjusted with 10% orthophosphoric acid), acetonitrile and methanol (88:10:2, v/v/v). The flow rate was 1.1 mL min⁻¹ and UV detection was performed at 210 nm. The analyses were made at room temperature (25 ± 1 °C). The method is precise, selective, accurate and robust, and was successfully applied to the simultaneous quantitative determination of CC and AM in injectables.

Relevance: 30.00%

Abstract:

A stability-indicating high-performance liquid chromatographic (HPLC) method and a second-order derivative spectrophotometric (UVDS) method were validated and compared for the determination of simvastatin in tablets. The HPLC method was performed with isocratic elution using a C18 column and a mobile phase composed of methanol:acetonitrile:water (60:20:20, v/v/v) at a flow rate of 1.0 mL/min, with detection at 239 nm. In the UVDS method, methanol and water were used in the first dilution, and distilled water was used in the consecutive dilutions and as background. The second-order derivative signal was measured at 255 nm. Analytical curves showed correlation coefficients >0.999 for both methods. The quantitation limits (QL) were 2.41 µg/mL for HPLC and 0.45 µg/mL for UVDS. Intra- and inter-day relative standard deviations were <2.0%. Statistical analysis with t- and F-tests did not exceed the critical values, demonstrating that there is no significant difference between the two methods at the 95% confidence level.
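
The UVDS signal itself is simple to illustrate: a smoothed second derivative of the absorbance spectrum, read at the analytical wavelength. In the sketch below the Gaussian band is a synthetic stand-in for a simvastatin spectrum; only the 255 nm measurement wavelength comes from the abstract.

```python
# Second-order derivative spectrophotometry sketch: smoothed d2A/dwl2 via a
# Savitzky-Golay filter, evaluated at the analytical wavelength.
import numpy as np
from scipy.signal import savgol_filter

wl = np.arange(220.0, 300.0, 0.5)                       # wavelength grid (nm)
A = 0.8 * np.exp(-0.5 * ((wl - 247.0) / 12.0) ** 2)     # synthetic absorbance band

d2 = savgol_filter(A, window_length=15, polyorder=3,
                   deriv=2, delta=0.5)                  # second derivative per nm^2
signal_255 = d2[np.argmin(np.abs(wl - 255.0))]
print(f"second-derivative signal at 255 nm: {signal_255:.3e}")
```

Working on the second derivative suppresses broad baseline drift, which is why the derivative signal, rather than raw absorbance, is used for quantitation.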

Relevance: 30.00%

Abstract:

The purpose of this study was to develop and validate analytical methods for the determination of amlodipine besylate in tablets. Simple, accurate and precise liquid chromatographic and spectrophotometric methods are proposed. For the chromatographic method, the conditions were: a LiChrospher® 100 RP-18 Merck® (125 mm × 4.6 mm, 5 µm) column; methanol/water containing 1% triethylamine, adjusted to pH 5.0 with phosphoric acid (35:65), as mobile phase; a flow rate of 1.0 mL/min; and UV detection at 238 nm. Linearity was in the range of 50.0-350.0 µg/mL, with a correlation coefficient (r) of 0.9999. For the spectrophotometric method, the first dilutions of the samples were performed in methanol and the consecutive dilutions in ultrapure water. The quantitation was made at 364.4 nm. Linearity was determined within the range of 41.0-61.0 µg/mL, with a correlation coefficient (r) of 0.9996. Our results demonstrate that both methods can be used in routine quality-control analysis of tablets containing amlodipine besylate.
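
The linearity figures reported here and in the two preceding abstracts come from ordinary least-squares fits of the analytical curves. A sketch with synthetic response data over the HPLC working range (the concentrations match the abstract; the detector readings are invented):

```python
# Linearity check for an analytical curve: least-squares fit and correlation
# coefficient r over the 50-350 ug/mL working range.
import numpy as np
from scipy.stats import linregress

conc = np.array([50.0, 100.0, 150.0, 200.0, 250.0, 300.0, 350.0])  # ug/mL
rng = np.random.default_rng(42)
area = 12.0 * conc + 30.0 + rng.normal(0, 15.0, conc.size)         # synthetic response

fit = linregress(conc, area)
print(f"slope={fit.slope:.3f}  intercept={fit.intercept:.1f}  r={fit.rvalue:.5f}")
# Acceptance in these validations: r > 0.999 across the working range.
```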

Relevance: 30.00%

Abstract:

Recent advances in the control of molecular engineering architectures have allowed unprecedented molecular recognition ability in biosensing, with a promising impact on clinical diagnosis and environmental control. The availability of large amounts of data from electrical, optical, or electrochemical measurements requires, however, sophisticated data treatment in order to optimize sensing performance. In this study, we show how an information visualization system based on projections, referred to as Projection Explorer (PEx), can be used to achieve high performance for biosensors made with nanostructured films containing immobilized antigens. As a proof of concept, various visualizations were obtained with impedance spectroscopy data from an array of sensors whose electrical response could be specific toward a given antibody (analyte) owing to molecular recognition processes. In addition to discussing the distinct methods for projection and normalization of the data, we demonstrate that an excellent distinction can be made between real samples that tested positive for Chagas disease and for Leishmaniasis, which could not be achieved with conventional statistical methods. Such high performance probably arose from the possibility of treating the data over the whole frequency range. Through a systematic analysis, it was inferred that Sammon's mapping with standardization to normalize the data gives the best results, with which a distinction could be made for blood serum samples containing 10⁻⁷ mg/mL of the antibody. The method inherent in PEx and the procedures for analyzing the impedance data are entirely generic and can be extended to optimize any type of sensor or biosensor.
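
The best-performing combination, Sammon's mapping with standardization, can be sketched in a few lines. The three-class synthetic "spectra" below stand in for the impedance data, the plain gradient descent is a simple substitute for Sammon's original pseudo-Newton iteration, and PEx itself (an interactive tool) is not reproduced.

```python
# Sammon's mapping with standardization, on synthetic stand-in "spectra".
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
# 30 "sensors" x 50 "frequencies": three groups with different mean responses.
X = np.vstack([rng.normal(m, 0.3, (10, 50)) for m in (0.0, 1.0, 2.0)])
X = (X - X.mean(axis=0)) / X.std(axis=0)            # standardization step

D = squareform(pdist(X)) + np.eye(len(X))           # high-dimensional distances
c = D[np.triu_indices_from(D, 1)].sum()
Y = X @ np.linalg.svd(X)[2][:2].T                   # PCA start, usual for Sammon

for _ in range(300):                                # gradient descent on the
    d = squareform(pdist(Y)) + np.eye(len(Y))       # stress E = (1/c) sum (D-d)^2 / D
    ratio = (D - d) / (D * d)
    np.fill_diagonal(ratio, 0.0)
    grad = (-2.0 / c) * (ratio[:, :, None] *
                         (Y[:, None, :] - Y[None, :, :])).sum(axis=1)
    Y -= 0.5 * grad

d = squareform(pdist(Y)) + np.eye(len(Y))
stress = (((D - d) ** 2 / D)[np.triu_indices_from(D, 1)]).sum() / c
print(f"final Sammon stress: {stress:.4f}")         # near 0: distances preserved
```

Unlike variance-based projections, the Sammon stress penalizes relative distance distortion, so small within-class distances are preserved, which is what makes class separation visible in the 2-D map.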