87 results for Robust design
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
This paper presents a robust voltage control scheme for fixed-speed wind generators using a static synchronous compensator (STATCOM) controller. To enable a linear and robust control framework with structured uncertainty, the overall system is represented by a linear part plus a nonlinear part that covers an operating range of interest required to ensure stability during severe low voltages. The proposed methodology is flexible and readily applicable to larger wind farms of different configurations. The performance of the control strategy is demonstrated on a two-area test system. Large-disturbance simulations demonstrate that the proposed controller enhances the voltage stability as well as the transient stability of induction generators during low-voltage ride-through (LVRT) transients, and thus enhances the LVRT capability.
Abstract:
The design of supplementary damping controllers to mitigate the effects of electromechanical oscillations in power systems is a highly complex and time-consuming process, which requires a significant amount of knowledge on the part of the designer. In this study, the authors propose an automatic technique that takes the burden of tuning the controller parameters away from the power engineer and places it on the computer. Unlike other approaches that do the same based on robust control theory or evolutionary computing techniques, the proposed procedure uses an optimisation algorithm that works over a formulation of the classical tuning problem in terms of bilinear matrix inequalities. Using this formulation, it is possible to apply linear matrix inequality solvers to find a solution to the tuning problem via an iterative process, with the advantage that these solvers are widely available and have well-known convergence properties. The proposed algorithm is applied to tune the parameters of supplementary controllers for thyristor-controlled series capacitors placed in the New England/New York benchmark test system, aiming at the improvement of the damping factor of inter-area modes under several different operating conditions. The results of the linear analysis are validated by non-linear simulation and demonstrate the effectiveness of the proposed procedure.
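As a rough illustration of the kind of feasibility problem LMI solvers handle (not the paper's BMI formulation, which iterates over problems of this shape), the sketch below certifies stability of a small invented system via a Lyapunov LMI in cvxpy:

```python
# Minimal LMI feasibility sketch (illustrative only, not the paper's BMI
# formulation): find P > 0 with A'P + PA < 0, certifying stability of
# dx/dt = A x. The iterative BMI tuning procedure described above solves
# problems of roughly this shape at each step. The matrix A is invented.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])               # example state matrix (assumption)
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
M = A.T @ P + P @ A
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               0.5 * (M + M.T) << -eps * np.eye(n)]  # symmetrized for cvxpy
prob = cp.Problem(cp.Minimize(0), constraints)       # pure feasibility
prob.solve()

print(prob.status)                         # 'optimal' -> a certificate exists
print(np.round(P.value, 3))
```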
Abstract:
The immersed boundary method is a versatile tool for the investigation of flow-structure interaction. In a large number of applications, the immersed boundaries or structures are very stiff, and strong tangential forces on these interfaces induce a well-known, severe time-step restriction for explicit discretizations. This excessive stability constraint can be removed with fully implicit or suitable semi-implicit schemes, but at a seemingly prohibitive computational cost. While economical alternatives have been proposed recently for some special cases, there is a practical need for a computationally efficient approach that can be applied more broadly. In this context, we revisit a robust semi-implicit discretization introduced by Peskin in the late 1970s which has received renewed attention recently. This discretization, in which the spreading and interpolation operators are lagged, leads to a linear system of equations for the interface configuration at the future time, when the interfacial force is linear. However, this linear system is large and dense, and thus it is challenging to streamline its solution. Moreover, while the same linear system or one of similar structure could potentially be used in Newton-type iterations, nonlinear and highly stiff immersed structures pose additional challenges to iterative methods. In this work, we address these problems and propose cost-effective computational strategies for solving Peskin's lagged-operators type of discretization. We do this by first constructing a sufficiently accurate approximation to the system's matrix, and we obtain a rigorous estimate for this approximation. This matrix is expeditiously computed by using a combination of pre-calculated values and interpolation. The availability of a matrix allows for more efficient matrix-vector products and facilitates the design of effective iterative schemes. We propose efficient iterative approaches to deal with both linear and nonlinear interfacial forces and simple or complex immersed structures with tethered or untethered points. One of these iterative approaches employs a splitting in which we first solve a linear problem for the interfacial force and then use a nonlinear iteration to find the interface configuration corresponding to this force. We demonstrate that the proposed approach is several orders of magnitude more efficient than the standard explicit method. In addition to considering the standard elliptical drop test case, we show both the robustness and efficacy of the proposed methodology with a 2D model of a heart valve.
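A minimal 1D sketch of the lagged spreading/interpolation idea, using Peskin's standard four-point kernel; the grid, interface points, and forces are invented, and this is not the paper's discretization or solver:

```python
# Sketch (assumed setup): 1D spreading S and interpolation J operators built
# from Peskin's standard 4-point discrete delta. "Lagging" the operators
# means S and J are assembled at the current interface configuration and
# reused when solving for the future one, which keeps the system linear.
import numpy as np

def phi4(r):
    """Peskin's 4-point kernel phi(r), with support |r| < 2."""
    r = abs(r)
    if r < 1.0:
        return (3.0 - 2.0 * r + np.sqrt(1.0 + 4.0 * r - 4.0 * r * r)) / 8.0
    if r < 2.0:
        return (5.0 - 2.0 * r - np.sqrt(-7.0 + 12.0 * r - 4.0 * r * r)) / 8.0
    return 0.0

def build_operators(x_grid, X_interface, h):
    """Spreading matrix S (interface -> grid) and interpolation J = h * S^T."""
    S = np.array([[phi4((x - X) / h) / h for X in X_interface]
                  for x in x_grid])
    return S, h * S.T

h = 1.0 / 64
x_grid = np.arange(0.0, 1.0, h)
X_interface = np.array([0.31, 0.5, 0.73])   # arbitrary interface points
S, J = build_operators(x_grid, X_interface, h)

F = np.array([1.0, 2.0, 1.0])               # interface forces (example)
f_grid = S @ F                              # spread force to the grid
U = J @ f_grid                              # interpolate back to the interface
print(f_grid.sum() * h, U)                  # total force (= F.sum()) conserved
```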
Abstract:
For centuries, specific instruments or regular toothbrushes have routinely been used to remove tongue biofilm and improve breath odor. Toothbrushes with a tongue scraper on the back of the head have recently been introduced to the market. The present study compared the effectiveness of a manual toothbrush with this new design, i.e., possessing a tongue scraper, and a commercial tongue scraper in improving breath odor and reducing the aerobic and anaerobic microbiota of the tongue surface. Evaluations were performed at 4 time points, when the participants (n=30) had their halitosis quantified with a halimeter and scored according to a 4-point scoring system corresponding to different levels of intensity. Saliva was collected for counts of aerobic and anaerobic microorganisms. Data were analyzed statistically by Friedman's test (p<0.05). When differences were detected, the Wilcoxon test with Bonferroni correction was used for pairwise multiple comparisons between groups. The results confirmed the importance of mechanical cleaning of the tongue, since this procedure improved halitosis and reduced aerobic and anaerobic counts. Regarding the evaluated methods, the toothbrush's tongue scraper and the conventional tongue scraper performed similarly in terms of breath improvement and reduction of tongue microbiota, and both may be indicated as effective methods for tongue cleaning.
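For orientation, a short sketch of the stated statistical pipeline (Friedman test followed by Bonferroni-corrected Wilcoxon pairwise comparisons) in scipy, run on invented counts rather than the study's data:

```python
# Sketch of the stated analysis with made-up counts for 30 participants
# measured under three conditions (the study's actual data are not shown).
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
baseline = rng.poisson(100, size=30)        # hypothetical CFU counts
scraper_brush = rng.poisson(70, size=30)
scraper = rng.poisson(72, size=30)

stat, p = friedmanchisquare(baseline, scraper_brush, scraper)
print(f"Friedman: chi2={stat:.2f}, p={p:.4f}")

if p < 0.05:                                # follow up with pairwise tests
    pairs = [(baseline, scraper_brush), (baseline, scraper),
             (scraper_brush, scraper)]
    alpha_bonf = 0.05 / len(pairs)          # Bonferroni-adjusted threshold
    for i, (a, b) in enumerate(pairs):
        w, pw = wilcoxon(a, b)
        print(f"pair {i}: p={pw:.4f}, significant={pw < alpha_bonf}")
```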
Abstract:
This study evaluated the effect of specimen design and manufacturing process on microtensile bond strength, internal stress distributions (finite element analysis, FEA), and specimen integrity by means of scanning electron microscopy (SEM) and laser scanning confocal microscopy (LCM). Excite was applied to a flat enamel surface and resin composite build-ups were made incrementally with 1-mm increments of Tetric Ceram. Teeth were cut using a diamond disc or a diamond wire, obtaining 0.8 mm² stick-shaped specimens, or were shaped with a Micro Specimen Former, obtaining dumbbell-shaped specimens (n = 10). Samples were randomly selected for SEM and LCM analysis. The remaining samples underwent the microtensile test, and results were analyzed with ANOVA and the Tukey test. The FEA dumbbell-shaped model resulted in a more homogeneous stress distribution. Nonetheless, dumbbell specimens failed at lower bond strengths (21.83 ± 5.44 MPa) than stick-shaped specimens (sectioned with wire: 42.93 ± 4.77 MPa; sectioned with disc: 36.62 ± 3.63 MPa; all three groups statistically distinct), due to geometric irregularities introduced by the manufacturing process, as noted in the microscopic analyses. It could be concluded that stick-shaped, non-trimmed specimens sectioned with diamond wire are preferred for enamel specimens, as they can be prepared in a less destructive, easier, and more precise way.
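A sketch of the reported ANOVA-plus-Tukey comparison, using synthetic values drawn around the means quoted above (the study's raw data are not shown here):

```python
# Sketch: one-way ANOVA followed by Tukey HSD on illustrative bond-strength
# values generated around the group means reported in the abstract.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
wire = rng.normal(42.93, 4.77, 10)        # stick, diamond wire (MPa)
disc = rng.normal(36.62, 3.63, 10)        # stick, diamond disc (MPa)
dumbbell = rng.normal(21.83, 5.44, 10)    # dumbbell, Micro Specimen Former

print(f_oneway(wire, disc, dumbbell))     # omnibus test across the 3 groups

values = np.concatenate([wire, disc, dumbbell])
groups = ["wire"] * 10 + ["disc"] * 10 + ["dumbbell"] * 10
print(pairwise_tukeyhsd(values, groups))  # pairwise Tukey comparisons
```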
Abstract:
This work describes the construction and testing of a simple pressurized solvent extraction (PSE) system. A mixture of acetone:water (80:20) at 80 ºC and 103.5 bar was used to extract two herbicides (Diuron and Bromacil) from a sample of polluted soil, followed by identification and quantification by high-performance liquid chromatography with diode array detection (HPLC-DAD). The system was also used to extract soybean oil (70 ºC and 69 bar) using pentane. The extracted oil was weighed and characterized through fatty acid methyl ester analysis (myristic (< 0.3%), palmitic (16.3%), stearic (2.8%), oleic (24.5%), linoleic (46.3%), linolenic (9.6%), arachidic (0.3%), gadoleic (< 0.3%), and behenic (0.3%) acids) using high-resolution gas chromatography with flame ionization detection (HRGC-FID). PSE results were compared with those obtained using classical procedures: Soxhlet extraction for the soybean oil and solid-liquid extraction followed by solid-phase extraction (SLE-SPE) for the herbicides. The results showed: 21.25 ± 0.36% (m/m) of oil in the soybeans using the PSE system and 21.55 ± 0.65% (m/m) using Soxhlet extraction; extraction efficiencies (recoveries) for the herbicides Diuron and Bromacil of 88.7 ± 4.5% and 106.6 ± 8.1%, respectively, using the PSE system, and 96.8 ± 1.0% and 94.2 ± 3.9%, respectively, with the SLE-SPE procedure; limit of detection (LOD) and limit of quantification (LOQ) for Diuron of 0.012 mg kg⁻¹ and 0.040 mg kg⁻¹, respectively; LOD and LOQ for Bromacil of 0.025 mg kg⁻¹ and 0.083 mg kg⁻¹, respectively. The calibration was linear from 0.04 to 1.50 mg L⁻¹ for Diuron and from 0.08 to 1.50 mg L⁻¹ for Bromacil. In conclusion, the high pressure and temperature of the PSE system enable efficient, fast extractions with reduced solvent consumption in an inert atmosphere, which prevents sample and analyte decomposition.
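As a hedged illustration of how LOD/LOQ figures of this kind are commonly derived (the standard 3.3σ/slope and 10σ/slope rules; the paper does not state its exact procedure, and the calibration data below are invented):

```python
# Sketch: LOD/LOQ from a linear calibration using the common 3.3*sigma/slope
# and 10*sigma/slope conventions. The calibration points are made up and are
# not the paper's data; the paper may have used a different procedure.
import numpy as np

conc = np.array([0.04, 0.20, 0.50, 1.00, 1.50])   # mg/L, Diuron-like range
area = np.array([5.1, 24.8, 61.5, 124.0, 186.2])  # detector response (invented)

slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                     # residual std of the fit

lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
print(f"LOD = {lod:.3f} mg/L, LOQ = {loq:.3f} mg/L")
```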
Abstract:
This paper revisits the design of L- and S-band bridged loop-gap resonators (BLGRs) for electron paramagnetic resonance applications. A novel configuration is described and extensively characterized for resonance frequency and quality factor as a function of the geometrical parameters of the device. The experimental results indicate higher quality factors (Q) than previously reported in the literature, and the experimental analysis should provide useful guidelines for BLGR design.
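For context, a first-order lumped-element estimate of a loop-gap resonator's frequency, using the textbook approximations L ≈ μ₀πr²/z for the loop and C ≈ ε₀εᵣwz/t for the gap; the dimensions below are invented, and the paper's bridged configuration is more detailed than this single-loop model:

```python
# First-order lumped-element estimate for a simple loop-gap resonator,
# f0 = 1/(2*pi*sqrt(L*C)). All dimensions are assumptions for illustration;
# the paper's BLGR geometry and model are more involved.
import numpy as np

mu0 = 4e-7 * np.pi            # vacuum permeability, H/m
eps0 = 8.854e-12              # vacuum permittivity, F/m

r = 2.0e-3                    # loop radius, m (assumption)
z = 10.0e-3                   # resonator length, m (assumption)
w = 2.0e-3                    # gap depth, m (assumption)
t = 0.2e-3                    # gap separation, m (assumption)
epsr = 1.0                    # air-filled gap

L = mu0 * np.pi * r**2 / z    # loop inductance (textbook approximation)
C = eps0 * epsr * w * z / t   # gap capacitance (textbook approximation)
f0 = 1.0 / (2.0 * np.pi * np.sqrt(L * C))
print(f"L = {L:.3e} H, C = {C:.3e} F, f0 = {f0/1e9:.2f} GHz")  # ~S band
```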
Abstract:
This paper proposes a new design methodology for discrete multi-pumped Raman amplifiers. In a multi-objective optimization scenario, the whole solution space is first inspected by a CW analytical formulation. Then, the most promising solutions are fully investigated by a rigorous numerical treatment, so the Raman amplification performance is determined by the combination of analytical and numerical approaches. As an application of the methodology, we designed a photonic crystal fiber Raman amplifier configuration which provides low ripple, high gain, clear eye opening, and a low power penalty. The amplifier configuration also fully compensates the dispersion introduced by a 70-km single-mode fiber in a 10 Gbit/s system. We successfully obtained a configuration with 8.5 dB average gain over the C-band and 0.71 dB ripple with almost zero eye penalty using only two pump lasers with relatively low pump power.
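The two-step pattern described above (cheap analytical screen of the whole solution space, rigorous numerical treatment of only the promising candidates) can be sketched generically; the objective functions below are placeholders, not the paper's CW formulation:

```python
# Generic sketch of the two-step design pattern: score every candidate with
# a cheap analytical proxy, keep the Pareto-best ones, then send only those
# to an expensive numerical solver. The scoring function is a placeholder.
import itertools
import numpy as np

def cheap_gain_and_ripple(p1, p2):
    """Stand-in analytical estimate of (gain dB, ripple dB) vs pump powers."""
    gain = 10.0 * np.log10(1.0 + 0.02 * (p1 + p2))
    ripple = abs(p1 - p2) / (p1 + p2 + 1e-9)
    return gain, ripple

def pareto_front(points):
    """Keep candidates not dominated in (maximize gain, minimize ripple)."""
    return [(cand, (g, r)) for cand, (g, r) in points
            if not any(g2 >= g and r2 <= r and (g2, r2) != (g, r)
                       for _, (g2, r2) in points)]

grid = [(p1, p2) for p1, p2 in
        itertools.product(range(50, 401, 50), repeat=2)]   # pump powers, mW
scored = [((p1, p2), cheap_gain_and_ripple(p1, p2)) for p1, p2 in grid]
promising = pareto_front(scored)
print(len(grid), "candidates ->", len(promising), "sent to full simulation")
```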
Abstract:
In this work, the effects of conical indentation variables on the load-depth indentation curves were analyzed using finite element modeling and dimensional analysis. A 2⁶ factorial design was used with the aim of quantifying the effects of the mechanical properties of the indented material and of the indenter geometry. The analysis was based on the input variables Y/E, R/h_max, n, θ, E, and h_max. The dimensional variables E and h_max were used such that each value of the dimensionless ratio Y/E was obtained with two different values of E, and each value of the dimensionless ratio R/h_max was obtained with two different values of h_max. A set of dimensionless functions was defined to analyze the effect of the input variables: Π₁ = P/(Eh²), Π₂ = h_c/h, Π₃ = H/Y, Π₄ = S/(Eh_max), Π₆ = h_max/h_f, and Π₇ = W_P/W_T. These six functions were found to depend only on the dimensionless variables studied (Y/E, R/h_max, n, θ). Another dimensionless function, Π₅ = β, was not well defined for most of the dimensionless variables, and the only variable that had a significant effect on β was θ. However, β showed a strong dependence on the fraction of the data selected to fit the unloading curve, which means that β is especially susceptible to error in the calculation of the initial unloading slope.
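A 2⁶ full factorial design of the kind used here is straightforward to enumerate; the low/high levels below are placeholders, since the paper's actual levels are not given in the abstract:

```python
# Sketch: enumerating a 2^6 full factorial design over the six input
# variables named above. The low/high levels are invented placeholders.
import itertools

levels = {
    "Y/E":     (0.001, 0.1),   # yield strain ratio (placeholder levels)
    "R/h_max": (0.5, 5.0),     # tip radius / maximum depth
    "n":       (0.0, 0.4),     # strain-hardening exponent
    "theta":   (60.0, 80.0),   # indenter half-angle, degrees
    "E":       (70.0, 210.0),  # elastic modulus, GPa
    "h_max":   (0.1, 1.0),     # maximum depth, microns
}

names = list(levels)
design = list(itertools.product(*(levels[k] for k in names)))
print(len(design), "runs")          # 2**6 = 64 factor combinations
for run in design[:3]:              # first few rows of the design matrix
    print(dict(zip(names, run)))
```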
Abstract:
This paper presents a rational approach to the design of a catamaran's hydrofoil within a modern context of multidisciplinary optimization. The approach uses response surfaces represented by neural networks and a distributed programming environment that increases optimization speed. A rational treatment of the problem simplifies the complex optimization model; combined with the distributed dynamic training used for the response surfaces, it increases the efficiency of the process. The results achieved with this approach justify its publication.
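A minimal sketch of the response-surface idea: fit a neural network surrogate to a handful of expensive evaluations, then optimize over the cheap surrogate. The "drag" function below is a stand-in, not the paper's hydrofoil model:

```python
# Sketch of neural-network response-surface optimization. The expensive
# objective here is a synthetic placeholder for a costly CFD evaluation.
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

def expensive_drag(x):
    """Placeholder for a costly evaluation of a 2-parameter foil design."""
    return (x[0] - 0.3) ** 2 + 2.0 * (x[1] - 0.6) ** 2 + 0.05

rng = np.random.default_rng(2)
X_train = rng.uniform(0.0, 1.0, size=(200, 2))    # sampled candidate designs
y_train = np.array([expensive_drag(x) for x in X_train])

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X_train, y_train)

# Optimize over the cheap surrogate instead of the expensive model.
res = minimize(lambda x: surrogate.predict(x.reshape(1, -1))[0],
               x0=np.array([0.5, 0.5]), bounds=[(0, 1), (0, 1)])
print("surrogate optimum:", res.x, "true drag there:", expensive_drag(res.x))
```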
Abstract:
Background: The MASS IV-DM Trial is a large project from a single institution, the Heart Institute (InCor), University of Sao Paulo Medical School, Brazil, to study ventricular function and coronary arteries in patients with type 2 diabetes mellitus. Methods/Design: The study will enroll 600 patients with type 2 diabetes who have angiographically normal ventricular function and coronary arteries. The goal of the MASS IV-DM Trial is a long-term evaluation of the development of coronary atherosclerosis, using angiograms and coronary-artery calcium scans by electron-beam computed tomography at baseline and after 5 years of follow-up. In addition, the incidence of major cardiovascular events and the dysfunction of various organs involved in this disease, particularly microalbuminuria and renal function, will be analyzed through clinical evaluation. An effort will also be made to investigate in depth the presence of major cardiovascular risk factors, especially the biochemical profile, metabolic syndrome, inflammatory activity, oxidative stress, endothelial function, prothrombotic factors, and profibrinolytic and platelet activity. Genetic polymorphisms will be evaluated as determinants of disease and for their possible role in the genesis of micro- and macrovascular damage. Discussion: The MASS IV-DM trial is designed to include diabetic patients with clinically suspected myocardial ischemia in whom conventional angiography shows angiographically normal coronary arteries. The results of this extensive investigation, including angiographic follow-up by several methods and studies of vascular reactivity, prothrombotic mechanisms, genetics, and biochemistry, may facilitate the understanding of the so-called micro- and macrovascular disease of diabetes.
Abstract:
We have modeled, fabricated, and characterized superhydrophobic surfaces with a morphology formed of periodic microstructures which are cavities. This surface morphology is the inverse of that generally reported in the literature when the surface is formed of pillars or protrusions, and has the advantage that when immersed in water the confined air inside the cavities tends to expel the invading water. This differs from the case of a surface morphology formed of pillars or protrusions, for which water can penetrate irreversibly among the microstructures, necessitating complete drying of the surface in order to again recover its superhydrophobic character. We have developed a theoretical model that allows calculation of the microcavity dimensions needed to obtain superhydrophobic surfaces composed of patterns of such microcavities, and that provides estimates of the advancing and receding contact angle as a function of microcavity parameters. The model predicts that the cavity aspect ratio (depth-to-diameter ratio) can be much less than unity, indicating that the microcavities do not need to be deep in order to obtain a surface with enhanced superhydrophobic character. Specific microcavity patterns have been fabricated in polydimethylsiloxane and characterized by scanning electron microscopy, atomic force microscopy, and contact angle measurements. The measured advancing and receding contact angles are in good agreement with the predictions of the model. [doi:10.1063/1.3466979]
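As a generic point of comparison (not the authors' advancing/receding model), the classical Cassie-Baxter relation gives an apparent contact angle for a flat surface pierced by circular cavities; all dimensions below are invented:

```python
# Sketch using the classical Cassie-Baxter relation
# cos(theta*) = -1 + f_s * (1 + cos(theta_Y)), where f_s is the wetted solid
# fraction of a flat top pierced by circular cavities. This is a generic
# textbook estimate, not the advancing/receding model developed in the paper.
import numpy as np

theta_Y = np.radians(110.0)   # Young angle of smooth PDMS (typical value)
d = 10.0                      # cavity diameter, microns (assumption)
p = 14.0                      # center-to-center pitch, microns (assumption)

f_s = 1.0 - np.pi * d**2 / (4.0 * p**2)      # solid fraction under the drop
cos_star = -1.0 + f_s * (1.0 + np.cos(theta_Y))
theta_star = np.degrees(np.arccos(cos_star))
print(f"f_s = {f_s:.2f}, apparent contact angle = {theta_star:.1f} deg")
```

Note that f_s depends only on the cavity diameter and pitch, not on the cavity depth, which is consistent with the abstract's observation that the cavities need not be deep.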
Abstract:
We have developed a theoretical model for superhydrophobic surfaces that are formed from an extended array of microcavities, and have fabricated specific microcavity patterns to form superhydrophobic surfaces of the kind modeled. The model shows that the cavity aspect ratio can be significantly less than unity, indicating that the microcavities do not need to be deep in order to enhance the superhydrophobic character of the surface. We have fabricated surfaces of this kind and measured advancing contact angles as high as 153 degrees, in agreement with predictions of the model.
Abstract:
In-situ measurements in convective clouds (up to the freezing level) over the Amazon basin show that smoke from deforestation fires prevents clouds from precipitating until they acquire a vertical development of at least 4 km, compared to only 1-2 km in clean clouds. The average cloud depth required for the onset of warm rain increased by ~350 m for each additional 100 cloud condensation nuclei per cm³ at a supersaturation of 0.5% (CCN0.5%). In polluted clouds, the diameter of modal liquid water content grows much more slowly with cloud depth (by a factor of at least ~2), due to the large number of droplets that compete for the available water and to the suppressed coalescence processes. Contrary to what other studies have suggested, we did not observe this effect to reach saturation at 3000 or more accumulation-mode particles per cm³. The CCN0.5% concentration was found to be a very good predictor for the cloud depth required for the onset of warm precipitation and for other microphysical factors, leaving only a secondary role for updraft velocities in determining the cloud drop size distributions. The effective radius of the cloud droplets (r_e) was found to be a quite robust parameter for a given environment and cloud depth, showing only a small effect of partial droplet evaporation from the cloud's mixing with its drier environment. This supports one of the basic assumptions of satellite analysis of cloud microphysical processes: the ability to look at different cloud top heights in the same region and regard their r_e as if they had been measured inside one well-developed cloud. The dependence of r_e on the adiabatic fraction decreased higher in the clouds, especially for cleaner conditions, and disappeared at r_e ≥ ~10 μm. We propose that droplet coalescence, which is at its peak when warm rain forms in the cloud at r_e ≈ 10 μm, continues to be significant during the cloud's mixing with the entrained air, cancelling out the decrease in r_e due to evaporation.
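Taking the abstract's numbers at face value, the reported ~350 m of extra cloud depth per additional 100 CCN0.5% per cm³ implies, for example (the clean-air baseline values below are rough readings of the abstract, not fitted parameters):

```python
# Tiny arithmetic sketch of the reported relation: required cloud depth for
# warm-rain onset grows ~350 m per extra 100 CCN0.5% per cm^3. The baseline
# (100 /cm^3, 1500 m) is an assumed rough reading of the "1-2 km in clean
# clouds" statement, not a fitted parameter from the paper.
def depth_for_warm_rain(ccn_per_cm3, clean_ccn=100.0, clean_depth_m=1500.0):
    """Approximate cloud depth (m) required for the onset of warm rain."""
    return clean_depth_m + 350.0 * max(0.0, ccn_per_cm3 - clean_ccn) / 100.0

for ccn in (100, 400, 900, 1500):
    print(f"CCN0.5% = {ccn:4d} /cm^3 -> ~{depth_for_warm_rain(ccn)/1000:.1f} km")
# Around 900 /cm^3 this gives ~4.3 km, consistent with the >= 4 km quoted
# above for smoky clouds.
```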