833 results for Robustness
Abstract:
This paper analyzes the convergence of the constant modulus algorithm (CMA) in a decision feedback equalizer using only a feedback filter. Several works had already observed that the CMA performs better than the decision-directed algorithm in the adaptation of the decision feedback equalizer, but theoretical analysis has always proved difficult, especially because of the analytical difficulties presented by the constant modulus criterion. In this paper, we surmount this obstacle by using a recent result on CM analysis, first obtained in a linear finite impulse response context with the objective of comparing its solutions to the ones obtained through the Wiener criterion. The theoretical analysis presented here confirms the robustness of the CMA when applied to the adaptation of the decision feedback equalizer, and also defines a class of channels for which the algorithm suffers from ill-convergence when initialized at the origin.
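For orientation, the sketch below shows a minimal constant-modulus update for a feedback-only decision feedback equalizer. It is an illustrative toy, not the paper's formulation: the tap count, step size, and dispersion constant R2 are invented placeholders, and the zero initialization mirrors the "initialized at the origin" case discussed above.

```python
import numpy as np

def cma_feedback_dfe(r, num_taps=8, mu=1e-3, R2=1.0):
    """Toy CMA adaptation of a feedback-only DFE (illustrative sketch)."""
    f = np.zeros(num_taps, dtype=complex)     # feedback taps, initialized at the origin
    past = np.zeros(num_taps, dtype=complex)  # window of past equalizer outputs
    out = np.empty(len(r), dtype=complex)
    for n, rn in enumerate(r):
        y = rn - f @ past                     # feedback-only equalization
        e = y * (abs(y) ** 2 - R2)            # constant-modulus error term
        f = f + mu * e * np.conj(past)        # stochastic-gradient CM update
                                              # (sign follows the subtraction above)
        past = np.roll(past, 1)
        past[0] = y                           # feed the new output back
        out[n] = y
    return out, f
```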
Abstract:
This paper considers two aspects of the nonlinear H∞ control problem: the use of weighting functions for performance and robustness improvement, as in the linear case, and the development of a successive Galerkin approximation method for the solution of the Hamilton-Jacobi-Isaacs equation that arises in the output-feedback case. Designs of nonlinear H∞ controllers obtained by the well-established Taylor approximation and by the proposed Galerkin approximation method, applied to a magnetic levitation system, are presented for comparison purposes.
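For context, the equation that such Galerkin schemes approximate can be quoted in its standard state-feedback form (the paper treats the harder output-feedback case; this form is for orientation only). For the plant ẋ = f(x) + g1(x)w + g2(x)u with penalty output z = (h(x), u) and attenuation level γ:

```latex
\frac{\partial V}{\partial x} f(x)
  + \frac{1}{2}\,\frac{\partial V}{\partial x}
    \left[ \frac{1}{\gamma^{2}}\, g_{1}(x)\, g_{1}^{T}(x)
           - g_{2}(x)\, g_{2}^{T}(x) \right]
    \left(\frac{\partial V}{\partial x}\right)^{T}
  + \frac{1}{2}\, h^{T}(x)\, h(x) = 0,
\qquad
u^{*}(x) = -\, g_{2}^{T}(x) \left(\frac{\partial V}{\partial x}\right)^{T}
```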
Abstract:
In this work, a broad analysis of local search multiuser detection (LS-MUD) for direct sequence/code division multiple access (DS/CDMA) systems under multipath channels is carried out considering the performance-complexity trade-off. The robustness of the LS-MUD to variations in loading, Eb/N0, the near-far effect, the number of fingers of the Rake receiver, and errors in the channel coefficient estimates is verified. A comparative analysis of the bit error rate (BER) versus complexity trade-off is carried out among LS, the genetic algorithm (GA), and particle swarm optimization (PSO). Based on the deterministic behavior of the LS algorithm, simplifications of the cost function calculation are also proposed, yielding more efficient algorithms (simplified and combined LS-MUD versions) and creating new perspectives for MUD implementation. The computational complexity is expressed in terms of the number of operations needed to converge. Our conclusion is that the simplified LS (s-LS) method is always more efficient, independently of the system conditions, achieving better performance with lower complexity than the other heuristic detectors. In addition, its deterministic strategy and absence of input parameters make the s-LS algorithm the most appropriate for the MUD problem.
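A minimal sketch of the 1-opt local search at the core of LS-MUD is given below, assuming the usual synchronous DS/CDMA likelihood metric Omega(b) = 2 bᵀy - bᵀRb, with y the matched-filter bank output and R the code correlation matrix. The s-LS cost simplifications proposed in the paper are not reproduced here; this toy recomputes the full cost at each flip.

```python
import numpy as np

def ls_mud(y, R, max_iter=50):
    """1-opt local-search multiuser detection (illustrative sketch)."""
    b = np.where(y >= 0, 1.0, -1.0)          # conventional-detector starting point
    cost = 2 * b @ y - b @ R @ b             # log-likelihood metric Omega(b)
    for _ in range(max_iter):
        improved = False
        for k in range(len(b)):
            b[k] = -b[k]                     # tentatively flip bit k
            new = 2 * b @ y - b @ R @ b
            if new > cost:
                cost, improved = new, True   # keep the improving flip
            else:
                b[k] = -b[k]                 # undo the flip
        if not improved:                     # deterministic stop at a local optimum
            break
    return b
```

Note that the search is deterministic and takes no tuning parameters, which is the property the abstract credits for making s-LS attractive against GA and PSO.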
Abstract:
Over-the-air download is an important feature for terrestrial digital television systems. It provides a cheaper option for DTV receiver manufacturers to deliver bug fixes and quality improvements to their products, allowing a shorter time to market. This paper proposes an over-the-air download software update mechanism for the Brazilian system. The mechanism was specified according to the Brazilian DTV over-the-air download specifications, but was extended with efficiency, reliability, and user transparency as software update requirements. A proof of concept was implemented on a Linux-based set-top box. The mechanism is divided into five main functional parts: download schedule, packets download, packets authentication, installation, and error robustness. Analyses of the implementation were conducted considering two criteria: download robustness and maximum download rate.
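To make the functional parts concrete, here is a hypothetical sketch of one update cycle; all names and the HMAC-based integrity check are illustrative assumptions, not the signalling or authentication defined by the Brazilian DTV specifications.

```python
import hashlib
import hmac

def authenticate(packets: bytes, key: bytes) -> bool:
    """Toy integrity check: verify an HMAC-SHA256 trailer on the payload."""
    payload, tag = packets[:-32], packets[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

def ota_update_cycle(packets: bytes, key: bytes, install_fn, rollback_fn) -> str:
    """One downloaded-image install cycle with basic error robustness."""
    if not authenticate(packets, key):      # packets authentication
        return "reschedule"                 # error robustness: retry later
    try:
        install_fn(packets[:-32])           # installation
        return "installed"
    except Exception:
        rollback_fn()                       # error robustness: roll back image
        return "rolled_back"
```

The download schedule and packet download stages (waiting for the carousel slot and reassembling packets) are omitted here, since they depend on the broadcast signalling.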
Abstract:
Many therapeutic agents are commercialized in their racemic form. The enantiomers can show differences in pharmacokinetic and pharmacodynamic profiles, and the use of a pure enantiomer in pharmaceutical formulations may result in a better therapeutic index and fewer adverse effects. Atropine, an alkaloid of Atropa belladonna, is a racemic mixture of l-hyoscyamine and d-hyoscyamine, widely used to dilate the pupil. To quantify these enantiomers in ophthalmic solutions, an HPLC method was developed and validated using a Chiral AGP® column at 20°C. The mobile phase consisted of a buffered phosphate solution (containing 10 mM 1-octanesulfonic acid sodium salt and 7.5 mM triethylamine, adjusted to pH 7.0 with orthophosphoric acid) and acetonitrile (99 + 1, v/v). The flow rate was 0.6 mL/min, with UV detection at 205 nm. In the concentration range of 14.0-26.0 μg/mL, the method was found to be linear (r > 0.9999), accurate (with recovery of 100.1-100.5%), and precise (system RSD ≤ 0.6%; intraday RSD ≤ 1.1%; interday RSD ≤ 0.9%). The method was specific, and the standard and sample solutions were stable for up to 72 h. A factorial design demonstrated robustness to a variation of ±10% in the mobile phase components and 2°C in column temperature. The complete validation, including stress testing and factorial design, was studied and is presented in this research.
Abstract:
A reversed-phase high performance liquid chromatography (RP-HPLC) method for the determination of econazole nitrate, preservatives (methylparaben and propylparaben), and its main impurities (4-chlorobenzyl alcohol and α-(2,4-dichlorophenyl)-1H-imidazole-1-ethanol) in cream formulations has been developed and validated. Separation was achieved on a Bondclone® C18 column (300 mm x 3.9 mm i.d., 10 μm) using a gradient method with a mobile phase composed of methanol and water. The flow rate was 1.4 mL/min, the column temperature was 25°C, and detection was made at 220 nm. Miconazole nitrate was used as an internal standard. The total run time was less than 15 min. The analytical curves presented correlation coefficients above 0.99, and detection and quantitation limits were calculated for all molecules. Excellent accuracy and precision were obtained for econazole nitrate: recoveries varied from 97.9 to 102.3%, and intra- and inter-day precisions, calculated as relative standard deviation (RSD), were lower than 2.2%. Specificity, robustness, and the assay for econazole nitrate were also determined. The method allowed the quantitative determination of econazole nitrate, its impurities, and preservatives, and could be applied as a stability-indicating method for econazole nitrate in cream formulations.
Abstract:
A method was optimized for the analysis of omeprazole (OMZ) by ultra-high speed LC with diode array detection using a monolithic Chromolith Fast Gradient RP-18 endcapped column (50 x 2.0 mm id). The analyses were performed at 30°C using a mobile phase consisting of 0.15% (v/v) trifluoroacetic acid (TFA) in water (solvent A) and 0.15% (v/v) TFA in acetonitrile (solvent B) under a linear gradient of 5 to 90% B in 1 min at a flow rate of 1.0 mL/min and detection at 220 nm. Under these conditions, the OMZ retention time was approximately 0.74 min. Validation parameters, such as selectivity, linearity, precision, accuracy, and robustness, showed results within the acceptance criteria. The method developed was successfully applied to OMZ enteric-coated pellets, showing that this assay can be used in the pharmaceutical industry for routine QC analysis. Moreover, the analytical conditions established allow for the simultaneous analysis of the OMZ metabolites 5-hydroxyomeprazole and omeprazole sulfone in the same run, showing that this method can be extended to other matrices with adequate sample preparation procedures.
Abstract:
A simple method was optimized and validated for the determination of ractopamine hydrochloride (RAC) in raw material and feed additives by HPLC for use in quality control in veterinary industries. The best optimized conditions were a C8 column (250 x 4.6 mm id, 5.0 μm particle size) at room temperature with an acetonitrile-100 mM sodium acetate buffer (pH 5.0; 75 + 25, v/v) mobile phase at a flow rate of 1.0 mL/min and UV detection at 275 nm. With these conditions, the retention time of RAC was around 5.2 min, and standard curves were linear in the concentration range of 160-240 μg/mL (correlation coefficient ≥ 0.999). Validation parameters, such as selectivity, linearity, limit of detection (1.60 to 2.05 μg/mL), limit of quantification (4.26 to 6.84 μg/mL), precision (relative standard deviation ≤ 1.87%), accuracy (96.97 to 100.54%), and robustness, gave results within acceptable ranges. Therefore, the developed method can be successfully applied for routine quality control analysis of raw material and feed additives.
Abstract:
Introduction - Baccharis dracunculifolia, which has great potential for the development of new phytotherapeutic medicines, is the most important botanical source of southeastern Brazilian propolis, known as green propolis on account of its color. Objective - To develop a reliable reversed-phase HPLC method for the analysis of phenolic compounds in both B. dracunculifolia raw material and its hydroalcoholic extracts. Methodology - The method utilised a C18 CLC-ODS(M) (4.6 x 250 mm) column with nonlinear gradient elution and UV detection at 280 nm. A procedure for the extraction of phenolic compounds using 90% aqueous ethanol, with the addition of veratraldehyde as the internal standard, was developed, allowing the quantification of 10 compounds: caffeic acid, coumaric acid, ferulic acid, cinnamic acid, aromadendrin-4'-methyl ether, isosakuranetin, drupanin, artepillin C, baccharin and 2,2-dimethyl-6-carboxyethenyl-2H-1-benzopyran acid. Results - The developed method gave a good detection response, with linearity in the range 20.83-800 μg/mL and recovery in the range 81.25-93.20%, allowing the quantification of the analysed standards. Conclusion - The method presented good results for the following parameters: selectivity, linearity, accuracy, precision, and robustness, as well as limit of detection and limit of quantitation. Therefore, this method can be considered an analytical tool for the quality control of B. dracunculifolia raw material and its products in both cosmetic and pharmaceutical companies.
Abstract:
The level set method has been implemented in a computational volcanology context. New techniques are presented to solve the advection equation and the reinitialisation equation. These techniques are based upon an algorithm developed in the finite difference context, but are modified to take advantage of the robustness of the finite element method. The resulting algorithm is tested on a well documented Rayleigh–Taylor instability benchmark [19], and on an axisymmetric problem where the analytical solution is known. Finally, the algorithm is applied to a basic study of lava dome growth.
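For reference, the two equations named above have the standard level-set forms (φ is the level-set function, u the flow velocity, τ a pseudo-time, and φ₀ the field before reinitialisation):

```latex
\frac{\partial \phi}{\partial t} + \mathbf{u} \cdot \nabla \phi = 0
\quad \text{(advection)}
\qquad
\frac{\partial \phi}{\partial \tau}
  + \operatorname{sign}(\phi_{0}) \left( \left| \nabla \phi \right| - 1 \right) = 0
\quad \text{(reinitialisation)}
```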
Abstract:
Modeling volcanic phenomena is complicated by free surfaces that often support large rheological gradients. Analytical solutions and analogue models provide explanations for fundamental characteristics of lava flows, but more sophisticated models, incorporating improved physics and rheology, are needed to capture realistic events. To advance our understanding of the flow dynamics of highly viscous lava in Peléean lava dome formation, axisymmetric Finite Element Method (FEM) models of generic endogenous dome growth have been developed. We use a novel technique, the level-set method, which tracks a moving interface while leaving the mesh unaltered. The model equations are formulated in an Eulerian framework. In this paper we test the quality of this technique in our numerical scheme by considering existing analytical and experimental models of lava dome growth which assume a constant Newtonian viscosity. We then compare our model, using an effective viscosity, against analytical solutions for real lava domes extruded on Soufrière, St. Vincent, W.I. in 1979 and Mount St. Helens, USA in October 1980. The level-set method is found to be computationally light and robust enough to model the free surface of a growing lava dome. Moreover, modeling the extruded lava with a constant pressure head naturally produces a drop in extrusion rate with increasing dome height, which explains lava dome growth observables more appropriately than a fixed extrusion rate. From the modeling point of view, the level-set method will ultimately provide an opportunity to capture more of the physics while benefiting from the numerical robustness of regular grids.
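The drop in extrusion rate under a constant pressure head can be seen from a one-line conduit-flow argument (a Poiseuille-type sketch, not the paper's FEM formulation): with the source pressure P₀ fixed, the driving pressure, and hence the flux Q, falls as the dome height h grows,

```latex
Q(t) \;\propto\; \Delta P(t) \;=\; P_{0} - \rho\, g\, h(t)
```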
Abstract:
To translate and transfer solution data between two totally different meshes (i.e. mesh 1 and mesh 2), a consistent point-searching algorithm for solution interpolation in unstructured meshes consisting of 4-node bilinear quadrilateral elements is presented in this paper. The proposed algorithm has the following significant advantages: (1) the use of a point-searching strategy allows a point in one mesh to be accurately related to an element (containing this point) in another mesh, so that to translate/transfer the solution of any particular point from mesh 2 to mesh 1, only one element in mesh 2 needs to be inversely mapped; this minimizes the number of elements to which the inverse mapping is applied, making the present algorithm very effective and efficient. (2) Analytical solutions for the local coordinates of any point in a four-node quadrilateral element, derived in a rigorous mathematical manner in this paper, make it possible to carry out the inverse mapping very effectively and efficiently. (3) The use of consistent interpolation enables the interpolated solution to be compatible with the original solution and therefore guarantees an interpolated solution of extremely high accuracy. After the mathematical formulations of the algorithm are presented, the algorithm is tested and validated through a challenging problem. The results from the test problem demonstrate the generality, accuracy, effectiveness, efficiency and robustness of the proposed consistent point-searching algorithm.
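The inverse-mapping step can be sketched as follows for a 4-node bilinear quadrilateral. Note that the paper derives closed-form analytical expressions for the local coordinates; this sketch substitutes a standard Newton iteration on the same bilinear shape functions, which converges to the same point.

```python
import numpy as np

def inverse_map_quad(xy, nodes, tol=1e-12, max_iter=20):
    """Find local coordinates (xi, eta) of a physical point inside a
    4-node bilinear quadrilateral (Newton stand-in for the paper's
    analytical inversion). `nodes` is a 4x2 array, counter-clockwise."""
    xi = np.zeros(2)                                    # start at element centre
    for _ in range(max_iter):
        s, t = xi
        N = 0.25 * np.array([(1 - s) * (1 - t), (1 + s) * (1 - t),
                             (1 + s) * (1 + t), (1 - s) * (1 + t)])
        dN = 0.25 * np.array([[-(1 - t), -(1 - s)],
                              [ (1 - t), -(1 + s)],
                              [ (1 + t),  (1 + s)],
                              [-(1 + t),  (1 - s)]])    # [dN/dxi, dN/deta]
        r = N @ nodes - xy                              # residual in physical space
        if np.linalg.norm(r) < tol:
            break
        J = nodes.T @ dN                                # 2x2 Jacobian of the map
        xi -= np.linalg.solve(J, r)
    return xi

# e.g. the centre of the unit square maps back to the element centre:
# inverse_map_quad(np.array([0.5, 0.5]),
#                  np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]]))  # -> [0., 0.]
```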
Abstract:
In order to use the finite element method effectively and efficiently for solving fluid-rock interaction problems in pore-fluid saturated hydrothermal/sedimentary basins, we present in this paper new concepts and numerical algorithms that deal with fundamental issues of such problems, issues that are often overlooked by purely numerical modelers. (1) Since the fluid-rock interaction problem involves heterogeneous chemical reactions between reactive aqueous chemical species in the pore-fluid and solid minerals in the rock masses, it is necessary to develop the new concept of the generalized concentration of a solid mineral, so that two types of reactive mass transport equations, namely the conventional mass transport equation for the aqueous chemical species in the pore-fluid and the degenerated mass transport equation for the solid minerals in the rock mass, can be solved simultaneously. (2) Since the reaction area between the pore-fluid and mineral surfaces is basically a function of the generalized concentration of the solid mineral, the dependence of the dissolution rate of a dissolving mineral on its generalized concentration must be considered appropriately in the numerical analysis. (3) Considering the direct consequence of porosity evolution with time in the transient analysis of fluid-rock interaction problems, we propose a term-splitting algorithm and the concept of equivalent source/sink terms in the mass transport equations, by which the problem of variable mesh Peclet and Courant numbers is converted into one of constant mesh Peclet and Courant numbers. The numerical results from an application example demonstrate the usefulness of the proposed concepts and the robustness of the proposed numerical algorithms in dealing with fluid-rock interaction problems in pore-fluid saturated hydrothermal/sedimentary basins.
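Schematically, the two transport equations contrasted in point (1) can be written as below. This is standard advection-dispersion notation quoted for orientation, not the paper's exact formulation: C is an aqueous species concentration, C_s the generalized concentration of a solid mineral, φ the porosity, D the dispersion coefficient, and R the heterogeneous reaction term; the degenerated equation has no advection or dispersion because the mineral is immobile.

```latex
\frac{\partial (\phi C)}{\partial t}
  + \nabla \cdot \left( \phi\, \mathbf{u}\, C \right)
  = \nabla \cdot \left( \phi\, D\, \nabla C \right) + R(C, C_{s}),
\qquad
\frac{\partial C_{s}}{\partial t} = -\, R(C, C_{s})
```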
Abstract:
Objective: Existing evidence suggests that family interventions can be effective in reducing relapse rates in schizophrenia and related conditions. Despite this, such interventions are not routinely delivered in Australian mental health services. The objective of the current study is to investigate the incremental cost-effectiveness ratios (ICERs) of introducing three types of family interventions, namely behavioural family management (BFM), behavioural intervention for families (BIF), and multiple family groups (MFG), into current mental health services in Australia. Method: The ICER of each of the family interventions is assessed from a health sector perspective, including the government, persons with schizophrenia, and their families/carers, using a standardized methodology. A two-stage approach is taken to the assessment of benefit. The first stage involves a quantitative analysis based on disability-adjusted life years (DALYs) averted. The second stage involves applying 'second filter' criteria (including equity, strength of evidence, feasibility and acceptability to stakeholders) to the results. The robustness of the results is tested using multivariate probabilistic sensitivity analysis. Results: The most cost-effective intervention is BIF (A$8,000 per DALY averted), followed by MFG (A$21,000 per DALY averted) and lastly BFM (A$28,000 per DALY averted). The inclusion of time costs makes BFM more cost-effective than MFG. Varying the discount rate has no effect on the conclusions. Conclusions: All three interventions are considered value-for-money within an Australian context. This conclusion needs to be tempered against the methodological challenge of converting clinical outcomes into a generic economic outcome measure (DALY). Issues surrounding the feasibility of routinely implementing such interventions also need to be addressed.
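The ICER figures quoted above follow the usual definition, incremental cost divided by incremental health gain; the sketch below shows the arithmetic with invented placeholder numbers, not the study's inputs.

```python
def icer(extra_cost_aud: float, dalys_averted: float) -> float:
    """Incremental cost-effectiveness ratio in A$ per DALY averted."""
    return extra_cost_aud / dalys_averted

# e.g. an intervention costing A$800,000 more than current practice while
# averting 100 DALYs yields A$8,000 per DALY averted:
print(icer(800_000, 100))   # -> 8000.0
```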
Abstract:
Biogeography deals with the combined analysis of the spatial and temporal components of the evolutionary process. To this end, biogeographical analysis should include two extra steps: a reciprocal illumination step and a consilience step. Even if the traditional challenges of biogeography are successfully handled, the resulting hypothesis is not necessarily meaningful in biogeographical terms; it needs continuous testing in the light of external hypotheses. For this reason, a concept analogous to Hennig's reciprocal illumination is valuable, as well as a sort of biogeographical consilience in Whewell's sense. First, through the search for different classes of evidence, information useful for improving the hypothesis can be accessed via reciprocal illumination. Then, a more general hypothesis arises through a consilience process, when the hypothesis explains phenomena not contemplated during its construction, such as the distribution of other taxa or the existence (or absence) of fossils. This procedure aims to evaluate the robustness of biogeographical hypotheses as scientific theories. Such theories are reliable descriptions of how life changes its form in both space and time, placing historical biogeography close to Croizat's view of evolution as a three-dimensional phenomenon.