950 results for "Penalty parameter"
Abstract:
The Random Parameter model was proposed to explain the structure of the covariance matrix in problems where most, but not all, of the eigenvalues of the covariance matrix can be explained by Random Matrix Theory. In this article, we explore the scaling properties of the model, as observed in the multifractal structure of the simulated time series. We use the Wavelet Transform Modulus Maxima technique to obtain the dependence of the multifractal spectrum on the parameters of the model. The model shows a scaling structure compatible with the stylized facts for a reasonable choice of parameter values.
Abstract:
This paper proposes a three-stage offline approach to detect, identify, and correct series and shunt branch parameter errors. In Stage 1 the branches suspected of having parameter errors are identified through an Identification Index (II). The II of a branch is the ratio between the number of measurements adjacent to that branch whose normalized residuals are higher than a specified threshold value and the total number of measurements adjacent to that branch. Using several measurement snapshots, in Stage 2 the suspicious parameters are estimated, in a simultaneous multiple-state-and-parameter estimation, via an augmented state and parameter estimator that extends the V-theta state vector to include the suspicious parameters. Stage 3 enables the validation of the estimation obtained in Stage 2, and is performed via a conventional weighted least squares estimator. Several simulation results (with IEEE bus systems) have demonstrated the reliability of the proposed approach in dealing with single and multiple parameter errors in adjacent and non-adjacent branches, as well as in parallel transmission lines with series compensation. Finally, the proposed approach is confirmed by tests performed on the Hydro-Quebec TransEnergie network.
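The Identification Index described in this abstract is a simple ratio, which can be sketched as follows (an illustrative sketch, not the paper's code; the residual values and the threshold of 3.0 are hypothetical):

```python
# Illustrative sketch: the Identification Index (II) of a branch is the
# fraction of its adjacent measurements whose absolute normalized residual
# exceeds a chosen threshold.  Values below are hypothetical.

def identification_index(adjacent_residuals, threshold=3.0):
    """Ratio of adjacent measurements with |normalized residual| above
    the threshold to the total number of adjacent measurements."""
    if not adjacent_residuals:
        return 0.0
    flagged = sum(1 for r in adjacent_residuals if abs(r) > threshold)
    return flagged / len(adjacent_residuals)

# Example: a branch with 5 adjacent measurements, 2 of them flagged.
residuals = [0.8, 4.2, -3.5, 1.1, 2.9]
print(identification_index(residuals, threshold=3.0))  # 0.4
```

Branches whose II exceeds some cutoff would then be passed to the Stage 2 augmented estimator as suspects.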
Abstract:
We consider a class of two-dimensional problems in classical linear elasticity for which material overlapping occurs in the absence of singularities. Of course, material overlapping is not physically realistic, and one possible way to prevent it uses a constrained minimization theory. In this theory, a minimization problem consists of minimizing the total potential energy of a linear elastic body subject to the constraint that the deformation field must be locally invertible. Here, we use an interior and an exterior penalty formulation of the minimization problem together with both a standard finite element method and classical nonlinear programming techniques to compute the minimizers. We compare both formulations by solving a plane problem numerically in the context of the constrained minimization theory. The problem has a closed-form solution, which is used to validate the numerical results. This solution is regular everywhere, including the boundary. In particular, we show numerical results which indicate that, for a fixed finite element mesh, the sequences of numerical solutions obtained with both the interior and the exterior penalty formulations converge to the same limit function as the penalization is enforced. This limit function yields an approximate deformation field to the plane problem that is locally invertible at all points in the domain. As the mesh is refined, this field converges to the exact solution of the plane problem.
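The exterior penalty idea used in this abstract can be illustrated on a toy one-dimensional problem (a minimal sketch, not the paper's finite-element formulation; the objective (x - 2)^2 and the constraint x <= 1 are hypothetical stand-ins for the potential energy and the local-invertibility constraint):

```python
# Exterior penalty sketch: the constraint x <= 1 is enforced by adding a
# penalty term that grows as the penalization parameter mu is increased.
# The minimizer of the penalized problem approaches the constrained
# solution from outside the feasible region.

def exterior_penalty_min(mu, lo=-5.0, hi=5.0, iters=200):
    """Golden-section search on f(x) = (x - 2)^2 + mu * max(0, x - 1)^2."""
    f = lambda x: (x - 2.0) ** 2 + mu * max(0.0, x - 1.0) ** 2
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

# Analytically the penalized minimizer is (2 + mu) / (1 + mu), which
# tends to the constrained solution x = 1 as mu grows.
for mu in (1.0, 10.0, 1000.0):
    print(round(exterior_penalty_min(mu), 4))
```

An interior penalty formulation would instead add a barrier term that blows up at the constraint boundary, so that iterates approach the solution from inside the feasible region; the abstract reports that both sequences converge to the same limit as the penalization is enforced.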
Abstract:
Modern Integrated Circuit (IC) design is characterized by a strong trend of Intellectual Property (IP) core integration into complex system-on-chip (SOC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug. However, due to state explosion, their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as functional coverage definition, must be carefully planned to guarantee its efficacy. In general, static input space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although they act on different facets of the monitored system and are not mutually exclusive. This work presents a constrained-random simulation-based functional verification methodology in which, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test case scenarios are removed from the input space. To this end, a tool to automatically generate PD-based stimuli sources was developed. Additionally, we have developed a second tool to generate functional coverage models that fit exactly to the PD-based input space. Both the input stimuli and the coverage model enhancements resulted in a notable testbench efficiency increase compared to testbenches with traditional stimulation and coverage scenarios: a 22% simulation time reduction when generating stimuli with our PD-based stimuli sources (still with a conventional coverage model), and a 56% simulation time reduction when combining our stimuli sources with their corresponding, automatically generated, coverage models.
Abstract:
Because natural extracts differ in functional quality, they also differ in effectiveness. We therefore assessed the antioxidant activity of natural extracts in order to characterize their functional quality. All the extracts (brown and green propolis, Ginkgo biloba, and Isoflavin Beta(R)) and the standard used (quercetin) showed antioxidant activity in a dose-dependent manner, with IC50 values ranging from 0.21 to 155.28 ug mL^-1 (inhibition of lipid peroxidation and DPPH radical scavenging assays). We observed a high correlation (r^2 = 0.9913) between the antioxidant methods; on the other hand, the antioxidant activity was not related to the polyphenol and flavonoid content. As the DPPH radical assay is fast, inexpensive, and highly correlated with other antioxidant methods, it could be applied as an additional parameter in the quality control of natural extracts.
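An IC50 from a dose-response assay such as those in this abstract is often read off by interpolating between the two tested concentrations that bracket 50% inhibition. A minimal sketch, with entirely hypothetical data (not the paper's measurements):

```python
# Illustrative sketch: estimating an IC50 by linear interpolation between
# the two tested concentrations whose inhibition percentages bracket 50%.
# The dose-response values below are hypothetical.

def ic50(concentrations, inhibitions):
    """Interpolated concentration giving 50% inhibition.
    Inputs must be sorted by increasing concentration."""
    pairs = list(zip(concentrations, inhibitions))
    for (c0, i0), (c1, i1) in zip(pairs, pairs[1:]):
        if i0 <= 50.0 <= i1:
            return c0 + (50.0 - i0) * (c1 - c0) / (i1 - i0)
    raise ValueError("50% inhibition not bracketed by the data")

# Hypothetical DPPH dose-response (ug/mL vs. % inhibition).
conc = [0.1, 1.0, 10.0, 100.0]
inhib = [10.0, 30.0, 70.0, 95.0]
print(ic50(conc, inhib))  # 5.5, halfway between the 30% and 70% points
```

In practice a sigmoidal (e.g. four-parameter logistic) fit over all points is more robust than two-point interpolation, but the interpolated value conveys the same dose-dependence the abstract describes.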
Abstract:
Three main models of parameter setting have been proposed: the Variational model proposed by Yang (2002; 2004), the Structured Acquisition model endorsed by Baker (2001; 2005), and the Very Early Parameter Setting (VEPS) model advanced by Wexler (1998). The VEPS model contends that parameters are set early. The Variational model supposes that children employ statistical learning mechanisms to decide among competing parameter values, so this model anticipates delays in parameter setting when critical input is sparse, and gradual setting of parameters. On the Structured Acquisition model, delays occur because parameters form a hierarchy, with higher-level parameters set before lower-level parameters. Assuming that children freely choose the initial value, children sometimes will mis-set parameters. However, when that happens, the input is expected to trigger a precipitous rise in one parameter value and a corresponding decline in the other value. We will point to the kind of child language data needed to adjudicate among these competing models.
Abstract:
Power system real-time security assessment is one of the fundamental modules of electricity markets. Typically, when a contingency occurs, the security assessment and enhancement module is required to be ready for action within about 20 minutes to meet the real-time requirement. The recent California blackout again highlighted the importance of system security. This paper proposes an approach for power system security assessment and enhancement based on the information provided by a pre-defined system parameter space. The proposed scheme opens up an efficient way for real-time security assessment and enhancement in a competitive electricity market for the single contingency case.
Abstract:
A new two-parameter integrable model with quantum superalgebra U_q[gl(3|1)] symmetry is proposed, which is an eight-state fermion model with correlated single-particle and pair hoppings as well as uncorrelated triple-particle hopping. The model is solved and the Bethe ansatz equations are obtained.
Abstract:
This article deals with the efficiency of fractional integration parameter estimators. This study was based on Monte Carlo experiments involving simulated stochastic processes with integration orders in the open interval (-1, 1). The evaluated estimation methods were classified into two groups: heuristic and semiparametric/maximum likelihood (ML). The study revealed that the comparative efficiency of the estimators, measured by the smaller mean squared error, depends on the stationary/non-stationary and persistency/anti-persistency conditions of the series. The ML estimator was shown to be superior for stationary persistent processes; the wavelet spectrum-based estimators were better for non-stationary mean-reverting and invertible anti-persistent processes; the weighted periodogram-based estimator was shown to be superior for non-invertible anti-persistent processes.
Abstract:
ArtinM is a D-mannose-binding lectin that has been arousing increasing interest because of its biomedical properties, especially those involving the stimulation of the Th1 immune response, which confers protection against intracellular pathogens. The potential pharmaceutical applications of ArtinM have motivated the production of its recombinant form (rArtinM), so it is important to compare the sugar-binding properties of jArtinM and rArtinM in order to take better advantage of the potential applications of the recombinant lectin. In this work, a biosensor framework based on a Quartz Crystal Microbalance (QCM) was established with the purpose of making a comparative study of the activity of the native and recombinant ArtinM proteins. The QCM transducer was strategically functionalized to use a simple model of protein binding kinetics. This approach allowed for the determination of the binding/dissociation kinetic rates and affinity equilibrium constants of both forms of ArtinM with horseradish peroxidase glycoprotein (HRP), an N-glycosylated protein that contains the trimannoside Man-alpha1-3[Man-alpha1-6]Man, a known ligand for jArtinM (Jeyaprakash et al., 2004). Real-time monitoring of the binding of rArtinM shows that it was able to bind HRP, leading to an analytical curve similar to that of jArtinM, with statistically equivalent kinetic rates and affinity equilibrium constants for both forms of ArtinM. The lower reactivity of rArtinM with HRP relative to jArtinM was attributed to a difference in the number of Carbohydrate Recognition Domains (CRDs) per molecule of each lectin form rather than to a difference in the binding energy per CRD of each lectin form.
Abstract:
The problem of the negative values of the interaction parameter in the equation of Frumkin has been analyzed with respect to the adsorption of nonionic molecules on an energetically homogeneous surface. For this purpose, the adsorption states of a homologue series of ethoxylated nonionic surfactants at the air/water interface have been determined using four different models and literature data (surface tension isotherms). The results obtained with the Frumkin adsorption isotherm imply repulsion between the adsorbed species (corresponding to negative values of the interaction parameter), while the classical lattice theory for an energetically homogeneous surface (e.g., water/air) admits attraction alone. It appears that this serious contradiction can be overcome by assuming heterogeneity in the adsorption layer, that is, effects of partial condensation (formation of aggregates) on the surface. Such a phenomenon is suggested in the Fainerman-Lucassen-Reynders-Miller (FLM) 'Aggregation model'. Despite the limitations of the latter model (e.g., monodispersity of the aggregates), we have been able to estimate the sign and the order of magnitude of Frumkin's interaction parameter and the range of the aggregation numbers of the surface species.
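The role of the Frumkin interaction parameter can be made concrete numerically. A rough sketch follows, using one common form of the isotherm, b*c = theta/(1 - theta) * exp(-2*a*theta); sign conventions for the interaction parameter a vary in the literature, and the parameter values below are illustrative, not taken from the paper:

```python
import math

# Numerical sketch: solve the Frumkin isotherm
#     b*c = theta / (1 - theta) * exp(-2 * a * theta)
# for the surface coverage theta by bisection.  In this sign convention
# a negative interaction parameter a corresponds to repulsion between
# the adsorbed species.  For a <= 0 the left-hand side is monotone in
# theta, so bisection is safe.

def frumkin_coverage(bc, a, tol=1e-10):
    """Coverage theta in (0, 1) satisfying the Frumkin isotherm."""
    f = lambda th: th / (1.0 - th) * math.exp(-2.0 * a * th) - bc
    lo, hi = 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# For the same bulk activity b*c, repulsion (a < 0) lowers the coverage
# relative to the ideal Langmuir case (a = 0, theta = 0.5 at b*c = 1).
print(frumkin_coverage(1.0, 0.0))
print(frumkin_coverage(1.0, -1.0))
```

Fitting this equation to a measured surface tension isotherm is what yields the (here negative) interaction parameter that the abstract discusses.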
Abstract:
We explore the task of optimal quantum channel identification and, in particular, the estimation of a general one-parameter quantum process. We derive new characterizations of optimality and apply the results to several examples, including the qubit depolarizing channel and the harmonic oscillator damping channel. We also discuss the geometry of the problem and illustrate the usefulness of entanglement in process estimation.
Abstract:
The concept of parameter-space size adjustment is proposed in order to enable successful application of genetic algorithms to continuous optimization problems. The performance of genetic algorithms with six different combinations of selection and reproduction mechanisms, with and without parameter-space size adjustment, was tested rigorously on eleven multi-minima test functions. The algorithm with the best performance was employed for the determination of the model parameters of the optical constants of Pt, Ni, and Cr.
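The selection/reproduction machinery such genetic algorithms combine can be sketched minimally as follows (an illustrative sketch, not the paper's algorithm: the sphere test function, population size, mutation scale, and tournament size are all hypothetical, and the parameter-space size adjustment itself is not reproduced here):

```python
import random

# Minimal real-valued genetic algorithm: binary tournament selection,
# arithmetic (blend) crossover, Gaussian mutation, and elitism, applied
# to the sphere function as a stand-in continuous test problem.

def sphere(x):
    return sum(v * v for v in x)

def evolve(dim=2, pop_size=40, gens=150, lo=-5.0, hi=5.0, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=sphere)
    for _ in range(gens):
        new_pop = [best[:]]                      # elitism: keep the best
        while len(new_pop) < pop_size:
            # Binary tournament selection of two parents.
            p1 = min(rng.sample(pop, 2), key=sphere)
            p2 = min(rng.sample(pop, 2), key=sphere)
            # Blend crossover plus Gaussian mutation, clipped to bounds.
            w = rng.random()
            child = [w * a + (1 - w) * b + rng.gauss(0.0, 0.1)
                     for a, b in zip(p1, p2)]
            new_pop.append([min(max(v, lo), hi) for v in child])
        pop = new_pop
        best = min(pop, key=sphere)
    return best

best = evolve()
print(sphere(best))
```

A parameter-space size adjustment, in the spirit of the abstract, would shrink (lo, hi) around the incumbent best solution between generations so that the search concentrates on the promising region.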
Abstract:
Numerical optimisation methods are being more commonly applied to agricultural systems models to identify the most profitable management strategies. The available optimisation algorithms are reviewed and compared, with the literature and our studies identifying evolutionary algorithms (including genetic algorithms) as superior in this regard to simulated annealing, tabu search, hill-climbing, and direct-search methods. Results of a complex beef property optimisation, using a real-valued genetic algorithm, are presented. The relative contributions of the range of operational options and parameters of this method are discussed, and general recommendations are listed to assist practitioners applying evolutionary algorithms to agricultural systems problems.
A broadband uniplanar quasi-Yagi antenna: parameter study in application to a spatial power combiner