60 results for inequality constraint
Abstract:
Cosmological analyses based on currently available observations are unable to rule out a sizeable coupling between dark energy and dark matter. However, the signature of the coupling is not easy to isolate, since the coupling is degenerate with other cosmological parameters, such as the dark energy equation of state and the dark matter abundance. We discuss possible ways to break this degeneracy. Based on the perturbation formalism, we carry out a global fit to the latest observational data and obtain a tight constraint on the interaction between the dark sectors. We find that an appropriate interaction can alleviate the coincidence problem.
Abstract:
In this paper, we present an analog of a Bell's inequality violation test for N qubits to be performed on a nuclear magnetic resonance (NMR) quantum computer. This can be used to simulate or predict the results of different Bell's inequality tests, with distinct configurations and larger numbers of qubits. To demonstrate our scheme, we implemented a simulation of the violation of the Clauser-Horne-Shimony-Holt (CHSH) inequality using a two-qubit NMR system and compared the results to those of a photon experiment. The experimental results are well described both by quantum mechanics and by a local realistic hidden-variable model (LRHVM) developed specifically for NMR, which is why we refer to this experiment as a simulation of Bell's inequality violation. Our result shows explicitly how the two theories can be compatible with each other due to the detection loophole. In the last part of this work, we discuss the possibility of testing some fundamental features of quantum mechanics using NMR with highly polarized spins, where a strong discrepancy between quantum mechanics and hidden-variable models can be expected.
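As a hedged, textbook-style sketch (not the paper's NMR pulse sequences or photon data), the CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b') can be evaluated with the singlet-state prediction E(a,b) = -cos(a - b). Local hidden-variable models bound |S| <= 2, while quantum mechanics reaches the Tsirelson bound 2*sqrt(2):

```python
import math

def correlation(a, b):
    """Quantum-mechanical singlet correlation for analyzer angles a, b (radians)."""
    return -math.cos(a - b)

def chsh(a, ap, b, bp):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    return (correlation(a, b) - correlation(a, bp)
            + correlation(ap, b) + correlation(ap, bp))

# Angle settings that saturate the quantum (Tsirelson) bound.
S = chsh(0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
print(abs(S))  # 2.828..., i.e. 2*sqrt(2), above the classical bound of 2
```

Any local hidden-variable assignment of +/-1 outcomes keeps |S| at or below 2, so the excess here is exactly the violation such experiments probe.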
Abstract:
We consider the problem of estimating interaction neighborhoods from the partial observation of a finite number of realizations of a random field. We introduce a model selection rule to choose estimators of conditional probabilities among natural candidates. Our main result is an oracle inequality satisfied by the resulting estimator. We then use this selection rule in a two-step procedure to evaluate the interacting neighborhoods: the selection rule selects a small prior set of possible interacting points, and a cutting step removes the irrelevant points from this prior set. We also prove that Ising models satisfy the assumptions of the main theorems, without restrictions on the temperature, the structure of the interaction graph, or the range of the interactions, which provides a large class of applications for our results. We give a computationally efficient procedure for these models and finally demonstrate the practical efficiency of our approach in a simulation study.
Abstract:
Recent fears of terrorism have provoked an increase in delays and denials of transboundary shipments of radioisotopes. This represents a serious constraint for sterile insect technique (SIT) programs around the world, as they rely on ionizing radiation from radioisotopes for insect sterilization. To validate a novel X-ray irradiator, a series of studies on Ceratitis capitata (Wiedemann) and Anastrepha fraterculus (Wiedemann) (Diptera: Tephritidae) was carried out, comparing the relative biological effectiveness (RBE) of X-rays and traditional gamma radiation from ⁶⁰Co. Male C. capitata pupae and pupae of both sexes of A. fraterculus, both 24-48 h before adult emergence, were irradiated with doses ranging from 15 to 120 Gy and from 10 to 70 Gy, respectively. Estimated mean doses of 91.2 Gy of X radiation and 124.9 Gy of gamma radiation induced 99% sterility in C. capitata males, while irradiated A. fraterculus were 99% sterile at approximately 40-60 Gy for both radiation treatments. Standard quality control parameters and mating indices were not significantly affected by either type of radiation. The RBE did not differ significantly between the tested X and gamma radiation, so X-rays are as biologically effective for SIT purposes as gamma rays. This work confirms the suitability of this new generation of X-ray irradiators for pest control programs that integrate the SIT.
Abstract:
We investigated the effect of joint immobilization on postural sway during quiet standing. We hypothesized that the center of pressure (COP), rambling, and trembling trajectories would be affected by joint immobilization. Ten young adults stood on a force plate for 60 s without and with immobilized joints (only knees constrained, CK; knees and hips, CH; and knees, hips, and trunk, CT), with their eyes open (OE) or closed (CE). The root mean square deviation (RMS, the standard deviation about the mean) and the mean speed of the COP, rambling, and trembling trajectories in the anterior-posterior and medial-lateral directions were analyzed. Similar effects of vision were observed in both directions: larger amplitudes for all variables were observed in the CE condition. In the anterior-posterior direction, postural sway increased only when the knees, hips, and trunk were immobilized. In the medial-lateral direction, the RMS and mean speed of the COP, rambling, and trembling displacements decreased after immobilization of the knees and hips and of the knees, hips, and trunk. These findings indicate that the single inverted pendulum model cannot completely explain the processes involved in the control of quiet upright stance in the anterior-posterior and medial-lateral directions. (C) 2009 Elsevier B.V. All rights reserved.
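The two sway measures named above are simple to compute from a sampled COP trace. The sketch below uses synthetic data and an invented sampling rate, not the study's recordings: RMS is the standard deviation about the mean position, and mean speed is path length divided by trial duration.

```python
import numpy as np

def sway_measures(cop, fs):
    """Return (RMS in cm, mean speed in cm/s) for a 1-D COP trace sampled at fs Hz."""
    rms = np.std(cop)                       # standard deviation about the mean
    duration = (len(cop) - 1) / fs          # total time spanned by the trace
    speed = np.sum(np.abs(np.diff(cop))) / duration  # path length / time
    return rms, speed

# Synthetic 60 s anterior-posterior trace sampled at 100 Hz (illustrative only).
rng = np.random.default_rng(0)
t = np.arange(0.0, 60.0, 0.01)
cop_ap = 0.5 * np.sin(2 * np.pi * 0.3 * t) + 0.05 * rng.standard_normal(t.size)
rms, speed = sway_measures(cop_ap, fs=100.0)
print(rms, speed)
```

The same functions would be applied separately to the anterior-posterior and medial-lateral components, and to the rambling and trembling decompositions of each.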
Abstract:
Self-controlled practice implies a process of decision making, which suggests that the options available in a self-controlled practice condition could affect learners. The number of task components with no fixed position in a movement sequence may affect the way learners self-control their practice. A 200 cm coincident timing track with 90 light-emitting diodes (LEDs), the first and last LEDs being the warning and target lights, respectively, was set so that the apparent speed of the light along the track was 1.33 m/sec. Participants were required to touch six sensors sequentially, the last one coincidently with the lighting of the target light (timing task). Group 1 (n=55) had only one constraint: they were instructed to touch the sensors in any order, except for the last sensor, which had to be the one positioned close to the target light. Group 2 (n=53) had three constraints: the first two and the last sensor to be touched. Both groups practiced the task until timing error was less than 30 msec on three consecutive trials. There were no statistically significant differences between groups in the number of trials needed to reach the performance criterion, but (a) participants in Group 2 created fewer sequences compared to Group 1 and (b) were more likely to use the same sequence throughout the learning process. The number of options for a movement sequence affected the way learners self-controlled their practice but had no effect on the amount of practice needed to reach criterion performance.
Abstract:
The power loss reduction in distribution systems (DSs) is a nonlinear and multiobjective problem. Service restoration in DSs is computationally even harder, since it additionally requires a solution in real time. Both DS problems are computationally complex. For large-scale networks, the usual problem formulation has thousands of constraint equations. The node-depth encoding (NDE) enables a modeling of DS problems that eliminates several constraint equations from the usual formulation, making the problem solution simpler. On the other hand, a multiobjective evolutionary algorithm (EA) based on subpopulation tables adequately models several objectives and constraints, enabling a better exploration of the search space. The combination of the multiobjective EA with the NDE (MEAN) results in the proposed approach for solving DS problems in large-scale networks. Simulation results show that the MEAN is able to find adequate restoration plans for a real DS with 3860 buses and 632 switches in a running time of 0.68 s. Moreover, the MEAN exhibits sublinear running time as a function of system size. Tests with networks ranging from 632 to 5166 switches indicate that the MEAN can find network configurations corresponding to a power loss reduction of 27.64% for very large networks while requiring relatively low running time.
Abstract:
The main objective of this paper is to relieve power system engineers of the burden of the complex and time-consuming process of power system stabilizer (PSS) tuning. To achieve this goal, the paper proposes an automatic process for computerized tuning of PSSs, based on an iterative process that uses a linear matrix inequality (LMI) solver to find the PSS parameters. It is shown in the paper that PSS tuning can be written as a search problem over a non-convex feasible set. The proposed algorithm solves this feasibility problem using an iterative LMI approach and a suitable initial condition, corresponding to a PSS designed for nominal operating conditions only (which is a quite simple task, since the required phase compensation is uniquely defined). Some knowledge about PSS tuning is also incorporated in the algorithm through the specification of bounds defining the allowable PSS parameters. The application of the proposed algorithm to a benchmark test system and the nonlinear simulation of the resulting closed-loop models demonstrate the efficiency of this algorithm. (C) 2009 Elsevier Ltd. All rights reserved.
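As an illustrative stand-in (not the paper's algorithm), the basic feasibility certificate behind LMI-based tuning is the Lyapunov inequality A^T P + P A < 0 with P > 0, which proves stability of the closed-loop linearization x' = A x. The toy below obtains such a P by solving the Lyapunov equation A^T P + P A = -I as a linear system in vec(P), rather than by calling a general-purpose LMI solver; the matrix A is invented.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])        # toy stable closed-loop matrix (eigenvalues -1, -2)
n = A.shape[0]
I = np.eye(n)

# vec(A^T P + P A) = (I kron A^T + A^T kron I) vec(P), using column-major vec.
L = np.kron(I, A.T) + np.kron(A.T, I)
P = np.linalg.solve(L, -I.flatten(order="F")).reshape((n, n), order="F")

assert np.all(np.linalg.eigvalsh(P) > 0)                 # P > 0
assert np.all(np.linalg.eigvalsh(A.T @ P + P @ A) < 0)   # strict Lyapunov inequality holds
print(P)  # [[1.25 0.25], [0.25 0.25]]
```

An LMI solver generalizes this step: instead of fixing the right-hand side to -I, it searches jointly over P and controller parameters subject to matrix inequality constraints, which is what makes the iterative tuning loop possible.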
Abstract:
The design of supplementary damping controllers to mitigate the effects of electromechanical oscillations in power systems is a highly complex and time-consuming process, which requires a significant amount of knowledge on the part of the designer. In this study, the authors propose an automatic technique that takes the burden of tuning the controller parameters away from the power engineer and places it on the computer. Unlike other approaches that do the same based on robust control theories or evolutionary computing techniques, our proposed procedure uses an optimisation algorithm that works over a formulation of the classical tuning problem in terms of bilinear matrix inequalities. Using this formulation, it is possible to apply linear matrix inequality solvers to find a solution to the tuning problem via an iterative process, with the advantage that these solvers are widely available and have well-known convergence properties. The proposed algorithm is applied to tune the parameters of supplementary controllers for thyristor controlled series capacitors placed in the New England/New York benchmark test system, aiming at the improvement of the damping factor of inter-area modes under several different operating conditions. The results of the linear analysis are validated by non-linear simulation and demonstrate the effectiveness of the proposed procedure.
Abstract:
This paper presents a new approach, the predictor-corrector modified barrier approach (PCMBA), to minimize active losses in power system planning studies. In the PCMBA, the inequality constraints are transformed into equalities by introducing positive auxiliary variables, which are perturbed by the barrier parameter and treated by the modified barrier method. The first-order necessary conditions of the Lagrangian function are solved by a predictor-corrector Newton's method. The perturbation of the auxiliary variables results in an expansion of the feasible set of the original problem, allowing the limits of the inequality constraints to be reached. The feasibility of the proposed approach is demonstrated on various IEEE test systems and on a realistic 2256-bus power system corresponding to the Brazilian South-Southeastern interconnected system. The results show that using the predictor-corrector method with the pure modified barrier approach accelerates the convergence of the problem in terms of both the number of iterations and computational time. (C) 2008 Elsevier B.V. All rights reserved.
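A 1-D toy can make the slack-plus-barrier idea concrete. The sketch below uses the classical logarithmic barrier as a stand-in for the paper's modified barrier, with invented problem data: minimize f(x) = (x - 2)^2 subject to x <= 1. With slack s = 1 - x > 0 kept positive by the term -mu*log(s), stationarity gives 2(x - 2)(1 - x) + mu = 0, a quadratic whose interior root can be written in closed form.

```python
import math

def barrier_minimizer(mu):
    """Interior root of 2x^2 - 6x + 4 - mu = 0 (the branch with x < 1),
    i.e. the minimizer of (x - 2)^2 - mu*log(1 - x) for barrier weight mu > 0."""
    return (6 - math.sqrt(4 + 8 * mu)) / 4

for mu in (1.0, 0.1, 0.01, 0.001):
    print(mu, barrier_minimizer(mu))  # iterates stay feasible and rise toward x = 1
```

As mu decreases, the minimizer approaches the active constraint x = 1 from inside the feasible set, which is the behavior the perturbed-slack formulation exploits when the barrier parameter is driven to zero.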
Abstract:
We consider a class of two-dimensional problems in classical linear elasticity for which material overlapping occurs in the absence of singularities. Of course, material overlapping is not physically realistic, and one possible way to prevent it uses a constrained minimization theory. In this theory, a minimization problem consists of minimizing the total potential energy of a linear elastic body subject to the constraint that the deformation field must be locally invertible. Here, we use an interior and an exterior penalty formulation of the minimization problem together with both a standard finite element method and classical nonlinear programming techniques to compute the minimizers. We compare both formulations by solving a plane problem numerically in the context of the constrained minimization theory. The problem has a closed-form solution, which is used to validate the numerical results. This solution is regular everywhere, including the boundary. In particular, we show numerical results which indicate that, for a fixed finite element mesh, the sequences of numerical solutions obtained with both the interior and the exterior penalty formulations converge to the same limit function as the penalization is enforced. This limit function yields an approximate deformation field to the plane problem that is locally invertible at all points in the domain. As the mesh is refined, this field converges to the exact solution of the plane problem.
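To illustrate the exterior penalty formulation on invented 1-D data (not the elasticity problem itself): minimize f(x) = (x - 2)^2 subject to x <= 1 by adding the penalty k * max(x - 1, 0)^2. In contrast to an interior barrier, the penalized minimizer sits slightly on the infeasible side of the constraint and tends to x = 1 as the penalization is enforced.

```python
def exterior_minimizer(k):
    """Minimizer of (x - 2)^2 + k*max(x - 1, 0)^2; for the optimum x > 1,
    stationarity 2(x - 2) + 2k(x - 1) = 0 gives x = (2 + k)/(1 + k)."""
    return (2 + k) / (1 + k)

for k in (1.0, 10.0, 100.0, 1000.0):
    print(k, exterior_minimizer(k))  # 1.5, 1.0909..., 1.0099..., 1.0009...
```

Both penalty families converge to the same constrained minimizer (here x = 1) as the penalization grows, one from the feasible side and one from the infeasible side, mirroring the numerical observation in the abstract that interior and exterior formulations converge to the same limit function.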
Abstract:
The applicability of a meshfree approximation method, namely the EFG method, to fully geometrically exact analysis of plates is investigated. Based on a unified nonlinear theory of plates, which allows for arbitrarily large rotations and displacements, a Galerkin approximation via MLS functions is set up. A hybrid method of analysis is proposed, where the solution is obtained by the independent approximation of the generalized internal displacement fields and the generalized boundary tractions. A consistent linearization procedure is performed, resulting in a semi-definite generalized tangent stiffness matrix which, for hyperelastic materials and conservative loadings, is always symmetric (even for configurations far from the generalized equilibrium trajectory). Besides the total Lagrangian formulation, an updated version is also presented, which enables the treatment of rotations beyond the parameterization limit. An extension of the arc-length method that includes the generalized domain displacement fields, the generalized boundary tractions, and the load parameter in the constraint equation of the hyper-ellipse is proposed to solve the resulting nonlinear problem. Extending the hybrid-displacement formulation, a multi-region decomposition is proposed to handle complex geometries. A criterion for classifying the stability of equilibria, based on analysis of the bordered Hessian matrix, is suggested. Several numerical examples are presented, illustrating the effectiveness of the method. Unlike standard finite element methods (FEM), the resulting solutions are (arbitrarily) smooth generalized displacement and stress fields. (c) 2007 Elsevier Ltd. All rights reserved.
Abstract:
A thermodynamic air-standard cycle was envisaged for Ranque-Hilsch (R-H) or vortex tubes to provide relevant thermodynamic analysis and tools for setting operating limits according to the conservation laws of mass and energy, as well as the constraint of the Second Law of Thermodynamics. The study used an integral (control volume) approach and resulted in working equations for evaluating the performance of an R-H tube. The work proved that the coefficient of performance does not depend on the R-H tube operating mode, i.e., the same value is obtained whether the R-H tube operates as a heat pump or as a refrigeration device. It was also shown that the isentropic coefficient of performance displays optimal values of the cold and hot mass fractions for a given operating pressure ratio. Finally, the study concluded by comparing the present analysis with experimental data available in the literature for operating pressures ranging from 2 to 11 atm. (C) 2010 Elsevier Ltd and IIR. All rights reserved.
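A minimal first-law sketch for an adiabatic vortex tube with an ideal gas of constant cp (illustrative numbers, not the paper's data or its full second-law analysis): mass and energy conservation give T_in = y*T_cold + (1 - y)*T_hot, where y is the cold mass fraction, so fixing the inlet and cold-stream temperatures determines the hot-stream temperature.

```python
def hot_stream_temperature(T_in, T_cold, y):
    """Hot-outlet temperature (K) from the energy balance
    T_in = y*T_cold + (1 - y)*T_hot, for cold mass fraction 0 < y < 1."""
    return (T_in - y * T_cold) / (1.0 - y)

# Example: 300 K inlet, cold stream chilled to 270 K, 30% of the mass flow cold.
T_hot = hot_stream_temperature(T_in=300.0, T_cold=270.0, y=0.3)
print(T_hot)  # ~312.86 K: the hot stream warms by exactly the amount the balance requires
```

Within this control-volume view, the second-law constraint then bounds how far apart T_cold and T_hot may lie for a given pressure ratio, which is what the working equations in the study formalize.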
Abstract:
In this paper a bond graph methodology is used to model incompressible fluid flows with viscous and thermal effects. The distinctive characteristic of these flows is the role of pressure, which does not behave as a state variable but as a function that must act so that the resulting velocity field has zero divergence. Velocity and entropy per unit volume are used as independent variables for a single-phase, single-component flow. Time-dependent nodal values and interpolation functions are introduced to represent the flow field, from which nodal vectors of velocity and entropy are defined as state variables. The system of momentum and continuity equations coincides with the one obtained by applying the Galerkin method to the weak formulation of the problem in finite elements. The integral incompressibility constraint is derived from the integral conservation of mechanical energy. The weak formulation of the thermal energy equation is modeled with true bond graph elements in terms of nodal vectors of temperature and entropy rates, resulting in a Petrov-Galerkin method. The resulting bond graph shows the coupling between the mechanical and thermal energy domains through the viscous dissipation term. All kinds of boundary conditions are handled consistently and can be represented as generalized effort or flow sources. A procedure for causality assignment is derived for the resulting graph, satisfying the Second Law of Thermodynamics. (C) 2007 Elsevier B.V. All rights reserved.
Abstract:
To explain the magnetic behavior of plastic deformation of thin magnetic films (Fe and permalloy) on an elastic substrate (nitinol), it is noted that unlike in the bulk, the dislocation density does not increase dramatically because of the dimensional constraint. As a result, the resulting residual stress, even though strain hardening is limited, dominates the observed magnetic behavior. Thus, with the field parallel to the stress axis, the compressive residual stress resulting from plastic deformation causes a decrease in remanence and an increase in coercivity; and with the field perpendicular to the stress axis, the resulting compressive residual stress causes an increase in remanence and a decrease in coercivity. These elements have been inserted into the model previously developed for plastic deformation in the bulk, producing the aforementioned behavior, which has been observed experimentally in the films.