9 results for minimum expenditure constraint
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
In this work we introduce a relaxed version of the constant positive linear dependence constraint qualification (CPLD), which we call RCPLD. This development is inspired by a recent generalization of the constant rank constraint qualification by Minchenko and Stakhovski, called RCRCQ. We show that RCPLD is enough to ensure the convergence of an augmented Lagrangian algorithm and that it guarantees the validity of an error bound. We also provide proofs and counterexamples that establish the relations of RCRCQ and RCPLD with other known constraint qualifications. In particular, RCPLD is strictly weaker than CPLD and RCRCQ, while still stronger than Abadie's constraint qualification. We also verify that the second-order necessary optimality condition holds under RCRCQ.
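For reference, the standard setting and the CPLD condition can be sketched as follows (the notation below is ours, not the paper's). Consider the nonlinear program

\[
\min_{x \in \mathbb{R}^n} f(x) \quad \text{s.t.} \quad h_i(x) = 0,\; i = 1,\dots,m, \qquad g_j(x) \le 0,\; j = 1,\dots,p,
\]

with active set \(A(\bar{x}) = \{\, j : g_j(\bar{x}) = 0 \,\}\) at a feasible point \(\bar{x}\). CPLD requires that, for every \(I \subseteq \{1,\dots,m\}\) and \(J \subseteq A(\bar{x})\) such that \(\{\nabla h_i(\bar{x})\}_{i \in I} \cup \{\nabla g_j(\bar{x})\}_{j \in J}\) is positively linearly dependent, the same gradients remain linearly dependent at every \(x\) in a neighborhood of \(\bar{x}\). RCPLD, roughly, tests this only for index sets \(I\) associated with a basis of \(\mathrm{span}\{\nabla h_i(\bar{x})\}\), which is what makes it strictly weaker.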
Abstract:
Background: Previous studies show that chronic hemiparetic patients after stroke present an inability to perform movements with the paretic side of the body. This inability is induced by the reinforcement of unsuccessful attempts, a concept called learned non-use. Forced-use therapy (FUT) and constraint-induced movement therapy (CIMT) were developed with the goal of reversing learned non-use, and have been proposed for the rehabilitation of the paretic upper limb (PUL). The possible effects of these approaches on the rehabilitation of gait and balance are unknown. Objectives: To evaluate the effect of modified FUT (mFUT) and modified CIMT (mCIMT) on gait and balance during four weeks of treatment and at a three-month follow-up. Methods: This study included thirty-seven hemiparetic post-stroke subjects who were randomly allocated to two groups according to the treatment protocol. The non-paretic upper limb was immobilized for 23 hours per day, five days a week. Participants were evaluated at baseline; at the 1st, 2nd, 3rd, and 4th weeks; and three months after randomization. For the evaluation we used the Stroke Impact Scale (SIS), the Berg Balance Scale (BBS), and the Fugl-Meyer Motor Assessment (FM). Gait was analyzed with the 10-meter walk test (T10) and the Timed Up & Go test (TUG). Results: Compared to baseline, both groups showed better health status (SIS), better balance and use of the lower limb (BBS and FM), and greater gait speed (T10 and TUG) during the weeks of treatment and the months of follow-up. Conclusion: The results show that mFUT and mCIMT are effective in the rehabilitation of balance and gait. Trial Registration: ACTRN12611000411943.
Abstract:
Background: Exercise training (ET) can reduce blood pressure (BP) and prevent functional disability. However, the effects of low training volumes have been poorly studied, especially in elderly hypertensive patients. Objectives: To investigate the effects of a multi-component ET program (aerobic training, strength, flexibility, and balance) on the BP, physical fitness, and functional ability of elderly hypertensive patients. Methods: Thirty-six elderly hypertensive patients under optimal clinical treatment underwent a multi-component ET program: two 60-minute sessions a week for 12 weeks at a Basic Health Unit. Results: Compared to pre-training values, systolic and diastolic BP were reduced by 3.6% and 1.2%, respectively (p < 0.001), body mass index was reduced by 1.1% (p < 0.001), and peripheral blood glucose was reduced by 2.5% (p = 0.002). There were improvements in all physical fitness domains except flexibility (sit-and-reach test): muscle strength (chair-stand test and elbow flexor test; p < 0.001), static balance (unipedal stance test; p < 0.029), and aerobic capacity (stationary gait test; p < 0.001). Moreover, there was a reduction in the time required to perform two functional ability tests: "put on sock" and "sit down, stand up, and move around the house" (p < 0.001). Conclusions: Low volumes of ET improved BP, metabolic parameters, and physical fitness, and this was reflected in the functional ability of elderly hypertensive patients. Trial Registration: RBR-2xgjh3.
Abstract:
In this paper, the effects of uncertainty and expected costs of failure on optimum structural design are investigated by comparing three distinct formulations of structural optimization problems. Deterministic Design Optimization (DDO) allows one to find the shape or configuration of a structure that is optimum in terms of mechanics, but the formulation grossly neglects parameter uncertainty and its effects on structural safety. Reliability-based Design Optimization (RBDO) has emerged as an alternative that properly models the safety-under-uncertainty part of the problem. With RBDO, one can ensure that a minimum (and measurable) level of safety is achieved by the optimum structure. However, results depend on the failure probabilities used as constraints in the analysis. Risk optimization (RO) increases the scope of the problem by addressing the competing goals of economy and safety. This is accomplished by quantifying the monetary consequences of failure, as well as the costs associated with construction, operation, and maintenance. RO yields the optimum topology and the optimum point of balance between economy and safety. Results are compared for some example problems. The broader RO solution is found first, and its optimum results are used as constraints in DDO and RBDO. Results show that even when optimum safety coefficients are used as constraints in DDO, the formulation leads to configurations that respect these design constraints and reduce manufacturing costs, but increase total expected costs (including expected costs of failure). When the (optimum) system failure probability is used as a constraint in RBDO, this solution also reduces manufacturing costs while increasing total expected costs. This happens when the costs associated with different failure modes are distinct. Hence, a general equivalence between the formulations cannot be established. Optimum structural design considering expected costs of failure cannot be controlled solely by safety factors or by failure probability constraints, but depends on the actual structural configuration.
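Schematically, and in notation introduced here rather than taken from the paper, the three formulations contrast as

\[
\begin{aligned}
\text{DDO:} \quad & \min_{d}\; C_{\mathrm{man}}(d) \quad \text{s.t.}\; \lambda_k(d) \ge \lambda_k^{\min}, \\
\text{RBDO:} \quad & \min_{d}\; C_{\mathrm{man}}(d) \quad \text{s.t.}\; P_f(d) \le P_f^{\max}, \\
\text{RO:} \quad & \min_{d}\; C_{\mathrm{man}}(d) + C_{\mathrm{op}}(d) + \sum_{k} P_{f,k}(d)\, c_{f,k},
\end{aligned}
\]

where \(d\) collects the design variables, \(C_{\mathrm{man}}\) and \(C_{\mathrm{op}}\) are manufacturing and operation/maintenance costs, \(\lambda_k\) are safety coefficients, and \(P_{f,k}\) and \(c_{f,k}\) are the failure probability and monetary consequence of failure mode \(k\). The comparison described above amounts to solving RO first and feeding its optimum \(\lambda_k\) and \(P_f\) back into DDO and RBDO as constraints.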
Abstract:
We present two new constraint qualifications (CQs) that are weaker than the recently introduced relaxed constant positive linear dependence (RCPLD) CQ. RCPLD is based on the assumption that many subsets of the gradients of the active constraints preserve positive linear dependence locally. A major open question was to identify the exact set of gradients whose properties had to be preserved locally and that would still work as a CQ. This is done in the first new CQ, which we call the constant rank of the subspace component (CRSC) CQ. This new CQ also preserves many of the good properties of RCPLD, such as local stability and the validity of an error bound. We also introduce an even weaker CQ, called the constant positive generator (CPG), which can replace RCPLD in the analysis of the global convergence of algorithms. We close this work by extending convergence results of algorithms belonging to all the main classes of nonlinear optimization methods: sequential quadratic programming, augmented Lagrangians, interior point algorithms, and inexact restoration.
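Roughly, and in our notation rather than the paper's, CRSC can be sketched as follows: with

\[
L = \{\, d : \nabla h_i(\bar{x})^{\top} d = 0 \;\forall i, \;\; \nabla g_j(\bar{x})^{\top} d \le 0 \;\forall j \in A(\bar{x}) \,\}
\]

the linearized cone at a feasible point \(\bar{x}\) and \(L^{\circ}\) its polar, define \(J_- = \{\, j \in A(\bar{x}) : -\nabla g_j(\bar{x}) \in L^{\circ} \,\}\); CRSC asks that the rank of \(\{\nabla h_i(x)\}_{i} \cup \{\nabla g_j(x)\}_{j \in J_-}\) be constant for all \(x\) near \(\bar{x}\). The precise statement is in the paper; this sketch only locates the "subspace component" the name refers to.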
Abstract:
Measurements of the sphericity of primary charged particles in minimum bias proton-proton collisions at √s = 0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is measured in the plane perpendicular to the beam direction using primary charged tracks with p_T > 0.5 GeV/c in |η| < 0.8. The mean sphericity as a function of the charged particle multiplicity at mid-rapidity (N_ch) is reported for events with different p_T scales ("soft" and "hard") defined by the transverse momentum of the leading particle. In addition, the mean charged particle transverse momentum versus multiplicity is presented for the different event classes, and the sphericity distributions in bins of multiplicity are presented. The data are compared with calculations of standard Monte Carlo event generators. The transverse sphericity is found to grow with multiplicity at all collision energies, with a steeper rise at low N_ch, whereas the event generators show an opposite tendency. The combined study of the sphericity and the mean p_T with multiplicity indicates that most of the tested event generators produce events with higher multiplicity by generating more back-to-back jets, resulting in decreased sphericity (and isotropy). The PYTHIA6 generator with tune PERUGIA-2011 exhibits a noticeable improvement in describing the data, compared to the other tested generators.
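The transverse sphericity used in such measurements (not spelled out in the abstract) is built from the linearized transverse momentum matrix

\[
S^{L}_{xy} = \frac{1}{\sum_i p_{\mathrm{T}}^{i}} \sum_i \frac{1}{p_{\mathrm{T}}^{i}}
\begin{pmatrix} (p_x^{i})^{2} & p_x^{i} p_y^{i} \\ p_x^{i} p_y^{i} & (p_y^{i})^{2} \end{pmatrix},
\qquad
S_{\mathrm{T}} = \frac{2\lambda_2}{\lambda_1 + \lambda_2},
\]

with eigenvalues \(\lambda_1 \ge \lambda_2\), so that \(S_{\mathrm{T}} \to 0\) for pencil-like (back-to-back) events and \(S_{\mathrm{T}} \to 1\) for isotropic ones.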
Abstract:
We study general properties of the Landau-gauge Gribov ghost form factor σ(p²) for SU(N_c) Yang-Mills theories in the d-dimensional case. We find a qualitatively different behavior for d = 3, 4 with respect to the d = 2 case. In particular, considering any (sufficiently regular) gluon propagator D(p²) and the one-loop-corrected ghost propagator, we prove in the 2d case that the function σ(p²) blows up in the infrared limit p → 0 as −D(0) ln(p²). Thus, for d = 2, the no-pole condition σ(p²) < 1 (for p² > 0) can be satisfied only if the gluon propagator vanishes at zero momentum, that is, D(0) = 0. On the contrary, in d = 3 and 4, σ(p²) is finite also if D(0) > 0. The same results are obtained by evaluating the ghost propagator G(p²) explicitly at one loop, using fitting forms for D(p²) that describe well the numerical data for the gluon propagator in two, three and four space-time dimensions in the SU(2) case. These evaluations also show that, if one considers the coupling constant g² as a free parameter, the ghost propagator admits a one-parameter family of behaviors (labeled by g²), in agreement with previous works by Boucaud et al. In this case the condition σ(0) ≤ 1 implies g² ≤ g_c², where g_c² is a "critical" value. Moreover, a free-like ghost propagator in the infrared limit is obtained for any value of g² smaller than g_c², while for g² = g_c² one finds an infrared-enhanced ghost propagator. Finally, we analyze the Dyson-Schwinger equation for σ(p²) and show that, for infrared-finite ghost-gluon vertices, one can bound the ghost form factor σ(p²). Using these bounds we find again that only in the d = 2 case does one need to impose D(0) = 0 in order to satisfy the no-pole condition. The d = 2 result is also supported by an analysis of the Dyson-Schwinger equation using a spectral representation for the ghost propagator. Thus, if the no-pole condition is imposed, solving the d = 2 Dyson-Schwinger equations cannot lead to a massive behavior for the gluon propagator. These results apply to any Gribov copy inside the so-called first Gribov horizon; i.e., the 2d result D(0) = 0 is not affected by Gribov noise. These findings are also in agreement with lattice data.
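For orientation, the ghost form factor enters the ghost propagator as

\[
G(p^2) = \frac{1}{p^2}\, \frac{1}{1 - \sigma(p^2)},
\]

so Gribov's no-pole condition σ(p²) < 1 is exactly the requirement that G(p²) have no pole at nonzero momentum; with the 2d infrared behavior σ(p²) ≈ −D(0) ln(p²) quoted above, the condition can only hold if D(0) = 0.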
Abstract:
Background: A popular model for gene regulatory networks is the Boolean network model. In this paper, we propose an algorithm to perform an analysis of gene regulatory interactions using the Boolean network model and time-series data. In fact, the Boolean network considered is restricted, in the sense that only a subset of all possible Boolean functions is allowed. We explore some mathematical properties of these restricted Boolean networks in order to avoid a full search. The problem is modeled as a Constraint Satisfaction Problem (CSP), and CSP techniques are used to solve it. Results: We applied the proposed algorithm to two data sets. First, we used an artificial data set obtained from a model of the budding yeast cell cycle. The second data set was derived from experiments performed on HeLa cells. The results show that some interactions can be fully, or at least partially, determined under the Boolean model considered. Conclusions: The proposed algorithm can be used as a first step in the detection of gene/protein interactions. It is able to infer gene relationships from time-series gene expression data, and this inference process can be aided by available a priori knowledge.
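The CSP idea can be illustrated with a toy sketch (ours, not the authors' algorithm; the restricted function class here, AND/OR of at most two literals, is our assumption): each observed transition in the binarized time series is a constraint that prunes the candidate rule set for a target gene.

    # Toy illustration: infer a restricted Boolean rule for one target gene
    # from a binarized time series by constraint filtering. This is a sketch
    # of the general idea only, not the authors' algorithm.
    from itertools import combinations, product

    # Rows are consecutive time points, columns are genes g0, g1, g2.
    series = [
        (0, 1, 0),
        (1, 1, 0),
        (1, 0, 1),
        (0, 0, 1),
        (0, 1, 0),
    ]

    def candidate_rules(n_genes, max_inputs=2):
        """Enumerate restricted rules: AND/OR over up to two literals."""
        for k in range(1, max_inputs + 1):
            for inputs in combinations(range(n_genes), k):
                for signs in product((False, True), repeat=k):
                    for op in ("and", "or"):
                        yield inputs, signs, op

    def evaluate(rule, state):
        """Apply a rule to one state; signs mark negated literals."""
        inputs, signs, op = rule
        lits = [bool(state[i]) != neg for i, neg in zip(inputs, signs)]
        return all(lits) if op == "and" else any(lits)

    def consistent_rules(target, series):
        """Each time step is a constraint: discard any rule that fails to
        reproduce the target gene's next value (constraint propagation)."""
        rules = list(candidate_rules(len(series[0])))
        for t in range(len(series) - 1):
            rules = [r for r in rules
                     if evaluate(r, series[t]) == bool(series[t + 1][target])]
        return rules

    for rule in consistent_rules(target=2, series=series):
        print(rule)

In this toy version a rule survives only if it reproduces every observed transition of the target gene; real data would require noise handling, and the paper's restricted function class and CSP encoding are more elaborate.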
Abstract:
We analyse the secular effects of a long-lived Galactic spiral structure on stellar orbits with mean radii close to the corotation resonance. Using test-particle simulations and different spiral potential models with parameters constrained by observations, we verify the formation of a minimum in the disc stellar density at corotation, with an amplitude of ∼30–40 per cent of the background density. Such a minimum is formed by the secular transfer of angular momentum between stars and the spiral density wave on both sides of corotation. We demonstrate that the secular loss (gain) of angular momentum and the decrease (increase) of the mean orbital radius of stars just inside (outside) corotation can counterbalance the opposite trend of angular momentum exchange shown by stars orbiting the librational points L4/L5 on the corotation circle. Such secular processes actually allow steady spiral waves to promote radial migration across corotation. We propose some pieces of observational evidence for the minimum stellar density in the Galactic disc, such as its direct relation to the minimum in the observed rotation curve of the Galaxy at radius r ∼ 9 kpc (for R0 = 7.5 kpc), as well as its association with a minimum in the distribution of Galactic radii of a sample of open clusters older than 1 Gyr. The closeness of the solar orbit radius to the corotation resonance implies that the solar orbit lies inside a ring of minimum surface density (stellar + gas). This also implies a correction to larger values for the estimated total mass of the Galactic disc and, consequently, a greater contribution of the disc component to the inner rotation curve of the Galaxy.
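The corotation radius referred to throughout is the radius at which the circular angular velocity matches the spiral pattern speed,

\[
\Omega(R_{\mathrm{CR}}) = \Omega_p,
\]

so stars slightly inside corotation (Ω > Ω_p) overtake the wave and secularly lose angular momentum to it, while stars slightly outside (Ω < Ω_p) gain angular momentum, the exchange that carves the density minimum described above.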