929 results for Monotonicity constraints
Abstract:
We investigate bouncing solutions in the framework of the nonsingular gravity model of Brandenberger, Mukhanov and Sornborger. We show that a spatially flat universe filled with ordinary matter undergoing a phase of contraction reaches a stage of minimal expansion factor before bouncing in a regular way to reach the expanding phase. The expansion can be connected to the usual radiation- and matter-dominated epochs before reaching a final expanding de Sitter phase. In general relativity (GR), a bounce can only take place provided that the spatial sections are positively curved, a fact that has been shown to translate into a constraint on the characteristic duration of the bounce. In our model, on the other hand, a bounce can occur also in the absence of spatial curvature, which means that the time scale for the bounce can be made arbitrarily short or long. The implication is that constraints on the bounce characteristic time obtained in GR rely heavily on the assumed theory of gravity. Although the model we investigate is fourth order in the derivatives of the metric (and therefore unstable with respect to perturbations), this generic bounce dynamics should extend to string-motivated nonsingular models which can accommodate a spatially flat bounce.
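For orientation, the curvature requirement quoted above follows from the Friedmann equations in standard notation (a textbook argument, not taken from the paper):

\[ H^2 = \frac{8\pi G}{3}\rho - \frac{k}{a^2}, \qquad \dot H = -4\pi G(\rho + p) + \frac{k}{a^2}. \]

At a bounce one needs H = 0 and Ḣ > 0; with k ≤ 0 this forces ρ + p < 0, i.e., a violation of the null energy condition by the matter content, whereas with k > 0 the curvature term can drive the bounce for ordinary matter. A modified gravity theory such as the one studied here changes the right-hand sides and removes this obstruction for flat spatial sections.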
Abstract:
We propose a field theory model for dark energy and dark matter in interaction. Comparing the classical solutions of the field equations with observations of the CMB shift parameter, baryonic acoustic oscillations, lookback time, and the Gold supernovae sample, we find evidence for a possible interaction between the dark sectors, with energy decaying from dark energy into dark matter. The observed interaction helps alleviate the coincidence problem.
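The energy exchange described here is commonly summarized by coupled continuity equations of the form (a generic parametrization, not necessarily the specific field-theory coupling of the paper):

\[ \dot\rho_{\rm dm} + 3H\rho_{\rm dm} = Q, \qquad \dot\rho_{\rm de} + 3H(1+w)\rho_{\rm de} = -Q, \]

so that Q > 0 corresponds to energy flowing from dark energy into dark matter, the direction favored by the data sets listed above.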
Abstract:
Cosmological analyses based on currently available observations are unable to rule out a sizeable coupling between dark energy and dark matter. However, the signature of the coupling is not easy to isolate, since the coupling is degenerate with other cosmological parameters, such as the dark energy equation of state and the dark matter abundance. We discuss possible ways to break this degeneracy. Using the perturbation formalism, we carry out a global fit to the latest observational data and obtain a tight constraint on the interaction between the dark sectors. We find that an appropriate interaction can alleviate the coincidence problem.
Abstract:
We examine the possibility that a new strong interaction is accessible to the Tevatron and the LHC. In an effective theory approach, we consider a scenario with a new color-octet interaction with strong couplings to the top quark, as well as the presence of a strongly coupled fourth generation which could be responsible for electroweak symmetry breaking. We apply several constraints, including the ones from flavor physics. We study the phenomenology of the resulting parameter space at the Tevatron, focusing on the forward-backward asymmetry in top pair production, as well as on the production of the fourth-generation quarks. We show that if the excess in the top production asymmetry is indeed the result of this new interaction, the Tevatron could see the first hints of the strongly coupled fourth-generation quarks. Finally, we show that the LHC with √s = 7 TeV and 1 fb⁻¹ of integrated luminosity should observe the production of fourth-generation quarks at a level at least one order of magnitude above the QCD prediction for the production of these states.
Abstract:
One of the standard generalized-gradient approximations (GGAs) in use in modern electronic-structure theory [Perdew-Burke-Ernzerhof (PBE) GGA] and a recently proposed modification designed specifically for solids (PBEsol) are identified as particular members of a family of functionals taking their parameters from different properties of homogeneous or inhomogeneous electron liquids. Three further members of this family are constructed and tested, together with the original PBE and PBEsol, for atoms, molecules, and solids. We find that PBE, in spite of its popularity in solid-state physics and quantum chemistry, is not always the best performing member of the family and that PBEsol, in spite of having been constructed specifically for solids, is not the best for solids. The performance of GGAs for finite systems is found to sensitively depend on the choice of constraints stemming from infinite systems. Guidelines both for users and for developers of density functionals emerge from this work.
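As a concrete example of how the family members differ, recall the PBE exchange enhancement factor (standard published values, quoted here for context rather than taken from the abstract):

\[ F_x(s) = 1 + \kappa - \frac{\kappa}{1 + \mu s^2/\kappa}, \qquad s = \frac{|\nabla n|}{2 k_F n}, \]

with κ = 0.804 and μ ≈ 0.2195 in PBE; PBEsol keeps the same form but restores the second-order gradient-expansion value μ = 10/81 ≈ 0.1235 (together with a reduced correlation coefficient), which is exactly the kind of parameter choice that distinguishes the members of the family tested here.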
Abstract:
The MINOS experiment at Fermilab has recently reported a tension between the oscillation results for neutrinos and antineutrinos. We show that this tension, if it persists, can be understood in the framework of nonstandard neutrino interactions (NSI). While neutral current NSI (nonstandard matter effects) are disfavored by atmospheric neutrinos, a new charged current coupling between tau neutrinos and nucleons can fit the MINOS data without violating other constraints. In particular, we show that loop-level contributions to flavor-violating tau decays are sufficiently suppressed. However, conflicts with existing bounds could arise once the effective theory considered here is embedded into a complete renormalizable model. We predict the future sensitivity of the T2K and NOvA experiments to the NSI parameter region favored by the MINOS fit, and show that both experiments are excellent tools to test the NSI interpretation of the MINOS data.
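For reference, charged-current NSI of the kind invoked here are conventionally written as dimension-six four-fermion operators normalized to the Fermi constant, schematically

\[ \mathcal{L}_{\rm NSI} = -2\sqrt{2}\,G_F\,\varepsilon^{ud}_{\alpha\beta}\,\bigl(\bar u\,\gamma^\mu P_L d\bigr)\bigl(\bar\ell_\alpha\,\gamma_\mu P_L \nu_\beta\bigr) + \text{h.c.}, \]

where ε measures the strength of the new interaction relative to the weak interaction. This generic form is quoted only for orientation and is not necessarily the exact operator basis used in the paper.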
Abstract:
The pre-Mesozoic geodynamic evolution of SW Iberia has been investigated on the basis of detailed structural analysis, isotope dating, and petrologic study of high-pressure (HP) rocks, revealing the superposition of several tectonometamorphic events: (1) An HP event older than circa 358 Ma is recorded in basic rocks preserved inside marbles, which suggests subduction of a continental margin. The deformation associated with this stage is recorded by a refractory graphite fabric and noncoaxial mesoscopic structures found within the host metasediments. The sense of shear is top to south, revealing thrusting synthetic with subduction (underthrusting) to the north. (2) Recrystallization before circa 358 Ma is due to a regional-scale thermal episode and magmatism. (3) Noncoaxial deformation with top to north sense of shear in northward dipping large-scale shear zones is associated with pervasive hydration and metamorphic retrogression under mostly greenschist facies. This indicates exhumation by normal faulting in a detachment zone confined to the top to north and north dipping shear zones during postorogenic collapse soon after 358 Ma ago (inversion of earlier top to south thrusts). (4) Static recrystallization at circa 318 Ma is due to regional-scale granitic intrusions. Citation: Rosas, F. M., F. O. Marques, M. Ballevre, and C. Tassinari (2008), Geodynamic evolution of the SW Variscides: Orogenic collapse shown by new tectonometamorphic and isotopic data from western Ossa-Morena Zone, SW Iberia, Tectonics, 27, TC6008, doi:10.1029/2008TC002333.
Abstract:
The reverse engineering problem addressed in the present research consists of estimating the thicknesses and the optical constants of two thin films deposited on a transparent substrate using only transmittance data through the whole stack. No functional dispersion relation assumptions are made on the complex refractive index. Instead, minimal physical constraints are employed, as in previous works of some of the authors where only one film was considered in the retrieval algorithm. To our knowledge this is the first report on the retrieval of the optical constants and the thickness of multiple film structures using only transmittance data that does not make use of dispersion relations. The same methodology may be used if the available data correspond to normal reflectance. The software used in this work is freely available through the PUMA Project web page (http://www.ime.usp.br/~egbirgin/puma/). (C) 2008 Optical Society of America
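A minimal sketch of this kind of constrained pointwise fit is given below. It assumes normal incidence and a coherent two-layer characteristic-matrix model, and it neglects the incoherent back-surface reflection of the thick substrate that a production code such as PUMA must handle; all names and data are illustrative and do not reproduce the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def stack_transmittance(wl, n1, k1, d1, n2, k2, d2, n_sub):
    """Normal-incidence transmittance of two coherent films (film 1 facing air,
    film 2 on the substrate) on a semi-infinite transparent substrate."""
    T = np.empty_like(wl)
    for i, lam in enumerate(wl):
        B, C = 1.0 + 0j, n_sub + 0j                 # start from the substrate side
        for n, k, d in ((n2[i], k2[i], d2), (n1[i], k1[i], d1)):
            N = n - 1j * k                          # complex refractive index
            delta = 2.0 * np.pi * N * d / lam
            M = np.array([[np.cos(delta), 1j * np.sin(delta) / N],
                          [1j * N * np.sin(delta), np.cos(delta)]])
            B, C = M @ np.array([B, C])
        T[i] = 4.0 * n_sub / abs(B + C) ** 2        # incident medium: air (n = 1)
    return T

def residuals(x, wl, T_meas, n_sub):
    """Unknowns: the two thicknesses plus n and k at every wavelength
    (pointwise -- no dispersion relation is assumed)."""
    m = len(wl)
    d1, d2 = x[0], x[1]
    n1, k1 = x[2:2 + m], x[2 + m:2 + 2 * m]
    n2, k2 = x[2 + 2 * m:2 + 3 * m], x[2 + 3 * m:2 + 4 * m]
    return stack_transmittance(wl, n1, k1, d1, n2, k2, d2, n_sub) - T_meas

# Physical constraints only: positive thicknesses, n >= 1, k >= 0.
wl = np.linspace(500.0, 1500.0, 50)                 # nm, illustrative grid
T_meas = np.full_like(wl, 0.8)                      # placeholder "measured" data
m = len(wl)
x0 = np.concatenate(([100.0, 100.0], np.full(m, 2.0), np.full(m, 0.01),
                     np.full(m, 1.8), np.full(m, 0.01)))
lb = np.concatenate(([1.0, 1.0], np.full(m, 1.0), np.zeros(m),
                     np.full(m, 1.0), np.zeros(m)))
ub = np.concatenate(([2000.0, 2000.0], np.full(m, 5.0), np.full(m, 1.0),
                     np.full(m, 5.0), np.full(m, 1.0)))
fit = least_squares(residuals, x0, bounds=(lb, ub), args=(wl, T_meas, 1.5))
```

The sketch only illustrates how the problem can be posed with bound constraints instead of a dispersion model; the published method additionally enforces regularity of n(λ) and k(λ) and treats substrate interference.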
Abstract:
Self-controlled practice implies a process of decision making, which suggests that the options available in a self-controlled practice condition could affect learners. The number of task components with no fixed position in a movement sequence may affect the way learners self-control their practice. A 200 cm coincident-timing track with 90 light-emitting diodes (LEDs), the first and the last LEDs being the warning and the target lights respectively, was set so that the apparent speed of the light along the track was 1.33 m/sec. Participants were required to touch six sensors sequentially, the last one coincidently with the lighting of the target light (timing task). Group 1 (n=55) had only one constraint and was instructed to touch the sensors in any order except for the last sensor, which had to be the one positioned close to the target light. Group 2 (n=53) had three constraints: the first two and the last sensor to be touched. Both groups practiced the task until timing error was less than 30 msec on three consecutive trials. There were no statistically significant differences between groups in the number of trials needed to reach the performance criterion, but (a) participants in Group 2 created fewer sequences compared to Group 1 and (b) were more likely to use the same sequence throughout the learning process. The number of options for a movement sequence affected the way learners self-controlled their practice but had no effect on the amount of practice needed to reach criterion performance.
Abstract:
This study analyzed the inter-individual variability of the temporal structure of the basketball throw. Ten experienced male athletes in basketball throwing were filmed and a number of kinematic movement parameters were analyzed. A biomechanical model provided the relative timing of the shoulder, elbow and wrist joint movements. Inter-individual variability was analyzed using the sequencing and relative timing of the phases of the throw. To compare the variability of the movement phases between subjects, a discriminant analysis and an ANOVA were applied. The Tukey test was applied to determine where differences occurred. The significance level was p = 0.05. Inter-individual variability was explained by three concomitant factors: (a) a precision control strategy, (b) a velocity control strategy and (c) intrinsic characteristics of the subjects. Therefore, despite the fact that some actions are common to the basketball throwing pattern, each performer demonstrated particular and individual characteristics.
Abstract:
A niobium single crystal was subjected to equal channel angular pressing (ECAP) at room temperature after orienting the crystal such that [1 -1 -1] ∥ ND, [0 1 -1] ∥ ED, and [-2 -1 -1] ∥ TD. Electron backscatter diffraction (EBSD) was used to characterize the microstructures on both the transverse and the longitudinal sections of the deformed sample. After one pass of ECAP the single crystal exhibits a group of homogeneously distributed large-misorientation sheets and a well-formed cell structure in the matrix. The traces of the large-misorientation sheets match very well with the most favorably oriented slip plane, and one of the slip directions is macroscopically aligned with the simple shear plane. The lattice rotation during deformation was quantitatively estimated through comparison of the orientations parallel to three macroscopic axes before and after deformation. An effort has been made to link the microstructure with the initial crystal orientation. Collinear slip systems are believed to be activated during deformation. The full-constraints Taylor model was used to simulate the orientation evolution during ECAP. The result matched the experimental observations only partially.
Abstract:
The power loss reduction in distribution systems (DSs) is a nonlinear and multiobjective problem. Service restoration in DSs is even harder computationally, since it additionally requires a solution in real time. Both DS problems are computationally complex. For large-scale networks, the usual problem formulation has thousands of constraint equations. The node-depth encoding (NDE) enables a modeling of DS problems that eliminates several constraint equations from the usual formulation, making the problem solution simpler. On the other hand, a multiobjective evolutionary algorithm (EA) based on subpopulation tables adequately models several objectives and constraints, enabling a better exploration of the search space. The combination of the multiobjective EA with the NDE (MEAN) results in the proposed approach for solving DS problems for large-scale networks. Simulation results have shown that the MEAN is able to find adequate restoration plans for a real DS with 3860 buses and 632 switches in a running time of 0.68 s. Moreover, the MEAN has shown a sublinear running time as a function of the system size. Tests with networks ranging from 632 to 5166 switches indicate that the MEAN can find network configurations corresponding to a power loss reduction of 27.64% for very large networks while requiring relatively low running time.
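To illustrate the idea behind a node-depth encoding, the toy sketch below stores a radial feeder as a depth-first list of (node, depth) pairs, so every candidate configuration handled by the evolutionary algorithm is a tree by construction and no explicit radiality constraint equations are needed. The names and data are illustrative and do not reproduce the authors' implementation.

```python
from collections import defaultdict

def node_depth_encoding(adjacency, root):
    """Return the (node, depth) list of the tree rooted at `root` (DFS order)."""
    encoding, stack, seen = [], [(root, 0)], {root}
    while stack:
        node, depth = stack.pop()
        encoding.append((node, depth))
        for nb in adjacency[node]:
            if nb not in seen:
                seen.add(nb)
                stack.append((nb, depth + 1))
    return encoding

# Example: a small radial feeder rooted at the substation bus 0.
edges = [(0, 1), (1, 2), (1, 3), (0, 4)]
adj = defaultdict(list)
for a, b in edges:
    adj[a].append(b)
    adj[b].append(a)
print(node_depth_encoding(adj, 0))   # [(0, 0), (4, 1), (1, 1), (3, 2), (2, 2)]
```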
Abstract:
This paper presents a controller design method for fuzzy dynamic systems based on piecewise Lyapunov functions with constraints on the closed-loop pole location. The main idea is to use switched controllers to place the poles of the system so as to obtain a satisfactory transient response. It is shown that the global fuzzy system satisfies the design requirements and that the control law can be obtained by solving a set of linear matrix inequalities, which can be solved efficiently with commercially available software. An example is given to illustrate the application of the proposed method. Copyright (C) 2009 John Wiley & Sons, Ltd.
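For orientation, pole-location requirements of this kind are typically expressed as LMI regions. For example, requiring a closed-loop local model A_cl to have poles with real part below -α (a minimum decay rate) corresponds to the standard condition

\[ A_{cl}^{\mathsf T} P + P A_{cl} + 2\alpha P \prec 0, \qquad P = P^{\mathsf T} \succ 0, \]

with analogous LMIs for disks and conic sectors. The piecewise-Lyapunov construction of the paper imposes conditions of this type region by region; the generic condition above is quoted for context and is not the paper's exact formulation.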
Abstract:
This paper presents a new approach, the predictor-corrector modified barrier approach (PCMBA), to minimize the active losses in power system planning studies. In the PCMBA, the inequality constraints are transformed into equalities by introducing positive auxiliary variables, which are perturbed by the barrier parameter and treated by the modified barrier method. The first-order necessary conditions of the Lagrangian function are solved by a predictor-corrector Newton's method. The perturbation of the auxiliary variables results in an expansion of the feasible set of the original problem, allowing the iterates to reach the limits of the inequality constraints. The feasibility of the proposed approach is demonstrated using various IEEE test systems and a realistic 2256-bus power system corresponding to the Brazilian South-Southeastern interconnected system. The results show that the use of the predictor-corrector method together with the pure modified barrier approach accelerates the convergence of the problem in terms of the number of iterations and computational time. (C) 2008 Elsevier B.V. All rights reserved.
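In outline, the transformation described above follows the standard modified-barrier template (shown here in generic notation, not necessarily the paper's exact formulation): each inequality g_i(x) ≤ 0 is rewritten as g_i(x) + s_i = 0 with s_i ≥ 0, and the nonnegativity of the slack is absorbed into a shifted logarithmic barrier,

\[ \min_{x,s}\; f(x) - \mu \sum_i \ln\!\Bigl(1 + \frac{s_i}{\mu}\Bigr) \quad \text{s.t.}\quad g_i(x) + s_i = 0,\;\; h(x) = 0, \]

whose Lagrangian first-order conditions are then solved by predictor-corrector Newton steps. Because the shifted barrier is defined for s_i > -μ, the iterates may reach (or slightly overshoot) the original bounds, which is the expansion of the feasible set referred to above.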
Abstract:
This paper proposes an optimal sensitivity approach applied to the tertiary loop of automatic generation control. The approach is based on the nonlinear perturbation theorem. From an optimal operation point obtained by an optimal power flow, a new optimal operation point is determined directly after a perturbation, i.e., without the need for an iterative process. This new optimal operation point satisfies the constraints of the problem for small perturbations in the loads. The participation factors and the voltage set points of the automatic voltage regulators (AVR) of the generators are determined by the optimal sensitivity technique, taking into account the effects of active power loss minimization and the network constraints. The participation factors and voltage set points of the generators are supplied directly to a computational program for dynamic simulation of the automatic generation control, referred to as the power sensitivity mode. Test results are presented to show the good performance of this approach. (C) 2008 Elsevier B.V. All rights reserved.
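The sensitivity step alluded to above can be summarized by the standard first-order perturbation of the optimality conditions (a generic statement, valid while the active constraint set is unchanged): writing the optimal power flow optimality conditions as ∇_z L(z, p) = 0, where z collects the primal and dual variables and p the load parameters, a small load change Δp gives

\[ \nabla^2_{zz} L\,\Delta z + \nabla^2_{zp} L\,\Delta p = 0 \;\;\Longrightarrow\;\; \Delta z \approx -\bigl[\nabla^2_{zz} L\bigr]^{-1} \nabla^2_{zp} L\,\Delta p, \]

so the new optimal operating point, including the generator participation factors and AVR voltage set points, follows from a single linear solve rather than a new iterative optimization.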