142 results for energetic constraint
Abstract:
Many engineering problems that can be formulated as constrained optimization problems result in solutions given by a waterfilling structure; the classical example is the capacity-achieving solution for a frequency-selective channel. For simple waterfilling solutions with a single waterlevel and a single constraint (typically, a power constraint), some algorithms have been proposed in the literature to compute the solutions numerically. However, some other optimization problems result in significantly more complicated waterfilling solutions that include multiple waterlevels and multiple constraints. For such cases, it may still be possible to obtain practical algorithms to evaluate the solutions numerically, but only after a painstaking inspection of the specific waterfilling structure. In addition, a unified view of the different types of waterfilling solutions and the corresponding practical algorithms is missing. The purpose of this paper is twofold. On the one hand, it overviews the waterfilling results existing in the literature from a unified viewpoint. On the other hand, it bridges the gap between a wide family of waterfilling solutions and their efficient implementation in practice; to be more precise, it provides a practical algorithm to evaluate numerically a general waterfilling solution, which includes the currently existing waterfilling solutions and others that may possibly appear in future problems.
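For concreteness, the simplest member of this family, a single waterlevel under a single power constraint, can be evaluated by bisection on the waterlevel. The sketch below is a minimal illustration of that classical case (channel gains and power budget are invented), not the general algorithm proposed in the paper.

```python
import numpy as np

def waterfilling(gains, total_power, tol=1e-9):
    """Classical waterfilling: maximize sum(log(1 + g_i * p_i)) subject to
    sum(p_i) <= total_power and p_i >= 0. The optimum is
    p_i = max(0, mu - 1/g_i), with the waterlevel mu chosen so the power
    budget is met with equality; mu is found here by bisection."""
    inv_gains = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = inv_gains.min(), inv_gains.max() + total_power  # bracket mu
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - inv_gains).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - inv_gains)

# Four subchannels, unit power budget: power is poured into the best ones.
print(waterfilling([2.0, 1.0, 0.5, 0.1], 1.0))
```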
Abstract:
The well-known structure of an array combiner along with a maximum likelihood sequence estimator (MLSE) receiver is the basis for the derivation of a space-time processor presenting good properties in terms of co-channel and intersymbol interference rejection. The use of spatial diversity at the receiver front-end together with a scalar MLSE implies a joint design of the spatial combiner and the impulse response for the sequence detector. This is faced using the MMSE criterion under the constraint that the desired user signal power is not cancelled, yielding an impulse response for the sequence detector that is matched to the channel and combiner response. The procedure maximizes the signal-to-noise ratio at the input of the detector and exhibits excellent performance in realistic multipath channels.
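One common way to state such a joint design (notation assumed here for illustration; the paper's exact formulation may differ) is as a constrained MMSE problem:

$$\min_{\mathbf{w},\,\mathbf{b}}\ \mathbb{E}\bigl[\,\lvert \mathbf{w}^{H}\mathbf{x}[n] - \mathbf{b}^{H}\mathbf{s}[n] \rvert^{2}\,\bigr] \quad \text{subject to} \quad \lVert \mathbf{b} \rVert = 1,$$

where $\mathbf{x}[n]$ is the array snapshot, $\mathbf{w}$ the spatial combiner, $\mathbf{s}[n]$ holds the most recent desired-user symbols, and $\mathbf{b}$ is the impulse response handed to the MLSE; the norm constraint rules out the trivial solution $\mathbf{w}=\mathbf{0}$, $\mathbf{b}=\mathbf{0}$, i.e., it prevents cancellation of the desired user signal.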
Abstract:
This paper presents a relational positioning methodology for flexibly and intuitively specifying offline programmed robot tasks, as well as for assisting the execution of teleoperated tasks demanding precise movements. In relational positioning, the movements of an object can be restricted totally or partially by specifying its allowed positions in terms of a set of geometric constraints. These allowed positions are found by means of a 3D sequential geometric constraint solver called PMF (Positioning Mobile with respect to Fixed). PMF exploits the fact that, in a set of geometric constraints, the rotational component can often be separated from the translational one and solved independently.
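A toy illustration of this decoupling (hypothetical constraints, not the PMF implementation): an axis-alignment constraint involves only the rotational degrees of freedom, so the rotation can be solved first; a point-coincidence constraint then fixes the translation with the rotation already known.

```python
import numpy as np

def rotation_aligning(a, b):
    """Rotation matrix taking unit vector a onto unit vector b (Rodrigues
    formula); valid unless a and b are antiparallel."""
    v, c = np.cross(a, b), float(a @ b)
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + K + (K @ K) / (1.0 + c)

# Rotational component: an axis on the mobile object must align with a
# fixed direction (both unit vectors; values invented).
R = rotation_aligning(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0]))

# Translational component, solved afterwards: a point of the mobile object
# must coincide with a fixed point.
p_mobile, p_fixed = np.array([1.0, 2.0, 3.0]), np.zeros(3)
t = p_fixed - R @ p_mobile
print(R @ p_mobile + t)  # -> p_fixed: both constraints satisfied
```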
Abstract:
A detailed mathematical analysis of the q = 1/2 non-extensive maximum entropy distribution of Tsallis is undertaken. The analysis is based upon the splitting of such a distribution into two orthogonal components. One of the components corresponds to the minimum norm solution of the problem posed by the fulfillment of the a priori conditions on the given expectation values. The remaining component takes care of the normalization constraint and is the projection of a constant onto the null space of the "expectation-values-transformation".
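In linear-algebra terms the splitting can be reproduced with a pseudoinverse: the minimum norm component satisfies the expectation-value constraints, and the remaining component is the projection of a constant vector onto the null space of the constraint matrix. A small numerical sketch (the matrix and values are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 6))  # 2 expectation constraints on 6 states
b = np.array([0.3, -0.1])        # prescribed expectation values

A_pinv = np.linalg.pinv(A)
x_min = A_pinv @ b                # minimum norm solution of A x = b
P_null = np.eye(6) - A_pinv @ A   # orthogonal projector onto null(A)
x_norm = P_null @ np.ones(6)      # projection of a constant onto null(A)

print(np.allclose(A @ x_min, b))      # True: constraints fulfilled
print(np.allclose(A @ x_norm, 0.0))   # True: lies in the null space
print(np.isclose(x_min @ x_norm, 0))  # True: the components are orthogonal
```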
Abstract:
A regularization method based on the non-extensive maximum entropy principle is devised. Special emphasis is given to the q = 1/2 case. We show that, when the residual principle is considered as a constraint, the q = 1/2 generalized distribution of Tsallis yields a regularized solution for ill-conditioned problems. The regularized distribution so devised is endowed with a component that corresponds to the well-known regularized solution of Tikhonov (1977).
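A minimal sketch of that Tikhonov component, with the residual (discrepancy) principle acting as the constraint that selects the regularization parameter; the linear model, noise level, and candidate parameters are illustrative, not from the paper.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularized solution: argmin ||A x - b||^2 + lam * ||x||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

def discrepancy_lambda(A, b, delta, lams):
    """Residual principle: take the largest candidate lambda whose residual
    norm does not exceed the noise level delta."""
    for lam in sorted(lams, reverse=True):
        if np.linalg.norm(A @ tikhonov(A, b, lam) - b) <= delta:
            return lam
    return min(lams)

# Ill-conditioned toy problem with noisy data (all values invented).
rng = np.random.default_rng(1)
A = np.vander(np.linspace(0, 1, 12), 6, increasing=True)  # nearly singular
x_true = rng.standard_normal(6)
b = A @ x_true + 1e-3 * rng.standard_normal(12)
lam = discrepancy_lambda(A, b, delta=1e-3 * np.sqrt(12),
                         lams=np.logspace(-10, 0, 40))
print(lam, np.linalg.norm(tikhonov(A, b, lam) - x_true))
```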
Abstract:
Background: The reduction in the amount of food available for European avian scavengers as a consequence of restrictive public health policies is a concern for managers and conservationists. Since 2002, the application of several sanitary regulations has limited the availability of feeding resources provided by domestic carcasses, but theoretical studies assessing whether the food resources provided by wild ungulates are enough to cover energetic requirements are lacking. Methodology/Findings: We assessed the food provided by a wild ungulate population in two areas of NE Spain inhabited by three vulture species and developed a P System computational model to assess the effects of the carrion resources provided on their population dynamics. We compared the real population trend with a hypothetical scenario in which only food provided by wild ungulates was available. Simulation testing of the model suggests that wild ungulates constitute an important food resource in the Pyrenees, and that the vulture population inhabiting this area could grow even if only the food provided by wild ungulates were available. On the contrary, in the Pre-Pyrenees there is insufficient food to cover the energy requirements of the avian scavenger guild, which would decline sharply if biomass from domestic animals were not available. Conclusions/Significance: Our results suggest that public health legislation can modify scavenger population trends if a large number of domestic ungulate carcasses disappear from the mountains. In this case, the food provided by wild ungulates might not be enough, and supplementary feeding could be necessary if other alternative food resources are not available (e.g., the reintroduction of wild ungulates), preferably in European Mediterranean scenarios sharing similar socio-economic conditions where there are low densities of wild ungulates. Managers should anticipate the required conservation actions by assessing food availability and the possible scenarios in order to make the most suitable decisions.
Abstract:
Optimization models in metabolic engineering and systems biology typically focus on optimizing a single criterion, usually the synthesis rate of a metabolite of interest or the rate of growth. Connectivity and non-linear regulatory effects, however, make it necessary to consider multiple objectives in order to identify useful strategies that balance out different metabolic issues. This is a fundamental aspect, as optimization of maximum yield in a given condition may involve unrealistic values in other key processes. Due to the difficulties associated with detailed non-linear models, analyses using stoichiometric descriptions and linear optimization methods have become rather popular in systems biology. However, despite being useful, these approaches fail to capture the intrinsic non-linear nature of the underlying metabolic systems and the regulatory signals involved. Targeting more complex biological systems requires the application of global optimization methods to non-linear representations. In this work we address the multi-objective global optimization of metabolic networks that are described by a special class of models based on the power-law formalism: the generalized mass action (GMA) representation. Our goal is to develop global optimization methods capable of efficiently dealing with several biological criteria simultaneously. In order to overcome the numerical difficulties of dealing with multiple criteria in the optimization, we propose a heuristic approach based on the epsilon-constraint method that reduces the computational burden of generating a set of Pareto optimal alternatives, each achieving a unique combination of objective values. To facilitate the post-optimal analysis of these solutions and narrow down their number prior to being tested in the laboratory, we explore the use of Pareto filters that identify the preferred subset of enzymatic profiles. We demonstrate the usefulness of our approach by means of a case study that optimizes ethanol production in the fermentation of Saccharomyces cerevisiae.
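The epsilon-constraint idea is simple to state: keep one criterion as the cost function, cap each remaining criterion at a threshold epsilon, and sweep epsilon to trace the Pareto front. A toy two-objective sketch (the objectives are hypothetical stand-ins, not a GMA model):

```python
import numpy as np
from scipy.optimize import minimize

f1 = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 1.0) ** 2  # loss to minimize
f2 = lambda v: v[0] ** 2 + v[1] ** 2                  # cost capped by epsilon

pareto = []
for eps in np.linspace(0.1, 2.0, 8):  # sweep the epsilon threshold
    res = minimize(f1, x0=[0.1, 0.1],
                   constraints=[{"type": "ineq",
                                 "fun": lambda v, e=eps: e - f2(v)}])
    if res.success:
        pareto.append((f1(res.x), f2(res.x)))

for point in pareto:  # one Pareto-optimal trade-off per epsilon value
    print("f1 = %.3f  f2 = %.3f" % point)
```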
Abstract:
Sudoku problems are among the best-known and most enjoyed pastimes, with a popularity that never diminishes, but over the last few years these problems have gone from entertainment to a research area that is, in fact, interesting in two respects. On the one hand, Sudoku problems, being a variant of Gerechte Designs and Latin Squares, are actively used for experimental design, as in [8, 44, 39, 9]. On the other hand, Sudoku problems, as simple as they seem, are really hard structured combinatorial search problems, and thanks to their characteristics and behavior they can be used as benchmark problems for refining and testing solving algorithms and approaches. Also, thanks to their high inner structure, their study can contribute more than the study of random problems to our goal of solving real-world problems and applications and of understanding the problem characteristics that make them hard to solve. In this work we use two techniques for modeling and solving Sudoku problems, namely the Constraint Satisfaction Problem (CSP) and Satisfiability Problem (SAT) approaches. To this effect we define the Generalized Sudoku Problem (GSP), where regions can be of rectangular shape, problems can be of any order, and solution existence is not guaranteed. With respect to worst-case complexity, we prove that GSP with block regions of m rows and n columns, with m = n, is NP-complete. For studying the empirical hardness of GSP, we define a series of instance generators that differ in the balancing level they guarantee between the constraints of the problem, by finely controlling how the holes are distributed among the cells of the GSP. Experimentally, we show that the more balanced the constraints, the higher the complexity of solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), a problem generalized by GSP. Finally, we provide a study of the correlation between backbone variables (variables with the same value in all the solutions of an instance) and the hardness of GSP.
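A minimal CSP-style backtracking solver conveys the constraint structure of GSP with m-row by n-column block regions (a sketch, not the encodings used in this work; 0 marks a hole):

```python
def solve_gsp(grid, m, n):
    """Fill the holes of an order-N = m*n generalized Sudoku grid so that
    every row, column, and m-by-n block holds the values 1..N exactly once.
    Plain chronological backtracking; returns True if a solution is found."""
    N = m * n

    def consistent(r, c, v):
        if any(grid[r][j] == v for j in range(N)):  # row constraint
            return False
        if any(grid[i][c] == v for i in range(N)):  # column constraint
            return False
        br, bc = r - r % m, c - c % n               # block origin
        return all(grid[br + i][bc + j] != v
                   for i in range(m) for j in range(n))

    for r in range(N):
        for c in range(N):
            if grid[r][c] == 0:
                for v in range(1, N + 1):
                    if consistent(r, c, v):
                        grid[r][c] = v
                        if solve_gsp(grid, m, n):
                            return True
                        grid[r][c] = 0              # undo and backtrack
                return False                        # dead end
    return True                                     # no holes left

# Example: an (all-holes) 6x6 instance with rectangular 2x3 blocks.
puzzle = [[0] * 6 for _ in range(6)]
if solve_gsp(puzzle, 2, 3):
    for row in puzzle:
        print(row)
```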
Abstract:
Random problem distributions have played a key role in the study and design of algorithms for constraint satisfaction and Boolean satisfiability, as well as in our understanding of problem hardness, beyond standard worst-case complexity. We consider random problem distributions from a highly structured problem domain that generalizes the Quasigroup Completion problem (QCP) and Quasigroup with Holes (QWH), a widely used domain that captures the structure underlying a range of real-world applications. Our problem domain is also a generalization of the well-known Sudoku puzzle: we consider Sudoku instances of arbitrary order, with the additional generalization that the block regions can have rectangular shape, in addition to the standard square shape. We evaluate the computational hardness of Generalized Sudoku instances for different parameter settings. Our experimental hardness results show that we can generate instances that are considerably harder than QCP/QWH instances of the same size. More interestingly, we show the impact of different balancing strategies on problem hardness. We also provide insights into backbone variables in Generalized Sudoku instances and how they correlate to problem hardness.
Abstract:
Tractable cases of the binary CSP are mainly divided into two classes: constraint language restrictions and constraint graph restrictions. To better understand and identify the hardest binary CSPs, in this work we propose methods to increase their hardness by increasing the balance of both the constraint language and the constraint graph. The balance of a constraint is increased by maximizing the number of domain elements with the same number of occurrences. The balance of the graph is defined using the classical definition from graph theory. In this sense we present two graph models: a first model that increases the balance of a graph by maximizing the number of vertices with the same degree, and a second one that additionally increases the girth of the graph, because a high girth implies a high treewidth, an important parameter for the hardness of binary CSPs. Our results show that our more balanced graph models and constraints result in instances that are harder, by several orders of magnitude, than typical random binary CSP instances. We also detect, at least for sparse constraint graphs, a higher treewidth for our graph models.
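In graph-theoretic terms, a maximally degree-balanced constraint graph is a regular graph: every variable (vertex) occurs in the same number of binary constraints. A quick way to sample one, using networkx (this sketch does not control the girth):

```python
import networkx as nx

n_vars, degree = 20, 3  # illustrative sizes
g = nx.random_regular_graph(degree, n_vars, seed=1)
print(sorted(d for _, d in g.degree()))  # all degrees equal: fully balanced
```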
Abstract:
SEPServer is a three-year collaborative project funded by the Seventh Framework Programme (FP7-SPACE) of the European Union. The objective of the project is to provide access to state-of-the-art observations and analysis tools for the scientific community on solar energetic particle (SEP) events and related electromagnetic (EM) emissions. The project will eventually lead to a better understanding of the particle acceleration and transport processes at the Sun and in the inner heliosphere. These processes lead to SEP events that form one of the key elements of space weather. In this paper we present the first results from the systematic analysis work performed on the following datasets: SOHO/ERNE, SOHO/EPHIN, ACE/EPAM, Wind/WAVES and GOES X-rays. A catalogue of SEP events at 1 AU, with complete coverage over solar cycle 23, based on high-energy (~68 MeV) protons from SOHO/ERNE and electron recordings of the events by SOHO/EPHIN and ACE/EPAM, is presented. A total of 115 energetic particle events have been identified and analysed using velocity dispersion analysis (VDA) for protons and time-shifting analysis (TSA) for electrons and protons in order to infer the SEP release times at the Sun. EM observations during the times of the SEP event onsets have been gathered and compared to the release time estimates of particles. Data from those events that occurred during the European day-time, i.e., those that also have observations from ground-based observatories included in SEPServer, are listed and a preliminary analysis of their associations is presented. We find that VDA results for protons can be a useful tool for the analysis of proton release times, but if the derived proton path length falls outside the range 1 AU < s ≤ 3 AU, the result of the analysis may be compromised, as indicated by the anti-correlation of the derived path length and the release time delay from the associated X-ray flare. The average path length derived from VDA is about 1.9 times the nominal length of the spiral magnetic field line. This implies that the path length of first-arriving MeV to deka-MeV protons is affected by interplanetary scattering. TSA of near-relativistic electrons results in a release time that shows significant scatter with respect to the EM emissions, but with a trend of being delayed more with increasing distance between the flare and the nominal footpoint of the Earth-connected field line.
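VDA itself reduces to a straight-line fit: assuming the first particles in each energy channel travel scatter-free, the onset time obeys t_onset = t_release + s / v, so regressing onset times against 1/v gives the release time as the intercept and the apparent path length s as the slope. A sketch with invented numbers:

```python
import numpy as np

c = 299_792.458                          # speed of light, km/s
AU = 1.495978707e8                       # astronomical unit, km
v = np.array([0.2, 0.35, 0.5]) * c       # assumed proton speeds per channel
true_release, true_s = 100.0, 1.9 * AU   # release time (s) and path length
t_onset = true_release + true_s / v      # synthetic onset times

slope, intercept = np.polyfit(1.0 / v, t_onset, 1)
print("release: %.1f s, path length: %.2f AU" % (intercept, slope / AU))
```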
Abstract:
We use interplanetary transport simulations to compute a database of electron Green's functions, i.e., differential intensities resulting at the spacecraft position from an impulsive injection of energetic (>20 keV) electrons close to the Sun, for a large number of values of two standard interplanetary transport parameters: the scattering mean free path and the solar wind speed. The nominal energy channels of the ACE, STEREO, and Wind spacecraft have been used in the interplanetary transport simulations so as to provide a single tool for the study of near-relativistic electron events observed at 1 AU. In this paper, we quantify the characteristic times of the Green's functions (onset and peak time, rise and decay phase duration) as a function of the interplanetary transport conditions. We use the database to calculate the FWHM of the pitch-angle distributions at different times of the event and under different scattering conditions. This allows us to provide a first quantitative result that can be compared with observations, and to assess the validity of the frequently used term "beam-like" pitch-angle distribution.
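As an example of the kind of quantity tabulated, the FWHM of a sampled pitch-angle distribution can be measured by locating the half-maximum crossings. A generic sketch (the beam-like test distribution is invented):

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a single-peaked sampled curve, with
    linear interpolation at the two half-maximum crossings."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i, j = above[0], above[-1]
    left = (np.interp(half, [y[i - 1], y[i]], [x[i - 1], x[i]])
            if i > 0 else x[0])
    right = (np.interp(half, [y[j + 1], y[j]], [x[j + 1], x[j]])
             if j < len(x) - 1 else x[-1])
    return right - left

mu = np.linspace(-1.0, 1.0, 401)               # pitch-angle cosine grid
beam = np.exp(-0.5 * ((mu - 0.8) / 0.1) ** 2)  # beam-like distribution
print(fwhm(mu, beam))                          # ~0.2355 for sigma = 0.1
```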
Abstract:
In this paper we provide a formal account of the underapplication of vowel reduction to schwa in Majorcan Catalan loanwords and learned words. On the basis of a comparison of these data with those concerning productive derivation and verbal inflection, which show analogous patterns, we also explore a correlation, not yet acknowledged, between those processes that exhibit a particular behaviour in the loanword phonology with respect to the native phonology of the language, those processes that show lexical exceptions, and those processes that underapply for morphological reasons. In light of the analysis of the very same data, and taking into account the aforementioned correlation, we show how there might exist a natural diachronic relation between two kinds of Optimality Theory constraints which are commonly used but, in principle, mutually exclusive: positional faithfulness and contextual markedness constraints. Overall, phonological productivity proves crucial in three respects: first, as a context of the grammar, given that «underapplication» is systematically found in what we call the productive phonology of the dialect (including loanwords, learned words, productive derivation and verbal inflection); second, as a trigger or blocker of processes, in that the productivity or lack of productivity of a specific process or constraint in the language is what explains whether it is challenged or not in any of the depicted situations; and, third, as a guiding principle which can explain the transition from the historical to the synchronic phonology of a linguistic variety.
Abstract:
The expansion of an isolated hot spherical nucleus with excitation energy and its caloric curve are studied in a thermodynamic model with the SkM∗ force as the nuclear effective two-body interaction. The calculated results are shown to compare well with recent experimental data from energetic nuclear collisions. The fluctuations in temperature and density are also studied. They are seen to build up very rapidly beyond an excitation energy of ∼9 MeV/u. Volume-conserving quadrupole deformation in addition to expansion indicates, however, nuclear disassembly above an excitation energy of ∼4 MeV/u.
Abstract:
Microquasars are stellar X-ray binaries that behave as a scaled-down version of extragalactic quasars. The star LS 5039 is a new microquasar system with apparent persistent ejection of relativistic plasma, at a distance of 3 kiloparsecs from the Sun. It may also be associated with a gamma-ray source discovered by the Energetic Gamma Ray Experiment Telescope (EGRET) on board the Compton Gamma Ray Observatory satellite. Before the discovery of LS 5039, merely a handful of microquasars had been identified in the Galaxy, and none of them had been detected in high-energy gamma rays.