887 results for Problem Behavior Theory


Relevance: 30.00%

Abstract:

Mathematical Programs with Complementarity Constraints (MPCCs) find many applications in fields such as engineering design, economic equilibrium, and mathematical programming theory itself. A queueing system model resulting from a single signalized intersection regulated by pre-timed control in a traffic network is considered. The model is formulated as an MPCC problem. A MATLAB implementation based on a hyperbolic penalty function is used to solve this practical problem, computing the total average waiting time of the vehicles in all queues and the green split allocation. The problem was coded in AMPL.
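The authors' MATLAB/AMPL implementation is not reproduced in the abstract; as a loosely related illustration, the sketch below applies one common form of the hyperbolic smooth penalty to a toy complementarity constraint. The objective, starting point, and penalty schedule are all invented for the example.

```python
# Minimal sketch of a hyperbolic-penalty approach to an MPCC-style problem,
# on a toy instance (NOT the paper's traffic model): minimize f(x, y)
# subject to the complementarity constraint 0 <= x  perp  y >= 0,
# i.e. x >= 0, y >= 0, x*y = 0.
import numpy as np
from scipy.optimize import minimize

def hyper_pen(g, lam, tau):
    """Smooth hyperbolic penalty for an inequality g >= 0: small when g is
    satisfied, roughly 2*lam*|g| when violated."""
    return -lam * g + np.sqrt((lam * g) ** 2 + tau ** 2)

def penalized(z, lam, tau):
    x, y = z
    f = (x - 1.0) ** 2 + (y - 1.0) ** 2                  # toy objective
    p = hyper_pen(x, lam, tau) + hyper_pen(y, lam, tau)  # x >= 0, y >= 0
    p += hyper_pen(-x * y, lam, tau)                     # x*y <= 0  ->  x*y = 0
    return f + p

z = np.array([0.5, 0.5])
lam, tau = 1.0, 1.0
for _ in range(20):                      # outer loop: tighten the penalty
    res = minimize(penalized, z, args=(lam, tau))
    z, lam, tau = res.x, lam * 2.0, tau * 0.5
print(z)   # approaches a point satisfying the complementarity constraint
```

As the penalty parameters are tightened, the iterates approach one of the complementary solutions, here (1, 0) or (0, 1) for the toy objective.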

Relevance: 30.00%

Abstract:

Industrial rotating machines may be exposed to severe dynamic excitations due to resonant working regimes. Dealing with the bending vibration problem of a machine rotor, the shaft, together with its attached discs, can be simply modelled using Bernoulli-Euler beam theory as a continuous beam subjected to a specific set of boundary conditions. In this study, the authors recall Rayleigh's method to propose an iterative strategy, which allows for the determination of natural frequencies and mode shapes of continuous beams, taking into account the effect of attached concentrated masses and rotational inertias, as well as different stiffness coefficients at the left and right ends. The algorithm starts with the exact solutions from Bernoulli-Euler beam theory, which are then updated through Rayleigh's quotient parameters. Several loading cases are examined in comparison with the experimental data, and examples are presented to illustrate the validity of the model and the accuracy of the obtained values.
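A minimal numerical sketch of the Rayleigh quotient step that underlies such an iteration is given below; the beam data, the attached mass, and the assumed mode shape are illustrative stand-ins, not the authors' experimental setup.

```python
# Rayleigh quotient estimate of the first bending frequency of a
# pinned-pinned Bernoulli-Euler beam carrying one concentrated mass at
# mid-span. All values are assumed illustrative numbers (SI units).
import numpy as np

E, I = 210e9, 1.0e-6              # Young's modulus (Pa), second moment (m^4)
rho, A, L = 7850.0, 1.0e-3, 1.0   # density (kg/m^3), area (m^2), length (m)
m_att, x_att = 2.0, 0.5 * L       # attached mass (kg) and its position

x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]
w = np.sin(np.pi * x / L)         # exact first mode of the bare beam
w2 = -(np.pi / L) ** 2 * w        # its second derivative

num = E * I * np.sum(w2 ** 2) * dx                 # bending strain energy term
den = rho * A * np.sum(w ** 2) * dx \
      + m_att * np.interp(x_att, x, w) ** 2        # kinetic term + point mass
omega = np.sqrt(num / den)        # Rayleigh estimate (an upper bound), rad/s
print(omega / (2.0 * np.pi), "Hz")
```

In an iterative strategy of the kind described above, the estimated frequency would then be used to update the assumed mode shape, repeating until convergence.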

Relevance: 30.00%

Abstract:

The ruthenium(II)-cymene complexes [Ru(η⁶-cymene)(bha)Cl] with substituted halogenobenzohydroxamato (bha) ligands (substituents = 4-F, 4-Cl, 4-Br, 2,4-F₂, 3,4-F₂, 2,5-F₂, 2,6-F₂) have been synthesized and characterized by elemental analysis, IR, ¹H NMR, ¹³C NMR, cyclic voltammetry and controlled-potential electrolysis, and density functional theory (DFT) studies. The compositions of their frontier molecular orbitals (MOs) were established by DFT calculations, and the oxidation and reduction potentials are shown to follow the orders of the estimated vertical ionization potential and electron affinity, respectively. The electrochemical Lever parameter E_L is estimated for the first time for the various bha ligands, which can thus be ordered according to their electron-donor character. All complexes exhibit very strong protein tyrosine kinase (PTK) inhibitory activity, even much higher than that of genistein, the clinically used PTK inhibitory drug. The complex containing the 2,4-difluorobenzohydroxamato ligand is the most active one, and the dependences of the PTK activity of the complexes and of their redox potentials on the ring substituents are discussed.

Relevance: 30.00%

Abstract:

In memory of our beloved Professor José Rodrigues Santos de Sousa Ramos (1948-2007), whose student João Cabral, one of the authors of this paper, had the honor of being between 2000 and 2006, we wrote this paper following research by experimentation, using new technologies to capture new insight into a problem, as he so much loved to do. His passion was to create new relations between different fields of mathematics. He was a builder of bridges of knowledge, encouraging the birth of new ways to understand this science. One of the areas that Sousa Ramos researched was the iteration of maps and the description of their behavior using symbolic dynamics. So, in this issue of this journal, honoring his memory, we use experimental results to find some stable regions of a specific family of real rational maps, the ones that he worked on with João Cabral. In this paper we describe a parameter space (a, b) for the real rational maps f_{a,b}(x) = (x² − a)/(x² − b), using some tools of dynamical systems, such as the study of the critical point orbit and Lyapunov exponents. We give some results regarding the stability of this family of maps under iteration, especially those connected to order-3 iteration. We hope that our results will help toward a better understanding of the behavior of these maps, preparing the ground for a more efficient use of kneading theory on this family through symbolic dynamics.
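A small sketch of the kind of experiment described, iterating the critical orbit and estimating a Lyapunov exponent for one (a, b) pair, might look as follows; the parameter values are illustrative and not taken from the paper.

```python
# Iterate f_{a,b}(x) = (x^2 - a)/(x^2 - b) from the critical point x = 0
# and estimate the Lyapunov exponent of the critical orbit.
import numpy as np

def f(x, a, b):
    return (x * x - a) / (x * x - b)

def df(x, a, b):
    return 2.0 * x * (a - b) / (x * x - b) ** 2   # derivative of f

def lyapunov(a, b, n_transient=500, n_iter=5000):
    x, s = 0.0, 0.0                 # start at the critical point
    for i in range(n_transient + n_iter):
        x = f(x, a, b)
        if i >= n_transient:
            s += np.log(abs(df(x, a, b)) + 1e-300)  # guard against log(0)
    return s / n_iter

# Sign of the estimate indicates a stable (negative) or chaotic (positive)
# regime at this illustrative parameter pair:
print(lyapunov(a=1.2, b=0.3))
```

Scanning such estimates over a grid of (a, b) values is one way to chart the stable regions of the parameter space mentioned above.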

Relevance: 30.00%

Abstract:

The development of children's school achievement in mathematics is one of the most important aims of education in Poland. The results of research on the monitoring of school achievement in maths are not optimistic: we observe low levels of children's understanding of the merits of maths, of self-developed strategies for solving problems, and of the practical use of maths skills. This article frames the discussion of this problem in its psychological and didactic context and analyses the causes as they relate to school practice in teaching maths.

Relevance: 30.00%

Abstract:

Electricity markets are complex environments, involving a large number of different entities with specific characteristics and objectives, making their decisions and interacting in a dynamic scene. Game theory has been widely used to support decisions in competitive environments, and its application in electricity markets can therefore prove to be a high-potential tool. This paper proposes a new scenario analysis algorithm, which includes the application of game theory to evaluate and preview different scenarios and to provide players with the ability to react strategically so as to exhibit the behavior that best fits their objectives. The model includes forecasts of competitor players' actions, used to build models of their behavior and to define the most probable scenarios. Once the scenarios are defined, game theory is applied to support the choice of the action to be performed. Our use of game theory is intended to support one specific agent, not to achieve equilibrium in the market. MASCEM (Multi-Agent System for Competitive Electricity Markets) is a multi-agent electricity market simulator that models market players and simulates their operation in the market. The scenario analysis algorithm has been tested within MASCEM, and our experimental findings with a case study based on real data from the Iberian Electricity Market are presented and discussed.
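The scenario logic can be pictured with a toy sketch: hypothetical competitor forecasts define probabilistic scenarios, and the supported agent picks the action with the best expected payoff. The payoff model, action names, and probabilities below are invented for illustration and are not MASCEM's API.

```python
# Toy scenario analysis: enumerate scenarios from forecast competitor
# behavior, then choose our action by expected payoff.
import itertools

our_actions = ["bid_low", "bid_mid", "bid_high"]
# Forecast of each competitor's behavior: action -> estimated probability.
competitor_forecasts = [
    {"bid_low": 0.6, "bid_high": 0.4},
    {"bid_low": 0.3, "bid_high": 0.7},
]

def payoff(ours, competitors):
    """Hypothetical payoff model; a real simulator would supply this."""
    return {"bid_low": 1.0, "bid_mid": 2.0, "bid_high": 3.0}[ours] \
        - sum(1.0 for c in competitors if c == "bid_high")

best = None
for ours in our_actions:
    expected = 0.0
    # Each combination of forecast competitor actions is one scenario.
    for combo in itertools.product(*(f.keys() for f in competitor_forecasts)):
        prob = 1.0
        for forecast, action in zip(competitor_forecasts, combo):
            prob *= forecast[action]
        expected += prob * payoff(ours, combo)
    if best is None or expected > best[1]:
        best = (ours, expected)
print(best)   # action with the highest expected payoff over all scenarios
```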

Relevance: 30.00%

Abstract:

The higher education system in Europe is currently under stress, and the debates over its reform and future are gaining momentum. Now that, for most countries, this is a time of change in the overall society and the whole education system, the legal and political dimensions have gained prominence; this has not been accompanied by a more integrative approach to the problem of order, its reform, and the issue of regulation, beyond the typical static and classical cost-benefit analyses. The two classical approaches for studying (and for designing the policy measures of) the reform of the higher education system, cost-benefit analysis and legal scholarship description, have to be integrated. This is the argument of our paper: the integration of economic and legal approaches, what Warren Samuels called the legal-economic nexus, is meaningful and necessary, especially if we want to address the problem of order (as formulated by Joseph Spengler) and the overall regulation of the system. On the one hand, without neglecting the interest and insights gained from cost-benefit analysis or other value-for-money assessments, we focus our study on the legal, social and political aspects of the regulation of the higher education system and its reform in Portugal. On the other hand, the economic and financial problems have to be taken into account, but in a more inclusive way with regard to the indirect and other socio-economic costs not contemplated in traditional or standard assessments of policies for the tertiary education sector.

In the first section of the paper, we discuss the theoretical and conceptual underpinnings of our analysis, focusing on the evolutionary approach, the role of critical institutions, the legal-economic nexus, and the problem of order. All these elements are related to the institutional tradition, from Veblen and Commons to Spengler and Samuels. The second section states the problem of regulation in the higher education system and the issue of policy formulation for tackling it. The current situation is clearly one of crisis, with the expansion of the cohorts of young students coming to an end and recurrent scandals in private institutions. In the last decade, after a protracted period of extension or expansion of the system, i.e., continuous growth in the number of students, universities and other institutions have had to compete harder to attract students and have seen their financial situation put at risk. It seems that we are entering a period of radical uncertainty and higher competition; the new configuration that is slowly building up is growth in intensity, which means upgrading the quality of higher learning and greater involvement in vocational training and life-long learning. With this change, and along with other deep changes in Portuguese society and the economy, the current regulation has shown signs of maladjustment.

The third section presents our conclusions on the current issue of regulation and the policy challenge. First, we underline the importance of an evolutionary approach to a process of change that is essentially dynamic; special attention is given to an evolutionary construal of policy analysis and formulation. Second, the integration of law and economics, through the notion of the legal-economic nexus, allows us to better define the issues of regulation and the concrete problems that the universities are facing. One such issue is the instability of political measures regarding the public administration, on which the higher education system depends financially, legally and institutionally, to say the least; a corollary is the lack of a clear strategy in the policy reforms. Third, our research criticizes several studies, such as the one made by the OECD in late 2006 for the Ministry of Science, Technology and Higher Education, for being too static and for neglecting fundamental aspects of regulation such as the logic of the actors, groups and organizations who are major players in the system. Finally, simply changing the legal rules will not per se change the behaviors that the authorities want to change. By this we mean that it is remiss of the policy maker to ignore some of the critical issues of regulation, namely the continuing non-compliance of the academic management and administrative bodies of universities with legal rules that were once promulgated. Changing the rules does not change the problem, especially without the necessary debates from the different relevant quarters that make up the higher education system; the issues of social interaction remain intact. Our treatment of the matter is organized in the following way. In the first section, the theoretical principles are developed so as to study more adequately the transformation of higher education, with a modest evolutionary theory and a legal-economic nexus of the interactions of the system and the policy challenges. After describing, in the second section, the recent evolution and current working of higher education in Portugal, we analyze the legal framework and the current regulatory practices and problems in light of the theoretical framework adopted. We end with some conclusions on the current problems of regulation and the policy measures that have been discussed in recent years.

Relevance: 30.00%

Abstract:

The scope of this paper is to adapt the standard mean-variance model of Harry Markowitz's portfolio theory, creating a simulation tool to find the optimal configuration of the portfolio aggregator and to calculate its profitability and risk. Currently, there is a deep discussion going on within the power systems community about the structure and architecture of the future electric system. In this environment, policy makers and electric utilities are finding new approaches to access the electricity market, which opens challenging positions in the search for innovative strategies and methodologies. Decentralized power generation is gaining relevance in liberalized markets, and small and medium-size electricity consumers are also becoming producers ("prosumers"). In this scenario an electric aggregator is an entity that joins a group of electric clients, customers, producers, and "prosumers" together as a single purchasing unit to negotiate the purchase and sale of electricity. The aggregator conducts research on electricity prices and on contract terms and conditions in order to promote better energy prices for its clients, allowing small and medium customers to benefit from improved market prices.
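A minimal sketch of the mean-variance trade-off behind such a tool is shown below, using made-up returns and covariances for three hypothetical generation sources and a crude random search over portfolio weights.

```python
# Mean-variance portfolio sketch in the spirit of Markowitz's model, for a
# hypothetical aggregator choosing weights over generation sources.
# Returns and covariances are invented illustrative numbers.
import numpy as np

mu = np.array([0.08, 0.12, 0.05])          # expected returns per source
cov = np.array([[0.10, 0.02, 0.00],        # covariance of returns
                [0.02, 0.20, 0.01],
                [0.00, 0.01, 0.05]])

best = None
rng = np.random.default_rng(0)
for _ in range(100_000):                   # crude random search on the simplex
    w = rng.dirichlet(np.ones(len(mu)))    # weights >= 0, summing to 1
    ret = float(mu @ w)
    risk = float(w @ cov @ w)
    score = ret - 1.0 * risk               # risk-aversion coefficient = 1.0
    if best is None or score > best[0]:
        best = (score, w, ret, risk)
print("weights:", np.round(best[1], 3), "return:", best[2], "risk:", best[3])
```

Varying the risk-aversion coefficient traces out the familiar efficient frontier of return against risk.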

Relevance: 30.00%

Abstract:

We consider the problem of scheduling a multi-mode real-time system upon identical multiprocessor platforms. Being a multi-mode system, it can change from one mode to another such that the current task set is replaced with a new task set. Ensuring that deadlines are met requires not only that a schedulability test be performed on the tasks in each mode, but also that (i) a protocol for transitioning from one mode to another be specified and (ii) a schedulability test for each transition be performed. We propose two protocols which ensure that all the expected requirements are met during every transition between every pair of operating modes of the system. Moreover, we prove the correctness of our proposed algorithms by extending the theory of the makespan determination problem.
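The paper's protocols are not reproduced in the abstract. One classical result in this area, McNaughton's optimal preemptive makespan on m identical processors, gives a flavor of the kind of bound a makespan-based transition analysis involves; that this is exactly the result the authors extend is an assumption here, and the numbers are illustrative.

```python
# Classical preemptive makespan on m identical processors: the workload
# of the old-mode tasks cannot be cleared before this value, and
# McNaughton's wrap-around rule achieves it with at most m-1 preemptions.
def preemptive_makespan(exec_times, m):
    return max(max(exec_times), sum(exec_times) / m)

remaining = [4.0, 3.0, 3.0, 2.0]   # residual execution of old-mode tasks
print(preemptive_makespan(remaining, m=2))   # -> 6.0 time units
```

In a mode-change setting, such a value bounds how long the new-mode tasks may have to wait before the platform is free of old-mode work.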

Relevance: 30.00%

Abstract:

We address the problem of coordinating two non-holonomic mobile robots that move in formation while transporting a long payload. A competitive dynamics is introduced that gradually controls the activation and deactivation of individual behaviors. This process introduces (asymmetrical) hysteresis during behavioral switching, and as a result behavioral oscillations due to noisy information are eliminated. Results in indoor environments show that if parameter values are chosen within reasonable ranges then, in spite of noise in the robots' communication and sensors, the overall robotic system works quite well even in cluttered environments. The robots' overt behavior is stable and smooth.
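A minimal sketch of one standard form of competitive dynamics is given below; the exact equations used by the authors are not stated in the abstract, so this particular normal form and its parameters are assumptions for illustration.

```python
# Competitive dynamics between two behaviors with activation variables u_i:
#   du_i/dt = alpha_i * u_i * (1 - u_i^2) - gamma * u_j^2 * u_i
# A behavior is "on" when |u_i| grows toward 1 and "off" when it decays to 0.
import numpy as np

def step(u, alpha, gamma, dt=0.01):
    du = alpha * u * (1.0 - u ** 2) - gamma * (u[::-1] ** 2) * u
    return u + dt * du

u = np.array([0.9, 0.1])        # behavior 1 currently active, behavior 2 off
alpha = np.array([1.0, 2.0])    # sensory context now favors behavior 2
for _ in range(3000):
    u = step(u, alpha, gamma=1.5)
print(np.round(u, 3))           # -> roughly [0., 1.]: behavior 2 took over
```

Raising the competition strength gamma above both alphas makes the currently active behavior keep winning regardless of small input changes; that asymmetry is the hysteresis that suppresses switching on noisy inputs.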

Relevance: 30.00%

Abstract:

Engineering education includes not only teaching fundamental theoretical concepts but also their verification during practical lessons in laboratories. The usual strategies to carry this out are frequently based on Problem-Based Learning, starting from a given state and proceeding forward to a target state. The possibility or the effectiveness of this procedure depends on previous states and on whether the present state was caused by or resulted from earlier ones. This often happens in engineering education when the achieved results do not match the desired ones, e.g. when programming code is being developed or when the cause of the wrong behavior of an electronic circuit is being identified. It is thus important to also prepare students to proceed in the reverse way, i.e. given a start state, to generate the explanation or even the principles that underlie it. Later on, this sort of skill will be important, for instance, to a doctor taking a patient's history or to an engineer discovering the source of a malfunction. This learning methodology presents pedagogical advantages besides the enhanced preparation of students for their future work. The work presented in this document describes an automation project developed by a group of students in an engineering polytechnic school laboratory. The main objective was to improve the performance of a Braille machine. However, in a scenario of Reverse Problem-Based Learning, students first had to discover and characterize the entire machine's function before being allowed (and being able) to propose a solution for the existing problem.

Relevance: 30.00%

Abstract:

We generalize Wertheim's first-order perturbation theory to account for the effect on the thermodynamics of the self-assembly of rings characterized by two energy scales. The theory is applied to a lattice model of patchy particles and tested against Monte Carlo simulations on an fcc lattice. These particles have 2 patches of type A and 10 patches of type B, which may form AA or AB bonds that decrease the energy by ε_AA and by ε_AB = r ε_AA, respectively. The angle θ between the two A-patches on each particle is fixed at 60°, 90° or 120°. For values of r below 1/2 and above a threshold r_th(θ), the models exhibit a phase diagram with two critical points. Both theory and simulation predict that r_th increases when θ decreases. We show that the mechanism that prevents phase separation for models with decreasing values of θ is related to the formation of loops containing AB bonds. Moreover, we show that by including the free energy of B-rings (loops containing one AB bond), the theory describes the trends observed in the simulation results, but that for the lowest values of θ the theoretical description deteriorates due to the increasing number of loops containing more than one AB bond.
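The two-energy-scale bookkeeping can be sketched with a bare-bones Metropolis acceptance test; this is only illustrative energy accounting with invented parameter values, not the full fcc-lattice simulation of the paper.

```python
# Energy bookkeeping for the two-scale bond model: AA bonds release eps_AA
# and AB bonds release eps_AB = r*eps_AA. A Metropolis test decides whether
# a trial move that changes the bond counts is accepted.
import math, random

eps_AA, r, kT = 1.0, 0.4, 0.15     # invented illustrative values
eps_AB = r * eps_AA

def energy(n_AA, n_AB):
    return -(n_AA * eps_AA + n_AB * eps_AB)   # bonding lowers the energy

def accept(n_AA_old, n_AB_old, n_AA_new, n_AB_new):
    dE = energy(n_AA_new, n_AB_new) - energy(n_AA_old, n_AB_old)
    return dE <= 0 or random.random() < math.exp(-dE / kT)

# e.g. trading one AA bond for two AB bonds (a ring-closing type of move):
print(accept(n_AA_old=3, n_AB_old=0, n_AA_new=2, n_AB_new=2))
```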

Relevance: 30.00%

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18].

Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest; the basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown by Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data.

In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of observed data that yields statistically independent components. Given that hyperspectral data are, under given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance.

IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps: first, source densities and noise covariance are estimated from the observed data by maximum likelihood; second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique for unmixing independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance.

Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] also find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets; in any case, these algorithms find the set of most pure pixels in the data.

Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced.

This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data, with the MOG parameters (number of components, means, covariances, and weights) inferred using a minimum description length (MDL) based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information may be very far from the true one; nevertheless, some abundance fractions may be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR.

We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, in which abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints. The mixing matrix is inferred by an expectation-maximization (EM) type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief résumé of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
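As a side illustration of the linear mixing model used throughout this chapter, the sketch below simulates one pixel as y = Ma + n and recovers nonnegative abundances by least squares, renormalizing for the sum-to-one constraint. The random endmember matrix is purely illustrative; it stands in for a real spectral library.

```python
# Linear mixing model y = M a + n and a simple constrained abundance
# estimate: nonnegativity via NNLS, sum-to-one via renormalization.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_bands, n_end = 50, 3              # number of bands, number of endmembers
M = rng.random((n_bands, n_end))    # columns = endmember signatures (fake)
a_true = np.array([0.6, 0.3, 0.1])  # true abundances: >= 0, sum to 1
y = M @ a_true + 0.01 * rng.standard_normal(n_bands)   # observed spectrum

a_hat, _ = nnls(M, y)               # nonnegative least squares
a_hat /= a_hat.sum()                # project onto the sum-to-one constraint
print(np.round(a_hat, 3))           # close to a_true
```

It is exactly the sum-to-one coupling visible here that makes the abundance "sources" statistically dependent, which is why ICA and IFA struggle on such data.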


Relevance: 30.00%

Abstract:

This paper presents a decision support methodology for electricity market players' bilateral contract negotiations. The proposed model is based on the application of game theory, using artificial intelligence to enhance the decision support method's adaptive features. The model is integrated in AiD-EM (Adaptive Decision Support for Electricity Markets Negotiations), a multi-agent system that provides electricity market players with strategic behavior capabilities to improve their outcomes from energy contract negotiations. Although a diversity of tools that enable the study and simulation of electricity markets has emerged during the past few years, these are mostly directed at the analysis of market models and power systems' technical constraints, making them suitable tools to support the decisions of market operators and regulators. However, the equally important support of market negotiating players' decisions has been largely neglected. The proposed model contributes to overcoming the existing gap concerning effective and realistic decision support for electricity market negotiating entities. The proposed method is validated by realistic electricity market simulations using real data from the Iberian market operator, MIBEL. Results show that the proposed adaptive decision support features enable electricity market players to improve their outcomes from bilateral contract negotiations.
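A toy sketch of the kind of game-theoretic decision support described, with an invented acceptance-probability forecast for the counterparty (not AiD-EM's actual models), is given below.

```python
# Choose the contract price proposal that maximizes expected utility against
# a forecast model of the counterparty. All numbers are hypothetical.
proposals = [45.0, 50.0, 55.0]        # EUR/MWh price offers
accept_prob = {45.0: 0.9, 50.0: 0.6, 55.0: 0.3}   # forecast acceptance model
fallback = 40.0                       # utility if the negotiation fails

def expected_utility(price):
    return accept_prob[price] * price + (1 - accept_prob[price]) * fallback

best = max(proposals, key=expected_utility)
print(best, expected_utility(best))   # -> 50.0 is the best offer here
```

An adaptive layer of the kind the paper describes would keep re-estimating the acceptance model from observed negotiation outcomes rather than fixing it in advance.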