900 results for Fuzzy Set Theory


Relevance:

30.00%

Publisher:

Abstract:

The great interest in nonlinear system identification is mainly due to the fact that a large number of real systems are complex and need to have their nonlinearities considered so that their models can be successfully used in applications of control, prediction, inference, among others. This work evaluates the application of Fuzzy Wavelet Neural Networks (FWNN) to identify nonlinear dynamical systems subjected to noise and outliers. Generally, these elements cause negative effects on the identification procedure, resulting in erroneous interpretations regarding the dynamical behavior of the system. The FWNN combines in a single structure the ability of fuzzy logic to deal with uncertainties, the multiresolution characteristics of wavelet theory, and the learning and generalization abilities of artificial neural networks. Usually, the learning procedure of these neural networks is realized by a gradient-based method, which uses the mean squared error as its cost function. This work proposes the replacement of this traditional function by an Information Theoretic Learning similarity measure called correntropy. With this similarity measure, higher-order statistics can be considered during the FWNN training process. For this reason, this measure is more suitable for non-Gaussian error distributions and makes the training less sensitive to the presence of outliers. In order to evaluate this replacement, FWNN models are obtained in two identification case studies: a real nonlinear system, consisting of a multisection tank, and a simulated system based on a model of the human knee joint. The results demonstrate that using correntropy as the cost function of the error backpropagation algorithm makes the identification procedure using FWNN models more robust to outliers. However, this is only achieved if the Gaussian kernel width of the correntropy is properly adjusted.
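The abstract does not reproduce the training equations, but the correntropy criterion it refers to is simply the average of a Gaussian kernel evaluated at the model errors. A minimal Python sketch (the kernel width `sigma` and the toy data are illustrative, not taken from the work) shows why large outlier errors barely move this criterion compared with the mean squared error:

```python
import numpy as np

def correntropy(errors, sigma=1.0):
    """Maximum-correntropy criterion: mean Gaussian kernel of the errors.
    Training maximizes this value (or minimizes its negative), which
    down-weights large errors caused by outliers, unlike the MSE."""
    errors = np.asarray(errors, dtype=float)
    return np.mean(np.exp(-errors**2 / (2.0 * sigma**2)))

def correntropy_gradient(errors, sigma=1.0):
    """Derivative of the criterion w.r.t. each error, as used when the
    cost is plugged into an error backpropagation scheme."""
    errors = np.asarray(errors, dtype=float)
    return -errors / sigma**2 * np.exp(-errors**2 / (2.0 * sigma**2)) / errors.size

# Outliers dominate the MSE but barely change the correntropy value.
clean = np.random.normal(0.0, 0.1, size=100)
contaminated = np.concatenate([clean, [5.0, -6.0]])
print(correntropy(clean), correntropy(contaminated))
print(np.mean(clean**2), np.mean(contaminated**2))
```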

Relevance:

30.00%

Publisher:

Abstract:

An abundance of research in the social sciences has demonstrated a persistent bias against nonnative English speakers (Giles & Billings, 2004; Gluszek & Dovidio, 2010). Yet, organizational scholars have only begun to investigate the underlying mechanisms that drive the bias against nonnative speakers and subsequently design interventions to mitigate these biases. In this dissertation, I offer an integrative model to organize past explanations for accent-based bias into a coherent framework, and posit that nonnative accents elicit social perceptions that have implications at the personal, relational, and group level. I also seek to complement the existing emphasis on main effects of accents, which focuses on the general tendency to discriminate against those with accents, by examining moderators that shed light on the conditions under which accent-based bias is most likely to occur. Specifically, I explore the idea that people’s beliefs about the controllability of accents can moderate their evaluations toward nonnative speakers, such that those who believe that accents can be controlled are more likely to demonstrate a bias against nonnative speakers. I empirically test my theoretical model in three studies in the context of entrepreneurial funding decisions. Results generally supported the proposed model. By examining the micro foundations of accent-based bias, the ideas explored in this dissertation set the stage for future research in an increasingly multilingual world.

Relevance:

30.00%

Publisher:

Abstract:

Major developments in the technological environment can become commonplace very quickly. They are now impacting upon a broad range of information-based service sectors, as high growth Internet-based firms, such as Google, Amazon, Facebook and Airbnb, and financial technology (Fintech) start-ups expand their product portfolios into new markets. Real estate is one of the information-based service sectors that is currently being impacted by this new type of competitor and the broad range of disruptive digital technologies that have emerged. Due to the vast troves of data that these Internet firms have at their disposal and their asset-light (cloud-based) structures, they are able to offer highly-targeted products at much lower costs than conventional brick-and-mortar companies.

Relevance:

30.00%

Publisher:

Abstract:

This work links the theories proposed in two earlier studies: the transformation of crisp values into fuzzy values, and the construction of fuzzy control charts. The result of this link is a fuzzy control chart that was applied to a yogurt production process, in which the variables analyzed were Color, Aroma, Consistency, Flavor and Acidity. Since these characteristics depend on individual perception, sensory analysis was used to collect information about them. In the analyses, a panel of judges individually assigned scores to each yogurt sample on a scale from 0 to 10. These crisp values, the scores assigned by the judges, were then transformed into fuzzy values in the form of triangular fuzzy numbers. From the fuzzy numbers, fuzzy control charts for the mean and the range were constructed. From the crisp values, Shewhart control charts for the mean and the range, already well established in the literature, were constructed. Finally, the results obtained with the traditional charts were compared with those obtained with the fuzzy control charts. What can be observed is that the fuzzy control chart seems to reflect the reality of the process more faithfully, because the construction of the fuzzy number takes the variability of the process into account. In addition, it characterizes the production process at several levels, in which the process is not always either completely in control or completely out of control. This agrees with fuzzy theory: if certain results cannot be predicted exactly, it is better to have a margin of acceptance, which leads to a reduction of errors.
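The abstract does not detail how the judges' crisp scores are converted into triangular fuzzy numbers; a common construction, assumed here only for illustration, takes the (minimum, mean, maximum) of each sample's scores and then computes Shewhart-style X-bar limits component-wise. A rough Python sketch with made-up data and the textbook A2 constant for subgroups of five:

```python
import numpy as np

def triangular_from_scores(scores):
    """One sample's judge scores (0-10 scale) as a triangular fuzzy
    number (a, m, b): lower bound, most typical value, upper bound."""
    scores = np.asarray(scores, dtype=float)
    return np.array([scores.min(), scores.mean(), scores.max()])

def fuzzy_xbar_limits(samples, A2=0.577):
    """Shewhart-style X-bar limits applied component-wise to the
    triangular fuzzy numbers (A2 = 0.577 for subgroups of size 5)."""
    tfns = np.array([triangular_from_scores(s) for s in samples])  # (k, 3)
    r_bar = np.mean([np.ptp(s) for s in samples])                  # mean range
    center = tfns.mean(axis=0)                                     # fuzzy center line
    return center - A2 * r_bar, center, center + A2 * r_bar

# Example: 5 judges score 6 yogurt samples for "Flavor".
rng = np.random.default_rng(0)
samples = rng.integers(5, 10, size=(6, 5), endpoint=True)
lcl, cl, ucl = fuzzy_xbar_limits(samples)
print("LCL:", lcl, "CL:", cl, "UCL:", ucl)
```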

Relevance:

30.00%

Publisher:

Abstract:

Game theory models the strategies of interacting agents (players), who receive payoffs at the end of the game according to their actions. The best pair of strategies for the players constitutes an equilibrium solution. However, the data of the problem cannot always be estimated precisely. For this reason, the uncertain parameters present in game models are formalized with fuzzy theory. Fuzzy theory thus supports game theory, giving rise to fuzzy games, in which parameters such as the payoffs become fuzzy numbers. Moreover, when there is uncertainty in the representation of these fuzzy numbers, interval fuzzy numbers are used. In this work, interval fuzzy game models are analyzed and computational methods are developed for solving these games. Finally, linear programming simulations are carried out to better observe the application of the theories studied and to evaluate the proposal.
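The abstract says only that linear programming is used; the standard LP for the value of a crisp two-player zero-sum matrix game, which interval-fuzzy approaches typically solve at the lower and upper payoff matrices, can be sketched as follows (the payoff matrices are made up for illustration):

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(payoff):
    """Value and optimal mixed strategy of the row player (maximizer)
    for a zero-sum game, via the classic LP reformulation: shift the
    payoffs positive, substitute u = x / v, minimize sum(u) s.t. A^T u >= 1."""
    A = np.asarray(payoff, dtype=float)
    shift = 1.0 - A.min()          # make every entry >= 1
    A = A + shift
    m, n = A.shape
    res = linprog(c=np.ones(m), A_ub=-A.T, b_ub=-np.ones(n),
                  bounds=[(0, None)] * m, method="highs")
    u = res.x
    value = 1.0 / u.sum()
    return value - shift, u * value

# Interval payoffs handled by solving the game at both endpoint matrices.
lower = np.array([[2.0, -1.0], [0.0, 3.0]])
upper = np.array([[3.0,  0.0], [1.0, 4.0]])
print(solve_zero_sum(lower))
print(solve_zero_sum(upper))
```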

Relevance:

30.00%

Publisher:

Abstract:

Value and reasons for action are often cited by rationalists and moral realists as providing a desire-independent foundation for normativity. Those maintaining instead that normativity is dependent upon motivation often deny that anything called "value" or "reasons" exists. According to the interest-relational theory, something has value relative to some perspective of desire just in case it satisfies those desires, and a consideration is a reason for some action just in case it indicates that something of value will be accomplished by that action. Value judgements therefore describe real properties of objects and actions, but have no normative significance independent of desires. It is argued that only the interest-relational theory can account for the practical significance of value and reasons for action. Against the Kantian hypothesis of prescriptive rational norms, I attack the alleged instrumental norm or hypothetical imperative, showing that the normative force for taking the means to our ends is explicable in terms of our desire for the end, and not as a command of reason. This analysis also provides a solution to the puzzle concerning the connection between value judgement and motivation. While it is possible to hold value judgements without motivation, the connection is more than accidental. This is because value judgements are usually but not always made from the perspective of desires that actually motivate the speaker. In the normal case judgement entails motivation. But often we conversationally borrow external perspectives of desire, and subsequent judgements do not entail motivation. This analysis drives a critique of a common practice as a misuse of normative language. The "absolutist" attempts to use and, as philosopher, analyze normative language in such a way as to justify the imposition of certain interests over others. But these uses and analyses are incoherent - in denying relativity to particular desires they conflict with the actual meaning of these utterances, which is always indexed to some particular set of desires.

Relevance:

30.00%

Publisher:

Abstract:

Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, normally a long sequence of moves is needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not completed yet, thus having the ability to finish a schedule by using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms, by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule will be constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data. Thus learning can amount to 'counting' in the case of multinomial distributions.
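The 'counting' step mentioned above can be made concrete. The sketch below assumes, purely for illustration, a chain-structured network in which the rule chosen at each construction stage depends only on the rule chosen at the previous stage; the Laplace smoothing constant and the toy rule strings are not from the source:

```python
from collections import Counter, defaultdict

def fit_rule_probabilities(good_solutions, n_stages, n_rules, alpha=1.0):
    """Estimate P(rule at stage t | rule at stage t-1) by counting over a
    set of promising rule strings (each a list of rule indices, one per
    construction stage), with Laplace smoothing `alpha`."""
    first = Counter(sol[0] for sol in good_solutions)
    pairs = defaultdict(Counter)
    for sol in good_solutions:
        for t in range(1, n_stages):
            pairs[(t, sol[t - 1])][sol[t]] += 1

    p0 = [(first[r] + alpha) / (len(good_solutions) + alpha * n_rules)
          for r in range(n_rules)]

    def conditional(t, prev_rule, rule):
        counts = pairs[(t, prev_rule)]
        return (counts[rule] + alpha) / (sum(counts.values()) + alpha * n_rules)

    return p0, conditional

# Four promising rule strings, 3 stages, 2 candidate rules per stage.
p0, cond = fit_rule_probabilities([[0, 1, 1], [0, 0, 1], [1, 1, 1], [0, 1, 0]],
                                  n_stages=3, n_rules=2)
print(p0, cond(1, 0, 1))
```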
In the LCS approach, each rule has its strength showing its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step is to assign each rule at each stage a constant initial strength. Then rules are selected by using the Roulette Wheel strategy. The next step is to reinforce the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step is to select fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning into scheduling algorithms and may therefore be of interest to researchers and practitioners in areas of scheduling and evolutionary computation.
References:
1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in press).
2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344.
3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No 99003, University of Illinois.
4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1), pp. 1-18.
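A minimal sketch of the three LCS steps described above (constant initial strengths, roulette-wheel selection, reinforcement of the rules used in the previous solution); the reward value and the function names are illustrative, not taken from the source:

```python
import random

def roulette_select(strengths):
    """Pick a rule index with probability proportional to its strength."""
    total = sum(strengths)
    threshold = random.uniform(0.0, total)
    cumulative = 0.0
    for idx, strength in enumerate(strengths):
        cumulative += strength
        if threshold <= cumulative:
            return idx
    return len(strengths) - 1

def reinforce(strengths, used_rules, reward=0.5):
    """Increase the strength of every rule used in the previous solution,
    leaving the strength of unused rules unchanged."""
    for idx in used_rules:
        strengths[idx] += reward
    return strengths

# Four candidate rules, all starting with the same constant strength.
strengths = [1.0, 1.0, 1.0, 1.0]
chosen = [roulette_select(strengths) for _ in range(3)]
strengths = reinforce(strengths, chosen)
print(chosen, strengths)
```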

Relevance:

30.00%

Publisher:

Abstract:

The thesis presents experimental results, simulations, and theory on turbulence excited in magnetized plasmas near the ionosphere's upper hybrid layer. The results include: (1) the first experimental observations of super small striations (SSS) excited by the High Frequency Active Auroral Research Program (HAARP); (2) the first detection of high-frequency (HF) waves from the HAARP transmitter over a distance of 16x10^3 km; (3) the first simulations indicating that upper hybrid (UH) turbulence excites electron Bernstein waves associated with all nearby gyroharmonics; and (4) simulation results indicating that the resulting bulk electron heating near the upper hybrid (UH) resonance is caused primarily by electron Bernstein waves parametrically excited near the first gyroharmonic. On the experimental side, we present two sets of experiments performed at the HAARP heating facility in Alaska. In the first set of experiments, we present the first detection of super-small (cm-scale) striations (SSS) at the HAARP facility. We detected density structures smaller than 30 cm for the first time through a combination of satellite and ground-based measurements. In the second set of experiments, we present the results of a novel diagnostic implemented by the Ukrainian Antarctic Station (UAS) at Vernadsky. The technique allowed the detection of the HAARP signal at a distance of nearly 16 Mm, and established that the HAARP signal was injected into the ionospheric waveguide by direct scattering off of dekameter-scale density structures induced by the heater. On the theoretical side, we present results of Vlasov simulations near the upper hybrid layer. These results are consistent with the bulk heating required by previous work on the theory of the formation of descending artificial ionospheric layers (DAILs), and with the new observations of DAILs at HAARP's upgraded effective radiated power (ERP). The simulations include frequency sweeps, and demonstrate that the heating changes from bulk heating between gyroharmonics to tail acceleration as the pump frequency is swept through the fourth gyroharmonic. These simulations are in good agreement with experiments. We also incorporate test particle simulations that isolate the effects of specific wave modes on heating, and we find important contributions from both electron Bernstein waves and upper hybrid waves, the former of which have not yet been detected by experiments and have not been previously explored as a driver of heating. In presenting these results, we analyzed data from HAARP diagnostics and assisted in planning the second round of experiments. We integrated the data into a picture of experiments that demonstrated the detection of SSS, hysteresis effects in stimulated electromagnetic emission (SEE) features, and the direct scattering of the HF pump into the ionospheric waveguide. We performed simulations and analyzed simulation data to build an understanding of collisionless heating near the upper hybrid layer, and we used these simulations to show that bulk electron heating at the upper hybrid layer is possible, as required by current theories of DAIL formation. We wrote a test particle simulation to isolate the effects of electron Bernstein waves and upper hybrid waves on collisionless heating, and integrated this code to work with both the output of Vlasov simulations and the input for simulations of DAIL formation.

Relevance:

30.00%

Publisher:

Abstract:

In this paper we use concepts from graph theory and cellular biology, represented as ontologies, to carry out semantic mining tasks on signaling pathway networks. Specifically, the paper describes the semantic enrichment of signaling pathway networks. A cell signaling network describes the basic cellular activities and their interactions. The main contribution of this paper is in the signaling pathway research area: it proposes a new technique to analyze and understand how changes in these networks may affect the transmission and flow of information, which produce diseases such as cancer and diabetes. Our approach is based on three concepts from graph theory (modularity, clustering and centrality) frequently used in social network analysis. Our approach consists of two phases: the first uses the graph theory concepts to determine the cellular groups in the network, which we call communities; the second uses ontologies for the semantic enrichment of the cellular communities. The measures taken from graph theory allow us to determine the set of cells that are close (for example, in a disease), and the main cells in each community. We analyze our approach in two cases: TGF-β and Alzheimer's disease.
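The abstract names modularity, clustering and centrality but no specific tooling; a minimal sketch of the community-detection phase using networkx, with made-up node names standing in for pathway components, could look like this:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def analyze_pathway(edges):
    """Split a signaling network into communities (modularity-based
    clustering) and report the most central node of each community
    (degree centrality), i.e. phase one of the approach described above."""
    graph = nx.Graph(edges)
    centrality = nx.degree_centrality(graph)
    summary = []
    for members in greedy_modularity_communities(graph):
        hub = max(members, key=centrality.get)
        summary.append({"members": sorted(members), "hub": hub})
    return summary

# Toy interaction list standing in for a pathway such as TGF-beta;
# the node names are illustrative, not taken from the paper.
edges = [("ligand", "receptor"), ("receptor", "smad2"), ("smad2", "smad4"),
         ("receptor", "smad3"), ("smad3", "smad4"), ("smad4", "target_gene")]
for community in analyze_pathway(edges):
    print(community)
```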

Relevance:

30.00%

Publisher:

Abstract:

We study the relations of shift equivalence and strong shift equivalence for matrices over a ring $\mathcal{R}$, and establish a connection between these relations and algebraic K-theory. We utilize this connection to obtain results in two areas where the shift and strong shift equivalence relations play an important role: the study of finite group extensions of shifts of finite type, and the Generalized Spectral Conjectures of Boyle and Handelman for nonnegative matrices over subrings of the real numbers. We show that the refinement of the shift equivalence class of a matrix $A$ over a ring $\mathcal{R}$ by strong shift equivalence classes over the ring is classified by a quotient $NK_{1}(\mathcal{R}) / E(A,\mathcal{R})$ of the algebraic K-group $NK_{1}(\mathcal{R})$. We use the K-theory of non-commutative localizations to show that in certain cases the subgroup $E(A,\mathcal{R})$ must vanish, including the case $A$ is invertible over $\mathcal{R}$. We use the K-theory connection to clarify the structure of algebraic invariants for finite group extensions of shifts of finite type. In particular, we give a strong negative answer to a question of Parry, who asked whether the dynamical zeta function determines up to finitely many topological conjugacy classes the extensions by $G$ of a fixed mixing shift of finite type. We apply the K-theory connection to prove the equivalence of a strong and weak form of the Generalized Spectral Conjecture of Boyle and Handelman for primitive matrices over subrings of $\mathbb{R}$. We construct explicit matrices whose class in the algebraic K-group $NK_{1}(\mathcal{R})$ is non-zero for certain rings $\mathcal{R}$ motivated by applications. We study the possible dynamics of the restriction of a homeomorphism of a compact manifold to an isolated zero-dimensional set. We prove that for $n \ge 3$ every compact zero-dimensional system can arise as an isolated invariant set for a homeomorphism of a compact $n$-manifold. In dimension two, we provide obstructions and examples.

Relevance:

30.00%

Publisher:

Abstract:

The analysis of fluid behavior in multiphase flow is very relevant to guarantee system safety. The use of equipment to describe such behavior is subject to factors such as the high level of investment and of specialized labor required. The application of image processing techniques to flow analysis can be a good alternative; however, very little research has been developed on this subject. This study aims at developing a new approach to image segmentation, based on the Level Set method, that connects active contours and prior knowledge. To do so, a shape model of the target object is trained and defined through a point distribution model, and this model is then inserted as one of the extension velocity functions for the curve evolution at the zero level of the level set method. The proposed approach creates a framework that consists of three energy terms and an extension velocity function, λL_g(φ) + νA_g(φ) + μP(φ) + θ_f. The first three terms of the equation are the same ones introduced in (LI; XU; FOX, 2005), and the last term, θ_f, is based on the representation of object shape proposed in this work. Two variations of the method are used: one restricted (Restrict Level Set - RLS) and the other with no restriction (Free Level Set - FLS). The first is used to segment images that contain targets with little variation in shape and pose. The second is used to correctly identify the shape of the bubbles in liquid-gas two-phase flows. The efficiency and robustness of the RLS and FLS approaches are demonstrated on images of liquid-gas two-phase flows and on the HTZ image dataset (FERRARI et al., 2009). The results confirm the good performance of the proposed algorithm (RLS and FLS) and indicate that the approach may be used as an efficient method to validate and/or calibrate the various existing devices used to meter two-phase flow properties, as well as in other image segmentation problems.
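The thesis's actual implementation is not given in the abstract; purely as an illustration, one explicit evolution step combining the three Li-Xu-Fox terms with an extra shape-prior force (standing in for θ_f, here just an image-sized array) can be sketched as follows, with all weights chosen arbitrarily:

```python
import numpy as np

def div(fx, fy):
    """Divergence of a 2-D vector field (fx along x, fy along y)."""
    return np.gradient(fx, axis=1) + np.gradient(fy, axis=0)

def dirac(phi, eps=1.5):
    """Smoothed Dirac delta, localizing the update near the zero level set."""
    return (eps / np.pi) / (eps**2 + phi**2)

def level_set_step(phi, g, shape_force, mu=0.2, lam=5.0, nu=1.5, theta=1.0, dt=1.0):
    """One explicit gradient-descent step for mu*P + lam*L_g + nu*A_g plus
    a generic shape-prior force `shape_force` weighted by theta."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx**2 + gy**2) + 1e-10
    curvature = div(gx / norm, gy / norm)
    penalty = div(gx, gy) - curvature                       # flow of P(phi)
    edge = dirac(phi) * div(g * gx / norm, g * gy / norm)   # flow of L_g(phi)
    area = dirac(phi) * g                                   # flow of A_g(phi)
    return phi + dt * (mu * penalty + lam * edge + nu * area + theta * shape_force)

# Toy usage: a circular zero level set on a flat edge-indicator image.
y, x = np.mgrid[0:64, 0:64]
phi = np.sqrt((x - 32.0)**2 + (y - 32.0)**2) - 15.0   # signed distance to a circle
g = np.ones_like(phi)                                  # edge indicator (flat here)
phi = level_set_step(phi, g, shape_force=np.zeros_like(phi))
print(phi.shape)
```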

Relevance:

30.00%

Publisher:

Abstract:

In the past few years, there has been a concern among economists and policy makers that increased openness to international trade affects some regions in a country more than others. Recent research has found that local labor markets more exposed to import competition through their initial employment composition experience worse outcomes in several dimensions, such as employment, wages, and poverty. Although there is evidence that regions within a country exhibit variation in the intensity with which they trade with each other and with other countries, trade linkages have been ignored in empirical analyses of the regional effects of trade, which focus on differences in employment composition. In this dissertation, I investigate how local labor markets' trade linkages shape the response of wages to international trade shocks. In the second chapter, I lay out a standard multi-sector general equilibrium model of trade, where domestic regions trade with each other and with the rest of the world. Using this benchmark, I decompose a region's wage change resulting from a national import cost shock into a direct effect on prices, holding other endogenous variables constant, and a series of general equilibrium effects. I argue the direct effect provides a natural measure of exposure to import competition within the model, since it summarizes the effect of the shock on a region's wage as a function of initial conditions given by its trade linkages. I call my proposed measure linkage exposure, while I refer to the measures used in previous studies as employment exposure. My theoretical analysis also shows that the assumptions previous studies make on trade linkages are not consistent with the standard trade model. In the third chapter, I calibrate the model to the Brazilian economy in 1991, at the beginning of a period of trade liberalization, to perform a series of experiments. In each of them, I reduce the Brazilian import cost by 1 percent in a single sector and I calculate how much of the cross-regional variation in counterfactual wage changes is explained by exposure measures. Over this set of experiments, employment exposure explains, for the median sector, 2 percent of the variation in counterfactual wage changes, while linkage exposure explains 44 percent. In addition, I propose an estimation strategy that incorporates trade linkages in the analysis of the effects of trade on observed wages. In the model, changes in wages are completely determined by changes in market access, an endogenous variable that summarizes the real demand faced by a region. I show that a linkage measure of exposure is a valid instrument for changes in market access within Brazil. Using observed wage changes in Brazil between 1991 and 2000, my estimates imply that a region at the 25th percentile of the change in domestic market access induced by trade liberalization experiences a 0.6 log point larger wage decline (or smaller wage increase) than a region at the 75th percentile. The estimates from a regression of wage changes on exposure imply that a region at the 25th percentile of exposure experiences a 3 log point larger wage decline (or smaller wage increase) than a region at the 75th percentile. I conclude that estimates based on exposure overstate the negative impact of trade liberalization on wages in Brazil. In the fourth chapter, I extend the standard model to allow for two types of workers according to their education levels: skilled and unskilled.
I show that there is substantial variation across Brazilian regions in the skill premium. I use the exogenous variation provided by tariff changes to estimate the impact of market access on the skill premium. I find that the decrease in domestic market access resulting from trade liberalization led to a higher skill premium. I propose a mechanism to explain this result: the manufacturing sector is relatively more intensive in unskilled labor, and I show empirical evidence that supports this hypothesis.
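The dissertation's linkage-exposure measure is derived from its model and is not reproduced in the abstract; for comparison, the employment-composition exposure it contrasts with is commonly computed as an employment-share-weighted sum of sector-level import-cost changes. A sketch under that assumption, with toy numbers:

```python
import numpy as np

def employment_exposure(employment, import_cost_change):
    """Employment-composition exposure of each region: sector-level
    import-cost changes weighted by the region's initial employment
    shares. `employment` is a (regions x sectors) matrix."""
    employment = np.asarray(employment, dtype=float)
    shares = employment / employment.sum(axis=1, keepdims=True)
    return shares @ np.asarray(import_cost_change, dtype=float)

# Toy data: 3 regions, 2 sectors, a 1 percent import-cost cut in sector 0.
employment = [[100, 50], [20, 80], [60, 60]]
import_cost_change = [-0.01, 0.0]
print(employment_exposure(employment, import_cost_change))
```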

Relevance:

30.00%

Publisher:

Abstract:

This work presents a proposal to detect the interface in atmospheric oil tanks by installing a differential pressure level transmitter to infer the oil-water interface. The main goal of this project is to maximize the quantity of free water that is delivered to the drainage line by controlling the interface. A fuzzy controller has been implemented using the interface transmitter as the process variable. Two ladder routines were generated to perform the control: one calculates the error and the error variation, and the other implements the fuzzy controller itself. Using rules, the fuzzy controller maps these variables to the output, which is the position variation of the drainage valve. Although the ladder routines were implemented on an Allen-Bradley PLC of the ControlLogix family, they can be implemented on any brand of PLC.
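The ladder logic itself is not reproduced in the abstract; the sketch below only illustrates, in Python, the kind of two-input fuzzy inference described (error and error variation in, valve-position change out), using made-up membership limits, a made-up rule table, and a zero-order Sugeno (weighted-average) defuzzification rather than whatever method the thesis actually used:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return float(np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0))

def fuzzy_valve_step(error, d_error):
    """From interface error (% of range) and its variation, compute the
    change in drainage-valve position (%)."""
    e = {"N": tri(error, -100, -50, 0), "Z": tri(error, -50, 0, 50),
         "P": tri(error, 0, 50, 100)}
    de = {"N": tri(d_error, -20, -10, 0), "Z": tri(d_error, -10, 0, 10),
          "P": tri(d_error, 0, 10, 20)}
    # Rule table: output singleton (valve change, %) per (error, d_error) label.
    rules = {("N", "N"): -10, ("N", "Z"): -5, ("N", "P"): 0,
             ("Z", "N"): -5,  ("Z", "Z"): 0,  ("Z", "P"): 5,
             ("P", "N"): 0,   ("P", "Z"): 5,  ("P", "P"): 10}
    weights = {key: min(e[key[0]], de[key[1]]) for key in rules}
    total = sum(weights.values())
    if total == 0.0:
        return 0.0
    return sum(w * rules[key] for key, w in weights.items()) / total

print(fuzzy_valve_step(error=30.0, d_error=5.0))
```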