890 results for Fuzzy set theory
Abstract:
The thesis presents experimental results, simulations, and theory on turbulence excited in magnetized plasmas near the ionosphere's upper hybrid layer. The results include: the first experimental observations of super-small striations (SSS) excited by the High Frequency Active Auroral Research Program (HAARP); the first detection of high-frequency (HF) waves from the HAARP transmitter over a distance of 16x10^3 km; the first simulations indicating that upper hybrid (UH) turbulence excites electron Bernstein waves associated with all nearby gyroharmonics; and simulation results indicating that the resulting bulk electron heating near the UH resonance is caused primarily by electron Bernstein waves parametrically excited near the first gyroharmonic. On the experimental side we present two sets of experiments performed at the HAARP heating facility in Alaska. In the first set of experiments, we present the first detection of super-small (cm-scale) striations (SSS) at the HAARP facility: density structures smaller than 30 cm were detected for the first time through a combination of satellite and ground-based measurements. In the second set of experiments, we present the results of a novel diagnostic implemented by the Ukrainian Antarctic Station (UAS) at Vernadsky. The technique allowed the detection of the HAARP signal at a distance of nearly 16 Mm, and established that the HAARP signal was injected into the ionospheric waveguide by direct scattering off of dekameter-scale density structures induced by the heater. On the theoretical side, we present results of Vlasov simulations near the upper hybrid layer. These results are consistent with the bulk heating required by previous work on the theory of the formation of descending artificial ionospheric layers (DAILs), and with the new observations of DAILs at HAARP's upgraded effective radiated power (ERP). The simulations include frequency sweeps, and demonstrate that the heating changes from bulk heating between gyroharmonics to tail acceleration as the pump frequency is swept through the fourth gyroharmonic. These simulations are in good agreement with experiments. We also incorporate test-particle simulations that isolate the effects of specific wave modes on heating, and we find important contributions from both electron Bernstein waves and upper hybrid waves, the former of which have not yet been detected by experiments and have not previously been explored as a driver of heating. In presenting these results, we analyzed data from HAARP diagnostics and assisted in planning the second round of experiments. We integrated the data into a picture of experiments that demonstrated the detection of SSS, hysteresis effects in stimulated electromagnetic emission (SEE) features, and the direct scattering of the HF pump into the ionospheric waveguide. We performed simulations and analyzed simulation data to build understanding of collisionless heating near the upper hybrid layer, and we used these simulations to show that bulk electron heating at the upper hybrid layer is possible, as required by current theories of DAIL formation. We wrote a test-particle simulation to isolate the effects of electron Bernstein waves and upper hybrid waves on collisionless heating, and integrated this code to work with both the output of the Vlasov simulations and the input for simulations of DAIL formation.
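The test-particle approach mentioned above can be illustrated with a minimal sketch: a Boris-push integrator advancing electrons in a uniform background magnetic field plus one prescribed electrostatic wave standing in for a Bernstein or upper hybrid mode. All amplitudes, frequencies, and wavenumbers below are illustrative placeholders, not values from the thesis.

```python
import numpy as np

# Minimal test-particle sketch (illustrative values only): electrons in a
# uniform magnetic field B0*z plus a prescribed electrostatic wave E_x(x, t),
# advanced with the standard Boris push.
q_m   = -1.0          # charge-to-mass ratio in normalized units
B0    = 1.0           # background field -> gyrofrequency = |q_m|*B0
E0    = 0.05          # wave amplitude (placeholder)
kx    = 5.0           # wavenumber of the prescribed wave (placeholder)
omega = 3.2 * B0      # wave frequency, e.g. between gyroharmonics (placeholder)
dt    = 0.01
steps = 20000

rng = np.random.default_rng(0)
x = rng.uniform(0, 2 * np.pi / kx, size=1000)   # particle positions (1D)
v = rng.normal(0.0, 0.1, size=(1000, 3))        # velocities (vx, vy, vz)

def wave_E(x, t):
    """Electric field of the prescribed electrostatic wave along x."""
    return E0 * np.cos(kx * x - omega * t)

for n in range(steps):
    t = n * dt
    E = np.zeros((x.size, 3))
    E[:, 0] = wave_E(x, t)
    # Boris push: half electric kick, magnetic rotation, half electric kick.
    v_minus = v + 0.5 * q_m * E * dt
    t_vec = np.array([0.0, 0.0, 0.5 * q_m * B0 * dt])
    s_vec = 2 * t_vec / (1 + t_vec @ t_vec)
    v_prime = v_minus + np.cross(v_minus, t_vec)
    v_plus = v_minus + np.cross(v_prime, s_vec)
    v = v_plus + 0.5 * q_m * E * dt
    x = x + v[:, 0] * dt

# Bulk heating vs. tail acceleration can then be judged from the final
# velocity distribution, e.g. its variance and high-energy tail fraction.
print("mean kinetic energy:", 0.5 * np.mean(np.sum(v**2, axis=1)))
```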
Abstract:
In this paper we use concepts from graph theory and cellular biology, represented as ontologies, to carry out semantic mining tasks on signaling pathway networks. Specifically, the paper describes the semantic enrichment of signaling pathway networks. A cell signaling network describes the basic cellular activities and their interactions. The main contribution of this paper to the signaling pathway research area is a new technique for analyzing and understanding how changes in these networks may affect the transmission and flow of information, changes which produce diseases such as cancer and diabetes. Our approach is based on three concepts from graph theory (modularity, clustering, and centrality) frequently used in social network analysis. The approach consists of two phases: the first uses the graph theory concepts to determine the cellular groups in the network, which we call communities; the second uses ontologies for the semantic enrichment of the cellular communities. The graph-theoretic measures allow us to determine the set of cells that are close (for example, in a disease) and the main cells in each community. We analyze our approach in two cases: the TGF-β pathway and Alzheimer's disease.
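The first phase described above can be sketched with standard graph tooling; the snippet below uses networkx modularity-based communities and betweenness centrality on a toy interaction graph. The node names and edges are made up for illustration and are not taken from the paper's pathway data.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy signaling-style interaction graph (hypothetical nodes and edges).
G = nx.Graph()
G.add_edges_from([
    ("TGFBR1", "SMAD2"), ("TGFBR1", "SMAD3"), ("SMAD2", "SMAD4"),
    ("SMAD3", "SMAD4"), ("SMAD4", "TargetGeneA"),
    ("APP", "BACE1"), ("APP", "PSEN1"), ("BACE1", "Abeta"), ("PSEN1", "Abeta"),
    ("SMAD4", "APP"),   # weak bridge between the two modules
])

# Phase 1a: modularity-based communities ("cellular groups").
communities = greedy_modularity_communities(G)
for i, community in enumerate(communities):
    print(f"community {i}: {sorted(community)}")

# Phase 1b: centrality to find the "main" node of each community.
centrality = nx.betweenness_centrality(G)
for i, community in enumerate(communities):
    hub = max(community, key=centrality.get)
    print(f"community {i} hub: {hub} (betweenness {centrality[hub]:.2f})")
```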
Abstract:
We study the relations of shift equivalence and strong shift equivalence for matrices over a ring $\mathcal{R}$, and establish a connection between these relations and algebraic K-theory. We utilize this connection to obtain results in two areas where the shift and strong shift equivalence relations play an important role: the study of finite group extensions of shifts of finite type, and the Generalized Spectral Conjectures of Boyle and Handelman for nonnegative matrices over subrings of the real numbers. We show that the refinement of the shift equivalence class of a matrix $A$ over a ring $\mathcal{R}$ by strong shift equivalence classes over the ring is classified by a quotient $NK_{1}(\mathcal{R}) / E(A,\mathcal{R})$ of the algebraic K-group $NK_{1}(\mathcal{R})$. We use the K-theory of non-commutative localizations to show that in certain cases the subgroup $E(A,\mathcal{R})$ must vanish, including the case that $A$ is invertible over $\mathcal{R}$. We use the K-theory connection to clarify the structure of algebraic invariants for finite group extensions of shifts of finite type. In particular, we give a strong negative answer to a question of Parry, who asked whether the dynamical zeta function determines up to finitely many topological conjugacy classes the extensions by $G$ of a fixed mixing shift of finite type. We apply the K-theory connection to prove the equivalence of a strong and a weak form of the Generalized Spectral Conjecture of Boyle and Handelman for primitive matrices over subrings of $\mathbb{R}$. We construct explicit matrices whose class in the algebraic K-group $NK_{1}(\mathcal{R})$ is non-zero for certain rings $\mathcal{R}$ motivated by applications. We study the possible dynamics of the restriction of a homeomorphism of a compact manifold to an isolated zero-dimensional set. We prove that for $n \ge 3$ every compact zero-dimensional system can arise as an isolated invariant set for a homeomorphism of a compact $n$-manifold. In dimension two, we provide obstructions and examples.
Abstract:
Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve them heuristically. However, most previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention for solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism for dealing with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs operate on a mapped solution space and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet complete, and thus has the ability to finish a schedule using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule is constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, a new instance for each node is generated using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning amounts to 'counting' in the case of multinomial distributions.
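For the fully observed, fixed-structure case just described, "learning by counting" and then sampling new rule strings can be sketched as follows. The chain-structured network, the rule counts, and the Laplace smoothing are illustrative assumptions for the sketch, not details taken from the cited BOA report.

```python
import numpy as np

# Sketch of BOA-style learning when the network structure is known and fixed:
# here a simple chain, where the rule chosen at step i is conditioned on the
# rule chosen at step i-1. Counts come from a set of promising rule strings.
n_steps, n_rules = 5, 4                 # schedule length, rules per step (placeholders)
rng = np.random.default_rng(1)
promising = rng.integers(0, n_rules, size=(30, n_steps))   # stand-in "good" solutions

# "Learning amounts to counting": estimate P(rule_0) and P(rule_i | rule_{i-1})
# from the promising set, with Laplace smoothing to avoid zero probabilities.
p0 = np.bincount(promising[:, 0], minlength=n_rules) + 1.0
p0 /= p0.sum()
cond = np.ones((n_steps - 1, n_rules, n_rules))            # [step, prev_rule, rule]
for s in range(1, n_steps):
    for prev, cur in zip(promising[:, s - 1], promising[:, s]):
        cond[s - 1, prev, cur] += 1.0
cond /= cond.sum(axis=2, keepdims=True)

def sample_rule_string():
    """Generate a new rule string node by node from the learned probabilities."""
    rules = [rng.choice(n_rules, p=p0)]
    for s in range(1, n_steps):
        rules.append(rng.choice(n_rules, p=cond[s - 1, rules[-1]]))
    return rules

print(sample_rule_string())   # new candidate, decoded into a schedule downstream
```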
In the LCS approach, each rule has a strength indicating its current usefulness in the system, and this strength is constantly reassessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step assigns each rule at each stage a constant initial strength; rules are then selected using the roulette-wheel strategy. The next step reinforces the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented in general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning in scheduling algorithms and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation. References 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in print). 2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
Abstract:
The analysis of fluid behavior in multiphase flow is very relevant to guaranteeing system safety. The use of equipment to describe such behavior is subject to factors such as high investment and specialized labor requirements. The application of image processing techniques to flow analysis can be a good alternative; however, very little research has been done on it. This study therefore aims at developing a new approach to image segmentation, based on the Level Set method, that connects active contours and prior knowledge. To do so, a shape model of the target object is trained and defined through a point distribution model, and this model is later inserted as one of the extension velocity functions for the evolution of the zero level curve of the level set method. The proposed approach creates a framework consisting of three energy terms and an extension velocity function, λLg(φ) + νAg(φ) + μP(φ) + θf. The first three terms of the equation are the same ones introduced in (Li, Xu and Fox, 2005), and the last term θf is based on the representation of object shape proposed in this work. Two variations of the method are used: one restricted (Restricted Level Set - RLS) and the other unrestricted (Free Level Set - FLS). The first is used for segmenting images whose targets show little variation in shape and pose. The second is used to correctly identify the shape of the bubbles in gas-liquid two-phase flows. The efficiency and robustness of the RLS and FLS approaches are demonstrated on images of gas-liquid two-phase flows and on the HTZ image dataset (Ferrari et al., 2009). The results confirm the good performance of the proposed algorithm (RLS and FLS) and indicate that the approach may be used as an efficient method to validate and/or calibrate the various existing meters for two-phase flow properties, as well as in other image segmentation problems.
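A minimal sketch of the curve evolution behind the first three energy terms (the edge-based formulation cited above) is given below, without the shape-prior term θf; the toy image, weights, and iteration count are placeholders, not the configuration used in this work.

```python
import numpy as np

def edge_indicator(image, eps=1e-8):
    """g = 1 / (1 + |grad(I)|^2), small where image edges are strong."""
    gy, gx = np.gradient(image)
    return 1.0 / (1.0 + gx**2 + gy**2 + eps)

def divergence(fx, fy):
    """Divergence of a 2D vector field (fx, fy)."""
    return np.gradient(fx, axis=1) + np.gradient(fy, axis=0)

def evolve(phi, image, mu=0.2, lam=5.0, nu=1.5, dt=1.0, iters=200):
    """Gradient-descent evolution of the level set phi for the three
    edge-based terms (regularization, length, area); no shape prior here."""
    g = edge_indicator(image)
    gy, gx = np.gradient(g)
    for _ in range(iters):
        py, px = np.gradient(phi)
        norm = np.sqrt(px**2 + py**2) + 1e-10
        nx, ny = px / norm, py / norm
        curvature = divergence(nx, ny)
        # Smoothed Dirac delta concentrating the update near the zero level.
        delta = (1.0 / np.pi) * (1.5 / (1.5**2 + phi**2))
        reg_term = mu * (divergence(px, py) - curvature)                  # P(phi)
        length_term = lam * delta * (gx * nx + gy * ny + g * curvature)   # Lg(phi)
        area_term = nu * g * delta                                        # Ag(phi)
        phi = phi + dt * (reg_term + length_term + area_term)
    return phi

# Toy usage: a bright disk on a dark background, initial contour as a box.
yy, xx = np.mgrid[0:100, 0:100]
image = ((xx - 50)**2 + (yy - 50)**2 < 20**2).astype(float)
phi0 = np.ones((100, 100)); phi0[20:80, 20:80] = -1.0
phi = evolve(phi0, image)
print("pixels inside final contour:", int((phi < 0).sum()))
```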
Abstract:
In the past few years, there has been a concern among economists and policy makers that increased openness to international trade affects some regions in a country more than others. Recent research has found that local labor markets more exposed to import competition through their initial employment composition experience worse outcomes in several dimensions, such as employment, wages, and poverty. Although there is evidence that regions within a country exhibit variation in the intensity with which they trade with each other and with other countries, trade linkages have been ignored in empirical analyses of the regional effects of trade, which focus on differences in employment composition. In this dissertation, I investigate how local labor markets' trade linkages shape the response of wages to international trade shocks. In the second chapter, I lay out a standard multi-sector general equilibrium model of trade, where domestic regions trade with each other and with the rest of the world. Using this benchmark, I decompose a region's wage change resulting from a national import cost shock into a direct effect on prices, holding other endogenous variables constant, and a series of general equilibrium effects. I argue the direct effect provides a natural measure of exposure to import competition within the model, since it summarizes the effect of the shock on a region's wage as a function of initial conditions given by its trade linkages. I call my proposed measure linkage exposure, while I refer to the measures used in previous studies as employment exposure. My theoretical analysis also shows that the assumptions previous studies make on trade linkages are not consistent with the standard trade model. In the third chapter, I calibrate the model to the Brazilian economy in 1991--at the beginning of a period of trade liberalization--to perform a series of experiments. In each of them, I reduce the Brazilian import cost by 1 percent in a single sector and calculate how much of the cross-regional variation in counterfactual wage changes is explained by exposure measures. Over this set of experiments, employment exposure explains, for the median sector, 2 percent of the variation in counterfactual wage changes, while linkage exposure explains 44 percent. In addition, I propose an estimation strategy that incorporates trade linkages in the analysis of the effects of trade on observed wages. In the model, changes in wages are completely determined by changes in market access, an endogenous variable that summarizes the real demand faced by a region. I show that a linkage measure of exposure is a valid instrument for changes in market access within Brazil. Using observed wage changes in Brazil between 1991 and 2000, my estimates imply that a region at the 25th percentile of the change in domestic market access induced by trade liberalization experiences a 0.6 log point larger wage decline (or smaller wage increase) than a region at the 75th percentile. The estimates from a regression of wage changes on exposure imply that a region at the 25th percentile of exposure experiences a 3 log point larger wage decline (or smaller wage increase) than a region at the 75th percentile. I conclude that estimates based on exposure overstate the negative impact of trade liberalization on wages in Brazil. In the fourth chapter, I extend the standard model to allow for two types of workers according to their education levels: skilled and unskilled.
I show that there is substantial variation across Brazilian regions in the skill premium. I use the exogenous variation provided by tariff changes to estimate the impact of market access on the skill premium. I find that the decrease in domestic market access resulting from trade liberalization led to a higher skill premium. I propose a mechanism to explain this result: the manufacturing sector is relatively more intensive in unskilled labor, and I show empirical evidence that supports this hypothesis.
Abstract:
This work presents a proposal to detect the interface in atmospheric oil tanks by installing a differential pressure level transmitter to infer the oil-water interface. The main goal of this project is to maximize the quantity of free water delivered to the drainage line by controlling the interface level. A fuzzy controller has been implemented using the interface transmitter as the process variable. Two ladder routines were generated to perform the control: one routine calculates the error and the error variation, and the other implements the fuzzy controller itself. Using rules, the fuzzy controller maps these variables onto the output, which is the position variation of the drainage valve. Although the ladder routines were implemented in an Allen-Bradley ControlLogix family PLC, they can be implemented in any brand of PLC.
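A minimal sketch of such a two-input fuzzy controller is shown below, written in Python rather than ladder logic; the membership breakpoints, rule table, and output range are illustrative placeholders, not the tuning used in this project.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

# Linguistic terms for error and error variation: Negative, Zero, Positive.
# Breakpoints are placeholders; in practice they come from process tuning.
def fuzzify(value):
    return {"N": tri(value, -1.0, -0.5, 0.0),
            "Z": tri(value, -0.5,  0.0, 0.5),
            "P": tri(value,  0.0,  0.5, 1.0)}

# Rule table: (error term, delta-error term) -> valve position change term.
RULES = {("N", "N"): "Close", ("N", "Z"): "Close", ("N", "P"): "Hold",
         ("Z", "N"): "Close", ("Z", "Z"): "Hold",  ("Z", "P"): "Open",
         ("P", "N"): "Hold",  ("P", "Z"): "Open",  ("P", "P"): "Open"}
OUTPUT_CENTERS = {"Close": -5.0, "Hold": 0.0, "Open": 5.0}   # % valve travel per scan

def fuzzy_controller(error, delta_error):
    """Mamdani-style inference with min activation and centroid-of-singletons
    defuzzification; returns the valve position change."""
    e, de = fuzzify(error), fuzzify(delta_error)
    num = den = 0.0
    for (e_term, de_term), out_term in RULES.items():
        w = min(e[e_term], de[de_term])          # rule activation strength
        num += w * OUTPUT_CENTERS[out_term]
        den += w
    return num / den if den > 0 else 0.0

# Example scan: interface above setpoint and rising -> open the drainage valve.
print(fuzzy_controller(error=0.4, delta_error=0.2))
```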
Abstract:
Virtually every sector of business and industry that uses computing, including financial analysis, search engines, and electronic commerce, incorporates Big Data analysis into its business model. Sophisticated clustering algorithms are popular for deducing the nature of data by assigning labels to unlabeled data. We address two main challenges in Big Data. First, by definition, the volume of Big Data is too large to be loaded into a computer's memory (this volume changes based on the computer used or available, but there is always a data set that is too large for any computer). Second, in real-time applications, the velocity of new incoming data prevents historical data from being stored and future data from being accessed. Therefore, we propose our Streaming Kernel Fuzzy c-Means (stKFCM) algorithm, which significantly reduces both computational complexity and space complexity. The proposed stKFCM only requires O(n^2) memory, where n is the (predetermined) size of a data subset (or data chunk) at each time step, which makes the algorithm truly scalable (as n can be chosen based on the available memory). Furthermore, only 2n^2 elements of the full N × N (where N >> n) kernel matrix need to be calculated at each time step, reducing both the time spent producing the kernel elements and the complexity of the FCM algorithm. Empirical results show that stKFCM, even with relatively small n, can provide clustering performance as accurate as kernel fuzzy c-means run on the entire data set, while achieving a significant speedup.
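For reference, a minimal (non-streaming) kernel fuzzy c-means update on a single data chunk is sketched below; the RBF kernel, fuzzifier m = 2, and random initialization are generic choices for illustration and do not constitute the stKFCM algorithm itself.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_fcm(K, c=2, m=2.0, iters=100, seed=0):
    """Kernel fuzzy c-means with prototypes kept implicitly in feature space.
    K is the n x n kernel matrix of one data chunk; returns memberships U (c x n)."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    U = rng.random((c, n))
    U /= U.sum(axis=0, keepdims=True)              # columns sum to 1
    for _ in range(iters):
        W = U**m                                    # fuzzified memberships
        s = W.sum(axis=1, keepdims=True)
        # Squared feature-space distance from each point to each prototype:
        # ||phi(x_k) - v_i||^2 = K_kk - 2 (W K)_ik / s_i + (W K W^T)_ii / s_i^2
        d2 = (np.diag(K)[None, :]
              - 2.0 * (W @ K) / s
              + np.diag(W @ K @ W.T)[:, None] / s**2)
        d2 = np.maximum(d2, 1e-12)
        # Standard FCM membership update from the distances.
        U = 1.0 / (d2 ** (1.0 / (m - 1.0)))
        U /= U.sum(axis=0, keepdims=True)
    return U

# Toy chunk: two well-separated Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
U = kernel_fcm(rbf_kernel(X, gamma=0.5), c=2)
print("hard labels of first and last points:", U.argmax(axis=0)[[0, -1]])
```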
Abstract:
Schurz and Tholen (2016) argue that common approaches to studying the neural basis of "theory of mind" (ToM) obscure a potentially important role for the inferior frontal gyrus (IFG) in managing conflict between perspectives, and urge new work to address this question: "to gain a full understanding of the IFG's role in ToM, we encourage future imaging studies to use a wider range of control conditions" (p. 332). We wholeheartedly agree, but note that this observation has been made before and has already led to a programme of work providing evidence from fMRI, EEG, and TMS on the role of the IFG in managing conflict between self and other perspectives in ToM. We highlight these works, and in particular we demonstrate how careful manipulation within ToM tasks has been used as an internal control condition, wherein conflict is manipulated within-subject. We further add to the discussion by framing key questions that remain regarding the IFG in the context of these findings. Drawing on limitations of the existing research, we outline how researchers can best proceed with the challenge set by Schurz and Tholen (2016).
Abstract:
Fifty years have passed since Cyert and March's 1963 A Behavioral Theory of the Firm. During this time, the BTOF has been adopted across different research domains to investigate how organizations set goals, how they determine aspirations, and how they react to performance-aspiration discrepancies. Cyert and March's framework has also recently emerged as one of the dominant paradigms for understanding the ways in which family business organizations make decisions. In this chapter, I review the theoretical development and empirical results of the BTOF and its application in the family business field of study in order to identify theoretical and empirical gaps and propose suggestions for future research. The conclusions suggest that the BTOF is both a theoretically and empirically valid perspective in family business research, particularly when combined with other theoretical frameworks. The principal recommendation is to apply behavioral theory to enhance scholarly understanding of how family organizations define their aspiration levels and respond to organizational problems.
Abstract:
Data sources are often dispersed geographically in real-life applications. Finding a knowledge model may require joining all the data sources and running a machine learning algorithm on the joint set. We present an alternative based on a Multi-Agent System (MAS): an agent mines one data source in order to extract a local theory (knowledge model) and then merges it with the previous MAS theory using a knowledge fusion technique. In this way, we obtain a global theory that summarizes the distributed knowledge without spending resources and time joining data sources. New experiments have been executed, including statistical significance analysis. The results show that, as a result of knowledge fusion, the accuracy of the initial theories, as well as that of the monolithic solution, is significantly improved.
Abstract:
This paper outlines a formal and systematic approach to explication of the role of structure in information organization. It presents a preliminary set of constructs that are useful for understanding the similarities and differences that obtain across information organization systems. This work seeks to provide necessary groundwork for development of a theory of structure that can serve as a lens through which to observe patterns across systems of information organization.
Abstract:
Ethos is the spirit that motivates ideas and practices. When we talk casually about the ethos of a town, state, or country, we are describing the fundamental, or at least underlying, rationale for action as we see it. Ideology is a way of looking at things. It is the set of ideas that constitute one's goals, expectations, and actions. In this brief essay I want to create a space where we might talk about ethos and ideology in knowledge organization from a particular point of view, combining ideas and inspiration from the Arts and Crafts movement of the early twentieth century, critical theory in extant knowledge organization work, the work of Slavoj Žižek, and the work of Thich Nhat Hanh on Engaged Buddhism. I will expand more below, but we can say here and now that there are many open questions about ethos and ideology in and of knowledge organization, both its practice and its products. Many of them in classification, positioned as they are around the identity politics of race, gender, and other marginalized groups, ask the classificationist to be mindful of the choice of terms and of the relationships between terms. From this work we understand that race and gender require special consideration, which manifests as a particular concern for the form of representation inside extant schemes. Even with these advances in our understanding, there are still other categories about which we must make decisions and take action. For example, there are ethical decisions about fiduciary resource allocation, political decisions about standards adoption, and even broader zeitgeist considerations like the question of Fordist conceptions (Day, 2001; Tennis, 2006) of the mechanics of description and representation present in much of today's practice. Just as taking action in a particular way is an ethical concern, so too is avoiding a lack of action. Scholars in knowledge organization have also looked at the absence of what we might call right action in the context of cataloguing and classification. This leads to some of the problems above, and hints at larger ethical concerns of watching a subtle semantic violence go on without intervention (Bowker and Star, 2001; Bade, 2006). The problem is not whether to act or not act, but how to act or not act in an ethical way, or at least with ethical considerations. The action advocated by an ethical consideration for knowledge organization is an engaged one, and it is here where we can take a nod from the contemporary ethical theory advanced by Engaged Buddhism. In this context we can see the manifestation of fourteen precepts that guide ethical action and warn against a lack of action.
Abstract:
The objective of this project is to review the basic concepts concerning the relationships that leaders create with their collaborators within organizations; these relationships and bonds can positively or negatively affect the performance of their daily activities within an organization. To begin the investigation, the first step was to study the concept of transformational leadership, the concept of psychological capital, and the components that make up this factor. The development of the research focused on transformational leadership and self-efficacy, since these are key factors in the development of organizational activities: they clearly affect companies' human capital and are directly related to their growth. This led us to ask: what relationship do transformational leadership and self-efficacy have with company productivity? Herein lies the importance of this research, since a shift in organizational thinking toward transformational leadership could maximize the performance of the workforce in relation to the company's objective. We conclude that there is indeed a positive effect on individuals who develop psychological capital, specifically in the self-efficacy factor, in achieving outstanding, productive, and efficient performance within organizations.