928 results for Topologies on an arbitrary set


Relevance: 100.00%

Abstract:

For an arbitrary associative unital ring $R$, let $J_1$ and $J_2$ be the following noncommutative, birational, partly defined involutions on the set $M_3(R)$ of $3 \times 3$ matrices over $R$: $J_1(M) = M^{-1}$ (the usual matrix inverse) and $J_2(M)_{jk} = (M_{kj})^{-1}$ (the transpose of the Hadamard inverse).

We prove the surprising conjecture by Kontsevich that $(J_2 \circ J_1)^3$ is the identity map modulo the $\mathrm{Diag}_L \times \mathrm{Diag}_R$ action $(D_1, D_2)(M) = D_1^{-1} M D_2$ of pairs of invertible diagonal matrices. That is, we show that, for each $M$ in the domain where $(J_2 \circ J_1)^3$ is defined, there are invertible diagonal $3 \times 3$ matrices $D_1 = D_1(M)$ and $D_2 = D_2(M)$ such that $(J_2 \circ J_1)^3(M) = D_1^{-1} M D_2$.
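As a quick numerical sanity check of this statement over the reals (a commutative special case of the ring $R$ above), the sketch below applies the two involutions to a generic $3 \times 3$ matrix and tests that the entrywise ratio of $(J_2 \circ J_1)^3(M)$ to $M$ has the rank-one multiplicative form implied by $D_1^{-1} M D_2$. It assumes numpy and is not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def J1(M):
    # usual matrix inverse
    return np.linalg.inv(M)

def J2(M):
    # transpose of the Hadamard (entrywise) inverse: J2(M)[j, k] = 1 / M[k, j]
    return 1.0 / M.T

M = rng.uniform(0.5, 2.0, size=(3, 3))   # generic matrix: invertible, no zero entries
P = M.copy()
for _ in range(3):                        # apply (J2 o J1) three times
    P = J2(J1(P))

# If P = D1^{-1} @ M @ D2 with D1, D2 diagonal, then R = P / M factors as
# R[j, k] = a[j] * b[k], so R[j, k] * R[0, 0] == R[j, 0] * R[0, k] for all j, k.
R = P / M
print("periodicity modulo the diagonal action holds numerically:",
      np.allclose(R * R[0, 0], np.outer(R[:, 0], R[0, :])))
```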

Relevance: 100.00%

Abstract:

This article expands on an earlier concept of horror autotoxicus linked to digital contagions of spam and network virality. It aims to present, as such, a broader conception of cosmic topologies of imitation (CTI) intended to better grasp the relatively new practices of social media marketing. Similar to digital autotoxicity, CTI provide the perfect medium for sharing while also spreading contagions that can potentially contaminate the medium itself. However, whereas digital contagions are perhaps limited to the toxicity of a technical layer of information viruses, the contagions of CTI are an all-pervasive autotoxicity which can infect human bodies and technologies increasingly in concert with each other. This is an exceptional autotoxicus that significantly blurs the immunological line of exemption between self and nonself and, potentially, the anthropomorphic distinction between individual self and collective others.

Relevance: 100.00%

Abstract:

We examined how international food price shocks have affected local inflation processes in Brazil, Chile, Colombia, Mexico, and Peru over the past decade. Using impulse-response analysis from cointegrated VARs, we find that international food inflation shocks take from one to six quarters to pass through to domestic headline inflation, depending on the country. In addition, by calculating the elasticity of local prices to an international food price shock, we find that this pass-through is not complete. We also take a closer look at how this type of shock affects local food and core prices separately, and assess the possibility of second-round effects on core inflation stemming from the shock. We find that a transmission to headline prices does occur, and that part of the transmission is associated with rising core prices, both directly and through possible second-round effects, which implies a role for monetary policy when such a shock takes place. This is especially relevant given that international food prices have recently been on an upward trend after falling considerably during the Great Recession.
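The pass-through exercise described above can be approximated with standard tools. The sketch below is a simplified stand-in that fits a plain (non-cointegrated) VAR rather than the cointegrated VARs used in the study, and reads off the response of headline inflation to an international food price shock; the series, column names, and horizon are hypothetical, and statsmodels is assumed.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical quarterly series (stand-ins for real log price changes):
# an international food price index and domestic headline / core inflation.
rng = np.random.default_rng(0)
n = 80
intl_food = rng.normal(0, 1.0, n)
headline = 0.3 * np.roll(intl_food, 1) + rng.normal(0, 0.5, n)
core = 0.1 * np.roll(intl_food, 2) + rng.normal(0, 0.3, n)
df = pd.DataFrame({"intl_food": intl_food, "headline": headline, "core": core})

res = VAR(df).fit(maxlags=4, ic="aic")    # lag order chosen by AIC
irf = res.irf(8)                          # impulse responses over 8 quarters

# Rough pass-through after h quarters: cumulative response of headline inflation
# to an intl_food shock, relative to the shock's cumulative own response.
h = 6
names = list(df.columns)
i, j = names.index("headline"), names.index("intl_food")
pass_through = irf.cum_effects[h][i, j] / irf.cum_effects[h][j, j]
print(f"approximate pass-through after {h} quarters: {pass_through:.2f}")
# irf.plot(impulse="intl_food", response="headline") would draw the response path.
```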

Relevance: 100.00%

Abstract:

Background: In the field of plastic and reconstructive surgery, new innovative matrices for skin repair are urgently needed. The ideal biomaterial should promote attachment, proliferation and growth of cells. Additionally, it should degrade within an appropriate time period without releasing harmful substances, and it should not provoke a pathological immune response. Spider dragline silk from Nephila spp. meets these demands to a large extent. Methodology/Principal Findings: Native spider dragline silk, harvested directly from Nephila spp. spiders, was woven on steel frames. Constructs were sterilized and seeded with fibroblasts. After two weeks of cultivating single fibroblasts, keratinocytes were added to generate a bilayered skin model consisting of dermis and epidermis equivalents. For the next three weeks, the co-cultured constructs were lifted onto a specially designed setup for air/liquid interface cultivation. After the culturing period, constructs were embedded in paraffin using a protocol developed specifically for spider silk to avoid supercontraction. Paraffin cross-sections were stained with Haematoxylin & Eosin (H&E) for microscopic analysis. Conclusion/Significance: Native spider dragline silk woven on steel frames provides a suitable matrix for three-dimensional skin cell culture. Both fibroblast and keratinocyte cell lines adhere to the spider silk fibres and proliferate. Guided by the spider silk fibres, they sprout into the meshes and reach confluence in at most one week. A well-balanced, bilayered co-culture in two continuously separated strata can be achieved by serum reduction, changing the medium conditions and adjusting the cultivation period at the air/liquid interface. Spider silk therefore appears to be a promising biomaterial for the enhancement of skin regeneration.

Relevance: 100.00%

Abstract:

Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention for solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used to search a mapped solution space and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, normally a long sequence of moves is needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not completed yet, and thus has the ability to finish a schedule by using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule will be constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which will replace previous strings based on fitness selection. If the stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning can amount to 'counting' in the case of multinomial distributions.
In the LCS approach, each rule has a strength that indicates its current usefulness in the system, and this strength is constantly reassessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following steps. The initialization step assigns each rule at each stage a constant initial strength. Rules are then selected by using the roulette wheel strategy. The next step reinforces the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning in scheduling algorithms and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation. References 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in print). 2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
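To make the LCS steps above concrete, here is a minimal sketch of one build-and-reinforce cycle: each rule carries a per-stage strength, a rule string is drawn by roulette-wheel selection, and the strengths of the rules used in the best solution found so far are reinforced while unused rules are left unchanged. The problem size, fitness function, and learning rate are hypothetical placeholders rather than the authors' implementation.

```python
import random

N_STAGES, N_RULES = 10, 4          # hypothetical: 10 construction steps, 4 rules each
strengths = [[1.0] * N_RULES for _ in range(N_STAGES)]   # constant initial strengths

def roulette(weights):
    """Pick an index with probability proportional to its weight."""
    r = random.uniform(0, sum(weights))
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def build_rule_string():
    """Choose one construction rule per stage by roulette-wheel selection."""
    return [roulette(strengths[s]) for s in range(N_STAGES)]

def fitness(rule_string):
    # Placeholder objective: a real system would decode the rule string into a
    # schedule and evaluate its cost and feasibility.
    return -sum(rule_string)

best, best_fit = None, float("-inf")
for _ in range(100):
    rs = build_rule_string()
    f = fitness(rs)
    if f > best_fit:
        best, best_fit = rs, f
    # Reinforce the rules used in the best solution so far; unused rules unchanged.
    for stage, rule in enumerate(best):
        strengths[stage][rule] += 0.1
```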

Relevance: 100.00%

Abstract:

Schedules can be built much as a human scheduler would build them, by using a set of rules that involve domain knowledge. This paper presents an Estimation of Distribution Algorithm (EDA) for the nurse scheduling problem, which involves choosing a suitable scheduling rule from a set for the assignment of each nurse. Unlike previous work that used Genetic Algorithms (GAs) to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The EDA implements such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance of each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Another set of rule strings is generated in this way, some of which will replace previous strings based on fitness selection. If the stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
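As a rough illustration of the estimate-and-sample loop described above, the sketch below learns per-position rule frequencies (a univariate simplification of the paper's Bayesian network) from the more promising half of each population and samples the next set of rule strings from them; the problem size, fitness function and truncation scheme are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
N_NURSES, N_RULES, POP = 20, 4, 50     # hypothetical sizes: one rule chosen per nurse

def fitness(rule_string):
    # Placeholder: a real EDA would decode the string into a roster and score it.
    return -np.abs(rule_string - 2).sum()

# Start from uniform rule probabilities at every position.
probs = np.full((N_NURSES, N_RULES), 1.0 / N_RULES)

for generation in range(30):
    # Sample a population of rule strings from the current distribution.
    pop = np.array([[rng.choice(N_RULES, p=probs[i]) for i in range(N_NURSES)]
                    for _ in range(POP)])
    scores = np.array([fitness(ind) for ind in pop])

    # Keep the more promising half and re-estimate per-position probabilities.
    promising = pop[np.argsort(scores)[-POP // 2:]]
    for i in range(N_NURSES):
        counts = np.bincount(promising[:, i], minlength=N_RULES) + 1  # Laplace smoothing
        probs[i] = counts / counts.sum()

print("most likely rule per nurse:", probs.argmax(axis=1))
```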

Relevance: 100.00%

Abstract:

The growing availability and popularity of opinion-rich online resources, such as review sites and personal blogs, has made it convenient to find out about the opinions and experiences of ordinary people. At the same time, this huge eruption of data has made it difficult to reach a conclusion. In this thesis, I develop a novel recommendation system, Recomendr, that can help users digest all the reviews about an entity and compare candidate entities along ad-hoc dimensions specified by keywords. It expects keyword-specified ad-hoc dimensions/features as input from the user and, based on those features, compares the selected range of entities using reviews provided in the related User Generated Content (UGC), e.g. online reviews. It then rates the textual stream of data using a scoring function and returns a decision to the user based on the aggregate opinion. Evaluation of Recomendr using a data set in the laptop domain shows that it can effectively recommend the best laptop according to user-specified dimensions such as price. Recomendr is a general system that can potentially work for any entity for which online reviews or other opinionated text are available.
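A toy version of the scoring-and-aggregation step might look like the sketch below: for each candidate entity, review sentences that mention a user-supplied dimension keyword are scored with a tiny hand-made opinion lexicon and averaged. The lexicon, review data and ranking function are hypothetical stand-ins, not Recomendr's actual components.

```python
# Hypothetical mini opinion lexicon; Recomendr's real scoring function is more involved.
POSITIVE = {"great", "excellent", "fast", "cheap", "light"}
NEGATIVE = {"poor", "slow", "expensive", "heavy", "bad"}

def sentence_score(sentence):
    words = sentence.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def score_entity(reviews, dimension):
    """Average opinion score over review sentences that mention the dimension keyword."""
    relevant = [s for review in reviews
                for s in review.split(".") if dimension in s.lower()]
    if not relevant:
        return 0.0
    return sum(sentence_score(s) for s in relevant) / len(relevant)

def recommend(entities, dimension):
    """Rank candidate entities on one ad-hoc, keyword-specified dimension."""
    return max(entities, key=lambda name: score_entity(entities[name], dimension))

# Toy usage with made-up laptop reviews.
laptops = {
    "laptop_a": ["The price is great and the screen is excellent.", "Battery is poor."],
    "laptop_b": ["Quite expensive for the price.", "Performance is fast."],
}
print(recommend(laptops, "price"))
```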

Relevance: 100.00%

Abstract:

Master's dissertation—Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2015.

Relevance: 100.00%

Abstract:

In this contribution, a system identification procedure for a two-input Wiener model, suitable for analyzing the disturbance behavior of integrated nonlinear circuits, is presented. The identified block model comprises two linear dynamic blocks and one static nonlinear block, which are determined using a parameterized approach. To characterize the linear blocks, a correlation analysis using a white noise input in combination with a model reduction scheme is adopted. Once the linear blocks have been characterized, a linear set of equations is set up from the output spectrum under single-tone excitation at each input, whose solution gives the coefficients of the nonlinear block. With this data-based black-box approach, the distortion behavior of a nonlinear circuit under the influence of an interfering signal at an arbitrary input port can be determined. Such an interfering signal can be, for example, an electromagnetic interference signal that couples conductively into the port under consideration. © 2011 Author(s).
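A single-input simplification of this procedure can be sketched in a few lines: the linear block is estimated by cross-correlating a white-noise input with the output (for a Gaussian input, the static nonlinearity only rescales this estimate), and a polynomial least-squares fit then stands in for the paper's linear equations derived from single-tone output spectra. All signals and system parameters below are made up for illustration; numpy is assumed.

```python
import numpy as np

rng = np.random.default_rng(2)

# --- Simulate a single-input Wiener system: linear FIR block -> static cubic ---
h_true = np.array([0.5, 0.8, -0.3, 0.1])        # hypothetical linear dynamic block

def f_static(v):
    return v + 0.4 * v**3                        # hypothetical static nonlinearity

u = rng.standard_normal(20_000)                  # white-noise excitation
v = np.convolve(u, h_true)[: len(u)]             # hidden intermediate signal
y = f_static(v)                                  # measured output

# --- Step 1: correlation analysis for the linear block ---
# For a Gaussian white input, the input/output cross-correlation is proportional
# to the impulse response; the unknown gain is absorbed by the nonlinearity fit.
h_est = np.array([np.mean(u[: len(u) - k] * y[k:]) for k in range(len(h_true))])
print("normalized impulse response estimate:", np.round(h_est / h_est[0], 3))

# --- Step 2: fit the static nonlinearity ---
# A polynomial least-squares fit on the reconstructed intermediate signal replaces
# the paper's linear equations obtained from single-tone output spectra.
v_est = np.convolve(u, h_est)[: len(u)]
coeffs = np.polyfit(v_est, y, deg=3)
print("polynomial coefficients (deg 3..0):", np.round(coeffs, 3))
```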

Relevance: 100.00%

Abstract:

Let $S(M)$ be the ring of (continuous) semialgebraic functions on a semialgebraic set $M$ and $S^*(M)$ its subring of bounded semialgebraic functions. In this work we compute the size of the fibers of the spectral maps $\operatorname{Spec}(j)_1 : \operatorname{Spec}(S(N)) \to \operatorname{Spec}(S(M))$ and $\operatorname{Spec}(j)_2 : \operatorname{Spec}(S^*(N)) \to \operatorname{Spec}(S^*(M))$ induced by the inclusion $j : N \hookrightarrow M$ of a semialgebraic subset $N$ of $M$. The ring $S(M)$ can be understood as the localization of $S^*(M)$ at the multiplicative subset $W_M$ of those bounded semialgebraic functions on $M$ with empty zero set. This provides a natural inclusion $i_M : \operatorname{Spec}(S(M)) \hookrightarrow \operatorname{Spec}(S^*(M))$ that reduces both problems above to an analysis of the fibers of the spectral map $\operatorname{Spec}(j)_2 : \operatorname{Spec}(S^*(N)) \to \operatorname{Spec}(S^*(M))$. If we denote $Z := \operatorname{Cl}_{\operatorname{Spec}(S^*(M))}(M \setminus N)$, it holds that the restriction map $\operatorname{Spec}(j)_2| : \operatorname{Spec}(S^*(N)) \setminus \operatorname{Spec}(j)_2^{-1}(Z) \to \operatorname{Spec}(S^*(M)) \setminus Z$ is a homeomorphism. Our problem concentrates on the computation of the size of the fibers of $\operatorname{Spec}(j)_2$ at the points of $Z$. The size of the fibers of prime ideals "close" to the complement $Y := M \setminus N$ provides valuable information concerning how $N$ is immersed inside $M$. If $N$ is dense in $M$, the map $\operatorname{Spec}(j)_2$ is surjective and the generic fiber of a prime ideal $p \in Z$ contains infinitely many elements. However, finite fibers may also appear, and we provide a criterion to decide when the fiber $\operatorname{Spec}(j)_2^{-1}(p)$ is a finite set for $p \in Z$. If such is the case, our procedure allows us to compute the size $s$ of $\operatorname{Spec}(j)_2^{-1}(p)$. If in addition $N$ is locally compact and $M$ is pure dimensional, $s$ coincides with the number of minimal prime ideals contained in $p$. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Relevance: 100.00%

Abstract:

This thesis deals with a mathematical description of flow in a polymeric pipe and in a specific peristaltic pump. The study involves fluid-structure interaction analysis in the presence of complex turbulent flows, treated in an arbitrary Lagrangian-Eulerian (ALE) framework. The flow simulations are performed in COMSOL 4.4, as a 2D axisymmetric model, and in ABAQUS 6.14.1, as a 3D model with symmetric boundary conditions. In COMSOL, the fluid and structure problems are coupled by a monolithic algorithm, while the ABAQUS code links the ABAQUS CFD and ABAQUS Standard solvers with a single block-iterative partitioned algorithm. Given the turbulent features of the flow, the fluid model in both codes is described by RNG k-ϵ. The structural model is described, depending on the pipe material, by elastic models or hyperelastic Neo-Hookean models with Rayleigh damping properties. When describing the pulsatile fluid flow downstream of the pumping process, the available data are often insufficient to fully specify the fluid problem. Engineering measurements are normally able to provide only the average pressure or velocity at a cross-section. This problem has been analyzed since the 1950s, in the work of McDonald and Womersley, by Fourier analysis of the average pressure at a fixed cross-section, while nowadays sophisticated techniques including Finite Elements and Finite Volumes exist to study the flow. Finally, we set up peristaltic pipe simulations in the ABAQUS code, using the same models previously tested for the fluid and the structure.
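The Womersley-type treatment mentioned above starts from a Fourier decomposition of the measured cross-sectional pressure over one pumping cycle; a minimal sketch of that first step, with a made-up waveform and numpy assumed, is:

```python
import numpy as np

# Hypothetical periodic pressure trace sampled over one pumping cycle.
T = 1.0                                   # cycle period [s]
t = np.linspace(0.0, T, 256, endpoint=False)
p = 100 + 15 * np.sin(2 * np.pi * t / T) + 5 * np.sin(4 * np.pi * t / T + 0.6)

# Fourier coefficients: mean pressure plus the first few harmonics, which feed
# the Womersley solution harmonic by harmonic.
P = np.fft.rfft(p) / len(p)
mean_pressure = P[0].real
harmonics = [(k, 2 * abs(P[k]), np.angle(P[k])) for k in range(1, 4)]

print(f"mean pressure: {mean_pressure:.1f}")
for k, amp, phase in harmonics:
    print(f"harmonic {k}: amplitude {amp:.2f}, phase {phase:.2f} rad")
```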

Relevance: 100.00%

Abstract:

This dissertation verifies whether the following two hypotheses are true: (1) high-occupancy/toll lanes (and therefore other dedicated lanes) have capacity that could still be used; (2) such unused capacity (or, more precisely, "unused managed capacity") can be sold successfully through a real-time auction. To show that the second statement is true, this dissertation proposes an auction-based metering (ABM) system, that is, a mechanism that regulates the traffic entering the dedicated lanes. Participation in the auction is voluntary and can be skipped by paying the toll or by not registering with the new system. This dissertation comprises the following four components: a measurement of unused managed capacity on an existing HOT facility, a game-theoretic model of an ABM system, an operational description of the ABM system, and a simulation-based evaluation of the system. Other, more specific contributions of this dissertation include the following: (1) it provides a definition and a methodology for measuring unused managed capacity and another important variable referred to as "potential volume increase"; (2) it proves that the game-theoretic model has a unique Bayesian Nash equilibrium; (3) it provides a specific road design that can be applied or extended to other facilities. The results provide evidence that the hypotheses are true and suggest that the ABM system would benefit a public operator interested in significantly reducing traffic congestion, would benefit drivers making low-reliability trips (such as work-to-home trips), and would potentially benefit a private operator interested in raising revenue.
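As a toy illustration of the second hypothesis (selling unused managed capacity through a real-time auction), the sketch below meters a fixed number of free slots per interval with a generic uniform-price sealed-bid auction; this textbook mechanism is used only for illustration and is not the specific ABM design developed in the dissertation.

```python
def allocate_slots(bids, free_slots):
    """Admit the highest bidders up to the measured unused managed capacity.

    bids: mapping of vehicle id -> bid amount for entering the dedicated lane
    free_slots: unused managed capacity measured for the current interval
    Returns the admitted vehicles and the uniform clearing price
    (highest rejected bid, 0 if nobody is rejected).
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    admitted = [veh for veh, _ in ranked[:free_slots]]
    clearing_price = ranked[free_slots][1] if len(ranked) > free_slots else 0.0
    return admitted, clearing_price

# Example interval: 3 free slots, 5 bidders.
bids = {"car_1": 2.50, "car_2": 1.00, "car_3": 4.00, "car_4": 0.75, "car_5": 3.10}
print(allocate_slots(bids, free_slots=3))   # admits car_3, car_5, car_1 at price 1.00
```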