591 results for Monotone Iterations


Relevance:

10.00%

Publisher:

Abstract:

My thesis consists of three essays that investigate strategic interactions between individuals engaging in risky collective action in uncertain environments. The first essay analyzes a broad class of incomplete-information coordination games with a wide range of applications in economics and politics. The second essay draws on the general model developed in the first essay to study individuals' decisions about whether to engage in protest, revolution, coups, or strikes. The final essay explicitly integrates the state's response into the analysis.

The first essay, Coordination Games with Strategic Delegation of Pivotality, exhaustively analyzes a class of binary-action, two-player coordination games in which players receive stochastic payoffs only if both players take a "stochastic-coordination action". Players receive conditionally independent noisy private signals about the normally distributed stochastic payoffs. With this structure, each player can exploit the information contained in the other player's action only when he takes the "pivotalizing action". This feature has two consequences: (1) when the fear of miscoordination is not too large, each player takes the "pivotalizing action" more often than he would based solely on his private information, in order to utilize the other player's information; and (2) best responses feature both strategic complementarities and strategic substitutes, implying that the game is neither supermodular nor a typical global game. This class of games has applications to a wide range of economic and political phenomena, including war and peace, protest/revolution/coup/strike, interest-group lobbying, international trade, and the adoption of new technology.

My second essay, Collective Action with Uncertain Payoffs, studies the decision problem of citizens who must decide whether to submit to the status quo or mount a revolution. If they coordinate, they can overthrow the status quo; otherwise, the status quo is preserved and participants in a failed revolution are punished. Citizens face two types of uncertainty: (a) non-strategic, in that they are uncertain about the relative payoffs of the status quo and revolution; and (b) strategic, in that they are uncertain about each other's assessments of the relative payoffs. I draw on the existing literature and historical evidence to argue that uncertainty about the payoffs of the status quo and revolution is intrinsic to politics. Several counter-intuitive findings emerge. (1) Better communication between citizens can lower the likelihood of revolution; in fact, when the punishment for failed protest is not too harsh and citizens' private knowledge is accurate, further communication reduces incentives to revolt. (2) Increasing strategic uncertainty can increase the likelihood of revolution attempts, and even the likelihood of successful revolution; in particular, revolt may be more likely when citizens privately obtain information than when they receive information from a common media source. (3) Two dilemmas arise concerning the intensity and frequency of punishment (repression) and the frequency of protest. Punishment Dilemma 1: harsher punishments may increase the probability that punishment is carried out; that is, as the state increases the punishment for dissent, it may also have to punish more dissidents. Only when the punishment is sufficiently harsh does harsher punishment reduce the frequency of its application. Punishment Dilemma 1 leads to Punishment Dilemma 2: the frequencies of repression and protest can be positively or negatively correlated, depending on the intensity of repression.

My third essay, The Repression Puzzle, investigates the relationship between the intensity of grievances and the likelihood of repression. First, I observe that the occurrence of state repression is itself a puzzle: if repression is to succeed, dissidents should not rebel; if it is to fail, the state should concede in order to save the costs of unsuccessful repression. I then propose an explanation for this "repression puzzle" that hinges on information asymmetries between the state and dissidents about the costs of repression to the state, and hence about the likelihood of its application. I present a formal model that combines the insights of grievance-based and political-process theories to investigate the consequences of this information asymmetry for the dissidents' contentious actions and for the relationship between the magnitude of grievances (formulated here as the extent of inequality) and the likelihood of repression. The main contribution of the paper is to show that this relationship is non-monotone: as the magnitude of grievances increases, the likelihood of repression might decrease. I investigate the relationship between inequality and the likelihood of repression in all country-years from 1981 to 1999. To mitigate specification problems, I estimate the probability of repression using a generalized additive model with thin-plate splines (GAM-TPS). This technique allows for a flexible relationship between inequality, the proxy for the costs of repression and revolution (income per capita), and the likelihood of repression. The empirical evidence supports my prediction that the relationship between the magnitude of grievances and the likelihood of repression is non-monotone.
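For concreteness, a GAM-TPS specification of the kind described above can be sketched as follows (the abstract does not state the link function, so the logistic link here is an assumption):

$$\Pr(\text{repression}_{ct} = 1) \;=\; \Lambda\big(\beta_0 + f_1(\text{inequality}_{ct}) + f_2(\text{income}_{ct})\big),$$

where $c$ indexes countries, $t$ indexes years, $\Lambda$ is the logistic link, and $f_1$, $f_2$ are smooth functions represented by thin-plate splines, so that no monotone (or otherwise parametric) shape is imposed on the inequality-repression relationship.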

Relevance:

10.00%

Publisher:

Abstract:

A Bayesian optimisation algorithm for a nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. When a human scheduler works, he normally builds a schedule systematically, following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even before the scheduling process is completed, and thus has the ability to finish a schedule by using flexible, rather than fixed, rules. In this paper, we design a more human-like scheduling algorithm by using a Bayesian optimisation algorithm to implement explicit learning from past solutions. A nurse scheduling problem from a UK hospital is used for testing. Unlike our previous work, which used genetic algorithms to implement implicit learning [1], the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The Bayesian optimisation algorithm is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance of each variable is generated using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Sets of rule strings are generated in this way, some of which replace previous strings based on fitness. If the stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings.

For clarity, consider the following toy example of scheduling five nurses with two rules (1: random allocation; 2: allocate nurse to low-cost shifts). At the beginning of the search, the probabilities of choosing rule 1 or 2 for each nurse are equal, i.e. 50%. After a few iterations, due to selection pressure and reinforcement learning, two solution pathways emerge: because pure low-cost or pure random allocation produces low-quality solutions, either rule 1 is used for the first two or three nurses and rule 2 for the remainder, or vice versa. In essence, the Bayesian network learns "use rule 2 after using rule 1 two or three times", or vice versa. It should be noted that for ours and most other scheduling problems, the structure of the network model is known and all variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, and learning amounts to 'counting' in the case of multinomial distributions. For our problem, we use four rules: Random, Cheapest Cost, Best Cover, and Balance of Cost and Cover. In more detail, the steps of our Bayesian optimisation algorithm for nurse scheduling are:

1. Set t = 0, and generate an initial population P(0) at random.
2. Use roulette-wheel selection to choose a set of promising rule strings S(t) from P(t).
3. Compute the conditional probabilities of each node according to this set of promising solutions.
4. Assign each nurse using roulette-wheel selection based on the rules' conditional probabilities; a set of new rule strings O(t) is generated in this way.
5. Create a new population P(t+1) by replacing some rule strings in P(t) with O(t), and set t = t+1.
6. If the termination conditions are not met (we use 2000 generations), go to step 2.

Computational results from 52 real data instances demonstrate the success of this approach. They also suggest that the learning mechanism in the proposed approach might be suitable for other scheduling problems. Another direction for further research is to see whether there is a good constructing sequence for individual data instances, given a fixed nurse scheduling order. If so, the good patterns could be recognized and extracted as new domain knowledge. By using this extracted knowledge, we could assign specific rules to the corresponding nurses beforehand and schedule only the remaining nurses with all available rules, making it possible to reduce the solution space.

Acknowledgements: The work was funded by the UK Government's major funding agency, the Engineering and Physical Sciences Research Council (EPSRC), under grant GR/R92899/01.

References: [1] Aickelin U, "An Indirect Genetic Algorithm for Set Covering Problems", Journal of the Operational Research Society, 53(10): 1118-1126, 2002.
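To make the counting-and-sampling loop concrete, here is a minimal Python sketch. It makes several simplifying assumptions not fixed by the abstract: a chain-structured network (each nurse's rule conditioned on the previous nurse's, matching the "use rule 2 after rule 1" pattern above), truncation rather than roulette-wheel selection of the promising set, a toy fitness function standing in for the real schedule-cost evaluation, and 200 rather than 2000 generations.

    import random
    from collections import defaultdict

    RULES = range(4)   # Random, Cheapest Cost, Best Cover, Balance of Cost and Cover
    N_NURSES = 5
    POP_SIZE = 40
    N_PROMISING = 10

    def fitness(rule_string):
        """Toy placeholder: the real algorithm builds a schedule from the
        rule string and scores its cost/coverage. Here we just prefer rule 1."""
        return -sum(abs(r - 1) for r in rule_string)

    def roulette(weights):
        """Roulette-wheel selection over non-negative weights."""
        pick, acc = random.uniform(0, sum(weights)), 0.0
        for i, w in enumerate(weights):
            acc += w
            if pick <= acc:
                return i
        return len(weights) - 1

    def learn(promising):
        """The 'counting' step: multinomial estimates, with Laplace smoothing,
        of P(rule of nurse 0) and P(rule of nurse i | rule of nurse i-1)."""
        first = [1.0] * len(RULES)
        cond = [defaultdict(lambda: [1.0] * len(RULES)) for _ in range(N_NURSES - 1)]
        for s in promising:
            first[s[0]] += 1
            for i in range(1, N_NURSES):
                cond[i - 1][s[i - 1]][s[i]] += 1
        return first, cond

    def sample(first, cond):
        """Generate a new rule string nurse-by-nurse from the learned model."""
        s = [roulette(first)]
        for i in range(1, N_NURSES):
            s.append(roulette(cond[i - 1][s[i - 1]]))
        return s

    pop = [[random.choice(RULES) for _ in range(N_NURSES)] for _ in range(POP_SIZE)]
    for t in range(200):
        pop.sort(key=fitness, reverse=True)
        first, cond = learn(pop[:N_PROMISING])              # promising set S(t)
        offspring = [sample(first, cond) for _ in range(POP_SIZE // 2)]
        pop = pop[:POP_SIZE - len(offspring)] + offspring   # replacement step
    print(max(pop, key=fitness))

On the toy fitness above, the learned conditionals quickly concentrate on rule 1, illustrating how 'counting' promising strings steers sampling toward good building blocks.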

Relevance:

10.00%

Publisher:

Abstract:

This thesis proposes a generic visual perception architecture for robotic clothes perception and manipulation. The proposed architecture is fully integrated with a stereo vision system and a dual-arm robot, and is able to perform a number of autonomous laundering tasks. Clothes perception and manipulation is a novel research topic in robotics and has experienced rapid development in recent years. Compared to perceiving and manipulating rigid objects, clothes perception and manipulation poses a greater challenge, for two reasons: firstly, deformable clothing requires precise (high-acuity) visual perception and dexterous manipulation; secondly, as clothing approximates a non-rigid 2-manifold in 3-space that can adopt a quasi-infinite configuration space, the potential variability in the appearance of clothing items makes them difficult for a machine to understand, uniquely identify, and interact with. From an applications perspective, as part of the EU CloPeMa project, the integrated visual perception architecture refines a pre-existing clothing manipulation pipeline by completing pre-wash clothes (category) sorting (using single-shot or interactive perception for garment categorisation and manipulation) and post-wash dual-arm flattening. To the best of the author's knowledge, the autonomous clothing perception and manipulation solutions investigated in this thesis were first proposed and reported by the author. All of the robot demonstrations reported in this work follow a perception-manipulation methodology in which visual and tactile feedback (in the form of surface wrinkledness captured by the high-accuracy depth sensor, i.e. the CloPeMa stereo head, or the predictive confidence modelled by Gaussian Processes) serves as the halting criterion in the flattening and sorting tasks, respectively. From a scientific perspective, the proposed visual perception architecture addresses the above challenges by parsing and grouping 3D clothing configurations hierarchically, from low-level curvatures, through mid-level surface shape representations (providing topological descriptions and 3D texture representations), to high-level semantic structures and statistical descriptions. A range of visual features, such as the Shape Index, Surface Topology Analysis and Local Binary Patterns, have been adapted within this work to parse clothing surfaces and textures, and several novel features have been devised, including B-Spline Patches with Locality-Constrained Linear Coding, and the Topology Spatial Distance, to describe and quantify generic landmarks (wrinkles and folds). The essence of the proposed architecture is 3D generic surface parsing and interpretation, which is critical to underpinning a number of laundering tasks and has the potential to be extended to other rigid and non-rigid object perception and manipulation tasks.
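As an illustration of the lowest level of this hierarchy, the Shape Index is a standard per-point curvature descriptor (Koenderink and van Doorn), not something specific to this thesis; a minimal sketch of computing it from a depth map, treating the surface as a Monge patch z = f(x, y), might look like this:

    import numpy as np

    def shape_index(depth):
        """Per-pixel Shape Index of a depth map, in [-1, 1].

        Curvatures come from finite-difference derivatives of the Monge
        patch z = f(x, y); the index classifies local shape on a scale
        from cup (-1) through saddle (0) to cap (+1).
        """
        fy, fx = np.gradient(depth)          # first derivatives
        fyy, fyx = np.gradient(fy)           # second derivatives
        fxy, fxx = np.gradient(fx)
        g = 1.0 + fx**2 + fy**2
        # mean (H) and Gaussian (K) curvature of a Monge patch
        H = ((1 + fx**2) * fyy - 2 * fx * fy * fxy + (1 + fy**2) * fxx) / (2 * g**1.5)
        K = (fxx * fyy - fxy**2) / g**2
        disc = np.sqrt(np.maximum(H**2 - K, 0.0))   # clamp numerical noise
        k1, k2 = H + disc, H - disc                 # principal curvatures, k1 >= k2
        denom = np.where(np.abs(k1 - k2) < 1e-12, 1e-12, k1 - k2)
        return (2.0 / np.pi) * np.arctan((k1 + k2) / denom)

In an architecture like the one above, maps of this kind feed the mid-level topological grouping; wrinkle-like ridge points, for instance, concentrate near particular Shape Index values.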
The experimental results presented in this thesis demonstrate that: firstly, the proposed grasping approach achieves 84.7% accuracy on average; secondly, the proposed flattening approach is able to flatten towels, t-shirts and pants (shorts) within 9 iterations on average; thirdly, the proposed clothes recognition pipeline can recognise clothes categories from highly wrinkled configurations and advances the state of the art by 36% in terms of classification accuracy, achieving an 83.2% true-positive classification rate when discriminating between five categories of clothes; finally, the Gaussian Process based interactive perception approach exhibits a substantial improvement over single-shot perception. Accordingly, this thesis has advanced the state of the art of robot clothes perception and manipulation.

Relevance:

10.00%

Publisher:

Abstract:

This paper focuses on a variation of the Art Gallery problem that considers open edge guards and open mobile guards. A mobile guard can be placed on an edge or a diagonal of a polygon, and the 'open' prefix means that the endpoints of such an edge or diagonal are not taken into account for visibility purposes. This paper studies the number of guards that are sufficient, and sometimes necessary, to guard several classes of simple polygons, for both open edge guards and open mobile guards. A wide range of polygons is studied, including orthogonal polygons with and without holes, spirals, orthogonal spirals and monotone polygons. Moreover, the problem is also considered for planar triangulation graphs using open edge guards.

Relevance:

10.00%

Publisher:

Abstract:

We study competitive market outcomes in economies where agents have other-regarding preferences. We identify a separability condition on monotone preferences that is necessary and sufficient for one's own demand to be independent of the allocations and characteristics of other agents in the economy. Given separability, it is impossible to identify other-regarding preferences from market behavior: in competitive equilibrium, agents behave as if they had classical preferences that depend only on own consumption. If preferences, in addition, depend only on the final allocation of consumption in society, the Second Welfare Theorem holds as long as an increase in resources can be distributed such that all agents are better off. Nevertheless, the First Welfare Theorem generally does not hold. Allowing agents to care about their own consumption and the distribution of consumption possibilities in the economy, we provide a condition under which agents have no incentive to make direct transfers, and show that this condition implies that competitive equilibria are efficient given prices.
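One standard way to formalize a separability condition of this kind (a sketch of the general shape, not necessarily the paper's exact statement): agent $i$'s preferences over allocations $x = (x_i, x_{-i})$ admit a representation

$$u_i(x_i, x_{-i}) = W_i\big(v_i(x_i),\, x_{-i}\big),$$

with $W_i$ strictly increasing in its first argument. Maximizing $u_i$ over a budget set that constrains only $x_i$ then reduces to maximizing the "internal utility" $v_i(x_i)$, so demand coincides with that of a classical consumer with utility $v_i$, independently of $x_{-i}$.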

Relevance:

10.00%

Publisher:

Abstract:

We find approximations to travelling breather solutions of the one-dimensional Fermi-Pasta-Ulam (FPU) lattice. Both bright breather and dark breather solutions are found. We find that the existence of localised (bright) solutions depends upon the coefficients of the cubic and quartic terms of the potential energy, generalising an earlier inequality derived by James [C. R. Acad. Sci. Paris 332, 581 (2001)]. We use the method of multiple scales to reduce the equations of motion for the lattice to a nonlinear Schrödinger equation at leading order, and hence construct an asymptotic form for the breather. We show that in the absence of a cubic potential energy term, the lattice supports combined breathing-kink waveforms. The amplitude of breathing-kinks can be arbitrarily small, as opposed to traditional monotone kinks, which have a nonzero minimum amplitude in such systems. We also present numerical simulations of the lattice, verifying the shape and velocity of the travelling waveforms, and confirming the long-lived nature of all such modes.
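For orientation, the setting can be sketched in standard FPU notation (the particular normalisation of the coefficients is an assumption for illustration). With an interaction potential containing cubic and quartic terms,

$$V(r) = \tfrac{1}{2} r^2 + \tfrac{a}{3} r^3 + \tfrac{b}{4} r^4,$$

the lattice equations of motion are

$$\ddot{u}_n = V'(u_{n+1} - u_n) - V'(u_n - u_{n-1}),$$

and the multiple-scales reduction yields, at leading order, a nonlinear Schrödinger equation for the slowly varying envelope $\psi(\xi, \tau)$ of a carrier wave,

$$i\,\psi_\tau + P\,\psi_{\xi\xi} + Q\,|\psi|^2\psi = 0,$$

where $P$ and $Q$ depend on the carrier wavenumber and on $a$ and $b$. Bright (localised) envelope solitons require $PQ > 0$ and dark solitons $PQ < 0$, which is how the cubic and quartic coefficients enter the existence condition generalising James's inequality.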

Relevance:

10.00%

Publisher:

Abstract:

We extend previous papers in the literature concerning the homogenization of Robin type boundary conditions for quasilinear equations, in the case of microscopic obstacles of critical size: here we consider nonlinear boundary conditions involving some maximal monotone graphs which may correspond to discontinuous or non-Lipschitz functions arising in some catalysis problems.
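For reference, the notion used here is the standard one: a (possibly multivalued) graph $\sigma \subseteq \mathbb{R} \times \mathbb{R}$ is monotone if

$$(y_1 - y_2)(x_1 - x_2) \ge 0 \qquad \text{for all } (x_1, y_1), (x_2, y_2) \in \sigma,$$

and maximal monotone if it is not properly contained in any other monotone graph. The maximal monotone extension of the Heaviside function, with $\sigma(0) = [0, 1]$, is a typical example of the kind of discontinuous reaction law arising in the catalysis problems mentioned above.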

Relevance:

10.00%

Publisher:

Abstract:

This thesis deals with the analysis of the shape of 2D objects. In computer vision there are numerous visual aspects from which information can be extracted, and one of the most widely used is the shape, or contour, of objects. With suitable processing, this visual characteristic allows us to extract information from objects, analyse scenes, and so on. However, the contour or silhouette of an object contains redundant information. This excess data, which contributes no new knowledge, should be eliminated, in order to speed up subsequent processing or to minimise the size of the contour representation for storage or transmission. This data reduction must be carried out without losing information that is important for representing the original contour. A reduced version of a contour can be obtained by removing intermediate points and joining the remaining points with segments. This reduced representation of a contour is known as a polygonal approximation. Polygonal approximations of contours therefore constitute a compressed version of the original information, and their main use is to reduce the volume of information needed to represent an object's contour. In recent years, however, these approximations have also been used for object recognition, with polygonal approximation algorithms applied directly to extract the feature vectors employed in the learning phase. The contributions of this thesis therefore centre on several aspects of polygonal approximation. In the first contribution, several polygonal approximation algorithms are improved by means of a preprocessing stage that accelerates them, even allowing better-quality solutions to be obtained in less time. In the second contribution, a new polygonal approximation algorithm is proposed that obtains optimal solutions in less time than the other methods in the literature. In the third contribution, an approximation algorithm is proposed that is able to obtain the optimal solution in a few iterations in most cases. Finally, an improved version of the optimal polygonal approximation algorithm is proposed, which solves an alternative optimisation problem.
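The thesis's own optimal algorithms are not reproduced here, but as a point of reference, the classic Ramer-Douglas-Peucker heuristic illustrates what a polygonal approximation computes: given a tolerance, it discards intermediate contour points whose deviation from the approximating segments is small. A minimal Python sketch:

    import math

    def _perp_dist(p, a, b):
        """Perpendicular distance from point p to the line through a and b."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == dy == 0:
            return math.hypot(px - ax, py - ay)
        return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

    def rdp(points, eps):
        """Ramer-Douglas-Peucker polygonal approximation of an open curve."""
        if len(points) < 3:
            return list(points)
        # find the point farthest from the chord joining the endpoints
        idx, dmax = 0, 0.0
        for i in range(1, len(points) - 1):
            d = _perp_dist(points[i], points[0], points[-1])
            if d > dmax:
                idx, dmax = i, d
        if dmax <= eps:                    # chord approximates well enough
            return [points[0], points[-1]]
        left = rdp(points[:idx + 1], eps)
        right = rdp(points[idx:], eps)
        return left[:-1] + right           # avoid duplicating the split point

For a closed contour, one first splits at two extremal points, since the endpoints of the chord are kept fixed. Optimal methods, by contrast, fix the number of vertices (or an error budget) and search for the approximation minimising the error, which is the harder problem the thesis addresses.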

Relevance:

10.00%

Publisher:

Abstract:

Visualization of vector fields plays an important role in research activities nowadays. Web applications allow fast, multi-platform and multi-device access to data, which results in the need for optimized applications that can run on both high-performance and low-performance devices. Point-trajectory calculation procedures usually perform repeated calculations, because several points may lie on the same trajectory. This paper presents a new methodology for calculating point trajectories over a highly dense, uniformly distributed grid of points, in which the trajectories are forced to lie on the points of the grid. Its advantages are a highly parallel computing architecture implementation and a reduction of the computational effort needed to calculate the stream paths, since unnecessary calculations are avoided by reusing data across iterations. As a case study, the visualization of oceanic currents on the web platform is presented and analyzed, using WebGL as both the parallel computing architecture and the rendering Application Programming Interface (API).
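A minimal CPU-side sketch of the grid-snapping idea (in Python rather than WebGL, with assumed array shapes): because every advection step lands on a grid node, each node has a single successor, so the whole field of trajectories reduces to following precomputed links, and any two trajectories that meet share all subsequent work.

    import numpy as np

    def successors(vx, vy, step=1.0):
        """For every node of an h-by-w grid, the flat index of the node one
        advection step downstream, snapped back onto the grid."""
        h, w = vx.shape
        ys, xs = np.mgrid[0:h, 0:w]
        nx = np.clip(np.rint(xs + step * vx), 0, w - 1).astype(np.int64)
        ny = np.clip(np.rint(ys + step * vy), 0, h - 1).astype(np.int64)
        return (ny * w + nx).ravel()

    def streamline(succ, start, max_len=200):
        """Trajectory seeded at `start`: just follow successor links; the
        per-node advection work was done once in successors(), no matter
        how many trajectories pass through a node."""
        path, seen = [start], {start}
        while len(path) < max_len:
            nxt = int(succ[path[-1]])
            if nxt in seen:          # reached a fixed point or a loop
                break
            path.append(nxt)
            seen.add(nxt)
        return path

    # Example: a circulating field on a 64x64 grid
    h = w = 64
    ys, xs = np.mgrid[0:h, 0:w]
    vx, vy = -(ys - h / 2) / 8.0, (xs - w / 2) / 8.0
    succ = successors(vx, vy)
    print(streamline(succ, start=10 * w + 10)[:8])

In the WebGL setting described above, the successor table plays the role of a texture computed once in parallel, with trajectory rendering reduced to lookups.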

Relevance:

10.00%

Publisher:

Abstract:

One of the most exciting discoveries in astrophysics of the last decade is the sheer diversity of planetary systems. These include "hot Jupiters", giant planets so close to their host stars that they orbit once every few days; "super-Earths", planets with sizes intermediate between those of Earth and Neptune, of which no analogs exist in our own solar system; multi-planet systems with planets ranging from smaller than Mars to larger than Jupiter; planets orbiting binary stars; free-floating planets flying through the emptiness of space without any star; and even planets orbiting pulsars. Despite these remarkable discoveries, the field is still young, and there are many areas about which precious little is known. In particular, we do not know which planets orbit the Sun-like stars nearest to our own solar system, and we know very little about the compositions of extrasolar planets. This thesis provides developments in those directions, through two instrumentation projects.

The first chapter of this thesis concerns detecting planets in the Solar neighborhood using precision stellar radial velocities, also known as the Doppler technique. We present an analysis determining the most efficient way to detect planets considering factors such as spectral type, wavelengths of observation, spectrograph resolution, observing time, and instrumental sensitivity. We show that G and K dwarfs observed at 400-600 nm are the best targets for surveys complete down to a given planet mass and out to a specified orbital period. Overall we find that M dwarfs observed at 700-800 nm are the best targets for habitable-zone planets, particularly when including the effects of systematic noise floors caused by instrumental imperfections. Somewhat surprisingly, we demonstrate that a modestly sized observatory, with a dedicated observing program, is up to the task of discovering such planets.
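For context (a standard two-body result, not derived in this thesis), the observable in the Doppler technique is the stellar reflex semi-amplitude

$$K = \left(\frac{2\pi G}{P}\right)^{1/3} \frac{M_p \sin i}{(M_\star + M_p)^{2/3}} \frac{1}{\sqrt{1 - e^2}},$$

for a planet of mass $M_p$ on an orbit of period $P$, eccentricity $e$ and inclination $i$ about a star of mass $M_\star$. For the Earth-Sun system, $K$ is roughly 9 cm/s, which sets the precision scale that such surveys ultimately aim for.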

We present just such an observatory in the second chapter, called the "MINiature Exoplanet Radial Velocity Array", or MINERVA. We describe the design, which uses a novel multi-aperture approach to increase stability and performance through lower system étendue, while keeping costs and time to deployment down. We present calculations of the expected planet yield, and data showing the system performance from our testing and development of the system on Caltech's campus. We also present the motivation, design, and performance of a fiber coupling system for the array, critical for efficiently and reliably bringing light from the telescopes to the spectrograph. We finish by presenting the current status of MINERVA, operational at the Mt. Hopkins observatory in Arizona.

The second part of this thesis concerns a very different method of planet detection, direct imaging, which involves discovery and characterization of planets by collecting and analyzing their light. Directly analyzing planetary light is the most promising way to study their atmospheres, formation histories, and compositions. Direct imaging is extremely challenging, as it requires a high performance adaptive optics system to unblur the point-spread function of the parent star through the atmosphere, a coronagraph to suppress stellar diffraction, and image post-processing to remove non-common path "speckle" aberrations that can overwhelm any planetary companions.

To this end, we present the "Stellar Double Coronagraph", or SDC, a flexible coronagraphic platform for use with the 200" Hale Telescope. It has two focal planes and two pupil planes, allowing for a number of different observing modes, including multiple vortex phase masks in series for improved contrast and inner working angle behind the obscured aperture of the telescope. We present the motivation, design, performance, and data reduction pipeline of the instrument. In the following chapter, we present some early science results, including the first image of a companion to the star delta Andromedae, which had been previously hypothesized but never seen.

A further chapter presents a wavefront control code developed for the instrument, using the technique of "speckle nulling," which can remove optical aberrations from the system using the deformable mirror of the adaptive optics system. This code allows for improved contrast and inner working angles, and was written in a modular style so as to be portable to other high contrast imaging platforms. We present its performance on optical, near-infrared, and thermal infrared instruments on the Palomar and Keck telescopes, showing how it can improve contrasts by a factor of a few in less than ten iterations.
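One speckle-nulling iteration can be sketched as follows; this is a hedged toy version of the standard technique, not the instrument's actual code. The hardware hooks (add_dm_sine, speckle_intensity) are hypothetical stand-ins, here simulated by a single complex field, and the amplitude-matching step is reduced to a fixed loop gain for brevity. The underlying idea is that probing one spatial frequency at several phases makes the speckle intensity vary sinusoidally with probe phase, so the correction can be applied in anti-phase.

    import numpy as np

    rng = np.random.default_rng(0)
    E_speckle = 0.8 * np.exp(1j * rng.uniform(0, 2 * np.pi))  # unknown aberration
    E_dm = 0.0 + 0.0j                                         # field added by the DM

    def add_dm_sine(kx, ky, amp, phase):
        """Toy stand-in for the DM interface: a sine wave on the mirror adds
        a complex field amp*exp(i*phase) at the matching focal-plane speckle
        (the spatial frequency is ignored in this one-speckle simulation)."""
        global E_dm
        E_dm += amp * np.exp(1j * phase)

    def speckle_intensity(kx, ky):
        """Toy stand-in for the camera: intensity at the speckle location."""
        return abs(E_speckle + E_dm) ** 2

    def null_speckle_once(kx, ky, probe_amp, gain=0.5, n_probes=4):
        """One speckle-nulling step at a single spatial frequency.

        With probes at N equally spaced phases, the measured intensity is
        I_k = |E_s|^2 + |E_p|^2 + 2|E_s||E_p|cos(phi_k - delta), so the
        first Fourier coefficient of I over phi recovers the speckle phase.
        """
        phases = np.linspace(0.0, 2 * np.pi, n_probes, endpoint=False)
        intens = np.empty(n_probes)
        for k, ph in enumerate(phases):
            add_dm_sine(kx, ky, probe_amp, ph)     # inject the probe
            intens[k] = speckle_intensity(kx, ky)
            add_dm_sine(kx, ky, -probe_amp, ph)    # remove it
        c = (2.0 / n_probes) * np.sum(intens * np.exp(-1j * phases))
        delta = -np.angle(c)                       # speckle phase vs. probe
        # drive the DM in anti-phase; gain < 1 keeps the loop stable and
        # lets later iterations mop up the residual (amplitude fit omitted)
        add_dm_sine(kx, ky, gain * probe_amp, delta + np.pi)

    for it in range(5):
        null_speckle_once(kx=8, ky=0, probe_amp=0.3)
        print(it, speckle_intensity(8, 0))

Repeating such steps over the brightest speckles, a few iterations at a time, is what yields the factor-of-a-few contrast gains quoted above.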

One of the large challenges in direct imaging is sensing and correcting the electric field in the focal plane, to remove scattered light that can be much brighter than any planets. In the last chapter, we present a new method of focal-plane wavefront sensing, combining a coronagraph with a simple phase-shifting interferometer. We present its design and implementation on the Stellar Double Coronagraph, demonstrating its ability to create regions of high contrast by measuring and correcting optical aberrations in the focal plane. Finally, we derive how the same hardware can be used to distinguish companions from speckle errors using the principles of optical coherence. We present results from observations of the brown dwarf HD 49197b, demonstrating the ability to detect it even though it is buried in the speckle noise floor. We believe this is the first detection of a substellar companion using the coherence properties of light.

Relevance:

10.00%

Publisher:

Abstract:

Fundamental movement skills (FMS) competence is low in adolescent girls. An assessment tool for teachers is needed to monitor FMS in this demographic. The present study explored whether the Canadian Agility and Movement Skill Assessment (CAMSA) is feasible for use by physical education (PE) teachers of Australian Year 7 girls in a school setting. Surveys, focus group interviews, and direct observation of 18 specialist PE teachers investigated teachers’ perceptions of this tool. Results indicated that the CAMSA was usable in a real-world school setting and was considered a promising means to assess FMS in Year 7 girls. However, future iterations may require minor logistical alterations and further training for teachers on how to utilize the assessment data to enhance teaching practice. These considerations could be used to improve future design, application, and training of the CAMSA in school-based PE.

Relevance:

10.00%

Publisher:

Abstract:

In his 1967 work, Presentation of Sacher-Masoch – Coldness and Cruelty (2007), Gilles Deleuze famously distinguishes the symptomatologies commonly designated by the names Masochism and Sadism, arguing that despite their shared feature of algolagnia, they are more rigorously approached as two very distinct regimes, having nothing to do with the ‘economy’ of the other. In the work’s preface, Deleuze also notes about Sacher-Masoch himself: ‘His whole oeuvre remains influenced by the problem of minorities, of nationalities and of revolutionary movements’ (2007: 9). Deleuze identifies that, within Masoch’s oeuvre, the masochist is he (normally a ‘he’) who insists on the contract. This insistence is neither to honour any particular contract or contracting per se, nor to safeguard himself within it, but to perform, through parodying it to its letter and pushing its operation towards its own limit, the inherent injustice that is its inexorable outcome. This article seeks to explore, using Masochistic ‘humouring’ or mockery of the contract as example, what might constitute a practice of intervention in regimes of power, and in which instances these iterations serve instead only as gestures of complicity with the injustices of the established logics. The article seeks to clarify, at the level of mechanism, a region of parody’s slippery operation, one which would determine the criteria for it to be intervention, as opposed to functioning as compliance and ‘bare repetition’ or ‘repetition of the Same’ (see Deleuze 2004: 27).

Relevance:

10.00%

Publisher:

Abstract:

We analyze directional monotonicity of several mixture functions in the direction (1, 1, …, 1), called weak monotonicity. Our particular focus is on power weighting functions and the special cases of Lehmer and Gini means. We establish limits on the number of arguments of these means for which they are weakly monotone. These bounds significantly improve on earlier results and hence increase the range of applicability of Gini and Lehmer means. We also discuss the case of affine weighting functions and find the smallest constant that ensures weak monotonicity of such mixture functions.
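The objects involved are standard (the definitions below are the usual ones, not quoted from the paper): a mixture function with weighting function $w: [0, \infty) \to [0, \infty)$ is

$$M_w(x_1, \dots, x_n) = \frac{\sum_{i=1}^{n} w(x_i)\, x_i}{\sum_{i=1}^{n} w(x_i)},$$

and $M_w$ is weakly monotone when $M_w(x_1 + c, \dots, x_n + c) \ge M_w(x_1, \dots, x_n)$ for every $c > 0$, i.e. it is monotone in the direction $(1, 1, \dots, 1)$. The power weighting function $w(t) = t^p$ gives the Lehmer mean

$$L_p(x_1, \dots, x_n) = \frac{\sum_{i=1}^{n} x_i^{p+1}}{\sum_{i=1}^{n} x_i^{p}},$$

which is generally not monotone in each argument separately, hence the interest in the weaker directional property.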

Relevance:

10.00%

Publisher:

Abstract:

Dirk de Bruyn is one of Australia's most successful and acclaimed abstract animators. His career spans a significant portion of the history of abstract and experimental animation in Australia, and his films are as addictive as they are bold and uncompromising examples of the genre. He displays a remarkable ability to learn the lessons gifted to us by earlier greats and yet produce a flowing, beautifully realised river of imagery that is all his own. MIAF's look at the various iterations of de Bruyn's work continues with this special one-off live performance, in which he will use three projectors to create an experience that blends a suite of moving image artwork drawn from his practice with an improvised soundtrack. Hosted by the VCA, this performance is FREE and will take place in the VCA's Founders Gallery at 234 St Kilda Rd, a short walk from ACMI.

Relevance:

10.00%

Publisher:

Abstract:

We consider a situation in which agents have mutual claims on each other, summarized in a liability matrix. Agents' assets might be insufficient to satisfy their liabilities, leading to defaults. In case of default, bankruptcy rules are used to specify the way agents are to be rationed. A clearing payment matrix is a payment matrix consistent with the prevailing bankruptcy rules that satisfies limited liability and priority of creditors. Since clearing payment matrices and the corresponding values of equity are not uniquely determined, we provide bounds on the levels that equity can take. Unlike the existing literature, which studies centralized clearing procedures, we introduce a large class of decentralized clearing processes. We show that any such process converges in finitely many iterations to the least clearing payment matrix. When the unit of account is sufficiently small, all decentralized clearing processes lead to essentially the same value of equity as a centralized clearing procedure. As a policy implication, it is not necessary to collect and process all the sensitive data of all the agents simultaneously and run a centralized clearing procedure.
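A minimal sketch of one such decentralized process, under assumptions not fixed by the abstract: a proportional bankruptcy rule, integer units of account, and agents updating in round-robin order. Starting from zero, payments only ever increase, which is why the process settles on the least clearing payment matrix.

    import numpy as np

    def decentralized_clearing(L, e, max_rounds=100_000):
        """Sketch of a decentralized clearing process.

        L[i, j]: liability of agent i to agent j (integer units).
        e[i]:    initial endowment of agent i (integer units).
        Each agent, in turn, pays out as much of its outstanding
        liabilities as its current holdings allow, split proportionally
        (rounded down) across outstanding claims; the process stops when
        no agent can pay any more.
        """
        n = len(e)
        paid = np.zeros_like(L)                   # cumulative payments
        for _ in range(max_rounds):
            progress = False
            for i in range(n):
                holdings = e[i] + paid[:, i].sum() - paid[i].sum()
                out = L[i] - paid[i]              # outstanding claims on i
                owing = out.sum()
                pay = min(holdings, owing)        # limited liability
                if pay <= 0:
                    continue
                share = pay * out // owing        # proportional, floored
                if share.sum() > 0:
                    paid[i] += share
                    progress = True
            if not progress:                      # no agent can move: done
                return paid
        return paid

    # Toy example: 0 owes 1, 1 owes 2, 2 owes 0; initial assets are scarce.
    L = np.array([[0, 10, 0],
                  [0, 0, 10],
                  [5, 0, 0]])
    e = np.array([2, 0, 1])
    print(decentralized_clearing(L, e))

Because each payment matrix entry is integer, bounded above by the liability matrix, and nondecreasing, the process terminates after finitely many productive rounds, mirroring the finite-convergence result stated above.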