876 results for dS vacua in string theory
Abstract:
Theories of visual search generally differentiate between bottom-up control and top-down control. Bottom-up control occurs when visual selection is determined by the stimulus properties in the search field. Top-down control takes place when observers are able to select those stimuli that are in line with their attentional sets. Pure stimulus-driven capture and contingent capture are currently the two main theories of attentional capture: the theory of pure capture emphasizes bottom-up control, whereas the theory of contingent capture emphasizes top-down control. Beyond these two theories, the perceptual load theory of attention provides a completely new perspective for explaining attentional capture. The aim of this study is to investigate the mechanism of attentional capture in visual search on the basis of the existing theories of attentional capture and the perceptual load theory of attention. Three questions were explored: the modulating role of perceptual load on attentional capture; the influence of search mode on attentional capture; and the influence of the spatial and temporal characteristics of stimuli on attentional capture. The results showed that: (1) Attentional capture was modulated by perceptual load whether load was manipulated by the number of stimuli or by the similarity of stimuli. (2) Search mode did influence attentional capture, but, more importantly, this influence was itself modulated by perceptual load. (3) The spatial characteristics of congruent and incongruent distractors influenced attentional capture; specifically, the further the distractor was from the target, the more interference it exerted on visual search. (4) The temporal characteristics of distractors influenced attentional capture; specifically, the pattern of results when distractors were presented after the search display was similar to that obtained when distractors were presented before the search display. In sum, the results indicate that attentional capture is controlled not only by bottom-up and top-down factors but is also modulated by the available attentional resources. These findings help to resolve the controversy over the mechanism of attentional capture, and potential applications of this research are discussed.
Abstract:
Wheeler, Nicholas. 'The Humanitarian Responsibilities of Sovereignty', in Humanitarian Intervention and International Relations (Oxford: Oxford University Press, 2003), pp. 29-51. RAE2008
Abstract:
Wydział Prawa i Administracji: Katedra Teorii i Filozofii Prawa (Faculty of Law and Administration: Department of Theory and Philosophy of Law)
Abstract:
The transition to becoming a leader is perhaps the least understood and most difficult in business. This Portfolio of Exploration examines the development of conscious awareness and meaning complexity as key transformational requirements for operating competently at leadership level and succeeding in a work environment characterised by change and complexity. It recognises that developing executive leadership capability is not just a matter of personality, of increasing what we know, or of expertise. It requires the development of complexity in terms of how we know ourselves, relate to others, construe leadership and organisation, solve business problems and understand the world as a whole. The exploration is grounded in the theory of adult mental development outlined by Robert Kegan (1982, 1994) and in his collaborations with Lisa Laskow Lahey (2001, 2009). The theory points to levels of consciousness which shape how we make meaning of, experience and respond to the world around us. Critically, it also points to transformational processes which enable us to evolve how we make meaning of our world as a means of closing the mismatch between the demands of this world and our ability to cope. The exploration is laid out in three stages. Using Kegan's (1982, 1994) theory as a framework, it begins with a reflection on my career to surface how I made meaning of banking, management and subsequently leadership. In stage two I engage with a range of source thinkers in the areas of leadership, decision making, business, organisation, growth and complexity in a transformational process of developing a more conscious and complex understanding of organisational leadership (also recognising ever-increasing complexity in the world). Finally, in stage three, I explore how the qualitative changes resulting from this transformational effort have benefitted my professional, leadership and organisational capabilities.
Abstract:
In this work we introduce a new mathematical tool for the optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, and routing is performed in the direction of this vector field at every location of the network. The magnitude of the vector field at every location represents the density of the data being routed through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With this formulation, we introduce mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatics. We show that, in order to minimize the cost, the routes should be found from the solution of these partial differential equations. In our formulation, the sensors are sources of information and are analogous to positive charges in electrostatics, the destinations are sinks of information and are analogous to negative charges, and the network is analogous to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient). In one application of our vector field model, we offer a scheme for energy-efficient routing. Our routing scheme is based on setting the permittivity coefficient to a higher value in the parts of the network where nodes have high residual energy and to a low value in the parts of the network where the nodes do not have much energy left. Our simulations show that our method gives a significant increase in network lifetime compared to the shortest path and weighted shortest path schemes. Our initial focus is on the case where there is only one destination in the network; later we extend our approach to the case of multiple destinations. With multiple destinations, we need to partition the network into several areas known as the regions of attraction of the destinations. Each destination is responsible for collecting all messages generated in its region of attraction. The difficulty of the optimization problem in this case is how to define the regions of attraction of the destinations and how much communication load to assign to each destination so as to optimize the performance of the network. We use our vector field model to solve the optimization problem for this case. We define a vector field which is conservative and can hence be written as the gradient of a scalar field (also known as a potential field). We then show that, in the optimal assignment of the communication load of the network to the destinations, the value of that potential field must be equal at the locations of all the destinations. Another application of our vector field model is finding the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations. Based on this fact, we suggest an algorithm to be applied during the design phase of a network to relocate the destinations so as to reduce the communication cost. The performance of our proposed schemes is confirmed by several examples and simulation experiments.
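A minimal sketch of the kind of formulation described above, with notation assumed for illustration (the thesis's own symbols may differ): write D(x) for the information flow field, rho(x) for the net source density (positive at sensors, negative at destinations) and epsilon(x) for the permittivity-like weight. The cost and the conservation constraint then take an electrostatics-like form,

\[
\min_{\mathbf{D}} \; J[\mathbf{D}] = \int_{A} \frac{|\mathbf{D}(\mathbf{x})|^{2}}{\epsilon(\mathbf{x})}\, d\mathbf{x}
\qquad \text{subject to} \qquad \nabla \cdot \mathbf{D}(\mathbf{x}) = \rho(\mathbf{x}),
\]

whose optimality condition, in this simplified setting, forces \( \mathbf{D}/\epsilon = -\nabla \phi \) for some scalar potential \( \phi \), so that \( \nabla \cdot \big(\epsilon\, \nabla \phi\big) = -\rho \), the analogue of Poisson's equation in a non-homogeneous dielectric. Under this reading, routing follows the direction of \( \mathbf{D} \) at every point, and raising \( \epsilon \) where residual energy is high draws more of the flow through those nodes.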
In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate into the network in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates and suggest two methods for determining the values of these quantities. The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to multiple simultaneous tests by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce tests of responsiveness for aggregates, which we call the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control. A distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. In the next step we modify CAPM to offer methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it, and observing the response of the aggregate. We offer two methods for conformance testing. In the first method, we apply the perturbation tests to SYN packets sent at the start of the TCP three-way handshake and use the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second method, we apply the perturbation tests to TCP data packets and use the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods, we use signature-based perturbations, which means packet drops are performed at a rate given by a function of time. We use the analogy between our problem and multiple-access communication to find suitable signatures. Specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of orthogonality, performance does not degrade because of cross-interference between simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.
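To make the signature idea concrete, the following is a minimal illustrative sketch (hypothetical names and parameters, not the thesis's implementation): mutually orthogonal +/-1 signatures are assigned to routers, each router perturbs the aggregate according to its signature over time slots, and per-router responsiveness is recovered by correlating the observed aggregate rate change with each signature, so that simultaneous tests do not interfere.

# Illustrative sketch of CDMA-style aggregate perturbation (hypothetical names
# and parameters, not the thesis's implementation).
import numpy as np

def walsh_signatures(n_routers, length):
    """Rows of a Sylvester Hadamard matrix give mutually orthogonal +/-1 signatures
    (exact orthogonality when length is a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < max(n_routers + 1, length):
        H = np.block([[H, H], [H, -H]])
    return H[1:n_routers + 1, :length]          # skip the all-ones row

def observed_rate_change(signatures, responsiveness, noise_std=0.05, seed=0):
    """Aggregate rate change over time slots: each router's perturbation scaled by
    how strongly the aggregate responds to drops at that router, plus noise."""
    rng = np.random.default_rng(seed)
    return responsiveness @ signatures + rng.normal(0.0, noise_std, signatures.shape[1])

def estimate_responsiveness(observed, signatures):
    """Correlate the observation with each signature; orthogonality separates
    routers that are testing the same aggregate simultaneously."""
    return signatures @ observed / signatures.shape[1]

if __name__ == "__main__":
    sigs = walsh_signatures(n_routers=4, length=64)
    true_resp = np.array([0.9, 0.1, 0.5, 0.0])   # hypothetical per-router values
    obs = observed_rate_change(sigs, true_resp)
    print(np.round(estimate_responsiveness(obs, sigs), 2))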
Abstract:
In judicial decision making, the doctrine of chances explicitly takes the odds into account. There is more to forensic statistics, as well as various probabilistic approaches, which taken together form the object of an enduring controversy in the scholarship of legal evidence. In this paper, we reconsider the circumstances of the Jama murder and inquiry (dealt with in Part I of this paper: "The Jama Model. On Legal Narratives and Interpretation Patterns"), to illustrate yet another kind of probability or improbability. What is improbable about the Jama story is actually a given, which contributes in terms of dramatic underlining. In literary theory, concepts of narratives being probable or improbable date to the eighteenth century, when both prescientific and scientific probability were infiltrating several domains, including law. An understanding of such a backdrop throughout the history of ideas is, I claim, necessary for AI researchers who may be tempted to apply statistical methods to legal evidence. The debate for or against probability (and especially Bayesian probability) in accounts of evidence has been flourishing among legal scholars. Nowadays both the Bayesians (e.g. Peter Tillers) and the Bayesio-skeptics (e.g. Ron Allen) among those legal scholars who are involved in the controversy are willing to give AI research a chance to prove itself and strive towards models of plausibility that would go beyond probability as narrowly meant. This debate within law, in turn, has illustrious precedents: take Voltaire, who was critical of the application of probability even to litigation in civil cases; or take Boole, who was a starry-eyed believer in probability applications to judicial decision making (Rosoni 1995). Not unlike Boole, the founding father of computing, computer scientists approaching the field nowadays may happen to do so without full awareness of the pitfalls. Hence the usefulness of the conceptual landscape I sketch here.
Abstract:
In judicial decision making, the doctrine of chances explicitly takes the odds into account. There is more to forensic statistics, as well as various probabilistic approaches, which taken together form the object of an enduring controversy in the scholarship of legal evidence. In this paper, I reconsider the circumstances of the Jama murder and inquiry (dealt with in Part I of this paper: 'The JAMA Model and Narrative Interpretation Patterns'), to illustrate yet another kind of probability or improbability. What is improbable about the Jama story is actually a given, which contributes in terms of dramatic underlining. In literary theory, concepts of narratives being probable or improbable date to the eighteenth century, when both prescientific and scientific probability were infiltrating several domains, including law. An understanding of such a backdrop throughout the history of ideas is, I claim, necessary for Artificial Intelligence (AI) researchers who may be tempted to apply statistical methods to legal evidence. The debate for or against probability (and especially Bayesian probability) in accounts of evidence has been flourishing among legal scholars; nowadays both the Bayesians (e.g. Peter Tillers) and the Bayesio-skeptics (e.g. Ron Allen), among those legal scholars who are involved in the controversy, are willing to give AI research a chance to prove itself and strive towards models of plausibility that would go beyond probability as narrowly meant. This debate within law, in turn, has illustrious precedents: take Voltaire, who was critical of the application of probability even to litigation in civil cases; or take Boole, who was a starry-eyed believer in probability applications to judicial decision making. Not unlike Boole, the founding father of computing, computer scientists approaching the field nowadays may happen to do so without full awareness of the pitfalls. Hence the usefulness of the conceptual landscape I sketch here.
Abstract:
A Feller–Reuter–Riley function is a Markov transition function whose corresponding semigroup maps the set of real-valued continuous functions vanishing at infinity into itself. The aim of this paper is to investigate applications of such functions in the dual problem, Markov branching processes, and the Williams-matrix. The remarkable property of a Feller–Reuter–Riley function is that it is a Feller minimal transition function with a stable q-matrix. By using this property we are able to prove that, in the theory of branching processes, the branching property is equivalent to the requirement that the corresponding transition function satisfies the Kolmogorov forward equations associated with a stable q-matrix. It follows that the probabilistic definition and the analytic definition for Markov branching processes are actually equivalent. Also, by using this property, together with the Resolvent Decomposition Theorem, a simple analytical proof of Williams' existence theorem with respect to the Williams-matrix is obtained. The close link between the dual problem and the Feller–Reuter–Riley transition functions is revealed. It enables us to prove that a dual transition function must satisfy the Kolmogorov forward equations. A necessary and sufficient condition for a dual transition function satisfying the Kolmogorov backward equations is also provided.
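For orientation, the forward equations referred to above take the following standard form (a sketch in generic notation, not the paper's own): for a transition function \(P(t) = (p_{ij}(t))\) with a stable q-matrix \(Q = (q_{ij})\), the Kolmogorov forward equations are

\[
P'(t) = P(t)\,Q, \qquad \text{i.e.} \qquad p_{ij}'(t) = \sum_{k} p_{ik}(t)\, q_{kj},
\]

and in one common parameterisation a Markov branching process has q-matrix entries \(q_{ij} = i\, b_{j-i+1}\) for \(j \ge i-1\) (and \(q_{ij}=0\) otherwise), where the \(b_k\) with \(k \ne 1\) are non-negative offspring rates and \(b_1 = -\sum_{k \ne 1} b_k\). The equivalence stated in the abstract is that the branching property of the process corresponds to its transition function satisfying these forward equations for a stable q-matrix of this form.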
Abstract:
In this paper, we critically examine a special class of graph matching algorithms that follow the approach of node-similarity measurement. A high-level algorithmic framework, namely the node-similarity graph matching framework (NSGM framework), is proposed, under which many existing graph matching algorithms can be subsumed, including the eigen-decomposition method of Umeyama, the polynomial-transformation method of Almohamad, the hubs-and-authorities method of Kleinberg, and the Kronecker product successive projection methods of van Wyk, among others. In addition, improved algorithms can be developed from the NSGM framework with respect to the corresponding results in graph theory. As an observation, it is pointed out that, in general, any algorithm that can be subsumed under the NSGM framework fails to work well for graphs with non-trivial auto-isomorphism structure.
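As a concrete illustration of the node-similarity approach this framework abstracts, the following is an independent sketch in the spirit of Blondel-style similarity scores (not the paper's NSGM algorithm; all names are invented for illustration): a similarity matrix between the nodes of two graphs is iterated to a fixed point and a one-to-one correspondence is then read off with a linear assignment step.

# Independent node-similarity matching sketch (illustrative, not the paper's
# NSGM algorithm).
import numpy as np
from scipy.optimize import linear_sum_assignment

def node_similarity(A, B, iters=50):
    """Iterate the coupled similarity update S <- B S A^T + B^T S A, normalised in
    Frobenius norm; rows of S index nodes of B, columns index nodes of A."""
    S = np.ones((B.shape[0], A.shape[0]))
    for _ in range(iters):                      # an even number of iterations
        S = B @ S @ A.T + B.T @ S @ A
        S /= np.linalg.norm(S)
    return S

def match_nodes(A, B):
    """Read off a one-to-one correspondence maximising total node similarity."""
    S = node_similarity(A, B)
    rows, cols = linear_sum_assignment(-S)      # negate to maximise
    return list(zip(rows, cols))                # pairs (node in B, node in A)

if __name__ == "__main__":
    # Two small graphs that are isomorphic up to a relabelling of the nodes.
    A = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]], dtype=float)
    perm = [2, 0, 1]
    B = A[np.ix_(perm, perm)]
    print(match_nodes(A, B))

For graphs with non-trivial symmetry, such as the small path graph above, several assignments score equally well, which echoes the closing observation of the abstract.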
Abstract:
This is about politics and protest, or rather about a politics of protest, and of rebellion. But it is also about creativity and the way in which theory and practice combine within the context of the ‘productive/creative’ process. In this case the combination is explicit and can be traced along a clear trajectory. The following will set out the way in which the accompanying piece of music – a cover of the 1969 protest song Leaving on a Jet Plane by Peter, Paul & Mary – came into being. In doing so it will make reference to a number of theoretical ideas/concepts that fed into the productive process and/or appeared relevant post-production. It will draw on various aspects of thought from Heidegger (Standing reserve, Enframing and Authenticity), Camus (The Rebel), Foucault (Luminosity), and Deleuze (Immanence, Difference and Repetition and The Fold). [From the Author].
Abstract:
This research was published in the foremost international journal in information theory and shows the interplay between complex random matrix theory and multi-antenna information theory. Dr T. Ratnarajah is a leader in this area of research, and his work has contributed to the development of graduate curricula (a course reader) at the Massachusetts Institute of Technology (MIT), USA, by Professor Alan Edelman. The course is called "The Mathematics and Applications of Random Matrices"; see http://web.mit.edu/18.338/www/projects.html
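As a pointer to the kind of interplay referred to here, the following is a standard textbook illustration (not the specific results of this research): the ergodic capacity of an i.i.d. Rayleigh-fading multi-antenna channel is an expectation of a log-determinant of a complex Wishart-type random matrix, which a few lines of Monte Carlo make concrete. Function names and parameters below are chosen for illustration.

# Monte Carlo estimate of the ergodic capacity E[log2 det(I + (SNR/nt) H H*)]
# for an i.i.d. complex Gaussian (Rayleigh) channel (textbook illustration).
import numpy as np

def ergodic_capacity(n_t, n_r, snr_db, trials=10_000, seed=0):
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    total = 0.0
    for _ in range(trials):
        # Entries ~ CN(0, 1): unit-variance circularly symmetric complex Gaussian.
        H = (rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)
        G = np.eye(n_r) + (snr / n_t) * (H @ H.conj().T)
        total += np.log2(np.linalg.det(G).real)   # determinant is real and positive
    return total / trials

if __name__ == "__main__":
    print(f"4x4 channel at 10 dB SNR: {ergodic_capacity(4, 4, 10.0):.2f} bit/s/Hz")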
Abstract:
We consider the derivation of a kinetic equation for a charged test particle weakly interacting with an electrostatic plasma in thermal equilibrium, subject to a uniform external magnetic field. The Liouville equation leads to a generalized master equation to second order in the 'weak' interaction; a Fokker-Planck-type equation then follows as a 'Markovian' approximation. It is shown that such an equation does not preserve the positivity of the distribution function f(x,v;t). By applying techniques developed in the theory of open systems, a correct Fokker-Planck equation is derived. Explicit expressions for the diffusion and drift coefficients, depending on the magnetic field, are obtained.
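A schematic form of the kind of equation at issue (generic notation assumed for illustration, not the paper's exact expressions): for a test particle of charge \(q\) and mass \(m\) in a uniform field \(\mathbf{B}\), a Fokker-Planck-type kinetic equation for \(f(\mathbf{x},\mathbf{v};t)\) reads

\[
\frac{\partial f}{\partial t} + \mathbf{v}\cdot\nabla_{\mathbf{x}} f + \frac{q}{m}\,(\mathbf{v}\times\mathbf{B})\cdot\nabla_{\mathbf{v}} f
= \nabla_{\mathbf{v}}\cdot\Big[\,\mathsf{D}(\mathbf{B})\,\nabla_{\mathbf{v}} f + \mathbf{A}(\mathbf{B},\mathbf{v})\, f\,\Big],
\]

with a magnetic-field-dependent diffusion tensor \(\mathsf{D}\) and drift vector \(\mathbf{A}\). The point made in the abstract is that the naive Markovian truncation of the generalized master equation need not deliver a right-hand side of this divergence (positivity-preserving) form, whereas the open-systems derivation does, with explicit field-dependent coefficients.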