921 results for Motor Vehicles by Power Source.
Abstract:
In Somalia, the central government collapsed in 1991, and since then state failure has become a widespread phenomenon and one of the greatest political and humanitarian problems facing the world in this century. Thus, the main objective of this research is to answer the following question: what went wrong? Most of the existing literature on the political economy of conflict starts from the assumption that the state in Africa is predatory by nature. Unlike these studies, the present research, although it uses predation theory, starts from the social contract approach to defining the state. Therefore, rather than contemplating the actions and policies of the rulers alone, this approach allows us to deliberately bring the role of the society – as citizens – and other players into the analysis. In Chapter 1, after introducing the study, a simple principal-agent model is developed to check the logical consistency of the argument and to make the identification of the causal mechanism easier. I also identify three main actors in the process of state failure in Somalia: the Somali state, Somali society and the superpowers. In Chapter 2, in order to understand the incentives, preferences and constraints of each player in the state failure game, I analyse in some depth the evolution and structure of three central informal institutions: the identity-based patronage system of leadership, political tribalism, and the Cold War. These three institutions are considered the rules of the game in Somali state failure. Chapter 3 summarises the successive civilian governments' achievements and failures (1960-69) concerning the main national goals, national unification and socio-economic development. Chapter 4 shows that the military regime, although it assumed power through extralegal means, served to some extent the developmental interests of the citizens in the first five years of its rule. Chapter 5 traces the process, and the factors involved, of the military regime's self-transformation from an agent for the developmental interests of the society into a predatory state that not only undermined the interests of the society but also destroyed the state itself. Chapter 6 addresses the process of disintegration of the post-colonial state of Somalia. The chapter shows how the regime's merciless reactions to political ventures by power-seeking opposition leaders shattered the entire country and wrecked the state institutions. Chapter 7 concludes the study by summarising the main findings: due to the incentive structures generated by the informal institutions, the formal state institutions fell apart.
Abstract:
Most structural elements, such as beams and cables, are flexible and should be modeled as distributed parameter systems (DPS) to represent reality better. For large structures, the usual approach of 'modal representation' is not accurate. Moreover, for excessive vibrations (possibly due to strong wind, earthquakes etc.), an external power source (controller) is needed to suppress them, as the natural damping of these structures is usually small. In this paper, we propose to use a recently developed optimal dynamic inversion technique to design a set of discrete controllers for this purpose. We assume that the control force is applied to the structure through a finite number of actuators located at predefined positions in the spatial domain. The method used in this paper determines the control forces directly from the partial differential equation (PDE) model of the system. The formulation has better practical significance, both because it leads to a closed-form solution for the controller (hence avoiding computational issues) and because a set of discrete actuators along the spatial domain can be implemented with relative ease (as compared to a continuous actuator).
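As a rough illustration of the discrete-actuator idea only (this is not the paper's optimal dynamic inversion design), the sketch below damps a finite-difference model of a vibrating string by applying velocity-feedback forces at a few predefined actuator points; the grid, gain and actuator positions are arbitrary assumptions.

```python
import numpy as np

# Minimal sketch: 1D wave equation u_tt = c^2 u_xx on [0, 1] with fixed ends,
# damped by point actuators applying velocity feedback at fixed locations.
# This is NOT the paper's optimal dynamic inversion controller; the gain,
# actuator positions and grid parameters are illustrative assumptions.

c = 1.0                       # wave speed
nx, dx = 101, 1.0 / 100
dt = 0.5 * dx / c             # CFL-stable time step
x = np.linspace(0.0, 1.0, nx)

u = np.sin(np.pi * x)         # initial displacement (first mode)
u_prev = u.copy()             # zero initial velocity

actuator_idx = [25, 50, 75]   # predefined actuator grid points (assumed)
gain = 5.0                    # velocity-feedback gain (assumed)

for step in range(2000):
    u_xx = np.zeros_like(u)
    u_xx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2

    # distributed control force: zero everywhere except at the actuator points
    f = np.zeros_like(u)
    velocity = (u - u_prev) / dt
    f[actuator_idx] = -gain * velocity[actuator_idx]

    u_next = 2 * u - u_prev + dt**2 * (c**2 * u_xx + f)
    u_next[0] = u_next[-1] = 0.0          # fixed boundaries
    u_prev, u = u, u_next

print("final max displacement:", np.abs(u).max())
```

The point of the sketch is only that the control enters the PDE as a forcing term supported at a finite set of spatial locations, which is the actuation setting the paper considers.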
Abstract:
The modern subject is what we can call a self-subjecting individual. This is someone in whose inner reality has been implanted a more permanent governability, a governability that works inside the agent. Michel Foucault's genealogy of the modern subject is the history of its constitution by power practices. By a flight of imagination, suppose that this history is not an evolving social structure or cultural phenomenon, but one of those insects (moths) whose life cycle consists of three stages or moments: crawling larva, encapsulated pupa, and flying adult. Foucault's history of power practices presents the same kind of miracle of total metamorphosis. The main forces in the general field of power can be apprehended through a generalisation of three rationalities functioning side by side in the plurality of different practices of power: domination, normalisation and the law. Domination is a force functioning by the rationality of reason of state: the state's essence is power, power is firm domination over people, and people are the state's resource by which the state's strength is measured. Normalisation is a force that takes hold of people from the inside of society: it imposes society's own reality, its empirical verity, as a norm on people through silently working jurisdictional operations that exclude pathological individuals too far from the average of the population as a whole. The law is a counterforce to both domination and normalisation. Accounting for elements of legal practice as omnihistorical is not possible without a view of the general field of power. Without this view, and only in terms of the operations and tactical manoeuvres of the practice of law, nothing of the kind can be seen: the only thing that practice manifests is constant change itself. However, the backdrop of law's tacit dimension, that is, the power relations between law, domination and normalisation, allows one to see more. In the general field of power, the function of law is exactly to maintain the constant possibility of change. Whereas domination and normalisation would stabilise society, the law makes it move. The European individual has a reality as a problem. What is a problem? A problem is something that allows entry into the field of thought, said Foucault. To be a problem, it is necessary for a certain number of factors to have made it uncertain, to have made it lose its familiarity, or to have provoked a certain number of difficulties around it. Entering the field of thought through problematisations of the European individual (human forms, power and knowledge), one is able to glimpse the historical backgrounds of our present being. These were produced, and then again buried, in intersections between practices of power and games of truth. In the problem of the European individual one has suitable circumstances that bring to light forces that have passed through the individual over the centuries.
Abstract:
An inverse problem for the wave equation is a mathematical formulation of the problem of converting measurements of sound waves into information about the wave speed governing the propagation of the waves. This doctoral thesis extends the theory of inverse problems for the wave equation to cases with partial measurement data and also considers the detection of discontinuous interfaces in the wave speed. A possible application of the theory is obstetric sonography, in which ultrasound measurements are transformed into an image of the fetus in its mother's uterus. The wave speed inside the body cannot be directly observed, but sound waves can be produced outside the body and their echoes from the body can be recorded. The present work contains five research articles. In the first and fifth articles we show that it is possible to determine the wave speed uniquely by using far-apart sound sources and receivers. This extends a previously known result which requires the sound waves to be produced and recorded in the same place. Our result is motivated by a possible application to reflection seismology, which seeks to create an image of the Earth's crust from recordings of echoes stimulated, for example, by explosions. For this purpose, the receivers typically cannot lie near the powerful sound sources. In the second article we present a sound source that allows us to recover many essential features of the wave speed from the echo produced by the source. Moreover, these features are known to determine the wave speed under certain geometric assumptions. Previously known results permitted the same features to be recovered only by sequential measurement of echoes produced by multiple different sources. The reduced number of measurements could increase the number of possible applications of acoustic probing. In the third and fourth articles we develop an acoustic probing method to locate discontinuous interfaces in the wave speed. These interfaces typically correspond to interfaces between different materials, and their locations are of interest in many applications. There are many previous approaches to this problem, but none of them exploits sound sources varying freely in time. Our use of more variable sources could allow a more robust implementation of the probing.
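A schematic statement of such a partial-data inverse problem (the notation is generic and not taken from the thesis) is the following: a source f acting on a part S of the boundary produces a wave u^f, and the question is whether the echoes recorded on a receiver set R determine the wave speed c.

```latex
% Generic partial-data formulation (assumed notation, not the thesis's):
% u^f is the wave produced by a source f supported on a boundary subset S.
\[
\begin{aligned}
  \partial_t^2 u^f - c(x)^2 \Delta u^f &= 0
      && \text{in } (0,T) \times M,\\
  \partial_\nu u^f &= f
      && \text{on } (0,T) \times \partial M,\ \ \operatorname{supp} f \subset (0,T) \times S,\\
  u^f\big|_{t=0} = \partial_t u^f\big|_{t=0} &= 0,
\end{aligned}
\qquad
\Lambda_{S,R} f := u^f\big|_{(0,T) \times R}.
\]
```

In this notation, the far-apart source and receiver setting corresponds to recovering c from the map Λ_{S,R} when S and R are disjoint subsets of the boundary.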
Abstract:
This paper considers the design and analysis of a filter at the receiver of a source coding system to mitigate the excess Mean-Squared Error (MSE) distortion caused by channel errors. It is assumed that the source encoder is channel-agnostic, i.e., that a Vector Quantization (VQ) based compression scheme designed for a noiseless channel is employed. The index output by the source encoder is sent over a noisy memoryless discrete symmetric channel, and the possibly incorrect received index is decoded by the corresponding VQ decoder. The output of the VQ decoder is processed by a receive filter to obtain an estimate of the source instantiation. In the sequel, the optimum linear receive filter structure that minimizes the overall MSE is derived and shown to have a minimum mean-squared error (MMSE) receiver-type structure. Further, expressions are derived for the resulting high-rate MSE performance. The performance is compared with the MSE obtained using conventional VQ as well as channel-optimized VQ. The accuracy of the expressions is demonstrated through Monte Carlo simulations.
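A minimal simulation sketch of this system model is given below, assuming a toy Gaussian source, an index channel that flips bits independently, and a linear receive filter estimated empirically from training data; the codebook size, crossover probability and estimation procedure are illustrative assumptions, not the paper's analytical design.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy setup (all parameters are illustrative assumptions, not the paper's) ---
dim, bits = 2, 4                 # 2-D source vectors, 4-bit VQ (16 codewords)
n_code = 2 ** bits
p_flip = 0.05                    # per-bit crossover probability of the index channel

# Train a simple VQ codebook with a few Lloyd iterations on a Gaussian source.
train = rng.standard_normal((20000, dim))
codebook = train[rng.choice(len(train), n_code, replace=False)]
for _ in range(20):
    idx = np.argmin(((train[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)
    for k in range(n_code):
        if np.any(idx == k):
            codebook[k] = train[idx == k].mean(axis=0)

def encode(x):
    return np.argmin(((x[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)

def channel(indices):
    # flip each of the `bits` index bits independently with probability p_flip
    flips = rng.random((len(indices), bits)) < p_flip
    masks = (flips * (1 << np.arange(bits))).sum(axis=1)
    return indices ^ masks

# --- Estimate a linear receive filter W that minimises E||x - W y||^2 ---
x = rng.standard_normal((50000, dim))
y = codebook[channel(encode(x))]               # channel-corrupted VQ decoder output
W = np.linalg.solve(y.T @ y, y.T @ x).T        # empirical Wiener/MMSE-type filter

# --- Compare MSE on fresh data, with and without the receive filter ---
x_test = rng.standard_normal((50000, dim))
y_test = codebook[channel(encode(x_test))]
mse_plain = np.mean((x_test - y_test) ** 2)
mse_filtered = np.mean((x_test - y_test @ W.T) ** 2)
print(f"MSE without receive filter: {mse_plain:.4f}")
print(f"MSE with linear receive filter: {mse_filtered:.4f}")
```

The design choice being illustrated is that the encoder and VQ decoder are left untouched; only a linear post-filter at the receiver compensates for the index errors.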
Abstract:
This paper studies the problem of designing a logical topology over a wavelength-routed all-optical network (AON) physical topology. The physical topology consists of the nodes and fiber links in the network. On an AON physical topology, we can set up lightpaths between pairs of nodes, where a lightpath represents a direct optical connection without any intermediate electronics. The set of lightpaths along with the nodes constitutes the logical topology. For a given network physical topology and traffic pattern (relative traffic distribution among the source-destination pairs), our objective is to design the logical topology and the routing algorithm on that topology so as to minimize the network congestion while constraining the average delay seen by a source-destination pair and the amount of processing required at the nodes (degree of the logical topology). We will see that ignoring the delay constraints can result in fairly convoluted logical topologies with very long delays. On the other hand, in all our examples, imposing them results in a minimal increase in congestion. While the number of wavelengths required to embed the resulting logical topology on the physical all-optical topology is also a constraint in general, we find that in many cases of interest this number can be quite small. We formulate the combined logical topology design and routing problem described above (ignoring the constraint on the number of available wavelengths) as a mixed integer linear programming problem, which we then solve for a number of cases of a six-node network. Since this programming problem is computationally intractable for larger networks, we split it into two subproblems: logical topology design, which is computationally hard and will probably require heuristic algorithms, and routing, which can be solved by a linear program. We then compare the performance of several heuristic topology design algorithms (that do take wavelength assignment constraints into account) against that of randomly generated topologies, as well as lower bounds derived in the paper.
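A schematic version of the kind of congestion-minimisation MILP described here is sketched below; the notation (b_{ij} for lightpath variables, λ^{sd}_{ij} for routed traffic, t^{sd} for offered traffic, Δ_ℓ for the logical degree bound) is assumed for illustration, and the delay and wavelength constraints discussed in the abstract are omitted.

```latex
% Schematic congestion-minimisation MILP (assumed notation; delay and
% wavelength constraints omitted).
\[
\begin{aligned}
  \min\;& \lambda_{\max} \\
  \text{s.t.}\;
  & \sum_{s,d} \lambda^{sd}_{ij} \le \lambda_{\max}
      && \forall\, i,j && \text{(congestion on each lightpath)}\\
  & \lambda^{sd}_{ij} \le t^{sd}\, b_{ij}
      && \forall\, i,j,s,d && \text{(traffic only on existing lightpaths)}\\
  & \sum_{j} \lambda^{sd}_{ij} - \sum_{j} \lambda^{sd}_{ji} =
      \begin{cases} t^{sd}, & i = s\\ -t^{sd}, & i = d\\ 0, & \text{otherwise}\end{cases}
      && \forall\, s,d,i && \text{(flow conservation)}\\
  & \sum_{j} b_{ij} \le \Delta_{\ell},\quad \sum_{j} b_{ji} \le \Delta_{\ell}
      && \forall\, i && \text{(logical degree)}\\
  & b_{ij} \in \{0,1\},\quad \lambda^{sd}_{ij} \ge 0.
\end{aligned}
\]
```

Fixing the binary lightpath variables b_{ij} reduces the remaining problem to the routing linear program mentioned in the abstract.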
Transformation of a laterally diverging boundary layer flow to a two-dimensional boundary layer flow
Abstract:
Laterally diverging boundary layer flow over a plate is shown to be reducible to a two-dimensional flow by modelling the diverging streamlines as a source flow.
Abstract:
The conventional Cornell source-based approach to probabilistic seismic-hazard assessment (PSHA) has been employed all around the world, and many studies rely on the use of computer packages such as FRISK (McGuire, FRISK: a computer program for seismic risk analysis. Open-File Report 78-1007, United States Geological Survey, Department of Interior, Washington, 1978) and SEISRISK III (Bender and Perkins, SEISRISK III: a computer program for seismic hazard estimation, Bulletin 1772. United States Geological Survey, Department of Interior, Washington, 1987). A "black-box" syndrome may result if the user of the software does not have another simple and robust PSHA method with which to make comparisons. An alternative method for PSHA, namely the direct amplitude-based (DAB) approach, has been developed as a heuristic and efficient method enabling users to undertake their own sanity checks on outputs from computer packages. This paper applies the DAB approach to three cities in China, Iran, and India, respectively, and compares the results with documented results computed by the source-based approach. Several insights regarding the procedure of conducting PSHA have also been obtained, which could be useful for future seismic-hazard studies.
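For reference, the conventional source-based hazard calculation that the DAB approach is compared against has the standard form below (standard PSHA notation, assumed for illustration and not taken from this paper).

```latex
% Conventional (Cornell-type) source-based hazard integral: annual rate of
% exceeding a ground-motion level x (standard notation, assumed here).
\[
\lambda(\mathrm{IM} > x) \;=\; \sum_{i=1}^{N_{\mathrm{src}}} \nu_i
  \int\!\!\int P\!\left[\mathrm{IM} > x \mid m, r\right]
  f_{M_i}(m)\, f_{R_i}(r)\, \mathrm{d}m\, \mathrm{d}r,
\]
```

where ν_i is the activity rate of source i, f_{M_i} and f_{R_i} are the magnitude and distance densities, and P[IM > x | m, r] is given by a ground-motion model.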
Abstract:
This paper considers the design and analysis of a filter at the receiver of a source coding system to mitigate the excess distortion caused by channel errors. The index output by the source encoder is sent over a fading discrete binary symmetric channel, and the possibly incorrect received index is mapped to the corresponding codeword by a Vector Quantization (VQ) decoder at the receiver. The output of the VQ decoder is then processed by a receive filter to obtain an estimate of the source instantiation. The distortion performance is analyzed under a weighted mean square error (WMSE) criterion, and the optimum receive filter that minimizes the expected distortion is derived for two different cases of fading. It is shown that the performance of the system with the receive filter is strictly better than that of a conventional VQ, and that the difference becomes more significant as the number of bits transmitted increases. Theoretical expressions for upper and lower bounds on the WMSE performance of the system with the receive filter and a Rayleigh flat-fading channel are derived. The design of a receive filter in the presence of channel mismatch is also studied, and it is shown that a minimax solution is obtained by designing the receive filter for the worst possible channel. Simulation results are presented to validate the theoretical expressions and illustrate the benefits of receive filtering.
Abstract:
In this paper, we present the design and development details of a micro air vehicle (MAV) built around a quadrotor configuration. A survey of implemented MAVs suggests that a quadrotor design has several advantages over other configurations, especially in the context of swarm intelligence applications. Our design approach consists of three stages; however, the focus of this paper is restricted to the first stage, which involves the selection of crucial components such as the motor-rotor pair, the battery source, and the structural material. The applications of MAVs are broad-ranging, from reconnaissance to search and rescue, and have immense potential in the rapidly advancing field of swarm intelligence.
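As a back-of-the-envelope illustration of the component-selection stage (all numbers below are assumptions, not the design values reported in the paper), the sketch sizes the per-rotor hover thrust and the ideal hover power for a small quadrotor, the kind of estimate that drives motor-rotor and battery choices.

```python
import math

# Sizing sketch for a quadrotor MAV (illustrative assumptions only).
mass_kg = 0.35            # assumed all-up mass of the MAV
g = 9.81
n_rotors = 4
rotor_radius_m = 0.06     # assumed rotor radius
rho = 1.225               # air density at sea level, kg/m^3
thrust_margin = 2.0       # desired thrust-to-weight ratio for manoeuvring

hover_thrust_per_rotor = mass_kg * g / n_rotors
max_thrust_per_rotor = thrust_margin * hover_thrust_per_rotor

# Ideal hover power per rotor from momentum theory: P = T^(3/2) / sqrt(2 rho A)
disk_area = math.pi * rotor_radius_m ** 2
ideal_hover_power = hover_thrust_per_rotor ** 1.5 / math.sqrt(2 * rho * disk_area)

print(f"hover thrust per rotor : {hover_thrust_per_rotor:.2f} N")
print(f"max thrust per rotor   : {max_thrust_per_rotor:.2f} N")
print(f"ideal hover power/rotor: {ideal_hover_power:.1f} W")
```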
Abstract:
Our work is motivated by impromptu (or "as-you-go") deployment of wireless relay nodes along a path, a need that arises in many situations. In this paper, the path is modeled as starting at the origin (where the data sink, e.g., the control center, is located) and evolving randomly over a lattice in the positive quadrant. A person walks along the path deploying relay nodes as he goes. At each step, the path can randomly either continue in the same direction, take a turn, or come to an end, at which point a data source (e.g., a sensor) has to be placed that will send packets to the data sink. A decision has to be made at each step whether or not to place a wireless relay node. Assuming that the packet generation rate at the source is very low, and simple link-by-link scheduling, we consider the problem of sequential relay placement so as to minimize the expectation of an end-to-end cost metric (a linear combination of the sum of convex hop costs and the number of relays placed). This impromptu relay placement problem is formulated as a total-cost Markov decision process. First, we derive the optimal policy in terms of an optimal placement set and show that this set is characterized by a boundary (with respect to the position of the last placed relay) beyond which it is optimal to place the next relay. Next, based on a simpler one-step-look-ahead characterization of the optimal policy, we propose an algorithm which is proved to converge to the optimal placement set in a finite number of steps and which is faster than value iteration. We show by simulations that the distance-threshold-based heuristic usually assumed in the literature is close to optimal, provided that the threshold distance is carefully chosen. (C) 2014 Elsevier B.V. All rights reserved.
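As a toy illustration of the total-cost MDP formulation (a one-dimensional simplification under assumed dynamics and costs, not the paper's lattice model or its one-step-look-ahead algorithm), the sketch below computes a placement rule by value iteration over the number of steps walked since the last placed node.

```python
import numpy as np

# 1-D simplification of impromptu relay placement (assumed dynamics and costs):
# state r = steps walked since the last placed node.  At each point the deployer
# either places a relay (relay cost RHO plus convex hop cost c(r), resetting r)
# or skips; the path then ends with probability P_END (the source goes one step
# further, paying the final hop cost) or continues.

P_END = 0.05                      # assumed probability the path ends at the next step
RHO = 1.0                         # assumed cost per placed relay
R_MAX = 200                       # truncation of the state space
c = lambda r: 0.02 * r ** 2       # assumed convex hop cost

V = np.zeros(R_MAX + 2)           # V[r]: cost-to-go at distance r, before deciding
for _ in range(5000):             # synchronous value iteration
    V_new = V.copy()
    for r in range(1, R_MAX + 1):
        # expected cost of walking one more step with the last node d steps behind
        step = lambda d: P_END * c(d + 1) + (1 - P_END) * V[min(d + 1, R_MAX)]
        place = RHO + c(r) + step(0)
        skip = step(r)
        V_new[r] = min(place, skip)
    delta = np.max(np.abs(V_new - V))
    V = V_new
    if delta < 1e-9:
        break

threshold = next(r for r in range(1, R_MAX + 1)
                 if RHO + c(r) + P_END * c(1) + (1 - P_END) * V[1]
                    <= P_END * c(r + 1) + (1 - P_END) * V[min(r + 1, R_MAX)])
print("smallest distance at which placing a relay is optimal:", threshold)
```

With a convex hop cost the resulting rule has the boundary (threshold) structure described in the abstract: place the next relay once the last placed node is far enough behind.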
Abstract:
Cooperative relaying combined with selection has been extensively studied in the literature to improve the performance of interference-constrained secondary users in underlay cognitive radio (CR). We present a novel symbol error probability (SEP)-optimal amplify-and-forward relay selection rule for an average interference-constrained underlay CR system. A fundamental principle that the proposed rule brings out, and which is unique to average interference-constrained underlay CR, is that the choice of the optimal relay is affected not only by the source-to-relay, relay-to-destination, and relay-to-primary-receiver links, which are local to the relay, but also by the direct source-to-destination (SD) link, even though it is not local to any relay. We also propose a simpler, practically amenable variant of the optimal rule called the 1-bit rule, which requires just one bit of feedback about the SD link gain to the relays and incurs a marginal performance loss relative to the optimal rule. We analyze its SEP and develop an insightful asymptotic SEP analysis. The proposed rules markedly outperform several ad hoc SD-link-unaware rules proposed in the literature. They also generalize the interference-unconstrained and SD-link-unaware optimal rules considered in the literature.
Abstract:
A Vehicular Ad-hoc Network (VANET) is a type of wireless ad-hoc network that aims to provide communication among vehicles. A key characteristic of VANETs is the very high mobility of nodes, which results in a frequently changing topology along with frequent breakage and re-establishment of paths among the nodes involved. These characteristics make the Quality of Service (QoS) requirements in VANETs a challenging issue. In this paper we characterize the performance available to applications in infrastructureless VANETs in terms of path holding time, path breakage probability and per-session throughput as a function of the vehicle density on the road, the data traffic rate and the number of connections formed among vehicles, using table-driven and on-demand routing algorithms. The results obtained reveal several QoS constraints on applications in infrastructureless VANETs.
Abstract:
Previous simulations of potential ichthyoplankton entrainment by power generating stations on the Potomac estuary have not included the influence of lateral transport in distributing eggs and larvae over the nursery area. Therefore, two-dimensional, vertically-averaged hydrodynamic and kinematic models of passive organism transport were developed to represent advective and dispersive processes near the proposed Douglas Point Nuclear Generating Station. Although the more refined model did not substantially alter the estimate of ichthyoplankton entrainment, it did reveal that lateral inhomogeneities in hydrodynamics could engender severalfold differences in entrainment probabilities on opposite sides of the estuary. Models of higher resolution and greater biological detail did not project greater total entrainment by the Douglas Point plant, because the volume of nontidal flow past the site was large in comparison with the proposed rate of cooling-water withdrawal.
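The standard form of a two-dimensional, vertically-averaged transport model of this kind (shown here for orientation; not necessarily the exact equations used in the study) is the depth-averaged advection-dispersion equation for the organism concentration C.

```latex
% Depth-averaged advection-dispersion of a passive organism concentration C
% (standard form, assumed for illustration): H is the local depth, (u, v) the
% vertically-averaged velocities, and D_x, D_y the dispersion coefficients.
\[
\frac{\partial (H C)}{\partial t}
  + \frac{\partial (H u C)}{\partial x}
  + \frac{\partial (H v C)}{\partial y}
  = \frac{\partial}{\partial x}\!\left( H D_x \frac{\partial C}{\partial x} \right)
  + \frac{\partial}{\partial y}\!\left( H D_y \frac{\partial C}{\partial y} \right).
\]
```

The depth-averaged velocities (u, v) are supplied by the hydrodynamic model, and entrainment estimates follow from the portion of the concentration field carried into the cooling-water intake.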