21 results for World-systems theory
Abstract:
Drawing upon recent reworkings of world systems theory and Marx’s concept of metabolic rift, this paper attempts to ground early nineteenth-century Ireland more clearly within these metanarratives, which take the historical-ecological dynamics of the development of capitalism as their point of departure. In order to unravel the socio-spatial complexities of Irish agricultural production throughout this time, attention must be given to the prevalence of customary legal tenure, institutions of communal governance, and their interaction with the colonial apparatus, as an essential feature of Ireland’s historical geography often neglected by famine scholars. This spatially differentiated legacy of communality, embedded within a country-wide system of colonial rent, and burgeoning capitalist system of global trade, gave rise to profound regional differentiations and ecological contradictions, which became central to the distribution of distress during the Great Famine (1845-1852). Contrary to accounts which depict it as a case of discrete transition from feudalism to capitalism, Ireland’s pre-famine ecology must be understood through an analysis which emphasises these socio-spatial complexities. Consequently, this structure must be conceptualised as one in which communality, colonialism, and capitalism interact dynamically, and in varying stages of development and devolution, according to space and time.
Abstract:
This article examines whether a Modern World-Systems (MWS) perspective can provide an improved understanding of the processes of democratization in Africa (and other developing regions of the world) by conducting a comparative case study of South Africa and Zambia in the 1990s, examining the transitions to democracy and divergent processes of democratic consolidation in each country. Semiperipheral South Africa has, due to its more advantageous position in the world-system, been better equipped than peripheral Zambia to safeguard democracy against erosion and reversal. The central irony of the MWS is that the weakest states in the MWS can be pushed around by core powers and are more easily forced to democratize, while at the same time they are least likely to possess the resources necessary for democratic consolidation. Semiperipheral states can maintain their independence vis-à-vis the core to a higher degree, but if the decision is made to undertake a democratic transition they are more likely to possess the resources necessary for successful consolidation. The MWS perspective allows for a better understanding of the causal pathway of how position in the MWS translates into the ability to consolidate democracy than do approaches that emphasize domestic factors.
Abstract:
The purpose of this paper is to expose the concept of collaborative planning to the reality of planning, thereby assessing its efficacy for informing and explaining what planners 'really' do and can do. In this systematic appraisal, collaborative planning is disaggregated into four elements that can enlighten such conceptual frameworks: ontology, epistemology, ideology and methodology. These four lenses help delimit and clarify collaborative planning's strengths and weaknesses. The conceptual debate is related to an empirical investigation of planning processes, ranging from region-wide to local and from statutory to visionary in an arena where special care has been invested in participatory deliberation processes. The final analysis provides a systematic gauge of collaborative planning in light of the extensive empirical evidence, deploying the four conceptual dimensions introduced in part one. This exposes a range of problems not only with the concept itself but also regarding its affinity with the uncollaborative world within which it has to operate. The former shed light on those aspects where collaborative planning as a conceptual tool for practitioners needs to be renovated, while the latter highlight inconsistencies in a political framework that struggles to accommodate both global competitiveness and local democratic collaboration.
Abstract:
Coastal and estuarine landforms provide a physical template that not only accommodates diverse ecosystem functions and human activities, but also mediates flood and erosion risks that are expected to increase with climate change. In this paper, we explore some of the issues associated with the conceptualisation and modelling of coastal morphological change at time and space scales relevant to managers and policy makers. Firstly, we revisit the question of how to define the most appropriate scales at which to seek quantitative predictions of landform change within an age defined by human interference with natural sediment systems and by the prospect of significant changes in climate and ocean forcing. Secondly, we consider the theoretical bases and conceptual frameworks for determining which processes are most important at a given scale of interest and the related problem of how to translate this understanding into models that are computationally feasible, retain a sound physical basis and demonstrate useful predictive skill. In particular, we explore the limitations of a primary scale approach and the extent to which these can be resolved with reference to the concept of the coastal tract and application of systems theory. Thirdly, we consider the importance of different styles of landform change and the need to resolve not only incremental evolution of morphology but also changes in the qualitative dynamics of a system and/or its gross morphological configuration. The extreme complexity and spatially distributed nature of landform systems means that quantitative prediction of future changes must necessarily be approached through mechanistic modelling of some form or another. Geomorphology has increasingly embraced so-called ‘reduced complexity’ models as a means of moving from an essentially reductionist focus on the mechanics of sediment transport towards a more synthesist view of landform evolution. However, there is little consensus on exactly what constitutes a reduced complexity model and the term itself is both misleading and, arguably, unhelpful. Accordingly, we synthesise a set of requirements for what might be termed ‘appropriate complexity modelling’ of quantitative coastal morphological change at scales commensurate with contemporary management and policy-making requirements: 1) The system being studied must be bounded with reference to the time and space scales at which behaviours of interest emerge and/or scientific or management problems arise; 2) model complexity and comprehensiveness must be appropriate to the problem at hand; 3) modellers should seek a priori insights into what kind of behaviours are likely to be evident at the scale of interest and the extent to which the behavioural validity of a model may be constrained by its underlying assumptions and its comprehensiveness; 4) informed by qualitative insights into likely dynamic behaviour, models should then be formulated with a view to resolving critical state changes; and 5) meso-scale modelling of coastal morphological change should reflect critically on the role of modelling and its relation to the observable world.
Abstract:
The structures of liquid water and isopropanol have been studied as a function of the size of a hydrophobic patch present in a model hydrophilic surface via molecular dynamics simulations. A significant anisotropy extending into the first few solvent layers is found over the patch which suggests implications for many real-world systems in which nanoscale heterogeneity is found.
Abstract:
We study the dynamics of the entanglement spectrum, that is, the time evolution of the eigenvalues of the reduced density matrices after a bipartition of a one-dimensional spin chain. Starting from the ground state of an initial Hamiltonian, the state of the system is evolved in time with a new Hamiltonian. We consider both instantaneous and quasi-adiabatic quenches of the system Hamiltonian across a quantum phase transition. We analyse the Ising model, which can be solved exactly, and the XXZ model, for which we employ the time-dependent density matrix renormalisation group algorithm. Our results show once more a connection between the Schmidt gap, i.e. the difference between the two largest eigenvalues of the reduced density matrix, and order parameters, in this case the spontaneous magnetisation.
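The Schmidt gap itself is straightforward to extract numerically once a state is available. Below is a minimal sketch (plain numpy on a small exact state vector, not the time-dependent DMRG used in the article) that reshapes a pure state across a bipartition and returns the difference of the two largest reduced-density-matrix eigenvalues; the toy GHZ-like state is an illustrative assumption.

```python
import numpy as np

def schmidt_gap(state, dim_left, dim_right):
    # Reshape the pure state across the bipartition; the squared singular
    # values are the eigenvalues of the reduced density matrix rho_A.
    psi = np.asarray(state).reshape(dim_left, dim_right)
    svals = np.linalg.svd(psi, compute_uv=False)
    probs = np.sort(svals**2)[::-1]
    return probs[0] - probs[1]      # gap between the two largest eigenvalues

# Toy example: an unevenly weighted GHZ-like state of 4 spins, cut in half.
psi = np.zeros(16)
psi[0], psi[-1] = 1.0, 0.9
psi /= np.linalg.norm(psi)
print(schmidt_gap(psi, dim_left=4, dim_right=4))
```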
Abstract:
In the last decade, many side channel attacks have been published in the academic literature detailing how to efficiently extract secret keys by mounting various attacks, such as differential or correlation power analysis, on cryptosystems. Among the most efficient and widely utilized leakage models involved in these attacks are the Hamming weight and distance models, which give a simple, yet effective, approximation of the power consumption for many real-world systems. These leakage models reflect the number of bits switching, which is assumed to be proportional to the power consumption. However, the actual power consumption of the switching circuitry is unlikely to be directly of that form. We therefore propose a non-linear leakage model, obtained by mapping the existing leakage model through a transform function, which describes the power consumption more precisely and can thus improve attack efficiency considerably. This has the advantage of utilising a non-linear power model while retaining the simplicity of the Hamming weight or distance models. A modified attack architecture is then suggested to yield the correct key efficiently in practice. Finally, an empirical comparison of the attack results is presented.
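To make the leakage models concrete, the sketch below runs a toy correlation power analysis on simulated traces, using a hypothetical non-linear transform of the Hamming weight (a simple power law with exponent gamma). The XOR-only key target, the exponent, and the noise level are illustrative assumptions, not the transform function or attack architecture proposed in the paper.

```python
import numpy as np

def hamming_weight(x):
    # Number of set bits in each byte value.
    return np.unpackbits(np.asarray(x, dtype=np.uint8)[:, None], axis=1).sum(axis=1)

def nonlinear_leakage(x, gamma=1.3):
    # Hypothetical non-linear map of the Hamming weight (illustrative only).
    return hamming_weight(x).astype(float) ** gamma

def cpa_attack(plaintexts, traces, leakage):
    # Correlate hypothetical leakage of (plaintext XOR key guess) against the
    # measured traces; the key guess with the highest correlation wins.
    scores = [np.corrcoef(leakage(plaintexts ^ g), traces)[0, 1] for g in range(256)]
    return int(np.argmax(scores))

# Simulated example: traces follow the non-linear model plus Gaussian noise.
rng = np.random.default_rng(0)
key = 0x3C
pts = rng.integers(0, 256, size=2000, dtype=np.uint8)
traces = nonlinear_leakage(pts ^ key) + rng.normal(0.0, 0.5, size=pts.size)
print(hex(cpa_attack(pts, traces, leakage=nonlinear_leakage)))   # expected to recover 0x3c
```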
Abstract:
This paper provides algorithms that use an information-theoretic analysis to learn Bayesian network structures from data. Based on our three-phase learning framework, we develop efficient algorithms that can effectively learn Bayesian networks, requiring only polynomial numbers of conditional independence (CI) tests in typical cases. We provide precise conditions that specify when these algorithms are guaranteed to be correct, as well as empirical evidence (from real-world applications and simulation tests) demonstrating that these systems work efficiently and reliably in practice.
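As a concrete illustration of the kind of CI test such algorithms rely on, here is a hedged sketch of a stratified G-test of conditional independence for discrete data (assumes numpy and scipy; it is a generic building block, not the three-phase framework or the specific test schedule of the paper).

```python
import numpy as np
from itertools import product
from scipy.stats import chi2

def g_test_ci(data, x, y, z=()):
    """G-test of conditional independence X _||_ Y | Z for discrete data.

    `data` is a 2-D integer array (rows = samples, columns = variables);
    x and y are column indices, z a tuple of conditioning columns.
    Returns a p-value; a large p-value is consistent with independence.
    """
    levels = [np.unique(data[:, c]) for c in (x, y, *z)]
    g, dof = 0.0, 0
    for z_vals in product(*levels[2:]):            # one stratum per setting of Z
        mask = np.ones(len(data), dtype=bool)
        for col, val in zip(z, z_vals):
            mask &= data[:, col] == val
        sub = data[mask]
        if len(sub) == 0:
            continue
        # Observed and expected contingency tables of X vs Y in this stratum.
        table = np.array([[np.sum((sub[:, x] == xv) & (sub[:, y] == yv))
                           for yv in levels[1]] for xv in levels[0]], float)
        expected = np.outer(table.sum(1), table.sum(0)) / table.sum()
        nz = (table > 0) & (expected > 0)
        g += 2.0 * np.sum(table[nz] * np.log(table[nz] / expected[nz]))
        dof += (len(levels[0]) - 1) * (len(levels[1]) - 1)
    return chi2.sf(g, max(dof, 1))

# Example: X and Y are noisy copies of Z, hence independent given Z.
rng = np.random.default_rng(1)
zc = rng.integers(0, 2, 5000)
xc = zc ^ (rng.random(5000) < 0.1)
yc = zc ^ (rng.random(5000) < 0.1)
d = np.column_stack([xc, yc, zc]).astype(int)
print(g_test_ci(d, 0, 1, z=(2,)))                  # large p-value expected
```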
Abstract:
A theory of strongly interacting Fermi systems of a few particles is developed. At high excitation energies (a few times the single-particle level spacing) these systems are characterized by an extreme degree of complexity due to strong mixing of the shell-model-based many-particle basis states by the residual two-body interaction. This regime can be described as many-body quantum chaos. Practically, it occurs when the excitation energy of the system is greater than a few single-particle level spacings near the Fermi energy. Physical examples of such systems are compound nuclei, heavy open shell atoms (e.g. rare earths) and multicharged ions, molecules, clusters and quantum dots in solids. The main quantity of the theory is the strength function which describes spreading of the eigenstates over many-particle basis states (determinants) constructed using the shell-model orbital basis. A nonlinear equation for the strength function is derived, which enables one to describe the eigenstates without diagonalization of the Hamiltonian matrix. We show how to use this approach to calculate mean orbital occupation numbers and matrix elements between chaotic eigenstates and introduce typically statistical variables such as temperature in an isolated microscopic Fermi system of a few particles.
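For orientation, the strength function referred to here is conventionally defined as below; this is the standard definition with a Breit-Wigner approximation governed by the spreading width Γ, quoted for context rather than as a reproduction of the paper's own nonlinear equation.

```latex
% Strength function of a basis state |k> spread over exact eigenstates |i>
% with energies E_i; in the chaotic regime it is often close to a
% Breit-Wigner profile of spreading width \Gamma (standard form, assumed here).
W_k(E) = \sum_i \left| \langle k | i \rangle \right|^2 \, \delta(E - E_i)
       \;\approx\; \frac{\Gamma / 2\pi}{(E - E_k)^2 + \Gamma^2 / 4}
```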
Abstract:
The use of B-spline basis sets in R-matrix theory for scattering processes has been investigated. In the present approach a B-spline basis is used for the description of the inner region, which is matched to the physical outgoing wavefunctions by the R-matrix. Using B-splines, continuum basis functions can be determined easily, while pseudostates can be included naturally. The accuracy for low-energy scattering processes is demonstrated by calculating inelastic scattering cross sections for electrons colliding with H. Very good agreement with other calculations has been obtained. Further extensions of the codes to quasi-two-electron systems and general atoms are discussed, as well as the application to (multi)photoionization.
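As a small numerical aside, a clamped B-spline basis of the kind used for inner-region expansions is easy to set up with scipy; the box size, number of intervals, and spline degree below are illustrative assumptions, and this is of course not the R-matrix code itself.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(r_max, n_intervals, degree):
    """B-spline basis on [0, r_max] with a clamped, uniform knot vector.

    (In the atomic-physics literature the spline 'order' is degree + 1.)
    """
    breakpoints = np.linspace(0.0, r_max, n_intervals + 1)
    knots = np.concatenate([np.zeros(degree), breakpoints, np.full(degree, r_max)])
    n_basis = len(knots) - degree - 1          # = n_intervals + degree
    basis = []
    for i in range(n_basis):
        coeffs = np.zeros(n_basis)
        coeffs[i] = 1.0
        basis.append(BSpline(knots, coeffs, degree))
    return basis

# Example: a modest basis on a 20 a.u. box; inside the box the functions
# sum to one at every point (partition of unity).
splines = bspline_basis(r_max=20.0, n_intervals=25, degree=5)
r = np.linspace(0.5, 19.5, 5)
print(sum(s(r) for s in splines))
```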
Abstract:
We present a one-dimensional scattering theory which enables us to describe a wealth of effects arising from the coupling of the motional degree of freedom of scatterers to the electromagnetic field. Multiple scattering to all orders is taken into account. The theory is applied to describe the scheme of a Fabry-Perot resonator with one of its mirrors moving. The friction force, as well as the diffusion, acting on the moving mirror is derived. In the limit of a small reflection coefficient, the same model provides for the description of the mechanical effect of light on an atom moving in front of a mirror.
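As a minimal illustration of how a one-dimensional transfer-matrix treatment resums multiple scattering to all orders, the sketch below computes the transmission of a static two-mirror (Fabry-Perot) cavity built from thin, delta-like mirrors; the mirror strengths and spacings are illustrative assumptions, and the motional coupling, friction and diffusion treated in the paper are not reproduced.

```python
import numpy as np

def mirror_matrix(zeta):
    # Transfer matrix of a thin, delta-like lossless mirror of real strength
    # zeta; its amplitude reflectivity is r = i*zeta / (1 - i*zeta).
    return np.array([[1 + 1j*zeta, 1j*zeta],
                     [-1j*zeta,   1 - 1j*zeta]])

def propagation_matrix(k, L):
    # Free propagation of the right- and left-moving amplitudes over a distance L.
    return np.diag([np.exp(1j*k*L), np.exp(-1j*k*L)])

def cavity_transmission(k, L, zeta1, zeta2):
    # Composing the transfer matrices resums every round trip between the
    # mirrors, i.e. multiple scattering to all orders; T = |1 / M_22|^2.
    M = mirror_matrix(zeta2) @ propagation_matrix(k, L) @ mirror_matrix(zeta1)
    return abs(1.0 / M[1, 1])**2

# Example: transmission of a symmetric cavity as the mirror spacing varies.
k = 2 * np.pi                       # wavelength = 1 in these units
for L in np.linspace(0.4, 0.6, 5):
    print(f"L = {L:.2f}  T = {cavity_transmission(k, L, 3.0, 3.0):.3f}")
```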