Abstract:
The strength of the Antarctic Circumpolar Current (ACC) is believed to depend on the westerly wind stress blowing over the Southern Ocean, although the exact relationship between winds and circumpolar transport is yet to be determined. Here we show, based on theoretical arguments and a hierarchy of numerical modeling experiments, that the global pycnocline depth and the baroclinic ACC transport are set by an integral measure of the wind stress over the path of the ACC, taking into account its northward deflection. Our results assume that the mesoscale eddy diffusivity is independent of the mean flow; while the relationship between wind stress and ACC transport will be more complicated in an eddy-saturated regime, our conclusion that the ACC is driven by winds over the circumpolar streamlines is likely to be robust.
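To make the notion of an "integral measure of the wind stress over the path of the ACC" concrete, here is a minimal numerical sketch. Everything in it (grid, streamline path, stress profile) is invented for illustration; it simply contrasts a path-following average of zonal wind stress along northward-deflected circumpolar streamlines with a conventional fixed-latitude average:

```python
import numpy as np

# All values are idealized: a Gaussian zonal wind stress profile peaking
# near 50S, and an ACC path that meanders north and south of 52S.
lons = np.linspace(0.0, 360.0, 360, endpoint=False)   # degrees east
acc_lat = -52.0 + 8.0 * np.sin(np.radians(lons))      # idealized ACC path (deg N)

def tau_x(lat):
    """Idealized zonal wind stress (N m^-2) as a function of latitude."""
    return 0.2 * np.exp(-((lat + 50.0) / 10.0) ** 2)

# Integral measure over the circumpolar streamlines (path-following average)
# versus the conventional average at a fixed Drake Passage latitude.
tau_path = tau_x(acc_lat).mean()
tau_fixed = tau_x(np.full_like(lons, -60.0)).mean()
print(f"path-following average: {tau_path:.3f} N/m^2")
print(f"fixed-latitude average: {tau_fixed:.3f} N/m^2")
```

Because the meandering path samples latitudes closer to the stress maximum, the two averages differ; that difference is the distinction the abstract draws.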
Abstract:
During the past 15 years, a number of initiatives have been undertaken at the national level to develop ocean forecasting systems operating at regional and/or global scales. The co-ordination between these efforts has been organized internationally through the Global Ocean Data Assimilation Experiment (GODAE). The French MERCATOR project is one of the leading participants in GODAE. The MERCATOR systems routinely assimilate a variety of observations such as multi-satellite altimeter data, sea-surface temperature and in situ temperature and salinity profiles, focusing on high-resolution scales of the ocean dynamics. The assimilation strategy in MERCATOR is based on a hierarchy of methods of increasing sophistication including optimal interpolation, Kalman filtering and variational methods, which are progressively deployed through the Système d'Assimilation MERCATOR (SAM) series. SAM-1 is based on a reduced-order optimal interpolation which can be operated using 'altimetry-only' or 'multi-data' set-ups; it relies on the concept of separability, assuming that the correlations can be separated into a product of horizontal and vertical contributions. The second release, SAM-2, is being developed to include new features from the singular evolutive extended Kalman (SEEK) filter, such as three-dimensional, multivariate error modes and adaptivity schemes. The third, SAM-3, considers variational methods such as the incremental four-dimensional variational algorithm. Most operational forecasting systems evaluated during GODAE are based on least-squares statistical estimation assuming Gaussian errors. In the framework of the EU MERSEA (Marine EnviRonment and Security for the European Area) project, research is being conducted to prepare the next-generation operational ocean monitoring and forecasting systems. The research effort will explore nonlinear assimilation formulations to overcome limitations of the current systems. This paper provides an overview of the developments conducted in MERSEA with the SEEK filter, the Ensemble Kalman filter and the sequential importance re-sampling filter.
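As an illustration of the separability concept behind SAM-1's reduced-order optimal interpolation, the toy sketch below (not MERCATOR code; all dimensions and covariances are invented) builds the background-error covariance as a Kronecker product of horizontal and vertical correlations and applies the standard optimal-interpolation analysis update:

```python
import numpy as np

# Minimal optimal-interpolation sketch. Illustrates the separability
# assumption: the background covariance B is a Kronecker product of
# horizontal and vertical correlation matrices.

nh, nv = 6, 4                                 # horizontal points, vertical levels
h = np.arange(nh)[:, None]
Ch = np.exp(-((h - h.T) / 2.0) ** 2)          # horizontal correlation
v = np.arange(nv)[:, None]
Cv = np.exp(-np.abs(v - v.T) / 1.5)           # vertical correlation
B = np.kron(Ch, Cv)                           # separable background covariance

n = nh * nv
xb = np.zeros(n)                              # background state
H = np.zeros((2, n))
H[0, 5] = H[1, 17] = 1.0                      # observe two state elements
R = 0.1 * np.eye(2)                           # observation-error covariance
y = np.array([1.0, -0.5])                     # observations

# Optimal-interpolation (BLUE) update: xa = xb + K (y - H xb)
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa = xb + K @ (y - H @ xb)
print(xa.reshape(nh, nv).round(2))
```

The separable form keeps B cheap to build and store, which is the practical motivation the abstract alludes to.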
Abstract:
This paper describes a framework architecture for the automated re-purposing and efficient delivery of multimedia content stored in content management systems (CMSs). It deploys specifically designed templates as well as adaptation rules based on a hierarchy of profiles to accommodate user, device and network requirements, invoked as constraints in the adaptation process. The user profile provides information in accordance with the opt-in principle, while the device and network profiles provide the operational constraints, such as resolution and bandwidth limitations. The profile hierarchy ensures that the adaptation privileges the users' preferences. As part of the adaptation, we took into account support for users' special needs, and therefore adopted a template-based approach that could simplify the adaptation process by integrating accessibility-by-design in the template.
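The profile hierarchy can be sketched as a simple constraint-resolution pass. The profiles and field names below are invented for illustration; numeric limits are tightened across profiles, and the user profile is applied last so that its preferences prevail, as the abstract requires:

```python
# Illustrative sketch (all names invented): resolve adaptation constraints
# from a hierarchy of profiles, from lowest to highest priority.

network_profile = {"max_bitrate_kbps": 1500}
device_profile  = {"max_width": 1280, "max_bitrate_kbps": 4000}
user_profile    = {"captions": True, "max_width": 800}   # opt-in preferences

def resolve(profiles):
    """Later profiles override earlier ones; numeric limits take the minimum."""
    constraints = {}
    for profile in profiles:                  # lowest to highest priority
        for key, value in profile.items():
            if isinstance(value, (int, float)) and key in constraints:
                constraints[key] = min(constraints[key], value)  # tightest limit
            else:
                constraints[key] = value
    return constraints

# Network, then device, then user: the user's preferences are applied last.
print(resolve([network_profile, device_profile, user_profile]))
# {'max_bitrate_kbps': 1500, 'max_width': 800, 'captions': True}
```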
Abstract:
This article critically examines the nature and quality of governance in community representation and civil society engagement in the context of trans-national large-scale mining, drawing on experiences in the Anosy Region of south-east Madagascar. An exploration of functional relationships between government, mining business and civil society stakeholders reveals the equivocal legitimacy of certain civil society representatives, created by state manipulation, which contributes to community disempowerment. The appointment, rather than election, of local government officials creates a hierarchy of upward dependencies and a culture in which the majority of officials express similar views and political alliances. As a consequence, community resistance is suppressed. Voluntary mechanisms such as Corporate Social Responsibility (CSR) and the Extractive Industries Transparency Initiative (EITI) advocate community stakeholder engagement in decision-making processes as a measure to achieve public accountability. In many developing countries, however, where there is a lack of transparency and high levels of corruption, the value of this engagement is debatable. Findings from this study indicate that the power relationships which exist between stakeholders in the highly lucrative mining industry override efforts to achieve "good governance" through voluntary community engagement. The continuing challenge lies in identifying where responsibility sits for addressing this power struggle and achieving fair representation.
Abstract:
In two recent papers, Byrne and Lee (2006, 2007) examined the geographical concentration of institutional office and retail investment in England and Wales at two points in time: 1998 and 2003. The findings indicate that commercial office portfolios are concentrated in a very few urban areas, whereas retail holdings correlate more closely with the urban hierarchy of England and Wales and consequently are essentially ubiquitous. Research into the industrial sector is much less developed, and this paper therefore makes a significant contribution to understanding the structure of industrial property investment in the UK. It shows that industrial investment concentration lies between that of retail and office and is focussed on local authorities (LAs) with high levels of manual workers in areas with smaller industrial units. It also shows that during the period studied the structure of the sector changed, with greater emphasis on the distributional element, for which location is a principal consideration.
Abstract:
This is one of the first papers in which arguments are given for treating code-switching and borrowing as similar phenomena. It is argued that distinguishing the two phenomena is theoretically undesirable and empirically very problematic. A probabilistic account of code-switching and a hierarchy of switched constituents (similar to hierarchies of borrowability) are proposed, which account for the fact that some constituents are more likely to be borrowed/switched than others. It is argued that the same kinds of constraints apply to both code-switching and borrowing.
Abstract:
This Atlas presents statistical analyses of the simulations submitted to the Aqua-Planet Experiment (APE) data archive. The simulations are from global Atmospheric General Circulation Models (AGCMs) applied to a water-covered earth. The AGCMs include ones actively used or being developed for numerical weather prediction or climate research. Some are mature application models and others are more novel and thus less well tested in Earth-like applications. The experiment applies AGCMs with their complete parameterization packages to an idealization of the planet Earth which has a greatly simplified lower boundary consisting of an ocean only. It has no land, and hence no orography, and no sea ice. The ocean is represented by Sea Surface Temperatures (SSTs) which are specified everywhere with simple, idealized distributions. Thus, in the hierarchy of tests available for AGCMs, APE falls between tests with simplified forcings, such as those proposed by Held and Suarez (1994) and Boer and Denis (1997), and the Earth-like simulations of the Atmospheric Modeling Intercomparison Project (AMIP; Gates et al., 1999). Blackburn and Hoskins (2013) summarize the APE and its aims. They discuss where the APE fits within a modeling hierarchy which has evolved to evaluate complete models and which provides a link between realistic simulation and conceptual models of atmospheric phenomena. The APE bridges a gap in the existing hierarchy. The goals of APE are to provide a benchmark of current model behaviors and to stimulate research to understand the causes of inter-model differences. APE is sponsored by the World Meteorological Organization (WMO) joint Commission on Atmospheric Science (CAS) and World Climate Research Program (WCRP) Working Group on Numerical Experimentation (WGNE). Chapter 2 of this Atlas provides an overview of the specification of the eight APE experiments and of the data collected. Chapter 3 lists the participating models and includes brief descriptions of each. Chapters 4 through 7 present a wide variety of statistics from the 14 participating models for the eight different experiments. Additional intercomparison figures created by Dr. Yukiko Yamada of the AGU group are available at http://www.gfd-dennou.org/library/ape/comparison/. This Atlas is intended to present and compare the statistics of the APE simulations but does not contain a discussion of interpretive analyses. Such analyses are left for journal papers such as those included in the Special Issue of the Journal of the Meteorological Society of Japan (2013, Vol. 91A) devoted to the APE. Two papers in that collection provide an overview of the simulations: one (Blackburn et al., 2013) concentrates on the CONTROL simulation and the other (Williamson et al., 2013) on the response to changes in the meridional SST profile. Additional papers provide more detailed analysis of the basic simulations, while others describe various sensitivities and applications. The APE database holds a wealth of data that is now publicly available from the APE web site: http://climate.ncas.ac.uk/ape/. We hope that this Atlas will stimulate future analyses and investigations to understand the large variation seen in the model behaviors.
Abstract:
Document design and typeface design: a typographic specification for a new Intermediate Greek-English Lexicon published by CUP, accompanied by typefaces modified for the specific typographic requirements of the text. The Lexicon is a substantial (over 1400 pages) publication for higher education (HE) students and academics intended to complement Liddell-Scott (the standard reference for classical Greek since the 1850s), and has been in preparation for over a decade. The typographic appearance of such works has changed very little since the original editions, largely due to the lack of suitable typefaces: early digital proofs of the Lexicon utilised directly digitised versions of historical typefaces, making the entries difficult to navigate and the document uneven in typographic texture. Close collaboration with the editors of the Lexicon, and discussion of the historical precedents for such documents, informed the design at all typographic levels to achieve a highly reader-friendly result that proposes a model for this kind of typography. Uniquely for a work of this kind, typeface design decisions were integrated into the wider document design specification. A rethinking of the complex typography for Greek and English, based on historical editions as well as equivalent bilingual reference works at this level (from OUP, CUP, Brill, Mondadori, and other publishers), led to a redefinition of multi-script typeface pairing for the specific context, taking into account recent developments in typeface design. Specifically, the relative weighting of elements within each entry was redefined, as was the typographic texture of type styles across the two scripts. In particular, the Greek typefaces were modified to emphasise clarity and readability, particularly of diacritics, at very small sizes. The relative weights of typefaces typeset side by side were fine-tuned so that the visual hierarchy of the entries was unambiguous despite the dense typesetting.
Abstract:
The concept of a slowest invariant manifold is investigated for the five-component model of Lorenz under conservative dynamics. It is shown that Lorenz's model is a two-degree-of-freedom canonical Hamiltonian system, consisting of a nonlinear vorticity-triad oscillator coupled to a linear gravity wave oscillator, whose solutions consist of regular and chaotic orbits. When either the Rossby number or the rotational Froude number is small, there is a formal separation of timescales, and one can speak of fast and slow motion. In the same regime, the coupling is weak, and the Kolmogorov–Arnold–Moser theorem is shown to apply. The chaotic orbits are inherently unbalanced and are confined to regions sandwiched between invariant tori consisting of quasi-periodic regular orbits. The regular orbits generally contain free fast motion, but a slowest invariant manifold may be geometrically defined as the set of all slow cores of invariant tori (defined by zero fast action) that are smoothly related to such cores in the uncoupled system. This slowest invariant manifold is not global (in fact, its structure is fractal), but it is of nearly full measure in the limit of weak coupling. It is also nonlinearly stable. As the coupling increases, the slowest invariant manifold shrinks until it disappears altogether. The results clarify previous definitions of a slowest invariant manifold and highlight the ambiguity in the definition of “slowness.” An asymptotic procedure, analogous to standard initialization techniques, is found to yield nonzero free fast motion even when the core solutions contain none. A hierarchy of Hamiltonian balanced models preserving the symmetries in the original low-order model is formulated; these models are compared with classic balanced models, asymptotically initialized solutions of the full system and the slowest invariant manifold defined by the core solutions. The analysis suggests that for sufficiently small Rossby or rotational Froude numbers, a stable slowest invariant manifold can be defined for this system, which has zero free gravity wave activity, but it cannot be defined everywhere. The implications of the results for more complex systems are discussed.
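For reference, a commonly quoted form of the Lorenz (1986) five-component model is the following; this is one standard nondimensionalization, with b the coupling parameter, and the paper's own scaling in terms of the Rossby and rotational Froude numbers may differ:

\[
\begin{aligned}
\dot{u} &= -v w + b v z, \\
\dot{v} &= u w - b u z, \\
\dot{w} &= -u v, \\
\dot{x} &= -z, \\
\dot{z} &= x + b u v,
\end{aligned}
\]

where (u, v, w) is the nonlinear vorticity triad and (x, z) the linear gravity-wave oscillator; setting b = 0 decouples the two subsystems, recovering the "uncoupled system" referred to above.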
Abstract:
Fire is an important component of the Earth System that is tightly coupled with climate, vegetation, biogeochemical cycles, and human activities. Observations of how fire regimes change on seasonal to millennial timescales are providing an improved understanding of the hierarchy of controls on fire regimes. Climate is the principal control on fire regimes, although human activities have had an increasing influence on the distribution and incidence of fire in recent centuries. Understanding of the controls and variability of fire also underpins the development of models, both conceptual and numerical, that allow us to predict how future climate and land-use changes might influence fire regimes. Although fires in fire-adapted ecosystems can be important for biodiversity and ecosystem function, positive effects are being increasingly outweighed by losses of ecosystem services. As humans encroach further into the natural habitat of fire, social and economic costs are also escalating. The prospect of near-term rapid and large climate changes, and the escalating costs of large wildfires, necessitates a radical re-thinking and the development of approaches to fire management that promote the more harmonious co-existence of fire and people.
Abstract:
The cold shock response in bacteria involves the expression of low-molecular-weight cold shock proteins (CSPs) containing a nucleic acid-binding cold shock domain (CSD), which are known to destabilize secondary structures on mRNAs, facilitating translation at low temperatures. Caulobacter crescentus cspA and cspB are induced upon cold shock, while cspC and cspD are induced during stationary phase. In this work, we determined a new coding sequence for the cspC gene, revealing that it encodes a protein containing two CSDs. The phenotypes of C. crescentus csp mutants were analyzed, and we found that cspC is important for cells to maintain viability during extended periods in stationary phase. Also, the cspC and cspCD strains presented altered morphology, with frequent non-viable filamentous cells, and the cspCD strain also showed pronounced cell death at late stationary phase. In contrast, the cspAB mutant presented increased viability in this phase, accompanied by altered expression of both cspC and cspD, but the triple cspABD mutant lost this characteristic. Taken together, our results suggest that there is a hierarchy of importance among the csp genes regarding stationary-phase viability, which is probably achieved by a finely tuned balance of the levels of these proteins.
Abstract:
The scalar-isoscalar term in the two-pion exchange NN potential is abnormally large and does not respect the hierarchy of effects predicted by chiral perturbation theory. We argue that this anomaly is associated with non-perturbative effects, which are also present in the πN scalar form factor.
Abstract:
Optimal location on the transport infrastructure is a desirable requirement in many decision-making processes. Most studies have focused on evaluating the performance of optimally locating p facilities by minimizing their distances to a geographically distributed demand (n) as p and n vary. The optimal locations are also sensitive to the geographical context, such as the road network, especially when demand is asymmetrically distributed in the plane. The influence of varying road network density is, however, not a well-studied problem, especially when applied in a real-world context. This paper investigates how the density level of the road network affects the search for optimal locations by solving the specific case of the p-median location problem. A denser network is found to be needed when a larger number of facilities are to be located. The best solution is not always obtained on the most detailed network, but at an intermediate density level. Solutions do not improve further, or improve only insignificantly, once the density exceeds 12,000 nodes; some solutions even deteriorate. The hierarchy of different network densities can be used according to location and transportation purposes, and can increase the efficiency of heuristic methods. The method in this study can be applied to other location-allocation problems in transportation analysis where the road network density can be differentiated.
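As a concrete companion to the p-median discussion, here is a toy sketch. All coordinates and demands are invented; real studies use network rather than straight-line distances, and exact or metaheuristic solvers rather than this simple greedy add heuristic:

```python
import numpy as np

# Toy p-median instance: choose p facility nodes minimizing total
# demand-weighted distance to the nearest chosen facility.

rng = np.random.default_rng(0)
n = 200                                  # demand points (stand-in for network nodes)
xy = rng.uniform(0, 100, size=(n, 2))
demand = rng.uniform(1, 10, size=n)
dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)

def greedy_p_median(p):
    chosen = []
    best = np.full(n, np.inf)            # distance to nearest chosen facility
    for _ in range(p):
        # total cost if each candidate were added to the current set
        cost = (demand[None, :] * np.minimum(dist, best[None, :])).sum(axis=1)
        j = int(np.argmin(cost))
        chosen.append(j)
        best = np.minimum(best, dist[j])
    return chosen, float((demand * best).sum())

facilities, total_cost = greedy_p_median(p=5)
print(facilities, round(total_cost, 1))
```

Re-running such a sketch with the node set thinned or densified mimics the density experiments described above: beyond some node count the extra detail stops paying off.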
Abstract:
Delineation of commuting regions has always been based on statistical units, often municipalities or wards. However, using these units has certain disadvantages, as their land areas differ considerably. Much information is lost in the larger spatial base units, and distortions occur in self-containment values, the main criterion in rule-based delineation procedures. Alternatively, one can start from relatively small standard-size units such as hexagons. In this way, much greater detail in spatial patterns is obtained. In this paper, regions are built by means of intrazonal maximization (Intramax) on the basis of hexagons. The use of geoprocessing tools, specifically developed for the processing of commuting data, speeds up processing time considerably. The results of the Intramax analysis are evaluated with travel-to-work-area constraints, and comparisons are made with commuting fields, accessibility to employment, commuting flow density and network commuting flow size. From selected steps in the regionalization process, a hierarchy of nested commuting regions emerges, revealing the complexity of commuting patterns.
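A minimal sketch of the Intramax aggregation step is given below. The flow matrix is invented, the objective is one common textbook formulation (merge the pair of regions whose mutual flows most exceed what their row and column totals predict), and the real procedure additionally applies the travel-to-work-area constraints mentioned above. The sequence of merges is what yields the hierarchy of nested commuting regions:

```python
import numpy as np

# Toy Intramax-style stepwise aggregation on an invented flow matrix.
rng = np.random.default_rng(1)
n = 8
T = rng.integers(0, 50, size=(n, n)).astype(float)   # commuting flows
np.fill_diagonal(T, 0.0)
labels = [{i} for i in range(n)]                     # current regions

while len(T) > 2:
    O, D = T.sum(axis=1), T.sum(axis=0)              # row / column totals
    with np.errstate(divide="ignore", invalid="ignore"):
        # interaction relative to what the totals predict, in both directions
        score = T / np.outer(O, D) + T.T / np.outer(D, O)
    np.fill_diagonal(score, -np.inf)
    score = np.nan_to_num(score, nan=-np.inf, posinf=-np.inf)
    i, j = np.unravel_index(np.argmax(score), score.shape)
    i, j = min(i, j), max(i, j)
    # merge region j into region i, summing its flows
    T[i, :] += T[j, :]
    T[:, i] += T[:, j]
    T = np.delete(np.delete(T, j, axis=0), j, axis=1)
    T[i, i] = 0.0                                    # ignore intrazonal flow
    labels[i] |= labels.pop(j)
    print(len(T), "regions:", labels)
```

Each pass of the loop records one level of the hierarchy; stopping the aggregation at different steps yields the nested regions the abstract describes.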