882 results for "formation of large scale structure"


Relevance: 100.00%

Abstract:

Interwar British retailing has been characterized as having lower productivity, less developed managerial hierarchies and methods, and weaker scale economies than its US counterpart. This article examines comparative productivity for one major segment of large-scale retailing in both countries—the department store sector. Drawing on exceptionally detailed contemporary survey data, we show that British department stores in fact achieved superior performance in terms of operating costs, margins, profits, and stock-turn. While smaller British stores had lower labour productivity than US stores of equivalent size, TFP was generally higher for British stores, which also enjoyed stronger scale economies. We also examine the reasons behind Britain's surprisingly strong relative performance, using surviving original returns from the British surveys. Contrary to arguments that British retailers faced major barriers to the development of large-scale enterprises that could reap economies of scale and scope and invest in machinery and marketing to support the growth of their primary sales functions, we find that British department stores enthusiastically embraced the retail ‘managerial revolution’—and reaped substantial benefits from this investment.

Relevance: 100.00%

Abstract:

Global FGGE data are used to investigate several aspects of large-scale turbulence in the atmosphere. The approach follows that for two-dimensional, nondivergent turbulent flows which are homogeneous and isotropic on the sphere. Spectra of kinetic energy, enstrophy and available potential energy are obtained for both the stationary and transient parts of the flow. Nonlinear interaction terms and fluxes of energy and enstrophy through wavenumber space are calculated and compared with the theory. A possible method of parameterizing the interactions with unresolved scales is considered. Two rather different flow regimes are found in wavenumber space. The high-wavenumber regime is dominated by the transient components of the flow and exhibits, at least approximately, several of the conditions characterizing homogeneous and isotropic turbulence. This region of wavenumber space also displays some of the features of an enstrophy-cascading inertial subrange. The low-wavenumber region, on the other hand, is dominated by the stationary component of the flow, exhibits marked anisotropy and, in contrast to the high-wavenumber regime, displays a marked change between January and July.
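
As a concrete illustration of the spectra discussed above, the following is a minimal NumPy sketch of how kinetic-energy and enstrophy spectra follow from spherical-harmonic coefficients of the streamfunction for nondivergent flow on a sphere; the array psi_nm and the function name are illustrative, and in practice the input would come from a spherical-harmonic transform of the analysed wind field.

```python
import numpy as np

def spectra_from_streamfunction(psi_nm, a=6.371e6):
    """Kinetic-energy spectrum E(n) and enstrophy spectrum Z(n) for nondivergent
    flow on a sphere of radius a, from streamfunction coefficients psi_nm[n, m]
    (complex, m = 0..n; entries with m > n assumed zero)."""
    n_max = psi_nm.shape[0] - 1
    n = np.arange(n_max + 1)
    # Power per total wavenumber n: sum |psi_n^m|^2 over m, counting m > 0 twice
    # because the physical field is real.
    power = np.abs(psi_nm[:, 0])**2 + 2.0 * np.sum(np.abs(psi_nm[:, 1:])**2, axis=1)
    E = n * (n + 1) / (4.0 * a**2) * power   # kinetic energy per total wavenumber n
    Z = n * (n + 1) / a**2 * E               # enstrophy, since zeta_n^m = -n(n+1)/a^2 * psi_n^m
    return E, Z
```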

Relevance: 100.00%

Abstract:

Global communication requirements and load imbalance of some parallel data mining algorithms are the major obstacles to exploiting the computational power of large-scale systems. This work investigates how non-uniform data distributions can be exploited to remove the global communication requirement and to reduce the communication cost in iterative parallel data mining algorithms. In particular, the analysis focuses on one of the most influential and popular data mining methods, the k-means algorithm for cluster analysis. The straightforward parallel formulation of the k-means algorithm requires a global reduction operation at each iteration step, which hinders its scalability. This work studies a different parallel formulation of the algorithm in which the requirement of global communication can be relaxed while still providing the exact solution of the centralised k-means algorithm. The proposed approach exploits a non-uniform data distribution which can either be found in real-world distributed applications or be induced by means of multi-dimensional binary search trees. The approach can also be extended to accommodate an approximation error, which allows a further reduction of the communication costs.
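
For reference, here is a minimal sketch (using mpi4py and NumPy; names are illustrative, not the paper's code) of the straightforward parallel k-means formulation described above, in which every iteration ends with the global reduction that the proposed approach seeks to relax.

```python
import numpy as np
from mpi4py import MPI

def parallel_kmeans_step(local_points, centroids, comm=MPI.COMM_WORLD):
    """One iteration of the naive parallel k-means: each process assigns its local
    points to the nearest centroid, then a global all-reduce combines the partial
    sums and counts before the centroids are updated."""
    k, d = centroids.shape
    # Assign each local point to its nearest centroid.
    dists = np.linalg.norm(local_points[:, None, :] - centroids[None, :, :], axis=2)
    labels = np.argmin(dists, axis=1)
    # Partial sums and counts for the local data block.
    sums = np.zeros((k, d))
    counts = np.zeros(k)
    for j in range(k):
        mask = labels == j
        sums[j] = local_points[mask].sum(axis=0)
        counts[j] = mask.sum()
    # Global reduction at every iteration -- the scalability bottleneck.
    comm.Allreduce(MPI.IN_PLACE, sums, op=MPI.SUM)
    comm.Allreduce(MPI.IN_PLACE, counts, op=MPI.SUM)
    return sums / np.maximum(counts, 1)[:, None]
```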

Relevance: 100.00%

Abstract:

Recently, it has been proposed that there are two types of type Ia supernova progenitor: short-lived and long-lived. On the basis of this idea, we develop a theory of a unified mechanism for the formation of the bimodal radial distribution of iron and oxygen in the Galactic disc. The underlying cause of the formation of the fine structure of the radial abundance pattern is the influence of the spiral arms, specifically the combined effect of the corotation resonance and turbulent diffusion. From our modelling, we conclude that, in order to explain the bimodal radial distributions simultaneously for oxygen and iron and to obtain approximately equal total iron output from the different types of supernovae, the mean ejected iron mass per supernova event should be the same as quoted in the literature if the maximum mass of stars that eject heavy elements is 50 M⊙. For an upper mass limit of 70 M⊙, the production of iron by a type II supernova explosion should increase by a factor of about 1.5.

Relevance: 100.00%

Abstract:

Large-scale simulations of parts of the brain using detailed neuronal models to improve our understanding of brain functions are becoming a reality with the use of supercomputers and large clusters. However, the high acquisition and maintenance cost of these computers, including the physical space, air conditioning, and electrical power, limits the number of simulations of this kind that scientists can perform. Modern commodity graphics cards, based on the CUDA platform, contain graphics processing units (GPUs) composed of hundreds of processors that can simultaneously execute thousands of threads and thus constitute a low-cost solution for many high-performance computing applications. In this work, we present a CUDA algorithm that enables the execution, on multiple GPUs, of simulations of large-scale networks composed of biologically realistic Hodgkin-Huxley neurons. The algorithm represents each neuron as a CUDA thread, which solves the set of coupled differential equations that model each neuron. Communication among neurons located in different GPUs is coordinated by the CPU. We obtained speedups of 40 for the simulation of 200k neurons that received random external input, and speedups of 9 for a network with 200k neurons and 20M neuronal connections, on a single computer with two graphics boards with two GPUs each, when compared with a modern quad-core CPU.
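
As an illustration of the per-neuron work that the paper maps to a single CUDA thread, the following is a minimal NumPy sketch of one forward-Euler step of the classic Hodgkin-Huxley equations, vectorised over neurons on the CPU; the abstract does not give the exact model parameters, so the textbook values are used here as an assumption.

```python
import numpy as np

def hh_step(V, m, h, n, I_ext, dt=0.01):
    """Advance N Hodgkin-Huxley neurons by one Euler step (units: mV, ms, uA/cm^2).
    All state arrays have shape (N,); each entry plays the role of one CUDA thread."""
    C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
    ENa, EK, EL = 50.0, -77.0, -54.4
    # Voltage-dependent rate functions (classic HH parameterisation).
    a_m = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(V + 65.0) / 80.0)
    # Membrane currents and explicit state update.
    I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
    V = V + dt * (I_ext - I_ion) / C
    m = m + dt * (a_m * (1.0 - m) - b_m * m)
    h = h + dt * (a_h * (1.0 - h) - b_h * h)
    n = n + dt * (a_n * (1.0 - n) - b_n * n)
    return V, m, h, n
```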

Relevance: 100.00%

Abstract:

Bismuth titanate, Bi₄Ti₃O₁₂ (BIT), with wide application in the electronics industry in capacitors, memory devices and sensors, is the simplest compound in the Aurivillius family, which consists of (Bi₂O₂)²⁺ sheets alternating with (Bi₂Ti₃O₁₀)²⁻ perovskite-like layers. The synthesis of more resistive BIT ceramics would be a valuable advance towards obtaining well-densified ceramics with small, randomly oriented grains that limit the conductivity along the (Bi₂O₂)²⁺ layers. Bearing in mind that the conventional ceramic route for the synthesis can lead to non-stoichiometric compositions, as a consequence of the undesirable loss of bismuth through volatilization of Bi₂O₃ at elevated temperature, our efforts were directed at the preparation of BIT by mechanical activation of the constituent oxides. The nucleation and phase formation of BIT, crystal structure, microstructure, powder particle size and specific surface area were followed by XRD, Rietveld refinement analysis, thermal analysis, scanning electron microscopy (SEM) and BET specific surface area measurements.

Relevance: 100.00%

Abstract:

This work presents hybrid Constraint Programming (CP) and metaheuristic methods for the solution of Large Scale Optimization Problems; it aims at integrating concepts and mechanisms from metaheuristic methods into a CP-based tree search environment in order to exploit the advantages of both approaches. The modelling and solution of large-scale combinatorial optimization problems is a topic which has attracted the interest of many researchers in the Operations Research field; combinatorial optimization problems are widespread in everyday life and the need to solve difficult problems is more and more urgent. Metaheuristic techniques have been developed in recent decades to effectively handle the approximate solution of combinatorial optimization problems; we examine metaheuristics in detail, focusing on the aspects common to different techniques. Each metaheuristic approach possesses its own peculiarities in designing and guiding the solution process; our work aims at recognizing components which can be extracted from metaheuristic methods and re-used in different contexts. In particular, we focus on the possibility of porting metaheuristic elements to constraint programming based environments, as constraint programming is able to deal with the feasibility issues of optimization problems in a very effective manner. Moreover, CP offers a general paradigm which allows any type of problem to be easily modelled and solved within a problem-independent framework, unlike local search and metaheuristic methods, which are highly problem-specific.

In this work we describe the implementation of the Local Branching framework, originally developed for Mixed Integer Programming, in a CP-based environment. Constraint programming specific features are used to ease the search process while maintaining the full generality of the approach. We also propose a search strategy called Sliced Neighborhood Search (SNS) that iteratively explores slices of large neighbourhoods of an incumbent solution by performing CP-based tree search, drawing on concepts from metaheuristic techniques. SNS can be used as a stand-alone search strategy, but it can alternatively be embedded in existing strategies as an intensification and diversification mechanism; in particular, we show its integration within CP-based local branching.

We provide an extensive experimental evaluation of the proposed approaches on instances of the Asymmetric Traveling Salesman Problem and of the Asymmetric Traveling Salesman Problem with Time Windows. The proposed approaches achieve good results on practical-size problems, demonstrating the benefit of integrating metaheuristic concepts in CP-based frameworks.
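
To make the two neighbourhood notions concrete, here is a small illustrative Python sketch (not the thesis implementation) of the local-branching neighbourhood around an incumbent binary solution and of one SNS-style slice of that neighbourhood restricted to a chosen subset of variables.

```python
from itertools import combinations

def in_local_branching_neighbourhood(x, incumbent, k):
    """True if binary candidate x lies within Hamming distance k of the incumbent,
    i.e. satisfies the local-branching constraint sum_i |x_i - x*_i| <= k."""
    return sum(int(a != b) for a, b in zip(x, incumbent)) <= k

def sliced_neighbourhood(incumbent, slice_indices, k):
    """Enumerate candidates that differ from the incumbent only on slice_indices,
    flipping at most k of them: one slice of the large neighbourhood, which a
    CP tree search would explore instead of the full neighbourhood at once."""
    for r in range(1, k + 1):
        for subset in combinations(slice_indices, r):
            candidate = list(incumbent)
            for i in subset:
                candidate[i] = 1 - candidate[i]
            yield candidate
```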

Relevance: 100.00%

Abstract:

More than 40 years after the agrarian reform, Peru is experiencing a renewed process of concentration of land ownership in the hands of large-scale investors, favoring the development of a sugar cane production cluster along the northern coast. The expansion of the agricultural frontier by means of large irrigation projects – originally developed to benefit medium- and small-scale farmers – is today carried out in order to sell land to large-scale investors for the production of export crops. In the region of Piura, the increasing presence of large-scale biofuel investors puts substantial pressure on land and water resources, not only changing the use of and access to land for local communities, but also generating water shortages vis-à-vis the multiple water demands of local food producers. The changes in land relations and the agro-ecosystem, the altered food production regime, as well as the increasing proletarianization of smallholders, are driving many locals – even those who (initially) welcomed the investment – into resistance activities against the increasing control of land, water and other natural resources by agribusinesses. The aim of this presentation is to discuss the contemporary political, social and cultural dynamics of agrarian change along the northern Peruvian coast as well as the «reactions from below» emanating from campesino communities, landless laborers, brick producers, pastoralists and other marginalized groups. The different strategies, forms and practices of resistance pursued with the goal of the «protection of the territory» are explored, as well as the reasons for their rather scattered occurrence and the lack of alliances on the land issue. This contribution aims to inform the on-going debate on individual and communal property rights and the question of what works best as a collective defense against land grabbing.

Relevance: 100.00%

Abstract:

Although analyses of large-scale land acquisitions (LSLA) often contain an explicit or implicit normative judgment about such projects, they rarely deduce such judgment from a nuanced balancing of pros and cons. This paper uses assessments of a well-researched LSLA in Sierra Leone to show that a utilitarian approach tends to lead to the conclusion that positive effects prevail, whereas deontological approaches lead to an emphasis on negative aspects. LSLA are probably the most radical land-use change in the history of humankind. This process of radical transformation poses a challenge for balanced evaluations. We therefore outline a framework that focuses on the options of local residents but sets boundaries of acceptability through the core contents of human rights. In addition, the systemic implications of a project need to be considered.

Relevance: 100.00%

Abstract:

With the exponentially increasing demands on and uses of GIS data visualization systems, in areas such as urban planning, environment and climate change monitoring, weather simulation and hydrographic gauging, research on and applications of geospatial vector and raster data visualization have become prevalent. However, we observe that current web GIS techniques are only suitable for static vector and raster data with no dynamically overlaid layers. While it is desirable to enable visual exploration of large-scale dynamic vector and raster geospatial data in a web environment, improving the performance between backend datasets and the vector and raster applications remains a challenging technical issue. This dissertation addresses these challenging and previously unimplemented areas: how to provide a large-scale dynamic vector and raster data visualization service with dynamically overlaid layers, accessible from various client devices through a standard web browser, and how to make this large-scale dynamic visualization service as rapid as its static counterpart. To accomplish this, a large-scale dynamic vector and raster data visualization geographic information system based on parallel map tiling, together with a comprehensive performance improvement solution, is proposed, designed and implemented. The components include: quadtree-based indexing and parallel map tiling, the Legend String, vector data visualization with dynamic layer overlaying, vector data time series visualization, an algorithm for vector data rendering, an algorithm for raster data re-projection, an algorithm for the elimination of superfluous levels of detail, an algorithm for vector data gridding and re-grouping, and cluster server-side vector and raster data caching.
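
As an illustration of the quadtree-based indexing mentioned above, here is a minimal Python sketch that maps a WGS84 point to a tile and a quadtree key under the common Web-Mercator tiling scheme; the dissertation's exact scheme is not specified in the abstract, so this scheme is an assumption used for illustration only.

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Tile (x, y) containing a WGS84 point at the given zoom level (Web Mercator)."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

def tile_to_quadkey(x, y, zoom):
    """Quadtree key for a tile: one base-4 digit per zoom level, usable as a cache or index key."""
    key = []
    for z in range(zoom, 0, -1):
        digit = 0
        mask = 1 << (z - 1)
        if x & mask:
            digit += 1
        if y & mask:
            digit += 2
        key.append(str(digit))
    return "".join(key)
```

For example, tile_to_quadkey(*lonlat_to_tile(-80.19, 25.76, 12), 12) yields a 12-character base-4 key that identifies the tile and all of its quadtree ancestors, which is what makes such keys convenient for hierarchical caching.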

Relevance: 100.00%

Abstract:

Variability management is one of the major challenges in software product line adoption, since it needs to be efficiently managed at various levels of the software product line development process (e.g., requirement analysis, design, implementation, etc.). One of the main challenges within variability management is the handling and effective visualization of large-scale (industry-size) models, which in many projects can reach the order of thousands of variability points, along with the dependency relationships that exist among them. These have raised many concerns regarding the scalability of current variability management tools and techniques and their lack of industrial adoption. To address the scalability issues, this work employed a combination of quantitative and qualitative research methods to identify the reasons behind the limited scalability of existing variability management tools and techniques. In addition to producing a comprehensive catalogue of existing tools, the outcome from this stage helped to understand the major limitations of existing tools. Based on the findings, a novel approach was created for managing variability that employed two main principles to support scalability. First, the separation-of-concerns principle was employed by creating multiple views of variability models to alleviate information overload. Second, hyperbolic trees were used to visualise models (compared to the Euclidean-space trees traditionally used). The result was an approach that can represent models encompassing hundreds of variability points and complex relationships. These concepts were demonstrated by implementing them in an existing variability management tool and using it to model a real-life product line with over a thousand variability points. Finally, in order to assess the work, an evaluation framework was designed based on established usability assessment best practices and standards. The framework was then used with several case studies to benchmark the performance of this work against other existing tools.
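
The following toy Python sketch illustrates why a hyperbolic (Poincaré-disk) layout helps with the scalability issue described above: mapping tree depth to radius through tanh keeps every level of an arbitrarily deep tree inside the unit disk while the region around the focus stays readable. It is an illustrative simplification, not the layout algorithm of the implemented tool.

```python
import math

def hyperbolic_layout(tree, node, depth=0, angle_lo=0.0, angle_hi=2 * math.pi, step=0.35):
    """Assign (x, y) positions in the unit disk to every node of a dict-based tree
    {node: [children, ...]} by splitting the angular range among children and
    compressing depth into radius with tanh."""
    positions = {}
    r = math.tanh(step * depth)                 # depth -> radius, always < 1
    theta = 0.5 * (angle_lo + angle_hi)
    positions[node] = (r * math.cos(theta), r * math.sin(theta))
    children = tree.get(node, [])
    if children:
        span = (angle_hi - angle_lo) / len(children)
        for i, child in enumerate(children):
            positions.update(hyperbolic_layout(tree, child, depth + 1,
                                               angle_lo + i * span,
                                               angle_lo + (i + 1) * span, step))
    return positions

# Example: hyperbolic_layout({"root": ["A", "B"], "A": ["A1", "A2"]}, "root")
```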

Relevance: 100.00%

Abstract:

It is increasingly recognized that ecological restoration demands conservation action beyond the borders of existing protected areas. This requires the coordination of land uses and management over a larger area, usually with a range of partners, which presents novel institutional challenges for conservation planners. Interviews were undertaken with managers of a purposive sample of large-scale conservation areas in the UK. Interviews were open-ended and analyzed using standard qualitative methods. Results show that a wide variety of organizations are involved in large-scale conservation projects, and that partnerships take time to create and demand resilience in the face of different organizational practices, staff turnover, and short-term funding. Successful partnerships with local communities depend on the establishment of trust and the availability of external funds to support conservation land uses. We conclude that there is no single institutional model for large-scale conservation: success depends on finding institutional strategies that secure long-term conservation outcomes, and ensure that conservation gains are not reversed when funding runs out, private owners change priorities, or land changes hands.

Relevance: 100.00%

Abstract:

We present Dithen, a novel computation-as-a-service (CaaS) cloud platform specifically tailored to the parallel execution of large-scale multimedia tasks. Dithen handles the upload/download of both multimedia data and executable items, the assignment of compute units to multimedia workloads, and the reactive control of the available compute units to minimize the cloud infrastructure cost under deadline-abiding execution. Dithen combines three key properties: (i) the reactive assignment of individual multimedia tasks to available computing units according to availability and predetermined time-to-completion constraints; (ii) optimal resource estimation based on Kalman-filter estimates; (iii) the use of additive increase multiplicative decrease (AIMD) algorithms (well known as the resource-management mechanism of the Transmission Control Protocol) for the control of the number of units servicing workloads. The deployment of Dithen over Amazon EC2 spot instances is shown to be capable of processing more than 80,000 video transcoding, face detection and image processing tasks (equivalent to the processing of more than 116 GB of compressed data) for less than $1 in billing cost from EC2. Moreover, the proposed AIMD-based control mechanism, in conjunction with the Kalman estimates, is shown to provide more than a 27% reduction in EC2 spot instance cost against methods based on reactive resource estimation. Finally, Dithen is shown to offer a 38% to 500% reduction of the billing cost against the current state of the art in CaaS platforms on Amazon EC2 (Amazon Lambda and Amazon Autoscale). A baseline version of Dithen is currently available at dithen.com.
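
As a sketch of the AIMD principle described above (the mapping of the control signal is an assumption for illustration, not Dithen's actual rule), a controller might add capacity linearly while the predicted completion time violates a deadline and cut it multiplicatively once there is slack:

```python
def aimd_units(units, predicted_completion, deadline, alpha=1, beta=0.5, min_units=1):
    """Return the number of compute units to provision for the next control interval.
    Illustrative AIMD mapping only: the deadline-pressure signal plays the role that
    the congestion signal plays in TCP."""
    if predicted_completion > deadline:
        return units + alpha                      # additive increase: deadline at risk, add capacity
    return max(min_units, int(units * beta))      # multiplicative decrease: spare capacity, cut cost
```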

Relevance: 100.00%

Abstract:

What drove the transition from small-scale human societies centred on kinship and personal exchange, to large-scale societies comprising cooperation and division of labour among untold numbers of unrelated individuals? We propose that the unique human capacity to negotiate institutional rules that coordinate social actions was a key driver of this transition. By creating institutions, humans have been able to move from the default ‘Hobbesian’ rules of the ‘game of life’, determined by physical/environmental constraints, into self-created rules of social organization where cooperation can be individually advantageous even in large groups of unrelated individuals. Examples include rules of food sharing in hunter–gatherers, rules for the usage of irrigation systems in agriculturalists, property rights and systems for sharing reputation between mediaeval traders. Successful institutions create rules of interaction that are self-enforcing, providing direct benefits both to individuals that follow them, and to individuals that sanction rule breakers. Forming institutions requires shared intentionality, language and other cognitive abilities largely absent in other primates. We explain how cooperative breeding likely selected for these abilities early in the Homo lineage. This allowed anatomically modern humans to create institutions that transformed the self-reliance of our primate ancestors into the division of labour of large-scale human social organization.

Relevance: 100.00%

Abstract:

Starting from the Fisher matrix for counts in cells, we derive the full Fisher matrix for surveys of multiple tracers of large-scale structure. The key step is the classical approximation, which allows us to write the inverse of the covariance of the galaxy counts in terms of the naive matrix inverse of the covariance in a mixed position-space and Fourier-space basis. We then compute the Fisher matrix for the power spectrum in bins of the 3D wavenumber k, the Fisher matrix for functions of position (or redshift z) such as the linear bias of the tracers and/or the growth function, and the cross-terms of the Fisher matrix that express the correlations between estimations of the power spectrum and estimations of the bias. When the bias and growth function are fully specified, and the Fourier-space bins are large enough that the covariance between them can be neglected, the Fisher matrix for the power spectrum reduces to the widely used result first derived by Feldman, Kaiser & Peacock. Assuming isotropy, a fully analytical calculation of the Fisher matrix in the classical approximation can be performed in the case of a constant-density, volume-limited survey.
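
As a small worked example of the Feldman, Kaiser & Peacock limit mentioned above, the following Python sketch evaluates the Fisher information for the band power in a single k-bin of a constant-density, volume-limited survey, assuming the bins are wide enough that their covariance can be neglected; the function name and arguments are illustrative.

```python
import numpy as np

def fkp_fisher_lnP(k, dk, V, n, P):
    """Fisher information for ln P(k) in one k-bin of width dk, for a survey of
    volume V with constant tracer density n and band power P:
    F = (N_modes / 2) * [nP / (1 + nP)]^2, so sigma_P / P = 1 / sqrt(F)."""
    n_modes = V * 4.0 * np.pi * k**2 * dk / (2.0 * np.pi)**3   # independent Fourier modes in the bin
    shot_noise_weight = (n * P / (1.0 + n * P))**2             # FKP effective-volume factor
    return 0.5 * n_modes * shot_noise_weight
```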