80 results for Re-location
Abstract:
This article empirically analyzes the main existing theories on city income and population growth: increasing returns to scale, locational fundamentals and random growth. To do this, we implement a threshold nonlinearity test that extends standard linear growth regression models to a dataset of urban, climatological and macroeconomic variables for 1,175 U.S. cities. Our analysis reveals the existence of increasing returns when per-capita income levels are beyond $19,264. Despite this, income growth is mostly explained by social and locational fundamentals. Population growth also exhibits two distinct equilibria, determined by a threshold value of 116,300 inhabitants beyond which city population grows at a higher rate. Income and population growth do not go hand in hand, implying an optimal level of population beyond which income growth stagnates or deteriorates.
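As a rough illustration of the threshold approach described above, the following minimal sketch grid-searches a candidate income threshold by fitting separate linear growth regressions on each side of the split and keeping the value that minimizes the residual sum of squares. The synthetic data and variable names are illustrative assumptions, not the authors' dataset.

```python
# Minimal sketch of a threshold search for a two-regime growth regression.
# All data below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
income0 = rng.uniform(5_000, 40_000, 500)   # initial per-capita income
growth = (0.02 - 1e-7 * income0
          + 0.01 * (income0 > 19_264)       # built-in regime shift
          + rng.normal(0, 0.005, 500))

def ssr_for_threshold(tau):
    """Sum of squared residuals from fitting separate linear growth
    regressions below and above the candidate threshold tau."""
    total = 0.0
    for mask in (income0 <= tau, income0 > tau):
        X = np.column_stack([np.ones(mask.sum()), income0[mask]])
        beta, *_ = np.linalg.lstsq(X, growth[mask], rcond=None)
        total += np.sum((growth[mask] - X @ beta) ** 2)
    return total

# search interior quantiles so both regimes keep enough observations
candidates = np.quantile(income0, np.linspace(0.15, 0.85, 200))
tau_hat = min(candidates, key=ssr_for_threshold)
print(f"estimated threshold: ${tau_hat:,.0f}")
```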
Abstract:
The objective of this paper is to analyze why firms in some industries locate in specialized economic environments (localization economies) while those in other industries prefer large city locations (urbanization economies). To this end, we examine the location decisions of new manufacturing firms in Spain at the city level and for narrowly defined industries (three-digit level). First, we estimate firm location models to obtain estimates that reflect the importance of localization and urbanization economies in each industry. In a second step, we regress these estimates on industry characteristics that are related to the potential importance of three agglomeration theories, namely, labor market pooling, input sharing and knowledge spillovers. Localization effects are low and urbanization effects are high in knowledge-intensive industries, suggesting that firms (partly) locate in large cities to reap the benefits of inter-industry knowledge spillovers. We also find that localization effects are high in industries that employ workers whose skills are more industry-specific, suggesting that industries (partly) locate in specialized economic environments to share a common pool of specialized workers.
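A minimal sketch of the two-step design described above, on synthetic placeholder data rather than the Spanish firm-entry data: step 1 estimates, industry by industry, how entry responds to a localization measure and an urbanization measure; step 2 regresses the step-1 localization estimates on an industry characteristic.

```python
# Two-step sketch: per-industry location regressions, then a regression of
# the estimated localization effects on an industry characteristic.
# All data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_ind, n_cities = 20, 300
skill_specificity = rng.uniform(0, 1, n_ind)    # industry characteristic
loc_effect = np.empty(n_ind)

for j in range(n_ind):                          # step 1: per-industry model
    specialization = rng.uniform(0, 1, n_cities)  # own-industry employment share
    city_size = rng.uniform(0, 1, n_cities)
    entries = ((0.2 + skill_specificity[j]) * specialization
               + 0.3 * city_size + rng.normal(0, 0.1, n_cities))
    X = np.column_stack([np.ones(n_cities), specialization, city_size])
    beta, *_ = np.linalg.lstsq(X, entries, rcond=None)
    loc_effect[j] = beta[1]                     # localization estimate

Z = np.column_stack([np.ones(n_ind), skill_specificity])  # step 2
gamma, *_ = np.linalg.lstsq(Z, loc_effect, rcond=None)
print(f"step-2 slope on skill specificity: {gamma[1]:.2f}")  # ~1 by construction
```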
Abstract:
Fault location has been studied in depth for transmission lines due to its importance in power systems. Nowadays, the problem of fault location on distribution systems is receiving special attention, mainly because of power quality regulations. In this context, this paper presents a software application developed in Matlab™ that automatically calculates the location of a fault in a distribution power system, starting from the voltages and currents measured at the line terminal and the model of the distribution power system. The application is based on an N-ary tree structure, which is well suited to this application because of the highly branched and non-homogeneous nature of distribution systems, and has been developed for single-phase, two-phase, two-phase-to-ground, and three-phase faults. The implemented application is tested using fault data from a real electrical distribution power system.
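The following sketch illustrates the N-ary tree idea: each node represents a line section of the branched, non-homogeneous feeder, and a depth-first walk enumerates the root-to-leaf paths along which a fault locator would search. The class names and per-section impedance model are illustrative assumptions, not the paper's Matlab code.

```python
# Minimal N-ary tree sketch of a branched distribution feeder.
from dataclasses import dataclass, field

@dataclass
class Section:
    name: str
    length_km: float
    z_ohm_per_km: complex                    # series impedance of this section
    children: list["Section"] = field(default_factory=list)

def paths_from_root(node, prefix=()):
    """Yield every root-to-leaf path; a fault locator would test each one."""
    path = prefix + (node,)
    if not node.children:
        yield path
    for child in node.children:
        yield from paths_from_root(child, path)

# Illustrative feeder with two branches and a lateral (made-up values).
feeder = Section("main", 2.0, 0.3 + 0.4j, [
    Section("branch-A", 1.5, 0.3 + 0.4j),
    Section("branch-B", 0.8, 0.5 + 0.6j, [Section("lateral-B1", 0.4, 0.5 + 0.6j)]),
])

for path in paths_from_root(feeder):
    z_total = sum(s.length_km * s.z_ohm_per_km for s in path)
    print(" -> ".join(s.name for s in path), f"|Z| = {abs(z_total):.2f} ohm")
```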
Abstract:
This paper presents and compares two approaches to estimate the origin (upstream or downstream) of voltage sags registered in distribution substations. The first approach is based on the application of a single rule dealing with features extracted from the impedances during the fault, whereas the second method exploits the variability of the waveforms from a statistical point of view. Both approaches have been tested with voltage sags registered in distribution substations, and their advantages, drawbacks and comparative results are presented.
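A toy version of the first, single-rule approach might look as follows: estimate the apparent impedance seen from the substation before and during the sag, and classify the origin from how it changes. The specific rule used here (impedance magnitude drops for downstream faults) is an illustrative assumption, not necessarily the paper's exact feature.

```python
# Hypothetical single-rule sag-origin classifier; the rule is an assumption.
import numpy as np

def sag_origin(v_pre, i_pre, v_sag, i_sag):
    """Classify sag origin from the change in apparent impedance |V/I|."""
    z_pre = np.abs(v_pre / i_pre)
    z_sag = np.abs(v_sag / i_sag)
    return "downstream" if z_sag < z_pre else "upstream"

# phasors in per-unit (made-up numbers)
print(sag_origin(1.0 + 0j, 0.1 + 0.02j, 0.6 + 0j, 0.3 + 0.1j))
```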
Abstract:
This paper focuses on the problem of locating single-phase faults in mixed distribution electric systems, with overhead lines and underground cables, using voltage and current measurements at the sending end and a sequence model of the network. Since calculating the series impedance of underground cables is not as simple as in the case of overhead lines, the paper proposes a methodology to estimate the zero-sequence impedance of underground cables starting from previous single-phase faults that occurred in the system, in which an electric arc occurred at the fault location. For this reason, the signal is first pretreated to eliminate its voltage peaks, so that the analysis can work with a signal as close to a sine wave as possible.
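The pretreatment step described above can be sketched as follows: clip the arc-induced voltage peaks and keep only the fundamental component, so the analysis works with a signal as close to a sine wave as possible. The sampling parameters and the synthetic arc signal are assumptions for illustration.

```python
# Sketch of peak removal plus fundamental extraction on a synthetic record.
import numpy as np

fs, f0, n = 3_200, 50, 640                 # sample rate, fundamental, samples
t = np.arange(n) / fs
v = np.sin(2 * np.pi * f0 * t)             # idealized recorded voltage
v[::37] += 0.8 * np.sign(v[::37] + 1e-12)  # arc-like peaks riding on the wave

clipped = np.clip(v, -1.1, 1.1)            # step 1: remove the peaks
spectrum = np.fft.rfft(clipped)
k0 = round(f0 * n / fs)                    # index of the fundamental bin
keep = np.zeros_like(spectrum)
keep[k0] = spectrum[k0]                    # step 2: keep only the fundamental
clean = np.fft.irfft(keep, n)

residual = np.abs(clean - np.sin(2 * np.pi * f0 * t)).max()
print(f"max deviation from a pure sine: {residual:.3f}")
```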
Abstract:
The work presented in this paper belongs to the power quality knowledge area and deals with voltage sags in power transmission and distribution systems. Propagating throughout the power network, voltage sags can cause numerous problems for domestic and industrial loads and entail significant financial costs. To impose penalties on the responsible party and to improve monitoring and mitigation strategies, sags must be located in the power network. With this objective, this paper proposes a new method for associating a sag waveform with its origin in transmission and distribution networks. It solves this problem by developing hybrid methods that use multiway principal component analysis (MPCA) as a dimension-reduction tool. MPCA re-expresses sag waveforms in a new subspace with just a few scores. We train some well-known classifiers with these scores and exploit them for the classification of future sags. The capabilities of the proposed method for dimension reduction and classification are examined using real data gathered from three substations in Catalonia, Spain. The classification rates obtained confirm the effectiveness of the developed hybrid methods as new tools for sag classification.
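A minimal sketch of the MPCA-then-classify pipeline: unfold a (sags × time samples × channels) array into a matrix, keep a few principal-component scores, and train an off-the-shelf classifier on them. The synthetic waveforms, component count and choice of classifier are illustrative assumptions.

```python
# Unfold-then-PCA sketch of the hybrid MPCA + classifier pipeline.
# Data are synthetic placeholders, not the Catalan substation records.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
sags = rng.normal(size=(120, 256, 3))      # 120 sags, 256 samples, 3 phases
labels = rng.integers(0, 2, 120)           # 0 = upstream, 1 = downstream
unfolded = sags.reshape(120, -1)           # multiway unfolding to 2-D

scores = PCA(n_components=5).fit_transform(unfolded)  # "just a few scores"
clf = KNeighborsClassifier(n_neighbors=3).fit(scores[:100], labels[:100])
print("held-out accuracy:", clf.score(scores[100:], labels[100:]))
```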
Abstract:
Globalization involves several facility location problems that need to be handled at large scale. Location Allocation (LA) is a combinatorial problem in which the distances among points in the data space matter. Taking advantage of this distance property of the domain, we exploit the capability of clustering techniques to partition the data space in order to convert an initially large LA problem into several simpler LA problems. In particular, our motivating problem involves a huge geographical area that can be partitioned under overall conditions. We present different types of clustering techniques and then perform a cluster analysis over our dataset in order to partition it. After that, we solve the LA problem by applying a simulated annealing algorithm to the clustered and non-clustered data in order to determine how profitable the clustering is and which of the presented methods is the most suitable.
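A minimal sketch of the cluster-then-solve idea: partition the demand points with k-means, then run a small simulated-annealing search within each cluster, here simplified to locating one facility per cluster. The data, cooling schedule and single-facility simplification are assumptions for illustration.

```python
# Cluster-then-anneal sketch on synthetic demand points.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
points = rng.uniform(0, 100, size=(1_000, 2))
parts = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(points)

def anneal_one_facility(pts, steps=2_000, t0=50.0):
    """Locate one facility minimizing total Euclidean distance to pts."""
    cur = pts.mean(axis=0)
    cur_cost = np.linalg.norm(pts - cur, axis=1).sum()
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-9        # linear cooling schedule
        cand = cur + rng.normal(0, 1.0, 2)        # local random move
        cost = np.linalg.norm(pts - cand, axis=1).sum()
        if cost < cur_cost or rng.random() < np.exp((cur_cost - cost) / temp):
            cur, cur_cost = cand, cost
    return cur, cur_cost

total = sum(anneal_one_facility(points[parts == c])[1] for c in range(5))
print(f"clustered solution cost: {total:.0f}")
```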
Abstract:
This paper considers the estimation of the geographical scope of industrial location determinants. While previous studies impose strong assumptions on the weighting scheme of the spatial neighbour matrix, we propose a flexible parametrisation that allows for different (distance-based) definitions of neighbourhood and different weights for the neighbours. In particular, we estimate how far indirect marginal effects can reach and discuss how to report them. We also show that the use of smooth transition functions provides tools for policy analysis that are not available in traditional threshold modelling. Keywords: count data models, industrial location, smooth transition functions, threshold models. JEL-Codes: C25, C52, R11, R30.
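The contrast between a hard threshold and a smooth transition can be sketched as follows, where gamma (slope) and c (location) are the transition parameters; the logistic distance-decay form and all numbers are illustrative assumptions, not the paper's estimated specification.

```python
# Hard-threshold vs. logistic smooth-transition neighbour weights at
# distance d; gamma and c are the transition parameters (made-up values).
import numpy as np

def hard_threshold(d, c):
    """Weight 1 for neighbours within c, 0 beyond: an abrupt cutoff."""
    return (d <= c).astype(float)

def smooth_transition(d, gamma, c):
    """Logistic weight decaying smoothly around c with slope gamma."""
    return 1.0 / (1.0 + np.exp(gamma * (d - c)))

d = np.array([5.0, 15.0, 25.0, 50.0])
print(hard_threshold(d, c=20.0))                       # [1. 1. 0. 0.]
print(smooth_transition(d, gamma=0.3, c=20.0).round(2))
```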
Abstract:
In recent years, the large deployment of mobile devices has led to a massive increase in the volume of records of where people have been and when they were there. The analysis of these spatio-temporal data can supply high-level human behavior information valuable to urban planners, local authorities, and designers of location-based services. In this paper, we describe our approach to collecting and analyzing the history of physical presence of tourists from the digital footprints they publicly disclose on the web. Our work takes place in the Province of Florence in Italy, where insights on visitors' flows and on the nationalities of tourists who do not sleep in town have been limited to information from survey-based hotel and museum frequentation. In fact, most local authorities in the world must face this dearth of data on tourist dynamics. In this case study, we used a corpus of geographically referenced photos taken in the province by 4,280 photographers over a period of 2 years. Based on the disclosed locations of the photos, we design geovisualizations to reveal tourist concentration and spatio-temporal flows. Our initial results provide insights on the density of tourists, the points of interest they visit, as well as the most common trajectories they follow.
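A minimal sketch of the density step, on synthetic coordinates rather than the Florence photo corpus: bin geotagged photo locations into a grid to reveal concentrations of tourists.

```python
# Grid-density sketch for geotagged photos; coordinates are synthetic.
import numpy as np

rng = np.random.default_rng(3)
lat = rng.normal(43.77, 0.05, 5_000)   # scattered around Florence
lon = rng.normal(11.25, 0.05, 5_000)
density, _, _ = np.histogram2d(lat, lon, bins=50)
print("hottest cell has", int(density.max()), "photos")
```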
Abstract:
Does additional government spending improve the electoral chances of incumbent political parties? This paper provides the first quasi-experimental evidence on this question. Our research design exploits discontinuities in federal funding to local governments in Brazil around several population cutoffs over the period 1982-1985. We find that extra fiscal transfers resulted in a 20% increase in local government spending per capita, and an increase of about 10 percentage points in the re-election probability of local incumbent parties. We also find positive effects of the government spending on education outcomes and earnings, which we interpret as indirect evidence of public service improvements. Together, our results provide evidence that electoral rewards encourage incumbents to spend part of additional revenues on public services valued by voters, a finding in line with agency models of electoral accountability.
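A minimal sketch of the regression-discontinuity logic behind this design: compare outcomes just below and just above a population cutoff using local linear fits on each side. The cutoff value, bandwidth and data are illustrative assumptions, not the Brazilian transfer data.

```python
# Local-linear regression-discontinuity sketch on synthetic data.
import numpy as np

rng = np.random.default_rng(4)
pop = rng.uniform(8_000, 12_000, 2_000)
cutoff, bw = 10_000, 1_000                   # hypothetical cutoff, bandwidth
spend = 100 + 0.002 * pop + 20 * (pop >= cutoff) + rng.normal(0, 5, 2_000)

def value_at_cutoff(mask):
    """Intercept of a linear fit in (pop - cutoff): outcome at the cutoff."""
    X = np.column_stack([np.ones(mask.sum()), pop[mask] - cutoff])
    beta, *_ = np.linalg.lstsq(X, spend[mask], rcond=None)
    return beta[0]

near = np.abs(pop - cutoff) <= bw
jump = value_at_cutoff(near & (pop >= cutoff)) - value_at_cutoff(near & (pop < cutoff))
print(f"estimated spending jump at cutoff: {jump:.1f}")   # ~20 by construction
```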
Abstract:
The past four decades have witnessed explosive growth in the field of network-based facility location modeling. This is not at all surprising, since location policy is one of the most profitable areas of applied systems analysis in regional science and offers ample theoretical and applied challenges. Location-allocation models seek the location of facilities and/or services (e.g., schools, hospitals, and warehouses) so as to optimize one or several objectives, generally related to the efficiency of the system or to the allocation of resources. This paper concerns the location of public-sector facilities or services in discrete space or networks, such as emergency services (ambulances, fire stations, and police units), school systems and postal facilities. The paper is structured as follows: first, we focus on public facility location models that use some type of coverage criterion, with special emphasis on emergency services. The second section examines models based on the P-Median problem and some of the issues faced by planners when implementing this formulation in real-world locational decisions. Finally, the last section examines new trends in public-sector facility location modeling.
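The P-Median problem mentioned above can be stated compactly: choose p facility sites minimizing the total demand-weighted distance to the nearest open site. The brute-force search and random instance below are for illustration only; realistic instances need the heuristics the survey discusses.

```python
# Tiny exhaustive P-Median sketch on a random instance.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
sites = rng.uniform(0, 10, (8, 2))       # candidate facility sites
demand = rng.uniform(0, 10, (40, 2))     # demand points
w = rng.uniform(1, 5, 40)                # demand weights
dist = np.linalg.norm(demand[:, None] - sites[None], axis=2)

def cost(S):
    """Total weighted distance of each demand point to its nearest open site."""
    return (w * dist[:, list(S)].min(axis=1)).sum()

p = 3
best = min(combinations(range(len(sites)), p), key=cost)
print("chosen sites:", best, f"cost: {cost(best):.1f}")
```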
Abstract:
In this paper we propose a metaheuristic to solve a new version of the Maximum Capture Problem (MCP). In the original MCP, market capture is obtained through lower traveling distances or lower traveling times; in this new version, not only the traveling time but also the waiting time affects the market share. This problem is hard to solve using standard optimization techniques. Metaheuristics are shown to offer accurate results within acceptable computing times.
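A minimal sketch of the modified capture rule: each customer chooses the facility minimizing travel time plus expected waiting time, modeled here, as an assumption, by an M/M/1 queue. All parameters are illustrative.

```python
# Travel-plus-waiting-time choice rule; queue model and numbers are assumptions.
import numpy as np

travel = np.array([[4.0, 6.0],           # customer x facility travel times
                   [7.0, 3.0],
                   [5.0, 5.0]])
service_rate = 2.0                        # mu, same at both facilities
arrivals = np.array([1.2, 1.5])           # lambda at each facility

# M/M/1 expected waiting time in queue: Wq = lambda / (mu * (mu - lambda))
wait = arrivals / (service_rate * (service_rate - arrivals))
choice = np.argmin(travel + wait, axis=1)
print("facility chosen by each customer:", choice)
```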
Abstract:
One of the assumptions of the Capacitated Facility Location Problem (CFLP) is that demand is known and fixed. Most often, this is not the case when managers make strategic decisions such as locating facilities and assigning demand points to those facilities. In this paper we consider demand as stochastic and model each of the facilities as an independent queue. Stochastic models of manufacturing systems and deterministic location models are put together in order to obtain a formula for the backlogging probability at a potential facility location. Several solution techniques have been proposed to solve the CFLP. One of the most recently proposed heuristics, a Reactive Greedy Randomized Adaptive Search Procedure (GRASP), is implemented in order to solve the model formulated. We present some computational experiments in order to evaluate the heuristic's performance and to illustrate the use of this new formulation for the CFLP. The paper finishes with a simple simulation exercise.
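The queueing ingredient can be sketched as follows: model a candidate facility as a finite-buffer M/M/1/K queue and compute the probability that an arriving order finds the buffer full. Treating that blocking probability as the backlogging probability is an illustrative simplification, not necessarily the paper's exact formula.

```python
# Blocking probability of an M/M/1/K queue (classic closed form).
def blocking_probability(arrival_rate, service_rate, buffer_k):
    """P(arriving order finds the system full) for an M/M/1/K queue."""
    rho = arrival_rate / service_rate
    if rho == 1.0:
        return 1.0 / (buffer_k + 1)
    return (1 - rho) * rho**buffer_k / (1 - rho**(buffer_k + 1))

# illustrative parameters for one candidate facility
print(f"{blocking_probability(4.0, 5.0, buffer_k=10):.4f}")
```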
Abstract:
In this paper, we assess the determinants of the long-run persistence of local culture and examine the success of policy interventions designed to change attitudes. We analyze anti-Semitic attitudes drawing on individual-level survey results from Germany's social value survey in 1996 and 2006. On average, we find that historical voting patterns for anti-Semitic parties between 1890 and 1933 are powerful predictors of anti-Jewish attitudes today. There is evidence that transmission takes place both vertically (parent to child) and horizontally (among peers). Policy modified German views on Jews in important ways: the cohort that grew up under the Nazi regime shows significantly higher levels of anti-Semitism. After 1945, the victorious Allies implemented denazification programs in their zones of occupation. We use differences in these policies between the occupying powers as a source of identifying variation. The US and French zones today still show high anti-Semitism, reflecting an ambitious but botched attempt at denazification. In contrast, the British and Soviet zones register much lower levels of Jew-hatred.
Abstract:
We consider an entrepreneur who is the sole producer of a cost-reducing skill, but who, when hiring a team to use the skill, cannot prevent collusive trade in innovation-related knowledge between employees and competitors. We show that there are two types of diffusion-avoiding strategies the entrepreneur can use to preempt collusive communication: i) setting up a large productive capacity (the traditional firm) and ii) keeping a small team (the lean firm). The traditional firm is characterized by its many "marginal" employees who work short days, receive flat wages and are incompletely informed about the innovation. The lean firm has few employees and engages in complete information sharing among its members, who are paid through stock option schemes. We find that the lean firm is superior to the traditional firm when technological entry costs are low and when the sector is immature.