113 results for Error-location numbers
Abstract:
The past four decades have witnessed an explosive growth in the field of network-based facility location modeling. This is not at all surprising, since location policy is one of the most profitable areas of applied systems analysis in regional science, and ample theoretical and applied challenges are offered. Location-allocation models seek the location of facilities and/or services (e.g., schools, hospitals, and warehouses) so as to optimize one or several objectives, generally related to the efficiency of the system or to the allocation of resources. This paper concerns the location of facilities or services in discrete space or networks that are related to the public sector, such as emergency services (ambulances, fire stations, and police units), school systems and postal facilities. The paper is structured as follows: first, we focus on public facility location models that use some type of coverage criterion, with special emphasis on emergency services. The second section examines models based on the P-Median problem and some of the issues faced by planners when implementing this formulation in real-world locational decisions. Finally, the last section examines new trends in public sector facility location modeling.
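The P-Median problem mentioned above seeks p facility sites minimizing the total demand-weighted distance to the nearest open facility. A minimal greedy sketch of that idea is shown below; the `dist` matrix and the helper name `p_median_greedy` are illustrative assumptions, not taken from any of the papers summarized here, and a real planner would use an exact or metaheuristic solver.

```python
def p_median_greedy(dist, p):
    """Greedy heuristic for the p-median problem.

    dist[i][j]: distance from demand node i to candidate site j.
    Adds one site at a time, each time picking the site that most
    reduces the total distance from every demand node to its
    nearest open facility. Returns the set of p chosen sites.
    """
    n_demand = len(dist)
    n_sites = len(dist[0])
    chosen = set()
    for _ in range(p):
        best_site, best_cost = None, float("inf")
        for j in range(n_sites):
            if j in chosen:
                continue
            # cost if site j were opened in addition to `chosen`
            cost = sum(min(dist[i][k] for k in chosen | {j})
                       for i in range(n_demand))
            if cost < best_cost:
                best_site, best_cost = j, cost
        chosen.add(best_site)
    return chosen
```

The greedy heuristic is not optimal in general, but it illustrates the objective that exact P-Median formulations optimize.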
Abstract:
The paper presents a new model based on the basic Maximum Capture model, MAXCAP. The new Chance Constrained Maximum Capture model introduces a stochastic threshold constraint, which recognises the fact that a facility can be open only if a minimum level of demand is captured. A metaheuristic based on the MAX MIN ANT system and a tabu search procedure is presented to solve the model. This is the first time that the MAX MIN ANT system has been adapted to solve a location problem. Computational experience and an application to a 55-node network are also presented.
Abstract:
A new direction of research in Competitive Location theory incorporates theories of Consumer Choice Behavior in its models. Following this direction, this paper studies the importance of consumer behavior with respect to distance or transportation costs in the optimality of locations obtained by traditional Competitive Location models. To do this, it considers different ways of defining a key parameter in the basic Maximum Capture model (MAXCAP). This parameter reflects various ways of taking distance into account, based on several Consumer Choice Behavior theories. The optimal locations, and the deviation in demand captured when the optimal locations of the other models are used instead of the true ones, are computed for each model. A metaheuristic based on GRASP and a tabu search procedure is presented to solve all the models. Computational experience and an application to a 55-node network are also presented.
Abstract:
In this paper we propose a metaheuristic to solve a new version of the Maximum Capture Problem. In the original MCP, market capture is determined by lower traveling distance or lower traveling time; in this new version, not only the traveling time but also the waiting time affects the market share. This problem is hard to solve using standard optimization techniques. Metaheuristics are shown to offer accurate results within acceptable computing times.
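The capture mechanism shared by the MAXCAP variants above can be sketched as follows: a demand node is captured by the entering firm when the firm's nearest facility is strictly closer than the nearest competitor facility. The function and variable names (`captured_demand`, `my_sites`, and so on) are illustrative assumptions, not the papers' notation, and the tie-breaking rule (ties go to the incumbent) is one common convention among several.

```python
def captured_demand(demand, my_sites, competitor_sites, dist):
    """Demand captured by the entering firm.

    demand: dict node -> demand weight
    dist[a][b]: distance (or travel time) between nodes a and b
    A node's demand is captured when the entering firm's closest
    facility is strictly closer than the closest competitor
    facility; ties are resolved in favor of the incumbent.
    """
    total = 0.0
    for node, w in demand.items():
        d_mine = min(dist[node][s] for s in my_sites)
        d_comp = min(dist[node][s] for s in competitor_sites)
        if d_mine < d_comp:
            total += w
    return total
```

The waiting-time variant described above would replace `dist` with a combined travel-plus-waiting cost, which is what makes the problem harder to optimize.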
Abstract:
New location models are presented here for exploring the reduction of facilities in a region. The first of these models considers firms ceding market share to competitors under situations of financial exigency. The goal of this model is to cede the least market share, i.e., to retain as much of the customer base as possible while shedding costly outlets. The second model considers a firm essentially without competition that must shrink its services for economic reasons. This firm is assumed to close outlets so that the degradation of service is limited. An example is offered within a competitive environment to demonstrate the usefulness of this modeling approach.
Abstract:
Models are presented for the optimal location of hubs in airline networks that take congestion effects into consideration. Hubs, which are the most congested airports, are modeled as M/D/c queuing systems, that is, Poisson arrivals, deterministic service time, and {\em c} servers. A formula is derived for the probability of a number of customers in the system, which is later used to propose a probabilistic constraint. This constraint limits the probability of more than {\em b} airplanes being in queue to be less than a value $\alpha$. Due to the computational complexity of the formulation, the model is solved using a metaheuristic based on tabu search. Computational experience is presented.
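The paper's hubs are M/D/c queues, whose state probabilities have no simple closed form. As a rough illustration of the probabilistic constraint (probability of more than b airplanes in queue below a level alpha), the sketch below substitutes the analytically simpler M/M/c model, i.e. exponential rather than deterministic service times; that substitution, and the function names, are assumptions made here purely for illustration.

```python
from math import factorial

def mmc_state_probs(lam, mu, c, n_max):
    """Steady-state probabilities p_0..p_{n_max} of an M/M/c queue
    (a tractable stand-in for the paper's M/D/c model).
    lam: arrival rate, mu: per-server service rate, c: servers."""
    r = lam / mu
    rho = r / c
    assert rho < 1, "queue must be stable"
    # normalization constant p0 (standard M/M/c formula)
    s = sum(r**n / factorial(n) for n in range(c))
    s += r**c / (factorial(c) * (1 - rho))
    p0 = 1 / s
    probs = []
    for n in range(n_max + 1):
        if n < c:
            probs.append(p0 * r**n / factorial(n))
        else:
            probs.append(p0 * r**n / (factorial(c) * c**(n - c)))
    return probs

def queue_constraint_ok(lam, mu, c, b, alpha, n_max=500):
    """Check the probabilistic constraint P(queue length > b) <= alpha,
    i.e. P(more than c + b customers in the system) <= alpha."""
    probs = mmc_state_probs(lam, mu, c, n_max)
    p_exceed = 1 - sum(probs[: c + b + 1])
    return p_exceed <= alpha
```

In a location model, a check like `queue_constraint_ok` would be evaluated for each candidate hub, with the arrival rate `lam` depending on the traffic routed through it.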
Abstract:
In this paper we address the issue of locating hierarchical facilities in the presence of congestion. Two hierarchical models are presented, where lower level servers attend requests first, and then, some of the served customers are referred to higher level servers. In the first model, the objective is to find the minimum number of servers and their locations that will cover a given region with a distance or time standard. The second model is cast as a Maximal Covering Location formulation. A heuristic procedure is then presented together with computational experience. Finally, some extensions of these models that address other types of spatial configurations are offered.
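A Maximal Covering Location formulation, as used in the second model, can be approximated greedily: repeatedly open the site that covers the most as-yet-uncovered demand within the distance or time standard. The data layout and names below are illustrative assumptions, not the paper's heuristic.

```python
def maximal_covering(candidates, demand, p):
    """Greedy heuristic for the Maximal Covering Location Problem:
    choose p sites maximizing the total demand covered.

    candidates: dict site -> set of demand nodes the site covers
                (i.e. nodes within the distance/time standard)
    demand:     dict node -> demand weight
    """
    chosen, covered = [], set()
    for _ in range(p):
        best, gain = None, -1.0
        for site, nodes in candidates.items():
            if site in chosen:
                continue
            # marginal demand this site would newly cover
            g = sum(demand[n] for n in nodes - covered)
            if g > gain:
                best, gain = site, g
        chosen.append(best)
        covered |= candidates[best]
    return chosen, covered
```

Because coverage is a submodular objective, this greedy rule carries a well-known constant-factor guarantee, which is one reason covering heuristics of this shape are popular in public facility planning.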
Abstract:
We propose a model and solution methods for locating a fixed number of multiple-server, congestible common service centers or congestible public facilities. Locations are chosen so as to minimize consumers' congestion (or queuing) and travel costs, considering that all the demand must be served. Customers choose the facilities to which they travel in order to receive service at minimum travel and congestion cost. As a proxy for this criterion, total travel and waiting costs are minimized. The travel cost is a general function of the origin and destination of the demand, while the congestion cost is a general function of the number of customers in queue at the facilities.
Abstract:
Whereas much literature exists on choice overload, little is known about the effects of the number of alternatives in donation decisions. How do these affect both the size and distribution of donations? We hypothesize that donations are affected by the reputation of recipients and increase with their number, albeit at a decreasing rate. Allocations to recipients reflect different concepts of fairness: equity and equality. Both may be employed but, since they differ in cognitive and emotional costs, the number of recipients is important. Using a cognitive (emotional) argument, distributions become more uniform (skewed) as numbers increase. In a survey, respondents indicated how they would donate lottery winnings of 50 euros. Results indicated that more was donated to NGOs that respondents knew better. Second, total donations increased with the number of recipients, albeit at a decreasing rate. Third, distributions of donations became more skewed as numbers increased. We comment on theoretical and practical implications.
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical {\sc vc} dimension, empirical {\sc vc} entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
Abstract:
The results of the application of geophysical electromagnetic prospecting methods to the resolution of the problems of the spatial location of the Quaternary travertine formations of the Banyoles depression are presented.
Abstract:
Upper bounds for the Betti numbers of generalized Cohen-Macaulay ideals are given. In particular, for the case of non-degenerate, reduced and irreducible projective curves we get an upper bound which only depends on their degree.
Abstract:
The aim of this paper is to give an explicit formula for the numbers of abelian extensions of a p-adic number field and to study the generating function of these numbers. More precisely, we give the number of abelian extensions with given degree and ramification index, and the number of abelian extensions with given degree of any local field of characteristic zero. Moreover, we give a concrete expression of a generating function for these last numbers.
Abstract:
The COMPTEL unidentified source GRO J1411-64 was observed by INTEGRAL, and its central part also by XMM-Newton. The data analysis shows no hint of new detections at hard X-rays. The upper limits in flux presented herein constrain the energy spectrum of whatever was producing GRO J1411-64, imposing, in the framework of earlier COMPTEL observations, the existence of a peak in power output located somewhere between 300 and 700 keV for the so-called low state. The Circinus Galaxy is the only source detected within the 4$\sigma$ location error of GRO J1411-64, but it can be safely excluded as a possible counterpart: the extrapolation of its energy spectrum is well below that of GRO J1411-64 at MeV energies. 22 significant sources (likelihood $> 10$) were extracted and analyzed from the XMM-Newton data. Only one of these sources, XMMU J141255.6-635932, is spectrally compatible with GRO J1411-64, although the fact that the soft X-ray observations do not cover the full extent of the COMPTEL source position uncertainty makes an association hard to quantify and thus risky. The unique peak of the power output at high energies (hard X-rays and gamma-rays) resembles that found in the SEDs of blazars or microquasars. However, an analysis using a microquasar model consisting of a magnetized conical jet filled with relativistic electrons, which radiate through synchrotron emission and inverse Compton scattering with star, disk, corona and synchrotron photons, shows that it is hard to comply with all observational constraints. This and the non-detection at hard X-rays introduce an a-posteriori question mark over the physical reality of this source, which is discussed in some detail.
Abstract:
Temporal variability was studied in the common sea urchin Paracentrotus lividus through the analysis of the genetic composition of three yearly cohorts sampled over two consecutive springs at a locality in the northwestern Mediterranean. Individuals were aged using the growth ring patterns observed in tests, and samples were genotyped for five microsatellite loci. No reduction of genetic diversity was observed relative to a sample of the adult population from the same location or within cohorts across years. FST and AMOVA results indicated that the differentiation between cohorts is rather shallow and not significant, as most variability is found within cohorts and within individuals. This mild differentiation translated into estimates of effective population size of 90-100 individuals. When the observed excess of homozygotes was taken into account, the estimate of the average number of breeders increased to c. 300 individuals. Given our restricted sampling area and the known small-scale heterogeneity in recruitment in this species, our results suggest that at stretches of a few kilometres of shoreline, large numbers of progenitors are likely to contribute to the larval pool at each reproduction event. Intercohort variation in our samples is six times smaller than the spatial variation between adults of four localities in the western Mediterranean. Our results indicate that, notwithstanding the stochastic events that take place during the long planktonic phase and during the settlement and recruitment processes, reproductive success in this species is high enough to produce cohorts that are genetically diverse and show little differentiation between them. Further research is needed before the link between genetic structure and underlying physical and biological processes can be well established.