983 results for Cost-Distance
Abstract:
With the development of industry and the acceleration of urbanization, air quality problems and their influence on human health have recently received close attention from international communities and governments. Industrialization releases large quantities of industrial gases and dust, while urbanization increases the number of motor vehicles. Compared with traditional chemical methods, the magnetic method is simple, rapid, accurate, low-cost and non-destructive for monitoring air pollution, and it has been widely applied in domestic and international studies. In this thesis, with the aim of better monitoring air pollution, we selected two sets of plant samples for magnetic study: roadside perennial pine trees (Pinus pumila Regel) along a highway linking Beijing City and the Capital International Airport, and tree bark and tree ring cores from willows (Salix matsudana) near a smelting plant in northeast Beijing. Through systematic magnetic measurements on these samples, the mechanism of the magnetic response of tree tissues (e.g. tree leaves, tree rings) to both short- and long-term environmental pollution was established, so that the range, degree and history of pollution from human activities on different time scales could be assessed. A series of rock magnetic experiments on tree leaves shows that the primary magnetic mineral in the leaf samples is magnetite, in the pseudo-single-domain (PSD) grain size range of 0.2-5.0 μm. Magnetite concentration and grain size in the leaves decrease with increasing distance from the highway asphalt surface, suggesting that the strong magnetic response to traffic pollution is localized within about 2 m of the road surface. In addition, roadside trees and rainwater can effectively reduce the concentration of traffic-derived particulate matter (PM) in the atmosphere. This study is the first to use magnetic methods on tree rings to investigate the relationship between smelting activities and environmental change. The results indicate that magnetic particles are ubiquitous in tree bark and trunk wood. Magnetic techniques, including low-temperature experiments, successive acquisition of IRM, hysteresis loops and SIRM measurements, suggest that the magnetic particles are dominated by magnetite in the pseudo-single-domain state. Comparison of the magnetic properties of trunk and branch cores collected from different directions and heights implies that the accumulation of magnetic particles depends on both sampling direction and height. Trunk wood facing the pollution source contains significantly more magnetic particles than the other sides. These observations indicate that magnetic particles are most likely intercepted and collected by the bark first, then enter the xylem tissue by translocation during the growing season, and are finally enclosed in a tree ring by lignification. The correlation between magnetic properties, such as time-dependent SIRM values of the tree ring cores, and the annual steel output of the smelting factory is significant. Considering the dependence of the magnetic properties on sampling direction, height and ring core, we propose that magnetic particles in the xylem cannot move between tree rings.
Accordingly, the SIRM and other magnetic parameters of tree ring cores from the source-facing side can contribute to the historical study of atmospheric pollution produced by heavy metal smelting, and isoline diagrams of the SIRM values of all the tree rings indicate that air pollution has become increasingly severe. We believe that a comprehensive rock magnetic study is an effective method for determining the concentration and grain size of ferromagnetic minerals in atmospheric PM, and is therefore a rapid and feasible technique for monitoring atmospheric pollution.
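As a rough illustration of the final step described above, the sketch below correlates annual tree-ring SIRM values with the smelter's annual steel output; all numbers are hypothetical placeholders, not data from the thesis.

```python
# Minimal sketch: Pearson correlation between annual tree-ring SIRM values and
# the smelter's annual steel output. Values below are hypothetical placeholders
# for consecutive ring years, not measurements from the thesis.
import numpy as np

sirm         = np.array([1.2, 1.5, 1.9, 2.4, 2.6, 3.1])  # ring SIRM, arbitrary units
steel_output = np.array([40, 48, 55, 70, 76, 90])         # annual steel yield, kt (assumed)

r = np.corrcoef(sirm, steel_output)[0, 1]
print(f"Pearson r between ring SIRM and steel output: {r:.2f}")
```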
Abstract:
At present the main targets of oil and gas exploration and development (E&D) are no longer structural oil-gas pools but subtle lithological oil-gas reservoirs. Since the late 1990s, the share of this kind of pool in newly added oil reserves has grown steadily, as it has in the eastern oilfields. The third oil-gas resource evaluation indicates that the main future exploration target of the Jiyang depression is lithological oil-gas pools. However, the lack of effective methods for locating this kind of pool makes E&D difficult and costly. In view of the urgent demands of E&D, this paper studies and analyzes in depth the theory and application of seismic attributes for predicting and describing lithological oil-gas reservoirs. Good results are obtained by making full use of the abundant physical and reservoir information and the notable lateral continuity contained in seismic data, in combination with well logging, drilling and geology. ① Based on extensive research and on the particular geological features of the Shengli oilfield, substantial progress is made on several theories and methods of seismic reservoir prediction and description. Three near-well seismic wavelet extrapolation methods (inverse distance interpolation, phase interpolation and pseudo-well reflectivity) are improved; in particular, a method for obtaining pseudo-well reflectivity in sparsely drilled areas is given by applying wavelet theory. The formulae for seismic attributes and coherence volumes are derived theoretically, and an optimal selection method for seismic attributes and improved algorithms for extracting coherence data volumes are put forward. A method of sequence analysis on seismic data is proposed and derived in which the wavelet transform is used to analyze the seismic characteristics of reservoirs both qualitatively and quantitatively. ② Based on geologic models and seismic forward simulation, a method of combined pre- and post-stack data analysis and application is put forward, proceeding from macro to micro and using seismic data in close combination with geology; in particular, while post-stack seismic data are fully used, pre-stack seismic data (the "green food") are exploited as much as possible. ③ The paper studies the formation pattern and distribution characteristics of Tertiary lithologic oil-gas pools in the Jiyang depression, the geological and geophysical understanding and the feasibility of the various seismic methods, and the applicability of seismic data together with the geophysical mechanism of oil-gas reservoirs. On this basis, a complete suite of seismic techniques and software suited to the E&D of different categories of lithologic oil-gas reservoirs is developed. ④ This achievement differs from other new seismic methods proposed in recent years (multi-wave multi-component seismic, cross-hole seismic, vertical seismic profiling, time-lapse seismic, etc.), which require the reacquisition of seismic data to predict and describe oil-gas reservoirs; the method in this paper is based on conventional 2D/3D seismic data, so the cost falls sharply.
⑤ In recent years this technique of predicting and describing lithologic oil-gas reservoirs from seismic information has been applied to the E&D of lithologic oil-gas reservoirs in glutenite fans on abrupt slopes, turbidite fans in front of abrupt slopes, slump turbidite fans in front of deltas, channelized turbidite fans on the lower slope and channel sand bodies, and encouraging geological results have been obtained. This achievement indicates that the use of seismic information is one of the most effective ways to solve the present E&D problem. The technique has significant value for application and popularization, contributes to increasing reserves, raising production and stable development in the Shengli oilfield, and will also guide the E&D of similar reservoirs.
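One of the three near-well wavelet extrapolation methods named above, inverse distance interpolation, can be sketched as plain inverse distance weighting; the well coordinates and values below are hypothetical, and the improved method in the paper is more involved.

```python
# Minimal sketch of inverse distance weighting (IDW): interpolate a value at a
# target location from values observed at surrounding wells, weighting closer
# wells more heavily. Well positions/values are hypothetical placeholders.
import numpy as np

def idw(target_xy, well_xy, well_values, power=2.0, eps=1e-12):
    d = np.linalg.norm(well_xy - target_xy, axis=1)   # distance to each well
    w = 1.0 / (d**power + eps)                        # closer wells get larger weights
    return float(np.sum(w * well_values) / np.sum(w))

wells  = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
values = np.array([0.30, 0.25, 0.35, 0.20])           # e.g. a wavelet attribute per well
print(idw(np.array([0.5, 0.5]), wells, values))
```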
Abstract:
Similarity measures between 3D objects and 2D images are useful for the tasks of object recognition and classification. We distinguish between two types of similarity metric: metrics computed in image space (image metrics) and metrics computed in transformation space (transformation metrics). Existing methods typically use image metrics, comparing the image with the nearest view of the object. An example of such a measure is the Euclidean distance between feature points in the image and the corresponding points in the nearest view. (Computing this measure is equivalent to solving the exterior orientation calibration problem.) In this paper we introduce a different type of metric: transformation metrics. These metrics penalize the deformations applied to the object to produce the observed image. We present a transformation metric that optimally penalizes "affine deformations" under weak perspective. A closed-form solution, together with the nearest view according to this metric, is derived. The metric is shown to be equivalent to the Euclidean image metric, in the sense that the two bound each other from above and below. For the Euclidean image metric we offer a sub-optimal closed-form solution and an iterative scheme to compute the exact solution.
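As an informal illustration of the distinction (notation introduced here for exposition, not taken from the paper), an image metric scores the residual in the image plane after the best allowed viewing transformation:

\[
d_{\mathrm{img}}(I, M) \;=\; \min_{T \in \mathcal{T}} \; \sum_{i=1}^{n} \bigl\| p_i - \Pi(T Q_i) \bigr\|^2 ,
\]

where the \(p_i\) are image feature points, the \(Q_i\) are the corresponding 3D model points, \(\Pi\) is the weak-perspective projection, and \(\mathcal{T}\) is the set of allowed transformations; the minimizing view \(\Pi(T Q_i)\) is the nearest view. A transformation metric instead assigns the cost to \(T\) itself, penalizing how far the applied deformation departs from a rigid transformation.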
Abstract:
This paper focuses on the analysis of the relationship between maritime trade and transport costs in Latin America. The analysis is based on disaggregated (SITC 5-digit level) trade data for intra-Latin American maritime trade routes over the period 1999-2004. The research contributes to the literature by disentangling the effects of transport costs on the range of traded goods (extensive margin) and the traded volumes of goods (intensive margin) of international trade, in order to test some of the predictions of trade theories that introduce firm heterogeneity in productivity as well as fixed costs of exporting. Recent investigations show that spatial frictions (distance) reduce trade mainly by trimming the number of shipments, and that most firms ship only to geographically proximate customers instead of shipping to many destinations in quantities that decrease with distance. Our analyses confirm these findings and show that the opposite pattern is observed for ad-valorem freight rates, which reduce aggregate trade values mainly by reducing the volume of imported goods (intensive margin).
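For reference, a standard way to formalize the two margins discussed above (a generic decomposition, not the paper's exact specification):

\[
X_{ij} = N_{ij}\,\bar{x}_{ij}, \qquad \ln X_{ij} = \ln N_{ij} + \ln \bar{x}_{ij},
\]

where \(X_{ij}\) is aggregate bilateral trade, \(N_{ij}\) the number of traded goods (extensive margin) and \(\bar{x}_{ij}\) the average traded value per good (intensive margin). In these terms, the abstract's finding is that distance works mainly through \(N_{ij}\), while ad-valorem freight rates work mainly through \(\bar{x}_{ij}\).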
Abstract:
N.W. Hardy and M.H. Lee. The effect of the product cost factor on error handling in industrial robots. In Maria Gini, editor, Detecting and Resolving Errors in Manufacturing Systems. Papers from the 1994 AAAI Spring Symposium Series, pages 59-64, Menlo Park, CA, March 1994. The AAAI Press. Technical Report SS-94-04, ISBN 0-929280-60-1.
Abstract:
Tedd, L.A., Dahl, K., Francis, S., Tetřevová, M. & Žihlavníková, E. (2002). Training for professional librarians in Slovakia by distance-learning methods: an overview of the PROLIB and EDULIB projects. Library Hi Tech, 20(3), 340-351. Sponsorship: European Union and the Open Society Institute.
Abstract:
Previous research argues that large non-controlling shareholders enhance firm value because they deter expropriation by the controlling shareholder. We propose that the conflicting incentives faced by large shareholders may induce a nonlinear relationship between the relative size of large shareholdings and firm value. Consistent with this prediction, we present evidence that there are costs of having a second (and third) largest shareholder, especially when the largest shareholdings are similar in size. Our results are robust to various relative size proxies, firm performance measures, model specifications, and potential endogeneity issues.
Abstract:
The Basic Income has been defined as a relatively small income that the public administration unconditionally provides to all its members as a right of citizenship. Its principal objective is to guarantee the entire population an income sufficient to satisfy basic living needs, but it could have other positive effects, such as a more equal income redistribution or the fight against tax fraud, as well as some drawbacks, like disincentives to labor supply. In this essay we present the arguments for and against this policy and ultimately outline how it could be financed under the current tax and social benefits system in Navarra. The research also addresses the main economic implications of the proposal in terms of static income redistribution, and discusses other relevant dynamic uncertainties.
Abstract:
Background: Many African countries are rapidly expanding HIV/AIDS treatment programs. Empirical information on the cost of delivering antiretroviral therapy (ART) for HIV/AIDS is needed for program planning and budgeting. Methods: We searched published and gray sources for estimates of the cost of providing ART in service delivery (non-research) settings in sub-Saharan Africa. Estimates were included if they were based on primary local data for input prices. Results: 17 eligible cost estimates were found. Of these, 10 were from South Africa. The cost per patient per year ranged from $396 to $2,761. It averaged approximately $850/patient/year in countries outside South Africa and $1,700/patient/year in South Africa. The most recent estimates for South Africa averaged $1,200/patient/year. Specific cost items included in the average cost per patient per year varied, making comparison across studies problematic. All estimates included the cost of antiretroviral drugs and laboratory tests, but many excluded the cost of inpatient care, treatment of opportunistic infections, and/or clinic infrastructure. Antiretroviral drugs comprised an average of one third of the cost of treatment in South Africa and one half to three quarters of the cost in other countries. Conclusions: There is very little empirical information available about the cost of providing antiretroviral therapy in non-research settings in Africa. Methods for estimating costs are inconsistent, and many estimates combine data drawn from disparate sources. Cost analysis should become a routine part of operational research on the treatment rollout in Africa.
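Taking the abstract's averages at face value (a rough reading derived here, not figures reported by the authors), the implied annual drug component is roughly

\[
\tfrac{1}{3} \times \$1{,}700 \approx \$567 \text{ per patient in South Africa}, \qquad
0.5\text{-}0.75 \times \$850 \approx \$425\text{-}\$638 \text{ elsewhere},
\]

with the remainder covering laboratory tests and whatever other service delivery costs each estimate included.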
Abstract:
In this paper we discuss a new type of query in spatial databases, called the Trip Planning Query (TPQ). Given a set of points P in space, where each point belongs to a category, and given two points s and e, TPQ asks for the best trip that starts at s, passes through exactly one point from each category, and ends at e. An example of a TPQ is when a user wants to visit a set of different places while minimizing the total travelling cost, e.g. what is the shortest travelling plan for me to visit an automobile shop, a CVS pharmacy outlet, and a Best Buy shop along my trip from A to B? The trip planning query is an extension of the well-known TSP problem and is therefore NP-hard. The difficulty of this query lies in the existence of multiple choices for each category. In this paper, we first study fast approximation algorithms for the trip planning query in a metric space, assuming that the data set fits in main memory, and give a theoretical analysis of their approximation bounds. Then, the trip planning query is examined for data sets that do not fit in main memory and must be stored on disk. For the disk-resident data, we consider two cases. In one case, we assume that the points are located in Euclidean space and indexed with an R-tree. In the other case, we consider points that lie on the edges of a spatial network (e.g. a road network), where the distance between two points is defined as the shortest distance over the network. Finally, we give an experimental evaluation of the proposed algorithms using synthetic data sets generated on real road networks.
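A minimal sketch of one natural heuristic for this kind of query, a greedy nearest-neighbor walk over the categories; this is an illustration only, not necessarily one of the paper's approximation algorithms.

```python
# Greedy heuristic for a trip planning query: starting at s, repeatedly visit
# the closest point of a not-yet-covered category, then finish at e.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_trip(s, e, points_by_category):
    """points_by_category: dict mapping category name -> list of (x, y) points."""
    trip, current = [s], s
    remaining = dict(points_by_category)
    while remaining:
        cat, point = min(
            ((c, p) for c, pts in remaining.items() for p in pts),
            key=lambda cp: dist(current, cp[1]),
        )
        trip.append(point)
        current = point
        del remaining[cat]
    trip.append(e)
    return trip, sum(dist(a, b) for a, b in zip(trip, trip[1:]))

points = {"pharmacy": [(2, 1), (5, 4)], "auto": [(1, 3)], "electronics": [(4, 0)]}
print(greedy_trip((0, 0), (6, 2), points))
```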
Abstract:
As distributed information services like the World Wide Web become increasingly popular on the Internet, problems of scale are clearly evident. A promising technique that addresses many of these problems is service (or document) replication. However, when a service is replicated, clients then need the additional ability to find a "good" provider of that service. In this paper we report on techniques for finding good service providers without a priori knowledge of server location or network topology. We consider the use of two principal metrics for measuring distance in the Internet: hops and round-trip latency. We show that these two metrics yield very different results in practice. Surprisingly, we show data indicating that the number of hops between two hosts in the Internet is not strongly correlated with round-trip latency. Thus, the distance in hops between two hosts is not necessarily a good predictor of the expected latency of a document transfer. Instead of using known or measured distances in hops, we show that the extra runtime cost incurred by dynamic latency measurement is well justified by the resulting improvement in performance. In addition, we show that selection based on dynamic latency measurement performs much better in practice than any static selection scheme. Finally, the difference between the distributions of hops and latencies is fundamental enough to suggest differences in algorithms for server replication. We show that conclusions drawn about service replication based on the distribution of hops need to be revised when the distribution of latencies is considered instead.
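A rough sketch of the dynamic approach: time a TCP connection to each candidate replica and choose the fastest responder. The host names are placeholders, and a real deployment would average several probes rather than trust a single measurement.

```python
# Dynamic, latency-based server selection: time a TCP connection to each
# candidate replica and pick the fastest responder. Hosts are placeholders.
import socket
import time

def connect_latency(host, port=80, timeout=2.0):
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")   # unreachable replicas sort last

replicas = ["replica1.example.org", "replica2.example.org", "replica3.example.org"]
best = min(replicas, key=connect_latency)
print("selected:", best)
```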
Abstract:
The objective of unicast routing is to find a path from a source to a destination. Conventional routing has been used mainly to provide connectivity; it lacks the ability to provide any kind of service guarantee or smart usage of network resources. Performance can be improved by being aware of both traffic characteristics and currently available resources. This paper surveys a range of routing solutions, which can be categorized by the algorithm's degree of awareness: (1) QoS/constraint-based routing solutions are aware of the traffic requirements of individual connection requests; (2) traffic-aware routing solutions assume knowledge of the location of communicating ingress-egress pairs and possibly the traffic demands among them; (3) routing solutions that are both QoS-aware as in (1) and traffic-aware as in (2); (4) best-effort solutions are oblivious to both traffic and QoS requirements, and adapt only to current resource availability. The best performance can be achieved by having all possible knowledge, so that while finding a path for an individual flow one can make a smart choice among feasible paths to increase the chances of supporting future requests. However, this usually comes at the cost of increased complexity and decreased scalability. In this paper, we discuss such cost-performance tradeoffs by surveying proposed heuristic solutions and hybrid approaches.
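As an illustration of category (1), a textbook constraint-based approach: prune links whose residual bandwidth cannot carry the request, then run a shortest-path search on delay over what remains. The topology and numbers below are hypothetical, not from the survey.

```python
# Constraint-based routing sketch: drop links with insufficient bandwidth,
# then Dijkstra on link delay over the remaining graph.
import heapq

def constrained_path(graph, src, dst, min_bw):
    """graph: {node: [(neighbor, delay, bandwidth), ...]}"""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, delay, bw in graph.get(u, []):
            if bw < min_bw:          # constraint: link cannot carry the flow
                continue
            nd = d + delay
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None                   # no feasible path under the constraint
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

topology = {
    "A": [("B", 10, 100), ("C", 5, 20)],
    "B": [("D", 10, 100)],
    "C": [("D", 5, 20)],
    "D": [],
}
print(constrained_path(topology, "A", "D", min_bw=50))  # forced onto A-B-D
```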
Abstract:
We study the problem of preprocessing a large graph so that point-to-point shortest-path queries can be answered very fast. Computing shortest paths is a well studied problem, but exact algorithms do not scale to huge graphs encountered on the web, social networks, and other applications. In this paper we focus on approximate methods for distance estimation, in particular using landmark-based distance indexing. This approach involves selecting a subset of nodes as landmarks and computing (offline) the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, we can estimate it quickly by combining the precomputed distances of the two nodes to the landmarks. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the suggested techniques is tested experimentally using five different real world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach in the literature which considers selecting landmarks at random. Finally, we study applications of our method in two problems arising naturally in large-scale networks, namely, social search and community detection.
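A minimal sketch of the landmark idea on an unweighted graph: precompute BFS distances from each landmark offline, then answer a query with the triangle-inequality upper bound. The landmark choice below is a naive placeholder for the selection strategies the paper compares.

```python
# Landmark-based distance estimation: precompute distances from each landmark,
# then estimate d(u, v) as min over landmarks L of d(u, L) + d(L, v),
# an upper bound on the true distance by the triangle inequality.
from collections import deque

def bfs_distances(graph, source):
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def build_index(graph, landmarks):
    return {L: bfs_distances(graph, L) for L in landmarks}

def estimate(index, u, v):
    return min(d[u] + d[v] for d in index.values() if u in d and v in d)

graph = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
index = build_index(graph, landmarks=[1, 5])
print(estimate(index, 2, 4))   # true distance is 2; the estimate is an upper bound
```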
Abstract:
As the Internet has evolved and grown, an increasing number of nodes (hosts or autonomous systems) have become multihomed, i.e., a node is connected to more than one network. Mobility can be viewed as a special case of multihoming: as a node moves, it unsubscribes from one network and subscribes to another, which is akin to one interface becoming inactive and another active. The current Internet architecture has been facing significant challenges in effectively dealing with multihoming (and consequently mobility). The Recursive INternet Architecture (RINA) [1] was recently proposed as a clean-slate solution to the current problems of the Internet. In this paper, we perform an average-case cost analysis to compare the multihoming/mobility support of RINA against that of other approaches such as LISP and MobileIP. We also validate our analysis using trace-driven simulation.
Abstract:
Background: Elective repeat caesarean delivery (ERCD) rates have been increasing worldwide, prompting obstetric discourse on the risks and benefits for the mother and infant. These increasing rates also have major economic implications for the health care system. Given the dearth of information on the cost-effectiveness of different modes of delivery, the aim of this paper was to perform an economic evaluation of the costs and short-term maternal health consequences of a trial of labour after one previous caesarean delivery compared with ERCD for low-risk women in Ireland. Methods: Using a decision analytic model, a cost-effectiveness analysis (CEA) was performed in which the measure of health gain was quality-adjusted life years (QALYs) over a six-week time horizon. A review of the international literature was conducted to derive representative estimates of adverse maternal health outcomes following a trial of labour after caesarean (TOLAC) and ERCD. Delivery/procedure costs were derived from primary data collection and combined both "bottom-up" and "top-down" costing estimations. Results: Maternal morbidities emerged in twice as many cases in the TOLAC group as in the ERCD group. However, TOLAC was found to be the most cost-effective method of delivery because it was substantially less expensive than ERCD (€1,835.06 versus €4,039.87 per woman, respectively) and QALYs were modestly higher (0.84 versus 0.70). Our findings were supported by probabilistic sensitivity analysis. Conclusions: Clinicians need to be well informed of the benefits and risks of TOLAC among low-risk women. Ideally, clinician-patient discourse would address differences in length of hospital stay and postpartum recovery time. While it is premature to advocate a policy of TOLAC across maternity units, the results of the study prompt further analysis and repeat iterations, encouraging future studies to synthesize previous research and new and relevant evidence within a single comprehensive decision model.
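Read at face value, the figures quoted above imply simple dominance (a back-of-the-envelope check using only the numbers in the abstract):

\[
\Delta C = 1{,}835.06 - 4{,}039.87 = -2{,}204.81 \ \text{EUR}, \qquad
\Delta E = 0.84 - 0.70 = 0.14 \ \text{QALYs},
\]

so TOLAC is both less costly and more effective than ERCD; it dominates, and no comparison of the incremental cost-effectiveness ratio (\(\mathrm{ICER} = \Delta C / \Delta E\)) against a willingness-to-pay threshold is required.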