919 results for Costs and Cost Analysis.
Abstract:
BACKGROUND/AIMS: Alveolar echinococcosis (AE) is a serious liver disease. The aim of this study was to explore the long-term prognosis of AE patients, the burden of this disease in Switzerland and the cost-effectiveness of treatment. METHODS: Relative survival analysis was undertaken using a national database with 329 patient records. 155 representative cases had sufficient details regarding treatment costs and patient outcome to estimate the financial implications and treatment costs of AE. RESULTS: For an average 54-year-old patient diagnosed with AE in 1970, life expectancy was estimated to be reduced by 18.2 years for men and 21.3 years for women. By 2005 this reduction had fallen to approximately 3.5 and 2.6 years, respectively. Patients undergoing radical surgery had a better outcome, whereas older patients had a poorer prognosis than younger patients. Costs amount to approximately EUR 108,762 per patient. Assuming the improved life expectancy of AE patients is due to modern treatment, the cost per disability-adjusted life year (DALY) saved is approximately EUR 6,032. CONCLUSIONS: Current treatments have substantially improved the prognosis of AE patients compared to the 1970s. The cost per DALY saved is low compared to the average national annual income. Hence, AE treatment is highly cost-effective in Switzerland.
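The two cost figures reported in this abstract can be cross-checked against the life-expectancy gain; the sketch below is a back-of-the-envelope calculation using only the numbers stated above.

```python
# Back-of-the-envelope check of the reported cost-effectiveness figures.
cost_per_patient = 108_762   # EUR, total treatment cost per patient (reported)
cost_per_daly = 6_032        # EUR per DALY saved (reported)

# Implied DALYs saved per patient by modern treatment:
dalys_saved = cost_per_patient / cost_per_daly
print(f"Implied DALYs saved per patient: {dalys_saved:.1f}")  # ~18.0

# Consistent with the reported life-expectancy gains:
# men:   18.2 - 3.5 = 14.7 years regained
# women: 21.3 - 2.6 = 18.7 years regained
```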
Abstract:
OBJECTIVE: To compare costs of function- and pain-centred inpatient treatment in patients with chronic low back pain over 3 years of follow-up. DESIGN: Cost analysis of a randomized controlled trial. PATIENTS: A total of 174 patients with chronic low back pain were randomized to function- or pain-centred inpatient treatment. METHODS: Data on direct and indirect costs were gathered by questionnaires sent to patients, health insurance providers, employers, and the Swiss Disability Insurance Company. RESULTS: There was a non-significant difference in total medical costs after 3 years of follow-up. Total costs were EUR 77,305 in the function-centred inpatient treatment group and EUR 83,085 in the pain-centred inpatient treatment group. Likewise, indirect costs from lost work days after 3 years were non-significantly lower in the function-centred inpatient treatment group (EUR 6,354; 95% confidence interval −20,892 to 8,392), and direct medical costs were non-significantly higher in the function-centred inpatient treatment group (EUR 574; 95% confidence interval −862 to 2,011). CONCLUSION: The total costs of function-centred and pain-centred inpatient treatment were similar over the whole 3-year follow-up.
Abstract:
Bargaining is the building block of many economic interactions, ranging from bilateral to multilateral encounters and from situations in which the actors are individuals to negotiations between firms or countries. In all these settings, economists have long been intrigued by the fact that some projects, trades or agreements are not realized even though they are mutually beneficial. On the one hand, this has been explained by incomplete information. A firm may not be willing to offer a wage that is acceptable to a qualified worker, because it knows that there are also unqualified workers and it cannot distinguish between the two types. This phenomenon is known as adverse selection. On the other hand, it has been argued that even with complete information, the presence of externalities may impede efficient outcomes. To see this, consider the example of climate change. If a subset of countries agrees to curb emissions, non-participant regions benefit from the signatories' efforts without incurring costs. These free-riding opportunities give rise to incentives to strategically improve one's bargaining power, which work against the formation of a global agreement. This thesis is concerned with extending our understanding of both factors, adverse selection and externalities. The findings are based on empirical evidence from original laboratory experiments as well as game-theoretic modeling. On a very general note, it is demonstrated that the institutions through which agents interact matter to a large extent. Insights are provided about which institutions we should expect to perform better than others, at least in terms of aggregate welfare. Chapters 1 and 2 focus on the problem of adverse selection. Effective operation of markets and other institutions often depends on good information transmission properties. In terms of the example introduced above, a firm is only willing to offer high wages if it receives enough positive signals about the worker's quality during the application and wage bargaining process. In Chapter 1, it will be shown that repeated interaction coupled with time costs facilitates information transmission. By making the wage bargaining process costly for the worker, the firm is able to obtain more accurate information about the worker's type. The cost could be pure time cost from delaying agreement or cost of effort arising from a multi-step interviewing process. In Chapter 2, I abstract from time cost and show that communication can play a similar role. The simple fact that a worker claims to be of high quality may be informative. In Chapter 3, the focus is on a different source of inefficiency. Agents strive for bargaining power and thus may be motivated by incentives that are at odds with the socially efficient outcome. I have already mentioned the example of climate change. Other examples are coalitions within committees that are formed to secure voting power to block outcomes, or groups that commit to different technological standards although a single standard would be optimal (e.g. the format war between HD DVD and Blu-ray). It will be shown that such inefficiencies are directly linked to the presence of externalities and a certain degree of irreversibility in actions. I now discuss the three articles in more detail. In Chapter 1, Olivier Bochet and I study a simple bilateral bargaining institution that eliminates trade failures arising from incomplete information. In this setting, a buyer makes offers to a seller in order to acquire a good.
Whenever an offer is rejected by the seller, the buyer may submit a further offer. Bargaining is costly, because both parties suffer a (small) time cost after any rejection. The difficulties arise because the good can be of low or high quality and the quality of the good is only known to the seller. Indeed, without the possibility to make repeated offers, it is too risky for the buyer to offer prices that allow for trade of high-quality goods. When allowing for repeated offers, however, at equilibrium both types of goods trade with probability one. We provide an experimental test of these predictions. Buyers gather information about sellers using specific price offers, and rates of trade are high, in line with the model's qualitative predictions. We also observe a persistent over-delay before trade occurs, which reduces efficiency substantially. Possible channels for over-delay are identified in the form of two behavioral assumptions missing from the standard model, loss aversion (buyers) and haggling (sellers), which reconcile the data with the theoretical predictions. Chapter 2 also studies adverse selection, but interaction between buyers and sellers now takes place within a market rather than in isolated pairs. Remarkably, in a market it suffices to let agents communicate in a very simple manner to mitigate trade failures. The key insight is that better-informed agents (sellers) are willing to truthfully reveal their private information, because by doing so they are able to reduce search frictions and attract more buyers. Behavior observed in the experimental sessions closely follows the theoretical predictions. As a consequence, costless and non-binding communication (cheap talk) significantly raises rates of trade and welfare. Previous experiments have documented that cheap talk alleviates inefficiencies due to asymmetric information. These findings are explained by pro-social preferences and lie aversion. I use appropriate control treatments to show that such considerations play only a minor role in our market. Instead, the experiment highlights the ability to organize markets as a new channel through which communication can facilitate trade in the presence of private information. In Chapter 3, I theoretically explore coalition formation via multilateral bargaining under complete information. The environment studied is extremely rich in the sense that the model allows for all kinds of externalities. This is achieved by using so-called partition functions, which pin down a coalitional worth for each possible coalition in each possible coalition structure. It is found that although binding agreements can be written, efficiency is not guaranteed, because the negotiation process is inherently non-cooperative. The prospects of cooperation are shown to depend crucially on (i) the degree to which players can renegotiate and gradually build up agreements and (ii) the absence of a certain type of externalities that can loosely be described as incentives to free ride. Moreover, the willingness to concede bargaining power is identified as a novel reason for gradualism. Another key contribution of the study is that it identifies a strong connection between the Core, one of the most important concepts in cooperative game theory, and the set of environments for which efficiency is attained even without renegotiation.
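For readers unfamiliar with partition functions, the standard formalization (not spelled out in the abstract itself) is the following: given a player set N, a coalition structure is a partition of N, and the partition function assigns a worth to every embedded coalition.

```latex
% A partition function assigns a worth to each embedded coalition (S, \pi),
% where S is a coalition and \pi a partition of the player set N with S \in \pi:
v : \{(S, \pi) : \pi \text{ a partition of } N,\; S \in \pi\} \to \mathbb{R},
\qquad (S, \pi) \mapsto v(S, \pi).
% Externalities are present whenever the worth of S depends on how the
% remaining players N \setminus S are organized, i.e. v(S,\pi) \neq v(S,\pi')
% for two partitions \pi, \pi' that both contain S.
```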
Abstract:
This paper extends the existing research on real estate investment trust (REIT) operating efficiencies. We estimate a stochastic-frontier panel-data model specifying a translog cost function, covering 1995 to 2003. The results disagree with previous research in that we find little evidence of scale economies and some evidence of scale diseconomies. Moreover, we also generally find smaller inefficiencies than those shown by other REIT studies. Contrary to previous research, the results also show that self-management of a REIT associates with more inefficiency when we measure output with assets. When we use revenue to measure output, self-management associates with less inefficiency. Also contrary to previous research, higher leverage associates with more efficiency. The results further suggest that inefficiency increases over time in three of our four specifications.
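As a point of reference, the generic single-output translog cost function underlying such stochastic-frontier estimations takes the form below; this is the textbook specification with output y and input prices w_j, since the abstract does not list the authors' exact regressors.

```latex
\ln C = \alpha_0 + \alpha_y \ln y + \tfrac{1}{2}\,\gamma_{yy}\,(\ln y)^2
      + \sum_j \beta_j \ln w_j
      + \tfrac{1}{2} \sum_j \sum_k \beta_{jk} \ln w_j \ln w_k
      + \sum_j \rho_{jy} \ln w_j \ln y
      + v + u
% v : two-sided noise term; u \geq 0 : one-sided inefficiency term
% (the stochastic-frontier decomposition). Scale economies are read off
% from the cost elasticity \partial \ln C / \partial \ln y.
```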
Abstract:
The relative merits of PBSCT versus BMT for children with standard- and high-risk hematologic malignancies remain unclear. In a retrospective single-center study, we compared allogeneic peripheral blood stem cell transplantation (PBSCT) (n=30) with bone marrow transplantation (BMT) (n=110) in children with acute leukemia. We studied recipients of HLA-matched sibling stem cells and of stem cells from alternative donors (HLA-mismatched and/or unrelated) and determined whether sourcing the stem cells from peripheral blood or marrow affected engraftment, the incidence of acute and chronic GvHD, and disease-free survival at 1 year. Our results show a modest reduction in time to engraftment with PB stem cells and no greater risk of GvHD, but illustrate that the severity of the underlying disease is by far the greatest determinant of 1-year survival. Patients in the BMT group had a higher treatment success rate and lower costs than recipients of PBSCT within the standard-risk but not the high-risk disease group, in which the treatment success rate and the cumulative costs were lower in the PBSCT group than in the BMT group. Our incremental cost-effectiveness ratio and analysis of uncertainty suggest that allogeneic transplantation of bone marrow grafts was a more cost-effective treatment option than peripheral blood stem cells in patients with standard-risk childhood acute leukemia. For high-risk disease our data are less prescriptive, since the differences were more limited and the range of costs much larger; neither option demonstrated a clear advantage from a cost-effectiveness standpoint.
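The incremental cost-effectiveness ratio used above is a standard construct; the sketch below shows its computation with placeholder numbers, since the study's actual cost and success figures are not given in the abstract.

```python
def icer(cost_a: float, cost_b: float, effect_a: float, effect_b: float) -> float:
    """Incremental cost-effectiveness ratio of option A versus option B:
    extra cost paid per extra unit of effect (e.g., per treatment success)."""
    return (cost_a - cost_b) / (effect_a - effect_b)

# Hypothetical illustration only -- the abstract does not report these values.
cost_bmt, success_bmt = 150_000.0, 0.70      # placeholder cost and 1-year success rate
cost_pbsct, success_pbsct = 180_000.0, 0.65  # placeholder values

print(icer(cost_pbsct, cost_bmt, success_pbsct, success_bmt))
# Here PBSCT costs more and succeeds less often, so it is dominated by BMT,
# matching the direction of the standard-risk finding reported above.
```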
Abstract:
Background. Childhood immunization programs have dramatically reduced the morbidity and mortality associated with vaccine-preventable diseases. Proper documentation of immunizations that have been administered is essential to prevent duplicate immunization of children. To help improve documentation, immunization information systems (IISs) have been developed. IISs are comprehensive repositories of immunization information for children residing within a geographic region. The two models for participation in an IIS are voluntary inclusion ("opt-in") and voluntary exclusion ("opt-out"). In an opt-in system, consent must be obtained for each participant; conversely, in an opt-out IIS, all children are included unless procedures to exclude the child are completed. Consent requirements for participation vary by state; the Texas IIS, ImmTrac, is an opt-in system. Objectives. The specific objectives are to: (1) evaluate the variance in the time and costs associated with collecting ImmTrac consent at public and private birthing hospitals in the Greater Houston area; (2) estimate the total costs associated with collecting ImmTrac consent at selected public and private birthing hospitals in the Greater Houston area; (3) describe the alternative opt-out process for collecting ImmTrac consent at birth and discuss the associated cost savings relative to an opt-in system. Methods. Existing time-motion studies (n=281) conducted between October 2006 and August 2007 at 8 birthing hospitals in the Greater Houston area were used to assess the time and costs associated with obtaining ImmTrac consent at birth. All data analyzed are de-identified and contain no personal information. Variations in time and costs at each location were assessed, and total costs per child and costs per year were estimated. The cost of an alternative opt-out system was also calculated. Results. The median time required by birth registrars to complete consent procedures varied from 72 to 285 seconds per child. The annual costs associated with obtaining consent for 388,285 newborns in ImmTrac's opt-in consent process were estimated at $702,000. The corresponding costs of the proposed opt-out system were estimated to total $194,000 per year. Conclusions. Substantial variation in the time and costs associated with completion of ImmTrac consent procedures was observed. Changing to an opt-out system for participation could represent significant cost savings.
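The reported annual figures imply a per-child consent cost that can be reproduced directly; the sketch below uses only the numbers stated in the abstract.

```python
# Reproducing the per-child costs implied by the reported annual figures.
newborns_per_year = 388_285
opt_in_annual = 702_000.0    # USD, reported annual cost of opt-in consent
opt_out_annual = 194_000.0   # USD, reported annual cost of proposed opt-out system

print(f"Opt-in cost per child:  ${opt_in_annual / newborns_per_year:.2f}")   # ~$1.81
print(f"Opt-out cost per child: ${opt_out_annual / newborns_per_year:.2f}")  # ~$0.50
print(f"Annual savings:         ${opt_in_annual - opt_out_annual:,.0f}")     # $508,000
```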
Abstract:
Emergency Departments (EDs) and Emergency Rooms (ERs) are designed to manage trauma, respond to disasters, and serve as the initial care for those with serious illnesses. However, because of many factors, the ED has become the doorway to the hospital and a "catch-all net" for patients, including those with non-urgent needs. This increase in the ED population has led to an increase in wait times for patients. It has been well documented that there has been a constant and consistent rise in the number of patients who frequent the ED (National Center for Health Statistics, 2002); the wait time for patients in the ED has increased (Pitts, Niska, Xu, & Burt, 2008); and the cost of treatment in the ER has risen (Everett Clinic, 2008). Because the ED was designed to treat patients who need quick diagnoses and may be in potentially life-threatening circumstances, management of time can be the ultimate enemy. If a system were implemented to decrease wait times in the ED, decrease the use of ED resources, and decrease the costs incurred by patients seeking care, better patient outcomes and greater patient satisfaction could be achieved. The goal of this research was to explore potential changes and/or alternatives to relieve the burden endured by the ED. To explore these options, data were collected through one-on-one interviews with seven physicians closely tied to a Level 1 ED (Emergency Room physicians, Trauma Surgeons and Primary Care physicians). A qualitative analysis was performed on the responses from these interviews. The interviews consisted of standardized, open-ended questions that probed what makes an effective ED, possible solutions to improving patient care in the ED, potential remedies for the mounting problems that plague the ED, and the feasibility of bringing Primary Care Physicians to the ED to decrease the wait times experienced by patients. From the responses, it is clear that more research is needed in this area, several issues need to be addressed, and a variety of solutions could be implemented. The most viable option appears to be making the ED its own entity (similar to the clinic or hospital) that includes urgent care clinics as part of the system, in which triage and better staffing would be the most integral parts of its success.
Abstract:
The purpose of this study was to assess the impact of the Arkansas Long-Term Care Demonstration Project upon Arkansas' Medicaid expenditures and upon the clients it serves. A retrospective Medicaid expenditure study component used analysis-of-variance techniques to test for the Project's effects upon aggregated expenditures for 28 demonstration and control counties representing 25 percent of the State's population over four years, 1979-1982. A second approach to the study question utilized a 1982 prospective sample of 458 demonstration and control clients from the same 28 counties. The disability level or need for care of each patient was established a priori. The extent to which an individual's variation in Medicaid utilization and costs was explained by patient need, the presence or absence of the channeling project's placement decision, or some other patient characteristic was examined by multiple regression analysis. Long-term and acute care Medicaid, Medicare, third-party, self-pay and the grand total of all Medicaid claims were analyzed for project effects and explanatory relationships. The main project effect was to increase personal care costs without reducing nursing home or acute care costs (prospective study). Expansion of clients appeared to occur in personal care (prospective study) and minimum-care nursing homes (retrospective study) in the project areas. Cost-shifting between Medicaid and Medicare in the project areas and two different patterns of utilization in the North and South projects tended to offset each other, such that no differences in total costs occurred between the project areas and the control areas. The project was significant (β = .22, p < .001) only for personal care costs. The explanatory power of this personal care regression model (R² = .36) was comparable to other reported health services utilization models. Other variables (Medicare buy-in, level of disability, Supplemental Security Income (SSI), net monthly income, North/South area and age) explained more variation in the other twelve cost regression models.
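The personal-care cost regression reported above can be summarized schematically; the variable names below are shorthand I have introduced for the covariates listed in the abstract, and all coefficients other than the reported project effect are unspecified.

```latex
% Schematic form of the personal-care cost regression (R^2 = .36):
\text{Cost}_i = \beta_0 + \beta_1\,\text{Project}_i + \beta_2\,\text{BuyIn}_i
  + \beta_3\,\text{Disability}_i + \beta_4\,\text{SSI}_i
  + \beta_5\,\text{Income}_i + \beta_6\,\text{Area}_i + \beta_7\,\text{Age}_i
  + \varepsilon_i
% with the reported project effect \beta_1 = .22 (standardized), p < .001.
```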
Abstract:
Medication reconciliation, with the aim of resolving medication discrepancies, is one of the Joint Commission patient safety goals. Medication errors and adverse drug events that could result from medication discrepancies affect a large population. At least 1.5 million adverse drug events and $3.5 billion in yearly financial burden associated with medication errors could be prevented by interventions such as medication reconciliation. This research was conducted to answer the following research questions: (1a) What are the frequency range and type of measures used to report outpatient medication discrepancy? (1b) Which effective and efficient strategies for medication reconciliation in the outpatient setting have been reported? (2) What are the costs associated with medication reconciliation practice in primary care clinics? (3) What is the quality of medication reconciliation practice in primary care clinics? (4) Is medication reconciliation practice in primary care clinics cost-effective from the clinic perspective? Study designs used to answer these questions included a systematic review, cost analysis, quality assessments, and cost-effectiveness analysis. Data sources were published articles in the medical literature and data from a prospective workflow study, which included 150 patients and 1,238 medications. The systematic review confirmed that the prevalence of medication discrepancy was high in ambulatory care and higher still in primary care settings. Effective strategies for medication reconciliation included the use of pharmacists, letters, a standardized practice approach, and partnership between providers and patients. Our cost analysis showed that costs associated with medication reconciliation practice were not substantially different between primary care clinics using or not using electronic medical records (EMR) ($0.95 per patient per medication in EMR clinics vs. $0.96 per patient per medication in non-EMR clinics, p=0.78). Even though medication reconciliation was frequently practiced (97-98%), the quality of such practice was poor (0-33% process completeness, measured by concordance of medication numbers, and 29-33% accuracy, measured by concordance of medication names) and negatively (though not significantly) associated with medication regimen complexity. The incremental cost-effectiveness ratios for concordance of medication number per patient per medication and concordance of medication names per patient per medication were both 0.08, favoring EMR. Future studies including potential cost savings from medication features of the EMR and potential benefits of minimizing the severity of harm to patients from medication discrepancies are warranted.
Abstract:
Concrete is nowadays one of the most widely used building materials because of its good mechanical properties, mouldability and production economy, among other advantages. It is well known that concrete has high compressive strength and low tensile strength, which is why it is reinforced with steel bars to form reinforced concrete, a material that has become on its own merits the most important constructive solution of our time. Despite being such a widely used material, there are aspects of concrete behavior that are not yet fully understood, such as its response to the effects of an explosion. This is a field of particular relevance, because events, both intentional and accidental, in which a structure is subjected to an explosion are, unfortunately, relatively common. The loading of a structure by an explosion is produced by the impact of the pressure wave generated in the detonation. The application of this load on the structure is very fast and of very short duration. Such actions are called impulsive loads, and they can be up to four orders of magnitude faster than the dynamic loads imposed by an earthquake. Consequently, it is not surprising that their effects on structures and materials are very different from those produced by the loads usually considered in engineering. This doctoral thesis broadens the knowledge of the material behavior of concrete subjected to explosions. To that end, it is crucial to have experimental results of concrete structures subjected to explosions. Such results are difficult to find in the scientific literature, as these tests have traditionally been carried out in the military domain and the results obtained are not publicly available. Moreover, in experimental campaigns with explosives conducted by civil institutions, the high cost of access to explosives and to suitable test fields does not allow testing with a large number of samples. For this reason, the experimental scatter is usually not controlled. However, in reinforced concrete elements subjected to explosions the experimental scatter is very pronounced: first, because of the inherent heterogeneity of concrete, and second, because of the difficulty inherent in testing with explosions, for reasons such as difficulties with the boundary conditions, variability of the explosive, or even changes in atmospheric conditions. To overcome these drawbacks, in this thesis a novel device has been designed that allows up to four concrete slabs to be tested under the same detonation, which, in addition to providing a statistically representative number of samples, represents a significant saving in costs. With this device, 28 concrete slabs were tested, both reinforced and plain, of two different mixes. Besides experimental data, it is also important to have computational tools for the analysis and design of structures subjected to explosions. Although several analytical methods exist, numerical simulation techniques currently represent the most advanced and versatile alternative for the assessment of structural elements subjected to impulsive loads. However, to obtain reliable results it is crucial to have material constitutive models that take into account the parameters governing the behavior for the load case under study. In this regard, it is noteworthy that most of the constitutive models developed for concrete at high strain rates come from the ballistic field, which is dominated by large compressive stresses in the local environment of the area affected by the impact. In the case of concrete elements subjected to explosions, the compressive stresses are much more moderate, with tensile stresses generally causing the failure of the material. This thesis analyzes the validity of some of the available models, confirming that the parameters governing the failure of reinforced concrete slabs subjected to blast are the tensile strength and its post-failure softening. Based on these results, a constitutive model has been developed for concrete at high strain rates that takes only tensile failure into account. This model builds on the embedded cohesive crack model with strong discontinuity developed by Planas and Sancho, which has proved its ability to predict the tensile fracture of plain concrete elements. The model has been modified for implementation in the commercial explicit-integration program LS-DYNA, using hexahedral finite elements and incorporating strain-rate dependence to allow its use in the dynamic domain. The model is strictly local and requires neither remeshing nor prior knowledge of the crack path. This constitutive model has been used to simulate two experimental campaigns, confirming the hypothesis that the failure of concrete elements subjected to explosions is governed by their tensile behavior, with the post-failure softening of concrete being of particular relevance.
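The abstract does not reproduce the softening law itself; one common choice in cohesive crack models of this family, shown here purely as an illustrative sketch, is exponential softening of the traction with crack opening, with rate effects introduced through a dynamic increase factor on the tensile strength.

```latex
% Illustrative traction-separation law with exponential softening:
% f_t = static tensile strength, G_F = fracture energy, w = crack opening.
\sigma(w) = f_t \, \exp\!\left(-\frac{f_t}{G_F}\, w\right),
% with strain-rate dependence introduced via a dynamic increase factor (DIF)
% applied to the tensile strength:
f_t^{\mathrm{dyn}} = \mathrm{DIF}(\dot{\varepsilon}) \cdot f_t .
```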
Abstract:
Costs and environmental impacts are key elements in forest logistics and must be integrated into forest decision-making. The evaluation of transportation fuel costs and carbon emissions depends on spatial and non-spatial data, but in many cases the former type of data is difficult to obtain. On the other hand, the availability of software tools to evaluate transportation fuel consumption as well as costs and emissions of carbon dioxide is limited. We developed a software tool that combines two empirically validated models of truck transportation using Digital Elevation Model (DEM) data and an open spatial data tool, specifically OpenStreetMap. The tool generates tabular data and spatial outputs (maps) with information regarding fuel consumption, cost and CO2 emissions for four types of trucks. It also generates maps of the distribution of transport performance indicators (the relation between beeline and real road distances). These outputs can be easily included in forest decision-making support systems. Finally, in this work we applied the tool to a particular case of forest logistics in north-eastern Portugal.
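The "relation between beeline and real road distances" is a circuity (detour) factor. A minimal sketch of how such an indicator can be computed follows, with the beeline obtained from the haversine formula; the coordinates and routed road distance are placeholders, since in the tool the road distance would come from the OpenStreetMap network.

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle (beeline) distance in km between two WGS84 points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical origin/destination pair in north-eastern Portugal (illustrative only).
beeline = haversine_km(41.806, -6.757, 41.243, -7.191)
road_km = 95.0  # placeholder routed road distance from the OSM network

circuity = road_km / beeline  # > 1; higher values indicate more winding routes
print(f"beeline = {beeline:.1f} km, circuity factor = {circuity:.2f}")
```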
Abstract:
Aug. 1979.
Abstract:
"EPA-560/4-81-002."