855 results for Travel Cost Method


Relevance:

30.00%

Publisher:

Abstract:

The application of Markov processes is very useful in health-care problems. The objective of this study is to provide a structured methodology for forecasting cost by combining a stochastic model of utilization (a Markov chain) with a deterministic cost function. Cost is viewed here from the perspective of reimbursement for services rendered. The data are the OneCare database of claim records for its enrollees over the two-year period January 1, 1996–December 31, 1997. The model combines a Markov chain, which describes the utilization pattern and its variability and accounts for the use of resources by risk group (age, gender, and diagnosis), with a cost function determined from a fixed schedule based on real costs or charges for those in the OneCare claims database. The cost function is a secondary application of the model. Goodness-of-fit will be checked for the model against the traditional method of cost forecasting.
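
A minimal numerical sketch of the approach described above, assuming a hypothetical three-state utilization model (no care, ambulatory care, inpatient care) with an illustrative transition matrix and reimbursement schedule; none of the numbers come from the OneCare data.

    import numpy as np

    # Hypothetical monthly transition matrix between utilization states
    # (rows: current state, columns: next state). Illustrative values only.
    P = np.array([[0.85, 0.12, 0.03],
                  [0.40, 0.50, 0.10],
                  [0.20, 0.50, 0.30]])

    # Deterministic cost function: reimbursement per member-month in each state.
    cost = np.array([0.0, 150.0, 2500.0])

    # Initial distribution of a risk group (e.g. one age/gender/diagnosis cell).
    pi = np.array([0.90, 0.08, 0.02])

    # Expected reimbursement per member over a 24-month horizon.
    total = 0.0
    for _ in range(24):
        total += pi @ cost   # expected cost in the current month
        pi = pi @ P          # propagate the state distribution one step
    print(f"Expected 24-month cost per member: {total:.2f}")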

Relevance:

30.00%

Publisher:

Abstract:

This study compares the procurement cost-minimizing and productive efficiency performance of the auction mechanism used by independent system operators (ISOs) in U.S. wholesale electricity auction markets with that of a proposed alternative. The current practice allocates energy contracts as if the auction featured a discriminatory final payment method when, in fact, these markets are uniform price auctions. The proposed alternative explicitly accounts for the market clearing price during the allocation phase. We find that the proposed alternative largely outperforms the current practice on procurement costs in simple auction markets featuring both day-ahead and real-time auctions, and that its procurement cost advantage becomes complete when we simulate the effects of increased competition. We also find that a trade-off between the objectives of procurement cost minimization and productive efficiency emerges in our simple auction markets and persists in the face of increased competition.
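
A toy illustration of why the settlement rule matters for the allocation decision, using invented bids; this is not the ISOs' actual clearing algorithm. It dispatches in merit order and then values the same dispatch under a discriminatory (as-bid) rule and under the uniform clearing price.

    # Each offer is (unit name, offer price $/MWh, quantity MWh); all values invented.
    bids = [("A", 20.0, 100), ("B", 25.0, 100), ("C", 40.0, 50)]
    demand = 180  # MWh to procure

    def settle(bids, demand):
        """Dispatch in merit (price) order, then settle both ways."""
        dispatched, remaining = [], demand
        for name, price, qty in sorted(bids, key=lambda b: b[1]):
            take = min(qty, remaining)
            if take > 0:
                dispatched.append((name, price, take))
                remaining -= take
        clearing_price = max(p for _, p, _ in dispatched)
        as_bid_cost = sum(p * q for _, p, q in dispatched)                # discriminatory view
        uniform_cost = clearing_price * sum(q for _, _, q in dispatched)  # what is actually paid
        return dispatched, as_bid_cost, uniform_cost

    print(settle(bids, demand))

The gap between the two totals ($4,000 as-bid versus $4,500 at the clearing price here) is the discrepancy the proposed alternative addresses by accounting for the clearing price during allocation.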

Relevance:

30.00%

Publisher:

Abstract:

I propose that the Last in, First out (LIFO) inventory valuation method needs to be reevaluated. I will evaluate the impact of the LIFO method on the earnings of publicly traded companies with a LIFO reserve over the past 10 years. I will begin with the history of how LIFO became an acceptable valuation method and discuss its significance within the accounting profession. Next, I will describe the LIFO, First in, First out (FIFO), and weighted average inventory valuation methods and explore the differences among them. More specifically, I will explore the arguments for and against the use of LIFO and the potential shift toward financial standards that do not allow it (a position adopted and influenced by the International Accounting Standards Board). Data will be collected from Compustat for publicly traded companies (with a LIFO reserve) for the past 10 years. I will document which firms use LIFO, analyze trends in LIFO usage and LIFO reserves (the difference in the cost of inventory between LIFO and FIFO), and evaluate the effect on earnings. The purpose of this research is to evaluate the accuracy of LIFO in portraying earnings and to estimate how much tax has gone uncollected over the years because of its use. Finally, I will provide an opinion as to whether U.S. GAAP should adopt a standard similar to IFRS and ban the LIFO method.
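
A small sketch of the kind of adjustment the proposal implies, using hypothetical numbers: the year-over-year change in the LIFO reserve approximates the pre-tax earnings difference between FIFO and LIFO, and the reserve times an assumed tax rate approximates the tax deferred by using LIFO.

    def lifo_effects(reserve_begin, reserve_end, tax_rate=0.35):
        """Approximate earnings and tax effects implied by a LIFO reserve.

        lifo_reserve = FIFO inventory value - LIFO inventory value.
        Illustrative only; the tax rate is an assumption.
        """
        delta = reserve_end - reserve_begin
        pretax_income_diff = delta                  # FIFO pre-tax income minus LIFO pre-tax income
        tax_deferred_this_year = delta * tax_rate   # additional tax deferred if the reserve grew
        cumulative_tax_deferred = reserve_end * tax_rate
        return pretax_income_diff, tax_deferred_this_year, cumulative_tax_deferred

    # Reserve grows from $400M to $450M: FIFO income would be $50M higher,
    # and roughly $17.5M of tax is deferred in the year.
    print(lifo_effects(400.0, 450.0))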

Relevance:

30.00%

Publisher:

Abstract:

In population studies, most current methods focus on identifying one outcome-related SNP at a time by testing for differences in genotype frequencies between disease and healthy groups or among different population groups. Testing a great number of SNPs simultaneously, however, raises a multiple-testing problem and yields false-positive results. Although this problem can be dealt with effectively through approaches such as Bonferroni correction, permutation testing, and false discovery rates, patterns of joint effects of several genes, each with a weak individual effect, may still go undetected. With the availability of high-throughput genotyping technology, it has become possible to search for multiple SNPs scattered over the whole genome and to model their joint effect on the target variable. Exhaustive search of all SNP subsets is computationally infeasible for the millions of SNPs in a genome-wide study, so several effective feature selection methods combined with classification functions have been proposed to search for an optimal SNP subset in data sets where the number of feature SNPs far exceeds the number of observations.

In this study, we took two steps to achieve this goal. First, we selected 1,000 SNPs with an effective filter method; then we performed feature selection wrapped around a classifier to identify an optimal SNP subset for predicting disease. We also developed a novel classification method, the sequential information bottleneck (sIB), wrapped inside different search algorithms to identify an optimal subset of SNPs for classifying the outcome variable. This new method was compared with classical linear discriminant analysis (LDA) in terms of classification performance. Finally, we performed chi-square tests to examine the relationship between each SNP and disease from another point of view.

In general, our results show that filtering features using the harmonic mean of sensitivity and specificity (HMSS) through LDA performs better in our study than using LDA training accuracy or mutual information. Our results also demonstrate that exhaustive search of small subsets (one SNP, two SNPs, or three-SNP subsets built from the best 100 composite 2-SNP combinations) can find an optimal subset, and that further inclusion of more SNPs through a heuristic algorithm does not always improve the performance of the SNP subsets. Although sequential forward floating selection can be applied to prevent the nesting effect of forward selection, it does not always outperform the latter, because it can overfit by visiting more complex subset states.

Our results also indicate that HMSS can be used as a criterion to evaluate the classification ability of a function on imbalanced data without modifying the original data set, unlike classification accuracy. Our four studies suggest that the sequential information bottleneck (sIB), a new unsupervised technique, can be adopted to predict the outcome, and its ability to detect the target status is superior to that of traditional LDA in this study.

The best test HMSS for predicting CVD, stroke, CAD, and psoriasis through sIB is 0.59406, 0.641815, 0.645315, and 0.678658, respectively. In terms of group prediction accuracy, the highest test accuracy of sIB for diagnosing a normal status among controls reaches 0.708999, 0.863216, 0.639918, and 0.850275 in the four studies, respectively, if the test accuracy among cases is required to be no less than 0.4.
On the other hand, the highest test accuracy of sIB for diagnosing disease among cases reaches 0.748644, 0.789916, 0.705701, and 0.749436 in the four studies, respectively, if the test accuracy among controls is required to be at least 0.4.

A further genome-wide association analysis using chi-square tests detects no significant SNPs at the cut-off level 9.09451E-08 in the Framingham Heart Study of CVD. In the WTCCC data, only two significant SNPs associated with CAD are detected. In the genome-wide study of psoriasis, most of the top 20 SNP markers with impressive classification accuracy are also significantly associated with the disease by chi-square test at the cut-off value 1.11E-07.

Although our classification methods can achieve high accuracy in this study, complete descriptions of the classification results (95% confidence intervals or statistical tests of differences) would require more cost-effective methods or a more efficient computing system, neither of which is currently available for our genome-wide study. We also note that the purpose of this study is to identify subsets of SNPs with high prediction ability, and that SNPs with good discriminant power are not necessarily causal markers for the disease.
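
For reference, the filter criterion named above can be computed as follows; the counts are arbitrary and the function is a generic implementation of the harmonic mean of sensitivity and specificity, not the authors' code.

    def hmss(tp, fn, tn, fp):
        """Harmonic mean of sensitivity and specificity (HMSS).

        Unlike overall accuracy, HMSS weighs cases and controls equally,
        which is why it suits imbalanced data without rebalancing the data set.
        """
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        if sensitivity + specificity == 0:
            return 0.0
        return 2 * sensitivity * specificity / (sensitivity + specificity)

    # Example: 40 cases and 160 controls; the classifier detects 24 cases correctly.
    print(hmss(tp=24, fn=16, tn=120, fp=40))  # ≈ 0.667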

Relevance:

30.00%

Publisher:

Abstract:

The Houston region is home to arguably the largest petrochemical and refining complex anywhere. The effluent of this complex includes many potentially hazardous compounds, and study of some of them has led to the recognition that a number of known and probable carcinogens are present at elevated levels in ambient air. Two of these, benzene and 1,3-butadiene, have been found in concentrations that may pose a health risk for residents of Houston.

Recent popular journalism and publications by local research institutions have increased public interest in Houston's air quality. Much of this literature has been critical of local regulatory agencies' oversight of industrial pollution, and a number of citizens in the region have begun to volunteer with air quality advocacy groups in the testing of community air. Inexpensive methods exist for monitoring ambient concentrations of ozone, particulate matter, and airborne toxics. This study is an evaluation of a technique that has been successfully applied to airborne toxics.

This technique, solid phase microextraction (SPME), has been used to measure airborne volatile organic hydrocarbons at community-level concentrations, yielding accurate and rapid concentration estimates at a relatively low cost per sample. Examples of its application to the measurement of airborne benzene exist in the literature; none were found for airborne 1,3-butadiene. These compounds were selected in order to evaluate SPME as a community-deployed technique, to replicate its previous application to benzene, to extend its application to 1,3-butadiene, and because of the salience of both compounds in this community.

This study demonstrates that SPME is a useful technique for quantification of 1,3-butadiene at concentrations observed in Houston; laboratory background levels precluded recommendation of the technique for benzene. One type of SPME fiber, 85 μm Carboxen/PDMS, was found to be a sensitive sampling device for 1,3-butadiene under temperature and humidity conditions common in Houston. The study indicates that these variables affect instrument response, which suggests the need for calibration under the specific conditions encountered. While deployment of this technique was less expensive than other methods of quantifying 1,3-butadiene, the complexity of calibration may exclude an SPME method from broad deployment by community groups.
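
As a generic illustration of the calibration step implied above (not the study's actual procedure), an SPME fiber's detector response can be regressed against standards prepared at one temperature/humidity condition and the fit inverted for field samples; all numbers are invented.

    import numpy as np

    # Standards at a fixed temperature/RH condition: concentration (ppbv) vs. peak area.
    conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
    area = np.array([1200, 2300, 4700, 11800, 23500])  # invented detector counts

    slope, intercept = np.polyfit(conc, area, 1)  # linear calibration curve

    def estimate_conc(sample_area):
        """Invert the calibration; valid only for the T/RH condition it was built under."""
        return (sample_area - intercept) / slope

    print(round(estimate_conc(7500), 2))  # ≈ 3.2 ppbv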

Relevance:

30.00%

Publisher:

Abstract:

Background and Purpose. There is a growing consensus among health care researchers that quality of life (QoL) is an important outcome and, within the field of family caregiving, that cost-effectiveness research is needed to determine which programs have the greatest benefit for family members. This study uses a multidimensional approach to measure the cost-effectiveness of a multicomponent intervention designed to improve the quality of life of spousal caregivers of stroke survivors. Methods. The CAReS study (Committed to Assisting with Recovery after Stroke) was a 5-year prospective, longitudinal intervention study of 159 stroke survivors and their spousal caregivers upon discharge of the stroke survivor from inpatient rehabilitation to home. CAReS cost data were analyzed to determine the incremental cost of the intervention per caregiver. The mean values of the quality-of-life predictor variables for the intervention group of caregivers were compared to the mean values of usual-care groups reported in the literature. The cost of the intervention per caregiver was then divided by each significant difference to calculate the incremental cost-effectiveness ratio for that predictor variable. Results. The cost of the intervention per caregiver was approximately $2,500. Statistically significant differences were found between the mean scores for the Perceived Stress and Satisfaction with Life scales, but not for the Self-Reported Health Status, Mutuality, and Preparedness scales. Conclusions. This study provides a prototype cost-effectiveness analysis on which researchers can build. Using a multidimensional approach to measure QoL, as in this analysis, incorporates both the subjective and objective components of QoL. Some of the QoL predictor variable scores differed significantly between the intervention and comparison groups, indicating a significant impact of the intervention, and the estimated cost of that impact was examined. In future studies, a scale that takes into account both the dimensions of QoL and the weighting each person places on those dimensions should be used to provide a single QoL score per participant. With participant-level cost and outcome data, the uncertainty around each cost-effectiveness ratio can be calculated using the bias-corrected percentile bootstrapping method and plotted to produce cost-effectiveness acceptability curves.
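
A minimal sketch of the incremental cost-effectiveness calculation described above, with invented scale scores: the per-caregiver intervention cost is divided by the between-group difference on each outcome that differed significantly.

    def icer(cost_per_caregiver, mean_intervention, mean_usual_care):
        """Cost per one-unit improvement on an outcome scale (illustrative)."""
        effect = mean_intervention - mean_usual_care
        if effect == 0:
            raise ValueError("No difference in effect; the ratio is undefined.")
        return cost_per_caregiver / effect

    # Hypothetical numbers, not the CAReS results:
    print(icer(2500.0, mean_intervention=24.1, mean_usual_care=21.6))  # dollars per scale point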

Relevance:

30.00%

Publisher:

Abstract:

Supermarket nutrient movement, a community food consumption measure, aggregated 1,023 high-fat foods, representing 100% of visible fats and approximately 44% of hidden fats in the food supply (FAO, 1980). The fatty acid and cholesterol content of foods shipped from the warehouse to 47 supermarkets in the Houston area was calculated over a 6-month period. These stores were located in census tracts with over 50% of a given ethnicity: Hispanic, black non-Hispanic, or white non-Hispanic. Categorizing the supermarket census tracts by predominant ethnicity, significant differences were found by ANOVA in the proportion of specific fatty acids and the cholesterol content of the foods examined. Using ecological regression, ethnicity, income, and median age predicted supermarket lipid movements, while residential stability did not. No associations were found between lipid movements and cardiovascular disease mortality, making further validation necessary before the method can be applied epidemiologically. However, it has been shown to be a non-reactive and cost-effective method appropriate for tracking target foods in population groups, and for assessing the impact of mass media nutrition education, legislation, and fortification on community food and nutrient purchase patterns.

Relevance:

30.00%

Publisher:

Abstract:

This investigation compares two methodologies for calculating the national cost of epilepsy: the provider-based survey method (PBSM) and the patient-based medical charts and billing method (PBMC&BM). The PBSM uses the National Hospital Discharge Survey (NHDS), the National Hospital Ambulatory Medical Care Survey (NHAMCS), and the National Ambulatory Medical Care Survey (NAMCS) as its sources of utilization. The PBMC&BM uses patient data, charts, and billings to determine utilization rates for specific components of hospital, physician, and drug prescription services.

The 1995 hospital and physician cost of epilepsy is estimated to be $722 million using the PBSM and $1,058 million using the PBMC&BM. The difference of $336 million results from a $136 million difference in utilization and a $200 million difference in unit cost.

Utilization. The utilization difference of $136 million is composed of an inpatient variation of $129 million ($100 million hospital and $29 million physician) and an ambulatory variation of $7 million. The $100 million hospital variance is attributed to the inclusion of febrile seizures in the PBSM (−$79 million) and the exclusion of admissions attributable to epilepsy ($179 million). The former suggests that the diagnostic codes used in the NHDS may not properly match the current definition of epilepsy as used in the PBMC&BM; the latter suggests NHDS errors in attributing an admission to the principal diagnosis.

The $29 million variance in inpatient physician utilization results from different per-day-of-care physician visit rates: 1.3 for the PBMC&BM versus 1.0 for the PBSM. The absence of visit frequency measures in the NHDS affects the internal validity of the PBSM estimate and requires the investigator to make conservative assumptions.

The remaining ambulatory resource utilization variance is $7 million. Of this amount, $22 million is the result of an underestimate of ancillaries in the NHAMCS and NAMCS extrapolations using the patient visit weight.

Unit cost. The resource cost variation is $200 million: $22 million inpatient and $178 million ambulatory. The inpatient variation of $22 million is composed of $19 million in hospital per-day rates, due to a higher cost per day in the PBMC&BM, and $3 million in physician visit rates, due to a higher cost per visit in the PBMC&BM.

The ambulatory cost variance of $178 million is composed of higher per-physician-visit costs of $97 million and higher per-ancillary costs of $81 million. Both are attributed to the PBMC&BM's precise identification of resource utilization, which permits accurate valuation.

Conclusion. Both methods have specific limitations. The PBSM's strengths are its sample designs, which lead to nationally representative estimates and permit statistical point and confidence interval estimation for the nation for certain variables under investigation. However, the findings of this investigation suggest that the internal validity of the derived estimates is questionable and that important additional information required to precisely estimate the cost of an illness is absent.

The PBMC&BM is superior in identifying the resources utilized in the physician encounter with the patient, permitting more accurate valuation. However, the PBMC&BM does not have the statistical reliability of the PBSM; it relies on synthesized national prevalence estimates to extrapolate a national cost estimate. While precision is important, the ability to generalize to the nation may be limited because of the small number of patients followed.
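
The reconciliation above can be re-added as a quick arithmetic check; the figures below are simply the components reported in the abstract, in millions of dollars.

    # PBMC&BM minus PBSM differences, $ millions (components as reported above).
    utilization = {
        "inpatient hospital": 100,    # = -79 (febrile seizures) + 179 (missed admissions)
        "inpatient physician": 29,
        "ambulatory": 7,
    }
    unit_cost = {
        "inpatient hospital per-day rate": 19,
        "inpatient physician visit rate": 3,
        "ambulatory physician visits": 97,
        "ambulatory ancillaries": 81,
    }
    print(sum(utilization.values()), sum(unit_cost.values()),
          sum(utilization.values()) + sum(unit_cost.values()))  # 136 200 336 = 1,058 - 722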

Relevance:

30.00%

Publisher:

Abstract:

This paper examines the repercussion effects on the production costs of industries in Asian countries when some countries eliminate tariffs and import commodity taxes on all imports. This kind of analysis is related, in some sense, to measuring the effects of FTAs on economies, and thus may be considered an analysis of “pseudo FTAs.” Examining a number of combinations of “pseudo FTAs” among China, Japan, and ASEAN, it is found that China plus Japan plus ASEAN is the most effective “pseudo FTA” of the combinations in terms of production cost reduction. The method is a form of price model based on the Asian International Input-Output Table; almost no studies on price models related to multilateral input-output tables have been carried out thus far.
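
A stylized sketch of the kind of input-output price model involved: with input coefficient matrix A and primary-input (plus tariff) coefficients v, prices solve p = Aᵀp + v, so eliminating tariffs lowers v and the reduction propagates through the cost structure. The matrix and rates below are illustrative, not the Asian International Input-Output Table.

    import numpy as np

    # Illustrative 3-sector input coefficient matrix (column j = inputs per unit of output j).
    A = np.array([[0.10, 0.20, 0.05],
                  [0.15, 0.10, 0.25],
                  [0.05, 0.10, 0.10]])
    value_added = np.array([0.55, 0.45, 0.50])  # primary input coefficients
    tariff = np.array([0.05, 0.08, 0.02])       # tariff/import-tax cost per unit of output

    def prices(A, v):
        # Leontief price model: p = A^T p + v  =>  p = (I - A^T)^{-1} v
        return np.linalg.solve(np.eye(len(v)) - A.T, v)

    p_with = prices(A, value_added + tariff)
    p_without = prices(A, value_added)          # "pseudo FTA": tariffs eliminated
    print((p_without - p_with) / p_with * 100)  # % change in production cost, by sector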

Relevance:

30.00%

Publisher:

Abstract:

Land value carries significant weight in house prices in historical town centers. An essential aim in regulating the mortgage market, particularly in the financial and property crisis that countries such as Spain are undergoing, is to have objective procedures at hand for its valuation, whatever the conditions (location, construction, planning). Of all the factors contributing to house price make-up, land is the only one whose value does not depend on acquisition cost but rather on the combination of location and time, that is, the specific circumstances at that point and at the exact moment of valuation. For this reason, the procedure most commonly applied to land valuation in town centers is the residual method: once the selling price of new housing in a district is known, the other necessary costs and expenses of development are deducted, including building costs and the developer’s profit, and the value left is that of the land. To apply this procedure it is vital to have figures such as building costs, technical fees, and tax costs, but above all it is essential to obtain the selling price of the new housing. This is not always feasible, on account of the lack of new-build development in the location concerned. This shortage of information occurs in historical town centers, where urban renewal is slight owing to heritage-protection policies and where, nevertheless, there is substantial activity in the secondary market. In these circumstances, an alternative for land valuation in consolidated urban areas is to adapt the residual method to the particular characteristics of the secondary market. To this end, we propose appreciating the dwelling by applying, in reverse, the traditional depreciation methods proposed by the various valuation manuals and guidelines. The reliability of the results is analyzed by contrasting them with published figures for newly built properties, according to the different rules applied in administrative appraisals in Spain, and by examining the effect of a possible correction for state of conservation.
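
A simplified sketch of the static residual method and of the reverse-depreciation adaptation described above; the cost figures, the developer margin and the straight-line adjustment are assumptions for illustration, not the rules of any Spanish appraisal standard.

    def residual_land_value(new_sale_price, build_cost, fees_and_taxes, developer_margin=0.18):
        """Static residual method: land value = sale price minus all other costs and profit."""
        profit = developer_margin * new_sale_price
        return new_sale_price - build_cost - fees_and_taxes - profit

    def equivalent_new_price(second_hand_price, age_years, useful_life=100):
        """'Appreciate' a second-hand dwelling back to an as-new price by reversing
        straight-line depreciation (real manuals use other depreciation curves)."""
        remaining = 1 - age_years / useful_life
        return second_hand_price / remaining

    # Only second-hand evidence is available in a protected historic centre:
    as_new = equivalent_new_price(second_hand_price=180_000, age_years=40)
    print(residual_land_value(as_new, build_cost=120_000, fees_and_taxes=30_000))  # land share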

Relevance:

30.00%

Publisher:

Abstract:

Although there are numerous accurate methods for measuring soil moisture content at a single point, until very recently there were no precise in situ, real-time methods able to measure soil moisture content along a line. The Distributed Fiber Optic Temperature measurement method (DFOT) determines temperature at 0.12 m intervals over long distances (up to 10,000 m), with high temporal frequency and an accuracy of ±0.2 °C. The principle of temperature measurement along a fiber optic cable is based on the thermal sensitivity of the relative intensities of backscattered photons that arise from collisions with electrons in the core of the glass fiber. A laser pulse generated by the DTS unit and traversing a fiber optic cable results in backscatter at two frequencies. The DTS quantifies the intensity of these backscattered photons and the time elapsed between the pulse and the observed returned light. The intensity at one of the frequencies is strongly dependent on the temperature at the point where the scattering occurred. The computed temperature is attributed to the position along the cable from which the light was reflected, which is in turn computed from the light’s travel time.
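
A schematic sketch of the two computations described above: locating the scattering point from the round-trip travel time, and inferring temperature from the ratio of the two backscatter intensities (anti-Stokes over Stokes). The calibration constants are placeholders; a real DTS unit is calibrated against reference temperature baths.

    import math

    C = 2.998e8      # speed of light in vacuum, m/s
    N_GROUP = 1.468  # group refractive index of the fibre core (typical silica value)

    def scatter_position(round_trip_time_s):
        """Distance along the fibre at which the backscatter originated."""
        return C * round_trip_time_s / (2 * N_GROUP)

    def temperature_from_ratio(anti_stokes_over_stokes, gamma=640.0, c_cal=0.376):
        """Raman DTS relation T = gamma / (c_cal - ln R), in kelvin; gamma and
        c_cal are instrument calibration constants (placeholder values)."""
        return gamma / (c_cal - math.log(anti_stokes_over_stokes)) - 273.15  # degrees C

    print(round(scatter_position(5e-6)))            # ≈ 511 m for a 5 microsecond round trip
    print(round(temperature_from_ratio(0.164), 1))  # ≈ 19.9 °C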

Relevance:

30.00%

Publisher:

Abstract:

Erosion potential and the effects of tillage can be evaluated from quantitative descriptions of soil surface roughness. The present study therefore aimed to fill the need for a reliable, low-cost and convenient method to measure that parameter. Based on the interpretation of micro-topographic shadows, this new procedure is primarily designed for use in the field after tillage. The principle underlying shadow analysis is the direct relationship between soil surface roughness and the shadows cast by soil structures under fixed sunlight conditions. The results obtained with this method were compared to the statistical indexes used to interpret field readings recorded by a pin meter. The tests were conducted on 4 m² sandy loam and sandy clay loam plots divided into 1 m² subplots tilled with three different tools: chisel, tiller and roller. The highly significant correlation between the statistical indexes and the shadow analysis results, obtained in the laboratory as well as in the field for all soil–tool combinations, proved that both variability (CV) and dispersion (SD) are captured by the new method. This procedure simplifies the interpretation of soil surface roughness and shortens the time involved in field operations by a factor of 12 to 20.
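
For context, the two statistical indexes against which the shadow method was benchmarked can be computed from pin-meter height readings as below; the readings are invented.

    import statistics

    # Hypothetical pin-meter surface heights (mm) along one transect after tillage.
    heights = [12.0, 18.5, 9.0, 22.0, 15.5, 11.0, 19.5, 14.0]

    mean_h = statistics.mean(heights)
    sd = statistics.pstdev(heights)  # dispersion index (SD)
    cv = sd / mean_h * 100           # variability index (CV), percent

    print(f"SD = {sd:.2f} mm, CV = {cv:.1f} %")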

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a novel method to simulate radio propagation is presented. The method consists of two steps: automatic 3D scenario reconstruction and propagation modeling. For 3D reconstruction, a machine learning algorithm is adopted and improved to automatically recognize objects in pictures taken of target regions, and 3D models are generated based on the recognized objects. The propagation model employs a ray tracing algorithm to compute signal strength for each point on the constructed 3D map. Our approach reduces, or even eliminates, the infrastructure cost and human effort involved in constructing the realistic 3D scenes used in radio propagation modeling. In addition, the propagation model proves to be both accurate and efficient.
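
The paper itself gives no code; as a rough stand-in for the idea of evaluating signal strength at every point of a reconstructed map, the sketch below applies free-space loss to line-of-sight points and a fixed penalty to obstructed ones. A real ray tracer would instead trace reflections and diffractions against the 3D geometry, so this is only a placeholder for that step.

    import math

    FREQ_HZ = 2.4e9
    TX_POWER_DBM = 20.0
    OBSTRUCTION_LOSS_DB = 15.0  # crude penalty instead of tracing reflected/diffracted rays

    def fspl_db(distance_m, freq_hz=FREQ_HZ):
        """Free-space path loss in dB."""
        return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

    def received_dbm(tx, rx, blocked):
        d = max(1.0, math.dist(tx, rx))
        return TX_POWER_DBM - fspl_db(d) - (OBSTRUCTION_LOSS_DB if blocked else 0.0)

    tx = (0.0, 0.0)
    for rx in [(10.0, 0.0), (50.0, 20.0), (120.0, 80.0)]:
        print(rx, round(received_dbm(tx, rx, blocked=rx[0] > 100), 1))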

Relevance:

30.00%

Publisher:

Abstract:

Look-up tables are collected and analysed in a harmonized way for 12 European National Travel Surveys (NTS), covering the age group 13–84 years. Travel behaviour measured as kilometres, time use and trips per traveller is compared. Trips per traveller are very similar across the countries, whereas kilometres differ most, from −28% for Spain to +19% and +14% for Sweden and Finland. Two main factors behind the differences are shown to be GDP per capita and density in urban areas; the latter is the main reason for the low level in Spain. Mode shares are rather similar, except for Spain, which has a very high level of walking trips; cycling is higher in the Netherlands, public transport in Switzerland, and air traffic in Sweden. Kilometres per respondent/inhabitant is normally used for national planning purposes, and this figure is strongly affected by the share of mobile travellers. The immobile share varies between 8% and 28%, with six NTS at a 15–17% level. These differences are analysed and discussed, and it is concluded that the immobile share should be a little less than 15–17%, because it is assessed that some short trips may have been forgotten in these six countries. The share tends to decrease with higher density. The resulting immobile share is very dependent on data collection methodology, sampling method, quality of interviewer fieldwork, etc. The paper shows further possibilities for improving local surveys based on comparison with other countries.
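
The dependence of kilometres per respondent on the immobile share, discussed above, can be written out as a small check; the figures are illustrative.

    def km_per_respondent(km_per_mobile_traveller, immobile_share):
        """Average daily kilometres per respondent, given the share reporting no trips."""
        return (1 - immobile_share) * km_per_mobile_traveller

    # The same travellers, but one survey "loses" more short-trip days to non-reporting:
    print(km_per_respondent(42.0, immobile_share=0.15))  # 35.7 km
    print(km_per_respondent(42.0, immobile_share=0.28))  # 30.24 km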

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a novel method to simulate radio propagation is presented. The method consists of two steps: automatic 3D scenario reconstruction and propagation modeling. For 3D reconstruction, a machine learning algorithm is adopted and improved to automatically recognize objects in pictures taken of the target region, and 3D models are generated based on the recognized objects. The propagation model employs a ray tracing algorithm to compute signal strength for each point on the constructed 3D map. Compared with other methods, the work presented in this paper reduces the human effort and cost of constructing the 3D scene; moreover, the developed propagation model shows potential in both accuracy and efficiency.