888 results for "Accuracy of Hotel Feasibility Study Projections"


Relevance: 100.00%

Abstract:

Emerging sciences, such as conceptual cost estimating, seem to have to go through two phases. The first phase involves reducing the field of study down to its basic ingredients - from systems development to technological development (techniques) to theoretical development. The second phase operates in the opposite direction, building up techniques from theories, and systems from techniques. Cost estimating is clearly and distinctly still in the first phase. A great deal of effort has been put into the development of both manual and computer-based cost estimating systems during this first phase and, to a lesser extent, the development of a range of techniques that can be used (see, for instance, Ashworth & Skitmore, 1986). Theoretical developments have not, as yet, been forthcoming. All theories need the support of some observational data, and cost estimating is not likely to be an exception. These data do not need to be complete in order to build theories. Just as it is possible to construct an image of a prehistoric animal such as the brontosaurus from only a few key bones and relics, so a theory of cost estimating may possibly be founded on a few factual details. The eternal argument between empiricists and deductionists is that, just as theories need factual support, so we need theories in order to know what facts to collect. In cost estimating, the basic facts of interest concern accuracy, the cost of achieving this accuracy, and the trade-off between the two. When cost estimating theories do begin to emerge, it is highly likely that these relationships will be central features. This paper presents some of the facts we have been able to acquire regarding one part of this relationship - accuracy and its influencing factors. Although some of these factors, such as the amount of information used in preparing the estimate, will have cost consequences, we have not yet reached the stage of quantifying these costs. Indeed, as will be seen, many of the factors do not involve any substantial cost considerations. The absence of any theory is reflected in the arbitrary manner in which the factors are presented. Rather, the emphasis here is on the consideration of purely empirical data concerning estimating accuracy. The essence of good empirical research is to minimize the role of the researcher in interpreting the results of the study. Whilst space does not allow a full treatment of the material in this manner, the principle has been adopted as closely as possible to present results in an uncleaned and unbiased way. In most cases the evidence speaks for itself. The first part of the paper reviews most of the empirical evidence that we have located to date. Knowledge of any work done but omitted here would be most welcome. The second part of the paper presents an analysis of some recently acquired data pertaining to this growing subject.
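
The basic accuracy facts this abstract refers to are conventionally summarised by the bias and consistency of estimate-to-tender ratios. A minimal Python sketch of those two measures, with invented figures (nothing here is taken from the paper):

```python
import statistics

# Hypothetical estimate/accepted-tender pairs in dollars (illustrative only).
estimates = [1.20e6, 0.85e6, 2.10e6, 1.55e6, 0.95e6]
tenders   = [1.10e6, 0.90e6, 2.30e6, 1.50e6, 1.05e6]

# Accuracy is commonly summarised through the estimate/tender ratio:
# the mean ratio measures bias, the coefficient of variation consistency.
ratios = [e / t for e, t in zip(estimates, tenders)]
bias = statistics.mean(ratios)
cv = statistics.stdev(ratios) / bias

print(f"mean estimate/tender ratio (bias): {bias:.3f}")
print(f"coefficient of variation (consistency): {cv:.1%}")
```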

Relevance: 100.00%

Abstract:

This study explores the accuracy and valuation implications of the application of a comprehensive list of equity multiples in the takeover context. Motivating the study is the prevalent use of equity multiples in practice, the observed long-run underperformance of acquirers following takeovers, and the scarcity of multiples-based research in the merger and acquisition setting. In exploring the application of equity multiples in this context, three research questions are addressed: (1) how accurate are equity multiples (RQ1); (2) which equity multiples are more accurate in valuing the firm (RQ2); and (3) which equity multiples are associated with greater misvaluation of the firm (RQ3). Following a comprehensive review of the extant multiples-based literature, it is hypothesised that the accuracy of multiples in estimating stock market prices in the takeover context will rank as follows (from best to worst): (1) forecasted earnings multiples, (2) multiples closer to bottom-line earnings, and (3) multiples based on Net Cash Flow from Operations (NCFO) and trading revenue. The relative inaccuracies in multiples are expected to flow through to equity misvaluation (as measured by the ratio of estimated market capitalisation to residual income value, or P/V). Accordingly, it is hypothesised that greater overvaluation will be exhibited for multiples based on Trading Revenue, NCFO, Book Value (BV) and earnings before interest, tax, depreciation and amortisation (EBITDA) than for multiples based on bottom-line earnings, and that multiples based on Intrinsic Value will display the least overvaluation. The hypotheses are tested using a sample of 147 acquirers and 129 targets involved in Australian takeover transactions announced between 1990 and 2005. The results show that, first, the majority of computed multiples examined exhibit valuation errors within 30 percent of stock market values. Second, and consistent with expectations, the results support the superiority of multiples based on forecasted earnings in valuing targets and acquirers engaged in takeover transactions. Although a gradual improvement in estimating stock market values is not entirely evident when moving down the Income Statement, historical earnings multiples perform better than multiples based on Trading Revenue or NCFO. Third, while multiples based on forecasted earnings have the highest valuation accuracy, they, along with Trading Revenue multiples for targets, produce the greatest overvaluation for acquirers and targets. Consistent with predictions, greater overvaluation is exhibited for multiples based on Trading Revenue for targets, and on NCFO and EBITDA for both acquirers and targets. Finally, as expected, multiples based on Intrinsic Value (along with BV) are associated with the least overvaluation. Given the widespread usage of valuation multiples in takeover contexts, these findings offer a unique insight into their relative effectiveness. Importantly, the findings add to the growing body of valuation accuracy literature, especially within Australia, and should assist market participants to better understand the relative accuracy and misvaluation consequences of various equity multiples used in takeover documentation and assist them in subsequent investment decision making.
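
As an illustration of the valuation-error benchmark used above (errors within 30 percent of stock market values), the following Python sketch applies a harmonic-mean forecast-earnings multiple to invented peer data. The harmonic-mean choice and every number are assumptions for illustration, not the study's sample or method:

```python
import numpy as np

# Hypothetical peer data (illustrative only): price-to-forecast-earnings multiples.
peer_pe = np.array([14.2, 16.8, 12.5, 15.1, 13.9])
target_forecast_eps = 2.40   # forecast earnings per share of the firm being valued
observed_price = 35.00       # actual stock market price

# The harmonic mean damps the influence of high-multiple outliers, a common
# choice in the multiples-accuracy literature.
harmonic_multiple = len(peer_pe) / np.sum(1.0 / peer_pe)
estimated_price = harmonic_multiple * target_forecast_eps

# Percentage valuation error relative to the market price; the study's
# benchmark asks whether the absolute error falls within 30 percent.
error = (estimated_price - observed_price) / observed_price
print(f"estimated price: {estimated_price:.2f}, error: {error:+.1%}, "
      f"within 30%: {abs(error) <= 0.30}")
```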

Relevance: 100.00%

Abstract:

The study examined the accuracy of maternal perception of child weight. Urban, affluent mothers of 111 children aged 2-5 years were recruited. Nearly a quarter of mothers overestimated their underweight child as healthy weight, and all overweight/obese children were perceived as healthy weight. Mothers were therefore unable to recognize their child's true weight status.
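
Perception accuracy of this kind is usually read off a cross-tabulation of measured against perceived weight status. A minimal sketch with invented category pairs (not the study's data):

```python
from collections import Counter

# Hypothetical (measured, perceived) weight-status pairs -- illustrative only.
pairs = [("underweight", "healthy"), ("healthy", "healthy"),
         ("overweight", "healthy"), ("healthy", "healthy"),
         ("obese", "healthy"), ("underweight", "underweight")]

# Cross-tabulate measured vs. perceived status and report raw agreement.
table = Counter(pairs)
agreement = sum(n for (m, p), n in table.items() if m == p) / len(pairs)
print(table)
print(f"mothers classifying correctly: {agreement:.0%}")
```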

Relevance: 100.00%

Abstract:

In clinical electrocardiogram (ECG) signal analysis, it is important to detect not only the centres of the P wave, the QRS complex and the T wave, but also time intervals such as the ST segment. Much research has focused entirely on QRS complex detection, via methods such as wavelet transforms, spline fitting and neural networks. However, drawbacks include the false classification of a severe noise spike as a QRS complex, possibly requiring manual editing, and the omission of information contained in other regions of the ECG signal. While some attempts have been made to develop algorithms that detect additional signal characteristics, such as P and T waves, the reported success rates vary from person to person and from beat to beat. To address this variability we propose the use of Markov chain Monte Carlo (MCMC) statistical modelling to extract the key features of an ECG signal, and we report on a feasibility study to investigate the utility of the approach. The modelling approach is examined with reference to a realistic computer-generated ECG signal, where details such as wave morphology and noise levels are variable.
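
By way of illustration, a random-walk Metropolis sampler (one member of the MCMC family proposed above) can fit the amplitude, centre and width of a single Gaussian-shaped wave in a noisy synthetic segment. This sketch is not the paper's model; the wave shape, noise level and proposal scales are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ECG" segment: one Gaussian-shaped wave in noise, a stand-in
# for the computer-generated signal; all parameter values are illustrative.
t = np.linspace(0.0, 1.0, 200)
def wave(amp, centre, width):
    return amp * np.exp(-0.5 * ((t - centre) / width) ** 2)

true = dict(amp=0.25, centre=0.30, width=0.04)
signal = wave(**true) + rng.normal(0.0, 0.05, t.size)

def log_likelihood(theta):
    amp, centre, width = theta
    if width <= 0:
        return -np.inf
    resid = signal - wave(amp, centre, width)
    return -0.5 * np.sum(resid ** 2) / 0.05 ** 2

# Random-walk Metropolis over (amplitude, centre, width).
theta = np.array([0.1, 0.5, 0.1])
ll = log_likelihood(theta)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0.0, [0.01, 0.01, 0.005])
    ll_prop = log_likelihood(prop)
    if np.log(rng.uniform()) < ll_prop - ll:   # Metropolis accept/reject
        theta, ll = prop, ll_prop
    samples.append(theta.copy())

burn = np.array(samples[1000:])                # discard burn-in
print("posterior means (amp, centre, width):", burn.mean(axis=0).round(3))
```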

Relevance: 100.00%

Abstract:

A mine site water balance is important for communicating information to interested stakeholders, for reporting on water performance, and for anticipating and mitigating water-related risks through water use/demand forecasting. Improving the accuracy of the water balance is therefore crucial for sites to achieve best-practice water management and to maintain their social license to operate. For sites located in high-rainfall environments, the water received by storage dams through runoff can represent a large proportion of the overall inputs to site; inaccuracies in these flows can therefore lead to inaccuracies in the overall site water balance. Hydrological models that estimate runoff flows are often incorporated into the simulation models used for water use/demand forecasting. The Australian Water Balance Model (AWBM) is one example that has been widely applied in the Australian context. However, the calibration of AWBM in a mining context can be challenging. Through a detailed case study, we outline an approach that was used to calibrate and validate AWBM at a mine site. Commencing with a dataset of monitored dam levels, a mass balance approach was used to generate an observed runoff sequence. By incorporating a portion of this observed dataset into the calibration routine, we achieved a closer fit between the observed and simulated datasets compared with the base case. We conclude by highlighting opportunities for future research to improve the calibration fit by improving the quality of the input dataset. This will ultimately lead to better models for runoff prediction and thereby improve the accuracy of mine site water balances.
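
The mass balance step described above can be sketched as follows: observed runoff is recovered as the residual of storage change after metered transfers and estimated losses are accounted for. All series and the loss term are illustrative assumptions, not the case-study data:

```python
import numpy as np

# Hypothetical daily records for one storage dam (all values illustrative).
storage = np.array([120.0, 132.0, 128.0, 150.0, 149.0])  # ML, observed volumes
pumped_in = np.array([5.0, 4.0, 6.0, 5.0, 5.0])          # ML/day, metered inflow
pumped_out = np.array([8.0, 7.0, 9.0, 8.0, 8.0])         # ML/day, metered outflow
evap_seepage = np.array([1.0, 1.0, 1.0, 1.0, 1.0])       # ML/day, estimated losses

# Mass balance: runoff = change in storage - metered inflows
#                        + metered outflows + losses.
# Negative residuals are clipped, since runoff cannot be negative.
delta = np.diff(storage)
runoff = delta - pumped_in[1:] + pumped_out[1:] + evap_seepage[1:]
observed_runoff = np.clip(runoff, 0.0, None)
print("observed runoff sequence (ML/day):", observed_runoff)
```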

Relevance: 100.00%

Abstract:

This study aims to assess the accuracy of a Digital Elevation Model (DEM) generated using Toutin's model. Toutin's model was run using OrthoEngine SE of PCI Geomatics 10.3. The along-track stereo images of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) sensor, with 15 m resolution, were used to produce a DEM over an area of low, near Mean Sea Level (MSL) elevation in Johor, Malaysia. Despite the satisfactory pre-processing results, visual assessment showed that the DEM generated from Toutin's model contained many outliers and incorrect values. The failure of Toutin's model may largely be due to the inaccuracy and insufficiency of the ASTER ephemeris data for low terrain, as well as the large water body in the stereo images.
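
DEM accuracy of this kind is commonly quantified as the RMSE of residuals at surveyed checkpoints, with gross outliers flagged separately. A minimal sketch with invented elevations (not the study's checkpoints):

```python
import numpy as np

# Hypothetical checkpoint elevations (metres): surveyed ground truth
# vs. values sampled from the generated DEM. Illustrative only.
gcp_z = np.array([3.2, 5.8, 2.1, 4.4, 6.0, 1.9])
dem_z = np.array([4.1, 5.2, 9.7, 4.9, 5.1, -2.3])   # note the gross errors

residuals = dem_z - gcp_z
rmse = np.sqrt(np.mean(residuals ** 2))

# Simple outlier screen: flag residuals beyond twice the residual standard
# deviation, echoing the visual assessment of gross errors described above.
outliers = np.abs(residuals) > 2 * np.std(residuals)
print(f"RMSE: {rmse:.2f} m, flagged outliers: {outliers.sum()}")
```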

Relevance: 100.00%

Abstract:

An understanding of the loads generated within the prosthetic leg can aid engineers in the design of components and clinicians in the process of rehabilitation. Traditional methods of assessing these loads have relied on inverse dynamics. This indirect method estimates the applied load using video recordings and force-plates located at a distance from the region of interest, such as the base of the residuum. The well-known limitations of this method relate to the accuracy of the recursive model and the experimental conditions required (Frossard et al., 2003). Recent developments in sensors (Frossard et al., 2003) and prosthetic fixation (Brånemark et al., 2000) permit direct measurement of the loads applied to the residuum of transfemoral amputees. In principle, direct measurement should be an appropriate tool for assessing the accuracy of inverse dynamics; the purpose of this paper is to determine the validity of this assumption. The comparative variable used in this study is the relative velocity of the body centre of mass (VCOM(t)). Velocities are expressed in relative terms to align the static (position-fixed) force-plate measurement with the dynamic load cell measurement.
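
The comparative variable VCOM(t) can be obtained from force-plate data by integrating the net vertical force per unit mass, with the initial velocity fixed at zero so both measurement systems are aligned on a relative scale. A sketch under assumed mass and sampling values (none taken from the paper):

```python
import numpy as np

# Hypothetical vertical ground reaction force from a force-plate (N),
# sampled at 100 Hz, and an assumed body mass -- illustrative values.
fz = np.array([800.0, 850.0, 900.0, 870.0, 820.0, 790.0])
mass, g, dt = 80.0, 9.81, 0.01

# Newton's second law: a(t) = (F(t) - m*g) / m; trapezoidal integration
# gives the centre-of-mass velocity, with v(0) taken as zero for
# alignment (the "relative" VCOM used in the comparison above).
acc = (fz - mass * g) / mass
vcom = np.concatenate(([0.0], np.cumsum(0.5 * (acc[1:] + acc[:-1]) * dt)))
print("relative VCOM (m/s):", vcom.round(4))
```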

Relevance: 100.00%

Abstract:

Objective: The aim of this study was to determine the feasibility of a combined supervised and home-based exercise intervention during chemotherapy for women with recurrent ovarian cancer. Secondary aims were to determine the impact of physical activity on physical and psychological outcomes and on chemotherapy completion rates. Methods: Women with recurrent ovarian cancer were recruited from 3 oncology outpatient clinics in Sydney and Canberra, Australia. All participants received an individualized exercise program consisting of 90 minutes or more of low to moderate aerobic, resistance, core stability, and balance exercise per week, for 12 weeks. Feasibility was determined by recruitment rate, retention rate, intervention adherence, and adverse events. Aerobic capacity, muscular strength, fatigue, sleep quality, quality of life, depression, and chemotherapy completion rates were assessed at weeks 0, 12, and 24. Results: Thirty participants were recruited (recruitment rate, 63%), with a retention rate of 70%. Participants averaged 196 ± 138 min/wk of low to moderate physical activity throughout the intervention, with adherence to the program at 81%. There were no adverse events resulting from the exercise intervention. Participants who completed the study displayed significant improvements in quality of life (P = 0.017), fatigue (P = 0.004), mental health (P = 0.007), muscular strength (P = 0.001), and balance (P = 0.003) after the intervention. Participants completing the intervention had a higher relative dose intensity than noncompleters (P = 0.03). Conclusions: A program of 90 min/wk of low to moderate exercise was achieved by two-thirds of the women with recurrent ovarian cancer in this study, with no adverse events reported. Randomized controlled studies are required to confirm the benefits of exercise reported here.

Relevance: 100.00%

Abstract:

A new technique called the reef resource inventory (RRI) was developed to map the distribution and abundance of benthos and substratum on reefs. This rapid field sampling technique uses divers to visually estimate the percentage cover of categories of benthos and substratum along 2 × 20 m plotless strip-transects positioned randomly over the tops, and systematically along the edges, of reefs. The purpose of this study was to compare the relative sampling accuracy of the RRI against the line intercept transect (LIT) technique, an international standard for sampling reef benthos and substratum. Analysis of paired sampling with LIT and RRI at 51 sites indicated that sampling accuracy was not different (P > 0.05) for 8 of the 12 benthos and substratum categories used in the study. Significant differences were attributed to small-scale patchiness and the cryptic coloration of some benthos; effects associated with sampling a sparsely distributed animal along a line versus an area; difficulties in discriminating some of the benthos and substratum categories; and differences in visual acuity, since LIT measurements were taken by divers close to the seabed whereas RRI measurements were taken by divers higher in the water column. The relative cost efficiency of the RRI technique was at least three times that of LIT for all benthos and substratum categories, and as much as 10 times higher for two categories. These results suggest that the RRI can be used to obtain reliable and accurate estimates of the relative abundance of broad categories of reef benthos and substratum.
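
The paired comparison reported above can be sketched as a site-by-site paired t-test on percent cover for a single category. The data below are invented, not the study's 51 sites:

```python
import numpy as np
from scipy import stats

# Hypothetical paired percent-cover estimates for one benthos category
# at the same sites (illustrative only).
lit = np.array([12.0, 30.5, 8.2, 22.1, 15.0, 40.3, 5.5])
rri = np.array([11.1, 32.0, 9.0, 20.5, 16.2, 38.9, 6.1])

# Paired t-test on site-level differences: a non-significant result
# (P > 0.05) is read as no detectable difference in sampling accuracy.
t_stat, p_value = stats.ttest_rel(lit, rri)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")
```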

Relevance: 100.00%

Abstract:

Modeling of cultivar × trial effects for multienvironment trials (METs) within a mixed model framework is now common practice in many plant breeding programs. The factor analytic (FA) model is a parsimonious form used to approximate the fully unstructured form of the genetic variance-covariance matrix in the model for MET data. In this study, we demonstrate that the FA model is generally the model of best fit across a range of data sets taken from early generation trials in a breeding program. In addition, we demonstrate the superiority of the FA model in achieving the most common aim of METs, namely the selection of superior genotypes. Selection is achieved using best linear unbiased predictions (BLUPs) of cultivar effects at each environment, considered either individually or as a weighted average across environments. In practice, empirical BLUPs (E-BLUPs) of cultivar effects must be used instead of BLUPs since variance parameters in the model must be estimated rather than assumed known. While the optimal properties of minimum mean squared error of prediction (MSEP) and maximum correlation between true and predicted effects possessed by BLUPs do not hold for E-BLUPs, a simulation study shows that E-BLUPs perform well in terms of MSEP.
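
The MSEP comparison between BLUPs and E-BLUPs can be illustrated with a toy one-stage simulation, in which BLUP reduces to shrinking phenotypic means by a known reliability and the E-BLUP analogue plugs in an estimated genetic variance. This is a deliberately simplified stand-in for the paper's FA-model simulation, not a reproduction of it:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setting: true cultivar effects u, noisy phenotypic means y,
# and shrinkage predictions u_hat = h2 * y, where h2 is the reliability.
n, var_g, var_e = 200, 1.0, 1.0
u = rng.normal(0.0, np.sqrt(var_g), n)
y = u + rng.normal(0.0, np.sqrt(var_e), n)

h2 = var_g / (var_g + var_e)                        # known-variance BLUP
h2_est = max(np.var(y) - var_e, 0.01) / np.var(y)   # estimated (E-BLUP-like)

for label, shrink in [("BLUP", h2), ("E-BLUP", h2_est)]:
    u_hat = shrink * y
    msep = np.mean((u_hat - u) ** 2)                # mean squared error of prediction
    corr = np.corrcoef(u_hat, u)[0, 1]              # correlation of true vs. predicted
    print(f"{label}: MSEP = {msep:.3f}, corr(true, pred) = {corr:.3f}")
```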

Relevance: 100.00%

Abstract:

Little is known about the neural mechanisms by which transcranial direct current stimulation (tDCS) affects language processing in post-stroke aphasia. This was addressed in a proof-of-principle study that explored the effects of tDCS application in aphasia during simultaneous functional magnetic resonance imaging (fMRI). We employed a single-subject, cross-over, sham-tDCS-controlled design, and the stimulation was administered to an individualized perilesional stimulation site that was identified by a baseline fMRI scan and a picture naming task. Peak activity during the baseline scan was located in the spared left inferior frontal gyrus, and this area was stimulated during a subsequent cross-over phase. tDCS was successfully administered to the target region, and anodal vs. sham tDCS resulted in selectively increased activity at the stimulation site. Our results thus demonstrate that it is feasible to precisely target an individualized stimulation site in aphasia patients during simultaneous fMRI, which makes it possible to assess the neural mechanisms underlying tDCS application. The functional imaging results of this case report highlight one possible mechanism that may have contributed to beneficial behavioral stimulation effects in previous clinical tDCS trials in aphasia. In the future, this approach will allow distinct patterns of stimulation effects on neural processing to be identified in larger cohorts of patients, which may ultimately yield information about the variability of tDCS effects on brain functions in aphasia.

Relevance: 100.00%

Abstract:

This study contributes to the neglect effect literature by looking at relative trading volume in terms of value. The results for the Swedish market show a significant positive relationship between the accuracy of estimation and relative trading volume. Market capitalisation and analyst coverage have been used in prior studies as proxies for neglect; these measures, however, do not take into account the effort analysts put into estimating corporate pre-tax profits. I also find evidence that the industry of the firm influences the accuracy of estimation. In addition, supporting earlier findings, loss-making firms are associated with larger forecasting errors. Further, I find that the average forecast error in Sweden increased in the year 2000.
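
The forecast error underlying the accuracy measure can be sketched as the absolute difference between actual and forecast pre-tax profit, scaled by the absolute actual. The figures below are invented:

```python
import numpy as np

# Hypothetical analyst consensus forecasts vs. actual pre-tax profits
# (illustrative only). Errors are scaled by the absolute actual,
# a standard definition of relative forecast error.
actual = np.array([250.0, -40.0, 890.0, 120.0])
forecast = np.array([230.0, 10.0, 910.0, 150.0])

error = np.abs(actual - forecast) / np.abs(actual)
print("relative forecast errors:", error.round(3))
# The loss-making firm (negative actual) shows the largest error here,
# mirroring the association with losses reported above.
```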

Relevance: 100.00%

Abstract:

The primary objective of the present study is to show that, for the most common configuration of an impactor system, the accelerometer cannot exactly reproduce the dynamic response of a specimen subjected to impact loading. An equivalent Lumped Parameter Model (LPM) of the set-up has been formulated to assess the accuracy of an accelerometer mounted in a drop-weight impactor for an axially loaded specimen. The specimen under impact loading is represented by a non-linear spring of varying stiffness, while the accelerometer is assumed to behave in a linear manner owing to its high stiffness. Specimens made of steel, aluminium and fibre-reinforced composite (FRC) are used in the present study. Assuming the force-displacement response obtained in an actual impact test to be the true behaviour of the test specimen, a suitable numerical approach has been used to solve the governing non-linear differential equations of the three degrees-of-freedom (DOF) system in a piece-wise linear manner. The numerical solution of the governing differential equations following an explicit time integration scheme yields an excellent reproduction of the mechanical behaviour of the specimen, confirming the accuracy of the numerical approach. However, the spring representing the accelerometer predicts a response that qualitatively matches the assumed force-displacement response of the test specimen but with a perceptibly lower magnitude of load.
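
A minimal numerical sketch of such a lumped parameter model is given below: three masses in series, a stiff linear spring standing in for the accelerometer, a softening non-linear spring for the specimen, and a simple explicit-family integrator (semi-implicit Euler) standing in for the paper's explicit scheme. All parameter values are illustrative assumptions, not the paper's identified set-up:

```python
import numpy as np

# 3-DOF chain: m[0] = impactor mass, m[1] = accelerometer mass,
# m[2] = specimen interface mass. Values are illustrative only.
m = np.array([10.0, 0.02, 0.5])            # kg
k_acc = 5.0e7                              # N/m, stiff linear accelerometer spring
def k_spec(x):                             # N/m, softening non-linear specimen spring
    return 2.0e5 / (1.0 + 50.0 * abs(x))

dt, steps = 1.0e-6, 20000                  # explicit schemes need a small step
x = np.zeros(3)
v = np.array([4.0, 4.0, 4.0])              # common impact velocity (m/s)

peak_force = 0.0
for _ in range(steps):
    f_acc = k_acc * (x[0] - x[1])          # impactor-accelerometer spring force
    f_link = k_acc * (x[1] - x[2])         # accelerometer-interface spring force
    f_spec = k_spec(x[2]) * x[2]           # specimen reaction force
    a = np.array([-f_acc, f_acc - f_link, f_link - f_spec]) / m
    v += a * dt                            # semi-implicit (explicit-family) update
    x += v * dt
    peak_force = max(peak_force, f_spec)

print(f"peak specimen force: {peak_force / 1e3:.1f} kN")
```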

Relevance: 100.00%

Abstract:

The objective of the current study is to evaluate the fidelity of the load cell reading during impact testing in a drop-weight impactor using lumped parameter modeling. For the most common configuration of a moving impactor-load cell system, in which the dynamic load is transferred from the impactor head to the load cell, a quantitative assessment is made of the possible discrepancy in the load cell response. A 3-DOF (degrees-of-freedom) LPM (lumped parameter model) is considered to represent the given impact testing set-up. In this model, a test specimen in the form of a steel hat section, similar to the front rails of cars, is represented by a nonlinear spring, while the load cell is assumed to behave in a linear manner due to its high stiffness. Assuming a given load-displacement response obtained in an actual test to be the true behavior of the specimen, the numerical solution of the governing differential equations following an implicit time integration scheme is shown to yield an excellent reproduction of the mechanical behavior of the specimen, thereby confirming the accuracy of the numerical approach. The spring representing the load cell, however, predicts a response that qualitatively matches the assumed load-displacement response of the test specimen but with a perceptibly lower magnitude of load.
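
For contrast with the explicit scheme in the preceding abstract, the implicit time integration referred to here can be sketched with a Newmark average-acceleration step on a linearised single-DOF impactor-load cell model; unconditional stability is what lets such schemes tolerate larger time steps. All parameters are invented, and the model is a simplification of the 3-DOF LPM described above:

```python
import numpy as np

# Newmark average-acceleration (implicit, unconditionally stable) integration
# of m*a + c*v + k*x = 0. Illustrative parameters: m = effective impactor
# mass, k = linearised load cell stiffness, c = small damping.
m, c, k = 50.0, 20.0, 1.0e7
beta, gamma, dt = 0.25, 0.5, 1.0e-5

x, v = 0.0, 5.0                     # initial impact velocity (m/s)
a = (-c * v - k * x) / m
history = []
for _ in range(2000):
    # Effective stiffness and load for the Newmark displacement update.
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
    f_eff = (m * (x / (beta * dt ** 2) + v / (beta * dt) + (0.5 / beta - 1) * a)
             + c * (gamma * x / (beta * dt) + (gamma / beta - 1) * v
                    + dt * (gamma / (2 * beta) - 1) * a))
    x_new = f_eff / k_eff
    a_new = (x_new - x) / (beta * dt ** 2) - v / (beta * dt) - (0.5 / beta - 1) * a
    v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
    x, v, a = x_new, v_new, a_new
    history.append(k * x)           # simulated load cell force reading

print(f"peak load cell force: {max(history) / 1e3:.1f} kN")
```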