6 results for Market-to-book-ratio
in Digital Commons at Florida International University
Abstract:
In an article entitled “The Specialist: Coming Soon to Your Local Hotel,” by Stan Bromley, Regional Vice President and General Manager, Four Seasons Clift Hotel, San Francisco, the author’s introduction states: “An experienced hotelier discusses the importance of consistently delivering a high ‘quality-to-value’ ratio to guests, particularly as the hotel market becomes specialized and a distinction is drawn between a ‘property’ and a ‘hotel.’” The author’s primary intention is to make the reader aware of changes in the hospitality/hotel marketplace. From its beginnings to the present, the hotel market has evolved continuously; this includes, but is not limited to, mission statement, marketing, management, facilities, and all the tangibles and intangibles of the total hotel experience. “Although we are knocking ourselves out trying to be everything to everyone, I don't think hotel consumers are as interested in ‘mixing and matching’ as they were in the past,” Bromley says. “Today's hotel guest is looking for ‘specialized care,’ and is increasingly skeptical of our industry-wide hotel ads and promises of greatness.” As an example, Bromley draws an analogy with retail outlets such as Macy’s, Saks, and Sears, each of which caters to its own market segment; hotels, he allows, now follow the same outline. “In my view, two key factors will make a hotel a success,” advises Bromley. “First, know your specialty and market to that segment. Second, make sure you consistently offer a high quality-to-value ratio. That means every day.” To emphasize that second point, Bromley adds, “The second factor that will make or break your business is your ability to deliver a high ‘quality/value’ ratio, and to do so consistently.” The author evidently considers the quality-to-value ratio an essential element. Bromley also emphasizes the importance of convention and trade show business to the hotel industry; in his opinion, that business element cannot be overestimated. This does not mean an operator who can accommodate that type of business should exclude other client opportunities outside the target market. It does mean, however, that these secondary opportunities should be addressed only after pursuing the primary target strategy; after all, the largest profit margin lies in the center of the target. To amplify the above statement, and in reference to his own experience, Bromley says, “Being in the luxury end of the business I, on the other hand, need to uncover and book individuals and small corporate meetings more than convention or association business.”
Abstract:
Prior research suggests that book-tax income differences (BTD) relate to both firms' earnings quality and their operating performance. In this dissertation, I explore whether and how financial analysts efficiently signal the implications of BTD. The dissertation comprises three essays on BTD, which together seek to develop a better understanding of how financial analysts utilize the information reflected in BTD (derived from the ratio of taxable income to book income). The first essay is a review and discussion of prior research regarding BTD. The second essay investigates the role of BTD in indicating the consensus and dispersion of analyst recommendations. I find that sell recommendations are positively related to BTD, and I document that analyst coverage has a positive effect on the standard deviation of consensus recommendations with respect to BTD. The third essay is an empirical analysis of analysts' forecast optimism, analyst coverage, and BTD. I find a negative association between forecast optimism and BTD; my results are consistent with a larger BTD being associated with less forecast bias. Overall, I interpret the sum of the evidence as consistent with BTD reflecting information about earnings quality, and with analysts examining and using this information in making decisions regarding both forecasts and recommendations.
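As a minimal sketch of the BTD construct the abstract describes, the snippet below derives a BTD proxy as the ratio of taxable income to book income for a few firm-years; the column names and figures are illustrative assumptions, not the dissertation's data or measurement model.

```python
import pandas as pd

# Hypothetical firm-year data; column names and values are illustrative only.
df = pd.DataFrame({
    "firm": ["A", "B", "C"],
    "pretax_book_income": [1200.0, 850.0, 400.0],   # financial-statement income
    "taxable_income": [900.0, 860.0, 250.0],        # income reported for tax purposes
})

# BTD proxy as described in the abstract: ratio of taxable income to book income.
# A lower ratio (taxable income well below book income) implies a larger book-tax gap.
df["btd_ratio"] = df["taxable_income"] / df["pretax_book_income"]

print(df[["firm", "btd_ratio"]])
```

Under this proxy, a larger book-tax gap corresponds to a smaller ratio, which is the kind of signal the essays relate to analyst recommendations and forecast bias.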
Abstract:
There is a great demand for incorporating advanced engineering tools into biology, biochemistry, and medicine. Many of the existing instruments and tools are expensive and require special facilities. With the advent of nanotechnology in the past decade, new approaches to developing devices and tools have been generated by academia and industry. One such technology, NMR spectroscopy, has been used by biochemists for more than two decades to study the molecular structure of chemical compounds. However, NMR spectrometers are very expensive and require special laboratory rooms for proper operation; high magnetic fields with strengths on the order of several tesla make these instruments unaffordable to most research groups. This doctoral research proposes a new technology for developing NMR spectrometers that can operate at field strengths of less than 0.5 tesla, using an inexpensive permanent magnet and spin-dependent nanoscale magnetic devices. The resulting portable NMR system is intended to analyze samples as small as a few nanoliters. The main problem to resolve when downscaling the variables is obtaining an NMR signal with a high signal-to-noise ratio (SNR). A special tunneling magnetoresistive (TMR) sensor design was developed to achieve this goal. The minimum specifications for each component of the proposed NMR system were established, and a complete NMR system was designed based on these minimum requirements; the goal throughout was to find cost-effective, realistic components. The novel design of the NMR system uses technologies such as direct digital synthesis (DDS), digital signal processing (DSP), and a special backpropagation neural network that finds the best match to the NMR spectrum. The system was designed, calculated, and simulated with excellent results. In addition, a general method to design TMR sensors was developed; the technique was automated, and a computer program was written to help the designer perform this task interactively.
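To illustrate the SNR concern that motivates the sensor design, the sketch below simulates a decaying NMR-style signal with added noise, computes its SNR, and takes a magnitude spectrum as a simple stand-in for a DSP stage. All parameters (sampling rate, frequency, decay constant, noise level) are assumptions for illustration and do not reflect the dissertation's actual TMR-based design.

```python
import numpy as np

# Simulate a free-induction-decay (FID) style signal plus noise (all values assumed).
rng = np.random.default_rng(0)
fs = 10_000.0                      # sampling rate, Hz
t = np.arange(0, 0.5, 1 / fs)      # 0.5 s acquisition window
f_signal = 2_000.0                 # apparent precession frequency after mixing, Hz
t2 = 0.1                           # exponential decay constant, s

signal = np.exp(-t / t2) * np.sin(2 * np.pi * f_signal * t)
noise = 0.05 * rng.standard_normal(t.size)
fid = signal + noise

# SNR in dB from the known signal and noise components of the simulation.
snr_db = 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))
print(f"simulated SNR ~ {snr_db:.1f} dB")

# Magnitude spectrum, the kind of output a spectral-matching stage would consume.
spectrum = np.abs(np.fft.rfft(fid))
freqs = np.fft.rfftfreq(fid.size, 1 / fs)
print(f"spectral peak near {freqs[np.argmax(spectrum)]:.0f} Hz")
```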
Abstract:
This research is motivated by the need to consider lot sizing while accepting customer orders in a make-to-order (MTO) environment, in which each customer order must be delivered by its due date. The job shop is the typical operation model used in an MTO operation, where the production planner must make three concurrent decisions: order selection, lot sizing, and job scheduling. These decisions are usually treated separately in the literature and have mostly led to heuristic solutions. The first phase of the study focuses on a formal definition of the problem. Mathematical programming techniques are applied to model the problem in terms of its objective, decision variables, and constraints. A commercial solver, CPLEX, is applied to the resulting mixed-integer linear programming model on small instances to validate the mathematical formulation. The computational results show that solving problems of industrial size with a commercial solver is not practical. The second phase of the study focuses on developing an effective solution approach for the problem at large scale. The proposed solution approach is an iterative process involving three sequential decision steps: order selection, lot sizing, and lot scheduling. A range of simple sequencing rules is identified for each of the three subproblems, and an experiment using computer simulation is designed to evaluate their performance against a set of system parameters. For order selection, the proposed weighted most-profit rule performs best. The shifting bottleneck and earliest operation finish time rules are the best scheduling rules. For lot sizing, the proposed minimum cost increase heuristic, based on the Dixon-Silver method, performs best when the demand-to-capacity ratio at the bottleneck machine is high, while the proposed minimum cost heuristic, based on the Wagner-Whitin algorithm, is the best lot-sizing heuristic for shops with a low demand-to-capacity ratio. The proposed heuristic is applied to an industrial case to further evaluate its performance; the results show an average improvement in total profit of 16.62%. This research contributes to the production planning research community a complete mathematical definition of the problem and an effective solution approach for solving it at industry scale.
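Because the low-demand lot-sizing heuristic is described as being based on the Wagner-Whitin algorithm, the sketch below shows the classic Wagner-Whitin dynamic program for single-item, uncapacitated lot sizing. The demands and costs are illustrative assumptions, and the study's actual heuristic, embedded in the iterative order-selection/lot-sizing/scheduling procedure, is not reproduced here.

```python
# Classic Wagner-Whitin dynamic program for single-item, uncapacitated lot sizing.
# Illustrative data only; not the study's instances or cost parameters.

def wagner_whitin(demand, setup_cost, holding_cost):
    """Return the minimum total cost and the chosen (period, lot size) plan."""
    n = len(demand)
    best = [0.0] * (n + 1)           # best[t] = min cost to cover periods 1..t
    last_setup = [0] * (n + 1)       # period of the lot that covers period t
    for t in range(1, n + 1):
        best[t] = float("inf")
        for j in range(1, t + 1):    # produce demand for periods j..t in period j
            carry = sum(holding_cost * (k - j) * demand[k - 1] for k in range(j, t + 1))
            cost = best[j - 1] + setup_cost + carry
            if cost < best[t]:
                best[t], last_setup[t] = cost, j
    # Recover the production plan by walking the setup decisions backwards.
    plan, t = [], n
    while t > 0:
        j = last_setup[t]
        plan.append((j, sum(demand[j - 1:t])))
        t = j - 1
    return best[n], list(reversed(plan))

cost, plan = wagner_whitin(demand=[20, 50, 10, 40, 30], setup_cost=100.0, holding_cost=1.0)
print(cost, plan)   # lots are placed only where a setup beats carrying inventory
```

For this illustrative instance the recursion places two lots, one in period 1 covering periods 1-3 and one in period 4 covering periods 4-5, for a total cost of 300.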
Abstract:
A report from the National Institutes of Health defines a disease biomarker as a “characteristic that is objectively measured and evaluated as an indicator of normal biologic processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention.” Early diagnosis is a crucial factor for incurable diseases such as cancer and Alzheimer’s disease (AD). During the last decade, researchers have discovered that the biochemical changes caused by a disease can be detected considerably earlier than physical manifestations or symptoms. In this dissertation, electrochemical detection was utilized as the detection strategy because it offers high sensitivity and specificity, ease of operation, and the capability of miniaturization and multiplexed detection. Electrochemical detection of biological analytes is an established field that has matured at a rapid pace during the last 50 years and adapted itself to advances in micro/nanofabrication procedures. Carbon fiber microelectrodes were utilized as the platform sensor due to their high signal-to-noise ratio, ease and low cost of fabrication, biocompatibility, and active carbon surface, which allows conjugation with biorecognition moieties. This dissertation specifically focuses on the detection of three extensively validated biomarkers for cancer and AD. First, vascular endothelial growth factor (VEGF), a cancer biomarker, was detected using a one-step, reagentless immunosensing strategy, which provided rapid and sensitive VEGF detection with a detection limit of about 38 pg/mL and a linear dynamic range of 0–100 pg/mL. Direct detection of the AD-related biomarker amyloid beta (Aβ) was achieved by exploiting its inherent electroactivity; quantification of the Aβ1-40/42 ratio (the Aβ ratio) has been established through human clinical trials as a reliable test to diagnose AD. Triple-barrel carbon fiber microelectrodes were used to simultaneously detect Aβ1-40 and Aβ1-42 in cerebrospinal fluid from rats within detection ranges of 100 nM to 1.2 μM and 400 nM to 1 μM, respectively. In addition, the release of the DNA damage/repair biomarker 8-hydroxydeoxyguanosine (8-OHdG) under the influence of oxidative stress from a single lung endothelial cell was monitored using an activated carbon fiber microelectrode. The sensor was used to test the influence of nicotine, one of the most biologically active chemicals present in cigarette smoke and smokeless tobacco.
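As a hedged illustration of how a sensor reading might be converted to a concentration within a linear dynamic range, and how an Aβ ratio could then be formed, the snippet below fits a straight-line calibration and inverts it. The calibration standards, currents, and channel readings are hypothetical and are not the dissertation's data or calibration procedure.

```python
import numpy as np

# Hypothetical calibration within a sensor's linear range (values assumed).
cal_conc = np.array([100, 300, 600, 900, 1200])        # nM standards
cal_current = np.array([2.1, 5.9, 12.2, 18.0, 24.1])   # nA responses

slope, intercept = np.polyfit(cal_conc, cal_current, 1)  # linear fit: i = slope*c + intercept

def current_to_conc(i_na):
    """Invert the calibration line to estimate concentration in nM."""
    return (i_na - intercept) / slope

ab40 = current_to_conc(10.5)   # hypothetical Aβ1-40 channel reading, nA
ab42 = current_to_conc(6.3)    # hypothetical Aβ1-42 channel reading, nA
print(f"estimated Abeta42/Abeta40 ratio ~ {ab42 / ab40:.2f}")
```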
Abstract:
The standard highway assignment model in the Florida Standard Urban Transportation Modeling Structure (FSUTMS) is based on the equilibrium traffic assignment method. This method involves running several iterations of all-or-nothing capacity-restraint assignment, with an adjustment of travel time to reflect the delays encountered in each iteration. The iterative link time adjustment is accomplished through the Bureau of Public Roads (BPR) volume-delay equation. Since FSUTMS' traffic assignment procedure outputs daily volumes while the input capacities are given in hourly volumes, the hourly capacities must be converted to their daily equivalents when computing the volume-to-capacity ratios used in the BPR function. The conversion is accomplished by dividing the hourly capacity by a factor called the peak-to-daily ratio, referred to as CONFAC in FSUTMS, which is computed as the highest hourly volume of a day divided by the corresponding total daily volume. While several studies have indicated that CONFAC is a decreasing function of the level of congestion, a constant value is used for each facility type in the current version of FSUTMS. This ignores the different congestion levels associated with each roadway and is believed to be one of the culprits of traffic assignment errors. Traffic count data from across the state of Florida were used to calibrate CONFACs as a function of a congestion measure using the weighted least squares method. The calibrated functions were then implemented in FSUTMS through a procedure that takes advantage of the iterative nature of FSUTMS' equilibrium assignment method. The assignment results based on constant and variable CONFACs were then compared against ground counts for three selected networks. It was found that the accuracy of the two assignments was not significantly different and that the hypothesized improvement in assignment results from the variable CONFAC model was not empirically evident. It was recognized that many other factors beyond the scope and control of this study could contribute to this finding. It was recommended that further studies focus on the use of the variable CONFAC model with recalibrated parameters for the BPR function and/or with other forms of volume-delay functions.
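A minimal sketch of the two computations the abstract describes, the CONFAC-based conversion of hourly capacity to a daily equivalent and the BPR volume-delay function, is given below. The link values are illustrative, and the BPR coefficients 0.15 and 4 are the conventional defaults rather than the parameters used or recalibrated in this study.

```python
# Convert hourly capacity to a daily equivalent via CONFAC, then apply the
# standard BPR volume-delay function to a daily link volume.

def bpr_travel_time(free_flow_time, daily_volume, hourly_capacity, confac,
                    alpha=0.15, beta=4.0):
    daily_capacity = hourly_capacity / confac   # CONFAC = peak hourly volume / daily volume
    vc_ratio = daily_volume / daily_capacity    # volume-to-capacity ratio fed to BPR
    return free_flow_time * (1.0 + alpha * vc_ratio ** beta)

# Illustrative link: 10-minute free-flow time, 2,000 veh/h capacity, CONFAC of 0.10.
congested_time = bpr_travel_time(free_flow_time=10.0, daily_volume=22_000,
                                 hourly_capacity=2_000, confac=0.10)
print(f"congested travel time ~ {congested_time:.1f} minutes")
```

Because CONFAC sits in the denominator of the capacity conversion, a smaller CONFAC inflates the daily capacity and lowers the v/c ratio, which is why treating CONFAC as constant across congestion levels can bias the assigned volumes.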