893 results for Costs and cost analysis
Abstract:
To achieve the goal of sustainable development, the building energy system was evaluated from the point of view of both the first and second laws of thermodynamics. The relationship between exergy destruction and sustainable development was discussed first, followed by descriptions of the resource abundance model, the life cycle analysis model, and the economic investment effectiveness model. By combining the foregoing models, a new sustainability index was proposed. Several green building case studies in the U.S. and China were presented. The influences of building function, geographic location, climate pattern, regional energy structure, and the future technology improvement potential of renewable energy were discussed. Life cycle analyses of the building envelope, HVAC system, and on-site renewable energy system were compared from the energy, exergy, environmental, and economic perspectives. It was found that climate pattern had a dramatic influence on the life cycle investment effectiveness of the building envelope. The energy performance of the building HVAC system was much better than its exergy performance. To further increase exergy efficiency, renewable energy rather than fossil fuel should be used as the primary energy source. A regression model of building life cycle cost and exergy consumption was set up. The optimal building insulation level could be determined by either a cost minimization or an exergy consumption minimization approach; the exergy approach led to a higher insulation level than the cost approach. The influence of energy price on the system selection strategy was discussed. Two photovoltaic (PV) systems, stand-alone and grid-tied, were compared using the life cycle assessment method, and the superiority of the latter was clear. The analysis also showed that over its life span PV technology was less attractive economically, because electricity prices in the U.S. and China did not fully reflect the associated environmental burden.
However, if future energy price surges and PV system cost reductions were considered, the technology could be very promising for sustainable buildings.
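The abstract's finding that exergy minimization favours heavier insulation than cost minimization can be sketched with a toy model. Every coefficient below (capital cost, conduction model, energy price, exergy weighting) is an assumed illustrative number, not a figure from the study:

```python
def life_cycle_cost(thickness_m, years=30, price_per_kwh=0.12):
    """Insulation capital cost plus lifetime heating energy cost (toy numbers)."""
    capital = 120.0 * thickness_m                   # assumed $/m^2 per metre
    annual_kwh = 80.0 / (1.0 + 20.0 * thickness_m)  # toy conduction model
    return capital + years * price_per_kwh * annual_kwh

def life_cycle_exergy(thickness_m, years=30, exergy_per_kwh=0.3):
    """Embodied exergy of insulation plus operating exergy of heating (toy numbers)."""
    embodied = 100.0 * thickness_m                  # assumed embodied-exergy weight
    annual_kwh = 80.0 / (1.0 + 20.0 * thickness_m)
    # Fossil-fuelled heat destroys far more exergy than its market price
    # suggests, which pushes the exergy optimum towards thicker insulation.
    return embodied + years * exergy_per_kwh * annual_kwh

grid = [i / 100.0 for i in range(1, 101)]  # candidate thicknesses, 1 cm to 1 m
best_cost = min(grid, key=life_cycle_cost)
best_exergy = min(grid, key=life_cycle_exergy)
print(f"cost-optimal thickness:   {best_cost:.2f} m")
print(f"exergy-optimal thickness: {best_exergy:.2f} m")
```

With these assumed coefficients the exergy criterion selects a thicker layer than the cost criterion, mirroring the qualitative conclusion above.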
Abstract:
The purpose of this paper is to understand whether multinational restaurant firms (MNRFs) have higher agency and expected bankruptcy costs. Given this expectation, these costs may affect the amount of debt incurred by MNRFs. Overall, the findings are consistent with the existing literature in terms of the positive relationship between MNRFs and agency and bankruptcy costs. However, it was found that MNRFs also carry more total debt, which is surprising given their higher agency and bankruptcy costs. The importance of this research is that there may be considerations other than agency and bankruptcy costs affecting the capital structure decisions of MNRFs.
Abstract:
There are many factors which can assist in controlling the cost of labor in the food service industry. The author discusses a number of these, including scheduling, establishing production standards, forecasting workloads, analyzing employee turnover, combating absenteeism, and controlling overtime.
Abstract:
Acknowledgements
The authors are grateful for the input of Professor Blair Smith (University of Dundee), for his counsel early in the project and his advice and comments regarding the search strategy, and of Professor Danielle van der Windt (Keele University) for helpful advice and comments.
Funding
The British Pain Society provided financial assistance to AF with the costs of this project. PC was partly supported by an Arthritis Research UK Primary Care Centre grant (reference: 18139).
Abstract:
“The authors wish to thank the European Commission for funding this research programme, ‘Health Care Reform: The iMpact on practice, oUtcomes and cost of New ROles for health profeSsionals’ (MUNROS), under the European Community’s Seventh Framework Programme (FP7 HEALTH-2012-INNOVATION-1) grant agreement number HEALTH-F3-2012-305467EC. The authors also wish to thank all those who supported and guided this work both within the MUNROS research project team and as external associates. In particular we would like to thank Mathijs Kelder for his valuable contribution in the review process. The authors also wish to thank all the MUNROS research and project partners for their continuing collaboration in this research”.
Abstract:
Lifetime risk of developing colorectal cancer (CRC) is 5%, and five-year survival at early stage is 92%. CRC risk following index colonoscopy should establish the benefit of post-screening surveillance, which may be greater in high-risk patients. This review evaluated published cost-effectiveness estimates of post-polypectomy surveillance to assess the potential for personalised recommendations by risk sub-group. Current data suggest that colonoscopy identifies those at low risk of CRC, who may not benefit from intensive surveillance, which risks unnecessary harms and inefficient use of colonoscopy resources. Meta-analyses showed that the incidence of advanced neoplasia post-polypectomy in low-risk patients was comparable to that in those without adenoma; both rates were below the 5% lifetime risk. Therefore, greater personalisation through de-intensified strategies for low-risk individuals could be beneficial and could employ non-invasive testing such as faecal immunochemical tests (FIT) combined with primary prevention or chemoprevention, thereby reserving colonoscopy for targeted use in personalised risk-stratified surveillance.
This systematic review aims to:
1. Assess if there is evidence supporting a program of personalised surveillance in patients with colorectal adenoma according to risk sub-group.
2. Compare the effectiveness of surveillance colonoscopy with alternative prevention strategies.
3. Assess the trade-offs between costs, benefits and adverse effects that must be considered in a decision to adopt or reject personalised surveillance.
Abstract:
Background
Increasing physical activity in the workplace can provide employee physical and mental health benefits, and employer economic benefits through reduced absenteeism and increased productivity. The workplace is an opportune setting to encourage habitual activity. However, there is limited evidence on effective behaviour change interventions that lead to maintained physical activity. This study aims to address this gap and help build the necessary evidence base for effective, and cost-effective, workplace interventions.
Methods/design
This cluster randomised controlled trial will recruit 776 office-based employees from public sector organisations in Belfast and Lisburn city centres, Northern Ireland. Participants will be randomly allocated by cluster to either the Intervention Group or Control Group (waiting list control). The 6-month intervention consists of rewards (retail vouchers, based on similar principles to high street loyalty cards), feedback and other evidence-based behaviour change techniques. Sensors situated in the vicinity of participating workplaces will promote and monitor minutes of physical activity undertaken by participants. Both groups will complete all outcome measures. The primary outcome is steps per day recorded using a pedometer (Yamax Digiwalker CW-701) for 7 consecutive days at baseline, 6, 12 and 18 months. Secondary outcomes include health, mental wellbeing, quality of life, work absenteeism and presenteeism, and use of healthcare resources. Process measures will assess intervention “dose”, website usage, and intervention fidelity. An economic evaluation will be conducted from the National Health Service, employer and retailer perspectives using both a cost-utility and cost-effectiveness framework. The inclusion of a discrete choice experiment will further generate values for a cost-benefit analysis. Participant focus groups will explore who the intervention worked for and why, and interviews with retailers will elucidate their views on the sustainability of a public health focused loyalty card scheme.
Discussion
The study is designed to maximise the potential for roll-out in similar settings, by engaging the public sector and business community in designing and delivering the intervention. We have developed a sustainable business model using a ‘points’ based loyalty platform, whereby local businesses ‘sponsor’ the incentive (retail vouchers) in return for increased footfall to their business.
Abstract:
In many countries wind energy has become an indispensable part of the electricity generation mix. Opportunities for ground-based wind turbine systems are becoming increasingly constrained by limitations on turbine hub heights and blade lengths, and by location restrictions linked to environmental and permitting issues, including special areas of conservation and social acceptance of visual and noise impacts. In the last decade there have been numerous proposals to harness high altitude winds, such as tethered kites, airfoils and dirigible-based rotors. These technologies are designed to operate above the neutral atmospheric boundary layer of 1,300 m, where winds are more powerful and persistent, thus enabling much higher electricity generation capacity. This paper presents an in-depth review of the state of the art of high altitude wind power, evaluates the technical and economic viability of deploying high altitude wind power as a resource in Northern Ireland, and identifies the optimal locations by considering wind data and geographical constraints. The key findings show that the total viable area over Northern Ireland for high altitude wind harnessing devices is 5,109.6 km2, with an average wind power density of 1,998 W/m2 over a 20-year span, at a fixed altitude of 3,000 m. An initial budget for a 2 MW pumping kite device indicated a total cost of £1,751,402, proving it economically competitive with conventional wind-harnessing devices.
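For context, the quoted average wind power density of 1,998 W/m2 can be related to wind speed through the standard kinetic power density relation P/A = ½ρv³. The sketch below is illustrative only: the air density at 3,000 m is an assumed International Standard Atmosphere value, and the study's own methodology may differ.

```python
RHO_3000M = 0.909  # assumed ISA air density at 3,000 m, kg/m^3

def wind_power_density(air_density: float, wind_speed: float) -> float:
    """Kinetic power carried by the wind per unit swept area, W/m^2."""
    return 0.5 * air_density * wind_speed ** 3

def implied_wind_speed(power_density: float, air_density: float) -> float:
    """Invert the power-density relation to recover the mean-cube wind speed."""
    return (2.0 * power_density / air_density) ** (1.0 / 3.0)

v = implied_wind_speed(1998.0, RHO_3000M)
print(f"Wind speed implying 1,998 W/m^2 at 3,000 m: {v:.1f} m/s")
```

Under these assumptions the reported power density corresponds to a mean-cube wind speed of roughly 16 m/s, several times the typical ground-level figure, which is the core argument for high altitude devices.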
Abstract:
Data mining can be defined as the extraction of implicit, previously unknown, and potentially useful information from data. Numerous researchers have been developing security technology and exploring new methods to detect cyber-attacks with the DARPA 1998 dataset for Intrusion Detection and its modified versions, KDDCup99 and NSL-KDD, but until now no one has examined the performance of the Top 10 data mining algorithms selected by experts in data mining. The classification learning algorithms compared in this thesis are C4.5, CART, k-NN and Naïve Bayes. The performance of these algorithms is compared by accuracy, error rate and average cost on modified versions of the NSL-KDD train and test datasets, where the instances are classified into normal and four cyber-attack categories: DoS, Probing, R2L and U2R. Additionally, the most important features for detecting cyber-attacks, overall and in each category, are evaluated with Weka’s Attribute Evaluator and ranked according to Information Gain. The results show that the classification algorithm with the best performance on the dataset is the k-NN algorithm. The most important features for detecting cyber-attacks are basic features such as the duration in seconds of a network connection, the protocol used for the connection, the network service used, the normal or error status of the connection, and the number of data bytes sent. The most important features for detecting DoS, Probing and R2L attacks are basic features, and the least important are content features; for U2R attacks, by contrast, content features are the most important.
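The Information Gain criterion used for the feature ranking above is the standard entropy-based measure. A minimal pure-Python sketch follows; the toy connection records are hypothetical stand-ins for NSL-KDD basic features, not actual data, and the thesis itself used Weka's Attribute Evaluator rather than this code.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """H(labels) - H(labels | feature): the ranking criterion named above."""
    n = len(labels)
    conditional = 0.0
    for value in set(feature_values):
        subset = [lab for f, lab in zip(feature_values, labels) if f == value]
        conditional += (len(subset) / n) * entropy(subset)
    return entropy(labels) - conditional

# Hypothetical connection records, stand-ins for NSL-KDD basic features:
# (duration_bucket, protocol, label)
records = [
    ("short", "tcp", "normal"), ("short", "udp", "normal"),
    ("long", "tcp", "dos"), ("long", "tcp", "dos"),
    ("short", "icmp", "probe"), ("long", "udp", "dos"),
]
labels = [r[2] for r in records]
for i, name in enumerate(["duration", "protocol"]):
    ig = information_gain([r[i] for r in records], labels)
    print(f"{name}: information gain = {ig:.3f} bits")
```

Features whose values split the connection records into purer normal/attack groups score higher and rank nearer the top of the list.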
Abstract:
In February the U.S. 20 Corridor Development Study's Steering Committee met to review Report A. At that meeting the Committee selected seven alternatives to be evaluated from a cost and traffic perspective. This report, Report B, presents the cost and traffic evaluation of these seven alternatives. Report B and its cost and traffic estimates will be reviewed at the next Steering Committee meeting. At that time it is possible that, based on the traffic and cost estimates, one or more of the alternatives will be eliminated from further consideration. After that meeting the Consultant will initiate the more in-depth analyses, including the economic feasibility analysis.
Abstract:
Development of adequate diving capabilities is crucial for survival of seal pups and may depend on age and body size. We tracked the diving behavior of 20 gray seal pups during their first 3 mo at sea using satellite relay data loggers. We employed quantile analysis to track upper limits of dive duration and percentage time spent diving, and lower limits of surface intervals. When pups first left the breeding colony, extreme (ninety-fifth percentile) dive duration and percentage time spent diving were positively correlated with age, but not mass, at departure. Extreme dive durations and percentage time spent diving peaked at [Formula: see text] d of age at values comparable with those of adults, but were not sustained. Greater peaks in extreme percentage time spent diving occurred in pups that had higher initial values, were older at their peak, and were heavier at departure. Pups that were smaller and less capable divers when they left the colony improved extreme dive durations and percentage time spent diving more rapidly, once they were at sea. Minimum survival time correlated positively with departure mass. Pups that were heavier at weaning thus benefitted from being both larger and older at departure, but smaller pups faced a trade-off. While age at departure had a positive effect on early dive performance, departure mass impacted on peak percentage time spent diving and longer-term survival. We speculate that once small pups have attained a minimum degree of physiological development to support diving, they would benefit by leaving the colony when younger but larger to maximize limited fuel reserves, rather than undergoing further maturation on land away from potential food resources, because poor divers may be able to "catch up" once at sea.
Abstract:
In today’s big data world, data is being produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, a plethora of small devices connected to the internet (the Internet of Things), social networks, communication networks, and many others. Interactive querying and large-scale analytics are increasingly used to derive value from this big data. A large portion of this data is stored and processed in the Cloud due to the several advantages it provides, such as scalability, elasticity, availability, low cost of ownership, and overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for the application domains that predominate in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully.
I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, and link prediction. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them into distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
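The progressive-sampling idea behind the second part of the dissertation (deterministic, nested samples that give repeatable early answers) can be sketched in miniature. This is an assumed simplification for illustration, not NOW!'s actual progress semantics: a fixed seeded shuffle makes each larger sample a superset of the previous one, so estimates refine deterministically as the sample fraction grows.

```python
import random
import statistics

def progressive_estimates(values, fractions, seed=42):
    """Yield (fraction, estimated_mean) pairs over nested, deterministic samples.

    The seeded shuffle fixes one processing order, so every run yields the
    same estimates (repeatable semantics) and each larger sample extends
    the previous one rather than discarding its work.
    """
    order = list(values)
    random.Random(seed).shuffle(order)
    for frac in fractions:
        k = max(1, int(len(order) * frac))
        yield frac, statistics.mean(order[:k])

population = list(range(1, 10_001))  # true mean = 5000.5
for frac, est in progressive_estimates(population, [0.01, 0.1, 0.5, 1.0]):
    print(f"{frac:>4.0%} sample -> estimated mean {est:.1f}")
```

Early fractions give cheap approximate answers that converge to the exact result at a 100% sample, which is the cost-saving trade-off the dissertation exploits.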