792 results for Informed decisions
Abstract:
Offering competitive health and wellness benefit programs is an ongoing challenge for companies, as industry leaders continually devise ways to innovate and deliver high-value programs to attract and retain employees. Financial stability is a form of wellness, and yet companies offer limited finance-related benefits. Employees are commonly given access to retirement savings plans and college savings plans, yet employers do not typically incorporate educational components into these benefit programs. Research presented in this paper examines the financial issues impacting the lives of young workers in the United States and makes the case for a new recruitment and retention tool: a dynamic, practical benefit program designed to engage employees in their financial planning early and empower them to make informed decisions.
Abstract:
Context: Global Software Development (GSD) allows companies to take advantage of talent spread across the world. Most research has been focused on the development aspect. However, little if any attention has been paid to the management of GSD projects. Studies report a lack of adequate support for management’s decisions made during software development, further accentuated in GSD since information is scattered throughout multiple factories, stored in different formats and standards. Objective: This paper aims to improve GSD management by proposing a systematic method for adapting Business Intelligence techniques to software development environments. This would enhance the visibility of the development process and enable software managers to make informed decisions regarding how to proceed with GSD projects. Method: A combination of formal goal-modeling frameworks and data modeling techniques is used to elicit the most relevant aspects to be measured by managers in GSD. The process is described in detail and applied to a real case study throughout the paper. A discussion regarding the generalisability of the method is presented afterwards. Results: The application of the approach generates an adapted BI framework tailored to software development according to the requirements posed by GSD managers. The resulting framework is capable of presenting previously inaccessible data through common and specific views and enabling data navigation according to the organization of software factories and projects in GSD. Conclusions: We can conclude that the proposed systematic approach allows us to successfully adapt Business Intelligence techniques to enhance GSD management beyond the information provided by traditional tools. The resulting framework is able to integrate and present the information in a single place, thereby enabling easy comparisons across multiple projects and factories and providing support for informed decisions in GSD management.
Abstract:
Regulatory cooperation is both one of the most ambitious and contentious parts of the EU-US Transatlantic Trade and Investment Partnership (TTIP) negotiations. In this paper, having identified the many levels of international regulatory cooperation, we show that TTIP regulatory cooperation will be significant, but not ambitious, while political and legal limits on cooperation in both the EU and the US minimise the concerns. For transatlantic regulatory cooperation to work, it must accept these political and legal constraints, build trust and confidence among counterpart regulators so they see that their transatlantic partner can help them do their work better, and provide tools to help regulators on both sides make informed decisions while retaining their regulatory autonomy and accountability to their politicians and citizens. A TTIP that provides these tools – and some more detailed instruments to that effect – will be more ambitious than previous trade agreements, and should, over the longer term, provide both the economic and regulatory benefits that the two sides envisage. The paper incorporates comparisons with the relevant chapters of recent FTAs the US and the EU have concluded, so as to clarify the approaches and degrees of ambition in this area. This comparison suggests that the TTIP regulatory cooperation will probably be more ambitious in terms of commitments and have a wider scope than any of these FTAs.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Very large spatially-referenced datasets, for example, those derived from satellite-based sensors which sample across the globe or large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over small time periods. In many applications the generation of maps, or predictions at specific locations, from the data in (near) real-time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process and in emergency situations, the resulting predictions need to be available almost instantly, so that decision makers can make informed decisions and define risk and evacuation zones. It is also helpful when analysing data in less time critical applications, for example when interacting directly with the data for exploratory analysis, that the algorithms are responsive within a reasonable time frame. Performing geostatistical analysis on such large spatial datasets can present a number of problems, particularly in the case where maximum likelihood estimation is used. Although the storage requirements only scale linearly with the number of observations in the dataset, the computational complexity in terms of memory and speed scales quadratically and cubically, respectively. Most modern commodity hardware has at least 2 processor cores if not more. Other mechanisms for allowing parallel computation such as Grid-based systems are also becoming increasingly commonly available. However, currently there seems to be little interest in exploiting this extra processing power within the context of geostatistics. In this paper we review the existing parallel approaches for geostatistics. By recognising that different natural parallelisms exist and can be exploited depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms we show that computational time can be significantly reduced. We demonstrate this with both sparsely sampled data and densely sampled data on a variety of architectures ranging from the common dual core processor, found in many modern desktop computers, to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods we employ synthetic data sets and go on to show how the methods allow maximum likelihood based inference on the exhaustive Walker Lake data set.
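To make the likelihood approximation concrete, the following is a minimal sketch (not the authors' implementation) of a Vecchia-style approximate Gaussian log-likelihood, in which each observation is conditioned on a small set of previously ordered neighbours. The exponential covariance model, its parameter values and the toy data are assumptions for illustration; because the conditional terms are independent once the ordering is fixed, the loop body is exactly the work that can be distributed across processor cores or grid nodes.

```python
# Vecchia-type approximation: log p(y) ~ sum_i log p(y_i | y at m nearest
# previously ordered points). Each term is independent and parallelisable.
import numpy as np
from scipy.spatial import cKDTree

def exp_cov(d, sill=1.0, rng=10.0, nugget=1e-6):
    """Exponential covariance model; parameter names are illustrative."""
    return sill * np.exp(-d / rng) + nugget * (d == 0)

def vecchia_loglik(coords, y, m=10, **cov_kw):
    """Approximate log-likelihood using at most m previously ordered neighbours."""
    n = len(y)
    ll = 0.0
    for i in range(n):
        if i == 0:
            var = exp_cov(np.array([0.0]), **cov_kw)[0]
            mu = 0.0
        else:
            past = coords[:i]
            k = min(m, i)
            _, idx = cKDTree(past).query(coords[i], k=k)
            idx = np.atleast_1d(idx)
            d_nn = np.linalg.norm(past[idx][:, None] - past[idx][None, :], axis=-1)
            d_in = np.linalg.norm(past[idx] - coords[i], axis=-1)
            w = np.linalg.solve(exp_cov(d_nn, **cov_kw), exp_cov(d_in, **cov_kw))
            mu = w @ y[idx]
            var = exp_cov(np.array([0.0]), **cov_kw)[0] - exp_cov(d_in, **cov_kw) @ w
        ll += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mu) ** 2 / var)
    return ll

# Toy usage: 500 random locations and a white-noise response.
gen = np.random.default_rng(0)
coords = gen.uniform(0, 100, size=(500, 2))
y = gen.standard_normal(500)
print(vecchia_loglik(coords, y, m=10, sill=1.0, rng=15.0))
```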
Abstract:
Increasingly in the UK, companies that have traditionally considered themselves as manufacturers are being advised to now see themselves as service providers and to reconsider whether to have any production capability. A key challenge is to translate this strategy into a selection of product and service-centred activities within the company's supply chain networks. Strategic positioning is concerned with the choice of business activities a company carries out itself, compared to those provided by suppliers, partners, distributors and even customers. In practice, strategic positioning is directly impacted by such decisions as outsourcing, off-shoring, partnering, technology innovation, acquisition and exploitation. If companies can better understand their strategic positioning, they can make more informed decisions about the adoption of alternative manufacturing and supply chain activities. Similarly, they are more likely to reject those that, like off-shoring, are currently en vogue but are highly likely to erode competitive edge and business success. Our research has developed a new concept we call 'competitive space' as a means of appreciating the strategic positioning of companies, along with a structured decision process for managing competitive space. Our ideas about competitive space, along with the decision process itself, have been developed and tested on a range of manufacturers. As more and more manufacturers are encouraged to move towards system integration and a serviceable business model, the challenge is to identify the appropriate strategic position for their organisations, or in other words, to identify their optimum competitive space for manufacture.
Abstract:
Enabling a Simulation Capability in the Organisation addresses the application of simulation modelling techniques in order to enable better informed decisions in business and industrial organisations. The book’s unique approach treats simulation not just as a technical tool, but as a support for organisational decision making, showing the results from a survey of current and potential users of simulation to suggest reasons why the technique is not used as much as it should be and what the barriers to its further use are. By incorporating the author's evaluation of six detailed case studies of the application of simulation in industry, the book will teach readers: • the role of simulation in decision making; • how to introduce simulation as a tool for decision making; and • how to undertake simulation studies that lead to change in the organisation. Enabling a Simulation Capability in the Organisation provides an introduction to the state of the art in simulation modelling for researchers in business studies and engineering, as well as a useful guide for practitioners and managers in business and industry.
Abstract:
There is an increasing number of reports of propylene glycol (PG) toxicity in the literature, despite its inclusion on the Generally Recognized as Safe (GRAS) list.1 PG is an excipient used in many medications as a solvent for water-insoluble drugs. Polypharmacy may increase PG exposure in vulnerable PICU patients, who may accumulate PG due to compromised liver and renal function. The study aim was to quantify PG intake in PICU patients and the attitudes of clinicians towards PG. Method A snapshot of 50 PICU patients' oral or intravenous medication intake was collected. Other data collected included age, weight, diagnosis, lactate levels and renal function. Manufacturers were contacted for PG content, which was then converted to mg/kg. Excipients in formulations that compete with the PG metabolism pathway were recorded. Intensivists' opinions on PG intake were sought via e-survey. Results The 50 patients were prescribed 62 drugs and 83 formulations; 43/83 (52%) were parenteral formulations. Median weight of the patients was 5.5 kg (range 2–50 kg), and ages ranged from 1 day to 13 years. Eleven of the patients were classed as renally impaired (defined as 1.5 times the baseline creatinine). Sixteen formulations contained PG; 2/16 were parenteral and 6/16 were unlicensed preparations. Thirty-eight patients received at least one prescription containing PG, and 29/38 of these patients were receiving formulations that contained excipients that may have competed with the metabolic pathways of PG. PG intake ranged from 0.002 mg/kg/day to 250 mg/kg/day. Total intake was inconclusive for 2 patients due to a lack of availability of information from the manufacturer; these formulations were licensed but used for off-label indications. Five commonly used formulations contributed to higher intakes of PG, namely co-trimoxazole, dexamethasone, potassium chloride, dipyridamole and phenobarbitone. Lactate levels were difficult to interpret due to the underlying conditions of the patients. One of the sixteen intensivists was aware of the PG content in drugs; 16/16 would actively change therapy if intake was above European Medicines Agency recommendations. Conclusions Certain formulations used on PICU can considerably increase PG exposure in patients. Due to a lack of awareness of PG content, these should be highlighted to the clinician to assist with making informed decisions regarding risks versus benefits in continuing that drug, route of administration or formulation.
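As a simple illustration of the normalisation described above, the snippet below converts a formulation's PG content per dose into a daily mg/kg exposure. The PG content and dosing frequency are invented example values; only the 5.5 kg median weight is taken from the abstract.

```python
# Illustrative only: normalising an excipient amount per dose to a daily
# mg/kg exposure. The dose content and schedule below are made-up examples.
def pg_intake_mg_per_kg_per_day(pg_mg_per_dose, doses_per_day, weight_kg):
    """Propylene glycol exposure normalised to body weight."""
    return pg_mg_per_dose * doses_per_day / weight_kg

# Example: a formulation containing 120 mg PG per dose, given three times a
# day to a 5.5 kg infant (the median weight reported in the study).
print(pg_intake_mg_per_kg_per_day(120, 3, 5.5))  # ~65 mg/kg/day
```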
Abstract:
Flooding can have a devastating impact on businesses, especially on small- and medium-sized enterprises (SMEs) who may be unprepared and vulnerable to the range of both direct and indirect impacts. SMEs may tend to focus on the direct tangible impacts of flooding, limiting their ability to realise the true costs of flooding. Greater understanding of the impacts of flooding is likely to contribute towards increased uptake of flood protection measures by SMEs, particularly during post-flood property reinstatement. This study sought to investigate the full range of impacts experienced by SMEs located in Cockermouth following the floods of 2009. The findings of a questionnaire survey of SMEs revealed that businesses not directly affected by the flooding experienced a range of impacts and that short-term impacts were given a higher significance. A strong correlation was observed between direct, physical flood impacts and post-flood costs of insurance. Significant increases in the costs of property insurance and excesses were noted, meaning that SMEs will be exposed to increased losses in the event of a future flood event. The findings from the research will enable policy makers and professional bodies to make informed decisions to improve the status of advice given to SMEs. The study also adds weight to the case for SMEs to consider investing in property-level flood risk adaptation measures, especially during the post flood reinstatement process. © 2012 Blackwell Publishing Ltd and The Chartered Institution of Water and Environmental Management (CIWEM).
Abstract:
The world's population is ageing. Older people are healthier and more active than previous generations. Living in a hypermobile world, people want to stay connected to dispersed communities as they age. Staying connected to communities and social networks enables older people to contribute and connect with society and is associated with positive mental and physical health, facilitating independence and physical activity while reducing social isolation. Changes in physiology and cognition associated with later life mean longer journeys may have to be curtailed. A shift in focus is needed to fully explore older people, transport and health: the approach must be multidisciplinary, embracing the social sciences as well as the arts and humanities. A full understanding of ageing, transport and health also requires embracing different types of mobilities, moving from literal or corporeal through virtual and potential to imaginative mobility, taking into account aspirations and emotions. Mobility in later life is more than a means of getting to destinations and includes more affective or emotive associations. Cycling and walking are facilitated not just by improving safety but through social and cultural norms. Car driving can be continued safely in later life if people make appropriate and informed decisions about when and how to stop driving; stringent testing of driver ability and skill has as yet had little effect on safety. Bus use facilitates physical activity and keeps people connected, but there are concerns for the future viability of buses. The future of transport may be more community led and involve more sharing of transport modes.
Abstract:
Organizations can use the valuable tool of data envelopment analysis (DEA) to make informed decisions on developing successful strategies, setting specific goals, and identifying underperforming activities to improve the output or outcome of performance measurement. The Handbook of Research on Strategic Performance Management and Measurement Using Data Envelopment Analysis highlights the advantages of using DEA as a tool to improve business performance and identify sources of inefficiency in public and private organizations. These recently developed theories and applications of DEA will be useful for policymakers, managers, and practitioners working towards the sustainable development of our society, including the environment, agriculture, finance, and higher education sectors.
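As a concrete example of the kind of model DEA rests on, here is a minimal sketch of the input-oriented CCR efficiency score solved as a linear programme with SciPy. The five decision-making units (DMUs) and their input/output values are invented for illustration and are not drawn from the handbook.

```python
# Input-oriented CCR DEA: for DMU o, minimise theta subject to
# X @ lam <= theta * x_o and Y @ lam >= y_o, with lam >= 0.
import numpy as np
from scipy.optimize import linprog

X = np.array([[20, 30, 40, 20, 10],                 # input 1 per DMU
              [300, 200, 100, 200, 400]], float)    # input 2 per DMU
Y = np.array([[100, 80, 120, 60, 50]], float)       # output per DMU

def ccr_efficiency(o, X, Y):
    """Efficiency score (theta) of DMU o under constant returns to scale."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]             # decision vars: [theta, lam_1..lam_n]
    A_in = np.c_[-X[:, o], X]               # X @ lam - theta * x_o <= 0
    A_out = np.c_[np.zeros(s), -Y]          # -Y @ lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(0, None)] * (n + 1)          # theta >= 0, lam >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

for o in range(X.shape[1]):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o, X, Y):.3f}")
```

An efficiency of 1.0 marks a DMU on the empirical frontier; scores below 1.0 indicate how far inputs could be proportionally reduced, which is the "source of inefficiency" reading the abstract refers to.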
Abstract:
Engineering education in the United Kingdom is at the point of embarking upon an interesting journey into uncharted waters. At no point in the past have there been so many drivers for change and so many opportunities for the development of engineering pedagogy. This paper will look at how Engineering Education Research (EER) has developed within the UK and what differentiates it from the many small-scale practitioner interventions, perhaps without a clear research question or with little evaluation, which are presented at numerous staff development sessions, workshops and conferences. From this position some examples of current projects will be described, outcomes of funding opportunities will be summarised and the benefits of collaboration with other disciplines illustrated. In this study, I will account for how the design of task structure according to variation theory, as well as the probe-ware technology, make the laws of force and motion visible and learnable and, especially in the lab studied, make Newton's third law visible and learnable. I will also, as a comparison, include data from a mechanics lab that uses the same probe-ware technology and deals with the same topics in mechanics, but uses a differently designed task structure. I will argue that the lower achievements on the FMCE-test in this latter case can be attributed to these differences in task structure in the lab instructions. According to my analysis, the necessary pattern of variation is not included in the design. I will also present a microanalysis of 15 hours of video recordings of engineering students' activities in a lab about impulse and collisions. The important object of learning in this lab is the development of an understanding of Newton's third law. The approach to analysing students' interaction using video data is inspired by ethnomethodology and conversation analysis, i.e. I will focus on students' practical, contingent and embodied inquiry in the setting of the lab. I argue that my results corroborate variation theory and show that this theory can be used as a 'tool' for designing labs as well as for analysing labs and lab instructions. Thus my results have implications outside the domain of this study, in particular for understanding critical features for student learning in labs. Engineering higher education is well used to change. As technology develops, the abilities expected by employers of graduates expand, yet our understanding of how to make informed decisions about learning and teaching strategies does not grow without a conscious effort. With the numerous demands of academic life, we often fail to acknowledge our incomplete understanding of how our students learn within our discipline. The journey facing engineering education in the UK is being driven by two classes of driver: firstly, those which we have been working to expand our understanding of, such as retention and employability; and secondly, new challenges such as substantial changes to funding systems allied with an increase in student expectations. Only through continued research can priorities be identified, addressed and a coherent and strong voice for informed change be heard within the wider engineering education community.
This new position makes it even more important that through EER we acquire the knowledge and understanding needed to make informed decisions regarding approaches to teaching, curriculum design and measures to promote effective student learning. This then raises the question 'how does EER function within a diverse academic community?' Within an existing community of academics interested in taking meaningful steps towards understanding the ongoing challenges of engineering education, a Special Interest Group (SIG) has formed in the UK. The formation of this group has itself been part of the rapidly changing environment through its facilitation by the Higher Education Academy's Engineering Subject Centre, an entity which through the Academy's current restructuring will no longer exist as a discrete Centre dedicated to supporting engineering academics. The aims of this group, the activities it is currently undertaking and how it expects to network and collaborate with the global EER community will be reported in this paper. This will include an explanation of how the group has identified barriers to the progress of EER and how it is seeking, through a series of activities, to facilitate recognition and growth of EER both within the UK and with our valued international colleagues.
Abstract:
Highways are generally designed to serve a mixed traffic flow that consists of passenger cars, trucks, buses, recreational vehicles, etc. The fact that the impacts of these different vehicle types are not uniform creates problems in highway operations and safety. A common approach to reducing the impacts of truck traffic on freeways has been to restrict trucks to certain lane(s) to minimize the interaction between trucks and other vehicles and to compensate for their differences in operational characteristics. The performance of different truck lane restriction alternatives differs under different traffic and geometric conditions. Thus, a good estimate of the operational performance of different truck lane restriction alternatives under prevailing conditions is needed to help make informed decisions on truck lane restriction alternatives. This study develops operational performance models that can be applied to help identify the most operationally efficient truck lane restriction alternative on a freeway under prevailing conditions. The operational performance measures examined in this study include average speed, throughput, speed difference, and lane changes. Prevailing conditions include number of lanes, interchange density, free-flow speeds, volumes, truck percentages, and ramp volumes. Recognizing the difficulty of collecting sufficient data for an empirical modeling procedure that involves a high number of variables, the simulation approach was used to estimate the performance values for various truck lane restriction alternatives under various scenarios. Both the CORSIM and VISSIM simulation models were examined for their ability to model truck lane restrictions. Due to a major problem found in the CORSIM model for truck lane modeling, the VISSIM model was adopted as the simulator for this study. The VISSIM model was calibrated mainly to replicate the capacity given in the 2000 Highway Capacity Manual (HCM) for various free-flow speeds under the ideal basic freeway section conditions. Non-linear regression models for average speed, throughput, average number of lane changes, and speed difference between the lane groups were developed. Based on the performance models developed, a simple decision procedure was recommended to select the desired truck lane restriction alternative for prevailing conditions.
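The abstract does not give the fitted functional forms, but the following hedged sketch shows how a non-linear regression of average speed on traffic volume and truck percentage of the kind described could be estimated with SciPy. The exponential decay form, parameter names and synthetic data are assumptions, not the study's actual models.

```python
# Non-linear least squares fit of an assumed speed-volume-truck relationship.
import numpy as np
from scipy.optimize import curve_fit

def speed_model(X, v_free, a, b):
    """Average speed decays from free-flow speed v_free with volume and truck share."""
    volume, truck_pct = X
    return v_free * np.exp(-a * volume / 1000.0 - b * truck_pct / 100.0)

gen = np.random.default_rng(1)
volume = gen.uniform(500, 2200, 200)         # veh/h/lane (synthetic)
truck_pct = gen.uniform(0, 25, 200)          # percent trucks (synthetic)
speed = speed_model((volume, truck_pct), 70, 0.15, 0.5) + gen.normal(0, 1.0, 200)

params, _ = curve_fit(speed_model, (volume, truck_pct), speed, p0=[65, 0.1, 0.1])
print("fitted v_free, a, b:", params)
```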
Abstract:
Providing transportation system operators and travelers with accurate travel time information allows them to make more informed decisions, yielding benefits for individual travelers and for the entire transportation system. Most existing advanced traveler information systems (ATIS) and advanced traffic management systems (ATMS) use instantaneous travel time values estimated based on the current measurements, assuming that traffic conditions remain constant in the near future. For more effective applications, it has been proposed that ATIS and ATMS should use travel times predicted for short-term future conditions rather than instantaneous travel times measured or estimated for current conditions. This dissertation research investigates short-term freeway travel time prediction using Dynamic Neural Networks (DNN) based on traffic detector data collected by radar traffic detectors installed along a freeway corridor. DNN comprises a class of neural networks that are particularly suitable for predicting variables like travel time, but has not been adequately investigated for this purpose. Before this investigation, it was necessary to identify methods for data imputation to account for missing data usually encountered when collecting data using traffic detectors. It was also necessary to identify a method to estimate the travel time on the freeway corridor based on data collected using point traffic detectors. A new travel time estimation method referred to as the Piecewise Constant Acceleration Based (PCAB) method was developed and compared with other methods reported in the literature. The results show that one of the simple travel time estimation methods (the average speed method) can work as well as the PCAB method, and both of them out-perform other methods. This study also compared the travel time prediction performance of three different DNN topologies with different memory setups. The results show that one DNN topology (the time-delay neural networks) out-performs the other two DNN topologies for the investigated prediction problem. This topology also performs slightly better than the simple multilayer perceptron (MLP) neural network topology that has been used in a number of previous studies for travel time prediction.
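To illustrate the idea behind the time-delay topology that performed best, the sketch below feeds a tapped delay line of recent travel time measurements into a small feed-forward network using scikit-learn. The lag depth, prediction horizon, network size and synthetic signal are illustrative assumptions, not the dissertation's actual configuration or data.

```python
# A time-delay predictor in its simplest form: an MLP over a sliding window
# of lagged measurements, trained to predict a few steps ahead.
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged(series, n_lags, horizon):
    """Rows of the last n_lags measurements paired with the value horizon steps ahead."""
    X, y = [], []
    for t in range(n_lags - 1, len(series) - horizon):
        X.append(series[t - n_lags + 1: t + 1])
        y.append(series[t + horizon])
    return np.array(X), np.array(y)

# Synthetic "travel time" signal (minutes) with a daily pattern plus noise.
gen = np.random.default_rng(2)
t = np.arange(2000)
travel_time = 10 + 4 * np.sin(2 * np.pi * t / 288) + gen.normal(0, 0.5, len(t))

X, y = make_lagged(travel_time, n_lags=6, horizon=3)   # predict 3 steps ahead
split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
print("test R^2:", model.score(X[split:], y[split:]))
```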