902 results for Costs-consequences analysis
Abstract:
The study examined the relationships between antecedents, timeliness in NPD and INPR, and consequences. A conceptual framework was tested using 232 new products from South Korean firms. The hypothesized relationships among the constructs in the model were evaluated by multiple regression and hierarchical regression analyses using SPSS 12, as well as by structural equation modelling (SEM) using SIMPLIS LISREL. In addition, confirmatory factor analysis (CFA) was carried out using SIMPLIS LISREL. Among the direct relationships, cross-functional linkages and marketing synergy exhibited a statistically significant effect on NPD timeliness. The results also supported the influences of the HQ-subsidiary/agent relationship and NPD timeliness on INPR timeliness, as well as of INPR timeliness on performance. In the mediating-effect tests, marketing proficiency significantly accounts for the relationships between cross-functional linkages and NPD timeliness, between marketing synergy and NPD timeliness, and between the HQ-subsidiary/agent relationship and INPR timeliness. Technical proficiency also mediates the effect of the HQ-subsidiary/agent relationship on INPR timeliness. The influence of NPD timeliness on new product performance in target markets is attributed to INPR timeliness. As for the effects of the external environment and standardization, competitive intensity moderates the relationship between NPD timeliness and new product performance. Technological change also moderates the relationships between cross-functional linkages and NPD timeliness and between timeliness in NPD and INPR and performance. Standardization plays a moderating role in the relationship between NPD timeliness and INPR timeliness. This study answers research questions concerning which factors predict the criterion variables, how antecedents influence timeliness in NPD and INPR, and when the direct relationships in the INPR process are strengthened.
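To make the analytical approach concrete, the sketch below shows how a mediation step and a moderation step of this kind can be tested as hierarchical regressions in Python with statsmodels. It is illustrative only, not the study's SPSS/LISREL procedure, and the data are synthetic stand-ins with hypothetical variable names.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in data; the study's 232-product survey is not reproduced here.
    rng = np.random.default_rng(0)
    n = 232
    cross_functional = rng.normal(size=n)
    mkt_proficiency = 0.5 * cross_functional + rng.normal(size=n)      # candidate mediator
    npd_timeliness = 0.3 * cross_functional + 0.4 * mkt_proficiency + rng.normal(size=n)
    competitive_intensity = rng.normal(size=n)                          # candidate moderator
    performance = 0.5 * npd_timeliness + 0.3 * npd_timeliness * competitive_intensity + rng.normal(size=n)
    df = pd.DataFrame({"cross_functional": cross_functional, "mkt_proficiency": mkt_proficiency,
                       "npd_timeliness": npd_timeliness, "competitive_intensity": competitive_intensity,
                       "performance": performance})

    # Mediation (Baron & Kenny logic): does marketing proficiency account for the effect
    # of cross-functional linkages on NPD timeliness?
    direct = smf.ols("npd_timeliness ~ cross_functional", data=df).fit()
    with_mediator = smf.ols("npd_timeliness ~ cross_functional + mkt_proficiency", data=df).fit()
    print(direct.params["cross_functional"], with_mediator.params["cross_functional"])  # coefficient shrinks if mediated

    # Moderation: hierarchical step that adds the interaction term.
    base = smf.ols("performance ~ npd_timeliness + competitive_intensity", data=df).fit()
    interaction = smf.ols("performance ~ npd_timeliness * competitive_intensity", data=df).fit()
    print(interaction.compare_f_test(base))  # a significant F for the added term indicates moderation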
Abstract:
This thesis presents the results of an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of the methods for measuring minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta and other bands commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which give rise to the observed time series are linear. This is despite a variety of reasons to suspect that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis is to show that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratio that is obtained. Because the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are necessarily extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings, but this has notable drawbacks. In particular, it is difficult to synchronise high-frequency activity of interest, and these signals are often cancelled out by the averaging process. Other recurring problems are the high cost and low portability of state-of-the-art multichannel machines. As a result, the use of MEG has hitherto been restricted to large institutions able to afford the costs associated with procuring and maintaining these machines.
In this project, we seek to address these issues by working almost exclusively with single-channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas, from financial time series modelling to the analysis of sunspot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
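As a concrete illustration of this dynamical-systems starting point, the sketch below builds a time-delay (Takens) embedding of a single scalar channel, the usual first step before nonlinear analysis. The signal is synthetic and the delay and embedding dimension are arbitrary placeholders; this is not code from the thesis.

    import numpy as np

    # Synthetic stand-in for one unaveraged channel: a noisy 10 Hz oscillation sampled at 1 kHz.
    rng = np.random.default_rng(0)
    t = np.arange(20_000) / 1000.0
    x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)

    tau, m = 10, 5                      # delay (in samples) and embedding dimension, chosen arbitrarily here
    n = x.size - (m - 1) * tau
    # Row i of the matrix is the reconstructed state (x[i], x[i+tau], ..., x[i+(m-1)*tau]).
    states = np.column_stack([x[k * tau : k * tau + n] for k in range(m)])
    print(states.shape)                 # (n, m) delay vectors, the input to subsequent nonlinear methods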
Abstract:
Predicting future need for water resources has traditionally been, at best, a crude mixture of art and science. This has prevented the evaluation of water need from being carried out in either a consistent or a comprehensive manner. This inconsistent and somewhat arbitrary approach to water resources planning led to well-publicised premature developments in the 1970s and 1980s, but privatisation of the Water Industry, including creation of the Office of Water Services and the National Rivers Authority in 1989, turned the tide of resource planning to the point where funding of schemes and their justification by the Regulators could no longer be assumed. Furthermore, considerable areas of uncertainty were beginning to enter the debate and complicate the assessment. It was also no longer appropriate to assume that contingencies would continue to lie solely on the demand side of the equation. An inability to calculate the balance between supply and demand may mean an inability to meet standards of service or, arguably worse, an excessive provision of water resources and excessive costs to customers. The United Kingdom Water Industry Research Limited (UKWIR) Headroom project in 1998 provided a simple methodology for the calculation of planning margins. This methodology, although well received, was not accepted by the Regulators as a tool sufficient to promote resource development. This thesis begins by considering the history of water resource planning in the UK, moving on to discuss events following privatisation of the water industry post-1989. The mid section of the research forms the bulk of the original work and provides a scoping exercise which reveals a catalogue of uncertainties prevalent within the supply-demand balance. Each of these uncertainties is considered in terms of materiality, scope, and whether it can be quantified within a risk analysis package. Many of the areas of uncertainty identified would merit further research. A workable yet robust methodology for evaluating the balance between water resources and water demands using a spreadsheet-based risk analysis package is presented. The technique involves statistical sampling and simulation: samples are taken from input distributions on both the supply and demand sides of the equation, and the imbalance between supply and demand is calculated in the form of an output distribution. The percentiles of the output distribution represent different standards of service to the customer. The model allows dependencies between distributions to be considered, improved uncertainty estimates to be assessed, and the impact of uncertain solutions to any imbalance to be calculated directly. The method is considered a significant leap forward in the field of water resource planning.
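The sampling-and-simulation idea can be sketched in a few lines; the distributions and figures below are purely hypothetical and this is not the thesis's spreadsheet model, but it shows how percentiles of the simulated supply-demand imbalance map onto standards of service.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    # Hypothetical input distributions (Ml/d) for the supply and demand sides.
    deployable_output = rng.normal(520, 25, n)
    outage_allowance = rng.triangular(10, 15, 25, n)
    demand_forecast = rng.normal(480, 30, n)

    # Output distribution of the supply-demand balance, one value per simulated scenario.
    imbalance = (deployable_output - outage_allowance) - demand_forecast

    # Percentiles of the output distribution correspond to different standards of service.
    for p in (5, 50, 95):
        print(f"{p}th percentile of supply-demand balance: {np.percentile(imbalance, p):.1f} Ml/d")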
Abstract:
In the general introduction to the road-accident phenomenon inside and outside Iran, the results of previous research works and international conferences and seminars on road safety are reviewed. A sample road between Tehran and Mashad is also investigated as a case study. In examining road-accident data and information, first the information presented in road-accident report forms in developed countries is discussed and, second, the procedures for road-accident data collection in Iran are investigated in detail. The data supplied by the Iran Road-Police Central Statistics Office are analysed, different rates are computed, comparisons with other nations are made, and the results are discussed. Such analyses and comparisons are also presented for the different provinces of Iran. It is concluded that each province, with its own natural, geographical, social and economic characteristics, has its own reasons for the quality and quantity of road accidents and must therefore receive its own appropriate remedial solutions. The questions of "what is the cost of road accidents", "why and how to evaluate the cost" and "what is the appropriate approach to such evaluation" are all discussed, and then the cost of road accidents in Iran is computed on the basis of two different approaches: "Gross National Output" and "court award". It is concluded that this cost is about 1.5 per cent of the country's national product. In Appendix 3 an impressive example is given of the trend of costs and benefits that can be attributed to investment in road-safety measures.
Abstract:
This thesis describes the history of robots and explains the reasons for the international differences in robot diffusion, and for the differences in the diffusion of various robot applications, with reference to the UK. In contrast to most of the literature, diffusion is examined from an integrated and interdisciplinary perspective. Robot technology evolves from the interaction of development, supply and manufacture, adoption, and promotion activities. Emphasis is given to the analysis of adoption, at present the most important limiting factor of robot advancement in the UK. Technical development is inferred from a comparison of surveys on equipment and from the topics of ten years of symposia papers. This classification of papers is also used to highlight the international and institutional differences in robot development. Growth in robot supply, manufacture, and use is analysed from the statistics compiled. A series of interviews with users and potential users serves to illustrate the factors and implications of the adoption of different robot systems in the UK. Adoption pioneering takes place when several conditions exist: when the technology is compatible with the firm, when its advantages outweigh its disadvantages, and particularly when a climate exists which encourages managerial involvement and labour acceptance. The degree of compatibility (technical, methodological, organisational, and economic) and the consequences (profitability, labour impacts, and managerial effects) of different robot systems (transfer, manipulative, processing, and assembly) are determined by various aspects of manufacturing operations (complexity, automation, integration, labour tasks, and working conditions). The climate for adoption pioneering is basically determined by the performance of firms. The firms' policies on capital investment have as decisive a role in determining the profitability of robots as their total labour costs. The performance of the motor car industry and its machine builders explains, more than any other factor, the present state of robot advancement in the UK.
The structural and electrochemical consequences of hydrogenating Copper N2S2 Schiff base macrocycles
Abstract:
A series of cis and trans tetradentate copper macrocyclic complexes, of ring size fourteen to sixteen, which employ amine and thioether donor groups, is reported. Apart from 5,6,15,16-bisbenzo-8,13-diaza-1,4-dithia-cyclohexadecane copper(I) (cis-[Cu(H4NbuSen)]+), all of the complexes are obtained in the copper(II) form. Crystallographic analysis shows that the copper(II) complexes all adopt a distorted planar geometry around the copper. In contrast, cis-[Cu(H4NbuSen)]+ is found to adopt a distorted tetrahedral geometry. The complexes were subjected to electrochemical analysis in water and acetonitrile. The effects of the solvent and of the positions of the donor atoms (cis/trans) on E1/2 are discussed, as is the comparison of the electrochemical behaviour of these complexes with that of their parent Schiff base macrocycles.
Abstract:
Biomass-to-Liquid (BTL) is one of the most promising low-carbon processes available to support the expanding transportation sector. This multi-step process produces hydrocarbon fuels from biomass, the so-called “second generation biofuels” that, unlike first generation biofuels, can make use of a wider range of biomass feedstock than just plant oils and sugar/starch components. A BTL process based on gasification has yet to be commercialized. This work focuses on the techno-economic feasibility of nine BTL plants. The scope was limited to hydrocarbon products as these can be readily incorporated and integrated into conventional markets and supply chains. The evaluated BTL systems were based on pressurised oxygen gasification of wood biomass or bio-oil, and they were characterised by different fuel synthesis processes including Fischer-Tropsch synthesis, the Methanol to Gasoline (MTG) process and the Topsoe Integrated Gasoline (TIGAS) synthesis. This was the first time that these three fuel synthesis technologies were compared in a single, consistent evaluation. The selected process concepts were modelled using the process simulation software IPSEpro to determine mass balances, energy balances and product distributions. For each BTL concept, a cost model was developed in MS Excel to estimate capital, operating and production costs. An uncertainty analysis based on the Monte Carlo statistical method was also carried out to examine how uncertainty in the input parameters of the cost model could affect its output (i.e. production cost). This was the first time that an uncertainty analysis was included in a published techno-economic assessment study of BTL systems. It was found that bio-oil gasification cannot currently compete with solid biomass gasification due to the lower efficiencies and higher costs associated with the additional thermal conversion step of fast pyrolysis. Fischer-Tropsch synthesis was the most promising fuel synthesis technology for commercial production of liquid hydrocarbon fuels since it achieved higher efficiencies and lower costs than TIGAS and MTG. None of the BTL systems were competitive with conventional fossil fuel plants. However, if the government tax take were reduced by approximately 33%, or a subsidy of £55/t dry biomass were available, transport biofuels could be competitive with conventional fuels. Large-scale biofuel production may be possible in the long term through subsidies, fuel price rises and legislation.
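To illustrate how a cost model of this kind propagates input uncertainty into production cost, the sketch below runs a Monte Carlo levelised-cost calculation; every figure, distribution and rate is a hypothetical placeholder rather than a value from this study, and the calculation is far simpler than the IPSEpro/Excel models described above.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50_000

    # Hypothetical uncertain inputs for one plant concept.
    capex = rng.triangular(250e6, 300e6, 400e6, n)        # total capital cost, GBP
    opex = rng.normal(20e6, 3e6, n)                        # annual operating cost, GBP/yr
    feed_cost = rng.uniform(60, 90, n) * 200_000           # GBP per t dry biomass * t/yr of feed
    fuel_output = 90_000                                   # t of liquid fuel per year (assumed fixed)

    r, life = 0.10, 20                                     # discount rate and plant life, assumed
    crf = r * (1 + r) ** life / ((1 + r) ** life - 1)      # capital recovery factor (annualises capex)

    production_cost = (capex * crf + opex + feed_cost) / fuel_output   # GBP per tonne of fuel
    print(np.percentile(production_cost, [10, 50, 90]))    # spread of production cost under input uncertainty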
Abstract:
The price of energy affects more than half of the total life cycle cost of asphalt pavements. Furthermore, fluctuations in the price of energy have been much larger than general inflation and the interest rate. This makes energy price inflation an important variable that should be addressed when performing life cycle cost (LCC) studies of asphalt pavements. The present value of future costs is highly sensitive to the selected discount rate; the choice of the discount rate is therefore the most critical element in LCC analysis over the lifetime of a project. The objective of the paper is to present a discount rate for asphalt pavement projects as a function of the interest rate, general inflation and energy price inflation. The discount rate is defined based on the portion of energy-related costs during the lifetime of the pavement. Consequently, it can reflect the financial risks related to the energy price in asphalt pavement projects. It is suggested that a discount rate sensitivity analysis for asphalt pavements in Sweden should range between –20 and 30%.
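One plausible way to write such a weighted rate is sketched below; this is a reading of the idea rather than the paper's exact definition. With nominal interest rate $i$, general inflation $f_g$, energy price inflation $f_e$, and $w$ the share of energy-related costs over the pavement's lifetime:

    r = (1 - w)\,\frac{1 + i}{1 + f_g} + w\,\frac{1 + i}{1 + f_e} - 1

For $w = 0$ this reduces to the usual real discount rate $(1 + i)/(1 + f_g) - 1$; as $w$ grows and $f_e$ exceeds $f_g$, the effective discount rate falls, which raises the present value of future energy-dependent costs.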
Abstract:
Police-suspect interviews in England & Wales are a multi-audience, multi-purpose, transcontextual mode of discourse. They are conducted as part of the initial investigation into a crime, but are subsequently recontextualised through the judicial process, ultimately being presented in court as evidence against the interviewee. The communicative challenges posed by multiple future audiences are investigated by applying Bell’s (1984) audience design model to the police interview, and the resulting "poor fit" demonstrates why this context is discursively counter-intuitive to participants. Further, data analysis indicates that interviewer and interviewee, although ostensibly addressing each other, may orientate to different audiences, with potentially serious consequences. As well as providing new insight into police-suspect interview interaction, this article seeks to extend understanding of the influence of audience on interaction at the discourse level, and to contribute to the development of theoretical models for contexts with multiple or asynchronous audiences.
Abstract:
The implementation of Enterprise Resource Planning (ERP) systems requires huge investments, yet ineffective implementations of such projects are commonly observed. A considerable number of these projects have been reported to fail or to take longer than initially planned, while previous studies show that the aim of rapid implementation of such projects has often not been achieved, and the failure to meet the fundamental goals of these projects has imposed huge costs on investors. Some of the major consequences are reduced demand for such products and increased skepticism among the managers and investors of ERP systems. In this regard, it is important to understand the factors that determine the success or failure of ERP implementation. The aim of this paper is to study the critical success factors (CSFs) in implementing ERP systems and to develop a conceptual model which can serve as a basis for ERP project managers. These critical success factors, termed “core critical success factors”, are extracted from 62 published papers using content analysis and the entropy method. The proposed conceptual model has been verified in the context of five multinational companies.
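The entropy step can be illustrated with a small sketch; the frequency matrix below is hypothetical (it is not the paper's data), but the weighting follows the standard entropy method: factors whose scores are less uniformly spread across sources are more discriminating and receive higher weights.

    import numpy as np

    # Hypothetical matrix: rows = published papers, columns = candidate success factors,
    # values = how strongly each paper emphasises each factor.
    scores = np.array([[5.0, 1.0, 4.0],
                       [3.0, 0.0, 4.0],
                       [4.0, 2.0, 5.0]])

    p = scores / scores.sum(axis=0, keepdims=True)           # normalise within each factor column
    k = 1.0 / np.log(scores.shape[0])
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)          # treat 0 * log(0) as 0
    entropy = -k * plogp.sum(axis=0)                         # entropy of each factor's score profile
    weights = (1.0 - entropy) / (1.0 - entropy).sum()        # entropy weights, summing to 1
    print(weights)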
Abstract:
This paper explores the potential for cost savings in the general Practice units of a Primary Care Trust (PCT) in the UK. We have used Data Envelopment Analysis (DEA) to identify benchmark Practices, which offer the lowest aggregate referral and drugs costs controlling for the number, age, gender, and deprivation level of the patients registered with each Practice. For the remaining, non-benchmark Practices, estimates of the potential for savings on referral and drug costs were obtained. Such savings could be delivered through a combination of the following actions: (i) reducing the levels of referrals and prescriptions without affecting their mix (£15.74 m savings were identified, representing 6.4% of total expenditure); (ii) switching between inpatient and outpatient referrals and/or drug treatment to exploit differences in their unit costs (£10.61 m savings were identified, representing 4.3% of total expenditure); (iii) seeking a different profile of referral and drug unit costs (£11.81 m savings were identified, representing 4.8% of total expenditure). © 2012 Elsevier B.V. All rights reserved.
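For readers unfamiliar with DEA, the sketch below solves the input-oriented CCR efficiency problem for a few fictitious Practices, with two cost inputs and a single output standing in for the demographically adjusted patient list. It illustrates only the benchmarking idea and uses no data from this study.

    import numpy as np
    from scipy.optimize import linprog

    # Fictitious Practices: inputs are referral and drug costs (GBP m), output is
    # registered patients ('000), here unadjusted for age, gender and deprivation.
    X = np.array([[1.2, 0.8], [1.0, 1.1], [0.7, 0.6], [1.5, 1.3]])
    Y = np.array([[7.0], [8.0], [6.5], [9.0]])

    def efficiency(k):
        """Input-oriented CCR efficiency of Practice k: minimise theta such that a
        non-negative combination of all Practices uses at most theta times Practice k's
        inputs while producing at least Practice k's outputs."""
        n = X.shape[0]
        c = np.r_[1.0, np.zeros(n)]                      # decision variables: theta, lambda_1..lambda_n
        A_inputs = np.c_[-X[k], X.T]                     # sum_j lambda_j * x_j <= theta * x_k
        A_outputs = np.c_[np.zeros(Y.shape[1]), -Y.T]    # sum_j lambda_j * y_j >= y_k
        A_ub = np.vstack([A_inputs, A_outputs])
        b_ub = np.r_[np.zeros(X.shape[1]), -Y[k]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
        return float(res.x[0])

    for k in range(len(X)):
        print(f"Practice {k}: efficiency = {efficiency(k):.2f}")   # 1.00 marks a benchmark Practice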
Abstract:
Three studies tested the impact of properties of behavioral intention on intention-behavior consistency, information processing, and resistance. Principal components analysis showed that properties of intention formed distinct factors. Study 1 demonstrated that temporal stability, but not the other intention attributes, moderated intention-behavior consistency. Study 2 found that greater stability of intention was associated with improved memory performance. In Study 3, participants were confronted with a rating scale manipulation designed to alter their intention scores. Findings showed that stable intentions were able to withstand attack. Overall, the present research findings suggest that different properties of intention are not simply manifestations of a single underlying construct ("intention strength"), and that temporal stability exhibits superior resistance and impact compared to other intention attributes. © 2013 Wiley Periodicals, Inc.
Abstract:
Objectives: This paper highlights the importance of analysing patient transportation in Nordic circumpolar areas. The research questions we asked are as follows: How many Finnish patients were transferred to special care intra-country and inter-country in 2009? Does it make any difference to health care policymakers if patients are transferred inter-country? Study design: We analysed the differences in distances from health care centres to special care services within Finland, Sweden and Norway and considered the health care policy implications. Methods: An analysis of the time required to drive between service providers using the "Google distance meter" (http://maps.google.com/); interviews with key Finnish stakeholders; and a quantitative analysis of referral data from the Lapland Hospital District. Results: Finnish patients are generally not transferred for health care services across national borders even if the distances are shorter. Conclusion: Finnish patients have limited access to health care services in circumpolar areas across the Nordic countries for two reasons. First, health professionals in Norway and Sweden do not speak Finnish, which presents a language problem. Second, the Social Insurance Institution of Finland does not cover the expenditure of travel or the costs of medicine. In addition, it seems that in circumpolar areas the density of Finnish service providers is greater than that of Swedish ones, causing many Swedish citizens to transfer to Finnish health care providers every year. However, future research is needed to determine the precise reasons for this.
Abstract:
Fps1p is a glycerol efflux channel from Saccharomyces cerevisiae. In this atypical major intrinsic protein neither of the signature NPA motifs of the family, which are part of the pore, is preserved. To understand the functional consequences of this feature, we analyzed the pseudo-NPA motifs of Fps1p by site-directed mutagenesis and assayed the resultant mutant proteins in vivo. In addition, we took advantage of the fact that the closest bacterial homolog of Fps1p, Escherichia coli GlpF, can be functionally expressed in yeast, thus enabling the analysis in yeast cells of mutations that make this typical major intrinsic protein more similar to Fps1p. We observed that mutations made in Fps1p to "restore" the signature NPA motifs did not substantially affect channel function. In contrast, when GlpF was mutated to resemble Fps1p, all mutants had reduced activity compared with wild type. We rationalized these data by constructing models of one GlpF mutant and of the transmembrane core of Fps1p. Our model predicts that the pore of Fps1p is more flexible than that of GlpF. We discuss the fact that this may accommodate the divergent NPA motifs of Fps1p and that the different pore structures of Fps1p and GlpF may reflect the physiological roles of the two glycerol facilitators.
Abstract:
Despite considerable and growing interest in the subject of academic researchers and practising managers jointly generating knowledge (which we term ‘co-production’), our searches of management literature revealed few articles based on primary data or multiple cases. Given the increasing commitment to co-production by academics, managers and those funding research, it seems important to strengthen the evidence base about practice and performance in co-production. Literature on collaborative research was reviewed to develop a framework to structure the analysis of these data and to relate findings to the limited body of prior research on collaborative research practice and performance. This paper presents empirical data from four completed, large-scale co-production projects. Despite major differences between the cases, we find that the key success factors and the indicators of performance are remarkably similar. We demonstrate many complex influences between factors, between outcomes, and between factors and outcomes, and discuss the features that are distinctive to co-production. Our empirical findings are broadly consonant with prior literature, but go further in trying to understand the consequences of success factors for performance. A second contribution of this paper is the development of a conceptually and methodologically rigorous process for investigating collaborative research, linking process and performance. The paper closes with a discussion of the study’s limitations and opportunities for further research.