801 results for success of a firm
Abstract:
Purpose Few robust randomised controlled trials investigating fruit and vegetable (F&V) intake in people at risk of cardiovascular disease (CVD) exist. We aimed to design and validate a dietary strategy of increasing flavonoid-rich versus flavonoid-poor F&V consumption and to assess its effect on nutrient biomarker profile. Methods A parallel, randomised, controlled, dose–response dietary intervention study. Participants with a CVD relative risk of 1.5 assessed by risk scores were randomly assigned to one of the 3 groups: habitual (control, CT), high-flavonoid (HF) or low-flavonoid (LF) diets. While the CT group (n = 57) consumed their habitual diet throughout, the HF (n = 58) and LF (n = 59) groups sequentially increased their daily F&V intake by an additional 2, 4 and 6 portions for 6-week periods during the 18-week study. Results Compliance with target numbers and types of F&V was broadly achieved and verified by dietary records, and plasma and urinary biomarkers. Mean (±SEM) number of F&V portions/day consumed by the HF and LF groups at baseline (3.8 ± 0.3 and 3.4 ± 0.3), 6 weeks (6.3 ± 0.4 and 5.8 ± 0.3), 12 weeks (7.0 ± 0.3 and 6.8 ± 0.3) and 18 weeks (7.6 ± 0.4 and 8.1 ± 0.4), respectively, was similar at baseline yet higher than the CT group (3.9 ± 0.3, 4.3 ± 0.3, 4.6 ± 0.4, 4.5 ± 0.3) (P = 0.015). There was a dose-dependent increase in dietary and urinary flavonoids in the HF group, with no change in other groups (P = 0.0001). Significantly higher dietary intakes of folate (P = 0.035), non-starch polysaccharides (P = 0.001), vitamin C (P = 0.0001) and carotenoids (P = 0.0001) were observed in both intervention groups compared with CT, which were broadly supported by nutrient biomarker analysis. Conclusions The success of improving nutrient profile by active encouragement of F&V intake in an intervention study implies the need for a more hands-on public health approach.
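As a hedged illustration only (not the study's analysis code), the group means with standard errors reported above can be computed from per-participant portion counts; the sample values below are hypothetical:

```python
import statistics

def mean_sem(portions):
    """Mean and standard error of the mean (SEM) for one group at one timepoint."""
    m = statistics.mean(portions)
    sem = statistics.stdev(portions) / len(portions) ** 0.5
    return m, sem

# Hypothetical daily F&V portions for five participants in one group
hf_baseline = [3.0, 4.5, 3.5, 4.0, 4.0]
m, sem = mean_sem(hf_baseline)
print(f"{m:.1f} \u00b1 {sem:.2f} portions/day")
```

The same calculation, applied per group and per timepoint, yields values in the form quoted in the abstract (e.g. 3.8 ± 0.3).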
Abstract:
Reductions in the division of labour are a significant feature of modern developments in work organisation. It has been recognised that a reduced division of labour can have the advantages of job enrichment and lower coordination costs. In this paper it is shown how advantages from a lesser division of labour can stem from the flow of work between different sets of resources where the work rates of individual production stages are subject to uncertainties. Both process and project-based work are considered. Implications for the boundaries of the firm and for innovation processes are noted.
Abstract:
The construction sector has a major role to play in delivering the transition to a low carbon economy and in contributing to sustainable development; however, integrating sustainability into everyday business remains a major challenge for the sector. This research explores the experience of three large construction and engineering consultancy firms in mainstreaming sustainability. The aim of the paper is to identify and explain variations in firm level strategies for mainstreaming sustainability. The three cases vary in the way in which sustainability is framed – as a problem of risk, business opportunity or culture – and in its location within the firm. The research postulates that the mainstreaming of sustainability is not the uniform linear process often articulated in theories of strategic change and management, but varies with the dominant organisational culture and history of each firm. The paper concludes with a reflection on the implications of this analysis for management theories and for firm level strategies.
Abstract:
In the mid 1990s the North Atlantic subpolar gyre (SPG) warmed rapidly, with sea surface temperatures (SST) increasing by 1°C in just a few years. By examining initialized hindcasts made with the UK Met Office Decadal Prediction System (DePreSys), it is shown that the warming could have been predicted. Conversely, hindcasts that only consider changes in radiative forcings are not able to capture the rapid warming. Heat budget analysis shows that the success of the DePreSys hindcasts is due to the initialization of anomalously strong northward ocean heat transport. Furthermore, it is found that initializing a strong Atlantic circulation, and in particular a strong Atlantic Meridional Overturning Circulation, is key for successful predictions. Finally, we show that DePreSys is able to predict significant changes in SST and other surface climate variables related to the North Atlantic warming.
Abstract:
The impending threat of global climate change and its regional manifestations is among the most important and urgent problems facing humanity. Society needs accurate and reliable estimates of changes in the probability of regional weather variations to develop science-based adaptation and mitigation strategies. Recent advances in weather prediction and in our understanding and ability to model the climate system suggest that it is both necessary and possible to revolutionize climate prediction to meet these societal needs. However, the scientific workforce and the computational capability required to bring about such a revolution are not available in any single nation. Motivated by the success of internationally funded infrastructure in other areas of science, this paper argues that, because of the complexity of the climate system, and because the regional manifestations of climate change are mainly through changes in the statistics of regional weather variations, the scientific and computational requirements to predict its behavior reliably are so enormous that the nations of the world should create a small number of multinational high-performance computing facilities dedicated to the grand challenges of developing the capabilities to predict climate variability and change on both global and regional scales over the coming decades. Such facilities will play a key role in the development of next-generation climate models, build global capacity in climate research, nurture a highly trained workforce, and engage the global user community, policy-makers, and stakeholders. We recommend the creation of a small number of multinational facilities with computer capability at each facility of about 20 petaflops in the near term, about 200 petaflops within five years, and 1 exaflop by the end of the next decade. Each facility should have sufficient scientific workforce to develop and maintain the software and data analysis infrastructure.
Such facilities will make it possible to address the question of what resolution, both horizontal and vertical, is necessary in atmospheric and ocean models for more confident predictions at the regional and local level. Current limits on computing power have severely constrained such investigation, which is now badly needed. These facilities will also provide the world's scientists with the computational laboratories for fundamental research on weather–climate interactions using 1-km resolution models and on atmospheric, terrestrial, cryospheric, and oceanic processes at even finer scales. Each facility should have enabling infrastructure including hardware, software, and data analysis support, and scientific capacity to interact with the national centers and other visitors. This will accelerate our understanding of how the climate system works and how to model it. It will ultimately enable the climate community to provide society with climate predictions, which are based on our best knowledge of science and the most advanced technology.
Abstract:
Salmonella enterica serotypes Derby, Mbandaka, Montevideo, Livingstone, and Senftenberg were among the 10 most prevalent serotypes isolated from farm animals in England and Wales in 1999. These serotypes are of potential zoonotic relevance; however, there is currently no "gold standard" fingerprinting method for them. A collection of isolates representing the former serotypes and serotype Gold Coast were analyzed using plasmid profiling, pulsed-field gel electrophoresis (PFGE), and ribotyping. The success of the molecular methods in identifying DNA polymorphisms was different for each serotype. Plasmid profiling was particularly useful for serotype Derby isolates, and it also provided a good level of discrimination for serotype Senftenberg. For most serotypes, we observed a number of nontypeable plasmid-free strains, which represents a limitation of this technique. Fingerprinting of genomic DNA by ribotyping and PFGE produced a significant variation in results, depending on the serotype of the strain. Both PstI/SphI ribotyping and XbaI-PFGE provided a similar degree of strain differentiation for serotype Derby and serotype Senftenberg, only marginally lower than that achieved by plasmid profiling. Ribotyping was less sensitive than PFGE when applied to serotype Mbandaka or serotype Montevideo. Serotype Gold Coast isolates were found to be nontypeable by XbaI-PFGE, and a significant proportion of them were found to be plasmid free. A similar situation applies to a number of serotype Livingstone isolates which were nontypeable by plasmid profiling and/or PFGE. In summary, the serotype of the isolates has a considerable influence in deciding the best typing strategy; a single method cannot be relied upon for discriminating between strains, and a combination of typing methods allows further discrimination.
Abstract:
The UK Government's Department for Energy and Climate Change has been investigating the feasibility of developing a national energy efficiency data framework covering both domestic and non-domestic buildings. Working closely with the Energy Saving Trust and energy suppliers, the aim is to develop a data framework to monitor changes in energy efficiency, develop and evaluate programmes and improve information available to consumers. Key applications of the framework are to understand trends in built stock energy use, identify drivers and evaluate the success of different policies. For energy suppliers, it could identify what energy uses are growing, in which sectors and why. This would help with market segmentation and the design of products. For building professionals, it could supplement energy audits and modelling of end-use consumption with real data and support the generation of accurate and comprehensive benchmarks. This paper critically examines the results of the first phase of work to construct a national energy efficiency data-framework for the domestic sector focusing on two specific issues: (a) drivers of domestic energy consumption in terms of the physical nature of the dwellings and socio-economic characteristics of occupants and (b) the impact of energy efficiency measures on energy consumption.
Abstract:
It is predicted that non-communicable diseases will account for over 73 % of global mortality in 2020. Given that the majority of these deaths occur in developed countries such as the UK, and that up to 80 % of chronic disease could be prevented through improvements in diet and lifestyle, it is imperative that dietary guidelines and disease prevention strategies are reviewed in order to improve their efficacy. Since the completion of the human genome project our understanding of complex interactions between environmental factors such as diet and genes has progressed considerably, as has the potential to individualise diets using dietary, phenotypic and genotypic data. Thus, there is an ambition for dietary interventions to move away from population-based guidance towards 'personalised nutrition'. The present paper reviews current evidence for the public acceptance of genetic testing and personalised nutrition in disease prevention. Health and clear consumer benefits have been identified as key motivators in the uptake of genetic testing, with individuals reporting personal experience of disease, such as those with specific symptoms, being more willing to undergo genetic testing for the purpose of personalised nutrition. This greater perceived susceptibility to disease may also improve motivation to change behaviour which is a key barrier in the success of any nutrition intervention. Several consumer concerns have been identified in the literature which should be addressed before the introduction of a nutrigenomic-based personalised nutrition service. Future research should focus on the efficacy and implementation of nutrigenomic-based personalised nutrition.
Abstract:
The external environment is characterized by periods of relative stability interspersed with periods of extreme change, implying that high performing firms must practice exploration and exploitation in order to survive and thrive. In this paper, we posit that R&D expenditure volatility indicates the presence of proactive R&D management, and is evidence of a firm moving from exploitation to exploration over time. This is consistent with a punctuated equilibrium model of R&D investment where shocks are induced by reactions to external turbulence. Using an unbalanced panel of almost 11,000 firm-years from 1997 to 2006, we show that greater fluctuations in the firm's R&D expenditure over time are associated with higher firm growth. Developing a contextual view of the relationship between R&D expenditure volatility and firm growth, we find that this relationship is weaker among firms with higher levels of corporate diversification and negative among smaller firms and those in slow clockspeed industries.
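The abstract does not state the exact volatility measure; one plausible operationalisation, offered here purely as a sketch, is the within-firm standard deviation of R&D expenditure across the panel years:

```python
import statistics
from collections import defaultdict

def rd_volatility(panel):
    """Within-firm standard deviation of R&D expenditure across years.

    `panel` is a list of (firm_id, year, rd_expenditure) tuples; the
    variable names and values are illustrative, not the study's dataset.
    """
    by_firm = defaultdict(list)
    for firm, _year, rd in panel:
        by_firm[firm].append(rd)
    # At least two observations are needed for a sample standard deviation
    return {firm: statistics.stdev(vals)
            for firm, vals in by_firm.items() if len(vals) >= 2}

panel = [("A", 1997, 10.0), ("A", 1998, 14.0), ("A", 1999, 12.0),
         ("B", 1997, 10.0), ("B", 1998, 10.0), ("B", 1999, 10.0)]
print(rd_volatility(panel))  # firm B shows zero volatility
```

The resulting per-firm volatility could then serve as a regressor against firm growth, moderated by diversification, size, and industry clockspeed as described above.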
Abstract:
This paper extends the resource-based view (RBV) of the firm, as applied to multinational enterprises (MNEs), by distinguishing between two critical resource dimensions, namely relative resource superiority (capabilities) and slack. Both dimensions, in concert with specific environmental conditions, are required to increase entrepreneurial activities. We propose distinct configurations (three-way moderation effects) of capabilities, slack, and environmental factors (i.e. dynamism and hostility) to explain entrepreneurship. Using survey data from 66 Canadian subsidiaries operating in China, we find that higher subsidiary entrepreneurship requires both HR slack and strong downstream capabilities in subsidiaries, subject to the industry environment being dynamic and benign. However, high HR slack alone, in a dynamic and benign environment, but without the presence of strong capabilities, actually triggers the fewest initiatives, with HR slack redirected from entrepreneurial experimentation towards complacency and inefficiency. This paper has major implications for MNEs seeking to increase subsidiary entrepreneurship in fast growing emerging markets.
Abstract:
Purpose – The development of marketing strategies optimally adjusted to export markets has been a vitally important topic for both managers and academics for about five decades. However, there is no agreement in the literature about which elements make up marketing strategy and which components of domestic strategies should be adapted to export markets. The purpose of this paper is to develop a new scale – STRATADAPT. Design/methodology/approach – Results from a sample of small and medium-sized industrial exporting firms support a four-dimensional scale – product, promotion, price, and distribution strategies – of 30 items. The scale presents evidence of composite reliability as well as discriminant and nomological validity. Findings – Findings reveal that all four dimensions of marketing strategy adaptation are positively associated with the amount of the firm's financial resources allocated to export activity. Practical implications – The STRATADAPT scale may assist managers in developing better international marketing strategies as well as in planning more accurate and efficient marketing programs across markets. Originality/value – This study develops a new scale, the STRATADAPT scale, which is a broad measure of export marketing strategy adaptation.
Abstract:
Construction professional service (CPS) firms sell expertise and provide innovative solutions for projects founded on their knowledge, experience, and technical competences. Large CPS firms seeking to grow will often seek new opportunities in their domestic market and overseas by organic or inorganic growth through mergers, alliances, and acquisitions. Growth can also come from increasing market penetration through vertical, horizontal, and lateral diversification. Such growth, hopefully, leads to economies of scope and scale in the long term, but it can also lead to diseconomies, when the added cost of integration and the increased complexity of diversification no longer create tangible and intangible benefits. The aim of this research is to investigate the key influences impacting on the growth in scope and scale for large CPS firms. Qualitative data from interviews were underpinned by secondary data from CPS firms’ annual reports and analysts’ findings. The findings showed five key influences on the scope and scale of a CPS firm: the importance of growth as a driver; the influence of the ownership of the firm on the decision for growth in scope and scale; the optimization of resources and capabilities; the need to serve changing clients’ needs; and the importance of localization. The research provides valuable insights into the growth strategies of international CPS firms. A major finding of the research is the influence of ownership on CPS firms’ growth strategies, which has not been highlighted in previous research.
Abstract:
Mergers of Higher Education Institutions (HEIs) are organisational processes requiring a tremendous amount of resources in terms of time, work, and money. A number of mergers have taken place in previous years and more are to come. Several studies on mergers have been conducted, revealing crucial factors that affect the success of mergers. Based on a literature review of these studies, the factors are: the initiator of the merger, the reason for the merger, the geographical distance between the merging institutions, organisational culture, the extent of overlap in course portfolios, and Quality Assurance Systems (QASs). Usually these kinds of factors are not considered in mergers; the focus is instead on financial matters. In this paper, a framework (HMEF) for evaluating the merging of HEIs is introduced. HMEF is based on Enterprise Architecture (EA) and focuses on the factors found to affect the success of mergers. By using HMEF, HEIs can focus on the matters that are crucial for merging.
Abstract:
Enterprise Architecture (EA) has been recognised as an important tool in modern business management for closing the gap between strategy and its execution. The current literature implies that for EA to be successful, it should have clearly defined goals. However, the goals of different stakeholders are found to be different, even contradictory. In our exploratory research, we seek answers to the questions: What kinds of goals are set for EA implementation? How do the goals evolve over time? Do the goals differ among stakeholders? How do they affect the success of EA? We analysed an EA pilot conducted among eleven Finnish Higher Education Institutions (HEIs) in 2011. The goals of the pilot were gathered at three different stages: before, during, and after the pilot, by means of a project plan, interviews during the pilot, and a questionnaire after the pilot. The data were analysed using qualitative and quantitative methods. Eight distinct goals were identified through coding: Adopt EA Method, Build Information Systems, Business Development, Improve Reporting, Process Improvement, Quality Assurance, Reduce Complexity, and Understand the Big Picture. The success of the pilot was analysed statistically using a scale of 1-5. Results revealed that goals set before the pilot were very different from those mentioned during or after the pilot. Goals before the pilot were mostly related to expected benefits from the pilot, whereas the most important result was to adopt the EA method. These results can be explained by the possibly differing roles of the respondents, which in turn were most likely caused by poor communication. Interestingly, goals mentioned by different stakeholders were not limited to their traditional areas of responsibility. For example, in some cases Chief Information Officers' goals were Quality Assurance and Process Improvement, whereas managers’ goals were Build Information Systems and Adopt EA Method.
This could be the result of a good understanding of the meaning of EA, or of stakeholders not regarding EA as their concern at all. It is also interesting to note that, regardless of the different perceptions of goals among stakeholders, all HEIs felt the pilot to be successful. Thus the research does not provide support for a link between clear goals and success.
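The before/during/after comparison of goals described above can be sketched as a simple set comparison; the stage names follow the study design, while the goal lists below are illustrative only:

```python
def goal_shift(before, during, after):
    """Compare the sets of EA-pilot goals mentioned at each stage.

    Returns goals mentioned before the pilot that never recur, and goals
    that only emerge during or after the pilot. The stage structure mirrors
    the study; the concrete goal lists are hypothetical examples.
    """
    later = set(during) | set(after)
    return {"dropped": set(before) - later,
            "emergent": later - set(before)}

shift = goal_shift(
    before=["Business Development", "Process Improvement"],
    during=["Adopt EA Method", "Process Improvement"],
    after=["Adopt EA Method", "Understand the Big Picture"],
)
print(shift)
```

A large "dropped"/"emergent" overlap, as in the pilot, signals that early expectations diverged from what participants later considered the main result.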
Abstract:
Interest in Enterprise Architecture (EA) has been increasing during the last few years. EA has been found to be a crucial aspect of business survival, and the success of EA implementation is therefore also crucial. The current literature offers no tool for measuring the success of EA implementation. In this paper, a tentative model for measuring success is presented and empirically validated in the EA context. Results show that the success of EA implementation can be measured indirectly by measuring the achievement of the objectives set for the implementation. Results also imply that achieving individuals' objectives does not necessarily mean that the organisation's objectives are achieved. The presented Success Measurement Model can be used as a basis for developing measurement metrics.
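As a minimal sketch of indirect measurement via objective achievement (the scoring scheme, weights, and objective names are assumptions for illustration, not the paper's actual metric):

```python
def implementation_success(objectives):
    """Indirect EA implementation success score.

    `objectives` maps an objective name to a tuple (achievement, weight),
    where achievement is in [0, 1] and weight reflects stated importance.
    Both the schema and the weighting are illustrative assumptions; the
    score is the weighted mean achievement of the objectives set for the
    implementation.
    """
    total_weight = sum(w for _, w in objectives.values())
    return sum(a * w for a, w in objectives.values()) / total_weight

# Hypothetical organisation-level objectives for one EA implementation
org = {"Adopt EA Method": (0.8, 3), "Reduce Complexity": (0.5, 1)}
print(round(implementation_success(org), 2))
```

Scoring individuals' and the organisation's objective sets separately would expose the mismatch the abstract notes: a high individual score need not imply a high organisational one.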