Abstract:
The modern consumer treats food safety as a non-negotiable issue – the consumer simply demands that food be safe. Yet, at the same time, the modern consumer expects food safety to be the responsibility of others – the primary producer, the processing company, the supermarket, commercial food handlers and so on. Given this environment, all food animal industries have little choice but to regard food safety as a key issue. As an example, the chicken meat industry, via its two main funding bodies – the Rural Industries Research and Development Corporation (Chicken Meat) and the Poultry CRC – has a comprehensive research program focused on reducing the risks of food-borne disease at all points of the food chain, from the farm to the processing plant. The scale of the issue for all industries can be illustrated by an analysis of campylobacteriosis, a major food-borne disease. It has been estimated that there are around 230,000 cases of campylobacteriosis per year. In 1995, each case of food-borne campylobacteriosis in the USA was estimated to cost between US$350 and US$580. A reasonably conservative estimate is therefore that each Australian case in 2010 would cost around $500 (including hospital, medication and lost productivity costs). This single food-borne agent could thus be costing Australian society around $115 million annually. In the light of these estimated costs for just one food-borne pathogen, it is easy to understand the importance that all food animal industries place on food safety.
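For readers who want to reproduce the headline figure, a minimal back-of-envelope check using only the numbers quoted above is sketched below; it is an arithmetic illustration, not part of the original analysis.

```python
# Back-of-envelope check of the cost estimate quoted in the abstract.
cases_per_year = 230_000      # estimated campylobacteriosis cases per year
cost_per_case_aud = 500       # conservative per-case cost (hospital, medication, lost productivity)

annual_cost = cases_per_year * cost_per_case_aud
print(f"Estimated annual cost: ${annual_cost / 1e6:.0f} million")  # ~$115 million
```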
Abstract:
B. cereus is one of the most frequently occurring bacteria in foods. It produces several heat-labile enterotoxins and one stable non-protein toxin, cereulide (emetic), which may be pre-formed in food. Cereulide is a heat-stable peptide whose structure and mechanism of action were elucidated in the past decade. Until this work, cereulide was detected by biological assays. With my mentors, I developed the first quantitative chemical assay for cereulide. The assay is based on liquid chromatography (HPLC) combined with ion trap mass spectrometry, with calibration against valinomycin and purified cereulide. Valinomycin and cereulide were detected and quantitated via their [NH4+] adducts, m/z 1128.9 and m/z 1171 respectively. This was a breakthrough in cereulide research and became a very powerful investigative tool. It made it possible to prove, for the first time, that the toxin produced by B. cereus in heat-treated food caused human illness. Until this thesis work (Paper II), cereulide-producing B. cereus strains were believed to represent a homogeneous group of clonal strains. The cereulide-producing strains investigated in those studies originated mostly from food poisoning incidents. We used strains of many origins and analyzed them using a polyphasic approach. We found that cereulide-producing B. cereus strains are genetically and biologically more diverse than assumed in earlier studies. The strains diverge in the adenylate kinase (adk) gene (two sequence types), in ribopatterns obtained with EcoRI and PvuII (three patterns), and in tyrosine decomposition, haemolysis and lecithin hydrolysis (two phenotypes). Our study was the first demonstration of diversity within the cereulide-producing strains of B. cereus. To manage the risk of cereulide production in food, an understanding is needed of the factors that may upregulate cereulide production in a given food matrix and of the environmental factors affecting it. As a contribution in this direction, we adjusted the growth environment and measured cereulide production by strains selected for diversity. For most of the producer strains, the temperature range over which cereulide was produced was narrower than that for growth. Most strains produced the most cereulide at room temperature (20–23 °C). Exceptions were two faecal isolates, which produced the same amount of cereulide from 23 °C up to 39 °C. We also found that at 37 °C the growth media favouring cereulide production differed from those at room temperature. Food composition and temperature may thus be keys to understanding cereulide production in foods as well as in the gut. We investigated the [K+], [Na+] and amino acid contents of six growth media. Statistical evaluation indicated a significant positive correlation between the [K+]:[Na+] ratio and cereulide production, but only when the concentrations of glycine and [Na+] were constant. Of the amino acids, only glycine correlated positively with high cereulide production. Glycine is used worldwide as a food additive (E 640), flavor modifier, humectant and acidity regulator, and is permitted in European Union countries, with no regulatory quantitative limitation, in most types of foods. Members of the B. subtilis group are endospore-forming bacteria ubiquitous in the environment, similar to B. cereus in this respect. Bacillus species other than B. cereus have only sporadically been identified as causative agents of food-borne illness.
We found (Paper IV) that food-borne isolates of B. subtilis and B. mojavensis produced amylosin. It is possible that amylosin was the agent responsible for the food-borne illness, since no other toxic substance was found in the strains. This is the first report of amylosin production by strains isolated from food. We found that the temperature requirement for amylosin production was higher for the B. subtilis strain F 2564/96, a mesophilic producer, than for the B. mojavensis strains eela 2293 and B 31, psychrotolerant producers. We also found that a low-oxygen atmosphere did not prevent the production of amylosin. Ready-to-eat foods packaged in a micro-aerophilic atmosphere and/or stored at temperatures above 10 °C may thus pose a risk when toxigenic strains of B. subtilis or B. mojavensis are present.
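The quantitation step of an external-calibration LC-MS assay of the kind described above can be sketched as follows; the peak areas, concentrations and the helper `quantify` are illustrative placeholders, not the published method or its data.

```python
# Minimal sketch of quantitation by external calibration in an LC-MS assay,
# of the kind described in the abstract (hypothetical peak areas and amounts;
# the actual assay was calibrated with valinomycin and purified cereulide).
import numpy as np

# Calibration standards: amount injected (ng) vs. extracted-ion peak area
amount_std = np.array([1.0, 5.0, 10.0, 50.0, 100.0])
area_std = np.array([2.1e4, 1.0e5, 2.0e5, 1.01e6, 2.02e6])

slope, intercept = np.polyfit(amount_std, area_std, 1)   # linear calibration curve

def quantify(peak_area):
    """Convert an extracted-ion peak area (e.g. the ammonium-adduct trace at m/z 1171) to an amount."""
    return (peak_area - intercept) / slope

print(f"Sample with peak area 5.0e5 -> {quantify(5.0e5):.1f} ng cereulide equivalent")
```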
Abstract:
The costs of purchasing new piglets and of feeding them until slaughter are the main variable expenditures in pig fattening. Both depend on slaughter intensity, the feeding pattern and the technological constraints of pig fattening, such as genotype. It is therefore of interest to examine the effect of production technology and of changes in input and output prices on feeding and slaughter decisions. This study examines the problem using a dynamic programming model that links the genetic characteristics of a pig to feeding decisions and the timing of slaughter, and takes into account how these jointly affect the quality-adjusted value of a carcass. The model simulates the growth of a pig under alternative feeding and slaughter patterns and then solves the optimal feeding and slaughter decisions recursively. The state of nature and the genotype of the pig are known in the analysis. The main contribution of this study is the dynamic approach that explicitly takes carcass quality into account while simultaneously optimising feeding and slaughter decisions. The method maximises the internal rate of return to the capacity unit. Hence, the results can have a vital impact on the competitiveness of pig production, which is known to be quite capital-intensive. The results suggest that the producer can benefit significantly from improvements in the pig's genotype, because they improve the efficiency of pig production. The annual benefit of obtaining pigs of improved genotype can be more than €20 per capacity unit. The annual net benefits of animal breeding to pig farms can also be considerable. Animals of improved genotype reach optimal slaughter maturity more quickly and produce leaner meat than animals of poor genotype. To fully utilise the benefits of animal breeding, the producer must adjust feeding and slaughter patterns according to genotype. The results also suggest that the producer can benefit from flexible feeding technology, which segregates pigs into groups according to their weight, carcass leanness, genotype and sex, and then optimises feeding and slaughter decisions separately for each group. Typically, such a technology provides incentives to feed piglets with protein-rich feed so that the genetic potential to produce leaner meat is fully utilised. As the pig approaches slaughter maturity, the share of protein-rich feed in the diet gradually decreases and the amount of energy-rich feed increases. Generally, the optimal slaughter weight is within the weight range that pays the highest price per kilogram of pig meat. The optimal feeding pattern and the optimal timing of slaughter depend on price ratios. In particular, an increase in the price of pig meat provides incentives to increase growth rates up to the pig's biological maximum by increasing the amount of energy in the feed. Price changes and changes in the slaughter premium can also have large income effects. Key words: barley, carcass composition, dynamic programming, feeding, genotypes, lean, pig fattening, precision agriculture, productivity, slaughter weight, soybeans
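A minimal sketch of the kind of backward recursion such a dynamic programming model performs is given below; the growth response, price band, feed costs and horizon are invented placeholders for illustration, not the thesis's calibrated model.

```python
# Toy finite-horizon dynamic program: each week, choose a feed intensity or slaughter,
# maximising the value of the carcass net of remaining feed costs. All numbers are hypothetical.
WEEKS = 20                       # planning horizon (weeks in the fattening unit)
FEED_OPTIONS = [0.8, 1.0, 1.2]   # relative feed intensities (hypothetical)

def weekly_gain(weight, feed):
    # Hypothetical growth response: gain tapers off as the pig approaches ~140 kg live weight.
    return feed * max(0.0, 7.0 * (1.0 - weight / 140.0))

def carcass_value(weight):
    # Hypothetical quality-adjusted price: premium inside a target carcass-weight band.
    carcass = 0.75 * weight
    price = 1.45 if 76.0 <= carcass <= 92.0 else 1.30
    return price * carcass

def feed_cost(feed):
    # Hypothetical weekly feed cost per pig.
    return 2.5 * feed

MEMO = {}

def optimal_value(week, weight):
    """Best achievable value from this state under an optimal feeding/slaughter policy."""
    weight = round(weight, 1)            # coarse state grid keeps the memo small
    key = (week, weight)
    if key in MEMO:
        return MEMO[key]
    value_if_slaughtered = carcass_value(weight)
    if week == WEEKS:                    # end of horizon: the pig must be slaughtered
        best = value_if_slaughtered
    else:
        value_if_continued = max(
            -feed_cost(f) + optimal_value(week + 1, weight + weekly_gain(weight, f))
            for f in FEED_OPTIONS
        )
        best = max(value_if_slaughtered, value_if_continued)
    MEMO[key] = best
    return best

print(f"Optimal-policy value for a 30 kg piglet: {optimal_value(0, 30.0):.1f} (arbitrary units)")
```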
Abstract:
The Queensland strawberry (Fragaria ×ananassa) breeding program in subtropical Australia aims to improve sustainable profitability for the producer. Selection must account for the relative economic importance of each trait and the genetic architecture underlying these traits in the breeding population. Our study used estimates of the influence of a trait on production costs and profitability to develop a profitability index (PI) and an economic weight (i.e., the change in PI for a unit change in the level of the trait) for each trait. The economic weights were then combined with the breeding values for 12 plant and fruit traits on over 3000 genotypes that were represented either in the current breeding population or as progenitors in the pedigree of these individuals. The resulting linear combination (i.e., the sum of economic weight × breeding value over all 12 traits) estimated the overall economic worth of each genotype as H, the aggregate economic genotype. H values were validated by comparisons among commercial cultivars and were also compared with estimated gross margins. When the H value of ‘Festival’ was set to zero, the H values of genotypes in the pedigree ranged from –0.36 to +0.28. H was highly correlated (R2 = 0.77) with the year of selection (1945–98). The gross margins were highly linearly related (R2 > 0.98) to H values when the genotype was planted on less than 50% of the available area, but the relationship was non-linear [quadratic with a maximum (R2 > 0.96)] when the planted area exceeded 50%. Additionally, for H values above zero, the variation in gross margin increased with increasing H values as the percentage of area planted to a genotype increased. High correlations among some traits allowed the omission of any one of three of the 12 traits with little or no effect on ranking (Spearman’s rank correlation 0.98 or greater). Thus, these traits may be dropped from the aggregate economic genotype, leading either to cost reductions in the breeding program or to increased selection intensities for the same resources. H was efficient in identifying economically superior genotypes for breeding and deployment, but because of the non-linear relationship with gross margin, calculation of a gross margin for genotypes with high H is also necessary when cultivars are deployed across more than 50% of the available area.
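The aggregate economic genotype is simply a weighted sum of breeding values; a minimal sketch follows, with invented trait names, weights and breeding values (rather than the 12 traits actually used in the program).

```python
# Sketch of the aggregate economic genotype H described in the abstract:
# H = sum over traits of (economic weight x breeding value).
# Trait names, weights and breeding values below are illustrative only.
economic_weights = {"yield": 0.40, "fruit_size": 0.15, "firmness": 0.10, "flavour": 0.20}

genotypes = {
    "GenotypeA": {"yield": 1.2, "fruit_size": 0.3, "firmness": -0.1, "flavour": 0.5},
    "GenotypeB": {"yield": 0.8, "fruit_size": 0.6, "firmness": 0.4, "flavour": -0.2},
}

def aggregate_economic_genotype(breeding_values, weights):
    """Linear combination of economic weights and breeding values."""
    return sum(weights[t] * breeding_values[t] for t in weights)

for name, bvs in genotypes.items():
    print(name, round(aggregate_economic_genotype(bvs, economic_weights), 3))
```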
Abstract:
Manuscript: "Das fatale Loch in der Berliner Theatergeschichte". Speech exploring the burdens placed on scholarship and the students of the Theatrical Institute of the Free University of Berlin by the presence of professors who were compromised by their activities during the Nazi era.
Abstract:
This report provides a systematic review of the most economically damaging endemic diseases and conditions for the Australian red meat industry (cattle, sheep and goats). A number of cattle, sheep and goat diseases were identified and prioritised according to their prevalence, distribution, risk factors and mitigation. The economic cost of each disease, arising from production losses, prevention costs and treatment costs, is estimated at the herd and flock level and then extrapolated to a national basis using herd/flock demographics from the 2010-11 Agricultural Census conducted by the Australian Bureau of Statistics. Information shortfalls and recommendations for further research are also specified. A total of 17 cattle, 23 sheep and nine goat diseases were prioritised based on feedback received from producer, government and industry surveys, followed by discussions between the consultants and MLA. Assumptions about disease distribution, in-herd/flock prevalence, impacts on mortality and production, and costs of prevention and treatment were obtained from the literature where available. Where these data were not available, the consultants used their own expertise to estimate the relevant measures for each disease. Levels of confidence in the assumptions for each disease were estimated, and gaps in knowledge identified. The assumptions were analysed using a specialised Excel model that estimated the per-animal, herd/flock and national costs of each important disease. The report was peer reviewed and workshopped by the consultants and experts selected by MLA before being finalised. Consequently, this report is an important resource that will guide and prioritise future research, development and extension activities by a variety of stakeholders in the red meat industry. This report completes Phase I and Phase II of an overall four-phase project initiative by MLA, with the data gaps identified in this report potentially being addressed within the later phases. Modelling the economic costs using a consistent approach for each disease ensures that the derived estimates are transparent and can be refined if improved data on prevalence become available. This means that the report will be an enduring resource for developing policies and strategies for the management of endemic diseases within the Australian red meat industry.
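The per-animal to herd to national cost roll-up described above can be illustrated with a minimal sketch; every number and the single-disease structure below are hypothetical, not values taken from the report's Excel model.

```python
# Illustrative per-animal -> herd -> national cost roll-up for a single disease.
# All parameter values are hypothetical placeholders.
prevalence = 0.05                  # proportion of animals affected within a herd
production_loss_per_case = 40.0    # $ per affected animal
treatment_cost_per_case = 12.0     # $ per affected animal
prevention_cost_per_head = 1.50    # $ per animal across the whole herd

herd_size = 300
national_population = 25_000_000   # hypothetical national herd

cost_per_head = (prevalence * (production_loss_per_case + treatment_cost_per_case)
                 + prevention_cost_per_head)
herd_cost = cost_per_head * herd_size
national_cost = cost_per_head * national_population

print(f"Per head: ${cost_per_head:.2f}, per herd: ${herd_cost:,.0f}, national: ${national_cost / 1e6:.0f}M")
```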
Abstract:
This doctoral thesis addresses the macroeconomic effects of real shocks in open economies under flexible exchange rate regimes. The first study of this thesis analyses the welfare effects of fiscal policy in a small open economy, where private and government consumption are substitutes in terms of private utility. The main findings are as follows: fiscal policy raises output, bringing it closer to its efficient level, but is not welfare-improving even though government spending directly affects private utility. The main reason for this is that the introduction of useful government spending implies a larger crowding-out effect on private consumption, when compared with the 'pure waste' case. Utility decreases since one unit of government consumption yields less utility than one unit of private consumption. The second study of this thesis analyses the question of how the macroeconomic effects of fiscal policy in a small open economy depend on optimal intertemporal behaviour. The key result is that the effects of fiscal policy depend on the size of the elasticity of substitution between traded and nontraded goods. In particular, the sign of the current account response to fiscal policy depends on the interplay between the intertemporal elasticity of aggregate consumption and the elasticity of substitution between traded and nontraded goods. The third study analyses the consequences of productive government spending on the international transmission of fiscal policy. A standard result in the New Open Economy Macroeconomics literature is that a fiscal shock depreciates the exchange rate. I demonstrate that the response of the exchange rate depends on the productivity of government spending. If productivity is sufficiently high, a fiscal shock appreciates the exchange rate. It is also shown that the introduction of productive government spending increases both domestic and foreign welfare, when compared with the case where government spending is wasted. The fourth study analyses the question of how the international transmission of technology shocks depends on the specification of nominal rigidities. A growing body of empirical evidence suggests that a positive technology shock leads to a temporary decline in employment. In this study, I demonstrate that the open economy dimension can enhance the ability of sticky price models to account for the evidence. The reasoning is as follows. An improvement in technology appreciates the nominal exchange rate. Under producer-currency pricing, the exchange rate appreciation shifts global demand toward foreign goods and away from domestic goods. This causes a temporary decline in domestic employment.
Abstract:
This thesis studies empirically whether measurement errors in aggregate production statistics affect sentiment and future output. Initial announcements of aggregate production are subject to measurement error, because many of the data required to compile the statistics are produced with a lag. This measurement error can be gauged as the difference between the latest revised statistic and its initial announcement. Assuming aggregate production statistics help forecast future aggregate production, these measurement errors are expected to affect macroeconomic forecasts. Assuming agents’ macroeconomic forecasts affect their production choices, these measurement errors should affect future output through sentiment. This thesis is primarily empirical, so the theoretical basis, strategic complementarity, is discussed only briefly. In brief, it is a model in which higher aggregate production increases each agent’s incentive to produce. In this circumstance a statistical announcement which suggests aggregate production is high would increase each agent’s incentive to produce, thus resulting in higher aggregate production. In this way the existence of strategic complementarity provides the theoretical basis for output fluctuations caused by measurement mistakes in aggregate production statistics. Previous empirical studies suggest that measurement errors in gross national product affect future aggregate production in the United States. Additionally, it has been demonstrated that measurement errors in the Index of Leading Indicators affect forecasts by professional economists as well as future industrial production in the United States. This thesis aims to verify the applicability of these findings to other countries and to study the link between measurement errors in gross domestic product, sentiment and future output. Professional forecasts and consumer sentiment in the United States and Finland, as well as producer sentiment in Finland, are used as the measures of sentiment. Using statistical techniques it is found that measurement errors in gross domestic product affect forecasts and producer sentiment; the effect on consumer sentiment is ambiguous. The relationship between measurement errors and future output is explored using data from Finland, the United States, the United Kingdom, New Zealand and Sweden. It is found that measurement errors have affected aggregate production or investment in Finland, the United States, the United Kingdom and Sweden. Specifically, overly optimistic statistical announcements are associated with higher output and vice versa.
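The construction of the measurement error, and its link to sentiment, can be sketched as follows; the series and the simple least-squares fit are synthetic placeholders, not the thesis's data or econometric specification.

```python
# Sketch of the measurement-error construction described in the abstract:
# error = latest revised figure - initial announcement, later related to sentiment.
# All series below are synthetic placeholders.
import numpy as np

initial_announcement = np.array([1.2, 0.8, 2.1, -0.5, 1.6])   # initial GDP growth releases, %
latest_revision = np.array([1.5, 0.6, 2.4, -0.2, 1.4])        # latest revised figures, %
sentiment_change = np.array([0.4, -0.3, 0.5, 0.2, -0.1])      # later change in a sentiment index

measurement_error = latest_revision - initial_announcement

# Simple least-squares fit of sentiment change on the measurement error
slope, intercept = np.polyfit(measurement_error, sentiment_change, 1)
print(f"Estimated effect of a 1 pp measurement error on sentiment: {slope:.2f}")
```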
Abstract:
This study examines Finnish economic growth. The key driver of economic growth was productivity, and the major engine of productivity growth was technology, especially the general purpose technologies (GPTs) electricity and ICT. A new GPT builds on previous knowledge, yet often in an uncertain, punctuated fashion. Economic history, as well as the Finnish data analyzed in this study, teaches that growth is not a smooth process but is subject to episodes of sharp acceleration and deceleration associated with the arrival, diffusion and exhaustion of new general purpose technologies. These are technologies that affect the whole economy by transforming both household life and the ways in which firms conduct business. The findings of previous research, that Finnish economic growth exhibited late industrialisation and significant structural changes, were corroborated by this study. Yet it was not solely a story of manufacturing, and structural change was more an effect of economic growth than its cause. We offered an empirical resolution to the Artto-Pohjola paradox by showing that a high rate of return on capital was combined with low capital productivity growth. This result is important for understanding Finnish economic growth in 1975-90. The main contribution of this thesis was the growth accounting results on the impact of ICT on growth and productivity, as well as the comparison of electricity and ICT. It was shown that ICT's contribution to GDP growth was almost twice as large as electricity's contribution over comparable periods of time. Finland has thus been far more successful as a producer of ICT than as a producer of electricity. Unfortunately, the results for the use of ICT were still more modest than those for electricity. Towards the end of the period considered in this thesis, Finland switched from resource-based to ICT-based growth. However, given the large dependency on the ICT-producing sector, the ongoing outsourcing of ICT production to low-wage countries poses a threat to future productivity performance. For a developed country only change is constant, and history teaches us that Finland will likely be obliged to reorganize its economy once again in the digital era.
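A growth accounting decomposition of the standard form underlying such results can be sketched as follows; the factor shares and growth rates are illustrative numbers, not the thesis's Finnish estimates.

```python
# Sketch of a standard growth-accounting decomposition of the kind used to compare
# the contributions of ICT and electricity to GDP growth. Numbers are illustrative.
labour_share = 0.65
capital_share_ict = 0.05           # hypothetical income share of ICT capital
capital_share_other = 0.30

growth_labour = 0.005              # annual growth rates (log differences), hypothetical
growth_ict_capital = 0.15
growth_other_capital = 0.02
tfp_growth = 0.010

gdp_growth = (labour_share * growth_labour
              + capital_share_ict * growth_ict_capital
              + capital_share_other * growth_other_capital
              + tfp_growth)

ict_contribution = capital_share_ict * growth_ict_capital
print(f"GDP growth: {gdp_growth:.3%}, of which ICT capital contributes {ict_contribution:.3%}")
```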
Abstract:
Emerging embedded applications are based on evolving standards (e.g., MPEG2/4, H.264/265, IEEE 802.11a/b/g/n). Since most of these applications run on handheld devices, there is an increasing need for a single-chip solution that can dynamically interoperate between different standards and their derivatives. In order to achieve high resource utilization and low power dissipation, we propose REDEFINE, a polymorphic ASIC in which specialized hardware units are replaced with basic hardware units that can create the same functionality by runtime re-composition. It is a "future-proof" custom hardware solution for multiple applications and their derivatives in a domain. In this article, we describe a compiler framework and supporting hardware comprising compute, storage, and communication resources. Applications described in a high-level language (e.g., C) are compiled into application substructures. For each application substructure, a set of compute elements (CEs) on the hardware is interconnected at runtime to form a pattern that closely matches the communication pattern of that particular application. The advantage is that the composed CEs are neither processor cores nor logic elements as in FPGAs. Hence, REDEFINE offers the power and performance advantage of an ASIC and the hardware reconfigurability and programmability of an FPGA or instruction-set processor. In addition, the hardware supports custom instruction pipelining. Existing instruction-set extensible processors determine a sequence of instructions that repeatedly occurs within the application to create custom instructions at design time to speed up the execution of this sequence. We extend this scheme further: a kernel is compiled into custom instructions that bear a strong producer-consumer relationship (and are not limited to frequently occurring sequences of instructions). Custom instructions, realized as hardware compositions effected at runtime, allow several instances of the same custom instruction to be active in parallel. A key distinguishing factor in the majority of emerging embedded applications is stream processing. To reduce the overheads of data transfer between custom instructions, direct communication paths are employed among them. In this article, we present an overview of the hardware-aware compiler framework, which determines the NoC-aware schedule of transports of the data exchanged between the custom instructions on the interconnect. The results for the FFT kernel indicate a 25% reduction in the number of loads/stores, and throughput improves by log(n) for an n-point FFT when compared to a sequential implementation. Overall, REDEFINE offers flexibility and runtime reconfigurability at the expense of 1.16x in power and 8x in area when compared to an ASIC. The REDEFINE implementation consumes 0.1x the power of an FPGA implementation. In addition, the configuration overhead of the FPGA implementation is 1,000x that of REDEFINE.
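The idea of composing custom instructions from operations with a strong producer-consumer relationship can be illustrated with a toy grouping pass; the dataflow graph, the greedy chain heuristic and the size limit below are assumptions for illustration, not the REDEFINE compiler's actual algorithm.

```python
# Toy sketch: group operations of a small kernel into candidate custom instructions
# by merging single-producer/single-consumer chains. Purely illustrative.
from collections import defaultdict

# Dataflow graph: op -> list of ops that consume its result
produces_for = {
    "load_a": ["mul1"], "load_b": ["mul1"], "mul1": ["add1"],
    "load_c": ["add1"], "add1": ["store1"], "store1": [],
}

MAX_OPS_PER_CUSTOM_INSTRUCTION = 4   # assumed hardware composition limit

def group_chains(graph, max_size):
    """Greedily merge producer-consumer chains into candidate custom-instruction groups."""
    consumer_count = defaultdict(int)
    for op, outs in graph.items():
        for dst in outs:
            consumer_count[dst] += 1

    groups, visited = [], set()
    for op in graph:
        if op in visited:
            continue
        chain, current = [op], op
        visited.add(op)
        # Extend while the current op has exactly one consumer that has exactly one producer
        while (len(chain) < max_size and len(graph[current]) == 1
               and consumer_count[graph[current][0]] == 1
               and graph[current][0] not in visited):
            current = graph[current][0]
            chain.append(current)
            visited.add(current)
        groups.append(chain)
    return groups

print(group_chains(produces_for, MAX_OPS_PER_CUSTOM_INSTRUCTION))
```

In a real flow the surviving groups would then be mapped onto interconnected compute elements, with direct paths carrying the intermediate values between them.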
Abstract:
Ma Ma Ma Mad is an autobiographical work, written and performed by Singaporean-Australian theatre maker Merlynn Tong. This production, presented at the Brisbane Powerhouse in December 2015, was a multi-genre work incorporating aspects of Butoh, physical theatre, cabaret and contemporary monologue. More than an experiment in mixed performative forms, however, this particular production was also an exercise in inter-cultural collaboration as well as gender in (and of) performance. While the work was heavily influenced by the creator's experiences growing up in urban Southeast Asia, the director's specialisation in contemporary Australian theatre and experience in telling uniquely Australian stories shaped the form so that it spoke succinctly to local audiences, without pandering to entrenched stereotypes or diluting the underlying Chinese-Singaporean themes. The success of this production was also something of a personal challenge for the creatives, after they were told by some of Brisbane's most influential theatre venues and festivals that they would rather not support the work because a) it was a one-woman show, and b) it was a one-woman show about an Asian woman, and therefore would not sell well. One very influential local producer even said that he already had a one-woman show about an Asian person programmed, so he couldn't possibly program another. Operating in such a biased and out-of-touch artistic environment was taken up as a challenge by the artists involved, and the result was a highly successful and critically acclaimed sell-out run of Ma Ma Ma Mad, followed by offers to tour the work nationally and internationally. As such, this production also stands as a practical example of the ingrained and patriarchal structures of the Australian arts scene, and of how art can work to break down the very barriers that it has helped to construct through a lack of vision and diversity amongst its leaders.
Abstract:
In this paper, a new strategy for scaling burners based on "mild combustion" is evolved and adopted to scale a burner from 3 kW to 150 kW at a high heat release rate of 5 MW/m³. Existing scaling methods (constant velocity, constant residence time, and Cole's procedure [Proc. Combust. Inst. 28 (2000) 1297]) are found to be inadequate for mild combustion burners. The constant-velocity approach leads to reduced heat release rates at large sizes, and the constant-residence-time approach to unacceptable levels of pressure drop across the system. To achieve mild combustion at high heat release rates at all scales, a modified approach with high recirculation is adopted in the present studies. Major geometrical dimensions are scaled as D ~ Q^(1/3) with an air injection velocity of ~100 m/s (Δp ~ 600 mm water gauge). Using CFD support, the position of the air injection holes is selected to enhance the recirculation rates. The precise role of the secondary air is to increase the recirculation rates and burn up the residual CO downstream. Measurements of temperature and oxidizer concentrations inside the 3 kW and 150 kW burners and a jet flame are used to distinguish the combustion process in these burners. The burner can be used for a wide range of fuels, from LPG to producer gas as extremes. Up to 8 dB of noise reduction is observed in comparison with the conventional combustion mode. Exhaust NO emissions below 26 and 3 ppm and temperatures of 1710 and 1520 K were measured for LPG and producer gas, respectively, when the burner is operated at stoichiometry. (c) 2004 The Combustion Institute. Published by Elsevier Inc. All rights reserved.
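Applying the quoted scaling rule D ~ Q^(1/3) to the reported 3 kW to 150 kW scale-up gives the linear scale factor directly; the quick check below is illustrative arithmetic, not a figure taken from the paper.

```python
# Quick check of the geometric scaling rule D ~ Q^(1/3) for the 3 kW -> 150 kW scale-up.
Q_small, Q_large = 3.0, 150.0                      # thermal inputs, kW
scale_factor = (Q_large / Q_small) ** (1.0 / 3.0)  # ratio of major linear dimensions
print(f"Linear dimensions scale by about {scale_factor:.2f}x")   # ~3.68x
```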
Abstract:
The study concerns service management, and specifically the action service firms take with regard to customer dissatisfaction, customer complaints and complaining customers in high touch services. Customer dissatisfaction, customer complaints and complaining customers are together called negative incidents in the study. The study fills a research gap in service management studies by investigating negative incidents as part of an open service system. In contrast to mainstream service management studies, which define service quality as the customer, as a consumer, defines it, the present study adopts the concept of interactive service quality. The customer is considered a co-producer of the service who thus has a role to play in service quality and productivity. Additionally, the study juxtaposes the often opposed perspectives of the manager and the customer, as well as the often forgotten silent voices of service employees and supervisors. The study proposes that the service firm as an entity does not act; rather, it is the actors at the different hierarchical layers who act. It is also acknowledged that the actors at the different hierarchical layers have different knowledge of the service system and different objectives for service encounters. Therefore, they interpret negative incidents from different perspectives, and their actions upon negative incidents are subsequently guided by their interpretations. The research question is: how do service firms act upon negative incidents in high touch services? In order to answer the research question, a narrative research approach was chosen. The actors at the different hierarchical layers acted as informants of the study and provided stories about customer dissatisfaction, customer complaining and complaint handling in high touch services. Through storytelling, access was gained to the socially constructed reality of service firms’ action. Stemming from the literature review, the analysis of the empirical data and my theoretical thinking, a theory about service firms’ action upon negative incidents in high touch services was developed and the research question was answered. The study contributes to service recovery and complaint management studies as well as to studies on customer orientation and its implementation in service firms. Additionally, the study makes a methodological contribution to service management studies, since it reflects service firms’ action through narratives from multiple perspectives. The study is positioned in the tradition of the Nordic School of Marketing Thought and presents service firms’ action upon negative incidents in high touch services as a complex human-centered phenomenon in which the actors at the different hierarchical layers have crucial roles to play. Ritva Höykinpuro is associated with CERS, the Centre for Relationship Marketing and Service Management at Hanken School of Economics.
Abstract:
The aim of the current study is to examine the influence of the channel’s external environment on power, and the effect of power on distribution network structure, within the People’s Republic of China. Throughout the study a dual research process was applied. The theory was constructed by elaborating the main theoretical premises of the study – channel power theories, the political economy framework and distribution network structure – and expanding these marketing channel concepts with perspectives from other disciplines. The main method applied was a survey conducted among 164 Chinese retailers, complemented by interviews, photographs, observations and census data from the field. This multi-method approach made it possible not only to validate and triangulate the quantitative results, but also to uncover serendipitous findings. The theoretical contribution of the current study to marketing channel power theory is the different view it takes on power. First, earlier power studies have taken the producer perspective, whereas the current study also includes the distributor perspective in the discussion. Second, many power studies have dealt with strongly dependent relationships, whereas the current study examines loosely dependent relationships. Power is dependent on the unequal distribution of resources rather than based on high dependency. The benefit of this view is in realising that power resources and power strategies are separate concepts. The empirical material of the current study confirmed that at least some resources were significantly related to power strategies. The study showed that the resource dimension composed of technology, know-how and knowledge, managerial freedom and reputation was significantly related to non-coercive power. Third, the notion of different outcomes of power is a contribution of this study to channel power theory, even though it was not confirmed by the empirical results. Fourth, it was proposed that elements of the channel’s external environment other than resources would also contribute to channel power. These propositions were only partially supported, thus providing a partial contribution to channel power theory. Finally, power was equally distributed among the different types of actors. The findings from the qualitative data suggest that different types of retailers can be classified according to the meaning the actors attach to their business. Some are more business oriented; for others, retailing is the only way to earn a living. The findings also suggest that in some actors both retailing and wholesaling functions emerge, and this has implications for the marketing channel structure.
Abstract:
This paper extends current discussions about value creation and proposes a customer-dominant value perspective. A customer-dominant marketing logic positions the customer in the center, rather than the service provider/producer, the interaction or the system. The focus is shifted from the company's service processes involving the customer to the customer's multi-contextual value formation involving the company. It is argued that value is not always an active process of creation; instead, value is embedded and formed in the highly dynamic and multi-contextual reality and life of the customer. This leads to a need to look beyond the current line of visibility, where visible customer-company interactions are the focus, to the invisible and mental life of the customer. From this follows a need to extend the temporal scope from exchange and use even further to accumulated experiences in the customer's life. The aim of this paper is to explore value formation from a customer-dominant logic perspective. This is done in three steps: first, value formation is contrasted with earlier views on the company's role in value creation by using a broad, ontologically driven framework discussing what, how, when, where and who. Next, implications of the proposed characteristics of value formation compared to earlier approaches are put forward. Finally, some tentative suggestions of how this perspective would affect marketing in service companies are presented. As value formation in a CDL perspective has a different focus and scope than earlier views on value, it leads to questions about the customer that reveal previously hidden aspects of the role of a service for the customer. This insight might be used in service development and innovation.