943 results for pre-industrial Europe
Abstract:
Views on industrial service have conceptually progressed from the output of the provider's production process to the result of an interaction process in which the customer is also involved. Although there are attempts to be customer-oriented, especially when the focus is on solutions, an industrial company's offering combining goods and services is inherently seller-oriented. There is, however, a need to go beyond the current literature and company practices. We propose that what is needed is a genuinely customer-based parallel concept to the offering, one that takes the customer's view, and we put forward a new concept labelled customer needing. A needing is based on the customer's mental model of their business and strategies, which affects priorities, decisions, and actions. A needing can be modelled as a configuration of three dimensions containing six functions that create realised value for the customer. These dimensions and functions can be used to describe needings, which represent starting points for sellers' creation of successful offerings. When offerings match needings over time, the seller should have the potential to form and sustain successful buyer relationships.
Abstract:
There is an urgent interest within marketing in moving away from neo-classical value definitions suggesting that value creation is a process of exchanging goods for money. In the present paper, value creation is conceptualized as an integration of two distinct, yet closely coupled processes. First, actors co-create what this paper calls an underlying basis of value. This is done by interactively re-configuring resources. By relating and combining resources, activity sets, and risks across actor boundaries in novel ways, actors create joint productivity gains, a concept very similar to density (Normann, 2001). Second, actors engage in a process of signification and evaluation. Signification implies co-constructing the meaning and worth of the joint productivity gains co-created through interactive resource re-configuration, as well as sharing those gains through a pricing mechanism as value to the involved actors. The conceptual framework highlights an all-important dynamic associated with 'value creation' and 'value', a dynamic the paper claims has eluded past marketing research. The paper argues that the framework presented here is appropriate for the interactive service perspective, where value and value creation are not objectively given but depend on the power of involved actors' socially constructed frames to mobilize resources across actor boundaries in ways that 'enhance system well-being' (Vargo et al., 2008). The paper contributes to research on Service Logic, Service-Dominant Logic, and Service Science.
Abstract:
The strain and temperature sensitivities of a type I Bragg grating inscribed in a germania-doped silica fiber, fabricated under normal conditions and zero strain, are compared with those of a Bragg grating inscribed under a pre-strained condition. The results obtained reveal that the strain and temperature sensitivities of the two gratings are different. Based on these results, we demonstrate a technique which enables discrimination of strain and temperature in a Fiber Bragg Grating (FBG) with a linear response. The present technique allows for easy implementation of the sensor by providing direct access to the grating region in the fiber and demands only a simple interrogation system.
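As a hedged illustration of how such discrimination is conventionally done (the sensitivity coefficients below are generic calibration constants, not values reported in the abstract): with two gratings of different sensitivities, the two Bragg wavelength shifts define a 2x2 linear system that can be inverted for strain and temperature.

```latex
% Generic dual-grating discrimination scheme (illustrative only).
\begin{pmatrix} \Delta\lambda_{B1} \\ \Delta\lambda_{B2} \end{pmatrix}
=
\begin{pmatrix} K_{\varepsilon 1} & K_{T1} \\ K_{\varepsilon 2} & K_{T2} \end{pmatrix}
\begin{pmatrix} \Delta\varepsilon \\ \Delta T \end{pmatrix}
\;\Longrightarrow\;
\begin{pmatrix} \Delta\varepsilon \\ \Delta T \end{pmatrix}
=
\frac{1}{K_{\varepsilon 1} K_{T2} - K_{\varepsilon 2} K_{T1}}
\begin{pmatrix} K_{T2} & -K_{T1} \\ -K_{\varepsilon 2} & K_{\varepsilon 1} \end{pmatrix}
\begin{pmatrix} \Delta\lambda_{B1} \\ \Delta\lambda_{B2} \end{pmatrix}
```

The inversion is well conditioned only when the determinant is clearly non-zero, i.e. when the two gratings' strain-to-temperature sensitivity ratios differ appreciably, which is what inscribing one grating under pre-strain is intended to achieve.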
Abstract:
Summary: Model-based assessment of the risks posed to aquatic organisms by the industrial handling of chemicals.
Abstract:
This master's thesis studies how trade liberalization affects firm-level productivity and industrial evolution. To do so, I build a dynamic model that treats firm-level productivity as endogenous in order to investigate the influence of trade on firms' productivity and the market structure. In this framework, heterogeneous firms in the same industry operate differently in equilibrium. Specifically, firms are ex ante identical, but heterogeneity arises as an equilibrium outcome. Under the setting of monopolistic competition, this type of model yields an industry that is represented not by a steady-state outcome but by an evolution that relies on the decisions made by individual firms. I prove that trade liberalization has a generally positive impact on technology adoption rates and hence increases firm-level productivity. In addition, this endogenous technology adoption model captures the stylized fact that exporting firms are larger and more productive than their non-exporting counterparts in the same sector. I assume that the number of firms is endogenous, since, according to the empirical literature, industrial evolution shows considerably different patterns across countries: some industries experience large-scale exit of firms in periods of contracting market share, while others display a relatively stable or gradually increasing number of firms. The term "shakeout" is used to describe such a dramatic decrease in the number of firms. In order to explain the causes of shakeouts, I construct a model in which forward-looking firms decide to enter and exit the market on the basis of their state of technology. In equilibrium, firms choose different dates to adopt the innovation, which generates a gradual diffusion process. It is exactly this gradual diffusion process that generates the rapid, large-scale exit phenomenon. Specifically, the model demonstrates that there is a positive feedback between firms' exit and adoption: the reduction in the number of firms increases the incentives for the remaining firms to adopt the innovation. Therefore, under complete information, this model not only generates a shakeout but also captures the stability of an industry. However, a solely national view of industrial evolution neglects the importance of international trade in determining the shape of market structure. In particular, I show that higher trade barriers lead to more fragile markets, encouraging over-entry in the initial stage of the industry life cycle and raising the probability of a shakeout. Therefore, more liberalized trade generates a more stable market structure from both national and international viewpoints. The main references are Ederington and McCalman (2008, 2009).
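As a purely illustrative sketch, not the thesis model (whose firms are forward-looking), the exit-adoption feedback described above can be caricatured in a few lines: demand is split among active firms, adopters produce at a lower marginal cost, loss-making non-adopters exit, and a larger per-firm demand share makes adoption worthwhile for the survivors. All parameter values are hypothetical.

```python
# Toy caricature of the gradual-diffusion / shakeout mechanism described above.
# All numbers are hypothetical; firms here follow simple myopic rules.

D = 1000.0                 # total market demand per period
c_old, c_new = 8.0, 5.0    # marginal cost without / with the innovation
F = 100.0                  # per-period fixed cost of the new technology
n, adopters = 20, 2        # firms in the industry, firms that have adopted

for t in range(1, 16):
    q = D / n                           # demand is split equally among firms
    price = 10.0 - 3.0 * adopters / n   # diffusion intensifies competition
    profit_old = q * (price - c_old)
    profit_new = q * (price - c_new) - F

    non_adopters = n - adopters
    if profit_old < 0:
        non_adopters = 0                # shakeout: loss-making laggards exit together
    elif profit_new > profit_old:
        adopters += 1                   # gradual diffusion: one more firm adopts
        non_adopters -= 1
    n = adopters + non_adopters         # fewer firms -> larger q -> stronger
                                        # adoption incentive for the survivors
    print(f"t={t:2d}  firms={n:2d}  adopters={adopters:2d}  price={price:.2f}")
```

Running it shows a long plateau during which adoption diffuses gradually, followed by the simultaneous exit of the remaining non-adopters, i.e. the shakeout, after which the industry is stable.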
Abstract:
Pre-eclampsia is a pregnancy complication that affects about 5% of all pregnancies. It is known to be associated with alterations in angiogenesis-related factors, such as vascular endothelial growth factor (VEGF). An excess of anti-angiogenic substances, especially the soluble receptor-1 of VEGF (sVEGFR-1), has been observed in the maternal circulation after the onset of the disease, probably reflecting their increased placental production. Smoking reduces circulating concentrations of sVEGFR-1 in non-pregnant women, and in pregnant women it reduces the risk of pre-eclampsia. Soluble VEGFR-1 acts as a natural antagonist of VEGF and placental growth factor (PlGF) in the human circulation, holding promise for potential therapeutic use. In fact, it has been used as a model to generate a fusion protein, VEGF Trap, which has been found effective in the anti-angiogenic treatment of certain tumors and ocular diseases. In the present study, we evaluated the potential use of maternal serum sVEGFR-1, Angiopoietin-2 (Ang-2) and endostatin, three central anti-angiogenic markers, in the early prediction of subsequent pre-eclampsia. We also studied whether smoking affects circulating sVEGFR-1 concentrations in pregnant women or their first-trimester placental secretion and expression in vitro. Last, in order to allow future discussion of potential therapy based on sVEGFR-1, we determined the biological half-life of endogenous sVEGFR-1 in the human circulation and measured the concomitant changes in free VEGF concentrations. Blood or placental samples were collected from a total of 268 pregnant women between 2001 and 2007 at Helsinki University Central Hospital for the purposes above. The biomarkers were measured using commercially available enzyme-linked immunosorbent assays (ELISA). For the analyses of sVEGFR-1, Ang-2 and endostatin, a total of 3,240 pregnant women in the Helsinki area were recruited for blood sample collection during two routine ultrasound screening visits at 13.7 ± 0.5 (mean ± SD) and 19.2 ± 0.6 weeks of gestation. Of them, 49 women who later developed pre-eclampsia were included in the study. Their disease was further classified as mild in 29 and severe in 20 patients. Isolated early-onset intrauterine growth retardation (IUGR) was diagnosed in 16 women with otherwise normal medical histories and uncomplicated pregnancies. Fifty-nine women who remained normotensive and non-proteinuric and finally gave birth to normal-weight infants were selected to serve as the control population of the study. Maternal serum concentrations of Ang-2, endostatin and sVEGFR-1 were increased already at 16–20 weeks of pregnancy, about 13 weeks before the clinical manifestation of pre-eclampsia. In addition, these biomarkers could be used to identify women at risk with moderate precision. However, larger patient series are needed to determine whether these markers could be applied in clinical use to predict pre-eclampsia. Intrauterine growth retardation, especially if noted at early stages of pregnancy and not secondary to any other pregnancy complication, has been suggested to be a form of pre-eclampsia compromising only placental sufficiency and the fetus, but not affecting the maternal endothelium. In fact, IUGR and pre-eclampsia have been proposed to share a common vascular etiology in which factors regulating early placental angiogenesis are likely to play a central role. Thus, these factors have been suggested to be involved in the pathogenesis of IUGR.
However, circulating sVEGFR-1, Ang-2 and endostatin concentrations at the early second trimester were unaffected by subsequent IUGR. Furthermore, smoking was not associated with alterations in maternal circulating sVEGFR-1 or its placental production. The elimination of endogenous sVEGFR-1 after pregnancy was calculated from serial samples of eight pregnant women undergoing elective Caesarean section. As is typical for proteins in human compartments, the elimination of sVEGFR-1 was biphasic, comprising a rapid half-life of 3.4 h and a slow one of 29 h. The decline in sVEGFR-1 concentrations after mid-trimester legal termination of pregnancy was accompanied by a simultaneous increase in the serum levels of free VEGF, so that within a few days after the end of pregnancy VEGF dominated in the maternal circulation. Our study provides novel information on the kinetics of endogenous sVEGFR-1, which serves as a potential tool in the development of new strategies against diseases associated with angiogenic imbalance and alterations in VEGF signaling.
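The reported biphasic elimination can be summarised, as a hedged sketch only (the abstract does not state the fitted model, and the amplitudes A and B are not reported), by the conventional biexponential decay curve:

```latex
% Two-phase elimination with the half-lives quoted in the abstract;
% A and B are unreported amplitudes of the fast and slow phases.
C(t) = A\, e^{-\,t \ln 2 / 3.4\,\mathrm{h}} \;+\; B\, e^{-\,t \ln 2 / 29\,\mathrm{h}}
```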
Abstract:
This article analyses the results of five Eurobarometer surveys (of 1995, 1997, 1998, 2000 and 2005) designed to measure which languages Europeans consider most useful to know. Most Europeans are of the opinion that English is the most useful, followed by French and German. During the last decade the popularity of French and German as useful languages has decreased significantly, while English has remained universally favoured as the most useful language. French and German have lost their popularity especially among those who do not speak them as a foreign language. On the other hand, Spanish, Russian and other languages (often languages of neighbouring countries, minority languages or a second official language of the country in question) have kept or even increased their former level of popularity. Opinions about useful languages vary according to a respondent's knowledge of languages, education and profession. This article analyses these differences and discusses their impact on the study of foreign languages and on the future use of foreign languages in Europe.
Abstract:
For the past two centuries, nationalism has been among the most influential legitimizing principles of political organization. According to its simple definition, nationalism is a principle or a way of thinking and acting which holds that the world is divided into nations and that national and political units should be congruent. Nationalism can thus be divided into two aspects: internal and external. Internally, political units, i.e. states, should be made up of only one nation. Externally, each nation-state should be sovereign. Transnational governance of the rights of national minorities violates both of these principles. This study explores the formation, operation, and effectiveness of the European post-Cold War minorities system. The study identifies two basic approaches to minority rights: security and justice. These approaches have been used to legitimize international minority politics, and they also inform the practice of transnational governance. The security approach is based on the recognition that the norm of national self-determination cannot be fulfilled in all relevant cases, and so minority rights are offered as compensation to dissatisfied national groups, reducing their aspiration to challenge the status quo. From the justice perspective, minority rights are justified as a compensatory strategy against discrimination caused by majority nation-building. The research concludes that the post-Cold War minorities system was justified on the basis of a particular version of the security approach, according to which only Eastern European minority situations are threatening because of the ethnic variant of nationalism that exists in that region. This security frame was essential in internationalising minority issues and justifying the swift development of norms and institutions to deal with these issues. From the justice perspective, however, this approach is problematic, since it justified double standards in European minority politics. Even though majority nation-building is often detrimental to minorities in Western Europe as well, Western countries can treat their minorities more or less however they choose. One of the main contributions of this thesis is its detailed investigation of the operation of the post-Cold War minorities system. For the first decade after its creation in the early 1990s, the system operated mainly through its security track, which is based on the field activities of the OSCE supported by the EU. The study shows how the effectiveness of this track was based on inter-organizational cooperation in which various transnational actors compensate for each other's weaknesses. After the enlargement of the EU and the dissolution of membership conditionality, this track, which was limited to Eastern Europe from the start, has become increasingly ineffective. Since the EU enlargement, the focus of the minorities system has shifted more and more towards its legal track, which is based on the Framework Convention for the Protection of National Minorities (Council of Europe). The study presents in detail how a network of like-minded representatives of governments, international organizations, and independent experts was able to strengthen the Framework Convention's (originally weak) monitoring system considerably.
The development of the legal track allows for a more universal and consistent, justice-based approach to minority rights in contemporary Europe, but the nationalist principle of organization still severely hinders the materialization of this possibility.
Abstract:
Multilevel converters have been under research and development for more than three decades and have found successful industrial application. However, this is still a technology under development, and many new contributions and new commercial topologies have been reported in the last few years. The aim of this paper is to group and review these recent contributions in order to establish the current state of the art and trends of the technology, and to provide readers with a comprehensive and insightful review of where multilevel converter technology stands and where it is heading. The paper first presents a brief overview of well-established multilevel converters, strongly oriented to their current state in industrial applications, and then centers the discussion on the new converters that have made their way into industry. In addition, new promising topologies are discussed. Recent advances in the modulation and control of multilevel converters are also addressed. A great part of this paper is devoted to showing nontraditional applications powered by multilevel converters and how multilevel converters are becoming an enabling technology in many industrial sectors. Finally, some future trends and challenges in the further development of this technology are discussed to motivate future contributions that address open problems and explore new possibilities.
Abstract:
In this study, the potential allowable cut in the district of Pohjois-Savo, based on the non-industrial private forest (NIPF) landowners' choices of timber management strategies, was determined. Alternative timber management strategies were generated, and the landowners' choices of timber management strategies, together with the factors affecting those choices, were studied. The choices of timber management strategies were solved by maximizing the utility functions of the NIPF landowners. The parameters of the utility functions were estimated using the Analytic Hierarchy Process (AHP). The level of the potential allowable cut was compared to the cutting budgets based on the 7th and 8th National Forest Inventories (NFI7 and NFI8), to the combination of private forestry plans, and to the realized drain from non-industrial private forests. The potential allowable cut was calculated using the same MELA system as has been used in the calculation of the national cutting budget. The data consisted of NIPF holdings (from the TASO planning system) that had been inventoried compartmentwise and for which forestry plans had been made during the years 1984-1992. The NIPF landowners' choices of timber management strategies were elicited by a two-phase mail inquiry. The most preferred strategy was "sustainability" (chosen by 62% of the landowners). The second in order of preference was "finance" (17%) and the third was "saving" (11%). "No cuttings" and "maximum cuttings" were the least preferred (9% and 1%, respectively). The factors promoting the choice of strategies with intensive cuttings were a) "farmer as forest owner" and "owning fields", b) "increase in the size of the forest holding", c) agriculture and forestry orientation in production, d) "decreasing short-term stumpage earning expectations", e) "increasing intensity of future cuttings", and f) "choice of forest taxation system based on site productivity". The potential allowable cut defined in the study was 20% higher than the average realized drain during the years 1988-1993, which, in turn, was at the same level as the cutting budget based on the combination of forestry plans in eastern Finland. Respectively, the potential allowable cut defined in the study was 12% lower than the NFI8-based greatest sustained allowable cut for the 1990s. Using the method presented in this study, timber management strategies can be elicited from non-industrial private forest landowners in different parts of Finland. Based on the choices of timber management strategies, regular cutting budgets can be calculated more realistically than before.
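As an illustrative, hedged sketch of the AHP step (the comparison matrix, criterion names and scores below are invented, not the study's data, and the study may have estimated the utility parameters differently), priority weights can be derived from the principal eigenvector of a pairwise comparison matrix and then used to score candidate timber management strategies:

```python
# Minimal AHP sketch: derive criterion weights from a pairwise comparison
# matrix and rank timber management strategies by weighted (additive) utility.
# The matrix and scores below are hypothetical illustrations only.
import numpy as np

# Pairwise comparisons of three hypothetical criteria
# (e.g. income, sustainability, amenity) on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                       # normalised priority weights

# Hypothetical criterion scores (rows = strategies, columns = criteria).
strategies = ["no cuttings", "saving", "sustainability", "finance", "maximum cuttings"]
S = np.array([[0.05, 0.30, 0.40],
              [0.10, 0.30, 0.30],
              [0.25, 0.25, 0.20],
              [0.35, 0.10, 0.05],
              [0.25, 0.05, 0.05]])

utility = S @ w                       # additive utility for each strategy
best = strategies[int(np.argmax(utility))]
print(dict(zip(strategies, utility.round(3))), "->", best)
```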
Abstract:
The factors affecting the strategic decisions of non-industrial private forest (NIPF) landowners in management planning are studied. A genetic algorithm is used to induce a set of rules predicting the potential cut implied by the landowners' choices of preferred timber management strategies. The rules are based on variables describing the characteristics of the landowners and their forest holdings. The predictive ability of the genetic algorithm is compared to that of linear regression analysis using identical data sets. The data are cross-validated seven times, applying both genetic algorithm and regression analyses, in order to examine the data sensitivity and robustness of the generated models. The optimal rule set derived from the genetic algorithm analyses included the following variables: mean initial volume, landowner's positive price expectations for the next eight years, landowner being classified as a farmer, and preference for the recreational use of the forest property. When tested with previously unseen test data, the optimal rule set resulted in a relative root mean square error of 0.40. In the regression analyses, the optimal regression equation consisted of the following variables: mean initial volume, proportion of forestry income, intention to cut extensively in the future, and positive price expectations for the next two years. The R2 of the optimal regression equation was 0.34, and the relative root mean square error obtained from the test data was 0.38. In both models, mean initial volume and positive stumpage price expectations entered as significant predictors of the potential cut of the preferred timber management strategy. When tested with the complete data set of 201 observations, both the optimal rule set and the optimal regression model achieved the same level of accuracy.
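The model comparison can be made concrete with a small, hedged sketch of seven-fold cross-validation and a relative root mean square error; the genetic-algorithm rule induction itself is not reproduced here, the synthetic data stand in for the unavailable landowner data, and the relative RMSE below (RMSE divided by the mean of the observed values) is one common definition that the study may or may not have used.

```python
# Seven-fold cross-validation of an ordinary least-squares model, with the
# relative RMSE as the accuracy measure. Data are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n = 201                                    # same sample size as in the study
X = rng.normal(size=(n, 4))                # e.g. volume, income share, expectations...
y = 3.0 + X @ np.array([1.5, 0.8, 0.5, 0.3]) + rng.normal(scale=1.0, size=n)

def rel_rmse(y_true, y_pred):
    """Root mean square error relative to the mean of the observed values."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2)) / np.mean(y_true)

folds = np.array_split(rng.permutation(n), 7)
errors = []
for test_idx in folds:
    train_idx = np.setdiff1d(np.arange(n), test_idx)
    Xtr = np.column_stack([np.ones(len(train_idx)), X[train_idx]])
    Xte = np.column_stack([np.ones(len(test_idx)), X[test_idx]])
    beta, *_ = np.linalg.lstsq(Xtr, y[train_idx], rcond=None)   # OLS fit
    errors.append(rel_rmse(y[test_idx], Xte @ beta))

print(f"relative RMSE per fold: {np.round(errors, 2)}  mean: {np.mean(errors):.2f}")
```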
Abstract:
The methods of secondary wood processing are assumed to evolve over time and to affect the requirements set for the wood material and its suppliers. The study aimed at analysing the industrial operating modes applied by joinery and furniture manufacturers as sawnwood users. Industrial operating mode was defined as a pattern of important decisions and actions taken by a company which describes the company's level of adjustment in the late-industrial transition. A non-probability sample of 127 companies was interviewed, including companies from Denmark, Germany, the Netherlands, and Finland. Fifty-two of the firms were furniture manufacturers and the other 75 produced windows and doors. Variables related to business philosophy, production operations, and supplier choice criteria were measured and used as a basis for a customer typology; variables related to wood usage and perceived sawmill performance were measured to profile the customer types. Factor analysis was used to determine the latent dimensions of the industrial operating mode. Canonical correlation analysis was applied in developing the final basis for classifying the observations. Non-hierarchical cluster analysis was employed to build a five-group typology of secondary wood processing firms; these ranged from traditional mass producers to late-industrial flexible manufacturers. There is a clear connection between the amount of late-industrial elements in a company and the share of special and customised sawnwood it uses. Those joinery or furniture manufacturers that are more late-industrial are also likely to use more component-type wood material and to appreciate customer-oriented technical precision. The results show that the change is towards the use of late-industrial sawnwood materials and late-industrial supplier relationships.
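A hedged sketch of the quantitative pipeline (latent-dimension extraction followed by a non-hierarchical five-group clustering) might look as follows; the number of survey items is hypothetical, k-means stands in as a common non-hierarchical method, and the study's canonical correlation step is omitted here.

```python
# Sketch of the typology-building pipeline: extract latent dimensions of the
# industrial operating mode, then cluster the firms into five groups.
# Data are synthetic stand-ins for the interview variables.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
firms = rng.normal(size=(127, 12))        # 127 firms, 12 hypothetical survey items

Z = StandardScaler().fit_transform(firms)                     # standardise items
scores = FactorAnalysis(n_components=3, random_state=0).fit_transform(Z)

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(labels))                # firms per customer type (five groups)
```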
Abstract:
To enhance the utilization of the wood, sawmills are forced to place more emphasis on planning in order to master the whole production chain from the forest to the end product. One significant obstacle to integrating the forest-sawmill-market production chain is the lack of appropriate information about forest stands. Since the wood procurement point of view has been almost totally disregarded in forest planning systems, there has been a great need to develop an easy and efficient pre-harvest measurement method allowing separate measurement of stands prior to harvesting. The main purpose of this study was to develop a measurement method for pine stands which forest managers could use to describe the properties of the standing trees for sawing production planning. Study materials were collected from ten Scots pine (Pinus sylvestris) stands located in North Häme and South Pohjanmaa, in southern Finland. The data comprise test sawing data on 314 pine stems, dbh and height measures of all trees, measures of the quality parameters of pine sawlog stems in all ten study stands, and the locations of all trees in six stands. The study was divided into four sub-studies which deal with pine quality prediction, construction of diameter and dead branch height distributions, sampling designs, and the application of height and crown height models. The final proposal for the pre-harvest measurement method is a synthesis of the individual sub-studies. The quality analysis resulted in choosing dbh, the distance from stump height to the first dead branch (dead branch height), crown height and tree height as the most appropriate quality characteristics of Scots pine. Dbh and dead branch height are measured for each pine sample tree, while height and crown height are derived from the dbh measures with the aid of mixed height and crown height models. The pine and spruce diameter distributions as well as the dead branch height distribution are most effectively predicted by the kernel function. Roughly 25 sample trees appear to be sufficient in pure pine stands. In mixed stands the number of sample trees needs to be increased in proportion to the share of pine in order to attain the same level of accuracy.
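As a hedged illustration of the distribution-prediction step (a Gaussian kernel with scipy's default bandwidth rather than whatever kernel and bandwidth the study used, and with invented measurements), a stand's diameter distribution can be recovered from roughly 25 sample-tree dbh values like this:

```python
# Kernel estimate of a stand's diameter (dbh) distribution from ~25 sample
# trees, mirroring the sample size suggested for pure pine stands.
# The dbh values are invented; the study's kernel and bandwidth may differ.
import numpy as np
from scipy.stats import gaussian_kde

dbh_cm = np.array([18.5, 21.0, 22.3, 24.1, 25.0, 25.7, 26.4, 27.2, 27.9, 28.5,
                   29.3, 30.0, 30.8, 31.5, 32.1, 33.0, 33.8, 34.6, 35.5, 36.4,
                   37.2, 38.5, 40.1, 42.0, 44.3])

kde = gaussian_kde(dbh_cm)                     # Gaussian kernel, Scott's bandwidth
grid = np.linspace(dbh_cm.min() - 5, dbh_cm.max() + 5, 120)
density = kde(grid)                            # estimated diameter distribution

# Share of trees above a hypothetical 28 cm sawlog threshold:
print(f"P(dbh > 28 cm) = {kde.integrate_box_1d(28.0, np.inf):.2f}")
```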