17 results for Kronecker product and Kronecker sum
in Helda - Digital Repository of University of Helsinki
Abstract:
The first-line medication for mild to moderate Alzheimer's disease (AD) is based on cholinesterase inhibitors, which prolong the effect of the neurotransmitter acetylcholine in cholinergic nerve synapses and thereby relieve the symptoms of the disease. Implications of cholinesterase involvement in disease-modifying processes have increased interest in this research area. Drug discovery and development is a long and expensive process that takes on average 13.5 years and costs approximately 0.9 billion US dollars. Drug attrition in the clinical phases is common for several reasons, e.g., poor bioavailability of compounds leading to low efficacy or toxic effects. Thus, improvements in the early drug discovery process are needed to create highly potent, non-toxic compounds with predicted drug-like properties. Nature has been a good source for the discovery of new medicines, accounting for around half of the new drugs approved for the market during the last three decades. These compounds are direct isolates from nature, their synthetic derivatives or natural mimics. Synthetic chemistry is an alternative way to produce compounds for drug discovery purposes. Both sources have pros and cons. The screening of new bioactive compounds in vitro is based on assaying compound libraries against targets. The assay set-up has to be adapted and validated for each screen to produce high-quality data. Depending on the size of the library, miniaturization and automation are often required to reduce solvent and compound amounts and to speed up the process. In this contribution, natural extract, natural pure compound and synthetic compound libraries were assessed as sources of new bioactive compounds. The libraries were screened primarily for acetylcholinesterase inhibitory effects and secondarily for butyrylcholinesterase inhibitory effects. To be able to screen the libraries, two assays were evaluated as screening tools and adapted to be compatible with the special features of each library. The assays were validated to produce high-quality data. Cholinesterase inhibitors with various potencies and selectivities were found in both the natural product and the synthetic compound libraries, which indicates that the two sources complement each other. It is acknowledged that natural compounds differ structurally from compounds in synthetic compound libraries, which further supports the view of complementarity, especially if high structural diversity is the criterion for selecting compounds for a library.
Abstract:
Fluid bed granulation is a key pharmaceutical process that improves many of the powder properties needed for tablet compression. The fluid bed granulation process comprises dry mixing, wetting and drying phases. High-quality granules can be obtained by understanding and controlling the critical process parameters through timely measurements. Process analytical technologies (PAT) include physical process measurements and particle size data of a fluid bed granulator that are analysed in an integrated manner. Recent regulatory guidelines strongly encourage the pharmaceutical industry to apply scientific and risk management approaches to the development of a product and its manufacturing process. The aim of this study was to utilise PAT tools to increase the process understanding of fluid bed granulation and drying. Inlet air humidity levels and granulation liquid feed affect powder moisture during fluid bed granulation, and moisture influences many process, granule and tablet qualities. The approach in this thesis was to identify sources of variation that are mainly related to moisture. The aim was to determine correlations and relationships, and to utilise the PAT and design space concepts for fluid bed granulation and drying. Monitoring material behaviour in a fluidised bed has traditionally relied on the observational ability and experience of an operator. There has been a lack of good criteria for characterising material behaviour during the spraying and drying phases, even though the entire performance of the process and end-product quality depend on it. The granules were produced in an instrumented bench-scale Glatt WSG5 fluid bed granulator. The effects of inlet air humidity and granulation liquid feed on temperature measurements at different locations of the fluid bed granulator system were determined. This revealed dynamic changes in the measurements and enabled the optimal sites for process control to be identified. The moisture originating from the granulation liquid and the inlet air affected the temperature of the mass and the pressure difference over the granules. Moreover, the effects of inlet air humidity and granulation liquid feed rate on granule size were evaluated, and compensatory techniques were used to optimise particle size. Various drying end-point indication techniques were compared. The ∆T method, which is based on thermodynamic principles, eliminated the effects of humidity variations and resulted in the most precise estimation of the drying end-point. The influence of fluidisation behaviour on drying end-point detection was determined. The feasibility of the ∆T method, and thus the similarity of end-point moisture contents, was found to depend on the variation in fluidisation between manufacturing batches. A novel parameter describing the behaviour of material in a fluid bed was developed. The flow rate of the process air and the turbine fan speed were used to calculate this parameter, which was compared with the fluidisation behaviour and the particle size results. Design space process trajectories for smooth fluidisation were determined based on the fluidisation parameters. With this design space it is possible to avoid excessive fluidisation as well as improper fluidisation and bed collapse. Furthermore, various process phenomena and failure modes were observed with the in-line particle size analyser. Both rapid increases and decreases in granule size could be monitored in a timely manner. The fluidisation parameter and the pressure difference over the filters were also found to reflect particle size once the granules had formed. The various physical parameters evaluated in this thesis give valuable information on fluid bed process performance and increase process understanding.
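The abstract does not define the ∆T calculation, so the following is only a hypothetical sketch: it assumes ∆T is the difference between the inlet air temperature and the granule bed temperature, and that drying is considered finished once ∆T stays below a chosen threshold for a few consecutive measurements. The function name, threshold and example temperatures are all invented for illustration.

```python
# Hypothetical sketch of a ∆T-style drying end-point check (not the thesis
# algorithm). Assumption: ∆T = inlet air temperature - bed temperature, and
# the end-point is reached when ∆T stays below a threshold for `hold` readings.

from typing import Optional, Sequence

def drying_end_point(t_inlet: Sequence[float],
                     t_bed: Sequence[float],
                     threshold: float = 4.0,
                     hold: int = 3) -> Optional[int]:
    """Return the index at which drying is considered complete, or None."""
    below = 0
    for i, (ti, tb) in enumerate(zip(t_inlet, t_bed)):
        delta_t = ti - tb              # shrinks as evaporative cooling ceases
        below = below + 1 if delta_t < threshold else 0
        if below >= hold:
            return i
    return None

# Invented example: the bed temperature approaches the inlet temperature
# as the granules dry, so ∆T falls and the end-point is flagged.
inlet = [60.0] * 10
bed = [35.0, 37.0, 40.0, 44.0, 48.0, 52.0, 57.0, 58.0, 58.8, 59.4]
print(drying_end_point(inlet, bed))    # -> 8 (∆T below 4 °C for three readings)
```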
Abstract:
The research reported in this thesis dealt with single crystals of thallium bromide (TlBr) grown for gamma-ray detector applications. The crystals were used to fabricate room-temperature gamma-ray detectors. Routinely produced TlBr detectors are often of poor quality. Therefore, this study concentrated on developing the manufacturing processes for TlBr detectors and methods of characterisation that can be used to optimise TlBr purity and crystal quality. The processes of concern were TlBr raw material purification, crystal growth, annealing and detector fabrication. The study focused on single crystals of TlBr grown from material purified by a hydrothermal recrystallisation method. In addition, hydrothermal conditions for synthesis, recrystallisation, crystal growth and annealing of TlBr crystals were examined. The final manufacturing process presented in this thesis starts with TlBr material purified by the Bridgman method. The material is then hydrothermally recrystallised in pure water. A travelling molten zone (TMZ) method is used for additional purification of the recrystallised product and then for the final crystal growth. Subsequent processing is similar to that described in the literature. In this thesis, the literature on improving the quality of TlBr material and crystals and detector performance is reviewed. Ageing aspects, as well as the influence of different factors (temperature, time, electrode material and so on) on detector stability, are considered and examined. The results of the process development are summarised and discussed. This thesis shows a considerable improvement in the charge carrier properties of a detector due to additional purification by hydrothermal recrystallisation. As an example, a thick (4 mm) TlBr detector produced by the process was fabricated and found to operate successfully in gamma-ray detection, confirming the validity of the proposed purification and technological steps. However, for complete improvement of detector performance, further developments in crystal growth are required. The detector manufacturing process was optimised by characterisation of the material and crystals using methods such as X-ray diffraction (XRD), polarisation microscopy, high-resolution inductively coupled plasma mass spectrometry (HR-ICP-MS), Fourier transform infrared (FTIR) and ultraviolet-visible (UV-Vis) spectroscopy, field emission scanning electron microscopy (FESEM) and energy-dispersive X-ray spectroscopy (EDS), current-voltage (I-V) and capacitance-voltage (C-V) characterisation, and photoconductivity, as well as direct detector examination.
Abstract:
Free and Open Source Software (FOSS) has gained increased interest in the computer software industry, but assessing its quality remains a challenge. FOSS development is frequently carried out by globally distributed development teams, and all stages of development are publicly visible. Several product- and process-level quality factors can be measured using this public data. This thesis presents a theoretical background for software quality and metrics and their application in a FOSS environment. Information available from FOSS projects in three information spaces is presented, and a quality model suitable for use in a FOSS context is constructed. The model includes both process and product quality metrics, and takes into account the tools and working methods commonly used in FOSS projects. A subset of the constructed quality model is applied to three FOSS projects, highlighting both theoretical and practical concerns in implementing automatic metric collection and analysis. The experiment shows that useful quality information can be extracted from the vast amount of data available. In particular, the projects vary in their growth rate, complexity, modularity and team structure.
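The thesis's exact metric definitions are not given in the abstract; the sketch below only illustrates the kind of automatic metric collection described, pulling two simple process-level indicators (commit growth and team size) from a project's git history. The helper names and the choice of metrics are assumptions, not the thesis's quality model.

```python
# Hypothetical sketch: automatic collection of two simple process metrics
# from a local git repository (commit growth and number of distinct
# committers). Metric names and thresholds are illustrative only.

import subprocess
from collections import Counter

def git_log_fields(repo: str) -> list[tuple[str, str]]:
    """Return (author email, YYYY-MM) pairs for every commit in the repo."""
    out = subprocess.run(
        ["git", "-C", repo, "log", "--pretty=format:%ae|%ad", "--date=format:%Y-%m"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [tuple(line.split("|", 1)) for line in out.splitlines() if "|" in line]

def process_metrics(repo: str) -> dict:
    fields = git_log_fields(repo)
    commits_per_month = Counter(month for _, month in fields)
    authors = {email for email, _ in fields}
    months = sorted(commits_per_month)
    growth = (commits_per_month[months[-1]] - commits_per_month[months[0]]
              if len(months) > 1 else 0)
    return {
        "total_commits": len(fields),
        "team_size": len(authors),                       # distinct committers
        "commit_growth_first_to_last_month": growth,     # crude growth signal
    }

if __name__ == "__main__":
    print(process_metrics("."))   # run inside any git checkout
```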
Abstract:
Agriculture is an economic activity that relies heavily on the availability of natural resources. Through its role in food production, agriculture is a major factor affecting public welfare and health, and its indirect contribution to gross domestic product and employment is significant. Agriculture also contributes to numerous ecosystem services through the management of rural areas. However, the environmental impact of agriculture is considerable and reaches far beyond the agroecosystems. The questions related to farming for food production are thus manifold and of great public concern. Improving the environmental performance of agriculture and the sustainability of food production, here termed 'sustainabilizing' food production, calls for the application of a wide range of expert knowledge. This study falls within the field of agro-ecology, with interfaces to food systems and sustainability research, and exploits methods typical of industrial ecology. Research in these fields extends from multidisciplinary to interdisciplinary and transdisciplinary, a holistic approach being the key tenet. The methods of industrial ecology have been applied extensively to explore the interaction between human economic activity and resource use. Specifically, the material flow approach (MFA) has established its position through the application of systematic environmental and economic accounting statistics. However, very few studies have applied MFA specifically to agriculture. In this thesis, the MFA approach was used in such a context in Finland. The focus of this study is the ecological sustainability of primary production. The aim was to explore the possibilities of assessing the ecological sustainability of agriculture using two different approaches. In the first approach, MFA methods from industrial ecology were applied to agriculture, whereas the second is based on food consumption scenarios. The two approaches were used in order to capture some of the impacts of dietary changes and of changes in production mode on the environment. The methods were applied at levels ranging from national to sector and local levels. Through the supply-demand approach, the viewpoint shifted from food production to food consumption. The main data sources were official statistics complemented with published research results and expert appraisals. The MFA approach was used to define the system boundaries, to quantify the material flows and to construct eco-efficiency indicators for agriculture. The results were further elaborated into an input-output model that was used to analyse the food flux in Finland and to determine its relationship to the economy-wide physical and monetary flows. The methods based on food consumption scenarios were applied at regional and local levels to assess the feasibility and environmental impacts of re-localising food production. The approach was also used for the quantification and source allocation of greenhouse gas (GHG) emissions of primary production. The GHG assessment thus provided a means of cross-checking the results obtained with the two different approaches. MFA data, as such or expressed as eco-efficiency indicators, are useful in describing the overall development. However, the data are not sufficiently detailed to identify the hot spots of environmental sustainability. Eco-efficiency indicators should not be used bluntly in environmental assessment: the carrying capacity of nature, the potential exhaustion of non-renewable natural resources and the possible rebound effect also need to be accounted for when striving towards improved eco-efficiency. The input-output model is suitable for nationwide economy analyses, and it shows the distribution of monetary and material flows among the various sectors. Environmental impact can be captured only at a very general level in terms of total material requirement, gaseous emissions, energy consumption and agricultural land use. Improving the environmental performance of food production requires more detailed and more local information. The approach based on food consumption scenarios can be applied at regional or local scales. Based on various diet options, the method assesses the feasibility of re-localising food production and the environmental impacts of such re-localisation in terms of nutrient balances, gaseous emissions, agricultural energy consumption, agricultural land use and diversity of crop cultivation. The approach is applicable anywhere, but the calculation parameters need to be adjusted to comply with the specific circumstances. The food consumption scenario approach thus pays attention to the variability of production circumstances and may provide environmental information that is locally relevant. The approaches based on the input-output model and on food consumption scenarios represent small steps towards more holistic systemic thinking. However, neither one alone nor the two together provide sufficient information for sustainabilizing food production. The environmental performance of food production should be assessed together with the other criteria of sustainable food provisioning. This requires the evaluation and integration of research results from many different disciplines in the context of a specified geographic area. A foodshed area that comprises both the rural hinterlands of food production and the population centres of food consumption is suggested as a suitable areal extent for such research. Finding a balance between the various aspects of sustainability is a matter of optimal trade-offs. The balance cannot be universally determined; the assessment methods and the actual measures depend on what the bottlenecks of sustainability are in the area concerned. These have to be agreed upon among the actors of the area.
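The abstract does not spell out the input-output formulation used; the snippet below is a generic textbook Leontief calculation, shown only to illustrate how an input-output model links final demand to sector outputs and, with material intensities attached, to economy-wide material flows. The two-sector coefficients, demand figures and material intensities are invented.

```python
# Illustrative Leontief input-output calculation (standard textbook form,
# not the thesis's actual model). x = (I - A)^-1 y gives the total output
# each sector must produce to satisfy final demand y, given the technical
# coefficient matrix A; multiplying by material intensities m yields the
# economy-wide material requirement attributable to that demand.

import numpy as np

A = np.array([[0.10, 0.05],    # hypothetical technical coefficients,
              [0.20, 0.15]])   # sectors: [agriculture, food industry]
y = np.array([100.0, 250.0])   # hypothetical final demand (EUR million)
m = np.array([3.0, 0.8])       # hypothetical material intensity (kt per EUR million)

x = np.linalg.solve(np.eye(2) - A, y)   # total output by sector
material_requirement = m @ x            # total material flow induced by y

print("total output by sector:", x.round(1))
print("material requirement (kt):", round(float(material_requirement), 1))
```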
Abstract:
In daily life, rich experiences evolve in every environmental and social interaction. Because experience has a strong impact on how people behave, scholars in different fields are interested in understanding what constitutes an experience. Yet even as interest in conscious experience increases, there is no consensus on how such experience should be studied. Whatever approach is taken, the subjective and psychologically multidimensional nature of experience should be respected. This study endeavours to understand and evaluate conscious experiences. First, I introduce a theoretical approach to psychologically based and content-oriented experience. In the experiential cycle presented here, classical psychology and orienting environmental content are connected. This generic approach is applicable to any human-environment interaction. Here, I apply the approach to entertainment virtual environments (VEs), such as digital games, and develop a framework with the potential for studying experiences in VEs. The development of the methodological framework drew on subjective and objective data from experiences in the Cave Automatic Virtual Environment (CAVE) and with numerous digital games (N=2,414). The final framework consisted of fifteen factor-analytically formed subcomponents of the sense of presence, involvement and flow. Together, these show the multidimensional experiential profile of VEs. The results present general experiential laws of VEs and show that the interface of a VE is related to (physical) presence, which psychologically means attention, perception and the cognitively evaluated realness and spatiality of the VE. The narrative of the VE elicits (social) presence and involvement and affects emotional outcomes; psychologically, these outcomes are related to social cognition, motivation and emotion. The mechanics of a VE affect the cognitive evaluations and emotional outcomes related to flow. In addition, at the very least, user background, prior experience and use context affect the experiential variation. VEs are part of many people's lives, and many different outcomes are related to them, such as enjoyment, learning and addiction, depending on who is making the evaluation. This makes VEs societally important and psychologically fruitful to study. The approach and framework presented here contribute to our understanding of experiences in general and of VEs in particular. The research can provide VE developers with a state-of-the-art method (www.eveqgp.fi) that can be utilised whenever new product and service concepts are designed, prototyped and tested.
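As a purely illustrative sketch of the factor-analytic step mentioned above (forming experience subcomponents from questionnaire items), the snippet below fits a factor analysis to stand-in Likert-type data; the item counts, factor count and random data are assumptions, not the thesis material.

```python
# Hypothetical sketch of the kind of factor-analytic step described above:
# reducing questionnaire items to a small number of latent experience
# components. The data here are random stand-ins, not the thesis data.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 500, 30                 # e.g. 30 Likert-type items
responses = rng.integers(1, 8, size=(n_respondents, n_items)).astype(float)

fa = FactorAnalysis(n_components=3)              # e.g. presence, involvement, flow
scores = fa.fit_transform(responses)             # per-respondent factor scores

# Items loading strongly on a factor would be grouped into its subcomponents.
loadings = fa.components_                        # shape: (3, n_items)
for k, row in enumerate(loadings):
    top_items = np.argsort(-np.abs(row))[:5]
    print(f"factor {k}: strongest items {top_items.tolist()}")
```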
Abstract:
The goal of this research was to establish the necessary conditions under which individuals are prepared to commit themselves to quality assurance (QA) work in the organisation of a polytechnic. The conditions were studied using four main concepts: awareness of quality, commitment to the organisation, leadership and work welfare. First, individuals were asked to describe these four concepts. Then, the relationships between the concepts were analysed in order to establish the conditions for an individual's commitment to QA work. The study group comprised the entire personnel of Helsinki Polytechnic, of whom 341 (44.5%) individuals participated. Mixed methods were used as the methodological base, with a questionnaire and interviews as the research methods. The interview data were used to validate the results as well as to complete the analysis. The results of the interviews and analyses were integrated using the concurrent nested design method. In addition, the questionnaire was used to separately analyse impressions and meanings of the awareness of quality and of leadership, because, according to the pre-understanding, impressions of phenomena expressed in terms of reality influence commitment to QA. In addition to descriptive statistics, principal component analysis was used as a descriptive method. For comparisons between groups, one-way analysis of variance and effect size analysis were used. As explanatory methods, forward regression analysis and structural modelling were applied. As a result of the research, it was found that 51% of the variation in commitment to QA was explained by an individual's experience or belief that QA was a method of development, that it was possible to participate in QA, and that the meaning of quality included both product and process qualities. If analysed separately, the other main concepts (commitment to the organisation, leadership and work welfare) played only a small part in explaining an individual's commitment. In the context of this research, a structural path model of the main concepts was built. In the model, the concepts were interconnected by paths created on the basis of a literature review covering the main concepts, as well as the analysis of the empirical material of this thesis. The path model explained 46% of the variation in individuals' commitment to QA. The most important path for achieving commitment stemmed from product and system quality emanating from the new goals of the Polytechnic, moved through the individual's experience that QA is a method for the total development of quality, and ended in a commitment to QA. The second most important path stemmed from the individual's experience of belonging to a supportive work community, moved through the supportive value of the job and through affective commitment to the organisation, and ended in a commitment to QA. The third path stemmed from an individual's experiences of participating in QA, moved through collective system quality and from there through the supportive value of the job to affective commitment to the organisation, and ended in a commitment to QA. The final path in the path model stemmed from leadership by empowerment, moved through collective system quality, the supportive value of the job and affective commitment to the organisation, and, again, ended in a commitment to QA. As a result of the research, it was also found that the individual's functional department was an important factor in explaining the differences between groups. It was therefore concluded that understanding the subcultures within the organisation is important when developing QA. Likewise, learning-teaching paradigms proved to be a differentiating factor: individuals thinking according to the humanistic-constructivist paradigm showed more commitment to QA than technological-rational thinkers. It also emerged that the QA training programme did not increase commitment, as the path model demonstrated that those who participated in the training showed 34% commitment, whereas those who did not showed 55% commitment. In summary, the necessary conditions under which individuals are prepared to commit themselves to QA cannot be treated in a reductionist way. Instead, the conditions must be treated as one totality, with all the main concepts interacting simultaneously. Also, the theoretical framework of quality must include its dynamic aspect, which means the development of the individual's work and learning through auditing. In addition, this dynamism includes reflection on the operating paradigm of the individual as well as of all parts of the organisation. It is important to understand and manage the various ways of thinking and the cultural differences produced by the fragmentation of the organisation. Finally, it seems possible that the path model can be generalised for use in any organisational development project in which the personnel should be committed.
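As a rough illustration of the forward regression step named above (not the thesis's data or model), the sketch below selects predictors of commitment one at a time and reports the explained variance; the predictor set, sample values and selection settings are invented.

```python
# Hypothetical illustration of forward (stepwise) regression: predictors are
# added one at a time according to how much they improve the prediction of
# commitment to QA. All data here are random stand-ins.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(1)
n = 341                                    # respondents, as in the study
predictors = rng.normal(size=(n, 6))       # e.g. quality awareness, leadership, ...
commitment = (0.8 * predictors[:, 0]       # synthetic "true" relationship
              + 0.5 * predictors[:, 2]
              + rng.normal(scale=0.5, size=n))

selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=2, direction="forward")
selector.fit(predictors, commitment)

chosen = selector.get_support()            # boolean mask of selected predictors
model = LinearRegression().fit(predictors[:, chosen], commitment)
print("selected predictors:", np.flatnonzero(chosen).tolist())
print("explained variance R^2:", round(model.score(predictors[:, chosen], commitment), 2))
```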
Abstract:
A sense of community as a resource for developing university teaching and learning. The aim of this doctoral research was to determine how a sense of community can serve as a resource for developing university teaching and learning. The theoretical background is linked to the social sciences, social psychology, university pedagogy and educational sciences. The thesis comprises two separate studies. Study I consisted of an action research project in which a model of cooperatively developing a teaching and learning culture was created and tested. The focus of Study I was the university pedagogy programme of the Faculty of Agriculture and Forestry. The results demonstrated that the theoretical framework and the methods of cooperative learning provide useful tools for developing an academic learning and teaching culture. The approach helps to create a benevolent learning atmosphere. The cooperative learning culture used in the action research project reflected the traditional academic learning culture and also caused a collision between the two cultures. The aim of Study II was to determine how Open University students and Bachelor's degree students experience their teaching-learning environment and the importance of the learning community and peer support to their studies. The results indicated that, with the exception of support from other students, the Open University students experienced their teaching-learning environments on average more positively than the Bachelor's degree students. According to the Open University students, their own motivation and interest were the most important factors enhancing their studying, while the most common factors delaying their studies were their life situation and a lack of time. The sense of community and social relations mainly promoted studying. Open University students felt supported by their teachers, tutors, other students, the working community, family and hobbies. The research demonstrated that the methods that make good use of communal resources are negotiation of shared goals and rules, working in various small groups, emphasis on shared and individual responsibilities, and assessment of both the product and the process of learning. The resources of the academic community can be developed if the members of the community develop, in addition to communal working methods, their communal sensitivity. In other words, they should have an understanding of the social psychological and sociological concepts that they can use for observing communal phenomena.
Abstract:
Motivation plays an important role in academic learning, since learning is regulated by motivation. Further, motivation is centrally manifested in goals. Goals reflect values and regulate individuals' orientation and what they strive for. In spite of the central role of motivation in academic learning, discussion on postgraduate education has somewhat overlooked motivational processes and concentrated on the excellence of performance. The aim of this study was to investigate what kinds of goals PhD students have and how they experience their role in their own scientific community. A further purpose was to study how these goals and experienced roles relate to each other, to the study context, and to possible intentions of quitting studies and prolongation of studies. Furthermore, the aim was to investigate how different postgraduates differ in terms of how they experience their learning environment. The data were collected with the 'From PhD students to academic experts' survey (Pyhältö & Lonka, 2006) from four complementary domains: medicine, arts, psychology and education. The survey consisted of both Likert-scaled items and open-ended questions. The participants were 601 postgraduate students. The goals and the experienced role in the scientific community were analysed by qualitative content analysis. The relations between goals, experienced role and background variables were tested using χ² tests, and the differences between postgraduate groups using one-way analysis of variance (ANOVA). The results indicated that postgraduates' goals varied based on whether they brought up goals related to the product (the outcome of the thesis process), the process (the thesis process as a whole) or both the product and the process. Product goals included, for example, career qualification and better status, whereas process goals included, for example, learning and influencing one's own discipline. The experienced roles of the postgraduates differed in terms of whether their conception was organised, unorganised or controversial. Both the goals and the experienced roles were related to the study context and to commitment to the studies. The different postgraduate groups also differed in terms of how they experienced their own learning environment.
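As a minimal illustration of the two tests named above, the snippet below runs a χ² test of independence on an invented goal-type by discipline table and a one-way ANOVA on invented learning-environment scores for three postgraduate groups; none of the numbers come from the survey.

```python
# Illustrative use of the two tests mentioned above (chi-squared test of
# independence and one-way ANOVA) on invented data, not the survey data.

from scipy import stats

# Hypothetical contingency table: goal type (product / process / both)
# cross-tabulated with the four disciplines (columns).
table = [[40, 25, 30, 35],
         [20, 30, 25, 15],
         [60, 55, 45, 50]]
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")

# Hypothetical "experience of the learning environment" scores for three
# postgraduate groups, compared with a one-way ANOVA.
group_a = [3.1, 3.4, 2.9, 3.8, 3.5]
group_b = [2.4, 2.8, 2.6, 3.0, 2.7]
group_c = [3.9, 4.1, 3.6, 4.0, 3.7]
f_stat, p_anova = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_anova:.3f}")
```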
Abstract:
The aim of the study was to find out what kind of view of product quality the dressmaker and the customer have, how their views differ from each other and how the difference affects the dressmaker's work as an entrepreneur. The research data consist of eight thematic interviews: four dressmakers and four customers were interviewed for the study. At the core of customised dressmaking is the relationship between a maker and a client. The product of a dressmaker, a unique dress, is created in immediate interaction between a dressmaker and a client, and the quality of a unique dress also derives from this interaction. In the results of this study, the views on quality are linked with six themes: dress, process, dressmaker, customer, interaction and enterprise. The dressmakers and the customers agree that the quality of a custom-made dress is based on unique fit. When describing the process, the dressmakers emphasise the quality of manufacturing, whereas the clients' view of the process emphasises the phases in which they themselves take part: designing and fitting. The personality of the dressmaker is part of quality from both the dressmakers' and the customers' points of view. The dressmakers and the customers are also aware of the customer's impact on fulfilling the expectations. The immediate interaction between dressmaker and customer is the key to unique dressmaking. At its best, the interaction is followed by a trusting relationship. Trust also derives from a good reputation, which is essential in a dressmaker-entrepreneur's marketing strategy. The dressmakers' views on quality are product- and manufacturing-based. According to the results of the study, different types of dressmakers can be identified, emphasising different aspects of quality. At one end is a manufacturing-based, even transcendent view of quality, which rests on the values of the dressmaker. At the other end lies a customer- and value-based approach, which is founded on fulfilling the expectations and needs of the customer. In their views on quality, the customers emphasise the immediate interaction between dressmaker and client. Keywords: quality, dressmaker, customer, entrepreneur
Abstract:
This study of Scandinavian multinational corporations' subsidiaries in the rapidly growing Eastern European market attempts, in view of their particular organizational structure, to gain new insights into the processes and potential benefits of knowledge and technology transfer. The study explores how to succeed in knowledge transfer and become more competitive, driven by the need to improve the transfer of systematic knowledge for the manufacture of products and the provision of services in a newly entered market. The scope of the research is limited to multinational corporations, defined as enterprises comprising entities in two or more countries, regardless of the legal form and field of activity of those entities, which operate under a system of decision-making permitting coherent policies and a common strategy through one or more decision-making centres. The entities are linked by ownership and able to exercise influence over the activities of the others; in particular, they share knowledge, resources and responsibilities. The research question is "How and to what extent can knowledge transfer influence a company's technological competence and economic competitiveness?", and the study tries to find out what particular forces and factors affect the development of subsidiary competencies, what factors influence the corporate integration and use of the subsidiary's competencies, and what may increase the competitiveness of an MNC pursuing a leading position in the entered market. The empirical part of the research was based on a qualitative analysis of twenty interviews conducted among employees in Scandinavian MNC subsidiary units situated in Ukraine, using a structured sequence of questions with open-ended answers. The data were examined through comparative case analysis against the literature framework. The findings indicate that a technological competence developed in one subsidiary will lead to an integration of that competence with other corporate units within the MNC. Success increasingly depends upon people's learning. The local economic area is crucial for understanding competition and industrial performance, as there seems to be a clear link between the performance of subsidiaries and the conditions prevailing in their environment. The linkage between competitive advantage and a company's success is mutually dependent. Observation suggests that companies can be characterized as clusters of complementary activities such as R&D, administration, marketing, manufacturing and distribution. The study identifies barriers and obstacles in technology and knowledge transfer that are relevant for the subsidiaries' competence development. The accumulated experience can be implemented in a newly entered market with simple procedures and at low cost under specific circumstances, by cloning. The main goal is to support company prosperity, making more profit and sustaining an increased market share through improved product quality and/or reduced production costs in the subsidiaries via the cloning approach. Keywords: multinational corporation; technology transfer; knowledge transfer; subsidiary competence; barriers and obstacles; competitive advantage; Eastern European market
Abstract:
Marketing of goods under geographical names has always been common. Aims to prevent abuse have given rise to separate forms of legal protection for geographical indications (GIs), both nationally and internationally. The European Community (EC) has also gradually enacted its own legal regime to protect geographical indications. The legal protection of GIs has traditionally been based on the idea that geographical origin endows a product with exclusive qualities and characteristics. In today's world we are able to replicate almost any product anywhere, including its qualities and characteristics. One would think that this would preclude protection for most geographical names, yet the number of geographical indications seems to be rising. GIs are no longer what they used to be. In the EC it is no longer required that a product be endowed with exclusive characteristics by its geographical origin, as long as consumers associate the product with a certain geographical origin. This departure from the traditional protection of GIs is based on the premise that a geographical name extends beyond and exists apart from the product and therefore deserves protection in itself. The thesis tries to articulate clearly the underlying reasons, justifications, principles and policies behind the protection of GIs in the EC and then scrutinises the scope and shape of the GI system in the light of its own justifications. The essential questions it attempts to answer are: (1) What are the basis and criteria for granting GI rights? (2) What is the scope of protection afforded to GIs? and (3) Are both of these justified in the light of the functions and policies underlying the granting and protection of GIs? Despite the differences, the actual functions of GIs are in many ways identical to those of trade marks. Geographical indications have a limited role as source and quality indicators in allowing consumers to make informed and efficient choices in the marketplace. In the EC this role is undermined by allowing ample room and discretion for uses that are arbitrary, and generic GIs are in any case unable to play this role. The traditional basis for justifying legal protection seems implausible in most cases. Qualities and characteristics are more likely to be related to transportable skills and manufacturing methods than to the actual geographical location of production. Geographical indications are also incapable of protecting culture from market-induced changes. Protection against genericness, against any misuse, imitation and evocation, as well as against exploitation of the reputation of a GI, seems to exist to protect the GI itself. Expanding or strengthening the already existing GI protection, or using it to protect generic GIs, cannot be justified with arguments on terroir or culture. The conclusion of the writer is that GIs themselves merit protection only in extremely rare cases, and usually only the source and origin function of GIs should be protected. The approach should not be any different from that taken in trade mark law. GI protection should not be used as a means to monopolise names. At the end of the day, the scope of GI protection is nevertheless a policy issue.
Abstract:
This thesis studies empirically whether measurement errors in aggregate production statistics affect sentiment and future output. Initial announcements of aggregate production are subject to measurement error, because many of the data required to compile the statistics are produced with a lag. This measurement error can be gauged as the difference between the latest revised statistic and its initial announcement. Assuming aggregate production statistics help forecast future aggregate production, these measurement errors are expected to affect macroeconomic forecasts. Assuming agents' macroeconomic forecasts affect their production choices, these measurement errors should affect future output through sentiment. This thesis is primarily empirical, so the theoretical basis, strategic complementarity, is discussed only briefly. In short, it is a model in which higher aggregate production increases each agent's incentive to produce. In this setting, a statistical announcement suggesting that aggregate production is high would increase each agent's incentive to produce, thus resulting in higher aggregate production. In this way the existence of strategic complementarity provides the theoretical basis for output fluctuations caused by measurement mistakes in aggregate production statistics. Previous empirical studies suggest that measurement errors in gross national product affect future aggregate production in the United States. It has also been demonstrated that measurement errors in the Index of Leading Indicators affect forecasts by professional economists as well as future industrial production in the United States. This thesis aims to verify the applicability of these findings to other countries, as well as to study the link between measurement errors in gross domestic product and sentiment. It explores the relationship between measurement errors in gross domestic product, sentiment and future output. Professional forecasts and consumer sentiment in the United States and Finland, as well as producer sentiment in Finland, are used as measures of sentiment. Using statistical techniques, it is found that measurement errors in gross domestic product affect forecasts and producer sentiment; the effect on consumer sentiment is ambiguous. The relationship between measurement errors and future output is explored using data from Finland, the United States, the United Kingdom, New Zealand and Sweden. It is found that measurement errors have affected aggregate production or investment in Finland, the United States, the United Kingdom and Sweden. Specifically, overly optimistic statistical announcements are associated with higher output and vice versa.
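The snippet below is a schematic illustration of the measurement-error construction described above: the error is the latest revised figure minus the initial announcement, and subsequent output growth is then regressed on it. All figures are invented, and the single-regressor OLS is only a stand-in for the thesis's statistical techniques.

```python
# Schematic illustration of the measurement-error construction described
# above: error = latest revised figure - initial announcement, followed by a
# simple regression of subsequent output growth on that error. Invented data.

import numpy as np

initial  = np.array([1.2, 0.8, -0.3, 2.0, 1.5, 0.6])   # initial GDP growth announcements (%)
revised  = np.array([1.5, 0.7,  0.1, 2.4, 1.3, 0.9])   # latest revised figures (%)
future_y = np.array([1.6, 0.9,  0.4, 2.5, 1.2, 1.0])   # output growth in the following period (%)

error = revised - initial   # positive error: the initial announcement understated production

# OLS of future output growth on a constant and the measurement error.
X = np.column_stack([np.ones_like(error), error])
beta, *_ = np.linalg.lstsq(X, future_y, rcond=None)
print("slope on measurement error:", round(float(beta[1]), 2))
```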
Abstract:
Intensive care is to be provided to patients who benefit from it, in an ethical, efficient, effective and cost-effective manner. This implies a long-term qualitative and quantitative analysis of intensive care procedures and related resources. The study population consists of 2,709 patients treated in the general intensive care unit (ICU) of Helsinki University Hospital. The study investigates intensive care patients' mortality, quality of life (QOL), quality-adjusted life-years (QALY units) and factors related to severity of illness, length of stay (LOS), patient age and evaluation period, as well as experiences and memories connected with the ICU episode. In addition, the study examines the properties of two QOL measures, the RAND 36-Item Health Survey 1.0 (RAND-36) and the five-dimensional EuroQol (EQ-5D), and assesses the correlation of their results. Patients treated in 1995 responded to the RAND-36 questionnaire in 1996. All patients treated from 1995 to 2000 received a QOL questionnaire in 2001, when 1-7 years had elapsed since the intensive care; the response rate was 79.5%. Main results: 1) Of the patients who died within the first year (n = 1047), 66% died during the intensive care period or within the following month. The non-survivors were older than the survivors, generally had higher than average APACHE II and SOFA scores depicting severity of illness, and their ICU LOS was longer and hospital stay shorter than those of the survivors (p < 0.001). Mortality of patients receiving conservative treatment was higher than that of patients receiving surgical treatment. The patients replying to the QOL survey in 2001 (n = 1099) had recovered well: 97% of them lived at home. More than half considered their QOL good or extremely good, 40% satisfactory and 7% bad. All QOL indices of those of working age were considerably lower (p < 0.001) than the comparable figures for the age- and gender-adjusted Finnish population. The 5-year monitoring period made it evident that mental recovery was slower than physical recovery. 2) The results of RAND-36 and EQ-5D correlated well (p < 0.01). The RAND-36 profile measure distinguished more clearly between the different categories and levels of QOL. The EQ-5D measured well the patient groups' general QOL, and its sum index was used to calculate QALY units. 3) QALY units were calculated by multiplying the time the patient survived after the ICU stay, or expected life-years, by the EQ-5D sum index. Ageing automatically lowers the number of QALY units. Patients under the age of 65 receiving conservative treatment benefited from treatment to a greater extent, measured in QALY units, than their peers receiving surgical treatment, but in the age group 65 and over, patients with surgical treatment received higher QALY ratings than recipients of conservative treatment. 4) The intensive care experience and QOL ratings were connected. The QOL indices were statistically highest for those with memories of intensive care as a positive experience, although their illness requiring intensive care was less serious than average. No statistically significant differences were found in the QOL indices of those with negative memories, no memories or those who did not express the quality of their experiences.
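As a worked example of the calculation described in point 3 (with invented patient figures, not the study data): QALY units are the product of survival time after the ICU stay, or expected life-years, and the EQ-5D sum index.

```python
# Worked example of the QALY calculation described above: QALY units =
# (survival time after ICU stay, or expected life-years) x EQ-5D sum index.
# The patient figures below are invented for illustration.

def qaly_units(life_years: float, eq5d_sum_index: float) -> float:
    return life_years * eq5d_sum_index

# A younger patient with more expected life-years accrues more QALY units
# at the same EQ-5D index, which is why ageing automatically lowers QALYs.
print(round(qaly_units(30.0, 0.85), 2))   # 25.5 QALY units
print(round(qaly_units(8.0, 0.85), 2))    # 6.8 QALY units
```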
Abstract:
The paper explores the effect of customer satisfaction with online supporting services on loyalty to providers of an offline core service. Supporting services are provided to customers before, during, or after the purchase of a tangible or intangible core product, and have the purpose of enhancing or facilitating the use of this product. The internet has the potential to dominate all other marketing channels when it comes to the interactive and personalised communication that is considered quintessential for supporting services. Our study shows that the quality of online supporting services powerfully affects satisfaction with the provider and customer loyalty through its effect on online value and enjoyment. Managerial implications are provided.