39 results for material contexts
Abstract:
Drug Analysis without Primary Reference Standards: Application of LC-TOFMS and LC-CLND to Biofluids and Seized Material
Primary reference standards for new drugs, metabolites, designer drugs or rare substances may not be obtainable within a reasonable period of time, or their availability may be hindered by extensive administrative requirements. Standards are usually costly and may have a limited shelf life. Finally, many compounds are not available commercially, and some are not available at all. A new approach within forensic and clinical drug analysis involves substance identification based on accurate mass measurement by liquid chromatography coupled with time-of-flight mass spectrometry (LC-TOFMS), and quantification by LC coupled with chemiluminescence nitrogen detection (LC-CLND), which possesses an equimolar response to nitrogen. Formula-based identification relies on the fact that the accurate mass of an ion from a chemical compound corresponds to the elemental composition of that compound. Single-calibrant, nitrogen-based quantification is feasible with a nitrogen-specific detector since approximately 90% of drugs contain nitrogen. A method was developed for toxicological drug screening in 1 ml urine samples by LC-TOFMS. A large target database of exact monoisotopic masses was constructed, representing the elemental formulae of reference drugs and their metabolites. Identification was based on matching the sample component's measured parameters with those in the database, including accurate mass and retention time, if available. In addition, an algorithm for isotopic pattern match (SigmaFit) was applied. Differences in ion abundance in urine extracts did not affect the mass accuracy or the SigmaFit values. For routine screening practice, a mass tolerance of 10 ppm and a SigmaFit tolerance of 0.03 were established. Seized street drug samples were analysed directly by LC-TOFMS and LC-CLND, using a dilute-and-shoot approach.
In the quantitative analysis of amphetamine, heroin and cocaine findings, the mean relative difference between the results of LC-CLND and the reference methods was only 11%. In blood specimens, liquid-liquid extraction recoveries for basic lipophilic drugs were first established and the validity of the generic extraction recovery-corrected single-calibrant LC-CLND was then verified with proficiency test samples. The mean accuracy was 24% and 17% for plasma and whole blood samples, respectively, all results falling within the confidence range of the reference concentrations. Further, metabolic ratios for the opioid drug tramadol were determined in a pharmacogenetic study setting. Extraction recovery estimation, based on model compounds with similar physicochemical characteristics, produced clinically feasible results without reference standards.
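The two numerical criteria at the heart of this approach (the 10 ppm accurate-mass tolerance for identification, and single-calibrant quantification via the equimolar nitrogen response) can be sketched in a few lines. The function and parameter names below are illustrative, not taken from the thesis:

```python
def ppm_error(measured_mz: float, theoretical_mz: float) -> float:
    """Mass accuracy in parts per million (ppm)."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

def mass_match(measured_mz: float, theoretical_mz: float,
               tol_ppm: float = 10.0) -> bool:
    """Identification criterion: measured accurate mass within tol_ppm of the
    database value (retention time and SigmaFit checks would apply on top)."""
    return abs(ppm_error(measured_mz, theoretical_mz)) <= tol_ppm

def clnd_concentration(analyte_area: float, calibrant_area: float,
                       calibrant_conc: float, n_nitrogen_calibrant: int,
                       n_nitrogen_analyte: int) -> float:
    """Single-calibrant CLND quantification: the detector responds
    (approximately) equimolarly to nitrogen, so one nitrogen-containing
    calibrant suffices. Result is in the same molar units as calibrant_conc."""
    # moles of nitrogen delivered per unit peak area, from the calibrant
    n_per_area = calibrant_conc * n_nitrogen_calibrant / calibrant_area
    # analyte nitrogen concentration, divided by nitrogens per analyte molecule
    return analyte_area * n_per_area / n_nitrogen_analyte
```

For a single-nitrogen drug quantified against a single-nitrogen calibrant, the last formula reduces to a simple peak-area ratio multiplied by the calibrant concentration.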
Abstract:
Designed by the Media: The Media Publicity of Design in the Finnish Economic Press
The importance of design has increased in consumer societies. Design is the subject of debate, and the number of media discussions has grown steadily; the role of industrial design in particular has been emphasised. In this study I examine the media publicity of design in the Finnish economic press from the late 1980s to the beginning of the 2000s. The research question is connected to media representations: How is design represented in the Finnish economic press? In other words, what are the central topics of design in the economic press, and to what issues are the media debates connected? The often-repeated claim that design discussions take place only on the cultural pages of the daily press or in cultural contexts is thereby revised: design is also linked to consumer culture and to consumers' everyday practices. The research material has been collected from the Finnish economic press. The qualitative sample consists of articles from Kauppalehti, Taloussanomat and several economic papers published by the Talentum Corporation. The approach of the research is explorative, descriptive and hermeneutic. This means that the economic press articles are used to explore how design is represented in the media, and the characteristics of design represented in the media are described in detail. The research is based on the interpretive tradition of studying textual materials; its background assumptions are thus grounded in hermeneutics. Erving Goffman's frame analysis is applied in analysing the economic press materials. The frames interpreted from the articles depict the media publicity of design in the Finnish economic press. The research opens up a multidimensional picture of design in the economic press. The analysis resulted in five frames that describe design from various points of view.
In the personal frame, designers are described in private settings and through their personal experiences. The second frame relates to design work: in the frame of mastery of the profession, the designers' work is interpreted broadly, and design is considered from the aspects of personal know-how, co-operation and the overall design process. The third frame is connected to the actual substance of the economic press: in the frame of economy and market, design is linked to international competitiveness, companies' competitive advantage and benefit creation for consumers. The fourth frame is connected to the actors promoting design on a societal level: in the communal frame, the economic press describes design policy, design research and education, and other actors that actively develop design in societal networks. The last frame is linked to the traditions of design and, above all, to the examination of cultural transition: in the frame of culture, the traditions of design are emphasised, and design is connected to industrial culture and, furthermore, to the themes of consumer culture. It can be argued that the frames construct the media publicity of design from various points of view: they describe the situations, actions and actors of design. The interpreted media frames make it possible to understand the relation between design actions and culture. Thus, the media has a crucial role in representing and recreating meanings related to design. The publicity of design is characterised by five focal themes: personification, professionalisation, commercialisation, communalisation, and a transition of cultural focus from the traditions of design to industrial culture and consumer culture. Based on my interpretation, these themes are guided by the mediatisation of design: the design phenomenon is increasingly defined on the basis of media representations in public discourses.
The design culture outlined in this research connects socially constructed and structurally organised action. Socially constructed action in design is connected to experiences, social recreation and the collective development of design. Structurally, design is described as professional know-how, as a process and as an economic, profit-generating activity in society. The events described by the media affect the way in which people experience the world, the meanings they attach to the events around them and their life in the world. By affecting experiences, the media indirectly affects human actions. People have become habituated to reading media representations on a daily basis, but they are not used to reading and interpreting the various meanings incorporated in media texts.
Abstract:
The research reported in this thesis dealt with single crystals of thallium bromide grown for gamma-ray detector applications. The crystals were used to fabricate room-temperature gamma-ray detectors. Routinely produced TlBr detectors are often of poor quality. Therefore, this study concentrated on developing the manufacturing processes for TlBr detectors, together with characterisation methods that can be used to optimise TlBr purity and crystal quality. The processes of concern were TlBr raw-material purification, crystal growth, annealing and detector fabrication. The study focused on single crystals of TlBr grown from material purified by a hydrothermal recrystallisation method. In addition, hydrothermal conditions for the synthesis, recrystallisation, crystal growth and annealing of TlBr crystals were examined. The final manufacturing process presented in this thesis starts with TlBr material purified by the Bridgman method. The material is then hydrothermally recrystallised in pure water. A travelling molten zone (TMZ) method is used for additional purification of the recrystallised product and then for the final crystal growth. Subsequent processing is similar to that described in the literature. In this thesis, the literature on improving the quality of TlBr material and crystals and on detector performance is reviewed. Aging aspects, as well as the influence of different factors (temperature, time, electrode material and so on) on detector stability, are considered and examined. The results of the process development are summarised and discussed. This thesis shows a considerable improvement in the charge-carrier properties of a detector due to additional purification by hydrothermal recrystallisation. As an example, a thick (4 mm) TlBr detector produced by the process was fabricated and found to operate successfully in gamma-ray detection, confirming the validity of the proposed purification and technological steps.
However, for the complete improvement of detector performance, further developments in crystal growth are required. The detector manufacturing process was optimised through characterisation of the material and crystals using methods such as X-ray diffraction (XRD), polarisation microscopy, high-resolution inductively coupled plasma mass spectrometry (HR-ICP-MS), Fourier transform infrared (FTIR) and ultraviolet-visible (UV-Vis) spectroscopy, field emission scanning electron microscopy (FESEM) with energy-dispersive X-ray spectroscopy (EDS), current-voltage (I-V) and capacitance-voltage (C-V) characterisation, and photoconductivity measurements, as well as direct detector examination.
Abstract:
The present study examines how the landscape of the rural immigrant colony of New Finland (Saskatchewan, Canada) has reflected the Finnish origins of its roughly 350 settlers and their descendants, their changing ideologies, values and sense of collectiveness, and the meanings of their Finnish roots. The study also reveals the reasons and power structures behind these ethnic expressions. The period studied runs from the beginning of the settlement in 1888 to the turn of the millennium. The research concentrates on buildings, cemeteries, personal names and place names, which contain strong visual and symbolic messages and are all important constituents of mundane landscapes. For example, the studied personal names are important identity-political indexes that speak of the value of Finnish nationalism, community spirit and dual Finnish-Canadian identities, and also of the process of assimilation, which, for instance, differed between genders. The study is based on empirical field research and on iconographical and textual interpretations supported by classifications and comparative analyses. Several interviews and the literature were essential means of understanding the changing political contexts which influenced the Finnish settlement and its multiple landscape representations. Five historical landscape periods were identified in New Finland. During these periods the meanings and representations of Finnish identity changed along with national and international politics and local power structures. For example, during the Second World War Canada discouraged representations of Finnish culture because Finland and Canada were enemies. But Canada's multicultural policy in the 1980s led to several material and symbolic representations marking the Finnish settlement after a period of assimilation and deinstitutionalization. The study shows how these representations were indications of the politics of (selective) memory.
The Finnish language, cultural traditions and the Evangelical-Lutheran values of the pioneers, which have been passed down to new generations, are an especially highly valued part of the Finnish heritage. The work of the pioneers and their participation in the building of Saskatchewan is also an important collective narrative. The selectiveness of memory created a landscape of forgetting, which includes deliberately forgotten parts of the history; for example, the occasional disputes between the congregations have been ignored. The results show how the different landscape elements can open up a useful perspective on diaspora colonies or other communities by providing information which would otherwise be indistinguishable. In this case, for example, two cemeteries close together were a sign of religious divisions among the early settlers.
Abstract:
This study explores the EMU stand taken by the major Finnish political parties from 1994 to 1999. The starting point is empirical evidence showing that party responses to European integration are shaped by a mix of national and cross-national factors, with national factors having more explanatory value. The study is the first to produce evidence that coded party documents such as protocols, manifestos and authoritative policy summaries can describe the emphasis of EMU policy. In fact, as the literature review demonstrates, it has so far been unclear what kind of stand the three major Finnish political parties took during 1994–1999. Consequently, this study makes a substantive contribution to understanding the factors that shaped EMU party policies and, eventually, the national EMU policy during the 1990s. The research questions addressed are the following: What are the main factors that shaped partisan standpoints on EMU during 1994–1999? To what extent did the policy debate and themes change within the political parties? How far were the policies of the Social Democratic Party, the Centre Party and the National Coalition Party shaped by factors unique to their own national contexts? Furthermore, to what extent were they determined by cross-national influences from abroad, especially from countries with which Finland has a special relationship, such as Sweden? The theoretical background of the study lies in the area of party politics, approaches to EU policies and party change, developed mainly by Kevin Featherstone, Peter Mair and Richard Katz. At the same time, the study puts forward generic hypotheses that help to explain party standpoints on EMU. It incorporates a large quantity of new coded material based on primary research through content analysis and interviews. Quantitative and qualitative methods are used sequentially in order to overcome possible limitations. Established content-analysis techniques improve the reliability of the data.
The coding frame is based on the salience theory of party competition. Interviews with eight party leaders and one independent expert civil servant provided additional insights and improved the validity of the data. Public-opinion surveys and media coverage are also used to complete the research path. Four major conclusions are drawn from the research findings. First, the quantitative and interview data reveal the importance of internal influences within the parties, which most noticeably shaped their EMU policies during the 1990s; international events, in contrast, played a minor role. The most striking feature turned out to be the strong emphasis by all of the parties on economic goals, although the emphasis given to economic, democratic and international issues differed across the three major parties. Secondly, it seems that the parties have transformed into centralised and professional organisations in terms of their EMU policy-making: the weight and direction of party EMU strategy rests with the leadership and a few administrative elites. This could imply changes in their institutional environment; eventually, parties may appear generally less differentiated and more standardised in their policy-making. Thirdly, the case of the Social Democratic Party shows that traditional organisational links continue to exist between the left and the trade unions in terms of EMU policy-making. Hence, it could be that the parties have not yet moved beyond their conventional affiliate organisations. Fourthly, parties tend to neglect citizen opinion and demands with regard to EMU, which could imply tension with the changes in their strategic environment. They seem to give more attention to the demands of political competition (party-party relationships) than to public attitudes (party-voter relationships), which would imply that they have had to learn to be more flexible and responsive.
Finally, three suggestions for institutional reform are offered, which could contribute to the emergence of legitimised policy-making: measures to bring more party members and voter groups into the policy-making process; measures to adopt new technologies in order to open up the policy-formation process in the early phase; and measures to involve all interest groups in the policy-making process.
Abstract:
The purpose of this series of studies was to evaluate the biocompatibility of poly(ortho ester) (POE), a copolymer of ε-caprolactone and D,L-lactide [P(ε-CL/DL-LA)], and composites of P(ε-CL/DL-LA) and tricalcium phosphate (TCP) as bone-filling materials in bone defects. Tissue reactions and resorption times were examined in experimental animals for two solid POE implants (POE 140 and POE 46) with different methods of sterilization (gamma and ethylene oxide sterilization), for P(ε-CL/DL-LA) (40/60 w/w) in paste form, and for a 50/50 w/w composite of 40/60 w/w P(ε-CL/DL-LA) and TCP and a 27/73 w/w composite of 60/40 w/w P(ε-CL/DL-LA) and TCP. The follow-up times ranged from one week to 52 weeks. The bone samples were evaluated histologically, and the soft-tissue samples histologically, immunohistochemically and electron microscopically. The results showed that in bone the resorption time of gamma-sterilized POE 140 was eight weeks, and that of ethylene oxide sterilized POE 140 was 13 weeks. The resorption time of POE 46 was more than 24 weeks. For both POEs, the gamma-sterilized rods started to erode from the surface faster than the ethylene oxide sterilized rods. Inflammation in bone ranged from slight to moderate with POE 140 and was moderate with POE 46. No highly fluorescent layer of tenascin or fibronectin was found in the soft tissue. Bone healing at the sites of implantation was slower than at control sites with the copolymer in small bone defects. The resorption time for the copolymer was over one year. Inflammation in bone was mostly moderate. Bone healing at the sites of implantation was also slower than at the control sites with the composites in small and large mandibular bone defects. Bone formation had ceased at both sites by the end of follow-up in large mandibular bone defects. The ultrastructure of the connective tissue was normal during the period of observation. It can be concluded that the method of sterilization influenced the resorption time of both POEs.
Gamma-sterilized POE 140 could be a suitable material for filling small bone defects, whereas the degradation times of solid EO-sterilized POE 140 and POE 46 were too long for them to be considered as bone-filling materials. Solid material is difficult to contour, which can be considered a disadvantage. The composites were excellent to handle, but the degradation times of the polymer and the composites were too long. Therefore, the copolymer and the composites cannot be recommended as bone-filling materials.
Abstract:
Fusion energy is a clean and safe solution to the intricate question of how to produce non-polluting and sustainable energy for a constantly growing population. The fusion process does not result in any harmful waste or greenhouse gases: a small amount of helium is the only by-product when the hydrogen isotopes deuterium and tritium are used as fuel. Moreover, deuterium is abundant in seawater and tritium can be bred from lithium, a common metal in the Earth's crust, rendering the fuel reservoirs practically bottomless. Thanks to its enormous mass, the Sun has been able to use fusion as its main energy source ever since it was born, but here on Earth we must find other means to achieve the same. Inertial fusion involving powerful lasers and thermonuclear fusion employing extreme temperatures are examples of working approaches, but these have yet to produce more energy than they consume. In thermonuclear fusion, the fuel is held inside a tokamak, a doughnut-shaped chamber with strong magnets wrapped around it. Once the fuel is heated up, it is confined with the help of these magnets, since the required temperatures (over 100 million degrees C) separate the electrons from the nuclei, forming a plasma. When fusion reactions occur, excess binding energy is released as energetic neutrons, which are absorbed in water in order to produce steam that runs turbines. Keeping the power losses from the plasma low, and thus allowing a high number of reactions, is one challenge. Another challenge is related to the reactor materials: since the confinement of the plasma particles is not perfect, the reactor walls and structures are bombarded by particles, and material erosion and activation as well as plasma contamination are expected. In addition, the high-energy neutrons cause radiation damage in the materials, leading, for instance, to swelling and embrittlement.
In this thesis, the behaviour of materials in a fusion reactor was studied using molecular dynamics simulations. Simulations of processes in the next-generation fusion reactor ITER involve the reactor materials beryllium, carbon and tungsten, as well as the plasma hydrogen isotopes. This means that interaction models, i.e. interatomic potentials, for this complicated quaternary system are needed. The task of finding such potentials is nonetheless nearly complete, since models for the beryllium-carbon-hydrogen interactions were constructed in this thesis and, as a continuation of that work, a beryllium-tungsten model is under development. These potentials are combinable with the earlier tungsten-carbon-hydrogen ones. The potentials were used to explain the chemical sputtering of beryllium under deuterium plasma exposure. In experiments, a large fraction of the sputtered beryllium atoms were observed to be released as BeD molecules, and the simulations identified swift chemical sputtering, previously not believed to be important in metals, as the underlying mechanism. Radiation damage in the reactor structural materials vanadium, iron and iron chromium, as well as in the wall material tungsten and the mixed alloy tungsten carbide, was also studied in this thesis. Interatomic potentials for vanadium, tungsten and iron were modified to be better suited to simulating the collision cascades that form during particle irradiation, and the potential features affecting the resulting primary damage were identified. Including the often neglected electronic effects in the simulations was also shown to have an impact on the damage: with proper tuning of the electron-phonon interaction strength, experimentally measured quantities related to ion-beam mixing in iron could be reproduced. The damage in tungsten carbide alloys showed elemental asymmetry, as the major part of the damage consisted of carbon defects.
On the other hand, modelling the damage in the iron chromium alloy, which essentially represents steel, showed that small additions of chromium do not noticeably affect the primary damage in iron. Since a complete assessment of the response of a material in a future full-scale fusion reactor is not achievable using experimental techniques alone, molecular dynamics simulations are of vital help. This thesis has not only provided insight into complicated reactor processes and improved current methods, but also offered tools for further simulations. It is therefore an important step towards making fusion energy more than a future goal.
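Molecular dynamics, the method used throughout this thesis, integrates Newton's equations of motion for interacting atoms. The production simulations use the bond-order potentials discussed above, but the scheme itself can be illustrated with a minimal velocity-Verlet integrator and a generic Lennard-Jones pair potential; all names and parameter values below are illustrative, not from the thesis:

```python
def lj_force(r: float, epsilon: float = 1.0, sigma: float = 1.0) -> float:
    """Lennard-Jones pair force magnitude (positive = repulsive)."""
    sr6 = (sigma / r) ** 6
    return 24 * epsilon * (2 * sr6 * sr6 - sr6) / r

def velocity_verlet_1d(x1, x2, v1, v2, m=1.0, dt=1e-3, steps=1000):
    """Integrate two atoms on a line interacting via the LJ potential."""
    def acc():
        f = lj_force(abs(x2 - x1))          # repulsive force magnitude
        direction = 1.0 if x2 > x1 else -1.0
        return -f * direction / m, f * direction / m
    a1, a2 = acc()
    for _ in range(steps):
        # positions advance with current velocity and acceleration
        x1 += v1 * dt + 0.5 * a1 * dt * dt
        x2 += v2 * dt + 0.5 * a2 * dt * dt
        # velocities advance with the average of old and new accelerations
        na1, na2 = acc()
        v1 += 0.5 * (a1 + na1) * dt
        v2 += 0.5 * (a2 + na2) * dt
        a1, a2 = na1, na2
    return x1, x2, v1, v2
```

A pair started at the LJ equilibrium separation (2^(1/6) in reduced units) feels no force and stays put; a compressed pair is pushed apart, the simplest analogue of the repulsive collisions that drive cascade damage.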
Abstract:
To achieve efficient fusion energy production, the plasma-facing wall materials of a fusion reactor should allow long-term operation. In the next-step fusion device, ITER, the first-wall region facing the highest heat and particle load, i.e. the divertor area, will mainly consist of tiles based on tungsten. During reactor operation, the tungsten material is slowly but inevitably saturated with tritium, the relatively short-lived hydrogen isotope used in the fusion reaction. The amount of tritium retained in the wall materials should be minimised and its recycling back to the plasma must be unrestrained; otherwise it cannot be used for fuelling the plasma. Replacing the first walls frequently is very expensive and thus economically not viable; a better solution is to heat the walls to temperatures at which tritium is released. Unfortunately, the exact mechanisms of hydrogen release from tungsten are not known. In this thesis, both experimental and computational methods have been used to study the release and retention of hydrogen in tungsten. The experimental work consists of hydrogen implantations into pure polycrystalline tungsten, the determination of hydrogen concentrations using ion beam analysis (IBA), and the monitoring of the out-diffused hydrogen gas with thermodesorption spectrometry (TDS) as the tungsten samples are heated to elevated temperatures. By combining IBA methods with TDS, the retained amount of hydrogen is obtained, as well as the temperatures needed for hydrogen release. With computational methods, the hydrogen-defect interactions and the implantation-induced irradiation damage can be examined at the atomic level. Multiscale modelling combines results obtained from computational methodologies applicable at different length and time scales.
Electron density functional theory calculations were used to determine the energetics of the elementary processes of hydrogen in tungsten, such as diffusion and trapping at vacancies and surfaces. Results on the energetics of pure tungsten defects were used in the development of a classical bond-order potential describing tungsten defects for use in molecular dynamics simulations. The developed potential was used to determine defect clustering and annihilation properties. These results were further employed in binary collision and rate theory calculations to determine the evolution of the large defect clusters that trap hydrogen in the course of implantation. The computational results for the defect and trapped-hydrogen concentrations compared successfully with the experimental results. With this multiscale analysis, the experimental results obtained within this thesis and found in the literature were explained both quantitatively and qualitatively.
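Although the abstract gives no equations, the link between a trap's binding energy and the temperature at which hydrogen is released during a TDS ramp can be illustrated with a generic first-order desorption model (Arrhenius detrapping under a linear temperature ramp). The attempt frequency, ramp rate and trap energies below are illustrative values, not from the thesis:

```python
import math

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def detrap_rate(e_trap_ev: float, temp_k: float, nu: float = 1e13) -> float:
    """First-order detrapping rate: attempt frequency times Boltzmann factor."""
    return nu * math.exp(-e_trap_ev / (K_B * temp_k))

def tds_spectrum(e_trap_ev, t0=300.0, t_end=1300.0, beta=1.0, dt=0.01, n0=1.0):
    """Simulate a linear-ramp TDS run: trap occupancy n obeys dn/dt = -k(T) n,
    with T = t0 + beta*t (beta in K/s). Returns (temperatures, release rates)."""
    n, t, temp = n0, 0.0, t0
    temps, rates = [], []
    while temp < t_end and n > 1e-6:
        release = detrap_rate(e_trap_ev, temp) * n  # atoms released per second
        n -= release * dt                            # forward-Euler occupancy update
        t += dt
        temp = t0 + beta * t
        temps.append(temp)
        rates.append(release)
    return temps, rates
```

Deeper traps release their hydrogen at higher ramp temperatures, which is the qualitative behaviour that makes TDS peak positions a probe of the defect types produced by implantation.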
Abstract:
This study offers a reconstruction and critical evaluation of globalization theory, a perspective that has been central to sociology and cultural studies in recent decades, from the viewpoint of media and communications. As the study shows, sociological and cultural globalization theorists rely heavily on arguments concerning media and communications, especially the so-called new information and communication technologies, in the construction of their frameworks. Besides deepening the understanding of globalization theory, the study provides new critical knowledge of the problematic consequences that follow from such a strong investment in media and communications in contemporary theory. The book is divided into four parts. The first part presents the research problem, the approach and the theoretical contexts of the study. Following the introduction in Chapter 1, I identify the core elements of globalization theory in Chapter 2. At the heart of globalization theory is the claim that recent decades have witnessed massive changes in the spatio-temporal constitution of society, caused by new media and communications in particular, and that these changes necessitate a rethinking of the foundations of social theory as a whole. Chapter 3 introduces three paradigms of media research (the political economy of media, cultural studies and medium theory), the discussion of which makes it easier to understand the key issues and controversies that emerge in academic globalization theorists' treatment of media and communications. The next two parts offer a close reading of four theorists whose works I use as entry points into academic debates on globalization. I argue that we can make sense of mainstream positions on globalization by dividing them into two paradigms: on the one hand, media-technological explanations of globalization and, on the other, cultural globalization theory.
As examples of the former, I discuss the works of Manuel Castells (Chapter 4) and Scott Lash (Chapter 5). I maintain that their analyses of globalization processes are overly media-centric and result in an unhistorical and uncritical understanding of social power in an era of capitalist globalization. A related evaluation of the second paradigm (cultural globalization theory), as exemplified by Arjun Appadurai and John Tomlinson, is presented in Chapter 6. I argue that, owing to their rejection of the importance of nation states and of the notion of cultural imperialism for cultural analysis, and their replacement with a framework of media-generated deterritorializations and flows, these theorists underplay the importance of the neoliberalization of cultures throughout the world. The fourth part (Chapter 7) presents a central research finding of this study, namely that the media-centrism of globalization theory can be understood in the context of the emergence of neoliberalism. I find it problematic that at the very time when capitalist dynamics have been strengthened in social and cultural life, advocates of globalization theory have directed attention to media-technological changes and their sweeping socio-cultural consequences, instead of analyzing the powerful material forces that shape society and culture. I further argue that this shift serves not only analytical but also utopian functions, that is, a longing for a better world in times when such longing is otherwise considered impracticable.
Abstract:
This study examines how the processes of politicization differ in the Finnish and French local contexts, and what consequences these processes have for local civic practices, the definitions and redefinitions of democracy and citizenship, the dynamics of power and resistance, and the ways of solving controversies in the public sphere. By means of a "comparative anthropology of the state", focusing on how democracy is actually practiced in different contexts, politicizations (the processes of opening political arenas and recognizing controversy) are analyzed. The focus of the study is on local activists engaged in different struggles at various levels of the local public spheres, and on local politicians and civil servants participating in these struggles from their respective positions, in two middle-sized European cities, Helsinki and Lyon. The empirical analyses of the book compare different political actors and levels of practicing democracy simultaneously. The study is empirically based on four bodies of material: ethnographic notes taken during fieldwork among several local activist groups; 47 interviews of local activists and politicians; images representing different levels of public portrayal, from activist websites (Helsinki N=274, Lyon N=232) and from city information magazines (Helsinki-info N=208, Lyon Citoyen N=357); and, finally, newspaper articles concerning local conflict issues and reporting on encounters between local citizens and representatives of the cities (January-June 2005; Helsingin Sanomat N=96 and Le Progrès N=102).
The study makes three distinctive contributions to the study of current democratic societies: (1) a conceptual one, by bringing politicization to the center of a comparison of political cultures and by considering in parallel the ethnographic group styles theory of Nina Eliasoph and Paul Lichterman, the theory of counter-democracy of Pierre Rosanvallon, and the pragmatist justification theory of Luc Boltanski and Laurent Thévenot; (2) an empirical one, through the triangulation of ethnographic, thematic interview, visual, and newspaper data, through which the different aspects of democratic practices are examined; and (3) a methodological one, by developing new ways of analyzing comparative cases – an application of Frame Analysis to visual material and the creation of Public Justification Analysis for analyzing morally loaded claims in newspaper reports – thus building bridges between cultural, political, and pragmatic sociology. The results of the study indicate that the cultural tools the Finnish civic actors had at their disposal were prone to hinder more than support politicization, whereas the tools the French actors mainly relied on were frequently apt for making politicization possible. This crystallization is defined and detailed in many ways in the analyses of the book. Its consequences for the understanding of, and future research on, the current developments of democracy are multiple, as politicization, while not assuring good results as such, is central to a functioning and vibrant democracy in which injustices can be fixed and new directions and solutions sought collectively.
Resumo:
This dissertation considers the problem of trust in the context of food consumption. The research perspectives cover institutional conditions for consumer trust, personal practices of food consumption, and the strategies consumers employ for controlling the safety of their food. The main concern of the study is to investigate consumer trust as an adequate response to food risks, i.e. a strategy that helps the consumer make safe choices in an uncertain food situation. This "risky" perspective serves as a frame of reference for understanding and explaining trust relations. The original aim of the study was to reveal the meanings attached to the concepts of trust, safety and risk from the perspective of market choices, the assessment of food risks and the ways of handling them. Supplementary research tasks involved describing the institutional conditions for consumer trust, including the food market, and presenting food consumption patterns in St. Petersburg. The main empirical material is based on qualitative interviews with consumers and on interviews and group discussions with professional experts (market actors, representatives of inspection bodies and consumer organizations). Secondary material is used to describe the institutional conditions for consumer trust and the market situation. The results suggest that the idea of consumer trust is associated with the reputation of suppliers, the stable quality and taste of their products, and reliable food information. Being a subjectively constructed state connected to the act of acceptance, consumer trust results in positive buying decisions and stable preferences in the food market. The consumers' strategies that aim at safe food choices rely on repetitive interactions with reliable market actors, which free them from constant deliberation in the marketplace. Trust in food is highly mediated by trust in the institutions involved in the food system.
The analysis reveals a clear pattern of disbelief in the efficiency of institutional food control. The study analyses this as a reflection of the "total distrust" that appears to be a dominant mood in many contexts of modern Russia. However, the interviewees emphasize the state's decisive role in suppressing risks in the food market. The findings are also discussed with reference to consumers' possibilities for personal control over food risks. Three main responses to a risky food situation are identified: the reflexive approach, the traditional approach, and the fatalistic approach.
Resumo:
Agriculture is an economic activity that relies heavily on the availability of natural resources. Through its role in food production, agriculture is a major factor affecting public welfare and health, and its indirect contribution to gross domestic product and employment is significant. Agriculture also contributes to numerous ecosystem services through the management of rural areas. However, the environmental impact of agriculture is considerable and reaches far beyond agroecosystems. The questions related to farming for food production are thus manifold and of great public concern. Improving the environmental performance of agriculture and the sustainability of food production – "sustainabilizing" food production – calls for the application of a wide range of expert knowledge. This study falls within the field of agro-ecology, with interfaces to food systems and sustainability research, and exploits methods typical of industrial ecology. Research in these fields extends from multidisciplinary to interdisciplinary and transdisciplinary, a holistic approach being the key tenet. The methods of industrial ecology have been applied extensively to explore the interaction between human economic activity and resource use. Specifically, the material flow approach (MFA) has established its position through application in systematic environmental and economic accounting statistics. However, very few studies have applied MFA specifically to agriculture; in this thesis the MFA approach was applied in such a context in Finland. The focus of this study is the ecological sustainability of primary production. The aim was to explore the possibilities of assessing the ecological sustainability of agriculture using two different approaches. In the first approach, MFA methods from industrial ecology were applied to agriculture, whereas the second is based on food consumption scenarios.
The two approaches were used in order to capture some of the environmental impacts of dietary changes and of changes in production mode. The methods were applied at levels ranging from the national to the sector and local levels. Through the supply-demand approach, the viewpoint shifted between that of food production and that of food consumption. The main data sources were official statistics, complemented with published research results and expert appraisals. The MFA approach was used to define the system boundaries, to quantify the material flows and to construct eco-efficiency indicators for agriculture. The results were further elaborated into an input-output model that was used to analyse the food flux in Finland and to determine its relationship to economy-wide physical and monetary flows. The methods based on food consumption scenarios were applied at the regional and local levels to assess the feasibility and environmental impacts of re-localising food production. The approach was also used for the quantification and source allocation of greenhouse gas (GHG) emissions of primary production. The GHG assessment thus provided a means of cross-checking the results obtained using the two different approaches. MFA data, as such or expressed as eco-efficiency indicators, are useful in describing the overall development. However, the data are not sufficiently detailed for identifying the hot spots of environmental sustainability. Eco-efficiency indicators should not be used bluntly in environmental assessment: the carrying capacity of nature, the potential exhaustion of non-renewable natural resources and the possible rebound effect also need to be accounted for when striving towards improved eco-efficiency. The input-output model is suitable for nationwide economy analyses, and it shows the distribution of monetary and material flows among the various sectors.
Environmental impact can be captured only at a very general level, in terms of total material requirement, gaseous emissions, energy consumption and agricultural land use. Improving the environmental performance of food production requires more detailed and more local information. The approach based on food consumption scenarios can be applied at regional or local scales. Based on various diet options, the method accounts for the feasibility of re-localising food production and for the environmental impacts of such re-localisation in terms of nutrient balances, gaseous emissions, agricultural energy consumption, agricultural land use and the diversity of crop cultivation. The approach is applicable anywhere, but the calculation parameters need to be adjusted to comply with the specific circumstances. The food consumption scenario approach thus pays attention to the variability of production circumstances and may provide environmental information that is locally relevant. The approaches based on the input-output model and on food consumption scenarios represent small steps towards more holistic systemic thinking. However, neither one alone, nor the two together, provides sufficient information for sustainabilizing food production. The environmental performance of food production should be assessed together with the other criteria of sustainable food provisioning. This requires the evaluation and integration of research results from many different disciplines in the context of a specified geographic area. A foodshed area comprising both the rural hinterlands of food production and the population centres of food consumption is suggested as a suitable areal extent for such research. Finding a balance between the various aspects of sustainability is a matter of optimal trade-offs. The balance cannot be universally determined; the assessment methods and the actual measures depend on what the bottlenecks of sustainability are in the area concerned.
These have to be agreed upon among the actors of the area.
Resumo:
This thesis explores the relationship between humans and ICTs (information and communication technologies). As ICTs increasingly penetrate all spheres of social life, their role as mediators – between people, between people and information, and even between people and the natural world – is expanding, and they increasingly shape social life. Yet we still know little of how our life is affected by their growing role. Our understanding of the actors and forces driving the accelerating adoption of new ICTs in all areas of life is also fairly limited. This thesis addresses these problems by interpretively exploring the link between ICTs and the shaping of society at home, in the office, and in the community. The thesis builds on empirical material gathered in three research projects, presented in four separate essays. The first project explores computerized office work through a case study. The second is a regional development project aiming at increasing ICT knowledge and use in 50 small-town families. In the third, the second project is compared with three other longitudinal development projects funded by the European Union. Using theories that consider the human-ICT relationship as intertwined, the thesis provides a multifaceted description of life with ICTs in contemporary information society. By oscillating between empirical and theoretical investigations and balancing between determinist and constructivist conceptualisations of the human-ICT relationship, I construct a dialectical theoretical framework that can be used for studying socio-technical contexts in society. This framework helps us see how societal change stems from the complex social processes that surround routine everyday actions.
For example, interacting with and through ICTs may change individuals’ perceptions of time and space, social roles, and the proper ways to communicate – changes which at some point in time result in societal change in terms of, for example, new ways of acting and knowing things.