50 results for "Reasonable profits" in Helda - Digital Repository of the University of Helsinki


Relevance: 10.00%

Abstract:

The feasibility of different modern analytical techniques for the mass spectrometric detection of anabolic androgenic steroids (AAS) in human urine was examined in order to enhance current analytical practice and to find reasonable strategies for effective sports drug testing. A comparative study of the sensitivity and specificity of gas chromatography (GC) combined with low (LRMS) and high resolution mass spectrometry (HRMS) in screening of AAS was carried out with four metabolites of methandienone. Measurements were done in selected ion monitoring mode, with HRMS using a mass resolution of 5000. With HRMS the detection limits were considerably lower than with LRMS, enabling detection of steroids at levels as low as 0.2-0.5 ng/ml. Even with HRMS, however, the biological background hampered the detection of some steroids. The applicability of liquid-phase microextraction (LPME) was studied with metabolites of fluoxymesterone, 4-chlorodehydromethyltestosterone, stanozolol and danazol. Factors affecting the extraction process were studied, and a novel LPME method with in-fiber silylation was developed and validated for GC/MS analysis of the danazol metabolite. The method allowed precise, selective and sensitive analysis of the metabolite and enabled simultaneous filtration, extraction, enrichment and derivatization of the analyte from urine without any further sample preparation steps. Liquid chromatographic/tandem mass spectrometric (LC/MS/MS) methods utilizing electrospray ionization (ESI), atmospheric pressure chemical ionization (APCI) and atmospheric pressure photoionization (APPI) were developed and applied for the detection of oxandrolone and metabolites of stanozolol and 4-chlorodehydromethyltestosterone in urine. All methods exhibited high sensitivity and specificity.
ESI showed the best applicability, however, and an LC/ESI-MS/MS method for routine screening of nine 17-alkyl-substituted AAS was therefore developed, enabling fast and precise measurement of all analytes with detection limits below 2 ng/ml. The potential of chemometrics to resolve complex GC/MS data was demonstrated with samples prepared for AAS screening. Acquired full scan spectral data (m/z 40-700) were processed by the OSCAR algorithm (Optimization by Stepwise Constraints of Alternating Regression). The deconvolution process was able to extract from a GC/MS run more than twice as many components as there were visible chromatographic peaks. Severely overlapping components, as well as components hidden in the chromatographic background, could be isolated successfully. All studied techniques proved to be useful analytical tools for improving the detection of AAS in urine. The superiority of one procedure over another is, however, compound-dependent, and the different techniques complement each other.
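The HRMS screening above used a mass resolution of 5000, where resolving power is R = m/Δm. A minimal sketch (illustrative only, not code from the study; the example m/z values are hypothetical) of what that resolution buys when separating a steroid ion from an isobaric background ion:

```python
def peak_width(mz, resolution):
    """Approximate peak width (FWHM) at a given m/z for resolving power R = m/dm."""
    return mz / resolution

def resolved(mz_a, mz_b, resolution=5000.0):
    """Treat two ions as separated when their mass difference exceeds the peak width."""
    return abs(mz_a - mz_b) > peak_width((mz_a + mz_b) / 2.0, resolution)

# At R = 5000 and m/z ~300, the peak width is about 0.06 u: an interfering
# ion 0.2 u away is separated from the analyte, one 0.02 u away is not.
```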

Relevance: 10.00%

Abstract:

It has been suggested that semantic information processing is modularized according to the input form (e.g., visual, verbal, non-verbal sound). A great deal of research has concentrated on detecting a separate verbal module. It has also traditionally been assumed in linguistics that the meaning of a single clause is computed before integration into a wider context. Recent research has called these views into question. The present study explored whether it is reasonable to assume separate verbal and nonverbal semantic systems in the light of evidence from event-related potentials (ERPs). The study also provided information on whether the context influences the processing of a single clause before its local meaning is computed. The focus was on an ERP component called the N400, whose amplitude is assumed to reflect the effort required to integrate an item into the preceding context. For instance, if a word is anomalous in its context, it will elicit a larger N400. The N400 has been observed in experiments using both verbal and nonverbal stimuli. The contents of the sentence alone were not hypothesized to influence the N400 amplitude; only the combined contents of the sentence and the picture were. The subjects (n = 17) viewed pictures on a computer screen while hearing sentences through headphones. Their task was to judge the congruency of the picture and the sentence. There were four conditions: 1) the picture and the sentence were congruent and sensible, 2) the picture and the sentence were congruent, but the sentence ended anomalously, 3) the picture and the sentence were incongruent but sensible, 4) the picture and the sentence were incongruent and anomalous. Stimuli from the four conditions were presented in a semi-randomized sequence while the subjects' electroencephalography was recorded. ERPs were computed for the four conditions. The amplitude of the N400 effect was largest for the incongruent sentence-picture pairs.
The anomalously ending sentences did not elicit a larger N400 than the sensible sentences. The results suggest that there is no separate verbal semantic system, and that the meaning of a single clause is not processed independently of its context.
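The ERP measure underlying the study rests on averaging many EEG epochs time-locked to stimulus onset, so that activity not time-locked to the stimulus cancels out. A minimal sketch of that computation (illustrative only; the function and parameters are not from the study):

```python
def compute_erp(eeg, event_indices, sfreq, tmin=-0.1, tmax=0.8):
    """Average baseline-corrected EEG epochs around stimulus onsets.

    eeg           -- continuous signal from one electrode, as a list of samples
    event_indices -- sample indices of stimulus onsets for one condition
    sfreq         -- sampling frequency in Hz
    """
    start, stop = int(tmin * sfreq), int(tmax * sfreq)
    epochs = []
    for idx in event_indices:
        if idx + start >= 0 and idx + stop <= len(eeg):
            epoch = eeg[idx + start:idx + stop]
            # Subtract the mean of the pre-stimulus interval (baseline correction)
            baseline = sum(epoch[:-start]) / float(-start)
            epochs.append([v - baseline for v in epoch])
    # Point-by-point average across epochs: non-time-locked activity cancels
    return [sum(vals) / len(epochs) for vals in zip(*epochs)]
```

The per-condition ERPs produced this way are what the N400 amplitude is read from.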

Relevance: 10.00%

Abstract:

This study examined the fundamental question of what really matters when selecting a new employee. The study focused on the tacit knowledge used by personnel recruiters when interviewing applicants. Knowledge was defined as the best available view, which helps one not to act haphazardly. Tacit knowledge was also defined as a positive concept and seen as part of personnel recruiters' developing proficiency. The research topic was chosen based on the observed increase in the number of employment interviews and their importance in society. As recruiting is becoming a more distinct profession, it was reasonable to approach the topic from an educational point of view. The following research problems guided the examination of the phenomenon: 1) Where does the interviewer seek tacit knowledge during the employment interview? 2) How is tacit knowledge acquired during the employment interview? 3) How does the interviewer defend the significance of the tacit knowledge gained as knowledge that influences the selection decision? The research data were collected by interviewing six personnel recruiters who conduct and evaluate employment interviews as part of their work responsibilities. The interview themes were linked to a recently made selection decision in each organization and the preceding employment interview with the selected candidate. In order to conceptualize tacit knowledge, reflective consideration of the interview event was used in the study. The transcribed research data were analyzed inductively. As a result of the study, the objects of tacit knowledge in the context of an employment interview crystallized into three areas: the applicant's verbal communication, the applicant's non-verbal communication, and the interaction between the interview participants.
Observations directed toward those objects were shown to be intentional, and three schemes were found behind them: experiences from previous interviews, the applicant's application papers, and the aptitude required for the work responsibilities. The question of how knowledge is gained was answered with the concept of procedural knowledge. Personnel recruiters were found to have four different but interconnected ways of encountering knowledge during an employment interview: understanding, evaluative, revealing, and approving knowing. In order to explain the importance given to tacit knowledge, it was examined in connection with the most prevalent practices in the personnel selection industry. The significance of knowledge as knowledge that has an impact on the decision was supported by references to collective opinion (other people agree with it), to circumstance (the interview's short duration), or to the use of some instrument (a structured interview). The study revealed new aspects of the employment selection process through its examination of tacit knowledge. The characteristics of the inductive analysis of the research data may also be utilized, when applicable, in tacit knowledge research within other contexts.

Relevance: 10.00%

Abstract:

Drug Analysis without Primary Reference Standards: Application of LC-TOFMS and LC-CLND to Biofluids and Seized Material

Primary reference standards for new drugs, metabolites, designer drugs or rare substances may not be obtainable within a reasonable period of time, or their availability may be hindered by extensive administrative requirements. Standards are usually costly and may have a limited shelf life. Finally, many compounds are not available commercially, and some not at all. A new approach within forensic and clinical drug analysis involves substance identification based on accurate mass measurement by liquid chromatography coupled with time-of-flight mass spectrometry (LC-TOFMS), and quantification by LC coupled with chemiluminescence nitrogen detection (LC-CLND), which possesses an equimolar response to nitrogen. Formula-based identification relies on the fact that the accurate mass of an ion corresponds to the elemental composition of the compound from which it derives. Single-calibrant nitrogen-based quantification is feasible with a nitrogen-specific detector, since approximately 90% of drugs contain nitrogen. A method was developed for toxicological drug screening in 1 ml urine samples by LC-TOFMS. A large target database of exact monoisotopic masses was constructed, representing the elemental formulae of reference drugs and their metabolites. Identification was based on matching a sample component's measured parameters with those in the database, including accurate mass and retention time, if available. In addition, an algorithm for isotopic pattern match (SigmaFit) was applied. Differences in ion abundance in urine extracts did not affect the mass accuracy or the SigmaFit values. For routine screening practice, a mass tolerance of 10 ppm and a SigmaFit tolerance of 0.03 were established. Seized street drug samples were analysed directly by LC-TOFMS and LC-CLND, using a "dilute and shoot" approach.
In the quantitative analysis of amphetamine, heroin and cocaine findings, the mean relative difference between the results of LC-CLND and the reference methods was only 11%. In blood specimens, liquid-liquid extraction recoveries for basic lipophilic drugs were first established, and the validity of the generic extraction recovery-corrected single-calibrant LC-CLND was then verified with proficiency test samples. The mean accuracy was 24% and 17% for plasma and whole blood samples, respectively, with all results falling within the confidence range of the reference concentrations. Further, metabolic ratios for the opioid drug tramadol were determined in a pharmacogenetic study setting. Extraction recovery estimation based on model compounds with similar physicochemical characteristics produced clinically feasible results without reference standards.
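The formula-based identification step can be sketched as matching a measured m/z against protonated monoisotopic masses within a ppm tolerance (the study established 10 ppm). The database entries and mass values below are illustrative examples, not the study's actual target database:

```python
# Hypothetical target database: name/formula -> exact monoisotopic mass (u)
DATABASE = {
    "amphetamine C9H13N":    135.10480,
    "cocaine C17H21NO4":     303.14705,
    "tramadol C16H25NO2":    263.18853,
}
PROTON = 1.00728  # proton mass, added for [M+H]+ ions in positive-mode ESI

def ppm_error(measured, theoretical):
    """Relative mass error in parts per million."""
    return (measured - theoretical) / theoretical * 1e6

def identify(measured_mz, tolerance_ppm=10.0):
    """Return (name, ppm error) for entries whose [M+H]+ mass is within tolerance."""
    hits = []
    for name, mono_mass in DATABASE.items():
        theo_mz = mono_mass + PROTON
        err = ppm_error(measured_mz, theo_mz)
        if abs(err) <= tolerance_ppm:
            hits.append((name, round(err, 2)))
    return hits
```

In practice the match would be further constrained by retention time and the SigmaFit isotopic pattern score, as described above.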

Relevance: 10.00%

Abstract:

Aims: The aims of this study were 1) to identify and describe health economic studies that have used quality-adjusted life years (QALYs) based on actual measurements of patients' health-related quality of life (HRQoL); 2) to test the feasibility of routine collection of HRQoL data as an indicator of the effectiveness of secondary health care; and 3) to establish and compare the cost-utility of three large-volume surgical procedures in a real-world setting in the Helsinki University Central Hospital, a large referral hospital providing secondary and tertiary health-care services for a population of approximately 1.4 million. Patients and methods: To identify studies that have used QALYs as an outcome measure, a systematic search of the literature was performed using the Medline, Embase, CINAHL, SCI and Cochrane Library electronic databases. Initial screening of the identified articles involved two reviewers independently reading the abstracts; the full-text articles were also evaluated independently by two reviewers, with a third reviewer used in cases where the two could not reach a consensus on which articles should be included. The feasibility of routinely evaluating the cost-effectiveness of secondary health care was tested by setting up a system for collecting data on approximately 4 900 patients' HRQoL before and after operative treatments performed in the hospital. The HRQoL data used as an indicator of treatment effectiveness were combined with diagnostic and financial indicators routinely collected in the hospital. To compare the cost-effectiveness of three surgical interventions, 712 patients admitted for routine operative treatment completed the 15D HRQoL questionnaire before and 3-12 months after the operation. QALYs were calculated using the obtained utility data and the expected remaining life years of the patients.
Direct hospital costs were obtained from the clinical patient administration database of the hospital, and a cost-utility analysis was performed from the perspective of the provider of secondary health care services. Main results: The systematic review (Study I) showed that although QALYs gained are considered an important measure of the effectiveness of health care, the number of studies in which QALYs are based on actual measurements of patients' HRQoL is still fairly limited. Of the reviewed full-text articles, only 70 reported QALYs based on actual before-after measurements using a valid HRQoL instrument. Collection of simple cost-effectiveness data in secondary health care is feasible and could easily be expanded and performed on a routine basis (Study II). It allows meaningful comparisons between various treatments and provides a means for allocating limited health care resources. The cost per QALY gained was €2 770 for cervical operations and €1 740 for lumbar operations. In cases where surgery was delayed, the cost per QALY was doubled (Study III). The cost per QALY varied between subgroups in cataract surgery (Study IV): €5 130 for patients having both eyes operated on and €8 210 for patients with only one eye operated on during the 6-month follow-up. In patients whose first eye had been operated on prior to the study period, the mean HRQoL deteriorated after surgery, precluding the establishment of the cost per QALY. In arthroplasty patients (Study V), the mean cost per QALY gained in a one-year period was €6 710 for primary hip replacement, €52 270 for revision hip replacement, and €14 000 for primary knee replacement. Conclusions: Although the importance of cost-utility analyses has been stressed during recent years, there are only a limited number of studies in which the evaluation is based on patients' own assessment of the treatment effectiveness.
Most cost-effectiveness and cost-utility analyses are based on modeling that employs expert opinion regarding the outcome of treatment, not on patient-derived assessments. Routine collection of effectiveness information from patients entering treatment in secondary health care turned out to be straightforward and did not, for instance, require additional personnel on the wards in which the study was executed. The mean patient response rate was more than 70%, suggesting that patients were happy to participate and appreciated the fact that the hospital showed an interest in their well-being even after the actual treatment episode had ended. Spinal surgery led to a statistically significant and clinically important improvement in HRQoL. The cost per QALY gained was reasonable, at less than half of that observed, for instance, for hip replacement surgery. However, prolonged waiting for an operation approximately doubled the cost per QALY gained from the surgical intervention. The mean utility gain following routine cataract surgery in a real-world setting was relatively small and confined mostly to patients who had had both eyes operated on. The cost of cataract surgery per QALY gained was higher than previously reported and was associated with a considerable degree of uncertainty. Hip and knee replacement both improve HRQoL. The cost per QALY gained from knee replacement was twice that of hip replacement. Cost-utility results from the three studied specialties showed that there is great variation in the cost-utility of surgical interventions performed in a real-world setting, even when only common, widely accepted interventions are considered. However, the cost per QALY of all the studied interventions, except for revision hip arthroplasty, was well below €50 000, a figure sometimes cited in the literature as a threshold level for the cost-effectiveness of an intervention.
Based on the present study it may be concluded that routine evaluation of the cost-utility of secondary health care is feasible and produces information essential for a rational and balanced allocation of scarce health care resources.
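The cost-utility arithmetic behind these figures is simple: QALYs gained are approximated as the change in HRQoL utility (here measured with the 15D instrument on a 0-1 scale) multiplied by the patient's expected remaining life years, and the ratio of direct costs to QALYs gained gives the cost per QALY. A sketch with hypothetical numbers:

```python
def cost_per_qaly(utility_before, utility_after, remaining_life_years, cost):
    """Cost-utility ratio from before/after HRQoL utilities (0-1 scale).

    QALYs gained are approximated as the utility change multiplied by the
    patient's expected remaining life years.
    """
    qalys_gained = (utility_after - utility_before) * remaining_life_years
    if qalys_gained <= 0:
        # Mirrors the cataract subgroup whose mean HRQoL deteriorated:
        # a cost per QALY cannot be established without a utility gain.
        raise ValueError("no utility gain: cost per QALY is undefined")
    return cost / qalys_gained

# Hypothetical patient: utility rises from 0.80 to 0.85, 20 expected
# remaining life years, operation cost 2 000 -> 1.0 QALY gained.
```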

Relevance: 10.00%

Abstract:

Although the principle of equal access to medically justified treatment has been promoted by official health policies in many Western health care systems, practices do not completely meet policy targets. Waiting times for elective surgery vary between patient groups and regions, and growing problems in the availability of services threaten equal access to treatment. Waiting times have come to the attention of decision-makers, and several policy initiatives have been introduced to ensure the availability of care within a reasonable time. In Finland, for example, the treatment guarantee came into force in 2005. However, no consensus exists on the optimal waiting time for different patient groups. The purpose of this multi-centre randomized controlled trial was to analyse health-related quality of life, pain and physical function in total hip or knee replacement patients during the waiting time, and to evaluate whether the waiting time is associated with patients' health outcomes at admission. This study also assessed whether the length of waiting time is associated with social and health services utilization in patients awaiting total hip or knee replacement. In addition, patients' health-related quality of life was compared with that of the general population. Consecutive patients with a need for a primary total hip or knee replacement due to osteoarthritis were placed on the waiting list between August 2002 and November 2003. Patients were randomly assigned to a short waiting time (maximum 3 months) or a non-fixed waiting time (waiting time not fixed in advance; instead, the patient followed the hospital's routine practice). Patients' health-related quality of life was measured upon placement on the waiting list and again at hospital admission using the generic 15D instrument. Pain and physical function were evaluated using the self-report Harris Hip Score for hip patients and a scale modified from the Knee Society Clinical Rating System for knee patients.
Utilization measures were the use of home health care, rehabilitation and social services, physician visits and inpatient care. Health and social services use was low in both waiting time groups. The most common services used while waiting were rehabilitation services and informal care, including unpaid care provided by relatives, neighbours and volunteers. Although patients suffered from clear restrictions in usual activities and physical functioning, they seemed primarily to lean on informal care and personal networks instead of professional care. While longer waiting time did not result in poorer health-related quality of life at admission, and use of services during the waiting time was similar to that at the time of placement on the list, the costs of waiting are likely to be higher for people who wait longer, simply because they use services for a longer period. In economic terms, this would represent a negative impact of waiting. Only a few reports have been published on the health-related quality of life of patients awaiting total hip or knee replacement. These findings demonstrate that, in addition to the physical dimensions of health, patients suffered from restrictions in psychological well-being such as depression, distress and reduced vitality. This raises the question of how to support patients who suffer from psychological distress during the waiting time, and how to develop strategies that support patients' own efforts to reduce symptoms and the burden of waiting. Key words: waiting time, total hip replacement, total knee replacement, health-related quality of life, randomized controlled trial, outcome assessment, social service, utilization of health services

Relevance: 10.00%

Abstract:

Rheumatoid arthritis (RA) is an autoimmune disease characterized by synovitis, progressive joint destruction, and disability. Reactive arthritis (ReA) is a sterile joint inflammation following a distant mucosal infection. The clinical course of these diseases is variable and cannot be predicted with reasonable accuracy by clinical and laboratory markers. The predictive value of circulating soluble interleukin-2 receptor (sIL-2R), a marker of lymphocyte activation, measured by an Immulite® automated immunoassay analyzer, was evaluated in two cohorts of RA patients. In 175 patients with active early RA randomized to treatment with either one disease-modifying antirheumatic drug (DMARD) or a combination of three DMARDs and prednisolone, a low baseline sIL-2R level predicted remission after 6 months in patients treated with a single DMARD. In 24 patients with active RA refractory to DMARDs, a low baseline sIL-2R level predicted a rapid clinical response to treatment with infliximab, an anti-tumour necrosis factor antibody. Furthermore, in a cohort of 26 patients with acute ReA, a high baseline sIL-2R level predicted remission after 6 months. Levels of circulating soluble E-selectin (sE-selectin), a marker of endothelial activation, were measured annually by enzyme-linked immunosorbent assay (ELISA) in a cohort of 85 patients with early RA. During a five-year follow-up, sE-selectin levels were associated with the activity and outcome of RA. The levels of neutrophil and monocyte CD11b/CD18 expression measured by flow cytometry, circulating levels of sE-selectin measured by ELISA, and procalcitonin measured by immunoluminometric assay were compared in 28 patients with acute ReA and 16 patients with early RA. The levels of the markers were comparable in ReA, RA, and healthy control subjects. In conclusion, sIL-2R may provide a new predictive marker in early RA treated with a single DMARD and in refractory RA treated with infliximab. In addition, the sIL-2R level predicts remission in acute ReA.

Relevance: 10.00%

Abstract:

World marine fisheries suffer from economic and biological overfishing: too many vessels are harvesting too few fish stocks. Fisheries economics has explained the causes of overfishing and provided a theoretical background for management systems capable of solving the problem. Yet only a few examples exist of fisheries managed by the principles of bioeconomic theory. With the aim of bridging the gap between the fish stock assessment models actually used to provide management advice and economic optimisation models, the thesis explores economically sound harvesting from national and international perspectives. Using data calibrated for the Baltic salmon and herring stocks, optimal harvesting policies are outlined using numerical methods. First, the thesis focuses on the socially optimal harvest of a single salmon stock by commercial and recreational fisheries. The results obtained using dynamic programming show that the optimal fishery configuration would be to close down three of the five studied fisheries. The result is robust to stock size fluctuations. Compared to a base case situation, the optimal fleet structure would yield a slight decrease in the commercial catch, but a recreational catch nearly seven times higher. As a result, the expected economic net benefits from the fishery would increase by nearly 60%, and the expected number of juvenile salmon (smolt) would increase by 30%. Second, the thesis explores the management of multiple salmon stocks in an international framework. Non-cooperative and cooperative game theory are used to demonstrate different "what if" scenarios. The results of the four-player game suggest that, despite the commonly agreed fishing quota, the behaviour of the countries has been closer to non-cooperation than cooperation. Cooperation would more than double the net benefits from the fishery compared to a past fisheries policy. Side payments, however, are a prerequisite for a cooperative solution.
Third, the thesis applies coalitional games in partition function form to study whether the cooperative solution would be stable despite the potential presence of positive externalities. The results show that the cooperation of two of the four studied countries can be stable. Compared to a past fisheries policy, a stable coalition structure would provide substantial economic benefits. Nevertheless, the status of the salmon stocks would not improve significantly. Fourth, the thesis studies the prerequisites for and potential consequences of implementing an individual transferable quota (ITQ) system in the Finnish herring fishery. Simulation results suggest that ITQs would reduce the number of fishing vessels but would enable positive profits to coincide with a higher stock size. The empirical findings of the thesis affirm that the profitability of the studied fisheries could be improved. The evidence, however, indicates that incentives for free riding exist, and thus the most preferable outcome in both economic and biological terms remains elusive.
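The dynamic programming approach used for the single-stock harvesting problem can be illustrated with a deliberately simplified model: a logistic stock, a single fleet, and a constant price per unit harvested. None of these parameters come from the thesis's calibrated Baltic salmon data; this is a generic sketch of value iteration over a discretized stock grid, which yields an optimal escapement policy:

```python
def solve_harvest_policy(price=1.0, r=0.4, K=100.0, beta=0.95,
                         n_states=101, n_iter=300):
    """Value iteration: choose how much stock to leave in the water each year."""
    stocks = [K * i / (n_states - 1) for i in range(n_states)]

    def nearest(s):  # index of the closest grid point to stock level s
        return min(range(n_states), key=lambda i: abs(stocks[i] - s))

    # Precompute where each escapement level grows to by next season (logistic)
    next_idx = [nearest(s + r * s * (1 - s / K)) for s in stocks]

    V = [0.0] * n_states       # value of entering a year with each stock level
    policy = [0.0] * n_states  # optimal harvest at each stock level
    for _ in range(n_iter):
        newV = [0.0] * n_states
        for i, s in enumerate(stocks):
            best, best_h = float("-inf"), 0.0
            for j in range(i + 1):  # leave stocks[j] in the water, harvest the rest
                value = price * (s - stocks[j]) + beta * V[next_idx[j]]
                if value > best:
                    best, best_h = value, s - stocks[j]
            newV[i], policy[i] = best, best_h
        V = newV
    return stocks, policy
```

With these toy parameters the converged policy is a constant-escapement rule: whatever the stock, harvest it down to roughly the level where the discounted marginal growth equals the price of harvesting now.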

Relevance: 10.00%

Abstract:

This study of Scandinavian multinational corporation (MNC) subsidiaries in the rapidly growing Eastern European market attempts to gain new insights into the processes and potential benefits of knowledge and technology transfer, given the subsidiaries' particular organizational structure. It explores how to succeed in knowledge transfer and become more competitive, driven by the need to improve the transfer of systematic knowledge for manufacturing products and providing services in a newly entered market. The scope of the research is limited to multinational corporations, defined as enterprises comprising entities in two or more countries, regardless of the legal form and field of activity of those entities, which operate under a system of decision-making permitting coherent policies and a common strategy through one or more decision-making centres. The entities are linked by ownership and are able to exercise influence over one another's activities and, in particular, to share knowledge, resources, and responsibilities. The research question is "How, and to what extent, can knowledge transfer influence a company's technological competence and economic competitiveness?" The study seeks to find out what forces and factors affect the development of subsidiary competencies; what factors influence the corporate integration and use of the subsidiary's competencies; and what may increase the competitiveness of an MNC pursuing a leading position in the entered market. The empirical part of the research was based on qualitative analyses of twenty interviews conducted among employees in Scandinavian MNC subsidiary units situated in Ukraine, using a structured sequence of questions with open-ended answers. The data were investigated by comparative case analysis against the literature framework.
Findings indicate that a technological competence developed in one subsidiary will lead to an integration of that competence with other corporate units within the MNC. Success increasingly depends upon people's learning. The local economic area is crucial for understanding competition and industrial performance, as there seems to be a clear link between the performance of subsidiaries and the conditions prevailing in their environment. Competitive advantage and company success are mutually dependent. Observations suggest that companies can be characterized as clusters of complementary activities such as R&D, administration, marketing, manufacturing and distribution. The study identifies barriers and obstacles in technology and knowledge transfer that are relevant to the subsidiaries' competence development. The accumulated experience can be implemented in a newly entered market with simple procedures and at low cost under specific circumstances, by cloning. The main goal is to support company prosperity, to increase profits, and to sustain an increased market share through improved product quality and/or reduced production costs in the subsidiaries via this cloning approach. Keywords: multinational corporation; technology transfer; knowledge transfer; subsidiary competence; barriers and obstacles; competitive advantage; Eastern European market

Relevance: 10.00%

Abstract:

This thesis studies the informational efficiency of the European Union emission allowance (EUA) market. In an efficient market, the market price is unpredictable and above-average profits are impossible in the long run. The main research problem is whether the EUA price follows a random walk. The method is an econometric analysis of the price series, which includes an autocorrelation coefficient test and a variance ratio test. The results reveal that the price series is autocorrelated and therefore does not follow a random walk. In order to determine the extent of predictability, the price series is modelled with an autoregressive model. The conclusion is that the EUA price is autocorrelated only to a small degree and that the predictability cannot be used to make extra profits. The EUA market is therefore considered informationally efficient, although the price series does not fulfil the requirements of a random walk. A market review supports this conclusion, but it is clear that the market is still maturing.
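The two tests named above can be sketched in a few lines. Under a random walk, log returns are serially uncorrelated and the variance of q-period returns is q times the one-period variance, so the variance ratio is close to one. This is an illustrative implementation, not the thesis's exact estimator (published variance ratio tests typically add small-sample and heteroskedasticity corrections):

```python
import math

def autocorrelation(returns, lag=1):
    """Sample autocorrelation of a return series at the given lag."""
    n = len(returns)
    mean = sum(returns) / n
    var = sum((x - mean) ** 2 for x in returns)
    cov = sum((returns[t] - mean) * (returns[t - lag] - mean)
              for t in range(lag, n))
    return cov / var

def variance_ratio(prices, q):
    """VR(q) = Var(q-period log return) / (q * Var(1-period log return))."""
    logp = [math.log(p) for p in prices]
    r1 = [logp[t] - logp[t - 1] for t in range(1, len(logp))]
    rq = [logp[t] - logp[t - q] for t in range(q, len(logp))]
    mu = sum(r1) / len(r1)
    var1 = sum((x - mu) ** 2 for x in r1) / len(r1)
    varq = sum((x - q * mu) ** 2 for x in rq) / len(rq)
    return varq / (q * var1)

# For a random-walk price series, autocorrelation(returns) is near 0 and
# variance_ratio(prices, q) is near 1; autocorrelated prices, as found
# for the EUA series, push both statistics away from those values.
```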

Relevance: 10.00%

Abstract:

This doctoral thesis describes the development of a miniaturized capillary electrochromatography (CEC) technique suitable for the study of interactions between various nanodomains of biological importance. The particular focus of the study was low-density lipoprotein (LDL) particles and their interactions with components of the extracellular matrix (ECM). LDL transports cholesterol to the tissues through the blood circulation, but when the LDL level becomes too high, the particles begin to permeate into and accumulate in the arteries. Through binding sites on apolipoprotein B-100 (apoB-100), LDL interacts with components of the ECM, such as proteoglycans (PGs) and collagen, in what is considered the key mechanism in the retention of lipoproteins and the onset of atherosclerosis. Hydrolytic enzymes and oxidizing agents in the ECM may later successively degrade the LDL surface. Metabolic diseases such as diabetes may provoke damage to the ECM structure through the non-enzymatic reaction of glucose with collagen. In this work, fused silica capillaries of 50 µm i.d. were successfully coated with LDL and collagen, and steroids and apoB-100 peptide fragments were introduced as model compounds for interaction studies. The LDL coating was modified with copper sulphate or hydrolytic enzymes, and the interactions of steroids with the native and oxidized lipoproteins were studied. Lipids were also removed from the LDL particle coating, leaving behind an apoB-100 surface for further studies. The development of collagen and collagen-decorin coatings was helpful in elucidating the interactions of apoB-100 peptide fragments with the primary ECM component, collagen. Furthermore, the collagen I coating provided a good platform for glycation studies and for clarifying LDL interactions with native and modified collagen. All the methods developed are inexpensive, requiring only small amounts of biomaterial.
Moreover, the experimental conditions in CEC are easily modified, and the analyses can be carried out in a reasonable time frame. Other techniques were employed to support and complement the CEC studies. Scanning electron microscopy and atomic force microscopy provided crucial visual information about the native and modified coatings. Asymmetrical flow field-flow fractionation enabled size measurements of the modified lipoproteins. Finally, the CEC results were exploited to develop new sensor chips for a continuous flow quartz crystal microbalance technique, which provided complementary information about LDL ECM interactions. This thesis demonstrates the potential of CEC as a valuable and flexible technique for surface interaction studies. Further, CEC can serve as a novel microreactor for the in situ modification of LDL and collagen coatings. The coatings developed in this study provide useful platforms for a diversity of future investigations on biological nanodomains.

Relevância:

10.00%

Publicador:

Resumo:

The dissertation focuses on the recognition of the problems of uneven regional development in Finland in the 1950s, and the way the idea of controlling this development was introduced to Finnish politics. Since it is often stated that Finnish regional policy only began in the mid-1960s, the period at hand is considered to precede regional policy proper. However, various ideas, plans and projects of regional development, as well as different aims of development, were brought forward and discussed already in the 1950s. These give an interesting perspective on the ideas of later regional development. In the 1950s, many Finnish politicians became more conscious of the unavoidable societal change. The need for overall modernisation of society made it reasonable to expect growing unemployment and increased migration. The uneven distribution of well-being was also feared to cause discontent and political changes. International experience showed that intervening in regional development was possible when justified by the public interest; the measures taken increased the level of well-being, helped sustain societal balance, and supported the national economy. Many of the development projects of the 1950s focused on Northern Finland, whose natural resources were considered an important reserve and whose political climate was regarded as unstable. After the late 1940s, regional development was discussed frequently on both the national and the regional level. Direct and indirect support was given to less developed areas, and the government commissioned thorough investigations in order to relieve the regional problem. Towards the end of the decade, the measures taken were already often connected to the idea of equality. In the 1950s, the conflicts within and between the largest Finnish political parties significantly affected the decisions on regional development. This qualitative study, based on the narrative method, comprises three case studies.
The case studies clarify the characteristics of 1950s regional development. In the first, the representatives of the northern region and the state discuss first the location of a state-run nitrogen fertilizer factory and later the location of a new university. In the second, the aims and perspectives of private entrepreneurs and the state collide, first over plans for state-led industrialisation projects and later over a proposed tax relief targeting northern industry. In the third case, the main role is given to the changing rural areas, against which societal development and urbanisation were often measured. The regional development of the 1950s laid the groundwork for the new, more established regional policy. The early problem-solving actions addressed both the prevailing situation and the future, and thus showed the way for later measures. Regional development policy existed already before regional policy.

Relevância:

10.00%

Publicador:

Resumo:

Palaeoenvironments of the latter half of the Weichselian ice age and the transition to the Holocene, from ca. 52 to 4 ka, were investigated using isotopic analysis of oxygen, carbon and strontium in mammal skeletal apatite. The study material consisted predominantly of subfossil bones and teeth of the woolly mammoth (Mammuthus primigenius Blumenbach), collected from Europe and Wrangel Island, northeastern Siberia. All samples have been radiocarbon dated, and their ages range from >52 ka to 4 ka. Altogether, 100 specimens were sampled for the isotopic work. In Europe, the studies focused on the glacial palaeoclimate and habitat palaeoecology. To minimise the influence of possible diagenetic effects, the palaeoclimatological and ecological reconstructions were based on the enamel samples only. The results of the oxygen isotope analysis of mammoth enamel phosphate from Finland and adjacent northwestern Russia, Estonia, Latvia, Lithuania, Poland, Denmark and Sweden provide the first estimate of oxygen isotope values in glacial precipitation in northern Europe. The glacial precipitation oxygen isotope values range from ca. -9.2 ± 1.5 ‰ in western Denmark to -15.3 ‰ in Kirillov, northwestern Russia. These values are 0.6-4.1 ‰ lower than those in present-day precipitation, with the largest changes recorded in the currently marine-influenced southern Sweden and the Baltic region. The new enamel-derived oxygen isotope data from this study, combined with oxygen isotope records from earlier investigations of mammoth tooth enamel and palaeogroundwaters, facilitate a reconstruction of the spatial patterns of the oxygen isotope values of precipitation and palaeotemperatures over much of Europe.
The reconstructed geographic pattern of oxygen isotope levels in precipitation during 52-24 ka reflects the progressive isotopic depletion of air masses moving northeast, consistent with a westerly source of moisture for the entire region and a circulation pattern similar to that of the present day. The application of regionally varied δ/T-slopes, estimated from palaeogroundwater data and modern spatial correlations, yields reasonable estimates of glacial surface temperatures in Europe and implies 2-9°C lower long-term mean annual surface temperatures during the glacial period. The isotopic composition of carbon in the enamel samples indicates a pure C3 diet for the European mammoths, in agreement with previous investigations of mammoth ecology. A faint geographical gradient in the carbon isotope values of enamel is discernible, with more negative values in the northeast. The spatial trend is consistent with the climatic implications of the enamel oxygen isotope data, but may also suggest regional differences in habitat openness. The palaeogeographical changes caused by the eustatic rise of global sea level at the end of the Weichselian ice age were investigated on Wrangel Island, using the strontium isotope (Sr-87/Sr-86) ratios in the skeletal apatite of the local mammoth fauna. The diagenetic evaluations suggest good preservation of the original Sr isotope ratios, even in the bone specimens included in the study material. To estimate present-day environmental Sr isotope values on Wrangel Island, bioapatite samples from modern reindeer and muskoxen, as well as surface waters from rivers and ice wedges, were analysed. A significant shift towards more radiogenic bioapatite Sr isotope ratios, from 0.71218 ± 0.00103 to 0.71491 ± 0.00138, marks the beginning of the Holocene. This implies a change in the migration patterns of the mammals, ultimately reflecting the inundation of the mainland connection and the isolation of the population.
The bioapatite Sr isotope data support published coastline reconstructions that place the separation from the mainland at ca. 10-10.5 ka. The shift towards more radiogenic Sr isotope values in mid-Holocene subfossil remains younger than 8 ka reflects the rapid rise of sea level from 10 to 8 ka, which considerably reduced the accessible range area on early Wrangel Island.
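The δ/T-slope temperature reconstruction mentioned above is, at its core, simple arithmetic: a shift in the oxygen isotope value of precipitation divided by the regional slope gives the implied temperature shift. A minimal sketch, using a hypothetical slope and isotope shift in the plausible range for Europe (not the thesis's actual regional figures):

```python
def temp_shift(d18o_shift_permil, slope_permil_per_degc):
    """Long-term mean annual temperature change implied by a shift in the
    oxygen isotope value of precipitation, given a regional delta/T slope."""
    return d18o_shift_permil / slope_permil_per_degc

# e.g. glacial precipitation 2.0 permil lighter than present-day,
# with an assumed regional slope of 0.6 permil per degree C:
print(round(temp_shift(-2.0, 0.6), 1))  # -3.3, i.e. ~3.3 degC cooler
```

With isotope shifts of 0.6-4.1 ‰ and regionally varied slopes, estimates of this kind fall in the 2-9°C cooling range the thesis reports.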

Relevância:

10.00%

Publicador:

Resumo:

Sensor networks represent an attractive tool to observe the physical world. Networks of tiny sensors can be used to detect a fire in a forest, to monitor the level of pollution in a river, or to check on the structural integrity of a bridge. Application-specific deployments of static-sensor networks have been widely investigated. Commonly, these networks involve a centralized data-collection point and no sharing of data outside the organization that owns it. Although this approach can accommodate many application scenarios, it significantly deviates from the pervasive computing vision of ubiquitous sensing, where user applications seamlessly access anytime, anywhere data produced by sensors embedded in the surroundings. With the ubiquity and ever-increasing capabilities of mobile devices, urban environments can help give substance to the ubiquitous sensing vision through Urbanets, spontaneously created urban networks. Urbanets consist of mobile multi-sensor devices, such as smart phones and vehicular systems, public sensor networks deployed by municipalities, and individual sensors incorporated in buildings, roads, or daily artifacts. My thesis is that "multi-sensor mobile devices can be successfully programmed to become the underpinning elements of an open, infrastructure-less, distributed sensing platform that can bring sensor data out of their traditional closed-loop networks into everyday urban applications". Urbanets can support a variety of services ranging from emergency and surveillance to tourist guidance and entertainment. For instance, cars can be used to provide traffic information services to alert drivers to upcoming traffic jams, and phones to provide shopping recommender services to inform users of special offers at the mall. Urbanets cannot be programmed using traditional distributed computing models, which assume underlying networks with functionally homogeneous nodes, stable configurations, and known delays.
Conversely, Urbanets have functionally heterogeneous nodes, volatile configurations, and unknown delays. Instead, solutions developed for sensor networks and mobile ad hoc networks can be leveraged to provide novel architectures that address Urbanet-specific requirements, while providing useful abstractions that hide the network complexity from the programmer. This dissertation presents two middleware architectures that can support mobile sensing applications in Urbanets. Contory offers a declarative programming model that views Urbanets as a distributed sensor database and exposes an SQL-like interface to developers. Context-aware Migratory Services provides a client-server paradigm, where services are capable of migrating to different nodes in the network in order to maintain a continuous and semantically correct interaction with clients. Compared to previous approaches to supporting mobile sensing urban applications, our architectures are entirely distributed and do not assume constant availability of Internet connectivity. In addition, they allow on-demand collection of sensor data with the accuracy and at the frequency required by every application. These architectures have been implemented in Java and tested on smart phones. They have proved successful in supporting several prototype applications and experimental results obtained in ad hoc networks of phones have demonstrated their feasibility with reasonable performance in terms of latency, memory, and energy consumption.
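The abstract does not reproduce Contory's actual query language, but the declarative model it describes (Urbanet as a distributed sensor database with an SQL-like interface) can be illustrated with a small local sketch; the data class, field names, and sample readings below are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    node: str      # device that produced the reading (phone, car, ...)
    sensor: str    # sensor type, e.g. "temperature" or "noise"
    value: float

def run_query(readings, sensor, min_value=None, limit=None):
    """Evaluate a tiny SELECT-like query over collected sensor readings:
    roughly SELECT * FROM sensors WHERE sensor = ? [AND value >= ?] [LIMIT n].
    In a real Urbanet the rows would be gathered on demand from nearby nodes."""
    rows = [r for r in readings if r.sensor == sensor]
    if min_value is not None:
        rows = [r for r in rows if r.value >= min_value]
    return rows[:limit] if limit else rows

data = [Reading("phone-1", "temperature", 21.5),
        Reading("car-7", "temperature", 24.0),
        Reading("phone-2", "noise", 68.0)]

hot = run_query(data, "temperature", min_value=22.0)
print([r.node for r in hot])  # ['car-7']
```

The point of the declarative style is that the application states *what* data it wants (sensor type, predicate, amount), while the middleware decides *how* to collect it across heterogeneous, volatile nodes.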

Relevância:

10.00%

Publicador:

Resumo:

Ubiquitous computing is about making computers and computerized artefacts a pervasive part of our everyday lives, bringing more and more activities into the realm of information. The computationalization and informationalization of everyday activities increase not only our reach, efficiency and capabilities but also the amount and kinds of data gathered about us and our activities. In this thesis, I explore how information systems can be constructed so that they handle this personal data in a reasonable manner. The thesis provides two kinds of results: on the one hand, tools and methods for both the construction and the evaluation of ubiquitous and mobile systems; on the other, an evaluation of the privacy aspects of a ubiquitous social awareness system. The work emphasises real-world experiments as the most important way to study privacy. Additionally, the state of current information systems as regards data protection is studied. The tools and methods in this thesis consist of three distinct contributions. An algorithm for locationing in cellular networks is proposed that does not require the location information to be revealed beyond the user's terminal. A prototyping platform for the creation of context-aware ubiquitous applications, called ContextPhone, is described and released as open source. Finally, a set of methodological findings for the use of smartphones in social scientific field research is reported. A central contribution of this thesis is the set of pragmatic tools that allows other researchers to carry out experiments. The evaluation of the ubiquitous social awareness application ContextContacts covers both the usage of the system in general and an analysis of its privacy implications. Drawing on several long-term field studies, the usage of the system is analyzed in the light of how users make inferences about others based on real-time contextual cues mediated by the system.
The analysis of privacy implications draws together the social psychological theory of self-presentation and research on privacy in ubiquitous computing, deriving a set of design guidelines for such systems. The main findings from these studies can be summarized as follows. The fact that ubiquitous computing systems gather more data about users can be used not only to study the use of such systems in an effort to create better systems, but also to study previously unstudied phenomena, such as the dynamic change of social networks. Systems that let people create new ways of presenting themselves to others can be fun for the users, but self-presentation requires several thoughtful design decisions that allow the manipulation of the image mediated by the system. Finally, the growing amount of computational resources available to users can be used to let them work with the data themselves, rather than remain passive subjects of data gathering.
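The thesis's locationing algorithm keeps the position estimate on the user's terminal, so no location data leaves the device. The exact algorithm is not given in this abstract; one simple client-side approach in that spirit is a signal-weighted centroid over locally observed cell towers, sketched below with made-up tower coordinates and weights:

```python
def weighted_centroid(observations):
    """Estimate the terminal's position locally as the signal-strength-weighted
    centroid of observed cell tower positions. Everything runs on the device,
    so the location never needs to be revealed to the network operator."""
    total_w = sum(w for _, _, w in observations)
    lat = sum(la * w for la, _, w in observations) / total_w
    lon = sum(lo * w for _, lo, w in observations) / total_w
    return lat, lon

# (latitude, longitude, signal weight) of cells currently heard by the phone
cells = [(60.17, 24.94, 3.0), (60.18, 24.96, 1.0)]
lat, lon = weighted_centroid(cells)
print(round(lat, 4), round(lon, 4))
```

The privacy property comes from where the computation runs, not from the estimator itself: any cell-based positioning method evaluated entirely on the terminal satisfies the same constraint.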