14 results for TECHNOLOGICAL PARAMETERS
in Helda - Digital Repository of University of Helsinki
Abstract:
This study addresses four issues concerning technological product innovations. First, the nature of the very early phases or "embryonic stages" of technological innovation is examined. Second, this study analyzes why and by what means people initiate innovation processes outside the technological community and the field of expertise of the established industry. In other words, this study addresses the initiation of innovation that occurs without the expertise of established organizations, such as technology firms, professional societies and research institutes operating in the technological field under consideration. Third, the significance of interorganizational learning processes for technological innovation is dealt with. Fourth, this analysis is supplemented by an examination of how network collaboration and learning change as formalized product development work and the commercialization of innovation advance. These issues are addressed through the empirical analysis of the following three product innovations: Benecol margarine, the Nordic Mobile Telephone system (NMT) and the ProWellness Diabetes Management System (PDMS). This study utilizes the theoretical insights of cultural-historical activity theory on the development of human activities and learning. Activity-theoretical conceptualizations are used in the critical assessment and advancement of the concept of networks of learning, originally proposed by the research group of organizational scientist Walter Powell. A network of learning refers to interorganizational collaboration that pools resources, ideas and know-how without market-based or hierarchical relations. The concept of an activity system is used in defining the nodes of the networks of learning. Network collaboration and learning are analyzed with regard to the shared object of development work. According to this study, enduring dilemmas and tensions in activity explain the participants' motives for carrying out actions that lead to novel product concepts in the early phases of technological innovation. These actions comprise the initiation of development work outside the relevant fields of expertise, as well as collaboration and learning across fields of expertise in the absence of market-based or hierarchical relations. These networks of learning are fragile and impermanent. This study suggests that networks of learning across fields of expertise are becoming increasingly crucial for innovation activities.
Abstract:
In dentistry, basic imaging techniques such as intraoral and panoramic radiography are in most cases the only imaging techniques required for the detection of pathology. Conventional intraoral radiographs provide images with sufficient information for most dental radiographic needs. Panoramic radiography produces a single image of both jaws, giving an excellent overview of oral hard tissues. Regardless of the technique, plain radiography has only a limited capability in the evaluation of three-dimensional (3D) relationships. Technological advances in radiological imaging have moved from two-dimensional (2D) projection radiography towards digital, 3D and interactive imaging applications. This has been achieved first by the use of conventional computed tomography (CT) and more recently by cone beam CT (CBCT). CBCT is a radiographic imaging method that allows accurate 3D imaging of hard tissues. CBCT has been used for dental and maxillofacial imaging for more than ten years and its availability and use are increasing continuously. However, at present, only best practice guidelines are available for its use, and the need for evidence-based guidelines on the use of CBCT in dentistry is widely recognized. We evaluated (i) retrospectively the use of CBCT in a dental practice, (ii) the accuracy and reproducibility of pre-implant linear measurements in CBCT and multislice CT (MSCT) in a cadaver study, (iii) prospectively the clinical reliability of CBCT as a preoperative imaging method for complicated impacted lower third molars, and (iv) the tissue and effective radiation doses and image quality of dental CBCT scanners in comparison with MSCT scanners in a phantom study. Using CBCT, subjective identification of anatomy and pathology relevant in dental practice can be readily achieved, but dental restorations may cause disturbing artefacts. CBCT examination offered additional radiographic information when compared with intraoral and panoramic radiographs. In terms of the accuracy and reliability of linear measurements in the posterior mandible, CBCT is comparable to MSCT. CBCT is a reliable means of determining the location of the inferior alveolar canal and its relationship to the roots of the lower third molar. CBCT scanners provided adequate image quality for dental and maxillofacial imaging while delivering considerably smaller effective doses to the patient than MSCT. The observed variations in patient dose and image quality emphasize the importance of optimizing the imaging parameters in both CBCT and MSCT.
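For reference, phantom-based dose comparisons of this kind typically report the effective dose as defined by the ICRP; the following is the standard definition rather than a formula quoted from the thesis:

```latex
% ICRP definition of effective dose (standard form, not taken from the thesis)
E = \sum_{T} w_{T} H_{T} = \sum_{T} w_{T} \sum_{R} w_{R} D_{T,R}
```

Here D_{T,R} is the mean absorbed dose to tissue T from radiation type R, w_R the radiation weighting factor, w_T the ICRP tissue weighting factor and H_T the equivalent dose; summing the weighted equivalent doses over the exposed tissues gives the effective dose used to compare CBCT and MSCT scanners.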
Abstract:
Pressurised hot water extraction (PHWE) exploits the unique temperature-dependent solvent properties of water, minimising the use of harmful organic solvents. Water is an environmentally friendly, cheap and easily available extraction medium. The effects of temperature, pressure and extraction time in PHWE have often been studied, but here the emphasis was on other parameters important for the extraction, most notably the dimensions of the extraction vessel and the stability and solubility of the analytes to be extracted. Non-linear data analysis and self-organising maps were employed in the data analysis to obtain correlations between the parameters studied, recoveries and relative errors. First, PHWE was combined on-line with liquid chromatography-gas chromatography (LC-GC), and the system was applied to the extraction and analysis of polycyclic aromatic hydrocarbons (PAHs) in sediment. The method is of superior sensitivity compared with the traditional methods, and only a small 10 mg sample was required for analysis. The commercial extraction vessels were replaced by laboratory-made stainless steel vessels because of problems that arose with them. The performance of the laboratory-made vessels was comparable to that of the commercial ones. In an investigation of the effect of thermal desorption in PHWE, it was found that at lower temperatures (200°C and 250°C) the effect of thermal desorption is smaller than the effect of the solvating property of hot water. At 300°C, however, thermal desorption is the main mechanism. The effect of the geometry of the extraction vessel on recoveries was studied with five specially constructed extraction vessels. In addition to the extraction vessel geometry, the sediment packing style and the direction of water flow through the vessel were investigated. The geometry of the vessel was found to have only a minor effect on the recoveries, and the same was true of the sediment packing style and the direction of water flow through the vessel. These are good results because these parameters do not have to be carefully optimised before the start of extractions. Liquid-liquid extraction (LLE) and solid-phase extraction (SPE) were compared as trapping techniques for PHWE. LLE was more robust than SPE and provided better recoveries and repeatabilities. Problems related to blocking of the Tenax trap and unrepeatable trapping of the analytes were encountered in SPE. Thus, although LLE is more labour intensive, it can be recommended over SPE. The stabilities of the PAHs in aqueous solutions were measured using a batch-type reaction vessel. Degradation was observed at 300°C even with the shortest heating time. Ketones, quinones and other oxidation products were observed. Although the conditions of the stability studies differed considerably from the extraction conditions in PHWE, the results indicate that the risk of analyte degradation must be taken into account in PHWE. The aqueous solubilities of acenaphthene, anthracene and pyrene were measured, first below and then above the melting point of the analytes. Measurements below the melting point were made to check that the equipment was working, and the results were compared with those obtained earlier. Good agreement was found between the measured and literature values. A new saturation cell was constructed for the solubility measurements above the melting point of the analytes because the flow-through saturation cell could not be used above the melting point.
An exponential relationship was found between the solubilities measured for pyrene and anthracene and temperature.
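As a rough illustration of the kind of exponential solubility-temperature relationship reported above, the sketch below fits S = a·exp(b·T) to hypothetical data points; the numbers are invented for illustration and are not the measured values for pyrene or anthracene.

```python
# Minimal sketch with hypothetical data: fitting an exponential
# solubility-temperature relationship S(T) = a * exp(b * T),
# which is a straight line in ln(S) versus T.
import numpy as np

T = np.array([150.0, 200.0, 250.0, 300.0])   # temperature, °C (hypothetical)
S = np.array([2.1, 14.0, 95.0, 610.0])       # aqueous solubility, mg/L (hypothetical)

# Least-squares fit of ln(S) = ln(a) + b*T
b, ln_a = np.polyfit(T, np.log(S), 1)
a = np.exp(ln_a)
print(f"S(T) ≈ {a:.3g} * exp({b:.3g} * T)")

# Interpolated solubility at an intermediate temperature
print(f"Predicted S(275 °C) ≈ {a * np.exp(b * 275):.3g} mg/L")
```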
Abstract:
This study addresses the following question: How to think about ethics in a technological world? The question is treated first thematically by framing central issues in the relationship between ethics and technology. This relationship has three distinct facets: i) technological advance poses new challenges for ethics, ii) traditional ethics may become poorly applicable in a technologically transformed world, and iii) the progress in science and technology has altered the concept of rationality in ways that undermine ethical thinking itself. The thematic treatment is followed by the description and analysis of three approaches to the questions framed. First, Hans Jonas's thinking on the ontology of life and the imperative of responsibility is studied. In Jonas's analysis, modern culture is found to be nihilistic because it is unable to understand organic life, to find meaning in reality, and to justify morals. At the root of nihilism Jonas finds dualism, the traditional Western way of seeing consciousness as radically separate from the material world. Jonas attempts to create a metaphysical grounding for an ethic that would take the technologically increased human powers into account and make the responsibility for future generations meaningful and justified. The second approach is Albert Borgmann's philosophy of technology, which mainly assesses the ways in which technological development has affected everyday life. Borgmann admits that modern technology has liberated humans from toil, disease, danger, and sickness. Furthermore, liberal democracy, possibilities for self-realization, and many of the freedoms we now enjoy would not be possible on a large scale without technology. Borgmann, however, argues that modern technology in itself does not provide a whole and meaningful life. In fact, technological conditions are often detrimental to the good life. Integrity in life, according to him, is to be sought among things and practices that evade technoscientific objectification and commodification. Larry Hickman's Deweyan philosophy of technology is the third approach under scrutiny. Central in Hickman's thinking is a broad definition of technology that is nearly equal to Deweyan inquiry. Inquiry refers to the reflective and experiential way humans adapt to their environment by modifying their habits and beliefs. In Hickman's work, technology consists of all kinds of activities that through experimentation and/or reflection aim at improving human techniques and habits. Thus, in addition to research and development, many arts and political reforms are technological for Hickman. He argues for recasting such distinctions as fact/value, poiesis/praxis/theoria, and individual/society. Finally, Hickman does not admit a categorical difference between ethics and technology: moral values and norms need to be submitted to experiential inquiry, as do all other notions. This study mainly argues for an interdisciplinary approach to the ethics of technology. This approach should make use of the potentialities of the research traditions in applied ethics, the philosophy of technology, and the social studies of science and technology, and attempt to overcome their limitations. This study also advocates an endorsement of mid-level ethics that concentrates on the practices, institutions, and policies of temporal human life. Mid-level describes the realm between the instantaneous and individualistic micro-level and the universal and global macro-level.
Abstract:
The dissertation consists of an introductory chapter and three essays that apply search-matching theory to study the interaction of labor market frictions, technological change and macroeconomic fluctuations. The first essay studies the impact of capital-embodied growth on equilibrium unemployment by extending a vintage capital/search model to incorporate vintage human capital. In addition to the capital obsolescence (or creative destruction) effect that tends to raise unemployment, vintage human capital introduces a skill obsolescence effect of faster growth that has the opposite sign. Faster skill obsolescence reduces the value of unemployment, hence wages and leads to more job creation and less job destruction, unambiguously reducing unemployment. The second essay studies the effect of skill biased technological change on skill mismatch and the allocation of workers and firms in the labor market. By allowing workers to invest in education, we extend a matching model with two-sided heterogeneity to incorporate an endogenous distribution of high and low skill workers. We consider various possibilities for the cost of acquiring skills and show that while unemployment increases in most scenarios, the effect on the distribution of vacancy and worker types varies according to the structure of skill costs. When the model is extended to incorporate endogenous labor market participation, we show that the unemployment rate becomes less informative of the state of the labor market as the participation margin absorbs employment effects. The third essay studies the effects of labor taxes on equilibrium labor market outcomes and macroeconomic dynamics in a New Keynesian model with matching frictions. Three policy instruments are considered: a marginal tax and a tax subsidy to produce tax progression schemes, and a replacement ratio to account for variability in outside options. In equilibrium, the marginal tax rate and replacement ratio dampen economic activity whereas tax subsidies boost the economy. The marginal tax rate and replacement ratio amplify shock responses whereas employment subsidies weaken them. The tax instruments affect the degree to which the wage absorbs shocks. We show that increasing tax progression when taxation is initially progressive is harmful for steady state employment and output, and amplifies the sensitivity of macroeconomic variables to shocks. When taxation is initially proportional, increasing progression is beneficial for output and employment and dampens shock responses.
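For orientation, the essays build on the standard search-matching (Diamond-Mortensen-Pissarides) framework; the canonical textbook building blocks are sketched below, and the essays' own specifications (vintage capital, two-sided heterogeneity, New Keynesian elements) extend them in the ways described above.

```latex
% Canonical search-matching building blocks (standard forms, not the essays' exact models)
m(u, v) = \mu\, u^{\alpha} v^{1-\alpha}, \qquad \theta = \frac{v}{u},
\qquad f(\theta) = \frac{m(u,v)}{u} = \mu\,\theta^{1-\alpha},
\qquad q(\theta) = \frac{m(u,v)}{v} = \mu\,\theta^{-\alpha},
\qquad u^{*} = \frac{s}{s + f(\theta)}
```

Here u and v denote unemployment and vacancies, θ is labor market tightness, f(θ) and q(θ) are the job-finding and vacancy-filling rates, s is the separation rate, and u* is steady-state unemployment along the Beveridge curve.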
Abstract:
Technological development of fast multi-sectional, helical computed tomography (CT) scanners has enabled the use of computed tomography perfusion (CTp) and angiography (CTA) in evaluating acute ischemic stroke. This study focuses on new multidetector computed tomography techniques, namely whole-brain and first-pass CT perfusion, as well as CTA of the carotid arteries. Whole-brain CTp data are acquired during slow infusion of contrast material to achieve a constant contrast concentration in the cerebral vasculature. From these data, quantitative maps of perfused cerebral blood volume (pCBV) are constructed. The probability curve of cerebral infarction as a function of normalized pCBV was determined in patients with acute ischemic stroke. Normalized pCBV, expressed as a percentage of contralateral normal brain pCBV, was determined in the infarction core and in regions just inside and outside the boundary between infarcted and noninfarcted brain. The corresponding probabilities of infarction were 0.99, 0.96, and 0.11; R² was 0.73; and the differences in perfusion between the core and the inner and outer bands were highly significant. Thus, a probability-of-infarction curve can help predict the likelihood of infarction as a function of percentage normalized pCBV. First-pass CT perfusion is based on continuous cine imaging over a selected brain area during a bolus injection of contrast. During its first passage, contrast material compartmentalizes in the intravascular space, resulting in transient tissue enhancement. Functional maps of cerebral blood flow (CBF), cerebral blood volume (CBV), and mean transit time (MTT) are then constructed. We compared the effects of three different iodine concentrations (300, 350, or 400 mg/mL) on peak enhancement of normal brain tissue, artery, and vein, stratified by region-of-interest (ROI) location, in 102 patients within 3 hours of stroke onset. Monotonically increasing peak opacification was evident at all ROI locations, suggesting that CTp evaluation of patients with acute stroke is best performed with the highest available concentration of contrast agent. In another study we investigated whether lesion volumes on CBV, CBF, and MTT maps within 3 hours of stroke onset predict final infarct volume, and whether all these parameters are needed for triage to intravenous recombinant tissue plasminogen activator (IV-rtPA). We also investigated the effect of IV-rtPA on the affected brain by measuring the salvaged tissue volume in patients receiving IV-rtPA and in controls. CBV lesion volume did not necessarily represent dead tissue. MTT lesion volume alone can serve to identify the upper size limit of the abnormally perfused brain, and patients receiving IV-rtPA salvaged more brain tissue than did controls. Carotid CTA was compared with carotid digital subtraction angiography (DSA) in the grading of stenosis in patients with stroke symptoms. In CTA, the grade of stenosis was determined by means of axial source and maximum intensity projection (MIP) images as well as a semiautomatic vessel analysis. CTA provides an adequate, less invasive alternative to conventional DSA, although it tends to underestimate clinically relevant grades of stenosis.
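A probability-of-infarction curve of the kind described above can be thought of as a regression of infarction outcome on normalized pCBV; the sketch below illustrates the idea with logistic regression on invented data (the values, and the choice of logistic regression, are assumptions for illustration, not the study's actual method or measurements).

```python
# Minimal sketch with hypothetical data: probability of infarction as a
# function of normalized pCBV (% of contralateral normal brain).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical ROI samples: normalized pCBV (%) and outcome (1 = infarcted)
pcbv = np.array([[20], [30], [40], [55], [65], [75], [85], [95], [105], [115]])
infarcted = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

model = LogisticRegression().fit(pcbv, infarcted)

# Estimated probability of infarction at selected normalized pCBV values
for value in (40, 70, 100):
    p = model.predict_proba([[value]])[0, 1]
    print(f"P(infarction | normalized pCBV = {value}%) = {p:.2f}")
```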
Abstract:
Bone mass accrual and maintenance are regulated by a complex interplay between genetic and environmental factors. Recent studies have revealed an important role for the low-density lipoprotein receptor-related protein 5 (LRP5) in this process. The aim of this thesis study was to identify novel variants in the LRP5 gene and to further elucidate the association of LRP5 and its variants with various bone health related clinical characteristics. The results of our studies show that loss-of-function mutations in LRP5 cause severe osteoporosis not only in homozygous subjects but also in the carriers of these mutations, who have significantly reduced bone mineral density (BMD) and increased susceptibility to fractures. In addition, we demonstrated for the first time that a common polymorphic LRP5 variant (p.A1330V) was associated with reduced peak bone mass, an important determinant of BMD and osteoporosis in later life. The results from these two studies are concordant with results seen in other studies on LRP5 mutations and in association studies linking genetic variation in LRP5 with BMD and osteoporosis. Several rare LRP5 variants were identified in children with recurrent fractures. Sequencing and multiplex ligation-dependent probe amplification (MLPA) analyses revealed no disease-causing mutations or whole-exon deletions. Our findings from clinical assessments and family-based genotype-phenotype studies suggested that the rare LRP5 variants identified are not the definite cause of fractures in these children. Clinical assessments of our study subjects with LRP5 mutations revealed an unexpectedly high prevalence of impaired glucose tolerance and dyslipidaemia. Moreover, in subsequent studies we discovered that common polymorphic LRP5 variants are associated with unfavorable metabolic characteristics. Changes in lipid profile were already apparent in pre-pubertal children. These results, together with the findings from other studies, suggest an important role for LRP5 also in glucose and lipid metabolism. Our results underscore the important role of LRP5 not only in bone mass accrual and maintenance of skeletal health but also in glucose and lipid metabolism. The role of LRP5 in bone metabolism has long been studied, but further studies with larger cohorts are still needed to evaluate the specific role of LRP5 variants as metabolic risk factors.
Abstract:
Thin films are the basis of much of recent technological advance, ranging from coatings with mechanical or optical benefits to platforms for nanoscale electronics. In the latter, semiconductors have been the norm ever since silicon became the main construction material for a multitude of electronic components. The array of characteristics of silicon-based systems can be widened by manipulating the structure of the thin films at the nanoscale - for instance, by making them porous. The different characteristics of different films can then to some extent be combined by simple superposition. Thin films can be manufactured using many different methods. One emerging field is cluster beam deposition, where aggregates of hundreds or thousands of atoms are deposited one by one to form a layer, the characteristics of which depend on the parameters of deposition. One critical parameter is the deposition energy, which dictates how porous, if at all, the layer becomes. Other parameters, such as sputtering rate and aggregation conditions, affect the size and consistency of the individual clusters. Understanding nanoscale processes, which cannot be observed experimentally, is fundamental to optimizing experimental techniques and inventing new possibilities for advances at this scale. Atomistic computer simulations offer a window to the world of nanometers and nanoseconds in a way unparalleled by the most accurate of microscopes. Transmission electron microscope image simulations can bridge the gap between simulation and experiment by providing a tangible link between the simulated and the experimental. In this thesis, the entire process of cluster beam deposition is explored using molecular dynamics and image simulations. The process begins with the formation of the clusters, which is investigated for Si/Ge in an Ar atmosphere. The structure of the clusters is optimized to bring it as close to the experimental ideal as possible. Then, clusters are deposited, one by one, onto a substrate, until a sufficiently thick layer has been produced. Finally, the concept is expanded by further deposition with different parameters, resulting in multiple superimposed layers of different porosities. This work demonstrates how the aggregation of clusters is not entirely understood within the scope of the approximations used in the simulations; yet, it is also shown how the continued deposition of clusters with a varying deposition energy can lead to a novel kind of nanostructured thin film: a multielemental porous multilayer. According to theory, these new structures have characteristics that can be tailored for a variety of applications, with precision heretofore unseen in conventional multilayer manufacture.
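As an illustration of how porosity, the property controlled by the deposition energy, might be quantified from a simulated layer, the sketch below voxelizes atom coordinates and reports the fraction of empty cells; the cell size, coordinates and units are assumptions for illustration, not the analysis actually used in the thesis.

```python
# Minimal sketch (assumed inputs): estimating the porosity of a deposited
# layer from atom coordinates by counting empty voxels.
import numpy as np

def layer_porosity(coords, z_min, z_max, box_xy, cell=2.5):
    """Fraction of empty voxels between z_min and z_max (lengths in Å assumed)."""
    nx = max(1, int(box_xy[0] // cell))
    ny = max(1, int(box_xy[1] // cell))
    nz = max(1, int((z_max - z_min) // cell))
    occupied = np.zeros((nx, ny, nz), dtype=bool)
    for x, y, z in coords:
        if z_min <= z < z_max:
            ix = int(x / cell) % nx                   # periodic in x and y
            iy = int(y / cell) % ny
            iz = min(int((z - z_min) / cell), nz - 1)
            occupied[ix, iy, iz] = True
    return 1.0 - occupied.mean()

# Hypothetical usage: random coordinates standing in for an MD snapshot
rng = np.random.default_rng(0)
coords = rng.uniform([0.0, 0.0, 0.0], [50.0, 50.0, 40.0], size=(4000, 3))
print(f"Estimated porosity: {layer_porosity(coords, 0.0, 40.0, (50.0, 50.0)):.2f}")
```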
Abstract:
The aim of this study was to examine the association between on-farm assessed welfare and the production results of sows. Welfare was assessed using the Finnish welfare index (A-index). Two different production data sets were used as production results, both based on the national production monitoring data. The welfare assessments were carried out on 30 piglet-producing farms during March 2007. The A-index consists of six categories: 'possibility to move', 'floor properties', 'social contacts', 'light, air and noise', 'feeding and water supply', and 'animal health and standard of care'. Each category contains 3-10 mainly environment-based variables, which vary between units. The maximum score for a unit is 100. The welfare measurements were made in the farrowing, mating and gestation units. Because of the small number of separate mating units (n=7), the farm-specific mating and gestation unit scores were combined and their means were used in the analyses. Associations with production were examined using two different data sets: 1) the Farm report data (n=29), consisting of unedited farm and production results for the year preceding the farm visit, and 2) the POTSI data (n=30), consisting of production data processed with the POTSI program (MTT), which accounts for the effect of the management group (farm, year, season) on the litter-level production of gilts and sows. Associations were analysed with correlation and regression analyses. Although participation in the study was voluntary, both production data sets indicate that the study farms represent average-producing Finnish pig farms. The total A-index scores ranged from 37.5 to 64.0 in the farrowing units and from 39.5 to 83.5 in the gestation units. With the Farm report data, better scores in the 'animal health and standard of care' category of the farrowing unit shortened the reproductive cycle, increased the number of litters and piglets born, and reduced the number of stillborn piglets. According to the regression model, the 'animal health and standard of care' category explained variation in the number of piglets born, the length of the farrowing interval, and the mean parity. Better scores in the 'possibility to move' category of the gestation unit reduced the number of litters born and the numbers of both born and weaned piglets. According to the regression model, the proportion of gilt litters and the 'possibility to move' category scores explained variation in the number of weaned piglets. In the POTSI data, a reduced number of stillborn piglets was associated in gilts with better 'social contacts' scores in the farrowing unit and in sows with better 'animal health and standard of care' scores in the gestation unit. The results obtained with the two production data sets differed from each other. In future studies it is therefore preferable to use Farm report data, in which production results are reported on a yearly basis. Based on this study, welfare and production are associated, and these associations also have considerable economic significance. In particular, good animal care and animal health increase the number of piglets produced and shorten the reproductive cycle. Special attention should be paid to the social stress of loose-housed dry sows and to ensuring access to feed for all individuals.
Abstract:
The electrical activity of the heart consists of repeated cardiomyocyte depolarizations and repolarizations. Abnormalities in repolarization predispose to ventricular arrhythmias. In the body surface electrocardiogram, ventricular repolarization generates the T wave. Several electrocardiographic measures have been developed both for clinical and research purposes to detect repolarization abnormalities. The study aim was to investigate modifiers of ventricular repolarization, focusing on the relationship of left ventricular mass, antihypertensive drugs, and common gene variants to electrocardiographic repolarization parameters. The prognostic value of repolarization parameters was also assessed. The study subjects originated from a population of more than 200 middle-aged hypertensive men attending the GENRES hypertension study, and from an epidemiological survey, the Health 2000 Study, which included more than 6000 participants. Ventricular repolarization was analysed from digital standard 12-lead resting electrocardiograms with two QT-interval based repolarization parameters (the QT interval and the T-wave peak to T-wave end interval) and with a set of four T-wave morphology parameters. The results showed that in hypertensive men, a linear change in repolarization parameters is present even in the normal range of left ventricular mass, and that even mild left ventricular hypertrophy is associated with potentially adverse electrocardiographic repolarization changes. In addition, treatments with losartan, bisoprolol, amlodipine, and hydrochlorothiazide have divergent short-term effects on repolarization parameters in hypertensive men. Analyses of the general population sample showed that single nucleotide polymorphisms in the KCNH2, KCNE1, and NOS1AP genes are associated with changes in QT-interval based repolarization parameters but not consistently with T-wave morphology parameters. T-wave morphology parameters, but not the QT interval or the T-wave peak to T-wave end interval, provided independent prognostic information on mortality. The prognostic value was specifically related to cardiovascular mortality. The results indicate that, in hypertension, altered ventricular repolarization is already present with mild increases in left ventricular mass, and that commonly used antihypertensive drugs may modify electrocardiographic repolarization parameters relatively rapidly and in a treatment-specific manner. Common variants in cardiac ion channel genes and the NOS1AP gene may also modify repolarization-related arrhythmia vulnerability. In the general population, T-wave morphology parameters may be useful in the risk assessment of cardiovascular mortality.
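For readers unfamiliar with the two QT-interval-based parameters mentioned above, the sketch below shows how they are derived from fiducial points on the ECG; the timing values are hypothetical, and the Bazett heart-rate correction shown at the end is a common convention rather than necessarily the one applied in this study.

```python
# Minimal sketch with hypothetical fiducial points (milliseconds from QRS onset):
# deriving the QT and T-wave peak to T-wave end intervals from one beat.
q_onset = 0.0    # onset of the QRS complex
t_peak = 300.0   # peak of the T wave
t_end = 400.0    # end of the T wave
rr = 800.0       # preceding RR interval, ms (hypothetical)

qt = t_end - q_onset               # QT interval
tpte = t_end - t_peak              # T-wave peak to T-wave end interval
qtc = qt / (rr / 1000.0) ** 0.5    # Bazett-corrected QT (common convention)

print(f"QT = {qt:.0f} ms, TpTe = {tpte:.0f} ms, QTc (Bazett) = {qtc:.0f} ms")
```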