13 results for Technological tests
in Helda - Digital Repository of the University of Helsinki
Abstract:
This study addresses four issues concerning technological product innovations. First, the nature of the very early phases or "embryonic stages" of technological innovation is addressed. Second, this study analyzes why and by what means people initiate innovation processes outside the technological community and the field of expertise of the established industry. In other words, this study addresses the initiation of innovation that occurs without the expertise of established organizations, such as technology firms, professional societies and research institutes operating in the technological field under consideration. Third, the significance of interorganizational learning processes for technological innovation is dealt with. Fourth, this analysis is supplemented by considering how network collaboration and learning change as formalized product development work and the commercialization of the innovation advance. These issues are addressed through the empirical analysis of three product innovations: Benecol margarine, the Nordic Mobile Telephone system (NMT) and the ProWellness Diabetes Management System (PDMS). This study utilizes the theoretical insights of cultural-historical activity theory on the development of human activities and learning. Activity-theoretical conceptualizations are used in the critical assessment and advancement of the concept of networks of learning, originally proposed by the research group of the organizational scientist Walter Powell. A network of learning refers to interorganizational collaboration that pools resources, ideas and know-how without market-based or hierarchical relations. The concept of an activity system is used in defining the nodes of the networks of learning. Network collaboration and learning are analyzed with regard to the shared object of development work. According to this study, enduring dilemmas and tensions in activity explain the participants' motives for carrying out actions that lead to novel product concepts in the early phases of technological innovation. These actions comprise the initiation of development work outside the relevant fields of expertise, and collaboration and learning across fields of expertise in the absence of market-based or hierarchical relations. These networks of learning are fragile and impermanent. This study suggests that networks of learning across fields of expertise are becoming increasingly crucial for innovation activities.
Abstract:
ALICE (A Large Ion Collider Experiment) is an experiment at CERN (European Organization for Nuclear Research) whose dedicated heavy-ion detector is designed to exploit the unique physics potential of nucleus-nucleus interactions at LHC (Large Hadron Collider) energies. As part of that project, 716 so-called type V4 modules were assembled in the Detector Laboratory of the Helsinki Institute of Physics during the years 2004-2006. With altogether over a million detector strips, this has been the most massive particle detector project in the history of Finnish science. One ALICE SSD module consists of a double-sided silicon sensor, two hybrids containing 12 HAL25 front-end readout chips, and some passive components, such as resistors and capacitors. The components are connected together by TAB (Tape Automated Bonding) microcables. The components of the modules were tested in every assembly phase with comparable electrical tests to ensure the reliable functioning of the detectors and to pinpoint possible problems. The components were accepted or rejected according to limits confirmed by the ALICE collaboration. This study concentrates on the test results of the framed chips, hybrids and modules. The total yield of the framed chips is 90.8%, of the hybrids 96.1% and of the modules 86.2%. The individual test results have been investigated in the light of the known error sources that appeared during the project. After the problems arising during the learning curve of the project were solved, material problems, such as defective chip cables and sensors, seemed to cause most of the assembly rejections. These problems were typically seen in the tests as too many individual channel failures. In contrast, bonding failures rarely caused the rejection of a component. The sensors of one of the three sensor manufacturers proved to be of lower quality than the others: they are very noisy, and their depletion voltages are usually outside the specification given to the manufacturers. Reaching a 95% assembly yield during module production demonstrates that the assembly process was highly successful.
Abstract:
This study addresses the following question: how should we think about ethics in a technological world? The question is treated first thematically by framing central issues in the relationship between ethics and technology. This relationship has three distinct facets: i) technological advance poses new challenges for ethics, ii) traditional ethics may become poorly applicable in a technologically transformed world, and iii) progress in science and technology has altered the concept of rationality in ways that undermine ethical thinking itself. The thematic treatment is followed by the description and analysis of three approaches to the questions framed. First, Hans Jonas's thinking on the ontology of life and the imperative of responsibility is studied. In Jonas's analysis, modern culture is found to be nihilistic because it is unable to understand organic life, to find meaning in reality, and to justify morals. At the root of nihilism Jonas finds dualism, the traditional Western way of seeing consciousness as radically separate from the material world. Jonas attempts to create a metaphysical grounding for an ethic that would take the technologically increased human powers into account and make the responsibility for future generations meaningful and justified. The second approach is Albert Borgmann's philosophy of technology, which mainly assesses the ways in which technological development has affected everyday life. Borgmann admits that modern technology has liberated humans from toil, disease, and danger. Furthermore, liberal democracy, possibilities for self-realization, and many of the freedoms we now enjoy would not be possible on a large scale without technology. Borgmann, however, argues that modern technology in itself does not provide a whole and meaningful life. In fact, technological conditions are often detrimental to the good life. Integrity in life, according to him, is to be sought among things and practices that evade technoscientific objectification and commodification. Larry Hickman's Deweyan philosophy of technology is the third approach under scrutiny. Central to Hickman's thinking is a broad definition of technology that is nearly equal to Deweyan inquiry. Inquiry refers to the reflective and experiential way humans adapt to their environment by modifying their habits and beliefs. In Hickman's work, technology consists of all kinds of activities that through experimentation and/or reflection aim at improving human techniques and habits. Thus, in addition to research and development, many arts and political reforms are technological for Hickman. He argues for recasting such distinctions as fact/value, poiesis/praxis/theoria, and individual/society. Finally, Hickman does not admit a categorical difference between ethics and technology: moral values and norms, like all other notions, need to be submitted to experiential inquiry. This study mainly argues for an interdisciplinary approach to the ethics of technology. This approach should make use of the potentialities of the research traditions in applied ethics, the philosophy of technology, and the social studies of science and technology, and attempt to overcome their limitations. This study also advocates an endorsement of mid-level ethics, which concentrates on the practices, institutions, and policies of temporal human life. "Mid-level" describes the realm between the instantaneous and individualistic micro-level and the universal and global macro-level.
Abstract:
The dissertation consists of an introductory chapter and three essays that apply search-matching theory to study the interaction of labor market frictions, technological change and macroeconomic fluctuations. The first essay studies the impact of capital-embodied growth on equilibrium unemployment by extending a vintage capital/search model to incorporate vintage human capital. In addition to the capital obsolescence (or creative destruction) effect that tends to raise unemployment, vintage human capital introduces a skill obsolescence effect of faster growth that has the opposite sign. Faster skill obsolescence reduces the value of unemployment and hence wages, leading to more job creation and less job destruction, and thus unambiguously reducing unemployment. The second essay studies the effect of skill-biased technological change on skill mismatch and the allocation of workers and firms in the labor market. By allowing workers to invest in education, we extend a matching model with two-sided heterogeneity to incorporate an endogenous distribution of high- and low-skill workers. We consider various possibilities for the cost of acquiring skills and show that while unemployment increases in most scenarios, the effect on the distribution of vacancy and worker types varies according to the structure of skill costs. When the model is extended to incorporate endogenous labor market participation, we show that the unemployment rate becomes less informative about the state of the labor market, as the participation margin absorbs employment effects. The third essay studies the effects of labor taxes on equilibrium labor market outcomes and macroeconomic dynamics in a New Keynesian model with matching frictions. Three policy instruments are considered: a marginal tax and a tax subsidy to produce tax progression schemes, and a replacement ratio to account for variability in outside options. In equilibrium, the marginal tax rate and the replacement ratio dampen economic activity, whereas tax subsidies boost the economy. The marginal tax rate and the replacement ratio amplify shock responses, whereas employment subsidies weaken them. The tax instruments affect the degree to which the wage absorbs shocks. We show that increasing tax progression when taxation is initially progressive is harmful for steady-state employment and output, and amplifies the sensitivity of macroeconomic variables to shocks. When taxation is initially proportional, increasing progression is beneficial for output and employment and dampens shock responses.
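As background, all three essays build on a search-matching (Diamond-Mortensen-Pissarides) framework; the following generic steady-state relations are a sketch of that framework, not the essays' exact specifications, and show the channel through which the effects above operate:

  m(u, v) = \mu u^{\alpha} v^{1-\alpha}, \qquad u = \frac{s}{s + \theta q(\theta)},

where u is the unemployment rate, v the vacancy rate, \theta = v/u labor market tightness, q(\theta) = m(u,v)/v the vacancy filling rate, and s the job destruction rate. Any force that raises job creation (higher \theta) or lowers job destruction (lower s), such as the skill obsolescence effect in the first essay, reduces steady-state unemployment.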
Abstract:
This thesis studies quantile residuals and uses different methodologies to develop test statistics that are applicable in evaluating linear and nonlinear time series models based on continuous distributions. Models based on mixtures of distributions are of special interest because it turns out that for those models traditional residuals, often referred to as Pearson's residuals, are not appropriate. As such models have become increasingly popular in practice, especially with financial time series data, there is a need for reliable diagnostic tools that can be used to evaluate them. The aim of the thesis is to show how such diagnostic tools can be obtained and used in model evaluation. The quantile residuals considered here are defined in such a way that, when the model is correctly specified and its parameters are consistently estimated, they are approximately independent with a standard normal distribution. All the tests derived in the thesis are pure significance type tests and are theoretically sound in that they properly take the uncertainty caused by parameter estimation into account. In Chapter 2, a general framework based on the likelihood function and smooth functions of univariate quantile residuals is derived that can be used to obtain misspecification tests for various purposes. Three easy-to-use tests aimed at detecting non-normality, autocorrelation, and conditional heteroscedasticity in quantile residuals are formulated. It also turns out that these tests can be interpreted as Lagrange multiplier or score tests, so that they are asymptotically optimal against local alternatives. Chapter 3 extends the concept of quantile residuals to multivariate models. The framework of Chapter 2 is generalized, and tests aimed at detecting non-normality, serial correlation, and conditional heteroscedasticity in multivariate quantile residuals are derived based on it. Score test interpretations are obtained for the serial correlation and conditional heteroscedasticity tests and, in a rather restricted special case, for the normality test. In Chapter 4, the tests are constructed using the empirical distribution function of quantile residuals. The so-called Khmaladze martingale transformation is applied in order to eliminate the uncertainty caused by parameter estimation. Various test statistics are considered, so that critical bounds for histogram type plots as well as Quantile-Quantile and Probability-Probability type plots of quantile residuals are obtained. Chapters 2, 3, and 4 contain simulations and empirical examples which illustrate the finite sample size and power properties of the derived tests and also how the tests and related graphical tools based on residuals are applied in practice.
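For reference, the quantile residuals described above admit a compact definition; the following is a sketch in generic notation (the thesis's own notation may differ). For an observation y_t with conditional distribution function F evaluated at the estimated parameter vector, the quantile residual is

  r_t = \Phi^{-1}\bigl( F(y_t \mid \mathcal{F}_{t-1}; \hat{\theta}) \bigr),

where \Phi^{-1} is the standard normal quantile function and \mathcal{F}_{t-1} the information set at time t-1. Under correct specification and consistent estimation, r_1, ..., r_T are approximately independent with a standard normal distribution, which is what makes them usable in place of Pearson's residuals for mixture models.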
Abstract:
The TOTEM collaboration has developed and tested the first prototype of its Roman Pots to be operated in the LHC. TOTEM Roman Pots contain stacks of 10 silicon detectors with strips oriented in two orthogonal directions. To measure proton scattering angles of a few microradians, the detectors will approach the beam centre to a distance of 10 sigma + 0.5 mm (= 1.3 mm). Dead space near the detector edge is minimised by using two novel "edgeless" detector technologies. The silicon detectors are used both for precise track reconstruction and for triggering. The first full-sized prototypes of both detector technologies as well as their read-out electronics have been developed, built and operated. The tests took place first in a fixed-target muon beam at CERN's SPS, and then in the proton beam-line of the SPS accelerator ring. We present the test beam results demonstrating the successful functionality of the system despite slight technical shortcomings to be improved in the near future.
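As a side calculation implied by the figures quoted above, the approach distance fixes the assumed transverse beam size at the Roman Pot location:

  10\sigma + 0.5\,\text{mm} = 1.3\,\text{mm} \quad\Rightarrow\quad \sigma = \frac{1.3\,\text{mm} - 0.5\,\text{mm}}{10} = 80\,\mu\text{m}.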
Abstract:
Most new drug molecules discovered today suffer from poor bioavailability. Poor oral bioavailability results mainly from the poor dissolution properties of hydrophobic drug molecules, because drug dissolution is often the rate-limiting step of the drug's absorption through the intestinal wall into the systemic circulation. During the last few years, the use of mesoporous silica and silicon particles as oral drug delivery vehicles has been widely studied, and there have been promising results on their suitability for enhancing the physicochemical properties of poorly soluble drug molecules. Mesoporous silica and silicon particles can be used to enhance the solubility and dissolution rate of a drug by incorporating the drug inside the pores, which are only a few times larger than the drug molecules, thus breaking the crystalline structure into a disordered, amorphous form with better dissolution properties. The high surface area of the mesoporous particles also improves the dissolution rate of the incorporated drug. In addition, mesoporous materials can enhance the permeability of large, hydrophilic drug substances across biological barriers. The loading process of drugs into silica and silicon mesopores is mainly based on the adsorption of drug molecules from a loading solution onto the silica or silicon pore walls. Several factors affect the loading process: the surface area, the pore size, the total pore volume, the pore geometry and the surface chemistry of the mesoporous material, as well as the chemical nature of the drugs and the solvents. Furthermore, both the pore and the surface structure of the particles also affect the drug release kinetics. In this study, the loading of itraconazole into mesoporous silica (Syloid AL-1 and Syloid 244) and silicon (TOPSi and TCPSi) microparticles was studied, as well as the release of itraconazole from the microparticles and its stability after loading. Itraconazole was selected for this study because of its highly hydrophobic and poorly soluble nature. Mesoporous materials with different surface structures, pore volumes and surface areas were selected in order to evaluate the structural effect of the particles on the loading degree and dissolution behaviour of the drug using different loading parameters. The loaded particles were characterized with various analytical methods, and the drug release from the particles was assessed by in vitro dissolution tests. The results showed that the loaded drug was apparently in amorphous form after loading, and that the loading process did not alter the chemical structure of the silica or silicon surface. Both the mesoporous silica and silicon microparticles enhanced the solubility and dissolution rate of itraconazole. Moreover, the physicochemical properties of the particles and the loading procedure were shown to have an effect on the drug loading efficiency and drug release kinetics. Finally, the mesoporous silicon particles loaded with itraconazole were found to be unstable under stressed conditions (at 38 °C and 70% relative humidity).
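The surface-area and amorphization effects described above are conventionally summarized by the Noyes-Whitney (Nernst-Brunner) dissolution relation, given here as a generic sketch rather than an equation from the thesis itself:

  \frac{dC}{dt} = \frac{D A}{V h}\,(C_s - C),

where C is the drug concentration in the dissolution medium, D the diffusion coefficient, A the surface area available for dissolution, V the medium volume, h the diffusion layer thickness, and C_s the saturation solubility. Confining the drug in mesopores increases the effective A, and the amorphous state raises the apparent C_s, both of which increase the dissolution rate.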
Abstract:
The likelihood ratio test of cointegration rank is the most widely used test for cointegration. Many studies have shown that its finite sample distribution is not well approximated by the limiting distribution. The article introduces bootstrap and fast double bootstrap (FDB) algorithms for the likelihood ratio test and evaluates them by Monte Carlo simulation experiments. It finds that the performance of the bootstrap test is very good. The more sophisticated FDB produces a further improvement in cases where the performance of the asymptotic test is very unsatisfactory and the ordinary bootstrap does not work as well as it might. Furthermore, the Monte Carlo simulations provide a number of guidelines on when the bootstrap and FDB tests can be expected to work well. Finally, the tests are applied to US interest rate and international stock price series. It is found that the asymptotic test tends to overestimate the cointegration rank, while the bootstrap and FDB tests choose the correct cointegration rank.
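The bootstrap described above follows the standard recursive-design recipe for the Johansen trace (LR) statistic. The sketch below, in Python with numpy and statsmodels, illustrates the idea under this sketch's own assumptions (no deterministic terms, null ranks of at least one, function and option names chosen here); it is not the article's implementation:

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

def bootstrap_trace_test(y, r0, k_ar_diff=1, n_boot=399, seed=0):
    """Bootstrap p-value for the Johansen trace (LR) test of H0: rank <= r0.

    Minimal recursive residual bootstrap: re-estimate the VECM under the
    null rank, rebuild samples from resampled residuals, and recompute
    the trace statistic on each sample. Assumes 1 <= r0 < y.shape[1];
    rank 0 would require simulating from a VAR in first differences.
    """
    rng = np.random.default_rng(seed)
    T, k = y.shape
    p = k_ar_diff + 1                       # observations consumed by lags

    # Observed trace statistic for H0: rank <= r0 (no deterministic terms).
    stat_obs = coint_johansen(y, det_order=-1, k_ar_diff=k_ar_diff).lr1[r0]

    # Restricted estimates under the null rank.
    fit = VECM(y, k_ar_diff=k_ar_diff, coint_rank=r0, deterministic="n").fit()
    pi = fit.alpha @ fit.beta.T             # long-run impact matrix
    gamma = fit.gamma                       # short-run coefficients
    resid = fit.resid - fit.resid.mean(axis=0)  # centred residuals

    exceed = 0
    for _ in range(n_boot):
        eps = resid[rng.integers(0, len(resid), size=T - p)]
        yb = list(y[:p])                    # initial values from the data
        for t in range(p, T):
            dlags = np.concatenate(
                [yb[t - j] - yb[t - j - 1] for j in range(1, k_ar_diff + 1)]
            )
            dy = pi @ yb[t - 1] + gamma @ dlags + eps[t - p]
            yb.append(yb[t - 1] + dy)
        stat_b = coint_johansen(
            np.asarray(yb), det_order=-1, k_ar_diff=k_ar_diff
        ).lr1[r0]
        exceed += stat_b >= stat_obs

    return stat_obs, (1 + exceed) / (1 + n_boot)   # bootstrap p-value
```

The FDB variant refines this by drawing one second-level bootstrap sample from each first-level sample and using the second-level statistics to adjust the p-value, at roughly twice the cost of the ordinary bootstrap.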
Abstract:
Bootstrap likelihood ratio tests of cointegration rank are commonly used because they tend to have rejection probabilities that are closer to the nominal level than the rejection probabilities of the corresponding asymptotic tests. The effect of bootstrapping the test on its power is largely unknown. We show that a new computationally inexpensive procedure can be applied to the estimation of the power function of the bootstrap test of cointegration rank. The bootstrap test is found to have a power function close to that of the level-adjusted asymptotic test. The bootstrap test estimates the level-adjusted power of the asymptotic test highly accurately. The bootstrap test may, however, have low power to reject the null hypothesis of cointegration rank zero, that is, it may underestimate the cointegration rank. An empirical application to Euribor interest rates is provided as an illustration of the findings.
Abstract:
Many economic events involve initial observations that substantially deviate from the long-run steady state. Initial conditions of this type have been found to affect the power of univariate unit root tests in diverse ways, whereas their impact on multivariate tests is largely unknown. This paper investigates the impact of the initial condition on tests for cointegration rank. We compare the local power of the widely used likelihood ratio (LR) test with the local power of a test based on the eigenvalues of the companion matrix. We find that the power of the LR test is increasing in the magnitude of the initial condition, whereas the power of the other test is decreasing. The behaviour of the tests is investigated in an application to price convergence.
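For reference, the companion matrix mentioned above is the standard first-order stacking of a VAR(p); in generic notation (not necessarily the paper's), the system y_t = A_1 y_{t-1} + \cdots + A_p y_{t-p} + \varepsilon_t has companion matrix

  \mathbf{A} = \begin{pmatrix} A_1 & A_2 & \cdots & A_{p-1} & A_p \\ I & 0 & \cdots & 0 & 0 \\ 0 & I & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & I & 0 \end{pmatrix},

and in a cointegrated I(1) system the number of unit eigenvalues of \mathbf{A} equals the number of common stochastic trends, i.e. the dimension of the system minus the cointegration rank, which is why these eigenvalues can serve as the basis of a rank test.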