28 results for Optimal tests
Abstract:
This thesis studies quantile residuals and uses different methodologies to develop test statistics that are applicable in evaluating linear and nonlinear time series models based on continuous distributions. Models based on mixtures of distributions are of special interest because it turns out that for those models traditional residuals, often referred to as Pearson's residuals, are not appropriate. As such models have become more and more popular in practice, especially with financial time series data, there is a need for reliable diagnostic tools that can be used to evaluate them. The aim of the thesis is to show how such diagnostic tools can be obtained and used in model evaluation. The quantile residuals considered here are defined in such a way that, when the model is correctly specified and its parameters are consistently estimated, they are approximately independent with a standard normal distribution. All the tests derived in the thesis are pure significance type tests and are theoretically sound in that they properly take the uncertainty caused by parameter estimation into account. -- In Chapter 2 a general framework based on the likelihood function and smooth functions of univariate quantile residuals is derived that can be used to obtain misspecification tests for various purposes. Three easy-to-use tests aimed at detecting non-normality, autocorrelation, and conditional heteroscedasticity in quantile residuals are formulated. It also turns out that these tests can be interpreted as Lagrange Multiplier or score tests, so that they are asymptotically optimal against local alternatives. Chapter 3 extends the concept of quantile residuals to multivariate models. The framework of Chapter 2 is generalized, and tests aimed at detecting non-normality, serial correlation, and conditional heteroscedasticity in multivariate quantile residuals are derived based on it. Score test interpretations are obtained for the serial correlation and conditional heteroscedasticity tests and, in a rather restricted special case, for the normality test. In Chapter 4 the tests are constructed using the empirical distribution function of quantile residuals. The so-called Khmaladze martingale transformation is applied in order to eliminate the uncertainty caused by parameter estimation. Various test statistics are considered so that critical bounds for histogram type plots as well as Quantile-Quantile and Probability-Probability type plots of quantile residuals are obtained. Chapters 2, 3, and 4 contain simulations and empirical examples which illustrate the finite sample size and power properties of the derived tests and also how the tests and related graphical tools based on residuals are applied in practice.
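As a concrete illustration of the residuals these tests build on, the minimal Python sketch below maps a model's conditional probability integral transform values into quantile residuals. The Gaussian AR(1) example, the "estimated" parameter values, and the closing Jarque-Bera check are hypothetical illustrations; unlike the tests derived in the thesis, the naive check ignores the uncertainty caused by parameter estimation.

```python
import numpy as np
from scipy import stats

def quantile_residuals(u, eps=1e-10):
    """Map conditional PIT values u_t = F(y_t | past; theta_hat) into quantile
    residuals r_t = Phi^{-1}(u_t); under a correctly specified model with
    consistently estimated parameters these are approximately iid N(0, 1)."""
    u = np.clip(u, eps, 1.0 - eps)   # guard against numerically exact 0 or 1
    return stats.norm.ppf(u)

# hypothetical example: a Gaussian AR(1) model with (pretend) estimated parameters
rng = np.random.default_rng(1)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()
phi_hat, sigma_hat = 0.5, 1.0
u = stats.norm.cdf((y[1:] - phi_hat * y[:-1]) / sigma_hat)   # conditional PIT values
r = quantile_residuals(u)
print(stats.jarque_bera(r))   # naive normality check; ignores estimation uncertainty
```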
Abstract:
Pitch discrimination is a fundamental property of the human auditory system. Our understanding of pitch-discrimination mechanisms is important from both theoretical and clinical perspectives. The discrimination of spectrally complex sounds is crucial in the processing of music and speech. Current methods of cognitive neuroscience can track the brain processes underlying sound processing with either precise temporal resolution (EEG and MEG) or precise spatial resolution (PET and fMRI). A combination of different techniques is therefore required in contemporary auditory research. One of the problems in comparing the EEG/MEG and fMRI methods, however, is the fMRI acoustic noise. In the present thesis, EEG and MEG in combination with behavioral techniques were used, first, to define the ERP correlates of automatic pitch discrimination across a wide frequency range in adults and neonates and, second, to determine the effect of recorded acoustic fMRI noise on those adult ERP and ERF correlates during passive and active pitch discrimination. Pure tones and complex 3-harmonic sounds served as stimuli in the oddball and matching-to-sample paradigms. The results suggest that pitch discrimination in adults, as reflected by MMN latency, is most accurate in the 1000-2000 Hz frequency range, and that pitch discrimination is facilitated further by adding harmonics to the fundamental frequency. Newborn infants are able to discriminate a 20% frequency change in the 250-4000 Hz frequency range, whereas the discrimination of a 5% frequency change could not be confirmed. Furthermore, the effect of the fMRI gradient noise on the automatic processing of pitch change was more prominent for tones with frequencies exceeding 500 Hz, overlapping with the spectral maximum of the noise. When the fundamental frequency of the tones was lower than the spectral maximum of the noise, fMRI noise had no effect on MMN and P3a, whereas the noise delayed and suppressed N1 and exogenous N2. Noise also suppressed the N1 amplitude in a matching-to-sample working memory task. However, the task-related difference observed in the N1 component, suggesting a functional dissociation between the processing of spatial and non-spatial auditory information, was partially preserved in the noise condition. Noise hampered feature coding mechanisms more than it hampered the mechanisms of change detection, involuntary attention, and the segregation of the spatial and non-spatial domains of working memory. The data presented in the thesis can be used to develop clinical ERP-based frequency-discrimination protocols and combined EEG and fMRI experimental paradigms.
Abstract:
Clinical trials have shown that weight reduction through lifestyle changes can delay or prevent diabetes and reduce blood pressure. An appropriate definition of obesity using anthropometric measures is useful in predicting diabetes and hypertension at the population level. However, there is debate on which measure of obesity is best or most strongly associated with diabetes and hypertension and on what the optimal cut-off values for body mass index (BMI) and waist circumference (WC) are in this regard. The aims of the study were 1) to compare the strength of the association of undiagnosed or newly diagnosed diabetes (or hypertension) with anthropometric measures of obesity in people of Asian origin, 2) to detect ethnic differences in the association of undiagnosed diabetes with obesity, 3) to identify ethnic- and sex-specific change point values of BMI and WC for changes in the prevalence of diabetes and 4) to evaluate the ethnic-specific WC cut-off values for central obesity proposed by the International Diabetes Federation (IDF) in 2005. The study population comprised 28 435 men and 35 198 women, ≥ 25 years of age, from 39 cohorts participating in the DECODA and DECODE studies, including 5 Asian Indian (n = 13 537), 3 Mauritian Indian (n = 4505) and Mauritian Creole (n = 1075), 8 Chinese (n = 10 801), 1 Filipino (n = 3841), 7 Japanese (n = 7934), 1 Mongolian (n = 1991), and 14 European (n = 20 979) studies. The prevalence of diabetes, hypertension and central obesity was estimated using descriptive statistics, and the differences were determined with the χ2 test. The odds ratios (ORs) or coefficients (from the logistic model) and hazard ratios (HRs, from the Cox model applied to interval-censored data) for BMI, WC, waist-to-hip ratio (WHR), and waist-to-stature ratio (WSR) were estimated for diabetes and hypertension. The differences between BMI and WC, WHR or WSR were compared by applying paired homogeneity tests (Wald statistics with 1 df). Hierarchical three-level Bayesian change point analysis, adjusting for age, was applied to identify the most likely cut-off/change point values for BMI and WC in association with previously undiagnosed diabetes. The ORs for diabetes in men (women) with BMI, WC, WHR and WSR were 1.52 (1.59), 1.54 (1.70), 1.53 (1.50) and 1.62 (1.70), respectively, and the corresponding ORs for hypertension were 1.68 (1.55), 1.66 (1.51), 1.45 (1.28) and 1.63 (1.50). For diabetes, the OR for BMI did not differ from that for WC or WHR but was lower than that for WSR (p = 0.001) in men, while in women the ORs were higher for WC and WSR than for BMI (both p < 0.05). Hypertension was more strongly associated with BMI than with WHR in men (p < 0.001), and more strongly with BMI than with WHR (p < 0.001), WSR (p < 0.01) and WC (p < 0.05) in women. The HRs for incidence of diabetes and hypertension did not differ between BMI and the other three central obesity measures in Mauritian Indians and Mauritian Creoles during follow-ups of 5, 6 and 11 years. The prevalence of diabetes was highest in Asian Indians, lowest in Europeans and intermediate in the others, given the same BMI or WC category. The coefficients for diabetes with respect to BMI (kg/m2) were (men/women): 0.34/0.28, 0.41/0.43, 0.42/0.61, 0.36/0.59 and 0.33/0.49 for Asian Indians, Chinese, Japanese, Mauritian Indians and Europeans (overall homogeneity test: p > 0.05 in men and p < 0.001 in women). Similar results were obtained for WC (cm). Asian Indian women had lower coefficients than women of other ethnicities.
The change points for BMI were 29.5, 25.6, 24.0, 24.0 and 21.5 kg/m2 in men and 29.4, 25.2, 24.9, 25.3 and 22.5 kg/m2 in women of European, Chinese, Mauritian Indian, Japanese, and Asian Indian descent. The change points for WC were 100, 85, 79 and 82 cm in men and 91, 82, 82 and 76 cm in women of European, Chinese, Mauritian Indian, and Asian Indian descent. The prevalence of central obesity using the 2005 IDF definition was higher in Japanese men but lower in Japanese women than in their Asian counterparts. With the 2005 IDF definition, the prevalence of central obesity was 52 times higher in Japanese men but 0.8 times lower in Japanese women than with the National Cholesterol Education Programme definition. The findings suggest that both BMI and WC predicted diabetes and hypertension equally well in all ethnic groups. At the same BMI or WC level, the prevalence of diabetes was highest in Asian Indians, lowest in Europeans and intermediate in the others. Ethnic- and sex-specific change points of BMI and WC should be considered in setting diagnostic criteria for obesity to detect undiagnosed or newly diagnosed diabetes.
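To make the reported odds ratios concrete, the sketch below fits a logistic model of undiagnosed diabetes on a standardized BMI with age adjustment and converts the coefficient into an odds ratio. The data are simulated and the per-standard-deviation scaling is an assumption for illustration; this is not the pooled DECODE/DECODA analysis, which also used Cox models and Bayesian change point analysis.

```python
import numpy as np
import statsmodels.api as sm

# hypothetical data: BMI (kg/m2), age, and an undiagnosed-diabetes indicator
rng = np.random.default_rng(0)
n = 5000
bmi = rng.normal(26, 4, n)
age = rng.uniform(25, 75, n)
logit_p = -8 + 0.12 * bmi + 0.05 * age
diabetes = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

bmi_z = (bmi - bmi.mean()) / bmi.std()   # per-SD scaling (illustrative assumption)
X = sm.add_constant(np.column_stack([bmi_z, age]))
fit = sm.Logit(diabetes, X).fit(disp=0)
print(np.exp(fit.params[1]))             # OR for diabetes per 1 SD of BMI
```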
Abstract:
Phosphorus is a nutrient needed in crop production. While boosting crop yields, it may also accelerate eutrophication in the surface waters receiving the phosphorus runoff. The privately optimal level of phosphorus use is determined by the input and output prices and by the crop response to phosphorus. Socially optimal use also takes into account the impact of phosphorus runoff on water quality. Increased eutrophication decreases the economic value of surface waters by deteriorating fish stocks, curtailing the potential for recreational activities and increasing the probabilities of mass algae blooms. In this dissertation, the optimal use of phosphorus is modelled as a dynamic optimization problem. The potentially plant-available phosphorus accumulated in the soil is treated as a dynamic state variable, the control variable being the annual phosphorus fertilization. For the crop response to phosphorus, the state variable is more important than the annual fertilization. The level of this state variable is also a key determinant of the runoff of dissolved, reactive phosphorus. The loss of particulate phosphorus due to erosion is also considered in the thesis, as well as its mitigation by constructing vegetative buffers. The dynamic model is applied to crop production on clay soils. At the steady state, the analysis focuses on the effects of prices, damage parameterization, the discount rate and the soil phosphorus carryover capacity on optimal steady-state phosphorus use. The economic instruments needed to sustain the social optimum are also analyzed. According to the results, the economic incentives should be conditioned directly on soil phosphorus values rather than on annual phosphorus applications. The results also emphasize the substantial effect that differences between the discount rates of the farmer and the social planner have on the optimal instruments. The thesis analyzes the optimal soil phosphorus paths from alternative initial levels. It also examines how the erosion susceptibility of a parcel affects these optimal paths. The results underline the significance of the prevailing soil phosphorus status for optimal fertilization levels. With very high initial soil phosphorus levels, both the privately and socially optimal phosphorus application levels are close to zero as the state variable is driven towards its steady state. The soil phosphorus processes are slow, so depleting high-phosphorus soils may take decades. The thesis also presents a methodologically interesting phenomenon in problems of maximizing a flow of discounted payoffs. When both the benefits and the damages are related to the same state variable, the steady-state solution may, under very general conditions, have an interesting property: the tail of the payoffs of the privately optimal path, as well as its steady state, may provide higher social welfare than the respective tail of the socially optimal path. The result is formalized and applied to the framework of optimal phosphorus use developed in the thesis.
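The dynamic optimization described above can be sketched as a value-function iteration over a discretized soil phosphorus state with annual fertilization as the control. All functional forms and parameter values below are hypothetical placeholders, not the response, runoff and carryover relations estimated in the thesis.

```python
import numpy as np

beta = 0.95                           # discount factor
P_grid = np.linspace(1.0, 30.0, 60)   # soil phosphorus state
x_grid = np.linspace(0.0, 60.0, 61)   # annual P fertilization (the control)

def crop_revenue(P):                  # yield response driven mainly by soil P
    return 0.12 * 4000.0 * P / (P + 10.0)

def runoff_damage(P):                 # damage from dissolved P runoff, increasing in soil P
    return 15.0 * P

def next_P(P, x):                     # carryover of soil P plus a share of the applied P
    return np.clip(0.97 * P + 0.01 * x - 0.3, P_grid[0], P_grid[-1])

price_x = 1.0                         # fertilizer price
V = np.zeros_like(P_grid)
for _ in range(1000):                 # value-function iteration
    V_new = np.array([
        np.max(crop_revenue(P) - price_x * x_grid - runoff_damage(P)
               + beta * np.interp(next_P(P, x_grid), P_grid, V))
        for P in P_grid])
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

# fertilization policy implied by the converged value function
policy = np.array([
    x_grid[np.argmax(crop_revenue(P) - price_x * x_grid - runoff_damage(P)
                     + beta * np.interp(next_P(P, x_grid), P_grid, V))]
    for P in P_grid])
print(dict(zip(np.round(P_grid[::15], 1), policy[::15])))
```

In this kind of formulation the private problem drops the damage term, and an instrument conditioned on the soil phosphorus state rather than on annual applications can align the two solutions, which is the policy question studied in the thesis.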
Abstract:
ALICE (A Large Ion Collider Experiment) is an experiment at CERN (European Organization for Nuclear Research) whose heavy-ion detector is dedicated to exploiting the unique physics potential of nucleus-nucleus interactions at LHC (Large Hadron Collider) energies. As a part of that project, 716 so-called type V4 modules were assembled in the Detector Laboratory of the Helsinki Institute of Physics during the years 2004-2006. With altogether over a million detector strips, this has been the most massive particle detector project in the history of Finnish science. One ALICE SSD module consists of a double-sided silicon sensor, two hybrids containing 12 HAL25 front-end readout chips, and some passive components, such as resistors and capacitors. The components are connected together by TAB (Tape Automated Bonding) microcables. The components of the modules were tested in every assembly phase with comparable electrical tests to ensure the reliable functioning of the detectors and to map possible problems. The components were accepted or rejected according to limits confirmed by the ALICE collaboration. This study concentrates on the test results of the framed chips, hybrids and modules. The total yield was 90.8% for the framed chips, 96.1% for the hybrids and 86.2% for the modules. The individual test results have been investigated in the light of the known error sources that appeared during the project. After the problems appearing during the learning curve of the project had been solved, material problems, such as defective chip cables and sensors, seemed to cause most of the assembly rejections. These problems were typically seen in the tests as too many individual channel failures. Bonding failures, in contrast, rarely caused the rejection of a component. One sensor type among the three sensor manufacturers proved to be of lower quality than the others: the sensors of this manufacturer are very noisy, and their depletion voltages are usually outside the specification given to the manufacturers. Reaching a 95% assembly yield during module production demonstrates that the assembly process was highly successful.
Abstract:
The Minimum Description Length (MDL) principle is a general, well-founded theoretical formalization of statistical modeling. The most important notion of MDL is the stochastic complexity, which can be interpreted as the shortest description length of a given sample of data relative to a model class. The exact definition of the stochastic complexity has gone through several evolutionary steps. The latest instantiation is based on the so-called Normalized Maximum Likelihood (NML) distribution, which has been shown to possess several important theoretical properties. However, applications of this modern version of MDL have been quite rare because of computational complexity problems: for discrete data, the definition of NML involves an exponential sum, and in the case of continuous data, a multi-dimensional integral that is usually infeasible to evaluate or even approximate accurately. In this doctoral dissertation, we present mathematical techniques for computing NML efficiently for some model families involving discrete data. We also show how these techniques can be used to apply MDL in two practical applications: histogram density estimation and clustering of multi-dimensional data.
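For the simplest discrete case, the Bernoulli model class, the "exponential sum" collapses to a one-dimensional sum, and the sketch below computes the NML parametric complexity and the resulting stochastic complexity by brute force. This is only meant to fix ideas; it is not one of the efficient algorithms developed in the dissertation.

```python
import math

def bernoulli_nml_complexity(n):
    """Parametric complexity C_n = sum_{k=0}^{n} C(n, k) (k/n)^k ((n-k)/n)^(n-k)."""
    total = 0.0
    for k in range(n + 1):
        term = math.comb(n, k)
        if 0 < k < n:                       # 0^0 is treated as 1 by convention
            p = k / n
            term *= p**k * (1.0 - p)**(n - k)
        total += term
    return total

def stochastic_complexity(data):
    """NML code length (in nats) of a binary sequence under the Bernoulli model class."""
    n, k = len(data), sum(data)
    loglik = 0.0                            # maximized log-likelihood
    if 0 < k < n:
        p_hat = k / n
        loglik = k * math.log(p_hat) + (n - k) * math.log(1.0 - p_hat)
    return -loglik + math.log(bernoulli_nml_complexity(n))

print(stochastic_complexity([1, 0, 0, 1, 1, 1, 0, 1]))
```

For the multinomial and more structured model families treated in the dissertation, such direct summation becomes infeasible, which is precisely the problem the efficient computation techniques address.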
Abstract:
A vast amount of public services and goods are contracted through procurement auctions. It is therefore important to design these auctions in an optimal way. Typically, we are interested in two different objectives. The first objective is efficiency. Efficiency means that the contract is awarded to the bidder that values it the most, which in the procurement setting means the bidder that has the lowest cost of providing the service at a given quality. The second objective is to maximize public revenue. Maximizing public revenue means minimizing the costs of procurement. Both of these goals are important from the welfare point of view. In this thesis, I analyze field data from procurement auctions and show how empirical analysis can be used to help design auctions that maximize public revenue. In particular, I concentrate on how competition, that is, the number of bidders, should be taken into account in the design of auctions. In the first chapter, the main policy question is whether the auctioneer should spend resources to induce more competition. The information paradigm is essential in analyzing the effects of competition. We talk of a private values information paradigm when the bidders know their valuations exactly. In a common value information paradigm, the information about the value of the object is dispersed among the bidders. With private values, more competition always increases public revenue, but with common values the effect of competition is uncertain. I study the effects of competition in the City of Helsinki bus transit market by conducting tests for common values. I also extend an existing test by allowing bidder asymmetry. The information paradigm seems to be that of common values. The bus companies that have garages close to the contracted routes are influenced more by the common value elements than those whose garages are further away. Therefore, attracting more bidders does not necessarily lower procurement costs, and thus the City should not implement costly policies to induce more competition. In the second chapter, I ask how the auctioneer can increase its revenue by changing contract characteristics such as contract sizes and durations. I find that the City of Helsinki should shorten the contract duration in the bus transit auctions, because that would decrease the importance of common value components and cheaply increase entry, which would then have a more beneficial impact on public revenue. Typically, cartels decrease public revenue in a significant way. In the third chapter, I propose a new statistical method for detecting collusion and compare it with an existing test. I argue that my test is robust to unobserved heterogeneity, unlike the existing test. I apply both methods to procurement auctions for snow removal at schools in Helsinki. According to these tests, the bidding behavior of two of the bidders seems consistent with a contract allocation scheme.
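The intuition for why more bidders need not lower procurement costs under common values can be seen in a stylized winner's-curse simulation (a toy illustration, not the tests used in the thesis): each bidder observes a noisy signal of a common service cost and the contract goes to the lowest signal, so the winning signal understates the true cost by more as the number of bidders grows, forcing rational bidders to shade their bids upward more aggressively.

```python
import numpy as np

rng = np.random.default_rng(0)
true_cost, noise_sd, n_sim = 100.0, 10.0, 50000

for n_bidders in (2, 4, 8, 16):
    # each bidder's noisy cost signal; the lowest signal "wins" the contract
    signals = true_cost + noise_sd * rng.standard_normal((n_sim, n_bidders))
    winning_signal = signals.min(axis=1)
    # average amount by which the winner underestimates the true cost
    print(n_bidders, round((winning_signal - true_cost).mean(), 2))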
Abstract:
Infection is a major cause of mortality and morbidity after thoracic organ transplantation. The aim of the present study was to evaluate the infectious complications after lung and heart transplantation, with a special emphasis on the usefulness of bronchoscopy and the demonstration of cytomegalovirus (CMV), human herpes virus (HHV)-6, and HHV-7. We reviewed all the consecutive bronchoscopies performed on heart transplant recipients (HTRs) from May 1988 to December 2001 (n = 44) and lung transplant recipients (LTRs) from February 1994 to November 2002 (n = 472). To compare different assays in the detection of CMV, a total of 21 thoracic organ transplant recipients were prospectively monitored with CMV pp65-antigenemia, DNAemia (PCR), and mRNAemia (NASBA) tests. The antigenemia test was the reference assay for therapeutic intervention. In addition to CMV antigenemia, 22 LTRs were monitored for HHV-6 and HHV-7 antigenemia. The diagnostic yield of the clinically indicated bronchoscopies was 41 % in the HTRs and 61 % in the LTRs. The utility of bronchoscopy was highest from one to six months after transplantation. In contrast, the findings from the surveillance bronchoscopies performed on LTRs led to a change in the previous treatment in only 6 % of the cases. Pneumocystis carinii and CMV were the most commonly detected pathogens. Furthermore, 15 (65 %) of the P. carinii infections in the LTRs were detected during chemoprophylaxis. None of the complications of the bronchoscopies were fatal. Antigenemia, DNAemia, and mRNAemia were present in 98 %, 72 %, and 43 % of the CMV infections, respectively. The optimal DNAemia cut-off levels (sensitivity/specificity) were 400 (75.9/92.7 %), 850 (91.3/91.3 %), and 1250 (100/91.5 %) copies/ml for antigenemia levels of 2, 5, and 10 pp65-positive leukocytes/50 000 leukocytes, respectively. The sensitivities of the NASBA in detecting the same cut-off levels were 25.9, 43.5, and 56.3 %. CMV DNAemia was detected in 93 % and mRNAemia in 61 % of the CMV antigenemias requiring antiviral therapy. HHV-6, HHV-7, and CMV antigenemia was detected in 20 (91 %), 11 (50 %), and 12 (55 %) of the 22 LTRs (at a median of 16, 31, and 165 days), respectively. HHV-6 appeared in 15 (79 %), HHV-7 in seven (37 %), and CMV in one (7 %) of these patients during ganciclovir or valganciclovir prophylaxis. One case of pneumonitis and another of encephalitis were associated with HHV-6. In conclusion, bronchoscopy is a safe and useful diagnostic tool in LTRs and HTRs with a suspected respiratory infection, but the role of surveillance bronchoscopy in LTRs remains controversial. The PCR assay performs comparably to the antigenemia test in guiding pre-emptive therapy against CMV when threshold levels of over 5 pp65-antigen-positive leukocytes are used. In contrast, the low sensitivity of NASBA limits its usefulness. HHV-6 and HHV-7 activation is common after lung transplantation despite ganciclovir or valganciclovir prophylaxis, but clinical manifestations are infrequently linked to them.
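The reported cut-off evaluation amounts to computing the sensitivity and specificity of a DNAemia threshold against the antigenemia reference. A minimal sketch with made-up paired measurements (the real analysis used the prospectively monitored recipients) could look as follows.

```python
import numpy as np

def sens_spec(dna_copies, pp65_cells, dna_cutoff, pp65_cutoff):
    """Sensitivity and specificity of a CMV DNAemia cut-off (copies/ml)
    against an antigenemia reference (pp65-positive leukocytes/50 000)."""
    positive = pp65_cells >= pp65_cutoff      # reference standard
    flagged = dna_copies >= dna_cutoff        # index test
    sens = (flagged & positive).sum() / positive.sum()
    spec = (~flagged & ~positive).sum() / (~positive).sum()
    return sens, spec

# hypothetical paired measurements; the study evaluated cut-offs such as
# 850 copies/ml against 5 pp65-positive leukocytes
dna = np.array([0, 120, 400, 600, 900, 1500, 3000, 200, 50, 2500])
pp65 = np.array([0, 1, 2, 3, 6, 8, 20, 4, 0, 12])
print(sens_spec(dna, pp65, dna_cutoff=850, pp65_cutoff=5))
```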
Abstract:
The TOTEM collaboration has developed and tested the first prototype of its Roman Pots to be operated in the LHC. TOTEM Roman Pots contain stacks of 10 silicon detectors with strips oriented in two orthogonal directions. To measure proton scattering angles of a few microradians, the detectors will approach the beam centre to a distance of 10 sigma + 0.5 mm (= 1.3 mm). Dead space near the detector edge is minimised by using two novel "edgeless" detector technologies. The silicon detectors are used both for precise track reconstruction and for triggering. The first full-sized prototypes of both detector technologies as well as their read-out electronics have been developed, built and operated. The tests took place first in a fixed-target muon beam at CERN's SPS, and then in the proton beam-line of the SPS accelerator ring. We present the test beam results demonstrating the successful functionality of the system despite slight technical shortcomings to be improved in the near future.
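For orientation, the quoted approach distance can be unpacked into the beam width it implicitly assumes:
10 sigma + 0.5 mm = 1.3 mm, so 10 sigma = 0.8 mm and sigma ≈ 0.08 mm (80 μm).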
Abstract:
Most new drug molecules discovered today suffer from poor bioavailability. Poor oral bioavailability results mainly from the poor dissolution properties of hydrophobic drug molecules, because drug dissolution is often the rate-limiting event of the drug's absorption through the intestinal wall into the systemic circulation. During the last few years, the use of mesoporous silica and silicon particles as oral drug delivery vehicles has been widely studied, with promising results regarding their suitability for enhancing the physicochemical properties of poorly soluble drug molecules. Mesoporous silica and silicon particles can be used to enhance the solubility and dissolution rate of a drug by incorporating the drug inside the pores, which are only a few times larger than the drug molecules, and thus breaking the crystalline structure into a disordered, amorphous form with better dissolution properties. The high surface area of the mesoporous particles also improves the dissolution rate of the incorporated drug. In addition, mesoporous materials can enhance the permeability of large, hydrophilic drug substances across biological barriers. The loading of drugs into silica and silicon mesopores is based mainly on the adsorption of drug molecules from a loading solution onto the silica or silicon pore walls. Several factors affect the loading process: the surface area, the pore size, the total pore volume, the pore geometry and the surface chemistry of the mesoporous material, as well as the chemical nature of the drugs and the solvents. Furthermore, both the pore and the surface structure of the particles also affect the drug release kinetics. In this study, the loading of itraconazole into mesoporous silica (Syloid AL-1 and Syloid 244) and silicon (TOPSi and TCPSi) microparticles was studied, as well as the release of itraconazole from the microparticles and its stability after loading. Itraconazole was selected for this study because of its highly hydrophobic and poorly soluble nature. Mesoporous materials with different surface structures, pore volumes and surface areas were selected in order to evaluate the structural effect of the particles on the loading degree and dissolution behaviour of the drug using different loading parameters. The loaded particles were characterized with various analytical methods, and the drug release from the particles was assessed by in vitro dissolution tests. The results showed that the loaded drug was apparently in amorphous form after loading, and that the loading process did not alter the chemical structure of the silica or silicon surface. Both the mesoporous silica and silicon microparticles enhanced the solubility and dissolution rate of itraconazole. Moreover, the physicochemical properties of the particles and the loading procedure were shown to have an effect on the drug loading efficiency and drug release kinetics. Finally, the mesoporous silicon particles loaded with itraconazole were found to be unstable under stressed conditions (at 38 °C and 70 % relative humidity).
Abstract:
This study develops a real options approach for analyzing the optimal risk adoption policy in an environment where adoption means a switch from one stochastic flow representation to another. We establish that increased volatility need not decelerate investment, as predicted by the standard literature on real options, once the underlying volatility of the state is made endogenous. We prove that for a decision maker with a convex (concave) objective function, increased post-adoption volatility increases (decreases) the expected cumulative present value of the post-adoption profit flow, which consequently decreases (increases) the option value of waiting and, therefore, accelerates (decelerates) current investment.
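The convex/concave mechanism can be checked numerically with a small Monte Carlo sketch; the geometric Brownian motion state, the payoff flows x² and √x, and all parameter values are hypothetical stand-ins rather than the model of the study.

```python
import numpy as np

def expected_discounted_flow(sigma, payoff, x0=1.0, mu=0.02, r=0.05,
                             T=20.0, dt=0.02, n_paths=10000, seed=0):
    """Monte Carlo estimate of E[ integral_0^T e^(-r t) payoff(X_t) dt ] for a
    geometric Brownian motion dX = mu*X dt + sigma*X dW started at x0."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    x = np.full(n_paths, x0)
    total = np.zeros(n_paths)
    for i in range(n_steps):
        total += np.exp(-r * i * dt) * payoff(x) * dt
        dw = np.sqrt(dt) * rng.standard_normal(n_paths)
        x *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * dw)   # exact GBM step
    return total.mean()

convex = lambda x: x**2            # convex profit flow
concave = lambda x: np.sqrt(x)     # concave profit flow

for sigma in (0.1, 0.3):
    print(sigma,
          round(expected_discounted_flow(sigma, convex), 3),
          round(expected_discounted_flow(sigma, concave), 3))
```

With these assumptions, the expected discounted value of the convex flow rises with the higher volatility while that of the concave flow falls, matching the comparative static stated above.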