28 results for Random sample


Relevance:

20.00%

Publisher:

Abstract:

The aim of the present study was to determine relationships between insurance status and utilization of oral health care and its characteristics, and to identify factors related to insured patients' selection of a dental clinic or dentist. The study was based on cross-sectional data obtained through phone interviews. The target population included adults in the city of Tehran. Using a two-stage stratified random technique, 3,200 seven-digit numbers resembling real phone numbers were drawn; when called, 1,669 numbers were unavailable (busy, no answer, fax, line blocked). Of the 1,531 subjects who answered the phone call, 224 were outside the target age (under 18) and 221 refused to respond, leaving 1,086 subjects in the final sample. The interviews were carried out using a structured questionnaire and covered characteristics of dental visits, the respondent's reason for selecting a particular dentist or clinic, and demographic and socio-economic background (gender, age, level of education, income, and insurance status). Data analysis included the Chi-square test, ANOVA, and logistic regression with the corresponding odds ratios (OR). Of all 1,086 respondents, 57% were women, 62% were under age 35, 46% had a medium and 34% a high level of education, 13% were under the poverty line, and 70% had insurance coverage: 64% with public and 6% with commercial insurance. Having insurance coverage was more likely for women (OR=1.5), for those in the oldest age group (OR=2.0), and for those with a high level of education (OR=2.5). Of those with dental insurance, 54% reported having had a dental visit within the past 12 months; this was more common among those with commercial insurance than among those with public insurance (65% vs. 53%, p<0.001). A check-up as the reason for the most recent visit occurred most frequently among those with commercial insurance (28%) compared with those having public insurance (16%) or no insurance (13%) (p<0.001). Having had two or more dental visits within the past 12 months was more common among insured respondents than among the non-insured (31% vs. 22%, p=0.01). The non-insured respondents reported tooth extractions almost twice as frequently as the insured ones (p<0.001). Of the 726 insured subjects, 60% selected fully out-of-pocket-paid services (FOP), and 53% were unaware of their insurance benefits. Good interpersonal aspects (OR=4.6), being unaware of dental insurance benefits (OR=4.6), and good technical aspects (OR=2.3) were associated with greater odds of selecting FOP. The present study revealed that dental insurance was positively related to demand for oral health care as well as to utilization of services, although to the latter to a lesser extent. Among insured respondents, despite their opportunity to use fully or highly subsidized oral health care services, good interpersonal relationships and high quality of services were the most important factors when an insured patient selected a dentist or a clinic. The present findings indicate a clear need to modify dental insurance systems in Iran to facilitate optimal use of oral health care services and to maximize the oral health of the population. Special emphasis in the insurance schemes should be placed on preventive care.
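To illustrate the kind of analysis reported above, the sketch below fits a logistic regression for insurance coverage and converts the coefficients into odds ratios. The variable names and the simulated data are hypothetical stand-ins; this is not the authors' actual model or data.

```python
# Minimal sketch: logistic regression with odds ratios for insurance coverage,
# using simulated data in place of the survey responses (assumption, not the
# original data set).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1086
female = rng.integers(0, 2, n)
oldest_age_group = rng.integers(0, 2, n)
high_education = rng.integers(0, 2, n)

# Simulate insurance status with effects roughly in the reported direction.
linpred = -0.5 + 0.4 * female + 0.7 * oldest_age_group + 0.9 * high_education
insured = rng.binomial(1, 1.0 / (1.0 + np.exp(-linpred)))

X = sm.add_constant(np.column_stack([female, oldest_age_group, high_education]))
fit = sm.Logit(insured, X).fit(disp=False)

# Exponentiated slope coefficients are the odds ratios (OR).
odds_ratios = np.exp(fit.params[1:])
print(dict(zip(["female", "oldest_age_group", "high_education"], odds_ratios)))
```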

Relevance:

20.00%

Publisher:

Abstract:

In this study, novel methodologies for the determination of antioxidative compounds in herbs and beverages were developed. Antioxidants are compounds that can reduce, delay or inhibit oxidative events. They are a part of the human defense system and are obtained through the diet. Antioxidants are naturally present in several types of foods, e.g. in fruits, beverages, vegetables and herbs. Antioxidants can also be added to foods during manufacturing to suppress lipid oxidation and the formation of free radicals during cooking or storage and to reduce the concentration of free radicals in vivo after food ingestion. There is growing interest in natural antioxidants, and effective compounds have already been identified from antioxidant classes such as carotenoids, essential oils, flavonoids and phenolic acids. The wide variety of sample matrices and analytes presents quite a challenge for the development of analytical techniques, and growing demands have been placed on sample pretreatment. In this study, three novel extraction techniques, namely supercritical fluid extraction (SFE), pressurised hot water extraction (PHWE) and dynamic sonication-assisted extraction (DSAE), were studied. SFE was used for the extraction of lycopene from tomato skins, and PHWE was used for the extraction of phenolic compounds from sage. DSAE was applied to the extraction of phenolic acids from Lamiaceae herbs. In the development of the extraction methodologies, the main parameters of the extraction were studied and the recoveries were compared to those achieved by conventional extraction techniques. In addition, the stability of lycopene was followed under different storage conditions. For the separation of the antioxidative compounds in the extracts, liquid chromatographic (LC) methods were utilised. Two novel LC techniques, namely ultra performance liquid chromatography (UPLC) and comprehensive two-dimensional liquid chromatography (LCxLC), were studied and compared with conventional high performance liquid chromatography (HPLC) for the separation of antioxidants in beverages and Lamiaceae herbs. In LCxLC, the LC mode, column dimensions and flow rates were studied and optimised to obtain efficient separation of the target compounds. In addition, the separation powers of HPLC, UPLC, HPLCxHPLC and HPLCxUPLC were compared. To exploit the benefits of an integrated system, in which sample preparation and final separation are performed in a closed unit, dynamic sonication-assisted extraction was coupled on-line to a liquid chromatograph via a solid-phase trap. The increased sensitivity was utilised in the extraction of phenolic acids from Lamiaceae herbs, and the results were compared to those achieved by the LCxLC system.

Relevance:

20.00%

Publisher:

Abstract:

Planar curves arise naturally as interfaces between two regions of the plane. An important part of statistical physics is the study of lattice models. This thesis is about the interfaces of 2D lattice models. The scaling limit is an infinite-system limit, taken by letting the lattice mesh decrease to zero. At criticality, the scaling limit of an interface is one of the SLE curves (Schramm-Loewner evolution), introduced by Oded Schramm. This family of random curves is parametrized by a real variable, which determines the universality class of the model. The first and second papers of this thesis study properties of SLEs. They contain two different methods to study the whole SLE curve, which is in fact the most interesting object from the statistical physics point of view. These methods are applied to study two symmetries of SLE: reversibility and duality. The first paper uses an algebraic method and a representation of the Virasoro algebra to find common martingales for different processes and, in that way, to confirm the symmetries for polynomial expected values of natural SLE data. In the second paper, a recursion is obtained for the same kind of expected values. The recursion is based on the stationarity of the law of the whole SLE curve under an SLE-induced flow. The third paper deals with one of the most central questions of the field and provides a framework of estimates for describing 2D scaling limits by SLE curves. In particular, it is shown that a weak estimate on the probability of an annulus crossing implies that a random curve arising from a statistical physics model will have scaling limits and that those limits will be well described by Loewner evolutions with random driving forces.
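For reference (not part of the abstract itself), chordal SLE$_\kappa$ is usually defined through the Loewner differential equation with a Brownian driving function, the parameter $\kappa \geq 0$ being the real variable mentioned above:

$$\partial_t g_t(z) = \frac{2}{g_t(z) - W_t}, \qquad g_0(z) = z, \qquad W_t = \sqrt{\kappa}\, B_t,$$

where $B_t$ is a standard one-dimensional Brownian motion and the curve is traced out by the points at which the conformal maps $g_t$ cease to be defined.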

Relevance:

20.00%

Publisher:

Abstract:

This thesis addresses the modeling of financial time series, especially stock market returns and daily price ranges. Modeling data of this kind can be approached with so-called multiplicative error models (MEM). These models nest several well-known time series models such as GARCH, ACD and CARR models. They are able to capture many well-established features of financial time series, including volatility clustering and leptokurtosis. In contrast to these phenomena, different kinds of asymmetries have received relatively little attention in the existing literature. In this thesis, asymmetries arise from various sources. They are observed in both conditional and unconditional distributions, for variables with non-negative values and for variables that take values on the real line. In the multivariate context, asymmetries can be observed in the marginal distributions as well as in the relationships between the variables modeled. New methods for all these cases are proposed. Chapter 2 considers GARCH models and the modeling of returns of two stock market indices. The chapter introduces the so-called generalized hyperbolic (GH) GARCH model to account for asymmetries in both the conditional and unconditional distribution. In particular, two special cases of the GARCH-GH model which describe the data most accurately are proposed. They are found to improve the fit of the model when compared to symmetric GARCH models. The advantages of accounting for asymmetries are also observed through Value-at-Risk applications. Both theoretical and empirical contributions are provided in Chapter 3 of the thesis. In this chapter the so-called mixture conditional autoregressive range (MCARR) model is introduced, examined and applied to daily price ranges of the Hang Seng Index. The conditions for the strict and weak stationarity of the model, as well as an expression for the autocorrelation function, are obtained by writing the MCARR model as a first-order autoregressive process with random coefficients. The chapter also introduces the inverse gamma (IG) distribution to CARR models. The advantages of the CARR-IG and MCARR-IG specifications over conventional CARR models are found in the empirical application both in- and out-of-sample. Chapter 4 discusses the simultaneous modeling of absolute returns and daily price ranges. In this part of the thesis a vector multiplicative error model (VMEM) with an asymmetric Gumbel copula is found to provide substantial benefits over the existing VMEM models based on elliptical copulas. The proposed specification is able to capture the highly asymmetric dependence of the modeled variables, thereby improving the performance of the model considerably. The economic significance of the results is established by examining the information content of the derived volatility forecasts.
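For orientation (not stated in the abstract itself), a first-order multiplicative error model for a non-negative variable $x_t$, such as a daily price range, can be written as

$$x_t = \mu_t \varepsilon_t, \qquad \mu_t = \omega + \alpha x_{t-1} + \beta \mu_{t-1}, \qquad \varepsilon_t \;\text{i.i.d.},\; \varepsilon_t \ge 0,\; \mathbb{E}[\varepsilon_t] = 1,$$

so that $\mu_t$ is the conditional mean of $x_t$. With $x_t$ taken as squared returns, durations, or ranges, this form reduces to GARCH-, ACD-, and CARR-type models, respectively.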

Relevance:

20.00%

Publisher:

Abstract:

Time-dependent backgrounds in string theory provide a natural testing ground for physics concerning dynamical phenomena which cannot be reliably addressed in usual quantum field theories and cosmology. A good, tractable example to study is the rolling tachyon background, which describes the decay of an unstable brane in bosonic and supersymmetric Type II string theories. In this thesis I use boundary conformal field theory along with random matrix theory and Coulomb gas thermodynamics techniques to study open and closed string scattering amplitudes off the decaying brane. The calculation of the simplest example, the tree-level amplitude of n open strings, would give the emission rate of the open strings; however, even this has remained unknown. I organize the open string scattering computations in a more coherent manner and argue how further progress can be made.

Relevance:

20.00%

Publisher:

Abstract:

Growing up Vietnamese in Finland: A 12-year follow-up – The well-being and sociocultural adaptation of Vietnamese as children/adolescents and as young adults. This study was a quantitative longitudinal investigation of the acculturation (cultural change), psychological well-being and sociocultural adaptation of Vietnamese who arrived in Finland as children or adolescents between 1979 and 1991. The first phase (in 1992) involved 97 randomly selected Vietnamese comprehensive-school pupils from around the country, who were compared with their Finnish classmates. The follow-up phase (in 2004) involved 59 of the Vietnamese who had taken part in the first phase, by then aged 20 to 31. The aim of the study was to identify which factors predicted acculturation outcomes, while taking into account the effects of age and context on psychological well-being and sociocultural adaptation. Individual acculturation dimensions (language, values and identity) proved more important for psychological well-being and sociocultural adaptation than ethnic, national or bicultural profiles combining these same language, value and identity measures. Identity shifted in the (ethnic) Vietnamese direction over time, whereas values shifted in the (national) Finnish direction. Proficiency in both Finnish and Vietnamese increased over time, with positive effects on both psychological well-being and sociocultural adaptation. Psychological well-being at baseline predicted well-being (absence of depression and self-esteem) in adulthood, but sociocultural adaptation (school achievement) in childhood or adolescence did not predict educational attainment in adulthood. Better Finnish proficiency and weaker identification as Finnish in adulthood, together with an absence of depression and less perceived discrimination in childhood or adolescence, distinguished the adults with better psychological well-being (the non-depressed) from those who were depressed. Better educational attainment in adulthood was predicted, on the one hand, by less perceived discrimination in childhood or adolescence and, on the other, by better Finnish proficiency in adulthood and stronger endorsement of national (Finnish) independence values, yet weaker identification with Finns. The significance of perceived discrimination for psychological well-being, especially in childhood or adolescence, and its long-term effects on psychological well-being and sociocultural adaptation in adulthood point to the need for early intervention in psychological problems and for improving relations between ethnic groups. Keywords: acculturation, psychological well-being, sociocultural adaptation, language, values, identity, Vietnamese, Finland, children, adolescents, young adults

Relevance:

20.00%

Publisher:

Abstract:

The main method of modifying the properties of semiconductors is to introduce small amounts of impurities into the material. This is used to control the magnetic and optical properties of materials and to realize p- and n-type semiconductors out of intrinsic material in order to manufacture fundamental components such as diodes. As diffusion can be described as random mixing of material due to the thermal movement of atoms, it is essential to know the diffusion behavior of the impurities in order to manufacture working components. In the modified radiotracer technique, diffusion is studied using radioactive isotopes of elements as tracers. The technique is called modified because the atoms are introduced into the material by ion beam implantation. With ion implantation, a well-defined distribution of impurities can be placed beneath the sample surface with good control over the number of implanted atoms. As electromagnetic radiation and other nuclear decay products emitted by radioactive materials can be easily detected, only a very small amount of impurities is needed. This makes it possible to study diffusion in pure materials without essentially modifying their initial properties by doping. In this thesis a modified radiotracer technique is used to study the diffusion of beryllium in GaN, ZnO, SiGe and glassy carbon. GaN, ZnO and SiGe are of great interest to the semiconductor industry, and beryllium, as a small and possibly rapid dopant, has not been studied previously using this technique. Glassy carbon has been added to demonstrate the feasibility of the technique. In addition, the diffusion of the magnetic impurities Mn and Co has been studied in GaAs and ZnO, respectively, with spintronic applications in mind.
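As background (not stated explicitly in the abstract), the broadening of an implanted tracer profile is governed by Fick's second law, and the diffusion coefficients extracted from such profiles are normally summarized by an Arrhenius law:

$$\frac{\partial c}{\partial t} = D\,\frac{\partial^2 c}{\partial x^2}, \qquad D(T) = D_0 \exp\!\left(-\frac{E_A}{k_B T}\right),$$

where $c(x,t)$ is the tracer concentration, $D_0$ a prefactor, $E_A$ the activation energy of diffusion, and $k_B$ the Boltzmann constant.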

Relevance:

20.00%

Publisher:

Abstract:

We present a measurement of the top quark mass with t-tbar dilepton events produced in p-pbar collisions at the Fermilab Tevatron at $\sqrt{s}$=1.96 TeV and collected by the CDF II detector. A sample of 328 events with a charged electron or muon and an isolated track, corresponding to an integrated luminosity of 2.9 fb$^{-1}$, is selected as t-tbar candidates. To account for the unconstrained event kinematics, we scan over the phase space of the azimuthal angles ($\phi_{\nu_1},\phi_{\nu_2}$) of the neutrinos and reconstruct the top quark mass for each ($\phi_{\nu_1},\phi_{\nu_2}$) pair by minimizing a $\chi^2$ function in the t-tbar dilepton hypothesis. We assign $\chi^2$-dependent weights to the solutions in order to build a preferred mass for each event. Preferred mass distributions (templates) are built from simulated t-tbar and background events and parameterized in order to provide continuous probability density functions. A likelihood fit to the mass distribution in data as a weighted sum of signal and background probability density functions gives a top quark mass of $165.5^{+{3.4}}_{-{3.3}}$(stat.)$\pm 3.1$(syst.) GeV/$c^2$.
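Schematically (a sketch of the fit described above, not the collaboration's exact likelihood definition), the fit maximizes an unbinned likelihood over the per-event preferred masses $m_i$, with signal fraction $f_s$ and the parameterized signal and background templates $P_{\mathrm{sig}}$ and $P_{\mathrm{bkg}}$:

$$\mathcal{L}(m_t, f_s) = \prod_{i=1}^{N} \left[\, f_s\, P_{\mathrm{sig}}(m_i \mid m_t) + (1 - f_s)\, P_{\mathrm{bkg}}(m_i) \,\right],$$

so that the fitted $m_t$ is the value that best describes the observed preferred-mass distribution.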

Relevance:

20.00%

Publisher:

Abstract:

The growing interest in sequencing with higher throughput over the last decade has led to the development of new sequencing applications. This thesis concentrates on optimizing DNA library preparation for the Illumina Genome Analyzer II sequencer. The library preparation steps that were optimized include fragmentation, PCR purification and quantification. DNA fragmentation was performed with focused sonication at different concentrations and durations. Two column-based PCR purification methods, a gel matrix method and a magnetic bead-based method were compared. Quantitative PCR and gel electrophoresis on a chip were compared for DNA quantification. The magnetic bead purification was found to be the most efficient and flexible purification method. The fragmentation protocol was changed to produce longer fragments compatible with longer sequencing reads. Quantitative PCR correlates better with the cluster number and should thus be considered the default quantification method for sequencing. As a result of this study, more data have been acquired from sequencing at lower cost, and troubleshooting has become easier as qualification steps have been added to the protocol. New sequencing instruments and applications will create a demand for further optimizations in the future.

Relevance:

20.00%

Publisher:

Abstract:

Markov random fields (MRFs) are popular in image processing applications for describing spatial dependencies between image units. Here, we take a look at the theory and models of MRFs, with an application to improving forest inventory estimates. Typically, autocorrelation between study units is a nuisance in statistical inference, but we take advantage of the dependencies to smooth noisy measurements by borrowing information from the neighbouring units. We build a stochastic spatial model, which we estimate with a Markov chain Monte Carlo simulation method. The smoothed values are validated against another data set, increasing our confidence that the estimates are more accurate than the originals.
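To make the smoothing idea concrete, the sketch below runs a Gibbs sampler for a simple Gaussian Markov random field model on a regular grid, where each unit's value is pulled both towards its noisy measurement and towards its neighbours. The grid, the simulated data and the fixed variance parameters are hypothetical stand-ins; this is not the study's actual model or data.

```python
# Minimal sketch of MCMC (Gibbs) smoothing with a Gaussian Markov random field
# prior on a regular grid (simulated data, assumed known variances).
import numpy as np

rng = np.random.default_rng(1)
H, W = 20, 20
truth = np.outer(np.sin(np.linspace(0, 3, H)), np.cos(np.linspace(0, 3, W)))
y = truth + rng.normal(0.0, 0.3, size=(H, W))   # noisy "inventory" measurements

sigma_obs, tau = 0.3, 0.2        # observation noise sd, MRF roughness sd (assumed)
x = y.copy()
samples = np.zeros_like(y)
n_iter, burn_in = 500, 100

for it in range(n_iter):
    for i in range(H):
        for j in range(W):
            # Values of the 4-nearest neighbours inside the grid.
            nb = [x[i + di, j + dj]
                  for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                  if 0 <= i + di < H and 0 <= j + dj < W]
            # Full conditional of x[i, j] is Gaussian: data term + neighbour term.
            prec = 1.0 / sigma_obs**2 + len(nb) / tau**2
            mean = (y[i, j] / sigma_obs**2 + sum(nb) / tau**2) / prec
            x[i, j] = rng.normal(mean, 1.0 / np.sqrt(prec))
    if it >= burn_in:
        samples += x

smoothed = samples / (n_iter - burn_in)   # posterior mean = smoothed estimates
print(float(np.mean((y - truth) ** 2)), float(np.mean((smoothed - truth) ** 2)))
```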

Relevance:

20.00%

Publisher:

Abstract:

Ever since its initial introduction some fifty years ago, the rational expectations paradigm has dominated the way economic theory handles uncertainty. The main assertion made by John F. Muth (1961), seen by many as the father of the paradigm, is that the expectations of rational economic agents should essentially be equal to the predictions of the relevant economic theory, since rational agents should use the information available to them in an optimal way. This assumption often has important consequences for the results and interpretations of the models where it is applied. Although the rational expectations assumption can be applied to virtually any economic theory, the focus in this thesis is on macroeconomic theories of consumption, especially the Rational Expectations–Permanent Income Hypothesis proposed by Robert E. Hall in 1978. The much-debated theory suggests that, assuming agents have rational expectations about their future income, consumption decisions should follow a random walk, and the best forecast of the future consumption level is the current consumption level; changes in consumption are then unforecastable. This thesis constructs an empirical test of the Rational Expectations–Permanent Income Hypothesis using Finnish Consumer Survey data as well as various Finnish macroeconomic data. The data sample covers the years 1995–2010. Consumer survey data may be interpreted as directly representing household expectations, which makes them an interesting tool for this particular test. The variable to be predicted is the growth of total household consumption expenditure. The main empirical result is that the Consumer Confidence Index (CCI), a balance figure computed from the most important consumer survey responses, does have statistically significant predictive power over the change in total consumption expenditure. The history of consumption expenditure growth itself, however, fails to predict its own future values. This indicates that the CCI contains some information that the history of consumption decisions does not, and that consumption decisions are not optimal in the theoretical sense. However, when conditioned on various macroeconomic variables, the CCI loses its predictive ability. This finding suggests that the index is merely a (partial) summary of macroeconomic information and does not contain any significant private information on households' consumption intentions that is not directly deducible from the objective economic variables. In conclusion, the Rational Expectations–Permanent Income Hypothesis is strongly rejected by the empirical results in this thesis. This result is in accordance with most earlier studies on the topic.
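The core predictability test described above can be sketched as a regression of consumption growth on its own lag and the lagged confidence index; under the Rational Expectations–Permanent Income Hypothesis neither regressor should be significant. The file name and column names below (finnish_quarterly.csv, d_log_consumption, cci) are hypothetical placeholders, not the thesis's actual data or specification.

```python
# Minimal sketch of a predictability regression for consumption growth
# (hypothetical data file and column names; not the thesis's specification).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("finnish_quarterly.csv")        # assumed quarterly data, 1995-2010
df["d_log_c_lag"] = df["d_log_consumption"].shift(1)
df["cci_lag"] = df["cci"].shift(1)

# Under the REPIH, consumption growth is unforecastable, so the coefficients on
# both lagged regressors should be statistically indistinguishable from zero.
model = smf.ols("d_log_consumption ~ d_log_c_lag + cci_lag", data=df.dropna()).fit()
print(model.summary())
```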

Relevance:

20.00%

Publisher:

Abstract:

Modern sample surveys started to spread after statisticians at the U.S. Bureau of the Census in the 1940s had developed a sampling design for the Current Population Survey (CPS). A significant factor was also that digital computers became available to statisticians. In the early 1950s, the theory was documented in textbooks on survey sampling. This thesis is about the development of statistical inference for sample surveys. The idea of statistical inference was first enunciated by the French scientist P. S. Laplace. In 1781, he published a plan for a partial investigation in which he determined the sample size needed to reach the desired accuracy in estimation. The plan was based on Laplace's Principle of Inverse Probability and on his derivation of the Central Limit Theorem. These were published in a memoir in 1774, which is one of the origins of statistical inference. Laplace's inference model was based on Bernoulli trials and binomial probabilities. He assumed that populations were changing constantly, which was expressed by assuming a priori distributions for the parameters. Laplace's inference model dominated statistical thinking for a century. Sample selection in Laplace's investigations was purposive. In 1894, at a meeting of the International Statistical Institute, the Norwegian Anders Kiaer presented the idea of the Representative Method of drawing samples. Its idea, which still prevails, was that the sample should be a miniature of the population. The virtues of random sampling were known, but practical problems of sample selection and data collection hindered its use. Arthur Bowley realized the potential of Kiaer's method and, in the beginning of the 20th century, carried out several surveys in the UK. He also developed the theory of statistical inference for finite populations, based on Laplace's inference model. R. A. Fisher's contributions in the 1920s constitute a watershed in statistical science: he revolutionized the theory of statistics. In addition, he introduced a new statistical inference model which is still the prevailing paradigm. Its essential ideas are to draw samples repeatedly from the same population and to assume that the population parameters are constants. Fisher's theory did not include a priori probabilities. Jerzy Neyman adopted Fisher's inference model and applied it to finite populations, with the difference that Neyman's inference model does not include any assumptions about the distributions of the study variables. Applying Fisher's fiducial argument, he developed the theory of confidence intervals. Neyman's last contribution to survey sampling presented a theory for double sampling. This gave statisticians at the U.S. Census Bureau the central idea for developing the complex survey design of the CPS. An important criterion was to have a method in which the costs of data collection were acceptable and which provided approximately equal interviewer workloads, as well as sufficient accuracy in estimation.