35 results for Error-Free Transformations
Abstract:
Visual acuities at the time of referral and on the day before surgery were compared in 124 patients operated on for cataract in Vaasa Central Hospital, Finland. Preoperative visual acuity and the occurrence of ocular and general disease were compared in samples of consecutive cataract extractions performed in 1982, 1985, 1990, 1995 and 2000 in two hospitals in the Vaasa region in Finland. The repeatability and standard deviation of random measurement error in visual acuity and refractive error determination in a clinical environment in cataractous, pseudophakic and healthy eyes were estimated by re-examining the visual acuity and refractive error of patients referred to cataract surgery or consultation by ophthalmic professionals. Altogether 99 eyes of 99 persons (41 cataractous, 36 pseudophakic and 22 healthy eyes) with a visual acuity range of Snellen 0.3 to 1.3 (0.52 to -0.11 logMAR) were examined. During an average waiting time of 13 months, visual acuity in the study eye decreased from 0.68 logMAR to 0.96 logMAR (from 0.2 to 0.1 in Snellen decimal values). The average decrease in vision was 0.27 logMAR per year. In the fastest quartile, the visual acuity change per year was 0.75 logMAR, and in the second fastest 0.29 logMAR; the third and fourth quartiles were virtually unaffected. From 1982 to 2000, the incidence of cataract surgery increased from 1.0 to 7.2 operations per 1000 inhabitants per year in the Vaasa region. The average preoperative visual acuity in the operated eye improved by 0.85 logMAR (in decimal values from 0.03 to 0.2) and in the better eye by 0.27 logMAR (in decimal values from 0.23 to 0.43) over this period. The proportion of patients profoundly visually handicapped (VA in the better eye <0.1) before the operation fell from 15% to 4%, and that of patients less profoundly visually handicapped (VA in the better eye 0.1 to <0.3) from 47% to 15%.
The repeatability of visual acuity measurement, estimated as a coefficient of repeatability for all 99 eyes, was ±0.18 logMAR, and the standard deviation of measurement error was 0.06 logMAR. Eyes with the lowest visual acuity (0.3-0.45) had the largest variability, with a coefficient of repeatability of ±0.24 logMAR, while eyes with a visual acuity of 0.7 or better had the smallest, ±0.12 logMAR. The repeatability of refractive error measurement was studied in the same patient material as the repeatability of visual acuity. Differences between measurements 1 and 2 were calculated as three-dimensional vector values and spherical equivalents and expressed by coefficients of repeatability. Coefficients of repeatability for all eyes for vertical, torsional and horizontal vectors were ±0.74 D, ±0.34 D and ±0.93 D, respectively, and for the spherical equivalent for all eyes ±0.74 D. Eyes with lower visual acuity (0.3-0.45) had larger variability in vector and spherical equivalent values (±1.14 D), but the difference between visual acuity groups was not statistically significant. The difference in the mean defocus equivalent between measurements 1 and 2 was, however, significantly greater in the lower visual acuity group. If a change of ±0.5 D (measured in defocus equivalents) is accepted as a basis for changing spectacles for eyes with good vision, the corresponding basis for eyes in the visual acuity range of 0.3-0.65 would be ±1 D. Differences in repeated visual acuity measurements are partly explained by errors in refractive error measurements.
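The conversions and repeatability figures quoted above can be reproduced with a short sketch. The formulas are standard (logMAR is the negative base-10 logarithm of the Snellen decimal acuity; the Bland-Altman coefficient of repeatability is 1.96 times the standard deviation of test-retest differences); the function names and the sample data are illustrative, not taken from the study.

```python
import math

def snellen_to_logmar(decimal_acuity):
    """logMAR is the negative base-10 logarithm of the Snellen decimal acuity."""
    return -math.log10(decimal_acuity)

def coefficient_of_repeatability(first, second):
    """Bland-Altman coefficient of repeatability:
    1.96 times the standard deviation of the test-retest differences."""
    diffs = [a - b for a, b in zip(first, second)]
    mean = sum(diffs) / len(diffs)
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1))
    return 1.96 * sd

# the Snellen range 0.3 to 1.3 quoted above maps to 0.52 to -0.11 logMAR
assert round(snellen_to_logmar(0.3), 2) == 0.52
assert round(snellen_to_logmar(1.3), 2) == -0.11
```

With two lists of repeated acuity measurements in logMAR, the second function gives the ±CR bound within which 95% of test-retest differences are expected to fall.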
Abstract:
The output of a laser is a high frequency propagating electromagnetic field with superior coherence and brightness compared to that emitted by thermal sources. A multitude of different types of lasers exist, which also translates into large differences in the properties of their output. Moreover, the characteristics of the electromagnetic field emitted by a laser can be influenced from the outside, e.g., by injecting an external optical field or by optical feedback. In the case of free-running solitary class-B lasers, such as semiconductor and Nd:YVO4 solid-state lasers, the phase space is two-dimensional, the dynamical variables being the population inversion and the amplitude of the electromagnetic field. The two-dimensional structure of the phase space means that no complex dynamics can be found. If a class-B laser is perturbed from its steady state, then the steady state is restored after a short transient. However, as discussed in part (i) of this Thesis, the static properties of class-B lasers, as well as their artificially or noise-induced dynamics around the steady state, can be experimentally studied in order to gain insight into laser behaviour, and to determine model parameters that are not known ab initio. In this Thesis particular attention is given to the linewidth enhancement factor, which describes the coupling between the gain and the refractive index in the active material. A highly desirable attribute of an oscillator is stability, both in frequency and amplitude. Nowadays, however, instabilities in coupled lasers have become an active area of research, motivated not only by the interesting complex nonlinear dynamics but also by potential applications. In part (ii) of this Thesis the complex dynamics of unidirectionally coupled, i.e., optically injected, class-B lasers is investigated. An injected optical field increases the dimensionality of the phase space to three by turning the phase of the electromagnetic field into an important variable.
This has a radical effect on laser behaviour, since very complex dynamics, including chaos, can be found in a nonlinear system with three degrees of freedom. The output of the injected laser can be controlled in experiments by varying the injection rate and the frequency of the injected light. In this Thesis the dynamics of unidirectionally coupled semiconductor and Nd:YVO4 solid-state lasers is studied numerically and experimentally.
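The two-dimensional class-B phase space and its return to steady state after a perturbation can be illustrated with a generic dimensionless rate-equation model. The equations below are a textbook form (photon number coupled to population inversion), and the parameter values and initial perturbation are illustrative assumptions, not the parameters of the lasers studied in the Thesis.

```python
# Dimensionless class-B rate equations for photon number s and inversion n
# (a generic textbook form; parameters are illustrative assumptions):
#   ds/dt = (n - 1) * s / tau_p        (photon-number / field equation)
#   dn/dt = (p - n - n * s) / tau_c    (population-inversion equation)
tau_p, tau_c, p = 1e-3, 1.0, 2.0       # photon lifetime << inversion lifetime (class B)

def derivs(s, n):
    return (n - 1.0) * s / tau_p, (p - n - n * s) / tau_c

# steady state: n = 1, s = p - 1; perturb the photon number and integrate
s, n = 1.5 * (p - 1.0), 1.0
dt = 1e-5
for _ in range(500_000):               # integrate to t = 5 with a midpoint (RK2) step
    ds1, dn1 = derivs(s, n)
    ds2, dn2 = derivs(s + 0.5 * dt * ds1, n + 0.5 * dt * dn1)
    s, n = s + dt * ds2, n + dt * dn2

# after damped relaxation oscillations the laser returns to s = p - 1, n = 1
```

The trajectory spirals back to the fixed point through damped relaxation oscillations, matching the statement above that a perturbed class-B laser restores its steady state after a short transient; adding an injected field (a third dynamical variable, the optical phase) is what opens the door to chaos.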
Abstract:
A better understanding of the limiting step in a first order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from atmospheric sciences to nanotechnology and even to cosmology. This is due to the fact that in most phase transitions the new phase is separated from the mother phase by a free energy barrier. This barrier is crossed in a process called nucleation. Nowadays it is considered that a significant fraction of all atmospheric particles is produced by vapor-to-liquid nucleation. In atmospheric sciences, as well as in other scientific fields, the theoretical treatment of nucleation is mostly based on a theory known as the Classical Nucleation Theory. However, the Classical Nucleation Theory is known to have only limited success in predicting the rate at which vapor-to-liquid nucleation takes place at given conditions. This thesis studies unary homogeneous vapor-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapor and liquid phases and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapour is accurately described by the liquid drop model applied by the Classical Nucleation Theory, once the clusters are larger than some threshold size. The threshold cluster sizes contain only a few or some tens of molecules depending on the interaction potential and temperature. However, the error made in modeling the smallest of clusters as liquid drops results in an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law.
By calculating correction factors to Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates that are in good agreement with experiments. For the smallest clusters, the deviation between the simulation results and the liquid drop values is accurately modelled by the low order virial coefficients at modest temperatures and vapour densities, or in other words, in the validity range of the non-interacting cluster theory by Frenkel, Band and Bijl. Our results do not indicate a need for a size dependent replacement free energy correction. The results also indicate that Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for the calculation of the equilibrium vapour density, the size dependence of the surface tension and the planar surface tension directly from cluster simulations. Finally, we show how the size dependence of the cluster surface tension at the equimolar surface is a function of virial coefficients, a result confirmed by our cluster simulations.
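The liquid drop model behind the Classical Nucleation Theory barrier discussed above can be sketched in a few lines. In reduced form the work of forming an n-molecule cluster is W(n)/kT = -n ln S + θ n^(2/3), where S is the supersaturation and θ a reduced surface-energy parameter; maximizing W gives the critical cluster size and barrier height analytically. The parameter values below are illustrative, not the fitted argon or water values from the simulations.

```python
def cnt_barrier(theta, ln_s):
    """Liquid drop model of Classical Nucleation Theory in reduced form:
    W(n)/kT = -n*ln_s + theta*n**(2/3), with n the cluster size,
    ln_s the logarithm of the supersaturation and theta a reduced
    surface-energy parameter.  Returns the critical cluster size n*
    (where dW/dn = 0) and the barrier height W*/kT."""
    n_star = (2.0 * theta / (3.0 * ln_s)) ** 3
    w_star = 4.0 * theta ** 3 / (27.0 * ln_s ** 2)
    return n_star, w_star

# illustrative (not fitted) values for the two parameters
n_star, w_star = cnt_barrier(theta=10.0, ln_s=2.0)
```

Increasing the supersaturation lowers both the critical size and the barrier, which is why nucleation rates are so sensitive to vapour conditions; the thesis corrections enter as size-dependent adjustments to W(n) for the smallest clusters.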
Abstract:
In line with cultural psychology and developmental theory, a single-case approach is applied to construct knowledge on how children's interactions emerge interlinked with the historical, social, cultural, and material context. The study focuses on the negotiation of constraints and meaning construction among 2- to 3-year-old children, a preschool teacher, and the researcher in settings with water. Water as an element offers a special case of cultural canalization: adults selectively monitor and guide children's access to it. The work follows the socio-cultural tradition in psychology, particularly the co-constructivist theory of human development and the Network of Meanings perspective developed at the University of São Paulo. Valsiner's concepts of the Zone of Free Movement and the Zone of Promoted Action are applied together with studies where interactions are seen as spaces of construction in which the negotiation of constraints for actions, emotions, and conceptions occurs. The corpus was derived at a Finnish municipal day care centre. During a seven-month period, children's actions were video recorded in small groups twice a month. The teacher and the researcher were present. Four sessions with two children were chosen for qualitative microanalysis; the analysis also addressed the transformations during the months covered by the study. Moreover, the data derivation was analyzed reflectively. The narrowed-down arenas for actions were continuously negotiated among the participants both nonverbally and verbally. The adults' expectations and intentions were materialized in the arrangements of the setting, canalizing the possibilities for actions. Children's co-regulated actions emerged in relation to the adults' presence, re-structuring attempts, and the constraints of the setting. Children co-constructed novel movements and meanings in relation to the initiatives and objects offered.
Gestures, postures, and verbalizations emerged from the initially random movements and became constructed to have specific meanings and functions; meaning construction became abbreviated. The participants attempted to make sense of the ambiguous (explicit and implicit) intentions and fuzzy boundaries of promoted and possible actions: individualized yet overlapping features were continuously negotiated by all the participants. Throughout the months, children's actions increasingly corresponded to the adults' (re-defined) conceptions of water research as an emerging group culture. Water became an instrument and a context for co-regulations. The study contributes to discussions on children as participants in cultural canalization and emphasizes the need, in early childhood education practices, for analysis of the implicit and explicit constraint structures for actions.
Abstract:
This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the previous literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of the static and dynamic probit models in forecasting the U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictive variables. The empirical results suggest that the recession periods are predictable and that dynamic probit models, especially models with the autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite sample properties of the LM test are considered with simulation experiments. Results indicate that the two alternative LM test statistics have reasonable size and power in large samples. In small samples, a parametric bootstrap method is suggested to obtain approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use the recession forecast (see Chapter 2) as a predictor of the stock return sign. The evidence suggests that the signs of the U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts and it also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of the univariate models considered in Chapters 2-4 to the case of a bivariate model.
A new bivariate autoregressive probit model is applied to predict the current state of the U.S. business cycle and growth rate cycle periods. Evidence of predictability of both cycle indicators is obtained and the bivariate model is found to outperform the univariate models in terms of predictive power.
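As a minimal sketch of the model class described above (not the thesis code), a dynamic probit with an autoregressive structure can be evaluated as follows. The recursion pi_t = omega + alpha*pi_{t-1} + beta*x_{t-1} with P(y_t = 1) = Phi(pi_t) is one common form of such a model; the variable names and the initial condition pi_0 = 0 are illustrative assumptions.

```python
import math

def Phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def dynamic_probit_loglik(params, y, x):
    """Log-likelihood of a dynamic probit with an autoregressive structure:
        pi_t = omega + alpha * pi_{t-1} + beta * x_{t-1},
        P(y_t = 1) = Phi(pi_t),
    where y is a 0/1 indicator (e.g. recession periods) and x a
    predictive variable (e.g. an interest rate spread)."""
    omega, alpha, beta = params
    pi, ll = 0.0, 0.0
    for t in range(1, len(y)):
        pi = omega + alpha * pi + beta * x[t - 1]
        prob = min(max(Phi(pi), 1e-12), 1.0 - 1e-12)  # guard against log(0)
        ll += y[t] * math.log(prob) + (1 - y[t]) * math.log(1.0 - prob)
    return ll
```

Maximizing this log-likelihood over (omega, alpha, beta) estimates the model; setting alpha = 0 recovers a static probit, which is what makes the comparison of static and dynamic specifications straightforward.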
Abstract:
This dissertation traces a set of historical transformations the Darwinian evolutionary narrative has undergone toward the end of the twentieth century, especially as reflected in Anglo-American popular science books and novels. The study has three objectives. First, it seeks to understand the organizing logic of evolutionary narratives and the role that assumptions about gender and sexuality play in that logic. Second, it asks what kinds of cultural anxieties evolutionary theory raises and how evolutionary narratives negotiate them. Third, it examines the possibilities and limits of narrative transformation both as a historical phenomenon and as a theoretical question. This interdisciplinary dissertation is situated at the intersection of science studies, cultural studies, literary studies, and gender studies. Its understanding of science as a cultural practice that both emerges from and contributes to cultural expectations and institutional structures follows the tradition of science studies. Its focus on the question of popular appeal and the mechanisms of cultural change arises from cultural studies. Its view of narrative as a structural phenomenon is grounded in literary studies in general and feminist narrative theory in particular. Its understanding of gender and sexuality as implicated in discourses of epistemic authority builds on the view of gender and sexuality as contingent cultural categories central to gender studies. The primary material consists of over 25 British and American popular science books and novels, published roughly between 1990 and 2005. In order to highlight historical transformations, these texts are read in the context of Darwin's The Origin of Species and The Descent of Man, on the one hand, and such sociobiological classics as E. O. Wilson's On Human Nature and Richard Dawkins's The Selfish Gene, on the other.
The research method combines feminist narrative analysis with cultural and historical contextualization, emphasizing discursive abruptions, recurrent narrative patterns, and underlying continuities. The dissertation demonstrates that the relationship between Darwin's evolutionary narrative and late twentieth-century evolutionary narratives is characterized by reemphasis, omissions, and continuous rewriting. In particular, contemporary evolutionary discourse extends the role assigned to reproduction, both sexual and narrative, in Darwin's writing, generating a narrative logic that imagines the desire to reproduce as the driving force of evolution and posits the reproductive sex act as the endlessly repeated narrative event that keeps the story going. The study argues that the popular appeal of evolutionary accounts of gender, sexuality, and human nature may arise, to an extent, from this reproductive narrative dynamic. This narrative dynamic, however, is not logically invulnerable. Since the continuation of the evolutionary narrative relies on successful reproduction, the possibility of reproductive failure poses a constant risk to narrative futurity, arousing cultural anxieties that evolutionary narratives need to address. The study argues that evolutionary narratives appease such anxieties by evoking a range of cultural narratives, especially romantic, religious, and national narratives. Furthermore, the study shows that the event-based logic of evolutionary narratives privileges observable acts over emotions, pleasures, identities, and desires, thus engendering a set of conceptual exclusions that limits the imaginative scope of evolution as a cultural narrative.
Abstract:
The syntactic constraints of Koskenniemi's Finite-State Intersection Grammar (FSIG) are logically less complex than the formalism used to express them would suggest. It turns out that although Voutilainen's (1994) FSIG description of English uses several extensions of regular expressions, the grammar description as a whole reduces to a finite combination of union, complement and concatenation. This is a substantial improvement on the descriptive complexity of ENGFSIG. The result opens doors to a deeper analysis of the logical properties of FSIG descriptions and to the potential optimization of FSIG descriptions. The proof contains a new formula that translates Koskenniemi's restriction operation without marker symbols.
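The flavour of such a reduction can be shown on a toy scale: over any fixed universe of strings, the intersection of constraint languages (the operation FSIG parsing is built on) is itself expressible through union and complement alone, by De Morgan's law. This is only an illustrative sketch with made-up toy constraints, not the proof referred to above.

```python
from itertools import product

# Toy universe: all strings over {a, b} of length at most 3.
universe = {"".join(w) for k in range(4) for w in product("ab", repeat=k)}

# Two toy "constraints" represented as string sets
# (stand-ins for the constraint languages of an FSIG description):
L1 = {w for w in universe if w.startswith("a")}
L2 = {w for w in universe if w.endswith("b")}

def complement(L):
    """Complement relative to the fixed universe."""
    return universe - L

# De Morgan: L1 ∩ L2 = complement(complement(L1) ∪ complement(L2)),
# so intersection reduces to union and complement.
via_demorgan = complement(complement(L1) | complement(L2))
assert via_demorgan == L1 & L2
```

Regular languages are closed under all three operations (union, complement, concatenation), which is what makes a finite combination of them a meaningful complexity bound for a grammar description.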
Abstract:
This paper is concerned with using the bootstrap to obtain improved critical values for the error correction model (ECM) cointegration test in dynamic models. We investigate the effects of dynamic specification on the size and power of the ECM cointegration test with bootstrap critical values. The results from a Monte Carlo study show that the size of the bootstrap ECM cointegration test is close to the nominal significance level. We find that overspecification of the lag length results in a loss of power, while underspecification results in size distortion. The performance of the bootstrap ECM cointegration test thus deteriorates if the correct lag length is not used in the ECM; the test is therefore not robust to model misspecification.
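A stripped-down sketch of how bootstrap critical values for an ECM t-statistic can be generated: estimate a one-lag ECM, resample the centred residuals, regenerate the dependent variable under the null of no cointegration while keeping the estimated short-run dynamics, and take an empirical quantile of the bootstrap t-ratios. This is a generic residual bootstrap with illustrative variable names, not the authors' procedure; a real application would also include further lagged differences, which is exactly where the lag-length specification issue discussed above enters.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_t_ecm(dy, ect, dy_lag):
    """OLS fit of a one-lag ECM: dy_t = gamma*ect_{t-1} + phi*dy_{t-1} + c + e_t.
    Returns the t-ratio of gamma (the cointegration test statistic),
    the coefficient vector and the residuals."""
    X = np.column_stack([ect, dy_lag, np.ones_like(ect)])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    sigma2 = resid @ resid / (len(dy) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
    return beta[0] / se, beta, resid

def bootstrap_critical_value(dy, ect, dy_lag, level=0.05, B=199):
    """Residual bootstrap under the null of no cointegration (gamma = 0):
    keep the estimated short-run dynamics phi, resample centred residuals,
    regenerate dy recursively, and return the empirical level-quantile of
    the bootstrap t-ratios as the critical value."""
    _, beta, resid = ols_t_ecm(dy, ect, dy_lag)
    phi, resid = beta[1], resid - resid.mean()
    stats = []
    for _ in range(B):
        e = rng.choice(resid, size=len(dy) + 1, replace=True)
        dy_b = np.empty(len(dy) + 1)
        dy_b[0] = e[0]
        for t in range(1, len(dy_b)):      # null DGP: dy_t = phi*dy_{t-1} + e_t
            dy_b[t] = phi * dy_b[t - 1] + e[t]
        t_b, _, _ = ols_t_ecm(dy_b[1:], ect, dy_b[:-1])
        stats.append(t_b)
    return np.quantile(stats, level)
```

The sketch makes the paper's point concrete: the bootstrap null model inherits whatever lag structure is estimated, so an over- or underspecified lag length distorts the bootstrap distribution itself.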