170 results for MISALIGNMENT
Abstract:
Optical filters are crucial elements in optical communications, and cascaded filters can seriously degrade communication quality. In this paper we study and simulate the optical signal impairment caused by different kinds of filters, including Butterworth, Bessel, Fiber Bragg Grating (FBG) and Fabry-Perot (FP) types. The impairment is analyzed in terms of Eye Opening Penalty (EOP) and the optical spectrum. The simulation results show that when the center frequency of all filters is aligned with the laser frequency, the Butterworth filter has the smallest influence on the signal while the FP filter has the largest. At a −1 dB EOP, 18 cascaded Butterworth optical filters with a bandwidth of 50 GHz can be tolerated in 40 Gbps NRZ-DQPSK systems and 12 in 100 Gbps PM-NRZ-DQPSK systems; for Fabry-Perot filters these numbers fall to 9 and 6, respectively. Frequency misalignment makes the filter-induced impairment more severe: with a frequency deviation of 5 GHz, only 12 and 9 Butterworth optical filters can be cascaded in 40 Gbps NRZ-DQPSK and 100 Gbps PM-NRZ-DQPSK systems, respectively. We also study the signal impairment caused by different orders of the Butterworth filter model. Although a higher-order filter clips the transmitted spectrum less, it introduces a more severe phase ripple that strongly degrades the signal. Simulation shows that the 2nd-order Butterworth filter has the best performance.
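The way penalty accumulates with the number of cascaded filters can be sketched from the Butterworth amplitude response alone. This is a hedged illustration, not the paper's simulation: the 2nd order, the 25 GHz single-sided cutoff standing in for a 50 GHz bandwidth, and the frequency offsets are all assumptions, and only amplitude clipping is modelled (the EOP figures above also reflect phase effects and the modulation format).

```python
import math

def butterworth_mag(f, fc, order):
    """Amplitude response |H(f)| of a low-pass Butterworth prototype
    with 3 dB cutoff fc, used here as a stand-in for an optical
    bandpass measured as offset from the carrier."""
    return 1.0 / math.sqrt(1.0 + (f / fc) ** (2 * order))

def cascade_penalty_db(f, fc, order, n_filters):
    """Power penalty in dB at offset f after n identical filters."""
    mag = butterworth_mag(f, fc, order) ** n_filters
    return -20.0 * math.log10(mag)

# 50 GHz filters modelled as a 25 GHz single-sided 3 dB cutoff (assumption).
fc = 25.0  # GHz
# At the cutoff a single filter loses 3 dB; 18 cascaded filters lose ~54 dB.
print(round(cascade_penalty_db(25.0, fc, 2, 1), 2))   # 3.01
print(round(cascade_penalty_db(25.0, fc, 2, 18), 1))  # 54.2
# A 5 GHz carrier offset makes the passband asymmetric: spectral content
# 30 GHz from the carrier is attenuated far more than content at 20 GHz.
print(round(cascade_penalty_db(30.0, fc, 2, 12), 1))
print(round(cascade_penalty_db(20.0, fc, 2, 12), 1))
```

Because identical filters multiply in amplitude, the dB penalty at any offset grows linearly with the filter count, which is why the tolerable cascade depth drops so quickly once the carrier is detuned toward a filter edge.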
Abstract:
The cloud computing paradigm has risen in popularity within industry and academia. Public cloud infrastructures are enabling new business models and helping to reduce costs. However, the desire to host a company's data and services on premises, and the need to abide by data protection laws, make private cloud infrastructures desirable, either to complement or even fully substitute public offerings.
Unfortunately, a lack of standardization has prevented private infrastructure management solutions from maturing, and a myriad of different options has induced a fear of lock-in in customers. One cause of this problem is the misalignment between academic research and industry offerings, with the former focusing on idealized scenarios dissimilar from real-world situations, and the latter developing solutions without regard for how they fit with common standards, or even without disseminating their results. To solve this problem I propose a modular management system for private cloud infrastructures that focuses on the applications instead of just the hardware resources. This management system follows the autonomic computing paradigm, and is designed around a simple information model developed to be compatible with common standards. This model splits the environment into two views that separate the stakeholders' concerns while at the same time enabling traceability between the physical environment and the virtual machines deployed onto it. In it, cloud applications are classified into three broad types (Services, Big Data Jobs and Instance Reservations), so that the management system can take advantage of each type's features. The information model is paired with a set of atomic, reversible and independent management actions, which determine the operations that can be performed over the environment and which is used to realize the cloud environment's scalability. I also describe a management engine that, from the environment's state and using the aforementioned set of actions, carries out resource placement. It is divided into two tiers: the Application Managers layer, concerned only with applications; and the Infrastructure Manager layer, responsible for the actual physical resources. This management engine follows a lifecycle with two phases, to better model the behavior of a real infrastructure.
The placement problem is tackled during one phase (consolidation) with an integer programming solver, and during the other (online) with a custom heuristic. Tests have demonstrated that this combined approach is superior to other strategies. Finally, the management system is paired with monitoring and actuator architectures: the former collects the necessary information from the environment, while the latter is modular in design, capable of interfacing with several technologies and of offering several access interfaces.
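The abstract does not spell out the online heuristic, so the sketch below uses first-fit decreasing, a common bin-packing heuristic, purely as a stand-in for how an online placement pass might assign virtual machines to hosts; the function name, the single CPU dimension, and all capacities are hypothetical.

```python
# Hypothetical sketch of an online VM-placement pass (the thesis's actual
# heuristic is custom; first-fit decreasing is shown only as a stand-in).
def place_first_fit_decreasing(vm_demands, host_capacity, n_hosts):
    """Assign each VM (by CPU demand) to the first host with room.
    Returns {vm_index: host_index}, or None if some VM cannot be placed."""
    free = [host_capacity] * n_hosts
    placement = {}
    # Largest VMs first: tends to reduce fragmentation across hosts.
    for vm in sorted(range(len(vm_demands)), key=lambda i: -vm_demands[i]):
        for host in range(n_hosts):
            if free[host] >= vm_demands[vm]:
                free[host] -= vm_demands[vm]
                placement[vm] = host
                break
        else:
            return None  # online pass fails; defer to the consolidation phase
    return placement

demands = [4, 2, 7, 3, 5, 1]  # CPU cores requested per VM (illustrative)
print(place_first_fit_decreasing(demands, host_capacity=8, n_hosts=3))
```

A solver-based consolidation phase can later repack the same environment optimally; the heuristic only has to produce a feasible placement quickly as requests arrive.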
Abstract:
The ability to accurately observe the Earth's carbon cycles from space gives scientists an important tool to analyze climate change. Current space-borne Integrated-Path Differential Absorption (IPDA) lidar concepts have the potential to meet this need. They are mainly based on the pulsed time-of-flight principle, in which two high-energy pulses of different wavelengths interrogate the atmosphere for its transmission properties and are backscattered by the ground. In this paper, feasibility study results of a Pseudo-Random Single Photon Counting (PRSPC) IPDA lidar are reported. The proposed approach replaces the high-energy pulsed source (e.g. a solid-state laser) with a semiconductor laser in CW operation with a similar average power of a few watts, benefiting from better efficiency and reliability. The auto-correlation property of a Pseudo-Random Binary Sequence (PRBS) and temporal shifting of the codes can be exploited to transmit both wavelengths simultaneously, avoiding the beam misalignment problem experienced by pulsed techniques. The envelope signal-to-noise ratio has been analyzed, and various system parameters have been selected. By restricting the telescope's field of view, the dominant noise source of ambient light can be suppressed, and together with a low-noise single photon counting detector a retrieval precision of 1.5 ppm over 50 km of along-track averaging could be attained. We also describe preliminary experimental results involving a negative-feedback Indium Gallium Arsenide (InGaAs) single photon avalanche photodiode and a low-power Distributed Feedback laser diode modulated by a PRBS-driven acousto-optic modulator. The results demonstrate that higher detector saturation count rates will be needed for future spaceborne missions, but the measurement linearity and precision should meet the stringent requirements set by future Earth-observing missions.
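The PRBS property the proposal relies on, a sharp auto-correlation peak with flat sidelobes so that time-shifted copies of the code do not interfere, can be verified with a short sketch. PRBS7 (feedback taps 7 and 6) and the all-ones seed are illustrative choices; the paper's actual code length is not stated here.

```python
def prbs(taps, nbits):
    """Maximal-length PRBS from a Fibonacci LFSR; returns chips as +/-1.
    Period is 2**nbits - 1; taps are 1-based XOR feedback positions."""
    state = [1] * nbits  # any nonzero seed works; all-ones chosen here
    seq = []
    for _ in range(2 ** nbits - 1):
        seq.append(1 if state[-1] else -1)
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]  # shift register, feed back at the front
    return seq

def circular_autocorr(seq, lag):
    """Circular auto-correlation of a +/-1 chip sequence at a given lag."""
    n = len(seq)
    return sum(seq[i] * seq[(i + lag) % n] for i in range(n))

# PRBS7: x^7 + x^6 + 1, length 127.
code = prbs([7, 6], 7)
print(circular_autocorr(code, 0))  # 127 (peak)
print(circular_autocorr(code, 5))  # -1 (every off-peak lag)
```

For any maximal-length sequence mapped to ±1 chips, the circular auto-correlation is N at zero lag and exactly −1 at every other lag, which is what allows the two wavelength channels to be separated by their code shifts.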
Abstract:
An approximately decadal periodicity in surface air temperature is discernible in global observations from A.D. 1855 to 1900 and since A.D. 1945, but with a periodicity of only about 6 years during the intervening period. Changes in solar irradiance related to the sunspot cycle have been proposed to account for the former, but cannot account for the latter. To explain both by a single mechanism, we propose that extreme oceanic tides may produce changes in sea surface temperature at repeat periods, which alternate between approximately one-third and one-half of the lunar nodal cycle of 18.6 years. These alternations, recurring at nearly 90-year intervals, reflect varying slight degrees of misalignment and departures from the closest approach of the Earth with the Moon and Sun at times of extreme tide-raising forces. Strong forcing, consistent with observed temperature periodicities, occurred at 9-year intervals close to perihelion (solar perigee) for several decades centered on A.D. 1881 and 1974, but at 6-year intervals for several decades centered on A.D. 1923. As a physical explanation for tidal forcing of temperature we propose that the dissipation of extreme tides increases vertical mixing of sea water, thereby causing episodic cooling near the sea surface. If this mechanism correctly explains near-decadal temperature periodicities, it may also apply to variability in temperature and climate on other time-scales, even millennial and longer.
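The two repeat periods quoted above are fixed fractions of the nodal cycle, and the ~90-year recurrence is consistent with both five nodal cycles and the gap between the two 9-year epochs; a quick arithmetic check:

```python
# Check that the reported periodicities are consistent with fixed
# fractions of the 18.6-year lunar nodal cycle.
NODAL_CYCLE = 18.6  # years

third = NODAL_CYCLE / 3  # ~6.2 yr: the ~6-year intervals centered on A.D. 1923
half = NODAL_CYCLE / 2   # ~9.3 yr: the ~9-year intervals centered on 1881 and 1974
print(round(third, 1), round(half, 1))  # 6.2 9.3

# The ~90-year recurrence of the alternation: five nodal cycles, and the
# observed gap between the two 9-year epochs (A.D. 1881 and 1974).
print(round(NODAL_CYCLE * 5, 1), 1974 - 1881)  # 93.0 93
```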
Abstract:
Rearrangements between tandem sequence homologies of various lengths are a major source of genomic change and can be deleterious to the organism. These rearrangements can result in either deletion or duplication of genetic material flanked by direct sequence repeats. Molecular genetic analysis of repetitive sequence instability in Escherichia coli has provided several clues to the underlying mechanisms of these rearrangements. We present evidence for three mechanisms of RecA-independent sequence rearrangements: simple replication slippage, sister-chromosome exchange-associated slippage, and single-strand annealing. We discuss the constraints of these mechanisms and contrast their properties with RecA-dependent homologous recombination. Replication plays a critical role in the two slipped misalignment mechanisms, and difficulties in replication appear to trigger rearrangements via all these mechanisms.
Abstract:
A functional methyl-directed mismatch repair pathway in Escherichia coli prevents the formation of deletions between 101-bp tandem repeats with 4% sequence divergence. Deletions between perfectly homologous repeats are unaffected. Deletion in both cases occurs independently of the homologous recombination gene, recA. Because the methyl-directed mismatch repair pathway detects and excises one strand of a mispaired duplex, an intermediate for RecA-independent deletion of tandem repeats must therefore be a heteroduplex formed between strands of each repeat. We find that the MutH endonuclease, which in vivo specifically incises the newly replicated strand of DNA, and the Dam methylase, the source of this strand discrimination, are absolutely required for the exclusion of "homeologous" (imperfectly homologous) tandem deletion. This supports the idea that the heteroduplex intermediate for deletion occurs during or shortly after DNA replication, in the context of hemi-methylation. Our findings confirm a "replication slippage" model for deletion formation, whereby the displacement and misalignment of the nascent strand relative to the repeated sequence in the template strand accomplishes the deletion.
Abstract:
The main objective of this study is to compare the 3-point bending test with brackets against the resistance-to-sliding test, using a new device that simultaneously measures the coefficient of friction, the forces and moments on the anchorage brackets, and the deactivation force on the misaligned bracket, as exerted by orthodontic wires. The secondary objectives were to develop the device and to compare, in the 3-point test: (i) the influence on the measured quantities and on the kinetic coefficient of friction of varying the symmetry of the inter-bracket distances, the type of anchorage bracket (canine or second premolar), the displacement (3 or 5 mm) of the central bracket, the direction of misalignment (buccal or lingual) of the central bracket, and the wire-bracket brand; (ii) the three ways of calculating the kinetic coefficient of friction; and (iii) the 10 cycles, toward buccal or lingual, to verify whether or not they are similar to each other. Self-ligating brackets (teeth 13, 14 and 15) and 0.014'' NiTi and CuNiTi wires from the Aditek and Ormco brands were used. The resistance-to-sliding test was performed with lingual misalignment, at both displacements and in the symmetric configuration. The 3-point test with brackets was performed with lingual and buccal misalignment, at both displacements and in the symmetric and asymmetric configurations. Using ANOVA, the two types of test were compared regarding: (A) the measured quantities and the coefficient of friction; and (B) the coefficient of friction generated only at the second premolar bracket.
Using the same statistical test, the following were compared in the 3-point test with brackets: (A) in the symmetric configuration, selected quantities and the coefficient of friction arising from variation of the wire-bracket brand, the displacement, the misalignment and the bracket type; (B) selected quantities and the coefficient of friction generated in the symmetric and asymmetric configurations; (C) the values of the three ways of calculating the coefficient of friction in the symmetric configuration; and (D) selected quantities and the coefficient of friction found over the 10 cycles. Results: (A) most values of the quantities and of the coefficient of friction generated by the two types of test were statistically different; (B) the second premolar bracket showed different coefficient-of-friction values between the two types of test; (C) in the symmetric configuration, the variables were statistically significant in most cases for the analyzed quantities and for the coefficient of friction; (D) there was a difference between the symmetric and asymmetric configurations; (E) the coefficient of friction based on the two normal forces and on the friction force came closest to clinical reality and was sensitive to variation in the geometry of the wire-bracket relationship; and (F) the 10 lingual cycles were similar to each other in 70% of cases, while the 10 buccal cycles were different in 57% of cases. Conclusions: the 3-point test with brackets differs from the resistance-to-sliding test; variation of the geometric configurations and of the wire-bracket brand can influence the values of the quantities and of the kinetic coefficient of friction; and the 10 lingual cycles were more similar to each other than the 10 buccal cycles.
Abstract:
A chemical sensor based on a coated long-period grating has been prepared and characterized. Designer coatings based on polydimethylsiloxane were prepared by the incorporation of diphenylsiloxane and titanium cross-linker in order to provide enhanced sensitivity for a variety of key environmental pollutants and optimal refractive index of the coating. Upon microextraction of the analyte into the polymer matrix, an increase in the refractive index of the coating resulted in a change in the attenuation spectrum of the long-period grating. The grating was interrogated using ring-down detection as a means to amplify the optical loss and to gain stability against misalignment and power fluctuations. Chemical differentiation of cyclohexane and xylene was achieved and a detection limit of 300 ppm of xylene vapour was realized.
Abstract:
The issue of “trade and exchange rate misalignments” is being discussed at the G20, IMF and WTO, following an initiative by Brazil. The main purpose of this paper is to apply the methodology developed by the authors to examine the impacts of exchange rate misalignments on tariffs, in order to analyse their effects on the trade relations between two customs unions – the EU and Mercosur – and to explain how tariff barriers are affected. It is divided into several sections: the first summarises the debate on exchange rates at the WTO; the second explains the methodology used to determine exchange rate misalignments; the third and fourth summarise the methodology applied to calculate the impacts of exchange rate misalignments on the level of tariff protection through an exercise of ‘misalignment tariffication’; the fifth reviews the effects of exchange rate misalignments on tariffs and their consequences for the trade negotiations between the two areas; and the last concludes and suggests a way to move the debate forward in the context of regional arrangements.
Abstract:
Purpose: To evaluate the effects of instrument realignment and angular misalignment during the clinical determination of wavefront aberrations by simulation in model eyes. Setting: Aston Academy of Life Sciences, Aston University, Birmingham, United Kingdom. Methods: Six model eyes were examined with wavefront-aberration-supported cornea ablation (WASCA) (Carl Zeiss Meditec) in 4 sessions of 10 measurements each: sessions 1 and 2, consecutive repeated measures without realignment; session 3, realignment of the instrument between readings; session 4, measurements without realignment but with the model eye shifted 6 degrees angularly. Intersession repeatability and the effects of realignment and misalignment were obtained by comparing the measurements in the various sessions for coma, spherical aberration, and higher-order aberrations (HOAs). Results: The mean differences between the 2 sessions without realignment of the instrument were 0.020 ± 0.076 (SD) μm for Z3^-1 (P = .551), 0.009 ± 0.139 μm for Z3^1 (P = .877), 0.004 ± 0.037 μm for Z4^0 (P = .820), and 0.005 ± 0.01 μm for higher-order (HO) root mean square (RMS) (P = .301). Differences between the nonrealigned and realigned instrument were -0.017 ± 0.026 μm for Z3^-1 (P = .159), 0.009 ± 0.028 μm for Z3^1 (P = .475), 0.007 ± 0.014 μm for Z4^0 (P = .296), and 0.002 ± 0.007 μm for HO RMS (P = .529). Differences between the centered and misaligned instrument were -0.355 ± 0.149 μm for Z3^-1 (P = .002), 0.007 ± 0.034 μm for Z3^1 (P = .620), -0.005 ± 0.081 μm for Z4^0 (P = .885), and 0.012 ± 0.020 μm for HO RMS (P = .195). Realignment increased the standard deviation by a factor of 3 compared with the first session without realignment. Conclusions: Repeatability of the WASCA was excellent in all situations tested. Realignment substantially increased the variance of the measurements. Angular misalignment can result in significant errors, particularly in the determination of coma.
These findings are important when assessing highly aberrated eyes during follow-up or before surgery. © 2007 ASCRS and ESCRS.
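The HO RMS values quoted above combine individual Zernike terms in quadrature. As a sketch (orthonormal Zernike convention assumed; the three coefficients echo the coma-dominated misalignment case, purely for illustration):

```python
import math

def higher_order_rms(coeffs_um):
    """Root-mean-square wavefront error from orthonormal Zernike
    coefficients (in micrometres): RMS = sqrt(sum of c_i squared)."""
    return math.sqrt(sum(c * c for c in coeffs_um))

# Illustrative values: vertical coma, horizontal coma, spherical aberration.
print(round(higher_order_rms([-0.355, 0.007, -0.005]), 3))  # 0.355
```

Because the terms add in quadrature, a single large misalignment-induced coma term of −0.355 μm dominates the higher-order RMS almost entirely, which is why angular misalignment shows up chiefly as a coma error.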
Abstract:
Measurements (autokeratometry, A-scan ultrasonography and video ophthalmophakometry) of ocular surface radii, axial separations and alignment were made in the horizontal meridian of nine emmetropes (aged 20-38 years) with relaxed (cycloplegia) and active accommodation (mean ± 95% confidence interval: 3.7 ± 1.1 D). The anterior chamber depth (-1.5 ± 0.3 D) and both crystalline lens surfaces (front 3.1 ± 0.8 D; rear 2.1 ± 0.6 D) contributed to the dioptric vergence changes that accompany accommodation. Accommodation did not alter ocular surface alignment. Ocular misalignment in relaxed eyes is mainly due to eye rotation (5.7 ± 1.6° temporally) with small amounts of lens tilt (0.2 ± 0.8° temporally) and decentration (0.1 ± 0.1 mm nasally), but these results must be viewed with caution as we did not account for corneal asymmetry. Comparison of calculated and empirically derived coefficients (upon which ocular surface alignment calculations depend) revealed that negligible inherent errors arose from neglect of ocular surface asphericity, lens gradient refractive index properties, surface astigmatism, effects of pupil size and centration, assumed eye rotation axis position and use of linear equations for analysing Purkinje image shifts. © 2004 The College of Optometrists.
Abstract:
It is well established that hydrodynamic journal bearings are responsible for self-excited vibrations and have the effect of lowering the critical speeds of rotor systems. The forces within the oil-film wedge, generated by the vibrating journal, may be represented by displacement and velocity coefficients, thus allowing the dynamical behaviour of the rotor to be analysed both for stability purposes and for anticipating the response to unbalance. However, information describing these coefficients is sparse, misleading, and very often not applicable to industrial-type bearings. Results of a combined analytical and experimental investigation into the hydrodynamic oil-film coefficients operating in the laminar region are therefore presented, the analysis being applied to a 120-degree partial journal bearing having a 5.0 in diameter journal and an L/D ratio of 1.0. The theoretical analysis shows that for this type of popular bearing, the eight linearized coefficients do not accurately describe the behaviour of the vibrating journal based on the theory of small perturbations, owing to their being masked by the presence of nonlinearity. A method is developed using the second-order terms of the Taylor expansion, whereby design charts are provided which predict the twenty-eight force coefficients both for the aligned case and for varying amounts of journal misalignment. The resulting non-linear equations of motion are solved using a modified Newton-Raphson method whereby the whirl trajectories are obtained, thus providing a physical appreciation of the bearing characteristics under dynamically loaded conditions.
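The thesis solves its non-linear equations of motion with a modified Newton-Raphson method; since the details are not given in the abstract, the sketch below shows only the classic scalar iteration on a toy residual (both the cubic and its reading as a force-balance in the eccentricity ratio are invented for illustration).

```python
def newton_raphson(f, dfdx, x0, tol=1e-12, max_iter=50):
    """Classic Newton-Raphson iteration: x <- x - f(x)/f'(x),
    stopping when the step size falls below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / dfdx(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# Toy stand-in for a nonlinear force-balance residual in the eccentricity
# ratio e (illustrative only): f(e) = e^3 + 2e - 1.
root = newton_raphson(lambda e: e**3 + 2*e - 1,
                      lambda e: 3*e**2 + 2,
                      x0=0.5)
print(round(root, 6))
```

In a bearing analysis the scalar residual would be replaced by the vector of force-balance equations and the derivative by a Jacobian, with the "modified" variants typically damping or reusing that Jacobian between steps.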
Abstract:
Visual perception is dependent on both light transmission through the eye and neuronal conduction through the visual pathway. Advances in clinical diagnostics and treatment modalities over recent years have increased the opportunities to improve the optical path and retinal image quality. Higher order aberrations and retinal straylight are two major factors that influence light transmission through the eye and, ultimately, visual outcome. Recent technological advancements have brought these important factors into the clinical domain; however, the potential applications of these tools and considerations regarding interpretation of data are much underestimated. The purpose of this thesis was to validate and optimise wavefront analysers and a new clinical tool for the objective evaluation of intraocular scatter. The application of these methods in a clinical setting involving a range of conditions was also explored. The work was divided into two principal sections:
1. Wavefront Aberrometry: optimisation, validation and clinical application. The main findings of this work were:
• Observer manipulation of the aberrometer increases variability by a factor of 3.
• Ocular misalignment can profoundly affect reliability, notably for off-axis aberrations.
• Aberrations measured with wavefront analysers using different principles are not interchangeable, with poor relationships and significant differences between values.
• Instrument myopia of around 0.30 D is induced when performing wavefront analysis in non-cyclopleged eyes; values can be as high as 3 D, being higher as the baseline level of myopia decreases. Associated accommodation changes may result in relevant changes to the aberration profile, particularly with respect to spherical aberration.
• Young adult healthy Caucasian eyes have significantly more spherical aberration than Asian eyes when matched for age, gender, axial length and refractive error.
• Axial length is significantly correlated with most components of the aberration profile.
2. Intraocular light scatter: evaluation of subjective measures, and validation and application of a new objective method utilising clinically derived wavefront patterns. The main findings of this work were:
• Subjective measures of clinical straylight are highly repeatable; three measurements are suggested as the optimum number for increased reliability.
• Significant differences in straylight values were found for contact lenses designed for contrast enhancement compared with clear lenses of the same design and material specifications. Specifically, grey/green tints induced significantly higher values of retinal straylight.
• Wavefront patterns from a commercial Hartmann-Shack device can be used to obtain objective measures of scatter and are well correlated with subjective straylight values.
• Perceived retinal straylight was similar in groups of patients implanted with monofocal and multifocal intraocular lenses. Correlation between objective and subjective measurements of scatter is poor, possibly due to different illumination conditions between the testing procedures, or to a neural component which may alter with age.
Careful acquisition results in highly reproducible in vivo measures of higher order aberrations; however, data from different devices are not interchangeable, which brings the accuracy of measurement into question. Objective measures of intraocular straylight can be derived from clinical aberrometry and may be of great diagnostic and management importance in the future.
Abstract:
This research develops a low-cost remote sensing system for use in agricultural applications. The important features of the system are that it monitors the near infrared and that it incorporates position- and attitude-measuring equipment, allowing geo-rectified images to be produced without the use of ground control points. The equipment is designed to be hand held and hence requires no structural modification to the aircraft. The portable remote sensing system consists of an accelerometer-based inertial measurement unit (IMU), a low-cost GPS device and a small-format false colour composite digital camera. The total cost of producing such a system is below GBP 3000, far cheaper than equivalent existing systems. The design of the portable remote sensing device has eliminated boresight misalignment errors from the direct geo-referencing process. A new processing technique has been introduced for the data obtained from these low-cost devices, and it is found that using this technique the image can be matched (overlaid) onto Ordnance Survey MasterMap at an accuracy compatible with precision agriculture requirements. The direct geo-referencing has also been improved by introducing an algorithm capable of correcting oblique images directly. This algorithm alters the pixel values, hence it is advised that image analysis is performed before image geo-rectification. A drawback of this research is that the low-cost GPS device experienced bad checksum errors, which resulted in missing data. The Wide Area Augmentation System (WAAS) correction could not be employed because the satellites could not be locked onto whilst flying. The best GPS data were obtained from the Garmin eTrex instruments (15 m kinematic and 2 m static), which have a high-sensitivity receiver with good lock-on capability.
The limitation of this GPS device is its inability to receive the P-code signal, which is needed to obtain the best accuracy when undertaking differential GPS processing. Pairing the received L1 carrier phase with the C/A-code pseudorange, in order to determine the image coordinates by the differential technique, is still under investigation. To improve the position accuracy, it is recommended that a GPS base station be established near the survey area, instead of using a permanent GPS base station established by the Ordnance Survey.
Abstract:
PURPOSE: To assess the clinical outcomes after implantation of a new hydrophobic acrylic toric intraocular lens (IOL) to correct preexisting corneal astigmatism in patients having routine cataract surgery. SETTING: Four hospital eye clinics throughout Europe. DESIGN: Cohort study. METHODS: This study included eyes with at least 0.75 diopter (D) of preexisting corneal astigmatism having routine cataract surgery. Phacoemulsification was performed, followed by insertion and alignment of a Tecnis toric IOL. Patients were examined 4 to 8 weeks postoperatively; uncorrected distance visual acuity (UDVA), corrected distance visual acuity, manifest refraction, and keratometry were measured. Individual patient satisfaction with uncorrected vision and the surgeon's assessment of ease of handling and performance of the IOL were also documented. The cylinder axis of the toric IOL was determined by dilated slitlamp examination. RESULTS: The study enrolled 67 eyes of 60 patients. Four to 8 weeks postoperatively, the mean UDVA was 0.15 ± 0.17 (SD) logMAR and the UDVA was 20/40 or better in 88% of eyes. The mean refractive cylinder decreased significantly postoperatively, from -1.91 ± 1.07 D to -0.67 ± 0.54 D. No significant change in keratometric cylinder was observed. The mean absolute IOL misalignment from the intended axis was 3.4 degrees (range 0 to 12 degrees). The good UDVA resulted in high levels of patient satisfaction. CONCLUSION: Implantation of the new toric IOL was an effective, safe, and predictable method to manage corneal astigmatism in patients having routine cataract surgery.