941 results for Roundoff errors.


Abstract:

Purpose The aim of this study is to assess the refractive and visual outcomes following cataract surgery and implantation of the AcrySof IQ Toric SN6AT2 intraocular lens (IOL) (Alcon Laboratories, Inc.) in patients with low corneal astigmatism. Materials and Methods A retrospective, consecutive, single-surgeon series of 98 eyes of 88 patients following cataract surgery and implantation of the AcrySof IQ Toric SN6AT2 IOL in eyes with low preoperative corneal astigmatism. Postoperative measurements were obtained one month after surgery. Main outcome measures were monocular distance visual acuity and residual refractive astigmatism. Results The mean preoperative corneal astigmatic power vector (APV) was 0.38 ± 0.09 D. Following surgery and implantation of the toric IOL, the mean postoperative refractive APV was 0.13 ± 0.10 D. Mean postoperative distance uncorrected visual acuity (UCVA) was 0.08 ± 0.09 logMAR. The mean postoperative spherical equivalent refraction (SER) was −0.23 ± 0.22 D, with 96% of eyes falling within 0.50 D of the target SER. Conclusions The AcrySof IQ Toric SN6AT2 IOL is a safe and effective option for eyes undergoing cataract surgery with low amounts of preoperative corneal astigmatism.
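The astigmatic power vector reported above follows the power-vector notation of Thibos et al., in which a sphero-cylindrical refraction is decomposed into a spherical equivalent M and two astigmatic components J0 and J45. A minimal sketch of that conversion (the numbers are illustrative, not data from the study):

```python
import math

def power_vector(sphere, cyl, axis_deg):
    """Convert a sphero-cylindrical refraction to Thibos power-vector
    components (M, J0, J45)."""
    a = math.radians(axis_deg)
    M = sphere + cyl / 2.0                 # spherical equivalent (SER)
    J0 = -(cyl / 2.0) * math.cos(2 * a)    # with/against-the-rule astigmatism
    J45 = -(cyl / 2.0) * math.sin(2 * a)   # oblique astigmatism
    return M, J0, J45

# Illustrative refraction only (not patient data from the study):
M, J0, J45 = power_vector(sphere=-0.50, cyl=-0.75, axis_deg=90)
apv = math.hypot(J0, J45)  # magnitude of the astigmatic power vector
print(f"SER = {M:+.2f} D, APV = {apv:.2f} D")
```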

Abstract:

Personal ultraviolet dosimeters have been used in epidemiological studies to understand the risks and benefits of individuals' exposure to solar ultraviolet radiation (UVR). We investigated the types and determinants of non-compliance associated with a protocol for use of polysulphone UVR dosimeters. In the AusD Study, 1,002 Australian adults (aged 18-75 years) were asked to wear a new dosimeter on their wrist each day for 10 consecutive days to quantify their daily exposure to solar UVR. Of the 10,020 dosimeters distributed, 296 (3%) were not returned or used (Type I non-compliance), and other usage errors were reported for 763 (8%) returned dosimeters (Type II non-compliance). Type I errors were more common in participants with predominantly outdoor occupations. Type II errors were reported more frequently on the first day of measurement, on weekend days or rainy days, and among females, younger people, more educated participants, and those with outdoor occupations. Half (50%) of the participants reported a non-compliance error on at least one day during the 10-day period. However, 92% of participants had at least 7 days of usable data without any apparent non-compliance issues. The factors identified should be considered when designing future UVR dosimetry studies.
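The reported non-compliance rates follow directly from the counts above; a quick arithmetic check (assuming the 8% Type II figure is taken relative to the returned dosimeters):

```python
distributed = 10_020               # 1,002 participants x 10 days
type1 = 296                        # dosimeters not returned or not used
returned = distributed - type1
type2 = 763                        # returned dosimeters with usage errors

print(f"Type I rate:  {type1 / distributed:.1%}")  # ~3% of all dosimeters
print(f"Type II rate: {type2 / returned:.1%}")     # ~8% of returned dosimeters
```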

Abstract:

Due to the health impacts of exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research. Knowledge of the dynamics and complexity of air pollutant behaviour has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method for daily air pollution prediction that combines a Support Vector Machine (SVM) as the predictor with Partial Least Squares (PLS) as a data selection tool, based on measured CO concentrations. CO concentrations from the Rey monitoring station in the south of Tehran, from January 2007 to February 2011, were used to test the effectiveness of this method. Hourly CO concentrations were predicted using the SVM and the hybrid PLS–SVM models, and daily CO concentrations were likewise predicted from the same four years of measured data. Results demonstrate that both models have good prediction ability; however, the hybrid PLS–SVM model is more accurate. Statistical estimators, including the relative mean error, root mean squared error, and mean absolute relative error, were employed to compare the performance of the models. The errors decrease after size reduction, and the coefficients of determination increase from 56–81% for the SVM model to 65–85% for the hybrid PLS–SVM model. The hybrid PLS–SVM model also required less computation time than the SVM model, as expected, supporting its more accurate and faster prediction ability.
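A minimal sketch of such a hybrid pipeline, using scikit-learn as a stand-in for whatever toolchain the authors used (the data, component count, and kernel below are placeholders, not the Rey station setup):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR

# Synthetic placeholder data standing in for measured predictors and
# CO concentrations (not the Rey station measurements).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                          # candidate predictors
y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=500)   # CO target

# Plain SVM predictor on the full feature set.
svm = SVR(kernel="rbf").fit(X, y)

# Hybrid: PLS compresses the predictors into a few latent components,
# and the SVM regressor is then trained on those scores.
pls = PLSRegression(n_components=3).fit(X, y)
svm_pls = SVR(kernel="rbf").fit(pls.transform(X), y)

print("SVM R^2:    ", round(svm.score(X, y), 3))
print("PLS-SVM R^2:", round(svm_pls.score(pls.transform(X), y), 3))
```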

Abstract:

Proxy re-encryption (PRE) is a highly useful cryptographic primitive whereby Alice and Bob can endow a proxy with the capacity to change ciphertext recipients from Alice to Bob, without the proxy itself being able to decrypt, thereby providing delegation of decryption authority. Key-private PRE (KP-PRE) specifies an additional level of confidentiality, requiring pseudo-random proxy keys that leak no information on the identity of the delegators and delegatees. In this paper, we propose a CPA-secure KP-PRE scheme in the standard model (which we then transform into a CCA-secure scheme in the random oracle model). Both schemes enjoy highly desirable properties such as uni-directionality and multi-hop delegation. Unlike the few prior constructions of PRE and KP-PRE, which typically rely on bilinear maps under ad hoc assumptions, the security of our construction is based on the hardness of the standard Learning-With-Errors (LWE) problem, itself reducible from worst-case lattice problems that are conjectured immune to quantum cryptanalysis, or "post-quantum". Of independent interest, we further examine the practical hardness of the LWE assumption using Kannan's exhaustive-search algorithm coupled with pruning techniques. This leads to state-of-the-art parameters not only for our scheme, but also for a number of other LWE-based primitives published in the literature.
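The LWE problem at the heart of the construction is simple to state: invert (or distinguish) noisy random linear equations modulo q. A toy instance for intuition only, with parameters far too small to be secure and no relation to the paper's actual scheme:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, q = 8, 16, 97                       # toy dimensions/modulus (insecure!)

s = rng.integers(0, q, size=n)            # secret vector
A = rng.integers(0, q, size=(m, n))       # public uniformly random matrix
e = rng.integers(-2, 3, size=m)           # small error terms
b = (A @ s + e) % q                       # LWE samples

# Search-LWE asks: given (A, b), recover s. Without the errors e this is
# plain Gaussian elimination; with them it is conjectured hard (even for
# quantum algorithms) for suitable n, q and error distribution.
print("A:", A.shape, " b:", b.shape)
```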

Abstract:

This paper presents a novel algorithm based on particle swarm optimization (PSO) to estimate the states of electric distribution networks. To improve the performance, accuracy, and convergence speed of the original PSO and to eliminate its stagnation effect, a secondary PSO loop, a mutation algorithm, and a stretching function are proposed. To account for load uncertainties in distribution networks, pseudo-measurements are modeled as loads with realistic errors. Simulation results on 6-bus radial and 34-bus IEEE test distribution networks show that distribution state estimation based on the proposed DLM-PSO yields lower estimation errors and standard deviations than algorithms such as WLS, GA, HBMO, and the original PSO.
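For reference, the core particle swarm update that the proposed DLM-PSO builds on looks like the sketch below; this is plain PSO on a toy objective, not the paper's secondary loop, mutation, or stretching-function machinery:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain PSO minimising f over R^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()                                 # personal bests
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()                  # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Toy quadratic standing in for a state-estimation residual (not the
# paper's network model):
best, err = pso(lambda z: float(np.sum((z - 1.0) ** 2)), dim=4)
print(best.round(3), err)
```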

Abstract:

Cryptosystems based on the hardness of lattice problems have recently acquired much importance due to their average-case to worst-case equivalence, their conjectured resistance to quantum cryptanalysis, their ease of implementation and increasing practicality, and, lately, their promising potential as a platform for constructing advanced functionalities. In this work, we construct "Fuzzy" Identity-Based Encryption from the hardness of the Learning With Errors (LWE) problem. We note that for our parameters, the underlying lattice problems (such as gapSVP or SIVP) are assumed to be hard to approximate within subexponential factors for adversaries running in subexponential time. We give CPA- and CCA-secure variants of our construction, for small and large universes of attributes. All our constructions are secure against selective-identity attacks in the standard model. Our construction is made possible by observing certain special properties that secret sharing schemes need to satisfy in order to be useful for Fuzzy IBE. We also discuss some obstacles to realizing lattice-based attribute-based encryption (ABE).
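Fuzzy IBE decrypts when a ciphertext's attribute set overlaps the key's attributes in at least a threshold number of positions, which is where threshold secret sharing enters. A toy Shamir sharing over a prime field, for intuition only (this is not the paper's lattice-compatible sharing):

```python
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus (illustrative)

def share(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=42, k=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover 42
```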

Abstract:

We construct an efficient identity-based encryption (IBE) system based on the standard learning with errors (LWE) problem. Our security proof holds in the standard model. The key step in the construction is a family of lattices for which there are two distinct trapdoors for finding short vectors: one trapdoor enables the real system to generate short vectors in all lattices in the family, while the other enables the simulator to generate short vectors for all lattices in the family except one. We extend this basic technique to an adaptively secure IBE and a hierarchical IBE.

Abstract:

We present a pole inspection system for outdoor environments comprising a high-speed camera on a vertical take-off and landing (VTOL) aerial platform. The pole inspection task requires a vehicle to fly close to a structure while maintaining a fixed stand-off distance from it. However, typical GPS errors make GPS-based navigation unsuitable for this task. When flying outdoors the vehicle is also affected by aerodynamic disturbances such as wind gusts, so the onboard controller must be robust to these disturbances in order to maintain the stand-off distance. Two problems must therefore be addressed: fast and accurate state estimation without GPS, and the design of a robust controller. We resolve these problems by (a) performing visual-inertial relative state estimation and (b) using a robust line tracker and a nested controller design. Our state estimation fuses high-speed camera images (100 Hz) and 70 Hz IMU data in an Extended Kalman Filter (EKF). We demonstrate results from outdoor experiments for pole-relative hovering and pole circumnavigation, where the operator provides only yaw commands. Lastly, we show results for image-based 3D reconstruction and texture mapping of a pole to demonstrate the system's usefulness for inspection tasks.
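The EKF fusion referred to above follows the standard predict/update recursion; a generic sketch (this is the textbook filter, not the authors' flight code, and the toy model at the end is made up):

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One EKF iteration: f/h are the process and measurement models,
    F/H their Jacobians evaluated at the current estimate."""
    x_pred = f(x)                          # predict (e.g. IMU propagation)
    P_pred = F @ P @ F.T + Q
    y = z - h(x_pred)                      # innovation from camera observation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Made-up 1D constant-velocity model with a position measurement:
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
x, P = ekf_step(x, P, np.array([0.5]),
                f=lambda s: F @ s, F=F,
                h=lambda s: H @ s, H=H,
                Q=1e-4 * np.eye(2), R=1e-2 * np.eye(1))
print(x)
```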

Abstract:

Social tagging systems are shown to evidence a well-known cognitive heuristic, the guppy effect, which arises from the combination of different concepts. We present empirical evidence of this effect drawn from a popular social tagging Web service. The guppy effect is then described using a quantum-inspired formalism that has already been successfully applied to model the conjunction fallacy and probability judgement errors. Key to the formalism is the concept of interference, which is able to capture and quantify the strength of the guppy effect.
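In quantum-inspired models of concept combination of this kind, the membership weight of a combined concept is commonly written as the classical average of the component weights plus an interference term; a sketch of that parametrization with made-up numbers (the paper's exact model may differ):

```python
import math

def combined_weight(mu_a, mu_b, phi):
    """Classical average plus a quantum-style interference term whose
    phase angle phi controls its strength and sign."""
    return 0.5 * (mu_a + mu_b) + math.sqrt(mu_a * mu_b) * math.cos(phi)

# Made-up weights: an item can be a weak member of 'pet' and of 'fish'
# separately, yet a strong member of 'pet fish' when the interference
# is constructive -- the guppy effect.
print(combined_weight(0.25, 0.30, phi=math.radians(40)))  # > both inputs
```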

Abstract:

Objective To evaluate the effects of Optical Character Recognition (OCR) on the automatic cancer classification of pathology reports. Method Scanned images of pathology reports were converted to electronic free text using a commercial OCR system. A state-of-the-art cancer classification system, the Medical Text Extraction (MEDTEX) system, was used to automatically classify the OCR reports. Classifications produced by MEDTEX on the OCR versions of the reports were compared with those from human-amended versions of the OCR reports. Results The OCR system was found to recognise scanned pathology reports with up to 99.12% character accuracy and up to 98.95% word accuracy. Errors in the OCR processing were found to have minimal impact on the automatic classification of scanned pathology reports into notifiable groups. However, the impact of OCR errors is not negligible when considering the extraction of cancer notification items, such as primary site, histological type, etc. Conclusions The automatic cancer classification system used in this work, MEDTEX, has proven robust to errors introduced by the acquisition of free-text pathology reports from scanned images through OCR software. However, issues emerge when considering the extraction of cancer notification items.
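Character accuracy of the kind quoted above is typically computed from the edit distance between the OCR output and a ground-truth transcript; a minimal sketch assuming that definition (the paper's exact metric is not spelled out in the abstract):

```python
def levenshtein(a, b):
    """Edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def char_accuracy(reference, ocr_output):
    return 1 - levenshtein(reference, ocr_output) / len(reference)

# Made-up example of a typical OCR confusion ('l' read as '1'):
print(f"{char_accuracy('carcinoma, left breast', 'carc1noma, 1eft breast'):.2%}")
```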

Abstract:

We present an approach to automatically de-identify health records. In our approach, personal health information is identified using a Conditional Random Fields machine learning classifier, a large set of linguistic and lexical features, and pattern matching techniques. Identified personal information is then removed from the reports. De-identification of personal health information is fundamental for the sharing and secondary use of electronic health records, for example for data mining and disease monitoring. The effectiveness of our approach is first evaluated on the 2007 i2b2 Shared Task dataset, a widely adopted dataset for evaluating de-identification techniques. Subsequently, we investigate the robustness of the approach to limited training data, and we study its effectiveness on data of different type and quality by evaluating the approach on scanned pathology reports from an Australian institution. These data contain optical character recognition errors, as well as linguistic conventions that differ from those in the i2b2 dataset, for example different date formats. The findings suggest that our approach is comparable to the best approach from the 2007 i2b2 Shared Task; in addition, the approach is robust to variations in training size, data type, and quality in the presence of sufficient training data.
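A minimal sketch of a CRF-based token tagger in the spirit of this approach, assuming the third-party sklearn-crfsuite package (the features, labels, and single training sentence below are made up; real systems train on annotated corpora such as i2b2):

```python
import sklearn_crfsuite  # assumed available: pip install sklearn-crfsuite

def token_features(tokens, i):
    """Lexical/linguistic features of the general kind described above."""
    t = tokens[i]
    return {
        "lower": t.lower(),
        "is_title": t.istitle(),
        "is_digit": t.isdigit(),
        "shape": "".join("X" if c.isupper() else "x" if c.islower()
                         else "d" if c.isdigit() else c for c in t),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

# A single made-up training sentence with PHI labels:
sent = ["Seen", "by", "Dr", "Smith", "on", "12/03/2007", "."]
tags = ["O", "O", "O", "NAME", "O", "DATE", "O"]
X = [[token_features(sent, i) for i in range(len(sent))]]
y = [tags]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X)[0])  # tokens tagged NAME/DATE would then be removed
```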

Abstract:

Disjoint top-view networked cameras are among the most commonly deployed camera networks in many applications. One of the open questions in the study of these cameras is the computation of their extrinsic parameters (positions and orientations), known as extrinsic calibration or camera localization. Current approaches either rely on strict assumptions about the object motion to obtain accurate results, or fail to provide highly accurate results without such motion requirements. To address these shortcomings, we present a location-constrained maximum a posteriori (LMAP) approach that exploits known locations in the surveillance area, some of which the object passes opportunistically. The LMAP approach formulates the problem as a joint inference of the extrinsic parameters and the object trajectory based on the cameras' observations and the known locations. In addition, a new task-oriented evaluation metric, named MABR (the Maximum value of All image points' Back-projected localization errors' L2 norms Relative to the area of the field of view), is presented to assess the quality of the calibration results in an indoor object tracking context. Finally, results demonstrate the superior performance of the proposed method over a state-of-the-art algorithm, in terms of both the presented MABR and a classical evaluation metric, in simulations and real experiments.
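Reading the MABR acronym literally gives a metric along the following lines; the paper defines the exact normalization, so this is only an illustrative guess with made-up numbers:

```python
import numpy as np

def mabr(backproj_errors, fov_area):
    """Maximum L2 norm of the back-projected localization errors over
    all image points, relative to the area of the field of view."""
    return np.linalg.norm(backproj_errors, axis=1).max() / fov_area

# Made-up 2D back-projection errors (metres) for a 5 m x 4 m field of view:
errs = np.array([[0.10, 0.02], [0.05, 0.12], [0.20, 0.08]])
print(mabr(errs, fov_area=5 * 4))
```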

Abstract:

Objective To evaluate the effectiveness and robustness of Anonym, a tool for de-identifying free-text health records based on conditional random fields classifiers informed by linguistic and lexical features, as well as features extracted by pattern matching techniques. De-identification of personal health information in electronic health records is essential for the sharing and secondary usage of clinical data. De-identification tools that adapt to different sources of clinical data are attractive, as they require minimal intervention to guarantee high effectiveness. Methods and Materials The effectiveness and robustness of Anonym are evaluated across multiple datasets, including the widely adopted Integrating Biology and the Bedside (i2b2) dataset, used for evaluation in a de-identification challenge. The datasets used here vary in the type of health records, source of data, and quality, with one of the datasets containing optical character recognition errors. Results Anonym identifies and removes up to 96.6% of personal health identifiers (recall) with a precision of up to 98.2% on the i2b2 dataset, outperforming the best system proposed in the i2b2 challenge. The effectiveness of Anonym across datasets is found to depend on the amount of information available for training. Conclusion The findings show that Anonym is comparable to the best approach from the 2006 i2b2 shared task. Anonym is easy to retrain with new datasets; if retrained, the system is robust to variations in training size, data type, and quality, in the presence of sufficient training data.
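The recall and precision figures quoted for Anonym are the standard entity-level definitions; a small sketch with made-up PHI spans of the form (doc_id, start, end), not i2b2 data:

```python
def precision_recall(gold, predicted):
    """Standard precision/recall over sets of identified PHI spans."""
    tp = len(gold & predicted)
    precision = tp / len(predicted) if predicted else 1.0
    recall = tp / len(gold) if gold else 1.0
    return precision, recall

gold = {(1, 5, 10), (1, 42, 49), (2, 0, 7)}   # annotated PHI spans
pred = {(1, 5, 10), (2, 0, 7), (2, 30, 35)}   # system output
print(precision_recall(gold, pred))           # -> approx (0.667, 0.667)
```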

Abstract:

Background. Volitional risky driving behaviours such as drink- and drug-driving (i.e., substance-impaired driving) and speeding contribute to the overrepresentation of young novice drivers in road crash fatalities, and crash risk is greatest during the first year of independent driving in particular. Aims. To explore: 1) the self-reported compliance of drivers with road rules regarding substance-impaired driving and other risky driving behaviours (e.g., speeding, driving while tired), one year after progression from a Learner to a Provisional (intermediate) licence; and 2) the interrelationships between substance-impaired driving and other risky driving behaviours (e.g., crashes, offences, and Police avoidance). Methods. Drivers (n = 1,076; 319 males) aged 18-20 years were surveyed regarding their sociodemographics (age, gender) and self-reported driving behaviours, including crashes, offences, Police avoidance, and driving intentions. Results. A relatively small proportion of participants reported driving after taking drugs (6.3% of males, 1.3% of females) or after drinking alcohol (18.5% of males, 11.8% of females). In comparison, a considerable proportion reported at least occasionally exceeding speed limits (86.7% of novices) and risky behaviours such as driving when tired (83.6% of novices). Substance-impaired driving was associated with avoiding Police, speeding, risky driving intentions, and self-reported crashes and offences. Forty-three percent of respondents who drove after taking drugs also reported alcohol-impaired driving. Discussion and Conclusions. Behaviours of concern include drink-driving, speeding, novice driving errors such as misjudging the speed of oncoming vehicles, violations of graduated driver licensing passenger restrictions, driving tired, driving faster when in a bad mood, and active punishment avoidance. Given the interrelationships between the risky driving behaviours, a deeper understanding of influential factors is required to inform targeted and general countermeasure implementation and evaluation during this critical driving period. Notwithstanding this, a combination of enforcement, education, and engineering efforts appears necessary to improve the road safety of young novice drivers, and of drink-driving young novices in particular.

Abstract:

Software to create individualised finite element (FE) models of the osseoligamentous spine from pre-operative computed tomography (CT) datasets of spinal surgery patients has recently been developed. This study presents a geometric sensitivity analysis of this software to assess the effect of intra-observer variability in user-selected anatomical landmarks. User-selected landmarks on the osseous anatomy were defined from CT datasets for three scoliosis patients, and these landmarks were used to reconstruct patient-specific anatomy of the spine and ribcage using parametric descriptions. The intra-observer errors in the coordinates of these anatomical landmarks were calculated. FE models of the spine and ribcage were created from the reconstructed anatomy for each patient, and these models were analysed for a load case simulating a clinical flexibility assessment. The intra-observer error in the anatomical measurements was low in comparison to the initial dimensions, with the exception of the angular measurements for disc wedge and zygapophyseal joint (z-joint) orientation, and disc height. This variability suggests that CT resolution may influence such angular measurements, particularly for small anatomical features such as the z-joints, and may also affect disc height. The FE analysis showed low variation in the model predictions for spinal curvature, with the mean intra-observer variability substantially less than the accepted error in clinical measurement. These findings demonstrate that intra-observer variability in landmark point selection has minimal effect on the subsequent FE predictions for a clinical load case.
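Intra-observer error for a landmark is typically summarized as the spread of repeated selections by the same observer; a minimal sketch with made-up coordinates (the study's exact error metric is not given in the abstract):

```python
import numpy as np

# Made-up repeated picks (mm) of one landmark by one observer, standing
# in for the study's CT-based landmark selections:
picks = np.array([[12.1, 40.3, 102.5],
                  [12.4, 40.1, 102.9],
                  [11.9, 40.5, 102.2]])

print("mean (x, y, z):", picks.mean(axis=0))
print("intra-observer SD:", picks.std(axis=0, ddof=1).round(3))
```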