418 results for Roundness errors


Relevance:

10.00%

Publisher:

Abstract:

Proxy re-encryption (PRE) is a highly useful cryptographic primitive whereby Alice and Bob can endow a proxy with the capacity to change ciphertext recipients from Alice to Bob, without the proxy itself being able to decrypt, thereby providing delegation of decryption authority. Key-private PRE (KP-PRE) specifies an additional level of confidentiality, requiring pseudo-random proxy keys that leak no information on the identity of the delegators and delegatees. In this paper, we propose a CPA-secure KP-PRE scheme in the standard model (which we then transform into a CCA-secure scheme in the random oracle model). Both schemes enjoy highly desirable properties such as uni-directionality and multi-hop delegation. Unlike (the few) prior constructions of PRE and KP-PRE, which typically rely on bilinear maps under ad hoc assumptions, the security of our construction is based on the hardness of the standard Learning-With-Errors (LWE) problem, itself reducible from worst-case lattice problems that are conjectured immune to quantum cryptanalysis, or “post-quantum”. Of independent interest, we further examine the practical hardness of the LWE assumption, using Kannan’s exhaustive search algorithm coupled with pruning techniques. This leads to state-of-the-art parameters not only for our scheme, but also for a number of other LWE-based primitives published in the literature.
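
To make the underlying assumption concrete, here is a minimal, illustrative Regev-style sketch in Python of bit encryption from LWE samples (A, A·s + e). It is a toy, not the paper's KP-PRE scheme, and the parameters n, m, q and the noise width are arbitrary illustrative values.

import numpy as np

n, m, q = 64, 512, 3329          # toy dimension, sample count and modulus (assumed values)
sigma = 2.0                      # toy Gaussian noise width (assumed)
rng = np.random.default_rng(0)

s = rng.integers(0, q, size=n)                        # secret vector
A = rng.integers(0, q, size=(m, n))                   # public random matrix
e = np.rint(rng.normal(0.0, sigma, size=m)).astype(int)
b = (A @ s + e) % q                                   # LWE samples (A, b): pseudorandom under LWE

def encrypt(bit):
    r = rng.integers(0, 2, size=m)                    # random subset selector
    c1 = (r @ A) % q
    c2 = (r @ b + bit * (q // 2)) % q                 # encode the bit in the high half of Z_q
    return c1, c2

def decrypt(c1, c2):
    v = (c2 - c1 @ s) % q                             # equals bit*(q/2) plus small accumulated noise
    return int(min(v, q - v) > q // 4)

c1, c2 = encrypt(1)
assert decrypt(c1, c2) == 1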

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a novel algorithm based on particle swarm optimization (PSO) to estimate the states of electric distribution networks. To improve the performance, accuracy, and convergence speed of the original PSO and to eliminate its stagnation effect, a secondary PSO loop, a mutation algorithm, and a stretching function are proposed. To account for load uncertainties in distribution networks, pseudo-measurements are modeled as loads with realistic errors. Simulation results on 6-bus radial and 34-bus IEEE test distribution networks show that the distribution state estimation based on the proposed DLM-PSO yields lower estimation error and standard deviation than algorithms such as WLS, GA, HBMO, and the original PSO.
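
For reference, the following Python sketch shows only the plain PSO core; the paper's DLM-PSO additions (the secondary PSO loop, the mutation operator and the stretching function) would be layered on top of it, and the objective used below is a placeholder rather than the state-estimation residual.

import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))          # particle positions
    v = np.zeros_like(x)                                  # particle velocities
    pbest = x.copy()                                      # personal bests
    pbest_val = np.array([objective(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()                  # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# In state estimation, objective(p) would be the weighted residual between measurements
# (including pseudo-measurements) and the values computed from a candidate state vector p;
# a toy quadratic stands in for it here.
best, best_val = pso(lambda p: float(np.sum(p ** 2)), dim=4)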

Relevance:

10.00%

Publisher:

Abstract:

Cryptosystems based on the hardness of lattice problems have recently acquired much importance due to their average-case to worst-case equivalence, their conjectured resistance to quantum cryptanalysis, their ease of implementation and increasing practicality, and, lately, their promising potential as a platform for constructing advanced functionalities. In this work, we construct “Fuzzy” Identity-Based Encryption from the hardness of the Learning With Errors (LWE) problem. We note that for our parameters, the underlying lattice problems (such as gapSVP or SIVP) are assumed to be hard to approximate within subexponential factors for adversaries running in subexponential time. We give CPA- and CCA-secure variants of our construction, for small and large universes of attributes. All our constructions are secure against selective-identity attacks in the standard model. Our construction is made possible by observing certain special properties that secret sharing schemes need to satisfy in order to be useful for Fuzzy IBE. We also discuss some obstacles towards realizing lattice-based attribute-based encryption (ABE).
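
As an aside, the threshold secret sharing that Fuzzy IBE builds on can be illustrated with standard Shamir sharing over a prime field, sketched in Python below. The paper requires additional, lattice-compatible properties of the sharing scheme that this basic sketch does not capture; the prime and the parameters are illustrative only.

import random

P = 2**61 - 1  # Mersenne prime used as the field modulus (illustrative choice)

def share(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):          # Horner evaluation of the random polynomial
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789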

Relevance:

10.00%

Publisher:

Abstract:

We construct an efficient identity-based encryption (IBE) system based on the standard learning with errors (LWE) problem. Our security proof holds in the standard model. The key step in the construction is a family of lattices for which there are two distinct trapdoors for finding short vectors. One trapdoor enables the real system to generate short vectors in all lattices in the family. The other trapdoor enables the simulator to generate short vectors for all lattices in the family except for one. We extend this basic technique to an adaptively secure IBE and a hierarchical IBE.

Relevance:

10.00%

Publisher:

Abstract:

We present a pole inspection system for outdoor environments comprising a high-speed camera on a vertical take-off and landing (VTOL) aerial platform. The pole inspection task requires a vehicle to fly close to a structure while maintaining a fixed stand-off distance from it. Typical GPS errors, however, make GPS-based navigation unsuitable for this task. When flying outdoors a vehicle is also affected by aerodynamic disturbances such as wind gusts, so the onboard controller must be robust to these disturbances in order to maintain the stand-off distance. Two problems must therefore be addressed: fast and accurate state estimation without GPS, and the design of a robust controller. We resolve these problems by a) performing visual + inertial relative state estimation and b) using a robust line tracker and a nested controller design. Our state estimation fuses high-speed camera images (100 Hz) and 70 Hz IMU data in an Extended Kalman Filter (EKF). We demonstrate results from outdoor experiments for pole-relative hovering, and for pole circumnavigation where the operator provides only yaw commands. Lastly, we show results for image-based 3D reconstruction and texture mapping of a pole to demonstrate the system's usefulness for inspection tasks.
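
To illustrate the kind of fusion described (IMU-driven prediction, camera-derived corrections), here is a generic EKF predict/update skeleton in Python. The state layout, the models f and h, and the noise matrices are assumptions for a toy 1-D example, not the authors' filter.

import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Propagate the state with (possibly nonlinear) model f and its Jacobian F."""
    x = f(x)
    P = F @ P @ F.T + Q
    return x, P

def ekf_update(x, P, z, h, H, R):
    """Correct with measurement z, measurement model h and its Jacobian H."""
    y = z - h(x)                                 # innovation
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy 1-D constant-velocity example: IMU-like prediction step, camera-like position fix.
dt = 1.0 / 70.0
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
x, P = ekf_predict(x, P, lambda s: F @ s, F, Q=1e-3 * np.eye(2))
x, P = ekf_update(x, P, z=np.array([0.1]), h=lambda s: H @ s, H=H, R=1e-2 * np.eye(1))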

Relevance:

10.00%

Publisher:

Abstract:

Social tagging systems are shown to evidence a well-known cognitive heuristic, the guppy effect, which arises from the combination of different concepts. We present some empirical evidence of this effect, drawn from a popular social tagging Web service. The guppy effect is then described using a quantum-inspired formalism that has already been successfully applied to model the conjunction fallacy and probability judgement errors. Key to the formalism is the concept of interference, which is able to capture and quantify the strength of the guppy effect.
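
For illustration, a generic two-state form that is common in quantum-inspired models of concept combination (and not necessarily the exact formalism used in the paper) writes the membership weight of a combined concept as

\mu(A \text{ and } B) \;=\; \tfrac{1}{2}\mu(A) + \tfrac{1}{2}\mu(B) + \sqrt{\mu(A)\,\mu(B)}\,\cos\theta ,

where \mu(\cdot) is the membership (or choice-probability) weight of a concept and \theta is the phase between the component states. At \theta = \pi/2 the interference term vanishes and the classical average is recovered; other phases over- or under-extend the combination, which is how the strength of an effect such as the guppy effect can be quantified.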

Relevance:

10.00%

Publisher:

Abstract:

Objective: To evaluate the effects of Optical Character Recognition (OCR) on the automatic cancer classification of pathology reports. Method: Scanned images of pathology reports were converted to electronic free-text using a commercial OCR system. A state-of-the-art cancer classification system, the Medical Text Extraction (MEDTEX) system, was used to automatically classify the OCR reports. Classifications produced by MEDTEX on the OCR versions of the reports were compared with the classifications obtained from a human-amended version of the OCR reports. Results: The employed OCR system was found to recognise scanned pathology reports with up to 99.12% character accuracy and up to 98.95% word accuracy. Errors in the OCR processing were found to have minimal impact on the automatic classification of scanned pathology reports into notifiable groups. However, the impact of OCR errors is not negligible when considering the extraction of cancer notification items, such as primary site, histological type, etc. Conclusions: The automatic cancer classification system used in this work, MEDTEX, has proven to be robust to errors produced by the acquisition of free-text pathology reports from scanned images through OCR software. However, issues emerge when considering the extraction of cancer notification items.

Relevance:

10.00%

Publisher:

Abstract:

We present an approach to automatically de-identify health records. In our approach, personal health information is identified using a Conditional Random Fields machine learning classifier, a large set of linguistic and lexical features, and pattern matching techniques. Identified personal information is then removed from the reports. The de-identification of personal health information is fundamental to the sharing and secondary use of electronic health records, for example for data mining and disease monitoring. The effectiveness of our approach is first evaluated on the 2007 i2b2 Shared Task dataset, a widely adopted dataset for evaluating de-identification techniques. Subsequently, we investigate the robustness of the approach to limited training data, and we study its effectiveness on data of different types and quality by evaluating the approach on scanned pathology reports from an Australian institution. This data contains optical character recognition errors, as well as linguistic conventions that differ from those in the i2b2 dataset, for example different date formats. The findings suggest that our approach performs comparably to the best approach from the 2007 i2b2 Shared Task; in addition, the approach is found to be robust to variations in training size, data type and quality in the presence of sufficient training data.
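
As an illustration of the kind of per-token lexical, orthographic and pattern-matching features such a classifier might use, the Python sketch below builds a feature dictionary for each token; the feature names and regular expressions are assumptions for illustration, not the paper's feature set.

import re

DATE_RE = re.compile(r"^\d{1,2}[/-]\d{1,2}[/-]\d{2,4}$")   # e.g. 03/07/2005 or 3-7-05
ID_RE = re.compile(r"^[A-Z]{0,3}\d{5,}$")                   # crude record/ID number pattern

def token_features(tokens, i):
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_title": tok.istitle(),
        "is_upper": tok.isupper(),
        "is_digit": tok.isdigit(),
        "looks_like_date": bool(DATE_RE.match(tok)),
        "looks_like_id": bool(ID_RE.match(tok)),
        "prefix3": tok[:3].lower(),
        "suffix3": tok[-3:].lower(),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

sentence = "Mr Smith was seen on 03/07/2005 at the clinic".split()
X = [token_features(sentence, i) for i in range(len(sentence))]
# A sequence labeller (e.g. a linear-chain CRF) would be trained on many such feature
# sequences with BIO-style PHI labels, and the identified spans then removed from the text.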

Relevance:

10.00%

Publisher:

Abstract:

Disjoint top-view camera networks are among the most commonly deployed camera networks in many applications. One of the open questions in the study of these networks is the computation of the cameras' extrinsic parameters (positions and orientations), known as extrinsic calibration or camera localization. Current approaches either rely on strict assumptions about the object motion to achieve accurate results, or do not require object motion but fail to provide highly accurate results. To address these shortcomings, we present a location-constrained maximum a posteriori (LMAP) approach that exploits known locations in the surveillance area, some of which the object passes opportunistically. The LMAP approach formulates the problem as a joint inference of the extrinsic parameters and the object trajectory, based on the cameras' observations and the known locations. In addition, a new task-oriented evaluation metric, named MABR (the Maximum value of All image points' Back-projected localization errors' L2 norms Relative to the area of the field of view), is presented to assess the quality of the calibration results in an indoor object tracking context. Finally, results demonstrate the superior performance of the proposed method over a state-of-the-art algorithm, in terms of both the presented MABR and a classical evaluation metric, in simulations and real experiments.

Relevance:

10.00%

Publisher:

Abstract:

Objective: To evaluate the effectiveness and robustness of Anonym, a tool for de-identifying free-text health records based on conditional random fields classifiers informed by linguistic and lexical features, as well as features extracted by pattern matching techniques. De-identification of personal health information in electronic health records is essential for the sharing and secondary usage of clinical data. De-identification tools that adapt to different sources of clinical data are attractive as they would require minimal intervention to guarantee high effectiveness. Methods and Materials: The effectiveness and robustness of Anonym are evaluated across multiple datasets, including the widely adopted Integrating Biology and the Bedside (i2b2) dataset, used for evaluation in a de-identification challenge. The datasets used here vary in the type of health records, the source of the data, and their quality, with one of the datasets containing optical character recognition errors. Results: Anonym identifies and removes up to 96.6% of personal health identifiers (recall) with a precision of up to 98.2% on the i2b2 dataset, outperforming the best system proposed in the i2b2 challenge. The effectiveness of Anonym across datasets is found to depend on the amount of information available for training. Conclusion: The findings show that Anonym performs comparably to the best approach from the 2006 i2b2 shared task. It is easy to retrain Anonym with new datasets; if retrained, the system is robust to variations in training size, data type and quality in the presence of sufficient training data.

Relevance:

10.00%

Publisher:

Abstract:

Background. Volitional risky driving behaviours such as drink- and drug-driving (i.e. substance-impaired driving) and speeding contribute to the overrepresentation of young novice drivers in road crash fatalities, and crash risk is greatest during the first year of independent driving in particular. Aims. To explore: 1) the self-reported compliance of drivers with road rules regarding substance-impaired driving and other risky driving behaviours (e.g., speeding, driving while tired), one year after progression from a Learner to a Provisional (intermediate) licence; and 2) the interrelationships between substance-impaired driving and other risky driving behaviours (e.g., crashes, offences, and Police avoidance). Methods. Drivers (n = 1,076; 319 males) aged 18-20 years were surveyed regarding their sociodemographics (age, gender) and self-reported driving behaviours, including crashes, offences, Police avoidance, and driving intentions. Results. A relatively small proportion of participants reported driving after taking drugs (6.3% of males, 1.3% of females) or after drinking alcohol (18.5% of males, 11.8% of females). In comparison, a considerable proportion of participants reported at least occasionally exceeding speed limits (86.7% of novices) and risky behaviours such as driving when tired (83.6% of novices). Substance-impaired driving was associated with avoiding Police, speeding, risky driving intentions, and self-reported crashes and offences. Forty-three percent of respondents who drove after taking drugs also reported alcohol-impaired driving. Discussion and Conclusions. Behaviours of concern include drink-driving, speeding, novice driving errors such as misjudging the speed of oncoming vehicles, violations of graduated driver licensing passenger restrictions, driving tired, driving faster when in a bad mood, and active punishment avoidance. Given the interrelationships between the risky driving behaviours, a deeper understanding of influential factors is required to inform targeted and general countermeasure implementation and evaluation during this critical driving period. Notwithstanding this, a combination of enforcement, education, and engineering efforts appears necessary to improve the road safety of the young novice driver, and of the drink-driving young novice driver in particular.

Relevance:

10.00%

Publisher:

Abstract:

Software to create individualised finite element (FE) models of the osseoligamentous spine using pre-operative computed tomography (CT) data-sets for spinal surgery patients has recently been developed. This study presents a geometric sensitivity analysis of this software to assess the effect of intra-observer variability in user-selected anatomical landmarks. User-selected landmarks on the osseous anatomy were defined from CT data-sets for three scoliosis patients and these landmarks were used to reconstruct patient-specific anatomy of the spine and ribcage using parametric descriptions. The intra-observer errors in landmark co-ordinates for these anatomical landmarks were calculated. FE models of the spine and ribcage were created using the reconstructed anatomy for each patient and these models were analysed for a loadcase simulating clinical flexibility assessment. The intra-observer error in the anatomical measurements was low in comparison to the initial dimensions, with the exception of the angular measurements for disc wedge and zygapophyseal joint (z-joint) orientation and disc height. This variability suggested that CT resolution may influence such angular measurements, particularly for small anatomical features, such as the z-joints, and may also affect disc height. The results of the FE analysis showed low variation in the model predictions for spinal curvature with the mean intra-observer variability substantially less than the accepted error in clinical measurement. These findings demonstrate that intra-observer variability in landmark point selection has minimal effect on the subsequent FE predictions for a clinical loadcase.

Relevance:

10.00%

Publisher:

Abstract:

Plant-based dried food products are popular commodities in the global market, and much research is focused on improving the products and processing techniques. Numerical modelling is highly applicable in this regard, and in this work a coupled meshfree particle-based two-dimensional (2-D) model was developed to simulate micro-scale deformations of plant cells during drying. Smoothed Particle Hydrodynamics (SPH) was used to model the viscous cell protoplasm (cell fluid) by approximating it as an incompressible Newtonian fluid. The visco-elastic behaviour of the cell wall was approximated by a Neo-Hookean solid material augmented with a viscous term and modelled with a Discrete Element Method (DEM). Compared to a previous work [H. C. P. Karunasena, W. Senadeera, Y. T. Gu and R. J. Brown, Appl. Math. Model., 2014], this study proposes three model improvements: a linearly decreasing positive cell turgor pressure during drying, cell wall contraction forces, and cell wall drying. The improvements made the model more comparable with experimental findings on dried cell morphology and geometric properties such as cell area, diameter, perimeter, roundness, elongation and compactness. This single-cell model could be used as a building block for advanced tissue models, which are highly applicable for product and process optimisation in Food Engineering.
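
As a side note, the geometric descriptors mentioned can be computed from a polygonal cell boundary as in the Python sketch below; the roundness definition 4πA/P² is a common convention, while definitions of elongation and compactness vary between studies and are therefore omitted.

import numpy as np

def cell_geometry(vertices):
    """vertices: (N, 2) array tracing the cell boundary in order."""
    v = np.asarray(vertices, dtype=float)
    x, y = v[:, 0], v[:, 1]
    x2, y2 = np.roll(x, -1), np.roll(y, -1)
    area = 0.5 * abs(np.sum(x * y2 - x2 * y))          # shoelace formula
    perimeter = np.sum(np.hypot(x2 - x, y2 - y))
    roundness = 4.0 * np.pi * area / perimeter ** 2     # equals 1.0 for a perfect circle
    return area, perimeter, roundness

# A regular polygon approximating a circle should give roundness close to 1.
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
print(cell_geometry(circle))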

Relevance:

10.00%

Publisher:

Abstract:

This study determined differences between computer workers with varying levels of neck pain in terms of work stressors, employee strain, electromyography (EMG) amplitude, and heart rate response to various tasks. Participants included 85 workers (33 with no pain, 38 with mild pain, 14 with moderate pain) and 22 non-working controls. The work stressors evaluated were job demands, decision authority, and social support. Heart rate was recorded during three tasks: copy-typing, typing with superimposed stress, and a colour word task. Measures included electromyography signals recorded bilaterally from the sternocleidomastoid (SCM), anterior scalene (AS), cervical extensor (CE) and upper trapezius (UT) muscles. Results showed no difference between groups in work stressors or employee strain measures. Workers with and without pain had higher measured levels of EMG amplitude in the SCM, AS and CE muscles during the tasks than controls (all P < 0.02). In workers with neck pain, the UT had difficulty switching off on completion of the tasks compared with controls and workers without pain. There was an increase in heart rate, perceived tension and pain, and a decrease in accuracy for all groups during the stressful tasks, with symptomatic workers producing more typing errors than controls and workers without pain. These findings suggest an altered muscle recruitment pattern in the neck flexor and extensor muscles. Whether this is a consequence or a source of the musculoskeletal disorder cannot be determined from this study. It is possible that workers currently without symptoms may be at risk of developing a musculoskeletal disorder.

Relevance:

10.00%

Publisher:

Abstract:

The construction industry accounts for a tenth of global GDP. Still, challenges such as slow adoption of new work processes, islands of information, and legal disputes remain frequent, industry-wide occurrences despite various attempts to address them. In response, IT-based approaches have been adopted to explore collaborative ways of executing construction projects. Building Information Modelling (BIM) is an exemplar of integrative technologies whose 3D-visualisation capabilities have fostered collaboration, especially between clients and design teams. Yet the ways in which specification documents are created and used to capture clients' expectations based on industry standards have remained largely unchanged since the 18th century. As a result, specification-related errors are still commonplace in an industry where vast amounts of information are consumed as well as produced in the course of project implementation in the built environment. By implication, processes such as cost planning, which depend on specification-related information, remain largely inaccurate even with the use of BIM-based technologies. This paper briefly distinguishes between non-BIM-based and BIM-based specifications and reports on ongoing efforts geared towards the latter. We review exemplars aimed at extending Building Information Models to include specification information embedded within the objects in a product library, and explore a viable way of reasoning about a semi-automated specification process using our product library.