936 results for Saccade threshold


Relevance:

10.00%

Publisher:

Abstract:

Aims Corneal nerve morphology and corneal sensation threshold have recently been explored as potential surrogate markers for the evaluation of diabetic neuropathy. We present the baseline findings of the ‘Longitudinal Assessment of Neuropathy in type 1 Diabetes using novel ophthalmic Markers’ (LANDMark) study. Methods The LANDMark study is a 4-year, two-site, natural history study of three participant groups: type 1 diabetes with neuropathy (T1W), type 1 diabetes without neuropathy (T1WO) and control participants without diabetes or neuropathy. All participants undergo a detailed annual assessment of neuropathy, including corneal nerve parameters measured using corneal confocal microscopy and corneal sensitivity measured using non-contact corneal aesthesiometry. Results In total, 76 T1W, 166 T1WO and 154 control participants were enrolled in the study. Corneal sensation threshold (mbars) was significantly higher (i.e. sensitivity was lower) in T1W (1.0 ± 1.1) than in T1WO (0.7 ± 0.7) and controls (0.6 ± 0.4) (p < 0.001), with no difference between T1WO and controls. Corneal nerve fibre length was lower in T1W (14.0 ± 6.4 mm/mm²) than in T1WO (19.1 ± 5.8 mm/mm²) and controls (23.2 ± 6.3 mm/mm²) (p < 0.001), and was also lower in T1WO than in controls. Conclusions The LANDMark baseline findings confirm a reduction in corneal sensitivity only in type 1 patients with neuropathy. However, corneal nerve fibre length is reduced even in type 1 patients without neuropathy, with an even greater deficit in those with neuropathy.


Operating in vegetated environments is a major challenge for autonomous robots. Obstacle detection based only on geometric features causes the robot to treat foliage, for example small grass tussocks that could easily be driven through, as obstacles. Classifying vegetation does not solve this problem, since an obstacle might be hidden behind the vegetation; in addition, dense vegetation typically does need to be treated as an obstacle. This paper addresses the problem by augmenting a probabilistic traversability map, constructed from laser data, with ultra-wideband radar measurements. An adaptive detection threshold and a probabilistic sensor model are developed to convert the radar data to occupancy probabilities. The resulting map retains the fine resolution of the laser map but clears areas that were marked as obstacles only because of drivable foliage. Experimental results validate that this method improves the accuracy of traversability maps in vegetated environments.
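The conversion from radar returns to occupancy probabilities, and their fusion with the laser map, can be sketched with a standard log-odds update. The logistic sensor model and all numeric values below are illustrative assumptions, not the paper's calibrated model:

```python
import math

def prob_to_logodds(p):
    return math.log(p / (1.0 - p))

def logodds_to_prob(l):
    return 1.0 / (1.0 + math.exp(-l))

def radar_occupancy(power, threshold, k=2.0):
    """Hypothetical inverse sensor model: map radar return power, relative to an
    adaptive detection threshold, onto an occupancy probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-k * (power - threshold)))

def update_cell(prior_p, measurement_p):
    """Bayesian log-odds fusion of the prior cell estimate with one measurement."""
    return logodds_to_prob(prob_to_logodds(prior_p) + prob_to_logodds(measurement_p))

# A cell the laser marked as likely occupied (p = 0.8) because of grass, but whose
# radar return is far below the detection threshold, is cleared toward free space.
p = update_cell(0.8, radar_occupancy(power=0.5, threshold=3.0))
```

In a full system this update would run for every cell along each radar beam; a single cell suffices here to show how a penetrable tussock is cleared from the map.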


Purpose Over the past decade, corneal nerve morphology and corneal sensation threshold have been explored as potential surrogate markers for the evaluation of diabetic neuropathy. We present the baseline findings of the Longitudinal Assessment of Neuropathy in Diabetes using novel ophthalmic Markers (LANDMark) study. Methods The LANDMark Study is a 5-year, two-site, natural history (observational) study of individuals with Type 1 diabetes, stratified into those with (T1W) and without (T1WO) neuropathy according to the Toronto criteria, and control subjects. All study participants undergo a detailed annual assessment of neuropathy, including corneal nerve parameters measured using corneal confocal microscopy and corneal sensitivity measured using non-contact corneal esthesiometry. Results In total, 396 eligible individuals (208 in Brisbane and 188 in Manchester) were assessed: 76 T1W, 166 T1WO and 154 controls. Corneal sensation threshold (mbars) was significantly higher in T1W (1.0 ± 1.1) than in T1WO (0.7 ± 0.7) and controls (0.6 ± 0.4) (P = 0.002); post-hoc analysis (PHA) revealed no difference between T1WO and controls (Tukey HSD, P = 0.502). Corneal nerve fiber length (CNFL, mm/mm²) was lower in T1W (13.8 ± 6.4) than in T1WO (19.1 ± 5.8) and controls (23.2 ± 6.3) (P < 0.001); PHA revealed CNFL to be lower in T1W than in T1WO, and lower in both of these groups than in controls (P < 0.001). Corneal nerve branch density (CNBD, branches/mm²) was significantly lower in T1W (40 ± 32) than in T1WO (62 ± 37) and controls (83 ± 46) (P < 0.001); PHA showed CNBD was lower in T1W than in T1WO, and lower in both groups than in controls (P < 0.001). Alcohol and cigarette consumption did not differ between groups, although age, BMI, blood pressure, waist circumference, HbA1c, albumin-creatinine ratio and cholesterol were slightly greater in T1W than in T1WO (P < 0.05). Some site differences were observed.
Conclusions The LANDMark baseline findings confirm that corneal sensitivity and corneal nerve morphometry can detect differences in neuropathy status between individuals with Type 1 diabetes and healthy controls. Corneal nerve morphology is significantly abnormal even in diabetic patients ‘without neuropathy’ compared with control participants. The longitudinal phase of the study will assess the capability of these tests to monitor change in these parameters over time as potential surrogate markers for neuropathy.


Endurance exercise can cause immunosuppression and increase the risk of upper respiratory illness. The present study examined changes in the secretion of T helper (Th) cell cytokines after endurance exercise. Ten highly trained road cyclists [mean ± SEM: age 24.2 ± 1.7 years; height 1.82 ± 0.02 m; body mass 73.8 ± 2.0 kg; peak oxygen uptake 65.9 ± 2.3 mL/(kg·min)] performed 2 h of cycling exercise at 90% of the second ventilatory threshold. Peripheral blood mononuclear cells were isolated and stimulated with phytohemagglutinin, and plasma cortisol concentrations and the concentrations of Th1/Th2/Th17 cell cytokines were examined. Data were analyzed using both traditional statistics and magnitude-based inferences. Results revealed a significant decrease in plasma cortisol at 4–24 h post-exercise compared with pre-exercise values. Qualitative analysis revealed post-exercise changes in the concentrations of plasma cortisol, IL-2, TNF, IL-4, IL-6, IL-10 and IL-17A compared with pre-exercise values. A Th1/Th2 shift was evident immediately post-exercise. Furthermore, for multiple cytokines, including IL-2 and TNF (Th1), IL-6 and IL-10 (Th2), and IL-17 (Th17), no meaningful change in concentration occurred until more than 4 h post-exercise, highlighting the duration of exercise-induced changes in immune function. These results demonstrate the importance of considering “clinically” significant versus statistically significant changes in immune cell function after exercise.


Purpose This work introduces the concept of very small field size. Output factor (OPF) measurements at these field sizes require extremely careful experimental methodology, including measurement of the dosimetric field size at the same time as each OPF measurement. Two quantifiable scientific definitions of the threshold of very small field size are presented. Methods A practical definition was established by quantifying the effect that a 1 mm error in field size or detector position had on OPFs, and setting the acceptable uncertainty in OPF at 1%. Alternatively, for a theoretical definition of very small field size, the OPFs were separated into additional factors to investigate the specific effects of lateral electronic disequilibrium, photon scatter in the phantom and source occlusion. The dominant effect was established and formed the basis of a theoretical definition of very small fields. Each factor was obtained using Monte Carlo simulations of a Varian iX linear accelerator for square field sizes of side length from 4 mm to 100 mm, using a nominal photon energy of 6 MV. Results According to the practical definition established in this project, field sizes < 15 mm were considered very small for 6 MV beams for maximal field size uncertainties of 1 mm. If the acceptable uncertainty in the OPF was increased from 1.0% to 2.0%, or the field size uncertainty reduced to 0.5 mm, field sizes < 12 mm were considered very small. Lateral electronic disequilibrium in the phantom was the dominant cause of change in OPF at very small field sizes. Thus the theoretical definition of very small field size coincided with the field size at which lateral electronic disequilibrium clearly caused a greater change in OPF than any other effect; this was found to occur at field sizes < 12 mm. Source occlusion also caused a large change in OPF for field sizes < 8 mm.
Based on the results of this study, field sizes < 12 mm were considered theoretically very small for 6 MV beams. Conclusions Extremely careful experimental methodology, including measurement of the dosimetric field size at the same time as the output factor measurement for each field size setting, together with very precise detector alignment, is required at field sizes below 12 mm, and more conservatively below 15 mm, for 6 MV beams. These recommendations should be applied in addition to all the usual considerations for small field dosimetry, including careful detector selection.
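The practical definition above can be illustrated numerically: scan field sizes and flag those at which a 1 mm field-size error shifts the output factor by more than the 1% tolerance. The exponential OPF curve below is an assumed stand-in for measured or Monte Carlo data, so the resulting threshold is illustrative only, not the study's 15 mm value:

```python
import math

def opf(s_mm):
    """Hypothetical output factor curve with a steep fall-off at small fields
    (an assumed stand-in for measured 6 MV data)."""
    return 1.0 - math.exp(-s_mm / 8.0)

def is_very_small(s_mm, ds_mm=1.0, tol=0.01):
    """Practical criterion: a +/- ds_mm field-size error changes the OPF
    by more than the fractional tolerance tol."""
    delta = max(abs(opf(s_mm + ds_mm) - opf(s_mm)),
                abs(opf(s_mm) - opf(s_mm - ds_mm)))
    return delta / opf(s_mm) > tol

# Smallest field size (mm) at which the 1 mm / 1% criterion is comfortably met.
threshold_mm = min(s for s in range(4, 101) if not is_very_small(s))
```

With real OPF data the same scan would reproduce the study's practical thresholds; tightening `ds_mm` or loosening `tol` moves the threshold downward, as reported.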


This study used automated data-processing techniques to calculate a set of novel treatment plan accuracy metrics and investigate their usefulness as predictors of quality assurance (QA) success and failure. In total, 151 beams from 23 prostate and cranial IMRT treatment plans were used; these plans had been evaluated before treatment using measurements with a diode array system. The TADA software suite was adapted to allow automatic batch calculation of several proposed plan accuracy metrics, including mean field area, small-aperture, off-axis and closed-leaf factors. All of these results were compared with the gamma pass rates from the QA measurements, and correlations were investigated. The mean field area factor provided a threshold field size (5 cm², equivalent to a 2.2 × 2.2 cm² square field) below which all beams failed the QA tests. The small-aperture score provided a useful predictor of plan failure when averaged over all beams, despite being only weakly correlated with gamma pass rates for individual beams. By contrast, the closed-leaf and off-axis factors provided information about the geometric arrangement of the beam segments but were not useful for distinguishing between plans that passed and failed QA. This study has provided some simple tests for plan accuracy, which may help minimise the time spent on QA assessments of treatments that are unlikely to pass.
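The mean field area screen can be sketched as follows. The segment data format (area, monitor-unit pairs) and the MU weighting are assumptions made for illustration; only the 5 cm² threshold is taken from the study:

```python
def mean_field_area(segments):
    """MU-weighted mean aperture area over a beam's segments.
    segments: list of (area_cm2, monitor_units) pairs -- an assumed input format."""
    total_mu = sum(mu for _, mu in segments)
    return sum(area * mu for area, mu in segments) / total_mu

def likely_to_fail_qa(segments, threshold_cm2=5.0):
    """Flag a beam whose mean field area falls below the threshold field size
    below which all beams in the study failed QA."""
    return mean_field_area(segments) < threshold_cm2

beam = [(3.2, 40.0), (6.5, 10.0)]   # mostly small apertures, MU-weighted mean ~3.9 cm^2
flag = likely_to_fail_qa(beam)
```

Running such a screen before measurement-based QA is the kind of triage the study suggests: beams flagged here are unlikely to pass and can be prioritised for replanning.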


Shared Material on Dying is a trio/solo work commissioned by Jenny Roche and the Dublin Dance Festival in 2008 from choreographer Liz Roche. Touring widely since its creation, it continues to be a rich research environment for Jenny Roche's interrogation of the dancer's first-person perspective in choreographic production and performance. The research question explored through the live performance of this iteration was how the same dancing moment might be expressed from multiple perspectives by three different dancers, and what this might reveal about the dancer's inner configuring of the performance environment. Erin Manning (2009) describes the unstable and emergent moment before movement materialises as the preacceleration of the movement, when the potentialities of the gesture collapse and stabilize into form. It is this threshold of potentiality that is interesting: the moment before the dance happens, when the configuring process of the dancer brings it into being. Cynthia Roses-Thema (2007), after neuroscientist Antonio Damasio, writes that embodied experience is mapped as it unfolds and alters from moment to moment in line with a constantly changing internal milieu. As a performer in this piece, I explored the inner terrain of the three performers (myself included) by externalizing these inner states. This research contributed to a paper presentation at the Digital Resources for the Arts and Humanities Conference, UK, 2013.


Reliability of carrier-phase ambiguity resolution (AR) in an integer least-squares (ILS) problem depends on the ambiguity success rate (ASR), which in practice can be well approximated by the success probability of integer bootstrapping solutions. With the current GPS constellation, a sufficiently high ASR for the geometry-based model is only achievable for a certain percentage of the time; as a result, high reliability of AR cannot be assured by a single constellation. With a dual-constellation system (DCS), for example GPS and Beidou, which provides more satellites in view, users can expect significant performance benefits such as improved AR reliability and high-precision positioning solutions. Simply using all the satellites in view for AR and positioning is a straightforward solution, but does not necessarily lead to the hoped-for high reliability. This paper presents an alternative approach that selects a subset of the visible satellites, instead of using all of them, to achieve higher reliability of the AR solutions in a multi-GNSS environment. Traditionally, satellite selection algorithms are mostly based on the position dilution of precision (PDOP), in order to meet accuracy requirements. In this contribution, reliability criteria are introduced for GNSS satellite selection, and a novel satellite selection algorithm for reliable ambiguity resolution (SARA) is developed. The SARA algorithm allows receivers to select a subset of satellites that achieves a high ASR, such as above 0.99.
Numerical results from simulated dual-constellation cases show that with the SARA procedure, the percentage of ASR values exceeding 0.99 and the percentage of ratio-test values passing the threshold of 3 are both higher than when all satellites in view are used directly. In the dual-constellation case in particular, the percentages of ASRs (> 0.99) and ratio-test values (> 3) were as high as 98.0% and 98.5% respectively, compared with 18.1% and 25.0% without the satellite selection process. It is also worth noting that the implementation of SARA is simple and its computation time is low, so it can be applied in most real-time data-processing applications.
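The bootstrapped ASR referred to above is conventionally computed from the conditional standard deviations of the decorrelated ambiguities. A minimal sketch of that standard formula (from the integer bootstrapping literature, not this paper's implementation):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bootstrap_success_rate(cond_stds):
    """Success probability of integer bootstrapping:
    P_s = prod_i (2 * Phi(1 / (2 * sigma_i)) - 1),
    where sigma_i are the conditional standard deviations (in cycles) of the
    decorrelated ambiguities, conditioned on the previously fixed ones."""
    p = 1.0
    for sigma in cond_stds:
        p *= 2.0 * norm_cdf(1.0 / (2.0 * sigma)) - 1.0
    return p

# A satellite subset with small conditional std devs clears an ASR criterion of 0.99;
# the sigma values here are invented for illustration.
asr = bootstrap_success_rate([0.05, 0.08, 0.10])
```

A selection procedure in the spirit of SARA would evaluate this quantity for candidate satellite subsets and keep one whose ASR exceeds the chosen criterion (e.g. 0.99).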


We construct two efficient Identity-Based Encryption (IBE) systems that admit selective-identity security reductions without random oracles in groups equipped with a bilinear map. Selective-identity secure IBE is a slightly weaker security model than the standard security model for IBE. In this model the adversary must commit ahead of time to the identity that it intends to attack, whereas in an adaptive-identity attack the adversary is allowed to choose this identity adaptively. Our first system, BB1, is based on the well-studied decisional bilinear Diffie–Hellman assumption, and extends naturally to systems with hierarchical identities, or HIBE. Our second system, BB2, is based on a stronger assumption which we call the Bilinear Diffie–Hellman Inversion assumption and provides another approach to building IBE systems. Our first system, BB1, is very versatile and well suited for practical applications: the basic hierarchical construction can be efficiently secured against chosen-ciphertext attacks, and further extended to support efficient non-interactive threshold decryption, among others, all without using random oracles. Both systems, BB1 and BB2, can be modified generically to provide “full” IBE security (i.e., against adaptive-identity attacks), either using random oracles, or in the standard model at the expense of a non-polynomial but easy-to-compensate security reduction.


Breast cancer is the cancer that most commonly affects women worldwide. This type of cancer is genetically complex but is strongly linked to steroid hormone signalling systems. Because microRNAs act as translational regulators of multiple genes, including the steroid nuclear receptors, single nucleotide polymorphisms (SNPs) in microRNA genes can have potentially wide-ranging influences on breast cancer development. This study was therefore conducted to investigate the relationships between breast cancer risk in Caucasian Australian women and six SNPs (rs6977848, rs199981120, rs185641358, rs113054794, rs66461782 and rs12940701) located in four miRNA genes predicted to target the estrogen receptor (miR-148a, miR-221, miR-186 and miR-152). Using high-resolution melt (HRM) analysis and polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP), 487 samples (225 controls and 262 cases) were genotyped. Analysis of the genotype and allele frequencies indicated that the differences between the case and control populations were not significant for rs6977848, rs66461782 and rs12940701: their p-values (0.81, 0.93 and 0.10, respectively) were all above the significance threshold (p = 0.05). Our data thus suggest that these SNPs do not affect breast cancer risk in the tested population. In addition, rs199981120, rs185641358 and rs113054794 were not detected in this population, suggesting that these SNPs do not occur in Caucasian Australians.


Recently, a convex hull-based human identification protocol was proposed by Sobrado and Birget, whose steps can be performed by humans without additional aid. The main part of the protocol involves the user mentally forming the convex hull of a set of secret icons among a larger set of graphical icons and then clicking randomly within this convex hull. While some rudimentary security issues of this protocol have been discussed, a comprehensive security analysis has been lacking. In this paper, we analyze the security of this convex hull-based protocol. In particular, we show two probabilistic attacks that reveal the user's secret after observation of only a handful of authentication sessions. These attacks can be efficiently implemented, as their time and space complexities are considerably lower than those of a brute-force attack. We show that while the first attack can be mitigated through appropriately chosen values of the system parameters, the second attack succeeds with non-negligible probability even with large system parameter values that cross the threshold of usability.
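The hull-forming step of the protocol can be sketched with standard computational geometry: build the convex hull of the secret icons' screen positions and test whether a click lands inside it. The icon coordinates below are invented for illustration; nothing here reproduces the attacks themselves.

```python
def cross(o, a, b):
    """Z-component of (a - o) x (b - o); positive for a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def inside_hull(point, hull):
    """True if point lies inside or on the hull (hull in CCW vertex order)."""
    n = len(hull)
    return all(cross(hull[i], hull[(i + 1) % n], point) >= 0 for i in range(n))

secret_icons = [(1, 1), (8, 2), (5, 9)]   # illustrative screen coordinates
hull = convex_hull(secret_icons)
ok = inside_hull((5, 4), hull)            # a valid click inside the triangle
```

The attacks exploit exactly the information this check leaks: each observed click constrains which icon subsets could have generated a hull containing it.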


In this chapter we continue the exposition of crypto topics that was begun in the previous chapter. This chapter covers secret sharing, threshold cryptography, signature schemes, and finally quantum key distribution and quantum cryptography. As in the previous chapter, we have focused only on the essentials of each topic. We have selected in the bibliography a list of representative items, which can be consulted for further details. First we give a synopsis of the topics that are discussed in this chapter. Secret sharing is concerned with the problem of how to distribute a secret among a group of participating individuals, or entities, so that only predesignated collections of individuals are able to recreate the secret by collectively combining the parts of the secret that were allocated to them. There are numerous applications of secret-sharing schemes in practice. One example of secret sharing occurs in banking. For instance, the combination to a vault may be distributed in such a way that only specified collections of employees can open the vault by pooling their portions of the combination. In this way the authority to initiate an action, e.g., the opening of a bank vault, is divided for the purposes of providing security and for added functionality, such as auditing, if required. Threshold cryptography is a relatively recently studied area of cryptography. It deals with situations where the authority to initiate or perform cryptographic operations is distributed among a group of individuals. Many of the standard operations of single-user cryptography have counterparts in threshold cryptography. Signature schemes deal with the problem of generating and verifying electronic signatures for documents. A subclass of signature schemes is concerned with the shared generation and the shared verification of signatures, where a collaborating group of individuals is required to perform these actions.
A new paradigm of security has recently been introduced into cryptography with the emergence of the ideas of quantum key distribution and quantum cryptography. While classical cryptography employs various mathematical techniques to restrict eavesdroppers from learning the contents of encrypted messages, in quantum cryptography the information is protected by the laws of physics.
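The vault-combination example maps directly onto the classic (t, n)-threshold construction, Shamir secret sharing over a prime field. A minimal sketch (illustrative code, not the chapter's notation):

```python
import random

P = 2**61 - 1  # a prime modulus; the secret and all shares live in GF(P)

def make_shares(secret, t, n):
    """Split `secret` into n shares such that any t of them reconstruct it:
    evaluate a random degree-(t-1) polynomial with constant term `secret`."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P), using Fermat inverses."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)   # e.g. a vault combination
recovered = reconstruct(shares[:3])          # any 3 of the 5 shares suffice
```

Fewer than t shares reveal nothing about the secret, since a degree-(t-1) polynomial is underdetermined by t-1 points; this is the information-theoretic guarantee the chapter discusses.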


Recently, a convex hull-based human identification protocol was proposed by Sobrado and Birget, whose steps can be performed by humans without additional aid. The main part of the protocol involves the user mentally forming the convex hull of a set of secret icons among a larger set of graphical icons and then clicking randomly within this convex hull. In this paper we show two efficient probabilistic attacks on this protocol, which reveal the user's secret after observation of only a handful of authentication sessions. We show that while the first attack can be mitigated through appropriately chosen values of the system parameters, the second attack succeeds with non-negligible probability even with large system parameter values that cross the threshold of usability.


Secure multi-party computation (MPC) protocols enable a set of n mutually distrusting participants P_1, ..., P_n, each with a private input x_i, to compute a function Y = F(x_1, ..., x_n) such that, at the end of the protocol, all participants learn the correct value of Y while secrecy of the private inputs is maintained. Classical results in unconditionally secure MPC indicate that, in the presence of an active adversary, every function can be computed if and only if the number of corrupted participants, t_a, is smaller than n/3. Relaxing the requirement of perfect secrecy and utilizing broadcast channels, one can improve this bound to t_a < n/2. All existing MPC protocols assume that uncorrupted participants are truly honest, i.e., they are not even curious about learning the other participants' secret inputs. Based on this assumption, some MPC protocols are designed in such a way that, after elimination of all misbehaving participants, the remaining ones learn all information in the system. This is not consistent with maintaining the privacy of the participants' inputs. Furthermore, an improvement of the classical results given by Fitzi, Hirt, and Maurer indicates that in addition to the t_a actively corrupted participants, the adversary may simultaneously corrupt some participants passively. This is in contrast to the assumption that participants who are not corrupted by an active adversary are truly honest. This paper examines the privacy of MPC protocols and introduces the notion of an omnipresent adversary, which cannot be eliminated from the protocol. The omnipresent adversary can be passive, active or mixed. We assume that up to a minority of the participants who are not corrupted by an active adversary can be corrupted passively, with the restriction that at any time the number of corrupted participants does not exceed a predetermined threshold.
We also show that the existence of a t-resilient protocol for a group of n participants implies the existence of a t′-private protocol for a group of n′ participants; that is, the elimination of misbehaving participants from a t-resilient protocol leads to the decomposition of the protocol. Our adversary model stipulates that an MPC protocol never operates with a set of truly honest participants, which is a more realistic scenario. Therefore, the privacy of all participants who properly follow the protocol will be maintained. We present a novel disqualification protocol to avoid a loss of privacy of participants who properly follow the protocol.
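As a toy illustration of the setting above, the function Y = F(x_1, ..., x_n) = x_1 + ... + x_n can be computed with additive secret sharing. This sketch provides passive security only, far weaker than the protocols discussed, and simulates all participants in one process:

```python
import random

M = 2**32  # working modulus; all shares are taken mod M

def share(x, n):
    """Split private input x into n additive shares that sum to x (mod M);
    any n - 1 of them are uniformly random and reveal nothing about x."""
    parts = [random.randrange(M) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % M)
    return parts

def mpc_sum(inputs):
    """Each participant shares its input; participant j locally adds the j-th
    share of every input; publishing the n partial sums reveals only the total."""
    n = len(inputs)
    all_shares = [share(x, n) for x in inputs]
    partials = [sum(s[j] for s in all_shares) % M for j in range(n)]
    return sum(partials) % M

total = mpc_sum([10, 20, 30])   # participants learn Y = 60 and nothing else
```

A passively corrupted participant here sees only its own shares and the public partial sums, which is exactly the kind of honest-but-curious leakage boundary the paper's omnipresent-adversary model reasons about.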


In moderate to high sea states, the effectiveness of ship fin stabilizers can deteriorate severely due to nonlinear effects arising from the unsteady hydrodynamic characteristics of the fins: dynamic stall. These nonlinear effects take the form of a hysteresis and become very significant when the effective angle of attack of the fins exceeds a certain threshold angle. Dynamic stall can result in a complete loss of control action, depending on how far the fins exceed the threshold angle. When this is detected, it is common to reduce the gain of the controller that commands the fins. This approach is cautious and tends to reduce performance once the conditions leading to dynamic stall disappear. An alternative approach, which prevents these effects while maintaining high performance, consists of estimating the effective angle of attack and setting a conservative constraint on it as part of the control objectives. In this paper, we investigate the latter approach and propose the use of model predictive control (MPC) to prevent the development of these nonlinear effects by considering constraints on both the mechanical angle of the fins and the effective angle of attack.
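A minimal sketch of the constraint idea (not the paper's MPC formulation): estimate the effective angle of attack as the mechanical fin angle plus a flow-induced component, then saturate the command so both the mechanical limit and the dynamic-stall threshold hold. The additive model and all numeric limits are assumptions for illustration.

```python
def effective_angle(alpha_mech_deg, flow_angle_deg):
    """Hypothetical model: effective angle of attack = mechanical fin angle
    plus the flow-induced angle from the ship's roll motion (degrees)."""
    return alpha_mech_deg + flow_angle_deg

def constrain_command(alpha_cmd_deg, flow_angle_deg,
                      alpha_eff_max_deg=22.0, alpha_mech_max_deg=25.0):
    """Clip the commanded fin angle so that both the mechanical-angle limit and
    the effective-angle-of-attack constraint (dynamic-stall threshold) hold.
    The numeric limits are illustrative, not from the paper."""
    lo = max(-alpha_mech_max_deg, -alpha_eff_max_deg - flow_angle_deg)
    hi = min(alpha_mech_max_deg, alpha_eff_max_deg - flow_angle_deg)
    return min(max(alpha_cmd_deg, lo), hi)

# With a 10 deg flow-induced angle, a 20 deg command is cut back to 12 deg
# so the effective angle stays at the 22 deg threshold.
cmd = constrain_command(20.0, 10.0)
```

An MPC formulation enforces the same pair of constraints over a prediction horizon rather than one step at a time, which is what lets it anticipate stall before the threshold is crossed.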