41 results for Free-space method

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

A free-space quantum key distribution system has been demonstrated. Consideration was given to factors such as field of view and spectral width in order to reduce the deleterious effect of background light levels. Suitable optical sources such as lasers and RCLEDs were investigated, as well as optimal wavelength choices, always with a view to building a compact and robust system. The implementation of background-reduction measures resulted in a system capable of operating in daylight conditions. An autonomous system was left running, generating shared key material continuously for over 7 days. © 2009 Published by Elsevier B.V.

Relevance:

100.00%

Publisher:

Abstract:

We describe a free-space quantum cryptography system designed to allow continuous unattended key exchanges for periods of several days and over ranges of a few kilometres. The system uses a four-laser faint-pulse transmission module running at a pulse rate of 10 MHz to generate the four required polarization states. The receiver module likewise automatically selects a measurement basis and performs polarization measurements with four avalanche photodiodes. The controlling software can implement the full key exchange, including the sifting, error correction, and privacy amplification required to generate a secure key.
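The sifting step described above can be sketched in a few lines. This is a minimal simulation of a BB84-style basis comparison over an ideal (noiseless, loss-free) channel, not the authors' implementation; all names are illustrative.

```python
import secrets

def bb84_sift(n_pulses):
    """Simulate the sifting step of a BB84-style faint-pulse key exchange.

    The transmitter sends each pulse in one of four polarization states
    (two bases x two bit values); the receiver measures in a randomly
    chosen basis. Sifting keeps only the pulses where the bases matched.
    """
    alice_bases = [secrets.randbelow(2) for _ in range(n_pulses)]  # 0 = rectilinear, 1 = diagonal
    alice_bits = [secrets.randbelow(2) for _ in range(n_pulses)]
    bob_bases = [secrets.randbelow(2) for _ in range(n_pulses)]

    # Ideal channel: a matched basis returns the sent bit;
    # a mismatched basis returns a uniformly random result.
    bob_bits = [bit if ab == bb else secrets.randbelow(2)
                for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

    # Public basis comparison: both sides discard mismatched positions.
    keep = [i for i in range(n_pulses) if alice_bases[i] == bob_bases[i]]
    return [alice_bits[i] for i in keep], [bob_bits[i] for i in keep]

alice_key, bob_key = bb84_sift(1000)
# On a noiseless channel the sifted keys agree exactly; a real system
# would next run error correction and privacy amplification.
```

On average half the pulses survive sifting, which is why the raw pulse rate (10 MHz here) must be much higher than the final secure key rate.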

Relevance:

90.00%

Publisher:

Abstract:

Cascade transitions of rare-earth ions doped into infrared host fibers provide the potential to generate dual- or multiple-wavelength lasing in the mid-infrared region. In addition, the rapid development of saturable absorbers (SAs) for long wavelengths has motivated the realization of passively switched mid-infrared pulsed lasers. In this work, by combining these two techniques, a new regime of passively Q-switched ~3 μm and gain-switched ~2 μm pulses in a shared cavity was demonstrated with a Ho3+-doped fluoride fiber and a specifically designed semiconductor saturable absorber mirror (SESAM) as the SA. The repetition rate of the ~2 μm pulses can be tuned between half of, and equal to, that of the ~3 μm pulses by changing the pump power. The proposed method adds new capabilities and flexibility for simultaneously generating multiple mid-infrared wavelength pulses, with important potential applications in laser surgery, material processing, laser radar, free-space communications, and other areas.

Relevance:

90.00%

Publisher:

Abstract:

Agents inhabiting large-scale environments face the problem of generating maps by which they can navigate. One solution is to use probabilistic roadmaps, which rely on selecting and connecting a set of points that describe the interconnectivity of free space. However, the time required to generate these maps can be prohibitive, and agents do not typically know the environment in advance. In this paper we show that the optimal combination of point selection methods used to create the map depends on the environment; no single point selection method dominates. This motivates a novel self-adaptive approach in which an agent combines several point selection methods. The success rate of our approach is comparable to the state of the art, while the generation cost is substantially reduced. Self-adaptation therefore enables a more efficient use of the agent's resources. Results are presented both for a set of archetypal scenarios and for large-scale virtual environments based in Second Life, representing real locations in London.
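A basic probabilistic roadmap of the kind described above can be sketched as follows. This is the generic textbook construction with a single uniform sampler, not the paper's self-adaptive combination of selection methods, and the environment (a unit square with one circular obstacle) is purely illustrative.

```python
import math
import random

def build_prm(n_points, connect_radius, is_free, sampler):
    """Build a basic probabilistic roadmap: sample collision-free points,
    then connect every pair of points closer than connect_radius."""
    nodes = []
    while len(nodes) < n_points:
        p = sampler()
        if is_free(p):          # reject samples that fall inside obstacles
            nodes.append(p)
    edges = [(i, j)
             for i in range(len(nodes))
             for j in range(i + 1, len(nodes))
             if math.dist(nodes[i], nodes[j]) <= connect_radius]
    # A full planner would also collision-check each straight-line edge.
    return nodes, edges

# Illustrative environment: unit square with a circular obstacle at the centre.
def uniform_sampler():
    return (random.random(), random.random())

def is_free(p):
    return math.dist(p, (0.5, 0.5)) > 0.2

nodes, edges = build_prm(100, 0.15, is_free, uniform_sampler)
```

A self-adaptive variant would replace the single `sampler` with several (uniform, obstacle-biased, grid-based, ...) and reallocate the sampling budget among them according to how useful each method's points prove in the current environment.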

Relevance:

80.00%

Publisher:

Abstract:

Adapting one eye to a high-contrast grating reduces sensitivity to similar target gratings shown to the same eye, and also to those shown to the opposite eye. According to the textbook account, interocular transfer (IOT) of adaptation is around 60% of the within-eye effect. However, most previous studies of this were limited to high spatial frequencies, sustained presentation, and criterion-dependent methods for assessing threshold. Here, we measure IOT across a wide range of spatiotemporal frequencies, using a criterion-free 2AFC method. We find little or no IOT at low spatial frequencies, consistent with other recent observations. At higher spatial frequencies, IOT was present, but weaker than previously reported (around 35%, on average, at 8 c/deg). Across all conditions, monocular adaptation raised thresholds by around a factor of 2, and observers showed normal binocular summation, demonstrating that they were not binocularly compromised. These findings prompt a reassessment of our understanding of the binocular architecture implied by interocular adaptation. In particular, the output of monocular channels may be available to perceptual decision making at low spatial frequencies.

Relevance:

80.00%

Publisher:

Abstract:

A sizeable amount of the testing in eye care requires either the identification of targets, such as letters, to assess functional vision, or the subjective evaluation of imagery by an examiner. Computers can render a variety of different targets on their monitors and can be used to store and analyse ophthalmic images. However, existing computing hardware tends to be large, screen resolutions are often too low, and objective assessments of ophthalmic images are unreliable. Recent advances in mobile computing hardware and computer-vision systems can be used to enhance clinical testing in optometry. High-resolution touch screens embedded in mobile devices can render targets at a wide variety of distances and can be used to record and respond to patient responses, automating testing methods. This has opened up new opportunities in computerised near vision testing. Equally, new image processing techniques can be used to increase the validity and reliability of objective computer vision systems. Three novel apps, for assessing reading speed, contrast sensitivity, and amplitude of accommodation, were created by the author to demonstrate the potential of mobile computing to enhance clinical measurement. The reading speed app could present sentences effectively, control illumination, and automate the testing procedure for reading speed assessment. Meanwhile, the contrast sensitivity app made use of a bit-stealing technique and a swept-frequency target to rapidly assess a patient's full contrast sensitivity function at both near and far distances. Finally, customised electronic hardware was created and interfaced to an app on a smartphone to allow free-space measurement of the amplitude of accommodation. A new geometrical model of the tear film and a ray tracing simulation of a Placido disc topographer were produced to provide insights on the effect of tear film breakdown on ophthalmic images.
Furthermore, a new computer vision system, using a novel eyelash segmentation technique, was created to demonstrate the potential of computer vision systems for the clinical assessment of tear stability. Studies undertaken by the author to assess the validity and repeatability of the novel apps found that their repeatability was comparable to, or better than, existing clinical methods for reading speed and contrast sensitivity assessment. Furthermore, the apps offered reduced examination times in comparison to their paper-based equivalents. The reading speed and amplitude of accommodation apps correlated highly with existing methods of assessment, supporting their validity. There remain questions over the validity of using a swept-frequency sine-wave target to assess patients' contrast sensitivity functions, as no existing clinical test provides the same range of spatial frequencies and contrasts, nor equivalent assessment at distance and near. A validation study of the new computer vision system found that the author's tear metric correlated better with existing subjective measures of tear film stability than did those of a competing computer-vision system. However, repeatability was poor in comparison to the subjective measures due to eyelash interference. The new mobile apps, computer vision system, and studies outlined in this thesis provide further insight into the potential of applying mobile and image processing technology to enhance clinical testing by eye care professionals.
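The bit-stealing technique mentioned above obtains luminance steps finer than a display's 8-bit grayscale by nudging the R, G, and B channels independently, exploiting their unequal contributions to luminance. The sketch below illustrates the idea with the Rec. 709 luma weights; the weights and the function itself are illustrative assumptions, not taken from the thesis.

```python
from itertools import product

# Rec. 709 luma weights: each channel contributes unequally to luminance.
REC709 = (0.2126, 0.7152, 0.0722)

def stolen_levels(gray):
    """Return the distinct luminance levels reachable from an 8-bit gray
    value by adding 0 or 1 to each of the R, G, B channels separately."""
    levels = set()
    for dr, dg, db in product((0, 1), repeat=3):
        lum = sum(w * (gray + d) for w, d in zip(REC709, (dr, dg, db)))
        levels.add(round(lum, 4))
    return sorted(levels)

# The 8 channel-offset combinations give 8 distinct luminance levels
# between adjacent gray values 128 and 129, allowing sub-bit contrast steps.
levels = stolen_levels(128)
```

On a calibrated display this yields roughly three extra bits of effective grayscale depth, which is what makes low-contrast sine-wave targets renderable on 8-bit hardware.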

Relevance:

80.00%

Publisher:

Abstract:

A novel, highly efficient, fiber-compatible spectrally encoded imaging (SEI) system using a 45° tilted fiber grating (TFG) is proposed and experimentally demonstrated for the first time, to the best of our knowledge. The TFG serves as an in-fiber lateral diffraction element, eliminating the need for the bulky and lossy free-space diffraction gratings of conventional SEI systems. Under proper polarization control, and owing to the strong tilted reflection, the 45° TFG offers a diffraction efficiency as high as 93.5%. Our new design significantly reduces the volume of the SEI system and improves energy efficiency and system stability. As a proof-of-principle experiment, spectrally encoded imaging of a custom-designed sample (9.6 mm × 3.0 mm) using the TFG-based system is demonstrated. The lateral resolution of the SEI system is measured to be 42 μm in our experiment.

Relevance:

80.00%

Publisher:

Abstract:

We propose and demonstrate, for the first time to the best of our knowledge, the use of a 45° tilted fiber grating (TFG) as an in-fiber lateral diffraction element in an efficient and fiber-compatible spectrally encoded imaging (SEI) system. Under proper polarization control, the TFG has significantly enhanced diffraction efficiency (93.5%) due to strong tilted reflection. Our conceptually new fiber-optics-based design eliminates the need for bulky and lossy free-space diffraction gratings, significantly reduces the volume and cost of the imaging system, improves energy efficiency, and increases system stability. As a proof-of-principle experiment, we use the proposed system to perform one-dimensional (1D) line-scan imaging of a custom-designed three-slot sample; the results show that the constructed image matches the actual sample well. The angular dispersion of the 45° TFG is measured to be 0.054°/nm, and the lateral resolution of the SEI system is measured to be 28 μm in our experiment.

Relevance:

40.00%

Publisher:

Abstract:

Matrix application continues to be a critical step in sample preparation for matrix-assisted laser desorption/ionization (MALDI) mass spectrometry imaging (MSI). Imaging of small molecules such as drugs and metabolites is particularly problematic because the washing steps commonly used to remove salts are usually omitted, as they may also remove the analyte, and analyte spreading is more likely with conventional wet matrix application methods. We have developed a method that applies the matrix as a dry, finely divided powder, here referred to as dry matrix application, for the imaging of drug compounds. This appears to offer a complementary method to wet matrix application for the MALDI-MSI of small molecules, with the alternative matrix application techniques producing different ion profiles and allowing the visualization of compounds not observed using wet matrix application methods. We demonstrate its value in imaging clozapine from rat kidney and 4-bromophenyl-1,4-diazabicyclo(3.2.2)nonane-4-carboxylic acid from rat brain. In addition, exposure of the dry-matrix-coated sample to a saturated moist atmosphere appears to enhance the visualization of a different set of molecules.

Relevance:

40.00%

Publisher:

Abstract:

In this paper, free surface problems of Stefan-type for the parabolic heat equation are investigated using the method of fundamental solutions. The additional measurement necessary to determine the free surface could be a boundary temperature, a heat flux or an energy measurement. Both one- and two-phase flows are investigated. Numerical results are presented and discussed.

Relevance:

40.00%

Publisher:

Abstract:

A new 3D implementation of a hybrid model based on the analogy with two-phase hydrodynamics has been developed for the simulation of liquids at the microscale. The idea of the method is to smoothly combine the atomistic description in the molecular dynamics zone with the Landau-Lifshitz fluctuating hydrodynamics representation in the rest of the system, in the framework of macroscopic conservation laws, through the use of a single "zoom-in" user-defined function s that has the meaning of a partial concentration in the two-phase analogy model. In comparison with our previous works, the implementation has been extended to full 3D simulations for a range of atomistic models in GROMACS, from argon to water, in equilibrium conditions with a constant or a spatially variable function s. Preliminary results of simulating the diffusion of a small peptide in water are also reported.
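One plausible reading of the "zoom-in" function s is sketched below: a field value is interpolated between its atomistic and hydrodynamic descriptions with a spatially varying weight. The Gaussian profile and all names here are illustrative assumptions; the actual method couples the two descriptions through conservation laws rather than simple pointwise interpolation.

```python
import numpy as np

def zoom_in_weight(x, center, width):
    """Hypothetical 'zoom-in' function s(x): 1 in the molecular-dynamics
    core region, smoothly decaying to 0 in the hydrodynamics region."""
    return np.exp(-((x - center) / width) ** 2)

def blend(x, md_field, hydro_field, center=0.0, width=1.0):
    """Mix the atomistic and fluctuating-hydrodynamics descriptions of a
    field, treating s as a partial concentration: s*MD + (1 - s)*hydro."""
    s = zoom_in_weight(x, center, width)
    return s * md_field + (1.0 - s) * hydro_field

# Stand-in fields on a 1D grid: pure MD description vs. pure hydrodynamics.
x = np.linspace(-5, 5, 101)
md = np.ones_like(x)
hydro = np.zeros_like(x)
mixed = blend(x, md, hydro)   # 1 at the centre, approaching 0 at the edges
```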

Relevance:

30.00%

Publisher:

Abstract:

Training Mixture Density Network (MDN) configurations within the NETLAB framework takes time due to the nature of the computation of the error function and the gradient of the error function. By optimising the computation of these functions, so that gradient information is computed in parameter space, training time is decreased by at least a factor of sixty for the example given. Decreased training time increases the spectrum of problems to which MDNs can be practically applied, making the MDN framework an attractive method for the applied problem solver.

Relevance:

30.00%

Publisher:

Abstract:

Computer models, or simulators, are widely used in a range of scientific fields to aid understanding of the processes involved and to make predictions. Such simulators are often computationally demanding and are thus not amenable to statistical analysis. Emulators provide a statistical approximation, or surrogate, for the simulator, accounting for the additional approximation uncertainty. This thesis develops a novel sequential screening method to reduce the set of simulator variables considered during emulation. This screening method is shown to require fewer simulator evaluations than existing approaches. Utilising the lower-dimensional active variable set simplifies subsequent emulation analysis. For random-output, or stochastic, simulators the output dispersion, and thus variance, is typically a function of the inputs. This work extends the emulator framework to account for such heteroscedasticity by constructing two new heteroscedastic Gaussian process representations, and proposes an experimental design technique to optimally learn the model parameters. The design criterion is an extension of Fisher information to heteroscedastic variance models. Replicated observations are efficiently handled in both the design and model inference stages. Through a series of simulation experiments on both synthetic and real-world simulators, the emulators inferred on optimal designs with replicated observations are shown to outperform equivalent models inferred on space-filling, replicate-free designs in terms of both model parameter uncertainty and predictive variance.
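A minimal sketch of heteroscedastic Gaussian process prediction, where the observation noise variance varies with the input, might look like the following. Here the per-point noise variances are assumed known; the thesis instead learns an input-dependent variance model (e.g. via a second GP), and the kernel settings below are illustrative, not taken from the thesis.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=0.2, variance=1.0):
    """Squared-exponential kernel on 1D inputs."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict_hetero(X, y, noise_var, X_star):
    """GP posterior mean/variance with a per-point noise variance vector,
    i.e. a diagonal noise term that depends on the input location."""
    K = rbf_kernel(X, X) + np.diag(noise_var)   # heteroscedastic noise
    K_s = rbf_kernel(X, X_star)
    K_ss = rbf_kernel(X_star, X_star)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    return mean, np.diag(K_ss - v.T @ v)        # latent predictive variance

# Toy stochastic simulator: output noise grows with the input x.
rng = np.random.default_rng(0)
X = np.linspace(0, 1, 30)
noise_var = 0.01 + 0.2 * X                       # assumed known here
y = np.sin(4 * X) + rng.normal(0, np.sqrt(noise_var))
mean, var = gp_predict_hetero(X, y, noise_var, np.array([0.1, 0.9]))
```

Because the noise is larger near x = 0.9 than near x = 0.1, the posterior is less certain there, which is exactly the behaviour a homoscedastic emulator cannot capture and which motivates replicated observations in the design.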

Relevance:

30.00%

Publisher:

Abstract:

Background/Aims: Positron emission tomography has been applied to study cortical activation during human swallowing, but it employs radio-isotopes, precluding repeated experiments, and has to be performed supine, making the task of swallowing difficult. Here we describe Synthetic Aperture Magnetometry (SAM) as a novel method of localising and imaging the brain's neuronal activity from magnetoencephalographic (MEG) signals, to study the cortical processing of human volitional swallowing in the more physiological seated position. Methods: In 3 healthy male volunteers (age 28–36), 151-channel whole-cortex MEG (Omega-151, CTF Systems Inc.) was recorded whilst seated during the conditions of repeated volitional wet swallowing (5 ml boluses at 0.2 Hz) or rest. SAM analysis was then performed using varying spatial filters (5–60 Hz) before co-registration with individual MRI brain images. Activation areas were then identified using standard stereotactic-space neuro-anatomical maps. In one subject, repeat studies were performed to confirm the initial findings. Results: In all subjects, cortical activation maps for swallowing could be generated using SAM, the strongest activations being seen with 10–20 Hz filter settings. The main cortical activations associated with swallowing were in the sensorimotor cortex (BA 3,4), insular cortex, and lateral premotor cortex (BA 6,8). Of relevance, each cortical region displayed consistent inter-hemispheric asymmetry, to one or other hemisphere, this being different for each region and for each subject. Intra-subject comparisons of activation localisation and asymmetry showed impressive reproducibility. Conclusion: SAM analysis using MEG is an accurate, repeatable, and reproducible method for studying the brain processing of human swallowing in a more physiological manner, and provides novel opportunities for future studies of the brain-gut axis in health and disease.