961 results for Rotated lattices
Abstract:
Refraction may be affected by the forces of the lids and extraocular muscles when eye direction and head direction are not aligned (oblique viewing), which may have influenced past findings on peripheral refraction of the eye. We investigated the effect of oblique viewing on axial and peripheral refraction. In a first experiment, cycloplegic axial refractions were determined when subjects' heads were positioned to look straight ahead through an open-view autorefractor and when the heads were rotated to the right or left by 30° with compensatory eye rotation (oblique viewing). Subjects were 16 young emmetropes (18–35 years), 22 young myopes (19–36 years) and 15 old emmetropes (45–60 years). In a second experiment, cycloplegic peripheral refraction measurements were taken out to ±34° horizontally from fixation while the subjects rotated their heads to match the peripheral refraction angles (eye in primary position with respect to the head) or the eyes were rotated with respect to the head (oblique viewing). Subjects were 10 emmetropes and 10 myopes. We did not find any significant changes in axial or peripheral refraction upon oblique viewing for any of the subject groups. In general, for the range of horizontal angles used, it is not critical whether or not the eye is rotated with respect to the head during axial or peripheral refraction measurements.
Abstract:
Spoken term detection (STD) typically involves performing word- or sub-word-level speech recognition and indexing the result. This work challenges the assumption that improved speech recognition accuracy implies better indexing for STD. Using an index derived from phone lattices, this paper examines the effect of language model selection on the relationship between phone recognition accuracy and STD accuracy. Results suggest that language models usually improve phone recognition accuracy, but their inclusion does not always translate to improved STD accuracy. The findings suggest that using phone recognition accuracy to measure the quality of an STD index can be problematic, and highlight the need for an alternative that is more closely aligned with the goals of the specific detection task.
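The indexing relationship at issue can be made concrete with a toy sketch: a minimal inverted index over phone n-grams, here built from single phone strings rather than full lattices, and with made-up phone labels and utterance ids. A real STD index would also cover alternative lattice paths per utterance.

```python
from collections import defaultdict


def build_index(utterances, n=3):
    # Inverted index: phone n-gram -> set of utterance ids.
    # utterances maps an utterance id to its decoded phone sequence.
    index = defaultdict(set)
    for uid, phones in utterances.items():
        for i in range(len(phones) - n + 1):
            index[tuple(phones[i:i + n])].add(uid)
    return index


def search(index, query_phones, n=3):
    # An utterance is a candidate hit if it contains every n-gram of the term.
    grams = [tuple(query_phones[i:i + n])
             for i in range(len(query_phones) - n + 1)]
    hits = [index.get(g, set()) for g in grams]
    return set.intersection(*hits) if hits else set()
```

Under this scheme a better phone recognizer changes which n-grams land in the index, but detection quality also depends on how well the indexed units match query terms, which is the paper's point.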
Abstract:
Established Monte Carlo user codes BEAMnrc and DOSXYZnrc permit the accurate and straightforward simulation of radiotherapy experiments and treatments delivered from multiple beam angles. However, when an electronic portal imaging detector (EPID) is included in these simulations, treatment delivery from non-zero beam angles becomes problematic. This study introduces CTCombine, a purpose-built code for rotating selected CT data volumes, converting CT numbers to mass densities, combining the results with model EPIDs and writing output in a form which can easily be read and used by the dose calculation code DOSXYZnrc. The geometric and dosimetric accuracy of CTCombine’s output has been assessed by simulating simple and complex treatments applied to a rotated planar phantom and a rotated humanoid phantom and comparing the resulting virtual EPID images with the images acquired using experimental measurements and independent simulations of equivalent phantoms. It is expected that CTCombine will be useful for Monte Carlo studies of EPID dosimetry as well as other EPID imaging applications.
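CTCombine's actual calibration table is not given in the abstract, but the CT-number-to-mass-density step it performs is conventionally a piecewise-linear ramp over Hounsfield units. A minimal sketch with assumed, illustrative segment values (not CTCombine's own calibration):

```python
import numpy as np


def hu_to_density(hu):
    # Illustrative two-segment CT calibration (assumed values):
    #   air/soft tissue:  rho = 1 + HU/1000   (so -1000 HU -> 0, 0 HU -> 1)
    #   bone-like region: shallower slope above +100 HU
    hu = np.asarray(hu, dtype=float)
    rho = np.where(hu <= 100.0,
                   1.0 + hu / 1000.0,
                   1.1 + (hu - 100.0) / 1700.0)
    return np.clip(rho, 0.0, None)  # densities cannot be negative
```

The two segments are continuous at +100 HU; a production code such as DOSXYZnrc uses a multi-segment ramp tied to the scanner's measured calibration.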
Abstract:
Aberrations affect image quality of the eye away from the line of sight as well as along it. High amounts of lower order aberrations are found in the peripheral visual field and higher order aberrations change away from the centre of the visual field. Peripheral resolution is poorer than that in central vision, but peripheral vision is important for movement and detection tasks (for example, driving) which are adversely affected by poor peripheral image quality. Any physiological process or intervention that affects axial image quality will affect peripheral image quality as well. The aim of this study was to investigate the effects of accommodation, myopia, age, and refractive interventions of orthokeratology, laser in situ keratomileusis and intraocular lens implantation on the peripheral aberrations of the eye. This is the first systematic investigation of peripheral aberrations in a variety of subject groups. Peripheral aberrations can be measured either by rotating a measuring instrument relative to the eye or rotating the eye relative to the instrument. I used the latter as it is much easier to do. To rule out effects of eye rotation on peripheral aberrations, I investigated the effects of eye rotation on axial and peripheral cycloplegic refraction using an open field autorefractor. For axial refraction, the subjects fixated at a target straight ahead, while their heads were rotated by ±30° with a compensatory eye rotation to view the target. For peripheral refraction, the subjects rotated their eyes to fixate on targets out to ±34° along the horizontal visual field, followed by measurements in which they rotated their heads such that the eyes stayed in the primary position relative to the head while fixating at the peripheral targets. Oblique viewing did not affect axial or peripheral refraction. 
Therefore it is not critical, within the range of viewing angles studied, whether axial and peripheral refractions are measured with rotation of the eye relative to the instrument or rotation of the instrument relative to the eye. Peripheral aberrations were measured using a commercial Hartmann-Shack aberrometer. A number of hardware and software changes were made. The 1.4 mm range-limiting aperture was replaced by a larger aperture (2.5 mm) to ensure all the light from peripheral parts of the pupil reached the instrument detector even when aberrations were high, such as those that occur in peripheral vision. The power of the superluminescent diode source was increased to improve detection of spots passing through the peripheral pupil. A beam splitter was placed between the subjects and the aberrometer, through which they viewed an array of targets on a wall or projected on a screen in a 6-row × 7-column matrix of points covering a visual field of 42° × 32°. In peripheral vision, the pupil of the eye appears elliptical rather than circular; data were analysed off-line using custom software to determine peripheral aberrations. All analyses in the study were conducted for 5.0 mm pupils. The influence of accommodation on peripheral aberrations was investigated in young emmetropic subjects by presenting fixation targets at 25 cm and 3 m (4.0 D and 0.3 D accommodative demands, respectively). Increase in accommodation did not affect the patterns of any aberrations across the field, but there was an overall negative shift in spherical aberration across the visual field of 0.10 ± 0.01 µm. Subsequent studies were conducted with the targets at a 1.2 m distance. Young emmetropes, young myopes and older emmetropes exhibited similar patterns of astigmatism and coma across the visual field. However, the rate of change of coma across the field was higher in young myopes than young emmetropes and was highest in older emmetropes amongst the three groups. 
Spherical aberration showed an overall decrease in myopes and an increase in older emmetropes across the field, as compared to young emmetropes. Orthokeratology, spherical IOL implantation and LASIK altered peripheral higher order aberrations considerably, especially spherical aberration. Spherical IOL implantation resulted in an overall increase in spherical aberration across the field. Orthokeratology and LASIK reversed the direction of change in coma across the field. Orthokeratology corrected peripheral relative hypermetropia by correcting myopia in the central visual field. Theoretical ray tracing demonstrated that changes in aberrations due to orthokeratology and LASIK can be explained by the induced changes in radius of curvature and asphericity of the cornea. This investigation has shown that peripheral aberrations can be measured with reasonable accuracy with eye rotation relative to the instrument. Peripheral aberrations are affected by accommodation, myopia, age, orthokeratology, spherical intraocular lens implantation and laser in situ keratomileusis. These factors affect the magnitudes and patterns of most aberrations considerably (especially coma and spherical aberration) across the studied visual field. The changes in aberrations across the field may influence peripheral detection and motion perception. However, further research is required to investigate how the changes in aberrations influence peripheral detection and motion perception and consequently peripheral vision task performance.
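The elliptical appearance of the off-axis pupil mentioned above is, to first order, a projection effect; a minimal sketch of the cosine approximation (which ignores corneal refraction, so it slightly underestimates the true aspect ratio at large angles):

```python
import numpy as np


def apparent_pupil_axes(diameter_mm, field_angle_deg):
    # Viewed off-axis, a circular pupil projects to an ellipse: the axis
    # along the viewing meridian shrinks roughly as cos(angle), while the
    # perpendicular axis is unchanged.
    minor = diameter_mm * np.cos(np.radians(field_angle_deg))
    return diameter_mm, minor
```

For the ±34° field used in the study, a 5.0 mm pupil would appear about 5.0 × 4.15 mm under this approximation, which is why the analysis software must fit elliptical rather than circular pupils.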
Abstract:
Previous work has shown that amplitude and direction are two independently controlled parameters of aimed arm movements, and performance, therefore, suffers when they must be decomposed into Cartesian coordinates. We now compare decomposition into different coordinate systems. Subjects pointed at visual targets in 2-D with a cursor, using a two-axis joystick or two single-axis joysticks. In the latter case, joystick axes were aligned with the subjects’ body axes, were rotated by –45°, or were oblique (i.e., one axis was in an egocentric frame and the other was rotated by –45°). Cursor direction always corresponded to joystick direction. We found that compared with the two-axis joystick, responses with single-axis joysticks were slower and less accurate when the axes were oriented egocentrically; the deficit was even more pronounced when the axes were rotated and was most pronounced when they were oblique. This confirms that decomposition of motor commands is computationally demanding and documents that this demand is lowest for egocentric, higher for rotated, and highest for oblique coordinates. We conclude that most current vehicles use computationally demanding man–machine interfaces.
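The decomposition demanded of the subjects can be sketched as solving for single-axis commands in the frame spanned by the joystick axes; the axis angles below mirror the egocentric and rotated conditions, and the code is illustrative, not the experiment's software:

```python
import numpy as np


def axis_commands(target, axis_angles_deg):
    # Columns of B are unit vectors along the two joystick axes; solving
    # B @ c = target gives the command c needed on each single-axis joystick
    # to move the cursor by `target`.
    B = np.column_stack([
        (np.cos(np.radians(a)), np.sin(np.radians(a)))
        for a in axis_angles_deg
    ])
    return np.linalg.solve(B, np.asarray(target, float))
```

With egocentric axes (0° and 90°) the decomposition is a trivial split of the displacement into its Cartesian components; with rotated axes (−45° and 45°) each command mixes both components, which is the computational load the study measures behaviourally.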
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of information packing performance of several decompositions, two-dimensional power spectral density, effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, the lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters known as truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. 
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce bit requirements of necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. 
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
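The generalized Gaussian fit described above can be illustrated with a common moment-matching estimator for the shape parameter. This is a Mallat-style sketch, not the thesis's least-squares formulation: it solves E|x| / sqrt(E x^2) = Γ(2/ν) / sqrt(Γ(1/ν) Γ(3/ν)) for the shape ν by bisection (the ratio is increasing in ν).

```python
import numpy as np
from math import gamma, sqrt


def ggd_shape(x):
    # Moment-matching estimate of the generalized Gaussian shape parameter nu:
    # nu = 2 recovers the Gaussian, nu = 1 the Laplacian.
    x = np.asarray(x, dtype=float)
    r = np.mean(np.abs(x)) / np.sqrt(np.mean(x ** 2))  # sample moment ratio

    def f(nu):  # model ratio minus sample ratio; increasing in nu
        return gamma(2 / nu) / sqrt(gamma(1 / nu) * gamma(3 / nu)) - r

    lo, hi = 0.1, 10.0
    for _ in range(80):  # bisection on the monotone function f
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For highly peaked wavelet-coefficient histograms the estimate falls well below 1, which is what motivates non-Gaussian quantizer design in the thesis.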
Abstract:
Wireless Multimedia Sensor Networks (WMSNs) have become increasingly popular in recent years, driven in part by the increasing commoditization of small, low-cost CMOS sensors. As such, the challenge of automatically calibrating these types of camera nodes has become an important research problem, especially when a large number of these devices is deployed. This paper presents a method for automatically calibrating a wireless camera node with the ability to rotate around one axis. The method involves capturing images as the camera is rotated and computing the homographies between the images. The camera parameters, including focal length, principal point and the angle and axis of rotation, can then be recovered from two or more homographies. The homography computation algorithm is designed to deal with the limited resources of the wireless sensor and to minimize energy consumption. In this paper, a modified RANdom SAmple Consensus (RANSAC) algorithm is proposed to effectively increase the efficiency and reliability of the calibration procedure.
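The parameter recovery step rests on the pure-rotation homography model H ∝ K R K⁻¹ for a camera rotating about its optical centre; a minimal numpy sketch (the intrinsic matrix and angle below are made-up values for illustration):

```python
import numpy as np


def rotation_homography(K, theta):
    # Homography induced by a pure rotation (here about the camera y axis):
    # H = K R K^-1, defined only up to an overall scale factor.
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    return K @ R @ np.linalg.inv(K)


def recover_angle(K, H):
    # Undo the intrinsics, normalize scale so det(R) = 1, then read the
    # rotation angle from trace(R) = 1 + 2 cos(theta).
    R = np.linalg.inv(K) @ H @ K
    R = R / np.cbrt(np.linalg.det(R))
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
```

Given two or more such homographies with known rotation steps, the same relation can be inverted to solve for the unknown focal length and principal point, which is what the calibration method exploits.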
Abstract:
Cloninger’s psychobiological model of temperament and character is a general model of personality that has been widely used in clinical psychology, but has seldom been applied in other domains. In this research we apply Cloninger’s model to the study of leadership. Our study comprised 81 participants who took part in a diverse range of small group tasks. Participants rotated through tasks and groups and rated each other on “emergent leadership.” As hypothesized, leader emergence tended to be consistent regardless of the specific tasks and groups. It was found that personality factors from Cloninger, Svrakic, and Przybeck’s (1993) model could explain trait-based variance in emergent leadership. Results also highlight the role of “cooperativeness” in the prediction of leadership emergence. Implications are discussed in terms of our theoretical understanding of trait-based leadership, and more generally in terms of the utility of Cloninger’s model in leadership research.
Abstract:
The contributions of this thesis fall into three areas of certificateless cryptography. The first area is encryption, where we propose new constructions for both identity-based and certificateless cryptography. We construct an n-out-of-n group encryption scheme for identity-based cryptography that does not require any special means to generate the keys of the trusted authorities that are participating. We also introduce a new security definition for chosen ciphertext secure multi-key encryption. We prove that our construction is secure as long as at least one authority is uncompromised, and show that the existing constructions for chosen ciphertext security from identity-based encryption also hold in the group encryption case. We then consider certificateless encryption as the special case of 2-out-of-2 group encryption and give constructions for highly efficient certificateless schemes in the standard model. Among these is the first construction of a lattice-based certificateless encryption scheme. Our next contribution is a highly efficient certificateless key encapsulation mechanism (KEM) that we prove secure in the standard model. We introduce a new way of proving the security of certificateless schemes that are based on identity-based schemes. We leave the identity-based part of the proof intact and just extend it to cover the part that is introduced by the certificateless scheme. We show that our construction is more efficient than any instantiation of generic constructions for certificateless key encapsulation in the standard model. The third area where the thesis contributes to the advancement of certificateless cryptography is key agreement. Swanson showed that many certificateless key agreement schemes are insecure if considered in a reasonable security model. We propose the first provably secure certificateless key agreement schemes in the strongest model for certificateless key agreement. 
We extend Swanson's definition for certificateless key agreement and give more power to the adversary. Our new schemes are secure as long as each party has at least one uncompromised secret. Our first construction is in the random oracle model and gives the adversary slightly more capabilities than our second construction in the standard model. Interestingly, our standard model construction is as efficient as the random oracle model construction.
Abstract:
In the structure of the title hydrate salt, 2(CH6N3)+ · C8H2Cl2O4^2- · H2O, the planes of the carboxylate groups of the dianion are rotated out of the plane of the benzene ring [dihedral angles 48.42 (10)° and 55.64 (9)°]. A duplex-sheet structure is formed through guanidinium-carboxylate N-H...O, guanidinium-water N-H...O, and water-carboxylate O-H...O hydrogen-bonding associations.
Abstract:
Using six lattice types (4×4, 5×5, and 6×6 square lattices; a 3×3×3 cubic lattice; and 2+3+4+3+2 and 4+5+6+5+4 triangular lattices), three alphabet sizes (HP, HNUP, and 20 letters), and two energy functions, the designability of protein structures is calculated based on random sampling of structures and common biased sampling (CBS) of protein sequence space. Three quantities defined to elucidate the designability, namely stability (average energy gap), foldability, and the partnum of the structure, are then calculated. The authors find that whatever the lattice type, alphabet size, and energy function used, highly designable (preferred) structures emerge. For all cases considered, the local interactions reduce degeneracy and make the designability higher. The designability is sensitive to the lattice type, alphabet size, energy function, and the method used to sample sequence space. Compared with the random sampling method, both the CBS and the Metropolis Monte Carlo sampling methods make the designability higher. The correlation coefficients between the designability, stability, and foldability are mostly larger than 0.5, demonstrating a strong correlation between them. The correlation between the designability and the partnum is weaker because the partnum is independent of the energy. The results are useful for practical applications of the designability principle, such as predicting protein tertiary structure.
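The designability calculation can be illustrated on the smallest compact square lattice. The following is a toy re-implementation, not the authors' code: a 3×3 lattice, the HP alphabet, energy −1 per H-H contact, and exhaustive enumeration of sequences in place of the sampling methods the paper compares.

```python
import itertools


def hamiltonian_paths(n):
    # All directed self-avoiding chains visiting every site of an n x n lattice.
    sites = {(i, j) for i in range(n) for j in range(n)}
    paths = []

    def grow(path):
        if len(path) == n * n:
            paths.append(tuple(path))
            return
        i, j = path[-1]
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (i + di, j + dj)
            if nxt in sites and nxt not in path:
                grow(path + [nxt])

    for s in sites:
        grow([s])
    return paths


def contact_map(path):
    # Contacts: lattice-adjacent, non-consecutive chain positions.
    pos = {p: t for t, p in enumerate(path)}
    contacts = set()
    for (i, j), t in pos.items():
        for di, dj in ((1, 0), (0, 1)):
            u = pos.get((i + di, j + dj))
            if u is not None and abs(u - t) > 1:
                contacts.add((min(t, u), max(t, u)))
    return frozenset(contacts)


def designability(n):
    # Designability of a structure: the number of HP sequences whose unique
    # lowest-energy structure (E = -1 per H-H contact) is that structure.
    maps = {contact_map(p) for p in hamiltonian_paths(n)}
    design = {m: 0 for m in maps}
    for seq in itertools.product('HP', repeat=n * n):
        energies = {m: -sum(seq[a] == 'H' and seq[b] == 'H' for a, b in m)
                    for m in maps}
        best = min(energies.values())
        winners = [m for m, e in energies.items() if e == best]
        if len(winners) == 1:  # require a unique ground state
            design[winners[0]] += 1
    return design
```

Structures are identified by their contact maps, so spatially symmetric conformations collapse to one structure automatically; replacing the exhaustive sequence loop with random or biased draws gives the sampling variants discussed in the abstract.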
Abstract:
In the structure of the 1:1 proton-transfer compound of brucine with 2-(2,4,6-trinitroanilino)benzoic acid, C23H27N2O4+ · C13H7N4O8- · H2O, the brucinium cations form the classic undulating ribbon substructures through overlapping head-to-tail interactions while the anions and the three related partial water molecules of solvation (having occupancies of 0.73, 0.17 and 0.10) occupy the interstitial regions of the structure. The cations are linked to the anions directly through N-H...O(carboxyl) hydrogen bonds and indirectly by the three water molecules, which form similar conjoint cyclic bridging units [graph set R2/4(8)] through O-H...O(carbonyl) and O-H...O(carboxyl) hydrogen bonds, giving a two-dimensional layered structure. Within the anion, intramolecular N-H...O(carboxyl) and N-H...O(nitro) hydrogen bonds result in the benzoate and picrate rings being rotated slightly out of coplanarity [inter-ring dihedral angle 32.50 (14)°]. This work provides another example of the molecular selectivity of brucine in forming stable crystal structures and also represents the first reported structure of any form of the guest compound 2-(2,4,6-trinitroanilino)benzoic acid.
Abstract:
Features derived from the trispectra of DFT magnitude slices are used for multi-font digit recognition. These features are insensitive to translation, rotation, or scaling of the input. They are also robust to noise. Classification accuracy tests were conducted on a common database of 256 × 256 pixel bilevel images of digits in 9 fonts. Randomly rotated and translated noisy versions were used for training and testing. The results indicate that the trispectral features are better than moment invariants and affine moment invariants. They achieve a classification accuracy of 95% compared to about 81% for Hu's (1962) moment invariants and 39% for the Flusser and Suk (1994) affine moment invariants on the same data in the presence of 1% impulse noise using a 1-NN classifier. For comparison, a multilayer perceptron with no normalization for rotations and translations yields 34% accuracy on 16 × 16 pixel low-pass filtered and decimated versions of the same data.
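Hu's seven moment invariants, used above as a baseline, are straightforward to compute from scale-normalized central moments; a minimal sketch:

```python
import numpy as np


def hu_moments(img):
    # Raw moments m_pq = sum_x,y x^p y^q I(x, y)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m = lambda p, q: float((x ** p * y ** q * img).sum())
    m00 = m(0, 0)
    xc, yc = m(1, 0) / m00, m(0, 1) / m00
    # Central moments (translation invariant), then scale-normalized eta_pq
    mu = lambda p, q: float(((x - xc) ** p * (y - yc) ** q * img).sum())
    eta = lambda p, q: mu(p, q) / m00 ** (1 + (p + q) / 2)
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    # Hu's seven rotation invariants
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    h5 = ((n30 - 3 * n12) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    h6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3 * n21 - n03) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])
```

Because the invariants depend only on central moments and rotation-invariant combinations, translating an image or rotating it by 90° leaves them unchanged up to floating-point error, which is exactly the property the paper's comparison exercises.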
Abstract:
In this paper, we describe an analysis for data collected on a three-dimensional spatial lattice with treatments applied at the horizontal lattice points. Spatial correlation is accounted for using a conditional autoregressive model. Observations are defined as neighbours only if they are at the same depth. This allows the corresponding variance components to vary by depth. We use the Markov chain Monte Carlo method with block updating, together with Krylov subspace methods, for efficient estimation of the model. The method is applicable to both regular and irregular horizontal lattices and hence to data collected at any set of horizontal sites for a set of depths or heights, for example, water column or soil profile data. The model for the three-dimensional data is applied to agricultural trial data for five separate days taken roughly six months apart in order to determine possible relationships over time. The purpose of the trial is to determine a form of cropping that leads to less moist soils in the root zone and beyond. We estimate moisture for each date, depth and treatment accounting for spatial correlation and determine relationships of these and other parameters over time.
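The depth-layered neighbourhood structure can be sketched as a CAR precision matrix in which the variance component varies by depth. This is a dense toy version for a regular lattice; the paper's estimation works with the corresponding sparse systems via MCMC with block updating and Krylov subspace solvers.

```python
import numpy as np


def car_precision(nx, ny, nz, rho, tau):
    # Precision matrix Q = diag(tau_k) (D - rho W) for a conditional
    # autoregressive model where sites are neighbours only within the same
    # depth layer k, so the precision tau (one value per depth) varies by depth.
    n = nx * ny * nz
    idx = lambda i, j, k: (k * ny + j) * nx + i  # depth-major site ordering
    W = np.zeros((n, n))
    for k in range(nz):
        for j in range(ny):
            for i in range(nx):
                a = idx(i, j, k)
                if i + 1 < nx:  # horizontal neighbour within the layer
                    b = idx(i + 1, j, k); W[a, b] = W[b, a] = 1.0
                if j + 1 < ny:
                    b = idx(i, j + 1, k); W[a, b] = W[b, a] = 1.0
    D = np.diag(W.sum(axis=1))
    T = np.repeat(np.asarray(tau, float), nx * ny)  # depth-specific precisions
    return T[:, None] * (D - rho * W)  # equals diag(T) @ (D - rho W)
```

Because layers are disconnected and tau is constant within each layer, Q stays symmetric, and for |rho| < 1 it is strictly diagonally dominant and hence positive definite.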
Abstract:
Discrete Markov random field models provide a natural framework for representing images or spatial datasets. They model the spatial association present while providing a convenient Markovian dependency structure and strong edge-preservation properties. However, parameter estimation for discrete Markov random field models is difficult due to the complex form of the associated normalizing constant for the likelihood function. For large lattices, the reduced dependence approximation to the normalizing constant is based on the concept of performing computationally efficient and feasible forward recursions on smaller sublattices which are then suitably combined to estimate the constant for the whole lattice. We present an efficient computational extension of the forward recursion approach for the autologistic model to lattices that have an irregularly shaped boundary and which may contain regions with no data; these lattices are typical in applications. Consequently, we also extend the reduced dependence approximation to these scenarios enabling us to implement a practical and efficient non-simulation based approach for spatial data analysis within the variational Bayesian framework. The methodology is illustrated through application to simulated data and example images. The supplemental materials include our C++ source code for computing the approximate normalizing constant and simulation studies.
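The forward recursion that underlies the reduced dependence approximation can be shown exactly on a small regular lattice: the normalizing constant is accumulated row by row, with one state per full row configuration. The sketch below uses the autologistic model with field α and association β (the paper's contribution is extending such recursions, suitably combined over sublattices, to irregular boundaries and missing regions).

```python
import itertools
import numpy as np


def log_z_forward(nr, nc, alpha, beta):
    # Forward recursion over rows: the state space is the 2^nc configurations
    # of one row; each step absorbs one more row into the log-partial sums f.
    rows = list(itertools.product((0, 1), repeat=nc))

    def within(r):   # log-potential of a single row (field + horizontal pairs)
        return alpha * sum(r) + beta * sum(r[i] * r[i + 1] for i in range(nc - 1))

    def between(r, s):  # vertical interaction between consecutive rows
        return beta * sum(a * b for a, b in zip(r, s))

    f = np.array([within(r) for r in rows])
    for _ in range(nr - 1):
        f = np.array([
            within(s) + np.logaddexp.reduce([f[i] + between(rows[i], s)
                                             for i in range(len(rows))])
            for s in rows
        ])
    return float(np.logaddexp.reduce(f))


def log_z_brute(nr, nc, alpha, beta):
    # Direct enumeration over all 2^(nr*nc) configurations, for checking.
    logs = []
    for cfg in itertools.product((0, 1), repeat=nr * nc):
        g = np.array(cfg).reshape(nr, nc)
        e = (alpha * g.sum()
             + beta * ((g[:, 1:] * g[:, :-1]).sum() + (g[1:, :] * g[:-1, :]).sum()))
        logs.append(e)
    return float(np.logaddexp.reduce(np.array(logs, float)))
```

The recursion costs O(nr · 4^nc) instead of O(2^(nr·nc)), which is why splitting a large lattice into narrow sublattices, as in the reduced dependence approximation, makes the computation feasible.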