903 results for Gaussian curvature


Relevance:

10.00%

Publisher:

Abstract:

Density functional theory (DFT) is a powerful approach to electronic structure calculations in extended systems, but it currently suffers from inadequate incorporation of long-range dispersion, or van der Waals (VdW), interactions. VdW-corrected DFT is tested for interactions involving molecular hydrogen, graphite, single-walled carbon nanotubes (SWCNTs), and SWCNT bundles. The energy correction, based on an empirical London dispersion term with a damping function at short range, yields a reasonable physisorption energy and equilibrium distance for H2 on a model graphite surface. The VdW-corrected DFT calculation for an (8, 8) nanotube bundle accurately reproduces the experimental lattice constant. For H2 inside or outside an (8, 8) SWCNT, we find binding energies that are respectively higher and lower than that on a graphite surface, correctly predicting the well-known curvature effect. We conclude that the VdW correction is a very effective complement to DFT calculations, allowing a reliable description of both short-range chemical bonding and long-range dispersive interactions. The method will find powerful applications in areas of SWCNT research where empirical potential functions either have not been developed or do not capture the necessary range of both dispersion and bonding interactions.
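For reference, an empirical London dispersion correction of this kind typically takes the following form, written here in the generic DFT-D convention; the C6 coefficients, cutoff radii and damping steepness are placeholders, not the specific parameterisation used in this work:

```latex
E_{\mathrm{total}} = E_{\mathrm{DFT}} + E_{\mathrm{disp}}, \qquad
E_{\mathrm{disp}} = -\sum_{i<j} f_{\mathrm{damp}}(R_{ij})\,\frac{C_6^{ij}}{R_{ij}^{6}}, \qquad
f_{\mathrm{damp}}(R_{ij}) = \left[\,1 + e^{-d\,(R_{ij}/R_{ij}^{0} - 1)}\right]^{-1}
```

The damping function switches the -C6/R^6 term off at short range, where DFT already describes the interaction, and leaves it intact at long range.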

Relevance:

10.00%

Publisher:

Abstract:

We consider a model for thin film flow down the outside and inside of a vertical cylinder. Our focus is to study the effect that the curvature of the cylinder has on the gravity-driven instability of the advancing contact line and to simulate the resulting fingering patterns that form due to this instability. The governing partial differential equation is fourth order, with a nonlinear degenerate diffusion term that represents the stabilising effect of surface tension. We present numerical solutions obtained by implementing an efficient alternating direction implicit scheme. When compared to the problem of flow down a vertical plane, we find that increasing substrate curvature tends to increase the fingering instability for flow down the outside of the cylinder, whereas for flow down the inside of the cylinder, substrate curvature has the opposite effect. Further, we demonstrate the existence of nontrivial travelling wave solutions which describe fingering patterns that propagate down the inside of a cylinder at constant speed without changing form. These solutions are perfectly analogous to those found previously for thin film flow down an inclined plane.
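For orientation, the planar analogue of such a lubrication model (nondimensionalised flow down a vertical plane; the cylindrical problem studied here adds substrate-curvature terms to this) has the familiar form

```latex
\frac{\partial h}{\partial t} + \frac{\partial}{\partial x}\!\left(h^{3}\right) + \nabla\cdot\!\left(h^{3}\,\nabla\nabla^{2}h\right) = 0,
```

where h(x, y, t) is the film thickness, the second term is gravity-driven advection down the substrate, and the fourth-order term is the stabilising surface-tension (degenerate diffusion) contribution.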

Relevance:

10.00%

Publisher:

Abstract:

In recent years face recognition systems have been applied in various useful applications, such as surveillance, access control, criminal investigations, law enforcement, and others. However, face biometric systems can be highly vulnerable to spoofing attacks, where an impostor tries to bypass the face recognition system using a photo or video sequence. In this paper a novel liveness detection method, based on the 3D structure of the face, is proposed. By processing the 3D curvature of the acquired data, the proposed approach allows a biometric system to distinguish a real face from a photo, increasing the overall performance of the system and reducing its vulnerability. To test the real capability of the methodology, a 3D face database was collected in which spoofing attacks were simulated using photographs instead of real faces. The experimental results show the effectiveness of the proposed approach.
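As a rough illustration of the idea (not the authors' implementation; the curvature estimator and the decision threshold below are assumptions), a depth map of a printed photograph is nearly planar, so a simple curvature statistic already separates it from a genuinely 3D face:

```python
import numpy as np

def mean_curvature(depth):
    """Approximate mean curvature of a depth map z(x, y) via finite differences.

    A printed photo is nearly planar, so its curvature stays close to zero,
    while a real face shows pronounced curvature around the nose and cheeks.
    Illustrative sketch only, not the method proposed in the paper.
    """
    zy, zx = np.gradient(depth)        # gradients along rows, then columns
    zxy, zxx = np.gradient(zx)
    zyy, _ = np.gradient(zy)
    num = (1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx
    den = 2 * (1 + zx**2 + zy**2) ** 1.5
    return num / den

def is_live(depth, threshold=0.01):
    # Hypothetical decision rule: enough curvature variation -> real 3D face.
    return np.std(mean_curvature(depth)) > threshold
```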

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: We aimed to determine the prevalence and associations of refractive error on Norfolk Island. DESIGN: Population-based study on Norfolk Island, South Pacific. PARTICIPANTS: All permanent residents of Norfolk Island aged ≥ 15 years were invited to participate. METHODS: Patients underwent non-cycloplegic autorefraction, slit-lamp biomicroscope examination and biometry assessment. Only phakic eyes were analysed. MAIN OUTCOME MEASURES: Prevalence and multivariate associations of refractive error and myopia. RESULTS: A total of 677 people (645 right phakic eyes, 648 left phakic eyes) aged ≥ 15 years were included in this study. Mean age of participants was 51.1 years (standard deviation 15.7; range 15-81). Three hundred and seventy-six people (55.5%) were female. Adjusted to the 2006 Norfolk Island population, prevalence estimates of refractive error were as follows: myopia (mean spherical equivalent ≤ -1.0 D) 10.1%, hypermetropia (mean spherical equivalent ≥ 1.0 D) 36.6%, and astigmatism 17.7%. Significant independent predictors of myopia in the multivariate model were lower age (P < 0.001), longer axial length (P < 0.001), shallower anterior chamber depth (P = 0.031) and increased corneal curvature (P < 0.001). Significant independent predictors of refractive error were increasing age (P < 0.001), male gender (P = 0.009), Pitcairn ancestry (P = 0.041), cataract (P < 0.001), longer axial length (P < 0.001) and decreased corneal curvature (P < 0.001). CONCLUSIONS: The prevalence of myopia on Norfolk Island is lower than on mainland Australia, and the Norfolk Island population demonstrates ethnic differences in the prevalence estimates. Given the significant associations between refractive error and several ocular biometry characteristics, Norfolk Island may be a useful population in which to investigate the genetic basis of refractive error.

Relevance:

10.00%

Publisher:

Abstract:

This paper develops analytical distributions of temperature indices on which temperature derivatives are written. If the deviations of daily temperatures from their expected values are modelled as an Ornstein-Uhlenbeck process with time-varying variance, then the distribution of the temperature index on which the derivative is written is the sum of truncated, correlated Gaussian deviates. The key result of this paper is to provide an analytical approximation to the distribution of this sum, thus allowing the accurate computation of payoffs without the need for any simulation. A data set comprising average daily temperatures spanning more than a hundred years for four Australian cities is used to demonstrate the efficacy of this approach for estimating the payoffs to temperature derivatives. It is demonstrated that expected payoffs computed directly from historical records are a particularly poor approach to the problem when there are trends in the underlying average daily temperature. It is shown that the proposed analytical approach is superior to historical pricing.
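In generic notation (the symbols below are the standard ones, not necessarily the paper's), the deviation of daily temperature from its expected value follows

```latex
dX_t = -\kappa\, X_t\,dt + \sigma(t)\,dW_t ,
```

and an index such as the seasonal accumulation of heating degree days is then a sum of truncated, correlated Gaussian variables,

```latex
\mathrm{HDD} = \sum_{i=1}^{n} \max\!\left(T_{\mathrm{ref}} - T_{i},\; 0\right),
```

which is the kind of sum whose distribution is approximated analytically here.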

Relevance:

10.00%

Publisher:

Abstract:

A significant amount of speech data is required to develop a robust speaker verification system, but it is difficult to find enough development speech to match all expected conditions. In this paper we introduce a new approach to Gaussian probabilistic linear discriminant analysis (GPLDA) that estimates reliable model parameters as a linearly weighted combination, taking a larger contribution from the large volume of available telephone data and a proportionally smaller contribution from the limited microphone data. In comparison to a traditional pooled training approach, where the GPLDA model is trained over both telephone and microphone speech, this linear-weighted GPLDA approach is shown to provide better EER and DCF performance in microphone and mixed conditions in both the NIST 2008 and NIST 2010 evaluation corpora. Based upon these results, we believe that linear-weighted GPLDA provides a better approach than pooled GPLDA, allowing for the further improvement of GPLDA speaker verification in conditions with limited development data.
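A minimal sketch of the weighting idea follows; the parameter names and the weight value are illustrative assumptions, not the paper's exact GPLDA estimation procedure:

```python
import numpy as np

def linear_weighted_covariance(S_tel, S_mic, alpha=0.7):
    """Linearly weight model statistics estimated from a large telephone
    corpus (S_tel) and a small microphone corpus (S_mic).

    alpha is a hypothetical weight giving more influence to the
    abundant telephone data; schematic only, not the paper's estimator.
    """
    return alpha * np.asarray(S_tel) + (1.0 - alpha) * np.asarray(S_mic)
```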

Relevance:

10.00%

Publisher:

Abstract:

Automated crowd counting has become an active field of computer vision research in recent years. Existing approaches are scene-specific, as they are designed to operate in the single camera viewpoint that was used to train the system. Real world camera networks often span multiple viewpoints within a facility, including many regions of overlap. This paper proposes a novel scene invariant crowd counting algorithm that is designed to operate across multiple cameras. The approach uses camera calibration to normalise features between viewpoints and to compensate for regions of overlap. This compensation is performed by constructing an 'overlap map' which provides a measure of how much an object at one location is visible within other viewpoints. An investigation into the suitability of various feature types and regression models for scene invariant crowd counting is also conducted. The features investigated include object size, shape, edges and keypoints. The regression models evaluated include neural networks, K-nearest neighbours, linear regression and Gaussian process regression. Our experiments demonstrate that accurate crowd counting was achieved across seven benchmark datasets, with optimal performance observed when all features were used and when Gaussian process regression was used. The combination of scene invariance and multi-camera crowd counting is evaluated by training the system on footage obtained from the QUT camera network and testing it on three cameras from the PETS 2009 database. Highly accurate crowd counting was observed, with a mean relative error of less than 10%. Our approach enables a pre-trained system to be deployed in a new environment without any additional training, bringing the field one step closer toward a 'plug and play' system.
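A minimal sketch of the regression stage, assuming scene-normalised features have already been extracted (the feature dimensions, kernel and placeholder data below are assumptions, not the paper's configuration):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Placeholder training data: calibration-normalised features per frame
# (e.g. foreground area, edge count, keypoint count) and ground-truth counts.
X_train = np.random.rand(200, 3)
y_train = 50 * X_train[:, 0] + np.random.randn(200)

# Gaussian process regression from features to crowd count.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)

X_new = np.random.rand(5, 3)
counts, counts_std = gp.predict(X_new, return_std=True)  # count and uncertainty
```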

Relevance:

10.00%

Publisher:

Abstract:

Our results demonstrate that photorefractive residual amplitude modulation (RAM) noise in electro-optic modulators (EOMs) can be reduced by modifying the incident beam intensity distribution. Here we report an order of magnitude reduction in RAM when beams with uniform intensity (flat-top) profiles, generated with an LCOS-SLM, are used instead of the usual fundamental Gaussian mode (TEM00). RAM arises from the photorefractive amplified scatter noise off the defects and impurities within the crystal. A reduction in RAM is observed with increasing intensity uniformity (flatness), which is attributed to a reduction in space charge field on the beam axis. The level of RAM reduction that can be achieved is physically limited by clipping at EOM apertures, with the observed results agreeing well with a simple model. These results are particularly important in applications where the reduction of residual amplitude modulation to 10^-6 is essential.

Relevance:

10.00%

Publisher:

Abstract:

In many bridges, vertical displacements are one of the most relevant parameters for structural health monitoring in both the short and long terms. Bridge managers around the globe are always looking for a simple way to measure the vertical displacements of bridges, but such measurements are difficult to carry out. In recent years, with the advancement of fibre-optic technologies, fibre Bragg grating (FBG) sensors have become more commonly used in structural health monitoring owing to their outstanding advantages, including multiplexing capability, immunity to electromagnetic interference, and high resolution and accuracy. For these reasons, a methodology for measuring the vertical displacements of bridges using FBG sensors is proposed. The methodology includes two approaches: one is based on curvature measurements, while the other utilises inclination measurements from FBG tilt sensors developed for this purpose. A series of simulation tests of a full-scale bridge was conducted, showing that both approaches can be implemented to measure the vertical displacements of bridges with various support conditions, with varying stiffness along the spans, and without any prior knowledge of the loading. A static beam test with increasing loads at the mid-span and a beam test with different loading locations were conducted to measure vertical displacements using FBG strain sensors and tilt sensors. The results show that both approaches can successfully measure vertical displacements.
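As an illustrative sketch of the curvature-based route (not the paper's exact algorithm), the deflection of a simply supported span can be recovered from measured curvatures by double numerical integration of v''(x) ≈ κ(x) with zero deflection enforced at both supports:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def deflection_from_curvature(x, kappa):
    """Recover vertical deflection v(x) of a simply supported span from
    curvatures derived from FBG strain readings. Illustrative only.

    x     : sensor positions along the span (m)
    kappa : measured curvatures at those positions (1/m)
    """
    slope = cumulative_trapezoid(kappa, x, initial=0.0)   # v'(x) up to a constant
    v = cumulative_trapezoid(slope, x, initial=0.0)       # v(x) up to a linear term
    # Enforce v = 0 at both supports by removing the linear term.
    c1 = -v[-1] / (x[-1] - x[0])
    return v + c1 * (x - x[0])
```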

Relevance:

10.00%

Publisher:

Abstract:

We read with interest the article entitled ‘Population spherical aberration: associations with ametropia, age, corneal curvature, and image quality’ by Amanda C Kingston and Ian G Cox (2013). The authors provided higher order aberrations data for a sample of 1124 eyes and performed correlation analyses to compare higher order aberrations with refraction and biometry data, such as spherical equivalent power and corneal curvature. Special attention was drawn to spherical aberration...

Relevance:

10.00%

Publisher:

Abstract:

Discretization of a geographical region is quite common in spatial analysis. There have been few studies into the impact of different geographical scales on the outcomes of spatial models for different spatial patterns. This study aims to investigate the impact of spatial scales and spatial smoothing on the outcomes of modelling spatial point-based data. Given a spatial point-based dataset (such as occurrences of a disease), we study the geographical variation of residual disease risk using regular grid cells. The individual disease risk is modelled using a logistic model with the inclusion of spatially unstructured and/or spatially structured random effects. Three spatial smoothness priors for the spatially structured component are employed, namely an intrinsic Gaussian Markov random field, a second-order random walk on a lattice, and a Gaussian field with a Matérn correlation function. We investigate how changes in grid cell size affect model outcomes under different spatial structures and different smoothness priors for the spatial component. A realistic example (the Humberside data) is analysed and a simulation study is described. Bayesian computation is carried out using integrated nested Laplace approximation. The results suggest that the performance and predictive capacity of the spatial models improve as the grid cell size decreases for certain spatial structures. It also appears that different spatial smoothness priors should be applied for different patterns of point data.
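In generic notation (not necessarily the paper's), the individual-level model has the form

```latex
y_i \sim \mathrm{Bernoulli}(p_i), \qquad
\operatorname{logit} p_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + u_i + s_{c(i)},
```

where u_i is the spatially unstructured (exchangeable Gaussian) random effect and s_{c(i)} is the spatially structured effect of the grid cell containing point i, assigned one of the three smoothness priors above (intrinsic GMRF, second-order random walk, or Matérn Gaussian field).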

Relevance:

10.00%

Publisher:

Abstract:

The huge amount of CCTV footage available makes it very burdensome to process these videos manually through human operators, making automated processing of video footage through computer vision technologies necessary. During the past several years, there has been a large effort to detect abnormal activities through computer vision techniques. Typically, the problem is formulated as a novelty detection task where the system is trained on normal data and is required to detect events which do not fit the learned 'normal' model. There is no precise and exact definition of an abnormal activity; it depends on the context of the scene. Hence there is a requirement for different feature sets to detect different kinds of abnormal activities. In this work we evaluate the performance of different state-of-the-art features for detecting the presence of abnormal objects in the scene. These include optical flow vectors to detect motion-related anomalies, and textures of optical flow and image textures to detect the presence of abnormal objects. These extracted features, in different combinations, are modelled using state-of-the-art models such as the Gaussian mixture model (GMM) and the semi-2D hidden Markov model (HMM) to analyse their performance. Further, we apply perspective normalisation to the extracted features to compensate for perspective distortion due to the distance between the camera and the objects under consideration. The proposed approach is evaluated using the publicly available UCSD datasets, and we demonstrate improved performance compared to other state-of-the-art methods.
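A minimal sketch of the GMM-based novelty-detection stage, under the assumption that feature vectors have already been extracted and perspective-normalised (the feature dimension, number of components and threshold below are placeholders, not the paper's settings):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder features from normal footage (e.g. optical-flow and texture descriptors).
normal_features = np.random.rand(5000, 16)

# Fit a GMM to the 'normal' data only.
gmm = GaussianMixture(n_components=8, covariance_type='diag').fit(normal_features)

# Flag test samples whose log-likelihood under the normal model is low.
test_features = np.random.rand(100, 16)
log_lik = gmm.score_samples(test_features)
threshold = np.percentile(gmm.score_samples(normal_features), 1)
is_abnormal = log_lik < threshold
```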

Relevance:

10.00%

Publisher:

Abstract:

Osteochondral grafts are a common treatment option for focal joint defects due to their excellent functionality. However, the difficulty lies in matching the topography of the host and graft surfaces flush to one another. Incongruence could lead to disintegration, particularly when the gap reaches the subchondral region. The aim of this study is therefore to investigate the cell response to gap geometry when forming a cartilage-cartilage bridge at the interface. The question is what the characteristics of such a gap would be if the cells could bridge across it to fuse the edges. To answer this, osteochondral plugs devoid of host cells were prepared through enzymatic decellularization, and artificial clefts of different sizes were created on the cartilage surface using laser ablation. High-density pellets of heterologous chondrocytes were seeded on the defects and cultured with chondrogenic differentiation media for 35 days. The results showed that the behavior of chondrocytes was a function of gap topography. Depending on the distance between the edges, two types of responses were generated. Resident cells surrounding distant edges demonstrated superficial attachment to one side, whereas clefts of 150 to 250 µm width experienced cell migration and anchorage across the interface. The infiltration of chondrocytes into the gaps provided extra space for their proliferation and matrix deposition; as a result, faster filling of the initial void space was observed. On the other hand, distant and tightly fitting edges created an incomplete healing response due to the limited ability of differentiated chondrocytes to migrate and incorporate within the interface. It seems that the initial condition of the defects and the curvature profile of the adjacent edges were the prime determinants of the quality of repair; however, further studies to reveal the underlying mechanisms of cells adapting to and modifying the new environment would be of particular interest.

Relevance:

10.00%

Publisher:

Abstract:

An important aspect of robotic path planning is ensuring that the vehicle is in the best location to collect the data necessary for the problem at hand. Given that features of interest are dynamic and move with oceanic currents, vehicle speed is an important factor in any planning exercise to ensure vehicles are at the right place at the right time. Here, we examine different Gaussian process models to find a suitable predictive kinematic model that enables the speed of an underactuated, autonomous surface vehicle to be accurately predicted given a set of input environmental parameters.
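As a rough sketch of one such model (the environmental inputs, kernel choice and placeholder data are assumptions, not the specific models compared in the paper), a Gaussian process regression of observed vehicle speed on environmental parameters might look like:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

# Placeholder data: hypothetical environmental inputs (e.g. wind speed,
# wind direction, surface-current speed) and measured vehicle speeds (m/s).
X = np.random.rand(300, 3)
y = 1.5 - 0.5 * X[:, 0] + 0.05 * np.random.randn(300)

# GP kinematic model: predicts achievable speed with an uncertainty estimate.
gp = GaussianProcessRegressor(kernel=Matern(nu=1.5) + WhiteKernel(), normalize_y=True)
gp.fit(X, y)
speed_pred, speed_std = gp.predict(np.random.rand(10, 3), return_std=True)
```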