865 results for grid points
Abstract:
We present and analyze several gaze-based graphical password schemes based on recall and cued-recall of grid points; eye trackers are used to record users' gazes, which can prevent shoulder-surfing and may be suitable for users with disabilities. Our 22-subject study observes that success rate and entry time for the grid-based schemes we consider are comparable to other gaze-based graphical password schemes. We propose the first password security metrics suitable for analysis of graphical grid passwords and provide an in-depth security analysis of user-generated passwords from our study, observing that, on several metrics, user-generated graphical grid passwords are substantially weaker than uniformly random passwords, despite our attempts at designing schemes to improve the quality of user-generated passwords.
Abstract:
In the Sparse Point Representation (SPR) method, the principle is to retain the function data indicated by significant interpolatory wavelet coefficients, which are defined as interpolation errors by means of an interpolating subdivision scheme. Typically, an SPR grid is coarse in smooth regions and refined close to irregularities. Furthermore, the computation of partial derivatives of a function from its SPR content is performed in two steps. The first is a refinement procedure that extends the SPR by including new interpolated point values in a security zone. Then, for points in the refined grid, such derivatives are approximated by uniform finite differences, using a step size proportional to each point's local scale. If required neighboring stencils are not present in the grid, the corresponding missing point values are approximated from coarser scales using the interpolating subdivision scheme. Using the cubic interpolating subdivision scheme, we demonstrate that such adaptive finite differences can be formulated as a collocation scheme based on the wavelet expansion associated with the SPR. For this purpose, we prove some results concerning the local behavior of such wavelet reconstruction operators, which hold for SPR grids having appropriate structures. This statement implies that the adaptive finite difference scheme and the one using the step size of the finest level produce the same result at SPR grid points. Consequently, in addition to the refinement strategy, our analysis indicates that some care must be taken concerning the grid structure in order to keep the truncation error under a certain accuracy limit. Illustrative results are presented for numerical solutions of the 2D Maxwell equations.
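The prediction step of the cubic interpolating subdivision scheme, and the definition of interpolatory wavelet coefficients as interpolation errors, can be sketched as follows. This is a minimal Python illustration; the test function, grid size, and indices are hypothetical and not taken from the paper.

```python
import math

def dd4_predict(f_coarse, j):
    """Predict the midpoint between coarse samples j and j+1 with the cubic
    (4-point Deslauriers-Dubuc) interpolating subdivision scheme."""
    return (-f_coarse[j - 1] + 9 * f_coarse[j] + 9 * f_coarse[j + 1] - f_coarse[j + 2]) / 16.0

def detail_coefficients(f):
    """Interpolatory wavelet coefficients as interpolation errors: the actual
    fine-grid value minus the value predicted from the coarse grid."""
    coarse = f[::2]              # even-indexed samples form the coarse grid
    details = []
    for j in range(1, len(coarse) - 2):
        predicted = dd4_predict(coarse, j)
        actual = f[2 * j + 1]    # odd-indexed (fine-grid) sample being predicted
        details.append(actual - predicted)
    return details

# samples of a smooth function on a uniform dyadic grid
n = 65
xs = [i / (n - 1) for i in range(n)]
f = [math.sin(2 * math.pi * x) for x in xs]
d = detail_coefficients(f)
# details are tiny where the function is smooth, so SPR drops those points
print(max(abs(v) for v in d))
```

In smooth regions every detail coefficient falls below a small threshold, which is exactly why an SPR grid stays coarse there and refines only near irregularities.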
Abstract:
AMS subject classification: 90B80.
A Modified inverse integer Cholesky decorrelation method and the performance on ambiguity resolution
Abstract:
One of the research focuses in the integer least squares problem is the decorrelation technique, used to reduce the number of integer parameter search candidates and improve the efficiency of the integer parameter search method. It remains a challenging issue for determining carrier phase ambiguities and plays a critical role in the future of GNSS high-precision positioning. Currently, three main decorrelation techniques are employed: the integer Gaussian decorrelation, the Lenstra–Lenstra–Lovász (LLL) algorithm and the inverse integer Cholesky decorrelation (IICD) method. Although the performance of these three state-of-the-art methods has been proven and demonstrated, there is still potential for further improvement. To measure the performance of decorrelation techniques, the condition number is usually used as the criterion. Additionally, the number of grid points in the search space can be used directly as a performance measure, as it denotes the size of the search space. However, a smaller initial volume of the search ellipsoid does not always imply a smaller number of candidates. This research proposes a modified inverse integer Cholesky decorrelation (MIICD) method which improves the decorrelation performance over the other three techniques. The decorrelation performance of these methods was evaluated based on the condition number of the decorrelation matrix, the number of search candidates and the initial volume of the search space. Additionally, the success rate of decorrelated ambiguities was calculated for all the different methods to investigate the performance of ambiguity validation. The performance of the different decorrelation methods was tested and compared using both simulated and real data. The simulation scenarios employ the isotropic probabilistic model with a predetermined eigenvalue and without any geometry or weighting system constraints.
The MIICD method outperformed the other three methods, with conditioning improvements over the LAMBDA method of 78.33% and 81.67% without and with the eigenvalue constraint, respectively. The real data experiment scenarios involve both a single-constellation case and a dual-constellation case. Experimental results demonstrate that, compared with LAMBDA, the MIICD method can significantly reduce the condition number, by 78.65% and 97.78% in the single-constellation and dual-constellation cases respectively. It also shows improvements in the number of search candidate points of 98.92% and 100% in the single-constellation and dual-constellation cases.
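The condition-number criterion used above can be illustrated with a single integer Gaussian decorrelation step on a 2×2 ambiguity covariance matrix. This is a minimal Python sketch with a hypothetical covariance matrix; it is not the paper's MIICD algorithm, only the general idea that an integer (unimodular) transform reduces correlation and hence the condition number.

```python
import numpy as np

def gauss_decorrelate_2d(Q):
    """One integer Gaussian decorrelation step for a 2x2 ambiguity covariance:
    an integer (unimodular) transform Z reduces the off-diagonal correlation
    while preserving the integer nature of the ambiguities."""
    mu = round(Q[0, 1] / Q[1, 1])        # integer approximation of the correlation
    Z = np.array([[1.0, -mu], [0.0, 1.0]])
    return Z, Z @ Q @ Z.T

def cond(Q):
    """Condition number of a symmetric positive definite matrix."""
    w = np.linalg.eigvalsh(Q)
    return w[-1] / w[0]

# hypothetical, strongly correlated ambiguity covariance matrix
Q = np.array([[6.290, 5.978],
              [5.978, 6.292]])
Z, Qz = gauss_decorrelate_2d(Q)
print(cond(Q), cond(Qz))                 # the condition number drops markedly
```

Because Z has determinant 1, the search-space volume is preserved while the search ellipsoid becomes much closer to spherical, which is what shrinks the number of candidate grid points.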
Abstract:
The success rate of carrier phase ambiguity resolution (AR) is the probability that the ambiguities are successfully fixed to their correct integer values. In existing works, an exact success rate formula for the integer bootstrapping estimator has been used as a sharp lower bound for the integer least squares (ILS) success rate. Rigorous computation of the success rate for the more general ILS solutions has been considered difficult because of the complexity of the ILS ambiguity pull-in region and the computational load of integrating the multivariate probability density function. The contributions of this work are twofold. First, the pull-in region, mathematically expressed as the vertices of a polyhedron, is represented by a multi-dimensional grid, at which the cumulative probability can be integrated with the multivariate normal cumulative density function (mvncdf) available in Matlab. The bivariate case is studied, where the pull-in region is usually defined as a hexagon and the probability is easily obtained using mvncdf at all the grid points within the convex polygon. Second, the paper compares the computed integer rounding and integer bootstrapping success rates, and the lower and upper bounds of the ILS success rates, to the actual ILS AR success rates obtained from a 24 h GPS data set for a 21 km baseline. The results demonstrate that the upper bound of the ILS AR probability given in the existing literature agrees well with the actual ILS success rate, although the success rate computed with the integer bootstrapping method is a quite sharp approximation to the actual ILS success rate. The results also show that variations or uncertainty of the unit-weight variance estimates from epoch to epoch significantly affect the computed success rates from the different methods, and thus deserve more attention in order to obtain useful success probability predictions.
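The closed-form integer bootstrapping success rate used as a sharp lower bound above can be sketched in a few lines. This is the standard textbook formula, evaluated here with hypothetical conditional standard deviations rather than values from the paper's data set.

```python
import math

def phi(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bootstrap_success_rate(sigmas):
    """Exact success rate of the integer bootstrapping estimator:
    P = prod_i (2*Phi(1/(2*sigma_i)) - 1), where sigma_i are the conditional
    standard deviations of the decorrelated ambiguities. It serves as a sharp
    lower bound for the ILS success rate."""
    p = 1.0
    for s in sigmas:
        p *= 2.0 * phi(1.0 / (2.0 * s)) - 1.0
    return p

# hypothetical conditional standard deviations (cycles)
p = bootstrap_success_rate([0.10, 0.15])
print(p)
```

Smaller conditional standard deviations push every factor toward 1, so well-decorrelated ambiguities give a success rate close to unity.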
Abstract:
The occurrence of extreme water level events along low-lying, highly populated and/or developed coastlines can lead to devastating impacts on coastal infrastructure. It is therefore very important that the probabilities of extreme water levels are accurately evaluated to inform flood and coastal management and future planning. The aim of this study was to provide estimates of present-day extreme total water level exceedance probabilities around the whole coastline of Australia, arising from combinations of mean sea level, astronomical tide and storm surges generated by both extra-tropical and tropical storms, but exclusive of surface gravity waves. The study was undertaken in two main stages. In the first stage, a high-resolution (~10 km along the coast) depth-averaged hydrodynamic model was configured for the whole coastline of Australia using the Danish Hydraulic Institute's Mike21 modelling suite. The model was forced with astronomical tidal levels, derived from the TPXO7.2 global tidal model, and meteorological fields, from the US National Center for Environmental Prediction's global reanalysis, to generate a 61-year (1949 to 2009) hindcast of water levels. This model output was validated against measurements from 30 tide gauge sites around Australia with long records. At each of the model grid points located around the coast, time series of annual maxima and the several highest water levels for each year were derived from the multi-decadal water level hindcast and fitted to extreme value distributions to estimate exceedance probabilities. Stage 1 provided a reliable estimate of the present-day total water level exceedance probabilities around southern Australia, which is mainly impacted by extra-tropical storms.
However, as the meteorological fields used to force the hydrodynamic model only weakly include the effects of tropical cyclones, the resultant water level exceedance probabilities were underestimated around western, northern and north-eastern Australia at higher return periods. Even if the resolution of the meteorological forcing were adequate to represent tropical cyclone-induced surges, multi-decadal periods yield insufficient instances of tropical cyclones to enable the use of traditional extreme value extrapolation techniques. Therefore, in the second stage of the study, a statistical model of tropical cyclone tracks and central pressures was developed using historic observations. This model was then used to generate synthetic events representing 10,000 years of cyclone activity for the Australian region, with characteristics based on the observed tropical cyclones of the last ~40 years. Wind and pressure fields, derived from these synthetic events using analytical profile models, were used to drive the hydrodynamic model to predict the associated storm surge response. A random time period during the tropical cyclone season was chosen, and astronomical tidal forcing for this period was included to account for non-linear interactions between the tidal and surge components. For each model grid point around the coast, annual maximum total levels for these synthetic events were calculated and used to estimate exceedance probabilities. The exceedance probabilities from stages 1 and 2 were then combined to provide a single estimate of present-day extreme water level probabilities around the whole coastline of Australia.
Abstract:
Non-rigid image registration is an essential tool for overcoming the inherent local anatomical variations that exist between images acquired from different individuals or atlases. Furthermore, certain applications require this type of registration to operate across images acquired from different imaging modalities. One popular local approach for estimating this registration is a block matching procedure utilising the mutual information criterion. However, previous block matching procedures generate a sparse deformation field containing displacement estimates at uniformly spaced locations, neglecting the evidence that block matching results depend on the amount of local information content. This paper addresses this drawback by proposing the use of a Reversible Jump Markov Chain Monte Carlo statistical procedure to optimally select grid points of interest. Three different methods are then compared for propagating the estimated sparse deformation field to the entire image: a thin-plate spline warp, Gaussian convolution, and a hybrid fluid technique. Results show that non-rigid registration can be improved by using the proposed algorithm to optimally select grid points of interest.
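Of the three propagation methods compared, the Gaussian convolution approach can be sketched as a normalized Gaussian weighting of the sparse displacement estimates. This is a minimal illustration with hypothetical grid points and displacements, not the authors' implementation.

```python
import numpy as np

def propagate_gaussian(points, disps, grid_shape, sigma=10.0):
    """Propagate a sparse deformation field to every pixel by normalized
    Gaussian weighting of the block-matching displacement estimates."""
    h, w = grid_shape
    ys, xs = np.mgrid[0:h, 0:w]
    num = np.zeros((h, w, 2))
    den = np.zeros((h, w))
    for (py, px), d in zip(points, disps):
        wgt = np.exp(-((ys - py) ** 2 + (xs - px) ** 2) / (2.0 * sigma ** 2))
        num += wgt[..., None] * np.asarray(d, dtype=float)
        den += wgt
    return num / den[..., None]      # dense displacement field, shape (h, w, 2)

# hypothetical sparse estimates at three selected grid points of interest
points = [(8, 8), (8, 24), (24, 16)]
disps = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.5)]
field = propagate_gaussian(points, disps, (32, 32), sigma=4.0)
print(field[8, 8])                   # close to the local estimate (1.0, 0.0)
```

Near each selected grid point the dense field reproduces the local estimate; between points it blends smoothly, which is the behaviour the thin-plate spline and fluid alternatives trade off differently.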
Abstract:
In this paper, we derive a new nonlinear two-sided space-fractional diffusion equation with variable coefficients from the fractional Fick's law. A semi-implicit difference method (SIDM) for this equation is proposed. The stability and convergence of the SIDM are discussed. For the implementation, we develop a fast accurate iterative method for the SIDM by decomposing the dense coefficient matrix into a combination of Toeplitz-like matrices. This fast iterative method significantly reduces the storage requirement of O(n^2) and the computational cost of O(n^3) down to O(n) and O(n log n) respectively, where n is the number of grid points. The method retains the same accuracy as the underlying SIDM solved with Gaussian elimination. Finally, some numerical results are shown to verify the accuracy and efficiency of the new method.
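The key ingredient behind such fast solvers, a Toeplitz matrix–vector product in O(n log n) storage-free form via circulant embedding and the FFT, can be sketched generically (this is the standard technique, not the paper's full SIDM solver):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply an n x n Toeplitz matrix (first column c, first row r, with
    c[0] == r[0]) by a vector in O(n log n) via circulant embedding and FFT,
    instead of forming the dense matrix and paying O(n^2)."""
    n = len(x)
    # First column of the 2n x 2n circulant matrix that embeds the Toeplitz one.
    col = np.concatenate([c, [0.0], r[:0:-1]])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)                                 # first column
r = np.concatenate([[c[0]], rng.standard_normal(n - 1)])   # first row
x = rng.standard_normal(n)
# dense reference: T[i, j] = c[i-j] for i >= j, r[j-i] otherwise
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)] for i in range(n)])
print(np.allclose(toeplitz_matvec(c, r, x), T @ x))        # True
```

Only the first column and first row are stored (O(n) memory), and each product inside the iterative solver costs O(n log n), which is exactly where the quoted complexity reduction comes from.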
Abstract:
In this work an attempt has been made to evaluate the seismic hazard of South India (8.0°N–20°N; 72°E–88°E) based on probabilistic seismic hazard analysis (PSHA). The earthquake data obtained from different sources were declustered to remove dependent events. A total of 598 earthquakes of moment magnitude 4 and above were obtained from the study area after declustering and were considered for further hazard analysis. The seismotectonic map of the study area was prepared by considering the faults, lineaments and shear zones in the study area which are associated with earthquakes of magnitude 4 and above. For assessing the seismic hazard, the study area was divided into small grids of size 0.1° × 0.1°, and the hazard parameters were calculated at the centre of each of these grid cells by considering all the seismic sources within a radius of 300 km. Rock level peak horizontal acceleration (PHA) and spectral acceleration (SA) values at 1 s corresponding to 10% and 2% probability of exceedance in 50 years have been calculated for all the grid points. Contour maps showing the spatial variation of these values are presented here. Uniform hazard response spectra (UHRS) at rock level for 5% damping and 10% and 2% probability of exceedance in 50 years were also developed for all the grid points. The peak ground acceleration (PGA) at surface level was calculated for the whole of South India for four different site classes. These values can be used to find the PGA value at any site in South India based on the site class at that location. Thus, this method can be viewed as a simplified method to evaluate the PGA values at any site in the study area.
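Under the Poisson occurrence model conventionally used in PSHA, the two quoted hazard levels correspond to the familiar ~475-year and ~2475-year return periods. A minimal sketch of that conversion:

```python
import math

def return_period(p_exceed, t_years):
    """Return period (years) for probability p_exceed of at least one
    exceedance in t_years, under a Poisson occurrence model:
    p = 1 - exp(-t / T)  =>  T = -t / ln(1 - p)."""
    return -t_years / math.log(1.0 - p_exceed)

# the two standard PSHA hazard levels used in the study
print(round(return_period(0.10, 50)))   # ~475 years
print(round(return_period(0.02, 50)))   # ~2475 years
```

This is why 10%-in-50-years and 2%-in-50-years maps are often labelled as 475-year and 2475-year hazard maps.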
Abstract:
Short-time analytical solutions for the solid and liquid temperatures and the freezing front have been obtained for the outward radially symmetric spherical solidification of a superheated melt. Although results are presented here only for a time-dependent boundary flux, the method of solution can be used for other kinds of boundary conditions as well. Later, the analytical solution has been compared with the numerical solution obtained with the help of a finite difference scheme in which the grid points change with the freezing front position. An efficient method of execution of the numerical scheme has been discussed in detail. Graphs have been drawn for the total solidification times and temperature distributions in the solid.
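As a minimal illustration of the kind of analytical benchmark such moving-grid schemes are checked against, the classical one-phase planar Neumann similarity solution can be computed with a short bisection. This is the standard planar benchmark, not the paper's radially symmetric spherical problem, and the Stefan number below is hypothetical.

```python
import math

def stefan_lambda(Ste):
    """Solve the one-phase Neumann transcendental equation
    lambda * exp(lambda^2) * erf(lambda) = Ste / sqrt(pi)
    by bisection; the freezing front then moves as s(t) = 2*lambda*sqrt(alpha*t)."""
    target = Ste / math.sqrt(math.pi)
    f = lambda lam: lam * math.exp(lam * lam) * math.erf(lam) - target
    lo, hi = 1e-9, 5.0           # f is increasing: f(lo) < 0 < f(hi)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

lam = stefan_lambda(Ste=0.5)     # hypothetical Stefan number
alpha = 1.0                      # thermal diffusivity (nondimensional)
s = lambda t: 2.0 * lam * math.sqrt(alpha * t)
print(lam, s(1.0))               # front position grows like sqrt(t)
```

The square-root-of-time front growth is the short-time behaviour a moving-grid finite difference scheme should reproduce before geometry effects set in.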
Abstract:
Analytical and numerical solutions of a general problem related to the radially symmetric inward spherical solidification of a superheated melt have been studied in this paper. In the radiation–convection type boundary conditions, the heat transfer coefficient has been taken as time dependent, and could be infinite at time t = 0. This is necessary for the initiation of instantaneous solidification of the superheated melt over its surface. The analytical solution consists of employing suitable fictitious initial temperatures and fictitious extensions of the original region occupied by the melt. The numerical solution consists of a finite difference scheme in which the grid points move with the freezing front. The numerical scheme can easily handle the density changes in the solid and liquid states and the shrinkage or expansion of volume due to density changes. In the numerical results obtained for the moving boundary and temperatures, the effects of several parameters such as latent heat, Boltzmann constant, density ratios and heat transfer coefficients have been shown. The correctness of the numerical results has also been checked by satisfying the integral heat balance at every time step.
Abstract:
The urban heat island phenomenon is the best-known year-round urban climate phenomenon. It occurs in summer during the daytime due to short-wave radiation from the sun, and in wintertime through anthropogenic heat production. In summertime, the properties of the fabric of city buildings determine how much energy is stored, conducted and transmitted through the material. During night-time, when there is no incoming short-wave radiation, all fabrics of the city release the energy in the form of heat back to the urban atmosphere. In wintertime, anthropogenic heating of buildings and traffic deliver energy into the urban atmosphere. The initial focus of Helsinki urban heat island research was on the description of the intensity of the urban heat island (Fogelberg 1973, Alestalo 1975). In this project our goal was to carry out as many measurements as possible over a large area of Helsinki to give a long-term estimate of the Helsinki urban heat island. Helsinki is a city of 550 000 inhabitants located on the north shore of the Gulf of Finland in the Baltic Sea. Initially, comparison studies against long-term weather station records showed that our regular, but weekly, sampling of observations adequately describes the Helsinki urban heat island. The project covered an entire seasonal cycle over the 12 months from July 2009 to June 2010. The measurements were conducted using a moving platform, following microclimatological traditions. Tuesday was selected as the measuring day because it was the only weekday during the one-year time span without any public holidays. Once a week, two sets of measurements, 104 in total, were conducted in the heterogeneous temperature conditions of Helsinki city centre. In the more homogeneous suburban areas, one set of measurements was taken every second week, giving a total of 52. The first set of measurements took place before noon, and the second 12 hours later, just prior to midnight.
Helsinki Kaisaniemi weather station was chosen as the reference station. This weather station is located in a large park in the city centre of Helsinki. Along the measurement route, 336 fixed points were established, and the monthly air temperature differences to Kaisaniemi were calculated to produce monthly and annual maps. The monthly air temperature differences were interpolated onto a 21.1 km by 18.1 km horizontal grid with 100 metre resolution using a residual kriging method. The following independent variables were used for the kriging interpolation: topographical height, portion of sea area, portion of trees, fraction of built-up and non-built-up area, volumes of buildings, and population density. The annual mean air temperature difference gives the best representation of the Helsinki urban heat island effect. Due to the natural variability of weather conditions during the measurement campaign, care must be taken when interpreting the results for the monthly values. The main results of this urban heat island research project are: a) The city centre of Helsinki is warmer than its surroundings, both on a monthly mean basis and for the annual mean; however, there are only a few grid points, 46 out of 38 191, which display a temperature difference of more than 1 K. b) If the monthly spatial variation in air temperature differences is small, then usually the temperature difference between the city and the surroundings is also small. c) Isolated large buildings and suburban centres create their own individual heat islands. d) The topographical influence on air temperature can generally be neglected for the monthly mean, but can be strong under certain weather conditions.
Abstract:
Statistically averaged lattices provide a common basis to understand the diffraction properties of structures displaying deviations from regular crystal structures. An average lattice is defined and examples are given in one and two dimensions along with their diffraction patterns. The absence of periodicity in reciprocal space corresponding to aperiodic structures is shown to arise out of different projected spacings that are irrationally related, when the grid points are projected along the chosen coordinate axes. It is shown that the projected length scales are important factors which determine the existence or absence of observable periodicity in the diffraction pattern more than the sequence of arrangement.
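A one-dimensional illustration of irrationally related projected spacings is the Fibonacci-type chain built from the golden ratio; projecting the selected grid points onto the axis yields exactly two spacings, 1 and τ, whose ratio is irrational, so no single period exists. This is a generic sketch, not the paper's specific construction.

```python
import math

tau = (1.0 + math.sqrt(5.0)) / 2.0   # golden ratio, an irrational slope

# Quasilattice positions of a Fibonacci-type chain: x_n = n + (1/tau)*floor(n/tau).
# Successive differences take only two values, 1 and 1 + 1/tau = tau.
positions = [n + (1.0 / tau) * math.floor(n / tau) for n in range(30)]
spacings = [round(b - a, 9) for a, b in zip(positions, positions[1:])]
print(sorted(set(spacings)))         # two spacings whose ratio tau is irrational
```

Because the two spacings are irrationally related, no translation maps the point set onto itself, yet the diffraction pattern of such a chain still shows sharp peaks, which is the point the abstract makes about projected length scales.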
Abstract:
A group of high-order finite-difference schemes for incompressible flow was implemented to simulate the evolution of turbulent spots in channel flows. The long-time accuracy of these schemes was tested by comparing the evolution of small disturbances to a plane channel flow against the growth rate predicted by linear theory. When the perturbation is the unstable eigenfunction at a Reynolds number of 7500, the solution grows only if there are a comparatively large number of (equispaced) grid points across the channel. Fifth-order upwind biasing of convection terms is found to be worse than second-order central differencing. But, for a decaying mode at a Reynolds number of 1000, about a fourth of the points suffice to obtain the correct decay rate. We show that this is due to the comparatively high gradients in the unstable eigenfunction near the walls. So, high-wave-number dissipation of the high-order upwind biasing degrades the solution especially. But for a well-resolved calculation, the weak dissipation does not degrade solutions even over the very long times (O(100)) computed in these tests. Some new solutions of spot evolution in Couette flows with pressure gradients are presented. The approach to self-similarity at long times can be seen readily in contour plots.
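The linear-theory comparison described above amounts to extracting a growth (or decay) rate from the evolution of the disturbance amplitude. A minimal sketch with synthetic data and a hypothetical growth rate (not a value from the paper):

```python
import numpy as np

def growth_rate(t, amplitude):
    """Estimate the exponential growth (or decay) rate sigma from an amplitude
    time series a(t) ~ a0 * exp(sigma * t) by least squares on log(a)."""
    sigma, _ = np.polyfit(t, np.log(amplitude), 1)   # slope of the log-linear fit
    return sigma

# synthetic disturbance amplitude with a hypothetical unstable-mode rate
t = np.linspace(0.0, 50.0, 200)
sigma_true = 0.00224
a = 1e-6 * np.exp(sigma_true * t)
print(growth_rate(t, a))          # recovers sigma_true
```

Comparing the fitted rate against the eigenvalue from linear stability theory is the long-time accuracy test the abstract describes; excess numerical dissipation shows up as a fitted rate below the theoretical one.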
Abstract:
In this work, an attempt has been made to evaluate the spatial variation of peak horizontal acceleration (PHA) and spectral acceleration (SA) values at rock level for south India based on probabilistic seismic hazard analysis (PSHA). These values were estimated by considering the uncertainties involved in magnitude, hypocentral distance and attenuation of seismic waves. Different models were used for the hazard evaluation, and they were combined using a logic tree approach. For evaluating the seismic hazard, the study area was divided into small grids of size 0.1° × 0.1°, and the hazard parameters were calculated at the centre of each of these grid cells by considering all the seismic sources within a radius of 300 km. Rock level PHA values and SA at 1 s corresponding to 10% probability of exceedance in 50 years were evaluated for all the grid points. Maps showing the spatial variation of rock level PHA values and SA at 1 s for the entire south India are presented in this paper. To compare the seismic hazard for some of the important cities, the seismic hazard curves and the uniform hazard response spectrum (UHRS) at rock level with 10% probability of exceedance in 50 years are also presented in this work.
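The logic-tree combination mentioned above amounts to a branch-weighted average of the hazard estimates from the different models. A minimal sketch with hypothetical annual exceedance rates and branch weights (not values from the study):

```python
import numpy as np

def logic_tree_hazard(rates_by_model, weights):
    """Combine annual exceedance-rate curves from several source/ground-motion
    model branches as a weighted mean, the usual logic-tree combination."""
    w = np.asarray(weights, dtype=float)
    assert abs(w.sum() - 1.0) < 1e-12, "branch weights must sum to 1"
    return w @ np.asarray(rates_by_model, dtype=float)

# hypothetical annual exceedance rates at three PHA levels, for three branches
rates = [[1e-2, 2e-3, 3e-4],
         [8e-3, 1e-3, 1e-4],
         [1.2e-2, 3e-3, 5e-4]]
weights = [0.5, 0.25, 0.25]
combined = logic_tree_hazard(rates, weights)
print(combined)
```

The combined curve is what gets inverted at each grid point to read off the PHA or SA value with 10% probability of exceedance in 50 years.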