892 results for Thresholding Approximation


Relevance: 10.00%

Abstract:

Global Navigation Satellite Systems (GNSS)-based observation systems can provide high precision positioning and navigation solutions in real time, of the order of a subcentimetre, if carrier phase measurements are used in the differential mode and all the bias and noise terms are dealt with well. However, these carrier phase measurements are ambiguous due to unknown, integer numbers of cycles. One key challenge in the differential carrier phase mode is to fix the integer ambiguities correctly. On the other hand, in safety-of-life or liability-critical applications, such as vehicle safety positioning and aviation, not only is high accuracy required, but the reliability requirement is also important. This PhD research studies how to achieve high reliability for ambiguity resolution (AR) in a multi-GNSS environment. GNSS ambiguity estimation and validation problems are the focus of the research effort. In particular, we study the case of multiple constellations that includes the initial to full operations of the foreseeable Galileo, GLONASS, Compass and QZSS navigation systems over the next few years to the end of the decade. Since real observation data are only available from the GPS and GLONASS systems, a simulation method named Virtual Galileo Constellation (VGC) is applied to generate observational data from another constellation in the data analysis. In addition, both full ambiguity resolution (FAR) and partial ambiguity resolution (PAR) algorithms are used in processing single- and dual-constellation data. Firstly, a brief overview of related work on AR methods and reliability theory is given. Next, a modified inverse integer Cholesky decorrelation method and its performance on AR are presented. Subsequently, a new measure of decorrelation performance called the orthogonality defect is introduced and compared with other measures. Furthermore, a new AR scheme that considers the ambiguity validation requirement in the control of the search space size is proposed to improve the search efficiency. With respect to the reliability of AR, we also discuss the computation of the ambiguity success rate (ASR) and confirm that the success rate computed with the integer bootstrapping method is quite a sharp approximation to the actual integer least-squares (ILS) success rate. The advantages of multi-GNSS constellations are examined in terms of the PAR technique involving a predefined ASR. Finally, a novel satellite selection algorithm for reliable ambiguity resolution, called SARA, is developed. In summary, the study demonstrates that when the ASR is close to one, the reliability of AR can be guaranteed and the ambiguity validation is effective. The work then focuses on new strategies to improve the ASR, including a partial ambiguity resolution procedure with a predefined success rate and a novel satellite selection strategy with a high success rate. The proposed strategies bring significant benefits of multi-GNSS signals to real-time high precision and high reliability positioning services.
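
As an illustration of the ASR computation discussed above, the sketch below (not code from the thesis; the covariance matrix is a placeholder assumption) evaluates the standard integer bootstrapping success rate from the conditional standard deviations of the sequentially conditioned ambiguities:

```python
# Minimal sketch of the integer bootstrapping ambiguity success rate (ASR),
#   P_IB = prod_i [ 2*Phi(1/(2*sigma_i)) - 1 ],
# where sigma_i are the conditional standard deviations obtained from a
# triangular factorisation of the ambiguity covariance matrix Q.
import numpy as np
from scipy.stats import norm

def bootstrap_success_rate(Q):
    """ASR of integer bootstrapping for ambiguity covariance matrix Q."""
    C = np.linalg.cholesky(Q)        # Q = C C^T, C lower triangular
    sigma = np.diag(C)               # conditional standard deviations
    return float(np.prod(2.0 * norm.cdf(1.0 / (2.0 * sigma)) - 1.0))

# Placeholder example: a well-decorrelated (near-diagonal) covariance.
Q = np.diag([0.010, 0.015, 0.020])   # cycles^2, hypothetical values
print(f"bootstrapped ASR = {bootstrap_success_rate(Q):.4f}")
```

Because the bootstrapped success rate depends on the conditioning order, it is normally evaluated after decorrelation (e.g. a Z-transformation), which is why the sketch assumes a near-diagonal Q.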

Relevance: 10.00%

Abstract:

In this paper we report a new neutron Compton scattering (NCS) measurement of the ground state single atom kinetic energy of polycrystalline beryllium at momentum transfers in the range 27–104 Å⁻¹ and temperatures in the range 110–1150 K. The measurements have been made with the electron volt spectrometer (eVS) at the ISIS facility, and the measured kinetic energies are shown to be ~10% higher than calculations made in the harmonic approximation.

Relevance: 10.00%

Abstract:

We report inelastic neutron scattering measurements of the neutron Compton profile, J(y), for Be and for D in polycrystalline ZrD2 over a range of momentum transfers, q, between 27 and 178 Å⁻¹. The measurements were performed using the inverse geometry spectrometer eVS, which is situated at the UK pulsed spallation neutron source ISIS. We have investigated deviations from impulse approximation (IA) scattering, which are generically referred to as final state effects (FSEs), using a method described by Sears. This method allows both the magnitude and the q dependence of the FSEs to be studied. Analysis of the measured data was compared with analysis of numerical simulations based on the harmonic approximation, and good agreement was found for both ZrD2 and Be. Finally, we have shown how ⟨∇²V⟩, where V is the interatomic potential, can be extracted from the antisymmetric component of J(y).
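
For context, the Sears analysis referred to above expands the measured profile about the impulse approximation; a commonly quoted form of the leading final state correction is (the exact prefactor convention is an assumption here, not a quotation from this paper)

$$ J(y,q) \simeq J_{\mathrm{IA}}(y) - \frac{M\,\langle\nabla^2 V\rangle}{36\hbar^2 q}\,\frac{\mathrm{d}^3 J_{\mathrm{IA}}(y)}{\mathrm{d}y^3} + \cdots, $$

so the leading FSE term is antisymmetric in $y$ and falls off as $1/q$, which is what allows $\langle\nabla^2 V\rangle$ to be read off from the antisymmetric component of $J(y)$.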

Relevance: 10.00%

Abstract:

Groundwater flow models are usually characterized as either transient flow models or steady state flow models. Given that steady state groundwater flow conditions arise as a long-time asymptotic limit of a particular transient response, it is natural to seek a finite estimate of the amount of time required for a particular transient flow problem to effectively reach steady state. Here, we introduce the concept of mean action time (MAT) to address a fundamental question: how long does it take for a groundwater recharge or discharge process to effectively reach steady state? This concept relies on identifying a cumulative distribution function, $F(t;x)$, which varies from $F(0;x)=0$ to $F(t;x) \to 1$ as $t \to \infty$, thereby providing a measure of the progress of the system towards steady state. The MAT corresponds to the mean of the associated probability density function $f(t;x) = \dfrac{dF}{dt}$, and we demonstrate that this framework provides useful analytical insight by explicitly showing how the MAT depends on the parameters in the model and the geometry of the problem. Additional theoretical results relating to the variance of $f(t;x)$, known as the variance of action time (VAT), are also presented. To test our theoretical predictions we include measurements from a laboratory-scale experiment describing flow through a homogeneous porous medium. The laboratory data confirm that the theoretical MAT predictions are in good agreement with measurements from the physical model.
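
Since $F(t;x)$ is a cumulative distribution function in time, the MAT and VAT referred to above are simply its first two moments; a short derivation step worth recording is

$$ \mathrm{MAT}(x)=\int_0^\infty t\,f(t;x)\,\mathrm{d}t=\int_0^\infty\bigl[1-F(t;x)\bigr]\,\mathrm{d}t, \qquad \mathrm{VAT}(x)=\int_0^\infty t^2 f(t;x)\,\mathrm{d}t-\mathrm{MAT}(x)^2, $$

where the second expression for the MAT follows by integration by parts, provided $t\,[1-F(t;x)]\to 0$ as $t\to\infty$.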

Relevance: 10.00%

Abstract:

In a classification problem we typically face two challenging issues: the diverse characteristics of negative documents, and often many negative documents that are close to positive documents. It is therefore hard for a single classifier to clearly separate incoming documents into classes. This paper proposes a novel gradual problem-solving approach that creates a two-stage classifier. The first stage identifies reliable negatives (negative documents with weak positive characteristics); it concentrates on minimizing the number of false negative documents (recall-oriented). We use Rocchio, an existing recall-based classifier, for this stage. The second stage is a precision-oriented “fine tuning” that concentrates on minimizing the number of false positive documents by applying pattern (statistical phrase) mining techniques. In this stage a pattern-based scoring is followed by threshold setting (thresholding). Experiments show that our statistical-phrase-based two-stage classifier is promising.
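
A minimal sketch of the resulting decision rule (interface names are hypothetical; Rocchio training and pattern mining are out of scope here):

```python
# Two-stage classification: a recall-oriented filter followed by a
# precision-oriented pattern-based score with thresholding.
def classify(doc, rocchio, pattern_scorer, threshold):
    """Return True if `doc` is classified as positive."""
    # Stage 1 (recall-oriented): Rocchio screens out reliable negatives,
    # i.e. documents with only weak positive characteristics.
    if not rocchio.is_candidate_positive(doc):
        return False
    # Stage 2 (precision-oriented "fine tuning"): a statistical-phrase
    # (pattern) score is compared against a tuned threshold.
    return pattern_scorer.score(doc) >= threshold
```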

Relevance: 10.00%

Abstract:

In this paper, we report the preparation and characterisation of nanometre-sized TiO2, CdO, and ZnO semiconductor particles trapped in zeolite NaY. Preparation of these particles was carried out via the traditional ion exchange method and a subsequent calcination procedure. It was found that the smaller cations, i.e., Cd2+ and Zn2+, could be readily introduced into the SI′ and SII′ sites located in the sodalite cages through ion exchange, while this was not the case for the larger Ti species, i.e., the Ti monomer [TiO]2+ or dimer [Ti2O3]2+, which were predominantly dispersed on the external surface of zeolite NaY. The subsequent calcination procedure promoted the migration of these Ti species into the internal surface of the supercages. The semiconductor particles confined in the NaY zeolite host exhibited a significant blue shift in the UV-VIS absorption spectra, in contrast to the respective bulk semiconductor materials, due to the quantum size effect (QSE). The particle sizes calculated from the UV-VIS optical absorption spectra using the effective mass approximation model are in good agreement with the atomic absorption data.
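
The effective mass approximation step can be sketched as follows (a rough illustration only: the effective masses, dielectric constant and target shift below are placeholders, not values from the paper); it inverts the Brus expression for the confinement-induced blue shift to obtain a particle radius:

```python
# Effective mass approximation (Brus) relation between blue shift and radius:
#   dE(R) = hbar^2 pi^2 / (2 R^2) * (1/me* + 1/mh*) - 1.8 e^2 / (4 pi eps0 eps_r R)
import numpy as np
from scipy.constants import hbar, pi, e, epsilon_0, m_e
from scipy.optimize import brentq

me_eff, mh_eff = 0.3 * m_e, 0.8 * m_e   # placeholder effective masses
eps_r = 8.0                              # placeholder dielectric constant

def blue_shift(R):
    """Confinement energy shift (J) for a particle of radius R (m)."""
    kinetic = hbar**2 * pi**2 / (2 * R**2) * (1 / me_eff + 1 / mh_eff)
    coulomb = 1.8 * e**2 / (4 * pi * epsilon_0 * eps_r * R)
    return kinetic - coulomb

# Invert: find the radius reproducing a hypothetical 0.3 eV blue shift.
target = 0.3 * e  # J
R = brentq(lambda r: blue_shift(r) - target, 0.5e-9, 20e-9)
print(f"estimated radius = {R * 1e9:.2f} nm")
```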

Relevance: 10.00%

Abstract:

In most intent recognition studies, annotations of query intent are created post hoc by external assessors who are not the searchers themselves. It is important for the field to get a better understanding of the quality of this process as an approximation for determining the searcher's actual intent. Some studies have investigated the reliability of the query intent annotation process by measuring the interassessor agreement. However, these studies did not measure the validity of the judgments, that is, to what extent the annotations match the searcher's actual intent. In this study, we asked both the searchers themselves and external assessors to classify queries using the same intent classification scheme. We show that of the seven dimensions in our intent classification scheme, four can reliably be used for query annotation. Of these four, only the annotations on the topic and spatial sensitivity dimensions are valid when compared with the searchers' annotations. The difference between the interassessor agreement and the assessor-searcher agreement was significant on all dimensions, showing that the agreement between external assessors is not a good estimator of the validity of the intent classifications. Therefore, we encourage the research community to consider using query intent classifications by the searchers themselves as test data.
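
Agreement of the kind measured in such studies is commonly quantified with Cohen's kappa; the abstract does not name its agreement statistic, so the following is an assumption-laden sketch with hypothetical intent labels:

```python
# Agreement between assessor and searcher annotations via Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

assessor = ["navigational", "informational", "informational", "transactional"]
searcher = ["navigational", "informational", "transactional", "transactional"]
print(f"kappa = {cohen_kappa_score(assessor, searcher):.2f}")
```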

Relevance: 10.00%

Abstract:

In biology, we frequently observe different species existing within the same environment. For example, there are many cell types in a tumour, or different animal species may occupy a given habitat. In modelling interactions between such species, we often make use of the mean field approximation, whereby spatial correlations between the locations of individuals are neglected. Whilst this approximation holds in certain situations, this is not always the case, and care must be taken to ensure the mean field approximation is only used in appropriate settings. In circumstances where the mean field approximation is unsuitable we need to include information on the spatial distributions of individuals, which is not a simple task. In this paper we provide a method that overcomes many of the failures of the mean field approximation for an on-lattice volume-excluding birth-death-movement process with multiple species. We explicitly take into account spatial information on the distribution of individuals by including partial differential equation descriptions of lattice site occupancy correlations. We demonstrate how to derive these equations for the multi-species case, and show results specific to a two-species problem. We compare averaged discrete results to both the mean field approximation and our improved method which incorporates spatial correlations. We note that the mean field approximation fails dramatically in some cases, predicting very different behaviour from that seen upon averaging multiple realisations of the discrete system. In contrast, our improved method provides excellent agreement with the averaged discrete behaviour in all cases, thus providing a more reliable modelling framework. Furthermore, our method is tractable as the resulting partial differential equations can be solved efficiently using standard numerical techniques.
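
For contrast with the corrected model, the standard mean field approximation for this kind of process can be sketched as below (rates and initial occupancies are placeholders; under the mean field assumption, undirected movement does not alter spatially uniform average occupancies):

```python
# Mean field approximation for a two-species volume-excluding
# birth-death process: average occupancies C1, C2 evolve with birth
# limited by the total free space (1 - C1 - C2), with all spatial
# correlations neglected.
from scipy.integrate import solve_ivp

lam1, mu1 = 1.0, 0.1    # placeholder birth/death rates, species 1
lam2, mu2 = 0.5, 0.05   # placeholder birth/death rates, species 2

def mean_field(t, C):
    C1, C2 = C
    free = 1.0 - C1 - C2                   # volume exclusion
    return [lam1 * C1 * free - mu1 * C1,
            lam2 * C2 * free - mu2 * C2]

sol = solve_ivp(mean_field, (0.0, 50.0), [0.05, 0.05])
print(f"steady state: C1 = {sol.y[0, -1]:.3f}, C2 = {sol.y[1, -1]:.3f}")
```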

Relevance: 10.00%

Abstract:

This thesis establishes performance properties for approximate filters and controllers that are designed on the basis of approximate dynamic system representations. These performance properties provide a theoretical justification for the widespread application of approximate filters and controllers in the common situation where system models are not known with complete certainty. This research also provides useful tools for approximate filter designs, which are applied to hybrid filtering of uncertain nonlinear systems. As a contribution towards applications, this thesis also investigates air traffic separation control in the presence of measurement uncertainties.

Relevance: 10.00%

Abstract:

Utility functions in Bayesian experimental design are usually based on the posterior distribution. When the posterior is found by simulation, it must be sampled from for each future data set drawn from the prior predictive distribution. Many thousands of posterior distributions are often required. A popular technique in the Bayesian experimental design literature for rapidly obtaining samples from the posterior is importance sampling, using the prior as the importance distribution. However, importance sampling tends to break down if there is a moderate-to-large number of experimental observations and/or the model parameter is high-dimensional. In this paper we explore the use of Laplace approximations in the design setting to overcome this drawback. Furthermore, we consider using the Laplace approximation to form the importance distribution, to obtain a more efficient importance distribution than the prior. The methodology is motivated by a pharmacokinetic study which investigates the effect of extracorporeal membrane oxygenation on the pharmacokinetics of antibiotics in sheep. The design problem is to find 10 near-optimal plasma sampling times which produce precise estimates of pharmacokinetic model parameters/measures of interest. We consider several different utility functions of interest in these studies, which involve the posterior distribution of parameter functions.
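
A minimal sketch of the Laplace-based importance sampling idea (the model interface and names are hypothetical):

```python
# Laplace approximation used as the importance distribution: locate the
# posterior mode, approximate the posterior by a Gaussian there, sample
# from that Gaussian, and reweight with self-normalised importance weights.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def laplace_importance_sample(log_post, theta0, n_samples, seed=0):
    """log_post: unnormalised log posterior for one simulated data set."""
    res = minimize(lambda th: -log_post(th), theta0, method="BFGS")
    mode, cov = res.x, res.hess_inv          # Laplace mean and covariance
    proposal = multivariate_normal(mean=mode, cov=cov)
    theta = proposal.rvs(size=n_samples, random_state=seed)
    samples = np.asarray(theta).reshape(n_samples, -1)
    logw = np.array([log_post(t) for t in samples]) - proposal.logpdf(theta)
    w = np.exp(logw - logw.max())            # stabilised weights
    return samples, w / w.sum()
```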

Relevance: 10.00%

Abstract:

In cold-formed steel construction, the use of a range of thin, high strength steels (0.35 mm thickness and 550 MPa yield stress) has increased significantly in recent times. A good knowledge of the basic mechanical properties of these steels is needed for their satisfactory use. In relation to the modulus of elasticity, the current practice is to assume a value of about 200 GPa for all steel grades. However, tensile tests of these steels have consistently shown that the modulus of elasticity varies with the grade and thickness of the steel, increasing to values as high as 240 GPa for smaller thicknesses and higher grades. This paper discusses this topic, presents tensile test results for a number of steel grades and thicknesses, and attempts to develop a relationship between modulus of elasticity, yield stress and thickness for the steel grades considered in this investigation.

Relevance: 10.00%

Abstract:

A condensation technique for degrees of freedom is first proposed to improve the computational efficiency of the meshfree method with Galerkin weak form. In the present method, scattered nodes without connectivity are divided into several subsets by cells of arbitrary shape. The local discrete equations are established over each cell using moving kriging interpolation, in which the nodes located in the cell are used for the approximation. The condensation technique is then introduced into the local discrete equations by transferring the equations of the inner nodes to the equations of the boundary nodes of each cell. In this scheme, the calculation for each cell is carried out by the meshfree method with Galerkin weak form, and a local search is used in the interpolation. Numerical examples show that the present method achieves high computational efficiency, good convergence, and good accuracy.
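
The condensation step itself is ordinary static condensation of the partitioned cell system; a minimal linear-algebra sketch (not the paper's code) is:

```python
# Static condensation: within one cell, eliminate inner-node equations so
# that only boundary-node unknowns remain. Partition the cell system as
#   [K_bb K_bi; K_ib K_ii] [u_b; u_i] = [f_b; f_i]
# and solve the second block row for u_i.
import numpy as np

def condense(K_bb, K_bi, K_ib, K_ii, f_b, f_i):
    """Return (K*, f*) with inner DOFs eliminated: K* u_b = f*."""
    K_star = K_bb - K_bi @ np.linalg.solve(K_ii, K_ib)
    f_star = f_b - K_bi @ np.linalg.solve(K_ii, f_i)
    return K_star, f_star

def recover_inner(K_ib, K_ii, f_i, u_b):
    """Back-substitute the boundary solution to recover inner unknowns."""
    return np.linalg.solve(K_ii, f_i - K_ib @ u_b)
```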

Relevance: 10.00%

Abstract:

An investigation of the drying of spherical food particles was performed, using peas as the model material. In developing a mathematical model of the drying curves, moisture diffusion was modelled using Fick’s second law for mass transfer. The resulting partial differential equation was solved using a forward-time central-space finite difference approximation, with the assumption of a variable effective diffusivity. To test the model, experimental data were collected for the drying of green peas in a fluidised bed at three drying temperatures. By fitting three equation types for the effective diffusivity to the data, it was found that a linear form, in which diffusivity increases with decreasing moisture content, was most appropriate. The final model accurately described the drying curves at the three experimental temperatures, with an R² value greater than 98.6% for all temperatures.
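
A minimal sketch of such an FTCS scheme (grid, time step, diffusivity coefficients and boundary values are placeholders, not the fitted values from the paper) for radial diffusion in a sphere with moisture-dependent diffusivity:

```python
# Forward-time central-space (FTCS) scheme for Fick's second law in a
# sphere with variable diffusivity D(M):
#   dM/dt = (1/r^2) d/dr ( r^2 D(M) dM/dr ).
import numpy as np

def D(M):
    # Linear form (as found best in the paper): diffusivity increases as
    # moisture content decreases; coefficients here are placeholders.
    return 1e-9 * (2.0 - M)

R, N = 2e-3, 50                       # sphere radius (m), radial nodes
r = np.linspace(0.0, R, N)
dr = r[1] - r[0]
dt = 0.01                             # time step (s), within FTCS stability
M = np.ones(N)                        # initial dimensionless moisture
M_surf = 0.1                          # surface equilibrium moisture

for step in range(100_000):
    # Flux r^2 D dM/dr evaluated at cell faces (central differences).
    rf = 0.5 * (r[1:] + r[:-1])
    Df = D(0.5 * (M[1:] + M[:-1]))
    flux = rf**2 * Df * (M[1:] - M[:-1]) / dr
    Mn = M.copy()
    Mn[1:-1] += dt * (flux[1:] - flux[:-1]) / (r[1:-1]**2 * dr)
    Mn[0] = Mn[1]                     # symmetry condition at the centre
    Mn[-1] = M_surf                   # Dirichlet condition at the surface
    M = Mn

print(f"mean moisture after {100_000 * dt:.0f} s: {M.mean():.3f}")
```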

Relevance: 10.00%

Abstract:

This paper presents an investigation into event detection in crowded scenes, where the event of interest co-occurs with other activities and only binary labels at the clip level are available. The proposed approach incorporates a fast feature descriptor from the MPEG domain, and a novel multiple instance learning (MIL) algorithm using sparse approximation and random sensing. MPEG motion vectors are used to build particle trajectories that represent the motion of objects in uniform video clips, and the MPEG DCT coefficients are used to compute a foreground map to remove background particles. Trajectories are transformed into the Fourier domain, and the Fourier representations are quantized into visual words using the K-Means algorithm. The proposed MIL algorithm models the scene as a linear combination of independent events, where each event is a distribution of visual words. Experimental results show that the proposed approaches achieve promising results for event detection compared to the state-of-the-art.
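
The visual word quantisation step can be sketched as follows (feature extraction from MPEG motion vectors is out of scope here, so the feature array is a placeholder):

```python
# Quantise Fourier-domain trajectory features into visual words with
# K-Means; each clip then becomes a bag-of-words histogram for the MIL stage.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
fourier_feats = rng.normal(size=(5000, 16))   # placeholder trajectory features

kmeans = KMeans(n_clusters=64, n_init=10, random_state=0).fit(fourier_feats)

def clip_histogram(clip_feats):
    """Normalised visual-word histogram of one clip's trajectory features."""
    words = kmeans.predict(clip_feats)
    hist = np.bincount(words, minlength=64).astype(float)
    return hist / hist.sum()
```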

Relevance: 10.00%

Abstract:

Discretization of a geographical region is quite common in spatial analysis. There have been few studies into the impact of different geographical scales on the outcomes of spatial models for different spatial patterns. This study aims to investigate the impact of spatial scales and spatial smoothing on the outcomes of modelling spatial point-based data. Given a spatial point-based dataset (such as occurrences of a disease), we study the geographical variation of residual disease risk using regular grid cells. The individual disease risk is modelled using a logistic model with the inclusion of spatially unstructured and/or spatially structured random effects. Three spatial smoothness priors for the spatially structured component are employed in modelling, namely an intrinsic Gaussian Markov random field, a second-order random walk on a lattice, and a Gaussian field with Matérn correlation function. We investigate how changes in grid cell size affect model outcomes under different spatial structures and different smoothness priors for the spatial component. A realistic example (the Humberside data) is analyzed and a simulation study is described. Bayesian computation is carried out using the integrated nested Laplace approximation (INLA). The results suggest that the performance and predictive capacity of the spatial models improve as the grid cell size decreases for certain spatial structures. It also appears that different spatial smoothness priors should be applied for different patterns of point data.
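
The discretization step itself can be sketched as below (coordinates and cell sizes are placeholders): point-based case locations are aggregated to regular grid cells, and re-running the aggregation with a different cell size is how sensitivity to spatial scale is examined.

```python
# Aggregate point locations to regular grid cells at several spatial scales.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 100, 500)    # placeholder point coordinates
y = rng.uniform(0, 100, 500)

for cell_size in (20.0, 10.0, 5.0):
    nb = int(100 / cell_size)
    counts, _, _ = np.histogram2d(x, y, bins=nb, range=[[0, 100], [0, 100]])
    print(f"cell size {cell_size:>4}: {nb * nb} cells, "
          f"max count per cell = {int(counts.max())}")
```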