545 results for Smoothing


Relevance: 10.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance: 10.00%

Abstract:

The Wet Tropics World Heritage Area in Far North Queensland, Australia consists predominantly of tropical rainforest and wet sclerophyll forest in areas of variable relief. Previous maps of vegetation communities in the area were produced by a labor-intensive combination of field survey and air-photo interpretation. Thus, the aim of this work was to develop a new vegetation mapping method based on imaging radar that incorporates topographical corrections, which could be repeated frequently, and which would reduce the need for detailed field assessments and associated costs. The method employed a topographic correction and mapping procedure developed to enable vegetation structural classes to be mapped from satellite imaging radar. Eight JERS-1 scenes covering the Wet Tropics area for 1996 were acquired from NASDA under the auspices of the Global Rainforest Mapping Project. JERS scenes were geometrically corrected for topographic distortion using an 80 m DEM and a combination of polynomial warping and radar viewing geometry modeling. An image mosaic was created to cover the Wet Tropics region, and a new technique for image smoothing was applied to the JERS texture bands and DEM before a Maximum Likelihood classification was applied to identify major land-cover and vegetation communities. Despite these efforts, dominant vegetation community classes could only be classified to low levels of accuracy (57.5 percent), which was partly explained by the significantly larger pixel size of the DEM in comparison to the JERS image (12.5 m). In addition, the spatial and floristic detail contained in the classes of the original validation maps was much finer than the JERS classification product was able to distinguish. In comparison to field and aerial photo-based approaches for mapping the vegetation of the Wet Tropics, appropriately corrected SAR data provide a more regional-scale, all-weather mapping technique for broader vegetation classes.
Further work is required to establish an appropriate combination of imaging radar with elevation data and other environmental surrogates to accurately map vegetation communities across the entire Wet Tropics.

Relevance: 10.00%

Abstract:

The olive ridley is the most abundant sea turtle species in the world, but little is known of the demography of this species. We used skeletochronological data on humerus diameter growth changes to estimate the age of North Pacific olive ridley sea turtles caught incidentally by pelagic longline fisheries operating near Hawaii and from dead turtles washed ashore on the main Hawaiian Islands. Two age estimation methods [ranking, correction factor (CF)] were used and yielded age estimates ranging from 5 to 38 and 7 to 24 years, respectively. Rank age estimates are highly correlated (r = 0.93) with straight carapace length (SCL), whereas CF age estimates are not (r = 0.62). We consider the CF age estimates biologically more plausible because of the disassociation of age and size. Using the CF age estimates, we then estimate the median age at sexual maturity to be around 13 years (mean carapace size c. 60 cm SCL) and found that somatic growth was negligible by 15 years of age. The expected age-specific growth rate function derived using numerical differentiation suggests at least one juvenile growth spurt at about 10–12 years of age, when maximum age-specific growth rates, c. 5 cm SCL year⁻¹, are apparent.

Relevance: 10.00%

Abstract:

In recent years, the cross-entropy method has been successfully applied to a wide range of discrete optimization tasks. In this paper we consider the cross-entropy method in the context of continuous optimization. We demonstrate the effectiveness of the cross-entropy method for solving difficult continuous multi-extremal optimization problems, including those with non-linear constraints.
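As an illustration (a sketch, not the paper's algorithm), the cross-entropy method for continuous optimization repeatedly samples from a Gaussian, refits it to the best ("elite") samples, and smooths the parameter update. The Rastrigin test function, sample sizes and smoothing constant below are arbitrary choices:

```python
import numpy as np

def cross_entropy_minimize(f, mu, sigma, n_samples=100, n_elite=10,
                           n_iter=50, smoothing=0.7):
    """Minimise f over R^d with the cross-entropy method: sample from a
    Gaussian, refit it to the elite samples, and smooth the update."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    for _ in range(n_iter):
        x = np.random.normal(mu, sigma, size=(n_samples, mu.size))
        elite = x[np.argsort([f(xi) for xi in x])[:n_elite]]
        mu = smoothing * elite.mean(axis=0) + (1 - smoothing) * mu
        sigma = smoothing * elite.std(axis=0) + (1 - smoothing) * sigma
    return mu

# Multi-extremal test function (Rastrigin); global minimum at the origin
def rastrigin(x):
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

np.random.seed(0)
x_opt = cross_entropy_minimize(rastrigin, mu=[3.0, -3.0], sigma=[3.0, 3.0])
```

Constraints can be handled inside `f`, for example by returning a large penalty for infeasible samples.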

Relevance: 10.00%

Abstract:

The problem of regression under Gaussian assumptions is treated generally. The relationship between Bayesian prediction, regularization and smoothing is elucidated. The ideal regression is the posterior mean and its computation scales as O(n³), where n is the sample size. We show that the optimal m-dimensional linear model under a given prior is spanned by the first m eigenfunctions of a covariance operator, which is a trace-class operator. This is an infinite dimensional analogue of principal component analysis. The importance of Hilbert space methods to practical statistics is also discussed.
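For context (a sketch, not from the paper), the O(n³) cost of the posterior mean comes from factorizing the n × n covariance matrix; a minimal version with an assumed squared-exponential kernel and invented data:

```python
import numpy as np

def gp_posterior_mean(X, y, X_star, lengthscale=1.0, noise=0.1):
    """Posterior mean of GP regression with a squared-exponential kernel.
    The Cholesky factorisation of the n x n matrix is the O(n^3) step."""
    def k(A, B):
        return np.exp(-0.5 * ((A[:, None] - B[None, :]) / lengthscale) ** 2)
    K = k(X, X) + noise**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)                          # O(n^3)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return k(X_star, X) @ alpha

rng = np.random.default_rng(0)
X = np.linspace(0, 2 * np.pi, 20)
y = np.sin(X) + 0.05 * rng.standard_normal(20)
m = gp_posterior_mean(X, y, X)
```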

Relevance: 10.00%

Abstract:

The thrust of this report concerns spline theory and some of the background to spline theory and follows the development in (Wahba, 1991). We also review methods for determining hyper-parameters, such as the smoothing parameter, by Generalised Cross Validation. Splines have an advantage over Gaussian Process based procedures in that we can readily impose atmospherically sensible smoothness constraints and maintain computational efficiency. Vector splines enable us to penalise gradients of vorticity and divergence in wind fields. Two similar techniques are summarised, and improvements based on robust error functions and restricted numbers of basis functions are given. A final, brief discussion of the application of vector splines to the problem of scatterometer data assimilation highlights the problems of ambiguous solutions.
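To illustrate the hyper-parameter selection step (a sketch, not the report's method): for any linear smoother ŷ = S(λ)y, the GCV score is n‖(I−S)y‖² / tr(I−S)², minimised over λ. The kernel ridge smoother below stands in for a spline smoother, and the data are invented:

```python
import numpy as np

def gcv_score(y, S):
    """Generalised cross-validation score for a linear smoother y_hat = S y:
    n * ||(I - S) y||^2 / tr(I - S)^2."""
    n = len(y)
    resid = y - S @ y
    return n * (resid @ resid) / np.trace(np.eye(n) - S) ** 2

def smoother_matrix(X, lam, lengthscale=0.1):
    """Hat matrix of a kernel ridge smoother, S = K (K + lam I)^-1,
    used here as a stand-in for a spline smoother matrix."""
    K = np.exp(-0.5 * ((X[:, None] - X[None, :]) / lengthscale) ** 2)
    return K @ np.linalg.inv(K + lam * np.eye(len(X)))

rng = np.random.default_rng(0)
X = np.linspace(0, 1, 50)
y = np.sin(4 * np.pi * X) + 0.2 * rng.standard_normal(50)
lams = 10.0 ** np.arange(-6, 1)           # candidate smoothing parameters
best = min(lams, key=lambda lam: gcv_score(y, smoother_matrix(X, lam)))
```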

Relevance: 10.00%

Abstract:

Spectral and coherence methodologies are ubiquitous for the analysis of multiple time series. Partial coherence analysis may be used to try to determine graphical models for brain functional connectivity. The outcome of such an analysis may be considerably influenced by factors such as the degree of spectral smoothing, line and interference removal, matrix inversion stabilization and the suppression of effects caused by side-lobe leakage, the combination of results from different epochs and people, and multiple hypothesis testing. This paper examines each of these steps in turn and provides a possible path which produces relatively ‘clean’ connectivity plots. In particular we show how spectral matrix diagonal up-weighting can simultaneously stabilize spectral matrix inversion and reduce effects caused by side-lobe leakage, and use the stepdown multiple hypothesis test procedure to help formulate an interaction strength.
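To illustrate diagonal up-weighting (a sketch, not the paper's code): loading the diagonal of the coherency matrix stabilises its inversion, after which the partial coherence between channels i and j follows from the inverse matrix G as |G_ij|² / (G_ii G_jj). The 3-channel spectral matrix below is invented for illustration:

```python
import numpy as np

def partial_coherence(S, delta=0.01):
    """Partial coherence from a (cross-)spectral matrix S at one frequency,
    with diagonal up-weighting (loading) to stabilise the inversion."""
    d = np.sqrt(np.real(np.diag(S)))
    R = S / np.outer(d, d)                         # coherency matrix
    G = np.linalg.inv((R + delta * np.eye(len(S))) / (1 + delta))
    g = np.sqrt(np.real(np.diag(G)))
    P = np.abs(G / np.outer(g, g)) ** 2            # |G_ij|^2 / (G_ii G_jj)
    np.fill_diagonal(P, 0.0)                       # self-coherence not of interest
    return P

# Invented 3-channel spectral matrix at a single frequency
S = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]], dtype=complex)
P = partial_coherence(S)
```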

Relevance: 10.00%

Abstract:

Purpose - To provide a framework of accounting policy choice associated with the timing of adoption of the UK Statement of Standard Accounting Practice (SSAP) No. 20, "Foreign Currency Translation". The conceptual framework describes the accounting policy choices that firms face in a setting that is influenced by: their financial characteristics; the flexible foreign exchange rates; and the stock market response to accounting decisions. Design/methodology/approach - Following the positive accounting theory context, this paper puts into a framework the motives and choices of UK firms with regard to the adoption or deferment of the adoption of SSAP 20. The paper utilises the theoretical and empirical findings of previous studies to form and substantiate the conceptual framework. Given the UK foreign exchange setting, the framework identifies the initial stage: lack of regulation and flexibility in financial reporting; the intermediate stage: accounting policy choice; and the final stage: accounting choice and policy review. Findings - There are situations where accounting regulation contrasts with the needs and business objectives of firms and vice-versa. Thus, firms may delay the adoption up to the point where the increase in political costs can just be tolerated. Overall, the study infers that firms might have chosen to defer the adoption of SSAP 20 until they reach a certain corporate goal, or the adverse impact (if any) of the accounting change on firms' financial numbers is minimal. Thus, the determination of the timing of the adoption is a matter which is subject to the objectives of the managers in association with the market and economic conditions. The paper suggests that the flexibility in financial reporting, which may enhance the scope for income-smoothing, can be mitigated by the appropriate standardisation of accounting practice. 
Research limitations/implications - First, the study encompassed a period when firms and investors were less sophisticated users of financial information. Second, it is difficult to ascertain the decisions that firms would have taken, had the pound appreciated over the period of adoption and had the firms incurred translation losses rather than translation gains. Originality/value - This paper is useful to accounting standards setters, professional accountants, academics and investors. The study can give the accounting standard-setting bodies useful information when they prepare a change in the accounting regulation or set an appropriate date for the implementation of an accounting standard. The paper provides significant insight about the behaviour of firms and the associated impacts of financial markets and regulation on the decision-making process of firms. The framework aims to assist the market and other authorities to reduce information asymmetry and to reinforce the efficiency of the market. © Emerald Group Publishing Limited.

Relevance: 10.00%

Abstract:

This thesis is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here two new extended frameworks are derived and presented that are based on basis function expansions and local polynomial approximations of a recently proposed variational Bayesian algorithm. It is shown that the new extensions converge to the original variational algorithm and can be used for state estimation (smoothing). However, the main focus is on estimating the (hyper-) parameters of these systems (i.e. drift parameters and diffusion coefficients). The new methods are numerically validated on a range of different systems which vary in dimensionality and non-linearity. These are the Ornstein-Uhlenbeck process, for which the exact likelihood can be computed analytically, the univariate and highly non-linear, stochastic double well and the multivariate chaotic stochastic Lorenz '63 (3-dimensional model). The algorithms are also applied to the 40 dimensional stochastic Lorenz '96 system. In this investigation these new approaches are compared with a variety of other well known methods such as the ensemble Kalman filter / smoother, a hybrid Monte Carlo sampler, the dual unscented Kalman filter (for jointly estimating the system's states and model parameters) and full weak-constraint 4D-Var. Empirical analysis of their asymptotic behaviour as the observation density or the length of the time window increases is provided.
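As an illustration of one of the test systems (a sketch, not the thesis code), the stochastic double well can be simulated with the standard Euler-Maruyama scheme; the potential, noise level and step size below are arbitrary choices:

```python
import numpy as np

def euler_maruyama(drift, sigma, x0, dt, n_steps, rng):
    """Simulate dX = drift(X) dt + sigma dW with the Euler-Maruyama scheme."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        x[i + 1] = (x[i] + drift(x[i]) * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal())
    return x

# Double-well drift -V'(x) for the potential V(x) = (x^2 - 1)^2; wells at +/-1
drift = lambda x: -4.0 * x * (x**2 - 1.0)
rng = np.random.default_rng(0)
path = euler_maruyama(drift, sigma=0.5, x0=1.0, dt=0.01, n_steps=5000, rng=rng)
```

With enough noise the path occasionally hops between the two wells, which is what makes this system a hard, non-linear benchmark for smoothing algorithms.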

Relevance: 10.00%

Abstract:

In this paper, we present a framework for Bayesian inference in continuous-time diffusion processes. The new method is directly related to the recently proposed variational Gaussian Process approximation (VGPA) approach to Bayesian smoothing of partially observed diffusions. By adopting a basis function expansion (BF-VGPA), both the time-dependent control parameters of the approximate GP process and its moment equations are projected onto a lower-dimensional subspace. This allows us both to reduce the computational complexity and to eliminate the time discretisation used in the previous algorithm. The new algorithm is tested on an Ornstein-Uhlenbeck process. Our preliminary results show that the BF-VGPA algorithm provides a reasonably accurate state estimation using a small number of basis functions.
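For context (a sketch, not the paper's algorithm), the Ornstein-Uhlenbeck process is the standard benchmark here precisely because its transition density is Gaussian, so the exact likelihood is available in closed form and approximate methods can be checked against it. The parameters below are invented:

```python
import numpy as np

def ou_exact_loglik(x, dt, theta, sigma):
    """Exact log-likelihood of an OU path dX = -theta X dt + sigma dW
    observed at regular spacing dt (the transitions form a Gaussian AR(1))."""
    a = np.exp(-theta * dt)                        # AR(1) coefficient
    var = sigma**2 * (1.0 - a**2) / (2.0 * theta)  # transition variance
    resid = x[1:] - a * x[:-1]
    n = len(resid)
    return -0.5 * n * np.log(2 * np.pi * var) - 0.5 * resid @ resid / var

# Simulate a path exactly from the same Gaussian transition density
rng = np.random.default_rng(1)
theta, sigma, dt, n = 2.0, 1.0, 0.05, 2000
a = np.exp(-theta * dt)
sd = np.sqrt(sigma**2 * (1 - a**2) / (2 * theta))
x = np.zeros(n)
for i in range(1, n):
    x[i] = a * x[i - 1] + sd * rng.standard_normal()

ll_true = ou_exact_loglik(x, dt, theta=2.0, sigma=1.0)
ll_wrong = ou_exact_loglik(x, dt, theta=8.0, sigma=1.0)
```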

Relevance: 10.00%

Abstract:

This work is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here a new extended framework is derived that is based on a local polynomial approximation of a recently proposed variational Bayesian algorithm. The paper begins by showing that the new extension of this variational algorithm can be used for state estimation (smoothing) and converges to the original algorithm. However, the main focus is on estimating the (hyper-) parameters of these systems (i.e. drift parameters and diffusion coefficients). The new approach is validated on a range of different systems which vary in dimensionality and non-linearity. These are the Ornstein–Uhlenbeck process, the exact likelihood of which can be computed analytically, the univariate and highly non-linear, stochastic double well and the multivariate chaotic stochastic Lorenz ’63 (3D model). As a special case the algorithm is also applied to the 40 dimensional stochastic Lorenz ’96 system. In our investigation we compare this new approach with a variety of other well known methods, such as hybrid Monte Carlo, the dual unscented Kalman filter and the full weak-constraint 4D-Var algorithm, and analyse empirically their asymptotic behaviour as the observation density or the length of the time window increases. In particular we show that we are able to estimate parameters in both the drift (deterministic) and the diffusion (stochastic) part of the model evolution equations using our new methods.

Relevance: 10.00%

Abstract:

In the present work the neutron emission spectra from a graphite cube, and from natural uranium, lithium fluoride, graphite, lead and steel slabs bombarded with 14.1 MeV neutrons were measured to test nuclear data and calculational methods for D-T fusion reactor neutronics. The neutron spectra were measured with an organic scintillator using a pulse shape discrimination technique based on a charge comparison method to reject the gamma-ray counts. A computer programme was used to analyse the experimental data by the differentiation unfolding method. The 14.1 MeV neutron source was obtained from the T(d,n)⁴He reaction by the bombardment of a T-Ti target with a deuteron beam of energy 130 keV. The total neutron yield was monitored by the associated particle method using a silicon surface barrier detector. The numerical calculations were performed using the one-dimensional discrete-ordinate neutron transport code ANISN with the ZZ-FEWG 1/31-1F cross section library. A computer programme based on a Gaussian smoothing function was used to smooth the calculated data and to match the experimental data. There was general agreement between measured and calculated spectra for the range of materials studied. The ANISN calculations, carried out with P3-S8 approximations together with representation of the slab assemblies by a hollow sphere with no reflection at the internal boundary, were adequate to model the experimental data, and hence it appears that the cross section set is satisfactory and for the materials tested needs no modification in the range 14.1 MeV to 2 MeV. It would also be possible to carry out a study on fusion reactor blankets, using cylindrical geometry and including a series of concentric cylindrical shells to represent the torus wall, possible neutron converter and breeder regions, and reflector and shielding regions.
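To illustrate only the smoothing step (a sketch; the original programme is not reproduced here), smoothing a binned spectrum by convolution with a normalised Gaussian kernel can be done as follows. The toy spectrum, peak width and bin count are invented:

```python
import numpy as np

def gaussian_smooth(y, sigma_bins):
    """Smooth a binned spectrum by convolution with a normalised Gaussian
    kernel truncated at four standard deviations."""
    half = int(4 * sigma_bins)
    t = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (t / sigma_bins) ** 2)
    kernel /= kernel.sum()
    return np.convolve(y, kernel, mode="same")

# Invented toy spectrum: a 14.1 MeV peak plus noise over the 2-14.1 MeV range
rng = np.random.default_rng(0)
E = np.linspace(2.0, 14.1, 200)
spectrum = np.exp(-0.5 * ((E - 14.1) / 0.5) ** 2) + 0.05 * rng.standard_normal(200)
smoothed = gaussian_smooth(spectrum, sigma_bins=3)
```

Matching the kernel width to the detector's energy resolution is what lets a calculated spectrum be compared bin-by-bin with a measured one.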

Relevance: 10.00%

Abstract:

An investigation is carried out into the design of a small local computer network for eventual implementation on the University of Aston campus. Microprocessors are investigated as a possible choice for use as a node controller for reasons of cost and reliability. Since the network will be local, high speed lines of megabit order are proposed. After an introduction to several well known networks, various aspects of networks are discussed including packet switching, functions of a node and host-node protocol. Chapter three develops the network philosophy with an introduction to microprocessors. Various organisations of microprocessors into multicomputer and multiprocessor systems are discussed, together with methods of achieving reliable computing. Chapter four presents the simulation model and its implementation as a computer program. The major modelling effort is to study the behaviour of messages queueing for access to the network and the message delay experienced on the network. Use is made of spectral analysis to determine the sampling frequency while Exponentially Weighted Moving Averages are used for data smoothing.
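To illustrate the smoothing technique mentioned (a sketch, not the original program), an exponentially weighted moving average updates s[t] = α·x[t] + (1−α)·s[t−1]; the delay values and α below are invented:

```python
def ewma(data, alpha=0.2):
    """Exponentially weighted moving average:
    s[t] = alpha * x[t] + (1 - alpha) * s[t-1], seeded with the first value."""
    s = [data[0]]
    for x in data[1:]:
        s.append(alpha * x + (1 - alpha) * s[-1])
    return s

delays = [10, 12, 50, 11, 13, 12, 48, 10]   # invented message delays (ms)
smoothed = ewma(delays)
```

Smaller α gives heavier smoothing, damping transient delay spikes while tracking the long-run level of the queue.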

Relevance: 10.00%

Abstract:

Removing noise from signals which are piecewise constant (PWC) is a challenging signal processing problem that arises in many practical scientific and engineering contexts. In the first paper (part I) of this series of two, we presented background theory building on results from the image processing community to show that the majority of these algorithms, and more proposed in the wider literature, are each associated with a special case of a generalized functional that, when minimized, solves the PWC denoising problem, and we showed how the minimizer can be obtained by a range of computational solver algorithms. In this second paper (part II), using the understanding developed in part I, we introduce several novel PWC denoising methods, which, for example, combine the global behaviour of mean shift clustering with the local smoothing of total variation diffusion, and show example solver algorithms for these new methods. Comparisons between these methods are performed on synthetic and real signals, revealing that our new methods have a useful role to play. Finally, overlaps between the generalized methods of these two papers and others such as wavelet shrinkage, hidden Markov models, and piecewise smooth filtering are touched on.
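As an illustration of one ingredient (a sketch, not the paper's generalized functional), a discrete total variation diffusion moves each sample a fixed small amount towards local agreement, flattening small noise bumps into plateaus while leaving large jumps largely intact. The step size, iteration count and test signal below are arbitrary:

```python
import numpy as np

def tv_diffusion(y, dt=0.01, n_iter=300):
    """Explicit total-variation diffusion for 1-D signals: each sample moves
    by dt times the difference of the signs of its forward and backward
    differences, which erodes local extrema but not monotone runs."""
    x = y.astype(float)
    for _ in range(n_iter):
        fwd = np.sign(np.r_[x[1:] - x[:-1], 0.0])   # sign of forward difference
        bwd = np.r_[0.0, fwd[:-1]]                  # sign of backward difference
        x = x + dt * (fwd - bwd)
    return x

rng = np.random.default_rng(0)
clean = np.r_[np.zeros(50), 2.0 * np.ones(50)]      # one step edge
noisy = clean + 0.05 * rng.standard_normal(100)
denoised = tv_diffusion(noisy)
```

Run too long, this flow also erodes the genuine jump, which is why part I treats the functional view, where a data-fidelity term halts the over-smoothing.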

Relevance: 10.00%

Abstract:

This paper examines the determinants of short-term wage dynamics, using a sample of large Hungarian companies for the period 1996-1999. We test the basic implications of an efficient contract model of bargaining between the incumbent employees and the managers, which we are unable to reject. In particular, there are structural differences between the ownership sectors consistent with our prior knowledge on relative bargaining strength and unionisation measures. A stronger bargaining position of workers leads to a higher ability-to-pay elasticity of wages, and a lower outside-option elasticity. Our results indicate that while the bargaining position of workers in domestic privatised firms may be weaker than in the state sector, the more robust difference relates to state sector workers versus the privatised firms with majority foreign ownership. We examine several extensions. We augment the bargaining specification with controls related to workers' skills and find that the basic findings are robust to this. We take a closer look at the outside options of the workers. We find some interactive effects, where unemployment modifies the impact of the availability of rents on wages. We interpret our results as an indication that the bargaining power of workers may be affected by changes in their outside options. We also experiment with one concise indicator of the reservation wage which is closest to the theoretical model specification and combines sectoral wages, unemployment benefits and regional unemployment levels. We found that this measure performs well. Finally, we found that while the responsiveness of wages to ability to pay is higher in the state sector, variation in wage dynamics is lower. This may indicate some wage smoothing in the state sector, consistent with the preferences of employees.