Abstract:
This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals, a topic that plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally-modulated sinusoidal signals (such as frequency shift keying signals), the approach of digital phase-locked loops (DPLLs) is considered in Part-I of this thesis. For FM signals, the approach of time-frequency analysis is considered in Part-II. In Part-I we utilize sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) introduced significant advantages over other existing DPLLs, and in the last ten years many efforts have been made to improve its performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The HT can only be realized approximately, using a finite impulse response (FIR) digital filter; this realization introduces further complexity in the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift, giving rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. TDTL preserves the main advantages of the DTL despite its reduced structure. An application of TDTL to FSK demodulation is also considered.
This idea of replacing the HT by a time delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behaviors of the HT and the time delay in the presence of additive Gaussian noise, and on this basis the behavior of the first- and second-order TDTLs in additive Gaussian noise has been analyzed. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications; an example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques must be utilized. For IF estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis; it is computationally less expensive and more effective in dealing with multicomponent signals, which are the main focus of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain. Many real-life and synthetic signals are of a multicomponent nature, and there is little in the literature concerning IF estimation of such signals; this is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using quadratic time-frequency distributions has been analyzed, and a class of time-frequency distributions that are more suitable for this purpose has been proposed.
The kernels of this class are time-only, or one-dimensional, rather than time-lag (two-dimensional) kernels; hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
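The central trick of the abstract above, replacing the Hilbert transformer by a time delay, can be illustrated with a toy sketch (this is not the TDTL itself; the function name and parameter values are mine): a delay of D samples shifts a tone of normalized frequency w by w*D radians, so it acts as a 90-degree phase shifter only when w*D = pi/2, i.e. the shift is signal-dependent, exactly as noted.

```python
import math

def phase_via_delay(f_hz, fs_hz, phi, n):
    """Recover the phase of x(t) = cos(2*pi*f*t + phi) at sample n using a
    delayed sample in place of a Hilbert-transformed (exact quadrature) one.
    The delay D produces a phase shift of w*D radians, which equals 90
    degrees only when D matches a quarter period at f_hz: a signal-dependent
    shift, unlike the signal-independent 90 degrees of the Hilbert transformer."""
    w = 2 * math.pi * f_hz / fs_hz        # normalized frequency, rad/sample
    d = round((math.pi / 2) / w)          # delay approximating 90 deg at f_hz
    theta = w * n + phi                   # true instantaneous phase
    x_now = math.cos(theta)               # in-phase sample
    x_del = math.cos(w * (n - d) + phi)   # delayed sample, ~ sin(theta)
    est = math.atan2(x_del, x_now) % (2 * math.pi)
    return est, theta % (2 * math.pi)

# at fs = 8 kHz a 1 kHz tone has an exact quarter period of 2 samples, so
# the arctan phase estimate is exact here; off-nominal input frequencies
# incur an error, which is the price of dropping the Hilbert transformer
est, true = phase_via_delay(1000.0, 8000.0, 0.3, 50)
```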
Abstract:
The concept of radar was developed for the estimation of the distance (range) and velocity of a target from a receiver. The distance measurement is obtained by measuring the time taken for the transmitted signal to propagate to the target and return to the receiver. The target's velocity is determined by measuring the Doppler-induced frequency shift of the returned signal caused by the rate of change of the time delay from the target. As researchers further developed conventional radar systems it became apparent that additional information was contained in the backscattered signal, and that this information could in fact be used to describe the shape of the target itself. This is due to the fact that a target can be considered to be a collection of individual point scatterers, each of which has its own velocity and time delay. Delay-Doppler parameter estimation of each of these point scatterers thus corresponds to a mapping of the target's range and cross-range, producing an image of the target. Much research has been done in this area since the early radar imaging work of the 1960s. At present, radar imaging falls into two main categories. The first is related to the case where the backscattered signal is considered to be deterministic; the second to the case where the backscattered signal is of a stochastic nature. In both cases the information which describes the target's scattering function is extracted by use of the ambiguity function, a function which correlates the backscattered signal in time and frequency with the transmitted signal. In practical situations it is often necessary to have the transmitter and the receiver of the radar system sited at different locations. The problem in these situations is that a reference signal must then be present in order to calculate the ambiguity function.
This causes an additional problem in that detailed phase information about the transmitted signal is then required at the receiver. It is this latter problem which has led to the investigation of radar imaging using time-frequency distributions. As will be shown in this thesis, the phase information about the transmitted signal can be extracted from the backscattered signal using time-frequency distributions. The principal aim of this thesis was the development and discussion of the theory of radar imaging using time-frequency distributions. Consideration is first given to the case where the target is diffuse, i.e. where the backscattered signal has temporal stationarity and a spatially white power spectral density. The complementary situation is also investigated, i.e. where the target is no longer diffuse, but some degree of correlation exists between the time-frequency points. Computer simulations are presented to demonstrate the concepts and theories developed in the thesis. For the proposed radar system to be practically realisable, both the time-frequency distributions and the associated algorithms developed must be able to be implemented in a timely manner. For this reason an optical architecture is proposed. This architecture is specifically designed to obtain the required time and frequency resolution when using laser radar imaging. The complex light amplitude distributions produced by this architecture have been computer simulated using an optical compiler.
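The delay-Doppler correlation that the ambiguity function performs can be sketched in a few lines. This is a generic discrete narrowband version, not the thesis's formulation; the linear-FM pulse and the target parameters below are invented for illustration.

```python
import cmath, math

def ambiguity(tx, rx, delay, doppler_hz, fs_hz):
    """Discrete narrowband cross-ambiguity value at one (delay, Doppler)
    cell: correlate the received signal against a delayed, Doppler-shifted
    replica of the transmitted signal.  A point scatterer produces a peak
    at its own delay and Doppler coordinates."""
    acc = 0j
    for k in range(len(rx)):
        if 0 <= k - delay < len(tx):
            acc += (rx[k] * tx[k - delay].conjugate()
                    * cmath.exp(-2j * math.pi * doppler_hz * k / fs_hz))
    return abs(acc) / len(rx)

# toy scene: a linear-FM pulse scattered by one point target with a
# 5-sample delay and a 100 Hz Doppler shift (sampling rate 1 kHz)
fs = 1000.0
tx = [cmath.exp(1j * math.pi * 0.01 * k * k) for k in range(256)]
rx = [tx[k - 5] * cmath.exp(2j * math.pi * 100.0 * k / fs) if k >= 5 else 0j
      for k in range(256)]
peak = ambiguity(tx, rx, 5, 100.0, fs)   # large only at the true (delay, Doppler)
```

Evaluating the surface over a grid of delays and Dopplers maps each scatterer to a peak, which is the range/cross-range imaging principle the abstract describes.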
Abstract:
This dissertation is primarily an applied statistical modelling investigation, motivated by a case study comprising real data and real questions. Theoretical questions on modelling and computation of normalization constants arose from pursuit of these data analytic questions. The essence of the thesis can be described as follows. Consider binary data observed on a two-dimensional lattice. A common problem with such data is the ambiguity of zeroes recorded. These may represent zero response given some threshold (presence) or that the threshold has not been triggered (absence). Suppose that the researcher wishes to estimate the effects of covariates on the binary responses, whilst taking into account underlying spatial variation, which is itself of some interest. This situation arises in many contexts, and the dingo, cypress and toad case studies described in the motivation chapter are examples of it. Two main approaches to modelling and inference are investigated in this thesis. The first is frequentist and based on generalized linear models, with spatial variation modelled by using a block structure or by smoothing the residuals spatially. The EM algorithm can be used to obtain point estimates, coupled with bootstrapping or asymptotic MLE estimates for standard errors. The second approach is Bayesian and based on a three- or four-tier hierarchical model, comprising a logistic regression with covariates for the data layer, a binary Markov random field (MRF) for the underlying spatial process, and suitable priors for the parameters in these main models. The three-parameter autologistic model is a particular MRF of interest. Markov chain Monte Carlo (MCMC) methods comprising hybrid Metropolis/Gibbs samplers are suitable for computation in this situation. Model performance can be gauged by MCMC diagnostics. Model choice can be assessed by incorporating another tier in the modelling hierarchy.
This requires evaluation of a normalization constant, a notoriously difficult problem. Difficulty with estimating the normalization constant for the MRF can be overcome by using a path integral approach, although this is a highly computationally intensive method. Different methods of estimating ratios of normalization constants (NCs) are investigated, including importance sampling Monte Carlo (ISMC), dependent Monte Carlo based on MCMC simulations (MCMC), and reverse logistic regression (RLR). I develop an idea that is present, though not fully developed, in the literature, and propose the integrated mean canonical statistic (IMCS) method for estimating log NC ratios for binary MRFs. The IMCS method falls within the framework of the newly identified path sampling methods of Gelman & Meng (1998) and outperforms ISMC, MCMC and RLR. It also does not rely on simplifying assumptions, such as ignoring spatio-temporal dependence in the process. A thorough investigation is made of the application of IMCS to the three-parameter autologistic model. This work introduces background computations required for the full implementation of the four-tier model in Chapter 7. Two different extensions of the three-tier model to a four-tier version are investigated. The first extension incorporates temporal dependence in the underlying spatio-temporal process. The second extension allows the successes and failures in the data layer to depend on time. The MCMC computational method is extended to incorporate the extra layer. A major contribution of the thesis is the development, for the first time, of a fully Bayesian approach to inference for these hierarchical models.
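The identity underlying path-sampling estimators of this family is that the derivative of log Z(theta) equals the expected canonical statistic, so a log NC ratio is an integral of E_theta[S] along a path in theta. This can be checked on a lattice small enough for exact enumeration; the toy below uses a 3x3 single-parameter Ising-type model rather than the thesis's three-parameter autologistic, and computes the expectations exactly where a real application would estimate them by MCMC.

```python
import itertools, math

def ising_stat(config, n):
    """Canonical statistic S: number of agreeing nearest-neighbour pairs
    on an n x n lattice with free boundaries (config is a flat 0/1 tuple)."""
    s = 0
    for i in range(n):
        for j in range(n):
            if j + 1 < n and config[i * n + j] == config[i * n + j + 1]:
                s += 1
            if i + 1 < n and config[i * n + j] == config[(i + 1) * n + j]:
                s += 1
    return s

def log_nc(theta, n):
    """Exact log normalization constant by brute-force enumeration
    (feasible only for tiny lattices; used here as ground truth)."""
    return math.log(sum(math.exp(theta * ising_stat(c, n))
                        for c in itertools.product((0, 1), repeat=n * n)))

def mean_stat(theta, n):
    """E_theta[S], exact here; in practice this is the MCMC-estimated
    quantity that a path-sampling method averages."""
    z = num = 0.0
    for c in itertools.product((0, 1), repeat=n * n):
        s = ising_stat(c, n)
        w = math.exp(theta * s)
        z += w
        num += s * w
    return num / z

def path_sampling_log_ratio(theta0, theta1, n, grid=20):
    """log Z(theta1) - log Z(theta0) = integral of E_theta[S] d(theta),
    approximated by the trapezoid rule along a grid of theta values."""
    h = (theta1 - theta0) / grid
    vals = [mean_stat(theta0 + k * h, n) for k in range(grid + 1)]
    return h * (vals[0] / 2 + sum(vals[1:-1]) + vals[-1] / 2)
```

Comparing `path_sampling_log_ratio(0.0, 0.5, 3)` against the enumerated `log_nc` difference confirms the identity; the practical appeal is that only expectations, never Z itself, need to be estimated.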
Abstract:
Nitrous oxide (N2O) is primarily produced by the microbially mediated nitrification and denitrification processes in soils. Its production is influenced by a suite of climate (i.e. temperature and rainfall) and soil (physical and chemical) variables, by interacting soil and plant nitrogen (N) transformations (either competing for or supplying substrates), as well as by land management practices. It is not surprising that N2O emissions are highly variable both spatially and temporally. Computer simulation models, which can integrate all of these variables, are required for the complex task of providing quantitative determinations of N2O emissions. Numerous simulation models have been developed to predict N2O production. Each model has its own philosophy in constructing simulation components, as well as its own performance strengths. The models range from those that attempt to comprehensively simulate all soil processes to more empirical approaches requiring minimal input data. These N2O simulation models can be classified into three categories: laboratory, field and regional/global levels. Process-based field-scale N2O simulation models, which simulate whole agroecosystems and can be used to develop N2O mitigation measures, are the most widely used. The current challenge is how to scale up the relatively more robust field-scale models to catchment, regional and national scales. This paper reviews the development history, main construction components, strengths, limitations and applications of the N2O emission models published in the literature. The three scale levels are considered, and the current knowledge gaps and challenges in modelling N2O emissions from soils are discussed.
Abstract:
Estimates of potential and actual C sequestration require areal information about various types of management activities. Forest surveys, land use data, and agricultural statistics contribute information enabling calculation of the impacts of current and historical land management on C sequestration in biomass (in forests) or in soil (in agricultural systems). Unfortunately, little information exists on the distribution of the various management activities that can impact soil C content in grassland systems. Limited information of this type restricts our ability to carry out bottom-up estimates of the current C balance of grasslands or to assess the potential for grasslands to act as C sinks with changes in management. Here we review currently available information about grassland management, how that information could be related to information about the impacts of management on soil C stocks, information that may be available in the future, and needs that remain to be filled before in-depth assessments may be carried out. We also evaluate constraints induced by variability in information sources within and between countries. It is readily apparent that activity data for grassland management are collected less frequently and on a coarser scale than data for forest or agricultural inventories, and that grassland activity data cannot be directly translated into IPCC-type factors as is done for IPCC inventories of agricultural soils. However, those management data that are available can serve to delineate broad-scale differences in management activities within regions in which soil C is likely to change in response to changes in management. This, coupled with the distinct possibility of more intensive surveys planned in the future, may enable more accurate assessments of grassland C dynamics, with higher resolution both spatially and in the number of management activities.
Abstract:
The potential to sequester atmospheric carbon in agricultural and forest soils to offset greenhouse gas emissions has generated interest in measuring changes in soil carbon resulting from changes in land management. However, inherent spatial variability of soil carbon limits the precision of measurement of changes in soil carbon and hence, the ability to detect changes. We analyzed variability of soil carbon by intensively sampling sites under different land management as a step toward developing efficient soil sampling designs. Sites were tilled cropland and a mixed deciduous forest in Tennessee, and old-growth and second-growth coniferous forest in western Washington, USA. Six soil cores within each of three microplots were taken as an initial sample and an additional six cores were taken to simulate resampling. Soil C variability was greater in Washington than in Tennessee, and greater in less disturbed than in more disturbed sites. Using this protocol, our data suggest that differences on the order of 2.0 Mg C ha⁻¹ could be detected by collection and analysis of cores from at least five (tilled) or two (forest) microplots in Tennessee. More spatial variability in the forested sites in Washington increased the minimum detectable difference, but these systems, consisting of low C content sandy soil with irregularly distributed pockets of organic C in buried logs, are likely to rank among the most spatially heterogeneous of systems. Our results clearly indicate that consistent intramicroplot differences at all sites will enable detection of much more modest changes if the same microplots are resampled.
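The detection problem above is essentially a power calculation. A generic normal-approximation sketch (the formula and its defaults are standard textbook choices, not the paper's actual protocol, and the standard deviation used below is invented) shows how the minimum detectable difference shrinks with the number of microplots:

```python
import math

def min_detectable_difference(sd, n, z_alpha=1.96, z_beta=0.84):
    """Approximate minimum detectable difference between two sampling
    campaigns: MDD = (z_a + z_b) * sqrt(2/n) * sd, where sd is the
    between-microplot standard deviation (e.g. in Mg C per ha), n the
    number of microplots, alpha = 0.05 two-sided and power = 0.80."""
    return (z_alpha + z_beta) * math.sqrt(2.0 / n) * sd

def microplots_needed(sd, target_mdd):
    """Smallest n for which the MDD falls below a target difference."""
    n = 2
    while min_detectable_difference(sd, n) > target_mdd:
        n += 1
    return n
```

With an illustrative sd of 1.0 Mg C ha⁻¹, four microplots would suffice to detect a 2.0 Mg C ha⁻¹ difference under these assumptions; the paper's figures of five (tilled) and two (forest) microplots reflect its own measured site variances and design.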
Abstract:
Development of tissue-engineered constructs for skeletal regeneration of large critical-sized defects requires the identification of a sustained mineralizing cell source and careful optimization of scaffold architecture and surface properties. We have recently reported that Runx2-genetically engineered primary dermal fibroblasts express a mineralizing phenotype in monolayer culture, highlighting their potential as an autologous osteoblastic cell source which can be easily obtained in large quantities. The objective of the present study was to evaluate the osteogenic potential of Runx2-expressing fibroblasts when cultured in vitro on three commercially available scaffolds with divergent properties: fused deposition-modeled polycaprolactone (PCL), gas-foamed polylactide-co-glycolide (PLGA), and fibrous collagen disks. We demonstrate that the mineralization capacity of Runx2-engineered fibroblasts is scaffold dependent, with collagen foams exhibiting ten-fold higher mineral volume compared to PCL and PLGA matrices. Constructs were differentially colonized by genetically modified fibroblasts, but scaffold-directed changes in DNA content did not correlate with trends in mineral deposition. Sustained expression of Runx2 upregulated osteoblastic gene expression relative to unmodified control cells, and the magnitude of this expression was modulated by scaffold properties. Histological analyses revealed that matrix mineralization co-localized with cellular distribution, which was confined to the periphery of fibrous collagen and PLGA sponges and around the circumference of PCL microfilaments. Finally, FTIR spectroscopy verified that mineral deposits within all Runx2-engineered scaffolds displayed the chemical signature characteristic of carbonate-containing, poorly crystalline hydroxyapatite. 
These results highlight the important effect of scaffold properties on the capacity of Runx2-expressing primary dermal fibroblasts to differentiate into a mineralizing osteoblastic phenotype for bone tissue engineering applications.
Abstract:
Advances in safety research, aimed at improving the collective understanding of motor vehicle crash causation, rest upon the pursuit of numerous lines of inquiry. The research community has focused on analytical methods development (negative binomial specifications, simultaneous equations, etc.), on better experimental designs (before-after studies, comparison sites, etc.), on improving exposure measures, and on model specification improvements (additive terms, non-linear relations, etc.). One might think of different lines of inquiry in terms of 'low-lying fruit': areas of inquiry that might provide significant improvements in understanding crash causation. It is the contention of this research that omitted variable bias caused by the exclusion of important variables is an important line of inquiry in safety research. In particular, spatially related variables are often difficult to collect and are omitted from crash models, yet they offer a significant ability to better understand the contributing factors to crashes. This study, believed to represent a unique contribution to the safety literature, develops and examines the role of a sizeable set of spatial variables in intersection crash occurrence. In addition to commonly considered traffic and geometric variables, the examined spatial factors include local influences of weather, sun glare, proximity to drinking establishments, and proximity to schools. The results indicate that inclusion of these factors yields a significant improvement in model explanatory power, and the results also generally agree with expectation. The research illuminates the importance of spatial variables in safety research and the negative consequences of their omission.
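The omitted-variable effect the authors describe can be reproduced in a toy Poisson crash model. Everything below (covariates, rates, coefficients, correlation structure) is invented for illustration: a spatial factor such as glare, correlated with traffic, inflates the apparent traffic effect when it is left out of the model.

```python
import math, random

def poisson(rng, lam):
    """Knuth's Poisson sampler (the stdlib random module has none)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(7)
B_TRAFFIC, B_GLARE = 0.5, 0.7       # true log-rate coefficients (invented)
data = []
for _ in range(20000):
    x = rng.random() < 0.5                  # high-traffic intersection?
    z = rng.random() < (0.8 if x else 0.2)  # glare, correlated with traffic
    y = poisson(rng, math.exp(B_TRAFFIC * x + B_GLARE * z))
    data.append((x, z, y))

def mean_y(cond):
    ys = [y for (x, z, y) in data if cond(x, z)]
    return sum(ys) / len(ys)

# traffic effect estimated within the glare-free stratum (correct model)
b_full = math.log(mean_y(lambda x, z: x and not z)
                  / mean_y(lambda x, z: not x and not z))
# traffic effect with glare omitted: absorbs the correlated glare effect
b_omit = math.log(mean_y(lambda x, z: x) / mean_y(lambda x, z: not x))
```

Here `b_full` recovers roughly the true 0.5 while `b_omit` is biased upward, the signature of omitting a covariate positively correlated with both exposure and outcome.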
Abstract:
Polarising the issue of governance is the increasingly acknowledged role of airports in regional economic development, both as significant sources of direct employment and as attractants of commerce through enhanced mobility (Vickerman, Spiekermann & Wegener 1999; Hakfoort, Poot & Rietveld 2001). Most airports were once considered spatially removed from their cities, but as cities have expanded their airports no longer sit distinct from the urban environment. This newfound spatial proximity means that decisions on land use and development on either city or airport land are likely to have impacts that affect one another in the short or long term, or both (Stevens, Baker and Freestone 2007). These impacts increase the demand for decision making that finds ways of integrating strategies for future development, to ensure that airport developments do not impede the sustainable growth of the city, and likewise that city developments do not impede the sustainable growth of the airport (Gillen 2006). However, questions of how, under what conditions, and to what extent decision-making integration might be suitable for "airport regions" are yet to be explored, let alone answered.
Abstract:
On the microscale, migration, proliferation and death are crucial in the development, homeostasis and repair of an organism; on the macroscale, such effects are important in the sustainability of a population in its environment. Depending on the relative rates of migration, proliferation and death, spatial heterogeneity may arise within an initially uniform field; this leads to the formation of spatial correlations and can have a negative impact upon population growth. Usually, such effects are neglected in modeling studies, and simple phenomenological descriptions, such as the logistic model, are used to model population growth. In this work we outline some methods for analyzing exclusion processes which include agent proliferation, death and motility in two and three spatial dimensions with spatially homogeneous initial conditions. The mean-field description for these types of processes is of logistic form; we show that, under certain parameter conditions, such systems may display large deviations from the mean field, and suggest computationally tractable methods to correct the logistic-type description.
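A minimal exclusion process of the kind described can be simulated directly and compared with its logistic mean-field limit. The lattice size, rates and random-sequential update rule below are invented for illustration and are far simpler than the authors' analysis:

```python
import math, random

def simulate_exclusion(size=40, p_move=1.0, p_prolif=0.05, p_die=0.0,
                       steps=200, density0=0.05, seed=1):
    """Random-sequential exclusion process on a periodic size x size
    lattice: per step, each agent may die, attempt a nearest-neighbour
    move, and attempt to place a daughter on a nearest-neighbour site;
    attempts onto occupied sites are aborted (volume exclusion).
    Returns the final occupied fraction."""
    rng = random.Random(seed)
    cells = {(i, j) for i in range(size) for j in range(size)
             if rng.random() < density0}
    nbr = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(steps):
        for (i, j) in rng.sample(sorted(cells), len(cells)):
            if (i, j) not in cells:          # site vacated earlier this step
                continue
            if rng.random() < p_die:
                cells.discard((i, j))
                continue
            if rng.random() < p_move:
                di, dj = rng.choice(nbr)
                t = ((i + di) % size, (j + dj) % size)
                if t not in cells:
                    cells.discard((i, j))
                    cells.add(t)
                    i, j = t
            if rng.random() < p_prolif:
                di, dj = rng.choice(nbr)
                t = ((i + di) % size, (j + dj) % size)
                if t not in cells:
                    cells.add(t)
    return len(cells) / size ** 2

def logistic(t, r, c0):
    """Mean-field (logistic) density prediction for the same rates."""
    e = math.exp(r * t)
    return c0 * e / (1 - c0 + c0 * e)
```

With high motility the agents stay well mixed and the simulated density tracks the logistic curve; lowering `p_move` lets spatial correlations build up, so the mean-field description overestimates growth, which is the deviation the abstract refers to.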
Abstract:
The Sascha-Pelligrini low-sulphidation epithermal system is located on the western edge of the Deseado Massif, Santa Cruz Province, Argentina. Outcrop sampling has returned values of up to 160 g/t gold and 796 g/t silver, with Mirasol Resources and Coeur d'Alene Mines currently exploring the property. Detailed mapping of the volcanic stratigraphy has defined three units that comprise the middle Jurassic Chon Aike Formation and two units that comprise the upper Jurassic La Matilde Formation. The Chon Aike Formation consists of rhyodacite ignimbrites and tuffs, with the La Matilde Formation including rhyolite ash and lithic tuffs. The volcanic sequence is intruded by a large flow-banded rhyolite dome, with small, spatially restricted granodiorite dykes and sills cropping out across the study area. ASTER multispectral mineral mapping, combined with PIMA (Portable Infrared Mineral Analyser) and XRD (X-ray diffraction) analysis, defines an alteration pattern that zones from laumontite-montmorillonite, to illite-pyrite-chlorite, followed by a quartz-illite-smectite-pyrite-adularia vein selvage. Supergene kaolinite and steam-heated acid-sulphate kaolinite-alunite-opal alteration horizons crop out along the Sascha Vein trend and at Pelligrini respectively. Paragenetically, epithermal veining varies from chalcedonic to saccharoidal with minor bladed textures, colloform/crustiform-banded with visible electrum and acanthite, crustiform-banded grey chalcedonic to jasperoidal with fine pyrite, and crystalline comb quartz. Geothermometry of mineralised veins constrains formation temperatures to between 174.8 and 205.1 °C, correlating with the stability field of the interstratified illite-smectite vein selvage. Vein morphology, mineralogy and associated alteration are controlled by host rock rheology, permeability, and the depth of the palaeo-water table.
Mineralisation within ginguro banded veins resulted from fluctuating fluid pH associated with selenide-rich magmatic pulses, pressure release boiling and wall-rock silicate buffering. The study of the Sascha-Pelligrini epithermal system will form the basis for a deposit-specific model helping to clarify the current understanding of epithermal deposits, and may serve as a template for exploration of similar epithermal deposits throughout Santa Cruz.
Abstract:
To successfully navigate their habitats, many mammals use a combination of two mechanisms, path integration and calibration using landmarks, which together enable them to estimate their location and orientation, or pose. In large natural environments, both these mechanisms are characterized by uncertainty: the path integration process is subject to the accumulation of error, while landmark calibration is limited by perceptual ambiguity. It remains unclear how animals form coherent spatial representations in the presence of such uncertainty. Navigation research using robots has determined that uncertainty can be effectively addressed by maintaining multiple probabilistic estimates of a robot's pose. Here we show how conjunctive grid cells in dorsocaudal medial entorhinal cortex (dMEC) may maintain multiple estimates of pose using a brain-based robot navigation system known as RatSLAM. Based both on rodent spatially-responsive cells and functional engineering principles, the cells at the core of the RatSLAM computational model have similar characteristics to rodent grid cells, which we demonstrate by replicating the seminal Moser experiments. We apply the RatSLAM model to a new experimental paradigm designed to examine the responses of a robot or animal in the presence of perceptual ambiguity. Our computational approach enables us to observe short-term population coding of multiple location hypotheses, a phenomenon which would not be easily observable in rodent recordings. We present behavioral and neural evidence demonstrating that the conjunctive grid cells maintain and propagate multiple estimates of pose, enabling the correct pose estimate to be resolved over time even without uniquely identifying cues. While recent research has focused on the grid-like firing characteristics, accuracy and representational capacity of grid cells, our results identify a possible critical and unique role for conjunctive grid cells in filtering sensory uncertainty. 
We anticipate our study to be a starting point for animal experiments that test navigation in perceptually ambiguous environments.
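RatSLAM itself maintains pose hypotheses in a continuous attractor network of pose cells; as a generic stand-in, a toy Bayes (histogram) filter on a one-dimensional corridor shows the same qualitative behaviour the abstract reports: an ambiguous landmark leaves the belief multimodal, and motion plus a second observation collapses it onto the correct pose. The world layout and probabilities below are invented.

```python
def normalize(b):
    s = sum(b)
    return [x / s for x in b]

def predict(belief, noise=0.1):
    """Motion update: intend one cell to the right, with diffusion
    standing in for accumulated path-integration error."""
    n = len(belief)
    out = [0.0] * n
    for i, p in enumerate(belief):
        out[(i + 1) % n] += (1 - noise) * p      # intended step
        out[i] += (noise / 2) * p                # undershoot
        out[(i + 2) % n] += (noise / 2) * p      # overshoot
    return out

def update(belief, world, observed, p_hit=0.9, p_miss=0.1):
    """Measurement update against perceptually ambiguous landmarks."""
    return normalize([p * (p_hit if world[i] == observed else p_miss)
                      for i, p in enumerate(belief)])

world = list("..D....D....W...")     # two identical doors, one unique window
belief = normalize([1.0] * len(world))
belief = update(belief, world, "D")  # first door: belief is bimodal (cells 2 and 7)
bimodal = sorted(i for i, p in enumerate(belief) if p == max(belief))
for _ in range(5):                   # walk five cells to the right
    belief = predict(belief)
belief = update(belief, world, "D")  # only the 2 -> 7 hypothesis fits both doors
```

After the first observation the two hypotheses coexist; after the move and the second door the posterior mode sits at cell 7, the hypothesis consistent with the whole history, mirroring how multiple pose estimates are carried and resolved over time without a uniquely identifying cue.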
Abstract:
A Geant4-based simulation tool has been developed to perform Monte Carlo modelling of a 6 MV Varian™ iX Clinac. The computer-aided design interface of Geant4 was used to accurately model the linac components, including the Millennium multi-leaf collimators (MLCs). The simulation tool was verified via simulation of standard commissioning dosimetry data acquired with an ionisation chamber in a water phantom. Verification of the MLC model was achieved by simulation of leaf leakage measurements performed using Gafchromic™ film in a solid water phantom. An absolute dose calibration capability was added by including a virtual monitor chamber in the simulation. Furthermore, a DICOM-RT interface was integrated with the application to allow the simulation of radiotherapy treatment plans. The ability of the simulation tool to accurately model leaf movements and doses at each control point was verified by simulation of a widely used intensity-modulated radiation therapy (IMRT) quality assurance (QA) technique, the chair test.