896 results for "deduced optical model parameters"
Abstract:
Strain localisation is a widespread phenomenon often observed in shear and compressive loading of geomaterials, for example in fault gouge. The main mechanisms of strain localisation are believed to be strain softening and a mismatch between dilatancy and pressure sensitivity. Observations show that gouge deformation is accompanied by considerable rotations of grains. In our previous work we proposed, as a model for gouge material, a continuum description of an assembly of particles of equal radius in which particle rotation is treated as an independent degree of freedom. We showed that there exist critical values of the model parameters for which the displacement gradient exhibits a pronounced localisation at the mid-surface layers of the fault, even in the absence of inelasticity. Here, we generalise the model to the case of finite deformations characteristic of gouge deformation. We derive objective constitutive relationships relating the Jaumann rates of stress and moment stress to the relative strain and curvature rates, respectively. The model suggests that the pattern of localisation remains the same as in the linear case. However, the presence of the Jaumann terms leads to the emergence of non-zero normal stresses acting along and perpendicular to the shear layer (with zero hydrostatic pressure), localised along the mid-line of the gouge; these stress components are absent in the linear model of simple shear. These additional normal stresses, albeit small, change the direction in which the maximal normal stresses act and hence the direction in which en-echelon fracturing forms.
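For reference, the Jaumann (corotational) stress rate invoked above has the standard form given below; the abstract does not reproduce the paper's full constitutive relations, so this is only the usual textbook definition, with W the spin tensor:

```latex
\overset{\nabla}{\boldsymbol{\sigma}}
  = \dot{\boldsymbol{\sigma}}
  - \mathbf{W}\,\boldsymbol{\sigma}
  + \boldsymbol{\sigma}\,\mathbf{W},
\qquad
\mathbf{W} = \tfrac{1}{2}\left(\nabla\mathbf{v} - (\nabla\mathbf{v})^{\mathsf{T}}\right)
```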
Abstract:
How do signals from the two eyes combine and interact? Our recent work has challenged earlier schemes in which monocular contrast signals are subject to square-law transduction followed by summation across eyes and binocular gain control. Much more successful was a new 'two-stage' model in which the initial transducer was almost linear and contrast gain control occurred both before and after binocular summation. Here we extend that work by (i) exploring the two-dimensional stimulus space (defined by left- and right-eye contrasts) more thoroughly, and (ii) performing contrast discrimination and contrast matching tasks for the same stimuli. Twenty-five base-stimuli, made from 1 c/deg patches of horizontal grating, were defined by the factorial combination of five contrasts for the left eye (0.3-32%) with five contrasts for the right eye (0.3-32%). Other than in contrast, the gratings in the two eyes were identical. In a 2IFC discrimination task, the base-stimuli were masks (pedestals), and the contrast increment was presented to one eye only. In a matching task, the base-stimuli were standards to which observers matched the contrast of either a monocular or binocular test grating. In the model, discrimination depends on the local gradient of the observer's internal contrast-response function, while matching equates the magnitude (rather than the gradient) of the response to the test and standard. With all model parameters fixed by previous work, the two-stage model successfully predicted both the discrimination and the matching data, and was much more successful than linear or quadratic binocular summation models. These results show that performance measures and perception (contrast discrimination and contrast matching) can be understood in the same theoretical framework for binocular contrast vision. © 2007 VSP.
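The abstract does not reproduce the two-stage equations. The sketch below shows a generic two-stage binocular gain-control model of the kind described; the functional form, exponents, and constants are illustrative assumptions, not the authors' fitted model:

```python
import numpy as np

def two_stage_response(c_left, c_right, m=1.3, p=7.0, q=6.5, s=1.0, z=0.01):
    """Generic two-stage binocular gain-control model (illustrative form;
    exponents and constants are placeholders, not the authors' fits).

    Stage 1: each eye's contrast passes a nearly linear transducer and is
    divisively suppressed by both eyes' contrasts (interocular gain control).
    Stage 2: the binocular sum passes a second, accelerating nonlinearity
    with its own gain control.
    """
    denom = s + c_left + c_right
    b = c_left**m / denom + c_right**m / denom   # binocular summation
    return b**p / (z + b**q)

# Discrimination threshold tracks the inverse local gradient of this
# response; matching equates response magnitudes across stimuli.
c = np.linspace(0.3, 32.0, 256)                  # percent contrast
resp_binocular = two_stage_response(c, c)
resp_monocular = two_stage_response(c, np.zeros_like(c))
```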
Abstract:
We propose a simple model that captures the salient properties of distribution networks, and study the possible occurrence of blackouts, i.e., sudden failures of large portions of such networks. The model is defined on a random graph of finite connectivity. The nodes of the graph represent hubs of the network, while the edges represent its links. Both the nodes and the edges carry dynamical two-state variables representing the functioning or dysfunctional state of the node or link in question. We describe a dynamical process in which the breakdown of a link or node is triggered when the level of maintenance it receives falls below a given threshold. If the levels of maintenance themselves depend on the functioning of the net, this form of dynamics can lead to catastrophic breakdown once maintenance levels locally fall below a critical threshold due to fluctuations. We formulate conditions under which such systems can be analysed in terms of thermodynamic equilibrium techniques, and under these conditions derive a phase diagram characterising the collective behaviour of the system, given its model parameters. The phase diagram is confirmed qualitatively and quantitatively by simulations on explicit realizations of the graph, confirming the validity of our approach. © 2007 The American Physical Society.
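A minimal simulation of this kind of maintenance-driven cascade is sketched below. The update rules, thresholds, and graph choice are illustrative stand-ins for the paper's model, not its exact dynamics:

```python
import random
import networkx as nx

def simulate_cascade(n=500, k=4, theta=0.5, p_shock=0.02, steps=50, seed=0):
    """Toy maintenance-driven breakdown on a random graph.

    Nodes and edges carry binary (functioning / failed) states. A node's
    maintenance level is taken as the fraction of its incident edges that
    work; it fails when this drops below theta. An edge works only while
    both endpoints do. Returns the surviving fraction of nodes.
    """
    rng = random.Random(seed)
    g = nx.random_regular_graph(k, n, seed=seed)
    node_ok = {v: True for v in g}
    edge_ok = {frozenset(e): True for e in g.edges}
    # Initial shock: a few nodes fail due to "fluctuations"
    for v in g:
        if rng.random() < p_shock:
            node_ok[v] = False
    for _ in range(steps):
        for e in g.edges:
            edge_ok[frozenset(e)] = node_ok[e[0]] and node_ok[e[1]]
        for v in g:
            live = sum(edge_ok[frozenset((v, u))] for u in g.neighbors(v))
            if live / g.degree(v) < theta:
                node_ok[v] = False
    return sum(node_ok.values()) / n

print(simulate_cascade())  # surviving fraction after the cascade settles
```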
Abstract:
A nature-inspired decentralised multi-agent algorithm is proposed to solve a problem of distributed task selection in which cities produce and store batches of different mail types. Agents must collect and process the mail batches, without a priori knowledge of the available mail at the cities and without inter-agent communication. In order to process a different mail type from the previous one, an agent must undergo a change-over, during which it remains inactive. We propose a threshold-based algorithm to maximise the overall efficiency (the average amount of mail collected). We show that memory, i.e., the ability of agents to develop preferences for certain cities, not only leads to emergent cooperation between agents but also to a significant increase in efficiency (above the theoretical upper limit for any memoryless algorithm), and we systematically investigate the influence of the various model parameters. Finally, we demonstrate the flexibility of the algorithm to changes in circumstances, and its excellent scalability.
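The abstract does not give the threshold rule itself; the sketch below uses the standard response-threshold form from the swarm-intelligence literature, with memory lowering an agent's threshold for familiar cities. All names and parameters are illustrative assumptions:

```python
import random

def choose_city(stimuli, preferences, beta=2.0, rng=random):
    """Stochastic response-threshold task selection (illustrative form).

    Each city's stimulus s_i is its stored mail; the agent's memory biases
    the choice toward previously successful cities by lowering the
    threshold theta_i. Selection probability follows the standard rule
    p_i proportional to s_i^2 / (s_i^2 + theta_i^2).
    """
    weights = []
    for city, s in stimuli.items():
        theta = 1.0 / (1.0 + beta * preferences.get(city, 0.0))
        weights.append((city, s**2 / (s**2 + theta**2)))
    r = rng.uniform(0, sum(w for _, w in weights))
    for city, w in weights:
        r -= w
        if r <= 0:
            return city
    return weights[-1][0]

# Example: two cities, agent remembers success at city 'A'
print(choose_city({'A': 3.0, 'B': 5.0}, preferences={'A': 1.0}))
```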
Abstract:
A study was performed on the non-Gaussian statistics of an optical soliton in the presence of amplified spontaneous emission. An approach based on the Fokker-Planck equation was applied to study the optical soliton parameters in the presence of additive noise. The rigorous method not only allowed us to reproduce and justify the classical Gordon-Haus formula but also led to new exact results.
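For context, the hallmark of the classical Gordon-Haus result is a timing-jitter variance that grows cubically with propagation distance: ASE-induced frequency kicks accumulate as a random walk and are converted into arrival-time shifts by dispersion. Stated here only as a scaling (prefactor omitted; beta_2 is the group-velocity dispersion and S_ASE an ASE noise strength, notation assumed):

```latex
\langle \delta t^{2} \rangle \;\propto\; \beta_{2}^{2}\, S_{\mathrm{ASE}}\, z^{3}
```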
Abstract:
This paper is partially supported by the Bulgarian Science Fund under grant No. DO 02–359/2008.
Abstract:
We develop, implement and study a new Bayesian spatial mixture model (BSMM). The proposed BSMM allows for spatial structure in the binary activation indicators through a latent thresholded Gaussian Markov random field. We develop a Gibbs (MCMC) sampler to perform posterior inference on the model parameters, which then allows us to assess the posterior probability of activation for each voxel. One purpose of this article is to compare the HJ model and the BSMM in terms of receiver operating characteristic (ROC) curves. We also consider the accuracy of the spatial mixture model and the BSMM for estimating the size of the activation region, in terms of bias, variance and mean squared error. We perform a simulation study to examine these characteristics under a variety of configurations of the spatial mixture model and the BSMM, both as the size of the region and as the magnitude of activation change.
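A minimal Gibbs sampler for spatial activation indicators is sketched below. For brevity it uses a binary Ising-style smoothing prior as a stand-in for the paper's thresholded Gaussian MRF; likelihood, parameters, and grid size are illustrative assumptions:

```python
import numpy as np

def gibbs_activation(y, iters=500, mu1=1.0, sigma=1.0, beta=0.8, seed=0):
    """Gibbs sampling of voxelwise activation indicators on a 2-D grid.

    Binary labels z get an Ising-style spatial prior (stand-in for the
    thresholded GMRF); data y are N(0, sigma^2) if inactive and
    N(mu1, sigma^2) if active. Returns posterior activation probabilities.
    Intended for small grids; not optimized.
    """
    rng = np.random.default_rng(seed)
    h, w = y.shape
    z = (y > y.mean()).astype(int)            # crude initialization
    counts = np.zeros_like(y, dtype=float)
    for it in range(iters):
        for i in range(h):
            for j in range(w):
                nbrs = [(a, b) for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                        if 0 <= a < h and 0 <= b < w]
                nb1 = sum(z[a, b] for a, b in nbrs)
                ll1 = -0.5 * ((y[i, j] - mu1) / sigma) ** 2
                ll0 = -0.5 * (y[i, j] / sigma) ** 2
                # log-odds = Ising prior term + likelihood ratio
                logodds = beta * (2 * nb1 - len(nbrs)) + ll1 - ll0
                z[i, j] = rng.random() < 1.0 / (1.0 + np.exp(-logodds))
        if it >= iters // 2:                  # discard burn-in
            counts += z
    return counts / (iters - iters // 2)
```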
Detecting Precipitation Climate Changes: An Approach Based on a Stochastic Daily Precipitation Model
Abstract:
2002 Mathematics Subject Classification: 62M10.
Abstract:
The correlated probit model is frequently used for multiple ordinal outcomes, since it seamlessly accommodates different correlation structures. Estimating the probit model parameters by direct maximization of the limited-information maximum likelihood is a numerically intensive procedure. We propose an extension of the EM algorithm for obtaining maximum likelihood estimates for a correlated probit model for multiple ordinal outcomes. The algorithm is implemented in R, the free software environment for statistical computing and graphics. We present two simulation studies examining the performance of the developed algorithm. We apply the model to data on 121 women with cervical or endometrial cancer who developed normal-tissue reactions as a result of post-operative external-beam pelvic radiotherapy. In this work we focused on modeling the effects of a genetic factor on early skin and early urogenital tissue reactions and on assessing the strength of association between the two types of reactions. We established that there was an association between skin reactions and the polymorphism XRCC3 codon 241 (C>T) (rs861539), and that skin and urogenital reactions were positively correlated. ACM Computing Classification System (1998): G.3.
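The correlated multivariate case is involved, but the structure of such an EM scheme can be seen in the simplest setting. The sketch below is a simplified stand-in: EM for an independent binary probit model, with latent normal utilities, a truncated-normal E-step, and a least-squares M-step; the paper's algorithm extends this idea to correlated ordinal outcomes:

```python
import numpy as np
from scipy.stats import norm

def probit_em(X, y, iters=100):
    """EM for a binary probit model (simplified illustration).

    Latent utilities z ~ N(X beta, 1) determine y = 1{z > 0}.
    E-step: conditional means of z given y (truncated normals, via
    inverse Mills ratios). M-step: regress expected z on X.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        mills1 = norm.pdf(eta) / np.clip(norm.cdf(eta), 1e-12, None)
        mills0 = norm.pdf(eta) / np.clip(1 - norm.cdf(eta), 1e-12, None)
        z = np.where(y == 1, eta + mills1, eta - mills0)   # E-step
        beta, *_ = np.linalg.lstsq(X, z, rcond=None)       # M-step
    return beta
```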
Abstract:
2010 Mathematics Subject Classification: 62J99.
Abstract:
In recent years, there has been increasing interest in learning distributed representations of word senses. Traditional context-clustering-based models usually require careful tuning of model parameters and typically perform worse on infrequent word senses. This paper presents a novel approach that addresses these limitations by first initializing the word sense embeddings through learning sentence-level embeddings from WordNet glosses using a convolutional neural network. The initialized word sense embeddings are then used by a context-clustering-based model to generate the distributed representations of word senses. Our learned representations outperform the publicly available embeddings on half of the metrics in the word similarity task and on 6 out of 13 subtasks in the analogical reasoning task, and give the best overall accuracy in the word sense effect classification task, which shows the effectiveness of our proposed distributed representation learning model.
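A convolutional sentence encoder of the kind described can be sketched as below; filter widths, dimensions, and the class name are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class GlossEncoder(nn.Module):
    """Sketch of a convolutional encoder for WordNet glosses.

    A gloss (sequence of word-embedding vectors) is convolved with
    several filter widths and max-pooled into a fixed-size vector,
    which could initialize the corresponding word-sense embedding.
    """
    def __init__(self, emb_dim=300, n_filters=100, widths=(2, 3, 4)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, w) for w in widths)

    def forward(self, gloss):            # gloss: (batch, seq_len, emb_dim)
        x = gloss.transpose(1, 2)        # -> (batch, emb_dim, seq_len)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return torch.cat(pooled, dim=1)  # (batch, n_filters * len(widths))

# sense_vec = GlossEncoder()(torch.randn(1, 12, 300))  # a 12-word gloss
```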
Abstract:
Markovian models are widely used to analyse quality-of-service properties of both system designs and deployed systems. Thanks to the emergence of probabilistic model checkers, this analysis can be performed with high accuracy. However, its usefulness is heavily dependent on how well the model captures the actual behaviour of the analysed system. Our work addresses this problem for a class of Markovian models termed discrete-time Markov chains (DTMCs). We propose a new Bayesian technique for learning the state transition probabilities of DTMCs based on observations of the modelled system. Unlike existing approaches, our technique weighs observations based on their age, to account for the fact that older observations are less relevant than more recent ones. A case study from the area of bioinformatics workflows demonstrates the effectiveness of the technique in scenarios where the model parameters change over time.
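The core of age-weighted Bayesian estimation of DTMC transition probabilities fits in a few lines. In the sketch below each observed transition contributes a pseudo-count that decays with its age, combined with a Dirichlet prior; the exponential-decay weighting and half-life are illustrative choices, not necessarily the paper's exact scheme:

```python
import numpy as np

def learn_dtmc(observations, n_states, half_life=100.0, prior=1.0):
    """Age-weighted Bayesian estimate of a DTMC transition matrix.

    observations: time-ordered list of (state, next_state) pairs.
    Each transition adds a pseudo-count exp(-decay * age), so older
    observations count less; with a symmetric Dirichlet prior, row
    normalization gives the posterior-mean transition probabilities.
    """
    counts = np.full((n_states, n_states), prior)
    now = len(observations)
    decay = np.log(2.0) / half_life
    for t, (s, s_next) in enumerate(observations):
        counts[s, s_next] += np.exp(-decay * (now - t))
    return counts / counts.sum(axis=1, keepdims=True)

# Example: four observed transitions in a two-state chain
P = learn_dtmc([(0, 1), (1, 1), (1, 0), (0, 1)], n_states=2)
print(P)
```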
Abstract:
This study examines the effect of blood absorption on the endogenous fluorescence signal intensity of biological tissues. Experimental studies were conducted to identify these effects. Fluorescence intensity was recorded by fluorescence spectroscopy, and the intensity of blood flow was measured by laser Doppler flowmetry. We propose one possible implementation of the Monte Carlo method for the theoretical analysis of the effect of blood on the fluorescence signals. The simulation is built on a four-layer skin optical model based on the known optical parameters of skin with different levels of blood supply. With the help of the simulation, we demonstrate how the level of blood supply can affect the appearance of the fluorescence spectra. In addition, to describe the properties of biological tissue that may affect the fluorescence spectra, we turned to diffuse reflectance spectroscopy (DRS). Using the spectral data provided by DRS, the tissue attenuation effect can be extracted and used to correct the fluorescence spectra.
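The essential Monte Carlo mechanism, photon weight attenuated by layer-dependent absorption, can be illustrated in a deliberately reduced form. The sketch below is a 1-D forward-only walk with placeholder optical coefficients, not the paper's four-layer model (which would include scattering geometry and blood-dependent parameters):

```python
import math
import random

# Illustrative layers: (mu_a, mu_s in 1/mm, thickness in mm); placeholders.
LAYERS = [
    (0.2, 10.0, 0.1),   # epidermis
    (0.4, 12.0, 0.2),   # upper dermis (more blood -> higher mu_a)
    (0.3, 11.0, 0.8),   # lower dermis
    (0.05, 8.0, 2.0),   # subcutis
]

def layer_at(depth):
    """Return the layer containing this depth, or None below the stack."""
    z = 0.0
    for layer in LAYERS:
        z += layer[2]
        if depth < z:
            return layer
    return None

def transmitted_weight(rng=random):
    """1-D photon-weight walk (no backscatter, for brevity).

    Step lengths follow the exponential free-path law with rate
    mu_t = mu_a + mu_s; each interaction multiplies the weight by the
    albedo mu_s / mu_t, which is how absorption attenuates the signal.
    """
    depth, weight = 0.0, 1.0
    while weight > 1e-4:
        layer = layer_at(depth)
        if layer is None:                          # photon left the tissue
            return weight
        mu_a, mu_s, _ = layer
        mu_t = mu_a + mu_s
        depth += -math.log(1.0 - rng.random()) / mu_t
        weight *= mu_s / mu_t
    return 0.0

# Average over many photons to estimate transmission through the stack
print(sum(transmitted_weight() for _ in range(10000)) / 10000)
```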
Abstract:
Research on the perception of temporal order uses either temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks; in both, two stimuli are presented with some temporal delay, and observers judge their temporal relation (the order of presentation in TOJ, simultaneity in SJ). Results generally differ across tasks, raising concerns about whether they measure the same processes. We present a model including sensory and decisional parameters that places these tasks in a common framework and allows their implications for observed performance to be studied. TOJ tasks imply specific decisional components that explain the discrepancy between results obtained with TOJ and SJ tasks. The model is also tested against published data on audiovisual temporal-order judgments; the fit is satisfactory, although model parameters are more accurately estimated with SJ tasks. Measures of latent point of subjective simultaneity and latent sensitivity are defined that are invariant across tasks, by isolating the sensory parameters governing observed performance, whereas decisional parameters vary across tasks and account for the observed differences between them. Our analyses concur with other evidence advising against the use of TOJ tasks in research on the perception of temporal order.
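The separation of sensory and decisional parameters can be made concrete with a small simulation in the independent-channels spirit of such models; the latency distributions, the resolution parameter, and all values below are illustrative assumptions, not the paper's fitted model:

```python
import random

def simulate_trial(soa, tau=0.0, lam_a=1/30, lam_v=1/40, delta=50.0,
                   rng=random):
    """Independent-channels sketch for SJ / TOJ responses (times in ms).

    Each stimulus's internal arrival time is its onset plus an exponential
    sensory latency; tau shifts one channel (latent PSS). The decision
    layer compares the arrival difference d with a resolution delta:
    |d| < delta reads as 'simultaneous' in SJ, while a TOJ response must
    still pick an order, which is where task-specific decisional rules
    (and hence the cross-task discrepancies) enter.
    """
    t_a = 0.0 + rng.expovariate(lam_a) + tau    # auditory arrival
    t_v = soa + rng.expovariate(lam_v)          # visual arrival
    d = t_v - t_a
    sj = 'simultaneous' if abs(d) < delta else 'successive'
    toj = 'visual-first' if d < 0 else 'auditory-first'
    return sj, toj

print(simulate_trial(soa=20.0))
```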
Abstract:
The goal of my Ph.D. thesis is to enhance the visualization of the peripheral retina using wide-field optical coherence tomography (OCT) in a clinical setting.
OCT has gained widespread adoption in clinical ophthalmology because of its ability to visualize diseases of the macula and central retina in three dimensions; however, clinical OCT has a limited field-of-view of 30°. There has been increasing interest in obtaining high-resolution images outside of this narrow field-of-view, because three-dimensional imaging of the peripheral retina may prove important for the early detection of neurodegenerative diseases, such as Alzheimer's disease and dementia, and for the monitoring of known ocular diseases, such as diabetic retinopathy, retinal vein occlusions, and choroidal masses.
Before attempting to build a wide-field OCT system, we need to better understand the peripheral optics of the human eye. Shack-Hartmann wavefront sensors are commonly used tools for measuring the optical imperfections of the eye, but their acquisition speed is limited by their underlying camera hardware. The first aim of my thesis research is to create a fast method of ocular wavefront sensing, so that we can measure wavefront aberrations at numerous points across a wide visual field. To address this aim, we will develop a sparse Zernike reconstruction technique (SPARZER) that will enable Shack-Hartmann wavefront sensors to use as little as 1/10th of the data normally required for an accurate wavefront reading. If less data needs to be acquired, then wavefronts can be recorded faster.
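SPARZER's algorithmic details are not given here; the sketch below shows the general idea of sparse wavefront recovery from subsampled slope measurements, using L1-regularized least squares. For brevity a simple polynomial surrogate stands in for a true Zernike basis; all names and parameters are assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Polynomial surrogate basis standing in for Zernike modes: x^i * y^j.
EXPONENTS = [(i, j) for i in range(4) for j in range(4) if 0 < i + j <= 3]

def slope_matrix(coords):
    """Stack x- and y-gradients of each basis mode at the lenslet
    positions; a Shack-Hartmann sensor measures exactly these slopes."""
    x, y = coords[:, 0], coords[:, 1]
    cols = []
    for i, j in EXPONENTS:
        gx = i * x**(i - 1) * y**j if i > 0 else np.zeros_like(x)
        gy = j * x**i * y**(j - 1) if j > 0 else np.zeros_like(x)
        cols.append(np.concatenate([gx, gy]))
    return np.stack(cols, axis=1)

def sparse_reconstruct(coords, slopes, alpha=1e-3):
    """L1-regularized fit: when few aberration modes are active, far
    fewer lenslet samples than usual suffice -- the idea behind
    subsampled wavefront sensing."""
    return Lasso(alpha=alpha, fit_intercept=False,
                 max_iter=10000).fit(slope_matrix(coords), slopes).coef_

# Demo: recover a sparse coefficient vector from a subsampled aperture.
rng = np.random.default_rng(0)
coords = rng.uniform(-1, 1, size=(40, 2))        # subsampled lenslets
true = np.zeros(len(EXPONENTS))
true[2] = 0.5                                    # one active mode
slopes = slope_matrix(coords) @ true + 0.01 * rng.standard_normal(80)
print(sparse_reconstruct(coords, slopes).round(2))
```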
For my second aim, we will create a sophisticated optical model that reproduces the measured aberrations of the human eye. If we know how the average eye's optics distort light, then we can engineer ophthalmic imaging systems that preemptively cancel inherent ocular aberrations. This invention will help the retinal imaging community to design systems that are capable of acquiring high resolution images across a wide visual field. The proposed model eye is also of interest to the field of vision science as it aids in the study of how anatomy affects visual performance in the peripheral retina.
Using the optical model from aim two, we will design and reduce to practice a clinical OCT system capable of imaging a large (80°) field-of-view with enhanced visualization of the peripheral retina. A key aspect of this third and final aim is to make the imaging system compatible with standard clinical practices. To this end, we will incorporate sensorless adaptive optics to correct inter- and intra-patient variability in ophthalmic aberrations. Sensorless adaptive optics will improve both the brightness (signal) and clarity (resolution) of features in the peripheral retina without affecting the size of the imaging system.
The proposed work should not only be a noteworthy contribution to the ophthalmic and engineering communities, but should also strengthen our existing collaborations with the Duke Eye Center by advancing their capability to diagnose pathologies of the peripheral retina.