960 results for DIMENSIONAL MODEL
Abstract:
Neoplastic tissue is typically highly vascularized, contains abnormal concentrations of extracellular proteins (e.g. collagen, proteoglycans) and has a high interstitial fluid pressure compared to most normal tissues. These changes result in an overall stiffening typical of most solid tumors. Elasticity Imaging (EI) is a technique which uses imaging systems to measure relative tissue deformation and thus noninvasively infer its mechanical stiffness. Stiffness is recovered from measured deformation by using an appropriate mathematical model and solving an inverse problem. The integration of EI with existing imaging modalities can improve their diagnostic and research capabilities. The aim of this work is to develop and evaluate techniques to image and quantify the mechanical properties of soft tissues in three dimensions (3D). To that end, this thesis presents and validates a method by which three-dimensional ultrasound images can be used to image and quantify the shear modulus distribution of tissue-mimicking phantoms. This work is presented to motivate and justify the use of this elasticity imaging technique in a clinical breast cancer screening study. The imaging methodologies discussed are intended to improve the specificity of mammography practices in general. During the development of these techniques, several issues concerning the accuracy and uniqueness of the result were elucidated. Two new algorithms for 3D EI are designed and characterized in this thesis. The first provides three-dimensional motion estimates from ultrasound images of the deforming material. The novel features include finite element interpolation of the displacement field, inclusion of prior information and the ability to enforce physical constraints. The roles of regularization, mesh resolution and an incompressibility constraint on the accuracy of the measured deformation are quantified. The estimated signal-to-noise ratios of the measured displacement fields are approximately 1800, 21 and 41 for the axial, lateral and elevational components, respectively. The second algorithm recovers the shear elastic modulus distribution of the deforming material by efficiently solving the three-dimensional inverse problem as an optimization problem. This method utilizes finite element interpolations, the adjoint method to evaluate the gradient and a quasi-Newton BFGS method for optimization. Its novel features include the use of the adjoint method and TVD regularization with piecewise-constant interpolation. A source of non-uniqueness in this inverse problem is identified theoretically, demonstrated computationally, explained physically and overcome practically. Both algorithms were tested on ultrasound data of independently characterized tissue-mimicking phantoms. The recovered elastic modulus was in all cases within 35% of the reference elastic contrast. Finally, the preliminary application of these techniques to tomosynthesis images showed the feasibility of imaging an elastic inclusion.
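The inverse-problem machinery named in this abstract (a gradient supplied cheaply in the spirit of the adjoint method, quasi-Newton BFGS optimization, and a total-variation style penalty) can be conveyed by a deliberately simplified sketch. The 1-D "rod of springs" forward model, the smoothed TV penalty and every numerical value below are assumptions made for illustration; this is not the thesis's 3-D finite-element formulation.

```python
# Illustrative sketch only: a 1-D stand-in for the 3-D inverse elasticity
# problem.  Recover a stiffness profile mu from noisy displacements using
# an analytic gradient, a smoothed TV penalty and L-BFGS-B.
import numpy as np
from scipy.optimize import minimize

n, h, F = 50, 1.0 / 50, 1.0            # elements, element size, applied traction
rng = np.random.default_rng(0)

mu_true = np.ones(n)
mu_true[20:35] = 3.0                    # stiff "inclusion"

def forward(mu):
    """Nodal displacements of a rod of springs in series under traction F."""
    return np.cumsum(F * h / mu)

u_meas = forward(mu_true) + 1e-4 * rng.standard_normal(n)   # synthetic data

alpha, eps = 1e-6, 1e-8                 # TV weight and smoothing parameter

def objective(mu):
    r = forward(mu) - u_meas
    d = np.diff(mu)
    return 0.5 * np.sum(r ** 2) + alpha * np.sum(np.sqrt(d ** 2 + eps))

def gradient(mu):
    r = forward(mu) - u_meas
    # du_i/dmu_j = -F*h/mu_j**2 for j <= i, so the data-term gradient is a
    # suffix sum of the residual (the role the adjoint solve plays in 3-D).
    g = (-F * h / mu ** 2) * np.cumsum(r[::-1])[::-1]
    d = np.diff(mu)
    w = d / np.sqrt(d ** 2 + eps)       # derivative of the smoothed TV term
    g[:-1] -= alpha * w
    g[1:] += alpha * w
    return g

res = minimize(objective, np.ones(n), jac=gradient,
               method="L-BFGS-B", bounds=[(0.1, 10.0)] * n)
print("max relative error in recovered stiffness:",
      np.max(np.abs(res.x - mu_true) / mu_true))
```

The analytic gradient here stands in for the adjoint method of the thesis: derivatives of the data-misfit are obtained at a cost comparable to one extra forward evaluation, which is what makes quasi-Newton optimization of a large modulus field practical.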
Abstract:
One- and two-dimensional cellular automata which are known to be fault-tolerant are very complex. On the other hand, only very simple cellular automata have actually been proven to lack fault-tolerance, i.e., to be mixing. The latter either have large noise probability ε or belong to the small family of two-state nearest-neighbor monotonic rules which includes local majority voting. For a certain simple automaton L called the soldiers rule, this problem has intrigued researchers for the last two decades, since L is clearly more robust than local voting: in the absence of noise, L eliminates any finite island of perturbation from an initial configuration of all 0's or all 1's. The same holds for K, a 4-state monotonic variant of L called two-line voting. We will prove that the probabilistic cellular automata Kε and Lε asymptotically lose all information about their initial state when subject to small, strongly biased noise. The mixing property trivially implies that the systems are ergodic. The finite-time information-retaining quality of a mixing system can be represented by its relaxation time Relax(⋅), which measures the time before the onset of significant information loss. This is known to grow as (1/ε)^c for noisy local voting. The impressive error-correction ability of L has prompted some researchers to conjecture that Relax(Lε) = 2^(c/ε). We prove the tight bound 2^(c₁ log²(1/ε)) < Relax(Lε) < 2^(c₂ log²(1/ε)) for a biased error model. The same holds for Kε. Moreover, the lower bound is independent of the bias assumption. The strong bias assumption makes it possible to apply sparsity/renormalization techniques, the main tools of our investigation, used earlier in the opposite context of proving fault-tolerance.
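For orientation, the simplest member of the family mentioned above, noisy nearest-neighbor majority voting, is easy to simulate. The sketch below (one dimension, periodic boundary, arbitrary parameters) shows how an initially all-ones configuration drifts toward an uninformative state; it is not an implementation of the soldiers rule L or of two-line voting K.

```python
# Illustrative sketch: 1-D probabilistic cellular automaton applying local
# majority voting over {left, self, right}, then flipping each cell with
# probability eps.  Parameters are arbitrary; this is NOT the soldiers rule.
import numpy as np

rng = np.random.default_rng(1)
n, steps, eps = 1000, 2000, 0.05

state = np.ones(n, dtype=int)            # start from the all-1 configuration
for t in range(steps):
    left = np.roll(state, 1)
    right = np.roll(state, -1)
    votes = left + state + right
    state = (votes >= 2).astype(int)     # majority of the three neighbors
    flips = rng.random(n) < eps          # unbiased noise with rate eps
    state = np.where(flips, 1 - state, state)

# A fraction of 1s near 0.5 indicates the memory of the initial all-1 state
# has been washed out, i.e. the information loss characteristic of mixing.
print("fraction of 1s after", steps, "steps:", state.mean())
```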
Abstract:
The goal of this work is to learn a parsimonious and informative representation for high-dimensional time series. Conceptually, this comprises two distinct yet tightly coupled tasks: learning a low-dimensional manifold and modeling the dynamical process. These two tasks have a complementary relationship, as the temporal constraints provide valuable neighborhood information for dimensionality reduction and, conversely, the low-dimensional space allows dynamics to be learnt efficiently. Solving these two tasks simultaneously allows important information to be exchanged between them. If nonlinear models are required to capture the rich complexity of time series, then the learning problem becomes harder, as the nonlinearities in both tasks are coupled. The proposed solution approximates the nonlinear manifold and dynamics using piecewise linear models. The interactions among the linear models are captured in a graphical model. By exploiting the model structure, efficient inference and learning algorithms are obtained without oversimplifying the model of the underlying dynamical process. Evaluation of the proposed framework against competing approaches is conducted in three sets of experiments: dimensionality reduction and reconstruction using synthetic time series, video synthesis using a dynamic texture database, and human motion synthesis, classification and tracking on a benchmark data set. In all experiments, the proposed approach provides superior performance.
Abstract:
The goal of this work is to learn a parsimonious and informative representation for high-dimensional time series. Conceptually, this comprises two distinct yet tightly coupled tasks: learning a low-dimensional manifold and modeling the dynamical process. These two tasks have a complementary relationship, as the temporal constraints provide valuable neighborhood information for dimensionality reduction and, conversely, the low-dimensional space allows dynamics to be learnt efficiently. Solving these two tasks simultaneously allows important information to be exchanged between them. If nonlinear models are required to capture the rich complexity of time series, then the learning problem becomes harder, as the nonlinearities in both tasks are coupled. The proposed solution approximates the nonlinear manifold and dynamics using piecewise linear models. The interactions among the linear models are captured in a graphical model. The model structure setup and parameter learning are done using a variational Bayesian approach, which enables automatic Bayesian model structure selection, thereby addressing the problem of over-fitting. By exploiting the model structure, efficient inference and learning algorithms are obtained without oversimplifying the model of the underlying dynamical process. Evaluation of the proposed framework against competing approaches is conducted in three sets of experiments: dimensionality reduction and reconstruction using synthetic time series, video synthesis using a dynamic texture database, and human motion synthesis, classification and tracking on a benchmark data set. In all experiments, the proposed approach provides superior performance.
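The piecewise-linear manifold-plus-dynamics idea in the two abstracts above can be conveyed by a generic switching linear dynamical system: a discrete mode selects which linear model drives a low-dimensional latent state, which is then mapped to high-dimensional observations. The simulation below uses invented matrices and dimensions; it is not the thesis's variational Bayesian learner.

```python
# Illustrative simulation of a switching (piecewise) linear dynamical system:
# a Markov chain over modes selects one of several linear dynamics for a
# 2-D latent state, which is projected to a 10-D observation.
# All matrices, noise levels and dimensions are invented for demonstration.
import numpy as np

rng = np.random.default_rng(2)
T, d_latent, d_obs, n_modes = 200, 2, 10, 2

def rotation(theta, decay=0.99):
    """Slightly contracting 2-D rotation used as a per-mode dynamics matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return decay * np.array([[c, -s], [s, c]])

A = [rotation(0.05), rotation(0.4)]          # slow vs. fast latent dynamics
P = np.array([[0.98, 0.02],                  # mode transition probabilities
              [0.05, 0.95]])
C = rng.standard_normal((d_obs, d_latent))   # shared linear observation map

x, mode = np.array([1.0, 0.0]), 0
latents, observations, modes = [], [], []
for t in range(T):
    mode = int(rng.choice(n_modes, p=P[mode]))
    x = A[mode] @ x + 0.01 * rng.standard_normal(d_latent)   # process noise
    y = C @ x + 0.05 * rng.standard_normal(d_obs)            # observation noise
    latents.append(x)
    observations.append(y)
    modes.append(mode)

print("time spent in mode 0:", modes.count(0), "of", T, "steps")
```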
Abstract:
A neural network model is presented to account for the three dimensional perception of visual space by way of an analog Gestalt-like perceptual mechanism.
Abstract:
In this thesis I present the work done during my PhD in the area of low-dimensional quantum gases. The chapters of this thesis are self-contained and represent individual projects which have been peer-reviewed and accepted for publication in respected international journals. Various systems are considered, the first of which is a two-particle model which possesses an exact analytical solution. I investigate the non-classical correlations that exist between the particles as a function of the tunable properties of the system. In the second work I consider the coherences and out-of-equilibrium dynamics of a one-dimensional Tonks-Girardeau gas. I show how the coherence of the gas can be inferred from various properties of the reduced state and how this may be observed in experiments. I then present a model which can be used to probe a one-dimensional Fermi gas by performing a measurement on an impurity which interacts with the gas. I show how this system can be used to observe the so-called orthogonality catastrophe using modern interferometry techniques. In the next chapter I present a simple scheme to create superposition states of particles, with special emphasis on the NOON state. I explore the effect of inter-particle interactions in the process and then characterise the usefulness of these states for interferometry. Finally I present my contribution to a project on long-distance entanglement generation in ion chains. I show how carefully tuning the environment can create decoherence-free subspaces, which allow one to create and preserve entanglement.
Abstract:
Integrated nanowire electrodes that permit direct, sensitive and rapid electrochemical-based detection of chemical and biological species are a powerful emerging class of sensor devices. As critical dimensions of the electrodes enter the nanoscale, radial analyte diffusion profiles to the electrode dominate, with a corresponding enhancement in mass transport, steady-state sigmoidal voltammograms, low depletion of target molecules and faster analysis. To optimise these sensors it is necessary to fully understand the factors that influence performance limits, including electrode geometry, electrode dimensions, electrode separation distances (within nanowire arrays) and diffusional mass transport. Therefore, in this thesis, theoretical simulations of analyte diffusion occurring at a variety of electrode designs were undertaken using Comsol Multiphysics®. Sensor devices were fabricated and corresponding experiments were performed to challenge the simulation results. Two approaches for the fabrication and integration of metal nanowire electrodes are presented: Template Electrodeposition and Electron-Beam Lithography. These approaches allow for the fabrication of nanowires which may be subsequently integrated at silicon chip substrates to form fully functional electrochemical devices. Simulated and experimental results were found to be in excellent agreement, validating the simulation model. The electrochemical characteristics exhibited by nanowire electrodes fabricated by electron-beam lithography were directly compared against the electrochemical performance of a commercial ultra-microdisc electrode. Steady-state cyclic voltammograms in ferrocenemonocarboxylic acid at single ultra-microdisc electrodes were observed at low to medium scan rates (≤ 500 mV s-1). At nanowires, steady-state responses were observed at ultra-high scan rates (up to 50,000 mV s-1), thus allowing for much faster analysis (20 ms). Approaches for elucidating the faradaic signal without the requirement for background subtraction were also developed. Furthermore, diffusional processes occurring at arrays with increasing inter-electrode distance and increasing numbers of nanowires were explored. Diffusion profiles existing at nanowire arrays were simulated with Comsol Multiphysics®. A range of scan rates was modelled, and experiments were undertaken at 5,000 mV s-1, since this allows the rapid data capture required for, e.g., biomedical, environmental and pharmaceutical diagnostic applications.
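As context for the ultra-microdisc comparison above, the textbook steady-state diffusion-limited current at an inlaid disc electrode is i_ss = 4 n F D c a (n electrons transferred, Faraday constant F, diffusion coefficient D, bulk concentration c, disc radius a). The snippet below evaluates it for representative values of a ferrocene-type mediator; the numbers are illustrative assumptions, not measurements from the thesis.

```python
# Textbook steady-state limiting current at an inlaid microdisc electrode:
#   i_ss = 4 * n * F * D * c * a
# All values below are representative/illustrative only.
F = 96485.0          # C mol^-1, Faraday constant
n = 1                # electrons transferred
D = 6.5e-10          # m^2 s^-1, assumed diffusion coefficient of the mediator
c = 1.0              # mol m^-3 (1 mM), assumed bulk concentration
for a in (12.5e-6, 50e-9):           # 25 um commercial disc vs. 100 nm nanodisc
    i_ss = 4 * n * F * D * c * a
    print(f"radius {a:.2e} m  ->  i_ss = {i_ss:.3e} A")
```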
Abstract:
In this paper, we propose generalized sampling approaches for measuring a multi-dimensional object using a compact compound-eye imaging system called thin observation module by bound optics (TOMBO). This paper presents the proposed system model, physical examples, and simulations that verify TOMBO imaging using generalized sampling. In the system, an object is modulated and multiplied by a weight distribution with physical coding, and the coded optical signal is integrated onto a detector array. A numerical estimation algorithm employing a sparsity constraint is used for object reconstruction.
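The "numerical estimation algorithm employing a sparsity constraint" is not specified in the abstract; a common stand-in is l1-regularised least squares solved by iterative soft-thresholding (ISTA). The sketch below reconstructs a sparse vector from fewer coded measurements than unknowns; the random sensing matrix and the problem sizes are arbitrary and do not model the TOMBO optics.

```python
# Illustrative sparse reconstruction via ISTA (iterative soft-thresholding)
# for y = A x with an l1 sparsity constraint.  The random matrix A is a
# stand-in for the physical coding of the imaging system, not a model of it.
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 60, 200, 8                  # measurements, unknowns, non-zeros

A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.01                            # l1 weight
L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

print("relative reconstruction error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```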
Abstract:
In the presence of a chemical potential, the physics of level crossings leads to singularities at zero temperature, even when the spatial volume is finite. These singularities are smoothed out at a finite temperature but leave behind nontrivial finite size effects which must be understood in order to extract thermodynamic quantities using Monte Carlo methods, particularly close to critical points. We illustrate some of these issues using the classical nonlinear O(2) sigma model with a coupling β and chemical potential μ on a 2+1-dimensional Euclidean lattice. In the conventional formulation this model suffers from a sign problem at nonzero chemical potential and hence cannot be studied with the Wolff cluster algorithm. However, when formulated in terms of the worldline of particles, the sign problem is absent, and the model can be studied efficiently with the "worm algorithm." Using this method we study the finite size effects that arise due to the chemical potential and develop an effective quantum mechanical approach to capture the effects. As a side result we obtain energy levels of up to four particles as a function of the box size and uncover a part of the phase diagram in the (β,μ) plane. © 2010 The American Physical Society.
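For reference, in the conventional formulation the chemical potential enters the classical O(2) (XY) action as an imaginary twist on temporal links, which is what makes the Boltzmann weight complex and produces the sign problem that the worldline reformulation removes. A standard way of writing this, with notation assumed here rather than quoted from the paper, is:

```latex
% Conventional-formulation action of the classical O(2) model with chemical
% potential mu on a (2+1)-dimensional lattice; the imaginary shift on the
% temporal links makes exp(-S) complex (the sign problem).  Notation assumed.
S \;=\; -\,\beta \sum_{x}\sum_{\nu=1}^{3}
        \cos\!\bigl(\theta_{x+\hat\nu} - \theta_{x}
        \;-\; i\,\mu\,\delta_{\nu,3}\bigr),
\qquad \theta_x \in [0, 2\pi)
```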
Abstract:
We consider the problem of variable selection in regression modeling in high-dimensional spaces where there is known structure among the covariates. This is an unconventional variable selection problem for two reasons: (1) the dimension of the covariate space is comparable to, and often much larger than, the number of subjects in the study, and (2) the covariate space is highly structured, and in some cases it is desirable to incorporate this structural information into the model-building process. We approach this problem through the Bayesian variable selection framework, where we assume that the covariates lie on an undirected graph and formulate an Ising prior on the model space for incorporating structural information. Certain computational and statistical problems arise that are unique to such high-dimensional, structured settings, the most interesting being the phenomenon of phase transitions. We propose theoretical and computational schemes to mitigate these problems. We illustrate our methods on two different graph structures: the linear chain and the regular graph of degree k. Finally, we use our methods to study a specific application in genomics: the modeling of transcription factor binding sites in DNA sequences. © 2010 American Statistical Association.
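The Ising prior on the model space mentioned above has a standard form: with inclusion indicators γ_i ∈ {0,1} attached to the vertices of the covariate graph G = (V, E), one convenient parameterisation (notation assumed here, not quoted from the article) is:

```latex
% Ising prior over variable-inclusion indicators gamma on a covariate graph
% G = (V, E); a controls overall sparsity, b > 0 encourages neighbouring
% covariates to enter or leave the model together.  Notation assumed.
p(\gamma) \;\propto\;
\exp\!\Bigl( a \sum_{i \in V} \gamma_i
           \;+\; b \sum_{(i,j) \in E} \gamma_i \gamma_j \Bigr),
\qquad \gamma_i \in \{0, 1\}
```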
Abstract:
© 2014, Springer-Verlag Berlin Heidelberg. This study assesses the skill of advanced regional climate models (RCMs) in simulating southeastern United States (SE US) summer precipitation and explores the physical mechanisms responsible for the simulation skill at a process level. Analysis of the RCM output for the North American Regional Climate Change Assessment Program indicates that the RCM simulations of summer precipitation show the largest biases and a remarkable spread over the SE US compared to other regions in the contiguous US. The causes of such a spread are investigated by performing simulations using the Weather Research and Forecasting (WRF) model, a next-generation RCM developed by the US National Center for Atmospheric Research. The results show that the simulated biases in SE US summer precipitation are due mainly to the misrepresentation of the modeled North Atlantic subtropical high (NASH) western ridge. In the WRF simulations, the NASH western ridge shifts 7° northwestward when compared to that in the reanalysis ensemble, leading to a dry bias in the simulated summer precipitation according to the relationship between the NASH western ridge and summer precipitation over the southeast. Experiments utilizing the four-dimensional data assimilation technique further suggest that the improved representation of the circulation patterns (i.e., wind fields) associated with the NASH western ridge substantially reduces the bias in the simulated SE US summer precipitation. Our analysis of circulation dynamics indicates that the NASH western ridge in the WRF simulations is significantly influenced by the simulated planetary boundary layer (PBL) processes over the Gulf of Mexico. Specifically, a decrease (increase) in the simulated PBL height tends to stabilize (destabilize) the lower troposphere over the Gulf of Mexico, and thus inhibits (favors) the onset and/or development of convection. Such changes in tropical convection induce a tropical–extratropical teleconnection pattern, which modulates the circulation along the NASH western ridge in the WRF simulations and contributes to the modeled precipitation biases over the SE US. In conclusion, our study demonstrates that the NASH western ridge is an important factor responsible for the RCM skill in simulating SE US summer precipitation. Furthermore, improvements in the PBL parameterizations for the Gulf of Mexico might help advance RCM skill in representing the NASH western ridge circulation and summer precipitation over the SE US.
Abstract:
The intensity and valence of 30 emotion terms, 30 events typical of those emotions, and 30 autobiographical memories cued by those emotions were each rated by different groups of 40 undergraduates. A vector model gave a consistently better account of the data than a circumplex model, both overall and in the absence of high-intensity, neutral valence stimuli. The Positive Activation - Negative Activation (PANA) model could be tested at high levels of activation, where it is identical to the vector model. The results replicated when ratings of arousal were used instead of ratings of intensity for the events and autobiographical memories. A reanalysis of word norms gave further support for the vector and PANA models by demonstrating that neutral valence, high-arousal ratings resulted from the averaging of individual positive and negative valence ratings. Thus, compared to a circumplex model, vector and PANA models provided overall better fits.
Abstract:
The dynamics of a population undergoing selection is a central topic in evolutionary biology. This question is particularly intriguing in the case where selective forces act in opposing directions at two population scales. For example, a fast-replicating virus strain outcompetes slower-replicating strains at the within-host scale. However, if the fast-replicating strain causes host morbidity and is less frequently transmitted, it can be outcompeted by slower-replicating strains at the between-host scale. Here we consider a stochastic ball-and-urn process which models this type of phenomenon. We prove the weak convergence of this process under two natural scalings. The first scaling leads to a deterministic nonlinear integro-partial differential equation on the interval $[0,1]$ with dependence on a single parameter, $\lambda$. We show that the fixed points of this differential equation are Beta distributions and that their stability depends on $\lambda$ and the behavior of the initial data around $1$. The second scaling leads to a measure-valued Fleming-Viot process, an infinite-dimensional stochastic process that is frequently associated with population genetics.
Abstract:
A novel multi-scale seamless model of brittle-crack propagation is proposed and applied to the simulation of fracture growth in a two-dimensional Ag plate with macroscopic dimensions. The model represents the crack propagation at the macroscopic scale as the drift-diffusion motion of the crack tip alone. The diffusive motion is associated with the crack-tip coordinates in the position space, and reflects the oscillations observed in the crack velocity once it exceeds its critical value. The model couples the crack dynamics at the macroscales and nanoscales via an intermediate mesoscale continuum. The finite-element method is employed to make the transition from the macroscale to the nanoscale by computing the continuum-based displacements of the atoms at the boundary of an atomic lattice embedded within the plate and surrounding the tip. Molecular dynamics (MD) simulation then drives the crack tip forward, producing the tip critical velocity and its diffusion constant. These are then used in the Itô stochastic calculus to make the reverse transition from the nanoscale back to the macroscale. The MD-level modelling is based on the use of a many-body potential. The model successfully reproduces the crack-velocity oscillations, roughening transitions of the crack surfaces, as well as the macroscopic crack trajectory. The implications for 3-D modelling are discussed.
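The macroscale ingredient described above, drift-diffusion motion of the crack-tip coordinate handled with Itô calculus, reduces in its simplest form to the SDE dX_t = v dt + sqrt(2D) dW_t advanced by the Euler-Maruyama scheme. The sketch below uses invented values for the tip velocity v and diffusion constant D; in the paper those quantities are supplied by the MD (nanoscale) stage.

```python
# Illustrative Euler-Maruyama integration of drift-diffusion motion of a
# single crack-tip coordinate:  dX_t = v dt + sqrt(2 D) dW_t.
# v and D are invented here; in the multiscale model they come from MD.
import numpy as np

rng = np.random.default_rng(4)
v = 1.5e3            # assumed mean crack-tip velocity, m/s
D = 1.0e-4           # assumed crack-tip diffusion constant, m^2/s
dt, steps = 1e-9, 10000

x = np.zeros(steps + 1)
for k in range(steps):
    x[k + 1] = x[k] + v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()

print(f"tip advance: {x[-1]:.3e} m over {steps * dt:.1e} s "
      f"(drift alone would give {v * steps * dt:.3e} m)")
```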
Abstract:
This paper presents a comparison of fire field model predictions with experiment for the case of a fire within a compartment which is vented (buoyancy-driven) to the outside by a single horizontal ceiling vent. Unlike previous work, the mathematical model does not employ a mixing ratio to represent vent temperatures but allows the model to predict vent temperatures a priori. The experiment suggests that the flow through the vent produces oscillatory behaviour in vent temperatures with puffs of smoke emerging from the fire compartment. This type of flow is also predicted by the fire field model. While the numerical predictions are in good qualitative agreement with observations, they overpredict the amplitudes of the temperature oscillations within the vent and also the compartment temperatures. The discrepancies are thought to be due to three-dimensional effects not accounted for in this model as well as using standard ‘practices’ normally used by the community with regards to discretization and turbulence models. Furthermore, it is important to note that the use of the k–ε turbulence model in a transient mode, as is used here, may have a significant effect on the results. The numerical results also suggest that a power-law relationship exists between the frequency of vent temperature oscillation (n) and the heat release rate (Q0) of the type n ∝ Q0^0.290, similar to that observed for compartments with two horizontal vents. This relationship is predicted to occur only for heat release rates below a critical value. Furthermore, the vent discharge coefficient is found to vary in an oscillatory fashion with a mean value of 0.58. Below the critical heat release rate the mean discharge coefficient is found to be insensitive to fire size.
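The reported scaling n ∝ Q0^0.290 is the kind of relationship that can be checked with a straight-line fit in log-log space. The snippet below does this for synthetic (invented) frequency/heat-release pairs, purely to illustrate the fitting step, not to reproduce the paper's data.

```python
# Illustrative power-law fit n = c * Q0**k via least squares in log-log space.
# The (Q0, n) pairs below are synthetic stand-ins, not data from the paper.
import numpy as np

rng = np.random.default_rng(5)
Q0 = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])        # kW, invented
n = 0.5 * Q0 ** 0.29 * (1 + 0.02 * rng.standard_normal(5))  # noisy synthetic n

k, log_c = np.polyfit(np.log(Q0), np.log(n), 1)             # slope = exponent
print(f"fitted exponent k = {k:.3f}, prefactor c = {np.exp(log_c):.3f}")
```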