914 results for first order transition system
Abstract:
2000 Mathematics Subject Classification: 62G32, 62G20.
Abstract:
We present, for the first time, a detailed investigation of the impact of second order co-propagating Raman pumping on long-haul 100G WDM DP-QPSK coherent transmission of up to 7082 km using Raman fibre laser based configurations. Signal power and noise distributions along the fibre for each pumping scheme were characterised both numerically and experimentally. Based on these pumping schemes, the Q factor penalties versus co-pump power ratios were experimentally measured and quantified. A significant Q factor penalty of up to 4.15 dB was observed after 1666 km using symmetric bidirectional pumping, compared with counter-pumping only. Our results show that whilst co-pumping minimises the intra-cavity signal power variation and amplification noise, the Q factor penalty with co-pumping was too great for any advantage to be seen. The relative intensity noise (RIN) characteristics of the induced fibre laser and the output signal, and the intra-cavity RF spectra of the fibre laser, are also presented. We attribute the Q factor degradation to a RIN-induced penalty, caused by RIN being transferred from the first order fibre laser and the second order co-pump to the signal. More importantly, two different fibre lasing regimes contributed to the amplification: random distributed feedback lasing when using counter-pumping only, and conventional Fabry-Perot cavity lasing when using any of the bidirectional pumping schemes. This also results in significantly different performance due to the different laser cavity lengths of these two classes of laser.
Abstract:
We experimentally investigate three Raman fibre laser based amplification techniques with second-order bidirectional pumping. Relative intensity noise (RIN) transferred to the signal can be significantly suppressed by reducing first-order reflection near the input end. © 2015 OSA.
Abstract:
We investigate a class of simple models for Langevin dynamics of turbulent flows, including the one-layer quasi-geostrophic equation and the two-dimensional Euler equations. Starting from a path integral representation of the transition probability, we compute the most probable fluctuation paths from one attractor to any state within its basin of attraction. We prove that such fluctuation paths are the time reversed trajectories of the relaxation paths for a corresponding dual dynamics, which are also within the framework of quasi-geostrophic Langevin dynamics. Cases with or without detailed balance are studied. We discuss a specific example for which the stationary measure displays either a second order (continuous) or a first order (discontinuous) phase transition and a tricritical point. In situations where a first order phase transition is observed, the dynamics are bistable. Then, the transition paths between two coexisting attractors are instantons (fluctuation paths from an attractor to a saddle), which are related to the relaxation paths of the corresponding dual dynamics. For this example, we show how one can analytically determine the instantons and compute the transition probabilities for rare transitions between two attractors.
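For orientation, the path-integral statement above reduces, in the simplest overdamped setting, to a Freidlin-Wentzell / Onsager-Machlup action of the generic form sketched below; the drift b and the weak-noise amplitude are generic symbols for illustration, not the authors' specific quasi-geostrophic functional.

```latex
% Overdamped Langevin dynamics with weak noise:  dq/dt = b(q) + sqrt(2*eps)*eta(t).
% As eps -> 0, path probabilities concentrate on minimizers of the action
\[
  \mathcal{A}[q] \;=\; \frac{1}{4}\int_0^T \big\lVert \dot{q}(t) - b(q(t)) \big\rVert^2 \, dt .
\]
% Relaxation paths (dq/dt = b) carry zero action, while the most probable
% fluctuation path (instanton) out of an attractor minimizes A subject to its
% endpoints. In the gradient, detailed-balance case b = -\nabla V the minimizer
% is the time-reversed relaxation path dq/dt = +\nabla V, with action
\[
  \mathcal{A}_{\min} \;=\; V(q_T) - V(q_0) .
\]
```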
Abstract:
In this study we present the essential aspects of Noether's theorem and discuss the interpretation of Lie symmetries, with the aim of applying the theory based on the Lagrangian formalism to economic processes as well. The identification of Lie symmetries of dynamical systems and the characterisation of their behaviour are among the most recent research results in this field. For example, Sen and Tabor (1990) examined Edward Lorenz's (1963) 3D model, which plays a prominent role in the study of complex chaotic dynamics; Baumann and Freyberger (1992) the two-dimensional Lotka-Volterra dynamical system; and finally Almeida and Moreira (1992) the three-wave interaction problem, all by means of the corresponding Lie symmetries. For our empirical analysis we have chosen an economic dynamical system, namely Goodwin's (1967) cycle model, which we set out to investigate by determining the Lie symmetries of the system. / === / The dynamic behavior of a physical system can frequently be described very concisely by the least action principle. At the centre of its mathematical presentation is a specific function of coordinates and velocities, i.e., the Lagrangian. If the integral of the Lagrangian is stationary, then the system is moving along an extremal path through the phase space, and vice versa. It can be seen that each Lie symmetry of a Lagrangian in general corresponds to a conserved quantity, and the conservation principle is explained by a variational symmetry related to a dynamic or geometrical symmetry. Briefly, that is the meaning of Noether's theorem. This paper scrutinizes the substantial characteristics of Noether's theorem, interprets the Lie symmetries via a PDE system, and calculates the generators (symmetry vectors) for Goodwin's cyclical economic growth model. At first it is shown that the Goodwin model also has a Lagrangian structure, therefore Noether's theorem can also be applied here. Then it is proved that the cyclical motion in his model derives from its Lie symmetries, i.e., its dynamic symmetry. All these proofs are based on the investigation of the less complicated Lotka-Volterra model and are extended to the Goodwin model, since the two models are one-to-one maps of each other. The main achievement of this paper is the following: Noether's theorem also plays a crucial role in the mechanics of the Goodwin model. This also means that its cyclical motion is optimal. Generalizing this result, we can assert that the solutions of all dynamical systems described by a first order nonlinear ODE system are optimal by the least action principle, provided they have a Lagrangian.
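For context on the claim that the Lotka-Volterra system (and hence the Goodwin model mapped onto it) possesses a conserved quantity and a Hamiltonian/Lagrangian structure, a standard textbook form is sketched below; the notation is illustrative and not taken from the paper.

```latex
% Classical two-dimensional Lotka-Volterra system (a,b,c,d > 0):
\[
  \dot{x} = x\,(a - b\,y), \qquad \dot{y} = y\,(-c + d\,x).
\]
% Dividing the two equations and integrating yields the conserved quantity
\[
  H(x,y) \;=\; d\,x - c\ln x \;+\; b\,y - a\ln y ,
\]
% whose closed level curves are the cyclical orbits. In log variables
% u = ln x, v = ln y the system is Hamiltonian:  \dot{u} = -\partial H/\partial v,
% \dot{v} = \partial H/\partial u, which is one way to exhibit the variational
% structure that Noether's theorem requires.
```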
Abstract:
The traditional approach to crisis management suggests autocratic leadership, which carries its own risks (the leader becomes the bottleneck of problem solving, only single-loop learning takes place, and crisis management is treated as a matter of efficiency). However, managing today's crises is rather an issue of effectiveness, requiring double-loop learning (second-order change) and a leadership role in the sense of Kotter’s theory. The paper discusses top management's leadership responsibilities and their special tasks in the problem-solving process of change. Inappropriate perception of leadership responsibilities and insistence upon a first-order change strategy result in becoming part of the problem rather than part of its solution.
Abstract:
Ensuring the correctness of software has been the major motivation in software research, constituting a Grand Challenge. Due to its impact on the final implementation, one critical aspect of software is its architectural design. By guaranteeing a correct architectural design, major and costly flaws can be caught early on in the development cycle. Software architecture design has received a lot of attention in the past years, with several methods, techniques and tools developed. However, there is still more to be done, such as providing adequate formal analysis of software architectures. In this regard, a framework to ensure system dependability from design to implementation has been developed at FIU (Florida International University). This framework is based on SAM (Software Architecture Model), an ADL (Architecture Description Language) that allows hierarchical compositions of components and connectors, defines an architectural modeling language for the behavior of components and connectors, and provides a specification language for the behavioral properties. The behavioral model of a SAM model is expressed in the form of Petri nets and the properties in first order linear temporal logic. This dissertation presents a formal verification and testing approach to guarantee the correctness of Software Architectures. The Software Architectures studied are expressed in SAM. For the formal verification approach, the technique applied was model checking and the model checker of choice was Spin. As part of the approach, a SAM model is formally translated to a model in the input language of Spin and verified for its correctness with respect to temporal properties. In terms of testing, a testing approach for SAM architectures was defined which includes the evaluation of test cases based on Petri net testing theory to be used in the testing process at the design level. Additionally, the information at the design level is used to derive test cases for the implementation level. Finally, a modeling and analysis tool (SAM tool) was implemented to help support the design and analysis of SAM models. The results show the applicability of the approach to testing and verification of SAM models with the aid of the SAM tool.
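To make the verification step concrete, the sketch below (a minimal illustration, not the SAM-to-Promela translation itself) exhaustively explores the reachable markings of a small, hypothetical Petri net and checks a simple safety property, which is the kind of state-space search a model checker such as Spin automates and extends to temporal-logic properties.

```python
from collections import deque

# A tiny place/transition Petri net: markings are tuples of token counts.
# Each transition consumes tokens from input places and produces on output places.
# (Hypothetical example net; place and transition names are illustrative only.)
PLACES = ("idle", "busy", "done")
TRANSITIONS = {
    "start":  ({"idle": 1}, {"busy": 1}),   # idle -> busy
    "finish": ({"busy": 1}, {"done": 1}),   # busy -> done
    "reset":  ({"done": 1}, {"idle": 1}),   # done -> idle
}

def enabled(marking, pre):
    return all(marking[PLACES.index(p)] >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = list(marking)
    for p, n in pre.items():
        m[PLACES.index(p)] -= n
    for p, n in post.items():
        m[PLACES.index(p)] += n
    return tuple(m)

def check_safety(initial, invariant):
    """Breadth-first exploration of the reachability graph; returns a
    violating marking if the invariant fails anywhere, else None."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        m = frontier.popleft()
        if not invariant(m):
            return m
        for pre, post in TRANSITIONS.values():
            if enabled(m, pre):
                m2 = fire(m, pre, post)
                if m2 not in seen:
                    seen.add(m2)
                    frontier.append(m2)
    return None

# Safety property: never more than one token in "busy".
initial_marking = (1, 0, 0)
result = check_safety(initial_marking, lambda m: m[PLACES.index("busy")] <= 1)
print("property holds" if result is None else f"counterexample marking: {result}")
```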
Abstract:
Dissolved organic matter (DOM) is one of the largest carbon reservoirs on this planet and is present in aquatic environments as a highly complex mixture of organic compounds. The Florida coastal Everglades (FCE) is one of the largest wetlands in the world. DOM in this system is an important biogeochemical component as most of the nitrogen (N) and phosphorus (P) are in organic forms. Achieving a better understanding of DOM dynamics in large coastal wetlands is critical, and a particularly important issue in the context of Everglades restoration. In this work, the environmental dynamics of surface water DOM on spatial and temporal scales were investigated. In addition, the photo- and bio-reactivity of this DOM was determined, surface-to-groundwater exchange of DOM was investigated, and the size distribution of freshwater DOM in the Everglades was assessed. The data show that DOM dynamics in this ecosystem are controlled by both hydrological and ecological drivers and are clearly different on spatial scales and variable seasonally. The DOM reactivity data, modeled with a multi-pool first order degradation kinetics model, showed that fluorescent DOM in FCE is generally photo-reactive and bio-refractory. Yet sequential degradation experiments revealed a “priming effect” of sunlight on the bacterial uptake and reworking of this subtropical wetland DOM. Interestingly, specific PARAFAC components were found to have different photo- and bio-degradation rates, suggesting a highly heterogeneous nature of the fluorophores associated with the DOM. Surface-to-groundwater exchange of DOM was observed in different regions of the system, and compositional differences were associated with source and photo-reactivity. Lastly, the high degree of heterogeneity of DOM-associated fluorophores suggested by the degradation studies was confirmed through EEM-PARAFAC analysis of DOM along a molecular size continuum, suggesting that the fluorescence characteristics of DOM are strongly controlled by different size fractions and as such can exhibit significant differences in reactivity.
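As an illustration of the multi-pool first-order kinetics mentioned above, here is a minimal two-pool fitting sketch; the synthetic data, the two-pool parameterisation, and all numerical values are assumptions for illustration only, not the study's model or results.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_pool_decay(t, f_labile, k_labile, k_refractory):
    """Two-pool first-order model: a labile and a refractory fraction, each
    decaying exponentially; C(t)/C0 = f*exp(-k1*t) + (1-f)*exp(-k2*t)."""
    return f_labile * np.exp(-k_labile * t) + (1.0 - f_labile) * np.exp(-k_refractory * t)

# Synthetic "remaining fluorescence" time series, for illustration only.
t_days = np.array([0, 1, 2, 4, 7, 14, 21, 30], dtype=float)
rng = np.random.default_rng(0)
obs = two_pool_decay(t_days, 0.4, 0.5, 0.01) + rng.normal(0, 0.01, t_days.size)

popt, pcov = curve_fit(two_pool_decay, t_days, obs,
                       p0=[0.5, 0.3, 0.02],
                       bounds=([0, 0, 0], [1, 5, 1]))
f_hat, k1_hat, k2_hat = popt
print(f"labile fraction={f_hat:.2f}, k_labile={k1_hat:.3f}/d, k_refractory={k2_hat:.4f}/d")
```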
Abstract:
Oil exploration is one of the most important industrial activities of modern society. Although its derivatives have numerous applications in industrial processes, the process generates many undesirable by-products; one of them is the water separated from the oil, commonly called produced water, which contains pollutants that are difficult to degrade. In addition, the high volume of generated water makes its treatment a major problem for the oil industry. Among the major contaminants of such effluents are phenol and its derivatives, substances resistant to natural degradation that, owing to their toxicity, must be removed by a treatment process before final disposal. In order to facilitate the removal of phenol from oil-industry wastewater, an extraction system based on ionic flocculation with surfactant was developed. Ionic flocculation relies on the reaction of a carboxylate surfactant with calcium ions, yielding an insoluble surfactant that, under stirring, aggregates into flocs capable of capturing organic matter by adsorption. In this work, base soap was used as the ionic surfactant in the flocculation process, and phenol removal efficiency was evaluated with respect to the following parameters: concentrations of surfactant, phenol, calcium, and electrolytes; stirring speed; contact time; temperature; and pH. Flocculation of the surfactant in the effluent (initial phenol concentration = 100 ppm) reached 65% phenol removal at surfactant and calcium concentrations of 1300 ppm and 1000 ppm, respectively, at T = 35 °C, pH = 9.7, a stirring rate of 100 rpm, and a contact time of 5 minutes. Keeping the flocs in the aqueous medium promotes desorption of phenol from the floc surface back into solution, reaching 90% desorption after 150 minutes, and the study of desorption kinetics showed that the Lagergren pseudo-first-order model adequately describes the phenol desorption. These results show that the process may constitute a new treatment alternative for the removal of phenol from aqueous effluents of the oil industry.
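For reference, the Lagergren pseudo-first-order model invoked above is conventionally written as follows; this is the standard form, with generic symbols chosen here for illustration and applied to the desorbed amount.

```latex
% q_t: amount desorbed at time t; q_e: equilibrium value; k_1: rate constant.
\[
  \frac{dq_t}{dt} = k_1\,(q_e - q_t)
  \quad\Longrightarrow\quad
  q_t = q_e\!\left(1 - e^{-k_1 t}\right),
  \qquad
  \ln(q_e - q_t) = \ln q_e - k_1 t ,
\]
% so a linear plot of ln(q_e - q_t) versus t supports the pseudo-first-order fit.
```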
Abstract:
The phase diagram of the simplest approximation to double-exchange systems, the bosonic double-exchange model with antiferromagnetic (AFM) superexchange coupling, is fully worked out by means of Monte Carlo simulations, large-N expansions, and variational mean-field calculations. We find a rich phase diagram, with no first-order phase transitions. The most surprising finding is the existence of a segmentlike ordered phase at low temperature for intermediate AFM coupling which cannot be detected in neutron-scattering experiments. This is signaled by a maximum (a cusp) in the specific heat. Below the phase transition, only short-range ordering would be found in neutron scattering. Researchers looking for a quantum critical point in manganites should be wary of this possibility. Finite-size scaling estimates of critical exponents are presented, although large scaling corrections are present in the reachable lattice sizes.
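As background for the finite-size scaling estimates mentioned above, the standard scaling ansatz for a continuous transition is recalled below in generic form; the specific observables and exponent values of the paper are not reproduced here.

```latex
% Finite-size scaling of an observable O on lattices of linear size L near a
% continuous transition at T_c, with correlation-length exponent \nu:
\[
  O(T, L) \;=\; L^{\,x_O/\nu}\, f_O\!\big( (T - T_c)\, L^{1/\nu} \big) ,
\]
% e.g. the susceptibility scales with x_\chi = \gamma, while in the
% thermodynamic limit the correlation length diverges as \xi \sim |T - T_c|^{-\nu}.
```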
Abstract:
We analyze a recent proposal for spontaneous mirror symmetry breaking based on the coupling of first-order enantioselective autocatalysis and direct production of the enantiomers that invokes a critical role for intrinsic reaction noise. For isolated systems, the racemic state is the unique stable outcome for both stochastic and deterministic dynamics when the system is in compliance with the constraints dictated by the thermodynamics of chemical reaction processes. In open systems, the racemic outcome also results for both stochastic and deterministic dynamics when driving the auto-catalysis unidirectionally by external reagents. Nonracemic states can result in the latter only if the reverse reactions are strictly zero: these are kinetically controlled outcomes for small populations and volumes, and can be simulated by stochastic dynamics. However, the stability of the thermodynamic limit proves that the racemic outcome is the unique stable state for strictly irreversible externally driven autocatalysis. These findings contradict the suggestion that the inhibition requirement of the Frank autocatalytic model for the emergence of homochirality may be relaxed in a noise-induced mechanism.
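For orientation, the mutual-inhibition scheme of the original Frank model referred to above can be summarised as follows; this is the standard idealised form with generic rate constants, not the specific kinetic network analysed in the paper.

```latex
% Frank scheme: achiral substrate A, enantiomers X and Y.
\[
  A + X \xrightarrow{k_a} 2X, \qquad
  A + Y \xrightarrow{k_a} 2Y, \qquad
  X + Y \xrightarrow{k_i} \text{inactive},
\]
% giving, at fixed substrate concentration a, the deterministic rate equations
\[
  \dot{x} = k_a\,a\,x - k_i\,x\,y, \qquad
  \dot{y} = k_a\,a\,y - k_i\,x\,y ,
\]
% for which the racemic state x = y is unstable: the enantiomeric excess
% (x-y)/(x+y) grows once a fluctuation breaks the symmetry, which is the
% inhibition-driven mechanism the abstract discusses relaxing.
```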
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high amount of radiation dose to the patient compared to other x-ray imaging modalities and as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All things being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
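As a concrete example of the simplest metric in that comparison, a minimal CNR computation from signal and background regions of interest might look like the sketch below; the ROI values are synthetic and the definition is the generic one, not necessarily the exact variant used in the dissertation.

```python
import numpy as np

def contrast_to_noise_ratio(signal_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """CNR = |mean(signal) - mean(background)| / std(background).
    A simple, task-agnostic metric; unlike observer models it ignores the
    spatial frequency content of both the noise and the object."""
    contrast = abs(signal_roi.mean() - background_roi.mean())
    noise = background_roi.std(ddof=1)
    return contrast / noise

# Illustrative synthetic ROIs (HU values); all numbers are arbitrary.
rng = np.random.default_rng(1)
background = rng.normal(loc=50.0, scale=12.0, size=(32, 32))   # liver-like background
lesion = rng.normal(loc=35.0, scale=12.0, size=(16, 16))       # subtle low-contrast lesion
print(f"CNR = {contrast_to_noise_ratio(lesion, background):.2f}")
```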
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
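For reference, a minimal ensemble estimate of the 2D noise power spectrum from square ROIs is sketched below; the dissertation's contribution was handling irregularly shaped ROIs, which this simple rectangular-ROI version does not attempt, and the synthetic white-noise input is illustrative only.

```python
import numpy as np

def nps_2d(roi_stack: np.ndarray, pixel_mm: float) -> np.ndarray:
    """Ensemble NPS estimate from a stack of square noise-only ROIs
    (shape: n_rois x N x N), e.g. obtained by subtracting repeated scans.
    NPS(fx, fy) = (dx*dy / (Nx*Ny)) * <|DFT2{ROI - mean(ROI)}|^2>."""
    n_rois, nx, ny = roi_stack.shape
    spectra = np.empty((n_rois, nx, ny))
    for i, roi in enumerate(roi_stack):
        detrended = roi - roi.mean()                  # remove the DC/mean component
        dft = np.fft.fftshift(np.fft.fft2(detrended))
        spectra[i] = np.abs(dft) ** 2
    return (pixel_mm ** 2 / (nx * ny)) * spectra.mean(axis=0)

# Illustrative use with synthetic white noise (sigma = 10 HU, 0.5 mm pixels).
rng = np.random.default_rng(2)
rois = rng.normal(0.0, 10.0, size=(50, 64, 64))
nps = nps_2d(rois, pixel_mm=0.5)
# For white noise the NPS integrates back to the variance (Parseval check):
print("variance from NPS:", nps.sum() / (0.5 * 64) ** 2, "(expected ~100)")
```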
To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
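To illustrate what an analytical lesion model of the kind described might look like, here is a hypothetical, radially symmetric example with a logistic (sigmoid) edge profile; the function name, parameters, and numbers are invented for illustration, and the dissertation's models parameterise shape and texture in more detail.

```python
import numpy as np

def spherical_lesion(shape, center_vox, radius_vox, contrast_hu, edge_width_vox):
    """Hypothetical analytical lesion: a sphere of given contrast (HU) whose
    boundary falls off with a logistic (sigmoid) edge profile, returned as a
    voxel volume that can be added to a patient image to form a "hybrid" image."""
    z, y, x = np.indices(shape, dtype=float)
    r = np.sqrt((x - center_vox[2]) ** 2 +
                (y - center_vox[1]) ** 2 +
                (z - center_vox[0]) ** 2)
    # Sigmoid edge: ~contrast inside, ~0 outside, smooth over edge_width_vox.
    return contrast_hu / (1.0 + np.exp((r - radius_vox) / edge_width_vox))

# Example: a 6 mm, -15 HU liver lesion on a 0.7 mm voxel grid (values illustrative).
lesion = spherical_lesion(shape=(64, 64, 64), center_vox=(32, 32, 32),
                          radius_vox=6.0 / 0.7 / 2, contrast_hu=-15.0,
                          edge_width_vox=1.5)
print("peak contrast:", lesion.min(), "HU")  # most negative voxel, near the nominal -15 HU
```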
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Also, lesion-less images were reconstructed. Noise, contrast, CNR, and detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard of care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
Limit-periodic (LP) structures exhibit a type of nonperiodic order yet to be found in a natural material. A recent result in tiling theory, however, has shown that LP order can spontaneously emerge in a two-dimensional (2D) lattice model with nearest- and next-nearest-neighbor interactions. In this dissertation, we explore the question of what types of interactions can lead to a LP state and address the issue of whether the formation of a LP structure in experiments is possible. We study emergence of LP order in three-dimensional (3D) tiling models and bring the subject into the physical realm by investigating systems with realistic Hamiltonians and low energy LP states. Finally, we present studies of the vibrational modes of a simple LP ball and spring model whose results indicate that LP materials would exhibit novel physical properties.
A 2D lattice model defined on a triangular lattice with nearest- and next-nearest-neighbor interactions based on the Taylor-Socolar (TS) monotile is known to have a LP ground state. The system reaches that state during a slow quench through an infinite sequence of phase transitions. Surprisingly, even when the strength of the next-nearest-neighbor interactions is zero, in which case there is a large degenerate class of both crystalline and LP ground states, a slow quench yields the LP state. The first study in this dissertation introduces 3D models closely related to the 2D models that exhibit LP phases. The particular 3D models were designed such that next-nearest-neighbor interactions of the TS type are implemented using only nearest-neighbor interactions. For one of the 3D models, we show that the phase transitions are first order, with equilibrium structures that can be more complex than in the 2D case.
In the second study, we investigate systems with physical Hamiltonians based on one of the 2D tiling models with the goal of stimulating attempts to create a LP structure in experiments. We explore physically realizable particle designs while being mindful of particular features that may make the assembly of a LP structure in an experimental system difficult. Through Monte Carlo (MC) simulations, we have found that one particle design in particular is a promising template for a physical particle; a 2D system of identical disks with embedded dipoles is observed to undergo the series of phase transitions which leads to the LP state.
LP structures are well ordered but nonperiodic, and hence have nontrivial vibrational modes. In the third section of this dissertation, we study a ball and spring model with a LP pattern of spring stiffnesses and identify a set of extended modes with arbitrarily low participation ratios, a situation that appears to be unique to LP systems. The balls that oscillate with large amplitude in these modes live on periodic nets with arbitrarily large lattice constants. By studying periodic approximants to the LP structure, we present numerical evidence for the existence of such modes, and we give a heuristic explanation of their structure.
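As a minimal illustration of the kind of normal-mode calculation involved, the sketch below diagonalises a one-dimensional free chain of balls and springs with an arbitrary stiffness pattern and computes each mode's participation ratio; the alternating stiffness pattern and the 1D geometry are placeholders, not the limit-periodic model studied in the dissertation.

```python
import numpy as np

def normal_modes_1d(stiffness, mass=1.0):
    """Eigenmodes of a free 1D chain of equal masses: stiffness[i] couples
    ball i to ball i+1. Returns (angular frequencies, eigenvectors as columns)."""
    n = len(stiffness) + 1
    dyn = np.zeros((n, n))
    for i, k in enumerate(stiffness):          # assemble the dynamical matrix
        dyn[i, i] += k / mass
        dyn[i + 1, i + 1] += k / mass
        dyn[i, i + 1] -= k / mass
        dyn[i + 1, i] -= k / mass
    omega_sq, modes = np.linalg.eigh(dyn)
    return np.sqrt(np.clip(omega_sq, 0.0, None)), modes

def participation_ratio(mode):
    """PR = (sum u_i^2)^2 / (N * sum u_i^4); ~1 for a mode spread evenly over
    all sites, ~1/N for a mode localised on a single ball."""
    u2 = mode ** 2
    return u2.sum() ** 2 / (len(mode) * (u2 ** 2).sum())

# Placeholder stiffness pattern (two alternating values); not the LP pattern.
stiffness = np.where(np.arange(199) % 2 == 0, 1.0, 2.0)
omegas, modes = normal_modes_1d(stiffness)
prs = np.array([participation_ratio(modes[:, j]) for j in range(modes.shape[1])])
print("lowest nonzero mode frequency:", omegas[1], "participation ratio:", prs[1])
```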