979 results for first order transition system
Abstract:
The phase diagram of the simplest approximation to double-exchange systems, the bosonic double-exchange model with antiferromagnetic (AFM) superexchange coupling, is fully worked out by means of Monte Carlo simulations, large-N expansions, and variational mean-field calculations. We find a rich phase diagram, with no first-order phase transitions. The most surprising finding is the existence of a segmentlike ordered phase at low temperature for intermediate AFM coupling which cannot be detected in neutron-scattering experiments. This is signaled by a maximum (a cusp) in the specific heat. Below the phase transition, only short-range ordering would be found in neutron scattering. Researchers looking for a quantum critical point in manganites should be wary of this possibility. Finite-size scaling estimates of critical exponents are presented, although large scaling corrections are present in the reachable lattice sizes.
Abstract:
We analyze a recent proposal for spontaneous mirror symmetry breaking based on the coupling of first-order enantioselective autocatalysis and direct production of the enantiomers that invokes a critical role for intrinsic reaction noise. For isolated systems, the racemic state is the unique stable outcome for both stochastic and deterministic dynamics when the system is in compliance with the constraints dictated by the thermodynamics of chemical reaction processes. In open systems, the racemic outcome also results for both stochastic and deterministic dynamics when driving the auto-catalysis unidirectionally by external reagents. Nonracemic states can result in the latter only if the reverse reactions are strictly zero: these are kinetically controlled outcomes for small populations and volumes, and can be simulated by stochastic dynamics. However, the stability of the thermodynamic limit proves that the racemic outcome is the unique stable state for strictly irreversible externally driven autocatalysis. These findings contradict the suggestion that the inhibition requirement of the Frank autocatalytic model for the emergence of homochirality may be relaxed in a noise-induced mechanism.
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities and, as a result of this fact coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, for large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (an increase in detectability index of up to 163%, depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
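The metrics compared above can be illustrated with a minimal sketch. The CNR and non-prewhitening (NPW) matched-filter definitions below follow their standard textbook forms; the Gaussian template and white-noise ensemble are stand-ins for illustration, not the study's CT data (real CT noise is correlated, which is exactly where NPW-type observers and CNR begin to disagree):

```python
import numpy as np

def cnr(lesion_roi, background_roi):
    """Contrast-to-noise ratio: absolute mean difference over background noise."""
    return abs(lesion_roi.mean() - background_roi.mean()) / background_roi.std()

def npw_dprime(template, noise_stack):
    """Non-prewhitening matched-filter detectability index.

    template    : expected noise-free lesion signal (2D array)
    noise_stack : ensemble of signal-absent noise images (N x H x W)
    The NPW observer correlates each image with the signal template;
    d' = (s.s) / std(s.n) is the SNR of that test statistic.
    """
    s = template.ravel()
    outputs = noise_stack.reshape(len(noise_stack), -1) @ s  # s.n per image
    return (s @ s) / outputs.std(ddof=1)

rng = np.random.default_rng(0)
y, x = np.mgrid[:16, :16]
template = 5.0 * np.exp(-((x - 8.0) ** 2 + (y - 8.0) ** 2) / 8.0)  # toy lesion
noise = rng.normal(0.0, 1.0, size=(4000, 16, 16))  # white-noise stand-in
d_prime = npw_dprime(template, noise)
```

For unit-variance white noise, d' approaches sqrt(s·s); introducing a noise covariance (as iterative reconstruction does) changes d' while CNR can stay the same, which is consistent with the finding that CNR correlates poorly across reconstruction algorithms.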
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that with FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., the NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
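The ensemble approach described above can be sketched in a few lines. This is a simplified square-ROI version only, under the assumption of co-registered repeated scans; the novel irregular-ROI NPS method itself is not reproduced here:

```python
import numpy as np

def ensemble_nps(roi_stack, pixel_size=1.0):
    """2D noise power spectrum from repeated scans of one square ROI.

    roi_stack : N x H x W stack of co-registered ROIs from repeated scans.
    Subtracting the ensemble mean removes the deterministic background
    (the phantom texture), leaving approximately pure quantum noise,
    in the spirit of the subtraction technique described in the text.
    """
    noise = roi_stack - roi_stack.mean(axis=0)          # noise-only realizations
    n, h, w = noise.shape
    periodograms = np.abs(np.fft.fft2(noise)) ** 2      # one periodogram per scan
    return pixel_size ** 2 / (h * w) * periodograms.mean(axis=0)

rng = np.random.default_rng(0)
# 200 simulated "repeated scans": constant background plus sigma = 2 white noise
scans = 50.0 + rng.normal(0.0, 2.0, size=(200, 32, 32))
nps = ensemble_nps(scans)
```

For white noise the NPS is flat with mean value sigma^2 times the pixel area, which is a convenient sanity check; locally non-stationary noise would instead be analyzed with many small ROIs placed across the image.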
To move beyond assessing noise properties in textured phantoms toward assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized, using a genetic algorithm, to match the texture in the liver regions of actual patient CT images. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in the uniform phantom than in the textured phantoms.
The final part of this project aimed to develop methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability, with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
Limit-periodic (LP) structures exhibit a type of nonperiodic order yet to be found in a natural material. A recent result in tiling theory, however, has shown that LP order can spontaneously emerge in a two-dimensional (2D) lattice model with nearest- and next-nearest-neighbor interactions. In this dissertation, we explore the question of what types of interactions can lead to a LP state and address the issue of whether the formation of a LP structure in experiments is possible. We study the emergence of LP order in three-dimensional (3D) tiling models and bring the subject into the physical realm by investigating systems with realistic Hamiltonians and low-energy LP states. Finally, we present studies of the vibrational modes of a simple LP ball and spring model whose results indicate that LP materials would exhibit novel physical properties.
A 2D lattice model defined on a triangular lattice with nearest- and next-nearest-neighbor interactions based on the Taylor-Socolar (TS) monotile is known to have a LP ground state. The system reaches that state during a slow quench through an infinite sequence of phase transitions. Surprisingly, even when the strength of the next-nearest-neighbor interactions is zero, in which case there is a large degenerate class of both crystalline and LP ground states, a slow quench yields the LP state. The first study in this dissertation introduces 3D models closely related to the 2D models that exhibit LP phases. The particular 3D models were designed such that next-nearest-neighbor interactions of the TS type are implemented using only nearest-neighbor interactions. For one of the 3D models, we show that the phase transitions are first order, with equilibrium structures that can be more complex than in the 2D case.
In the second study, we investigate systems with physical Hamiltonians based on one of the 2D tiling models with the goal of stimulating attempts to create a LP structure in experiments. We explore physically realizable particle designs while being mindful of particular features that may make the assembly of a LP structure in an experimental system difficult. Through Monte Carlo (MC) simulations, we have found that one particle design in particular is a promising template for a physical particle; a 2D system of identical disks with embedded dipoles is observed to undergo the series of phase transitions which leads to the LP state.
LP structures are well ordered but nonperiodic, and hence have nontrivial vibrational modes. In the third section of this dissertation, we study a ball and spring model with a LP pattern of spring stiffnesses and identify a set of extended modes with arbitrarily low participation ratios, a situation that appears to be unique to LP systems. The balls that oscillate with large amplitude in these modes live on periodic nets with arbitrarily large lattice constants. By studying periodic approximants to the LP structure, we present numerical evidence for the existence of such modes, and we give a heuristic explanation of their structure.
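The participation-ratio analysis described above can be sketched with a toy 1D ball-and-spring chain. The hierarchical stiffness pattern below (springs weakened on length scales 2, 4, 8, ...) is only a hypothetical 1D stand-in for an LP pattern, not the model studied in the dissertation; the participation-ratio formula, however, is the standard one:

```python
import numpy as np

def chain_modes(spring_k):
    """Normal modes of a 1D unit-mass ball-and-spring chain with fixed ends.

    spring_k : stiffnesses of the N+1 springs linking N balls (and the walls).
    Returns eigenvalues (squared frequencies) and eigenvectors of the
    tridiagonal dynamical matrix.
    """
    k = np.asarray(spring_k, dtype=float)
    D = np.diag(k[:-1] + k[1:]) - np.diag(k[1:-1], 1) - np.diag(k[1:-1], -1)
    return np.linalg.eigh(D)

def participation_ratio(mode):
    """~ fraction of balls oscillating with significant amplitude in a mode."""
    u2 = mode ** 2
    return u2.sum() ** 2 / (len(mode) * (u2 ** 2).sum())

# hypothetical hierarchical stiffness pattern: weaken every 2nd, 4th, ... spring
k = np.ones(2 ** 8 + 1)
for level in range(1, 8):
    k[:: 2 ** level] *= 0.5
omega2, modes = chain_modes(k)
prs = np.array([participation_ratio(m) for m in modes.T])
```

For a uniform chain the modes are sinusoidal and the participation ratio tends to 2/3; extended modes with much smaller participation ratios, as reported above, are the signature of interest.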
Abstract:
This paper proposes extended nonlinear analytical models, third-order models, of compliant parallelogram mechanisms. These models accurately capture the effects of the very large axial force within a transverse motion range of 10% of the beam length by incorporating the terms associated with the high-order (up to third-order) axial force. Firstly, the free-body diagram method is employed to derive the nonlinear analytical model for a basic compliant parallelogram mechanism based on load-displacement relations of a single beam, geometry compatibility conditions, and load-equilibrium conditions. The procedures for the forward solutions and inverse solutions are described. Nonlinear analytical models for guided compliant multi-beam parallelogram mechanisms are then obtained. A case study of the compound compliant parallelogram mechanism, composed of two basic compliant parallelogram mechanisms in symmetry, is further implemented. This work estimates the internal axial force change, the transverse force change, and the transverse stiffness change with the transverse motion using the proposed third-order model, in comparison with the first-order model proposed in the prior art. In addition, finite element analysis (FEA) results validate the accuracy of the third-order model for a typical example. It is shown that in the case study the slenderness ratio significantly affects the discrepancy between the third-order model and the first-order model, and that the third-order model can capture a non-monotonic transverse stiffness curve if the beam is thin enough.
Abstract:
Owing to their important roles in biogeochemical cycles, phytoplankton functional types (PFTs) have been the aim of an increasing number of ocean color algorithms. Yet, none of the existing methods are based on phytoplankton carbon (C) biomass, which is a fundamental biogeochemical and ecological variable and the "unit of accounting" in Earth system models. We present a novel bio-optical algorithm to retrieve size-partitioned phytoplankton carbon from ocean color satellite data. The algorithm is based on existing methods to estimate particle volume from a power-law particle size distribution (PSD). Volume is converted to carbon concentrations using a compilation of allometric relationships. We quantify absolute and fractional biomass in three PFTs based on size - picophytoplankton (0.5-2 µm in diameter), nanophytoplankton (2-20 µm) and microphytoplankton (20-50 µm). The mean spatial distributions of total phytoplankton C biomass and individual PFTs, derived from global SeaWiFS monthly ocean color data, are consistent with current understanding of oceanic ecosystems, i.e., oligotrophic regions are characterized by low biomass and dominance of picoplankton, whereas eutrophic regions have high biomass to which nanoplankton and microplankton contribute relatively larger fractions. Global climatological, spatially integrated phytoplankton carbon biomass standing stock estimates using our PSD-based approach yield ~0.25 Gt of C, consistent with analogous estimates from two other ocean color algorithms and several state-of-the-art Earth system models. Satisfactory in situ closure observed between PSD and POC measurements lends support to the theoretical basis of the PSD-based algorithm.
Uncertainty budget analyses indicate that absolute carbon concentration uncertainties are driven by the PSD parameter No which determines particle number concentration to first order, while uncertainties in PFTs' fractional contributions to total C biomass are mostly due to the allometric coefficients. The C algorithm presented here, which is not empirically constrained a priori, partitions biomass in size classes and introduces improvement over the assumptions of the other approaches. However, the range of phytoplankton C biomass spatial variability globally is larger than estimated by any other models considered here, which suggests an empirical correction to the No parameter is needed, based on PSD validation statistics. These corrected absolute carbon biomass concentrations validate well against in situ POC observations.
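The core of the size-partitioning step described above (integrate a power-law PSD over each size class, converting cell volume to carbon allometrically) can be sketched as follows. The allometric coefficients a, b and the reference diameter D0 below are illustrative placeholders, not the values used in the study:

```python
import numpy as np

def size_class_carbon(N0, xi, bounds, a=0.216, b=0.939, D0=2.0):
    """Carbon biomass per phytoplankton size class from a power-law PSD.

    N(D) = N0 * (D / D0)**(-xi) is the differential number concentration,
    and C = a * V**b converts spherical cell volume (um^3) to cell carbon
    (pg); a, b, D0 are placeholder coefficients for illustration only.
    bounds : class edges in um, e.g. [0.5, 2, 20, 50] for pico/nano/micro.
    """
    totals = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        D = np.linspace(lo, hi, 4001)                  # diameters in the class
        cell_vol = np.pi / 6.0 * D ** 3                # spherical cell volume
        cell_c = a * cell_vol ** b                     # allometric carbon per cell
        f = N0 * (D / D0) ** (-xi) * cell_c            # carbon density in D
        totals.append(np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(D)))  # trapezoid rule
    return np.array(totals)

c = size_class_carbon(N0=1e4, xi=4.0, bounds=[0.5, 2.0, 20.0, 50.0])
fractions = c / c.sum()   # pico-, nano-, microphytoplankton biomass fractions
```

Note that absolute concentrations scale linearly with N0 while the fractions depend only on the slope xi and the allometric coefficients, mirroring the uncertainty attribution reported above.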
Abstract:
Total organic carbon, total inorganic carbon, biogenic silica content and total organic carbon/total nitrogen ratios of the Laguna Potrok Aike lacustrine sediment record are used to reconstruct the environmental history of south-east Patagonia during the past 51 ka in high resolution. High lake level conditions are assumed to have prevailed during the Last Glacial, as sediments are carbonate-free. Increased runoff linked to permafrost and reduced evaporation due to colder temperatures and reduced influence of Southern Hemispheric Westerlies (SHW) may have caused these high lake levels with lake productivity being low and organic matter mainly of algal or cyanobacterial origin. Aquatic moss growth and diatom blooms occurred synchronously with southern hemispheric glacial warming events such as the Antarctic A-events, the postglacial warming following the LGM and the Younger Dryas chronozone. During these times, a combination of warmer climatic conditions with related thawing permafrost could have increased the allochthonous input of nutrients and in combination with warmer surface waters increased aquatic moss growth and diatom production. The SHW were not observed to affect southern Patagonia during the Last Glacial. The Holocene presents a completely different lacustrine system because (a) permafrost no longer inhibits infiltration nor emits meltwater pulses and (b) the positioning of the SHW over the investigated area gives rise to strong and dry winds. Under these conditions total organic carbon, total organic carbon/total nitrogen ratios and biogenic silica cease to be first order productivity indicators. On the one hand, the biogenic silica is influenced by dissolution of diatoms due to higher salinity and pH of the lake water under evaporative stress characterizing low lake levels. 
On the other hand, total organic carbon and total organic carbon/total nitrogen profiles are influenced by reworked macrophytes from freshly exposed lake level terraces during lowstands. Total inorganic carbon remains the most reliable proxy for climatic variations during the Holocene as high precipitation of carbonates can be linked to low lake levels and high autochthonous production. The onset of inorganic carbon precipitation has been associated with the southward shift of the SHW over the latitudes of Laguna Potrok Aike. The refined age-depth model of this record suggests that this shift occurred around 9.4 cal. ka BP.
Abstract:
An investigation was conducted to determine the effects of elevated pCO2 on the net production and calcification of an assemblage of corals maintained under near-natural conditions of temperature, light, nutrient, and flow. Experiments were performed in summer and winter to explore possible interactions between seasonal change in temperature and irradiance and the effect of elevated pCO2. Particular attention was paid to interactions between net production and calcification because these two processes are thought to compete for the same internal supply of dissolved inorganic carbon (DIC). A nutrient enrichment experiment was performed because it has been shown to induce a competitive interaction between photosynthesis and calcification that may serve as an analog to the effect of elevated pCO2. Net carbon production, NPC, increased with increased pCO2 at the rate of 3 ± 2% (µmol CO2aq kg⁻¹)⁻¹. Seasonal change of the slope of the NPC-[CO2aq] relationship was not significant. Calcification (G) was strongly related to the aragonite saturation state Ωa. Seasonal change of the G-Ωa relationship was not significant. The first-order saturation state model gave a good fit to the pooled summer and winter data: G = (8 ± 1 mmol CaCO3 m⁻² h⁻¹)(Ωa − 1), r² = 0.87, P = 0.0001. Both nutrient and CO2 enrichment resulted in an increase in NPC and a decrease in G, giving support to the hypothesis that the cellular mechanism underlying the decrease in calcification in response to increased pCO2 could be competition between photosynthesis and calcification for a limited supply of DIC.
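The first-order saturation-state model has the convenient property of a single free parameter, the slope k in G = k(Ωa − 1), since G is forced to zero at Ωa = 1. A minimal zero-intercept fit, on synthetic data generated for illustration (the study's own measurements are not reproduced here), looks like:

```python
import numpy as np

def fit_saturation_model(omega_a, G):
    """Least-squares slope k for G = k*(Omega_a - 1), pinned to G = 0 at Omega_a = 1."""
    x = np.asarray(omega_a) - 1.0
    y = np.asarray(G)
    k = (x @ y) / (x @ x)                      # zero-intercept slope estimate
    r2 = 1.0 - ((y - k * x) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return k, r2

rng = np.random.default_rng(1)
omega_a = rng.uniform(1.5, 4.0, 40)            # synthetic aragonite saturation states
G = 8.0 * (omega_a - 1.0) + rng.normal(0.0, 1.0, 40)  # mmol CaCO3 m^-2 h^-1 + noise
k_hat, r2 = fit_saturation_model(omega_a, G)
```

With the true slope set to 8, the fit recovers a value near the study's reported k = 8 ± 1 with a high r², illustrating why pooled summer and winter data can be described by one curve when the seasonal slope change is not significant.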
Abstract:
Virtual-build-to-order (VBTO) is a form of order fulfilment system in which the producer has the ability to search across the entire pipeline of finished stock, products in production and those in the production plan, in order to find the best product for a customer. It is a system design that is attractive to Mass Customizers, such as those in the automotive sector, whose manufacturing lead time exceeds their customers' tolerable waiting times, and for whom the holding of partly-finished stocks at a fixed decoupling point is unattractive or unworkable. This paper describes and develops the operational concepts that underpin VBTO, in particular the concepts of reconfiguration flexibility and customer aversion to waiting. Reconfiguration is the process of changing a product's specification at any point along the order fulfilment pipeline. The extent to which an order fulfilment system is flexible or inflexible reveals itself in the reconfiguration cost curve, of which there are four basic types. The operational features of the generic VBTO system are described and simulation is used to study its behaviour and performance. The concepts of reconfiguration flexibility and floating decoupling point are introduced and discussed.
Abstract:
Virtual-Build-to-Order (VBTO) is an emerging order fulfilment system within the automotive sector that is intended to improve fulfilment performance by taking advantage of integrated information systems. The primary innovation in VBTO systems is the ability to make available all unsold products that are in the production pipeline to all customers. In a conventional system the pipeline is inaccessible and a customer can be fulfilled by a product from stock or having a product Built-to-Order (BTO), whereas in a VBTO system a customer can be fulfilled by a product from stock, by being allocated a product in the pipeline, or by a build-to-order product. Simulation is used to investigate and profile the fundamental behaviour of the basic VBTO system and to compare it to a Conventional system. A predictive relationship is identified, between the proportions of customers fulfilled through each mechanism and the ratio of product variety / pipeline length. The simulations reveal that a VBTO system exhibits inherent behaviour that alters the stock mix and levels, leading to stock levels being higher than in an equivalent conventional system at certain variety / pipeline ratios. The results have implications for the design and management of order fulfilment systems in sectors such as automotive where VBTO is a viable operational model.
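The dependence on the variety/pipeline-length ratio can be illustrated with a deliberately simplified Monte Carlo sketch. This is an assumption-laden toy (each pipeline slot independently holds one of V equally likely specifications; a customer is fulfilled from the pipeline whenever a matching unsold product exists), not the simulation model used in the paper:

```python
import random

def pipeline_hit_rate(variety, pipeline_len, trials=40000, seed=0):
    """Monte Carlo estimate of the chance a customer's desired specification
    already exists somewhere in the unsold pipeline (toy VBTO sketch)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        wanted = rng.randrange(variety)
        hits += any(rng.randrange(variety) == wanted for _ in range(pipeline_len))
    return hits / trials

# analytic benchmark for this toy: 1 - (1 - 1/V)**P, a function of V and P
rate = pipeline_hit_rate(variety=50, pipeline_len=100)
```

Under these assumptions the pipeline-match probability is 1 − (1 − 1/V)^P, so the split between stock, pipeline allocation, and build-to-order fulfilment is governed by variety relative to pipeline length, in line with the predictive relationship reported above.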
Abstract:
Human operators are unique in their decision-making capability, judgment, and nondeterminism. Their sense of judgment, unpredictable decision procedures, and susceptibility to environmental elements can cause them to erroneously execute a given task description when operating a computer system. Usually, a computer system is protected against some erroneous human behaviors by having necessary safeguard mechanisms in place. But some erroneous human operator behaviors can lead to severe or even fatal consequences, especially in safety-critical systems. A generalized methodology for modeling and analyzing the interactions between computer systems and human operators, where the operators are allowed to deviate from their prescribed behaviors, would provide a formal understanding of the robustness of a computer system against possible aberrant behaviors by its human operators. We provide several methodologies for assisting in modeling and analyzing human behaviors exhibited while operating computer systems. Every human operator is usually given a specific recommended set of guidelines for operating a system. We first present a process-algebraic methodology for modeling and verifying recommended human task execution behavior. We present how one can perform runtime monitoring of a computer system being operated by a human operator to check for violations of temporal safety properties. We consider the concept of a protection envelope, giving a wider class of behaviors than those strictly prescribed by a human task that can be tolerated by a system. We then provide a framework for determining whether a computer system can maintain its guarantees if the human operators operate within their protection envelopes. This framework also helps to determine the robustness of the computer system under weakening of the protection envelopes. In this regard, we present a tool called Tutela that assists in implementing the framework.
We then examine the ability of a system to remain safe under broad classes of variations of the prescribed human task. We develop a framework for addressing two issues. The first issue is: given a human task specification and a protection envelope, will the protection envelope properties still hold under standard erroneous executions of that task by the human operators? In other words, how robust is the protection envelope? The second issue is: in the absence of a protection envelope, can we approximate a protection envelope encompassing those standard erroneous human behaviors that can be safely endured by the system? We present an extension of Tutela that implements this framework. The two frameworks mentioned above use Concurrent Game Structures (CGS) as models for both computer systems and their human operators. However, this formalism has some shortcomings for our uses. We add incomplete-information concepts to CGSs to achieve better modularity for the players. We introduce nondeterminism in both the transition systems and the strategies of players in the modeling of human operators and computer systems. Nondeterministic action strategies for players in incomplete-information Nondeterministic CGS (iNCGS) give a more precise formalism for modeling human behaviors exhibited while operating a computer system. We show how we can reason about a human behavior satisfying a guarantee by providing a semantics of Alternating-Time Temporal Logic based on iNCGS player strategies. In a nutshell, this dissertation provides formal methodology for modeling and analyzing system robustness against both expected and erroneous human operator behaviors.
Resumo:
In this work, the relationship between diameter at breast height (d) and total height (h) of individual trees was modeled with the aim of establishing provisional height-diameter (h-d) equations for maritime pine (Pinus pinaster Ait.) stands in the Lomba ZIF, Northeast Portugal. Using data collected locally, several local and generalized h-d equations from the literature were tested, and adaptations were also considered. Model fitting was conducted using standard nonlinear least squares (nls) methods. The best local and generalized models selected were also tested as mixed models, applying a first-order conditional expectation (FOCE) approximation procedure and maximum likelihood methods to estimate fixed and random effects. For the calibration of the mixed models, and in order to be consistent with the fitting procedure, the FOCE method was also used to test different sampling designs. The results showed that the local h-d equations with two parameters performed better than the analogous models with three parameters. However, a single set of parameter values for the local model cannot be applied to all maritime pine stands in the Lomba ZIF; thus, a generalized model including stand-level covariates, in addition to d, was necessary to obtain adequate predictive performance. No clear superiority of the generalized mixed model over the generalized model with nonlinear least squares parameter estimates was observed. On the other hand, in the case of the local model, the predictive performance greatly improved when random effects were included. The results showed that the mixed model based on the selected local h-d equation is a viable alternative for estimating h when stand variables are not available. Moreover, an adequate calibrated response can be obtained using only 2 to 5 additional h-d measurements in quantile (or random) trees from the distribution of d in the plot (stand).
Balancing sampling effort, accuracy, and straightforwardness in practical applications, the generalized model from the nls fit is recommended. Examples of applications of the selected generalized equation to forest management are presented, namely how to use it to complete missing information from forest inventory, and how such an equation can be incorporated into a stand-level decision support system that optimizes forest management for the maximization of wood volume production in Lomba ZIF maritime pine stands.
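The fitting task above can be illustrated with a hedged sketch. The abstract does not state which two-parameter local equation was selected, so the example below uses the common Michailoff form h = 1.3 + a·exp(-b/d) with invented (d, h) observations; for brevity it linearizes the model and solves by ordinary least squares rather than the nls fitting the thesis actually uses.

```python
# Minimal sketch (NOT the thesis's fitted model): calibrating a hypothetical
# two-parameter local height-diameter equation, h = 1.3 + a*exp(-b/d),
# on invented data, via linearization: ln(h - 1.3) = ln(a) - b*(1/d).
import math

# Hypothetical (d, h) pairs: diameter at breast height (cm), total height (m)
data = [(10, 8.2), (15, 11.0), (20, 13.1), (25, 14.6), (30, 15.8), (35, 16.6)]

# Transform to y = ln(h - 1.3), x = 1/d, then fit the line y = c0 + c1*x
xs = [1.0 / d for d, h in data]
ys = [math.log(h - 1.3) for d, h in data]
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
c1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
c0 = my - c1 * mx
a, b = math.exp(c0), -c1  # back-transform to the model's parameters

def predict_height(d):
    """Predict total height (m) from dbh (cm) with the fitted local model."""
    return 1.3 + a * math.exp(-b / d)

for d in (12, 22, 32):
    print(f"d = {d} cm  ->  h = {predict_height(d):.1f} m")
```

A generalized equation, as recommended in the text, would additionally include stand covariates (e.g., dominant height or stand density) in a or b; the calibration with 2 to 5 extra h-d measurements corresponds to predicting a plot-level random effect rather than refitting from scratch.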
Resumo:
The model presented allows simulating the pesticide concentration in fruit trees and estimating the pesticide bioconcentration factor in fruits of woody species. The model allows estimating the pesticide uptake by plants through the water transpiration stream, as well as the time at which the maximum pesticide concentration occurs in the fruits. The proposed equation relates the bioconcentration factor (BCF) to the following variables: plant water transpiration volume (Q), pesticide transpiration stream concentration factor (TSCF), pesticide stem-water partition coefficient (KWood,w), stem dry biomass (M), and pesticide dissipation rate in the soil-plant system (kEGS). The modeling started from, and was developed as an extension of, the previous "Fruit Tree Model" (FTM) reported by Trapp and collaborators in 2003, to which the hypothesis was added that pesticide degradation in the soil follows first-order kinetics. Model fitness was evaluated through a sensitivity analysis of the pesticide BCF values in fruits with respect to the variability of the model input data.
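Why a first-order dissipation hypothesis produces a well-defined time of maximum fruit concentration can be seen in a generic two-compartment sketch. This is not the published FTM equation set: the loss rate `k_out` and all parameter values below are hypothetical, chosen only to show the peak-then-decline behavior and its closed-form peak time.

```python
# Illustrative sketch (not the FTM equations): a source that decays with
# first-order rate k_egs feeds a fruit compartment that clears with
# first-order rate k_out. All parameter values are hypothetical.
import math

k_egs = 0.05  # 1/day, first-order dissipation in the soil-plant system
k_out = 0.20  # 1/day, hypothetical first-order loss from the fruit
u0 = 1.0      # initial uptake flux into the fruit (normalized)

def fruit_concentration(t):
    """C(t) = u0/(k_out - k_egs) * (exp(-k_egs*t) - exp(-k_out*t))."""
    return u0 / (k_out - k_egs) * (math.exp(-k_egs * t) - math.exp(-k_out * t))

# Setting dC/dt = 0 gives a closed form for the time of peak concentration:
t_max = math.log(k_out / k_egs) / (k_out - k_egs)
print(f"t_max = {t_max:.2f} days, C(t_max) = {fruit_concentration(t_max):.2f}")
```

With these rates the peak occurs near day 9; in the actual model, Q, TSCF, KWood,w, and M would set the magnitude of the uptake term and hence the BCF, which is exactly why the sensitivity analysis targets those inputs.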
Resumo:
Mathematical models of gene regulation are a powerful tool for understanding the complex features of genetic control. While various modeling efforts have been successful at explaining gene expression dynamics, much less is known about how evolution shapes the structure of these networks. An important feature of gene regulatory networks is their stability in response to environmental perturbations. Regulatory systems are thought to have evolved to exist near the transition between stability and instability, so as to have the required robustness to environmental fluctuations while also being able to achieve a wide variety of functions (corresponding to different dynamical patterns). We study a simplified model of gene network evolution in which links are added via different selection rules. These growth models are inspired by recent work on 'explosive' percolation, which shows that when network links are added through competitive rather than random processes, the connectivity phase transition can be significantly delayed and, when it is reached, appears to be first order (discontinuous, e.g., jumping from no failure at all to large expected failure) instead of second order (continuous, e.g., passing from no failure at all through very small expected failure). We find that by modifying the traditional framework for networks grown via competitive link addition to capture how gene networks evolve to avoid damage propagation, we also see significant delays in the transition that depend on the selection rules, but the transitions always appear continuous rather than 'explosive'.
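The competitive link addition the abstract contrasts with random growth can be sketched with the classic Achlioptas product rule: at each step two candidate edges are drawn and the one joining the components with the smaller product of sizes is kept, which delays the emergence of a giant component. The network size, edge count, and seeds below are arbitrary choices for illustration, not the paper's simulation setup.

```python
# Minimal sketch of 'explosive' percolation via the Achlioptas product rule,
# compared against purely random edge addition, using a union-find structure.
# Parameters (n, edge count, seeds) are arbitrary illustration choices.
import random

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def largest_component(n, n_edges, rule, rng):
    uf = UnionFind(n)
    for _ in range(n_edges):
        e1 = (rng.randrange(n), rng.randrange(n))
        if rule == "random":
            a, b = e1
        else:  # product rule: keep the edge joining the smaller components
            e2 = (rng.randrange(n), rng.randrange(n))
            def score(e):
                return uf.size[uf.find(e[0])] * uf.size[uf.find(e[1])]
            a, b = min(e1, e2, key=score)
        uf.union(a, b)
    return max(uf.size[uf.find(v)] for v in range(n))

n = 20000
m = int(0.55 * n)  # just past the random-graph threshold of 0.5*n edges
print("random growth:", largest_component(n, m, "random", random.Random(1)))
print("product rule :", largest_component(n, m, "product", random.Random(1)))
```

At the same edge density, the product rule leaves the largest component far smaller than random growth does, illustrating the delayed transition; the paper's contribution is that selection rules modeling damage avoidance in gene networks delay the transition similarly yet keep it continuous.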
Resumo:
Integrated circuit scaling has enabled a huge growth in processing capability, which necessitates a corresponding increase in inter-chip communication bandwidth. As bandwidth requirements for chip-to-chip interconnection scale, the deficiencies of electrical channels become more apparent. Optical links present a viable alternative due to their low frequency-dependent loss and higher bandwidth density in the form of wavelength division multiplexing. As integrated photonics and bonding technologies mature, commercialization of hybrid-integrated optical links is becoming a reality. Increasing silicon integration leads to better performance in optical links but necessitates a corresponding co-design strategy in both electronics and photonics. In this light, holistic design of high-speed optical links, with an in-depth understanding of photonics and state-of-the-art electronics, brings their performance to unprecedented levels. This thesis presents developments in high-speed optical links by co-designing and co-integrating the primary elements of an optical link: receiver, transmitter, and clocking.
In the first part of this thesis, a 3D-integrated CMOS/silicon-photonic receiver is presented. The electronic chip features a novel design that employs a low-bandwidth TIA front-end, double sampling, and equalization through dynamic offset modulation. Measured results show -14.9dBm sensitivity and an energy efficiency of 170fJ/b at 25Gb/s. The same receiver front-end is also used to implement a source-synchronous 4-channel WDM-based parallel optical receiver. Quadrature ILO-based clocking is employed for synchronization, along with a novel frequency-tracking method that exploits the injection-locking dynamics of a quadrature ring oscillator to increase the effective locking range. An adaptive body-biasing circuit is designed to keep the per-bit energy consumption constant across a wide range of data rates. The prototype measurements indicate a record-low power consumption of 153fJ/b at 32Gb/s. The receiver sensitivity is measured to be -8.8dBm at 32Gb/s.
Next, on the optical transmitter side, three new techniques are presented. The first is a differential ring modulator that breaks the optical bandwidth/quality-factor trade-off known to limit the speed of high-Q ring modulators. This structure maintains a constant energy in the ring to avoid pattern-dependent power droop. As a first proof of concept, a prototype has been fabricated and measured up to 10Gb/s. The second technique is thermal stabilization of micro-ring resonator modulators through direct measurement of temperature using a monolithic PTAT temperature sensor. The measured temperature is used in a feedback loop to adjust the thermal tuner of the ring. A prototype is fabricated, and the closed-loop feedback system is demonstrated to operate at 20Gb/s in the presence of temperature fluctuations. The third technique is a switched-capacitor-based pre-emphasis scheme designed to extend the inherently low bandwidth of carrier-injection micro-ring modulators. A measured prototype of the optical transmitter achieves an energy efficiency of 342fJ/bit at 10Gb/s, and the wavelength stabilization circuit based on the monolithic PTAT sensor consumes 0.29mW.
Lastly, a first-order frequency synthesizer suitable for high-speed on-chip clock generation is discussed. The proposed design features an architecture combining an LC quadrature VCO, two sample-and-holds, a phase interpolator (PI), digital coarse tuning, and rotational frequency detection for fine tuning. In addition to an electrical reference clock, as an extra feature, the prototype chip can receive a low-jitter optical reference clock generated by a high-repetition-rate mode-locked laser. The output clock at 8GHz has an integrated RMS jitter of 490fs, a peak-to-peak periodic jitter of 2.06ps, and a total RMS jitter of 680fs. The reference spurs are measured to be -64.3dB below the carrier. At 8GHz, the system consumes 2.49mW from a 1V supply.