937 results for Mass-consistent model
Abstract:
Historic changes in water-use management in the Florida Everglades have caused the quantity of freshwater inflow to Florida Bay to decline by approximately 60% while altering its timing and spatial distribution. Two consequences have been (1) increased salinity throughout the bay, including occurrences of hypersalinity, coupled with a decrease in salinity variability, and (2) changes in benthic habitat structure. Restoration goals have been proposed to return the salinity climates (salinity and its variability) of Florida Bay to more estuarine conditions through changes in upstream water management, thereby returning seagrass species cover to a more historic state. To assess the potential for meeting those goals, we used two modeling approaches and long-term monitoring data. First, we applied the hydrological mass balance model FATHOM to predict salinity climate changes in sub-basins throughout the bay in response to a broad range of freshwater inflows from the Everglades. Second, because seagrass species exhibit different sensitivities to salinity climates, we used the FATHOM-modeled salinity climates as input to a statistical discriminant function model that associates eight seagrass community types with water quality variables including salinity, salinity variability, total organic carbon, total phosphorus, nitrate, and ammonium, as well as sediment depth and light reaching the benthos. Salinity climates in the western sub-basins bordering the Gulf of Mexico were insensitive to even the largest (5-fold) modeled increases in freshwater inflow. However, the north, northeastern, and eastern sub-basins were highly sensitive to freshwater inflow and responded to comparatively small increases with decreased salinity and increased salinity variability. The discriminant function model predicted increased occurrences of Halodule wrightii communities and decreased occurrences of Thalassia testudinum communities in response to the more estuarine salinity climates. The shift in community composition represents a return to the historically observed state and suggests that restoration goals for Florida Bay can be achieved through restoration of freshwater inflow from the Everglades.
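To make the second modeling step concrete, here is a minimal sketch of a discriminant-function classifier of the kind described, using scikit-learn on synthetic data; the variable ordering, sample sizes, and data are illustrative assumptions, not the study's fitted model.

```python
# Sketch: classifying seagrass community types from water-quality variables,
# in the spirit of the discriminant function model described above.
# The synthetic data and variable ordering are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Predictors per sub-basin: salinity, salinity variability, TOC, TP,
# nitrate, ammonium, sediment depth, benthic light.
n_obs, n_vars = 200, 8
X = rng.normal(size=(n_obs, n_vars))
# Eight community types (e.g., Thalassia- or Halodule-dominated classes).
y = rng.integers(0, 8, size=n_obs)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# Predict the community type implied by a FATHOM-modeled salinity climate.
scenario = rng.normal(size=(1, n_vars))
print(lda.predict(scenario), lda.predict_proba(scenario).round(2))
```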
Abstract:
This thesis develops bootstrap methods for factor models, which have been widely used to generate forecasts since the pioneering work of Stock and Watson (2002) on diffusion indices. These models accommodate a large number of macroeconomic and financial variables as predictors, a useful feature for incorporating the diverse information available to economic agents. The thesis therefore proposes econometric tools that improve inference in factor models using latent factors extracted from a large panel of observed predictors. It is divided into three complementary chapters, the first two written in collaboration with Sílvia Gonçalves and Benoit Perron. In the first chapter, we study how bootstrap methods can be used for inference in models forecasting h periods into the future. To that end, it examines bootstrap inference in a factor-augmented regression setting where the errors may be autocorrelated. It generalizes the results of Gonçalves and Perron (2014) and proposes and justifies two residual-based approaches: the block wild bootstrap and the dependent wild bootstrap. Our simulations show improved coverage rates of confidence intervals for the estimated coefficients using these approaches, compared with asymptotic theory and with the wild bootstrap, in the presence of serial correlation in the regression errors. The second chapter proposes bootstrap methods for constructing forecast intervals that relax the assumption of normality of the innovations. We propose bootstrap prediction intervals for an observation h periods into the future and for its conditional mean. We assume these forecasts are made using a set of factors extracted from a large panel of variables. Because we treat the factors as latent, our forecasts depend both on the estimated factors and on the estimated regression coefficients. Under regularity conditions, Bai and Ng (2006) proposed constructing asymptotic intervals under the assumption of Gaussian innovations. The bootstrap allows us to relax this assumption and to construct prediction intervals that are valid under more general assumptions. Moreover, even under Gaussianity, the bootstrap yields more accurate intervals when the cross-sectional dimension is relatively small, because it accounts for the bias of the ordinary least squares estimator, as shown in a recent study by Gonçalves and Perron (2014). In the third chapter, we propose consistent selection procedures for factor-augmented regressions in finite samples. We first show that the usual cross-validation method is inconsistent, but that its generalization, leave-d-out cross-validation, selects the smallest set of estimated factors spanning the space generated by the true factors. The second criterion, whose validity we also establish, generalizes the bootstrap approximation of Shao (1996) to factor-augmented regressions. Simulations show an improved probability of parsimoniously selecting the estimated factors compared with the available selection methods. The empirical application revisits the relationship between macroeconomic and financial factors and excess returns on the US stock market. Among the factors estimated from a large panel of US macroeconomic and financial data, the factors strongly correlated with interest rate spreads and the Fama-French factors have good predictive power for excess returns.
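To illustrate the kind of residual-based bootstrap studied in the first chapter, here is a minimal sketch using a plain wild bootstrap in a factor-augmented regression; the block wild and dependent wild variants discussed in the thesis replace the i.i.d. Rademacher multipliers with block-wise or smoothed ones to handle serial correlation. The data-generating process and the principal-components factor extraction are illustrative assumptions.

```python
# Sketch: wild bootstrap confidence intervals in a factor-augmented
# regression with factors estimated by principal components.
import numpy as np

rng = np.random.default_rng(1)
T, N, r = 200, 100, 2          # time periods, panel size, number of factors

F = rng.normal(size=(T, r))                      # latent factors
Lam = rng.normal(size=(N, r))                    # loadings
X = F @ Lam.T + rng.normal(size=(T, N))          # observed panel
y = F @ np.array([1.0, -0.5]) + rng.normal(size=T)

# Estimate the factor space by principal components of XX'/T.
eigval, eigvec = np.linalg.eigh(X @ X.T / T)
F_hat = np.sqrt(T) * eigvec[:, -r:]

beta_hat, *_ = np.linalg.lstsq(F_hat, y, rcond=None)
resid = y - F_hat @ beta_hat

# Wild bootstrap: perturb residuals with Rademacher weights, re-estimate.
B = 999
boot_betas = np.empty((B, r))
for b in range(B):
    eta = rng.choice([-1.0, 1.0], size=T)        # i.i.d. multipliers
    y_star = F_hat @ beta_hat + resid * eta
    boot_betas[b], *_ = np.linalg.lstsq(F_hat, y_star, rcond=None)

# Percentile confidence interval for the first coefficient (identified
# only up to the usual rotation of the estimated factors).
lo, hi = np.percentile(boot_betas[:, 0], [2.5, 97.5])
print(f"95% bootstrap CI for beta_1: [{lo:.3f}, {hi:.3f}]")
```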
Abstract:
The thesis uses a three-dimensional, first-principles model of the ionosphere in combination with a High Frequency (HF) raytracing model to address key topics related to the physics of HF propagation and artificial ionospheric heating. In particular, it: 1. Explores the effect of the ubiquitous electron density gradients caused by Medium Scale Traveling Ionospheric Disturbances (MSTIDs) on high-angle-of-incidence HF radio wave propagation. Previous studies neglected the all-important presence of horizontal gradients in both the cross- and down-range directions, which refract the HF waves, significantly changing their path through the ionosphere. The physics-based ionosphere model SAMI3/ESF is used to generate a self-consistently evolving MSTID that allows for the examination of the spatio-temporal progression of the HF radio waves in the ionosphere. 2. Tests the potential of, and determines engineering requirements for, ground-based high-power HF heaters to trigger and control the evolution of Equatorial Spread F (ESF). Interference from ESF on radio wave propagation through the ionosphere remains a critical issue for HF system reliability. Artificial HF heating has been shown to create plasma density cavities in the ionosphere similar to those that may trigger ESF bubbles. The work explores whether HF heating may trigger or control ESF bubbles. 3. Uses the combined ionosphere and HF raytracing models to create the first self-consistent HF heating model. This model is utilized to simulate results from an Arecibo experiment and to provide an understanding of the physical mechanism behind the observed phenomena. The insights gained provide engineering guidance for new artificial heaters being built for use in low- to middle-latitude regions. In accomplishing the above topics: (i) I generated a model MSTID using the SAMI3/ESF code and used a raytrace model to examine the effects of the MSTID gradients on radio wave propagation observables; (ii) I implemented a three-dimensional HF heating model in SAMI3/ESF and used the model to determine whether HF heating could artificially generate an ESF bubble; (iii) I created the first self-consistent model for artificial HF heating using the SAMI3/ESF ionosphere model and the MoJo raytrace model, and ran a series of simulations that successfully modeled the results of early artificial heating experiments at Arecibo.
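The physical reason density gradients matter for raytracing is compactly captured by the refractive index of an unmagnetized, collisionless plasma; the short sketch below computes it for an MSTID-like density perturbation. The density, perturbation size, and wave frequency are illustrative values, not the thesis's simulation parameters.

```python
# Sketch: why electron density gradients refract HF waves. For an
# unmagnetized, collisionless plasma n^2 = 1 - (f_p/f)^2, where the plasma
# frequency f_p is set by the local electron density; a horizontal density
# gradient therefore bends a ray toward lower density.
import numpy as np

E_CHARGE = 1.602176634e-19     # C
E_MASS = 9.1093837015e-31      # kg
EPS0 = 8.8541878128e-12        # F/m

def plasma_frequency(n_e):
    """Electron plasma frequency in Hz for density n_e in m^-3."""
    return np.sqrt(n_e * E_CHARGE**2 / (EPS0 * E_MASS)) / (2 * np.pi)

def refractive_index(n_e, f_hf):
    """Phase refractive index of an HF wave of frequency f_hf (Hz)."""
    return np.sqrt(np.maximum(0.0, 1.0 - (plasma_frequency(n_e) / f_hf) ** 2))

# A 10% MSTID-like density perturbation changes the refractive index enough
# to displace a ray path appreciably over long propagation distances.
n_e = 1.0e12                    # m^-3, representative F-region density
for perturbed in (n_e, 1.1 * n_e):
    print(refractive_index(perturbed, f_hf=10.0e6))
```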
Abstract:
Introduction: Prediction of soft tissue changes following orthognathic surgery has been frequently attempted in the past decades. It has gradually progressed from the classic "cut and paste" of photographs to computer-assisted 2D surgical prediction planning; finally, comprehensive 3D surgical planning was introduced to help surgeons and patients decide on the magnitude and direction of surgical movements, as well as the type of surgery to be considered, for the correction of facial dysmorphology. A wealth of experience has been gained, and a large body of published literature is available, which has augmented knowledge of facial soft tissue behaviour and helped improve the ability to closely simulate facial changes following orthognathic surgery. This was particularly noticeable following the introduction of three-dimensional imaging into medical research and clinical applications. Several approaches have been considered to mathematically predict soft tissue changes in three dimensions following orthognathic surgery. The most common are the finite element model and the mass tensor model. These were developed into software packages which are currently used in clinical practice. In general, these methods produce an acceptable level of prediction accuracy of soft tissue changes following orthognathic surgery. Studies, however, have shown limited prediction accuracy at specific regions of the face, in particular the areas around the lips.
Aims: The aim of this project was to conduct a comprehensive assessment of hard and soft tissue changes following orthognathic surgery and to introduce a new method for prediction of facial soft tissue changes.
Methodology: The study was carried out on the pre- and post-operative CBCT images of 100 patients who received orthognathic surgery treatment at Glasgow Dental Hospital and School, Glasgow, UK. Three groups of patients were included in the analysis: patients who underwent Le Fort I maxillary advancement surgery, bilateral sagittal split mandibular advancement surgery, or bimaxillary advancement surgery. A generic facial mesh was used to standardise the information obtained from each patient's facial image, and principal component analysis (PCA) was applied to interpolate the correlations between the skeletal surgical displacement and the resultant soft tissue changes. The identified relationship between hard tissue and soft tissue was then applied to a new set of preoperative 3D facial images, and the predicted results were compared to the actual surgical changes measured from the post-operative 3D facial images. A set of validation studies was conducted:
• Comparison between voxel-based registration and surface registration for analysing changes following orthognathic surgery. The results showed no statistically significant difference between the two methods. Voxel-based registration, however, was more reliable, as it preserved the link between the soft tissue and skeletal structures of the face during the image registration process. Accordingly, voxel-based registration was the method of choice for superimposition of the pre- and post-operative images. The result of this study was published in a refereed journal.
• Direct DICOM slice landmarking: a novel technique to quantify the direction and magnitude of skeletal surgical movements. This method represents a new approach to quantifying maxillary and mandibular surgical displacement in three dimensions. The technique involves measuring the distance of corresponding landmarks, digitized directly on DICOM image slices, relative to three-dimensional reference planes. The accuracy of the measurements was assessed against a set of "gold standard" measurements extracted from simulated model surgery. The results confirmed the accuracy of the method to within 0.34 mm; it was therefore applied in this study. The results of this validation were published in a peer-refereed journal.
• The use of a generic mesh to assess soft tissue changes using stereophotogrammetry. The generic facial mesh played a major role in the soft tissue dense correspondence analysis. The conformed generic mesh represented the geometrical information of the individual facial mesh onto which it was conformed (elastically deformed); the accuracy of generic mesh conformation is therefore essential to guarantee an accurate replica of the individual facial characteristics. The results showed an acceptable overall mean conformation error of 1 mm. The results of this study were accepted for publication in a peer-refereed scientific journal.
Skeletal tissue analysis was performed using the validated direct DICOM slice landmarking method, while soft tissue analysis was performed using dense correspondence analysis. The soft tissue analysis was novel and produced a comprehensive description of facial changes in response to orthognathic surgery; the results were accepted for publication in a refereed scientific journal. The main soft tissue changes associated with Le Fort I surgery were advancement at the midface region combined with widening of the paranasal region, upper lip, and nostrils. Minor changes were noticed at the tip of the nose and the oral commissures. The main soft tissue changes associated with mandibular advancement surgery were advancement and downward displacement of the chin and lower lip regions, limited widening of the lower lip, and slight reversion of the lower lip vermilion, combined with minimal backward displacement of the upper lip; minimal changes were observed at the oral commissures. The main soft tissue changes associated with bimaxillary advancement surgery were generalized advancement of the middle and lower thirds of the face, combined with widening of the paranasal, upper lip, and nostril regions. In the Le Fort I cases, the correlation between the facial soft tissue changes and the skeletal surgical movements was assessed using PCA. Leave-one-out cross-validation was applied to the 30 cases that had undergone the Le Fort I osteotomy procedure to make effective use of the data for the prediction algorithm. The prediction accuracy of soft tissue changes showed a mean error ranging from 0.0006 mm (± 0.582 mm) at the nose region to −0.0316 mm (± 2.1996 mm) across the various facial regions.
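To make the prediction step concrete, here is a minimal sketch of PCA-plus-regression between skeletal displacements and dense soft tissue changes, evaluated by leave-one-out cross-validation. The array shapes, component count, and the regression-on-PCA-scores pipeline are illustrative assumptions, not the study's exact algorithm.

```python
# Sketch: relating skeletal surgical displacement to soft tissue change
# with PCA + linear regression, scored by leave-one-out cross-validation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
n_cases = 30                   # e.g., the Le Fort I group
n_hard, n_soft = 60, 3000      # skeletal landmark coords, dense mesh coords

X = rng.normal(size=(n_cases, n_hard))            # surgical displacements
Y = (X @ rng.normal(size=(n_hard, n_soft))) * 0.1 \
    + rng.normal(size=(n_cases, n_soft))          # soft tissue changes

model = make_pipeline(PCA(n_components=10), LinearRegression())

errors = []
for train, test in LeaveOneOut().split(X):
    model.fit(X[train], Y[train])
    pred = model.predict(X[test])
    errors.append(np.mean(pred - Y[test]))        # signed mean error, "mm"

print(f"mean prediction error: {np.mean(errors):.4f} ± {np.std(errors):.4f}")
```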
Abstract:
Synthetic biology, by co-opting molecular machinery from existing organisms, can be used as a tool for building new genetic systems from scratch, for understanding natural networks through perturbation, or for hybrid circuits that piggy-back on existing cellular infrastructure. Although the toolbox for genetic circuits has greatly expanded in recent years, it is still difficult to separate the circuit function from its specific molecular implementation. In this thesis, we discuss the function-driven design of two synthetic circuit modules, and use mathematical models to understand the fundamental limits of circuit topology versus operating regimes as determined by the specific molecular implementation. First, we describe a protein concentration tracker circuit that sets the concentration of an output protein relative to the concentration of a reference protein. The functionality of this circuit relies on a single negative feedback loop that is implemented via small programmable protein scaffold domains. We build a mass-action model to understand the relevant timescales of the tracking behavior and how the input/output ratios and circuit gain might be tuned with circuit components. Second, we design an event detector circuit with permanent genetic memory that can record order and timing between two chemical events. This circuit was implemented using bacteriophage integrases that recombine specific segments of DNA in response to chemical inputs. We simulate expected population-level outcomes using a stochastic Markov-chain model, and investigate how inferences on past events can be made from differences between single-cell and population-level responses. Additionally, we present some preliminary investigations on spatial patterning using the event detector circuit as well as the design of stationary phase promoters for growth-phase dependent activation. These results advance our understanding of synthetic gene circuits, and contribute towards the use of circuit modules as building blocks for larger and more complex synthetic networks.
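As a minimal sketch of the mass-action view of the tracker circuit: below, output production is driven by reference protein not yet sequestered by output (1:1 scaffold binding), which closes a single negative feedback loop so the output settles at a fixed fraction of the reference. All rate constants and the sequestration form are illustrative assumptions, not the thesis's fitted model.

```python
# Sketch: a toy mass-action model of a protein concentration tracker.
import numpy as np
from scipy.integrate import solve_ivp

k_prod, k_deg = 1.0, 0.2   # production and degradation/dilution rates

def reference(t):
    return 1.0 if t < 50.0 else 2.0     # step change in reference level

def rhs(t, y):
    out = y[0]
    free_ref = max(0.0, reference(t) - out)   # reference left unbound
    # Negative feedback: more output -> less free reference -> less production.
    return [k_prod * free_ref - k_deg * out]

sol = solve_ivp(rhs, (0.0, 100.0), [0.0], max_step=0.5)
# Output relaxes to k_prod/(k_prod + k_deg) of the reference on a timescale
# set by k_prod + k_deg, then re-tracks after the step at t = 50.
print(sol.y[0][-1] / reference(100.0))
```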
Abstract:
The International Space Station (ISS) requires a substantial amount of potable water for use by the crew. The economic and logistic limitations of transporting the vast amount of water required onboard the ISS necessitate onboard recovery and reuse of the aqueous waste streams. Various treatment technologies are employed within the ISS water processor to render the waste water potable, including filtration, ion exchange, adsorption, and catalytic wet oxidation. The ion exchange resins and adsorption media are combined in multifiltration beds for removal of ionic and organic compounds. A mathematical model (MFBMODEL™) designed to predict the performance of a multifiltration (MF) bed was developed. MFBMODEL consists of ion exchange models for describing the behavior of the different resin types in a MF bed (e.g., mixed bed, strong acid cation, strong base anion, and weak base anion exchange resins) and an adsorption model capable of predicting the performance of the adsorbents in a MF bed. Multicomponent ion exchange equilibrium models that incorporate the water formation reaction, electroneutrality condition, and degree of ionization of weak acids and bases for mixed bed, strong acid cation, strong base anion, and weak base anion exchange resins were developed and verified. The equilibrium models use a tanks-in-series approach that allows for consideration of variable influent concentrations. The adsorption modeling approach was developed in related studies, and its application within the MFBMODEL framework is demonstrated in the Appendix to this study. MFBMODEL consists of a graphical user interface programmed in Visual Basic and Fortran computational routines. This dissertation shows MF bed modeling results in which the model is verified for a surrogate of the ISS waste shower and handwash stream. In addition, a multicomponent ion exchange model that incorporates mass transfer effects was developed; it is capable of describing the performance of strong acid cation (SAC) and strong base anion (SBA) exchange resins, though it does not include reaction effects. This dissertation presents results showing the mass transfer model's capability to predict the performance of binary and multicomponent column data for SAC and SBA exchange resins. The ion exchange equilibrium and mass transfer models developed in this study are also applicable to terrestrial water treatment systems. They could be applied for removal of cations and anions from groundwater (e.g., hardness, nitrate, perchlorate) and from industrial process waters (e.g., boiler water, ultrapure water in the semiconductor industry).
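The tanks-in-series idea can be illustrated in a few lines: each "tank" is a well-mixed stage in local equilibrium with its resin, and chaining stages approximates a column while naturally accommodating time-varying influent. The sketch below uses a single species with a linear isotherm; all parameters are illustrative assumptions, not MFBMODEL's multicomponent chemistry.

```python
# Sketch: tanks-in-series breakthrough for one species, linear isotherm.
import numpy as np

n_tanks = 20          # stages approximating the MF bed
q_flow = 1.0          # volumetric flow rate (L/min)
v_tank = 0.5          # liquid volume per stage (L)
k_eq = 50.0           # linear equilibrium constant: resin loading = k_eq * c
m_resin = 0.1         # resin mass per stage (kg)

def step(c, c_in, dt):
    """Advance all stage concentrations by dt, feeding each from upstream."""
    c_new = c.copy()
    for i in range(n_tanks):
        upstream = c_in if i == 0 else c[i - 1]
        # Liquid + equilibrated resin acts as an enlarged mixing volume.
        capacity = v_tank + m_resin * k_eq
        c_new[i] += dt * q_flow * (upstream - c[i]) / capacity
    return c_new

c = np.zeros(n_tanks)
dt, t_end = 0.01, 200.0
for _ in range(int(t_end / dt)):
    c = step(c, c_in=1.0, dt=dt)    # constant influent; could vary in time
print(f"effluent/influent after {t_end} min: {c[-1]:.3f}")
```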
Abstract:
In this work we study the dynamical generation of mass in the massless N = 1 Wess-Zumino model in three-dimensional spacetime. Using the tadpole method to compute the effective potential, we observe that supersymmetry is dynamically broken together with the discrete symmetry A(x) → −A(x). We show that this model, unlike nonsupersymmetric scalar models, exhibits a consistent perturbative dynamical generation of mass after two-loop corrections to the effective potential.
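For orientation, the standard way a dynamically generated mass is read off from an effective potential is sketched below (textbook relations assumed here, not taken from the paper; φ denotes the scalar background field):

```latex
% Schematic, standard relations (assumed, not from the paper): the minimum
% of the effective potential moves away from the origin, breaking
% A(x) -> -A(x), and the curvature at the minimum is the generated mass.
\[
  \left.\frac{\partial V_{\mathrm{eff}}}{\partial \varphi}\right|_{\varphi=\varphi_{\min}} = 0,
  \qquad
  m_{\mathrm{dyn}}^{2}
  = \left.\frac{\partial^{2} V_{\mathrm{eff}}}{\partial \varphi^{2}}\right|_{\varphi=\varphi_{\min}},
  \qquad \varphi_{\min} \neq 0 .
\]
```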
Abstract:
We design consistent discontinuous Galerkin finite element schemes for the approximation of a quasi-incompressible two-phase flow model of Allen–Cahn/Cahn–Hilliard/Navier–Stokes–Korteweg type which allows for phase transitions. We show that the scheme is mass conservative and monotonically energy dissipative. In this case the dissipation is isolated to discrete equivalents of those effects already causing dissipation on the continuous level; that is, no artificial numerical dissipation is added into the scheme. In this sense the methods are consistent with the energy dissipation of the continuous PDE system.
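The two structural properties claimed can be stated schematically as follows (generic notation assumed here, not the paper's):

```latex
% Schematic statement of the two discrete structural properties
% (u_h: discrete solution, \rho_h: discrete density, E: energy functional).
\[
  \frac{d}{dt}\int_\Omega \rho_h \, dx = 0
  \qquad \text{(discrete mass conservation)},
\]
\[
  \frac{d}{dt}\, E(u_h) = -\,\mathcal{D}(u_h) \le 0
  \qquad \text{(monotone energy dissipation)},
\]
% where \mathcal{D} contains only discrete counterparts of the physical
% dissipation mechanisms already present in the continuous PDE system.
```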
Abstract:
We derive the equation of state of nuclear matter for the quark-meson coupling model, taking into account quantum fluctuations of the σ meson as well as vacuum polarization effects for the nucleons. This model incorporates quark degrees of freedom explicitly, with the quarks coupled to the scalar and vector mesons. Quantum fluctuations lead to a softer equation of state for nuclear matter, giving a lower value of the incompressibility than would be reached without quantum effects. The in-medium nucleon and σ-meson masses are also calculated in a self-consistent manner. The spectral function of the σ meson is calculated, and at high densities the σ mass is increased with respect to the purely classical approximation.
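For orientation, the standard quark-meson coupling relations behind these statements are sketched below, in generic notation that may differ from the paper's:

```latex
% Standard QMC-type relations (assumed for orientation, not from the paper):
% the in-medium nucleon mass decreases with the mean scalar field
% \bar\sigma, with an effective coupling g_\sigma(\bar\sigma) derived from
% the quark structure of the nucleon.
\[
  M_N^{*} = M_N - g_\sigma(\bar\sigma)\,\bar\sigma ,
  \qquad
  \frac{\partial \varepsilon(\rho_B, \bar\sigma)}{\partial \bar\sigma} = 0 ,
\]
% where \varepsilon is the energy density and the second relation fixes
% \bar\sigma self-consistently at each baryon density \rho_B.
```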
Abstract:
We consider a model eigenvalue problem (EVP) in 1D, with periodic or semi-periodic boundary conditions (BCs). The discretization of this type of EVP by consistent-mass finite element methods (FEMs) leads to the generalized matrix EVP Kc = λMc, where K and M are real, symmetric matrices with a certain (skew-)circulant structure. In this paper we restrict our attention to a quadratic FE mesh. Explicit expressions for the eigenvalues of the resulting algebraic EVP are established. This leads to an explicit form for the approximation error in terms of the mesh parameter, which confirms the theoretical error estimates obtained in [2].
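As a concrete instance of this problem class, here is a sketch that assembles the consistent-mass quadratic (P2) element matrices for −u″ = λu on (0, 1) with periodic BCs and solves the resulting Kc = λMc; the mesh size and the specific model operator are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: generalized matrix EVP K c = lambda M c from a consistent-mass
# P2 FEM discretization of -u'' = lambda u with periodic BCs on (0, 1).
import numpy as np
from scipy.linalg import eigh

n_el = 16
h = 1.0 / n_el
n_dof = 2 * n_el                      # periodic: last node wraps to node 0

# Standard P2 element stiffness and consistent mass matrices.
Ke = (1.0 / (3.0 * h)) * np.array([[7, -8, 1], [-8, 16, -8], [1, -8, 7]])
Me = (h / 30.0) * np.array([[4, 2, -1], [2, 16, 2], [-1, 2, 4]])

K = np.zeros((n_dof, n_dof))
M = np.zeros((n_dof, n_dof))
for e in range(n_el):
    # Local nodes: left vertex, midpoint, right vertex (wrapping at the end).
    dofs = [2 * e, 2 * e + 1, (2 * e + 2) % n_dof]
    for a in range(3):
        for b in range(3):
            K[dofs[a], dofs[b]] += Ke[a, b]
            M[dofs[a], dofs[b]] += Me[a, b]

lam = eigh(K, M, eigvals_only=True)   # generalized symmetric EVP
# Exact periodic eigenvalues are (2*pi*k)^2, each nonzero one with
# multiplicity two; compare the lowest few.
print(lam[:5].round(3))
print([round((2 * np.pi * k) ** 2, 3) for k in (0, 1, 1, 2, 2)])
```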