958 results for Physical-ecological coupled model
Abstract:
The neutron-rich lead isotopes, up to Pb-216, have been studied for the first time, exploiting the fragmentation of a primary uranium beam at the FRS-RISING setup at GSI. The observed isomeric states exhibit electromagnetic transition strengths which deviate from state-of-the-art shell-model calculations. It is shown that their complete description demands the introduction of effective three-body interactions and two-body transition operators in the conventional neutron valence space beyond Pb-208.
Abstract:
We have searched for sidereal variations in the rate of antineutrino interactions in the MINOS Near Detector. Using antineutrinos produced by the NuMI beam, we find no statistically significant sidereal modulation in the rate. When this result is placed in the context of the Standard Model Extension theory we are able to place upper limits on the coefficients defining the theory. These limits are used in combination with the results from an earlier analysis of MINOS neutrino data to further constrain the coefficients.
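A search of this kind typically amounts to a harmonic fit of the observed event rate against local sidereal phase. The sketch below shows a minimal least-squares fit with first and second harmonics; all names are illustrative and this is not the MINOS analysis code.

    import numpy as np

    def sidereal_fit(phase, rate):
        """Least-squares fit of rate(phase) to a constant plus first and second
        sidereal harmonics. `phase` holds local sidereal phases in [0, 1);
        `rate` holds the corresponding observed event rates. Harmonic amplitudes
        consistent with zero indicate no significant sidereal modulation."""
        w = 2.0 * np.pi * np.asarray(phase)
        design = np.column_stack([
            np.ones_like(w), np.sin(w), np.cos(w), np.sin(2 * w), np.cos(2 * w)
        ])
        coeffs, *_ = np.linalg.lstsq(design, np.asarray(rate), rcond=None)
        return coeffs  # [constant, a1, b1, a2, b2]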
Abstract:
This article presents research that is part of the Núcleo de Apoio à Pesquisa em Estudos de Linguagem em Arquitetura e Cidade (N.ELAC), which conducts research on Language and Representation. Among the various forms of representation in architecture, this study presents the physical three-dimensional model as a tool that makes a project easier to read, being more concrete than technical drawings. The aim is thus to highlight the importance of the physical model as a means of bringing the public closer to architectural heritage. The E1 Building, designed by Ernest Mange and Hélio Duarte, was chosen as the case study. Located on the USP campus in São Carlos, it is considered part of the city's heritage; however, it is practically enclosed within the campus, which hinders closer contact between the community and the building. The building's design used only drawings as representation and did not include any kind of three-dimensional model (physical or digital). Based on a survey of the graphic representations used by the designers, it was possible to compare the level of understanding of the project achieved with the architects' drawings alone and with the physical model produced by the researcher. A pre-test carried out at a municipal public school aroused the students' interest in the building in question.
Abstract:
This article is linked to the research of the Núcleo de Apoio à Pesquisa em Estudos de Linguagem em Arquitetura e Cidade (N.ELAC), which works in the field of Language and Representation. Among the various forms of representation in architecture (drawings, scale models, digital models), in this study the physical three-dimensional model is presented as a tool that makes a project easier to read and is treated as a means of bringing the community closer to architectural heritage, with a particular focus on modern architecture in São Paulo. The E1 Building, designed by Ernest Mange and Hélio Duarte, was chosen as the case study. Located on the USP campus in São Carlos, it is considered part of the city's heritage; however, it is practically enclosed within the campus, which hinders closer contact between the community and the building. During its execution, only drawings were used as design representation tools, with no three-dimensional model (physical or digital) of any kind. Based on a survey of the graphic representations used, it was possible to compare the level of understanding of the project achieved with the architects' drawings alone and with the physical model produced by the researcher. A pre-test carried out at a municipal public school indicated an increase in the students' interest in the building in question.
Abstract:
During the design process, the architect transfers ideas to the real, concrete realm. The various modes of expression and representation serve to mediate this interaction, narrowing the distance between these two realms. We are currently living through a period of intense transformation of design strategies, driven by new digital media. This research therefore focuses on comparing different moments in the use of models in contemporary design processes, through an investigation of architecture offices in São Paulo that use physical models as part of their design process. The goal is to understand the role of this representation tool and its potential today. As a case study, a comparative analysis of the use of digital and physical scale models is carried out, highlighting two cases: the model of the Conjunto Ponte dos Remédios, by the architect Marcos Acayaba, and the study models produced by the office Andrade Morettin Arquitetos for the competition for the Instituto Moreira Salles/SP. The objectives of this work also include an analysis of the contribution of physical models to architectural education.
Abstract:
During the design process, the architect transfers ideas to the real, concrete realm. The various modes of expression and representation serve to mediate this interaction, narrowing the distance between these two realms. We are currently living through a period of intense transformation of design strategies, driven by new digital media. This research aims to study the use of three-dimensional representations, specifically physical and digital models. The intent is to capture the moments in which models contribute to the design process and the characteristics intrinsic to them. The discussion seeks not only to highlight the importance of this tool, but also to draw a brief comparison between digital technology and manual making. For this work, several significant architects from the São Paulo architectural scene whose projects involve the use of models were selected. The case studies are the residence of the architect Marcos Acayaba and the winning entry of the competition for the Instituto Moreira Salles/SP, by the office Andrade Morettin Arquitetos. In addition to these objectives, the use of physical and digital models in a design teaching experience is presented.
Abstract:
This thesis tackles the problem of the automated detection of the atmospheric boundary layer (BL) height, h, from aerosol lidar/ceilometer observations. A new method, the Bayesian Selective Method (BSM), is presented. It implements a Bayesian statistical inference procedure which combines different sources of information in a statistically optimal way. First, atmospheric stratification boundaries are located from discontinuities in the ceilometer backscattered signal. The BSM then identifies the discontinuity edge that has the highest probability of effectively marking the BL height. Information from contemporaneous physical boundary layer model simulations and a climatological dataset of BL height evolution is combined in the assimilation framework to assist this choice. The BSM algorithm has been tested on four months of continuous ceilometer measurements collected during the BASE:ALFA project and is shown to realistically diagnose the BL depth evolution in many different weather conditions. The BASE:ALFA dataset is then used to investigate the boundary layer structure in stable conditions. Functions from the Obukhov similarity theory are used as regression curves to fit observed velocity and temperature profiles in the lower half of the stable boundary layer. Surface fluxes of heat and momentum are the fitted parameters in this exercise and are compared with those measured by a sonic anemometer. The comparison shows remarkable discrepancies, more evident in cases where the bulk Richardson number is quite large. This analysis supports earlier results that surface turbulent fluxes are not the appropriate scaling parameters for profiles of mean quantities in very stable conditions. One practical consequence is that boundary layer height diagnostic formulations which rely mainly on surface fluxes disagree with what is obtained by inspecting co-located radiosounding profiles.
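As an illustration of the selection step described above, the following sketch scores candidate backscatter-gradient edges with a likelihood term (edge sharpness) weighted by a Gaussian prior centred on a model or climatological BL height. The function names and the Gaussian prior form are assumptions for illustration, not the thesis implementation.

    import numpy as np

    def select_bl_height(z, backscatter, h_prior, sigma_prior):
        """Pick the gradient edge most likely to mark the BL top.

        z           : altitude grid [m]
        backscatter : range-corrected ceilometer signal on z
        h_prior     : prior BL height from a model or climatology [m]
        sigma_prior : prior standard deviation [m]
        """
        grad = np.gradient(backscatter, z)
        # candidate edges: local minima of the gradient (sharp signal decreases)
        idx = np.where((grad[1:-1] < grad[:-2]) & (grad[1:-1] < grad[2:]))[0] + 1
        strength = -grad[idx]                       # likelihood term: edge sharpness
        prior = np.exp(-0.5 * ((z[idx] - h_prior) / sigma_prior) ** 2)
        posterior = strength * prior
        return z[idx[np.argmax(posterior)]]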
Abstract:
In this work the numerical coupling of thermal and electric network models with model equations for optoelectronic semiconductor devices is presented. Modified nodal analysis (MNA) is applied to model the electric networks. Thermal effects are modeled by an accompanying thermal network. Semiconductor devices are modeled by the energy-transport model, which accounts for thermal effects. The energy-transport model is extended to a model for optoelectronic semiconductor devices. The temperature of the crystal lattice of the semiconductor devices is modeled by the heat flow equation. The corresponding heat source term is derived from thermodynamical and phenomenological considerations of energy fluxes. The energy-transport model is coupled directly into the network equations, and the heat flow equation for the lattice temperature is coupled directly into the accompanying thermal network. The coupled thermal-electric network-device model results in a system of partial differential-algebraic equations (PDAE). Numerical examples are presented for the coupling of the network equations with one-dimensional semiconductor equations. Hybridized mixed finite elements are applied for the space discretization of the semiconductor equations, and backward difference formulas are applied for the time discretization. Thus, positivity of the charge carrier densities and continuity of the current density are guaranteed even for the coupled model.
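For reference, a standard charge-oriented MNA formulation of the network part reads as follows; the notation here is assumed for illustration, and the thesis may use an equivalent but different form:

\[ A_C \frac{\mathrm{d}}{\mathrm{d}t}\, q_C\!\left(A_C^{\top} e, t\right) + A_R\, g\!\left(A_R^{\top} e, t\right) + A_L\, \jmath_L + A_V\, \jmath_V + A_I\, i_s(t) = 0, \]
\[ \frac{\mathrm{d}}{\mathrm{d}t}\, \phi_L\!\left(\jmath_L, t\right) - A_L^{\top} e = 0, \qquad A_V^{\top} e - v_s(t) = 0, \]

where e are the node potentials, \(\jmath_L\) and \(\jmath_V\) the currents through inductors and voltage sources, and \(A_C, A_R, A_L, A_V, A_I\) the element incidence matrices; the semiconductor device then enters the first (Kirchhoff current) equation as an additional current term evaluated from the coupled device model.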
Abstract:
This thesis describes studies on the development of physical analysis methods coupled with multivariate statistical techniques to assess the quality and authenticity of vegetable oils and dairy products. The use of physical instruments reduces the cost and time required by classical analyses and, at the same time, can provide a different set of information concerning both product quality and authenticity. For these methods to work well, robust statistical models must be built from datasets that are properly collected and representative of the field of application. In this thesis, vegetable oils and several types of cheese were analysed (in particular Pecorino cheeses in two research studies and Parmigiano-Reggiano in another). Several analytical instruments (physical methods) were used, in particular spectroscopy, differential thermal analysis and the electronic nose, in addition to traditional separation techniques. The data obtained from the analyses were processed using various statistical techniques, above all partial least squares, multiple linear regression and linear discriminant analysis.
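As a rough illustration of how such chemometric models are typically built, the sketch below uses standard scikit-learn implementations of PLS and LDA on placeholder data; it is a generic example, not the thesis workflow or datasets.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 200))          # e.g. spectra: 60 samples x 200 wavelengths
    y_quality = rng.normal(size=60)         # a continuous quality parameter
    y_class = rng.integers(0, 2, size=60)   # e.g. authentic vs. non-authentic

    pls = PLSRegression(n_components=5)     # partial least squares calibration
    print("PLS R^2 (5-fold CV):", cross_val_score(pls, X, y_quality, cv=5).mean())

    lda = LinearDiscriminantAnalysis()      # linear discriminant analysis
    print("LDA accuracy (5-fold CV):", cross_val_score(lda, X, y_class, cv=5).mean())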
Abstract:
The exchange of chemical constituents between ocean and atmosphere provides potentially important feedback mechanisms in the climate system. The aim of this study is to develop and evaluate a chemically coupled global atmosphere-ocean model. For this, an atmosphere-ocean general circulation model with atmospheric chemistry has been expanded to include oceanic biogeochemistry and the process of air-sea gas exchange. The calculation of seawater concentrations in the oceanic biogeochemistry submodel has been expanded from DMS, CO₂
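The air-sea gas exchange mentioned above is commonly computed with a bulk flux formula; a generic form, not necessarily the exact parameterization used in this submodel, is

\[ F = k_w \left( C_w - \alpha\, C_a \right), \]

where \(k_w\) is the gas transfer velocity, usually parameterized from the 10 m wind speed and the Schmidt number of the gas, \(C_w\) is the seawater concentration, \(C_a\) the atmospheric concentration, and \(\alpha\) the dimensionless solubility.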
Abstract:
This thesis proposes a new physical equivalent circuit model for a recently proposed semiconductor transistor, the 2-drain MSET (Multiple State Electrostatically Formed Nanowire Transistor), and presents a new software-based experimental setup developed for carrying out numerical simulations of the device and of equivalent circuits. As of 2015, the scaling limits of the ubiquitous CMOS technology that has driven mainstream technological advancement have already been approached, so many researchers are exploring different ideas for electronic devices for logic applications, among them MSET transistors. The idea underlying MSETs is that a single multiple-terminal device could replace many traditional transistors. In particular, a 2-drain MSET is akin to a silicon multiplexer: it consists of a junction FET with independent gates but with a split drain, so that a voltage-controlled conductive path can connect either of the drains to the source. The first chapter presents the theory of classical JFETs and their common equivalent circuit models. The physical model and its derivation are presented, and the current state of equivalent circuits for the JFET is discussed. A physical model of a JFET with two independent gates, derived from previous results, is presented at the end of the chapter. A review of the characteristics of the MSET device is given in chapter 2, where the proposed physical model and its formulation are also presented. A listing of the SPICE model is attached as an appendix at the end of this document. Chapter 3 concerns the results of the numerical simulations of the device. First, the search for a suitable geometry is discussed, and then results from finite-element simulations are compared with equivalent circuit runs. Where points of challenging divergence were found between the two numerical results, the relevant physical processes are discussed. The fourth chapter discusses the experimental setup. The GUI-based environments that allow the four-dimensional solution space to be explored and the physical variables inside the device to be analyzed are described. It is shown how this software project has been structured to overcome technical challenges in running multiple simulations in sequence and to provide a flexible platform for future research in the field.
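For orientation, the textbook square-law characteristic of a classical n-channel JFET in saturation (the starting point of chapter 1, recalled here only as background; the two-gate model developed in the thesis goes beyond it) is

\[ I_D = I_{DSS}\left(1 - \frac{V_{GS}}{V_P}\right)^{2}, \qquad V_P \le V_{GS} \le 0, \quad V_{DS} \ge V_{GS} - V_P, \]

where \(I_{DSS}\) is the drain current at zero gate bias and \(V_P\) the (negative) pinch-off voltage.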
Abstract:
Electrical Power Assisted Steering (EPAS) will likely be used in future automotive steering systems. The sinusoidal brushless DC (BLDC) motor has been identified as one of the most suitable actuators for the EPAS application. Motor characteristic variations, which can be indicated by variations in motor parameters such as the coil resistance and the torque constant, directly introduce inaccuracies into a control scheme based on the nominal parameter values, and the whole system performance suffers as a result. The motor controller must address the time-varying motor characteristics and maintain performance over its long service life. In this dissertation, four adaptive control algorithms for BLDC motors are explored. The first algorithm uses a simplified inverse dq-coordinate dynamics controller and solves for the parameter errors with the q-axis current (iq) feedback from several past sampling steps. The controller parameter values are updated by slow integration of the parameter errors. Improvements such as dynamic approximation, speed approximation and Gram-Schmidt orthonormalization are discussed for better estimation performance. The second algorithm uses both the d-axis current (id) and the q-axis current (iq) feedback for parameter estimation, since id always accompanies iq. Stochastic conditions for unbiased estimation are shown through Monte Carlo simulations. The study of the first two adaptive algorithms indicates that parameter estimation performance can be improved by using more history data. The Extended Kalman Filter (EKF), a representative recursive estimation algorithm, is then investigated for the BLDC motor application. Simulation results validate the superior estimation performance of the EKF; however, its computational complexity and stability may be barriers to practical implementation. The fourth algorithm is a model reference adaptive control (MRAC) scheme that uses the desired motor characteristics as a reference model. Its stability is guaranteed by Lyapunov's direct method. Simulation shows superior performance in terms of convergence speed and current tracking. These algorithms are compared in closed-loop simulation with an EPAS model and a motor speed control application. The MRAC is identified as the most promising candidate controller because it combines superior performance with low computational complexity. A BLDC motor controller developed with the dq-coordinate model cannot be implemented without several supplemental functions, such as the coordinate transformation and a DC-to-AC current encoding scheme. A quasi-physical BLDC motor model is developed to study the practical implementation issues of the dq-coordinate control strategy, such as initialization and rotor angle transducer resolution. This model can also be beneficial during first-stage development of automotive BLDC motor applications.
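To illustrate the flavour of the simpler gradient-type schemes above (and not the dissertation's actual algorithms), the following toy sketch simulates the q-axis current of a sinusoidal BLDC/PMSM with the d-axis current regulated to zero and adapts a stator resistance estimate from the prediction error. All parameter values and the adaptation gain are assumptions chosen for illustration.

    # Toy series-parallel (gradient) estimator for the stator resistance of a
    # sinusoidal BLDC/PMSM in the dq frame, with id regulated to zero.
    R_true, L, Ke = 0.5, 1e-3, 0.05     # ohm, H, V*s/rad (assumed values)
    omega, vq, dt = 100.0, 10.0, 1e-4   # rad/s, V, s
    R_hat, gamma = 0.3, 0.5             # initial estimate, adaptation gain

    iq = iq_hat = 0.0
    for _ in range(20000):
        # plant and estimator share the same q-axis dynamics:
        #   L * diq/dt = vq - R*iq - Ke*omega
        iq     += dt / L * (vq - R_true * iq     - Ke * omega)
        iq_hat += dt / L * (vq - R_hat  * iq_hat - Ke * omega)
        # Lyapunov-based gradient update driven by the prediction error
        R_hat  += gamma * dt * (iq_hat - iq) * iq_hat

    print(f"estimated R = {R_hat:.3f} ohm (true value {R_true} ohm)")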
Abstract:
Understanding natural climate variability and its driving factors is crucial to assessing future climate change. Therefore, comparing proxy-based climate reconstructions with forcing factors as well as comparing these with paleoclimate model simulations is key to gaining insights into the relative roles of internal versus forced variability. A review of the state of modelling of the climate of the last millennium prior to the CMIP5–PMIP3 (Coupled Model Intercomparison Project Phase 5–Paleoclimate Modelling Intercomparison Project Phase 3) coordinated effort is presented and compared to the available temperature reconstructions. Simulations and reconstructions broadly agree on reproducing the major temperature changes and suggest an overall linear response to external forcing on multidecadal or longer timescales. Internal variability is found to have an important influence at hemispheric and global scales. The spatial distribution of simulated temperature changes during the transition from the Medieval Climate Anomaly to the Little Ice Age disagrees with that found in the reconstructions. Thus, either internal variability is a possible major player in shaping temperature changes through the millennium or the model simulations have problems realistically representing the response pattern to external forcing. A last millennium transient climate response (LMTCR) is defined to provide a quantitative framework for analysing the consistency between simulated and reconstructed climate. Beyond an overall agreement between simulated and reconstructed LMTCR ranges, this analysis is able to single out specific discrepancies between some reconstructions and the ensemble of simulations. The disagreement is found in the cases where the reconstructions show reduced covariability with external forcings or when they present high rates of temperature change.
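One common way to define such a response measure is through the regression of hemispheric temperature on the total external forcing, scaled to the forcing of a CO2 doubling; the study's precise definition may differ in detail:

\[ \mathrm{LMTCR} \;\approx\; \left.\frac{\Delta T}{\Delta F}\right|_{\text{last millennium}} \times F_{2\times\mathrm{CO_2}}, \]

with \(\Delta T/\Delta F\) the regression slope of temperature on forcing over multidecadal timescales and \(F_{2\times\mathrm{CO_2}} \approx 3.7\ \mathrm{W\,m^{-2}}\) a typical value for the radiative forcing of doubled CO2.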
Abstract:
Previous studies have either exclusively used annual tree-ring data or have combined tree-ring series with other, lower temporal resolution proxy series. Both approaches can lead to significant uncertainties, as tree-rings may underestimate the amplitude of past temperature variations, and the validity of non-annual records cannot be clearly assessed. In this study, we assembled 45 published Northern Hemisphere (NH) temperature proxy records covering the past millennium, each of which satisfied 3 essential criteria: the series must be of annual resolution, span at least a thousand years, and represent an explicit temperature signal. Suitable climate archives included ice cores, varved lake sediments, tree-rings and speleothems. We reconstructed the average annual land temperature series for the NH over the last millennium by applying 3 different reconstruction techniques: (1) principal components (PC) plus second-order autoregressive model (AR2), (2) composite plus scale (CPS) and (3) regularized errors-in-variables approach (EIV). Our reconstruction is in excellent agreement with 6 climate model simulations (including 5 models from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and an earth system model of intermediate complexity (LOVECLIM)), showing similar temperatures at multi-decadal timescales; however, all simulations appear to underestimate the temperature during the Medieval Warm Period (MWP). A comparison with other NH reconstructions shows that our results are consistent with earlier studies. These results indicate that well-validated annual proxy series should be used to minimize proxy-based artifacts, and that these proxy series contain sufficient information to reconstruct the low-frequency climate variability over the past millennium.
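As an example of one of the listed techniques, a minimal sketch of a composite-plus-scale (CPS) reconstruction is given below; the proxy matrix, instrumental target and calibration mask are placeholders, not the actual datasets used in the study.

    import numpy as np

    def cps_reconstruction(proxies, instrumental, calib):
        """proxies      : (n_years, n_proxies) annual proxy matrix
           instrumental : (n_years,) target temperature, NaN outside coverage
           calib        : boolean mask of calibration years
           Returns the reconstructed temperature series."""
        # standardize each proxy over the calibration interval and composite them
        mu = proxies[calib].mean(axis=0)
        sd = proxies[calib].std(axis=0)
        composite = ((proxies - mu) / sd).mean(axis=1)
        # scale the composite to the mean and variance of the instrumental target
        scaled = (composite - composite[calib].mean()) / composite[calib].std()
        return scaled * np.nanstd(instrumental[calib]) + np.nanmean(instrumental[calib])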
Abstract:
Understanding the preferential timescales of variability in the North Atlantic, usually associated with the Atlantic meridional overturning circulation (AMOC), is essential for the prospects of decadal prediction. However, the wide variety of mechanisms proposed from the analysis of climate simulations, potentially dependent on the models themselves, has stimulated debate about which processes take place in reality. One mechanism receiving increasing attention, identified both in idealized models and in observations, is a westward propagation of subsurface buoyancy anomalies that impacts the AMOC through a basin-scale intensification of the zonal density gradient, enhancing the northward transport via thermal wind balance. In this study, we revisit a control simulation from the Institut Pierre-Simon Laplace Coupled Model 5A (IPSL-CM5A), characterized by a strong AMOC periodicity at 20 years, previously explained by an upper ocean–atmosphere–sea ice coupled mode driving convection activity south of Iceland. Our study shows that this mechanism interacts constructively with the basin-wide propagation in the subsurface. This constructive feedback may explain why bi-decadal variability is so intense in this coupled model compared to others.