890 results for Nelson and Siegel model
Abstract:
The integration of processes at different scales is a key problem in the modelling of cell populations. Owing to increased computational resources and the accumulation of data at the cellular and subcellular scales, the use of discrete, cell-level models, which are typically solved using numerical simulations, has become prominent. One of the merits of this approach is that important biological factors, such as cell heterogeneity and noise, can be easily incorporated. However, it can be difficult to efficiently draw generalizations from the simulation results, as, often, many simulation runs are required to investigate model behaviour in typically large parameter spaces. In some cases, discrete cell-level models can be coarse-grained, yielding continuum models whose analysis can lead to the development of insight into the underlying simulations. In this paper we apply such an approach to the case of a discrete model of cell dynamics in the intestinal crypt. An analysis of the resulting continuum model demonstrates that there is a limited region of parameter space within which steady-state (and hence biologically realistic) solutions exist. Continuum model predictions show good agreement with corresponding results from the underlying simulations and experimental data taken from murine intestinal crypts.
Abstract:
The development of a combined engineering and statistical Artificial Neural Network model of UK domestic appliance load profiles is presented. The model uses diary-style appliance-use data and a survey questionnaire collected from 51 suburban households and 46 rural households during the summers of 2010 and 2011, respectively. It also incorporates measured energy data and is sensitive to socioeconomic, physical dwelling and temperature variables. A prototype model is constructed in MATLAB using a two-layer feed-forward network with back-propagation training and a 12:10:24 architecture. Model outputs include appliance load profiles that can be applied to the fields of energy planning (microrenewables and smart grids), building simulation tools and energy policy.
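As a rough illustration of the 12:10:24 architecture described above, the following NumPy sketch builds a feed-forward pass of that shape. The paper's model was built in MATLAB and trained with back-propagation; the weights, input encoding and activation here are invented placeholders.

```python
# Illustrative sketch only -- the paper's model was built in MATLAB; this
# NumPy version simply shows the 12:10:24 feed-forward shape described in
# the abstract (12 inputs, 10 hidden units, 24 outputs, one per hour).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical input vector: 12 features (e.g. socioeconomic, dwelling and
# temperature variables -- the exact encoding is not given in the abstract).
x = rng.random(12)

# Randomly initialised weights stand in for the back-propagation-trained ones.
W1, b1 = rng.normal(size=(10, 12)), np.zeros(10)
W2, b2 = rng.normal(size=(24, 10)), np.zeros(24)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

hidden = sigmoid(W1 @ x + b1)     # 10 hidden units
load_profile = W2 @ hidden + b2   # 24 outputs: an hourly appliance load profile
print(load_profile.shape)         # (24,)
```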
Abstract:
For the first time, vertical column measurements of nitric acid (HNO3) above the Arctic Stratospheric Ozone Observatory (AStrO) at Eureka (80°N, 86°W), Canada, have been made during polar night using lunar spectra recorded with a Fourier Transform Infrared (FTIR) spectrometer, from October 2001 to March 2002. AStrO is part of the primary Arctic station of the Network for the Detection of Stratospheric Change (NDSC). These measurements were compared with FTIR measurements at two other NDSC Arctic sites: Thule, Greenland (76.5°N, 68.8°W) and Kiruna, Sweden (67.8°N, 20.4°E). The measurements were also compared with two atmospheric models: the Canadian Middle Atmosphere Model (CMAM) and SLIMCAT. This is the first time that CMAM HNO3 columns have been compared with observations in the Arctic. Eureka lunar measurements are in good agreement with solar measurements made with the same instrument. Eureka and Thule HNO3 columns are consistent within measurement error. Differences between HNO3 columns measured at Kiruna and those measured at Eureka and Thule can be explained by the available sunlight hours and the location of the polar vortex. The comparison of CMAM HNO3 columns with Eureka and Kiruna data shows good agreement, considering CMAM's small inter-annual variability. The warm 2001/02 winter, with almost no Polar Stratospheric Clouds (PSCs), makes the comparison of the warm-climate version of CMAM with these observations a good test of CMAM under PSC-free conditions. SLIMCAT captures the magnitude of HNO3 columns at Eureka, and the day-to-day variability, but generally reports higher HNO3 columns than the CMAM climatological mean columns.
Abstract:
Details are given of the development and application of a 2D depth-integrated, conformal boundary-fitted, curvilinear model for predicting the depth-mean velocity field and the spatial concentration distribution in estuarine and coastal waters. A numerical method for conformal mesh generation, based on a boundary integral equation formulation, has been developed. By this method a general polygonal region with curved edges can be mapped onto a regular polygonal region with the same number of horizontal and vertical straight edges, and a multiply connected region can be mapped onto a regular region with the same connectivity. A stretching transformation on the conformally generated mesh has also been used to provide greater detail where it is needed close to the coast, with larger mesh sizes further offshore, thereby minimizing the computing effort whilst maximizing accuracy. The curvilinear hydrodynamic and solute model has been developed based on a robust rectilinear model. The hydrodynamic equations are approximated using the ADI finite difference scheme with a staggered grid, and the solute transport equation is approximated using a modified QUICK scheme. Three numerical examples have been chosen to test the curvilinear model, with an emphasis placed on complex practical applications.
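The abstract mentions a modified QUICK scheme for the solute-transport equation. As a hedged illustration of the baseline idea (not the authors' modified variant), the sketch below computes the standard QUICK face interpolation on a uniform one-dimensional grid with positive velocity.

```python
# A minimal sketch of the standard QUICK interpolation for a uniform 1-D grid
# with positive (left-to-right) velocity. The abstract's model uses a
# *modified* QUICK scheme; this only illustrates the baseline idea.
import numpy as np

def quick_face_values(phi):
    """Solute value at the right face of each interior cell.

    For flow in the +x direction, the face between cells i and i+1 uses
    upstream (i-1), central (i) and downstream (i+1) nodes:
        phi_f = 6/8*phi_C + 3/8*phi_D - 1/8*phi_U
    """
    phi = np.asarray(phi, dtype=float)
    return 0.75 * phi[1:-1] + 0.375 * phi[2:] - 0.125 * phi[:-2]

# Example: face values for a smooth profile on an 11-node grid.
x = np.linspace(0.0, 1.0, 11)
print(quick_face_values(np.sin(np.pi * x)))
```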
Abstract:
We have developed a model of the local field potential (LFP) based on the conservation of charge, the independence principle of ionic flows and the classical Hodgkin–Huxley (HH) type intracellular model of synaptic activity. Simulations of the HH intracellular model provided insights into the nonlinear relationship between the balance of synaptic conductances and that of post-synaptic currents: the latter depends not only on the former, but also on the temporal lag between the excitatory and inhibitory conductances and on the strength of the afferent signal. The proposed LFP model provides a method for decomposing LFP recordings near the soma of layer IV pyramidal neurons in the barrel cortex of anaesthetised rats into two highly correlated components with opposite polarity. The temporal dynamics and the proportional balance of the two components are comparable to the excitatory and inhibitory post-synaptic currents computed from the HH model, suggesting that the two components of the LFP reflect the underlying excitatory and inhibitory post-synaptic currents of the local neural population. We further used the model to decompose a sequence of evoked LFP responses under repetitive electrical stimulation (5 Hz) of the whisker pad. We found that, as neural responses adapted, the excitatory and inhibitory components also adapted proportionately, while the temporal lag between the onsets of the two components increased during frequency adaptation. Our results demonstrate that the balance between neural excitation and inhibition can be investigated using extracellular recordings. Extending the model to incorporate multiple compartments should allow more quantitative interpretation of surface electroencephalography (EEG) recordings in terms of the excitatory, inhibitory and passive ionic current flows generated by local neural populations.
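To make the conductance/current distinction concrete, here is a minimal sketch, assuming simple alpha-function synapses at a fixed membrane voltage: it shows how HH-style excitatory and inhibitory post-synaptic currents acquire opposite polarity and depend on the temporal lag between the two conductances. All parameter values are invented for illustration.

```python
# A hedged sketch, not the authors' model: HH-style conductance synapses,
# showing how post-synaptic currents depend on both the conductance balance
# and the lag between excitation and inhibition (all values invented).
import numpy as np

dt = 0.1e-3                          # 0.1 ms time step
t = np.arange(0.0, 0.1, dt)          # 100 ms window
E_e, E_i, V = 0.0, -75e-3, -60e-3    # reversal potentials and a fixed voltage

def alpha_conductance(t, onset, g_max, tau):
    """Alpha-function conductance transient starting at `onset`."""
    s = np.clip(t - onset, 0.0, None)
    return g_max * (s / tau) * np.exp(1 - s / tau)

g_e = alpha_conductance(t, onset=0.010, g_max=10e-9, tau=2e-3)  # excitatory
g_i = alpha_conductance(t, onset=0.013, g_max=20e-9, tau=5e-3)  # 3 ms lag

I_e = g_e * (V - E_e)        # inward (negative) excitatory current
I_i = g_i * (V - E_i)        # outward (positive) inhibitory current
print(I_e.min(), I_i.max())  # opposite-polarity components, as in the LFP
```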
Abstract:
Background Polygalacturonase-inhibiting proteins (PGIPs) are leucine-rich repeat (LRR) plant cell wall glycoproteins involved in plant immunity. They are typically encoded by gene families with a small number of gene copies whose evolutionary origin has been poorly investigated. Here we report the complete characterization of the full complement of the pgip family in soybean (Glycine max [L.] Merr.) and the characterization of the genomic region surrounding the pgip family in four legume species. Results BAC clone and genome sequence analyses showed that the soybean genome contains two pgip loci. Each locus is composed of three clustered genes, induced following infection with the fungal pathogen Sclerotinia sclerotiorum (Lib.) de Bary, plus remnant sequences of pgip genes. The analyzed homeologous soybean genomic regions (about 126 kb) that include the pgip loci are strongly conserved, and this conservation also extends to the genomes of the legume species Phaseolus vulgaris L., Medicago truncatula Gaertn. and Cicer arietinum L., each containing a single pgip locus. Maximum likelihood-based gene trees suggest that the genes within the pgip clusters have independently undergone tandem duplication in each species. Conclusions The paleopolyploid soybean genome contains two pgip loci located within large, highly conserved duplicated regions that are also conserved in bean, M. truncatula and C. arietinum. The genomic features of these legume pgip families suggest that the evolution of pgip genes follows the birth-and-death model, similar to that proposed for the evolution of NBS-LRR-type resistance (R) genes.
Abstract:
There are some long-established biases in atmospheric models that originate from the representation of tropical convection. Previously, it has been difficult to separate cause and effect because errors are often the result of a number of interacting biases. Recently, researchers have gained the ability to run multiyear global climate model simulations with grid spacings small enough to switch the convective parameterization off, which permits the convection to develop explicitly. There are clear improvements to the initiation of convective storms and the diurnal cycle of rainfall in the convection-permitting simulations, which enables a new process-study approach to model bias identification. In this study, multiyear global atmosphere-only climate simulations with and without convective parameterization are undertaken with the Met Office Unified Model and are analyzed over the Maritime Continent region, where convergence from sea-breeze circulations is key for convection initiation. The analysis shows that, although the simulation with parameterized convection is able to reproduce the key rain-forming sea-breeze circulation, the parameterization is not able to respond realistically to the circulation. A feedback of errors also occurs: the convective parameterization causes rain to fall in the early morning, which cools and wets the boundary layer, reducing the land–sea temperature contrast and weakening the sea breeze. This is, however, an effect of the convective bias, rather than a cause of it. Improvements to how and when convection schemes trigger convection will improve both the timing and location of tropical rainfall and representation of sea-breeze circulations.
Abstract:
The vertical profile of aerosol is important for its radiative effects, but weakly constrained by observations on the global scale, and highly variable among different models. To investigate the controlling factors in one particular model, we investigate the effects of individual processes in HadGEM3–UKCA and compare the resulting diversity of aerosol vertical profiles with the inter-model diversity from the AeroCom Phase II control experiment. In this way we show that (in this model at least) the vertical profile is controlled by a relatively small number of processes, although these vary among aerosol components and particle sizes. We also show that sufficiently coarse variations in these processes can produce a similar diversity to that among different models in terms of the global-mean profile and, to a lesser extent, the zonal-mean vertical position. However, there are features of certain models' profiles that cannot be reproduced, suggesting the influence of further structural differences between models. In HadGEM3–UKCA, convective transport is found to be very important in controlling the vertical profile of all aerosol components by mass. In-cloud scavenging is very important for all except mineral dust. Growth by condensation is important for sulfate and carbonaceous aerosol (along with aqueous oxidation for the former and ageing by soluble material for the latter). The vertical extent of biomass-burning emissions into the free troposphere is also important for the profile of carbonaceous aerosol. Boundary-layer mixing plays a dominant role for sea salt and mineral dust, which are emitted only from the surface. Dry deposition and below-cloud scavenging are important for the profile of mineral dust only. In this model, the microphysical processes of nucleation, condensation and coagulation dominate the vertical profile of the smallest particles by number (e.g. total CN > 3 nm), while the profiles of larger particles (e.g. CN > 100 nm) are controlled by the same processes as the component mass profiles, plus the size distribution of primary emissions. We also show that the processes that affect the AOD-normalised radiative forcing in the model are predominantly those that affect the vertical mass distribution, in particular convective transport, in-cloud scavenging, aqueous oxidation, ageing and the vertical extent of biomass-burning emissions.
Abstract:
We consider the raise and peel model of a one-dimensional fluctuating interface in the presence of an attractive wall. The model can also describe a pair annihilation process in disordered unquenched media with a source at one end of the system. For the stationary states, several density profiles are studied using Monte Carlo simulations. We point out a deep connection between some profiles seen in the presence of the wall and in its absence. Our results are discussed in the context of conformal invariance (c = 0 theory). We discover some unexpected values for the critical exponents, which are obtained using combinatorial methods. We have solved known (Pascal's hexagon) and new (split-hexagon) bilinear recurrence relations. The solutions of these equations are interesting in their own right since they give information on certain classes of alternating sign matrices.
Abstract:
This presentation was offered as part of the CUNY Library Assessment Conference, Reinventing Libraries: Reinventing Assessment, held at the City University of New York in June 2014.
Abstract:
Canada releases over 150 billion litres of untreated and undertreated wastewater into the water environment every year [1]. To clean up urban wastewater, new federal Wastewater Systems Effluent Regulations (WSER), establishing national baseline effluent quality standards achievable through secondary wastewater treatment, were enacted on July 18, 2012. With respect to wastewater from combined sewer overflows (CSOs), the Regulations require municipalities to report the annual quantity and frequency of effluent discharges. The City of Toronto currently has about 300 CSO locations within an area of approximately 16,550 hectares. The total sewer length of the CSO area is about 3,450 km and the number of sewer manholes is about 51,100. System-wide monitoring of all CSO locations has never been undertaken because of its cost and impracticality. Instead, the City has relied on estimation methods and modelling approaches, allowing funds that would otherwise be used for monitoring to be applied to reducing the impacts of the CSOs. To fulfill the WSER requirements, the City is now undertaking a study in which GIS-based hydrologic and hydraulic modelling is the approach. Results show the usefulness of this approach for 1) determining the flows contributing to the combined sewer system in the local and trunk sewers under dry weather flow, wet weather flow, and snowmelt conditions; 2) assessing the hydraulic grade line and surface water depth in all local and trunk sewers under heavy rain events; 3) analysing local and trunk sewer capacities for future growth; and 4) reporting the annual quantity and frequency of CSOs as required by the new Regulations. This modelling approach has also allowed funds to be applied toward reducing and ultimately eliminating the adverse impacts of CSOs rather than expending resources on unnecessary and costly monitoring.
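As a purely hypothetical illustration of the reporting requirement (not the City's actual workflow), the sketch below derives the annual overflow volume and event frequency from a modelled overflow-rate time series for a single CSO outfall.

```python
# A hypothetical post-processing sketch: given a modelled overflow-rate time
# series for one CSO outfall, report the annual discharge volume and event
# frequency that WSER-style reporting asks for. The synthetic series below
# stands in for real model output.
import numpy as np

dt_s = 300.0                                   # 5-minute model output step
rng = np.random.default_rng(1)
overflow_m3s = np.clip(rng.normal(-0.5, 1.0, 105_120), 0.0, None)  # one year

annual_volume_m3 = overflow_m3s.sum() * dt_s   # integrate flow over the year

active = overflow_m3s > 0.0                    # time steps with any overflow
# An "event" starts wherever an active step follows an inactive one.
events = int(active[0]) + int(np.sum(active[1:] & ~active[:-1]))

print(f"annual volume: {annual_volume_m3:,.0f} m3, events: {events}")
```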
Abstract:
This study tests for codependence-of-order-zero relationships in spreads formed from the term structure of interest rates in Brazil. The objective is to verify whether there are linear combinations of the spreads that generate a contemporaneous white-noise process. Such linear combinations could be used to forecast future interest rates, since deviations from these stable relationships would imply a future movement of interest rates towards re-establishing the equilibrium. The Nelson and Siegel (1987) model serves as the theoretical basis for the empirical tests. In testing the hypothesis of codependence of order zero, it is also possible to examine assumptions about the model's parameters with respect to the term structure of interest rates in Brazil. The empirical results point to the rejection of the hypothesis of codependence of order zero and, consequently, to the impossibility of defining the aforementioned linear combinations. This finding may be related to the periods of instability present in the sample or to the existence of codependence of order higher than zero.
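For reference, the Nelson and Siegel (1987) yield curve underlying these spreads can be written in the three-factor form popularised by Diebold and Li (2006); the sketch below computes yields and short-end spreads with invented parameter values, not estimates from the paper's data.

```python
# A minimal sketch of the Nelson and Siegel (1987) yield curve, in the
# three-factor form of Diebold and Li (2006). Parameter values below are
# invented for illustration, not estimates from the study's sample.
import numpy as np

def nelson_siegel(tau, beta1, beta2, beta3, lam):
    """Yield for maturity tau (months): level, slope and curvature factors."""
    x = lam * tau
    slope = (1 - np.exp(-x)) / x
    curvature = slope - np.exp(-x)
    return beta1 + beta2 * slope + beta3 * curvature

maturities = np.array([3, 6, 12, 24, 36, 60], dtype=float)   # months
yields = nelson_siegel(maturities, beta1=0.12, beta2=-0.02, beta3=0.01,
                       lam=0.0609)

# Spreads against the shortest maturity -- the kind of series whose linear
# combinations are tested for codependence of order zero in the study.
spreads = yields[1:] - yields[0]
print(np.round(spreads, 5))
```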
Abstract:
Through an empirical exercise, this study extracts implied default probability curves from Brazilian debentures. The construction proceeds in two steps. The first challenge is to obtain the term structures of the Brazilian debentures; the Diebold and Li (2006) revision of the Nelson and Siegel (1987) model was used to construct the term structures of interest rates (ETTJs). The second step extracts the default probability using the reduced form of the Duffie and Singleton (1999) model. The loss fraction given default was held constant, following the study of Xu and Nencioni (2000), and the decay rate was also held constant, as proposed by Diebold and Li (2006) and Araújo (2012). The exercise was replicated for three different dates during the interest rate reduction cycle in Brazil. Among the results, we find that market agents reduced the default probability of the issuers during this period, with a more significant reduction at the shorter maturities than at the longer ones.
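As a hedged sketch of the reduced-form logic: under the Duffie and Singleton (1999) recovery-of-market-value assumption, the credit spread is approximately the default intensity times the loss fraction, so holding the loss fraction constant lets one back out an implied default probability. The numbers below are illustrative only.

```python
# A hedged sketch of the Duffie and Singleton (1999) reduced-form logic under
# recovery-of-market-value: the credit spread s equals h*L, where h is the
# risk-neutral default intensity and L the (constant) loss fraction.
# All numbers are invented, not estimates from the study.
import numpy as np

def implied_default_prob(spread, tau_years, loss_fraction=0.6):
    """Risk-neutral default probability over tau, assuming a flat intensity."""
    h = spread / loss_fraction           # intensity implied by the spread
    return 1.0 - np.exp(-h * tau_years)  # P(default before tau)

maturities = np.array([1.0, 3.0, 5.0])           # years
spreads = np.array([0.010, 0.015, 0.018])        # debenture minus risk-free
print(implied_default_prob(spreads, maturities))
```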
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)