917 results for Inflation (Finance) - Mathematical models
Abstract:
All mammals lose their ability to produce lactase (β-galactosidase), the enzyme that cleaves lactose into galactose and glucose, after weaning. The prevalence of lactase deficiency (LD) ranges from 2–15% among northern Europeans to nearly 100% among Asians. Following lactose consumption, people with LD often experience gastrointestinal symptoms such as abdominal pain, bowel distension, cramps and flatulence, or even systemic problems such as headache, loss of concentration and muscle pain. These symptoms vary depending on the amount of lactose ingested, the type of food and the degree of intolerance. Although those affected can avoid the intake of dairy products, in doing so they lose a readily available source of calcium and protein. In this work, gels obtained by complexation of Tetronic 90R4 with α-cyclodextrin and loaded with β-galactosidase are proposed as a way to administer the enzyme immediately before or with a lactose-containing meal. Both molecules are biocompatible, can form gels in situ, and show sustained erosion kinetics in aqueous media. The complex was characterized by FTIR, which evidenced an inclusion complex between the poly(ethylene oxide) block and α-cyclodextrin. Release profiles of β-galactosidase were obtained from two different matrices (gels and tablets) of the in situ hydrogels. The influence of the percentage of Tetronic in media of different pH was evaluated. No differences were observed in the release rate from the gel matrices at pH 6 (t50 = 105 min). However, in the case of the tablets, the kinetics were faster and a greater amount of 90R4 was released (25%, t50 = 40–50 min). The amount of enzyme released was also higher for mixtures with 25% Tetronic. Using suitable mathematical models, the corresponding kinetic parameters were calculated. In all cases, the release data fit the Peppas–Sahlin model equation well, indicating that the release of β-galactosidase is governed by a combination of diffusion and erosion processes. Diffusion prevails over erosion during the first 50 minutes, followed by continued release of the enzyme as the matrix disintegrates.
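The abstract notes that the release data were fitted to the Peppas–Sahlin equation, Mt/Minf = k1*t^m + k2*t^(2m), whose first term represents Fickian diffusion and whose second term represents relaxation/erosion. Below is a minimal sketch of such a fit; the time points and fractional-release values are hypothetical illustrations, not the reported data.

```python
# Illustrative sketch: fitting release data to the Peppas-Sahlin equation
#   Mt/Minf = k1 * t**m + k2 * t**(2*m)
# The time points and fractional-release values below are hypothetical,
# not the experimental data reported in the abstract.
import numpy as np
from scipy.optimize import curve_fit

def peppas_sahlin(t, k1, k2, m):
    # First term: Fickian diffusion; second term: relaxation/erosion.
    return k1 * t**m + k2 * t**(2 * m)

t = np.array([10, 20, 40, 60, 90, 120, 180], dtype=float)    # minutes
frac = np.array([0.12, 0.22, 0.38, 0.50, 0.65, 0.76, 0.90])  # Mt/Minf

params, _ = curve_fit(peppas_sahlin, t, frac, p0=[0.01, 0.001, 0.5],
                      bounds=(0, [1.0, 1.0, 1.0]))
k1, k2, m = params
print(f"k1={k1:.4f}, k2={k2:.5f}, m={m:.3f}")

# The ratio of the diffusional to the relaxational contribution at time t is
# (k1 * t**m) / (k2 * t**(2*m)) = k1 / (k2 * t**m); values above 1 indicate
# diffusion-dominated release, consistent with the early-time behaviour
# described in the abstract.
```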
Abstract:
Mathematical models are useful tools for the simulation, evaluation, optimal operation and control of solar cells and proton exchange membrane fuel cells (PEMFCs). To identify the model parameters of these two types of cells efficiently, a biogeography-based optimization algorithm with mutation strategies (BBO-M) is proposed. BBO-M uses the structure of the biogeography-based optimization (BBO) algorithm, and both a mutation operator motivated by the differential evolution (DE) algorithm and chaos theory are incorporated into the BBO structure to improve the global searching capability of the algorithm. Numerical experiments were conducted on ten benchmark functions with 50 dimensions, and the results show that BBO-M produces high-quality solutions and has a fast convergence rate. The proposed BBO-M was then applied to model parameter estimation for the two types of cells. The experimental results clearly demonstrate the power of the proposed BBO-M in estimating the model parameters of both solar cells and fuel cells.
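For readers unfamiliar with the approach, the sketch below illustrates the general structure of biogeography-based optimization combined with a DE-style mutation on a simple sphere benchmark. It is a toy illustration only: the migration and mutation details and all parameter values (population size, iterations, F, pm) are assumptions, not the operators or settings of the published BBO-M.

```python
# Minimal sketch of biogeography-based optimization with a DE-style mutation,
# in the spirit of the BBO-M idea described above. Toy implementation only
# (sphere benchmark, simplified rates), not the published algorithm.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                      # benchmark objective to minimize
    return np.sum(x**2)

def bbo_m(obj, dim=10, pop=30, iters=200, bounds=(-5.0, 5.0), F=0.5, pm=0.1):
    lo, hi = bounds
    X = rng.uniform(lo, hi, (pop, dim))
    for _ in range(iters):
        fit = np.array([obj(x) for x in X])
        order = np.argsort(fit)                 # best first
        X, fit = X[order], fit[order]
        # Rank-based emigration (mu) and immigration (lam) rates.
        ranks = np.arange(pop)
        mu = 1.0 - ranks / (pop - 1)            # best habitat emigrates most
        lam = 1.0 - mu                          # worst habitat immigrates most
        newX = X.copy()
        for i in range(pop):
            for d in range(dim):
                if rng.random() < lam[i]:       # migration step
                    j = rng.choice(pop, p=mu / mu.sum())
                    newX[i, d] = X[j, d]
            if rng.random() < pm:               # DE-style mutation step
                r1, r2, r3 = rng.choice(pop, 3, replace=False)
                newX[i] = np.clip(X[r1] + F * (X[r2] - X[r3]), lo, hi)
        for i in range(pop):                    # greedy replacement
            if obj(newX[i]) < fit[i]:
                X[i] = newX[i]
    fit = np.array([obj(x) for x in X])
    return X[np.argmin(fit)], fit.min()

best_x, best_f = bbo_m(sphere)
print("best objective:", best_f)
```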
Abstract:
Whole genome sequencing (WGS) technology holds great promise as a tool for the forensic epidemiology of bacterial pathogens. It is likely to be particularly useful for studying the transmission dynamics of an observed epidemic involving a largely unsampled 'reservoir' host, as for bovine tuberculosis (bTB) in British and Irish cattle and badgers. bTB is caused by Mycobacterium bovis, a member of the M. tuberculosis complex that also includes the aetiological agent of human TB. In this study, we identified a spatio-temporally linked group of 26 cattle and 4 badgers infected with the same Variable Number Tandem Repeat (VNTR) type of M. bovis. Single-nucleotide polymorphisms (SNPs) between sequences identified differences that were consistent with bacterial lineages persisting on or near farms for several years, despite multiple clear whole-herd tests in the interim. Comparing WGS data to mathematical models showed good correlations between genetic divergence and spatial distance, but poor correspondence to the network of cattle movements or within-herd contacts. Badger isolates showed between zero and four SNP differences from the nearest cattle isolate, providing evidence for recent transmission between the two hosts. This is the first direct genetic evidence of M. bovis persistence on farms over multiple outbreaks with a continued, ongoing interaction with local badgers. However, despite unprecedented resolution, directionality of transmission cannot be inferred at this stage. Despite the notoriously long timescales between time of infection and time of sampling for TB, our results suggest that WGS data alone can provide insights into TB epidemiology even where detailed contact data are not available, and that more extensive sampling and analysis will allow the extent and direction of transmission between cattle and badgers to be quantified. © 2012 Biek et al.
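As a rough illustration of the kind of comparison described (genetic divergence versus spatial distance), the sketch below computes pairwise SNP counts and their Pearson correlation with pairwise distances on made-up toy data; in practice a permutation-based method such as a Mantel test is more appropriate for distance matrices, and this is not the authors' analysis.

```python
# Illustrative only: correlating pairwise SNP differences with pairwise
# spatial distances. Sequences and coordinates are made-up toy data,
# not the study's isolates.
import itertools
import numpy as np
from scipy.stats import pearsonr

seqs = {"A": "ACGTACGT", "B": "ACGTACGA", "C": "ACCTACGA", "D": "GCCTACGA"}
coords = {"A": (0.0, 0.0), "B": (1.0, 0.5), "C": (3.0, 2.0), "D": (6.0, 4.0)}

def snp_dist(s1, s2):
    return sum(a != b for a, b in zip(s1, s2))

pairs = list(itertools.combinations(seqs, 2))
genetic = [snp_dist(seqs[a], seqs[b]) for a, b in pairs]
spatial = [np.hypot(coords[a][0] - coords[b][0], coords[a][1] - coords[b][1])
           for a, b in pairs]

r, p = pearsonr(genetic, spatial)
print(f"Pearson r = {r:.2f} (p = {p:.3f}) over {len(pairs)} isolate pairs")
```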
Abstract:
Dematerializing the economy is one of the paths towards sustainable development, in that it eliminates or reduces the use of natural resources, doing more with less. Process intensification is one way of dematerializing the economy: more compact and more efficient systems consume fewer resources. In the specific case of systems involving heat exchange, intensification reduces the exchange area and the amount of working fluid, which, beyond any other advantages arising from miniaturization, is an undeniable contribution to the sustainability of society through scientific and technological development. The development of nanofluids arises in response to this kind of challenge in modern society, contributing to the innovation of products and systems while addressing problems posed at the level of the basic sciences. The literature is unanimous in identifying their potential as heat-exchange fluids, given their high thermal conductivity; however, the lack of rigour underlying their preparation techniques, and of a systematic knowledge of their physical properties supported by properly validated physical-mathematical models, means that industrial deployment is still far from achievable. In this work, the thermal conductivity of water-based nanofluids loaded with carbon nanotubes was studied systematically, with a view to identifying the physical mechanisms responsible for heat conduction in the fluid and developing a general model that can reliably determine this property with the rigour required in engineering. To that end, methods for a rigorous and reproducible preparation of this type of nanofluid are presented, along with the methodologies considered most important for assessing their stability, thus ensuring the rigour of the production technique. Colloidal stability is established rigorously, taking into account quantifiable parameters such as the absence of agglomeration, phase separation and deterioration of the nanoparticle morphology. Once the nanofluid preparation method was secured, a parametric analysis was carried out, leading to an experimentally obtained database that provides a central, global view of the relative influence of the different control factors affecting the thermophysical properties. Among the thermophysical properties, this study placed particular emphasis on thermal conductivity, with the following control factors selected: base fluid, temperature, particle size and nanoparticle concentration. Experimentally, it was found that, among the control factors studied, those with the greatest influence on the thermal conductivity of the nanofluid are the size and concentration of the nanoparticles. With the confidence provided by a solid database and with knowledge of the relative contribution of each control factor to the heat-transfer process, a physical-mathematical model of a general character was developed and validated, which will make it possible to determine the thermal conductivity of nanofluids reliably.
Abstract:
Climate change is expected to have a large impact on the morphology of estuaries and coastal systems. These morphological changes will in turn drive changes in the biological compartments of the systems and, ultimately, in their ecosystems. Sea level rise is one of the main factors controlling these changes. Morphological changes can be better understood with the use of long-term morphodynamic mathematical models.
Abstract:
Final project for obtaining the degree of Master in Electrical Engineering
Abstract:
Dissertation for obtaining the degree of Master in Electrical Engineering, branch of Automation and Industrial Electronics
Abstract:
Dissertation for obtaining the degree of Master in Electrical Engineering, Energy branch
Abstract:
Dissertation for obtaining the degree of Master in Electrical Engineering, branch of Automation and Industrial Electronics
Abstract:
Mathematical models and statistical analysis are key instruments in soil science research, as they can describe and/or predict the current state of a soil system. These tools allow us to explore the behaviour of soil-related processes and properties and to generate new hypotheses for future experimentation. A good model and analysis of soil property variation, allowing sound conclusions to be drawn and spatially correlated variables to be estimated at unsampled locations, clearly depends on the amount and quality of the data and on the robustness of the techniques and estimators. The quality of the data, in turn, depends on a competent data-collection procedure and capable laboratory analytical work. Following the standard soil-sampling protocols available, soil samples should be collected according to key factors such as a convenient spatial scale, landscape homogeneity (or non-homogeneity), land colour, soil texture, land slope and solar exposure. Obtaining good-quality data from forest soils is predictably expensive, as it is labour intensive and demands considerable manpower and equipment, both in field work and in laboratory analysis. Moreover, the sampling scheme to be used in a forest-field data collection campaign is not simple to design, as the sampling strategies chosen depend strongly on soil taxonomy. In fact, a sampling grid cannot be followed if rock is found at the intended collection depth, if no soil is found at all, or if large trees bar the soil collection. Consequently, a proficient design of a soil sampling campaign in forest terrain is not always a simple process and sometimes represents a truly huge challenge. In this work, we present some difficulties that occurred during two experiments on forest soil conducted to study the spatial variation of some soil physical-chemical properties. Two different sampling protocols were considered for monitoring two types of forest soils located in NW Portugal: umbric regosol and lithosol. Two different tools for sample collection were also used: a manual auger and a shovel. Both scenarios were analysed, and the results allow us to conclude that monitoring forest soil for mathematical and statistical investigation requires a data-collection procedure compatible with established protocols, but a pre-defined grid assumption often fails when the variability of the soil property is not uniform in space. In that case, the sampling grid should be conveniently adapted from one part of the landscape to another, and this fact should be taken into consideration in the mathematical procedure.
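Estimating a spatially correlated soil property at unsampled locations typically starts from an empirical semivariogram, which summarizes how dissimilarity grows with separation distance. The sketch below computes one on made-up sample locations and values; it is illustrative only and not the analysis performed in the experiments described.

```python
# Illustrative sketch: an empirical semivariogram, the building block for
# estimating a spatially correlated soil property at unsampled locations
# (e.g. by kriging). Coordinates and values are made-up toy data.
import numpy as np

rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, (60, 2))                   # sample locations (m)
z = 5 + 0.03 * xy[:, 0] + rng.normal(0, 0.5, 60)    # soil property (e.g. pH)

# Pairwise distances and semivariances 0.5 * (z_i - z_j)^2
d = np.sqrt(((xy[:, None, :] - xy[None, :, :])**2).sum(-1))
sv = 0.5 * (z[:, None] - z[None, :])**2
iu = np.triu_indices(len(z), k=1)
d, sv = d[iu], sv[iu]

# Bin pairs by lag distance and average the semivariance within each bin
bins = np.linspace(0, d.max(), 11)
for lo, hi in zip(bins[:-1], bins[1:]):
    m = (d >= lo) & (d < hi)
    if m.any():
        print(f"lag {(lo + hi) / 2:6.1f} m  semivariance {sv[m].mean():6.3f}")
```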
Abstract:
In Angola, only about 30% of the population has access to electricity, a figure that drops below 10% in the more remote rural areas. The problem is aggravated by the fact that, in most cases, the existing infrastructure is damaged or has not kept pace with the development of the region. This is particularly true of the Angolan capital, Luanda, which, although the smallest province of Angola, currently has the highest population density. With a population of about 5 million, there are not only frequent failures of the electricity supply but also a considerable share of municipalities that the grid has not yet even reached. The Angolan government, in its effort to grow and to exploit the country's enormous potential, has defined the energy sector as one of the critical factors for sustainable development, declaring it one of the priority axes up to 2016. There are clear objectives for rehabilitating and expanding the infrastructure of the electricity sector, increasing the country's installed capacity and creating an adequate national grid, with the aim not only of improving the quality and reliability of the existing network but also of extending it. This dissertation consisted of gathering real data on Luanda's electricity distribution network, analysing and planning the most pressing expansion needs, selecting the locations where new substations can feasibly be sited, modelling the real problem adequately, and proposing an optimal solution for expanding the existing network. After analysing different mathematical models applied to the distribution-network expansion problem found in the literature, a mixed-integer linear programming (MILP) model was chosen and proved adequate. Once the problem model had been developed, it was solved using the optimization software Analytic Solver and CPLEX. To validate the results obtained, the network solution was implemented in the PowerWorld 8.0 OPF simulator, software that allows the operation of the power-flow system to be simulated.
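To illustrate the kind of MILP formulation referred to, the sketch below sites substations and assigns demand at minimum fixed-plus-connection cost using the open-source PuLP/CBC toolchain; the sites, demands, capacities and costs are made-up toy data, and the formulation is a simplified stand-in for the dissertation's model (which was solved with Analytic Solver and CPLEX).

```python
# Minimal sketch of a substation-siting MILP: choose which candidate
# substations to build and how to serve demand nodes at minimum cost.
# All data below are made-up toy values, not the Luanda network.
import pulp

sites = ["S1", "S2", "S3"]                       # candidate substations
nodes = ["N1", "N2", "N3", "N4"]                 # demand nodes (load points)
build_cost = {"S1": 100, "S2": 120, "S3": 90}    # fixed construction cost
capacity = {"S1": 40, "S2": 60, "S3": 30}        # MVA
demand = {"N1": 15, "N2": 20, "N3": 25, "N4": 10}
# Connection cost per MVA from each site to each node (e.g. feeder length)
conn = {(s, n): c for (s, n), c in zip(
    [(s, n) for s in sites for n in nodes],
    [2, 4, 6, 3,   5, 2, 3, 6,   4, 5, 2, 2])}

prob = pulp.LpProblem("expansion", pulp.LpMinimize)
y = pulp.LpVariable.dicts("build", sites, cat="Binary")
x = pulp.LpVariable.dicts("flow", (sites, nodes), lowBound=0)

prob += (pulp.lpSum(build_cost[s] * y[s] for s in sites)
         + pulp.lpSum(conn[s, n] * x[s][n] for s in sites for n in nodes))
for n in nodes:                                   # every demand must be served
    prob += pulp.lpSum(x[s][n] for s in sites) == demand[n]
for s in sites:                                   # capacity available only if built
    prob += pulp.lpSum(x[s][n] for n in nodes) <= capacity[s] * y[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("build:", [s for s in sites if y[s].value() > 0.5])
print("total cost:", pulp.value(prob.objective))
```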
Abstract:
We develop a new coinfection model for hepatitis C virus (HCV) and the human immunodeficiency virus (HIV). We consider treatment for both diseases, screening, unawareness and awareness of HIV infection, and the use of condoms. We study the local stability of the disease-free equilibria for the full model and for the two submodels (the HCV-only and HIV-only submodels). We sketch bifurcation diagrams for different parameters, such as the probabilities that a contact will result in an HIV or an HCV infection. We present numerical simulations of the full model in which the HIV, HCV and doubly endemic equilibria can be observed. We also show numerically the qualitative changes in the dynamical behaviour of the full model as relevant parameters vary. We extrapolate from the model results to actual measures that could be implemented in order to reduce the number of infected individuals.
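The abstract does not give the model equations, so the following is only a generic sketch of how a two-disease compartmental model of this kind can be written and integrated numerically; the compartments, flows and parameter values are placeholders and do not represent the authors' coinfection model or its calibrated parameters.

```python
# Generic two-disease compartmental sketch integrated with scipy.
# Compartments, rates and parameter values are illustrative placeholders only.
import numpy as np
from scipy.integrate import solve_ivp

beta_hiv, beta_hcv = 0.30, 0.25      # transmission rates (assumed)
treat_hcv, mu = 0.10, 0.02           # HCV treatment/clearance and exit rates (assumed)

def rhs(t, y):
    S, I_hiv, I_hcv, I_co = y        # susceptible, HIV only, HCV only, coinfected
    N = S + I_hiv + I_hcv + I_co
    lam_hiv = beta_hiv * (I_hiv + I_co) / N
    lam_hcv = beta_hcv * (I_hcv + I_co) / N
    dS    = mu * N - (lam_hiv + lam_hcv + mu) * S + treat_hcv * I_hcv
    dIhiv = lam_hiv * S - (lam_hcv + mu) * I_hiv + treat_hcv * I_co
    dIhcv = lam_hcv * S - (lam_hiv + mu + treat_hcv) * I_hcv
    dIco  = lam_hcv * I_hiv + lam_hiv * I_hcv - (mu + treat_hcv) * I_co
    return [dS, dIhiv, dIhcv, dIco]

sol = solve_ivp(rhs, (0, 200), [990, 5, 5, 0], dense_output=True)
S, I_hiv, I_hcv, I_co = sol.y[:, -1]
print(f"final state: S={S:.0f}, HIV={I_hiv:.0f}, HCV={I_hcv:.0f}, co={I_co:.0f}")
```

The flows are written so that the total population stays constant, which makes the qualitative behaviour (disease-free versus endemic states) easy to inspect as the transmission parameters are varied.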
Abstract:
Cerebral metabolism is compartmentalized between neurons and glia. Although glial glycolysis is thought to largely sustain the energetic requirements of neurotransmission while oxidative metabolism takes place mainly in neurons, this hypothesis is a matter of debate. The compartmentalization of cerebral metabolic fluxes can be determined by (13)C nuclear magnetic resonance (NMR) spectroscopy upon infusion of (13)C-enriched compounds, especially glucose. Rats under light α-chloralose anesthesia were infused with [1,6-(13)C]glucose, and (13)C enrichment in the brain metabolites was measured by (13)C NMR spectroscopy with high sensitivity and spectral resolution at 14.1 T. This allowed (13)C enrichment curves of amino acid carbons to be determined with high reproducibility and cerebral metabolic fluxes to be estimated reliably (mean error of 8%). We further found that TCA cycle intermediates are not required for flux determination in mathematical models of brain metabolism. The neuronal tricarboxylic acid cycle rate (V(TCA)) and the neurotransmission rate (V(NT)) were 0.45 ± 0.01 and 0.11 ± 0.01 μmol/g/min, respectively. Glial V(TCA) was found to be 38 ± 3% of total cerebral oxidative metabolism, accounting for more than half of neuronal oxidative metabolism. Furthermore, the glial anaplerotic pyruvate carboxylation rate (V(PC)) was 0.069 ± 0.004 μmol/g/min, i.e., 25 ± 1% of the glial TCA cycle rate. These results support a role of glial cells as active partners of neurons during synaptic transmission beyond glycolytic metabolism.
Abstract:
An analytical model for bacterial accumulation in a discrete fracture has been developed. The transport and accumulation processes incorporated into the model include advection, dispersion, rate-limited adsorption, rate-limited desorption, irreversible adsorption, attachment, detachment, growth, and first-order decay in both the sorbed and aqueous phases. An analytical solution in Laplace space is derived and numerically inverted. The model is implemented in the code BIOFRAC, which is written in Fortran 90. The model is derived for two phases: Phase I, where adsorption-desorption are dominant, and Phase II, where attachment-detachment are dominant. Phase I ends when enough bacteria have accumulated to fully cover the substratum. The model for Phase I was verified by comparing to the Ogata-Banks solution, and the model for Phase II was verified by comparing to a non-homogeneous version of the Ogata-Banks solution. After verification, a sensitivity analysis on the input parameters was performed. The sensitivity analysis was conducted by varying one input parameter while all others were fixed and observing the impact on the shape of the curve describing bacterial concentration versus time. Increasing the fracture aperture allows more transport and thus more accumulation, which diminishes the duration of Phase I. The larger the bacteria size, the faster the substratum will be covered. Increasing the adsorption rate was observed to increase the duration of Phase I. Contrary to the assumption of uniform biofilm thickness, the accumulation starts from the inlet, and the bacterial concentration in the aqueous phase moving towards the outlet declines, slowing the accumulation at the outlet. Increasing the desorption rate reduces the duration of Phase I, speeding up the accumulation. It was also observed that Phase II is of longer duration than Phase I. Increasing the attachment rate lengthens the accumulation period. High rates of detachment speed up the transport. The growth and decay rates have no significant effect on transport, although increases in the concentrations in both the aqueous and sorbed phases are observed. Irreversible adsorption can stop accumulation completely if its values are high.
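For reference, the Ogata-Banks solution used as the verification benchmark is the classical closed-form solution of 1-D advection-dispersion with a constant-concentration inlet, C/C0 = 0.5*[erfc((x - v t)/(2 sqrt(D t))) + exp(v x / D) * erfc((x + v t)/(2 sqrt(D t)))]. A brief sketch of evaluating it is given below; the parameter values are illustrative assumptions, not those used in the thesis.

```python
# Sketch of the classical Ogata-Banks solution mentioned above (1-D
# advection-dispersion with a constant-concentration inlet), shown only
# to make the verification target concrete; parameters are illustrative.
import numpy as np
from scipy.special import erfc

def ogata_banks(x, t, v, D, c0=1.0):
    """Relative concentration C/c0 at distance x and time t.

    v : average linear velocity, D : longitudinal dispersion coefficient.
    """
    a = (x - v * t) / (2.0 * np.sqrt(D * t))
    b = (x + v * t) / (2.0 * np.sqrt(D * t))
    return 0.5 * c0 * (erfc(a) + np.exp(v * x / D) * erfc(b))

# Example: concentration profile along a fracture after 2 days
x = np.linspace(0.1, 2.0, 5)                 # m
print(ogata_banks(x, t=2.0, v=0.5, D=0.1))   # v in m/day, D in m^2/day
```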
Abstract:
We have presented a Green's function method for the calculation of the atomic mean square displacement (MSD) for an anharmonic Hamiltonian. This method effectively sums a whole class of anharmonic contributions to the MSD in the perturbation expansion in the high-temperature limit. Using this formalism we have calculated the MSD for a nearest-neighbour fcc Lennard-Jones solid. The results show an improvement over the lowest-order perturbation theory results; the difference with Monte Carlo calculations at temperatures close to melting is reduced from 11% to 3%. We also calculated the MSD for the alkali metals Na, K and Cs, where a sixth-neighbour interaction potential derived from pseudopotential theory was employed in the calculations. The MSD obtained by this method increases by 2.5% to 3.5% over the respective perturbation theory results. The MSD was calculated for aluminum, where different pseudopotential functions and a phenomenological Morse potential were used. The results show that the pseudopotentials provide better agreement with experimental data than the Morse potential. An excellent agreement with experiment over the whole temperature range is achieved with the Harrison modified point-ion pseudopotential with the Hubbard-Sham screening function. We have calculated the thermodynamic properties of solid Kr by minimizing the total energy, consisting of static and vibrational components, employing different schemes: the quasiharmonic theory (QH), λ² and λ⁴ perturbation theory, all terms up to O(λ⁴) of the improved self-consistent phonon theory (ISC), the ring diagrams up to O(λ⁴) (RING), the iteration scheme (ITER) derived from the Green's function method, and a scheme consisting of ITER plus the remaining contributions of O(λ⁴) not included in ITER, which we call E(FULL). We have calculated the lattice constant, the volume expansion, the isothermal and adiabatic bulk moduli, the specific heat at constant volume and at constant pressure, and the Grüneisen parameter from two different potential functions: Lennard-Jones and Aziz. The Aziz potential generally gives better agreement with experimental data than the LJ potential for the QH, λ², λ⁴ and E(FULL) schemes. When only a partial sum of the λ⁴ diagrams is used in the calculations (e.g. RING and ISC), the LJ results are in better agreement with experiment. The iteration scheme brings a definite improvement over the λ² perturbation theory for both potentials.