53 results for mathematical analysis
Abstract:
A multivariate fit to the variation in global mean surface air temperature anomaly over the past half century is presented. The fit procedure allows for the effect of response time on the waveform, amplitude and lag of each radiative forcing input, and each input is allowed to have its own time constant. It is shown that the contribution of solar variability to the temperature trend since 1987 is small and downward; the best estimate is −1.3% and the 2σ confidence level sets the uncertainty range at −0.7 to −1.9%. The result is the same whether one quantifies the solar variation using galactic cosmic ray fluxes (for which the analysis can be extended back to 1953) or the most accurate total solar irradiance data composite. The rise in global mean surface air temperature is predominantly associated with a linear increase that represents the combined effects of changes in anthropogenic well-mixed greenhouse gases and aerosols, although, in recent decades, there is also a considerable contribution from a relative lack of major volcanic eruptions. The best estimate is that the anthropogenic factors contribute 75% of the rise since 1987, with an uncertainty range (set by the 2σ confidence level using an AR(1) noise model) of 49–160%; thus, the uncertainty is large, but we can state that at least half of the temperature trend comes from the linear term and that this term could explain the entire rise. The results are consistent with the Intergovernmental Panel on Climate Change (IPCC) estimates of the changes in radiative forcing (given for 1961–1995) and are here combined with those estimates to find the response times, equilibrium climate sensitivities and pertinent heat capacities (i.e. the depth into the oceans to which a given radiative forcing variation penetrates) of the quasi-periodic (decadal-scale) input forcing variations.
As shown by previous studies, the decadal-scale variations do not penetrate as deeply into the oceans as the longer-term drifts and have shorter response times. Consequently, conclusions about the response to century-scale forcing changes (and hence the associated equilibrium climate sensitivity and the temperature rise commitment) cannot be drawn from studies of the response to shorter-period forcing changes.
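The fitting approach described in this abstract can be sketched as a least-squares regression of temperature anomalies onto forcings that have each been smoothed and lagged by an exponential response kernel. A minimal illustration on synthetic data follows; the time constant, forcing waveform and amplitude are all hypothetical, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)                      # synthetic time axis (years)
tau = 5.0                               # assumed response time constant

# Exponential response kernel: convolving a forcing with this changes
# its waveform, damps its amplitude and lags its phase.
kernel = np.exp(-t / tau)
kernel /= kernel.sum()

forcing = np.sin(2 * np.pi * t / 22)    # a quasi-decadal input (illustrative)
response = np.convolve(forcing, kernel)[: len(t)]

true_amp = 0.4                          # K per unit forcing (made up)
temps = true_amp * response + 0.01 * rng.normal(size=len(t))

# Multivariate least-squares fit of amplitude and offset
A = np.column_stack([response, np.ones(len(t))])
amp, offset = np.linalg.lstsq(A, temps, rcond=None)[0]
```

With several forcings, `A` simply gains one lagged-response column per input, each with its own time constant.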
Abstract:
We study the elliptic sine-Gordon equation in the quarter plane using a spectral transform approach. We determine the Riemann-Hilbert problem associated with well-posed boundary value problems in this domain and use it to derive a formal representation of the solution. Our analysis is based on a generalization of the usual inverse scattering transform recently introduced by Fokas for studying linear elliptic problems.
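For reference, the elliptic sine-Gordon equation studied here, posed on the quarter plane, reads

```latex
u_{xx} + u_{yy} = \sin u, \qquad 0 < x < \infty, \quad 0 < y < \infty,
```

with boundary data prescribed on the two half-lines $x = 0$ and $y = 0$.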
Abstract:
Many different individuals, each with their own expertise and criteria for decision making, are involved in making decisions on construction projects. Decision-making processes are thus significantly affected by communication, in which the dynamic interplay of human intentions leads to unpredictable outcomes. In order to theorise decision-making processes that include communication, it is argued here that these processes resemble evolutionary dynamics in terms of both selection and mutation, and can therefore be expressed by the replicator-mutator equation. To support this argument, a mathematical model of decision making has been constructed by analogy with evolutionary dynamics, with three variables: initial support rate, business hierarchy, and power of persuasion. In parallel, a survey of decision-making patterns in construction projects has been performed through a self-administered mail questionnaire sent to construction practitioners. Comparison between the numerical analysis of the mathematical model and the statistical analysis of the empirical data shows the significant potential of the replicator-mutator equation as a tool for studying the dynamic properties of intentions in communication.
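The replicator-mutator dynamics invoked above can be sketched numerically. In this toy version the fitness values, mutation (persuasion) matrix and initial support rates are hypothetical placeholders, not the paper's calibrated parameters:

```python
import numpy as np

f = np.array([1.0, 1.2, 0.9])       # fitness of each opinion (hypothetical)
Q = np.array([[0.90, 0.05, 0.05],   # Q[i, j]: probability that a supporter
              [0.05, 0.90, 0.05],   # of opinion i ends up backing opinion j
              [0.05, 0.05, 0.90]])  # (rows sum to 1)
x = np.array([0.6, 0.3, 0.1])       # initial support rates

# Replicator-mutator equation: dx_j/dt = sum_i x_i f_i Q[i, j] - phi x_j,
# integrated here with a simple explicit Euler scheme.
dt = 0.01
for _ in range(5_000):
    phi = f @ x                      # mean fitness
    x = x + dt * ((x * f) @ Q - phi * x)
```

Because the rows of `Q` sum to one, the support rates remain a probability distribution, and the fittest opinion comes to dominate while mutation keeps the others alive.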
Abstract:
Individuals with elevated levels of plasma low density lipoprotein (LDL) cholesterol (LDL-C) are considered to be at risk of developing coronary heart disease. LDL particles are removed from the blood by a process known as receptor-mediated endocytosis, which occurs mainly in the liver. A series of classical experiments delineated the major steps in the endocytotic process; apolipoprotein B-100 present on LDL particles binds to a specific receptor (LDL receptor, LDL-R) in specialized areas of the cell surface called clathrin-coated pits. The pit containing the LDL–LDL-R complex is internalized, forming a cytoplasmic endosome. Fusion of the endosome with a lysosome leads to degradation of the LDL into its constituent parts (that is, cholesterol, fatty acids, and amino acids), which are released for reuse by the cell, or are excreted. In this paper, we formulate a mathematical model of LDL endocytosis, consisting of a system of ordinary differential equations. We validate our model against existing in vitro experimental data, and we use it to explore differences in system behavior when a single bolus of extracellular LDL is supplied to cells, compared to when a continuous supply of LDL particles is available. Whereas the former situation is common to in vitro experimental systems, the latter better reflects the in vivo situation. We use asymptotic analysis and numerical simulations to study the long-time behavior of model solutions. The implications of model-derived insights for experimental design are discussed.
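A stripped-down version of such an ODE model can be sketched as three compartments, extracellular, surface-bound and internalised LDL, coupled by first-order steps. The rate constants and supply terms below are illustrative assumptions, not the paper's fitted values:

```python
# Binding, internalisation and degradation rate constants (all made up)
k_b, k_i, k_d = 0.5, 0.2, 0.1

def simulate(supply, L0=1.0, t_end=100.0, dt=0.01):
    """Euler-integrate extracellular (L), bound (B) and internal (I) LDL.

    supply: constant delivery rate of extracellular LDL
            (supply=0 corresponds to a single bolus of size L0).
    """
    L, B, I = L0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        dL = supply - k_b * L          # delivery minus binding to pits
        dB = k_b * L - k_i * B         # binding minus internalisation
        dI = k_i * B - k_d * I         # internalisation minus degradation
        L, B, I = L + dt * dL, B + dt * dB, I + dt * dI
    return L, B, I

bolus = simulate(supply=0.0)       # in vitro: a single bolus decays away
continuous = simulate(supply=0.1)  # in vivo: steady supply reaches a steady state
```

The two regimes the abstract contrasts show up directly: with a bolus every pool eventually empties, while a continuous supply settles at the steady state L = supply/k_b, B = supply/k_i, I = supply/k_d.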
Abstract:
Elevated levels of low-density-lipoprotein cholesterol (LDL-C) in the plasma are a well-established risk factor for the development of coronary heart disease. Plasma LDL-C levels are in part determined by the rate at which LDL particles are removed from the bloodstream by hepatic uptake. The uptake of LDL by mammalian liver cells occurs mainly via receptor-mediated endocytosis, a process which entails the binding of these particles to specific receptors in specialised areas of the cell surface, the subsequent internalization of the receptor-lipoprotein complex, and ultimately the degradation and release of the ingested lipoproteins' constituent parts. We formulate a mathematical model to study the binding and internalization (endocytosis) of LDL and VLDL particles by hepatocytes in culture. The system of ordinary differential equations, which includes a cholesterol-dependent pit production term representing feedback regulation of surface receptors in response to intracellular cholesterol levels, is analysed using numerical simulations and steady-state analysis. Our numerical results show good agreement with in vitro experimental data describing LDL uptake by cultured hepatocytes following delivery of a single bolus of lipoprotein. Our model is adapted in order to reflect the in vivo situation, in which lipoproteins are continuously delivered to the hepatocyte. In this case, our model suggests that the competition between the LDL and VLDL particles for binding to the pits on the cell surface affects the intracellular cholesterol concentration. In particular, we predict that when there is continuous delivery of low levels of lipoproteins to the cell surface, more VLDL than LDL occupies the pit, since VLDL are better competitors for receptor binding. 
VLDL have a cholesterol content comparable to LDL particles; however, due to the larger size of VLDL, one pit-bound VLDL particle blocks the binding of several LDLs, and there is a resultant drop in the intracellular cholesterol level. When there is continuous delivery of lipoprotein at high levels to the hepatocytes, VLDL particles still out-compete LDL particles for receptor binding, and consequently more VLDL than LDL particles occupy the pit. Although the maximum intracellular cholesterol level is similar for high and low levels of lipoprotein delivery, the maximum is reached more rapidly when the lipoprotein delivery rates are high. The implications of these results for the design of in vitro experiments are discussed.
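The predicted competition for pit binding can be illustrated with a simple Langmuir-type occupancy calculation. The affinities, concentrations and slots-per-particle factor below are hypothetical stand-ins, not the paper's parameters:

```python
K_L, K_V = 1.0, 5.0   # binding affinities: VLDL assumed the better competitor
c_L, c_V = 1.0, 1.0   # equal extracellular concentrations (illustrative)
s_V = 4               # LDL-sized slots blocked by one larger VLDL (assumed)

# Competitive occupancy of pit slots (Langmuir-style sketch)
denom = 1 + K_L * c_L + s_V * K_V * c_V
theta_L = K_L * c_L / denom          # fraction of slots holding LDL
theta_V = s_V * K_V * c_V / denom    # fraction of slots blocked by VLDL

chol_L, chol_V = 1.0, 1.0  # comparable cholesterol per particle (per the text)
# A VLDL delivers one particle's worth of cholesterol while blocking s_V
# slots, so cholesterol delivered per occupied slot falls.
delivery = theta_L * chol_L + (theta_V / s_V) * chol_V
delivery_LDL_only = (K_L * c_L / (1 + K_L * c_L)) * chol_L  # no-VLDL baseline
```

Even in this crude sketch, VLDL dominates the pit and the cholesterol delivered per slot drops below the LDL-only baseline, mirroring the qualitative prediction above.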
Abstract:
The mathematical models that describe the immersion-frying period and the post-frying cooling period of an infinite slab or an infinite cylinder were solved and tested. Results were successfully compared with those found in the literature or obtained experimentally, and were discussed in terms of the hypotheses and simplifications made. The models were used as the basis of a sensitivity analysis. Simulations showed that a decrease in slab thickness and core heat capacity resulted in faster crust development. On the other hand, an increase in oil temperature and in the boiling heat transfer coefficient between the oil and the surface of the food accelerated crust formation. The model for oil absorption during cooling was analysed using the tested post-frying cooling equation to determine the moment at which a positive pressure driving force, allowing oil suction into the pore, arose. It was found that as crust layer thickness, pore radius and ambient temperature decreased, so did the time needed to start the absorption. On the other hand, as the effective convective heat transfer coefficient between the air and the surface of the slab increased, the required cooling time decreased. In addition, it was found that the time needed to allow oil absorption during cooling was extremely sensitive to pore radius, indicating the importance of an accurate pore size determination in future studies.
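The pressure-driving-force argument can be sketched as follows: with lumped Newtonian cooling of the crust and the Antoine equation for water vapour pressure, oil can be sucked into a pore once atmospheric plus capillary pressure exceeds the vapour pressure inside the pore. All parameter values below (surface tension, cooling time constant, initial crust temperature) are illustrative assumptions, not the paper's:

```python
import math

sigma = 0.03              # oil surface tension, N/m (assumed)
T_amb, T0 = 25.0, 180.0   # ambient and initial crust temperature, deg C (assumed)
tau = 60.0                # lumped cooling time constant, s (stands in for h, m, c_p)
P_atm = 101_325.0         # atmospheric pressure, Pa

def p_vap(T):
    """Water vapour pressure in Pa (Antoine equation, extrapolated above 100 C)."""
    return 133.322 * 10 ** (8.07131 - 1730.63 / (233.426 + T))

def absorption_onset(r, dt=0.1):
    """Time at which P_atm plus capillary pressure first exceeds vapour pressure."""
    t = 0.0
    while t < 600.0:
        T = T_amb + (T0 - T_amb) * math.exp(-t / tau)   # Newtonian cooling
        if P_atm + 2 * sigma / r > p_vap(T):            # positive driving force
            return t
        t += dt
    return None
```

Smaller pores generate more capillary suction, so in this sketch absorption starts sooner as the pore radius decreases, consistent with the reported sensitivity to pore radius.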
Abstract:
We introduce transreal analysis as a generalisation of real analysis. We find that the generalisation of the real exponential and logarithmic functions is well defined for all transreal numbers. Hence, we derive well defined values of all transreal powers of all non-negative transreal numbers. In particular, we find a well defined value for zero to the power of zero. We also note that the computation of products via the transreal logarithm is identical to the transreal product, as expected. We then generalise all of the common, real, trigonometric functions to transreal functions and show that transreal (sin x)/x is well defined everywhere. This raises the possibility that transreal analysis is total, in other words, that every function and every limit is everywhere well defined. If so, transreal analysis should be an adequate mathematical basis for analysing the perspex machine - a theoretical, super-Turing machine that operates on a total geometry. We go on to dispel all of the standard counter "proofs" that purport to show that division by zero is impossible. This is done simply by carrying the proof through in transreal arithmetic or transreal analysis. We find that either the supposed counter proof has no content or else that it supports the contention that division by zero is possible. The supposed counter proofs rely on extending the standard systems in arbitrary and inconsistent ways and then showing, tautologously, that the chosen extensions are not consistent. This shows only that the chosen extensions are inconsistent and does not bear on the question of whether division by zero is logically possible. By contrast, transreal arithmetic is total and consistent so it defeats any possible "straw man" argument. Finally, we show how to arrange that a function has finite or else unmeasurable (nullity) values, but no infinite values. This arithmetical arrangement might prove useful in mathematical physics because it outlaws naked singularities in all equations.
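For orientation, the transreal numbers extend the reals with three extra elements, defined in the authors' transreal arithmetic by

```latex
\infty = \frac{1}{0}, \qquad -\infty = \frac{-1}{0}, \qquad \Phi = \frac{0}{0},
```

where $\Phi$ (nullity) is the unmeasurable value mentioned at the end of the abstract; every transreal function may then return a finite, infinite or nullity value, which is what makes totality possible.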
Abstract:
We argue the case for a new branch of mathematics and its applications: Mathematics for the Digital Society. There is a challenge for mathematics, a strong “pull” from new and emerging commercial and public activities; and a need to train and inspire a generation of quantitative scientists who will seek careers within the associated sectors. Although now going through an early phase of boiling up, prior to scholarly distillation, we discuss how data-rich activities and applications may benefit from a wide range of continuous and discrete models, methods, analysis and inference. In ten years' time such applications will be commonplace and associated courses may be embedded within the undergraduate curriculum.
Abstract:
Bloom-forming and toxin-producing cyanobacteria remain a persistent nuisance across the world. Modelling cyanobacterial behaviour in freshwaters is an important tool for understanding their population dynamics and predicting the location and timing of bloom events in lakes, reservoirs and rivers. A new deterministic mathematical model was developed, which simulates the growth and movement of cyanobacterial blooms in river systems. The model focuses on the mathematical description of bloom formation, vertical migration and lateral transport of colonies within river environments, taking into account the major factors that affect cyanobacterial bloom formation in rivers, including light, nutrients and temperature. A parameter sensitivity analysis using a one-at-a-time approach was carried out. There were two objectives of the sensitivity analysis presented in this paper: to identify the key parameters controlling the growth and movement patterns of cyanobacteria, and to provide a means for model validation. The result of the analysis suggested that maximum growth rate and day length period were the most significant parameters in determining the population growth and colony depth, respectively.
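The one-at-a-time (OAT) procedure can be sketched on a toy growth model: perturb each parameter by a fixed fraction while holding the others at baseline, and record the relative change in the output. The toy model and parameter values below are illustrative, not the paper's river model:

```python
import math

base = {"mu_max": 1.2,   # maximum growth rate, 1/day (hypothetical)
        "K_I": 30.0,     # half-saturation light intensity (hypothetical)
        "theta": 1.08}   # temperature coefficient (hypothetical)
I, T, t, x0 = 100.0, 22.0, 5.0, 1.0   # fixed environment, horizon and inoculum

def biomass(p):
    """Exponential growth with light limitation and temperature correction."""
    mu = p["mu_max"] * I / (I + p["K_I"]) * p["theta"] ** (T - 20.0)
    return x0 * math.exp(mu * t)

def oat_sensitivity(delta=0.1):
    """Relative output change per relative parameter change, one at a time."""
    y0 = biomass(base)
    sens = {}
    for name in base:
        p = dict(base)
        p[name] *= 1.0 + delta       # perturb this parameter only
        sens[name] = (biomass(p) - y0) / y0 / delta
    return sens
```

Ranking the absolute values of `oat_sensitivity()` identifies the dominant parameters, which is exactly how the key role of the maximum growth rate was singled out above.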
Abstract:
Little has been reported on the performance of near-far resistant CDMA detectors in the presence of system parameter estimation errors (SPEEs). Starting with the general mathematical model of matched filters, the paper examines the effects of three classes of SPEEs, i.e., time-delay, carrier phase, and carrier frequency errors, on the performance (BER) of an emerging type of near-far resistant coherent DS/SSMA detector, i.e., the linear decorrelating detector. For comparison, the corresponding results for the conventional detector are also presented. It is shown that the linear decorrelating detector can still maintain a considerable performance advantage over the conventional detector even when some SPEEs exist.
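The linear decorrelating detector itself is compact: pass the received chip sequence through the bank of matched filters, then invert the code cross-correlation matrix, which cancels multiple-access interference regardless of the interferers' power. The spreading codes, amplitudes and noise level below are arbitrary illustrations (and no SPEEs are modelled):

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 31, 3                              # chips per bit, number of users
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)  # random spreading codes
b = np.array([1.0, -1.0, 1.0])            # transmitted bits
A = np.diag([0.1, 10.0, 10.0])            # user 0 is far weaker (near-far scenario)

r = S @ A @ b + 0.01 * rng.normal(size=N)  # received chip-rate signal

y = S.T @ r                 # matched-filter bank (conventional detector statistic)
R = S.T @ S                 # code cross-correlation matrix
z = np.linalg.solve(R, y)   # decorrelating detector: z = A b + R^-1 (noise)

bits = np.sign(z)           # near-far resistant bit decisions
```

Because `z` equals `A b` plus filtered noise, the weak user's decision is unaffected by the strong interferers; the conventional detector `np.sign(y)` enjoys no such protection.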
Abstract:
Decision theory is the study of models of judgement involved in, and leading to, deliberate and (usually) rational choice. In real estate investment there are normative models for the allocation of assets. These asset allocation models suggest an optimum allocation between the respective asset classes based on the investors' judgements of performance and risk. Real estate is selected, like other assets, on the basis of some criteria, e.g. commonly its marginal contribution to the production of a mean-variance efficient multi-asset portfolio, subject to the investor's objectives and capital rationing constraints. However, decisions are made relative to current expectations and current business constraints. Whilst a decision maker may believe in the required optimum exposure levels as dictated by an asset allocation model, the final decision may be influenced by factors outside the parameters of the mathematical model. This paper discusses investors' perceptions and attitudes toward real estate and highlights the important difference between theoretical exposure levels and pragmatic business considerations. It develops a model to identify “soft” parameters in decision making which will influence the optimal allocation for that asset class. This “soft” information may relate to behavioural issues such as the tendency to mirror competitors, a desire to meet weight-of-money objectives, a desire to retain the status quo, and many other non-financial considerations. The paper aims to establish the place of property in multi-asset portfolios in the UK and to examine the asset allocation process in practice, with a view to understanding the decision-making process and examining investors' perceptions, based on an historic analysis of market expectations, a comparison with historic data and an analysis of actual performance.
Abstract:
The Newton-Raphson method is proposed for the solution of the nonlinear equation arising from a theoretical model of an acid/base titration. It is shown that it is necessary to modify the form of the equation in order that the iteration is guaranteed to converge. A particular example is considered to illustrate the analysis and method, and a BASIC program is included that can be used to predict the pH of any weak acid/weak base titration.
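As an illustration of the approach (though not the paper's BASIC program or its modified equation form), Newton-Raphson can be applied to the cubic in [H+] that follows from the charge and mass balances for a weak acid alone in water; acetic acid values are used as an example:

```python
import math

Ka = 1.8e-5    # acid dissociation constant (acetic acid)
Kw = 1.0e-14   # water autoionisation constant
C = 0.1        # total acid concentration, mol/L

# Charge balance [H+] = [A-] + [OH-], with [A-] = C*Ka/([H+] + Ka) and
# [OH-] = Kw/[H+], rearranges to a cubic in h = [H+]:
def f(h):
    return h**3 + Ka * h**2 - (C * Ka + Kw) * h - Ka * Kw

def fprime(h):
    return 3 * h**2 + 2 * Ka * h - (C * Ka + Kw)

h = math.sqrt(Ka * C)          # starting guess: neglect the Ka and Kw terms
for _ in range(50):
    h_new = h - f(h) / fprime(h)   # Newton-Raphson step
    if abs(h_new - h) < 1e-15:
        break
    h = h_new

pH = -math.log10(h)
```

Starting from the weak-acid approximation keeps the iterate on the physical root; as the abstract notes, a naive choice of equation form or starting point need not converge, which is why the paper modifies the equation.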