875 results for Mathematical proficiency
Abstract:
A large computer program has been developed to aid applied mathematicians in the solution of problems in non-numerical analysis which involve tedious manipulations of mathematical expressions. The mathematician uses typed commands and a light pen to direct the computer in the application of mathematical transformations; the intermediate results are displayed in standard text-book format so that the system user can decide the next step in the problem solution. Three problems selected from the literature have been solved to illustrate the use of the system. A detailed analysis of the problems of input, transformation, and display of mathematical expressions is also presented.
Abstract:
This report develops a conceptual framework in which to talk about mathematical knowledge. There are several broad categories of mathematical knowledge: results, which contain the traditional logical aspects of mathematics; examples, which contain illustrative material; and concepts, which include formal and informal ideas, that is, definitions and heuristics.
Abstract:
A multi-plate (MP) mathematical model based on frontal analysis was proposed to evaluate nonlinear chromatographic performance. One of its advantages is that its parameters may be easily calculated from experimental data. Moreover, there is a good correlation between it and the equilibrium-dispersive (E-D) or Thomas models. This shows that it can accommodate both types of band broadening, whether dominated by diffusion processes or by kinetic sorption processes. The MP model describes well the experimental breakthrough curves obtained from membrane affinity chromatography and column reversed-phase liquid chromatography. Furthermore, the mass-transfer coefficients may be calculated from the relationship between the MP model and the E-D or Thomas models. (C) 2004 Elsevier B.V. All rights reserved.
Abstract:
In this PhD study, mathematical modelling and optimisation of granola production has been carried out. Granola is an aggregated food product used in breakfast cereals and cereal bars. It is a baked crispy food product typically incorporating oats, other cereals and nuts bound together with a binder, such as honey, water and oil, to form a structured unit aggregate. In this work, the design and operation of two parallel processes to produce aggregate granola products were investigated: i) a high shear mixing granulation stage (in a designated granulator) followed by drying/toasting in an oven; ii) a continuous fluidised bed followed by drying/toasting in an oven. In addition, the particle breakage of granola during pneumatic conveying, for product from both the high shear granulator (HSG) and the fluidised bed granulator (FBG) processes, was examined. Products were pneumatically conveyed in a purpose-built conveying rig designed to mimic product conveying and packaging. Three different conveying rig configurations were employed: a straight pipe, a rig with two 45° bends, and one with a 90° bend. The least breakage occurred in the straight pipe and the most in the 90° bend pipe; breakage in the two-45°-bend configuration was lower than in the 90° bend configuration. In general, increasing the impact angle increases the degree of breakage. Additionally, of the granules produced in the HSG, those produced at 300 rpm had the lowest breakage rates while those produced at 150 rpm had the highest, which clearly demonstrates the importance of shear history (during granule production) on breakage rates during subsequent processing. For the FBG, no single operating parameter was deemed to have a significant effect on breakage during subsequent conveying. A population balance model was developed to analyse the particle breakage occurring during pneumatic conveying.
The population balance equations that govern this breakage process were solved by discretization, using a Markov chain method. The study found that increasing the air velocity (by increasing the air pressure to the rig) results in increased breakage among granola aggregates. Furthermore, the analysis carried out in this work shows that the degree of breakage of granola aggregates increases with bend angle.
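The thesis equations are not given in this abstract; purely as an illustrative sketch, a discretized breakage population balance of the general kind described can be stepped forward as follows (the size classes, selection rates and binary daughter distribution are all hypothetical, not values from the study):

```python
import numpy as np

# Illustrative discretized breakage population balance (hypothetical rates):
#   dN_i/dt = -S_i * N_i + sum_{j>i} b[i, j] * S_j * N_j
# N_i : number of granules in size class i (class 0 = smallest)
# S_i : breakage (selection) rate of class i, here increasing with size
# b   : daughter distribution; each breaking granule in class j splits into
#       two fragments spread uniformly over the smaller classes 0..j-1

n_classes = 5
S = 0.1 * np.arange(n_classes)          # class 0 never breaks
b = np.zeros((n_classes, n_classes))
for j in range(1, n_classes):
    b[:j, j] = 2.0 / j                  # binary breakage: 2 fragments per event

def step(N, dt):
    """One explicit Euler step of the breakage PBE."""
    dN = -S * N + b @ (S * N)
    return N + dt * dN

N = np.zeros(n_classes)
N[-1] = 1000.0                          # start with large granules only
for _ in range(200):
    N = step(N, 0.01)

# Large classes deplete while fines accumulate.
print(N.round(1))
```

In such a sketch, air velocity or bend angle would enter through the selection rates `S`; the Markov chain solution mentioned above can be viewed as iterating a transition matrix built from the same rates.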
A mathematical theory of stochastic microlensing. II. Random images, shear, and the Kac-Rice formula
Abstract:
Continuing our development of a mathematical theory of stochastic microlensing, we study the random shear and expected number of random lensed images of different types. In particular, we characterize the first three leading terms in the asymptotic expression of the joint probability density function (pdf) of the random shear tensor due to point masses in the limit of an infinite number of stars. Up to this order, the pdf depends on the magnitude of the shear tensor, the optical depth, and the mean number of stars through a combination of radial position and the star's mass. As a consequence, the pdf's of the shear components are seen to converge, in the limit of an infinite number of stars, to shifted Cauchy distributions, which shows that the shear components have heavy tails in that limit. The asymptotic pdf of the shear magnitude in the limit of an infinite number of stars is also presented. All the results on the random microlensing shear are given for a general point in the lens plane. Extending to the general random distributions (not necessarily uniform) of the lenses, we employ the Kac-Rice formula and Morse theory to deduce general formulas for the expected total number of images and the expected number of saddle images. We further generalize these results by considering random sources defined on a countable compact covering of the light source plane. This is done to introduce the notion of global expected number of positive parity images due to a general lensing map. Applying the result to microlensing, we calculate the asymptotic global expected number of minimum images in the limit of an infinite number of stars, where the stars are uniformly distributed. This global expectation is bounded, while the global expected number of images and the global expected number of saddle images diverge as the order of the number of stars. © 2009 American Institute of Physics.
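For reference (our notation, not the paper's), a shifted Cauchy density with location $x_0$ and scale $s$ has the form

```latex
f(x) = \frac{1}{\pi}\,\frac{s}{s^{2} + (x - x_{0})^{2}},
```

whose tails decay only as $x^{-2}$, so the distribution has no finite mean or variance; this is the heavy-tail behaviour of the shear components referred to above.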
Abstract:
BACKGROUND: Serotonin is a neurotransmitter that has been linked to a wide variety of behaviors including feeding and body-weight regulation, social hierarchies, aggression and suicidality, obsessive compulsive disorder, alcoholism, anxiety, and affective disorders. Full understanding of serotonergic systems in the central nervous system involves genomics, neurochemistry, electrophysiology, and behavior. Though associations have been found between functions at these different levels, in most cases the causal mechanisms are unknown. The scientific issues are daunting but important for human health because of the use of selective serotonin reuptake inhibitors and other pharmacological agents to treat disorders in the serotonergic signaling system. METHODS: We construct a mathematical model of serotonin synthesis, release, and reuptake in a single serotonergic neuron terminal. The model includes the effects of autoreceptors, the transport of tryptophan into the terminal, and the metabolism of serotonin, as well as the dependence of release on the firing rate. The model is based on real physiology determined experimentally and is compared to experimental data. RESULTS: We compare the variations in serotonin and dopamine synthesis due to meals and find that dopamine synthesis is insensitive to the availability of tyrosine but serotonin synthesis is sensitive to the availability of tryptophan. We conduct in silico experiments on the clearance of extracellular serotonin, normally and in the presence of fluoxetine, and compare to experimental data. We study the effects of various polymorphisms in the genes for the serotonin transporter and for tryptophan hydroxylase on synthesis, release, and reuptake. We find that, because of the homeostatic feedback mechanisms of the autoreceptors, the polymorphisms have smaller effects than one might expect. We compute the expected steady-state concentrations in serotonin transporter knockout mice and compare to experimental data.
Finally, we study how the properties of the serotonin transporter and the autoreceptors give rise to the time courses of extracellular serotonin in various projection regions after a dose of fluoxetine. CONCLUSIONS: Serotonergic systems must respond robustly to important biological signals, while at the same time maintaining homeostasis in the face of normal biological fluctuations in inputs, expression levels, and firing rates. This is accomplished through the cooperative effect of many different homeostatic mechanisms, including special properties of the serotonin transporters and the serotonin autoreceptors. Many difficult questions remain about how serotonin biochemistry affects serotonin electrophysiology and vice versa, and how both are changed in the presence of selective serotonin reuptake inhibitors. Mathematical models are useful tools for investigating some of these questions.
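The paper's equations are not reproduced in this abstract; as a minimal sketch only, a single-terminal model of the kind described might couple synthesis, vesicular packaging, autoreceptor-gated release, and transporter reuptake as below (every rate constant here is hypothetical, not a value from the paper):

```python
# Toy single-terminal serotonin model (illustration only, not the paper's
# equations). States: cytosolic (c), vesicular (v), extracellular (e) 5-HT.
def steady_state(vmax_sert, trp=100.0, firing=1.0, t_end=200.0, dt=1e-3):
    """Integrate to (near) steady state with explicit Euler; made-up constants."""
    c, v, e = 1.0, 1.0, 0.1
    for _ in range(int(t_end / dt)):
        auto = 1.0 / (1.0 + e / 0.5)               # autoreceptor feedback
        synth = auto * 2.0 * trp / (50.0 + trp)    # TPH synthesis, autoreceptor-inhibited
        pack = 1.0 * c                             # vesicular packaging
        release = firing * 0.8 * v * auto          # firing- and autoreceptor-gated
        reuptake = vmax_sert * e / (0.2 + e)       # SERT reuptake, Michaelis-Menten
        catab = 0.5 * c                            # MAO catabolism
        c += dt * (synth - pack - catab + reuptake)
        v += dt * (pack - release)
        e += dt * (release - reuptake)
    return c, v, e

e_normal = steady_state(vmax_sert=3.0)[2]
e_reduced = steady_state(vmax_sert=1.5)[2]  # weaker transporter, e.g. a SERT variant
```

Even in this toy version, the weaker transporter yields a higher extracellular steady state, while the autoreceptor feedback on both synthesis and release keeps the change bounded, loosely illustrating the homeostatic buffering the abstract emphasises.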
Abstract:
PURPOSE: To develop a mathematical model that can predict refractive changes after Descemet stripping endothelial keratoplasty (DSEK). METHODS: A mathematical formula based on the Gullstrand eye model was generated to estimate the change in refractive power of the eye after DSEK. This model was applied retrospectively to four DSEK cases to compare measured and predicted refractive changes after DSEK. RESULTS: The refractive change after DSEK is determined by calculating the difference in the power of the eye before and after DSEK surgery. The power of the eye post-DSEK surgery can be calculated with modified Gullstrand eye model equations that incorporate the change in the posterior radius of curvature and the change in the distance between the principal planes of the cornea and lens after DSEK. Analysis of this model suggests that the ratio of central to peripheral graft thickness (CP ratio) and the central thickness can have a significant effect on refractive change, with smaller CP ratios and larger graft thicknesses resulting in larger hyperopic shifts. When the model was applied to the four patients, the average predicted hyperopic shift in the overall power of the eye was 0.83 D, corresponding to a mean of 93% (range, 75%-110%) of the patients' measured refractive shifts. CONCLUSIONS: This simplified DSEK mathematical model can be used as a first step for estimating the hyperopic shift after DSEK. Further studies are necessary to refine the validity of this model.
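The formula itself is not given in the abstract; for orientation only, a Gullstrand-style thick-lens expression for corneal power (our symbols, not the paper's) is

```latex
P_{\mathrm{cornea}} = P_{\mathrm{ant}} + P_{\mathrm{post}}
  - \frac{d}{n_{c}}\,P_{\mathrm{ant}}P_{\mathrm{post}},
\qquad
P_{\mathrm{post}} = \frac{n_{\mathrm{aq}} - n_{c}}{r_{\mathrm{post}}},
```

where $d$ is corneal thickness, $n_{c}$ and $n_{\mathrm{aq}}$ are the refractive indices of cornea and aqueous, and $r_{\mathrm{post}}$ is the posterior radius of curvature. Since $n_{\mathrm{aq}} < n_{c}$, $P_{\mathrm{post}}$ is negative; a graft thicker centrally than peripherally (CP ratio below 1) steepens the posterior surface, making $P_{\mathrm{post}}$ more negative, lowering total corneal power, and hence producing the hyperopic shift described.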
Abstract:
This paper describes a classroom experience carried out with pre-service mathematics teachers on the role that new technologies can play in processes of demonstration and proof in the secondary school classroom.
Abstract:
Dai ethnic mathematical culture is an important part of Dai ethnic culture, and mathematical elements appear throughout the Dai people's daily life. Through a research project on the Yunnan Dehong Dai people in southwest China, we collected first-hand information, carried out a small investigative study, and gathered mathematics teaching resources useful to primary and secondary school students' mathematics learning in this minority area. Keywords: Dai ethnicity; mathematical culture; primary and secondary schools; teaching resources.
Abstract:
The phrase “not much mathematics required” can imply a variety of skill levels. When this phrase is applied to computer scientists, software engineers, and clients in the area of formal specification, the word “much” can be widely misinterpreted with disastrous consequences. A small experiment in reading specifications revealed that students already trained in discrete mathematics and the specification notation performed very poorly; much worse than could reasonably be expected if formal methods proponents are to be believed.
Abstract:
It is well known that during alloy solidification, convection currents close to the solidification front have an influence on the structure of dendrites, the local solute concentration, the pattern of solid segregation, and eventually the microstructure of the casting and hence its mechanical properties. Controlled stirring of the melt in continuous casting or in ingot solidification is thought to have a beneficial effect. Free convection currents occur naturally due to temperature differences in the melt and, for any given configuration, their strength is a function of the degree of superheat present. A more controlled forced convection current can be induced using electromagnetic stirring. The authors have applied their Control-Volume based MHD method [1, 2] to the problem of tin solidification in an annular crucible with a water-cooled inner wall and a resistance-heated outer one, for both free and forced convection situations and for various degrees of superheat. This problem was studied experimentally by Vives and Perry [3], who obtained temperature measurements, front positions and maps of the electromagnetic body force for a range of superheat values. The results of the mathematical model are compared critically against the experimental ones, in order to validate the model and also to demonstrate the usefulness of the coupled solution technique as a predictive tool and a design aid. 6 figures, 19 references.
Abstract:
Computer-based mathematical models describing aircraft fire have a role to play in the design and development of safer aircraft, in the implementation of safer and more rigorous certification criteria, and in post mortem accident investigation. As the costs involved in performing large-scale fire experiments for the next generation 'Ultra High Capacity Aircraft' (UHCA) are expected to be prohibitively high, the development and use of these modelling tools may become essential if these aircraft are to prove a safe and viable reality. By describing the present capabilities and limitations of aircraft fire models, this paper will examine the future development of these models in the areas of large-scale applications through parallel computing, combustion modelling and extinguishment modelling.
Abstract:
Computer equipment, once viewed as leading edge, is quickly condemned as obsolete and banished to basement store rooms or rubbish bins. The magpie instincts of some of the academics and technicians at the University of Greenwich, London, preserved some such relics in cluttered offices and garages to the dismay of colleagues and partners. When the University moved into its new campus in the historic buildings of the Old Royal Naval College in the center of Greenwich, corridor space in King William Court provided an opportunity to display some of this equipment so that students could see these objects and gain a more vivid appreciation of their subject's history.