984 results for Mathematical statistics.
Abstract:
Janet Taylor, Ross D King, Thomas Altmann and Oliver Fiehn (2002). Application of metabolomics to plant genotype discrimination using statistics and machine learning. 1st European Conference on Computational Biology (ECCB). (published as a journal supplement in Bioinformatics 18: S241-S248).
Abstract:
Raufaste, C., Dollet, B., Cox, S., Jiang, Y. and Graner, F. (2007). Yield drag in a two-dimensional foam flow around a circular obstacle: Effect of liquid fraction. European Physical Journal E, 23 (2), 217-228. Sponsorship: Y.J. is supported by US DOE under contract No. DE-AC52-06NA25396. S.C. is supported by EPSRC (EP/D071127/1).
Abstract:
This work concerns selected methods of extracting (excerpting) lexical information from electronic text collections. Its aim is, first, to formulate new, original methods that may be useful in acquiring material for lexical analyses, and then to test them on a selected collection of texts. The intention was to develop methods that do not require advanced knowledge of computer programming yet still yield valuable results, where the value of a method is judged by its excerption yield. Three formulated methods were refined and optimized. The method for excerpting new units yielded over 1,000 new, unregistered words; the acronym-based collocation excerption method yields over 6,000 units; and the collocation excerption method based on the plural ending produced over 110,000 extracted units.
Abstract:
Wireless sensor networks have recently emerged as enablers of important applications such as environmental, chemical and nuclear sensing systems. Such applications have sophisticated spatial-temporal semantics that set them aside from traditional wireless networks. For example, the computation of temperature averaged over the sensor field must take into account local densities. This is crucial since otherwise the estimated average temperature can be biased by over-sampling areas where a lot more sensors exist. Thus, we envision that a fundamental service that a wireless sensor network should provide is that of estimating local densities. In this paper, we propose a lightweight probabilistic density inference protocol, we call DIP, which allows each sensor node to implicitly estimate its neighborhood size without the explicit exchange of node identifiers as in existing density discovery schemes. The theoretical basis of DIP is a probabilistic analysis which gives the relationship between the number of sensor nodes contending in the neighborhood of a node and the level of contention measured by that node. Extensive simulations confirm the premise of DIP: it can provide statistically reliable and accurate estimates of local density at a very low energy cost and constant running time. We demonstrate how applications could be built on top of our DIP-based service by computing density-unbiased statistics from estimated local densities.
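The core inversion the abstract describes (recovering neighborhood size from measured contention, without exchanging identifiers) can be sketched with a toy slotted-contention model. Everything below, including the per-slot transmit probability p and the function names, is a hypothetical illustration, not the actual DIP protocol:

```python
import math
import random

def estimate_neighbors(busy_fraction, p):
    """Invert q = 1 - (1 - p)**n: estimate the number of contending
    neighbors n from the observed fraction of busy slots q."""
    return math.log(1.0 - busy_fraction) / math.log(1.0 - p)

def simulate_busy_fraction(n, p, slots, rng):
    """Fraction of slots in which at least one of n neighbors
    transmits, each independently with probability p per slot."""
    busy = sum(1 for _ in range(slots)
               if any(rng.random() < p for _ in range(n)))
    return busy / slots

rng = random.Random(0)
q = simulate_busy_fraction(n=25, p=0.05, slots=20000, rng=rng)
n_hat = estimate_neighbors(q, p=0.05)
print(round(n_hat, 1))  # close to the true neighborhood size of 25
```

The node never learns who its neighbors are, only how busy the channel is, which is the identifier-free character the abstract claims for DIP.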
Abstract:
Under natural viewing conditions, small movements of the eye, head, and body prevent the maintenance of a steady direction of gaze. It is known that stimuli tend to fade when they are stabilized on the retina for several seconds. However, it is unclear whether the physiological motion of the retinal image serves a visual purpose during the brief periods of natural visual fixation. This study examines the impact of fixational instability on the statistics of the visual input to the retina and on the structure of neural activity in the early visual system. We show that fixational instability introduces a component in the retinal input signals that, in the presence of natural images, lacks spatial correlations. This component strongly influences neural activity in a model of the LGN. It decorrelates cell responses even if the contrast sensitivity functions of simulated cells are not perfectly tuned to counterbalance the power-law spectrum of natural images. A decorrelation of neural activity at the early stages of the visual system has been proposed to be beneficial for discarding statistical redundancies in the input signals. The results of this study suggest that fixational instability might contribute to establishing efficient representations of natural stimuli.
Abstract:
In this PhD study, mathematical modelling and optimisation of granola production has been carried out. Granola is an aggregated food product used in breakfast cereals and cereal bars. It is a baked, crispy food product typically incorporating oats, other cereals and nuts bound together with a binder, such as honey, water and oil, to form a structured unit aggregate. In this work, the design and operation of two parallel processes to produce aggregate granola products were investigated: i) a high shear mixing granulation stage (in a designated granulator) followed by drying/toasting in an oven; ii) a continuous fluidised bed followed by drying/toasting in an oven. In addition, the particle breakage during pneumatic conveying of granola produced by both the high shear granulator (HSG) and fluidised bed granulator (FBG) processes was examined. Products were pneumatically conveyed in a purpose-built conveying rig designed to mimic product conveying and packaging. Three different conveying rig configurations were employed: a straight pipe, a rig containing two 45° bends, and one with a 90° bend. The least breakage occurred in the straight pipe and the most in the 90° bend pipe; moreover, lower levels of breakage were observed in the two-45°-bend pipe than in the 90° bend pipe configuration. In general, increasing the impact angle increases the degree of breakage. Additionally, of the granules produced in the HSG, those produced at 300 rpm had the lowest breakage rates while those produced at 150 rpm had the highest. This effect clearly demonstrates the importance of shear history (during granule production) on breakage rates during subsequent processing. In terms of the FBG, no single operating parameter was deemed to have a significant effect on breakage during subsequent conveying. A population balance model was developed to analyse the particle breakage occurring during pneumatic conveying.
The population balance equations (PBEs) that govern this breakage process are solved using discretization, with a Markov chain method used for their solution. This study found that increasing the air velocity (by increasing the air pressure to the rig) results in increased breakage among granola aggregates. Furthermore, the analysis carried out in this work shows that a greater degree of breakage of granola aggregates occurs as the bend angle increases.
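As a hedged illustration of the kind of discretized population balance mentioned above (not the thesis's actual model), the following toy solves a linear binary-breakage PBE on a few size classes by forward Euler; the size grid, the selection rates, and the binary equal-splitting rule are all assumptions made for this sketch:

```python
# Size classes 0..K-1 with representative sizes x[i]; S[i] is the
# breakage (selection) rate of class i, and each breaking class-j
# granule splits into two class-(j-1) granules (binary equal breakage).
K = 5
x = [2.0 ** i for i in range(K)]          # sizes 1, 2, 4, 8, 16
S = [0.0, 0.1, 0.2, 0.3, 0.4]             # larger granules break faster
N = [0.0] * K
N[-1] = 1000.0                            # start with the largest class

dt, steps = 0.01, 1000
for _ in range(steps):                    # forward Euler in time
    dN = [-S[i] * N[i] for i in range(K)]
    for j in range(1, K):
        dN[j - 1] += 2.0 * S[j] * N[j]    # fragments land one class down
    N = [N[i] + dt * dN[i] for i in range(K)]

mass = sum(xi * ni for xi, ni in zip(x, N))
print(round(mass, 3))   # binary equal breakage conserves total mass
```

Mass conservation is a standard sanity check for any breakage PBE scheme: particles multiply and shift down the size grid, but total mass must stay fixed.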
Abstract:
In this paper, we examine exchange rates in Vietnam’s transitional economy. Evidence of long-run equilibrium is established in most cases through a single co-integrating vector among the endogenous variables that determine the real exchange rates. This supports relative PPP, in which the error-correction terms of the system can be combined linearly into a stationary process, reducing deviation from PPP in the long run. Restricted coefficient vectors ß’ = (1, 1, -1) for the real exchange rates of the currencies in question are not rejected. This evidence for relative PPP adds to the findings of many researchers, including Flre et al. (1999), Lee (1999), Johnson (1990), Culver and Papell (1999), and Cuddington and Liang (2001). Instead of testing different time series against a common base currency, we use different base currencies (USD, GBP, JPY and EUR). By doing so we ask whether the theory posits significant differences against any one currency. We find consensus, given inevitable technical differences, even with a smaller data sample for EUR. Speeds of convergence to PPP and adjustment are faster than results reported for developed economies, using both observed and bootstrapped half-life (HL) measures. Perhaps a better explanation is the adjustment from the hyperinflation period, after which the theory indicates that the adjustment process actually accelerates. We observe that deviations appear to have been large in the early stages of the reform, mostly overvaluation; over time, correction took place and the significant deviations gradually disappeared.
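The half-life (HL) measure used in this literature has a standard closed form under an AR(1) adjustment process; the persistence values below are hypothetical, chosen only to contrast slow and fast convergence to PPP:

```python
import math

def half_life(rho):
    """Half-life of a PPP deviation under AR(1) adjustment
    q_t = rho * q_{t-1} + e_t: periods until half a shock remains."""
    return math.log(0.5) / math.log(rho)

# Hypothetical persistence values contrasting slow adjustment
# (typical of developed-economy studies) with faster adjustment.
print(round(half_life(0.98), 1))  # about 34.3 periods
print(round(half_life(0.90), 1))  # about 6.6 periods
```

A higher speed of convergence corresponds to a smaller AR coefficient and hence a shorter half-life, which is the comparison the abstract draws.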
A mathematical theory of stochastic microlensing. II. Random images, shear, and the Kac-Rice formula
Abstract:
Continuing our development of a mathematical theory of stochastic microlensing, we study the random shear and expected number of random lensed images of different types. In particular, we characterize the first three leading terms in the asymptotic expression of the joint probability density function (pdf) of the random shear tensor due to point masses in the limit of an infinite number of stars. Up to this order, the pdf depends on the magnitude of the shear tensor, the optical depth, and the mean number of stars through a combination of radial position and the star's mass. As a consequence, the pdf's of the shear components are seen to converge, in the limit of an infinite number of stars, to shifted Cauchy distributions, which shows that the shear components have heavy tails in that limit. The asymptotic pdf of the shear magnitude in the limit of an infinite number of stars is also presented. All the results on the random microlensing shear are given for a general point in the lens plane. Extending to the general random distributions (not necessarily uniform) of the lenses, we employ the Kac-Rice formula and Morse theory to deduce general formulas for the expected total number of images and the expected number of saddle images. We further generalize these results by considering random sources defined on a countable compact covering of the light source plane. This is done to introduce the notion of global expected number of positive parity images due to a general lensing map. Applying the result to microlensing, we calculate the asymptotic global expected number of minimum images in the limit of an infinite number of stars, where the stars are uniformly distributed. This global expectation is bounded, while the global expected number of images and the global expected number of saddle images diverge as the order of the number of stars. © 2009 American Institute of Physics.
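For reference, the shifted (location-scale) Cauchy density the abstract alludes to has the standard form below; the location $s_0$ and scale $\gamma$ are generic parameters, not values derived in the paper:

```latex
f(s) = \frac{1}{\pi}\,\frac{\gamma}{(s - s_0)^2 + \gamma^2},
\qquad \gamma > 0 .
```

Since $f(s)$ decays only as $s^{-2}$, the distribution has no finite mean or variance, which is exactly the heavy-tail behavior noted for the shear components in the infinite-star limit.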
Abstract:
BACKGROUND: Serotonin is a neurotransmitter that has been linked to a wide variety of behaviors including feeding and body-weight regulation, social hierarchies, aggression and suicidality, obsessive compulsive disorder, alcoholism, anxiety, and affective disorders. Full understanding of serotonergic systems in the central nervous system involves genomics, neurochemistry, electrophysiology, and behavior. Though associations have been found between functions at these different levels, in most cases the causal mechanisms are unknown. The scientific issues are daunting but important for human health because of the use of selective serotonin reuptake inhibitors and other pharmacological agents to treat disorders in the serotonergic signaling system. METHODS: We construct a mathematical model of serotonin synthesis, release, and reuptake in a single serotonergic neuron terminal. The model includes the effects of autoreceptors, the transport of tryptophan into the terminal, and the metabolism of serotonin, as well as the dependence of release on the firing rate. The model is based on real physiology determined experimentally and is compared to experimental data. RESULTS: We compare the variations in serotonin and dopamine synthesis due to meals and find that dopamine synthesis is insensitive to the availability of tyrosine but serotonin synthesis is sensitive to the availability of tryptophan. We conduct in silico experiments on the clearance of extracellular serotonin, normally and in the presence of fluoxetine, and compare to experimental data. We study the effects of various polymorphisms in the genes for the serotonin transporter and for tryptophan hydroxylase on synthesis, release, and reuptake. We find that, because of the homeostatic feedback mechanisms of the autoreceptors, the polymorphisms have smaller effects than one expects. We compute the expected steady concentrations of serotonin transporter knockout mice and compare to experimental data. 
Finally, we study how the properties of the serotonin transporter and the autoreceptors give rise to the time courses of extracellular serotonin in various projection regions after a dose of fluoxetine. CONCLUSIONS: Serotonergic systems must respond robustly to important biological signals, while at the same time maintaining homeostasis in the face of normal biological fluctuations in inputs, expression levels, and firing rates. This is accomplished through the cooperative effect of many different homeostatic mechanisms including special properties of the serotonin transporters and the serotonin autoreceptors. Many difficult questions remain in order to fully understand how serotonin biochemistry affects serotonin electrophysiology and vice versa, and how both are changed in the presence of selective serotonin reuptake inhibitors. Mathematical models are useful tools for investigating some of these questions.
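The homeostatic damping the abstract attributes to autoreceptors can be illustrated with a deliberately minimal toy model (not the authors' model; the release, feedback, and clearance terms and all parameter values are invented for illustration):

```python
# Extracellular serotonin e: release damped by autoreceptor feedback,
# cleared linearly by the transporter (SERT). All terms and parameter
# values are invented for illustration.
def steady_state(vmax, release=1.0, k_auto=2.0, e=0.1):
    for _ in range(20000):                 # relax by forward Euler
        de = release / (1.0 + k_auto * e) - vmax * e
        e += 0.001 * de
    return e

normal = steady_state(vmax=2.0)
knockdown = steady_state(vmax=1.0)         # halve SERT clearance
# Without feedback the steady state would exactly double; with the
# autoreceptor term it rises by much less.
print(round(knockdown / normal, 2))
```

This is the qualitative point the abstract makes: feedback through autoreceptors makes steady-state serotonin less sensitive to transporter polymorphisms than a feedback-free model would predict.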
Abstract:
PURPOSE: To develop a mathematical model that can predict refractive changes after Descemet stripping endothelial keratoplasty (DSEK). METHODS: A mathematical formula based on the Gullstrand eye model was generated to estimate the change in refractive power of the eye after DSEK. This model was applied retrospectively to four DSEK cases to compare measured and predicted refractive changes after DSEK. RESULTS: The refractive change after DSEK is determined by calculating the difference in the power of the eye before and after DSEK surgery. The power of the eye post-DSEK surgery can be calculated with modified Gullstrand eye model equations that incorporate the change in the posterior radius of curvature and the change in the distance between the principal planes of the cornea and lens after DSEK. Analysis of this model suggests that the ratio of central to peripheral graft thickness (CP ratio) and the central thickness can have a significant effect on refractive change, with smaller CP ratios and larger graft thicknesses resulting in larger hyperopic shifts. This model was applied to four patients, and the average predicted hyperopic shift in the overall power of the eye was calculated to be 0.83 D. This prediction accounted for a mean of 93% (range, 75%-110%) of the patients' measured refractive shifts. CONCLUSIONS: This simplified DSEK mathematical model can be used as a first step in estimating the hyperopic shift after DSEK. Further studies are necessary to validate this model.
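A minimal sketch of why a change in posterior corneal curvature produces a hyperopic shift, using only the standard single-surface power formula P = (n2 - n1) / r; the refractive indices are textbook values and the radii are hypothetical, not the paper's DSEK formula:

```python
def surface_power(n1, n2, r_m):
    """Refractive power in diopters of a single spherical surface
    separating media of index n1 and n2, with radius of curvature
    r_m in meters (positive r: center of curvature behind surface)."""
    return (n2 - n1) / r_m

# Posterior cornea: from corneal stroma (~1.376) to aqueous (~1.336),
# so its power is negative. Radii are hypothetical: a typical
# posterior radius versus one steepened by a graft that is thicker
# centrally than peripherally.
before = surface_power(1.376, 1.336, 0.0068)
after = surface_power(1.376, 1.336, 0.0060)
print(round(after - before, 2))  # about -0.78 D: a hyperopic shift
```

Steepening the posterior surface makes its already negative power more negative, lowering the total power of the eye, which is the direction of shift the abstract reports.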
Abstract:
Nolan and Temple Lang argue that “the ability to express statistical computations is an essential skill.” A key related capacity is the ability to conduct and present data analysis in a way that another person can understand and replicate. The copy-and-paste workflow that is an artifact of antiquated user-interface design makes reproducibility of statistical analysis more difficult, especially as data become increasingly complex and statistical methods become increasingly sophisticated. R Markdown is a new technology that makes creating fully-reproducible statistical analysis simple and painless. It provides a solution suitable not only for cutting edge research, but also for use in an introductory statistics course. We present experiential and statistical evidence that R Markdown can be used effectively in introductory statistics courses, and discuss its role in the rapidly-changing world of statistical computation.
Abstract:
© 2015 Society for Industrial and Applied Mathematics. We consider parabolic PDEs with randomly switching boundary conditions. In order to analyze these random PDEs, we consider more general stochastic hybrid systems and prove convergence to, and properties of, a stationary distribution. Applying these general results to the heat equation with randomly switching boundary conditions, we find explicit formulae for various statistics of the solution and obtain almost sure results about its regularity and structure. These results are of particular interest for biological applications as well as for their significant departure from behavior seen in PDEs forced by disparate Gaussian noise. Our general results also have applications to other types of stochastic hybrid systems, such as ODEs with randomly switching right-hand sides.
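A toy version of such a system (not the paper's setup) is easy to simulate: a 1D heat equation whose right boundary value switches between 0 and 1 at exponential times. With symmetric switching the boundary averages 0.5, so the long-run mean profile is linear and the midpoint should average near 0.25. The grid size, switching rate, and horizon below are all illustrative choices:

```python
import random

# 1D heat equation on [0, 1]: u(0, t) = 0 fixed, u(1, t) switches
# between 0 and 1 at exponential times (a toy switching boundary).
rng = random.Random(1)
M = 21
dx = 1.0 / (M - 1)
dt = 0.4 * dx * dx                 # explicit scheme, dt/dx**2 <= 1/2
rate = 5.0                         # boundary switching rate
u = [0.0] * M
state = 1.0
next_switch = rng.expovariate(rate)
t, samples = 0.0, []
while t < 50.0:
    if t >= next_switch:           # boundary jumps to the other value
        state = 1.0 - state
        next_switch += rng.expovariate(rate)
    u[0], u[-1] = 0.0, state
    u = [u[0]] + [u[i] + dt / dx ** 2 * (u[i + 1] - 2.0 * u[i] + u[i - 1])
                  for i in range(1, M - 1)] + [u[-1]]
    t += dt
    samples.append(u[M // 2])      # track the midpoint temperature
burn = len(samples) // 2
avg_mid = sum(samples[burn:]) / (len(samples) - burn)
print(round(avg_mid, 2))           # near 0.25 for symmetric switching
```

The piecewise-deterministic character is visible here: between boundary jumps the solution evolves smoothly, and the randomness enters only through the switching times.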
Abstract:
Slowly-compressed single crystals, bulk metallic glasses (BMGs), rocks, granular materials, and the earth all deform via intermittent slips or "quakes". We find that although these systems span 12 decades in length scale, they all show the same scaling behavior for their slip size distributions and other statistical properties. Remarkably, the size distributions follow the same power law multiplied with the same exponential cutoff. The cutoff grows with applied force for materials spanning length scales from nanometers to kilometers. The tuneability of the cutoff with stress reflects "tuned critical" behavior, rather than self-organized criticality (SOC), which would imply stress-independence. A simple mean field model for avalanches of slipping weak spots explains the agreement across scales. It predicts the observed slip-size distributions and the observed stress-dependent cutoff function. The results enable extrapolations from one scale to another, and from one force to another, across different materials and structures, from nanocrystals to earthquakes.
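The stress-tuned cutoff can be illustrated with the generic mean-field form P(s) ∝ s^(-τ) exp(-s/s*), where the cutoff s* grows with applied stress; τ = 1.5 is the standard mean-field slip-avalanche exponent, and the s* values below are hypothetical stand-ins for two stress levels:

```python
import math

def slip_size_pmf(s_max, tau, s_star):
    """Discrete stand-in for P(s) ~ s**(-tau) * exp(-s / s_star),
    normalized over sizes 1..s_max."""
    w = [s ** -tau * math.exp(-s / s_star) for s in range(1, s_max + 1)]
    z = sum(w)
    return [wi / z for wi in w]

def mean_size(pmf):
    return sum((s + 1) * p for s, p in enumerate(pmf))

# Mean-field exponent tau = 1.5; a larger cutoff s* stands in for a
# higher applied stress (the values here are purely illustrative).
low_stress = mean_size(slip_size_pmf(100000, 1.5, s_star=100.0))
high_stress = mean_size(slip_size_pmf(100000, 1.5, s_star=10000.0))
print(low_stress < high_stress)    # larger cutoff -> larger avalanches
```

This captures the "tuned critical" picture described above: the power law is fixed, and only the exponential cutoff moves with stress, so statistics at one force can be extrapolated to another.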
Abstract:
This paper describes an experience carried out with pre-service mathematics teachers on the role that new technologies can play in carrying out processes of proof and demonstration in the secondary classroom.
Abstract:
This paper describes in detail an experience carried out with pre-service mathematics teachers on the role that new technologies can play in carrying out processes of proof and demonstration in the secondary classroom.