Abstract:
This paper proposes solutions to three issues in the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov chain Monte Carlo (MCMC) sampling techniques, and the related label-switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via the Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test-based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Given a sufficiently large sample size, Zmix can accurately estimate the number of components, the posterior parameter estimates, and the allocation probabilities. The results reflect uncertainty in the final model and report, from a single run, the range of candidate models and their respective estimated probabilities. Label switching is resolved with a computationally lightweight method, Zswitch, developed for overfitted mixtures by exploiting the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies illustrate Zmix and Zswitch, along with three case studies from the literature. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.
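The prior mechanism described above can be sketched independently of the Zmix implementation. In a minimal illustration (not the authors' code; the allocation counts and concentration parameter are hypothetical), a sparse symmetric Dirichlet prior on the weights ensures that components left empty by the allocation receive posterior weights near zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# Allocation counts from a hypothetical overfitted fit: 10 components
# declared, but the data occupy only the first two.
counts = np.array([120, 80, 0, 0, 0, 0, 0, 0, 0, 0])

alpha = 0.01  # sparse symmetric Dirichlet prior concentration
# Conjugacy: the posterior over weights is Dirichlet(alpha + counts).
posterior_draws = rng.dirichlet(alpha + counts, size=5000)

mean_weights = posterior_draws.mean(axis=0)
print(np.round(mean_weights, 4))
```

The empty components' posterior mean weights are of order alpha divided by the total count, so they are effectively shrunk out of the model.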
Abstract:
This thesis presents an interdisciplinary analysis of how models and simulations function in the production of scientific knowledge. The work is informed by three scholarly traditions: studies of models and simulations in the philosophy of science, so-called micro-sociological laboratory studies within science and technology studies, and cultural-historical activity theory. Methodologically, I adopt a naturalist epistemology and combine philosophical analysis with a qualitative, empirical case study of infectious-disease modelling. The study maintains a dual perspective throughout the analysis: it specifies the modelling practices and examines the models as objects of research. The research questions addressed are: 1) How are models constructed, and what functions do they have in the production of scientific knowledge? 2) What is interdisciplinarity in model construction? 3) How do models become a general research tool, and why is this process problematic? The core argument is that mediating models as investigative instruments (cf. Morgan and Morrison 1999) take questions as their starting point, and hence their construction is intentionally guided. This argument applies the interrogative model of inquiry (e.g., Sintonen 2005; Hintikka 1981), which conceives of all knowledge acquisition as a process of seeking answers to questions. The first question addresses simulation models as Artificial Nature, which is manipulated in order to answer the questions that initiated the model building. This account further develops the "epistemology of simulation" (cf. Winsberg 2003) by showing the interrelatedness of researchers and their objects in the process of modelling. The second question clarifies why interdisciplinary research collaboration is demanding and difficult to maintain.
The nature of the impediments to disciplinary interaction is examined by introducing the idea of object-oriented interdisciplinarity, which provides an analytical framework for studying changes in the degree of interdisciplinarity, the tools and research practices developed to support collaboration, and the mode of collaboration in relation to the historically mutable object of research. As my interest is in models as interdisciplinary objects, the third research problem asks how we might characterise these objects, what is typical of them, and what kinds of changes occur in the process of modelling. Here I examine the tension between specified, question-oriented models and more general models, and suggest that the specified models form a group of their own. I call these Tailor-made models, in opposition to the process of building a simulation platform that aims at generalisability and utility for health policy. This tension also underlines the challenge of applying research results (or methods and tools) to discuss and solve problems in decision-making processes.
Abstract:
A geometrical structure called the implied minterm structure (IMS) has been developed from the properties of minterms of a threshold function. The IMS is useful for the manual testing of linear separability of switching functions of up to six variables. This testing is done just by inspection of the plot of the function on the IMS.
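The IMS supports manual testing by inspection; as a programmatic cross-check (not the IMS method itself), linear separability of a switching function can also be decided with a small linear program, which is feasible exactly when some weights and a threshold separate the true and false minterms. A sketch using SciPy:

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

def is_linearly_separable(truth, n):
    """Decide whether the switching function `truth` on n variables is a
    threshold function: feasibility of w.x - t >= 1 on true minterms
    and w.x - t <= -1 on false ones (LP variables are w_1..w_n and t)."""
    A_ub, b_ub = [], []
    for x in product((0, 1), repeat=n):
        xv = np.array(x, dtype=float)
        if truth(x):
            A_ub.append(np.append(-xv, 1.0))   # -(w.x - t) <= -1
        else:
            A_ub.append(np.append(xv, -1.0))   # (w.x - t) <= -1
        b_ub.append(-1.0)
    res = linprog(np.zeros(n + 1), A_ub=np.array(A_ub), b_ub=b_ub,
                  bounds=[(None, None)] * (n + 1), method="highs")
    return res.status == 0                     # feasible <=> separable

print(is_linearly_separable(lambda x: all(x), 4))            # AND of 4 inputs
print(is_linearly_separable(lambda x: sum(x) % 2 == 1, 4))   # parity
```

AND is the classic threshold function, while parity is the classic non-threshold function, so the two calls exercise both outcomes.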
Abstract:
The transmission loss of a rectangular expansion chamber whose inlet and outlet are situated at arbitrary locations on the chamber, i.e., on the side wall or the face of the chamber, is analyzed here based on the Green's function of a rectangular cavity with homogeneous boundary conditions. The rectangular chamber Green's function is expressed in terms of a finite number of rigid rectangular cavity mode shapes. The inlet and outlet ports are modeled as uniform velocity pistons. If the size of the piston is small compared to the wavelength, then plane wave excitation is a valid assumption. The velocity potential inside the chamber is expressed by superimposing the velocity potentials of two different configurations. The first configuration is a piston source at the inlet port with a rigid termination at the outlet, and the second is a piston at the outlet with a rigid termination at the inlet. Pressure inside the chamber is derived from the velocity potentials using the linear momentum equation. The average pressure acting on the pistons at the inlet and outlet locations is estimated by integrating the acoustic pressure over the piston area in the two constituent configurations. The transfer matrix is derived from the average pressure values, and thence the transmission loss is calculated. The results are verified against those in the literature, where use has been made of modal expansions as well as numerical (FEM fluid) models. The transfer matrix formulation for rectangular chambers with yielding walls has been derived incorporating the structural–acoustic coupling. Parametric studies are conducted for different inlet and outlet configurations, and the various phenomena occurring in the TL curves that cannot be explained by classical plane wave theory are discussed.
Abstract:
A mathematical model for pulsatile flow in a partially occluded tube is presented. The problem has applications in studying the effects of blood flow characteristics on atherosclerotic development. The model brings out the importance of the pulsatility of blood flow on separation and the stress distribution. The results obtained show fairly good agreement with the available experimental results.
Abstract:
It is well known that new particle formation (NPF) in the atmosphere is inhibited by pre-existing particles in the air, which act as condensation sinks and thereby decrease the concentration, and thus the supersaturation, of precursor gases. In this study, we investigate the effects of two parameters, atmospheric visibility (expressed as the particle backscatter coefficient, BSP) and PM10 particulate mass concentration, on the occurrence of NPF events in an urban environment where the majority of precursor gases originate from motor vehicle and industrial sources. This is the first attempt to derive direct relationships between each of these two parameters and the occurrence of NPF. NPF events were identified from data obtained with a neutral cluster and air ion spectrometer over 245 days within a calendar year. Bayesian logistic regression was used to determine the probability of observing NPF as a function of BSP and of PM10. We show that the BSP at 08 h on a given day is a reliable indicator of an NPF event later that day. The posterior median probability of observing an NPF event was greater than 0.5 (95%) when the BSP at 08 h was less than 6.8 Mm-1.
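The Bayesian logistic regression step can be sketched on synthetic data (a stand-in for the study's observations; the BSP range, true coefficients, and sampler settings are all choices made here, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for 245 observation days: NPF events become
# rarer as the backscatter coefficient grows.
bsp = rng.uniform(0.0, 20.0, size=245)                  # Mm-1, hypothetical
p_true = 1.0 / (1.0 + np.exp(-(3.0 - 0.45 * bsp)))
npf = rng.random(245) < p_true                          # event indicator

def log_post(beta):
    """Log posterior: Bernoulli-logit likelihood plus vague N(0, 10^2)
    priors on intercept and slope."""
    eta = beta[0] + beta[1] * bsp
    loglik = np.sum(npf * eta - np.logaddexp(0.0, eta))
    return loglik - np.sum(beta**2) / (2.0 * 10.0**2)

# Random-walk Metropolis over (intercept, slope).
beta, lp = np.zeros(2), log_post(np.zeros(2))
draws = []
for _ in range(20000):
    prop = beta + rng.normal(scale=0.15, size=2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        beta, lp = prop, lp_prop
    draws.append(beta)
draws = np.array(draws[5000:])                          # discard burn-in

def median_prob(b):
    """Posterior median of P(NPF event | BSP = b)."""
    return np.median(1.0 / (1.0 + np.exp(-(draws[:, 0] + draws[:, 1] * b))))

print(median_prob(2.0), median_prob(15.0))
```

The posterior median probability curve recovers the monotone decline of NPF probability with BSP, which is the kind of relationship the study reports.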
Human cortical functions in auditory change detection evaluated with multiple brain research methods
Abstract:
The problem of decaying states and resonances is examined within the framework of scattering theory in a rigged Hilbert space formalism. The stationary free, "in," and "out" eigenvectors of formal scattering theory, which have a rigorous setting in rigged Hilbert space, are considered to be analytic functions of the energy eigenvalue. The value of these analytic functions at any point of regularity, real or complex, is an eigenvector with eigenvalue equal to the position of the point. The poles of the eigenvector families give rise to other eigenvectors of the Hamiltonian: the singularities of the "out" eigenvector family are the same as those of the continued S matrix, so that resonances are seen as eigenvectors of the Hamiltonian with eigenvalue equal to their location in the complex energy plane. Cauchy's theorem then provides for expansions in terms of "complete" sets of eigenvectors with complex eigenvalues of the Hamiltonian. Applying such expansions to the survival amplitude of a decaying state, one finds that resonances give discrete contributions with purely exponential time behavior; the background is of course present, but explicitly separated. The resolvent of the Hamiltonian, restricted to the nuclear space appearing in the rigged Hilbert space, can be continued across the absolutely continuous spectrum; the singularities of the continuation are the same as those of the "out" eigenvectors. The free, "in," and "out" eigenvectors with complex eigenvalues, and those corresponding to resonances, can be approximated by physical vectors in the Hilbert space, as plane waves can. The need for having some further physical information in addition to the specification of the total Hamiltonian is apparent in the proposed framework. The formalism is applied to the Lee–Friedrichs model and to the scattering of a spinless particle by a local central potential. Journal of Mathematical Physics is copyrighted by The American Institute of Physics.
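The discrete exponential contribution attributed to each resonance can be written out in a standard sketch (consistent with, but not taken from, the paper): if the continued eigenvector family has a simple pole at $z_R = E_R - i\Gamma_R/2$, deforming the contour via Cauchy's theorem splits the survival amplitude as

```latex
A(t) \;=\; \langle \psi \,|\, e^{-iHt} \,|\, \psi \rangle
     \;=\; \sum_R c_R \, e^{-iE_R t} \, e^{-\Gamma_R t/2} \;+\; B(t),
```

where the sum runs over resonance poles, each $c_R$ is a residue-dependent coefficient, and $B(t)$ is the background integral along the deformed contour; for a single dominant pole, $|A(t)|^2 \approx |c_R|^2 \, e^{-\Gamma_R t}$, the purely exponential decay mentioned in the abstract.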
Abstract:
The problem of learning correct decision rules to minimize the probability of misclassification is a long-standing problem of supervised learning in pattern recognition. Learning such optimal discriminant functions is considered here for the class of problems in which the statistical properties of the pattern classes are completely unknown. The problem is posed as a game with common payoff played by a team of mutually cooperating learning automata, which essentially results in a probabilistic search through the space of classifiers. The approach is inherently capable of learning discriminant functions that are also nonlinear in their parameters. A learning algorithm is presented for the team and its convergence is established. It is proved that the team can obtain the optimal classifier to an arbitrary degree of approximation. Simulation results are presented for a few examples in which the team learns the optimal classifier.
Abstract:
This thesis concerns the development of mathematical models to describe the interactions that occur between spray droplets and leaves. Models are presented that not only provide a contribution to mathematical knowledge in the field of fluid dynamics, but are also of utility within the agrichemical industry. The thesis is presented in two parts. First, thin film models are implemented with efficient numerical schemes in order to simulate droplets on virtual leaf surfaces. Then the interception event is considered, whereby energy balance techniques are employed to instantaneously predict whether an impacting droplet will bounce, splash, or adhere to a leaf.
Abstract:
The development of algorithms, based on Haar functions, for extracting the desired frequency components from transient power-system relaying signals is presented. The applications of these algorithms to impedance detection in transmission line protection and to harmonic restraint in transformer differential protection are discussed. For transmission line protection, three modes of application of the Haar algorithms are described: a full-cycle window algorithm, an approximate full-cycle window algorithm, and a half-cycle window algorithm. For power transformer differential protection, the combined second- and fifth-harmonic magnitude of the differential current is compared with that of the fundamental to arrive at a trip decision. The proposed line protection algorithms are evaluated, under different fault conditions, using realistic relaying signals obtained from transient analysis conducted on a model 400 kV, three-phase system. The transformer differential protection algorithms are also evaluated using a variety of simulated inrush and internal fault signals.
Abstract:
The aim of this series of independent sub-studies was to complement the picture of the information-processing capacities of children and adolescents with weak mathematical skills by examining whether visuospatial working-memory capacities are related to mathematical performance. The theoretical framework was built around Baddeley's (1986, 1997) three-component model. The conception of working memory was, however, broader than its model, including in visuospatial working memory, in the terms of Cornoldi and Vecchi (2003), both passive storage functions and active processing functions. The relationships between working memory and mathematical skills were examined in five sub-studies. The first two focused on preschool children's mastery of the number concept and their visuospatial working-memory capacities, and the latter three on the relationships between ninth-graders' mathematical skills and visuospatial working-memory skills. The series of studies sought to determine whether visuospatial working-memory capacities are related to mathematical performance both at preschool and at lower-secondary age (sub-studies I, II, III, IV, V); whether the relationship is specific, limited to particular visuospatial capacities and mathematical performance, or general, concerning mathematical skills and the whole of visuospatial working memory (sub-studies I, II, III, IV, V) or working memory more broadly (sub-studies II, III); and whether the relationship is working-memory-specific or explainable by a general reasoning capacity such as intelligence (sub-studies I, II, IV). The results of the series show that the ability to temporarily store and process visuospatial information is related to mathematical performance, and that this relationship cannot be explained solely by fluid intelligence.
Performance on tasks measuring visuospatial working memory is related both to preschool children's mastery of early mathematical skills and to ninth-graders' mathematics skills. The weaknesses in the visuospatial working-memory resources of children and adolescents with weak mathematical skills nevertheless appear to be rather specific, limited to the capacities required in certain types of memory tasks; performance on some tasks measuring visuospatial working-memory capacities is not related to mathematical skills. The differences in working-memory capacities between mathematically weak and normally performing children, both before and during school age, do, however, seem to be to some extent related to linguistic skills, pointing to a certain accumulation of difficulties: those mathematically weak individuals who also have linguistic difficulties show, on average, broader working-memory weaknesses. Some of those who perform poorly in mathematics thus have clearly below-average visuospatial working-memory capacities, and this weakness may be one possible cause of, or an aggravating factor behind, poor mathematical performance. A weakness in visuospatial working memory concretely means less mental processing space, which constrains learning and performance situations. The weakness in information-processing capacities relates specifically to the speed of learning, not to the learnability of the material as such. If the learning environment takes the limitations of these capacities into account, working-memory weaknesses are unlikely to prevent learning as such. Keywords: working memory, visuospatial working memory, mathematical skills, number concept, mathematical learning difficulties
Abstract:
Images from cell biology experiments often indicate the presence of cell clustering, which can provide insight into the mechanisms driving the collective cell behaviour. Pair-correlation functions provide quantitative information about the presence, or absence, of clustering in a spatial distribution of cells. This is because the pair-correlation function describes the ratio of the abundance of pairs of cells, separated by a particular distance, relative to a randomly distributed reference population. Pair-correlation functions are often presented as a kernel density estimate in which the frequency of pairs of objects is grouped using a particular bandwidth (or bin width), Δ>0. The choice of bandwidth has a dramatic impact: choosing Δ too large produces a pair-correlation function that contains insufficient information, whereas choosing Δ too small produces a pair-correlation signal dominated by fluctuations. Presently, there is little guidance available regarding how to make an objective choice of Δ. We present a new technique to choose Δ by analysing the power spectrum of the discrete Fourier transform of the pair-correlation function. Using synthetic simulation data, we confirm that our approach allows us to objectively choose Δ such that the appropriately binned pair-correlation function captures known features in uniform and clustered synthetic images. We also apply our technique to images from two different cell biology assays. The first assay corresponds to an approximately uniform distribution of cells, while the second assay involves a time series of images of a cell population which forms aggregates over time. The appropriately binned pair-correlation function allows us to make quantitative inferences about the average aggregate size, as well as quantifying how the average aggregate size changes with time.
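The bandwidth trade-off described above can be reproduced on synthetic data (a sketch, not the paper's code: the periodic one-dimensional setting, point count, and the two Δ values are choices made here for brevity). A uniform pattern should give a pair-correlation function near 1 in every bin, and an over-fine Δ buries that signal in fluctuations, visible in the Fourier power spectrum:

```python
import numpy as np

rng = np.random.default_rng(2)

def pair_correlation(points, delta, d_max=0.5):
    """Binned pair-correlation estimate for points on a periodic unit
    interval (periodicity avoids edge corrections). Under complete
    spatial randomness every bin has expected value 1."""
    n = len(points)
    d = np.abs(points[:, None] - points[None, :])
    d = np.minimum(d, 1.0 - d)                    # periodic distance
    d = d[np.triu_indices(n, k=1)]                # unordered pairs
    counts, _ = np.histogram(d, bins=np.arange(0.0, d_max + delta, delta))
    expected = len(d) * delta / d_max             # CSR reference per bin
    return counts / expected

points = rng.random(400)                          # a uniform pattern
g_coarse = pair_correlation(points, 0.05)         # generous bandwidth
g_fine = pair_correlation(points, 0.002)          # bandwidth too small

# Power spectrum of the mean-corrected signal: a fluctuation-dominated
# estimate spreads its power across high frequencies.
power_fine = np.abs(np.fft.rfft(g_fine - g_fine.mean()))**2
print(g_coarse.mean(), g_fine.mean(), np.var(g_coarse), np.var(g_fine))
```

Both binnings average to roughly 1, but the over-fine estimate has a far larger bin-to-bin variance, which is exactly the high-frequency content the power-spectrum criterion is designed to detect.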