941 results for Chebyshev And Binomial Distributions


Relevance:

100.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-06


Migraine is a painful and debilitating disorder with a significant genetic component. Steroid hormones, in particular estrogen, have long been considered to play a role in migraine, as variations in hormone levels are associated with migraine onset in many sufferers of the disorder. Steroid hormones mediate their activity via hormone receptors, which have a wide tissue distribution. Estrogen receptors have been localized to the brain in regions considered to be involved in migraine pathogenesis. Hence it is possible that genetic variation in the estrogen receptor gene may play a role in migraine susceptibility. This study therefore examined the estrogen receptor 1 (ESR1) gene for a potential role in migraine pathogenesis and susceptibility. A population-based cohort of 224 migraine sufferers and 224 matched controls was genotyped for the G594A polymorphism located in exon 8 of the ESR1 gene. Statistical analysis indicated a significant difference between migraineurs and non-migraineurs in both the allele frequencies (P = 0.003) and genotype distributions (P = 0.008) in this sample. An independent follow-up study was then undertaken using this marker in an additional population-based cohort of 260 migraine sufferers and 260 matched controls. This resulted in a significant association between the two groups with regard to allele frequencies (P = 8×10^-6) and genotype distributions (P = 4×10^-5). Our findings support the hypothesis that genetic variation in hormone receptors, in particular the ESR1 gene, may play a role in migraine.
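A case-control allele-frequency comparison of this kind reduces to a chi-square test on a 2×2 contingency table. The sketch below uses hypothetical allele counts, since the abstract reports only the P-values, not the underlying tables:

```python
import math

def chi2_2x2(table):
    """Pearson chi-square statistic (no continuity correction) for a 2x2 table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    # closed form: n (ad - bc)^2 / [(a+b)(c+d)(a+c)(b+d)]
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical G/A allele counts: 224 subjects per group = 448 alleles each.
allele_counts = ((300, 148),   # migraineurs:      G, A
                 (345, 103))   # matched controls: G, A

chi2 = chi2_2x2(allele_counts)
# For 1 degree of freedom, the upper tail probability is erfc(sqrt(x/2)).
p_value = math.erfc(math.sqrt(chi2 / 2))
print(f"chi2 = {chi2:.2f}, p = {p_value:.2e}")
```

A genotype-distribution test proceeds the same way on a 2×3 table of GG/GA/AA counts, with 2 degrees of freedom.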


There are at least two reasons for a symmetric, unimodal, diffuse-tailed hyperbolic secant distribution to be interesting in real-life applications. It displays one of the common types of non-normality in natural data and is closely related to the logistic and Cauchy distributions that often arise in practice. To test the difference in location between two hyperbolic secant distributions, we develop a simple linear rank test with trigonometric scores. We investigate the small-sample and asymptotic properties of the test statistic and provide tables of the exact null distribution for small sample sizes. We compare the test to the Wilcoxon two-sample test and show that, although the asymptotic powers of the tests are comparable, the present test has certain practical advantages over the Wilcoxon test.
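A linear rank statistic of this kind is easy to sketch. For the hyperbolic secant density, working through the score function -f'/f yields scores proportional to -cos(πu), so a natural finite-sample score for rank i out of N is -cos(πi/(N+1)). The implementation below is illustrative and not necessarily the authors' exact formulation:

```python
import math

def trig_score_statistic(x, y):
    """Linear rank statistic with trigonometric scores a(i) = -cos(pi*i/(N+1)).

    The scores are antisymmetric about the middle rank, so they sum to zero
    and the statistic has zero mean under the null hypothesis of no shift.
    """
    combined = sorted((v, src) for src in (0, 1) for v in (x, y)[src])
    n = len(combined)
    # sum the scores at the ranks occupied by the second sample
    return sum(-math.cos(math.pi * (i + 1) / (n + 1))
               for i, (_, src) in enumerate(combined) if src == 1)
```

A large positive value suggests the second sample is shifted to the right; for small samples, critical values would come from the exact null distribution tabulated in the paper.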


We extend the projected Gross-Pitaevskii equation formalism of Davis [Phys. Rev. Lett. 87, 160402 (2001)] to the experimentally relevant case of thermal Bose gases in harmonic potentials and outline a robust and accurate numerical scheme that can efficiently simulate this system. We apply this method to investigate the equilibrium properties of the harmonically trapped three-dimensional projected Gross-Pitaevskii equation at finite temperature and consider the dependence of condensate fraction, position, and momentum distributions and density fluctuations on temperature. We apply the scheme to simulate an evaporative cooling process in which the preferential removal of high-energy particles leads to the growth of a Bose-Einstein condensate. We show that a condensate fraction can be inferred during the dynamics even in this nonequilibrium situation.


The generalized secant hyperbolic distribution (GSHD) proposed in Vaughan (2002) includes a wide range of unimodal symmetric distributions, with the Cauchy and uniform distributions being the limiting cases, and the logistic and hyperbolic secant distributions being special cases. The current article derives an asymptotically efficient rank estimator of the location parameter of the GSHD and suggests the corresponding one- and two-sample optimal rank tests. The rank estimator derived is compared to the modified MLE of location proposed in Vaughan (2002). By combining these two estimators, a computationally attractive method for constructing an exact confidence interval of the location parameter is developed. The statistical procedures introduced in the current article are illustrated by examples.


Biologists are increasingly conscious of the critical role that noise plays in cellular functions such as genetic regulation, often in connection with fluctuations in small numbers of key regulatory molecules. This has inspired the development of models that capture the fundamentally discrete and stochastic nature of cellular biology, most notably the Gillespie stochastic simulation algorithm (SSA). The SSA simulates a temporally homogeneous, discrete-state, continuous-time Markov process, and of course the corresponding probabilities and numbers of each molecular species must all remain positive. While accurately serving this purpose, the SSA can be computationally inefficient due to very small time stepping, so faster approximations such as the Poisson and binomial τ-leap methods have been suggested. This work places these leap methods in the context of numerical methods for the solution of stochastic differential equations (SDEs) driven by Poisson noise. This allows analogues of Euler-Maruyama, Milstein and even higher-order methods to be developed through Itô-Taylor expansions, as well as similar derivative-free Runge-Kutta approaches. Numerical results demonstrate that these novel methods compare favourably with existing techniques for simulating biochemical reactions, capturing crucial properties such as the mean and variance more accurately.
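The contrast between the exact SSA and a Poisson τ-leap can be illustrated on the simplest possible system, the decay reaction X → ∅. This is a toy sketch, not one of the biochemical models in the work; the rate constant, step size, and sample counts are arbitrary:

```python
import numpy as np

def ssa_decay(x0, k, t_end, rng):
    """Exact Gillespie SSA for the decay reaction X -> 0, propensity a(x) = k*x."""
    t, x = 0.0, x0
    while x > 0:
        t += rng.exponential(1.0 / (k * x))   # waiting time to the next event
        if t > t_end:
            break
        x -= 1
    return x

def tau_leap_decay(x0, k, t_end, tau, rng):
    """Poisson tau-leap: fire Poisson(k*x*tau) reaction events per step of size tau."""
    t, x = 0.0, x0
    while t < t_end and x > 0:
        x = max(0, x - rng.poisson(k * x * tau))
        t += tau
    return x

rng = np.random.default_rng(0)
# Exact mean for the decay process: E[X(t)] = x0 * exp(-k*t), about 36.8 here.
ssa_mean = np.mean([ssa_decay(100, 1.0, 1.0, rng) for _ in range(2000)])
leap_mean = np.mean([tau_leap_decay(100, 1.0, 1.0, 0.01, rng) for _ in range(2000)])
print(f"SSA mean: {ssa_mean:.1f}, tau-leap mean: {leap_mean:.1f}")
```

The leap method takes 100 fixed steps per path regardless of the molecule count, whereas the SSA takes one step per reaction event, which is the source of its inefficiency for large populations.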


The aim of this thesis is to highlight the importance of a critical approach to the seismic vulnerability assessment of masonry and mixed buildings. The thesis compares the different results obtained when modelling three existing buildings and one hypothetical building with two different programs based on the equivalent frame model. Modelling the various assumptions on restraints and on the extent of the rigid zones required four computational models in Aedes PCM and one model in 3Muri. The results were also compared with the simplified rapid analysis for territorial-scale vulnerability assessment prescribed by the "Linee Guida per la valutazione e riduzione del rischio sismico del Patrimonio Culturale" (Guidelines for the evaluation and reduction of the seismic risk of Cultural Heritage). The values obtained differ considerably, and the variability increases for irregular buildings; moreover, the damage actually surveyed on the buildings reveals a deep gap between the computer-predicted damage and the observed cracking. This is a warning against uncritical acceptance of the bare numerical result and a reminder of the importance of the knowledge process. The case studies were chosen for the following characteristics: the first is a simple structure, symmetric in both directions, used to test the basic assumptions in a controlled way.
The others are real buildings. The Morselli Pavilion is a C-shaped masonry building, regular in plan and elevation only in the y direction, which allowed the different behaviour of the computational models in the two directions to be compared. The Marconi high school is a mixed building in which reinforced concrete elements flank the load-bearing masonry walls, and its roof storey is rather irregular. Block 4 of the Castelfranco Emilia Hospital is a masonry building with a regular plan that shows the same top-storey irregularities as the previous one. The results showed good agreement in the safety index for the regular, simple models, with a scatter of about 30%, while the difference grows for irregular structures, reaching peaks of 60% where the load-bearing masonry walls are replaced by discrete elements in the roof storeys. For the three structures, the comparison was also extended to the modelling proposed by the Guidelines for the territorial-scale seismic safety index LV1, showing differences of about 30% for the Morselli Pavilion and 50% for the Marconi high school; the simplified method proves correctly conservative. It can therefore be stated that the more regular a building is in terms of mass and stiffness, the more the equivalent frame modelling returns values that agree between the programs and are easier to interpret. This finding can be extended to other real cases and becomes a genuine operational criterion: existing masonry buildings, usually very complex because they result from successive stratifications, should be subdivided into simpler parts, drawing on the information acquired through the knowledge path, which thus becomes a useful and vital tool.
The complexity of the historic building stock must necessarily be approached in a simpler way, by identifying sub-units that are regular in load path, construction period and technology, and structural behaviour demonstrated over time, and are therefore easier to study. A clear understanding of structural behaviour allows targeted, less invasive interventions that respect the existing fabric, bringing consolidation work back, once again, to the principles of restoration: minimum intervention, recognisability of the intervention, respect for existing materials and the use of new materials compatible with them. The knowledge path thus becomes the key to unlocking the complexity of existing historic buildings, turning mere technicality into a genuine cultural operation. This doctoral project was carried out in collaboration between the University of Parma (DICATeA) and Studio di Ingegneria Melegari through a Higher Education and Research Apprenticeship.


Error rates of a Boolean perceptron with threshold and either spherical or Ising constraint on the weight vector are calculated for storing patterns from biased input and output distributions derived within a one-step replica symmetry breaking (RSB) treatment. For unbiased output distribution and non-zero stability of the patterns, we find a critical load, α_p, above which two solutions to the saddle-point equations appear; one with higher free energy and zero threshold and a dominant solution with non-zero threshold. We examine this second-order phase transition and the dependence of α_p on the required pattern stability, κ, for both one-step RSB and replica symmetry (RS) in the spherical case and for one-step RSB in the Ising case.


The use of MS imaging (MSI) to resolve the spatial and pharmacodynamic distributions of compounds in tissues is emerging as a powerful tool for pharmacological research. Unlike established imaging techniques, only limited a priori knowledge is required and no extensive manipulation (e.g., radiolabeling) of drugs is necessary prior to dosing. MS provides highly multiplexed detection, making it possible to identify compounds, their metabolites and other changes in biomolecular abundances directly off tissue sections in a single pass. This can be employed to obtain near cellular, or potentially subcellular, resolution images. Consideration of technical limitations that affect the process is required, from sample preparation through to analyte ionization and detection. The techniques have only recently been adapted for imaging and novel variations to the established MSI methodologies will further enhance the application of MSI for pharmacological research.


Molecular dynamics simulations were carried out for Si/Ge axial nanowire heterostructures using modified embedded-atom method (MEAM) potentials. A Si–Ge MEAM interatomic cross potential was developed based on available experimental data and was used for these studies. The atomic distortions and strain distributions near the Si/Ge interfaces are predicted for nanowires with their axes oriented along the [111] direction. The cases of 10 and 25 nm diameter Si/Ge biwires and of 25 nm diameter Si/Ge/Si axial heterostructures with the Ge disk 1 nm thick were studied. Substantial distortions in the height of the atoms adjacent to the interface were found for the biwires but not for the Ge disks. Strains as high as 3.5% were found for the Ge disk and values of 2%–2.5% were found at the Si and Ge interfacial layers in the biwires. Deformation potential theory was used to estimate the influence of the strains on the band gap, and reductions in band gap to as small as 40% of bulk values are predicted for the Ge disks. The localized regions of increased strain and resulting energy minima were also found within the Si/Ge biwire interfaces with the larger effects on the Ge side of the interface. The regions of strain maxima near and within the interfaces are anticipated to be useful for tailoring band gaps and producing quantum confinement of carriers. These results suggest that nanowire heterostructures provide greater design flexibility in band structure modification than is possible with planar layer growth.


The gamma-rays produced by the inelastic scattering of 14 MeV neutrons in fusion reactor materials have been studied using a gamma-ray spectrometer employing a sodium iodide scintillation detector. The source neutrons are produced by the T(d,n)4He reaction using the SAMES accelerator at the University of Aston in Birmingham. To eliminate the large gamma-ray background and the neutron signal arising from the sensitivity of the sodium iodide detector to neutrons, the gamma-ray detector is heavily shielded and is used together with a discrimination system based on the associated particle time-of-flight method. The instant of production of a source neutron is determined by detecting the associated alpha-particle, enabling neutrons and gamma-rays to be discriminated by their different flight times. The electronic system used for measuring the time of flight of the neutrons and gamma-rays over the fixed flight path is described. The materials studied in this work were lithium and lead because of their importance as fuel-breeding and shielding materials in conceptual fusion reactor designs. Several sample thicknesses were studied to determine the multiple scattering effects. The observed gamma-ray spectra from each sample at several scattering angles in the angular range 0°–90° enabled absolute differential gamma-ray production cross-sections and angular distributions of the resolved gamma-rays from lithium to be measured and compared with published data. For the lead sample, the absolute differential gamma-ray production cross-sections for discrete 1 MeV ranges and the angular distributions were measured. The measured angular distributions of the present work and those on iron from previous work are compared to the predictions of the Monte Carlo programme M.O.R.S.E. Good agreement was obtained between the experimental results and the theoretical predictions.
In addition, an empirical relation has been constructed which describes the multiple scattering effects by a single parameter and is capable of predicting the gamma-ray production cross-sections for these materials to an accuracy of ±25%.
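The time-of-flight discrimination rests on a simple kinematic fact: a 14 MeV neutron travels at only about 17% of the speed of light, while the gamma-rays travel at c. The sketch below computes the resulting separation for an assumed 1 m flight path (the actual path length of the apparatus is not given in the abstract):

```python
import math

C = 299_792_458.0      # speed of light, m/s
M_N = 939.565          # neutron rest mass, MeV/c^2

def neutron_beta(kinetic_mev):
    """Relativistic speed v/c of a neutron with the given kinetic energy."""
    gamma = 1.0 + kinetic_mev / M_N
    return math.sqrt(1.0 - 1.0 / gamma ** 2)

flight_path = 1.0      # metres -- an assumed value, not taken from the thesis
t_neutron = flight_path / (neutron_beta(14.0) * C) * 1e9   # flight time, ns
t_gamma = flight_path / C * 1e9                            # flight time, ns
print(f"neutron: {t_neutron:.1f} ns, gamma-ray: {t_gamma:.1f} ns")
```

A separation on the order of 15 ns per metre of flight path is comfortably resolvable by fast timing electronics, which is what makes the associated particle method effective here.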


This thesis describes the procedure and results from four years research undertaken through the IHD (Interdisciplinary Higher Degrees) Scheme at Aston University in Birmingham, sponsored by the SERC (Science and Engineering Research Council) and Monk Dunstone Associates, Chartered Quantity Surveyors. A stochastic networking technique VERT (Venture Evaluation and Review Technique) was used to model the pre-tender costs of public health, heating ventilating, air-conditioning, fire protection, lifts and electrical installations within office developments. The model enabled the quantity surveyor to analyse, manipulate and explore complex scenarios which previously had defied ready mathematical analysis. The process involved the examination of historical material costs, labour factors and design performance data. Components and installation types were defined and formatted. Data was updated and adjusted using mechanical and electrical pre-tender cost indices and location, selection of contractor, contract sum, height and site condition factors. Ranges of cost, time and performance data were represented by probability density functions and defined by constant, uniform, normal and beta distributions. These variables and a network of the interrelationships between services components provided the framework for analysis. The VERT program, in this particular study, relied upon Monte Carlo simulation to model the uncertainties associated with pre-tender estimates of all possible installations. The computer generated output in the form of relative and cumulative frequency distributions of current element and total services costs, critical path analyses and details of statistical parameters. From this data alternative design solutions were compared, the degree of risk associated with estimates was determined, heuristics were tested and redeveloped, and cost significant items were isolated for closer examination. 
The resultant models successfully combined cost, time and performance factors and provided the quantity surveyor with an appreciation of the cost ranges associated with the various engineering services design options.
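The Monte Carlo core of such a model is straightforward to sketch: each trial draws every element cost from its fitted distribution, sums them, and the percentiles of the simulated totals quantify the estimating risk. All figures below are invented for illustration and are not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 20_000

# Illustrative element cost models (all figures hypothetical, in thousands):
heating = rng.normal(250, 25, n_trials)        # normal distribution
electrical = rng.uniform(180, 240, n_trials)   # uniform distribution
lifts = 60 + 40 * rng.beta(2, 5, n_trials)     # beta, scaled onto [60, 100]

total = heating + electrical + lifts
p10, p50, p90 = np.percentile(total, [10, 50, 90])
print(f"P10 {p10:.0f}  P50 {p50:.0f}  P90 {p90:.0f}")
```

The spread between P10 and P90 is the cost range the quantity surveyor would report for a design option; comparing these ranges across alternative designs mirrors the comparisons described above.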


This thesis deals with the problems associated with the planning and control of production, with particular reference to a small aluminium die casting company. The main problem areas were identified as: (a) a need to be able to forecast the customers' demands upon the company's facilities; (b) a need to produce a manufacturing programme in which the output of the foundry (or die casting section) was balanced with the available capacity in the machine shop; (c) the need to ensure that the resultant system enabled the company's operating budget to have a reasonable chance of being achieved. At the commencement of the research work the major customers were members of the automobile industry and had their own system of forecasting, from which they issued manufacturing schedules to their component suppliers. The errors in the forecast were analysed and the distributions noted. Using these distributions the customer's forecast was capable of being modified to enable the final demand to be met with a known degree of confidence. Before a manufacturing programme could be developed the actual manufacturing system had to be reviewed, and it was found that, as with many small companies, there was a remarkable lack of formal control and written data. Relevant data with regard to the components and the manufacturing process therefore had to be collected and analysed. The foundry process was fixed, but the secondary machining operations were analysed by a technique similar to Component Flow Analysis and as a result the machines were arranged in a series of flow lines. A system of manual production control was proposed and, for comparison, a local computer bureau was approached and a system proposed incorporating the production of additional management information. These systems are compared, their relative merits discussed, and a proposal made for implementation.
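Modifying a customer's forecast so that final demand is met with a known degree of confidence can be sketched as a quantile adjustment based on the historical forecast errors. The error data and the normality assumption below are illustrative only; the thesis worked from the empirically observed error distributions:

```python
import statistics
from statistics import NormalDist

# Hypothetical history of forecast errors (actual demand minus forecast):
errors = [120, -80, 45, 200, -30, 90, 150, -60, 10, 75]

mu = statistics.mean(errors)       # systematic bias in the customer's forecast
sigma = statistics.stdev(errors)   # spread of the forecast errors

def adjusted_forecast(customer_forecast, confidence=0.95):
    """Raise the customer's schedule so that final demand is met with the
    stated probability, assuming roughly normal forecast errors."""
    z = NormalDist().inv_cdf(confidence)
    return customer_forecast + mu + z * sigma

print(f"plan for {adjusted_forecast(5000):.0f} units against a forecast of 5000")
```

Raising the confidence level trades extra stock and capacity against the risk of failing to meet the customer's final call-off.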


Substantial altimetry datasets collected by different satellites have only become available during the past five years, but the future will bring a variety of new altimetry missions, both parallel and consecutive in time. The characteristics of each produced dataset vary with the different orbital heights and inclinations of the spacecraft, as well as with the technical properties of the radar instrument. An integral analysis of datasets with different properties offers advantages both in terms of data quantity and data quality. This thesis is concerned with the development of the means for such integral analysis, in particular for dynamic solutions in which precise orbits for the satellites are computed simultaneously. The first half of the thesis discusses the theory and numerical implementation of dynamic multi-satellite altimetry analysis. The most important aspect of this analysis is the application of dual satellite altimetry crossover points as a bi-directional tracking data type in simultaneous orbit solutions. The central problem is that the spatial and temporal distributions of the crossovers are in conflict with the time-organised nature of traditional solution methods. Their application to the adjustment of the orbits of both satellites involved in a dual crossover therefore requires several fundamental changes of the classical least-squares prediction/correction methods. The second part of the thesis applies the developed numerical techniques to the problems of precise orbit computation and gravity field adjustment, using the altimetry datasets of ERS-1 and TOPEX/Poseidon. Although the two datasets can be considered less compatible than those of planned future satellite missions, the obtained results adequately illustrate the merits of a simultaneous solution technique.
In particular, the geographically correlated orbit error is partially observable from a dataset consisting of crossover differences between two sufficiently different altimetry datasets, while being unobservable from the analysis of altimetry data of both satellites individually. This error signal, which has a substantial gravity-induced component, can be employed advantageously in simultaneous solutions for the two satellites in which also the harmonic coefficients of the gravity field model are estimated.


The development of more realistic constitutive models for granular media, such as sand, requires ingredients which take into account the internal micro-mechanical response to deformation. Unfortunately, at present, very little is known about these mechanisms and therefore it is instructive to find out more about the internal nature of granular samples by conducting suitable tests. In contrast to physical testing, the method of investigation used in this study employs the Distinct Element Method. This is a computer-based, iterative, time-dependent technique that allows the deformation of granular assemblies to be numerically simulated. By making assumptions regarding contact stiffnesses, each individual contact force can be measured, and by resolution the particle centroid forces can be calculated. Then, by dividing particle forces by their respective masses, particle accelerations are found, from which centroid velocities and displacements are obtained by numerical integration. The Distinct Element Method is incorporated into a computer program 'Ball'. This program is effectively a numerical apparatus which forms a logical housing for this method and allows data input and output, and also provides testing control. Using this numerical apparatus, tests have been carried out on disc assemblies and many new interesting observations regarding the micromechanical behaviour are revealed. In order to relate the observed microscopic mechanisms of deformation to the flow of the granular system, two separate approaches have been used. Firstly, a constitutive model has been developed which describes the yield function, flow rule and translation rule for regular assemblies of spheres and discs when subjected to coaxial deformation. Secondly, statistical analyses have been carried out using data extracted from the simulation tests.
These analyses define and quantify granular structure and then show how the force and velocity distributions use the structure to produce the corresponding stress and strain-rate tensors.
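The force-to-motion cycle the abstract describes can be sketched in a few lines: an assumed linear contact stiffness turns disc overlap into a contact force, and explicit time stepping turns force into velocity and displacement. This is a deliberately minimal 1-D toy with two unit-mass discs, not the program 'Ball' itself; stiffness, radius, and time step are arbitrary:

```python
# Minimal 1-D distinct-element sketch: two unit-mass discs approach head-on,
# overlap generates a linear spring contact force, and velocities and
# positions are advanced by explicit (semi-implicit Euler) time stepping.
K = 1.0e4        # contact normal stiffness (assumed)
R = 0.5          # disc radius
DT = 1.0e-3      # time step

x = [0.0, 1.2]   # centroid positions
v = [1.0, -1.0]  # centroid velocities (discs approaching)

for _ in range(2000):
    gap = (x[1] - x[0]) - 2 * R          # negative gap means overlap
    f = -K * gap if gap < 0 else 0.0     # repulsive contact force magnitude
    # equal and opposite contact forces; unit masses, so acceleration = force
    v[0] += -f * DT
    v[1] += +f * DT
    x[0] += v[0] * DT
    x[1] += v[1] * DT
```

With an undamped elastic contact and equal masses, the discs should simply exchange velocities during the collision, which provides a quick sanity check on the integration.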