956 results for Weibull distribution function


Relevance: 80.00%

Abstract:

Coarse graining is a popular technique used in physics to speed up computer simulations of molecular fluids. An essential part of this technique is a method that solves the inverse problem of determining the interaction potential, or its parameters, from given structural data. Because of discrepancies between model and reality, the potential is not unique, so the stability of such a method and its convergence to a meaningful solution are genuine issues.

In this work, we investigate empirically whether coarse graining can be improved by applying the theory of inverse problems from applied mathematics. In particular, we use singular value analysis to reveal the weak interaction parameters, which have a negligible influence on the structure of the fluid and which cause non-uniqueness of the solution. Further, we apply a regularizing Levenberg-Marquardt method, which is stable against the discrepancies mentioned above. We then compare it to the existing physical methods, the Iterative Boltzmann Inversion and the Inverse Monte Carlo method, which are fast and well adapted to the problem but sometimes have convergence problems.

From an analysis of the Iterative Boltzmann Inversion, we derive a meaningful approximation of the structure and use it to construct a modification of the Levenberg-Marquardt method. We apply the latter to reconstruct the interaction parameters from experimental data for liquid argon and nitrogen, and show that the modified method is stable, convergent and fast. Furthermore, the singular value analysis of the structure and its approximation makes it possible to determine the crucial interaction parameters, that is, to simplify the modeling of interactions. Our results therefore build a rigorous bridge between the inverse problem from physics and powerful solution tools from mathematics.
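As a rough illustration of the regularized Levenberg-Marquardt idea discussed above, the following Python sketch fits two parameters of a toy structural model with a damped Gauss-Newton iteration and then inspects the singular values of the Jacobian to flag weak parameters. The model, the data and all names are illustrative assumptions; the thesis's actual mapping from potential parameters to structure comes from molecular simulation.

```python
import numpy as np

# Toy "structure" model depending on two interaction parameters theta = (eps, sigma).
# This stands in for the mapping from potential parameters to structural data.
def model(theta, r):
    eps, sig = theta
    return eps * np.exp(-r / sig)

def jacobian(theta, r, h=1e-6):
    """Forward-difference Jacobian of the model with respect to theta."""
    J = np.empty((r.size, theta.size))
    f0 = model(theta, r)
    for j in range(theta.size):
        tp = theta.copy(); tp[j] += h
        J[:, j] = (model(tp, r) - f0) / h
    return J

def levenberg_marquardt(theta, r, data, lam=1e-2, n_iter=50):
    """Damped Gauss-Newton (Levenberg-Marquardt) iteration with a simple
    update rule for the damping parameter lam."""
    for _ in range(n_iter):
        res = data - model(theta, r)
        J = jacobian(theta, r)
        A = J.T @ J + lam * np.eye(theta.size)   # damping regularizes weak directions
        step = np.linalg.solve(A, J.T @ res)
        new_theta = theta + step
        if np.sum((data - model(new_theta, r)) ** 2) < np.sum(res ** 2):
            theta, lam = new_theta, lam * 0.7     # accept step, relax damping
        else:
            lam *= 2.0                            # reject step, increase damping
    return theta

rng = np.random.default_rng(0)
r = np.linspace(0.5, 5.0, 100)
data = model(np.array([1.2, 1.5]), r) + 0.01 * rng.standard_normal(r.size)

theta_hat = levenberg_marquardt(np.array([0.5, 0.5]), r, data)
print("estimated parameters:", theta_hat)

# Singular value analysis of the Jacobian: small singular values flag
# parameter combinations with negligible influence on the structure.
_, s, _ = np.linalg.svd(jacobian(theta_hat, r))
print("singular values:", s)
```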

Relevance: 80.00%

Abstract:

The work presented in this thesis concerns the computation of dynamical models for dwarf spheroidal galaxies, studying the problem through the use of distribution functions. We consider a particular class of distribution functions, action-based distribution functions, which are functions of the action variables only. Fornax is described with an appropriate distribution function, and the problem of constructing dynamical models is addressed assuming both a dark matter halo with a constant density distribution in the inner regions and a cusped halo. For simplicity, spherical symmetry is assumed and the gravitational potential of the stellar component is not computed explicitly (the stars are treated as tracers in a fixed gravitational potential). Through a direct comparison with observables such as the projected stellar density profile and the line-of-sight velocity dispersion profile, a set of models representative of the dynamics of Fornax is found. Models computed with action-based distribution functions make it possible to determine anisotropy profiles self-consistently. All the computed models are characterized by strongly tangentially anisotropic velocity distributions. The dark matter estimates of these models are then compared with the most commonly used mass estimators in the literature. The ratio between the total mass of the system (stellar component plus dark matter) and the stellar component of Fornax is also estimated, within 1600 pc and within 3 kpc. As a preliminary exploration, this work also presents some examples of spherical two-component models in which the gravitational field is determined by the self-gravity of the stars and by an external potential representing the dark matter halo.

Relevance: 80.00%

Abstract:

This thesis work was carried out at the Rosenheim University of Applied Sciences in Germany; the research project concerns the reinforcement technique known as "soil nailing", which consists in building a retaining structure for excavation walls or for stabilizing unstable slopes. The main objective of the work is to assess the feasibility of using tubes made of beech wood in place of the steel nails commonly employed; this species was chosen because of its wide availability in Germany. The main stress acting on such tubes is tension parallel to the grain, and experimental tests made it possible to evaluate this strength under the various conditions the tube will experience after installation in the soil. In this respect it should be noted that the investigation of the influence of environmental conditions on the element was carried out on specimens consisting of a single layer of wood; in this way the influence can be assessed directly on the basic element and then extrapolated to the overall behaviour. The data obtained from the experimental campaign were processed using Weibull theory, which is widely used in materials engineering for brittle materials such as wood; the resulting distributions allowed the characteristic strength of the specimens to be determined for each environmental condition of interest. Regarding the feasibility of using wooden tubes in this consolidation technique, the tube was dimensioned using the data obtained from the experimental campaign, and finally the stability checks of the intervention were carried out.
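As a hedged illustration of the Weibull treatment described above, the following Python sketch fits a two-parameter Weibull distribution to a set of strength values and reads off a characteristic (5th-percentile) strength; the specimen data, sample size and percentile choice are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

# Hypothetical tensile strengths (MPa) of single-layer beech specimens for one
# environmental condition; the numbers are invented for illustration.
rng = np.random.default_rng(1)
strengths = stats.weibull_min.rvs(8.0, scale=90.0, size=40, random_state=rng)

# Fit a two-parameter Weibull (location fixed at 0, as is usual for strength data).
shape, loc, scale = stats.weibull_min.fit(strengths, floc=0)

# Characteristic strength: the 5th percentile of the fitted distribution,
# the usual definition of a characteristic value in timber design.
f_k = stats.weibull_min.ppf(0.05, shape, loc=loc, scale=scale)
print(f"shape k = {shape:.2f}, scale = {scale:.1f} MPa, characteristic strength = {f_k:.1f} MPa")
```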

Relevance: 80.00%

Abstract:

Background: The recent development of semi-automated techniques for staining and analyzing flow cytometry samples has presented new challenges. Quality control and quality assessment are critical when developing new high-throughput technologies and their associated information services. Our experience suggests that significant bottlenecks remain in the development of high-throughput flow cytometry methods for data analysis and display. In particular, data quality control and quality assessment are crucial steps in processing and analyzing high-throughput flow cytometry data. Methods: We propose a variety of graphical exploratory data analytic tools for exploring ungated flow cytometry data. We have implemented a number of specialized functions and methods in the Bioconductor package rflowcyt. We demonstrate the use of these approaches by investigating two independent sets of high-throughput flow cytometry data. Results: We found that graphical representations can reveal substantial non-biological differences between samples. Empirical cumulative distribution function plots and summary scatterplots were especially useful in the rapid identification of problems not identified by manual review. Conclusions: Graphical exploratory data analytic tools are a quick and useful means of assessing data quality. We propose that the described visualizations be used as quality assessment tools and, where possible, for quality control.
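The study's actual implementation lives in the R/Bioconductor package rflowcyt; purely as a language-neutral sketch of the ECDF idea, the Python snippet below overlays per-sample ECDFs of one hypothetical channel so that a shifted sample stands out. Channel values and well names are invented for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

def ecdf(values):
    """Return sorted values and their empirical cumulative probabilities."""
    x = np.sort(values)
    return x, np.arange(1, x.size + 1) / x.size

# Hypothetical ungated fluorescence intensities for three wells from one plate;
# in practice these would be read from FCS files.
rng = np.random.default_rng(2)
samples = {
    "well_A01": rng.lognormal(3.0, 0.5, 5000),
    "well_A02": rng.lognormal(3.0, 0.5, 5000),
    "well_A03": rng.lognormal(3.6, 0.5, 5000),  # shifted sample, e.g. a staining problem
}

for name, values in samples.items():
    x, p = ecdf(values)
    plt.step(x, p, where="post", label=name)

plt.xscale("log")
plt.xlabel("fluorescence intensity (arbitrary units)")
plt.ylabel("empirical CDF")
plt.legend()
plt.show()  # a sample whose curve departs from the others is flagged for review
```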

Relevance: 80.00%

Abstract:

AIM: The purpose of this randomized split-mouth clinical trial was to determine the active tactile sensibility between single-tooth implants and opposing natural teeth and to compare it with the tactile sensibility of pairs of natural teeth on the contralateral side of the same mouth (intra-individual comparison). MATERIAL AND METHODS: The hypothesis was that the active tactile sensibilities of the implant side and the control side are equivalent. Sixty-two subjects (n=36 from Bonn, n=26 from Bern) with single-tooth implants (22 anterior and 40 posterior dental implants) were asked to bite on narrow copper foil strips of varying thickness (5-200 µm) and to decide whether or not they could identify a foreign body between their teeth. Active tactile sensibility was defined as the 50% threshold of correct answers, estimated by means of the Weibull distribution. RESULTS: The interocclusal perception thresholds differed between subjects far more than they differed between natural teeth and implants in the same individual [implant/natural tooth: 16.7±11.3 µm (0.6-53.1 µm); natural tooth/natural tooth: 14.3±10.6 µm (0.5-68.2 µm)]. The intra-individual differences amounted to a mean value of only 2.4±9.4 µm (-15.1 to 27.5 µm). Our statistical calculations showed that the active tactile sensibility of single-tooth implants, in both the anterior and the posterior region of the mouth, in combination with a natural opposing tooth is similar to that of pairs of opposing natural teeth (double t-test, equivalence margin: ±8 µm, P<0.001, power >80%). Hence, the implants can be regarded as integrated into the stomatognathic control circuit.
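The abstract defines the threshold as the 50% point of a Weibull fit to the correct-answer rates; the Python sketch below fits such a psychometric curve to invented per-thickness proportions and reads off the 50% threshold. The data, starting values and exact function form are assumptions, not the trial's analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Weibull psychometric function: probability of detecting a foil of thickness t (µm).
def weibull_p(t, scale, shape):
    return 1.0 - np.exp(-(t / scale) ** shape)

# Hypothetical proportions of correct "foil present" answers per foil thickness
# for one implant/natural-tooth pair (the real data are per-subject trial outcomes).
thickness = np.array([5, 10, 20, 40, 80, 200], dtype=float)   # µm
p_correct = np.array([0.10, 0.25, 0.55, 0.85, 0.95, 1.00])

(scale, shape), _ = curve_fit(weibull_p, thickness, p_correct, p0=[20.0, 1.5])

# 50% threshold of the fitted Weibull curve, i.e. the active tactile sensibility.
t50 = scale * np.log(2.0) ** (1.0 / shape)
print(f"50% detection threshold ≈ {t50:.1f} µm")
```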

Relevance: 80.00%

Abstract:

Despite the widespread popularity of linear models for correlated outcomes (e.g. linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. The resulting "rotated" residuals are used to construct an empirical cumulative distribution function and pointwise standard errors. The theoretical framework, including conditions and asymptotic properties, involves technical details that are motivated by Lange and Ryan (1989), Pierce (1982), and Randles (1982). Our method appears to work well in a variety of circumstances, including models having independent units of sampling (clustered data) and models in which all observations are correlated (e.g., a single time series). Our methods can produce satisfactory results even for models that do not satisfy all of the technical conditions stated in our theory.
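A minimal Python sketch of the rotation step described above: the fitted mean and covariance are stand-ins generated on the spot (in practice they would come from the fitted model), and the rotated residuals are compared with a standard normal ECDF. It illustrates the mechanics only, not the paper's asymptotic theory or standard-error bands.

```python
import numpy as np
from scipy.stats import norm

# Stand-ins for model output: a vector of correlated outcomes y with fitted
# marginal mean mu_hat and fitted marginal covariance V_hat (in practice these
# come from a linear mixed model or time series model).
rng = np.random.default_rng(3)
n = 200
lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
V_hat = 0.5 * np.eye(n) + 0.5 * np.exp(-lags / 10.0)
y = rng.multivariate_normal(np.zeros(n), V_hat)
mu_hat = np.zeros(n)

# Rotate the marginal residuals with the Cholesky factor of V_hat^{-1}; under a
# correctly specified model the rotated residuals are uncorrelated with unit variance.
C = np.linalg.cholesky(np.linalg.inv(V_hat))
r = C.T @ (y - mu_hat)

# Compare the ECDF of the rotated residuals with the standard normal CDF.
r_sorted = np.sort(r)
ecdf = np.arange(1, n + 1) / n
print("largest ECDF deviation from N(0,1):",
      round(np.max(np.abs(ecdf - norm.cdf(r_sorted))), 3))
```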

Relevance: 80.00%

Abstract:

Marshall's (1970) lemma is an analytical result which implies root-n-consistency of the distribution function corresponding to the Grenander (1956) estimator of a non-decreasing probability density. The present paper derives analogous results for the setting of convex densities on [0, ∞).

Relevance: 80.00%

Abstract:

The problem of re-sampling spatially distributed data organized on regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represent, the gridding algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set onto a user-requested grid according to a distribution function. The distribution function can be determined from the given data by interpolation methods. In general, accurate interpolation of heavily fluctuating data with respect to multiple boundary conditions requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding algorithm presented in this work was developed to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve is used to approximate the integrated data set. A single parameter is introduced, by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding algorithms based on linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to reduce these interpolation errors significantly. The accuracy of the new algorithm was tested on a series of X-ray CT images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
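This is not the paper's parametrized Hermitian scheme, but a Python sketch of the same underlying idea under simplified assumptions: interpolate the integrated (cumulative) data with a shape-preserving cubic Hermite interpolant, then difference on the new grid, so the integral is conserved and negative bins are avoided for non-negative input. The bin edges and values are invented.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def rebin_conservative(edges_in, values_in, edges_out):
    """Re-bin histogrammed data onto new bin edges while conserving the integral.

    The cumulative integral of the input histogram is interpolated with a
    shape-preserving (monotone) cubic Hermite interpolant and then differenced
    on the output edges, so the total is preserved and no negative bins appear
    for non-negative input.
    """
    widths = np.diff(edges_in)
    cumulative = np.concatenate(([0.0], np.cumsum(values_in * widths)))
    interp = PchipInterpolator(edges_in, cumulative)
    new_cumulative = interp(np.clip(edges_out, edges_in[0], edges_in[-1]))
    return np.diff(new_cumulative) / np.diff(edges_out)

# Example: re-bin a coarse, strongly varying histogram onto a finer grid.
edges_in = np.linspace(0.0, 10.0, 11)
values_in = np.array([0.1, 0.2, 5.0, 9.0, 4.0, 0.5, 0.2, 0.1, 0.05, 0.0])
edges_out = np.linspace(0.0, 10.0, 51)
values_out = rebin_conservative(edges_in, values_in, edges_out)

# The integral is conserved up to floating-point error.
print(np.sum(values_in * np.diff(edges_in)), np.sum(values_out * np.diff(edges_out)))
```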

Relevance: 80.00%

Abstract:

A new approach, the four-window technique, was developed to perform optical phase-space-time-frequency tomography (OPSTFT). The four-window technique is based on balanced heterodyne detection with two local oscillator (LO) fields. This technique can provide independent control of position, momentum, time and frequency resolution. The OPSTFT is a Wigner distribution function of two independent Fourier transform pairs, phase-space and time-frequency. The OPSTFT can be applied to early disease detection.

Relevance: 80.00%

Abstract:

We present studies of the spatial clustering of inertial particles embedded in turbulent flow. A major part of the thesis is experimental, involving the technique of Phase Doppler Interferometry (PDI). The thesis also includes a significant amount of simulation work and some theoretical considerations. We describe the details of PDI and explain why it is suitable for the study of particle clustering in turbulent flow with a strong mean velocity. We introduce the radial distribution function (RDF) as our chosen way of quantifying inertial particle clustering and present some original work on foundational and practical considerations related to it. These include methods of treating finite sampling size, interpretation of the magnitude of the RDF, and the possibility of isolating the RDF signature of inertial clustering from that of large-scale mixing. In the experimental work, we used PDI to observe clustering of water droplets in a turbulent wind tunnel. From that we present, in the form of a published paper, evidence of dynamical similarity (Stokes number similarity) of inertial particle clustering, together with other results in qualitative agreement with available theoretical predictions and simulation results. We next show detailed quantitative comparisons of results from our experiments, direct numerical simulation (DNS) and theory. Very promising agreement was found for like-sized (monodisperse) particles. Theory is found to be incorrect regarding clustering of different-sized particles, and we propose an empirical correction based on the DNS and experimental results. Besides this, we also discovered a few interesting characteristics of inertial clustering. Firstly, through observations, we found an intriguing possibility for modeling the RDF arising from inertial clustering with only one (sensitive) parameter. We also found that clustering becomes saturated at high Reynolds number.
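As an illustration of how an RDF is estimated from measured particle positions, here is a Python sketch for a periodic cubic box; the positions, box size and binning are made up, and the sketch ignores the finite-sampling and large-scale-mixing corrections discussed in the thesis.

```python
import numpy as np

def radial_distribution_function(positions, box_size, bin_edges):
    """Estimate the RDF g(r) for particles in a cubic periodic box.

    g(r) compares the observed number of particle pairs at separation r with
    the number expected for a uniform (unclustered) distribution; g(r) > 1
    indicates clustering at that scale.
    """
    n = positions.shape[0]
    # All pairwise separations with the minimum-image convention.
    diff = positions[:, None, :] - positions[None, :, :]
    diff -= box_size * np.round(diff / box_size)
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    dist = dist[np.triu_indices(n, k=1)]             # unique pairs only

    counts, _ = np.histogram(dist, bins=bin_edges)
    shell_volumes = 4.0 / 3.0 * np.pi * np.diff(bin_edges ** 3)
    pair_density = n * (n - 1) / 2.0 / box_size ** 3
    expected = pair_density * shell_volumes          # uniform-case pair counts
    return counts / expected

# Hypothetical particle positions (in practice: 3D positions from PDI or holography).
rng = np.random.default_rng(4)
positions = rng.uniform(0.0, 0.1, size=(1000, 3))    # metres; uniform -> g(r) ~ 1
bin_edges = np.linspace(1e-3, 2e-2, 20)
g = radial_distribution_function(positions, 0.1, bin_edges)
print(np.round(g, 2))
```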

Relevance: 80.00%

Abstract:

Hall-effect thrusters (HETs) are compact electric propulsion devices with high specific impulse used for a variety of space propulsion applications. HET technology is well developed, but the electron properties in the discharge are not completely understood, mainly because of the difficulty of performing accurate measurements in the discharge. Measurements of electron temperature and density have been performed using electrostatic probes, but the presence of the probes can significantly disrupt thruster operation and thus alter the electron temperature and density. While fast-probe studies have expanded understanding of HET discharges, a non-invasive method of measuring the electron temperature and density in the plasma is highly desirable. An alternative to electrostatic probes is a non-perturbing laser diagnostic technique that measures Thomson scattering from the plasma. Thomson scattering is the process by which photons are elastically scattered from the free electrons in a plasma. Since the electrons have thermal energy, their motion causes a Doppler shift in the scattered photons that is proportional to their velocity. Like electrostatic probes, laser Thomson scattering (LTS) can be used to determine the temperature and density of free electrons in the plasma. Since Thomson scattering measures the electron velocity distribution function directly, no assumptions about the plasma conditions are required, allowing accurate measurements in anisotropic and non-Maxwellian plasmas. LTS requires a complicated measurement apparatus, but has the potential to provide accurate, non-perturbing measurements of electron temperature and density in HET discharges. In order to assess the feasibility of LTS diagnostics on HETs, non-invasive measurements of electron temperature and density in the near-field plume of a Hall thruster were performed using a custom-built laser Thomson scattering diagnostic. Laser measurements were processed using a maximum likelihood estimation method, and the results were compared to conventional electrostatic double-probe measurements performed at the same thruster conditions. The electron temperature was found to range from approximately 1 to 40 eV and the density from approximately 1.0 x 10^17 m^-3 to 1.3 x 10^18 m^-3 over discharge voltages from 250 to 450 V and mass flow rates of 40 to 80 SCCM using xenon propellant.

Relevance: 80.00%

Abstract:

It has been proposed that inertial clustering may lead to an increased collision rate of water droplets in clouds. Atmospheric clouds and electrosprays contain electrically charged particles embedded in turbulent flows, often under the influence of an externally imposed, approximately uniform gravitational or electric force. In this thesis, we present an investigation of charged inertial particles embedded in turbulence. We have developed a theoretical description for the dynamics of such systems of charged, sedimenting particles in turbulence, allowing radial distribution functions to be predicted for both monodisperse and bidisperse particle size distributions. The governing parameters are the particle Stokes number (the particle inertial time scale relative to the turbulence dissipation time scale), the Coulomb-turbulence parameter (the ratio of the Coulomb terminal speed to the turbulence dissipation velocity scale), and the settling parameter (the ratio of the gravitational terminal speed to the turbulence dissipation velocity scale). For monodisperse particles, the peak in the radial distribution function is well predicted by the balance between the particle terminal velocity under Coulomb repulsion and a time-averaged 'drift' velocity obtained from the nonuniform sampling of fluid strain and rotation due to finite particle inertia. The theory is compared to measured radial distribution functions for water particles in homogeneous, isotropic air turbulence. The radial distribution functions are obtained from particle positions measured in three dimensions using digital holography. The measurements support the general theoretical expression, consisting of a power-law increase in particle clustering due to particle response to dissipative turbulent eddies, modulated by an exponential electrostatic interaction term. Both terms are modified as a result of the gravitational diffusion-like term, and the role of 'gravity' is explored by imposing a macroscopic uniform electric field to create an enhanced, effective gravity. The relation between the radial distribution functions and the inward mean radial relative velocity is established for charged particles.

Relevance: 80.00%

Abstract:

Civil infrastructure provides essential services for the development of both society and the economy. It is very important to manage systems efficiently to ensure sound performance. However, there are challenges in extracting information from the available data, which also necessitates the establishment of methodologies and frameworks to assist stakeholders in the decision-making process. This research proposes methodologies to evaluate systems performance by maximizing the use of available information, in an effort to build and maintain sustainable systems. Under the guidance of the holistic problem formulation proposed by Mukherjee and Muga, this research specifically investigates problem-solving methods that measure and analyze metrics to support decision making. Failures are inevitable in system management. A methodology is developed to describe the arrival pattern of failures in order to assist engineers in failure rescues and budget prioritization, especially when funding is limited. It reveals that blockage arrivals are not totally random; smaller meaningful subsets show good random behavior. The failure rate over time is additionally analyzed by applying existing reliability models and non-parametric approaches, and a scheme is further proposed to depict failure rates over the lifetime of a given facility system. Further analysis of sub-data sets is also performed, with a discussion of context reduction. Infrastructure condition is another important indicator of systems performance. The challenges in predicting facility condition are the estimation of transition probabilities and model sensitivity analysis. Methods are proposed to estimate transition probabilities by investigating the long-term behavior of the model and the relationship between transition rates and probabilities. To integrate heterogeneities, a model sensitivity analysis is performed for the application of a non-homogeneous Markov chain model. Scenarios are investigated by assuming that the transition probabilities follow a Weibull-regressed function or fall within an interval estimate. For each scenario, multiple cases are simulated using Monte Carlo simulation. Results show that variations in the outputs are sensitive to the probability regression, whereas for the interval estimate the outputs show variations similar to those of the inputs. A life cycle cost analysis and a life cycle assessment of a sewer system are performed comparing three pipe types: reinforced concrete pipe (RCP), non-reinforced concrete pipe (NRCP), and vitrified clay pipe (VCP). The life cycle cost analysis covers the material extraction, construction and rehabilitation phases; in the rehabilitation phase, a Markov chain model is applied to support the rehabilitation strategy. In the life cycle assessment, the Economic Input-Output Life Cycle Assessment (EIO-LCA) tools are used to estimate environmental emissions for all three phases. Emissions are then compared quantitatively among the alternatives to support decision making.
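A hedged Python sketch of the kind of Monte Carlo exercise described above: condition states deteriorate according to a discrete-time hazard derived from an assumed Weibull function of facility age, and many runs are averaged. The number of states, the Weibull parameters and the one-state-per-year drop rule are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def weibull_drop_probability(age, scale=40.0, shape=2.5):
    """Assumed Weibull-regressed probability of dropping one condition state
    during the year at a given facility age (a discrete-time hazard)."""
    cum_hazard = lambda t: (t / scale) ** shape
    return 1.0 - np.exp(-(cum_hazard(age + 1.0) - cum_hazard(age)))

def simulate_condition(horizon=75, n_states=5, n_runs=10_000):
    """Monte Carlo simulation of condition state (n_states = best, 1 = worst)."""
    states = np.full(n_runs, n_states)
    mean_path = np.empty(horizon)
    for age in range(horizon):
        p = weibull_drop_probability(age)
        drop = (rng.random(n_runs) < p) & (states > 1)   # cannot drop below worst state
        states = states - drop.astype(int)
        mean_path[age] = states.mean()
    return mean_path

path = simulate_condition()
print("mean condition at years 10, 25, 50:", path[9], path[24], path[49])
```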

Relevance: 80.00%

Abstract:

Truncated distributions of the exponential family play an important role in simulation models. This paper discusses the truncated Weibull distribution specifically. The parameters of the truncated distribution are estimated by the maximum likelihood method, either on its own or combined with expressions for the expectation and variance. After the distribution has been fitted, goodness-of-fit tests (the chi-square test and the Kolmogorov-Smirnov test) are carried out to rule out rejected hypotheses. Finally, the distributions are integrated into various simulation models, e.g. a shipment consolidation model, to compare the influence of the truncated and original versions of the Weibull distribution on the model.
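A minimal Python sketch of fitting a right-truncated Weibull by maximum likelihood and checking the fit with a Kolmogorov-Smirnov test; the truncation point, starting values and data are assumptions, and the KS p-value is only approximate because the parameters are estimated from the same data.

```python
import numpy as np
from scipy import stats, optimize

T = 10.0  # assumed upper truncation point

def trunc_weibull_cdf(x, shape, scale):
    """CDF of a Weibull distribution right-truncated at T."""
    return stats.weibull_min.cdf(x, shape, scale=scale) / stats.weibull_min.cdf(T, shape, scale=scale)

def neg_log_likelihood(params, data):
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    log_pdf = stats.weibull_min.logpdf(data, shape, scale=scale)
    log_norm = np.log(stats.weibull_min.cdf(T, shape, scale=scale))
    return -np.sum(log_pdf - log_norm)

# Hypothetical observations already limited to (0, T), e.g. shipment weights.
rng = np.random.default_rng(6)
raw = stats.weibull_min.rvs(1.8, scale=6.0, size=5000, random_state=rng)
data = raw[raw < T]

res = optimize.minimize(neg_log_likelihood, x0=[1.0, 5.0], args=(data,),
                        method="Nelder-Mead")
shape_hat, scale_hat = res.x

# Kolmogorov-Smirnov test against the fitted truncated CDF (p-value approximate,
# since the parameters were estimated from the same data).
ks = stats.kstest(data, lambda x: trunc_weibull_cdf(x, shape_hat, scale_hat))
print(f"shape={shape_hat:.2f}, scale={scale_hat:.2f}, KS p-value={ks.pvalue:.3f}")
```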

Relevance: 80.00%

Abstract:

Fossil pollen data from stratigraphic cores are irregularly spaced in time because of non-linear age-depth relations. Moreover, their marginal distributions may vary over time. We address these features in a nonparametric regression model with errors that are monotone transformations of a latent continuous-time Gaussian process Z(T). Although Z(T) is unobserved, due to monotonicity and under suitable regularity conditions it can be recovered, facilitating further computations such as estimation of the long-memory parameter and the Hermite coefficients. The estimation of Z(T) itself involves estimation of the marginal distribution function of the regression errors. These issues are considered in proposing a plug-in algorithm for optimal bandwidth selection and the construction of confidence bands for the trend function. Some high-resolution time series of pollen records from Lago di Origlio in Switzerland, which go back about 20,000 years, are used to illustrate the methods.