10 results for Discrete Conditional Phase-type model

in Digital Commons at Florida International University


Relevance:

100.00%

Publisher:

Abstract:

Given the advantages and popularity of Permanent Magnet (PM) motors due to their high power density, there is an increasing incentive to use them in a variety of applications, including electric actuation. These applications have strict noise emission standards. Although the generation of audible noise and the associated vibration modes are characteristic of all electric motors, they are especially problematic in low-speed sensorless rotary actuation applications that use the high-frequency voltage injection technique. This dissertation is aimed at optimizing the sensorless control algorithm for low noise and vibration while achieving at least 12-bit absolute accuracy for speed and position control. The low-speed sensorless algorithm is simulated using an improved Phase Variable Model, developed and implemented in a hardware-in-the-loop prototyping environment. Two experimental testbeds were developed and built to test and verify the algorithm in real time. A neural-network-based modeling approach was used to predict the audible noise due to the high-frequency injected carrier signal. This model was created from noise measurements taken in a purpose-built chamber. The noise model is then integrated into the high-frequency-injection-based sensorless control scheme so that appropriate trade-offs and mitigation techniques can be devised, improving position estimation and control performance while keeping the noise below a specified level. Genetic algorithms were used to incorporate the noise optimization parameters into the developed control algorithm. A novel wavelet-based filtering approach was proposed for the sensorless control algorithm at low speed. This filter can extract the position information at low injection voltages where conventional filters fail, so in practice it can be used to reduce the injected voltage in the sensorless control algorithm, resulting in a significant reduction of noise and vibration. Online optimization of the sensorless position estimation algorithm was performed to reduce vibration and to improve the position estimation performance. The results obtained represent original contributions that can help in choosing optimal parameters for sensorless control algorithms in many practical applications.
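
The wavelet filtering idea lends itself to a short sketch. The fragment below is only illustrative: it builds a synthetic phase current containing a position-modulated high-frequency carrier, isolates the carrier band with a discrete wavelet decomposition (PyWavelets), and demodulates it. The sampling rate, injection frequency, wavelet family, and signal model are assumptions, not the dissertation's actual parameters or algorithm.

```python
# Illustrative sketch: isolate the high-frequency injection component of a
# measured phase current with a discrete wavelet transform, then demodulate
# it to recover a position-dependent envelope. All parameters are assumed.
import numpy as np
import pywt

fs = 20_000          # sampling rate [Hz] (assumed)
f_inj = 1_000        # injected carrier frequency [Hz] (assumed)
t = np.arange(0, 0.1, 1 / fs)
theta = 2 * np.pi * 5 * t                      # slowly varying rotor angle (toy signal)
current = (0.05 * np.sin(2 * np.pi * f_inj * t + 2 * theta)   # saliency-modulated carrier
           + 0.5 * np.sin(2 * np.pi * 50 * t))                # fundamental component

# Multi-level DWT: wavedec returns [cA4, cD4, cD3, cD2, cD1]; at fs = 20 kHz
# the cD4 band spans roughly 625-1250 Hz, so it is assumed to hold the carrier.
coeffs = pywt.wavedec(current, "db8", level=4)
keep = 1
coeffs = [c if i == keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
carrier_component = pywt.waverec(coeffs, "db8")[: len(current)]

# Synchronous demodulation: the low-pass output is proportional to cos(2*theta).
envelope = carrier_component * np.sin(2 * np.pi * f_inj * t)
position_signal = np.convolve(envelope, np.ones(200) / 200, mode="same")
```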

Relevance:

50.00%

Publisher:

Abstract:

The main objective of this work is to develop a quasi-three-dimensional numerical model to simulate stony debris flows, considering a continuum fluid phase, composed of water and fine sediments, and a non-continuum phase consisting of large particles such as pebbles and boulders. The large particles are treated in a Lagrangian frame of reference using the Discrete Element Method, while the fluid phase follows an Eulerian approach, using the Finite Element Method to solve the depth-averaged Navier-Stokes equations in two horizontal dimensions. The particles' equations of motion are three-dimensional. The model simulates particle-particle and wall-particle collisions, taking into account that the particles are immersed in a fluid. Bingham and Cross rheological models are used for the continuum phase. Both formulations provide very stable results, even in the range of very low shear rates, and the Bingham formulation is better able to simulate the stopping stage of the fluid when the applied shear stresses are low. Results of numerical simulations have been compared with data from laboratory experiments on a flume-fan prototype. They show that the model is capable of simulating the motion of large particles moving in the fluid flow, handling dense particulate flows, and avoiding overlap among particles. An application to the debris flow events that occurred in northern Venezuela in 1999 shows that the model can replicate the main boulder accumulation areas surveyed by the USGS. The uniqueness of this research lies in the integration of mud flow and stony debris movement in a single modeling tool that can be used for planning and management of debris-flow-prone areas.
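
For reference, the two rheological laws named for the continuum phase can be written as apparent-viscosity functions of the shear rate. The sketch below uses a common Papanastasiou-style regularisation for the Bingham law so that the viscosity stays bounded at very low shear rates; all parameter values are placeholders rather than the values calibrated in this work.

```python
# Illustrative apparent-viscosity functions for the Bingham and Cross models.
# Parameter values are placeholders, not those used in the study.
import numpy as np

def bingham_viscosity(gamma_dot, tau_y=50.0, mu_p=0.1, m=1000.0):
    """Bingham plastic with Papanastasiou regularisation, so the apparent
    viscosity stays finite as the shear rate tends to zero."""
    gamma_dot = np.asarray(gamma_dot, dtype=float)
    return mu_p + tau_y * (1.0 - np.exp(-m * gamma_dot)) / np.maximum(gamma_dot, 1e-12)

def cross_viscosity(gamma_dot, eta_0=500.0, eta_inf=0.1, k=1.0, n=1.0):
    """Cross model: bounded between eta_0 (zero-shear) and eta_inf (infinite-shear)."""
    gamma_dot = np.asarray(gamma_dot, dtype=float)
    return eta_inf + (eta_0 - eta_inf) / (1.0 + (k * gamma_dot) ** n)

shear_rates = np.logspace(-3, 2, 6)   # 1/s
print(bingham_viscosity(shear_rates))
print(cross_viscosity(shear_rates))
```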

Relevance:

50.00%

Publisher:

Abstract:

The main objective of this work is to develop a quasi-three-dimensional numerical model to simulate stony debris flows, considering a continuum fluid phase, composed of water and fine sediments, and a non-continuum phase consisting of large particles such as pebbles and boulders. The large particles are treated in a Lagrangian frame of reference using the Discrete Element Method, while the fluid phase follows an Eulerian approach, using the Finite Element Method to solve the depth-averaged Navier-Stokes equations in two horizontal dimensions. The particles' equations of motion are three-dimensional. The model simulates particle-particle and wall-particle collisions, taking into account that the particles are immersed in a fluid. Bingham and Cross rheological models are used for the continuum phase. Both formulations provide very stable results, even in the range of very low shear rates, and the Bingham formulation is better able to simulate the stopping stage of the fluid when the applied shear stresses are low. Results of numerical simulations have been compared with data from laboratory experiments on a flume-fan prototype. They show that the model is capable of simulating the motion of large particles moving in the fluid flow, handling dense particulate flows, and avoiding overlap among particles. An application to the debris flow events that occurred in northern Venezuela in 1999 shows that the model can replicate the main boulder accumulation areas surveyed by the USGS. The uniqueness of this research lies in the integration of mud flow and stony debris movement in a single modeling tool that can be used for planning and management of debris-flow-prone areas.

Relevance:

40.00%

Publisher:

Abstract:

Most experiments in particle physics are scattering experiments, the analysis of which leads to masses, scattering phases, decay widths, and other properties of one- or multi-particle systems. Until the advent of Lattice Quantum Chromodynamics (LQCD), it was difficult to compare experimental results on low-energy hadron-hadron scattering processes with the predictions of QCD, the current theory of strong interactions: at low energies the QCD coupling constant becomes large and the perturbation expansion for scattering amplitudes does not converge. To overcome this, one puts the theory onto a lattice, imposes a momentum cutoff, and computes the path integral numerically. For particle masses, predictions of LQCD agree with experiment, but the area of decay widths is largely unexplored. LQCD provides ab initio access to unusual hadrons such as exotic mesons, which are predicted to contain real gluonic structure. To study decays of resonances of this type, the energy spectra of a two-particle decay state in a finite volume of dimension L can be related to the associated scattering phase shift δ(k) at momentum k through exact formulae derived by Lüscher. Because the spectra can be computed using numerical Monte Carlo techniques, the scattering phases can be determined using Lüscher's formulae, and the corresponding decay widths can be found by fitting Breit-Wigner functions. Results of such a decay width calculation for an exotic hybrid (h) meson (J^PC = 1^-+) are presented for the decay channel h → πa1. This calculation employed Lüscher's formulae and an approximation of LQCD called the quenched approximation. Energy spectra for the h and πa1 systems were extracted using eigenvalues of a correlation matrix, and the corresponding scattering phase shifts were determined for a discrete set of πa1 momenta. Although the phase shift data points were sparse, fits to a Breit-Wigner model were made, resulting in a decay width of about 60 MeV.
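
The final step, fitting a Breit-Wigner form to a sparse set of phase shifts, can be sketched as below. The (energy, phase shift) points are synthetic placeholders rather than the lattice results, and the simple non-relativistic Breit-Wigner phase is only one possible choice of fit function.

```python
# Illustrative sketch: fit a Breit-Wigner phase-shift curve to a few
# (energy, phase shift) points to extract a resonance mass and width.
# The data points below are synthetic, not the dissertation's results.
import numpy as np
from scipy.optimize import curve_fit

def breit_wigner_phase(E, E_R, Gamma):
    """Phase shift (radians) for an isolated Breit-Wigner resonance;
    it passes through pi/2 at the resonance energy E_R."""
    return np.arctan2(0.5 * Gamma, E_R - E)

E = np.array([1.8, 1.9, 2.0, 2.1, 2.2])        # energies [GeV], hypothetical
delta = np.array([0.4, 0.8, 1.6, 2.4, 2.8])    # phase shifts [rad], hypothetical

popt, pcov = curve_fit(breit_wigner_phase, E, delta, p0=(2.0, 0.1))
E_R, Gamma = popt
print(f"resonance mass ~ {E_R:.3f} GeV, width ~ {Gamma * 1000:.0f} MeV")
```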

Relevance:

40.00%

Publisher:

Abstract:

Access to healthcare is a major problem: patients are often deprived of timely admission to care. Poor access has resulted in significant but avoidable healthcare costs, poor quality of healthcare, and deterioration in general public health. Advanced Access is a simple and direct approach to appointment scheduling in which the majority of a clinic's appointment slots are kept open in order to provide access for immediate or same-day healthcare needs and thereby alleviate the problem of poor access to healthcare. This research formulates a non-linear discrete stochastic mathematical model of the Advanced Access appointment scheduling policy. The model objective is to maximize the expected profit of the clinic subject to constraints on the minimum access to healthcare provided. Patient behavior is characterized by probabilities for no-shows, balking, and related patient choices. Structural properties of the model are analyzed to determine whether Advanced Access patient scheduling is feasible. To solve the complex combinatorial optimization problem, a heuristic that combines a greedy construction algorithm with a neighborhood improvement search was developed. The model and the heuristic were used to evaluate the Advanced Access appointment policy against existing policies. Trade-offs between profit and access to healthcare are established, and an analysis of the input parameters was performed. The trade-off curve was observed to be concave, which implies that there exists an access level at which the clinic can be operated at the maximum realizable profit. The results also show that, in many scenarios, clinics can improve access without any decrease in profit by switching from their existing scheduling policy to the Advanced Access policy. Further, the success of the Advanced Access policy in providing improved access and/or profit depends on the expected demand, the variation in demand, and the ratio of demand for same-day versus advance appointments. The contributions of the dissertation are a model of Advanced Access patient scheduling, a heuristic to solve the model, and the use of the model to understand the scheduling policy trade-offs that healthcare clinic managers must make.
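
The flavour of the heuristic (greedy construction followed by neighbourhood improvement) can be illustrated on a deliberately simplified version of the problem: choosing how many of a clinic's daily slots to leave open for same-day requests. Every number below (revenue, no-show probability, demand, access requirement) is hypothetical, and the single decision variable is a drastic simplification of the dissertation's combinatorial model.

```python
# Toy sketch: greedy construction plus local improvement over one decision
# variable, the number of slots left open for same-day demand. All figures
# are hypothetical; they do not come from the dissertation.
import random

SLOTS = 20
REVENUE = 100.0            # revenue per seen patient (assumed)
P_NO_SHOW_ADVANCE = 0.2    # pre-booked patients sometimes fail to show (assumed)
SAME_DAY_DEMAND = 9.0      # expected same-day requests per day (assumed)
MIN_OPEN = 6               # minimum open slots required for "access" (assumed)

def expected_profit(open_slots: int) -> float:
    """Expected daily profit for a given number of slots left open."""
    booked = SLOTS - open_slots
    seen_booked = booked * (1.0 - P_NO_SHOW_ADVANCE)
    seen_same_day = min(open_slots, SAME_DAY_DEMAND)
    return REVENUE * (seen_booked + seen_same_day)

# Greedy construction: open slots one at a time while profit keeps improving.
open_slots = MIN_OPEN
while open_slots < SLOTS and expected_profit(open_slots + 1) > expected_profit(open_slots):
    open_slots += 1

# Neighbourhood improvement: random +/-1 moves, accepting feasible improvements.
for _ in range(200):
    candidate = open_slots + random.choice([-1, 1])
    if MIN_OPEN <= candidate <= SLOTS and expected_profit(candidate) > expected_profit(open_slots):
        open_slots = candidate

print(open_slots, expected_profit(open_slots))
```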

Relevance:

40.00%

Publisher:

Abstract:

In essay 1 we develop a new autoregressive conditional process to capture both the changes and the persistence of the intraday seasonal (U-shape) pattern of volatility. Unlike other procedures, this approach allows the intraday volatility pattern to change over time without the filtering process injecting a spurious pattern of noise into the filtered series. We show that prior deterministic filtering procedures are special cases of the autoregressive conditional filtering process presented here. Lagrange multiplier tests show that the stochastic seasonal variance component is statistically significant, and specification tests using the correlogram and cross-spectral analyses confirm the reliability of the autoregressive conditional filtering process. In essay 2 we develop a new methodology to decompose return variance in order to examine the informativeness embedded in the return series. The variance is decomposed into an information-arrival component and a noise component. This decomposition differs from previous studies in that both the informational variance and the noise variance are time-varying, and the covariance between the informational and noise components is no longer restricted to be zero. The resulting measure of price informativeness is defined as the informational variance divided by the total variance of the returns. The noisy rational expectations model predicts that uninformed traders react to price changes more than informed traders, since uninformed traders cannot distinguish between price changes caused by information arrivals and price changes caused by noise. This hypothesis is tested in essay 3 using intraday data with the intraday seasonal volatility component removed, based on the procedure developed in the first essay. The resulting seasonally adjusted variance series is decomposed into components caused by unexpected information arrivals and by noise in order to examine informativeness.
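
The informativeness measure defined above is simple arithmetic once the two variance components are in hand. The sketch below illustrates it on simulated component series with a non-zero covariance between the components; the essays' actual estimation of the time-varying components is not reproduced here.

```python
# Bare-bones numerical illustration of the informativeness ratio:
# informational variance divided by total return variance, allowing the
# information and noise components to covary. Series are simulated stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
info = rng.normal(0.0, 0.02, n)                   # information-arrival component
noise = 0.3 * info + rng.normal(0.0, 0.01, n)     # noise component, correlated with info
returns = info + noise

C = np.cov(info, noise)                            # 2x2 sample covariance matrix
var_info, var_noise, cov = C[0, 0], C[1, 1], C[0, 1]
total_var = var_info + var_noise + 2.0 * cov       # equals returns.var(ddof=1)

informativeness = var_info / total_var
print(f"informativeness = {informativeness:.3f}")
```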

Relevance:

40.00%

Publisher:

Abstract:

This research is based on the premises that teams can be designed to optimize their performance and that appropriate team coordination is a significant factor in team performance. Contingency theory argues that the effectiveness of a team depends on the right fit between the team design factors and the particular job at hand. Organizations therefore need computational tools capable of predicting the performance of different team configurations. This research created an agent-based model of teams called the Team Coordination Model (TCM). The TCM estimates the coordination load and performance of a team based on its composition, its coordination mechanisms, and the job's structural characteristics, and it can be used to determine the team design characteristics that most likely lead the team to optimal performance. The TCM is implemented as an agent-based discrete-event simulation application built using JAVA and the Cybele Pro agent architecture. The model implements the effect of individual team design factors on team processes, but the resulting performance emerges from the behavior of the agents. These team member agents use decision making and both explicit and implicit mechanisms to coordinate the job. Model validation included comparing the TCM's results with statistics from a real team and with the results predicted by the team performance literature. An illustrative 2^(6-1) fractional factorial experimental design demonstrates the application of the simulation model to the design of a team; the results of the ANOVA were used to recommend the combination of levels of the experimental factors that optimizes the completion time for a team that runs sailboat races. The main contribution of this research to the team modeling literature is a model capable of simulating teams working in complex job environments. The TCM implements a stochastic job structure model capable of capturing some of the complexity not captured by current models; in a stochastic job structure, the tasks required to complete the job change during the team's execution of the job. This research proposes three new types of dependencies between tasks required to model a job as a stochastic structure: the conditional sequential, single-conditional sequential, and merge dependencies.
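
The 2^(6-1) design mentioned above is a standard half fraction of six two-level factors. As a sketch of how such a design is generated, the fragment below aliases the sixth factor with the five-way interaction of the others (defining relation I = ABCDEF); the factor names are placeholders, not the TCM design factors.

```python
# Generate a 2^(6-1) fractional factorial design: 32 runs instead of 64,
# with the sixth factor aliased as F = A*B*C*D*E. Factor names are placeholders.
from itertools import product

factors = ["A", "B", "C", "D", "E"]          # five free factors at levels -1/+1
design = []
for levels in product([-1, 1], repeat=5):
    run = dict(zip(factors, levels))
    run["F"] = run["A"] * run["B"] * run["C"] * run["D"] * run["E"]
    design.append(run)

print(len(design), "runs")                   # 32
for run in design[:4]:
    print(run)
```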

Relevance:

40.00%

Publisher:

Abstract:

Background: Type 2 diabetes mellitus (T2DM) is increasingly becoming a major public health problem worldwide. Estimating the future burden of diabetes is instrumental to guide the public health response to the epidemic. This study aims to project the prevalence of T2DM among adults in Syria over the period 2003–2022 by applying a modelling approach to the country's own data.
Methods: Future prevalence of T2DM in Syria was estimated among adults aged 25 years and older for the period 2003–2022 using the IMPACT Diabetes Model (a discrete-state Markov model).
Results: According to our model, the prevalence of T2DM in Syria is projected to double between 2003 and 2022 (from 10% to 21%). The projected increase in T2DM prevalence is higher in men (148%) than in women (93%). The increase in prevalence of T2DM is expected to be most marked in people younger than 55 years, especially the 25–34 years age group.
Conclusions: The future projections of T2DM in Syria put it amongst the countries with the highest levels of T2DM worldwide. It is estimated that by 2022 approximately a fifth of the Syrian population aged 25 years and older will have T2DM.
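
The projection method, a discrete-state Markov model stepped forward year by year, can be sketched as below. The transition probabilities are illustrative assumptions, not the calibrated inputs of the IMPACT Diabetes Model, and the simple three-state structure (alive without T2DM, alive with T2DM, dead) is only a stand-in for the model actually used.

```python
# Toy sketch of a discrete-state Markov prevalence projection. All transition
# probabilities and the starting distribution are illustrative assumptions.
import numpy as np

# States: 0 = alive without T2DM, 1 = alive with T2DM, 2 = dead.
P = np.array([
    [0.980, 0.010, 0.010],   # annual incidence and background mortality (assumed)
    [0.000, 0.980, 0.020],   # T2DM treated as absorbing while alive (assumed)
    [0.000, 0.000, 1.000],
])

state = np.array([0.90, 0.10, 0.00])   # 2003 starting distribution (assumed 10% prevalence)
for year in range(2003, 2023):
    alive = state[0] + state[1]
    prevalence = state[1] / alive
    if year in (2003, 2012, 2022):
        print(year, f"prevalence among the living: {prevalence:.1%}")
    state = state @ P                  # advance the cohort by one year
```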

Relevance:

40.00%

Publisher:

Abstract:

One in five adults 65 years and older has diabetes. Coping with diabetes is a lifelong task, and much of the responsibility for managing the disease falls upon the individual. Reports of non-adherence to recommended treatments are high, so understanding the additive impact of diabetes on quality of life is important. The purpose of this study was to investigate quality of life and diabetes self-management behaviors in ethnically diverse older adults with type 2 diabetes. The SF-12v2 was used to measure physical and mental health quality of life, and scores were compared to general, age sub-group, and diabetes-specific norms. The Transtheoretical Model (TTM) was applied to assess perceived versus actual behavior for three diabetes self-management tasks: dietary management, medication management, and blood glucose self-monitoring. Dietary intake and hemoglobin A1c values were measured as outcome variables. Using a cross-sectional research design, participants were recruited from Elderly Nutrition Program congregate meal sites (n = 148, mean age 75). Results showed that mean scores of the SF-12v2 in the study sample were significantly lower than the general norms for physical health (p < .001) and mental health (p < .01), the age sub-group norms (p < .05), and the diabetes-specific norms for physical health (p < .001). A multiple regression analysis found that adherence to an exercise plan was significantly associated with better physical health (p < .001). Transtheoretical Model multiple regression analyses explained 68% of the variance in % kcal from fat, 41% in fiber, 70% in % kcal from carbohydrate, and 7% in hemoglobin A1c values. Significant associations were found between TTM stage of change and dietary fiber intake (p < .01). Other significant associations related to diet included gender (p < .01), ethnicity (p < .05), employment (p < .05), type of insurance (p < .05), adherence to an exercise plan (p < .05), number of doctor visits per year (p < .01), and physical health (p < .05). Significant associations were found between hemoglobin A1c values and age (p < .05), being non-Hispanic Black (p < .01), income (p < .01), and eye problems (p < .05). The study highlights the beneficial effects of exercise on quality of life. Furthermore, applying the Transtheoretical Model in conjunction with an assessment of dietary intake may be valuable in helping individuals make lifestyle changes.

Relevance:

40.00%

Publisher:

Abstract:

In essay 1 we develop a new autoregressive conditional process to capture both the changes and the persistence of the intraday seasonal (U-shape) pattern of volatility. Unlike other procedures, this approach allows the intraday volatility pattern to change over time without the filtering process injecting a spurious pattern of noise into the filtered series. We show that prior deterministic filtering procedures are special cases of the autoregressive conditional filtering process presented here. Lagrange multiplier tests show that the stochastic seasonal variance component is statistically significant, and specification tests using the correlogram and cross-spectral analyses confirm the reliability of the autoregressive conditional filtering process. In essay 2 we develop a new methodology to decompose return variance in order to examine the informativeness embedded in the return series. The variance is decomposed into an information-arrival component and a noise component. This decomposition differs from previous studies in that both the informational variance and the noise variance are time-varying, and the covariance between the informational and noise components is no longer restricted to be zero. The resulting measure of price informativeness is defined as the informational variance divided by the total variance of the returns. The noisy rational expectations model predicts that uninformed traders react to price changes more than informed traders, since uninformed traders cannot distinguish between price changes caused by information arrivals and price changes caused by noise. This hypothesis is tested in essay 3 using intraday data with the intraday seasonal volatility component removed, based on the procedure developed in the first essay. The resulting seasonally adjusted variance series is decomposed into components caused by unexpected information arrivals and by noise in order to examine informativeness.