35 results for Two variable oregonator model

in Deakin Research Online - Australia


Relevance:

100.00%

Abstract:

Human actions have been widely studied for their potential application in areas such as sports, pervasive patient monitoring, and rehabilitation. However, challenges persist in determining the most useful ways to describe human actions at the sensor, limb, and complete-action levels of representation, and in deriving the important relations between these levels, each of which involves its own atomic components. In this paper, we report on a motion encoder developed for the sensor level, based on the need to distinguish between the shape of the sensor's trajectory and its temporal characteristics during execution. This distinction is critical because it provides a different encoding scheme from the usual velocity and acceleration measures, which confound these two attributes of any motion. At the same time, we reduce sensor noise by comparing temporal and spatial indexing schemes and a number of optimal filtering models for robust encoding. Results demonstrate the benefits of spatial indexing and of separating the shape and dynamics of a motion, as well as the encoder's ability to decompose complex motions into several atomic ones. Finally, we discuss how this type of sensor encoder bears on the derivation of limb and complete-action descriptions.
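
As a concrete illustration of the shape/dynamics distinction, the sketch below re-indexes a trajectory by arc length so that only its shape remains, while the per-sample speeds are kept as a separate dynamics signal. This is a minimal illustration under our own assumptions, not the authors' encoder; `resample_by_arclength` and the circle example are hypothetical.

```python
# Minimal illustration: spatial (arc-length) indexing recovers a motion's
# shape independently of how fast it was executed, while the speed profile
# is kept as a separate dynamics signal.
import numpy as np

def resample_by_arclength(xy, n_points=100):
    """Resample a 2-D trajectory at equal arc-length steps (spatial indexing)."""
    seg = np.linalg.norm(np.diff(xy, axis=0), axis=1)   # segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])         # cumulative arc length
    s_new = np.linspace(0.0, s[-1], n_points)
    return np.column_stack([np.interp(s_new, s, xy[:, k]) for k in range(2)])

# A circle traced at strongly varying speed: temporal samples bunch up,
# but arc-length resampling returns the same uniformly sampled shape.
t = np.linspace(0.0, 1.0, 200) ** 3                     # non-uniform timing
xy = np.column_stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
shape = resample_by_arclength(xy)                       # shape only
speed = np.linalg.norm(np.diff(xy, axis=0), axis=1)     # dynamics, kept separately
```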

Relevance:

100.00%

Abstract:

The objective of the present work is to identify the correlation between the carbon content of steels and the parameters of the rheological models used to describe material behavior during hot plastic deformation. Such a correlation can be expected in internal variable models, which are based on the physical phenomena occurring in the material. A model of this kind, with dislocation density as the internal variable, is investigated in this work. Experiments, including hot torsion tests, are used for the analysis.
The procedure is composed of three parts. Plastometric tests were performed for steels with various carbon contents. Optimization techniques were then applied to determine the coefficients of the internal variable rheological model for these steels. Two versions of the model are considered: one based on the average dislocation density and a second accounting for the distribution of dislocation densities. The main objective of the work was to evaluate the correlation between carbon content and model coefficients such as the activation energy for self-diffusion, the activation energy for recrystallization, grain boundary mobility, and the recovery coefficient. As a result, a model is proposed that may be used to simulate hot forming processes for steels of various chemical compositions.
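
For orientation, a minimal sketch of the model class in question: an Estrin-Mecking-type evolution law with the average dislocation density as the internal variable, plus a Taylor-type flow-stress relation. All coefficients below are placeholders, not the values identified in the paper.

```python
# Illustrative internal variable model: Estrin-Mecking-type evolution of
# the average dislocation density rho with a Taylor flow-stress relation.
import math

b, G, alpha = 2.5e-10, 4.0e10, 0.5   # Burgers vector [m], shear modulus [Pa], Taylor constant
A1, A2 = 1.0e14, 10.0                # athermal storage and dynamic recovery coefficients
rho, d_eps = 1.0e10, 1.0e-4          # initial density [m^-2], strain increment

strain, stress = [], []
for i in range(5000):                              # integrate up to a strain of 0.5
    rho += (A1 - A2 * rho) * d_eps                 # hardening minus recovery
    strain.append((i + 1) * d_eps)
    stress.append(alpha * G * b * math.sqrt(rho))  # Taylor law: sigma ~ alpha*G*b*sqrt(rho)
# Stress rises and saturates as storage and recovery balance (rho -> A1/A2).
```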

Relevance:

100.00%

Abstract:

Background/Purpose

Hepatocellular carcinoma (HCC) has been the leading cause of cancer death in Taiwan since the 1980s. A two-stage screening intervention was introduced in 1996 and has been implemented in a limited number of hospitals. The present study assessed the costs and health outcomes associated with the introduction of this screening intervention from the perspective of the Taiwanese government. The cost-effectiveness analysis aimed to assist informed decision making by the health authority in Taiwan.
Methods

A two-phase economic model, comprising a 1-year decision analysis and a 60-year Markov simulation, was developed to conceptualize the screening intervention within current practice, and was compared with opportunistic screening alone. Incremental analyses were conducted to compare the incremental costs and outcomes associated with the introduction of the intervention. Sensitivity analyses were performed to investigate the uncertainties surrounding the model.
Results

The Markov model simulation demonstrated an incremental cost-effectiveness ratio (ICER) of NT$498,000 (US$15,600) per life-year saved at a 5% discount rate. Applying utility weights yielded an ICER of NT$402,000 (US$12,600) per quality-adjusted life-year. Sensitivity analysis showed that the excess HCC mortality reduction achieved by screening and the HCC incidence rate had the greatest influence on the ICERs. Scenario analysis also indicated that expanding the HCC screening intervention to focus on regular monitoring of high-risk individuals could achieve a more favorable result.
Conclusion

Screening the population of high-risk individuals for HCC with the two-stage screening intervention in Taiwan is considered potentially cost-effective compared with opportunistic screening in the target population of an HCC endemic area.
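
For readers unfamiliar with the reported ratios, here is a minimal sketch of the incremental calculation behind an ICER, with hypothetical cost and effect streams rather than the study's model outputs.

```python
# Sketch of the incremental cost-effectiveness calculation behind such
# figures; streams and values below are hypothetical, not model outputs.
def discounted(stream, rate=0.05):
    """Present value of a yearly stream at a 5% discount rate (year 0 undiscounted)."""
    return sum(x / (1.0 + rate) ** t for t, x in enumerate(stream))

def icer(cost_new, eff_new, cost_old, eff_old):
    """Incremental cost per unit of incremental effect (e.g., per life-year saved)."""
    return (cost_new - cost_old) / (eff_new - eff_old)

# Hypothetical 60-year cost (NT$) and life-year streams per person screened
cost_i, eff_i = discounted([1200.0] * 60), discounted([0.92] * 60)   # intervention
cost_c, eff_c = discounted([800.0] * 60), discounted([0.90] * 60)    # opportunistic
print(icer(cost_i, eff_i, cost_c, eff_c))   # NT$ per life-year saved
```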

Relevance:

100.00%

Abstract:

Research on Physarum polycephalum shows that methods inspired by this primitive unicellular organism can construct efficient networks and solve complex problems in graph theory. Current models simulating the intelligent behavior of Physarum are mainly based on the Hagen-Poiseuille and Kirchhoff laws, reaction-diffusion, cellular automata, and multi-agent approaches. In this paper, based on the assumption that the plasmodium of Physarum forages for food along the gradient of chemo-attractants on a nutrient-poor substrate, a new model is proposed to imitate its intelligent foraging behavior. The key point of the model is that the growth of Physarum is determined by a simple particle concentration field relating the distance to the food source and the shape of the food source on a nutrient-poor substrate. To verify this model, numerical experiments are conducted following Adamatzky's experiment. Results for spanning tree construction with this model are almost the same as those of Physarum and of the Oregonator model. The proposed model can also imitate Physarum's avoidance of repellents. Furthermore, the Euclidean spanning tree built by this model is similar to the corresponding minimal Euclidean spanning tree.
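
Since the Oregonator model serves as a benchmark above, its standard two-variable scaled form can be integrated directly; the sketch below uses typical textbook parameter values, not those of the cited experiments.

```python
# Two-variable Oregonator in its standard scaled (Tyson-Fife) form:
#   eps * du/dt = u - u^2 - f*v*(u - q)/(u + q),   dv/dt = u - v
# Parameter values are typical textbook choices, not fitted to any data.
eps, q, f = 0.04, 0.002, 1.0   # time-scale ratio, excitability, stoichiometry

def oregonator(u, v):
    du = (u - u * u - f * v * (u - q) / (u + q)) / eps  # fast activator (HBrO2)
    dv = u - v                                          # slow inhibitor (catalyst)
    return du, dv

u, v, dt = 0.5, 0.2, 1e-4
trace = []
for _ in range(200_000):          # forward-Euler integration of one well-mixed cell
    du, dv = oregonator(u, v)
    u, v = u + dt * du, v + dt * dv
    trace.append((u, v))          # relaxation oscillations emerge in (u, v)
```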

Relevance:

100.00%

Abstract:

A two-stage hybrid model for data classification and rule extraction is proposed. The first stage uses a Fuzzy ARTMAP (FAM) classifier with Q-learning (known as QFAM) for incremental learning of data samples, while the second stage uses a Genetic Algorithm (GA) for rule extraction from QFAM. Given a new data sample, the resulting hybrid model, known as QFAM-GA, is able to predict the target class of the sample and to give a fuzzy if-then rule explaining the prediction. To reduce network complexity, a pruning scheme using Q-values is applied to reduce the number of prototypes generated by QFAM. A 'don't care' technique is employed to minimize the number of input features using the GA. A number of benchmark problems are used to evaluate the effectiveness of QFAM-GA in terms of test accuracy, noise tolerance, and model complexity (number of rules and total rule length). The results are comparable with, if not better than, those of many other models reported in the literature. The main significance of this research is a usable and useful intelligent model (i.e., QFAM-GA) for data classification in noisy conditions, with the capability of yielding a set of explanatory rules with minimal antecedents. In addition, QFAM-GA is able to maximize accuracy and minimize model complexity simultaneously. The empirical outcomes positively demonstrate the potential impact of QFAM-GA in practical environments: it provides an accurate prediction together with a concise justification, allowing domain users to adopt QFAM-GA as a useful decision support tool in their decision-making processes.
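
A minimal sketch of the 'don't care' mechanism described above; the names, the prototype, and the fitness trade-off are illustrative assumptions, not the QFAM-GA implementation.

```python
# Illustrative 'don't care' masking for rule extraction: a binary GA
# chromosome switches prototype features on or off; masked-out features
# drop from the if-then rule, shortening its antecedent.
import random

def rule_from(prototype, mask):
    """Antecedent kept by the mask; genes set to 0 are 'don't care'."""
    return [(i, val) for i, (val, keep) in enumerate(zip(prototype, mask)) if keep]

def fitness(mask, prototype, accuracy_fn, penalty=0.01):
    # Reward predictive accuracy, penalise rule length (retained antecedents).
    return accuracy_fn(rule_from(prototype, mask)) - penalty * sum(mask)

prototype = [0.2, 0.8, 0.5, 0.9]      # a pruned QFAM-style prototype (stand-in)
population = [[random.randint(0, 1) for _ in prototype] for _ in range(20)]
# Selection, crossover and mutation over `population` would follow here.
```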

Relevance:

100.00%

Abstract:

Many environmental studies require accurate simulation of water and solute fluxes in the unsaturated zone. This paper evaluates one- and multi-dimensional approaches to soil water flow, as well as different spreading mechanisms for modeling solute behavior at different scales. For quantification of soil water fluxes, the Richards equation has become the standard. Although current numerical codes show perfect water balances, the calculated soil water fluxes under head boundary conditions may depend strongly on the method used for spatial averaging of the hydraulic conductivity. Atmospheric boundary conditions, especially where phreatic groundwater levels fluctuate above and below the soil surface, require sophisticated solutions to ensure convergence. Concepts for flow in soils with macropores and unstable wetting fronts are still in development. One-dimensional flow models are formulated with lumped parameters to account for soil heterogeneity and preferential flow; they can be used at the temporal and spatial scales of interest to water managers and policymakers. Multi-dimensional flow models are hampered by data and computation requirements; their main strength is the detailed analysis of typical multi-dimensional flow problems, including soil heterogeneity and preferential flow. Three physically based solute-transport concepts have been proposed to describe solute spreading during unsaturated flow: the stochastic-convective model (SCM), the convection-dispersion equation (CDE), and the fractional advection-dispersion equation (FADE). A less physical concept is the continuous-time random-walk process (CTRW). Of these, the SCM and the CDE are well established, and their strengths and weaknesses are identified. The FADE and the CTRW are more recent, and only a tentative strengths, weaknesses, opportunities, and threats (SWOT) analysis can be presented at this time. We discuss the effect of the number of dimensions in a numerical model and of the spacing between model nodes on solute spreading and on the values of the solute-spreading parameters. To meet the increasing complexity of environmental problems, two approaches to model combination are used: model integration and model coupling. A main drawback of model integration is the complexity of the resulting code. Model coupling requires a systematic physical domain and model communication analysis. The setup and maintenance of a hydrologic framework for model coupling requires substantial resources, but, on the other hand, contributions can be made by many research groups.
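
For reference, the standard one-dimensional forms of the two best-established concepts named above, in the usual notation (θ: water content, h: pressure head, K(h): hydraulic conductivity, S: a sink term; c: concentration, D: dispersion coefficient, v: pore-water velocity):

```latex
% Richards equation for vertical unsaturated flow:
\frac{\partial \theta}{\partial t}
  = \frac{\partial}{\partial z}\left[ K(h)\left( \frac{\partial h}{\partial z} + 1 \right) \right] - S
% Convection-dispersion equation (CDE) for solute transport:
\frac{\partial c}{\partial t}
  = D\,\frac{\partial^{2} c}{\partial x^{2}} - v\,\frac{\partial c}{\partial x}
```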

Relevance:

100.00%

Abstract:

We analyse the incentives and welfare implications of costly technology adoption in a two-period duopoly model where firms have different amounts of capital. We also extend our framework to an open economy set-up and examine the relationship between trade and technology adoption. Our findings are as follows. First, no monotone relationship exists between the threshold cost of adoption and capital shares. Second, an unequal distribution of capital, despite lessening competition, can increase total surplus. Third, trade generally encourages adoption of modern technology unless the share of capital for the adopters is too low.

Relevance:

100.00%

Abstract:

The latent structure of a 16-item mentoring function instrument was analyzed with the responses of 568 full-time employees. Features of the analyses included (a) assessment of the items' distributional properties (i.e., skewness and kurtosis); (b) assessment of the factor structure using the Satorra-Bentler scaled test statistic; and (c) evaluation of the instrument's invariance across sex. Confirmatory factor analyses using the scaled chi-square supported a two-factor oblique model that consisted of psychosocial and career-related mentoring functions. The invariance tests suggested that the structure was invariant across sex groups.
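
In standard CFA notation (ours, not the instrument's), the supported structure corresponds to a two-factor oblique measurement model:

```latex
x = \Lambda \xi + \delta, \qquad
\Sigma = \Lambda \Phi \Lambda^{\top} + \Theta_{\delta}
```

where x holds the 16 items, Λ their loadings on the psychosocial and career-related factors, and the off-diagonal element of Φ the correlation between the two factors (the 'oblique' part).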

Relevance:

100.00%

Abstract:

This paper compares the credit risk profiles generated by two types of model: the Monte Carlo model used in the existing literature and the Cox, Ingersoll and Ross (CIR) model. Each profile has a concave, hump-backed shape, reflecting the amortisation and diffusion effects; however, the CIR model generates significantly different results. In addition, we consider the sensitivity of these credit risk models to initial interest rates, volatility, maturity, kappa, and delta. The results show that the sensitivities vary across the models, and we explore the meaning of that variation.
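
A minimal simulation sketch of the CIR short-rate dynamics, dr = κ(θ − r)dt + σ√r dW; parameter values are illustrative, not those calibrated in the paper.

```python
# Euler ("full truncation") simulation of the CIR square-root process.
import numpy as np

def simulate_cir(r0=0.05, kappa=0.5, theta=0.05, sigma=0.1,
                 T=10.0, steps=2500, paths=10000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / steps
    r = np.full(paths, r0)
    for _ in range(steps):
        dw = rng.standard_normal(paths) * np.sqrt(dt)
        # Full truncation keeps the square-root diffusion well defined.
        r = r + kappa * (theta - r) * dt + sigma * np.sqrt(np.maximum(r, 0.0)) * dw
    return r

rates = simulate_cir()   # terminal short rates, e.g. for exposure profiling
```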

Relevance:

100.00%

Abstract:

This paper deals with the problem of structuralizing education and training videos for high-level semantics extraction and nonlinear media presentation in e-learning applications. Drawing guidance from production knowledge in instructional media, we propose six main narrative structures employed in education and training videos for both motivation and demonstration during learning and practical training. We devise a powerful audiovisual feature set, accompanied by a hierarchical decision tree-based classification system, to determine and discriminate between these structures. Based on a two-tiered hierarchical model, we demonstrate an accuracy of 84.7% on a comprehensive set of education and training video data.
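
A minimal sketch of a two-tiered, tree-based classification of the kind described; the feature vectors, tier split, and class counts below are stand-ins, not the paper's audiovisual feature set.

```python
# Illustrative two-tier classification: a coarse tree picks the narrative
# group, then a per-group tree discriminates structures within it.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 12))            # stand-in audiovisual feature vectors
coarse = rng.integers(0, 2, 200)     # tier 1: e.g. motivation vs demonstration
fine = rng.integers(0, 3, 200)       # tier 2: structure within each group

tier1 = DecisionTreeClassifier(max_depth=4).fit(X, coarse)
tier2 = {c: DecisionTreeClassifier(max_depth=4).fit(X[coarse == c], fine[coarse == c])
         for c in (0, 1)}

def classify(x):
    c = tier1.predict(x.reshape(1, -1))[0]           # coarse narrative group
    return c, tier2[c].predict(x.reshape(1, -1))[0]  # structure within that group

print(classify(X[0]))
```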

Relevance:

100.00%

Abstract:

The Point Distribution Model (PDM) has been used successfully to represent sets of static and moving images. A recent extension of the PDM to moving objects, the temporal PDM, has been proposed; it uses quantities such as velocity and acceleration to consider more explicitly the characteristics of the movement and the sequencing of the changes in shape that occur. This research compares the two types of model on a series of arm movements and examines the characteristics of both approaches.
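
A minimal sketch of both model types on synthetic landmark data; the five-mode truncation and the finite-difference velocities are illustrative choices, not the study's configuration.

```python
# Illustrative Point Distribution Model: PCA over aligned landmark shapes
# gives x ~ x_mean + P b; a temporal PDM augments each shape vector with
# finite-difference velocities. Data here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
shapes = rng.random((50, 2 * 20))         # 50 shapes, 20 (x, y) landmarks each
x_mean = shapes.mean(axis=0)
U, s, Vt = np.linalg.svd(shapes - x_mean, full_matrices=False)
P = Vt[:5].T                              # first 5 modes of shape variation
b = (shapes[0] - x_mean) @ P              # shape parameters of one example
reconstruction = x_mean + P @ b           # x is approximated by x_mean + P b

# Temporal PDM: append velocities so the model also captures movement dynamics.
velocities = np.diff(shapes, axis=0)      # frame-to-frame landmark velocities
temporal_vectors = np.hstack([shapes[1:], velocities])
```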

Relevance:

100.00%

Abstract:

Background: The Beck Depression Inventory (BDI) is frequently employed as a measure of depression in studies of obesity. The aim of the study was to assess the factorial structure of the BDI in obese patients prior to bariatric surgery.

Methods: Confirmatory factor analysis was conducted on the currently published factor analyses of the BDI. Three published models were initially analysed, with two additional modified models subsequently included. A sample of 285 patients presenting for Lap-Band® surgery was used.

Results: The published bariatric model by Munoz et al. was not an adequate fit to the data. The general model by Shafer et al. was a good fit to the data but had substantial limitations. The weight loss item did not load significantly on any factor in either model. A modified Shafer model and a proposed model were tested, and both were found to be a good fit to the data, with minimal differences between the two. The proposed model, in which two items, weight loss and appetite, were omitted, was suggested to be the better model, with good reliability.

Conclusions: The previously published factor analysis in bariatric candidates by Munoz et al. was a poor fit to the data, and use of this factor structure should be seriously reconsidered within the obese population. The hypothesised model was the best fit to the data. The findings suggest that the existing published models are not adequate for investigating depression in obese patients seeking surgery.

Relevance:

100.00%

Abstract:

This study formulates a two-factor empirical model under the intertemporal CAPM framework to evaluate the cross-sectional implications of socially responsible investments in the US equity market. Our results show that socially responsible investments have no asset pricing impact on the US market. We argue that this 'no financial impact' finding indicates that investors will not be disadvantaged financially by investing in socially responsible funds or corporations.
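
One plausible reading of the two-factor specification (our notation and symbols; the paper's may differ) is a time-series regression of excess returns on the market factor plus an SRI factor:

```latex
R_{i,t} - R_{f,t} = \alpha_i
  + \beta_{i,m}\,(R_{m,t} - R_{f,t})
  + \beta_{i,s}\,\mathit{SRI}_t + \varepsilon_{i,t}
```

The 'no financial impact' finding then corresponds to the SRI factor carrying no cross-sectionally priced premium.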

Relevance:

100.00%

Abstract:

The support vector machine (SVM) is a popular method for classification, well known for finding the maximum-margin hyperplane. Combining the SVM with an l1-norm penalty further enables it to perform feature selection and margin maximization simultaneously within a single framework. However, the l1-norm SVM is unstable in selecting features in the presence of correlated features. We propose a new method to increase the stability of the l1-norm SVM by encouraging similarity between the weights of correlated features, with the correlations captured via a feature covariance matrix. Our proposed method can capture both positive and negative correlations between features. We formulate the model as a convex optimization problem and propose a solution based on alternating minimization. Using both synthetic and real-world datasets, we show that our model achieves better stability and classification accuracy than several state-of-the-art regularized classification methods.
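
One plausible form of the objective described (our notation; the paper's exact penalty may differ): hinge loss plus an l1 term for sparsity and a covariance-driven coupling that pulls the weights of positively correlated features together and those of negatively correlated features toward opposite signs:

```latex
\min_{w,\,b} \; \sum_{i=1}^{n} \max\!\bigl(0,\, 1 - y_i (w^{\top} x_i + b)\bigr)
  + \lambda_1 \lVert w \rVert_1
  + \lambda_2 \sum_{j,k} \lvert C_{jk} \rvert \bigl( w_j - \operatorname{sign}(C_{jk})\, w_k \bigr)^2
```

where C is the feature covariance matrix; an alternating scheme can then handle the non-smooth l1 term and the quadratic coupling separately.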