930 results for pattern-mixture model
Abstract:
Interaction engineering is fundamental for agent-based systems. In this paper we present a design pattern for the core of a multi-agent platform - the message communication and behavior activation mechanisms - using language features of C#. An agent platform is developed based on the pattern structure, legitimated through experience of using JADE in real applications. Results of the communication model are compared against the popular JADE platform.
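The abstract gives no code, but the dispatch core it describes can be sketched. The sketch below, in Python rather than the paper's C#, pairs a message queue with template-matched behaviour activation; the names (Agent, Behaviour, EchoBehaviour) are hypothetical stand-ins echoing JADE's vocabulary, not the paper's actual design.

```python
import queue

class Behaviour:
    """A unit of agent activity, activated when a matching message arrives."""
    def matches(self, msg):
        return True
    def action(self, agent, msg):
        raise NotImplementedError

class EchoBehaviour(Behaviour):
    def matches(self, msg):
        return msg.get("performative") == "request"
    def action(self, agent, msg):
        print(f"{agent.name} handles request: {msg['content']}")

class Agent:
    """Owns a message queue and dispatches messages to registered behaviours."""
    def __init__(self, name):
        self.name = name
        self.inbox = queue.Queue()
        self.behaviours = []
    def add_behaviour(self, b):
        self.behaviours.append(b)
    def post(self, msg):
        self.inbox.put(msg)
    def step(self):
        # Activate every behaviour whose template matches the next message.
        msg = self.inbox.get_nowait()
        for b in self.behaviours:
            if b.matches(msg):
                b.action(self, msg)

a = Agent("agent-1")
a.add_behaviour(EchoBehaviour())
a.post({"performative": "request", "content": "ping"})
a.step()
```

In a C# realization, language features such as delegates and events would plausibly replace the explicit dispatch loop shown here.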
Abstract:
In nonlinear and stochastic control problems, learning an efficient feed-forward controller is not amenable to conventional neurocontrol methods. For these approaches, estimating and then incorporating uncertainty in the controller and feed-forward models can produce more robust control results. Here, we introduce a novel inversion-based neurocontroller for solving control problems involving uncertain nonlinear systems that can also compensate for multi-valued systems. The approach uses recent developments in neural networks, especially in the context of modelling statistical distributions, which are applied to forward and inverse plant models. Provided that certain conditions are met, an estimate of the intrinsic uncertainty in the outputs of a neural network can be obtained from the statistical properties of the network. More generally, multicomponent distributions can be modelled by a mixture density network. Based on importance sampling from these distributions, a novel robust inverse control approach is obtained. This importance sampling provides a structured and principled approach to constraining the complexity of the search space for the ideal control law. The developed methodology circumvents the dynamic programming problem by using the predicted neural network uncertainty to localise the possible control solutions to consider. A nonlinear multi-variable system with different delays between the input-output pairs is used to demonstrate the successful application of the developed control algorithm. The proposed method is suitable for redundant control systems and allows us to model strongly non-Gaussian distributions of the control signal as well as processes with hysteresis. © 2004 Elsevier Ltd. All rights reserved.
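To make the sampling idea concrete, here is a minimal sketch assuming the inverse model is a Gaussian mixture density network and the plant is the toy multi-valued map y = u^2; the inverse_mdn and plant functions are placeholders for the trained networks, and the noise scale in the importance weights is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mixture-density inverse model: for a target output y*, it
# returns mixing coefficients, means, and std devs over the control u.
def inverse_mdn(y_target):
    pi = np.array([0.5, 0.5])                               # mixing coefficients
    mu = np.array([np.sqrt(y_target), -np.sqrt(y_target)])  # multi-valued inverse
    sigma = np.array([0.05, 0.05])
    return pi, mu, sigma

def plant(u):
    return u ** 2                                           # toy multi-valued plant

def sample_controls(y_target, n=500):
    pi, mu, sigma = inverse_mdn(y_target)
    comp = rng.choice(len(pi), size=n, p=pi)
    return rng.normal(mu[comp], sigma[comp])

# Importance weights: prefer candidate controls whose predicted output is
# close to the target (squared-error likelihood under assumed output noise).
y_star = 2.0
u = sample_controls(y_star)
w = np.exp(-0.5 * ((plant(u) - y_star) / 0.1) ** 2)
u_best = u[np.argmax(w)]
print(f"selected control {u_best:.3f}, achieved output {plant(u_best):.3f}")
```

Because candidates are drawn only from the mixture components, the search for a control law stays localized to the plausible inverse solutions, which is the point the abstract makes about constraining the search space.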
Abstract:
Todor P. Cholakov, Dimitar Y. Birov - This article presents a comprehensive model for the automated reengineering of legacy systems. It describes in detail the processes of software translation and refactoring and the extent to which these processes can be automated. With respect to code translation, a model is presented for the automated translation of code containing pointers and address arithmetic. A framework for the reengineering process is also defined, and opportunities are outlined for the further development of concepts, tools, and algorithms.
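The article's translation model is not reproduced in the abstract; purely as an illustration of the kind of rewrite such a translator performs, the sketch below converts C-style pointer dereferences of the form *(a + i) into index expressions a[i]. A real tool would operate on an abstract syntax tree rather than on regular expressions.

```python
import re

# Illustrative rewrite rule: *(ptr + offset) -> ptr[offset].
PTR_DEREF = re.compile(r"\*\(\s*(\w+)\s*\+\s*(\w+)\s*\)")

def rewrite_pointer_arithmetic(src: str) -> str:
    return PTR_DEREF.sub(r"\1[\2]", src)

print(rewrite_pointer_arithmetic("sum += *(a + i) + *(b + j);"))
# -> sum += a[i] + b[j];
```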
Abstract:
Graph-based representations have been used with considerable success in computer vision for the abstraction and recognition of object shape and scene structure. Despite this, the methodology available for learning structural representations from sets of training examples is relatively limited. In this paper we take a simple yet effective Bayesian approach to attributed graph learning. We present a naïve node-observation model, in which we make the important assumption that the observation of each node and each edge is independent of the others; we then propose an EM-like approach to learn a mixture of these models and a Minimum Message Length criterion for component selection. Moreover, to avoid the bias that could arise from a single estimate of the node correspondences, we estimate the sampling probability over all possible matches. Finally we show the utility of the proposed approach on popular computer vision tasks such as 2D and 3D shape recognition. © 2011 Springer-Verlag.
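As a minimal sketch of the EM mixture step, assume node correspondences are known and each component is a product of independent Bernoulli edge observations; the data generator, component count, and iteration budget below are illustrative, and the Minimum Message Length term for selecting the number of components is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: graphs over 4 nodes encoded as flattened 0/1 edge vectors, with
# node correspondences assumed known (the paper samples over matches instead).
def sample_graphs(p_edge, n):
    return (rng.random((n, p_edge.size)) < p_edge).astype(float)

X = np.vstack([sample_graphs(np.array([0.9, 0.9, 0.1, 0.1, 0.1, 0.1]), 50),
               sample_graphs(np.array([0.1, 0.1, 0.1, 0.9, 0.9, 0.9]), 50)])

K, eps = 2, 1e-6
pi = np.full(K, 1.0 / K)                                  # mixing proportions
theta = rng.uniform(0.3, 0.7, size=(K, X.shape[1]))       # edge probabilities

for _ in range(50):
    # E-step: responsibility of each component for each observed graph.
    log_lik = X @ np.log(theta + eps).T + (1 - X) @ np.log(1 - theta + eps).T
    log_post = np.log(pi + eps) + log_lik
    log_post -= log_post.max(axis=1, keepdims=True)
    r = np.exp(log_post)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixing proportions and edge probabilities.
    pi = r.mean(axis=0)
    theta = (r.T @ X) / (r.sum(axis=0)[:, None] + eps)

print(np.round(theta, 2))   # recovers the two generating edge-probability profiles
```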
Detecting Precipitation Climate Changes: An Approach Based on a Stochastic Daily Precipitation Model
Abstract:
2002 Mathematics Subject Classification: 62M10.
Abstract:
Cigarette smoke is a complex mixture of more than 4000 hazardous chemicals, including the carcinogenic benzopyrenes. Nicotine, the most potent component of tobacco, is responsible for the addictive nature of cigarettes and is a major component of e-cigarette cartridges. Our study aims to investigate the toxicity of nicotine with special emphasis on the replacement of animals. Furthermore, we intend to study the effect of nicotine, cigarette smoke and e-cigarette vapours on human airways. In our current work, the BEAS-2B human bronchial epithelial cell line was used to analyse the effect of nicotine in isolation on cell viability. Concentrations of nicotine from 1.1 µM to 75 µM were added to 5x10^5 cells per well in a 96-well plate and incubated for 24 hours. CellTiter-Blue results showed that all the nicotine-treated cells were more metabolically active than the control wells (cells alone). These data indicate that, under these conditions, nicotine does not affect cell viability and, in fact, suggest that nicotine has a stimulatory effect on metabolism. We are now furthering this finding by investigating the pro-inflammatory response of these cells to nicotine by measuring cytokine secretion via ELISA. Further work includes analysing nicotine exposure at different time points and in other epithelial cell lines such as Calu-3.
Abstract:
The paper presents the simulation of the condensation of pyrolysis vapors using an Eulerian approach. The condensable volatiles produced by the fast pyrolysis of biomass in a 100 g/h bubbling fluidized bed reactor are condensed in a water-cooled condenser. The vapors enter the condenser at 500 °C, and the water temperature is 15 °C. The properties of the vapor phase are calculated according to the mole fractions of its individual compounds. The saturated vapor pressure is calculated for the vapor mixture using a corresponding-states correlation and assuming that the mixture of condensable compounds behaves as a pure fluid. Fluent 6.3 has been used as the simulation platform, while the condensation model has been incorporated into the main code through an external user-defined function. © 2011 American Chemical Society.
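The abstract does not state which corresponding-states correlation was used; one common choice consistent with the description is the Lee-Kesler form applied to a pseudo-pure fluid, with mixture pseudocritical properties from mole-fraction averaging (Kay's rule). The compound properties below are placeholders, not the paper's pyrolysis vapor data.

```python
import numpy as np

def lee_kesler_psat(T, Tc, Pc, omega):
    """Lee-Kesler corresponding-states saturated vapor pressure (Pa)."""
    Tr = T / Tc
    f0 = 5.92714 - 6.09648 / Tr - 1.28862 * np.log(Tr) + 0.169347 * Tr**6
    f1 = 15.2518 - 15.6875 / Tr - 13.4721 * np.log(Tr) + 0.43577 * Tr**6
    return Pc * np.exp(f0 + omega * f1)

# Placeholder condensable pseudo-components: mole fraction, Tc [K], Pc [Pa], omega.
x     = np.array([0.6, 0.4])
Tc    = np.array([647.1, 513.9])      # water- and ethanol-like stand-ins
Pc    = np.array([22.06e6, 6.14e6])
omega = np.array([0.345, 0.645])

# Kay's rule: mole-fraction-weighted pseudocritical mixture properties.
Tc_mix, Pc_mix, om_mix = (x * Tc).sum(), (x * Pc).sum(), (x * omega).sum()

for T in (450.0, 288.15):             # hot vapor region down to 15 °C coolant
    print(f"T = {T:7.2f} K  ->  Psat ~ {lee_kesler_psat(T, Tc_mix, Pc_mix, om_mix):.3e} Pa")
```

In a CFD user-defined function, an expression of this kind would supply the local saturation pressure against which condensation rates are computed.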
Abstract:
Building an interest model is the key to realizing personalized text recommendation. Previous interest models neglect the fact that a user may have multiple angles of interest, and different angles of interest impose different requests and criteria on text recommendation. This paper proposes an interest model that consists of two kinds of angles, persistence and pattern, which can be combined to form complex angles. The model uses a new method to represent long-term and short-term interest, and distinguishes interest in objects from interest in the link structure of objects. Experiments with large-scale news text data show that interest in objects and interest in link structure reflect real user requirements, and that it is effective to recommend texts according to these angles.
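As a minimal sketch of the persistence angle only, long-term interest can be kept as a slowly decaying topic profile and short-term interest as a fast-decaying one, with recommendation scores blending the two; the decay constants and blending weight are illustrative, not the paper's method.

```python
from collections import defaultdict

class InterestModel:
    """Toy two-timescale interest profile: long-term decays slowly, short-term fast."""
    def __init__(self, long_decay=0.99, short_decay=0.7, blend=0.5):
        self.long = defaultdict(float)
        self.short = defaultdict(float)
        self.long_decay, self.short_decay, self.blend = long_decay, short_decay, blend

    def observe(self, topics):
        # Decay every stored interest, then reinforce the topics just seen.
        for profile, decay in ((self.long, self.long_decay), (self.short, self.short_decay)):
            for t in profile:
                profile[t] *= decay
            for t in topics:
                profile[t] += 1.0

    def score(self, topics):
        return sum(self.blend * self.long[t] + (1 - self.blend) * self.short[t]
                   for t in topics)

m = InterestModel()
for day in range(30):
    m.observe(["politics"])          # persistent interest
m.observe(["sports"])                # a burst of recent interest
print(m.score(["politics"]), m.score(["sports"]))
```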
Abstract:
How are the image statistics of global image contrast computed? We answered this by using a contrast-matching task for checkerboard configurations of ‘battenberg’ micro-patterns where the contrasts and spatial spreads of interdigitated pairs of micro-patterns were adjusted independently. Test stimuli were 20 × 20 arrays with various sized cluster widths, matched to standard patterns of uniform contrast. When one of the test patterns contained a pattern with much higher contrast than the other, that determined global pattern contrast, as in a max() operation. Crucially, however, the full matching functions had a curious intermediate region where low contrast additions for one pattern to intermediate contrasts of the other caused a paradoxical reduction in perceived global contrast. None of the following models predicted this: RMS, energy, linear sum, max, Legge and Foley. However, a gain control model incorporating wide-field integration and suppression of nonlinear contrast responses predicted the results with no free parameters. This model was derived from experiments on summation of contrast at threshold, and masking and summation effects in dipper functions. Those experiments were also inconsistent with the failed models above. Thus, we conclude that our contrast gain control model (Meese & Summers, 2007) describes a fundamental operation in human contrast vision.
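The fitted model is specified in Meese & Summers (2007); the sketch below shows only the generic shape of such a model, a pointwise nonlinear contrast response divided by a wide-field suppressive pool, with illustrative exponents rather than the paper's parameter-free fit. Even this toy version reproduces the paradox reported above.

```python
import numpy as np

def perceived_contrast(contrasts, p=2.4, q=2.0, z=5.0):
    """Generic contrast gain control: excitation raised to p, divided by a
    wide-field suppressive pool of contrasts raised to q, then summed."""
    c = np.asarray(contrasts, dtype=float)
    pool = z + np.mean(c ** q)          # wide-field suppression
    return np.sum(c ** p / pool)

# Battenberg-style stimulus: interdigitated checks with two contrast levels.
low, high = 4.0, 32.0
# Adding low-contrast checks raises the suppressive pool more than the summed
# response, so perceived global contrast paradoxically falls.
print(perceived_contrast([high] * 200 + [0.0] * 200))   # high-contrast checks alone
print(perceived_contrast([high] * 200 + [low] * 200))   # low contrast added: smaller
```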
Abstract:
The conventional, geometrically lumped description of the physical processes inside a high shear granulator is not reliable for process design and scale-up. In this study, a compartmental Population Balance Model (PBM) with spatial dependence is developed and validated on two lab-scale high shear granulation processes using a 1.9 L MiPro granulator and a 4 L DIOSNA granulator. The compartmental structure is built using a heuristic approach based on computational fluid dynamics (CFD) analysis, which includes the overall flow pattern, velocity and solids concentration. The constant-volume Monte Carlo approach is implemented to solve the multi-compartment population balance equations. Different spatially dependent mechanisms are included in the compartmental PBM to describe granule growth. It is concluded that for both cases (low and high liquid content), the adjustment of parameters (e.g. layering, coalescence and breakage rates) can provide a quantitative prediction of the granulation process.
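A minimal sketch of the constant-volume (constant-N) Monte Carlo step for a coalescence-only population balance in two exchanging compartments is given below; the kernel, rates, compartment count, and exchange rule are placeholders rather than the CFD-derived structure and mechanisms used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two compartments of granule volumes (arbitrary units). Constant-N Monte Carlo:
# after each coalescence a random particle is duplicated to keep the sample fixed.
compartments = [rng.uniform(1.0, 2.0, 200), rng.uniform(1.0, 2.0, 200)]

def coalescence_step(v, rate=0.05):
    n = len(v)
    for _ in range(rng.poisson(rate * n)):
        i, j = rng.choice(n, size=2, replace=False)   # size-independent kernel
        v[i] = v[i] + v[j]                            # coalesce particles i and j
        v[j] = v[rng.integers(n)]                     # restore N by duplication

def exchange(a, b, n_swap=5):
    # Crude stand-in for CFD-derived inter-compartment flow: swap a few particles.
    ia = rng.choice(len(a), n_swap, replace=False)
    ib = rng.choice(len(b), n_swap, replace=False)
    a[ia], b[ib] = b[ib].copy(), a[ia].copy()

for step in range(100):
    for v in compartments:
        coalescence_step(v)
    exchange(*compartments)

for k, v in enumerate(compartments):
    print(f"compartment {k}: mean granule volume = {v.mean():.2f}")
```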
Abstract:
Our aim was to study an important and readily investigable phenomenon, connected to a relatively simple but real field situation, in such a way that the results of field observations could be directly compared with the predictions of a simulation model-system built on a simple mathematical apparatus, and at the same time to gain a hypothesis-system that creates the theoretical basis for a later series of experimental studies. As the subject of the study, we chose the seasonal coenological changes of an aquatic and semiaquatic Heteroptera community. Based on the observed data, we developed an ecological model-system that can generate realistic patterns closely resembling the observed temporal patterns, with whose help predictions can be given for alternative climatic situations not experienced before (e.g. climate change), and which can furthermore simulate experimental circumstances. The stable coenological state-plane, constructed on the principle of indirect ordination, allows unified handling of monitoring and simulation data series and permits their comparison. On the state-plane, deviations between empirical and model-generated data can be observed and analysed which would otherwise remain hidden.
Abstract:
Addressing some of the limitations of the theoretical models of firm internationalization, this paper argues that in order to understand the process of firm internationalization over time, one should observe firms and their contextual environments as complex interacting processes. Drawing on the theoretical models of the firm internationalization process, this study proposes a research model that may be suitable for the empirical examination of the dynamics of internationalization. The proposed model builds a relationship between the pattern of firm internationalization, operationalized via concepts such as entry, diversity, pace, and phases, and its context (environment, firm, management). By analyzing the interaction between the multi-level processes that shape internationalization, one can explore the reasons behind the dynamic profile of firm internationalization.
Abstract:
With advances in science and technology, computing and business intelligence (BI) systems are steadily becoming more complex, with an increasing variety of heterogeneous software and hardware components. They are thus becoming progressively more difficult to monitor, manage and maintain. Traditional approaches to system management have largely relied on domain experts through a knowledge acquisition process that translates domain knowledge into operating rules and policies. This is widely acknowledged to be a cumbersome, labor-intensive, and error-prone process that is, moreover, difficult to keep current with rapidly changing environments. In addition, many traditional business systems deliver primarily pre-defined historic metrics for long-term strategic or mid-term tactical analysis, and lack the flexibility to support evolving metrics or data collection for real-time operational analysis. There is thus a pressing need for automatic and efficient approaches to monitoring and managing complex computing and BI systems. To realize the goal of autonomic management and enable self-management capabilities, we propose to mine the historical log data generated by computing and BI systems and automatically extract actionable patterns from this data. This dissertation focuses on the development of data mining techniques to extract actionable patterns from various types of log data in computing and BI systems. Four key problems are studied: log data categorization and event summarization; leading indicator identification; pattern prioritization by exploring link structures; and tensor models for three-way log data. Case studies and comprehensive experiments on real application scenarios and datasets are conducted to show the effectiveness of the proposed approaches.
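Of the four problems, log categorization is the most compact to illustrate. The sketch below collapses raw log lines into message types by masking variable fields (numbers and hex identifiers), a common first step before event summarization; the masking rules and sample logs are illustrative, not the dissertation's method.

```python
import re
from collections import Counter

# Illustrative categorization: mask numbers and hex IDs so that log lines
# differing only in variable fields collapse to one message type.
def message_type(line: str) -> str:
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    return re.sub(r"\d+", "<NUM>", line)

logs = [
    "disk 3 latency 120 ms",
    "disk 7 latency 95 ms",
    "session 0xA3F2 opened",
    "session 0x99B1 opened",
]
summary = Counter(message_type(l) for l in logs)
for pattern, count in summary.items():
    print(f"{count} x {pattern}")
```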
Abstract:
In essay 1 we develop a new autoregressive conditional process to capture both the changes and the persistence of the intraday seasonal (U-shape) pattern of volatility. Unlike other procedures, this approach allows the intraday volatility pattern to change over time without the filtering process injecting a spurious pattern of noise into the filtered series. We show that prior deterministic filtering procedures are special cases of the autoregressive conditional filtering process presented here. Lagrange multiplier tests show that the stochastic seasonal variance component is statistically significant. Specification tests using the correlogram and cross-spectral analyses confirm the reliability of the autoregressive conditional filtering process. In essay 2 we develop a new methodology to decompose return variance in order to examine the informativeness embedded in the return series. The variance is decomposed into an information arrival component and a noise factor component. This decomposition methodology differs from previous studies in that both the informational variance and the noise variance are time-varying. Furthermore, the covariance of the informational component and the noisy component is no longer restricted to be zero. The resulting measure of price informativeness is defined as the informational variance divided by the total variance of the returns. The noisy rational expectations model predicts that uninformed traders react to price changes more than informed traders, since uninformed traders cannot distinguish between price changes caused by information arrivals and price changes caused by noise. This hypothesis is tested in essay 3 using intraday data with the intraday seasonal volatility component removed, based on the procedure in the first essay. The resulting seasonally adjusted variance series is decomposed into components caused by unexpected information arrivals and by noise in order to examine informativeness.
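The essays' exact specification is not given in the abstract; to contrast with a deterministic filter, the sketch below lets the seasonal factor for each intraday bin update autoregressively toward each day's realized absolute returns, so the U-shape can drift over time. The simulated data and smoothing coefficient are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_days, n_bins = 250, 48

# Simulated absolute returns with a slowly drifting U-shaped intraday pattern.
t = np.linspace(0, 1, n_bins)
u_shape = 1.0 + 6.0 * (t - 0.5) ** 2
drift = np.linspace(1.0, 1.4, n_days)[:, None]
abs_ret = drift * u_shape * np.abs(rng.standard_normal((n_days, n_bins)))

# Autoregressive conditional seasonal filter: s_d = (1 - a) * s_{d-1} + a * |r_d|,
# so the seasonal component adapts over days instead of being fixed in advance.
a = 0.05
s = abs_ret[0].copy()
filtered = np.empty_like(abs_ret)
for d in range(n_days):
    s = (1 - a) * s + a * abs_ret[d]
    filtered[d] = abs_ret[d] / s

# With an adaptive factor, the filtered level stays stable despite the drift.
print("mean filtered level, first vs last 50 days:",
      filtered[:50].mean().round(3), filtered[-50:].mean().round(3))
```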
Abstract:
Conceptual database design is an unusually difficult and error-prone task for novice designers. This study examined how two training approaches, rule-based and pattern-based, might improve performance on database design tasks. A rule-based approach prescribes a sequence of rules for modeling conceptual constructs and the action to be taken at various stages while developing a conceptual model. A pattern-based approach presents data modeling structures that occur frequently in practice, and prescribes guidelines on how to recognize and use these structures. This study describes the conceptual framework, experimental design, and results of a laboratory experiment that employed novice designers to compare the effectiveness of the two training approaches (between-subjects) at three levels of task complexity (within-subjects). Results indicate an interaction effect between treatment and task complexity. The rule-based approach was significantly better in the low-complexity and high-complexity cases; there was no statistical difference in the medium-complexity case. Designer performance fell significantly as complexity increased. Overall, although the rule-based approach was not significantly superior to the pattern-based approach in all instances, it outperformed the pattern-based approach at two of the three complexity levels. The primary contributions of the study are (1) the operationalization of the complexity construct to a degree not addressed in previous studies; (2) the development of a pattern-based instructional approach to database design; and (3) the finding that the effectiveness of a particular training approach may depend on the complexity of the task.