894 results for "Modeling approach"
Abstract:
We present a new approach that accounts for the non-additivity of the attractive parts of the solid-fluid and fluid-fluid potentials to improve the description of nitrogen and argon adsorption isotherms on graphitized carbon black within the framework of non-local density functional theory (NLDFT). We show that the strong solid-fluid interaction in the first monolayer decreases the fluid-fluid interaction, which prevents the two-dimensional phase transition from occurring. This results in a smoother isotherm, which agrees much better with experimental data. In the multi-layer coverage region, conventional NLDFT and grand canonical Monte Carlo simulations are known to over-predict the amount adsorbed relative to experimental isotherms. Accounting for the non-additivity factor decreases the solid-fluid interaction as the intermolecular interactions in the dense adsorbed fluid increase, preventing the over-prediction of loading in the multi-layer adsorption region. This improvement of non-local density functional theory allows us to describe experimental nitrogen and argon isotherms on carbon black quite accurately, with a mean error of 2.5 to 5.8% instead of the 17 to 26% obtained with the conventional technique. With this approach, the local isotherms of model pores can be derived, and consequently a more reliable pore size distribution (PSD) can be obtained. We illustrate this by applying our theory to nitrogen and argon isotherms on a number of activated carbons. The fit between our model and the data is much better than with conventional NLDFT, suggesting that the PSD obtained with our approach is more reliable.
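The key idea above can be sketched as a density-dependent scaling of the fluid-fluid well depth in a Lennard-Jones description. The linear scaling form and parameter values below are illustrative assumptions for exposition, not the functional form used in the paper.

```python
def lj_potential(r, epsilon, sigma):
    """Standard 12-6 Lennard-Jones pair potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def effective_epsilon(epsilon_ff, solid_field_strength, k=0.1):
    """Illustrative non-additivity correction (hypothetical linear form):
    a strong solid-fluid field weakens the effective fluid-fluid well depth."""
    return epsilon_ff * (1.0 - k * solid_field_strength)

# Near the wall the solid field is strong, so the effective fluid-fluid
# attraction is reduced, smoothing the sub-monolayer part of the isotherm.
eps_bulk = effective_epsilon(1.0, 0.0)    # far from the wall: unchanged
eps_layer1 = effective_epsilon(1.0, 3.0)  # first monolayer: weakened
```

With the weakened well depth in the first monolayer, the two-dimensional condensation step is suppressed, which is what produces the smoother computed isotherm described above.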
Abstract:
Refinement in software engineering allows a specification to be developed in stages, with design decisions taken at earlier stages constraining the design at later stages. Refinement of complex data models is difficult owing to the lack of a way to define constraints that can be progressively maintained over increasingly detailed refinements. Category theory provides a way of stating wide-scale constraints. These constraints lead to a set of design guidelines that maintain the wide-scale constraints under increasing detail. Previous methods of refinement are essentially local, and the proposed method interferes very little with these local methods. The result is particularly applicable to semantic web applications, where ontologies provide systems of more or less abstract constraints on systems, which must be implemented and therefore refined by participating systems. With the approach of this paper, the concept of committing to an ontology carries much more force. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
In a recent study, severe distortions were observed in the proton images of an excised, fixed human brain in an 11.1 Tesla/40 cm MR instrument, and the effect was modeled on phantom images using a finite difference time domain (FDTD) model. In the present study, we extend these simulations to a complete human head, employing a hybrid FDTD and method of moments (MoM) approach, which provides a validated method for simulating biological samples in coil structures. The effect of fixative on the image distortions is explored. Importantly, temperature distributions within the head are also simulated using a bioheat method based on parameters derived from the electromagnetic simulations. The MoM/FDTD simulations confirm that the transverse magnetic field (B1) from a ReCav resonator exhibits good homogeneity in air but strong inhomogeneity when loaded with the head, with or without fixative. The fixative serves to increase the distortions, but they remain significant in the in vivo simulations. The simulated signal intensity (SI) distribution within the sample confirms that the distortions in the experimental images are caused by the complex interactions of the incident electromagnetic fields with tissue, which is heterogeneous in terms of conductivity and permittivity. The temperature distribution is likewise heterogeneous, raising concerns that hot spots generated in the sample may exceed acceptable levels in future in vivo studies. As human imaging at 11.1 T is some time away, simulations are important for predicting potential safety issues as well as for evaluating practical concerns about image quality. Simulation on a whole human head at 11.1 T indicates that the wave behavior presents significant engineering challenges for ultra-high-field (UHF) MRI.
Novel strategies will have to be employed in imaging technique and resonator design for UHF MRI to achieve the theoretical signal-to-noise ratio (SNR) improvements it offers over lower field systems. (C) 2005 Wiley Periodicals, Inc.
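The heating concern raised in this abstract rests on the balance between locally absorbed RF power (SAR) and tissue heat capacity. A minimal sketch of the standard initial, perfusion-free heating-rate relation follows; the specific-heat value is a typical soft-tissue assumption and is not taken from the paper.

```python
def temperature_rise_rate(sar_w_per_kg, specific_heat=3600.0):
    """Initial (perfusion- and conduction-free) heating rate from local SAR:
    dT/dt = SAR / c_p, with c_p in J/(kg*K).
    The default c_p is a typical soft-tissue value (assumption)."""
    return sar_w_per_kg / specific_heat

# A local SAR of 3.6 W/kg gives an initial rise of about 0.001 K/s,
# i.e. roughly 3.6 K/hr before perfusion and conduction are accounted for.
rate = temperature_rise_rate(3.6)
```

The full bioheat simulation in the paper additionally accounts for conduction and (for in vivo cases) blood perfusion, which is why a spatially heterogeneous SAR pattern translates into the heterogeneous temperature distribution described above.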
Abstract:
Automatic signature verification is a well-established and active area of research with numerous applications, such as bank check verification and ATM access. This paper proposes a novel approach to the problem of automatic off-line signature verification and forgery detection. The proposed approach is based on fuzzy modeling employing the Takagi-Sugeno (TS) model. Signature verification and forgery detection are carried out using angle features extracted with a box approach. Each feature corresponds to a fuzzy set. The features are fuzzified by an exponential membership function involved in the TS model, which is modified to include structural parameters. The structural parameters are devised to account for possible variations due to handwriting styles and moods. The membership functions constitute weights in the TS model. Optimizing the output of the TS model with respect to the structural parameters yields the solution for the parameters. We also derive two TS model formulations: one with a rule for each input feature (multiple rules) and one with a single rule for all input features. We find that the TS model with multiple rules is better than the TS model with a single rule at detecting three types of forgery (random, skilled, and unskilled) from a large database of sample signatures, in addition to verifying genuine signatures. We have also devised three approaches, namely an innovative approach and two intuitive approaches, using the TS model with multiple rules for improved performance. (C) 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
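The TS machinery described above can be sketched as follows: each angle feature is fuzzified by an exponential membership function, and the memberships act as rule firing strengths in a weighted-average output. The parameterization below (generic structural parameters `s` and `t`, zero-order consequents) is an illustrative form, not the paper's exact modification.

```python
import math

def exp_membership(x, mean, spread, s=1.0, t=1.0):
    """Exponential membership function with structural parameters s and t
    (illustrative form; the paper's exact parameterization may differ)."""
    return math.exp(-s * abs((x - mean) / spread) ** t)

def ts_output(features, means, spreads, weights):
    """Zero-order Takagi-Sugeno output: memberships act as rule firing
    strengths, and the output is their weighted average."""
    mu = [exp_membership(x, m, sp) for x, m, sp in zip(features, means, spreads)]
    num = sum(w * m for w, m in zip(weights, mu))
    den = sum(mu)
    return num / den if den else 0.0
```

In a verification setting, the means and spreads would come from genuine training signatures, and the structural parameters would be tuned by optimizing the TS output, as the abstract describes.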
Abstract:
A critical assessment is presented of the existing fluid flow models used for dense medium cyclones (DMCs) and hydrocyclones. As the discussion indicates, the understanding of dense medium cyclone flow is still far from complete. However, its similarity to the hydrocyclone provides a basis for an improved understanding of fluid flow in DMCs. The complexity of fluid flow in DMCs arises chiefly from the presence of the medium and from the dominance of turbulent particle size and density effects on separation. Both theoretical and experimental analyses are considered with respect to two-phase motion and solid-phase flow in hydrocyclones and DMCs. A detailed discussion is presented of the empirical, semi-empirical, and numerical models available in the literature, the numerical models being based on both the vorticity-stream function approach and the Navier-Stokes equations in their primitive variables and in cylindrical coordinates. The existing equations describing turbulence and multiphase flows in cyclones are also critically reviewed.
Abstract:
A large number of mineral processing devices employ the basic principles of gravity concentration in a flowing fluid film a few millimetres thick in small open channels, where the particles are distributed along the flow height according to their physical properties and the fluid flow characteristics. Fluid flow behaviour and slurry transportation characteristics in open channels have been a research topic for many years in many engineering disciplines. However, the open channels used in the mineral processing industries differ in channel size and flow velocity. An understanding of water split behaviour is therefore essential in modeling flowing film concentrators. In this paper, an attempt is made to model the water split behaviour in an inclined open rectangular channel, resembling the actual size and flow velocity used by the mineral processing industries, based on Prandtl's mixing-length approach. (c) 2006 Elsevier B.V. All rights reserved.
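Prandtl's mixing-length hypothesis, the basis of the model above, closes the turbulence problem by expressing eddy viscosity as l² |du/dy|, with the mixing length l growing linearly from the wall. A minimal sketch follows; the outer-region cap at 0.09 times the film depth is a common assumption for thin films, not a value taken from this paper.

```python
KAPPA = 0.41  # von Karman constant

def mixing_length(y, film_depth):
    """Prandtl mixing length: l = kappa * y near the wall, capped in the
    outer region of a thin film (the 0.09*depth cap is an assumption)."""
    return min(KAPPA * y, 0.09 * film_depth)

def eddy_viscosity(y, dudy, film_depth):
    """Turbulent (eddy) viscosity from the mixing-length hypothesis:
    nu_t = l**2 * |du/dy|."""
    l = mixing_length(y, film_depth)
    return l * l * abs(dudy)
```

Integrating the resulting shear-stress balance across the film depth is what yields the velocity profile, and hence the water split, for a given channel inclination and flow rate.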
Abstract:
Ecologists and economists both use models to help develop strategies for biodiversity management. The practical use of disciplinary models, however, can be limited because ecological models tend not to address the socioeconomic dimension of biodiversity management, whereas economic models tend to neglect the ecological dimension. Given these shortcomings of disciplinary models, there is a necessity to combine ecological and economic knowledge into ecological-economic models. It is insufficient if scientists work separately in their own disciplines and combine their knowledge only when it comes to formulating management recommendations. Such an approach does not capture feedback loops between the ecological and the socioeconomic systems. Furthermore, each discipline poses the management problem in its own way and comes up with its own most appropriate solution. These disciplinary solutions, however, are likely to be so different that a combined solution considering aspects of both disciplines cannot be found. Preconditions for a successful model-based integration of ecology and economics include (1) an in-depth knowledge of the two disciplines, (2) the adequate identification and framing of the problem to be investigated, and (3) a common understanding between economists and ecologists of modeling and scale. To further advance ecological-economic modeling, the development of common benchmarks, quality controls, and refereeing standards for ecological-economic models is desirable.
Abstract:
This paper presents a new approach to improving the effectiveness of autonomous systems that deal with dynamic environments. The basis of the approach is to find repeating patterns of behavior in the dynamic elements of the system, and then to use predictions of the repeating elements to better plan goal-directed behavior. It is a layered approach involving classifying, modeling, predicting, and exploiting. Classifying involves using observations to place the moving elements into previously defined classes. Modeling involves recording features of the behavior on a coarse-grained grid. Exploitation is achieved by integrating predictions from the model into the behavior selection module to improve the utility of the robot's actions. This is in contrast to typical approaches that use the model to select between different strategies or plays. Three methods of adaptation to the dynamic features of the environment are explored. The effectiveness of each method is determined using statistical tests over a number of repeated experiments. The work is presented in the context of predicting opponent behavior in the highly dynamic, multi-agent robot soccer domain (RoboCup).
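The modeling and prediction layers described above can be sketched as counting observed cell-to-cell transitions on a coarse grid and predicting the most frequent successor cell. This is a minimal illustration of the layering, not the paper's implementation; the cell size is an arbitrary choice.

```python
from collections import defaultdict

class GridBehaviorModel:
    """Coarse-grained grid model of a moving agent: record observed cell
    transitions, then predict the most likely next cell (a minimal sketch)."""

    def __init__(self, cell_size=0.5):
        self.cell_size = cell_size
        self.transitions = defaultdict(lambda: defaultdict(int))

    def to_cell(self, x, y):
        """Quantize a continuous position onto the coarse grid."""
        return (int(x // self.cell_size), int(y // self.cell_size))

    def observe(self, prev_pos, next_pos):
        """Record one observed movement between consecutive positions."""
        self.transitions[self.to_cell(*prev_pos)][self.to_cell(*next_pos)] += 1

    def predict(self, pos):
        """Return the most frequently observed successor cell, or None."""
        counts = self.transitions.get(self.to_cell(*pos))
        if not counts:
            return None
        return max(counts, key=counts.get)
```

A behavior selection module would query `predict` for the opponent's likely next cell and bias action utilities accordingly, which is the exploitation step the abstract contrasts with play selection.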
Abstract:
Starting with a UML specification that captures the underlying functionality of a given Java-based concurrent system, we describe a systematic way to construct, from this specification, test sequences for validating an implementation of the system. The approach is first to extend the specification to create UML state machines that directly address the aspects of the system we wish to test. Specifically, the extended UML state machines can capture state information about the number of waiting threads or the number of threads blocked on a given object. Using the SAL model checker, we can generate from the extended UML state machines sequences that cover all the various possibilities of events and states. These sequences can then be transformed directly into test sequences suitable for input to a testing tool such as ConAn. As an illustration, the methodology is applied to generate sequences for testing a Java implementation of the producer-consumer system. © 2005 IEEE
Abstract:
In this paper, we present a framework for pattern-based model evolution in the MDA context. In the framework, users define patterns using a pattern modeling language designed to describe software design patterns, and they can use the patterns as rules to evolve their models. Design model evolution takes place in two steps. The first step is a binding process of selecting a pattern and defining where and how to apply it in the model. The second step is an automatic model transformation that actually evolves the model according to the binding information and the pattern rule. The pattern modeling language is defined in terms of a MOF-based role metamodel, implemented using an existing modeling framework, EMF, and incorporated as a plugin to the Eclipse modeling environment. The model evolution process is also implemented as an Eclipse plugin. With these two plugins, we provide an integrated framework in which defining and validating patterns, and evolving models based on patterns, can take place in a single modeling environment.
Abstract:
This paper presents an innovative approach for signature verification and forgery detection based on fuzzy modeling. The signature image is binarized, resized to a fixed-size window, and then thinned. The thinned image is partitioned into a fixed number of eight sub-images called boxes, using the horizontal density approximation approach. Each sub-image is then further resized and partitioned into twelve further sub-images using the uniform partitioning approach. The feature considered is the normalized vector angle (α) from each box. Each feature extracted from the sample signatures gives rise to a fuzzy set. Since the choice of a proper fuzzification function is crucial for verification, we have devised a new fuzzification function with structural parameters that can adapt to variations in the fuzzy sets. This function is employed to develop a complete forgery detection and verification system.
Abstract:
Purpose: The purpose of this paper is to describe how the application of systems thinking to designing, managing, and improving business processes has resulted in a new and unique holonic-based process modeling methodology known as process-oriented holonic modeling. Design/methodology/approach: The paper describes key systems thinking axioms that are built upon in an overview of the methodology; the techniques are described using an example taken from a large organization designing and manufacturing capital goods equipment operating within a complex and dynamic environment. These were produced in an 18-month project, using an action research approach, to improve quality and process efficiency. Findings: The findings of this research show that the new methodology can support process depiction and improvement in industrial sectors characterized by environments of high variety and low volume (e.g. projects such as the design and manufacture of a radar system or a hybrid production process) which do not provide repetitive learning opportunities. In such circumstances, the methodology has not only delivered holonic-based process diagrams but has also transferred strategic vision from top management to middle and operational levels without being reductionistic. Originality/value: This paper will be of interest to organizational analysts looking at large, complex projects who require a methodology that does not confine them to thinking reductionistically in "task-breakdown" based approaches. The novel ideas in this paper bear directly on the way analysts should perceive organizational processes. Future research is applying the methodology in similar environments in other industries. © Emerald Group Publishing Limited.
Abstract:
The application of systems thinking to designing, managing, and improving business processes has developed a new "holonic-based" process modeling methodology. The theoretical background and the methodology are described using examples taken from a large organization designing and manufacturing capital goods equipment operating within a complex and dynamic environment. A key point of differentiation attributed to this methodology is that it allows a set of models to be produced without taking a task breakdown approach but instead uses systems thinking and a construct known as the "holon" to build process descriptions as a system of systems (i.e., a holarchy). The process-oriented holonic modeling methodology has been used for total quality management and business process engineering exercises in different industrial sectors and builds models that connect the strategic vision of a company to its operational processes. Exercises have been conducted in response to environmental pressures to make operations align with strategic thinking as well as becoming increasingly agile and efficient. This unique methodology is best applied in environments of high complexity, low volume, and high variety, where repeated learning opportunities are few and far between (e.g., large development projects). © 2007 IEEE.
Abstract:
The CancerGrid consortium is developing open-standards cancer informatics to address the challenges posed by modern cancer clinical trials. This paper presents the service-oriented software paradigm implemented in CancerGrid to derive clinical trial information management systems for collaborative cancer research across multiple institutions. Our proposal is founded on a combination of a clinical trial (meta)model and WSRF (Web Services Resource Framework), and is currently being evaluated for use in early phase trials. Although primarily targeted at cancer research, our approach is readily applicable to other areas for which a similar information model is available.
Abstract:
The objective of this work was to design, construct, and commission a new ablative pyrolysis reactor and a high-efficiency product collection system. The reactor was to have a nominal throughput of 10 kg/hr of dry biomass and be inherently scalable up to an industrial-scale application of 10 tonnes/hr. The whole process consists of a bladed ablative pyrolysis reactor, two high-efficiency cyclones for char removal, and a disk-and-doughnut quench column combined with a wet-walled electrostatic precipitator, mounted directly on top, for liquids collection. To aid design and scale-up calculations, detailed mathematical modelling of the reaction system was undertaken, enabling sizes, efficiencies, and operating conditions to be determined. Specifically, a modular approach was taken owing to the iterative nature of some of the design methodologies, with the output from one module being the input to the next. Separate modules were developed for the determination of the biomass ablation rate, specification of the reactor capacity, cyclone design, quench column design, and electrostatic precipitator design. These models enabled a rigorous design protocol to be developed, capable of specifying the required reactor and product collection system size for specified biomass throughputs, operating conditions, and collection efficiencies. The reactor proved capable of generating an ablation rate of 0.63 mm/s for pine wood at a temperature of 525 °C with a relative velocity between the heated surface and the reacting biomass particle of 12.1 m/s. The reactor achieved a maximum throughput of 2.3 kg/hr, which was the maximum the biomass feeder could supply. The reactor is capable of being operated at a far higher throughput, but this would require a new feeder and drive motor to be purchased. Modelling showed that the reactor is capable of achieving a throughput of approximately 30 kg/hr.
This is an area that should be considered in future, as the reactor is currently operating well below its theoretical maximum. Calculations show that the current product collection system could operate efficiently up to a maximum feed rate of 10 kg/hr, provided the inert gas supply was adjusted accordingly to keep the vapour residence time in the electrostatic precipitator above one second. Operation above 10 kg/hr would require some modifications to the product collection system. Eight experimental runs were documented and considered successful; more were attempted but had to be abandoned due to equipment failure. This does not detract from the fact that the reactor and product collection system design was extremely efficient. The maximum total liquid yield was 64.9% on a dry-wood-fed basis. It is considered that the liquid yield would have been higher had there been sufficient development time to overcome certain operational difficulties, and had longer operating runs been attempted to offset product losses arising from the difficulty of collecting all available product from a large-scale collection unit. The liquids collection system was highly efficient: modeling determined a liquid collection efficiency above 99% on a mass basis. This was validated experimentally, as a dry ice/acetone condenser and a cotton wool filter downstream of the collection unit enabled mass measurement of the condensable product exiting the unit, confirming a collection efficiency in excess of 99% on a mass basis.
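The modelled capacity of roughly 30 kg/hr is consistent with a first-order estimate: mass ablated per unit time is ablation rate times blade-biomass contact area times wood density. The contact area and wood density values below are illustrative assumptions, not figures from the thesis.

```python
def ablative_throughput_kg_per_hr(ablation_rate_mm_s, contact_area_m2,
                                  wood_density_kg_m3=650.0):
    """First-order ablative reactor capacity estimate:
    throughput = ablation rate x contact area x wood density.
    Contact area and the default density are illustrative assumptions."""
    rate_m_s = ablation_rate_mm_s / 1000.0          # mm/s -> m/s
    kg_per_s = rate_m_s * contact_area_m2 * wood_density_kg_m3
    return kg_per_s * 3600.0                        # kg/s -> kg/hr

# With the reported 0.63 mm/s ablation rate and an assumed 0.02 m^2
# contact area, the estimate lands near the modelled ~30 kg/hr figure.
estimate = ablative_throughput_kg_per_hr(0.63, 0.02)
```

An estimate of this kind only bounds the reactor module; the cyclone, quench column, and precipitator modules impose their own limits, which is why the collection system caps the process at about 10 kg/hr as stated above.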