906 results for model-based reasoning processes


Relevance: 100.00%

Abstract:

In this work, a novel on-line process for producing food-grade emulsions containing oily extracts, i.e. oil-in-water (O/W) emulsions, in only one step is presented. The process has been called ESFE, Emulsions from Supercritical Fluid Extraction; with it, emulsions containing supercritical fluid extracts can be obtained directly from plant materials. The aim of the process is to offer a rapid alternative to the conventional two-step procedure, in which a supercritical fluid extraction first produces an extract and the emulsion is then formulated in a separate device. A variation of the process was also tested and successfully validated, giving rise to a second acronym: EPFE (Emulsions from Pressurized Fluid Extractions). Both processes exploit the miscibility of essential oils in supercritical CO2; in addition, the EPFE process exploits the emulsifying properties of saponin-rich pressurized aqueous plant extracts. The feasibility of this latter process was demonstrated using Pfaffia glomerata roots as the source of saponin-rich extract, water as the extracting solvent, and clove essential oil, extracted directly with supercritical CO2, as a model dispersed phase. In addition, examples of pressurized fluid-based coupled processes developed in the past five years for adding value to food bioactive compounds are reviewed.

Relevance: 100.00%

Abstract:

This qualitative study explored the clinical reasoning of front-line nurses in CSSS/CLSC settings when they prioritize their interventions with families living in contexts of vulnerability, within the Services intégrés en périnatalité et pour la petite enfance (SIPPE) program. It is a case study based on a purposive sample of seven care episodes involving two nurses and seven families in the postnatal period, focused on how the nurses prioritize their interventions. Data were collected using the think-aloud method, followed by semi-structured interviews with the nurses. The data were analysed qualitatively using interpretive methods and category counts; the categories were formulated and related to each other drawing on Tanner's (2006) model of the clinical reasoning process and on the clinical reasoning strategies proposed by Fonteyn (1998). The findings suggest that the clinical reasoning process does not differ by type of intervention priority with families in contexts of vulnerability, particularly when priority is defined by degree of urgency (priority versus secondary). We also observed little diversity in the clinical reasoning processes mobilized across the seven care episodes, and a narrative reasoning process was frequent. When a family expresses an urgent need, the nurse responds to it first. When conditions suggest a heightened potential for family vulnerability, a more systematic mode of clinical reasoning, involving collecting and relating information in order to formulate a proposal that supports moving to action, appears to be mobilized to prioritize the intervention; this is the case, for example, when it is a first baby or when the family does not use other formal support resources. Conversely, when it is a second baby and the family uses other resources, the nurses tend instead to apply a routine SIPPE intervention. The study also attests to the nurses' sustained engagement with families facing major challenges. It remains important, however, to support the development of a more varied repertoire of clinical reasoning processes in order to strengthen nurses' capacity to prioritize interventions carried out under multiple organizational and interpersonal constraints.

Relevance: 100.00%

Abstract:

There are many ways to generate geometrical models for numerical simulation, and most of them start with a segmentation step to extract the boundaries of the regions of interest. This paper presents an algorithm to generate a patient-specific three-dimensional geometric model, based on a tetrahedral mesh, without an initial extraction of contours from the volumetric data. Using information directly available in the data, such as gray levels, we built a metric to drive a mesh adaptation process. The metric is used to specify the size and orientation of the tetrahedral elements everywhere in the mesh. Our method, which produces anisotropic meshes, gives good results with synthetic and real MRI data. The quality of the resulting models has been evaluated qualitatively and quantitatively by comparing them with an analytical solution and with a segmentation made by an expert. Results show that, in 90% of the cases, our method gives meshes as good as or better than a similar isotropic method, based on the accuracy of the volume reconstruction for a given mesh size. Moreover, a comparison of the Hausdorff distances between the adapted meshes of both methods and ground-truth volumes shows that our method reduces reconstruction errors faster. Copyright © 2015 John Wiley & Sons, Ltd.
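To make the idea concrete, the following Python sketch (not the authors' implementation) builds an anisotropic metric field directly from voxel gray levels: the requested edge length is short across strong intensity variations and long where the image is smooth. The gradient-based construction, the function name and the length bounds are assumptions for illustration only; published adaptation codes typically use a recovered Hessian of the intensity instead.

import numpy as np

def graylevel_metric(volume, h_min=1.0, h_max=8.0, eps=1e-6):
    """Toy anisotropic metric field driven by image gray levels.

    For every voxel we build a 3x3 tensor M whose eigenvalues request a
    short edge length (h_min) along the local gray-level gradient and a
    long one (h_max) in the orthogonal directions (illustrative only).
    """
    gx, gy, gz = np.gradient(volume.astype(float))
    grad = np.stack([gx, gy, gz], axis=-1)             # (nx, ny, nz, 3)
    mag = np.linalg.norm(grad, axis=-1, keepdims=True)

    # Target edge length: h_min where the gradient is strong, h_max where flat.
    h = h_max - (h_max - h_min) * (mag / (mag.max() + eps))

    # Metric tensor M = (1/h_max^2) I + (1/h^2 - 1/h_max^2) n n^T,
    # i.e. anisotropic refinement along the unit gradient direction n.
    n = grad / (mag + eps)
    outer = n[..., :, None] * n[..., None, :]           # (..., 3, 3)
    iso = np.eye(3) / h_max**2
    aniso = (1.0 / h**2 - 1.0 / h_max**2)[..., None] * outer
    return iso + aniso                                  # metric field (..., 3, 3)

A metric-driven mesh adapter would then size and orient the tetrahedra according to such a field, which is the role the metric plays in the paper.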

Relevance: 100.00%

Abstract:

Autonomous vehicles are increasingly being used in mission-critical applications, and robust methods are needed for controlling these inherently unreliable and complex systems. This thesis advocates the use of model-based programming, which allows mission designers to program autonomous missions at the level of a coach or wing commander. To support such a system, this thesis presents the Spock generative planner. To generate plans, Spock must be able to piece together vehicle commands and team tactics whose complex behavior is represented by concurrent processes. This is in contrast to traditional planners, whose operators represent simple atomic or durative actions. Spock represents operators using the RMPL language, which describes behaviors as parallel and sequential compositions of state and activity episodes. RMPL is useful for controlling mobile autonomous missions because it allows mission designers to quickly encode expressive activity models using object-oriented design methods and an intuitive set of activity combinators. Spock is also significant in that it uniformly represents operators and plan-space processes in terms of Temporal Plan Networks, which support temporal flexibility for robust plan execution. Finally, Spock is implemented as a forward-progression optimal planner that walks monotonically forward through plan processes, closing any open conditions and resolving any conflicts. This thesis describes the Spock algorithm in detail, along with example problems and test results.
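As a rough illustration of forward-progression planning over operators with preconditions and effects, here is a minimal Python sketch. It deliberately ignores RMPL's parallel/sequential combinators and the temporal bounds carried by a Temporal Plan Network; the mission, operator names and state labels are invented, and the sketch only shows the "walk forward, closing open conditions" idea.

from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    name: str
    preconds: frozenset   # conditions that must already hold
    effects: frozenset    # conditions asserted by the activity episode

def forward_plan(operators, start, goals, max_depth=12):
    """Toy forward-progression search: walk monotonically forward through
    the plan, closing open conditions until every goal is satisfied.
    Concurrency and temporal flexibility are left out of this sketch."""
    seen = set()
    frontier = [(frozenset(start), [])]
    while frontier:
        state, plan = frontier.pop()
        if goals <= state:
            return plan
        if state in seen or len(plan) >= max_depth:
            continue
        seen.add(state)
        for op in operators:
            if op.preconds <= state:          # operator's open conditions are closed
                frontier.append((state | op.effects, plan + [op.name]))
    return None

# Tiny illustrative mission: get an air vehicle to image a target.
ops = [
    Operator("takeoff",       frozenset({"on-ground"}), frozenset({"airborne"})),
    Operator("fly-to-target", frozenset({"airborne"}),  frozenset({"at-target"})),
    Operator("image-target",  frozenset({"at-target"}), frozenset({"imaged"})),
]
print(forward_plan(ops, {"on-ground"}, frozenset({"imaged"})))
# -> ['takeoff', 'fly-to-target', 'image-target']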

Relevance: 100.00%

Abstract:

The paper focuses on taking advantage of the large amounts of data that are systematically stored in plants (by means of SCADA systems) but not exploited enough to achieve supervisory goals (fault detection, diagnosis and reconfiguration). The methodology of case-based reasoning (CBR) is proposed to perform supervisory tasks in industrial processes by re-using the stored data. The goal is to take advantage of experiences, registered in a suitable structure as cases, avoiding the tedious task of knowledge acquisition and representation needed by other reasoning techniques such as expert systems. An overview of CBR terminology and basic concepts is presented. The adaptation of CBR to expert supervisory tasks, taking into account the particularities and difficulties arising from dynamic systems, is discussed. Special attention is given to proposing a general case definition suitable for supervisory tasks. Finally, this structure and the whole methodology are tested in an application example for monitoring a real drier chamber.
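A minimal Python sketch of the retrieve and reuse steps of the CBR cycle, applied to process data of the kind a SCADA system logs, is given below. The case attributes, the distance measure and the example values are assumptions for illustration and not the case structure proposed in the paper.

import math
from dataclasses import dataclass

@dataclass
class Case:
    """One supervision experience: a snapshot of process variables
    together with the diagnosis / corrective action that was applied."""
    features: dict            # e.g. {"temp": 82.0, "humidity": 0.35}
    diagnosis: str
    action: str

def distance(query, case, weights):
    """Weighted Euclidean distance over the shared numeric attributes."""
    return math.sqrt(sum(
        weights.get(k, 1.0) * (query[k] - case.features[k]) ** 2
        for k in query if k in case.features))

def retrieve(case_base, query, weights=None, k=1):
    """RETRIEVE step: return the k most similar stored experiences."""
    weights = weights or {}
    return sorted(case_base, key=lambda c: distance(query, c, weights))[:k]

# REUSE step (sketch): adopt the diagnosis/action of the closest past episode.
case_base = [
    Case({"temp": 80.0, "humidity": 0.30}, "normal",       "none"),
    Case({"temp": 95.0, "humidity": 0.10}, "overheating",  "reduce heater power"),
    Case({"temp": 70.0, "humidity": 0.60}, "sensor drift", "recalibrate probe"),
]
best = retrieve(case_base, {"temp": 93.0, "humidity": 0.12})[0]
print(best.diagnosis, "->", best.action)   # overheating -> reduce heater power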

Relevance: 100.00%

Abstract:

This thesis aims to describe the current situation of the private security sector with respect to the implementation and adoption of CRM strategies. A reliable literature review and the study of cases related to the topic make it possible to compare how the model is actually applied in the private security sector with what has been proposed by various authors. The results will thus allow the sector and its managers to develop strategies that help satisfy their clients and provide a better service. In the academic field, this study will serve as a theoretical and practical guide for students and teachers, consolidating knowledge about CRM, relationship marketing and their use in the private security sector. According to this model, information about customers is strategically vital for organizations: it supports decision-making, helps forecast changes in demand, and establishes control over the processes in which the customer is involved. The adoption and implementation of CRM therefore helps companies, in this case those in the private security sector, to pay attention to how they interact with the customer and thereby improve the service, which in turn affects the customer's perception of the organization. In this way, CRM strategies currently shape the direction of a company, helping to attract new customers while also keeping current customers satisfied, which translates into greater demand for the service and better profitability for companies in the sector. For these reasons, the security sector stands to benefit from CRM strategies, which will lead it to offer better services to its clients.

Relevance: 100.00%

Abstract:

In today's hyperconnected, dynamic world, loaded with uncertainty, conventional analytical methods and models are showing their limitations. Organizations therefore require useful tools that employ information technology and computational simulation models as mechanisms for decision-making and problem-solving. One of the most recent, powerful and promising is agent-based modelling and simulation (ABMS). Many organizations, including consulting firms, use this technique to understand phenomena, evaluate strategies and solve problems of various kinds. Despite this, there is (to the best of our knowledge) no review of the state of the art concerning ABMS and its application to organizational research. It is also worth noting that, because of its novelty, the topic has not been sufficiently disseminated and developed in Latin America. Consequently, this project aims to produce a state-of-the-art review of ABMS and its impact on organizational research.

Relevance: 100.00%

Abstract:

The performance of a model-based diagnosis system can be affected by several sources of uncertainty, such as model errors, uncertainty in measurements, and disturbances. This uncertainty can be handled by means of interval models. The aim of this thesis is to propose a methodology for fault detection, isolation and identification based on interval models. The methodology includes algorithms to obtain, in an automatic way, the symbolic expressions of the residual generators enhancing the structural isolability of the faults, in order to design the fault detection tests. These algorithms are based on the structural model of the system. The stages of fault detection, isolation and identification are stated as constraint satisfaction problems in continuous domains and solved by means of interval-based consistency techniques. Qualitative fault isolation is enhanced by a reasoning step in which the signs of the symptoms are derived from analytical redundancy relations or bond graph models of the system. An initial, empirical analysis of the differences between interval-based and statistical techniques is also presented. The performance and efficiency of the contributions are illustrated through several application examples covering different levels of complexity.
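The basic interval fault-detection test can be illustrated with a short Python sketch: propagate bounded parameter uncertainty through the model to obtain an envelope for the predicted output, and flag a fault when the measurement is inconsistent with that envelope (equivalently, when the residual interval does not contain zero). The toy model, the corner-evaluation propagation and the tolerance are assumptions; the thesis relies on symbolic residual generators and interval consistency techniques rather than this naive enumeration.

import itertools

def predicted_interval(model, param_intervals, u):
    """Crude interval propagation: evaluate the model at every corner of
    the parameter box (adequate for a monotone toy model; real interval
    consistency techniques are far more careful)."""
    names = list(param_intervals)
    corners = itertools.product(*param_intervals.values())
    ys = [model(dict(zip(names, c)), u) for c in corners]
    return min(ys), max(ys)

def fault_detected(y_measured, y_interval, meas_tol=0.0):
    """Detection test: the measurement must fall inside the predicted
    envelope, i.e. the residual interval must contain zero."""
    lo, hi = y_interval
    return not (lo - meas_tol <= y_measured <= hi + meas_tol)

# Toy static model y = k * u + b with uncertain gain and offset.
model = lambda p, u: p["k"] * u + p["b"]
params = {"k": (1.8, 2.2), "b": (-0.1, 0.1)}

env = predicted_interval(model, params, u=5.0)   # (8.9, 11.1)
print(fault_detected(10.3, env))                 # False: consistent, no fault
print(fault_detected(12.5, env))                 # True: outside envelope -> fault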

Relevance: 100.00%

Abstract:

Results from the first Sun-to-Earth coupled numerical model developed at the Center for Integrated Space Weather Modeling are presented. The model simulates physical processes occurring in space from the corona of the Sun to the Earth's ionosphere, and it represents the first step toward creating a physics-based numerical tool for predicting space weather conditions in the near-Earth environment. Two 6- to 7-day intervals, representing different heliospheric conditions in terms of the three-dimensional configuration of the heliospheric current sheet, are chosen for simulation. These conditions lead to drastically different responses of the simulated magnetosphere-ionosphere system, emphasizing, on the one hand, the challenges encountered in building such forecasting tools and, on the other hand, the successes that can already be achieved even at this initial stage of Sun-to-Earth modeling.

Relevance: 100.00%

Abstract:

Grass-based diets are of increasing socio-economic importance in dairy cattle farming, but their low supply of glucogenic nutrients may limit the production of milk. Current evaluation systems that assess the energy supply and requirements are based on metabolisable energy (ME) or net energy (NE). These systems do not consider the characteristics of the energy-delivering nutrients. In contrast, mechanistic models take into account the site of digestion, the type of nutrient absorbed and the type of nutrient required for production of milk constituents, and may therefore give a better prediction of supply and requirement of nutrients. The objective of the present study is to compare the ability of three energy evaluation systems, viz. the Dutch NE system, the Agricultural and Food Research Council (AFRC) ME system and the Feed into Milk (FIM) ME system, and of a mechanistic model based on Dijkstra et al. [Simulation of digestion in cattle fed sugar cane: prediction of nutrient supply for milk production with locally available supplements. J. Agric. Sci., Cambridge 127, 247-60] and Mills et al. [A mechanistic model of whole-tract digestion and methanogenesis in the lactating dairy cow: model development, evaluation and application. J. Anim. Sci. 79, 1584-97], to predict the feed value of grass-based diets for milk production. The dataset for evaluation consists of 41 treatments of grass-based diets (at least 0.75 g ryegrass/g diet on a DM basis). For each model, the predicted energy or nutrient supply, based on observed intake, was compared with the predicted requirement based on observed performance. The error of energy or nutrient supply relative to requirement was assessed by calculating the mean square prediction error (MSPE) and the concordance correlation coefficient (CCC). All energy evaluation systems predicted energy requirement to be lower (6-11%) than energy supply. The root MSPE (expressed as a proportion of the supply) was lowest for the mechanistic model (0.061), followed by the Dutch NE system (0.082), the FIM ME system (0.097) and the AFRC ME system (0.118). For the energy evaluation systems, the error due to overall bias of prediction dominated the MSPE, whereas for the mechanistic model, proportionally 0.76 of the MSPE was due to random variation. CCC analysis confirmed the higher accuracy and precision of the mechanistic model compared with the energy evaluation systems. The error of prediction was positively related to grass protein content for the Dutch NE system, and was positively related to grass DMI level for all models. In conclusion, current energy evaluation systems overestimate energy supply relative to energy requirement on grass-based diets for dairy cattle. The mechanistic model predicted glucogenic nutrients to limit performance of dairy cattle on grass-based diets, and proved to be more accurate and precise than the energy systems. The mechanistic model could be improved by allowing the glucose maintenance and utilization requirement parameters to be variable. (C) 2007 Elsevier B.V. All rights reserved.
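For reference, the two statistics used in the comparison can be computed as in the Python sketch below: the MSPE with one common decomposition into bias, slope and random components, and Lin's concordance correlation coefficient. Here "pred" stands for the predicted supply and "obs" for the predicted requirement; the decomposition convention is a standard one and may differ in detail from the one used in the paper.

import numpy as np

def mspe_decomposition(pred, obs):
    """Mean square prediction error and its classical decomposition into
    overall bias, deviation of the regression slope from unity, and random
    variation (population moments; bias + slope + random == MSPE)."""
    p, o = np.asarray(pred, float), np.asarray(obs, float)
    rho = np.corrcoef(p, o)[0, 1]
    mspe = np.mean((p - o) ** 2)
    bias = (p.mean() - o.mean()) ** 2            # error due to overall bias
    slope = (p.std() - rho * o.std()) ** 2       # error due to regression
    random = (1 - rho ** 2) * o.var()            # error due to random variation
    return {"MSPE": mspe, "bias": bias, "slope": slope, "random": random,
            "rMSPE_as_prop_of_pred": np.sqrt(mspe) / p.mean()}

def ccc(x, y):
    """Lin's concordance correlation coefficient (accuracy x precision)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)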

Relevance: 100.00%

Abstract:

We developed a stochastic simulation model incorporating most processes likely to be important in the spread of Phytophthora ramorum and similar diseases across the British landscape (covering Rhododendron ponticum in woodland and nurseries, and Vaccinium myrtillus in heathland). The simulation allows for movements of diseased plants within a realistically modelled trade network and for long-distance natural dispersal. A series of simulation experiments was run with the model, varying the epidemic pressure and the linkage between natural vegetation and the horticultural trade, with or without disease spread in commercial trade, and with or without inspections-with-eradication, to give a 2 × 2 × 2 × 2 factorial started at 10 arbitrary locations spread across England. Fifty replicate simulations were made at each set of parameter values. Individual epidemics varied dramatically in size due to stochastic effects throughout the model. Across a range of epidemic pressures, the size of the epidemic was 5-13 times larger when commercial movement of plants was included. A key unknown factor in the system is the area of susceptible habitat outside the nursery system. Inspections, made at 90-day intervals with a probability of detection and an efficiency of infected-plant removal of 80%, reduced the size of epidemics by about 60% across the three sectors with a density of 1% susceptible plants in broadleaf woodland and heathland. Reducing this density to 0.1% largely isolated the trade network, so that inspections reduced the final epidemic size by over 90%, and most epidemics ended without escape into nature. Even in this case, however, major wild epidemics developed in a few percent of cases. Provided the number of new introductions remains low, the current inspection policy will control most epidemics. However, as the rate of introduction increases, it can overwhelm any reasonable inspection regime, largely due to spread prior to detection. (C) 2009 Elsevier B.V. All rights reserved.
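A toy Python sketch of the two mechanisms varied in the factorial design, spread along a trade network versus local natural dispersal, plus periodic imperfect inspections, is given below. All parameters, the one-dimensional landscape and the network structure are invented for illustration and bear no quantitative relation to the published model.

import random

def simulate(n_sites=200, trade_links=3, trade_spread=True,
             inspect=True, p_detect=0.8, inspect_every=90,
             p_trade=0.02, p_local=0.01, days=3 * 365, seed=0):
    """Each site is infected or not. Infection moves along random trade
    links and to neighbouring sites; inspections remove detected
    infections at regular intervals. Purely illustrative."""
    rng = random.Random(seed)
    links = {i: rng.sample(range(n_sites), trade_links) for i in range(n_sites)}
    infected = {rng.randrange(n_sites)}                 # one initial focus
    for day in range(days):
        new = set()
        for i in infected:
            if trade_spread:                            # movement of traded plants
                new |= {j for j in links[i] if rng.random() < p_trade}
            for j in (i - 1, i + 1):                    # short-range natural dispersal
                if 0 <= j < n_sites and rng.random() < p_local:
                    new.add(j)
        infected |= new
        if inspect and (day + 1) % inspect_every == 0:  # imperfect inspection round
            infected = {i for i in infected if rng.random() > p_detect}
    return len(infected)

# One slice of the factorial: trade spread on/off x inspections on/off.
for trade in (True, False):
    for insp in (True, False):
        print(trade, insp, simulate(trade_spread=trade, inspect=insp))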

Relevance: 100.00%

Abstract:

We consider a non-local version of the NJL model, based on a separable quark-quark interaction. The interaction is extended to include terms that bind vector and axial-vector mesons. The non-locality means that no further regulator is required. Moreover the model is able to confine the quarks by generating a quark propagator without poles at real energies. Working in the ladder approximation, we calculate amplitudes in Euclidean space and discuss features of their continuation to Minkowski energies. Conserved currents are constructed and we demonstrate their consistency with various Ward identities. Various meson masses are calculated, along with their strong and electromagnetic decay amplitudes. We also calculate the electromagnetic form factor of the pion, as well as form factors associated with the processes γγ* → π0 and ω → π0γ*. The results are found to lead to a satisfactory phenomenology and lend some dynamical support to the idea of vector-meson dominance.
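For orientation, the separable non-local structure referred to here can be written schematically in LaTeX as below; this is the generic form used in non-local NJL studies, and the precise currents, channels and form factor of the paper may differ.

% Quark bilinear smeared by a form factor f (schematic):
j_a(x) \;=\; \int d^4x_1\, d^4x_2\; f(x_1)\, f(x_2)\;
        \bar{q}(x - x_1)\, \Gamma_a\, q(x + x_2),
\qquad
\mathcal{L}_{\mathrm{int}} \;=\; \frac{G}{2}\, j_a(x)\, j_a(x).

% In momentum space the four-point interaction factorises (is "separable"),
% \sim G\, f(p_1)\, f(p_2)\, f(p_3)\, f(p_4)\, (\Gamma_a \otimes \Gamma_a),
% so a form factor that falls off at large momenta damps all loop integrals,
% which is why no further regulator is required.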

Relevance: 100.00%

Abstract:

The Hadley Centre Global Environmental Model (HadGEM) includes two aerosol schemes: the Coupled Large-scale Aerosol Simulator for Studies in Climate (CLASSIC) and the new Global Model of Aerosol Processes (GLOMAP-mode). GLOMAP-mode is a modal aerosol microphysics scheme that simulates not only aerosol mass but also aerosol number, represents internally mixed particles, and includes aerosol microphysical processes such as nucleation. In this study, both schemes provide hindcast simulations of natural and anthropogenic aerosol species for the period 2000-2006. HadGEM simulations of aerosol optical depth using GLOMAP-mode compare better than those using CLASSIC against a data-assimilated aerosol re-analysis and ground-based aerosol observations. Because of differences in wet deposition rates, the residence time of GLOMAP-mode sulphate aerosol is two days longer than that of CLASSIC sulphate aerosol, whereas the black carbon residence time is much shorter. As a result, CLASSIC underestimates aerosol optical depths in continental regions of the Northern Hemisphere and likely overestimates absorption in remote regions. Aerosol direct and first indirect radiative forcings are computed from simulations of aerosols with emissions for the years 1850 and 2000. In 1850, GLOMAP-mode predicts lower aerosol optical depths and higher cloud droplet number concentrations than CLASSIC. Consequently, simulated clouds are much less susceptible to natural and anthropogenic aerosol changes when the microphysical scheme is used. In particular, the response of cloud condensation nuclei to an increase in dimethyl sulphide emissions becomes a factor of four smaller. The combined effect of different 1850 baselines, residence times, and abilities to affect cloud droplet number leads to substantial differences in the aerosol forcings simulated by the two schemes. GLOMAP-mode finds a present-day direct aerosol forcing of −0.49 W m−2 on a global average, 72% stronger than the corresponding forcing from CLASSIC. This difference is compensated by changes in first indirect aerosol forcing: the forcing of −1.17 W m−2 obtained with GLOMAP-mode is 20% weaker than with CLASSIC. Results suggest that mass-based schemes such as CLASSIC lack the necessary sophistication to provide realistic input to aerosol-cloud interaction schemes. Furthermore, the importance of the 1850 baseline highlights how model skill in predicting present-day aerosol does not guarantee reliable forcing estimates. These findings suggest that the more complex representation of aerosol processes in microphysical schemes improves the fidelity of simulated aerosol forcings.
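Reading "stronger" and "weaker" as relative changes in magnitude with respect to the CLASSIC values (an interpretation, not stated explicitly in the abstract), the implied CLASSIC forcings follow from simple arithmetic:

\text{direct (CLASSIC)} \approx \frac{-0.49\ \mathrm{W\,m^{-2}}}{1.72} \approx -0.28\ \mathrm{W\,m^{-2}},
\qquad
\text{first indirect (CLASSIC)} \approx \frac{-1.17\ \mathrm{W\,m^{-2}}}{1 - 0.20} \approx -1.46\ \mathrm{W\,m^{-2}}.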

Relevance: 100.00%

Abstract:

A model based on graph isomorphisms is used to formalize software evolution. Step by step we narrow the search space through an informed selection of attributes based on the current state of the art in software engineering, and generate a seed solution. We then traverse the resulting space using graph isomorphisms and other set operations over the vertex sets. The new solutions preserve the desired attributes. The goal of defining an isomorphism-based search mechanism is to construct predictors of evolution that can facilitate the automation of the 'software factory' paradigm. The model allows for automation via software tools implementing the concepts.
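A small Python sketch of the kind of test this implies, using networkx graph isomorphism with attribute-aware node matching to decide whether a candidate evolution step preserves the attributes of the seed solution, is shown below. The library calls are standard networkx; the attribute name, the graphs and the acceptance criterion are invented for illustration and are not the paper's predictor.

import networkx as nx
from networkx.algorithms import isomorphism as iso

def preserves_attributes(seed, candidate, attr="component_kind"):
    """Accept a candidate evolution step only if it is isomorphic to the
    seed solution when vertices are matched on the selected attribute
    (illustrative criterion only)."""
    return nx.is_isomorphic(
        seed, candidate, node_match=iso.categorical_node_match(attr, None))

# Seed solution: a tiny dependency graph with typed components.
seed = nx.Graph()
seed.add_nodes_from([("ui", {"component_kind": "view"}),
                     ("core", {"component_kind": "service"}),
                     ("db", {"component_kind": "store"})])
seed.add_edges_from([("ui", "core"), ("core", "db")])

# Candidate produced by renaming vertices (a graph isomorphism): accepted.
renamed = nx.relabel_nodes(seed, {"ui": "web", "core": "api", "db": "sql"})
print(preserves_attributes(seed, renamed))      # True

# Candidate that changes a component's kind: structure kept, attribute lost.
mutated = renamed.copy()
mutated.nodes["sql"]["component_kind"] = "cache"
print(preserves_attributes(seed, mutated))      # False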
