946 results for process model consolidation


Relevance:

30.00%

Publisher:

Abstract:

1. Species distribution models are increasingly used to address conservation questions, so their predictive capacity requires careful evaluation. Previous studies have shown how individual factors used in model construction can affect prediction. Although some factors probably have negligible effects compared to others, their relative effects are largely unknown. 2. We introduce a general "virtual ecologist" framework to study the relative importance of factors involved in the construction of species distribution models. 3. We illustrate the framework by examining the relative importance of five key factors (a missing covariate, spatial autocorrelation due to a dispersal process in presences/absences, sample size, sampling design and modeling technique) in a real study framework based on plants in a mountain landscape at regional scale, and show that, for the parameter values considered here, most of the variation in prediction accuracy is due to sample size and modeling technique. Contrary to repeatedly reported concerns, spatial autocorrelation has only comparatively small effects. 4. This study shows the importance of using a nested statistical framework to evaluate the relative effects of factors that may affect species distribution models.
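The "virtual ecologist" idea can be illustrated with a minimal sketch: a virtual species with a known response to the environment is simulated, sampled at different sample sizes, and refitted with different modeling techniques, so each factor's contribution to prediction accuracy can be compared against the known truth. The landscape, response curve, factor levels and model choices below are invented for illustration and are not the study's actual design.

```python
# Minimal "virtual ecologist" sketch: the true species-environment relationship is known
# by construction, so fitted models can be scored across factor levels (sample size,
# modeling technique) against the simulated truth.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Virtual landscape: one environmental covariate and the "true" occurrence probability.
env = rng.uniform(-2, 2, size=5000)
p_true = 1 / (1 + np.exp(-(1.5 * env - 0.5)))        # hypothetical response curve
occurrence = rng.binomial(1, p_true)                  # virtual presences/absences

models = {"GLM": LogisticRegression(), "RF": RandomForestClassifier(n_estimators=100)}
for n in (50, 200, 1000):                             # factor 1: sample size
    idx = rng.choice(len(env), size=n, replace=False) # simple random sampling design
    for name, model in models.items():                # factor 2: modeling technique
        model.fit(env[idx].reshape(-1, 1), occurrence[idx])
        pred = model.predict_proba(env.reshape(-1, 1))[:, 1]
        auc = roc_auc_score(occurrence, pred)         # accuracy against the known truth
        print(f"n={n:5d}  {name}: AUC={auc:.3f}")
```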

Relevance:

30.00%

Publisher:

Abstract:

The goal of this dissertation is to find and provide the basis for a managerial tool that allows a firm to easily express its business logic. The methodological basis for this work is design science, where the researcher builds an artifact to solve a specific problem. In this case the aim is to provide an ontology that makes it possible to make a firm's business model explicit. In other words, the proposed artifact helps a firm to formally describe its value proposition, its customers, the relationship with them, the necessary intra- and inter-firm infrastructure and its profit model. Such an ontology is relevant because until now there has been no model that expresses a company's global business logic from a pure business point of view. Previous models essentially take an organizational or process perspective or cover only parts of a firm's business logic. The four main pillars of the ontology, which are inspired by management science and enterprise and process modeling, are product, customer interface, infrastructure and finance. The ontology is validated by case studies and a panel of experts and managers. The dissertation also provides a software prototype to capture a company's business model in an information system. The last part of the thesis consists of a demonstration of the value of the ontology for business strategy and Information Systems (IS) alignment. Structure of this thesis: The dissertation is structured in nine parts: Chapter 1 presents the motivations of this research, the research methodology with which the goals shall be achieved and why this dissertation presents a contribution to research. Chapter 2 investigates the origins, the term and the concept of business models. It defines what is meant by business models in this dissertation and how they are situated in the context of the firm. In addition, this chapter outlines the possible uses of the business model concept. Chapter 3 gives an overview of the research done in the field of business models and enterprise ontologies. Chapter 4 introduces the major contribution of this dissertation: the business model ontology. In this part of the thesis the elements, attributes and relationships of the ontology are explained and described in detail. Chapter 5 presents a case study of the Montreux Jazz Festival, whose business model was captured by applying the structure and concepts of the ontology. In effect, it gives an impression of what a business model description based on the ontology looks like. Chapter 6 shows an instantiation of the ontology into a prototype tool: the Business Model Modelling Language (BM2L). This is an XML-based description language that makes it possible to capture and describe the business model of a firm and has a large potential for further applications. Chapter 7 concerns the evaluation of the business model ontology. The evaluation builds on a literature review, a set of interviews with practitioners and case studies. Chapter 8 gives an outlook on possible future research and applications of the business model ontology. The main areas of interest are the alignment of business and information technology (IT)/information systems (IS) and business model comparison. Finally, chapter 9 presents some conclusions.
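BM2L is described only as an XML-based language; its actual schema is not given here, so the element names below are purely illustrative. The sketch shows how the four pillars of the ontology (product, customer interface, infrastructure, finance) could, in principle, be captured in an XML document using Python's standard library.

```python
# Illustrative sketch only: the element names are hypothetical, not the real BM2L schema.
import xml.etree.ElementTree as ET

bm = ET.Element("businessModel", firm="ExampleCo")
product = ET.SubElement(bm, "product")
ET.SubElement(product, "valueProposition").text = "Streaming access to live jazz recordings"
customer = ET.SubElement(bm, "customerInterface")
ET.SubElement(customer, "targetCustomer").text = "Festival attendees and music fans"
ET.SubElement(customer, "relationship").text = "Subscription with community features"
infra = ET.SubElement(bm, "infrastructure")
ET.SubElement(infra, "partnership").text = "Rights holders and hosting provider"
finance = ET.SubElement(bm, "finance")
ET.SubElement(finance, "revenueStream").text = "Monthly subscriptions"
ET.SubElement(finance, "costStructure").text = "Licensing, streaming bandwidth, staff"

print(ET.tostring(bm, encoding="unicode"))
```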

Relevance:

30.00%

Publisher:

Abstract:

The changing business environment demands that chemical industrial processes be designed such that they enable the attainment of multi-objective requirements and the enhancement of innovative design activities. The requirements and key issues for conceptual process synthesis have changed and are no longer those of conventional process design; there is an increased emphasis on innovative research to develop new concepts, novel techniques and processes. A central issue, how to enhance the creativity of the design process, requires further research into methodologies. The thesis presents a conflict-based methodology for conceptual process synthesis. The motivation of the work is to support decision-making in design and synthesis and to enhance the creativity of design activities. It deals with the multi-objective requirements and combinatorially complex nature of process synthesis. The work is carried out based on a new concept and design paradigm adapted from the Theory of Inventive Problem Solving methodology (TRIZ). TRIZ is claimed to be a 'systematic creativity' framework thanks to its knowledge-based and evolutionary-directed nature. The conflict concept, when applied to process synthesis, throws new light on design problems and activities. The conflict model is proposed as a way of describing design problems and handling design information. The design tasks are represented as groups of conflicts, and a conflict table is built as the design tool. The general design paradigm is formulated to handle conflicts in both the early and detailed design stages. The methodology developed reflects the conflict nature of process design and synthesis. The method is implemented and verified through case studies of distillation system design, reactor/separator network design and waste minimization. Handling the various levels of conflicts evolves possible design alternatives through a systematic procedure that establishes an efficient and compact solution space for the detailed design stage. The approach also provides the information needed to bridge the gap between the application of qualitative knowledge in the early stage and quantitative techniques in the detailed design stage. Enhancement of creativity is realized through the better understanding of design problems gained from the conflict concept and through the improvement of engineering design practice via the systematic nature of the approach.
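The abstract does not give the internal format of the conflict table, so the fragment below is only a guess at how design conflicts and candidate resolution principles (in the spirit of a TRIZ contradiction matrix) might be recorded and queried; all entries are invented.

```python
# Hypothetical conflict table: each design conflict pairs an improving objective with a
# worsening one and lists candidate resolution principles (names invented for illustration).
conflict_table = {
    ("product purity", "energy consumption"): ["heat integration", "side-stream withdrawal"],
    ("conversion", "reactor volume"): ["recycle of unreacted feed", "staged feeding"],
    ("waste minimization", "capital cost"): ["process intensification", "solvent substitution"],
}

def resolutions(improving: str, worsening: str) -> list[str]:
    """Return candidate principles for a given conflict, or an empty list if unknown."""
    return conflict_table.get((improving, worsening), [])

if __name__ == "__main__":
    for principle in resolutions("conversion", "reactor volume"):
        print("candidate:", principle)
```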

Relevance:

30.00%

Publisher:

Abstract:

The accumulation of aqueous pollutants is becoming a global problem. The search for suitable methods and/or combinations of water treatment processes is a task that can slow down and eventually stop the process of water pollution. In this work, the method of wet oxidation was considered as an appropriate technique for the elimination of the impurities present in paper mill process waters. It has been shown that, when combined with traditional wastewater treatment processes, wet oxidation offers many advantages. The combination of coagulation and wet oxidation offers a new opportunity for the improvement of the quality of wastewater designated for discharge or recycling. First of all, the utilization of coagulated sludge via wet oxidation provides a conditioning process for the sludge, i.e. dewatering, which is rather difficult to carry out with untreated waste. Secondly, Fe2(SO4)3, which is employed earlier as a coagulant, transforms the conventional wet oxidation process into a catalytic one. The use of coagulation as a post-treatment for wet oxidation can reduce the brown hue that usually accompanies partial oxidation. As a result, the supernatant is less colored and also contains a low enough amount of Fe ions to be considered for recycling inside mills. The thickened part that contains the metal ions is then recycled back to the wet oxidation system. It was also observed that wet oxidation is favorable for the degradation of pitch substances (LWEs) and lignin that are present in the process waters of paper mills. Rather low operating temperatures are needed for wet oxidation in order to destroy LWEs. Oxidation in alkaline media not only provides faster elimination of pitch and lignin but also significantly improves the biodegradability of wastewater that contains lignin and pitch substances. During the course of the kinetic studies, a model which can predict the enhancement of the biodegradability of wastewater was elaborated. The model includes lumped concentrations such as the chemical oxygen demand and biochemical oxygen demand and reflects a generalized reaction network of oxidative transformations. Later developments incorporated a new lump, the immediately available biochemical oxygen demand, which increased the fidelity of the predictions made by the model. Since changes in biodegradability occur simultaneously with the destruction of LWEs, an attempt was made to combine these two facts for modeling purposes.
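The abstract describes a lumped kinetic model linking chemical oxygen demand (COD), biochemical oxygen demand (BOD) and an "immediately available" BOD fraction, but does not give the rate equations. The sketch below integrates a generic first-order lumped network of this kind only to illustrate the modeling idea; the rate constants, yield factor and initial concentrations are invented.

```python
# Generic lumped-parameter sketch (not the thesis' actual rate equations or constants):
# oxidizable matter (COD lump) is degraded, part of it appears as biodegradable matter
# (BOD lump), and a fraction of that becomes immediately available (BODia lump).
import numpy as np
from scipy.integrate import solve_ivp

k_ox, k_bio, f_bio = 0.05, 0.08, 0.4   # hypothetical rate constants [1/min] and yield

def rhs(t, y):
    cod, bod, bod_ia = y
    d_cod = -k_ox * cod                       # removal of oxidizable matter
    d_bod = f_bio * k_ox * cod - k_bio * bod  # formation of biodegradable intermediates
    d_bod_ia = k_bio * bod                    # transfer to the immediately available lump
    return [d_cod, d_bod, d_bod_ia]

sol = solve_ivp(rhs, (0, 120), [1000.0, 50.0, 0.0], t_eval=np.linspace(0, 120, 7))
for t, (cod, bod, bod_ia) in zip(sol.t, sol.y.T):
    print(f"t={t:5.1f} min  COD={cod:7.1f}  BOD={bod:6.1f}  BODia={bod_ia:6.1f} mg/L")
```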

Relevance:

30.00%

Publisher:

Abstract:

In the European Union, the importance of mobile communications was realized early on. The process of mobile communications becoming ubiquitous has taken time, as the innovation of mobile communications has diffused into society. The aim of this study is to find out how the evolution and spatial patterns of the diffusion of mobile communications within the European Union could be taken into account in forecasting the diffusion process. There is a relatively large amount of research on innovation diffusion at the individual (micro) and the country (macro) level, compared to the territorial level. Territorial or spatial diffusion refers either to the intra-country or inter-country diffusion of an innovation. In both settings, the diffusion of a technological innovation has received scant attention. This study adds knowledge of the diffusion between countries, focusing especially on the role of location in this process. The main findings of the study are the following: The penetration rates of the European Union member countries became more even over the period of observation, from 1981 to 2000. The common digital GSM system seems to have hastened this process. As to the role of location in the diffusion process, neighboring countries have had similar diffusion processes. They can be grouped into three: the Nordic countries, the central and southern European countries, and the remote southern European countries. The neighborhood effect is also dominant in the gravity model which is used for modeling the adoption timing of the countries. The subsequent diffusion within a country, measured by the logistic model in Finland, is affected positively by its economic situation, and it seems to level off at some 92%. Considering the launch of future mobile communications systems, using a common standard should imply an equal development between the countries. The launching time should be carefully selected, as diffusion is probably delayed in economic downturns. The location of a country, measured by distance, can be used in forecasting adoption and diffusion. Finally, the result that penetration rates become more even implies that in a relatively homogeneous set of countries, such as the European Union member countries, the estimated final penetration of a single country can be used for approximating the penetration of the others. The estimated eventual penetration of Finland, some 92%, should thus also be the eventual level for all the European Union countries and for the European Union as a whole.
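The intra-country diffusion is described with a logistic model leveling off at roughly 92% penetration. As a small sketch of how such a saturation level can be estimated, the code below fits a logistic curve to invented annual penetration figures (the data and starting values are not from the study).

```python
# Logistic diffusion sketch with invented annual penetration data (% of population).
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """K: saturation level, r: growth rate, t0: inflection year."""
    return K / (1 + np.exp(-r * (t - t0)))

years = np.arange(1990, 2001)
penetration = np.array([5, 9, 13, 20, 29, 41, 50, 58, 68, 75, 80], dtype=float)  # made up

(K, r, t0), _ = curve_fit(logistic, years, penetration, p0=(90, 0.5, 1996))
print(f"estimated saturation K = {K:.1f}%  growth rate r = {r:.2f}  inflection year = {t0:.1f}")
```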

Relevance:

30.00%

Publisher:

Abstract:

The application of forced unsteady-state reactors to the selective catalytic reduction of nitrogen oxides (NOx) with ammonia (NH3) is supported by the fact that favorable temperature and composition distributions, which cannot be achieved in any steady-state regime, can be obtained by means of unsteady-state operation. In normal operation the low exothermicity of the selective catalytic reduction (SCR) reaction (usually carried out in the range of 280-350°C) is not enough to sustain the chemical reaction by itself. A normal mode of operation therefore usually requires a supply of supplementary heat, increasing the overall process operation cost. Through forced unsteady-state operation, the main advantage that can be obtained when exothermic reactions take place is the possibility of trapping, besides the ammonia, the moving heat wave inside the catalytic bed. The unsteady-state operation enables the exploitation of the thermal storage capacity of the catalytic bed. The catalytic bed acts as a regenerative heat exchanger, allowing auto-thermal behavior when the adiabatic temperature rise is low. Finding the optimum reactor configuration, employing the most suitable operation model and identifying the reactor behavior are highly important steps in configuring a proper device for industrial applications. The Reverse Flow Reactor (RFR), a forced unsteady-state reactor, corresponds to the above-mentioned characteristics and may be employed as an efficient device for the treatment of dilute pollutant mixtures. Beside its advantages, the main disadvantage of the RFR is the 'wash out' phenomenon: emissions of unconverted reactants at every switch of the flow direction. As a consequence, our attention was focused on finding an alternative reactor configuration to the RFR which is not affected by uncontrollable emissions of unconverted reactants. In this respect the Reactor Network (RN) was investigated. Its configuration consists of several reactors connected in a closed sequence, simulating a moving bed by changing the reactant feeding position. In the RN the flow direction is maintained, ensuring uniform catalyst exploitation, and at the same time the 'wash out' phenomenon is eliminated. The simulated moving bed (SMB) can operate in transient mode, giving practically constant exit concentrations and high conversion levels. The main advantage of reactor network operation is the possibility of obtaining auto-thermal behavior with nearly uniform catalyst utilization. However, the reactor network presents only a small range of switching times which allow an ignited state to be reached and maintained. Even so, a proper study of the complex behavior of the RN may give the necessary information to overcome the difficulties that can appear in RN operation. The complexity of unsteady-state reactors arises from the fact that these reactor types are characterized by short contact times and complex interactions between heat and mass transport phenomena. Such complex interactions can give rise to remarkably complex dynamic behavior characterized by spatio-temporal patterns, chaotic changes in concentration and traveling waves of heat or chemical reactivity. The main efforts of current research concern the improvement of contact modalities between reactants, the possibility of thermal wave storage inside the reactor and the improvement of the kinetic activity of the catalyst used.
Paying attention to the above-mentioned aspects is important when higher activity, even at low feeding temperatures, and low emissions of unconverted reactants are the main operating concerns. Also, the prediction of the reactor pseudo-steady or steady-state performance (regarding conversion, selectivity and thermal behavior) and of the dynamic reactor response during exploitation are important aspects in finding the optimal control strategy for forced unsteady-state catalytic tubular reactors. The design of an adapted reactor requires knowledge about the influence of its operating conditions on the overall process performance and a precise evaluation of the range of operating parameters for which a sustained dynamic behavior is obtained. An a priori estimation of the system parameters results in a reduction of the computational effort. Usually the convergence of unsteady-state reactor simulations requires integration over hundreds of cycles, depending on the initial guess of the parameter values. The investigation of various operation models and thermal transfer strategies gives reliable means to obtain recuperative and regenerative devices which are capable of maintaining auto-thermal behavior in the case of low-exothermicity reactions. In the present research work a gradual analysis of the SCR of NOx with ammonia in forced unsteady-state reactors was carried out. The investigation covers the presentation of the general problem of noxious emissions in the environment, the analysis of suitable catalyst types for the process, the mathematical analysis approach for modeling and finding the system solutions, and the experimental investigation of the device found to be most suitable for the present process. In order to gain information about forced unsteady-state reactor design, operation, important system parameters and their values, mathematical description, mathematical methods for solving systems of partial differential equations and other specific aspects, in a fast and easy way, a case-based reasoning (CBR) approach has been used. This approach, using the experience of past similar problems and their adapted solutions, may provide a method for gaining information and solutions for new problems related to forced unsteady-state reactor technology. As a consequence, a CBR system was implemented and a corresponding tool was developed. Further on, relaxing the hypothesis of isothermal operation, the feasibility of the SCR of NOx with ammonia in the RFR and in the RN with variable feeding position was investigated by means of numerical simulation. The hypothesis of non-isothermal operation was taken into account because, in our opinion, if a commercial catalyst is considered it is not possible to modify its chemical activity and adsorptive capacity to improve the operation, but it is possible to change the operation regime. In order to identify the most suitable device for the unsteady-state reduction of NOx with ammonia, from the perspective of recuperative and regenerative devices, a comparative analysis of the performance of the two devices mentioned above was carried out. The assumption of isothermal conditions at the beginning of the forced unsteady-state investigation simplified the analysis, making it possible to focus on the impact of the conditions and mode of operation on the dynamic features caused by the trapping of one reactant in the reactor, without considering the impact of the thermal effect on overall reactor performance.
The non-isothermal system approach has been investigated in order to point out the important influence of the thermal effect on overall reactor performance, studying the possibility of using the RFR and the RN as recuperative and regenerative devices and the possibility of achieving a sustained auto-thermal behavior in the case of the weakly exothermic SCR of NOx with ammonia and low-temperature gas feeding. Besides the influence of the thermal effect, the influence of the principal operating parameters, such as the switching time, the inlet flow rate and the initial catalyst temperature, has been stressed. This analysis is important not only because it allows a comparison between the two devices and optimisation of the operation, but also because the switching time is the main operating parameter: an appropriate choice of this parameter enables the fulfilment of the process constraints. The level of conversion achieved, the more uniform temperature profiles, the uniformity of catalyst exploitation and the much simpler mode of operation establish the RN as the more suitable device for SCR of NOx with ammonia, both in usual operation and from the perspective of control strategy implementation. Simplified theoretical models have also been proposed in order to describe the performance of forced unsteady-state reactors and to estimate their internal temperature and concentration profiles. The general idea was to extend the study of catalytic reactor dynamics by taking into account perspectives that have not been analyzed yet. The experimental investigation of the RN revealed a good agreement between the data obtained by model simulation and those obtained experimentally.
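The reactor network simulates a moving bed by periodically shifting the feed position around a closed loop of reactors while keeping the flow direction fixed; the switching time is the main operating parameter. The loop below is only a schematic of this feed-switching logic (the reactor count, switching time, inlet concentration and the placeholder reactor model are invented), not the thesis' coupled heat and mass balance model.

```python
# Schematic of reactor-network (RN) operation: the feed enters a different reactor of the
# closed loop after each switching period, while the flow direction stays the same.
N_REACTORS = 3
SWITCH_TIME = 60.0          # hypothetical switching time [s]
TOTAL_TIME = 600.0
DT = 1.0

def reactor_step(feed_conc: float, bed_temp: float) -> tuple[float, float]:
    """Placeholder single-reactor model: fixed partial NOx conversion plus mild heating.
    A real model would solve the coupled heat/mass balances along the bed."""
    conversion = 0.6
    outlet = feed_conc * (1.0 - conversion)
    return outlet, bed_temp + 0.01 * conversion * feed_conc

temps = [300.0] * N_REACTORS        # initial catalyst temperatures [degC], invented
t, feed_position = 0.0, 0
while t < TOTAL_TIME:
    if t > 0 and t % SWITCH_TIME == 0:
        feed_position = (feed_position + 1) % N_REACTORS   # shift the feed to the next reactor
    conc = 500.0                                            # inlet NOx, ppm (invented)
    # The gas passes through the reactors in loop order, starting at the current feed position.
    for i in range(N_REACTORS):
        idx = (feed_position + i) % N_REACTORS
        conc, temps[idx] = reactor_step(conc, temps[idx])
    t += DT

print(f"exit NOx approx. {conc:.1f} ppm, bed temperatures {[round(T, 1) for T in temps]} degC")
```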

Relevance:

30.00%

Publisher:

Abstract:

Background: Model organisms are used for research because they provide a framework on which to develop and optimize methods that facilitate and standardize analysis. Such organisms should be representative of the living beings for which they are to serve as proxy. However, in practice, a model organism is often selected ad hoc, and without considering its representativeness, because a systematic and rational method to include this consideration in the selection process is still lacking. Methodology/Principal Findings: In this work we propose such a method and apply it in a pilot study of strengths and limitations of Saccharomyces cerevisiae as a model organism. The method relies on the functional classification of proteins into different biological pathways and processes and on full proteome comparisons between the putative model organism and other organisms for which we would like to extrapolate results. Here we compare S. cerevisiae to 704 other organisms from various phyla. For each organism, our results identify the pathways and processes for which S. cerevisiae is predicted to be a good model to extrapolate from. We find that animals in general and Homo sapiens in particular are some of the non-fungal organisms for which S. cerevisiae is likely to be a good model in which to study a significant fraction of common biological processes. We validate our approach by correctly predicting which organisms are phenotypically more distant from S. cerevisiae with respect to several different biological processes. Conclusions/Significance: The method we propose could be used to choose appropriate substitute model organisms for the study of biological processes in other species that are harder to study. For example, one could identify appropriate models to study either pathologies in humans or specific biological processes in species with a long development time, such as plants.
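The method scores, for each biological pathway or process, how well the candidate model organism covers the corresponding proteins of a target organism. The snippet below is a toy version of such a per-pathway coverage score; the pathway assignments and ortholog calls are placeholders, not the paper's actual proteome comparison.

```python
# Toy per-pathway coverage score: fraction of a target organism's pathway proteins that
# have a detected counterpart (e.g. an ortholog) in the candidate model organism.
from collections import defaultdict

# Hypothetical pathway membership in the target organism (protein -> pathway).
target_pathways = {
    "P1": "glycolysis", "P2": "glycolysis", "P3": "glycolysis",
    "P4": "DNA repair", "P5": "DNA repair",
    "P6": "photosynthesis",
}
# Hypothetical set of target proteins with an ortholog in the candidate model organism.
has_ortholog_in_model = {"P1", "P2", "P4", "P5"}

by_pathway = defaultdict(list)
for protein, pathway in target_pathways.items():
    by_pathway[pathway].append(protein in has_ortholog_in_model)

for pathway, hits in sorted(by_pathway.items()):
    coverage = sum(hits) / len(hits)
    verdict = "likely a useful model" if coverage >= 0.8 else "poorly represented"
    print(f"{pathway:15s} coverage = {coverage:.2f}  -> {verdict}")
```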

Relevance:

30.00%

Publisher:

Abstract:

Flow structures above vegetation canopies have received much attention within terrestrial and aquatic literature. This research has led to a good process understanding of mean and turbulent canopy flow structure. However, much of this research has focused on rigid or semi-rigid vegetation with relatively simple morphology. Aquatic macrophytes differ from this form, exhibiting more complex morphologies, predominantly horizontal posture in the flow and a different force balance. While some recent studies have investigated such canopies, there is still the need to examine the relevance and applicability of general canopy layer theory to these types of vegetation. Here, we report on a range of numerical experiments, using both semi-rigid and highly flexible canopies. The results for the semi-rigid canopies support existing canopy layer theory. However, for the highly flexible vegetation, the flow pattern is much more complex and suggests that a new canopy model may be required.

Relevance:

30.00%

Publisher:

Abstract:

It has been convincingly argued that computer simulation modeling differs from traditional science. If we understand simulation modeling as a new way of doing science, the manner in which scientists learn about the world through models must also be considered differently. This article examines how researchers learn about environmental processes through computer simulation modeling. Suggesting a conceptual framework anchored in a performative philosophical approach, we examine two modeling projects undertaken by research teams in England, both aiming to inform flood risk management. One of the modeling teams operated in the research wing of a consultancy firm; the other consisted of university scientists taking part in an interdisciplinary project experimenting with public engagement. We found that in the first context the use of standardized software was critical to the process of improvisation; the obstacles emerging in the process concerned data and were resolved by exploiting affordances for generating, organizing, and combining scientific information in new ways. In the second context, an environmental competency group, obstacles were related to the computer program, and affordances emerged in the combination of experience-based knowledge with the scientists' skill, enabling a reconfiguration of the mathematical structure of the model and allowing the group to learn about local flooding.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we present a computer simulation study of the ion binding process at an ionizable surface using a semi-grand canonical Monte Carlo method that models the surface as a discrete distribution of charged and neutral functional groups in equilibrium with explicit ions modelled in the context of the primitive model. The parameters of the simulation model were tuned and checked by comparison with experimental titrations of carboxylated latex particles in the presence of different ionic strengths of monovalent ions. The titration of these particles was analysed by calculating the degree of dissociation of the latex functional groups vs. pH curves at different background salt concentrations. As the charge of the titrated surface changes during the simulation, a procedure to keep the electroneutrality of the system is required. Here, two approaches are used with the choice depending on the ion selected to maintain electroneutrality: counterion or coion procedures. We compare and discuss the difference between the procedures. The simulations also provided a microscopic description of the electrostatic double layer (EDL) structure as a function of pH and ionic strength. The results allow us to quantify the effect of the size of the background salt ions and of the surface functional groups on the degree of dissociation. The non-homogeneous structure of the EDL was revealed by plotting the counterion density profiles around charged and neutral surface functional groups. © 2011 American Institute of Physics.
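In a titration simulation of this kind, trial moves change the protonation state of a surface group while a counterion (or coion) is inserted or deleted so the system stays electroneutral, with acceptance governed by pH, pKa and the electrostatic energy change. The fragment below shows only the acceptance rule for such a move in schematic form; the electrostatic term is a stub and the constants are illustrative, so this is not the paper's simulation code.

```python
# Schematic protonation-state move for a titration Monte Carlo simulation.
# The electrostatic term is a stub; in the real model it comes from primitive-model
# electrostatics and includes the ion insertion/deletion that keeps the box electroneutral.
import math
import random

pH, pKa = 5.0, 4.5          # illustrative values
LN10 = math.log(10.0)       # converts (pH - pKa) into units of kT

def delta_electrostatic(deprotonate: bool) -> float:
    """Stub for the electrostatic energy change of the move, in kT (placeholder value)."""
    return 2.0 if deprotonate else -2.0

def accept(deprotonate: bool) -> bool:
    # Ideal (chemical) contribution: deprotonation is favored when pH > pKa.
    chem = -LN10 * (pH - pKa) if deprotonate else LN10 * (pH - pKa)
    d_u = chem + delta_electrostatic(deprotonate)     # total energy change in kT
    return random.random() < min(1.0, math.exp(-d_u)) # Metropolis acceptance rule

# Fraction of deprotonation attempts accepted under these illustrative conditions.
accepted = sum(accept(deprotonate=True) for _ in range(10000))
print(f"deprotonation acceptance fraction = {accepted / 10000:.2f}")
```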

Relevance:

30.00%

Publisher:

Abstract:

A variation of task analysis was used to build an empirical model of how therapists may facilitate the client assimilation process, as described in the Assimilation of Problematic Experiences Scale. A rational model was specified and considered in light of an analysis of therapist in-session performances (N = 117) drawn from six inpatient therapies for depression. The therapist interventions were measured by the Comprehensive Psychotherapeutic Interventions Rating Scale. Consistent with the rational model, confronting interventions were particularly useful in helping clients elaborate insight. However, rather than there being a small number of progress-related interventions at lower levels of assimilation, therapists' use of interventions was broader than hypothesized and drew from a wide range of therapeutic approaches. Concerning the higher levels of assimilation, there were insufficient data to allow an analysis of the therapists' progress-related interventions.

Relevance:

30.00%

Publisher:

Abstract:

Today's business environment has become increasingly unpredictable and fast changing because of global competition. This new environment requires companies to organize their control differently, e.g. through logistic process thinking. Logistic process thinking in software engineering applies the principles of production processes to immaterial products. Processes must be optimized so that every phase adds value to the customer, and lead times can be cut shorter to meet the new customer requirements. The purpose of this thesis is to examine and optimize the testing processes of software engineering, concentrating on module testing, functional testing and their interface. The concept of logistic process thinking is introduced through the production process, the value added model and process management. Testing theory based on the literature is also presented, concentrating on module testing and functional testing. The testing processes of the Case Company are presented together with the project models in which they are implemented. The real-life practices in module testing and functional testing and their interface are examined through interviews. These practices are analyzed against the processes and the testing theory, through which ideas for optimizing the testing process are introduced. The project world of the Case Company is also introduced, together with two example testing projects in different life cycle phases. The examples give a view of how much of the project effort is put into different types of testing.

Relevance:

30.00%

Publisher:

Abstract:

Objective: The processes of change implied in weight management remain unclear. The present study aimed to identify these processes by validating a questionnaire designed to assess processes of change (the P-Weight) in line with the transtheoretical model. The relationship of processes of change with stages of change and other external variables is also examined. Methods: Participants were 723 people from community and clinical settings in Barcelona. Their mean age was 32.07 (SD = 14.55) years; most of them were women (75.0%), and their mean BMI was 26.47 (SD = 8.52) kg/m2. They all completed the P-Weight and the stages of change questionnaire (S-Weight), both applied to weight management, as well as two subscales about concern with dieting from the Eating Disorders Inventory-2 and Eating Attitudes Test-40 questionnaires. Results: A 34-item version of the P-Weight was obtained by means of a refinement process. The principal components analysis applied to half of the sample identified four processes of change. A confirmatory factor analysis was then carried out with the other half of the sample, revealing that the model of four freely correlated first-order factors showed the best fit (GFI = 0.988, AGFI = 0.986, NFI = 0.986, and SRMR = 0.0559). Corrected item-total correlations (0.322-0.865) and Cronbach's alpha coefficients (0.781-0.960) were adequate. The relationship between the P-Weight and the S-Weight and the concern-with-dieting measures from other questionnaires supported the validity of the scale. Conclusion: The study identified processes of change involved in weight management and reports adequate psychometric properties of the P-Weight. It also reveals the relationship between processes and stages of change and other external variables.
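Cronbach's alpha is reported as the internal-consistency statistic for the P-Weight subscales. As a small illustration of how that coefficient is computed (with made-up item scores, not the P-Weight data), alpha follows from the item variances and the variance of the total score:

```python
# Cronbach's alpha for a set of questionnaire items (rows = respondents, cols = items).
# The scores below are invented; they only illustrate the computation.
import numpy as np

scores = np.array([
    [4, 5, 4, 3],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
], dtype=float)

k = scores.shape[1]                              # number of items
item_var = scores.var(axis=0, ddof=1).sum()      # sum of the item variances
total_var = scores.sum(axis=1).var(ddof=1)       # variance of the total score
alpha = (k / (k - 1)) * (1 - item_var / total_var)
print(f"Cronbach's alpha = {alpha:.3f}")
```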

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: Proper delineation of ocular anatomy in 3-dimensional (3D) imaging is a big challenge, particularly when developing treatment plans for ocular diseases. Magnetic resonance imaging (MRI) is presently used in clinical practice for diagnosis confirmation and treatment planning for treatment of retinoblastoma in infants, where it serves as a source of information, complementary to the fundus or ultrasonographic imaging. Here we present a framework to fully automatically segment the eye anatomy for MRI based on 3D active shape models (ASM), and we validate the results and present a proof of concept to automatically segment pathological eyes. METHODS AND MATERIALS: Manual and automatic segmentation were performed in 24 images of healthy children's eyes (3.29 ± 2.15 years of age). Imaging was performed using a 3-T MRI scanner. The ASM consists of the lens, the vitreous humor, the sclera, and the cornea. The model was fitted by first automatically detecting the position of the eye center, the lens, and the optic nerve, and then aligning the model and fitting it to the patient. We validated our segmentation method by using a leave-one-out cross-validation. The segmentation results were evaluated by measuring the overlap, using the Dice similarity coefficient (DSC) and the mean distance error. RESULTS: We obtained a DSC of 94.90 ± 2.12% for the sclera and the cornea, 94.72 ± 1.89% for the vitreous humor, and 85.16 ± 4.91% for the lens. The mean distance error was 0.26 ± 0.09 mm. The entire process took 14 seconds on average per eye. CONCLUSION: We provide a reliable and accurate tool that enables clinicians to automatically segment the sclera, the cornea, the vitreous humor, and the lens, using MRI. We additionally present a proof of concept for fully automatically segmenting eye pathology. This tool reduces the time needed for eye shape delineation and thus can help clinicians when planning eye treatment and confirming the extent of the tumor.
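Segmentation accuracy is reported with the Dice similarity coefficient (DSC), which measures the voxel overlap between the automatic and the manual segmentation. A minimal NumPy version of the metric (independent of the paper's pipeline, shown on a tiny synthetic example) looks like this:

```python
# Dice similarity coefficient between two binary segmentation masks.
import numpy as np

def dice(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    """DSC = 2|A intersect B| / (|A| + |B|); 1.0 is perfect overlap, 0.0 is none."""
    a, b = auto_mask.astype(bool), manual_mask.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Tiny synthetic example: two 3D masks that mostly overlap.
a = np.zeros((10, 10, 10), dtype=bool); a[2:7, 2:7, 2:7] = True
b = np.zeros((10, 10, 10), dtype=bool); b[3:8, 2:7, 2:7] = True
print(f"DSC = {dice(a, b):.3f}")
```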

Relevance:

30.00%

Publisher:

Abstract:

Diagrams and tools help to support task modelling in engineering and process management. Unfortunately, they are unfit to help in a business context at a strategic level, because of the flexibility needed for creative thinking and user-friendly interactions. We propose a tool which bridges the gap between freedom of action, encouraging creativity, and constraints, allowing validation and advanced features.