926 results for Process Models
Abstract:
The goal was to understand, document and model how information currently flows internally in the largest dairy organization in Finland. The organization has undergone radical changes in recent years due to the economic sanctions between the European Union and Russia. The organization's ultimate goal is therefore to continue its growth by managing its sales process more efficiently. The thesis consists of a literature review and an empirical part. The literature review covers knowledge management and process modeling theories. First, the knowledge management section discusses how data, information and knowledge are exchanged in the process; knowledge management models and processes describe how knowledge is created, exchanged and managed in an organization. Second, the process modeling section visualizes the information flow by discussing modeling approaches and presenting different methods and techniques. Finally, the process documentation procedure is presented. A constructive research approach was then used to identify problems and bottlenecks related to the process, and possible solutions were proposed on that basis. The empirical part of the study is based on 37 interviews, the organization's internal data sources and the theoretical framework. The acquired data and information were used to document and model the sales process in question with a flowchart diagram. Results were derived from the construction of the flowchart diagram and the analysis of the documentation; the answers to the research questions draw on both the empirical and theoretical parts. In total, 14 problems and two bottlenecks were identified in the process. The most important problems concern the lack of a standardized approach to information sharing, insufficient utilization of information technology tools and a lack of systematic documentation. The bottlenecks are caused by the alarming number of changes made to files after their deadlines.
Abstract:
Several models have been developed to predict epidemics of arthropod-vectored plant viruses in an attempt to understand the complex but specific relationships within the three-cornered pathosystem (virus, vector and host plant), as well as its interactions with the environment. A large body of studies focuses mainly on weather-based models as management tools for monitoring pests and diseases, with very few incorporating the contribution of the vector's life processes to the disease dynamics, an essential aspect when mitigating virus incidence in a crop stand. In this study, we hypothesized that the multiplication and spread of tomato spotted wilt virus (TSWV) in a crop stand is strongly related to its influence on the preferential behaviour and life expectancy of Frankliniella occidentalis. Model dynamics of important aspects of disease development within TSWV-F. occidentalis-host plant interactions were developed, focusing on the life processes of F. occidentalis as influenced by TSWV. The results show that the influence of TSWV on F. occidentalis preferential behaviour leads to an estimated increase in the relative acquisition rate of the virus, and up to a 33% increase in the transmission rate to healthy plants. Increased life expectancy, which relates to improved fitness, also depends on the virus-induced preferential behaviour, consequently promoting multiplication and spread of the virus in a crop stand. The development of vector-based models could further help elucidate the role of tri-trophic interactions in agricultural disease systems. Using the model to examine the components of the disease process could also improve our understanding of how specific epidemiological characteristics interact to cause disease in crops. With this level of understanding, more precise control strategies for the virus and the vector can be developed.
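The abstract describes the model only qualitatively. As a purely illustrative sketch, not the authors' published equations, a compartmental plant-vector model with a hypothetical preference multiplier `pref` on virus acquisition might look like the following; all rates and initial values are invented placeholders.

```python
# Hypothetical sketch of a vector-borne plant virus model with
# virus-induced vector preference; NOT the published TSWV model.
from scipy.integrate import solve_ivp

def tswv_sketch(t, y, beta, alpha, pref, mu, gamma):
    """S, I: healthy/infected plants; X, Z: non-viruliferous/viruliferous thrips."""
    S, I, X, Z = y
    # pref > 1 mimics preference of non-viruliferous thrips for infected plants,
    # which raises the virus acquisition rate
    acquisition = alpha * pref * X * I        # thrips acquire virus on infected plants
    transmission = beta * Z * S               # viruliferous thrips infect healthy plants
    dS = -transmission
    dI = transmission
    dX = -acquisition + gamma * (X + Z) - mu * X   # vector births (non-viruliferous) and deaths
    dZ = acquisition - mu * Z
    return [dS, dI, dX, dZ]

sol = solve_ivp(tswv_sketch, (0, 60), [99, 1, 50, 0],
                args=(0.002, 0.001, 1.5, 0.05, 0.05), dense_output=True)
```

Raising `pref` above 1 in such a sketch increases both acquisition and, indirectly, onward transmission, which is the qualitative effect the study quantifies.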
Abstract:
Mass Customization (MC) is not a mature business strategy, and hence it is not clear that a single operational model, or a small group of them, is dominant. Companies tend to approach MC from either a mass production or a customization origin, and this in itself gives reason to believe that several operational models will be observable. This paper reviews actual and theoretical order fulfilment (OF) systems that enterprises could apply when offering a pre-engineered catalogue of customizable products and options. The issues considered are: how product flows are structured in relation to processes, inventories and decoupling point(s); characteristics of the OF process that inhibit or facilitate fulfilment; the logic of how products are allocated to customers; and customer factors that influence OF process design and operation. Diversity in order fulfilment structures is expected and is found in the literature. The review has identified four structural forms that have been used in a catalogue MC context: fulfilment from stock; fulfilment from a single fixed decoupling point; fulfilment from one of several fixed decoupling points; and fulfilment from several locations, with floating decoupling points. From the review it is apparent that producers are being imaginative in coping with the demands of high variety, high volume, customization and short lead times. These demands have encouraged the relationship between product, process and customer to be re-examined. Not only has this strengthened interest in commonality and postponement but, as reported in the paper, it has also led to the re-engineering of the order fulfilment process to create models with multiple fixed decoupling points and the floating decoupling point system.
Abstract:
People go through life making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we address this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme concerns the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into standard optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
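To make the dynamic-programming step concrete, here is a minimal sketch of the value-function recursion behind a recursive-logit-style route choice model on a toy acyclic network; the nodes, link utilities and scale are invented for illustration and are not taken from the thesis.

```python
# Minimal sketch of the dynamic-programming step in a recursive-logit-style
# route choice model on a toy acyclic network (links and utilities invented).
import numpy as np

# arcs[k] = list of (next_node, deterministic_utility); node 3 is the destination
arcs = {0: [(1, -1.0), (2, -1.5)],
        1: [(3, -1.0)],
        2: [(3, -0.5)],
        3: []}

V = {3: 0.0}                          # value (expected max utility) at the destination
for k in (2, 1, 0):                   # backward pass in topological order
    V[k] = np.log(sum(np.exp(u + V[j]) for j, u in arcs[k]))

# Logit choice probabilities at each node follow directly from the value functions
probs = {k: [np.exp(u + V[j] - V[k]) for j, u in arcs[k]] for k in (0, 1, 2)}
print(V[0], probs[0])                 # expected utility at the origin, first-step probabilities
```

On larger networks the same backward (or fixed-point) pass replaces explicit path enumeration, which is what makes the approach attractive for large choice sets.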
Abstract:
We study the production and signatures of doubly charged Higgs bosons (DCHBs) in the process γγ → H⁻⁻H⁺⁺ at the e⁻e⁺ International Linear Collider and the CERN Linear Collider, where the intermediate photons are given by the Weizsäcker-Williams and laser backscattering distributions.
Abstract:
Many geological formations consist of crystalline rocks that have very low matrix permeability but allow flow through an interconnected network of fractures. Understanding the flow of groundwater through such rocks is important when considering the disposal of radioactive waste in underground repositories. A specific area of interest is the conditioning of fracture transmissivities on measured values of pressure in these formations. This is the process whereby the values of fracture transmissivities in a model are adjusted to obtain a good fit of the calculated pressures to the measured pressure values. While there are existing methods to condition transmissivity fields on transmissivity, pressure and flow measurements for a continuous porous medium, there is little literature on conditioning fracture networks. Conditioning fracture transmissivities on pressure or flow values is a complex problem because the measurements are not linearly related to the fracture transmissivities and depend on all the fracture transmissivities in the network. We present a new method for conditioning fracture transmissivities on measured pressure values based on the calculation of certain basis vectors; each basis vector represents the change to the log-transmissivities of the fractures in the network that results in a unit increase in the pressure at one measurement point whilst keeping the pressure at the remaining measurement points constant. The fracture transmissivities are updated by adding a linear combination of the basis vectors, where the coefficients are obtained by minimizing an error function. A mathematical summary of the method is given. The algorithm is implemented in the existing finite element code ConnectFlow, developed and marketed by Serco Technical Services, which models groundwater flow in a fracture network. Results of the conditioning are shown for a number of simple test problems as well as for a realistic large-scale test case.
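As a purely conceptual sketch of the update loop described above (not the implementation in the paper), the conditioning step can be pictured as follows; `solve_pressures` is a hypothetical stand-in for a fracture-network flow solver (the role ConnectFlow plays in the study), and the damping factor and iteration count are arbitrary.

```python
# Conceptual sketch of basis-vector conditioning of fracture log-transmissivities.
import numpy as np

def condition_transmissivities(logT, B, p_meas, solve_pressures, n_iter=5):
    """logT: log-transmissivities of the fractures, shape (n_frac,)
    B: basis vectors, shape (n_frac, n_meas); column j raises the pressure at
       measurement point j by roughly one unit while leaving the others fixed
    p_meas: measured pressures at the observation points, shape (n_meas,)
    solve_pressures: hypothetical forward flow solver returning calculated pressures"""
    logT = np.asarray(logT, dtype=float)
    for _ in range(n_iter):
        p_calc = solve_pressures(logT)        # forward groundwater-flow model
        residual = p_meas - p_calc            # pressure misfit at the measurement points
        # In the linearised setting the combination coefficients equal the misfits;
        # a damped step keeps the nonlinear iteration stable.
        logT = logT + 0.5 * B @ residual
    return logT
```

Because the pressures depend nonlinearly on all transmissivities, the forward solve and update are iterated rather than applied once.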
Abstract:
Doctorate in Management
Abstract:
Doctor of Philosophy in Mathematics
Abstract:
Master's dissertation, Food Technology, Instituto Superior de Engenharia, Universidade do Algarve, 2014
Abstract:
Although hydrothermal carbonization of biomass components is known to be governed mainly by reaction temperature, consistent reports on the effect and statistical significance of process conditions on hydrochar properties are still lacking. The objective of this research was to determine the importance and significance of reaction temperature, retention time and solid load on the properties of hydrochar produced from an industrial lignocellulosic sludge residue. According to the results, reaction temperature and retention time had a statistically significant effect on hydrochar ash content, solid yield, carbon content, O/C ratio, energy densification and energy yield, while reactor solid load was statistically insignificant for all acquired models within the design range. Although statistically significant, the effect of retention time was 3–7 times smaller than that of reaction temperature. Predicted dry ash-free solid yields of the obtained hydrochar decreased to approximately 40% due to the dissolution of biomass components at higher reaction temperatures, while the respective oxygen contents were comparable to those of subbituminous coal. Significant increases in the carbon content of the hydrochar led to predicted energy densification ratios of 1–1.5 with respective energy yields of 60–100%. Estimated theoretical energy requirements of carbonization depended on the literature method used and were controlled mainly by reaction temperature and reactor solid load. The attained results enable future prediction of hydrochar properties from this feedstock and help in understanding the effect of process conditions on the hydrothermal treatment of lignocellulosic biomass.
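The energy figures quoted above are connected by the standard hydrochar mass-and-energy balance definitions; the short sketch below states those relations with illustrative numbers only, not values measured in the study.

```python
# Standard hydrochar reporting relations (example numbers are illustrative).
def energy_densification(hhv_hydrochar, hhv_feedstock):
    """Ratio of higher heating values of hydrochar and feedstock."""
    return hhv_hydrochar / hhv_feedstock

def energy_yield(solid_yield, densification):
    """Fraction of the feedstock's energy retained in the hydrochar."""
    return solid_yield * densification

# e.g. a 40% dry ash-free solid yield with a densification ratio of 1.5
print(energy_yield(0.40, 1.5))   # -> 0.60, i.e. a 60% energy yield
```

This is why a solid yield falling to roughly 40% can still correspond to energy yields of 60% or more when carbonization raises the heating value of the char.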
Abstract:
The International Space Station (ISS) requires a substantial amount of potable water for use by the crew. The economic and logistic limitations of transporting the vast amount of water required onboard the ISS necessitate onboard recovery and reuse of the aqueous waste streams. Various treatment technologies are employed within the ISS water processor to render the waste water potable, including filtration, ion exchange, adsorption, and catalytic wet oxidation. The ion exchange resins and adsorption media are combined in multifiltration beds for removal of ionic and organic compounds. A mathematical model (MFBMODEL™) designed to predict the performance of a multifiltration (MF) bed was developed. MFBMODEL consists of ion exchange models describing the behavior of the different resin types in a MF bed (e.g., mixed bed, strong acid cation, strong base anion, and weak base anion exchange resins) and an adsorption model capable of predicting the performance of the adsorbents in a MF bed. Multicomponent ion exchange equilibrium models that incorporate the water formation reaction, electroneutrality condition, and degree of ionization of weak acids and bases for mixed bed, strong acid cation, strong base anion, and weak base anion exchange resins were developed and verified. The equilibrium models developed use a tanks-in-series approach that allows for consideration of variable influent concentrations. The adsorption modeling approach was developed in related studies, and its application within the MFBMODEL framework is demonstrated in the Appendix to this study. MFBMODEL consists of a graphical user interface programmed in Visual Basic and Fortran computational routines. This dissertation shows MF bed modeling results in which the model is verified for a surrogate of the ISS waste shower and handwash stream. In addition, a multicomponent ion exchange model that incorporates mass transfer effects was developed; it is capable of describing the performance of strong acid cation (SAC) and strong base anion (SBA) exchange resins, but does not include reaction effects. This dissertation presents results showing the mass transfer model's capability to predict binary and multicomponent column data for SAC and SBA exchange resins. The ion exchange equilibrium and mass transfer models developed in this study are also applicable to terrestrial water treatment systems. They could be applied for removal of cations and anions from groundwater (e.g., hardness, nitrate, perchlorate) and from industrial process waters (e.g., boiler water, ultrapure water in the semiconductor industry).
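The tanks-in-series idea mentioned above can be sketched generically as a chain of well-mixed cells with a linear sorption equilibrium. This toy single-component version is far simpler than MFBMODEL's multicomponent chemistry and all parameters are invented; it only shows how a (possibly time-varying) influent propagates through the cells to an effluent breakthrough curve.

```python
# Generic tanks-in-series sketch of a sorption column with a linear
# equilibrium isotherm q = K*c; parameters are illustrative only.
import numpy as np

def tanks_in_series(c_feed, n_tanks=10, Q=1.0, V_liq=0.1, m_resin=1.0, K=5.0, dt=0.01):
    """c_feed: influent concentration at each time step (may vary over time)."""
    c = np.zeros(n_tanks)                  # liquid-phase concentration in each tank
    capacity = V_liq + m_resin * K         # effective holdup with linear equilibrium
    effluent = []
    for c_in in c_feed:
        upstream = c_in
        for i in range(n_tanks):
            # explicit mass balance per tank: accumulation = flow in - flow out
            c[i] += dt * Q * (upstream - c[i]) / capacity
            upstream = c[i]
        effluent.append(c[-1])
    return np.array(effluent)

# A step-input feed traces out an S-shaped breakthrough curve at the bed outlet
breakthrough = tanks_in_series(np.full(10000, 1.0))
```

Allowing `c_feed` to vary from step to step is what the tanks-in-series formulation buys over a simple equilibrium column calculation.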
Abstract:
The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service-Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources that they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they actually experience; administrators, in turn, will be able to maximize their total revenue by utilizing application performance models and SLAs. This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment. Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Networks and Support Vector Machines, for accurately modeling the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these modeling tools. Third, we presented an approach to optimal VM sizing by employing the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm which maximizes the SLA-generated revenue for a data center.
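As a minimal illustration of the performance-modeling idea (not the thesis' actual features, workloads, or tuning), one of the two tools named above, a Support Vector Machine regressor, can be trained to map resource allocations to a performance metric and then queried when sizing a VM; the training data below are synthetic.

```python
# Sketch: learn application performance as a function of allocated resources
# with an SVM regressor; data and latency formula are synthetic placeholders.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Synthetic training set: CPU shares, memory (MB), I/O bandwidth (MB/s)
X = rng.uniform([1, 512, 10], [8, 8192, 500], size=(200, 3))
# Invented ground-truth latency that degrades as each resource shrinks
latency = 100 / X[:, 0] + 5e4 / X[:, 1] + 2e3 / X[:, 2] + rng.normal(0, 1, 200)

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
model.fit(X, latency)

# VM sizing query: predicted latency for a candidate allocation
print(model.predict([[2, 2048, 100]]))
```

With such a model in hand, a provider could search candidate allocations for the cheapest one whose predicted performance still satisfies the SLA, which is the sizing use case the abstract describes.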
Abstract:
Ever since the birth of the Smart City paradigm, a wide variety of initiatives have sprung up around this phenomenon: best practices, projects, pilot projects, transformation plans, models, standards, indicators, measuring systems, etc. The question to ask, applicable to any government official, city planner or researcher, is whether this effect is being felt in how cities are transforming, or whether, in contrast, it is not very realistic to speak of cities imbued with this level of intelligence. Many cities are eager to define themselves as smart, but the variety, complexity and scope of the projects needed for this transformation indicate that the change process is longer than it seems. If our goal is to carry out a comparative analysis of this progress among cities using the number of projects executed and their scope as a reference for the transformation, we could find such a task inconclusive, given the huge differences in the characteristics that define each city. We believe that the subject needs simplification (simpler, more practical models) and a new approach. This paper presents a detailed analysis of the smart city transformation process in Spain and provides a support model that helps us understand the changes and the speed at which they are being implemented. To this end we define a set of elements of change called "transformation factors" that group a city's smartness into one of three levels (Low/Medium/Fully) and identify the level of advancement of this process more homogeneously. © 2016 IEEE.
Abstract:
Effective and efficient implementation of intelligent and/or recently emerged networked manufacturing systems requires enterprise-level integration. Networked manufacturing offers several advantages in the current competitive atmosphere by shortening the manufacturing cycle time and maintaining production flexibility, thereby achieving several feasible process plans. The first step in this direction is to integrate manufacturing functions such as process planning and scheduling for multiple jobs in a network-based manufacturing system. It is difficult to determine a proper plan that meets conflicting objectives simultaneously. This paper describes a mobile-agent-based negotiation approach to integrate manufacturing functions in a distributed manner; its fundamental framework and functions are presented. Moreover, an ontology has been constructed using the Protégé software, which provides the flexibility to convert knowledge into Extensible Markup Language (XML) schemas of Web Ontology Language (OWL) documents. The generated XML schemas have been used to transfer information throughout the manufacturing network for the intelligent, interoperable integration of product data models and manufacturing resources. To validate the feasibility of the proposed approach, an illustrative example covering varied production environments, including production demand fluctuations, is presented, and the performance and effectiveness of the proposed approach are compared with the evolutionary-algorithm-based Hybrid Dynamic-DNA (HD-DNA) algorithm. The results show that the proposed scheme is very effective and reasonably acceptable for the integration of manufacturing functions.
Abstract:
Recently, there has been great interest in studying the flow characteristics of suspensions in different environmental and industrial applications, such as snow avalanches, debris flows, hydrotransport systems, and material casting processes. Regarding rheological aspects, the majority of these suspensions, such as fresh concrete, behave mostly as non-Newtonian fluids. Concrete is the most widely used construction material in the world. Due to the limitations of normal concrete in terms of workability and formwork-filling ability, a new class of concrete was developed that is able to flow under its own weight, especially through narrow gaps in congested areas of the formwork. Accordingly, self-consolidating concrete (SCC) is a novel construction material that is gaining market acceptance in various applications. The higher fluidity of SCC enables it to be used in a number of special applications, such as densely reinforced sections. However, the higher flowability of SCC makes it more sensitive to segregation of coarse particles during flow (i.e., dynamic segregation) and thereafter at rest (i.e., static segregation). Dynamic segregation can increase when SCC flows over a long distance or in the presence of obstacles. Therefore, there is always a need to establish a trade-off between the flowability, passing ability, and stability properties of SCC suspensions. This should be taken into consideration when designing the casting process and the mixture proportioning of SCC; this is called the "workability design" of SCC. An efficient and inexpensive workability design approach consists of predicting and optimizing the workability of the concrete mixtures for the selected construction processes, such as transportation, pumping, casting, compaction, and finishing. Indeed, the mixture proportioning of SCC should ensure that construction quality demands are met, such as the required levels of flowability, passing ability, filling ability, and stability (dynamic and static). It is therefore necessary to develop theoretical tools to assess under what conditions the construction quality demands are satisfied. Accordingly, this thesis is dedicated to analytical and numerical simulations to predict the flow performance of SCC under different casting processes, such as pumping and tremie applications, or casting using buckets. The L-Box and T-Box set-ups can evaluate the flow performance properties of SCC (e.g., flowability, passing ability, filling ability, and shear-induced and gravitational dynamic segregation) in the casting of wall and beam elements. The specific objective of the study is to relate the numerical results of flow simulations of SCC in the L-Box and T-Box test set-ups, reported in this thesis, to the flow performance properties of SCC during casting. Accordingly, SCC is modeled as a heterogeneous material. Furthermore, an analytical model based on dam break theory is proposed to predict the flow performance of SCC in the L-Box set-up. On the other hand, the results of the numerical simulation of SCC casting in a reinforced beam are verified against experimental free-surface profiles. The results of numerical simulations of SCC casting (with SCC modeled as a single homogeneous fluid) are used to determine the critical zones corresponding to higher risks of segregation and blocking.

The effects of rheological parameters, density, particle content, distribution of reinforcing bars, and particle-bar interactions on the flow performance of SCC are evaluated using CFD simulations of SCC flow in the L-Box and T-Box test set-ups (with SCC modeled as a heterogeneous material). Two new approaches are proposed to classify SCC mixtures based on filling ability and performability, the latter combining the flowability, passing ability, and dynamic stability of SCC.
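The non-Newtonian behaviour referred to above is commonly represented for SCC by a Bingham-plastic law (a yield stress plus a plastic viscosity). The sketch below simply evaluates that constitutive relation with typical order-of-magnitude parameters; the values are illustrative and not measurements from this thesis.

```python
# Bingham-plastic description often used for SCC rheology:
# shear stress = yield stress + plastic viscosity * shear rate.
def bingham_stress(shear_rate, tau0=50.0, mu_p=40.0):
    """tau0 in Pa (yield stress), mu_p in Pa.s (plastic viscosity), shear_rate in 1/s."""
    return tau0 + mu_p * shear_rate

# A mix only flows where the driving stress exceeds tau0, which is why
# flowability (low tau0) must be traded off against resistance to segregation
# in the workability design discussed above.
print(bingham_stress(5.0))   # -> 250.0 Pa at a shear rate of 5 1/s
```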