35 results for Complexity of Relations


Relevance:

90.00%

Publisher:

Abstract:

There is an increasing reliance on computers to solve complex engineering problems, because computers support the development and implementation of adequate and clear models and can, in particular, minimize the financial resources required. The ability of computers to perform complex calculations at high speed has enabled the creation of highly complex systems to model real-world phenomena. The complexity of fluid dynamics problems makes it difficult or impossible to solve the equations for an object in a flow exactly. Approximate solutions can be obtained by constructing and measuring prototypes placed in a flow, or by numerical simulation. Since the use of prototypes can be prohibitively time-consuming and expensive, many have turned to simulations to provide insight during the engineering process; the simulation setup and parameters can be altered much more easily than in a real-world experiment. The objective of this research work is to develop numerical models for different suspensions (fiber suspensions, blood flow through microvessels and branching geometries, and magnetic fluids), as well as for fluid flow through porous media. The models have merit as scientific tools and also have practical applications in industry. Most of the numerical simulations were done with the commercial software Fluent, with user-defined functions added to apply a multiscale method and a magnetic field. The results from the simulation of fiber suspensions can elucidate the physics behind the break-up of a fiber floc, opening the possibility of developing a meaningful numerical model of fiber flow. The simulation of blood movement from an arteriole to a venule via a capillary showed that the VOF-based model can successfully predict the deformation and flow of RBCs in an arteriole. Furthermore, the result corresponds to the experimental observation that the RBC deforms during the movement. The concluding remarks provide a sound methodology and a mathematical and numerical framework for the simulation of blood flow in branching vessels. Analysis of the ferrofluid simulations indicates that the magnetic Soret effect can be even stronger than the conventional one and that its strength depends on the strength of the magnetic field, as confirmed experimentally by Völker and Odenbach. It was also shown that when a magnetic field is perpendicular to the temperature gradient, there is an additional increase in heat transfer compared to the cases where the magnetic field is parallel to the temperature gradient. In addition, a statistical evaluation (the Taguchi technique) of the magnetic fluids showed that the temperature and the initial concentration of the magnetic phase make the maximum and minimum contributions to thermodiffusion, respectively. In the simulation of flow through porous media, the dimensionless pressure drop was studied at different Reynolds numbers, based on pore permeability and interstitial fluid velocity. The results agreed well with the correlation of Macdonald et al. (1979) over the range of flow Reynolds numbers studied. Furthermore, the calculated dispersion coefficients in the cylinder geometry were found to be in agreement with those of Seymour and Callaghan.
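For reference, the Macdonald et al. (1979) correlation mentioned above is a modified Ergun equation for the pressure gradient in a porous bed; one common statement of it, with the constants A ≈ 180 and B ≈ 1.8 quoted from the general literature rather than from this work, is

\[ -\frac{\mathrm{d}p}{\mathrm{d}z} \;=\; A\,\frac{\mu\,(1-\varepsilon)^2}{\varepsilon^3 d_p^2}\,u \;+\; B\,\frac{\rho\,(1-\varepsilon)}{\varepsilon^3 d_p}\,u^2, \]

where ε is the bed porosity, d_p the particle diameter, u the superficial velocity, and μ and ρ the fluid viscosity and density.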

Relevance:

90.00%

Publisher:

Abstract:

The design methods and languages targeted at modern System-on-Chip designs face tremendous pressure from ever-increasing complexity, power, and speed requirements. In estimating any of these three metrics, there is a trade-off between accuracy and the level of detail at which the system under design is analyzed: the more detailed the description, the more accurate the simulation, but also the more time-consuming. Moreover, a designer wants to make decisions as early as possible in the design flow to avoid costly design backtracking. To answer the challenges posed by System-on-Chip designs, this thesis introduces a formal, power-aware framework, its development methods, and methods to constrain and analyze the power consumption of the system under design. The thesis discusses the power analysis of synchronous and asynchronous systems, including the communication aspects of these systems. The presented framework is built upon the Timed Action Systems formalism, which offers an environment to analyze and constrain the functional and temporal behavior of a system at a high abstraction level. Furthermore, given the complexity of System-on-Chip designs, the ability to abstract away unnecessary implementation details at higher abstraction levels is an essential part of the introduced design framework. The encapsulation and abstraction techniques, incorporated with procedure-based communication, allow a designer to use the presented power-aware framework to model these large-scale systems. The introduced techniques also enable the development of communication and computation to be subdivided into separate tasks, a property that is taken into account in the power analysis as well. Furthermore, the framework is developed so that it can be used throughout a design project: a designer is able to model and analyze systems from an abstract specification down to an implementable one.
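As a rough illustration of the kind of high-level power bookkeeping such a framework supports, the sketch below sums the energy of an action trace from per-action timing and power estimates. It is a minimal, hypothetical example: the class and attribute names are inventions for illustration, not part of the Timed Action Systems formalism.

from dataclasses import dataclass

@dataclass
class TimedAction:
    # One action with an assumed execution-time bound and average power.
    name: str
    duration_s: float
    power_w: float

def trace_energy(trace):
    # Energy of an execution trace: sum of power * time over its actions.
    return sum(a.power_w * a.duration_s for a in trace)

# Compare candidate behaviours at a high abstraction level.
candidate = [TimedAction("compute", 1.0e-3, 0.8), TimedAction("send", 2.0e-4, 1.5)]
print(f"energy = {trace_energy(candidate) * 1e3:.3f} mJ")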

Relevance:

90.00%

Publisher:

Abstract:

In many industries, such as petroleum production and the petrochemical, metal, food, and cosmetics industries, wastewaters containing an emulsion of oil in water are often produced. The emulsions consist of water (up to 90%), oils (mineral, animal, vegetable, and synthetic), surfactants, and other contaminants. In view of its toxic nature and its deleterious effects on the surrounding environment (soil, water), such wastewater needs to be treated before release into natural waterways. Membrane-based processes have been applied successfully in industrial applications and are considered possible candidates for the treatment of oily wastewaters. Easy operation, lower cost and, in some cases, the ability to reduce contaminants below existing pollution limits are the main advantages of these systems. The main drawback of membranes is flux decline due to fouling and concentration polarisation. The complexity of oil-containing systems demands complementary studies on issues related to the mitigation of fouling and concentration polarisation in membrane-based ultrafiltration. In this thesis, the effect of different operating conditions (factors) on the ultrafiltration of oily water is studied. Important factors are normally correlated and, therefore, their effects should be studied simultaneously. This work uses a novel approach to study different operating conditions (pressure, flow velocity, and temperature) and solution properties (oil concentration (cutting oil, diesel, kerosene), pH, and salt concentration (CaCl2 and NaCl)) in the ultrafiltration of oily water, simultaneously and in a systematic way, using an experimental design approach. A hypothesis is developed to describe the interaction between the oil drops, the salt, and the membrane surface. The optimum conditions for ultrafiltration and the contribution of each factor in the ultrafiltration of oily water are evaluated. It was found that the effect of the various factors on permeate flux depended strongly on the type of oil, the type of membrane, and the amount of salts. The thesis demonstrates that a system containing oil is very complex, and that fouling and flux decline can be observed even at very low pressures; this means that only the weak form of the critical flux exists for such systems. The cleaning of the fouled membranes and the influence of different parameters (flow velocity, temperature, time, pressure, and chemical concentration (SDS, NaOH)) were also evaluated in this study. It was observed that fouling, and consequently cleaning, behaved differently for the studied membranes. Of the membranes studied, the one with the lowest propensity for fouling and the most easily cleaned was the regenerated cellulose membrane (C100H). In order to get more information about the interaction between the membrane and the components of the emulsion, a streaming potential study was performed on the membrane. The experiments were carried out at different pH values and oil concentrations. It was seen that oily water changed the surface charge of the membrane significantly. The surface charge and the streaming potential during different stages of filtration were measured and analysed, which constitutes a new method for characterising oil fouling in this thesis. The surface charge varied in the different stages of filtration, and it was found that the surface charge of a cleaned membrane was not the same as initially, although the permeability was equal to that of a virgin membrane.
The effect of the filtration mode was studied by performing the filtration in both cross-flow and dead-end mode. The effect of salt on performance was considered in both studies; it was found that salt decreased the permeate flux even at low concentration. To test the effect of a change in hydrophilicity, the commercial membranes used in this thesis were modified by grafting PNIPAAm onto their surfaces, using a new technique (corona treatment) for the modification. The effect of the modification on permeate flux and retention was evaluated: the modified membranes changed their pore size around 33 °C, resulting in different retention and permeability. The results obtained in this thesis can be applied to optimise the operation of a membrane plant under normal or shock conditions, or to modify the process so that it becomes more efficient and effective.
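The fouling and flux-decline behaviour discussed above is commonly quantified with a resistance-in-series form of Darcy's law; this is a standard textbook relation rather than a result of the thesis:

\[ J \;=\; \frac{\Delta p}{\mu\,(R_m + R_f + R_{cp})}, \]

where J is the permeate flux, Δp the transmembrane pressure, μ the permeate viscosity, R_m the intrinsic membrane resistance, R_f the fouling resistance, and R_{cp} the resistance due to concentration polarisation. In these terms, the weak form of the critical flux corresponds to a fouling resistance R_f that is nonzero even at the lowest applied pressures.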

Relevance:

90.00%

Publisher:

Abstract:

The size and complexity of software development projects are growing very fast. At the same time, the proportion of successful projects is still quite low according to previous research. Although almost every project team knows the main areas of responsibility that would help finish a project on time and on budget, this knowledge is rarely used in practice. It is therefore important to evaluate the success of existing software development projects and to suggest a method for evaluating the chances of success that can be used in software development projects. The main aim of this study is to evaluate the success of projects in the selected geographical region (Russia-Ukraine-Belarus). The second aim is to compare existing models of success prediction and to determine their strengths and weaknesses. The research was done as an empirical study. A survey with structured forms and theme-based interviews were used as the data collection methods. The information gathering was done in two stages: in the first stage, the project manager or someone with similar responsibilities answered the questions over the Internet; in the second stage, the participant was interviewed, and his or her answers were discussed and refined. This made it possible to get accurate information about each project and to avoid errors. It was found that there are many problems in software development projects; these problems are widely known and have been discussed in the literature many times. The research showed that most of the projects have problems with schedule, requirements, architecture, quality, and budget. A comparison of two models of success prediction showed that the Standish Group model overestimates the problems in a project, while McConnell's model can help to identify problems in time and avoid trouble in the future. A framework for evaluating the chances of success in distributed projects is suggested. The framework is similar to the Standish Group model but is customized for distributed projects.

Relevance:

90.00%

Publisher:

Abstract:

As technology geometries have shrunk to the deep submicron regime, the communication delay and power consumption of global interconnects in high-performance Multi-Processor Systems-on-Chip (MPSoCs) are becoming a major bottleneck. The Network-on-Chip (NoC) architecture paradigm, based on a modular packet-switched mechanism, can address many on-chip communication issues, such as the performance limitations of long interconnects and the integration of a large number of Processing Elements (PEs) on a chip. The choice of routing protocol and NoC structure can have a significant impact on the performance and power consumption of on-chip networks. In addition, building a high-performance, area- and energy-efficient on-chip network for multicore architectures requires a novel on-chip router that allows a larger network to be integrated on a single die with reduced power consumption. On top of that, network interfaces are employed to decouple computation resources from communication resources, to provide synchronization between them, and to achieve backward compatibility with existing IP cores. Three adaptive routing algorithms are presented as part of this thesis. The first routing protocol is a congestion-aware adaptive routing algorithm for 2D mesh NoCs which does not support multicast (one-to-many) traffic, while the other two protocols are adaptive routing models supporting both unicast (one-to-one) and multicast traffic. A streamlined on-chip router architecture is also presented for avoiding congested areas in 2D mesh NoCs by employing efficient input and output selection. The output selection utilizes an adaptive routing algorithm based on the congestion condition of neighboring routers, while the input selection allows packets to be serviced from each input port according to its congestion level. Moreover, in order to increase memory parallelism and provide compatibility with existing IP cores in network-based multiprocessor architectures, adaptive network interface architectures are presented that use multiple SDRAMs which can be accessed simultaneously. In addition, a smart memory controller is integrated into the adaptive network interface to improve memory utilization and reduce both memory and network latencies. Three-Dimensional Integrated Circuits (3D ICs) have emerged as a viable candidate for achieving better performance and package density than traditional 2D ICs, and combining the benefits of the 3D IC and NoC schemes provides a significant performance gain for 3D architectures. In recent years, inter-layer communication across multiple stacked layers (the vertical channel) has attracted a lot of interest. In this thesis, a novel adaptive pipeline bus structure is proposed for inter-layer communication to improve performance by reducing the delay and complexity of traditional bus arbitration. In addition, two mesh-based topologies for 3D architectures are introduced to mitigate the inter-layer footprint and the power dissipation on each layer with a small performance penalty.
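A minimal sketch of the congestion-aware output selection described above is given below. It is illustrative only: the port names, the congestion signal, and the selection by minimum are assumptions, not the exact algorithm of the thesis.

def select_output(admissible_ports, congestion):
    # Among the minimal-path output ports permitted by the routing
    # function, pick the neighbour reporting the lowest congestion
    # (e.g. occupied buffer slots used as the stress signal).
    return min(admissible_ports, key=lambda port: congestion[port])

# Example: ports admissible under adaptive XY routing towards (+x, +y).
ports = ["east", "north"]
congestion = {"east": 5, "north": 2}   # assumed per-neighbour stress values
print(select_output(ports, congestion))  # -> north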

Relevance:

90.00%

Publisher:

Abstract:

Software systems are expanding and becoming increasingly present in everyday activities. A constantly evolving society demands that they deliver more functionality, be easy to use, and work as expected. All these challenges increase the size and complexity of a system. People may not be aware of the presence of a software system until it malfunctions or fails to perform. The ability to depend on software is particularly significant when it comes to critical systems, where quality is an essential issue, since any deficiency may lead to considerable financial loss or even endanger lives. Traditional development methods may not ensure a sufficiently high level of quality. Formal methods, on the other hand, allow us to achieve a high level of rigour and can be applied to develop a complete system or only a critical part of it. Such techniques, applied during system development from the early design stages onwards, increase the likelihood of obtaining a system that works as required. However, formal methods are often considered difficult to utilise in traditional development processes; it is therefore important to make them more accessible and to reduce the gap between formal and traditional development methods. This thesis explores the usability of rigorous approaches by giving an insight into formal designs with the use of graphical notation. The understandability of formal modelling is increased by a compact representation of the development and the related design decisions. The central objective of the thesis is to investigate the impact that rigorous approaches have on the quality of developments. This requires establishing techniques for the evaluation of rigorous developments; since we study various development settings and methods, a specific measurement plan and a set of metrics need to be created for each setting. Our goal is to provide methods for collecting data and to record evidence of the applicability of rigorous approaches, which would support organisations in making decisions about integrating formal methods into their development processes. It is important to control software development, especially in its initial stages. We therefore focus on the specification and modelling phases, as well as on related artefacts, e.g. models, which have a significant influence on the quality of the final system. Since the application of formal methods may increase the complexity of a system, it may affect its maintainability, and thus its quality. Our goal is to improve the quality of a system via metrics and measurements, as well as via generic refinement patterns applied to models and specifications. We argue that these can facilitate the process of creating software systems, e.g. by controlling complexity and providing modelling guidelines, and that they serve as additional mechanisms for quality control and improvement, also for rigorous approaches. The main contribution of this thesis is to provide metrics and measurements that help in assessing the impact of rigorous approaches on developments. We establish techniques for the evaluation of certain aspects of quality, based on the structural, syntactical, and process-related characteristics of early-stage development artefacts, i.e. specifications and models. The presented approaches are applied to various case studies, and the results of the investigation are juxtaposed with the perception of domain experts. It is our aspiration to promote measurement as an indispensable part of the quality control process and as a strategy towards quality improvement.
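As a concrete illustration of the kind of syntactic, structural measurement discussed above, the sketch below computes a crude size indicator for a toy model. Both the model representation and the metric are hypothetical illustrations, not the metrics defined in the thesis.

# A toy early-stage formal model, represented as plain data.
model = {
    "variables": ["buffer", "count"],
    "invariants": ["count >= 0", "count <= len(buffer)"],
    "events": {
        "enqueue": {"guards": 1, "actions": 2},
        "dequeue": {"guards": 2, "actions": 2},
    },
}

def structural_size(m):
    # Sum of variables, invariants, guards, and actions: a crude
    # size/complexity indicator that can be tracked across refinements.
    events = m["events"].values()
    return (len(m["variables"]) + len(m["invariants"])
            + sum(e["guards"] + e["actions"] for e in events))

print(structural_size(model))  # -> 11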

Relevance:

90.00%

Publisher:

Abstract:

The wars Western armies are involved in today differ from those fought at the end of the 20th century. To explain this change, Western military thinkers have produced various definitions of warfare over the last 30 years, each describing the tendencies of the conflicts of its time. The changing nature of conflicts surfaced a new term, hybrid warfare, intended to describe and explain the multi-modality and complexity of modern-day conflict. This thesis seeks to answer the question: what is the development of thought behind hybrid warfare? The Vietnam War (1965-1975) is used as an example of compound warfare, focusing on the American involvement in the war, and the Second Lebanon War (2006) serves as an example of hybrid warfare. Both case studies include an irregular opposing force, namely the National Liberation Front in the Vietnam War and Hezbollah in the Second Lebanon War. These two case studies are compared with the term full spectrum operations, introduced in the current U.S. Department of the Army Field Manual No. 3-0, Operations, to see the differences and similarities of each term. The perspective of this thesis is the American point of view. The thesis concludes that hybrid warfare, compound warfare, and full spectrum operations are very similar, the first two being included in the last. Although hybrid warfare is not officially defined, it will most likely remain in use in future discussion, since hybrid wars and hybrid threats are officially accepted terms.

Relevance:

90.00%

Publisher:

Abstract:

Filtration is a widely used unit operation in chemical engineering. The huge variation in the properties of the materials to be filtered makes the study of filtration a challenging task. One of the objectives of this thesis was to show that conventional filtration theories are difficult to use when the system to be modelled contains all of the stages and features that are present in a complete solid/liquid separation process. Furthermore, most filtration theories require experimental work to be performed in order to obtain the critical parameters required by the theoretical models. Creating a good overall understanding of how the variables affect the final product in filtration is all but impossible on a purely theoretical basis. The complexity of solid/liquid separation processes requires experimental work, and when tests are needed, it is advisable to use experimental design techniques so that the goals can be achieved. The statistical design of experiments provides the necessary tools for recognising the effects of variables; it also helps to perform experimental work more economically. Design of experiments is a prerequisite for creating empirical models that can describe how the measured response is related to changes in the values of the variables. A software package was developed that provides a filtration practitioner with experimental designs and calculates the parameters for linear regression models, along with a graphical representation of the responses. The developed software consists of two modules, LTDoE and LTRead. The LTDoE module is used to create experimental designs for different filter types. The filter types considered in the software are the automatic vertical pressure filter, double-sided vertical pressure filter, horizontal membrane filter press, vacuum belt filter, and ceramic capillary action disc filter. It is also possible to create experimental designs for cases where the variables are totally user-defined, say for a customized filtration cycle or a different piece of equipment. The LTRead module is used to read the experimental data gathered from the experiments, to analyse the data, and to create models for each of the measured responses. Introducing the structure of the software in more detail and showing some of its practical applications forms the main part of this thesis. This approach to the study of cake filtration processes, as presented in this thesis, has been shown to have good practical value when performing filtration tests.
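To make the approach concrete, the sketch below does, in miniature, what the described modules do: it builds a two-level full factorial design and fits a first-order regression model to the measured responses. The variable names and response data are hypothetical; the actual software additionally knows the filtration cycles of the filter types listed above.

import itertools
import numpy as np

# Two-level full factorial design (coded levels -1/+1) for three
# hypothetical filtration variables, e.g. pressure, solids content, time.
factors = ["pressure", "solids", "time"]
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

# Hypothetical measured response, e.g. cake moisture (%) for each run.
y = np.array([23.1, 21.4, 22.8, 20.9, 19.7, 18.2, 19.1, 17.8])

# Least-squares fit of the linear model y = b0 + b1*x1 + b2*x2 + b3*x3.
X = np.hstack([np.ones((len(design), 1)), design])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(["intercept"] + factors, coeffs):
    print(f"{name:>9}: {b:+.3f}")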


Relevance:

90.00%

Publisher:

Abstract:

Implementation of the different policies and plans aiming at providing education for all is a challenge in Tanzania. The need for educators and professionals with relevant knowledge and qualifications in special education is substantial: teacher education does not equip educators with sufficient knowledge and skills in special education, and professional development programs in special education are few in number. Up to 2005, no degree programs in special education at university level were available in Tanzania. The B.Ed. Special Education program offered by the Open University of Tanzania in collaboration with Åbo Akademi University in Finland was one of the efforts aimed at addressing the great national need for teachers and other professionals with degree qualifications in special education. This pilot program offered unique possibilities to study professional development in Tanzania. The research group in this study consisted of the students who participated in the degree program in 2005-2007. The study is guided by three theoretical perspectives: individual, social, and societal. The individual perspective emphasizes psychological factors such as motives, motivation, achievement, self-directed behavior, and personal growth. Within the social perspective, professional development is viewed as situated within its social and cultural context. The third perspective, the societal, focuses on change, reforms, innovations, and the transformation of school systems and societies. Accordingly, professional development is viewed as an individual, social, and societal phenomenon. The overall aim of the study is to explore the participants' motives for participating in a B.Ed. Special Education program and the perceived outcomes of the program in terms of professional development. In order to achieve the objectives of the study, a case study approach was adopted. Questionnaires and semi-structured interviews were administered in three waves between January 2007 and February 2009 to the 35 educators participating in the B.Ed. Special Education program. The findings reveal that the participants expressed motives related to job performance, knowledge, skills, academic degree, and career. Altruistic motives were also expressed, in terms of helping and supporting students with special needs and their communities. The perceived outcomes of the program were in line with the expressed motives. The results also indicate that the participants learned new skills, such as interaction skills and guidance and counseling skills; increased self-confidence was likewise mentioned as an outcome. The participants also gained a deeper understanding of disability issues and learned strategies for creating awareness of persons with disability in their communities. Thus the findings indicate positive outcomes of the program in terms of professional development. The conclusion of the study is that individual, social, and societal factors interact when it comes to explaining why Tanzanian educators in special education choose to pursue a degree program in special education: individual motives, such as increased knowledge and better career prospects, interact with the social and societal motives of helping and supporting vulnerable student groups. The study contributes to an increased understanding of the complexity of professional development and of the realities educators meet when educational reforms are implemented in a developing country.

Relevance:

90.00%

Publisher:

Abstract:

Developing software is a difficult and error-prone activity, and the complexity of modern computer applications is significant. Hence, an organised approach to software construction is crucial. Stepwise Feature Introduction, created by R.-J. Back, is a development paradigm in which software is constructed by adding functionality in small increments. The resulting code has an organised, layered structure and can be easily reused. Moreover, interaction with the users of the software and attention to correctness concerns are essential elements of the development process, contributing to the high quality and functionality of the final product. The paradigm of Stepwise Feature Introduction has been successfully applied in an academic environment to a number of small-scale developments. This thesis examines the paradigm and its suitability for the construction of large and complex software systems by focusing on the development of two software systems of significant complexity. Throughout the thesis we propose a number of improvements and modifications that should be applied to the paradigm when developing or reengineering large and complex software systems. The discussion covers various aspects of software development that relate to Stepwise Feature Introduction; more specifically, we evaluate the paradigm against the common practices of object-oriented programming and design and against agile development methodologies. We also outline a strategy for testing systems built with the paradigm of Stepwise Feature Introduction.
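A toy example of the layered structure that Stepwise Feature Introduction produces is sketched below: each layer adds one feature and reuses the layer below unchanged. The class names are invented, and in the actual paradigm each layer is additionally verified to be a superposition refinement of the one beneath it.

class Editor:
    # Bottom layer: minimal functionality only.
    def __init__(self):
        self.text = ""

    def insert(self, s):
        self.text += s

class UndoEditor(Editor):
    # Next layer: adds the undo feature on top of the layer below.
    def __init__(self):
        super().__init__()
        self._history = []

    def insert(self, s):
        self._history.append(self.text)
        super().insert(s)

    def undo(self):
        if self._history:
            self.text = self._history.pop()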

Relevance:

90.00%

Publisher:

Abstract:

Welding has a growing role in modern manufacturing, and welded joints are used extensively from pipelines to the aerospace industry. Prediction of welding residual stresses and distortions is necessary for an accurate evaluation of fillet welds with respect to design and safety requirements. Residual stresses may be beneficial or detrimental, depending on whether they are tensile or compressive and on the loading; they directly affect the fatigue life of the weld by influencing the crack growth rate. Besides the theoretical background of residual stresses, this study calculates the residual stresses and deformations caused by the localized heating of the welding process and the subsequent rapid cooling in fillet welds. Validated methods are required for this purpose because of the complexity of the process: localized heating, temperature-dependent material properties, and the heat source. In this research, both empirical and simulation methods were used for the analysis of welded joints. Finite element simulation has become a popular tool for predicting welding residual stresses and distortion. Three different cases, with and without preload, were modeled in this study. A thermal heat load is applied by calculating the heat flux from the given heat input energy. First a linear and then a nonlinear material behavior model is used for the calculation of the residual stresses. Experimental work was done to determine the stresses empirically, and the results from the two methods were compared to check their reliability. Residual stresses can have a significant effect on the fatigue performance of welded joints made of high-strength steel; both the initial residual stress state and the subsequent residual stress relaxation need to be considered for an accurate description of fatigue behavior. Tensile residual stresses are detrimental and reduce the fatigue life, whereas compressive residual stresses increase it. The residual stresses follow the yield strength of the base or filler material, and components made of high-strength steel are typically thin, which emphasizes the role of distortion.
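The heat-flux calculation mentioned above conventionally starts from the standard arc heat input relation; stating it here is an assumption about the procedure, not a formula quoted from the thesis:

\[ Q \;=\; \frac{\eta\,U\,I}{v}, \]

where Q is the heat input per unit length of weld, η the arc efficiency, U the arc voltage, I the welding current, and v the travel speed. The resulting energy is then distributed as a heat flux over the weld region in the finite element model.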

Relevance:

90.00%

Publisher:

Abstract:

Lignocellulosic biomasses (e.g., wood and straw) are a potential renewable source for the production of a wide variety of chemicals that could replace those currently produced by the petrochemical industry. This would lead to lower greenhouse gas emissions and waste amounts, and to economic savings. Many pathways are available for manufacturing chemicals from lignocellulosic biomasses. One option is to hydrolyze the cellulose and hemicelluloses of these biomasses into monosaccharides using concentrated sulfuric acid as a catalyst. This process is an efficient method for producing monosaccharides, which are valuable platform chemicals, and other valuable products are also formed in the hydrolysis. Unfortunately, concentrated acid hydrolysis has been deemed unfeasible, mainly due to the high chemical consumption resulting from the need to remove sulfuric acid from the obtained hydrolysates prior to the downstream processing of the monosaccharides. Traditionally, this has been done by neutralization with lime, which results in high chemical consumption; in addition, the by-products formed in the hydrolysis are not removed and may thus hinder the monosaccharide processing. To improve the feasibility of concentrated acid hydrolysis, the chemical consumption should be decreased by recycling sulfuric acid without neutralization. Furthermore, the monosaccharides and the other products formed in the hydrolysis should be recovered selectively for efficient downstream processing; the selective recovery of the hydrolysis by-products would bring additional economic benefits because of their high value. In this work, the use of chromatographic fractionation for recycling sulfuric acid and for the selective recovery of the main components of the hydrolysates formed in concentrated acid hydrolysis was investigated. The fractionation is based on electrolyte exclusion with gel-type strong acid cation exchange resins in the acid (H+) form as the stationary phase. A systematic experimental and model-based study of the separation task at hand was conducted: the phenomena affecting the separation were determined and their effects elucidated, and mathematical models that accurately take these phenomena into account were derived and used in the simulation of the fractionation process. The main components of the concentrated acid hydrolysates (sulfuric acid, monosaccharides, and acetic acid) were included in this model. The performance of the fractionation process was investigated experimentally and by simulations, and the use of different process options was also studied. Sulfuric acid was found to have a significant co-operative effect on the sorption of the other components, which brings about interesting and beneficial effects in column operations; it is especially beneficial for the separation of sulfuric acid and the monosaccharides. Two different approaches to modelling the sorption equilibria were investigated in this work: a simple empirical approach and a thermodynamically consistent approach (the Adsorbed Solution theory). Accurate modelling of the phenomena observed in this work turned out to be possible with the simple empirical models, whereas the use of the Adsorbed Solution theory is complicated by the nature of the theory and the complexity of the studied system.
In addition to the sorption models, a dynamic column model was derived that takes the volume changes of the gel-type resins into account as a changing resin bed porosity. Using chromatography, all the main components of the hydrolysates can be recovered selectively, and the sulfuric acid consumption of the hydrolysis process can be lowered considerably. Investigation of the performance of the chromatographic fractionation showed that the highest separation efficiency in this separation task is obtained with a gel-type resin with a high cross-linking degree (8 wt%), especially when the hydrolysates contain large amounts of acetic acid. In addition, the concentrated acid hydrolysis should be carried out with as low a sulfuric acid concentration as possible to obtain good separation performance; the column loading and flow rate also have large effects on the performance. It was demonstrated that when the fractions obtained in the chromatographic fractionation are recycled to preceding unit operations, these unit operations should be included in the performance evaluation of the fractionation; when this was done, both the separation performance and the feasibility of the concentrated acid hydrolysis process were found to improve considerably. The use of multi-column chromatographic fractionation processes, the Japan Organo process and the Multi-Column Recycling Chromatography process, was also investigated. In the studied case, neither of these processes could compete with the single-column batch process in productivity. However, thanks to its internal recycling steps, Multi-Column Recycling Chromatography was found to be superior to the batch process when the product yield and the eluent consumption were taken into account.
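For context, a dynamic column model of the kind referred to above is commonly written, per component i, as an equilibrium-dispersive mass balance; this is a standard formulation, with the thesis additionally letting the bed porosity ε vary as the gel-type resin swells and shrinks:

\[ \frac{\partial c_i}{\partial t} \;+\; \frac{1-\varepsilon}{\varepsilon}\,\frac{\partial q_i}{\partial t} \;+\; u\,\frac{\partial c_i}{\partial z} \;=\; D_{ax}\,\frac{\partial^2 c_i}{\partial z^2}, \]

where c_i and q_i are the liquid-phase and resin-phase concentrations, u the interstitial velocity, and D_ax the axial dispersion coefficient.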

Relevance:

90.00%

Publisher:

Abstract:

Nowadays, Western companies are held responsible for the social and environmental issues in their whole supply chains. To influence the practices of their suppliers, Western companies have created supplier codes of conduct (SCCs) which express their requirements; suppliers' compliance with the SCCs is checked through audits. The purpose of this thesis is to analyze SCCs as a means for Western companies to ensure socially and environmentally responsible actions in their global supply chains. The sub-objectives are to find out 1) how well the SCCs and their auditing work at suppliers' production sites and 2) how possible problems related to SCCs and their auditing can be solved. This is a qualitative study carried out in the form of a case study with two case companies, using both primary and secondary data. The primary data were collected through interviews with the case company representatives and three external experts. Based on a theoretical framework drawn from previous research in the fields of corporate social responsibility and supply chain management, a model with eleven factors influencing the success of SCC implementation and the auditing of SCC implementation was drafted. Several best practices for solving and avoiding possible problems related to SCC implementation and auditing have also been identified from previous research. Based on the findings of this study, the theoretical model has been updated with two new influential factors. How well an SCC and its auditing work at suppliers' production sites appears to depend on the joint effect of thirteen influential factors: the buyer's purchasing policy, the supplier's motivation, the buyer's commitment, the solving of agency problems, the contents of the SCC, the supplier's role and the buyer-supplier relationship, the complexity of the supply chain, the limitations of smaller buyers, cooperation through a business association or multi-stakeholder system, the role of the supplier's employees, SCC-related communication and the supplier's understanding, cheating in audits, and the auditors. The possible problems related to SCCs and their auditing can be solved by adopting best practices. Nine of the theoretical best practices stand out from the findings of this study: 1) two-way communication and collecting feedback from suppliers, 2) a philosophy of continuous improvement, 3) long-term business relationships with the supplier, 4) informing the supplier about the advantages of SCC compliance, 5) rewarding code-compliant suppliers, 6) building collaborative, good buyer-supplier relationships, 7) supporting and advising the supplier, 8) joining a business association or multi-stakeholder system, and 9) interviewing the supplier's employees as part of the audits.

Relevance:

90.00%

Publisher:

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014