846 results for fault-tolerant scheduling
Abstract:
Herbicides used in the Clearfield® rice system may persist in the environment, damaging non-tolerant crops sown in succession and/or rotation. The extent of this damage varies according to soil characteristics, climate, and soil management. The thickness of the soil profile may affect the carryover effect; deeper soils may allow these molecules to leach to areas below the root absorption zone. The aim of this study was to evaluate the effect of soil profile thickness on the carryover of imazethapyr + imazapic on ryegrass and non-tolerant rice, sown in succession and rotation to rice, respectively. Lysimeters of different thicknesses (15, 20, 30, 40, 50 and 65 cm) were constructed, in which 1 L ha⁻¹ of the imazethapyr + imazapic formulated mixture was applied to tolerant rice. Firstly, imidazolinone-tolerant rice was planted, followed by ryegrass and non-tolerant rice in succession and rotation, respectively. Herbicide injury, height reduction, and dry weight of the non-tolerant species were assessed. There were no visual symptoms of herbicide injury on ryegrass sown 128 days after herbicide application; however, the herbicides did reduce plant dry weight. The herbicides persisted in the soil and caused injury to non-tolerant rice sown 280 days after application, and the deeper the soil profile, the lower the herbicide injury on irrigated rice.
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural one within this field: digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
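To make the execution model above concrete, here is a minimal sketch in Python (not code from the thesis; the Edge and Node classes and the naive fully dynamic scheduler are illustrative assumptions) of nodes that communicate only through FIFO queues and fire once sufficient inputs are available:

    from collections import deque

    class Edge:
        """FIFO queue between two nodes: the only allowed communication."""
        def __init__(self):
            self.tokens = deque()

    class Node:
        """A calculation that fires independently once enough inputs exist."""
        def __init__(self, fn, inputs, outputs, needed=1):
            self.fn, self.inputs, self.outputs, self.needed = fn, inputs, outputs, needed

        def can_fire(self):
            # Firing rule: every input edge holds at least `needed` tokens.
            return all(len(e.tokens) >= self.needed for e in self.inputs)

        def fire(self):
            args = [e.tokens.popleft() for e in self.inputs]
            result = self.fn(*args)
            for e in self.outputs:
                e.tokens.append(result)

    def run(nodes):
        """Naive fully dynamic scheduler: fire any fireable node until none is."""
        progress = True
        while progress:
            progress = False
            for node in nodes:
                if node.can_fire():
                    node.fire()
                    progress = True

    # Example graph computing (a + b) * 2 with two independent nodes.
    qa, qb, qsum, qout = Edge(), Edge(), Edge(), Edge()
    adder = Node(lambda x, y: x + y, [qa, qb], [qsum])
    doubler = Node(lambda x: x * 2, [qsum], [qout])
    qa.tokens.append(3)
    qb.tokens.append(4)
    run([adder, doubler])
    print(qout.tokens.popleft())  # 14

A quasi-static scheduler, in contrast, would replace the repeated can_fire checks with pre-computed static firing sequences and keep only a few run-time decisions.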
Abstract:
In order to identify early abnormalities in non-insulin-dependent diabetes mellitus (NIDDM), we determined insulin (using an assay that does not cross-react with proinsulin) and proinsulin concentrations. The proinsulin/insulin ratio was used as an indicator of abnormal β-cell function. The ratio of the first 30-min increase in insulin to glucose concentrations following the oral glucose tolerance test (OGTT; I30-0/G30-0) was taken as an indicator of insulin secretion. Insulin resistance (R) was evaluated by the homeostasis model assessment (HOMA) method. True insulin and proinsulin were measured during a 75-g OGTT in 35 individuals: 20 with normal glucose tolerance (NGT) and without diabetes among their first-degree relatives (FDR) served as controls, and 15 with NGT who were FDR of patients with NIDDM. The FDR group presented higher insulin (414 pmol/l vs 195 pmol/l; P = 0.04) and proinsulin levels (19.6 pmol/l vs 12.3 pmol/l; P = 0.03) post-glucose load than the control group. When these groups were stratified according to BMI, the obese FDR (N = 8) showed higher fasting and post-glucose insulin levels than the obese NGT (N = 9) (fasting: 64.8 pmol/l vs 7.8 pmol/l; P = 0.04, and 60 min post-glucose: 480.6 pmol/l vs 192 pmol/l; P = 0.01). Also, values for HOMA (R) were higher in the obese FDR compared to the obese NGT (2.53 vs 0.30; P = 0.075). These results show that FDR of NIDDM patients have true hyperinsulinemia (which is not a consequence of cross-reactivity with proinsulin) and hyperproinsulinemia, with no dysfunction of a qualitative nature in β-cells.
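For reference, the indices used in this study can be computed with the standard published formulas; the following is a minimal sketch, with function names and the example values chosen for illustration rather than taken from the study's data:

    def homa_ir(fasting_insulin_uU_ml, fasting_glucose_mmol_l):
        """Homeostasis model assessment of insulin resistance:
        HOMA-IR = fasting insulin [uU/ml] x fasting glucose [mmol/l] / 22.5."""
        return (fasting_insulin_uU_ml * fasting_glucose_mmol_l) / 22.5

    def insulinogenic_index(i30, i0, g30, g0):
        """Insulin secretion index I30-0/G30-0: the first 30-min insulin
        increment divided by the 30-min glucose increment during an OGTT."""
        return (i30 - i0) / (g30 - g0)

    def proinsulin_insulin_ratio(proinsulin_pmol_l, insulin_pmol_l):
        """Proinsulin/insulin ratio, an indicator of beta-cell function."""
        return proinsulin_pmol_l / insulin_pmol_l

    # Hypothetical subject: fasting insulin 12 uU/ml, fasting glucose 5.0 mmol/l.
    print(homa_ir(12.0, 5.0))  # about 2.67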
Abstract:
The objective of this project was to introduce a new software product to the pulp industry, a new market for the case company. An optimization-based scheduling tool has been developed to allow pulp operations to better control their production processes and improve both production efficiency and stability. Both the work here and earlier research indicate a savings potential of around 1-5%. All the supporting data is available today, coming from distributed control systems, data historians, and other existing sources. The pulp mill model, together with the scheduler, allows what-if analyses of the impacts and timely feasibility of various external actions, such as planned maintenance of any particular mill operation. The visibility gained from the model also proves to be a real benefit. The aim is to satisfy demand and gain extra profit while achieving the required customer service level. Research effort has been put into understanding both the minimum features needed to satisfy the scheduling requirements of the industry and the overall existence of the market. A qualitative study was conducted to identify both the competitive situation and the requirements versus gaps in the market. It becomes clear that there is no such system on the marketplace today, and that there is room to improve the target market's overall process efficiency through such a planning tool. This thesis also provides a better overall understanding of the different processes in this particular industry for the case company.
Abstract:
The bedrock of old crystalline cratons is characteristically saturated with brittle structures formed during successive superimposed episodes of deformation and under varying stress regimes. As a result, the crust effectively deforms through the reactivation of pre-existing structures rather than through the activation, or generation, of new ones, and is said to be in a state of 'structural maturity'. By combining data from Olkiluoto Island, southwestern Finland, which has been investigated as the potential site of a deep geological repository for high-level nuclear waste, with observations from southern Sweden, it can be concluded that the southern part of the Svecofennian shield had already attained structural maturity during the Mesoproterozoic era. This indicates that the phase of activation of the crust, i.e. the time interval during which new fractures were generated, was brief in comparison to the subsequent reactivation phase. Structural maturity of the bedrock was also attained relatively rapidly in Namaqualand, western South Africa, after the formation of the first brittle structures during Neoproterozoic time. Subsequent brittle deformation in Namaqualand was controlled by the reactivation of pre-existing strike-slip faults. In such settings, seismic events are likely to occur through reactivation of pre-existing zones that are favourably oriented with respect to prevailing stresses. In Namaqualand, this is shown for present-day seismicity by slip tendency analysis, and at Olkiluoto, for a Neoproterozoic earthquake reactivating a Mesoproterozoic fault. By combining detailed field observations with the results of paleostress inversions and relative and absolute time constraints, seven distinct superimposed paleostress regimes have been recognized in the Olkiluoto region. From oldest to youngest these are: (1) NW-SE to NNW-SSE transpression, which prevailed soon after 1.75 Ga, when the crust had sufficiently cooled down to allow brittle deformation to occur; during this phase, conjugate NNW-SSE and NE-SW striking strike-slip faults were active simultaneously with the reactivation of SE-dipping low-angle shear zones and foliation planes. This was followed by (2) N-S to NE-SW transpression, which caused partial reactivation of structures formed in the first event; (3) NW-SE extension during the Gothian orogeny and at the time of rapakivi magmatism and intrusion of diabase dikes; (4) NE-SW transtension that occurred between 1.60 and 1.30 Ga and which also formed the NW-SE-trending Satakunta graben located some 20 km north of Olkiluoto; greisen-type veins also formed during this phase; (5) NE-SW compression that postdates both the formation of the 1.56 Ga rapakivi granites and the 1.27 Ga olivine diabases of the region; (6) E-W transpression during the early stages of the Mesoproterozoic Sveconorwegian orogeny, which also predated (7) almost coaxial E-W extension attributed to the collapse of the Sveconorwegian orogeny. The kinematic analysis of fracture systems in crystalline bedrock also provides a robust framework for evaluating fluid-rock interaction in the brittle regime; this is essential in the assessment of bedrock integrity for numerous geo-engineering applications, including groundwater management, transient or permanent CO2 storage, and site investigations for permanent waste disposal.
Investigations at Olkiluoto revealed that fluid flow along fractures is coupled with low normal tractions due to in-situ stresses and thus deviates from the generally accepted critically stressed fracture concept, in which fluid flow is concentrated on fractures on the verge of failure. The difference is linked to the shallow conditions at Olkiluoto: due to the low differential stresses inherent at shallow depths, fracture activation and fluid flow are controlled by dilation due to low normal tractions. At deeper settings, however, fluid flow is controlled by fracture criticality caused by large differential stress, which drives shear deformation instead of dilation.
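Slip tendency analysis, as used above for the Namaqualand seismicity, rates a plane by the ratio of resolved shear traction to normal traction, Ts = tau / sigma_n. The following is a minimal sketch of that calculation, with a hypothetical stress state and plane orientation rather than site data from Olkiluoto or Namaqualand:

    import numpy as np

    def slip_tendency(stress, normal):
        """Slip tendency Ts = tau / sigma_n on a plane with normal `normal`,
        under a 3x3 stress tensor (compression positive)."""
        n = np.asarray(normal, dtype=float)
        n /= np.linalg.norm(n)
        t = stress @ n                      # Cauchy traction vector on the plane
        sigma_n = float(n @ t)              # normal traction
        tau = float(np.sqrt(max(float(t @ t) - sigma_n ** 2, 0.0)))  # shear traction
        return tau / sigma_n

    # Hypothetical stress state: principal stresses 30, 20, 10 MPa along the axes.
    sigma = np.diag([30.0, 20.0, 10.0])
    # A plane oblique to sigma_1 resolves shear stress; Ts = 0.5 here.
    print(slip_tendency(sigma, [1.0, 0.0, 1.0]))

Planes with high Ts relative to the frictional strength of the rock are those most favourably oriented for reactivation under the prevailing stresses.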
Abstract:
Tolerance to lipopolysaccharide (LPS) occurs when animals or cells exposed to LPS become hyporesponsive to a subsequent challenge with LPS. This mechanism is believed to be involved in the down-regulation of cellular responses observed in septic patients. The aim of this investigation was to evaluate LPS-induced monocyte tolerance in healthy volunteers using whole blood. Intracellular IL-6, bacterial phagocytosis, and reactive oxygen species (ROS) were detected by flow cytometry, using anti-IL-6-PE, heat-killed Staphylococcus aureus stained with propidium iodide, and 2',7'-dichlorofluorescein diacetate, respectively. Monocytes were gated in whole blood by combining FSC and SSC parameters and CD14-positive staining. Exposure to increasing LPS concentrations resulted in a lower intracellular concentration of IL-6 in monocytes after challenge. A similar effect was observed upon challenge with MALP-2 (a Toll-like receptor (TLR)2/6 agonist) and killed Pseudomonas aeruginosa and S. aureus, but not with flagellin (a TLR5 agonist). LPS conditioning with 15 ng/mL resulted in a 40% reduction of IL-6 in monocytes. In contrast, phagocytosis of P. aeruginosa and S. aureus and induced ROS generation were preserved or increased in tolerant cells. The phenomenon of tolerance thus involves a complex regulation in which the production of IL-6 was diminished, whereas bacterial phagocytosis and the production of ROS were preserved. Decreased production of proinflammatory cytokines together with preserved or increased production of ROS may be an adaptation to control the deleterious effects of inflammation while preserving antimicrobial activity.
Abstract:
Recent storms in the Nordic countries caused long power outages over large territories. After these disasters, distribution network operators faced the problem of how to provide an adequate quality of supply in such situations. The decision was made to use cable lines rather than overhead lines, which brings new features to distribution networks. The main idea of this work is a complex analysis of medium voltage distribution networks with long cable lines. The high specific capacitance of cables and the length of the lines give rise to such problems as high earth fault currents, an excessive amount of reactive power flowing from the distribution to the transmission network, and the possibility of a high voltage level at the receiving end of cable feeders. However, the core task was to estimate the functional ability of the earth fault protection and the possibility of using simplified formulas for calculating the operating settings in this network. In order to provide justified solutions to, or evaluations of, the problems mentioned above, the corresponding calculations were made, and in order to analyze the behavior of relay protection principles, a PSCAD model of the examined network was created. Evaluation of the voltage rise at the end of a cable line revealed no dangerous increase in the voltage level, while an excessive value of reactive power can result in financial penalties according to the Finnish regulations. It was proved and calculated that compensation of earth fault currents should be implemented in such networks. PSCAD models of the electrical grid with isolated neutral, central compensation, and hybrid compensation were created. For the network with hybrid compensation, a methodology is offered which allows the number and rated power of distributed arc suppression coils to be selected. Based on the results obtained from the experiments, it was determined that, in order to guarantee selective and reliable operation of the relay protection, hybrid compensation with the connection of a high-ohmic resistor should be utilized. Directional and admittance-based relay protection were tested under these conditions, and the advantages of the novel protection were revealed. However, for electrical grids with extensive cabling, the necessity of a complex approach to relay protection is explained and illustrated. Thus, in order to organize reliable earth fault protection, it is recommended to utilize both intermittent and conventional relay protection, with operating settings calculated by the use of simplified formulas.
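For context, one widely used simplified formula (an illustrative textbook approximation, not necessarily the exact formulas evaluated in this work) estimates the capacitive earth fault current of an isolated-neutral cable network as Ie = 3·omega·C0·Uph. A minimal sketch, with a hypothetical voltage level, specific capacitance, and cable length:

    import math

    def earth_fault_current(u_kv, c0_uf_per_km, length_km, freq_hz=50.0):
        """Capacitive earth fault current of an isolated-neutral network:
        Ie = 3 * omega * C0 * Uph, where C0 is the total per-phase
        capacitance to earth of the connected cables."""
        u_phase = u_kv * 1e3 / math.sqrt(3)      # phase-to-earth voltage [V]
        c0 = c0_uf_per_km * 1e-6 * length_km     # total capacitance [F]
        omega = 2.0 * math.pi * freq_hz
        return 3.0 * omega * c0 * u_phase        # [A]

    # Hypothetical 20 kV network: 0.25 uF/km specific capacitance, 50 km of cable.
    print(round(earth_fault_current(20.0, 0.25, 50.0), 1))  # roughly 136 A

The rapid growth of this current with cable length is what motivates compensation of earth fault currents with arc suppression coils in extensively cabled networks.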
Abstract:
Operational excellence of individual tramp shipping companies is important in today's market, where competition is intense, freight revenues are modest, capital costs are high due to the global financial crisis, and a tighter regulatory framework is generating additional costs and challenges for the industry. This thesis concentrates on tramp shipping, where a tramp operator in the form of an individual case company, specialized in short-sea shipping activities in the Baltic Sea region, is searching for ways to map its current fleet operations and better understand potential ways to improve overall routing and scheduling decisions. The research problem is related to tramp fleet planning where several cargoes are carried on board at the same time; these are here systematically referred to as part cargoes. The purpose is to determine the pivotal dimensions and characteristics of these part cargo operations in tramp shipping, and to offer both the individual case company and the wider research community a better understanding of the potential risks and benefits related to the utilization of part cargo operations. A mixed method research approach is utilized in this research, as the objectives are related to complex, real-life business practices in the field of supply chain management and, more specifically, maritime logistics. A quantitative analysis of different voyage scenarios is executed, including alternative voyage legs with varying cost structures and customer involvement. An online questionnaire designed and prepared by the case company's decision group provides the desired data on the predominant attitudes and views of the most important industrial customers regarding part cargo-related operations and the potential future utilization of this business model. The results gained from these quantitative methods are complemented with qualitative data collection tools, along with suitable secondary data sources. Based on the results and a logical analysis of the different data sources, a framework for characterizing the different aspects of part cargo operations is developed, utilizing both existing research and empirical investigation of the phenomenon. In conclusion, part cargoes can be part of viable fleet operations, and can even increase fleet flexibility to a certain extent. Naturally, several hindrances to this development are recognized as well, such as potential issues with information gathering and sharing, inefficient port activities, and increased transit times.
Abstract:
Several irrigation treatments were evaluated on Sovereign Coronation table grapes at two sites over a 3-year period in the cool humid Niagara Peninsula of Ontario. Trials were conducted in the Hippie (Beamsville, ON) and the Lambert Vineyards (Niagara-on-the-Lake, ON) in 2003 to 2005 with the objective of assessing the usefulness of the modified Penman-Monteith equation to accurately schedule vine irrigation needs. Data (relative humidity, windspeed, solar radiation, and temperature) required to precisely calculate evapotranspiration (ET) were downloaded from the Ontario Weather Network. One of two ET values (either 100 or 150%) was used in combination with one of two crop coefficients (Kc; either fixed at 0.75, or 0.2 to 0.8 based upon increasing canopy volume) to calculate the amount of irrigation water required. The five irrigation treatments were: unirrigated control; 100ET × Kc = 0.75; 150ET × Kc = 0.75; 100ET × Kc = 0.2-0.8; and 150ET × Kc = 0.2-0.8. Transpiration, water potential (ψ), and soil moisture data were collected each growing season. Yield component data were collected, and berries from each treatment were analyzed for soluble solids (Brix), pH, titratable acidity (TA), anthocyanins, methyl anthranilate (MA), and total volatile esters (TVE). Irrigation showed a substantial positive effect on transpiration rate and soil moisture; the control treatment showed consistently lower transpiration and soil moisture over the 3 seasons. Transpiration appeared to accurately reflect the water status of Sovereign Coronation grapevines, and soil moisture also accurately reflected the level of irrigation. Moreover, irrigation had an impact on leaf ψ, which was more negative throughout the 3 seasons for vines that were not irrigated. Irrigation had a substantial positive effect on yield (kg/vine) and its various components (clusters/vine, cluster weight, and berries/cluster) in 2003 and 2005. Berry weights were higher under the irrigated treatments at both sites, and berry weight consistently appeared to be the main factor leading to these increased yields, as inconsistent responses were noted for some yield variables. Soluble solids were highest under the ET150 and ET100 treatments, both with Kc at 0.75. Both pH and TA were highest under the control treatments in 2003 and 2004, but highest under the irrigated treatments in 2005. Anthocyanins and phenols were highest under the control treatments in 2003 and 2004, but highest under the irrigated treatments in 2005. MA and TVE were highest under the ET150 treatments. Vine and soil water status measurements (soil moisture, leaf ψ, and transpiration) confirmed that irrigation was required in the summers of 2003 and 2005 due to dry weather in those years. They also partially supported the hypothesis that the Penman-Monteith equation is useful for calculating vineyard water needs. Both ET treatments gave clear evidence that irrigation could be effective in reducing water stress and in improving vine performance, yield, and fruit composition. Use of properly scheduled irrigation was beneficial for Sovereign Coronation table grapes in the Niagara region. The findings herein should give growers some strong guidelines on when, how, and how much to irrigate their vineyards.
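As an illustration of how such scheduling works, the irrigation amount for a period follows from multiplying the calculated ET by the treatment fraction (100 or 150%) and the crop coefficient. A minimal sketch with hypothetical numbers, not the trial's actual weather data:

    def canopy_kc(canopy_fraction, kc_min=0.2, kc_max=0.8):
        """Crop coefficient scaled from 0.2 to 0.8 with increasing canopy
        volume, as in the variable-Kc treatments."""
        f = min(max(canopy_fraction, 0.0), 1.0)
        return kc_min + (kc_max - kc_min) * f

    def irrigation_mm(et_mm, et_fraction, kc):
        """Irrigation water to apply [mm]: calculated ET for the period,
        multiplied by the treatment fraction (1.0 or 1.5) and by Kc."""
        return et_mm * et_fraction * kc

    # Hypothetical week with 28 mm of calculated ET, 150ET treatment, fixed Kc.
    print(irrigation_mm(28.0, 1.5, 0.75))            # 31.5 mm
    # Same week under the variable-Kc treatment at 60% canopy development.
    print(irrigation_mm(28.0, 1.5, canopy_kc(0.6)))  # 23.52 mm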
Abstract:
This qualitative study explored secondary teachers' perceptions of scheduling in relation to pedagogy, curriculum, and observation of student learning. Its objective was to determine the best way to organize the scheduling for the delivery of Ontario's new 4-year curriculum. Six participants were chosen. Two were teaching in a semestered timetable, 1 in a traditional timetable, and 3 had experience in both schedules. Participants related a pressure-cooker "lived experience," with weaker students in the semester system experiencing a particularly harsh environment. The inadequate amount of time for review in content-heavy courses, gap scheduling problems, catch-up difficulties for students missing classes, and the fast pace of semestering are identified as factors negatively impacting these students. Government testing adds to the pressure by shifting teachers' time and attention in the classroom from deeper learning to a superficial coverage of material, from curriculum as lived to curriculum as text to be covered. Scheduling choice should be available in public education to accommodate the needs of all students. Curriculum guidelines need to be revamped to reflect the content that teachers believe is necessary for a successful course delivery. Applied level courses need to be developed for students who are not academically inferior but learn differently.
Abstract:
The hazards associated with major accident hazard (MAH) industries are fire, explosion, and toxic gas releases. Of these, toxic gas release is the worst, as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of the hazards associated with chemical industries. This research work presents the results of a consequence analysis carried out to assess the damage potential of the hazardous material storages in an industrial area of central Kerala, India. A survey carried out in the major accident hazard (MAH) units in the industrial belt revealed that the major hazardous chemicals stored by the various industrial units are ammonia, chlorine, benzene, naphtha, cyclohexane, cyclohexanone, and LPG. The damage potential of the above chemicals is assessed using consequence modelling. Modelling of pool fires for naphtha, cyclohexane, cyclohexanone, benzene, and ammonia is carried out using the TNO model. Vapor cloud explosion (VCE) modelling of LPG, cyclohexane, and benzene is carried out using the TNT equivalent model. Boiling liquid expanding vapor explosion (BLEVE) modelling of LPG is also carried out. Dispersion modelling of toxic chemicals like chlorine, ammonia, and benzene is carried out using the ALOHA air quality model. Threat zones for the different hazardous storages are estimated based on the consequence modelling. The distance covered by the threat zone was found to be maximum for chlorine release from a chlor-alkali industry located in the area. The results of consequence modelling are useful for the estimation of individual risk and societal risk in the above industrial area. Vulnerability assessment is carried out using probit functions for toxic, thermal, and pressure loads. Individual and societal risks are also estimated at different locations. Mapping of threat zones due to different incident outcome cases from different MAH industries is done with the help of ArcGIS. Fault Tree Analysis (FTA) is an established technique for hazard evaluation. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. However, it is often difficult to precisely estimate the failure probability of the components due to insufficient data or the vague characteristics of the basic events. It has been reported that the availability of failure probability data pertaining to local conditions is surprisingly limited in India. This thesis outlines the generation of failure probability values of the basic events that lead to the release of chlorine from the storage and filling facility of a major chlor-alkali industry located in the area, using expert elicitation and proven fuzzy logic. Sensitivity analysis has been done to evaluate the percentage contribution of each basic event that could lead to chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation.
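For context on the quantitative side of FTA, basic event probabilities are combined through AND/OR gates up to the top event. The sketch below uses the conventional (non-fuzzy) gate formulas and a hypothetical event structure; it is not the thesis's actual chlorine-release tree or its fuzzy extension:

    from functools import reduce

    def and_gate(probs):
        """AND gate: the output event occurs only if all inputs occur."""
        return reduce(lambda acc, p: acc * p, probs, 1.0)

    def or_gate(probs):
        """OR gate (independent events): 1 - product of (1 - p_i)."""
        return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

    # Hypothetical release tree: release requires a leak AND the failure of
    # at least one of two safeguards (probabilities are illustrative only).
    p_leak, p_scrubber_fail, p_alarm_fail = 1e-3, 5e-2, 2e-2
    p_release = and_gate([p_leak, or_gate([p_scrubber_fail, p_alarm_fail])])
    print(p_release)  # about 6.9e-05

The fuzzy approach proposed in the thesis replaces these point probabilities with fuzzy numbers elicited from experts, propagating the associated uncertainty through the same gate structure.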
Abstract:
One major component of power system operation is generation scheduling. The objective of this work is to develop efficient control strategies for power scheduling problems through Reinforcement Learning approaches. The three important active power scheduling problems are Unit Commitment, Economic Dispatch, and Automatic Generation Control. Numerical solution methods proposed for power scheduling are insufficient for handling large and complex systems. Soft computing methods like Simulated Annealing, Evolutionary Programming, etc., are efficient in handling complex cost functions, but are limited in handling the stochastic data existing in a practical system. Also, the learning steps have to be repeated for each load demand, which increases the computation time. Reinforcement Learning (RL) is a method of learning through interactions with an environment. The main advantage of this approach is that it does not require a precise mathematical formulation; it can learn either by interacting with the environment or by interacting with a simulation model. Several optimization and control problems have been solved through Reinforcement Learning approaches, but applications of Reinforcement Learning in the field of power systems have been few. The objective is to introduce and extend Reinforcement Learning approaches for the active power scheduling problems in an implementable manner. The main objectives can be enumerated as: (i) evolve Reinforcement Learning based solutions to the Unit Commitment problem; (ii) find suitable solution strategies through Reinforcement Learning approaches for Economic Dispatch; (iii) extend the Reinforcement Learning solution to Automatic Generation Control with a different perspective; (iv) check the suitability of the scheduling solutions for one of the existing power systems. The first part of the thesis is concerned with the Reinforcement Learning approach to the Unit Commitment problem. The Unit Commitment problem is formulated as a multi-stage decision process, and a Q-learning solution is developed to obtain the optimum commitment schedule. A method of state aggregation is used to formulate an efficient solution considering the minimum up time / down time constraints. The performance of the algorithms is evaluated for different systems and compared with other stochastic methods like the Genetic Algorithm. The second stage of the work is concerned with solving the Economic Dispatch problem. A simple and straightforward decision-making strategy is first proposed using the Learning Automata algorithm. Then, to solve the scheduling task of systems with a large number of generating units, the problem is formulated as a multi-stage decision-making task. The solution obtained is extended in order to incorporate the transmission losses in the system. To make the Reinforcement Learning solution more efficient and to handle continuous state spaces, a function approximation strategy is proposed. The performance of the developed algorithms is tested on several standard test cases, and the proposed method is compared with other recent methods like the Partition Approach Algorithm, Simulated Annealing, etc. As the final step of implementing the active power control loops in a power system, Automatic Generation Control is also taken into consideration. Reinforcement Learning has already been applied to the Automatic Generation Control loop; here, the RL solution is extended to take up the approach of a common frequency for all the interconnected areas, more similar to practical systems. The performance of the RL controller is also compared with that of the conventional integral controller. In order to prove the suitability of the proposed methods for practical systems, the second plant of the Neyveli Thermal Power Station (NTPS II) is taken as a case study. The performance of the Reinforcement Learning solution is found to be better than that of the other existing methods, which provides a promising step towards RL-based control schemes for the practical power industry. Reinforcement Learning is applied to solve the scheduling problems in the power industry and is found to give satisfactory performance. The proposed solution provides scope for increased profit, as the economic schedule is obtained instantaneously. Since the Reinforcement Learning method can take the stochastic cost data obtained from time to time from a plant, it gives an implementable method. As a further step, with suitable methods to interface with on-line data, economic scheduling can be achieved instantaneously in a generation control center. Also, power scheduling of systems with different sources such as hydro, thermal, etc. can be investigated, and Reinforcement Learning solutions can be developed.
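As an illustration of the kind of Q-learning formulation described above, the following toy sketch treats a scheduling choice as a multi-stage decision process; the two-unit cost structure, stages, and learning parameters are invented for illustration and are not the thesis's unit commitment model:

    import random
    from collections import defaultdict

    # Toy multi-stage scheduling task: at each load stage choose between a
    # cheap and an expensive unit; the cheap unit cannot economically cover
    # the highest load stage and incurs a penalty there.
    STAGES = [0, 1, 2]
    ACTIONS = ["cheap", "expensive"]

    def cost_reward(stage, action):
        # Rewards are negative costs: expensive always costs 5; cheap costs 2
        # but 12 (penalty included) at the highest load stage.
        if action == "expensive":
            return -5.0
        return -12.0 if stage == 2 else -2.0

    Q = defaultdict(float)                 # state-action value table
    alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

    for episode in range(2000):
        for i, stage in enumerate(STAGES):
            # Epsilon-greedy action selection.
            if random.random() < epsilon:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[(stage, x)])
            nxt = STAGES[i + 1] if i + 1 < len(STAGES) else None
            best_next = 0.0 if nxt is None else max(Q[(nxt, x)] for x in ACTIONS)
            # Q-learning update: Q(s,a) += alpha * (r + gamma * max Q(s',.) - Q(s,a))
            Q[(stage, a)] += alpha * (cost_reward(stage, a) + gamma * best_next - Q[(stage, a)])

    # The greedy policy typically converges to: cheap, cheap, expensive.
    print({s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in STAGES})

A real unit commitment formulation would replace the toy reward with fuel and start-up costs under minimum up/down time constraints, and aggregate states as described above to keep the table tractable.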
Abstract:
The assembly job shop scheduling problem (AJSP) is one of the most complicated combinatorial optimization problems, involving the simultaneous scheduling of the processing and assembly operations of complex structured products. The problem becomes even more complicated if a combination of two or more optimization criteria is considered. This thesis addresses an assembly job shop scheduling problem with multiple objectives: simultaneously minimizing makespan and total tardiness. Two approaches, viz., a weighted approach and a Pareto approach, are used for solving the problem. However, it is quite difficult to achieve an optimal solution with traditional optimization approaches owing to the high computational complexity. Two metaheuristic techniques, namely genetic algorithm and tabu search, are investigated in this thesis for solving the multi-objective assembly job shop scheduling problem (MOAJSP). Three algorithms based on the two metaheuristic techniques, covering the weighted approach and the Pareto approach, are proposed. A new pairing mechanism is developed for the crossover operation in the genetic algorithm, which leads to improved solutions and faster convergence. The performances of the proposed algorithms are evaluated through a set of test problems and the results are reported. The results reveal that the proposed algorithms based on the weighted approach are feasible and effective for solving MOAJSP instances according to the weight assigned to each objective criterion, and that the proposed algorithms based on the Pareto approach are capable of producing a number of good Pareto optimal scheduling plans for MOAJSP instances.
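To make the two solution approaches concrete, the sketch below shows a weighted-sum objective and a Pareto dominance filter over (makespan, tardiness) pairs; the schedule values are hypothetical, and the genetic algorithm and tabu search themselves are not reproduced here:

    def weighted_objective(makespan, tardiness, w1=0.5, w2=0.5):
        """Weighted approach: collapse the two criteria into one scalar."""
        return w1 * makespan + w2 * tardiness

    def dominates(a, b):
        """Pareto dominance (minimization): a is no worse in every objective
        and strictly better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(solutions):
        """Keep only the non-dominated (makespan, tardiness) pairs."""
        return [s for s in solutions
                if not any(dominates(t, s) for t in solutions if t is not s)]

    # Hypothetical schedules produced by some metaheuristic search.
    schedules = [(120, 30), (110, 45), (130, 20), (110, 35), (125, 25)]
    print(pareto_front(schedules))  # (110, 45) is dominated by (110, 35)

The weighted approach returns a single schedule per weight vector, whereas the Pareto approach maintains the whole non-dominated set and leaves the final trade-off to the decision maker.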
Abstract:
Beta-glucosidases (BGL) are critical enzymes in the biomass hydrolysis process and are important in creating highly efficient enzyme cocktails for the bio-ethanol industry. Of the two strategies proposed for overcoming the glucose inhibition of commercial cellulases, one is to use a heavy dose of BGL in the enzyme blends, and the second is to perform simultaneous saccharification and fermentation, where glucose is converted to alcohol as soon as it is generated. While the former needs extremely high quantities of enzyme, the latter is inefficient, since the conditions for hydrolysis and fermentation are different. This makes the process technically challenging, and in this case the alcohol yield is also lower, making its recovery difficult. A third option is to use glucose tolerant β-glucosidases, which can work at elevated glucose concentrations. However, there are very few reports on such enzymes from microbial sources, especially filamentous fungi, which can be cultivated on cheap biomass as a raw material. There have been very few studies directed at this, though there is every possibility that filamentous fungi that are efficient degraders of biomass may harbor such enzymes. The study therefore aimed at isolating a fungus capable of secreting a glucose tolerant β-glucosidase enzyme. Production and characterization of β-glucosidases and the application of BGL for bioethanol production were attempted.