921 results for Requirements Engineering, Requirement Specification
Abstract:
Active pharmaceutical ingredients have very strict quality requirements; minor changes in the physical and chemical properties of pharmaceuticals can adversely affect the dissolution rate and therefore the bioavailability of a given drug. Accordingly, the aim of the present study was to investigate the effect of spray drying on the physical and in vitro dissolution properties of four different active pharmaceutical ingredients, namely carbamazepine, indomethacin, piroxicam, and nifedipine. Each drug was dispersed in a solution of ethanol and water (70:30) and subjected to single-step spray drying using similar operational conditions. A complete characterization of the spray-dried drugs was performed via differential scanning calorimetry (DSC), scanning electron microscopy (SEM), X-ray powder diffraction (XRPD), particle size distribution analysis, solubility analysis, and an in vitro dissolution study. The results from the thermal analysis and X-ray diffraction showed that, except for carbamazepine, no chemical modifications occurred as a result of spray drying. Moreover, the particle size distribution of all the spray-dried drugs significantly decreased. In addition, SEM images showed that most of the particles had an irregular shape. There was no significant improvement in the solubility of the spray-dried drugs compared with the unprocessed compounds; however, in general, the dissolution rates of the spray-dried drugs showed a remarkable improvement over their non-spray-dried counterparts. Therefore, the results from this study demonstrate that a single spray-drying step may lead to changes in the physical properties and dissolution characteristics of drugs and thus improve their therapeutic action.
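The causal chain the abstract relies on (smaller particles, faster dissolution, better bioavailability) is conventionally captured by the Noyes-Whitney dissolution model, reproduced here as a standard reference equation rather than one taken from the thesis itself:

\[
  \frac{dm}{dt} = \frac{D\,A}{h}\,\bigl(C_s - C\bigr)
\]

where \(dm/dt\) is the dissolution rate, \(D\) the drug's diffusion coefficient, \(A\) the surface area of the dissolving particles, \(h\) the diffusion boundary-layer thickness, and \(C_s\) and \(C\) the saturation and bulk concentrations. Spray drying shrinks the particles, raising \(A\) and hence the dissolution rate even when the equilibrium solubility \(C_s\) is unchanged, which matches the reported results.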
Abstract:
Recent developments in piston engine technology have increased performance very significantly. Turbocharged/turbo-compound diesel engines fuelled by jet fuels offer excellent performance. The focal point of this thesis is the transformation of the FIAT 1900 jtd common rail diesel engine for installation on general aviation aircraft such as the CESSNA 172. All considerations about the diesel engine are supported by studies carried out in the laboratories of the II Faculty of Engineering in Forlì. This work, mostly experimental, concerns the transformation of the automotive FIAT 1900 jtd (4-cylinder, turbocharged, common rail diesel) into an aircraft engine. The design philosophy of the aluminium alloy crankcase of the spark ignition engine has been transferred to the diesel version, while the pistons and the head of the FIAT 1900 jtd are retained in the aircraft engine. Different solutions have been examined in this work. A first 90° V-cylinder version can develop up to 300 CV and weighs 30 kg without auxiliaries and the turbocharging group. The second version is a development of the original 1900 cc diesel engine with an optimized crankshaft that employs a special steel, 300M, and that is verified against the aircraft requirements. Another version with an augmented stroke and a total displacement of 2500 cc has been examined; the resulting engine is 30% heavier. The last version proposed is a 1600 cc diesel engine that works at 5000 rpm, with a reduced stroke, capable of more than 200 CV; it was inspired by the Yamaha R1 motorcycle engine. The diesel aircraft engine design keeps the 82 mm bore, while the stroke is reduced to 64.6 mm, so engine size is reduced along with weight. The crankcase, in GD AlSi 9 MgMn alloy, weighs 8.5 kg. Crankshaft, rods and accessories have been redesigned to comply with aircraft standards. The result is that the overall size increases by only 8% with respect to the spark ignition Yamaha engine, while the crankcase weight increases by 53%, even though the bore of the diesel version is 11% larger. The original FIAT 1900 jtd piston has been slightly modified, with the combustion chamber reworked for a compression ratio of 15:1. The material adopted for the piston is the aluminium alloy A390.0-T5, commonly used in the automotive field. The piston weighs 0.5 kg in the diesel engine. The crankshaft is verified against torsional vibrations according to the Lloyd's Register of Shipping requirements. The 300M special steel crankshaft weighs 14.5 kg in total. The result is a very small and light engine that may be certified for general aviation: the engine weight, without the supercharger, air inlet assembly, auxiliary generators and high pressure body, is 44.7 kg, and the total engine weight, with a lightened HP pump body and a titanium alloy turbocharger, is less than 100 kg; the total displacement is 1365 cm³ and the estimated output power is 220 CV. The direct conversion of an automotive piston engine to aircraft use pays too large a weight penalty. In fact, the main aircraft requirement is to optimize the power-to-weight ratio in order to obtain compact and fast engines for aeronautical use: this 1600 common rail diesel engine version demonstrates that these results can be reached.
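As a rough plausibility check on the figures quoted above, the short sketch below computes the power-to-weight ratios implied by the abstract's own numbers (the CV-to-kW conversion factor is standard knowledge, not from the source):

```python
# Quick power-to-weight check using figures quoted in the abstract.
# Assumption: 1 CV (metric horsepower) = 0.7355 kW; weights as stated.
CV_TO_KW = 0.7355

variants = {
    # name: (power_CV, weight_kg) -- figures as reported in the abstract
    "1600 cc diesel, bare (no supercharger etc.)": (220, 44.7),
    "1600 cc diesel, complete (< 100 kg quoted)": (220, 100.0),
}

for name, (power_cv, weight_kg) in variants.items():
    kw = power_cv * CV_TO_KW
    print(f"{name}: {kw:.0f} kW, {kw / weight_kg:.2f} kW/kg "
          f"({power_cv / weight_kg:.2f} CV/kg)")
```

Even the conservative complete-engine figure yields about 1.6 kW/kg, which is the kind of specific power the abstract argues is needed for aeronautical use.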
Abstract:
Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas, in fact, generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons why the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm. This paradigm is based on the notions of agent and systems of interacting agents as fundamental abstractions for designing, developing, and managing at runtime typically distributed software systems. However, today the engineer often works with technologies that do not support the abstractions used in the design of the systems. For this reason, research on methodologies becomes a central point of the scientific activity. Currently, most agent-oriented methodologies are supported by small teams of academic researchers; as a result, most of them are at an early stage and still represent mostly "academic" approaches to agent-oriented systems development. Moreover, such methodologies are not well documented and are very often defined and presented by focusing only on specific aspects of the methodology. The role played by meta-models becomes fundamental for comparing and evaluating the methodologies. In fact, a meta-model specifies the concepts, rules and relationships used to define methodologies. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e. the process to be followed, the work products to be generated, and those responsible for making all this happen. In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to fully model all the aspects related to multi-agent systems in a natural way. In particular, different perspectives exist on the role that the environment plays within agent systems; however, it is clear at least that all non-agent elements of a multi-agent system are typically considered part of the multi-agent system environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent systems community, so the environment should be explicitly accounted for in the engineering of multi-agent systems, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions (entities of the environment encapsulating some functions) and topology abstractions (entities of the environment that represent its logical or physical spatial structure). In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for supporting the management of the complexity of the system representation. These principles lead to the adoption of a multi-layered description, which can be used by designers to provide different levels of abstraction over multi-agent systems.
The research in these fields has led to the formulation of a new version of the SODA methodology, where environment abstractions and layering principles are exploited for engineering multi-agent systems.
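To make the two environment ingredients concrete, here is a minimal sketch of environment and topology abstractions plus a layered description (illustrative Python with hypothetical names; it does not reproduce SODA's actual meta-model):

```python
# Minimal illustrative sketch of environment vs. topology abstractions
# in a multi-agent system. All names are hypothetical, not SODA's meta-model.

class EnvironmentAbstraction:
    """An entity of the environment encapsulating a function (here, a shared log)."""
    def __init__(self, name):
        self.name = name
        self._log = []

    def append(self, entry):              # the encapsulated function
        self._log.append(entry)

class TopologyAbstraction:
    """An entity representing the (logical or physical) spatial structure."""
    def __init__(self):
        self._neighbours = {}             # place -> set of adjacent places

    def connect(self, a, b):
        self._neighbours.setdefault(a, set()).add(b)
        self._neighbours.setdefault(b, set()).add(a)

    def reachable_from(self, place):
        return self._neighbours.get(place, set())

# A multi-layered description: each layer is a coarser view of the same MAS.
layers = {
    "macro": ["warehouse"],                    # the whole system as one place
    "micro": ["dock", "storage", "packing"],   # places agents actually visit
}

env = EnvironmentAbstraction("audit-log")
env.append("agent-1 entered dock")
topo = TopologyAbstraction()
topo.connect("dock", "storage")
topo.connect("storage", "packing")
print(topo.reachable_from("storage"))          # {'dock', 'packing'}
```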
Abstract:
The advent of distributed and heterogeneous systems has laid the foundation for the birth of new architectural paradigms, in which many separate and autonomous entities collaborate and interact with the aim of achieving complex strategic goals that would be impossible to accomplish on their own. A non-exhaustive list of systems targeted by such paradigms includes Business Process Management, Clinical Guidelines and Careflow Protocols, and Service-Oriented and Multi-Agent Systems. It is largely recognized that engineering these systems requires novel modeling techniques. In particular, many authors claim that an open, declarative perspective is needed to complement the closed, procedural nature of state-of-the-art specification languages. For example, the ConDec language has recently been proposed to target the declarative and open specification of Business Processes, overcoming the over-specification and over-constraining issues of classical procedural approaches. On the one hand, the success of such novel modeling languages strongly depends on their usability by non-IT-savvy users: they must provide an appealing, intuitive graphical front-end. On the other hand, they must be amenable to verification, in order to guarantee the trustworthiness and reliability of the developed model, as well as to ensure that the actual executions of the system effectively comply with it. In this dissertation, we claim that Computational Logic is a suitable framework for dealing with the specification, verification, execution, monitoring and analysis of these systems. We propose to adopt an extended version of the ConDec language for specifying interaction models with a declarative, open flavor. We show how all the (extended) ConDec constructs can be automatically translated to the CLIMB Computational Logic-based language, and illustrate how its corresponding reasoning techniques can be successfully exploited to provide support and verification capabilities along the whole life cycle of the targeted systems.
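To give the declarative flavor a concrete shape, the sketch below checks one ConDec-style constraint (response: every occurrence of A must eventually be followed by B) against an execution trace. This is an illustrative Python analogue, not the ConDec-to-CLIMB translation the dissertation defines:

```python
# Illustrative check of a ConDec-style "response(A, B)" constraint:
# every occurrence of A must be followed, later in the trace, by a B.
# This mimics the declarative, open style; it is NOT the CLIMB encoding.

def response_holds(trace, a, b):
    pending = False                  # is an A still waiting for its B?
    for event in trace:
        if event == a:
            pending = True
        elif event == b:
            pending = False          # all pending A's are now satisfied
    return not pending

# Open specification: events not mentioned by the constraint are allowed.
print(response_holds(["register", "pay", "ship"], "pay", "ship"))  # True
print(response_holds(["register", "pay"], "pay", "ship"))          # False
```

Note how nothing forbids "register": a declarative model constrains only what it mentions, which is precisely the over-constraining problem it avoids.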
Abstract:
Communication and coordination are two key aspects in open distributed agent systems, being both responsible for the integrity of the system's behaviour. An infrastructure capable of handling these issues, such as TuCSoN, should be able to exploit modern technologies and tools provided by fast-moving software engineering contexts. This thesis aims to demonstrate the ability of the TuCSoN infrastructure to cope with the new possibilities, in hardware and software, offered by mobile technology. The scenarios we are going to configure are related to the distributed nature of multi-agent systems, where an agent could be located and run directly on a mobile device. We deal with the new frontiers of mobile technology concerned with smartphones running Google's Android operating system. The analysis and deployment of a distributed agent-based system so described first runs up against qualitative and quantitative considerations about the available resources. The engineering issue at the base of our research is to run TuCSoN on the reduced memory and computing capability of a smartphone, without loss of functionality, efficiency, and integrity for the infrastructure. The thesis work is organized along two fronts simultaneously: the former is the rationalization of the available hardware and software resources; the latter, totally orthogonal, is the adaptation and optimization of the TuCSoN architecture for an ad-hoc client-side release.
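For readers unfamiliar with the underlying coordination model, the toy below mimics the Linda-style primitives (out/rd/in) on which tuple-centre infrastructures such as TuCSoN build; it is an in-process Python analogue with made-up tuples, not the TuCSoN API:

```python
# Toy in-process analogue of Linda-style coordination primitives
# (out / rd / in) underlying tuple-centre infrastructures like TuCSoN.
# Illustration of the model only; NOT the TuCSoN API.

class TupleSpace:
    def __init__(self):
        self._tuples = []

    def out(self, tup):                   # emit a tuple into the space
        self._tuples.append(tup)

    def rd(self, template):               # read a matching tuple (non-destructive)
        return next((t for t in self._tuples if self._match(t, template)), None)

    def in_(self, template):              # consume a matching tuple ('in' is a keyword)
        t = self.rd(template)
        if t is not None:
            self._tuples.remove(t)
        return t

    @staticmethod
    def _match(tup, template):            # None acts as a wildcard field
        return len(tup) == len(template) and all(
            p is None or p == v for p, v in zip(template, tup))

ts = TupleSpace()
ts.out(("task", "sensor-read", 42))
print(ts.in_(("task", None, None)))       # -> ('task', 'sensor-read', 42)
```

Agents coordinating through such a space never address each other directly, which is what makes the model attractive for open, device-hosted deployments.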
Abstract:
Recently, an ever-increasing degree of automation has been observed in most industrial automation processes. This increase is motivated by the demand for systems with higher performance in terms of quality of the products/services generated, productivity, efficiency, and low costs in design, realization, and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda, buy products in boxes such as food or cigarettes, and so on. Another indication of their complexity is that the consortium of machine producers has estimated around 350 types of manufacturing machines. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; in particular, a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. This is often the case in large-scale systems organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in the definition of the mechanical structure and of the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties associated with it. Apart from the activity inherent in the automation of the machine cycles, the supervisory system is called on to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; directing the machine operator to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing diagnostic information in real time, as a support for the maintenance operations of the machine. The kind of facilities that designers can directly find on the market, in terms of software component libraries, in fact provides adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, by focusing on the cross-cutting functionalities characterizing the automation domain, may help designers model and structure their applications according to their specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological/implementation concepts and without a systematic method for dealing organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have traditionally been adopted for a long time, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives, and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different, usually very "unstructured" way. No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. This difference is probably due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control, discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to highlight the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability, and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been receiving this approach, as testified by the IEC standards IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature, many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years it has been possible to note a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model-Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability, and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also handle other important duties, such as diagnosis and fault isolation, recovery, and safety management. Indeed, together with high performance, fault occurrences increase in complex systems.
This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS, together with reliable mechanical elements, an increasing number of electronic devices are also present, which are more vulnerable by their own nature. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, an important improvement to formal verification of logic control, fault diagnosis, and fault-tolerant control results derives from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2, a survey of the state of the art of the software engineering paradigm applied to industrial automation is discussed. Chapter 3 presents an architecture for industrial automated systems based on the new concept of Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to achieve better reusability and modularity of the control logic. In Chapter 5, a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, conclusive remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
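Since the thesis leans on Discrete Event Systems theory for verification and diagnosis, the toy below illustrates the basic idea of event-driven fault detection on a finite automaton with an unobservable fault event (a deliberately simplified sketch with invented state and event names, not the thesis's diagnoser):

```python
# Toy discrete-event fault detection: a plant automaton with an
# unobservable fault event, and a detector that tracks which states are
# consistent with the observed events. Invented model, for illustration only.

# Plant transitions: (state, event) -> next state; 'fault' is unobservable.
plant = {
    ("idle", "start"): "running",
    ("running", "fault"): "degraded",   # unobservable fault event
    ("running", "stop"): "idle",
    ("degraded", "stop"): "stuck",      # after a fault, 'stop' leaves it stuck
}

def consistent_states(observed_events):
    """All plant states reachable under the observation, fault or no fault."""
    states = {"idle"}
    for ev in observed_events:
        nxt = set()
        for s in states:
            # the fault may silently fire just before the observable event
            for pre in (s, plant.get((s, "fault"))):
                if pre is not None and (pre, ev) in plant:
                    nxt.add(plant[(pre, ev)])
        states = nxt
    return states

states = consistent_states(["start", "stop"])
print(states)                                  # e.g. {'idle', 'stuck'}
print("fault certain" if states <= {"stuck"} else "fault possible, not certain")
```

A real diagnoser refines exactly this state-estimate idea until the fault becomes certain or is ruled out, which is what makes the DES framework attractive for formal verification of logic control.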
Abstract:
In recent decades, the building materials and construction industry has contributed to a great extent to generating a high impact on our environment. As it is considered one of the key areas in which to operate to significantly reduce our footprint on the environment, there is widespread belief that particular attention now has to be paid, and specific measures taken, to limit the use of non-renewable resources. The aim of this thesis is therefore to study and evaluate sustainable alternatives to commonly used building materials, mainly based on ordinary Portland cement, and to find a sustainable path to reduce CO2 emissions and promote the re-use of waste materials. More specifically, this research explores different solutions for replacing cementitious binders in distinct application fields, particularly where special and more restrictive requirements apply, such as the restoration and conservation of architectural heritage. Emphasis was thus placed on aspects and implications more closely related to the concepts of non-invasiveness and environmental sustainability. A first part of the research addressed the study and development of sustainable inorganic matrices, based on lime putty, for the pre-impregnation and on-site binding of continuous carbon fiber fabrics for structural rehabilitation and heritage restoration. Moreover, with the aim of further limiting the exploitation of non-renewable resources, the synthesis of chemically activated silico-aluminate materials, such as metakaolin, ladle slag, or fly ash, was successfully achieved. New sustainable binders were hence proposed as novel building materials, suitable for use as a primary component of construction and repair mortars, as bulk materials in high-temperature applications, or as matrices for high-toughness fiber-reinforced composites.
Night Vision Imaging System (NVIS) certification requirements analysis of an Airbus Helicopters H135
Abstract:
The safe operation of nighttime flight missions can be enhanced using Night Vision Imaging System (NVIS) equipment. This has been clear to the military since the 1970s and to civil helicopter operators since the 1990s. In recent months, even Italian Emergency Medical Service (EMS) operators have been requiring Night Vision Goggles (NVG), devices that amplify the ambient light. In order to fly with this technology, helicopters have to be NVIS-approved. The author has supported a company in quantifying, through a feasibility study, the potential of undertaking the certification activity. First, the NVG are described and their working principles explained; then the specifications governing the processes for making a helicopter NVIS-approved are analysed. The noteworthy difference between the military specifications and the civilian ones highlights non-irrelevant gaps in the latter. The NVIS certification activity could be a good investment because the following targets have been achieved: reductions in the certification cost, in the operating time, and in the number of non-compliances.
Abstract:
The 5th generation of mobile networking introduces the concept of "network slicing": the network will be "sliced" horizontally, and each slice will comply with different requirements in terms of network parameters such as bandwidth and latency. This technology is built on logical rather than physical resources and relies on the virtual network as the main concept for obtaining a logical resource. Network Function Virtualisation (NFV) provides the concept of logical resources for a virtual network function, enabling the concept of a virtual network; it relies on Software Defined Networking (SDN) as the main technology for realizing the virtual network as a resource, and it also defines the concept of a virtual network infrastructure with all the components needed to meet the network slicing requirements. SDN itself uses cloud computing technology to realize the virtual network infrastructure, and NFV also uses virtual computing resources to enable the deployment of virtual network functions instead of having custom hardware and software for each network function. The key to network slicing is the differentiation of slices in terms of Quality of Service (QoS) parameters, which relies on the possibility of enabling QoS management in a cloud computing environment. QoS in cloud computing denotes the levels of performance, reliability, and availability offered. QoS is fundamental for cloud users, who expect providers to deliver the advertised quality characteristics, and for cloud providers, who need to find the right tradeoff between the QoS levels it is possible to offer and operational costs. While QoS properties received constant attention before the advent of cloud computing, the performance heterogeneity and resource isolation mechanisms of cloud platforms have significantly complicated QoS analysis, prediction, and assurance. This is prompting several researchers to investigate automated QoS management methods that can leverage the high programmability of hardware and software resources in the cloud.
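To fix ideas, the sketch below models per-slice QoS requirements and a naive bandwidth/latency admission check (all parameter names and capacity figures are hypothetical, purely for illustration):

```python
# Minimal sketch: per-slice QoS requirements and a naive admission check.
# Parameter names and capacity figures are made up, for illustration only.
from dataclasses import dataclass

@dataclass
class SliceQoS:
    name: str
    bandwidth_mbps: float   # guaranteed throughput the slice requires
    max_latency_ms: float   # end-to-end latency bound the slice tolerates

def admit(slices, capacity_mbps, floor_latency_ms):
    """Accept slices while bandwidth lasts and their latency bound is feasible."""
    admitted, used = [], 0.0
    for s in sorted(slices, key=lambda s: s.max_latency_ms):   # strictest first
        feasible = (used + s.bandwidth_mbps <= capacity_mbps
                    and s.max_latency_ms >= floor_latency_ms)
        if feasible:
            admitted.append(s.name)
            used += s.bandwidth_mbps
    return admitted

slices = [SliceQoS("eMBB-video", 400, 50),
          SliceQoS("URLLC-control", 10, 2),
          SliceQoS("mMTC-sensors", 50, 200)]
print(admit(slices, capacity_mbps=420, floor_latency_ms=1))
# -> ['URLLC-control', 'eMBB-video'] under these made-up numbers
```

The point of the sketch is the differentiation itself: slices with very different bandwidth/latency profiles compete for the same virtualized capacity, which is exactly where automated QoS management is needed.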
Abstract:
Articular cartilage injuries and degeneration affect a large proportion of the population in developed countries worldwide. Stem cells can be differentiated into chondrocytes by adding transforming growth factor-beta1 and dexamethasone to a pellet culture, conditions that are unfeasible for tissue engineering purposes. We attempted to achieve stable chondrogenesis without any requirement for exogenous growth factors. Human mesenchymal stem cells were transduced with an adenoviral vector containing the SRY-related HMG-box gene 9 (SOX9), and were cultured in a three-dimensional (3D) hydrogel scaffold composite. As an additional treatment, mechanical stimulation was applied in a custom-made bioreactor. SOX9 increased the expression level of its known target genes, as well as its cofactors: the long form of SOX5 and SOX6. However, it was unable to increase the synthesis of sulfated glycosaminoglycans (GAGs). Mechanical stimulation slightly enhanced collagen type X and increased lubricin expression. The combination of SOX9 and mechanical load boosted GAG synthesis as shown by (35)S incorporation. The GAG production rate corresponded well with the amount of (endogenous) transforming growth factor-beta1. Finally, cartilage oligomeric matrix protein expression was increased by both treatments. These findings provide insight into the mechanotransduction of mesenchymal stem cells and demonstrate the potential of a transcription factor in stem cell therapy.
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air-handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a high difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations have been made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flowrates, while the second mode is driven by high engine ΔP and high EGR flowrates. The EGR fraction is inaccurately estimated in both modes, while cylinder-to-cylinder EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
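The data-processing step mentioned above can be illustrated with two standard signal-processing idioms: estimating a transport delay by cross-correlation and inverting a first-order sensor lag dy/dt = (x - y)/tau. The sketch below applies these textbook techniques to synthetic data; it is not the specific method developed in the study:

```python
# Sketch: align a delayed, lagged sensor signal with the true transient.
# Standard idioms (cross-correlation delay estimate, first-order lag
# inversion), not the specific processing developed in the study.
import numpy as np

def estimate_delay(reference, measured):
    """Transport delay (in samples) from the peak of the cross-correlation."""
    xc = np.correlate(measured - measured.mean(),
                      reference - reference.mean(), mode="full")
    return xc.argmax() - (len(reference) - 1)

def invert_first_order_lag(y, tau, dt):
    """Recover x from a sensor modelled as dy/dt = (x - y) / tau."""
    return y + tau * np.gradient(y, dt)

dt, tau, delay = 0.01, 0.2, 15            # made-up sampling/lag/delay values
t = np.arange(0.0, 5.0, dt)
x = (t > 1.0).astype(float)               # a step transient
y = np.zeros_like(x)                      # simulate the lagged sensor...
for i in range(1, len(t)):
    y[i] = y[i - 1] + dt * (x[i - 1] - y[i - 1]) / tau
y = np.roll(y, delay)                     # ...plus a transport delay
y[:delay] = 0.0

d = estimate_delay(x, y)                  # recover the delay (15 samples)
aligned = np.roll(y, -d)
recovered = invert_first_order_lag(aligned, tau, dt)
print(d, float(abs(recovered[150:450] - x[150:450]).mean()))  # small error
```

Only after this kind of alignment can transient emissions be attributed to the engine conditions that actually produced them, which is why the study treats data processing as a prerequisite for model training.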
Abstract:
A central design challenge facing network planners is how to select a cost-effective network configuration that can provide uninterrupted service despite edge failures. In this paper, we study the Survivable Network Design (SND) problem, a core model underlying the design of such resilient networks that incorporates complex cost and connectivity trade-offs. Given an undirected graph with specified edge costs and (integer) connectivity requirements between pairs of nodes, the SND problem seeks the minimum cost set of edges that interconnects each node pair with at least as many edge-disjoint paths as the connectivity requirement of the nodes. We develop a hierarchical approach for solving the problem that integrates ideas from decomposition, tabu search, randomization, and optimization. The approach decomposes the SND problem into two subproblems, Backbone design and Access design, and uses an iterative multi-stage method for solving the SND problem in a hierarchical fashion. Since both subproblems are NP-hard, we develop effective optimization-based tabu search strategies that balance intensification and diversification to identify near-optimal solutions. To initiate this method, we develop two heuristic procedures that can yield good starting points. We test the combined approach on large-scale SND instances, and empirically assess the quality of the solutions vis-à-vis optimal values or lower bounds. On average, our hierarchical solution approach generates solutions within 2.7% of optimality even for very large problems (that cannot be solved using exact methods), and our results demonstrate that the performance of the method is robust for a variety of problems with different size and connectivity characteristics.
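The tabu search component can be sketched in miniature. The toy below runs a recency-based tabu search over edge subsets of a tiny graph with connectivity requirement 1 between all pairs, i.e. it looks for a cheap connected subgraph (a stand-in flavor of the approach, not the authors' hierarchical Backbone/Access algorithm):

```python
# Toy tabu search over edge subsets: find a cheap connected subgraph of a
# tiny instance (all connectivity requirements = 1). Stand-in neighbourhood
# and costs; NOT the paper's hierarchical Backbone/Access method.

edges = {("a", "b"): 3, ("b", "c"): 2, ("a", "c"): 6,
         ("c", "d"): 1, ("b", "d"): 5}
nodes = {"a", "b", "c", "d"}

def connected(chosen):
    seen, stack = {"a"}, ["a"]
    while stack:
        u = stack.pop()
        for x, y in chosen:
            v = y if x == u else x if y == u else None
            if v is not None and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen == nodes

def cost(chosen):
    # infeasible (disconnected) solutions are priced out
    return sum(edges[e] for e in chosen) if connected(chosen) else float("inf")

def tabu_search(start, iters=50, tenure=5):
    best = current = frozenset(start)
    tabu = []
    for _ in range(iters):
        moves = [current ^ {e} for e in edges]        # flip one edge in/out
        admissible = [m for m in moves if m not in tabu] or moves
        current = min(admissible, key=cost)           # best admissible move
        tabu = (tabu + [current])[-tenure:]           # recency-based memory
        if cost(current) < cost(best):
            best = current
    return best

best = tabu_search(edges)                # start from all edges (feasible)
print(sorted(best), cost(best))          # [('a','b'), ('b','c'), ('c','d')] 6
```

The tabu list lets the search climb out of local minima (moves may temporarily worsen cost), which is the diversification/intensification balance the abstract refers to; the real method applies this within each subproblem of the Backbone/Access decomposition.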
Abstract:
An introductory course in probability and statistics for third-year and fourth-year electrical engineering students is described. The course is centered around several computer-based projects that are designed to achieve two objectives. First, the projects illustrate the course topics and provide hands-on experience for the students. The second and equally important objective of the projects is to convey the relevance and usefulness of probability and statistics to practical problems that undergraduate students can appreciate. The benefit of this course is to motivate electrical engineering students to excel in the study of probability concepts, instead of viewing the subject as just one more course requirement toward graduation. The authors co-teach the course, and MATLAB is used for most of the computer-based projects.
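As a flavor of what such a computer-based project might look like (the course itself uses MATLAB; this is a hypothetical analogue in Python, not one of the authors' actual projects):

```python
# Hypothetical mini-project in the spirit of the course (the original
# projects use MATLAB): estimate P(at least two of n students share a
# birthday) by simulation and compare with the exact value.
import math
import random

def simulate(n, trials=100_000):
    hits = sum(len({random.randrange(365) for _ in range(n)}) < n
               for _ in range(trials))
    return hits / trials

def exact(n):
    # complement: probability that all n birthdays are distinct
    return 1 - math.prod((365 - k) / 365 for k in range(n))

for n in (10, 23, 40):
    print(n, round(simulate(n), 3), round(exact(n), 3))
```

Projects of this kind pair a simulation against the analytic answer, so students see the theory validated by an experiment they wrote themselves.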
Abstract:
The production by biosynthesis of optically active amino acids and amines satisfies the pharmaceutical industry's demand for chiral building blocks for the synthesis of various pharmaceuticals. Among the several enzymatic methods that allow the synthesis of optically active amino acids and amines, the use of aminotransferases is promising due to their broad substrate specificity and lack of requirement for external cofactor regeneration. The synthesis of chiral compounds by aminotransferases can be done either by asymmetric synthesis, starting from keto acids or ketones, or by kinetic resolution, starting from racemic amino acids or amines. The asymmetric synthesis of substituted (S)-aminotetralin, an active pharmaceutical ingredient (API), has been shown to have two major factors that contribute to its cost of production: the raw material cost of the biocatalyst used to produce it, and product loss during biocatalyst separation. To minimize the cost contribution of the biocatalyst and to minimize the loss of product, two routes were chosen in this research: (1) engineering the aminotransferase biocatalyst to have greater specific activity, and (2) improving the engineering of the process by immobilization of the biocatalyst in calcium alginate and the addition of cosolvents. An (S)-aminotransferase (mutant CNB03-03) was immobilized, not as purified enzyme but as enzyme within spray-dried cells, in calcium alginate beads and used to produce substituted (S)-aminotetralin at 50 °C and pH 7 in experiments where the immobilized biocatalyst was recycled. The initial rate of reaction for cycle 1 (6 hr duration) was determined to be 0.258 mM/min; for cycle 2 (20 hr duration) it decreased by ~50% compared to cycle 1, and for cycle 3 (20 hr duration) it decreased by ~90% compared to cycle 1 (the immobilized preparation consisted of 50 mg of spray-dried cells per gram of calcium alginate). Conversion to product decreased as well, from 100% in cycle 1 (about 50 mM) to 80% in cycle 2 and 30% after cycle 3. This mutant was found to be deactivated at elevated temperatures during the reaction cycle and was not stable enough to allow multiple cycles in its immobilized form. A new mutant aminotransferase, CNB04-01, was isolated by applying error-prone polymerase chain reaction (PCR) to the gene coding for this enzyme, followed by screening/selection. This mutant showed a significant improvement in thermostability in comparison to CNB03-03. The new mutant was immobilized and tested under similar reaction conditions. The initial rate remained fairly constant (0.2 mM/min) over four cycles (each with a duration of about 20 hours), with the mutant retaining almost 80% of the initial rate in the fourth cycle. The final product concentrations after each cycle did not decrease during recycle experiments. The thermostability of CNB04-01 was much improved compared to CNB03-03. Under the same reaction conditions as stated above, the addition of co-solvents was studied in order to increase substituted tetralone solubility. Toluene and sodium dodecyl sulfate (SDS) were used. SDS at 0.01% (w/v) allowed four recycles of the immobilized spray-dried cells of CNB04-01, always reaching a higher product concentration (80-85 mM) than the system with toluene at 3% (v/v) (70 mM).
The long-term activity of immobilized CNB04-01 in a system with 0.01% (w/v) SDS at 50 °C and pH 7 was retained for three cycles (20 to 24 hours each), always reaching a final product concentration between 80 and 85 mM, but dropping precipitously in the fourth cycle to a final product concentration of 50 mM. Although significant improvements in productivity and stability were observed with immobilization of CNB04-01, a further observation demonstrated the limitations of an immobilization strategy for reducing process costs: analysis of this experiment showed that the sudden drop in final product concentration after the third recycle was due to product accumulation inside the immobilized preparation. In order to improve the economics of the process, research was therefore focused on developing a free enzyme with an even higher activity, thus reducing raw material cost as well as improving biomass separation. A new enzyme (CNB05-01) was obtained using error-prone PCR and screening, using as a template the gene derived from the previous improved enzyme. This mutant was determined to have 1.6 times the initial rate of CNB04-01 and a higher temperature optimum (55 °C). This new enzyme would allow the enzyme loading in the reaction to be reduced five-fold compared to CNB03-03 when used at a concentration of one gram of spray-dried cells per liter (completing the reaction in 20-24 hours). This mutant would also allow the process time to be reduced to 7-8 hours when used at a concentration of 5 grams of spray-dried cells per liter, compared to 24 hours for CNB03-03, assuming that the observations shown above are scalable. It should thus be possible to improve the economics of the process by either reducing the enzyme concentration or reducing the process time, since the production cost of the desired product is primarily a function of both enzyme concentration and process time.
Abstract:
The Michigan Department of Transportation (MDOT) is evaluating upgrading its portion of the Wolverine Line between Chicago and Detroit to accommodate high speed rail. This will entail upgrading the track to allow trains to run at speeds in excess of 110 miles per hour (mph). An important component of this upgrade will be to assess the requirements for ballast material for high speed rail. In the event that the existing ballast materials do not meet the specifications for higher speed trains, additional ballast will be required. The purpose of this study, therefore, is to investigate the current MDOT railroad ballast quality specifications and compare them with both the national and international specifications for use on high speed rail lines. The study found that while MDOT has quality specifications for railroad ballast, it does not have any for high speed rail. In addition, the American Railway Engineering and Maintenance-of-Way Association (AREMA), while also having specifications for railroad ballast, does not have specific specifications for high speed rail lines. The AREMA aggregate specifications for ballast include the following tests: (1) LA Abrasion, (2) Percent Moisture Absorption, (3) Flat and Elongated Particles, and (4) Sulfate Soundness. Internationally, some countries do require a higher standard for high speed rail, such as the Los Angeles (LA) Abrasion test with a more demanding performance threshold, and the Micro-Deval test, which is used to determine the maximum speed at which a high speed train can operate. Since there are no existing MDOT ballast specifications for high speed rail, it is assumed that the aggregate ballast specifications for the Wolverine Line will use the higher international specifications. The Wolverine Line, however, is located in southern Michigan, a region of sedimentary rocks which generally do not meet the existing MDOT ballast specifications. The investigation found that there were only 12 quarries in Michigan that meet the MDOT specification. Of these 12 quarries, six were igneous or metamorphic rock quarries, while six were carbonate quarries. Of the six carbonate quarries, four were located in the Lower Peninsula and two in the Upper Peninsula. Two of the carbonate quarries were located in close proximity to the Wolverine Line, while the remaining quarries were at a significant haulage distance. In either case, the cost of haulage becomes an important consideration. In this regard, four of the quarries were located with lake terminals allowing water transportation to downstate ports. The Upper Peninsula also has a significant amount of metal-based mining in both igneous and metamorphic rock that generates a significant amount of waste rock that could be used as a ballast material. The main drawback, however, is the distance to the Wolverine Line. One potential source is Cliffs Natural Resources, which operates two large surface mines in the Marquette area with rail and water transportation to both Lake Superior and Lake Michigan. Both mines extract rock with a compressive strength far in excess of most ballast materials used in the United States, which would make an excellent ballast material. Discussions with Cliffs, however, indicated that due to environmental concerns they would most likely not be interested in producing a ballast material.
In the United States, carbonate aggregates, while used for ballast, often do not meet the ballast specifications, in addition to the problem of particle degradation that can lead to fouling and cementation issues. Thus, many carbonate aggregate quarries in close proximity to railroads are not used. Since Michigan has a significant number of carbonate quarries, the research also investigated using the dynamic properties of aggregate as a possible additional test of aggregate ballast quality. The dynamic strength of a material can be assessed using a split Hopkinson Pressure Bar (SHPB). The SHPB has traditionally been used to assess the dynamic properties of metals, but over the past 20 years it has also been used to assess the dynamic properties of brittle materials such as ceramics and rock. In addition, the wear properties of metals have been related to their dynamic properties. Wear, or breakdown, of railroad ballast material is one of its main problems, owing to the dynamic loading generated by trains, which will be significantly higher for high speed rail. Previous research has indicated that the Port Inland quarry along Lake Michigan in the southern Upper Peninsula has significant dynamic properties that might make it potentially usable as an aggregate for high speed rail. The dynamic strength testing conducted in this research indicates that the Port Inland limestone in fact has a dynamic strength close to that of igneous rocks and much higher than that of other carbonate rocks in the Great Lakes region. It is recommended that further research be conducted to investigate the Port Inland limestone as a high speed rail ballast material.