10 results for Software product line engineering

in Digital Commons - Michigan Tech


Relevance:

30.00%

Publisher:

Abstract:

From the customer satisfaction point of view, the sound quality of any product has become one of the important factors these days. The primary objective of this research is to determine the factors which affect the acceptability of impulse noise. Though the analysis is based on a sample impulse sound file from a commercial printer, the results can be applied to other similar impulsive noise. It is assumed that impulsive noise can be tuned to meet the acceptability criteria; thus it is necessary to find the most significant factors which can be controlled physically. This analysis is based on a single impulse. A sample impulsive sound file is modified for different amplitudes, background noise, attack time, release time and spectral content. A two-level factorial design of experiments (DOE) is applied to study the significant effects and interactions. For each impulse file modified as per the DOE, the magnitude of perceived annoyance is calculated from an objective metric developed recently at Michigan Technological University. This metric is based on psychoacoustic criteria such as loudness, sharpness, roughness and loudness-based impulsiveness. The software 'Artemis V11.2', developed by HEAD Acoustics, is used to calculate these psychoacoustic terms. As a result of the two-level factorial analysis, a new objective model of perceived annoyance is developed in terms of the physical parameters mentioned above: amplitude, background noise, impulse attack time, impulse release time and spectral content. The effects of the significant individual factors as well as the two-factor interactions are also studied. The results show that all five factors significantly affect the annoyance level of an impulsive sound; thus the annoyance level can be brought within the acceptability criteria by optimizing the factor levels. An additional analysis studies the effect of these five significant parameters on the individual psychoacoustic metrics.
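To make the factorial analysis concrete, the following sketch builds a coded 2^5 design and estimates main effects from a hypothetical annoyance response; the factor names match the abstract, but the response function and numbers are assumptions, not the psychoacoustic metric computed in Artemis.

```python
# Illustrative two-level (2^5) full-factorial analysis with a hypothetical
# annoyance response; the real metric comes from loudness, sharpness,
# roughness and impulsiveness computed in Artemis.
import itertools
import numpy as np

factors = ["amplitude", "background_noise", "attack_time", "release_time", "spectral_content"]

# Coded design matrix: every combination of low (-1) and high (+1) levels.
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

rng = np.random.default_rng(0)
# Hypothetical stand-in response with two main effects and one interaction.
response = (5.0 + 1.2 * design[:, 0] + 0.8 * design[:, 1]
            + 0.3 * design[:, 0] * design[:, 1]
            + rng.normal(0.0, 0.1, len(design)))

# Main effect of each factor: mean response at +1 minus mean response at -1.
for name, column in zip(factors, design.T):
    effect = response[column == 1].mean() - response[column == -1].mean()
    print(f"{name:18s} main effect: {effect:+.3f}")
```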

Relevance:

30.00%

Publisher:

Abstract:

Target localization has a wide range of military and civilian applications in wireless mobile networks. Examples include battlefield surveillance, emergency 911 (E911), traffic alert, habitat monitoring, resource allocation, routing, and disaster mitigation. Basic localization techniques include time-of-arrival (TOA), direction-of-arrival (DOA) and received-signal-strength (RSS) estimation. Techniques based on TOA and DOA are very sensitive to the availability of line-of-sight (LOS), the direct path between the transmitter and the receiver. If LOS is not available, TOA and DOA estimation errors create a large localization error. In order to reduce this non-line-of-sight (NLOS) localization error, NLOS identification, mitigation, and localization techniques have been proposed. This research investigates NLOS identification for multiple-antenna radio systems. The techniques proposed in the literature mainly use one antenna element to enable NLOS identification. When a single antenna is utilized, only limited features of the wireless channel can be exploited to identify NLOS situations. However, in DOA-based wireless localization systems, multiple antenna elements are available. In addition, multiple-antenna technology has been adopted in many widely used wireless systems such as wireless LAN 802.11n and WiMAX 802.16e, which are good candidates for localization-based services. In this work, the potential of spatial channel information for high-performance NLOS identification is investigated. Considering narrowband multiple-antenna wireless systems, two NLOS identification techniques are proposed. First, the use of the spatial correlation of channel coefficients across antenna elements as a metric for NLOS identification is proposed. In order to obtain the spatial correlation, a new multi-input multi-output (MIMO) channel model based on rough surface theory is proposed. This model can be used to compute the spatial correlation between an antenna pair separated by any distance. In addition, a new NLOS identification technique that exploits the statistics of the phase difference across two antenna elements is proposed. This technique assumes the phases received across two antenna elements are uncorrelated. This assumption is validated based on the well-known circular and elliptic scattering models. Next, it is proved that the channel Rician K-factor is a function of the phase-difference variance. Exploiting the Rician K-factor, techniques to identify NLOS scenarios are proposed. Considering wideband multiple-antenna wireless systems which use MIMO orthogonal frequency division multiplexing (OFDM) signaling, space-time-frequency channel correlation is exploited to attain NLOS identification in time-varying, frequency-selective and space-selective radio channels. Novel NLOS identification measures based on space, time and frequency channel correlation are proposed and their performances are evaluated. These measures achieve better NLOS identification performance than those that use only space, time or frequency.
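As a rough illustration of the spatial-correlation idea, the sketch below computes a normalized correlation between channel coefficients observed on two antenna elements and declares NLOS when it falls below a threshold; the threshold and the toy channel snapshots are assumptions, not values from the thesis or its rough-surface MIMO model.

```python
# Minimal sketch of spatial-correlation-based NLOS identification, assuming
# narrowband channel snapshots h1[n], h2[n] from two antenna elements and a
# hypothetical decision threshold.
import numpy as np

def spatial_correlation(h1, h2):
    """Magnitude of the normalized correlation between two antennas' channel coefficients."""
    return np.abs(np.vdot(h1, h2)) / (np.linalg.norm(h1) * np.linalg.norm(h2))

def is_nlos(h1, h2, threshold=0.5):
    """Declare NLOS when spatial correlation falls below the (assumed) threshold."""
    return spatial_correlation(h1, h2) < threshold

# Toy example: a strong common (LOS-like) component versus independent scattering.
rng = np.random.default_rng(1)
los = np.exp(1j * 0.3) + 0.1 * (rng.standard_normal(200) + 1j * rng.standard_normal(200))
print(is_nlos(los, los + 0.1 * rng.standard_normal(200)))  # expected: False (LOS-like)

nlos_a = rng.standard_normal(200) + 1j * rng.standard_normal(200)
nlos_b = rng.standard_normal(200) + 1j * rng.standard_normal(200)
print(is_nlos(nlos_a, nlos_b))                              # expected: True (NLOS-like)
```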

Relevance:

30.00%

Publisher:

Abstract:

The production by biosynthesis of optically active amino acids and amines satisfies the pharmaceutical industry in its demand for chiral building blocks for the synthesis of various pharmaceuticals. Among several enzymatic methods that allow the synthesis of optically active amino acids and amines, the use of aminotransferases is a promising one due to their broad substrate specificity and lack of requirement for external cofactor regeneration. The synthesis of chiral compounds by aminotransferases can be done either by asymmetric synthesis starting from keto acids or ketones, or by kinetic resolution starting from racemic amino acids or amines. The asymmetric synthesis of substituted (S)-aminotetralin, an active pharmaceutical ingredient (API), has been shown to have two major factors that contribute to increasing the cost of production: the raw-material cost of the biocatalyst used to produce it and product loss during biocatalyst separation. To minimize the cost contribution of the biocatalyst and the loss of product, two routes were chosen in this research: 1. engineering the aminotransferase biocatalyst to have greater specific activity, and 2. improving the process by immobilizing the biocatalyst in calcium alginate and adding cosolvents. An (S)-aminotransferase (mutant CNB03-03) was immobilized, not as purified enzyme but as enzyme within spray-dried cells, in calcium alginate beads and used to produce substituted (S)-aminotetralin at 50 °C and pH 7 in experiments where the immobilized biocatalyst was recycled. The initial rate of reaction for cycle 1 (6 hr duration) was determined to be 0.258 mM/min; for cycle 2 (20 hr duration) it decreased by ~50% compared to cycle 1, and for cycle 3 (20 hr duration) it decreased by ~90% compared to cycle 1 (the immobilized preparation consisted of 50 mg of spray-dried cells per gram of calcium alginate). Conversion to product decreased as well, from 100% in cycle 1 (about 50 mM) to 80% in cycle 2 and 30% in cycle 3. This mutant was found to deactivate at elevated temperatures during the reaction cycle and was not stable enough to allow multiple cycles in its immobilized form. A new mutant aminotransferase, CNB04-01, was isolated by applying error-prone polymerase chain reaction (PCR) to the gene coding for this enzyme, followed by screening/selection. This mutant showed a significant improvement in thermostability compared to CNB03-03. The new mutant was immobilized and tested under similar reaction conditions. The initial rate remained fairly constant (0.2 mM/min) over four cycles (each with a duration of about 20 hours), with the mutant retaining almost 80% of the initial rate in the fourth cycle. The final product concentrations after each cycle did not decrease during the recycle experiments. The thermostability of CNB04-01 was much improved compared to CNB03-03. Under the same reaction conditions as stated above, the addition of co-solvents was studied in order to increase substituted tetralone solubility. Toluene and sodium dodecyl sulfate (SDS) were used. SDS at 0.01% (w/v) allowed four recycles of the immobilized spray-dried cells of CNB04-01, always reaching a higher product concentration (80-85 mM) than the system with toluene at 3% (v/v), which reached about 70 mM.
The long-term activity of immobilized CNB04-01 in a system with 0.01% (w/v) SDS at 50 °C and pH 7 was retained for three cycles (20 to 24 hours each), always reaching a final product concentration between 80 and 85 mM, but dropped precipitously in the fourth cycle to a final product concentration of 50 mM. Although significant improvements in productivity and stability from immobilization were observed using CNB04-01, another observation demonstrated the limitations of an immobilization strategy for reducing process costs. Analysis of this experiment showed that the sudden drop in final product concentration after the third recycle was due to product accumulation inside the immobilized preparation. In order to improve the economics of the process, research was focused on developing a free enzyme with an even higher activity, thus reducing raw-material cost as well as improving biomass separation. A new enzyme, CNB05-01, was obtained using error-prone PCR and screening, with the gene derived from the previous improved enzyme as the template. This mutant was determined to have 1.6 times the initial rate of CNB04-01 and a higher temperature optimum (55 °C). This new enzyme would allow the enzyme loading in the reaction to be reduced five-fold compared to CNB03-03 when used at a concentration of one gram of spray-dried cells per liter (completing the reaction after 20-24 hours). This mutant would also allow the process time to be reduced to 7-8 hours when used at a concentration of 5 grams of spray-dried cells per liter, compared to 24 hours for CNB03-03, assuming that the observations shown above are scalable. It could thus be possible to improve the economics of the process by either reducing enzyme concentration or reducing process time, since the production cost of the desired product is primarily a function of both.
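For readers unfamiliar with how initial rates such as 0.258 mM/min are obtained, the sketch below fits a line to early product-concentration samples; the time points and concentrations are hypothetical, chosen only to give a slope of roughly the magnitude reported for cycle 1.

```python
# Illustrative estimation of an initial reaction rate (mM/min) from early
# product-concentration measurements by a linear fit; the data are hypothetical.
import numpy as np

time_min = np.array([0, 15, 30, 45, 60])              # sampling times, minutes
product_mM = np.array([0.0, 3.9, 7.6, 11.8, 15.2])    # substituted (S)-aminotetralin, mM

slope, intercept = np.polyfit(time_min, product_mM, 1)
print(f"initial rate ≈ {slope:.3f} mM/min")  # roughly 0.25 mM/min with these numbers
```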

Relevance:

30.00%

Publisher:

Abstract:

Prediction of radiated fields from transmission lines has not previously been studied from a panoptical power system perspective. The application of broadband over power line (BPL) technologies to overhead transmission lines would benefit greatly from an ability to simulate real power system environments, not limited to the transmission lines themselves. Presently, circuit-based transmission line models used by EMTP-type programs utilize Carson's formula for a waveguide parallel to an interface. This formula is not valid for calculations at high frequencies when the effects of earth-return currents are considered. This thesis explains the challenges of developing such improved models, explores an approach to combining circuit-based and electromagnetics modeling to predict radiated fields from transmission lines, exposes inadequacies of simulation tools, and suggests methods of extending the validity of transmission line models into very high frequency ranges. Electromagnetics programs are commonly used to study radiated fields from transmission lines. However, an approach is proposed here which is also able to incorporate the components of a power system through the combined use of EMTP-type models. Carson's formulas address the series impedance of electrical conductors above and parallel to the earth. These equations have been analyzed to show their inherent assumptions and their implications. Additionally, their lack of validity at higher frequencies has been demonstrated, showing the need to replace Carson's formulas for these types of studies. This body of work leads to several conclusions about the relatively new study of BPL. Foremost, there is a gap in modeling capabilities which has been bridged through integration of circuit-based and electromagnetics modeling, allowing more realistic prediction of BPL performance and radiated fields. The proposed approach is limited in its scope of validity due to the formulas used by EMTP-type software. To extend the range of validity, a new set of equations must be identified and implemented in the approach. Several potential methods of implementation have been explored. Though an appropriate set of equations has not yet been identified, further research in this area will benefit from a clear depiction of the next important steps and how they can be accomplished.
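As a sense of the kind of formula involved, the sketch below evaluates the earth-return self-impedance of a single conductor above lossy ground using a complex-depth (Dubanton-type) approximation that is often substituted for Carson's series; the conductor geometry and earth resistivity are assumed example values, and this quasi-static approximation is itself not claimed to remain valid across the full BPL frequency range.

```python
# Complex-depth approximation to the earth-return self-impedance of one
# conductor above lossy ground (external impedance only); parameters are
# assumed example values, not those of the lines studied in the thesis.
import numpy as np

MU0 = 4e-7 * np.pi  # permeability of free space, H/m

def earth_return_self_impedance(freq_hz, height_m, radius_m, earth_resistivity_ohm_m):
    """Per-unit-length series impedance (ohm/m) of a conductor above lossy earth."""
    omega = 2 * np.pi * freq_hz
    p = np.sqrt(earth_resistivity_ohm_m / (1j * omega * MU0))  # complex penetration depth
    return 1j * omega * MU0 / (2 * np.pi) * np.log(2 * (height_m + p) / radius_m)

for f in (60.0, 1e3, 1e6, 30e6):  # from power frequency up toward the BPL band
    z = earth_return_self_impedance(f, height_m=15.0, radius_m=0.015,
                                    earth_resistivity_ohm_m=100.0)
    print(f"{f:>12.0f} Hz: Z' = {z.real:.4e} + j{z.imag:.4e} ohm/m")
```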

Relevance:

30.00%

Publisher:

Abstract:

Embedded siloxane polymer waveguides have shown promising results for use in optical backplanes. They exhibit high temperature stability and low optical absorption, and require only common processing techniques. A challenging aspect of this technology is out-of-plane coupling of the waveguides. A multi-software approach to modeling an optical vertical interconnect (via) is proposed. This approach utilizes the beam propagation method to generate varied modal field distributions, which are then propagated through a via model using the angular spectrum propagation technique. Simulation results show average losses between 2.5 and 4.5 dB for different initial input conditions. Certain configurations show losses of less than 3 dB, and it is shown that, in an input/output pair of vias, the average loss per via may be lower than the targeted 3 dB.
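A minimal sketch of the angular spectrum propagation step is shown below, assuming a scalar field on a uniform grid; the grid size, wavelength, and Gaussian input are example values and do not represent the actual via geometry modeled in the thesis.

```python
# Angular-spectrum propagation of a sampled 2-D scalar field over a distance z.
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a square complex field by distance z (all lengths in microns)."""
    n = field.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx)
    kz = np.sqrt((k**2 - kx**2 - ky**2).astype(complex))  # evanescent components decay
    spectrum = np.fft.fft2(field)
    return np.fft.ifft2(spectrum * np.exp(1j * kz * z))

# Example: propagate a 4-um-waist Gaussian mode 50 um and compare grid power.
n, dx, wavelength = 256, 0.25, 0.85          # samples, um per sample, um
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
mode = np.exp(-(X**2 + Y**2) / (2 * 4.0**2))
out = angular_spectrum_propagate(mode, wavelength, dx, z=50.0)
print("power ratio after propagation:", np.sum(np.abs(out)**2) / np.sum(np.abs(mode)**2))
```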

Relevance:

30.00%

Publisher:

Abstract:

Finite element tire modeling can be a challenging process due to the overall complexity of the tire and the many variables required to produce capable predictive simulations. Utilizing tools in the Abaqus finite element software, adequate predictive simulations that represent actual operating conditions can be produced. Many variables resulting from complex geometries and materials, multiple loading conditions, and surface contact can be incorporated into the simulations. This thesis outlines modeling practices used to conduct analysis on specific tire variants of the STL3 series OTR tire line produced by Titan Tire. Finite element models were created to represent an inflated tire and rim assembly supporting a 30,000 lb load while resting on a flat surface. Simulations were conducted with reinforcement belt cords at variable angles in order to understand how belt cord arrangement affects tire components and stiffness response.
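As an illustration of why belt cord angle matters for stiffness, the sketch below evaluates the off-axis Young's modulus of a unidirectional cord-rubber ply using the classical lamination theory expression; the ply constants are assumed values, not Titan Tire material data, and the thesis relies on full Abaqus models rather than this hand calculation.

```python
# Off-axis Young's modulus of a unidirectional cord-rubber ply versus cord angle
# (classical lamination theory); ply constants below are assumed example values.
import numpy as np

E1, E2, G12, nu12 = 2500.0, 15.0, 7.0, 0.45   # MPa (and dimensionless nu12), assumed

def off_axis_modulus(theta_deg):
    """Young's modulus of the ply when loaded at angle theta to the cord direction."""
    c, s = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    inv_Ex = c**4 / E1 + s**4 / E2 + (1.0 / G12 - 2.0 * nu12 / E1) * c**2 * s**2
    return 1.0 / inv_Ex

for theta in (0, 15, 30, 45, 60, 90):
    print(f"cord angle {theta:2d} deg: E_x ≈ {off_axis_modulus(theta):7.1f} MPa")
```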

Relevance:

30.00%

Publisher:

Abstract:

Waste effluents from the forest products industry are sources of lignocellulosic biomass that can be converted to ethanol by yeast after pretreatment. However, the challenge of improving ethanol yields from a mixed pentose and hexose fermentation of a potentially inhibitory hydrolysate still remains. Hardboard manufacturing process wastewater (HPW) was evaluated as a potential feedstream for lignocellulosic ethanol production by native xylose-fermenting yeast. After screening of xylose-fermenting yeasts, Scheffersomyces stipitis CBS 6054 was selected as the ideal organism for conversion of the HPW hydrolysate material. The individual and synergistic effects of inhibitory compounds present in the hydrolysate were evaluated using response surface methodology, and it was concluded that organic acids have an additive negative effect on fermentations. Fermentation conditions were also optimized in terms of aeration and pH. Methods for improving productivity and achieving higher ethanol yields were investigated, using adaptation to the conditions present in the hydrolysate through repeated cell sub-culturing. The objectives of the present study were to adapt S. stipitis CBS 6054 to a dilute-acid pretreated, lignocellulose-containing waste stream; compare the physiological, metabolic, and proteomic profiles of the adapted strain to its parent; quantify changes in protein expression/regulation, metabolite abundance, and enzyme activity; and determine the biochemical and molecular mechanisms of adaptation. The adapted culture showed improvement in both substrate utilization and ethanol yield compared to the unadapted parent strain. The adapted strain also exhibited a distinct growth phenotype compared to its unadapted parent, based on its physiological and proteomic profiles. Several potential targets that could be responsible for strain improvement were identified. These targets could have implications for metabolic engineering of strains for improved ethanol production from lignocellulosic feedstocks. Although this work focuses specifically on the conversion of HPW to ethanol, the methods developed can be used for any feedstock/product system that employs a microbial conversion step. The benefit of this research is that the organisms will be optimized for a company's specific system.
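To illustrate the response-surface step, the sketch below fits a second-order model of ethanol yield against two inhibitor concentrations and inspects the interaction coefficient; the inhibitor names, data points, and model form are hypothetical stand-ins, not measurements or the exact design used in the study.

```python
# Illustrative second-order response-surface fit of ethanol yield versus two
# inhibitor concentrations; all numbers are hypothetical.
import numpy as np

# Hypothetical design points: acetic acid (g/L), furfural (g/L), ethanol yield (g/g)
acetic   = np.array([1.0, 1.0, 5.0, 5.0, 3.0, 3.0, 3.0, 0.2, 5.8])
furfural = np.array([0.1, 0.9, 0.1, 0.9, 0.5, 0.5, 0.5, 0.5, 0.5])
yield_gg = np.array([0.42, 0.38, 0.33, 0.28, 0.36, 0.35, 0.36, 0.43, 0.30])

# Design matrix for: yield = b0 + b1*A + b2*F + b12*A*F + b11*A^2 + b22*F^2
X = np.column_stack([np.ones_like(acetic), acetic, furfural,
                     acetic * furfural, acetic**2, furfural**2])
coeffs, *_ = np.linalg.lstsq(X, yield_gg, rcond=None)
b0, b1, b2, b12, b11, b22 = coeffs
print(f"interaction term b12 = {b12:+.4f}  (near zero suggests additive inhibitor effects)")
```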

Relevance:

30.00%

Publisher:

Abstract:

The Michigan Department of Transportation (MDOT) is evaluating upgrading its portion of the Wolverine Line between Chicago and Detroit to accommodate high speed rail. This will entail upgrading the track to allow trains to run at speeds in excess of 110 miles per hour (mph). An important component of this upgrade will be to assess the requirements for ballast material for high speed rail. In the event that the existing ballast materials do not meet specifications for higher speed trains, additional ballast will be required. The purpose of this study, therefore, is to investigate the current MDOT railroad ballast quality specifications and compare them to both national and international specifications for use on high speed rail lines. The study found that while MDOT has quality specifications for railroad ballast, it does not have any for high speed rail. In addition, the American Railway Engineering and Maintenance-of-Way Association (AREMA), while also having specifications for railroad ballast, does not have specific specifications for high speed rail lines. The AREMA aggregate specifications for ballast include the following tests: (1) LA Abrasion, (2) Percent Moisture Absorption, (3) Flat and Elongated Particles, and (4) Sulfate Soundness. Internationally, some countries do require a higher standard for high speed rail, such as the Los Angeles (LA) Abrasion test with a more demanding performance requirement and the Micro-Deval test, which is used to determine the maximum speed at which a high speed train can operate. Since there are no existing MDOT ballast specifications for high speed rail, it is assumed that aggregate ballast specifications for the Wolverine Line will use the higher international specifications. The Wolverine Line, however, is located in southern Michigan, a region of sedimentary rocks which generally do not meet the existing MDOT ballast specifications. The investigation found that there were only 12 quarries in Michigan that meet the MDOT specification. Of these 12 quarries, six were igneous or metamorphic rock quarries and six were carbonate quarries. Of the six carbonate quarries, four were located in the Lower Peninsula and two in the Upper Peninsula. Two of the carbonate quarries were located in close proximity to the Wolverine Line, while the remaining quarries were at a significant haulage distance. In either case, the cost of haulage becomes an important consideration. In this regard, four of the quarries had lake terminals allowing water transportation to downstate ports. The Upper Peninsula also has a significant amount of metal-based mining in both igneous and metamorphic rock that generates a significant amount of waste rock that could be used as ballast material. The main drawback, however, is the distance to the Wolverine rail line. One potential source is Cliffs Natural Resources, which operates two large surface mines in the Marquette area with rail and water transportation to both Lake Superior and Lake Michigan. Both mines extract rock with a very high compressive strength, far in excess of most ballast materials used in the United States, which would make an excellent ballast material. Discussions with Cliffs, however, indicated that due to environmental concerns they would most likely not be interested in producing a ballast material.
In the United States, carbonate aggregates, while used for ballast, often do not meet the ballast specifications, in addition to the problem of particle degradation that can lead to fouling and cementation issues. Thus, many carbonate aggregate quarries in close proximity to railroads are not used. Since Michigan has a significant number of carbonate quarries, the research also investigated using the dynamic properties of aggregate as a possible additional test of aggregate ballast quality. The dynamic strength of a material can be assessed using a split Hopkinson pressure bar (SHPB). The SHPB has traditionally been used to assess the dynamic properties of metals, but over the past 20 years it has also been used to assess the dynamic properties of brittle materials such as ceramics and rock. In addition, the wear properties of metals have been related to their dynamic properties. Wear or breakdown is one of the main problems with ballast material due to the dynamic loading generated by trains, which will be significantly higher for high speed rail. Previous research has indicated that rock from the Port Inland quarry along Lake Michigan in the southern Upper Peninsula has dynamic properties that might make it potentially usable as an aggregate for high speed rail. The dynamic strength testing conducted in this research indicates that the Port Inland limestone in fact has a dynamic strength close to that of igneous rocks and much higher than other carbonate rocks in the Great Lakes region. It is recommended that further research be conducted to investigate the Port Inland limestone as a high speed rail ballast material.
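For context on what an SHPB measurement yields, the sketch below applies the standard one-wave data reduction, computing specimen stress from the transmitted strain pulse and strain rate from the reflected pulse; the bar properties, specimen dimensions, and strain pulses are assumed example values, not data from this research.

```python
# One-wave split Hopkinson pressure bar (SHPB) data reduction with assumed
# apparatus parameters and hypothetical digitized strain pulses.
import numpy as np

E_bar = 200e9                       # bar Young's modulus, Pa (steel, assumed)
c_bar = 5000.0                      # bar wave speed, m/s (assumed)
A_bar = np.pi * (0.025 / 2) ** 2    # bar cross-section, m^2 (25 mm dia., assumed)
A_spec = np.pi * (0.020 / 2) ** 2   # specimen cross-section, m^2 (20 mm dia., assumed)
L_spec = 0.010                      # specimen length, m (assumed)

dt = 1e-6                                        # 1 us sampling interval
eps_reflected = -np.linspace(0, 4e-4, 100)       # reflected pulse at incident-bar gauge
eps_transmitted = np.linspace(0, 6e-4, 100)      # transmitted pulse at output-bar gauge

stress = E_bar * (A_bar / A_spec) * eps_transmitted   # specimen stress, Pa
strain_rate = -2.0 * c_bar / L_spec * eps_reflected   # specimen strain rate, 1/s
strain = np.cumsum(strain_rate) * dt                  # specimen strain (time integral)

print(f"peak stress ≈ {stress.max() / 1e6:.1f} MPa, "
      f"peak strain rate ≈ {strain_rate.max():.0f} 1/s, "
      f"final strain ≈ {strain[-1]:.3f}")
```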

Relevance:

30.00%

Publisher:

Abstract:

In the realm of computer programming, the experience of writing a program is used to reinforce concepts and evaluate ability. This research uses three case studies to evaluate the introduction of testing through Kolb's Experiential Learning Model (ELM). We then analyze the impact of those testing experiences to determine methods for improving future courses. The first testing experience that students encounter is unit test reports in their early courses. This course demonstrates that automating and improving feedback can provide more ELM iterations. The JUnit Generation (JUG) tool also provided a positive experience for the instructor by reducing the overall workload. Later, undergraduate and graduate students have the opportunity to work together in a multi-role Human-Computer Interaction (HCI) course. The interactions use usability analysis techniques, with graduate students as usability experts and undergraduate students as design engineers. Students gain experience testing the user experience of their product prototypes using methods varying from heuristic analysis to user testing. From this course, we learned the importance of the instructor's role in the ELM. As more roles were added to the HCI course, a desire arose to provide more complete, quality-assured software. This inspired the addition of unit testing experiences to the course. However, we learned that significant preparation must be made to apply the ELM when students are resistant. The research presented through these courses was driven by the recognition of a need for testing in a Computer Science curriculum. Our understanding of the ELM suggests the need for student experience when testing concepts are introduced. We learned that experiential learning, when appropriately implemented, can provide benefits to the Computer Science classroom. When examined together, these course-based research projects provide insight into building strong testing practices into a curriculum.

Relevance:

30.00%

Publisher:

Abstract:

The objective of this thesis is to outline a Performance-Based Engineering (PBE) framework to address the multiple hazards of Earthquake (EQ) and subsequent Fire Following Earthquake (FFE). Currently, fire codes in the United States are largely empirical and prescriptive in nature. The reliance on prescriptive requirements makes quantifying sustained damage due to fire difficult. Additionally, the empirical standards have resulted from furnace testing of individual members or individual assemblies, which has been shown to differ greatly from full structural system behavior. The very nature of fire behavior (ignition, growth, suppression, and spread) is fundamentally difficult to quantify due to the inherent randomness present in each stage of fire development. The study of interactions between earthquake damage and fire behavior is also in its infancy, with essentially no empirical testing results available. This thesis presents a literature review, a discussion and critique of the state of the art, and a summary of software currently being used to estimate loss due to EQ and FFE. A generalized PBE framework for EQ and subsequent FFE is presented, along with a combined hazard probability to performance objective matrix and a table of variables necessary to fully implement the proposed framework. Future research requirements and a summary are also provided, with discussion of the difficulties inherent in adequately describing the multiple hazards of EQ and FFE.