891 results for Minimization
Abstract:
Assessment of the integrity of structural components is of great importance for aerospace systems, land and marine transportation, civil infrastructures, and other biological and mechanical applications. Inspections based on guided waves (GWs) are an attractive means for structural health monitoring. This thesis presents the study and development of techniques for GW ultrasound signal analysis and compression in the context of non-destructive testing of structures. In guided wave inspections, the problem of dispersion compensation must be addressed. A signal processing approach based on frequency warping was adopted. This operator maps the frequency axis through a function derived from the group velocity of the test material and is used to remove the dependence of the acquired signals on the travelled distance. The processing strategy was successfully applied to impact location and damage localization tasks in composite and aluminum panels. It has been shown that, building on this processing tool, low-power embedded systems for GW structural monitoring can be implemented. Finally, a new procedure based on Compressive Sensing has been developed and applied for data reduction. This procedure also has the beneficial effect of enhancing the accuracy of structural defect localization. The algorithm uses a convolutive model of ultrasonic guided wave propagation that exploits a sparse signal representation in the warped frequency domain. The recovery from the compressed samples is based on an alternating minimization procedure which achieves both an accurate reconstruction of the ultrasonic signal and a precise estimation of the waves' time of flight. This information is fed to hyperbolic or elliptic localization procedures for accurate impact or damage localization.
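A rough sketch of the warping idea, for orientation only: the spectrum is resampled along a map built from the group-velocity curve so that dispersive arrivals align as if they had travelled at a constant reference velocity. The names (vg_of_f, v_ref), the particular map, and the omission of the unitary amplitude factor are simplifying assumptions, not the exact operator used in the thesis.

```python
import numpy as np

def warp_spectrum(signal, fs, vg_of_f, v_ref):
    """Dispersion compensation by frequency warping (illustrative sketch)."""
    n = len(signal)
    f = np.fft.rfftfreq(n, d=1.0 / fs)        # non-negative frequency axis
    S = np.fft.rfft(signal)
    # Warping map: cumulative integral of v_ref / vg(f); one common
    # construction, assumed here for illustration. If vg == v_ref
    # everywhere, w == f and the signal is returned unchanged.
    w = np.cumsum(v_ref / np.maximum(vg_of_f(f), 1e-9)) * (f[1] - f[0])
    # Evaluate the original spectrum at the warped frequencies
    # (real and imaginary parts interpolated separately).
    Sw = np.interp(w, f, S.real) + 1j * np.interp(w, f, S.imag)
    return np.fft.irfft(Sw, n)
```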
Abstract:
Material tracking is becoming ever more important in the metal industry: a metal strip must pass through a defined program during the production process; only then is the quality of the end product guaranteed. Current practice is to assign each metal strip a number with which the strip is labeled. When strips are stored for days between two production steps, this method proves error-prone: labels can be lost, mixed up, misread, or become illegible. In 2007, iba AG filed a patent on identifying metal strips by their thickness profile (Anhaus [3]); with it, the identity of a metal strip can be proven beyond doubt, and reliable material tracking became possible. It turned out, however, that the noisy thickness profiles, which can be regarded as long time series, cannot be compared successfully with existing methods (e.g., L2 distance minimization or Dynamic Time Warping). This thesis presents an efficient feature-based algorithm for comparing two time series. It is robust against noise and measurement dropouts as well as invariant to coordinate transformations of the time series such as scaling and translation. Comparisons with partial time series are also possible. Our framework is distinguished by both its high accuracy and its high speed: more than 99.5% of the queries against our test database of real profiles are answered correctly. At several hundred time-series comparisons per second, it is roughly a factor of 10 faster than the established methods in the field of time-series analysis, which, moreover, are unable to process more than 90% of the queries correctly. The algorithm has proven suitable for industrial use: iba AG employs it in a thickness-profile-based material-tracking system, unique worldwide, which is already operating successfully in the first steel and aluminum rolling mills.
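To make the invariance requirement concrete, here is a generic baseline sketch, not the proprietary feature-based algorithm of the thesis: z-normalization removes exactly the scaling and translation the abstract mentions before profiles are compared. Equal-length profiles are assumed for brevity; partial matching would additionally slide the query along each stored profile.

```python
import numpy as np

def znorm(x):
    """Remove offset and scale, giving invariance to linear
    coordinate transformations of the profile values."""
    x = np.asarray(x, dtype=float)
    s = x.std()
    return (x - x.mean()) / (s if s > 0 else 1.0)

def best_match(query, profiles):
    """Index of the stored thickness profile closest to the query."""
    q = znorm(query)
    dists = [np.linalg.norm(q - znorm(p)) for p in profiles]
    return int(np.argmin(dists))
```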
Abstract:
In this work we study a model for breast image reconstruction in Digital Tomosynthesis, a non-invasive and non-destructive method for the three-dimensional visualization of the inner structures of an object, in which data acquisition consists of measuring a limited number of low-dose two-dimensional projections of the object by moving a detector and an X-ray tube around it within a limited angular range. Reconstructing 3D images from the projections provided by Digital Tomosynthesis is an ill-posed inverse problem, which leads to a minimization problem with an objective function containing a data-fitting term and a regularization term. The contribution of this thesis is to use techniques from compressed sensing, in particular replacing the standard least-squares data-fitting term with the minimization of the 1-norm of the residuals, and using Total Variation (TV) as the regularization term. We tested two different algorithms: a new alternating minimization algorithm (ADM) and a version of the more standard scaled projected gradient algorithm (SGP) that involves the 1-norm. We performed experiments and analysed the performance of the two methods, comparing relative errors, number of iterations, run times, and the quality of the reconstructed images. In conclusion, we found that the 1-norm and Total Variation are valid tools in formulating the minimization problem for image reconstruction in Digital Tomosynthesis, and that the new ADM algorithm reached a relative error comparable to that of the classic SGP variant while proving superior in speed and in the early appearance of the structures representing the masses.
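In symbols, the reconstruction problem described above takes the generic form

$$ \min_{x \ge 0} \; \| A x - b \|_1 + \lambda \, \mathrm{TV}(x), $$

where $A$ is the projection operator, $b$ the measured projections, and $\lambda$ a regularization weight; the nonnegativity constraint and the exact choice of the discrete TV are assumptions for illustration, as the abstract does not specify them.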
Abstract:
We present an automatic method to segment brain tissues from volumetric MRI brain tumor images. The method is based on non-rigid registration of an average atlas in combination with a biomechanically justified tumor growth model that simulates the soft-tissue deformations caused by the tumor mass effect. The tumor growth model, formulated as a mesh-free Markov Random Field energy minimization problem, ensures correspondence between the atlas and the patient image prior to the registration step. The method is non-parametric, simple, and fast compared to other approaches, while maintaining similar accuracy. It has been evaluated qualitatively and quantitatively, with promising results, on eight datasets comprising simulated images and real patient data.
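The abstract does not spell out the energy; a generic discrete MRF energy of the kind minimized in such growth models combines a unary term on the nodal displacements with a pairwise regularity term over neighboring nodes,

$$ E(\mathbf{u}) = \sum_{i} \psi_i(u_i) + \sum_{(i,j) \in \mathcal{N}} \psi_{ij}(u_i, u_j), $$

where $u_i$ is the displacement of node $i$ and $\mathcal{N}$ the neighborhood system; the concrete potentials $\psi_i$ and $\psi_{ij}$ used in the thesis are not given here.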
Abstract:
Hand transplantation has been indicated in selected patients after traumatic upper extremity amputation and has been performed in only a few centers around the world over the last decade. In comparison to solid organ transplantation, the host immunological barrier is challenging to overcome due to the complex antigenicity of the different tissues involved, the skin being the most susceptible to rejection. Patients require lifelong immunosuppression for a non-life-threatening condition. Minimization of maintenance immunosuppression represents the key step toward promoting wider applicability of hand transplantation. Current research is working toward understanding the mechanisms of composite tissue allograft (CTA) rejection. Worldwide, 72 hands have been successfully transplanted in 51 patients (21 double hand transplants), and in one case both arms, since 1998.
Abstract:
This thesis examines three questions regarding the content of Bucknell University's waste stream and the contributors to campus recycling and solid waste disposal. The first asks, "What does Bucknell's waste stream consist of?" To answer this question, I designed a campus-wide waste audit procedure that sampled one dumpster from each of the eleven 'activity' types on campus in order to better understand Bucknell's waste composition. The audit was implemented during the Fall semester of the 2011-2012 school year. The waste from each dumpster was sorted into several recyclable and non-recyclable categories and then weighed by category. Results showed the Bison and Carpenter Shop dumpsters to contain the highest percentage of divertible materials (through recycling and/or composting). When extrapolated, results also showed the Dining Services and Facilities buildings to be the most waste-dense in terms of pounds of waste generated per square foot. The Bison also generated the most overall waste by weight. The average composition of all dumpsters revealed that organic waste made up 24% of all waste, non-recyclable paper 23%, and non-recyclable plastic 20%. It will be important to move forward using these results to help create effective waste programs that target the appropriate areas of concern. My second question asks, "What influences waste behavior to contribute to this 'picture' of the waste stream?" To answer this question, I created a survey that was sent to a randomly selected sub-group of each of the university's three constituencies: students, faculty, and staff. The survey sought responses regarding each constituency's solid waste disposal and recycling behavior, attitudes toward recycling, and motivating factors for solid waste disposal behaviors across different sectors of the university. Using regression analysis, I found three statistically significant motivating factors that influence solid waste disposal behavior: knowledge and awareness, moral value, and social norms. I further examined how a person's characteristics relate to these motivating factors and found that one's position on campus showed a significant association. Consistently, faculty and staff were strongly influenced by the aforementioned motivating factors, while students' behavior was less influenced by them. This suggests that new waste programs should target students to help increase the influence of these motivators, improve the recycling rate, and lower overall solid waste disposal on campus. After drawing overall conclusions from the waste audit and survey, I ask my third question: "What actions can Bucknell take to increase recycling rates and decrease solid waste generation?" Bucknell currently features several recycling and waste minimization programs on campus. However, using results from the waste audit and campus survey, we can better understand what the issues in the waste stream are, how to address them, and who needs to be addressed. I propose several suggestions for projects that future students may take on for summer or thesis research. Suggestions include targeting the categories of waste that occur most frequently in the waste stream, as well as the building types with the highest waste density and potential recovery rates. Additionally, certain groups on campus should be targeted more directly than others, namely the student body, which shows the weakest influence of the motivators of recycling and waste behavior. Several variables were identified as significant motivators of waste and recycling behavior and could be used as program tactics to encourage more effective behavior.
Abstract:
Anaerobic digestion of food scraps has the potential to accomplish waste minimization, energy production, and compost or humus production. At Bucknell University, removal of food scraps from the waste stream could reduce municipal solid waste transportation costs and landfill tipping fees, and provide methane and humus for use on campus. To determine the suitability of food waste produced at Bucknell for high-solids anaerobic digestion (HSAD), a year-long characterization study was conducted. Physical and chemical properties, waste biodegradability, and annual production of biodegradable waste were assessed. Bucknell University food and landscape waste was digested at pilot scale for over a year to test performance at low and high loading rates, ease of operation at 20% solids, and the benefits of codigestion of food and landscape waste, and to provide digestate for studies assessing the curing needs of HSAD digestate. A laboratory-scale curing study was conducted to assess the curing duration required to reduce microbial activity, phytotoxicity, and odors to levels acceptable for subsequent use of the humus. The characteristics of Bucknell University food and landscape waste were tested approximately weekly for one year to determine chemical oxygen demand (COD), total solids (TS), volatile solids (VS), and biodegradability (from batch digestion studies). Fats, oil, and grease and total Kjeldahl nitrogen were also tested for some food waste samples. Based on the characterization and biodegradability studies, Bucknell University dining hall food waste is a good candidate for HSAD. During batch digestion studies, Bucknell University food waste produced a mean of 288 mL CH4/g COD with a 95% confidence interval of 0.06 mL CH4/g COD. The addition of landscape waste increased methane production from both food and landscape waste; however, because the biodegradability of the landscape waste was extremely low, the increase was small. Based on an informal waste audit, Bucknell could collect up to 100 tons of food waste from dining facilities each year. The pilot-scale high-solids anaerobic digestion study confirmed that digestion of Bucknell University food waste combined with landscape waste at a low organic loading rate (OLR) of 2 g COD/L reactor volume-day is feasible. During low OLR operation, stable reactor performance was demonstrated through monitoring of biogas production and composition, reactor total and volatile solids, total and soluble chemical oxygen demand, volatile fatty acid content, pH, and bicarbonate alkalinity. Low OLR HSAD of Bucknell University food waste and landscape waste combined produced 232 L CH4/kg COD and 229 L CH4/kg VS. When the OLR was increased to high loading (15 g COD/L reactor volume-day) to assess maximum loading conditions, reactor performance became unstable due to ammonia accumulation and subsequent inhibition. Methane production per unit COD also decreased (to 211 L CH4/kg COD fed), although methane production per unit VS increased (to 272 L CH4/kg VS fed). The degree of ammonia inhibition was investigated through respirometry, in which reactor digestate was diluted and exposed to varying concentrations of ammonia. Treatments with low ammonia concentrations recovered quickly from the ammonia inhibition within the reactor. The post-digestion curing process was studied at laboratory scale to provide a preliminary assessment of curing duration. Digestate was mixed with woodchips and incubated in an insulated container at 35 °C to simulate the self-heating conditions of full-scale curing. The degree of digestate stabilization was determined through oxygen uptake rates, percent O2, temperature, volatile solids, and the Solvita Maturity Index. Phytotoxicity was determined through observation of volatile fatty acid and ammonia concentrations. Stabilization of organics and elimination of phytotoxic compounds (after 10-15 days of curing) preceded significant reductions of volatile sulfur compounds (hydrogen sulfide, methanethiol, and dimethyl sulfide) after 15-20 days of curing. Bucknell University food waste has high biodegradability and is suitable for high-solids anaerobic digestion; however, it has a low C:N ratio, which can result in ammonia accumulation under some operating conditions. The low biodegradability of Bucknell University landscape waste limits the amount of bioavailable carbon that it can contribute, making it unsuitable as a cosubstrate to increase the C:N ratio of food waste. Additional research is indicated to identify cosubstrates with higher biodegradability that may allow successful HSAD of Bucknell University food waste at high OLRs; candidates include office paper, field residues, and grease trap waste. A brief curing period of less than 3 weeks was sufficient to produce viable humus from digestate produced by low OLR HSAD of food and landscape waste.
Abstract:
Estimation of the number of mixture components (k) is an unsolved problem. Available methods for estimating k include bootstrapping the likelihood ratio test statistic and optimizing a variety of validity functionals such as AIC, BIC/MDL, and ICOMP. We investigate the minimization of the distance between the fitted mixture model and the true density as a method for estimating k. The distances considered are Kullback-Leibler (KL) and L2. We estimate these distances using cross-validation. A reliable estimate of k is obtained by voting over B estimates of k corresponding to B cross-validation estimates of distance. This estimation method with the KL distance is very similar to the Monte Carlo cross-validated likelihood approach discussed by Smyth (2000). With a focus on univariate normal mixtures, we present simulation studies that compare the cross-validated distance method with AIC, BIC/MDL, and ICOMP. We also apply the cross-validated distance approach, along with AIC, BIC/MDL, and ICOMP, to data from an osteoporosis drug trial in order to find groups that respond differentially to treatment.
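A hedged sketch of the voting scheme for the KL case: up to the additive constant E_f[log f], minimizing KL(f || g_k) is equivalent to maximizing the held-out log-likelihood of the fitted mixture g_k, which is the connection to Smyth's cross-validated likelihood noted above. The split proportion, B, and k_max below are illustrative choices, not those of the thesis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import ShuffleSplit

def estimate_k(x, k_max=6, B=20, seed=0):
    """Vote over B cross-validation splits of univariate data x."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    votes = np.zeros(k_max + 1, dtype=int)
    splits = ShuffleSplit(n_splits=B, test_size=0.5, random_state=seed)
    for train, test in splits.split(x):
        # Held-out mean log-likelihood for each candidate k; its maximizer
        # is the minimizer of the cross-validated KL distance estimate.
        ll = [GaussianMixture(n_components=k, random_state=seed)
              .fit(x[train]).score(x[test]) for k in range(1, k_max + 1)]
        votes[int(np.argmax(ll)) + 1] += 1
    return int(np.argmax(votes))   # the k chosen most often wins the vote
```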
Abstract:
EPON 862 is an epoxy resin that is cured with the hardening agent DETDA to form a crosslinked epoxy polymer used as a matrix component in modern aircraft structures. These crosslinked polymers are often exposed to prolonged periods at temperatures below the glass transition range, which causes physical aging to occur. Because physical aging can compromise the performance of epoxies and their composites, and because experimental techniques cannot provide all of the physical insight needed to fully understand it, efficient computational approaches to predict the effects of physical aging on thermo-mechanical properties are needed. In this study, Molecular Dynamics and Molecular Minimization simulations are used to establish well-equilibrated, validated molecular models of the EPON 862-DETDA epoxy system over a range of crosslink densities using a united-atom force field. These simulations are subsequently used to predict the glass transition temperature, thermal expansion coefficients, and elastic properties of each of the crosslinked systems in order to validate the modeling techniques. The results indicate that the glass transition temperature and elastic properties increase with increasing crosslink density, while the thermal expansion coefficient decreases with crosslink density, both above and below the glass transition temperature. The results also indicate that there may be an upper limit to the crosslink density that can realistically be achieved in epoxy systems. After evaluation of the thermo-mechanical properties, a method is developed to efficiently establish molecular models of epoxy resins that represent the corresponding real molecular structure at specific aging times. Although this approach does not model the physical aging process itself, it is useful for establishing a molecular model that resembles the physically aged state, for further use in predicting thermo-mechanical properties as a function of aging time. An equation has been derived from the results that directly correlates aging time with the aged volume of the molecular model. This equation can be helpful for modelers who want to study the properties of epoxy resins at different levels of aging but have little information about the volume shrinkage occurring during physical aging.
Abstract:
Interest in the study of magnetic/non-magnetic multilayered structures took a giant leap after Grünberg and his group established that the interlayer exchange coupling (IEC) is a function of the non-magnetic spacer width. This interest was further fuelled by the discovery of the phenomenal Giant Magnetoresistance (GMR) effect; in fact, in 2007 Albert Fert and Peter Grünberg were awarded the Nobel Prize in Physics for their contribution to the discovery of GMR. GMR is the key property used in the read head of present-day computer hard drives, as these require high sensitivity in the detection of magnetic fields. The recent increase in demand for device miniaturization has encouraged researchers to look for GMR in nanoscale multilayered structures. In this context, one-dimensional (1-D) multilayered nanowire structures have shown tremendous promise as viable candidates for ultra-sensitive read-head sensors. Indeed, the GMR effect, the novel feature of currently used multilayered thin films, has already been observed in multilayered nanowire systems at ambient temperature. Geometrical confinement of the superlattice along two dimensions (2-D) to construct the 1-D multilayered nanowire prohibits the minimization of the magnetic interaction, offering a rich variety of magnetic properties in nanowires that can be exploited for novel functionality. In addition, the introduction of a non-magnetic spacer between the magnetic layers presents an additional advantage in controlling magnetic properties by tuning the interlayer magnetic interaction. Despite the large volume of theoretical work devoted to understanding GMR and IEC in superlattice structures, only limited theoretical calculations have been reported for 1-D multilayered systems. Thus, to gauge their potential application in new-generation magneto-electronic devices, this thesis discusses the use of first-principles density functional theory (DFT) to predict the equilibrium structure, stability, and electronic and magnetic properties of one-dimensional multilayered nanowires. In particular, I focus on the electronic and magnetic properties of Fe/Pt multilayered nanowire structures and the role of the non-magnetic Pt spacer in modulating the magnetic properties of the wire. It is found that the average magnetic moment per atom in the nanowire increases monotonically with an approximate 1/N(Fe) dependence, where N(Fe) is the number of iron layers in the nanowire. A simple model based upon the interfacial structure is given to explain the 1/N(Fe) trend in the magnetic moment obtained from the first-principles calculations. A new mechanism, based upon spin flip within the layer and multistep electron transfer between the layers, is proposed to elucidate the enhancement of the magnetic moment of the iron atoms at the platinum interface. The calculated IEC in the Fe/Pt multilayered nanowire is found to switch sign as the width of the non-magnetic spacer varies. The competition among short- and long-range direct exchange and superexchange is found to play a key role in this non-monotonic sign of the IEC as a function of the platinum spacer width. The magnetoresistance calculated from Julliere's model also exhibits switching behavior similar to that of the IEC. The universality of this exchange-coupling behavior has also been examined by introducing different non-magnetic spacers (palladium, copper, silver, and gold) between the magnetic iron layers. The nature of the hybridization between Fe and the non-magnetic spacer is found to dictate the interlayer magnetic interaction. For example, in the Fe/Pd nanowire the d-p hybridization in the two-spacer-layer case favors the anti-ferromagnetic (AFM) configuration over the ferromagnetic (FM) configuration, whereas the hybridization between the half-filled Fe(d) and filled Cu(p) states in the Fe/Cu nanowire favors FM coupling in the two-spacer system.
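The abstract only names the interfacial model; one plausible form consistent with the quoted 1/N(Fe) trend is a layer average in which the two interface layers carry an enhanced moment (the symbols $\mu_{\mathrm{int}}$ and $\mu_{\mathrm{core}}$ are illustrative assumptions, not quantities reported in the abstract):

$$ \bar{\mu}(N_{\mathrm{Fe}}) \approx \mu_{\mathrm{core}} + \frac{2\,(\mu_{\mathrm{int}} - \mu_{\mathrm{core}})}{N_{\mathrm{Fe}}}, $$

which tends to the core (bulk-like) moment as the iron slab thickens and is dominated by the enhanced interface moment for thin slabs.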
Abstract:
The demand for power generation, and the associated costs of generation from non-renewable resources, are increasing at an alarming rate. Solar energy is one of the renewable resources with the potential to minimize this increase. Utilization of solar energy has been concentrated mainly on heating applications. The use of solar energy in building cooling systems would contribute greatly to the goal of minimizing non-renewable energy use. The approaches of solar energy heating system research conducted by institutions such as the University of Wisconsin at Madison, and the building heat flow model research conducted by Oklahoma State University, can be used to develop and optimize solar cooling building systems. This research uses these two approaches to develop Graphical User Interface (GUI) software for an integrated solar absorption cooling building model, capable of simulating and optimizing an absorption cooling system that uses solar energy as the main energy source to drive the cycle. The software was then put through a number of litmus tests to verify its integrity. The tests were conducted on various building cooling system data sets from similar applications around the world. The output obtained from the developed software was consistent with established experimental results from the data sets used. Software developed by other research efforts caters to advanced users; the software developed in this research is not only reliable in its code integrity but, through its integrated approach, is also suited to new users. Hence, this dissertation aims to correctly model a complete building with an absorption cooling system in an appropriate climate as a cost-effective alternative to a conventional vapor compression system.
Abstract:
With proper application of Best Management Practices (BMPs), the impact of sediment on water bodies can be minimized. However, finding the optimal allocation of BMPs can be difficult, since there are numerous possible options. Also, economics plays an important role in BMP affordability and, therefore, in the number of BMPs that can be placed in a given budget year. In this study, two methodologies are presented to determine the optimal cost-effective BMP allocation, by coupling a watershed-level model, the Soil and Water Assessment Tool (SWAT), with two different methods: targeting and a multi-objective genetic algorithm (Non-dominated Sorting Genetic Algorithm II, NSGA-II). For demonstration, these two methodologies were applied to an agriculture-dominated watershed located in Lower Michigan to find the optimal allocation of filter strips and grassed waterways. For targeting, three different criteria for sediment yield minimization were investigated; in the process it was found that grassed waterways near the watershed outlet reduced the watershed-outlet sediment yield the most under the conditions of this study. Cost minimization was also included as a second objective during the cost-effective BMP allocation selection. NSGA-II was used to find the optimal BMP allocation for both sediment yield reduction and cost minimization. By comparing the results and computational time of both methodologies, targeting was determined to be the better method for finding the optimal cost-effective BMP allocation under the conditions of this study, since it provided more than 13 times as many solutions with better fitness for the objective functions while using less than one eighth of the SWAT computational time required by NSGA-II with 150 generations.
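A hedged sketch of the targeting idea only: rank candidate placements by estimated sediment reduction per dollar and place BMPs until the budget is exhausted. The data structure and the numbers in the example are hypothetical; in the study the reductions come from SWAT runs, and the actual targeting criteria differ in detail.

```python
def target_bmps(candidates, budget):
    """candidates: list of (site, bmp, reduction_tons, cost_usd) tuples."""
    chosen, spent, reduced = [], 0.0, 0.0
    # Most cost-effective placements first (lowest cost per ton removed).
    for site, bmp, red, cost in sorted(
            candidates, key=lambda c: c[3] / max(c[2], 1e-9)):
        if spent + cost <= budget:
            chosen.append((site, bmp))
            spent += cost
            reduced += red
    return chosen, spent, reduced

# Hypothetical example: two filter strips and one grassed waterway.
plan, cost, tons = target_bmps(
    [("HRU12", "filter_strip", 4.2, 1800.0),
     ("HRU07", "grassed_waterway", 9.5, 2600.0),
     ("HRU03", "filter_strip", 1.1, 1500.0)],
    budget=4500.0)
print(plan, cost, tons)   # picks the waterway first, then one strip
```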
Abstract:
This thesis studies the minimization of the fuel consumption of a Hybrid Electric Vehicle (HEV) using Model Predictive Control (MPC). The presented MPC-based controller calculates an optimal sequence of control inputs to a hybrid vehicle using the measured plant outputs, the current dynamic states, a system model, system constraints, and an optimization cost function. The MPC controller is developed using the MATLAB MPC control toolbox. To evaluate the performance of the presented controller, a power-split hybrid vehicle, the 2004 Toyota Prius, is selected. The vehicle uses a planetary gear set to combine three power components, an engine, a motor, and a generator, and to transfer energy from these components to the vehicle wheels. The planetary gear model is developed based on the Willis formula. The dynamic models of the engine, the motor, and the generator are derived from their dynamics at the planetary gear. The MPC controller for HEV energy management is validated in the MATLAB/Simulink environment. Both the step response performance (a 0-60 mph step input) and the driving cycle tracking performance are evaluated. Two standard driving cycles, the Urban Dynamometer Driving Schedule (UDDS) and the Highway Fuel Economy Driving Schedule (HWFET), are used in the evaluation tests. For the UDDS and HWFET driving cycles, the simulation results (fuel consumption and battery state of charge) obtained with the MPC controller are compared with the simulation results of the original vehicle model in Autonomie. The MPC approach demonstrates the feasibility of improving vehicle performance and minimizing fuel consumption.
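For orientation, a minimal receding-horizon MPC loop can be sketched as below. This is a generic stand-in using a hypothetical first-order speed model, not the Prius power-split model, and it uses cvxpy in place of the MATLAB toolbox named above; the weights and limits are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

def mpc_step(A, B, x0, x_ref, horizon=10, u_max=3.0):
    """Solve one horizon and return the first optimal input (receding horizon)."""
    n, m = B.shape
    x = cp.Variable((n, horizon + 1))
    u = cp.Variable((m, horizon))
    cost, constraints = 0, [x[:, 0] == x0]
    for t in range(horizon):
        # Quadratic tracking cost plus a small input-effort penalty.
        cost += cp.sum_squares(x[:, t + 1] - x_ref) + 0.1 * cp.sum_squares(u[:, t])
        # Linear system model and input constraints.
        constraints += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t],
                        cp.abs(u[:, t]) <= u_max]
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return u.value[:, 0]

# Hypothetical example: track 26.8 m/s (about 60 mph) from standstill, with
# speed evolving as v[t+1] = v[t] + 0.1 * u[t] (u: acceleration command).
A = np.array([[1.0]])
B = np.array([[0.1]])
print(mpc_step(A, B, x0=np.zeros(1), x_ref=np.array([26.8])))
```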
Abstract:
PMR-15 polyimide is a polymer used as a matrix in composites. Composites with PMR-15 matrices are called advanced polymer matrix composites and are used abundantly in the aerospace and electronics industries because of their high-temperature resistance. Apart from withstanding high temperatures, PMR-15 composites also display good thermal-oxidative stability, mechanical properties, processability, and low cost, which makes them suitable materials for manufacturing aircraft structures. PMR-15 crosslinks via the reverse Diels-Alder (RDA) reaction, which underlies its distinctive thermal stability and a use temperature in the range of 280-300 degrees Celsius. Despite these desirable properties, the material has a number of limitations that compromise its large-scale application. PMR-15 composites are known to be very vulnerable to inter- and intra-laminar micro-cracking. But the major factor that hinders its adoption is PMR-15's carcinogenic constituent, methylene dianiline (MDA), which is also a liver toxin. The necessity of providing a safe working environment during its production adds to the cost of this material. In this study, Molecular Dynamics and Energy Minimization techniques are utilized to simulate a structure of PMR-15 at a density of 1.324 g/cc, in an attempt to recreate the polyimide in silico and thereby reduce the amount of experimental testing, and hence the health hazards and cost involved in its production. Although this study does not validate any mechanical properties of the model, the model could be used in the future to validate such properties and to test others, such as aging, micro-cracking, and creep.
Abstract:
We present an algorithm for estimating dense image correspondences. Our versatile approach lends itself to various tasks typical of video post-processing, including image morphing, optical flow estimation, stereo rectification, disparity/depth reconstruction, and baseline adjustment. We incorporate recent advances in feature matching, energy minimization, stereo vision, and data clustering into our approach. At the core of our correspondence estimation we use Efficient Belief Propagation for energy minimization. While state-of-the-art algorithms only work on thumbnail-sized images, our novel feature downsampling scheme, in combination with a simple yet efficient data-term compression, can cope with high-resolution data. The incorporation of SIFT (Scale-Invariant Feature Transform) features into the data-term computation further resolves matching ambiguities, making long-range correspondence estimation possible. We detect occluded areas by evaluating the correspondence symmetry, and we further apply Geodesic matting to automatically determine plausible values in these regions.
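The symmetry test can be illustrated with a standard forward-backward consistency check (a hedged sketch; the exact criterion and threshold of the paper are not given in the abstract):

```python
import numpy as np

def occlusion_mask(flow_fwd, flow_bwd, tol=1.0):
    """flow_fwd, flow_bwd: (H, W, 2) pixel displacements between two images.
    A pixel is flagged as occluded when following the forward flow and then
    the backward flow does not return (approximately) to the start point."""
    h, w = flow_fwd.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Where each pixel lands under the forward flow (nearest-neighbor lookup;
    # bilinear sampling would be the more careful choice).
    xt = np.clip(np.rint(xs + flow_fwd[..., 0]), 0, w - 1).astype(int)
    yt = np.clip(np.rint(ys + flow_fwd[..., 1]), 0, h - 1).astype(int)
    round_trip = flow_fwd + flow_bwd[yt, xt]   # approximately 0 if consistent
    return np.linalg.norm(round_trip, axis=-1) > tol
```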