941 results for Biofertilizer and optimization
Abstract:
The ever-increasing demand from users who want high-quality broadband services while on the move is straining the efficiency of current spectrum allocation paradigms, leading to an overall perception of spectrum scarcity. To circumvent this problem, two possible solutions are being investigated: (i) deploying new technologies, such as Cognitive Radios, capable of accessing temporarily or locally unused bands without interfering with licensed services; (ii) releasing spectrum bands freed by new services with higher spectral efficiency, e.g., DVB-T, and allocating them to new wireless systems. Both approaches are promising, but they also pose novel coexistence and interference management challenges. In particular, the deployment of devices such as Cognitive Radios, characterized by the inherently unplanned, irregular and random locations of the network nodes, requires advanced mathematical techniques to explicitly model their spatial distribution. In this context, system performance and optimization depend strongly on this spatial configuration. On the other hand, allocating released spectrum bands to other wireless services poses severe coexistence issues with all the pre-existing services on the same or adjacent spectrum bands. In this thesis, these methodologies for better spectrum usage are investigated. In particular, using Stochastic Geometry theory, a novel mathematical framework is introduced for cognitive networks, providing a closed-form expression for the coverage probability and single-integral forms for the average downlink rate and the Average Symbol Error Probability. Then, focusing on regulatory aspects, interference challenges between DVB-T and LTE systems are analysed and a versatile methodology for their proper coexistence is proposed. Moreover, the studies performed within the CEPT SE43 working group on the amount of spectrum potentially available to Cognitive Radios are reported, together with an analysis of the Hidden Node problem. Finally, a study on the extension of cognitive technologies to Hybrid Satellite-Terrestrial Systems is proposed.
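For context only, and not as the thesis's own framework (whose modelling assumptions are not stated in this abstract), a representative closed-form coverage result from the stochastic geometry literature, for transmitters forming a homogeneous Poisson point process with nearest-transmitter association, Rayleigh fading and interference-limited operation, reads:

```latex
% Illustrative baseline only; path-loss exponent \alpha, SIR threshold \theta.
\[
  P_c(\theta) = \frac{1}{1+\rho(\theta,\alpha)},
  \qquad
  \rho(\theta,\alpha) = \theta^{2/\alpha}
    \int_{\theta^{-2/\alpha}}^{\infty} \frac{\mathrm{d}u}{1 + u^{\alpha/2}},
\]
% which for \alpha = 4 reduces to
\[
  P_c(\theta) = \Bigl[\, 1 + \sqrt{\theta}\,\bigl(\tfrac{\pi}{2} - \arctan\tfrac{1}{\sqrt{\theta}}\bigr) \Bigr]^{-1}.
\]
```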
Abstract:
Here, we present the adaptation and optimization of (i) the solvothermal and (ii) the metal-organic chemical vapor deposition (MOCVD) approach as simple methods for the high-yield synthesis of MQ2 (M = Mo, W, Zr; Q = O, S) nanoparticles. Extensive characterization was carried out using X-ray diffraction (XRD), scanning and transmission electron microscopy (SEM/TEM) combined with energy-dispersive X-ray analysis (EDXA), Raman spectroscopy, thermal analyses (DTA/TG), small-angle X-ray scattering (SAXS) and BET measurements. After a general introduction to the state of the art, a simple route to nanostructured MoS2 based on the decomposition of the cluster-based precursor (NH4)2Mo3S13·xH2O under solvothermal conditions (toluene, 653 K) is presented. Solvothermal decomposition results in a nanostructured material that is distinct from the material obtained by decomposition of the same precursor in sealed quartz tubes at the same temperature. When carried out in the presence of the surfactant cetyltrimethylammonium bromide (CTAB), the decomposition product exhibits highly disordered MoS2 lamellae with high surface areas. The synthesis of WS2 onion-like nanoparticles by means of a single-step MOCVD process is discussed. Furthermore, the results of the successful transfer of the two-step MOCVD-based synthesis of MoQ2 nanoparticles (Q = S, Se), comprising the formation of amorphous precursor particles followed by the formation of fullerene-like particles in a subsequent annealing step, to the W-S system are presented. Based on a study of the temperature dependence of the reactions, a set of conditions for the formation of onion-like structures in a one-step reaction could be derived. The MOCVD approach allows a selective synthesis of open and filled fullerene-like chalcogenide nanoparticles. An in situ heating-stage transmission electron microscopy (TEM) study was employed to comparatively investigate the growth mechanism of MoS2 and WS2 nanoparticles obtained from MOCVD upon annealing. Round, mainly amorphous particles in the pristine sample transform into hollow onion-like particles upon annealing. A significant difference between the two compounds could be demonstrated in their crystallization behaviour. Finally, the results of the in situ heating experiments are compared to those obtained from an ex situ annealing process under Ar. Eventually, a low-temperature synthesis of monodisperse ZrO2 nanoparticles with diameters of ~8 nm is introduced. Whereas the solvent could be omitted, synthesis in an autoclave is crucial for obtaining nano-sized (n) ZrO2 by thermal decomposition of Zr(C2O4)2. The n-ZrO2 particles exhibit high specific surface areas (up to 385 m2/g), which make them promising candidates as catalysts and catalyst supports. The coexistence of m- and t-ZrO2 nanoparticles of 6-9 nm in diameter, i.e. above the critical particle size of 6 nm, demonstrates that particle size is not the only factor stabilizing the t-ZrO2 modification at room temperature. In conclusion, synthesis within an autoclave (with and without solvent) and the MOCVD process could be successfully adapted to the synthesis of MoS2, WS2 and ZrO2 nanoparticles. A comparative in situ heating-stage TEM study elucidated the growth mechanism of MoS2 and WS2 fullerene-like particles. As the general processes are similar, a transfer of this synthesis approach to other layered transition metal chalcogenide systems is to be expected.
Application of the obtained nanomaterials as lubricants (MoS2, WS2) or as dental filling materials (ZrO2) is currently under investigation.
Abstract:
A study of the pyrolysis and oxidation (equivalence ratios φ = 0.5, 1 and 2) of methane and of methyl formate (φ = 0.5) in a laboratory flow reactor (length = 50 cm, inner diameter = 2.5 cm) has been carried out at 1-4 atm over the 300-1300 K temperature range. Exhaust gaseous species analysis was performed with a gas chromatographic system, a Varian CP-4900 PRO Micro-GC with a TCD detector, using helium as carrier gas for a Molecular Sieve 5Å column and nitrogen for a COX column, both operated at 65 °C and 150 kPa. Model simulations using the NTUA [1], Fisher et al. [12], Grana [13] and Dooley [14] kinetic mechanisms have been performed with CHEMKIN. The work provides a basis for further development and optimization of existing detailed chemical kinetic schemes.
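For orientation only, a minimal zero-dimensional sketch of this kind of simulation is given below; it uses the open-source Cantera library with its bundled GRI-Mech 3.0 (methane chemistry only) rather than CHEMKIN and the mechanisms cited above, and the conditions are merely representative of the quoted ranges.

```python
# Sketch: a fuel-lean methane/air parcel at 4 atm and 1100 K, advanced in time as an
# adiabatic constant-pressure reactor, used as a zero-dimensional surrogate for a
# parcel travelling along the flow reactor. Illustrative only; not the setup above.
import cantera as ct
import numpy as np

gas = ct.Solution("gri30.yaml")                        # GRI-Mech 3.0 (CH4 chemistry)
gas.set_equivalence_ratio(0.5, fuel="CH4", oxidizer="O2:1.0, N2:3.76")
gas.TP = 1100.0, 4.0 * ct.one_atm

reactor = ct.IdealGasConstPressureReactor(gas)
net = ct.ReactorNet([reactor])

i_ch4, i_co = gas.species_index("CH4"), gas.species_index("CO")
for t in np.linspace(0.1, 2.0, 20):                    # residence times up to 2 s
    net.advance(t)
    print(f"t = {t:4.1f} s  T = {reactor.T:7.1f} K  "
          f"X(CH4) = {reactor.thermo.X[i_ch4]:.3e}  X(CO) = {reactor.thermo.X[i_co]:.3e}")
```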
Abstract:
Modeling of tumor growth has been performed according to various approaches addressing different biocomplexity levels and spatiotemporal scales. Mathematical treatments range from diffusion models based on partial differential equations to rule-based cellular-level simulators, aiming both at improving our quantitative understanding of the underlying biological processes and, in the mid and long term, at constructing reliable multi-scale predictive platforms to support patient-individualized treatment planning and optimization. The aim of this paper is to establish a multi-scale and multi-physics approach to tumor modeling that takes into account both the cellular and the macroscopic mechanical level. To this end, an already developed biomodel of clinical tumor growth and response to treatment is self-consistently coupled with a biomechanical model. Results are presented for the free-growth case of the imageable component of an initially point-like glioblastoma multiforme tumor. The composite model leads to significant tumor shape corrections that are achieved through the utilization of environmental pressure information and the application of biomechanical principles. Using the ratio of smallest to largest moment of inertia of the tumor material to quantify the effect of our coupled approach, we have found a tumor shape correction of 20% when coupling biomechanics to the cellular simulator, as compared to a cellular simulation without preferred growth directions. We conclude that the integration of the two models provides additional morphological insight into realistic tumor growth behavior. It might therefore be used for the development of an advanced oncosimulator focusing on tumor types for which morphology plays an important role in surgical and/or radiotherapeutic treatment planning.
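As a small illustration of the shape metric mentioned above, the sketch below computes the ratio of smallest to largest principal moment of inertia from a voxelized tumour mask; the mask, voxel spacing and ellipsoidal example are hypothetical and are not the authors' data or code.

```python
# Ratio of smallest to largest principal moment of inertia of a voxelized tumor mask.
# Hypothetical inputs: `mask` is a 3-D boolean array, `spacing` the voxel size in mm.
import numpy as np

def inertia_ratio(mask: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    coords = np.argwhere(mask) * np.asarray(spacing)   # voxel centres in mm
    r = coords - coords.mean(axis=0)                   # put centre of mass at origin
    # Inertia tensor for unit-mass voxels: I = sum(|r|^2 * Id - outer(r, r))
    sq = (r ** 2).sum(axis=1)
    inertia = np.eye(3) * sq.sum() - r.T @ r
    eigvals = np.linalg.eigvalsh(inertia)              # principal moments, ascending
    return eigvals[0] / eigvals[-1]                    # 1.0 for a perfect sphere

# Example: an elongated ellipsoidal mask gives a ratio well below 1.
z, y, x = np.ogrid[-20:21, -20:21, -20:21]
ellipsoid = (x / 20.0) ** 2 + (y / 12.0) ** 2 + (z / 12.0) ** 2 <= 1.0
print(f"inertia ratio: {inertia_ratio(ellipsoid):.3f}")
```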
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate the hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test-cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model-training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh-air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, and uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
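The abstract does not detail the delay-compensation procedure; purely as an illustration of the general idea, the sketch below estimates a transport delay between two transient traces by cross-correlation and re-aligns them before model training. The signal names, the 10 Hz sample rate and the synthetic traces are hypothetical.

```python
# Sketch: estimate and remove a transport delay between two transient traces by
# maximizing their cross-correlation. Hypothetical signals sampled at 10 Hz.
import numpy as np

def estimate_lag(reference: np.ndarray, delayed: np.ndarray) -> int:
    """Return the lag (in samples) by which `delayed` trails `reference`."""
    a = reference - reference.mean()
    b = delayed - delayed.mean()
    xcorr = np.correlate(b, a, mode="full")            # index len(a)-1 <=> zero lag
    return int(np.argmax(xcorr) - (len(a) - 1))

fs = 10.0                                              # Hz, hypothetical sample rate
t = np.arange(0.0, 60.0, 1.0 / fs)
fuel_cmd = np.clip(np.sin(0.2 * t), 0.0, None)         # stand-in for a load-step trace
true_delay = 23                                        # samples (2.3 s transport delay)
opacity = np.roll(fuel_cmd, true_delay) + 0.02 * np.random.randn(t.size)

lag = estimate_lag(fuel_cmd, opacity)
opacity_aligned = np.roll(opacity, -lag)               # shift back before model training
print(f"estimated transport delay: {lag / fs:.1f} s")
```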
Abstract:
Compiler optimizations help make code run faster at runtime. When compilation is done before the program is run, compilation time is less of an issue, but how do on-the-fly compilation and optimization affect the overall runtime? If the compiler must compete with the running application for resources, the running application will take more time to complete. This paper investigates the impact of specific compiler optimizations on the overall runtime of an application. A foldover Plackett-Burman design is used to choose compiler optimizations that appear to contribute to shorter overall runtimes. These selected optimizations are compared with the default optimization levels in the Jikes RVM. This method selects optimizations that result in a shorter overall runtime than the default O0, O1, and O2 levels, showing that careful selection of compiler optimizations can have a significant, positive impact on overall runtime.
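As a hedged sketch of the screening design itself (not the paper's actual experimental setup, optimization list or Jikes RVM tooling), a 12-run Plackett-Burman design for up to 11 two-level factors can be built from its standard generator row and folded over by appending the sign-reversed runs:

```python
# Sketch: foldover Plackett-Burman screening design for on/off compiler optimizations.
# +1 means "enable the optimization", -1 means "disable it". Placeholder flag names.
import numpy as np

# Standard generator row of the 12-run Plackett-Burman design (11 two-level factors).
generator = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])

rows = [np.roll(generator, k) for k in range(11)]      # 11 cyclic shifts
rows.append(-np.ones(11, dtype=int))                   # plus the all-minus run
pb12 = np.array(rows)                                  # 12 x 11 design matrix

foldover = np.vstack([pb12, -pb12])                    # 24 x 11: de-aliases main effects

flags = [f"opt_{i}" for i in range(11)]                # placeholders, not real flag names
for run, settings in enumerate(foldover, start=1):
    enabled = [f for f, s in zip(flags, settings) if s > 0]
    print(f"run {run:2d}: enable {enabled}")
```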
Abstract:
A Micro Combined Heat and Power (Micro-CHP) system produces both the electricity and the heat required for residential or small-business applications. Use of Micro-CHP in a residential application not only creates energy and economic savings but also reduces the carbon footprint of the house or small business. Additionally, a micro-CHP unit can subsidize its cost of operation by selling excess electricity back to the grid. Even though Micro-CHP remains attractive on paper, high initial cost and optimization issues with residential-scale heat and electrical requirements have kept this technology from becoming a success. To understand and overcome the disadvantages posed by Micro-CHP systems, a laboratory has been developed to test different scenarios of Micro-CHP applications so that the current technology can be studied and improved. This report focuses on the development of this Micro-CHP laboratory, including installation of the Ecopower micro-CHP unit, development of the fuel and exhaust lines for the Ecopower unit, design of the electrical and thermal loops, installation of all the instrumentation required for data collection on the Ecopower unit, and development of controls for heat-load simulation using the thermal loop. A simulation of the Micro-CHP unit running on syngas has also been carried out in Matlab. This work was supported through the donation of an 'Ecopower' Micro-CHP unit by Marathon Engine and through the support of a Michigan Tech REF-IF grant.
Abstract:
An extrusion die is used to continuously produce parts with a constant cross section, such as sheets, pipes, tire components and more complex shapes such as window seals. When polymers are used, the die is fed by a screw extruder. The extruder melts, mixes and pressurizes the material through the rotation of either a single or a twin screw. The polymer can then be continuously forced through the die, producing a long part in the shape of the die outlet. The extruded section is then cut to the desired length. Generally, the primary target of a well-designed die is to produce a uniform outlet velocity without excessively raising the pressure required to extrude the polymer through the die. Other properties such as temperature uniformity and residence time are also important but are not directly considered in this work. Designing dies for optimal outlet velocity variation using simple analytical equations is feasible for basic die geometries or simple channels. Due to the complexity of die geometry and of polymer material properties, the design of complex dies by analytical methods is difficult; for complex dies, iterative methods must be used, and an automated iterative method is desired for die optimization. To automate the design and optimization of an extrusion die, two issues must be dealt with. The first is how to generate a new mesh for each iteration. In this work, this is approached by modifying a Parasolid file that describes a CAD part; this file is then used in commercial meshing software. Skewing the initial mesh to produce a new geometry was also employed as a second option. The second issue is an optimization problem in the presence of noise stemming from variations in the mesh and cumulative truncation errors. In this work, a simplex method and a modified trust-region method were employed for the automated optimization of die geometries. For the trust-region method, a discrete derivative and a BFGS Hessian approximation were used. To deal with the noise in the objective function, the trust-region method was modified to automatically adjust the discrete-derivative step size and the trust region based on changes in noise and function contour. Generally, the uniformity of velocity at the exit of the extrusion die can be improved by increasing the resistance across the die, but this is limited by the pressure capabilities of the extruder. In the optimization, a penalty factor that increases exponentially from the pressure limit is applied. This penalty can be applied in two different ways: the first applies it only to designs that exceed the pressure limit, the second to designs both above and below the pressure limit. Both of these methods were tested and compared in this work.
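A minimal sketch of how such an exponential pressure penalty can be attached to a velocity-uniformity objective is given below; the functional form, weighting and parameter names are illustrative assumptions, not the formulation used in this work.

```python
# Sketch: velocity-uniformity objective with an exponential penalty on die pressure.
# Hypothetical quantities: `velocity_cv` is the coefficient of variation of outlet
# velocity, `pressure` the predicted pressure drop, `p_limit` the extruder limit.
import math

def penalized_objective(velocity_cv: float, pressure: float, p_limit: float,
                        weight: float = 1.0, one_sided: bool = True) -> float:
    """Objective = velocity non-uniformity + exponential pressure penalty."""
    excess = (pressure - p_limit) / p_limit            # relative distance from the limit
    if one_sided:
        # Penalize only designs that exceed the pressure limit.
        penalty = weight * (math.exp(excess) - 1.0) if excess > 0.0 else 0.0
    else:
        # Penalize all designs: small below the limit, grows exponentially above it.
        penalty = weight * math.exp(excess)
    return velocity_cv + penalty

# Example: two candidate die designs evaluated against a 30 MPa extruder limit.
print(penalized_objective(velocity_cv=0.08, pressure=25e6, p_limit=30e6))
print(penalized_objective(velocity_cv=0.05, pressure=34e6, p_limit=30e6))
```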
Abstract:
ONTOLOGIES AND METHODS FOR INTEROPERABILITY OF ENGINEERING ANALYSIS MODELS (EAMS) IN AN E-DESIGN ENVIRONMENT. September 2007. Neelima Kanuri, B.S., Birla Institute of Technology and Science, Pilani, India; M.S., University of Massachusetts Amherst. Directed by: Professor Ian Grosse. Interoperability is the ability of two or more systems to exchange and reuse information efficiently. This thesis presents new techniques for interoperating engineering tools using ontologies as the basis for representing, visualizing, reasoning about, and securely exchanging abstract engineering knowledge between software systems. The specific engineering domain that is the primary focus of this report is the modeling knowledge associated with the development of engineering analysis models (EAMs). This abstract modeling knowledge has been used to support the integration of analysis and optimization tools in iSIGHT-FD, a commercial engineering environment. ANSYS, a commercial FEA tool, has been wrapped as an analysis service available inside iSIGHT-FD. An engineering analysis modeling (EAM) ontology has been developed and instantiated to form a knowledge base for representing analysis modeling knowledge. The instances of the knowledge base are the analysis models of real-world applications. To illustrate how abstract modeling knowledge can be exploited for useful purposes, a cantilever I-beam design optimization problem has been used as a test-bed proof-of-concept application. Two distinct finite element models of the I-beam are available to analyze a given beam design: a beam-element finite element model with potentially lower accuracy but significantly reduced computational cost, and a high-fidelity but high-cost shell-element finite element model. The goal is to obtain an optimized I-beam design at minimum computational expense. An intelligent knowledge-based (KB) tool was developed and implemented in FiPER. This tool reasons about the modeling knowledge to intelligently shift between the beam and the shell element models during an optimization process, selecting the best analysis model for a given optimization design state. In addition to improved interoperability and design optimization, methods are developed and presented that demonstrate the ability to operate on ontological knowledge bases to perform important engineering tasks. One such method is an automatic technical report generation method, which converts the modeling knowledge associated with an analysis model into a flat technical report. The second is a secure knowledge-sharing method, which allocates permissions to portions of knowledge in order to control knowledge access and sharing. Acting together, both methods enable recipient-specific, fine-grained control of knowledge viewing and sharing in an engineering workflow integration environment such as iSIGHT-FD. Together, these methods play an efficient role in reducing the large-scale inefficiencies in current product design and development cycles caused by poor knowledge sharing and reuse between people and software engineering tools. This work is a significant advance in both the understanding and the application of knowledge integration in a distributed engineering design framework.
Abstract:
Several strategies relying on kriging have recently been proposed for adaptively estimating contour lines and excursion sets of functions under a severely limited evaluation budget. The recently released version 3 of the R package KrigInv is presented; it offers a sound implementation of various sampling criteria for these kinds of inverse problems. KrigInv is based on the DiceKriging package and thus benefits from a number of options concerning the underlying kriging models. Six implemented sampling criteria are detailed in a tutorial and illustrated with graphical examples, and the different functionalities of KrigInv are explained step by step. Additionally, two recently proposed criteria for batch-sequential inversion are presented, enabling advanced users to distribute function evaluations in parallel on clusters or clouds of machines. Finally, auxiliary problems are discussed, including the fine-tuning of the numerical integration and optimization procedures used when computing and optimizing the considered criteria.
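KrigInv itself is an R package built on DiceKriging; purely as a language-agnostic illustration of the simplest criterion in this family (the pointwise probability of misclassifying the excursion set under the current kriging model), the sketch below uses scikit-learn's Gaussian process in place of KrigInv, with a toy function and threshold.

```python
# Sketch: one step of adaptive sampling for excursion-set estimation {x : f(x) > T}.
# The next evaluation is placed where the GP model is most likely to misclassify the
# sign of f(x) - T. scikit-learn stands in for DiceKriging/KrigInv; toy 1-D problem.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):                                    # expensive black-box (toy example here)
    return np.sin(3.0 * x) + 0.5 * x

threshold = 0.8
X = np.array([[0.1], [0.9], [1.7], [2.5]])   # initial design
y = f(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X, y)

candidates = np.linspace(0.0, 3.0, 301).reshape(-1, 1)
mean, std = gp.predict(candidates, return_std=True)

# Probability of misclassification: highest where P(f > T) is closest to 1/2.
p_excursion = norm.cdf((mean - threshold) / np.maximum(std, 1e-12))
misclass = np.minimum(p_excursion, 1.0 - p_excursion)

x_next = candidates[np.argmax(misclass)]
print(f"next evaluation point: x = {x_next[0]:.3f}")
```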
Abstract:
XENON is a dark matter direct detection project, consisting of a time projection chamber (TPC) filled with liquid xenon as the detection medium. The construction of the next-generation detector, XENON1T, is presently taking place at the Laboratori Nazionali del Gran Sasso (LNGS) in Italy. It aims at a sensitivity to spin-independent cross sections of 2 × 10⁻⁴⁷ cm² for WIMP masses around 50 GeV/c², which requires a background reduction by two orders of magnitude compared to XENON100, the current-generation detector. An active system that is able to tag muons and muon-induced backgrounds is critical for this goal. A water Cherenkov detector of ~10 m height and diameter has therefore been developed, equipped with 8-inch photomultipliers and clad with a reflective foil. We present the design and optimization study for this detector, which has been carried out with a series of Monte Carlo simulations. The muon veto will reach very high detection efficiencies for muons (>99.5%) and for showers of secondary particles from muon interactions in the rock (>70%). Similar efficiencies will be obtained for XENONnT, the upgrade of XENON1T, which will later improve the WIMP sensitivity by another order of magnitude. With the Cherenkov water shield studied here, the background from muon-induced neutrons in XENON1T is negligible.
Abstract:
OBJECTIVE: The improvement in diagnostic accuracy and the optimization of treatment planning in periodontology through the use of three-dimensional imaging with cone beam computed tomography (CBCT) are discussed controversially in the literature. The objective was to identify the best available external evidence for the indications of CBCT for periodontal diagnosis and treatment planning in specific clinical situations. DATA SOURCES: A systematic literature search was performed for articles published by 2 March 2015 using electronic databases and hand search. Two reviewers performed the study selection, data collection, and validity assessment. PICO and PRISMA criteria were applied. From the combined search, seven studies were finally included. CONCLUSION: The included case series were published between 2009 and 2014. Five of the included publications refer to maxillary and/or mandibular molars and two to aspects related to vertical bony defects. Two studies show a high accuracy of CBCT in detecting intrabony defect morphology when compared to periapical radiographs. Particularly in maxillary molars, CBCT provides high accuracy for detecting furcation involvement and the morphology of the surrounding periodontal tissues. CBCT has demonstrated advantages in terms of decision making and cost-benefit when more invasive treatment approaches were considered. Within their limits, the available data suggest that CBCT may improve diagnostic accuracy and optimize treatment planning in periodontal defects, particularly in maxillary molars with furcation involvement, and that the higher radiation doses and the cost-benefit ratio should be carefully weighed before using CBCT for periodontal diagnosis and treatment planning.
Abstract:
During the selection, implementation and stabilization phases, as well as the operations and optimization phase of an ERP system (the ERP lifecycle), numerous companies consider utilizing the support of an external service provider. This paper analyses how different categories of knowledge influence the sourcing decision for crucial tasks within the ERP lifecycle. Based on a review of the IS outsourcing literature, essential knowledge-related determinants of the IS outsourcing decision are presented and aggregated in a structural model. It is hypothesized that internal deficits in technological knowledge compared to external vendors, as well as the specificity of the synthesis of specialized technological and specific business knowledge, have a profound impact on the outsourcing decision. A classification framework is then developed which facilitates the assignment of the various tasks within the ERP lifecycle to their respective knowledge categories and knowledge carriers, which may be internal or external stakeholders. The configuration task is used as an example to illustrate how the structural model and the classification framework may be applied to evaluate the outsourcing of tasks within the ERP lifecycle.