930 results for "Balancing and Optimization of lines"
Abstract:
Numerical modeling of cascade erbium-doped and holmium-doped fluoride fiber lasers is presented. Fiber lengths were optimized for cascade lasers that had fixed or free-running wavelengths using all known spectroscopic parameters. The performance of the cascade laser was tested against dopant concentration, energy transfer processes, heat generation, output coupling, and pump scheme. The results suggest that the slope efficiencies and thresholds for both transitions increase with increasing Ho3+ or Er3+ concentration, with the slope efficiency stabilizing beyond 1 mol% rare-earth doping. Heat generation in the Ho3+-based system is lower than in the Er3+-based system at low dopant concentration as a result of the lower rates of multiphonon relaxation. Decreasing the output coupling for the upper (∼3 μm) transition decreases the threshold of the lower transition, and the upper transition benefits from decreasing the output coupling for the lower transition in both cascade systems. The highest slope efficiency was achieved under counter-propagating pump conditions. Saturation of the output power occurs at comparatively higher pump power with dilute Er3+ doping compared with heavier doping. Overall, we show that the cascade Ho3+-doped fluoride laser is the best candidate for high-power output because of its higher slope efficiency, lower temperature excursion of the core, and absence of output saturation. © 2013 IEEE.
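For orientation, numerical models of this kind typically couple population rate equations for the two cascade transitions with a pump term; the following is only a minimal generic sketch with simplified level structure and placeholder symbols, not the authors' exact equations:

```latex
\frac{dN_2}{dt} = R_p - \frac{N_2}{\tau_2} - W_u\,(N_2 - N_1),
\qquad
\frac{dN_1}{dt} = \beta_{21}\frac{N_2}{\tau_2} + W_u\,(N_2 - N_1) - \frac{N_1}{\tau_1} - W_l\,(N_1 - N_0),
```

where $R_p$ is the pump rate, $\tau_i$ the level lifetimes, $\beta_{21}$ a branching ratio, and $W_u$, $W_l$ the stimulated-emission rates on the upper (∼3 μm) and lower cascade transitions. The output power of each transition then follows approximately $P_{\mathrm{out}} \approx \eta_s\,(P_p - P_{\mathrm{th}})$, which is how slope efficiencies $\eta_s$ and thresholds $P_{\mathrm{th}}$ such as those quoted above are extracted.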
Abstract:
An effective aperture approach is used as a tool for analysis and parameter optimization of the most widely used ultrasound imaging systems: phased array systems, compounding systems, and synthetic aperture imaging systems. Both characteristics of an imaging system, the effective aperture function and the corresponding two-way radiation pattern, provide information about two of the most important parameters of images produced by an ultrasound system: lateral resolution and contrast. Therefore, during design, optimization of the effective aperture function leads to an optimal choice of the imaging-system parameters that influence the lateral resolution and contrast of the images produced by that system. It is shown that the effective aperture approach can be used for optimization of a sparse synthetic transmit aperture (STA) imaging system. A new two-stage algorithm is proposed for optimizing both the positions of the transmit elements and the weights of the receive elements. The proposed system employs a 64-element array with only four active elements used during transmit. The numerical results show that Hamming apodization gives the best compromise between image contrast and lateral resolution.
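As a rough illustration of the effective-aperture idea (the element positions and weights below are placeholders, not the optimized configuration from the paper), the effective aperture is the convolution of the transmit and receive aperture functions, and the two-way radiation pattern follows from its Fourier transform:

```python
import numpy as np

n = 64
tx = np.zeros(n)
tx[[0, 21, 42, 63]] = 1.0          # 4 active transmit elements (hypothetical positions)
rx = np.hamming(n)                 # Hamming apodization on receive

# Effective aperture = convolution of transmit and receive aperture functions
eff = np.convolve(tx, rx)

# Two-way (pulse-echo) radiation pattern ~ Fourier transform of the effective aperture
pattern = np.fft.fftshift(np.fft.fft(eff, 4096))
pattern_db = 20 * np.log10(np.abs(pattern) / np.abs(pattern).max())
```

Sparse-array optimization then amounts to choosing the transmit element positions and receive weights so that this two-way pattern has a narrow main lobe (lateral resolution) and low side lobes (contrast).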
Abstract:
This article considers a set of questions connected with the analysis, estimation, and structural-parametric optimization of dynamic systems. The connection of such problems with the control of beams of trajectories is emphasized. Special attention is given to a review and analysis of the research carried out, stressing its constructiveness and applied orientation. The efficiency of the developed algorithms and software is demonstrated on problems of modeling and optimizing output beam characteristics in linear resonance accelerators.
Abstract:
Today, over 15,000 Ion Mobility Spectrometry (IMS) analyzers are employed at worldwide security checkpoints to detect explosives and illicit drugs. Current portal IMS instruments and other electronic nose technologies detect explosives and drugs by analyzing samples containing the headspace air and loose particles residing on a surface. Canines can outperform these systems at sampling and detecting low-vapor-pressure explosives and drugs, such as RDX, PETN, cocaine, and MDMA, because these biological detectors target the volatile signature compounds available in the headspace rather than the non-volatile parent compounds of explosives and drugs. In this dissertation research, volatile signature compounds available in the headspace over explosive and drug samples were detected using SPME as a headspace sampling tool coupled to an IMS analyzer. A Genetic Algorithm (GA) technique was developed to optimize the operating conditions of a commercial IMS (GE Itemizer 2), leading to the successful detection of plastic explosives (Detasheet, Semtex H, and C-4) and illicit drugs (cocaine, MDMA, and marijuana). Short sampling times (between 10 sec and 5 min) were adequate to extract and preconcentrate sufficient analytes (> 20 ng) representing the volatile signatures in the headspace of a 15 mL glass vial or a quart-sized can containing ≤ 1 g of the bulk explosive or drug. Furthermore, a research-grade IMS with flexibility for changing operating conditions and physical configurations was designed and fabricated to accommodate future research into different analytes or physical configurations. The design and construction of the FIU-IMS were facilitated by computer modeling and simulation of ion behavior within an IMS. The simulation method developed uses SIMION/SDS and was evaluated with experimental data collected using a commercial IMS (PCP Phemto Chem 110). The FIU-IMS instrument has performance comparable to the GE Itemizer 2 (average resolving power of 14, resolution of 3 between two drugs and two explosives, and LODs ranging from 0.7 to 9 ng). The results from this dissertation further advance the concept of targeting volatile components to presumptively detect the presence of concealed bulk explosives and drugs by SPME-IMS, and the new FIU-IMS provides a flexible platform for future IMS research projects.
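A minimal sketch of the GA idea applied to instrument tuning follows; the parameter names, ranges, population settings, and the fitness placeholder are illustrative assumptions, not the settings actually optimized for the GE Itemizer 2:

```python
import random

# Hypothetical operating-condition ranges to tune (not the instrument's real settings)
BOUNDS = {"drift_temp_C": (100, 250), "desorber_temp_C": (150, 300), "gas_flow_mL_min": (200, 500)}

def random_individual():
    return {k: random.uniform(*b) for k, b in BOUNDS.items()}

def fitness(ind):
    # In practice this would run the IMS and return e.g. the analyte peak intensity;
    # here a synthetic placeholder rewards mid-range settings.
    return -sum((v - (b[0] + b[1]) / 2) ** 2 for v, b in zip(ind.values(), BOUNDS.values()))

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in BOUNDS}

def mutate(ind, rate=0.2):
    return {k: (random.uniform(*BOUNDS[k]) if random.random() < rate else v) for k, v in ind.items()}

pop = [random_individual() for _ in range(20)]
for _ in range(30):                                  # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                               # keep the fittest half
    pop = parents + [mutate(crossover(*random.sample(parents, 2))) for _ in range(10)]
best = max(pop, key=fitness)
```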
Abstract:
Bio-molecular interactions exist ubiquitously in all biological systems. This dissertation project was to construct a powerful surface plasmon resonance (SPR) sensor. The SPR system is used to study bio-molecular interactions in real time and without labeling. A surface plasmon is the oscillation of free electrons in a metal coupled with surface electromagnetic waves. These surface electromagnetic waves provide a sensitive probe for studying bio-molecular interactions on metal surfaces. This project resulted in the successful construction and optimization of a home-made SPR sensor and the development of several new powerful protocols for studying bio-molecular interactions. It was discovered through this project that the limitations of earlier SPR sensors are related not only to the instrumentation design and operating procedures, but also to the complex behaviors of bio-molecules on sensor surfaces, which are very different from those in solution. Based on these discoveries, the instrumentation design and operating procedures were fully optimized. A set of existing sensor-surface treatment protocols was tested and evaluated, and new protocols were developed in this project. The new protocols have demonstrated excellent performance in studying bio-molecular interactions. The optimized home-made SPR sensor was used to study protein-surface interactions. These protein-surface interactions are responsible for many complex organic cell activities. The co-existence of different driving forces and their correlation with the structure of the protein and the surface make understanding the fundamental mechanism of protein-surface interactions a very challenging task. Using the improved SPR sensor, the electrostatic interaction and the hydrophobic interaction were studied separately. The results of this project directly confirmed the theoretical predictions for the electrostatic force between the protein and the surface. In addition, this project demonstrated that the strength of the protein-surface hydrophobic interaction does not depend solely on hydrophobicity, as reported earlier; surface structure also plays a significant role.
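For context, the sensing principle rests on the standard surface plasmon dispersion relation and the prism-coupling (Kretschmann-type) resonance condition; the notation below is generic and not tied to the specific home-made instrument:

```latex
k_{sp} = \frac{\omega}{c}\sqrt{\frac{\varepsilon_m\,\varepsilon_d}{\varepsilon_m + \varepsilon_d}},
\qquad
\frac{\omega}{c}\,n_p \sin\theta_{\mathrm{res}} = \operatorname{Re}\!\left(k_{sp}\right),
```

where $\varepsilon_m$ and $\varepsilon_d$ are the permittivities of the metal film and the dielectric sample, $n_p$ is the prism refractive index, and $\theta_{\mathrm{res}}$ is the resonance angle. Molecules binding at the surface change $\varepsilon_d$ locally and shift $\theta_{\mathrm{res}}$ (or the resonance wavelength), which is what the sensor tracks in real time.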
Abstract:
A tenet of modern radiotherapy (RT) is to identify the treatment target accurately, following which the high-dose treatment volume may be expanded into the surrounding tissues in order to create the clinical and planning target volumes. Respiratory motion can induce errors in target volume delineation and dose delivery in radiation therapy for thoracic and abdominal cancers. Historically, radiotherapy treatment planning in the thoracic and abdominal regions has used 2D or 3D images acquired under uncoached free-breathing conditions, irrespective of whether the target tumor is moving or not. Once the gross target volume has been delineated, standard margins are commonly added in order to account for motion. However, these generic margins do not usually take the target motion trajectory into consideration. That may lead to under- or over-estimation of motion, with the subsequent risk of missing the target during treatment or irradiating excessive normal tissue, and it introduces systematic errors into treatment planning and delivery. In clinical practice, four-dimensional (4D) imaging has become popular for RT motion management. It provides temporal information about tumor and organ-at-risk motion, and it permits patient-specific treatment planning. The most common contemporary imaging technique for identifying tumor motion is 4D computed tomography (4D-CT). However, CT has poor soft-tissue contrast and exposes the patient to ionizing radiation. In the last decade, 4D magnetic resonance imaging (4D-MRI) has become an emerging tool for imaging respiratory motion, especially in the abdomen, because of its superior soft-tissue contrast. Recently, several 4D-MRI techniques have been proposed, including prospective and retrospective approaches. Nevertheless, 4D-MRI techniques face several challenges: 1) suboptimal and inconsistent tumor contrast with large inter-patient variation; 2) relatively low temporal-spatial resolution; and 3) lack of a reliable respiratory surrogate. In this research work, novel 4D-MRI techniques applying MRI weightings not used in existing 4D-MRI techniques, including T2/T1-weighted, T2-weighted, and diffusion-weighted MRI, were investigated. A result-driven retrospective phase-sorting method was proposed and applied both in image space and in k-space of MR imaging. Novel image-based respiratory surrogates were developed, improved, and evaluated.
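In broad strokes, retrospective phase sorting assigns each acquired image a respiratory phase from a surrogate signal and bins it accordingly; the sketch below is a generic illustration (the peak detection, bin count, and data layout are assumptions, not the dissertation's algorithm):

```python
import numpy as np

def phase_sort(images, surrogate, n_bins=10):
    """Bin 2D images into respiratory-phase bins using a 1D surrogate signal."""
    s = np.asarray(surrogate, dtype=float)
    # Locate inhalation peaks, then interpolate phase 0..1 between consecutive peaks
    peaks = [i for i in range(1, len(s) - 1) if s[i] >= s[i - 1] and s[i] > s[i + 1]]
    phase = np.zeros_like(s)
    for p0, p1 in zip(peaks[:-1], peaks[1:]):
        phase[p0:p1] = np.linspace(0.0, 1.0, p1 - p0, endpoint=False)
    bins = [[] for _ in range(n_bins)]
    for img, ph in zip(images, phase):
        bins[min(int(ph * n_bins), n_bins - 1)].append(img)
    return bins   # bins[k] holds all images assigned to respiratory phase bin k
```

Stacking the images in each bin then gives one 3D volume per respiratory phase, i.e. a 4D (3D + phase) data set.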
Abstract:
Rolling Isolation Systems provide a simple and effective means for protecting components from horizontal floor vibrations. In these systems a platform rolls on four steel balls which, in turn, rest within shallow bowls. The trajectories of the balls are uniquely determined by the horizontal and rotational velocity components of the rolling platform, and thus provide nonholonomic constraints. In general, the bowls are not parabolic, so the potential energy function of this system is not quadratic. This thesis presents the application of Gauss's Principle of Least Constraint to the modeling of rolling isolation platforms. The equations of motion are described in terms of a redundant set of constrained coordinates. Coordinate accelerations are uniquely determined at any point in time via Gauss's Principle by solving a linearly constrained quadratic minimization. In the absence of any modeled damping, the equations of motion conserve energy. This mathematical model is then used to find the bowl profile that minimizes the response acceleration subject to a displacement constraint.
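In generic notation (not necessarily the thesis's symbols), Gauss's Principle states that the actual accelerations minimize the Gaussian "constraint" subject to the acceleration-level constraint equations, which is exactly the linearly constrained quadratic minimization mentioned above:

```latex
\min_{\ddot q}\;\; \tfrac{1}{2}\left(\ddot q - M^{-1}Q\right)^{\!\top} M \left(\ddot q - M^{-1}Q\right)
\quad\text{subject to}\quad A(q,\dot q)\,\ddot q = b(q,\dot q),
\qquad\text{with KKT system}\qquad
\begin{bmatrix} M & A^{\top} \\ A & 0 \end{bmatrix}
\begin{bmatrix} \ddot q \\ \lambda \end{bmatrix}
=
\begin{bmatrix} Q \\ b \end{bmatrix},
```

where $M$ is the mass matrix, $Q$ the applied generalized forces, $A\ddot q = b$ the nonholonomic constraints differentiated to acceleration level, and $\lambda$ the constraint multipliers. Solving this linear system at each time step yields the unique coordinate accelerations.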
Abstract:
The effectiveness of an optimization algorithm can be reduced to its ability to navigate an objective function’s topology. Hybrid optimization algorithms combine various optimization algorithms using a single meta-heuristic so that the hybrid algorithm is more robust, computationally efficient, and/or accurate than the individual algorithms it is made of. This thesis proposes a novel meta-heuristic that uses search vectors to select the constituent algorithm that is appropriate for a given objective function. The hybrid is shown to perform competitively against several existing hybrid and non-hybrid optimization algorithms over a set of three hundred test cases. This thesis also proposes a general framework for evaluating the effectiveness of hybrid optimization algorithms. Finally, this thesis presents an improved Method of Characteristics Code with novel boundary conditions, which better characterizes pipelines than previous codes. This code is coupled with the hybrid optimization algorithm in order to optimize the operation of real-world piston pumps.
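The following is only a schematic illustration of the switching idea behind such a meta-heuristic; the greedy selection rule, the progress score, and the stand-in constituent optimizer are assumptions, not the meta-heuristic proposed in the thesis:

```python
import numpy as np

def hybrid_minimize(f, x0, optimizers, rounds=20, budget=50):
    """Alternate between constituent optimizers, favoring the one making the most progress."""
    x = np.asarray(x0, float)
    fx = f(x)
    scores = {name: 1.0 for name in optimizers}          # optimistic initial scores
    for _ in range(rounds):
        name = max(scores, key=scores.get)               # pick the most promising constituent
        x_new, fx_new = optimizers[name](f, x, budget)   # run it for a small budget
        scores[name] = 0.5 * scores[name] + 0.5 * max(fx - fx_new, 0.0) / budget
        if fx_new < fx:
            x, fx = x_new, fx_new
    return x, fx

def random_search(f, x, budget, step=0.1):
    """Stand-in constituent optimizer: (f, x_start, budget) -> (x_best, f_best)."""
    best_x, best_f = x, f(x)
    for _ in range(budget):
        cand = best_x + step * np.random.randn(*best_x.shape)
        fc = f(cand)
        if fc < best_f:
            best_x, best_f = cand, fc
    return best_x, best_f

# Usage on a toy objective
x_best, f_best = hybrid_minimize(lambda v: float(np.sum(v**2)), np.ones(5), {"random": random_search})
```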
Abstract:
In this study, thermal and exergetic analyses and a performance evaluation of seawater and fresh-water wet cooling towers, together with the effect of operating parameters on their performance, are presented. Using energy and mass balance equations and experimental results, a mathematical model and an EES code were developed. Due to the scarcity of fresh water, seawater cooling is an attractive option for the future of cooling; therefore, the effect of seawater salinity in the range of 1 g/kg to 60 g/kg on performance characteristics such as air efficiency, water efficiency, cooling tower outlet water temperature, exergy flow, and exergy efficiency was examined and compared with fresh water. A decrease in air efficiency of about 3% and an increase in water efficiency of about 1.5% are among these effects. Moreover, with the formation of fouling, the performance of the cooling tower decreased by about 15%; this phenomenon and its effects, such as an increase in outlet water temperature and excess tower volume, were demonstrated and agree with other published work. Optimizations for minimizing cost, maximizing air efficiency, and minimizing exergy destruction were also performed; the results showed that minimizing exergy destruction satisfied both cost minimization and air-efficiency maximization, although this does not necessarily hold for all inputs and optimizations. The work was validated by comparing computational results with experimental data, which showed that the model has good accuracy.
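For reference, the efficiency and exergy measures named above are commonly defined along the following lines; these are typical textbook forms and may differ in detail from the definitions used in the study:

```latex
\eta_{w} = \frac{T_{w,\mathrm{in}} - T_{w,\mathrm{out}}}{T_{w,\mathrm{in}} - T_{wb,\mathrm{in}}},\qquad
\eta_{a} = \frac{h_{a,\mathrm{out}} - h_{a,\mathrm{in}}}{h_{a,\mathrm{sat}}(T_{w,\mathrm{in}}) - h_{a,\mathrm{in}}},\qquad
\dot X_{\mathrm{dest}} = \sum \dot X_{\mathrm{in}} - \sum \dot X_{\mathrm{out}},\qquad
\eta_{\mathrm{ex}} = 1 - \frac{\dot X_{\mathrm{dest}}}{\sum \dot X_{\mathrm{in}}},
```

where $T_{w,\mathrm{in}}$ and $T_{w,\mathrm{out}}$ are the inlet and outlet water temperatures, $T_{wb,\mathrm{in}}$ the inlet-air wet-bulb temperature, $h_a$ the moist-air enthalpy, and $\dot X$ the exergy flow rates evaluated from the energy and mass balances.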
Abstract:
Calcium sulfoaluminate (CSA) cements/mortars are receiving increasing attention since their manufacture produces less CO2 than ordinary Portland cement (OPC) (up to a 22% decrease depending on composition). These systems are complex and there are many parameters affecting their hydration mechanism, such as the water-to-cement (w/c) ratio and the type and amount of sulfate source. Low w/c ratios, within certain limits, may reduce the porosity and consequently improve the mechanical strengths. However, they are accompanied by an increase in viscosity and a lack of both workability and homogeneity, with a consequent negative effect on the mechanical properties. The dispersion of the particles through the adsorption of the right amount and type of additives, such as superplasticizers, is a key point for improving the workability of mortars, allowing both the preparation of homogeneous mixtures and the reduction of the amount of mixing water. This work deals with the preparation and optimization of homogeneous CSA mortars with improved mechanical strengths. The optimum amount of superplasticizer was determined through rheological measurements. The effect of different amounts of the superplasticizer on the viscosity of the mortars, their hydration mechanism, and the corresponding mechanical properties has been studied and will be discussed.
Abstract:
In the first part of this thesis we search for physics beyond the Standard Model through a search for anomalous production of the Higgs boson using the razor kinematic variables. We search for anomalous Higgs boson production using proton-proton collisions at a center-of-mass energy of √s = 8 TeV collected by the Compact Muon Solenoid experiment at the Large Hadron Collider, corresponding to an integrated luminosity of 19.8 fb⁻¹.
In the second part we present a novel method for using a quantum annealer to train a classifier to recognize events containing a Higgs boson decaying to two photons. We train that classifier using simulated proton-proton collisions at √s=8 TeV producing either a Standard Model Higgs boson decaying to two photons or a non-resonant Standard Model process that produces a two photon final state.
The production mechanisms of the Higgs boson are precisely predicted by the Standard Model based on its association with the mechanism of electroweak symmetry breaking. We measure the yield of Higgs bosons decaying to two photons in kinematic regions predicted to have very little contribution from a Standard Model Higgs boson and search for an excess of events, which would be evidence of either non-standard production or non-standard properties of the Higgs boson. We divide the events into disjoint categories based on kinematic properties and the presence of additional b-quarks produced in the collisions. In each of these disjoint categories, we use the razor kinematic variables to characterize events with topological configurations incompatible with typical configurations found in Standard Model production of the Higgs boson.
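For reference, the razor kinematic variables are conventionally built from the four-momenta of two "megajets" $j_1$, $j_2$ (in Higgs analyses these are typically formed from the diphoton system and the accompanying reconstructed objects) and the missing transverse momentum:

```latex
M_R \equiv \sqrt{\left(|\vec p_{j_1}| + |\vec p_{j_2}|\right)^{2} - \left(p_{z}^{\,j_1} + p_{z}^{\,j_2}\right)^{2}},
\qquad
M_T^{R} \equiv \sqrt{\frac{E_T^{\mathrm{miss}}\left(p_T^{\,j_1} + p_T^{\,j_2}\right) - \vec E_T^{\mathrm{miss}}\cdot\left(\vec p_T^{\,j_1} + \vec p_T^{\,j_2}\right)}{2}},
\qquad
R^{2} \equiv \left(\frac{M_T^{R}}{M_R}\right)^{\!2},
```

so that each event is characterized by its position in the $(M_R, R^2)$ "razor plane" referred to below.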
We observe an excess of events with di-photon invariant mass compatible with the Higgs boson mass and localized in a small region of the razor plane. We observe 5 events with a predicted background of 0.54 ± 0.28; this observation has a p-value of 10⁻³ and a local significance of 3.35σ. The background prediction comes from 0.48 predicted non-resonant background events and 0.07 predicted SM Higgs boson events. We proceed to investigate the properties of this excess, finding that it produces a compelling peak in the di-photon invariant mass distribution and is physically separated in the razor plane from the predicted background. Using another method of measuring the background and the significance of the excess, we find a 2.5σ deviation from the Standard Model hypothesis over a broader range of the razor plane.
In the second part of the thesis we transform the problem of training a classifier to distinguish events with a Higgs boson decaying to two photons from events with other sources of photon pairs into the Hamiltonian of a spin system, the ground state of which is the best classifier. We then use a quantum annealer to find the ground state of this Hamiltonian and train the classifier. We find that we are able to do this successfully in less than 400 annealing runs for a problem of median difficulty at the largest problem size considered. The networks trained in this manner exhibit good classification performance, competitive with the more complicated machine learning techniques, and are highly resistant to overtraining. We also find that the nature of the training gives access to additional solutions that can be used to improve the classification performance by up to 1.2% in some regions.
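Schematically, the mapping works by expanding a squared training error over binary inclusion weights for a set of weak classifiers; the form below is a generic sketch of that construction, and the coefficients and conventions may differ from those used in the thesis:

```latex
E(w) \;=\; \sum_{\tau}\Big(\sum_i w_i\,c_i(x_\tau) - y_\tau\Big)^{2}
\;=\; \sum_{i,j} C_{ij}\,w_i w_j \;-\; 2\sum_i C_{iy}\,w_i \;+\; \mathrm{const},
\qquad w_i \in \{0,1\},
```

with $C_{ij} = \sum_\tau c_i(x_\tau)\,c_j(x_\tau)$ the weak-classifier correlations and $C_{iy} = \sum_\tau c_i(x_\tau)\,y_\tau$ their correlations with the labels $y_\tau \in \{-1,+1\}$. Adding a regularization term $\lambda \sum_i w_i$ and mapping the binary weights to spin variables yields an Ising Hamiltonian whose ground state selects the weak classifiers that make up the trained strong classifier, which is the state the annealer searches for.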
Abstract:
Purpose: To develop and optimize some of the variables that influence the formulation of fluoxetine orally disintegrating tablets (ODTs). Methods: Fluoxetine ODTs were prepared using the direct compression method. A three-factor, 3-level Box-Behnken design was used to develop and optimize the fluoxetine ODT formulation. The design suggested 15 formulations of different lubricant concentration (X1), lubricant mixing time (X2), and compression force (X3), and their effects were monitored on tablet weight (Y1), thickness (Y2), hardness (Y3), % friability (Y4), and disintegration time (Y5). Results: All powder blends showed acceptable flow properties, ranging from good to excellent. The disintegration time (Y5) was affected directly by lubricant concentration (X1). Lubricant mixing time (X2) had a direct effect on tablet thickness (Y2) and hardness (Y3), while compression force (X3) had a direct impact on tablet hardness (Y3), % friability (Y4), and disintegration time (Y5). Accordingly, the Box-Behnken design suggested an optimized formula of 0.86 mg (X1), 15.3 min (X2), and 10.6 kN (X3). Finally, the prediction error percentages for responses Y1, Y2, Y3, Y4, and Y5 were 0.31, 0.52, 2.13, 3.92, and 3.75%, respectively. Formulas 4 and 8 achieved 90% drug release within the first 5 min of the dissolution test. Conclusion: The fluoxetine ODT formulation has been developed and optimized successfully using a Box-Behnken design and has also been manufactured efficiently using the direct compression technique.
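As a sketch of what a three-factor Box-Behnken design looks like in practice, the code below builds the 15-run coded design (12 edge midpoints plus 3 center points) and maps it onto factor ranges; the low/high values shown are hypothetical placeholders, not the ranges used in the study:

```python
import itertools
import numpy as np

def box_behnken_3():
    """3-factor, 3-level Box-Behnken design in coded units (-1, 0, +1): 15 runs."""
    runs = []
    for i, j in itertools.combinations(range(3), 2):      # each pair of factors
        for a, b in itertools.product((-1, 1), repeat=2):  # four corner combinations
            row = [0, 0, 0]                                # third factor at center level
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0, 0, 0]] * 3                                # three center points
    return np.array(runs, dtype=float)

design = box_behnken_3()               # shape (15, 3), coded levels

# Map coded levels to real factor values; these low/high bounds are illustrative only
low = np.array([0.5, 5.0, 5.0])        # X1 (mg), X2 (min), X3 (kN) -- hypothetical
high = np.array([1.5, 20.0, 15.0])
real_design = low + (design + 1) / 2 * (high - low)
```

Fitting quadratic response-surface models of Y1-Y5 to the 15 measured runs is then what lets the design propose an optimized factor combination such as the one quoted above.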
Abstract:
In this work, fabrication processes for daylight guiding systems based on micromirror arrays are developed, evaluated, and optimized. Two different approaches are used: first, nanoimprint lithography is used to fabricate large-area micromirrors by means of Substrate Conformal Imprint Lithography (SCIL); secondly, a new lithography technique is developed using a novel bi-layered photomask to fabricate large-area micromirror arrays. The experimental results show a reproducible, stable process with high yield that consumes less material, time, cost, and effort.
Abstract:
In most agroecosystems, nitrogen (N) is the most important nutrient limiting plant growth. One management strategy that affects N cycling and N use efficiency (NUE) is conservation agriculture (CA), an agricultural system based on a combination of minimum tillage, crop residue retention, and crop rotation. Available results on the optimization of NUE in CA are inconsistent, and studies that cover all three components of CA are scarce. Presently, CA is promoted in the Yaqui Valley in Northern Mexico, the country's major wheat-producing area, in which, from 1968 to 1995, fertilizer application rates for the cultivation of irrigated durum wheat (Triticum durum L.) at 6 t ha-1 increased from 80 to 250 kg ha-1, demonstrating the high intensification potential of this region. Given major knowledge gaps on N availability in CA, this thesis summarizes the current knowledge of N management in CA and provides insights into the effects of tillage practice, residue management, and crop rotation on wheat grain quality and N cycling. Major aims of the study were to identify N fertilizer application strategies that improve N use efficiency and reduce N immobilization in CA, with the ultimate goal of stabilizing cereal yields, maintaining grain quality, minimizing N losses into the environment, and reducing farmers' input costs. Soil physical and chemical properties in CA were measured and compared with those in conventional systems and permanent beds with residue burning, focusing on their relationship to plant N uptake and N cycling in the soil and on how they are affected by tillage and by N fertilizer timing, method, and dose. For N fertilizer management, we analyzed how placement, timing, and amount of N fertilizer influenced yield and quality parameters of durum and bread wheat in CA systems. Overall, grain quality parameters, in particular grain protein concentration, decreased with zero tillage and with increasing amounts of residues left on the field compared with conventional systems. The second part of the dissertation provides an overview of applied methodologies to measure NUE and its components. We evaluated the methodology of ion exchange resin cartridges under irrigated, intensive agricultural cropping systems on Vertisols to measure nitrate leaching losses, which through drainage channels ultimately end up in the Sea of Cortez, where they lead to algal blooms. A thorough analysis of N inputs and outputs was conducted to calculate N balances in three different tillage-straw systems. As fertilizer inputs are high, N balances were positive in all treatments, indicating a risk of N leaching or volatilization during or in subsequent cropping seasons and during heavy rainfall in summer. Contrary to common belief, we did not find negative effects of residue burning on soil nutrient status, yield, or N uptake. A labeled fertilizer experiment with urea-15N was implemented in micro-plots to measure N fertilizer recovery and the effects of residual fertilizer N in the soil from summer maize on the following winter wheat crop. The obtained N fertilizer recovery rates for maize grain were very low for all treatments, averaging 11%.
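For context, ¹⁵N fertilizer recovery in such micro-plot experiments is usually calculated from the isotope enrichment of the plant material relative to the labeled fertilizer; a generic form of the standard calculation (notation not necessarily that of the thesis) is:

```latex
\%N_{\mathrm{dff}} = \frac{\text{atom \% }^{15}\mathrm{N}\ \text{excess in plant}}{\text{atom \% }^{15}\mathrm{N}\ \text{excess in fertilizer}}\times 100,
\qquad
\text{Recovery}\ (\%) = \frac{N_{\mathrm{uptake}}\times \%N_{\mathrm{dff}}/100}{N_{\mathrm{applied}}}\times 100,
```

where $\%N_{\mathrm{dff}}$ is the percentage of plant N derived from the fertilizer, $N_{\mathrm{uptake}}$ the N taken up by the crop or crop part considered (e.g. maize grain), and $N_{\mathrm{applied}}$ the rate of labeled fertilizer N applied.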