923 results for Search Engine Optimization Methods
Abstract:
The potential energy surface for the first step of the alkaline hydrolysis of methyl acetate was explored by a variety of methods. The conformational search routine within SPARTAN was used to determine the lowest energy AM1 and PM3 structures for the anionic tetrahedral intermediate. Ab initio single point and geometry optimization calculations were performed to determine the lowest energy conformer, and the linear synchronous transit (LST) method was used to provide an initial structure for transition state optimization. Transition states were obtained at the AM1, PM3, 3-21G, and 3-21+G levels of theory. These transition states were compared with the anionic tetrahedral intermediates to examine the assumption that the intermediate is a good model for the transition state. In addition, the Cramer/Truhlar SM3 solvation model was used at the semiempirical level to compare gas phase and aqueous alkaline hydrolysis of methyl acetate.
Abstract:
Breast cancer is the most common cancer among women. Tamoxifen is the preferred drug for estrogen receptor-positive breast cancer treatment, yet many of these cancers are intrinsically resistant to tamoxifen or acquire resistance during treatment. Therefore, scientists are searching for breast cancer drugs with different molecular targets. Previous work revealed that 8-mer and cyclic 9-mer peptides inhibit breast cancer in mouse and rat model systems by interacting with an unknown receptor, while peptides shorter than eight amino acids did not inhibit breast cancer. We have shown that replica exchange molecular dynamics predicts the structure and dynamics of active peptides, leading to the discovery of smaller peptides with full biological activity. These simulations identified smaller peptide analogs that conserve the β-turn formed in the larger peptides. These analogs inhibit estrogen-dependent cell growth in a mouse uterine growth assay, a test that correlates reliably with human breast cancer inhibition. We outline the computational methods that were tried and, together with the experimental information, led to the successful completion of this research.
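As an illustration of the core move in replica exchange molecular dynamics, the sketch below shows the Metropolis swap criterion between two neighboring temperature replicas; the temperature ladder, the placeholder energies, and the Boltzmann constant units are assumptions for illustration, not values from the study.

```python
# Minimal sketch of the replica-exchange (parallel tempering) swap step that
# underlies REMD; the peptide force field and MD propagation are omitted and
# the energies below are placeholders, not the simulations from the study.
import math
import random

def swap_accepted(E_i, E_j, T_i, T_j, k_B=0.0019872041):  # kcal/(mol K)
    """Metropolis criterion for exchanging replicas at temperatures T_i, T_j."""
    delta = (1.0 / (k_B * T_i) - 1.0 / (k_B * T_j)) * (E_j - E_i)
    return delta <= 0 or random.random() < math.exp(-delta)

# Hypothetical usage: attempt swaps between neighboring temperature replicas.
temperatures = [300, 330, 363, 400]          # K, assumed ladder
energies = [-120.5, -118.2, -115.9, -110.3]  # kcal/mol, placeholder values
for i in range(len(temperatures) - 1):
    if swap_accepted(energies[i], energies[i + 1],
                     temperatures[i], temperatures[i + 1]):
        energies[i], energies[i + 1] = energies[i + 1], energies[i]
```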
Abstract:
The Gaussian-3 (G3) model chemistry method has been used to calculate the relative ΔG° values for all possible conformers of neutral clusters of water, (H2O)n, where n = 3−5. A complete 12-fold conformational search around each hydrogen bond produced 144, 1,728, and 20,736 initial starting structures of the water trimer, tetramer, and pentamer. These structures were optimized with PM3, followed by HF/6-31G* optimization, and then with the G3 model chemistry. Only two trimers are present on the G3 potential energy hypersurface. We identified 5 tetramers and 10 pentamers on the potential energy and free-energy hypersurfaces at 298 K. None of these 17 structures were linear; all linear starting models folded into cyclic or three-dimensional structures. The cyclic pentamer is the most stable isomer at 298 K. On the basis of this and previous studies, we expect the cyclic tetramers and pentamers to be the most significant cyclic water clusters in the atmosphere.
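The 144, 1,728, and 20,736 starting structures follow from stepping each rotatable hydrogen bond through twelve 30° settings; the sketch below reproduces only this counting, assuming a linear chain of n waters has n − 1 such bonds, and does not generate actual cluster geometries.

```python
# A minimal sketch of the 12-fold grid that yields the starting-structure
# counts quoted above: each rotatable hydrogen bond in an n-water chain
# (assumed n - 1 bonds) is stepped through 30-degree increments.
from itertools import product

def dihedral_grid(n_waters, step_deg=30):
    angles = range(0, 360, step_deg)   # 12 settings per hydrogen bond
    n_bonds = n_waters - 1             # hydrogen bonds in a linear chain
    return list(product(angles, repeat=n_bonds))

for n in (3, 4, 5):
    print(n, "waters ->", len(dihedral_grid(n)), "starting structures")
# 3 waters -> 144, 4 waters -> 1728, 5 waters -> 20736
```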
Abstract:
In 2011, researchers at Bucknell University and Illinois Wesleyan University compared the search efficacy of Serials Solutions Summon, EBSCO Discovery Service, Google Scholar, and conventional library databases. Using a mixed-methods approach, qualitative and quantitative data were gathered on students’ usage of these tools. Regardless of the search system, students exhibited a marked inability to effectively evaluate sources and a heavy reliance on default search settings. On the quantitative benchmarks measured by this study, the EBSCO Discovery Service tool outperformed the other search systems in almost every category. This article describes these results and makes recommendations for libraries considering these tools.
Abstract:
A central design challenge facing network planners is how to select a cost-effective network configuration that can provide uninterrupted service despite edge failures. In this paper, we study the Survivable Network Design (SND) problem, a core model underlying the design of such resilient networks that incorporates complex cost and connectivity trade-offs. Given an undirected graph with specified edge costs and (integer) connectivity requirements between pairs of nodes, the SND problem seeks the minimum cost set of edges that interconnects each node pair with at least as many edge-disjoint paths as the connectivity requirement of the nodes. We develop a hierarchical approach for solving the problem that integrates ideas from decomposition, tabu search, randomization, and optimization. The approach decomposes the SND problem into two subproblems, Backbone design and Access design, and uses an iterative multi-stage method for solving the SND problem in a hierarchical fashion. Since both subproblems are NP-hard, we develop effective optimization-based tabu search strategies that balance intensification and diversification to identify near-optimal solutions. To initiate this method, we develop two heuristic procedures that can yield good starting points. We test the combined approach on large-scale SND instances, and empirically assess the quality of the solutions vis-à-vis optimal values or lower bounds. On average, our hierarchical solution approach generates solutions within 2.7% of optimality even for very large problems (that cannot be solved using exact methods), and our results demonstrate that the performance of the method is robust for a variety of problems with different size and connectivity characteristics.
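For readers unfamiliar with tabu search, the following is a generic skeleton of the metaheuristic, not the authors' backbone/access decomposition; the `neighbors` and `cost` callables and the tenure value are hypothetical problem hooks.

```python
# Generic tabu search skeleton; `neighbors(s)` returns candidate solutions and
# `cost(s)` evaluates them.  These hooks are placeholders, not the SND model.
from collections import deque

def tabu_search(initial, neighbors, cost, tenure=20, max_iters=1000):
    best = current = initial
    tabu = deque(maxlen=tenure)              # recently visited solutions
    for _ in range(max_iters):
        candidates = [s for s in neighbors(current) if s not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)  # best admissible neighbor
        tabu.append(current)
        if cost(current) < cost(best):       # keep the incumbent best
            best = current
    return best
```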
Abstract:
Smoke spikes occurring during transient engine operation have detrimental health effects and increase fuel consumption by requiring more frequent regeneration of the diesel particulate filter. This paper proposes a decision tree approach to real-time detection of smoke spikes for control and on-board diagnostics purposes. A contemporary, electronically controlled heavy-duty diesel engine was used to investigate the deficiencies of smoke control based on the fuel-to-oxygen-ratio limit. With the aid of transient and steady state data analysis and empirical as well as dimensional modeling, it was shown that the fuel-to-oxygen ratio was not estimated correctly during the turbocharger lag period. This inaccuracy was attributed to the large manifold pressure ratios and low exhaust gas recirculation flows recorded during the turbocharger lag period, which meant that engine control module correlations for the exhaust gas recirculation flow and the volumetric efficiency had to be extrapolated. The engine control module correlations were based on steady state data, and it was shown that, unless the turbocharger efficiency is artificially reduced, the large manifold pressure ratios observed during the turbocharger lag period cannot be achieved at steady state. Additionally, the cylinder-to-cylinder variations during this period were shown to be sufficiently significant to make the average fuel-to-oxygen ratio a poor predictor of the transient smoke emissions. The steady state data also showed higher smoke emissions with higher exhaust gas recirculation fractions at constant fuel-to-oxygen-ratio levels. This suggests that, even if the fuel-to-oxygen ratios were estimated accurately for each cylinder, they would still be ineffective as smoke limiters. A decision tree trained on snap throttle data and pruned with engineering knowledge was able to use the inaccurate engine control module estimates of the fuel-to-oxygen ratio, together with information on the engine control module estimate of the exhaust gas recirculation fraction, the engine speed, and the manifold pressure ratio, to predict 94% of all spikes occurring over the Federal Test Procedure cycle. The advantages of this non-parametric approach over other commonly used parametric empirical methods, such as regression, were described. Dimensional and empirical modeling were used to illustrate an application of accurate smoke spike detection in which the injection pressure is increased at points of high opacity, substantially reducing the cumulative particulate matter emissions with a minimal increase in the cumulative nitrogen oxide emissions.
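A minimal sketch of the kind of decision-tree classifier described, written with scikit-learn; the four input signals mirror those named in the abstract, but the numerical training data and the resulting thresholds are invented for illustration.

```python
# Toy decision-tree classifier for smoke-spike detection; the feature values
# and labels are hypothetical, not the engine's control-module data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Columns: ECM fuel-to-oxygen ratio estimate, EGR fraction estimate,
# engine speed [rpm], manifold pressure ratio.
X = np.array([[0.30, 0.20, 1200, 1.1],
              [0.55, 0.05, 1800, 1.9],
              [0.35, 0.18, 1400, 1.2],
              [0.60, 0.04, 2000, 2.1]])
y = np.array([0, 1, 0, 1])                      # 1 = smoke spike observed

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(tree.predict([[0.58, 0.05, 1900, 2.0]]))  # predicted spike label
```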
Abstract:
PURPOSE: We studied the effects of reorganization and changes in the care process, including use of protocols for sedation and weaning from mechanical ventilation, on the use of sedative and analgesic drugs and on length of respiratory support and stay in the intensive care unit (ICU). MATERIALS AND METHODS: Three cohorts of 100 mechanically ventilated ICU patients, admitted in 1999 (baseline), 2000 (implementation I, after a change in ICU organization and in diagnostic and therapeutic approaches), and 2001 (implementation II, after introduction of protocols for weaning from mechanical ventilation and sedation), were studied retrospectively. RESULTS: Simplified Acute Physiology Score II (SAPS II), diagnostic groups, and number of organ failures were similar in all groups. Data are reported as median (interquartile range). Time on mechanical ventilation decreased from 18 (7-41) (baseline) to 12 (7-27) hours (implementation II) (P = .046), an effect which was entirely attributable to noninvasive ventilation, and length of ICU stay decreased in survivors from 37 (21-71) to 25 (19-63) hours (P = .049). The amount of morphine (P = .001) and midazolam (P = .050) decreased, whereas the amount of propofol (P = .052) and fentanyl increased (P = .001). Total Therapeutic Intervention Scoring System-28 (TISS-28) per patient decreased from 137 (99-272) to 113 (87-256) points (P = .009). Intensive care unit mortality was 19% (baseline), 8% (implementation I), and 7% (implementation II) (P = .020). CONCLUSIONS: Changes in organizational and care processes were associated with an altered pattern of sedative and analgesic drug prescription, a decrease in length of (noninvasive) respiratory support and length of stay in survivors, and decreases in resource use as measured by TISS-28 and mortality.
Abstract:
In developing countries many water distribution systems are branched networks with little redundancy. If any component in the distribution system fails, many users are left relying on secondary water sources. These sources often do not provide potable water, and prolonged use leads to increased cases of waterborne illnesses. Increasing redundancy in branched networks increases their reliability, but is often viewed as unaffordable. This paper presents a procedure that water system managers can use to determine which loops, when added to a branched network, provide the most benefit for users. Two methods are presented, one ranking the loops by the total number of users benefited and one ranking the loops by the number of vulnerable users benefited. A case study is presented using the water distribution system of Medina Bank Village, Belize. It was found that forming loops in upstream pipes connected to the main line had the potential to benefit the most users.
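A minimal sketch of the two ranking rules, assuming each candidate loop has already been credited with the users it would benefit; the loop names and counts are hypothetical, not the Medina Bank Village data.

```python
# Rank candidate loops by total users benefited and by vulnerable users
# benefited; all figures are placeholders for illustration only.
loops = [
    {"name": "Loop A", "users_benefited": 120, "vulnerable_users": 35},
    {"name": "Loop B", "users_benefited": 80,  "vulnerable_users": 60},
    {"name": "Loop C", "users_benefited": 150, "vulnerable_users": 20},
]

by_total = sorted(loops, key=lambda L: L["users_benefited"], reverse=True)
by_vulnerable = sorted(loops, key=lambda L: L["vulnerable_users"], reverse=True)

print([L["name"] for L in by_total])       # ['Loop C', 'Loop A', 'Loop B']
print([L["name"] for L in by_vulnerable])  # ['Loop B', 'Loop A', 'Loop C']
```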
Abstract:
Single-screw extrusion is one of the most widely used processing methods in the plastics industry, which was the third largest manufacturing industry in the United States in 2007 [5]. In order to optimize the single-screw extrusion process, tremendous effort has been devoted over the last fifty years to developing accurate models, especially for polymer melting in screw extruders. This has led to a good qualitative understanding of the melting process; however, quantitative predictions of melting from various models often show large errors in comparison to experimental data. Thus, even today, process parameters and the geometry of the extruder channel for single-screw extrusion are determined by trial and error. Since new polymers are developed frequently, finding the optimum parameters to extrude these polymers by trial and error is costly and time consuming. In order to reduce the time and experimental work required for optimizing the process parameters and the geometry of the extruder channel for a given polymer, the main goal of this research was to perform a coordinated experimental and numerical investigation of melting in screw extrusion. In this work, a full three-dimensional finite element simulation of the two-phase flow in the melting and metering zones of a single-screw extruder was performed by solving the conservation equations for mass, momentum, and energy. The only previous attempt at such a three-dimensional simulation of melting in a screw extruder was made more than twenty years ago. However, that work had only limited success because of the capabilities of the computers and mathematical algorithms available at that time. The dramatic improvement in computational power and mathematical knowledge now makes it possible to run full 3-D simulations of two-phase flow in single-screw extruders on a desktop PC. In order to verify the numerical predictions from the full 3-D simulations of two-phase flow in single-screw extruders, a detailed experimental study was performed. This experimental study included Maddock screw-freezing experiments, Screw Simulator experiments, and material characterization experiments. Maddock screw-freezing experiments were performed in order to visualize the melting profile along the single-screw extruder channel with different screw geometry configurations. These melting profiles were compared with the simulation results. Screw Simulator experiments were performed to collect shear stress and melting flux data for various polymers. Cone-and-plate viscometer experiments were performed to obtain the shear viscosity data needed in the simulations. An optimization code was developed to optimize two screw geometry parameters, namely the screw lead (pitch) and the depth in the metering section of a single-screw extruder, such that the output rate of the extruder was maximized without exceeding the maximum temperature specified at the exit of the extruder. This optimization code used a mesh partitioning technique in order to obtain the flow domain. The simulations in this flow domain were performed using the code developed to simulate the two-phase flow in single-screw extruders.
Abstract:
An extrusion die is used to continuously produce parts with a constant cross section, such as sheets, pipes, tire components, and more complex shapes such as window seals. The die is fed by a screw extruder when polymers are used. The extruder melts, mixes, and pressurizes the material through the rotation of either a single or a double screw. The polymer can then be continuously forced through the die, producing a long part in the shape of the die outlet. The extruded section is then cut to the desired length. Generally, the primary target of a well designed die is to produce a uniform outlet velocity without excessively raising the pressure required to extrude the polymer through the die. Other properties such as temperature uniformity and residence time are also important but are not directly considered in this work. Designing dies for optimal outlet velocity variation using simple analytical equations is feasible for basic die geometries or simple channels. Due to the complexity of die geometry and of polymer material properties, the design of complex dies by analytical methods is difficult; for complex dies, iterative methods must be used. An automated iterative method is desired for die optimization. To automate the design and optimization of an extrusion die, two issues must be dealt with. The first is how to generate a new mesh for each iteration. In this work, this is approached by modifying a Parasolid file that describes a CAD part; this file is then used in a commercial meshing software package. Skewing the initial mesh to produce a new geometry was also employed as a second option. The second issue is an optimization problem in the presence of noise stemming from variations in the mesh and cumulative truncation errors. In this work a simplex method and a modified trust region method were employed for automated optimization of die geometries. For the trust region, a discrete derivative and a BFGS Hessian approximation were used. To deal with the noise in the function, the trust region method was modified to automatically adjust the discrete derivative step size and the trust region based on changes in noise and function contour. Generally, uniformity of velocity at the exit of the extrusion die can be improved by increasing resistance across the die, but this is limited by the pressure capabilities of the extruder. In the optimization, a penalty factor that increases exponentially beyond the pressure limit is applied. This penalty can be applied in two different ways: the first applies it only to designs that exceed the pressure limit; the second applies it to designs both above and below the pressure limit. Both of these methods were tested and compared in this work.
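A minimal sketch of a simplex (Nelder-Mead) search with an exponential penalty of the first kind (applied only above the pressure limit), using SciPy; the surrogate `velocity_variation` and `die_pressure` functions and the pressure limit stand in for the full flow simulation and are not the author's code.

```python
# Penalized simplex optimization of two die-geometry parameters; the two
# surrogate functions below are hypothetical stand-ins for the simulation.
import numpy as np
from scipy.optimize import minimize

P_LIMIT = 20.0  # MPa, assumed extruder pressure limit

def velocity_variation(x):
    # Stand-in for the simulated exit-velocity non-uniformity.
    return (x[0] - 1.5) ** 2 + (x[1] - 0.8) ** 2

def die_pressure(x):
    # Stand-in for the simulated pressure drop across the die.
    return 10.0 + 4.0 * x[0] + 6.0 * x[1]

def objective(x):
    # Penalty variant 1: applied only when the pressure limit is exceeded.
    p = die_pressure(x)
    penalty = np.exp(p - P_LIMIT) if p > P_LIMIT else 0.0
    return velocity_variation(x) + penalty

result = minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead")
print(result.x, die_pressure(result.x))
```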
Abstract:
There is a need by engine manufacturers for computationally efficient and accurate predictive combustion modeling tools that can be integrated in engine simulation software for the assessment of combustion system hardware designs and the early development of engine calibrations. This thesis discusses the process for the development and validation, from experimental data, of a combustion modeling tool for a gasoline direct-injected spark-ignited engine with variable valve timing, lift, and duration valvetrain hardware. Data were correlated and regressed using accepted methods for calculating the turbulent flow and flame propagation characteristics of an internal combustion engine. A non-linear regression modeling method was utilized to develop a combustion model that determines the fuel mass burn rate at multiple points during the combustion process. The computational fluid dynamics software Converge was used to simulate and correlate the 3-D combustion system, port, and piston geometry to the turbulent flow development within the cylinder, in order to properly predict the experimentally measured turbulent flow parameters through the intake, compression, and expansion processes. The engine simulation software GT-Power was then used to determine the 1-D flow characteristics of the engine hardware being tested and to correlate the regressed combustion modeling tool to experimental data to determine its accuracy. The results of the combustion modeling tool show accurate trends, capturing the combustion sensitivities to turbulent flow, thermodynamic, and internal residual effects with changes in intake and exhaust valve timing, lift, and duration.
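As one plausible concrete form of such a non-linear burn-rate regression, the sketch below fits a Wiebe function to synthetic mass-fraction-burned data with SciPy; the Wiebe form and the data are illustrative assumptions, not necessarily the regression used in the thesis.

```python
# Non-linear regression of a cumulative burn curve; the Wiebe function and
# the synthetic data are illustrative, not the thesis' model or measurements.
import numpy as np
from scipy.optimize import curve_fit

def wiebe(theta, theta0, dtheta, a, m):
    """Cumulative mass fraction burned vs. crank angle (Wiebe function)."""
    x = np.clip((theta - theta0) / dtheta, 0.0, None)
    return 1.0 - np.exp(-a * x ** (m + 1))

theta = np.linspace(-20, 60, 81)                  # crank angle [deg]
mfb = wiebe(theta, -5.0, 45.0, 5.0, 2.0)          # synthetic "measured" curve
mfb_noisy = mfb + np.random.normal(0, 0.01, mfb.size)

params, _ = curve_fit(wiebe, theta, mfb_noisy, p0=[0.0, 40.0, 5.0, 2.0])
print(params)   # fitted theta0, dtheta, a, m
```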
Abstract:
A range of societal issues has been caused by fossil fuel consumption in the transportation sector in the United States (U.S.), including health-related air pollution, climate change, dependence on imported oil, and other oil-related national security concerns. Biofuel production from various lignocellulosic biomass types, such as wood, forest residues, and agricultural residues, has the potential to replace a substantial portion of the total fossil fuel consumption. This research focuses on locating biofuel facilities and designing the biofuel supply chain to minimize the overall cost. For this purpose, an integrated methodology was proposed that combines GIS technology with simulation and optimization modeling methods. As a precursor to simulation and optimization modeling, the GIS-based methodology was used to preselect potential biofuel facility locations for biofuel production from forest biomass by employing a series of decision factors; the resulting candidate sites served as inputs for the simulation and optimization models. Candidate locations were selected based on a set of evaluation criteria, including county boundaries, the railroad transportation network, the state/federal road transportation network, water body (rivers, lakes, etc.) dispersion, city and village dispersion, population census data, biomass production, and no co-location with co-fired power plants. The simulation and optimization models were built around key supply activities, including biomass harvesting/forwarding, transportation, and storage. The built onsite storage served the spring breakup period, when road restrictions were in place and truck transportation on certain roads was limited. Both models were evaluated using multiple performance indicators, including cost (consisting of the delivered feedstock cost and the inventory holding cost), energy consumption, and GHG emissions. The impacts of energy consumption and GHG emissions were expressed in monetary terms to keep them consistent with cost. Compared with the optimization model, the simulation model provides a more dynamic look at a 20-year operation by considering the impacts associated with building inventory at the biorefinery to address the limited availability of biomass feedstock during the spring breakup period. The number of trucks required per day was estimated, and the inventory level throughout the year was tracked. Through the exchange of information across different procedures (harvesting, transportation, and biomass feedstock processing), a smooth flow of biomass from harvesting areas to a biofuel facility was implemented. The optimization model was developed to address issues related to locating multiple biofuel facilities simultaneously. The size of each potential biofuel facility was bounded between 30 MGY and 50 MGY. The optimization model is a static, Mathematical Programming Language (MPL)-based application that allows for sensitivity analysis by changing inputs to evaluate different scenarios. It was found that annual biofuel demand and biomass availability impact the optimal biofuel facility locations and sizes.
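A minimal sketch, using PuLP, of a capacitated facility-location model with the 30-50 MGY size bounds mentioned above; the candidate sites, demand, and cost figures are hypothetical placeholders rather than the MPL model from the study.

```python
# Toy capacitated facility-location MILP; all inputs are placeholders.
import pulp

sites = ["S1", "S2", "S3"]                    # GIS-preselected candidate sites
demand_total = 70.0                           # MGY of biofuel demand, assumed
fixed_cost = {"S1": 90.0, "S2": 85.0, "S3": 95.0}   # M$/yr, placeholder
unit_cost = {"S1": 1.2, "S2": 1.4, "S3": 1.1}       # M$ per MGY, placeholder

m = pulp.LpProblem("biofuel_facility_location", pulp.LpMinimize)
open_ = pulp.LpVariable.dicts("open", sites, cat="Binary")
prod = pulp.LpVariable.dicts("prod", sites, lowBound=0)   # MGY produced at site

m += pulp.lpSum(fixed_cost[s] * open_[s] + unit_cost[s] * prod[s] for s in sites)
m += pulp.lpSum(prod[s] for s in sites) >= demand_total
for s in sites:
    m += prod[s] <= 50.0 * open_[s]           # 50 MGY upper bound if open
    m += prod[s] >= 30.0 * open_[s]           # 30 MGY lower bound if open

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({s: (pulp.value(open_[s]), pulp.value(prod[s])) for s in sites})
```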
Abstract:
OBJECTIVE: In search of an optimal compression therapy for venous leg ulcers, a systematic review and meta-analysis was performed of randomized controlled trials (RCT) comparing compression systems based on stockings (MCS) with divers bandages. METHODS: RCT were retrieved from six sources and reviewed independently. The primary endpoint, completion of healing within a defined time frame, and the secondary endpoints, time to healing and pain, were entered into a meta-analysis using the tools of the Cochrane Collaboration. Additional subjective endpoints were summarized. RESULTS: Eight RCT (published 1985-2008) fulfilled the predefined criteria. Data presentation was adequate and showed moderate heterogeneity. The studies included 692 patients (21-178/study, mean age 61 years, 56% women). Analyzed were 688 ulcerated legs, present for 1 week to 9 years and measuring 1 to 210 cm(2). The observation period ranged from 12 to 78 weeks. Patient and ulcer characteristics were evenly distributed in three studies, favored the stocking groups in four, and favored the bandage group in one. Data on the pressure exerted by stockings and bandages were reported in seven and two studies, amounting to 31-56 and 27-49 mm Hg, respectively. The proportion of ulcers healed was greater with stockings than with bandages (62.7% vs 46.6%; P < .00001). The average time to healing (seven studies, 535 patients) was 3 weeks shorter with stockings (P = .0002). In no study did bandages perform better than MCS. Pain was assessed in three studies (219 patients), revealing an important advantage of stockings (P < .0001). Other subjective parameters and issues of nursing also revealed an advantage of MCS. CONCLUSIONS: Leg compression with stockings is clearly better than compression with bandages, has a positive impact on pain, and is easier to use.
Abstract:
In this paper, a computer-aided diagnostic (CAD) system for the classification of hepatic lesions from computed tomography (CT) images is presented. Regions of interest (ROIs) taken from nonenhanced CT images of normal liver, hepatic cysts, hemangiomas, and hepatocellular carcinomas have been used as input to the system. The proposed system consists of two modules: the feature extraction and the classification modules. The feature extraction module calculates the average gray level and 48 texture characteristics, which are derived from the spatial gray-level co-occurrence matrices, obtained from the ROIs. The classifier module consists of three sequentially placed feed-forward neural networks (NNs). The first NN classifies into normal or pathological liver regions. The pathological liver regions are characterized by the second NN as cyst or "other disease." The third NN classifies "other disease" into hemangioma or hepatocellular carcinoma. Three feature selection techniques have been applied to each individual NN: the sequential forward selection, the sequential floating forward selection, and a genetic algorithm for feature selection. The comparative study of the above dimensionality reduction methods shows that genetic algorithms result in lower dimension feature vectors and improved classification performance.
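A minimal sketch of sequential forward selection, one of the three techniques compared above, using scikit-learn on synthetic data sized like the 49 ROI features; the neural-network estimator and the number of features to select are assumptions, and the SFFS and genetic-algorithm variants are not reproduced.

```python
# Sequential forward selection with a small feed-forward NN on synthetic data;
# the feature set and target number of features are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the 49 ROI features (average gray level + 48 textures).
X, y = make_classification(n_samples=200, n_features=49, n_informative=8,
                           random_state=0)

nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
sfs = SequentialFeatureSelector(nn, n_features_to_select=8, direction="forward")
sfs.fit(X, y)
print(sfs.get_support(indices=True))   # indices of the selected features
```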
Abstract:
In manual order picking systems, order pickers walk or drive through a distribution warehouse in order to collect items requested by (internal or external) customers. In order to perform these operations efficiently, it is usually required that customer orders be combined into (more substantial) picking orders of limited size. The Order Batching Problem considered in this paper deals with the question of how a given set of customer orders should be combined such that the total length of all tours necessary to collect all items is minimized. The authors introduce two metaheuristic approaches for the solution of this problem: the first is based on Iterated Local Search, the second on Ant Colony Optimization. In a series of extensive numerical experiments, the newly developed approaches are benchmarked against classic solution methods. It is demonstrated that the proposed methods are not only superior to existing methods but also provide solutions that may allow distribution warehouses to be operated significantly more efficiently.
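A generic iterated local search skeleton of the kind the authors name, with a toy perturbation for a batching-style solution; the `local_search` and `cost` hooks and the batch representation are hypothetical, not the authors' operators.

```python
# Iterated local search skeleton; the problem-specific hooks are placeholders.
import random

def iterated_local_search(initial, local_search, perturb, cost, max_iters=100):
    best = local_search(initial)
    for _ in range(max_iters):
        candidate = local_search(perturb(best))
        if cost(candidate) < cost(best):      # accept only improving restarts
            best = candidate
    return best

# Hypothetical perturbation for order batching: a solution is a list of
# batches (lists of order ids); move one order between two random batches.
def perturb(batches):
    batches = [list(b) for b in batches]
    src, dst = random.sample(range(len(batches)), 2)
    if batches[src]:
        batches[dst].append(batches[src].pop())
    return batches
```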