893 results for Shape optimization method


Relevance:

30.00%

Publisher:

Abstract:

The synthesis and optimization of two Li-ion solid electrolytes were studied in this work. Different combinations of precursors were used to prepare La0.5Li0.5TiO3 via mechanosynthesis. Despite the ability of the mechanochemical reaction to form a perovskite phase, it was not possible to obtain a pure La0.5Li0.5TiO3 phase by this process. Of the seven combinations of precursors and conditions tested, the one in which La2O3, Li2CO3 and TiO2 were milled for 480 min (LaOLiCO-480) showed the best results, although trace impurity phases were still observed. The main impurity phase was La2O3 after mechanosynthesis (22.84%) and Li2TiO3 after calcination (4.20%). Two different sol-gel methods were used to substitute boron on the Zr-site of Li1+xZr2-xBx(PO4)3 or the P-site of Li1+6xZr2(P1-xBxO4)3, with the doping being achieved on the Zr-site using a method adapted from Alamo et al. (1989). The results show that the Zr-site, and not the P-site, is the preferential mechanism for B doping of LiZr2(PO4)3. Rietveld refinement of the unit-cell parameters was performed, and consideration of Vegard’s law verified that phase purity can be obtained up to x = 0.05. This agrees with the phases present in the XRD data, which showed the additional presence of the low-temperature (monoclinic) phase for compositions with x ≥ 0.075 in powders sintered at 1200 ºC for 12 h. The compositions inside the solid solution undergo the phase transition from triclinic (PDF#01-074-2562) to rhombohedral (PDF#01-070-6734) on heating from 25 to 100 ºC, as reported in the literature for the base composition. Despite several efforts, it was not possible to obtain dense pellets with physical integrity after sintering; further work is required to obtain dense pellets for the electrochemical characterisation of LiZr2(PO4)3 and Li1.05Zr1.95B0.05(PO4)3.
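
Vegard's law, invoked above to establish the solid-solution limit, states that the lattice parameters of a solid solution vary linearly with composition between the end members, so a departure from the linear trend signals the solubility limit. The sketch below illustrates that check in Python; the compositions and lattice parameters are hypothetical placeholders, not the refined values from this work.

```python
import numpy as np

# Vegard's law: a(x) = (1 - x) * a_end1 + x * a_end2, i.e. the lattice
# parameter of a solid solution varies linearly with composition.
# All numbers below are hypothetical placeholders, NOT refined values
# from this work.
x = np.array([0.0, 0.025, 0.05, 0.075, 0.10])              # nominal B content
a_refined = np.array([8.855, 8.852, 8.849, 8.848, 8.848])  # refined lattice parameter (placeholder)

# Fit the linear (Vegard) trend through the compositions assumed single-phase
slope, intercept = np.polyfit(x[:3], a_refined[:3], deg=1)
a_vegard = slope * x + intercept

# Compositions whose refined parameter departs from the Vegard line by more
# than a tolerance are likely outside the solid-solution range.
deviation = np.abs(a_refined - a_vegard)
for xi, dev in zip(x, deviation):
    print(f"x = {xi:.3f}  |a_refined - a_Vegard| = {dev:.4f}")
```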

Relevance:

30.00%

Publisher:

Abstract:

Topology optimization of linear elastic continuum structures is a challenging problem when local stress constraints are considered. The reasons are the singular behavior of the constraints with respect to the density design variables, combined with the large number of constraints even for small finite element meshes. This work presents an alternative formulation of the ε-relaxation technique, which provides a workaround for the singularity of the stress constraint. It also presents a new global stress-constraint formulation. The derivation of the constraint sensitivities by the adjoint method is shown. Results for single and multiple load cases show the potential of the new formulation.
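
A common way to build a single global stress constraint is to aggregate the local von Mises stress ratios into one smooth measure, for example a p-norm; a single aggregated constraint also keeps the adjoint sensitivity analysis cheap. The sketch below illustrates this generic idea only; it is not necessarily the specific global formulation proposed in this work.

```python
import numpy as np

# Illustrative sketch (not the paper's exact formulation): a single
# "global" stress constraint built by p-norm aggregation of the local
# von Mises stress ratios. Aggregation replaces one constraint per
# element with one smooth constraint for the whole mesh.
def global_stress_constraint(sigma_vm, sigma_allow, p=8.0):
    """sigma_vm: array of element von Mises stresses,
       sigma_allow: allowable stress. Returns g such that g <= 0
       approximately enforces max(sigma_vm) <= sigma_allow."""
    ratios = np.asarray(sigma_vm) / sigma_allow
    return (np.mean(ratios ** p)) ** (1.0 / p) - 1.0

# Example: three element stresses against a 250 MPa allowable
print(global_stress_constraint([180.0, 240.0, 255.0], 250.0))
```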

Relevance:

30.00%

Publisher:

Abstract:

The selection of a subset of requirements from all the requirements previously defined by customers is an important process, repeated at the beginning of each development step when an incremental or agile software development approach is adopted. The set of selected requirements will be developed during the current iteration. This selection problem can be reformulated as a search problem, allowing its treatment with metaheuristic optimization techniques. This paper studies how to apply Ant Colony Optimization algorithms to select requirements. First, we describe this problem formally, extending an earlier formulation, and introduce a method based on Ant Colony System to find a variety of efficient solutions. The performance achieved by the Ant Colony System is compared with that of Greedy Randomized Adaptive Search Procedure and the Non-dominated Sorting Genetic Algorithm, by means of computational experiments carried out on two instances of the problem constructed from data provided by experts.
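
To make the metaheuristic concrete, the sketch below shows a heavily simplified Ant Colony System applied to requirement selection under an effort budget. The requirement data, the ACS parameters and the single-objective reduction (maximizing total value) are illustrative assumptions; the paper itself searches for a set of efficient cost/satisfaction trade-offs.

```python
import random

# Heavily simplified Ant Colony System sketch for selecting requirements
# under an effort budget while maximizing aggregated client value.
# Data, parameters and the single objective are illustrative assumptions.
effort = [4, 7, 3, 9, 5]           # hypothetical development effort per requirement
value  = [10, 14, 6, 18, 9]        # hypothetical aggregated client value
budget = 15
n = len(effort)

tau = [1.0] * n                    # pheromone level per requirement
alpha, beta, rho, q0 = 1.0, 2.0, 0.1, 0.9

def construct_solution():
    selected, remaining, spent = [], set(range(n)), 0
    while True:
        feasible = [r for r in remaining if spent + effort[r] <= budget]
        if not feasible:
            break
        scores = [(tau[r] ** alpha) * ((value[r] / effort[r]) ** beta) for r in feasible]
        if random.random() < q0:                     # exploitation: best-scoring requirement
            r = feasible[max(range(len(feasible)), key=scores.__getitem__)]
        else:                                        # biased exploration: roulette wheel
            pick, acc = random.uniform(0, sum(scores)), 0.0
            for r, s in zip(feasible, scores):
                acc += s
                if acc >= pick:
                    break
        tau[r] = (1 - rho) * tau[r] + rho * 1.0      # ACS local pheromone update
        selected.append(r)
        remaining.remove(r)
        spent += effort[r]
    return selected, sum(value[i] for i in selected)

best_sel, best_val = max((construct_solution() for _ in range(200)), key=lambda s: s[1])
for r in best_sel:                                   # ACS global update on the best solution found
    tau[r] = (1 - rho) * tau[r] + rho * best_val / sum(value)
print("selected requirements:", best_sel, "total value:", best_val)
```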

Relevance:

30.00%

Publisher:

Abstract:

Over the last decade, the success of social networks has significantly reshaped how people consume information. Recommendation of content based on user profiles is well received. However, as users become predominantly mobile, little has been done to consider the impact of the wireless environment, especially capacity constraints and changing channel conditions. In this dissertation, we investigate a centralized wireless content delivery system that aims to optimize overall user experience given the capacity constraints of the wireless networks, by deciding what content to deliver, when, and how. We propose a scheduling framework that incorporates content-based reward and deliverability. Our approach exploits the broadcast nature of wireless communication and the social nature of content through multicasting and precaching. Results indicate that this novel joint optimization approach outperforms existing layered systems that separate recommendation and delivery, especially when the wireless network is operating at maximum capacity. By using a limited number of transmission modes, we significantly reduce the complexity of the optimization. We also introduce the design of a hybrid system that handles transmissions for both system-recommended content ('push') and active user requests ('pull'). Further, we extend the joint optimization framework to a wireless infrastructure with multiple base stations. The problem becomes much harder because there are many more system configurations, including but not limited to power allocation and how resources are shared among the base stations ('out-of-band', in which base stations transmit with dedicated spectrum resources and thus do not interfere; and 'in-band', in which they share the spectrum and need to mitigate interference). We propose a scalable two-phase scheduling framework: 1) each base station obtains delivery decisions and resource allocations individually; 2) the system consolidates the decisions and allocations, reducing redundant transmissions. Additionally, if the social network applications can provide predictions of how social content disseminates, the wireless network can schedule the transmissions accordingly and significantly improve dissemination performance by reducing delivery delay. We propose a novel method utilizing: 1) hybrid systems to handle active dissemination requests; and 2) predictions of dissemination dynamics from the social network applications. This method can mitigate the performance degradation of content dissemination due to wireless delivery delay. Results indicate that our proposed system design is both efficient and easy to implement.
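
A toy example of the multicast gain that motivates the joint formulation: a content item transmitted once serves every interested user, so its reward aggregates across users while its airtime cost is paid once, and selection under a capacity budget resembles a knapsack problem. The sketch below is only an illustration with made-up numbers, not the dissertation's actual optimization model.

```python
# Toy illustration (not the dissertation's model) of multicast scheduling:
# reward aggregates over all interested users, airtime cost is paid once,
# and a greedy reward-density rule picks what to multicast this frame.
contents = {
    # name: (per-user reward, interested users, airtime cost in slots)
    "A": (1.0, 40, 8),
    "B": (0.6, 90, 10),
    "C": (0.9, 15, 4),
    "D": (0.5, 70, 12),
}
budget_slots = 20

ranked = sorted(contents.items(),
                key=lambda kv: kv[1][0] * kv[1][1] / kv[1][2],  # reward density
                reverse=True)

schedule, used = [], 0
for name, (reward, users, cost) in ranked:
    if used + cost <= budget_slots:
        schedule.append(name)
        used += cost

print("multicast this frame:", schedule, "slots used:", used)
```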

Relevance:

30.00%

Publisher:

Abstract:

Today, providing drinking water and process water is one of the major problems in most countries; surface water often needs to be treated to achieve the necessary quality, and technological as well as financial difficulties place great restrictions on operating the treatment units. Although water supply by simple and cheap systems has been one of the important objectives of scientific and research centers worldwide, a great percentage of the population in developing countries, especially in rural areas, still does not have access to good-quality water. One large and readily available source of acceptable water is seawater. There are two main ways to treat seawater: evaporation and reverse osmosis (R.O.). Nowadays the R.O. system is widely used for desalination because of its low cost and ease of operation and maintenance. Seawater should be pretreated before the R.O. plant, because raw seawater contains constituents that can decrease the performance of the membranes in the R.O. system. The subject of this research may be useful in this regard, and we hope to achieve complete success in the design and construction of useful pretreatment systems for R.O. plants. One of the most important units in a seawater pretreatment plant is filtration; the conventional method is pressurized sand filters, and this research concerns a newer filtration technology called continuous backwash sand filtration (CBWSF). The CBWSF designed and tested in this research may be used more economically and with less difficulty. It consists of two main parts: the shell body and a central part comprising an airlift pump, raw water feeding pipe, air supply hose, backwash chamber and sand washer, as well as inlet and outlet connections. The CBWSF is a continuously operating filter, i.e. the filter does not have to be taken out of operation for backwashing or cleaning. Inlet water is fed through the sand bed while the bed moves downwards; the water is filtered while the sand becomes dirty. Simultaneously, the dirty sand is cleaned in the sand washer and the suspended solids are discharged in the backwash water. We analyze the behavior of the CBWSF in pretreatment of seawater as a replacement for the pressurized sand filter. One important factor harmful to R.O. membranes is bio-fouling, which is quantified by the Silt Density Index (SDI). This research focused on decreasing SDI and turbidity (NTU). Based on this goal, a pretreatment prototype was designed and manufactured for testing; the system design mainly followed the design fundamentals of CBWSF. The automatic backwash sand filter can be used in both small and large water supply schemes. In large water treatment plants, the filter units perform the filtration and backwash stages separately, while in small treatment plants the unit is usually compacted to achieve lower energy consumption. The analysis of the system showed that it may be used feasibly for water treatment, especially for limited populations. Its construction is rapid, simple and economical, and its performance is high because no moving mechanical part is used in it, so it may be proposed as an effective method to improve water quality, and consequently the hygiene level, in remote places of the country.
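
For reference, the Silt Density Index mentioned above is computed from the times needed to collect a fixed sample volume through a 0.45 µm membrane at constant pressure, before and after a set filtration period (usually 15 minutes). The sketch below follows this standard definition; the example reading is hypothetical.

```python
# Standard Silt Density Index calculation (SDI15-style test): water is
# filtered through a 0.45 micrometre membrane at constant pressure, and
# the plugging rate is derived from the time needed to collect a fixed
# sample volume before and after T minutes of filtration.
def silt_density_index(t_initial_s, t_final_s, T_min=15):
    """t_initial_s: seconds to collect the first sample volume,
       t_final_s:   seconds to collect the same volume after T_min minutes.
       Returns SDI (%/min); R.O. feed water typically targets low SDI."""
    return (1.0 - t_initial_s / t_final_s) / T_min * 100.0

print(silt_density_index(30.0, 45.0))   # hypothetical reading -> about 2.2
```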

Relevance:

30.00%

Publisher:

Abstract:

Policy and decision makers dealing with environmental conservation and land use planning often need to identify potential sites that can help minimize the sediment flow reaching riverbeds. This is the case for reforestation initiatives, which can include sediment flow minimization among their objectives. This paper proposes an Integer Programming (IP) formulation and a heuristic solution method for selecting a predefined number of locations to be reforested in order to minimize the sediment load at a given outlet of a watershed. Although the core structure of both methods can be applied to different sorts of flow, the formulations target the minimization of sediment delivery. The proposed approaches make use of a Single Flow Direction (SFD) raster map covering the watershed to construct a tree structure in which the outlet cell corresponds to the root node. The results obtained with both approaches are in agreement with expert assessments of erosion levels, slopes and distances to the riverbeds, which allows concluding that this approach is suitable for minimizing sediment flow. Since the results obtained with the IP formulation are the same as those obtained with the heuristic approach, an optimality proof is included in the present work. Given that the heuristic requires much less computation time, it is the more suitable solution method for large problems.
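
The sketch below illustrates, under stated assumptions, the data structure described above: an SFD raster maps each cell to the single downstream cell it drains into, so the watershed becomes a tree rooted at the outlet, and candidate reforestation sites can be evaluated by routing sediment down that tree. The tiny watershed, retention factor and greedy selection rule are placeholders, not the paper's instance or its actual heuristic.

```python
# Toy SFD watershed: each cell drains into exactly one downstream cell,
# forming a tree rooted at the outlet. Values are hypothetical.
downstream = {"c1": "c3", "c2": "c3", "c3": "c5", "c4": "c5", "c5": "outlet"}
sediment = {"c1": 4.0, "c2": 7.0, "c3": 2.0, "c4": 5.0, "c5": 1.0}  # load generated per cell
retention = 0.6          # assumed fraction of passing sediment trapped by a reforested cell

def load_at_outlet(reforested):
    """Route each cell's sediment down the tree; reforested cells trap
       a fraction of whatever passes through them."""
    total = 0.0
    for cell, produced in sediment.items():
        flow, cur = produced, cell
        while cur != "outlet":
            if cur in reforested:
                flow *= (1.0 - retention)
            cur = downstream[cur]
        total += flow
    return total

# Greedy stand-in heuristic: pick k cells that most reduce the outlet load
k, chosen = 2, set()
for _ in range(k):
    best = min((c for c in sediment if c not in chosen),
               key=lambda c: load_at_outlet(chosen | {c}))
    chosen.add(best)
print("reforest:", chosen, "outlet load:", load_at_outlet(chosen))
```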

Relevance:

30.00%

Publisher:

Abstract:

Poster presented at: 21st World Hydrogen Energy Conference 2016. Zaragoza, Spain, 13-16 June 2016.

Relevance:

30.00%

Publisher:

Abstract:

Dissertation (Master's) — Universidade de Brasília, Instituto de Ciências Biológicas, Programa de Pós-Graduação Nanociência e Nanobiotecnologia, 2016.

Relevance:

30.00%

Publisher:

Abstract:

This thesis deals with the analysis of the shape of 2D objects. In computer vision there are numerous visual cues from which information can be extracted. One of the most widely used is the shape or contour of objects. With suitable processing, this visual feature allows information about the objects to be extracted, scenes to be analyzed, and so on. However, the contour or silhouette of an object contains redundant information. This excess of data, which contributes no new knowledge, should be removed in order to speed up subsequent processing or to minimize the size of the contour representation for storage or transmission. This data reduction must be carried out without losing information that is important for representing the original contour. A reduced version of a contour can be obtained by removing intermediate points and joining the remaining points with segments. This reduced representation of a contour is known as a polygonal approximation. Polygonal approximations of contours therefore represent a compressed version of the original information, and their main use is to reduce the volume of information needed to represent the contour of an object. Nevertheless, in recent years these approximations have also been used for object recognition, with polygonal approximation algorithms applied directly to extract the feature vectors used in the learning stage. The contributions of this thesis therefore focus on several aspects of polygonal approximations. In the first contribution, several polygonal approximation algorithms are improved by means of a preprocessing stage that accelerates them, even allowing better-quality solutions to be obtained in less time. In the second contribution, a new polygonal approximation algorithm is proposed that obtains optimal solutions in less time than the other methods reported in the literature. In the third contribution, an approximation algorithm is proposed that is able to obtain the optimal solution in few iterations in most cases. Finally, an improved version of the optimal algorithm for obtaining polygonal approximations is proposed, which solves an alternative optimization problem.
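
As a representative example of the family of algorithms the thesis studies, the sketch below implements the classic Ramer-Douglas-Peucker polygonal approximation; it is shown only for illustration and is not one of the thesis's own contributions.

```python
import math

# Classic Ramer-Douglas-Peucker polygonal approximation: recursively keep
# the point farthest from the chord between the endpoints whenever that
# distance exceeds a tolerance epsilon, otherwise drop all interior points.
def rdp(points, epsilon):
    if len(points) < 3:
        return points
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    # perpendicular distance of every interior point to the chord
    dists = [abs(dy * (x - x1) - dx * (y - y1)) / norm for x, y in points[1:-1]]
    i_max = max(range(len(dists)), key=dists.__getitem__)
    if dists[i_max] > epsilon:
        left = rdp(points[:i_max + 2], epsilon)
        right = rdp(points[i_max + 1:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]

contour = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(rdp(contour, epsilon=0.5))
```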

Relevance:

30.00%

Publisher:

Abstract:

Energy Conservation Measure (ECM) project selection is made difficult by real-world constraints: limited resources to implement savings retrofits, various suppliers in the market, and project financing alternatives. Many of these energy-efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of available ECMs, federal, state and local agencies must determine how to implement projects at the lowest cost. The most common methods of implementation planning are suboptimal with respect to cost. Federal, state and local agencies can obtain greater returns on their energy conservation investment than with traditional methods, regardless of the implementing organization. This dissertation outlines several approaches to improve the traditional energy conservation models. Public buildings in regions with similar energy conservation goals in the United States or internationally can also benefit greatly from this research. Additionally, many private owners of buildings are under mandates to conserve energy; for example, Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code upon any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies concerned with making the most cost-effective selection when leveraging energy services companies or utilities. The two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2). An additional model leverages a single congressional appropriation to implement ECM projects (Chapter 3): returns from implemented ECM projects are used to fund additional ECM projects. In these cases, fluctuations in energy costs and uncertainty in the estimated savings severely influence ECM project selection and the amount of the appropriation requested. A proposed risk-aversion method imposes a minimum on the number of projects completed in each stage. A comparative method using Conditional Value at Risk is also analyzed, and time consistency is addressed in this chapter. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interactive effects between ECM projects. The approach makes use of McCormick inequalities (McCormick, 1976) to re-express constraints that involve the product of binary variables with an exact linearization (related to the convex hull of those constraints). This model additionally shows the benefits of learning between stages while remaining consistent with the single-congressional-appropriation framework.
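
For reference, the exact linearization referred to in Chapter 4 rests on a standard result: the product of two binary variables can be replaced by a new variable constrained by the McCormick inequalities, which describe the convex hull of the feasible points. Shown here in generic notation, not the dissertation's:

```latex
% Exact linearization of z = x * y for binary x, y (McCormick, 1976):
% the linear inequalities below describe the convex hull of the feasible set.
\begin{aligned}
z &\le x, \\
z &\le y, \\
z &\ge x + y - 1, \\
z &\ge 0, \qquad x, y \in \{0,1\}.
\end{aligned}
```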

Relevance:

30.00%

Publisher:

Abstract:

In the first part of this thesis we search for physics beyond the Standard Model through the search for anomalous production of the Higgs boson using the razor kinematic variables. We search for anomalous Higgs boson production using proton-proton collisions at a center-of-mass energy of √s = 8 TeV collected by the Compact Muon Solenoid experiment at the Large Hadron Collider, corresponding to an integrated luminosity of 19.8 fb⁻¹.

In the second part we present a novel method for using a quantum annealer to train a classifier to recognize events containing a Higgs boson decaying to two photons. We train that classifier using simulated proton-proton collisions at √s=8 TeV producing either a Standard Model Higgs boson decaying to two photons or a non-resonant Standard Model process that produces a two photon final state.

The production mechanisms of the Higgs boson are precisely predicted by the Standard Model based on its association with the mechanism of electroweak symmetry breaking. We measure the yield of Higgs bosons decaying to two photons in kinematic regions predicted to have very little contribution from a Standard Model Higgs boson and search for an excess of events, which would be evidence of either non-standard production or non-standard properties of the Higgs boson. We divide the events into disjoint categories based on kinematic properties and the presence of additional b-quarks produced in the collisions. In each of these disjoint categories, we use the razor kinematic variables to characterize events with topological configurations incompatible with typical configurations from Standard Model production of the Higgs boson.

We observe an excess of events with di-photon invariant mass compatible with the Higgs boson mass and localized in a small region of the razor plane. We observe 5 events with a predicted background of 0.54 ± 0.28, corresponding to a p-value of 10⁻³ and a local significance of 3.35σ. This background prediction comes from 0.48 predicted non-resonant background events and 0.07 predicted SM Higgs boson events. We proceed to investigate the properties of this excess, finding that it produces a compelling peak in the di-photon invariant mass distribution and is physically separated in the razor plane from the predicted background. Using another method of measuring the background and the significance of the excess, we find a 2.5σ deviation from the Standard Model hypothesis over a broader range of the razor plane.

In the second part of the thesis we transform the problem of training a classifier to distinguish events with a Higgs boson decaying to two photons from events with other sources of photon pairs into the Hamiltonian of a spin system, whose ground state is the best classifier. We then use a quantum annealer to find the ground state of this Hamiltonian and thereby train the classifier. We find that we are able to do this successfully in fewer than 400 annealing runs for a problem of median difficulty at the largest problem size considered. The networks trained in this manner exhibit good classification performance, competitive with more complicated machine learning techniques, and are highly resistant to overtraining. We also find that the nature of the training gives access to additional solutions that can be used to improve the classification performance by up to 1.2% in some regions.
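
The sketch below gives a toy illustration of the kind of mapping described: each weak classifier receives a binary weight, and minimizing the squared error of the weighted vote over a training set yields a quadratic function of those weights, i.e. a QUBO/Ising-style Hamiltonian whose ground state an annealer can search for. The construction and data here are simplified stand-ins, not the thesis's exact formulation, and brute-force enumeration replaces the quantum annealer.

```python
import itertools
import numpy as np

# Toy example of casting classifier training as a ground-state search.
# Binary weight s_i turns weak classifier i on or off; the squared error
# of the weighted vote is quadratic in the s_i, giving QUBO coefficients.
rng = np.random.default_rng(0)
n_weak, n_events = 6, 200
labels = rng.choice([-1.0, 1.0], size=n_events)
# hypothetical weak-classifier outputs, weakly correlated with the label
weak = np.sign(labels[None, :] + rng.normal(0, 2.0, size=(n_weak, n_events)))

# Quadratic coefficients of || labels - (1/n) * sum_i s_i * weak_i ||^2
J = (weak @ weak.T) / n_weak**2          # pairwise couplings
h = -2.0 * (weak @ labels) / n_weak      # linear terms

def energy(s):
    s = np.asarray(s, dtype=float)
    return s @ J @ s + h @ s

# Brute force stands in for the quantum annealer on this tiny problem
best = min(itertools.product([0, 1], repeat=n_weak), key=energy)
strong = np.sign(np.array(best) @ weak)
print("weights:", best, "training accuracy:", np.mean(strong == labels))
```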

Relevance:

30.00%

Publisher:

Abstract:

Purpose: To develop and optimize the variables that influence fluoxetine orally disintegrating tablet (ODT) formulation. Methods: Fluoxetine ODTs were prepared using the direct compression method. A three-factor, three-level Box-Behnken design was used to develop and optimize the fluoxetine ODT formulation. The design generated 15 formulations with different lubricant concentrations (X1), lubricant mixing times (X2) and compression forces (X3), and their effects on tablet weight (Y1), thickness (Y2), hardness (Y3), friability (Y4, %) and disintegration time (Y5) were monitored. Results: All powder blends showed acceptable flow properties, ranging from good to excellent. The disintegration time (Y5) was directly affected by lubricant concentration (X1). Lubricant mixing time (X2) had a direct effect on tablet thickness (Y2) and hardness (Y3), while compression force (X3) had a direct impact on tablet hardness (Y3), friability (Y4) and disintegration time (Y5). Accordingly, the Box-Behnken design suggested an optimized formula of 0.86 mg (X1), 15.3 min (X2) and 10.6 kN (X3). The prediction error percentages for responses Y1, Y2, Y3, Y4 and Y5 were 0.31, 0.52, 2.13, 3.92 and 3.75%, respectively. Formulae 4 and 8 achieved 90% drug release within the first 5 min of the dissolution test. Conclusion: A fluoxetine ODT formulation has been developed and optimized successfully using a Box-Behnken design and has also been manufactured efficiently using the direct compression technique.
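
For reference, a three-factor Box-Behnken design consists of the twelve edge midpoints (each pair of factors at ±1 with the third at its centre level) plus centre points, which yields the 15 runs mentioned above. The sketch below generates such a design in coded units; the decoding to real factor ranges is a placeholder, not the study's actual levels.

```python
import itertools
import numpy as np

# Three-factor Box-Behnken design in coded units: 12 edge midpoints
# (pairs of factors at +/-1, third factor at 0) plus 3 centre points.
runs = []
for i, j in itertools.combinations(range(3), 2):
    for a, b in itertools.product([-1, 1], repeat=2):
        row = [0, 0, 0]
        row[i], row[j] = a, b
        runs.append(row)
runs += [[0, 0, 0]] * 3                 # centre points
design = np.array(runs, dtype=float)    # shape (15, 3)

# hypothetical decoding: X1 lubricant conc., X2 mixing time (min), X3 force (kN)
low  = np.array([0.5, 5.0, 5.0])
high = np.array([1.5, 20.0, 15.0])
real = low + (design + 1) / 2 * (high - low)
print(design.shape)
print(real[:4])
```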

Relevance:

30.00%

Publisher:

Abstract:

Purpose: To investigate the efficiency of silver nanoparticles synthesized by a wet chemical method, and to evaluate their antibacterial and anti-cancer activities. Methods: A wet chemical method was used to synthesize silver nanoparticles (AgNPs) from silver nitrate and trisodium citrate dihydrate (C6H5O7Na3.2H2O), with sodium borohydride (NaBH4) as the reducing agent. The AgNPs and the reaction process were characterized by UV-visible spectrometry, a zetasizer, transmission electron microscopy (TEM) and scanning electron microscopy (SEM) equipped with energy dispersive spectroscopy (EDS). The antibacterial and cytotoxic effects of the synthesized nanoparticles were investigated by the agar diffusion method and the MTT assay, respectively. Results: The silver nanoparticles formed were spherical in shape with a mean size of 10.3 nm. The results showed good antibacterial properties, killing both Gram-positive and Gram-negative bacteria, and the aqueous suspension displayed cytotoxic activity against the colon adenocarcinoma (HCT-116) cell line. Conclusion: The findings indicate that silver nanoparticles synthesized by the wet chemical method demonstrate good cytotoxic activity against the colon adenocarcinoma (HCT-116) cell line and strong antibacterial activity against various strains of bacteria.

Relevance:

30.00%

Publisher:

Abstract:

Direct sampling methods are increasingly being used to solve the inverse medium scattering problem of estimating the shape of the scattering object. A simple direct method using one incident wave and multiple measurements was proposed by Ito, Jin and Zou. In this report, we perform analytic and numerical studies of the direct sampling method. The method was found to be effective in general; however, the investigation exposed a few exceptions. Analytic solutions in different situations were studied to verify the viability of the method, while numerical tests were used to validate its effectiveness.
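
The sketch below gives a generic direct-sampling-type indicator of the kind studied: the scattered field measured at receivers on a circle is correlated with the fundamental solution of the Helmholtz equation centred at each sampling point, and the indicator peaks near the scatterer. The wavenumber, geometry and normalisation are assumptions and may differ from the exact index function of Ito, Jin and Zou.

```python
import numpy as np
from scipy.special import hankel1

k = 2 * np.pi                                           # assumed wavenumber
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
receivers = 5.0 * np.column_stack((np.cos(theta), np.sin(theta)))

def fundamental(points, z):
    """2D Helmholtz fundamental solution (i/4) * H0^(1)(k |x - z|)."""
    r = np.linalg.norm(points - z, axis=-1)
    return 0.25j * hankel1(0, k * r)

# Synthetic "measured" scattered field from a small scatterer at (1, 0.5)
# (a point-source approximation, used only to exercise the indicator).
true_center = np.array([1.0, 0.5])
u_s = fundamental(receivers, true_center)

def indicator(z):
    # normalised correlation of the data with the fundamental solution at z
    g = fundamental(receivers, z)
    return abs(np.vdot(g, u_s)) / (np.linalg.norm(g) * np.linalg.norm(u_s))

grid = [np.array([x, y]) for x in np.linspace(-2, 2, 41) for y in np.linspace(-2, 2, 41)]
best = max(grid, key=indicator)
print("indicator maximiser:", best)                     # should lie near (1, 0.5)
```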

Relevance:

30.00%

Publisher:

Abstract:

The objective of this study is to identify optimal designs of converging-diverging supersonic and hypersonic nozzles that achieve maximum uniformity of the thermodynamic and flow-field properties with respect to their average values at the nozzle exit. This is a multi-objective design optimization problem in which the design variables are parameters defining the shape of the nozzle. This work shows how variation of these parameters influences the flow non-uniformities at the nozzle exit. A Computational Fluid Dynamics (CFD) software package, ANSYS FLUENT, was used to simulate the compressible, viscous gas flow-field in forty nozzle shapes, including heat transfer analysis. The results of two turbulence models, k-ε and k-ω, were computed and compared. With the analysis results obtained, Response Surface Methodology (RSM) was applied to perform the multi-objective optimization. The optimization was performed with the ModeFrontier software package using Kriging and Radial Basis Function (RBF) response surfaces. The final Pareto-optimal nozzle shapes were then analyzed with ANSYS FLUENT to confirm the accuracy of the optimization process.
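
The sketch below illustrates the response-surface step in isolation: fit an RBF surrogate to a set of sampled designs and query it cheaply during optimization. The design parameters and responses are made-up placeholders, not results from the forty FLUENT runs, and scipy's RBFInterpolator stands in for the ModeFrontier response surfaces.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Fit a radial basis function surrogate to sampled (design -> response)
# pairs and search it cheaply. Samples and response are placeholders.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(40, 3))      # e.g. coded shape parameters
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * X[:, 2]   # stand-in "non-uniformity" metric

surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")

# Surrogate-based search: evaluate many candidates and keep the design
# with the lowest predicted non-uniformity.
candidates = rng.uniform(0.0, 1.0, size=(5000, 3))
pred = surrogate(candidates)
best = candidates[np.argmin(pred)]
print("best candidate (coded):", best, "predicted response:", pred.min())
```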