969 results for Computational method
Abstract:
Void formation during the injection phase of the liquid composite molding process can be explained as a consequence of the non-uniform progression of the flow front. This is due to the dual porosity within the fiber preform (the spacing between the fiber tows is much larger than between the fibers within a tow), so the phenomenon is best explained by a mesolevel analysis, where the characteristic dimension is given by the fiber tow diameter, of the order of millimeters. In mesolevel analysis, liquid impregnation along two different scales, inside the fiber tows and within the open spaces between them, must be considered, and the coupling between the two flow regimes must be addressed. In such cases, it is extremely important to correctly account for surface tension effects, which can be modeled as a capillary pressure applied at the flow front. Numerical implementation of such boundary conditions leads to ill-posing of the problem, in terms of both the classical weak formulation and the stabilized formulation. As a consequence, an error in mass conservation accumulates, especially along the free flow front. A numerical procedure was formulated and implemented in an existing Free Boundary Program to reduce this error significantly.
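To make the capillary boundary condition concrete, the sketch below evaluates a commonly used Young-Laplace-type estimate of the capillary pressure at the flow front; the form factor F, the fiber diameter, and the fluid properties are illustrative assumptions, not values from the paper.

```python
import numpy as np

def capillary_pressure(gamma, theta_deg, d_fiber, porosity, F=4.0):
    """Young-Laplace-type capillary pressure for flow through a fiber tow.

    A common estimate (the exact form factor F varies by author and by
    flow direction relative to the fibers) is
        P_c = F * gamma * cos(theta) * (1 - phi) / (d_f * phi),
    with gamma the surface tension, theta the contact angle,
    d_f the fiber diameter, and phi the porosity.
    """
    theta = np.radians(theta_deg)
    return F * gamma * np.cos(theta) * (1.0 - porosity) / (d_fiber * porosity)

# Illustrative values only: an epoxy-like resin on glass fiber
print(capillary_pressure(gamma=0.035, theta_deg=30.0, d_fiber=17e-6, porosity=0.5))
```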
Abstract:
Naturally Occurring Radioactive Materials (NORM) are materials found naturally in the environment that contain radioactive isotopes which can harm the health of the workers who handle them. Present in underground work such as mining and tunnel construction in granite zones, these materials are difficult to identify and characterize without appropriate equipment for risk evaluation. The assessment methods are exemplified with a case study on the handling and processing of phosphate rock, where significant amounts of radioactive isotopes were found and, consequently, elevated radon concentrations in enclosed spaces containing these materials.
Abstract:
Energy resource scheduling is becoming increasingly important as the use of distributed resources intensifies and the massive use of gridable vehicles (V2G) is envisaged. This paper presents a methodology for day-ahead energy resource scheduling in smart grids considering the intensive use of distributed generation and V2G. The main focus is the comparison of different EV management approaches in day-ahead energy resource management, namely uncontrolled charging, smart charging, V2G, and Demand Response (DR) programs in the V2G approach. Three different DR programs are designed and tested (trip reduce, shifting reduce, and reduce+shifting). Another important contribution of the paper is the comparison between deterministic and computational intelligence techniques to reduce the execution time. The proposed scheduling is solved with a modified particle swarm optimization. Mixed-integer non-linear programming is also used for comparison purposes. A full AC power flow calculation is included so that the network constraints can be taken into account. A case study with a 33-bus distribution network and 2000 V2G resources is used to illustrate the performance of the proposed method.
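As an illustration of the metaheuristic side of the comparison, a minimal particle swarm optimizer applied to a toy dispatch problem is sketched below; the paper's modified PSO, its solution encoding, and its cost function are not reproduced here, so every name and value is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(cost, lb, ub, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (illustrative sketch only).
    cost: function mapping an (n_dim,) array to a scalar to minimize."""
    dim = lb.size
    x = rng.uniform(lb, ub, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)          # keep particles within resource limits
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy usage: dispatch 5 generators to meet a 100 kW demand at minimum quadratic cost
demand = 100.0
cost = lambda p: np.sum(0.1 * p**2 + 2 * p) + 1e3 * abs(p.sum() - demand)
best, f = pso(cost, lb=np.zeros(5), ub=np.full(5, 40.0))
```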
Abstract:
Demand response can play a very relevant role in the context of power systems with an intensive use of distributed energy resources, of which renewable intermittent sources are a significant part. More active consumer participation can help improve system reliability and decrease or defer the required investments. The adequate use and management of demand response is even more important in competitive electricity markets. However, experience shows that it is difficult to get demand response adequately used in this context, demonstrating the need for research in this area. The most important difficulties seem to be caused by inadequate business models and inadequate management of demand response programs. This paper contributes methodologies and a computational infrastructure able to provide the involved players with adequate decision support on the design and use of demand response programs and contracts. The presented work uses DemSi, a demand response simulator developed by the authors to simulate demand response actions and programs, which includes realistic power system simulation. It includes an optimization module for the application of demand response programs and contracts using deterministic and metaheuristic approaches. The proposed methodology is an important improvement to the simulator, providing adequate tools for the adoption of demand response programs by the involved players. A machine learning method based on clustering and classification techniques, resulting in a rule base concerning the use of DR programs and contracts, is also used. A case study concerning the use of demand response in an incident situation is presented.
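A minimal sketch of the clustering-plus-classification idea behind such a rule base is shown below, using scikit-learn on hypothetical demand response event features; DemSi's actual features and models are not described in this abstract, so the inputs are stand-ins.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
# Hypothetical features of past DR events: [price, reduction_MW, duration_h]
X = rng.random((200, 3)) * [100.0, 5.0, 4.0]

# 1) Cluster past scenarios into operating profiles
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# 2) Learn an interpretable rule base mapping event features to a profile
tree = DecisionTreeClassifier(max_depth=3).fit(X, labels)
print(export_text(tree, feature_names=["price", "reduction_MW", "duration_h"]))
```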
Abstract:
The ventilation efficiency concept is an attempt to quantify a parameter that can easily distinguish the different options for air diffusion in building spaces. Thirteen air diffusion strategies were measured in a test chamber through the application of the tracer gas method, with the objective of validating the calculations made by computational fluid dynamics (CFD). The Air Change Efficiency (ACE) and the Contaminant Removal Effectiveness (CRE), the two most internationally accepted indicators, were compared. The main results of this work show that the values from the numerical simulations are in good agreement with the experimental measurements, and also that the solutions adopted to maximize ventilation efficiency should be the schemes that operate with low supply air speeds and small differences between the supply air temperature and the room temperature.
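For reference, the two indicators can be computed from tracer-gas quantities as follows; these are the commonly used definitions (ACE equals 50 % for perfect mixing), and the optional supply-concentration correction is an assumption about the exact convention adopted in the paper.

```python
def air_change_efficiency(tau_nominal, mean_age_of_air):
    # ACE = tau_n / (2 <tau>): 1.0 for ideal piston flow, 0.5 for perfect mixing
    return tau_nominal / (2.0 * mean_age_of_air)

def contaminant_removal_effectiveness(c_exhaust, c_mean_room, c_supply=0.0):
    # CRE > 1: contaminants are extracted more effectively than under perfect mixing
    return (c_exhaust - c_supply) / (c_mean_room - c_supply)
```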
Abstract:
A low cost method (LCM) to produce a gaseous environment for the isolation of Helicobacter pylori was compared with the standard GasPak system. The LCM uses a carbonated antacid tablet, a plastic bag with tap water, a candle, and a wide-mouthed glass jar fitted with a tight metallic screw cap and a rubber gasket. Antral gastric biopsies from 153 cases were incubated in duplicate on blood agar plates and processed with the two methods. In 95 cases the agent was isolated by both methods, and in 10 cases only by the standard method; the opposite was found in five cases, and 43 cases were negative. The difference is not significant (Pearson's χ² = 93.25, p > 0.05).
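The abstract reports a Pearson chi-square; for paired method comparisons like this one, a test on the 15 discordant pairs (McNemar style) is a common alternative, sketched below with the counts taken from the abstract.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Paired outcomes from the abstract: 95 isolated by both methods,
# 5 by the LCM only, 10 by the GasPak standard only, 43 by neither.
table = np.array([[95, 5],
                  [10, 43]])
result = mcnemar(table, exact=True)   # exact binomial test on the 15 discordant pairs
print(result.statistic, result.pvalue)  # p > 0.05: consistent with "not significant"
```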
Abstract:
Reporter genes are routinely used in every molecular and cellular biology laboratory for studying heterologous gene expression and general cell biology mechanisms, such as transfection processes. Although well characterized and broadly implemented, reporter genes present serious limitations, either by involving time-consuming procedures or by presenting possible side effects on the expression of the heterologous gene or even on the general cellular metabolism. Fourier transform mid-infrared (FT-MIR) spectroscopy was evaluated to simultaneously analyze, in a rapid (minutes) and high-throughput mode (using 96-well microplates), the transfection efficiency and the effect of the transfection process on the biochemical composition and metabolism of the host cell. Semi-adherent HEK and adherent AGS cell lines, transfected with the plasmid pVAX-GFP using Lipofectamine, were used as model systems. Good partial least squares (PLS) models were built to estimate the transfection efficiency, either considering each cell line independently (R² ≥ 0.92; RMSECV ≤ 2 %) or considering both cell lines simultaneously (R² = 0.90; RMSECV = 2 %). Additionally, the effect of the transfection process on the biochemical and metabolic features of HEK cells could be evaluated directly from the FT-IR spectra. Due to the high sensitivity of the technique, it was also possible to discriminate the effect of the transfection process from that of the transfection reagent on HEK cells, e.g., through the analysis of spectral biomarkers and biochemical and metabolic features. The present results are far beyond what any reporter gene assay or other specific probe can offer for these purposes.
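A minimal sketch of building such a PLS model with cross-validated error estimation is given below; the spectra and efficiency values are random stand-ins, since the paper's data are not reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
# Hypothetical stand-ins: 60 FT-MIR spectra (600 wavenumbers) and the
# corresponding measured transfection efficiencies, in percent
X = rng.random((60, 600))
y = 30 * X[:, :5].mean(axis=1) + rng.normal(0, 1, 60)

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()  # 10-fold cross-validation
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
r2 = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"RMSECV = {rmsecv:.2f} %, R2 = {r2:.2f}")
```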
Abstract:
The most common techniques for stress analysis and strength prediction of adhesive joints involve analytical or numerical methods such as the Finite Element Method (FEM). However, the Boundary Element Method (BEM) is an alternative numerical technique that has been successfully applied to the solution of a wide variety of engineering problems. This work evaluates the applicability of the boundary element code BEASY as a design tool for analyzing adhesive joints. The linearity of the peak shear and peel stresses with the applied displacement is studied and compared between BEASY and the analytical model of Frostig et al., considering a bonded single-lap joint under tensile loading. The BEM results are also compared with FEM results in terms of stress distributions. To evaluate the mesh convergence of BEASY, the influence of mesh refinement on the peak shear and peel stress distributions is assessed. Joint stress predictions are carried out numerically in BEASY and ABAQUS®, and analytically with the models of Volkersen, Goland and Reissner, and Frostig et al. The failure loads for each model are compared with experimental results. The preparation, processing, and mesh creation times are compared for all models. The BEASY results present good agreement with the conventional methods.
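For context, Volkersen's closed-form shear stress distribution, in its simplified form for identical adherends, can be evaluated as below; the material and geometry values are illustrative only, not taken from the paper.

```python
import numpy as np

def volkersen_shear(P, b, l, E, t, Ga, ta, n=200):
    """Volkersen shear-stress distribution for a single-lap joint with
    identical adherends (a simplification; the general case is asymmetric).
    P: load, b: width, l: overlap, E,t: adherend modulus/thickness,
    Ga,ta: adhesive shear modulus/thickness."""
    w = np.sqrt(2.0 * Ga / (E * t * ta))   # characteristic load-transfer parameter
    x = np.linspace(-l / 2, l / 2, n)
    tau = (P * w / (2.0 * b)) * np.cosh(w * x) / np.sinh(w * l / 2.0)
    return x, tau

x, tau = volkersen_shear(P=5e3, b=25e-3, l=25e-3, E=70e9, t=2e-3, Ga=1e9, ta=0.2e-3)
print(tau.max() / tau.mean())   # peak-to-average shear, i.e. stress concentration
```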
Abstract:
Adhesive bonding is nowadays a serious candidate to replace methods such as fastening or riveting because of its attractive mechanical properties. As a result, adhesives are being increasingly used in industries such as automotive, aerospace, and construction. It is therefore highly important to predict the strength of bonded joints in order to assess the feasibility of joining during the fabrication of components (e.g., with complex geometries) or for repair purposes. This work studies the tensile behaviour of adhesive joints between aluminium adherends considering different values of the adherend thickness (h), using the double-cantilever beam (DCB) test. The experimental work consists of the determination of the tensile fracture toughness (GIC) for the different joint configurations. A conventional fracture characterization method was used together with a J-integral approach, which takes into account the plasticity effects occurring in the adhesive layer. An optical measurement method is used to evaluate the crack tip opening and the adherend rotation at the crack tip during the test, supported by a Matlab® sub-routine for the automated extraction of these quantities. As the output of this work, a comparative evaluation of bonded systems with different adherend thicknesses is carried out, and complete fracture data in tension is provided for the subsequent strength prediction of joints under identical conditions.
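As a point of reference for the fracture characterization, simple beam theory gives a closed-form estimate of the mode-I energy release rate for a DCB specimen, sketched below with illustrative values; the paper's J-integral approach additionally accounts for adhesive plasticity and root rotation, which this estimate ignores.

```python
def gic_beam_theory(P, a, b, h, E):
    """Mode-I energy release rate for a DCB specimen from simple beam theory:
    G_I = 12 P^2 a^2 / (E b^2 h^3), obtained from G = (P^2 / 2b) dC/da with
    arm compliance C = 2 a^3 / (3 E I) and I = b h^3 / 12 per arm."""
    return 12.0 * P**2 * a**2 / (E * b**2 * h**3)

# Illustrative values: 200 N at 50 mm crack length, 25 mm wide, 3 mm thick arms
print(gic_beam_theory(P=200.0, a=0.05, b=0.025, h=0.003, E=70e9))  # J/m^2
```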
Abstract:
Dissertation submitted to obtain the degree of Master in Electrical Engineering, Energy branch.
Abstract:
Final Master's project submitted to obtain the degree of Master in Mechanical Engineering.
Abstract:
"Many-core" systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, and the upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations in the existing recursive-calculus-based approaches to compute the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called "Branch and Prune" (BP). Our proposed method provides tighter, safe estimates compared with the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely "Branch, Prune and Collapse" (BPC), which offers a configurable parameter providing a flexible trade-off between the computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP correspond to the two special cases of BPC in which the trade-off parameter is set to 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and provide case studies to observe the impact of the task parameters on the WCTT estimates.
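The paper's exact model is not reproduced here, but a generic branch-and-prune skeleton for a worst-case (maximization) search over interference scenarios, with feasibility pruning driven by task parameters, could look like the following; all callables are placeholders for the problem-specific pieces.

```python
def wctt_bp(partial, extend, feasible, upper_bound, value):
    """Illustrative branch-and-prune skeleton (not the paper's exact algorithm).
    partial: a partially built interference scenario; extend(partial) yields
    its children; feasible() prunes scenarios ruled out by the task-level
    characteristics (periods, offsets); value() is the traversal time of a
    complete scenario; upper_bound() is an optimistic bound used to cut
    branches that cannot beat the current worst case."""
    children = list(extend(partial))
    if not children:                       # leaf: a complete interference scenario
        return value(partial)
    worst = 0.0
    for child in children:
        if not feasible(child):            # prune: this interference cannot occur
            continue
        if upper_bound(child) <= worst:    # prune: cannot exceed current worst case
            continue
        worst = max(worst, wctt_bp(child, extend, feasible, upper_bound, value))
    return worst
```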
Abstract:
The bending of simply supported composite plates is analyzed using a direct collocation meshless numerical method. In order to optimize the node distribution, the Direct MultiSearch (DMS) method for multi-objective optimization is applied. In addition, the method optimizes the shape parameter of the radial basis functions. The optimization algorithm was able to find good solutions for a large variety of node distributions.
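A minimal 1-D illustration of radial basis function collocation with a tunable shape parameter, of the kind whose value the DMS optimizer would search over, is sketched below with a multiquadric basis; the test function and node counts are arbitrary.

```python
import numpy as np

def mq_rbf_interpolate(x_nodes, f_nodes, x_eval, c=1.0):
    """Multiquadric RBF collocation in 1-D: solve A w = f with
    A_ij = sqrt(|x_i - x_j|^2 + c^2); c is the shape parameter that,
    together with node placement, strongly affects the accuracy."""
    A = np.sqrt((x_nodes[:, None] - x_nodes[None, :]) ** 2 + c**2)
    w = np.linalg.solve(A, f_nodes)
    B = np.sqrt((x_eval[:, None] - x_nodes[None, :]) ** 2 + c**2)
    return B @ w

x = np.linspace(0, 1, 15)
xe = np.linspace(0, 1, 200)
err = np.abs(mq_rbf_interpolate(x, np.sin(2 * np.pi * x), xe, c=0.3)
             - np.sin(2 * np.pi * xe)).max()
print(err)   # maximum interpolation error for this choice of c and nodes
```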
Abstract:
A multiobjective approach to the optimization of passive damping for vibration reduction in sandwich structures is presented in this paper. Constrained optimization is conducted for the maximization of modal loss factors and the minimization of the weight of sandwich beams and plates with elastic laminated constraining layers and a viscoelastic core, using layer thicknesses, materials, and laminate ply orientation angles as design variables. The problem is solved using the Direct MultiSearch (DMS) solver for derivative-free multiobjective optimization, and the solutions are compared with alternative ones obtained using genetic algorithms.
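At the core of any such multiobjective comparison is non-dominance; a small Pareto filter over hypothetical (weight, loss factor) design points is sketched below, with the damping objective negated so that both columns are minimized.

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated rows of `points`, assuming minimization
    of every column."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            keep.append(i)
    return points[keep]

rng = np.random.default_rng(3)
designs = rng.random((50, 2)) * [5.0, 1.0]   # hypothetical (weight, loss factor)
# Negate the loss factor so maximizing damping becomes minimization
front = pareto_front(np.column_stack([designs[:, 0], -designs[:, 1]]))
```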
Abstract:
This paper presents a new parallel implementation of a previously developed hyperspectral coded aperture (HYCA) algorithm for compressive sensing on graphics processing units (GPUs). The HYCA method combines the ideas of spectral unmixing and compressive sensing, exploiting the high spatial correlation that can be observed in the data and the generally low number of endmembers needed to explain the data. The proposed implementation exploits the GPU architecture at a low level, thus taking full advantage of the computational power of GPUs by using shared memory and coalesced memory accesses. The proposed algorithm is evaluated not only in terms of reconstruction error but also in terms of computational performance, using two different GPU architectures from NVIDIA: the GeForce GTX 590 and the GeForce GTX TITAN. Experimental results using real data reveal significant speedups with regard to the serial implementation.
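The HYCA kernels themselves are not shown in the abstract; as a generic illustration of the coalesced access pattern mentioned above, a Numba CUDA kernel in which consecutive threads touch consecutive array elements is sketched below (requires a CUDA-capable GPU; all names and sizes are placeholders).

```python
import numpy as np
from numba import cuda

@cuda.jit
def axpy(a, x, y, out):
    # One thread per element: thread i touches element i, so each warp reads
    # consecutive addresses -- the coalesced pattern the paper exploits.
    i = cuda.grid(1)
    if i < x.size:
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)
threads = 256
blocks = (n + threads - 1) // threads
axpy[blocks, threads](np.float32(2.0), x, y, out)  # Numba copies arrays to the GPU
```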