964 results for Non-load Bearing Walls
Abstract:
The use of fibre reinforced plastics (FRPs) in structures is increasing considerably. Their advantages are related to their low weight and high strength and stiffness. The improvement of dynamic characteristics has benefited the aeronautics, automobile, railway, naval and sporting goods industries. Drilling is a widely used machining technique, as it is needed to assemble parts into a structure. It is a unique machining process, characterized by the existence of two different mechanisms: extrusion by the drill chisel edge and cutting by the rotating cutting lips. Drilling raises particular problems that can reduce the mechanical and fatigue strength of the parts. In this work, quasi-isotropic hybrid laminates with 25% carbon fibre reinforced plies and 4 mm thickness are produced, tested and drilled. Three different drill geometries are compared. The results considered are the interlaminar fracture toughness in Mode I (GIc), the thrust force during drilling and the delamination extent after drilling. A bearing test is performed to evaluate the influence of the tool on the load-carrying capacity of the plate. The results show the influence of drill geometry on delamination, and a correlation linking plate damage to bearing test results is presented.
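Delamination of the kind measured here is commonly quantified by the delamination factor, the ratio of the maximum damage diameter around the hole to the nominal hole diameter. The abstract does not define it, so the formula and the numbers below are an illustrative sketch only:

```python
def delamination_factor(d_max_mm: float, d_nominal_mm: float) -> float:
    """Conventional delamination factor F_d = D_max / D_0.

    d_max_mm: maximum diameter of the delaminated zone (hypothetical input).
    d_nominal_mm: nominal drilled-hole diameter (hypothetical input).
    """
    if d_nominal_mm <= 0:
        raise ValueError("nominal hole diameter must be positive")
    return d_max_mm / d_nominal_mm

# Example: a 6 mm nominal hole with a 7.2 mm maximum damage diameter
print(round(delamination_factor(7.2, 6.0), 2))  # -> 1.2
```

An undamaged hole gives F_d = 1; larger values indicate more extensive delamination, which is how drill geometries can be ranked against each other.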
Abstract:
The interest in the development of climbing robots has grown rapidly in recent years. Climbing robots are useful devices that can be adopted in a variety of applications, such as maintenance and inspection in the process and construction industries. These systems are mainly adopted in places where direct access by a human operator is very expensive, because of the need for scaffolding, or very dangerous, due to the presence of a hostile environment. The main motivations are to increase operational efficiency, by eliminating the costly assembly of scaffolding, and to protect human health and safety in hazardous tasks. Several climbing robots have already been developed, and others are under development, for applications ranging from cleaning to the inspection of difficult-to-reach constructions. A wall climbing robot should not only be light, so as to limit the adhesion forces required, but should also offer a large payload, so that it can carry instrumentation during navigation. These machines should be capable of travelling over different types of surfaces with different inclinations, such as floors, walls and ceilings, and of moving between such surfaces (Elliot et al. (2006); Sattar et al. (2002)). Furthermore, they should be capable of adapting and reconfiguring to various environmental conditions and should be self-contained. Up to now, considerable research has been devoted to these machines and various types of experimental models have been proposed (according to Chen et al. (2006), over 200 prototypes aimed at such applications had been developed worldwide by the year 2006). However, the application of climbing robots is still limited. Apart from a few successful industrialized products, most are only prototypes and few of them can be found in common use, due to unsatisfactory performance in on-site tests (regarding aspects such as speed, cost and reliability). Chen et al. 
(2006) present the main design problems affecting the system performance of climbing robots and also suggest solutions to these problems. The two major issues in the design of wall climbing robots are their locomotion and adhesion methods. With respect to locomotion, four types are often considered: crawler, wheeled, legged and propulsion robots. Although the crawler type is able to move relatively fast, it is not well suited to rough environments. On the other hand, the legged type easily copes with obstacles found in the environment, although it is generally slower and requires complex control systems. Regarding adhesion to the surface, the robots should be able to produce a secure gripping force using a light-weight mechanism. The adhesion method is generally classified into four groups: suction force, magnetic, gripping to the surface and thrust force. Recently, however, new methods for ensuring adhesion, based on biological findings, have been proposed. The vacuum type principle is light and easy to control, though it presents the problem of supplying compressed air; an alternative, with costs in terms of weight, is the adoption of a vacuum pump. The magnetic type principle implies heavy actuators and can be used only on ferromagnetic surfaces. Thrust force type robots make use of the forces developed by thrusters to adhere to surfaces, but are used in very restricted and specific applications. Bearing these facts in mind, this chapter presents a survey of the different applications and technologies adopted for the implementation of climbing robot locomotion and adhesion to surfaces, focusing on the new technologies recently being developed to fulfil these objectives. The chapter is organized as follows. Section two presents several applications of climbing robots. 
Sections three and four present the main locomotion principles and the main "conventional" technologies for adhering to surfaces, respectively. Section five describes recent biologically inspired technologies for robot adhesion to surfaces. Section six introduces several new architectures for climbing robots. Finally, section seven outlines the main conclusions.
Abstract:
In the last twenty years, genetic algorithms (GAs) have been applied in a plethora of fields, such as control, system identification, robotics, planning and scheduling, image processing, and pattern and speech recognition (Bäck et al., 1997). In robotics, the problems of trajectory planning, collision avoidance and manipulator structure design considering a single criterion have been solved using several techniques (Alander, 2003). Most engineering applications, however, require the optimization of several criteria simultaneously. Often the problems are complex, include discrete and continuous variables, and offer no prior knowledge about the search space. These kinds of problems are considerably more complex, since they consider multiple design criteria simultaneously within the optimization procedure. This is known as multi-criteria (or multi-objective) optimization, which has been addressed successfully through GAs (Deb, 2001). The overall aim of multi-criteria evolutionary algorithms is to achieve a set of non-dominated optimal solutions known as the Pareto front. At the end of the optimization procedure, instead of a single optimal (or near-optimal) solution, the decision maker can select a solution from the Pareto front. Some of the key issues in multi-criteria GAs are: i) the number of objectives, ii) obtaining a Pareto front as wide as possible and iii) achieving a Pareto front that is uniformly spread. Indeed, multi-objective techniques using GAs have been increasing in relevance as a research area. In 1989, Goldberg suggested the use of a GA to solve multi-objective problems, and since then other researchers have developed new methods, such as the multi-objective genetic algorithm (MOGA) (Fonseca & Fleming, 1995), the non-dominated sorting genetic algorithm (NSGA) (Deb, 2001) and the niched Pareto genetic algorithm (NPGA) (Horn et al., 1994), among several other variants (Coello, 1998). 
In this work, the trajectory planning problem considers: i) robots with 2 and 3 degrees of freedom (dof), ii) the inclusion of obstacles in the workspace and iii) up to five criteria used to qualify the evolving trajectory, namely the joint travelling distance, joint velocity, end-effector Cartesian distance, end-effector Cartesian velocity and the energy involved. These criteria are used to minimize the joint and end-effector travelled distance, the trajectory ripple and the energy required by the manipulator to reach the destination point. Bearing these ideas in mind, this chapter addresses the planning of robot trajectories, meaning the development of an algorithm to find a continuous motion that takes the manipulator from a given starting configuration to a desired end position without colliding with any obstacle in the workspace. The chapter is organized as follows. Section 2 describes trajectory planning and several approaches proposed in the literature. Section 3 formulates the problem, namely the representation adopted to solve the trajectory planning task and the objectives considered in the optimization. Section 4 studies the convergence of the algorithm. Section 5 studies a 2R manipulator (i.e., a robot with two rotational joints/links) when the trajectory optimization considers two and five objectives. Sections 6 and 7 present the results for the 3R redundant manipulator with five objectives and for other complementary experiments, respectively. Finally, section 8 draws the main conclusions.
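The non-domination test at the core of such multi-objective GAs can be sketched as follows. This is a minimal illustration, not the authors' implementation; all objectives (e.g. joint distance, energy) are assumed to be minimised:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b: a is no worse in
    every objective and strictly better in at least one (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# Toy example with two objectives (say, joint distance vs. energy):
pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(pareto_front(pts))  # (3.0, 4.0) is dominated by (2.0, 3.0)
```

Algorithms such as NSGA repeatedly apply this test to rank the population into successive non-dominated fronts, from which the decision maker ultimately picks one trade-off solution.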
Abstract:
The interlaminar fracture toughness in pure mode II (GIIc) of a Carbon-Fibre Reinforced Plastic (CFRP) composite is characterized experimentally and numerically in this work, using the End-Notched Flexure (ENF) fracture characterization test. The value of GIIc was extracted by a new data reduction scheme that avoids crack length measurement, named the Compliance-Based Beam Method (CBBM). This method eliminates the crack measurement errors, which can be non-negligible and affect the accuracy of the fracture energy calculations. Moreover, it accounts for Fracture Process Zone (FPZ) effects. A numerical study using the Finite Element Method (FEM) and a triangular cohesive damage model, implemented within interface finite elements and based on the indirect use of Fracture Mechanics, was performed to evaluate the suitability of the CBBM for obtaining GIIc. This was done by comparing the input values of GIIc in the numerical models with those resulting from the application of the CBBM to the numerical load-displacement (P-δ) curve. In this numerical study, the Compliance Calibration Method (CCM) was also used to extract GIIc, for comparison purposes.
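For reference, the Compliance Calibration Method mentioned at the end rests on the Irwin-Kies relation G_II = (P²/2b)·dC/da; with the usual cubic compliance fit C = C0 + m·a³ for the ENF specimen this gives G_IIc = 3·m·Pc²·ac²/(2b). A minimal sketch follows; the function name, inputs and numbers are hypothetical, and units must simply be mutually consistent:

```python
import numpy as np

def ccm_giic(a, C, P_c, a_c, b):
    """CCM for the ENF test: fit C = C0 + m*a**3 to calibration data,
    then apply Irwin-Kies, G_IIc = 3*m*P_c**2*a_c**2 / (2*b).

    a, C: measured crack lengths and compliances (hypothetical data);
    P_c: critical load; a_c: crack length at P_c; b: specimen width.
    """
    # Linear least-squares fit of C against a**3 (slope m, intercept C0)
    m, C0 = np.polyfit(np.asarray(a, float) ** 3, np.asarray(C, float), 1)
    return 3.0 * m * P_c ** 2 * a_c ** 2 / (2.0 * b)

# Synthetic calibration data obeying C = 1e-4 + 2e-6 * a**3 exactly:
a = [20.0, 25.0, 30.0, 35.0, 40.0]
C = [1e-4 + 2e-6 * ai ** 3 for ai in a]
print(ccm_giic(a, C, P_c=100.0, a_c=30.0, b=20.0))
```

The CBBM of the abstract goes further: it replaces the measured crack length a_c by an equivalent crack length inferred from the current compliance, which is what removes the crack-monitoring errors and captures the FPZ effect.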
Abstract:
It is important to understand and forecast a typical or a particular household's daily consumption in order to design and size suitable renewable energy systems and energy storage. In this research on Short Term Load Forecasting (STLF), Artificial Neural Networks (ANNs) were used and, despite the unpredictability of consumption, it was shown that the electricity consumption of a household can be forecast with confidence. ANNs are recognized as a suitable methodology for modelling hourly and daily energy consumption and load forecasting. Input variables such as apartment area, number of occupants and electrical appliance consumption, together with Boolean inputs such as the hourly metering system, were considered. Furthermore, the investigation carried out aims to define an ANN architecture and a training algorithm that achieve a robust model for forecasting the energy consumption of a typical household. It was observed that a feed-forward ANN trained with the Levenberg-Marquardt algorithm provided good performance. The research used a database of consumption records logged in 93 real households in Lisbon, Portugal, between February 2000 and July 2001, including both weekdays and weekends. The results show that the ANN approach provides a reliable model for forecasting household electric energy consumption and load profiles. © 2014 The Author.
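A feed-forward network of the kind described can be sketched compactly as below. The abstract's model is trained with Levenberg-Marquardt on real inputs (area, occupancy, appliances, hourly metering flags); to keep this sketch self-contained it uses plain gradient descent and a synthetic daily load curve, so everything here is illustrative only:

```python
import numpy as np

# One-hidden-layer feed-forward network for hourly-load regression,
# trained by full-batch gradient descent on the mean-squared error.
rng = np.random.default_rng(0)

def train(X, y, hidden=8, lr=0.2, epochs=4000):
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, hidden); b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)              # hidden activations
        err = (H @ W2 + b2) - y               # residual of linear output
        gW2 = H.T @ err / len(y); gb2 = err.mean()
        dH = np.outer(err, W2) * (1 - H ** 2) # backprop through tanh
        gW1 = X.T @ dH / len(y); gb1 = dH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xn: np.tanh(Xn @ W1 + b1) @ W2 + b2

# Synthetic "daily profile": hour-of-day encoded as sin/cos plus a flag.
hours = np.arange(24)
X = np.column_stack([np.sin(2 * np.pi * hours / 24),
                     np.cos(2 * np.pi * hours / 24),
                     np.zeros(24)])           # placeholder weekend flag
y = 0.5 + 0.4 * np.sin(2 * np.pi * (hours - 7) / 24)  # synthetic load (kW)
model = train(X, y)
print(float(np.mean((model(X) - y) ** 2)))    # training MSE
```

Encoding the hour as sin/cos pairs, as above, is a common way to give a network the cyclic structure of a daily profile; the real model would add the household descriptors listed in the abstract as further input columns.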
Abstract:
Integrated Master's in Environmental Engineering, profile: Environmental Systems Management
Abstract:
The very high antiproliferative activity of [Co(Cl)(H2O)(phendione)2][BF4] (phendione is 1,10-phenanthroline-5,6-dione) against three human tumor cell lines (half-maximal inhibitory concentration below 1 μM) and its slight selectivity for the colorectal tumor cell line compared with healthy human fibroblasts led us to explore the mechanisms of action underlying this promising antitumor potential. As previously shown by our group, this complex induces cell cycle arrest in S phase and subsequent cell death by apoptosis, and it also reduces the expression of proteins typically upregulated in tumors. In the present work, we demonstrate that [Co(Cl)(phendione)2(H2O)][BF4] (1) does not reduce the viability of nontumorigenic breast epithelial cells by more than 85% at 1 μM, (2) promotes the upregulation of proapoptotic Bax and cell-cycle-related p21, and (3) induces the release of lactate dehydrogenase, which is partially reversed by ursodeoxycholic acid. DNA interaction studies were performed to uncover the genotoxicity of the complex and demonstrate that even though it displays a Kb (± standard error of the mean) of (3.48 ± 0.03) × 10^5 M^-1 and is able to produce double-strand breaks in a concentration-dependent manner, it does not exert any clastogenic effect ex vivo, ruling out DNA as a major cellular target for the complex. Steady-state and time-resolved fluorescence spectroscopy studies are indicative of a strong and specific interaction of the complex with human serum albumin, involving one binding site at a distance of approximately 1.5 nm from the Trp214 indole side chain, with log Kb ≈ 4.7, thus suggesting that this complex can be efficiently transported by albumin in the blood plasma.
Abstract:
A detailed analysis of the fabrics of the chilled margin of a thick dolerite dyke (the Foum Zguid dyke, Southern Morocco) was performed in order to better understand the development of sub-fabrics during dyke emplacement and cooling. AMS data were complemented with measurements of the paramagnetic and ferrimagnetic fabrics (measured with a high-field torque magnetometer), neutron texture analysis and microstructural analyses. The ferrimagnetic and AMS fabrics are similar, indicating that the ferrimagnetic minerals dominate the AMS signal. The paramagnetic fabric differs from the previous ones. Based on the crystallization timing of the different mineralogical phases, the paramagnetic fabric appears to be related to the upward flow, while the ferrimagnetic fabric rather reflects the late stage of dyke emplacement and cooling stresses. © 2014 Elsevier B.V. All rights reserved.
Abstract:
We analyse the possibility that, in two Higgs doublet models, one or more of the Higgs couplings to fermions or to gauge bosons change sign relative to the respective Standard Model Higgs couplings. Possible sign changes in the coupling of a neutral scalar to charged ones are also discussed. These wrong signs can have important physical consequences, manifesting themselves in Higgs production via gluon fusion or in Higgs decay into two gluons or two photons. We consider all possible wrong-sign scenarios, as well as the symmetric limit, in all possible Yukawa implementations of the two Higgs doublet model, for two different possibilities: the observed Higgs boson is the lightest CP-even scalar, or the heaviest one. We also analyse thoroughly the impact of the currently available LHC data on such scenarios. With all 8 TeV data analysed, all wrong-sign scenarios are allowed in all Yukawa types, even at the 1σ level. However, we show that B-physics constraints are crucial in excluding the possibility of wrong-sign scenarios when tan β is below 1. We also discuss the future prospects for probing the wrong-sign scenarios at the next LHC run. Finally, we present a scenario in which the alignment limit could be excluded due to non-decoupling, in the case where the heavy CP-even Higgs is the one discovered at the LHC.
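For concreteness, in the standard 2HDM notation (α the CP-even mixing angle and β defined by tan β = v2/v1; these symbols are not spelled out in the abstract), the coupling modifiers of the lighter CP-even scalar h in the type-II model read

```latex
\kappa_V = \sin(\beta-\alpha), \qquad
\kappa_U = \frac{\cos\alpha}{\sin\beta}
         = \sin(\beta-\alpha) + \cot\beta\,\cos(\beta-\alpha), \qquad
\kappa_D = -\frac{\sin\alpha}{\cos\beta}
         = \sin(\beta-\alpha) - \tan\beta\,\cos(\beta-\alpha).
```

The wrong-sign regime corresponds to κ_D κ_V < 0 with |κ_D| ≈ 1; it is reached at sin(β+α) = 1, where κ_D = −1 exactly. Since this requires tan β cos(β−α) ≈ 2, it is only compatible with a SM-like κ_V for sizeable tan β, which is why the B-physics exclusion of tan β below 1 matters for these scenarios.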
Abstract:
We introduce the notions of equilibrium distribution and time of convergence in discrete non-autonomous graphs. Under some conditions, we give an estimate of the time of convergence to the equilibrium distribution using the second largest eigenvalue of some matrices associated with the system.
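The role of the second largest eigenvalue is easiest to see in the autonomous special case: for a fixed stochastic matrix, the distance to the stationary distribution decays like |λ₂|ᵗ, so roughly t ≈ log ε / log |λ₂| steps reach accuracy ε. The sketch below illustrates only this special case (the paper's setting applies a different matrix at each step), with an illustrative two-state chain:

```python
import numpy as np

def convergence_time(P, eps=1e-6):
    """Heuristic number of steps for a stochastic matrix P to mix to
    accuracy eps, from the second largest eigenvalue modulus."""
    moduli = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    lam2 = moduli[1]                  # second largest eigenvalue modulus
    return int(np.ceil(np.log(eps) / np.log(lam2)))

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])           # eigenvalues 1 and 0.7
print(convergence_time(P))           # -> 39
```

A smaller spectral gap 1 − |λ₂| therefore means a longer convergence time, which is the quantity the abstract's estimate controls in the non-autonomous setting.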
Abstract:
Applied Mathematical Modelling, Vol. 33
Abstract:
In the field of appearance-based robot localization, the mainstream approach uses a quantized representation of local image features. An alternative strategy is the exploitation of raw feature descriptors, thus avoiding approximations due to quantization. In this work, the quantized and non-quantized representations are compared with respect to their discriminative power, in the context of the robot global localization problem. Having demonstrated the advantages of the non-quantized representation, the paper proposes mechanisms to reduce the computational burden this approach would carry when applied in its simplest form. This reduction is achieved through a hierarchical strategy that gradually discards candidate locations, and by exploiting two simplifying assumptions about the training data. The potential of the non-quantized representation is further exploited by resorting to the relation between entropy and discriminative power: the non-quantized representation facilitates the assessment of the distinctiveness of features through the entropy measure. Building on this finding, the robustness of the localization system is enhanced by modulating the importance of features according to their entropy. Experimental results support the effectiveness of this approach, as well as the validity of the proposed computation reduction methods.
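The entropy-based modulation of feature importance can be sketched as follows. This is a hypothetical illustration of the general idea, not the paper's actual weighting rule: a descriptor whose match likelihood is spread almost uniformly over the candidate locations (high entropy) is weakly discriminative and gets a low weight, while a peaked distribution gets a high weight:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a distribution over candidate locations."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    logs = np.log2(p, where=p > 0, out=np.zeros_like(p))
    return -np.sum(p * logs)

def feature_weight(p, n_locations):
    """Weight in [0, 1]: 1 for a perfectly peaked location distribution,
    0 for a uniform (uninformative) one. Illustrative normalisation."""
    return 1.0 - entropy(p) / np.log2(n_locations)

print(feature_weight([0.97, 0.01, 0.01, 0.01], 4))  # distinctive feature
print(feature_weight([0.25, 0.25, 0.25, 0.25], 4))  # ambiguous -> 0.0
```

Down-weighting high-entropy features in this spirit makes the location estimate rely on the descriptors that actually distinguish one place from another.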
Abstract:
In this paper, we consider Cournot competition between a nonprofit firm and a for-profit firm in a homogeneous goods market with uncertain demand. Given an asymmetric tax schedule, we compute the Bayesian-Nash equilibrium explicitly. Furthermore, we analyze the effects of the tax rate and of the degree of altruistic preference on the market equilibrium outcomes.
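As a baseline for the Bayesian-Nash setting of the abstract, the textbook Cournot duopoly (certain demand, no taxes, both firms profit-maximising) has the closed form below; the parameter values are illustrative only:

```python
# Textbook Cournot duopoly: inverse demand p = a - b*(q1 + q2), constant
# marginal costs c1, c2. Equilibrium: q_i* = (a - 2*c_i + c_j) / (3*b).
def cournot_equilibrium(a, b, c1, c2):
    q1 = (a - 2 * c1 + c2) / (3 * b)
    q2 = (a - 2 * c2 + c1) / (3 * b)
    return q1, q2

def best_response(a, b, own_cost, rival_q):
    """Maximiser of profit (a - b*(q + rival_q) - own_cost) * q over q >= 0."""
    return max(0.0, (a - own_cost - b * rival_q) / (2 * b))

q1, q2 = cournot_equilibrium(a=10, b=1, c1=2, c2=4)
print(q1, q2)  # each quantity is a best response to the other
```

The paper's model departs from this baseline in three ways: one firm's objective includes an altruistic (nonprofit) term, demand is uncertain so firms best-respond in expectation (Bayesian-Nash), and the tax schedule enters the two first-order conditions asymmetrically.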
Abstract:
Rhenium (I, III-V or VII) complexes bearing N-donor or oxo ligands catalyse the Baeyer-Villiger oxidation of cyclic and linear ketones (e.g. 2-methylcyclohexanone, 2-methylcyclopentanone, cyclohexanone, cyclopentanone, cyclobutanone and 3,3-dimethyl-2-butanone) to the corresponding lactones or esters in the presence of aqueous H2O2 (30%). The effects of various reaction parameters are studied, allowing yields of up to 54% to be achieved.
Abstract:
A number of novel, water-stable, redox-active cobalt complexes of the C-functionalized tripodal ligands tris(pyrazolyl)methane XC(pz)3 (X = HOCH2, CH2OCH2Py or CH2OSO2Me) are reported, along with their effects on DNA. The compounds were isolated as air-stable solids and fully characterized by IR and FIR spectroscopy, ESI-MS(±), cyclic voltammetry, controlled-potential electrolysis, elemental analysis and, in a number of cases, single-crystal X-ray diffraction. They showed moderate cytotoxicity in vitro towards the HCT116 colorectal carcinoma and HepG2 hepatocellular carcinoma human cancer cell lines. This loss of viability is correlated with an increase in apoptosis in these tumour cell lines. Reactivity studies with biomolecules such as reducing agents, H2O2 and plasmid DNA, as well as UV-visible titrations, were also performed to provide tentative insights into the mode of action of the complexes. Incubation of the Co(II) complexes with pDNA induced double-strand breaks without requiring the presence of any activator. This pDNA cleavage appears to be mediated by O-centred radical species.