Abstract:
Component joining is typically performed by welding, fastening, or adhesive bonding. For bonded aerospace applications, adhesives must withstand high temperatures (200°C or above, depending on the application), which requires their mechanical characterization under identical conditions. The extended finite element method (XFEM) is an enhancement of the finite element method (FEM) that can be used for the strength prediction of bonded structures. This work proposes and validates damage laws for a thin layer of an epoxy adhesive at room temperature (RT), 100, 150, and 200°C using the XFEM. The fracture toughness (GIc) and the maximum load in pure tensile loading were determined by testing double-cantilever beam (DCB) and bulk tensile specimens, respectively, which permitted building the damage laws for each temperature. The bulk test results revealed that the maximum load decreased gradually with temperature. On the other hand, the value of GIc of the adhesive, extracted from the DCB data, was shown to be relatively insensitive to temperature up to the glass transition temperature (Tg), while above Tg (at 200°C) a large reduction took place. The output of the DCB numerical simulations for the various temperatures showed good agreement with the experimental results, which validated the obtained data for strength prediction of bonded joints in tension. Based on these results, the XFEM proved to be a viable alternative for the accurate strength prediction of bonded structures.
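As a concrete illustration of how such a damage law can be assembled from the two measured properties, the sketch below builds a linear-softening (triangular) traction-separation law from a toughness GIc and a peak strength. The function name, the assumed initial stiffness and the example input values are illustrative only, not the laws identified in the paper.

```python
# Minimal sketch: triangular (linear-softening) damage law defined by a
# measured toughness G_Ic and peak traction t0. The initial stiffness k and
# the example numbers are illustrative assumptions, not values from the study.

def triangular_law(g_ic, t0, k=1e6):
    """Return (delta_0, delta_f, traction) for a triangular law.

    g_ic : fracture toughness in N/mm (area under the traction-separation curve)
    t0   : peak traction in MPa
    k    : assumed initial (penalty) stiffness in MPa/mm
    """
    delta_0 = t0 / k            # separation at damage onset
    delta_f = 2.0 * g_ic / t0   # separation at failure (triangle area = G_Ic)

    def traction(delta):
        if delta <= delta_0:    # undamaged, elastic branch
            return k * delta
        if delta >= delta_f:    # fully damaged
            return 0.0
        # linear softening between onset and complete failure
        return t0 * (delta_f - delta) / (delta_f - delta_0)

    return delta_0, delta_f, traction

# A law for a lower temperature would simply use the larger G_Ic / t0 measured
# there; the shape stays the same.
d0, df, t = triangular_law(g_ic=0.4, t0=25.0)
```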
Abstract:
Adhesively-bonded joints are extensively used in several fields of engineering. Cohesive Zone Models (CZM) have been used for the strength prediction of adhesive joints, as an add-in to Finite Element (FE) analyses that allows simulation of damage growth by consideration of energetic principles. A useful feature of CZM is that different shapes can be developed for the cohesive laws, depending on the nature of the material or interface to be simulated, allowing an accurate strength prediction. This work studies the influence of the CZM shape (triangular, exponential or trapezoidal) used to model a thin adhesive layer in single-lap adhesive joints, to estimate its influence on the strength prediction under different material conditions. By performing this study, guidelines are provided on the possibility of using a CZM shape that may not be the most suited for a particular adhesive, but that may be more straightforward to use/implement and have fewer convergence problems (e.g. the triangular CZM), thus attaining the solution faster. The overall results showed that joints bonded with ductile adhesives are highly influenced by the CZM shape, and that the trapezoidal shape best fits the experimental data. Moreover, the smaller the overlap length (LO), the greater the influence of the CZM shape. On the other hand, the influence of the CZM shape can be neglected when using brittle adhesives, without compromising too much the accuracy of the strength predictions.
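To make the shape comparison concrete, the sketch below parameterises a triangular and a trapezoidal traction-separation law so that both enclose the same toughness GIc; only the shape (and hence the simulated ductility) differs. Function names and the plateau fraction are assumptions for illustration, not the laws calibrated in the paper.

```python
import numpy as np

# Illustrative CZM shapes normalised to the same area (toughness G_Ic).
# Names and the plateau_fraction parameter are assumptions, not the paper's
# calibrated laws.

def triangular_czm(g_ic, t0):
    delta_f = 2.0 * g_ic / t0                     # area = 0.5 * t0 * delta_f
    return np.array([0.0, 0.0, delta_f]), np.array([0.0, t0, 0.0])

def trapezoidal_czm(g_ic, t0, plateau_fraction=0.5):
    # area = 0.5 * t0 * delta_f * (1 + plateau_fraction)  =>  solve for delta_f
    delta_f = 2.0 * g_ic / (t0 * (1.0 + plateau_fraction))
    delta_2 = plateau_fraction * delta_f          # end of the stress plateau
    return np.array([0.0, 0.0, delta_2, delta_f]), np.array([0.0, t0, t0, 0.0])

# The trapezoid's stress plateau mimics the plastic behaviour of ductile
# adhesives, which is why it tends to fit their experimental response better.
```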
Abstract:
The higher education system in Europe is currently under stress and the debates over its reform and future are gaining momentum. Now that, for most countries, we are in a time of change, in the overall society and the whole education system, the legal and political dimensions have gained prominence, which has not been followed by a more integrative approach to the problem of order, its reform and the issue of regulation, beyond the typical static and classical cost-benefit analyses. The two classical approaches for studying (and for designing the policy measures of) the problem of the reform of the higher education system - the cost-benefit analysis and the legal scholarship description - have to be integrated. The argument of our paper is that the very integration of economic and legal approaches, what Warren Samuels called the legal-economic nexus, is meaningful and necessary, especially if we want to address the problem of order (as formulated by Joseph Spengler) and the overall regulation of the system. On the one hand, and without neglecting the interest and insights gained from cost-benefit analysis, or other approaches to value-for-money assessment, we will focus our study on the legal, social and political aspects of the regulation of the higher education system and its reform in Portugal. On the other hand, the economic and financial problems have to be taken into account, but in a more inclusive way with regard to the indirect and other socio-economic costs not contemplated in traditional or standard assessments of policies for the tertiary education sector. In the first section of the paper, we will discuss the theoretical and conceptual underpinning of our analysis, focusing on the evolutionary approach, the role of critical institutions, the legal-economic nexus and the problem of order. All these elements are related to the institutional tradition, from Veblen and Commons to Spengler and Samuels. The second section states the problem of regulation in the higher education system and the issue of policy formulation for tackling the problem. The current situation is clearly one of crisis, with the expansion of the cohorts of young students coming to an end and recurrent scandals in private institutions. In the last decade, after a protracted period of extension or expansion of the system, i.e., the continuous growth of students, universities and other institutions are competing harder to gain students and have seen their financial situation at risk. It seems that we are entering a period of radical uncertainty and higher competition, and the new configuration that is slowly building up is one of growth in intensity, which means upgrading the quality of higher learning and becoming more involved in vocational training and life-long learning. With this change, and along with other deep changes in Portuguese society and economy, the current regulation has shown signs of maladjustment. The third section consists of our conclusions on the current issue of regulation and the policy challenge. First, we underline the importance of an evolutionary approach to a process of change that is essentially dynamic. Special attention will be given to the issues related to an evolutionary construal of policy analysis and formulation. Second, the integration of law and economics, through the notion of the legal-economic nexus, allows us to better define the issues of regulation and the concrete problems that the universities are facing.
One aspect is the instability of the political measures regarding the public administration, on which the higher education system depends financially, legally and institutionally, to say the least. A corollary is the lack of a clear strategy in the policy reforms. Third, our research criticizes several studies, such as the one made by the OECD in late 2006 for the Ministry of Science, Technology and Higher Education, for being too static and neglecting fundamental aspects of regulation such as the logic of the actors, groups and organizations who are major players in the system. Finally, simply changing the legal rules will not necessarily, per se, change the behaviors that the authorities want to change. By this, we mean that it is not only remiss of the policy maker to ignore some of the critical issues of regulation, namely the continuous disregard by the academic management and administrative bodies of universities for the legal rules that were once promulgated. Changing the rules does not change the problem, especially without the necessary debates from the different relevant quarters that make up the higher education system. The issues of social interaction remain intact. Our treatment of the matter will be organized in the following way. In the first section, the theoretical principles are developed in order to study more adequately the transformation of higher education, with a modest evolutionary theory and a legal-economic nexus of the interactions of the system and the policy challenges. After describing, in the second section, the recent evolution and current working of higher education in Portugal, we will analyze the legal framework and the current regulatory practices and problems in light of the theoretical framework adopted. We will end with some conclusions on the current problems of regulation and the policy measures that have been discussed in recent years.
Abstract:
Constrained and unconstrained Nonlinear Optimization Problems often appear in many engineering areas. In some of these cases derivative-based optimization methods cannot be used, because the objective function is not known, is too complex, or is non-smooth; in such cases Direct Search Methods might be the most suitable optimization methods. An Application Programming Interface (API) including some of these methods was implemented using Java technology. This API can be accessed either by applications running on the same computer where it is installed or remotely, through a LAN or the Internet, using web services. From the engineering point of view, the information needed from the API is the solution for the provided problem. On the other hand, from the point of view of optimization methods researchers, not only the solution of the problem is needed; additional information about the iterative process is also useful, such as the number of iterations, the value of the solution at each iteration, and the stopping criteria. This paper presents the features added to the API to allow users to access the iterative process data.
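The following Python sketch illustrates the kind of per-iteration record such a service can expose alongside the final solution; it is not the Java API described above, and all names are hypothetical. A simple derivative-free coordinate search stands in for the solver.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical illustration (not the Java API described above) of returning
# iterative-process data from a direct search method: the best point and value
# at each iteration plus the stopping criterion that ended the run.

@dataclass
class IterationRecord:
    iteration: int
    best_point: List[float]
    best_value: float

@dataclass
class SearchResult:
    solution: List[float]
    value: float
    stopping_criterion: str
    history: List[IterationRecord] = field(default_factory=list)

def coordinate_search(f: Callable[[List[float]], float], x0: List[float],
                      step: float = 1.0, tol: float = 1e-6,
                      max_iter: int = 1000) -> SearchResult:
    """Derivative-free coordinate search that logs its iterative process."""
    x, fx = list(x0), f(list(x0))
    history = [IterationRecord(0, list(x), fx)]
    for it in range(1, max_iter + 1):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5                          # shrink the step (mesh) size
        history.append(IterationRecord(it, list(x), fx))
        if step < tol:
            return SearchResult(x, fx, "step size below tolerance", history)
    return SearchResult(x, fx, "maximum iterations reached", history)

# Usage: result = coordinate_search(lambda p: (p[0] - 1) ** 2 + p[1] ** 2, [0.0, 0.0])
```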
Abstract:
This paper applies MDS and the Fourier transform to analyze different periods of the business cycle. With that purpose, four important stock market indexes (Dow Jones, Nasdaq, NYSE, S&P500) were studied over time. The analysis under the lens of the Fourier transform showed that the indexes have characteristics similar to those of fractional noise. On the other hand, the analysis under the MDS lens identified patterns in the stock markets specific to each economic expansion period. Although the identification of patterns characteristic of each expansion period is interesting to practitioners (even if only in an a posteriori fashion), further research should explore the meaning of such regularities and aim to find a method to estimate future crises.
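A common way to check for fractional-noise behaviour is to fit a power law S(f) ∝ 1/f^β to the periodogram of the series; the sketch below does exactly that. It is a generic illustration of the idea, not the exact procedure used in the study.

```python
import numpy as np

# Generic check for fractional (1/f-type) noise: estimate the exponent beta in
# S(f) ~ f**(-beta) from a log-log fit to the periodogram of the series.

def spectral_exponent(x):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    psd = np.abs(np.fft.rfft(x)) ** 2          # periodogram
    freqs = np.fft.rfftfreq(x.size, d=1.0)     # cycles per sample
    mask = freqs > 0                           # drop the zero frequency
    slope, _intercept = np.polyfit(np.log(freqs[mask]), np.log(psd[mask]), 1)
    return -slope                              # beta

# beta near 0 suggests white noise and beta near 2 a random-walk-like series;
# intermediate, non-integer values indicate fractional-noise-like behaviour.
```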
Abstract:
For musicians, the impact of noise exposure is not yet fully characterized, and some inconsistencies can be found in the methodology used to evaluate noise exposure. This study aims to analyze the noise exposure of musicians in a symphonic orchestra to understand their risk of hearing loss, applying the methodology proposed by ISO 9612:2009. Noise levels were monitored among musicians during the rehearsal of eight different repertoires. Test subjects were selected according to their instrument and position in the orchestra. Participants wore noise dosimeters throughout the rehearsals, and a sound level meter was used to analyze the exposure of the conductor. The results showed that musicians are exposed to high noise levels that can damage hearing. Brass, woodwind, and percussion and timpani musicians were exposed to noise levels in excess of the upper exposure action level of 85 dB(A), while the other instrumental groups exceeded the lower exposure action level of 80 dB(A). Percussion musicians were exposed to high peak noise levels of 135 dB(C). Sound levels varied by instrument, repertoire and position. Octave-band frequency analyses showed differences among musicians. This study suggests that musicians are at risk of hearing loss. There is a need for more effective guidelines applicable to all countries, which should define standardized procedures for determining musician noise exposure and should allow exposure levels to be normalized to the year, including different repertoires.
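For reference, the daily noise exposure level used by ISO 9612 combines the A-weighted equivalent levels of the measured tasks energetically and normalises them to an 8 h working day. The sketch below implements that standard formula; the example levels and durations are illustrative, not measurements from this study.

```python
import math

# Daily noise exposure level L_EX,8h: energetic combination of A-weighted
# equivalent levels L_i (dB) measured over durations T_i (hours), normalised
# to the 8 h reference day. Example inputs are illustrative only.

def daily_exposure_level(levels_db, durations_h, reference_h=8.0):
    energy = sum(t * 10.0 ** (l / 10.0) for l, t in zip(levels_db, durations_h))
    return 10.0 * math.log10(energy / reference_h)

# e.g. a 2 h rehearsal block at 88 dB(A) plus 1.5 h at 84 dB(A)
lex_8h = daily_exposure_level([88.0, 84.0], [2.0, 1.5])
```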
Abstract:
Dissertation to obtain the degree of Master in Music - Artistic Interpretation
Abstract:
Most machining tasks require high accuracy and are carried out by dedicated machine-tools. On the other hand, traditional robots are flexible and easy to program, but they are rather inaccurate for certain tasks. Parallel kinematic robots could combine the accuracy and flexibility that are usually needed in machining operations. Achieving this goal requires proper design of the parallel robot. In this chapter, a multi-objective particle swarm optimization algorithm is used to optimize the structure of a parallel robot according to specific criteria. Afterwards, for a chosen optimal structure, the best location of the workpiece with respect to the robot, in a machining robotic cell, is analyzed based on the power consumed by the manipulator during the machining process.
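As a rough illustration of the optimization machinery involved, the sketch below implements a plain particle swarm optimizer that scalarises two objectives with fixed weights; a full multi-objective PSO, as used in the chapter, would instead maintain a Pareto archive. The objectives, weights and bounds are placeholders, not the robot-design criteria of the study.

```python
import random

# Simplified PSO with weighted-sum scalarisation of several objectives
# (a stand-in for a true multi-objective PSO with a Pareto archive).
# Objectives, weights and bounds are placeholders.

def pso(objectives, weights, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    cost = lambda x: sum(wt * f(x) for wt, f in zip(weights, objectives))
    X = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in X]
    pbest_val = [cost(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = list(pbest[g]), pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                lo, hi = bounds[d]
                X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
            val = cost(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = list(X[i]), val
                if val < gbest_val:
                    gbest, gbest_val = list(X[i]), val
    return gbest, gbest_val

# Usage with two toy objectives:
# best, value = pso([lambda x: x[0] ** 2, lambda x: (x[0] - 1) ** 2],
#                   weights=[0.5, 0.5], bounds=[(-2.0, 2.0)])
```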
Abstract:
The Maxwell equations, expressing the fundamental laws of electricity and magnetism, involve only integer-order calculus. However, several effects present in electromagnetism have recently motivated an analysis under the fractional calculus (FC) perspective. In fact, this mathematical concept allows a deeper insight into many phenomena that classical models overlook. On the other hand, genetic algorithms (GA) are an important tool for solving optimization problems that occur in engineering. In this work we use FC and GA to implement the electrical potential of fractional order. The performance of the GA scheme and the convergence of the resulting approximations are analyzed.
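For orientation, a fractional-order potential is usually written with a non-integer decay exponent that interpolates between the integer-order multipole laws (monopole decaying as 1/r, dipole as 1/r^2). The expression below is an illustrative form under that assumption; the paper's exact definition and normalisation may differ.

```latex
% Illustrative fractional-order potential (assumed form; the paper's exact
% expression and normalisation may differ):
V_{\alpha}(r) \;\propto\; \frac{q}{4\pi\varepsilon_{0}\, r^{\alpha}},
\qquad \alpha \notin \mathbb{N}
% a GA can then be used to search for charge arrangements whose combined
% field approximates this non-integer decay.
```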
Abstract:
Certain materials used and produced in a wide range of non-nuclear industries contain enhanced activity concentrations of natural radionuclides. In particular, electricity production from coal is one of the major sources of increased human exposure to enhanced naturally occurring radioactive materials. Over the past decades there has been some discussion about the elevated natural background radiation in the areas near coal-fired power plants, due to the high uranium and thorium content present in coal. This work describes the methodology developed to assess the radiological impact of increasing levels of natural background radiation potentially originating from the operation of a coal-fired power plant. Gamma radiation measurements were made with two different instruments: a scintillometer (SPP2 NF, Saphymo) and a gamma-ray spectrometer with energy discrimination (Falcon 5000, Canberra). A total of 40 relevant sampling points were established at locations within 20 km of the power plant: 15 urban and 25 suburban measurement stations. The highest values were measured at the sampling points near the power plant and at those located between 6 and 20 km from the stacks. This may be explained by the presence of a huge coal pile (1.3 million tons) located near the stacks, contributing to the dispersion of unburned coal, and, on the other hand, by the height of the stacks (225 m), which may influence ash dispersion up to a distance of 20 km. In situ gamma radiation measurements with energy discrimination identified natural emitting nuclides as well as their decay products (212Pb, 214Pb, 226Ra, 232Th, 228Ac, 234Th, 234Pa, 235U, etc.). This work was primarily done in order to assess the impact of a coal-fired power plant's operation on the background radiation level in the surrounding area. According to the results, an increase, or at least an influence, has been identified both qualitatively and quantitatively.
Abstract:
A new method, based on linear correlation and phase diagrams, was successfully developed for processes like the sedimentary process, where the deposition phase can have different time durations - represented by repeated values in a series - and where erosion can play an important role by deleting values of a series. The sampling process itself can be the cause of repeated values - a large stratum sampled twice - or of deleted values: tiny strata fitted between two consecutive samples. What we developed was a mathematical procedure which, based upon the evolution of chemical composition with depth, allows the establishment of boundaries as well as the periodicity of different sedimentary environments. The basic tool is no more than a linear correlation analysis which allows us to detect the existence of eventual evolution rules, connected with cyclical phenomena within time series (considering space assimilated to time), with the final objective of prediction. A very interesting discovery was the phenomenon of repeated sliding windows that represent quasi-cycles of a series of quasi-periods. An accurate forecast can be obtained if we are inside a quasi-cycle (it is possible to predict the other elements of the cycle with a probability related to the number of repeated and deleted points). We are dealing with an innovative methodology, which is why its efficiency is being tested in some case studies, with remarkable results that show its efficacy. Keywords: sedimentary environments, sequence stratigraphy, data analysis, time-series, conditional probability.
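The core of the idea can be shown in a few lines: correlate the series with lagged copies of itself and flag the lags whose linear correlation is high as candidate quasi-periods. This is a generic outline of the approach, not the exact procedure developed in the paper (which also handles repeated and deleted values explicitly).

```python
import numpy as np

# Generic outline: lags at which the series correlates strongly with a shifted
# copy of itself are candidate quasi-periods of a quasi-cycle.

def candidate_quasi_periods(series, max_lag, threshold=0.8):
    x = np.asarray(series, dtype=float)
    candidates = []
    for lag in range(1, max_lag + 1):
        a, b = x[:-lag], x[lag:]
        if a.std() == 0.0 or b.std() == 0.0:
            continue                            # correlation undefined
        r = np.corrcoef(a, b)[0, 1]             # linear correlation at this lag
        if r >= threshold:
            candidates.append((lag, r))
    return candidates                           # (lag, correlation) pairs
```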
Abstract:
Smart grids with an intensive penetration of distributed energy resources will play an important role in future power system scenarios. The intermittent nature of renewable energy sources brings new challenges, requiring an efficient management of those sources. Additional storage resources can be beneficially used to address this problem; the massive use of electric vehicles, particularly vehicle-to-grid (usually referred to as gridable vehicles or V2G), becomes a very relevant issue. This paper addresses the impact of Electric Vehicles (EVs) on system operation costs and on the power demand curve for a distribution network with large penetration of Distributed Generation (DG) units. An efficient management methodology for EV charging and discharging is proposed, considering a multi-objective optimization problem. The main goals of the proposed methodology are to minimize the system operation costs and to minimize the difference between the minimum and maximum system demand (leveling the power demand curve). The proposed methodology performs the day-ahead scheduling of distributed energy resources in a distribution network with high penetration of DG and a large number of electric vehicles. A 32-bus distribution network is used in the case study section, considering different scenarios of EV penetration to analyze their impact on the network and on the management of the other energy resources.
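In generic terms, the two goals can be written as the pair of objective functions below; the symbols are illustrative placeholders, not the paper's notation, and the full model also carries network, DG and EV charging constraints.

```latex
% Generic form of the two objectives (illustrative symbols, not the paper's
% notation): operation cost over the day-ahead horizon and the spread of the
% resulting demand curve.
\min \; f_{1} \;=\; \sum_{t=1}^{T} \Big( \sum_{g} c_{g,t}\, P_{g,t}
        \;+\; \sum_{v} c_{v,t}\, P^{\mathrm{dch}}_{v,t} \Big),
\qquad
\min \; f_{2} \;=\; \max_{t} P^{\mathrm{load}}_{t} \;-\; \min_{t} P^{\mathrm{load}}_{t}
```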
Abstract:
Phenylketonuria is an inborn error of metabolism involving, in most cases, a deficient activity of phenylalanine hydroxylase. Neonatal diagnosis and a prompt special diet (low-phenylalanine and natural-protein-restricted diets) are essential to the treatment. The lack of data concerning the phenylalanine content of processed foodstuffs is an additional limitation for an already very restrictive diet. Our goals were to quantify protein (Kjeldahl method) and the content of 18 amino acids (HPLC/fluorescence) in 16 dishes specifically conceived for phenylketonuric patients, and to compare the most relevant results with those of several international food composition databases. As might be expected, all the meals contained low protein levels (0.67–3.15 g/100 g), with the highest ones occurring in boiled rice and potatoes. These foods also contained the highest amounts of phenylalanine (158.51 and 62.65 mg/100 g, respectively). In contrast to the other amino acids, it was possible to predict phenylalanine content based on protein alone. Slight deviations were observed when comparing results with the different food composition databases.
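The claim that phenylalanine can be predicted from protein alone amounts to a one-parameter proportional model, Phe ≈ k × protein, fitted by least squares through the origin. The sketch below shows that fit; the input arrays are placeholders for measured values (protein in g/100 g, phenylalanine in mg/100 g) and no data from the study are hard-coded.

```python
import numpy as np

# One-parameter model implied above: Phe ≈ k * protein, fitted by least
# squares through the origin. Arrays are placeholders for measured values.

def fit_phe_per_protein(protein_g, phe_mg):
    p = np.asarray(protein_g, dtype=float)
    f = np.asarray(phe_mg, dtype=float)
    k = (p @ f) / (p @ p)           # slope of the through-origin regression
    return k                        # mg phenylalanine per g protein

def predict_phe(protein_g, k):
    return k * np.asarray(protein_g, dtype=float)
```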
Abstract:
The ready biodegradability of four chelating agents, N,N′-(S,S)-bis[1-carboxy-2-(imidazol-4-yl)ethyl]ethylenediamine (BCIEE), N-ethylenedi-L-cysteine (EC), N,N′-bis(4-imidazolylmethyl)ethylenediamine (EMI) and 2,6-pyridinedicarboxylic acid (PDA), was tested according to the OECD guideline for testing of chemicals. PDA proved to be a readily biodegradable substance. However, none of the other three compounds was degraded during the 28 days of the test. Chemical simulations were performed for the four compounds in order to understand their ability to complex with some metal ions (Ca, Cd, Co, Cu, Fe, Mg, Mn, Ni, Pb, Zn) and to discuss possible applications of these chelating agents. Two different conditions were simulated: (i) the presence of the chelating agent and one metal ion, and (ii) the simultaneous presence of the chelating agent and all metal ions with an excess of Ca. For the compounds revealed not to be readily biodegradable (BCIEE, EC and EMI), applications were evaluated where this property is not fundamental or even not required. Chemical simulations pointed out that possible applications for these chelating agents are food fortification, food processing, fertilizers, biocides, soil remediation and treatment of metal poisoning. Additionally, chemical simulations also predicted that PDA is an efficient chelating agent for the removal of Ca incrustations, for detergents, and for metal ion removal in pulp processing.
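For readers unfamiliar with such simulations, the simplest building block is the 1:1 complexation equilibrium shown below: given a stability constant and the total metal and ligand concentrations, the bound fraction follows from a quadratic mass balance. This textbook balance is only an illustration; the study's simulations cover many metals simultaneously and competition with excess Ca.

```python
import math

# Textbook 1:1 complexation equilibrium M + L <=> ML with K = [ML]/([M][L]).
# Given totals M_T and L_T, the bound concentration x = [ML] solves
# K * (M_T - x) * (L_T - x) = x. Illustration only, not the study's
# multi-metal speciation model.

def bound_metal(K, M_T, L_T):
    a = K
    b = -(K * (M_T + L_T) + 1.0)
    c = K * M_T * L_T
    x = (-b - math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)  # physical (smaller) root
    return x   # equilibrium concentration of the ML complex

# Usage with arbitrary illustrative numbers:
# bound_metal(K=1e8, M_T=1e-4, L_T=1.2e-4)
```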
Abstract:
This study aims to analyze which determinants predict frailty in general and each frailty domain (physical, psychological, and social), considering the integral conceptual model of frailty, and particularly to examine the contribution of medication to this prediction. A cross-sectional study was designed using a non-probabilistic sample of 252 community-dwelling elderly people from three Portuguese cities. Frailty and determinants of frailty were assessed with the Tilburg Frailty Indicator. The amount and type of different daily-consumed medication were also examined. Hierarchical regression analyses were conducted. The mean age of the participants was 79.2 years (±7.3), and most of them were women (75.8%), widowed (55.6%) and with a low educational level (0–4 years: 63.9%). In this study, the determinants explained 46% of the variance of total frailty, and 39.8, 25.3, and 27.7% of physical, psychological, and social frailty, respectively. Age, gender, income, death of a loved one in the past year, lifestyle, satisfaction with the living environment and self-reported comorbidity predicted total frailty, while each frailty domain was associated with a different set of determinants. The number of daily-consumed drugs was independently associated with physical frailty, and the consumption of medication for the cardiovascular system and for the blood and blood-forming organs explained part of the variance of total and physical frailty. The adverse effects of polymedication and its direct link with the level of comorbidity could explain the independent contribution of the amount of prescribed drugs to frailty prediction. On the other hand, the findings regarding medication type provide further evidence of the association of frailty with cardiovascular risk. In the present study, a significant part of frailty was predicted, and the different contributions of each determinant to the frailty domains highlight the relevance of the integral model of frailty. The added value of a simple assessment of medication was considerable, and it should be taken into account for effective identification of frailty.
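Hierarchical regression, as used here, means adding predictor blocks in sequence and reporting the increment in explained variance attributable to each block. The sketch below shows that procedure on generic arrays; the blocks are hypothetical stand-ins for the determinant and medication variables, not the Tilburg Frailty Indicator data themselves.

```python
import numpy as np

# Blockwise (hierarchical) OLS: add predictor blocks one at a time and report
# the cumulative R^2 and its increment per block. Blocks are hypothetical
# stand-ins, not the study's variables.

def r_squared(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])       # add an intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_tot = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / ss_tot

def hierarchical_r2(blocks, y):
    """blocks: list of (n_samples x n_vars) arrays entered in order."""
    results, X = [], np.empty((len(y), 0))
    for i, block in enumerate(blocks, start=1):
        X = np.column_stack([X, block])
        r2 = r_squared(X, y)
        delta = r2 - results[-1][1] if results else r2
        results.append((f"step {i}", r2, delta))
    return results    # (step, cumulative R^2, R^2 change) for each block
```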