29 results for Discrete choice experiments
Abstract:
Glass fibre-reinforced plastics (GFRP) have been considered inherently difficult to recycle due to both the cross-linked nature of thermoset resins, which cannot be remolded, and the complex composition of the composite itself. At present, most GFRP waste is landfilled, leading to negative environmental impacts and additional costs. With increasing awareness of environmental matters and the consequent drive to save resources, recycling would convert an expensive waste disposal problem into a profitable reusable material. In this study, efforts were made to recycle ground GFRP waste, originating from pultrusion production scrap, into new and sustainable composite materials. For this purpose, GFRP waste recyclates were incorporated into polyester-based mortars as fine aggregate and filler replacements at different load contents and particle size distributions. The potential recycling solution was assessed through the mechanical behaviour of the resulting GFRP waste-modified polymer mortars. Results revealed that GFRP waste-filled polymer mortars present improved flexural and compressive behaviour over unmodified polyester-based mortars, thus indicating the feasibility of reusing this waste in polymer mortars and concrete. © 2011, Advanced Engineering Solutions.
Abstract:
In this work, the effect of incorporating recycled glass fibre reinforced plastics (GFRP) waste materials, obtained by means of shredding and milling processes, on the mechanical behaviour of polyester polymer mortar (PM) materials was assessed. For this purpose, different contents of GFRP recyclates (between 4% and 12% by mass) were incorporated into polyester PM materials as sand aggregate and filler replacements. The effect of adding a silane coupling agent to the resin binder was also evaluated. The waste material applied came from the shredding of the leftovers resulting from the cutting and assembly processes of GFRP pultrusion profiles. Currently, these leftovers, jointly with unfinished products and scrap resulting from the pultrusion manufacturing process, are landfilled, with additional costs. Thus, besides the evident environmental benefits, a viable and feasible solution for these wastes would also lead to significant economic advantages. Design of experiments and data treatment were accomplished by means of a full factorial design approach and analysis of variance (ANOVA). Experimental results were promising regarding the recyclability of GFRP waste materials as aggregates and reinforcement for PM materials, with significant improvements in mechanical properties with respect to non-modified formulations.
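As a purely illustrative companion to the two-level factorial approach mentioned above (the factors, replicate counts and response values below are invented, not the study's data), main effects and the interaction can be estimated from coded -1/+1 contrasts:

```python
# Hypothetical 2^2 full factorial with replicates; effects from coded contrasts.
# Factor names and response values are illustrative only, not data from the study.
import numpy as np
from itertools import product

# Coded levels (-1/+1) for two illustrative factors:
# A = GFRP recyclate content, B = silane coupling agent (absent/present)
runs = list(product([-1, 1], repeat=2))

# Made-up flexural strength responses (MPa), 2 replicates per run
response = {
    (-1, -1): [18.2, 18.6], (1, -1): [21.4, 21.0],
    (-1,  1): [19.1, 19.5], (1,  1): [23.8, 23.5],
}

y = np.array([v for run in runs for v in response[run]])
A = np.array([run[0] for run in runs for _ in range(2)])
B = np.array([run[1] for run in runs for _ in range(2)])

def main_effect(level, y):
    """Difference between the mean response at +1 and at -1."""
    return y[level == 1].mean() - y[level == -1].mean()

print("Effect of A (waste content):", main_effect(A, y))
print("Effect of B (silane agent): ", main_effect(B, y))
print("Interaction AB:             ", main_effect(A * B, y))
```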
Abstract:
Remote labs offer many unique advantages to students as they provide opportunities to access experiments and learning scenarios that would otherwise be unavailable. At the same time, however, these opportunities introduce real challenges to the institutions hosting the remote labs. This paper draws on the experiences of the REXNET project consortium to expose a number of these issues as a means of furthering the debate on the value of remote labs and the best practices in deploying them. The paper presents a brief outline of the various types of remote lab scenarios that might be deployed. It then describes the key human and technological actors that have an interest in, or are intrinsic to, a remote lab instance, with a description of the role and interest of each actor. Some relationships between these various actors are then discussed, along with some factors that might influence those relationships. Finally, some general issues are briefly described.
Abstract:
Nowadays, many real-time operating systems discretize time using a system time unit. To take this behavior into account, real-time scheduling algorithms must adopt a discrete-time model in which both the timing requirements of tasks and their time allocations have to be integer multiples of the system time unit. That is, tasks cannot be executed for less than one time unit, which implies that they always have to complete a minimum amount of work before they can be preempted. Assuming such a discrete-time model, the authors of Zhu et al. (Proceedings of the 24th IEEE International Real-Time Systems Symposium (RTSS 2003), 2003; J Parallel Distrib Comput 71(10):1411–1425, 2011) proposed an efficient "boundary fair" algorithm (named BF) and proved its optimality for the scheduling of periodic tasks while achieving full system utilization. However, BF cannot handle sporadic tasks due to their inherently irregular and unpredictable job release patterns. In this paper, we propose an optimal boundary-fair scheduling algorithm for sporadic tasks (named BF²), which follows the same principle as BF by making scheduling decisions only at job arrival times and (expected) task deadlines. This new algorithm was implemented in Linux, and we show through experiments conducted on a multicore machine that BF² outperforms the state-of-the-art discrete-time optimal scheduler (PD²), benefiting from much lower scheduling overheads. Furthermore, these experimental results show that BF² is barely dependent on the length of the system time unit, while PD², the only other existing solution for the scheduling of sporadic tasks in discrete-time systems, sees its number of preemptions and migrations, as well as the time spent taking scheduling decisions, increase linearly as the time resolution of the system is improved.
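The allocation principle described above can be illustrated with a deliberately simplified sketch: at each boundary (job release or deadline), every task first receives the integer part of the fluid work it is owed, and leftover processor slots go to tasks with the largest fractional remainders. This is only a toy illustration of the boundary-fair idea with an invented periodic task set on two processors; it is not the BF² (or PD²) algorithm itself.

```python
# Toy boundary-fair allocation between consecutive boundaries (hypothetical
# periodic task set, 2 processors). Not the actual BF2 rules from the paper.
from math import floor

tasks = [(2, 5), (3, 10), (4, 10), (9, 20)]      # (execution time, period)
utilizations = [c / t for c, t in tasks]
m, hyperperiod = 2, 20

# boundaries = job releases / deadlines within the hyperperiod
boundaries = sorted({k * t for _, t in tasks for k in range(hyperperiod // t + 1)})

lag = [0.0] * len(tasks)                         # fluid work still owed to each task
for b0, b1 in zip(boundaries, boundaries[1:]):
    slot = b1 - b0
    owed = [lag[i] + u * slot for i, u in enumerate(utilizations)]
    units = [floor(o + 1e-9) for o in owed]      # mandatory integer units
    spare = m * slot - sum(units)                # processor slots still free
    # optional units: at most one extra per task, largest fractional remainder first
    order = sorted(range(len(tasks)), key=lambda i: owed[i] - units[i], reverse=True)
    for i in [i for i in order if owed[i] - units[i] > 1e-9][:spare]:
        units[i] += 1
    lag = [owed[i] - units[i] for i in range(len(tasks))]
    print(f"[{b0:2d},{b1:2d}) time units per task: {units}")
```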
Abstract:
This paper proposes a PSO-based approach to increase the probability of delivering power to any load point by identifying new investments in distribution energy systems. The statistical failure and repair data of distribution components are the main basis of the proposed methodology, which uses fuzzy-probabilistic modeling for the components' outage parameters. The fuzzy membership functions of the outage parameters of each component are based on statistical records. A Modified Discrete PSO optimization model is developed in order to identify the adequate investments in distribution energy system components that allow increasing the probability of delivering power to any customer in the distribution system at the minimum possible cost for the system operator. To illustrate the application of the proposed methodology, the paper includes a case study that considers a 180-bus distribution network.
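For readers unfamiliar with discrete PSO, the sketch below shows a generic binary-PSO search over candidate reinforcement investments, with an entirely invented fitness that rewards delivery probability and penalises exceeding a budget. It is a hypothetical illustration, not the Modified Discrete PSO model or data from the case study.

```python
# Toy binary PSO over candidate investments. Component data, the fitness
# function and all parameters are illustrative assumptions only.
import random, math

random.seed(1)
N_COMPONENTS = 8
cost = [random.uniform(1.0, 5.0) for _ in range(N_COMPONENTS)]
gain = [random.uniform(0.01, 0.05) for _ in range(N_COMPONENTS)]  # reliability gain per investment
BASE_PROB, BUDGET = 0.90, 12.0

def fitness(x):
    """Probability of delivering power, penalised when cost exceeds the budget."""
    prob = min(1.0, BASE_PROB + sum(g for g, xi in zip(gain, x) if xi))
    total_cost = sum(c for c, xi in zip(cost, x) if xi)
    return prob - max(0.0, total_cost - BUDGET)  # simple penalty

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

n_particles, iters, w, c1, c2 = 20, 50, 0.7, 1.5, 1.5
X = [[random.randint(0, 1) for _ in range(N_COMPONENTS)] for _ in range(n_particles)]
V = [[0.0] * N_COMPONENTS for _ in range(n_particles)]
pbest = [x[:] for x in X]
gbest = max(pbest, key=fitness)

for _ in range(iters):
    for p in range(n_particles):
        for d in range(N_COMPONENTS):
            V[p][d] = (w * V[p][d]
                       + c1 * random.random() * (pbest[p][d] - X[p][d])
                       + c2 * random.random() * (gbest[d] - X[p][d]))
            # binary PSO update: the bit is set with probability sigmoid(velocity)
            X[p][d] = 1 if random.random() < sigmoid(V[p][d]) else 0
        if fitness(X[p]) > fitness(pbest[p]):
            pbest[p] = X[p][:]
    gbest = max(pbest, key=fitness)

print("best investment plan:", gbest, "fitness:", round(fitness(gbest), 4))
```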
Abstract:
Ammonia is an important gas in many power plants and industrial processes, so its detection is of extreme importance in environmental monitoring and process control due to its high toxicity. Ammonia's threshold limit is 25 ppm with an exposure time limit of 8 h; however, exposure to 35 ppm is only safe for 10 min. In this work, a brief introduction to ammonia is presented, covering its physical and chemical properties, the dangers in its manipulation, its ways of production and its sources. The application areas in which ammonia gas detection is important and needed are also reviewed: environmental gas analysis (e.g. intensive farming) and the automotive, chemical and medical industries. In order to monitor ammonia gas in these different areas, there are some requirements that must be met. These requirements determine the choice of sensor and, therefore, several types of sensors with different characteristics have been developed, such as metal oxide, surface acoustic wave, catalytic and optical sensors, indirect gas analyzers, and conducting polymers. All the sensor types are described, but more attention is given to polyaniline (PANI), particularly to its characteristics, syntheses, chemical doping processes, deposition methods, transduction modes, and its adhesion to inorganic materials. Besides this, short descriptions of PANI nanostructures, the use of electrospinning in the formation of nanofibers/microfibers, and graphene and its characteristics are included. The sensor created here pursues a goal of the medical community: the control of breath ammonia levels as an easy and non-invasive method for the diagnosis of kidney malfunction and/or gastric ulcers. For that, the device should be capable of detecting different levels of ammonia gas concentration. Thus, in the present work an ammonia gas sensor was developed using a conductive polymer composite immobilized on a carbon transducer surface. The experiments targeted ammonia measurements at the ppb level, and ammonia gas measurements were carried out in the concentration range from 1 ppb to 500 ppb. A commercial substrate was used: screen-printed carbon electrodes. After adequate surface pre-treatment of the substrate, its electrodes were covered by a nanofibrous polymeric composite. Conducting polyaniline doped with sulfuric acid (H2SO4) was blended with reduced graphene oxide (RGO) obtained by wet chemical synthesis. This composite formed the basis for the formation of nanofibers by electrospinning; the nanofibers increase the sensitivity of the sensing material. The electrospun PANI-RGO fibers were placed on the substrate and then dried at ambient temperature. Amperometric measurements were performed at different ammonia gas concentrations (1 to 500 ppb), the I-V characteristics were registered, and some interfering gases were studied (NO2, ethanol, and acetone). The gas samples were prepared in a custom setup and diluted with dry nitrogen gas. Electrospun nanofibers of the PANI-RGO composite demonstrated an enhancement in NH3 gas detection when compared with electrospun PANI nanofibers alone, showing a wider range of resistance variation at concentrations from 1 to 500 ppb. It was also observed that the sensor had stable, reproducible and recoverable properties, as well as better response and recovery times. The new sensing material of the developed sensor proved to be a good candidate for ammonia gas determination.
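For context, chemiresistive gas-sensor responses such as those described above are commonly reported as a relative resistance change with respect to the baseline in the carrier gas; the short sketch below computes that figure of merit for invented resistance readings (not measurements from this work).

```python
# Relative response of a chemiresistive sensor, using made-up resistance
# readings for a few NH3 concentrations (ppb). Not measured data.
baseline_ohm = 1.20e5  # resistance in dry nitrogen (assumed)
readings_ohm = {1: 1.23e5, 10: 1.31e5, 100: 1.52e5, 500: 1.88e5}

for ppb, r_gas in readings_ohm.items():
    response_pct = 100.0 * (r_gas - baseline_ohm) / baseline_ohm
    print(f"{ppb:>4} ppb NH3 -> dR/R0 = {response_pct:.1f} %")
```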
Abstract:
A theory of free vibrations of discrete fractional order (FO) systems with a finite number of degrees of freedom (dof) is developed. A FO system with a finite number of dof is defined by means of three matrices: mass inertia, system rigidity and FO elements. By adopting a matrix formulation, a mathematical description of the free vibrations of FO discrete systems is determined in the form of coupled fractional order differential equations (FODE). The corresponding solutions in analytical form, for the special case of the matrix of FO property elements, are determined and expressed as a polynomial series in time. For the eigen characteristic numbers, the system eigen main coordinates and the independent eigen FO modes are determined. A generalized function of viscoelastic creep FO dissipation of energy, and generalized forces for systems with non-ideal viscoelastic creep FO dissipation of energy for generalized coordinates, are formulated. Extended Lagrange FODE of the second kind, for FO system dynamics, are also introduced. Two examples of FO chain systems are analyzed and the corresponding eigen characteristic numbers determined. It is shown that the oscillatory phenomena of a FO mechanical chain have analogies to electrical FO circuits. A FO electrical resistor is introduced and its constitutive voltage-current relation is formulated. Also, a function of thermal energy FO dissipation for this FO electrical element is discussed.
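The paper's analytical polynomial-series solutions are specific to its matrix formulation; as a generic numerical complement, a single-degree-of-freedom element with a fractional damping term of order alpha can be time-stepped with the standard Grünwald-Letnikov weights, as in the sketch below (all parameters and initial conditions are assumed for illustration, and this is not the paper's method).

```python
# Grünwald-Letnikov time-stepping of a single-dof oscillator with a
# fractional damping term:  m x'' + c D^alpha x + k x = 0.
# Parameters and initial conditions are illustrative assumptions.
m, c, k, alpha = 1.0, 0.4, 4.0, 0.5
h, steps = 0.01, 2000

# GL weights: w0 = 1, w_j = w_{j-1} * (1 - (alpha + 1) / j)
w = [1.0]
for j in range(1, steps + 1):
    w.append(w[-1] * (1.0 - (alpha + 1.0) / j))

x = [1.0, 1.0]  # x(0) = 1, and x(h) ~ x(0) for (assumed) zero initial velocity
for n in range(1, steps):
    # fractional derivative of x at t_n (GL approximation over the history)
    d_alpha = sum(w[j] * x[n - j] for j in range(n + 1)) / h**alpha
    # explicit central-difference step for the second derivative
    acc = -(c * d_alpha + k * x[n]) / m
    x.append(2 * x[n] - x[n - 1] + acc * h * h)

print("displacement after", steps * h, "s:", round(x[-1], 4))
```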
Abstract:
With the implementation of the Bologna Process, several challenges have been posed to higher education institutions, particularly in Portugal. One of the main implications is the change from a teacher-centered education paradigm to a student-centered one. This change implies changing the way courses are assessed in higher education institutions. Continuous and formative assessment emerged as the focus, catalyzed by electronic assessment, or e-assessment. This paper presents a case of an e-assessment strategy implemented to allow continuous, formative assessment in numerous mathematics classes, using multiple-choice question tests built in the Moodle open-source learning management system. The implementation can be considered a success.
Abstract:
The paper presents an automated synthesis procedure for RFDSCAs. The algorithm determines several RFDSCA circuits from the top-level system specifications, all with the same maximum performance. The genetic synthesis tool optimizes a fitness function proportional to the RFDSCA quality factor and uses the ε-concept and a maximin sorting scheme to achieve a set of solutions well distributed along a non-dominated front. To confirm the results of the algorithm, three RFDSCAs were simulated in SpectreRF and one of them was implemented and tested. The design used a 0.25 μm BiCMOS process. All the results (synthesized, simulated and measured) are very close, which indicates that the genetic synthesis method is a very useful tool for designing optimum-performance RFDSCAs.
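As an illustration of how a maximin sorting scheme can score a set of candidate circuits (a generic sketch with invented objective vectors, not the tool's implementation), each design's maximin fitness is the largest of its minimum objective-wise differences to every other design; a negative value indicates a non-dominated design, and more negative values indicate better-spread points on the front.

```python
# Maximin fitness for a set of candidate designs (minimisation objectives).
# The objective vectors below are made up for illustration.
def maximin_fitness(objs):
    """objs: list of objective tuples. Returns one maximin value per design.
    A value < 0 means the design is non-dominated; more negative values
    correspond to better-separated points on the front."""
    fitness = []
    for i, fi in enumerate(objs):
        worst = max(
            min(a - b for a, b in zip(fi, fj))
            for j, fj in enumerate(objs) if j != i
        )
        fitness.append(worst)
    return fitness

# e.g. (normalised area, 1/Q) pairs for five hypothetical designs
designs = [(1.0, 9.0), (2.0, 6.0), (4.0, 4.0), (7.0, 2.0), (5.0, 5.0)]
print(maximin_fitness(designs))  # the last design is dominated (value > 0)
```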
Abstract:
In this paper we present a set of field tests for the detection of humans in the water with an unmanned surface vehicle using infrared and color cameras. These experiments aimed to contribute to the development of victim target tracking and obstacle avoidance for unmanned surface vehicles operating in marine search and rescue missions. This research is integrated in the work conducted in the European FP7 research project Icarus, which aims to develop robotic tools for large-scale rescue operations. The tests consisted in the use of the ROAZ unmanned surface vehicle, equipped with a precision GPS system for localization and both visible spectrum and IR cameras, to detect the target. In the experimental setup, the test human target was deployed in the water wearing a life vest and a diving suit (thus having a lower temperature signature over the body except for the hands and head) and was equipped with a GPS logger. Multiple target approaches were performed in order to test the system under different relative sun incidence angles. The experimental setup, the detection method and preliminary results from the field trials performed in the summer of 2013 in Sesimbra, Portugal, and in La Spezia, Italy, are also presented in this work.
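Purely as an illustration of how warm body parts can be singled out in a thermal frame (a generic sketch with synthetic data, not the detection method used in the Icarus trials), a simple threshold-and-label pass looks like this:

```python
# Generic hot-spot detection in a synthetic thermal frame (illustrative only).
# Warm regions such as an exposed head or hands stand out against the sea surface.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
frame = rng.normal(18.0, 0.5, size=(120, 160))   # synthetic sea-surface temperatures (deg C)
frame[60:64, 80:83] = 33.0                       # synthetic warm blob (head above water)

threshold = frame.mean() + 4 * frame.std()       # simple global threshold
mask = frame > threshold
labels, n_blobs = ndimage.label(mask)
centroids = ndimage.center_of_mass(frame, labels, range(1, n_blobs + 1))
print(f"{n_blobs} candidate target(s) at (row, col): {np.round(centroids, 1)}")
```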
Abstract:
Optimization in modern applications has a strongly interdisciplinary character, being related to the need to integrate different techniques and paradigms in the resolution of complex real-world problems. The scheduling problem is a recurring one in production planning. Whenever a manufacturing order is released, it is necessary to determine which resources will be used and in what sequence the activities will be executed, so as to optimize a given performance measure. Although some companies still address the scheduling problem through simple heuristics, the proposal of scheduling systems has become prominent in the literature. This dissertation aims to analyse the performance of optimization techniques, namely meta-heuristics, in solving complex optimization problems: task scheduling, particularly the weighted tardiness minimization problem, 1||ΣwjTj. To this end, a prototype was developed to support the computational study aimed at evaluating the performance of Simulated Annealing (SA) and the Discrete Artificial Bee Colony (DABC) algorithm. Efficiently solving a problem generally requires the application of different methods and the tuning of their respective parameters. Parameter tuning can provide greater flexibility and robustness, but requires careful initialization. The parameters can have a major influence on the efficiency and effectiveness of the search, and their definition should result from a careful experimental effort towards their specification. Within this master's work, Taguchi design of experiments was used to support the parameterization phase of the meta-heuristics under analysis. From the analysis of the results, it was possible to conclude that DABC has a statistically significant advantage in performance, but when efficiency is analysed it is possible to conclude that SA has the advantage, as it requires less computational time.
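A minimal sketch of how Simulated Annealing can be applied to the 1||ΣwjTj problem studied in the dissertation, with an invented job set, a swap neighbourhood and untuned parameters (the actual prototype, the DABC implementation and the Taguchi-tuned settings are not reproduced here):

```python
# Toy Simulated Annealing for the single-machine weighted tardiness
# problem 1||sum(w_j T_j). Job data and SA parameters are illustrative.
import random, math

random.seed(7)
# (processing time, due date, weight) for a handful of hypothetical jobs
jobs = [(4, 10, 2), (3, 6, 1), (7, 18, 3), (2, 7, 2), (5, 12, 1)]

def total_weighted_tardiness(sequence):
    t, total = 0, 0
    for j in sequence:
        p, d, w = jobs[j]
        t += p
        total += w * max(0, t - d)
    return total

current = list(range(len(jobs)))
best, best_cost = current[:], total_weighted_tardiness(current)
temp, cooling = 10.0, 0.95

for _ in range(2000):
    i, k = random.sample(range(len(jobs)), 2)     # swap neighbourhood
    neighbour = current[:]
    neighbour[i], neighbour[k] = neighbour[k], neighbour[i]
    delta = total_weighted_tardiness(neighbour) - total_weighted_tardiness(current)
    if delta <= 0 or random.random() < math.exp(-delta / temp):
        current = neighbour
        cost = total_weighted_tardiness(current)
        if cost < best_cost:
            best, best_cost = current[:], cost
    temp = max(1e-3, temp * cooling)

print("best sequence:", best, "weighted tardiness:", best_cost)
```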
Abstract:
In an attempt to optimize the manufacturing process associated with a water-based paint (TBA), in order to minimize the deviations observed in its final viscosity, and to develop a new plasticizing admixture for concrete, statistical methods and tools were used to carry out the project. Regarding the TBA, the manufacturing process was first monitored in order to gather all the most relevant data that could influence the final viscosity of the paint. A capability analysis of the viscosity parameter showed that it was not always within the customer's specifications, with the process cpk below 1. Monitoring the process led to the choice of 4 factors, which culminated in a 2^4 factorial design. After the trials were performed, a regression analysis was carried out on a first-order model, which was not significant, implying a further 8 trials at the axial points. Through a stepwise regression, a viable approximation to a second-order model was obtained, which yielded the best levels of the 4 factors guaranteeing that the viscosity response lies at the midpoint of the specification interval (1400 mPa.s). As for the concrete admixture, the objective was the use of SIKA polymers instead of the raw material commonly used in this type of product, taking into account the final cost of the formulation. Three important factors in the product formulation were chosen (polymer blend, hydrocarbon blend and % solids), resulting in a 2^3 factorial matrix. The trials were carried out in triplicate, in cement paste, one for each of the cement types most used in Portugal. The statistical analysis of the data yielded first-order models for each cement type. The optimization process consisted of optimizing a cost function associated with the formulation, while always guaranteeing a response higher than that observed for the product considered the standard. The results were encouraging, since for the 3 cement types the costs obtained were below the required value and the spread was above that observed for the standard.
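As a sketch of the kind of response-surface fit described above, the code below builds a coded 2^4 design augmented with the 8 axial runs and fits a second-order model by least squares; the response values are synthetic, so the coefficients bear no relation to the actual viscosity study.

```python
# Illustrative central-composite-style fit: a 2^4 two-level design plus
# axial points, with a second-order model fitted by least squares.
# The response values are synthetic, not the viscosity data from the work.
import numpy as np
from itertools import product, combinations

rng = np.random.default_rng(0)
factorial = np.array(list(product([-1, 1], repeat=4)), dtype=float)   # 16 runs
axial = np.array([row for i in range(4) for row in
                  (np.eye(4)[i] * 2, -np.eye(4)[i] * 2)])             # 8 axial runs
X = np.vstack([factorial, axial])

# synthetic "viscosity" response with curvature in factor 0
y = 1400 + 60 * X[:, 0] - 35 * X[:, 1] + 25 * X[:, 0] * X[:, 1] \
    - 40 * X[:, 0] ** 2 + rng.normal(0, 5, len(X))

def design_matrix(X):
    """Intercept, linear, two-factor interaction and pure quadratic terms."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(4)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(4), 2)]
    cols += [X[:, i] ** 2 for i in range(4)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print("fitted second-order coefficients:", np.round(beta, 1))
```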
Abstract:
This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as 'erf', is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm.
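A rough sketch of the repulsion idea around previously located roots, combining SciPy's Nelder-Mead with an erf-shaped penalty; the example system, the penalty form and all constants are assumptions for illustration and do not reproduce the paper's merit function or strategies.

```python
# Sketch of a repulsion strategy around already-found roots, using an
# erf-based penalty and SciPy's Nelder-Mead. Penalty shape and constants
# are illustrative assumptions, not the paper's exact merit function.
import numpy as np
from scipy.special import erf
from scipy.optimize import minimize

np.random.seed(3)

def F(x):
    """Example nonlinear system with two symmetric real roots
    (x^2 + y^2 = 1, y = x^2)."""
    return np.array([x[0]**2 + x[1]**2 - 1.0, x[1] - x[0]**2])

def merit(x, found_roots, rho=10.0, delta=5.0):
    base = np.sum(F(x)**2)
    # erf-based repulsion: large close to an already-found root, fades with distance
    penalty = sum(rho * (1.0 - erf(delta * np.linalg.norm(x - r))) for r in found_roots)
    return base + penalty

found = []
for _ in range(6):                              # multistart with repulsion
    x0 = np.random.uniform(-2, 2, size=2)
    res = minimize(merit, x0, args=(found,), method='Nelder-Mead',
                   options={'xatol': 1e-8, 'fatol': 1e-10, 'maxiter': 2000})
    # keep the point only if it really is a root and is not a duplicate
    if np.sum(F(res.x)**2) < 1e-8 and all(np.linalg.norm(res.x - r) > 1e-3 for r in found):
        found.append(res.x)

print("distinct roots found:", [np.round(r, 4) for r in found])
```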