34 results for Penetrance (rate, value)
in the Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
Master's in Accounting
Abstract:
International statistics show that the problem of accidents at work is far from solved (the ILO estimates that every year about 270 million work accidents and 160 million occupational diseases occur in the world, resulting in the death of more than 2 million workers). That is why the overall goal of the EU Community strategy on occupational health and safety for 2007-2012 is to reduce the incidence rate of occupational accidents and diseases by 25%. In this context, a case study is presented that justifies the need to develop studies in the area of Safety, Hygiene and Health at Work as a way to encourage managers to implement preventive actions and strategies, beyond meeting the legal requirements, in order to reduce the occurrence of work accidents, improve working conditions and thereby obtain benefits in added value and reinforced competitiveness. The general objective of this study is to describe the work situations; identify the hazards and the associated potential risks and consequences; and evaluate and rate the risks. The study uses the Failure Table methodology in the business area of an organization, designated from now on as MANTEM, which works in electromechanical maintenance. The results were, amongst others, a set of actions to be implemented to eliminate or minimize risks.
Abstract:
An experimental and theoretical study of the electro-rheological effects observed in the nematic phase of 4-n-heptyl-4'-cyanobiphenyl has been conducted. This liquid crystal appears to be a model system, in which the observed rheological behaviour can be interpreted by the Leslie-Ericksen continuum theory for low molecular weight liquid crystals. Flow curves are illustrated at different temperatures and under the influence of an external electric field ranging from 0 to 3 kV mm⁻¹, applied perpendicular to the direction of flow. Also presented is the apparent viscosity as a function of temperature, over similar values of electric field, obtained at different shear rates. A master flow curve has been constructed for each temperature by dividing the shear rate by the square of the electric field and multiplying by the square of a reference value of electric field. In a log-log plot, two Newtonian plateaux are found to appear at low and high shear rates, connected by a shear-thinning region. We have applied the Leslie-Ericksen continuum theory, in which the director alignment angle is a function of the electric field and the flow field boundary conditions are neglected, to determine viscoelastic parameters and the dielectric anisotropy.
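The master-curve construction described above amounts to the following reduced-variable scaling (a sketch in standard notation, assuming the abstract's wording; E_ref denotes the chosen reference field):

\dot{\gamma}_{\mathrm{scaled}} = \dot{\gamma}\,\frac{E_{\mathrm{ref}}^{2}}{E^{2}}

Plotting the apparent viscosity against this scaled shear rate on log-log axes collapses the flow curves measured at different fields onto a single master curve for each temperature.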
Abstract:
Reinforcement Learning is an area of Machine Learning that deals with how an agent should take actions in an environment so as to maximize a notion of accumulated reward. This type of learning is inspired by the way humans learn and has led to the creation of various algorithms for reinforcement learning. These algorithms focus on the way in which an agent's behaviour can be improved, assuming independence from its surroundings. The current work studies the application of reinforcement learning methods to solve the inverted pendulum problem. The importance of the variability of the environment (factors that are external to the agent) on the execution of reinforcement learning agents is studied by using a model that seeks to obtain equilibrium (stability) through dynamism: a Cart-Pole system, or inverted pendulum. We sought to improve the behaviour of the autonomous agents by changing the information passed to them, while keeping the agents' internal parameters (learning rate, discount factor, decay rate, etc.) constant, instead of the classical approach of tuning those internal parameters. The influence of changes to the state set and the action set on an agent's capability to solve the Cart-Pole problem was studied. We studied the typical behaviour of reinforcement learning agents applied to the classic BOXES model, and a new form of characterizing the environment was proposed using the notion of convergence towards a reference value. We demonstrate the performance gain of this new method applied to a Q-Learning agent.
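As an illustration of the approach, the following is a minimal sketch (not the thesis's actual code) of a tabular Q-Learning agent on the Cart-Pole problem with a BOXES-style discretization of the state; the bin edges, parameter values, and the use of the gymnasium package are assumptions for the example:

import numpy as np
import gymnasium as gym

# BOXES-style discretization; bin edges are illustrative, not the thesis's values.
BINS = [
    np.linspace(-2.4, 2.4, 5),    # cart position
    np.linspace(-3.0, 3.0, 5),    # cart velocity
    np.linspace(-0.21, 0.21, 7),  # pole angle (rad)
    np.linspace(-3.0, 3.0, 5),    # pole angular velocity
]

def discretize(obs):
    # Map the continuous observation onto a tuple of box indices.
    return tuple(int(np.digitize(x, edges)) for x, edges in zip(obs, BINS))

env = gym.make("CartPole-v1")
n_actions = env.action_space.n
q = {}                                  # (state, action) -> value
alpha, gamma, eps = 0.1, 0.99, 0.1      # internal parameters held constant

for episode in range(2000):
    obs, _ = env.reset()
    s, done = discretize(obs), False
    while not done:
        # epsilon-greedy action selection
        if np.random.rand() < eps:
            a = env.action_space.sample()
        else:
            a = max(range(n_actions), key=lambda b: q.get((s, b), 0.0))
        obs, r, terminated, truncated, _ = env.step(a)
        done = terminated or truncated
        s2 = discretize(obs)
        best_next = max(q.get((s2, b), 0.0) for b in range(n_actions))
        old = q.get((s, a), 0.0)
        q[(s, a)] = old + alpha * (r + gamma * best_next - old)  # Q-learning update
        s = s2

Changing BINS (the state set) or the available actions, while leaving alpha, gamma and eps untouched, reproduces the kind of environment-side experiment the thesis describes.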
Abstract:
Nowadays, cooperative intelligent transport systems are part of a larger system. Transportation is a modal operation integrated into logistics, and logistics is the main process of supply chain management. Strategic supply chain management, as a simultaneously local and global value chain, is a collaborative/cooperative organization of stakeholders, often in co-opetition, performing a service to the customers while respecting time, place, price and quality levels. Transportation, like other logistics operations, must add value, which is achieved in this case through compressed lead times and reliable order fulfillment. The complex supplier network and the distribution channels must be efficient, and full visibility (monitoring and tracing) of the supply chain is a significant source of competitive advantage. Nowadays, competition takes place not between companies but among supply chains. This paper aims to highlight the current and emerging manufacturing and logistics system challenges as a new field of opportunities for the automation and control systems research community. Furthermore, the paper forecasts the use of radio frequency identification (RFID) technologies integrated into an information and communication technologies (ICT) framework based on distributed artificial intelligence (DAI) supported by a multi-agent system (MAS) as the greatest value advantage for supply chain management (SCM) in cooperative intelligent logistics systems. Logistics platforms (production or distribution), as value-adding nodes of supply and distribution networks, are proposed as the critical points of inventory visibility, where these technological needs are most evident.
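To make the visibility idea concrete, here is a hypothetical toy sketch (the class names, event fields, and broadcast mechanism are illustrative assumptions, not the paper's architecture) of logistics-platform agents sharing RFID read events so that any item can be traced across the network:

from dataclasses import dataclass, field

@dataclass
class RfidEvent:
    tag_id: str      # EPC read from the RFID tag
    platform: str    # node (production or distribution platform)
    timestamp: float

@dataclass
class PlatformAgent:
    name: str
    log: list = field(default_factory=list)

    def on_read(self, event, network):
        # Record the sighting locally, then share it cooperatively.
        self.log.append(event)
        network.broadcast(event)

class Network:
    def __init__(self):
        self.trace = {}  # tag_id -> ordered list of sightings

    def broadcast(self, event):
        self.trace.setdefault(event.tag_id, []).append(event)

net = Network()
lisbon = PlatformAgent("Lisbon DC")
lisbon.on_read(RfidEvent("EPC-0001", "Lisbon DC", 0.0), net)
print(net.trace["EPC-0001"])  # end-to-end visibility of the item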
Abstract:
Interest rate risk is one of the major financial risks faced by banks due to the very nature of the banking business. The most common approach in the literature has been to estimate the impact of interest rate risk on banks using a simple linear regression model. However, the relationship between interest rate changes and bank stock returns need not be exclusively linear. This article provides a comprehensive analysis of the interest rate exposure of the Spanish banking industry employing both parametric and nonparametric estimation methods. Its main contribution is to use, for the first time in the context of banks' interest rate risk, a nonparametric regression technique that avoids the assumption of a specific functional form. On the one hand, it is found that the Spanish banking sector exhibits a remarkable degree of interest rate exposure, although the impact of interest rate changes on bank stock returns has significantly declined following the introduction of the euro. Further, a pattern of positive exposure emerges during the post-euro period. On the other hand, the results corresponding to the nonparametric model support the expansion of the conventional linear model in an attempt to gain a greater insight into the actual degree of exposure.
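For intuition, the following minimal sketch contrasts the two estimation strategies on simulated data (the data-generating process, bandwidth, and Nadaraya-Watson estimator are illustrative assumptions; the paper's actual nonparametric specification may differ):

import numpy as np

rng = np.random.default_rng(0)
dr = rng.normal(0, 0.1, 500)                              # interest rate changes
ret = -0.5 * dr + 0.8 * dr**2 + rng.normal(0, 0.05, 500)  # bank stock returns

# Conventional approach: a single linear exposure coefficient.
beta = np.polyfit(dr, ret, 1)[0]

def nw(x0, x, y, h=0.05):
    # Nadaraya-Watson estimator with a Gaussian kernel: no functional
    # form is imposed on the return/rate relationship.
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

grid = np.linspace(-0.3, 0.3, 7)
print("OLS beta:", round(beta, 3))
print("kernel fit:", [round(nw(g, dr, ret), 3) for g in grid])

The kernel fit traces the curvature that the single OLS slope averages away, which is the kind of insight the nonparametric expansion is meant to provide.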
Abstract:
This work focuses on the appraisal of public and environmental projects and, more specifically, on the calculation of the social discount rate (SDR) for this kind of very long-term investment project. As a rule, we can state that the instantaneous discount rate must be equal to the hazard rate of the public good or to the mortality rate of the population that the project is intended for. The hazard can be due to technical failures of the system but, in this paper, we are going to consider different independent variables that can cause the hazard. That is, we are going to consider a multivariate hazard rate. In our empirical application, the Spanish forest surface will be the system and forest fire will be the failure, which can be caused by several factors. The aim of this work is to integrate the different variables that produce the failure into the calculation of the SDR from a multivariate hazard rate approach.
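In standard survival-analysis notation (a sketch of the generic relation the abstract invokes, not the paper's specific specification), if S(t) is the probability that the public good survives to time t, the instantaneous discount rate equals the hazard rate,

\delta(t) = h(t) = -\frac{d}{dt}\ln S(t),

and with several independent risk factors the hazards add, h(t) = \sum_i h_i(t), so the discount factor becomes

D(t) = \exp\!\left(-\int_0^t \sum_i h_i(s)\,ds\right).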
Abstract:
Financial literature and the financial industry often use zero-coupon yield curves as input for testing hypotheses, pricing assets or managing risk, assuming the provided data to be accurate. We analyse the implications of the methodology and of the sample selection criteria used to estimate the zero-coupon bond yield term structure for the resulting volatility of spot rates with different maturities. We obtain the volatility term structure using historical volatilities and EGARCH volatilities. As input for these volatilities we consider our own spot rate estimation from GovPX bond data and three popular interest rate data sets: from the Federal Reserve Board, from the US Department of the Treasury (H15), and from Bloomberg. We find strong evidence that the resulting zero-coupon bond yield volatility estimates, as well as the correlation coefficients among spot and forward rates, depend significantly on the data set. We observe relevant differences in economic terms when the volatilities are used to price derivatives.
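A minimal sketch of the two volatility measures named above, applied to a simulated series of spot-rate changes (the window length, EGARCH order, and use of the Python arch package are assumptions for illustration):

import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(1)
d_spot = pd.Series(rng.normal(0, 0.5, 1000))  # daily spot-rate changes, in percent

# Historical volatility: rolling standard deviation over a 60-day window.
hist_vol = d_spot.rolling(60).std()

# EGARCH(1,1) conditional volatility.
am = arch_model(d_spot, mean="Zero", vol="EGARCH", p=1, o=1, q=1)
res = am.fit(disp="off")
egarch_vol = np.asarray(res.conditional_volatility)

print(hist_vol.iloc[-1], egarch_vol[-1])

Running both estimators on each candidate data set (GovPX, FRB, H15, Bloomberg) and comparing the resulting volatility term structures is the kind of exercise the paper reports.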
Abstract:
Although stock prices fluctuate, the variations are relatively small and are frequently assumed to be normally distributed on a large time scale. But sometimes these fluctuations can become decisive, especially when unforeseen large drops in asset prices are observed that could result in huge losses or even in market crashes. The evidence shows that these events happen far more often than would be expected under the generalized assumption of normally distributed financial returns. Thus it is crucial to properly model the distribution tails so as to be able to predict the frequency and magnitude of extreme stock price returns. In this paper we follow the approach suggested by McNeil and Frey (2000) and combine GARCH-type models with Extreme Value Theory (EVT) to estimate the tails of three financial index return series (DJI, FTSE 100 and NIKKEI 225) representing three important financial areas in the world. Our results indicate that EVT-based conditional quantile estimates are much more accurate than those from conventional AR-GARCH models assuming normal or Student's t-distributed innovations when doing out-of-sample estimation (within the in-sample estimation, this is so for the right tail of the distribution of returns).
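A condensed sketch of the McNeil and Frey (2000) two-step procedure on simulated heavy-tailed returns (the threshold choice, model orders, and use of the Python arch and scipy packages are illustrative assumptions):

import numpy as np
from arch import arch_model
from scipy import stats

rng = np.random.default_rng(2)
returns = rng.standard_t(4, 2000)              # heavy-tailed stand-in, in percent

# Step 1: AR-GARCH filter; work with standardized residuals.
am = arch_model(returns, mean="AR", lags=1, vol="GARCH", p=1, q=1)
res = am.fit(disp="off")
z = np.asarray(res.std_resid)[1:]              # drop the initial NaN

# Step 2: EVT on the residual tail. Losses above a high threshold are
# approximately Generalized Pareto distributed.
losses = -z
u = np.quantile(losses, 0.90)                  # threshold (90th percentile)
exc = losses[losses > u] - u
xi, _, scale = stats.genpareto.fit(exc, floc=0)

# Tail quantile of the standardized residuals (McNeil-Frey formula).
q = 0.99
n, n_u = len(losses), len(exc)
z_q = u + scale / xi * ((n / n_u * (1 - q)) ** (-xi) - 1)

# One-step-ahead conditional VaR: -(mu_{t+1} - sigma_{t+1} * z_q).
f = res.forecast(horizon=1)
mu = f.mean.values[-1, 0]
sigma = np.sqrt(f.variance.values[-1, 0])
print("99% conditional VaR (%):", round(sigma * z_q - mu, 3))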
Abstract:
Master's in Cardiovascular Diagnostic and Intervention Technology. Area of specialization: Cardiovascular Intervention.
Abstract:
Introduction – Assessing grip strength has proved to be of vital importance because of its relationship with the functional capacity of individuals, making it possible to determine levels of risk for future disability and thereby establish prevention strategies. Most studies use the JAMAR hydraulic dynamometer, which provides the value of the isometric force obtained during the palmar grip movement. However, other dynamometers are available, such as the portable computerized E-Link dynamometer (Biometrics), which provides the value of the maximum force (peak force) in addition to other variables, such as the rate of fatigue. There are, however, no studies that allow us to accept and compare (or not) the values obtained with the two devices and perhaps use them interchangeably. Purpose – To evaluate the agreement between measurements of grip strength (peak force or maximum force, in kg) obtained from two different portable dynamometers: a computerized one (E-Link, Biometrics) and a hydraulic one (JAMAR). Methodology – 29 subjects (13 men; 16 women; 22±7 years; 23.2±3.3 kg/m2) were assessed on two consecutive days at the same time of day. The test position chosen was the one recommended by the American Association of Occupational Therapists, and the best result of three attempts with the dominant hand was used. A correlation analysis between the values obtained with each device (Spearman coefficient) and a Bland-Altman analysis were performed to assess the agreement between the two measurements. Results – The correlation coefficient between the two measurements was high (rS = 0.956; p < 0.001) and, in the Bland-Altman analysis, all values fell within the mean±2SD interval. Conclusions – The two measurements were shown to be concordant, revealing that the tested dynamometers are comparable and can be used interchangeably in different studies and populations.
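The agreement analysis reported above can be sketched as follows on illustrative data (the simulated values are assumptions for the example; the ±2SD limits-of-agreement convention follows the abstract):

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
jamar = rng.normal(35, 8, 29)           # grip strength, kg (simulated)
elink = jamar + rng.normal(0, 1.5, 29)  # second device (simulated)

# Spearman correlation between the two devices.
rho, p = stats.spearmanr(jamar, elink)

# Bland-Altman: bias and limits of agreement (mean difference +/- 2 SD).
diff = elink - jamar
bias = diff.mean()
loa = (bias - 2 * diff.std(ddof=1), bias + 2 * diff.std(ddof=1))

print(f"Spearman rho={rho:.3f} (p={p:.3g})")
print(f"bias={bias:.2f} kg, limits of agreement={loa}")
# Agreement holds when nearly all differences fall inside the limits.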
Abstract:
Master's in Cardiovascular Diagnostic and Intervention Technology. Area of specialization: Cardiovascular Ultrasonography.
Abstract:
In-plane deformation of foams was studied experimentally by subjecting bidisperse foams to cycles of traction and compression at a prescribed rate. Each foam contained bubbles of two sizes with a given area ratio and one of three initial arrangements: sorted perpendicular to the axis of deformation (iso-strain), sorted parallel to the axis of deformation (iso-stress), or randomly mixed. Image analysis was used to measure the characteristics of the foams, including the number of edges separating small from large bubbles N_sl, the perimeter (surface energy), the distribution of the number of sides of the bubbles, and the topological disorder μ₂(N). Foams that were initially mixed were found to remain mixed after the deformation. The response of sorted foams, however, depended on the initial geometry, including the area fraction of small bubbles and the total number of bubbles. For a given experiment we found that (i) the perimeter of a sorted foam varied little; (ii) each foam tended towards a mixed state, measured through the saturation of N_sl; and (iii) the topological disorder μ₂(N) increased up to an "equilibrium" value. The results of different experiments showed that (i) the change in disorder, Δμ₂(N), decreased with the area fraction of small bubbles under iso-strain, but was independent of it under iso-stress; and (ii) Δμ₂(N) increased with ΔN_sl under iso-strain, but was again independent of it under iso-stress. We offer explanations for these effects in terms of elementary topological processes induced by the deformations that occur at the bubble scale.
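For reference, the topological disorder used above is simply the variance of the side-number distribution; a toy computation on made-up side counts (assuming the standard definition of μ₂(N)):

import numpy as np

# One side count n_i per bubble; values are illustrative, not measured data.
sides = np.array([5, 6, 6, 7, 5, 6, 8, 4, 6, 7])
mu2 = np.mean(sides**2) - np.mean(sides)**2  # mu2(N) = <n^2> - <n>^2
print(mu2)  # 0 for a perfectly ordered (all-hexagon) foam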
Abstract:
This paper addresses the voltage droop compensation associated with long pulses generated by solid-state based high-voltage Marx topologies. In particular, a novel design scheme for voltage droop compensation in solid-state based bipolar Marx generators, using low-cost circuitry design and control, is described. The compensation consists of adding one auxiliary PWM stage to the existing Marx stages, without changing the modularity and topology of the circuit, which controls the output voltage, together with an LC filter that smooths the voltage droop in both the positive and negative output pulses. Simulation results are presented for a 5-stage Marx circuit using 1 kV per stage, with a 1 kHz repetition rate and a 10% duty cycle.
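As a rough back-of-the-envelope sketch of why droop compensation is needed at these pulse parameters (the component values are assumptions for illustration, not the paper's design):

import math

# Voltage droop of one Marx stage capacitor during a pulse: dV = I*t/C
# for an approximately constant load current.
C = 10e-6         # stage capacitance, F (assumed)
I = 10.0          # load current, A (assumed)
t_pulse = 100e-6  # pulse width: 10% duty cycle at 1 kHz
droop = I * t_pulse / C
print(f"droop per stage: {droop:.0f} V of 1000 V")  # ~100 V, i.e. 10%

# Cutoff frequency of the smoothing LC filter (values assumed).
L, Cf = 100e-6, 1e-6
f_c = 1 / (2 * math.pi * math.sqrt(L * Cf))
print(f"LC filter cutoff: {f_c / 1e3:.1f} kHz")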
Abstract:
Cork processing wastewater is a complex aqueous mixture of organic compounds that have been extracted from cork planks during the boiling process. These compounds, such as polysaccharides and polyphenols, have different biodegradability rates, which depend not only on the nature of the compound but also on its size. The aim of this study is to determine the biochemical oxygen demands (BOD) and biodegradation rate constants (k) for different cork wastewater fractions with different organic matter characteristics. These wastewater fractions were obtained using membrane separation processes, namely nanofiltration (NF) and ultrafiltration (UF). The molecular weight cut-offs (MWCO) of the nanofiltration and ultrafiltration membranes ranged from 0.125 to 91 kDa. The results obtained showed that the biodegradation rate constant for the cork processing wastewater was around 0.3 d⁻¹ and the k values for the permeates varied between 0.27 and 0.72 d⁻¹, with the lower values observed for permeates generated by the membranes with higher MWCO and the higher values for permeates generated by the membranes with lower MWCO. These higher k values indicate that the biodegradable organic matter permeated by the membranes with tighter MWCO is more readily biodegraded.
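Rate constants of this kind are commonly obtained by fitting the first-order BOD model BOD(t) = BOD_u (1 - e^(-kt)); a minimal sketch on made-up measurements (the data points and initial guesses are illustrative, not the study's):

import numpy as np
from scipy.optimize import curve_fit

def bod(t, bod_u, k):
    # BOD exerted at time t: ultimate BOD times (1 - exp(-k t)).
    return bod_u * (1.0 - np.exp(-k * t))

t = np.array([1, 2, 3, 5, 7, 10, 14])              # incubation time, days
y = np.array([150, 260, 340, 450, 510, 560, 590])  # BOD, mg O2/L (illustrative)

(bod_u, k), _ = curve_fit(bod, t, y, p0=(600.0, 0.3))
print(f"BOD_u = {bod_u:.0f} mg/L, k = {k:.2f} d^-1")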