31 results for Probability Weight: Rank-dependent Utility
at Instituto Politécnico do Porto, Portugal
Abstract:
In this study, the concentration probability distributions of 82 pharmaceutical compounds detected in the effluents of 179 European wastewater treatment plants were computed and inserted into a multimedia fate model. The comparative ecotoxicological impact of the direct emission of these compounds from wastewater treatment plants on freshwater ecosystems, based on a potentially affected fraction (PAF) of species approach, was assessed to rank compounds based on priority. As many pharmaceuticals are acids or bases, the multimedia fate model accounts for regressions to estimate pH-dependent fate parameters. An uncertainty analysis was performed by means of Monte Carlo analysis, which included the uncertainty of fate and ecotoxicity model input variables, as well as the spatial variability of landscape characteristics on the European continental scale. Several pharmaceutical compounds were identified as being of greatest concern, including 7 analgesics/anti-inflammatories, 3 β-blockers, 3 psychiatric drugs, and 1 each of 6 other therapeutic classes. The fate and impact modelling relied extensively on estimated data, given that most of these compounds have little or no experimental fate or ecotoxicity data available, as well as a limited reported occurrence in effluents. The contribution of estimated model input variables to the variance of freshwater ecotoxicity impact, as well as the lack of experimental abiotic degradation data for most compounds, helped in establishing priorities for further testing. Generally, the effluent concentration and the ecotoxicity effect factor were the model input variables with the most significant effect on the uncertainty of output results.
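A minimal sketch of the kind of Monte Carlo uncertainty propagation described above, assuming hypothetical lognormal distributions for effluent concentration, dilution and ecotoxicity effect factor (the actual multimedia fate model and parameter values are not given in the abstract):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
N = 10_000  # number of Monte Carlo draws

# Hypothetical lognormal input distributions (illustrative values only).
conc_effluent = rng.lognormal(mean=np.log(1.0), sigma=0.8, size=N)   # ug/L in effluent
dilution      = rng.lognormal(mean=np.log(10.0), sigma=0.5, size=N)  # river dilution factor
effect_factor = rng.lognormal(mean=np.log(0.05), sigma=1.0, size=N)  # impact per (ug/L)

# Simple linear "fate + effect" chain: freshwater concentration -> impact score.
conc_freshwater = conc_effluent / dilution
impact = conc_freshwater * effect_factor  # proxy for a potentially affected fraction

print(f"median impact: {np.median(impact):.4f}")
print(f"95% interval : {np.percentile(impact, 2.5):.4f} - {np.percentile(impact, 97.5):.4f}")

# Crude contribution-to-variance screening: rank correlation of each input with the output.
for name, x in [("effluent conc.", conc_effluent),
                ("dilution", dilution),
                ("effect factor", effect_factor)]:
    rho, _ = spearmanr(x, impact)
    print(f"{name:15s} Spearman rho vs impact: {rho:+.2f}")
```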
Abstract:
The intensification of agricultural productivity is an important challenge worldwide. However, environmental stressors can provide challenges to this intensification. The progressive occurrence of the cyanotoxins cylindrospermopsin (CYN) and microcystin-LR (MC-LR) as a potential consequence of eutrophication and climate change is of increasing concern in the agricultural sector, because it has been reported that these cyanotoxins exert harmful effects in crop plants. A proteomic-based approach has been shown to be a suitable tool for the detection and identification of the primary responses of organisms exposed to cyanotoxins. The aim of this study was to compare the leaf-proteome profiles of lettuce plants exposed to environmentally relevant concentrations of CYN and a MC-LR/CYN mixture. Lettuce plants were exposed to 1, 10, and 100 µg/L CYN and a MC-LR/CYN mixture for five days. The proteins of lettuce leaves were separated by two-dimensional electrophoresis (2-DE), and those that were differentially abundant were then identified by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF/TOF MS). The biological functions of the proteins that were most represented in both experiments were photosynthesis and carbon metabolism and stress/defense response. Proteins involved in protein synthesis and signal transduction were also highly represented in the MC-LR/CYN experiment. Although distinct protein abundance patterns were observed in both experiments, the effects appear to be concentration-dependent, and the effects of the mixture were clearly stronger than those of CYN alone. The obtained results highlight the putative tolerance of lettuce to CYN at concentrations up to 100 µg/L. Furthermore, the combination of CYN with MC-LR at low concentrations (1 µg/L) stimulated a significant increase in the fresh weight (fr. wt) of lettuce leaves and, at the proteomic level, resulted in an increase in the abundance of a high number of proteins. In contrast, many proteins exhibited a decrease in abundance or were absent in the gels of the simultaneous exposure to 10 and 100 µg/L MC-LR/CYN. In the latter case, a significant decrease in the fr. wt of lettuce leaves was also obtained. These findings provide important insights into the molecular mechanisms of the lettuce response to CYN and MC-LR/CYN and may contribute to the identification of potential protein markers of exposure and of proteins that may confer tolerance to CYN and MC-LR/CYN. Furthermore, because lettuce is an important crop worldwide, this study may improve our understanding of the potential impact of these cyanotoxins on its quality traits (e.g., presence of allergenic proteins).
Abstract:
One of the main arguments in favour of the adoption of and convergence with the international accounting standards published by the IASB (i.e. the IAS/IFRS) is that they will allow comparability of financial reporting across countries. However, because these standards use verbal probability expressions (e.g. “probable”) when establishing the recognition and disclosure criteria for accounting elements, they require professional accountants to interpret and classify the probability of an outcome or event in light of those terms and expressions and to decide accordingly in terms of financial reporting. This paper reports part of a research project we carried out on the interpretation of “in context” verbal probability expressions used in the IAS/IFRS by the auditors registered with the Portuguese Securities Market Commission, the Comissão do Mercado de Valores Mobiliários (CMVM). Our results provide support for the hypothesis that culture affects the CMVM-registered auditors’ interpretation of verbal probability expressions through its influence on the accounting value (or attitude) of conservatism. Our results also suggest that there are significant differences in their interpretation of the term “probable”, which is consistent with the literature in general. Since “probable” is the most frequent verbal probability expression used in the IAS/IFRS, this may have a negative impact on financial statement comparability.
Abstract:
With the liberalization of the electricity market, distribution and retail companies are looking for better market strategies based on adequate information about the consumption patterns of their electricity consumers. A fair insight into consumers’ behavior will permit the definition of specific contract aspects based on the different consumption patterns. In order to form the different consumer classes and find a set of representative consumption patterns, we use electricity consumption data from a utility client’s database and two approaches: the Two-step clustering algorithm and the WEACS approach, which is based on evidence accumulation (EAC) for combining partitions in a clustering ensemble. While EAC uses a voting mechanism to produce a co-association matrix based on the pairwise associations obtained from N partitions, where each partition has equal weight in the combination process, the WEACS approach uses subsampling and weights the partitions differently. As a complementary step to the WEACS approach, we combine the partitions obtained in WEACS with the ALL clustering ensemble construction method and use the Ward Link algorithm to obtain the final data partition. The characterization of the obtained consumer clusters was performed using the C5.0 classification algorithm. Experimental results showed that the WEACS approach leads to better results than many other clustering approaches.
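A minimal sketch of the evidence-accumulation idea referred to above: a co-association matrix built from N base partitions, followed by hierarchical clustering with Ward linkage on the induced distances. The load-curve data, the number of base partitions and the cluster counts are assumptions; the subsampling and weighting specific to WEACS are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
X = rng.random((200, 24))           # hypothetical daily load curves (200 consumers x 24 hours)
n_partitions, k_final = 30, 5

# Co-association matrix: fraction of partitions in which two consumers fall in the same cluster.
coassoc = np.zeros((len(X), len(X)))
for _ in range(n_partitions):
    labels = KMeans(n_clusters=int(rng.integers(4, 10)), n_init=5).fit_predict(X)
    coassoc += (labels[:, None] == labels[None, :])
coassoc /= n_partitions

# Turn co-association into a distance and extract the final partition with Ward linkage.
dist = 1.0 - coassoc
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="ward")
final_labels = fcluster(Z, t=k_final, criterion="maxclust")
print(np.bincount(final_labels)[1:])  # size of each consumer class
```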
Abstract:
The paper proposes a methodology to increase the probability of delivering power to any load point by identifying new investments in distribution energy systems. The proposed methodology is based on statistical failure and repair data of distribution components and uses fuzzy-probabilistic modeling of the components’ outage parameters. The fuzzy membership functions of the outage parameters of each component are based on statistical records. A mixed-integer nonlinear programming optimization model is developed in order to identify the adequate investments in distribution energy system components that allow increasing the probability of delivering power to any customer in the distribution system at the minimum possible cost for the system operator. To illustrate the application of the proposed methodology, the paper includes a case study that considers a 180-bus distribution network.
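A minimal illustration of how component outage statistics translate into the probability of delivering power to a load point. The series feeder structure, failure rates and repair times below are hypothetical and stand in for the fuzzy-probabilistic model and optimization actually used in the paper.

```python
# Hypothetical radial feeder: the load point is supplied only if every series
# component (transformer, two line sections, breaker) is available.
components = {
    "HV/MV transformer": {"failures_per_year": 0.05, "repair_hours": 12.0},
    "line section A":    {"failures_per_year": 0.20, "repair_hours": 4.0},
    "line section B":    {"failures_per_year": 0.15, "repair_hours": 4.0},
    "feeder breaker":    {"failures_per_year": 0.02, "repair_hours": 6.0},
}

HOURS_PER_YEAR = 8760.0
availability = 1.0
for name, c in components.items():
    unavailability = c["failures_per_year"] * c["repair_hours"] / HOURS_PER_YEAR
    availability *= (1.0 - unavailability)
    print(f"{name:18s} unavailability = {unavailability:.2e}")

print(f"probability of delivering power to the load point: {availability:.6f}")
```

An investment (e.g. adding a redundant line section) would change these parameters, and the optimization model weighs that reliability gain against its cost.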
Abstract:
Electricity markets are complex environments, involving a large number of different entities that play in a dynamic scene to obtain the best advantages and profits. MASCEM is a multi-agent electricity market simulator that models market players and simulates their operation in the market. Market players are entities with specific characteristics and objectives, making their decisions and interacting with other players. MASCEM provides several dynamic strategies for agents’ behavior. This paper presents a method that aims to provide market players with strategic bidding capabilities, allowing them to obtain the highest possible gains from the market. This method uses a reinforcement learning algorithm to learn from experience how to choose the best from a set of possible bids. These bids are defined according to the cost function that each producer presents.
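A minimal sketch of the reinforcement-learning idea described above: choosing among a discrete set of candidate bids with an epsilon-greedy, incrementally updated value estimate. The bid set, market-clearing behaviour and reward definition are assumptions for illustration, not the MASCEM implementation.

```python
import random

candidate_bids = [30.0, 35.0, 40.0, 45.0, 50.0]  # hypothetical bid prices (EUR/MWh)
marginal_cost = 32.0                              # hypothetical producer cost
q_values = {b: 0.0 for b in candidate_bids}
alpha, epsilon = 0.1, 0.2

def market_round(bid):
    """Toy market: the bid is accepted if it is below a noisy clearing price."""
    clearing_price = random.gauss(42.0, 5.0)
    return (clearing_price - marginal_cost) if bid <= clearing_price else 0.0

for episode in range(5000):
    # epsilon-greedy selection over candidate bids
    if random.random() < epsilon:
        bid = random.choice(candidate_bids)
    else:
        bid = max(q_values, key=q_values.get)
    reward = market_round(bid)
    q_values[bid] += alpha * (reward - q_values[bid])  # incremental value update

best = max(q_values, key=q_values.get)
print({b: round(v, 2) for b, v in q_values.items()}, "-> best bid:", best)
```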
Abstract:
Introduction: In twenty-first century society, the scientific research process has been growing steadily, and pharmaceutical research is one of its most enthusiastic and relevant fields. Here, it is very important to correlate observed functional alterations with possibly modified drug biodistribution patterns. Cancer, inflammation and infection are processes that induce many molecular intermediates, such as cytokines, chemokines and other chemical complexes, which can alter the pharmacokinetics of many drugs. One cause of such changes is thought to be the modulatory action of these complexes on P-glycoprotein (Pgp) activity, because they can act as inducers/inhibitors of MDR-1 expression. This protein results from the expression of the MDR-1 gene and acts as an ATP-dependent efflux pump, with substrates that include many drugs, such as antiretrovirals, anticancer agents, anti-infectives, immunosuppressants, steroids and opioids. Objectives: Given the lack of methods that provide helpful information for investigating in vivo molecular changes in Pgp activity during infection/inflammation processes, and the value of such information in explaining altered drug pharmacokinetics, this paper aims to evaluate the potential utility of 99mTc-Sestamibi scintigraphy in this kind of health sciences research. Although the ultimate aim is indeed to create a technique for the in vivo study of Pgp activity, this preliminary project only reaches the in vitro study phase, assumed as the first step in the evaluation period of a new tool. Materials and Methods: For that reason, we are performing in vitro studies of the influx and efflux of 99mTc-Sestamibi (a Pgp substrate) in a hepatocyte cell line (HepG2). We are interested in clarifying the cellular behaviour of this radiopharmaceutical in lipopolysaccharide (LPS)-stimulated cells (a well-known in vitro model of inflammation) in order to support this methodology. To validate the results, Pgp expression will finally be evaluated using the Western blot technique. Results: We do not yet have final results, but we already have enough data to suggest that LPS stimulation induces a downregulation of MDR-1, and consequently of Pgp, which could lead to a prolonged retention of 99mTc-Sestamibi in the inflamed cells. Conclusions: If and when this methodology demonstrates the promising results we expect, one will be able to conclude that Nuclear Medicine is an important tool to support evidence-based research in this specific field as well.
Abstract:
This work aimed to assess the energy performance and indoor air quality of a covered municipal swimming pool located in northern Portugal, with the following objectives: a general characterisation of the pool with regard to its different spaces and equipment; the calculation of thermal and electrical consumption; and the recording of pollutant concentrations for indoor air quality control, based on the legislation currently in force. The general characterisation of the pool revealed some non-conformities, such as the water temperature in the swimming tanks being above the recommended values and the first-aid room lacking direct access to the outside. In addition, the flooring in the showers of the women's bathroom and the pH values of the water in the large and small tanks are not always within the recommended range. The air-renewal flow rate is operated manually and, when running at 50% of its maximum capacity, which happens during part of the day, it only renews 77.5% of the flow rate recommended by the RSECE. To reach the recommended value, at least 7 hours with the flow rate at 100% of maximum capacity are required. The failure of air handling unit UTA2 meant that 40% of the daily records of indoor relative humidity were outside the recommended range; this humidity is strongly dependent on the outdoor humidity and can be aggravated when the glazed doors of the pool hall are opened. Comparing the amount of water removed by air dehumidification with the water evaporated under autumn-winter or spring-summer conditions, this study concluded that all combinations demonstrated the need for dehumidification, except the autumn-winter combination with UTA2 operating at 100% of its maximum capacity. The pipe insulation in the boiler room was inspected and compared with the solutions recommended by specialist companies, and some sections were found to be poorly installed, with partial or total degradation, leading to thermal losses. Evaporative heat losses accounted for about 67.78% of total losses. Accordingly, the application of a cover over the water surface during the pool's inactivity period (8 hours) was studied, and the result would be a saving of 654.8 kWh/day from avoided water evaporation, plus 88.00 kWh/day from the period with UTA2 operating at 50% of its capacity, giving a total of 742.8 kWh/day. The application of the cover yields a positive NPV and an IRR of 22.77%; since this value is higher than the WACC (Weighted Average Cost of Capital), the project becomes viable, with a payback of 3.17 years. The total daily electricity consumption was also characterised: the air-handling units, water circulation pumps, lighting and other equipment account for about 67.81%, 25.26%, 2.68% and 3.91%, respectively, of the total electricity consumed. Finally, the indoor air quality analysis of the pool hall in May and September showed that ozone concentrations were at the limit of acceptability in May and above the emission value in September. Volatile organic compounds also showed values 4.98 times higher in May and 6.87 times higher in September than the maximum values required by Decree-Law no. 79/2006. High radon concentrations were also recorded in the filter room: in May the value was 11.49 times the limit, and although it dropped in September to 1.08 times, it was still above the value required by Decree-Law no. 79/2006.
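The daily-saving figures quoted above follow from simple arithmetic; in the sketch below only the 654.8, 88.0 and 742.8 kWh/day values come from the study, while the energy tariff and cover investment cost are hypothetical placeholders, since the abstract does not state them.

```python
# Energy saved by covering the pool during the 8-hour inactivity period (from the study).
saving_no_evaporation = 654.8   # kWh/day avoided by suppressing water evaporation
saving_uta2_at_50pct  = 88.0    # kWh/day from UTA2 running at 50% capacity
total_daily_saving = saving_no_evaporation + saving_uta2_at_50pct
print(f"total saving: {total_daily_saving:.1f} kWh/day")   # 742.8 kWh/day

# Illustrative simple-payback calculation with *assumed* cost figures.
energy_price = 0.07          # EUR/kWh (assumed thermal energy cost)
cover_investment = 60000.0   # EUR (assumed cover price; not given in the abstract)
annual_saving = total_daily_saving * 365 * energy_price
print(f"simple payback: {cover_investment / annual_saving:.2f} years")
```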
Abstract:
The development of neonatal intensive care has led to an increase in the prevalence of children with low birth weight and associated morbidity. The objectives of this study are to verify (1) whether there is an association between birth weight (BW) and neuromotor performance; (2) whether the neuromotor performance of twins is within the normal range; and (3) whether intra-pair similarities in the neuromotor development of monozygotic (MZ) and dizygotic (DZ) twins are of unequal magnitude. The sample consisted of 191 children (78 MZ and 113 DZ), aged 8.9±3.1 years and with an average BW of 2246.3±485.4 g. In addition to gestational characteristics, sports participation and the Zurich Neuromotor Assessment (ZNA) were observed at childhood age. The statistical analysis was carried out with the SPSS 18.0 and STATA 10 software packages and the ZNA performance scores. The level of significance was 0.05. For the neuromotor items, high intra- and inter-investigator reliabilities were obtained (0.793
Abstract:
Due to the growing complexity and adaptability requirements of real-time systems, which often exhibit unrestricted Quality of Service (QoS) inter-dependencies among supported services and user-imposed quality constraints, it is increasingly difficult to optimise the level of service of a dynamic task set within a useful and bounded time. This is even more difficult when intending to benefit from the full potential of an open distributed cooperating environment, where service characteristics are not known beforehand and tasks may be inter-dependent. This paper focuses on optimising a dynamic local set of inter-dependent tasks that can be executed at varying levels of QoS to achieve an efficient resource usage that is constantly adapted to the specific constraints of devices and users, the nature of executing tasks, and dynamically changing system conditions. Extensive simulations demonstrate that the proposed anytime algorithms are able to quickly find a good initial solution and effectively optimise the rate at which the quality of the current solution improves as the algorithms are given more time to run, with a minimum overhead when compared against their traditional versions.
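A minimal sketch of the anytime-optimisation pattern discussed above: the search keeps a best-so-far assignment of per-task QoS levels, can be interrupted at any point, and tends to return better solutions as it is given more time. The task model, utilities and resource budget are hypothetical and ignore the inter-task dependencies handled by the actual algorithms.

```python
import random, time

# Hypothetical task set: each task offers selectable QoS levels as (utility, resource cost).
tasks = [
    [(1, 1.0), (3, 2.5), (5, 4.0)],   # task 0
    [(2, 1.5), (4, 3.0), (6, 5.0)],   # task 1
    [(1, 0.5), (2, 1.0), (4, 2.0)],   # task 2
]
budget = 7.0  # total resource capacity

def evaluate(config):
    utility = sum(tasks[i][lvl][0] for i, lvl in enumerate(config))
    cost = sum(tasks[i][lvl][1] for i, lvl in enumerate(config))
    return utility if cost <= budget else -1  # infeasible configurations are rejected

def anytime_optimise(deadline_s):
    best = [0] * len(tasks)            # quick feasible initial solution: lowest level everywhere
    best_value = evaluate(best)
    t0 = time.monotonic()
    while time.monotonic() - t0 < deadline_s:   # interruptible improvement loop
        candidate = [random.randrange(len(levels)) for levels in tasks]
        value = evaluate(candidate)
        if value > best_value:
            best, best_value = candidate, value
    return best, best_value

print(anytime_optimise(0.05))  # a longer deadline tends to yield a better configuration
```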
Abstract:
This technical report describes the PDFs which have been implemented to model the behaviours of certain parameters of the Repeater-Based Hybrid Wired/Wireless PROFIBUS Network Simulator (RHW2PNetSim) and Bridge-Based Hybrid Wired/Wireless PROFIBUS Network Simulator (BHW2PNetSim).
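As an illustration of how such PDFs are typically used inside a network simulator, the sketch below draws hypothetical frame lengths and inter-arrival times from configurable distributions; the actual distributions and parameters implemented in RHW2PNetSim and BHW2PNetSim are those defined in the report and are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical distribution choices for two simulator parameters.
def sample_frame_length(n):          # bytes, truncated normal as an example PDF
    return np.clip(rng.normal(loc=50, scale=15, size=n), 10, 246).astype(int)

def sample_interarrival_time(n):     # seconds, exponential as an example PDF
    return rng.exponential(scale=0.01, size=n)

for length, gap in zip(sample_frame_length(5), sample_interarrival_time(5)):
    print(f"frame of {length:3d} bytes generated after {gap * 1000:6.2f} ms")
```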
Abstract:
Due to the growing complexity and dynamism of many embedded application domains (including consumer electronics, robotics, automotive and telecommunications), it is increasingly difficult to react to load variations and adapt the system's performance in a controlled fashion within a useful and bounded time. This is particularly noticeable when intending to benefit from the full potential of an open distributed cooperating environment, where service characteristics are not known beforehand and tasks may exhibit unrestricted QoS inter-dependencies. This paper proposes a novel anytime adaptive QoS control policy in which the online search for the best set of QoS levels is combined with each user's personal preferences on their services' adaptation behaviour. Extensive simulations demonstrate that the proposed anytime algorithms are able to quickly find a good initial solution and effectively optimise the rate at which the quality of the current solution improves as the algorithms are given more time to run, with a minimum overhead when compared against their traditional versions.
Abstract:
In this paper, we analyze the performance limits of the slotted CSMA/CA mechanism of IEEE 802.15.4 in the beacon-enabled mode for broadcast transmissions in WSNs. The motivation for evaluating the beacon-enabled mode is due to its flexibility for WSN applications as compared to the non-beacon-enabled mode. Our analysis is based on an accurate simulation model of the slotted CSMA/CA mechanism on top of a realistic physical layer, with respect to the IEEE 802.15.4 standard specification. The performance of the slotted CSMA/CA is evaluated and analyzed for different network settings to understand the impact of the protocol attributes (superframe order, beacon order and backoff exponent) on the network performance, namely in terms of throughput (S), average delay (D) and probability of success (Ps). We introduce the concept of utility (U) as a combination of two or more metrics, to determine the best offered load range for an optimal behavior of the network. We show that the optimal network performance using slotted CSMA/CA occurs in the offered load range of 35% to 60%, with respect to a utility function proportional to the network throughput (S) divided by the average delay (D).
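A minimal worked example of the utility metric introduced above, U proportional to S/D, computed over a hypothetical sweep of offered load; the throughput and delay figures are illustrative, not simulation results from the paper.

```python
# Hypothetical (offered load G, throughput S, average delay D in seconds) triples.
results = [
    (0.2, 0.18, 0.015),
    (0.4, 0.33, 0.020),
    (0.6, 0.40, 0.035),
    (0.8, 0.42, 0.070),
    (1.0, 0.41, 0.120),
]

# Utility proportional to throughput divided by average delay: U = S / D.
for G, S, D in results:
    U = S / D
    print(f"G = {G:.1f}  S = {S:.2f}  D = {D * 1000:5.1f} ms  U = {U:6.1f}")
# The offered-load region where U peaks identifies the best operating range.
```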
Abstract:
The IEEE 802.15.4 protocol has been adopted as a communication standard for Low-Rate Wireless Personal Area Networks (LR-WPANs). While it appears to be a promising candidate solution for Wireless Sensor Networks (WSNs), its adequacy must be carefully evaluated. In this paper, we analyze the performance limits of the slotted CSMA/CA medium access control (MAC) mechanism in the beacon-enabled mode for broadcast transmissions in WSNs. The motivation for evaluating the beacon-enabled mode is due to its flexibility and potential for WSN applications as compared to the non-beacon-enabled mode. Our analysis is based on an accurate simulation model of the slotted CSMA/CA mechanism on top of a realistic physical layer, with respect to the IEEE 802.15.4 standard specification. The performance of the slotted CSMA/CA is evaluated and analyzed for different network settings to understand the impact of the protocol attributes (superframe order, beacon order and backoff exponent), the number of nodes and the data frame size on the network performance, namely in terms of throughput (S), average delay (D) and probability of success (Ps). We also analytically evaluate the impact of the slotted CSMA/CA overheads on the saturation throughput. We introduce the concept of utility (U) as a combination of two or more metrics, to determine the best offered load range for an optimal behavior of the network. We show that the optimal network performance using slotted CSMA/CA occurs in the offered load range of 35% to 60%, with respect to a utility function proportional to the network throughput (S) divided by the average delay (D).
Abstract:
In practice, robotic manipulators present some degree of unwanted vibration. The advent of lightweight arm manipulators, mainly in the aerospace industry, where weight is an important issue, leads to the problem of intense vibrations. On the other hand, robots interacting with the environment often generate impacts that propagate through the mechanical structure and also produce vibrations. In order to analyze these phenomena, a robot signal acquisition system was developed. The manipulator motion produces vibrations, either from the structural modes or from end-effector impacts. The instrumentation system acquires signals from several sensors that capture the joint positions, mass accelerations, forces and moments, and electrical currents in the motors. Afterwards, an analysis package, running off-line, reads the data recorded by the acquisition system and extracts the signal characteristics. Due to the multiplicity of sensors, the data obtained can be redundant, because the same type of information may be seen by two or more sensors. Given the price of the sensors, this aspect can be considered in order to reduce the cost of the system. On the other hand, the placement of the sensors is an important issue in order to obtain suitable signals of the vibration phenomenon. Moreover, the study of these issues can help in the design optimization of the acquisition system. In this line of thought, a sensor classification scheme is presented. Several authors have addressed the subject of sensor classification schemes. White (White, 1987) presents a flexible and comprehensive categorizing scheme that is useful for describing and comparing sensors. The author organizes the sensors according to several aspects: measurands, technological aspects, detection means, conversion phenomena, sensor materials and fields of application. Michahelles and Schiele (Michahelles & Schiele, 2003) systematize the use of sensor technology. They identified several dimensions of sensing that represent the sensing goals for physical interaction. A conceptual framework is introduced that allows categorizing existing sensors and evaluating their utility in various applications. This framework not only guides application designers in choosing meaningful sensor subsets, but can also inspire new systems and lead to the evaluation of existing applications. Today's technology offers a wide variety of sensors. In order to use all the data from this diversity of sensors, a framework of integration is needed. Sensor fusion, fuzzy logic, and neural networks are often mentioned when dealing with the problem of combining information from several sensors to get a more general picture of a given situation. The study of data fusion has been receiving considerable attention (Esteban et al., 2005; Luo & Kay, 1990). A survey of the state of the art in sensor fusion for robotics can be found in (Hackett & Shah, 1990). Henderson and Shilcrat (Henderson & Shilcrat, 1984) introduced the concept of the logic sensor, which defines an abstract specification of the sensors to integrate in a multisensor system. The recent development of micro electro-mechanical sensors (MEMS) with wireless communication capabilities allows sensor networks with interesting capabilities. This technology has been applied in several areas (Arampatzis & Manesis, 2005), including robotics. Cheekiralla and Engels (Cheekiralla & Engels, 2005) propose a classification of wireless sensor networks according to their functionalities and properties.
This paper presents the development of a sensor classification scheme based on the frequency spectrum of the signals and on statistical metrics. Bearing these ideas in mind, this paper is organized as follows. Section 2 briefly describes the robotic system enhanced with the instrumentation setup. Section 3 presents the experimental results. Finally, section 4 draws the main conclusions and points out future work.
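A minimal sketch of the kind of signal characterisation such a classification scheme relies on, computing a frequency spectrum and a few statistical metrics per sensor channel; the synthetic signals below are stand-ins for the recorded joint positions, accelerations, forces/moments and motor currents.

```python
import numpy as np

fs = 1000.0                           # sampling frequency (Hz), assumed
t = np.arange(0, 2.0, 1.0 / fs)

# Synthetic stand-ins for two sensor channels: a structural vibration mode plus noise.
channels = {
    "accelerometer": np.sin(2 * np.pi * 45.0 * t) + 0.3 * np.random.randn(t.size),
    "motor current": 0.5 * np.sin(2 * np.pi * 5.0 * t) + 0.1 * np.random.randn(t.size),
}

for name, x in channels.items():
    spectrum = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    dominant = freqs[np.argmax(spectrum)]
    # Simple statistical metrics used to compare and group the channels.
    std, rms = x.std(), np.sqrt(np.mean(x ** 2))
    print(f"{name:14s} dominant frequency: {dominant:5.1f} Hz  std: {std:.2f}  rms: {rms:.2f}")
```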