986 results for 4-LEVEL SYSTEMS
Abstract:
In recent decades, competition in the electric power sector has deeply changed the way this sector's agents play their roles all over the world. In most countries, deregulation of the electricity sector was conducted in stages, beginning with the clients with higher voltage levels and larger electricity consumption, and later extended to all electricity consumers. The liberalization of the sector and the operation of competitive electricity markets were expected to lower prices and improve quality of service, leading to greater consumer satisfaction. Transmission and distribution remain noncompetitive business areas, due to the large infrastructure investments required. However, the industry has yet to clearly establish the best business model for transmission in a competitive environment. After generation, the electricity needs to be delivered to the electrical system nodes where demand requires it, taking into consideration transmission constraints and electrical losses. If the amount of power flowing through a certain line is close to or surpasses its safety limits, then cheap but distant generation might have to be replaced by more expensive generation located closer to the load, in order to reduce the excess power flows. In a congested area, the optimal price of electricity rises to the marginal cost of the local generation, or to the level needed to ration demand to the amount of available electricity. Even without congestion, some power is lost in the transmission system through heat dissipation, so prices reflect that it is more expensive to supply electricity at the far end of a heavily loaded line than close to a generation site. Locational marginal prices (LMPs), resulting from bidding competition, represent electrical and economic values at nodes or in areas and can provide economic indicator signals to market agents. This article proposes a data-mining-based methodology that helps characterize zonal prices in real power transmission networks. To test our methodology, we used an LMP database from the California Independent System Operator (CAISO) for 2009 to identify economic zones. (CAISO is a nonprofit public benefit corporation charged with operating the majority of California's high-voltage wholesale power grid.) To group the buses into typical classes, each representing a set of buses with approximately the same LMP value, we used two-step and k-means clustering algorithms. By analyzing the various LMP components, our goal was to extract knowledge to support the ISO in investment and network-expansion planning.
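As a rough illustration of the bus-grouping step, the sketch below clusters per-bus LMP profiles with k-means. The data layout (one row per bus, one column per hour), the normalization, and the choice of five zones are assumptions made for this example; the paper applies two-step and k-means clustering to the CAISO 2009 LMP database.

```python
# Illustrative sketch: grouping buses into price zones by clustering their LMP profiles.
# The input layout and k=5 are assumptions for illustration, not the paper's settings.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def cluster_lmp_zones(lmp_matrix: np.ndarray, n_zones: int = 5, seed: int = 0):
    """lmp_matrix: shape (n_buses, n_hours), hourly LMPs per bus."""
    scaled = StandardScaler().fit_transform(lmp_matrix)            # normalize each hour
    km = KMeans(n_clusters=n_zones, n_init=10, random_state=seed).fit(scaled)
    return km.labels_, km.cluster_centers_                         # zone label per bus

# Example with synthetic data: 100 buses, 24 hourly prices each.
labels, centers = cluster_lmp_zones(np.random.default_rng(0).normal(40, 5, (100, 24)))
print(np.bincount(labels))  # number of buses assigned to each identified price zone
```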
Abstract:
OBJECTIVE: Various support measures useful for promoting joint change approaches to the improvement of both shiftworking arrangements and safety and health management systems were reviewed. A particular focus was placed on enterprise-level risk reduction measures linking working hours and management systems. METHODS: Voluntary industry-based guidelines on night and shift work for department stores and the chemical, automobile and electrical equipment industries were examined. Survey results that had led to the compilation of practicable measures to be included in these guidelines were also examined. The common support measures were then compared with ergonomic checkpoints for plant maintenance work involving irregular nightshifts. On the basis of this analysis, a new night and shift work checklist was designed. RESULTS: Both the guidelines and the plant maintenance work checkpoints were found to commonly cover multiple issues including work schedules and various job-related risks. This close link between shiftwork arrangements and risk management was important as shiftworkers in these industries considered teamwork and welfare services to be essential for managing risks associated with night and shift work. Four areas found suitable for participatory improvement by managers and workers were work schedules, ergonomic work tasks, work environment and training. The checklist designed to facilitate participatory change processes covered all these areas. CONCLUSIONS: The checklist developed to describe feasible workplace actions was suitable for integration with comprehensive safety and health management systems and offered valuable opportunities for improving working time arrangements and job content together.
Abstract:
Shopping centers present a rich and heterogeneous environment where IT systems can be implemented in order to support the needs of their actors. However, due to the complexity of the environment, several feasibility issues emerge when designing both the logical and physical architecture of such systems. Additionally, the system must be able to cope with the individual needs of each actor and provide services that are easily adopted by them, taking into account several sociological and economic aspects. In this sense, we present an overview of current support systems for shopping center environments. From this overview, a high-level model of the domain (involving actors and services) is described, along with challenges and possible features in the context of current Semantic Web, mobile device and sensor technologies.
Abstract:
A novel high-throughput and scalable unified architecture for the computation of the transform operations in video codecs for advanced standards is presented in this paper. This structure can be used as a hardware accelerator in modern embedded systems to efficiently compute all the two-dimensional 4 x 4 and 2 x 2 transforms of the H.264/AVC standard. Moreover, its highly flexible design and hardware efficiency allow it to be easily scaled, in terms of performance and hardware cost, to meet the specific requirements of any given video coding application. Experimental results obtained using a Xilinx Virtex-5 FPGA demonstrated the superior performance and hardware efficiency levels provided by the proposed structure, which presents a throughput per unit of area higher than those of other similar recently published designs targeting the H.264/AVC standard. Such results also showed that, when integrated in a multi-core embedded system, this architecture provides speedup factors of about 120x relative to pure software implementations of the transform algorithms, therefore allowing the computation, in real time, of all the above-mentioned transforms for Ultra High Definition Video (UHDV) sequences (4,320 x 7,680 @ 30 fps).
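For clarity, the sketch below is a plain software reference model of two of the H.264/AVC transforms such an accelerator computes: the 4x4 forward integer core transform and the 2x2 Hadamard transform applied to chroma DC coefficients. It describes the arithmetic only, not the hardware architecture proposed in the paper.

```python
# Scalar reference model of the 4x4 forward integer core transform (Y = Cf · X · Cf^T)
# and the 2x2 Hadamard transform of H.264/AVC. Software sketch for illustration only.
import numpy as np

CF4 = np.array([[1,  1,  1,  1],
                [2,  1, -1, -2],
                [1, -1, -1,  1],
                [1, -2,  2, -1]])

H2 = np.array([[1,  1],
               [1, -1]])

def forward_core_4x4(block: np.ndarray) -> np.ndarray:
    """4x4 residual block -> transform coefficients (integer arithmetic only)."""
    return CF4 @ block @ CF4.T

def hadamard_2x2(dc_block: np.ndarray) -> np.ndarray:
    """2x2 chroma DC block -> Hadamard-transformed coefficients."""
    return H2 @ dc_block @ H2.T

residual = np.arange(16).reshape(4, 4)   # toy residual block
print(forward_core_4x4(residual))
```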
Abstract:
Master's degree in Electrical and Computer Engineering
Abstract:
Introduction: Paper and thin layer chromatography methods are frequently used in classic Nuclear Medicine for the determination of radiochemical purity (RCP) of radiopharmaceutical preparations. An aliquot of the radiopharmaceutical to be tested is spotted at the origin of a chromatographic strip (stationary phase), which in turn is placed in a chromatographic chamber in order to separate and quantify the radiochemical species present in the preparation. There are several methods for the RCP measurement, based on the use of equipment such as dose calibrators, well scintillation counters, radiochromatographic scanners and gamma cameras. The purpose of this study was to compare these quantification methods for the determination of RCP. Material and Methods: 99mTc-Tetrofosmin and 99mTc-HDP were the radiopharmaceuticals chosen to serve as the basis for this study. For the determination of the RCP of 99mTc-Tetrofosmin we used ITLC-SG (2.5 x 10 cm) and 2-butanone (99mTc-tetrofosmin Rf = 0.55, 99mTcO4- Rf = 1.0, other labeled impurities 99mTc-RH Rf = 0.0). For the determination of the RCP of 99mTc-HDP, Whatman 31ET and acetone were used (99mTc-HDP Rf = 0.0, 99mTcO4- Rf = 1.0, other labeled impurities Rf = 0.0). After the development of the solvent front, the strips were allowed to dry and were then imaged on the gamma camera (256x256 matrix; zoom 2; LEHR parallel-hole collimator; 5-minute image) and on the radiochromatogram scanner. The strips were then cut at Rf 0.8 in the case of 99mTc-tetrofosmin and at Rf 0.5 in the case of 99mTc-HDP. The resulting pieces were crushed into an assay tube (to minimize the effect of counting geometry) and counted in the dose calibrator and in the well scintillation counter (for 1 minute). The RCP was calculated using the formula: % 99mTc-Complex = [(99mTc-Complex) / (Total amount of 99mTc-labeled species)] x 100. Statistical analysis was done using the test of hypotheses for the difference between means in independent samples. Results: The gamma-camera-based method showed higher operator dependency (especially concerning the drawing of the ROIs), and the measurements obtained using the dose calibrator are very sensitive to the amount of activity spotted on the chromatographic strip, so the use of a minimum activity of 3.7 MBq is essential to minimize quantification errors. The radiochromatographic scanner and the well scintillation counter showed concordant results and demonstrated the highest level of precision. Conclusions: The methods based on radiochromatographic scanners and well scintillation counters proved to be the most accurate and least operator-dependent.
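A minimal worked example of the RCP formula quoted above, with invented count values used purely to illustrate the arithmetic:

```python
# %RCP = activity of the 99mTc-complex / total activity of all 99mTc-labelled species x 100.
# The counts below are hypothetical, not measured data from the study.
def radiochemical_purity(complex_counts: float, impurity_counts: float) -> float:
    total = complex_counts + impurity_counts
    return 100.0 * complex_counts / total

# e.g. 94 500 counts in the strip segment containing the complex, 3 200 in the rest
print(f"RCP = {radiochemical_purity(94_500, 3_200):.1f}%")   # -> RCP = 96.7%
```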
Abstract:
This document is a survey of the research area of User Modeling (UM) for the specific field of Adaptive Learning. The aims of this document are: to define what a User Model is; to present existing and well-known User Models; to analyze the existing standards related to UM; and to compare existing systems. In the scientific area of User Modeling, numerous research prototypes and developed systems already seem to promise good results, but further experimentation and implementation are still necessary to draw conclusions about the utility of UM; that is, experimentation with and implementation of these systems are still too scarce to determine the utility of some of the applications referred to. At present, Student Modeling research is moving in the direction of making it possible to reuse a student model in different systems. Standards are increasingly relevant to this end, allowing systems to communicate and share data, components and structures at the syntactic and semantic levels, even if most of them still only allow syntactic integration.
Abstract:
Dissertation submitted to obtain the Master's degree in Electrical Engineering, Energy Branch
Abstract:
The work presented here was conducted within the Dissertation/Internship of the Environmental Protection Technology branch of the Master's degree in Chemical Engineering at the Instituto Superior de Engenharia do Porto, and was developed at Aquatest a.s., headquartered in Prague, Czech Republic. Ore mining in the Czech Republic began in the thirteenth century and continued until the twentieth century, and the consequences of this intensive extraction are now evident, including contamination of soil and subsoil by high concentrations of heavy metals. The mountain region of Zlaté Hory was chosen for the implementation of the remediation project, which consisted of the construction of three cells (tanks): the first to raise the pH, the second for the sedimentation of the precipitates formed, and a third to increase the efficiency of the process, in order to reduce the high concentrations of metals, with special emphasis on iron, manganese and sulfates. This project was initiated in 2005, being a pioneer in this country, and is still ongoing due to the complex chemical and biological phenomena inherent to the system. At the site where the project was implemented there is a natural lagoon, which enables a comparative study of the two systems (natural and artificial) regarding their efficiency in the reduction/removal of the pollutants referred to above. The study aimed to assist and cooperate in the ongoing investigation at Aquatest, both in the field work conducted in Zlaté Hory and in the research methodologies used in it. Accordingly, a survey and analysis of the data available from 2005 to 2008 was carried out, complemented by the treatment of new data from 2009 to 2010. Moreover, a theoretical study of the chemical and biological processes occurring in both systems was performed. Regarding the field work, the author participated actively in the collection and in situ analysis of water and soil samples from the natural pond, under the supervision of Engineer Irena Šupiková. Laboratory analyses of water and soil were carried out by laboratory technicians. It was found that the natural lagoon is more efficient in reducing iron and manganese, with removal percentages of 100% being obtained. The artificial lagoon had removal percentages of 90% and 33% for iron and manganese, respectively. Despite the lower efficiency of the constructed wetland, it must be pointed out that this system was designed for the treatment and consequent reduction of iron; in this context, it can be concluded that the main goal has been achieved. In the case of sulfates, the optimization of removal is still a goal to be achieved, not only in the Czech Republic but also in other places where this type of contamination persists. In fact, removal efficiencies of 45% and 7% were obtained in the natural lagoon and in the constructed wetland, respectively. It has been speculated that the water at the entrance of the two systems has different sources. The analysis of the collected data shows, at the entrance of the natural pond, concentrations of 4.6 mg/L of total iron, 14.6 mg/L of manganese and 951 mg/L of sulfates. In the artificial pond, the concentrations are 27.7 mg/L, 8.1 mg/L and 382 mg/L for iron, manganese and sulfates, respectively. During 2010 the investigation was expanded: the study of soil samples, still in its early phase, was started in order to observe and evaluate the contribution of bacteria to the removal of heavy metals.
In summary, this technology has proved to be an interesting solution: in addition to substantially reducing the contaminants mentioned, mostly iron, it combines a low implementation cost with reduced maintenance, and it can also be installed in recreation parks, providing habitats for plants and birds.
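A small sketch of the removal-efficiency arithmetic behind the percentages quoted above. The inlet concentrations are those reported for the ponds; the outlet values are hypothetical placeholders chosen only so the computed efficiencies match the reported removal percentages.

```python
# efficiency (%) = (C_in - C_out) / C_in * 100
def removal_efficiency(c_in: float, c_out: float) -> float:
    return 100.0 * (c_in - c_out) / c_in

# Natural pond, total iron: 4.6 mg/L in, ~0 mg/L out (hypothetical) -> ~100 % removal
print(f"{removal_efficiency(4.6, 0.0):.0f}%")
# Artificial pond, total iron: 27.7 mg/L in, ~2.8 mg/L out (hypothetical) -> ~90 % removal
print(f"{removal_efficiency(27.7, 2.8):.0f}%")
```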
Abstract:
OBJECTIVE: To assess a new impunity index, together with variables found to predict variation in homicide rates at other geographical levels, as predictors of state-level homicide rates in Brazil. METHODS: This was a cross-sectional ecological study. Data from the mortality information system relating to the 27 Brazilian states for the years 1996 to 2005 were analyzed. The outcome variables were homicide victim rates in 2005, for the entire population and for men aged 20-29 years. Measurements of economic and social development, economic inequality, demographic structure and life expectancy were analyzed as predictors. An "impunity index", calculated as the total number of homicides between 1996 and 2005 divided by the number of individuals in prison in 2007, was constructed. The data were analyzed by means of simple linear regression and negative binomial regression. RESULTS: In 2005, state-level crude total homicide rates ranged from 11 to 51 per 100,000; for young men, they ranged from 39 to 241. The impunity index ranged from 0.4 to 3.5 and was the most important predictor of this variability. Negative binomial regression estimated that the homicide victim rate among young males increased by 50% for every one-point increase in this index. CONCLUSIONS: The classic predictive factors were not associated with homicides in this analysis of state-level variation in Brazil. However, the impunity index indicated that the greater the impunity, the higher the homicide rate.
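The sketch below illustrates the two quantitative ingredients described above: constructing the impunity index (homicides 1996-2005 divided by prisoners in 2007) and fitting a negative binomial regression of homicide counts on it. The column names and the synthetic state-level numbers are assumptions for illustration only, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
states = pd.DataFrame({
    "homicides_1996_2005": rng.integers(5_000, 60_000, 27),   # synthetic values
    "prisoners_2007":      rng.integers(3_000, 40_000, 27),
    "homicides_2005":      rng.integers(300, 6_000, 27),
    "population_2005":     rng.integers(500_000, 40_000_000, 27),
})
states["impunity_index"] = states["homicides_1996_2005"] / states["prisoners_2007"]

X = sm.add_constant(states[["impunity_index"]])
model = sm.GLM(states["homicides_2005"], X,
               family=sm.families.NegativeBinomial(),
               offset=np.log(states["population_2005"]))   # models a rate per inhabitant
fit = model.fit()
print(fit.params)   # exp(coef on impunity_index) ~ rate ratio per one-point increase
```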
Abstract:
We show that in two Higgs doublet models, at tree level, the potential minimum that preserves the electric charge and CP symmetries, when it exists, is the global one. Furthermore, we derive a very simple condition, involving only the coefficients of the quartic terms of the potential, that guarantees spontaneous CP breaking. (C) 2004 Elsevier B.V. All rights reserved.
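For context, the quartic coefficients mentioned are those of the general two-Higgs-doublet scalar potential, written below in a common textbook convention; this is standard notation added for reference, not a formula quoted from the paper.

```latex
V(\Phi_1,\Phi_2) =
    m_{11}^2\,\Phi_1^\dagger\Phi_1 + m_{22}^2\,\Phi_2^\dagger\Phi_2
  - \left(m_{12}^2\,\Phi_1^\dagger\Phi_2 + \mathrm{h.c.}\right)
  + \tfrac{\lambda_1}{2}\left(\Phi_1^\dagger\Phi_1\right)^2
  + \tfrac{\lambda_2}{2}\left(\Phi_2^\dagger\Phi_2\right)^2
  + \lambda_3\left(\Phi_1^\dagger\Phi_1\right)\left(\Phi_2^\dagger\Phi_2\right)
  + \lambda_4\left(\Phi_1^\dagger\Phi_2\right)\left(\Phi_2^\dagger\Phi_1\right)
  + \left[\tfrac{\lambda_5}{2}\left(\Phi_1^\dagger\Phi_2\right)^2
  + \lambda_6\left(\Phi_1^\dagger\Phi_1\right)\left(\Phi_1^\dagger\Phi_2\right)
  + \lambda_7\left(\Phi_2^\dagger\Phi_2\right)\left(\Phi_1^\dagger\Phi_2\right)
  + \mathrm{h.c.}\right]
```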
Abstract:
Modern real-time systems increasingly generate heavy and dynamic computational workloads, and it is becoming unrealistic to expect them to be implemented on uniprocessor systems. In fact, the shift from single-processor to multiprocessor systems can be seen, both in the general-purpose and in the embedded domain, as an energy-efficient way to improve application performance. At the same time, the proliferation of multiprocessor platforms has turned parallel programming into a topic of great interest, with dynamic parallelism quickly gaining popularity as a programming model. The idea behind this model is to encourage programmers to expose all opportunities for parallelism by simply indicating potential parallel regions within the applications. All these annotations are treated by the system merely as hints, and may be ignored and replaced by equivalent sequential constructs by the language itself. How the computation is actually subdivided and mapped onto the various processors is thus the responsibility of the compiler and of the underlying computing system. By removing this burden from the programmer, programming complexity is considerably reduced, which usually translates into an increase in productivity. However, if the underlying scheduling mechanism is not simple and fast, so as to keep the overall overhead low, the benefits of generating such fine-grained parallelism will be merely hypothetical. From this scheduling perspective, algorithms employing a work-stealing policy are increasingly popular, with proven efficiency in terms of time, space and communication requirements. However, these algorithms take into account neither timing constraints nor any other form of task prioritization, which prevents them from being directly applied to real-time systems. Moreover, they are traditionally implemented in the language runtime, creating a two-level scheduling system in which the predictability essential to a real-time system cannot be guaranteed. This thesis describes how the work-stealing approach can be redesigned to meet real-time requirements while preserving the fundamental principles that have produced such good results. Very briefly, the single conventional task-management queue (deque) is replaced by a queue of deques, ordered by increasing task priority. On top of this we apply the well-known G-EDF dynamic scheduling algorithm, blend the rules of both, and thus our proposal is born: the RTWS scheduling algorithm. Taking advantage of the modularity offered by the Linux scheduler, RTWS is added as a new scheduling class, in order to evaluate in practice whether the proposed algorithm is viable, that is, whether it guarantees the desired efficiency and schedulability. Modifying the Linux kernel is a complicated task, owing to the complexity of its internal functions and the strong interdependencies between its various subsystems. Nevertheless, one of the goals of this thesis was to make sure that RTWS is more than an interesting concept.
Accordingly, a significant part of this document is devoted to discussing the implementation of RTWS and to exposing problematic situations, many of them not considered in theory, such as the mismatch between the various synchronization mechanisms. The experimental results show that RTWS, compared with other practical work on dynamic scheduling of tasks with timing constraints, significantly reduces scheduling overhead through efficient and scalable control of migrations and context switches (at least up to 8 CPUs), while achieving good dynamic load balancing of the system, and doing so at low cost. However, during the evaluation a flaw was detected in the RTWS implementation: it gives up stealing work too easily, which causes idle periods on the CPU in question when overall system utilization is low. Although the work carried out focused on keeping the scheduling cost low and on achieving good data locality, the schedulability of the system was never neglected. In fact, the proposed scheduling algorithm proved to be quite robust, missing no deadline in the experiments performed. We can therefore state that some priority inversion, caused by the BAS stealing sub-policy, does not compromise the schedulability goals and even helps to reduce contention on the data structures. Even so, RTWS also supports a deterministic stealing sub-policy: PAS. The experimental evaluation, however, did not give a clear picture of the impact of each one. Nevertheless, in general terms, we can conclude that RTWS is a promising solution for the efficient scheduling of parallel tasks with timing constraints.
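A minimal sketch of the core data structure described above: the single conventional work-stealing deque is replaced, per CPU, by a priority-ordered collection of deques keyed by absolute deadline, so that the earliest-deadline (G-EDF) work is always taken first. The names and overall shape are illustrative only; the actual RTWS scheduler is implemented as a Linux scheduling class, not in Python.

```python
from collections import deque

class PriorityWorkStealingQueue:
    """Per-CPU queue of deques, ordered by absolute deadline (earliest first)."""
    def __init__(self):
        self.deques: dict[int, deque] = {}   # absolute deadline -> deque of tasks

    def push(self, deadline: int, task) -> None:
        self.deques.setdefault(deadline, deque()).append(task)

    def _earliest(self):
        return min(self.deques) if self.deques else None

    def pop_local(self):
        """Owner takes work from the earliest-deadline deque (LIFO end)."""
        d = self._earliest()
        if d is None:
            return None
        task = self.deques[d].pop()
        if not self.deques[d]:
            del self.deques[d]
        return task

    def steal(self):
        """A thief takes work from the earliest-deadline deque (FIFO end)."""
        d = self._earliest()
        if d is None:
            return None
        task = self.deques[d].popleft()
        if not self.deques[d]:
            del self.deques[d]
        return task
```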
Abstract:
Purpose - The study evaluates the pre- and post-training lesion localisation ability of a group of novice observers. Parallels are drawn with the performance of inexperienced radiographers taking part in preliminary clinical evaluation (PCE) and ‘red-dot’ systems operating within radiography practice. Materials and methods - Thirty-four novice observers searched 92 images for simulated lesions. Pre-training and post-training evaluations were completed following the free-response receiver operating characteristic (FROC) method. Training consisted of observer performance methodology, the characteristics of the simulated lesions and information on lesion frequency. Jackknife alternative FROC (JAFROC) and highest-rating inferred ROC analyses were performed to evaluate the performance difference on lesion-based and case-based decisions. The significance level of the test was set at 0.05 to control the probability of Type I error. Results - JAFROC analysis (F(3,33) = 26.34, p < 0.0001) and highest-rating inferred ROC analysis (F(3,33) = 10.65, p = 0.0026) revealed a statistically significant difference in lesion detection performance. The JAFROC figure-of-merit was 0.563 (95% CI 0.512, 0.614) pre-training and 0.677 (95% CI 0.639, 0.715) post-training. The highest-rating inferred ROC figure-of-merit was 0.728 (95% CI 0.701, 0.755) pre-training and 0.772 (95% CI 0.750, 0.793) post-training. Conclusions - This study has demonstrated that novice observer performance can improve significantly with training. The study design may have relevance in the assessment of inexperienced radiographers taking part in PCE or a commenting scheme for trauma.
Abstract:
Background Information: The incorporation of distance learning activities by institutions of higher education is considered an important contribution to creating new opportunities for teaching, at both initial and continuing training levels. In Medicine and Nursing, papers illustrating the adaptation of technological components and teaching methods are prolific; however, when we look at the Pharmaceutical Education area, the examples are scarce. In that sense, this project demonstrates the implementation and assessment of a B-Learning strategy for Therapeutics using a “case based learning” approach. Setting: Academic Pharmacy. Methods: This is an exploratory study involving 2nd-year students of the Pharmacy Degree at the School of Allied Health Sciences of Oporto. The study population consists of 61 students, divided into groups of 3-4 elements. The b-learning model was implemented over a period of 8 weeks. Results: A b-learning environment and digital learning objects were successfully created and implemented. Collaboration and assessment techniques were carefully developed to ensure the active participation and fair assessment of all students. Moodle records show consistent activity of the students during the assignments. E-portfolios were also developed using Wikispaces, which promoted reflective writing and clinical reasoning. Conclusions: Our exploratory study suggests that the “case based learning” method can be successfully combined with the technological components to create and maintain a feasible online learning environment for the teaching of therapeutics.
Abstract:
A flow injection analysis (FIA) system with a chlormequat-selective electrode is proposed. Several electrodes with poly(vinyl chloride)-based membranes were constructed for this purpose. Comparative characterization suggested the use of a membrane with chlormequat tetraphenylborate and dibutylphthalate. On a single-line FIA set-up, operating at 1x10-2 mol L-1 ionic strength and pH 6.3, calibration curves presented slopes of 53.6±0.4 mV decade-1 between 5.0x10-6 and 1.0x10-3 mol L-1, and squared correlation coefficients >0.9953. The detection limit was 2.2x10-6 mol L-1 and the repeatability was ±0.68 mV (0.7%). A dual-channel FIA manifold was therefore constructed, enabling the automatic attainment of the above ionic strength and pH conditions and thus eliminating sample preparation steps. Slopes of 45.5±0.2 mV decade-1 over a concentration range of 8.0x10-6 to 1.0x10-3 mol L-1, with a repeatability of ±0.4 mV (0.69%), were obtained. Analyses of real samples were performed, and recoveries ranged from 96.6 to 101.1%.
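As a short illustration of how a calibration slope such as the reported 53.6 mV decade-1 is obtained, the sketch below regresses electrode potential against log10 of the chlormequat concentration over the linear range. The potential readings are synthetic numbers chosen only to illustrate the fit; they are not the paper's data.

```python
import numpy as np

conc = np.array([5.0e-6, 1.0e-5, 1.0e-4, 1.0e-3])    # mol L-1, linear range
emf  = np.array([152.0, 168.5, 221.8, 275.4])         # mV, synthetic readings

slope, intercept = np.polyfit(np.log10(conc), emf, 1)
print(f"slope = {slope:.1f} mV/decade")                # ~53-54 mV per decade
```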