972 results for real-effort task
Abstract:
This paper presents the development of a solar photovoltaic (PV) model in PSCAD/EMTDC (Power System Computer Aided Design), including a study of the underlying mathematical model. An additional algorithm was implemented in MATLAB to calculate several parameters required by the PSCAD model. The entire simulation study was performed with the combined PSCAD/MATLAB simulation tool. A real database of irradiance, cell temperature and PV power generation was used to support the evaluation of the implemented PV model.
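The abstract does not detail the mathematical model; a common choice for PV simulation studies is the single-diode equivalent circuit. The minimal Python sketch below solves its implicit current equation by fixed-point iteration; all parameter names and values are illustrative assumptions, not taken from the paper.

import math

def pv_current(v, g=1000.0, t_cell=298.15, i_ph_stc=8.0, i_0=1e-9,
               r_s=0.3, r_sh=300.0, n=1.3, n_cells=60):
    """Single-diode PV model, I = Iph - Id - Ish, solved by fixed-point
    iteration. Parameter values are illustrative, not from the paper."""
    k, q = 1.380649e-23, 1.602176634e-19   # Boltzmann constant, electron charge
    v_t = n_cells * n * k * t_cell / q     # modified thermal voltage of the module
    i_ph = i_ph_stc * g / 1000.0           # photocurrent scales with irradiance
    i = i_ph                               # initial guess
    for _ in range(100):
        i = i_ph - i_0 * (math.exp((v + i * r_s) / v_t) - 1.0) - (v + i * r_s) / r_sh
    return i

# Module current at 30 V under standard irradiance and 25 degrees C
print(pv_current(30.0))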
Abstract:
This work describes a methodology to extract symbolic rules from trained neural networks. In our approach, patterns in the network are encoded as formulas in a Lukasiewicz logic. For this we take advantage of the fact that every connective in this multi-valued logic can be evaluated by a neuron in an artificial network whose activation function is the identity truncated to zero and one. This fact simplifies symbolic rule extraction and allows the easy injection of formulas into a network architecture. We trained this type of neural network using a back-propagation algorithm based on the Levenberg-Marquardt algorithm, where in each learning iteration we restricted the knowledge dissemination in the network structure. This makes the descriptive power of the produced neural networks similar to the descriptive power of the Lukasiewicz logic language, minimizing the information loss in the translation between connectionist and symbolic structures. To avoid redundancy in the generated networks, the method simplifies them in a pruning phase using the "Optimal Brain Surgeon" algorithm. We tested this method on the task of finding the formula used to generate a given truth table. For tests on real data, we selected the Mushroom data set, available in the UCI Machine Learning Repository.
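The central fact the paper relies on — that every Lukasiewicz connective can be computed by a single neuron whose activation is the identity truncated to [0, 1] — is easy to illustrate. A minimal Python sketch (not the authors' code):

def f(z):
    """Activation from the paper: identity truncated to zero and one."""
    return min(1.0, max(0.0, z))

# Each connective is one neuron: a weighted sum plus bias, passed through f.
def luk_and(x, y):      # strong conjunction: max(0, x + y - 1)
    return f(x + y - 1.0)

def luk_or(x, y):       # strong disjunction: min(1, x + y)
    return f(x + y)

def luk_implies(x, y):  # implication: min(1, 1 - x + y)
    return f(1.0 - x + y)

def luk_not(x):         # negation: 1 - x
    return f(1.0 - x)

print(luk_implies(0.8, 0.3))  # 0.5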
Abstract:
Recent changes in power systems, mainly due to the substantial increase of distributed generation and to operation in competitive environments, have created new challenges for operation and planning. In this context, Virtual Power Players (VPP) can aggregate a diversity of players, namely generators and consumers, and a diversity of energy resources, including electricity generation based on several technologies, storage and demand response. Demand response markets have been implemented in recent years, and several implementation models have been considered. An important characteristic of a demand response program is its trigger criterion. The present paper was inspired by a program used by the New England Independent System Operator (ISO-NE) in which the event trigger depends on the Locational Marginal Price (LMP). This paper proposes a methodology to support the management of VPP demand response programs. The proposed method has been computationally implemented and its application is illustrated using a 32-bus network with intensive use of distributed generation. Results concerning the evaluation of the impact of using demand response events are also presented.
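The abstract describes the trigger criterion only at a high level. As an illustration, an LMP-based trigger can be as simple as a price threshold; the function name and threshold value below are hypothetical, not the ISO-NE rule.

def should_trigger_dr_event(lmp, threshold=100.0):
    """Trigger a demand response event when the Locational Marginal Price
    exceeds a threshold ($/MWh). Criterion and value are assumptions."""
    return lmp > threshold

hourly_lmp = [42.0, 55.5, 131.2, 98.7]  # $/MWh, illustrative values
events = [h for h, p in enumerate(hourly_lmp) if should_trigger_dr_event(p)]
print(events)  # hours in which an event would be triggered -> [2]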
Abstract:
In competitive electricity markets with deep concerns for efficiency, demand response programs gain considerable significance. As demand response levels have decreased after the introduction of competition in the power industry, new approaches are required to take full advantage of demand response opportunities. This paper presents DemSi, a demand response simulator that allows studying demand response actions and schemes in distribution networks. It undertakes the technical validation of the solution using realistic network simulation based on PSCAD. The use of DemSi by a retailer in a situation of energy shortage is presented. Load reduction is obtained using a consumer-based price elasticity approach supported by real-time pricing. Non-linear programming is used to maximize the retailer's profit, determining the optimal solution for each envisaged load reduction. The solution determines the price variations under two different approaches, with prices set either for each individual consumer or for each consumer type, showing that the choice between them does not significantly influence the retailer's profit. The paper presents a case study in a 33-bus distribution network with 5 distinct consumer types. The obtained results and conclusions show the adequacy of the methodology used and its importance for supporting retailers' decision making.
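For intuition, the price-elasticity approach relates a relative price change to a relative load change. A minimal linearised sketch with hypothetical elasticity and prices (the paper's actual optimisation is a non-linear program, not this one-liner):

def load_after_price_change(load, elasticity, p_old, p_new):
    """Linearised price-elasticity model: the relative load change equals
    the elasticity times the relative price change. Illustrative only."""
    return load * (1.0 + elasticity * (p_new - p_old) / p_old)

# A consumer with elasticity -0.3 facing a 20% real-time price increase
print(load_after_price_change(100.0, -0.3, 0.10, 0.12))  # ~94.0, a 6% load cut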
Abstract:
Dust is a complex mixture of particles of organic and inorganic origin and different gases absorbed in aerosol droplets. In a poultry unit, it includes dried faecal matter and urine, skin flakes, ammonia, carbon dioxide, pollens, feed and litter particles, feathers, grain mites, fungal spores, bacteria, viruses and their constituents. Dust particles vary in size, and differentiating between particle size fractions is important in health studies in order to quantify penetration within the respiratory system. A descriptive study was developed to assess exposure to particles in a poultry unit during different operations, namely routine examination and floor turnover. Direct-reading equipment was used (Lighthouse, model 3016 IAQ). Particle measurements were performed for 5 size fractions (PM0.5; PM1.0; PM2.5; PM5.0; PM10). The chemical composition of the poultry litter was also determined by neutron activation analysis. The litter of poultry pavilions is normally turned over weekly, and it was during this operation that the highest exposure to particles was observed. In all the tasks considered, PM5.0 and PM10 were the fractions with the highest concentration values; PM10 showed the highest values and PM0.5 the lowest. The chemical element with the highest concentration was Mg (5.7E6 mg.kg-1), followed by K (1.5E4 mg.kg-1), Ca (4.8E3 mg.kg-1), Na (1.7E3 mg.kg-1), Fe (2.1E2 mg.kg-1) and Zn (4.2E1 mg.kg-1). This high presence of particles in the respirable range (<5–7 μm) means that poultry dust particles can penetrate into the gas-exchange region of the lung. Larger particles (PM10) presented concentrations ranging from 5.3E5 to 3.0E6 mg/m3.
Abstract:
OBJECTIVE: To examine the effects of the length and timing of nighttime naps on performance and physiological functions, an experimental study was carried out under simulated night shift schedules. METHODS: Six students were recruited for this study, which was composed of 5 experiments. Each experiment involved 3 consecutive days with one night shift (22:00-8:00) followed by daytime sleep and night sleep. The experiments had 5 conditions in which the length and timing of naps were manipulated: 0:00-1:00 (E60), 0:00-2:00 (E120), 4:00-5:00 (L60), 4:00-6:00 (L120), and no nap (No-nap). During the night shifts, participants underwent performance tests. A questionnaire on subjective fatigue and a critical flicker fusion frequency test were administered after the performance tests. Heart rate variability and rectal temperature were recorded continuously during the experiments. Polysomnography was also recorded during the naps. RESULTS: Sleep latency was shorter and sleep efficiency was higher in the L60 and L120 naps than in the E60 and E120 naps. Slow wave sleep was longer in the E120 and L120 naps than in the E60 and L60 naps. The mean reaction time in L60 became longer after the nap, but became faster in E60 and E120; earlier naps thus served to counteract the decrement in performance and physiological functions during the night shift. Performance was also somewhat improved by taking a 2-hour nap later in the shift, but deteriorated after a one-hour nap. CONCLUSIONS: Naps in the latter half of the night shift were superior to earlier naps in terms of sleep quality; however, performance declined after a 1-hour nap taken later in the night shift due to sleep inertia. This study suggests that the timing of a short nap during the night shift, such as a 60-min nap, must be carefully considered.
Abstract:
The idea behind creating this special issue on real-world applications of intelligent tutoring systems was to bring together in a single publication some of the most important examples of success in the use of ITS technology. It will serve as a reference for all researchers working in the area. It will also be an important resource for industry, showing the maturity of ITS technology and creating an atmosphere favourable to funding new ITS projects. Simultaneously, it will be valuable to academic groups, motivating students with new ideas for ITS and promoting new academic research in the area.
Abstract:
Master's in Electrical and Computer Engineering
Abstract:
The population growth of a Staphylococcus aureus culture, an active colloidal system of spherical cells, was followed by rheological measurements under steady-state and oscillatory shear flows. We observed a rich viscoelastic behavior as a consequence of the bacterial activity, namely of their multiplication and density-dependent aggregation properties. In the early stages of growth (lag and exponential phases), the viscosity increases by about a factor of 20, presenting several drops and full recoveries, which suggests the existence of a percolation phenomenon. Remarkably, as the bacteria reach their late phase of development, in which the population stabilizes, the viscosity returns close to its initial value. Most probably, this is caused by a change in the bacteria's physiological activity and, in particular, by the decrease of their adhesion properties. The viscous and elastic moduli exhibit power-law behaviors compatible with the "soft glassy materials" model, with exponents that depend on the bacterial growth stage. DOI: 10.1103/PhysRevE.87.030701.
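For reference, the soft glassy rheology model predicts a common low-frequency power-law scaling of both moduli, governed by an effective noise temperature x; the growth-stage-dependent exponents reported in the paper would correspond to different values of x:

$G'(\omega) \sim G''(\omega) \sim \omega^{x-1}, \qquad 1 < x < 2$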
Abstract:
Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, in which complex encoders exploit data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits well applications and services such as digital television and video storage, where decoder complexity is critical, but does not match the requirements of emerging applications such as visual sensor networks, where encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility to develop so-called Wyner-Ziv video codecs, which follow a different coding paradigm where it is the task of the decoder, and no longer of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty with respect to the more traditional predictive coding paradigm (at least under certain conditions). In Wyner-Ziv video codecs, the so-called side information, a decoder estimate of the original frame to be coded, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide the review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified. After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class, and the evaluation of some of them, leads to the important conclusion that the relative rate-distortion (RD) performance of the side information creation methods depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solution for specific types of content.
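As a concrete illustration of the simplest 'guess'-class technique, the decoder can estimate side information for a Wyner-Ziv frame from the two neighbouring decoded key frames. The sketch below uses plain temporal averaging of grayscale NumPy frames; practical codecs use motion-compensated temporal interpolation instead.

import numpy as np

def side_information(prev_key, next_key):
    """Naive 'guess'-class side information: average of the two decoded
    key frames bracketing the Wyner-Ziv frame (illustrative only)."""
    avg = (prev_key.astype(np.float32) + next_key.astype(np.float32)) / 2.0
    return avg.round().astype(np.uint8)

# Illustrative 4x4 grayscale frames
prev_key = np.full((4, 4), 100, dtype=np.uint8)
next_key = np.full((4, 4), 120, dtype=np.uint8)
print(side_information(prev_key, next_key))  # every pixel estimated as 110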
Abstract:
Master's in Electrical and Computer Engineering
Abstract:
[...]. To illustrate the concept of limit, let us consider a circle with a radius of 15 centimetres. Mathematically, a circle is the set of points in the plane whose distance to the centre is less than or equal to a given value, which we call the radius. [...].
Abstract:
The BALA project (Biodiversity of Arthropods of Laurisilva of the Azores) is a research initiative to quantify the spatial distribution of arthropod biodiversity in the native forests of the Azores archipelago. Arthropods were collected using a combination of two techniques targeting epigean (ground-dwelling) and canopy (arboreal) arthropods: pitfall traps (with Turquin and Ethylene solutions) and beating samples (using the three most dominant plant species). A total of 109 transects distributed among 18 forest fragments on seven of the nine Azorean islands were used in this study. The performance of alternative sampling methods and levels of effort was tested. No significant differences were found in the accumulated number of species captured when an alternative method was used or when another transect with similar effort was established at a different location within the same fragment. A combination of Ethylene and Turquin traps captured more species per individual, Turquin traps and beating captured more species per sample, and Turquin traps captured more species per unit time. An optimization exercise showed that the protocol applied in recent years is very close to optimal, allowing its future replication with confidence. The minimum combinations of sampling effort and methods required to monitor or to inventory diversity, taking into account different proportions of sample completeness, are discussed.
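The per-method comparisons above reduce to simple efficiency ratios. A minimal sketch with entirely hypothetical counts (not BALA data), just to show the three metrics:

# Hypothetical counts per sampling method; all numbers are made up
methods = {
    "Turquin":  {"species": 40, "individuals": 900, "samples": 30, "hours": 12.0},
    "Ethylene": {"species": 35, "individuals": 600, "samples": 30, "hours": 12.0},
    "Beating":  {"species": 28, "individuals": 400, "samples": 45, "hours": 20.0},
}

for name, m in methods.items():
    print(name,
          round(m["species"] / m["individuals"], 3),  # species per individual
          round(m["species"] / m["samples"], 2),      # species per sample
          round(m["species"] / m["hours"], 2))        # species per unit time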