930 results for Distributed multimedia content adaptation
Abstract:
In the energy management of a small power system, the scheduling of the generation units is a crucial problem for which adequate methodologies can maximize the performance of the energy supply. This paper proposes an innovative methodology for distributed energy resources management. The optimal operation of distributed generation, demand response and storage resources is formulated as a mixed-integer linear programming (MILP) model and solved by a deterministic, CPLEX-based optimization technique implemented in the General Algebraic Modeling System (GAMS). The paper also presents a vision for the grids of the future, focusing on conceptual and operational aspects of electrical grids characterized by an intensive penetration of distributed generation (DG), in the scope of competitive environments, and using artificial intelligence methodologies to attain the envisaged goals. These concepts are implemented in a computational framework that includes both grid and market simulation.
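As a rough, single-period sketch of the kind of MILP formulation described (not the authors' actual GAMS/CPLEX model), the Python fragment below schedules two hypothetical distributed generators, one demand-response block and one storage discharge to meet a load at minimum cost; all unit names, capacities and costs are invented, and PuLP with its bundled CBC solver stands in for CPLEX.

# Minimal single-period MILP sketch for DER scheduling (illustrative only;
# the paper uses a full GAMS/CPLEX model with many more constraints).
import pulp

load = 120.0                                   # kW to be supplied (assumed)
dg = {"dg1": (50, 0.09), "dg2": (80, 0.12)}    # unit: (capacity kW, cost per kWh)
dr_cap, dr_cost = 20.0, 0.20                   # demand-response curtailment block
st_cap, st_cost = 30.0, 0.05                   # storage discharge limit and cost

m = pulp.LpProblem("der_scheduling", pulp.LpMinimize)
p = {u: pulp.LpVariable(f"p_{u}", 0, cap) for u, (cap, _) in dg.items()}
on = {u: pulp.LpVariable(f"on_{u}", cat="Binary") for u in dg}   # commitment
dr = pulp.LpVariable("dr", 0, dr_cap)
st = pulp.LpVariable("st", 0, st_cap)

m += pulp.lpSum(c * p[u] for u, (_, c) in dg.items()) + dr_cost * dr + st_cost * st
m += pulp.lpSum(p.values()) + dr + st == load          # power balance
for u, (cap, _) in dg.items():
    m += p[u] <= cap * on[u]                           # generate only if committed

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({v.name: v.value() for v in m.variables()})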
Abstract:
Demand response can play a very relevant role in future power systems, in which distributed generation can help to assure service continuity in some fault situations. This paper deals with the demand response concept and discusses its use in the context of competitive electricity markets and intensive use of distributed generation. The paper presents DemSi, a demand response simulator that allows demand response actions and schemes to be studied using a realistic network simulation based on PSCAD. Demand response opportunities are used in an optimized way, considering flexible contracts between consumers and suppliers. A case study evidences the advantages of using flexible contracts and of optimizing the available generation when there is a lack of supply.
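As a highly simplified illustration of the optimized use of flexible contracts mentioned above (DemSi itself couples the optimization with a PSCAD network simulation), the sketch below covers a supply shortfall by selecting the cheapest contracted curtailments first; the consumers, limits and prices are invented.

# Greedy curtailment selection under flexible contracts (illustrative sketch).
def schedule_curtailment(shortfall_kw, contracts):
    """contracts: list of (consumer, max_curtailable_kw, price_per_kwh)."""
    plan, remaining = [], shortfall_kw
    for consumer, max_kw, price in sorted(contracts, key=lambda c: c[2]):
        if remaining <= 0:
            break
        cut = min(max_kw, remaining)              # curtail within contract limit
        plan.append((consumer, cut, cut * price)) # consumer, kW cut, remuneration
        remaining -= cut
    return plan, remaining                        # remaining > 0 -> still short

plan, unmet = schedule_curtailment(
    35.0, [("c1", 10, 0.30), ("c2", 25, 0.18), ("c3", 15, 0.22)])
print(plan, unmet)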
Abstract:
The conventional methods used to evaluate chitin content in fungi, such as biochemical assessment of glucosamine release after acid hydrolysis or epifluorescence microscopy, are low throughput, laborious, time-consuming, and cannot evaluate a large number of cells. We developed an efficient and fast flow cytometric assay, based on Calcofluor White staining, to measure chitin content in yeast cells. A staining index was defined whose value is directly related to the chitin amount while taking into consideration the different levels of autofluorescence. Twenty-two Candida spp. and four Cryptococcus neoformans clinical isolates with distinct susceptibility profiles to caspofungin were evaluated. Candida albicans clinical isolate SC5314, and isogenic strains with deletions in chitin synthase 3 (chs3Δ/chs3Δ) and in genes encoding predicted glycosylphosphatidylinositol (GPI)-anchored proteins (pga31Δ/Δ and pga62Δ/Δ), were used as controls. As expected, the wild-type strain displayed a significantly higher chitin content (P < 0.001) than chs3Δ/chs3Δ and pga31Δ/Δ, especially in the presence of caspofungin. Ca. parapsilosis, Ca. tropicalis, and Ca. albicans showed higher cell wall chitin content. Although no relationship between chitin content and antifungal drug susceptibility phenotype was found, an association was established between the paradoxical growth effect in the presence of high caspofungin concentrations and the chitin content. This novel flow cytometry protocol proved to be a simple and reliable assay to estimate the cell wall chitin content of fungi.
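The abstract does not give the staining index formula; one commonly used definition in flow cytometry, which discounts the autofluorescence of an unstained control, is sketched below with made-up intensity values, and may differ from the index actually defined in the paper.

# One common flow-cytometry staining index: the gain in median fluorescence
# over the unstained (autofluorescence) control, scaled by the spread of the
# control. The exact definition used in the paper may differ.
import statistics

def staining_index(stained, unstained):
    spread = statistics.stdev(unstained)
    return (statistics.median(stained) - statistics.median(unstained)) / spread

stained = [520, 610, 575, 640, 590]      # Calcofluor White intensities (made up)
unstained = [40, 55, 48, 60, 52]         # autofluorescence control (made up)
print(round(staining_index(stained, unstained), 1))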
Abstract:
Distributed generation, unlike centralized electrical generation, aims to generate electrical energy on a small scale as near as possible to load centers, interchanging electric power with the network. This work presents a probabilistic methodology conceived to assist electric system planning engineers in the selection of the distributed generation location, taking into account the hourly load changes or the daily load cycle. The hourly load centers, for each of the different hourly load scenarios, are calculated deterministically. These location points, properly weighted according to their load magnitude, are used to fit the best probability distribution. This distribution is used to determine the maximum-likelihood perimeter of the area where each distributed generation source should preferably be located by the planning engineers. This takes into account, for example, the availability and cost of land lots, which are factors of special relevance in urban areas, as well as several obstacles important for the final selection of the candidate distributed generation points. The proposed methodology has been applied to a real case, assuming three different bivariate probability distributions: the Gaussian distribution, a bivariate version of Freund's exponential distribution and the Weibull probability distribution. The methodology algorithm has been programmed in MATLAB. Results are presented and discussed for the application of the methodology to a realistic case and demonstrate the ability of the proposed methodology to efficiently determine the best location of the distributed generation and the corresponding distribution networks.
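For the Gaussian case, the workflow can be sketched as follows: compute the load-weighted hourly load centers, fit a bivariate normal to them, and treat a probability contour of that fit as the preferred siting perimeter. The coordinates and loads below are invented and only the Gaussian variant is shown (the paper's MATLAB implementation also covers the Freund and Weibull distributions).

# Sketch of the Gaussian variant: weighted load centers -> bivariate fit ->
# maximum-likelihood siting perimeter (a chi-square probability contour).
import numpy as np

# hourly load scenarios: (x_km, y_km, load_MW) of each load point, per hour
hours = [
    [(1.0, 2.0, 4.0), (3.0, 1.0, 2.0), (2.0, 4.0, 1.0)],
    [(1.2, 2.1, 5.0), (3.1, 0.9, 1.0), (2.2, 3.8, 2.0)],
    [(0.8, 1.9, 3.0), (2.9, 1.2, 3.0), (2.1, 4.1, 1.0)],
]
centers, weights = [], []
for loads in hours:
    pts = np.array([(x, y) for x, y, _ in loads])
    w = np.array([p for _, _, p in loads])
    centers.append(np.average(pts, axis=0, weights=w))   # hourly load center
    weights.append(w.sum())                              # weight = total hourly load

centers, weights = np.array(centers), np.array(weights)
mu = np.average(centers, axis=0, weights=weights)
cov = np.cov(centers.T, aweights=weights)

chi2_90 = 4.605                                          # chi-square(2 dof) 90% quantile
def inside_perimeter(point):
    d = point - mu
    return d @ np.linalg.inv(cov) @ d <= chi2_90         # inside the 90% contour

print(mu, inside_perimeter(np.array([1.8, 2.2])))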
Abstract:
Myocardial perfusion-gated-SPECT (MP-gated-SPECT) imaging often shows radiotracer uptake in abdominal organs. This accumulation frequently interferes with qualitative and quantitative assessment of the infero-septal region of the myocardium. The objective of this study is to evaluate the effect of ingesting food with different fat content on the reduction of extra-myocardial uptake and on the improvement of MP-gated-SPECT image quality. In this study, 150 patients (65 ± 18 years) who were referred for MP-gated-SPECT underwent a 1-day protocol including imaging after stress (physical or pharmacological) and resting conditions. All patients gave written informed consent. Patients were subdivided into five groups: GI, GII, GIII, GIV and GV. In the first four groups, patients ate two chocolate bars with different fat content. Patients in GV – the control group (CG) – had just water. Uptake indices (UI) of myocardium (M)/liver (L) and M/stomach–proximal bowel (S) revealed a lower UI of M/S at rest in all groups. Both stress and rest studies using different food intake indicate that patients who ate chocolate with different fat content showed a better UI of M/L than the CG. The UI of M/L and M/S of the groups obtained under physical stress are clearly superior to those of the groups obtained under pharmacological stress. These differences are only significant in patients who ate high-fat chocolate or drank water. The analysis of all stress studies together (GI, GII, GIII and GIV) in comparison with the CG shows higher mean ranks of UI of M/L for those who ate high-fat chocolate. After pharmacological stress, the mean ranks of UI of M/L were higher for patients who ate high- and low-fat chocolate. In conclusion, eating food with fat content after radiotracer injection increases the UI of M/L after stress and rest in MP-gated-SPECT studies. It is, therefore, recommended that patients eat a chocolate bar after radiotracer injection and before image acquisition.
Abstract:
The most sold and/or prescribed liquid oral medicines for children in Tubarão, Southern Brazil, were assessed. Their sugar concentration was tested and compared to the information in their directions for use. All pharmacies and pediatricians working in the city were visited by a previously trained interviewer. Pre-tested questionnaires were applied in order to identify the most sold as well as the most prescribed pediatric liquid oral medicines. Three samples of each medicine were analyzed by the Lane-Eynon general volumetric method. Among the 14 most sold/prescribed medicines, only four did not contain sugar (analgesic, cortisone, and syrups). Sugar concentration ranged from 8.59 g/100 g of drug (SD = 0.29 g/100 g) to 67.0 g/100 g of drug (SD = 6.07 g/100 g). Only 50.0% of the medicines that contained sugar in their ingredients disclosed this information in their directions for use.
Abstract:
The aim of this paper is to present an adaptation model for an Adaptive Educational Hypermedia System, PCMAT. The adaptation of the application is based on progressive self-assessment (exercises, tasks, and so on) and applies the constructivist learning theory and the learning styles theory. Our objective is the creation of a better, more adequate adaptation model that takes into account the complexities of different users.
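As a toy illustration of rule-based adaptation of this kind (not PCMAT's actual model), the sketch below picks the next learning object from the learner's dominant learning style and the score of the latest self-assessment; the styles, resources and threshold are invented.

# Toy adaptation rule: choose the next learning object from the learner's
# dominant learning style and latest self-assessment score (illustrative only).
RESOURCES = {
    ("visual", "remediate"): "annotated worked example with diagrams",
    ("visual", "advance"): "new concept map and graded exercises",
    ("verbal", "remediate"): "step-by-step textual explanation",
    ("verbal", "advance"): "reading assignment and open-ended task",
}

def next_learning_object(style, last_score, threshold=0.6):
    level = "advance" if last_score >= threshold else "remediate"
    return RESOURCES[(style, level)]

print(next_learning_object("visual", 0.45))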
Abstract:
Group decision making plays an important role in organizations, especially in the present-day economy that demands high-quality, yet quick decisions. Group decision-support systems (GDSSs) are interactive computer-based environments that support concerted, coordinated team efforts toward the completion of joint tasks. The need for collaborative work in organizations has led to the development of a set of general collaborative computer-supported technologies and specific GDSSs that support groups distributed in time and space in various domains. However, each person is unique and reacts differently to various arguments. Many times a disagreement arises because of the way we began arguing, not because of the content itself. Nevertheless, emotion, mood, and personality factors have not yet been addressed in GDSSs, despite how strongly they influence results. Our group's previous work considered the roles that emotion and mood play in decision making. In this article, we reformulate these factors and include personality as well. Thus, this work incorporates personality, emotion, and mood in the negotiation process of an argument-based group decision-making process. Our main goal in this work is to improve the negotiation process through argumentation using the affective characteristics of the involved participants. Each participant agent represents a group decision member. This representation lets us simulate people with different personalities. The discussion process between group members (agents) is conducted through the exchange of persuasive arguments. Although our multiagent architecture model includes two types of agents (the facilitator and the participant), this article focuses on the emotional, personality, and argumentation components of the participant agent.
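A minimal sketch of a participant agent, assuming an invented rule that maps one personality trait and the current emotional valence to an argument type (the paper's agent model is considerably richer), could look like this:

# Toy participant agent: chooses a persuasive-argument type from a personality
# trait and current emotional valence (invented rules, not the paper's model).
from dataclasses import dataclass

@dataclass
class ParticipantAgent:
    name: str
    agreeableness: float      # 0..1 personality trait (assumed scale)
    valence: float            # -1..1 current emotional state

    def choose_argument(self):
        if self.valence < -0.3:
            return "appeal_to_common_goal"       # calm the exchange first
        if self.agreeableness > 0.7:
            return "appeal_to_past_promise"      # cooperative persuasion
        return "counter_example"                 # more confrontational tactic

a = ParticipantAgent("p1", agreeableness=0.8, valence=0.1)
print(a.name, "->", a.choose_argument())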
Abstract:
This paper presents a distributed model predictive control (DMPC) approach for indoor thermal comfort that simultaneously optimizes the consumption of a limited shared energy resource. The control objective of each subsystem is to minimize the heating/cooling energy cost while maintaining the indoor temperature and used power within bounds. In a distributed coordinated environment, the control uses multiple dynamically decoupled agents (one for each subsystem/house) aiming to satisfy the coupling constraints. According to its hourly power demand profile, each house assigns a priority level that indicates how much it is willing to bid in the auction to consume the limited clean resource. This procedure allows the bidding value to vary hourly and, consequently, the order in which the agents access the clean energy also varies. In addition to power constraints, all houses also have thermal comfort constraints that must be fulfilled. The system is simulated with several houses in a distributed environment.
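The hourly auction step can be pictured as below: each house submits a bid derived from its demand profile and the limited clean-energy budget is granted in bid order, after which each house would run its own MPC within the power bound it obtained; the houses, bids and budget are invented.

# Sketch of the hourly auction step for the shared clean-energy budget.
def allocate_clean_energy(budget_kw, bids):
    """bids: dict house -> (bid_value, requested_kw); returns granted kW."""
    granted = {}
    for house, (_, req) in sorted(bids.items(), key=lambda kv: -kv[1][0]):
        granted[house] = min(req, budget_kw)
        budget_kw -= granted[house]
    return granted

bids = {"house_a": (0.9, 3.0), "house_b": (0.4, 2.5), "house_c": (0.7, 2.0)}
print(allocate_clean_energy(4.0, bids))
# house_a gets 3.0 kW, house_c gets 1.0 kW, house_b gets 0.0 kW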
Abstract:
Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit the data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits well applications and services such as digital television and video storage, where the decoder complexity is critical, but does not match well the requirements of emerging applications such as visual sensor networks, where the encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility to develop the so-called Wyner-Ziv video codecs, following a different coding paradigm where it is the task of the decoder, and no longer of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty with regard to the more traditional predictive coding paradigm (at least under certain conditions). In the context of Wyner-Ziv video codecs, the so-called side information, which is a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly more efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide this review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified. After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class and the evaluation of some of them lead to the important conclusion that the relative rate-distortion (RD) performance of the side information creation methods depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solutions for specific types of content.
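To make the 'guess' class concrete, the sketch below creates side information by bidirectional block matching between the two neighbouring decoded key frames and averaging the best matched pair. It is a bare-bones illustration (no half-pel refinement, motion vector smoothing or overlapped block compensation), not a reproduction of any of the reviewed codecs.

# Minimal "guess"-class side information sketch: symmetric block matching
# between the previous and next decoded frames, interpolating the midpoint.
import numpy as np

def side_information(prev, nxt, block=8, radius=4):
    """Estimate the missing (Wyner-Ziv) frame halfway between two key frames."""
    h, w = prev.shape
    si = np.zeros((h, w), dtype=np.float64)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            best, best_pair = np.inf, None
            for dy in range(-radius, radius + 1):     # symmetric displacement
                for dx in range(-radius, radius + 1):
                    yp, xp, yn, xn = y - dy, x - dx, y + dy, x + dx
                    if not (0 <= yp <= h - block and 0 <= xp <= w - block
                            and 0 <= yn <= h - block and 0 <= xn <= w - block):
                        continue
                    b_prev = prev[yp:yp + block, xp:xp + block].astype(np.float64)
                    b_next = nxt[yn:yn + block, xn:xn + block].astype(np.float64)
                    cost = np.abs(b_prev - b_next).sum()
                    if cost < best:
                        best, best_pair = cost, (b_prev, b_next)
            si[y:y + block, x:x + block] = 0.5 * (best_pair[0] + best_pair[1])
    return si

prev_frame = np.zeros((16, 16), dtype=np.uint8)
next_frame = np.full((16, 16), 8, dtype=np.uint8)
print(side_information(prev_frame, next_frame).mean())   # ~4.0, the midpoint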
Abstract:
In this work, a microwave-assisted extraction (MAE) methodology was compared with several conventional extraction methods (Soxhlet, Bligh & Dyer, modified Bligh & Dyer, Folch, modified Folch, Hara & Radin, Roese-Gottlieb) for the quantification of the total lipid content of three fish species: horse mackerel (Trachurus trachurus), chub mackerel (Scomber japonicus), and sardine (Sardina pilchardus). The influence of species, extraction method and frozen storage time (varying from fresh to 9 months of freezing) on total lipid content was analysed in detail. The efficiencies of the MAE, Bligh & Dyer, Folch, modified Folch and Hara & Radin methods were the highest and, although they were not statistically different, differences existed in terms of variability, with MAE showing the highest repeatability (CV = 0.034). The Roese-Gottlieb, Soxhlet, and modified Bligh & Dyer methods were very poor in terms of efficiency as well as repeatability (CV between 0.13 and 0.18).
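Repeatability is reported as a coefficient of variation; the computation behind such a figure is reproduced below with invented replicate values.

# Coefficient of variation (CV = standard deviation / mean) used to compare
# the repeatability of extraction methods; replicate lipid contents are made up.
import statistics

def cv(replicates):
    return statistics.stdev(replicates) / statistics.mean(replicates)

mae_replicates = [6.10, 5.95, 6.22, 6.05]     # g lipid / 100 g sample (invented)
print(round(cv(mae_replicates), 3))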
Abstract:
A growth trial with Senegalese sole (Solea senegalensis Kaup, 1858) juveniles fed diets containing increasing replacement levels of fishmeal by mixtures of plant protein sources was conducted over 12 weeks. Total fat contents of muscle, liver, viscera, skin, fins and head tissues were determined, as well as the fatty acid profiles of muscle and liver (GC-FID analysis). The liver was the preferential site for fat deposition (5.5–10.8% fat), followed by the fins (3.4–6.7% fat). Increasing levels of plant protein in the diets seem to be related to increased levels of total lipids in the liver. Sole muscle is lean (2.4–4.0% fat), with total lipids being similar among treatments. The liver fatty acid profile varied significantly among treatments. Plant protein diets induced increased levels of C16:1 and C18:2 n-6 and a decrease in ARA and EPA levels. The muscle fatty acid profile also evidenced increasing levels of C18:2 n-6, while ARA and DHA remained similar among treatments. Substitution of fishmeal by plant protein is hence possible without major differences in the lipid content and fatty acid profile of the main edible portion of the fish – the muscle.
Abstract:
In distributed video coding, motion estimation is typically performed at the decoder to generate the side information, increasing the decoder complexity while providing low-complexity encoding in comparison with predictive video coding. Motion estimation can be performed once to create the side information or several times to refine the side information quality along the decoding process. In this paper, motion estimation is performed at the decoder side to generate multiple side information hypotheses which are adaptively and dynamically combined whenever additional decoded information is available. The proposed iterative side information creation algorithm is inspired by video denoising filters and requires some statistics of the virtual channel between each side information hypothesis and the original data. With the proposed denoising algorithm for side information creation, an RD performance gain of up to 1.2 dB is obtained for the same bitrate.
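The fusion step can be pictured as a per-pixel weighted average in which each hypothesis is weighted by the inverse of its estimated virtual-channel noise variance. The sketch below only illustrates that idea: the variances are assumed given (in practice they would be estimated from already decoded data), and the paper's algorithm is iterative and more elaborate.

# Illustrative fusion of multiple side information hypotheses: each hypothesis
# is weighted by the inverse of its estimated virtual-channel noise variance.
import numpy as np

def fuse_hypotheses(hypotheses, noise_vars, eps=1e-6):
    """hypotheses: list of HxW arrays; noise_vars: one variance per hypothesis."""
    weights = np.array([1.0 / (v + eps) for v in noise_vars])
    weights /= weights.sum()
    stacked = np.stack([h.astype(np.float64) for h in hypotheses])
    return np.tensordot(weights, stacked, axes=1)      # weighted per-pixel average

h1 = np.full((4, 4), 100.0)          # hypothesis from forward motion (made up)
h2 = np.full((4, 4), 110.0)          # hypothesis from backward motion (made up)
print(fuse_hypotheses([h1, h2], noise_vars=[4.0, 16.0])[0, 0])   # closer to 100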
Abstract:
In the initial stage of this work, two potentiometric methods were used to determine the salt (sodium chloride) content in bread and dough samples from several cities in the north of Portugal. A reference method (potentiometric precipitation titration) and a newly developed chloride ion-selective electrode (ISE) were applied. Both methods determine the sodium chloride content through the quantification of chloride. To evaluate the accuracy of the ISE, bread and respective dough samples were analyzed by both methods. Statistical analysis (0.05 significance level) indicated that the results of these methods did not differ significantly. Therefore the ISE is an adequate alternative for the determination of chloride in the analyzed samples. To compare the results of these chloride-based methods with a sodium-based method, sodium was quantified in the same samples by a reference method (atomic absorption spectrometry). Significant differences between the results were found. In several cases the sodium chloride content exceeded the legal limit when the chloride-based methods were used, but not when the sodium-based method was applied. This could lead to the erroneous application of fines, and therefore the authorities should supply additional information regarding the analytical procedure for this particular control.
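Both chloride-based methods report salt by converting the measured chloride into its sodium chloride equivalent via molar masses; that conversion step is shown below with an invented sample value.

# Converting a measured chloride content into sodium chloride equivalent,
# as done by chloride-based methods (sample figures are invented).
M_CL, M_NACL = 35.45, 58.44          # g/mol

def nacl_g_per_100g(chloride_g_per_100g):
    return chloride_g_per_100g * M_NACL / M_CL

print(round(nacl_g_per_100g(0.85), 2))   # 0.85 g Cl/100 g bread -> ~1.40 g NaCl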
Abstract:
Low-density parity-check (LDPC) codes are nowadays one of the hottest topics in coding theory, notably due to their advantages in terms of bit error rate performance and low complexity. In order to exploit the potential of the Wyner-Ziv coding paradigm, practical distributed video coding (DVC) schemes should use powerful error-correcting codes with near-capacity performance. In this paper, new ways to design LDPC codes for the DVC paradigm are proposed and studied. The new LDPC solutions rely on merging parity-check nodes, which corresponds to reducing the number of rows in the parity-check matrix. This allows the compression ratio of the source (a DCT coefficient bitplane) to be changed gracefully according to the correlation between the original and the side information. The proposed LDPC codes achieve good performance for a wide range of source correlations and a better RD performance than the popular turbo codes.
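Merging two check nodes amounts to replacing two rows of the parity-check matrix by their modulo-2 sum, so one syndrome bit fewer is sent and the compression ratio increases; the small sketch below shows that row operation on an invented matrix and is not the paper's code design procedure.

# Sketch of check-node merging for rate adaptation in LDPC syndrome coding:
# two rows of the parity-check matrix H are replaced by their modulo-2 sum.
import numpy as np

def merge_check_nodes(H, i, j):
    """Return a copy of H with rows i and j merged (GF(2) addition)."""
    merged = H[i] ^ H[j]
    keep = [r for r in range(H.shape[0]) if r not in (i, j)]
    return np.vstack([H[keep], merged])

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)
H2 = merge_check_nodes(H, 0, 1)
print(H2)                 # 2 x 6 matrix: syndrome shrinks from 3 to 2 bits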