Abstract:
We report the isolation of Fonsecaea pedrosoi from thorns of the plant Mimosa pudica L. at the place of infection identified by one of our patients. Clinical diagnosis of chromoblastomycosis was established by direct microscopic examination and cultures from the patient's lesion. The same species was isolated from the patient and from the plant. Scanning electron microscopy of the surface of the thorns showed the characteristic conidial arrangement of F. pedrosoi. These data indicate that M. pudica could be a natural source of infection for the fungus F. pedrosoi.
Abstract:
Distributed real-time systems such as automotive applications are becoming larger and more complex, thus requiring more powerful hardware and software architectures. Furthermore, these distributed applications commonly have stringent real-time constraints, which implies that such applications would gain in flexibility if they were parallelized and distributed over the system. In this paper, we consider the problem of allocating fixed-priority fork-join Parallel/Distributed real-time tasks onto distributed multi-core nodes connected through a Flexible Time-Triggered Switched Ethernet network. We analyze the system requirements and present a set of formulations based on a constraint programming approach. Constraint programming allows us to express the relations between variables in the form of constraints. In contrast to approaches based on heuristics, our approach is guaranteed to find a feasible solution if one exists. Furthermore, constraint programming approaches have been shown to obtain solutions for this type of formulation in reasonable time.
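To illustrate the constraint-programming style of formulation (a minimal sketch only, not the paper's actual model), the example below allocates tasks to nodes under capacity constraints using Google OR-Tools CP-SAT; the task utilisations and node capacities are invented for the example:

```python
# Sketch: task-to-node allocation posed as a constraint program.
# Illustrative only -- utilisations and capacities are made-up numbers.
from ortools.sat.python import cp_model

utilisation = [30, 50, 20, 40, 60]     # per-task utilisation (percent)
capacity = [100, 100]                  # per-node capacity (percent)
n_tasks, n_nodes = len(utilisation), len(capacity)

model = cp_model.CpModel()
# x[t][n] == 1 iff task t is allocated to node n
x = [[model.NewBoolVar(f"x_{t}_{n}") for n in range(n_nodes)]
     for t in range(n_tasks)]

for t in range(n_tasks):               # each task goes to exactly one node
    model.Add(sum(x[t]) == 1)

for n in range(n_nodes):               # node capacity is never exceeded
    model.Add(sum(utilisation[t] * x[t][n] for t in range(n_tasks))
              <= capacity[n])

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for t in range(n_tasks):
        node = next(n for n in range(n_nodes) if solver.Value(x[t][n]))
        print(f"task {t} -> node {node}")
```

A complete formulation would add the timing constraints (response times, network scheduling on the FTT-SE links), but the solver-guarantee property mentioned in the abstract already holds for this skeleton: if no assignment satisfies the constraints, the solver proves infeasibility rather than silently failing.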
Abstract:
Currently, due to the widespread use of computers and the Internet, students are trading libraries for the World Wide Web and laboratories for simulation programs. In most courses, simulators are made available to students and can be used to verify theoretical results or to test a hardware product under development. Although this is an attractive solution (a low-cost, easy and fast way to carry out coursework), it has major disadvantages. As everything is done with and in a computer, students are losing the feel for the real values of physical magnitudes. In engineering studies, for instance, and mainly in the first years, students need to learn electronics, algorithms, mathematics and physics. All of these areas can use numerical analysis software, simulation software or spreadsheets, and in the majority of cases the data used are either simulated or random numbers, when real data could be used instead. For example, if a course uses numerical analysis software and needs a dataset, the students can learn to manipulate arrays of real measurements. Likewise, when using spreadsheets to build graphics, instead of a random table students could use a real dataset based, for instance, on the room temperature and its variation across the day. In this work we present a framework with a simple interface that can be used by the different courses in which computers support the teaching/learning process, giving students a more realistic feeling by using real data. The framework is based on a set of low-cost sensors for different physical magnitudes (e.g. temperature, light, wind speed), which are either connected to a central server that students access over an Ethernet protocol or connected directly to the student's computer/laptop. These sensors use the communication ports available, such as serial ports, parallel ports, Ethernet or Universal Serial Bus (USB). Since a central server is used, students are encouraged to use the sensor values in their different courses and consequently in different types of software, such as numerical analysis tools, spreadsheets, or simply inside any programming language when a dataset is needed. To this end, small pieces of hardware were developed, each containing at least one sensor and using different types of computer communication. As long as the sensors are attached to a server connected to the Internet, these tools can also be shared between different schools: sensors that are not available at a given school can be used by obtaining the values from other places that share them. Moreover, students in the more advanced years, with (theoretically) more know-how, can build new sensor modules in the courses related to electronics development and expand the framework further. The final solution is low cost and simple to develop, and allows flexible use of resources by employing the same materials in several courses, bringing real-world data into the students' computer work.
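A minimal sketch of how a student program might pull a reading from such a central sensor server is shown below. The host name, port and the "GET temperature" line protocol are hypothetical, invented for illustration; the framework's real interface is not specified in the abstract:

```python
# Sketch: reading a temperature sample from a central sensor server
# over TCP/Ethernet. Host, port and the request format are placeholders.
import socket

def read_sensor(host: str, port: int, sensor: str) -> float:
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(f"GET {sensor}\n".encode())
        reply = sock.recv(64).decode().strip()
        return float(reply)            # assumes the server answers with a plain number

if __name__ == "__main__":
    temp = read_sensor("sensors.example.edu", 5000, "temperature")
    print(f"room temperature: {temp:.1f} C")
```

The same few lines work from a numerical analysis tool or any language with sockets, which is the point of centralizing the sensors behind one simple network interface.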
Abstract:
In this paper, we propose the Distributed using Optimal Priority Assignment (DOPA) heuristic, which finds a feasible partitioning and priority assignment for distributed applications based on the linear transactional model. DOPA partitions the tasks and messages in the distributed system and makes use of the Optimal Priority Assignment (OPA) algorithm, known as Audsley's algorithm, to find the priorities for that partition. The experimental results show how the use of the OPA algorithm increases, on average, the number of schedulable tasks and messages in a distributed system when compared to the Deadline Monotonic (DM) assignment usually favoured in other works. Afterwards, we extend these results to the assignment of Parallel/Distributed applications and present a second heuristic named Parallel-DOPA (P-DOPA). In that case, we show how the partitioning process can be simplified by using the Distributed Stretch Transformation (DST), a parallel transaction transformation algorithm introduced in [1].
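For reference, Audsley's OPA algorithm assigns priorities bottom-up: at each priority level it looks for some unassigned task that is schedulable when all remaining tasks are assumed to have higher priority. A minimal sketch follows; the schedulability test is left as a caller-supplied parameter (in DOPA's setting it would be the response-time analysis for the transactional model):

```python
# Sketch of Audsley's Optimal Priority Assignment (OPA).
# `schedulable(task, higher)` must return True iff `task` meets its
# deadline when every task in `higher` has higher priority; the concrete
# test (e.g. response-time analysis) is supplied by the caller.
def audsley_opa(tasks, schedulable):
    unassigned = list(tasks)
    order = []                        # filled lowest priority first
    while unassigned:
        for task in unassigned:
            others = [t for t in unassigned if t is not task]
            if schedulable(task, others):
                unassigned.remove(task)
                order.append(task)    # task takes the lowest free level
                break
        else:
            return None               # no feasible priority assignment exists
    return list(reversed(order))      # highest priority first
```

The optimality property is that if any priority ordering is feasible under the given (OPA-compatible) test, this procedure finds one, which is why it tends to outperform fixed rules such as Deadline Monotonic.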
Abstract:
Advances in Wireless and Optical Communications 2015 (RTUWO 2015), 5–6 November 2015, Riga, Latvia.
Abstract:
XXXIII Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos (SBRC 2015), III Workshop de Comunicação em Sistemas Embarcados Críticos. Vitória, Brazil.
Abstract:
Fractional Calculus (FC) goes back to the beginnings of the theory of differential calculus. Nevertheless, applications of FC emerged only in the last two decades, due to progress in the area of chaos, which revealed subtle relationships with FC concepts. In the field of dynamical systems theory some work has been carried out, but the proposed models and algorithms are still at a preliminary stage of establishment. With these ideas in mind, the paper discusses an FC perspective on the study of the dynamics and control of some distributed parameter systems.
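As standard background (not a formula taken from the paper), the Grünwald–Letnikov construction defines a derivative of arbitrary order α > 0 as the limit of a fractional difference quotient:

```latex
% Grünwald–Letnikov fractional derivative of order \alpha (standard definition)
D^{\alpha}_{a} f(t) \;=\; \lim_{h \to 0^{+}} \frac{1}{h^{\alpha}}
\sum_{k=0}^{\lfloor (t-a)/h \rfloor} (-1)^{k} \binom{\alpha}{k}\, f(t - kh)
```

For α = 1 this collapses to the usual first derivative, while non-integer α gives the operator a memory of past values of f, which is precisely the property that makes FC attractive for modelling distributed parameter systems.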
Abstract:
Previous experiments revealed that DHH1, an RNA helicase involved in the regulation of mRNA stability and translation, complemented the phenotype of a Saccharomyces cerevisiae mutant affected in the expression of genes coding for monocarboxylic-acid transporters, JEN1 and ADY2 (Paiva S, Althoff S, Casal M, Leao C. FEMS Microbiol Lett, 1999, 170:301–306). In wild-type cells, JEN1 expression had been shown to be undetectable in the presence of glucose or formic acid, and induced in the presence of lactate. In this work, we show that JEN1 mRNA accumulates in a dhh1 mutant when formic acid is used as the sole carbon source. Dhh1 interacts with the decapping activator Dcp1 and with the deadenylase complex. This led to the hypothesis that JEN1 expression is post-transcriptionally regulated by Dhh1 in formic acid. Analyses of JEN1 mRNA decay in wild-type and dhh1 mutant strains confirmed this hypothesis. In these conditions, the stabilized JEN1 mRNA was associated with polysomes, but no Jen1 protein could be detected, whether by measurable lactate carrier activity, Jen1-GFP fluorescence detection or western blots. These results reveal the complexity of the regulation of JEN1 expression in S. cerevisiae and evidence the importance of DHH1 in this process. Additionally, microarray analyses of the dhh1 mutant indicated that Dhh1 plays a large role in metabolic adaptation, suggesting that carbon source changes trigger a complex interplay between transcriptional and post-transcriptional effects.
Abstract:
Smart Grids (SGs) have emerged as the new paradigm for power system operation and management, being designed to include large amounts of distributed energy resources. This new paradigm requires new Energy Resource Management (ERM) methodologies considering different operation strategies and the existence of new management players such as several types of aggregators. This paper proposes a methodology to facilitate the coalition of distributed generation units into Virtual Power Players (VPPs), considering a game theory approach. The proposed approach consists in analysing the classifications attributed by each VPP to the distributed generation units, as well as the contracts previously established by each player. The proposed classification model is based on fourteen parameters, including technical, economic and behavioural ones. Depending on each VPP's strategies, size and goals, each parameter has a different importance. VPPs can also manage other types of energy resources, such as storage units, electric vehicles, demand response programs, or even parts of the MV and LV distribution networks. A case study with twelve VPPs with different characteristics and one hundred and fifty real distributed generation units is included in the paper.
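As general coalitional game-theory background (the paper's own analysis rests on its fourteen-parameter classification model, so this is only an illustration of the kind of tool such approaches draw on), the Shapley value is the standard way to divide the worth v(N) of a grand coalition fairly among its members:

```latex
% Shapley value of player i in a coalitional game (N, v) -- standard definition
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}}
\frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,
\bigl( v(S \cup \{i\}) - v(S) \bigr)
```

Each player's share is its average marginal contribution over all orders in which the coalition could form, which is the usual benchmark for deciding whether joining a VPP coalition is worthwhile for a generation unit.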
Abstract:
Further improvements in the implementation of demand response programs are needed in order to take full advantage of this resource, namely for participation in energy and reserve market products, which requires adequate aggregation and remuneration of small-sized resources. The present paper focuses on SPIDER, a demand response simulator that has been improved to simulate demand response together with a realistic power system simulation. To illustrate the simulator's capabilities, the paper proposes a methodology focusing on the aggregation of consumers and generators, providing adequate tools for the adoption of demand response programs by the involved players. The proposed methodology centers on a Virtual Power Player (VPP) that manages and aggregates the available demand response and distributed generation resources in order to satisfy the required electrical energy demand and reserve. The aggregation of resources is addressed by the use of clustering algorithms, and the operation costs for the VPP are minimized. The presented case study is based on a set of 32 consumers and 66 distributed generation units, running on 180 distinct operation scenarios.
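A minimal sketch of the clustering step follows, using scikit-learn's k-means on invented daily load profiles; the simulator's actual algorithm and features are not specified in the abstract, so everything below is a placeholder:

```python
# Sketch: grouping consumers by daily load profile with k-means.
# The 32 profiles below are random placeholders, not SPIDER data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
profiles = rng.random((32, 24))        # 32 consumers x 24 hourly loads

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(profiles)
for group in range(4):
    members = np.flatnonzero(kmeans.labels_ == group)
    print(f"aggregated group {group}: consumers {members.tolist()}")
```

Each resulting group can then be treated as a single aggregated resource when the VPP schedules demand response and remunerates participants.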
Abstract:
The high penetration of distributed energy resources (DER) in distribution networks and the competitive environment of electricity markets impose the use of new approaches in several domains. Network cost allocation, traditionally used in transmission networks, should be adapted for and used in distribution networks, considering the specifications of the connected resources. The main goal is to develop a fairer methodology that distributes the distribution network use costs among all players using the network in each period. In this paper, a model considering different types of costs (fixed, losses, and congestion costs) is proposed, comprising the use of a large set of DER, namely distributed generation (DG), demand response (DR) of the direct load control type, energy storage systems (ESS), and electric vehicles with the capability of discharging energy to the network, known as vehicle-to-grid (V2G). The proposed model includes three distinct phases of operation. The first phase consists in an economic dispatch based on an AC optimal power flow (AC-OPF); in the second phase, Kirschen's and Bialek's tracing algorithms are used and compared to evaluate the impact of each resource on the network. Finally, the MW-mile method is used in the third phase of the proposed model. A distribution network of 33 buses with a large penetration of DER is used to illustrate the application of the proposed model.
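For context, a common generic statement of the MW-mile principle applied in the third phase is given below; the symbols are illustrative, not the paper's notation:

```latex
% MW-mile allocation (generic form): c_l is the per-unit cost of line l,
% L_l its length, and P_{l,u} the flow on l attributed to player u by the
% tracing stage (Kirschen's or Bialek's method).
C_u \;=\; \sum_{l \in \mathcal{L}} c_l \, L_l \,
\frac{\lvert P_{l,u} \rvert}{\sum_{v} \lvert P_{l,v} \rvert}
```

The tracing algorithms of the second phase supply exactly the per-player line flows P_{l,u} that this formula needs, which is why the three phases fit together as a pipeline.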
Abstract:
Eruca sativa (rocket salad) is intensely consumed all over the world, and this work was therefore undertaken to evaluate the antioxidant status and the environmental contamination (positive and negative nutritional contributions) of leaves and stems of this vegetable. The antioxidant capacity of rocket salad was assessed by means of optical methods, namely the total phenolic content (TPC), the reducing power assay and the DPPH radical scavenging activity. The extent of the environmental contamination was determined through the quantification of thirteen organochlorine pesticides (OCPs) by gas chromatography coupled with an electron-capture detector (GC-ECD), with compound confirmation by gas chromatography tandem mass spectrometry (GC-MS/MS). The OCP residues were extracted using the Quick, Easy, Cheap, Effective, Rugged and Safe (QuEChERS) methodology. The results demonstrated that the leaves presented more antioxidant activity than the stems, the leaves containing six times more polyphenolic compounds than the stems. Concerning OCP occurrence, the average recoveries obtained at the three levels tested (40, 60 and 80 µg kg−1) ranged from 55% to 149% with a relative standard deviation of 11% (except hexachlorobenzene). Following this study, three vegetable samples were collected from supermarkets and analysed. The data indicated that only one sample reached 16.21 µg kg−1 of β-hexachlorocyclohexane, confirmed by GC-MS/MS, showing QuEChERS to be a good choice for OCP extraction. Furthermore, the consumption of the leaves guarantees higher levels of antioxidants than that of the stems.
Abstract:
The growth of information systems and their massive use have created a new reality in the access to remote experiments that are geographically distributed. In recent times, the topic of remote laboratories has appeared in the most diverse fields, such as education or industrial control and monitoring systems. Since access to these laboratories takes place over a permissive medium such as the Internet, the information may be at the mercy of any attacker. It is therefore necessary to secure this access, so as to prevent tampering with the values obtained as well as unauthorized accesses. The security mechanisms adopted must take into account the need for authentication and authorization, both critical points with respect to security, since these laboratories may be controlling sensitive and expensive equipment and may, in certain cases, even compromise the control and monitoring of industrial systems. The goal of this work was the analysis of network security, through a study of the various security concepts and mechanisms needed to secure the communications between remote laboratories. From it result the three solutions presented for secure communication between geographically distributed remote laboratories, based on the IPSec, OpenVPN and PPTP technologies. To minimize costs, the entire implementation rests on open-source software and on the use of a low-cost computer. The VPNs were configured so as to achieve the intended results in creating a secure connection to remote laboratories. pfSense proved to be the right choice, since it natively supports all of the technologies studied and implemented, without requiring very expensive hardware, allowing open-source technologies to be used without compromising the security of the solutions that protect the communications of the remote laboratories.
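As one concrete illustration of the kind of configuration such a setup relies on, below is a minimal OpenVPN server configuration; it is a generic sketch, with certificate paths, subnet and cipher as placeholder choices rather than the settings used in this work:

```
# Minimal OpenVPN server configuration (illustrative sketch only;
# paths, subnet and cipher are placeholders, not this work's settings)
port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh.pem
server 10.8.0.0 255.255.255.0   # VPN subnet handed to remote-lab clients
keepalive 10 120
cipher AES-256-GCM
persist-key
persist-tun
```

Certificate-based authentication addresses the authentication and authorization concerns raised above, while the tunnel cipher protects the measurement values in transit.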
Abstract:
Dissertation submitted to obtain the Master's Degree in Informatics Engineering
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies