787 results for Green IT framework
Abstract:
The CotA laccase-catalysed oxidation of the meta,para-disubstituted arylamine 2,4-diaminophenyldiamine delivers, under mild reaction conditions, a benzocarbazole derivative (1) in 74% yield, a key structural motif with a diverse range of applications. This work extends the scope of aromatic frameworks obtainable with these enzymes and represents a new, efficient and clean method to construct C-C and C-N bonds in a single step.
Abstract:
Currently, there is a growing need for software tailored to the customer, able to adapt quickly to the constant changes in their area of business. Each customer has their own concrete problems to solve and often cannot dedicate a large amount of resources to achieve the intended goals. To address these problems, several software development architectures and methodologies have emerged that enable the agile development of highly configurable applications, which can be customized by any of their users. This dynamism, brought to applications in the form of models that are customized by users and interpreted by a generic platform, creates greater challenges when it comes to testing, since there is a considerably larger number of variables than in an application with a traditional architecture. It is necessary, at all times, to guarantee the integrity of all models, as well as of the platform responsible for interpreting them, without requiring the constant development of applications to support the tests over the different models. This thesis focuses on one such application, the myMIS platform, which interprets management-oriented models written in a domain-specific language; its current state is assessed and a proposal of testing practices to apply in its development is defined. The proposal resulting from this thesis showed that, despite the difficulties inherent to the application's architecture, developing tests in a generic way is possible, and the same test logic can be reused to test several distinct models.
Abstract:
Video games are increasingly one of the largest segments of the entertainment industry, which has been expanding year after year. Moreover, video games are ever more present in our daily lives, whether through mobile devices or the new consoles. Based on this premise, it is safe to say that investment in this field will bring more gains than losses. This dissertation aims to study the state of the video game industry, with its main focus on the design of a video game built on a Modular Framework, also developed within the scope of this dissertation. To that end, a study of the technological state of the art is carried out, in which several game-creation tools are examined and analysed in order to understand the strengths and weaknesses of each, together with a study of the state of the business, thus forming a more concrete idea of the various points required to create a video game. Next, the different existing genres of video games are discussed and a small video game is conceptualized, also taking into account the types of interface most used in the video game industry, in order to understand which would be the most viable, according to genre, as well as the different mechanics present in the video game to be created. The Modular Framework is developed taking into account all of the previous analysis and the conceptualized video game. Its main goal is a high degree of customization and maintainability, such that every implemented module can be replaced by another without creating conflicts. Finally, in order to bring together all the topics analysed throughout this dissertation, a prototype is developed to demonstrate that the Framework works correctly, applying all the decisions previously made.
Abstract:
This paper presents a characterization of medium voltage (MV) electric power consumers based on a data clustering approach. The aim is to identify typical load profiles by selecting the best partition of a power consumption database from a pool of partitions produced by several clustering algorithms. The best partition is selected using several cluster validity indices. These methods are intended to be used in a smart grid environment to extract useful knowledge about customers' behavior. The data-mining-based methodology presented throughout the paper consists of several steps, namely a data pre-processing phase, the application of the clustering algorithms, and the evaluation of the quality of the resulting partitions. To validate our approach, a case study with a real database of 1,022 MV consumers was used.
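The selection step — run several clusterings, keep the partition with the best validity index — can be sketched in a few lines. The paper does not say which algorithms or indices make up its pool, so the sketch below assumes k-means as the clustering algorithm and the Davies-Bouldin index as the validity criterion, applied to synthetic 24-hour load profiles:

```python
import math
import random

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means; returns (centroids, labels)."""
    rnd = random.Random(seed)
    centroids = rnd.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: dist(p, centroids[c]))
        for c in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == c]
            if members:
                centroids[c] = [sum(x) / len(members) for x in zip(*members)]
    return centroids, labels

def davies_bouldin(points, centroids, labels):
    """Validity index: within-cluster scatter over between-centroid
    separation, averaged over clusters. Lower is better."""
    k = len(centroids)
    scatter = []
    for c in range(k):
        members = [points[i] for i in range(len(points)) if labels[i] == c]
        scatter.append(sum(dist(p, centroids[c]) for p in members) / max(len(members), 1))
    return sum(max((scatter[i] + scatter[j]) / dist(centroids[i], centroids[j])
                   for j in range(k) if j != i)
               for i in range(k)) / k

# Synthetic 24-point daily load profiles: two obvious consumer types.
rnd = random.Random(1)
profiles = [[10 + 5 * math.sin(2 * math.pi * h / 24) + rnd.random() for h in range(24)]
            for _ in range(20)]
profiles += [[30 + 10 * math.cos(2 * math.pi * h / 24) + rnd.random() for h in range(24)]
             for _ in range(20)]

# Keep the partition with the best (lowest) index over a pool of candidates.
best_k = min(range(2, 6),
             key=lambda k: davies_bouldin(profiles, *kmeans(profiles, k)))
```

Varying k here plays the role of the pool of partitions; in the paper the pool also comes from running different clustering algorithms, and several indices are combined rather than one.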
Abstract:
Video games are increasingly an integral part of society, and the widespread adoption of today's many devices has helped make them even more present in people's daily lives. But creating games is only possible with very specific tools: frameworks, or game engines. With these it is possible to create the most varied genres of video games for the most varied devices. However, not all of these tools are free, and those that are tend to be poorly documented or limited in certain features, which can lengthen the development of a video game. The work developed in this dissertation aims to create a framework capable of supporting different genres of video games, while also making it easy to change or replace different internal parts of the framework without breaking it. To this end, an analysis was carried out of the current state of the video game market and of the tools that enable game creation, also covering the graphical interfaces found in video games. To demonstrate the functionality implemented in the framework, a prototype of a fighting game was developed, thereby exploiting some of the characteristics of the tool.
Abstract:
In the light of the Portuguese legal system, cooperative enterprises may include an enterprise carried out by a subsidiary, provided certain requirements are met. The aim of this paper is to reflect on the legal framework of the relationship between the cooperative and the subsidiary. Several problems are addressed in this paper: (i) How should such a relationship be qualified when it corresponds to mere investments made by the cooperative? Should it be classified as non-member cooperative transactions or as extraordinary activities? (ii) How should such a relationship be qualified when it relates to the development of activities preparatory or complementary to the economic activity developed between the cooperative and its members? May we speak, in this situation, of a concept of “indirect mutuality”, as provided for in other legal systems? (iii) How should the economic results from the activity developed by the subsidiary be classified, and what is their regime? We conclude by advocating: (i) that the cooperative enterprise may include an enterprise carried out by a subsidiary if this is deemed necessary to satisfy the interests of the members; (ii) the inadmissibility of the concept of “indirect mutuality”; (iii) the inadequacy of qualifying the legal relationship between the cooperative partner; (iv) the application, to the economic results coming from the activity developed by the subsidiary, of the regime provided for in the Portuguese Cooperative Code for the results of non-member cooperative transactions; (v) that the economic results coming from the activity developed by the subsidiary cannot be appropriated by individual cooperator members, and so should be allocated to indivisible reserves.
Abstract:
IEEE 802.11 is one of the most well-established and widely used standards for wireless LANs. Its Medium Access Control (MAC) layer assumes that devices adhere to the standard's rules and timers to ensure fair access to and sharing of the medium. However, the flexibility and configurability of wireless card drivers make it possible for selfish, misbehaving nodes to gain an advantage over the well-behaving nodes. The existence of selfish nodes degrades the QoS for the other devices in the network and may increase their energy consumption. In this paper we propose a green solution for selfish misbehavior detection in IEEE 802.11-based wireless networks. The proposed scheme works in two phases: a Global phase, which detects whether or not the network contains selfish nodes, and a Local phase, which identifies which node or nodes within the network are selfish. The network must usually be examined for selfish nodes frequently during its operation, since any node may start acting selfishly. Our solution is green in the sense that it saves network resources: it avoids wasting the nodes' energy by not examining every individual node for selfishness when it is not necessary. The proposed detection algorithm is evaluated using extensive OPNET simulations. The results show that the Global network metric clearly indicates the existence of a selfish node, while the Local node metric successfully identifies the selfish node(s). We also provide a mathematical analysis of the selfish misbehavior and derive formulas for the successful channel access probability.
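As a minimal illustration of the two-phase idea, the sketch below uses Jain's fairness index over per-node throughputs as a stand-in for the Global metric and each node's throughput share as the Local metric. The paper's actual metrics come from MAC-level behaviour evaluated in OPNET, so the metric choice and the thresholds here are illustrative assumptions:

```python
def jain_fairness(throughputs):
    """Jain's fairness index: 1.0 for a perfectly fair share,
    approaching 1/n as a single node dominates the medium."""
    n = len(throughputs)
    s = sum(throughputs)
    return s * s / (n * sum(t * t for t in throughputs))

def detect_selfish(throughputs, global_threshold=0.9, local_factor=1.5):
    """Two-phase detection sketch.

    Global phase: if the network-wide fairness index is above the
    threshold, skip the per-node examination entirely (this is the
    energy saving the abstract describes).
    Local phase: flag nodes whose throughput exceeds local_factor
    times the fair share.
    """
    if jain_fairness(throughputs) >= global_threshold:
        return []  # no selfish node suspected; no per-node scan needed
    fair_share = sum(throughputs) / len(throughputs)
    return [i for i, t in enumerate(throughputs) if t > local_factor * fair_share]
```

For a fair network, `detect_selfish([5.0, 5.0, 5.0, 5.0])` returns an empty list without examining any node; with one dominating node, e.g. `[5.1, 4.9, 5.0, 14.8]`, the Global check fails and the Local phase flags node 3.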
Abstract:
The last decade has witnessed a major shift towards the deployment of embedded applications on multi-core platforms. However, real-time applications have not been able to fully benefit from this transition, as the computational gains offered by multi-cores are often offset by performance degradation due to shared resources, such as main memory. To use multi-core platforms efficiently for real-time systems, it is hence essential to tightly bound the interference incurred when accessing shared resources. Although there has been much recent work in this area, a remaining key problem is to address the diversity of memory arbiters in the analysis, so as to make it applicable to a wide range of systems. This work handles diverse arbiters by proposing a general framework to compute the maximum interference caused by the shared memory bus, and its impact on the execution time of the tasks running on the cores, under different bus arbiters. Our novel approach clearly demarcates the arbiter-dependent and arbiter-independent stages in the analysis of these upper bounds. The arbiter-dependent stage takes the arbiter and the task memory-traffic pattern as inputs and produces a model of the availability of the bus to a given task. Then, based on this availability, the arbiter-independent stage determines the worst-case request-release scenario that maximizes the interference experienced by the tasks due to contention for the bus. We show that the framework addresses the diversity problem by applying it to a memory bus shared under a fixed-priority arbiter, a time-division multiplexing (TDM) arbiter, and an unspecified work-conserving arbiter, using applications from the MediaBench test suite. We also experimentally evaluate the quality of the analysis by comparing it with a state-of-the-art TDM analysis approach, consistently showing a considerable reduction in the maximum interference.
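For the TDM instance, the arbiter-dependent part admits a very simple closed form. The sketch below is not the paper's analysis — only the classical conservative bound, assuming a TDM round of `num_slots` equal-length slots and that, in the worst case, each of a task's bus requests waits for up to one full round before its core's slot serves it:

```python
def tdm_interference_bound(num_requests, num_slots, slot_len):
    """Conservative arbiter-dependent bound for a TDM bus (sketch).

    Worst case per request: the request arrives just too late to use
    the core's own slot and must wait for the rest of the round, i.e.
    up to num_slots * slot_len time units, before being served.
    """
    return num_requests * num_slots * slot_len

def inflated_wcet(wcet_isolation, num_requests, num_slots, slot_len):
    """Simplified composition step: add the bus interference to the
    task's worst-case execution time measured in isolation."""
    return wcet_isolation + tdm_interference_bound(num_requests, num_slots, slot_len)
```

For example, 100 requests on a 4-slot TDM bus with slot length 10 yields an interference bound of 4000 time units. The point of the paper's framework is precisely to tighten such bounds by modelling actual bus availability and the worst-case request-release pattern, rather than charging a full round to every request.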
Abstract:
Faecal samples were collected from fifty-three freshly captured monkeys kept at the Barbados Primate Research Centre and Wildlife Reserve (BPRCWR). Examination of these samples for gastrointestinal helminths using the zinc sulphate flotation method revealed an overall infection rate of 88.7%. The parasites observed included Strongyloides (62.4%), Physaloptera (58.5%), Trichuris (52.8%), hookworm (34.0%), Oesophagostomum (30.2%), Trichostrongylus (3.8%) and Ascaris (5.7%). No significant differences in overall prevalence were observed according to sex or age. Polyparasitism appeared to be common, being observed in 92.5% of all monkeys examined. It is concluded that these monkeys could act as reservoirs of some of the parasites that can infect humans.
Abstract:
Currently, due to the widespread use of computers and the internet, students are trading libraries for the World Wide Web and laboratories for simulation programs. In most courses, simulators are made available to students and can be used to verify theoretical results or to test hardware or a product under development. Although this is an attractive solution — low-cost, easy and a fast way to carry out some course work — it also has major disadvantages. As everything is currently done with, and in, a computer, students are losing the "feel" for the real values of the magnitudes involved. For instance, in engineering studies, and mainly in the first years, students need to learn electronics, algorithms, mathematics and physics. All of these areas can use numerical analysis software, simulation software or spreadsheets, and in the majority of cases the data used are either simulated or random numbers, when real data could be used instead. For example, if a course uses numerical analysis software and needs a dataset, the students can learn to manipulate arrays. Also, when using spreadsheets to build graphics, instead of using a random table, students could use a real dataset based, for instance, on the room temperature and its variation across the day. In this work we present a framework with a simple interface that allows it to be used by the different courses in which computers are part of the teaching/learning process, giving students a more realistic feeling by using real data. The proposed framework is based on a set of low-cost sensors for different physical magnitudes, e.g. temperature, light and wind speed, which are either connected to a central server that the students access over Ethernet, or connected directly to the student's computer/laptop. These sensors use the communication ports available, such as serial ports, parallel ports, Ethernet or Universal Serial Bus (USB).
Since a central server is used, students are encouraged to use the sensor values in their different courses and, consequently, in different types of software, such as numerical analysis tools, spreadsheets, or simply inside any programming language whenever a dataset is needed. To this end, small pieces of hardware were developed, each containing at least one sensor and using a different type of computer communication. As long as the sensors are attached to a server connected to the internet, these tools can also be shared between different schools. This allows sensors that are not available at a given school to be used by getting the values from other places that share them. Another remark is that students in the more advanced years, with (theoretically) more know-how, can use the courses that have some affinity with electronics development to build new sensor boards and expand the framework further. The final solution is very interesting: low cost, simple to develop, and flexible in its use of resources, employing the same materials in several courses and bringing real-world data into the students' computer work.
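A minimal client for such a central sensor server could look like the sketch below. The line-based `NAME:VALUE` wire format, the host name and the port are assumptions made for illustration, since the text does not specify a protocol:

```python
import socket

def parse_reading(line):
    """Parse a 'NAME:VALUE' reading, e.g. 'temperature:21.5'
    -> ('temperature', 21.5). Wire format is an assumed convention."""
    name, _, value = line.strip().partition(":")
    return name, float(value)

def fetch_reading(host, port, sensor):
    """Ask the central server for one sensor value over TCP (sketch).

    Sends the sensor name, reads back one 'NAME:VALUE' line.
    Host, port and protocol are hypothetical."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall((sensor + "\n").encode())
        line = sock.makefile().readline()
    return parse_reading(line)

# A student exercise could then pull real data into a spreadsheet or a
# numerical-analysis script, e.g.:
#   name, value = fetch_reading("sensors.example.edu", 5000, "temperature")
```

The same `parse_reading` helper would serve sensors attached directly over a serial port, since only the transport changes, not the reading format.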
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering
Abstract:
OCEANS 2001, MTS/IEEE Conference and Exhibition (Volume 2)
Abstract:
The world is increasingly a global community. The rapid technological development of communication and information technologies allows knowledge to be transmitted in real time. In this context, it is imperative that the most developed countries devise their own strategies to stimulate the industrial sector to stay up to date and remain competitive in a dynamic and volatile global market, so as to maintain their competitive capacities and, by consequence, sustain a peaceful social state that meets the human and social needs of the nation. The path of competitiveness through technological differentiation in industrialization opens a wide and innovative field of research. We are already facing a new phase of industrial organization and technology that is beginning to change the way we relate to industry, to society and to human interaction in the world of work. This thesis develops an analysis of the Industrie 4.0 framework, its challenges and perspectives. It also analyses the German reality in approaching this future challenge, the competition expected in future global markets, the domestic concerns felt in the national industrial fabric in the face of this challenge, and proposes recommendations for a more effective implementation of a national strategy. The research method consisted of a comprehensive review and strategic analysis of the existing global literature on the topic, whether directly or indirectly related, in parallel with the analysis of questionnaires and of data produced by entities representing industry at the national and global levels. The results of this multilevel analysis allow us to conclude that this theme is only at the beginning of building the platform that will bring the future Internet of Things into the industrial environment of Industrie 4.0.
This dissertation aims to stimulate a more strategic and operational approach within society as a whole, clarifying the existing weaknesses in this area so that the national strategy can be implemented with effective approaches and planned actions, including a training plan that addresses the theme more efficiently through education.
Abstract:
In the last decades nanotechnology has become increasingly important because it offers indisputable advantages to almost every area of expertise, including environmental remediation. In this area the synthesis of highly reactive nanomaterials (e.g. zero-valent iron nanoparticles, nZVI) is gaining the attention of the scientific community, service providers and other stakeholders. The synthesis of nZVI by the recently developed green bottom-up method is extremely promising. However, the lack of information about the characteristics of the synthesized particles hinders a wider and more extensive application. This work aims to evaluate the characteristics of nZVI synthesized through the green method using leaves from different trees. Considering the requirements of a product for environmental remediation, the following characteristics were studied: size, shape, reactivity and agglomeration tendency. The mulberry and pomegranate leaf extracts produced the smallest nZVIs (5–10 nm); the peach, pear and vine leaf extracts produced the most reactive nZVIs; while the ones produced with passion fruit, medlar and cherry extracts did not settle at high nZVI concentrations (931 and 266 ppm). Considering all tests, the nZVIs obtained from medlar and vine leaf extracts are the ones that could perform best in environmental remediation. The information gathered in this paper will be useful in choosing the most appropriate leaf extracts and operational conditions for the application of the green nZVIs in environmental remediation.
Abstract:
The recent technological advancements and market trends are causing an interesting phenomenon: the convergence of the High-Performance Computing (HPC) and Embedded Computing (EC) domains. On one side, new kinds of HPC applications are being required by markets needing huge amounts of information to be processed within a bounded amount of time. On the other side, EC systems are increasingly concerned with providing higher performance in real time, challenging the performance capabilities of current architectures. The advent of next-generation many-core embedded platforms offers the chance to address this converging need for predictable high performance, allowing HPC and EC applications to be executed on efficient and powerful heterogeneous architectures integrating general-purpose processors with many-core computing fabrics. To this end, it is of paramount importance to develop new techniques for exploiting the massively parallel computation capabilities of such platforms in a predictable way. P-SOCRATES will tackle this important challenge by merging leading research groups from the HPC and EC communities. The time-criticality and parallelisation challenges common to both areas will be addressed by proposing an integrated framework for executing workload-intensive applications with real-time requirements on top of next-generation commercial-off-the-shelf (COTS) platforms based on many-core accelerated architectures. The project will investigate new HPC techniques that fulfil real-time requirements. The main sources of indeterminism will be identified, and efficient mapping and scheduling algorithms will be proposed, along with the associated timing and schedulability analysis, to guarantee the real-time and performance requirements of the applications.