969 results for "Efficient technology"


Relevance: 30.00%

Abstract:

Quantum computers promise to increase greatly the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.
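
A toy linear-algebra sketch may help make the dual-rail encoding concrete. The example below is an illustration only, not the authors' scheme: the mode labelling, the beam-splitter convention and the chosen angles are assumptions, and only the easy single-photon (single-qubit) part of the proposal is shown.

```python
import numpy as np

# Dual-rail qubit: |0> = photon in mode a, |1> = photon in mode b.
ket0 = np.array([1.0, 0.0], dtype=complex)

def beam_splitter(theta):
    """Action of a lossless beam splitter on the one-photon subspace
    (one common convention; the paper does not fix a specific matrix)."""
    return np.array([[np.cos(theta), 1j * np.sin(theta)],
                     [1j * np.sin(theta), np.cos(theta)]])

def phase_shifter(phi):
    """Phase shift applied to mode b only."""
    return np.diag([1.0, np.exp(1j * phi)])

# A 50:50 beam splitter (theta = pi/4) sends a photon entering in mode a
# into an equal superposition of the two modes.
psi = beam_splitter(np.pi / 4) @ ket0
print("amplitudes:", np.round(psi, 3))              # [0.707, 0.707j]
print("detection probabilities:", np.abs(psi) ** 2)  # [0.5, 0.5]

# Chaining beam splitters and phase shifters with adjustable angles gives
# arbitrary single-qubit rotations; the hard part -- entangling two such
# qubits -- is what the paper obtains from photo-detection and feed-forward
# instead of optical non-linearities.
u = beam_splitter(np.pi / 4) @ phase_shifter(np.pi / 2) @ beam_splitter(np.pi / 4)
print("composite single-qubit gate:\n", np.round(u, 3))
```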

Relevance: 30.00%

Abstract:

Turtle excluder devices (TEDs) are being trialed on a voluntary basis in many Australian prawn (shrimp) trawl fisheries to reduce sea turtle captures. Analysis of TED introductions into shrimp trawl fisheries of the United States provided major insights into why conflicts occurred between shrimpers, conservationists, and government agencies. A conflict over the introduction and subsequent regulation of TEDs occurred because the problem and the solution were perceived differently by the various stakeholders. Attempts to negotiate and mediate the conflict broke down, resulting in litigation against the U.S. government by conservationists and shrimpers. Litigation was not an efficient resolution to the sea turtle-TED-trawl conflict, but it appears that litigation was the only remaining path of resolution once the issue became polarized. We review two major Australian trawl fisheries to identify any significant differences in circumstances that may affect TED acceptance. Australian trawl fisheries are structured differently, and good communication occurs between industry and researchers. TEDs are being introduced as mature technology. Furthermore, bycatch issues are of increasing concern to all stakeholders. These factors, combined with insights derived from previous conflicts concerning TEDs in the United States, increase the likelihood that TEDs will be introduced to Australian fishers with better acceptance.

Relevance: 30.00%

Abstract:

Described in this article is a novel device that facilitates study of the cross-sectional anatomy of the human head. In designing our device, we aimed to protect sections of the head from the destructive action of handling during anatomy laboratory while also ensuring excellent visualization of the anatomic structures. We used an electric saw to create 15-mm sections of three cadaver heads in the three traditional anatomic planes and inserted each section into a thin, perforated display box made of transparent acrylic material. The thin display boxes with head sections are kept in anatomical order in a larger transparent acrylic storage box containing formaldehyde solution, which preserves the specimens but also permits direct observation of the structures and their anatomic relationships to each other. This box-within-box design allows students to easily view sections of a head in its anatomical position as well as to examine internal structures by manipulating individual display boxes without altering the integrity of the preparations. This methodology for demonstrating cross-sectional anatomy allows efficient use of cadaveric material and technician time while also giving learners the best possible handling and visualization of complex anatomic structures. Our approach to teaching cross-sectional anatomy of the head can be applied to any part of the human body, and the value of our device design will only increase as more complicated understandings of cross-sectional anatomy are required by advances in and the proliferation of imaging technology. Anat Sci Educ 3: 141-143, 2010. © 2010 American Association of Anatomists.

Relevance: 30.00%

Abstract:

BP Refinery (Bulwer Island) Ltd (BP), located on the eastern Australian coast, is currently undergoing a major expansion as a part of the Queensland Clean Fuels Project. The associated wastewater treatment plant upgrade will provide a better quality of treated effluent than is currently possible with the existing infrastructure, and one of a sufficiently high standard to meet not only the requirements of imposed environmental legislation but also BP's environmental objectives. A number of challenges were faced when considering the upgrade, particularly: cost constraints and limited plot space, highly variable wastewater, toxicity issues, and limited available hydraulic head. Sequencing Batch Reactor (SBR) technology was chosen for the lagoon upgrade for the following reasons: SBR technology allowed a retrofit of the existing earthen lagoon without the need for any additional substantial concrete structures; a dual lagoon system allowed partial treatment of wastewaters during construction; SBRs give substantial process flexibility; SBRs allow process parameters to be modified easily without any physical modifications; and SBRs offer significant cost benefits. This paper presents the background to this application, outlines the laboratory studies carried out on the wastewater, and details the full-scale design issues and methods for providing a cost-effective, efficient treatment system using the existing lagoon system.

Relevance: 30.00%

Abstract:

This paper presents experimental results of the communication performance evaluation of a prototype ZigBee-based patient monitoring system commissioned in an in-patient floor of a Portuguese hospital (HPG – Hospital Privado de Guimarães). In addition, it revisits relevant problems that affect the performance of non-beacon-enabled ZigBee networks. First, the presence of hidden nodes and the impact of sensor node mobility are discussed. It was observed, for instance, that the message delivery ratio in a star network consisting of six wireless electrocardiogram sensor devices may decrease from 100% when no hidden nodes are present to 83.96% when half of the sensor devices are unable to detect the transmissions made by the other half. A further aspect that affects communication reliability is a deadlock condition that can occur if routers are unable to process incoming packets during the backoff part of the CSMA-CA mechanism. A simple approach to increase the message delivery ratio in this case is proposed and its effectiveness is verified. The discussion and results presented in this paper aim to contribute to the design of efficient networks, and are also valid for scenarios and environments other than hospitals.
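
To make the hidden-node effect concrete, the following toy slotted-channel model shows how the delivery ratio drops once senders are split into two groups that cannot sense each other's transmissions. It is an illustration only; the node count, transmission probability and slot structure are invented and do not model the actual testbed or the full CSMA-CA mechanism.

```python
import random

def delivery_ratio(n_nodes=6, hidden_fraction=0.5, slots=10_000,
                   tx_prob=0.05, seed=1):
    """Toy slotted model: nodes sense (and defer to) only nodes in their
    own group, so transmissions from different groups in the same slot
    collide at the coordinator."""
    random.seed(seed)
    n_hidden = int(n_nodes * hidden_fraction)
    group = [0] * (n_nodes - n_hidden) + [1] * n_hidden
    sent = delivered = 0
    for _ in range(slots):
        want = [i for i in range(n_nodes) if random.random() < tx_prob]
        winners = []
        for g in (0, 1):
            contenders = [i for i in want if group[i] == g]
            if contenders:
                # Carrier sensing serialises access inside a group:
                # assume one transmission per group per slot.
                winners.append(random.choice(contenders))
        sent += len(winners)
        if len(winners) == 1:          # no cross-group collision
            delivered += 1
    return delivered / sent if sent else 1.0

print("no hidden nodes :", delivery_ratio(hidden_fraction=0.0))
print("half hidden     :", round(delivery_ratio(hidden_fraction=0.5), 3))
```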

Relevance: 30.00%

Abstract:

We focus on large-scale and dense deeply embedded systems where, due to the large amount of information generated by all nodes, even simple aggregate computations such as the minimum value (MIN) of the sensor readings become notoriously expensive to obtain. Recent research has exploited a dominance-based medium access control (MAC) protocol, the CAN bus, for computing aggregated quantities in wired systems. For example, MIN can be computed efficiently, and an interpolation function which approximates sensor data in an area can be obtained efficiently as well. Dominance-based MAC protocols have recently been proposed for wireless channels, and these protocols can be expected to be used for achieving highly scalable aggregate computations in wireless systems, but no experimental demonstration is currently available in the research literature. In this paper, we demonstrate that highly scalable aggregate computations in wireless networks are possible. We do so by (i) building a new wireless hardware platform with appropriate characteristics for making dominance-based MAC protocols efficient, (ii) implementing dominance-based MAC protocols on this platform, (iii) implementing distributed algorithms for aggregate computations (MIN, MAX, interpolation) using the new implementation of the dominance-based MAC protocol, and (iv) performing experiments to prove that such highly scalable aggregate computations in wireless networks are possible.
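
The dominance-based arbitration principle borrowed from the CAN bus can be sketched in a few lines (an illustrative model of the principle, not the authors' wireless implementation): each node transmits its reading bit by bit, most significant bit first, on a channel that behaves as a wired AND, so 0 is dominant and 1 is recessive; a node that sends a recessive bit but observes a dominant one withdraws. The pattern left on the channel after the last bit is exactly the network-wide MIN, obtained in a number of bit slots that does not grow with the number of nodes.

```python
def channel_min(readings, n_bits=16):
    """Dominance-based arbitration: the channel is a wired AND (0 dominant,
    1 recessive). The surviving bit pattern equals min(readings)."""
    active = set(range(len(readings)))       # nodes still contending
    result = 0
    for bit in reversed(range(n_bits)):      # MSB first
        sent = {i: (readings[i] >> bit) & 1 for i in active}
        channel = min(sent.values())         # wired-AND of transmitted bits
        # Nodes that sent a recessive 1 but heard a dominant 0 withdraw.
        active = {i for i in active if sent[i] == channel}
        result = (result << 1) | channel
    return result

readings = [731, 52, 990, 52, 408]
assert channel_min(readings) == min(readings)
print(channel_min(readings))   # 52
```

MAX follows in the same way by arbitrating on the bitwise complement of the readings.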

Relevance: 30.00%

Abstract:

The aim of the present work is to provide insight into the mechanism of laccase reactions using syringyl-type mediators. We studied the pH dependence and the kinetics of oxidation of syringyl-type phenolics using the low redox potential CotA and the high redox potential TvL laccases. Additionally, the efficiency of these compounds as redox mediators for the oxidation of non-phenolic lignin units was tested at different pH values and increasing mediator/non-phenolic ratios. Finally, the intermediates and products of the reactions were identified by LC-MS and ¹H NMR. These approaches allow conclusions to be drawn on (1) the mechanism involved in the oxidation of phenolics by bacterial laccases, (2) the importance of the chemical nature and properties of phenolic mediators, (3) the apparent independence of the non-phenolics conversion yields from the enzyme's properties, and (4) the competitive routes involved in the catalytic cycle of the laccase-mediator system, with several new C-O coupling type structures being proposed.

Relevance: 30.00%

Abstract:

Empowered by virtualisation technology, cloud infrastructures enable the construction of flexible and elastic computing environments, providing an opportunity for energy and resource cost optimisation while enhancing system availability and achieving high performance. A crucial requirement for effective consolidation is the ability to efficiently utilise system resources for high-availability computing and energy-efficiency optimisation to reduce operational costs and carbon footprints in the environment. Additionally, failures in highly networked computing systems can substantially degrade system performance, preventing the system from achieving its initial objectives. In this paper, we propose algorithms to dynamically construct and readjust virtual clusters to enable the execution of users' jobs. Allied with an energy-optimising mechanism to detect and mitigate energy inefficiencies, our decision-making algorithms leverage virtualisation tools to provide proactive fault-tolerance and energy-efficiency to virtual clusters. We conducted simulations by injecting random synthetic jobs and jobs using the latest version of the Google cloud tracelogs. The results indicate that our strategy improves the work-per-Joule ratio by approximately 12.9% and the working efficiency by almost 15.9% compared with other state-of-the-art algorithms.
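
As rough intuition for why consolidation improves a work-per-Joule figure, the sketch below packs job loads onto as few hosts as possible with a generic first-fit-decreasing heuristic and compares the resulting energy metric against a one-job-per-host spread. It is not the paper's decision-making algorithm, and the host capacity, power draw and job loads are invented for illustration.

```python
def consolidate(loads, host_capacity=1.0):
    """Greedy first-fit-decreasing placement: pack job loads onto as few
    hosts as possible so that idle hosts can be powered down."""
    free = []                                  # remaining capacity per host
    for load in sorted(loads, reverse=True):
        for i, cap in enumerate(free):
            if load <= cap:
                free[i] = cap - load
                break
        else:
            free.append(host_capacity - load)  # power on a new host
    return len(free)

def work_per_joule(work, hosts, host_power_w=200.0, hours=1.0):
    """Figure of merit: useful work divided by energy consumed."""
    return work / (hosts * host_power_w * hours * 3600.0)

loads = [0.6, 0.3, 0.8, 0.2, 0.5, 0.4]         # normalised CPU demands
spread = len(loads)                             # one job per host
packed = consolidate(loads)                     # greedy consolidation
gain = work_per_joule(1.0, packed) / work_per_joule(1.0, spread) - 1
print(f"hosts: {spread} -> {packed}, work/Joule gain: {100 * gain:.0f}%")
```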

Relevance: 30.00%

Abstract:

Recent integrated circuit technologies have opened the possibility to design parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space gets even more difficult if, beyond performance and area, we also consider extra metrics like performance and area efficiency, where the designer tries to design the architecture with the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to design a many-core architecture. Instead of doing the design space exploration of the many-core architecture based on the experimental execution results of a particular benchmark of algorithms, our approach is to make a formal analysis of the algorithms considering the main architectural aspects and to determine how each particular architectural aspect is related to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach we did a theoretical analysis of a dense matrix multiplication algorithm and determined an equation that relates the number of execution cycles to the architectural parameters. Based on this equation a many-core architecture has been designed. The results obtained indicate that a 100 mm² integrated circuit design of the proposed architecture, using a 65 nm technology, is able to achieve 464 GFLOPs (double-precision floating-point) for a memory bandwidth of 16 GB/s. This corresponds to a performance efficiency of 71%. Considering a 45 nm technology, a 100 mm² chip attains 833 GFLOPs, which corresponds to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
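
The kind of analysis described above, relating achievable performance to off-chip bandwidth and on-chip memory, can be illustrated with a generic roofline-style bound for blocked dense matrix multiplication. This is not the cycle-count equation derived in the paper; the peak rating and block size below are assumptions chosen only to show the shape of the trade-off.

```python
def attainable_gflops(peak_gflops, bandwidth_gb_s, block_size,
                      bytes_per_word=8):
    """Roofline-style bound for blocked dense matrix multiplication:
    each b*b*b tile multiply performs 2*b^3 flops and streams roughly
    two b*b input tiles from external memory."""
    flops_per_tile = 2.0 * block_size ** 3
    bytes_per_tile = 2.0 * block_size ** 2 * bytes_per_word
    intensity = flops_per_tile / bytes_per_tile          # flops per byte
    return min(peak_gflops, bandwidth_gb_s * intensity)

peak = 650.0   # hypothetical peak of a many-core design, GFLOP/s (assumption)
for bw in (16, 32, 64):                                  # external bandwidth, GB/s
    perf = attainable_gflops(peak, bw, block_size=256)
    print(f"{bw:3d} GB/s -> {perf:6.1f} GFLOP/s ({100 * perf / peak:.0f}% of peak)")
```

Below the ridge point extra cores raise the peak but not the attainable performance, whereas larger per-core local memories (larger blocks) raise arithmetic intensity, which is consistent with the abstract's remark that area efficiency is limited by the 16 GB/s memory bandwidth.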

Relevance: 30.00%

Abstract:

INTED2010, the 4th International Technology, Education and Development Conference, was held in Valencia (Spain) on March 8, 9 and 10, 2010.

Relevance: 30.00%

Abstract:

Dissertation presented to obtain a Master's degree in Biotechnology at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.

Relevance: 30.00%

Abstract:

Ecological Water Quality - Water Treatment and Reuse

Relevance: 30.00%

Abstract:

Currently the world around us "reboots" every minute, and staying at the forefront seems a very arduous task. The continuous and accelerating progress of society requires a dynamic and efficient attitude from all actors, both in monitoring that progress and in adapting to it. With regard to education, no matter how up to date we are with the contents, the didactic strategies and the technological resources, we are inevitably compelled to adapt to new paradigms and to rethink traditional teaching methods. It is in this context that e-learning platforms make their contribution. Here teachers and students have at their disposal new ways to enhance the teaching and learning process, and these platforms are seen, at present, as significant virtual environments for supporting teaching and learning. This paper presents a project and attempts to illustrate the potential of new technologies as a supporting tool at different stages of teaching and learning, at different levels and in different areas of knowledge, particularly in Mathematics. We intend to promote a constructive discussion, presenting our current perception that the use of the Learning Management System Moodle by Higher Education teachers, as a supplementary teaching-learning environment for virtual classroom sessions, can contribute to greater efficiency and effectiveness of teaching practice and improve student achievement. Regarding the learning analytics experience, we present some results obtained with assessment-oriented Learning Analytics tools, through which we came to appreciate that assessing students' performance in online learning environments is a challenging and demanding task.

Relevance: 30.00%

Abstract:

Based on the report for the unit "Project IV" of the PhD programme on Technology Assessment, under the supervision of Dr.-Ing. Marcel Weil and Prof. Dr. António Brandão Moniz. The report was presented and discussed at the Doctorate Conference on Technology Assessment in July 2013 at the Universidade Nova de Lisboa, Caparica campus.

Relevance: 30.00%

Abstract:

PhD thesis in Science and Engineering of Polymers and Composites.