907 results for Flexible Bronchoscopy


Relevance:

20.00%

Publisher:

Abstract:

A new high-throughput and scalable architecture for unified transform coding in H.264/AVC is proposed in this paper. This flexible structure is capable of computing all the 4x4 and 2x2 transforms for Ultra High Definition Video (UHDV) applications (4320x7680 @ 30 fps) in real time and at low hardware cost. These significantly high performance levels were proven by implementing several different configurations of the proposed structure in both FPGA and ASIC 90 nm technologies. This experimental evaluation also demonstrated the high area efficiency of the proposed architecture, which, in terms of Data Throughput per Unit of Area (DTUA), is at least 1.5 times more efficient than its most prominent related designs (1).
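
For context, the 4x4 core transform and the 2x2 Hadamard transform that such a unified architecture computes are the simple integer matrix products defined by the H.264/AVC standard; the sketch below shows them in software form only (the proposed hardware datapath is not described in the abstract):

    import numpy as np

    # Forward 4x4 integer core transform of H.264/AVC: W = Cf . X . Cf^T.
    # The per-coefficient scaling is folded into quantisation and omitted here.
    CF4 = np.array([[1,  1,  1,  1],
                    [2,  1, -1, -2],
                    [1, -1, -1,  1],
                    [1, -2,  2, -1]])

    # 2x2 Hadamard transform applied to the chroma DC coefficients.
    H2 = np.array([[1,  1],
                   [1, -1]])

    def forward_4x4(block):
        """4x4 residual block -> transform coefficients (integer arithmetic only)."""
        return CF4 @ block @ CF4.T

    def hadamard_2x2(dc_block):
        """2x2 chroma DC block -> Hadamard-transformed coefficients."""
        return H2 @ dc_block @ H2.T

    if __name__ == "__main__":
        x = np.random.randint(-64, 64, size=(4, 4))
        print(forward_4x4(x))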

Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVE: To examine whether any impairments in health and social life can be found under different kinds of flexible working hours, and whether such effects are related to specific characteristics of these working hours. METHODS: Two studies were conducted: a company-based survey (N=660) and an internet survey (N=528). The first was a paper-and-pencil questionnaire study of employees working under several 'typical' kinds of flexible working time arrangements in different companies and occupational fields (health care, manufacturing, retail, administration, call centres). The second was an internet-based survey using an adaptation of the questionnaire from the first study. RESULTS: The results of both studies consistently show that high variability of working hours is associated with increased impairments in health and well-being, and this is especially true if the variability is company-controlled. These effects are less pronounced if the variability is self-controlled; however, autonomy does not compensate for the effects of variability. CONCLUSIONS: Recommendations for an appropriate design of flexible working hours should be developed in order to minimize any impairing effects on health and psychosocial well-being. Besides allowing for discretion in controlling one's (flexible) working hours, these recommendations should include keeping the variability of flexible working hours low (or at least moderate), even if this variability is self-controlled.

Relevance:

20.00%

Publisher:

Abstract:

Replication is a proven concept for increasing the availability of distributed systems. However, actively replicating every software component in distributed embedded systems may not be a feasible approach: not only are the available resources often limited, but the imposed overhead could also significantly degrade the system's performance. The paper proposes heuristics to dynamically determine which components to replicate based on their significance to the system as a whole, the consequent number of passive replicas, and where to place those replicas in the network. The results show that the proposed heuristics achieve reasonably higher system availability than static offline decisions when lower replication ratios are imposed due to resource or cost limitations. The paper also introduces a novel approach to coordinate the activation of passive replicas in interdependent distributed environments. The proposed distributed coordination model reduces the complexity of the interactions needed among nodes and converges to a globally acceptable solution faster than a traditional centralised approach.
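
The abstract does not spell out the heuristics themselves; as a rough illustration of the general idea only, the hypothetical greedy sketch below replicates the most significant components first, under a global replication-ratio budget and per-node capacity limits (all names and scores are made up for the example):

    def plan_passive_replicas(components, nodes, replication_ratio, replicas_per_component=1):
        """Greedy sketch: replicate the most significant components first until the
        allowed replication ratio (total replicas / number of components) is reached.

        components: dict {component_id: significance score}
        nodes:      dict {node_id: spare capacity, in number of replicas it can host}
        """
        budget = int(replication_ratio * len(components))
        spare = dict(nodes)
        plan = {}
        # Consider the most significant components first.
        for comp, _ in sorted(components.items(), key=lambda kv: kv[1], reverse=True):
            if budget <= 0:
                break
            placed = []
            for _ in range(min(replicas_per_component, budget)):
                node = max(spare, key=spare.get)       # node with most spare capacity
                if spare[node] == 0:
                    break
                spare[node] -= 1
                placed.append(node)
            if placed:
                plan[comp] = placed
                budget -= len(placed)
        return plan

    # Illustrative example: only the most significant component gets a passive replica.
    print(plan_passive_replicas({"ctrl": 0.9, "ui": 0.5, "log": 0.2},
                                {"n1": 1, "n2": 2}, replication_ratio=0.5))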

Relevance:

20.00%

Publisher:

Abstract:

Replication is a proven concept for increasing the availability of distributed systems. However, actively replicating every software component in distributed embedded systems may not be a feasible approach: not only are the available resources often limited, but the imposed overhead could also significantly degrade the system's performance. This paper proposes heuristics to dynamically determine which components to replicate based on their significance to the system as a whole, the consequent number of passive replicas, and where to place those replicas in the network. The activation of passive replicas is coordinated through a fast-convergence protocol that reduces the complexity of the interactions needed among nodes until a new collective global service solution is determined.

Relevance:

20.00%

Publisher:

Abstract:

Within the European project R-Fieldbus (http://www.hurray.isep.ipp.pt/activities/rfieldbus/), an industrial manufacturing field trial was developed. This field trial was conceived as a demonstration test bed for the technologies developed during the project. Because the R-Fieldbus field trial included prototype hardware devices whose purpose has since changed, and because several new technologies have emerged since the conclusion of the project, an update of the field trial was required. This document describes that update of the manufacturing field trial, its purpose, and the changes and improvements introduced. Additionally, the document provides a reliable source of documentation for the equipment, configuration and software components of the manufacturing field trial.

Relevance:

20.00%

Publisher:

Abstract:

Solar cells on lightweight and flexible substrates have advantages over glass- or wafer-based photovoltaic devices in both terrestrial and space applications. Here, we report on the development of amorphous silicon thin-film photovoltaic modules fabricated at a maximum deposition temperature of 150 °C on 100 µm thick polyethylene naphthalate plastic films. Each module, of 10 cm x 10 cm area, consists of 72 a-Si:H n-i-p rectangular structures with transparent conducting oxide top electrodes with Al fingers and metal back electrodes deposited through shadow masks. Individual structures are connected in series forming eight rows, with connection ports provided for external blocking diodes. The design optimization and device performance analysis are performed using a developed SPICE model.
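
The developed SPICE model itself is not given in the abstract; the sketch below only illustrates the usual starting point for such modelling, a single-diode equivalent circuit swept parametrically over the junction voltage, with purely illustrative parameter values:

    import math

    def single_diode_iv(Iph, I0, n, Vt, Rs, Rsh, points=200, Vj_max=0.95):
        """I-V curve of one cell from a single-diode equivalent circuit.

        Sweeping the junction voltage Vj keeps every equation explicit:
            I = Iph - I0*(exp(Vj/(n*Vt)) - 1) - Vj/Rsh
            V = Vj - I*Rs
        Returns a list of (V, I) points.
        """
        curve = []
        for k in range(points + 1):
            Vj = Vj_max * k / points
            I = Iph - I0 * (math.exp(Vj / (n * Vt)) - 1.0) - Vj / Rsh
            curve.append((Vj - I * Rs, I))
        return curve

    def series_string(curve, n_cells):
        """Cells connected in series share the current; their voltages add."""
        return [(n_cells * V, I) for V, I in curve]

    # Illustrative (not measured) parameters for a small a-Si:H structure.
    cell = single_diode_iv(Iph=0.012, I0=1e-10, n=1.8, Vt=0.02585, Rs=8.0, Rsh=2e3)
    string_iv = series_string(cell, n_cells=9)
    print(max(V * I for V, I in string_iv))   # rough maximum power point, in watts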

Relevance:

20.00%

Publisher:

Abstract:

Cloud SLAs compensate customers with credits when average availability drops below certain levels. This is too inflexible, because consumers lose non-measurable amounts of performance and are only compensated later, in subsequent charging cycles. We propose to schedule virtual machines (VMs) driven by range-based non-linear reductions of utility, different for classes of users and across different ranges of resource allocations: partial utility. This customer-defined metric allows providers to transfer resources between VMs in meaningful and economically efficient ways. We define a comprehensive cost model incorporating the partial utility given by clients to a certain level of degradation when VMs are allocated in overcommitted environments (Public, Private, Community Clouds). CloudSim was extended to support our scheduling model. Several simulation scenarios with synthetic and real workloads are presented, using datacenters of different dimensions regarding the number of servers and computational capacity. We show that partial utility-driven scheduling allows more VMs to be allocated. It brings benefits to providers regarding revenue and resource utilization, allowing for more revenue per resource allocated and scaling well with the size of datacenters when compared with a utility-oblivious redistribution of resources. Regarding clients, their workloads' execution time is also improved by incorporating an SLA-based redistribution of their VMs' computational power.
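
The paper's actual utility curves and cost model are not reproduced here; the hypothetical sketch below only illustrates the notion of range-based partial utility per client class, together with a provider-side degradation step that shrinks the allocations whose weighted utility loss is smallest when a host is overcommitted:

    # Hypothetical partial-utility curves: the utility a client of a given class
    # attaches to receiving a given fraction of its requested allocation.
    # Ranges and values are illustrative, not taken from the paper.
    PARTIAL_UTILITY = {
        "gold":   [(1.00, 1.0), (0.80, 0.6), (0.50, 0.2), (0.00, 0.0)],
        "silver": [(1.00, 1.0), (0.80, 0.8), (0.50, 0.5), (0.00, 0.0)],
        "bronze": [(1.00, 1.0), (0.80, 0.9), (0.50, 0.7), (0.00, 0.0)],
    }

    def partial_utility(cls, fraction):
        """Utility of running a VM of class `cls` at `fraction` of its request."""
        for threshold, utility in PARTIAL_UTILITY[cls]:
            if fraction >= threshold:
                return utility
        return 0.0

    def degrade(vms, capacity):
        """Overcommitted host: shrink allocations where the price-weighted utility
        loss is smallest, until total demand fits `capacity`.
        `vms` maps vm_id -> (class, requested_cpu, price)."""
        alloc = {vm: req for vm, (_, req, _) in vms.items()}
        steps = [1.0, 0.8, 0.5]                     # allowed allocation fractions
        level = {vm: 0 for vm in vms}
        while sum(alloc.values()) > capacity:
            def cost(vm):
                cls, req, price = vms[vm]
                if level[vm] + 1 >= len(steps):
                    return float("inf")
                cur, nxt = steps[level[vm]], steps[level[vm] + 1]
                return price * (partial_utility(cls, cur) - partial_utility(cls, nxt))
            vm = min(vms, key=cost)
            if cost(vm) == float("inf"):
                break                               # nothing left to degrade
            level[vm] += 1
            alloc[vm] = vms[vm][1] * steps[level[vm]]
        return alloc

    # Illustrative example: the cheaper, less degradation-sensitive VM is shrunk first.
    print(degrade({"a": ("gold", 4, 10), "b": ("bronze", 4, 2)}, capacity=6))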

Relevance:

20.00%

Publisher:

Abstract:

Data analytic applications are characterized by large data sets that are subject to a series of processing phases. Some of these phases are executed sequentially, but others can be executed concurrently or in parallel on clusters, grids or clouds. The MapReduce programming model has been applied to process large data sets in cluster and cloud environments. To develop an application using MapReduce, one needs to install, configure and access specific frameworks such as Apache Hadoop or Elastic MapReduce in the Amazon Cloud. It would be desirable to provide more flexibility in adjusting such configurations according to the application characteristics. Furthermore, the composition of the multiple phases of a data analytic application requires the specification of all the phases and their orchestration. The original MapReduce model and environment lack flexible support for such configuration and composition. Recognizing that scientific workflows have been successfully applied to modeling complex applications, this paper describes our experiments on implementing MapReduce as subworkflows in the AWARD framework (Autonomic Workflow Activities Reconfigurable and Dynamic). A text mining data analytic application is modeled as a complex workflow with multiple phases, where individual workflow nodes support MapReduce computations. As in typical MapReduce environments, the end user only needs to define the application algorithms for input data processing and for the map and reduce functions. In the paper we present experimental results obtained when using the AWARD framework to execute MapReduce workflows deployed over multiple Amazon EC2 (Elastic Compute Cloud) instances.
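
As a concrete example of the only code such an environment expects from the end user, the sketch below shows a generic word-count map/reduce pair of the kind a text-mining phase would run; the small sequential driver is just a local stand-in, since the AWARD subworkflow API itself is not shown in the abstract:

    import re
    from collections import defaultdict

    def map_fn(_, line):
        """Emit (word, 1) for every word in an input line."""
        for word in re.findall(r"[a-z']+", line.lower()):
            yield word, 1

    def reduce_fn(word, counts):
        """Sum the partial counts emitted for a word."""
        yield word, sum(counts)

    def run_local(lines):
        """Local, sequential stand-in for a MapReduce run: map, shuffle, reduce."""
        groups = defaultdict(list)
        for i, line in enumerate(lines):
            for key, value in map_fn(i, line):
                groups[key].append(value)          # shuffle: group values by key
        return dict(kv for key, values in groups.items()
                       for kv in reduce_fn(key, values))

    print(run_local(["to be or not to be", "to see or not to see"]))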

Relevance:

20.00%

Publisher:

Abstract:

In order to provide a more flexible learning environment in physics, the developed projectile launch apparatus enables students to determine the acceleration of gravity and the dependence of the projectile motion on a set of parameters. The apparatus is remotely operated and accessed via the web, by first scheduling an access time slot. It has a number of configuration parameters that support different learning scenarios of varying complexity.
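
For example, with level-ground launches the acceleration of gravity follows directly from the standard projectile equations; a minimal sketch, assuming the launch speed and angle are among the configurable parameters and the range or flight time is what the students measure:

    import math

    def g_from_range(v0, theta_deg, measured_range):
        """Level-ground range R = v0^2 * sin(2*theta) / g  =>  g = v0^2 * sin(2*theta) / R."""
        return v0 ** 2 * math.sin(math.radians(2 * theta_deg)) / measured_range

    def g_from_flight_time(v0, theta_deg, flight_time):
        """Time of flight T = 2 * v0 * sin(theta) / g  =>  g = 2 * v0 * sin(theta) / T."""
        return 2 * v0 * math.sin(math.radians(theta_deg)) / flight_time

    # Illustrative measurements (speed in m/s, angle in degrees, range in m, time in s).
    print(round(g_from_range(6.0, 45.0, 3.67), 2))         # ~9.81 m/s^2
    print(round(g_from_flight_time(6.0, 45.0, 0.865), 2))  # ~9.81 m/s^2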

Relevance:

20.00%

Publisher:

Abstract:

IEEE Electron Device Letters, vol. 29, no. 9.

Relevance:

20.00%

Publisher:

Abstract:

Application refactorings that imply schema evolution are common activities in programming practice. Although modern object-oriented databases provide transparent schema evolution mechanisms, such refactorings continue to be time-consuming tasks for programmers. In this paper we address this problem with a novel approach based on the aspect-oriented programming and orthogonal persistence paradigms, as well as on our meta-model. An overview of our framework is presented. This framework, a prototype based on that approach, provides applications with the aspects of persistence and database evolution. It also provides a new pointcut/advice language that enables the modularization of the instance adaptation crosscutting concern of classes that were subject to a schema evolution. We also present an application that relies on our framework. This application was developed without any concern for persistence and database evolution; nevertheless, its data is recovered in each execution, and objects in previous schema versions remain transparently available by means of our framework.
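
The framework's pointcut/advice language is not shown in the abstract; as a framework-agnostic illustration of the instance adaptation concern only, the hypothetical sketch below registers per-class, per-version converters that upgrade objects stored under an older schema when they are loaded:

    # Adapters are keyed by (class name, stored schema version); each one upgrades
    # the persisted state of an object by a single schema version.
    ADAPTERS = {}

    def instance_adapter(cls_name, from_version):
        """Decorator registering an adaptation rule for objects stored under an old schema."""
        def register(fn):
            ADAPTERS[(cls_name, from_version)] = fn
            return fn
        return register

    def adapt(cls_name, stored_version, current_version, state):
        """Apply registered adapters stepwise until the state matches the current schema."""
        while stored_version < current_version:
            state = ADAPTERS[(cls_name, stored_version)](state)
            stored_version += 1
        return state

    @instance_adapter("Customer", from_version=1)
    def split_name(state):
        # Hypothetical refactoring: version 2 splits `name` into first and last name.
        first, _, last = state.pop("name").partition(" ")
        return {**state, "first_name": first, "last_name": last}

    print(adapt("Customer", 1, 2, {"name": "Ada Lovelace", "id": 7}))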

Relevance:

20.00%

Publisher:

Abstract:

Dissertation presented to obtain the Doutoramento (Ph.D.) degree in Biochemistry at the Instituto de Tecnologia Química e Biológica da Universidade Nova de Lisboa.

Relevance:

20.00%

Publisher:

Abstract:

Dissertation presented to obtain the Master's degree in Electrical and Computer Engineering.

Relevance:

20.00%

Publisher:

Abstract:

A Work Project, presented as part of the requirements for the award of a Master's degree in Management from the NOVA School of Business and Economics.

Relevance:

20.00%

Publisher:

Abstract:

A Work Project, presented as part of the requirements for the award of a Master's degree in Management from the NOVA School of Business and Economics.