987 results for Compliant parallel mechanisms


Relevance:

20.00%

Publisher:

Abstract:

Two methods were compared for determining the concentration of penetrative biomass during growth of Rhizopus oligosporus on an artificial solid substrate consisting of an inert gel and starch as the sole source of carbon and energy. The first method was based on the use of a hand microtome to cut sections of approximately 0.2- to 0.4-mm thickness parallel to the substrate surface and the determination of the glucosamine content of each slice. Using glucosamine measurements to estimate biomass concentrations proved problematic owing to the large variation in glucosamine content with mycelial age. The second was a novel method based on confocal scanning laser microscopy to estimate the fractional volume occupied by the biomass. Although it is not simple to translate fractional volumes into dry weights of hyphae, given the lack of experimentally determined conversion factors, the measurement of fractional volumes is in itself useful for characterizing fungal penetration into the substrate. Growth of penetrative biomass in the artificial model substrate took two forms: an indistinct mass in the region close to the substrate surface, and a few hyphae penetrating perpendicular to the surface in regions farther from it. The biomass profiles against depth obtained from confocal microscopy showed two linear regions on log-linear plots, possibly related to different oxygen availability at different depths within the substrate. Confocal microscopy has the potential to be a powerful tool in the investigation of fungal growth mechanisms in solid-state fermentation. © 2003 Wiley Periodicals, Inc.
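
The depth-profile analysis described above can be illustrated with a short sketch: given a thresholded confocal z-stack, compute the fraction of voxels classified as biomass in each optical section and fit straight lines to the logarithm of that fraction in two depth regions. This is a minimal illustration only; the array layout, threshold and split depth are assumptions, not values from the study.

```python
import numpy as np

def fractional_volume_profile(stack: np.ndarray, threshold: float) -> np.ndarray:
    """Fraction of voxels classified as biomass in each optical section.
    `stack` is a hypothetical (depth, rows, cols) confocal z-stack with
    slice 0 taken at the substrate surface."""
    return (stack > threshold).mean(axis=(1, 2))

def fit_log_linear_regions(profile: np.ndarray, split_index: int):
    """Fit straight lines to log(fractional volume) in two depth regions,
    mirroring the two linear regions seen on the log-linear plots."""
    depth = np.arange(len(profile))
    logs = np.log(np.clip(profile, 1e-6, None))            # avoid log(0)
    upper = np.polyfit(depth[:split_index], logs[:split_index], 1)
    lower = np.polyfit(depth[split_index:], logs[split_index:], 1)
    return upper, lower                                     # (slope, intercept) per region

# Synthetic example: dense growth near the surface, sparse hyphae deeper down.
rng = np.random.default_rng(0)
stack = (rng.random((40, 64, 64)) < np.linspace(0.6, 0.02, 40)[:, None, None]).astype(float)
profile = fractional_volume_profile(stack, threshold=0.5)
print(fit_log_linear_regions(profile, split_index=15))
```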

Relevance:

20.00%

Publisher:

Abstract:

The maintenance of arterial pressure at levels adequate to perfuse the tissues is a basic requirement for the constancy of the internal environment and for survival. The objective of the present review was to provide information about the basic reflex mechanisms responsible for the moment-to-moment regulation of the cardiovascular system. We show that this control is largely provided by the action of arterial and non-arterial reflexes that detect and correct changes in arterial pressure (baroreflex), blood volume or chemical composition (mechano- and chemosensitive cardiopulmonary reflexes), and blood-gas composition (chemoreceptor reflex). The importance of the integration of these cardiovascular reflexes is well recognized, and it is clear that processing occurs mainly in the nucleus tractus solitarii, although the mechanism is poorly understood. There are several indications that the interaction of baroreflex, chemoreflex and Bezold-Jarisch reflex inputs with the central nervous system controls the activity of autonomic preganglionic neurons through parallel afferent and efferent pathways to achieve cardiovascular homeostasis. It is surprising how little appears in the literature about the integration of these neural reflexes in cardiovascular function. Thus, our purpose was to review the interplay between the peripheral neural reflex mechanisms of arterial blood pressure and blood volume regulation in physiological and pathophysiological states. Special emphasis is placed on the experimental model of arterial hypertension induced by Nω-nitro-L-arginine methyl ester (L-NAME), in which the interplay of these three reflexes is demonstrable.

Relevance:

20.00%

Publisher:

Abstract:

In recent years the Data Mining field has seen considerable growth and consolidation, and several efforts are under way to establish standards in the area; among them are SEMMA and CRISP-DM. Both have grown as industrial standards and define a set of sequential steps intended to guide the implementation of data mining applications. This raises the question of whether there are substantial differences between them and the traditional KDD process. This paper draws a parallel between these two standards and the KDD process and examines the similarities between them.
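
One way to make the parallel concrete is a phase-by-phase alignment of the three processes. The mapping below is an illustrative, commonly cited correspondence sketched by the editor, not a table reproduced from the paper, and the grouping of phases is approximate.

```python
# Illustrative, approximate alignment of the KDD process with SEMMA and CRISP-DM.
PHASE_PARALLEL = [
    # (KDD process,                  SEMMA,      CRISP-DM)
    ("Selection",                    "Sample",   "Business / data understanding"),
    ("Pre-processing",               "Explore",  "Data understanding"),
    ("Transformation",               "Modify",   "Data preparation"),
    ("Data mining",                  "Model",    "Modeling"),
    ("Interpretation / evaluation",  "Assess",   "Evaluation"),
    ("-",                            "-",        "Deployment"),   # CRISP-DM only
]

for kdd, semma, crisp in PHASE_PARALLEL:
    print(f"{kdd:30} | {semma:8} | {crisp}")
```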

Relevance:

20.00%

Publisher:

Abstract:

Currently, power systems (PS) already accommodate a substantial penetration of distributed generation (DG) and operate in competitive environments. In the future, as a result of liberalisation and political regulation, PS will have to deal with large-scale integration of DG and other distributed energy resources (DER), such as storage, and will have to provide market agents with the means to ensure flexible and secure operation. This cannot be done with the traditional PS operational tools used today, such as the rather restricted Supervisory Control and Data Acquisition (SCADA) information systems [1]. The trend towards using local generation in the active operation of the power system requires new solutions for the data management system. The relevant standards have been developed separately over the last few years, so they need to be unified in order to obtain a common and interoperable solution. For distribution operation, the CIM models described in IEC 61968/61970 are especially relevant. In Europe, dispersed and renewable energy resources (D&RER) are mostly operated without remote control mechanisms and feed the maximum amount of available power into the grid. To improve network operation performance, the idea of virtual power plants (VPP) will become a reality, and in the future the power generation of D&RER will be scheduled with high accuracy. To realise decentralised VPP energy management, communication facilities with standardised interfaces and protocols are needed; IEC 61850 is suitable to serve as a general standard for all communication tasks in power systems [2]. The paper deals with international activities and experiences in the implementation of a new data management and communication concept in the distribution system. The difficulties in coordinating the communication and data management standards, which were developed in parallel and are not consistent with each other, are addressed first, and the ongoing unification work, taking into account the growing role of D&RER in the PS, is described. The lag in practical experience can be overcome with new tools for creating and maintaining CIM data and for simulating the IEC 61850 protocol, a prototype of which is presented in the paper. Since the origin and required accuracy of the data depend on their use (e.g. operation or planning), some remarks on the definition of the digital interface incorporated in the merging unit concept are also given from the power utility point of view. Finally, the required future work is identified.
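
As a purely illustrative sketch of the data-management idea, the snippet below models a VPP that aggregates the schedules of its D&RER units, with uncontrollable units assumed to feed their maximum available power. The class and field names are hypothetical and are not actual CIM (IEC 61968/61970) or IEC 61850 objects.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DerUnit:
    """Hypothetical, simplified record for a dispersed/renewable energy
    resource (D&RER); an illustration only, not an actual CIM class."""
    name: str
    rated_power_kw: float
    controllable: bool                       # remote control available?
    schedule_kw: List[float] = field(default_factory=list)   # planned output per time step

@dataclass
class VirtualPowerPlant:
    units: List[DerUnit]

    def aggregate_schedule(self, steps: int) -> List[float]:
        """Sum the scheduled output of all units for each time step."""
        total = [0.0] * steps
        for unit in self.units:
            plan = unit.schedule_kw if unit.controllable else [unit.rated_power_kw] * steps
            for i in range(min(steps, len(plan))):
                total[i] += plan[i]
        return total

vpp = VirtualPowerPlant([
    DerUnit("pv_roof_1", 30.0, controllable=True, schedule_kw=[10.0, 25.0, 30.0, 5.0]),
    DerUnit("chp_small", 50.0, controllable=False),   # feeds its maximum available power
])
print(vpp.aggregate_schedule(steps=4))   # [60.0, 75.0, 80.0, 55.0]
```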

Relevance:

20.00%

Publisher:

Abstract:

This synopsis summarizes the key chemical and bacteriological characteristics of β-lactams: penicillins, cephalosporins, carbapenems, monobactams and others. Particular attention is given to first- to fifth-generation cephalosporins. The review also summarizes the main resistance mechanisms to these antibiotics, focusing on those conferring resistance to broad-spectrum cephalosporins: production of emerging cephalosporinases (extended-spectrum β-lactamases and AmpC β-lactamases), target alteration (penicillin-binding proteins of methicillin-resistant Staphylococcus aureus) and membrane transporters that pump β-lactams out of the bacterial cell.
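
The three broad resistance mechanisms named in the review can be restated as a small lookup structure; the sketch below simply reorganizes the abstract's own list, and the example agents are illustrative.

```python
# The resistance mechanisms listed in the review, reorganized as a lookup table.
BETA_LACTAM_RESISTANCE = {
    "enzymatic hydrolysis": {
        "description": "production of emerging cephalosporinases",
        "examples": ["extended-spectrum β-lactamases (ESBLs)", "AmpC β-lactamases"],
    },
    "target alteration": {
        "description": "modified penicillin-binding proteins",
        "examples": ["PBPs of methicillin-resistant Staphylococcus aureus"],
    },
    "efflux": {
        "description": "membrane transporters that pump β-lactams out of the cell",
        "examples": ["multidrug efflux pumps"],
    },
}

for mechanism, info in BETA_LACTAM_RESISTANCE.items():
    print(f"{mechanism}: {info['description']}")
```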

Relevance:

20.00%

Publisher:

Abstract:

International conference with peer review: 2012 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 22-27 July 2012, Munich, Germany

Relevance:

20.00%

Publisher:

Abstract:

Supervisor: Doctor José Domingos Silva Fernandes

Relevance:

20.00%

Publisher:

Abstract:

Monitoring is a very important aspect to consider when developing real-time systems. However, it is also important to consider the impact of the monitoring mechanisms on the actual application. The use of Reflection can provide a clear separation between the real-time application and the implemented monitoring mechanisms, which can be introduced (reflected) into the underlying system without changing the application part of the code. Nevertheless, controlling the monitoring system itself is still a topic of research: the monitoring mechanisms must contain knowledge about “how to get the information out”. Therefore, this paper presents ongoing work to define a suitable strategy for monitoring real-time systems through the use of Reflection.
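
A minimal sketch of the separation the paper advocates, assuming a Python-like setting: a timing probe is attached to an existing class reflectively, by inspecting and rebinding its methods, so the application code itself is never edited. The names and the probe are illustrative; this is not the paper's actual monitoring framework.

```python
import functools
import inspect
import time

def monitored(func):
    """Wrap a function with a timing probe; the probe encapsulates
    'how to get the information out' (here: a simple print)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1e3
            print(f"[monitor] {func.__qualname__} took {elapsed_ms:.3f} ms")
    return wrapper

def reflect_monitoring(cls):
    """Introduce (reflect) the probes into an existing class without touching
    its source: inspect its public methods and rebind wrapped versions."""
    for name, member in inspect.getmembers(cls, predicate=inspect.isfunction):
        if not name.startswith("_"):
            setattr(cls, name, monitored(member))
    return cls

# Unmodified 'application' class; monitoring is added from the outside.
class Controller:
    def control_step(self, setpoint, measurement):
        return 0.5 * (setpoint - measurement)

reflect_monitoring(Controller)
Controller().control_step(10.0, 8.0)
```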

Relevance:

20.00%

Publisher:

Abstract:

In this paper we address the real-time capabilities of P-NET, a multi-master fieldbus standard based on a virtual token-passing scheme. We show how P-NET’s medium access control (MAC) protocol is able to guarantee a bounded access time for message requests. We then propose a model for implementing fixed-priority-based dispatching mechanisms at each master’s application level, thereby diminishing the impact of the first-come-first-served (FCFS) policy that P-NET uses at the data link layer. The proposed model raises several issues well known within the real-time systems community: message release jitter, pre-run-time schedulability analysis in non-pre-emptive contexts, and non-independence of tasks at the application level. We identify these issues in the proposed model and show how results available for priority-based task dispatching can be adapted to encompass priority-based message dispatching in P-NET networks.
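
The application-level dispatching idea can be sketched as a per-master priority queue that releases one message at a time into a FCFS link-layer queue. This is a schematic illustration under assumed names and a simplified release rule, not the paper's exact model or the P-NET protocol itself.

```python
import heapq
from collections import deque

class PriorityDispatcher:
    """Application-level outgoing queue for one master: requests are ordered
    by fixed priority here, and only the head is handed to the data link
    layer, whose own queue remains first-come-first-served."""
    def __init__(self):
        self._heap = []
        self._seq = 0                    # tie-breaker keeps FIFO order within a priority

    def submit(self, priority: int, message: str):
        heapq.heappush(self._heap, (priority, self._seq, message))
        self._seq += 1

    def release_next(self, link_layer_queue: deque):
        """Called when the master may queue one more request at the link layer
        (e.g. on reception of the virtual token)."""
        if self._heap:
            _, _, message = heapq.heappop(self._heap)
            link_layer_queue.append(message)   # FCFS from here on

fcfs = deque()
dispatcher = PriorityDispatcher()
dispatcher.submit(3, "log record")
dispatcher.submit(1, "alarm read")     # lower number = higher priority
dispatcher.release_next(fcfs)
print(list(fcfs))                      # ['alarm read']
```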

Relevance:

20.00%

Publisher:

Abstract:

In the past few years Tabling has emerged as a powerful logic programming model. The integration of concurrency features into the implementation of Tabling systems is demanded by the need to use recently developed tabling applications within distributed systems, where a process has to respond concurrently to several requests. Support for sharing tables among the concurrent threads of a Tabling process is a desirable feature: it preserves one of Tabling’s virtues, the re-use of computations by other threads, and it allows efficient use of the available memory. However, the incremental completion of tables that are evaluated concurrently is not a trivial problem. In this dissertation we describe the integration of concurrency mechanisms, by way of multi-threading, into a state-of-the-art Tabling and Prolog system, XSB. We begin by reviewing the main concepts of a formal description of tabled computations, called SLG resolution, and of the implementation of Tabling under the SLG-WAM, the abstract machine supported by XSB. We describe the different scheduling strategies provided by XSB and introduce some new properties of local scheduling, a scheduling strategy for SLG resolution. We then describe our implementation work: the process of integrating multi-threading into a Prolog system supporting Tabling, without addressing the problem of shared tables, together with the trade-offs and implementation decisions involved. Next, we describe an optimistic algorithm for the concurrent sharing of completed tables, Shared Completed Tables, which allows tables to be shared without incurring deadlocks under local scheduling. This method relies on the execution properties of local scheduling and includes full support for negation. We provide a theoretical framework and discuss the implementation’s correctness and complexity. After that, we describe a method for sharing tables among threads that allows parallelism in the computation of inter-dependent subgoals, which we name Concurrent Completion, and we informally argue for its correctness. We give detailed performance measurements of the multi-threaded XSB system over a variety of machines and operating systems, for both the Shared Completed Tables and the Concurrent Completion implementations, focusing on the overhead relative to the sequential engine and on the scalability of the system. We finish with a comparison of XSB with other multi-threaded Prolog systems and compare our approach to concurrent tabling with parallel and distributed methods for the evaluation of tabling. Finally, we identify future research directions.
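
The Shared Completed Tables idea can be caricatured outside Prolog as follows: a thread that needs a tabled subgoal either reuses an already completed (published) table or evaluates the subgoal privately and publishes the finished table under a lock. This is only a schematic sketch with invented names; it does not reflect the SLG-WAM internals.

```python
import threading

class SharedCompletedTables:
    """Schematic illustration of sharing only *completed* tables between
    threads: answers become visible to other threads when the producing
    thread publishes the finished table (no sharing while incomplete)."""
    def __init__(self):
        self._completed = {}             # subgoal -> tuple of answers
        self._lock = threading.Lock()

    def answers(self, subgoal, evaluate):
        with self._lock:
            table = self._completed.get(subgoal)
        if table is not None:
            return table                          # re-use another thread's computation
        answers = tuple(evaluate(subgoal))        # evaluate privately to completion
        with self._lock:
            # first thread to finish publishes; later ones just reuse its table
            return self._completed.setdefault(subgoal, answers)

tables = SharedCompletedTables()
print(tables.answers("path(a,X)", lambda g: ["path(a,b)", "path(a,c)"]))
print(tables.answers("path(a,X)", lambda g: ["(not recomputed)"]))
```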

Relevance:

20.00%

Publisher:

Abstract:

This paper proposes a global multiprocessor scheduling algorithm for the Linux kernel that combines the global EDF scheduler with a priority-aware work-stealing load balancing scheme, enabling parallel real-time tasks to be executed on more than one processor at a given time instant. We argue that some priority inversion may actually be acceptable, provided it helps reduce contention, communication, synchronisation and coordination between parallel threads, while still guaranteeing the system’s expected predictability. Experimental results demonstrate the low scheduling overhead of the proposed approach compared to an existing real-time deadline-oriented scheduling class for the Linux kernel.
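
The priority-aware part of the load balancer can be illustrated by the victim-selection rule alone: an idle CPU steals from the run queue whose stealable job has the earliest absolute deadline, rather than from a random victim. The sketch below uses invented data structures and ignores all kernel-level details.

```python
def pick_steal_victim(run_queues, idle_cpu):
    """Priority-aware stealing: prefer the CPU whose *stealable* jobs (here,
    everything except the job assumed to be running, jobs[0]) include the
    earliest absolute deadline. `run_queues` maps cpu -> [(deadline, task), ...]."""
    candidates = [
        (min(d for d, _ in jobs[1:]), cpu)
        for cpu, jobs in run_queues.items()
        if cpu != idle_cpu and len(jobs) > 1      # the victim keeps its running job
    ]
    return min(candidates)[1] if candidates else None

queues = {0: [(10, "t0"), (40, "t2")], 1: [(15, "t1"), (25, "t3")], 2: []}
print(pick_steal_victim(queues, idle_cpu=2))      # -> 1 (stealable deadline 25 < 40)
```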

Relevance:

20.00%

Publisher:

Abstract:

Dynamic parallel scheduling using work-stealing has gained popularity in academia and industry for its good performance, ease of implementation and theoretical bounds on space and time. Cores treat their own double-ended queues (deques) as a stack, pushing and popping threads at the bottom, but treat the deque of another randomly selected busy core as a queue, stealing threads only from the top, whenever they are idle. However, this standard approach cannot be directly applied to real-time systems, where the importance of parallelising tasks is increasing due to the limitations of multiprocessor scheduling theory regarding parallelism. Using one deque per core is an obvious source of priority inversion, since high-priority tasks may end up enqueued after lower-priority tasks; this can lead to deadline misses because, in that case, the lower-priority tasks are the ones selected when a stealing operation occurs. Our proposal is to replace the single non-priority deque of work-stealing with ordered per-processor priority deques of ready threads. The scheduling algorithm starts with a single deque per core but, unlike traditional work-stealing, the total number of deques in the system may now exceed the number of processors. Instead of stealing randomly, cores steal from the highest-priority deque.
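
A compact sketch of the proposed structure, with all synchronisation omitted: each core keeps one deque per priority level (so the total number of deques can exceed the number of cores), owners push and pop at the bottom, and an idle core steals from the top of the highest-priority non-empty deque of another core instead of choosing a victim at random. Names and data layout are illustrative, not the paper's implementation.

```python
from collections import deque

class PriorityWorkStealing:
    """Per-core priority deques: the owner treats its deques as stacks
    (push/pop at the bottom); an idle core steals from the top of the
    highest-priority non-empty deque of another core."""
    def __init__(self, n_cores):
        self.deques = {core: {} for core in range(n_cores)}   # core -> {priority: deque}

    def push(self, core, priority, thread):
        self.deques[core].setdefault(priority, deque()).append(thread)   # bottom

    def pop_own(self, core):
        own = {p: dq for p, dq in self.deques[core].items() if dq}
        if own:
            return own[min(own)].pop()            # bottom of highest-priority deque (LIFO)
        return self.steal(core)

    def steal(self, thief):
        candidates = [(p, core) for core, per_prio in self.deques.items()
                      if core != thief
                      for p, dq in per_prio.items() if dq]
        if not candidates:
            return None
        priority, victim = min(candidates)        # lowest number = highest priority
        return self.deques[victim][priority].popleft()        # top of that deque (FIFO)

ws = PriorityWorkStealing(2)
ws.push(0, priority=5, thread="t_low")
ws.push(0, priority=1, thread="t_high")
print(ws.pop_own(1))   # core 1 is idle and steals the high-priority thread -> 't_high'
```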