Abstract:
This work presents a laboratory study aimed at identifying the loss of properties of timber from old buildings when degraded by woodworm. Old pine and poplar wood, aged between 100 and 200 years, was studied. As an initial approach, the characteristics of the wood and of the Pombaline and "gaioleiro" buildings in which it was widely used are presented, not only as a finishing element but mainly as a structural element. The various factors that lead to wood degradation are presented, as well as some methods for assessing and diagnosing the state of conservation of the wooden elements found in buildings. A study of the large and small woodworm, their life cycle and the way they degrade wood is also developed, serving as a basis for the laboratory work. In the experimental phase, the degradation state of old-wood specimens measuring 30 x 30 x 90 mm was assessed and then correlated with their compressive strength, modulus of elasticity and strain in the plastic phase. In this way, the aim is to study how the different states of woodworm degradation influence the mechanical characteristics of wooden members.
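As context for the mechanical quantities mentioned above, the following is a minimal sketch of the standard definitions for a prismatic compression specimen with the stated 30 x 30 mm cross-section; the symbols are the conventional ones and no numerical values from the study are implied.

```latex
% Generic relations for a 30 x 30 x 90 mm compression specimen
% (standard definitions only; illustrative, not results from the study)
A = 30\,\mathrm{mm} \times 30\,\mathrm{mm} = 900\,\mathrm{mm}^2,
\qquad
\sigma_c = \frac{F_{\max}}{A},
\qquad
E = \frac{\Delta\sigma}{\Delta\varepsilon}\Big|_{\text{elastic range}}
```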
Abstract:
Both La peste and En attendant Godot deal with the absurdity of the human condition and the quest for salvation. Life presents itself without a future, as a chain of painful scenes that resemble one another. Humanity desperately wants to attain happiness and preserve its dignity, but, despite every effort, nothing changes: such is the terrible stability of the world. Almost all the characters lack identity. They are prisoners of their own existence; puppets arbitrarily manipulated by fate; victims of the gratuitousness of divine grace. The characters, time, space, language, the silences: everything serves to express the absurdity of the human condition. In these two works, the air is full of cries of revolt. Complaints become the natural language of suffering humanity because, after all, neither Godot nor salvation ever arrives.
Abstract:
Bisphenol A (BPA) is an endocrine disrupting chemical (EDC) whose migration from food packaging is recognized worldwide. However, the actual extent of overall food contamination and its consequences remain largely unknown. Among humans, children's exposure to BPA has been emphasized because of the immaturity of their biological systems. The main aim of this study was to assess the reproductive impact of BPA leached from commercially available plastic containers used in or related to child nutrition, by performing ecotoxicological tests with the biomonitoring species Daphnia magna. Acute and chronic tests, as well as single-generation and multigenerational tests, were performed. Migration of BPA from several baby bottles and other plastic containers, evaluated by GC-MS, indicated that a broader range of foodstuffs may become contaminated when packed in plastics. The results of ecotoxicological tests performed with defined concentrations of BPA agreed with the literature, although precocious maturity of daphnids was detected at 3.0 mg/L. Curiously, an increased reproductive output (neonates per female) was observed when daphnids were bred in the polycarbonate (PC) containers (145.1±4.3 % to 264.7±3.8 %), in both single-generation and multigenerational tests, in comparison with the negative control group (100.3±1.6 %). A strongly correlated, dose-dependent ecotoxicological effect was observed, providing evidence that BPA leached from plastic food packaging materials acts as a functional estrogen in vivo at very low concentrations. In contrast, neonate production by daphnids cultured in polypropylene and non-PC bottles was slightly but not significantly enhanced (92.5±2.0 % to 118.8±1.8 %). Multigenerational tests also revealed a magnification of the adverse effects, not only on fecundity but also on mortality, which represents a worrying trend for organisms chronically exposed to xenoestrogens over many generations. Two plausible explanations for the observed results can be given: a non-monotonic dose–response relationship or a mixture toxicity effect.
Abstract:
Soil vapor extraction (SVE) and bioremediation (BR) are two of the most common soil remediation technologies. Their application is widespread; however, both present limitations, namely the low efficiency of SVE in organic soils and the long remediation times of some BR processes. This work studied the combination of these two technologies, in order to verify whether the legal clean-up goals could be achieved in remediation projects involving seven different simulated soils, separately contaminated with toluene and xylene. The remediations consisted of the application of SVE followed by biostimulation. The results show that the combination of the two technologies is effective and achieves the clean-up goals imposed by Spanish legislation. Under the experimental conditions used in this work, SVE alone is sufficient for the remediation of soils, contaminated separately with toluene and xylene, with organic matter contents (OMC) below 4 %. In soils with higher OMC, using BR as a complementary technology, once the concentration of contaminant in the soil gas phase reaches values near 1 mg/L, allows the clean-up goals to be achieved. The OMC proved to be a key parameter: it hindered SVE due to adsorption phenomena but enhanced the BR process because it acted as a source of microorganisms and nutrients.
Abstract:
In Distributed Computer-Controlled Systems (DCCS), special emphasis must be given to the communication infrastructure, which must provide timely and reliable communication services. CAN networks are usually suitable for supporting small-scale DCCS. However, they are known to present some reliability problems, which can lead to unreliable behaviour of the supported applications. In this paper, an atomic multicast protocol for CAN networks is proposed. This protocol exploits the synchronous properties of CAN, providing a timely and reliable service to the supported applications. The implementation of the protocol in Ada, on top of the Ada version of Real-Time Linux, is presented and used to demonstrate the advantages and disadvantages of the platform for supporting reliable communications in DCCS.
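As a rough illustration of the kind of deadline-based ordering such a protocol relies on (not the protocol proposed in the paper, nor its Ada implementation), the following C sketch buffers received frames and delivers them in identifier order once a delivery deadline has elapsed; all names and timing values are hypothetical.

```c
/* Generic sketch: receivers buffer broadcast messages and deliver them, in
 * identifier order, only once their delivery deadline has elapsed, so all
 * correct nodes deliver in the same order. Illustrative only. */
#include <stdio.h>
#include <stdlib.h>

#define MAX_PENDING 32

typedef struct {
    unsigned id;          /* CAN identifier, also used as the ordering key   */
    double   deadline_ms; /* time after which delivery is considered stable  */
    char     payload[8];  /* CAN data field (up to 8 bytes)                  */
} msg_t;

static msg_t pending[MAX_PENDING];
static int   n_pending = 0;

/* Buffer a received frame until its delivery deadline expires. */
static void on_receive(msg_t m)
{
    if (n_pending < MAX_PENDING)
        pending[n_pending++] = m;
}

static int by_id(const void *a, const void *b)
{
    const msg_t *x = a, *y = b;
    return (x->id > y->id) - (x->id < y->id);
}

/* Deliver, in identifier order, every buffered message whose deadline has
 * passed; identical input at every node yields identical delivery order. */
static void deliver_stable(double now_ms)
{
    qsort(pending, n_pending, sizeof(msg_t), by_id);
    int kept = 0;
    for (int i = 0; i < n_pending; i++) {
        if (pending[i].deadline_ms <= now_ms)
            printf("deliver id=0x%X payload=%s\n", pending[i].id, pending[i].payload);
        else
            pending[kept++] = pending[i];
    }
    n_pending = kept;
}

int main(void)
{
    on_receive((msg_t){ .id = 0x120, .deadline_ms = 5.0, .payload = "cmd-b" });
    on_receive((msg_t){ .id = 0x100, .deadline_ms = 5.0, .payload = "cmd-a" });
    deliver_stable(2.0);  /* nothing stable yet */
    deliver_stable(6.0);  /* delivers 0x100 then 0x120 at every node */
    return 0;
}
```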
Abstract:
In this paper, we present some of the fault tolerance management mechanisms being implemented in the Multi-μ architecture, namely its support for replica non-determinism. In this architecture, fault tolerance is achieved by node active replication, with software-based replica management and transparent fault tolerance algorithms. A software layer implemented between the application and the real-time kernel, the Fault Tolerance Manager (FTManager), is responsible for the transparent incorporation of the fault tolerance mechanisms. The active replication model can be implemented either by imposing replica determinism or by keeping replica consistency at critical points, by means of interactive agreement mechanisms. One of the goals of the Multi-μ architecture is to identify such critical points, relieving the underlying system from performing the interactive agreement at every Ada dispatching point.
Abstract:
This paper presents an architecture (Multi-μ) being implemented to study and develop software-based fault tolerance mechanisms for Real-Time Systems, using the Ada language (Ada 95) and Commercial Off-The-Shelf (COTS) components. Several issues regarding fault tolerance are presented, and mechanisms to achieve fault tolerance through software-based active replication in Ada 95 are discussed. The Multi-μ architecture, based on a specifically proposed Fault Tolerance Manager (FTManager), is then described. Finally, some considerations are made about the work carried out so far and the essential future developments.
Abstract:
Embedded real-time applications increasingly present high computation requirements, which need to be completed within specific deadlines but follow highly variable patterns, depending on the set of data available at a given instant. The current trend towards parallel processing in the embedded domain provides higher processing power; however, it does not address the variability of the processing pattern. Dimensioning each device for its worst-case scenario implies a lower average utilization and an increase in processing capacity that is available in the overall system but unusable. A solution to this problem is to extend the parallel execution of applications, allowing networked nodes to distribute their workload, in peak situations, to neighbouring nodes. In this context, this report proposes a framework to develop parallel and distributed real-time embedded applications that transparently uses OpenMP and the Message Passing Interface (MPI), within a programming model based on OpenMP. The technical report also devises an integrated timing model, which enables structured reasoning about the timing behaviour of these hybrid architectures.
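The following C sketch illustrates the general hybrid OpenMP + MPI style described above: each node processes its local workload with OpenMP threads and, when the load exceeds a threshold, ships the surplus items to a neighbour rank via MPI. It is a generic illustration of the programming model, not the framework proposed in the report; the capacity threshold and workload layout are hypothetical.

```c
/* Hybrid OpenMP + MPI load-distribution sketch (illustrative only). */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define LOCAL_MAX 64   /* hypothetical per-node capacity for this activation */

static double process(double x) { return x * x; }  /* stand-in for real work */

int main(int argc, char **argv)
{
    int provided, rank, size;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double work[128];
    int n = (rank == 0) ? 128 : 0;          /* rank 0 starts with a peak load */
    for (int i = 0; i < n; i++) work[i] = i;

    int surplus = (n > LOCAL_MAX) ? n - LOCAL_MAX : 0;
    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;

    if (rank == 0 && size > 1 && surplus > 0) {      /* offload the surplus  */
        MPI_Send(work + LOCAL_MAX, surplus, MPI_DOUBLE, right, 0, MPI_COMM_WORLD);
        n = LOCAL_MAX;
    } else if (rank == 1 && size > 1) {              /* neighbour accepts it */
        MPI_Status st;
        MPI_Recv(work, 128, MPI_DOUBLE, left, 0, MPI_COMM_WORLD, &st);
        MPI_Get_count(&st, MPI_DOUBLE, &n);
    }

    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)        /* local parallel phase */
    for (int i = 0; i < n; i++)
        sum += process(work[i]);

    printf("rank %d processed %d items, partial result %.1f\n", rank, n, sum);
    MPI_Finalize();
    return 0;
}
```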
Abstract:
High-level parallel languages offer a simple way for application programmers to specify parallelism in a form that easily scales with problem size, leaving the scheduling of tasks onto processors to be performed at runtime. Therefore, if the underlying system cannot efficiently execute those applications on the available cores, the benefits will be lost. In this paper, we consider how to schedule highly heterogeneous parallel applications that require real-time performance guarantees on multicore processors. The paper proposes a novel scheduling approach that combines the global Earliest Deadline First (EDF) scheduler with a priority-aware work-stealing load balancing scheme, enabling parallel real-time tasks to be executed on more than one processor at a given time instant. Experimental results demonstrate the better scalability and lower scheduling overhead of the proposed approach compared to an existing real-time deadline-oriented scheduling class for the Linux kernel.
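The C sketch below illustrates the flavour of a priority-aware stealing decision: an idle worker steals from the victim whose oldest queued task has the earliest absolute deadline, rather than from a random victim. It is single-threaded and deliberately simplified; the data structures and names are hypothetical, not the paper's runtime.

```c
/* Priority-aware work-stealing victim selection (illustrative sketch). */
#include <stdio.h>
#include <limits.h>

#define NWORKERS 4
#define QCAP     8

typedef struct { long deadline; int work_id; } task_t;

typedef struct {
    task_t q[QCAP];
    int head, tail;          /* owner pushes/pops at tail, thieves steal at head */
} deque_t;

static deque_t dq[NWORKERS];

static void push(int w, task_t t) { dq[w].q[dq[w].tail++] = t; }
static int  empty(int w)          { return dq[w].head == dq[w].tail; }

/* Pick the victim whose earliest-deadline pending task is most urgent. */
static int pick_victim(int thief)
{
    int victim = -1;
    long best = LONG_MAX;
    for (int w = 0; w < NWORKERS; w++) {
        if (w == thief || empty(w)) continue;
        long d = dq[w].q[dq[w].head].deadline;   /* oldest task at the head */
        if (d < best) { best = d; victim = w; }
    }
    return victim;
}

static task_t steal(int victim) { return dq[victim].q[dq[victim].head++]; }

int main(void)
{
    push(0, (task_t){ .deadline = 120, .work_id = 1 });
    push(1, (task_t){ .deadline =  40, .work_id = 2 });   /* most urgent */
    push(2, (task_t){ .deadline =  90, .work_id = 3 });

    int thief = 3;                       /* worker 3 is idle */
    int victim = pick_victim(thief);
    if (victim >= 0) {
        task_t t = steal(victim);
        printf("worker %d stole task %d (deadline %ld) from worker %d\n",
               thief, t.work_id, t.deadline, victim);
    }
    return 0;
}
```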
Abstract:
Smartphones and other internet-enabled devices are now common in our everyday life; thus, unsurprisingly, a current trend is to adapt desktop PC applications to execute on them. However, since most of these applications have quality of service (QoS) requirements, their execution on resource-constrained mobile devices presents several challenges. One solution to support more demanding applications is to offload some of the applications' services to surrogate devices nearby. Therefore, in this paper, we propose an adaptable offloading mechanism which takes into account the QoS requirements of the application being executed (particularly its real-time requirements), whilst allowing services to be offloaded to several surrogate nodes. We also present how the proposed computing model can be implemented in an Android environment.
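To give a feel for the kind of QoS-aware decision involved, the C sketch below offloads a service only if the estimated remote response time (round-trip network delay plus remote execution) both meets the service's deadline and beats local execution. All figures and names are hypothetical; this is not the paper's Android implementation.

```c
/* Illustrative QoS-aware offloading decision (not the proposed mechanism). */
#include <stdio.h>

typedef struct {
    double local_exec_ms;    /* estimated execution time on the device    */
    double remote_exec_ms;   /* estimated execution time on the surrogate */
    double net_rtt_ms;       /* measured round-trip time to the surrogate */
    double deadline_ms;      /* real-time requirement of the service      */
} service_t;

static int should_offload(const service_t *s)
{
    double remote_resp = s->net_rtt_ms + s->remote_exec_ms;
    if (remote_resp > s->deadline_ms) return 0;     /* QoS would be violated  */
    return remote_resp < s->local_exec_ms;          /* offload only if faster */
}

int main(void)
{
    service_t video = { .local_exec_ms = 80, .remote_exec_ms = 20,
                        .net_rtt_ms = 15, .deadline_ms = 60 };
    printf("offload video decoding: %s\n", should_offload(&video) ? "yes" : "no");
    return 0;
}
```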
Abstract:
The recent trend in chip architectures towards higher numbers of heterogeneous cores, with non-uniform memory and non-coherent caches, brings renewed attention to the use of Software Transactional Memory (STM) as a fundamental building block for developing parallel applications. Nevertheless, although STM promises to ease concurrent and parallel software development, it relies on the possibility of aborting conflicting transactions to maintain data consistency, which impacts the responsiveness and timing guarantees required by embedded real-time systems. In these systems, contention delays must be (efficiently) limited so that the response times of tasks executing transactions are upper-bounded and task sets can be feasibly scheduled. In this paper, we assess the use of STM in the development of embedded real-time software, arguing that the amount of contention can be reduced if read-only transactions access recent consistent data snapshots, progressing in a wait-free manner. We show how the required number of versions of a shared object can be calculated for a given task set. We also outline an algorithm to manage conflicts between update transactions that prevents starvation.
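The C sketch below illustrates the multi-version idea in its simplest form: a writer keeps the last K versions of a shared object in a ring, so a read-only transaction can always read a recent consistent snapshot without aborting or blocking the writer. K would be derived from the task set as the paper describes; here it is a hypothetical constant, and the code is a single-threaded illustration of the data layout, not a full STM.

```c
/* Multi-version shared object with wait-free read-only snapshots (sketch). */
#include <stdio.h>

#define K 4                        /* number of retained versions (assumed) */

typedef struct { long timestamp; int value; } version_t;

typedef struct {
    version_t ring[K];
    int latest;                    /* index of the newest published version */
} versioned_obj_t;

/* Writer: publish a new version, overwriting the oldest slot. */
static void publish(versioned_obj_t *o, long ts, int value)
{
    int next = (o->latest + 1) % K;
    o->ring[next] = (version_t){ .timestamp = ts, .value = value };
    o->latest = next;              /* single atomic index update in a real STM */
}

/* Read-only transaction: take the newest version no later than its start time. */
static version_t read_snapshot(const versioned_obj_t *o, long start_ts)
{
    int i = o->latest;
    for (int n = 0; n < K; n++) {
        if (o->ring[i].timestamp <= start_ts) return o->ring[i];
        i = (i - 1 + K) % K;
    }
    return o->ring[(o->latest + 1) % K];  /* snapshot aged out: oldest retained */
}

int main(void)
{
    versioned_obj_t obj = { .ring = { { .timestamp = 0, .value = 0 } }, .latest = 0 };
    publish(&obj, 10, 100);
    publish(&obj, 20, 200);
    version_t v = read_snapshot(&obj, 15);   /* reader that started at t=15 */
    printf("reader sees value %d (published at t=%ld)\n", v.value, v.timestamp);
    return 0;
}
```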
Abstract:
Over the last three decades, computer architects have been able to increase the performance of single processors by, e.g., increasing clock speed, introducing cache memories, and exploiting instruction-level parallelism. However, because of power consumption and heat dissipation constraints, this trend is coming to an end. In recent times, hardware engineers have instead moved to new chip architectures with multiple processor cores on a single chip. With multi-core processors, applications can complete more total work than with one core alone. To take advantage of these processors, parallel programming models have been proposed as promising solutions for using multi-core processors more effectively. This paper discusses some of the existing models and frameworks for parallel programming, leading to the outline of a draft parallel programming model for Ada.
Abstract:
This paper proposes a one-step decentralised coordination model based on an effective feedback mechanism to reduce the complexity of the interactions needed among the interdependent nodes of a cooperative distributed system until a collective adaptation behaviour is determined. Positive feedback is used to reinforce the selection of the new desired global service solution, while negative feedback discourages nodes from acting in a greedy fashion, as this adversely impacts the service levels provided at neighbouring nodes. The reduced complexity and overhead of the proposed decentralised coordination model are validated through extensive evaluations.
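As a very rough illustration of this kind of feedback iteration (not the paper's coordination model), the C sketch below has each node propose a service level, receive positive feedback pulling it toward the desired global level and negative feedback penalising the load it pushes onto its neighbours, and adjust until the levels stop changing. The update rule and constants are hypothetical.

```c
/* Feedback-driven decentralised adjustment of per-node service levels (sketch). */
#include <stdio.h>
#include <math.h>

#define N     4
#define GAIN  0.3

int main(void)
{
    double level[N] = { 0.9, 0.2, 0.8, 0.1 };    /* initial (greedy) proposals   */
    double target   = 0.5;                       /* desired global service level */

    for (int it = 0; it < 50; it++) {
        double max_change = 0.0;
        for (int i = 0; i < N; i++) {
            double left  = level[(i - 1 + N) % N];
            double right = level[(i + 1) % N];
            double positive = target - level[i];               /* pull toward goal */
            double negative = level[i] - 0.5 * (left + right); /* penalise greed   */
            double delta = GAIN * (positive - 0.5 * negative);
            level[i] += delta;
            if (fabs(delta) > max_change) max_change = fabs(delta);
        }
        if (max_change < 1e-4) {                 /* collective behaviour settled */
            printf("converged after %d iterations\n", it + 1);
            break;
        }
    }
    for (int i = 0; i < N; i++)
        printf("node %d service level: %.3f\n", i, level[i]);
    return 0;
}
```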
Abstract:
Mobile applications are becoming increasingly complex and making heavier demands on local system resources. Moreover, mobile systems are nowadays more open, allowing users to add more and more applications, including third-party ones. In this perspective, it is increasingly expected that users will want to execute on their devices applications whose demands exceed the currently available resources. It is therefore important to provide frameworks which allow applications to benefit from resources available on other nodes, and which are capable of migrating some or all of their services to other nodes, depending on the user's needs. These requirements are even more stringent when users want to execute Quality of Service (QoS)-aware applications, such as voice or video. The resources required to guarantee the QoS levels demanded by an application can vary with time; consequently, applications should be able to reconfigure themselves. This paper proposes a QoS-aware, service-based framework able to support distributed, migration-capable, QoS-enabled applications on top of the Android operating system.
Abstract:
This paper proposes a dynamic scheduler that supports the coexistence of guaranteed and non-guaranteed bandwidth servers in order to handle overloads of soft tasks efficiently, by making additional capacity available from two sources: (i) residual capacity allocated but left unused when jobs complete in less than their budgeted execution time; and (ii) capacity stolen from inactive non-isolated servers used to schedule best-effort jobs. The effectiveness of the proposed approach in reducing the mean tardiness of periodic jobs is demonstrated through extensive simulations. The achieved results become even more significant when tasks' computation times have a large variance.
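The C sketch below shows, in simplified form, the two reclaiming sources named above: residual budget left by a job that finished early is handed to an overloaded soft server, which may additionally steal budget from servers that are currently inactive and marked non-isolated. A single scheduling decision is shown; the names, fields and values are hypothetical, not the proposed scheduler itself.

```c
/* Residual-capacity reclaiming plus capacity stealing (illustrative sketch). */
#include <stdio.h>

typedef struct {
    const char *name;
    double budget;      /* remaining capacity in this period           */
    int    active;      /* has pending jobs                            */
    int    isolated;    /* isolated servers never lend their capacity  */
} server_t;

int main(void)
{
    server_t servers[] = {
        { "hard-A", 2.0, 1, 1 },   /* guaranteed, isolated               */
        { "soft-B", 0.0, 1, 0 },   /* overloaded soft server, needs more */
        { "be-C",   1.5, 0, 0 },   /* inactive best-effort, non-isolated */
    };
    double residual = 0.8;         /* job on hard-A finished early       */
    server_t *needy = &servers[1];

    /* Source (i): hand the residual capacity to the overloaded server.  */
    needy->budget += residual;
    residual = 0.0;

    /* Source (ii): steal from inactive, non-isolated servers.           */
    for (int i = 0; i < 3; i++) {
        server_t *s = &servers[i];
        if (s != needy && !s->active && !s->isolated && s->budget > 0.0) {
            needy->budget += s->budget;
            printf("stole %.1f units from %s\n", s->budget, s->name);
            s->budget = 0.0;
        }
    }
    printf("%s can now execute for %.1f extra units this period\n",
           needy->name, needy->budget);
    return 0;
}
```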