Abstract:
The prediction of the time and efficiency of the remediation of contaminated soils using soil vapor extraction remains a difficult challenge for the scientific community and for consultants. This work reports the development of multiple linear regression and artificial neural network models to predict the remediation time and efficiency of soil vapor extractions performed in soils contaminated separately with benzene, toluene, ethylbenzene, xylene, trichloroethylene, and perchloroethylene. The results demonstrated that the artificial neural network approach performs better than the multiple linear regression models. The artificial neural network model allowed an accurate prediction of remediation time and efficiency based only on soil and pollutant characteristics, thereby allowing a simple and quick preliminary evaluation of the viability of the process.
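As an illustration of the kind of comparison described above, a minimal sketch in Python with synthetic data and hypothetical soil/pollutant descriptors (not the study's actual dataset, features, or network architecture):

    # Sketch: compare multiple linear regression with a small neural network
    # for predicting a remediation outcome from soil/pollutant descriptors.
    # Feature names and data are synthetic placeholders, not the study's data.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    n = 200
    X = rng.uniform(size=(n, 4))   # e.g. porosity, water content, organic matter, vapor pressure
    y = 5 + 3*X[:, 0] - 2*X[:, 1]**2 + X[:, 2]*X[:, 3] + rng.normal(0, 0.1, n)  # nonlinear target

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    mlr = LinearRegression().fit(X_tr, y_tr)
    ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X_tr, y_tr)

    print("MLR R^2:", r2_score(y_te, mlr.predict(X_te)))
    print("ANN R^2:", r2_score(y_te, ann.predict(X_te)))

On such nonlinear synthetic data the neural network typically attains the higher R^2, mirroring the comparison reported in the abstract.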
Abstract:
The yeast Saccharomyces cerevisiae is a useful model organism for studying lead (Pb) toxicity. Yeast cells of a laboratory S. cerevisiae strain (WT strain) were incubated with Pb concentrations up to 1,000 μmol/l for 3 h. Cells exposed to Pb lost proliferation capacity without damage to the cell membrane, and they accumulated intracellular superoxide anion (O2•−) and hydrogen peroxide (H2O2). The involvement of the mitochondrial electron transport chain (ETC) in the generation of reactive oxygen species (ROS) induced by Pb was evaluated. For this purpose, an isogenic derivative ρ0 strain, lacking mitochondrial DNA, was used. The ρ0 strain, without respiratory competence, displayed a lower intracellular ROS accumulation and a higher resistance to Pb compared to the WT strain. The kinetic study of ROS generation in yeast cells exposed to Pb showed that the production of O2•− precedes the accumulation of H2O2, which is compatible with the leakage of electrons from the mitochondrial ETC. Yeast cells exposed to Pb displayed mutations at the mitochondrial DNA level. This is most likely a consequence of oxidative stress. In conclusion, mitochondria are an important source of Pb-induced ROS and, simultaneously, one of the targets of its toxicity.
Abstract:
Siderophore production by Bacillus megaterium was detected in an iron-deficient culture medium during the exponential growth phase, prior to sporulation, in the presence of glucose; these results suggest that the onset of siderophore production did not require glucose depletion and was not related to sporulation. Siderophore production by B. megaterium was affected by the carbon source used. Growth on glycerol promoted the highest siderophore production (1,182 μmol g−1 dry weight biomass); the opposite effect was observed in the presence of mannose (251 μmol g−1 dry weight biomass). Growth in the presence of fructose, galactose, glucose, lactose, maltose, or sucrose yielded similar siderophore concentrations (546–842 μmol g−1 dry weight biomass). Aeration had a positive effect on siderophore production. Incubation of B. megaterium under static conditions delayed and reduced growth and siderophore production, compared with incubation under stirred conditions.
Abstract:
Lead is an important environmental pollutant. The role of the vacuole in Pb detoxification was studied using a vacuolar protein sorting mutant strain (vps16Δ), belonging to class C mutants. Cells with a disrupted VPS16 gene did not display a detectable vacuolar-like structure. Based on the loss of cell proliferation capacity, it was found that cells from the vps16Δ mutant exhibited hypersensitivity to Pb-induced toxicity compared to the wild-type (WT) strain. The function of the vacuolar H+-ATPase (V-ATPase) in Pb detoxification was evaluated using mutants with structurally normal vacuoles but defective in subunits of the catalytic domain (vma1Δ or vma2Δ) or the membrane domain (vph1Δ or vma3Δ) of V-ATPase. All mutants tested lacking a functional V-ATPase displayed an increased susceptibility to Pb compared to cells from the WT strain. Modification of vacuolar morphology in Pb-exposed cells was visualized using a Vma2p-GFP strain. Treatment of yeast cells with Pb caused the fusion of medium-sized vacuolar lobes into one enlarged vacuole. In conclusion, it was found that the vacuole plays an important role in the detoxification of Pb in Saccharomyces cerevisiae; in addition, a functional V-ATPase was required for Pb compartmentalization.
Abstract:
Earthquakes are associated with negative events, such as large numbers of casualties, destruction of buildings and infrastructures, or the emergence of tsunamis. In this paper, we apply Multidimensional Scaling (MDS) analysis to earthquake data. MDS is a set of techniques that produce spatial or geometric representations of complex objects, such that objects perceived to be similar (or distinct) in some sense are placed nearby (or distant) on the MDS maps. The interpretation of the charts is based on the resulting clusters, since MDS produces a different locus for each similarity measure. In this study, over three million seismic occurrences, covering the period from January 1, 1904 up to March 14, 2012, are analyzed. The events, characterized by their magnitude and spatiotemporal distributions, are divided into groups, either according to the Flinn–Engdahl seismic regions of the Earth or using a rectangular grid based on latitude and longitude coordinates. Space-time and space-frequency correlation indices are proposed to quantify the similarities among events. MDS has the advantage of avoiding sensitivity to the non-uniform spatial distribution of seismic data, resulting from poorly instrumented areas, and is well suited for assessing the dynamics of complex systems. MDS maps prove to be an intuitive and useful visual representation of the complex relationships among seismic events, which may not be perceived on traditional geographic maps. Therefore, MDS constitutes a valid alternative to classic visualization tools for understanding the global behavior of earthquakes.
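A minimal illustration of the MDS step itself, assuming a precomputed dissimilarity matrix between groups of events (the correlation indices proposed in the paper are not reproduced here; the matrix is a toy example):

    # Sketch: embed groups of seismic events in 2-D from a precomputed
    # dissimilarity matrix, so that similar groups end up close together.
    import numpy as np
    from sklearn.manifold import MDS

    # toy symmetric dissimilarity matrix for 4 regions (zero diagonal)
    D = np.array([[0.0, 0.2, 0.7, 0.9],
                  [0.2, 0.0, 0.6, 0.8],
                  [0.7, 0.6, 0.0, 0.3],
                  [0.9, 0.8, 0.3, 0.0]])

    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(D)   # one 2-D point per region
    print(coords)                   # nearby points correspond to similar regions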
Abstract:
We study exotic patterns appearing in a network of coupled Chen oscillators. Namely, we consider a network of two rings coupled through a “buffer” cell, with Z3×Z5 symmetry group. Numerical simulations of the network reveal steady states, rotating waves in one ring and quasiperiodic behavior in the other, and chaotic states in the two rings, to name a few. The different patterns seem to arise through a sequence of Hopf, period-doubling, and period-halving bifurcations. The network architecture seems to explain certain observed features, such as the equilibria and the rotating waves, whereas the properties of the chaotic oscillator may explain others, such as the quasiperiodic and chaotic states. We use XPPAUT and MATLAB to compute the relevant states numerically.
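For reference, a single Chen oscillator can be integrated numerically as below, using the classical parameter values a=35, b=3, c=28; the ring-and-buffer coupling studied in the paper is not reproduced here:

    # Sketch: numerical integration of one Chen oscillator
    #   dx/dt = a (y - x)
    #   dy/dt = (c - a) x - x z + c y
    #   dz/dt = x y - b z
    from scipy.integrate import solve_ivp

    a, b, c = 35.0, 3.0, 28.0        # classical chaotic parameter set

    def chen(t, s):
        x, y, z = s
        return [a*(y - x), (c - a)*x - x*z + c*y, x*y - b*z]

    sol = solve_ivp(chen, (0, 50), [-3.0, 2.0, 20.0], max_step=0.01)
    print(sol.y[:, -1])              # final state on the chaotic attractor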
Abstract:
The aim of this work was to simulate the dispersion of radionuclides in the surrounding area of a coal-fired power plant that has been operational for the last 25 years. The dispersion of natural radionuclides (226Ra, 232Th and 40K) was simulated by a Gaussian plume dispersion model with three different stability classes, estimating the radionuclide concentrations at ground level. Measurements of the environmental activity concentrations were carried out by γ-spectrometry and compared with results from the air dispersion and deposition model, which showed that stability class D causes dispersion to longer distances, up to 20 km from the stacks.
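A minimal sketch of a ground-level Gaussian plume estimate for a single stability class; the dispersion coefficients below are the commonly used Briggs rural approximations for class D and are illustrative only, not the study's parameterization:

    # Sketch: ground-level concentration downwind of a stack using the
    # standard Gaussian plume formula with total reflection at the ground.
    import numpy as np

    def ground_level_concentration(Q, u, H, x, y=0.0):
        """Q: emission rate (Bq/s), u: wind speed (m/s), H: effective stack
        height (m), x: downwind distance (m), y: crosswind distance (m)."""
        # Briggs rural sigma_y, sigma_z for stability class D (illustrative values)
        sigma_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)
        sigma_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)
        return (Q / (np.pi * u * sigma_y * sigma_z)
                * np.exp(-y**2 / (2 * sigma_y**2))
                * np.exp(-H**2 / (2 * sigma_z**2)))

    for x in (1_000, 5_000, 10_000, 20_000):   # metres from the stack
        print(x, ground_level_concentration(Q=1e6, u=4.0, H=100.0, x=x))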
Abstract:
Wireless Body Area Network (WBAN) is the most convenient, cost-effective, accurate, and non-invasive technology for e-health monitoring. The performance of a WBAN may be disturbed when coexisting with other wireless networks. Accordingly, this paper provides a comprehensive study and in-depth analysis of coexistence issues and interference mitigation solutions in WBAN technologies. A thorough survey of state-of-the-art research on WBAN coexistence issues is conducted. The survey classifies, discusses, and compares the studies according to the parameters used to analyze the coexistence problem. Solutions suggested by the studies are then classified according to the techniques they follow, and their shortcomings are identified. Moreover, the coexistence problem in WBAN technologies is mathematically analyzed, and formulas are derived for the probability of successful channel access for different wireless technologies in the presence of an interfering network. Finally, extensive simulations are conducted using OPNET with several real-life scenarios to evaluate the impact of coexistence interference on different WBAN technologies. In particular, three main WBAN wireless technologies are considered: IEEE 802.15.6, IEEE 802.15.4, and low-power WiFi. The mathematical analysis and the simulation results are discussed, and the impact of an interfering network on the different wireless technologies is compared and analyzed. The results show that an interfering network (e.g., standard WiFi) has an impact on the performance of a WBAN and may disrupt its operation. In addition, using low-power WiFi for WBANs is investigated and shown to be a feasible option compared to other wireless technologies.
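As a generic illustration only (not the paper's derivation), the way an interfering network degrades channel access can be sketched with a simple slotted model in which a WBAN transmission succeeds only if none of the interfering nodes transmits in the same slot:

    # Generic illustration of coexistence interference (not the paper's model):
    # a tagged WBAN transmission succeeds only if none of the n interfering
    # transmitters, each active with probability p_tx, uses the same slot.
    def p_success(n_interferers: int, p_tx: float) -> float:
        return (1.0 - p_tx) ** n_interferers

    for n in (1, 5, 10, 20):
        print(n, round(p_success(n, p_tx=0.1), 3))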
Abstract:
Consider the problem of assigning implicit-deadline sporadic tasks on a heterogeneous multiprocessor platform comprising two different types of processors; such a platform is referred to as a two-type platform. We present two low-degree polynomial time-complexity algorithms, SA and SA-P, each providing the following guarantee. For a given two-type platform and a task set, if there exists a task assignment such that tasks can be scheduled to meet deadlines by allowing them to migrate only between processors of the same type (intra-migrative), then (i) SA is guaranteed to find such an assignment, with the same restriction on task migration, given a platform in which processors are 1+α/2 times faster, and (ii) SA-P succeeds in finding a task assignment in which tasks are not allowed to migrate between processors (non-migrative), given a platform in which processors are 1+α times faster. The parameter 0<α≤1 is a property of the task set; it is the maximum of all task utilizations that are no greater than 1. We evaluate the average-case performance of both algorithms by generating task sets randomly and measuring how much faster the processors need to be (upper bounded by 1+α/2 for SA and 1+α for SA-P) for the algorithms to output a feasible task assignment (intra-migrative for SA and non-migrative for SA-P). In our evaluations, for the vast majority of task sets, these algorithms require significantly smaller processor speedup than indicated by their theoretical bounds. Finally, we consider a special case where no task utilization in the given task set can exceed one, and for this case, we (re-)prove the performance guarantees of SA and SA-P. We show, for both algorithms, that changing the adversary from intra-migrative to a more powerful one, namely fully migrative, in which tasks can migrate between processors of any type, does not deteriorate the performance guarantees. For this special case, we compare the average-case performance of SA-P and a state-of-the-art algorithm by generating task sets randomly. In our evaluations, SA-P outperforms the state-of-the-art algorithm by requiring much smaller processor speedup and by running orders of magnitude faster.
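A minimal sketch of the speedup bounds stated above: α is the largest task utilization not exceeding 1, and the theoretical bounds are 1+α/2 for SA and 1+α for SA-P (the task utilizations below are random placeholders, not a generated benchmark):

    # Sketch: compute alpha and the theoretical speedup factors for SA / SA-P.
    import random

    random.seed(0)
    utilizations = [random.uniform(0.05, 1.0) for _ in range(20)]  # placeholder task set

    alpha = max(u for u in utilizations if u <= 1.0)
    print("alpha        :", round(alpha, 3))
    print("SA   speedup :", round(1 + alpha / 2, 3))   # intra-migrative guarantee
    print("SA-P speedup :", round(1 + alpha, 3))       # non-migrative guarantee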
Abstract:
Hard real-time multiprocessor scheduling has, in recent years, seen the flourishing of semi-partitioned scheduling algorithms. This category of scheduling schemes combines elements of partitioned and global scheduling with the purpose of achieving efficient utilization of the system’s processing resources with strong schedulability guarantees and low dispatching overheads. The sub-class of slot-based “task-splitting” scheduling algorithms, in particular, offers very good trade-offs between schedulability guarantees (in the form of high utilization bounds) and the number of preemptions/migrations involved. However, so far no unified schedulability theory existed for such algorithms; each one was formulated with its own accompanying analysis. This article changes this fragmented landscape by formulating a more unified schedulability theory covering the two state-of-the-art slot-based semi-partitioned algorithms, S-EKG and NPS-F (both fixed job-priority based). This new theory is based on exact schedulability tests, thus also overcoming many sources of pessimism in existing analyses. In turn, since schedulability testing guides the task assignment under the schemes in consideration, we also formulate an improved task assignment procedure. As the other main contribution of this article, and in response to the fact that many unrealistic assumptions present in the original theory tend to undermine the theoretical potential of such scheduling schemes, we identified and modelled into the new analysis all overheads incurred by the algorithms in consideration. The outcome is a new overhead-aware schedulability analysis that permits increased efficiency and reliability. The merits of this new theory are evaluated by an extensive set of experiments.
Abstract:
The development of an intelligent wheelchair (IW) platform that may be easily adapted to any commercial electric-powered wheelchair and aid any person with special mobility needs is the main objective of this project. To achieve this main objective, three distinct control methods were implemented in the IW: manual, shared, and automatic. Several algorithms were developed for each of these control methods. This paper presents three of the most significant of those algorithms, with emphasis on the shared control method. Experiments were performed by users suffering from cerebral palsy, using a realistic simulator, in order to validate the approach. The experiments revealed the importance of using shared (aided) controls for users with severe disabilities. The patients still felt they had complete control over the wheelchair movement when using shared control at a 50% level, and thus this control type was very well accepted. It may therefore be used in intelligent wheelchairs, since it is able to correct the direction in case of involuntary movements of the user while still giving the user a sense of complete control over the IW movement.
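A minimal sketch of 50% shared control as described above: the command actually sent to the wheelchair is a blend of the user's joystick input and an autonomous correction. The variable names and the simple linear blending rule are illustrative, not the project's algorithm:

    # Sketch: blend the user's joystick command with an autonomous safe command.
    # share = 0.5 corresponds to the 50% shared-control level mentioned above.
    def shared_control(user_cmd, auto_cmd, share=0.5):
        """Each command is (linear_velocity, angular_velocity)."""
        return tuple((1 - share) * u + share * a for u, a in zip(user_cmd, auto_cmd))

    # an involuntary swerve by the user is corrected toward the planner's suggestion
    print(shared_control(user_cmd=(0.8, 1.2), auto_cmd=(0.8, 0.0)))   # -> (0.8, 0.6)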
Abstract:
In this paper we address the problem of computing multiple roots of a system of nonlinear equations through the global optimization of an appropriate merit function. The search for a global minimizer of the merit function is carried out by a metaheuristic known as harmony search, which does not require any derivative information. The multiple roots of the system are sequentially determined along several iterations of a single run, in which the merit function is modified by penalty terms that aim to create repulsion areas around previously computed minimizers. A repulsion algorithm based on a multiplicative-type penalty function is proposed. Preliminary numerical experiments with a benchmark set of problems show the effectiveness of the proposed method.
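A minimal sketch of the repulsion idea: the merit function (sum of squared residuals) is multiplied by a penalty that grows near roots already found, so later runs are pushed toward new roots. The penalty form below is one possible multiplicative choice, and scipy's differential evolution stands in for the harmony search metaheuristic used in the paper:

    # Sketch: sequentially locate multiple roots of F(x) = 0 by globally
    # minimizing a merit function with multiplicative repulsion penalties.
    import numpy as np
    from scipy.optimize import differential_evolution

    def F(x):
        # toy system with two roots, (1, 1) and (-1, -1)
        return np.array([x[0]**2 - 1.0, x[0]*x[1] - 1.0])

    found = []   # minimizers (roots) computed so far

    def merit(x, rho=0.5):
        base = float(np.sum(F(x)**2))
        # multiplicative repulsion: inflate the merit near already-found roots
        for r in found:
            d = np.linalg.norm(np.asarray(x) - r)
            base *= np.exp(min(rho / (d + 1e-12), 50.0))
        return base

    bounds = [(-2.0, 2.0), (-2.0, 2.0)]
    for _ in range(2):
        res = differential_evolution(merit, bounds, seed=0, tol=1e-12)
        found.append(res.x.copy())

    print(found)   # expected: the two roots, (1, 1) and (-1, -1), in some order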
Abstract:
The multiprocessor scheduling scheme NPS-F for sporadic tasks has a high utilisation bound and an overall number of preemptions bounded at design time. NPS-F bin-packs tasks offline to as many servers as needed. At runtime, the scheduler ensures that each server is mapped to at most one of the m processors at any instant. When scheduled, servers use EDF to select which of their tasks to run. Yet, unlike the overall number of preemptions, the migrations per se are not tightly bounded. Moreover, we cannot know a priori which task a server will be executing at the instant when it migrates. This uncertainty complicates the estimation of cache-related preemption and migration costs (CPMD), potentially resulting in their overestimation. Therefore, to simplify the CPMD estimation, we propose an amended bin-packing scheme for NPS-F that allows us (i) to identify, at design time, which task migrates at which instant and (ii) to bound a priori the number of migrating tasks, while preserving the utilisation bound of NPS-F.
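For intuition about the offline step, a generic first-fit packing of task utilizations onto servers is sketched below; this conveys only the flavor of bin-packing and is not the amended NPS-F scheme proposed in the paper:

    # Sketch: generic first-fit packing of task utilizations into servers.
    def first_fit(utilizations, capacity=1.0):
        servers = []                    # each server holds a list of task utilizations
        for u in utilizations:
            for s in servers:
                if sum(s) + u <= capacity:
                    s.append(u)
                    break
            else:
                servers.append([u])     # open a new server
        return servers

    print(first_fit([0.6, 0.3, 0.5, 0.2, 0.4]))   # -> [[0.6, 0.3], [0.5, 0.2], [0.4]]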
Abstract:
Nowadays, many real-time operating systems discretize time using a system time unit. To take this behavior into account, real-time scheduling algorithms must adopt a discrete-time model in which both the timing requirements of tasks and their time allocations have to be integer multiples of the system time unit. That is, tasks cannot be executed for less than one time unit, which implies that they always have to complete a minimum amount of work before they can be preempted. Assuming such a discrete-time model, the authors of Zhu et al. (Proceedings of the 24th IEEE International Real-Time Systems Symposium (RTSS 2003), 2003; J Parallel Distrib Comput 71(10):1411–1425, 2011) proposed an efficient “boundary fair” algorithm (named BF) and proved its optimality for the scheduling of periodic tasks while achieving full system utilization. However, BF cannot handle sporadic tasks due to their inherent irregular and unpredictable job release patterns. In this paper, we propose an optimal boundary-fair scheduling algorithm for sporadic tasks (named BF²), which follows the same principle as BF by making scheduling decisions only at job arrival times and (expected) task deadlines. This new algorithm was implemented in Linux, and we show through experiments conducted on a multicore machine that BF² outperforms the state-of-the-art discrete-time optimal scheduler (PD²), benefiting from much lower scheduling overheads. Furthermore, these experimental results show that BF² is barely dependent on the length of the system time unit, while PD² (the only other existing solution for the scheduling of sporadic tasks in discrete-time systems) sees its number of preemptions and migrations, and the time spent taking scheduling decisions, increase linearly as the time resolution of the system improves.
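A minimal sketch of the "boundary" idea only: scheduling decisions are taken at job arrival times and at the corresponding (expected) deadlines. The release trace and periods below are illustrative, and implicit deadlines (release time plus period) are assumed; this is not the BF² allocation algorithm itself:

    # Sketch: collect the time points ("boundaries") at which a boundary-fair
    # scheduler would make decisions: job releases and their implicit deadlines.
    releases = {            # task -> observed release times of its jobs
        "t1": [0, 10, 25],  # period / relative deadline 10
        "t2": [0, 17],      # period / relative deadline 15
    }
    periods = {"t1": 10, "t2": 15}

    boundaries = set()
    for task, times in releases.items():
        for r in times:
            boundaries.add(r)                    # job arrival
            boundaries.add(r + periods[task])    # expected (implicit) deadline

    print(sorted(boundaries))   # -> [0, 10, 15, 17, 20, 25, 32, 35]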
Abstract:
Hand-off (or hand-over), the process by which mobile nodes select the best access point available to transfer data, has been well studied in wireless networks. The performance of a hand-off process depends on the specific characteristics of the wireless links. In the case of low-power wireless networks, hand-off decisions must be taken carefully, by considering the unique properties of inexpensive low-power radios. This paper addresses the design, implementation and evaluation of smart-HOP, a hand-off mechanism tailored for low-power wireless networks. This work has three main contributions. First, it formulates the hard hand-off process for low-power networks (such as typical wireless sensor networks, WSNs) with a probabilistic model, to investigate the impact of the most relevant channel parameters through an analytical approach. Second, it confirms the probabilistic model through simulation and further elaborates on the impact of several hand-off parameters. Third, it fine-tunes the most relevant hand-off parameters via an extended set of experiments, in a realistic experimental scenario. The evaluation shows that smart-HOP performs well in the transitional region, achieving a relative delivery ratio above 98 percent and hand-off delays on the order of a few tens of milliseconds.
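A minimal sketch of a hard hand-off rule of the kind studied: switch access points only when the current link is weak and a candidate beats it by a hysteresis margin over a short window of samples. The threshold, margin, and window values below are placeholders, not the tuned parameters from the paper:

    # Sketch: hysteresis-based hard hand-off decision on link-quality samples (e.g. RSSI in dBm).
    def should_handoff(current_rssi, candidate_rssi, threshold=-90.0, margin=5.0, window=3):
        """Hand off if the current link is below `threshold` and the candidate
        beats it by `margin` dB in each of the last `window` samples."""
        recent_cur = current_rssi[-window:]
        recent_cand = candidate_rssi[-window:]
        return (all(c < threshold for c in recent_cur) and
                all(cand - cur >= margin for cur, cand in zip(recent_cur, recent_cand)))

    cur = [-85, -91, -93, -95]
    cand = [-80, -84, -86, -88]
    print(should_handoff(cur, cand))   # True: weak current link, consistently stronger candidate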