Abstract:
In this article, we present the first study on probabilistic tsunami hazard assessment for the Northeast (NE) Atlantic region related to earthquake sources. The methodology combines probabilistic seismic hazard assessment, tsunami numerical modeling, and statistical approaches. We consider three main tsunamigenic areas, namely the Southwest Iberian Margin, the Gloria, and the Caribbean. For each tsunamigenic zone, we derive the annual recurrence rate for each magnitude range, from Mw 8.0 up to Mw 9.0, at regular intervals, using the Bayesian method, which incorporates seismic information from historical and instrumental catalogs. A numerical code solving the shallow water equations is employed to simulate the tsunami propagation and compute nearshore wave heights. The probability of exceeding a specific tsunami hazard level during a given time period is calculated using the Poisson distribution. The results are presented in terms of the probability of exceedance of a given tsunami amplitude for 100- and 500-year return periods. The hazard level varies along the NE Atlantic coast, reaching its maximum along the northern segment of the Morocco Atlantic coast, the southern Portuguese coast, and the Spanish coast of the Gulf of Cadiz. We find that the probability that the maximum wave height exceeds 1 m somewhere in the NE Atlantic region reaches 60 and 100 % for 100- and 500-year return periods, respectively. These probability values decrease to about 15 and 50 %, respectively, when considering an exceedance threshold of 5 m for the same return periods.
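As a point of reference for the last step described above, the sketch below computes the probability of at least one exceedance during an exposure period under a Poisson arrival model; the annual rates are placeholders, not values derived in the study.

```python
import math

def poisson_exceedance_probability(annual_rate, exposure_years):
    """Probability of at least one exceedance of a hazard level during the
    exposure period, assuming event arrivals follow a Poisson process."""
    return 1.0 - math.exp(-annual_rate * exposure_years)

# Illustrative annual exceedance rates (placeholders, not study values)
# for 1 m and 5 m wave-height thresholds at a coastal site.
for threshold, rate in [("1 m", 1.0 / 120.0), ("5 m", 1.0 / 600.0)]:
    for period in (100, 500):
        p = poisson_exceedance_probability(rate, period)
        print(f"P(exceed {threshold} in {period} yr) = {p:.2f}")
```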
Abstract:
Background: Complex medication regimens may adversely affect compliance and treatment outcomes. Complexity can be assessed with the medication regimen complexity index (MRCI), which has proved to be a valid, reliable tool with potential uses in both practice and research. Objective: To use the MRCI to assess medication regimen complexity in institutionalized elderly people. Setting: Five nursing homes in mainland Portugal. Methods: A descriptive, cross-sectional study of institutionalized elderly people (n = 415) was performed from March to June 2009, including all inpatients aged 65 and over taking at least one medication per day. Main outcome measure: Medication regimen complexity index. Results: The mean age of the sample was 83.9 years (±6.6 years), and 60.2 % were women. The elderly patients were taking a large number of drugs, with 76.6 % taking more than five medications per day. The average medication regimen complexity was 18.2 (SD = 9.6) and was higher in women (p < 0.001). The most decisive factors contributing to complexity were the number of drugs and the dosage frequency. In regimens with the same number of medications, the dosing schedule was the most relevant factor in the final score (r = 0.922), followed by pharmaceutical forms (r = 0.768) and additional instructions (r = 0.742). Conclusion: Medication regimen complexity proved to be high. There is clear potential for pharmacist intervention to reduce it as part of the medication review routine for all patients.
Abstract:
Recent integrated circuit technologies have opened the possibility of designing parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space becomes even more difficult if, beyond performance and area, we also consider metrics such as performance and area efficiency, where the designer aims at the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to design a many-core architecture. Instead of exploring the design space of the many-core architecture based on the experimental execution results of a particular benchmark of algorithms, our approach is to make a formal analysis of the algorithms considering the main architectural aspects and to determine how each particular architectural aspect is related to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach, we carried out a theoretical analysis of a dense matrix multiplication algorithm and derived an equation that relates the number of execution cycles to the architectural parameters. Based on this equation, a many-core architecture has been designed. The results obtained indicate that a 100 mm² integrated circuit design of the proposed architecture, using 65 nm technology, is able to achieve 464 GFLOPs (double-precision floating-point) for a memory bandwidth of 16 GB/s. This corresponds to a performance efficiency of 71 %. Considering 45 nm technology, a 100 mm² chip attains 833 GFLOPs, which corresponds to 84 % of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
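The equation derived in the paper is not reproduced in the abstract, so the sketch below only illustrates the general idea with a roofline-style estimate of bandwidth-limited performance for blocked matrix multiplication; the peak performance, tile size, and word size are hypothetical, not the paper's parameters.

```python
def sustained_gflops(peak_gflops, bandwidth_gb_s, tile_size_b, bytes_per_word=8):
    """Memory-bandwidth-limited performance estimate for blocked matmul.

    With b x b tiles kept in per-core local memory, each step streams two
    b x b tiles (2*b^2 words) and performs 2*b^3 flops, so every word loaded
    from external memory is reused about b times, giving an arithmetic
    intensity of roughly b / bytes_per_word flops per byte.
    """
    flops_per_byte = tile_size_b / bytes_per_word
    memory_bound_gflops = bandwidth_gb_s * flops_per_byte
    return min(peak_gflops, memory_bound_gflops)

# Hypothetical configuration, loosely inspired by the figures quoted above.
print(sustained_gflops(peak_gflops=650.0, bandwidth_gb_s=16.0, tile_size_b=256))
```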
Abstract:
The current study examines hospital volunteers' intention to stay in an organization through understanding motivation, management factors, and satisfaction. A total of 304 hospital volunteers, mainly women, completed a questionnaire measuring motivations, management factors, satisfaction, and intention to stay. Structural equation modeling was used. The results demonstrate a positive relationship between (a) motivation and satisfaction, (b) management factors and satisfaction, (c) satisfaction and intention to stay, and (d) motivation and management factors. These results have important implications for the way organizations operate. This research indicates the aspects most valued by volunteers and allows nonprofit organizations (NPOs) to design and establish appropriate and assertive management policies.
Abstract:
This study aims to optimize the water quality monitoring of a polluted watercourse (Leça River, Portugal) through principal component analysis (PCA) and cluster analysis (CA). These statistical methodologies were applied to physicochemical, bacteriological, and ecotoxicological data (with the marine bacterium Vibrio fischeri and the green alga Chlorella vulgaris) obtained from water samples collected monthly at seven monitoring sites during five campaigns (February, May, June, August, and September 2006). The results of some variables were assigned to water quality classes according to national guidelines. Chemical and bacteriological quality data led to classifying the Leça River water quality as “bad” or “very bad”. PCA and CA identified monitoring sites with similar pollution patterns, distinguishing site 1 (located in the upstream stretch of the river) from all other sampling sites downstream. Ecotoxicity results corroborated this classification, revealing differences in space and time. The present study includes not only physical, chemical, and bacteriological but also ecotoxicological parameters, which opens new perspectives in river water characterization. Moreover, the application of PCA and CA is very useful for optimizing water quality monitoring networks, defining the minimum number of sites and their location. Thus, these tools can support appropriate management decisions.
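A minimal sketch of the statistical workflow (PCA followed by cluster analysis) is given below using scikit-learn and SciPy; the site-by-variable matrix is an invented placeholder, not data from the Leça River campaigns.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

# rows = monitoring sites 1..7, columns = hypothetical variables
# (e.g. BOD, conductivity, coliform counts, toxicity); values are placeholders
data = np.array([
    [2.0, 150.0, 1e2, 5.0],
    [8.5, 420.0, 1e4, 35.0],
    [9.0, 450.0, 2e4, 40.0],
    [10.5, 480.0, 5e4, 45.0],
    [11.0, 500.0, 8e4, 50.0],
    [12.0, 510.0, 9e4, 55.0],
    [12.5, 530.0, 1e5, 60.0],
])

# standardize, project onto two principal components, then cluster the scores
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(data))
clusters = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")
print(scores.round(2))
print(clusters)  # site 1 is expected to separate from the downstream sites
```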
Abstract:
The most consumed squid species worldwide were characterized regarding their concentrations of minerals, fatty acids, cholesterol, and vitamin E. Interspecific comparisons were made among species and geographical origins. The health benefits derived from squid consumption were assessed based on daily mineral intake and on nutritional lipid quality indexes. Squids contribute significantly to the daily intake of several macro- (Na, K, Mg and P) and micronutrients (Cu, Zn and Ni). Despite their low fat concentration, they are rich in long-chain omega-3 fatty acids, particularly docosahexaenoic (DHA) and eicosapentaenoic (EPA) acids, with highly favorable ω-3/ω-6 ratios (from 5.7 to 17.7), reducing the significance of their high cholesterol concentration (140–549 mg/100 g ww). Assessment of potential health risks based on mineral intake and on non-carcinogenic and carcinogenic risks indicated that Loligo gahi (from the Atlantic Ocean), Loligo opalescens (from the Pacific Ocean), and Loligo duvaucelii (from the Indian Ocean) should be eaten in moderation due to their high concentrations of Cu and/or Cd. Canonical discriminant analysis identified the major fatty acids (C14:0, C18:0, C18:1, C18:3ω-3, C20:4ω-6 and C22:5ω-6), P, K, Cu, and vitamin E as chemical discriminators for the selected species. These elements and compounds show potential to prove the authenticity of the commercially relevant squid species.
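As a rough illustration of the discriminant step, the sketch below uses scikit-learn's LinearDiscriminantAnalysis as a stand-in for canonical discriminant analysis; the species labels, feature columns, and values are hypothetical, not the measured fatty-acid and mineral profiles.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# columns ~ hypothetical discriminators [C18:1, C22:5w6, P, K, Cu, vitamin E];
# rows = 20 specimens of each of three species, drawn from shifted normals
X = np.vstack([rng.normal(loc=m, scale=0.5, size=(20, 6))
               for m in ([1, 2, 3, 4, 5, 6], [2, 1, 3, 5, 4, 6], [3, 3, 2, 4, 6, 5])])
y = np.repeat(["L. gahi", "L. opalescens", "L. duvaucelii"], 20)

lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
print(lda.explained_variance_ratio_)  # separation captured by each canonical axis
print(lda.score(X, y))                # resubstitution classification accuracy
```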
Abstract:
The prediction of the time and efficiency of the remediation of contaminated soils using soil vapor extraction remains a difficult challenge for the scientific community and for consultants. This work reports the development of multiple linear regression and artificial neural network models to predict the remediation time and efficiency of soil vapor extractions performed in soils contaminated separately with benzene, toluene, ethylbenzene, xylene, trichloroethylene, and perchloroethylene. The results demonstrate that the artificial neural network approach performs better than the multiple linear regression models. The artificial neural network model allowed an accurate prediction of remediation time and efficiency based only on soil and pollutant characteristics, thus allowing a simple and quick preliminary evaluation of the viability of the process.
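A minimal sketch of the model comparison (multiple linear regression versus a small feed-forward neural network) is shown below with scikit-learn; the synthetic descriptors and response are placeholders, not the soil and pollutant data used in the work.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
# hypothetical descriptors, e.g. water content, organic matter, vapor pressure, solubility
X = rng.uniform(size=(200, 4))
# synthetic, mildly nonlinear "remediation time" response
y = 5 + 3 * X[:, 0] + np.sin(4 * X[:, 1]) + 2 * X[:, 2] * X[:, 3] + rng.normal(0, 0.1, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlr = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X_tr, y_tr)
print("MLR R2:", r2_score(y_te, mlr.predict(X_te)))
print("ANN R2:", r2_score(y_te, ann.predict(X_te)))
```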
Abstract:
The yeast Saccharomyces cerevisiae is a useful model organism for studying lead (Pb) toxicity. Yeast cells of a laboratory S. cerevisiae strain (WT strain) were incubated with Pb concentrations up to 1,000 μmol/l for 3 h. Cells exposed to Pb lost proliferation capacity without damage to the cell membrane, and they accumulated intracellular superoxide anion (O2·−) and hydrogen peroxide (H2O2). The involvement of the mitochondrial electron transport chain (ETC) in the generation of reactive oxygen species (ROS) induced by Pb was evaluated. For this purpose, an isogenic derivative ρ0 strain, lacking mitochondrial DNA, was used. The ρ0 strain, without respiratory competence, displayed lower intracellular ROS accumulation and higher resistance to Pb compared to the WT strain. The kinetic study of ROS generation in yeast cells exposed to Pb showed that the production of O2·− precedes the accumulation of H2O2, which is compatible with the leakage of electrons from the mitochondrial ETC. Yeast cells exposed to Pb displayed mutations at the mitochondrial DNA level, most likely a consequence of oxidative stress. In conclusion, mitochondria are an important source of Pb-induced ROS and, simultaneously, one of the targets of its toxicity.
Abstract:
We study exotic patterns appearing in a network of coupled Chen oscillators. Namely, we consider a network of two rings coupled through a “buffer” cell, with Z3×Z5 symmetry group. Numerical simulations of the network reveal steady states, rotating waves in one ring with quasiperiodic behavior in the other, and chaotic states in the two rings, to name a few. The different patterns seem to arise through a sequence of Hopf, period-doubling, and period-halving bifurcations. The network architecture seems to explain certain observed features, such as the equilibria and the rotating waves, whereas the properties of the chaotic oscillator may explain others, such as the quasiperiodic and chaotic states. We use XPPAUT and MATLAB to compute the relevant states numerically.
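For readers unfamiliar with the cell dynamics, the sketch below integrates a single, uncoupled Chen oscillator with SciPy using standard parameter values; the coupled two-ring network with the buffer cell is not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

def chen(t, s, a=35.0, b=3.0, c=28.0):
    """Standard Chen system; with these parameters the dynamics are chaotic."""
    x, y, z = s
    return [a * (y - x), (c - a) * x - x * z + c * y, x * y - b * z]

sol = solve_ivp(chen, (0.0, 50.0), [1.0, 1.0, 1.0], rtol=1e-8, dense_output=True)
print(sol.y[:, -1])  # final state, lying on the chaotic attractor
```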
Abstract:
The Chaves basin is a pull-apart tectonic depression developed on granites, schists, and graywackes, and filled with a sedimentary sequence of variable thickness. It is a rather complex structure, as it includes an intricate network of faults and hydrogeological systems. The topography of the basement of the Chaves basin remains unclear, as no drill hole has ever intersected the bottom of the sediments, and resistivity surveys suffer from severe equivalence issues resulting from the geological setting. In this work, a joint inversion approach for 1D resistivity and gravity data designed for layered environments is used to combine the consistent spatial distribution of the gravity data with the depth sensitivity of the resistivity data. A comparison between the results of inverting each data set individually and the results of the joint inversion shows that, although the joint inversion has more difficulty adjusting to the observed data, it provides more realistic and geologically meaningful models than those calculated by inverting each data set individually. This work contributes to a better understanding of the Chaves basin, while also providing an opportunity to further study both the advantages and the difficulties involved in applying the joint inversion of gravity and resistivity data.
Abstract:
Wireless Body Area Network (WBAN) is the most convenient, cost-effective, accurate, and non-invasive technology for e-health monitoring. The performance of a WBAN may be disturbed when coexisting with other wireless networks. Accordingly, this paper provides a comprehensive study and in-depth analysis of coexistence issues and interference mitigation solutions in WBAN technologies. A thorough survey of state-of-the-art research on WBAN coexistence issues is conducted. The surveyed studies are classified, discussed, and compared according to the parameters used to analyze the coexistence problem. The solutions they suggest are then classified according to the techniques followed, and their shortcomings are identified. Moreover, the coexistence problem in WBAN technologies is analyzed mathematically, and formulas are derived for the probability of successful channel access for different wireless technologies in the presence of an interfering network. Finally, extensive simulations are conducted using OPNET with several real-life scenarios to evaluate the impact of coexistence interference on different WBAN technologies. In particular, three main WBAN wireless technologies are considered: IEEE 802.15.6, IEEE 802.15.4, and low-power WiFi. The mathematical analysis and the simulation results are discussed, and the impact of an interfering network on the different wireless technologies is compared and analyzed. The results show that an interfering network (e.g., standard WiFi) affects the performance of a WBAN and may disrupt its operation. In addition, using low-power WiFi for WBANs is investigated and shown to be a feasible option compared with other wireless technologies.
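The paper's own channel-access formulas are not given in the abstract; the sketch below is only an illustrative Monte Carlo estimate of the probability that a frame avoids overlap with a Poisson-arriving interferer, with hypothetical frame lengths and rates.

```python
import numpy as np

def p_success(wban_frame_ms, interferer_rate_per_ms, interferer_frame_ms,
              trials=100_000, seed=0):
    """Probability that a WBAN frame does not overlap any interferer frame.

    A WBAN frame fails if an interferer frame starts inside the vulnerable
    window of length (wban_frame_ms + interferer_frame_ms) around its start,
    with interferer frame arrivals modeled as a Poisson process.
    """
    rng = np.random.default_rng(seed)
    window = wban_frame_ms + interferer_frame_ms
    collisions = rng.poisson(interferer_rate_per_ms * window, size=trials)
    return np.mean(collisions == 0)

# hypothetical numbers: 0.5 ms WBAN frame, interferer sending 1.5 ms frames at 0.1 frames/ms
print(p_success(wban_frame_ms=0.5, interferer_rate_per_ms=0.1, interferer_frame_ms=1.5))
```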
Abstract:
Hard real-time multiprocessor scheduling has seen, in recent years, the flourishing of semi-partitioned scheduling algorithms. This category of scheduling schemes combines elements of partitioned and global scheduling with the purpose of achieving efficient utilization of the system's processing resources with strong schedulability guarantees and low dispatching overheads. The sub-class of slot-based “task-splitting” scheduling algorithms, in particular, offers very good trade-offs between schedulability guarantees (in the form of high utilization bounds) and the number of preemptions/migrations involved. However, so far no unified scheduling theory existed for such algorithms; each one was formulated with its own accompanying analysis. This article changes this fragmented landscape by formulating a more unified schedulability theory covering the two state-of-the-art slot-based semi-partitioned algorithms, S-EKG and NPS-F (both fixed job-priority based). This new theory is based on exact schedulability tests, thereby also overcoming many sources of pessimism in existing analyses. In turn, since schedulability testing guides the task assignment under the schemes in consideration, we also formulate an improved task assignment procedure. As the other main contribution of this article, and in response to the fact that many unrealistic assumptions present in the original theory tend to undermine the theoretical potential of such scheduling schemes, we identified and modelled into the new analysis all overheads incurred by the algorithms in consideration. The outcome is a new overhead-aware schedulability analysis that permits increased efficiency and reliability. The merits of this new theory are evaluated by an extensive set of experiments.
Abstract:
The multiprocessor scheduling scheme NPS-F for sporadic tasks has a high utilisation bound and an overall number of preemptions bounded at design time. NPS-F bin-packs tasks offline to as many servers as needed. At runtime, the scheduler ensures that each server is mapped to at most one of the m processors at any instant. When scheduled, servers use EDF to select which of their tasks to run. Yet, unlike the overall number of preemptions, the migrations per se are not tightly bounded. Moreover, we cannot know a priori which task a server will be executing at the instant when it migrates. This uncertainty complicates the estimation of cache-related preemption and migration costs (CPMD), potentially resulting in their overestimation. Therefore, to simplify the CPMD estimation, we propose an amended bin-packing scheme for NPS-F that allows us (i) to identify, at design time, which task migrates at which instant and (ii) to bound a priori the number of migrating tasks, while preserving the utilisation bound of NPS-F.
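As background for the bin-packing stage mentioned above, the sketch below shows a plain first-fit assignment of task utilisations to servers; it is only a baseline illustration, not the amended scheme proposed in the paper.

```python
def first_fit(utilisations, capacity=1.0):
    """Assign task utilisations to servers using first-fit bin packing."""
    servers = []  # each server is a list of task utilisations
    for u in utilisations:
        for srv in servers:
            if sum(srv) + u <= capacity:
                srv.append(u)
                break
        else:
            servers.append([u])  # open a new server when no existing one fits
    return servers

# hypothetical task set (utilisations), not taken from the paper
tasks = [0.6, 0.3, 0.5, 0.2, 0.4, 0.7, 0.1]
for i, srv in enumerate(first_fit(tasks)):
    print(f"server {i}: tasks {srv}, utilisation {sum(srv):.2f}")
```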
Abstract:
Nowadays, many real-time operating systems discretize time relying on a system time unit. To take this behavior into account, real-time scheduling algorithms must adopt a discrete-time model in which both the timing requirements of tasks and their time allocations have to be integer multiples of the system time unit. That is, tasks cannot be executed for less than one time unit, which implies that they always have to perform a minimum amount of work before they can be preempted. Assuming such a discrete-time model, the authors of Zhu et al. (Proceedings of the 24th IEEE international real-time systems symposium (RTSS 2003), 2003; J Parallel Distrib Comput 71(10):1411–1425, 2011) proposed an efficient “boundary fair” algorithm (named BF) and proved its optimality for the scheduling of periodic tasks while achieving full system utilization. However, BF cannot handle sporadic tasks due to their inherently irregular and unpredictable job release patterns. In this paper, we propose an optimal boundary-fair scheduling algorithm for sporadic tasks (named BF²), which follows the same principle as BF by making scheduling decisions only at the job arrival times and (expected) task deadlines. This new algorithm was implemented in Linux, and we show through experiments conducted on a multicore machine that BF² outperforms the state-of-the-art discrete-time optimal scheduler (PD²), benefiting from much lower scheduling overheads. Furthermore, these experimental results show that BF² is barely dependent on the length of the system time unit, while PD² (the only other existing solution for the scheduling of sporadic tasks in discrete-time systems) sees its number of preemptions, migrations, and time spent taking scheduling decisions increase linearly when improving the time resolution of the system.