926 results for Coastal Environments
Abstract:
To optimally manage a metapopulation, managers and conservation biologists can favor a particular spatial distribution of habitat (e.g. aggregated or random). However, the spatial distribution that provides the highest habitat occupancy remains ambiguous, and numerous contradictory results exist. Habitat occupancy depends on the balance between local extinction and colonization. Thus, the issue becomes even more puzzling when various forms of relationship - positive or negative co-variation - exist between local extinction and colonization rates within habitat types. Using an analytical model, we first demonstrate that the habitat occupancy of a metapopulation is significantly affected by the presence of habitat types that display different extinction-colonization dynamics, considering: (i) variation in extinction or colonization rates and (ii) positive and negative co-variation between the two processes within habitat types. We then examine, with a spatially explicit stochastic simulation model, how different degrees of habitat aggregation affect occupancy predictions under similar scenarios. An aggregated distribution of habitat types provides the highest habitat occupancy when local extinction risk is spatially heterogeneous and high in some places, whereas a random distribution of habitat provides the highest habitat occupancy when colonization rates are high. Because spatial variability in local extinction rates always favors aggregation of habitats, we only need to know the spatial variability in colonization rates to determine whether aggregating habitat types increases metapopulation occupancy. By comparing the results of the analytical model with those of the spatially explicit stochastic simulation model, we determine the conditions under which a simple metapopulation model closely matches the results of a more complex spatial simulation model with explicit heterogeneity.
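The extinction-colonization balance underlying such models can be sketched with a minimal Levins-type stochastic patch-occupancy simulation. This is an illustrative toy with a single habitat type and hypothetical rates, not the authors' spatially explicit, multi-habitat model:

```python
import random

def simulate_occupancy(n_patches=100, extinction=0.1, colonization=0.3,
                       steps=500, seed=42):
    """Minimal Levins-type stochastic patch-occupancy simulation.

    Each occupied patch goes locally extinct with probability `extinction`;
    each empty patch is colonized with probability proportional to the
    current fraction of occupied patches times `colonization`.
    Returns the final fraction of occupied patches.
    """
    rng = random.Random(seed)
    half = n_patches // 2
    occupied = [True] * half + [False] * (n_patches - half)
    for _ in range(steps):
        frac = sum(occupied) / n_patches
        occupied = [
            (rng.random() > extinction) if occ
            else (rng.random() < colonization * frac)
            for occ in occupied
        ]
    return sum(occupied) / n_patches

# In the deterministic Levins model the equilibrium occupancy is 1 - e/c,
# here 1 - 0.1/0.3, so roughly two thirds of patches occupied.
occ = simulate_occupancy()
```

The spatially explicit model in the abstract additionally tracks patch positions and habitat types, so colonization depends on neighborhood rather than on the global occupied fraction.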
Abstract:
In this paper we address the question of why participants tend to respond realistically to situations and events portrayed within an Immersive Virtual Reality (IVR) system. The idea is put forward, based on experience of a large number of experimental studies, that there are two orthogonal components that contribute to this realistic response. The first is "being there", often called "presence", the qualia of having a sensation of being in a real place. We call this Place Illusion (PI). Second, Plausibility Illusion (Psi) refers to the illusion that the scenario being depicted is actually occurring. In the case of both PI and Psi, the participant knows for sure that they are not "there" and that the events are not occurring. PI is constrained by the sensorimotor contingencies afforded by the virtual reality system. Psi is determined by the extent to which the system can produce events that directly relate to the participant, and by the overall credibility of the scenario being depicted in comparison with expectations. We argue that when both PI and Psi occur, participants will respond realistically to the virtual reality.
Abstract:
This paper presents the use of our multimodal mixed reality telecommunication system to support remote acting rehearsal. The rehearsals involved two actors, located in London and Barcelona, and a director in another location in London. This triadic audiovisual telecommunication was performed in a spatial and multimodal collaborative mixed reality environment based on the 'destination-visitor' paradigm, which we define and put into use. We detail our heterogeneous system architecture, which spans the three distributed and technologically asymmetric sites, and features a range of capture, display, and transmission technologies. The actors' and director's experiences of rehearsing a scene via the system are then discussed, exploring successes and failures of this heterogeneous form of telecollaboration. Overall, the common spatial frame of reference presented by the system to all parties was highly conducive to theatrical acting and directing, allowing blocking, gross gesture, and unambiguous instruction to be issued. The relative inexpressivity of the actors' embodiments was identified as the central limitation of the telecommunication, meaning that moments relying on performing and reacting to consequential facial expression and subtle gesture were less successful.
Abstract:
The objective of this work was to develop uni- and multivariate models to predict maximum soil shear strength (τmax) under different normal stresses (σn), water contents (U), and soil managements. The study was carried out in a Rhodic Haplustox under Cerrado (control area) and under no-tillage and conventional tillage systems. Undisturbed soil samples were taken in the 0.00-0.05 m layer and subjected to increasing U and σn in shear strength tests. The uni- and multivariate models - respectively τmax = 10^(a+bU) and τmax = 10^(a+bU+cσn) - were significant in all three soil management systems evaluated, and they satisfactorily explain the relationship between U, σn, and τmax. The soil under Cerrado has the highest shear strength (τ) estimated with the univariate model, regardless of the soil water content, whereas the soil under conventional tillage shows the highest values with the multivariate model, which were associated with the lowest water contents at the soil consistency limits in this management system.
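Once fitted, both model forms are simple to evaluate. The sketch below shows how they would be applied; the coefficient values are hypothetical placeholders for illustration only, not the paper's fitted parameters:

```python
def tau_max_uni(U, a, b):
    """Univariate model: tau_max = 10**(a + b*U)."""
    return 10 ** (a + b * U)

def tau_max_multi(U, sigma_n, a, b, c):
    """Multivariate model: tau_max = 10**(a + b*U + c*sigma_n)."""
    return 10 ** (a + b * U + c * sigma_n)

# Hypothetical coefficients (not the paper's fits): a negative b makes
# shear strength decrease as water content U rises, a positive c makes
# it increase with normal stress sigma_n.
t1 = tau_max_uni(0.20, a=2.0, b=-1.5)
t2 = tau_max_multi(0.20, sigma_n=0.1, a=2.0, b=-1.5, c=0.8)
```

Because the models are log-linear, fitting a, b, and c reduces to ordinary linear regression on log10(τmax).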
Abstract:
The decomposition process of Ruppia cirrhosa was studied in a Mediterranean coastal lagoon in the Delta of the River Ebro (NE Spain). Leaves and shoots of Ruppia were enclosed in 1 mm-mesh and 100 µm-mesh litter bags to ascertain the effects of detritivores and macroinvertebrates, and of bacteria and fungi, respectively. Changes in biomass and in carbon, nitrogen, and phosphorus concentrations in the detritus were studied at the sediment-water interface and in the sediment. Significant differences in biomass decay were observed between the two bag types. Significant differences in decomposition were observed between the two experimental conditions studied using the 100 µm-mesh bags; these differences were not significant when using the 1 mm-mesh bags. The carbon content in the detritus remained constant during the decomposition process. The percentage of nitrogen increased progressively from an initial 2.4 % to 3 %. The percentage of phosphorus decreased rapidly during the first two days of decomposition, from an initial 0.26 % to 0.17 %. This loss was greater in the sediment than in the water column or at the sediment-water interface. From these results we deduce that the activity of microorganisms seems to be more important in the sediment than at the sediment-water interface, and that grazing by macroinvertebrates is less important in the sediment than in the water column.
Abstract:
The effect of dissolved nutrients on growth, nutrient content and uptake rates of Chaetomorpha linum in a Mediterranean coastal lagoon (Tancada, Ebro delta, NE Spain) was studied in laboratory experiments. Water was enriched with distinct forms of nitrogen, such as nitrate or ammonium, and with phosphorus. Enrichment with N, P or with both nutrients resulted in a significant increase in the tissue content of these nutrients. N-enrichment was followed by an increase in chlorophyll content after 4 days of treatment, although the difference was only significant when nitrate was added without P. P-enrichment had no significant effect on chlorophyll content. In all the treatments an increase in biomass was observed after 10 days. This increase was higher in the N+P treatments. In all the treatments the uptake rate was significantly higher when nutrients were added than in control jars. The uptake rates of N (as ammonium) and of P were significantly higher when they were added alone, while that of N as nitrate was higher in the N+P treatment. In the P-enriched cultures, the final P-content of macroalgal tissues was ten-fold the initial tissue concentration, thereby indicating luxury P-uptake. Moreover, at the end of the incubation the N:P ratio increased to 80, showing that P rather than N was the limiting factor for C. linum in the Tancada lagoon. The relatively high availability of N is related to the N inputs from the rice fields that surround the lagoon and to P binding in sediments.
Abstract:
We have recently described 95 predicted alpha-helical coiled-coil peptides derived from putative Plasmodium falciparum erythrocytic stage proteins. The 70 peptides recognized with the highest prevalence by sera from three endemic areas were selected for further studies. In this study, we sequentially examined antibody responses to these synthetic peptides in two cohorts of children at risk of clinical malaria in Kilifi district in coastal Kenya, in order to characterize the level of peptide recognition by age, and the role of anti-peptide antibodies in protection from clinical malaria. Antibody levels from 268 children in the first cohort (Chonyi) were assayed against the 70 peptides. Thirty-nine peptides were selected for further study in a second cohort (Junju). The rationale for the second cohort was to confirm those peptides identified as protective in the first cohort. The Junju cohort comprised children aged 1-6 years (inclusive). Children were actively followed up to identify episodes of febrile malaria in both cohorts. Of the 70 peptides examined, 32 showed significantly (p<0.05) increased antibody recognition in older children and 40 showed significantly increased antibody recognition in parasitaemic children. Ten peptides were associated with a significantly reduced odds ratio (OR) for an episode of clinical malaria in the first cohort of children, and two of these peptides (LR146 and AS202.11) were associated with a significantly reduced OR in both cohorts. LR146 is derived from hypothetical protein PFB0145c in PlasmoDB. Previous work has identified this protein as a target of antibodies effective in antibody dependent cellular inhibition (ADCI). The current study further substantiates the potential of protein PFB0145c and also identifies protein PF11_0424 as another likely target of protective antibodies against P. falciparum malaria.
Abstract:
Abstract The solvability of the problem of fair exchange in a synchronous system subject to Byzantine failures is investigated in this work. The fair exchange problem arises when a group of processes are required to exchange digital items in a fair manner, which means that either each process obtains the item it was expecting or no process obtains any information on the inputs of others. After introducing a novel specification of fair exchange that clearly separates safety and liveness, we give an overview of the difficulty of solving such a problem in the context of a fully-connected topology. On the one hand, we show that no solution to fair exchange exists in the absence of an identified process that every process can trust a priori; on the other, a well-known solution to fair exchange relying on a trusted third party is recalled. These two results lead us to complete our system model with a flexible representation of the notion of trust. We then show that fair exchange is solvable if and only if a connectivity condition, named the reachable majority condition, is satisfied. The necessity of the condition is proven by an impossibility result and its sufficiency by presenting a general solution to fair exchange relying on a set of trusted processes. The focus is then turned towards a specific network topology in order to provide a fully decentralized, yet realistic, solution to fair exchange. The general solution mentioned above is optimized by reducing the computational load assumed by trusted processes as far as possible. Accordingly, our fair exchange protocol relies on trusted tamperproof modules that have limited communication abilities and are only required in key steps of the algorithm. This modular solution is then implemented in the context of a pedagogical application developed for illustrating and apprehending the complexity of fair exchange.
This application, which also includes the implementation of a wide range of Byzantine behaviors, allows executions of the algorithm to be set up and monitored through a graphical display. Surprisingly, some of our results on fair exchange seem to contradict those found in the literature on secure multiparty computation, a problem from the field of modern cryptography, although the two problems have much in common. Both problems are closely related to the notion of trusted third party, but their approaches and descriptions differ greatly. By introducing a common specification framework, a comparison is proposed in order to clarify their differences and the possible origins of the confusion between them. This leads us to introduce the problem of generalized fair computation, a generalization of fair exchange. Finally, a solution to this new problem is given by generalizing our modular solution to fair exchange.
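The classical trusted-third-party solution recalled above can be illustrated with a toy sketch (not the protocol developed in this work): both parties deposit their items with the trusted party, which releases them only once both deposits are present, so either each process obtains the item it expected or neither obtains anything:

```python
class TrustedThirdParty:
    """Toy trusted third party for two-party fair exchange.

    Items are released only when both parties have deposited, so the
    exchange is atomic: both succeed or both abort with no information
    leaked about the other's input.
    """
    def __init__(self):
        self._deposits = {}

    def deposit(self, party, item):
        self._deposits[party] = item

    def exchange(self, party_a, party_b):
        if party_a in self._deposits and party_b in self._deposits:
            return self._deposits[party_b], self._deposits[party_a]
        return None, None  # abort: neither party obtains anything

ttp = TrustedThirdParty()
ttp.deposit("alice", "item-A")
a_gets, b_gets = ttp.exchange("alice", "bob")   # aborts: bob has not deposited
ttp.deposit("bob", "item-B")
a_gets, b_gets = ttp.exchange("alice", "bob")   # both items released
```

The thesis's contribution is precisely to weaken this single-trusted-party assumption, distributing trust over a set of processes satisfying the reachable majority condition.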
Abstract:
The past few decades have seen a considerable increase in the number of parallel and distributed systems. With the development of more complex applications, the need for more powerful systems has emerged, and various parallel and distributed environments have been designed and implemented. Each of the environments, including hardware and software, has unique strengths and weaknesses. There is no single parallel environment that can be identified as the best environment for all applications with respect to hardware and software properties. The main goal of this thesis is to provide a novel way of performing data-parallel computation in parallel and distributed environments by utilizing the best characteristics of different aspects of parallel computing. For the purpose of this thesis, three aspects of parallel computing were identified and studied. First, three parallel environments (shared memory, distributed memory, and a network of workstations) are evaluated to quantify their suitability for different parallel applications. Due to the parallel and distributed nature of the environments, the networks connecting the processors in these environments were investigated with respect to their performance characteristics. Second, scheduling algorithms are studied in order to make them more efficient and effective. A concept of application-specific information scheduling is introduced. The application-specific information is data about the workload extracted from an application, which is provided to a scheduling algorithm. Three scheduling algorithms are enhanced to utilize the application-specific information to further refine their scheduling properties. A more accurate description of the workload is especially important in cases where the workunits are heterogeneous and the parallel environment is heterogeneous and/or non-dedicated. The results obtained show that the additional information regarding the workload has a positive impact on the performance of applications.
Third, a programming paradigm for networks of symmetric multiprocessor (SMP) workstations is introduced. The MPIT programming paradigm incorporates the Message Passing Interface (MPI) with threads to provide a methodology for writing parallel applications that efficiently utilize the available resources and minimize the overhead. MPIT allows communication and computation to overlap by deploying a dedicated thread for communication. Furthermore, the programming paradigm implements an application-specific scheduling algorithm. The scheduling algorithm is executed by the communication thread; thus, the scheduling does not affect the execution of the parallel application. Performance results show that the MPIT achieves considerable improvements over conventional MPI applications.
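The dedicated-communication-thread pattern behind MPIT can be sketched as follows. This is a minimal illustration of the overlap idea, not the MPIT implementation: a Python queue stands in for MPI message passing, and all names are illustrative:

```python
import threading
import queue

def communication_thread(inbox, results):
    """Dedicated thread handling 'communication' (a queue standing in for
    MPI sends/receives) so that computation in the main thread overlaps
    with message handling instead of blocking on it."""
    while True:
        msg = inbox.get()
        if msg is None:            # shutdown sentinel
            break
        results.append(msg * 2)    # stand-in for processing a received message

inbox, results = queue.Queue(), []
worker = threading.Thread(target=communication_thread, args=(inbox, results))
worker.start()

# Main thread computes while the communication thread drains the queue.
partial = sum(i * i for i in range(1000))
for i in range(5):
    inbox.put(i)

inbox.put(None)                    # signal shutdown
worker.join()
```

In a real MPI + threads program the communication thread would issue the MPI calls (requiring a thread-safe MPI initialization), and, as in MPIT, it is also a natural place to run the scheduling logic without stalling the compute path.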
Abstract:
The solid-rotor induction motor provides a mechanically and thermally reliable solution for demanding environments where other rotor solutions are prohibited or questionable. Solid rotors, which are manufactured from single pieces of ferromagnetic material, are commonly used in motors in which the rotation speeds substantially exceed the conventional speeds of laminated rotors with a squirrel cage. During the operation of a solid-rotor electrical machine, the rotor core forms a conductor for both the magnetic flux and the electrical current. This causes an increase in the rotor resistance and rotor leakage inductance, which essentially decreases the power factor and the efficiency of the machine. The electromagnetic problems related to the solid-rotor induction motor are mostly associated with the low performance of the rotor. Therefore, the main emphasis in this thesis is put on the solid steel rotor designs. The rotor designs studied in this thesis are based on the fact that the rotor construction should be extremely robust and reliable to withstand the high mechanical stresses caused by the rotational velocity of the rotor. In addition, the demanding operation environment sets requirements for the applied materials because of the high temperatures and oxidizing acids, which may be present in the cooling fluid. Therefore, the solid rotors analyzed in this thesis are made of a single piece of ferromagnetic material without any additional parts, such as copper end-rings or a squirrel cage. A pure solid rotor construction is rigid and able to keep its balance over a large speed range. It may also tolerate other environmental stresses such as corroding substances or abrasive particles.
In this thesis, the main target is to improve the performance of an induction motor equipped with a solid steel rotor by traditional methods: by axial slitting of the rotor, by selecting a proper rotor core material, and by coating the rotor with a high-resistive stainless ferromagnetic material. In the solid steel rotor calculation, the rotor end-effects have a significant effect on the rotor characteristics. Thus, emphasis is also put on the comparison of different rotor end-factors. In addition, a corrective slip-dependent end-factor is proposed. The rotor designs covered in this thesis are the smooth solid rotor, the axially slitted solid rotor and the slitted rotor having a uniform ferromagnetic coating cylinder. The thesis aims at design rules for multi-megawatt machines. Typically, megawatt-size solid-rotor machines find their applications mainly in the field of electric-motor gas-compression systems, in steam-turbine applications, and in various types of large-power pump applications, where high operational speeds are required. In this thesis, a 120 kW, 10 000 rpm solid-rotor induction motor is used as a small-scale model for such megawatt-range solid-rotor machines. The performance of the 120 kW solid-rotor induction motor is determined by experimental measurements and finite element calculations.
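For orientation, the standard induction-machine speed relations behind such high-speed designs can be computed as below. The supply frequency and pole-pair count here are assumptions chosen only to be consistent with a 10 000 rpm rating; they are not values taken from the thesis:

```python
def synchronous_speed_rpm(f_hz, pole_pairs):
    """Synchronous speed of an induction machine: n_s = 60 * f / p (rpm)."""
    return 60.0 * f_hz / pole_pairs

def slip(n_sync_rpm, n_rotor_rpm):
    """Per-unit slip: s = (n_s - n) / n_s."""
    return (n_sync_rpm - n_rotor_rpm) / n_sync_rpm

# Assuming a two-pole machine (one pole pair) fed at 170 Hz: the synchronous
# speed is 10 200 rpm, so running at the rated 10 000 rpm gives roughly 2 %
# slip. Solid rotors typically run at higher slip than squirrel-cage rotors
# because of their higher rotor resistance.
n_s = synchronous_speed_rpm(170.0, 1)
s = slip(n_s, 10_000.0)
```

The rotor current, and hence the resistive behavior the thesis's end-factors correct for, depends directly on this slip.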
Abstract:
The resource utilization level in the open laboratories of several universities has been shown to be very low. Our aim is to take advantage of those idle resources for parallel computation without disturbing the local load. In order to provide a system that lets us execute parallel applications in such a non-dedicated cluster, we use an integral scheduling system that considers both Space and Time sharing concerns. For dealing with the Time Sharing (TS) aspect, we use a technique based on the communication-driven coscheduling principle. This kind of TS system has implications for the Space Sharing (SS) system, which force us to modify the way job scheduling is traditionally done. In this paper, we analyze the relation between the TS and the SS systems in a non-dedicated cluster. As a consequence of this analysis, we propose a new technique, termed 3DBackfilling. This proposal implements the well-known SS technique of backfilling, but applied to an environment with a MultiProgramming Level (MPL) of the parallel applications that is greater than one. In addition, 3DBackfilling considers the requirements of the local workload running on each node. Our proposal was evaluated in a PVM/MPI Linux cluster, and it was compared with several more traditional SS policies applied to non-dedicated environments.
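The classical backfilling idea that 3DBackfilling builds on can be sketched as follows. This is a minimal EASY-style illustration, not 3DBackfilling itself, which additionally handles multiprogramming levels greater than one and the local workload on each node:

```python
def backfill(free_nodes, jobs, now, head_start):
    """Minimal EASY-style backfilling sketch.

    `jobs` is a queue of (job_id, nodes, runtime) tuples; jobs[0] is the
    head job, which cannot start yet and holds a reservation to start at
    time `head_start`. A later job may be backfilled (started now) only
    if it fits in the currently free nodes AND finishes before the head
    job's reservation, so it never delays the reserved job.
    Returns the ids of the backfilled jobs.
    """
    started = []
    for job_id, nodes, runtime in jobs[1:]:
        if nodes <= free_nodes and now + runtime <= head_start:
            free_nodes -= nodes
            started.append(job_id)
    return started

# The head job needs 8 nodes but only 4 are free; short narrow jobs that
# finish before its reservation at t=50 are allowed to jump ahead.
jobs = [("head", 8, 100), ("short", 2, 10), ("long", 2, 500), ("tiny", 1, 5)]
print(backfill(free_nodes=4, jobs=jobs, now=0, head_start=50))  # ['short', 'tiny']
```

The "3D" extension adds further admission dimensions: each node may hold several parallel tasks (MPL > 1), and a candidate is also rejected if it would degrade the local user's workload on the nodes involved.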
Abstract:
In this work, we present an integral scheduling system for non-dedicated clusters, termed CISNE-P, which ensures the performance required by the local applications while simultaneously allocating cluster resources to parallel jobs. Our approach solves the problem efficiently by using a social contract technique. This kind of technique is based on reserving computational resources, preserving a predetermined response time for local users. CISNE-P is a middleware which includes both a previously developed space-sharing job scheduler and a dynamic coscheduling system, a time-sharing scheduling component. The experimentation performed in a Linux cluster shows that these two scheduler components are complementary and that good coordination improves global performance significantly. We also compare two different CISNE-P implementations: one developed inside the kernel, and the other implemented entirely in user space.
Abstract:
Despite the progressive ageing of the worldwide population, negative attitudes towards old age have proliferated thanks to cultural constructs and myths that, for decades, have presented old age as a synonym for decay, deterioration and loss. Moreover, even though every human being knows he/she will age and that ageing is a process that cannot be stopped, it always seems distant, far off in the future and, therefore, remains invisible. In this paper, I aim to analyse the invisibility of old age and its spaces through two contemporary novels and their ageing female protagonists: Maudie Fowler in Doris Lessing's The Diary of a Good Neighbour and Erica March in Rose Tremain's The Cupboard. Although invisible to the rest of society, these elderly characters succeed in becoming significant in the lives of younger protagonists who, immersed in their active lives, become aware of the need to enlarge our vision of old age.
Abstract:
In this work we show that SiC-based MIS capacitors can work in environments with extremely high concentrations of water vapor and still be sensitive to hydrogen, CO and hydrocarbons, making these devices suitable for monitoring the exhaust gases of hydrogen- or hydrocarbon-based fuel cells. Under the harshest conditions (45% water vapor by volume ratio to nitrogen), Pt/TaOx/SiO2/SiC MIS capacitors are able to detect the presence of 1 ppm of hydrogen, 2 ppm of CO, 100 ppm of ethane or 20 ppm of ethene, concentrations that are far below the legal permissible exposure limits.