678 results for GPS-Denied Environments



To optimally manage a metapopulation, managers and conservation biologists can favor a particular spatial distribution of habitat (e.g. aggregated or random). However, the spatial distribution that provides the highest habitat occupancy remains ambiguous, and numerous contradictory results exist. Habitat occupancy depends on the balance between local extinction and colonization, so the issue becomes even more puzzling when various forms of relationship - positive or negative co-variation - exist between local extinction and colonization rates within habitat types. Using an analytical model, we first demonstrate that the habitat occupancy of a metapopulation is significantly affected by the presence of habitat types that display different extinction-colonization dynamics, considering (i) variation in extinction or colonization rate and (ii) positive and negative co-variation between the two processes within habitat types. We then examine, with a spatially explicit stochastic simulation model, how different degrees of habitat aggregation affect occupancy predictions under similar scenarios. An aggregated distribution of habitat types provides the highest habitat occupancy when local extinction risk is spatially heterogeneous and high in some places, whereas a random distribution of habitat provides the highest occupancy when colonization rates are high. Because spatial variability in local extinction rates always favors aggregation of habitats, knowing the spatial variability in colonization rates is enough to determine whether aggregating habitat types increases metapopulation occupancy or not. By comparing the results of the analytical model with those of the spatially explicit stochastic simulation model, we determine the conditions under which a simple metapopulation model closely matches the results of a more complex spatial simulation model with explicit heterogeneity.
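The extinction-colonization dynamics described above can be illustrated with a minimal stochastic patch-occupancy simulation in the spirit of the models discussed. All parameter values, function names, and the mean-field colonization term below are illustrative assumptions of this sketch, not taken from the study:

```python
import random

def simulate(grid, extinction, colonization, steps, seed=0):
    """Stochastic patch-occupancy dynamics on a set of habitat patches.

    grid: list of 0/1 occupancy states (1 = occupied)
    extinction[i], colonization[i]: per-patch rates in [0, 1]
    Returns mean occupancy over the final half of the run.
    """
    rng = random.Random(seed)
    occ = list(grid)
    history = []
    for _ in range(steps):
        frac = sum(occ) / len(occ)  # fraction of occupied patches
        nxt = []
        for i, o in enumerate(occ):
            if o:  # occupied patch may go locally extinct
                nxt.append(0 if rng.random() < extinction[i] else 1)
            else:  # empty patch is colonized in proportion to occupancy
                nxt.append(1 if rng.random() < colonization[i] * frac else 0)
        occ = nxt
        history.append(sum(occ) / len(occ))
    tail = history[len(history) // 2:]
    return sum(tail) / len(tail)

n = 100
# homogeneous habitat: moderate extinction, high colonization
occupancy = simulate([1] * n, [0.1] * n, [0.8] * n, steps=200)
```

Heterogeneous extinction-colonization dynamics, in this framework, amount to giving the per-patch rate lists different values for different habitat types.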


In this paper we address the question of why participants tend to respond realistically to situations and events portrayed within an Immersive Virtual Reality (IVR) system. Based on experience from a large number of experimental studies, we put forward the idea that two orthogonal components contribute to this realistic response. The first is "being there", often called "presence", the qualia of having a sensation of being in a real place; we call this Place Illusion (PI). The second, Plausibility Illusion (Psi), refers to the illusion that the scenario being depicted is actually occurring. In the case of both PI and Psi, the participant knows for sure that they are not "there" and that the events are not occurring. PI is constrained by the sensorimotor contingencies afforded by the virtual reality system. Psi is determined by the extent to which the system can produce events that directly relate to the participant, and by the overall credibility of the depicted scenario in comparison with expectations. We argue that when both PI and Psi occur, participants will respond realistically to the virtual reality.


This paper presents the use of our multimodal mixed reality telecommunication system to support remote acting rehearsal. The rehearsals involved two actors, located in London and Barcelona, and a director in another location in London. This triadic audiovisual telecommunication was performed in a spatial and multimodal collaborative mixed reality environment based on the 'destination-visitor' paradigm, which we define and put into use. We detail our heterogeneous system architecture, which spans the three distributed and technologically asymmetric sites and features a range of capture, display, and transmission technologies. The actors' and director's experiences of rehearsing a scene via the system are then discussed, exploring the successes and failures of this heterogeneous form of telecollaboration. Overall, the common spatial frame of reference that the system presented to all parties was highly conducive to theatrical acting and directing, allowing blocking, gross gesture, and unambiguous instruction. The relative inexpressivity of the actors' embodiments was identified as the central limitation of the telecommunication: moments that relied on performing and reacting to consequential facial expression and subtle gesture were less successful.


The objective of this work was to develop uni- and multivariate models to predict maximum soil shear strength (τmax) under different normal stresses (σn), water contents (U), and soil managements. The study was carried out in a Rhodic Haplustox under Cerrado (control area) and under no-tillage and conventional tillage systems. Undisturbed soil samples were taken from the 0.00-0.05 m layer and subjected to increasing U and σn in shear strength tests. The uni- and multivariate models - respectively τmax = 10^(a+bU) and τmax = 10^(a+bU+cσn) - were significant in all three soil management systems evaluated, and they satisfactorily explain the relationship between U, σn, and τmax. The soil under Cerrado has the highest shear strength (τ) estimated with the univariate model, regardless of the soil water content, whereas the soil under conventional tillage shows the highest values with the multivariate model, which were associated with the lowest water contents at the soil consistency limits in this management system.
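The two fitted models are log-linear in their predictors, so prediction is a one-line computation. A sketch with purely illustrative coefficients (the fitted values of a, b, and c from the study are not reproduced here):

```python
def tau_max_uni(a, b, U):
    """Univariate model: tau_max = 10**(a + b*U)."""
    return 10 ** (a + b * U)

def tau_max_multi(a, b, c, U, sigma_n):
    """Multivariate model: tau_max = 10**(a + b*U + c*sigma_n)."""
    return 10 ** (a + b * U + c * sigma_n)

# Illustrative coefficients only -- not the values fitted in the study.
a, b, c = 2.0, -1.5, 0.002
tau = tau_max_multi(a, b, c, U=0.20, sigma_n=100.0)
```

With b negative and c positive, the models reproduce the expected physical behavior: shear strength falls as water content rises and grows with normal stress.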


At the turn of the millennium, mobile services utilizing location information were expected to become one of the most significant competitive advantages among mobile network operators. In addition to commercial applications, interest in location-based services was increased by a law enacted by the United States Department of Transportation requiring that calls made to the public emergency number be located. Equipment and network vendors in particular expected this to open up new markets for them. Several competing methods for locating mobile phones entered the market, most of them exploiting GSM network signaling. At the same time, the increasing processing power of mobile phones made it possible to integrate GPS receivers into handsets, and the first such phones appeared on the market. Standardization related to mobile phone positioning is rather fragmented. ETSI has standardized a set of methods for determining the location of a mobile phone, but these standards do not specify how the location information is delivered to the services that use it. In the standardization of transferring and presenting location information, device and software vendors have formed alliances and introduced competing standards and recommendations; in this area, too, no alliance has achieved a dominant market position. In this work, a system was designed and implemented for locating GSM terminals with an integrated GPS receiver. The implemented system was integrated as a new positioning method, alongside cell-based positioning, into the Sonera Pointer positioning system. During the work, a number of commercially available GSM phones with integrated GPS receivers were tested, with particular attention to how the GPS location data can be transferred from the phone to applications residing in the network.
The most important starting point in the design of the implemented system was flexibility: the fragmentation and partial absence of standards meant that the system had to be as easy to extend as possible. Another significant design factor was operator independence, because the Sonera Pointer system was also intended to be sold to other mobile network operators.
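A recurring practical question in such a system - how GPS position data moves from the handset to network applications - often comes down to handling NMEA 0183 sentences, the de facto output format of GPS receivers. A minimal $GPGGA parser sketch; the example sentence is hypothetical and its checksum is not validated:

```python
def parse_gpgga(sentence):
    """Parse an NMEA 0183 $GPGGA sentence into decimal-degree coordinates."""
    fields = sentence.split('*')[0].split(',')  # drop the *hh checksum
    if fields[0] != '$GPGGA':
        raise ValueError('not a GPGGA sentence')

    def to_degrees(value, hemisphere):
        # NMEA encodes latitude as ddmm.mmmm and longitude as dddmm.mmmm
        dot = value.index('.')
        degrees = float(value[:dot - 2])
        minutes = float(value[dot - 2:])
        result = degrees + minutes / 60.0
        return -result if hemisphere in ('S', 'W') else result

    return {
        'time': fields[1],
        'lat': to_degrees(fields[2], fields[3]),
        'lon': to_degrees(fields[4], fields[5]),
        'fix_quality': int(fields[6]),
        'satellites': int(fields[7]),
    }

# Hypothetical fix near Helsinki, made up for illustration
fix = parse_gpgga('$GPGGA,120035,6010.2400,N,02456.1200,E,1,07,1.2,44.0,M,,,,*47')
```

A production system would additionally verify the checksum and handle empty fields, which receivers emit when no fix is available.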


Location information is becoming increasingly necessary, as every new smartphone incorporates a GPS (Global Positioning System) receiver, which allows the development of various applications based on it. However, the GPS signal cannot be properly received in indoor environments, and for this reason new indoor positioning systems are being developed. As indoors is a very challenging scenario, it is necessary to study the precision of the obtained location information in order to determine whether these new positioning techniques are suitable for indoor positioning.
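One common way to quantify that precision is to compare position estimates against surveyed ground-truth points and summarize the error distribution, e.g. with the root-mean-square error (RMSE) and the circular error probable (CEP, the median error radius). A minimal sketch with made-up sample data:

```python
import math

def error_stats(estimates, truths):
    """Return (RMSE, CEP) of 2-D position errors in the same units as input."""
    errors = sorted(
        math.hypot(ex - tx, ey - ty)
        for (ex, ey), (tx, ty) in zip(estimates, truths)
    )
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    mid = len(errors) // 2
    # median of the sorted error radii
    cep = errors[mid] if len(errors) % 2 else (errors[mid - 1] + errors[mid]) / 2
    return rmse, cep

# Hypothetical estimated vs. true positions in metres (local x, y frame)
est = [(1.0, 0.0), (0.0, 2.0), (3.0, 4.0), (0.0, 0.0)]
tru = [(0.0, 0.0), (0.0, 0.0), (0.0, 0.0), (0.0, 0.0)]
rmse, cep = error_stats(est, tru)
```

RMSE is dominated by large outliers, while CEP describes typical accuracy; indoor positioning studies commonly report both for this reason.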


This work investigates the solvability of the fair exchange problem in a synchronous system subject to Byzantine failures. The fair exchange problem arises when a group of processes are required to exchange digital items in a fair manner, meaning that either each process obtains the item it was expecting, or no process obtains any information on the inputs of the others. After introducing a novel specification of fair exchange that clearly separates safety and liveness, we give an overview of the difficulty of solving such a problem in the context of a fully connected topology. On the one hand, we show that no solution to fair exchange exists in the absence of an identified process that every process can trust a priori; on the other, we recall a well-known solution to fair exchange relying on a trusted third party. These two results lead us to complete our system model with a flexible representation of the notion of trust. We then show that fair exchange is solvable if and only if a connectivity condition, named the reachable majority condition, is satisfied. The necessity of the condition is proven by an impossibility result and its sufficiency by presenting a general solution to fair exchange relying on a set of trusted processes. The focus is then turned towards a specific network topology in order to provide a fully decentralized, yet realistic, solution to fair exchange. The general solution mentioned above is optimized by reducing the computational load assumed by trusted processes as far as possible. Accordingly, our fair exchange protocol relies on trusted tamperproof modules that have limited communication abilities and are only required in key steps of the algorithm. This modular solution is then implemented in the context of a pedagogical application developed for illustrating and apprehending the complexity of fair exchange.
This application, which also includes the implementation of a wide range of Byzantine behaviors, allows executions of the algorithm to be set up and monitored through a graphical display. Surprisingly, some of our results on fair exchange seem to contradict those found in the literature on secure multiparty computation, a problem from the field of modern cryptography, although the two problems have much in common. Both problems are closely related to the notion of a trusted third party, but their approaches and descriptions differ greatly. By introducing a common specification framework, we propose a comparison that clarifies their differences and the possible origins of the confusion between them. This leads us to introduce the problem of generalized fair computation, a generalization of fair exchange. Finally, a solution to this new problem is given by generalizing our modular solution to fair exchange.


The past few decades have seen a considerable increase in the number of parallel and distributed systems. With the development of more complex applications, the need for more powerful systems has emerged, and various parallel and distributed environments have been designed and implemented. Each of these environments, including hardware and software, has unique strengths and weaknesses. No single parallel environment can be identified as the best for all applications with respect to hardware and software properties. The main goal of this thesis is to provide a novel way of performing data-parallel computation in parallel and distributed environments by utilizing the best characteristics of different aspects of parallel computing. For the purposes of this thesis, three aspects of parallel computing were identified and studied. First, three parallel environments (shared memory, distributed memory, and a network of workstations) are evaluated to quantify their suitability for different parallel applications. Due to the parallel and distributed nature of the environments, the networks connecting the processors in these environments were investigated with respect to their performance characteristics. Second, scheduling algorithms are studied in order to make them more efficient and effective. A concept of application-specific information scheduling is introduced. The application-specific information is data about the workload extracted from an application, which is provided to a scheduling algorithm. Three scheduling algorithms are enhanced to utilize the application-specific information to further refine their scheduling properties. A more accurate description of the workload is especially important in cases where the workunits are heterogeneous and the parallel environment is heterogeneous and/or non-dedicated. The results obtained show that the additional information regarding the workload has a positive impact on the performance of applications.
Third, a programming paradigm for networks of symmetric multiprocessor (SMP) workstations is introduced. The MPIT programming paradigm incorporates the Message Passing Interface (MPI) with threads to provide a methodology for writing parallel applications that efficiently utilize the available resources and minimize the overhead. MPIT allows communication and computation to overlap by deploying a dedicated thread for communication. Furthermore, the programming paradigm implements an application-specific scheduling algorithm. The scheduling algorithm is executed by the communication thread, so the scheduling does not affect the execution of the parallel application. Performance results achieved with MPIT show considerable improvements over conventional MPI applications.
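The central idea of MPIT - overlapping communication with computation by dedicating one thread to communication - can be sketched without MPI itself. In this illustrative Python sketch a short sleep stands in for a blocking MPI send, and all names are our own, not part of MPIT:

```python
import queue
import threading
import time

def run(n_items):
    """Compute n_items results while a dedicated thread 'sends' them."""
    outbox = queue.Queue()
    done = threading.Event()
    sent = []

    def comm():
        # Dedicated communication thread: drains the outbox while the
        # main thread keeps computing. The sleep stands in for a
        # blocking send, which would otherwise stall the computation.
        while not (done.is_set() and outbox.empty()):
            try:
                msg = outbox.get(timeout=0.01)
            except queue.Empty:
                continue
            time.sleep(0.001)   # simulated blocking send
            sent.append(msg)

    t = threading.Thread(target=comm)
    t.start()
    total = 0
    for i in range(n_items):
        total += i * i          # "computation"
        outbox.put(i)           # hand the result off without waiting
    done.set()
    t.join()
    return total, sent

total, sent = run(10)
```

The main thread never waits on a send, which is precisely the overlap the paradigm aims for; in real MPIT the communication thread would issue MPI calls and also run the application-specific scheduler.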


The solid-rotor induction motor provides a mechanically and thermally reliable solution for demanding environments where other rotor solutions are prohibited or questionable. Solid rotors, which are manufactured from single pieces of ferromagnetic material, are commonly used in motors whose rotation speeds substantially exceed the conventional speeds of laminated squirrel-cage rotors. During the operation of a solid-rotor electrical machine, the rotor core forms a conductor for both the magnetic flux and the electrical current. This increases the rotor resistance and rotor leakage inductance, which essentially decreases the power factor and the efficiency of the machine. The electromagnetic problems related to the solid-rotor induction motor are mostly associated with the low performance of the rotor. Therefore, the main emphasis in this thesis is put on solid steel rotor designs. The rotor designs studied in this thesis are based on the fact that the rotor construction should be extremely robust and reliable to withstand the high mechanical stresses caused by the rotational velocity of the rotor. In addition, the demanding operating environment sets requirements for the applied materials because of the high temperatures and oxidizing acids that may be present in the cooling fluid. Therefore, the solid rotors analyzed in this thesis are made of a single piece of ferromagnetic material without any additional parts, such as copper end-rings or a squirrel-cage. A pure solid rotor construction is rigid and able to keep its balance over a large speed range. It may also tolerate other environmental stresses such as corroding substances or abrasive particles.
In this thesis, the main target is to improve the performance of an induction motor equipped with a solid steel rotor by traditional methods: by axial slitting of the rotor, by selecting a proper rotor core material, and by coating the rotor with a high-resistivity stainless ferromagnetic material. In solid steel rotor calculation, the rotor end-effects have a significant influence on the rotor characteristics. Thus, emphasis is also put on the comparison of different rotor end-factors, and a corrective slip-dependent end-factor is proposed. The rotor designs covered in this thesis are the smooth solid rotor, the axially slitted solid rotor, and the slitted rotor with a uniform ferromagnetic coating cylinder. The thesis aims at design rules for multi-megawatt machines. Typically, megawatt-size solid-rotor machines find their applications mainly in electric-motor-driven gas compression systems, in steam-turbine applications, and in various types of large-power pump applications where high operational speeds are required. In this thesis, a 120 kW, 10 000 rpm solid-rotor induction motor is used as a small-scale model for such megawatt-range solid-rotor machines. The performance of the 120 kW solid-rotor induction motors is determined by experimental measurements and finite element calculations.
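For orientation, the basic speed relations behind such a high-speed machine are the synchronous speed n_s = 120·f/p (in rpm, with f the supply frequency and p the number of poles) and the slip s = (n_s − n)/n_s. A small sketch; the two-pole assumption, the ~167 Hz supply frequency it implies for 10 000 rpm, and the rotor speed below are our own illustrative inferences, not figures from the thesis:

```python
def synchronous_speed_rpm(f_hz, poles):
    """Synchronous speed in rpm: n_s = 120 * f / p."""
    return 120.0 * f_hz / poles

def slip(n_sync, n_rotor):
    """Per-unit slip: s = (n_s - n) / n_s."""
    return (n_sync - n_rotor) / n_sync

# A two-pole machine at ~167 Hz reaches roughly 10 000 rpm synchronous speed
n_s = synchronous_speed_rpm(f_hz=166.67, poles=2)
s = slip(n_s, n_rotor=9800.0)   # illustrative loaded rotor speed
```

Solid rotors typically run at noticeably higher slip than laminated squirrel-cage rotors, which is one reason their resistance and leakage inductance matter so much for efficiency.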


The resource utilization level in the open laboratories of several universities has been shown to be very low. Our aim is to take advantage of those idle resources for parallel computation without disturbing the local load. In order to provide a system that lets us execute parallel applications in such a non-dedicated cluster, we use an integral scheduling system that considers both space- and time-sharing concerns. For the time-sharing (TS) aspect, we use a technique based on the communication-driven coscheduling principle. This kind of TS system has implications for the space-sharing (SS) system, which force us to modify the way job scheduling is traditionally done. In this paper, we analyze the relation between the TS and SS systems in a non-dedicated cluster. As a consequence of this analysis, we propose a new technique, termed 3DBackfilling. This proposal implements the well-known SS technique of backfilling, but applied to an environment where the multiprogramming level (MPL) of the parallel applications is greater than one. Besides, 3DBackfilling considers the requirements of the local workload running on each node. Our proposal was evaluated in a PVM/MPI Linux cluster, and it was compared with several more traditional SS policies applied to non-dedicated environments.
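For reference, plain EASY-style backfilling - the SS technique that 3DBackfilling extends - can be sketched as a single scheduling decision on an idle machine. This conservative sketch ignores the MPL and local-workload dimensions that 3DBackfilling adds, and only backfills jobs that finish before the head job's reservation:

```python
def easy_backfill(jobs, total_cpus):
    """EASY-style backfilling: one scheduling decision on an idle machine.

    jobs: FIFO list of (name, cpus, runtime).
    Returns the names of jobs started immediately.
    """
    started, queue_ = [], list(jobs)
    avail = total_cpus
    running = []                            # (end_time, cpus)

    # start the FIFO prefix of jobs that fit
    while queue_ and queue_[0][1] <= avail:
        name, cpus, rt = queue_.pop(0)
        avail -= cpus
        running.append((rt, cpus))
        started.append(name)
    if not queue_:
        return started

    # reservation: earliest time enough CPUs free up for the blocked head job
    head_cpus = queue_[0][1]
    free, shadow = avail, 0.0
    for end, cpus in sorted(running):
        free += cpus
        shadow = end
        if free >= head_cpus:
            break

    # backfill jobs behind the head if they fit now AND finish by the
    # reservation, so the head job's start time is never delayed
    for name, cpus, rt in queue_[1:]:
        if cpus <= avail and rt <= shadow:
            avail -= cpus
            started.append(name)
    return started

# 4-CPU machine: A blocks B, but short C slips in without delaying B
order = easy_backfill([('A', 3, 10.0), ('B', 2, 5.0), ('C', 1, 8.0)], 4)
```

3DBackfilling generalizes this two-dimensional (CPUs × time) packing with a third dimension, the MPL per node, while also respecting the response time promised to local users.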


In this work, we present an integral scheduling system for non-dedicated clusters, termed CISNE-P, which ensures the performance required by the local applications while simultaneously allocating cluster resources to parallel jobs. Our approach solves the problem efficiently by using a social contract technique. This kind of technique is based on reserving computational resources, preserving a predetermined response time for local users. CISNE-P is a middleware that includes both a previously developed space-sharing job scheduler and a dynamic coscheduling system, a time-sharing scheduling component. The experiments performed in a Linux cluster show that these two scheduler components are complementary and that good coordination between them improves global performance significantly. We also compare two different CISNE-P implementations: one developed inside the kernel, and the other implemented entirely in user space.


Background: Providing support for research is one of the key issues in the ongoing attempts to improve primary care. However, when patient care takes up a significant part of a GP's time, conducting research is difficult. In this study we examine the working conditions and profile of GPs who publish in three leading medical journals and propose possible remedial policy actions. Findings: The authors of all articles published in 2006 and 2007 in three international family medicine journals - Annals of Family Medicine, Family Practice, and Journal of Family Practice - were contacted by e-mail and asked to complete a questionnaire investigating the following variables: availability of specific time for research, time devoted to research, number of patients attended, and university affiliation. Only GPs were included in the study. Three hundred and ten relevant articles published between 2006 and 2007 were identified and their authors contacted using a survey tool. 124 researchers responded to our questionnaire; the 45% of respondents who were not GPs were excluded. On average, the GPs spent 2.52 days per week and 6.9 hours per day on patient care, seeing 45 patients per week. Seventy-five per cent of the GPs had specific time assigned to research, on average 13 hours per week; 79% were affiliated to a university and 69% held teaching positions. Conclusions: Most GPs who publish original articles in leading journals have time specifically assigned to research as part of their normal working schedule, and they see a relatively small number of patients. Improving the working conditions of family physicians who intend to carry out research is likely to lead to better research results.


Despite the progressive ageing of the worldwide population, negative attitudes towards old age have proliferated, owing to cultural constructs and myths that, for decades, have presented old age as a synonym of decay, deterioration and loss. Moreover, even though every human being knows that he or she will age and that ageing is a process that cannot be stopped, it always seems distant, far off in the future, and therefore remains invisible. In this paper, I aim to analyse the invisibility of old age and its spaces through two contemporary novels and their ageing female protagonists - Maudie Fowler in Doris Lessing's The Diary of a Good Neighbour and Erica March in Rose Tremain's The Cupboard. Although invisible to the rest of society, these elderly characters succeed in becoming significant in the lives of younger protagonists who, immersed in their active lives, become aware of the need to enlarge our vision of old age.


In this work we show that SiC-based MIS capacitors can operate in environments with extremely high concentrations of water vapor and still remain sensitive to hydrogen, CO and hydrocarbons, making these devices suitable for monitoring the exhaust gases of hydrogen- or hydrocarbon-based fuel cells. Under the harshest conditions (45% water vapor by volume ratio to nitrogen), Pt/TaOx/SiO2/SiC MIS capacitors are able to detect the presence of 1 ppm of hydrogen, 2 ppm of CO, 100 ppm of ethane or 20 ppm of ethene, concentrations that are far below the legal permissible exposure limits.