800 results for Intelligent environments
Abstract:
In the wake of the success of Peer-to-Peer (P2P) networking, security has arisen as one of its main concerns, becoming a key issue when evaluating a P2P system. Unfortunately, the design of some systems focused on issues such as scalability or overall performance, but not on security. As a result, security mechanisms must be provided at a later stage, after the system has already been designed and partially (or even fully) implemented, which may prove a cumbersome proposition. This work describes how a security layer was provided under such circumstances for a specific Java-based P2P framework: JXTA-Overlay.
Abstract:
The objective of this work was to evaluate the growth of the mangrove oyster Crassostrea gasar cultured in marine and estuarine environments. Oysters were cultured for 11 months in a longline system at two study sites - São Francisco do Sul and Florianópolis - in the state of Santa Catarina, Southern Brazil. Water chlorophyll-α concentration, temperature, and salinity were measured weekly. The oysters were measured monthly (shell size and weight gain) to assess growth. At the end of the culture period, the average wet flesh weight, dry flesh weight, and shell weight were determined, as well as the distribution of oysters per size class. Six nonlinear models (logistic, exponential, Gompertz, Brody, Richards, and von Bertalanffy) were fitted to the oyster growth data set. Final mean shell sizes were larger in São Francisco do Sul than in Florianópolis. In addition, oysters cultured in São Francisco do Sul were more uniformly distributed across the four size classes than those cultured in Florianópolis. The highest average values of wet flesh weight and shell weight were observed in São Francisco do Sul, whereas dry flesh weight did not differ between the sites. The estuarine environment is thus the more promising one for oyster farming.
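As a rough illustration of the model-fitting step, the sketch below fits one of the six cited curves (Gompertz) to monthly shell-size data with scipy; all measurements and starting guesses are hypothetical placeholders, not the study's data.

```python
# Sketch: fitting a Gompertz growth curve to monthly shell-height data.
# The measurements and initial guesses are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, k):
    """a = asymptotic size, b = displacement, k = growth rate."""
    return a * np.exp(-b * np.exp(-k * t))

months = np.arange(1, 12)                            # 11 months of culture
shell_mm = np.array([12, 20, 28, 35, 41, 46, 50,
                     53, 56, 58, 60], dtype=float)   # hypothetical sizes, mm

(a, b, k), _ = curve_fit(gompertz, months, shell_mm, p0=[70.0, 2.0, 0.3])
print(f"asymptotic size = {a:.1f} mm, b = {b:.2f}, k = {k:.2f}/month")
```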
Abstract:
The objective of this work was to estimate the repeatability of adaptability and stability parameters of common bean between years, within each biennium from 2003 to 2012, in the state of Minas Gerais, Brazil. Grain yield data from value for cultivation and use (VCU) trials of common bean were analyzed. Grain yield, ecovalence, regression coefficient, and coefficient of determination were estimated considering location and sowing season per year, within each biennium. Subsequently, an analysis of variance of these estimates was carried out, and repeatability was estimated in the biennia. The repeatability estimate for grain yield was relatively high in most of the biennia, but for ecovalence and regression coefficient it was null or of small magnitude, which indicates that the identification of common bean lines for recommendation is more reliable when based on mean yield rather than on stability parameters.
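Since ecovalence is one of the stability parameters whose repeatability proved weak, the sketch below shows, for reference, how Wricke's ecovalence is computed from a genotype × environment mean-yield table; the yield matrix is a hypothetical illustration, not the trial data.

```python
# Sketch: Wricke's ecovalence from a genotype x environment yield table.
# Rows = common bean lines, columns = environments (location x sowing
# season per year); the values are hypothetical, in t/ha.
import numpy as np

yields = np.array([[2.1, 2.8, 1.9, 2.5],
                   [2.4, 2.6, 2.2, 2.3],
                   [1.8, 3.0, 1.6, 2.9]])

grand = yields.mean()
geno_mean = yields.mean(axis=1, keepdims=True)   # per-line means
env_mean = yields.mean(axis=0, keepdims=True)    # per-environment means

# GxE interaction deviations; ecovalence is their sum of squares per line.
interaction = yields - geno_mean - env_mean + grand
ecovalence = (interaction ** 2).sum(axis=1)
print(ecovalence)  # lower values = more stable lines
```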
Abstract:
The objective of this work was to identify, by biometric analyses, the most stable soybean parents with higher oil or protein contents, cultivated in different seasons and locations in the state of Minas Gerais, Brazil. Forty-nine genotypes were evaluated in the municipalities of Viçosa, Visconde do Rio Branco, and São Gotardo, in the state of Minas Gerais, from 2009 to 2011. Protein and oil contents were analyzed by infrared spectrometry using an FT-NIR analyzer. The effects of genotype, environment, and genotype × environment interaction were significant. The BARC-8 soybean genotype is the best parent for increasing protein contents in the progenies, followed by BR 8014887 and CS 3032PTA276-3-4. Selection for high oil content is more efficient when the crosses involve the Suprema, CD 01RR8384, and A7002 genotypes, which show high mean phenotypic values, wide adaptability, and greater stability to environmental variation.
Abstract:
To optimally manage a metapopulation, managers and conservation biologists can favor a particular spatial distribution of habitat (e.g. aggregated or random). However, which spatial distribution provides the highest habitat occupancy remains ambiguous, and numerous contradictory results exist. Habitat occupancy depends on the balance between local extinction and colonization. Thus, the issue becomes even more puzzling when various forms of relationship - positive or negative co-variation - exist between local extinction and colonization rates within habitat types. Using an analytical model, we first demonstrate that the habitat occupancy of a metapopulation is significantly affected by the presence of habitat types that display different extinction-colonization dynamics, considering: (i) variation in extinction or colonization rate, and (ii) positive and negative co-variation between the two processes within habitat types. We then examine, with a spatially explicit stochastic simulation model, how different degrees of habitat aggregation affect occupancy predictions under similar scenarios. An aggregated distribution of habitat types provides the highest habitat occupancy when local extinction risk is spatially heterogeneous and high in some places, whereas a random distribution of habitat provides the highest habitat occupancy when colonization rates are high. Because spatial variability in local extinction rates always favors aggregation of habitats, we only need to know about spatial variability in colonization rates to determine whether or not aggregating habitat types increases metapopulation occupancy. From a comparison of the results obtained with the analytical model and with the spatially explicit stochastic simulation model, we determine the conditions under which a simple metapopulation model closely matches the results of a more complex spatial simulation model with explicit heterogeneity.
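The extinction-colonization balance underlying this analysis is captured, in its simplest single-habitat form, by the classic Levins patch-occupancy model; the sketch below integrates it numerically and checks the analytic equilibrium p* = 1 - e/c. The rates are arbitrary illustrations, and the paper's analytical model extends this balance to several habitat types with co-varying rates.

```python
# Sketch: the Levins model dp/dt = c*p*(1 - p) - e*p, whose equilibrium
# occupancy is p* = 1 - e/c. Rates c and e are arbitrary illustrations.
def simulate_levins(c, e, p0=0.5, dt=0.01, steps=10_000):
    """Integrate the Levins model with forward Euler."""
    p = p0
    for _ in range(steps):
        p += dt * (c * p * (1.0 - p) - e * p)
    return p

c, e = 0.4, 0.1
print(simulate_levins(c, e))   # converges near the analytic equilibrium
print(1.0 - e / c)             # p* = 0.75
```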
Abstract:
In this paper we address the question as to why participants tend to respond realistically to situations and events portrayed within an Immersive Virtual Reality (IVR) system. The idea is put forward, based on experience of a large number of experimental studies, that there are two orthogonal components that contribute to this realistic response. The first is "being there", often called "presence", the qualia of having a sensation of being in a real place. We call this Place Illusion (PI). Second, Plausibility Illusion (Psi) refers to the illusion that the scenario being depicted is actually occurring. In the case of both PI and Psi, the participant knows for sure that they are not "there" and that the events are not occurring. PI is constrained by the sensorimotor contingencies afforded by the virtual reality system. Psi is determined by the extent to which the system can produce events that directly relate to the participant, and by the overall credibility of the scenario being depicted in comparison with expectations. We argue that when both PI and Psi occur, participants will respond realistically to the virtual reality.
Abstract:
This paper presents the use of our multimodal mixed reality telecommunication system to support remote acting rehearsal. The rehearsals involved two actors, located in London and Barcelona, and a director in another location in London. This triadic audiovisual telecommunication was performed in a spatial and multimodal collaborative mixed reality environment based on the 'destination-visitor' paradigm, which we define and put into use. We detail our heterogeneous system architecture, which spans the three distributed and technologically asymmetric sites, and features a range of capture, display, and transmission technologies. The actors' and director's experiences of rehearsing a scene via the system are then discussed, exploring the successes and failures of this heterogeneous form of telecollaboration. Overall, the common spatial frame of reference presented by the system to all parties was highly conducive to theatrical acting and directing, allowing blocking, gross gesture, and unambiguous instruction to be issued. The relative inexpressivity of the actors' embodiments was identified as the central limitation of the telecommunication, meaning that moments relying on performing and reacting to consequential facial expression and subtle gesture were less successful.
Abstract:
The objective of this work was to develop uni- and multivariate models to predict maximum soil shear strength (τmax) under different normal stresses (σn), water contents (U), and soil managements. The study was carried out in a Rhodic Haplustox under Cerrado (control area) and under no-tillage and conventional tillage systems. Undisturbed soil samples were taken in the 0.00-0.05 m layer and subjected to increasing U and σn in shear strength tests. The uni- and multivariate models - respectively τmax = 10^(a+bU) and τmax = 10^(a+bU+cσn) - were significant in all three soil management systems evaluated, and they satisfactorily explain the relationship between U, σn, and τmax. The soil under Cerrado has the highest shear strength (τ) estimated with the univariate model, regardless of the soil water content, whereas the soil under conventional tillage shows the highest values with the multivariate model, which were associated with the lowest water contents at the soil consistency limits in this management system.
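As a worked illustration of applying such fitted models, the sketch below evaluates τmax = 10^(a+bU) and τmax = 10^(a+bU+cσn) for a given water content and normal stress; the coefficients and inputs are hypothetical placeholders, not the study's estimates.

```python
# Sketch: evaluating the fitted models tau_max = 10**(a + b*U) and
# tau_max = 10**(a + b*U + c*sigma_n). All coefficients and inputs
# are hypothetical placeholders, not the values fitted in the study.
def tau_univariate(U, a=2.0, b=-1.5):
    return 10 ** (a + b * U)

def tau_multivariate(U, sigma_n, a=1.8, b=-1.4, c=0.002):
    return 10 ** (a + b * U + c * sigma_n)

U = 0.20          # soil water content, kg/kg (hypothetical)
sigma_n = 100.0   # normal stress, kPa (hypothetical)
print(tau_univariate(U), tau_multivariate(U, sigma_n))
```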
Abstract:
Topic-based classification of web portals can be exploited to identify a user's interests by collecting statistics on his or her browsing habits across the different categories. This Master's thesis examines the areas of web applications in which the collected statistics can be exploited for personalization. The general principles of content personalization, Internet advertising, and information retrieval are explained using mathematical models. In addition, the thesis describes the general characteristics of web portals and the issues involved in collecting the statistics.
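As a minimal illustration of the profiling idea, the sketch below turns per-category page-view counts into a normalized user-interest profile that could drive content personalization or ad targeting; the categories and counts are hypothetical.

```python
# Sketch: a normalized user-interest profile from per-category view counts.
# Categories and counts are hypothetical.
from collections import Counter

views = Counter({"sports": 12, "technology": 9, "finance": 3, "travel": 1})
total = sum(views.values())
profile = {category: n / total for category, n in views.items()}
print(sorted(profile.items(), key=lambda kv: kv[1], reverse=True))
```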
Abstract:
This Master's thesis deals with the development of press tools used for paperboard forming. The objectives of the work were to make tool design and manufacturing cheaper and faster, and to make the tools functionally more efficient. The work was also to include guidelines for designing and manufacturing the tools in the future. During the work, possible alternative tool structures, manufacturing materials and their treatment methods, as well as machining and the possibilities it offers as a manufacturing method, were investigated. The tool pair was designed to be modular, so that only some of the components need to be manufactured anew for new tools; at the same time, the number of tool parts was reduced significantly. A free-machining tool steel was selected as the manufacturing material, and it was machined in a horizontal machining center. In the final phase of the work, a cost calculation, broken down by work phase and by component, was prepared for the tool assembly. The tool was installed in the press and subjected to operational testing. Based on the experience accumulated during the work and on the trial run, suggestions for further development were made.
Abstract:
The solvability of the problem of fair exchange in a synchronous system subject to Byzantine failures is investigated in this work. The fair exchange problem arises when a group of processes are required to exchange digital items in a fair manner, which means that either each process obtains the item it was expecting or no process obtains any information on the inputs of others. After introducing a novel specification of fair exchange that clearly separates safety and liveness, we give an overview of the difficulty of solving such a problem in the context of a fully-connected topology. On the one hand, we show that no solution to fair exchange exists in the absence of an identified process that every process can trust a priori; on the other, a well-known solution to fair exchange relying on a trusted third party is recalled. These two results lead us to complete our system model with a flexible representation of the notion of trust. We then show that fair exchange is solvable if and only if a connectivity condition, named the reachable majority condition, is satisfied. The necessity of the condition is proven by an impossibility result, and its sufficiency by presenting a general solution to fair exchange relying on a set of trusted processes. The focus is then turned towards a specific network topology in order to provide a fully decentralized, yet realistic, solution to fair exchange. The general solution mentioned above is optimized by reducing the computational load assumed by trusted processes as far as possible. Accordingly, our fair exchange protocol relies on trusted tamperproof modules that have limited communication abilities and are only required in key steps of the algorithm. This modular solution is then implemented in the context of a pedagogical application developed for illustrating and apprehending the complexity of fair exchange. This application, which also includes the implementation of a wide range of Byzantine behaviors, allows executions of the algorithm to be set up and monitored through a graphical display. Surprisingly, some of our results on fair exchange seem contradictory with those found in the literature of secure multiparty computation, a problem from the field of modern cryptography, although the two problems have much in common. Both problems are closely related to the notion of trusted third party, but their approaches and descriptions differ greatly. By introducing a common specification framework, a comparison is proposed in order to clarify their differences and the possible origins of the confusion between them. This leads us to introduce the problem of generalized fair computation, a generalization of fair exchange. Finally, a solution to this new problem is given by generalizing our modular solution to fair exchange.
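The well-known trusted-third-party (TTP) solution recalled above can be sketched as follows: every process deposits its item with the TTP, which releases the items only once all deposits are in, so no process learns anything unless everyone obtains what they expected. The class and names below are illustrative; failure handling and cryptographic detail are omitted.

```python
# Sketch: fair exchange via a trusted third party. The TTP withholds all
# items until every participant has deposited; names are illustrative.
class TrustedThirdParty:
    def __init__(self, participants):
        self.expected = set(participants)
        self.deposits = {}

    def deposit(self, who, item):
        self.deposits[who] = item
        if set(self.deposits) == self.expected:
            # Exchange complete: release every item (fairness holds).
            return dict(self.deposits)
        return None  # withhold everything until all deposits arrive

ttp = TrustedThirdParty({"alice", "bob"})
print(ttp.deposit("alice", "contract.pdf"))  # None: bob has not deposited
print(ttp.deposit("bob", "payment-token"))   # both items released
```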
Abstract:
The past few decades have seen a considerable increase in the number of parallel and distributed systems. With the development of more complex applications, the need for more powerful systems has emerged, and various parallel and distributed environments have been designed and implemented. Each of the environments, including hardware and software, has unique strengths and weaknesses. There is no single parallel environment that can be identified as the best environment for all applications with respect to hardware and software properties. The main goal of this thesis is to provide a novel way of performing data-parallel computation in parallel and distributed environments by utilizing the best characteristics of different aspects of parallel computing. For the purpose of this thesis, three aspects of parallel computing were identified and studied. First, three parallel environments (shared memory, distributed memory, and a network of workstations) are evaluated to quantify their suitability for different parallel applications. Due to the parallel and distributed nature of the environments, the networks connecting the processors in these environments were investigated with respect to their performance characteristics. Second, scheduling algorithms are studied in order to make them more efficient and effective. A concept of application-specific information scheduling is introduced. The application-specific information is data about the workload extracted from an application, which is provided to a scheduling algorithm. Three scheduling algorithms are enhanced to utilize the application-specific information to further refine their scheduling properties. A more accurate description of the workload is especially important in cases where the work units are heterogeneous and the parallel environment is heterogeneous and/or non-dedicated. The results obtained show that the additional information regarding the workload has a positive impact on the performance of applications. Third, a programming paradigm for networks of symmetric multiprocessor (SMP) workstations is introduced. The MPIT programming paradigm incorporates the Message Passing Interface (MPI) with threads to provide a methodology for writing parallel applications that efficiently utilize the available resources and minimize the overhead. MPIT allows communication and computation to overlap by deploying a dedicated thread for communication, as sketched below. Furthermore, the programming paradigm implements an application-specific scheduling algorithm. The scheduling algorithm is executed by the communication thread, so the scheduling does not affect the execution of the parallel application. Performance results achieved with MPIT show that considerable improvements over conventional MPI applications are achieved.
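The overlap idea can be illustrated with mpi4py standing in for the thesis's own MPIT framework: a dedicated thread drains a send queue while the main thread keeps computing. The workload and message pattern below are placeholders.

```python
# Sketch of the MPIT idea: a dedicated communication thread overlaps
# message passing with computation. mpi4py stands in for MPIT itself;
# the computation is a placeholder. Run with, e.g., mpiexec -n 4.
import queue
import threading
from mpi4py import MPI

comm = MPI.COMM_WORLD

if comm.rank == 0:
    # Rank 0 gathers the results shipped by the workers' comm threads.
    for _ in range(10 * (comm.size - 1)):
        print(comm.recv(source=MPI.ANY_SOURCE, tag=0))
else:
    outbox = queue.Queue()

    def comm_thread():
        """Drain the queue, sending results while the main thread computes."""
        while True:
            payload = outbox.get()
            if payload is None:    # sentinel: computation finished
                break
            comm.send(payload, dest=0, tag=0)

    t = threading.Thread(target=comm_thread)
    t.start()
    for i in range(10):            # placeholder computation loop
        outbox.put(i * i)          # result is shipped off asynchronously
    outbox.put(None)
    t.join()
```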
Abstract:
The solid-rotor induction motor provides a mechanically and thermally reliable solution for demanding environments where other rotor solutions are prohibited or questionable. Solid rotors, which are manufactured from single pieces of ferromagnetic material, are commonly used in motors in which the rotation speeds substantially exceed the conventional speeds of laminated rotors with a squirrel-cage. During the operation of a solid-rotor electrical machine, the rotor core forms a conductor for both the magnetic flux and the electrical current. This causes an increase in the rotor resistance and rotor leakage inductance, which essentially decreases the power factor and the efficiency of the machine. The electromagnetic problems related to the solid-rotor induction motor are mostly associated with the low performance of the rotor. Therefore, the main emphasis in this thesis is put on solid steel rotor designs. The rotor designs studied in this thesis are based on the fact that the rotor construction should be extremely robust and reliable to withstand the high mechanical stresses caused by the rotational velocity of the rotor. In addition, the demanding operation environment sets requirements for the applied materials because of the high temperatures and oxidizing acids that may be present in the cooling fluid. Therefore, the solid rotors analyzed in this thesis are made of a single piece of ferromagnetic material without any additional parts, such as copper end-rings or a squirrel-cage. A pure solid rotor construction is rigid and able to keep its balance over a large speed range. It may also tolerate other environmental stresses such as corroding substances or abrasive particles. In this thesis, the main target is to improve the performance of an induction motor equipped with a solid steel rotor by traditional methods: by axial slitting of the rotor, by selecting a proper rotor core material, and by coating the rotor with a high-resistivity stainless ferromagnetic material. In solid steel rotor calculation, the rotor end-effects have a significant effect on the rotor characteristics. Thus, emphasis is also put on the comparison of different rotor end-factors. In addition, a corrective slip-dependent end-factor is proposed. The rotor designs covered in this thesis are the smooth solid rotor, the axially slitted solid rotor, and the slitted rotor having a uniform ferromagnetic coating cylinder. The thesis aims at design rules for multi-megawatt machines. Typically, megawatt-size solid-rotor machines find their applications mainly in the field of electric-motor-driven gas-compression systems, in steam-turbine applications, and in various types of large-power pump applications, where high operational speeds are required. In this thesis, a 120 kW, 10,000 rpm solid-rotor induction motor is used as a small-scale model for such megawatt-range solid-rotor machines. The performance of the 120 kW solid-rotor induction motor is determined by experimental measurements and finite element calculations.
Abstract:
Superheater corrosion causes vast annual losses for power companies. With a reliable corrosion prediction method, plants can be designed accordingly, and knowledge of fuel selection and determination of process conditions may be utilized to minimize superheater corrosion. The growing interest in using recycled fuels creates additional demands for the prediction of corrosion potential. Models depending on corrosion theories will fail if the relations between the inputs and the output are poorly known. A prediction model based on fuzzy logic and an artificial neural network is able to improve its performance as the amount of data increases. The corrosion rate of a superheater material can most reliably be detected with a test done in a test combustor or in a commercial boiler. The steel samples can be located in a special, temperature-controlled probe and exposed to the corrosive environment for a desired time. These tests give information about the average corrosion potential in that environment. Samples may also be cut from superheaters during shutdowns. The analysis of samples taken from probes or superheaters after exposure to a corrosive environment is a demanding task: if the corrosive contaminants can be reliably analyzed, the corrosion chemistry can be determined, and an estimate of the material lifetime can be given. In cases where the reason for corrosion is not clear, the determination of the corrosion chemistry and the lifetime estimation are more demanding. In order to provide a laboratory tool for the analysis and prediction, a new approach was chosen. During this study, the following tools were generated: (i) a model for the prediction of superheater fireside corrosion, based on fuzzy logic and an artificial neural network, built upon a corrosion database developed from fuel and bed material analyses and measured corrosion data, which predicts superheater corrosion with high accuracy at the early stages of a project; and (ii) an adaptive corrosion analysis tool based on image analysis, constructed as an expert system, which utilizes implementations of user-defined algorithms and thus allows the development of an artificially intelligent system for the task. According to the results of the analyses, several new rules were developed for the determination of the degree and type of corrosion. By combining these two tools, a user-friendly expert system for the prediction and analysis of superheater fireside corrosion was developed. This tool may also be used for the minimization of corrosion risks in the design of fluidized bed boilers.
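A rough sketch of the neural-network half of such a prediction model is given below, using scikit-learn's MLPRegressor on synthetic stand-ins for the fuel and bed-material analyses; the fuzzy-logic component and the real corrosion database are omitted, and all features and targets are hypothetical.

```python
# Sketch: a small neural-network regressor in the spirit of the corrosion
# prediction model. Features, targets, and their relationship are synthetic
# placeholders; the fuzzy-logic part of the real model is omitted.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical inputs: [Cl %, S %, alkali %, steam temperature in degC]
X = rng.uniform([0.0, 0.0, 0.0, 450], [1.0, 2.0, 3.0, 560], size=(200, 4))
y = 5 * X[:, 0] + 2 * X[:, 2] + 0.02 * (X[:, 3] - 450) + rng.normal(0, 0.2, 200)

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict([[0.5, 1.0, 1.5, 520]]))  # predicted corrosion rate
```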