966 results for Distributed processing
Abstract:
Most research on D-STBC has assumed that cooperative relay nodes are perfectly synchronised. Since such synchronisation is difficult to achieve in many practical systems, this paper proposes a simple yet optimum detector for the case of two relay nodes, which proves to be much more robust against timing misalignment than the conventional STBC detector.
Abstract:
Semiotics is the study of signs. The application of semiotics to information systems design is based on the notion that information systems are organizations within which agents deploy signs in the form of actions according to a set of norms. An analysis of the relationships among the agents, their actions and the norms would give a better specification of the system. Distributed multimedia systems (DMMS) can be viewed as systems consisting of many dynamic, self-controlled normative agents engaging in complex interaction and processing of multimedia information. This paper reports work on applying the semiotic approach to the design and modeling of DMMS, with emphasis on semantic analysis under the semiotic framework. A semantic model of DMMS describing its various components and their ontological dependencies is presented, which then serves as a design model and is implemented in a semantic database. Benefits of using the semantic database are discussed with reference to various design scenarios.
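To make the agent/action/norm framing concrete, here is a minimal illustrative sketch in Python; the abstract does not give the paper's actual semantic model or database schema, so every class and field name below is hypothetical.

```python
from dataclasses import dataclass

# Illustrative sketch only: all names are hypothetical, not the
# paper's actual semantic model or semantic-database schema.

@dataclass
class Agent:
    name: str                 # a dynamic, self-controlled normative agent

@dataclass
class Norm:
    description: str          # a rule governing how signs may be deployed

@dataclass
class Action:
    actor: Agent              # ontological dependency: an action exists
    sign: str                 # only relative to the agent performing it
    governed_by: list[Norm]   # ...and the norms that authorize it

# Example: a viewer agent performing a norm-governed playback action.
viewer = Agent("viewer")
session_norm = Norm("playback requires an active session")
play = Action(actor=viewer, sign="play", governed_by=[session_norm])
```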
Abstract:
Much is known about the functional mechanisms involved in visual search. Yet the fundamental question of whether the visual system can perform different types of visual analysis at different spatial resolutions remains unsettled. In the visual-attention literature, the distinction between different spatial scales of visual processing corresponds to the distinction between distributed and focused attention. Some authors have argued that singleton detection can be performed in distributed attention, whereas others suggest that even such a simple visual operation involves focused attention. Here we show that microsaccades were spatially biased during singleton discrimination but not during singleton detection. The results support the hypothesis that some coarse visual analysis can be performed in a distributed attention mode.
Abstract:
We study a two-way relay network (TWRN) in which distributed space-time codes are constructed across multiple relay terminals operating in an amplify-and-forward mode. Each relay transmits a scaled linear combination of its received symbols and their conjugates, with the scaling factor chosen based on automatic gain control. We consider equal power allocation (EPA) across the relays, as well as the optimal power allocation (OPA) strategy given access to instantaneous channel state information (CSI). For EPA, we derive an upper bound on the pairwise error probability (PEP), from which we prove that full diversity is achieved in TWRNs. This result is in contrast to one-way relay networks, in which only a maximum diversity order of unity can be obtained. When instantaneous CSI is available at the relays, we show that the OPA which minimizes the conditional PEP of the worse link can be cast as a generalized linear fractional program, which can be solved efficiently using the Dinkelbach-type procedure. We also prove that, if the sum-power of the relay terminals is constrained, then the OPA will activate at most two relays.
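As a rough illustration of the relay operation described above (a scaled linear combination of the received sample and its conjugate, with AGC-based scaling), the following Python sketch simulates one relay's transmit step. The combining coefficients, power levels and noise figures are assumptions; the paper's actual distributed code design is not reproduced here.

```python
import numpy as np

# Sketch of one relay's amplify-and-forward step. The coefficients
# (A, B) and the AGC rule below are illustrative assumptions only.

def relay_transmit(r, A, B, power, noise_var):
    # AGC: scale so the transmit power is approximately `power`
    agc = np.sqrt(power / (np.abs(r) ** 2 + noise_var))
    return agc * (A * r + B * np.conj(r))

# Example: two relays forming an Alamouti-like pair under EPA.
rng = np.random.default_rng(0)
s = (1 + 1j) / np.sqrt(2)                        # a QPSK symbol
h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
r = h * s + noise                                # samples received at the relays
t1 = relay_transmit(r[0], A=1, B=0, power=1.0, noise_var=0.01)
t2 = relay_transmit(r[1], A=0, B=1, power=1.0, noise_var=0.01)  # conjugate branch
```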
Abstract:
Distributed energy and water balance models require time-series surfaces of the meteorological variables involved in hydrological processes. Most hydrological GIS-based models apply simple interpolation techniques to extrapolate the point-scale values registered at weather stations to the watershed scale. In mountainous areas, where the monitoring network does not effectively cover the complex terrain heterogeneity, simple geostatistical methods for spatial interpolation are not always representative enough, and algorithms that explicitly or implicitly account for the features creating strong local gradients in the meteorological variables must be applied. Originally developed as a meteorological pre-processing tool for a complete hydrological model (WiMMed), MeteoMap has become independent software. The individual interpolation algorithms used to approximate the spatial distribution of each meteorological variable were carefully selected taking into account both the specific variable being mapped and the common lack of input data in Mediterranean mountainous areas. They include corrections with height for both rainfall and temperature (Herrero et al., 2007) and topographic corrections for solar radiation (Aguilar et al., 2010). MeteoMap is GIS-based freeware, available upon registration. Input data include weather station records and topographic data; the output consists of tables and maps of the meteorological variables at hourly, daily, predefined rainfall-event-duration or annual scales. It offers its own pre- and post-processing tools, including video output, map printing and the possibility of exporting the maps to image or ASCII ArcGIS formats. This study presents the software's user-friendly interface and shows some case studies with applications to hydrological modeling.
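The abstract does not detail the interpolation algorithms, but the cited height corrections suggest a scheme along the following lines. This is a minimal Python sketch assuming a standard-atmosphere lapse rate and inverse distance weighting; MeteoMap's actual methods (Herrero et al., 2007; Aguilar et al., 2010) may differ substantially.

```python
import numpy as np

# Sketch of height-corrected temperature interpolation: reduce station
# temperatures to sea level with a lapse rate, interpolate with inverse
# distance weighting (IDW), then restore the elevation effect per cell.
# Lapse rate and IDW exponent are illustrative assumptions.

LAPSE_RATE = -0.0065  # deg C per metre (standard-atmosphere assumption)

def interpolate_temperature(xy_sta, z_sta, t_sta, xy_grid, z_grid, p=2.0):
    t0 = t_sta - LAPSE_RATE * z_sta                     # reduce to sea level
    d = np.linalg.norm(xy_grid[:, None, :] - xy_sta[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** p                  # IDW weights
    t0_grid = (w * t0).sum(axis=1) / w.sum(axis=1)      # sea-level field
    return t0_grid + LAPSE_RATE * z_grid                # restore elevation effect

# Example: three stations at different elevations, two grid cells.
xy_sta = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
z_sta = np.array([500.0, 1500.0, 2500.0])
t_sta = np.array([12.0, 6.0, -1.0])
xy_grid = np.array([[2.0, 2.0], [4.0, 4.0]])
z_grid = np.array([1000.0, 2000.0])
print(interpolate_temperature(xy_sta, z_sta, t_sta, xy_grid, z_grid))
```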
Abstract:
The productivity of 28 tomato cultivars was evaluated over three stages of harvest. The study was carried out from June to December 1999 in an open field at the experimental area of the Section of Olericulture and Aromatic Medicinal Plants, Department of Crop Production at FCAV-UNESP, Jaboticabal, SP, Brazil. The cultivars studied were H 7155, Hypeel, Andino, U 573, H 9036, IPA 6, H 9494, AG 33, Yuba, RPT 1294, AG 72, Pelmeech, Curico, Hypeel 45, RPT 1478, H 9492, H 9498, H 2710, Hitech 45, Halley, Botu 13, H 9553, U 646, NK 1570, AG 45, RPT 1095, RPT 1570, and PSX 37511. The experimental design was a randomized block design with four replications and five plants per plot. Productivity was evaluated at three stages of harvest: 119, 149 and 179 days after seeding. There were no significant differences among the cultivars at the first harvest (119 days). The majority of the cultivars produced their highest yield at the second harvest; the most productive cultivars were Curico and AG 72, which yielded 4.69 and 4.67 kg/plant, respectively, although they did not differ statistically from the cultivars Hypeel 45 (4.35 kg/plant) and H 9498 (4.16 kg/plant). Yields of the cultivars Andino and H 9494 were evenly distributed between the second and third harvests. At the third harvest, cultivar IPA 6 had the highest yield (2.9 kg/plant) and was statistically different from all other cultivars except H 9036 (2.34 kg). These two cultivars had the most delayed and concentrated maturity, making them suitable for mechanical harvesting, although at a later time. Cultivar AG 72 had the greatest total yield (5.76 kg/plant), but it was not statistically different from cultivars Hypeel 45 (5.43 kg), Curico (4.17 kg), H 9498 (4.83 kg), H 7155 (4.58 kg) and Halley (4.55 kg). All of the cultivars, except H 9036, IPA 6, Andino and H 9494, showed concentrated maturity at the second harvest, making them suitable for mechanical harvesting.
Abstract:
Most architectures proposed for developing Distributed Virtual Environments (DVE) support only a limited number of users. To support the development of applications over the Internet infrastructure, with hundreds or perhaps thousands of users logged on to the DVE simultaneously, several techniques for managing resources, such as bandwidth and processing capacity, must be implemented. The strategy presented in this paper combines methods to attain the required scalability, in particular an application-level multicast protocol.
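One common way to realize application-level multicast in a DVE is to partition the world into regions and map each region to a multicast group, so each user receives updates only from nearby users. The sketch below illustrates that idea under assumptions not stated in the abstract; the region size and grouping scheme are hypothetical, not the paper's actual strategy.

```python
# Region-based interest management: each square region of the virtual
# world is one application-level multicast group. REGION_SIZE and the
# 9-group neighbourhood rule are illustrative assumptions.

REGION_SIZE = 100.0  # world units per square region (assumed)

def region_of(x, y):
    """Map a world position to its region (multicast group key)."""
    return (int(x // REGION_SIZE), int(y // REGION_SIZE))

def groups_to_join(x, y):
    """Subscribe to the user's region plus its 8 neighbours, so that
    updates from users just across a region border are not missed."""
    rx, ry = region_of(x, y)
    return {(rx + dx, ry + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)}

# Example: a user at (250, 130) joins the 9 groups around region (2, 1).
print(sorted(groups_to_join(250.0, 130.0)))
```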
Abstract:
To simplify computer management, many system administrators are adopting advanced techniques to manage the software configuration of enterprise computer networks, but the tight coupling between hardware and software makes every PC an individually managed entity, lowering scalability and increasing the cost of managing hundreds or thousands of PCs. Virtualization is an established technology; however, its use has been focused more on server consolidation and virtual desktop infrastructure than on managing distributed computers over a network. This paper discusses the feasibility of the Distributed Virtual Machine Environment, a new approach to enterprise computer management that combines virtualization and a distributed system architecture as the basis of the management architecture. © 2008 IEEE.
Abstract:
In large distributed systems, where shared resources are owned by distinct entities, there is a need to reflect resource ownership in resource allocation. An appropriate resource management system should guarantee that resource owners have access to a share of the resources proportional to the share they provide. To achieve this, policies can be used to revoke access to resources currently used by other users. In this paper, a scheduling policy based on the concept of distributed ownership, called the Owner Share Enforcement Policy (OSEP), is introduced. OSEP's goal is to guarantee that owners do not have their jobs postponed for long periods of time. We evaluate the results achieved with the application of this policy using metrics that describe policy violation, loss of capacity, policy cost and user satisfaction in environments with and without job checkpointing. We also evaluate and compare OSEP with the Fair-Share policy, and from these results it is possible to capture the trade-offs among different ways of achieving fairness based on user satisfaction. © 2009 IEEE.
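A minimal sketch of the proportional-share idea behind OSEP follows. The revocation trigger below is an illustrative guess: the abstract does not give the policy's exact rules (grace periods, victim selection, checkpoint handling), only that owners under their entitled share may reclaim resources from other users.

```python
# Illustrative sketch of ownership-proportional entitlement and a
# revocation trigger; not OSEP's actual rules, which the abstract
# does not specify.

def owed_share(contributed, total_contributed, total_resources):
    """Resources an owner is entitled to, proportional to contribution."""
    return total_resources * contributed / total_contributed

def should_revoke(owner_usage, contributed, total_contributed, total_resources):
    """Trigger revocation of resources held by other users when the
    owner's current usage falls below their entitled share."""
    return owner_usage < owed_share(contributed, total_contributed, total_resources)

# Example: an owner contributing 40 of 100 machines is entitled to 40
# of the 100 allocated; using only 10, revocation fires.
print(should_revoke(owner_usage=10, contributed=40,
                    total_contributed=100, total_resources=100))  # True
```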
Abstract:
The discovery of the participation of astrocytes as active elements in glutamatergic tripartite synapses (composed of functional units of two neurons and one astrocyte) has led to the construction of models of cognitive functioning in the human brain, focusing on associative learning, sensory integration, conscious processing and memory formation/retrieval. We have modelled human cognitive functions by means of an ensemble of functional units (tripartite synapses) connected by gap junctions that link distributed astrocytes, allowing the formation of intra- and intercellular calcium waves that putatively mediate large-scale cognitive information processing. The model contains a diagram of the molecular mechanisms present in tripartite synapses and contributes to explaining the physiological bases of cognitive functions. It can potentially be expanded to explain emotional functions and psychiatric phenomena. © MSM 2011.
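As a toy illustration of the large-scale picture (astrocytes linked by gap junctions supporting intercellular calcium waves), the sketch below runs a simple diffusion process on a graph of astrocytes. It is not the paper's molecular model, and all parameters are arbitrary assumptions.

```python
import numpy as np

# Toy calcium-wave propagation on an astrocyte gap-junction graph:
# graph diffusion plus passive decay. Coupling and decay constants
# are arbitrary; this is not the paper's molecular-level model.

def step(calcium, adjacency, coupling=0.1, decay=0.05):
    """One update: gap-junction diffusion plus passive decay."""
    laplacian = adjacency @ calcium - adjacency.sum(axis=1) * calcium
    return calcium + coupling * laplacian - decay * calcium

# Example: a ring of 6 astrocytes; stimulate astrocyte 0 and watch
# the calcium elevation spread to its neighbours over 5 steps.
n = 6
adjacency = np.zeros((n, n))
for i in range(n):
    adjacency[i, (i + 1) % n] = adjacency[(i + 1) % n, i] = 1.0
calcium = np.zeros(n)
calcium[0] = 1.0  # initial elevation at a stimulated tripartite synapse
for _ in range(5):
    calcium = step(calcium, adjacency)
print(np.round(calcium, 3))
```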
Abstract:
We evaluated the effect of adding by-products from the processing of oil seeds to the diet of lambs on carcass and meat traits. Twenty-four non-castrated weaned male Santa Inês lambs, approximately 70 days old and with an initial average weight of 19.11 ± 2.12 kg, were distributed in a completely randomized design. Treatments consisted of diets containing the by-products, with 70% concentrate and 30% Tifton hay (Cynodon spp.), and were termed SM (control with soybean meal), SC (formulated with soybean cake), SUC (formulated with sunflower cake) and PC (formulated with peanut cake). Diets had no effect on the carcass traits evaluated. There was no significant effect on the mean values of perirenal, omental and mesenteric fats (0.267, 0.552 and 0.470 kg, respectively), and there was no influence on the percentages of moisture, ether extract, crude protein or ash in the loin among the experimental diets. Diets containing by-products from the processing of oil seeds did not change the fatty acids found in lamb meat. The use of by-products from oil seeds provided similar carcass and meat traits, and thus their use can be recommended as alternative protein and energy sources for feedlot lambs.
Abstract:
The web services (WS) technology provides a comprehensive solution for representing, discovering, and invoking services in a wide variety of environments, including Service Oriented Architectures (SOA) and grid computing systems. At the core of WS technology lie a number of XML-based standards, such as the Simple Object Access Protocol (SOAP), that have successfully ensured WS extensibility, transparency, and interoperability. Nonetheless, there is an increasing demand to enhance WS performance, which is severely impaired by XML's verbosity. SOAP communications produce considerable network traffic, making them unfit for distributed, loosely coupled, and heterogeneous computing environments such as the open Internet. They also introduce higher latency and processing delays than other technologies, like Java RMI and CORBA. WS research has recently focused on SOAP performance enhancement. Many approaches build on the observation that SOAP message exchange usually involves highly similar messages (those created by the same implementation usually have the same structure, and those sent from a server to multiple clients tend to show similarities in structure and content). Similarity evaluation and differential encoding have thus emerged as SOAP performance enhancement techniques. The main idea is to identify the common parts of SOAP messages, to be processed only once, avoiding a large amount of overhead. Other approaches investigate nontraditional processor architectures, including micro- and macro-level parallel processing solutions, so as to further increase the processing rates of SOAP/XML software toolkits. This survey paper provides a concise yet comprehensive review of the research efforts aimed at SOAP performance enhancement. A unified view of the problem is provided, covering almost every phase of SOAP processing, from message parsing, serialization, deserialization, compression, multicasting, and security evaluation to data/instruction-level processing.
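To make the differential-encoding idea concrete: if the sender and receiver share a reference message, only a delta needs to be exchanged. Below is a minimal Python sketch using line-based text diffs; the SOAP envelope contents are invented, and real systems use compact, XML-aware diff and encoding formats rather than plain text diffs.

```python
import difflib

# Minimal sketch of differential encoding for highly similar SOAP
# messages. Envelope text is invented; production systems use
# XML-aware diff/encoding schemes, not line-based text diffs.

reference = """<soap:Envelope><soap:Body>
<getQuote><symbol>ACME</symbol></getQuote>
</soap:Body></soap:Envelope>"""

message = """<soap:Envelope><soap:Body>
<getQuote><symbol>XYZ</symbol></getQuote>
</soap:Body></soap:Envelope>"""

# Sender: compute the delta against the shared reference. (ndiff
# deltas embed context lines, so a real system would ship a far more
# compact patch format; this only illustrates the principle.)
delta = list(difflib.ndiff(reference.splitlines(), message.splitlines()))

# Receiver: reconstruct the new message from the delta.
restored = "\n".join(difflib.restore(delta, 2))
assert restored == message  # identical structure; only the symbol differs
```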