886 results for Distributed File System


Relevance: 30.00%

Publisher:

Abstract:

Global Software Development (GSD) is an emerging distributed software engineering practice in which the higher communication overhead caused by temporal and geographical separation among developers is traded for gains in reduced development cost, improved flexibility and mobility for developers, increased access to skilled resource pools and more convenient customer involvement. However, due to its distributed nature, GSD faces many fresh challenges in aspects relating to project coordination, awareness, collaborative coding and effective communication. New software engineering methodologies and processes are required to address these issues. Research has shown that, with adequate support tools, Distributed Extreme Programming (DXP) – a distributed variant of the agile methodology Extreme Programming (XP) – can be both efficient and beneficial to GSD projects. In this paper, we present the design and realization of a collaborative environment, called Moomba, which assists a distributed team in both the instantiation and execution of a DXP process in GSD projects.

Relevance: 30.00%

Publisher:

Abstract:

In this paper the performance of a multiple-input multiple-output (MIMO) wireless communication system operating in an indoor environment, featuring both line-of-sight (LOS) and non-line-of-sight (NLOS) signal propagation, is assessed. In the model, the scattering objects are assumed to be uniformly distributed in an area surrounding the transmitting and receiving array antennas. Mutual coupling effects in the arrays are treated in an exact manner; however, interactions with scattering objects are taken into account via a single-bounce approach. Computer simulations of the system capacity are carried out for varying inter-element spacing in the receiving array and for assumed values of the LOS/NLOS power fraction and signal-to-noise ratio (SNR).
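
The abstract does not give the capacity expression used; as a rough, hedged illustration of the quantity typically simulated, the Python sketch below computes the capacity of one channel realisation under equal power allocation, with a simplified Rician mixture standing in for the LOS/NLOS power fraction. The function names, the K-factor parameterisation and the idealised LOS term are assumptions; mutual coupling and array spacing are not modelled.

    import numpy as np

    def mimo_capacity(H, snr_linear):
        # Capacity with equal power allocation:
        # C = log2 det(I + (SNR / Nt) * H H^H)  [bits/s/Hz]
        nr, nt = H.shape
        gram = np.eye(nr) + (snr_linear / nt) * (H @ H.conj().T)
        eigvals = np.linalg.eigvalsh(gram)      # Hermitian, so eigenvalues are real
        return float(np.sum(np.log2(eigvals)))

    def rician_channel(nr, nt, k_factor, rng):
        # Crude LOS + NLOS mixture: a fixed LOS term plus Rayleigh scattering,
        # weighted by a Rician K-factor standing in for the LOS/NLOS power fraction.
        los = np.ones((nr, nt), dtype=complex)
        nlos = (rng.standard_normal((nr, nt)) +
                1j * rng.standard_normal((nr, nt))) / np.sqrt(2.0)
        return (np.sqrt(k_factor / (k_factor + 1.0)) * los +
                np.sqrt(1.0 / (k_factor + 1.0)) * nlos)

    rng = np.random.default_rng(0)
    snr = 10.0 ** (20.0 / 10.0)   # 20 dB
    caps = [mimo_capacity(rician_channel(4, 4, 1.0, rng), snr) for _ in range(1000)]
    print("mean capacity:", np.mean(caps), "bits/s/Hz")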

Relevance: 30.00%

Publisher:

Abstract:

Despite decades of research, the uptake of formal methods for developing provably correct software in industry remains slow. One reason for this is the high cost of proof construction, an activity that, due to the complexity of the required proofs, is typically carried out using interactive theorem provers. In this paper we propose an agent-oriented architecture for interactive theorem proving with the aim of reducing the user interactions (and thus the cost) of constructing software verification proofs. We describe a prototype implementation of our architecture and discuss its application to a small but non-trivial case study.

Relevance: 30.00%

Publisher:

Abstract:

Context-aware systems represent extremely complex and heterogeneous distributed systems, composed of sensors, actuators, application components, and a variety of context processing components that manage the flow of context information between the sensors/actuators and applications. The need for middleware to seamlessly bind these components together is well recognised. Numerous attempts to build middleware or infrastructure for context-aware systems have been made, but these have provided only partial solutions; for instance, most have not adequately addressed issues such as mobility, fault tolerance or privacy. One of the goals of this paper is to provide an analysis of the requirements of a middleware for context-aware systems, drawing from both traditional distributed system goals and our experiences with developing context-aware applications. The paper also provides a critical review of several middleware solutions, followed by a comprehensive discussion of our own PACE middleware. Finally, it provides a comparison of our solution with the previous work, highlighting both the advantages of our middleware and important topics for future research.

Relevance: 30.00%

Publisher:

Abstract:

In recent years many real-time applications need to handle data streams. We consider distributed environments in which remote data sources continuously collect data from the real world or from other data sources and push the data to a central stream processor. In such environments, significant communication is induced by the transmission of rapid, high-volume and time-varying data streams, and a computing overhead is also incurred at the central processor. In this paper, we develop a novel filter approach, called the DTFilter approach, for evaluating windowed distinct queries in such a distributed system. The DTFilter approach is based on a search algorithm over a data structure of two height-balanced trees; it avoids transmitting duplicate items in data streams, thereby saving a large amount of network resources. In addition, theoretical analysis of the time spent performing the search and of the amount of memory needed is provided. Extensive experiments also show that the DTFilter approach achieves high performance.
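
The abstract gives only the high-level idea of the data structure; as a hedged sketch of the source-side filtering it describes — forwarding an item only if it has not already been sent within the current window — the following Python fragment uses a sliding-window multiset rather than the paper's two height-balanced trees. The class and method names are invented for illustration.

    from collections import deque

    class WindowedDistinctFilter:
        """Source-side filter: forward an item only if it has not already been
        transmitted within the current sliding window, so duplicates never
        reach the central stream processor (illustrative simplification)."""

        def __init__(self, window_size):
            self.window_size = window_size
            self.window = deque()          # (timestamp, item) in arrival order
            self.counts = {}               # item -> occurrences inside the window

        def _expire(self, now):
            # Drop items that have fallen out of the window.
            while self.window and self.window[0][0] <= now - self.window_size:
                _, old = self.window.popleft()
                self.counts[old] -= 1
                if self.counts[old] == 0:
                    del self.counts[old]

        def offer(self, timestamp, item):
            """Return True if the item should be sent to the coordinator."""
            self._expire(timestamp)
            first_in_window = item not in self.counts
            self.window.append((timestamp, item))
            self.counts[item] = self.counts.get(item, 0) + 1
            return first_in_window

    f = WindowedDistinctFilter(window_size=10)
    print([f.offer(t, x) for t, x in [(1, "a"), (2, "a"), (5, "b"), (14, "a")]])
    # -> [True, False, True, True]: the second "a" is suppressed; the later "a"
    #    falls outside the window and is transmitted again.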

Relevance: 30.00%

Publisher:

Abstract:

DMAPS (Distributed Multi-Agent Planning System) is a planning system developed for distributed multi-robot teams, based on MAPS (Multi-Agent Planning System). MAPS assumes that each agent has the same global view of the environment in order to determine the most suitable actions. This assumption fails when perception is local to the agents: each agent has only a partial and unique view of the environment. DMAPS addresses this problem by creating a probabilistic global view on each agent by fusing the perceptual information from each robot. The experimental results on consuming tasks show that while the probabilistic global view is not identical on each robot, the shared view is still effective in increasing the performance of the team.
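
The abstract does not state how the perceptual information is fused; one conventional way to build such a probabilistic shared view is to combine each robot's local occupancy estimates in log-odds form under an independence assumption. The Python sketch below is illustrative only — the grid representation and function names are invented, not taken from DMAPS.

    import numpy as np

    def to_log_odds(p):
        return np.log(p / (1.0 - p))

    def fuse_occupancy(local_grids, prior=0.5):
        """Fuse per-robot occupancy-probability grids into one probabilistic
        global view, assuming the robots' observations are independent."""
        fused = np.full_like(local_grids[0], to_log_odds(prior))
        for grid in local_grids:
            # Each robot contributes the evidence it holds beyond the prior.
            fused += to_log_odds(grid) - to_log_odds(prior)
        return 1.0 / (1.0 + np.exp(-fused))   # back to probabilities

    robot_a = np.array([[0.9, 0.5], [0.2, 0.5]])   # robot A observes the left column
    robot_b = np.array([[0.5, 0.8], [0.5, 0.3]])   # robot B observes the right column
    print(fuse_occupancy([robot_a, robot_b]))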

Relevance: 30.00%

Publisher:

Abstract:

OBJECTIVE: To determine the distribution of the pathological changes in the neocortex in multiple-system atrophy (MSA). METHOD: The vertical distribution of the abnormal neurons (neurons with enlarged or atrophic perikarya), surviving neurons, glial cytoplasmic inclusions (GCI) and neuronal cytoplasmic inclusions (NI) were studied in alpha-synuclein-stained material of frontal and temporal cortex in ten cases of MSA. RESULTS: Abnormal neurons exhibited two common patterns of distribution, viz., density was either maximal in the upper cortex or a bimodal distribution was present with a density peak in the upper and lower cortex. The NI were either located in the lower cortex or were more uniformly distributed down the cortical profile. The distribution of the GCI varied considerably between gyri and cases. The density of the glial cell nuclei was maximal in the lower cortex in the majority of gyri. In a number of gyri, there was a positive correlation between the vertical densities of the abnormal neurons, the total number of surviving neurons, and the glial cell nuclei. The vertical densities of the GCI were not correlated with those of the surviving neurons or glial cells but the GCI and NI were positively correlated in a small number of gyri. CONCLUSION: The data suggest that there is significant degeneration of the frontal and temporal lobes in MSA, the lower laminae being affected more significantly than the upper laminae. Cortical degeneration in MSA is likely to be secondary to pathological changes occurring within subcortical areas.

Relevance: 30.00%

Publisher:

Abstract:

In order to survive in the increasingly customer-oriented marketplace, continuous quality improvement marks the success of the fastest-growing quality organizations. In recent years, attention has been focused on intelligent systems, which have shown great promise in supporting quality control. However, only a small number of the currently used systems are reported to be operating effectively, because they are designed to maintain a quality level within the specified process rather than to focus on cooperation within the production workflow. This paper proposes an intelligent system with a newly designed algorithm and a universal process data exchange standard to overcome the challenges of demanding customers who seek high-quality and low-cost products. The intelligent quality management system is equipped with a "distributed process mining" feature to provide all levels of employees with the ability to understand the relationships between processes, especially when any aspect of the process is about to degrade or fail. An example of generalized fuzzy association rules is applied in the manufacturing sector to demonstrate how the proposed iterative process mining algorithm finds the relationships between distributed process parameters and the presence of quality problems.
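
As a loose, hypothetical illustration of the kind of rule the abstract mentions (not the paper's algorithm), the sketch below computes fuzzy support and confidence for a rule over process measurements that have already been mapped to fuzzy membership degrees; the attribute names and data are invented.

    # Each record holds membership degrees such as "temperature is HIGH" = 0.8.
    records = [
        {"temp_high": 0.8, "pressure_low": 0.7, "defect": 0.9},
        {"temp_high": 0.2, "pressure_low": 0.9, "defect": 0.1},
        {"temp_high": 0.9, "pressure_low": 0.6, "defect": 0.7},
    ]

    def fuzzy_support(items, records):
        """Support of an itemset: average over records of the minimum
        membership degree of its items (a common t-norm choice)."""
        return sum(min(r[i] for i in items) for r in records) / len(records)

    def fuzzy_confidence(antecedent, consequent, records):
        return (fuzzy_support(antecedent + consequent, records) /
                fuzzy_support(antecedent, records))

    antecedent, consequent = ["temp_high", "pressure_low"], ["defect"]
    print("support:", fuzzy_support(antecedent + consequent, records))
    print("confidence:", fuzzy_confidence(antecedent, consequent, records))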

Relevance: 30.00%

Publisher:

Abstract:

The density and spatial distribution of the vacuoles, glial cell nuclei and glial cytoplasmic inclusions (GCI) were studied in the white matter of various cortical and subcortical areas in 10 cases of multiple system atrophy (MSA). Vacuolation was more prevalent in subcortical than cortical areas and especially in the central tegmental tract. Glial cell nuclei were widespread in all areas of the white matter studied, with overall densities being significantly greater in the central tegmental tract and frontal cortex compared with areas of the pons. The GCI were present most consistently in the external and internal capsules, the central tegmental tract and the white matter of the cerebellar cortex. The density of the vacuoles was greater in the MSA brains than in the control brains but glial cell density was similar in both groups. In the majority of areas, the pathological changes were distributed across the white matter randomly, uniformly, or in large diffuse clusters. In most areas, there were no spatial correlations between the vacuoles, glial cell nuclei and GCI. These results suggest: (i) there is significant degeneration of the white matter in MSA characterized by vacuolation and GCI; (ii) the central tegmental tract is affected significantly more than the cortical tracts; (iii) pathological changes are diffusely rather than topographically distributed across the white matter; and (iv) the development of the vacuoles and GCI appear to be unrelated phenomena. © 2007 Japanese Society of Neuropathology.

Relevance: 30.00%

Publisher:

Abstract:

To study the topographic distribution of the pathology in multiple system atrophy (MSA), pattern analysis was carried out using α-synuclein immunohistochemistry in 10 MSA cases. The glial cytoplasmic inclusions (GCI) were distributed randomly or in large clusters. The neuronal inclusions (NI) and abnormal neurons were distributed in regular clusters. Clusters of the NI and abnormal neurons were spatially correlated whereas the GCI were not spatially correlated with either the NI or the abnormal neurons. The data suggest that the GCI represent the primary change in MSA and the neuronal pathology develops secondary to the glial pathology.

Relevance: 30.00%

Publisher:

Abstract:

In cases of multiple system atrophy (MSA), glial cytoplasmic inclusions (GCI) were distributed randomly or present in large diffuse clusters (>1,600 μm in diameter) in most areas studied. These spatial patterns contrast with those reported for filamentous neuronal inclusions in the tauopathies and α-synucleinopathies. © 2003 Movement Disorder Society.

Relevance: 30.00%

Publisher:

Abstract:

The kinematic mapping of a rigid open-link manipulator is a homomorphism between Lie groups. The homomorphism has solution groups that act on an inverse kinematic solution element. A canonical representation of solution group operators that act on a solution element of three and seven degree-of-freedom (dof) dextrous manipulators is determined by geometric analysis. Seven canonical solution groups are determined for the seven-dof Robotics Research K-1207 and Hollerbach arms. The solution element of a dextrous manipulator is a collection of trivial fibre bundles with solution fibres homotopic to the torus. If fibre solutions are parameterised by a scalar, a direct inverse function that maps the scalar and Cartesian base space coordinates to solution element fibre coordinates may be defined. A direct inverse parameterisation of a solution element may be approximated by a local linear map generated by an inverse augmented Jacobian correction of a linear interpolation. The action of canonical solution group operators on a local linear approximation of the solution element of inverse kinematics of dextrous manipulators generates cyclical solutions. The solution representation is proposed as a model of inverse kinematic transformations in primate nervous systems. Simultaneous calibration of a composition of stereo-camera and manipulator kinematic models is under-determined by equi-output parameter groups in the composition of stereo-camera and Denavit-Hartenberg (DH) models. An error measure for simultaneous calibration of a composition of models is derived, and parameter subsets with no equi-output groups are determined by numerical experiments to simultaneously calibrate the composition of homogeneous or pan-tilt stereo-camera with DH models. For acceleration of exact Newton second-order re-calibration of DH parameters after a sequential calibration of stereo-camera and DH parameters, an optimal numerical evaluation of DH matrix first-order and second-order error derivatives with respect to a re-calibration error function is derived, implemented and tested. A distributed object environment for point-and-click image-based tele-command of manipulators and stereo-cameras is specified and implemented that supports rapid prototyping of numerical experiments in distributed system control. The environment is validated by a hierarchical k-fold cross-validated calibration to Cartesian space of a radial basis function regression correction of an affine stereo model. Basic design and performance requirements are defined for scalable virtual micro-kernels that broker inter-Java-virtual-machine remote method invocations between components of secure, manageable, fault-tolerant, open, distributed, agile, Total Quality Managed, ISO 9000+ conformant Just-in-Time manufacturing systems.
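
The Denavit-Hartenberg models mentioned above compose one homogeneous transform per link; as a minimal sketch of that convention (standard DH parameters, not the thesis's specific K-1207 or Hollerbach arm models), the following Python fragment computes a forward-kinematic pose. The example link parameters are hypothetical.

    import numpy as np

    def dh_transform(theta, d, a, alpha):
        """Homogeneous transform for one link under the standard
        Denavit-Hartenberg convention."""
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        return np.array([
            [ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0],
        ])

    def forward_kinematics(dh_params, joint_angles):
        """Compose the per-link transforms to get the end-effector pose."""
        T = np.eye(4)
        for (d, a, alpha), theta in zip(dh_params, joint_angles):
            T = T @ dh_transform(theta, d, a, alpha)
        return T

    # Hypothetical 3-dof arm: (d, a, alpha) per link.
    params = [(0.3, 0.0, np.pi / 2), (0.0, 0.4, 0.0), (0.0, 0.3, 0.0)]
    print(forward_kinematics(params, [0.1, 0.5, -0.2]))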

Relevance: 30.00%

Publisher:

Abstract:

Issues of wear and tribology are increasingly important in computer hard drives as slider flying heights are becoming lower and disk protective coatings thinner to minimise spacing loss and allow higher areal density. Friction, stiction and wear between the slider and disk in a hard drive were studied using Accelerated Friction Test (AFT) apparatus. Contact Start Stop (CSS) and constant speed drag tests were performed using commercial rigid disks and two different air bearing slider types. Friction and stiction were captured during testing by a set of strain gauges. System parameters were varied to investigate their effect on tribology at the head/disk interface. The chosen parameters were disk spinning velocity, slider fly height, temperature, humidity and intercycle pause. The effect of different disk texturing methods was also studied. Models were proposed to explain the influence of these parameters on tribology. Atomic Force Microscopy (AFM) and Scanning Electron Microscopy (SEM) were used to study head and disk topography at various test stages and to provide physical parameters to verify the models. X-ray Photoelectron Spectroscopy (XPS) was employed to identify surface composition and determine whether any chemical changes had occurred as a result of testing. The parameters most likely to influence the interface were identified for both CSS and drag testing. Neural network modelling was used to substantiate the results. Topographical AFM scans of the disk and slider were exported numerically to file and explored extensively. Techniques were developed which improved line and area analysis. A method for detecting surface contacts was also deduced; its results supported and explained the observed AFT behaviour. Finally, surfaces were computer-generated to simulate real disk scans, which allowed contact analysis of many types of surface to be performed. Conclusions were drawn about which disk characteristics most affected contacts and hence friction, stiction and wear.

Relevance: 30.00%

Publisher:

Abstract:

The fast spread of the Internet and the increasing demands on its services are leading to radical changes in the structure and management of the underlying telecommunications systems. Active networks (ANs) offer the ability to program the network on a per-router, per-user, or even per-packet basis, and thus promise greater flexibility than current networks. For this new network paradigm to be widely accepted, a number of issues need to be solved; management of the active network is one of these challenges. This thesis investigates an adaptive management solution based on a genetic algorithm (GA). The solution uses a bacterium-inspired distributed GA running on the active nodes within an active network to provide adaptive management for the network, especially for the service provision problems associated with future networks. The thesis also reviews the concepts, theories and technologies associated with the management solution. By exploring the implementation of these active nodes in hardware, the thesis demonstrates the possibility of implementing GA-based adaptive management in the real networks in use today. The concurrent programming language Handel-C is used for the description of the design, and a reconfigurable computing platform based on an FPGA processing element is used for the hardware implementation. The experimental results demonstrate both the feasibility of the hardware implementation and the efficiency of the proposed management solution.
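
The thesis's bacterium-inspired distributed GA and its Handel-C hardware description are not reproduced here; purely as an illustration of the selection/crossover/mutation loop such a management node would run, the following toy Python GA maximises an invented fitness function standing in for a service-provision quality score.

    import random

    def evolve(fitness, genome_len=16, pop_size=30, generations=50,
               mutation_rate=0.02, seed=0):
        """Toy genetic algorithm: tournament selection, one-point crossover,
        bit-flip mutation (illustrative only)."""
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]

        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b

        for _ in range(generations):
            new_pop = []
            while len(new_pop) < pop_size:
                p1, p2 = tournament(), tournament()
                cut = rng.randrange(1, genome_len)           # one-point crossover
                child = p1[:cut] + p2[cut:]
                child = [bit ^ (rng.random() < mutation_rate) for bit in child]
                new_pop.append(child)
            pop = new_pop
        return max(pop, key=fitness)

    # Example fitness: count of 1-bits, a stand-in for a quality score
    # computed at an active node.
    best = evolve(lambda g: sum(g))
    print(best, sum(best))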

Relevance: 30.00%

Publisher:

Abstract:

Modern distributed control systems comprise a set of processors which are interconnected using a suitable communication network. For use in real-time control environments, such systems must be deterministic and generate specified responses within critical timing constraints. Also, they should be sufficiently robust to survive predictable events such as communication or processor faults. This thesis considers the problem of coordinating and synchronizing a distributed real-time control system under normal and abnormal conditions. Distributed control systems need to periodically coordinate the actions of several autonomous sites. Often the type of coordination required is the all-or-nothing property of an atomic action. Atomic commit protocols have been used to achieve this atomicity in distributed database systems which are not subject to deadlines. This thesis addresses the problem of applying time constraints to atomic commit protocols so that decisions can be made within a deadline. A modified protocol is proposed which is suitable for real-time applications. The thesis also addresses the problem of ensuring that atomicity is provided even if processor or communication failures occur. Previous work has considered the design of atomic commit protocols for use in non-time-critical distributed database systems. However, in a distributed real-time control system a fault must not allow stringent timing constraints to be violated. This thesis proposes commit protocols using synchronous communications which can be made resilient to a single processor or communication failure and still satisfy deadlines. Previous formal models used to design commit protocols have had adequate state coverability but have omitted timing properties. They also assumed that sites communicated asynchronously and omitted the communications from the model. Timed Petri nets are used in this thesis to specify and design the proposed protocols, which are analysed for consistency and timeliness. Also, the communication system is modelled within the Petri net specifications so that communication failures can be included in the analysis. Analysis of the Timed Petri net and the associated reachability tree is used to show that the proposed protocols always terminate consistently and satisfy timing constraints. Finally, the applications of this work are described. Two different types of applications are considered, real-time databases and real-time control systems. It is shown that it may be advantageous to use synchronous communications in distributed database systems, especially if predictable response times are required. Emphasis is given to the application of the developed commit protocols to real-time control systems. Using the same analysis techniques as those used for the design of the protocols, it can be shown that the overall system performs as expected both functionally and temporally.
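
The thesis specifies its protocols with Timed Petri nets and synchronous communication; as an informal Python sketch only of the underlying idea — bounding the voting phase of an atomic commit by a deadline so that a decision is always reached in time — the fragment below aborts when a vote is negative or arrives too late. All names and the threading model are invented for illustration.

    import queue
    import threading
    import time

    def coordinator(participants, deadline_s):
        """Deadline-bounded voting phase of a two-phase commit: collect votes,
        abort if any vote is NO or if the deadline expires (illustrative only)."""
        votes = queue.Queue()
        for p in participants:
            threading.Thread(target=lambda p=p: votes.put(p()), daemon=True).start()

        deadline = time.monotonic() + deadline_s
        collected = []
        while len(collected) < len(participants):
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                return "ABORT (deadline expired)"
            try:
                collected.append(votes.get(timeout=remaining))
            except queue.Empty:
                return "ABORT (deadline expired)"
        return "COMMIT" if all(v == "YES" for v in collected) else "ABORT (negative vote)"

    fast_yes = lambda: "YES"
    slow_yes = lambda: (time.sleep(0.5), "YES")[1]   # simulates a slow or failed site
    print(coordinator([fast_yes, fast_yes], deadline_s=0.2))   # COMMIT
    print(coordinator([fast_yes, slow_yes], deadline_s=0.2))   # ABORT (deadline expired)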