892 results for Distributed computer systems
Abstract:
The ACME project was born in 1998 with the aim of improving the teaching of mathematics in the Industrial Engineering and Technical Engineering programmes of the Polytechnic School of the University of Girona, seeking ways for students to become more involved and to participate more actively in this subject by using the network as a communication channel. ACME (Avaluació Continuada i Millora de l'Ensenyament, i.e. Continuous Assessment and Improvement of Teaching) is an e-learning tool whose objective is the continuous assessment of students through personalised problem sets proposed by the teacher. The system allows the teacher to follow the progress of the whole class, or of an individual student, in a given course, which gives ACME great pedagogical value. The objectives of this final-year project are the following: • To develop a guest-user management system that allows us to accept or deny access to users who have previously requested permission to use ACME. • To create a system that simplifies the task of assigning to each user the topics and problems best suited to their profile. • To develop a series of improvements and extensions intended to integrate the new module into the existing interfaces: extending the user section of the administration interface with an option to manage guest users, and extending the problem-workbook section of the administration interface with the options needed to manage the guest users' courses.
Abstract:
One of the main challenges for developers of new human-computer interfaces is to provide a more natural way of interacting with computer systems, avoiding excessive use of hand and finger movements; such interfaces also offer a valuable alternative communication pathway to people with motor disabilities. This paper describes the construction of a low-cost eye tracker using a fixed-head setup: a webcam, a laptop and an infrared lighting source were used together with a simple frame that fixes the position of the user's head. The paper also gives detailed information on the image processing techniques used to locate the centre of the pupil and discusses different methods for calculating the point of gaze. An overall accuracy of 1.5 degrees was obtained while keeping the hardware cost of the device below 100 euros.
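As an illustration of the pupil-detection step this abstract refers to: under infrared illumination the pupil shows up as the darkest blob in the camera image, so a threshold followed by a centroid computation is enough to recover its centre. The snippet below is a minimal sketch of that idea only; the threshold value, the single-blob heuristic and the OpenCV-based pipeline are illustrative assumptions, not the authors' implementation.

```python
# Minimal dark-pupil detection sketch (illustrative, not the paper's pipeline).
# Assumes an IR-illuminated eye image in which the pupil is the darkest blob.
import cv2

def pupil_centre(frame_bgr, threshold=40):
    """Return the (x, y) centre of the largest dark blob, or None."""
    grey = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    grey = cv2.GaussianBlur(grey, (7, 7), 0)            # suppress sensor noise
    # Pixels darker than the threshold are treated as pupil candidates.
    _, mask = cv2.threshold(grey, threshold, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x signature
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)          # keep the largest dark region
    m = cv2.moments(pupil)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])   # centroid of the blob
```

Mapping the detected pupil centre to an on-screen point of gaze then requires a calibration step, for example fitting a low-order polynomial between pupil coordinates and a set of known screen targets viewed during calibration.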
Abstract:
Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them also involves complicated workflows implemented as shell scripts. A new grid middleware system that is well suited to climate modelling applications is presented in this paper. Grid Remote Execution (G-Rex) allows climate models to be deployed as Web services on remote computer systems and then launched and controlled as if they were running on the user's own computer. Output from the model is transferred back to the user while the run is in progress to prevent it from accumulating on the remote system and to allow the user to monitor the model. G-Rex has a REST architectural style, featuring a Java client program that can easily be incorporated into existing scientific workflow scripts. Some technical details of G-Rex are presented, with examples of its use by climate modellers.
Abstract:
G-Rex is lightweight Java middleware that allows scientific applications deployed on remote computer systems to be launched and controlled as if they were running on the user's own computer. G-Rex is particularly suited to ocean and climate modelling applications because output from the model is transferred back to the user while the run is in progress, which prevents the accumulation of large amounts of data on the remote cluster. The G-Rex server is a RESTful Web application that runs inside a servlet container on the remote system, and the client component is a Java command-line program that can easily be incorporated into existing scientific workflow scripts. The NEMO and POLCOMS ocean models have been deployed as G-Rex services in the NERC Cluster Grid, and G-Rex is the core grid middleware in the GCEP and GCOMS e-science projects.
Abstract:
Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al. 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours running on 40 processors and produces roughly 20 GB of output as 50,000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command utilities for data pre-processing and post-processing prior to job resubmission. Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts. G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight, and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat and is therefore easy for system administrators to install and maintain. G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al. 2008) and POLCOMS (Holt et al., 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al. 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al., 2008), which aims to simulate the world's coastal oceans. A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) the scientist prepares input files on his or her local machine; (2) using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource; (3) the scientist runs the relevant workflow script on his or her local machine, unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun"; (4) the G-Rex middleware automatically handles the uploading of input files to the remote resource and the downloading of output files back to the user, including their deletion from the remote system, during the run; (5) the scientist monitors the output files using familiar analysis and visualization tools on his or her own local machine. G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale grid resources such as the UK National Grid Service.
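To make the REST-style interaction described above concrete, the sketch below shows the general pattern of a thin client that uploads input files, starts a remote run over plain HTTP, and streams output back while the job is in progress so that data never accumulates on the remote system. The service URL, endpoint paths and field names are hypothetical placeholders invented for illustration; they are not the actual G-Rex API, whose real client is the GRexRun command dropped into the workflow script wherever mpirun was previously called.

```python
# Hypothetical sketch of a REST-style remote-execution client.
# Endpoint paths, JSON fields and the URL are invented for illustration
# and are NOT the actual G-Rex interface.
import time
import requests

SERVICE = "http://cluster.example.org/jobservice"    # placeholder service URL

def run_remote(job_name, input_files):
    """input_files: mapping of remote file name -> local path (illustrative convention)."""
    # Upload the input files and start the run with a single POST.
    files = {name: open(path, "rb") for name, path in input_files.items()}
    resp = requests.post(f"{SERVICE}/jobs/{job_name}", files=files)
    resp.raise_for_status()
    job_url = resp.headers["Location"]               # URL of the newly created job resource

    # Poll the job resource and pull output back as it is produced, so data
    # never accumulates on the remote system and can be monitored locally.
    while True:
        status = requests.get(job_url).json()
        for name in status.get("new_outputs", []):
            data = requests.get(f"{job_url}/outputs/{name}").content
            with open(name, "wb") as out:
                out.write(data)
        if status.get("state") == "finished":
            return
        time.sleep(30)
```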
Abstract:
The purpose of this study is to analyse current data continuity mechanisms employed by the target group of businesses and to identify any inadequacies in the mechanisms as a whole. The questionnaire responses indicate that 47% of respondents do perceive backup methodologies as important, with a total of 70% of respondents having some backup methodology already in place. Businesses in Moulton Park perceive the loss of data to have a significant effect upon their business’ ability to function. Only 14% of respondents indicated that loss of data on computer systems would not affect their business at all, with 54% of respondents indicating that there would be either a “major effect” (or greater) on their ability to operate. Respondents that have experienced data loss were more likely to have backup methodologies in place (53%) than respondents that had not experienced data loss (18%). Although the number of respondents clearly affected the quality and conclusiveness of the results returned, the level of backup methodologies in place appears to be proportional to the company size. Further investigation is recommended into the subject in order to validate the information gleaned from the small number of respondents.
Abstract:
This article describes an application of computers to a consumer-based production engineering environment. Particular consideration is given to the utilisation of low-cost computer systems for the visual inspection of components on a production line in real time. The process of installation is discussed, from identifying the need for artificial vision and justifying the cost, through to choosing a particular system and designing the physical and program structure.
Abstract:
Monitoring Earth's terrestrial water conditions is critically important to many hydrological applications such as global food production; assessing water resources sustainability; and flood, drought, and climate change prediction. These needs have motivated the development of pilot monitoring and prediction systems for terrestrial hydrologic and vegetative states, but to date only at the rather coarse spatial resolutions (∼10–100 km) over continental to global domains. Adequately addressing critical water cycle science questions and applications requires systems that are implemented globally at much higher resolutions, on the order of 1 km, resolutions referred to as hyperresolution in the context of global land surface models. This opinion paper sets forth the needs and benefits for a system that would monitor and predict the Earth's terrestrial water, energy, and biogeochemical cycles. We discuss six major challenges in developing a system: improved representation of surface‐subsurface interactions due to fine‐scale topography and vegetation; improved representation of land‐atmospheric interactions and resulting spatial information on soil moisture and evapotranspiration; inclusion of water quality as part of the biogeochemical cycle; representation of human impacts from water management; utilizing massively parallel computer systems and recent computational advances in solving hyperresolution models that will have up to 10⁹ unknowns; and developing the required in situ and remote sensing global data sets. We deem the development of a global hyperresolution model for monitoring the terrestrial water, energy, and biogeochemical cycles a “grand challenge” to the community, and we call upon the international hydrologic community and the hydrological science support infrastructure to endorse the effort.
Abstract:
Aim: To determine the prevalence and nature of prescribing errors in general practice; to explore the causes, and to identify defences against error. Methods: 1) Systematic reviews; 2) Retrospective review of unique medication items prescribed over a 12-month period to a 2% sample of patients from 15 general practices in England; 3) Interviews with 34 prescribers regarding 70 potential errors; 15 root cause analyses, and six focus groups involving 46 primary health care team members. Results: The study involved examination of 6,048 unique prescription items for 1,777 patients. Prescribing or monitoring errors were detected for one in eight patients, involving around one in 20 of all prescription items. The vast majority of the errors were of mild to moderate severity, with one in 550 items being associated with a severe error. The following factors were associated with increased risk of prescribing or monitoring errors: male gender, age less than 15 years or greater than 64 years, number of unique medication items prescribed, and being prescribed preparations in the following therapeutic areas: cardiovascular, infections, malignant disease and immunosuppression, musculoskeletal, eye, ENT and skin. Prescribing or monitoring errors were not associated with the grade of GP or whether prescriptions were issued as acute or repeat items. A wide range of underlying causes of error were identified relating to the prescriber, patient, the team, the working environment, the task, the computer system and the primary/secondary care interface. Many defences against error were also identified, including strategies employed by individual prescribers and primary care teams, and making best use of health information technology. Conclusion: Prescribing errors in general practices are common, although severe errors are unusual. Many factors increase the risk of error. Strategies for reducing the prevalence of error should focus on GP training, continuing professional development for GPs, clinical governance, effective use of clinical computer systems, and improving safety systems within general practices and at the interface with secondary care.
Abstract:
The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited for distributed memory systems with reliable interconnection networks. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or high latency in communication paths. This work proposes a fully decentralised algorithm (Epidemic K-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art distributed K-Means algorithms based on sampling methods. The experimental analysis confirms that the proposed algorithm is a practical and accurate distributed K-Means implementation for networked systems of very large and extreme scale.
Abstract:
The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited for distributed memory systems with reliable interconnection networks, such as massively parallel processors and clusters of workstations. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or high latency in communication paths. The lack of scalable and fault tolerant global communication and synchronisation methods in large-scale systems has hindered the adoption of the K-Means algorithm for applications in large networked systems such as wireless sensor networks, peer-to-peer systems and mobile ad hoc networks. This work proposes a fully distributed K-Means algorithm (EpidemicK-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art sampling methods and shows that the proposed method overcomes the limitations of the sampling-based approaches for skewed cluster distributions. The experimental analysis confirms that the proposed algorithm is very accurate and fault tolerant under unreliable network conditions (message loss and node failures) and is suitable for asynchronous networks of very large and extreme scale.
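The decentralised scheme these two abstracts describe rests on epidemic (gossip) averaging: each node summarises its local data as per-cluster sums and counts, then repeatedly averages that state with randomly chosen peers, so every node converges towards the same global centroids without any coordinator. The simulation below is a minimal sketch of that general idea under assumed synchronous rounds; it is not the paper's exact EpidemicK-Means protocol.

```python
# Illustrative simulation of gossip-averaged (epidemic) K-Means, not the
# paper's protocol: nodes exchange per-cluster sums and counts with random
# peers until all nodes hold approximately the global statistics.
import random
import numpy as np

def local_state(points, centroids):
    """Assign local points to nearest centroids; return per-cluster sums and counts."""
    k, d = centroids.shape
    sums, counts = np.zeros((k, d)), np.zeros(k)
    for p in points:
        c = np.argmin(np.linalg.norm(centroids - p, axis=1))
        sums[c] += p
        counts[c] += 1
    return sums, counts

def gossip_round(states):
    """One epidemic round: every node averages its state with one random peer."""
    n = len(states)
    for i in range(n):
        j = random.randrange(n)
        avg_sums = (states[i][0] + states[j][0]) / 2
        avg_counts = (states[i][1] + states[j][1]) / 2
        states[i] = (avg_sums, avg_counts)
        states[j] = (avg_sums.copy(), avg_counts.copy())
    return states

def decentralised_kmeans(node_data, k, rounds=20, iters=10):
    """node_data: list of (n_i, d) arrays, one per node."""
    centroids = np.vstack(node_data)[:k].copy()      # simple initialisation
    for _ in range(iters):
        states = [local_state(points, centroids) for points in node_data]
        for _ in range(rounds):
            states = gossip_round(states)
        sums, counts = states[0]                     # after gossip, every node agrees
        centroids = sums / np.maximum(counts, 1e-9)[:, None]
    return centroids
```

Because pairwise averaging preserves the totals of the sums and counts, the ratio computed at any node after convergence approximates the centroid update an ideal centralised K-Means would perform over the aggregated data.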
Abstract:
We outline our first steps towards marrying two new and emerging technologies: the Virtual Observatory (e.g., AstroGrid) and the computational grid. We discuss the construction of VOTechBroker, a modular software tool designed to abstract the tasks of submitting and managing a large number of computational jobs on a distributed computer system. The broker will also interact with the AstroGrid workflow and MySpace environments. We present our planned usage of the VOTechBroker in computing a huge number of n-point correlation functions from the SDSS, as well as fitting over a million CMBfast models to the WMAP data.
Abstract:
Body Sensor Networks (BSNs) have been recently introduced for the remote monitoring of human activities in a broad range of application domains, such as health care, emergency management, fitness and behaviour surveillance. BSNs can be deployed in a community of people and can generate large amounts of contextual data that require a scalable approach for storage, processing and analysis. Cloud computing can provide a flexible storage and processing infrastructure to perform both online and offline analysis of data streams generated in BSNs. This paper proposes BodyCloud, a SaaS approach for community BSNs that supports the development and deployment of Cloud-assisted BSN applications. BodyCloud is a multi-tier application-level architecture that integrates a Cloud computing platform and BSN data streams middleware. BodyCloud provides programming abstractions that allow the rapid development of community BSN applications. This work describes the general architecture of the proposed approach and presents a case study for the real-time monitoring and analysis of cardiac data streams of many individuals.
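As a small illustration of the kind of per-subject, real-time analysis such a Cloud-assisted community BSN application might perform, the sketch below keeps a short moving window of heart-rate samples per individual and raises an alert when the average leaves an assumed normal range. The data format, thresholds and function names are illustrative assumptions and are not part of BodyCloud's programming abstractions.

```python
# Illustrative per-subject cardiac stream check; format and thresholds are assumptions.
from collections import defaultdict, deque

WINDOW = 30                    # samples kept per subject for smoothing
HR_HIGH, HR_LOW = 120, 40      # illustrative alert thresholds in beats per minute

windows = defaultdict(lambda: deque(maxlen=WINDOW))

def ingest(subject_id, heart_rate):
    """Consume one heart-rate sample from a subject's BSN; return an alert string or None."""
    w = windows[subject_id]
    w.append(heart_rate)
    avg = sum(w) / len(w)      # moving average over the recent window
    if avg > HR_HIGH:
        return f"{subject_id}: sustained high heart rate ({avg:.0f} bpm)"
    if avg < HR_LOW:
        return f"{subject_id}: sustained low heart rate ({avg:.0f} bpm)"
    return None
```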
Abstract:
Embedded computer systems equipped with wireless communication transceivers are nowadays used in a vast number of application scenarios. Energy consumption is important in many of these scenarios, as systems are battery operated and long maintenance-free operation is required. To achieve this goal, embedded systems employ low-power communication transceivers and protocols. However, currently used protocols cannot operate efficiently when communication channels are highly erroneous. In this study, we show how average diversity combining (ADC) can be used in state-of-the-art low-power communication protocols. This novel approach improves transmission reliability and, in consequence, reduces energy consumption and transmission latency in the presence of erroneous channels. Using a testbed, we show that highly erroneous channels are indeed a common occurrence in situations where low-power systems are used, and we demonstrate that ADC improves low-power communication dramatically.
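Average diversity combining exploits the fact that repeated copies of the same transmission are corrupted by independent noise: averaging the received soft values sample by sample reduces the noise variance in proportion to the number of copies combined, so a packet can often be recovered even when every individual copy would fail. The numerical sketch below illustrates this effect under an assumed additive-noise, BPSK-like signal model; the parameters are illustrative and are not taken from the study's testbed.

```python
# Minimal numerical sketch of average diversity combining (ADC): several noisy
# copies of the same transmission are averaged sample by sample, reducing the
# noise variance. Signal model and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

symbols = rng.choice([-1.0, 1.0], size=128)          # BPSK-like payload
noise_std = 1.2                                      # channel noisy enough that a
                                                     # single copy decodes poorly
copies = [symbols + rng.normal(0.0, noise_std, symbols.shape) for _ in range(4)]

single_errors = np.mean(np.sign(copies[0]) != symbols)

combined = np.mean(copies, axis=0)                   # average the soft values of all copies
combined_errors = np.mean(np.sign(combined) != symbols)

print(f"bit errors, single copy:       {single_errors:.2%}")
print(f"bit errors, 4 copies combined: {combined_errors:.2%}")
```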