20 results for Time-sharing computer systems


Relevance:

100.00%

Publisher:

Abstract:

A characterization of observability for linear time-varying descriptor systems E(t)x'(t) + F(t)x(t) = B(t)u(t), y(t) = C(t)x(t) was recently developed. Neither E nor C was required to have constant rank. This paper defines a dual system, and a type of controllability, so that observability of the original system is equivalent to controllability of the dual system. Criteria for observability and controllability are given in terms of arrays of derivatives of the original coefficients. In addition, the duality results of this paper lead to an improvement on a previous fundamental structure result for solvable systems of the form E(t)x'(t) + F(t)x(t) = f(t).
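For orientation, the finite-dimensional state-space analogue of this duality (a standard fact, not the paper's time-varying descriptor construction) reads: for x'(t) = Ax(t) + Bu(t), y(t) = Cx(t), the pair (A, C) is observable if and only if the dual system z'(t) = A^T z(t) + C^T v(t), i.e. the pair (A^T, C^T), is controllable. The paper's contribution is to define a dual system and a notion of controllability for which the same equivalence holds when the leading coefficient E(t) may be singular and of non-constant rank.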

Relevance:

100.00%

Publisher:

Abstract:

We study linear, variable-coefficient control problems in descriptor form. Based on a behaviour approach and the general theory of linear differential-algebraic systems, we give a theoretical analysis and describe numerically stable methods for determining the structural properties of the system.

Relevance:

100.00%

Publisher:

Abstract:

This article describes an application of computers to a consumer-based production engineering environment. Particular consideration is given to the utilisation of low-cost computer systems for the visual inspection of components on a production line in real time. The process of installation is discussed, from identifying the need for artificial vision and justifying the cost, through to choosing a particular system and designing the physical and program structure.

Relevance:

100.00%

Publisher:

Abstract:

Terahertz pulse imaging (TPI) is a novel noncontact, nondestructive technique for the examination of cultural heritage artifacts. It has the advantage of broadband spectral range, time-of-flight depth resolution, and penetration through optically opaque materials. Fiber-coupled, portable, time-domain terahertz systems have enabled this technique to move out of the laboratory and into the field. Much like the rings of a tree, stratified architectural materials give the chronology of their environmental and aesthetic history. This work concentrates on laboratory models of stratified mosaics and fresco paintings, specimens extracted from a neolithic excavation site in Catalhoyuk, Turkey, and specimens measured at the medieval Eglise de Saint Jean-Baptiste in Vif, France. Preparatory spectroscopic studies of various composite materials, including lime, gypsum and clay plasters are presented to enhance the interpretation of results and with the intent to aid future computer simulations of the TPI of stratified architectural material. The breadth of the sample range is a demonstration of the cultural demand and public interest in the life history of buildings. The results are an illustration of the potential role of TPI in providing both a chronological history of buildings and in the visualization of obscured wall paintings and mosaics.
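For orientation, the time-of-flight depth resolution mentioned above rests on the standard pulsed-reflection relation (a general property of such measurements, not a result specific to this work): an echo from an interface at depth d below the surface of a layer with refractive index n returns after a delay Δt = 2nd/c, so d = cΔt/(2n). Taking n ≈ 2 as an illustrative value for a dry plaster, a 1 ps delay between echoes corresponds to a layer roughly 75 µm thick, which is what allows individual paint and plaster strata to be distinguished.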

Relevance:

100.00%

Publisher:

Abstract:

Body Sensor Networks (BSNs) have been recently introduced for the remote monitoring of human activities in a broad range of application domains, such as health care, emergency management, fitness and behaviour surveillance. BSNs can be deployed in a community of people and can generate large amounts of contextual data that require a scalable approach for storage, processing and analysis. Cloud computing can provide a flexible storage and processing infrastructure to perform both online and offline analysis of data streams generated in BSNs. This paper proposes BodyCloud, a SaaS approach for community BSNs that supports the development and deployment of Cloud-assisted BSN applications. BodyCloud is a multi-tier application-level architecture that integrates a Cloud computing platform and BSN data streams middleware. BodyCloud provides programming abstractions that allow the rapid development of community BSN applications. This work describes the general architecture of the proposed approach and presents a case study for the real-time monitoring and analysis of cardiac data streams of many individuals.
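As a rough illustration of the kind of data-stream ingestion such an architecture implies (the endpoint, payload layout and helper below are hypothetical, not BodyCloud's actual programming abstractions), a BSN gateway might push windows of sensor samples to the Cloud tier over HTTP:

import json
import time
import urllib.request

# Hypothetical ingestion endpoint; the real BodyCloud API is not shown in the abstract.
INGEST_URL = "https://bodycloud.example.org/api/streams/ecg"

def push_ecg_window(subject_id, samples):
    """Send one window of ECG samples from a BSN gateway to the Cloud tier."""
    payload = {"subject": subject_id, "timestamp": time.time(), "samples": list(samples)}
    req = urllib.request.Request(
        INGEST_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:   # the Cloud tier stores and analyses the stream
        return resp.status

Online analysis (e.g. the cardiac monitoring case study) would then run on the Cloud side against the accumulated stream, while the gateway only buffers and forwards.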

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE: Assimilating the diagnosis of complete spinal cord injury (SCI) takes time and is not easy, as patients know that there is no 'cure' at the present time. Brain-computer interfaces (BCIs) can facilitate daily living. However, inter-subject variability demands measurements with potential user groups and an understanding of how they differ from the healthy users with whom BCIs are more commonly tested. Thus, a three-class motor imagery (MI) screening (left hand, right hand, feet) was performed with a group of 10 able-bodied and 16 complete spinal-cord-injured people (paraplegics, tetraplegics) with the objective of determining what differences were present between the user groups and how they would impact upon the ability of these user groups to interact with a BCI. APPROACH: Electrophysiological differences between patient groups and healthy users are measured in terms of sensorimotor rhythm deflections from baseline during MI, electroencephalogram microstate scalp maps and strengths of inter-channel phase synchronization. Additionally, using a common spatial pattern algorithm and a linear discriminant analysis classifier, the classification accuracy was calculated and compared between groups. MAIN RESULTS: It is seen that both patient groups (tetraplegic and paraplegic) have some significant differences in event-related desynchronization strengths, exhibit significant increases in synchronization and reach significantly lower accuracies (mean (M) = 66.1%) than the group of healthy subjects (M = 85.1%). SIGNIFICANCE: The results demonstrate significant differences in electrophysiological correlates of motor control between healthy individuals and those individuals who stand to benefit most from BCI technology (individuals with SCI). They highlight the difficulty in directly translating results from healthy subjects to participants with SCI and the challenges that therefore arise in providing BCIs to such individuals.
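As a minimal sketch of the classification pipeline named in the APPROACH (common spatial patterns followed by linear discriminant analysis), assuming MNE-Python and scikit-learn rather than whatever toolchain the authors used, a binary left-hand versus right-hand version might look like this (the study itself used three classes and real EEG epochs; the array shapes below are placeholders):

import numpy as np
from mne.decoding import CSP                                  # spatial filtering
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder band-pass filtered EEG epochs: (n_trials, n_channels, n_samples).
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 22, 500))
y = np.repeat([0, 1], 30)                                     # e.g. left hand vs right hand

# CSP extracts log-variance features that discriminate the two MI classes; LDA classifies them.
clf = make_pipeline(CSP(n_components=6, log=True), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")  # ~0.5 on random placeholder data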

Relevance:

100.00%

Publisher:

Abstract:

Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them also involves complicated workflows implemented as shell scripts. A new grid middleware system that is well suited to climate modelling applications is presented in this paper. Grid Remote Execution (G-Rex) allows climate models to be deployed as Web services on remote computer systems and then launched and controlled as if they were running on the user's own computer. Output from the model is transferred back to the user while the run is in progress to prevent it from accumulating on the remote system and to allow the user to monitor the model. G-Rex has a REST architectural style, featuring a Java client program that can easily be incorporated into existing scientific workflow scripts. Some technical details of G-Rex are presented, with examples of its use by climate modellers.

Relevance:

100.00%

Publisher:

Abstract:

G-Rex is lightweight Java middleware that allows scientific applications deployed on remote computer systems to be launched and controlled as if they were running on the user's own computer. G-Rex is particularly suited to ocean and climate modelling applications because output from the model is transferred back to the user while the run is in progress, which prevents the accumulation of large amounts of data on the remote cluster. The G-Rex server is a RESTful Web application that runs inside a servlet container on the remote system, and the client component is a Java command-line program that can easily be incorporated into existing scientific workflow scripts. The NEMO and POLCOMS ocean models have been deployed as G-Rex services in the NERC Cluster Grid, and G-Rex is the core grid middleware in the GCEP and GCOMS e-science projects.

Relevance:

100.00%

Publisher:

Abstract:

Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al. 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours on 40 processors and produces roughly 20 GB of output as 50,000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command utilities for data pre-processing and post-processing prior to job resubmission.

Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts. G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat and is therefore easy for system administrators to install and maintain.

G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al. 2008) and POLCOMS (Holt et al. 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al. 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al. 2008), which aims to simulate the world's coastal oceans.

A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) the scientist prepares input files on his or her local machine; (2) using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource; (3) the scientist runs the relevant workflow script on his or her local machine, unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun"; (4) the G-Rex middleware automatically handles the uploading of input files to the remote resource and the downloading of output files back to the user, including their deletion from the remote system, during the run; (5) the scientist monitors the output files using familiar analysis and visualization tools on his or her own local machine.

G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front end to larger-scale Grid resources such as the UK National Grid service.
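To make the interaction pattern concrete, here is a rough Python sketch of the kind of REST-style exchange described above; the endpoint paths, JSON fields and polling scheme are invented for illustration (the real client is the Java GRexRun program, and the actual G-Rex resource names are not given here):

import pathlib
import time
import requests   # assumes the third-party requests library

GREX_SERVICE = "http://cluster.example.ac.uk/grex/nemo"   # illustrative service URL

def run_remote(input_files, poll_seconds=30):
    """Minimal REST-style interaction with a G-Rex-like service: upload inputs,
    start the run, and pull outputs back while the job is still in progress."""
    job = requests.post(f"{GREX_SERVICE}/instances").json()          # create a job instance
    job_url = f"{GREX_SERVICE}/instances/{job['id']}"
    for path in input_files:
        with open(path, "rb") as f:
            requests.put(f"{job_url}/inputs/{pathlib.Path(path).name}", data=f)
    requests.post(f"{job_url}/control", data={"action": "start"})    # launch the model run
    while True:
        state = requests.get(job_url).json()
        for name in state.get("new_outputs", []):                    # download outputs as they appear
            out = requests.get(f"{job_url}/outputs/{name}")
            pathlib.Path(name).write_bytes(out.content)
        if state.get("finished"):
            break
        time.sleep(poll_seconds)

Because the protocol is plain HTTP, the same exchange could equally be driven from a browser or curl, which is the usability point the abstract makes.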

Relevance:

100.00%

Publisher:

Abstract:

Constructing a building is a long process which can take several years. Most building services products are installed while a building is constructed, but they are not operated until the building is commissioned. The warranty term for the building services systems may cover the time from their installation to the end of the warranty period. Prior to the commissioning of the building, the building services systems are protected by warranty although they are not operated. The burn-in time for such systems is important when warranty cost is analyzed. In this paper, warranty cost models for products with burn-in periods are presented. Two burn-in policies are developed to optimize the total mean warranty cost. A special case concerning the relationship between the failure rates of the product in the dormant state and in the operating state is presented.
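As an illustrative sketch only (not the models or policies developed in the paper), the following assumes minimal repair, a Weibull cumulative hazard with shape < 1 so that infant mortality makes burn-in worthwhile, a reduced hazard during the dormant pre-commissioning period, and simple linear burn-in and repair costs; sweeping the burn-in length then locates a cost-minimising burn-in period:

import numpy as np

def weibull_cum_hazard(t, shape, scale):
    """Cumulative hazard H(t) = (t / scale)**shape of a Weibull failure model."""
    return (np.asarray(t, dtype=float) / scale) ** shape

def expected_warranty_cost(burn_in, dormant, operating, shape, scale,
                           dormant_factor=0.2, c_b=20.0, c_r=100.0):
    """Toy expected warranty cost with burn-in, a dormant spell and minimal repair.

    Failures form a non-homogeneous Poisson process with cumulative hazard H, so the
    expected number of failures in an age interval [a, b] is H(b) - H(a); the hazard is
    scaled by dormant_factor while the system is installed but not operated; c_b is the
    cost per unit burn-in time and c_r the cost per warranty repair.
    """
    t0 = burn_in                       # age at which warranty coverage starts
    t1 = t0 + dormant                  # age at which the building is commissioned
    t2 = t1 + operating                # age at which the warranty expires
    dormant_failures = dormant_factor * (weibull_cum_hazard(t1, shape, scale)
                                         - weibull_cum_hazard(t0, shape, scale))
    operating_failures = (weibull_cum_hazard(t2, shape, scale)
                          - weibull_cum_hazard(t1, shape, scale))
    return c_b * burn_in + c_r * (dormant_failures + operating_failures)

# Sweep candidate burn-in lengths (in years) and report the cheapest one.
candidates = np.linspace(0.0, 1.0, 11)
costs = [expected_warranty_cost(b, dormant=0.5, operating=2.0, shape=0.5, scale=2.0)
         for b in candidates]
print(f"approximate optimal burn-in: {candidates[int(np.argmin(costs))]:.1f} years")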

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this study is to analyse current data continuity mechanisms employed by the target group of businesses and to identify any inadequacies in the mechanisms as a whole. The questionnaire responses indicate that 47% of respondents do perceive backup methodologies as important, with a total of 70% of respondents having some backup methodology already in place. Businesses in Moulton Park perceive the loss of data to have a significant effect upon their business’ ability to function. Only 14% of respondents indicated that loss of data on computer systems would not affect their business at all, with 54% of respondents indicating that there would be either a “major effect” (or greater) on their ability to operate. Respondents that have experienced data loss were more likely to have backup methodologies in place (53%) than respondents that had not experienced data loss (18%). Although the number of respondents clearly affected the quality and conclusiveness of the results returned, the level of backup methodologies in place appears to be proportional to the company size. Further investigation is recommended into the subject in order to validate the information gleaned from the small number of respondents.

Relevance:

100.00%

Publisher:

Abstract:

Many scientific and engineering applications involve inverting large matrices or solving systems of linear algebraic equations. Solving these problems with proven direct methods can take a very long time, as their computational cost grows with the size of the matrix. The computational complexity of the stochastic Monte Carlo methods depends only on the number of chains and the length of those chains. The computing power needed by inherently parallel Monte Carlo methods can be provided very efficiently by distributed computing technologies such as Grid computing. In this paper we show how a load-balanced Monte Carlo method for computing the inverse of a dense matrix can be constructed, show how the method can be implemented on the Grid, and demonstrate how efficiently the method scales on multiple processors.
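As a rough sketch of the underlying technique (a textbook Neumann-series Monte Carlo estimator, not the paper's load-balanced Grid implementation), each entry of the inverse is estimated by averaging weighted random walks; because every chain is independent, chains can be farmed out across Grid nodes with essentially no communication, which is what makes the approach attractive for Grid computing:

import numpy as np

def mc_inverse(A, n_chains=5_000, chain_len=20, seed=0):
    """Estimate A**-1 by Monte Carlo summation of the Neumann series.

    Writes A = I - B and estimates (I - B)**-1 = sum_k B**k entrywise with
    weighted random walks; assumes the spectral radius of B is below 1.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    B = np.eye(n) - A
    absB = np.abs(B)
    row_sums = absB.sum(axis=1, keepdims=True)
    # Transition probabilities proportional to |B|; all-zero rows fall back to uniform.
    P = np.where(row_sums > 0, absB / np.where(row_sums == 0, 1.0, row_sums), 1.0 / n)
    inv_est = np.zeros((n, n))
    for i in range(n):                        # chains started at state i estimate row i
        for _ in range(n_chains):
            state, weight = i, 1.0
            inv_est[i, i] += 1.0              # k = 0 term of the series: (B**0)[i, i] = 1
            for _ in range(chain_len):
                nxt = rng.choice(n, p=P[state])
                weight *= B[state, nxt] / P[state, nxt]
                state = nxt
                inv_est[i, state] += weight   # unbiased estimate of (B**k)[i, state]
    return inv_est / n_chains

A = np.eye(4) + 0.1 * np.random.default_rng(1).standard_normal((4, 4))
print(np.max(np.abs(mc_inverse(A) - np.linalg.inv(A))))   # small, and shrinks with more chains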

Relevance:

100.00%

Publisher:

Abstract:

Monitoring Earth's terrestrial water conditions is critically important to many hydrological applications such as global food production; assessing water resources sustainability; and flood, drought, and climate change prediction. These needs have motivated the development of pilot monitoring and prediction systems for terrestrial hydrologic and vegetative states, but to date only at the rather coarse spatial resolutions (∼10–100 km) over continental to global domains. Adequately addressing critical water cycle science questions and applications requires systems that are implemented globally at much higher resolutions, on the order of 1 km, resolutions referred to as hyperresolution in the context of global land surface models. This opinion paper sets forth the needs and benefits for a system that would monitor and predict the Earth's terrestrial water, energy, and biogeochemical cycles. We discuss six major challenges in developing a system: improved representation of surface‐subsurface interactions due to fine‐scale topography and vegetation; improved representation of land‐atmospheric interactions and resulting spatial information on soil moisture and evapotranspiration; inclusion of water quality as part of the biogeochemical cycle; representation of human impacts from water management; utilizing massively parallel computer systems and recent computational advances in solving hyperresolution models that will have up to 10^9 unknowns; and developing the required in situ and remote sensing global data sets. We deem the development of a global hyperresolution model for monitoring the terrestrial water, energy, and biogeochemical cycles a “grand challenge” to the community, and we call upon the international hydrologic community and the hydrological science support infrastructure to endorse the effort.
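For a rough sense of the scale behind the 10^9 figure (an order-of-magnitude illustration, not a calculation taken from the paper): the Earth's land surface covers about 1.5 x 10^8 km^2, so a global 1 km land grid contains on the order of 1.5 x 10^8 columns, and with roughly 5-10 water, energy and biogeochemical state variables or soil layers per column the number of unknowns reaches the order of 10^9.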

Relevance:

100.00%

Publisher:

Abstract:

A new generation of advanced surveillance systems is being conceived as a collection of multi-sensor components, such as video, audio and mobile robots, interacting cooperatively to enhance situation-awareness capabilities and assist surveillance personnel. The prominent issues that these systems face are: the improvement of existing intelligent video surveillance systems, the inclusion of wireless networks, the use of low-power sensors, the design architecture, the communication between different components, the fusion of data emerging from different types of sensors, the location of personnel (providers and consumers) and the scalability of the system. This paper focuses on the aspects pertaining to real-time distributed architecture and scalability. For example, to meet real-time requirements, these systems need to process data streams in concurrent environments, designed by taking into account scheduling and synchronisation. The paper proposes a framework for the design of visual surveillance systems based on components derived from the principles of Real Time Networks/Data Oriented Requirements Implementation Scheme (RTN/DORIS). It also proposes the implementation of these components using the well-known middleware technology Common Object Request Broker Architecture (CORBA). Results using this architecture for video surveillance are presented through an implemented prototype.
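As a generic illustration of the scheduling and synchronisation concern mentioned above (a plain producer/consumer sketch, not the RTN/DORIS component model and not a CORBA implementation), several sensor components can push data onto a bounded buffer that a single fusion component drains:

import queue
import threading
import time

frames = queue.Queue(maxsize=64)          # bounded buffer provides simple back-pressure

def camera(cam_id, n_frames):
    """Producer: one thread per sensor component pushing frames onto the shared queue."""
    for i in range(n_frames):
        frames.put({"camera": cam_id, "frame": i, "t": time.time()})
        time.sleep(0.01)                   # placeholder acquisition rate

def fusion_worker():
    """Consumer: drains the queue; per-frame analysis and data fusion would go here."""
    while True:
        item = frames.get()
        if item is None:                   # sentinel tells the consumer to shut down
            break

consumer = threading.Thread(target=fusion_worker)
consumer.start()
producers = [threading.Thread(target=camera, args=(cid, 100)) for cid in range(3)]
for p in producers:
    p.start()
for p in producers:
    p.join()
frames.put(None)                           # all producers finished; stop the consumer
consumer.join()

In a distributed deployment the in-process queue would be replaced by the middleware layer (CORBA in the paper's prototype), but the back-pressure and synchronisation issues are the same.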