932 results for Interoperability of Applications
Abstract:
The Southampton Hand Assessment Procedure (SHAP) was devised to quantitatively assess the functional range of injured and healthy adult hands. It was designed as a practical tool for use in a busy clinical setting; thus, it was made simple to use and easy to interpret. This paper describes four examples of its use: before and after a surgical procedure, to observe the impact of an injury, with prostheses, and during recovery following a fracture. The cases show that the SHAP is capable of monitoring progress and recovery, identifying functional abilities in prosthetic hands, and comparing capabilities across different injury groups.
Abstract:
Most studies on interoperability in systems integration focus on the technical and semantic levels, and rarely extend to the pragmatic level. Our past work has addressed pragmatic interoperability, which concerns the relationship between signs and the potential behaviour and intention of responsible agents. We also define pragmatic interoperability as a level concerned with the aggregation and optimisation of various business processes to achieve the intended purposes of different information systems. This paper, extending our previous research, proposes an assessment method for measuring the pragmatic interoperability of information systems. We first propose an interoperability analysis framework based on the concept of semiosis. We then develop a pragmatic interoperability assessment process along two dimensions comprising six aspects (informal, formal, technical, substantive, communication, and control). Finally, we illustrate the assessment process with an example.
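As a concrete illustration of how the six aspects might be scored and aggregated in practice, here is a minimal Python sketch; the aspect names come from the abstract above, while the 0-5 scale, the rubric class, and the unweighted averaging are hypothetical choices, not the authors' method.

```python
from dataclasses import dataclass, field

# The six aspects named in the abstract; the scoring scale is illustrative.
ASPECTS = ("informal", "formal", "technical", "substantive", "communication", "control")

@dataclass
class PragmaticAssessment:
    """Hypothetical rubric: each aspect is scored 0-5 by an assessor."""
    scores: dict = field(default_factory=dict)

    def rate(self, aspect: str, score: int) -> None:
        if aspect not in ASPECTS:
            raise ValueError(f"unknown aspect: {aspect}")
        if not 0 <= score <= 5:
            raise ValueError("score must be in 0..5")
        self.scores[aspect] = score

    def overall(self) -> float:
        """Unweighted mean over rated aspects (assumed aggregation rule)."""
        if not self.scores:
            return 0.0
        return sum(self.scores.values()) / len(self.scores)

# Example: assess interoperability between two information systems.
a = PragmaticAssessment()
a.rate("technical", 4)
a.rate("communication", 3)
print(f"overall: {a.overall():.2f}")
```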
Abstract:
Recently, two international standards organizations, ISO and OGC, have carried out standardization work for GIS. Current standardization work for providing interoperability among GIS databases focuses on the design of open interfaces, but it has not considered procedures and methods for designing river geospatial data, so river geospatial data retains its own model. When such data is shared through open interfaces among heterogeneous GIS databases, differences between the models result in loss of information. In this study, a plan was proposed both to respond to these changes in the information environment and to provide a future Smart River-based river information service by assessing the current state of the river geospatial data model and improving and redesigning the database. Primary and foreign keys, which distinguish attribute information and entity linkages, were redefined to increase usability. The attribute-information database design and the entity relationship diagram were redefined to redesign the linkages among tables from the perspective of a river standard database. In addition, this study aimed to expand the current supplier-oriented operating system into a demand-oriented one by establishing efficient management of river-related information and a utilization system capable of adapting to changes in the river management paradigm.
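To make the key-redesign idea concrete, here is a minimal sketch using Python's built-in sqlite3 module; the river and river_attribute tables, their columns, and the sample rows are hypothetical, since the study's actual schema is not given.

```python
import sqlite3

# Sketch of the key-redesign idea: an explicit primary key on the river
# entity and a foreign key linking its attribute table. Table and column
# names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE river (
    river_id TEXT PRIMARY KEY,   -- standard identifier shared across DBs
    name     TEXT NOT NULL
);
CREATE TABLE river_attribute (
    attr_id    INTEGER PRIMARY KEY,
    river_id   TEXT NOT NULL REFERENCES river(river_id),  -- entity linkage
    attr_name  TEXT NOT NULL,
    attr_value TEXT
);
""")
conn.execute("INSERT INTO river VALUES ('KR-001', 'Han River')")
conn.execute(
    "INSERT INTO river_attribute (river_id, attr_name, attr_value) "
    "VALUES ('KR-001', 'avg_width_m', '850')"
)
# The foreign key lets heterogeneous databases join attributes without loss.
for row in conn.execute(
    "SELECT r.name, a.attr_name, a.attr_value "
    "FROM river r JOIN river_attribute a ON a.river_id = r.river_id"
):
    print(row)
```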
Abstract:
In distributed systems such as clouds or service-oriented frameworks, applications are typically assembled by deploying and connecting a large number of heterogeneous software components, ranging from fine-grained packages to coarse-grained complex services. The complexity of such systems requires a rich set of techniques and tools to support the automation of their deployment process. A technique, based on a formal model of components, is devised for computing the sequence of actions that deploys a desired configuration. An efficient algorithm, working in polynomial time, is described and proven to be sound and complete. Finally, a prototype tool implementing the proposed algorithm has been developed, and experimental results support the adoption of this novel approach in real-life scenarios.
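The abstract does not give the algorithm itself, but a simplified version of the underlying problem, computing an action order that respects component dependencies, can be sketched with a topological sort (Kahn's algorithm); the component names and the dict-based dependency encoding below are illustrative assumptions, not the paper's formal model.

```python
from collections import deque

def deployment_plan(components, depends_on):
    """Compute an order in which every component is deployed only after
    all of its dependencies (Kahn's algorithm, O(V + E)).

    components: iterable of component names.
    depends_on: dict mapping a component to the set of components it needs.
    Raises ValueError if dependencies are cyclic.
    """
    indegree = {c: len(depends_on.get(c, ())) for c in components}
    dependents = {c: [] for c in components}
    for c, deps in depends_on.items():
        for d in deps:
            dependents[d].append(c)
    ready = deque(c for c, n in indegree.items() if n == 0)
    order = []
    while ready:
        c = ready.popleft()
        order.append(c)
        for nxt in dependents[c]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(indegree):
        raise ValueError("cyclic dependency: no valid deployment sequence")
    return order

# Example: a service that depends on a database and a cache package.
print(deployment_plan(["db", "cache", "service"], {"service": {"db", "cache"}}))
```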
Abstract:
High Performance Computing is a technology used by computational clusters to build processing systems capable of delivering far more powerful services than traditional computers. As a consequence, HPC technology has become a decisive factor in industrial competition and in research. HPC systems continue to grow in numbers of nodes and cores, and forecasts indicate that node counts will soon reach one million. This type of architecture also entails very high costs in terms of resource consumption, which become unsustainable for the industrial market. A centralized scheduler cannot manage such a large number of resources while maintaining a reasonable response time. This thesis presents a distributed scheduling model based on constraint programming, which formulates the scheduling problem as a set of temporal and resource constraints that must be satisfied. The scheduler aims to optimize resource performance and to approach a desired consumption profile, considered optimal. Several different models are analyzed, and each is tested in various environments.
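A minimal sketch of the kind of constraint at the heart of such a scheduler: a cumulative resource constraint requiring that overlapping jobs never exceed node capacity, combined with a greedy search for the earliest feasible start. The function names, the greedy strategy, and the job tuples are illustrative; the thesis's actual constraint-programming model is considerably richer.

```python
def feasible(jobs, capacity):
    """jobs: list of (start, duration, demand) tuples. Returns True if the
    cumulative resource constraint holds; since load only rises at a job's
    start, checking each start time suffices."""
    for t, _, _ in jobs:
        load = sum(d for s, dur, d in jobs if s <= t < s + dur)
        if load > capacity:
            return False
    return True

def earliest_start(duration, demand, scheduled, capacity, horizon=1000):
    """Greedy placement: first start time that keeps the schedule feasible."""
    for t in range(horizon):
        if feasible(scheduled + [(t, duration, demand)], capacity):
            return t
    return None  # no slot within the horizon

scheduled = [(0, 4, 3), (2, 3, 2)]  # (start, duration, resource demand)
print(earliest_start(duration=2, demand=4, scheduled=scheduled, capacity=6))
```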
Abstract:
BACKGROUND: Not all clinical trials are published, which may distort the evidence that is available in the literature. We studied the publication rate of a cohort of clinical trials and identified factors associated with publication and nonpublication of results. METHODS: We analyzed the protocols of randomized clinical trials of drug interventions submitted to the research ethics committee of University Hospital (Inselspital) Bern, Switzerland from 1988 to 1998. We identified full articles published up to 2006 by searching the Cochrane CENTRAL database (issue 02/2006) and by contacting investigators. We analyzed factors associated with the publication of trials using descriptive statistics and logistic regression models. RESULTS: 451 study protocols and 375 corresponding articles were analyzed. 233 protocols resulted in at least one publication, a publication rate of 52%. A total of 366 trials (81%) were commercially funded and 47 (10%) had non-commercial funding. 346 trials (77%) were multi-centre studies and 272 of these (79%) were international collaborations. In the adjusted logistic regression model, non-commercial funding (Odds Ratio [OR] 2.42, 95% CI 1.14-5.17), multi-centre status (OR 2.09, 95% CI 1.03-4.24), international collaboration (OR 1.87, 95% CI 0.99-3.55) and a sample size above the median of 236 participants (OR 2.04, 95% CI 1.23-3.39) were associated with full publication. CONCLUSIONS: In this cohort of applications to an ethics committee in Switzerland, only about half of clinical drug trials were published. Large multi-centre trials with non-commercial funding were more likely to be published than other trials, but most trials were funded by industry.
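For readers unfamiliar with how such odds ratios arise, the following sketch shows the arithmetic on a synthetic 2x2 table; the publication counts are invented for illustration (only the funding-group totals of 366 and 47 match the abstract), and the study's ORs are adjusted estimates from a multivariable logistic regression, where OR = exp(coefficient).

```python
import math

# Synthetic counts: published vs. unpublished by funding source.
pub_noncomm, unpub_noncomm = 32, 15     # non-commercial funding (total 47)
pub_comm,    unpub_comm    = 190, 176   # commercial funding (total 366)

odds_noncomm = pub_noncomm / unpub_noncomm
odds_comm = pub_comm / unpub_comm
or_unadjusted = odds_noncomm / odds_comm
print(f"unadjusted OR: {or_unadjusted:.2f}")

# Equivalently, a logistic-regression coefficient beta for the funding
# indicator satisfies OR = exp(beta):
beta = math.log(or_unadjusted)
print(f"beta: {beta:.3f}, exp(beta): {math.exp(beta):.2f}")
```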
Abstract:
Higher education institutions are increasingly using social software tools to support teaching and learning. Despite the fact that social software is often used in a social context, these applications can significantly contribute to the educational experience of a student. However, as the social software domain comprises a considerable diversity of tools, the respective tools can be expected to differ in the way they can contribute to teaching and learning. In this review on the educational use of social software, we systematically analyze and compare the diverse social software tools and identify their contributions to teaching and learning. By integrating established learning theory and the extant literature on the individual social software applications we seek to contribute to a theoretical foundation for social software use and the choice of tools. Case vignettes from several UK higher education institutions are used to illustrate the different applications of social software tools in teaching and learning.