883 results for pacs: information technology applications
Abstract:
In the globalizing world, knowledge and information (and the social and technological settings for their production and communication) are now seen as keys to economic prosperity. The economy of a knowledge city creates value-added products using research, technology, and brainpower. The social benefit of knowledge-based urban development (KBUD), however, extends beyond aggregate economic growth.
Abstract:
Most social network users hold more than one social network account and use each in a different way depending on the digital context: friendly chat on Facebook, professional discussion on LinkedIn, and health information exchange on PatientsLikeMe, for example. Many web users therefore need to manage many disparate profiles across many distributed online sources. Maintaining these profiles is cumbersome, time-consuming, and inefficient, and leads to lost opportunities. In this paper we propose a framework for multiple-profile management of online social networks and showcase a demonstrator built on an open source platform. The result of the research enables a user to create and manage an integrated profile and share/synchronise it with their social networks. A number of use cases were created to capture the functional requirements and describe the interactions between users and the online services. An innovative application of this project is in public health informatics: we use the prototype to examine how the framework can benefit patients and physicians. The framework can greatly enhance health information management for patients and, more importantly, offer physicians a more comprehensive overview of a patient's personal health.
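The integrated-profile idea described above can be sketched as a simple merge-and-project over per-network profile records. This is a minimal illustration only: the field names, network names, and function names below are hypothetical, not taken from the paper's framework.

```python
# Hypothetical sketch of an integrated profile: merge per-network
# profiles, then project the result back onto one network's fields.

def merge_profiles(profiles):
    """Merge per-network profiles into one integrated profile.

    `profiles` maps network name -> dict of profile fields; the first
    network to supply a field wins, later ones only fill in gaps.
    """
    integrated = {}
    for network, fields in profiles.items():
        for key, value in fields.items():
            integrated.setdefault(key, value)
    return integrated

def sync_to_network(integrated, wanted_fields):
    """Project the integrated profile onto the fields one network uses."""
    return {k: integrated[k] for k in wanted_fields if k in integrated}

profiles = {
    "facebook": {"name": "A. User", "hobbies": "chess"},
    "linkedin": {"name": "A. User", "title": "Engineer"},
}
merged = merge_profiles(profiles)
```

A single integrated record like `merged` is what would then be shared or synchronised back to each network via `sync_to_network`.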
Abstract:
Focuses on the various aspects of advances in future information communication technology and its applications. Presents the latest issues and progress in the area of future information communication technology. Applicable to both researchers and professionals. These proceedings are based on the 2013 International Conference on Future Information & Communication Engineering (ICFICE 2013), which will be held in Shenyang, China, from June 24-26, 2013. The conference is open to participants from all over the world, and participation from the Asia-Pacific region is particularly encouraged. The focus of this conference is on all technical aspects of electronics, information, and communications. ICFICE-13 will provide an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of FICE. In addition, the conference will publish high-quality papers which are closely related to the various theories and practical applications in FICE. Furthermore, we expect that the conference and its publications will be a trigger for further related research and technology improvements in this important subject. "This work was supported by the NIPA (National IT Industry Promotion Agency) of Korea Grant funded by the Korean Government (Ministry of Science, ICT & Future Planning)."
Abstract:
This practice framework is designed for health practitioners and allied health care workers. The framework provides empirically-based descriptions of ageing Australians’ experiences of health information literacy and suggests how these may provide a foundation for helping ageing Australians enhance their health information literacy. Health information literacy is understood here to be people’s use of relevant information to learn about health.
Abstract:
Building information models are increasingly being utilised for facility management of large facilities such as critical infrastructures. In such environments, it is valuable to utilise the vast amount of data contained within the building information models to improve access control administration. The use of building information models in access control scenarios can provide 3D visualisation of buildings as well as many other advantages such as automation of essential tasks including path finding, consistency detection, and accessibility verification. However, there is no mathematical model for building information models that can be used to describe and compute these functions. In this paper, we show how graph theory can be utilised as a representation language of building information models and the proposed security related functions. This graph-theoretic representation allows for mathematically representing building information models and performing computations using these functions.
Abstract:
Background: With the advances in DNA sequencer-based technologies, it has become possible to automate several steps of the genotyping process, leading to increased throughput. To efficiently handle the large amounts of genotypic data generated and help with quality control, there is a strong need for a software system that can help with the tracking of samples and the capture and management of data at different steps of the process. Such systems, while serving to manage the workflow precisely, also encourage good laboratory practice by standardizing protocols and recording and annotating data from every step of the workflow. Results: A laboratory information management system (LIMS) has been designed and implemented at the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) that meets the requirements of a moderately high-throughput molecular genotyping facility. The application is modular and is simple to learn and use. It leads the user through each step of the process, from starting an experiment to storing the output data from the genotype detection step with auto-binning of alleles, thus ensuring that every DNA sample is handled in an identical manner and all the necessary data are captured. The application keeps track of DNA samples and generated data. Data entry into the system is through forms for file uploads. The LIMS provides functions to trace any genotypic data back to the electrophoresis gel files or sample source, and to repeat experiments. The LIMS is presently used for the capture of high-throughput SSR (simple-sequence repeat) genotyping data from the legume (chickpea, groundnut and pigeonpea) and cereal (sorghum and millets) crops of importance in the semi-arid tropics. Conclusions: A laboratory information management system is available that has been found useful in the management of microsatellite genotype data in a moderately high-throughput genotyping laboratory.
The application with source code is freely available for academic users and can be downloaded from http://www.icrisat.org/bt-software-d-lims.htm
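The sample-tracking and trace-back functions described above follow a common LIMS pattern: each sample accumulates an ordered audit trail as it moves through workflow steps. The sketch below is a generic illustration of that pattern under assumed names (`Sample`, the step labels, the file name); it is not code from the ICRISAT system.

```python
# Illustrative LIMS sample-tracking sketch: every workflow step appends
# to a sample's history, so any datum can be traced back to its step.
class Sample:
    def __init__(self, sample_id):
        self.sample_id = sample_id
        self.history = []   # ordered (step, data) pairs

    def record(self, step, data):
        self.history.append((step, data))

    def trace_back(self, step):
        """Return the data captured at a step (e.g. the gel file), or None."""
        for s, data in self.history:
            if s == step:
                return data
        return None

s = Sample("DNA-0001")
s.record("extraction", {"plate": "P1", "well": "A1"})
s.record("electrophoresis", {"gel_file": "gel_42.tif"})
```

Because the history is append-only and ordered, repeating an experiment is just replaying the recorded steps, which is the behaviour the abstract attributes to the LIMS.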
Abstract:
Quantum ensembles form easily accessible architectures for studying various phenomena in quantum physics, quantum information science and spectroscopy. Here we review some recent protocols for measurements in quantum ensembles by utilizing ancillary systems. We also illustrate these protocols experimentally via nuclear magnetic resonance techniques. In particular, we shall review noninvasive measurements, extracting expectation values of various operators, characterizations of quantum states and quantum processes, and finally quantum noise engineering.
Abstract:
188 p.
Abstract:
B:RUN is a low-level GIS software designed to help formulate options for the management of the coastal zone of Brunei Darussalam. This contribution presents the oil spill simulation module of B:RUN. This simple module, based largely on wind and sea surface current vector parameters, may be helpful in formulating relevant oil spill contingency plans. It can be easily adapted to other areas, as can the B:RUN software itself.
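A drift model driven by wind and current vectors, as the module is described, can be sketched in a few lines. The wind factor of 3% is a value commonly used in simple spill models, and all numbers below are illustrative assumptions; the actual B:RUN parameterization is not reproduced here.

```python
# Minimal wind-plus-current drift sketch in the spirit of the B:RUN
# oil spill module. A slick particle moves with the surface current
# plus a small fraction of the wind vector ("wind factor").

WIND_FACTOR = 0.03  # assumed fraction of wind speed imparted to the slick

def drift(pos, wind, current, dt, wind_factor=WIND_FACTOR):
    """Advance a particle (x, y) in metres by one time step of dt seconds."""
    vx = current[0] + wind_factor * wind[0]
    vy = current[1] + wind_factor * wind[1]
    return (pos[0] + vx * dt, pos[1] + vy * dt)

pos = (0.0, 0.0)
for _ in range(10):  # ten one-hour steps
    pos = drift(pos, wind=(10.0, 0.0), current=(0.2, 0.1), dt=3600.0)
```

Stepping many such particles from a release point, under time-varying wind and current fields, yields the kind of spill trajectory a contingency plan would be built around.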
Abstract:
This contribution is the first part of a four-part series documenting the development of B:RUN, a software program which reads data from common spreadsheets and presents them as low-resolution maps of states and processes. The program emerged from a need which arose during a project in Brunei Darussalam for a 'low level' approach for researchers to communicate findings as efficiently and expeditiously as possible. Part I provides an overview of the concept and design elements of B:RUN. Part II will highlight results of the economics components of the program, evaluating different fishing regimes, sailing distances from ports and fleet operating costs. Environmental aspects will be presented in Part III in the form of overlay maps. Part IV will summarize the implications of B:RUN results for coastal and fishery resources management in Brunei Darussalam and show how this approach can be adapted to other coastlines and used as a teaching and training tool. The following three parts will be published in future editions of Naga, the ICLARM Quarterly. The program is available through ICLARM.
Abstract:
Digitization is the main feature of modern Information Science. By conjoining digits and coordinates, Information Science becomes intimately related to high-dimensional space, and information problems are transformed into geometry problems in high-dimensional spaces. From this basic idea, we propose Computational Information Geometry (CIG) for information analysis and processing. Two applications of CIG are given: blurred image restoration and pattern recognition. Experimental results are satisfactory. This paper also introduces how groups of simple operators on 2D planes can be combined to implement geometrical computations in high-dimensional space. Many of the algorithms have been implemented in software.
Abstract:
The Continuous Plankton Recorder (CPR) survey was conceived from the outset as a programme of applied research designed to assist the fishing industry. Its survival and continuing vigour after 70 years is a testament to its utility, which has been achieved in spite of great changes in our understanding of the marine environment and in our concerns over how to manage it. The CPR has been superseded in several respects by other technologies, such as acoustics and remote sensing, but it continues to provide unrivalled seasonal and geographic information about a wide range of zooplankton and phytoplankton taxa. The value of this coverage increases with time and provides the basis for placing recent observations into the context of long-term, large-scale variability and thus suggesting what the causes are likely to be. Information from the CPR is used extensively in judging environmental impacts and producing quality status reports (QSR); it has shown the distributions of fish stocks, which had not previously been exploited; it has pointed to the extent of ungrazed phytoplankton production in the North Atlantic, which was a vital element in establishing the importance of carbon sequestration by phytoplankton. The CPR continues to be the principal source of large-scale, long-term information about the plankton ecosystem of the North Atlantic. It has recently provided extensive information about the biodiversity of the plankton and about the distribution of introduced species. It serves as a valuable example for the design of future monitoring of the marine environment and it has been essential to the design and implementation of most North Atlantic plankton research.
Abstract:
The iterative nature of turbo-decoding algorithms increases their complexity compared to conventional FEC decoding algorithms. The two iterative decoding algorithms, the Soft-Output Viterbi Algorithm (SOVA) and the Maximum A Posteriori Probability (MAP) algorithm, require complex decoding operations over several iteration cycles. So, for real-time implementation of turbo codes, reducing the decoder complexity while preserving bit-error-rate (BER) performance is an important design consideration. In this chapter, a modification to the Max-Log-MAP algorithm is presented. The modification scales the extrinsic information exchanged between the constituent decoders. The remainder of this chapter is organized as follows: an overview of the turbo encoding and decoding processes, the MAP algorithm, and its simplified versions, the Log-MAP and Max-Log-MAP algorithms, is presented in Section 1. The extrinsic information scaling is introduced, simulation results are presented, and the performance of different methods of choosing the best scaling factor is discussed in Section 2. Section 3 discusses trends and applications of turbo coding from the perspective of wireless applications.
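The scaling step itself is simple to sketch: between the two constituent decoders, the extrinsic log-likelihood ratios produced by Max-Log-MAP are multiplied by a constant factor before being used as a priori information by the other decoder. The factor 0.7 below is a value commonly reported in the literature for this technique, and the LLR values are illustrative; neither is taken from the chapter's simulations.

```python
# Hedged sketch of extrinsic-information scaling for Max-Log-MAP:
# attenuate the over-optimistic extrinsic LLRs by a constant factor
# before passing them to the other constituent decoder.

SCALE = 0.7  # assumed scaling factor; the chapter evaluates how to choose it

def scale_extrinsic(llrs, factor=SCALE):
    """Scale extrinsic log-likelihood ratios exchanged between decoders."""
    return [factor * llr for llr in llrs]

extrinsic = [2.0, -1.0, 4.0]           # illustrative output of decoder 1
a_priori = scale_extrinsic(extrinsic)  # fed to decoder 2 as a priori LLRs
```

Because Max-Log-MAP's max approximation makes its extrinsic values too confident, damping them this way typically recovers part of the BER gap to full Log-MAP at negligible extra complexity.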