923 results for pacs: information technology applications
Abstract:
Most Internet search engines are keyword-based. They are not efficient for queries where geographical location is important, such as finding hotels within an area or close to a place of interest. A natural interface for spatial searching is a map, which can be used not only to display the locations of search results but also to assist in forming search conditions. A map-based search engine requires a well-designed visual interface that is intuitive to use yet flexible and expressive enough to support various types of spatial queries as well as aspatial queries. Similar to hyperlinks for text and images in an HTML page, spatial objects in a map should support hyperlinks. Such an interface needs to scale with the size of the geographical regions and the number of websites it covers. Although it typically handles a very large amount of spatial data, a map-based search interface should still meet the expectation of fast response times for interactive applications. In this paper we discuss general requirements and the design for a new map-based web search interface, focusing on integration with the WWW and on the visual spatial query interface. A number of current and future research issues are discussed, and a prototype for the University of Queensland is presented. (C) 2001 Published by Elsevier Science Ltd.
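As a rough illustration of the kind of spatial query such an interface must support, the sketch below (Python; all names and coordinates hypothetical, not from the paper) filters hyperlinked spatial objects by a map-window bounding box:

```python
from dataclasses import dataclass

@dataclass
class SpatialObject:
    name: str          # label shown on the map
    url: str           # hyperlink target, as for HTML anchors
    x: float           # longitude of the object
    y: float           # latitude of the object

def window_query(objects, x_min, y_min, x_max, y_max):
    """Return the objects whose location falls inside the map window."""
    return [o for o in objects
            if x_min <= o.x <= x_max and y_min <= o.y <= y_max]

sites = [
    SpatialObject("Hotel A", "http://example.com/hotel-a", 153.01, -27.49),
    SpatialObject("Museum B", "http://example.com/museum-b", 153.03, -27.47),
]
for hit in window_query(sites, 153.00, -27.50, 153.02, -27.48):
    print(hit.name, "->", hit.url)
```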
Abstract:
A literature review was conducted to investigate the extent to which telehealth has been researched within the domain of speech-language pathology and the outcomes of this research. A total of 13 studies were identified. Three early studies demonstrated that telehealth was feasible, although there was no discussion of the cost-effectiveness of the process in terms of patient outcomes. The majority of the subsequent studies indicated positive or encouraging outcomes resulting from telehealth. However, there were a number of shortcomings in the research, including a lack of cost-benefit information, a failure to evaluate the technology itself, an absence of studies of the educational and informational aspects of telehealth in relation to speech-language pathology, and the use of telehealth in only a limited range of communication disorders. Future research into the application of telehealth to speech-language pathology services must adopt a scientific approach and have a well-defined development and evaluation framework that addresses the effectiveness of the technique, patient outcomes and satisfaction, and the cost-benefit relationship.
Abstract:
The enormous amount of information generated through sequencing of the human genome has increased demands for more economical and flexible alternatives in genomics, proteomics and drug discovery. Many companies and institutions have recognised the potential of increasing the size and complexity of chemical libraries by producing large chemical libraries on colloidal support beads. Since colloid-based compounds in a suspension are randomly located, an encoding system such as optical barcoding is required to permit rapid elucidation of the compound structures. In this article we describe innovative methods for optical barcoding of colloids for use as support beads in both combinatorial and non-combinatorial libraries. We focus in particular on the difficult problem of barcoding extremely large libraries, which, if solved, will transform the manner in which genomics, proteomics and drug discovery research is currently performed.
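To give a feel for why barcoding extremely large libraries is hard, here is a back-of-the-envelope sketch of the optical code space, assuming (hypothetically) beads tagged with a number of spectrally distinct dyes, each at one of several distinguishable intensity levels:

```python
# Illustrative only: the code-space size for optical barcoding, assuming
# each bead carries `colours` spectrally distinct dyes, each at one of
# `levels` distinguishable intensity levels (numbers are hypothetical).
def code_space(colours: int, levels: int) -> int:
    """Number of distinct barcodes = levels ** colours."""
    return levels ** colours

for c in (3, 6, 10):
    print(f"{c} dyes x 10 intensity levels -> {code_space(c, 10):,} codes")
```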
Abstract:
One of the most important advantages of database systems is that the underlying mathematics is rich enough to specify very complex operations with a small number of statements in the database language. This research covers an aspect of biological informatics, the marriage of information technology and biology, involving the study of real-world phenomena using virtual plants derived from L-system simulation. L-systems were introduced by Aristid Lindenmayer as a mathematical model of multicellular organisms. Little consideration has been given to the problem of persistent storage for these simulations, and current procedures for querying data generated by L-systems for scientific experiments, simulations and measurements are also inadequate. To address these problems, this paper presents a generic data-modelling process (L-DBM) between L-systems and database systems. The paper shows how L-system productions can be generically and automatically represented in database schemas, and how a database can be populated from the L-system strings. It further describes the idea of pre-computing recursive structures in the data into derived attributes using compiler generation, and supplies a method for establishing a correspondence between biologists' terms and compiler-generated terms in a biologist's computing environment. Given a specific set of L-system productions and their declarations, the L-DBM can generate the corresponding schema, covering both the simple terminology correspondence and the complex recursive-structure data attributes and relationships.
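A minimal sketch of the L-DBM idea (illustrative only; the table and column names are assumptions, not the paper's generated schema): derive Lindenmayer's classic A -> AB, B -> A system and persist each module of each derivation step so it can be queried with SQL:

```python
import sqlite3

PRODUCTIONS = {"A": "AB", "B": "A"}   # Lindenmayer's classic example

def derive(axiom: str, steps: int):
    """Yield (step, string) for each derivation step of the L-system."""
    s = axiom
    for step in range(steps + 1):
        yield step, s
        s = "".join(PRODUCTIONS.get(ch, ch) for ch in s)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE module (step INTEGER, pos INTEGER, symbol TEXT)")
for step, string in derive("A", 4):
    db.executemany("INSERT INTO module VALUES (?, ?, ?)",
                   [(step, i, ch) for i, ch in enumerate(string)])

# Example query: how many modules of each symbol exist at step 4?
for row in db.execute("SELECT symbol, COUNT(*) FROM module "
                      "WHERE step = 4 GROUP BY symbol"):
    print(row)
```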
Abstract:
One of the most efficient approaches to generating the side information (SI) in distributed video codecs is motion compensated frame interpolation, where the current frame is estimated from past and future reference frames. However, this approach leads to significant spatial and temporal variations in the correlation noise between the source at the encoder and the SI at the decoder. In such a scenario, it is useful to design an architecture where the SI can be generated more robustly at the block level, avoiding the creation of SI frame regions with lower correlation, which are largely responsible for some of the coding efficiency losses. In this paper, a flexible framework to generate SI at the block level in two modes is presented: the first mode corresponds to a motion compensated interpolation (MCI) technique, while the second corresponds to a motion compensated quality enhancement (MCQE) technique in which a low-quality Intra block sent by the encoder is used to generate the SI by performing motion estimation with the help of the reference frames. For blocks where MCI produces SI with lower correlation, the novel MCQE mode can be advantageous overall from the rate-distortion point of view, even though some rate has to be invested in the low-quality Intra-coded blocks. The overall solution is evaluated in terms of RD performance, with improvements of up to 2 dB, especially for high-motion video sequences and long Group of Pictures (GOP) sizes.
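A toy sketch of the block-level mode decision described above (not the paper's codec; the block size, threshold and correlation proxy are invented): well-correlated blocks get MCI-style interpolated SI, while poorly correlated blocks fall back to an MCQE-like hint taken from a coarsely coded Intra block:

```python
import numpy as np

B = 8  # block size, hypothetical

def side_information(past, future, current_coarse, threshold=10.0):
    """Per-block SI: average co-located past/future blocks when they
    agree (MCI-style), otherwise use the coarse Intra hint (MCQE-like)."""
    h, w = past.shape
    si = np.empty_like(past, dtype=np.float64)
    modes = []
    for y in range(0, h, B):
        for x in range(0, w, B):
            p = past[y:y+B, x:x+B].astype(np.float64)
            f = future[y:y+B, x:x+B].astype(np.float64)
            if np.abs(p - f).mean() < threshold:      # well correlated
                si[y:y+B, x:x+B] = (p + f) / 2        # MCI-style guess
                modes.append("MCI")
            else:                                     # poorly correlated
                si[y:y+B, x:x+B] = current_coarse[y:y+B, x:x+B]
                modes.append("MCQE")
    return si, modes

rng = np.random.default_rng(0)
past = rng.integers(0, 256, (16, 16))
future = past + rng.integers(-5, 6, (16, 16))         # mild motion/noise
coarse = (past // 32) * 32                            # low-quality Intra hint
si, modes = side_information(past, future, coarse)
print(modes)
```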
Abstract:
Recently, several distributed video coding (DVC) solutions based on the distributed source coding (DSC) paradigm have appeared in the literature. Wyner-Ziv (WZ) video coding, a particular case of DVC where side information is made available at the decoder, enables a flexible distribution of the computational complexity between the encoder and decoder, promising to fulfil novel requirements of applications such as video surveillance, sensor networks and mobile camera phones. The quality of the side information at the decoder plays a critical role in determining the WZ video coding rate-distortion (RD) performance, notably in raising it to a level as close as possible to the RD performance of standard predictive video coding schemes. Towards this target, efficient motion search algorithms for powerful frame interpolation are much needed at the decoder. In this paper, the RD performance of a Wyner-Ziv video codec is improved by using novel, advanced motion compensated frame interpolation techniques to generate the side information. The development of this type of side information estimator is a difficult problem in WZ video coding, especially because the decoder only has some reference, decoded frames available. Based on the regularization of the motion field, novel side information creation techniques are proposed in this paper, along with a new frame interpolation framework able to generate higher-quality side information at the decoder. To illustrate the RD performance improvements, this novel side information creation framework has been integrated in a transform-domain, turbo-coding-based Wyner-Ziv video codec. Experimental results show that the novel side information creation solution leads to better RD performance than the available state-of-the-art side information estimators, with improvements of up to 2 dB; moreover, it outperforms H.264/AVC Intra by up to 3 dB with a lower encoding complexity.
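For illustration, here is a minimal motion compensated frame interpolation sketch in the spirit described above (assuming linear motion between the two decoded key frames; the block size, search range and SAD criterion are assumptions, and the paper's motion-field regularization is not modelled):

```python
import numpy as np

B, R = 8, 2   # block size and search range, hypothetical

def mcfi(past, future):
    """For each block, find a symmetric displacement d so that
    past(x+d) matches future(x-d); the SI block averages the matches."""
    h, w = past.shape
    si = np.zeros((h, w))
    for y in range(0, h, B):
        for x in range(0, w, B):
            best, best_cost = (0, 0), np.inf
            for dy in range(-R, R + 1):
                for dx in range(-R, R + 1):
                    if not (0 <= y+dy and y+dy+B <= h and
                            0 <= x+dx and x+dx+B <= w and
                            0 <= y-dy and y-dy+B <= h and
                            0 <= x-dx and x-dx+B <= w):
                        continue
                    p = past[y+dy:y+dy+B, x+dx:x+dx+B].astype(float)
                    f = future[y-dy:y-dy+B, x-dx:x-dx+B].astype(float)
                    cost = np.abs(p - f).sum()        # SAD matching cost
                    if cost < best_cost:
                        best, best_cost = (dy, dx), cost
            dy, dx = best
            si[y:y+B, x:x+B] = (
                past[y+dy:y+dy+B, x+dx:x+dx+B].astype(float)
                + future[y-dy:y-dy+B, x-dx:x-dx+B].astype(float)) / 2
    return si

rng = np.random.default_rng(1)
past = rng.integers(0, 256, (16, 16))
future = np.roll(past, (1, 1), axis=(0, 1))           # simple global motion
print(mcfi(past, future).shape)
```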
Abstract:
Results on the use of a double a-SiC:H p-i-n heterostructure for signal multiplexing and demultiplexing applications in the visible range are presented. Pulsed monochromatic beams together (multiplexing mode), or a single polychromatic beam (demultiplexing mode), impinge on the device and are absorbed according to their wavelength. Red, green and blue pulsed input channels are transmitted together, each with a specific transmission rate. The combined optical signal is analyzed by reading out the generated photocurrent under different applied voltages. Results show that in the multiplexing mode the output signal is balanced by the wavelength and transmission rate of each input channel, keeping the memory of the incoming optical carriers. In the demultiplexing mode the photocurrent is controlled by the applied voltage, allowing the transmitted information to be recovered. A physical model supported by numerical simulation gives insight into the device operation.
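A toy numerical illustration of the multiplexing/demultiplexing principle (the responsivity figures are invented, not the paper's measured or simulated values): because the two bias conditions weight the colour channels differently, the pair of photocurrent readings is unique for each RGB on/off combination:

```python
import itertools

GAIN = {            # hypothetical responsivity of each channel per bias
    "reverse": {"R": 1.0, "G": 0.7, "B": 0.3},
    "forward": {"R": 0.9, "G": 0.2, "B": 0.05},
}

def photocurrent(bias, r, g, b):
    """Photocurrent as a bias-dependent weighted sum of channel states."""
    w = GAIN[bias]
    return w["R"] * r + w["G"] * g + w["B"] * b

# Build a lookup table from (I_reverse, I_forward) back to the channels.
table = {(photocurrent("reverse", *s), photocurrent("forward", *s)): s
         for s in itertools.product((0, 1), repeat=3)}

sent = (1, 0, 1)    # R and B on, G off
reading = (photocurrent("reverse", *sent), photocurrent("forward", *sent))
print("decoded:", table[reading])   # -> (1, 0, 1)
```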
Abstract:
In this paper we present results on the optimization of device architectures for colour and imaging applications, using a device with a TCO/pinpi'n/TCO configuration. The effect of the applied voltage on the colour selectivity is discussed. Results show that the spectral response curves demonstrate rather good separation between the red, green and blue basic colours. By combining the information obtained under positive and negative applied bias, a colour image is acquired without colour filters or pixel architecture. A low-level image processing algorithm is used for the colour image reconstruction.
Abstract:
Object-oriented programming languages are presently the dominant paradigm of application development (e.g., Java, .NET). Lately, increasingly more Java applications have long (or very long) execution times and manipulate large amounts of data/information, gaining relevance in fields related to e-Science (with Grid and Cloud computing). Significant examples include Chemistry, Computational Biology and Bioinformatics, with many available Java-based APIs (e.g., Neobio). Often, when the execution of such an application is terminated abruptly because of a failure (regardless of whether the cause is a hardware or software fault, lack of available resources, etc.), all of the work it has already performed is simply lost; when the application is later re-initiated, it has to restart all of its work from scratch, wasting resources and time, while also being prone to another failure that may delay its completion with no deadline guarantees. Our proposed solution to these issues is to incorporate mechanisms for checkpointing and migration in a JVM. These make applications more robust and flexible, allowing them to move to other nodes without any intervention from the programmer. This article provides a solution for Java applications with long execution times by extending a JVM (the Jikes Research Virtual Machine) with such mechanisms. Copyright (C) 2011 John Wiley & Sons, Ltd.
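As a concept-level sketch only (the paper works inside the Jikes RVM, transparently to the programmer; this Python snippet merely illustrates application-level checkpointing with hypothetical names), periodic state persistence lets a restarted job resume where it stopped instead of starting from scratch:

```python
import os
import pickle

CHECKPOINT = "job.ckpt"   # hypothetical checkpoint file

def load_state():
    """Resume from the last checkpoint, or start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"i": 0, "total": 0}

def save_state(state):
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:           # write-then-rename keeps the
        pickle.dump(state, f)            # checkpoint atomic on failure
    os.replace(tmp, CHECKPOINT)

state = load_state()
for i in range(state["i"], 1_000_000):
    state["total"] += i                  # the "long computation"
    state["i"] = i + 1
    if i % 100_000 == 0:
        save_state(state)                # periodic checkpoint
save_state(state)
print(state["total"])
```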
Abstract:
7th Mediterranean Conference on Information Systems, MCIS 2012, Guimaraes, Portugal, September 8-10, 2012, Proceedings. Series: Lecture Notes in Business Information Processing, Vol. 129
Abstract:
To select each node by device and by context in urban computing, users have to enter their plan information and their requests into a computing environment (e.g., PDAs, smart devices, laptops) in advance, and they will try to maintain an optimized state between themselves and that environment. However, because of bad contexts, users may reach wrong decisions; one of the users' demands may therefore be to request a good server with higher security. To address this issue, we define the structure of Dynamic State Information (DSI), which builds security, including the relevant factors, into the sending and receiving of contexts, and which selects the best server during user movement based on the server quality and security states held in the DSI. Finally, whenever any of this information changes, users and devices receive notices that include the security factors, so an automatic reaction becomes possible; all users can therefore safely use all devices in urban computing.
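One possible rendering of the DSI idea as a data structure (the field names and scoring weights are illustrative assumptions, not the paper's definition):

```python
from dataclasses import dataclass

@dataclass
class DSI:
    server: str
    quality: float      # e.g. normalized bandwidth/latency score, 0..1
    security: float     # e.g. normalized security level, 0..1

def select_server(entries, w_quality=0.4, w_security=0.6):
    """Pick the server with the best weighted quality/security score."""
    return max(entries, key=lambda d: w_quality * d.quality
                                      + w_security * d.security)

dsi_feed = [DSI("node-a", 0.9, 0.3), DSI("node-b", 0.6, 0.9)]
print(select_server(dsi_feed).server)   # security-weighted -> node-b
```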
Abstract:
A voltage limiter circuit for indoor light energy harvesting applications is presented. This circuit is part of a larger system whose function is to harvest indoor light energy, process it and store it, so that it can be used at a later time. This processing consists of maximum power point tracking (MPPT) and stepping up the voltage from the photovoltaic (PV) harvester cell. The circuit described here ensures that even under strong illumination the generated voltage will not exceed the limit allowed by the technology, avoiding degradation or destruction of the integrated die. A prototype of the limiter circuit was designed in a 130 nm CMOS technology. The layout of the circuit has a total area of 23414 µm². Simulation results, using Spectre, are presented.
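A behavioural illustration of what the limiter guarantees (the actual work is a transistor-level 130 nm CMOS design; the 1.4 V limit here is a hypothetical technology bound):

```python
V_MAX = 1.4   # hypothetical technology limit, volts

def limited(v_in: float) -> float:
    """Ideal limiter: pass the input through, clamped at V_MAX."""
    return min(v_in, V_MAX)

for v in (0.6, 1.2, 1.9):   # weak, normal and strong illumination
    print(f"in = {v:.1f} V -> out = {limited(v):.2f} V")
```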
Abstract:
Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit the data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits well applications and services such as digital television and video storage, where the decoder complexity is critical, but does not match well the requirements of emerging applications such as visual sensor networks, where the encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility of developing the so-called Wyner-Ziv video codecs, following a different coding paradigm where it is the task of the decoder, and no longer of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty relative to the more traditional predictive coding paradigm (at least under certain conditions). In the context of Wyner-Ziv video codecs, the so-called side information, a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods, after proposing a classification taxonomy to guide the review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified. After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class and the evaluation of some of them lead to the important conclusion that which side information creation method provides the best rate-distortion (RD) performance depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solution for specific types of content. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
Master's degree in Electrical and Computer Engineering
Abstract:
Aims: This paper aims to address some of the main possible applications of current Nuclear Medicine imaging techniques and methodologies in the specific context of Sports Medicine, namely in two critical systems: the musculoskeletal and the cardiovascular. Discussion: At the musculoskeletal level, bone scintigraphy techniques have proved to be a functionally oriented means of diagnosis with high sensitivity compared with other, morphological imaging techniques in the detection and temporal evaluation of pathological situations, for instance allowing the acquisition of highly relevant information in athletes with stress fractures. On the other hand, infection/inflammation studies can add important value in characterizing specific situations and in the early diagnosis of potentially critical issues, giving the opportunity for precise, complete and fast solutions, while also allowing the evaluation and eventual optimization of training programs. At the cardiovascular level, Nuclear Medicine has proved to be crucial in the differential diagnosis between cardiac hypertrophy secondary to physical activity (the so-called "athlete's heart") and hypertrophic cardiomyopathy, in the diagnosis and prognosis of changes in cardiac function in athletes, and in the direct, non-invasive, in vivo visualization of sympathetic cardiac innervation, something that seems increasingly important nowadays, namely in trying to avoid sudden death episodes during intense physical effort. The clinical application of Positron Emission Tomography (PET) is also becoming more widely recognized as promising. Conclusions: It is concluded that Nuclear Medicine can become an important tool in Sports Medicine. Its well-established capability for the early detection of processes involving functional properties, allied to its high sensitivity and to current technical possibilities (namely hybrid imaging, which adds the information provided by high-resolution morphological imaging techniques such as CT and/or MRI), makes it a powerful diagnostic tool for an ever-wider range of clinical applications at all levels of sport activity. Since improvements in equipment characteristics and detection levels allow the use of ever-smaller doses, thus minimizing radiation exposure, the authors believe that increased use of Nuclear Medicine tools in the Sports Medicine area should be considered.