900 results for Chunk-based information diffusion
Abstract:
The overall goal of the study was to describe the adoption of information technology (IT)-based patient education (PE) developed for patients' and nurses' use in psychiatric nursing. The data were collected in three phases during the period 2000-2006 in a variety of psychiatric settings in Finland. Firstly, the development process of IT-based PE for patients with schizophrenia spectrum psychosis was described. Secondly, nurses' adoption of IT-based PE and the variables explaining adoption were demonstrated; in addition, daily use of IT-based PE in clinical practice and factors associated with its use were identified and described. Thirdly, nurses' experiences of the IT-based PE after one year of clinical use were evaluated. The IT-based PE program was developed in several stages based on users' needs and included information and multimedia applications. Altogether, almost 500 IT-based PE sessions were carried out by the nurses on the study wards, and these revealed that nurses' activity in educating patients using IT varied and depended on the hospital in which they worked. Almost 80% of all possible IT-based PE sessions took place, involving 93 patients and 83 nurses. Less than 2% of the IT-based PE sessions were interrupted, and less than 10% suffered disturbances due to the patients or external causes. Moreover, the patients whose education took more days had poorer mental status than those whose education was carried out over a shorter period. After a year's experience, the nurses described advantages and disadvantages of the IT-based PE for both patients and nurses. IT-based PE can be used even on closed acute psychiatric wards with patients with serious mental health disorders. However, technology adoption requires time, and it must therefore fit in with clinical practice. Collaboration between users and developers is needed when developing user-centered methods in the area of mental health services. Moreover, it is important to understand the factors that affect IT adoption in healthcare settings. IT-based PE is one option in interactive and co-operative health care practice between patients and nurses. Therefore, staff should begin to refer patients to established, credible and well-maintained Internet sites that provide information on common psychological problems. Although every nurse should be trained and engaged to carry out IT-based PE, targeting the training especially at the most active nurses helps them support the less active ones. Adoption should also be understood from a perspective that includes the context in which it is implemented and examines how and in what circumstances it works.
Abstract:
The transition of project-based manufacturing business, increasingly into global networks, poses challenges for companies managing their business in this new operating environment. One way to tackle these challenges is the successful management of product information throughout an extended product lifecycle. Thus, one objective of this research is to find ways in which product information management in global project-based manufacturing can be improved. Another objective is to find a solution by which the target company can improve its product information management in the offer-to-procurement business process. Due to the nature of the topic, the study follows a constructive research methodology with qualitative methods. By combining literature related to this topic, a framework is created to improve product information management in global project-based manufacturing. The improvement process in this framework is based on a systematic approach from the current state towards the target state. A general aim for improvements should be integrated product and project lifecycle information management through a Lean approach. The framework introduced here is applied to the target company through two case projects. Data for building the view of the current state and for the analysis were collected mostly by theme interviews, supplemented by other material from the target company. The tools used to support the analysis were BPMN and a trace matrix for business chains. The results of the improvement process are collected in a solution proposal that contains the strategic target state as well as long- and short-term objectives. The strategic target state is defined as controlled customization. During the improvement process, an information requirements chart for the offer-to-procurement business process and a project-related initial information questionnaire for the customer were also created.
Abstract:
Affiliation: Département de biochimie, Faculté de médecine, Université de Montréal
Abstract:
A file entitled Charbonneau_Nathalie_2008_AnimationAnnexeT accompanies the thesis. It contains an animated sequence demonstrating the type of walkthrough that can be carried out within the digital environments that were developed. It is a compressed .wmv file.
Abstract:
Multispectral satellite images, particularly those with high spatial resolution (finer than 30 m on the ground), are an invaluable source of information for decision-making in various fields related to natural resource management, environmental protection, and the planning and management of urban centres. Study scales can range from local (resolutions finer than 5 m) to regional (resolutions coarser than 5 m). These images characterize the variation of object reflectance across the spectrum, which is the key information for a large number of applications of these data. However, satellite sensor measurements are also affected by "parasitic" factors related to illumination and viewing conditions, the atmosphere, the topography, and the sensor properties. Two questions concerned us in this research. What is the best approach for retrieving ground reflectances from the digital numbers recorded by the sensors, taking these parasitic factors into account? Is this retrieval a sine qua non condition for extracting reliable information from the images, whatever the problems specific to the different application domains (land mapping, environmental monitoring, landscape change tracking, resource inventories, etc.)? Research carried out over the last 30 years has produced a series of techniques for correcting the data for the effects of parasitic factors, some of which allow ground reflectances to be retrieved. Several questions, however, remain open, and others require further work in order, on the one hand, to improve the accuracy of the results and, on the other, to make these techniques more versatile by adapting them to a wider range of data acquisition conditions. A few of them can be mentioned: - How can atmospheric characteristics (notably aerosol particles) adapted to local and regional conditions be taken into account, rather than relying on default models that reflect long-term spatiotemporal trends but fit poorly to instantaneous, spatially restricted observations? - How can the "contamination" of the signal from the target object by signals coming from surrounding objects (the adjacency effect) be accounted for? This phenomenon becomes very important for images with resolutions finer than 5 m. - What are the effects of off-nadir sensor viewing angles, which are increasingly common since they offer better temporal resolution and the possibility of acquiring stereoscopic image pairs? - How can the efficiency of automatic processing and analysis techniques for multispectral images be increased over rugged and mountainous terrain, given the multiple effects of topographic relief on the remotely sensed signal? Moreover, despite the many demonstrations by researchers that the information extracted from satellite images can be degraded by all these parasitic factors, it must be acknowledged that radiometric corrections are still rarely applied on a routine basis, as geometric corrections are. For the latter, commercial remote sensing software packages offer versatile, powerful algorithms within users' reach.
Radiometric correction algorithms, when they are offered, remain inflexible black boxes that most of the time require expert users. The objectives we set for this research are the following: 1) Develop software for retrieving ground reflectance that addresses the questions raised above. This software had to be sufficiently modular so that it could be enhanced, improved, and adapted to various satellite image application problems; and 2) Apply this software in different contexts (urban, agricultural, forest) and analyze the results in order to assess the gain in accuracy of the information extracted from satellite images transformed into ground reflectance images, and consequently the need to proceed this way regardless of the application. Thus, through this research, we built a ground reflectance retrieval tool (the new version of the REFLECT software). This software is based on the formulation (and routines) of the 6S code (Second Simulation of the Satellite Signal in the Solar Spectrum) and on the dark target method for estimating aerosol optical depth (AOD), which is the most difficult factor to correct. Substantial improvements have been made to the existing models. These improvements mainly concern aerosol properties (integration of a more recent model, improved dark target search for AOD estimation), accounting for the adjacency effect using a specular reflection model, support for most of the high-resolution multispectral sensors (Landsat TM and ETM+, all SPOT 1 to 5 HR sensors, EO-1 ALI and ASTER) and very high resolution sensors (QuickBird and Ikonos) currently in use, and the correction of topographic effects using a model that separates the direct and diffuse components of solar radiation and also adapts to the forest canopy. Validation work showed that REFLECT retrieves ground reflectance with an accuracy of about ±0.01 reflectance units (for the visible, NIR and MIR spectral bands), even for surfaces with variable topography. Through simulations of apparent reflectance, this software made it possible to show how strongly the parasitic factors affecting image digital numbers can alter the useful signal, namely ground reflectance (errors of 10 to more than 50%). REFLECT was also used to assess the importance of using ground reflectances rather than raw digital numbers for various common remote sensing applications in the fields of classification, change detection, agriculture and forestry. In the majority of applications (change detection with multi-date images, use of vegetation indices, estimation of biophysical parameters, ...), image correction is a crucial operation for obtaining reliable results.
From a software point of view, REFLECT consists of a series of easy-to-use menus corresponding to the different steps: entering the scene inputs, computing gaseous transmittances, estimating AOD with the dark target method and, finally, applying the radiometric corrections to the image, notably through a fast option that can process a 5000-by-5000-pixel image in about 15 minutes. This research opens a number of avenues for further improvements of the models and methods in the field of radiometric corrections, notably the integration of the BRDF (bidirectional reflectance distribution function) into the formulation, the handling of translucent clouds through modelling of non-selective scattering, and the automation of the equivalent slopes method proposed for topographic corrections.
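To make the correction chain concrete, here is a minimal sketch (in Python) of its first, sensor-level step: converting raw digital numbers (DN) to top-of-atmosphere reflectance, the quantity that atmospheric (6S-type) and topographic corrections then transform into ground reflectance. The calibration gain/offset, solar irradiance, and geometry values below are placeholders, not coefficients used by REFLECT or 6S.

```python
# A minimal sketch, not REFLECT's implementation: DN -> at-sensor radiance -> TOA reflectance.
# Gain/offset, Esun, and geometry are illustrative placeholder values.
import numpy as np

def dn_to_toa_reflectance(dn, gain, offset, esun, sun_elevation_deg, earth_sun_dist_au):
    """Radiance L = gain*DN + offset; reflectance rho = pi*L*d^2 / (Esun*cos(theta_s))."""
    radiance = gain * dn + offset
    theta_s = np.deg2rad(90.0 - sun_elevation_deg)   # solar zenith angle
    return np.pi * radiance * earth_sun_dist_au**2 / (esun * np.cos(theta_s))

dn = np.array([[120, 135], [128, 140]], dtype=float)
rho_toa = dn_to_toa_reflectance(dn, gain=0.9, offset=-2.0, esun=1555.0,
                                sun_elevation_deg=45.0, earth_sun_dist_au=1.01)
print(rho_toa)
```

The atmospheric (aerosol, adjacency) and topographic terms handled by REFLECT would then be applied to this top-of-atmosphere quantity to obtain ground reflectance.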
Abstract:
Doctoral thesis carried out under joint supervision at the Department of History of the Université de Montréal and at the École doctorale d'archéologie of the Université Paris 1 Panthéon-Sorbonne - UMR 7041, Archéologies et Sciences de l'Antiquité - Archéologie du monde grec.
Abstract:
The thesis introduced the octree and addressed the full range of problems encountered while building an imaging system based on octrees. An efficient bottom-up recursive algorithm and its iterative counterpart are presented for the raster-to-octree conversion of CAT scan slices. To improve the speed of generating the octree from the slices, the possibility of exploiting the inherent parallelism in the conversion program is explored in this thesis. An octree node, which stores the volume information of a cube, often stores only the average density; this can lead to a "patchy" distribution of density during image reconstruction. In an attempt to alleviate this problem, the possibility of using vector quantization (VQ) to represent the information contained within a cube is explored. Considering how easily compression can be accommodated while generating octrees from CAT scan slices, the use of wavelet transforms to generate the compressed information in a cube is proposed. The modified algorithm for generating octrees from the slices is shown to accommodate the wavelet compression easily. Rendering the information stored in the form of an octree is a complex task, chiefly because of the requirement to display volumetric information. Rays traced through each cube in the octree sum up the density en route, accounting for the opacities and transparencies produced by variations in density.
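As a small illustration of the conversion described above, the sketch below (Python) builds an octree bottom-up from a stack of slices: one leaf per voxel, then repeated merging of 2x2x2 blocks whose densities agree. The Node layout, homogeneity tolerance, and synthetic slices are illustrative assumptions, not the thesis's data structures; its parallel and wavelet-compressed variants are not shown.

```python
# A minimal sketch, assuming a cubic 2^n volume; not the thesis's actual implementation.
import numpy as np

class Node:
    def __init__(self, value=None, children=None):
        self.value = value        # average density of a homogeneous cube (leaf)
        self.children = children  # 8 child nodes for a heterogeneous cube, else None

def build_octree(volume, tol=0.0):
    """Bottom-up merge: start with one leaf per voxel, then repeatedly group 2x2x2
    blocks, collapsing a block into a single leaf when its densities agree within tol."""
    grid = np.empty(volume.shape, dtype=object)
    for idx in np.ndindex(volume.shape):
        grid[idx] = Node(value=float(volume[idx]))
    while grid.shape[0] > 1:
        half = grid.shape[0] // 2
        parent = np.empty((half, half, half), dtype=object)
        for x, y, z in np.ndindex(parent.shape):
            kids = [grid[2*x+i, 2*y+j, 2*z+k]
                    for i in (0, 1) for j in (0, 1) for k in (0, 1)]
            if all(k.children is None for k in kids):
                vals = [k.value for k in kids]
                if max(vals) - min(vals) <= tol:
                    parent[x, y, z] = Node(value=sum(vals) / 8.0)
                    continue
            parent[x, y, z] = Node(children=kids)
        grid = parent
    return grid[0, 0, 0]

# Usage: stack 2D slices into a cubic volume (assumes 2^n slices of 2^n x 2^n pixels).
slices = [np.zeros((8, 8)) for _ in range(8)]
root = build_octree(np.stack(slices, axis=0))
print(root.value, root.children)  # fully homogeneous volume -> a single leaf
```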
Abstract:
Sharing of information with those in need of it has always been an idealistic goal of networked environments. With the proliferation of computer networks, information is so widely distributed among systems that it is imperative to have well-organized schemes for retrieval and also discovery. This thesis attempts to investigate the problems associated with such schemes and suggests a software architecture aimed towards achieving meaningful discovery. The use of information elements as a modelling base for efficient information discovery in distributed systems is demonstrated with the aid of a novel conceptual entity called the infotron. The investigations are focused on distributed systems and their associated problems. The study was directed towards identifying suitable software architecture and incorporating it in an environment where information growth is phenomenal and a proper mechanism for carrying out information discovery becomes feasible. An empirical study undertaken with the aid of an election database of geographically distributed constituencies provided the insights required. This is manifested in the Election Counting and Reporting Software (ECRS) System. The ECRS system is a software system, essentially distributed in nature, designed to prepare reports for district administrators about the election counting process and to generate other miscellaneous statutory reports. Most distributed systems of the nature of ECRS normally possess a "fragile architecture" that makes them liable to collapse with the occurrence of minor faults. This is resolved with the help of the proposed penta-tier architecture, which places five different technologies at different tiers of the architecture. The results of the experiment conducted and their analysis show that such an architecture helps keep the different components of the software intact, insulated from any internal or external faults. The architecture thus evolved needed a mechanism to support information processing and discovery. This necessitated the introduction of the novel concept of infotrons. Further, when a computing machine has to perform any meaningful extraction of information, it is guided by what is termed an infotron dictionary. The other empirical study was to find out which of the two prominent markup languages, HTML and XML, is better suited for the incorporation of infotrons. A comparative study of 200 documents in HTML and XML was undertaken; the result was in favor of XML. The concepts of the infotron and the infotron dictionary thus developed were applied to implement an Information Discovery System (IDS). IDS is essentially a system that starts with the infotron(s) supplied as clue(s) and brews the information required to satisfy the need of the information discoverer by utilizing the documents available at its disposal (as the information space). The various components of the system and their interaction follow the penta-tier architectural model and can therefore be considered fault-tolerant. IDS is generic in nature, and its characteristics and specifications were drawn up accordingly. Many subsystems interacted with multiple infotron dictionaries maintained in the system. In order to demonstrate the working of the IDS and to discover information without modification of a typical Library Information System (LIS), an Information Discovery in Library Information System (IDLIS) application was developed.
IDLIS is essentially a wrapper for the LIS, which maintains all the databases of the library. The purpose was to demonstrate that the functionality of a legacy system could be enhanced by augmenting it with IDS, leading to an information discovery service. IDLIS demonstrates IDS in action and proves that any legacy system could be effectively augmented with IDS to provide the additional functionality of an information discovery service. Possible applications of IDS and the scope for further research in the field are covered.
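As a rough illustration of driving discovery from infotron clues rather than raw keywords, the sketch below (Python) models an infotron as a named information element, an infotron dictionary as a mapping from infotrons to related terms, and discovery as ranking documents by overlap with those terms. These modelling choices, the dictionary contents, and the scoring rule are assumptions made for illustration only; they are not the thesis's formal definitions.

```python
# A rough sketch only: 'infotron' and 'infotron dictionary' are modelled as a clue term
# and a map from clues to related terms -- an assumption, not the thesis's definition.
from collections import Counter

INFOTRON_DICTIONARY = {
    "election": ["constituency", "counting", "ballot", "returning"],
    "library":  ["catalogue", "borrower", "accession", "isbn"],
}

def discover(infotrons, documents):
    """Rank documents by overlap with the terms the infotron dictionary
    associates with the supplied infotron clues."""
    terms = set()
    for clue in infotrons:
        terms.update(INFOTRON_DICTIONARY.get(clue, []))
        terms.add(clue)
    scores = Counter()
    for name, text in documents.items():
        scores[name] = len(terms & set(text.lower().split()))
    return [name for name, score in scores.most_common() if score > 0]

docs = {"doc1": "counting of ballot papers per constituency",
        "doc2": "weather report for the district"}
print(discover(["election"], docs))  # -> ['doc1']
```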
Abstract:
The goal of this work is to develop an Open Agent Architecture for multilingual information retrieval from a relational database. The query for information retrieval can be given in plain Hindi or Malayalam, two prominent regional languages of India. The system supports distributed processing of user requests through collaborating agents. Natural language processing techniques are used to extract meaning from the plain query, and the information is given back to the user in his/her native language. The system architecture is designed in a structured way so that it can be adapted to other regional languages of India.
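As a toy illustration of mapping a native-language query onto a relational database, the sketch below (Python, sqlite3) reduces "meaning extraction" to a small map from Hindi/Malayalam tokens to schema terms. The schema, the token mappings, and the query construction are assumptions for illustration only, not the agents or the NLP pipeline the thesis actually uses.

```python
# A minimal sketch, assuming a toy 'books' schema and a hand-made keyword map;
# not the thesis's agent architecture or NLP pipeline.
import sqlite3

KEYWORD_MAP = {
    "വില": "price",    # Malayalam token mapped to a column (assumed mapping)
    "कीमत": "price",   # Hindi token mapped to a column (assumed mapping)
    "पुस्तक": "books",  # Hindi token mapped to a table (assumed mapping)
}

def query_to_sql(plain_query: str) -> str:
    """Very simplified 'meaning extraction': map known native-language tokens
    to schema terms and build a SELECT statement."""
    terms = [KEYWORD_MAP.get(tok) for tok in plain_query.split()]
    columns = [t for t in terms if t == "price"]
    table = next((t for t in terms if t == "books"), "books")
    return f"SELECT {', '.join(columns) or '*'} FROM {table}"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (title TEXT, price REAL)")
conn.execute("INSERT INTO books VALUES ('Gitanjali', 150.0)")
print(conn.execute(query_to_sql("पुस्तक कीमत")).fetchall())  # -> [(150.0,)]
```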
Abstract:
A GIS has been designed with limited functionalities but with a novel approach in its design. The spatial data model adopted in the design of KBGIS is the unlinked vector model. Each map entity is encoded separately in vector form, without referencing any of its neighbouring entities. Spatial relations, in other words, are not encoded. This approach is adequate for routine analysis of geographic data represented on a planar map, and for their display (pages 105-106). Even though spatial relations are not encoded explicitly, they can be extracted through specially designed queries. This work was undertaken as an experiment to study the feasibility of developing a GIS using a knowledge base in place of a relational database. The source of input spatial data was accurate sheet maps that were manually digitised. Each identifiable geographic primitive was represented as a distinct object, with its spatial properties and attributes defined. Composite spatial objects, made up of primitive objects, were formulated based on production rules defining such compositions. The facts and rules were then organised into a production system using OPS5.
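As a small illustration of what the unlinked vector model implies, the sketch below (Python) stores each map entity as an independent object with no stored topology and derives a spatial relation only when a query asks for it. The entity names, attributes, and the bounding-box overlap test are illustrative assumptions, not the representation or OPS5 rules actually used in KBGIS.

```python
# A minimal sketch of the unlinked vector model: entities carry no neighbour links;
# relations are computed on demand. Data and the overlap test are illustrative.
from dataclasses import dataclass, field

@dataclass
class MapEntity:
    name: str
    kind: str                      # e.g. "road", "parcel"
    points: list                   # vertex list in map coordinates
    attributes: dict = field(default_factory=dict)

    def bbox(self):
        xs, ys = zip(*self.points)
        return min(xs), min(ys), max(xs), max(ys)

def overlaps(a: MapEntity, b: MapEntity) -> bool:
    """Spatial relation derived by a query (here: bounding boxes intersect)."""
    ax0, ay0, ax1, ay1 = a.bbox()
    bx0, by0, bx1, by1 = b.bbox()
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

entities = [
    MapEntity("NH-47", "road", [(0, 0), (10, 1)]),
    MapEntity("Parcel-12", "parcel", [(4, 0), (6, 0), (6, 2), (4, 2)]),
]
road = entities[0]
print([e.name for e in entities[1:] if e.kind == "parcel" and overlaps(road, e)])
```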
Abstract:
Integration of inputs by cortical neurons provides the basis for the complex information processing performed in the cerebral cortex. Here, we propose a new analytic framework for understanding integration within cortical neuronal receptive fields. Based on the synaptic organization of cortex, we argue that neuronal integration is a systems-level process better studied in terms of local cortical circuitry than at the level of single neurons, and we present a method for constructing self-contained modules which capture (nonlinear) local circuit interactions. In this framework, receptive field elements naturally have a dual (rather than the traditional unitary) influence, since they drive both excitatory and inhibitory cortical neurons. This vector-based analysis, in contrast to scalar approaches, greatly simplifies integration by permitting linear summation of inputs from both "classical" and "extraclassical" receptive field regions. We illustrate this by explaining two complex visual cortical phenomena that are incompatible with scalar notions of neuronal integration.
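A toy sketch of the dual-influence idea follows: each receptive-field element contributes a two-component (excitatory, inhibitory) drive, the contributions are summed linearly as vectors, and a nonlinear local-circuit readout is applied to the sum. The weights, the readout nonlinearity, and the stimulus values are assumptions for illustration, not the paper's model.

```python
# A minimal sketch, assuming toy weights and a toy readout; not the paper's model.
import numpy as np

def element_drive(contrast, w_exc, w_inh):
    """Dual influence of one receptive-field element: a 2-vector, not a scalar."""
    return np.array([w_exc * contrast, w_inh * contrast])

def circuit_response(total_drive):
    """Toy nonlinear local-circuit readout applied after linear vector summation."""
    exc, inh = total_drive
    return max(exc - inh, 0.0) ** 1.5

# Classical centre plus an extraclassical surround element, summed linearly as vectors.
centre   = element_drive(contrast=0.8, w_exc=1.0, w_inh=0.2)
surround = element_drive(contrast=0.5, w_exc=0.1, w_inh=0.6)
print(circuit_response(centre + surround))
```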
Abstract:
The automatic interpretation of conventional traffic signs is very complex and time consuming. This paper concerns an automatic warning system for driving assistance. It does not interpret the standard traffic signs on the roadside; the proposal is to incorporate into the existing signs another type of traffic sign whose information can be interpreted more easily by a processor. The type of information to be added is profuse, and therefore the most important objective is the robustness of the system. The basic proposal of this new philosophy is that the co-pilot system for automatic warning and driving assistance can more easily interpret the information contained in the new sign, whilst the human driver only has to interpret the "classic" sign. One of the codings that has been tested with good results, and which seems easy to implement, is a rectangular sign with 4 vertical bars of different colours. The size of these signs is equivalent to that of the conventional signs (approximately 0.4 m²). The colour information from the sign can be easily interpreted by the proposed processor, and the interpretation is much easier and quicker than that of the information shown by the pictographs of the classic signs.
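As an illustration of how such a bar-coded sign might be read, the sketch below (Python/NumPy) splits a cropped sign region into four vertical strips, averages each strip's colour, and matches it to the nearest reference colour. The palette and the synthetic test image are assumptions; the paper's actual colour coding and processor are not specified here.

```python
# A minimal sketch, assuming an already-cropped sign region and a toy palette;
# not the coding actually proposed in the paper.
import numpy as np

PALETTE = {"red": (255, 0, 0), "green": (0, 255, 0),
           "blue": (0, 0, 255), "yellow": (255, 255, 0)}

def decode_bars(sign_rgb: np.ndarray, n_bars: int = 4) -> list:
    """sign_rgb: H x W x 3 image region cropped to the sign; returns bar colour names."""
    strips = np.array_split(sign_rgb, n_bars, axis=1)
    labels = []
    for strip in strips:
        mean = strip.reshape(-1, 3).mean(axis=0)
        labels.append(min(PALETTE, key=lambda c: np.linalg.norm(mean - np.array(PALETTE[c]))))
    return labels

# Synthetic test sign: red | green | blue | yellow bars, each 10 pixels wide.
bars = [np.tile(np.array(PALETTE[c], dtype=float), (20, 10, 1))
        for c in ("red", "green", "blue", "yellow")]
print(decode_bars(np.concatenate(bars, axis=1)))  # -> ['red', 'green', 'blue', 'yellow']
```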