819 results for Task-based information access
Abstract:
The Swedish public health care organisation could very well be undergoing its most significant change since its specialisation during the late 19th and early 20th century. At the heart of this change is a move from manual patient journals to electronic health records (EHR). EHR are complex, integrated, organisation-wide information systems (IS) that promise great benefits and value while presenting great challenges to the organisation. The Swedish public health care is not the first organisation to implement integrated IS, and it is by no means alone in its quest to realise the potential benefits and value they have to offer. As organisations invest in IS they embark on a journey of value creation and capture: a journey where a cost-based approach towards IS-investments is replaced with a value-centric focus, and where the main challenges lie in the practical day-to-day task of finding ways to intertwine technology, people and business processes. This has, however, proven to be a problematic task. The problems arise from a shift of perspective regarding how to manage IS in order to gain value: a shift from technology delivery to benefits delivery, from an IS-implementation plan to a change management plan. The shift gives rise to challenges related to the elusiveness of value and the inability of IS alone to deliver it. As a response to these challenges the field of IS-benefits management has emerged, offering a framework and a process to better understand and formalise benefits realisation activities. In this thesis the benefits realisation efforts of three Swedish hospitals within the same county council are studied. The thesis focuses on the participants of benefits analysis projects: their perceptions, judgments, negotiations and descriptions of potential benefits. The purpose is to address the process through which organisations seek to identify which potential IS-benefits to pursue and realise, in order to better understand what affects the process so that realisation actions of potential IS-benefits can be supported. A qualitative case study research design is adopted, providing a framework for sample selection, data collection and data analysis, as well as for discussions of validity, reliability and generalisability. Findings displayed a benefits fluctuation: participants' perception of what constituted potential benefits and value changed throughout the formal benefits management process. Issues like structure, knowledge, expectation and experience affected perception differently, and this in the end changed the amount and composition of potential benefits and value. Five dimensions of benefits judgment were identified and used by participants when finding accommodations of potential benefits and value to pursue. The identified dimensions affected participants' perceptions, which in turn affected the amount and composition of potential benefits. During the formal benefits management process participants shifted between judgment dimensions; these movements emerged through debates and interactions between participants. Judgments based on what was perceived as expected due to one's role, and on what was perceived as best for the organisation as a whole, were the two dominant benefits judgment dimensions. A benefits negotiation was identified. Negotiations were divided into two main categories, rational and irrational, depending on participants' drive when initiating and participating in negotiations.
In each category three different types of negotiations were identified, each with different characteristics and generating different outcomes. A benefits negotiation process was also identified, displaying management challenges corresponding to its five phases. A discrepancy was also found between how IS-benefits are spoken of and how actions of IS-benefits realisation are understood: a discrepancy between an evaluation focus and a realisation focus towards IS value creation. An evaluation focus described IS-benefits as well-defined and measurable effects, while a realisation focus spoke of establishing and managing an ongoing place of value creation. The notion of a valuescape was introduced in order to describe and support the understanding of IS value creation. The valuescape corresponded to a realisation focus and outlined a value configuration consisting of activities, logic, structure, drivers and the role of IS.
Abstract:
A paper-and-pencil digit-comparison task for assessing negative priming (NP) was introduced, using a referent-size-selection procedure that was demonstrated to enhance the effect. NP is indicated by slower responses to recently ignored items, and is proposed within the clinical-experimental framework as a major cognitive index of active suppression of distracting information, critical to executive functioning. The digit-comparison task requires circling digits of a list with digit-asterisk pairs (a baseline measure for digit selection), and the larger of two digits in each pair of the unrelated list (with different digits in successive digit pairs) and the related list (in which the smaller digit subsequently became a target). A total of 56 students (18-38 years) participated in two experiments that explored practice effects across lists and demonstrated reliable NP, i.e., slower completion of the related list relative to the unrelated list (F(2, 44) = 52.42, P < 0.0001). A third experiment examined age-related effects. In the paper-and-pencil digit-comparison task, NP was reliable for the younger (N = 8, 18-24 years) and middle-aged adults (N = 8, 31-54 years), but absent for the older group (N = 8, 68-77 years). NP was also reduced with aging in a computer-implemented digit-comparison task, but preserved in a task typically used to test location-specific NP, accounting for the dissociation between identity- and spatial-based suppression of distractors (Rao R(3, 12) = 16.02, P < 0.0002). Since the paper-and-pencil digit-comparison task can be administered easily, it can be useful for neuropsychologists seeking practical measures of NP that do not require cumbersome technical equipment.
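As a quick check on the reported list-type effect, the tail probability of the F statistic quoted above can be computed directly; a minimal sketch in Python, assuming SciPy (the F value and degrees of freedom are taken from the abstract, nothing else is):

    from scipy import stats

    # Reported list-type effect: F(2, 44) = 52.42
    p = stats.f.sf(52.42, dfn=2, dfd=44)
    print(f"p = {p:.2e}")  # about 2e-12, consistent with the reported P < 0.0001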
Abstract:
Recent advances in Information and Communication Technology (ICT), especially those related to the Internet of Things (IoT), are facilitating smart regions. Among the many services a smart region can offer, remote health monitoring is a typical application of the IoT paradigm. It offers the ability to continuously monitor and collect health-related data from a person, and to transmit the data to a remote entity (for example, a healthcare service provider) for further processing and knowledge extraction. An IoT-based remote health monitoring system can be beneficial in rural areas of the smart region, where people have limited access to regular healthcare services. The same system can be beneficial in urban areas, where hospitals can be overcrowded and where it may take substantial time to obtain healthcare. However, such a system may generate a large amount of data. In order to realize an efficient IoT-based remote health monitoring system, it is imperative to study the network communication needs of such a system, in particular the bandwidth requirements and the volume of generated data. The thesis studies a commercial product for remote health monitoring in Skellefteå, Sweden. Based on the results obtained via the commercial product, the thesis identifies the key network-related requirements of a typical remote health monitoring system in terms of real-time event update, bandwidth requirements and data generation. Furthermore, the thesis proposes an architecture called IReHMo - an IoT-based remote health monitoring architecture. This architecture allows users to incorporate several types of IoT devices to extend the sensing capabilities of the system. Using IReHMo, several IoT communication protocols, such as HTTP, MQTT and CoAP, have been evaluated and compared against each other. Results showed that CoAP is the most efficient protocol for transmitting small-sized healthcare data to remote servers. The combination of IReHMo and CoAP significantly reduced the required bandwidth as well as the volume of generated data (by up to 56 percent) compared to the commercial product. Finally, the thesis conducted a scalability analysis to determine the feasibility of deploying the IReHMo-CoAP combination at large scale in regions of northern Sweden.
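To illustrate the kind of lightweight transfer the abstract credits to CoAP, here is a minimal client sketch in Python, assuming the aiocoap library; the server URI, resource path and JSON payload shape are illustrative placeholders, not details taken from IReHMo:

    import asyncio
    import json
    from aiocoap import Context, Message, POST

    async def send_reading(reading: dict):
        # One small health observation (e.g. heart rate) posted over CoAP/UDP
        ctx = await Context.create_client_context()
        msg = Message(code=POST,
                      uri="coap://monitor.example/health/readings",  # hypothetical endpoint
                      payload=json.dumps(reading).encode())
        resp = await ctx.request(msg).response
        print(resp.code, resp.payload.decode())

    asyncio.run(send_reading({"sensor": "hr", "bpm": 72}))

Compared with HTTP over TCP, each such request carries only a few bytes of header overhead, which is the property that favours CoAP for small, frequent sensor readings.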
Abstract:
Object detection is a fundamental task of computer vision that is utilized as a core part in a number of industrial and scientific applications, for example in robotics, where objects need to be correctly detected and localized prior to being grasped and manipulated. Existing object detectors vary in (i) the amount of supervision they need for training, (ii) the type of learning method adopted (generative or discriminative) and (iii) the amount of spatial information used in the object model (model-free, using no spatial information in the object model, or model-based, with an explicit spatial model of an object). Although some existing methods report good performance in the detection of certain objects, the results tend to be application specific, and no universal method has been found that clearly outperforms all others in all areas. This work proposes a novel generative part-based object detector. The generative learning procedure of the developed method allows learning from positive examples only. The detector is based on finding semantically meaningful parts of the object (i.e. a part detector) that can provide information in addition to object location, for example pose. The object class model, i.e. the appearance of the object parts and their spatial variance (constellation), is explicitly modelled in a fully probabilistic manner. The appearance is based on bio-inspired complex-valued Gabor features that are transformed into part probabilities by an unsupervised Gaussian Mixture Model (GMM). The proposed novel randomized GMM enables learning from only a few training examples. The probabilistic spatial model of the part configurations is constructed with a mixture of 2D Gaussians. The appearance of the parts of the object is learned in an object canonical space that removes geometric variations from the part appearance model. Robustness to pose variations is achieved by object pose quantization, which is more efficient than previously used scale and orientation shifts in the Gabor feature space. The performance of the resulting generative object detector is characterized by high recall with low precision, i.e. the generative detector produces a large number of false positive detections. Thus a discriminative classifier is used to prune false positive candidate detections produced by the generative detector, improving its precision while keeping recall high. Using only a small number of positive examples, the developed object detector performs comparably to state-of-the-art discriminative methods.
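As a rough illustration of the appearance step described above (feature vectors mapped to soft part probabilities by an unsupervised Gaussian mixture), here is a minimal sketch in Python using scikit-learn; the feature dimensionality, number of parts and random data are placeholders, and neither the thesis's randomized GMM variant nor its Gabor features are reproduced:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    feats = rng.normal(size=(500, 8))   # stand-in for Gabor feature vectors at 500 image locations

    # Unsupervised mixture over feature space; each component plays the role of a "part"
    gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0).fit(feats)
    part_probs = gmm.predict_proba(feats)  # soft part probabilities per location
    print(part_probs.shape)                # (500, 5)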
Abstract:
This work presents a synopsis of efficient strategies used in power management for achieving the most economical power and energy consumption in multicore systems, FPGA and NoC platforms. A practical approach was taken in an effort to validate the significance of the Adaptive Power Management Algorithm (APMA) proposed for the system developed for this thesis project. The system comprises an arithmetic and logic unit, up and down counters, an adder, a state machine and a multiplexer. The essence of carrying out this project was, firstly, to develop a system to be used for this power management project; secondly, to perform area and power synopsis of the system on various scalable technology platforms, UMC 90 nm nanotechnology at 1.2 V, UMC 90 nm nanotechnology at 1.32 V and UMC 0.18 μm nanotechnology at 1.80 V, in order to examine the difference in area and power consumption of the system across the platforms; and thirdly, to explore various strategies that can be used to reduce the system's power consumption and to propose an adaptive power management algorithm for doing so. The strategies introduced in this work comprise Dynamic Voltage and Frequency Scaling (DVFS) and task parallelism. After the system development, it was run on an FPGA board, basically NoC platforms, and on the various technology platforms above. The system synthesis was successfully accomplished, the simulated result analysis shows that the system meets all functional requirements, and the power consumption and area utilization were recorded and analyzed in chapter 7 of this work. This work extensively reviewed various strategies for managing power consumption drawn from quantitative research works by many researchers and companies; it is a mixture of study analysis and experimental lab work, and it condenses and presents the basic concepts of power management strategy from quality technical papers.
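Since the abstract leans on DVFS, a worked example of the underlying scaling relation may help; a minimal sketch assuming the classic CMOS dynamic-power model P = C·V²·f, where the capacitance and frequency values are purely illustrative and only the two supply voltages come from the platforms above:

    def dynamic_power(c_eff, v_dd, freq):
        """Classic CMOS dynamic power: P = C_eff * V_dd^2 * f."""
        return c_eff * v_dd ** 2 * freq

    C_EFF = 1e-9   # effective switched capacitance in farads (illustrative)
    FREQ = 100e6   # clock frequency in hertz (illustrative)

    p_low, p_high = (dynamic_power(C_EFF, v, FREQ) for v in (1.2, 1.32))
    print(f"1.32 V / 1.2 V power ratio: {p_high / p_low:.2f}")  # (1.32/1.2)^2 = 1.21

The quadratic dependence on supply voltage is why lowering V_dd (and frequency with it) is the main lever DVFS pulls.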
Abstract:
In Canada freedom of information must be viewed in the context of governing: how do you deal with an abundance of information while balancing a diversity of competing interests? How can you ensure people are informed enough to participate in crucial decision-making, yet willing enough to let some administrative matters be dealt with in camera without their involvement in every detail? In an age when taxpayers' coalition groups are on the rise, and the government is encouraging the establishment of Parent Council groups for schools, the issues and challenges presented by access to information and protection of privacy legislation are real ones. The province of Ontario's decision to extend freedom of information legislation to local governments does not ensure, or equate to, full public disclosure of all facts, nor does it necessarily guarantee complete public comprehension of an issue. The mere fact that local governments, like school boards, decide to collect, assemble or record some information and not other information implies that a prior decision was made by "someone" on what was important to record or keep. That in itself means that not all the facts are going to be disclosed, regardless of the presence of legislation. The resulting lack of information can lead to public mistrust and a lack of confidence in those who govern. This is completely contrary to the spirit of the legislation, which was to provide interested members of the community with facts so that values like political accountability and trust could be ensured and meaningful criticism and input obtained on matters affecting the whole community. This thesis first reviews the historical reasons for adopting freedom of information legislation, reasons which are rooted in our parliamentary system of government. However, the same reasoning for enacting such legislation cannot be applied carte blanche to the municipal level of government in Ontario, or more specifically to the programs, policies or operations of a school board. The purpose of this thesis is to examine whether the Municipal Freedom of Information and Protection of Privacy Act, 1989 (MFIPPA) was a necessary step to ensure greater openness from school boards. Based on a review of the Orders made by the Office of the Information and Privacy Commissioner/Ontario, it also assesses how successfully freedom of information legislation has been implemented at the municipal level of government. The Orders provide an opportunity to review what problems school boards have encountered and what guidance the Commissioner has offered. Reference is made to a value framework as an administrative tool for critically analyzing the suitability of MFIPPA to school boards. The conclusion is drawn that MFIPPA appears to have inhibited rather than facilitated openness in local government. This may be attributed to several factors, including the general uncertainty, confusion and discretion in interpreting various provisions and exemptions in the Act. Some of the uncertainty is due to the fact that an insufficient number of school board staff are familiar with the Act. The complexity of the Act and its legalistic procedures have over-formalized the processes of exchanging information. In addition there appears to be a concern among municipal officials that granting any access to information may violate the personal privacy rights of others. These concerns translate into indecision and extreme caution in responding to inquiries.
The result is delay in responding to information requests and a lack of uniformity in the responses given. However, the mandatory review of the legislation does afford an opportunity to address some of these problems and to make this complex Act more suitable for application to school boards. In order for the Act to function more efficiently and effectively, legislative changes must be made to MFIPPA. It is important that the recommendations for improving the Act be adopted before the government extends this legislation to any other public entities.
Abstract:
Traditional psychometric theory and practice classify people according to broad ability dimensions but do not examine how these mental processes occur. Hunt and Lansman (1975) proposed a 'distributed memory' model of cognitive processes with emphasis on how to describe individual differences, based on the assumption that each individual possesses the same components; it is in the quality of these components that individual differences arise. Carroll (1974) expands Hunt's model to include a production system (after Newell and Simon, 1973) and a response system. He developed a framework of factor analytic (FA) factors for the purpose of describing how individual differences may arise from them. This scheme is to be used in the analysis of psychometric tests. Recent advances in the field of information processing are examined and include: 1) Hunt's development of differences between subjects designated as high or low verbal; 2) Miller's pursuit of the magic number seven, plus or minus two; 3) Ferguson's examination of transfer and abilities; and 4) Brown's discoveries concerning strategy teaching and retardates. In order to examine possible sources of individual differences arising from cognitive tasks, traditional psychometric tests were searched for a suitable perceptual task which could be varied slightly and administered to gauge learning effects produced by controlling independent variables. It also had to be suitable for analysis using Carroll's framework. The Coding Task (a symbol substitution test) found in the Performance Scale of the WISC was chosen. Two experiments were devised to test the following hypotheses: 1) High verbals should be able to complete significantly more items on the Symbol Substitution Task than low verbals (Hunt and Lansman, 1975). 2) Having previous practice on a task, where strategies involved in the task may be identified, increases the amount of output on a similar task (Carroll, 1974). 3) There should be a substantial decrease in the amount of output as the load on STM is increased (Miller, 1956). 4) Repeated measures should produce an increase in output over trials, and where individual differences in previously acquired abilities are involved, these should differentiate individuals over trials (Ferguson, 1956). 5) Teaching slow learners a rehearsal strategy would improve their learning such that it would resemble that of normals on the same task (Brown, 1974). In the first experiment 60 subjects were divided into high and low verbal groups, each further divided randomly into a practice group and a non-practice group. Five subjects in each group were assigned randomly to work on a five, seven or nine digit code throughout the experiment. The practice group was given three trials of two minutes each on the practice code (designed to eliminate transfer effects due to symbol similarity) and then three trials of two minutes each on the actual SST task. The non-practice group was given three trials of two minutes each on the same actual SST task. Results were analyzed using a four-way analysis of variance. In the second experiment 18 slow learners were divided randomly into two groups: one group receiving planned strategy practice, the other receiving random practice. Both groups worked on the actual code to be used later in the actual task. Within each group subjects were randomly assigned to work on a five, seven or nine digit code throughout. Both practice and actual tests consisted of three trials of two minutes each.
Results were analyzed using a three-way analysis of variance. It was found in the first experiment that 1) high or low verbal ability by itself did not produce significantly different results; however, when in interaction with the other independent variables, a difference in performance was noted. 2) The previous practice variable was significant over all segments of the experiment: those who received previous practice were able to score significantly higher than those without it. 3) Increasing the size of the load on STM severely restricts performance. 4) The effect of repeated trials proved to be beneficial; generally, gains were made on each successive trial within each group. 5) In the second experiment, slow learners who were allowed to practice randomly performed better on the actual task than subjects who were taught the code by means of a planned strategy. Upon analysis using the Carroll scheme, individual differences were noted in the ability to develop strategies of storing, searching and retrieving items from STM, and in adopting necessary rehearsals for retention in STM. While these strategies may benefit some, it was found that for others they may be harmful. Temporal aspects and perceptual speed were also found to be sources of variance within individuals. Generally it was found that the largest single factor influencing learning on this task was the repeated measures. What enables gains to be made varies with individuals. Environmental factors, specific abilities, strategy development, previous learning, amount of load on STM, and perceptual and temporal parameters all influence learning, and these have serious implications for educational programs.
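For readers wanting to reproduce this kind of factorial analysis, here is a minimal sketch in Python assuming statsmodels and pandas; the data are randomly generated for illustration, and the factor names merely mirror Experiment 1's design (verbal ability x practice x STM load), not the thesis's actual data:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(0)
    n = 120
    df = pd.DataFrame({
        "verbal":   rng.choice(["high", "low"], n),
        "practice": rng.choice(["yes", "no"], n),
        "load":     rng.choice([5, 7, 9], n),
        "output":   rng.normal(50, 10, n),   # items completed (synthetic)
    })

    # Factorial ANOVA with all interactions, in the spirit of the thesis's multi-way designs
    model = smf.ols("output ~ C(verbal) * C(practice) * C(load)", data=df).fit()
    print(anova_lm(model, typ=2))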
Abstract:
Age-related differences in information processing have often been explained through deficits in older adults' ability to ignore irrelevant stimuli and suppress inappropriate responses through inhibitory control processes. Functional imaging work on young adults by Nelson and colleagues (2003) has indicated that inferior frontal and anterior cingulate cortex play a key role in resolving interference effects during a delay-to-match memory task. Specifically, inferior frontal cortex appeared to be recruited under conditions of context interference, while the anterior cingulate was associated with interference resolution at the stage of response selection. Related work has shown that specific neural activities related to interference resolution are not preserved in older adults, supporting the notion of age-related declines in inhibitory control (Jonides et al., 2000; West et al., 2004b). In this study the time course and nature of these inhibition-related processes were investigated in young and old adults using high-density ERPs collected during a modified Sternberg task. Participants were presented with four target letters followed by a probe that either did or did not match one of the target letters held in working memory. Inhibitory processes were evoked by manipulating the nature of cognitive conflict in a particular trial. Conflict in working memory was elicited through the presentation of a probe letter that had appeared in immediately previous target sets. Response-based conflict was produced by presenting a negative probe that had just been viewed as a positive probe on the previous trial. Younger adults displayed a larger orienting response (P3a and P3b) to positive probes relative to a non-target baseline. Older adults produced the orienting P3a and P3b waveforms, but their responses did not differentiate between target and non-target stimuli. This age-related change in response to targetness is discussed in terms of "early selection/late correction" models of cognitive ageing. Younger adults also showed sensitivity in their N450 response to different levels of interference. Source analysis of the N450 responses to the conflict trials of younger adults indicated an initial dipole in inferior frontal cortex and a subsequent dipole in anterior cingulate cortex, suggesting that inferior prefrontal regions may recruit the anterior cingulate to exert cognitive control functions. Individual older adults did show some evidence of an N450 response to conflict; however, this response was attenuated by a co-occurring positive deflection in the N450 time window. It is suggested that this positivity may reflect a form of compensatory activity in older adults to adapt to their decline in inhibitory control.
Abstract:
Affiliation: Département de biochimie, Faculté de médecine, Université de Montréal
Abstract:
The formation of knowledge-based societies, progress in communication technology and improved exchange of information at the global level allow better use of the knowledge produced in decisions made in the health system. In developing countries, a few studies have been conducted on the barriers that prevent evidence-based decision-making (EBDM), while similar studies in the developed world are genuinely rare. Iran is the country that has seen the strongest growth in scientific publications in recent years, but the question that arises is the following: what are the barriers that prevent the use of this knowledge, as well as of global data? This study comprises three consecutive articles. The aim of the first article was to find a model for assessing the state of knowledge utilisation in these circumstances in Iran, by means of a broad, systematic review of sources followed by a qualitative study based on the Grounded Theory method. Then, in the second and third articles, the barriers to evidence-based decision-making in Iran are studied by interviewing managers, decision-makers in the health sector and researchers who work to produce scientific evidence for EBDM in Iran. After examining the existing models and carrying out a qualitative study, the first article appeared under the title "Design of a knowledge application model". This first article serves as a framework for the two other articles, which assess the "pull" and "push" barriers to EBDM in the country. In Iran, as a developing country, problems are found at every stage of the process of producing, sharing and using evidence in health system decision-making. The barriers to evidence-based decision-making are diverse and occur at different levels; multi-dimensional solutions are needed to strengthen the impact of scientific evidence on decision-making. These solutions should bring about changes in the culture and environment of decision-making so as to give value to evidence-based decision-making. The selection criteria for managers and their inappropriate appointment, their rapid turnover, and the pay differences between the public and private sectors can weaken EBDM in two ways: on the one hand by affecting decision-makers' motivation, and on the other by destroying programme continuity. Likewise, while the selection and replacement of researchers is not like that of managers, there are no criteria to encourage these two groups to support evidence-based decision-making in the health sector and the ensuing changes. The selection and promotion of policy-makers should be based on their performance in EBDM, and academics' efforts in this regard should count towards their personal promotions and the ranking of their institutions. The attitudes and capacities of decision-makers and researchers should be encouraged by giving them sufficient power and empowerment at the different stages of the decision cycle. This study revealed that managers do not have sufficient access to either national or international evidence.
Reducing the gap that separates researchers from decision-makers is a crucial step, to be achieved by fostering two-way communication. This issue is very important given that knowledge utilisation can only be strengthened through close collaboration between policy-makers and the research sector. To this end, long-term programmes must be designed; creating networks of researchers and decision-makers for choosing research topics and ranking priorities, and building mutual trust between researchers and policy-makers, appear to be effective approaches.
Abstract:
Objectives In April 2010, the Université de Montréal's Health Sciences Library implemented shared filters in its institutional PubMed account. Most of these filters are designed to highlight resources for evidence-based practice, such as Clinical Queries, Systematic Reviews and Evidence-based Synopses. We now want to measure how those filters are perceived and used by our users. Methods For one month, data were gathered through an online questionnaire proposed to users of the Université de Montréal's PubMed account. A print version was also distributed to participants in information literacy workshops given by the health sciences librarians. Respondents were restricted to users affiliated with the Université de Montréal's faculties of Medicine, Dentistry, Veterinary Sciences, Nursing and Pharmacy. Basic user information, such as year/program of study or department affiliation, was also collected. The questionnaire allowed users to identify the filters they use, assess the relevance of the filters, and suggest new ones. Results Survey results showed that the shared filters of the Université de Montréal's PubMed account were found useful by the majority of respondents. Filters allowing rapid access to secondary resources ranked among the most relevant (Reviews, Systematic Reviews, Cochrane Database of Systematic Reviews, Practice Guidelines and Clinical Evidence). For Clinical Study Queries, Randomized Controlled Trial (Therapy/Narrow) was considered the most useful. Some new shared filters were suggested by respondents. Finally, 18% of the respondents indicated that they did not fully understand the relevance of the filters. Conclusion Based on the survey results, the shared filters considered most useful will be kept, some will be enhanced and others removed so that suggested ones can be added. The fact that some respondents did not understand the relevance of the filters well could be addressed through our PubMed workshops, online library guides, or by renaming some filters in a more meaningful way.
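For context on what such filters do mechanically, here is a minimal sketch in Python, assuming Biopython's Entrez module; the e-mail address and query topic are placeholders, and "systematic[sb]" is PubMed's built-in systematic-review subset rather than one of the library's shared filters:

    from Bio import Entrez

    Entrez.email = "user@example.org"  # NCBI requires a contact address (placeholder)

    # A topic query narrowed by PubMed's systematic-review subset filter
    handle = Entrez.esearch(db="pubmed", term="hypertension AND systematic[sb]")
    result = Entrez.read(handle)
    print(result["Count"], "records match the filtered query")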
Abstract:
A large proportion of people living with mental health problems live in social isolation. Community health nurses are called upon in the front line to accompany these people in their recovery process and to mitigate their social isolation. Participation in community organisations optimises the recovery experience, reduces social isolation and strengthens the social networks of people with mental health problems. However, the participation of service users in the organisational structure of community organisations is still poorly documented. To address this gap, this study aimed to document and describe the nature of the participation of mental health service users, and to explore the facilitating factors and barriers to this participation. A mixed-methods design, qualitative and quantitative, was used. In the first of two phases, a survey involving semi-structured interviews was conducted with twelve directors of community organisations working in the field of mental health services. A French version of the "Adapted User Involvement" questionnaire (Diamond, Parkin, Morris, Bettinis, & Bettesworth, 2003) was administered to document the extent of service-user participation in the organisations concerned. For the second phase, two community organisations were selected on the basis of the questionnaire results and a documentary analysis of these organisations' public documents. The questionnaire scores thus made it possible to select organisations with contrasting results in terms of service-user participation. Semi-structured interviews were conducted with different groups of respondents (board members, service users, employees, directors) to gather information on the following themes: the nature of service-user participation, and the facilitating factors and challenges associated with it. The results of the analysis show that: (1) the factors that foster service-user participation are access to a space for participation and support for users from practitioners of various disciplines during their participation in community organisations; (2) the barriers to service-user participation in community organisations are social stigma and the personal characteristics related to users' mental health problems; and (3) the main benefits of service-user participation are services better adapted to users' needs and demands, their empowerment (through their participation in the community organisation) and their sense of belonging to the organisation. In light of these findings, supporting service users in their participation appears to be a promising avenue for community mental health nurses to facilitate users' empowerment and improve their well-being.
Abstract:
Sharing of information with those in need of it has always been an idealistic goal of networked environments. With the proliferation of computer networks, information is so widely distributed among systems that it is imperative to have well-organized schemes for retrieval and also for discovery. This thesis investigates the problems associated with such schemes and suggests a software architecture aimed at achieving meaningful discovery. The usage of information elements as a modelling base for efficient information discovery in distributed systems is demonstrated with the aid of a novel conceptual entity called the infotron. The investigations focus on distributed systems and their associated problems. The study was directed towards identifying a suitable software architecture and incorporating it in an environment where information growth is phenomenal and a proper mechanism for carrying out information discovery becomes feasible. An empirical study undertaken with the aid of an election database of geographically distributed constituencies provided the insights required. This is manifested in the Election Counting and Reporting Software (ECRS) system, a software system that is essentially distributed in nature, designed to prepare reports for district administrators about the election counting process and to generate other miscellaneous statutory reports. Most distributed systems of the nature of ECRS normally possess a "fragile architecture" which makes them amenable to collapse with the occurrence of minor faults. This is resolved with the help of the proposed penta-tier architecture, which contains five different technologies at the different tiers of the architecture. The results of the experiments conducted and their analysis show that such an architecture helps to keep the different components of the software intact and impermeable to any internal or external faults. The architecture thus evolved needed a mechanism to support information processing and discovery, which necessitated the introduction of the novel concept of infotrons. Further, when a computing machine has to perform any meaningful extraction of information, it is guided by what is termed an infotron dictionary. The other empirical study sought to find out which of the two prominent markup languages, HTML and XML, is better suited for the incorporation of infotrons. A comparative study of 200 documents in HTML and XML was undertaken; the result was in favor of XML. The concepts of the infotron and the infotron dictionary were applied to implement an Information Discovery System (IDS). IDS is essentially a system that starts with the infotron(s) supplied as clue(s) and results in brewing the information required to satisfy the need of the information discoverer, by utilizing the documents available at its disposal (as its information space). The various components of the system and their interactions follow the penta-tier architectural model and can therefore be considered fault-tolerant. IDS is generic in nature, and its characteristics and specifications were drawn up accordingly. Many subsystems interact with multiple infotron dictionaries maintained in the system. In order to demonstrate the working of IDS and to discover information without modification of a typical Library Information System (LIS), an Information Discovery in Library Information System (IDLIS) application was developed.
IDLIS is essentially a wrapper for the LIS, which maintains all the databases of the library. The purpose was to demonstrate that the functionality of a legacy system could be enhanced with the augmentation of IDS, leading to an information discovery service. IDLIS shows IDS in action and proves that any legacy system could be effectively augmented with IDS to provide the additional functionality of an information discovery service. Possible applications of IDS and the scope for further research in the field are covered.
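The abstract does not spell out the internals of an infotron dictionary, but the clue-driven lookup it describes can be pictured with a toy inverted-index analogue; a minimal sketch in Python, where the dictionary contents, clue names and document names are all hypothetical and not taken from the thesis:

    # Toy stand-in for an infotron dictionary: each clue maps to documents
    # in the information space that carry the corresponding information element.
    infotron_dictionary = {
        "constituency": {"count_report.xml", "district_summary.xml"},
        "turnout":      {"district_summary.xml"},
    }

    def discover(clues):
        """Return the documents matching every supplied infotron clue."""
        hits = [infotron_dictionary.get(c, set()) for c in clues]
        return set.intersection(*hits) if hits else set()

    print(discover(["constituency", "turnout"]))  # {'district_summary.xml'}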
Abstract:
The theme of the thesis centres around one important aspect of wireless sensor networks: energy-efficiency. The limited energy source of the sensor nodes calls for the design of energy-efficient routing protocols, and protocol designs should try to minimize the number of communications among the nodes to save energy. Cluster-based techniques have been found to be energy-efficient: clusters are formed, data from the nodes in each cluster are collected by a cluster head, and the aggregate is then forwarded to the base station. An appropriate cluster head selection process and a desirable distribution of the clusters can reduce the energy consumption of the network and prolong the network lifetime. In this work two such schemes were developed for static wireless sensor networks. The first scheme addresses the energy wastage of rebuilding clusters involving all the nodes: a tree-based scheme is presented that alleviates this problem by rebuilding only sub-clusters of the network. An analytical model of the energy consumption of the proposed scheme is developed, and the scheme is compared with an existing cluster-based scheme; the simulation study confirmed the energy savings observed. The second scheme concentrates on building load-balanced, energy-efficient clusters to prolong the lifetime of the network. A voting-based approach is proposed that utilises neighbour-node information in the cluster head selection process (see the sketch after this abstract). The number of nodes joining a cluster is restricted so as to obtain equal-sized, optimal clusters, and multi-hop communication among the cluster heads is introduced to reduce energy consumption. The simulation study has shown that the scheme results in balanced clusters and that the network achieves a reduction in energy consumption. The main conclusion from the study was that a routing scheme should pay attention to successful data delivery from node to base station in addition to energy-efficiency. Cluster-based protocols have been extended from static to mobile scenarios by various authors, but none of the proposals addresses cluster head election appropriately in view of mobility. An elegant scheme for electing cluster heads is presented to meet the challenge of maintaining cluster durability when all the nodes in the network are moving; the scheme has been simulated and compared with a similar approach. The proliferation of sensor networks provides users with large sets of sensor information to utilise in various applications. Sensor network programming is inherently difficult for various reasons, so there must be an elegant way to collect the data gathered by sensor networks without worrying about the underlying structure of the network. The final work presented addresses a way to collect data from a sensor network and present it to users in a flexible way. A service-oriented-architecture-based application is built and the data collection task is presented as a web service; this enables the composition of sensor data from different sensor networks to build interesting applications. The main objective of the thesis was to design energy-efficient routing schemes for both static and mobile sensor networks, and a progressive approach was followed to achieve this goal.
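To give a concrete flavour of the voting-based selection the abstract mentions, here is a minimal sketch in Python; the residual-energy criterion, the node data and the quota are illustrative assumptions rather than the thesis's actual algorithm:

    # nodes: node id -> residual energy; neighbors: node id -> neighbour ids
    nodes = {"a": 0.9, "b": 0.6, "c": 0.8, "d": 0.4, "e": 0.7}
    neighbors = {
        "a": ["b", "c"], "b": ["a", "c", "d"],
        "c": ["a", "b", "e"], "d": ["b", "e"], "e": ["c", "d"],
    }

    def elect_cluster_heads(quota):
        """Each node votes for its highest-energy neighbour; top vote-getters win."""
        votes = {}
        for node, nbrs in neighbors.items():
            choice = max(nbrs, key=nodes.get)   # vote guided by neighbour information
            votes[choice] = votes.get(choice, 0) + 1
        ranked = sorted(votes, key=lambda n: (votes[n], nodes[n]), reverse=True)
        return ranked[:quota]

    print(elect_cluster_heads(quota=2))  # e.g. ['a', 'c']

Capping cluster membership and letting heads relay traffic over multiple hops, as the abstract describes, would then be layered on top of an election step of this kind.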