932 results for Knowledge Information Objects
Abstract:
We consider a mobile sensor network monitoring a spatio-temporal field. Given limited cache sizes at the sensor nodes, the goal is to develop a distributed cache management algorithm to efficiently answer queries with a known probability distribution over the spatial dimension. First, we propose a novel distributed information theoretic approach in which the nodes locally update their caches based on full knowledge of the space-time distribution of the monitored phenomenon. At each time instant, local decisions are made at the mobile nodes concerning which samples to keep and whether or not a new sample should be acquired at the current location. These decisions are driven by the minimization of an entropic utility function that captures the average amount of uncertainty in queries given the probability distribution of query locations. Second, we propose a different correlation-based technique, which only requires knowledge of the second-order statistics, thus relaxing the stringent constraint of having a priori knowledge of the query distribution while significantly reducing the computational overhead. It is shown that the proposed approaches considerably reduce the average field estimation error by maintaining efficient cache content. It is further shown that the correlation-based technique is robust to model mismatch in case of imperfect knowledge of the underlying generative correlation structure.
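For illustration, here is a minimal sketch of a correlation-based cache update of the kind this abstract describes, assuming a Gaussian-process-style space-time covariance as the second-order model; all names, parameters, and the greedy eviction rule are hypothetical, not the authors' implementation.

```python
import numpy as np

def st_cov(a, b, ls_space=1.0, ls_time=5.0):
    """Squared-exponential space-time covariance between two samples,
    each an np.array([x, y, t]); a stand-in for the second-order
    statistics the correlation-based technique relies on."""
    d_space = np.sum((a[:2] - b[:2]) ** 2)
    d_time = (a[2] - b[2]) ** 2
    return np.exp(-d_space / (2 * ls_space**2) - d_time / (2 * ls_time**2))

def weighted_query_variance(cache, queries, q_probs, jitter=1e-6):
    """Average posterior (estimation) variance at query points, weighted by
    the query distribution -- the quantity a cache update tries to keep small."""
    K = np.array([[st_cov(si, sj) for sj in cache] for si in cache])
    K_inv = np.linalg.inv(K + jitter * np.eye(len(cache)))
    total = 0.0
    for q, p in zip(queries, q_probs):
        k_q = np.array([st_cov(q, s) for s in cache])
        total += p * (st_cov(q, q) - k_q @ K_inv @ k_q)
    return total

def update_cache(cache, new_sample, queries, q_probs):
    """Greedy local decision: tentatively admit the new sample, then evict
    whichever element's removal degrades the weighted variance least."""
    candidates = list(cache) + [new_sample]
    best, best_score = None, np.inf
    for i in range(len(candidates)):
        trial = candidates[:i] + candidates[i + 1:]
        score = weighted_query_variance(trial, queries, q_probs)
        if score < best_score:
            best, best_score = trial, score
    return best, best_score
```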
Abstract:
A mechanism is proposed that integrates low-level (image processing), mid-level (recursive 3D trajectory estimation), and high-level (action recognition) processes. It is assumed that the system observes multiple moving objects via a single, uncalibrated video camera. A novel extended Kalman filter formulation is used to estimate the relative 3D motion trajectories up to a scale factor. The recursive estimation process provides a prediction and error measure that is exploited in higher-level stages of action recognition. Conversely, higher-level mechanisms provide feedback that allows the system to reliably segment and maintain the tracking of moving objects before, during, and after occlusion. The 3D trajectory, occlusion, and segmentation information are utilized in extracting stabilized views of the moving object. Trajectory-guided recognition (TGR) is proposed as a new and efficient method for adaptive classification of action. The TGR approach is demonstrated using "motion history images" that are then recognized via a mixture-of-Gaussians classifier. The system was tested in recognizing various dynamic human outdoor activities, e.g., running, walking, rollerblading, and cycling. Experiments with synthetic data sets are used to evaluate the stability of the trajectory estimator with respect to noise.
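As a rough illustration of the mid-level stage, the following is a minimal extended Kalman filter sketch for relative 3D motion under perspective projection; the state layout, noise handling, and function names are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

# State: [X, Y, Z, Vx, Vy, Vz] -- relative 3D position/velocity up to scale.
# Measurement: perspective projection (u, v) = (X/Z, Y/Z) from one camera.

def predict(x, dt):
    """Constant-velocity motion model and its (linear) Jacobian."""
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)
    return F @ x, F

def project(x):
    """Perspective measurement model and its Jacobian."""
    X, Y, Z = x[0], x[1], x[2]
    z_pred = np.array([X / Z, Y / Z])
    H = np.zeros((2, 6))
    H[0, 0] = 1 / Z; H[0, 2] = -X / Z**2
    H[1, 1] = 1 / Z; H[1, 2] = -Y / Z**2
    return z_pred, H

def ekf_step(x, P, z, dt, Q, R):
    """One predict/update cycle; the innovation and its covariance are the
    prediction and error measure a higher-level stage could exploit."""
    x_pred, F = predict(x, dt)
    P_pred = F @ P @ F.T + Q
    z_pred, H = project(x_pred)
    innov = z - z_pred
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ innov
    P_new = (np.eye(6) - K @ H) @ P_pred
    return x_new, P_new, innov, S
```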
Abstract:
Much work on the performance of Web proxy caching has focused on high-level metrics such as hit rate and byte hit rate, but has ignored the cachability of Web objects. Uncachable objects include those fetched by dynamic requests, objects with an uncachable HTTP status code, objects with an uncachable HTTP header, objects with an HTTP 1.0 cookie, and objects without a last-modified header. Although some researchers filter Web traces before using them for analysis or simulation, many do not have a comprehensive understanding of the cachability of Web objects. In this paper we evaluate all the reasons that a Web object might be uncachable. We use traces from NLANR. Since these traces do not contain HTTP header information, we replay them using a request generator to obtain the response header information. We find that between 15% and 40% of Web objects in our traces cannot be cached by a Web proxy server. We use an LRU simulator to show the performance gap when cachability is and is not taken into account. We show the characteristics of the cachable data set and find that they are fairly similar to those of the total data set. Finally, we present some additional results for the cachable and total data sets: (1) The main reasons for uncachability are: dynamic requests, responses without a last-modified header, responses with an HTTP "302 Moved Temporarily" status code, and responses with an HTTP/1.0 cookie. (2) The cachability of Web objects cannot be ignored in simulation because uncachable objects comprise a huge percentage of the total trace; simulations that do not consider cachability will be misleading.
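A toy version of the kind of LRU simulation the abstract describes, with a switch for whether cachability is respected; the interface and names are illustrative, not the authors' simulator.

```python
from collections import OrderedDict

class LRUSimulator:
    """Toy LRU proxy-cache simulator. With respect_cachability=False it
    caches everything, reproducing the misleading setup the abstract warns
    against; with True, uncachable objects always bypass the cache."""

    def __init__(self, capacity_bytes, respect_cachability=True):
        self.capacity = capacity_bytes
        self.respect = respect_cachability
        self.cache = OrderedDict()  # url -> object size in bytes
        self.used = 0
        self.hits = self.requests = 0

    def request(self, url, size, cachable):
        self.requests += 1
        if url in self.cache:
            self.cache.move_to_end(url)  # mark as most recently used
            self.hits += 1
            return
        if self.respect and not cachable:
            return  # e.g. dynamic, cookie-bearing, or no-Last-Modified responses
        while self.used + size > self.capacity and self.cache:
            _, old_size = self.cache.popitem(last=False)  # evict LRU entry
            self.used -= old_size
        if size <= self.capacity:
            self.cache[url] = size
            self.used += size

    def hit_rate(self):
        return self.hits / self.requests if self.requests else 0.0
```

Running the same trace through two instances, one per setting, exposes the performance gap the abstract reports.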
Abstract:
The recognition of 3-D objects from sequences of their 2-D views is modeled by a family of self-organizing neural architectures, called VIEWNET, that use View Information Encoded With NETworks. VIEWNET incorporates a preprocessor that generates a compressed but 2-D invariant representation of an image, a supervised incremental learning system that classifies the preprocessed representations into 2-D view categories whose outputs are combined into 3-D invariant object categories, and a working memory that makes a 3-D object prediction by accumulating evidence from 3-D object category nodes as multiple 2-D views are experienced. The simplest VIEWNET achieves high recognition scores without the need to explicitly code the temporal order of 2-D views in working memory. Working memories are also discussed that save memory resources by implicitly coding temporal order in terms of the relative activity of 2-D view category nodes, rather than as explicit 2-D view transitions. Variants of the VIEWNET architecture may also be used for scene understanding by using a preprocessor and classifier that can determine both What objects are in a scene and Where they are located. The present VIEWNET preprocessor includes the CORT-X 2 filter, which discounts the illuminant, regularizes and completes figural boundaries, and suppresses image noise. This boundary segmentation is rendered invariant under 2-D translation, rotation, and dilation by use of a log-polar transform. The invariant spectra undergo Gaussian coarse coding to further reduce noise and 3-D foreshortening effects, and to increase generalization. These compressed codes are input into the classifier, a supervised learning system based on the fuzzy ARTMAP algorithm. Fuzzy ARTMAP learns 2-D view categories that are invariant under 2-D image translation, rotation, and dilation as well as 3-D image transformations that do not cause a predictive error. Evidence from sequences of 2-D view categories converges at 3-D object nodes that generate a response invariant under changes of 2-D view. These 3-D object nodes input to a working memory that accumulates evidence over time to improve object recognition. In the simplest working memory, each occurrence (nonoccurrence) of a 2-D view category increases (decreases) the corresponding node's activity in working memory. The maximally active node is used to predict the 3-D object. Recognition is studied with noisy and clean images using slow and fast learning. Slow learning at the fuzzy ARTMAP map field is adapted to learn the conditional probability of the 3-D object given the selected 2-D view category. VIEWNET is demonstrated on an MIT Lincoln Laboratory database of 128x128 2-D views of aircraft with and without additive noise. A recognition rate of up to 90% is achieved with one 2-D view and up to 98.5% with three 2-D views. The properties of 2-D view and 3-D object category nodes are compared with those of cells in monkey inferotemporal cortex.
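For a flavour of the invariant preprocessing, here is a minimal log-polar resampling plus Gaussian coarse-coding sketch; the sampling grid and parameters are illustrative and this is not the CORT-X 2 pipeline itself.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def log_polar(image, center=None, n_r=32, n_theta=32):
    """Resample an edge/boundary image onto a log-polar grid so that 2-D
    rotation and dilation become shifts; grid sizes are illustrative."""
    h, w = image.shape
    cy, cx = center if center is not None else (h / 2.0, w / 2.0)
    r_max = np.hypot(max(cy, h - cy), max(cx, w - cx))
    out = np.zeros((n_r, n_theta))
    for i in range(n_r):
        r = r_max ** ((i + 1) / n_r)          # log-spaced radii
        for j in range(n_theta):
            th = 2 * np.pi * j / n_theta
            y, x = int(cy + r * np.sin(th)), int(cx + r * np.cos(th))
            if 0 <= y < h and 0 <= x < w:
                out[i, j] = image[y, x]
    return out

def coarse_code(spectrum, sigma=1.5):
    """Gaussian coarse coding of the log-polar spectrum to suppress noise
    and 3-D foreshortening effects before classification."""
    return gaussian_filter(spectrum, sigma)
```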
Abstract:
The Internet and World Wide Web have had, and continue to have, an incredible impact on our civilization. These technologies have radically influenced the way that society is organised and the manner in which people around the world communicate and interact. The structure and function of individual, social, organisational, economic and political life begin to resemble the digital network architectures upon which they are increasingly reliant. It is increasingly difficult to imagine how our ‘offline’ world would look or function without the ‘online’ world; it is becoming less meaningful to distinguish between the ‘actual’ and the ‘virtual’. Thus, the major architectural project of the twenty-first century is to “imagine, build, and enhance an interactive and ever changing cyberspace” (Lévy, 1997, p. 10). Virtual worlds are at the forefront of this evolving digital landscape. Virtual worlds have “critical implications for business, education, social sciences, and our society at large” (Messinger et al., 2009, p. 204). This study focuses on the possibilities of virtual worlds in terms of communication, collaboration, innovation and creativity. The concept of knowledge creation is at the core of this research. The study shows that scholars increasingly recognise that knowledge creation, as a socially enacted process, goes to the very heart of innovation. However, efforts to build upon these insights have struggled to escape the influence of the information processing paradigm of old and have failed to move beyond the persistent but problematic conceptualisation of knowledge creation in terms of tacit and explicit knowledge. Based on these insights, the study leverages extant research to develop the conceptual apparatus necessary to carry out an investigation of innovation and knowledge creation in virtual worlds. The study derives and articulates a set of definitions (of virtual worlds, innovation, knowledge and knowledge creation) to guide research. The study also leverages a number of extant theories in order to develop a preliminary framework to model knowledge creation in virtual worlds. Using a combination of participant observation and six case studies of innovative educational projects in Second Life, the study yields a range of insights into the process of knowledge creation in virtual worlds and into the factors that affect it. The study’s contributions to theory are expressed as a series of propositions and findings and are represented as a revised and empirically grounded theoretical framework of knowledge creation in virtual worlds. These findings highlight the importance of prior related knowledge and intrinsic motivation in terms of shaping and stimulating knowledge creation in virtual worlds. At the same time, they highlight the importance of meta-knowledge (knowledge about knowledge) in terms of guiding the knowledge creation process whilst revealing the diversity of behavioural approaches actually used to create knowledge in virtual worlds. This theoretical framework is itself one of the chief contributions of the study and the analysis explores how it can be used to guide further research in virtual worlds and on knowledge creation. The study’s contributions to practice are presented as an actionable guide to stimulate knowledge creation in virtual worlds.
This guide utilises a theoretically based classification of four knowledge-creator archetypes (the sage, the lore master, the artisan, and the apprentice) and derives an actionable set of behavioural prescriptions for each archetype. The study concludes with a discussion of its implications for future research.
Abstract:
Comfort is, in essence, satisfaction with the environment, and with respect to the indoor environment it is primarily satisfaction with the thermal conditions and air quality. Improving comfort has social, health and economic benefits, and is more financially significant than any other building cost. Despite this, comfort is not strictly managed throughout the building lifecycle. This is mainly due to the lack of an appropriate system to adequately manage comfort knowledge through the construction process into operation. Previous proposals to improve knowledge management have not been successfully adopted by the construction industry. To address this, the BabySteps approach was devised. BabySteps is an approach, proposed by this research, which states that for an innovation to be adopted into the industry it must be implementable through a number of small changes. This research proposes that improving the management of comfort knowledge will improve comfort. ComMet is a new methodology proposed by this research that manages comfort knowledge. It enables comfort knowledge to be captured, stored and accessed throughout the building life-cycle, thus allowing it to be re-used in future stages of the building project and in future projects. It does this using the following: Comfort Performances – these are simplified numerical representations of the comfort of the indoor environment. Comfort Performances quantify the comfort at each stage of the building life-cycle using standard comfort metrics. Comfort Ratings – these are a means of classifying the comfort conditions of the indoor environment according to an appropriate standard. Comfort Ratings are generated by comparing different Comfort Performances. Comfort Ratings provide additional information relating to the comfort conditions of the indoor environment, which is not readily determined from the individual Comfort Performances. Comfort History – this is a continuous descriptive record of the comfort throughout the project, with a focus on documenting the items and activities, proposed and implemented, which could potentially affect comfort. Each aspect of the Comfort History is linked to the relevant comfort entity it references. These three components create a comprehensive record of the comfort throughout the building lifecycle. They are then stored and made available in a common format in a central location, which allows them to be re-used ad infinitum. The LCMS System was developed to implement the ComMet methodology. It uses current and emerging technologies to capture, store and allow easy access to comfort knowledge as specified by ComMet. LCMS is an IT system that is a combination of the following six components: Building Standards; Modelling & Simulation; Physical Measurement through the specially developed Egg-Whisk (Wireless Sensor) Network; Data Manipulation; Information Recording; Knowledge Storage and Access. Results from a test case application of the LCMS system - an existing office room at a research facility - highlighted that while some aspects of comfort were being maintained, the building’s environment was not in compliance with the acceptable levels as stipulated by the relevant building standards. The implementation of ComMet, through LCMS, demonstrates how comfort, typically only considered during early design, can be measured and managed appropriately through systematic application of the methodology as a means of ensuring a healthy internal environment in the building.
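As a rough sketch of how Comfort Performances and Comfort Ratings might be represented in code: the metrics, thresholds, and names below are hypothetical, not the ComMet specification.

```python
from dataclasses import dataclass

@dataclass
class ComfortPerformance:
    """A simplified numerical snapshot of indoor comfort at one lifecycle
    stage, using standard metrics (fields are illustrative: PMV, PPD, CO2)."""
    stage: str       # e.g. "design", "construction", "operation"
    pmv: float       # predicted mean vote
    ppd: float       # predicted percentage dissatisfied, %
    co2_ppm: float   # indoor CO2 concentration

def comfort_rating(measured: ComfortPerformance,
                   target: ComfortPerformance) -> str:
    """Derive a Comfort Rating by comparing two Comfort Performances;
    the bands below are hypothetical, not those of any named standard."""
    if measured.ppd <= target.ppd and measured.co2_ppm <= target.co2_ppm:
        return "compliant"
    if measured.ppd <= 1.25 * target.ppd:
        return "marginal"
    return "non-compliant"
```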
Abstract:
The aim of this research, which focused on the Irish adult population, was to generate information for policymakers by applying statistical analyses and current technologies to oral health administrative and survey databases. Objectives included identifying socio-demographic influences on oral health and utilisation of dental services, comparing epidemiologically-estimated dental treatment need with treatment provided, and investigating the potential of a dental administrative database to provide information on utilisation of services and the volume and types of treatment provided over time. Information was extracted from the claims databases for the Dental Treatment Benefit Scheme (DTBS) for employed adults and the Dental Treatment Services Scheme (DTSS) for less-well-off adults, the National Surveys of Adult Oral Health, and the 2007 Survey of Lifestyle Attitudes and Nutrition in Ireland. Factors associated with utilisation and retention of natural teeth were analysed using count data models and logistic regression. The chi-square test and Student’s t-test were used to compare epidemiologically-estimated need in a representative sample of adults with treatment provided. Differences were found in dental care utilisation and tooth retention by socio-economic status. An analysis of the five-year utilisation behaviour of a 2003 cohort of DTBS dental attendees revealed that age and being female were positively associated with visiting annually and with number of treatments. The number of adults using the DTBS increased, and the mean number of treatments per patient decreased, between 1997 and 2008. As a percentage of overall treatments, restorations, dentures, and extractions decreased, while prophylaxis increased. Differences were found between epidemiologically-estimated treatment need and treatment provided for those using the DTBS and DTSS. This research confirms the utility of survey and administrative data to generate knowledge for policymakers. Public administrative databases have not been designed for research purposes, but they have the potential to provide a wealth of knowledge on treatments provided and utilisation patterns.
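A minimal sketch of the two model families the abstract names, using statsmodels formulas; the data file and column names are hypothetical, not the DTBS schema.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical claims-style extract: one row per adult, with a treatment
# count, a tooth-retention indicator, and socio-demographic covariates.
df = pd.read_csv("dtbs_claims_sample.csv")

# Count-data model for number of treatments received (Poisson is the
# simplest member of the count-model family the abstract mentions).
count_model = smf.poisson("n_treatments ~ age + female + ses_group",
                          data=df).fit()

# Logistic regression for retention of natural teeth (binary outcome).
logit_model = smf.logit("retains_teeth ~ age + female + ses_group",
                        data=df).fit()

print(count_model.summary())
print(logit_model.summary())
```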
Abstract:
This dissertation critically examines Ireland’s knowledge economy policy, the country’s basis for economic recovery and growth, to enhance future policy decisions and debate. Much has been written internationally on the ‘knowledge economy’ with its emergence closely related to globalisation and technological progression in the 1990s. Since the late 1990s, Irish policy-makers have been firmly committed to positioning Ireland as a leading knowledge economy. Transforming the country’s competitive base to a knowledge economy is pivotal, directly shaping the course of Ireland’s economy and society. Given Ireland’s current economic crisis, limited resources, global competition from leaders in science and technology and growing challenges from emerging economies, a systematic study of Ireland’s major competitive policy is imperative. Above all, this study explores the processes behind the policy and the multiple actors from different institutions who follow and seek to influence decisions. The advocacy coalition framework is used to identify the advocacy coalition operating in the knowledge economy policy subsystem. The theoretical insights of this framework are also combined with other public policy approaches, providing complementary insights into the policy process. The research is framed around three elements - the beliefs underpinning the policy; who is driving the policy; and the prospects of the policy. Primary information is collected by way of semi-structured in-depth interviews with 49 Irish elites (politicians, senior bureaucrats, academics and business leaders) involved in the formation and implementation of the policy. This study finds that a strong advocacy coalition has formed in this policy subsystem whose members are collectively driving the policy. Both exogenous and endogenous forces help frame a common perception of the problems the policy addresses and the solutions it offers. Evidence suggests that this policy is a sustainable option for Ireland’s economic future and the study concludes with policy recommendations for advancing Ireland’s knowledge economy.
Abstract:
This thesis argues that examining the attitudes, perceptions, behaviors, and knowledge of a community towards their specific watershed can reveal their social vulnerability to climate change. Understanding and incorporating these elements of the human dimension in coastal zone management (CZM) will lead to efficient and effective strategies that safeguard natural resources for the benefit of the community. By having healthy natural resources, ecological and community resilience to climate change will increase, thus decreasing vulnerability. In the Pacific Ocean, climate and sea-level rise (SLR) are strongly modulated by the El Niño Southern Oscillation. SLR is three times the global average in the Western Pacific Ocean (Merrifield and Maltrud 2011; Merrifield 2011). Changes in annual rainfall in the Western North Pacific sub-region from 1950-2010 show that islands in the east are getting much less rainfall than in the past, while islands in the west are getting slightly more (Keener et al. 2013). For Guam, a small island territory of the United States located in the Western Pacific Ocean, these factors mean that SLR is higher there than in any other place in the world and that the island will most likely see increased precipitation. With this knowledge, social vulnerability may be examined. Thus, a case study of the community residing in the Manell and Geus watersheds was conducted on the island of Guam. Measuring their perceptions, attitudes, knowledge, and behaviors should bring to light their vulnerability to climate change. To accomplish this, a household survey was administered from July through August 2010. Approximately 350 surveys were analysed using SPSS. To supplement this quantitative data, informal interviews were conducted with elders of the community to glean traditional ecological knowledge about perceived climate change. A GIS analysis was conducted to understand the physical geography of the Manell and Geus watersheds. This information about the human dimension is valuable to CZM managers. It may be incorporated into strategic watershed plans to better administer the natural resources within the coastal zone. The research conducted in this thesis is the basis of a recent watershed management plan for the Guam Coastal Management Program (see King 2014).
Abstract:
PURPOSE: To demonstrate the feasibility of using a knowledge base of prior treatment plans to generate new prostate intensity modulated radiation therapy (IMRT) plans. Each new case would be matched against others in the knowledge base. Once the best match is identified, that clinically approved plan is used to generate the new plan. METHODS: A database of 100 prostate IMRT treatment plans was assembled into an information-theoretic system. An algorithm based on mutual information was implemented to identify similar patient cases by matching 2D beam's eye view projections of contours. Ten randomly selected query cases were each matched with the most similar case from the database of prior clinically approved plans. Treatment parameters from the matched case were used to develop new treatment plans. Differences in the dose-volume histograms between the new and the original treatment plans were analyzed. RESULTS: On average, the new knowledge-based plan achieves planning target volume coverage comparable to that of the original plan, to within 2% as evaluated for D98, D95, and D1. Similarly, the dose to the rectum and dose to the bladder are also comparable to the original plan. For the rectum, the mean and standard deviation of the dose percentage differences for D20, D30, and D50 are 1.8% +/- 8.5%, -2.5% +/- 13.9%, and -13.9% +/- 23.6%, respectively. For the bladder, the mean and standard deviation of the dose percentage differences for D20, D30, and D50 are -5.9% +/- 10.8%, -12.2% +/- 14.6%, and -24.9% +/- 21.2%, respectively. A negative percentage difference indicates that the new plan has greater dose sparing as compared to the original plan. CONCLUSIONS: The authors demonstrate a knowledge-based approach of using prior clinically approved treatment plans to generate clinically acceptable treatment plans of high quality. This semiautomated approach has the potential to improve the efficiency of the treatment planning process while ensuring that high quality plans are developed.
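A compact sketch of mutual-information matching between 2D beam's-eye-view projections, the similarity measure the abstract describes; the binning choice and interface are illustrative, not the authors' implementation.

```python
import numpy as np

def mutual_information(bev_a, bev_b, bins=32):
    """Mutual information between two beam's-eye-view projections treated
    as grayscale images -- the similarity score used to rank prior cases."""
    joint, _, _ = np.histogram2d(bev_a.ravel(), bev_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image B
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def best_match(query_bev, plan_database):
    """Return the id of the prior plan whose projection maximizes MI
    with the query case's projection."""
    return max(plan_database,
               key=lambda case_id: mutual_information(query_bev,
                                                      plan_database[case_id]))
```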
Abstract:
BACKGROUND: Implementing new practices, such as health information technology (HIT), is often difficult due to the disruption of the highly coordinated, interdependent processes (e.g., information exchange, communication, relationships) of providing care in hospitals. Thus, HIT implementation may occur slowly as staff members observe and make sense of unexpected disruptions in care. As a critical organizational function, sensemaking, defined as the social process of searching for answers and meaning which drive action, leads to unified understanding, learning, and effective problem solving -- strategies that studies have linked to successful change. Project teamwork is a change strategy increasingly used by hospitals that facilitates sensemaking by providing a formal mechanism for team members to share ideas, construct the meaning of events, and take next actions. METHODS: In this longitudinal case study, we aim to examine project teams' sensemaking and action as each team prepares to implement new information technology in a tertiary care hospital. Based on management and healthcare literature on HIT implementation and project teamwork, we chose sensemaking as an alternative to traditional models for understanding organizational change and teamwork. Our methods choices are derived from this conceptual framework. Data on project team interactions will be prospectively collected through direct observation and organizational document review. Through qualitative methods, we will identify sensemaking patterns and explore variation in sensemaking across teams. Participant demographics will be used to explore variation in sensemaking patterns. DISCUSSION: Outcomes of this research will be new knowledge about sensemaking patterns of project teams, such as: the antecedents and consequences of the ongoing, evolutionary, social process of implementing HIT; the internal and external factors that influence the project team, including team composition, team member interaction, and interaction between the project team and the larger organization; the ways in which internal and external factors influence project team processes; and the ways in which project team processes facilitate team task accomplishment. These findings will lead to new methods of implementing HIT in hospitals.
Abstract:
Determining how information flows along anatomical brain pathways is a fundamental requirement for understanding how animals perceive their environments, learn, and behave. Attempts to reveal such neural information flow have been made using linear computational methods, but neural interactions are known to be nonlinear. Here, we demonstrate that a dynamic Bayesian network (DBN) inference algorithm we originally developed to infer nonlinear transcriptional regulatory networks from gene expression data collected with microarrays is also successful at inferring nonlinear neural information flow networks from electrophysiology data collected with microelectrode arrays. The inferred networks we recover from the songbird auditory pathway are correctly restricted to a subset of known anatomical paths, are consistent with timing of the system, and reveal both the importance of reciprocal feedback in auditory processing and greater information flow to higher-order auditory areas when birds hear natural as opposed to synthetic sounds. A linear method applied to the same data incorrectly produces networks with information flow to non-neural tissue and over paths known not to exist. To our knowledge, this study represents the first biologically validated demonstration of an algorithm to successfully infer neural information flow networks.
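A toy stand-in for nonlinear information-flow scoring: lagged mutual information between discretized channel signals. The paper's actual DBN inference algorithm is more sophisticated; the binning, lag, and threshold here are illustrative only.

```python
import numpy as np

def discretize(x, n_bins=4):
    """Quantile-bin a signal so mutual information can pick up
    nonlinear dependence that linear correlation would miss."""
    ranks = np.argsort(np.argsort(x))
    return (ranks * n_bins) // len(x)

def lagged_mi(src, dst, lag=1, n_bins=4):
    """Mutual information between src(t - lag) and dst(t)."""
    a, b = discretize(src[:-lag], n_bins), discretize(dst[lag:], n_bins)
    joint = np.zeros((n_bins, n_bins))
    for i, j in zip(a, b):
        joint[i, j] += 1
    pxy = joint / joint.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def infer_flow(signals, lag=1, threshold=0.05):
    """Score every directed channel pair and keep edges above threshold --
    a crude sketch of recovering a directed information-flow network."""
    edges = []
    for i in range(len(signals)):
        for j in range(len(signals)):
            if i != j:
                score = lagged_mi(signals[i], signals[j], lag)
                if score > threshold:
                    edges.append((i, j, score))
    return edges
```

In practice one would also restrict candidate edges to known anatomical paths, as the abstract describes for the songbird auditory pathway.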
Abstract:
Researchers currently debate whether new semantic knowledge can be learned and retrieved despite extensive damage to medial temporal lobe (MTL) structures. The authors explored whether H. M., a patient with amnesia, could acquire new semantic information in the context of his lifelong hobby of solving crossword puzzles. First, H. M. was tested on a series of word-skills tests believed important in solving crosswords. He also completed 3 new crosswords: 1 puzzle testing pre-1953 knowledge, another testing post-1953 knowledge, and another combining the 2 by giving postoperative semantic clues for preoperative answers. From the results, the authors concluded that H. M. can acquire new semantic knowledge, at least temporarily, when he can anchor it to mental representations established preoperatively.
Abstract:
Nowadays multi-touch devices (MTD) can be found in all kinds of contexts. In the learning context, MTD availability leads many teachers to use them in their classrooms, to support the use of the devices by students, or to assume that they will enhance learning processes. Despite the rising interest in MTD, little research exists on their impact on performance or their suitability for the learning context. However, even if the use of touch-sensitive screens rather than a mouse and keyboard seems to be the easiest and fastest way to carry out common learning tasks (for instance, web browsing), we note that the use of MTD may lead to a less favourable outcome. The difficulty of producing accurate finger gestures and the split attention this requires (a multi-tasking effect) make interacting with a touch-sensitive screen more difficult than traditional laptop use. More precisely, it is hypothesized that efficacy and efficiency decrease, as well as the available cognitive resources, making users’ task engagement more difficult. Furthermore, the present study takes into account the moderating effect of previous experience with MTD. Two key factors from technology adoption theories were included in the study: familiarity and self-efficacy with the technology. Sixty university students, invited to a usability lab, were asked to perform information search tasks on an online encyclopaedia. The tasks were designed to require the most commonly used mouse actions (e.g. right click, left click, scrolling, zooming, keyword entry). Two conditions were created: (1) MTD use and (2) laptop use (with keyboard and mouse). The cognitive load, self-efficacy, familiarity and task engagement scales were adapted to the MTD context. In addition, eye-tracking measurements offer further information about user behaviour and cognitive load. Our study aims to clarify some important aspects of MTD usage and its added value compared to a laptop in a student learning context. More precisely, the outcomes will clarify the suitability of MTD for the processes at stake and the role of previous knowledge in the adoption process, and will offer some interesting insights into the user experience with such devices.
Abstract:
Contraceptive prevalence in Haiti remains low despite extensive foreign aid targeted at improving family planning. [1] Earlier studies have found that peer-informed learning has been successful in promoting sexual and reproductive health. [2-5] This pilot project was implemented as a three-month, community-based, educational intervention to assess the impact of peer education in increasing contraceptive knowledge among women in Fondwa, Haiti. Research investigators conducted contraceptive information trainings for pre-identified female leaders of existing women’s groups in Fondwa, who were recruited as peer educators (n=4). Later, these female leaders shared the knowledge from the training with the test participants in the women’s group (n=23) through an information session. Structured surveys measuring knowledge of contraceptives were conducted with all participants before the intervention began, at the end of the intervention, and four weeks after the intervention. The surveys measured general contraceptive knowledge, knowledge about eight selected types of modern contraceptives, and contraceptive preferences and attitudes. Only test participants showed significant improvement in their general contraceptive knowledge score (p<0.001), but both test participants and peer educators showed significant improvement in overall knowledge scores for identifying the types and uses of modern contraceptive methods. Knowledge retention remained significantly higher four weeks after the intervention than before it. Therefore, a one-time, three-hour peer-based educational intervention using existing social structures is effective, and might be valuable in a population with minimal access to education and little to no knowledge about contraceptives.