952 results for knowledge application
Abstract:
Leafy greens are an essential part of a healthy diet. Because of their health benefits, the production and consumption of leafy greens have increased considerably in the U.S. over the last few decades. However, leafy greens have also been associated with a large number of foodborne disease outbreaks in recent years. The overall goal of this dissertation was to use current knowledge of predictive models and available data to understand the growth, survival, and death of enteric pathogens in leafy greens at the pre- and post-harvest levels. Temperature plays a major role in the growth and death of bacteria in foods. A growth-death model was developed for Salmonella and Listeria monocytogenes in leafy greens under the varying temperature conditions typically encountered in the supply chain. The developed growth-death models were validated using experimental dynamic time-temperature profiles available in the literature. Furthermore, these growth-death models for Salmonella and Listeria monocytogenes, and a similar model for E. coli O157:H7, were used to predict the growth of these pathogens in leafy greens during transportation without temperature control. Refrigeration of leafy greens serves to increase shelf-life and mitigate bacterial growth, but storage at lower temperatures increases storage cost. Nonlinear programming was used to optimize the storage temperature of leafy greens in the supply chain, minimizing storage cost while maintaining the desired levels of sensory quality and microbial safety. Most outbreaks associated with the consumption of leafy greens contaminated with E. coli O157:H7 in the U.S. have occurred during July-November. A dynamic system model consisting of subsystems and inputs (soil, irrigation, cattle, wildlife, and rainfall) was developed to simulate a farm in a major leafy-greens-producing area of California.
The model was simulated incorporating the events of planting, irrigation, harvesting, ground preparation for the new crop, contamination of soil and plants, and survival of E. coli O157:H7. The predictions of this system model agree with the observed seasonality of outbreaks. This dissertation thus applied growth, survival, and death models of enteric pathogens in leafy greens across production and the supply chain.
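The abstract does not give the model equations. As a hedged illustration of the kind of growth-death calculation described, the sketch below integrates log10 counts over a time-temperature profile using a Ratkowsky-style square-root secondary model; the coefficients (`b`, `t_min`, `n_max`, `death_rate`) are placeholders, not the dissertation's fitted values.

```python
def sqrt_mu(temp_c, b=0.02, t_min=5.0):
    """Ratkowsky-style square-root secondary model: specific growth rate (1/h)
    as a function of temperature. b and t_min are placeholder values, not the
    dissertation's fitted coefficients."""
    if temp_c <= t_min:
        return 0.0
    return (b * (temp_c - t_min)) ** 2

def simulate(log_n0, profile, n_max=9.0, death_rate=0.001, dt=0.1):
    """Integrate log10 counts (log CFU/g) over a time-temperature profile.
    profile: list of (duration_h, temp_c) segments.
    Above t_min the population grows logistically toward n_max; below it,
    the count declines at a constant placeholder death rate."""
    log_n = log_n0
    for duration, temp in profile:
        for _ in range(int(duration / dt)):
            mu = sqrt_mu(temp)
            if mu > 0.0:
                # logistic growth toward the carrying capacity n_max
                log_n += mu * (1.0 - log_n / n_max) * dt
            else:
                # slow die-off at refrigeration temperatures
                log_n -= death_rate * dt
        log_n = max(log_n, 0.0)
    return log_n
```

A 24 h uncontrolled-transport segment at 25 °C grows the count, while the same segment at 4 °C lets it decline, which is the qualitative behaviour the dissertation's optimization trades off against storage cost.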
Collection-Level Subject Access in Aggregations of Digital Collections: Metadata Application and Use
Abstract:
Problems in subject access to information organization systems have been under investigation for a long time. Focusing on item-level information discovery and access, researchers have identified a range of subject access problems, including the quality and application of metadata, as well as the complexity of user knowledge required for successful subject exploration. While aggregations of digital collections built in the United States and abroad generate collection-level metadata of varying granularity and richness, no research has yet focused on the role of collection-level metadata in user interaction with these aggregations. This dissertation research sought to bridge this gap by answering the question “How does collection-level metadata mediate scholarly subject access to aggregated digital collections?” This goal was achieved using three research methods: • in-depth comparative content analysis of collection-level metadata in three large-scale aggregations of cultural heritage digital collections: Opening History, American Memory, and The European Library, • transaction log analysis of user interactions with Opening History, and • interview and observation data on academic historians interacting with two aggregations: Opening History and American Memory. It was found that subject-based resource discovery is significantly influenced by collection-level metadata richness. This richness includes: 1) describing a collection’s subject matter with mutually complementary values in different metadata fields, and 2) encoding a variety of collection properties and characteristics in the free-text Description field; types and genres of objects in a digital collection, together with topical, geographic and temporal coverage, are the most consistently represented collection characteristics in free-text Description fields. Analysis of user interactions with aggregations of digital collections yielded a number of interesting findings.
Item-level user interactions were found to occur more often than collection-level interactions. Collection browsing is initiated more often than search, and subject browsing (topical and geographic) is used most often. The majority of collection search queries fall within FRBR Group 3 categories: object, concept, and place. Significantly more object, concept, and corporate-body searches, and fewer individual-person, event, and class-of-persons searches, were observed in collection searches than in item searches. While collection searches are most often satisfied by the Description and/or Subjects collection metadata fields, a significant proportion of collection records would not be retrieved without controlled-vocabulary subject metadata (Temporal Coverage, Geographic Coverage, Subjects, and Objects) and free-text metadata (the Description field). Observation data show that collection metadata records in the Opening History and American Memory aggregations are often viewed. Transaction log data show a high level of engagement with collection metadata records in Opening History, with total page views for collections more than four times greater than item page views. Scholars observed viewing collection records valued descriptive information on provenance, collection size, types of objects, subjects, geographic coverage, and temporal coverage. They also considered the structured display of collection metadata in Opening History more useful than the alternative approach taken by other aggregations, such as American Memory, which displays only the free-text Description field to the end user. The results extend the understanding of the value of collection-level subject metadata, particularly free-text metadata, for scholarly users of aggregations of digital collections. The analysis of the collection metadata created by the three large-scale aggregations provides a better understanding of collection-level metadata application patterns and suggests best practices.
This dissertation is also the first empirical research contribution to test the FRBR model as a conceptual and analytic framework for studying collection-level subject access.
Abstract:
The amounts of farm dairy effluent stored in ponds and irrigated to land have increased steadily with the growth of New Zealand's dairy industry. About 80% of dairy farms now operate with effluent storage ponds allowing deferred irrigation. These storage and irrigation practices cause emissions of greenhouse gases (GHG) and ammonia. The current knowledge of the processes causing these emissions and the amounts emitted is reviewed here. Methane emissions from ponds are the largest contributor to the total GHG emissions from effluent in managed manure systems in New Zealand. Nitrous oxide emissions from anaerobic ponds are negligible, while ammonia emissions vary widely between different studies, probably because they depend strongly on pH and manure composition. The second-largest contribution to GHG emissions from farm dairy effluent comes from nitrous oxide emissions from land application. Ammonia emissions from land application of effluent in New Zealand were found to be less than those reported elsewhere from the application of slurries. Recent studies have suggested that New Zealand's current GHG inventory method for estimating methane emissions from effluent ponds should be revised. The increasing importance of emissions from ponds, while a challenge for the inventory, also provides an opportunity for mitigation because of the confined locations where these emissions occur. © 2015 The Royal Society of New Zealand.
Abstract:
This paper presents a new hyper-heuristic method using Case-Based Reasoning (CBR) for solving course timetabling problems. The term 'hyper-heuristics' has recently been employed to refer to 'heuristics that choose heuristics' rather than heuristics that operate directly on given problems. One of the overriding motivations of hyper-heuristic methods is the attempt to develop techniques that can operate with greater generality than is currently possible. The basic idea is to maintain a case base of information about the most successful heuristics for a range of previous timetabling problems, and to use this previous knowledge to predict the best heuristic for the new problem at hand. Knowledge discovery techniques are used to train the CBR system and improve its predictive performance. Initial results presented in this paper are promising, and we conclude by discussing the considerable promise for future work in this area.
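The retrieval step of the "case base predicts the best heuristic" idea can be sketched as a nearest-neighbour lookup over problem features. The feature names, weights, and heuristic names below are hypothetical, not taken from the paper.

```python
def select_heuristic(case_base, new_problem, weights):
    """Retrieve the most similar past case by weighted L1 distance over
    problem features, and return the heuristic that worked best on it."""
    def distance(features):
        return sum(weights[k] * abs(features[k] - new_problem[k]) for k in weights)
    _, best_heuristic = min(case_base, key=lambda case: distance(case[0]))
    return best_heuristic

# Hypothetical case base: (problem features, most successful heuristic)
cases = [
    ({"courses": 10, "rooms": 3, "slots": 20}, "largest_degree_first"),
    ({"courses": 80, "rooms": 10, "slots": 45}, "saturation_degree"),
]
weights = {"courses": 1.0, "rooms": 2.0, "slots": 1.0}
```

In the paper's terms, the knowledge discovery step would tune which features and weights make retrieved cases actually predictive; here they are fixed for illustration.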
Abstract:
Introduction. Research design should take into account both (a) the specific nature of the object under scrutiny, and (b) approaches to its study in the past. This is to ensure that informed decisions are made regarding research design in future empirical studies. Here these factors are taken into account with reference to methodological choice for a doctoral study on tacit knowledge sharing, and the extent to which tacit knowledge sharing may be facilitated by online tools. The larger study responds to calls for the two domains of knowledge management and human information behaviour to be considered together in terms of their research approaches and theory development. Method. Relevant literature, both domain-specific (knowledge management) and general (research methods in social science), was identified and analysed to identify the most appropriate approaches for an empirical study of tacit knowledge sharing. Analysis. The analysis shows that there are a number of challenges associated with studying an intangible entity such as tacit knowledge. Quantitative, qualitative and mixed methods have been adopted in prior work on this theme, each with their own strengths and weaknesses. Results. The analysis informed a decision to adopt a research approach that deploys mixed methods for an inductive case study to extend knowledge of the influence of online tools on tacit knowledge sharing. Conclusion. This work aims to open the debate on methodological choice and routes to implementation for studies that are subject to practical constraints imposed by the context in which they are situated.
Abstract:
Enterprise architecture (EA) is a tool that aligns an organization's business processes with applications and information technology (IT) through EA models. An EA model allows the organization to cut unnecessary IT expenses, determine current and future IT requirements, and boost organizational performance. Enterprise architecture may be employed in any firm or organization that requires alignment between information technology and business functions. This research investigates the role of enterprise architecture in healthcare organizations and suggests a suitable EA framework for a knowledge-based medical diagnostic system by comparing the two most widely used EA frameworks. The results of the comparison indicate that the proposed EA framework is better suited to EA modeling for a knowledge-based medical diagnostic system.
Abstract:
The continuous flow of technological developments in the communications and electronics industries has led to the growing expansion of the Internet of Things (IoT). By leveraging the capabilities of smart networked devices and integrating them into existing industrial, leisure and communication applications, the IoT is expected to positively impact both the economy and society, reducing the gap between the physical and digital worlds. Therefore, several efforts have been dedicated to the development of networking solutions addressing the diversity of challenges associated with such a vision. In this context, the integration of Information Centric Networking (ICN) concepts into the core of the IoT is a research area gaining momentum and involving both research and industry actors. The massive number of heterogeneous devices, as well as the data they produce, is a significant challenge for wide-scale adoption of the IoT. In this paper we propose a service discovery mechanism, based on Named Data Networking (NDN), that leverages a semantic matching mechanism to achieve a flexible discovery process. The development of appropriate service discovery mechanisms enriched with semantic capabilities for understanding and processing context information is a key feature for turning raw data into useful knowledge and ensuring interoperability among different devices and applications. We assessed the performance of our solution through the implementation and deployment of a proof-of-concept prototype. The results obtained illustrate the potential of integrating semantic and ICN mechanisms to enable flexible service discovery in IoT scenarios.
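The paper's NDN-based mechanism is not detailed in the abstract; the minimal sketch below shows one way a semantic matcher of the kind described could rank advertised services by concept subsumption. The taxonomy, concept names, and scoring rule are assumptions for illustration only.

```python
# Tiny concept taxonomy: each concept maps to its parent (an assumption)
TAXONOMY = {"indoor_temperature": "temperature", "temperature": "environment"}

def subsumes(general, specific):
    """True if `general` equals `specific` or is one of its ancestors."""
    while specific is not None:
        if specific == general:
            return True
        specific = TAXONOMY.get(specific)
    return False

def match_score(request, service_concepts):
    """Number of requested concepts the service satisfies exactly or via subsumption."""
    return sum(1 for r in request if any(subsumes(r, s) for s in service_concepts))

def discover(services, request):
    """Rank advertised services by semantic match score, dropping non-matches."""
    scored = sorted(services.items(),
                    key=lambda kv: match_score(request, kv[1]), reverse=True)
    return [name for name, concepts in scored if match_score(request, concepts) > 0]
```

A request for "temperature" would thus still find a service advertising the more specific "indoor_temperature", which is the flexibility the semantic matching is meant to buy over exact name lookup.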
Abstract:
The not criminally responsible on account of mental disorder (NCRMD) defence is used when claims can be made that offenders are not responsible for their actions due to symptoms of a mental disorder. Bill C-14, now enacted in Canada, has implemented changes making it more difficult for NCRMD defendants to be released back into the public. This enactment appears to have been driven primarily by public perceptions rather than actual knowledge of the defence. Thus, it seems important to assess what members of the public actually know about the defence. To assess this, 127 participants completed a survey assessing their knowledge of the illnesses generally involved in the NCRMD defence, the crimes committed, and the punishments received. On average, only 31.6% of responses fell within 20% of the factual statistics. Results suggest a general lack of knowledge about the defence and demonstrate why important changes should be based on factual information rather than public opinion.
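The 31.6% figure reflects scoring each response as correct when it falls within 20% of the true statistic. A minimal sketch of that scoring rule (the response values in the usage below are hypothetical, not the study's data):

```python
def within_tolerance(answer, truth, tol=0.20):
    """A response counts as accurate if it falls within tol (here 20%)
    of the true statistic, mirroring the scoring rule described above."""
    return abs(answer - truth) <= tol * truth

def accuracy(responses, truths):
    """Fraction of responses falling within the tolerance band."""
    hits = sum(within_tolerance(a, t) for a, t in zip(responses, truths))
    return hits / len(responses)
```

For example, `accuracy([50, 10], [100, 9])` scores one of two hypothetical answers as within tolerance, giving 0.5.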
Abstract:
The main purpose of the current study was to examine the role of vocabulary knowledge (VK) and syntactic knowledge (SK) in L2 listening comprehension, as well as their relative significance. Unlike previous studies, the current project employed assessment tasks to measure aural and proceduralized VK and SK. In terms of VK, to avoid under-representing the construct, measures of both breadth (VB) and depth (VD) were included. Additionally, the current study examined the role of VK and SK by accounting for individual differences in two important cognitive factors in L2 listening: metacognitive knowledge (MK) and working memory (WM). Also, to explore the role of VK and SK more fully, the current study accounted for the negative impact of anxiety on WM and L2 listening. The study was carried out in an English as a Foreign Language (EFL) context, and participants were 263 Iranian learners across a wide range of English proficiency, from lower-intermediate to advanced. Participants took a battery of ten linguistic, cognitive and affective measures. The collected data were subjected to several preliminary analyses, but structural equation modeling (SEM) was used as the primary analysis method to answer the study's research questions. Results of the preliminary analyses revealed that MK and WM were significant predictors of L2 listening ability; thus, they were kept in the main SEM analyses. The significant role of WM was only observed when the negative effect of anxiety on WM was accounted for. Preliminary analyses also showed that VB and VD were not distinct measures of VK. However, the results also showed that if VB and VD were considered separately, VD was a better predictor of L2 listening success. The main analyses of the current study revealed a significant role for both VK and SK in explaining success in L2 listening comprehension, which differs from findings from previous empirical studies.
However, SEM analysis did not reveal a statistically significant difference in the predictive power of the two linguistic factors. Descriptive results of the SEM analysis, along with results from regression analysis, indicated a more significant role for VK.
Abstract:
Knowledge is one of the most important assets for surviving in the modern business environment. The effective management of that asset mandates continuous adaptation by organizations and requires employees to strive to improve the company's work processes. Organizations attempt to coordinate their unique knowledge through traditional means as well as in new and distinct ways, and to transform it into innovative resources better than those of their competitors. As a result, how to manage the knowledge asset has become a critical issue for modern organizations, and knowledge management is considered the most feasible solution. Knowledge management is a multidimensional process that identifies, acquires, develops, distributes, utilizes, and stores knowledge. However, many related studies focus only on fragmented or limited knowledge-management perspectives. In order to make knowledge management more effective, it is important to identify the qualitative and quantitative issues that are the foundation of the challenge of effective knowledge management in organizations. The main purpose of this study was to integrate the fragmented knowledge management perspectives into a holistic framework, which includes knowledge infrastructure capability (technology, structure, and culture) and knowledge process capability (acquisition, conversion, application, and protection), based on Gold's (2001) study. Additionally, because the effect of incentives, which are widely acknowledged as a prime motivator in facilitating the knowledge management process, was missing in the original framework, this study included incentives in the knowledge management framework. This study also examined the relationship with organizational performance from the standpoint of the Balanced Scorecard, which includes the customer-related, internal business process, learning & growth, and perceptual financial aspects of organizational performance, in the Korean business context.
Moreover, this study examined the relationship with objective financial performance by calculating Tobin's q ratio. Lastly, this study compared group differences between larger and smaller organizations, and between manufacturing and nonmanufacturing firms. Since this study was conducted in Korea, the original instrument was translated into Korean using the back-translation technique. A confirmatory factor analysis (CFA) was used to examine the validity and reliability of the instrument. To identify the relationship between knowledge management capabilities and organizational performance, structural equation modeling (SEM) and multiple regression analysis were conducted. A Student's t test was conducted to examine the mean differences. The results of this study indicated that there is a positive relationship between effective knowledge management and organizational performance. However, no empirical evidence was found to suggest that knowledge management capabilities are linked to objective financial performance, which remains a topic for future research. Additionally, findings showed that knowledge management is affected by an organization's size, but not by the type of organization. The results of this study are valuable in establishing a valid and reliable survey instrument, providing strong evidence that knowledge management capabilities are essential to improving organizational performance, and making recommendations for future research.
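The abstract does not state which formulation of Tobin's q the study used; the sketch below computes a common book-value approximation, offered here only as an illustration of the objective financial measure mentioned.

```python
def tobins_q(market_value_equity, book_liabilities, book_assets):
    """A common book-value approximation of Tobin's q:
    (market value of equity + book value of liabilities) / book value of assets.
    This is one standard variant, not necessarily the study's exact formula."""
    return (market_value_equity + book_liabilities) / book_assets
```

A ratio above 1 suggests the market values the firm above the book value of its assets; the study found no significant link between this objective measure and knowledge management capabilities.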
Abstract:
Purpose – The purpose of this research is to show how the self-archiving of journal papers is a major step towards providing open access to research. However, copyright transfer agreements (CTAs) that are signed by an author prior to publication often indicate whether, and in what form, self-archiving is allowed. The SHERPA/RoMEO database enables easy access to publishers' policies in this area and uses a colour-coding scheme to classify publishers according to their self-archiving status. The database is currently being redeveloped and renamed the Copyright Knowledge Bank. However, it will still assign a colour to individual publishers indicating whether pre-prints can be self-archived (yellow), post-prints can be self-archived (blue), both pre-prints and post-prints can be archived (green), or neither (white). The nature of CTAs means that these decisions are rarely as straightforward as they may seem, and this paper describes the thinking and considerations that were used in assigning these colours in the light of the underlying principles and definitions of open access. Approach – Detailed analysis of a large number of CTAs led to the development of a controlled vocabulary of terms, which was carefully analysed to determine how these terms equate to the definition and “spirit” of open access. Findings – The paper reports on how conditions outlined by publishers in their CTAs, such as how or where a paper can be self-archived, affect the assignment of a self-archiving colour to the publisher. Value – The colour assignment is widely used by authors and repository administrators in determining whether academic papers can be self-archived. This paper provides a starting-point for further discussion and development of publisher classification in the open access environment.
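The colour-coding rules stated above map directly to a small decision function; the sketch below encodes them (the function name is ours, and real CTA conditions are, as the paper stresses, rarely this clean-cut).

```python
def archiving_colour(preprint_ok, postprint_ok):
    """Map a publisher's self-archiving permissions to the colour scheme
    described above: green = both pre- and post-prints, blue = post-print
    only, yellow = pre-print only, white = neither."""
    if preprint_ok and postprint_ok:
        return "green"
    if postprint_ok:
        return "blue"
    if preprint_ok:
        return "yellow"
    return "white"
```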
Abstract:
Aim of the study: To introduce and describe FlorNExT®, a free cloud computing application to estimate growth and yield of maritime pine (Pinus pinaster Ait.) even-aged stands in the Northeast of Portugal (NE Portugal). Area of study: NE Portugal. Material and methods: FlorNExT® implements a dynamic growth and yield modelling framework which integrates transition functions for dominant height (site index curves) and basal area, as well as output functions for tree and stand volume, biomass, and carbon content. Main results: FlorNExT® is freely available from any device with an Internet connection at: http://flornext.esa.ipb.pt/. Research highlights: This application has been designed to make it possible for any stakeholder to easily estimate standing volume, biomass, and carbon content in maritime pine stands from stand data, as well as to estimate growth and yield based on four stand variables: age, density, dominant height, and basal area. FlorNExT® allows planning thinning treatments. FlorNExT® is a fundamental tool to support forest mobilization at local and regional scales in NE Portugal.
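FlorNExT®'s actual transition and output functions are not given in the abstract. As a hedged illustration of output functions of this kind, the sketch below derives stand volume from basal area and dominant height via a stand form factor, and carbon from biomass via the conventional ~50% carbon fraction of dry wood; the form factor is a placeholder, not a FlorNExT® coefficient.

```python
def stand_volume(basal_area_m2_ha, dominant_height_m, form_factor=0.45):
    """Stand volume (m3/ha) as form factor x basal area x dominant height.
    The form factor here is a placeholder, not a fitted FlorNExT value."""
    return form_factor * basal_area_m2_ha * dominant_height_m

def carbon_content(biomass_t_ha, carbon_fraction=0.5):
    """Carbon stock (t/ha) from dry biomass, using the conventional ~50%
    carbon fraction for wood."""
    return carbon_fraction * biomass_t_ha
```

With real coefficients, outputs like these are what let a stakeholder go from the four stand variables (age, density, dominant height, basal area) to standing volume, biomass, and carbon.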
Abstract:
This paper examines the relationship between the state and the individual in relation to an aspect of mundane family life: the feeding of babies and young children. The nutritional status of children has long been a matter of national concern, and infant feeding is an aspect of family life that has been subjected to substantial state intervention. It exemplifies the imposition upon women of the ‘biologico-moral responsibility’ for the welfare of children (Foucault 1991b). The state’s attempts to influence mothers’ feeding practices operate largely through education and persuasion. Through an elaborate state-sponsored apparatus, a strongly medicalised expert discourse is disseminated to mothers. This discourse warns mothers of the risks of certain feeding practices and the benefits of others. It constrains mothers through a series of ‘quiet coercions’ (Foucault 1991c) which seek to render them self-regulating subjects. Using data from a longitudinal interview study, this paper explores how mothers, who are made responsible in these medical discourses around child nutrition, engage with, resist and refuse expert advice. It examines, in particular, the rhetorical strategies which mothers use to defend themselves against the charges of maternal irresponsibility that arise when their practices do not conform to expert medical recommendations.
Abstract:
Public involvement in healthcare is a prominent policy in countries across the economically developed world. A growing body of academic literature has focused on public participation, often presenting dichotomies between good and bad practice: between initiatives that offer empowerment and those constrained by consumerism, or between those which rely for recruitment on self-selecting members of the public, and those including a more broad-based, statistically representative group. In this paper I discuss the apparent tensions between differing rationales for participation, relating recent discussions about the nature of representation in public involvement to parallel writings about the contribution of laypeople’s expertise and experience. In the academic literature, there is, I suggest, a thin line between democratic justifications for involvement, suggesting a representative role for involved publics, and technocratic ideas about the potential ‘expert’ contributions of particular subgroups of the public. Analysing recent policy documents on participation in healthcare in England, I seek moreover to show how contemporary policy transcends both categories, demanding complex roles of involved publics which invoke various qualities seen as important in governing the interface between state and society. I relate this to social-theoretical perspectives on the relationship between governmental authority and citizens in late-modern society.
Abstract:
Objective: To investigate the knowledge and use of asthma control measurement (ACM) tools in the management of asthma among doctors working in family and internal medicine practice in Nigeria. Method: A questionnaire based on the Global Initiative for Asthma (GINA) guideline was self-administered by 194 doctors. It contained 12 test items on knowledge of ACM tools and their application. The knowledge score was obtained by adding the correct answers and classified as good if the score was ≥ 9, satisfactory if the score was 6-8, and poor if < 6. Results: The doctors' overall knowledge score for ACM tools was 4.49±2.14 (maximum of 12). Pulmonologists recorded the highest knowledge score, 10.75±1.85. The majority (69.6%) had a poor knowledge score. Fifty doctors (25.8%) assessed their patients' level of asthma control, and 34 (17.5%) did so at every visit. Thirty-nine (20.1%) used ACM tools in their consultations; 29 (15.0%) of them used GINA-defined control, while 10 (5.2%) used the asthma control test (ACT). The use of the tools was associated with being a pulmonologist, having attended CME within the previous six months, and having graduated within five years prior to the survey. Conclusion: The results highlight the poor knowledge and use of ACM tools and the need to address the knowledge gap.
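The scoring bands described in the Method map to a simple classifier; a sketch using the study's own cut-offs:

```python
def classify_score(score):
    """Band a 0-12 knowledge score using the study's cut-offs:
    good if >= 9, satisfactory if 6-8, poor if < 6."""
    if score >= 9:
        return "good"
    if score >= 6:
        return "satisfactory"
    return "poor"
```

Under this rule the reported mean score of 4.49 falls in the "poor" band, consistent with the finding that 69.6% of doctors scored poorly.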