968 results for Interdisciplinary approach to knowledge
Abstract:
Students may have difficulty in understanding some of the complex concepts taught in the general areas of science and engineering. Whilst practical work, such as a laboratory-based examination of the performance of structures, has an important role in knowledge construction, it does have limitations. Blended learning supports different learning styles and hence further benefits knowledge building. This research involves empirical studies of how an innovative use of vodcasts (video-podcasts) can enrich the learning experience in the structural properties of materials laboratory of an undergraduate course. Students were given the opportunity to download and view vodcasts on the theory before and after the experimental work; it was the students' choice when (before, after, or both before and after) and how many times to view them. In blended learning, the combination of face-to-face teaching, vodcasts, printed materials, practical experiments, report writing and instructors' feedback supports the different learning styles of the learners. In preparation for the practical laboratory work, students were informed about the availability of the vodcasts prior to the practical session. After the practical work, students submitted an individual laboratory report for the assessment of the structures laboratory. The data collection consists of a questionnaire completed by the students and the practical reports submitted by them for assessment. The results from the questionnaire were analysed quantitatively, whilst the data from the assessment reports were analysed qualitatively. The analysis shows that students who had not fully grasped the theory after the practical succeeded in gaining the required knowledge by viewing the vodcasts, whereas some students who had already understood the theory chose to view the vodcasts once or not at all. Their understanding was demonstrated by the quality of the explanations in their reports, as illustrated by the approach they took to explicate the results of their experimental work; for example, they could explain how to calculate the Young's modulus and provided the correct value for it. The research findings are valuable to instructors who design, develop and deliver different types of blended learning, and beneficial to learners who try different blended approaches. Recommendations are made on the role of the innovative application of vodcasts in knowledge construction for the structures laboratory and to guide future work in this area of research.
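For context, the Young's modulus the students were asked to explain is the standard ratio of stress to strain; a minimal statement (the specimen geometry used in the laboratory is not given in the abstract) is:

```latex
E \;=\; \frac{\sigma}{\varepsilon}
  \;=\; \frac{F / A}{\Delta L / L_0},
```

where F is the applied load, A the cross-sectional area, ΔL the measured extension and L0 the original gauge length.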
Abstract:
Classical risk assessment approaches for animal diseases are influenced by the probability of release, exposure and the consequences of a hazard affecting a livestock population. Once a pathogen enters domestic livestock, potential risks of exposure and infection, both to animals and to people, extend through a chain of economic activities related to producing, buying and selling animals and animal products. Therefore, in order to understand the economic drivers of animal diseases in different ecosystems and to devise effective and efficient measures to manage disease risks for a country or region, the entire value chain and the related markets for animals and products need to be analysed, so that practical and cost-effective risk management options can be agreed with the actors in those value chains. Value chain analysis enriches disease risk assessment by providing a framework for interdisciplinary collaboration, which seems to be in increasing demand for problems concerning infectious livestock diseases. The best way to achieve this is to ensure that veterinary epidemiologists and social scientists work together throughout the process at all levels.
Abstract:
The paper develops a more precise specification and understanding of the process of national-level knowledge accumulation and absorptive capabilities by applying the reasoning and evidence from the firm-level analysis pioneered by Cohen and Levinthal (1989, 1990). In doing so, we acknowledge that significant cross-border effects arise from the role of both inward and outward FDI and that the assimilation of foreign knowledge is not confined to catching-up economies but is also carried out by countries at the frontier-sharing phase. We postulate a non-linear relationship between national absorptive capacity and the technological gap, due to the cumulative nature of the learning process and the increasing complexity of external knowledge as the country approaches the technological frontier. We argue that national absorptive capacity and the accumulation of the knowledge stock are simultaneously determined. This implies that different phases of technological development require different strategies. During the catching-up phase, knowledge accumulation occurs predominantly through the absorption of trade- and/or inward-FDI-related R&D spillovers. From the pre-frontier-sharing phase onwards, increases in the knowledge base occur largely through independent knowledge creation and by actively accessing foreign-located technological spillovers, inter alia through outward-FDI-related R&D, joint ventures and strategic alliances.
Abstract:
Traditionally, the formal scientific output in most fields of natural science has been limited to peer-reviewed academic journal publications, with less attention paid to the chain of intermediate data results and their associated metadata, including provenance. In effect, this has constrained the representation and verification of data provenance to the confines of the related publications. Detailed knowledge of a dataset's provenance is essential to establish the pedigree of the data for its effective re-use, and to avoid redundant re-enactment of the experiment or computation involved. Determining the authenticity and quality of open-access data is increasingly important, especially considering the growing volumes of datasets appearing in the public domain. To address these issues, we present an approach that combines the Digital Object Identifier (DOI) – a widely adopted citation technique – with existing, widely adopted climate science data standards to formally publish the detailed provenance of a climate research dataset as an associated scientific workflow. This is integrated with linked-data-compliant data re-use standards (e.g. OAI-ORE) to enable a seamless link between a publication and the complete lineage trail of the corresponding dataset, including the dataset itself.
Abstract:
The flexibility of information systems (IS) has been studied to improve their adaptation in support of business agility, understood as the set of capabilities to compete more effectively and adapt to rapid changes in market conditions (Glossary of Business Agility Terms, 2003). However, most work on IS flexibility has been limited to systems architecture, ignoring the analysis of interoperability as a part of flexibility arising from the requirements. This paper reports a PhD project that proposes an approach to developing IS with flexibility features, considering some challenges of flexibility in small and medium enterprises (SMEs) such as the lack of interoperability and the agility of their business. The motivations for this research are the high prices of IS in developing countries and the usefulness of organisational semiotics to support the analysis of requirements for IS (Liu, 2005).
Abstract:
Information can be interpreted as in-formation, which refers to the potential of form for the mediation of meaning. In this paper we focus on reasoning information and consider the question of how the form involved in reasoning can be used for an analysis of accounting narratives in corporate disclosures. An evaluation of experimental results is included.
Abstract:
Background: 29 autoimmune diseases, including Rheumatoid Arthritis, gout, Crohn's Disease, and Systemic Lupus Erythematosus, affect 7.6-9.4% of the population. While effective therapy is available, many patients do not follow treatment or use medications as directed. Digital health and Web 2.0 interventions have shown much promise in increasing medication and treatment adherence, but to date many Internet tools have proven disappointing. In fact, most digital interventions continue to suffer from high attrition in patient populations, are burdensome for healthcare professionals, and have relatively short life spans. Objective: Digital health tools have traditionally centered on the transformation of existing interventions (such as diaries, trackers, stage-based or cognitive behavioral therapy programs, coupons, or symptom checklists) into electronic format. Advanced digital interventions have also incorporated attributes of Web 2.0 such as social networking, text messaging, and the use of video. Despite these efforts, there has been little measurable impact on non-adherence for illnesses that require medical interventions, and research must look to other strategies or development methodologies. As a first step in investigating the feasibility of developing such a tool, the objective of the current study is to systematically rate factors of non-adherence that have been reported in past research studies. Methods: Grounded Theory, recognized as a rigorous method that facilitates the emergence of new themes through systematic analysis, data collection and coding, was used to analyze quantitative, qualitative and mixed-method studies addressing the following autoimmune diseases: Rheumatoid Arthritis, gout, Crohn's Disease, Systemic Lupus Erythematosus, and inflammatory bowel disease. Studies were included only if they contained primary data addressing the relationship with non-adherence. Results: Across the 27 studies, four non-modifiable and 11 modifiable risk factors were identified. Over one third of the articles identified the following risk factors as common contributors to medication non-adherence (percentage of studies reporting): patients not understanding treatment (44%), side effects (41%), age (37%), dose regimen (33%), and perceived medication ineffectiveness (33%). An unanticipated finding that emerged was the need for risk stratification tools (81%) with patient-centric approaches (67%). Conclusions: This study systematically identifies and categorizes medication non-adherence risk factors in select autoimmune diseases. Findings indicate that patients' understanding of their disease and the role of medication is paramount. An unexpected finding was that the majority of research articles called for the creation of tailored, patient-centric interventions that dispel personal misconceptions about disease, pharmacotherapy, and how the body responds to treatment. To our knowledge, these interventions do not yet exist in digital format. Rather than adopting a systems-level approach, digital health programs should focus on cohorts with heterogeneous needs and develop tailored interventions based on individual non-adherence patterns.
Abstract:
Promoting the inclusion of students with disabilities in e-learning systems has brought many challenges for researchers and educators. The use of synchronous communication tools such as interactive whiteboards has been regarded as an obstacle to inclusive education. In this paper, we present the proposal of an inclusive approach to provide blind students with the possibility to participate in live learning sessions with whiteboard software. The approach is based on the provision of accessible textual descriptions by a live mediator. With the accessible descriptions, students are able to navigate through the elements and explore the content of the class using screen readers. The method used for this study consisted of the implementation of a software prototype within a virtual learning environment and a case study with the participation of a blind student in a live distance class. The results from the case study have shown that this approach can be very effective and may be a starting point for providing blind students with resources from which they had previously been deprived. The proof of concept implemented has shown that many further possibilities may be explored to enhance the interaction of blind users with educational content in whiteboards, and further pedagogical approaches can be investigated from this proposal. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
Two fundamental processes usually arise in the production planning of many industries. The first consists of deciding how many final products of each type have to be produced in each period of a planning horizon, the well-known lot sizing problem. The other consists of cutting raw materials in stock in order to produce smaller parts used in the assembly of final products, the well-studied cutting stock problem. In this paper the decision variables of these two problems are made dependent on each other in order to obtain a globally optimal solution. Setups that are typically present in lot sizing problems are relaxed, together with the integer frequencies of the cutting patterns in the cutting problem. A large-scale linear optimization problem therefore arises, which is solved exactly by a column generation technique. It is worth noting that this new combined problem still accounts for the trade-off between storage costs (for final products and parts) and trim losses (in the cutting process). We present several sets of computational tests, analysed over three different scenarios. The results show that, by combining the problems and using an exact method, it is possible to obtain significant gains when compared with the usual industrial practice, which solves them in sequence. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
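The abstract does not give the formulation; the cutting side of such a combined model builds on the classical Gilmore–Gomory column generation scheme, sketched here with assumed notation (x_j = frequency of cutting pattern j, a_ij = number of parts of type i produced by pattern j, d_i = demand for part i, ℓ_i = part length, L = stock length):

```latex
% Relaxed (master) problem over the currently known patterns j:
\min \sum_{j} x_j
\quad \text{s.t.} \quad \sum_{j} a_{ij}\, x_j \ge d_i \;\; \forall i,
\qquad x_j \ge 0 .

% Pricing (knapsack) subproblem with dual prices \pi_i:
\max_{a \in \mathbb{Z}_{\ge 0}^{m}} \sum_{i} \pi_i a_i
\quad \text{s.t.} \quad \sum_{i} \ell_i a_i \le L ,
% add the pattern a to the master whenever its reduced cost 1 - \sum_i \pi_i a_i < 0 .
```

In the combined model described in the abstract, the objective would presumably also carry the storage costs of final products and parts, but the alternating master/pricing loop remains the standard one.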
Abstract:
Component-based software engineering has recently emerged as a promising solution to the development of system-level software. Unfortunately, current approaches are limited to specific platforms and domains. This lack of generality is particularly problematic as it prevents knowledge sharing and generally drives development costs up. In the past, we developed a generic approach to component-based software engineering for system-level software called OpenCom. In this paper, we present OpenComL, an instantiation of OpenCom for Linux environments, and show how it can be profiled to meet the needs of a range of system-level software in Linux environments. To this end, we demonstrate its application in constructing a programmable router platform and a middleware for parallel environments.
Abstract:
This paper presents an automatic method to detect and classify weathered aggregates by assessing changes in color and texture. The method allows the extraction of aggregate features from images and their automatic classification based on surface characteristics. The concept of entropy is used to extract features from the digital images. An analysis of the use of this concept is presented and two classification approaches, based on neural network architectures, are proposed. The classification performance of the proposed approaches is compared with the results obtained by other algorithms commonly used for classification purposes. The results obtained confirm that the presented method strongly supports the detection of weathered aggregates.
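The abstract does not state exactly how the entropy features are computed; a minimal sketch of the usual histogram-based Shannon entropy of a grayscale image (or image patch), which could serve as one such feature, is:

```python
import numpy as np

def image_entropy(gray, bins=256):
    """Shannon entropy (in bits) of a grayscale image's intensity histogram."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, bins))
    p = hist.astype(float) / hist.sum()
    p = p[p > 0]                       # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))
```

Per-patch entropies computed this way could then be assembled into the feature vectors fed to the neural network classifiers.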
Abstract:
Automatic summarization of texts is now crucial for several information retrieval tasks owing to the huge amount of information available in digital media, which has increased the demand for simple, language-independent extractive summarization strategies. In this paper, we employ concepts and metrics of complex networks to select sentences for an extractive summary. The graph or network representing a piece of text consists of nodes corresponding to sentences, while edges connect sentences that share common meaningful nouns. Because various metrics could be used, we developed a set of 14 summarizers, generically referred to as CN-Summ, employing network concepts such as node degree, length of shortest paths, d-rings and k-cores. An additional summarizer was created which selects the highest-ranked sentences across the 14 systems, as in a voting scheme. When applied to a corpus of Brazilian Portuguese texts, some CN-Summ versions performed better than summarizers that do not employ deep linguistic knowledge, with results comparable to state-of-the-art summarizers based on expensive linguistic resources. The use of complex networks to represent texts therefore appears suitable for automatic summarization, consistent with the belief that the metrics of such networks may capture important text features. (c) 2008 Elsevier Inc. All rights reserved.
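As an illustration of the simplest of the network metrics mentioned (node degree), a minimal sketch of a degree-based extractive summarizer follows; the precomputed keyword sets stand in for the "meaningful nouns" of the paper, which would require part-of-speech tagging, and the function name is only illustrative:

```python
import itertools
import networkx as nx

def degree_summary(sentences, keyword_sets, k=3):
    """Select the k sentences with highest weighted degree in the sentence graph.

    sentences:    list of sentence strings
    keyword_sets: one set of content words per sentence (the paper uses
                  meaningful nouns; any keyword extraction can stand in here)
    """
    g = nx.Graph()
    g.add_nodes_from(range(len(sentences)))
    for i, j in itertools.combinations(range(len(sentences)), 2):
        shared = keyword_sets[i] & keyword_sets[j]
        if shared:                                # edge iff the sentences share words
            g.add_edge(i, j, weight=len(shared))
    top = sorted(g.nodes, key=lambda n: g.degree(n, weight="weight"), reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]    # restore original text order
```

The other CN-Summ variants would replace the degree ranking with shortest-path, d-ring or k-core based scores over the same graph.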
Abstract:
Alzheimer's disease is an ultimately fatal neurodegenerative disease, and BACE-1 has become an attractive validated target for its therapy, with more than a hundred crystal structures deposited in the PDB. In the present study, we present a new methodology that integrates ligand-based methods with structural information derived from the receptor. 128 BACE-1 inhibitors recently disclosed by GlaxoSmithKline R&D were selected specifically because the crystal structures of 9 of these compounds complexed with BACE-1, as well as of five closely related analogs, have been made available. A new fragment-guided approach was designed to incorporate this wealth of structural information into a CoMFA study, and the methodology was systematically compared with other popular approaches, such as docking, for generating a molecular alignment. The influence of the partial-charge calculation method was also analyzed. Several consistent and predictive models are reported, including one with r² = 0.88, q² = 0.69 and r²pred = 0.72. The models obtained with the new methodology performed consistently better than those obtained with other methodologies, particularly in terms of external predictive power. Visual analyses of the contour maps in the context of the enzyme drew attention to a number of possible opportunities for the development of analogs with improved potency. These results suggest that 3D-QSAR studies may benefit from the additional structural information added by the presented methodology.
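For reference, the cross-validated q² conventionally reported for CoMFA models is the leave-one-out statistic (the abstract does not state the exact validation protocol used):

```latex
q^{2} \;=\; 1 - \frac{\mathrm{PRESS}}{\mathrm{SS}}
      \;=\; 1 - \frac{\sum_i \bigl(y_i - \hat{y}_{(i)}\bigr)^{2}}
                     {\sum_i \bigl(y_i - \bar{y}\bigr)^{2}},
```

where ŷ_(i) is the prediction for compound i from a model fitted without it; r²pred is computed analogously over the external test set using the final model.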
A robust Bayesian approach to null intercept measurement error model with application to dental data
Abstract:
Measurement error models often arise in epidemiological and clinical research. Usually, in this setup it is assumed that the latent variable has a normal distribution. However, the normality assumption may not always be correct. The skew-normal/independent distributions are a class of asymmetric, thick-tailed distributions that includes the skew-normal distribution as a special case. In this paper, we explore the use of skew-normal/independent distributions as a robust alternative for the null intercept measurement error model under a Bayesian paradigm. We assume that the random errors and the unobserved value of the covariate (latent variable) jointly follow a skew-normal/independent distribution, providing an appealing robust alternative to the routine use of the symmetric normal distribution in this type of model. Specific distributions examined include univariate and multivariate versions of the skew-normal distribution, the skew-t distributions, the skew-slash distributions and the skew-contaminated normal distributions. The methods developed are illustrated using a real data set from a dental clinical trial. (C) 2008 Elsevier B.V. All rights reserved.
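For reference, the skew-normal density that the skew-normal/independent class generalizes (Azzalini's form) is:

```latex
f(y \mid \mu, \sigma^{2}, \lambda)
  \;=\; \frac{2}{\sigma}\,
        \phi\!\left(\frac{y-\mu}{\sigma}\right)
        \Phi\!\left(\lambda\,\frac{y-\mu}{\sigma}\right),
```

where φ and Φ denote the standard normal density and distribution function and λ controls the skewness; the skew-normal/independent members are obtained by scaling a skew-normal variate with a positive mixing variable, which thickens the tails (e.g. a Gamma mixing variable yields the skew-t).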
Abstract:
This paper reports an expert system (SISTEMAT) developed for the structural determination of diverse chemical classes of natural products, including lignans, based mainly on the 13C NMR and 1H NMR data of these compounds. The system is composed of five programs that analyze specific data of a lignan and show a skeleton probability for the compound. At the end of the analyses, the results are grouped, the global probability is computed, and the most probable skeleton is displayed to the user. SISTEMAT was able to correctly predict the skeletons of 80% of the 30 lignans tested, demonstrating its advantage in shortening the course of structural elucidation.