819 results for Task-based information access


Relevance:

100.00%

Publisher:

Abstract:

This article considers Internet/intranet information systems as a tool for distance learning. The author presents a model of a three-tier web-based information system and the idea of a language for implementing customized solutions, which includes an original language and processor for fast prototyping and implementation of small and medium-sized Internet/intranet information systems.


A formal model of natural language processing in knowledge-based information systems is considered, and the components realizing the functions of the proposed formal model are described.


Data integration for the purposes of tracking, tracing and transparency is an important challenge in the agri-food supply chain. The Electronic Product Code Information Services (EPCIS) is an event-oriented GS1 standard that aims to enable tracking and tracing of products through the sharing of event-based datasets that encapsulate the Electronic Product Code (EPC). In this paper, the authors propose a framework that utilises events and EPCs in the generation of "linked pedigrees": linked datasets that enable the sharing of traceability information about products as they move along the supply chain. The authors exploit two ontology-based information models, EEM and CBVVocab, within a distributed and decentralised framework that consumes real-time EPCIS events as linked data to generate the linked pedigrees. The authors exemplify the usage of linked pedigrees within the fresh fruit and vegetables supply chain in the agri-food sector.
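As a rough sketch of the event-to-linked-data step described in this abstract, the following Python fragment serializes a simplified EPCIS-style ObjectEvent as N-Triples lines. The namespace IRIs and property names are illustrative placeholders, not the actual EEM or CBVVocab terms.

```python
# Hypothetical namespaces -- NOT the real EEM ontology IRIs
EX = "http://example.org/event/"
EEM = "http://example.org/eem#"

def event_to_ntriples(event_id, event):
    """Serialize one simplified EPCIS-style event dict as N-Triples lines."""
    subj = f"<{EX}{event_id}>"
    triples = [
        f'{subj} <{EEM}eventTime> "{event["eventTime"]}" .',
        f'{subj} <{EEM}action> "{event["action"]}" .',
    ]
    # One triple per EPC observed in the event
    for epc in event["epcList"]:
        triples.append(f"{subj} <{EEM}associatedEPC> <{epc}> .")
    return triples

event = {
    "eventTime": "2014-03-01T10:15:00Z",
    "action": "OBSERVE",
    "epcList": ["urn:epc:id:sgtin:0614141.107346.2017"],
}
lines = event_to_ntriples("e1", event)
```

A pedigree generator along the lines of the paper's framework would consume streams of such triples and link successive events for the same EPC across supply-chain parties.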


The presentation of cultural heritage is a difficult, comprehensive, and constantly evolving topic. Researchers often focus on the various techniques for digitizing cultural heritage artifacts. This work focuses instead on the overall shape and structure of a future multimedia application whose specificity is determined by its topic: the Odrysian kingdom. A concept is presented below for the structure of content-based information available for the individual kings of the Odryssae dynasty. Special attention is paid to the presentation of preserved artifacts associated with the reigns of specific rulers. The main purpose of the multimedia application dedicated to the Odrysian kingdom is to be used in university teaching programs related to cultural heritage and the history of antiquity. The designers also aim for it to be easily modifiable for use in museums.


The article offers the authors' insights on how to manage children who eat a limited diet. Topics discussed include the role of parents and caregivers in helping children to develop healthy eating habits, the ways the Child Feeding Guide suggests for increasing children's fruit and vegetable intake, and the Child Feeding Guide app for tablets and smartphones, which provides evidence-based information for people who are concerned about children's eating behavior.


Wikis are quickly emerging as a new corporate medium for communication and collaboration. They allow dispersed groups of collaborators to asynchronously engage in persistent conversations, the result of which is stored on a common server as a single, shared truth. To gauge the enterprise value of wikis, the authors draw on Media Choice Theories (MCTs) as an evaluation framework. MCTs reveal core capabilities of communication media and their fit with the communication task. Based on the evaluation, the authors argue that wikis are equivalent or superior to existing asynchronous communication media in key characteristics. Additionally argued is the notion that wiki technology challenges some of the held beliefs of existing media choice theories, as wikis introduce media characteristics not previously envisioned. The authors thus predict a promising future for wiki use in enterprises.


The population of English Language Learners (ELLs) globally has been increasing substantially every year. In the United States alone, adult ELLs are the fastest growing portion of learners in adult education programs (Yang, 2005). There is a significant need to improve the teaching of English to ELLs in the United States and other predominantly English-speaking countries. However, for many ELLs, speaking, especially to Native English Speakers (NESs), causes considerable language anxiety, which in turn plays a vital role in hindering their language development and academic progress (Pichette, 2009; Woodrow, 2006).

Task-based Language Teaching (TBLT), such as simulation activities, has long been viewed as an effective approach for second-language development. Current advances in technology and the rapid emergence of Multi-User Virtual Environments (MUVEs) have provided an opportunity for educators to consider conducting simulations online for ELLs to practice speaking English to NESs. Yet to date, empirical research on the effects of MUVEs on ELLs' language development and speaking is limited (Garcia-Ruiz, Edwards, & Aquino-Santos, 2007).

This study used a true experimental treatment-control group repeated measures design to compare the perceived speaking anxiety levels (as measured by an anxiety scale administered per simulation activity) of 11 ELLs (5 in the control group, 6 in the experimental group) when speaking to NESs during 10 simulation activities. Simulations in the control group were done face-to-face, while those in the experimental group were done in the MUVE of Second Life.

The results of the repeated measures ANOVA, after the Huynh-Feldt epsilon correction, demonstrated for both groups a significant decrease in anxiety levels over time from the first simulation to the tenth and final simulation. When comparing the two groups, the results revealed a statistically significant difference, with the experimental group demonstrating a greater anxiety reduction. These results suggest that language instructors should consider including face-to-face and MUVE simulations with ELLs paired with NESs as part of their language instruction. Future investigations should examine the use of other multi-user virtual environments and/or measure other dimensions of the ELL/NES interactions.
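For readers unfamiliar with the Huynh-Feldt correction mentioned in this abstract, a minimal numpy sketch of the Greenhouse-Geisser sphericity epsilon and its Huynh-Feldt adjustment (computed from an n-subjects-by-k-conditions data matrix) might look like the following. This is an illustrative reconstruction of the standard textbook formulas, not the code used in the study, and the simulated anxiety scores are invented.

```python
import numpy as np

def gg_epsilon(data):
    """Greenhouse-Geisser epsilon from an (n subjects x k conditions) array."""
    k = data.shape[1]
    S = np.cov(data, rowvar=False)  # k x k covariance of the repeated measures
    # Double-center the covariance matrix before applying the trace formula
    Sc = S - S.mean(axis=0, keepdims=True) - S.mean(axis=1, keepdims=True) + S.mean()
    return np.trace(Sc) ** 2 / ((k - 1) * np.sum(Sc ** 2))

def hf_epsilon(data):
    """Huynh-Feldt adjustment of the GG epsilon, capped at 1."""
    n, k = data.shape
    e = gg_epsilon(data)
    hf = (n * (k - 1) * e - 2) / ((k - 1) * (n - 1 - (k - 1) * e))
    return min(hf, 1.0)

# Hypothetical anxiety scores: 11 subjects x 10 simulation sessions,
# with mean anxiety drifting downward over sessions
rng = np.random.default_rng(0)
scores = rng.normal(50.0, 10.0, size=(11, 10)) - 2.0 * np.arange(10)
eps_gg = gg_epsilon(scores)
eps_hf = hf_epsilon(scores)
```

The corrected ANOVA then multiplies both degrees of freedom of the F test by the epsilon before looking up the p-value, which protects against sphericity violations.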


The aim of this mixed-methods action research was to investigate the role of the tasks proposed by Task-Based Learning, TBL (WILLIS, 1996), in the development of speech production in English as a foreign language (EFL) at a public school. Twenty-three students from a secondary-school class at a state school in Rio Grande do Norte were systematically exposed to learning tasks focused on speech production in EFL over two months. The instruments used for data collection (pre- and post-questionnaires; field notes; a focus group; and pre- and post-tests) generated two kinds of data: a) qualitative (the students' perception of their speech production and of the teaching of this ability at the public school, and these learners' use of communication strategies under TBL); and b) quantitative (the development of pronunciation, of accuracy in the proficiency tests (KET test, Cambridge, adapted), and of Global Oral Proficiency (POG) of these learners after completing the learning tasks). The quantitative results of the study indicate a statistically significant development of pronunciation and accuracy in the proficiency tests after the task experience. The qualitative findings, in turn, represented by reports from the learners and the research teacher, show a greater focus on the use of communication strategies in the learners' oral production throughout the intervention with the tasks.


X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.

Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
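As a toy instance of this three-part definition, the sketch below uses a hypothetical 2-D Gaussian lesion as the task, a non-prewhitening (NPW) matched filter as the observer, and a detectability index as the performance measure. It assumes white noise, under which the general NPW form d'^2 = (s.s)^2 / (s'Ks) collapses to a closed form; the lesion size and contrast are invented for illustration.

```python
import numpy as np

def npw_dprime_white(signal, noise_std):
    """NPW matched-filter detectability index under white noise:
    d'^2 = (sum s^2)^2 / (sigma^2 * sum s^2) = sum(s^2) / sigma^2."""
    return np.sqrt(np.sum(signal ** 2)) / noise_std

# Task: detect a hypothetical subtle lesion (2-D Gaussian, 15 HU peak contrast)
x = np.arange(-16, 16)
xx, yy = np.meshgrid(x, x)
signal = 15.0 * np.exp(-(xx ** 2 + yy ** 2) / (2 * 4.0 ** 2))

# Observer performance on this task, for 10 HU image noise
d = npw_dprime_white(signal, noise_std=10.0)
```

Note the expected behavior of a task-based metric: doubling lesion contrast doubles d', and halving the noise does the same, so the index couples task, observer, and performance in one number.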

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
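The naïve CNR metric referred to above is simple enough to state in a few lines; the ROI pixel values here are made up for illustration.

```python
import numpy as np

def cnr(roi_lesion, roi_background):
    """Contrast-to-noise ratio: absolute mean difference over background noise."""
    return abs(roi_lesion.mean() - roi_background.mean()) / roi_background.std(ddof=1)

lesion = np.array([5.0, 5.0, 5.0, 5.0])      # pixels inside a hypothetical lesion ROI
background = np.array([0.0, 2.0, 0.0, 2.0])  # pixels in a nearby background ROI
value = cnr(lesion, background)
```

Its weakness, as the study found, is that a single scalar like this carries no information about noise texture or signal shape, which is precisely what changes under iterative reconstruction; the matched-filter and channelized Hotelling observers account for those properties.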

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
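The image subtraction technique used here has a compact form: subtracting two repeated scans cancels the fixed phantom structure, and the remaining difference image has sqrt(2) times the single-image noise. A sketch with simulated data (the phantom pattern and noise level are invented for illustration):

```python
import numpy as np

def noise_from_subtraction(img_a, img_b):
    """Quantum noise estimate from two repeated scans of the same object.
    The static object cancels in the difference; the sqrt(2) accounts for
    two independent noise realizations adding in quadrature."""
    return (img_a - img_b).std(ddof=1) / np.sqrt(2)

rng = np.random.default_rng(1)
phantom = np.outer(np.hanning(256), np.hanning(256)) * 100.0  # fixed structure
scan_a = phantom + rng.normal(0.0, 5.0, phantom.shape)        # 5 HU quantum noise
scan_b = phantom + rng.normal(0.0, 5.0, phantom.shape)
sigma = noise_from_subtraction(scan_a, scan_b)
```

Because the structure cancels exactly, the same estimator works on uniform and textured phantoms alike, which is what makes the FBP-versus-SAFIRE texture comparison possible.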

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
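The dissertation's novel contribution at this step was NPS estimation from irregularly shaped ROIs; for orientation, the sketch below shows only the standard ensemble estimator for square noise ROIs, with a white-noise stand-in and an invented pixel size.

```python
import numpy as np

def nps_2d(rois, pixel_size=1.0):
    """Ensemble noise power spectrum from a stack of square noise ROIs (n, N, N):
    NPS(f) = (dx*dy / (Nx*Ny)) * <|DFT2(ROI - mean)|^2> over the ensemble."""
    n, Ny, Nx = rois.shape
    centered = rois - rois.mean(axis=(1, 2), keepdims=True)
    return (pixel_size ** 2 / (Nx * Ny)) * (np.abs(np.fft.fft2(centered)) ** 2).mean(axis=0)

rng = np.random.default_rng(2)
rois = rng.normal(0.0, 3.0, size=(50, 64, 64))  # white-noise ROIs, sigma = 3
nps = nps_2d(rois, pixel_size=0.5)              # hypothetical 0.5 mm pixels
df = 1.0 / (64 * 0.5)                           # frequency bin width in 1/mm
```

A useful sanity check on the normalization is Parseval's relation: integrating the NPS over frequency (sum times the bin area) should recover the pixel variance, here 9 HU^2.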

To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
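A hypothetical instance of such an analytical lesion model, with a radially symmetric profile parameterized by size, peak contrast, and an edge-sharpness exponent, voxelized onto a grid and added to a (stand-in) patient ROI to form a "hybrid" image, might look like the following. The functional form and all parameter values are illustrative assumptions, not the dissertation's actual models.

```python
import numpy as np

def lesion_model(shape, center, radius, contrast, edge_n=2.0):
    """Analytical lesion: contrast * (1 - (r/R)^n) inside r < R, zero outside.
    radius controls size, contrast the peak HU difference, edge_n the edge profile."""
    yy, xx = np.indices(shape)
    r = np.hypot(yy - center[0], xx - center[1])
    return contrast * np.clip(1.0 - (r / radius) ** edge_n, 0.0, None)

# Voxelize and insert into a placeholder patient ROI to create a hybrid image
patient_roi = np.zeros((64, 64))  # stand-in for real image data
hybrid = patient_roi + lesion_model((64, 64), (32, 32), radius=6.0, contrast=-15.0)
```

Because the inserted lesion is generated from an equation, its size, contrast, and location are known exactly, which is what makes hybrid images usable as ground truth for detection and estimation studies.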

Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5); lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.


This paper discusses the Court’s reasoning in interpreting the EU Charter, using recent case law on horizontal effect as a case study. It identifies two possible means of interpreting the provisions of the Charter: firstly, an approach based on common values (e.g. equality or solidarity) and, secondly, an approach based on access to the public sphere. It argues in favour of the latter. Whereas an approach based on common values is more consonant with the development of the case law so far, it is conceptually problematic: it involves subjective assessments of the importance and degree of ‘sharedness’ of the value in question, which can undermine the equal constitutional status of different Charter provisions. Furthermore, it marginalises the Charter’s overall politically constructional character, which distinguishes it from other sources of rights protection listed in Art 6 TEU. The paper argues that, as the Charter’s provisions concretise the notion of political status in the EU, they have a primarily constitutional, rather than ethical, basis. Interpreting the Charter based on the very commitment to a process of sharing, drawing on Hannah Arendt’s idea of the ‘right to have rights’ (a right to access a political community on equal terms), is therefore preferable. This approach retains the pluralistic, post-national fabric of the EU polity, as it accommodates multiple narratives about its underlying values, while also having an inclusionary impact on previously underrepresented groups (e.g. non-market-active citizens or the sans-papiers) by recognising their equal political disposition.


This paper considers how far Anglo-Saxon conceptions of vocational education and training (VET) have influenced European Union VET policy, especially given the disparate approaches to VET across Europe. Two dominant approaches can be identified: the dual system (exemplified by Germany) and output-based models (exemplified by the NVQ ‘English style’). Within the EU itself, the design philosophy of the English output-based model initially proved influential in attempts to develop tools to establish equivalence between vocational qualifications across Europe, resulting in the learning outcomes approach of the European Qualifications Framework, the credit-based model of the European VET Credit System, and the task-based construction of occupation profiles exemplified by European Skills, Competences and Occupations. The governance model for the English system is, however, predicated on employer demand for ‘skills’, and this does not fit well with the social partnership model encompassing knowledge, skills and competences that is dominant in northern Europe. These contrasting approaches have led to continual modifications to the tools as they sought to harmonise and reconcile national VET requirements with the original design. A tension is evident in particular between national and regional approaches to vocational education and training, on the one hand, and the policy tools adopted to align European vocational education and training better with the demands of the labour market, including at sectoral level, on the other. This paper explores these tensions and considers the prospects for the successful operation of these tools, paying particular attention to the European Qualifications Framework, the European VET Credit System, and the European Skills, Competences and Occupations tool and the relationships between them, drawing on studies of the construction and furniture industries.


This article describes a teaching innovation proposal based on the educational approach of Development Education, together with the improvement of L2 Communicative Competence and of Literary and Intercultural Competences by means of a workshop designed for that purpose. The purpose of this article is twofold. The first aim is to show the possibilities the Creative Writing and Illustration Workshop offers for developing Literary, Intercultural, and L2 Communicative Competences, and how the workshop complies with the guidelines of Development Education as described in the theoretical framework. The second aim is to describe how the Creative Writing and Illustration Workshop was organized, coordinated, and implemented at the Universidade Federal do Amazonas in Manaus (Brazil), following a task-based learning methodology, and how it succeeded in (i) promoting the creation of bridges to consolidate bilateral relations between universities; (ii) encouraging scientific collaboration with Brazilian centers that have a Spanish department; and (iii) using and creating tools that make it possible to incorporate Development Education.


Compulsory education laws oblige primary and secondary schools to give each pupil positive encouragement in, for example, social, emotional, cognitive, creative, and ethical respects. This is a fairly smooth process for most pupils, but it is not as easy to achieve with others. A pattern of pupil, home or family, and school variables turns out to be responsible for a long-term process that may lead to a pupil’s dropping out of education. A systemic approach will do much to introduce more clarity into the diagnosis, potential reduction and possible prevention of some persistent educational problems that express themselves in related phenomena, for example low school motivation and achievement; forced underachievement of high ability pupils; concentration of bullying and violent behaviour in and around some types of classes and schools; and drop-out percentages that are relatively constant across time. Such problems have a negative effect on pupils, teachers, parents, schools, and society alike. In this address, I would therefore like to clarify some of the systemic causes and processes that we have identified between specific educational and pupil characteristics. Both theory and practice can assist in developing, implementing, and checking better learning methods and coaching procedures, particularly for pupils at risk. This development approach will take time and require co-ordination, but it will result in much better processes and outcomes than we are used to. First, I will diagnose some systemic aspects of education that do not seem to optimise the learning processes and school careers of some types of pupils in particular. Second, I will specify cognitive, social, motivational, and self-regulative aspects of learning tasks and relate corresponding learning processes to relevant instructional and wider educational contexts. 
I will elaborate these theoretical notions into an educational design with systemic instructional guidelines and multilevel procedures that may improve learning processes for different types of pupils. Internet-based Information and Communication Technology, or ICT, also plays a major role here. Third, I will report on concrete developments made in prototype research and trials. The development process concerns ICT-based differentiation of learning materials and procedures, and ICT-based strategies to improve pupil development and learning. Fourth, I will focus on the experience gained in primary and secondary educational practice with respect to implementation. We can learn much from such practical experience, in particular about the conditions for developing and implementing the necessary changes in and around schools. Finally, I will propose future research. As I hope to make clear, theory-based development and implementation research can join forces with systemic innovation and differentiated assessment in educational practice, to pave the way for optimal “learning for self-regulation” for pupils, teachers, parents, schools, and society at large.


Objective
To explore the concerns, needs and knowledge of women diagnosed with Gestational Diabetes Mellitus (GDM).
Design
A qualitative study of women with GDM or a history of GDM.
Methods
Nineteen women, either pregnant and recently diagnosed with GDM or postnatal with a recent history of GDM, were recruited from outpatient diabetes care clinics. This qualitative study utilised focus groups. Participants were asked a series of open-ended questions to explore 1) current knowledge of GDM; 2) anxiety when diagnosed with GDM, and whether this changed over time; 3) understanding and managing GDM; and 4) the future impact of GDM. The data were analysed using a conventional content analysis approach.
Findings
Women experience a steep learning curve when initially diagnosed and eventually become skilled at managing their disease effectively. The use of insulin is associated with fear and guilt. Diet advice was sometimes complex and not culturally appropriate. Women appear not to be fully aware of the short or long-term consequences of a diagnosis of GDM.
Conclusions
Midwives and other health care professionals need to be cognisant of the impact of a diagnosis of GDM and give individual and culturally appropriate advice (especially with regard to diet). High-quality, evidence-based information resources need to be made available to this group of women. Future health risks and lifestyle changes need to be discussed at diagnosis to ensure women have the opportunity to improve their health.


VITULLO, Nadia Aurora Vanti. Avaliação do banco de dissertações e teses da Associação Brasileira de Antropologia: uma análise cienciométrica. 2001. 143 f. Dissertaçao (Mestrado) - Curso de Mestrado em Biblioteconomia e Ciência da Informação, Pontifícia Universidade Católica de Campinas, Campinas, 2001.