949 results for user generated content
Resumo:
Image processing has been a challenging and multidisciplinary research area for decades, with continuing improvements in its various branches, especially medical imaging. The healthcare industry has benefited greatly from advances in image processing techniques for the efficient management of large volumes of clinical data. The popularity and growth of the image processing field attract researchers from many disciplines, including Computer Science and Medical Science, owing to its applicability to the real world. At the same time, Computer Science is becoming an important driving force for the further development of the medical sciences. The objective of this study is to make use of basic concepts in medical image processing and to develop methods and tools that assist clinicians. The work is motivated by clinical applications of digital mammograms and placental sonograms, and uses real medical images to propose a method intended to assist radiologists in the diagnostic process. The study covers two domains of pattern recognition: classification and content-based retrieval. Mammogram images of breast cancer patients and placental images are used for this study. Cancer is devastating to the human race. Accurate characterisation of images using simple, user-friendly computer-aided diagnosis techniques helps radiologists detect cancers at an early stage. Breast cancer, a leading cause of cancer death in women, can be fully cured if detected at an early stage. Studies of placental characteristics and abnormalities are important in foetal monitoring. The diagnostic variability in sonographic examination of the placenta can be reduced by detailed placental texture analysis focusing on placental grading. The work aims at early breast cancer detection and placental maturity analysis. This dissertation is a stepping stone in combining various application domains of healthcare and technology.
Resumo:
Retrieval of similar anatomical structures of brain MR images across patients would help the expert in the diagnosis of diseases. In this paper, a modified local binary pattern with ternary encoding, called the modified local ternary pattern (MOD-LTP), is introduced, which is more discriminant and less sensitive to noise in near-uniform regions, to locate slices belonging to the same level from the brain MR image database. The ternary encoding depends on a threshold, which is either user-specified or calculated locally based on the variance of the pixel intensities in each window. The variance-based local threshold makes the MOD-LTP more robust to noise and global illumination changes. The retrieval performance is shown to improve by taking region-based moment features of MOD-LTP and iteratively reweighting these moment features based on the user's feedback. The average rank obtained using iterated and weighted moment features of MOD-LTP with a local variance-based threshold is one to two times better than rotation-invariant LBP (Unay, D., Ekin, A. and Jasinschi, R.S. (2010) Local structure-based region-of-interest retrieval in brain MR images. IEEE Trans. Inf. Technol. Biomed., 14, 897–903.) in retrieving the first 10 relevant images.
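As an illustration of the kind of operator the abstract describes, the sketch below computes a local ternary pattern with a variance-based local threshold. The neighbourhood layout, the scaling factor k and the upper/lower binary encoding are our assumptions for illustration and may differ from the MOD-LTP definition in the paper.

```python
import numpy as np

def local_ternary_pattern(img, k=0.5):
    """Minimal sketch of a local ternary pattern with a variance-based
    local threshold (illustrative, not the paper's exact MOD-LTP).

    Each 3x3 neighbourhood is encoded against its centre pixel: a neighbour
    counts as +1 if it exceeds the centre by more than t, -1 if it falls
    below it by more than t, and 0 otherwise, where t is derived from the
    local standard deviation. The ternary code is returned as the usual
    pair of binary codes (upper/lower patterns)."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    upper = np.zeros((h, w), dtype=np.uint8)
    lower = np.zeros((h, w), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = img[y - 1:y + 2, x - 1:x + 2]
            t = k * window.std()          # local, variance-based threshold
            c = img[y, x]
            up, low = 0, 0
            for bit, (dy, dx) in enumerate(offsets):
                n = img[y + dy, x + dx]
                if n >= c + t:
                    up |= 1 << bit        # clearly brighter neighbour -> +1
                elif n <= c - t:
                    low |= 1 << bit       # clearly darker neighbour -> -1
            upper[y, x] = up
            lower[y, x] = low
    return upper, lower
```

Because the threshold adapts to the local variance, near-uniform (low-variance) windows require only a small intensity difference to leave the "0" band, while noisy or textured windows require a larger one, which is what gives the descriptor its robustness in near-uniform regions.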
Resumo:
More Open Education Resources (OER) and learning environments are being created and starting to mature, yet there are a number of barriers to learning and to creator participation. One often-overlooked barrier that has received little attention, especially within OERs, is user experience (UX). UX is the way a person feels about using a product, system or service. We are creatures with emotional needs, and in the rush to get great content open and available, usability, the wow factor and good design principles sometimes get left by the wayside. I will demonstrate ways to think about UX for your OER and learning environments and why this is an important factor in helping engage learners with our educational materials. 'The real payoff comes when we can make that remarkability last. When we can make people continually feel our work is worthy of discussion. When—for weeks, months, maybe even years—the people who engage with our work continue to sing its praises to everybody they meet' (Jared Spool in Walter, A. Designing for Emotion). Walter, A. (2011) Designing for Emotion, A Book Apart. http://www.abookapart.com/products/designing-for-emotion
Resumo:
Abstract 1: Social networks such as Twitter are often used for disseminating and collecting information during natural disasters. The potential for their use in disaster management has been acknowledged. However, a more nuanced understanding of the communications that take place on social networks is required to integrate this information more effectively into the processes of disaster management. The type and value of information shared should be assessed to determine the benefits and issues, with credibility and reliability as known concerns. Mapping tweets in relation to the modelled stages of a disaster can be a useful evaluation for determining the benefits and drawbacks of using data from social networks, such as Twitter, in disaster management. A thematic analysis of tweets' content, language and tone during the UK storms and floods of 2013/14 was conducted. Manual scripting was used to determine the official sequence of events and to classify the stages of the disaster into the phases of the Disaster Management Lifecycle, producing a timeline. Twenty-five topics discussed on Twitter emerged, and three key types of tweets, based on language and tone, were identified. The timeline represents the events of the disaster, according to the Met Office reports, classed into B. Faulkner's Disaster Management Lifecycle framework. Context is provided when the analysed tweets are observed against the timeline. This illustrates a potential basis and benefit for mapping tweets into the Disaster Management Lifecycle phases. Comparing the number of tweets submitted each month with the timeline suggests that users tweet more as an event heightens and persists. Furthermore, users generally express greater emotion and urgency in their tweets. This paper concludes that the thematic analysis of content on social networks, such as Twitter, can be useful in gaining additional perspectives for disaster management. It demonstrates that mapping tweets into the phases of a Disaster Management Lifecycle model can have benefits in the recovery phase, not just the response phase, to potentially improve future policies and activities.
Abstract 2: The current execution of privacy policies, as a mode of communicating information to users, is unsatisfactory. Social networking sites (SNS) exemplify this issue, attracting growing concerns regarding their use of personal data and its effect on user privacy. This demonstrates the need for more informative policies. However, SNS lack the incentives required to improve policies, which is exacerbated by the difficulties of creating a policy that is both concise and compliant. Standardization addresses many of these issues, providing benefits for users and SNS, although it is only possible if policies share attributes which can be standardized. This investigation used thematic analysis and cross-document structure theory to assess the similarity of attributes between the privacy policies (as available in August 2014) of the six most frequently visited SNS globally. Using the Jaccard similarity coefficient, two types of attribute were measured: the clauses used by SNS and the coverage of forty recommendations made by the UK Information Commissioner's Office. Analysis showed that whilst similarity in the clauses used was low, similarity in the recommendations covered was high, indicating that SNS use different clauses but convey similar information. The analysis also showed that low similarity in the clauses was largely due to differences in semantics, elaboration and functionality between SNS. Therefore, this paper proposes that the policies of SNS already share attributes, indicating the feasibility of standardization, and five recommendations are made to begin facilitating this, based on the findings of the investigation.
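The clause-level comparison rests on the Jaccard similarity coefficient, i.e. the size of the intersection of two attribute sets divided by the size of their union. A minimal sketch follows; the example policies and clause names are hypothetical and not taken from the studied SNS.

```python
def jaccard(a, b):
    """Jaccard similarity coefficient of two attribute sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical clause sets extracted from two privacy policies
policy_a = {"data collection", "cookies", "third-party sharing", "retention"}
policy_b = {"data collection", "cookies", "advertising", "retention"}
print(jaccard(policy_a, policy_b))  # 0.6: 3 shared clauses out of 5 distinct
```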
Resumo:
Efficient lighting consists of providing light to a given space using resources that consume little energy, produce comfort for those who use them, and reduce the environmental cost of producing them. This type of lighting aims to generate added value both for the user and for the investor who owns it, amortising the investment through the savings generated by adopting new technologies and producing benefits over time thanks to its balance with the environment and its extended useful life. The aim of the present study is to provide an efficient lighting model that includes the basic parameters a lighting system needs in order to be considered of added value and environmentally friendly, demonstrating its advantages and disadvantages at the moment of selection on the basis of criteria that include economic, technical and environmental aspects.
Resumo:
The incursion of the Internet has created new forms of information and communication. As a result, today's generation is culturally socialized under the influence of information and communication technologies in their various forms. This has generated a series of characteristics of social and cultural behaviour derived from didactic, academic or recreational use. Nevertheless, the use of the Internet from an early age is not only a useful educational tool; it can also constitute a great danger when it is used to access content unsuitable for children's adaptive development. Accordingly, it is necessary to study the legal regulation of Internet content and to evaluate how such regulation may affect rights. It is also important to study the impact and use of this technological tool at the level of the family unit, to better understand how to suggest appropriate social mechanisms for the constructive use of the Internet. The present investigation addresses these two aspects with the purpose of uniting the legal and the social perspective in a joint analysis that allows a more integral vision of this problem of great interest at the global level.
Resumo:
In Colombia, the media, more specifically the country's most important national radio networks, have not lagged behind and for some years have opted to join Twitter, learning as they go. The Twitter accounts of Bluradio, Caracol Radio and RCN Radio, analysed together with interviews with those in charge of them and the broadcasts of the selected news programmes, show that social networks remain a challenge today. One of those challenges is to find the media's true purpose on Twitter, so that it does not consist only of active listening in search of breaking news. To achieve communicative legitimacy on the web 2.0, it is necessary to adapt to the participatory culture of social networks; otherwise, one is condemned to obtain merely another channel for disseminating news.
Resumo:
Explanations are an important by-product of medical decision-support activities, as they have been shown to favour compliance and correct treatment performance. To achieve this purpose, these texts should have strong argumentation content and should adapt to the emotional as well as the rational attitudes of the Addressee. This paper describes how Rhetorical Sentence Planning can contribute to this aim: a rule-based revision of the discourse plan is introduced between Text Planning and Linguistic Realization, and exploits knowledge about the user's personality and emotions and about the potential impact of domain items on user compliance and memory recall. The proposed approach originates from analytical and empirical evaluation studies of computer-generated explanation texts in the domain of drug prescription. This work was partially supported by a British-Italian Collaboration in Research and Higher Education Project, which involved the Universities of Reading and of Bari, in 1996.
Resumo:
In this paper we describe how we generated written explanations for 'indirect users' of a knowledge-based system in the domain of drug prescription. We call 'indirect users' the intended recipients of explanations, to distinguish them from the prescriber (the 'direct' user) who interacts with the system. The Explanation Generator was designed after several studies of indirect users' information needs and physicians' explanatory attitudes in this domain. It integrates text planning techniques with ATN-based surface generation. A double modeling component enables the information content, order and style to be adapted to the indirect user to whom the explanation is addressed. Several examples of computer-generated texts are provided and contrasted with the physicians' explanations to discuss the advantages and limits of the adopted approach.
Resumo:
The potential of the τ-ω model for retrieving the volumetric moisture content of bare and vegetated soil from dual-polarisation passive microwave data acquired at single and multiple angles is tested. Measurement error and several additional sources of uncertainty will affect the theoretical retrieval accuracy. These include uncertainty in the soil temperature, in the vegetation structure and consequently its microwave single-scattering albedo, and in the soil microwave emissivity arising from its roughness. To test the effects of these uncertainties for simple homogeneous scenes, we attempt to retrieve soil moisture from a number of simulated microwave brightness-temperature datasets generated using the τ-ω model. The uncertainties for each influence are estimated and applied to curves generated for typical scenarios, and an inverse model is used to retrieve the soil moisture content, vegetation optical depth and soil temperature. The effect of each influence on the theoretical soil moisture retrieval limit is explored, the likelihood of each sensor configuration meeting user requirements is assessed, and the most effective means of improving moisture retrieval is indicated.
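For context, the τ-ω model is the zeroth-order radiative-transfer expression for the brightness temperature of a vegetated soil surface. The sketch below implements that forward equation and a toy inversion; the simplifying assumptions (canopy temperature equal to soil temperature, angle-independent emissivity, known single-scattering albedo, noise-free observations) and all parameter values are ours, not the paper's. A full moisture retrieval would additionally need a dielectric and roughness model linking soil emissivity to volumetric moisture.

```python
import numpy as np
from scipy.optimize import least_squares

def tau_omega_tb(t_soil, t_canopy, emissivity, tau, omega, theta_deg):
    """Zeroth-order tau-omega brightness temperature for one polarisation."""
    gamma = np.exp(-tau / np.cos(np.radians(theta_deg)))   # canopy transmissivity
    r = 1.0 - emissivity                                   # soil reflectivity
    return (t_soil * emissivity * gamma                    # attenuated soil emission
            + t_canopy * (1.0 - omega) * (1.0 - gamma)     # upward canopy emission
            + t_canopy * (1.0 - omega) * (1.0 - gamma) * r * gamma)  # canopy emission reflected off the soil

# Simulate dual-polarisation, multi-angle observations for a "true" scene.
angles = np.array([30.0, 40.0, 50.0])
omega = 0.05                                               # assumed known here
true_e_h, true_e_v, true_tau, true_t = 0.80, 0.88, 0.25, 295.0
tb_obs = np.concatenate([
    tau_omega_tb(true_t, true_t, true_e_h, true_tau, omega, angles),
    tau_omega_tb(true_t, true_t, true_e_v, true_tau, omega, angles),
])

def residuals(p):
    e_h, e_v, tau, t = p
    model = np.concatenate([
        tau_omega_tb(t, t, e_h, tau, omega, angles),
        tau_omega_tb(t, t, e_v, tau, omega, angles),
    ])
    return model - tb_obs

fit = least_squares(residuals, x0=[0.7, 0.7, 0.1, 290.0],
                    bounds=([0.4, 0.4, 0.0, 260.0], [1.0, 1.0, 1.0, 320.0]))
print(fit.x)   # emissivities, optical depth and temperature for the noise-free case
```

Adding noise to tb_obs, or perturbing omega and the emissivity-roughness relation, reproduces the kind of sensitivity analysis the abstract describes: each perturbation propagates into the retrieved moisture-related parameters and bounds the theoretical retrieval accuracy.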
Resumo:
Context: Learning can be regarded as knowledge construction in which prior knowledge and experience serve as the basis for learners to expand their knowledge base. Such a process of knowledge construction has to take place continuously in order to enhance the learners' competence in a competitive working environment. As information consumers, individual users demand personalised information provision that meets their own specific purposes, goals and expectations. Objectives: The current methods in requirements engineering are capable of modelling the common user's behaviour in the domain of knowledge construction. The users' requirements can be represented as a case in the defined structure, which can be reasoned over to enable requirements analysis. Such analysis needs to be enhanced so that personalised information provision can be tackled and modelled. However, there is a lack of suitable modelling methods to achieve this end. This paper presents a new ontological method for capturing an individual user's requirements and transforming them into personalised information provision specifications, so that the right information can be provided to the right user for the right purpose. Method: An experiment was conducted based on the qualitative method. A medium-sized group of users participated to validate the method and its techniques, i.e. articulate, map, configure, and learning content. The results were used as feedback for improvement. Result: The research work has produced an ontology model with a set of techniques which support the functions of profiling users' requirements, reasoning over requirements patterns, generating workflows from norms, and formulating information provision specifications. Conclusion: The current requirements engineering approaches provide the methodical capability for developing solutions. Our research outcome, i.e. the ontology model with its techniques, can further enhance the RE approaches for modelling individual users' needs and discovering users' requirements.
Resumo:
A dataset of 1,846,990 completed lactation records was created using milk recording data from 8,967 commercial dairy farms in the United Kingdom over a five-year period. Herd-specific lactation curves describing levels of milk, fat and protein by lactation number and month of calving were generated for each farm. The actual milk yield and protein proportion at the first milk recording of individual cow lactations were compared with the levels taken from the lactation curves. Logistic regression analysis showed that cows producing milk with a lower percentage of protein than average had a significantly lower probability of being in-calf at 100 days post calving and a significantly higher probability of being culled at the end of lactation. The culling rates derived from the studied database demonstrate the current high wastage rate of commercial dairy cows. Much of this wastage is due to involuntary culling as a result of reproductive failure.
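A minimal sketch of the kind of logistic regression reported: the binary outcome "in-calf by 100 days post calving" is regressed on the deviation of the first-recording protein percentage from the herd lactation curve. The data, variable names and coefficient values below are simulated and illustrative, not taken from the study's database.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
# Deviation of first-recording protein % from the herd-specific lactation curve
protein_dev = rng.normal(0.0, 0.3, n)
# Assumed true relationship: lower protein -> lower odds of being in-calf by day 100
logit = -0.2 + 1.5 * protein_dev
in_calf_100d = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(protein_dev)
model = sm.Logit(in_calf_100d, X).fit(disp=False)
print(model.params)   # a positive slope on protein_dev mirrors the reported direction of effect
```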
Resumo:
This paper describes a framework architecture for the automated re-purposing and efficient delivery of multimedia content stored in CMSs. It deploys specifically designed templates, as well as adaptation rules based on a hierarchy of profiles, to accommodate user, device and network requirements invoked as constraints in the adaptation process. The user profile provides information in accordance with the opt-in principle, while the device and network profiles provide operational constraints such as resolution and bandwidth limitations. The profile hierarchy ensures that the adaptation privileges the user's preferences. As part of the adaptation, we took into account support for users' special needs, and therefore adopted a template-based approach that could simplify the adaptation process by integrating accessibility-by-design into the templates.
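To make the profile hierarchy concrete, here is a hypothetical sketch of how user preferences can be honoured first and then clamped by the operational limits of the device and network profiles. The field names and the simple "tightest constraint wins" rule are our illustration, not the framework's actual rule language.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Profile:
    max_width: Optional[int] = None          # pixels
    max_bitrate_kbps: Optional[int] = None
    captions: Optional[bool] = None          # accessibility preference

def tightest(*values):
    """Return the most restrictive of the constraints that are actually set."""
    vals = [v for v in values if v is not None]
    return min(vals) if vals else None

def resolve(user: Profile, device: Profile, network: Profile) -> Profile:
    """Apply the user's preferences first, then clamp them by the tighter of
    the device and network operational constraints."""
    return Profile(
        max_width=tightest(user.max_width, device.max_width),
        max_bitrate_kbps=tightest(user.max_bitrate_kbps,
                                  device.max_bitrate_kbps,
                                  network.max_bitrate_kbps),
        captions=user.captions if user.captions is not None else False,
    )

# The user's preferred width is honoured only up to the device's limit,
# and the bitrate is capped by the current network profile.
print(resolve(Profile(max_width=1280, captions=True),
              Profile(max_width=1024, max_bitrate_kbps=4000),
              Profile(max_bitrate_kbps=1500)))
```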
Resumo:
Information provision that addresses changing requirements can best be supported by content management. Current information technology enables information to be stored in, and provided from, various distributed sources. Identifying and retrieving relevant information requires effective mechanisms for information discovery and assembly. This paper presents a method which enables the design of such mechanisms, with a set of techniques for articulating and profiling users' requirements, formulating information provision specifications, realising management of information content in repositories, and facilitating dynamic responses to the user's requirements during the process of knowledge construction. These functions are represented in an ontology which integrates the capabilities of the mechanisms. The ontological modelling in this paper adopts semiotic principles with embedded norms to ensure a coherent course of action is represented in these mechanisms.