953 results for digital text


Relevance: 30.00%

Abstract:

The methods for estimating patient exposure in x-ray imaging are based on measuring the radiation incident on the patient. In digital imaging, the useful dose range of the detector is large and excessive doses may remain undetected, so real-time monitoring of radiation exposure is important. According to international recommendations, the measurement uncertainty should be lower than 7% (95% confidence level). The kerma-area product (KAP) is a quantity used for monitoring patient exposure to radiation. A field KAP meter is typically attached to an x-ray device, and it is important to recognize the effect of this measurement geometry on the response of the meter. In the tandem calibration method introduced in this study, a field KAP meter is used in its clinical position and calibrated against a reference KAP meter. This method provides a practical way to calibrate field KAP meters; however, the reference KAP meters themselves require comprehensive calibration. In the calibration laboratory it is recommended to use standard radiation qualities, but these do not fully cover the large range of clinical radiation qualities. In this work, the energy dependence of the response of different KAP meter types was examined. According to our findings, the recommended accuracy in KAP measurements is difficult to achieve with conventional KAP meters because of their strong energy dependence. The energy dependence of the response of a novel large KAP meter was found to be much lower than that of a conventional KAP meter, and the accuracy of the tandem method can be improved by using this meter type as the reference meter. A KAP meter cannot be used to determine patient radiation exposure in mammography, where part of the radiation beam is always aimed directly at the detector without attenuation by tissue. This work assessed whether pixel values from this detector area could be used to monitor the radiation beam incident on the patient. The results were congruent with the tube output calculation, the method generally used for this purpose, and the recommended accuracy can be achieved with the studied method. Radiation qualities and dose levels need to be optimized anew when other detector types are introduced. In this work, the optimal selections were examined for one direct digital detector type: the use of higher-energy radiation qualities was recommended for this device, and appropriate image quality was achieved by increasing the system's low dose level.
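
To make the tandem principle concrete, a minimal sketch follows (in Python, with invented readings; the values and the correction step are illustrative, not data or code from this work):

    # Tandem calibration: the reference KAP meter and the field KAP meter are
    # read in the same beam, with the field meter in its clinical position.
    reference_kap = 1.52    # Gy*cm^2, calibrated reference meter (invented value)
    field_reading = 1.38    # Gy*cm^2, raw field KAP meter reading (invented value)

    tandem_factor = reference_kap / field_reading   # calibration factor for the field meter

    # Subsequent clinical readings are corrected with this factor:
    patient_kap = 0.87 * tandem_factor              # corrected KAP, Gy*cm^2
    print(f"factor = {tandem_factor:.3f}, corrected KAP = {patient_kap:.3f} Gy*cm^2")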

Relevance: 30.00%

Abstract:

The loss and degradation of forest cover is a globally recognised problem, and forest fragmentation is further affecting biodiversity and ecosystem well-being in Kenya as elsewhere. This study focuses on two indigenous tropical montane forests, Ngangao and Chawia, in the Taita Hills of southeastern Kenya, and is part of the TAITA project of the Department of Geography at the University of Helsinki. The forests are studied with remote sensing and GIS methods. The main data comprise black-and-white aerial photography from 1955 and true-colour digital camera data from 2004, which are used to produce aerial mosaics of the study areas. The land cover of the study areas is analysed by visual interpretation, pixel-based supervised classification and object-oriented supervised classification, and the change in forest cover is studied with GIS methods using the visual interpretations from 1955 and 2004. Furthermore, the present state of the study forests is assessed with leaf area index and canopy closure parameters retrieved from hemispherical photographs, as well as with additional, previously collected forest health monitoring data. The canopy parameters are also compared with textural parameters from the digital aerial mosaics. The study concludes that classifying forest areas from true-colour data is not an easy task, even though the digital aerial mosaics proved to be very accurate: the best classifications are still achieved by visual interpretation, as the accuracies of the pixel-based and object-oriented supervised classifications are not satisfactory. According to the change detection, the area of indigenous woodland in both forests decreased between 1955 and 2004. In Ngangao, however, the overall woodland area grew, mainly because of plantations of exotic species. In general, the land cover of both study areas is more fragmented in 2004 than in 1955. Although the forest area has decreased, the forests seem to face a brighter future than before, owing to the increasing appreciation of the forest areas.
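
For readers unfamiliar with pixel-based supervised classification, the following sketch shows the general idea on a stand-in mosaic; the band values, class labels and classifier choice are illustrative assumptions, not the study's training data or algorithm:

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    # Hand-labelled training pixels: RGB values for two illustrative classes
    # (0 = indigenous woodland, 1 = open land); values are invented.
    X_train = np.array([[40, 70, 35], [45, 80, 38], [35, 65, 30],
                        [180, 170, 120], [190, 175, 130], [175, 160, 115]])
    y_train = np.array([0, 0, 0, 1, 1, 1])

    clf = GaussianNB().fit(X_train, y_train)

    # Classify every pixel of a (stand-in) true-colour mosaic.
    mosaic = np.random.randint(0, 256, size=(100, 100, 3))
    labels = clf.predict(mosaic.reshape(-1, 3)).reshape(100, 100)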

Relevance: 30.00%

Abstract:

Purpose - Many library automation packages are available as open-source software, comprising two modules: a staff client and an online public access catalogue (OPAC). Although the OPACs of these packages provide advanced search and retrieval of bibliographic records, none of them facilitates full-text searching, whereas most of the available open-source digital library software does support indexing and searching of full-text documents in different formats. This paper enables full-text search in the widely used open-source library automation package Koha by integrating it with two open-source digital library packages, Greenstone Digital Library Software (GSDL) and Fedora Generic Search Service (FGSS), independently. Design/methodology/approach - The implementation makes use of the Search/Retrieve via URL (SRU) feature available in Koha, GSDL and FGSS. Findings - Full-text searching in Koha is achieved by integrating either GSDL or FGSS into Koha and passing an SRU request from Koha to GSDL or FGSS. The full-text documents are indexed both in the library automation package (Koha) and in the digital library software (GSDL, FGSS). Originality/value - This is the first implementation to enable full-text search in library automation software by integrating it with digital library software.
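
The SRU exchange at the core of the integration can be pictured as follows; the endpoint URL is a placeholder and the query is invented, but the request parameters (operation, version, query, maximumRecords) are standard SRU:

    import requests

    # A Koha-side SRU client passing a searchRetrieve request to the digital
    # library back end (GSDL or FGSS); the endpoint is a placeholder.
    params = {
        "operation": "searchRetrieve",
        "version": "1.1",
        "query": "digital libraries",    # CQL query against the full-text index
        "maximumRecords": "10",
    }
    response = requests.get("http://example.org/sru", params=params, timeout=30)
    print(response.text[:500])           # the response is an XML list of records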

Relevance: 30.00%

Abstract:

We present an irreversible watermarking technique, robust to affine transform attacks, for camera, biomedical and satellite images stored as monochrome bitmap images. The approach is based on image normalisation: both watermark embedding and extraction are carried out with respect to an image normalised to meet a set of predefined moment criteria, and the normalisation procedure is invariant to affine transform attacks. The resulting scheme is suitable for public watermarking applications, where the original image is not available for watermark extraction. A direct-sequence code division multiple access approach is used to embed multibit text information in the DCT and DWT transform domains. The proposed schemes are robust against various attacks, such as Gaussian noise, shearing, scaling, rotation, flipping, affine transforms, signal processing and JPEG compression. Performance is evaluated using standard image processing metrics.
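
As a hedged sketch of the general technique (it omits the normalisation step, and the key and strength parameters are assumptions, so this is not the authors' scheme), one bit of the DS-CDMA payload can be embedded in the DCT domain like this:

    import numpy as np
    from scipy.fft import dctn, idctn

    def embed_bit(img, bit, key=42, strength=5.0):
        """Add a keyed +/- pseudo-noise pattern to the image's DCT coefficients."""
        coeffs = dctn(img.astype(float), norm="ortho")
        pn = np.random.default_rng(key).standard_normal(coeffs.shape)
        pn[0, 0] = 0.0                       # leave the DC coefficient untouched
        coeffs += strength * (1.0 if bit else -1.0) * pn
        return idctn(coeffs, norm="ortho")

    def extract_bit(img, key=42):
        """Blind extraction: correlate the DCT coefficients with the PN pattern."""
        coeffs = dctn(img.astype(float), norm="ortho")
        pn = np.random.default_rng(key).standard_normal(coeffs.shape)
        pn[0, 0] = 0.0
        return bool(np.sum(coeffs * pn) > 0)

    # Embed a '1' and recover it without the original image (public watermarking).
    marked = embed_bit(np.random.rand(64, 64) * 255, True)
    print(extract_bit(marked))               # expected: True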

Relevance: 30.00%

Abstract:

Text segmentation and localization algorithms are proposed for the born-digital image dataset. Binarization and edge detection are carried out separately on the three colour planes of the image. Connected components (CCs) obtained from the binarized image are thresholded based on their area and aspect ratio, and CCs that contain sufficient edge pixels are retained. A novel approach is presented in which the text components are represented as nodes of a graph, with each node corresponding to the centroid of an individual CC. Long edges are removed from the minimum spanning tree of this graph, and the pairwise height ratio is used to eliminate likely non-text components; a new minimum spanning tree is then created from the remaining nodes. Horizontal grouping is performed on the CCs to generate bounding boxes of text strings, and overlapping bounding boxes are removed using an overlap area threshold. Non-overlapping and minimally overlapping bounding boxes are used for text segmentation, and vertical splitting is applied to generate bounding boxes at the word level. The proposed method is applied to all the images of the test dataset, and precision, recall and H-mean values are reported for the different approaches.
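
The minimum-spanning-tree step can be sketched as follows; the toy centroids and the long-edge threshold are assumptions chosen for illustration, not the paper's parameters:

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

    # Toy centroids (x, y) of candidate text CCs: two horizontal text lines.
    centroids = np.array([[10, 12], [25, 11], [40, 13], [10, 80], [25, 82]])

    dist = squareform(pdist(centroids))          # pairwise Euclidean distances
    mst = minimum_spanning_tree(dist).toarray()  # edge weights of the MST

    edge_limit = 2.0 * np.median(mst[mst > 0])   # "long edge" cut-off (assumption)
    mst[mst > edge_limit] = 0.0                  # break long edges

    # Each remaining connected component is a candidate text group.
    n_groups, labels = connected_components(mst + mst.T, directed=False)
    print(n_groups, labels)                      # 2 groups: one per text line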

Relevance: 30.00%

Abstract:

Using scientific methods in the humanities is at the forefront of objective literary analysis. However, processing big data is particularly complex when the subject matter is qualitative rather than numerical, and large volumes of text require specialized tools to produce quantifiable data from ideas and sentiments. Our team researched the extent to which tools such as Weka and MALLET can test hypotheses about qualitative information. We examined the claim that literary commentary exists within political environments, using US periodical articles about Russian literature in the early twentieth century as a case study. These tools generated useful quantitative data on which we ran stepwise binary logistic regressions. These statistical tests enabled time series experiments using sea change and emergency models of history, as well as classification experiments with regard to author characteristics, social issues, and the sentiment expressed. Both types of experiments supported our claim to varying degrees but, more importantly, served as a definitive demonstration that digitally enhanced quantitative forms of analysis can be applied to qualitative data. Our findings set the foundation for further experiments in the emerging field of digital humanities.
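
A minimal sketch of the classification step follows, with toy term-frequency features standing in for the Weka/MALLET output; it uses a plain (non-stepwise) binary logistic regression, and all feature values and labels are invented:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Rows: articles; columns: frequencies of a few indicator terms
    # extracted by a tool such as MALLET (values invented).
    X = np.array([[3, 0, 1], [0, 2, 4], [4, 1, 0], [1, 3, 5], [5, 0, 2]])
    y = np.array([1, 0, 1, 0, 1])    # 1 = politically framed commentary (toy label)

    model = LogisticRegression().fit(X, y)
    print(model.predict_proba([[2, 1, 1]]))   # class probabilities for a new article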

Relevance: 30.00%

Abstract:

The increasing availability of large, detailed digital representations of the Earth’s surface demands the application of objective and quantitative analyses. Given recent advances in understanding the mechanisms of formation of linear bedform features in a range of environments, objective measurement of their wavelength, orientation, crest and trough positions, height and asymmetry is highly desirable. These parameters are also useful for deriving observation-based inputs to applications such as numerical modelling, surface classification and sediment transport pathway analysis. Here, we (i) adapt and extend existing techniques into a suite of semi-automatic tools that calculate the crest orientation, wavelength, height, asymmetry direction and asymmetry ratio of bedforms, and (ii) undertake sensitivity tests on synthetic data, increasingly complex seabeds and a very large (39,000 km²) aeolian dune system. The automated results are compared with traditional, manually derived measurements at each stage. The new approach successfully analyses different types of topographic data (from aeolian and marine environments) from a range of sources, processing tens of millions of data points in a semi-automated and objective manner within minutes rather than hours or days. The results show significant variability in all measurable parameters in what might otherwise be considered uniform bedform fields. For example, the dunes of the Rub’ al Khali on the Arabian peninsula are shown to deviate in their dimensions from global trends, and morphological and dune asymmetry analysis of the Rub’ al Khali suggests parts of the sand sea may be adjusting to a wind regime that has changed since their formation 100 to 10 ka BP.
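
One objective way to obtain such parameters is spectral analysis of the gridded surface. The sketch below estimates the dominant wavelength and crest-normal direction of a synthetic dune field with a 2-D FFT; it is a generic illustration under assumed grid parameters, not the paper's toolchain:

    import numpy as np

    # Synthetic dune field: 400 m wavelength, crests normal to a 0.3 rad direction.
    dx = 25.0                                    # grid spacing in metres (assumed)
    x, y = np.meshgrid(np.arange(256) * dx, np.arange(256) * dx)
    dem = 2.0 * np.sin(2 * np.pi * (x * np.cos(0.3) + y * np.sin(0.3)) / 400.0)

    spec = np.abs(np.fft.fftshift(np.fft.fft2(dem - dem.mean())))
    freqs = np.fft.fftshift(np.fft.fftfreq(256, d=dx))
    iy, ix = np.unravel_index(np.argmax(spec), spec.shape)

    fx, fy = freqs[ix], freqs[iy]
    wavelength = 1.0 / np.hypot(fx, fy)          # crest-to-crest distance, metres
    direction = np.degrees(np.arctan2(fy, fx))   # direction normal to the crests
    print(f"wavelength ~ {wavelength:.0f} m, crest-normal ~ {direction:.0f} deg")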

Relevance: 30.00%

Abstract:

This paper describes a study of digital literacy in which the researcher worked with one group of English language arts teacher candidates and one group of adolescents as they read and wrote hypertext fiction. The findings suggest that the adolescent readers/writers brought a more flexible and multiliterate approach to their digital literacy processes than the teacher candidates did.

Relevance: 30.00%

Abstract:

In this paper we discuss collaborative learning strategies based on the use of digital stories in corporate training and lifelong learning. The text begins with a concise review of the theoretical and technical foundations for the use of digital technologies in collaborative lifelong learning strategies. We also discuss whether corporate training can be improved by bringing individual audio-visual experiences into the learning process. Careful planning, scripting and production of audio-visual digital stories can help build collaborative learning spaces for adults in the context of lifelong vocational training. Our analysis concludes by emphasizing the need to trial the production and use of digital stories in corporate training, following the reference levels mentioned here, so that in the future we have more theoretical and empirical elements for validating and conceptualizing the use of digital stories in this context. Ultimately, we believe that lifelong learning can be improved by strategies that promote the production of personal audio-visual material by those involved in teaching and learning processes in organizational contexts.

Relevance: 30.00%

Abstract:

Currently, the new digital tools used in architecture are often at the service of a conception of architecture as a cultural good of consumer society. Within this neoliberal cultural frame, architects’ social function is no longer seen as the production of urban facts with a sense of duty, but as a part of the symbolic logic that rules the social production of cultural values, as defined by Veblen and developed by Baudrillard. As a result, the potential of the new digital representation tools has shifted from an instrument used to verify a built project to two main models: on the one hand, the development of purely virtual architectures configured exclusively by their symbolic value as easily reproducible artistic “images”; on the other, all those projects which, even while maintaining their attention to architecture as a built fact, base their symbolic value on the author’s image and on virtual aesthetics and logics that prevail over architecture’s materiality. Architects’ sense of duty has definitely reached a turning point.

Relevance: 30.00%

Abstract:

Previous studies of work instruction delivery for complex assembly tasks have shown that the mode and delivery method of instructions in an engineering context can influence both build time and product quality. The benefits of digital, animated instructional formats over static pictures and text-only formats have already been demonstrated. Although pictograms have found applications in relatively straightforward operations and activities, their applicability to relatively complex assembly tasks has yet to be demonstrated. This study compares animated instructions and pictograms for the assembly of an aircraft panel. Based on a series of build experiments, the work records build time as well as the number of media references in order to measure and compare build efficiency; the number of build errors and the time required to correct them are also recorded. The experiments involved five participants completing five builds over five consecutive days for each media type. Results showed that, on average, the total build time was 13.1% lower for the group using animated instructions. The benefit of animated instructions on build time was most prominent in the first three builds; by build four it had disappeared. The two groups made a similar number of instructional references over the five builds, but the pictogram users required far more references during build 1. The group using pictograms also made more errors, requiring more time for corrections during the build.

Relevance: 30.00%

Abstract:

This Ph.D. thesis proposes a speculative lens for reading Internet Art via the concept of digital debris. To do so, the research explores the idea of digital debris in Internet Art from 1993 to 2011 through a series of nine case studies. Here, digital debris are understood as words typed into search engines which then disappear, bits of obsolete code lingering on the Internet, abandoned websites, broken links, and pieces of ephemeral information circulating on the Internet which are used as material by practitioners. In this context, the thesis asks: what are digital debris? It argues that the digital debris of Internet Art represent an allegorical and entropic resistance to what art historian David Joselit calls the Epistemology of Search. The ambition of the research is to develop a language in between the agency of the artist and the autonomy of the algorithm, as a way of introducing Internet Art to a pluridisciplinary audience; hence the comparative studies unfolding throughout the thesis between Internet Art and pioneers in the recycling of waste in art, the use of instructions as a medium, and the programming of poetry. While many anthropological and ethnographic studies are concerned with the material object of the computer as debris once it becomes obsolete, very few studies have analysed waste as discarded data. The research shifts the focus from an industrial production of digital debris (such as pieces of hardware) to obsolete pieces of information in art practice. It demonstrates that illustrations of such considerations can be found, for instance, in Cory Arcangel’s work Data Diaries (2001), where QuickTime files are stolen, disassembled, and then re-used in new displays. The thesis also looks at Jodi’s approach in Jodi.org (1993) and Asdfg (1998), where websites and hyperlinks are detourned, deconstructed, and presented in abstract collages that reveal the architecture of the Internet. The research starts in a typological manner, classifying the pieces of Internet Art according to the structure at play in each work: if some online works dealing with discarded documents offer a self-contained, closed system, others nurture openness and unpredictability. The thesis foregrounds the ideas generated through the artworks and interprets how the latter are visually constructed and displayed. Not only does the research question the status of digital debris once they are incorporated into art practice, it also examines the methods by which they are retrieved, manipulated and displayed, to submit that the digital debris of Internet Art are the result of both semantic and automated processes, rendering them both an object of discourse and a technical reality. Finally, in order to frame the serendipitous, process-based nature of digital debris, the Ph.D. concludes that digital debris are entropic; in other words, they are items of language to-be, paradoxically locked in a constant state of realisation.

Relevance: 30.00%

Abstract:

Dissertation presented to the Escola Superior de Comunicação Social in partial fulfilment of the requirements for the Master's degree in Journalism.

Relevance: 30.00%

Abstract:

As an introduction to a series of articles exploring particular tools and methods for bringing together digital technology and historical research, the aim of this paper is mainly to highlight and discuss to what extent these methodological approaches can improve the analytical and interpretative capabilities available to historians. At a moment when the digital world presents us with an ever-increasing variety of tools for the extraction, analysis and visualization of large amounts of text, we thought it relevant to bring the digital closer to the vast community of academic historians. Rather than repeating the idea of a digital revolution in historical research, recurrent in the literature since the 1980s, the aim was to show the validity and usefulness of digital tools and methods as another set of highly relevant instruments that historians should consider. To this end, several case studies were used, combining the exploration of specific themes of historical knowledge with the development or discussion of digital methodologies, in order to highlight changes and challenges that, in our opinion, are already affecting historians' work, such as a greater focus on interdisciplinarity and collaborative work, and the need for the communication of historical knowledge to become more interactive.