995 results for buttons (information artifacts)
Abstract:
Inscription: Verso: New York.
Abstract:
Inscription: Verso: women's rights demonstration, Bryant Park, New York.
Abstract:
Inscription: Verso: International Women's Day march, New York.
Effectuation and its implications for socio-technical design science research in information systems
Abstract:
We study the implications of the effectuation concept for socio-technical artifact design as part of the design science research (DSR) process in information systems (IS). Effectuation logic is the opposite of causal logic: rather than focusing on the causes needed to achieve a particular effect, it focuses on the possibilities that can be achieved with extant means and resources. Viewing socio-technical IS DSR through an effectuation lens highlights the possibility of designing the future even without set goals. We suggest that effectuation may be a useful perspective for design in dynamic social contexts, leading to a more differentiated view on the instantiation of mid-range artifacts for specific local application contexts. Design science researchers can draw on this paper's conclusions to view their DSR projects through a fresh lens and to reexamine their research design and execution. The paper also offers avenues for future research to develop more concrete application possibilities of effectuation in socio-technical IS DSR and, thus, enrich the discourse.
Abstract:
Karaoke singing is a popular form of entertainment in several parts of the world. Since this genre of performance attracts amateurs, the singing often has artifacts related to scale, tempo, and synchrony. We have developed an approach to correct these artifacts using information from cross-modal multimedia streams. We first perform adaptive sampling on the user's rendition and then use the original singer's rendition, as well as the video caption highlighting information, to correct the pitch, tempo, and loudness. A method of analogies is employed to perform this correction. The basic idea is to manipulate the user's rendition so as to make it as similar as possible to the original singing. A pre-processing step that removes noise due to feedback and huffing also helps improve the quality of the user's audio. The results described in the paper show the effectiveness of this multimedia approach.
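Of the corrections the abstract mentions, loudness matching is the simplest to illustrate. The sketch below is a minimal, hypothetical illustration (not the paper's analogy-based method): scale the user's rendition so its RMS level matches the original singer's.

```python
import numpy as np

def match_loudness(user, reference, eps=1e-12):
    """Scale `user` so its RMS loudness matches `reference`.

    Both inputs are 1-D arrays of audio samples; `eps` guards
    against division by zero for a silent user track.
    """
    rms_user = np.sqrt(np.mean(user ** 2))
    rms_ref = np.sqrt(np.mean(reference ** 2))
    return user * (rms_ref / (rms_user + eps))
```

Pitch and tempo correction would require time-frequency processing (e.g. a phase vocoder), which the abstract does not detail, so only the amplitude step is sketched here.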
Abstract:
The rapid evolution and proliferation of a world-wide computerized network, the Internet, resulted in an overwhelming and constantly growing amount of publicly available data and information, a fact also verified in biomedicine. However, the lack of structure of textual data inhibits its direct processing by computational solutions. Information extraction is the text mining task that aims to automatically collect information from unstructured text data sources. The goal of the work described in this thesis was to build innovative solutions for biomedical information extraction from scientific literature, through the development of simple software artifacts for developers and biocurators, delivering faster, more accurate and more usable results. We started by tackling named entity recognition - a crucial initial task - with the development of Gimli, a machine-learning-based solution that follows an incremental approach to optimize the extracted linguistic characteristics for each concept type. Afterwards, Totum was built to harmonize concept names provided by heterogeneous systems, delivering a robust solution with improved performance results. This approach takes advantage of heterogeneous corpora to deliver cross-corpus harmonization that is not constrained to specific characteristics. Since previous solutions do not provide links to knowledge bases, Neji was built to streamline the development of complex and custom solutions for biomedical concept name recognition and normalization. This was achieved through a modular and flexible framework focused on speed and performance, integrating a large number of processing modules optimized for the biomedical domain. To offer on-demand heterogeneous biomedical concept identification, we developed BeCAS, a web application, service and widget.
We also tackled relation mining by developing TrigNER, a machine-learning-based solution for biomedical event trigger recognition, which applies an automatic algorithm to obtain the best linguistic features and model parameters for each event type. Finally, in order to assist biocurators, Egas was developed to support rapid, interactive and real-time collaborative curation of biomedical documents, through manual and automatic in-line annotation of concepts and relations. Overall, the research work presented in this thesis contributed to a more accurate update of current biomedical knowledge bases, towards improved hypothesis generation and knowledge discovery.
Abstract:
The enhanced functional sensitivity offered by ultra-high field imaging may significantly benefit simultaneous EEG-fMRI studies, but the concurrent increases in artifact contamination can strongly compromise EEG data quality. In the present study, we focus on EEG artifacts created by head motion in the static B0 field. A novel approach for motion artifact detection is proposed, based on a simple modification of a commercial EEG cap, in which four electrodes are non-permanently adapted to record only magnetic induction effects. Simultaneous EEG-fMRI data were acquired with this setup, at 7T, from healthy volunteers undergoing a reversing-checkerboard visual stimulation paradigm. Data analysis assisted by the motion sensors revealed that, after gradient artifact correction, EEG signal variance was largely dominated by pulse artifacts (81-93%), but contributions from spontaneous motion (4-13%) were still comparable to or even larger than those of actual neuronal activity (3-9%). Multiple approaches were tested to determine the most effective procedure for denoising EEG data incorporating motion sensor information. Optimal results were obtained by applying an initial pulse artifact correction step (AAS-based), followed by motion artifact correction (based on the motion sensors) and ICA denoising. On average, motion artifact correction (after AAS) yielded a 61% reduction in signal power and a 62% increase in VEP trial-by-trial consistency. Combined with ICA, these improvements rose to a 74% power reduction and an 86% increase in trial consistency. Overall, the improvements achieved were clearly apparent at single-subject and single-trial levels, and set an encouraging quality mark for simultaneous EEG-fMRI at ultra-high field.
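The pulse-artifact step the abstract names, average artifact subtraction (AAS), builds a sliding-window template of the artifact and subtracts it from each occurrence. A schematic sketch of the core idea, assuming a single 1-D EEG channel and known pulse onset sample indices (both hypothetical simplifications of a real implementation):

```python
import numpy as np

def aas_correct(signal, onsets, epoch_len, n_avg=10):
    """Average artifact subtraction (schematic).

    For each pulse epoch, build an artifact template from the mean
    of the ~n_avg surrounding epochs and subtract it; neural activity,
    being uncorrelated with the pulse, averages out of the template.
    """
    corrected = signal.copy()
    for i, onset in enumerate(onsets):
        # Window of neighbouring epochs used to build the template
        lo = max(0, i - n_avg // 2)
        hi = min(len(onsets), lo + n_avg)
        template = np.mean(
            [signal[o:o + epoch_len] for o in onsets[lo:hi]
             if o + epoch_len <= len(signal)],
            axis=0)
        if onset + epoch_len <= len(signal):
            corrected[onset:onset + epoch_len] -= template
    return corrected
```

A production pipeline would additionally align epochs to the cardiac cycle and handle variable R-R intervals; this sketch only conveys the template-subtraction principle.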
Abstract:
How does the manipulation of visual representations play a role in the practices of generating, evolving and exchanging knowledge? The role of visual representation in mediating knowledge work is explored in a study of design work of an architectural practice, Edward Cullinan Architects. The intensity of interactions with visual representations in the everyday activities on design projects is immediately striking. Through a discussion of observed design episodes, two ways are articulated in which visual representations act as 'artefacts of knowing'. As communication media they are symbolic representations, rich in meaning, through which ideas are articulated, developed and exchanged. Furthermore, as tangible artefacts they constitute material entities with which to interact and thereby develop knowledge. The communicative and interactive properties of visual representations constitute them as central elements of knowledge work. The paper explores emblematic knowledge practices supported by visual representation and concludes by pinpointing avenues for further research.
Abstract:
Reasoning under uncertainty is a human capacity that is necessary, and often hidden, in software systems. Argumentation theory and logic make non-monotonic information explicit in order to enable automatic forms of reasoning under uncertainty. In human organizations, Distributed Cognition and Activity Theory explain how artifacts are fundamental to all cognitive processes. In this thesis, we therefore seek to understand the use of cognitive artifacts in a new argumentation framework for an agent-based artificial society.
Issues of spectral quality in clinical 1H-magnetic resonance spectroscopy and a gallery of artifacts
Abstract:
In spite of the fact that magnetic resonance spectroscopy (MRS) is applied as a clinical tool in non-specialized institutions, and that semi-automatic acquisition and processing tools can be used to produce quantitative information from MRS exams without expert input, issues of spectral quality and quality assessment are neglected in the MR spectroscopy literature. Even worse, there is no consensus among experts on concepts or detailed criteria for the quality assessment of MR spectra. Furthermore, artifacts are not at all conspicuous in MRS and can easily be taken for true, interpretable features. This article aims to increase interest in issues of spectral quality and quality assessment, to start a larger debate on generally accepted criteria that spectra must fulfil to be clinically and scientifically acceptable, and to provide a sample gallery of artifacts that can be used to raise awareness of potential pitfalls in MRS.
Abstract:
Nurses prepare knowledge representations, or summaries of patient clinical data, each shift. These knowledge representations serve multiple purposes, including support of working memory, workload organization and prioritization, critical thinking, and reflection. This summary is integral to internal knowledge representations, working memory, and decision-making. Study of this nurse knowledge representation resulted in the development of a taxonomy of knowledge representations necessary to nursing practice. This paper describes the methods used to elicit the knowledge representations and structures necessary for the work of clinical nurses, describes the development of a taxonomy of this knowledge representation, and discusses translation of this methodology to the cognitive artifacts of other disciplines. Understanding the development and purpose of practitioners' knowledge representations provides important direction to informaticists seeking to create information technology alternatives. The outcome of this paper is a suggested process template for the transition of cognitive artifacts to an information system.
A repository for integration of software artifacts with dependency resolution and federation support
Abstract:
While developing new IT products, reusability of existing components is a key aspect that can considerably improve the success rate. This has become even more important with the rise of the open source paradigm. However, integrating different products and technologies is not always an easy task. Different communities employ different standards and tools, and most of the time it is not clear which dependencies a particular piece of software has. This is exacerbated by the transitive nature of these dependencies, making component integration a complicated affair. To help reduce this complexity, we propose a model-based repository capable of automatically resolving the required dependencies. This repository needs to be expandable, so that new constraints can be analyzed, and must also have federation support for integration with other sources of artifacts. The solution we propose achieves this by working with OSGi components and using OSGi itself.
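The transitive-dependency problem the abstract highlights can be pictured with a small sketch (the component names and graph structure are hypothetical, not the paper's model): resolving a component amounts to computing the transitive closure of its declared dependencies.

```python
def resolve(component, deps, _seen=None):
    """Return the full (transitive) dependency set of a component.

    `deps` maps each component to the components it directly depends
    on; cycles are tolerated by tracking already-visited nodes.
    """
    seen = _seen if _seen is not None else set()
    for d in deps.get(component, ()):
        if d not in seen:
            seen.add(d)
            resolve(d, deps, seen)
    return seen

# Hypothetical example: installing 'app' also pulls in the parser
# that 'http' depends on, even though 'app' never declares it.
deps = {
    "app": ["http", "logging"],
    "http": ["parser", "logging"],
    "parser": [],
    "logging": [],
}
```

A real OSGi resolver additionally matches version ranges and exported packages; the closure computation above is only the skeleton of that process.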
Abstract:
We investigate the digitalization and security of Bulgarian and Indian cultural artifacts in a multimedia archive. In the paper we describe the project implementation and the methods for intellectual property protection that result from bilateral cultural and scientific cooperation between researchers in India and Bulgaria.
Abstract:
Within the framework of heritage preservation, 3D scanning and modeling for heritage documentation has increased significantly in recent years, mainly due to the evolution of laser- and image-based techniques, modeling software, powerful computers and virtual reality. 3D laser acquisition constitutes a real development opportunity for 3D modeling previously based on theoretical data. The representation of the object information relies on knowledge of its historical and theoretical context to reconstruct its previous states a posteriori. This project proposes an approach to data extraction based on architectural knowledge and laser survey measurements, together leading to 3D reconstruction. The Khmer objects used in the experiments are exhibited at the Guimet museum in Paris. This digital modeling meets the need for exploitable models for simulation projects, prototyping, exhibitions and the promotion of cultural tourism, and particularly for archiving against any likely disaster and as a supporting tool for the formulation of the virtual museum concept.