Abstract:
Radiocarbon dating has been used infrequently as a chronological tool in Anglo-Saxon archaeology, primarily because the uncertainty of calibrated dates offers little advantage over traditional archaeological dating in this period. Recent advances in Bayesian methodology, in conjunction with high-precision 14C dating, have nevertheless created the possibility of both testing and refining the established Anglo-Saxon chronologies based on artifact typology. The calibration process within such a confined age range, however, relies heavily on the structural accuracy of the calibration curve. We have previously reported decadal measurements on a section of the Irish oak chronology for the period AD 495–725 (McCormac et al. 2004). In this paper, we present decadal measurements for the periods AD 395–485 and AD 735–805, which extend the original calibration set.
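The comparison of a measured 14C age against a decadal calibration curve can be sketched as follows. This is a minimal Python illustration with invented curve values (not the McCormac et al. data) and a simple Gaussian likelihood rather than a full Bayesian treatment:

```python
import math

# Hypothetical decadal calibration curve: calendar year AD -> (14C age BP, curve error).
# Values are illustrative only, not taken from McCormac et al. (2004).
curve = {
    400: (1680, 12), 410: (1672, 12), 420: (1665, 11),
    430: (1650, 11), 440: (1644, 10), 450: (1638, 10),
    460: (1629, 10), 470: (1621, 11), 480: (1615, 11),
}

def calibrate(measured_age, measured_err):
    """Return a normalized likelihood for each decadal calendar year
    given a measured 14C age (BP) with its 1-sigma error."""
    weights = {}
    for year, (curve_age, curve_err) in curve.items():
        # Combined variance of measurement and calibration curve.
        sigma2 = measured_err**2 + curve_err**2
        # Gaussian probability that the measurement matches the curve here.
        weights[year] = math.exp(-(measured_age - curve_age)**2 / (2 * sigma2))
    total = sum(weights.values())
    return {year: w / total for year, w in weights.items()}

probs = calibrate(1640, 15)
best = max(probs, key=probs.get)
```

Because the curve wiggles within such a confined range, the resulting probability mass is spread across several decades rather than concentrated at one, which is why structural accuracy of the curve matters so much here.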
Abstract:
For any proposed software project, once the software requirements specification has been established, requirements changes may entail not only a modification of the specification itself but also a series of modifications to all existing artifacts produced during development. Effective and flexible management of requirements changes is therefore necessary. In this paper, we present an approach to managing requirements changes based on Booth's negotiation-style framework for belief revision. Informally, we treat the current requirements specification as a belief set about the system-to-be, and a requirements change request as new information about the same system-to-be. The process of executing the change is then a process of revising beliefs about the system-to-be. We design a family of belief negotiation models appropriate for different processes of requirements revision, including the setting in which the change request is fully accepted, the setting in which the current requirements specification is fully preserved, and the setting in which the current specification and the change request reach a compromise. In particular, the prioritization of requirements plays an important role in reaching an agreement in each belief negotiation model designed in this paper.
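A drastically simplified sketch of priority-driven compromise between a specification and a change request might look like the following. This is a toy illustration of the idea only, not Booth's formal negotiation framework; the requirement names and priorities are invented:

```python
# Toy sketch of prioritized requirements revision: when the change request
# conflicts with the current specification, the requirement with higher
# priority survives (a compromise between fully accepting the request and
# fully preserving the current specification).

def revise(current, request, conflicts):
    """current, request: dicts mapping requirement -> priority (higher wins).
    conflicts: set of frozensets naming mutually inconsistent requirements.
    Returns the revised specification as a dict."""
    merged = dict(current)
    merged.update(request)
    for pair in conflicts:
        a, b = tuple(pair)
        if a in merged and b in merged:
            # Drop the lower-priority side of the conflict.
            loser = a if merged[a] < merged[b] else b
            del merged[loser]
    return merged

spec = {"login-required": 3, "guest-checkout": 1}
change = {"no-anonymous-access": 2}
clash = {frozenset({"guest-checkout", "no-anonymous-access"})}
revised = revise(spec, change, clash)
```

Setting all request priorities above (or below) every current priority recovers the two extreme settings described in the abstract: the request fully accepted, or the current specification fully preserved.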
Abstract:
Studies concerning the physiological significance of Ca2+ sparks often depend on the detection and measurement of large populations of events in noisy microscopy images. Automated detection methods have been developed to quickly and objectively distinguish potential sparks from noise artifacts. However, previously described algorithms are not suited to the reliable detection of sparks in images where the local baseline fluorescence and noise properties can vary significantly, and risk introducing additional bias when applied to such data sets. Here, we describe a new, conceptually straightforward approach to spark detection in linescans that addresses this issue by combining variance stabilization with local baseline subtraction. We also show that in addition to greatly increasing the range of images in which sparks can be automatically detected, the use of a more accurate noise model enables our algorithm to achieve similar detection sensitivities with fewer false positives than previous approaches when applied both to synthetic and experimental data sets. We propose, therefore, that it might be a useful tool for improving the reliability and objectivity of spark analysis in general, and describe how it might be further optimized for specific applications.
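The two core steps, variance stabilization followed by local baseline subtraction, can be illustrated with a minimal sketch. An Anscombe-style square-root transform and a running-median baseline are one plausible realization of the approach; the window and threshold values below are illustrative, not the paper's:

```python
import numpy as np

def detect_sparks(linescan, window=51, nsigma=3.8):
    """Minimal sketch: stabilize Poisson-like variance with a square-root
    (Anscombe-style) transform, subtract a running-median local baseline,
    then threshold in units of the robustly estimated residual noise SD."""
    stabilized = 2.0 * np.sqrt(np.asarray(linescan, float) + 3.0 / 8.0)
    # Local baseline: running median over a sliding window.
    half = window // 2
    padded = np.pad(stabilized, half, mode="edge")
    baseline = np.array([np.median(padded[i:i + window])
                         for i in range(len(stabilized))])
    residual = stabilized - baseline
    # Robust noise estimate (median absolute deviation, Gaussian-scaled),
    # insensitive to the sparks themselves.
    noise_sd = 1.4826 * np.median(np.abs(residual - np.median(residual)))
    return np.flatnonzero(residual > nsigma * noise_sd)

rng = np.random.default_rng(0)
trace = rng.poisson(20, 400).astype(float)
trace[200:205] += 40            # inject one synthetic "spark"
hits = detect_sparks(trace)
```

Because both the baseline and the noise estimate are computed locally and robustly, the same threshold remains meaningful even when baseline fluorescence and noise vary along the scan, which is the failure mode of global-threshold detectors.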
Abstract:
We consider the stimulated Raman transition between two long-lived states via multiple intermediate states, such as between hyperfine ground states in the alkali-metal atoms. We present a concise treatment of the general, multilevel, off-resonant case, and we show how the light shift emerges naturally in this approach. We illustrate our results by application to alkali-metal atoms and we make specific reference to cesium. We comment on some artifacts, due solely to the geometrical overlap of states, which are relevant to existing experiments.
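The multilevel sums involved can be illustrated with a toy calculation. The coupling strengths and detunings below are invented, not cesium matrix elements; the formulas are the standard leading-order expressions for the two-photon Rabi frequency and the AC Stark (light) shift, summed over intermediate states:

```python
# Hypothetical couplings; this only illustrates how the multilevel sums
# are formed, with made-up numbers (not cesium matrix elements).
intermediate = [
    # (Rabi frequency of leg 1, Rabi frequency of leg 2, detuning), rad/s
    (2e6, 1.5e6, 1.0e9),
    (1e6, 2.5e6, 1.2e9),
]

# Effective two-photon Rabi frequency: sum over intermediate states of the
# product of the two legs, each term weighted by 1/(2 * detuning).
omega_eff = sum(o1 * o2 / (2 * d) for o1, o2, d in intermediate)

# Light shift of each ground state at leading order: each leg shifts its
# own ground state by |omega|^2 / (4 * detuning), summed over states.
shift_g1 = sum(o1**2 / (4 * d) for o1, o2, d in intermediate)
shift_g2 = sum(o2**2 / (4 * d) for o1, o2, d in intermediate)

# The differential light shift is what detunes the Raman resonance.
delta_ls = shift_g1 - shift_g2
```

The point of the multilevel treatment is that these sums do not factor: two intermediate states with different detunings contribute with different weights, so neither the effective coupling nor the light shift can be read off from a single two-level formula.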
Abstract:
Artifact removal from physiological signals is an essential component of the biosignal processing pipeline. The need for powerful and robust methods for this process has become particularly acute as healthcare technology deployment undergoes a transition from the current hospital-centric setting toward a wearable and ubiquitous monitoring environment. Currently, determining the relative efficacy and performance of the multiple artifact removal techniques available on real-world data can be problematic, due to incomplete information on the uncorrupted desired signal. The majority of techniques are presently evaluated using simulated data, and therefore, the quality of the conclusions is contingent on the fidelity of the model used. Consequently, in the biomedical signal processing community, there is considerable focus on the generation and validation of appropriate signal models for use in artifact suppression. Most approaches rely on mathematical models which capture suitable approximations to the signal dynamics or underlying physiology and, therefore, introduce some uncertainty into subsequent predictions of algorithm performance. This paper describes a more empirical approach to modeling the desired signal, demonstrated for functional brain monitoring tasks, which allows for the procurement of a ground-truth signal that is highly correlated with the true desired signal contaminated by artifacts. The availability of this ground truth, together with the corrupted signal, can then aid in determining the efficacy of selected artifact removal techniques. A number of commonly implemented artifact removal techniques were evaluated using the described methodology to validate the proposed novel test platform. © 2012 IEEE.
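Once a ground-truth signal is available, scoring an artifact removal technique reduces to comparing the cleaned output against it. A minimal sketch, with synthetic signals standing in for real recordings (the metrics shown, SNR gain and correlation with ground truth, are common choices, not necessarily the paper's exact figures of merit):

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels."""
    return 10 * np.log10(np.sum(signal**2) / np.sum(noise**2))

def evaluate(ground_truth, corrupted, cleaned):
    """Score an artifact-removal result when the uncorrupted signal is known:
    SNR before vs. after removal, plus correlation with the ground truth."""
    before = snr_db(ground_truth, corrupted - ground_truth)
    after = snr_db(ground_truth, cleaned - ground_truth)
    rho = np.corrcoef(ground_truth, cleaned)[0, 1]
    return before, after, rho

t = np.linspace(0, 1, 500)
truth = np.sin(2 * np.pi * 8 * t)        # stand-in "desired" signal
artifact = 2.0 * (t > 0.5)               # step-like contamination
corrupted = truth + artifact
cleaned = corrupted - 0.9 * artifact     # imperfect removal leaves 10% behind
before, after, rho = evaluate(truth, corrupted, cleaned)
```

With simulated data these quantities are only as trustworthy as the signal model that generated `truth`; the empirical ground-truth procurement described above is what makes them meaningful on realistic recordings.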
Abstract:
Particle-in-cell (PIC) simulations of relativistic shocks are in principle capable of predicting the spectra of photons that are radiated incoherently by the accelerated particles. The most direct method evaluates the spectrum using the fields given by the Liénard-Wiechert potentials. However, for relativistic particles this procedure is computationally expensive. Here we present an alternative method that uses the concept of the photon formation length. The algorithm is suitable for evaluating spectra both from particles moving in a specific realization of a turbulent electromagnetic field or from trajectories given as a finite, discrete time series by a PIC simulation. The main advantage of the method is that it identifies the intrinsic spectral features and filters out those that are artifacts of the limited time resolution and finite duration of input trajectories.
Abstract:
Biosignal measurement and processing are increasingly being deployed in ambulatory situations, particularly in connected health applications. Such an environment dramatically increases the likelihood of artifacts, which can occlude features of interest and reduce the quality of information available in the signal. If multichannel recordings are available for a given signal source, there is currently a considerable range of methods which can suppress or, in some cases, remove the distorting effect of such artifacts. Considerably fewer techniques are available, however, when only a single-channel measurement can be made, and yet single-channel measurements are important where minimal instrumentation complexity is required. This paper describes a novel artifact removal technique for use in such a context. The technique, known as ensemble empirical mode decomposition with canonical correlation analysis (EEMD-CCA), is capable of operating on single-channel measurements. The EEMD technique is first used to decompose the single-channel signal into a multidimensional signal. The CCA technique is then employed to isolate the artifact components from the underlying signal using second-order statistics. The new technique is tested against the currently available wavelet denoising and EEMD-ICA techniques using both electroencephalography and functional near-infrared spectroscopy data and is shown to produce significantly improved results. © 1964-2012 IEEE.
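The CCA stage can be sketched in isolation: canonical correlation between the multidimensional signal and a one-sample-delayed copy of itself yields uncorrelated sources ordered by lag-1 autocorrelation, using second-order statistics only. In the sketch below the EEMD decomposition is assumed to have been performed already; a toy two-channel mixture stands in for the IMFs:

```python
import numpy as np

def bss_cca(components):
    """Given a multichannel decomposition (n_components x n_samples), find
    uncorrelated sources ordered by lag-1 autocorrelation via CCA between
    the data and a one-sample-delayed copy (second-order statistics only).
    Returns the sources and the squared canonical correlations."""
    X = components - components.mean(axis=1, keepdims=True)
    Xc, Y = X[:, 1:], X[:, :-1]          # data and its delayed copy
    Cxx, Cyy, Cxy = Xc @ Xc.T, Y @ Y.T, Xc @ Y.T
    # CCA as a generalized eigenproblem: eigenvalues of
    # Cxx^-1 Cxy Cyy^-1 Cyx are the squared canonical correlations.
    M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(-vals.real)
    W = vecs.real[:, order]
    return W.T @ X, vals.real[order]

rng = np.random.default_rng(1)
t = np.arange(1000)
smooth = np.sin(2 * np.pi * t / 100)     # high lag-1 autocorrelation
noise = rng.standard_normal(1000)        # low lag-1 autocorrelation
mixed = np.array([smooth + 0.5 * noise, 0.5 * smooth + noise])
sources, sq_corr = bss_cca(mixed)
```

Sources with low autocorrelation are the natural candidates for artifact components; zeroing them and inverting the unmixing reconstructs the cleaned signal.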
Abstract:
There is extensive debate concerning the cognitive and behavioral adaptation of Neanderthals, especially in the period when the earliest anatomically modern humans dispersed into Western Europe, around 35,000–40,000 B.P. The site of the Grotte du Renne (at Arcy-sur-Cure) is of great importance because it provides the most persuasive evidence for behavioral complexity among Neanderthals. A range of ornaments and tools usually associated with modern human industries, such as the Aurignacian, were excavated from three of the Châtelperronian levels at the site, along with Neanderthal fossil remains (mainly teeth). This extremely rare occurrence has been taken to suggest that Neanderthals were the creators of these items. Whether Neanderthals independently achieved this level of behavioral complexity and whether this was culturally transmitted or mimicked via incoming modern humans has been contentious. At the heart of this discussion lies an assumption regarding the integrity of the excavated remains. One means of testing this is by radiocarbon dating; however, until recently, our ability to generate both accurate and precise results for this period has been compromised. A series of 31 accelerator mass spectrometry ultrafiltered dates on bones, antlers, artifacts, and teeth from six key archaeological levels shows an unexpected degree of variation. This suggests that some mixing of material may have occurred, which implies a more complex depositional history at the site and makes it difficult to be confident about the association of artifacts with human remains in the Châtelperronian levels.
Abstract:
In finite difference time domain simulation of room acoustics, source functions are subject to various constraints. These depend on the way sources are injected into the grid and on the chosen parameters of the numerical scheme being used. This paper addresses the issue of selecting and designing sources for finite difference simulation, by first reviewing associated aims and constraints, and evaluating existing source models against these criteria. The process of exciting a model is generalized by introducing a system of three cascaded filters, respectively, characterizing the driving pulse, the source mechanics, and the injection of the resulting source function into the grid. It is shown that hard, soft, and transparent sources can be seen as special cases within this unified approach. Starting from the mechanics of a small pulsating sphere, a parametric source model is formulated by specifying suitable filters. This physically constrained source model is numerically consistent, does not scatter incoming waves, and is free from zero- and low-frequency artifacts. Simulation results are employed for comparison with existing source formulations in terms of meeting the spectral and temporal requirements on the outward propagating wave.
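The difference between injection strategies can be illustrated with a toy 1-D scheme: additively injecting a pulse with nonzero integral (a naive soft source) leaves a static offset behind the wavefront, one manifestation of the zero-frequency artifacts mentioned above. All parameters are illustrative and this is a bare scheme, not the paper's filtered source model:

```python
import numpy as np

# 1-D second-order FDTD wave equation at Courant number 1, comparing a
# soft source (adds to the field) with a hard source (overwrites it).
N, steps, src_pos = 200, 150, 50

def run(soft=True):
    p = np.zeros(N)       # field at time n
    p_prev = np.zeros(N)  # field at time n-1
    for n in range(steps):
        lap = np.zeros(N)
        lap[1:-1] = p[2:] - 2 * p[1:-1] + p[:-2]
        p_next = 2 * p - p_prev + lap        # Courant number = 1
        pulse = np.exp(-((n - 30) / 8.0) ** 2)   # Gaussian driving pulse
        if soft:
            p_next[src_pos] += pulse   # soft: transparent to incoming waves
        else:
            p_next[src_pos] = pulse    # hard: scatters incoming waves
        p_prev, p = p, p_next
    return p

field = run(soft=True)
# Behind the outward-moving front, the field settles on a nonzero plateau:
# the Gaussian has a nonzero integral, so the naive soft source injects a
# net DC component that never propagates away.
```

Filtering the driving pulse so that it has zero integral (or shaping it through the mechanics of a pulsating sphere, as the unified model above does) removes this residual offset while preserving transparency.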
Abstract:
Forming peer alliances to share and build knowledge is an important aspect of community arts practice, and these co-creation processes are increasingly being mediated by the internet. This paper offers guidance for practitioners who are interested in better utilising the internet to connect, share, and make new knowledge. It argues that new approaches are required to foster the organising activities that underpin online co-creation, building from the premise that people have become increasingly networked as individuals rather than in groups (Rainie and Wellman 2012: 6), and that these new ways of connecting enable new modes of peer-to-peer production and exchange. This position advocates that practitioners move beyond situating the internet as a platform for dissemination and a tool for co-creating media, to embrace its knowledge collaboration potential.
Drawing on a design experiment I developed to promote online knowledge co-creation, this paper suggests three development phases – developing connections, developing ideas, and developing agility – to ground six methods. They are: switching and routing, engaging in small trades of ideas with networked individuals; organising, co-ordinating networked individuals and their data; beta-release, offering ‘beta’ artifacts as knowledge trades; beta-testing, trialing and modifying other people’s ‘beta’ ideas; adapting, responding to technological disruption; and, reconfiguring, embracing opportunities offered by technological disruption. These approaches position knowledge co-creation as another capability of the community artist, along with co-creating art and media.
Abstract:
This paper describes an end-user model for a domestic pervasive computing platform formed by regular home objects. The platform does not rely on pre-planned infrastructure; instead, it exploits objects that are already available in the home and exposes their joint sensing, actuating and computing capabilities to home automation applications. We advocate an incremental process of the platform formation and introduce tangible, object-like artifacts for representing important platform functions. One of those artifacts, the application pill, is a tiny object with a minimal user interface, used to carry the application, as well as to start and stop its execution and provide hints about its operational status. We also emphasize streamlining the user's interaction with the platform. The user engages any UI-capable object of his choice to configure applications, while applications issue notifications and alerts exploiting whichever available objects can be used for that purpose. Finally, the paper briefly describes an actual implementation of the presented end-user model. © (2010) by International Academy, Research, and Industry Association (IARIA).
Abstract:
Dental Panoramic Tomography (DPT) is a widely used and valuable examination in dentistry. One area prone to artefacts and therefore misinterpretation is the anterior region of the mandible. This case study discusses a periapical radiolucency related to lower anterior teeth that is discovered to be a radiographic artefact. Possible causes of the artefact include a pronounced depression in the mental region of the mandible or superimposition of intervertebral spaces. Additional limitations of the DPT image include superimposition of radio-opaque structures, reduced image detail compared to intra-oral views and uneven magnification. These problems often make the DPT inappropriate for imaging the anterior mandible.
CLINICAL RELEVANCE: Panoramic radiography is often unsuitable for radiographic examination of the anterior mandible.
Abstract:
In Britain, the majority of Lower and Middle Paleolithic archaeological finds come from river terrace deposits. The impressive “staircase” terrace sequences of southeast England, and research facilitated by aggregate extraction have provided a considerable body of knowledge about the terrace chronology and associated archaeology in that area. Such research has been essential in considering rates of uplift, climatic cycles, archaeological chronologies, and the landscapes in which hominins lived. It has also promoted the view that southeast England was a major hominin route into Britain. By contrast, the terrace deposits of the southwest have been little studied. The Palaeolithic Rivers of South West Britain (PRoSWEB) project employed a range of geoarchaeological methodologies to address similar questions at different scales, focusing on the rivers Exe, Axe, Otter, and the paleo-Doniford, all of which were located south of the maximum Pleistocene glacial limit (marine oxygen isotope stage [MIS] 4–2). Preliminary analysis of the fieldwork results suggests that although the evolution of these catchments is complex, most conform to a standard staircase-type model, with the exception of the Axe, and, to a lesser extent, the paleo-Doniford, which are anomalous. Although the terrace deposits are less extensive than in southeast Britain, differentiation between terraces does exist, and new dates show that some of these terraces are of great antiquity (MIS 10+). The project also reexamined the distribution of artifacts in the region and confirms the distributional bias to the river valleys, and particularly the rivers draining southward to the paleo–Channel River system. This distribution is consistent with a model of periodic occupation of the British peninsula along and up the major river valleys from the paleo–Channel River corridor. 
These data have a direct impact on our understanding of the paleolandscapes of the southwest region, and therefore our interpretations of the Paleolithic occupation of the edge of the continental landmass.
Abstract:
We have recently witnessed an evolution in the relationship between humans and technology, largely accompanied by new interaction models that change how artifacts are conceived and construct new contexts of use. In this thesis, we investigate one of the emerging approaches, the universe of tangible media, articulating the perspective of technology design oriented toward Human-Computer Interaction (HCI) with the social, cultural, and aesthetic dimensions of technology use. Tangible media, unlike conventional digital content, have physical thickness and expression; because they are endowed with a body that inhabits the space of physical arrangements, they are subject to the action of the cultural world and of the social practices that govern the other physical objects we encounter in everyday life. This new relationship with digital technology will require the disciplines closest to technological development, such as Interaction Design and HCI, to open themselves to the contributions and approaches of the human sciences. Accepting that the nature underlying the process of adaptability in the domestic environment alters the balance of the relationship between the design and the use of technology, we believe it is essential to develop a phenomenology of interaction. Moreover, the adaptability of tangible media presents a set of difficulties, not only technical but also conceptual in nature, which have hindered the development and field deployment of customizable technologies. One of the goals of this thesis is to investigate a conceptual framework capable of framing the phenomenon of tangible media adaptability, and to develop a technology that can serve as the object of an empirical study based on an ethnographic approach.
Abstract:
The rapid evolution and proliferation of a world-wide computerized network, the Internet, resulted in an overwhelming and constantly growing amount of publicly available data and information, a fact that was also verified in biomedicine. However, the lack of structure of textual data inhibits its direct processing by computational solutions. Information extraction is the task of text mining that intends to automatically collect information from unstructured text data sources. The goal of the work described in this thesis was to build innovative solutions for biomedical information extraction from scientific literature, through the development of simple software artifacts for developers and biocurators, delivering more accurate, usable and faster results. We started by tackling named entity recognition - a crucial initial task - with the development of Gimli, a machine-learning-based solution that follows an incremental approach to optimize extracted linguistic characteristics for each concept type. Afterwards, Totum was built to harmonize concept names provided by heterogeneous systems, delivering a robust solution with improved performance results. Such approach takes advantage of heterogeneous corpora to deliver cross-corpus harmonization that is not constrained to specific characteristics. Since previous solutions do not provide links to knowledge bases, Neji was built to streamline the development of complex and custom solutions for biomedical concept name recognition and normalization. This was achieved through a modular and flexible framework focused on speed and performance, integrating a large amount of processing modules optimized for the biomedical domain. To offer on-demand heterogeneous biomedical concept identification, we developed BeCAS, a web application, service and widget.
We also tackled relation mining by developing TrigNER, a machine-learning-based solution for biomedical event trigger recognition, which applies an automatic algorithm to obtain the best linguistic features and model parameters for each event type. Finally, in order to assist biocurators, Egas was developed to support rapid, interactive and real-time collaborative curation of biomedical documents, through manual and automatic in-line annotation of concepts and relations. Overall, the research work presented in this thesis contributed to a more accurate update of current biomedical knowledge bases, towards improved hypothesis generation and knowledge discovery.
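As a point of contrast with the machine-learning systems described above, the simplest possible form of concept recognition and normalization is a dictionary lookup that maps surface forms to knowledge-base identifiers. The sketch below is a toy illustration only (the lexicon entries and identifiers are invented), not how Gimli or Neji actually work:

```python
import re

# Toy dictionary-based recognizer/normalizer: a drastic simplification of
# biomedical concept identification. Lexicon names and concept identifiers
# here are invented for illustration.
lexicon = {
    "brca1": "GENE:000001",
    "p53": "GENE:000002",
    "breast cancer": "DISEASE:000042",
}

def annotate(text):
    """Return (surface form, start offset, normalized id) for each match,
    sorted by position in the text."""
    hits = []
    for name, concept_id in lexicon.items():
        for m in re.finditer(re.escape(name), text.lower()):
            hits.append((text[m.start():m.end()], m.start(), concept_id))
    return sorted(hits, key=lambda h: h[1])

anns = annotate("BRCA1 mutations are linked to breast cancer.")
```

Real systems replace the exact-match lexicon with learned sequence models and disambiguation, precisely because surface forms in the literature are ambiguous and highly variable; the dictionary baseline is what they are measured against.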