988 results for FIXED PARTIAL DENTURES
Abstract:
In forensic investigations, it is common for investigators to obtain a photograph of evidence left at a crime scene to help them identify the culprit(s). Although fingerprints are the most widely used type of evidence, scene-of-crime officers report that more than 30% of the evidence recovered from crime scenes originates from palms. Palmprint evidence left at crime scenes is usually partial, since full palmprints are very rarely obtained. In particular, partial palmprints do not exhibit a structured shape and often do not contain a reference point that can be used to align them for efficient matching. As a result, conventional matching methods based on alignment and minutiae pairing, as used in fingerprint recognition, fail in partial palmprint recognition. In this paper a new partial-to-full palmprint recognition technique based on invariant minutiae descriptors is proposed, in which the partial palmprint's minutiae are extracted and treated as the distinctive, discriminating features of each palmprint image. This is achieved by assigning to each minutia a feature descriptor formed from the values of all the orientation histograms around that minutia. The descriptors are therefore rotation invariant and do not require any image alignment at the matching stage. The results obtained show that the proposed technique yields a recognition rate of 99.2%. The solution gives the judicial jury high confidence in its deliberations and decisions.
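A minimal Python sketch of the general idea is given below: an orientation histogram around each minutia is made rotation invariant by a canonical circular shift, and descriptors are then compared without aligning the two palmprints. The histogram size, the canonicalisation rule and the distance threshold are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def rotation_invariant_descriptor(orientation_histogram):
    """Make an orientation histogram rotation invariant by circularly
    shifting it so that its dominant bin comes first (illustrative
    sketch; the paper's descriptor construction may differ)."""
    h = np.asarray(orientation_histogram, dtype=float)
    h = h / (np.linalg.norm(h) + 1e-12)      # scale normalisation
    shift = int(np.argmax(h))                # dominant orientation bin
    return np.roll(h, -shift)                # canonical rotation

def match_minutiae(partial_descriptors, full_descriptors, threshold=0.25):
    """Count descriptor pairs whose Euclidean distance falls below a
    threshold; no alignment of the two palmprint images is needed."""
    matches = 0
    for d1 in partial_descriptors:
        dists = [np.linalg.norm(d1 - d2) for d2 in full_descriptors]
        if min(dists) < threshold:
            matches += 1
    return matches

# toy usage: two 36-bin orientation histograms, one a rotated copy of the other
hist = np.random.rand(36)
rotated = np.roll(hist, 7)                   # simulate a rotated minutia
d1 = rotation_invariant_descriptor(hist)
d2 = rotation_invariant_descriptor(rotated)
print(np.linalg.norm(d1 - d2))               # ~0: rotation has no effect
```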
Abstract:
This paper explores the theme of exhibiting architectural research through a particular example, the development of the Irish pavilion for the 14th architectural biennale, Venice 2014. Responding to Rem Koolhaas’s call to investigate the international absorption of modernity, the Irish pavilion became a research project that engaged with the development of the architectures of infrastructure in Ireland in the twentieth and twenty-first centuries. Central to this proposition was that infrastructure is simultaneously a technological and cultural construct, one that for Ireland occupied a critical position in the building of a new, independent post-colonial nation state, after 1921.
Presupposing infrastructure as consisting of both visible and invisible networks, the idea of a matrix became a central conceptual and visual tool in the curatorial and design process for the exhibition and pavilion. To begin with, this was a two-dimensional grid used to identify and order what became described as a series of ten 'infrastructural episodes'. These were determined chronologically across the decades between 1914 and 2014, and their spatial manifestations were articulated in terms of scale: micro, meso and macro. At this point ten academics were approached as researchers. Their purpose was twofold: to establish the broader narratives around which the infrastructures developed, and to scrutinise relevant archives for compelling visual material. Defining the meso scale as that of the building, the media unearthed were further filtered and edited according to a range of categories – filmic/image, territory, building detail, and model – which sought to communicate the relationship between the pieces of architecture and the larger systems to which they connect. New drawings realised by the design team further iterated these relationships, filling gaps in the narrative by providing composite, strategic or detailed drawings.
Conceived as an open-ended and extendable matrix, the pavilion was influenced by a series of academic writings, curatorial practices, artworks and other installations, including Frederick Kiesler's City in Space (1925), Edoardo Persico and Marcello Nizzoli's Medaglia d'Oro room (1934), Sol LeWitt's Incomplete Open Cubes (1974) and Rosalind Krauss's seminal text 'Grids' (1979). A modular frame whose structural bays would each hold and present an 'episode', the pavilion became both a visual analogue of the unseen networks embodying infrastructural systems and a reflection on the predominance of framed structures within the buildings exhibited. Sharing the aspiration of adaptability of many of these schemes, its white-painted timber components are connected by easily dismantled steel fixings. These, together with its modularity, allow the structure to be taken down and subsequently re-erected in different iterations. The pavilion itself is therefore imagined as essentially provisional and – as with infrastructure – as having no fixed form. As archives and other material are presented over time, the transparent nature of the space allows them to overlap visually, conveying the nested nature of infrastructural production. Pursuing a means to evoke the qualities of infrastructural space while conveying a historical narrative, the exhibition's termination in the present is designed to provoke in the visitor a perceptual extension of the matrix that engages with the future.
Abstract:
Pre-processing (PP) of the received symbol vector and channel matrices is an essential prerequisite operation for Sphere Decoder (SD)-based detection in Multiple-Input Multiple-Output (MIMO) wireless systems. PP is a highly complex operation, yet it represents only a small fraction of the overall computational cost of detecting an OFDM MIMO frame in standards such as 802.11n. Despite this, real-time PP architectures are highly inefficient and dominate the resource cost of real-time SD architectures. This paper resolves this issue. By reorganising the ordering and QR decomposition sub-operations of PP, we describe a Field Programmable Gate Array (FPGA)-based PP architecture for the Fixed Complexity Sphere Decoder (FSD) applied to 4 × 4 802.11n MIMO which reduces resource cost by 50% compared to state-of-the-art solutions whilst maintaining real-time performance.
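To make the two PP sub-operations concrete, the sketch below shows a generic norm-based column ordering followed by a QR decomposition of the permuted channel matrix, as commonly used in FSD pre-processing. The ordering rule used here (weakest stream fully enumerated, i.e. detected first, the rest from strongest to weakest) and the NumPy implementation are assumptions for illustration; the abstract does not specify the paper's reorganised ordering/QRD or its FPGA mapping.

```python
import numpy as np

def fsd_preprocess(H):
    """Sketch of FSD pre-processing: column ordering followed by QR
    decomposition of the permuted channel matrix."""
    norms = np.linalg.norm(H, axis=0)
    order = np.argsort(norms)          # ascending channel-column norm (quality proxy)
    weakest, rest = int(order[0]), list(order[1:])
    # Detection proceeds from the LAST permuted column upwards, so the weakest
    # stream (the one fully enumerated by the FSD) is placed last and the
    # remaining streams are arranged so that stronger ones are detected earlier.
    perm = rest + [weakest]
    Q, R = np.linalg.qr(H[:, perm])
    return Q, R, perm

# toy 4x4 802.11n-style channel and received vector
H = (np.random.randn(4, 4) + 1j * np.random.randn(4, 4)) / np.sqrt(2)
Q, R, perm = fsd_preprocess(H)
y = H @ np.ones(4, dtype=complex)      # received vector for an all-ones symbol
z = Q.conj().T @ y                     # rotated observation used by the tree search
```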
Abstract:
In this paper, we introduce a novel approach to face recognition which simultaneously tackles three combined challenges: 1) uneven illumination; 2) partial occlusion; and 3) limited training data. The new approach performs lighting normalization, occlusion de-emphasis and, finally, face recognition based on finding the largest matching area (LMA) at each point on the face, as opposed to traditional fixed-size local-area-based approaches. Robustness is achieved with novel approaches for feature extraction, LMA-based face image comparison and unseen-data modeling. On the Extended Yale B and AR face databases for face identification, our method, using only a single training image per person, outperforms other methods that use a single training image, and matches or exceeds methods that require multiple training images. On the Labeled Faces in the Wild face verification database, our method outperforms comparable unsupervised methods. We also show that the new method performs competitively even when the training images are corrupted.
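The following is a conceptual sketch of the variable-size local-area idea only: at each grid point the comparison window is grown and the largest size whose similarity still clears a threshold is kept, in contrast with a fixed-size window. The abstract does not give the paper's features, growth rule or similarity measure, so plain pixel patches, normalized cross-correlation, a fixed size ladder and a threshold stand in for them here.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    return float((a * b).sum() / denom)

def largest_matching_area(probe, gallery, point, sizes=(8, 12, 16, 24, 32), thr=0.7):
    """At a given face point, grow the comparison window and keep the largest
    half-size whose similarity still exceeds the threshold."""
    y, x = point
    best = 0
    for s in sizes:
        ys, xs = slice(max(y - s, 0), y + s), slice(max(x - s, 0), x + s)
        pa, ga = probe[ys, xs], gallery[ys, xs]
        if pa.size and ncc(pa, ga) >= thr:
            best = s
    return best

def lma_score(probe, gallery, step=16):
    """Aggregate the largest matching areas over a sparse grid of face points."""
    h, w = probe.shape
    points = [(y, x) for y in range(step, h - step, step)
                     for x in range(step, w - step, step)]
    return sum(largest_matching_area(probe, gallery, p) for p in points)
```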
Abstract:
This work investigated differences in the reactivity of Sarda (primiparous n = 18, multiparous n = 17) and Dorset (multiparous n = 8) breeds of sheep, and of their singleton lambs, in two challenging test situations: a mother-lamb partial-separation test and an isolation test. Non-parametric analysis of single behavioural variables and fear scores was used to evaluate the effect of parity and lamb sex, and the association between mother and lamb behaviour. Among the ewes, Dorset were characterised by a calmer temperament, while Sarda (especially primiparous ewes) were more active in their response to challenge (i.e. made more escape attempts). Lambs reflected this divergence to some extent, and overall, during isolation, the lamb fear score was on average significantly higher than that of the dams. Correlations between measures of behavioural reactivity across tests were computed to search for predictive measures of fear. A very strong correlation emerged linking vocalisation to locomotor activity. Vocalisation could therefore be a good candidate predictor of an active reaction of sheep to a fearful situation.
Abstract:
We have measured mass spectra of positive ions produced by low-energy electron impact on thymine using a reflectron time-of-flight mass spectrometer. Using computer-controlled data acquisition, mass spectra were acquired for electron impact energies up to 100 eV in steps of 0.5 eV. Ion yield curves for most of the fragment ions have been determined by fitting groups of adjacent peaks in the mass spectra with sequences of normalized Gaussians. The ion yield curves have been normalized by comparing the sum of the ion yields to the average of calculated total ionization cross sections. Appearance energies have been determined. The nearly equal appearance energies of the 83 u and 55 u ions observed in the present work strongly indicate that, near threshold, the 55 u ion is formed directly by the breakage of two bonds in the ring, rather than by successive loss of HNCO and CO from the parent ion. Likewise, 54 u is not formed by CO loss from 82 u. The appearance energies are in a number of cases consistent with the loss of one or more hydrogen atoms from a heavier fragment, but 70 u is not formed by hydrogen loss from 71 u.
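The peak-fitting step can be illustrated with a short sketch: a group of adjacent mass peaks is fitted with a sum of normalized Gaussians at fixed positions, so the free parameters are just the peak areas (the ion yields). The specific peak group, common width and use of scipy.optimize.curve_fit are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_comb(m, *areas, centers=None, sigma=0.12):
    """Sum of normalized Gaussians at fixed mass positions; the only free
    parameters are the areas (ion yields) of the individual peaks."""
    out = np.zeros_like(m, dtype=float)
    for area, c in zip(areas, centers):
        out += area * np.exp(-0.5 * ((m - c) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return out

# toy spectrum around a 54-55-56 u peak group (areas and noise invented)
centers = [54.0, 55.0, 56.0]
m = np.linspace(53.5, 56.5, 300)
y = gaussian_comb(m, 0.3, 1.0, 0.15, centers=centers) + 0.01 * np.random.randn(m.size)

fit = lambda mm, *a: gaussian_comb(mm, *a, centers=centers)
areas, _ = curve_fit(fit, m, y, p0=[0.5] * len(centers))
print(areas)   # recovered ion yields for the three adjacent peaks
```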
Abstract:
The genetic code establishes the rules that govern the translation of genes into proteins. It was established more than 3.5 billion years ago and is one of the most conserved features of life. Despite this, several alterations to the standard genetic code have been discovered in both prokaryotes and eukaryotes, namely in the fungal CTG clade, where a unique seryl transfer RNA (Ser-tRNA(CAG)) decodes leucine CUG codons as serine. This Ser-tRNA(CAG) appeared 272 ± 25 million years ago through insertion of an adenosine in the middle position of the anticodon of a Ser-tRNA(CGA) gene, which changed its anticodon from 5'-CGA-3' to 5'-CAG-3'. This dramatic genetic event restructured the proteome of the CTG clade species, but it is not yet clear how and why such a deleterious event was selected and became fixed in those fungal genomes. In this study we attempted to shed new light on the evolution of this fungal genetic code alteration by reconstructing its evolutionary pathway in vivo in the yeast Saccharomyces cerevisiae. To do so, we expressed wild-type and mutant versions of the C. albicans Ser-tRNA(CGA) gene in S. cerevisiae and evaluated the impact of the mutant Ser-tRNA(CGA) on fitness, tRNA stability, translation efficiency and aminoacylation kinetics. Our data demonstrate that these mutants are expressed and misincorporate Ser at CUGs, but their expression is repressed through an unknown molecular mechanism. We further demonstrate, using in vivo forced-evolution methodologies, that the Ser-tRNA(CAG) can easily be inactivated through natural mutations that prevent its recognition by the seryl-tRNA synthetase. Overall, the data show that repression of expression of the mistranslating Ser-tRNA(CAG) played a critical role in the evolution of CUG reassignment from Leu to Ser. To better understand the evolution of natural genetic code alterations, we also engineered partial reassignment of various codons in yeast. The data confirmed that genetic code ambiguity affects fitness, induces protein aggregation, interferes with the cell cycle, and results in nuclear and morphological alterations, genome instability and gene expression deregulation. Interestingly, it also generates phenotypic variability and phenotypes that confer growth advantages in certain environmental conditions. This study provides strong evidence for direct and critical roles of the environment in the evolution of genetic code alterations.
Abstract:
The ability and right to have secrets may be a condition of social ethics (Derrida, A Taste for the Secret), but at the same time the nature of secrets is that they undermine themselves. Once told, secrets are no longer secret but known. Even to name them as possibilities is to bring them into view as objects of knowledge. Secrets are thus always in some ways partial secrets, but their "openness" also connotes the lack of certainty of any knowledge about them, their evasiveness, their lack of fixity, and hence their partial character and openness to change. In this article, I explore partial secrets in relation to a 2011 interview study of HIV support in the United Kingdom, where HIV's relatively low prevalence and high treatment access tend toward its invisibilization. I suggest that in this context HIV is positioned ambiguously, as a "partial secret," in an ongoing and precarious tension between public knowledge and acceptance of HIV, HIV's constitution as a condition of citizenship attended by full human rights, and HIV's being re-secreted through ongoing illness, constrained resources, citizenly exclusion, and the psychological and social isolation of those affected.
Abstract:
Preference-based measures of health have become widely used in health economics for the measurement of Health-Related Quality of Life. Hence, the development of preference-based measures of health has been a major concern for researchers throughout the world. This study aims to model health state preference data using a new preference-based measure of health (the SF-6D) and to suggest alternative models for predicting health state utilities using fixed and random effects models. It also seeks to investigate problems found in the SF-6D and to suggest possible changes to it.
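The kind of modelling referred to can be sketched as follows, assuming long-format valuation data with one health-state valuation per row, dimension-level dummies and a respondent identifier. The column names, the number of dimensions and the specification (pooled OLS for the fixed-effects case, a respondent-level random intercept for the random-effects case) are illustrative assumptions, not the study's actual model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# synthetic long-format valuation data standing in for SF-6D survey responses
rng = np.random.default_rng(0)
n = 600
data = pd.DataFrame({
    "respondent": rng.integers(0, 60, n),      # 60 hypothetical respondents
    "pf": rng.integers(1, 4, n),               # hypothetical physical-functioning level
    "pain": rng.integers(1, 4, n),             # hypothetical pain level
})
data["utility"] = (1.0 - 0.05 * data["pf"] - 0.08 * data["pain"]
                   + rng.normal(0, 0.05, n))

# fixed-effects (pooled OLS) model on dimension-level dummies
fixed = smf.ols("utility ~ C(pf) + C(pain)", data=data).fit()

# random-effects model: random intercept for each respondent
random_fx = smf.mixedlm("utility ~ C(pf) + C(pain)",
                        data=data, groups=data["respondent"]).fit()

print(fixed.params)       # estimated utility decrements per dimension level
print(random_fx.params)
```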
Abstract:
Integrated master's thesis in Biomedical Engineering and Biophysics, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2014
Abstract:
Thesis (Ph.D.)--University of Washington, 2015
Abstract:
This paper proposes a computationally efficient methodology for the optimal location and sizing of static and switched shunt capacitors in large distribution systems. The problem is formulated as the maximization of the savings produced by the reduction in energy losses and by the avoided costs due to investment deferral in network expansion. The proposed method selects the nodes to be compensated, as well as the optimal capacitor ratings and their operational characteristics, i.e. fixed or switched. After an appropriate linearization, the optimization problem is formulated as a large-scale mixed-integer linear problem, suitable for solution with a widespread commercial package. Results of the proposed optimization method are compared with another recent methodology reported in the literature using two test cases: a 15-bus and a 33-bus distribution network. For both test cases, the proposed methodology delivers better solutions, indicated by higher loss savings achieved with lower amounts of capacitive compensation. The proposed method has also been applied to compensate an actual large distribution network served by AES-Venezuela in the metropolitan area of Caracas. A convergence time of about 4 seconds after 22298 iterations demonstrates the ability of the proposed methodology to handle large-scale compensation problems efficiently.
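A minimal sketch of this kind of mixed-integer linear formulation is given below, assuming pre-computed, already-linearized per-node saving coefficients, a small set of standard capacitor sizes and a simple budget-style constraint. The candidate nodes, coefficients, costs and constraints are illustrative assumptions, not the paper's model or data.

```python
import pulp

# hypothetical data: candidate nodes, standard capacitor sizes (kvar),
# linearized annual saving per kvar at each node, and installed costs
nodes = ["n5", "n9", "n12"]
sizes = [300, 600, 900]
saving_per_kvar = {"n5": 11.0, "n9": 14.5, "n12": 9.0}   # $/kvar/year (illustrative)
cost = {300: 1200, 600: 2000, 900: 2700}                 # $ installed (illustrative)

prob = pulp.LpProblem("capacitor_placement", pulp.LpMaximize)

# x[n][s] = 1 if a capacitor bank of size s is installed at node n
x = {n: {s: pulp.LpVariable(f"x_{n}_{s}", cat="Binary") for s in sizes} for n in nodes}

# objective: linearized loss savings minus investment cost
prob += pulp.lpSum(saving_per_kvar[n] * s * x[n][s] - cost[s] * x[n][s]
                   for n in nodes for s in sizes)

# at most one capacitor bank per candidate node
for n in nodes:
    prob += pulp.lpSum(x[n][s] for s in sizes) <= 1

# limit on total installed reactive compensation (kvar)
prob += pulp.lpSum(s * x[n][s] for n in nodes for s in sizes) <= 1500

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for n in nodes:
    for s in sizes:
        if x[n][s].value() == 1:
            print(f"install {s} kvar at {n}")
```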
Abstract:
In this paper we address the real-time capabilities of P-NET, a multi-master fieldbus standard based on a virtual token-passing scheme. We show how P-NET's medium access control (MAC) protocol is able to guarantee a bounded access time for message requests. We then propose a model for implementing fixed-priority dispatching mechanisms at each master's application level. In this way, we diminish the impact of the first-come-first-served (FCFS) policy that P-NET uses at the data link layer. The proposed model raises several issues well known within the real-time systems community: message release jitter; pre-run-time schedulability analysis in non-pre-emptive contexts; and the non-independence of tasks at the application level. We identify these issues in the proposed model and show how results available for priority-based task dispatching can be adapted to encompass priority-based message dispatching in P-NET networks.
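The application-level dispatching idea can be sketched as a per-master priority queue that releases the highest-priority pending message whenever the node's virtual token turn arrives, instead of handing messages to the data link layer in arrival order. The class and method names are hypothetical and the sketch deliberately ignores the MAC-level details; it is not the paper's exact model.

```python
import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class PendingMessage:
    priority: int                   # lower value = higher fixed priority
    seq: int                        # FIFO tie-break among equal priorities
    payload: bytes = field(compare=False)

class FixedPriorityDispatcher:
    """Application-level outgoing queue for one P-NET master: releases the
    highest-priority pending message at each virtual-token turn, rather
    than the oldest one (hypothetical sketch)."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def submit(self, priority, payload):
        heapq.heappush(self._heap,
                       PendingMessage(priority, next(self._counter), payload))

    def on_token(self):
        """Called when the virtual token reaches this master; returns the
        message to pass to the data link layer, or None if idle."""
        return heapq.heappop(self._heap).payload if self._heap else None

# usage: a low-priority message submitted first no longer delays a later,
# higher-priority one
d = FixedPriorityDispatcher()
d.submit(5, b"periodic status")
d.submit(1, b"alarm")
print(d.on_token())   # b'alarm'
```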
Abstract:
Long Term Evolution (LTE) is one of the latest standards in the mobile communications market. To achieve its performance, LTE networks use several techniques, such as multi-carrier transmission, multiple-input multiple-output and cooperative communications. Within cooperative communications, this paper focuses on the fixed relaying technique, presenting a way to determine the best position at which to deploy the relay station (RS), from a set of empirically good solutions, and to quantify the associated performance gain for different cluster-size configurations. The best RS position was obtained through realistic simulations, which place it at the middle of the cell's circumference arc. The simulations also confirmed that the network's performance improves as the number of RSs increases. It was possible to conclude that, for each deployed RS, the percentage of area served by an RS increases by about 10%. Furthermore, the mean data rate in the cell increased by approximately 60% through the use of RSs. Finally, a scenario with a larger number of RSs can achieve the same performance as an equivalent scenario without RSs but with a higher reuse distance. This leads to a compromise between RS installation and cluster size, in order to maximize capacity as well as performance.
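The geometry implied by the reported best RS position can be sketched briefly: assuming a circular cell of radius R around the base station, divided into equal angular sectors, each RS is placed at the midpoint of its sector's arc on the cell edge. The circular-cell model, the sectorisation and the parameters are illustrative assumptions, not the paper's simulation setup.

```python
import math

def relay_positions(cell_radius, num_relays, bs_xy=(0.0, 0.0)):
    """Place num_relays relay stations at the middle of equal arcs on the
    cell edge, i.e. on the circle of radius cell_radius around the base
    station (illustrative geometry only)."""
    bx, by = bs_xy
    positions = []
    for k in range(num_relays):
        theta = 2 * math.pi * (k + 0.5) / num_relays   # mid-arc angle of sector k
        positions.append((bx + cell_radius * math.cos(theta),
                          by + cell_radius * math.sin(theta)))
    return positions

# example: three RSs on the edge of a 1 km cell
for x, y in relay_positions(1000.0, 3):
    print(f"RS at ({x:.1f} m, {y:.1f} m)")
```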