69 results for FINGERPRINT VERIFICATION
Abstract:
Executive Summary. The unifying theme of this thesis is the pursuit of a satisfactory way to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we implement an idea from the field of fuzzy set theory in the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results on asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings.
The empirical investigation conducted to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than the realized returns from portfolio strategies that are optimal with respect to single performance measures. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance. We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to realized portfolio returns that first-order stochastically dominate those resulting from optimization with respect to a single measure only, for example the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e. the sequence of expected shortfalls over a range of quantiles.
Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio return distribution that second-order stochastically dominates those of virtually all performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results on the bounds of the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
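The second-order stochastic dominance check described in Chapter 2, a pointwise comparison of absolute Lorenz curves (sequences of expected shortfalls over a range of quantiles), can be sketched as follows. The function names and the discrete estimator are our own illustration, not the thesis code:

```python
import numpy as np

def absolute_lorenz(returns, quantiles):
    """Absolute Lorenz curve: cumulative mean of the sorted returns up to
    each quantile level (proportional to expected shortfall at that level)."""
    x = np.sort(np.asarray(returns, dtype=float))
    n = len(x)
    curve = []
    for q in quantiles:
        k = max(1, int(np.ceil(q * n)))  # number of worst outcomes included
        curve.append(x[:k].sum() / n)
    return np.array(curve)

def second_order_dominates(a, b, quantiles):
    """a SSD-dominates b if a's absolute Lorenz curve lies (weakly) above
    b's at every quantile considered."""
    return bool(np.all(absolute_lorenz(a, quantiles)
                       >= absolute_lorenz(b, quantiles)))
```

A shifted sample dominates the original: adding a constant to every return raises the Lorenz curve pointwise, so the comparison behaves as expected on that simple case.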
Abstract:
PURPOSE: We investigated the influence of beam modulation on treatment planning by comparing four available stereotactic radiosurgery (SRS) modalities: Gamma-Knife-Perfexion, Novalis-Tx Dynamic-Conformal-Arc (DCA) and Dynamic-Multileaf-Collimation-Intensity-Modulated-Radiotherapy (DMLC-IMRT), and Cyberknife. MATERIAL AND METHODS: Patients with arteriovenous malformations (n = 10) or acoustic neuromas (n = 5) were planned with the different treatment modalities. The Paddick conformity index (CI), dose heterogeneity (DH), gradient index (GI) and beam-on time were used as dosimetric indices. RESULTS: Gamma-Knife-Perfexion can achieve a high degree of conformity (CI = 0.77 ± 0.04) with limited low-dose spread (GI = 2.59 ± 0.10) surrounding the inhomogeneous dose distribution (DH = 0.84 ± 0.05) at the cost of treatment time (68.1 min ± 27.5). Novalis-Tx-DCA improved this inhomogeneity (DH = 0.30 ± 0.03) and treatment time (16.8 min ± 2.2) at the cost of conformity (CI = 0.66 ± 0.04), and Novalis-Tx-DMLC-IMRT improved the DCA conformity (CI = 0.68 ± 0.04) and inhomogeneity (DH = 0.18 ± 0.05) at the cost of low-dose spread (GI = 3.94 ± 0.92) and treatment time (21.7 min ± 3.4) (p < 0.01). Cyberknife achieved comparable conformity (CI = 0.77 ± 0.06) at the cost of low-dose spread (GI = 3.48 ± 0.47) surrounding the homogeneous (DH = 0.22 ± 0.02) dose distribution and treatment time (28.4 min ± 8.1) (p < 0.01). CONCLUSIONS: Gamma-Knife-Perfexion complies with all SRS constraints (high conformity while minimizing low-dose spread). Multiple focal entries (Gamma-Knife-Perfexion and Cyberknife) achieve better conformity than the High-Definition MLC of Novalis-Tx at the cost of treatment time. Non-isocentric beams (Cyberknife) or IMRT beams (Novalis-Tx-DMLC-IMRT) spread more low dose than multiple isocenters (Gamma-Knife-Perfexion) or dynamic arcs (Novalis-Tx-DCA). Inverse planning and modulated fluences (Novalis-Tx-DMLC-IMRT and CyberKnife) deliver the most homogeneous treatment.
Furthermore, Linac-based systems (Novalis and Cyberknife) can perform image verification at the time of treatment delivery.
Abstract:
The aim of this paper is to evaluate the risks associated with the use of fake fingerprints on a livescan device equipped with a liveness detection method. The method is based on the optical properties of the skin: the sensor uses several polarizations and illuminations to capture information from the different layers of human skin. The experiments also determine the conditions under which the system is deceived and whether the nature of the fake, the mould used for its production, or the individuals involved in the attack have an influence. These experiments showed that current multispectral sensors can be deceived by fake fingerprints created with or without the cooperation of the subject. Fakes created from direct casts perform better than those created from indirect casts. The results showed that the success of the attack is influenced by two main factors. The first is the quality of the fakes and, by extension, the quality of the original fingerprint. The second is the combination of the general patterns involved in the attacks, since an appropriate combination can strongly increase the rate of successful attacks.
Abstract:
Because of low incidence, mixed study populations and a paucity of clinical and histological data, the management of adult brainstem gliomas (BSGs) remains non-standardized. Here we describe the characteristics, treatment and outcome of patients with exclusively histologically confirmed adult BSGs. A retrospective chart review of adults (age >18 years) was conducted. BSG was defined as a glial tumor located in the midbrain, pons or medulla. Characteristics, management and outcome were analyzed. Twenty-one patients (17 males; median age 41 years) were diagnosed between 2004 and 2012 by biopsy (n = 15), partial (n = 4) or complete resection (n = 2). Diagnoses were glioblastoma (WHO grade IV, n = 6), anaplastic astrocytoma (WHO grade III, n = 7), diffuse astrocytoma (WHO grade II, n = 6) and pilocytic astrocytoma (WHO grade I, n = 2). Diffuse gliomas were mainly located in the pons and frequently showed MRI contrast enhancement. Endophytic growth was common (16 vs. 5). Postoperative therapy in low-grade (WHO grade I/II) and high-grade gliomas (WHO grade III/IV) consisted of radiotherapy alone (three in each group), radiochemotherapy (2 vs. 6), chemotherapy alone (0 vs. 2) or no postoperative therapy (3 vs. 1). Median PFS (24.1 vs. 5.8 months; log-rank, p = 0.009) and median OS (30.5 vs. 11.5 months; log-rank, p = 0.028) were significantly better in WHO grade II than in WHO grade III/IV tumors. Second-line therapy varied considerably. Histological verification of adult BSGs is feasible and has an impact on postoperative treatment. Low-grade gliomas can simply be followed or treated with radiotherapy alone. Radiochemotherapy with temozolomide can safely be prescribed for high-grade gliomas without additional CNS toxicities.
Abstract:
Pentecostalism has placed miracles at the heart of its theology and made them the central element of its evangelization activities. Catholicism, on the other hand, has always sought to control all declarations of divine manifestation: miraculous apparitions and recoveries have therefore been systematically, and increasingly, subjected to slow and rigorous procedures of authentication. The Pentecostals see God as an external force which manifests itself on earth to drive out the evil which invades it. All believers therefore have the right to be freed from evil, and nobody should have to accept suffering meekly. But the Catholic pilgrims we studied do not share these Pentecostal convictions. God acts from inside, not by delivering them but by supporting them in their daily trials.
Physical recovery is rare and little sought after, so it takes second place to spiritual recovery, which is accessible to everyone. It seems to us that these two types of representation place believers in very divergent frames of mind, giving rise, in one group or the other, to hopes that correspond to the group's capacity to produce miracles.
Abstract:
Devolatilization reactions and subsequent transfer of fluid from subducted oceanic crust into the overlying mantle wedge are important processes, which are responsible for the specific geochemical characteristics of subduction-related metamorphic rocks, as well as those of arc magmatism. To better understand the geochemical fingerprint induced by fluid mobilization during dehydration and rehydration processes related to subduction zone metamorphism, the trace element and rare earth element (REE) distribution patterns in HP-LT metamorphic assemblages in eclogite-, blueschist- and greenschist-facies rocks of the Ile de Groix were obtained by laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS) analysis. This study focuses on 10 massive basic rocks representing former hydrothermally altered mid-ocean ridge basalts (MORB), four banded basic rocks of volcano-sedimentary origin and one micaschist. The main hosts for incompatible trace elements are epidote (REE, Th, U, Pb, Sr), garnet [Y, heavy REE (HREE)], phengite (Cs, Rb, Ba, B), titanite [Ti, Nb, Ta, REE; HREE > LREE (light REE)], rutile (Ti, Nb, Ta) and apatite (REE, Sr). The trace element contents of omphacite, amphibole, albite and chlorite are low. The incompatible trace element contents of minerals are controlled by the stable metamorphic mineral assemblage and directly related to the appearance, disappearance and reappearance of minerals, especially epidote, garnet, titanite, rutile and phengite, during subduction zone metamorphism. Epidote is a key mineral in the trace element exchange process because of its large stability field, ranging from lower greenschist- to blueschist- and eclogite-facies conditions. Different generations of epidote are generally observed and related to the coexisting phases at different stages of the metamorphic cycle (e.g. lawsonite, garnet, titanite). 
Epidote thus controls most of the REE budget during the changing P-T conditions along the prograde and retrograde path. Phengite also plays an important role in determining the large ion lithophile element (LILE) budget, as it is stable to high P-T conditions. The breakdown of phengite causes the release of LILE during retrogression. A comparison of trace element abundances in whole-rocks and minerals shows that the HP-LT metamorphic rocks largely retain the geochemical characteristics of their basic, volcano-sedimentary and pelitic protoliths, including a hydrothermal alteration overprint before the subduction process. A large part of the incompatible trace elements remained trapped in the rocks and was recycled within the various metamorphic assemblages stable under changing metamorphic conditions during the subduction process, indicating that devolatilization reactions in massive basic rocks do not necessarily imply significant simultaneous trace element and REE release.
Abstract:
3D dose reconstruction is a means of verifying the delivered absorbed dose. Our aim was to describe and evaluate a 3D dose reconstruction method applied to phantoms in the context of narrow beams. A solid water phantom and a phantom containing a bone-equivalent material were irradiated on a 6 MV linac. The transmitted dose was measured using one array of a 2D ion chamber detector. The dose reconstruction was obtained by an iterative algorithm. A phantom set-up error and interfraction organ motion were simulated to test the algorithm's sensitivity. In all configurations convergence was obtained within three iterations. Local agreement between the reconstructed and planned dose was within 3%/3 mm, except at a few points in the penumbra. The reconstructed primary fluences were consistent with the planned ones, which validates the whole reconstruction process. These results validate our method in a simple geometry and for narrow beams. The method is sensitive to a set-up error of a heterogeneous phantom and to interfraction heterogeneous organ motion.
Abstract:
Intensity-modulated radiotherapy (IMRT) treatment plan verification by comparison with measured data requires access to the linear accelerator and is time consuming. In this paper, we propose a method for monitor unit (MU) calculation and plan comparison for step-and-shoot IMRT based on the Monte Carlo code EGSnrc/BEAMnrc. The beamlets of an IMRT treatment plan are individually simulated using Monte Carlo and converted into absorbed dose to water per MU. The dose of the whole treatment can then be expressed through a linear matrix equation in the MUs and the dose per MU of every beamlet. Because the absorbed dose and MU values must be positive, this equation is solved for the MU values using a non-negative least-squares (NNLS) optimization algorithm. The Monte Carlo plan is formed by multiplying the Monte Carlo absorbed dose to water per MU by the Monte Carlo/NNLS MUs. Treatment plans for several localizations calculated with a commercial treatment planning system (TPS) are compared with the proposed method for validation. The Monte Carlo/NNLS MUs are close to those calculated by the TPS and lead to a treatment dose distribution that is clinically equivalent to the one calculated by the TPS. This procedure can be used for IMRT QA, and further development could allow the technique to be applied to other radiotherapy techniques such as tomotherapy or volumetric modulated arc therapy.
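The core of the MU calculation described above is a non-negative least-squares fit of the per-beamlet dose-per-MU matrix to the planned dose. A minimal sketch using SciPy's `nnls`, with synthetic numbers rather than clinical data (the matrix shapes and values are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import nnls

def fit_monitor_units(dose_per_mu, planned_dose):
    """Solve planned_dose ~= dose_per_mu @ mu with mu >= 0
    (one monitor-unit value per beamlet)."""
    mu, residual = nnls(dose_per_mu, planned_dose)
    return mu, residual

# synthetic example: 50 dose points, 4 beamlets
rng = np.random.default_rng(0)
D = rng.uniform(0.0, 1.0, size=(50, 4))    # dose to water per MU, per beamlet
mu_true = np.array([10.0, 0.0, 5.0, 2.0])  # "planned" MUs (one beamlet off)
d_plan = D @ mu_true                       # dose of the whole treatment
mu_fit, res = fit_monitor_units(D, d_plan)
```

On this consistent synthetic system the fit recovers the original MUs, including the zero for the unused beamlet, which is exactly the property the positivity constraint buys over an unconstrained least-squares solve.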
Abstract:
Contact structure is believed to have a large impact on epidemic spreading, and consequently using networks to model such contact structure continues to gain interest in epidemiology. However, detailed knowledge of the exact contact structure underlying real epidemics is limited. Here we address the question of whether the structure of the contact network leaves a detectable genetic fingerprint in the pathogen population. To this end we compare phylogenies generated by disease outbreaks in simulated populations with different types of contact networks. We find that the shape of these phylogenies depends strongly on contact structure. In particular, measures of tree imbalance allow us to quantify the extent to which the contact structure underlying an epidemic deviates from a null-model contact network, and we illustrate this in the case of random mixing. Using a phylogeny from the Swiss HIV epidemic, we show that this epidemic has a significantly more unbalanced tree than would be expected under random mixing.
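One widely used tree-imbalance measure of the kind referred to above is the Colless index: the sum, over internal nodes of a binary tree, of the absolute difference in leaf counts between the two subtrees. A minimal sketch (the nested-tuple tree encoding is our own illustration; the paper's actual statistics may differ):

```python
def colless(tree):
    """Return (leaf count, Colless index) for a binary tree encoded as
    nested 2-tuples, e.g. ((1, 2), (3, 4)); any non-tuple is a leaf."""
    if not isinstance(tree, tuple):
        return 1, 0
    n_left, i_left = colless(tree[0])
    n_right, i_right = colless(tree[1])
    # imbalance at this node is |n_left - n_right|
    return n_left + n_right, i_left + i_right + abs(n_left - n_right)

balanced = ((1, 2), (3, 4))      # perfectly balanced: index 0
caterpillar = (((1, 2), 3), 4)   # maximally unbalanced for 4 leaves
```

A fully balanced four-leaf tree scores 0, while the "caterpillar" shape reaches the four-leaf maximum of (n-1)(n-2)/2 = 3, which is the kind of contrast used to compare an epidemic's phylogeny against a random-mixing null.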
Abstract:
Degradation of unsaturated fatty acids through the peroxisomal beta-oxidation pathway requires the participation of auxiliary enzymes in addition to the enzymes of the core beta-oxidation cycle. The auxiliary enzyme delta(3,5),delta(2,4)-dienoyl-coenzyme A (CoA) isomerase has been well studied in yeast (Saccharomyces cerevisiae) and mammals, but no plant homolog had been identified and characterized at the biochemical or molecular level. A candidate gene (At5g43280) was identified in Arabidopsis (Arabidopsis thaliana) encoding a protein showing homology to the rat (Rattus norvegicus) delta(3,5),delta(2,4)-dienoyl-CoA isomerase, and possessing an enoyl-CoA hydratase/isomerase fingerprint as well as aspartic and glutamic residues shown to be important for catalytic activity of the mammalian enzyme. The protein, named AtDCI1, contains a peroxisome targeting sequence at the C terminus, and fusion of a fluorescent protein to AtDCI1 directed the chimeric protein to the peroxisome in onion (Allium cepa) cells. AtDCI1 expressed in Escherichia coli was shown to have delta(3,5),delta(2,4)-dienoyl-CoA isomerase activity in vitro. Furthermore, using the synthesis of polyhydroxyalkanoate in yeast peroxisomes as an analytical tool to study the beta-oxidation cycle, expression of AtDCI1 was shown to complement the yeast mutant deficient in the delta(3,5),delta(2,4)-dienoyl-CoA isomerase, thus showing that AtDCI1 is also appropriately targeted to the peroxisome in yeast and has delta(3,5),delta(2,4)-dienoyl-CoA isomerase activity in vivo. The AtDCI1 gene is expressed constitutively in several tissues, but expression is particularly induced during seed germination. Proteins showing high homology with AtDCI1 are found in gymnosperms as well as angiosperms belonging to the Monocotyledon or Dicotyledon classes.
Abstract:
The process of comparing a fingermark recovered from a crime scene with the fingerprint taken from a known individual involves the characterization and comparison of different ridge details on both the mark and the print. Fingerprint examiners commonly classify these characteristics into three groups according to their level of discriminating power: the general pattern of the ridge flow constitutes first-level detail; specific ridge flow and minutiae (e.g. ridge endings, bifurcations) constitute second-level detail; and fine ridge details (e.g. pore positions and shapes) are described as third-level detail. In this study, the reproducibility of a selection of third-level characteristics is investigated. The reproducibility of these features is examined on several recordings of the same finger, first acquired using only optical visualization techniques and second on impressions developed using common fingermark development techniques. Prior to the evaluation of the reproducibility of the considered characteristics, digital images of the fingerprints were recorded at two different resolutions (1000 and 2000 ppi). This allowed the study to also examine the influence of higher resolution on the considered characteristics. It was observed that the increase in resolution did not result in better feature detection or comparison between images. The examination of the reproducibility of a selection of third-level characteristics showed that the most reproducible features were minutiae shapes and pore positions along the ridges.
Abstract:
OBJECTIVE: Accuracy studies of Patient Safety Indicators (PSIs) are critical but limited by the large samples required due to the low occurrence of most events. We tested a sampling design based on test results (verification-biased sampling [VBS]) that minimizes the number of subjects to be verified. METHODS: We considered 3 real PSIs, whose rates were calculated using 3 years of discharge data from a university hospital, and a hypothetical screen for very rare events. Sample size estimates, based on the expected sensitivity and precision, were compared across 4 study designs: random and verification-biased sampling, each with and without constraints on the size of the population to be screened. RESULTS: Over sensitivities ranging from 0.3 to 0.7 and PSI prevalence levels ranging from 0.02 to 0.2, the optimal VBS strategy makes it possible to reduce the sample size by up to 60% in comparison with simple random sampling. For PSI prevalence levels below 1%, the minimal sample size required was still over 5000. CONCLUSIONS: Verification-biased sampling permits substantial savings in the required sample size for PSI validation studies. However, sample sizes still need to be very large for many of the rarer PSIs.
Abstract:
Rare earth elements (REE), while not essential for the physiologic functions of animals, are ingested and incorporated at ppb concentrations in bones and teeth. Nd isotope compositions of modern bones of animals from isotopically distinct habitats demonstrate that the ¹⁴³Nd/¹⁴⁴Nd of the apatite can be used as a fingerprint for bedrock geology or ambient water mass. This potentially allows the provenance and migration of extant vertebrates to be traced, similar to the use of Sr isotopes. Although REE may be enriched by up to 5 orders of magnitude during diagenesis and recrystallization of bone apatite, in vivo ¹⁴³Nd/¹⁴⁴Nd may be preserved in the inner cortex of fossil bones or in enamel. However, tracking the provenance of ancient or extinct vertebrates is possible only for well-preserved archeological and paleontological skeletal remains with in vivo-like Nd contents at the ppb level. Intra-bone and intra-tooth REE analysis can be used to screen for appropriate areas. Large intra-bone Nd concentration gradients of 10¹ to 10³ are often measured. Nd concentrations in the inner bone cortex increase over timescales of millions of years, while bone rims may be enriched over millennial timescales. Nevertheless, εNd values are often similar to within one εNd unit within a single bone. Larger intra-bone differences in specimens may either reflect a partial preservation of in vivo values or changing εNd values of the diagenetic fluid during fossilization. However, most fossil specimens and the outer rims of bones will record taphonomic ¹⁴³Nd/¹⁴⁴Nd incorporated post mortem during diagenesis. Unlike REE patterns, ¹⁴³Nd/¹⁴⁴Nd is not biased by fractionation processes during REE uptake into the apatite crystal lattice; hence the εNd value is an important tracer for taphonomy and reworking. Bones and teeth from autochthonous fossil assemblages show small variations of ±1 εNd unit only.
In contrast, fossil bones and teeth from over 20 different marine and terrestrial fossil sites have a total range of εNd values from -13.0 to 4.9 (n = 80), often matching the composition of the embedding sediment. This implies that the surrounding sediment is the source of Nd in the fossil bones and that the specimens of this study seem not to have been reworked. Differences in εNd values between skeletal remains and the embedding sediment may indicate reworking of fossils and/or REE uptake from a diagenetic fluid with non-sediment-derived εNd values. The latter often applies to fossil shark teeth, which may preserve paleo-seawater values. Complementary to εNd values, ⁸⁷Sr/⁸⁶Sr can help to further constrain fossil provenance and reworking.
Abstract:
Many types of tumors exhibit characteristic chromosomal losses or gains, as well as local amplifications and deletions. Within any given tumor type, sample-specific amplifications and deletions are also observed. Typically, a region that is aberrant in more tumors, or whose copy number change is stronger, would be considered a more promising candidate to be biologically relevant to cancer. We sought an intuitive method to define such aberrations and prioritize them. We define V, the "volume" associated with an aberration, as the product of three factors: (a) the fraction of patients with the aberration, (b) the aberration's length, and (c) its amplitude. Our algorithm compares the values of V derived from the real data to a null distribution obtained by permutations, and yields the statistical significance (p-value) of the measured value of V. We detected genomic locations that were significantly aberrant and combined them with chromosomal arm status (gain/loss) to create a succinct fingerprint of the tumor genome. This genomic fingerprint is used to visualize the tumors, highlighting events that are co-occurring or mutually exclusive. We apply the method to three different public array CGH datasets of Medulloblastoma and Neuroblastoma, and demonstrate its ability to detect chromosomal regions known to be altered in the tested cancer types, as well as to suggest new genomic locations to be tested. We identified a potential new subtype of Medulloblastoma, which is analogous to Neuroblastoma type 1.
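The V statistic and its permutation null can be sketched as follows. The within-sample permutation scheme and the way the three factors are estimated from a copy-number matrix are our own simplifying assumptions, not the authors' exact algorithm:

```python
import numpy as np

def region_volume(matrix, region):
    """V = (fraction of samples aberrant in the region) x (region length)
    x (mean absolute amplitude); rows = samples, columns = probes."""
    lo, hi = region
    seg = matrix[:, lo:hi]
    frac = np.mean(np.abs(seg).max(axis=1) > 0)   # factor (a)
    return frac * (hi - lo) * np.abs(seg).mean()  # (a) x (b) x (c)

def permutation_pvalue(matrix, region, n_perm=1000, seed=0):
    """Compare the observed V against a null distribution obtained by
    independently permuting probe positions within each sample."""
    rng = np.random.default_rng(seed)
    observed = region_volume(matrix, region)
    hits = 0
    for _ in range(n_perm):
        perm = np.array([rng.permutation(row) for row in matrix])
        if region_volume(perm, region) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one smoothing keeps p > 0
```

A region aberrant in every sample at full amplitude yields the maximal V for its length, and permuting the probes scatters the aberration so the observed V is rarely matched, giving a small p-value.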
Abstract:
Recent studies show that the composition of fingerprint residue varies significantly from the same donor as well as between donors. This variability is a major drawback for latent print dating. This study therefore aimed at defining a parameter that is less variable from print to print, using a ratio of the peak area of a target compound that degrades over time divided by the summed peak areas of more stable compounds also found in latent print residues. Gas chromatography-mass spectrometry (GC/MS) analysis of the initial lipid composition of latent prints identifies four main classes of compounds that can be used in the definition of an aging parameter: fatty acids, sterols, sterol precursors, and wax esters (WEs). Although the entities composing the first three groups are quite well known, those composing WEs are poorly reported. Therefore, the first step of the present work was to identify the WE compounds present in latent print residues deposited by different donors. Of 29 WEs recorded in the chromatograms, seven were observed in the majority of samples. The identified WE compounds were subsequently used in the definition of ratios, in combination with squalene and cholesterol, to reduce the variability of the initial composition between latent print residues from different persons and, more particularly, from the same person. Finally, the influence of a latent print enhancement process on the initial composition was studied by analyzing traces after treatment with magnetic powder, 1,2-indanedione, and cyanoacrylate.
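The aging parameter defined above, a degrading target compound normalized by more stable peaks from the same chromatogram, reduces to a simple ratio. A minimal sketch; the compound names in the comment are examples drawn from the abstract, and the exact peak selection is the study's own:

```python
def aging_ratio(target_peak_area, stable_peak_areas):
    """Aging parameter: peak area of a compound that degrades over time
    (e.g. squalene) divided by the summed areas of more stable compounds
    (e.g. selected wax esters) from the same chromatogram."""
    total_stable = sum(stable_peak_areas)
    if total_stable <= 0:
        raise ValueError("no stable peaks to normalize against")
    return target_peak_area / total_stable
```

Normalizing by the stable peaks cancels print-to-print differences in overall deposited quantity, which is what makes the ratio less variable than the raw target peak area.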