908 results for Feature detector
Abstract:
This Master's thesis is dedicated to the simulation of a new p-type pixel strip detector with an enhanced multiplication effect. The work is carried out for upgrades of high-energy physics experiments such as the Super Large Hadron Collider, in particular for the silicon particle-tracking detectors of the Compact Muon Solenoid. These detectors operate in a very harsh radiation environment and must have good radiation hardness. Device-engineering technology for developing more radiation-hard particle detectors is used to minimize radiation degradation. A new detector structure with an enhanced multiplication effect is proposed in this work. The electric field and charge distributions of the conventional and the new p-type detector are studied under reverse bias voltage and irradiation. Finally, the dependence of the anode current on the applied cathode reverse bias voltage under irradiation is obtained. The Silvaco Technology Computer-Aided Design (TCAD) software was used for the simulations: Athena for creating the doping profiles and device structures, and Atlas for obtaining the electrical characteristics of the studied devices. The program code for this software is provided in the appendices.
Abstract:
Adrenocortical tumors (ACT) in children under 15 years of age exhibit clinical and biological features distinct from ACT in adults. Cell proliferation, hypertrophy and cell death in the adrenal cortex during the last months of gestation and the immediate postnatal period seem to be critical for the origin of ACT in children. Studies with large numbers of patients with childhood ACT have indicated a median age at diagnosis of about 4 years. In our institution, the median age was 3 years and 5 months, while the median age at first signs and symptoms was 2 years and 5 months (N = 72). Using the comparative genomic hybridization technique, we have reported a high frequency of 9q34 amplification in adenomas and carcinomas. This finding has more recently been confirmed by investigators in England. The lower socioeconomic status, distinct ethnic composition and other regional characteristics of Southern Brazil relative to the English patients indicate that these differences are not decisive in determining 9q34 amplification. Candidate amplified genes mapped to this locus are currently being investigated, and Southern blot results obtained so far have ruled out amplification of the abl oncogene. Amplification of 9q34 has not been found to be related to tumor size, staging, or malignant histopathological features, nor does it seem to be responsible for the higher incidence of ACT observed in Southern Brazil, but it could be related to an ACT of embryonic origin.
Abstract:
The Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) will undergo a Long Shutdown sometime during 2017 or 2018. During this time there will be maintenance and an opportunity to install new detectors; after the shutdown, the LHC will run at a higher luminosity. A promising new detector type for this high-luminosity phase is the Triple-GEM detector. During the shutdown these detectors will be installed at the Compact Muon Solenoid (CMS) experiment. Triple-GEM detectors are currently being developed at CERN, alongside a readout ASIC for the detector. In this thesis a simulation model was developed for the ASIC's analog front end. The model makes it possible to carry out more extensive simulations and to simulate the whole chip before the design is finished. The proper functioning of the model was verified with simulations, which are also presented in the thesis.
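As a rough illustration of what such a behavioural front-end model can look like, the sketch below assumes a charge-sensitive amplifier followed by a CR-RC semi-Gaussian shaper, a common front-end topology; neither this topology nor any parameter value is taken from the thesis.

```python
# Minimal behavioural model of an analog front end (illustrative only):
# an input charge impulse at t = 0 produces a shaped voltage pulse.
import numpy as np

def shaper_output(t, charge_fc, tau_ns=25.0, gain_mv_per_fc=10.0):
    """Pulse amplitude (mV) at time t (ns) for a charge impulse at t=0."""
    t = np.asarray(t, dtype=float)
    # CR-RC impulse response: (t/tau) * exp(1 - t/tau), unit peak at t = tau
    pulse = np.where(t > 0, (t / tau_ns) * np.exp(1.0 - t / tau_ns), 0.0)
    return gain_mv_per_fc * charge_fc * pulse

t = np.linspace(0.0, 200.0, 400)
v = shaper_output(t, charge_fc=3.0)   # peaks at ~30 mV around t = 25 ns
print(f"peak: {v.max():.1f} mV at t = {t[v.argmax()]:.1f} ns")
```

A behavioural model like this can be evaluated orders of magnitude faster than a transistor-level netlist, which is what makes whole-chip simulation feasible before the design is finished.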
Abstract:
The original contribution of this thesis to knowledge is a set of novel digital readout architectures for hybrid pixel readout chips. The thesis presents an asynchronous bus-based architecture, a data-node-based column architecture and a network-based pixel matrix architecture for data transport. It is shown that the data-node architecture achieves a readout efficiency of 99% at half the output rate of a bus-based system. The network-based solution avoids "broken" columns caused by manufacturing errors, and it distributes internal data traffic more evenly across the pixel matrix than column-based architectures do. An improvement of > 10% in efficiency is achieved with both uniform and non-uniform hit occupancies. The architectural design was done using transaction-level modeling (TLM) and sequential high-level design techniques to reduce design and simulation time, making it possible to simulate tens of column and full-chip architectures. A run-time reduction of more than 10x is observed with these techniques compared to a register-transfer-level (RTL) design flow, along with a 50% reduction in lines of code (LoC) for the high-level models compared to the RTL description. Two of the architectures are then demonstrated in two hybrid pixel readout chips. The first chip, Timepix3, was designed for the Medipix3 collaboration. According to measurements, it consumes < 1 W/cm^2 and delivers up to 40 Mhits/s/cm^2 with 10-bit time-over-threshold (ToT) and 18-bit time-of-arrival (ToA) at 1.5625 ns resolution. The chip uses a token-arbitrated, asynchronous two-phase-handshake column bus for internal data transfer, and it has been successfully used in a multi-chip particle-tracking telescope. The second chip, VeloPix, is a readout chip being designed for the upgrade of the Vertex Locator (VELO) of the LHCb experiment at CERN. Based on simulations, it consumes < 1.5 W/cm^2 while delivering up to 320 Mpackets/s/cm^2, each packet containing up to 8 pixels. VeloPix uses a node-based data fabric achieving a throughput of 13.3 Mpackets/s from the column to the end-of-column (EoC) logic. By combining Monte Carlo physics data with high-level simulations, it is demonstrated that the architecture meets the requirements of the VELO (260 Mpackets/s/cm^2 with an efficiency of 99%).
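As an illustration of the kind of high-level readout modelling described, the toy Python model below (not the thesis's TLM code; every parameter is an illustrative assumption) estimates the readout efficiency of a token-arbitrated column bus with finite pixel buffers: hits that arrive while a pixel's buffer is full are lost.

```python
# Toy high-level model: random hits fill per-pixel buffers, a shared
# column bus drains one hit per cycle under round-robin arbitration.
import random

def column_efficiency(n_pixels=256, n_cycles=20_000,
                      hit_prob=0.003, buffer_depth=2, seed=1):
    """Fraction of generated hits read out before buffer overflow."""
    rng = random.Random(seed)
    buffers = [0] * n_pixels            # hits waiting in each pixel
    generated = lost = 0
    token = 0                           # round-robin arbitration token
    for _ in range(n_cycles):
        for p in range(n_pixels):       # new hits this cycle
            if rng.random() < hit_prob:
                generated += 1
                if buffers[p] < buffer_depth:
                    buffers[p] += 1
                else:
                    lost += 1           # buffer full -> hit lost
        for _ in range(n_pixels):       # bus drains one hit per cycle
            idx, token = token, (token + 1) % n_pixels
            if buffers[idx] > 0:
                buffers[idx] -= 1
                break
    return 1.0 - lost / generated if generated else 1.0

print(f"efficiency ~ {column_efficiency():.4f}")
```

Sweeping parameters such as `hit_prob` or `buffer_depth` in a model at this level of abstraction is what lets tens of architecture variants be compared long before an RTL description exists.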
Abstract:
In a serial feature-positive conditional discrimination procedure, the properties of a target stimulus A are defined by the presence or absence of a feature stimulus X preceding it. In the present experiment, composite features preceded targets associated with operant responses of two different topographies (right and left bar pressing); matching and non-matching-to-sample arrangements were also used. Five water-deprived Wistar rats were trained on 6 different trial types: X-R→Ar and X-L→Al, in which X and A were visual stimuli of the same modality and reinforcement was contingent on pressing either the right (r) or left (l) bar whose light was on during the feature (matching-to-sample); Y-R→Bl and Y-L→Br, in which Y and B were auditory stimuli of the same modality and reinforcement was contingent on pressing the bar whose light was off during the feature (non-matching-to-sample); and A- and B- alone. After 100 training sessions, the animals were submitted to transfer tests with the targets used plus a new one (an auditory click). Average percentages of stimuli followed by a response were measured. Acquisition was complete only for Y-L→Br+; however, complex associations were established over training. Transfer was not complete during the tests, since concurrent effects of extinction and response generalization also occurred. The results suggest the use of both simple conditioning and configurational strategies, favoring the most recent theories of conditional discrimination learning. The implications of using complex arrangements for discussing these theories are considered.
Abstract:
Personalized medicine will revolutionize our capabilities to combat disease. Working toward this goal, a fundamental task is the deciphering of genetic variants that are predictive of complex diseases. Modern studies, in the form of genome-wide association studies (GWAS), have afforded researchers the opportunity to reveal new genotype-phenotype relationships through the extensive scanning of genetic variants. These studies typically contain over half a million genetic features for thousands of individuals. Examining such data with methods other than univariate statistics is a challenging task requiring advanced algorithms that scale to the genome-wide level. In the future, next-generation sequencing (NGS) studies will contain an even larger number of common and rare variants. Machine-learning-based feature selection algorithms have been shown to effectively create predictive models for various genotype-phenotype relationships. This work explores the problem of selecting genetic variant subsets that are most predictive of complex disease phenotypes through various feature selection methodologies, including filter, wrapper and embedded algorithms. The examined machine learning algorithms were demonstrated not only to be effective at predicting the disease phenotypes, but also to do so efficiently through the use of computational shortcuts. While much of the work could be run on high-end desktops, some of it was extended to run on parallel computers, helping to ensure that the methods will also scale to NGS data sets. Further, these studies analyzed the relationships between various feature selection methods and demonstrated the need for careful testing when selecting an algorithm. It was shown that there is no universally optimal algorithm for variant selection in GWAS; rather, methodologies need to be selected based on the desired outcome, such as the number of features to be included in the prediction model. It was also demonstrated that without proper model validation, for example using nested cross-validation, models can yield overly optimistic prediction accuracies and decreased generalization ability. It is through the implementation and application of machine learning methods that one can extract predictive genotype-phenotype relationships and biological insights from genetic data sets.
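As an illustration of the validation point above, the sketch below (scikit-learn on synthetic data, not the thesis's pipeline) nests a filter-based feature selection step inside cross-validation, so the reported accuracy is never inflated by selecting features on the test folds.

```python
# Nested cross-validation: feature selection and hyperparameter tuning
# happen inside each training fold; the outer folds stay untouched.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline

# synthetic stand-in for a genotype matrix: many features, few informative
X, y = make_classification(n_samples=300, n_features=5000,
                           n_informative=20, random_state=0)

pipe = Pipeline([
    ("filter", SelectKBest(f_classif)),           # univariate filter step
    ("clf", LogisticRegression(max_iter=1000)),   # downstream classifier
])
inner = GridSearchCV(pipe, {"filter__k": [10, 50, 200]}, cv=3)

# the outer loop scores the *whole* selection-plus-tuning procedure
scores = cross_val_score(inner, X, y, cv=5)
print(f"nested-CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Running `SelectKBest` on the full dataset before splitting would, by contrast, leak test-fold information into the model and produce exactly the overly optimistic accuracies the abstract warns about.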
Abstract:
In this work, the INCA Feature image-analysis software used for analyzing particle size distributions was tested. Particle size distributions were determined from electron-microscope projection images using the INCA Feature program for talc used as a coating pigment and for two different carbonate grades. In addition, particle size distributions were determined for the silica and alumina particles used as aids in filtration and purification. The distributions determined with the image-analysis program were compared with those analyzed with the SediGraph 5100 analyzer, based on particle settling velocity (sedimentation), and with the Coulter LS 230 method, based on laser diffraction. The SediGraph 5100 and the image-analysis program gave very similar mean values for the size distribution of the talc particles, whereas the mean given by the Coulter LS 230 instrument deviated from both. All of the compared methods ranked the particles of the different samples in the same size order. However, the results of the methods cannot be compared numerically, because each analysis method measures particle size through a different property of the particle. Based on this work, all of the tested analysis methods are suitable for determining the particle size distributions of paper pigments. This work also determined the number of particles needed for a reliable image-analysis result: at least 300 particles must be analyzed. Too large a sample quantity increases the scatter of the size distribution and extends the analysis time to several hours. Sample preparation requires further study, as it is the most important and most critical step of particle size analysis performed with SEM and an image-analysis program. The spread of automated microscopes will make these analyses easier and faster, so the popularity of the method will grow in paper-pigment research as well. The high price of the instruments and the special expertise required of the user will, at least for now, limit their use to research institutes.
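As an illustration of the projection-image measurement described (the sketch below is not the INCA Feature software; the toy image and the scale factor are assumptions), particle areas from a binary SEM image can be converted to equivalent-circle diameters, with a warning when fewer than the recommended 300 particles are available.

```python
# Equivalent-circle diameters from a binary particle image.
import numpy as np
from skimage import measure

def size_distribution(binary_image, um_per_px):
    """Equivalent-circle diameters (um) of the labelled particles."""
    labels = measure.label(binary_image)            # connected components
    areas = np.array([r.area for r in measure.regionprops(labels)])
    d = 2.0 * np.sqrt(areas / np.pi) * um_per_px    # area -> circle diameter
    if d.size < 300:                                # finding of this work
        print(f"warning: only {d.size} particles; >= 300 recommended")
    return d

# toy usage: sparse random foreground pixels, assumed 0.05 um per pixel
rng = np.random.default_rng(0)
img = rng.random((512, 512)) > 0.995
print(f"mean diameter: {size_distribution(img, um_per_px=0.05).mean():.3f} um")
```

Note that this measures a projected-area-equivalent size, which is exactly why such results cannot be compared numerically with sedimentation- or diffraction-based diameters.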
Abstract:
The Beckman Helium Discharge Detector has been found to be sensitive to the fixed gases oxygen, nitrogen and hydrogen at detection levels 10-100 times more sensitive than possible with a Gow-Mac Thermal Conductivity Detector. Detection levels of approximately 1.9 x 10^-4 % v/v oxygen, 3.1 x 10^-4 % v/v nitrogen, and 3.0 x 10^-3 % v/v hydrogen are estimated. The response of the Helium Discharge Detector was not linear, but it is usable for quantitation over limited concentration ranges using suitably prepared working standards. Cleanliness of the detector discharge electrodes and purity of the helium carrier and discharge gas were found to be critical to the operation of the detector. Higher sensitivities of the Helium Discharge Detector may be possible through the design and installation of a sensitive, solid-state electrometer.
Abstract:
A feature-based fitness function is applied in a genetic programming system to synthesize stochastic gene regulatory network models whose behaviour is defined by a time course of protein expression levels. Typically, when targeting time-series data, the fitness function is based on a sum of errors over the values of the fluctuating signal. While this approach is successful in many instances, its performance can deteriorate in the presence of noise. This thesis explores a fitness measure determined from a set of statistical features characterizing the time series' sequence of values, rather than the actual values themselves. Through a series of experiments involving symbolic regression with added noise and gene regulatory network models based on the stochastic π-calculus, the measure is shown to successfully target both oscillating and non-oscillating signals. This practical and versatile fitness function offers an alternative approach, worthy of consideration for use in algorithms that evaluate noisy or stochastic behaviour.
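As an illustration, the sketch below uses an assumed, minimal feature set (not the one from the thesis) to score a candidate time series against a target by comparing summary statistics instead of point-by-point values, so additive noise perturbs the score far less than a sum-of-errors would.

```python
# Feature-based fitness: compare statistical signatures, not raw values.
import numpy as np

def features(x):
    """Statistical signature of a time series (illustrative choice)."""
    x = np.asarray(x, dtype=float)
    m, v = x.mean(), x.var()
    lag1 = ((x[:-1] - m) * (x[1:] - m)).mean() / (v + 1e-12)      # autocorr.
    crossings = np.abs(np.diff(np.sign(x - m))).sum() / (2.0 * len(x))
    return np.array([m, x.std(), lag1, crossings])  # crossings ~ oscillation

def fitness(candidate, target):
    """Smaller is better: distance between the two feature vectors."""
    return float(np.linalg.norm(features(candidate) - features(target)))

t = np.linspace(0.0, 20.0, 400)
target = np.sin(t)
noisy = np.sin(t) + np.random.normal(0.0, 0.2, t.size)
print(fitness(noisy, target))   # stays small despite the added noise
```

A sum-of-errors fitness on `noisy` versus `target` would accumulate the full noise energy, whereas the mean, spread, autocorrelation and crossing rate of the noisy signal remain close to those of the clean one.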
Abstract:
The curse of dimensionality is a major problem in the fields of machine learning, data mining and knowledge discovery. Exhaustive search for the optimal subset of relevant features in a high-dimensional dataset is NP-hard. Sub-optimal population-based stochastic algorithms such as GP and GA are good choices for searching through large search spaces, and are usually more feasible than exhaustive and deterministic search algorithms. On the other hand, population-based stochastic algorithms often suffer from premature convergence on mediocre sub-optimal solutions. The Age Layered Population Structure (ALPS) is a novel metaheuristic for overcoming premature convergence in evolutionary algorithms and for improving search in the fitness landscape. The ALPS paradigm uses an age measure to control breeding and competition between individuals in the population. This thesis uses a modification of the ALPS GP strategy called Feature Selection ALPS (FSALPS) for feature subset selection and classification in varied supervised learning tasks. FSALPS uses a novel frequency-count system to rank features in the GP population based on evolved feature frequencies. The ranked features are translated into probabilities, which are used to control evolutionary processes such as terminal-symbol selection for the construction of GP trees and subtrees. The FSALPS metaheuristic continuously refines the feature subset selection process while simultaneously evolving efficient classifiers through a non-converging evolutionary process that favors selection of features with high discrimination of class labels. We investigated and compared the performance of canonical GP, ALPS and FSALPS on high-dimensional benchmark classification datasets, including a hyperspectral image. Using Tukey's HSD test following ANOVA at the 95% confidence level, ALPS and FSALPS dominated canonical GP in evolving smaller but efficient trees with fewer bloated expressions. FSALPS significantly outperformed canonical GP, ALPS and some feature selection strategies reported in the related literature on dimensionality reduction.
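As an illustration of the frequency-to-probability mechanism described, the sketch below (assumed details, not the FSALPS implementation) converts evolved feature frequencies into a sampling distribution used when picking terminal symbols for new GP trees.

```python
# FSALPS-style terminal selection: features that occur often in the
# evolved population become proportionally more likely to be chosen.
import random
from collections import Counter

def terminal_probabilities(population_trees, all_features, floor=0.01):
    """Turn evolved feature frequencies into a sampling distribution."""
    counts = Counter(f for tree in population_trees for f in tree)
    weights = {f: counts.get(f, 0) + floor for f in all_features}
    total = sum(weights.values())  # the floor keeps unused features selectable
    return {f: w / total for f, w in weights.items()}

def sample_terminal(probs, rng=random):
    """Pick a terminal symbol according to the evolved probabilities."""
    feats, ps = zip(*probs.items())
    return rng.choices(feats, weights=ps, k=1)[0]

# toy usage: each 'tree' is reduced to the feature names it references
population = [["x3", "x7", "x3"], ["x7"], ["x1", "x7"]]
probs = terminal_probabilities(population, [f"x{i}" for i in range(10)])
print(sample_terminal(probs))   # 'x7' is the most likely draw
```

The small probability floor is one way to keep the process non-converging, in the spirit of the abstract: features absent from the current population can still re-enter later trees.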
Abstract:
New Feature at Niagara – Clark Hill Islands (5 islands situated in the rapids of the Niagara River); these islands are currently known as Dufferin Islands. 22 ½ cm x 15 ½ cm, n.d.
Abstract:
Thesis (Master's degree in Chemical Sciences with a Specialization in Analytical Chemistry), U.A.N.L.
Abstract:
This Master's thesis presents a new unsupervised approach for detecting and segmenting urban regions in hyperspectral images. The proposed method requires three steps. First, to reduce the computational cost of our algorithm, a color image of the spectral content is estimated. To this end, a non-linear dimensionality-reduction step, based on two complementary but conflicting criteria of good visualization, namely accuracy and contrast, is performed for the color display of each hyperspectral image. Then, to discriminate urban regions from non-urban regions, the second step consists of extracting a few discriminant (and complementary) features from this color hyperspectral image. To this end, we extracted a series of discriminant parameters describing the characteristics of an urban area, mainly composed of man-made objects with simple, regular geometric shapes. We used textural features based on gray levels, gradient magnitude, or parameters derived from the co-occurrence matrix, combined with structural features based on the local orientation of the image gradient and the local detection of line segments. To further reduce the computational complexity of our approach and to avoid the "curse of dimensionality" that arises when clustering high-dimensional data, we decided, in the last step, to classify each textural or structural feature individually with a simple K-means procedure and then to combine these coarse segmentations, obtained at low cost, with an efficient segmentation-map fusion model. The experiments reported here show that this strategy is visually effective and compares favorably with other methods for detecting and segmenting urban areas in hyperspectral images.
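As an illustration of the final step, the sketch below (Python with scikit-learn; a simple pixel-wise majority vote stands in for the thesis's segmentation-map fusion model) clusters each feature map individually with K-means and then combines the coarse segmentations.

```python
# Per-feature K-means followed by a simple fusion of the coarse maps.
import numpy as np
from sklearn.cluster import KMeans

def segment_urban(feature_maps, k=2):
    """feature_maps: list of (H, W) arrays, one per texture/structure feature."""
    votes = []
    for fmap in feature_maps:
        flat = fmap.reshape(-1, 1)
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=0).fit_predict(flat)
        # assumption: the cluster with the higher mean response is 'urban'
        means = [flat[labels == c].mean() for c in range(k)]
        votes.append((labels == int(np.argmax(means))).astype(int))
    fused = (np.mean(votes, axis=0) > 0.5).astype(int)  # majority vote
    return fused.reshape(feature_maps[0].shape)

# toy usage: three noisy feature maps that agree on one bright square
base = np.zeros((64, 64)); base[16:48, 16:48] = 1.0
maps = [base + np.random.normal(0.0, 0.2, base.shape) for _ in range(3)]
print(segment_urban(maps).sum())   # ~ 32*32 pixels flagged as 'urban'
```

Clustering each one-dimensional feature separately keeps every K-means run cheap and sidesteps the high-dimensional clustering the abstract warns about; the fusion step then recovers a single consensus segmentation.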
Abstract:
Using the most recent data collected by the ATLAS detector in pp collisions at 7 and 8 TeV at the LHC, this thesis establishes stringent constraints on a multitude of models beyond the Standard Model (SM) of particle physics. In particular, two types of hypothetical particles, present in various theoretical models but absent from the SM, are studied and probed. The first type is vector-like quarks (VLQ) produced in pp collisions through electroweak couplings with the light quarks u and d. These VLQ are searched for in their decays into a W or Z boson and a light quark. Theoretical arguments establish that, under certain reasonable conditions, single production would dominate pair production of VLQ. The particular topology of single-production VLQ events then allows efficient optimization techniques to extract them from the electroweak backgrounds. The second type of particle searched for is one that decays into WZ, with the gauge bosons W and Z decaying leptonically. The final states detected by ATLAS are therefore events with three leptons and missing transverse energy. The invariant-mass distribution of these objects is then examined to determine whether new resonances are present, which would manifest themselves as a localized excess. Although at first sight these two new types of particles have very little in common, both are in fact closely linked to electroweak symmetry breaking. In several theoretical models, the hypothetical existence of VLQ is proposed to cancel the top-quark contributions to the radiative corrections to the mass of the SM Higgs boson. In parallel, other models predict WZ resonances while suggesting that the Higgs is a composite particle, thereby upending the entire Higgs sector of the SM. The two analyses presented in this thesis thus have a fundamental link with the very nature of the Higgs boson, broadening our knowledge of the origin of the intrinsic mass of particles. In the end, neither analysis observed a significant excess in its signal regions, which makes it possible to set limits on the production cross-section as a function of the resonance mass.
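For reference, a resonance search of this kind looks for a localized excess in the invariant mass of the reconstructed decay products, m = sqrt((ΣE)^2 - |Σp|^2). The minimal sketch below computes this quantity from four-momenta; the numerical values are purely illustrative (natural units, GeV).

```python
# Invariant mass of a set of reconstructed four-momenta (E, px, py, pz).
import math

def invariant_mass(four_momenta):
    """m = sqrt((sum E)^2 - |sum p|^2), clipped at zero for round-off."""
    E, px, py, pz = (sum(c) for c in zip(*four_momenta))
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# usage with two illustrative lepton four-momenta (GeV)
leptons = [(50.0, 20.0, 10.0, 42.0), (60.0, -15.0, 5.0, -55.0)]
print(f"m = {invariant_mass(leptons):.1f} GeV")
```

Histogramming this quantity over many events gives the distribution in which a new WZ resonance would appear as a localized bump above the smooth SM background.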