864 results for pacs: data handling techniques
Abstract:
This dipteran, a native Brazilian insect, has become a valuable model system for developmental biology research because it provides an interesting opportunity to study a different type of insect oogenesis. Sequences from a cDNA library constructed with poly(A)+ RNA from the ovaries of larvae at different ages were analyzed. Molecular characterization confirmed interesting findings, such as the presence of nanos. The gene encodes a conserved RNA-binding protein that is required during early development for the maintenance and division of the primordial germ cells of Diptera. Nanos plays an important role in specifying the posterior regions of insect embryos and is important for abdomen formation. In the present work, we describe the spatial and temporal expression profiles of this important gene, which is involved in oogenesis and early development. Data mining techniques were used to obtain the complete sequence of nanos. Bioinformatic tools were used to determine the following: (1) the secondary structure of the 3'-untranslated region of the mRNA, (2) the protein encoded by the isolated gene, (3) the conserved zinc-finger domains of the Nanos protein, and (4) phylogenetic relationships. Furthermore, RNA in situ hybridization and immunolocalization were used to determine mRNA and protein expression in the tissues studied and to define nanos as a germ-cell molecular marker.
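As an aside on point (3), conserved zinc-finger domains of this kind can be located with a simple motif scan. The following Python sketch is purely illustrative: the CCHC spacing pattern and the protein fragment are assumptions for demonstration, not the sequence or pipeline from this study.

```python
import re

# Toy example: scan a protein sequence for CCHC-type zinc-finger motifs,
# the kind of conserved domain found in Nanos-family proteins.
# The spacing pattern (C-x2-C-x(10-14)-H-x(3-7)-C) is an illustrative assumption.
CCHC_PATTERN = re.compile(r"C.{2}C.{10,14}H.{3,7}C")

def find_cchc_motifs(protein):
    """Return (start, end, motif) tuples for candidate CCHC zinc fingers."""
    return [(m.start(), m.end(), m.group()) for m in CCHC_PATTERN.finditer(protein)]

if __name__ == "__main__":
    # Hypothetical fragment, not the real Nanos sequence.
    fragment = "MQRTCEYCRNNAGESVLTSKHDALKWCEPSGN"
    for start, end, motif in find_cchc_motifs(fragment):
        print(f"candidate zinc finger at {start}-{end}: {motif}")
```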
Abstract:
Statement of problem: Resin cements are widely used to cement intraradicular posts, but bond strength is significantly influenced by the technique and material used for cementation. Purpose: The purpose of this study was to evaluate the bond strength of 3 self-adhesive cements used to cement intraradicular glass fiber posts. The cements all required different application and handling techniques. Material and methods: Forty-five human maxillary canines were selected and randomly divided into 3 groups (n=15) by drawing lots: Group BIS – Biscem, Group BRE – Breeze, and Group MAX – Maxcem. Each group was divided into 3 subgroups according to application and handling technique: Subgroup A – Automix/Point tip applicator, Subgroup L – Handmix/Lentulo, and Subgroup C – Handmix/Centrix. Cementation of the posts was performed according to the manufacturers' instructions. The push-out test was performed at a crosshead speed of 0.5 mm/min, and bond strength was expressed in megapascals. The results were evaluated by 2-way ANOVA and all pairwise multiple comparison procedures (Tukey test) (α=.05). Results: Breeze cement showed the highest average bond strength for subgroups A, L, and C when compared to Biscem and Maxcem Elite (P<.05). Statistically significant differences among the subgroups were observed only for Biscem. Conclusions: This study shows that application and handling techniques may influence the bond strength of different self-adhesive cements used for intraradicular post cementation.
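The statistical design described here (cement × technique factors, α=.05) corresponds to a standard two-way ANOVA followed by Tukey's HSD test. A minimal sketch in Python, using hypothetical bond-strength values rather than the study's data:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical push-out bond strengths in MPa; NOT the study's data.
data = pd.DataFrame({
    "cement":    ["BIS"] * 6 + ["BRE"] * 6 + ["MAX"] * 6,
    "technique": ["A", "A", "L", "L", "C", "C"] * 3,
    "mpa":       [6.1, 5.8, 4.9, 5.2, 5.5, 5.9,
                  9.8, 10.2, 9.1, 9.5, 9.9, 10.4,
                  6.5, 6.9, 6.2, 6.6, 6.3, 6.8],
})

# Two-way ANOVA with interaction, matching the study's design.
model = smf.ols("mpa ~ C(cement) * C(technique)", data=data).fit()
print(anova_lm(model, typ=2))

# Tukey all-pairwise comparison on the cement factor (alpha = .05).
print(pairwise_tukeyhsd(data["mpa"], data["cement"], alpha=0.05))
```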
Abstract:
The Space Telescope Imaging Spectrograph (STIS) has been on orbit for approximately 16 years as one of the second-generation instruments on the Hubble Space Telescope (HST). Its operations were interrupted by an electronics failure in 2004, but STIS was successfully repaired in May 2009 during Servicing Mission 4 (SM4), allowing it to resume science observations. The instrument team continues to monitor its performance and to work towards improving the quality of its products. Here we present updated information on the status of the FUV and NUV MAMA and the CCD detectors onboard STIS and describe recent changes to the STIS calibration pipeline. We also discuss the status of efforts to apply a pixel-based correction for charge transfer inefficiency (CTI) effects to STIS CCD data. These techniques show promise for ameliorating the effects of ongoing radiation damage on the quality of STIS CCD data.
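Pixel-based CTI correction generally works by forward-modelling how charge traps trail signal during readout and then iteratively inverting that model. The sketch below is a deliberately simplified toy version of that idea (a single trap species, linear capture), not the actual STIS pipeline algorithm:

```python
import numpy as np

def add_cti_trails(column, trap_fraction=0.02, release=0.5):
    """Toy forward model: during readout each transfer captures a small
    fraction of the pixel's charge and releases it into trailing pixels."""
    out = column.astype(float).copy()
    trapped = 0.0
    for i in range(len(out)):
        released = trapped * release       # traps release into this pixel
        captured = out[i] * trap_fraction  # traps capture from this pixel
        out[i] += released - captured
        trapped += captured - released
    return out

def correct_cti(observed, n_iter=5, **model_kw):
    """Iterative inversion: refine an estimate of the original column so
    that the forward model reproduces the observed (trailed) data."""
    estimate = observed.astype(float).copy()
    for _ in range(n_iter):
        estimate += observed - add_cti_trails(estimate, **model_kw)
    return estimate

# Demo on a synthetic column with two point sources.
truth = np.zeros(50)
truth[[10, 30]] = 1000.0
trailed = add_cti_trails(truth)
recovered = correct_cti(trailed)
print("max residual:", np.abs(recovered - truth).max())
```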
Abstract:
University Master's Degree in Intelligent Systems and Numerical Applications in Engineering (SIANI).
Abstract:
Final degree project for the double degree program in Computer Engineering and Business Administration and Management.
Abstract:
Ambient Intelligence (AmI) envisions a world where smart electronic environments are aware of and responsive to their context. People moving through these settings engage many computational devices and systems simultaneously, even if they are not aware of their presence. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication, and natural interfaces. The dependence on a large number of fixed and mobile sensors embedded in the environment makes Wireless Sensor Networks (WSNs) one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes: simple devices that typically embed a low-power computational unit (microcontroller, FPGA, etc.), a wireless communication unit, one or more sensors, and some form of energy supply (either batteries or energy-scavenging modules). Low cost, low computational power, low energy consumption, and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To handle the large amount of data generated by a WSN, several multisensor data fusion techniques have been developed. The aim of multisensor data fusion is to combine data to achieve better accuracy and inferences than could be achieved by the use of a single sensor alone. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Multimodal Surveillance and Activity Recognition. Novel techniques to handle data from a network of low-cost, low-power Pyroelectric InfraRed (PIR) sensors are presented. Such techniques allow the detection of the number of people moving in the environment, their direction of movement, and their position. We discuss how a mesh of PIR sensors can be integrated with a video surveillance system to increase its performance in people tracking. Furthermore, we embed a PIR sensor within the design of a Wireless Video Sensor Node (WVSN) to extend its lifetime. Activity recognition is a fundamental building block of natural interfaces. A challenging objective is to design an activity recognition system that is able to exploit a redundant but unreliable WSN. We present our work in building a novel activity recognition architecture for such a dynamic system. The architecture has a hierarchical structure in which simple nodes perform gesture classification and a high-level meta-classifier fuses a changing number of classifier outputs. We demonstrate the benefits of this architecture in terms of increased recognition performance and robustness to faults and noise. Furthermore, we show how network lifetime can be extended through a performance-power trade-off. Smart objects can enhance the user experience within smart environments. We present our work in extending the capabilities of the Smart Micrel Cube (SMCube), a smart object used as a tangible interface within a tangible computing framework, through the development of a gesture recognition algorithm suitable for this device of limited computational power. Finally, the development of activity recognition techniques can greatly benefit from the availability of shared datasets. We report our experience in building a dataset for activity recognition. The dataset is freely available to the scientific community for research purposes and can be used as a testbench for developing, testing, and comparing different activity recognition techniques.
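The fusion step of the hierarchical architecture can be illustrated with a minimal sketch: a meta-classifier combining however many node-level outputs happen to be available, each weighted by an assumed reliability. The labels and weights below are illustrative, not values from the dissertation:

```python
from collections import defaultdict

def fuse_node_outputs(node_outputs):
    """Meta-classifier fusing a variable number of sensor-node classifiers.

    node_outputs: list of (predicted_gesture, reliability_weight) pairs;
    nodes that are asleep or faulty simply do not report this round.
    """
    scores = defaultdict(float)
    for label, weight in node_outputs:
        scores[label] += weight
    return max(scores, key=scores.get) if scores else None

# Three nodes awake this round, one faulty node silent (hypothetical values).
print(fuse_node_outputs([("circle", 0.9), ("swipe", 0.4), ("circle", 0.7)]))
# -> "circle": the agreement of reliable nodes outweighs the dissenting one.
```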
Abstract:
The 'stuffed high-quartz' β-eucryptite (LiAlSiO4) is known for its exceptional anisotropic Li-ion conductivity and its near-zero thermal expansion. The temperature-dependent β-eucryptite phase sequence was investigated, in particular the modulated phase. Its satellite reflections are considerably broadened compared to the normal reflections and overlap with one another, as well as with the 'a-reflections' lying between them, to form triplets. Established standard procedures for diffraction data collection proved unsuitable for separating the triplet intensities. A novel procedure, 'axial q-scans', was developed. Intensities were extracted serially and automatically from 2000 profiles with the newly developed least-squares program GKLS and successfully scaled to standard data. The use of broadened reflection profiles proved admissible. The broadening was attributed to a reduced long-range order of the modulation of 11 to 16 periods (analysis with the lattice function), which corresponds to an unusual diffraction-angle-dependent behavior of the reflection widths and correlates with typical antiphase-domain diameters reported by other authors. Reduced Si/Al ordering is regarded as the cause of the small domain sizes and reduced long-range order, as well as of properties such as a/c ratios, expansion coefficients, ionic conductivity, structure type, and transformation temperatures. Changes in SiO2 content, temperature, or Si/Al ordering produce similar effects for some properties. The averaged structure of the modulated phase was reliably determined for the first time, the role of Li was characterized, doubts about the hexagonal symmetry of β-eucryptite were dispelled, and the determination of the modulated structure was largely prepared.
Abstract:
In the last few years, the vision of our connected and intelligent information society has evolved to embrace novel technological and research trends. The diffusion of ubiquitous mobile connectivity and advanced handheld portable devices has amplified the importance of the Internet as the communication backbone for the delivery of services and data. The diffusion of mobile and pervasive computing devices, featuring advanced sensing technologies and processing capabilities, has triggered the adoption of innovative interaction paradigms: touch-responsive surfaces, tangible interfaces, and gesture or voice recognition are finally entering our homes and workplaces. We are experiencing the proliferation of smart objects and sensor networks, embedded in our daily lives and interconnected through the Internet. This ubiquitous network of always-available interconnected devices is enabling new applications and services, ranging from enhancements to home and office environments to remote healthcare assistance and the birth of smart environments. This work will present some evolutions in the hardware and software development of embedded systems and sensor networks. Different hardware solutions will be introduced, ranging from smart objects for interaction to advanced inertial sensor nodes for motion tracking, with a focus on system-level design. They will be accompanied by the study of innovative data processing algorithms developed and optimized to run on board the embedded devices. Gesture recognition, orientation estimation, and data reconstruction techniques for sensor networks will be introduced and implemented, with the goal of maximizing the tradeoff between performance and energy efficiency. Experimental results will provide an evaluation of the accuracy of the presented methods and validate the efficiency of the proposed embedded systems.
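Orientation estimation on such resource-constrained nodes is often approached with a complementary filter, which fuses fast-but-drifting gyroscope integration with noisy-but-absolute accelerometer tilt. A minimal single-axis sketch, assuming a 100 Hz sample rate and a 0.98 gain (not the thesis' actual algorithm):

```python
import math

ALPHA = 0.98   # assumed filter gain: trust gyro short-term, accel long-term
DT = 0.01      # assumed 100 Hz sample rate

def complementary_filter(pitch, gyro_rate, ax, az):
    """One filter step for the pitch axis.

    pitch: previous estimate (rad); gyro_rate: angular rate (rad/s);
    ax, az: accelerometer readings (g) used as a gravity reference.
    """
    accel_pitch = math.atan2(ax, az)     # absolute but noisy
    gyro_pitch = pitch + gyro_rate * DT  # smooth but drifts over time
    return ALPHA * gyro_pitch + (1 - ALPHA) * accel_pitch

# Hypothetical stream of samples: (gyro rad/s, accel x, accel z).
pitch = 0.0
for gyro, ax, az in [(0.10, 0.02, 0.99), (0.12, 0.05, 0.98), (0.08, 0.07, 0.99)]:
    pitch = complementary_filter(pitch, gyro, ax, az)
print(f"estimated pitch: {math.degrees(pitch):.2f} deg")
```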
Abstract:
Because of the potentially irreversible impact of groundwater quality deterioration in the Ferrara coastal aquifer, answers concerning the assessment of the extent of the salinization problem, the understanding of the mechanisms governing salinization processes, and the sustainability of the current water resources management are urgent. In this light, the present thesis aims to achieve the following objectives:
- Characterization of the lowland coastal aquifer of Ferrara: hydrology, hydrochemistry and evolution of the system
- The importance of data acquisition techniques in saltwater intrusion monitoring
- Predicting salinization trends in the lowland coastal aquifer
- Ammonium occurrence in a salinized lowland coastal aquifer
- Trace elements mobility in a saline coastal aquifer
Abstract:
Information is nowadays a key resource: machine learning and data mining techniques have been developed to extract high-level information from great amounts of data. As most data comes in the form of unstructured text in natural languages, research on text mining is currently very active and dealing with practical problems. Among these, text categorization deals with the automatic organization of large quantities of documents into predefined taxonomies of topic categories, possibly arranged in large hierarchies. In commonly proposed machine learning approaches, classifiers are automatically trained from pre-labeled documents: they can perform very accurate classification, but often require a sizable training set and notable computational effort. Methods for cross-domain text categorization have been proposed that allow a set of labeled documents from one domain to be leveraged to classify those of another. Most methods use advanced statistical techniques, usually involving the tuning of parameters. A first contribution presented here is a method based on nearest-centroid classification, where profiles of categories are generated from the known domain and then iteratively adapted to the unknown one. Despite being conceptually simple and having easily tuned parameters, this method achieves state-of-the-art accuracy on most benchmark datasets with fast running times. A second, deeper contribution involves the design of a domain-independent model to distinguish the degree and type of relatedness between arbitrary documents and topics, inferred from the different types of semantic relationships between their representative words, identified by specific search algorithms. The application of this model is tested on both flat and hierarchical text categorization, where it potentially allows the efficient addition of new categories during classification. Results show that classification accuracy still requires improvement, but models generated from one domain prove effectively reusable in a different one.
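The first contribution can be sketched as follows: build category centroids from the labeled source documents, classify the unlabeled target documents by cosine similarity, rebuild the centroids from those predictions, and iterate until the labels stabilize. The adaptation and stopping choices in this Python sketch are illustrative assumptions, not the exact method of the thesis:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def class_centroids(X, labels, classes, fallback=None):
    """Mean tf-idf vector per class; keep the old centroid if a class empties."""
    rows = []
    for k, c in enumerate(classes):
        idx = [i for i, y in enumerate(labels) if y == c]
        rows.append(np.asarray(X[idx].mean(axis=0)).ravel() if idx else fallback[k])
    return np.vstack(rows)

def cross_domain_classify(source_docs, source_labels, target_docs, n_iter=10):
    vec = TfidfVectorizer(stop_words="english")
    X_src = vec.fit_transform(source_docs)
    X_tgt = vec.transform(target_docs)
    classes = sorted(set(source_labels))
    # Initial category profiles from the known (labeled) domain.
    centroids = class_centroids(X_src, source_labels, classes)
    labels = None
    for _ in range(n_iter):
        pred = np.argmax(cosine_similarity(X_tgt, centroids), axis=1)
        new_labels = [classes[k] for k in pred]
        if new_labels == labels:
            break  # predictions stabilized
        labels = new_labels
        # Adapt: rebuild centroids from the target domain's own predictions.
        centroids = class_centroids(X_tgt, labels, classes, fallback=centroids)
    return labels

if __name__ == "__main__":
    # Tiny hypothetical corpora; real benchmarks are far larger.
    src = ["the team won the match", "stocks fell on weak earnings",
           "the coach praised the striker", "the market rallied today"]
    src_y = ["sport", "finance", "sport", "finance"]
    tgt = ["the striker won the penalty", "earnings fell and stocks dropped"]
    print(cross_domain_classify(src, src_y, tgt))
```

Note that fitting the vectorizer on the source corpus alone, as done here for brevity, discards target-only vocabulary; a fuller implementation would build the term space from both domains.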
Abstract:
Autism Spectrum Disorders (ASDs) describe a set of neurodevelopmental disorders. ASD represents a significant public health problem. Currently, ASDs are not diagnosed before the 2nd year of life, but early identification of ASDs would be crucial, as early interventions are much more effective than specific therapies started in later childhood. To this aim, cheap and contactless automatic approaches have recently aroused great clinical interest. Among them, the cry and the movements of the newborn, both involving the central nervous system, have been proposed as possible indicators of neurological disorders. This PhD work is a first step towards solving this challenging problem. An integrated system is presented enabling the recording of audio (crying) and video (movement) data of the newborn, their automatic analysis with innovative techniques for the extraction of clinically relevant parameters, and their classification with data mining techniques. New robust algorithms were developed for the selection of the voiced parts of the cry signal, the estimation of acoustic parameters based on the wavelet transform, and the analysis of the infant's general movements (GMs) through a new body model for segmentation and 2D reconstruction. In addition to a thorough literature review, this thesis presents the state of the art on these topics, which shows that no studies exist concerning normative ranges for newborn infant cry in the first 6 months of life, nor the correlation between cry and movements. Using the new automatic methods, a population of control infants ("low-risk", LR) was compared to a group of "high-risk" (HR) infants, i.e., siblings of children already diagnosed with ASD. A subset of LR infants clinically diagnosed as newborns with typical development (TD) and one infant affected by ASD were also compared. The results show that the selected acoustic parameters allow good differentiation between the two groups. This result opens new perspectives, both diagnostic and therapeutic.
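Voiced-part selection in cry analysis is classically based on short-time energy thresholding, which the following sketch illustrates; the frame length and threshold factor are assumptions for the demonstration, and the thesis' robust algorithm is certainly more elaborate:

```python
import numpy as np

def voiced_frames(signal, sr, frame_ms=20, threshold_factor=0.5):
    """Mark frames whose short-time energy exceeds an adaptive threshold."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames.astype(float) ** 2).mean(axis=1)
    threshold = threshold_factor * energy.mean()
    return energy > threshold

# Synthetic demo: 0.5 s of low-level noise with a louder tonal burst inside,
# standing in for a cry segment.
sr = 8000
signal = 0.01 * np.random.randn(sr // 2)
signal[1000:3000] += 0.5 * np.sin(2 * np.pi * 440 * np.arange(2000) / sr)
mask = voiced_frames(signal, sr)
print(f"{mask.sum()} of {mask.size} frames marked voiced")
```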
Abstract:
Nowadays, data handling and data analysis in High Energy Physics require a vast amount of computational power and storage. In particular, the worldwide LHC Computing Grid (LCG), an infrastructure and pool of services developed and deployed by a large community of physicists and computer scientists, has proven to be a game changer for the efficiency of data analyses during Run-I at the LHC, playing a crucial role in the Higgs boson discovery. Recently, the Cloud computing paradigm has been emerging and reaching a considerable level of adoption by many different scientific organizations, and beyond. Clouds allow access to large computing resources, not owned by the user, that are shared among many scientific communities. Considering the challenging requirements of LHC physics in Run-II and beyond, the LHC computing community is interested in exploring Clouds to see whether they can provide a complementary approach - or even a valid alternative - to the existing technological solutions based on the Grid. In the LHC community, several experiments have been adopting Cloud approaches; in particular, the experience of the CMS experiment is of relevance to this thesis. The LHC Run-II has just started, and Cloud-based solutions are already in production for CMS. However, other approaches to Cloud usage are being devised and are at the prototype level, such as the work done in this thesis. This effort is of paramount importance for equipping CMS with the capability to elastically and flexibly access and utilize the computing resources needed to face the challenges of Run-III and Run-IV. The main purpose of this thesis is to present forefront Cloud approaches that allow the CMS experiment to extend to on-demand resources dynamically allocated as needed. Moreover, direct access to Cloud resources is presented as a use case suited to the CMS experiment's needs. Chapter 1 presents an overview of High Energy Physics at the LHC and of the CMS experience in Run-I, as well as the preparation for Run-II. Chapter 2 describes the current CMS Computing Model, and Chapter 3 presents the Cloud approaches pursued and used within the CMS Collaboration. Chapter 4 and Chapter 5 discuss the original and forefront work done in this thesis to develop and test working prototypes of elastic extensions of CMS computing resources onto Clouds, and HEP Computing "as a Service". The impact of this work on benchmark CMS physics use cases is also demonstrated.
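Conceptually, the elastic-extension prototypes of Chapters 4 and 5 come down to a feedback loop that sizes a pool of cloud worker nodes to the pending workload. The sketch below uses a hypothetical in-memory FakeCloudClient as a stand-in for a real IaaS API (it is not a CMS or Grid interface); names and thresholds are illustrative only:

```python
import itertools

class FakeCloudClient:
    """Hypothetical stand-in for a real IaaS API; keeps workers in memory."""
    def __init__(self):
        self._ids = itertools.count()
        self._workers = []
    def start_worker(self):
        self._workers.append(f"vm-{next(self._ids)}")
    def stop_worker(self, worker_id):
        self._workers.remove(worker_id)
    def list_workers(self):
        return list(self._workers)

def scale_to_demand(cloud, pending_jobs, jobs_per_worker=10, max_workers=100):
    """One scaling decision: size the worker pool to the pending workload."""
    workers = cloud.list_workers()
    wanted = min(-(-pending_jobs // jobs_per_worker), max_workers)  # ceil division
    for _ in range(max(wanted - len(workers), 0)):
        cloud.start_worker()              # elastically extend onto the cloud
    for worker_id in workers[wanted:]:
        cloud.stop_worker(worker_id)      # release idle on-demand resources

cloud = FakeCloudClient()
for demand in [85, 240, 30, 0]:           # hypothetical batch-queue depths
    scale_to_demand(cloud, demand)
    print(f"pending={demand:3d} -> workers={len(cloud.list_workers())}")
```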
Abstract:
The radiation environment of space presents a significant threat to the reliability of nonvolatile memory technologies. Ionizing radiation disturbs the charge stored on floating gates, and cosmic rays can permanently damage thin oxides. A new memory technology based on the magnetic tunneling junction (MTJ) appears to offer superior resistance to radiation effects and virtually unlimited write endurance. A magnetic flip-flop has a number of potential applications, such as the configuration memory in field-programmable logic devices. However, using MTJs in a flip-flop requires radically different circuitry for storing and retrieving data. New techniques are needed to ensure that magnetic flip-flops are reliable in the radiation environment of space. We propose a new radiation-tolerant magnetic flip-flop that uses the inherent resistance of the MTJ to increase its immunity to single-event upset and employs a robust "Pac-man" magnetic element.
Abstract:
Several practical obstacles in data handling and evaluation complicate the use of quantitative localized magnetic resonance spectroscopy (qMRS) in routine clinical MR examinations. To overcome these obstacles, a clinically feasible MR pulse sequence protocol for qMRS, based on standard available MR pulse sequences, has been implemented, along with functionalities newly added to the free software package jMRUI-v5.0 to make qMRS attractive for clinical routine. These enable (a) easy and fast DICOM data transfer between the MR console and the qMRS computer, (b) visualization of combined MR spectroscopy and imaging, (c) creation and network transfer of spectroscopy reports in DICOM format, (d) integration of advanced water reference models for absolute quantification, and (e) setup of databases containing normal metabolite concentrations of healthy subjects. To demonstrate the workflow of qMRS using these implementations, databases of normal metabolite concentrations in different regions of brain tissue were created from spectroscopic data acquired in 55 normal subjects (age range 6-61 years) on 1.5T and 3T MR systems, and are illustrated in one clinical case of a typical brain tumor (primitive neuroectodermal tumor). The MR pulse sequence protocol and newly implemented software functionalities facilitate the incorporation of qMRS and reference to normal metabolite concentration data in daily clinical routine. Magn Reson Med, 2013. © 2012 Wiley Periodicals, Inc.
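Water reference models of the kind mentioned in (d) typically follow the standard internal-water quantification relation, shown here in a simplified, illustrative form (symbols and correction factors are reduced relative to any actual jMRUI implementation):

```latex
% Simplified internal-water-reference quantification (illustrative only):
%   S_met, S_w : fitted metabolite and water signal amplitudes
%   C_w        : tissue water concentration;  n : protons per metabolite
%   R          : relaxation attenuation with repetition time T_R, echo time T_E
C_{\mathrm{met}} \;=\; \frac{S_{\mathrm{met}}}{S_{\mathrm{w}}}
\cdot \frac{2}{n} \cdot C_{\mathrm{w}} \cdot \frac{R_{\mathrm{w}}}{R_{\mathrm{met}}}
\qquad\text{with}\qquad
R \;=\; \bigl(1 - e^{-T_R/T_1}\bigr)\, e^{-T_E/T_2}
```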
Abstract:
Functional Magnetic Resonance Imaging (fMRI) is a non-invasive technique commonly used to quantify changes in blood oxygenation and flow coupled to neuronal activation. One of the primary goals of fMRI studies is to identify localized brain regions where neuronal activation levels vary between groups. Single-voxel t-tests have commonly been used to determine whether activation related to the protocol differs across groups. Due to the generally limited number of subjects within each study, accurate estimation of the variance at each voxel is difficult. Thus, combining information across voxels in the statistical analysis of fMRI data is desirable in order to improve efficiency. Here we construct a hierarchical model and apply an Empirical Bayes framework to the analysis of group fMRI data, employing techniques used in high-throughput genomic studies. The key idea is to shrink residual variances by combining information across voxels, and subsequently to construct an improved test statistic in lieu of the classical t-statistic. This hierarchical model results in a shrinkage of voxel-wise residual sample variances towards a common value. The shrunken estimator for voxel-specific variance components in the group analyses outperforms the classical residual error estimator in terms of mean squared error. Moreover, the shrunken test statistic decreases the false positive rate when testing differences in brain contrast maps across a wide range of simulation studies. This methodology was also applied to experimental data regarding a cognitive activation task.
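This shrinkage construction mirrors the moderated t-statistic familiar from high-throughput genomics (limma-style). A hedged sketch of the form such a hierarchical model takes, with d_0 and s_0^2 denoting the prior degrees of freedom and prior variance estimated by pooling across voxels:

```latex
% Hierarchical variance model (moderated t, limma-style), illustrative form:
%   s_v^2 : residual sample variance at voxel v, with d_v degrees of freedom
%   s_0^2, d_0 : prior variance and prior df shared across all voxels
\tilde{s}_v^2 \;=\; \frac{d_0\, s_0^2 + d_v\, s_v^2}{d_0 + d_v}
\qquad\qquad
\tilde{t}_v \;=\; \frac{\bar{y}_{1v} - \bar{y}_{2v}}
                       {\tilde{s}_v \sqrt{1/n_1 + 1/n_2}}
% Under the model, \tilde{t}_v is t-distributed with d_0 + d_v degrees of
% freedom, i.e. each voxel effectively gains d_0 df by borrowing strength.
```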