936 results for Data Acquisition Methods.
Abstract:
A good system of preventive bridge maintenance enhances the ability of engineers to manage and monitor bridge conditions and to take proper action at the right time. Traditionally, infrastructure inspection has been performed through infrequent, periodic visual inspections in the field. Wireless sensor technology provides a cost-effective alternative for continuous monitoring of infrastructure. Scientific data-acquisition systems using wireless sensors can make reliable structural measurements even in inaccessible and harsh environments. With advances in sensor technology and the availability of low-cost integrated circuits, wireless monitoring sensor networks are considered the new generation of technology for structural health monitoring. The main goal of this project was to implement a wireless sensor network for monitoring the behavior and integrity of highway bridges. At the core of the system is a low-cost, low-power wireless strain sensor node whose hardware design is optimized for structural monitoring applications. The key components of the system are the control unit, sensors, software, and communication capability. The extensive information developed for each of these areas was used to design the system. The performance and reliability of the proposed wireless monitoring system were validated on a 34-foot-span composite beam-in-slab bridge in Black Hawk County, Iowa. Microstrain data were successfully extracted from the output-only response collected by the wireless monitoring system. The energy efficiency of the system was investigated to estimate the battery lifetime of the wireless sensor nodes. This report also documents the system design, the method used for data acquisition, and system validation and field testing. Recommendations on further implementation of wireless sensor networks for long-term monitoring are provided.
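The report's battery-lifetime analysis is not reproduced in this abstract. Purely as an illustration of the kind of duty-cycle energy budget such an estimate rests on, a minimal sketch follows; the battery capacity, current draws, and duty cycle are assumed values, not figures from the Iowa deployment.

```python
# Rough battery-lifetime estimate for a duty-cycled wireless strain sensor node.
# All electrical values are illustrative assumptions, not figures from the report.

BATTERY_CAPACITY_MAH = 2500      # e.g. two AA cells (assumed)
ACTIVE_CURRENT_MA = 25.0         # MCU + radio + strain front-end while sampling/transmitting (assumed)
SLEEP_CURRENT_MA = 0.05          # deep-sleep current (assumed)
ACTIVE_SECONDS_PER_HOUR = 60.0   # one measurement/transmission burst per hour (assumed)

def average_current_ma(active_ma, sleep_ma, active_s_per_h):
    """Duty-cycle-weighted average current draw in mA."""
    duty = active_s_per_h / 3600.0
    return duty * active_ma + (1.0 - duty) * sleep_ma

def lifetime_days(capacity_mah, avg_ma):
    """Battery lifetime in days, ignoring self-discharge and temperature effects."""
    return capacity_mah / avg_ma / 24.0

if __name__ == "__main__":
    i_avg = average_current_ma(ACTIVE_CURRENT_MA, SLEEP_CURRENT_MA, ACTIVE_SECONDS_PER_HOUR)
    print(f"average current: {i_avg:.3f} mA")
    print(f"estimated lifetime: {lifetime_days(BATTERY_CAPACITY_MAH, i_avg):.0f} days")
```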
Abstract:
Many transportation agencies maintain grade as an attribute in roadway inventory databases; however, the information is often in an aggregated format. Cross slope is rarely included in large roadway inventories. Accurate methods available to collect grade and cross slope include global positioning systems, traditional surveying, and mobile mapping systems. However, most agencies do not have the resources to use these methods to collect grade and cross slope on a large scale. This report discusses the use of LIDAR to extract roadway grade and cross slope for large-scale inventories. Current data collection methods and their advantages and disadvantages are discussed. A pilot study to extract grade and cross slope from a LIDAR data set, including methodology, results, and conclusions, is presented. The report describes the regression methodology used to extract grade and cross slope from three-dimensional surfaces created from LIDAR data and to evaluate their accuracy. The use of LIDAR data to extract grade and cross slope on tangent highway segments was evaluated and compared against grade and cross slope collected using an automatic level for 10 test segments along Iowa Highway 1. Grade and cross slope were measured from a surface model created from the LIDAR data points collected for the study area. While grade could be estimated to within 1%, the study results indicate that cross slope cannot practically be estimated using a LIDAR-derived surface model.
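The report's exact regression procedure is not given in the abstract. The sketch below only illustrates the general idea of fitting a plane to LIDAR ground points on a tangent segment and reading grade and cross slope from the fitted coefficients; the coordinate convention (x along the centerline, y transverse to it) and the synthetic points are assumptions.

```python
# Illustrative least-squares plane fit to LIDAR ground points on a tangent segment.
# z = a*x + b*y + c, with x along the centerline and y transverse (assumed frame),
# so 100*a approximates grade (%) and 100*b approximates cross slope (%).
import numpy as np

def fit_grade_cross_slope(x, y, z):
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    return 100.0 * a, 100.0 * b   # grade %, cross slope %

# Synthetic example: 2% grade, -2% cross slope, plus ~3 cm elevation noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 100, 500)          # m along the segment
y = rng.uniform(-3.6, 3.6, 500)       # m across one lane
z = 0.02 * x - 0.02 * y + rng.normal(0, 0.03, 500)

grade, cross = fit_grade_cross_slope(x, y, z)
print(f"grade ~ {grade:.2f} %, cross slope ~ {cross:.2f} %")
```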
Abstract:
Land plants have had the reputation of being problematic for DNA barcoding for two general reasons: (i) the standard DNA regions used in algae, animals and fungi have exceedingly low levels of variability in land plants, and (ii) the plastid phylogenetic markers typically used for land plants (e.g. rbcL, trnL-F, etc.) appear to have too little variation. However, no one has assessed how well current phylogenetic resources might work in the context of identification (versus phylogeny reconstruction). In this paper, we make such an assessment, particularly with two of the markers commonly sequenced in land plant phylogenetic studies, plastid rbcL and the internal transcribed spacers of nuclear ribosomal DNA (ITS), and find that both of these DNA regions perform well, even though the data currently available in GenBank/EBI were not produced to be used as barcodes and BLAST searches are not an ideal tool for this purpose. These results bode well for the use of even more variable regions of plastid DNA (such as, for example, psbA-trnH) as barcodes, once they have been widely sequenced. In the short term, efforts to bring land plant barcoding up to the standards now being used in other organisms should make swift progress. There are two categories of DNA barcode users: scientists in fields other than taxonomy, and taxonomists. For the former, the use of mitochondrial and plastid DNA, the two most easily assessed genomes, is at least in the short term a useful tool that permits them to get on with their studies, which depend on knowing roughly which species or species groups they are dealing with; but these same DNA regions have important drawbacks for use in taxonomic studies (i.e. studies designed to elucidate species limits). For these purposes, DNA markers from uniparentally (usually maternally) inherited genomes can provide only half of the story required to improve the taxonomic standards being used in DNA barcoding. In the long term, we will need to develop more sophisticated barcoding tools: multiple, low-copy nuclear markers with sufficient genetic variability and PCR reliability. These would permit the detection of hybrids and allow researchers to identify the 'genetic gaps' that are useful in assessing species limits.
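As illustrative background only (this is not the authors' pipeline), a BLAST-based identification of an rbcL or ITS sequence against GenBank, of the kind assessed above, might be scripted as in the following Biopython sketch; the query sequence and the choice of the nt database are placeholders.

```python
# Illustrative only: identify a plant rbcL/ITS sequence by a remote BLAST search
# against GenBank's nucleotide database. This is the generic "BLAST as
# identification tool" approach the abstract evaluates, not the authors' workflow.
from Bio.Blast import NCBIWWW, NCBIXML

def top_blast_hits(fasta_sequence, n=5):
    handle = NCBIWWW.qblast("blastn", "nt", fasta_sequence)  # remote NCBI query
    record = NCBIXML.read(handle)
    hits = []
    for alignment in record.alignments[:n]:
        hsp = alignment.hsps[0]
        identity = 100.0 * hsp.identities / hsp.align_length
        hits.append((alignment.title, identity, hsp.expect))
    return hits

# Usage (the query string is a placeholder, not real data):
# for title, ident, evalue in top_blast_hits(">query\nATGTCACCACAAACAGAGACT..."):
#     print(f"{ident:5.1f}%  e={evalue:.1e}  {title}")
```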
Abstract:
BACKGROUND: PCR has the potential to detect and precisely quantify specific DNA sequences, but it is not yet often used as a fully quantitative method. A number of data collection and processing strategies have been described for the implementation of quantitative PCR. However, they can be experimentally cumbersome, their relative performances have not been evaluated systematically, and they often remain poorly validated statistically and/or experimentally. In this study, we evaluated the performance of known methods, and compared them with newly developed data processing strategies in terms of resolution, precision and robustness. RESULTS: Our results indicate that simple methods that do not rely on the estimation of the efficiency of the PCR amplification may provide reproducible and sensitive data, but that they do not quantify DNA with precision. Other evaluated methods based on sigmoidal or exponential curve fitting were generally of both poor resolution and precision. A statistical analysis of the parameters that influence efficiency indicated that it depends mostly on the selected amplicon and to a lesser extent on the particular biological sample analyzed. Thus, we devised various strategies based on individual or averaged efficiency values, which were used to assess the regulated expression of several genes in response to a growth factor. CONCLUSION: Overall, qPCR data analysis methods differ significantly in their performance, and this analysis identifies methods that provide DNA quantification estimates of high precision, robustness and reliability. These methods allow reliable estimations of relative expression ratio of two-fold or higher, and our analysis provides an estimation of the number of biological samples that have to be analyzed to achieve a given precision.
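The specific data-processing strategies evaluated are not spelled out in the abstract. As general background, one widely used efficiency-corrected formulation of the relative expression ratio (a Pfaffl-type expression, shown here for orientation rather than as the method proposed in the paper) compares a target gene against a reference gene using their amplification efficiencies E and the quantification-cycle differences between control and treated samples:

```latex
% Efficiency-corrected relative expression ratio (Pfaffl-type form), shown as
% general background rather than the method developed in the paper.
\[
  \mathrm{ratio}
  = \frac{(E_{\mathrm{target}})^{\Delta Cq_{\mathrm{target}}\,(\mathrm{control}-\mathrm{treated})}}
         {(E_{\mathrm{ref}})^{\Delta Cq_{\mathrm{ref}}\,(\mathrm{control}-\mathrm{treated})}}
\]
```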
Abstract:
This report is concerned with the prediction of the long-time creep and shrinkage behavior of concrete. It is divided into three main areas: 1. The development of general prediction methods that can be used by a design engineer when specific experimental data are not available. 2. The development of prediction methods based on experimental data. These methods take advantage of the equations developed in item 1 and can be used to accurately predict creep and shrinkage after only 28 days of data collection. 3. Experimental verification of items 1 and 2, and the development of specific prediction equations for four sand-lightweight aggregate concretes tested in the experimental program. The general prediction equations and methods are developed in Chapter II. Standard equations to estimate the creep of normal weight concrete (Eq. 9), sand-lightweight concrete (Eq. 12), and lightweight concrete (Eq. 15) are recommended. These equations are developed for standard conditions (see Sec. 2.1), and the correction factors required to convert creep coefficients obtained from Equations 9, 12, and 15 into valid predictions for other conditions are given in Equations 17 through 23. The correction factors are shown graphically in Figs. 6 through 13. Similar equations and methods are developed for the prediction of the shrinkage of moist-cured normal weight concrete (Eq. 30), moist-cured sand-lightweight concrete (Eq. 33), and moist-cured lightweight concrete (Eq. 36). For steam-cured concrete the equations are Eq. 42 for normal weight concrete and Eq. 45 for lightweight concrete. Correction factors are given in Equations 47 through 52 and Figs. 18 through 24. Chapter III summarizes and illustrates, by examples, the prediction methods developed in Chapter II. Chapters IV and V describe an experimental program in which specific prediction equations are developed for concretes made with Haydite manufactured by Hydraulic Press Brick Co. (Eqs. 53 and 54), Haydite manufactured by Buildex Inc. (Eqs. 55 and 56), Haydite manufactured by The Carter-Waters Corp. (Eqs. 57 and 58), and Idealite manufactured by Idealite Co. (Eqs. 59 and 60). General prediction equations are also developed from the data obtained in the experimental program (Eqs. 61 and 62) and are compared to similar equations developed in Chapter II. Creep and shrinkage prediction methods based on 28-day experimental data are developed in Chapter VI. The methods are verified by comparing predicted and measured values of the long-time creep and shrinkage of specimens tested at the University of Iowa (see Chapters IV and V) and elsewhere. The accuracy obtained is shown to be superior to that of other similar methods available to the design engineer.
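The numbered equations refer to the report itself and are not reproduced in the abstract. Purely as an indication of the general shape such standard-condition creep expressions take (in the style later adopted by ACI 209, not a transcription of the report's Eq. 9), the creep coefficient is commonly written as a hyperbolic time function scaled by an ultimate value and by product correction factors:

```latex
% Illustrative general form of a standard-conditions creep prediction (the report's
% Eqs. 9, 12 and 15 are not reproduced here):
\[
  C_t = \frac{t^{0.6}}{10 + t^{0.6}}\, C_u \prod_i \gamma_i
\]
% where t is the time after loading in days, C_u the ultimate creep coefficient, and
% the correction factors \gamma_i account for non-standard humidity, member size,
% slump, age at loading, and similar conditions.
```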
Abstract:
For the development and evaluation of cardiac magnetic resonance (MR) imaging sequences and methodologies, the availability of a periodically moving phantom to model respiratory and cardiac motion would be of substantial benefit. Given the specific physical boundary conditions in an MR environment, the choice of materials and power source for such phantoms is heavily restricted. Sophisticated commercial solutions are available; however, they are often relatively costly, and user-specific modifications may not be easily implemented. We therefore sought to construct a low-cost, MR-compatible motion phantom that could be easily reproduced and had design flexibility. A commercially available K'NEX construction set (Hyper Space Training Tower, K'NEX Industries, Inc., Hatfield, PA) was used to construct a periodically moving phantom head. The phantom head performs a translation with a superimposed rotation, driven by a motor over a 2-m rigid rod. To synchronize the MR data acquisition with the phantom motion (without introducing radiofrequency-related image artifacts), a fiberoptic control unit generates periodic trigger pulses synchronized to the phantom motion. Total material costs of the phantom are less than US$200, and a total of 80 man-hours were required to design and construct the original phantom. With the schematics of the present solution, the phantom may be reproduced in approximately 15 man-hours. The presented MR-compatible periodically moving phantom can easily be reproduced, and user-specific modifications may be implemented. Such an approach allows a detailed investigation of motion-related phenomena in MR images.
Abstract:
BACKGROUND: Over the last decade, several European HIV observational databases have accumulated a substantial number of resistance test results and developed large sample repositories. There is a need to link these efforts together. We describe here the development of a novel tool that binds these databases together in a distributed fashion, so that control of the data remains with the cohorts rather than with classic data mergers. METHODS: As a proof of concept we entered two basic queries into the tool: available resistance tests and available samples. We asked for patients still alive after 1998-01-01 and between 180 and 195 cm in height, and how many samples or resistance tests were available for these patients. The queries were uploaded with the tool to a central web server, from which each participating cohort downloaded the queries with the tool and ran them against its own database. The numbers gathered were then submitted back to the server, where the available samples and resistance tests were totalled. RESULTS: We obtained the following results from the cohorts (available samples/resistance tests): EuResist: not available/11,194; EuroSIDA: 20,716/1,992; ICONA: 3,751/500; Rega: 302/302; SHCS: 53,783/1,485. In total, 78,552 samples and 15,473 resistance tests were available among these five cohorts. Once these data items have been identified, it is trivial to generate lists of relevant samples that would be useful for ultra-deep sequencing in addition to the already available resistance tests. Soon the tool will include small analysis packages that allow each cohort to pull a report on its cohort profile and also to survey emerging resistance trends in its own cohort. CONCLUSIONS: We plan to provide this tool to all cohorts within the Collaborative HIV and Anti-HIV Drug Resistance Network (CHAIN) and will provide it free of charge to others for any non-commercial use. The potential of this tool is to ease collaborations, that is, in projects requiring data, to speed up the identification of novel resistance mutations by increasing the number of observations across multiple cohorts instead of waiting for single cohorts or studies to reach the critical number needed to address such issues.
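The abstract does not describe the tool's implementation. The sketch below only illustrates the distributed pattern it describes, in which a central server distributes a query, each cohort runs it against its own database, and only aggregate counts travel back; the endpoint URL, payload fields, and local schema are hypothetical.

```python
# Illustrative sketch of the distributed query pattern described above: the central
# server never sees patient-level data, only aggregate counts returned by each cohort.
# The URLs, payload fields and the local database schema are hypothetical.
import json
import sqlite3
import urllib.request

CENTRAL_SERVER = "https://example.org/chain-queries"   # hypothetical endpoint

def fetch_pending_queries():
    with urllib.request.urlopen(f"{CENTRAL_SERVER}/pending") as resp:
        return json.loads(resp.read())                  # e.g. [{"id": 1, "sql": "..."}]

def run_locally(query_sql, db_path="cohort.sqlite"):
    """Run the distributed query against the cohort's own database."""
    with sqlite3.connect(db_path) as conn:
        (count,) = conn.execute(query_sql).fetchone()   # query returns a single COUNT(*)
    return count

def submit_result(query_id, count):
    payload = json.dumps({"query_id": query_id, "count": count}).encode()
    req = urllib.request.Request(f"{CENTRAL_SERVER}/results", data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

if __name__ == "__main__":
    for query in fetch_pending_queries():
        submit_result(query["id"], run_locally(query["sql"]))
```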
Abstract:
It is well established that cancer cells can recruit CD11b(+) myeloid cells to promote tumor angiogenesis and tumor growth. Increasing interest has emerged in the identification of subpopulations of tumor-infiltrating CD11b(+) myeloid cells using flow cytometry techniques. In the literature, however, discrepancies exist concerning the phenotype of these cells (Coffelt et al., Am J Pathol 2010;176:1564-1576). Since flow cytometry analysis requires particular precautions for accurate sample preparation and trustworthy data acquisition, analysis, and interpretation, some discrepancies might be due to technical reasons rather than biological grounds. We used the syngeneic orthotopic 4T1 mammary tumor model in immunocompetent BALB/c mice to analyze and compare the phenotype of CD11b(+) myeloid cells isolated from peripheral blood and from tumors, using six-color flow cytometry. We report here that nonspecific antibody binding through Fc receptors and the presence of dead cells and cell doublets in tumor-derived samples combine to generate artifacts in the phenotype of tumor-infiltrating CD11b(+) subpopulations. We show that the apparent heterogeneity of tumor-infiltrating CD11b(+) subpopulations analyzed without particular precautions was greatly reduced upon Fc block treatment and the exclusion of dead cells and cell doublets. Phenotyping of tumor-infiltrating CD11b(+) cells was particularly sensitive to these parameters compared to circulating CD11b(+) cells. Taken together, our results identify Fc block treatment and the exclusion of dead cells and cell doublets as simple but crucial steps for the proper analysis of tumor-infiltrating CD11b(+) cell populations.
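As an illustration of the analysis-side precautions discussed above (doublet and dead-cell exclusion; Fc blocking is a staining step with no software counterpart), a minimal gating sketch on hypothetical event data might look as follows; the channel names, thresholds, and data layout are assumptions.

```python
# Minimal gating sketch for excluding doublets and dead cells before phenotyping
# CD11b(+) events. The event table, channel names and cut-offs are all hypothetical.
import numpy as np
import pandas as pd

def gate_singlets(events, tolerance=0.15):
    """Keep events whose FSC-A/FSC-H ratio sits near the median (singlet) ratio."""
    ratio = events["FSC-A"] / events["FSC-H"]
    return (ratio - ratio.median()).abs() < tolerance * ratio.median()

def gate_live(events, viability_cutoff):
    """Keep viability-dye-negative (live) events."""
    return events["LiveDead"] < viability_cutoff

def cd11b_fraction(events, viability_cutoff, cd11b_cutoff):
    """Fraction of CD11b(+) events after doublet and dead-cell exclusion."""
    clean = events[gate_singlets(events) & gate_live(events, viability_cutoff)]
    return (clean["CD11b"] > cd11b_cutoff).mean()

# Synthetic example with 10,000 events and arbitrary fluorescence scales.
rng = np.random.default_rng(0)
events = pd.DataFrame({
    "FSC-A": rng.normal(50_000, 5_000, 10_000),
    "FSC-H": rng.normal(48_000, 5_000, 10_000),
    "LiveDead": rng.exponential(300, 10_000),
    "CD11b": rng.normal(1_000, 400, 10_000),
})
print(f"CD11b+ fraction after cleanup: {cd11b_fraction(events, 800, 1_200):.2%}")
```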
Abstract:
Industrial automation systems are becoming digitalized; at the same time, the amount of real-time data they produce is growing and, above all, that data is becoming easier to access. Simultaneously, equipment manufacturers' business models are shifting from traditional machine building toward service provision. In this changed situation, new capabilities are required of the systems used for machine control. The processing and refinement of information are becoming important competitive factors. The literature section examines how data is refined into information and further into knowledge, and how these can be exploited in business. It also reviews the information and knowledge systems found in industry. The experimental section presents a working data acquisition and reporting system and investigates how it should be developed to better suit the changing business models. As a result, a model system was developed that can meet the changed information needs of the equipment manufacturer and the end user as part of machine control.
Abstract:
BACKGROUND: Used in conjunction with biological surveillance, behavioural surveillance provides data allowing a more precise definition of HIV/STI prevention strategies. In 2008, a mapping of behavioural surveillance in EU/EFTA countries was performed on behalf of the European Centre for Disease Prevention and Control. METHOD: Nine questionnaires were sent to all 31 Member States and EEA/EFTA countries requesting data on the overall behavioural and second-generation surveillance system and on surveillance in the general population, youth, men who have sex with men (MSM), injecting drug users (IDU), sex workers (SW), migrants, people living with HIV/AIDS (PLWHA), and patients of sexually transmitted infection (STI) clinics. The requested data included information on system organisation (e.g. sustainability, funding, institutionalisation), topics covered in surveys, and main indicators. RESULTS: Twenty-eight of the 31 countries contacted supplied data. Sixteen countries reported an established behavioural surveillance system, and 13 a second-generation surveillance system (a combination of biological surveillance of HIV/AIDS and STI with behavioural surveillance). There were wide differences as regards the year of survey initiation, the number of populations surveyed, the data collection methods used, the organisation of surveillance, and coordination with biological surveillance. The populations most regularly surveyed are the general population, youth, MSM and IDU. SW, patients of STI clinics and PLWHA are surveyed less regularly and in only a small number of countries, and few countries have undertaken behavioural surveys among migrant or ethnic minority populations. In many cases, the identification of populations with risk behaviour and the selection of populations to be included in a behavioural surveillance system have not been formally conducted, or are incomplete. The topics most frequently covered are similar across countries, although many different indicators are used. In most countries, the sustainability of surveillance systems is not assured. CONCLUSION: Although many European countries have established behavioural surveillance systems, there is little harmonisation as regards the methods and indicators adopted. The main challenge now faced is to build and maintain organised and functional behavioural and second-generation surveillance systems across Europe, to increase collaboration, to promote robust, sustainable and cost-effective data collection methods, and to harmonise indicators.
Abstract:
The LHCb experiment is being built at the future LHC accelerator at CERN. It is a forward single-arm spectrometer dedicated to precision measurements of CP violation and rare decays in the b-quark sector. It is presently finishing its R&D and final design stage; construction has already started for the magnet and the calorimeters. In the Standard Model, CP violation arises via the complex phase of the 3x3 CKM (Cabibbo-Kobayashi-Maskawa) quark mixing matrix. The LHCb experiment will test the unitarity of this matrix by measuring, in several theoretically unrelated ways, all angles and sides of the so-called "unitarity triangle". This will allow the model to be over-constrained and, hopefully, exhibit inconsistencies that would be a signal of physics beyond the Standard Model. Vertex reconstruction is a fundamental requirement for the LHCb experiment: displaced secondary vertices are a distinctive feature of b-hadron decays, and this signature is used in the LHCb topological trigger. The Vertex Locator (VeLo) has to provide precise measurements of track coordinates close to the interaction region. These are used to reconstruct production and decay vertices of beauty hadrons and to provide accurate measurements of their decay lifetimes. The Vertex Locator electronics is an essential part of the data acquisition system and must conform to the overall LHCb electronics specification. The design of the electronics must maximise the signal-to-noise ratio in order to achieve the best track reconstruction performance in the detector. The electronics was designed in parallel with the silicon detector development and went through several prototyping phases, which are described in this thesis.
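For reference, the unitarity condition exploited by these measurements is the orthogonality of the first and third columns of the CKM matrix, whose three terms form the unitarity triangle in the complex plane:

```latex
% Orthogonality of the first and third columns of the CKM matrix; its three terms,
% drawn in the complex plane, define the unitarity triangle whose angles and sides
% LHCb aims to over-constrain with independent B-meson measurements.
\[
  V_{ud}V_{ub}^{*} + V_{cd}V_{cb}^{*} + V_{td}V_{tb}^{*} = 0
\]
```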
Abstract:
In this study, we evaluated the accuracy and performance of a light detection and ranging (LIDAR) sensor for vegetation, using distance and reflection measurements to detect and discriminate maize plants and weeds from the soil surface. The study continues previous work carried out in a maize field in Spain with a LIDAR sensor using exclusively one index, the height profile; the current system uses a combination of two indices, height and reflection. The experiment was carried out in a maize field at growth stage 12–14, at 16 different locations selected to represent the widest possible range of densities of four weeds: Echinochloa crus-galli (L.) P.Beauv., Lamium purpureum L., Galium aparine L. and Veronica persica Poir. A terrestrial LIDAR sensor was mounted on a tripod pointing at the inter-row area, with its horizontal axis and field of view pointing vertically downwards to the ground, scanning a vertical plane with the potential presence of vegetation. Immediately after the LIDAR data acquisition (distance and reflection measurements), the actual heights of the plants were estimated using an appropriate methodology; for that purpose, digital images were taken of each sampled area. The data showed a high correlation between LIDAR-measured heights and actual plant heights (R2 = 0.75). Binary logistic regression between weed presence/absence and the sensor readings (LIDAR height and reflection values) was used to validate the accuracy of the sensor. This permitted the discrimination of vegetation from the ground with an accuracy of up to 95%. In addition, a Canonical Discriminant Analysis (CDA) was able to discriminate mostly between soil and vegetation and, to a far lesser extent, between crop and weeds. The studied methodology emerges as a good system for weed detection, which in combination with other principles, such as vision-based technologies, could improve the efficiency and accuracy of herbicide spraying.
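The paper's own statistical analysis is not reproduced here. The sketch below merely illustrates the kind of binary logistic regression described, with LIDAR height and reflection as the two predictors of vegetation presence; the data are synthetic and the class separations assumed.

```python
# Illustrative binary logistic regression of vegetation presence/absence on the two
# LIDAR-derived predictors mentioned above (height and reflection). Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 400
height = np.concatenate([rng.normal(0.5, 0.5, n // 2),    # bare soil: near-zero heights (cm)
                         rng.normal(12.0, 4.0, n // 2)])  # vegetation: taller returns (cm)
reflection = np.concatenate([rng.normal(40, 8, n // 2),   # soil reflectance (arbitrary units)
                             rng.normal(70, 10, n // 2)]) # vegetation reflectance
X = np.column_stack([height, reflection])
y = np.repeat([0, 1], n // 2)                             # 0 = soil, 1 = vegetation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```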
Abstract:
This paper presents the quantitative and qualitative findings from an experiment designed to evaluate a developing model of affective postures for full-body virtual characters in immersive virtual environments (IVEs). Forty-nine participants were each asked to explore a virtual environment by asking two virtual characters for instructions. The participants used a CAVE-like system to explore the environment. Participant responses and their impressions of the virtual characters were evaluated through a wide variety of both quantitative and qualitative methods. Combining a controlled experimental approach with various data-collection methods provided a number of advantages, such as helping to explain the quantitative results. The quantitative results indicate that posture plays an important role in the communication of affect by virtual characters. The qualitative findings indicate that participants attribute a variety of psychological states to the behavioral cues displayed by virtual characters. In addition, participants tended to interpret the social context portrayed by the virtual characters in a holistic manner, suggesting that one aspect of the virtual scene colors the perception of the whole social context portrayed by the virtual characters. We conclude by discussing the importance of designing holistically congruent virtual characters, especially in immersive settings.
Abstract:
This master's thesis examines the structures of the IGBT and the power diode, the generation of heat in these components, and the methods by which the component temperatures can be determined. A measurement system is designed and built with which the temperatures of the IGBT and the power diode can be determined both by direct measurement and by means of mathematical models. The measurement system consists of a DC chopper circuit together with measurements of load current, DC-link voltage, and temperature. Thermocouples attached to the component surfaces were used for the temperature measurements. For the mathematical models, measurements of the DC-link voltage and load current were added to the system. Control of the equipment and logging of the measurement results were implemented with dSPACE hardware. The operation of the measurement system was verified with measurements performed in the Control Engineering laboratory at Lappeenranta University of Technology.
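The thesis's mathematical models are not given in the abstract. As generic background only, an IGBT junction temperature is often estimated from conduction losses and a junction-to-case thermal resistance, as in the sketch below; all device and thermal parameters are assumed values, not those of the thesis.

```python
# Generic (not thesis-specific) estimate of IGBT junction temperature from measured
# load current: conduction loss via a threshold-voltage + on-resistance model, then a
# steady-state junction-to-case thermal resistance. Switching losses are neglected
# and all parameter values below are assumed.
V_CE0 = 1.0        # on-state threshold voltage (V), assumed
R_CE = 0.010       # on-state slope resistance (ohm), assumed
DUTY = 0.5         # chopper duty cycle, assumed
R_TH_JC = 0.30     # junction-to-case thermal resistance (K/W), assumed

def conduction_loss(i_load_a, duty=DUTY):
    """Average IGBT conduction loss for a roughly constant load current (W)."""
    return duty * (V_CE0 * i_load_a + R_CE * i_load_a ** 2)

def junction_temperature(i_load_a, t_case_c):
    """Steady-state junction temperature from the thermocouple case temperature."""
    return t_case_c + R_TH_JC * conduction_loss(i_load_a)

if __name__ == "__main__":
    # Example: 50 A load current, 60 degC measured case temperature.
    print(f"T_j ~ {junction_temperature(50.0, 60.0):.1f} degC")
```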