985 results for Data handling
Abstract:
Gravimetric and Bailey-Andrew methods are tedious and provide inflated results. Spectrophotometry is adequate for caffeine analysis but is lengthy. Gas chromatography is also applied to caffeine analysis, but derivatization is needed. Reversed-phase high performance liquid chromatography with ultraviolet detection (HPLC-UV) is simple and rapid for xanthine multianalysis. In HPLC-UV with gel permeation, organic solvents are not used. HPLC-mass spectrometry provides unequivocal structural identification of xanthines. Capillary electrophoresis is fast and its solvent consumption is smaller than in HPLC. Chemometric methods offer an effective means for chemical data handling in multivariate analysis. Infrared spectroscopy, alone or combined with chemometrics, can predict the caffeine content very accurately. Electroanalytical methods are considered low-cost and easy to apply in caffeine analysis.
Abstract:
An activity for introducing hierarchical cluster analysis (HCA) and principal component analysis (PCA) in the Instrumental Analytical Chemistry course is presented. The posed problem involves discriminating mineral water samples according to their geographical origin. Thirty-seven samples of 9 different brands were considered, and the results from the determination of Na, K, Mg, Ca, Sr and Ba were taken into account. Unsupervised pattern recognition methods were explored to construct a dendrogram and score and loading plots. The devised activity can be adopted to introduce Chemometrics for data handling, stressing its importance in the context of modern Analytical Chemistry.
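As an illustration of the kind of unsupervised analysis the activity describes, the sketch below runs HCA and PCA on a small set of hypothetical element concentrations. The sample values, groupings and library choices (scipy/scikit-learn) are assumptions for illustration, not data or software from the activity itself.

```python
# Minimal sketch: HCA dendrogram and PCA scores/loadings for mineral water data.
# The concentration values below are invented for illustration only.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

elements = ["Na", "K", "Mg", "Ca", "Sr", "Ba"]
# Rows: water samples (mg/L), columns: elements -- hypothetical values.
X = np.array([
    [12.1, 1.3, 4.2, 30.5, 0.08, 0.02],
    [11.8, 1.2, 4.0, 29.9, 0.07, 0.02],
    [45.3, 3.1, 9.8, 55.2, 0.21, 0.05],
    [44.7, 3.0, 10.1, 54.8, 0.20, 0.05],
    [ 5.2, 0.6, 2.1, 12.3, 0.03, 0.01],
])

# Autoscale so that each element contributes equally.
Xs = StandardScaler().fit_transform(X)

# Hierarchical cluster analysis (Ward linkage) -> dendrogram structure.
Z = linkage(Xs, method="ward")
dendro = dendrogram(Z, no_plot=True)   # set no_plot=False with matplotlib to draw it
print("Leaf order in the dendrogram:", dendro["leaves"])

# Principal component analysis -> score and loading plots.
pca = PCA(n_components=2)
scores = pca.fit_transform(Xs)         # sample scores (for the score plot)
loadings = pca.components_.T           # element loadings (for the loading plot)
print("Explained variance ratio:", pca.explained_variance_ratio_)
print("Scores:\n", scores)
print("Loadings (rows follow the element order):\n", loadings)
```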
Abstract:
Forest inventories are used to estimate forest characteristics and forest condition for many different applications: operational tree logging for the forest industry, forest health assessment, carbon balance estimation, land-cover and land-use analysis to avoid forest degradation, and so on. Recent inventory methods are strongly based on remote sensing data combined with field sample measurements, which are used to derive estimates covering the whole area of interest. Remote sensing data from satellites, aerial photographs or aerial laser scanning are used, depending on the scale of the inventory. To be applicable in operational use, forest inventory methods need to be easily adjusted to the local conditions of the study area at hand. All data handling and parameter tuning should be objective and automated as much as possible. The methods also need to be robust when applied to different forest types. Since there generally are no comprehensive direct physical models connecting the remote sensing data from different sources to the forest parameters being estimated, the mathematical estimation models are of "black-box" type, connecting the independent auxiliary data to the dependent response data with arbitrary linear or nonlinear models. To avoid redundant complexity and over-fitting of the model, which is based on up to hundreds of possibly collinear variables extracted from the auxiliary data, variable selection is needed. To connect the auxiliary data to the inventory parameters being estimated, field work must be performed. In larger study areas with dense forests, field work is expensive and should therefore be minimized. To obtain cost-efficient inventories, field work could partly be replaced with information from previously measured sites stored in databases. The work in this thesis is devoted to the development of automated, adaptive computation methods for aerial forest inventory. The parameter definition steps of the mathematical models are automated, and cost-efficiency is improved by setting up a procedure that utilizes databases in the estimation of the characteristics of new areas.
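The variable-selection step mentioned above can be illustrated with a small sketch. The synthetic plot data, the feature counts and the use of a cross-validated LASSO are assumptions chosen for illustration; the thesis's actual estimation and selection methods may differ.

```python
# Sketch: selecting a sparse subset of collinear remote-sensing features
# for a forest attribute (e.g., stem volume) with cross-validated LASSO.
# The synthetic data below stand in for real plot-level measurements.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n_plots, n_features = 120, 200            # many, partly collinear features
base = rng.normal(size=(n_plots, 20))
# Remaining features are noisy linear combinations of the base features,
# which makes the design matrix strongly collinear.
X = np.hstack([base,
               base @ rng.normal(size=(20, n_features - 20)) * 0.1
               + rng.normal(scale=0.05, size=(n_plots, n_features - 20))])

true_coef = np.zeros(n_features)
true_coef[:5] = [3.0, -2.0, 1.5, 0.8, -1.2]   # only a few features matter
y = X @ true_coef + rng.normal(scale=0.5, size=n_plots)   # e.g., stem volume

model = LassoCV(cv=5, max_iter=5000).fit(X, y)   # penalty chosen by cross-validation
selected = np.flatnonzero(model.coef_)
print(f"{selected.size} of {n_features} features kept, first few:", selected[:10])
```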
Abstract:
The recent digitization, the fragmentation of the media landscape and consumers' changing media behavior are all changes that have had drastic effects on creating marketing communications. In order to create effective marketing communications, large advertisers are now co-operating with a variety of marketing communications companies. The purpose of the study is to understand how advertisers perceive these different companies and, more importantly, how advertisers expect their roles to change in the future as the media landscape continues to evolve. Especially the changing roles of advertising agencies and media agencies are examined, as they are currently the most relevant partners of the advertisers. However, the research is conducted from a network perspective rather than focusing on single actors of the marketing communications industry network. The research was conducted using a qualitative theme interview method. The empirical data was gathered by interviewing representatives from nine of the 50 largest Finnish advertisers measured by media spending. Thus, the research was conducted solely from the large B2C advertisers' perspective, while the views of the other relevant actors of the network were left unexplored. The interviewees were chosen with a focus on a variety of points of view. The analytical framework used to analyze the gathered data was built on the IMP Group's industrial network model, which consists of actors, their resources and activities. As technology-driven media landscape fragmentation and consumers' changing media behavior continue to increase the complexity of creating marketing communications, advertisers are going to need to rely on a growing number of partnerships, as they see that the current actors of the network will not be able to widen their expertise to meet these new needs. The advertisers expect to form new partnerships with actors that are more specialized and able to react and produce activities more quickly than at the moment. Thus, new, smaller and more agile actors with looser structures are going to appear to fill these new needs. Therefore, co-operation between the actors is going to become more important. These changes pose the biggest threat to traditional advertising agencies, as they were seen as least able to cope with the ongoing change. Media agencies are in a more favorable position for remaining relevant to the advertisers, as they will be able to justify their activities and the value they provide by leveraging their data handling abilities. In general, the advertisers expect to work with a limited number of close actors and, in addition, to maintain a network of smaller actors that are used on a more ad hoc basis.
Abstract:
We report a fast (less than 3 h) and cost-effective melting temperature assay for the detection of single-nucleotide polymorphisms in the MBL2 gene. The protocol, which is based on the Corbett Rotor-Gene real-time PCR platform and SYBR Green I chemistry, yielded, in the cohorts studied, sensitive (100%) and specific (100%) PCR amplification without the use of costly fluorophore-labeled probes or post-PCR manipulation. At the end of the PCR, the dissociation protocol included slow heating from 60°C to 95°C in 0.2°C steps, with an 8-s interval between steps. Melting curve profiles were obtained using the dissociation software of the Rotor-Gene 3000 apparatus. Samples were analyzed in duplicate and in different PCR runs to test the reproducibility of the technique. No supplementary data handling is required to determine the MBL2 genotype. MBL2 genotyping performed on a cohort of 164 HIV-1-positive Brazilian children and 150 healthy controls, matched for age, sex and ethnic origin, yielded reproducible results that were confirmed by blind direct sequencing of the amplicon. The three MBL2 variants (Arg52Cys, Gly54Asp, Gly57Glu) were grouped together and called allele 0, while the wild-type allele was called allele A. The frequency of A/A homozygotes was significantly higher among healthy controls (0.68) than among HIV-infected children (0.55; P = 0.0234), and the frequency of MBL2 0/0 homozygotes was higher among HIV-1-infected children than among healthy controls (P = 0.0296). The 0 allele was significantly more frequent among the 164 HIV-1-infected children (0.29) than among the 150 healthy controls (0.18; P = 0.0032). Our data confirm the association between the presence of the mutated MBL2 allele (allele 0) and HIV-1 infection in perinatally exposed children. Our results agree with literature data indicating that the presence of allele 0 confers a relative risk of 1.37 for HIV-1 infection through vertical transmission.
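To illustrate the kind of allele-frequency comparison reported above, the sketch below rebuilds an approximate 2x2 allele count table from the stated frequencies and applies Fisher's exact test. The reconstructed counts and the choice of test are assumptions for illustration, not necessarily those used in the study.

```python
# Sketch: comparing allele-0 frequencies between groups with a 2x2 table.
# Counts are reconstructed approximately from the reported frequencies
# (0.29 of 328 alleles in patients, 0.18 of 300 alleles in controls); the
# test (Fisher's exact) is an assumption, not necessarily the one used.
from scipy.stats import fisher_exact

#                 allele 0, allele A
patients = [95, 233]   # 164 HIV-1-infected children -> 328 alleles
controls = [54, 246]   # 150 healthy controls        -> 300 alleles

odds_ratio, p_value = fisher_exact([patients, controls])
print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.4f}")

freq_patients = patients[0] / sum(patients)
freq_controls = controls[0] / sum(controls)
print(f"allele-0 frequency: patients {freq_patients:.2f}, controls {freq_controls:.2f}")
```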
Abstract:
The Chartered Institution of Building Services Engineers (CIBSE) produced a technical memorandum (TM36) presenting research on future climate impacts on building energy use and thermal comfort. One climate projection for each of four CO2 emissions scenarios was used in TM36, providing a deterministic outlook. As part of the UK Climate Impacts Programme (UKCIP), probabilistic climate projections are being studied in relation to building energy simulation techniques. Including uncertainty in climate projections is considered an important advance in climate impacts modelling and is included in the latest UKCIP data (UKCP09). Incorporating the stochastic nature of these new climate projections in building energy modelling requires a significant increase in data handling and careful statistical interpretation of the results to provide meaningful conclusions. This paper compares the results from building energy simulations when applying deterministic and probabilistic climate data. This is based on two case-study buildings: (i) a mixed-mode office building with exposed thermal mass and (ii) a mechanically ventilated, lightweight office building. Building (i) represents an energy-efficient building design that provides passive and active measures to maintain thermal comfort. Building (ii) relies entirely on mechanical means for heating and cooling, with its lightweight construction raising concern over increased cooling loads in a warmer climate. Devising an effective probabilistic approach highlighted greater uncertainty in predicting building performance, depending on the type of building modelled and the performance factors under consideration. Results indicate that the range of calculated quantities depends not only on the building type but also, strongly, on the performance parameters of interest. Uncertainty is likely to be particularly marked with regard to thermal comfort in naturally ventilated buildings.
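The data-handling pattern described above (running one building simulation per climate realization and summarizing the spread of results) can be sketched as follows. The mocked run_simulation() function, the ensemble size and the overheating-hours metric are placeholders, not the paper's actual models or data.

```python
# Sketch: handling an ensemble of probabilistic climate projections.
# run_simulation() is a placeholder for a building energy simulation call;
# here it is mocked with a simple random model for illustration only.
import numpy as np

rng = np.random.default_rng(42)

def run_simulation(weather_realization):
    # Placeholder: in practice this would invoke a building simulation tool
    # with one weather file and return, e.g., annual overheating hours.
    return 120 + 40 * weather_realization.mean() + rng.normal(scale=5)

# One "weather realization" per ensemble member (mocked as random hourly series).
ensemble = [rng.normal(loc=m / 100, size=8760) for m in range(100)]

overheating_hours = np.array([run_simulation(w) for w in ensemble])

# Probabilistic summary instead of a single deterministic answer.
for q in (10, 50, 90):
    print(f"{q}th percentile: {np.percentile(overheating_hours, q):.1f} h")
```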
Abstract:
The control of molecular architectures has been exploited in layer-by-layer (LbL) films deposited on Au interdigitated electrodes, thus forming an electronic tongue (e-tongue) system that reached an unprecedentedly high sensitivity (down to 10⁻¹² M) in detecting catechol. Such high sensitivity was made possible by using units containing the enzyme tyrosinase, which interacted specifically with catechol, and by processing impedance spectroscopy data with information visualization methods. These methods, including the parallel coordinates technique, were also useful for identifying the major contributors to the high distinguishing ability toward catechol. Among the several film architectures tested, the most efficient had a tyrosinase layer deposited atop LbL films of alternating layers of dioctadecyldimethylammonium bromide (DODAB) and 1,2-dipalmitoyl-sn-glycero-3-phospho-rac-(1-glycerol) (DPPG), viz., (DODAB/DPPG)5/DODAB/Tyr. The latter represents a more suitable medium for immobilizing tyrosinase than conventional polyelectrolytes. Furthermore, the distinction was more effective at low frequencies, where double-layer effects at the film/liquid interface dominate the electrical response. Because the optimization of film architectures based on information visualization is completely generic, the approach presented here may be extended to designing architectures for other types of applications in addition to sensing and biosensing. © 2013 American Chemical Society.
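A parallel-coordinates view of the kind used to inspect such impedance data can be sketched as below. The capacitance values, frequency columns and sample labels are hypothetical, standing in for the real e-tongue measurements.

```python
# Sketch: parallel-coordinates view of impedance features from an e-tongue.
# The feature values and frequency columns below are hypothetical; the real
# study used impedance spectra of LbL film architectures.
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Each row: one measurement; columns: capacitance at selected frequencies (nF).
data = pd.DataFrame({
    "1 Hz":   [8.2, 8.1, 6.5, 6.4, 5.1, 5.0],
    "10 Hz":  [5.4, 5.3, 4.6, 4.5, 3.9, 3.8],
    "100 Hz": [3.1, 3.0, 2.8, 2.8, 2.5, 2.4],
    "1 kHz":  [1.2, 1.2, 1.1, 1.1, 1.0, 1.0],
    "sample": ["blank", "blank", "catechol 1e-10 M", "catechol 1e-10 M",
               "catechol 1e-12 M", "catechol 1e-12 M"],
})

# One line per measurement; separation of the lines at low frequencies would
# reflect the distinguishing ability discussed in the abstract.
ax = parallel_coordinates(data, class_column="sample", colormap="viridis")
ax.set_ylabel("capacitance (nF)")
plt.tight_layout()
plt.show()
```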
Abstract:
Graduate Program in Education - IBRC
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
Graduate Program in Human Development and Technologies - IBRC
Abstract:
Nowadays, data handling and data analysis in High Energy Physics require a vast amount of computational power and storage. In particular, the worldwide LHC Computing Grid (LCG), an infrastructure and pool of services developed and deployed by a large community of physicists and computer scientists, has proven to be a game changer for the efficiency of data analyses during Run-I at the LHC, playing a crucial role in the Higgs boson discovery. Recently, the Cloud computing paradigm has been emerging and reaching a considerable level of adoption by many different organizations, scientific and otherwise. Clouds allow access to and use of large, not-owned computing resources shared among many scientific communities. Considering the challenging requirements of LHC physics in Run-II and beyond, the LHC computing community is interested in exploring Clouds and seeing whether they can provide a complementary approach - or even a valid alternative - to the existing technological solutions based on the Grid. In the LHC community, several experiments have been adopting Cloud approaches, and in particular the experience of the CMS experiment is of relevance to this thesis. The LHC Run-II has just started, and Cloud-based solutions are already in production for CMS. However, other approaches to Cloud usage are being considered and are at the prototype level, such as the work done in this thesis. This effort is of paramount importance for equipping CMS with the capability to elastically and flexibly access and utilize the computing resources needed to face the challenges of Run-III and Run-IV. The main purpose of this thesis is to present forefront Cloud approaches that allow the CMS experiment to extend to on-demand resources dynamically allocated as needed. Moreover, direct access to Cloud resources is presented as a suitable use case to meet the CMS experiment's needs. Chapter 1 presents an overview of High Energy Physics at the LHC and of the CMS experience in Run-I, as well as the preparation for Run-II. Chapter 2 describes the current CMS Computing Model, and Chapter 3 presents the Cloud approaches pursued and used within the CMS Collaboration. Chapter 4 and Chapter 5 discuss the original and forefront work done in this thesis to develop and test working prototypes of elastic extensions of CMS computing resources on Clouds, and of HEP Computing "as a Service". The impact of this work on benchmark CMS physics use cases is also demonstrated.
Abstract:
Several practical obstacles in data handling and evaluation complicate the use of quantitative localized magnetic resonance spectroscopy (qMRS) in clinical routine MR examinations. To overcome these obstacles, a clinically feasible MR pulse sequence protocol based on standard available MR pulse sequences for qMRS has been implemented, along with functionalities newly added to the free software package jMRUI-v5.0, to make qMRS attractive for clinical routine. This enables (a) easy and fast DICOM data transfer between the MR console and the qMRS computer, (b) visualization of combined MR spectroscopy and imaging, (c) creation and network transfer of spectroscopy reports in DICOM format, (d) integration of advanced water reference models for absolute quantification, and (e) setup of databases containing normal metabolite concentrations of healthy subjects. To demonstrate the work-flow of qMRS using these implementations, databases of normal metabolite concentrations in different regions of brain tissue were created using spectroscopic data acquired in 55 normal subjects (age range 6-61 years) on 1.5T and 3T MR systems, and the work-flow is illustrated in one clinical case of a typical brain tumor (primitive neuroectodermal tumor). The MR pulse sequence protocol and the newly implemented software functionalities facilitate the incorporation of qMRS, and of reference data on normal metabolite concentrations, in daily clinical routine. Magn Reson Med, 2013. © 2012 Wiley Periodicals, Inc.
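The water-reference absolute quantification mentioned in point (d) is commonly based on scaling a fitted metabolite signal by an unsuppressed water signal. The sketch below shows a minimal version of that calculation; the signal areas, relaxation times, tissue water fraction and proton counts are illustrative assumptions, and the actual jMRUI reference models include further corrections.

```python
# Sketch of internal water referencing for absolute metabolite quantification.
# Numbers (signal areas, relaxation times, water content) are illustrative;
# real water-reference models include additional corrections.
import math

def water_referenced_concentration(
    s_metab,        # fitted metabolite signal area
    s_water,        # unsuppressed water signal area
    n_protons,      # protons contributing to the metabolite resonance
    water_conc_mM=55510 * 0.78,   # pure water (mM) x assumed tissue water fraction
    te=0.030, tr=1.5,             # sequence timing (s)
    t1_m=1.3, t2_m=0.25,          # assumed metabolite relaxation times (s)
    t1_w=1.0, t2_w=0.08,          # assumed tissue water relaxation times (s)
):
    # Correct both signals for T1 saturation and T2 decay, then scale by the
    # water concentration and the proton-count ratio (2 protons per water).
    r_metab = math.exp(-te / t2_m) * (1 - math.exp(-tr / t1_m))
    r_water = math.exp(-te / t2_w) * (1 - math.exp(-tr / t1_w))
    return (s_metab / s_water) * (2 / n_protons) * water_conc_mM * (r_water / r_metab)

# Example with hypothetical signal areas for a 3-proton resonance.
print(f"{water_referenced_concentration(0.035, 100.0, n_protons=3):.1f} mM (illustrative)")
```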
Abstract:
As education providers increasingly integrate digital learning media into their education processes, the need for the systematic management of learning materials and learning arrangements becomes clearer. Digital repositories, often called Learning Object Repositories (LOR), promise to provide an answer to this challenge. This article is composed of two parts. In this part, we derive technological and pedagogical requirements for LORs from a concretization of information quality criteria for e-learning technology. We review the evolution of learning object repositories and discuss their core features in the context of pedagogical requirements, information quality demands, and e-learning technology standards. We conclude with an outlook on Part 2, which presents concrete technical solutions, in particular networked repository architectures.
Abstract:
In Part 1 of this article we discussed the need for information quality and the systematic management of learning materials and learning arrangements. Digital repositories, often called Learning Object Repositories (LOR), were introduced as a promising answer to this challenge. We also derived technological and pedagogical requirements for LORs from a concretization of information quality criteria for e-learning technology. This second part presents technical solutions that particularly address the demands of open education movements, which aspire to a global culture of reuse and sharing. From this viewpoint, we develop core requirements for scalable network architectures for educational content management. We then present edu-sharing, an advanced example of a network of homogeneous repositories for learning resources, and discuss related technology. We conclude with an outlook on emerging developments towards open and networked system architectures in e-learning.
Abstract:
Support systems for programming education are widespread, but common standards for exchanging general (learning) content and tests do not meet the specific requirements of programming exercises, such as handling complex submissions consisting of several files or combining different (automatic) grading methods. As a result, exercises cannot be exchanged between systems, which would, however, be desirable given the high effort involved in developing good exercises. This paper presents an extensible XML-based format for the exchange of programming exercises that is already being used prototypically by several systems. The specification of the exchange format is available online [PFMA].
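To make the idea of such an exchange document concrete, the sketch below assembles a small exercise description with several files and two grader configurations. The element and attribute names are invented for illustration only and do not follow the actual specification referenced as [PFMA].

```python
# Sketch of what an XML-based exercise exchange document might contain:
# several submission files plus multiple grader configurations. Element
# and attribute names are hypothetical, not taken from the specification.
import xml.etree.ElementTree as ET

task = ET.Element("task", {"lang": "en", "title": "Implement a sorting algorithm"})

files = ET.SubElement(task, "files")
ET.SubElement(files, "file", {"role": "template", "path": "src/Sort.java"})
ET.SubElement(files, "file", {"role": "test", "path": "test/SortTest.java"})

graders = ET.SubElement(task, "graders")
ET.SubElement(graders, "grader", {"type": "unit-test", "weight": "0.7"})
ET.SubElement(graders, "grader", {"type": "static-analysis", "weight": "0.3"})

print(ET.tostring(task, encoding="unicode"))
```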