858 results for [JEL:C5] Mathematical and Quantitative Methods - Econometric Modeling


Relevance: 100.00%

Abstract:

Since the beginning of the National Program for Production and Use of Biodiesel in Brazil, in 2004, different raw materials have been evaluated for biodiesel production, seeking to combine the country's agricultural diversity with the desire to reduce production costs. To determine the chemical composition of biodiesel produced from common vegetable oils, international methods have been widely used in Brazil. However, analytical problems have been detected when analyzing biodiesel samples produced from some alternative raw materials, as was the case for biodiesel from castor oil. To overcome these problems, new methodologies were developed using different chromatographic columns, standards and quantitative methods. The priority was to simplify the equipment configuration, perform faster analyses, reduce costs and facilitate the routine of biodiesel research and production laboratories. For quantifying free glycerin, ethylene glycol was used instead of 1,2,4-butanetriol, without loss of quality in the results; ethylene glycol is a cheaper and more readily available standard. For methanol analyses, headspace sampling was not used, lowering the cost of the equipment required. A detailed determination of the esters contributed to a deeper knowledge of biodiesel composition. The goals of this thesis, reported in the following pages, are to describe the experiments and conclusions of the research that resulted in the development of alternative methods for quality control of the composition of biodiesel produced in Brazil, a country with considerable variability of species in agriculture.
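The internal-standard substitution described above (ethylene glycol in place of 1,2,4-butanetriol) boils down to a simple calibration calculation. A minimal sketch in Python, with invented peak areas and concentrations rather than the thesis's actual figures:

```python
# Hedged sketch: internal-standard quantification of free glycerin by GC,
# using ethylene glycol as the internal standard (all values illustrative).

def response_factor(area_analyte, conc_analyte, area_is, conc_is):
    """Relative response factor from a calibration standard."""
    return (area_analyte / conc_analyte) / (area_is / conc_is)

def analyte_conc(area_analyte, area_is, conc_is, rf):
    """Concentration of the analyte in a sample spiked with the internal standard."""
    return (area_analyte / (rf * area_is)) * conc_is

# Calibration run: 0.05 mg/mL glycerin + 0.10 mg/mL ethylene glycol
rf = response_factor(area_analyte=1200.0, conc_analyte=0.05,
                     area_is=2000.0, conc_is=0.10)

# Biodiesel sample spiked with 0.10 mg/mL ethylene glycol
c_glycerin = analyte_conc(area_analyte=300.0, area_is=2100.0,
                          conc_is=0.10, rf=rf)
print(round(c_glycerin, 4))  # mg/mL of free glycerin in the sample
```

The response factor cancels detector-sensitivity differences between analyte and standard, which is why a cheaper standard can substitute without degrading the result.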

Abstract:

The aim of this thesis was to describe and explore how the partner relationship of patient–partner dyads is affected following cardiac disease and, in particular, atrial fibrillation (AF) in one of the spouses. The thesis is based on four individual studies with different designs: descriptive (I), explorative (II, IV), and cross-sectional (III). Applied methods comprised a systematic review (I) and qualitative (II, IV) and quantitative methods (III). Participants in the studies were couples in which one of the spouses was afflicted with AF. Coherent with a systemic perspective, the research focused on the dyad as the unit of analysis. To identify and describe the current research position and knowledge base, the data for the systematic review were analyzed using an integrative approach. To explore couples’ main concern, interview data (n=12 couples) in study II were analyzed using classical grounded theory. Associations between patients and partners (n=91 couples) were analyzed through the Actor–Partner Interdependence Model using structural equation modelling (III). To explore couples’ illness beliefs, interview data (n=9 couples) in study IV were analyzed using Gadamerian hermeneutics. Study I revealed five themes of how the partner relationship is affected following cardiac disease: overprotection, communication deficiency, sexual concerns, changes in domestic roles, and adjustment to illness. Study II showed that couples living with AF experienced uncertainty as the common main concern, rooted in causation of AF and apprehension about AF episodes. The theory of Managing Uncertainty revealed the strategies of explicit sharing (mutual collaboration and finding resemblance) and implicit sharing (keeping distance and tacit understanding). Patients and spouses showed significant differences in terms of self-reported physical and mental health, where patients rated themselves lower than spouses did (III). 
Several actor effects were identified, suggesting that emotional distress affects and is associated with perceived health. Patient partner effects and spouse partner effects were observed for vitality, indicating that higher levels of symptoms of depression in patients and spouses were associated with lower vitality in their partners. In study IV, couples’ core and secondary illness beliefs were revealed. From the core illness belief that “the heart is a representation of life,” two secondary illness beliefs were derived: AF is a threat to life, and AF can and must be explained. From the core illness belief that “change is an integral part of life,” two secondary illness beliefs were derived: AF is a disruption in our lives, and AF will not interfere with our lives. Finally, from the core illness belief that “adaptation is fundamental in life,” two secondary illness beliefs were derived: AF entails adjustment in daily life, and AF entails confidence in and adherence to professional care. In conclusion, the thesis results suggest that illness, in terms of cardiac disease and AF, affected and influenced the couple on aspects such as making sense of AF, responding to AF, and mutually incorporating and dealing with AF in their daily lives. In the light of this, the thesis results suggest that clinicians working with persons with AF and their partners should employ a systemic view with consideration of the couple’s reciprocity and interdependence, but also have knowledge regarding AF, in terms of pathophysiology, the nature of AF (i.e., cause, consequences, and trajectory), and treatments. A possible approach to achieve this is a clinical utilization of an FSN-based framework, such as the FamHC. Even if a formalized FSN framework is not utilized, partners should not be neglected but, rather, be considered a resource and be a part of clinical caring activities. 
This could be met by inviting partners to take part in rounds, treatment decisions, discharge calls or follow-up visits or other clinical caring activities. Likewise, interventional studies should include the couple as a unit of analysis as well as the target of interventions.
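The actor and partner effects described above can be illustrated with a toy computation. This is a hedged sketch on simulated dyads using ordinary least squares rather than the study's structural equation modelling, and every coefficient and sample size below is invented:

```python
import numpy as np

# Hedged sketch of the Actor-Partner Interdependence Model (APIM) on
# simulated dyads: each spouse's outcome (e.g. vitality) depends on their
# own depressive symptoms (actor effect) and their partner's (partner effect).

rng = np.random.default_rng(0)
n = 500                          # number of dyads (illustrative)
x_patient = rng.normal(size=n)   # patient depressive symptoms
x_spouse = rng.normal(size=n)    # spouse depressive symptoms

true_actor, true_partner = -0.6, -0.3
y_patient = (true_actor * x_patient + true_partner * x_spouse
             + rng.normal(scale=0.5, size=n))   # patient vitality outcome

# Estimate actor and partner effects for the patient outcome by least squares
X = np.column_stack([x_patient, x_spouse, np.ones(n)])
actor_hat, partner_hat, intercept = np.linalg.lstsq(X, y_patient, rcond=None)[0]
print(round(actor_hat, 2), round(partner_hat, 2))
```

The same regression run with the roles swapped gives the spouse's actor and partner effects; SEM additionally models both outcomes jointly with correlated errors.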

Abstract:

The purpose of this case study is to report on the use of learning journals as a strategy to encourage critical reflection in the field of graphic design. Very little empirical research has been published regarding the use of critical reflection in learning journals in this field. Furthermore, nothing has been documented at the college level. To that end, the goal of this research endeavor was to investigate whether second-year students in the NewMedia and Publication Design Program at a small Anglophone CEGEP in Québec, enrolled in a Page Layout and Design course, learn more deeply by reflecting in action during design projects or reflecting on action after completing design projects. Secondarily, indications of a possible change in self-efficacy were examined. Two hypotheses were posited: 1) reflection-on-action journaling will promote a deeper approach to learning than reflection-in-action journaling, and 2) the level of self-efficacy in graphic design improves as students are encouraged to think reflectively. Using both qualitative and quantitative methods, a mixed methods approach was used to collect and analyze the data. Content analysis of journal entries and interview responses was the primary method used to address the first hypothesis. Students were required to journal twice for each of three projects, once during the project and again one week after the project had been submitted. In addition, data regarding the students' perception of journaling was obtained through administering a survey and conducting interviews. For the second hypothesis, quantitative methods were used through the use of two surveys, one administered early in the Fall 2011 semester and the second administered early in the Winter 2012 semester. Supplementary data regarding self-efficacy was obtained in the form of content analysis of journal entries and interviews. Coded journal entries firmly supported the hypothesis that reflection-on-action journaling promotes deep learning. 
Using a taxonomy developed by Kember et al. (1999) wherein "critical reflection" is considered the highest level of reflection, it was found that only 5% of the coded responses in the reflection-in-action journals were deemed of the highest level, whereas 39% were considered critical reflection in the reflection-on-action journals. The findings from the interviews suggest that students had some initial concerns about the value of journaling, but these concerns were later dismissed as students learned that journaling was a valuable tool that helped them reflect and learn. All participants indicated that journaling changed their learning processes as they thought much more about what they were doing while they were doing it. They were taking the learning they had acquired and thinking about how they would apply it to new projects; this is critical reflection. The survey findings did not support the conclusive results of the comparison of journal instruments, where an increase of 35% in critical reflection was noted in the reflection-on-action journals. In Chapter 5, reasons for this incongruence are explored. Furthermore, based on the journals, surveys, and interviews, there is not enough evidence at this time to support the hypothesis that self-efficacy improves when students are encouraged to think reflectively. It could be hypothesized, however, that one's self-efficacy does not change in such a short period of time. In conclusion, the findings established in this case study make a practical contribution to the literature concerning the promotion of deep learning in the field of graphic design, as they supported this researcher's hypothesis that reflection-on-action journaling promotes deeper learning than reflection-in-action journaling. When examining the increases in critical reflection from reflection-in-action to the reflection-on-action journals, it was found that all students but one showed an increase in critical reflection in reflection-on-action journals. 
It is therefore recommended that production-oriented program instructors consider integrating reflection-on-action journaling into their courses where projects are given.

Abstract:

This study presents a review of new instruments for the impact assessment of libraries and a case study of the impact evaluation of the Library of the Faculty of Science, University of Porto (FCUP), from the students’ point of view. We conducted mixed methods research, i.e., including both qualitative data, to describe characteristics, in particular human actions, and quantitative data, represented by numbers that indicate exact amounts and can be statistically manipulated. Applying the International Standard ISO 16439:2014(E) - Information and documentation - Methods and procedures for assessing the impact of libraries, we collected 20 opinion texts from students of different nationalities, published in «Notícias da Biblioteca» from January 2013 to December 2014, and conducted seven interviews.

Abstract:

Adaptability and invisibility are hallmarks of modern terrorism, and keeping pace with its dynamic nature presents a serious challenge for societies throughout the world. Innovations in computer science have incorporated applied mathematics to develop a wide array of predictive models to support the variety of approaches to counterterrorism. Predictive models are usually designed to forecast the location of attacks. Although this may protect individual structures or locations, it does not reduce the threat—it merely changes the target. While predictive models dedicated to events or social relationships receive much attention where the mathematical and social science communities intersect, models dedicated to terrorist locations such as safe-houses (rather than their targets or training sites) are rare and possibly nonexistent. At the time of this research, there were no publicly available models designed to predict locations where violent extremists are likely to reside. This research uses France as a case study to present a complex systems model that incorporates multiple quantitative, qualitative and geospatial variables that differ in terms of scale, weight, and type. Though many of these variables are recognized by specialists in security studies, there remains controversy with respect to their relative importance, degree of interaction, and interdependence. Additionally, some of the variables proposed in this research are not generally recognized as drivers, yet they warrant examination based on their potential role within a complex system. This research tested multiple regression models and determined that geographically-weighted regression analysis produced the most accurate results by accommodating non-stationary coefficient behavior, demonstrating that geographic variables are critical to understanding and predicting the phenomenon of terrorism. 
This dissertation presents a flexible prototypical model that can be refined and applied to other regions to inform stakeholders such as policy-makers and law enforcement in their efforts to improve national security and enhance quality-of-life.
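The core idea of geographically-weighted regression, re-estimating coefficients locally with distance-decaying weights, can be sketched on synthetic data. The kernel, bandwidth and data below are illustrative assumptions, not the dissertation's model:

```python
import numpy as np

# Hedged sketch of geographically-weighted regression (GWR): at each
# location, fit a weighted least-squares model whose weights decay with
# distance (Gaussian kernel), so coefficients can vary over space.

rng = np.random.default_rng(1)
n = 200
coords = rng.uniform(0, 10, size=(n, 2))      # point locations
x = rng.normal(size=n)                        # a single predictor
beta_true = 1.0 + 0.2 * coords[:, 0]          # coefficient drifts west -> east
y = beta_true * x + rng.normal(scale=0.1, size=n)

def gwr_slope(at, coords, x, y, bandwidth=2.0):
    """Local slope estimate at location `at` via kernel-weighted least squares."""
    d2 = np.sum((coords - at) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))    # Gaussian spatial kernel
    X = np.column_stack([x, np.ones_like(x)])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]

west = gwr_slope(np.array([1.0, 5.0]), coords, x, y)
east = gwr_slope(np.array([9.0, 5.0]), coords, x, y)
print(west < east)  # the local slope recovers the west-to-east drift
```

A global (stationary) regression would return a single averaged slope, which is exactly the non-stationarity problem GWR addresses.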

Abstract:

The volume aims at providing an outlet for some of the best papers presented at the 15th Annual Conference of the African Econometric Society, which is one of the “chapters” of the International Econometric Society. Many of these papers represent the state of the art in financial econometrics and applied econometric modeling, and some also provide useful simulations that shed light on the models' ability to generate meaningful scenarios for forecasting and policy analysis.

Abstract:

Background: Gender differences in cycling are well-documented. However, most analyses of gender differences make broad comparisons, with few studies modeling male and female cycling patterns separately for recreational and transport cycling. This modeling is important in order to improve efforts to promote cycling to women and men in countries like Australia with low rates of transport cycling. The main aim of this study was to examine gender differences in cycling patterns and in motivators and constraints to cycling, separately for recreational and transport cycling.

Methods: Adult members of a Queensland, Australia, community bicycling organization completed an online survey about their cycling patterns; cycling purposes; and personal, social and perceived environmental motivators and constraints (47% response rate). Closed and open-ended questions were completed. Using the quantitative data, multivariable linear, logistic and ordinal regression models were used to examine associations between gender and cycling patterns, motivators and constraints. The qualitative data were thematically analysed to expand upon the quantitative findings.

Results: In this sample of 1862 bicyclists, men were more likely than women to cycle for recreation and for transport, and they cycled for longer. Most transport cycling was for commuting, with men more likely than women to commute by bicycle. Men were more likely to cycle on-road, and women off-road. However, most men and women did not prefer to cycle on-road without designated bicycle lanes, and qualitative data indicated a strong preference by men and women for bicycle-only off-road paths. Both genders reported personal factors (health and enjoyment related) as motivators for cycling, although women were more likely to agree that other personal, social and environmental factors were also motivating. The main constraints for both genders and both cycling purposes were perceived environmental factors related to traffic conditions, motorist aggression and safety. Women, however, reported more constraints, and were more likely to report as constraints other environmental factors and personal factors.

Conclusion: Differences found in men’s and women’s cycling patterns, motivators and constraints should be considered in efforts to promote cycling, particularly in efforts to increase cycling for transport.

Abstract:

The authors present a qualitative and quantitative comparison of various similarity measures that form the kernel of common area-based stereo-matching systems. The authors compare classical difference and correlation measures as well as nonparametric measures based on the rank and census transforms for a number of outdoor images. For robotic applications, important considerations include robustness to image defects such as intensity variation and noise, the number of false matches, and computational complexity. In the absence of ground truth data, the authors compare the matching techniques based on the percentage of matches that pass the left-right consistency test. The authors also evaluate the discriminatory power of several match validity measures that are reported in the literature for eliminating false matches and for estimating match confidence. For guidance applications, it is essential to have an estimate of confidence in the three-dimensional points generated by stereo vision. Finally, a new validity measure, the rank constraint, is introduced that is capable of resolving ambiguous matches for rank transform-based matching.
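As a hedged illustration of two of the ingredients discussed, the census transform and the left-right consistency test, here is a toy 1-D scanline example (the paper's actual implementation operates on 2-D image windows; all signal values are invented):

```python
# Hedged sketch: census-transform matching with a left-right consistency
# check, one of the validity tests used to flag false stereo matches.

def census(signal, window=3):
    """Census transform: bit pattern of each neighbour being darker than the centre."""
    half = window // 2
    codes = []
    for i in range(half, len(signal) - half):
        bits = 0
        for j in range(-half, half + 1):
            if j != 0:
                bits = (bits << 1) | int(signal[i + j] < signal[i])
        codes.append(bits)
    return codes

def best_disparity(codes_a, codes_b, i, max_d, sign):
    """Disparity minimising the Hamming distance between census codes."""
    costs = [bin(codes_a[i] ^ codes_b[i + sign * d]).count("1")
             for d in range(max_d + 1)]
    return costs.index(min(costs))

left = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8]   # left scanline intensities
right = left[2:] + left[:2]                   # right image: shifted by disparity 2
lc, rc = census(left), census(right)

d_lr = best_disparity(lc, rc, i=7, max_d=4, sign=-1)          # left -> right
d_rl = best_disparity(rc, lc, i=7 - d_lr, max_d=4, sign=+1)   # right -> left
consistent = d_lr == d_rl   # left-right consistency test
print(d_lr, consistent)
```

A match is kept only when both directions agree on the disparity; disagreement flags a likely false match, which is the basis of the consistency statistics the paper reports.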

Abstract:

Airport efficiency is important because it has a direct impact on customer safety and satisfaction and therefore the financial performance and sustainability of airports, airlines, and affiliated service providers. This is especially so in a world characterized by an increasing volume of both domestic and international air travel, price and other forms of competition between rival airports, airport hubs and airlines, and rapid and sometimes unexpected changes in airline routes and carriers. It also reflects expansion in the number of airports handling regional, national, and international traffic and the growth of complementary airport facilities including industrial, commercial, and retail premises. This has fostered a steadily increasing volume of research aimed at modeling and providing best-practice measures and estimates of airport efficiency using mathematical and econometric frontiers. The purpose of this chapter is to review these various methods as they apply to airports throughout the world. Apart from discussing the strengths and weaknesses of the different approaches and their key findings, the chapter also examines the steps faced by researchers as they move through the modeling process in defining airport inputs and outputs and the purported efficiency drivers. Accordingly, the chapter provides guidance to those conducting empirical research on airport efficiency and serves as an aid for aviation regulators and airport operators among others interpreting airport efficiency research outcomes.
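At its simplest, frontier-style efficiency measurement compares each unit's output/input ratio with the best observed ratio. This toy single-input, single-output sketch with invented figures illustrates the idea that the mathematical and econometric frontier methods reviewed here generalize:

```python
# Hedged sketch: each airport's efficiency is its output/input ratio
# relative to the best ratio observed (the "frontier"). Figures invented.

airports = {
    "A": {"staff": 100, "passengers": 500_000},
    "B": {"staff": 80,  "passengers": 480_000},
    "C": {"staff": 120, "passengers": 540_000},
}

ratios = {name: a["passengers"] / a["staff"] for name, a in airports.items()}
frontier = max(ratios.values())                         # best-practice ratio
efficiency = {name: r / frontier for name, r in ratios.items()}

for name, e in sorted(efficiency.items()):
    print(name, round(e, 3))   # 1.0 marks the frontier airport
```

Real frontier methods (DEA, stochastic frontier analysis) extend this to many inputs and outputs and, in the stochastic case, separate inefficiency from random noise.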

Abstract:

Background: Many studies have found associations between climatic conditions and dengue transmission. However, there is a debate about the future impacts of climate change on dengue transmission. This paper reviewed epidemiological evidence on the relationship between climate and dengue with a focus on quantitative methods for assessing the potential impacts of climate change on global dengue transmission.

Methods: A literature search was conducted in October 2012, using the electronic databases PubMed, Scopus, ScienceDirect, ProQuest, and Web of Science. The search focused on peer-reviewed journal articles published in English from January 1991 through October 2012.

Results: Sixteen studies met the inclusion criteria and most studies showed that the transmission of dengue is highly sensitive to climatic conditions, especially temperature, rainfall and relative humidity. Studies on the potential impacts of climate change on dengue indicate increased climatic suitability for transmission and an expansion of the geographic regions at risk during this century. A variety of quantitative modelling approaches were used in the studies. Several key methodological issues and current knowledge gaps were identified through this review.

Conclusions: It is important to assemble spatio-temporal patterns of dengue transmission compatible with long-term data on climate and other socio-ecological changes and this would advance projections of dengue risks associated with climate change.

Keywords: Climate; Dengue; Models; Projection; Scenarios

Abstract:

Many species inhabit fragmented landscapes, resulting either from anthropogenic or from natural processes. The ecological and evolutionary dynamics of spatially structured populations are affected by a complex interplay between endogenous and exogenous factors. The metapopulation approach, simplifying the landscape to a discrete set of patches of breeding habitat surrounded by unsuitable matrix, has become a widely applied paradigm for the study of species inhabiting highly fragmented landscapes. In this thesis, I focus on the construction of biologically realistic models and their parameterization with empirical data, with the general objective of understanding how the interactions between individuals and their spatially structured environment affect ecological and evolutionary processes in fragmented landscapes. I study two hierarchically structured model systems, which are the Glanville fritillary butterfly in the Åland Islands, and a system of two interacting aphid species in the Tvärminne archipelago, both being located in South-Western Finland. The interesting and challenging feature of both study systems is that the population dynamics occur over multiple spatial scales that are linked by various processes. My main emphasis is in the development of mathematical and statistical methodologies. For the Glanville fritillary case study, I first build a Bayesian framework for the estimation of death rates and capture probabilities from mark-recapture data, with the novelty of accounting for variation among individuals in capture probabilities and survival. I then characterize the dispersal phase of the butterflies by deriving a mathematical approximation of a diffusion-based movement model applied to a network of patches. I use the movement model as a building block to construct an individual-based evolutionary model for the Glanville fritillary butterfly metapopulation. 
I parameterize the evolutionary model using a pattern-oriented approach, and use it to study how the landscape structure affects the evolution of dispersal. For the aphid case study, I develop a Bayesian model of hierarchical multi-scale metapopulation dynamics, where the observed extinction and colonization rates are decomposed into intrinsic rates operating specifically at each spatial scale. In summary, I show how analytical approaches, hierarchical Bayesian methods and individual-based simulations can be used individually or in combination to tackle complex problems from many different viewpoints. In particular, hierarchical Bayesian methods provide a useful tool for decomposing ecological complexity into more tractable components.
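The extinction-colonization dynamics underlying such metapopulation models can be illustrated with a stochastic patch-occupancy simulation. This is a simplified Levins-style sketch with invented rates, not the thesis's hierarchical Bayesian formulation:

```python
import random

# Hedged sketch: stochastic patch-occupancy metapopulation dynamics.
# Each year an occupied patch goes extinct with probability e, and an empty
# patch is colonized with probability c times the fraction of occupied patches.

def simulate(n_patches=100, t_steps=200, c=0.3, e=0.1, seed=42):
    rng = random.Random(seed)
    occupied = [True] * (n_patches // 2) + [False] * (n_patches - n_patches // 2)
    for _ in range(t_steps):
        p_occ = sum(occupied) / n_patches
        occupied = [
            (rng.random() > e) if occ else (rng.random() < c * p_occ)
            for occ in occupied
        ]
    return sum(occupied) / n_patches

final = simulate()
print(round(final, 2))  # fluctuates near the Levins equilibrium 1 - e/c
```

With these rates the deterministic equilibrium occupancy is 1 - e/c ≈ 0.67; the hierarchical models in the thesis decompose such observed rates into scale-specific components.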

Abstract:

Representation and quantification of uncertainty in climate change impact studies are a difficult task. Several sources of uncertainty arise in studies of hydrologic impacts of climate change, such as those due to choice of general circulation models (GCMs), scenarios and downscaling methods. Recently, much work has focused on uncertainty quantification and modeling in regional climate change impacts. In this paper, an uncertainty modeling framework is evaluated, which uses a generalized uncertainty measure to combine GCM, scenario and downscaling uncertainties. The Dempster-Shafer (D-S) evidence theory is used for representing and combining uncertainty from various sources. A significant advantage of the D-S framework over the traditional probabilistic approach is that it allows for the allocation of a probability mass to sets or intervals, and can hence handle both aleatory or stochastic uncertainty, and epistemic or subjective uncertainty. This paper shows how the D-S theory can be used to represent beliefs in some hypotheses such as hydrologic drought or wet conditions, describe uncertainty and ignorance in the system, and give a quantitative measurement of belief and plausibility in results. The D-S approach has been used in this work for information synthesis using various evidence combination rules having different conflict modeling approaches. A case study is presented for hydrologic drought prediction using downscaled streamflow in the Mahanadi River at Hirakud in Orissa, India. Projections of n most likely monsoon streamflow sequences are obtained from a conditional random field (CRF) downscaling model, using an ensemble of three GCMs for three scenarios, which are converted to monsoon standardized streamflow index (SSFI-4) series. This range is used to specify the basic probability assignment (bpa) for a Dempster-Shafer structure, which represents uncertainty associated with each of the SSFI-4 classifications. 
These uncertainties are then combined across GCMs and scenarios using various evidence combination rules given by the D-S theory. A Bayesian approach is also presented for this case study, which models the uncertainty in projected frequencies of SSFI-4 classifications by deriving a posterior distribution for the frequency of each classification, using an ensemble of GCMs and scenarios. Results from the D-S and Bayesian approaches are compared, and relative merits of each approach are discussed. Both approaches show an increasing probability of extreme, severe and moderate droughts and decreasing probability of normal and wet conditions in Orissa as a result of climate change.
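Dempster's rule of combination, the basic operation behind the evidence combination described above, can be sketched on a toy frame of discernment. The masses below are invented placeholders for the basic probability assignments the paper derives from downscaled streamflow:

```python
from itertools import product

# Hedged sketch of Dempster's rule on a toy frame {drought, normal, wet}.

def combine(m1, m2):
    """Dempster's rule: combine two basic probability assignments (bpa's).

    Focal elements are frozensets over the frame; mass falling on empty
    (conflicting) intersections is renormalized away.
    """
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}

D, N, W = frozenset({"drought"}), frozenset({"normal"}), frozenset({"wet"})
DN = frozenset({"drought", "normal"})   # mass on a set expresses ignorance

m_gcm1 = {D: 0.6, DN: 0.3, N: 0.1}   # evidence from one GCM/scenario
m_gcm2 = {D: 0.5, DN: 0.4, W: 0.1}   # evidence from another

m = combine(m_gcm1, m_gcm2)
belief_drought = m.get(D, 0.0)   # Bel({drought}): mass exactly on {drought}
print(round(belief_drought, 3))
```

Allocating mass to the set {drought, normal} rather than to a single state is what lets the D-S framework represent ignorance, the advantage over a purely probabilistic assignment noted in the abstract.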

Abstract:

The study of variable stars is an important topic of modern astrophysics. After the invention of powerful telescopes and high-resolution CCDs, variable star data are accumulating on the order of petabytes. This huge amount of data requires many automated methods as well as human experts. This thesis is devoted to data analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary topic of Astrostatistics. For an observer on earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various reasons. In some cases the variation is due to internal thermonuclear processes; these are generally known as intrinsic variables. In other cases it is due to external processes, like eclipse or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospherical stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. The sequence of photometric observations of a variable star produces time series data, which contain time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star.

One way to identify the type of a variable star and to classify it is for an expert to visually inspect the phased light curve. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties like mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters like period, amplitude and phase, as well as some other derived parameters. Of these, the period is the most important, since wrong periods can lead to sparse light curves and misleading information. Time series analysis is a method of applying mathematical and statistical tests to data to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. This is due to daily varying daylight and weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic ray particles. Many large-scale astronomical surveys such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS provide variable star time series data, even though their primary intention is not variable star observation.

The Center for Astrostatistics, Pennsylvania State University, was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric (assuming some underlying distribution for the data) and non-parametric (not assuming any statistical model, like Gaussian, etc.) methods. Many of the parametric methods are based on variations of discrete Fourier transforms, like the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and the Significant Spectrum (SigSpec) method by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them can fully recover the true periods. Wrong period detection can have several causes, such as power leakage to other frequencies, which is due to the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification".

It would be beneficial for the variable star astronomical community if basic parameters, such as period, amplitude and phase, were obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For classifying newly discovered variable stars and entering them in the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
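Of the period-search families mentioned, the non-parametric Phase Dispersion Minimisation idea is easy to sketch: fold the data on trial periods and keep the period that minimises within-bin scatter. The light curve below is synthetic and illustrative (uneven sampling, true period 2.5 days):

```python
import math

# Hedged sketch of Phase Dispersion Minimisation (PDM, Stellingwerf 1978):
# the statistic is the ratio of mean within-bin variance of the phased curve
# to the total variance; a good fold gives a small value.

def pdm_statistic(times, mags, period, n_bins=10):
    phases = [(t / period) % 1.0 for t in times]
    bins = [[] for _ in range(n_bins)]
    for ph, m in zip(phases, mags):
        bins[min(int(ph * n_bins), n_bins - 1)].append(m)
    mean_all = sum(mags) / len(mags)
    var_all = sum((m - mean_all) ** 2 for m in mags) / len(mags)
    var_within, count = 0.0, 0
    for b in bins:
        if len(b) > 1:
            mu = sum(b) / len(b)
            var_within += sum((m - mu) ** 2 for m in b)
            count += len(b)
    return (var_within / count) / var_all

# Synthetic variable star: true period 2.5 d, unevenly sampled times
true_period = 2.5
times = [0.13 * i ** 1.01 for i in range(1, 300)]
mags = [12.0 + 0.4 * math.sin(2 * math.pi * t / true_period) for t in times]

trials = [1.0 + 0.01 * k for k in range(300)]   # trial periods 1.00 .. 3.99 d
best = min(trials, key=lambda p: pdm_statistic(times, mags, p))
print(best)
```

Because PDM only compares scatter between folds, it makes no assumption about the light-curve shape, which is why it is classed with the non-parametric methods above.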

Abstract:

Quantitative reverse-transcription polymerase chain reaction (qRT-PCR) is a standard assay in molecular medicine for gene expression analysis. Samples from incisional/needle biopsies, laser-microdissected tumor cells and other biologic sources, normally available in clinical cancer studies, generate very small amounts of RNA that are restrictive for expression analysis. As a consequence, an RNA amplification procedure is required to assess the gene expression levels of such sample types. The reproducibility and accuracy of relative gene expression data produced by a sensitive methodology such as qRT-PCR when cDNA converted from amplified (A) RNA is used as template has not yet been properly addressed. In this study, to properly evaluate this issue, we performed 1 round of linear RNA amplification in 2 breast cell lines (C5.2 and HB4a) and assessed the relative expression of 34 genes using cDNA converted from both nonamplified (NA) and A RNA. Relative gene expression was obtained from beta actin or glyceraldehyde 3-phosphate dehydrogenase normalized data using different dilutions of cDNA, wherein the variability and fold-change differences in the expression of the 2 methods were compared. Our data showed that 1 round of linear RNA amplification, even with suboptimal-quality RNA, is appropriate to generate reproducible and high-fidelity qRT-PCR relative expression data that have similar confidence levels as those from NA samples. The use of cDNA that is converted from both A and NA RNA in a single qRT-PCR experiment clearly creates bias in relative gene expression data.
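Relative expression in such experiments is commonly computed with the comparative 2^-ΔΔCt method; comparing the fold change obtained from amplified versus nonamplified cDNA is one way to check the fidelity the study reports. The Ct values below are invented for illustration:

```python
# Hedged sketch: relative gene expression from qRT-PCR Ct values by the
# 2^-ddCt method, normalized to a reference gene (e.g. beta actin).
# All Ct values are invented, not the study's measurements.

def ddct_fold_change(ct_target_sample, ct_ref_sample,
                     ct_target_calib, ct_ref_calib):
    """Fold change of the target in sample vs calibrator, reference-normalized."""
    d_sample = ct_target_sample - ct_ref_sample   # dCt in the sample
    d_calib = ct_target_calib - ct_ref_calib      # dCt in the calibrator
    return 2.0 ** -(d_sample - d_calib)           # 2^-ddCt

# HB4a vs C5.2 (calibrator), using nonamplified (NA) RNA ...
fc_na = ddct_fold_change(24.0, 18.0, 26.0, 18.5)
# ... and the same comparison using one round of amplified (A) RNA
fc_a = ddct_fold_change(25.1, 19.0, 27.0, 19.4)

print(round(fc_na, 2), round(fc_a, 2))  # similar fold changes = high fidelity
```

Because ΔΔCt differences are taken within each template type, a uniform Ct shift introduced by amplification cancels out, which is consistent with the study's warning against mixing A and NA templates in one comparison.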

Abstract:

The development of new statistical and computational methods is increasingly making it possible to bridge the gap between hard sciences and humanities. In this study, we propose an approach based on a quantitative evaluation of attributes of objects in fields of humanities, from which concepts such as dialectics and opposition are formally defined mathematically. As case studies, we analyzed the temporal evolution of classical music and philosophy by obtaining data for 8 features characterizing the corresponding fields for 7 well-known composers and philosophers, which were treated with multivariate statistics and pattern recognition methods. A bootstrap method was applied to avoid statistical bias caused by the small sample data set, with which hundreds of artificial composers and philosophers were generated, influenced by the 7 names originally chosen. Upon defining indices for opposition, skewness and counter-dialectics, we confirmed the intuitive analysis of historians in that classical music evolved according to a master-apprentice tradition, while in philosophy changes were driven by opposition. Though these case studies were meant only to show the possibility of treating phenomena in humanities quantitatively, including a quantitative measure of concepts such as dialectics and opposition, the results are encouraging for further application of the approach presented here to many other areas, since it is entirely generic.
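The bootstrap step can be sketched in a few lines: resample the small set of per-composer feature values with replacement and read off the variability of the statistic of interest. The feature values below are invented, not the study's data:

```python
import random
import statistics

# Hedged sketch of the bootstrap used to offset the small sample (7 names):
# resample observed feature values with replacement many times to estimate
# the sampling variability of a statistic. Feature values are invented.

random.seed(0)
# One illustrative feature (say, an "opposition index") for 7 composers
feature = [0.42, 0.55, 0.31, 0.68, 0.47, 0.59, 0.38]

n_boot = 2000
boot_means = [
    statistics.mean(random.choices(feature, k=len(feature)))
    for _ in range(n_boot)
]

estimate = statistics.mean(boot_means)   # bootstrap estimate of the mean
stderr = statistics.stdev(boot_means)    # bootstrap standard error
print(round(estimate, 3), round(stderr, 3))
```

Each resample plays the role of one "artificial composer" profile; in the study the same resampling is done jointly over all 8 features rather than one at a time.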