Abstract:
Using the Pricing Equation, in a panel-data framework, we construct a novel consistent estimator of the stochastic discount factor (SDF) mimicking portfolio which relies on the fact that its logarithm is the "common feature" in every asset return of the economy. Our estimator is a simple function of asset returns and does not depend on any parametric function representing preferences, making it suitable for testing different preference specifications or investigating intertemporal substitution puzzles.
Abstract:
The proposed research aims at consolidating two years of practical experience in developing a classroom experiential learning pedagogic approach for the problem structuring methods (PSMs) of operational research. The results will be prepared as papers to be submitted, respectively, to the Brazilian ISSS-sponsored system theory conference in São Paulo, and to JORS. These two papers follow the submission (in 2004) of one related paper to JORS which is about to be resubmitted following certain revisions. This first paper draws from the PSM and experiential learning literatures in order to introduce a basic foundation upon which a pedagogic framework for experiential learning of PSMs may be built. It forms, in other words, an integral part of my research in this area. By September, the area of pedagogic approaches to PSM learning will have received its first official attention - at the UK OR Society conference. My research and paper production during July-December, therefore, coincide with an important time in this area, enabling me to form part of the small cohort of published researchers creating the foundations upon which future pedagogic research will build. On the institutional level, such pioneering work also raises the national and international profile of FGV-EAESP, making it a reference for future researchers in this area.
Abstract:
Resin solvation properties affect the efficiency of the coupling reactions in solid-phase peptide synthesis. Here we report a novel approach to evaluate resin solvation properties, making use of spin label electron paramagnetic resonance (EPR) spectroscopy. The aggregating VVLGAAIV and ING sequences were assembled in benzhydrylamine-resin with different amino group contents (up to 2.6 mmol/g) to examine the extent of chain association within the beads. These model peptidyl-resins were first labeled at their N-terminus with the amino acid spin label 2,2,6,6-tetramethylpiperidine-N-oxyl-4-amino-4-carboxylic acid (Toac). Their solvation properties in different solvents were estimated, either by bead swelling measurement or by assessing the dynamics of their polymeric matrixes through the analysis of Toac EPR spectra, and were correlated with the yield of the acylation reaction. In most cases the coupling rate was found to depend on bead swelling. Comparatively, the EPR approach was more effective. Line shape analysis allowed the detection of more than one peptide chain population, which influenced the reaction. The results demonstrated the unique potential of EPR spectroscopy not only for improving the yield of peptide synthesis, even in challenging conditions, but also for other relevant polymer-supported methodologies in chemistry and biology.
Abstract:
An approach using straight lines as features to solve the photogrammetric space resection problem is presented. An explicit mathematical model relating straight lines, in both object and image space, is used. Based on this model, Kalman Filtering is applied to solve the space resection problem. The recursive property of the filter is used in an iterative process which uses the sequentially estimated camera location parameters to feedback to the feature extraction process in the image. This feedback process leads to a gradual reduction of the image space for feature searching, and consequently eliminates the bottleneck due to the high computational cost of the image segmentation phase. It also enables feature extraction and the determination of feature correspondence in image and object space in an automatic way, i.e., without operator interference. Results obtained from simulated and real data show that highly accurate space resection parameters are obtained as well as a progressive processing time reduction. The obtained accuracy, the automatic correspondence process, and the short related processing time show that the proposed approach can be used in many real-time machine vision systems, making possible the implementation of applications not feasible until now.
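The recursive estimation at the heart of this approach can be illustrated with a minimal scalar Kalman filter. This is a generic sketch, not the paper's multivariate line-based resection model: the state `x`, the noise variances `Q` and `R`, and the measurement `z` are illustrative stand-ins.

```python
def kalman_step(x, P, z, F=1.0, Q=1e-4, H=1.0, R=0.01):
    """One predict/update cycle of a scalar Kalman filter: the same
    recursive structure used (in multivariate form) to refine camera
    location parameters as line features arrive sequentially."""
    # Predict the state and its variance forward one step
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update with the new measurement z
    K = P_pred * H / (H * P_pred * H + R)      # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)      # corrected estimate
    P_new = (1.0 - K * H) * P_pred             # reduced uncertainty
    return x_new, P_new
```

As in the paper's feedback loop, each update shrinks the uncertainty `P`, which is what allows the search region in image space to be narrowed iteratively.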
Abstract:
In the past few years, vehicular ad hoc networks (VANETs) have been studied extensively by researchers. A VANET is a type of P2P network, though it has some distinct characteristics (fast-moving nodes, short-lived connections, etc.). In this paper, we present several limitations of current trust management schemes in VANETs and propose ways to counter them. We first review several trust management techniques in VANETs and argue that the ephemeral nature of VANETs renders them useless in practical situations. We identify that the problems of information cascading and oversampling, which commonly arise in social networks, also adversely affect trust management schemes in VANETs. To the best of our knowledge, we are the first to introduce information cascading and oversampling to VANETs. We show that simple voting for decision making leads to oversampling and gives incorrect results in VANETs. To overcome this problem, we propose a novel voting scheme in which each vehicle has a different voting weight according to its distance from the event: the closer a vehicle is to the event, the higher its weight. Simulations show that our proposed algorithm performs better than simple voting, increasing the correctness of voting. © 2012 Springer Science + Business Media, LLC.
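The contrast between simple and distance-weighted voting can be sketched as follows. The weight function `1 / (1 + distance)` is a hypothetical choice (the abstract only specifies that closer vehicles get higher weight), and `reports` pairs each vehicle's distance from the event with its claim about whether the event occurred.

```python
def simple_vote(reports):
    """Majority vote: every vehicle's report counts equally,
    so many distant, cascaded reports can swamp a few nearby ones."""
    yes = sum(1 for _, says_event in reports if says_event)
    return yes > len(reports) / 2

def weighted_vote(reports):
    """Distance-weighted vote: closer vehicles get higher weight.
    The weight 1/(1 + distance) is an illustrative choice only."""
    yes = no = 0.0
    for distance, says_event in reports:
        w = 1.0 / (1.0 + distance)
        if says_event:
            yes += w
        else:
            no += w
    return yes > no
```

With two nearby witnesses reporting an event and three distant vehicles merely relaying a denial, the simple vote is outvoted by the distant majority while the weighted vote sides with the nearby witnesses, which is the oversampling failure the abstract describes.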
Abstract:
Connectivity is the basic requirement for the proper operation of any wireless network. In a mobile wireless sensor network it is a challenge for applications and protocols to deal with connectivity problems, as links may go up and down frequently. In these scenarios, knowledge of a node's remaining connectivity time could both improve the performance of protocols (e.g. handoff mechanisms) and save possibly scarce node resources (CPU, bandwidth, and energy) by preventing unfruitful transmissions. This paper provides a solution called the Genetic Machine Learning Algorithm (GMLA) to forecast the remaining connectivity time in mobile environments. It consists of combining classifier systems with a Markov chain model of the RF link quality. The main advantage of using an evolutionary approach is that the Markov model parameters can be discovered on the fly, making it possible to cope with unknown environments and mobility patterns. Simulation results show that the proposal is a very suitable solution, as it outperforms similar approaches.
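The Markov-chain half of this idea can be illustrated with a small sketch: given a transition matrix over link-quality states (discovered on the fly in the paper, hard-coded here for illustration), the expected remaining connectivity time is the expected hitting time of the disconnected state. The three-state chain below is entirely hypothetical.

```python
def expected_connectivity_time(P, absorbing, iters=10000):
    """Expected number of steps before a Markov chain with transition
    matrix P first hits the absorbing (disconnected) state, computed
    by fixed-point iteration of the hitting-time equations
    E[i] = 1 + sum_j P[i][j] * E[j], with E[absorbing] = 0."""
    n = len(P)
    E = [0.0] * n
    for _ in range(iters):
        new = [0.0] * n
        for i in range(n):
            if i == absorbing:
                continue  # the disconnected state has zero remaining time
            new[i] = 1.0 + sum(P[i][j] * E[j] for j in range(n))
        E = new
    return E

# Hypothetical link-quality chain: states 0 = good, 1 = bad, 2 = disconnected
P = [[0.9, 0.1, 0.0],
     [0.3, 0.5, 0.2],
     [0.0, 0.0, 1.0]]
remaining = expected_connectivity_time(P, absorbing=2)
```

For this chain the hitting-time equations solve to 30 steps from the good state and 20 from the bad state, which is the kind of per-state forecast a protocol could use to decide whether a transmission is worthwhile.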
Abstract:
We consider a fully model-based approach for the analysis of distance sampling data. Distance sampling has been widely used to estimate abundance (or density) of animals or plants in a spatially explicit study area. There is, however, no readily available method of making statistical inference on the relationships between abundance and environmental covariates. Spatial Poisson process likelihoods can be used to simultaneously estimate detection and intensity parameters by modeling distance sampling data as a thinned spatial point process. A model-based spatial approach to distance sampling data has three main benefits: it allows complex and opportunistic transect designs to be employed, it allows estimation of abundance in small subregions, and it provides a framework to assess the effects of habitat or experimental manipulation on density. We demonstrate the model-based methodology with a small simulation study and analysis of the Dubbo weed data set. In addition, a simple ad hoc method for handling overdispersion is also proposed. The simulation study showed that the model-based approach compared favorably to conventional distance sampling methods for abundance estimation. In addition, the overdispersion correction performed adequately when the number of transects was high. Analysis of the Dubbo data set indicated a transect effect on abundance via Akaike’s information criterion model selection. Further goodness-of-fit analysis, however, indicated some potential confounding of intensity with the detection function.
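The role of the detection function in abundance estimation can be sketched as follows; the half-normal form and the trapezoid integration are conventional distance-sampling choices, not details quoted from this article, and `sigma` and the strip half-width `w` are illustrative parameters.

```python
import math

def half_normal(d, sigma):
    """Half-normal detection function g(d): probability of detecting
    an object at perpendicular distance d from the transect line."""
    return math.exp(-d * d / (2.0 * sigma * sigma))

def effective_half_width(sigma, w, steps=100000):
    """mu = integral of g(x) from 0 to w, by the trapezoid rule;
    the 'effective strip half-width' of the survey."""
    h = w / steps
    total = 0.5 * (half_normal(0.0, sigma) + half_normal(w, sigma))
    for k in range(1, steps):
        total += half_normal(k * h, sigma)
    return total * h

def abundance_estimate(n_detected, sigma, w):
    """Conventional estimator: detections divided by the average
    detection probability mu/w within the surveyed strip."""
    mu = effective_half_width(sigma, w)
    return n_detected * w / mu
```

Thinning a spatial Poisson process by `g(d)`, as in the model-based approach, recovers this same correction as a special case: the observed intensity is the true intensity times the detection probability.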
Abstract:
The development of new statistical and computational methods is increasingly making it possible to bridge the gap between hard sciences and humanities. In this study, we propose an approach based on a quantitative evaluation of attributes of objects in fields of humanities, from which concepts such as dialectics and opposition are formally defined mathematically. As case studies, we analyzed the temporal evolution of classical music and philosophy by obtaining data for 8 features characterizing the corresponding fields for 7 well-known composers and philosophers, which were treated with multivariate statistics and pattern recognition methods. A bootstrap method was applied to avoid statistical bias caused by the small sample data set, with which hundreds of artificial composers and philosophers were generated, influenced by the 7 names originally chosen. Upon defining indices for opposition, skewness and counter-dialectics, we confirmed the intuitive analysis of historians in that classical music evolved according to a master-apprentice tradition, while in philosophy changes were driven by opposition. Though these case studies were meant only to show the possibility of treating phenomena in humanities quantitatively, including a quantitative measure of concepts such as dialectics and opposition, the results are encouraging for further application of the approach presented here to many other areas, since it is entirely generic.
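The bootstrap step can be illustrated with a minimal sketch: resampling the mean of a 7-item sample stands in for the article's generation of hundreds of artificial composers and philosophers from the 7 originals (the opposition and counter-dialectics indices themselves are not reproduced here).

```python
import random

def bootstrap_means(sample, n_boot=1000, seed=42):
    """Generate bootstrap replicates of the sample mean by resampling
    with replacement, the standard remedy for a small data set."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    n = len(sample)
    means = []
    for _ in range(n_boot):
        resample = [rng.choice(sample) for _ in range(n)]
        means.append(sum(resample) / n)
    return means

def percentile_interval(values, alpha=0.05):
    """Percentile bootstrap confidence interval from the replicates."""
    s = sorted(values)
    lo = s[int(alpha / 2 * len(s))]
    hi = s[int((1 - alpha / 2) * len(s)) - 1]
    return lo, hi
```

Any statistic (here the mean, in the article the opposition indices) can be plugged into the resampling loop; the spread of the replicates then quantifies how much the small sample size limits the conclusions.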
Abstract:
In recent years, TNFRSF13B coding variants have been implicated by clinical genetics studies in Common Variable Immunodeficiency (CVID), the most common clinically relevant primary immunodeficiency in individuals of European ancestry, but their functional effects in relation to the development of the disease have not been entirely established. To examine the potential contribution of such variants to CVID, this study applied the more comprehensive perspective of an evolutionary approach, underlining the belief that evolutionary genetics methods can help dissect the origin, causes and diffusion of human diseases, representing a powerful tool in human health research as well. For this purpose, the TNFRSF13B coding region was sequenced in 451 healthy individuals belonging to 26 worldwide populations, in addition to 96 control, 77 CVID and 38 Selective IgA Deficiency (IgAD) individuals from Italy, yielding the first global picture of TNFRSF13B nucleotide diversity and haplotype structure and making it possible to suggest its evolutionary history. A slow rate of evolution, both within our species and in comparison with the chimpanzee, low levels of geographical structure in genetic diversity, and the absence of recent population-specific selective pressures were observed for the examined genomic region, suggesting that the geographical distribution of its variability is more plausibly related to its involvement in innate immunity as well, rather than in adaptive immunity only. This, together with the extremely subtle differences observed between disease and healthy samples, suggests that CVID is more likely related to still unknown environmental and genetic factors than to the nature of TNFRSF13B variants alone.
Abstract:
In recent years, new precision experiments have become possible withthe high luminosity accelerator facilities at MAMIand JLab, supplyingphysicists with precision data sets for different hadronic reactions inthe intermediate energy region, such as pion photo- andelectroproduction and real and virtual Compton scattering.By means of the low energy theorem (LET), the global properties of thenucleon (its mass, charge, and magnetic moment) can be separated fromthe effects of the internal structure of the nucleon, which areeffectively described by polarizabilities. Thepolarizabilities quantify the deformation of the charge andmagnetization densities inside the nucleon in an applied quasistaticelectromagnetic field. The present work is dedicated to develop atool for theextraction of the polarizabilities from these precise Compton data withminimum model dependence, making use of the detailed knowledge of pionphotoproduction by means of dispersion relations (DR). Due to thepresence of t-channel poles, the dispersion integrals for two ofthe six Compton amplitudes diverge. Therefore, we have suggested to subtract the s-channel dispersion integrals at zero photon energy($nu=0$). The subtraction functions at $nu=0$ are calculated through DRin the momentum transfer t at fixed $nu=0$, subtracted at t=0. For this calculation, we use the information about the t-channel process, $gammagammatopipito Nbar{N}$. In this way, four of thepolarizabilities can be predicted using the unsubtracted DR in the $s$-channel. 
The other two, $alpha-beta$ and $gamma_pi$, are free parameters in ourformalism and can be obtained from a fit to the Compton data.We present the results for unpolarized and polarized RCS observables,%in the kinematics of the most recent experiments, and indicate anenhanced sensitivity to the nucleon polarizabilities in theenergy range between pion production threshold and the $Delta(1232)$-resonance.newlineindentFurthermore,we extend the DR formalism to virtual Compton scattering (radiativeelectron scattering off the nucleon), in which the concept of thepolarizabilities is generalized to the case of avirtual initial photon by introducing six generalizedpolarizabilities (GPs). Our formalism provides predictions for the fourspin GPs, while the two scalar GPs $alpha(Q^2)$ and $beta(Q^2)$ have to befitted to the experimental data at each value of $Q^2$.We show that at energies betweenpion threshold and the $Delta(1232)$-resonance position, thesensitivity to the GPs can be increased significantly, as compared tolow energies, where the LEX is applicable. Our DR formalism can be used for analysing VCS experiments over a widerange of energy and virtuality $Q^2$, which allows one to extract theGPs from VCS data in different kinematics with a minimum of model dependence.
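The subtraction at $\nu=0$ described above follows the standard fixed-$t$ dispersion-relation formalism for real Compton scattering. As a sketch (the standard textbook form, not an equation quoted from this work), a subtracted $s$-channel DR for an invariant amplitude $A_i$ reads:

```latex
\mathrm{Re}\, A_i(\nu, t) = A_i^{\mathrm{pole}}(\nu, t)
  + \left[ A_i(0, t) - A_i^{\mathrm{pole}}(0, t) \right]
  + \frac{2}{\pi}\, \nu^2 \, \mathcal{P} \int_{\nu_0}^{\infty}
    d\nu' \, \frac{\mathrm{Im}_s A_i(\nu', t)}{\nu' \left( \nu'^2 - \nu^2 \right)}
```

Here $\nu_0$ denotes the pion production threshold and $\mathcal{P}$ the principal value; the subtraction function $A_i(0, t)$ is in turn evaluated through a DR in $t$ subtracted at $t=0$, using the $\gamma\gamma \to \pi\pi \to N\bar{N}$ input mentioned above.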
Abstract:
The main problem addressed by this research was that of the relations between TV viewing at home and studying literature at school, and how an adequate position on them can be reached. As well as the theoretical background, it involved an experimental study with classes of second and sixth grade students, discussing and observing their reactions to and interpretations of a number of animated cartoons. The work is divided into four parts - Is there a Class in this Text?, Stories of Reading, Narratives of Animation and Animation of Narratives, and The (Three) Unrepeatable (Pigs): Beginning, Middle, End - which examine the tensions between the "undiscriminating sequence" of the televisual flow and a way of "thinking", "making" and "doing" education that presupposes a fundamental belief in possible re-productions - copies following the original or competing with it, but never escaping it. The work focuses on animated cartoons, seeing them not merely as a part of the flow of television, but as an allegory of reading this flow, of the flow within the flow itself. What they question - "identity", "end", "followability" - is what is most important to teaching. Thus the interest in the metamorphoses of animated films is an interest in the tensions which their "strange law/flow" introduces into the field of teaching - this totally forbidden place of saying everything.