942 results for "Matching-to-sample arbitrário"


Abstract:

Shared book reading with children is an activity that has been studied as a form of incidental vocabulary teaching, which involves, among other processes, responding by exclusion. The aim of the present study was to investigate the learning of relations between visual stimuli (pictures) and their respective auditory stimuli (words) under different conditions of shared book reading for children with Down syndrome (DS) and with typical development (TD). Two studies were conducted. In Study 1, the participants were six children with DS, aged six to seven years, and six children with TD, aged three to four years (samples matched by vocabulary level). A storybook produced by the researcher was used, containing two unknown nouns and two unknown adjectives (visual stimuli S1, S2, A1, A2), each presented only once in the story. The book was read to each child twice in sequence per session, and a different reading condition was used in each session. Three reading conditions were presented, and each child went through all of them in counterbalanced order. In Condition 1, the book was read to the child without interventions. In Condition 2, the book was read and the child had to repeat the names of the unknown stimuli. In Condition 3, the book was read and questions related to the target stimuli were asked. At the end of each session, learning probes (matching-to-sample and naming probes) were administered, and one week after the last session a maintenance probe and a generalization probe were applied. The children with TD gave more correct responses than those with DS, and correct responses were mostly related to stimulus S1. The children did not learn the name-color relation. The analysis of the results suggested that the number of target stimuli was excessive and that their presentations in the book were insufficient. In Study 2, the participants were six children with TD, aged 3 to 4 years, and six children with DS, aged 5 to 8 years. The procedure was similar to that of Study 1, with the following changes to the book: only two target relations were used (one target noun and one target adjective, S2 and A3), each presented three times throughout the story, in pictures that allowed responding by exclusion. An exclusion trial was also added to the learning probes. In this study, all the children with TD were able to select and name stimulus S2, and two showed signs of learning stimulus A3. The children with DS gave fewer correct responses in the matching probes but produced some correct naming responses, which had not been observed in Study 1. The data suggest that the changes made to the book improved the performance of the children with TD, but not that of the children with DS. No differences were found between the reading conditions in either study; however, further studies are needed to evaluate these conditions and the variables involved in word learning through shared book reading.

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is therefore decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8-10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18-21]. It assumes that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, however, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28-31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which does not hold for hyperspectral data: since the sum of the abundance fractions is constant, they are statistically dependent. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions; nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34-36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm it uses must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram-Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm, vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
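
Written out, the linear mixing model discussed above takes the following form (the notation below is mine, not copied from the chapter): an observed L-band pixel is a convex combination of the p endmember signatures plus noise.

```latex
% Linear mixing model: y is the observed spectrum, the columns of
% M = [m_1 ... m_p] are the endmember signatures, alpha holds the
% abundance fractions, and n is additive noise.
\mathbf{y} = \mathbf{M}\boldsymbol{\alpha} + \mathbf{n},
\qquad \alpha_i \ge 0, \qquad \sum_{i=1}^{p} \alpha_i = 1 .
```

The nonnegativity and sum-to-one constraints are exactly what confine the noise-free data to a simplex with the endmembers at its vertices. The iterative projection at the heart of VCA can then be sketched as below; this is a minimal illustration under simplifying assumptions (random initial direction, no SNR-dependent preprocessing, function and variable names invented here), not the chapter's actual implementation.

```python
import numpy as np

def vca_sketch(Y, p, seed=0):
    """Toy sketch of VCA-style pure-pixel extraction.

    Y : (L, n) array whose n columns are spectral vectors with L bands.
    p : number of endmembers, each assumed present as a pure pixel.
    Returns the column indices of the selected (purest) pixels.
    """
    rng = np.random.default_rng(seed)
    L, _ = Y.shape
    E = np.zeros((L, p))                # endmembers found so far
    chosen = []
    for k in range(p):
        f = rng.standard_normal(L)      # random direction...
        if k > 0:                       # ...orthogonal to span(E[:, :k])
            A = E[:, :k]
            f = f - A @ np.linalg.pinv(A) @ f
        f /= np.linalg.norm(f)
        # The extreme of the projection is the next endmember.
        j = int(np.argmax(np.abs(f @ Y)))
        chosen.append(j)
        E[:, k] = Y[:, j]
    return chosen
```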

Abstract:

The present study evaluated the use of stimulus equivalence in teaching monetary skills to school-aged children with autism. An AB within-subject design with periodic probes was used. At pretest, three participants demonstrated relation DA, an auditory-visual relation (matching dictated coin values to printed coin prices). Using a three-choice match-to-sample procedure with a multi-component intervention package, these participants were taught two relations, BA (matching coins to printed prices) and CA (matching coin combinations to printed prices). Two participants passed tests of equivalence, and the third demonstrated emergent performance on a symmetric and a transitive relation. In addition, two participants showed generalization of the learned skills with a parent in a second, naturalistic setting. The present research replicates and extends the results of previous studies by demonstrating that stimulus equivalence can be used to teach an adaptive skill to children with autism.

Abstract:

Background: Support for the adverse effect of high income inequality on population health has come from studies that focus on larger areas, such as US states, while studies at smaller geographical scales (e.g., neighbourhoods) have found mixed results. Methods: We used propensity score matching to examine the relationship between income inequality and mortality rates across 96 neighbourhoods (distritos) of the municipality of Sao Paulo, Brazil. Results: Prior to matching, higher income inequality distritos (Gini >= 0.25) had slightly lower overall mortality rates (a difference of 2.23 per 10,000; 95% CI -23.92 to 19.46) compared with lower income inequality areas (Gini < 0.25). After propensity score matching, higher inequality was associated with a statistically significantly higher mortality rate (41.58 per 10,000; 95% CI 8.85 to 73.3). Conclusion: In Sao Paulo, the more egalitarian communities are among the poorest, with the worst health profiles. Propensity score matching was used to avoid inappropriate comparisons between the health status of unequal (but wealthy) neighbourhoods and equal (but poor) neighbourhoods. Our methods suggest that, with proper accounting for heterogeneity between areas, income inequality is associated with worse population health in Sao Paulo.
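
In outline, the matching step can be sketched as follows. This is a hedged illustration: the covariates, the logistic propensity model, and one-to-one nearest-neighbour matching are assumptions of mine, since the abstract does not specify the exact matching scheme used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def psm_effect(X, high_gini, mortality):
    """One-to-one nearest-neighbour matching on the propensity score.

    X         : (n, k) covariates for the n districts.
    high_gini : (n,) boolean, True for high-inequality areas (Gini >= 0.25).
    mortality : (n,) mortality rate per 10,000.
    Returns the mean matched difference in mortality (treated - control).
    """
    # Propensity score: probability of being a high-inequality district.
    ps = LogisticRegression(max_iter=1000).fit(X, high_gini).predict_proba(X)[:, 1]
    treated = np.where(high_gini)[0]
    controls = np.where(~high_gini)[0]
    diffs = []
    for i in treated:
        j = controls[np.argmin(np.abs(ps[controls] - ps[i]))]  # closest control
        diffs.append(mortality[i] - mortality[j])
    return float(np.mean(diffs))
```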

Abstract:

We have developed a sensitive resonant four-wave mixing technique based on two-photon parametric four-wave mixing with the addition of a phase-matched "seeder" field. Generation of the seeder field via the same four-wave mixing process in a high-pressure cell enables automatic phase matching to be achieved in a low-pressure sample cell. This arrangement facilitates sensitive detection of complex molecular spectra by simply tuning the pump laser. We demonstrate the technique with the detection of nitric oxide down to concentrations more than 4 orders of magnitude below the capability of parametric four-wave mixing alone, with an estimated detection threshold of 10^12 molecules/cm^3.

Abstract:

The deep-sea environment is difficult to sample, and often only small quantities of material can be obtained when using methods less destructive than dredging. When working with marine animals that are difficult to sample and with limited quantities of tissue for lipid extraction, it is essential to ensure that the chosen method extracts the maximum possible quantity of lipids. This study evaluates the efficiency of modifications to the method originally described by Bligh & Dyer (1959). This lipid extraction method is broadly used with modifications, although these usually lack a proper description and an evaluation of the gain in lipid yield. Here we quantify the improvement in the amount of lipids extracted when the method is modified. Lipid content was determined by gravimetric measurements in eight deep-sea invertebrates, including animals from deep-sea hydrothermal vents, using three different approaches. The results show increases of 14% to 30% in the lipid content obtained from hydrothermal vent invertebrate tissues and whole animals when the samples are placed in methanol for 24 hours before applying the Bligh & Dyer mixture. The efficiency of extractions using frozen and freeze-dried samples was also compared. For large sponges, the use of lyophilized material yielded 3 to 7 times more lipids than extractions using frozen samples.

Abstract:

An ab initio structure prediction approach adapted to the peptide-major histocompatibility complex (MHC) class I system is presented. Based on structure comparisons of a large set of peptide-MHC class I complexes, a molecular dynamics protocol is proposed using simulated annealing (SA) cycles to sample the conformational space of the peptide in its fixed MHC environment. A set of 14 peptide-human leukocyte antigen (HLA) A0201 and 27 peptide-non-HLA A0201 complexes for which X-ray structures are available is used to test the accuracy of the prediction method. For each complex, 1000 peptide conformers are obtained from the SA sampling. A graph theory clustering algorithm based on heavy-atom root-mean-square deviation (RMSD) values is applied to the sampled conformers. The clusters are ranked using cluster size, mean effective energies, or mean conformational free energies, with solvation free energies computed using the Generalized Born MV2 (GB-MV2) and Poisson-Boltzmann (PB) continuum models. The final conformation is chosen as the center of the best-ranked cluster. With conformational free energies, the overall prediction success is 83% using a 1.00 Angstrom RMSD criterion (relative to the crystal structure) for main-chain atoms, and 76% using a 1.50 Angstrom RMSD criterion for heavy atoms. The prediction success is even higher for the set of 14 peptide-HLA A0201 complexes: 100% of the peptides have main-chain RMSD values ≤ 1.00 Angstrom and 93% have heavy-atom RMSD values ≤ 1.50 Angstrom. This structure prediction method can be applied to complexes of natural or modified antigenic peptides in their MHC environment, with the aim of performing rational structure-based optimization of tumor vaccines.
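
The clustering step can be illustrated with a common greedy graph-based scheme; the cutoff value, the tie-breaking, and the function name below are assumptions of mine, since the paper states only that a graph theory clustering algorithm based on heavy-atom RMSD values was used.

```python
import numpy as np

def rmsd_clusters(rmsd, cutoff=1.0):
    """Greedy graph-based clustering of conformers on a pairwise RMSD matrix.

    rmsd   : (n, n) symmetric matrix of pairwise RMSD values (Angstrom).
    cutoff : two conformers are neighbours if their RMSD is below this.
    Repeatedly picks the conformer with the most neighbours as a cluster
    centre, removes the whole cluster, and continues on the remainder.
    Returns a list of (centre_index, member_indices) tuples.
    """
    n = rmsd.shape[0]
    adj = rmsd < cutoff          # neighbourhood graph as a boolean matrix
    np.fill_diagonal(adj, True)
    remaining = set(range(n))
    clusters = []
    while remaining:
        rem = np.array(sorted(remaining))
        counts = adj[np.ix_(rem, rem)].sum(axis=1)   # degree within remainder
        centre = int(rem[np.argmax(counts)])
        members = [int(j) for j in rem if adj[centre, j]]
        clusters.append((centre, members))
        remaining -= set(members)
    return clusters
```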

Abstract:

Attrition in longitudinal studies can lead to biased results. This study is motivated by the unexpected observation that alcohol consumption decreased despite increased availability, which may be due to sample attrition of heavy drinkers. Several imputation methods have been proposed, but they have rarely been compared in longitudinal studies of alcohol consumption. Imputing consumption-level measurements is computationally challenging because alcohol consumption is a semi-continuous variable (a dichotomous drinking status and a continuous volume among drinkers) and because the data in the continuous part are non-normal. Data come from a longitudinal study in Denmark with four waves (2003-2006) and 1771 individuals at baseline. Five techniques for handling missing data are compared: last value carried forward (LVCF) as a single imputation method, and Hotdeck, Heckman modelling, multivariate imputation by chained equations (MICE), and a Bayesian approach as multiple imputation methods. Predictive mean matching was used to account for non-normality: instead of imputing regression estimates, "real" observed values from similar cases are imputed. The methods were also compared by means of a simulated dataset. The simulation showed that the Bayesian approach yielded the least biased estimates. The finding of no increase in consumption levels despite higher availability remained unaltered. Copyright (C) 2011 John Wiley & Sons, Ltd.
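
The predictive-mean-matching step works roughly as sketched below; the regression model, the donor-pool size, and all names are illustrative assumptions of mine (a full MICE-style implementation would also draw the regression parameters from their posterior on each imputation).

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def pmm_impute(X_obs, y_obs, X_mis, k=5, seed=0):
    """Predictive mean matching for a non-normal outcome.

    Fits a regression on the complete cases, predicts for observed and
    missing cases alike, and imputes each missing case with the observed
    value of one of the k donors whose predictions are closest. Imputed
    values are therefore always 'real' observed values, which preserves
    the skewed shape of consumption data.
    """
    rng = np.random.default_rng(seed)
    y_obs = np.asarray(y_obs)
    model = LinearRegression().fit(X_obs, y_obs)
    pred_obs = model.predict(X_obs)
    imputed = []
    for pm in model.predict(X_mis):
        donors = np.argsort(np.abs(pred_obs - pm))[:k]  # k nearest donors
        imputed.append(y_obs[rng.choice(donors)])       # draw a real value
    return np.array(imputed)
```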

Abstract:

Confidence in decision making is an important dimension of managerial behavior. However, what is the relation between confidence, on the one hand, and the fact of receiving or expecting to receive feedback on decisions taken, on the other hand? To explore this and related issues in the context of everyday decision making, use was made of the ESM (Experience Sampling Method) to sample decisions taken by undergraduates and business executives. For several days, participants received 4 or 5 SMS messages daily (on their mobile telephones) at random moments, at which point they completed brief questionnaires about their current decision making activities. Issues considered here include differences between the types of decisions faced by the two groups, their structure, feedback (received and expected), and confidence in decisions taken as well as in the validity of feedback. No relation was found between confidence in decisions and whether participants received or expected to receive feedback on those decisions. In addition, although participants are clearly aware that feedback can provide both confirming and disconfirming evidence, their ability to specify appropriate feedback is imperfect. Finally, difficulties experienced in using the ESM are discussed, as are possibilities for further research using this methodology.

Abstract:

The specific CD8+ T cell immune response against tumors relies on the recognition, by the T cell receptor (TCR) on cytotoxic T lymphocytes (CTL), of antigenic peptides bound to the class I major histocompatibility complex (MHC) molecule. Such tumor-associated antigenic peptides are the focus of tumor immunotherapy with peptide vaccines. The strategy for obtaining an improved immune response often involves the design of modified tumor-associated antigenic peptides. Such modifications aim at creating higher-affinity and/or degradation-resistant peptides, and they require precise structures of the peptide-MHC class I complex. In addition, the modified peptide must be cross-recognized by CTLs specific for the parental peptide, i.e., it must preserve the structure of the epitope. Detailed structural information on the modified peptide in complex with MHC is necessary for such predictions. In this thesis, the main focus is the development of theoretical in silico methods for prediction of both structure and cross-reactivity of peptide-MHC class I complexes. Applications of these methods in the context of immunotherapy are also presented. First, a theoretical method for structure prediction of peptide-MHC class I complexes is developed and validated. The approach is based on a molecular dynamics protocol to sample the conformational space of the peptide in its MHC environment. The sampled conformers are evaluated using conformational free energy calculations. The method, which is evaluated for its ability to reproduce 41 X-ray crystallographic structures of different peptide-MHC class I complexes, shows an overall prediction success of 83%. Importantly, in the clinically highly relevant subset of peptide-HLA-A*0201 complexes, the prediction success is 100%. Based on these structure predictions, a theoretical approach for prediction of cross-reactivity is developed and validated. This method involves the generation of quantitative structure-activity relationships using three-dimensional molecular descriptors and a genetic neural network. The generated relationships are highly predictive, as shown by high cross-validated correlation coefficients (0.78-0.79). Together, the theoretical methods developed here open the door for efficient rational design of improved peptides to be used in immunotherapy.
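
For context, the cross-validated correlation coefficient quoted above (0.78-0.79) is conventionally the q² statistic of QSAR work; assuming that convention applies here, it is defined as

```latex
% q^2: cross-validated correlation coefficient, where \hat{y}_{(i)} is
% the prediction for compound i from a model fitted without compound i.
q^{2} = 1 - \frac{\sum_{i}\bigl(y_i - \hat{y}_{(i)}\bigr)^{2}}
              {\sum_{i}\bigl(y_i - \bar{y}\bigr)^{2}}
```

so that a q² close to 1 indicates that held-out activities are predicted nearly as well as fitted ones.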

Abstract:

The pursuit of high response rates to minimise the threat of nonresponse bias continues to dominate decisions about resource allocation in survey research. Yet a growing body of research has begun to question this practice. In this study, we use previously unavailable data from a new sampling frame based on population registers to assess the value of different methods designed to increase response rates on the European Social Survey in Switzerland. Using sampling data provides information about both respondents and nonrespondents, making it possible to examine how changes in response rates resulting from the use of different fieldwork methods relate to changes in the composition and representativeness of the responding sample. We compute an R-indicator to assess representativity with respect to the sampling register variables, and find little improvement in the sample composition as response rates increase. We then examine the impact of response rate increases on the risk of nonresponse bias, based on the Maximal Absolute Bias (MAB) and on coefficients of variation between subgroup response rates, alongside the associated costs of different types of fieldwork effort. The results show that increases in response rate help to reduce MAB, while only small but important improvements to sample representativity are gained by varying the type of effort. These findings lend further support to research that has called into question the value of extensive investment in procedures aimed at reaching response-rate targets, and they underline the need for more tailored fieldwork strategies aimed both at reducing survey costs and at minimising the risk of bias.
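
For reference, the representativity literature (Schouten and colleagues) defines the R-indicator and the maximal absolute bias from the estimated response propensities; assuming those standard definitions:

```latex
% S(\hat{\rho}) is the standard deviation of the estimated response
% propensities and \bar{\rho} the mean response rate; a higher R
% (maximum 1) means a more representative responding sample.
R(\hat{\rho}) = 1 - 2\,S(\hat{\rho}), \qquad
\mathrm{MAB} = \frac{1 - R(\hat{\rho})}{2\,\bar{\rho}} = \frac{S(\hat{\rho})}{\bar{\rho}} .
```

Under these definitions the paper's finding is intuitive: raising the overall response rate lowers MAB through the larger denominator, even when the spread of propensities, and hence the R-indicator, barely moves.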

Abstract:

Because data on rare species usually are sparse, it is important to have efficient ways to sample additional data. Traditional sampling approaches are of limited value for rare species because a very large proportion of randomly chosen sampling sites are unlikely to shelter the species. For these species, spatial predictions from niche-based distribution models can be used to stratify the sampling and increase sampling efficiency. Newly sampled data are then used to improve the initial model. Applying this approach repeatedly is an adaptive process that may increase the number of new occurrences found. We illustrate the approach with a case study of a rare and endangered plant species in Switzerland and with a simulation experiment. Our field survey confirmed that the method helps in the discovery of new populations of the target species in remote areas where the predicted habitat suitability is high. In our simulations the model-based approach provided a significant improvement (by a factor of 1.8 to 4, depending on the measure) over simple random sampling. In terms of cost, this approach may save up to 70% of the time spent in the field.
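
A minimal sketch of the model-based stratification is given below; the suitability threshold, the fallback rule, and all names are hypothetical, since the abstract does not give the exact stratification design.

```python
import numpy as np

def model_based_sites(suitability, n_sites, threshold=0.7, seed=0):
    """Select survey sites for a rare species using model predictions.

    suitability : (n,) habitat-suitability scores in [0, 1] for the
                  candidate cells, from a niche-based distribution model.
    Samples only among cells predicted suitable instead of uniformly at
    random; newly found occurrences would then be used to refit the
    model before the next sampling round (the adaptive loop).
    """
    rng = np.random.default_rng(seed)
    candidates = np.where(suitability >= threshold)[0]
    if len(candidates) < n_sites:  # stratum too small: take the top cells
        candidates = np.argsort(suitability)[-n_sites:]
    return rng.choice(candidates, size=n_sites, replace=False)
```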

Abstract:

A wide range of modelling algorithms is used by ecologists, conservation practitioners, and others to predict species ranges from point locality data. Unfortunately, the amount of data available is limited for many taxa and regions, making it essential to quantify the sensitivity of these algorithms to sample size. This is the first study to address this need by rigorously evaluating a broad suite of algorithms with independent presence-absence data from multiple species and regions. We evaluated predictions from 12 algorithms for 46 species (from six different regions of the world) at three sample sizes (100, 30, and 10 records). We used data from natural history collections to run the models, and we evaluated the quality of the model predictions with the area under the receiver operating characteristic curve (AUC). With decreasing sample size, model accuracy decreased and variability increased across species and between models. Novel modelling methods that incorporate both interactions between predictor variables and complex response shapes (i.e., GBM, MARS-INT, BRUTO) performed better than most methods at large sample sizes but not at the smallest sample sizes. Other algorithms were much less sensitive to sample size, including an algorithm based on maximum entropy (MAXENT) that had among the best predictive power across all sample sizes. Relative to other algorithms, a distance metric algorithm (DOMAIN) and a genetic algorithm (OM-GARP) had intermediate performance at the largest sample size and among the best performance at the lowest sample size. No algorithm predicted consistently well with small sample sizes (n < 30); this should encourage highly conservative use of predictions based on small samples and restrict their use to exploratory modelling.
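
The evaluation statistic is worth writing out: AUC is the probability that a randomly drawn presence site receives a higher score than a randomly drawn absence site. A self-contained sketch (names mine) via the Mann-Whitney rank-sum identity:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum identity.

    scores : model-predicted suitability at the evaluation sites.
    labels : 1 for an observed presence, 0 for an absence.
    Ties are handled with midranks (a tie counts one half).
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    for v in np.unique(scores):            # average ranks over ties
        tie = scores == v
        ranks[tie] = ranks[tie].mean()
    n1 = int(labels.sum())                 # number of presences
    n0 = len(labels) - n1                  # number of absences
    u1 = ranks[labels == 1].sum() - n1 * (n1 + 1) / 2
    return u1 / (n1 * n0)
```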

Abstract:

This study examined the effects of ibotenic acid-induced lesions of the hippocampus, the subiculum, and the hippocampus + subiculum on the capacity of rats to learn and perform a series of allocentric spatial learning tasks in an open-field water maze. The lesions were made by infusing small volumes of the neurotoxin at a total of 26 (hippocampus) or 20 (subiculum) sites, intended to achieve complete target cell loss with minimal extratarget damage. The regional extent and axon-sparing nature of these lesions were evaluated using both cresyl violet and Fink-Heimer stained sections. The behavioural findings indicated that both the hippocampus and subiculum lesions impaired the initial postoperative acquisition of place navigation but did not prevent eventual learning to levels of performance almost as effective as those of controls. However, overtraining of the hippocampus + subiculum lesioned rats did not result in significant place learning. Qualitative observations of the paths taken to find a hidden escape platform indicated that different strategies were deployed by the hippocampus and subiculum lesioned groups. Subsequent training on a delayed matching-to-place task revealed a deficit in all lesioned groups across a range of sample-choice intervals, although the subiculum lesioned group was less impaired than the group with the hippocampal lesion. Finally, unoperated control rats given both the initial training and the overtraining were later given either a hippocampal lesion or sham surgery. The hippocampal lesioned rats were impaired during a subsequent retention/relearning phase. Together, these findings suggest that total hippocampal cell loss may cause a dual deficit: a slower rate of place learning and a separate navigational impairment. The prospect of unravelling dissociable components of allocentric spatial learning is discussed.

Abstract:

The study focuses on international diversification from the perspective of a Finnish investor. Its second objective is to determine whether new covariance matrix estimators make the optimization of the minimum variance portfolio more efficient. In addition to the ordinary sample covariance matrix, two shrinkage estimators and a flexible multivariate GARCH(1,1) model are used in the optimization. The data consist of Dow Jones industry indices and the OMX Helsinki portfolio index. The international diversification strategy is implemented using an industry approach, and the portfolio is optimized using twelve components. The data cover the years 1996-2005, i.e., 120 monthly observations. The performance of the constructed portfolios is measured with the Sharpe ratio. According to the results, there is no statistically significant difference between the risk-adjusted returns of the internationally diversified investments and the domestic portfolio. Nor does the use of the new covariance matrix estimators add statistically significant value compared with portfolio optimization based on the sample covariance matrix.
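
For reference, the minimum variance portfolio that each covariance estimator feeds into has a standard closed form: with covariance estimate S and a vector of ones 1, the fully invested weights are w = S^(-1)1 / (1' S^(-1) 1). A short sketch of this textbook construction follows (not the thesis's own code; names mine):

```python
import numpy as np

def min_variance_weights(returns):
    """Global minimum variance portfolio weights from a return history.

    returns : (T, N) array of T monthly returns on N assets.
    Uses the plain sample covariance; a shrinkage estimator or a
    multivariate GARCH(1,1) forecast would simply replace `cov` below.
    """
    cov = np.cov(returns, rowvar=False)
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # proportional to cov^{-1} * 1
    return w / w.sum()               # normalize to full investment

def sharpe_ratio(port_returns, rf=0.0):
    """Annualized Sharpe ratio of a monthly portfolio return series."""
    excess = np.asarray(port_returns) - rf
    return np.sqrt(12) * excess.mean() / excess.std(ddof=1)
```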