966 results for Binary hypothesis testing
Abstract:
There is a well-developed framework, the Black-Scholes theory, for the pricing of contracts based on the future prices of certain assets, called options. This theory assumes that the probability distribution of the returns of the underlying asset is a Gaussian distribution. However, it is observed in the market that this hypothesis is flawed, leading to the introduction of a fudge factor, the so-called volatility smile. Therefore, it would be interesting to explore extensions of the Black-Scholes theory to non-Gaussian distributions. In this paper, we provide an explicit formula for the price of an option when the distribution of the returns of the underlying asset is parametrized by an Edgeworth expansion, which allows for the introduction of higher independent moments of the probability distribution, namely skewness and kurtosis. We test our formula with options in the Brazilian and American markets, showing that the volatility smile can be reduced. We also check whether our approach leads to more efficient hedging strategies for these instruments. (C) 2004 Elsevier B.V. All rights reserved.
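For context, the Gaussian baseline that such Edgeworth-type corrections extend is the standard Black-Scholes call price. The following minimal Python sketch shows only that baseline (not the paper's Edgeworth-corrected formula), with illustrative parameter values.

```python
# Minimal sketch of the standard Black-Scholes call price under the Gaussian
# (lognormal) assumption; parameter values below are illustrative only.
from math import log, sqrt, exp
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    """European call price.

    S: spot price, K: strike, T: time to maturity in years,
    r: risk-free rate, sigma: volatility of returns.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

# The "volatility smile" appears when the sigma needed to match market prices
# varies systematically with the strike K.
print(black_scholes_call(S=100.0, K=105.0, T=0.5, r=0.03, sigma=0.2))
```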
Abstract:
The disturbance vicariance hypothesis (DV) has been proposed to explain speciation in Amazonia, especially in its edge regions, e.g. in eastern Guiana Shield harlequin frogs (Atelopus), which are suggested to have derived from a cool-adapted Andean ancestor. In concordance with DV predictions, we studied whether (i) these amphibians display a natural distribution gap in central Amazonia; (ii) east of this gap they constitute a monophyletic lineage which is nested within a pre-Andean/western clade; (iii) climate envelopes of Atelopus west and east of the distribution gap show some macroclimatic divergence due to a regional climate envelope shift; (iv) geographic distributions of climate envelopes of western and eastern Atelopus range into central Amazonia but with limited spatial overlap. We tested whether presence and apparent absence data points of Atelopus were homogeneously distributed using Ripley's K function. A molecular phylogeny (mitochondrial 16S rRNA gene) was reconstructed using Maximum Likelihood and Bayesian Inference to study whether Guianan Atelopus constitute a clade nested within a larger genus phylogeny. We focused on climate envelope divergence and geographic distribution by computing climatic envelope models with MaxEnt based on macroscale bioclimatic parameters and testing them using Schoener's index and modified Hellinger distance. We corroborated existing DV predictions and, for the first time, formulated new DV predictions aimed at species' climate envelope change. Our results suggest that cool-adapted Andean Atelopus ancestors had dispersed into the Amazon basin and further onto the eastern Guiana Shield where, under warm conditions, they were forced to change their climate envelopes. © 2010 The Author(s).
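For context, Schoener's index mentioned above is a standard niche-overlap measure between two suitability surfaces. The sketch below applies the usual formula to hypothetical MaxEnt-style grids and is not the authors' actual pipeline.

```python
# Illustrative computation of Schoener's D between two suitability surfaces on
# the same grid; the input arrays are hypothetical stand-ins for the western
# and eastern climate envelope models.
import numpy as np

def schoeners_d(suit_a, suit_b):
    """Schoener's D niche-overlap index.

    Each surface is normalised to sum to 1, then
    D = 1 - 0.5 * sum(|p_a - p_b|); D = 1 means identical envelopes,
    D = 0 means no overlap.
    """
    p_a = suit_a / suit_a.sum()
    p_b = suit_b / suit_b.sum()
    return 1.0 - 0.5 * np.abs(p_a - p_b).sum()

west = np.random.rand(100, 100)   # placeholder suitability grids
east = np.random.rand(100, 100)
print(schoeners_d(west, east))
```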
Abstract:
Background: The evaluation of associations between genotypes and diseases in a case-control framework plays an important role in genetic epidemiology. This paper focuses on the evaluation of the homogeneity of both genotypic and allelic frequencies. The traditional test used to check allelic homogeneity is known to be valid only under Hardy-Weinberg equilibrium, a property that may not hold in practice. Results: We first describe the flaws of the traditional (chi-squared) tests for both allelic and genotypic homogeneity. Besides the known problem of the allelic procedure, we show that whenever these tests are used, an incoherence may arise: sometimes the genotypic homogeneity hypothesis is not rejected, but the allelic hypothesis is. As we argue, this is logically impossible. Some recently proposed methods implicitly rely on the idea that this does not happen. In an attempt to correct this incoherence, we describe an alternative frequentist approach that is appropriate even when Hardy-Weinberg equilibrium does not hold. It is then shown that the problem remains and is intrinsic to frequentist procedures. Finally, we introduce the Full Bayesian Significance Test to test both hypotheses and prove that the incoherence cannot happen with these new tests. To illustrate this, all five tests are applied to real and simulated datasets. Using power analysis, we show that the Bayesian method is comparable to the frequentist one and has the advantage of being coherent. Conclusions: Contrary to more traditional approaches, the Full Bayesian Significance Test for association studies provides a simple, coherent and powerful tool for detecting associations.
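For context, the traditional chi-squared tests criticised in the abstract can be illustrated as follows; the counts are invented and this sketch does not implement the Full Bayesian Significance Test itself.

```python
# Sketch of the traditional chi-squared tests the paper critiques: genotypic
# homogeneity on a 2x3 case/control-by-genotype table and allelic homogeneity
# on the collapsed 2x2 allele-count table. The counts below are made up.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: cases, controls; columns: genotypes AA, Aa, aa.
genotype_counts = np.array([[120, 60, 20],
                            [100, 80, 20]])

# Allele counts: each AA contributes two A alleles, each Aa one of each, etc.
allele_counts = np.column_stack([
    2 * genotype_counts[:, 0] + genotype_counts[:, 1],  # A alleles
    2 * genotype_counts[:, 2] + genotype_counts[:, 1],  # a alleles
])

chi2_g, p_g, _, _ = chi2_contingency(genotype_counts)
chi2_a, p_a, _, _ = chi2_contingency(allele_counts)
print(f"genotypic test p = {p_g:.3f}, allelic test p = {p_a:.3f}")
# The abstract's point: with these tests one can fail to reject genotypic
# homogeneity while rejecting allelic homogeneity, which is logically
# incoherent, and the allelic test is only valid under Hardy-Weinberg
# equilibrium.
```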
Abstract:
Objectives. The null hypothesis was that mechanical testing systems used to determine polymerization stress (sigma(pol)) would rank a series of composites similarly. Methods. Two series of composites were tested in the following systems: universal testing machine (UTM) using glass rods as bonding substrate, UTM/acrylic rods, "low compliance device", and single cantilever device ("Bioman"). One series had five experimental composites containing BisGMA:TEGDMA in equimolar concentrations and 60, 65, 70, 75 or 80 wt% of filler. The other series had five commercial composites: Filtek Z250 (3M ESPE), Filtek A110 (3M ESPE), Tetric Ceram (Ivoclar), Heliomolar (Ivoclar) and Point 4 (Kerr). Specimen geometry, dimensions and curing conditions were similar in all systems. sigma(pol) was monitored for 10 min. Volumetric shrinkage (VS) was measured in a mercury dilatometer and elastic modulus (E) was determined by three-point bending. Shrinkage rate was used as a measure of reaction kinetics. ANOVA/Tukey tests were performed for each variable, separately for each series. Results. For the experimental composites, sigma(pol) decreased with filler content in all systems, following the variation in VS. For the commercial materials, sigma(pol) did not vary in the UTM/acrylic system and showed very few similarities in rankings among the other test systems. Also, no clear relationships were observed between sigma(pol) and VS or E. Significance. The testing systems showed good agreement for the experimental composites, but very few similarities for the commercial composites. Therefore, comparison of polymerization stress results from different devices must be done carefully. (c) 2012 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Abstract:
Despite favourable gravitational instability and ridge-push, elastic and frictional forces prevent subduction initiation from arising spontaneously at passive margins. Here, we argue that forces arising from large continental topographic gradients are required to initiate subduction at passive margins. In order to test this hypothesis, we use 2D numerical models to assess the influence of the Andean Plateau on stress magnitudes and deformation patterns at the Brazilian passive margin. The numerical results indicate that “plateau-push” in this region is a necessary additional force to initiate subduction. As the SE Brazilian margin currently shows no signs of self-sustained subduction, we examined geological and geophysical data to determine if the margin is in the preliminary stages of subduction initiation. The compiled data indicate that the margin is presently undergoing tectonic inversion, which we infer as part of the continental–oceanic overthrusting stage of subduction initiation. We refer to this early subduction stage as the “Brazilian Stage”, which is characterized by >10 km deep reverse fault seismicity at the margin, recent topographic uplift on the continental side, thick continental crust at the margin, and bulging on the oceanic side due to loading by the overthrusting continent. The combined results of the numerical simulations and passive margin analysis indicate that the SE Brazilian margin is a prototype candidate for subduction initiation.
Abstract:
In this work we aim to propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies, which avoid the bias carried by aggregated analyses. Starting from the collection of disease counts and the calculation of expected disease counts by means of reference population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the small population underlying the area or the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classic and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method, which focuses on multiple testing control, without however leaving the preliminary-study perspective that an analysis of SMR indicators is expected to keep. We implement control of the FDR, a quantity largely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, the tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value based methods. Another peculiarity of the present work is to propose a hierarchical full Bayesian model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts from Bayesian models for disease mapping, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, has the advantage of evaluating a single test (i.e. a test in a single area) by means of all observations in the map under study, rather than just by means of the single observation. This improves the power of the test in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model aims to estimate the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) in each area. An estimate of the expected FDR conditional on the data, denoted FDR-hat, can be calculated for any set of b_i's relative to areas declared at high risk (where the null hypothesis is rejected) by averaging the b_i's themselves. FDR-hat can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that FDR-hat is not higher than a prefixed value; we call these FDR-hat based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation of the FDR produces a loss of specificity. Moreover, our model has the interesting feature of still being able to provide an estimate of the relative risk values, as in the Besag, York and Mollié model (1991). A simulation study was set up to evaluate the model's performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the size of the areas, the number of areas where the null hypothesis is true, and the risk level in the latter areas. In summarizing the simulation results we always consider FDR estimation in sets constituted by all the b_i's lower than a threshold t. We show graphs of FDR-hat and the true FDR (known by simulation) plotted against the threshold t to assess FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (from the closeness between FDR-hat and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against FDR-hat we can check the sensitivity and specificity of the corresponding FDR-hat based decision rules. To investigate the over-smoothing of the relative risk estimates we compare box-plots of such estimates in the high-risk areas (known by simulation), obtained both from our model and from the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 scenarios in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in the scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of FDR-hat based decision rules is generally low but their specificity is high; in these scenarios the use of a selection rule based on FDR-hat = 0.05 or FDR-hat = 0.10 can be suggested. In cases where the number of true alternative hypotheses (the number of truly high-risk areas) is small, FDR values up to 0.15 are also well estimated, and an FDR-hat = 0.15 based decision rule gains power while maintaining a high specificity. On the other hand, in the scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of an FDR-hat = 0.05 based decision rule. In such scenarios decision rules based on FDR-hat = 0.05 or, even worse, FDR-hat = 0.10 cannot be suggested because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is of interest for its ability to perform both the estimation of the relative risk values and the control of the FDR, except in the non-small areas and large risk level scenarios. A case study is finally presented to show how the method can be used in epidemiology.
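A minimal sketch of the FDR-hat based selection rule described above, assuming the posterior null probabilities b_i are already available (here they are simply made up rather than drawn from an MCMC fit of a Besag-York-Mollié-type model):

```python
# FDR-hat selection rule: sort the posterior null probabilities b_i and select
# the largest set of areas whose average b_i (the estimated FDR) stays at or
# below a target level. The b_i values below are invented for illustration.
import numpy as np

def select_high_risk(b, target_fdr=0.05):
    """Return indices of areas declared high-risk and the resulting FDR-hat."""
    order = np.argsort(b)                                   # strongest rejections first
    running_mean = np.cumsum(b[order]) / np.arange(1, len(b) + 1)
    k = int(np.sum(running_mean <= target_fdr))             # largest admissible set size
    fdr_hat = running_mean[k - 1] if k > 0 else 0.0
    return order[:k], fdr_hat

b_i = np.array([0.01, 0.02, 0.30, 0.04, 0.70, 0.03, 0.90])  # posterior P(no risk) per area
selected, fdr_hat = select_high_risk(b_i, target_fdr=0.05)
print("high-risk areas:", selected, "estimated FDR:", round(fdr_hat, 3))
```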
Abstract:
In the modern genomic era, the amount of data generated by genetic sequencing has become extremely large. The analysis of genomic data requires statistical significance methods to quantify the robustness of the correlations identified in the data. Statistical significance allows us to understand whether the relationships in the data under analysis actually carry statistical weight, that is, whether the event being analysed happened "by chance" or whether it is reasonable to think that it occurs with a meaningful probability. Regardless of the statistical test used, in the presence of multiple hypothesis tests it is necessary to use methods that correct the statistical significance ("multiple testing correction"). The aim of this thesis is to make available implementations of the best-known multiple testing correction methods. A collection of these methods was created, in the form of a library, precisely because nothing of the kind was found in the modern bioinformatics landscape.
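As a point of reference for the kind of methods such a library collects, here is a generic Python sketch of two of the best-known corrections (Bonferroni and Benjamini-Hochberg); it is not code from the thesis itself.

```python
# Illustrative implementations of two standard multiple testing corrections:
# Bonferroni (family-wise error rate) and Benjamini-Hochberg (false discovery
# rate). Generic sketch only.
import numpy as np

def bonferroni(p_values):
    """Multiply each p-value by the number of tests, capped at 1."""
    p = np.asarray(p_values, dtype=float)
    return np.minimum(p * len(p), 1.0)

def benjamini_hochberg(p_values):
    """Step-up FDR adjustment of sorted p-values."""
    p = np.asarray(p_values, dtype=float)
    n = len(p)
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)
    # Enforce monotonicity from the largest p-value downwards.
    adjusted = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.minimum(adjusted, 1.0)
    return out

pvals = [0.001, 0.02, 0.04, 0.30, 0.75]
print(bonferroni(pvals))
print(benjamini_hochberg(pvals))
```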
Abstract:
Since the late eighties, economists have been regarding the transition from command to market economies in Central and Eastern Europe with intense interest. In addition to studying the transition per se, they have begun using the region as a testing ground on which to investigate the validity of certain classic economic propositions. In his research, comprising three articles written in English and totalling 40 pages, Mr. Hanousek uses the so-called "Czech national experiment" (the voucher privatisation scheme) to test the permanent income hypothesis (PIH). He took as his inspiration Kreinin's recommendation: "Since data concerning the behaviour of windfall income recipients is relatively scanty, and since such data can constitute an important test of the permanent income hypothesis, it is of interest to bring to bear on the hypothesis whatever information is available". Mr. Hanousek argues that, since the transfer of property to Czech citizens from 1992 to 1994 through the voucher scheme was not anticipated, it can be regarded as windfall income. The average size of the windfall was more than three months' salary, and over 60 percent of the Czech population received this unexpected income. Furthermore, there are other reasons for conducting such an analysis in the Czech Republic. Firstly, the privatisation process took place quickly. Secondly, both the economy and consumer behaviour have been very stable. Thirdly, out of a total population of 10 million Czech citizens, an astonishing 6 million, that is, virtually every household, participated in the scheme. Thus Czech voucher privatisation provides a sample for testing the PIH almost equivalent to a full population, thereby avoiding problems with the distribution of windfalls. Compare this, for instance, with the fact that only 4% of the Israeli urban population received personal restitution from Germany, while the number of veterans who received the National Service Life Insurance Dividends amounted to less than 9% of the US population and were concentrated in certain age groups. But to begin with, Mr. Hanousek considers the question of whether the public perceives the transfer from the state to individuals as an increase in net wealth. It can be argued that the state is only divesting itself of assets that would otherwise provide a future source of transfers. According to this argument, assigning these assets to individuals creates an offsetting change in the present value of potential future transfers, so that individuals are no better off after the transfer. Mr. Hanousek disagrees with this approach. He points out that a change in the ownership of inefficient state-owned enterprises should lead to higher efficiency, which alone increases the value of enterprises and creates a windfall increase in citizens' portfolios. More importantly, the state and individuals had very different preferences during the transition. Despite government propaganda, it is doubtful that citizens of former communist countries viewed government-owned enterprises as being operated in the citizens' best interest. Moreover, it is unlikely that the public fully comprehended the sophisticated links between the state budget, state-owned enterprises, and transfers to individuals. Finally, the transfers were not equal across the population. Mr. Hanousek conducted a survey of 1263 individuals, dividing them into four monthly earnings categories.
After determining whether the respondent had participated in the voucher process, he asked those who had how much of what they received from voucher privatisation had been (a) spent on goods and services, (b) invested elsewhere, (c) transferred to newly emerging pension funds, (d) given to a family member, and (e) retained in their original form as an investment. Both the mean and the variance of the windfall rise with income. He obtained similar results with respect to education, where the mean (median) windfall for those with a basic school education was 13,600 Czech Crowns (CZK), a figure that increased to 15,000 CZK for those with a high school education without exams, 19,900 CZK for high school graduates with exams, and 24,600 CZK for university graduates. Mr. Hanousek concludes that it can be argued that higher income (and better educated) groups allocated their vouchers or timed the disposition of their shares better. He turns next to an analysis of how respondents reported using their windfalls. The key result is that only a relatively small number of individuals reported spending on goods. Overall, the results provide strong support for the permanent income hypothesis, the only apparent deviation being the fact that both men and women aged 26 to 35 apparently consume more than they should if the windfall were annuitised. This finding is still fully consistent with the PIH, however, if this group is at a stage in their life-cycle where, without the windfall, they would be borrowing to finance consumption associated with family formation etc. Indeed, the PIH predicts that individuals who would otherwise borrow to finance consumption would consume the windfall up to the level equal to the annuitised fraction of the increase in lifetime income plus the full amount of the previously planned borrowing for consumption. Greater consumption would then be financed, not from investing the windfall, but from avoidance of future repayment obligations for debts that would have been incurred without the windfall.
Abstract:
There has been a continuous evolutionary process in asphalt pavement design. In the beginning it was crude and based on past experience. Through research, empirical methods were developed based on materials response to specific loading at the AASHO Road Test. Today, pavement design has progressed to a mechanistic-empirical method. This methodology takes into account the mechanical properties of the individual layers and uses empirical relationships to relate them to performance. The mechanical tests that are used as part of this methodology include dynamic modulus and flow number, which have been shown to correlate with field pavement performance. This thesis was based on a portion of a research project being conducted at Michigan Technological University (MTU) for the Wisconsin Department of Transportation (WisDOT). The global scope of this project dealt with the development of a library of values as they pertain to the mechanical properties of the asphalt pavement mixtures paved in Wisconsin. Additionally, a comparison of the current associated pavement design with that of the new AASHTO Design Guide was conducted. This thesis describes the development of the current pavement design methodology as well as the associated tests as part of a literature review. This report also details the materials that were sampled from field operations around the state of Wisconsin and their testing preparation and procedures. Testing was conducted on available round robin and three Wisconsin mixtures, and the main results of the research were: The test history of the Superpave SPT (fatigue and permanent deformation dynamic modulus) does not affect the mean response for both dynamic modulus and flow number, but does increase the variability in the test results of the flow number. The method of specimen preparation, compacting to test geometry versus sawing/coring to test geometry, does not statistically appear to affect the intermediate and high temperature dynamic modulus and flow number test results. The 2002 AASHTO Design Guide simulations support the findings of the statistical analyses that the method of specimen preparation did not impact the performance of the HMA as a structural layer as predicted by the Design Guide software. The methodologies for determining the temperature-viscosity relationship as stipulated by Witczak are sensitive to the viscosity test temperatures employed. The increase in asphalt binder content by 0.3% was found to actually increase the dynamic modulus at the intermediate and high test temperatures as well as the flow number. This result was based on the testing that was conducted and was contradictory to previous research and the hypothesis that was put forth for this thesis. This result should be used with caution and requires further review. Based on the limited results presented herein, the asphalt binder grade appears to have a greater impact on performance in the Superpave SPT than aggregate angularity. Dynamic modulus and flow number were shown to increase with traffic level (requiring an increase in aggregate angularity) and with a decrease in air voids, confirming the hypotheses regarding these two factors. Accumulated micro-strain at flow number, as opposed to the flow number itself, appeared to be a promising measure for comparing the quality of specimens within a specific mixture. At the current time, the Design Guide and its associated software need to be further improved prior to implementation by owner/agencies.
Abstract:
With the development of genotyping and next-generation sequencing technologies, multi-marker testing in genome-wide association studies and rare variant association studies has become an active research area in statistical genetics. This dissertation contains three methodologies for association studies that exploit different features of genetic data and demonstrates how to use those methods to test genetic association hypotheses. The methods can be categorized into three scenarios: 1) multi-marker testing for regions of strong linkage disequilibrium, 2) multi-marker testing for family-based association studies, 3) multi-marker testing for rare variant association studies. I also discuss the advantages of using these methods and demonstrate their power through simulation studies and applications to real genetic data.
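As a generic illustration of the third scenario (rare variant association testing), a simple burden test is sketched below; this is a common textbook approach, not necessarily one of the dissertation's methods, and the data are simulated.

```python
# Simple burden test for rare variants: collapse the rare-variant genotypes in
# a region into a single burden score per subject and regress the phenotype on
# that score. All data below are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_subjects, n_variants = 500, 20

# Rare-variant genotype matrix (0/1/2 minor allele counts) and a binary
# phenotype with a modest effect of the total burden.
genotypes = rng.binomial(2, 0.01, size=(n_subjects, n_variants))
burden = genotypes.sum(axis=1)
phenotype = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 0.8 * burden))))

# Logistic regression of phenotype on the region-level burden score.
model = sm.Logit(phenotype, sm.add_constant(burden)).fit(disp=False)
print(model.pvalues[1])   # p-value for the burden effect
```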
Abstract:
Context. The first soft gamma-ray repeater was discovered over three decades ago, and was subsequently identified as a magnetar, a class of highly magnetised neutron star. It has been hypothesised that these stars power some of the brightest supernovae known, and that they may form the central engines of some long-duration gamma-ray bursts. However, there is currently no consensus on the formation channel(s) of these objects. Aims. The presence of a magnetar in the starburst cluster Westerlund 1 implies a progenitor with a mass ≥40 M⊙, which favours its formation in a binary that was disrupted at supernova. To test this hypothesis we conducted a search for the putative pre-SN companion. Methods. This was accomplished via a radial velocity survey to identify high-velocity runaways, with subsequent non-LTE model atmosphere analysis of the resultant candidate, Wd1-5. Results. Wd1-5 closely resembles the primaries in the short-period binaries Wd1-13 and 44, suggesting a similar evolutionary history, although it currently appears single. It is overluminous for its spectroscopic mass and we find evidence of He- and N-enrichment, O-depletion, and, critically, C-enrichment, a combination of properties that is difficult to explain under single-star evolutionary paradigms. We infer a pre-SN history for Wd1-5 which supposes an initial close binary comprising two stars of comparable (~41 M⊙ + 35 M⊙) masses. Efficient mass transfer from the initially more massive component leads to the mass-gainer evolving more rapidly, initiating luminous blue variable/common envelope evolution. Reverse, wind-driven mass transfer during its subsequent WC Wolf-Rayet phase leads to the carbon pollution of Wd1-5, before a type Ibc supernova disrupts the binary system. Under the assumption of a physical association between Wd1-5 and J1647-45, the secondary is identified as the magnetar progenitor; its common envelope evolutionary phase prevents spin-down of its core prior to SN, and the seed magnetic field for the magnetar forms either in this phase or during the earlier episode of mass transfer in which it was spun up. Conclusions. Our results suggest that binarity is a key ingredient in the formation of at least a subset of magnetars by preventing spin-down via core-coupling and potentially generating a seed magnetic field. The apparent formation of a magnetar in a Type Ibc supernova is consistent with recent suggestions that superluminous Type Ibc supernovae are powered by the rapid spin-down of these objects.
Abstract:
Many multifactorial biologic effects, particularly in the context of complex human diseases, are still poorly understood. At the same time, the systematic acquisition of multivariate data has become increasingly easy. The use of such data to analyze and model complex phenotypes, however, remains a challenge. Here, a new analytic approach is described, termed coreferentiality, together with an appropriate statistical test. Coreferentiality is the indirect relation of two variables of functional interest with respect to whether they parallel each other in their respective relatedness to multivariate reference data, which can be informative for a complex effect or phenotype. It is shown that the power of coreferentiality testing is comparable to that of multiple regression analysis, sufficient even when the reference data are informative only to a relatively small extent of 2.5%, and clearly exceeds the power of simple bivariate correlation testing. Thus, coreferentiality testing uses the increased power of multivariate analysis in order to address a more straightforwardly interpretable bivariate relatedness. Systematic application of this approach could substantially improve the analysis and modeling of complex phenotypes, particularly in the context of human studies, where addressing functional hypotheses by direct experimentation is often difficult.
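The abstract does not give the exact statistic, so the sketch below only illustrates one plausible reading of the idea: two focal variables are coreferential to the extent that their profiles of correlation with the reference variables parallel each other, assessed here by a permutation test. This operationalisation is an assumption for illustration, not the paper's definition.

```python
# Hedged illustration of coreferentiality as described in the abstract:
# correlate each focal variable with every reference column, then correlate
# the two correlation profiles and assess the result by permutation.
# This is a sketch of the concept; the paper's actual statistic may differ.
import numpy as np

def coreferentiality_stat(x, y, reference):
    """Correlation between the correlation profiles of x and y with the reference columns."""
    rx = np.array([np.corrcoef(x, reference[:, j])[0, 1] for j in range(reference.shape[1])])
    ry = np.array([np.corrcoef(y, reference[:, j])[0, 1] for j in range(reference.shape[1])])
    return np.corrcoef(rx, ry)[0, 1]

def permutation_p(x, y, reference, n_perm=1000, seed=0):
    rng = np.random.default_rng(seed)
    observed = coreferentiality_stat(x, y, reference)
    perm = [coreferentiality_stat(x, rng.permutation(y), reference) for _ in range(n_perm)]
    p_value = (np.sum(np.abs(perm) >= abs(observed)) + 1) / (n_perm + 1)
    return observed, p_value

rng = np.random.default_rng(1)
ref = rng.normal(size=(200, 30))                      # multivariate reference data
x = ref[:, :5].sum(axis=1) + rng.normal(size=200)     # two focal variables sharing
y = ref[:, :5].sum(axis=1) + rng.normal(size=200)     # relatedness to the reference
print(permutation_p(x, y, ref, n_perm=500))
```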
Abstract:
This study examined the utility of a stress/coping model in explaining adaptation in two groups of people at-risk for Huntington's Disease (HD): those who have not approached genetic testing services (non-testees) and those who have engaged a testing service (testees). The aims were (1) to compare testees and non-testees on stress/coping variables, (2) to examine relations between adjustment and the stress/coping predictors in the two groups, and (3) to examine relations between the stress/coping variables and testees' satisfaction with their first counselling session. Participants were 44 testees and 40 non-testees who completed questionnaires which measured the stress/coping variables: adjustment (global distress, depression, health anxiety, social and dyadic adjustment), genetic testing concerns, testing context (HD contact, experience, knowledge), appraisal (control, threat, self-efficacy), coping strategies (avoidance, self-blame, wishful thinking, seeking support, problem solving), social support and locus of control. Testees also completed a genetic counselling session satisfaction scale. As expected, non-testees reported lower self-efficacy and control appraisals, higher threat and passive avoidant coping than testees. Overall, results supported the hypothesis that within each group poorer adjustment would be related to higher genetic testing concerns, contact with HD, threat appraisals, passive avoidant coping and external locus of control, and lower levels of positive experiences with HD, social support, internal locus of control, self-efficacy, control appraisals, problem solving, emotional approach and seeking social support coping. Session satisfaction scores were positively correlated with dyadic adjustment, problem solving and positive experience with HD, and inversely related to testing concerns, and threat and control appraisals. Findings support the utility of the stress/coping model in explaining adaptation in people who have decided not to seek genetic testing for HD and those who have decided to engage a genetic testing service.
Abstract:
We outline and evaluate competing explanations of three relationships that have consistently been found between cannabis use and the use of other illicit drugs, namely, (1) that cannabis use typically precedes the use of other illicit drugs; and that (2) the earlier cannabis is used, and (3) the more regularly it is used, the more likely a young person is to use other illicit drugs. We consider three major competing explanations of these patterns: (1) that the relationship is due to the fact that there is a shared illicit market for cannabis and other drugs, which makes it more likely that other illicit drugs will be used if cannabis is used; (2) that they are explained by the characteristics of those who use cannabis; and (3) that they reflect a causal relationship in which the pharmacological effects of cannabis on brain function increase the likelihood of using other illicit drugs. These explanations are evaluated in the light of evidence from longitudinal epidemiological studies, simulation studies, discordant twin studies and animal studies. The available evidence indicates that the association reflects in part, but is not wholly explained by: (1) the selective recruitment to heavy cannabis use of persons with pre-existing traits (that may be in part genetic) that predispose to the use of a variety of different drugs; (2) the affiliation of cannabis users with drug-using peers in settings that provide more opportunities to use other illicit drugs at an earlier age; (3) supported by socialisation into an illicit drug subculture with favourable attitudes towards the use of other illicit drugs. Animal studies have raised the possibility that regular cannabis use may have pharmacological effects on brain function that increase the likelihood of using other drugs. We conclude with suggestions for the type of research studies that will enable a decision to be made about the relative contributions that social context, individual characteristics, and drug effects make to the relationship between cannabis use and the use of other drugs.
Abstract:
This paper presents a fast part-based subspace selection algorithm, termed binary sparse nonnegative matrix factorization (B-SNMF). Both the training process and the testing process of B-SNMF are much faster than those of binary principal component analysis (B-PCA). In addition, B-SNMF is more robust to occlusions in images. Experimental results on face images demonstrate the effectiveness and the efficiency of the proposed B-SNMF.
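For context only, the sketch below runs a generic part-based nonnegative matrix factorization on hypothetical image data using scikit-learn's standard NMF; it is not the binary sparse variant (B-SNMF) proposed in the paper.

```python
# Generic part-based NMF on image-like data, shown as background for the
# B-SNMF idea; uses scikit-learn's standard NMF, not the paper's algorithm.
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical data: 400 face images flattened to 1024-dimensional nonnegative vectors.
rng = np.random.default_rng(0)
X = rng.random((400, 1024))

# Factorize X ~ W @ H, where rows of H act as part-based basis images and
# W holds the per-image encodings used for training/testing a recognizer.
model = NMF(n_components=49, init="nndsvd", max_iter=300, random_state=0)
W = model.fit_transform(X)   # encodings (400 x 49)
H = model.components_        # basis images (49 x 1024)
print(W.shape, H.shape)
```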