765 results for sample complexity


Relevance:

20.00%

Publisher:

Abstract:

This paper analyzes whether standard covariance matrix tests work when dimensionality is large, and in particular larger than sample size. In the latter case, the singularity of the sample covariance matrix makes likelihood ratio tests degenerate, but other tests based on quadratic forms of sample covariance matrix eigenvalues remain well-defined. We study the consistency property and limiting distribution of these tests as dimensionality and sample size go to infinity together, with their ratio converging to a finite non-zero limit. We find that the existing test for sphericity is robust against high dimensionality, but not the test for equality of the covariance matrix to a given matrix. For the latter test, we develop a new correction to the existing test statistic that makes it robust against high dimensionality.
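As context for the kind of statistic involved, here is a minimal Python sketch of John's U statistic for sphericity, a quadratic form of the sample covariance eigenvalues that remains well-defined when dimensionality exceeds sample size; the appropriate null distribution under joint (n, p) asymptotics is what results like this paper's supply.

```python
# Minimal sketch: John's U statistic for sphericity, computable even when
# p > n and the sample covariance matrix S is singular. The null distribution
# under (n, p) -> infinity together is not derived here.
import numpy as np

def john_sphericity_U(X):
    """U = (1/p) * tr[(S / ((1/p) tr S) - I)^2] for data X of shape (n, p)."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)        # p x p sample covariance
    scale = np.trace(S) / p            # average sample eigenvalue
    A = S / scale - np.eye(p)
    return np.trace(A @ A) / p

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))     # n = 50 < p = 200: S is singular
print(john_sphericity_U(X))            # still well-defined despite singularity
```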

Relevance:

20.00%

Publisher:

Abstract:

Business organisations are excellent representations of what in physics and mathematics are designated "chaotic" systems. Because a culture of innovation will be vital for organisational survival in the 21st century, the present paper proposes that viewing organisations in terms of "complexity theory" may assist leaders in fine-tuning managerial philosophies that provide orderly management emphasizing stability within a culture of organised chaos, for it is on the "boundary of chaos" that the greatest creativity occurs. It is argued that 21st-century companies, as chaotic social systems, will no longer be effectively managed by rigid objectives (MBO) or by instructions (MBI). Their capacity for self-organisation will be derived essentially from how their members accept a shared set of values or principles for action (MBV). Complexity theory deals with systems that show complex structures in time or space, often hiding simple deterministic rules. This theory holds that once these rules are found, it is possible to make effective predictions and even to control the apparent complexity. The state of chaos that self-organises, thanks to the appearance of the "strange attractor", is the ideal basis for creativity and innovation in the company. In this self-organised state of chaos, members are not confined to narrow roles, and gradually develop their capacity for differentiation and relationships, growing continuously toward their maximum potential contribution to the efficiency of the organisation. In this way, values act as organisers or "attractors" of disorder, which in the theory of chaos are equations represented by unusually regular geometric configurations that predict the long-term behaviour of complex systems. In business organisations (as in all kinds of social systems) the starting principles end up as the final principles in the long term. An attractor is a model representation of the behavioral results of a system. The attractor is not a force of attraction or a goal-oriented presence in the system; it simply depicts where the system is headed based on its rules of motion. Thus, in a culture that cultivates or shares values of autonomy, responsibility, independence, innovation, creativity, and proaction, the risk of short-term chaos is mitigated by an overall long-term sense of direction. A more suitable approach to manage the internal and external complexities that organisations are currently confronting is to alter their dominant culture under the principles of MBV.

Relevance:

20.00%

Publisher:

Abstract:

This paper investigates the link between brand performance and cultural primes in high-risk, innovation-based sectors. In the theory section, we propose that the level of cultural uncertainty avoidance embedded in a firm determines its marketing creativity by increasing the complexity and the broadness of a brand. It also determines the rate of the firm's product innovations. Marketing creativity and product innovation, in turn, influence the firm's marketing performance. Empirically, we study trademarked promotion in the Software Security Industry (SSI). Our sample consists of 87 firms that were active in SSI in 11 countries over the period 1993-2000. We use data from SSI-related trademarks registered by these firms, ending up with 2,911 SSI-related trademarks and a panel of 18,213 observations. We estimate a two-stage model in which we first predict the complexity and the broadness of a trademark, as measures of marketing creativity, and the rate of product innovations. Among several control variables, our variable of theoretical interest is Hofstede's uncertainty avoidance cultural index. We then estimate trademark duration with a hazard model, using the predicted complexity and broadness as well as the rate of product innovations, along with the same control variables. Our evidence confirms that cultural uncertainty avoidance affects the duration of trademarks through the firm's marketing creativity and product innovation.
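A minimal sketch of the two-stage design described above, on synthetic stand-in data with hypothetical column names (uncertainty_avoidance, complexity, duration, event, firm_size are illustrative, not the paper's variables): stage one predicts trademark complexity from the cultural index, and stage two feeds the prediction into a Cox proportional-hazards model of trademark duration.

```python
# Two-stage sketch: OLS first stage, Cox hazard model second stage.
# All data below are synthetic placeholders for the paper's SSI trademarks.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500  # hypothetical number of trademarks

df = pd.DataFrame({
    "uncertainty_avoidance": rng.uniform(20, 100, n),  # Hofstede-style index
    "firm_size": rng.lognormal(0, 1, n),               # hypothetical control
})
df["complexity"] = 0.02 * df["uncertainty_avoidance"] + rng.normal(0, 1, n)
df["duration"] = rng.exponential(5 / (1 + df["complexity"].clip(lower=0)))
df["event"] = 1  # 1 = trademark lapsed (no censoring in this toy data)

# Stage 1: predict trademark complexity from the cultural index and controls.
X1 = sm.add_constant(df[["uncertainty_avoidance", "firm_size"]])
df["complexity_hat"] = sm.OLS(df["complexity"], X1).fit().predict(X1)

# Stage 2: hazard model of trademark duration using the predicted complexity.
cph = CoxPHFitter()
cph.fit(df[["duration", "event", "complexity_hat", "firm_size"]],
        duration_col="duration", event_col="event")
cph.print_summary()
```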

Relevance:

20.00%

Publisher:

Abstract:

The central message of this paper is that nobody should be using the sample covariance matrix for the purpose of portfolio optimization. It contains estimation error of the kind most likely to perturb a mean-variance optimizer. In its place, we suggest using the matrix obtained from the sample covariance matrix through a transformation called shrinkage. This tends to pull the most extreme coefficients towards more central values, thereby systematically reducing estimation error where it matters most. Statistically, the challenge is to know the optimal shrinkage intensity, and we give the formula for that. Without changing any other step in the portfolio optimization process, we show on actual stock market data that shrinkage reduces tracking error relative to a benchmark index, and substantially increases the realized information ratio of the active portfolio manager.
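A minimal sketch of covariance shrinkage in this spirit, using scikit-learn's LedoitWolf estimator; whether its shrinkage target and intensity formula match this paper's exactly is an assumption of the sketch.

```python
# Shrinkage estimation of a covariance matrix for portfolio optimization.
# scikit-learn's LedoitWolf pulls extreme coefficients toward a structured
# target with an analytically chosen intensity.
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
returns = rng.standard_normal((60, 100))   # 60 periods of returns, 100 assets

lw = LedoitWolf().fit(returns)
sigma_shrunk = lw.covariance_              # shrunk covariance matrix
print("estimated shrinkage intensity:", lw.shrinkage_)

# The shrunk matrix is well-conditioned even with more assets than periods,
# so it can be inverted inside a mean-variance optimizer, unlike np.cov here.
w_raw = np.linalg.solve(sigma_shrunk, np.ones(100))
w = w_raw / w_raw.sum()                    # minimum-variance weights sketch
```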

Relevance:

20.00%

Publisher:

Abstract:

Given $n$ independent replicates of a jointly distributed pair $(X,Y)\in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the {\em complexity} of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity-penalized empirical risk. The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. Finite sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near optimal rates of convergence, when each model class has an infinite VC or pseudo dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
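A drastically simplified caricature of the procedure, assuming polynomial model classes and a crude holdout-based penalty in place of the paper's empirical covers; it conveys only the flavor of complexity-penalized selection with sample splitting.

```python
# Caricature of complexity-penalized model selection with sample splitting:
# fit one candidate per model class (polynomials of increasing degree) on the
# first half of the data, then pick the class minimizing empirical risk on
# the second half plus a complexity penalty. The paper's actual procedure
# builds empirical covers of each class; this sketch does not.
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(-1, 1, n)
y = np.sin(3 * x) + 0.3 * rng.standard_normal(n)

x1, y1 = x[: n // 2], y[: n // 2]          # part 1: fit candidate rules
x2, y2 = x[n // 2 :], y[n // 2 :]          # part 2: select among candidates

best = None
for degree in range(1, 11):                # model classes F_1, F_2, ...
    coeffs = np.polyfit(x1, y1, degree)    # candidate rule from class F_degree
    risk = np.mean((np.polyval(coeffs, x2) - y2) ** 2)  # empirical risk
    penalty = degree / len(x2)             # crude stand-in for class complexity
    score = risk + penalty
    if best is None or score < best[0]:
        best = (score, degree, coeffs)

print("selected degree:", best[1])
```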

Relevance:

20.00%

Publisher:

Abstract:

In this paper I explore the issue of nonlinearity (both in the data generation process and in the functional form that establishes the relationship between the parameters and the data) regarding the poor performance of the Generalized Method of Moments (GMM) in small samples. To this purpose I build a sequence of models starting with a simple linear model and enlarging it progressively until I approximate a standard (nonlinear) neoclassical growth model. I then use simulation techniques to find the small sample distribution of the GMM estimators in each of the models.
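A minimal Monte Carlo sketch in the spirit of the exercise, for the simplest (linear) end of such a model sequence; the data-generating values are illustrative assumptions, not the paper's calibration.

```python
# Small-sample distribution of a two-step GMM estimator in an over-identified
# linear IV model with moment conditions E[z * (y - beta * x)] = 0.
import numpy as np

rng = np.random.default_rng(42)
beta_true, n, n_reps = 1.0, 50, 2000       # small sample: n = 50

def gmm_beta(y, x, Z):
    """Two-step GMM for scalar beta in y = beta*x + u with instruments Z."""
    zx, zy = Z.T @ x, Z.T @ y
    b1 = (zx @ zy) / (zx @ zx)             # step 1: identity weighting
    u = y - b1 * x                         # first-step residuals
    S = (Z * u[:, None]).T @ (Z * u[:, None]) / len(y)  # moment covariance
    W = np.linalg.inv(S)                   # step 2: optimal weighting
    return (zx @ W @ zy) / (zx @ W @ zx)

estimates = np.empty(n_reps)
for r in range(n_reps):
    Z = rng.standard_normal((n, 3))        # 3 instruments
    e = rng.standard_normal(n)             # endogeneity source
    x = Z @ np.array([1.0, 0.5, 0.5]) + e
    y = beta_true * x + 0.5 * e + rng.standard_normal(n)
    estimates[r] = gmm_beta(y, x, Z)

print("mean bias:", estimates.mean() - beta_true)
print("std dev:  ", estimates.std())
```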

Relevance:

20.00%

Publisher:

Abstract:

We derive a new inequality for uniform deviations of averages from their means. The inequality is a common generalization of previous results of Vapnik and Chervonenkis (1974) and Pollard (1986). Using the new inequality we obtain tight bounds for empirical loss minimization learning.
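For orientation only (the abstract does not reproduce the new inequality), one standard form of the classical Vapnik-Chervonenkis bound that results of this kind generalize is:

```latex
% Classical VC uniform deviation bound for binary-valued classes, shown only
% as context for the kind of result being generalized; constants vary by source.
\[
  \Pr\Bigl\{ \sup_{f \in \mathcal{F}}
  \Bigl| \frac{1}{n}\sum_{i=1}^{n} f(X_i) - \mathbb{E}\, f(X) \Bigr|
  > \varepsilon \Bigr\}
  \le 8\, S_{\mathcal{F}}(n)\, e^{-n\varepsilon^{2}/32},
\]
```

where $S_{\mathcal{F}}(n)$ is the shatter coefficient (growth function) of the class $\mathcal{F}$.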

Relevance:

20.00%

Publisher:

Abstract:

We continue the development of a method for the selection of a bandwidth or a number of design parameters in density estimation. We provide explicit non-asymptotic density-free inequalities that relate the $L_1$ error of the selected estimate with that of the best possible estimate, and study in particular the connection between the richness of the class of density estimates and the performance bound. For example, our method allows one to pick the bandwidth and kernel order in the kernel estimate simultaneously and still assure that for {\it all densities}, the $L_1$ error of the corresponding kernel estimate is not larger than about three times the error of the estimate with the optimal smoothing factor and kernel plus a constant times $\sqrt{\log n/n}$, where $n$ is the sample size, and the constant only depends on the complexity of the family of kernels used in the estimate. Further applications include multivariate kernel estimates, transformed kernel estimates, and variable kernel estimates.
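A crude data-splitting surrogate for bandwidth selection, sketched here only to fix ideas: it scores candidate smoothing factors by an approximate $L_1$ distance to a held-out histogram, and is not the paper's minimum-distance method.

```python
# Bandwidth selection by data splitting: build kernel estimates with candidate
# smoothing factors on one half of the sample, score each by an approximate
# L1 discrepancy against a histogram of the held-out half, keep the best.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
data = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(1, 1.0, 500)])
train, hold = data[:500], data[500:]

grid = np.linspace(-5, 4, 400)
dx = grid[1] - grid[0]
hist, edges = np.histogram(hold, bins=40, range=(-5, 4), density=True)
hist_on_grid = hist[np.clip(np.searchsorted(edges, grid) - 1, 0, len(hist) - 1)]

best = None
for bw in [0.05, 0.1, 0.2, 0.4, 0.8]:      # candidate smoothing factors
    kde = gaussian_kde(train, bw_method=bw)
    l1 = np.sum(np.abs(kde(grid) - hist_on_grid)) * dx  # approximate L1 error
    if best is None or l1 < best[0]:
        best = (l1, bw)

print("selected smoothing factor:", best[1])
```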

Relevance:

20.00%

Publisher:

Abstract:

We study the complexity of rationalizing choice behavior. We do so by analyzing two polar cases, and a number of intermediate ones. In our most structured case, that in which choice behavior is defined on universal choice domains and satisfies the "weak axiom of revealed preference," finding the complete preorder rationalizing choice behavior is a simple matter. In the polar case, where no restriction whatsoever is imposed, either on choice behavior or on the choice domain, finding the complete preorders that rationalize behavior turns out to be intractable. We show that the task of finding the rationalizing complete preorders is equivalent to a graph problem. This allows existing algorithms from the graph theory literature to be brought to bear on the rationalization of choice.
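A toy illustration of the graph connection, with hypothetical choice data: encode strict revealed preference as a directed graph and extract a rationalizing order with a topological sort (a cycle means no such order exists). This is only a sketch of the idea, not the paper's actual reduction.

```python
# Revealed preference as a graph: x is revealed preferred to y when x is
# chosen from a menu that also contains y. A topological sort of this graph
# yields a complete order rationalizing the choices, if one exists.
from graphlib import TopologicalSorter, CycleError

# Hypothetical observed choices: menu -> chosen element.
choices = {
    frozenset({"a", "b"}): "a",
    frozenset({"b", "c"}): "b",
    frozenset({"a", "c"}): "a",
}

# graphlib expects {node: predecessors}; listing each rejected alternative as
# a predecessor of the chosen one makes static_order run from worst to best.
graph = {}
for menu, chosen in choices.items():
    graph.setdefault(chosen, set()).update(menu - {chosen})

try:
    worst_to_best = list(TopologicalSorter(graph).static_order())
    print("a rationalizing ranking (best first):", worst_to_best[::-1])
except CycleError:
    print("revealed preferences are cyclic: no rationalizing order exists")
```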

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND/PURPOSE: A new coordinated interdisciplinary unit was created in the acute section of the department of clinical neurosciences, the Acute NeuroRehabilitation (NRA) unit. The objective was to evaluate the impact of the unit and its neurosensory programme on the management of tracheostomy patients in terms of reduction in the average time taken for weaning, weaning success rate and therapeutic efficiency. METHODS: This 49-month retrospective study compares 2 groups of tracheostomy patients before (n = 34) and after (n = 46) NRA intervention. The outcome measures evaluate the benefits of the NRA unit intervention (time to decannulation, weaning and complication rates) and the benefits of the coordination (time to registration in a rehabilitation centre and rate of non-compliance with standards of care). RESULTS: Weaning failure rate was reduced from 27.3% to 9.1%, no complications or recannulations were observed in the post-intervention group after weaning and time to decannulation following admission to our unit decreased from 19.13 to 12.75 days. The rate of non-compliance with patient standards of care was significantly reduced from 45% to 30% (Mann-Whitney p = 0.003). DISCUSSION/CONCLUSIONS: This interdisciplinary weaning programme helped to reduce weaning time and weaning failure, without increased complications, in the sample studied. Coordination improved the efficiency of the interdisciplinary team in the multiplicity and complexity of the different treatments.

Relevance:

20.00%

Publisher:

Abstract:

Analysis of variance is commonly used in morphometry in order to ascertain differences in parameters between several populations. Failure to detect significant differences between populations (type II error) may be due to suboptimal sampling and lead to erroneous conclusions; the concept of statistical power allows one to avoid such failures by means of an adequate sampling. Several examples are given in the morphometry of the nervous system, showing the use of the power of a hierarchical analysis of variance test for the choice of appropriate sample and subsample sizes. In the first case chosen, neuronal densities in the human visual cortex, we find the number of observations to have little effect. For dendritic spine densities in the visual cortex of mice and humans, the effect is somewhat larger. A substantial effect is shown in our last example, dendritic segmental lengths in the monkey lateral geniculate nucleus. It is in the nature of the hierarchical model that sample size is always more important than subsample size. The relative weight to be attributed to subsample size thus depends on the relative magnitude of the between-observations variance compared to the between-individuals variance.
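A simulation sketch of the sample-versus-subsample tradeoff, with illustrative variance components: once between-individuals variance dominates, adding individuals per group buys more power than adding observations per individual.

```python
# Simulated power for a two-level (hierarchical) design: several individuals
# per group, several observations (subsamples) per individual. All variance
# components and effect sizes below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def power(n_indiv, n_obs, effect, var_between=1.0, var_within=0.5,
          alpha=0.05, reps=2000):
    """Power of a t-test on individual means for a 2-group nested design."""
    hits = 0
    for _ in range(reps):
        # Individual-level true means, then n_obs noisy observations each;
        # averaging within individuals leaves var_within / n_obs extra noise.
        g1 = rng.normal(0.0, np.sqrt(var_between), n_indiv)
        g2 = rng.normal(effect, np.sqrt(var_between), n_indiv)
        m1 = g1 + rng.normal(0, np.sqrt(var_within / n_obs), n_indiv)
        m2 = g2 + rng.normal(0, np.sqrt(var_within / n_obs), n_indiv)
        if stats.ttest_ind(m1, m2).pvalue < alpha:
            hits += 1
    return hits / reps

# Sample size (individuals) matters more than subsample size (observations):
print(power(n_indiv=5,  n_obs=50, effect=1.5))
print(power(n_indiv=10, n_obs=5,  effect=1.5))
```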

Relevance:

20.00%

Publisher:

Abstract:

Cannabis use is highly prevalent among people with schizophrenia and, coupled with impaired cognition, is thought to heighten the risk of illness onset. However, while heavy cannabis use has been associated with cognitive deficits in long-term users, studies among patients with schizophrenia have been contradictory. This article consists of 2 studies. In Study I, a meta-analysis of 10 studies comprising 572 patients with established schizophrenia (with and without comorbid cannabis use) was conducted. Patients with a history of cannabis use were found to have superior neuropsychological functioning. This finding was largely driven by studies that included patients with a lifetime history of cannabis use rather than current or recent use. In Study II, we examined the neuropsychological performance of 85 patients with first-episode psychosis (FEP) and 43 healthy nonusing controls. Relative to controls, FEP patients with a history of cannabis use (FEP + CANN; n = 59) displayed only selective neuropsychological impairments while those without a history (FEP - CANN; n = 26) displayed generalized deficits. When directly compared, FEP + CANN patients performed better on tests of visual memory, working memory, and executive functioning. Patients with early-onset cannabis use had less neuropsychological impairment than patients with later-onset use. Together, these findings suggest that patients with schizophrenia or FEP with a history of cannabis use have superior neuropsychological functioning compared with nonusing patients. This association between better cognitive performance and cannabis use in schizophrenia may be driven by a subgroup of "neurocognitively less impaired" patients, who only developed psychosis after a relatively early initiation into cannabis use.