989 results for uniform linear hypothesis


Relevance: 100.00%

Publisher:

Abstract:

In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR) with applications to asset pricing models. We focus on departures from the assumption of i.i.d. errors, at univariate and multivariate levels, with Gaussian and non-Gaussian (including Student t) errors. The univariate tests studied extend existing exact procedures by allowing for unspecified parameters in the error distributions (e.g., the degrees of freedom in the case of the Student t distribution). The multivariate tests are based on properly standardized multivariate residuals to ensure invariance to MLR coefficients and error covariances. We consider tests for serial correlation, tests for multivariate GARCH and sign-type tests against general dependencies and asymmetries. The procedures proposed provide exact versions of those applied in Shanken (1990), which consist of combining univariate specification tests. Specifically, we combine tests across equations using the Monte Carlo (MC) test procedure to avoid Bonferroni-type bounds. Since non-Gaussian based tests are not pivotal, we apply the “maximized MC” (MMC) test method [Dufour (2002)], where the MC p-value for the tested hypothesis (which depends on nuisance parameters) is maximized (with respect to these nuisance parameters) to control the test’s significance level. The tests proposed are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995. Our empirical results reveal the following. Whereas univariate exact tests indicate significant serial correlation, asymmetries and GARCH in some equations, such effects are much less prevalent once error cross-equation covariances are accounted for. In addition, significant departures from the i.i.d. hypothesis are less evident once we allow for non-Gaussian errors.
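
The Monte Carlo test machinery these procedures rest on can be stated compactly; below is a minimal Python sketch, assuming a toy serial-correlation statistic on simulated residuals (the statistic, sample size and replication count are illustrative, not the paper's).

```python
# Minimal sketch of a Monte Carlo (MC) test in Dufour's sense: when the
# test statistic is pivotal under the null, N simulated draws of the
# statistic yield an exact p-value. The Box-Pierce-style statistic and
# the sizes below are purely illustrative.
import numpy as np

def mc_pvalue(stat_obs, stat_sim):
    """Exact MC p-value: p = (1 + #{S_i >= S_0}) / (N + 1)."""
    n = len(stat_sim)
    return (1 + np.sum(np.asarray(stat_sim) >= stat_obs)) / (n + 1)

def lb_stat(e, lags=6):
    """Toy serial-correlation statistic on residuals e."""
    e = e - e.mean()
    acf = np.array([np.sum(e[k:] * e[:-k]) for k in range(1, lags + 1)])
    acf /= np.sum(e * e)
    return len(e) * np.sum(acf ** 2)

rng = np.random.default_rng(0)
T, N = 120, 99                      # sample size, MC replications
e_obs = rng.standard_normal(T)      # stand-in for observed residuals
s0 = lb_stat(e_obs)
sims = [lb_stat(rng.standard_normal(T)) for _ in range(N)]
print("MC p-value:", mc_pvalue(s0, sims))
```

With N = 99 replications the resulting p-value is exact at conventional significance levels; this exactness is the property the cross-equation combination procedure exploits.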

Relevance: 100.00%

Publisher:

Abstract:

In the context of multivariate linear regression (MLR) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose a general method for constructing exact tests of possibly nonlinear hypotheses on the coefficients of MLR systems. For the case of uniform linear hypotheses, we present exact distributional invariance results concerning several standard test criteria. These include Wilks' likelihood ratio (LR) criterion as well as trace and maximum root criteria. The normality assumption is not necessary for most of the results to hold. Implications for inference are two-fold. First, invariance to nuisance parameters entails that the technique of Monte Carlo tests can be applied on all these statistics to obtain exact tests of uniform linear hypotheses. Second, the invariance property of the latter statistic is exploited to derive general nuisance-parameter-free bounds on the distribution of the LR statistic for arbitrary hypotheses. Even though it may be difficult to compute these bounds analytically, they can easily be simulated, hence yielding exact bounds Monte Carlo tests. Illustrative simulation experiments show that the bounds are sufficiently tight to provide conclusive results with a high probability. Our findings illustrate the value of the bounds as a tool to be used in conjunction with more traditional simulation-based test methods (e.g., the parametric bootstrap) which may be applied when the bounds are not conclusive.
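
For reference, the criteria named here have standard closed forms in terms of the hypothesis and error sum-of-squares-and-products matrices H and E of the MLR system; the textbook expressions (notation assumed here, not quoted from the paper) are:

```latex
% Standard MLR test criteria in terms of the hypothesis (H) and error (E)
% sum-of-squares-and-products matrices (textbook forms).
\Lambda = \frac{\lvert E \rvert}{\lvert E + H \rvert}
  \quad \text{(Wilks' likelihood ratio criterion)}, \qquad
\operatorname{tr}\!\left(H E^{-1}\right)
  \quad \text{(Lawley--Hotelling trace criterion)}, \qquad
\lambda_{\max}\!\left(H E^{-1}\right)
  \quad \text{(maximum root criterion)}.
```

The LR statistic is then a monotone transformation of Λ, so invariance results for Λ carry over to it directly.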

Relevance: 100.00%

Publisher:

Abstract:

In this paper we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of the Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions which include normality as a special case. These tests are developed in the framework of multivariate linear regressions (MLR). It is well known, however, that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gibbons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance, 1993)], most of which depend on normality. For the Gaussian model, our tests correspond to Gibbons, Ross and Shanken’s mean-variance efficiency tests. In non-Gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and Gaussian mixture errors. Our framework allows us to shed more light on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and a multivariate generalization of the well-known variance ratio tests) and goodness-of-fit tests, as well as a set estimate for the intervening nuisance parameters. Our results [over five-year subperiods] show the following: (i) multivariate normality is rejected in most subperiods, (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption, and (iii) mean-variance efficiency of the market portfolio is not rejected as frequently once we allow for the possibility of non-normal errors.
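
For the Gaussian case, the abstract identifies its tests with the Gibbons-Ross-Shanken (GRS) test; a sketch of one common form of that statistic on simulated placeholder data follows (single-factor case, ML residual covariance). The numbers are illustrative, not the paper's data.

```python
# Hedged sketch of the Gibbons-Ross-Shanken (1989) mean-variance
# efficiency test. One common form of the statistic is used, with Sigma
# the ML (divide-by-T) residual covariance; data are simulated stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
T, N = 60, 12                                    # months, test portfolios
rm = 0.01 + 0.04 * rng.standard_normal(T)        # excess market return
beta = 0.8 + 0.4 * rng.random(N)
R = np.outer(rm, beta) + 0.05 * rng.standard_normal((T, N))

X = np.column_stack([np.ones(T), rm])            # intercept + market factor
B = np.linalg.lstsq(X, R, rcond=None)[0]         # 2 x N: rows [alpha; beta]
alpha, resid = B[0], R - X @ B
Sigma = resid.T @ resid / T                      # ML residual covariance
mu, sig2 = rm.mean(), rm.var()                   # market mean, ML variance

# Efficiency (zero intercepts) is tested via an exact F statistic
grs = ((T - N - 1) / N) * (alpha @ np.linalg.solve(Sigma, alpha)) \
      / (1 + mu**2 / sig2)
pval = stats.f.sf(grs, N, T - N - 1)             # F(N, T-N-1) under normality
print(f"GRS = {grs:.3f}, p = {pval:.3f}")
```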

Relevance: 100.00%

Publisher:

Abstract:

In this paper, we propose exact inference procedures for asset pricing models that can be formulated in the framework of a multivariate linear regression (MLR), such as the CAPM, allowing for stable error distributions. The normality assumption on the distribution of stock returns is usually rejected in empirical studies, due to excess kurtosis and asymmetry. To model such data, we propose a comprehensive statistical approach which allows for alternative, possibly asymmetric, heavy-tailed distributions without the use of large-sample approximations. The methods suggested are based on Monte Carlo test techniques. Goodness-of-fit tests are formally incorporated to ensure that the error distributions considered are empirically sustainable; from these, exact confidence sets for the unknown tail and asymmetry parameters of the stable error distribution are derived. Tests for the efficiency of the market portfolio (zero intercepts), which explicitly allow for the presence of (unknown) nuisance parameters in the stable error distribution, are also derived. The methods proposed are applied to monthly returns on 12 portfolios of the New York Stock Exchange over the period 1926-1995 (five-year subperiods). We find that stable, possibly skewed, distributions provide a statistically significant improvement in goodness of fit and lead to fewer rejections of the efficiency hypothesis.
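
A hedged sketch of how a Monte Carlo goodness-of-fit check for stable errors can be wired up, using the Kolmogorov-Smirnov distance as an illustrative statistic; the tail and asymmetry parameters are fixed at assumed values here, whereas the paper derives confidence sets for them.

```python
# Sketch of an MC goodness-of-fit test under an assumed stable law.
# alpha (tail) and beta (skewness) are illustrative fixed values; the KS
# distance is a stand-in statistic. levy_stable is slow, so N is small.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a, b = 1.7, -0.2                    # assumed tail and asymmetry parameters
T, N = 120, 99                      # sample size, MC replications

x = stats.levy_stable.rvs(a, b, size=T, random_state=rng)  # stand-in data
ks0 = stats.kstest(x, stats.levy_stable.cdf, args=(a, b)).statistic
sims = [stats.kstest(stats.levy_stable.rvs(a, b, size=T, random_state=rng),
                     stats.levy_stable.cdf, args=(a, b)).statistic
        for _ in range(N)]
p = (1 + sum(s >= ks0 for s in sims)) / (N + 1)   # exact MC p-value
print("MC goodness-of-fit p-value:", p)
```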

Relevance: 80.00%

Publisher:

Abstract:

In the direction of arrival (DOA) estimation problem, we encounter both finite data and insufficient knowledge of array characterization. It is therefore important to study how subspace-based methods perform in such conditions. We analyze the finite-data performance of the multiple signal classification (MUSIC) and minimum-norm (min-norm) methods in the presence of sensor gain and phase errors, and derive expressions for the mean square error (MSE) in the DOA estimates. These expressions are first derived assuming an arbitrary array and then simplified for the special case of a uniform linear array with isotropic sensors. When they are further simplified for the case of finite data only and sensor errors only, they reduce to the recent results given in [9-12]. Computer simulations are used to verify the closeness between the predicted and simulated values of the MSE.
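
For readers unfamiliar with the estimator being analyzed, a minimal MUSIC sketch on a uniform linear array follows; array size, SNR, snapshot count and source angles are arbitrary illustrative choices.

```python
# Toy MUSIC DOA estimation on a half-wavelength-spaced uniform linear
# array with isotropic sensors (the ideal, error-free setting).
import numpy as np

rng = np.random.default_rng(3)
M, K, T = 8, 2, 200                      # sensors, sources, snapshots
doas = np.deg2rad([-10.0, 25.0])         # true directions of arrival

def steering(theta):
    # ULA steering vector, d = lambda/2 spacing
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

A = np.column_stack([steering(t) for t in doas])
S = rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))
N0 = rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))
X = A @ S + 0.1 * N0                     # snapshots: signals plus noise

R = X @ X.conj().T / T                   # sample covariance matrix
_, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
En = V[:, : M - K]                       # noise-subspace eigenvectors

grid = np.deg2rad(np.linspace(-90, 90, 1801))
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
              for t in grid])            # MUSIC pseudospectrum
peaks = [i for i in range(1, len(p) - 1) if p[i - 1] < p[i] > p[i + 1]]
best = sorted(peaks, key=lambda i: p[i])[-K:]
print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[best])))
```

Gain and phase errors of the kind studied above would enter as a random diagonal perturbation of each steering vector, degrading the orthogonality between signal steering vectors and the estimated noise subspace.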

Relevance: 80.00%

Publisher:

Abstract:

The three-point bending behavior of sandwich beams made up of jute-epoxy skins and a piecewise linear functionally graded (FG) rubber core reinforced with fly ash filler is investigated. This work studies the influence of parameters such as weight fraction of fly ash, core-to-sandwich thickness ratio, and orientation of jute on specific bending modulus and strength. The load-displacement response of the sandwich is traced to evaluate the specific modulus and strength. FG core samples are prepared using a conventional casting technique, and the sandwiches by hand layup. The presence of gradation is quantified experimentally. Results of the bending tests indicate that specific modulus and strength are primarily governed by filler content and core-to-sandwich thickness ratio. FG sandwiches with different gradation configurations (uniform, linear, and piecewise linear) are modeled using finite element analysis (ANSYS 5.4) to evaluate specific strength, which is subsequently compared with the experimental results, and the best gradation configuration is identified. POLYM. COMPOS., 32:1541-1551, 2011. (C) 2011 Society of Plastics Engineers
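
For orientation, extracting flexural properties from a three-point-bending load-displacement trace typically uses the standard homogeneous-beam relations below (textbook forms; a layered sandwich section strictly calls for the corresponding sandwich-beam expressions), with the specific values obtained by dividing by density:

```latex
% Three-point bending, homogeneous beam of span L, width b, depth d:
% F_max = peak load, m = initial slope of the load-displacement curve,
% rho = density (for "specific" properties).
\sigma_f = \frac{3 F_{\max} L}{2 b d^{2}}, \qquad
E_f = \frac{L^{3} m}{4 b d^{3}}, \qquad
\text{specific property} = \frac{\sigma_f \ \text{or}\ E_f}{\rho}.
```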

Relevance: 80.00%

Publisher:

Abstract:

Catches of skipjack tuna supporting major fisheries in parts of the western, central and eastern Pacific Ocean have increased in recent years; thus, it is important to examine the dynamics of the fishery to determine man's effect on the abundance of the stocks. A general linear hypothesis model was developed to standardize fishing effort to a single vessel size and gear type. Standardized effort was then used to compute an index of abundance which accounts for seasonal variability in the fishing area. The indices of abundance were highly variable from year to year in both the northern and southern areas of the fishery but indicated a generally higher abundance in the south. Data from 438 fish tagged and recovered in the eastern Pacific Ocean were used to compute growth curves. A least-squares technique was used to estimate the parameters of the von Bertalanffy growth function. Two estimates of the parameters were made by analyzing the same data in different ways. For the first set of estimates, K = 0.819 on an annual instantaneous basis and L∞ = 729 mm; for the second, K = 0.431 and L∞ = 881 mm. These compared well with estimates derived using the Chapman-Richards growth function, which includes the von Bertalanffy function as a special case. It was concluded that the latter function provided an adequate empirical fit to the skipjack data, since the more complicated function did not significantly improve the fit. Tagging data from three cruises involving 8852 releases and 1777 returns were used to compute mortality rates during the time the fish were in the fishery. Two models were used in the analyses. The best estimates of the catchability coefficient (q) in the north and south were 8.4 × 10⁻⁴ and 5.0 × 10⁻⁵, respectively. The other loss rate (X), which included losses due to emigration, natural mortality and mortality due to carrying a tag, was 0.14 on an annual instantaneous basis for both areas. To detect the possible effect of fishing on abundance and total yield, the relations between abundance and effort and between total catch and effort were examined. It was found that at the levels of intensity observed in the fishery, fishing does not appear to have had any measurable effect on the stocks. It was concluded, therefore, that the total catch could probably be increased by substantially increasing total effort beyond the present level, and that the fluctuations in abundance are fishery-independent. The estimates of growth, mortality and fishing effort were used to compute yield-per-recruitment isopleths for skipjack in both the northern and southern areas. For a size at first entry of about 425 mm, the yield per recruitment was calculated at 3 pounds in the north and 1.5 pounds in the south. In both areas it would be possible to increase the yield per recruitment by increasing fishing effort. It was not possible to assess the potential production of the skipjack stocks fished in the eastern Pacific, except to note that the fishery had not affected their abundance and that they were certainly under-exploited. It was concluded that the northern and southern stocks could support increased harvests, especially the latter. (PDF contains 274 pages.)
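
A sketch of the growth-curve step, assuming synthetic age-length data generated from the report's first parameter set (K = 0.819, L∞ = 729 mm); the fitting approach (unweighted least squares via scipy.optimize.curve_fit) is illustrative, not the report's exact procedure.

```python
# Least-squares fit of the von Bertalanffy growth function
#   L(t) = Linf * (1 - exp(-K * (t - t0)))
# on synthetic age-length pairs (placeholders for the tagging data).
import numpy as np
from scipy.optimize import curve_fit

def vbgf(t, linf, k, t0):
    return linf * (1.0 - np.exp(-k * (t - t0)))

rng = np.random.default_rng(4)
t = np.linspace(0.5, 4.0, 40)                     # age (years), synthetic
L = vbgf(t, 729.0, 0.819, 0.0) + 15 * rng.standard_normal(t.size)

(linf, k, t0), _ = curve_fit(vbgf, t, L, p0=(800.0, 0.5, 0.0))
print(f"Linf = {linf:.0f} mm, K = {k:.3f} per year, t0 = {t0:.2f}")
```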

Relevance: 80.00%

Publisher:

Abstract:

This paper investigates the potential improvement in signal reliability for indoor off-body communications channels operating at 5.8 GHz using switched diversity techniques. In particular we investigate the performance of switch-and-stay combining (SSC), switch-and-examine combining (SEC) and switch-and-examine combining with post-examining selection (SECps) schemes which utilize multiple spatially separated antennas at the base station. During the measurements a test subject, wearing an antenna on his chest, performed a number of walking movements towards and then away from a uniform linear array. It was found that all of the considered diversity schemes provided a worthwhile signal improvement; however, the performance of the diversity systems varied according to the switching threshold that was adopted. To model the fading envelope observed at the output of each of the combiners, we have applied diversity-specific equations developed under the assumption of Nakagami-m fading. As a measure of the goodness of fit, the Kullback-Leibler divergence between the empirical and theoretical probability density functions (PDFs) was calculated and found to be close to 0. To assist with the interpretation of the goodness of fit achieved in this study, the standard deviation σ of a zero-mean Gaussian PDF with variance σ² used to approximate a zero-mean, unit-variance Gaussian PDF is also presented; these values were generally quite close to 1, indicating that the theoretical models provided an adequate fit to the measured data.
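
As an illustration of how a switched scheme trades complexity for gain, here is a toy simulation of switch-and-stay combining over two independent Nakagami-m branches; the shape parameter, threshold and sample count are arbitrary, and real measurements such as those above would replace the synthetic envelopes.

```python
# Toy switch-and-stay combining (SSC) over two Nakagami-m faded branches.
# A Nakagami-m envelope is the square root of a Gamma(m, Omega/m) power.
import numpy as np

rng = np.random.default_rng(5)
m, omega, n = 1.5, 1.0, 100_000          # Nakagami shape, mean power, samples
r = np.sqrt(rng.gamma(shape=m, scale=omega / m, size=(2, n)))  # envelopes

threshold = 0.5
out = np.empty(n)
branch = 0
for i in range(n):
    # SSC rule: switch to the other branch only when the current branch
    # drops below the threshold, then stay regardless of the new level.
    if r[branch, i] < threshold:
        branch ^= 1
    out[i] = r[branch, i]

print("no-diversity outage P(r < thr):", np.mean(r[0] < threshold))
print("SSC outage          P(r < thr):", np.mean(out < threshold))
```

The dependence of the outage improvement on `threshold` mirrors the paper's observation that performance varies with the adopted switching threshold.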

Relevance: 80.00%

Publisher:

Abstract:

The construction sector has a major role to play in delivering the transition to a low carbon economy and in contributing to sustainable development; however, integrating sustainability into everyday business remains a major challenge for the sector. This research explores the experience of three large construction and engineering consultancy firms in mainstreaming sustainability. The aim of the paper is to identify and explain variations in firm-level strategies for mainstreaming sustainability. The three cases vary in the way in which sustainability is framed – as a problem of risk, business opportunity or culture – and in its location within the firm. The research postulates that the mainstreaming of sustainability is not the uniform linear process often articulated in theories of strategic change and management, but varies with the dominant organisational culture and history of each firm. The paper concludes with a reflection on the implications of this analysis for management theories and for firm-level strategies.

Relevance: 80.00%

Publisher:

Abstract:

A multiple-iteration constrained conjugate gradient (MICCG) algorithm and a single-iteration constrained conjugate gradient (SICCG) algorithm are proposed to realize the widely used frequency-domain minimum-variance-distortionless-response (MVDR) beamformers, and the resulting algorithms are applied to speech enhancement. The algorithms are derived based on the Lagrange method and conjugate gradient techniques. The implementations of the algorithms avoid any form of explicit or implicit autocorrelation matrix inversion. Theoretical analysis establishes formal convergence of the algorithms. Specifically, the MICCG algorithm is developed based on a block adaptation approach and generates a finite sequence of estimates that converge to the MVDR solution. For limited data records, the estimates of the MICCG algorithm are better than the conventional estimators and equivalent to the auxiliary vector algorithms. The SICCG algorithm is developed based on a continuous adaptation approach with a sample-by-sample updating procedure, and its estimates asymptotically converge to the MVDR solution. An illustrative example using synthetic data from a uniform linear array is studied, and an evaluation on real data recorded by an acoustic vector-sensor array is presented. The performance of the MICCG and SICCG algorithms is compared with that of state-of-the-art approaches.
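
A hedged sketch of the underlying idea: the MVDR weights w = R⁻¹d / (dᴴR⁻¹d) can be obtained by running conjugate gradients on Rx = d, avoiding any explicit or implicit inversion of the autocorrelation matrix. This is plain CG on a toy uniform-linear-array scenario, not the constrained MICCG/SICCG recursions themselves.

```python
# MVDR weights via conjugate gradients instead of matrix inversion.
import numpy as np

rng = np.random.default_rng(6)
M, T = 8, 400
d = np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(10.0)))  # steering
X = (np.outer(d, rng.standard_normal(T))            # look-direction signal
     + 0.5 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))))
R = X @ X.conj().T / T                               # autocorrelation matrix

def cg_solve(A, b, iters=20):
    """Conjugate gradients for a Hermitian positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    for _ in range(iters):
        Ap = A @ p
        a = (r.conj() @ r) / (p.conj() @ Ap)
        x = x + a * p
        r_new = r - a * Ap
        beta = (r_new.conj() @ r_new) / (r.conj() @ r)
        p = r_new + beta * p
        r = r_new
    return x

x = cg_solve(R, d)                   # x approximates R^{-1} d
w = x / (d.conj() @ x)               # MVDR normalization enforces w^H d = 1
print("distortionless check w^H d =", w.conj() @ d)
```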

Relevance: 80.00%

Publisher:

Abstract:

We present a concurrent semantics (i.e., a semantics where concurrency is explicitly represented) for CC programs with atomic tells. This makes it possible to derive concurrency, dependency, and nondeterminism information for such languages. The ability to treat failure information also brings CLP programs within the range of applicability of our semantics: although such programs are not concurrent, the concurrency information derived in the semantics may be interpreted as possible parallelism, thus allowing those computation steps which appear concurrent in the net to be safely parallelized. Dually, the dependency information may be interpreted as necessary sequentialization, which can be exploited to schedule CC programs. The fact that the semantic structure contains dependency information suggests a new tell operation, which checks for consistency only against the constraints it depends on, achieving a reasonable trade-off between efficiency and atomicity.
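
The closing idea, a tell that checks consistency only against the constraints it depends on, can be illustrated with a toy store over single-variable bound constraints; everything here (the constraint form, the dependency notion) is an invented minimal example, not the semantics of the paper.

```python
# Toy dependency-aware tell: consistency is checked only against
# constraints sharing a variable with the new one, not the whole store.
# Constraints are bounds (var, op, value) with op in {"<=", ">="}.

def consistent(c1, c2):
    """Two bounds on the same variable conflict iff they demand
    x <= a and x >= b with b > a."""
    (v1, o1, a), (v2, o2, b) = c1, c2
    if v1 != v2 or o1 == o2:
        return True
    lo = b if o1 == "<=" else a
    hi = a if o1 == "<=" else b
    return lo <= hi

class Store:
    def __init__(self):
        self.constraints = []

    def tell(self, c):
        # only constraints mentioning c's variable are examined
        deps = [d for d in self.constraints if d[0] == c[0]]
        if all(consistent(c, d) for d in deps):
            self.constraints.append(c)
            return True
        return False    # tell fails: it would make the store inconsistent

s = Store()
print(s.tell(("x", "<=", 5)))   # True
print(s.tell(("y", ">=", 2)))   # True: y-bounds never checked against x
print(s.tell(("x", ">=", 7)))   # False: conflicts with x <= 5
```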

Relevance: 80.00%

Publisher:

Abstract:

Theoretical computer science is a foundational discipline, since most advances in computing rest on solid results from that field. In recent years, owing both to the growth in computing power and to the approach of the physical limit on the miniaturization of electronic components, interest has revived in formal models of computation that are alternatives to the classical von Neumann architecture. Many of these models are inspired by the way nature efficiently solves very complex problems. Most are computationally complete and intrinsically parallel, and for this reason they are coming to be regarded as new computing paradigms (natural computing). A range of abstract architectures is thus available that are as powerful as conventional computers and sometimes more efficient: some of them improve the performance, at least in time, on NP-complete problems, providing non-exponential costs. The formal representation of networks of evolutionary processors requires both context-free and context-dependent constructions; in other words, a complete formal representation of a NEP in general entails both syntactic and semantic restrictions, so that many apparently (syntactically) correct representations of particular instances of these devices would be meaningless because they might violate other, semantic, restrictions. Applying semantic grammatical evolution to NEPs therefore involves choosing a subset of them within which to search for those that solve a given problem. This work studies a model inspired by cell biology called networks of evolutionary processors [55, 53], that is, networks whose nodes are very simple processors capable of performing only one type of point mutation (insertion, deletion or substitution of a symbol). These nodes are associated with a filter defined by some random-context or membership condition. Networks formed of at most six nodes, with filters defined by membership in regular languages, are able to generate all recursively enumerable languages regardless of the underlying graph. This result is not surprising, since similar results have been documented in the literature. If one considers networks with nodes and filters defined by random contexts, which seem closer to biological implementations, then more complex languages can be generated, such as languages that are not context-free. Nevertheless, these very simple mechanisms are able to solve complex problems in polynomial time; a linear solution has been presented for an NP-complete problem, the 3-colorability problem. As a first significant contribution, a new dynamics has been proposed for networks of evolutionary processors with nondeterministic, massively parallel behavior [55], so that all research in the area of processor networks carries over to massively parallel networks. For example, massively parallel networks can be modified according to certain rules so as to move the filters onto the connections. Each connection is viewed as a bidirectional channel in which the input and output filters coincide. Despite this, these networks are computationally complete.
Other kinds of rules can also be implemented to extend this computational model. Replacing the point mutations associated with each node by the splicing operation yields a new type of processor, the splicing processor; the resulting computational model, networks of splicing processors (ANSP), is in some ways similar to distributed test-tube systems based on splicing. In addition, a new model has been defined [56], networks of evolutionary processors with filters on the connections, in which the processors have only rules and the filters have been moved to the connections. Under certain restrictions this model is equivalent to classical networks of evolutionary processors; without those restrictions, the proposed model is a superset of the classical NEPs. The main advantage of moving the filters to the connections lies in the simplicity of the modeling. A further contribution of this work is the design of a Java simulator [54, 52] for the networks of evolutionary processors proposed in this thesis. Regarding the term "evolutionary processor" used in this thesis: the computational process described here is not exactly an evolutionary process in the Darwinian sense, but the rewriting operations considered can be interpreted as mutations and the filtering processes can be viewed as selection. Moreover, this work does not address the possible biological implementation of these networks, important though that is. Throughout the thesis, the complexity measure adopted for ANSPs is one we call size (the number of nodes of the underlying graph). It is shown that any recursively enumerable language L can be accepted by an ANSP in which the number of processors is linearly bounded by the cardinality of the tape alphabet of a Turing machine recognizing L. Following the concept of universal ANSPs introduced by Manea [65], it is shown that an ANSP with a fixed graph structure can accept any recursively enumerable language. An ANSP can thus be regarded as a problem-solving device with a further property of practical relevance: a universal ANSP can be defined as a subnetwork in which only a limited number of parameters depends on the language. This characteristic can be interpreted as a method for solving any NP problem in polynomial time using an ANSP of constant size, namely thirty-one. This means that the solution of any NP problem is uniform in the sense that the network, apart from the universal subnetwork, can be viewed as a program: adapting it to the problem instance amounts to choosing the filters and rules that do not belong to the universal subnetwork. An interesting open question, from our point of view, is how to choose the optimal size of this network. ---ABSTRACT--- This thesis deals with recent research in the area of natural computing (bio-inspired models), more precisely networks of evolutionary processors, first developed by Victor Mitrana and based on the P systems introduced by Gheorghe Păun. In these models, a set of processors is connected in an underlying undirected graph; each processor holds a multiset of objects (strings) and a set of rules, named evolution rules, that transform the objects inside it [55, 53].
These objects can be sent and received over the graph connections provided they satisfy the constraints defined by the input and output filters of the processors. This symbolic model, which is nondeterministic (processors are not synchronized) and massively parallel [55] (all rules can be applied in one computational step), has some important properties regarding the solution of NP problems in linear time and, of course, with linear resources. There are a great number of variants, such as hybrid networks, splicing processors, etc., that give the model a computational power equivalent to Turing machines. The origin of networks of evolutionary processors (NEPs for short) is a basic architecture for parallel and distributed symbolic processing, related to the Connection Machine as well as the Logic Flow paradigm, which consists of several processors, each placed in a node of a virtual complete graph and able to handle data associated with the respective node. All the nodes send their data simultaneously, and the receiving nodes likewise handle all the arriving messages simultaneously, according to some strategies. In a series of papers, each node is viewed as a cell having genetic information encoded in DNA sequences which may evolve by local evolutionary events, that is, point mutations; each node is specialized for just one of these evolutionary operations. Furthermore, the data in each node are organized in the form of multisets of words (each word appearing in an arbitrarily large number of copies), and all the copies are processed in parallel such that all the possible events that can take place do actually take place. Obviously, the computational process just described is not exactly an evolutionary process in the Darwinian sense, but the rewriting operations we have considered might be interpreted as mutations and the filtering process might be viewed as a selection process. Recombination is missing, but it has been asserted that evolutionary and functional relationships between genes can be captured by taking only local mutations into consideration. It is clear that the filters associated with each node allow strong control of the computation. Indeed, every node has an input and an output filter; two nodes can exchange data if the data passes the output filter of the sender and the input filter of the receiver. Moreover, if some data is sent out by a node and is not able to enter any other node, then it is lost. In this work we simplify the ANSP model considered above by moving the filters from the nodes to the edges. Each edge is viewed as a two-way channel such that the input and output filters coincide. Clearly, the possibility of controlling the computation in such networks seems to be diminished: for instance, there is no possibility of losing data during the communication steps. In spite of this, and of the fact that splicing is not a powerful operation (recall that splicing systems generate only regular languages), we prove here that these devices are computationally complete. As a consequence, we propose characterizations of two complexity classes, namely NP and PSPACE, in terms of accepting networks of restricted splicing processors with filtered connections. We propose a uniform linear-time solution to SAT based on ANSPFCs with linearly bounded resources. This solution should be understood correctly: we do not solve SAT in linear time and space.
Since any word and auxiliary word appears in an arbitrarily large number of copies, one can generate in linear time, by parallelism and communication, an exponential number of words, each of them having an exponential number of copies. However, this does not seem to be a major drawback, since by PCR (polymerase chain reaction) one can generate an exponential number of identical DNA molecules in a linear number of reactions. It is worth mentioning that the ANSPFC constructed above remains unchanged for any instance with the same number of variables. Therefore, the solution is uniform in the sense that the network, excepting the input and output nodes, may be viewed as a program: according to the number of variables, we choose the filters, the splicing words and the rules, then we assign all possible values to the variables and compute the formula. We proved that ANSPs are computationally complete. Do ANSPFCs remain computationally complete? If this is not the case, what other problems can be efficiently solved by these ANSPFCs? Moreover, the complexity class NP is exactly the class of all languages decided by ANSPs in polynomial time. Can NP be characterized in a similar way with ANSPFCs?
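
To make the alternation of evolutionary and communication steps concrete, here is a deliberately tiny Python sketch of a NEP-style computation; the alphabet, rules and filters are invented for illustration, and the model is simplified (plain sets instead of multisets, one rule per node, strings retained by the sender).

```python
# Toy NEP step: each node applies its single point mutation to every
# string it holds, then copies migrate along edges if they pass the
# receiving node's input filter (regular-language membership).
import re

nodes = {
    "ins": {"strings": {"ab"},
            "rule": lambda s: {s[:i] + "c" + s[i:] for i in range(len(s) + 1)},
            "filter": re.compile(r"[ab]*")},          # accepts c-free strings
    "sub": {"strings": set(),
            "rule": lambda s: {s.replace("c", "b", 1)},
            "filter": re.compile(r"[ab]*c[ab]*")},    # accepts exactly one c
}
edges = [("ins", "sub"), ("sub", "ins")]

def step(nodes, edges):
    # evolutionary step: every node rewrites all of its strings
    for n in nodes.values():
        n["strings"] = (set().union(*map(n["rule"], n["strings"]))
                        if n["strings"] else set())
    # communication step: copies travel along edges, kept only if they
    # match the destination's input filter
    inbox = {name: set() for name in nodes}
    for src, dst in edges:
        inbox[dst] |= {s for s in nodes[src]["strings"]
                       if nodes[dst]["filter"].fullmatch(s)}
    for name in nodes:
        nodes[name]["strings"] |= inbox[name]

for _ in range(3):
    step(nodes, edges)
print({name: sorted(n["strings"])[:5] for name, n in nodes.items()})
```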

Relevance: 50.00%

Publisher:

Abstract:

Currently, a variety of linear and nonlinear measures is in use to investigate spatiotemporal interrelation patterns of multivariate time series. Whereas the former are by definition insensitive to nonlinear effects, the latter detect both nonlinear and linear interrelation. In the present contribution we employ a uniform surrogate-based approach, which is capable of disentangling interrelations that significantly exceed random effects and interrelations that significantly exceed linear correlation. The bivariate version of the proposed framework is explored using a simple model allowing for separate tuning of coupling and nonlinearity of interrelation. To demonstrate applicability of the approach to multivariate real-world time series we investigate resting state functional magnetic resonance imaging (rsfMRI) data of two healthy subjects as well as intracranial electroencephalograms (iEEG) of two epilepsy patients with focal onset seizures. The main findings are that for our rsfMRI data interrelations can be described by linear cross-correlation. Rejection of the null hypothesis of linear iEEG interrelation occurs predominantly for epileptogenic tissue as well as during epileptic seizures.
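
A minimal sketch of the surrogate logic, under simplifying assumptions: Fourier surrogates that reuse one random phase sequence across channels preserve all linear auto- and cross-correlations, so a nonlinear measure (here a crude histogram mutual-information estimate) exceeding the surrogate distribution indicates interrelation beyond linear correlation. The coupled pair and all parameters are illustrative, not the rsfMRI/iEEG pipeline of the paper.

```python
# Surrogate test for interrelation exceeding linear cross-correlation.
import numpy as np

rng = np.random.default_rng(7)
n = 4096
x = rng.standard_normal(n)
y = 0.8 * x**2 + 0.5 * rng.standard_normal(n)   # nonlinearly coupled pair

def mutual_info(a, b, bins=16):
    """Crude plug-in mutual information from a 2D histogram."""
    h, _, _ = np.histogram2d(a, b, bins=bins)
    p = h / h.sum()
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

def linear_surrogates(sig_pair, rng):
    """Shared random phases -> auto- and cross-spectra preserved."""
    phase = np.exp(1j * rng.uniform(0, 2 * np.pi, n // 2 + 1))
    phase[0] = 1.0                   # leave DC and Nyquist bins untouched
    phase[-1] = 1.0
    return [np.fft.irfft(np.fft.rfft(s) * phase, n) for s in sig_pair]

mi0 = mutual_info(x, y)
sims = []
for _ in range(99):
    xs, ys = linear_surrogates((x, y), rng)
    sims.append(mutual_info(xs, ys))
p = (1 + sum(s >= mi0 for s in sims)) / 100      # rank-based p-value
print(f"MI = {mi0:.3f}, surrogate p-value = {p:.3f}")
```

Because the surrogates keep the full linear structure, a small p-value here rejects the null hypothesis of purely linear interrelation, which is the kind of rejection the study reports predominantly for epileptogenic tissue and seizures.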