57 results for Functions of complex variables.
Abstract:
Ras of complex proteins (ROC) domains were identified in 2003 as GTP binding modules in large multidomain proteins from Dictyostelium discoideum. Research into the function of these domains exploded with their identification in a number of proteins linked to human disease, including leucine-rich repeat kinase 2 (LRRK2) and death-associated protein kinase 1 (DAPK1) in Parkinson’s disease and cancer, respectively. This surge in research has resulted in a growing body of data revealing the role that ROC domains play in regulating protein function and signaling pathways. In this review, recent advances in the structural information available for proteins containing ROC domains are explored, along with insights into enzymatic function and the integration of ROC domains as molecular switches in a cellular and organismal context.
Abstract:
We present a method for the recognition of complex actions. Our method combines automatic learning of simple actions and manual definition of complex actions in a single grammar. Contrary to the general trend in complex action recognition, which divides recognition into two stages, our method recognizes simple and complex actions in a unified way. This is achieved by encoding simple-action HMMs within the stochastic grammar that models complex actions. This unified approach enables the higher activity layers to influence the recognition of simple actions more effectively, which leads to a substantial improvement in the classification of complex actions. We consider the recognition of complex actions based on person transits between areas in the scene. As input, our method receives crossings of tracks along a set of zones which are derived using unsupervised learning of the movement patterns of the objects in the scene. We evaluate our method on a large dataset showing normal, suspicious and threat behaviour in a parking lot. Experiments show an improvement of ~30% in the recognition of both high-level scenarios and their composing simple actions with respect to a two-stage approach. Experiments with synthetic noise simulating the most common tracking failures show that our method experiences only a limited decrease in performance when moderate amounts of noise are added.
Abstract:
Background: The electroencephalogram (EEG) may be described by a large number of different feature types and automated feature selection methods are needed in order to reliably identify features which correlate with continuous independent variables. New method: A method is presented for the automated identification of features that differentiate two or more groups in neurological datasets based upon a spectral decomposition of the feature set. Furthermore, the method is able to identify features that relate to continuous independent variables. Results: The proposed method is first evaluated on synthetic EEG datasets and observed to reliably identify the correct features. The method is then applied to EEG recorded during a music listening task and is observed to automatically identify neural correlates of music tempo changes similar to neural correlates identified in a previous study. Finally, the method is applied to identify neural correlates of music-induced affective states. The identified neural correlates reside primarily over the frontal cortex and are consistent with widely reported neural correlates of emotions. Comparison with existing methods: The proposed method is compared to the state-of-the-art methods of canonical correlation analysis and common spatial patterns, in order to identify features differentiating synthetic event-related potentials of different amplitudes and is observed to exhibit greater performance as the number of unique groups in the dataset increases. Conclusions: The proposed method is able to identify neural correlates of continuous variables in EEG datasets and is shown to outperform canonical correlation analysis and common spatial patterns.
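As context for the canonical correlation analysis baseline mentioned in the comparison above, here is a minimal, hypothetical usage sketch with scikit-learn on toy data; it illustrates the baseline only, not the proposed spectral-decomposition method, and all variable names and sizes are invented.

```python
# Minimal sketch of the canonical correlation analysis (CCA) baseline mentioned
# above, using scikit-learn on toy data. This is the comparison method, not the
# proposed spectral-decomposition feature-identification approach.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_trials, n_features = 120, 16
tempo = rng.uniform(60, 180, size=(n_trials, 1))     # continuous independent variable
features = rng.normal(size=(n_trials, n_features))   # toy EEG-derived features
features[:, 0] += 0.02 * tempo[:, 0]                 # plant one tempo-correlated feature

cca = CCA(n_components=1)
cca.fit(features, tempo)
# feature weights of the first canonical component indicate which features
# covary with the continuous variable
print(np.argmax(np.abs(cca.x_weights_[:, 0])))       # most likely 0, the planted feature
```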
Abstract:
For a particular family of long-range potentials V, we prove that the eigenvalues of the indefinite Sturm–Liouville operator A = sign(x)(−Δ+V(x)) accumulate to zero asymptotically along specific curves in the complex plane. Additionally, we relate the asymptotics of complex eigenvalues to the two-term asymptotics of the eigenvalues of associated self-adjoint operators.
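For convenience, the operator described in this abstract can be written out in display form; the following is a minimal LaTeX rendering of the setup, with the precise long-range class of V and the exact accumulation curves left unspecified, as they are in the abstract.

```latex
% Indefinite Sturm–Liouville operator described in the abstract; the precise
% long-range class of V and the accumulation curves are not specified there.
\[
  A = \operatorname{sign}(x)\,\bigl(-\Delta + V(x)\bigr)
    = \operatorname{sign}(x)\Bigl(-\frac{d^{2}}{dx^{2}} + V(x)\Bigr),
\]
\[
  \lambda_{n} \in \sigma(A)\setminus\mathbb{R}, \qquad
  \lambda_{n} \longrightarrow 0 \ \text{asymptotically along specific curves in } \mathbb{C}.
\]
```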
Abstract:
This study investigated the contribution of stereoscopic depth cues to the reliability of ordinal depth judgments in complex natural scenes. Participants viewed photographs of cluttered natural scenes, either monocularly or stereoscopically. On each trial, they judged which of two indicated points in the scene was closer in depth. We assessed the reliability of these judgments over repeated trials, and how well they correlated with the actual disparities of the points between the left and right eyes' views. The reliability of judgments increased as their depth separation increased, was higher when the points were on separate objects, and deteriorated for point pairs that were more widely separated in the image plane. Stereoscopic viewing improved sensitivity to depth for points on the same surface, but not for points on separate objects. Stereoscopic viewing thus provides depth information that is complementary to that available from monocular occlusion cues.
Abstract:
The hereditary spastic paraplegias are a heterogeneous group of degenerative disorders that are clinically classified as either pure, with predominant lower limb spasticity, or complex, where spastic paraplegia is complicated by additional neurological features, and are inherited in autosomal dominant, autosomal recessive or X-linked patterns. Genetic defects have been identified in over 40 different genes, with more than 70 loci in total. Complex recessive spastic paraplegias have in the past been frequently associated with mutations in SPG11 (spatacsin), ZFYVE26/SPG15, SPG7 (paraplegin) and a handful of other rare genes, but many cases remain genetically undefined. The overlap with other neurodegenerative disorders has been implied in a small number of reports, but not in larger disease series. This deficiency has been largely due to the lack of suitable high-throughput techniques to investigate the genetic basis of disease, but the recent availability of next generation sequencing can facilitate the identification of disease-causing mutations even in extremely heterogeneous disorders. We investigated a series of 97 index cases with complex spastic paraplegia referred to a tertiary referral neurology centre in London for diagnosis or management. The mean age of onset was 16 years (range 3 to 39). The SPG11 gene was analysed first, revealing homozygous or compound heterozygous mutations in 30/97 (30.9%) of probands, the largest SPG11 series reported to date and by far the most common cause of complex spastic paraplegia in the UK, with severe and progressive clinical features and other neurological manifestations, linked with magnetic resonance imaging defects. Given the high frequency of SPG11 mutations, we studied the autophagic response to starvation in eight affected SPG11 cases and control fibroblast cell lines, but in our restricted study we did not observe correlations between disease status and autophagic or lysosomal markers. In the remaining cases, next generation sequencing was carried out, revealing variants in a number of other known complex spastic paraplegia genes, including five in SPG7 (5/97), four in FA2H (also known as SPG35) (4/97) and two in ZFYVE26/SPG15. Variants were also identified in genes usually associated with pure spastic paraplegia, as well as in the Parkinson’s disease-associated gene ATP13A2, the neuronal ceroid lipofuscinosis gene TPP1 and the hereditary motor and sensory neuropathy gene DNMT1, highlighting the genetic heterogeneity of spastic paraplegia. No plausible genetic cause was identified in 51% of probands, likely indicating the existence of as yet unidentified genes.
Abstract:
The formation of complexes in solutions containing positively charged polyions (polycations) and a variable amount of negatively charged polyions (polyanions) has been investigated by Monte Carlo simulations. The polyions were described as flexible chains of charged hard spheres interacting through a screened Coulomb potential. The systems were analyzed in terms of cluster compositions, structure factors, and radial distribution functions. At 50% charge equivalence or less, complexes involving two polycations and one polyanion were frequent, while closer to charge equivalence, larger clusters were formed. Small and neutral complexes dominated the solution at charge equivalence in a monodisperse system, while larger clusters again dominated the solution when the polyions were made polydisperse. The cluster composition and solution structure were also examined as functions of added salt by varying the electrostatic screening length. The observed formation of clusters could be rationalized by a few simple rules.
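For orientation, the screened Coulomb interaction between charged hard-sphere monomers referred to above is typically of the Debye–Hückel form shown below; this is the generic textbook expression, and the exact normalization and parameter values used in the simulations are not given in the abstract.

```latex
% Generic Debye–Hückel screened Coulomb potential between charged hard spheres
% i and j (valences z_i, z_j, hard-sphere diameter sigma, screening length 1/kappa).
% The simulations above may use a slightly different normalization.
\[
  u_{ij}(r) =
  \begin{cases}
    \infty, & r < \sigma,\\[4pt]
    \dfrac{z_i z_j e^{2}}{4\pi\varepsilon_0\varepsilon_r\,r}\;e^{-\kappa r}, & r \ge \sigma .
  \end{cases}
\]
```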
Abstract:
Acid mine drainage (AMD) is a widespread environmental problem associated with both working and abandoned mining operations. As part of an overall strategy to determine a long-term treatment option for AMD, a pilot passive treatment plant was constructed in 1994 at Wheal Jane Mine in Cornwall, UK. The plant consists of three separate systems, each containing aerobic reed beds, an anaerobic cell and rock filters, and represents the largest European experimental facility of its kind. The systems differ only in the type of pretreatment used to increase the pH of the influent minewater (pH <4): lime dosed (LD), anoxic limestone drain (ALD) and lime free (LF), which receives no form of pretreatment. Historical data (1994-1997) indicate median Fe reduction between 55% and 92%, sulphate removal in the range of 3-38% and removal of target metals (cadmium, copper and zinc) to below detection limits, depending on pretreatment and flow rates through the system. A new model to simulate the processes and dynamics of the wetland systems is described, as well as the application of the model to experimental data collected at the pilot plant. The model is process based, and uses reaction kinetic approaches based on experimental microbial techniques rather than an equilibrium approach to metal precipitation. The model is dynamic and uses numerical integration routines to solve a set of differential equations that describe the behaviour of 20 variables over the 17 pilot plant cells on a daily basis. The model outputs at each cell boundary are evaluated and compared with the measured data, and the model is demonstrated to provide a good representation of the complex behaviour of the wetland system for a wide range of variables.
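To give a flavour of the process-based, kinetic modelling approach described above, here is a minimal, hypothetical sketch treating a single wetland cell as a well-mixed reactor with first-order Fe removal; the actual model tracks about 20 variables over 17 cells with microbially derived rate laws, none of which are reproduced here, and all parameter values below are invented.

```python
# Minimal, hypothetical sketch of one wetland cell as a well-mixed reactor with
# first-order Fe removal; the real Wheal Jane model uses microbially derived
# kinetics for ~20 variables over 17 cells. All parameter values are invented.
import numpy as np
from scipy.integrate import solve_ivp

Q = 50.0      # flow through the cell (m^3/day)      -- assumed value
V = 500.0     # cell volume (m^3)                     -- assumed value
k_fe = 0.8    # first-order Fe removal rate (1/day)   -- assumed value
c_in = 30.0   # influent dissolved Fe (mg/L)          -- assumed value

def dcdt(t, c):
    # mass balance: inflow - outflow - first-order removal
    return [Q / V * (c_in - c[0]) - k_fe * c[0]]

sol = solve_ivp(dcdt, t_span=(0.0, 30.0), y0=[c_in],
                t_eval=np.linspace(0.0, 30.0, 31))
print(f"Fe after 30 days: {sol.y[0, -1]:.2f} mg/L")
```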
Abstract:
Two cobalt complexes, [Co(L-Se)(phen)]·CH2Cl2 (1) and [Co(L-Se)(N,N-Me2en)(CH3COO−)] (2), have been synthesized and characterized by elemental analyses, magnetic measurements, IR studies, etc. Single-crystal X-ray studies reveal that in complex (1) the cobalt atom is in the +2 oxidation state with trigonal bipyramidal geometry, while in complex (2) it is in the +3 oxidation state and octahedrally coordinated. The asymmetric unit of complex (2) contains two crystallographically independent discrete molecules. Complex (1) was found to be paramagnetic with μeff = 2.19 BM, indicating a low-spin cobalt(II) d7 system, whereas complex (2) is diamagnetic with cobalt(III) in the low-spin d6 state. Kinetic studies on the reduction of (2) by ascorbic acid in 80% MeCN-20% H2O (v/v) at 25 °C reveal that the reaction proceeds through the rapid formation of an inner-sphere adduct, probably by replacement of the loosely coordinated AcO− group, followed by electron transfer in a slow step; this mechanism is supported by a large Q (formation constant) value.
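The kinetic scheme described above (rapid inner-sphere adduct formation with formation constant Q, followed by slow electron transfer) is consistent with the standard pre-equilibrium saturation rate law shown below; the abstract does not state the rate law explicitly, so this is only the textbook form to which such data are usually fitted.

```latex
% Textbook pre-equilibrium (saturation) rate law consistent with the scheme in
% the abstract: fast adduct formation (constant Q), slow electron transfer
% (rate constant k_et). Not taken verbatim from the paper.
\[
  k_{\mathrm{obs}} = \frac{k_{\mathrm{et}}\, Q\,[\text{ascorbic acid}]}
                          {1 + Q\,[\text{ascorbic acid}]} .
\]
```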
Abstract:
Utilising the expressive power of S-Expressions in Learning Classifier Systems often prohibitively increases the search space due to the increased flexibility of the encoding. This work shows that selection of appropriate S-Expression functions through domain knowledge improves scaling in problems, as expected. It is also known that simple alphabets perform well on relatively small problems in a domain, e.g. the ternary alphabet in the 6-, 11- and 20-bit MUX domain. Once fit ternary rules had been formed, it was investigated whether higher-order learning was possible and whether this staged learning facilitated the selection of appropriate functions in complex alphabets, e.g. the selection of S-Expression functions. This novel methodology is shown to provide compact results (135-MUX) and exhibits potential for scaling well (1034-MUX), but is only a small step towards introducing abstraction to LCS.
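For readers unfamiliar with the benchmark, the Boolean multiplexer (MUX) problem referred to above works as follows: the first k bits of an input string address one of the remaining 2^k data bits, whose value is the correct output. A minimal Python sketch (not taken from the paper) is shown below.

```python
# Boolean multiplexer benchmark: the first k address bits select one of the
# remaining 2**k data bits. 6-MUX has k=2 (2+4 bits), 135-MUX has k=7 (7+128),
# 1034-MUX has k=10 (10+1024).
def multiplexer(bits):
    # infer k from the total length: len(bits) == k + 2**k
    k = 0
    while k + 2 ** k < len(bits):
        k += 1
    assert k + 2 ** k == len(bits), "invalid multiplexer input length"
    address = int("".join(str(b) for b in bits[:k]), 2)
    return bits[k + address]

# Example: 6-MUX, address bits '10' (= 2) select the third data bit.
print(multiplexer([1, 0, 0, 0, 1, 0]))  # -> 1
```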
Abstract:
This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace, to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, whereby it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse of dimensionality. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
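As a rough illustration of the rule-selection idea above, here is a minimal, hypothetical sketch that scores candidate fuzzy rules by an A-optimality-style criterion, trace((MᵀM)⁻¹), applied to each rule's membership-weighted regression matrix; the membership functions, data and weighting construction are invented for illustration and do not reproduce the paper's exact formulation.

```python
# Hypothetical sketch: score candidate fuzzy rules by an A-optimality-style
# criterion, trace((M^T M)^{-1}), applied to each rule's weighted regression
# matrix M = diag(mu_rule(x)) @ X. Smaller scores ~ better-identified rules.
import numpy as np

def a_optimality_score(X, memberships):
    """X: (N, p) regression matrix; memberships: (N,) fuzzy firing strengths."""
    M = memberships[:, None] * X           # weight each data row by the rule's membership
    MtM = M.T @ M
    return np.trace(np.linalg.pinv(MtM))   # pseudo-inverse guards against singular MtM

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))              # toy input regression matrix
# two toy Gaussian membership functions over the first input variable
mu_rule1 = np.exp(-0.5 * ((X[:, 0] - 1.0) / 0.5) ** 2)
mu_rule2 = np.exp(-0.5 * ((X[:, 0] + 1.0) / 0.5) ** 2)
for name, mu in [("rule 1", mu_rule1), ("rule 2", mu_rule2)]:
    print(name, a_optimality_score(X, mu))
```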
Abstract:
In the earth sciences, data are commonly cast on complex grids in order to model irregular domains such as coastlines, or to evenly distribute grid points over the globe. It is common for a scientist to wish to re-cast such data onto a grid that is more amenable to manipulation, visualization, or comparison with other data sources. The complexity of the grids presents a significant technical difficulty to the regridding process. In particular, the regridding of complex grids may suffer from severe performance issues, in the worst case scaling with the product of the sizes of the source and destination grids. We present a mechanism for the fast regridding of such datasets, based upon the construction of a spatial index that allows fast searching of the source grid. We discover that the most efficient spatial index under test (in terms of memory usage and query time) is a simple look-up table. A kd-tree implementation was found to be faster to build and to give similar query performance at the expense of a larger memory footprint. Using our approach, we demonstrate that regridding of complex data may proceed at speeds sufficient to permit regridding on-the-fly in an interactive visualization application, or in a Web Map Service implementation. For large datasets with complex grids the new mechanism is shown to significantly outperform algorithms used in many scientific visualization packages.
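To illustrate the spatial-indexing idea above, here is a minimal nearest-neighbour regridding sketch built on a kd-tree; it uses SciPy's cKDTree for convenience, whereas the paper's fastest index was a simple look-up table, and the handling of genuinely complex (e.g. curvilinear) grid geometry is not reproduced.

```python
# Minimal nearest-neighbour regridding via a kd-tree spatial index.
# Illustrative only: the paper's fastest index was a simple look-up table,
# and real complex-grid handling (curvilinear cells, poles, wrap-around)
# is more involved than this planar sketch.
import numpy as np
from scipy.spatial import cKDTree

def regrid_nearest(src_lon, src_lat, src_data, dst_lon, dst_lat):
    """Nearest-neighbour regrid of source points onto destination points."""
    src_pts = np.column_stack([src_lon.ravel(), src_lat.ravel()])
    tree = cKDTree(src_pts)                 # build the spatial index once
    dst_pts = np.column_stack([dst_lon.ravel(), dst_lat.ravel()])
    _, idx = tree.query(dst_pts, k=1)       # fast nearest-source lookup
    return src_data.ravel()[idx].reshape(dst_lon.shape)

# toy example: regrid a 100x100 source grid onto a coarse 10x10 lat/lon grid
src_lon, src_lat = np.meshgrid(np.linspace(-10, 10, 100), np.linspace(40, 60, 100))
src_data = np.sin(src_lon) + np.cos(src_lat)
dst_lon, dst_lat = np.meshgrid(np.linspace(-10, 10, 10), np.linspace(40, 60, 10))
print(regrid_nearest(src_lon, src_lat, src_data, dst_lon, dst_lat).shape)  # (10, 10)
```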
Sustained monitoring of the Southern Ocean at Drake Passage: past achievements and future priorities
Abstract:
Drake Passage is the narrowest constriction of the Antarctic Circumpolar Current (ACC) in the Southern Ocean, with implications for global ocean circulation and climate. We review the long-term sustained monitoring programmes that have been conducted at Drake Passage, dating back to the early part of the twentieth century. Attention is drawn to numerous breakthroughs that have been made from these programmes, including (a) the first determinations of the complex ACC structure and early quantifications of its transport; (b) realization that the ACC transport is remarkably steady over interannual and longer periods, and a growing understanding of the processes responsible for this; (c) recognition of the role of coupled climate modes in dictating the horizontal transport, and the role of anthropogenic processes in this; (d) understanding of mechanisms driving changes in both the upper and lower limbs of the Southern Ocean overturning circulation, and their impacts. It is argued that monitoring of this passage remains a high priority for oceanographic and climate research, but that strategic improvements could be made concerning how this is conducted. In particular, long-term programmes should concentrate on delivering quantifications of key variables of direct relevance to large-scale environmental issues: in this context, the time-varying overturning circulation is, if anything, even more compelling a target than the ACC flow. Further, there is a need for better international resource-sharing, and improved spatio-temporal coordination of the measurements. If achieved, the improvements in understanding of important climatic issues deriving from Drake Passage monitoring can be sustained into the future.
Abstract:
Developing high-quality scientific research will be most effective if research communities with diverse skills and interests are able to share information and knowledge, are aware of the major challenges across disciplines, and can exploit economies of scale to provide robust answers and better inform policy. We evaluate opportunities and challenges facing the development of a more interactive research environment by developing an interdisciplinary synthesis of research on a single geographic region. We focus on the Amazon as it is of enormous regional and global environmental importance and faces a highly uncertain future. To take stock of existing knowledge and provide a framework for analysis we present a set of mini-reviews from fourteen different areas of research, encompassing taxonomy, biodiversity, biogeography, vegetation dynamics, landscape ecology, earth-atmosphere interactions, ecosystem processes, fire, deforestation dynamics, hydrology, hunting, conservation planning, livelihoods, and payments for ecosystem services. Each review highlights the current state of knowledge and identifies research priorities, including major challenges and opportunities. We show that while substantial progress is being made across many areas of scientific research, our understanding of specific issues is often dependent on knowledge from other disciplines. Accelerating the acquisition of reliable and contextualized knowledge about the fate of complex pristine and modified ecosystems is partly dependent on our ability to exploit economies of scale in shared resources and technical expertise, recognise and make explicit interconnections and feedbacks among sub-disciplines, increase the temporal and spatial scale of existing studies, and improve the dissemination of scientific findings to policy makers and society at large. Enhancing interaction among research efforts is vital if we are to make the most of limited funds and overcome the challenges posed by addressing large-scale interdisciplinary questions. Bringing together a diverse scientific community with a single geographic focus can help increase awareness of research questions both within and among disciplines, and reveal the opportunities that may exist for advancing acquisition of reliable knowledge. This approach could be useful for a variety of globally important scientific questions.
Abstract:
For forecasting and economic analysis, many variables are used in logarithms (logs). In time series analysis, this transformation is often considered to stabilize the variance of a series. We investigate under which conditions taking logs is beneficial for forecasting. Forecasts based on the original series are compared to forecasts based on logs. For a range of economic variables, substantial forecasting improvements from taking logs are found if the log transformation actually stabilizes the variance of the underlying series. Using logs can be damaging to forecast precision if a stable variance is not achieved.
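As a small, generic illustration of the levels-versus-logs comparison discussed above (not the paper's models or data): fit a simple autoregression to a series and to its log, forecast one step ahead, and back-transform the log-based forecast with the usual log-normal bias correction.

```python
# Generic sketch of the levels-vs-logs comparison: fit an AR(1) to the series
# and to its log, forecast one step ahead, and back-transform the log forecast
# with the usual log-normal bias correction exp(mu + 0.5*sigma^2).
# Illustrative toy data only; not the paper's dataset or model set.
import numpy as np

def ar1_forecast(y):
    """One-step-ahead forecast and residual variance from an OLS-fitted AR(1)."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    resid = y[1:] - X @ beta
    return beta[0] + beta[1] * y[-1], resid.var(ddof=2)

rng = np.random.default_rng(1)
# toy series whose variance grows with its level, so logs stabilise it
y = np.exp(np.cumsum(0.02 + 0.05 * rng.standard_normal(200)))

fc_levels, _ = ar1_forecast(y)
fc_log, s2 = ar1_forecast(np.log(y))
fc_from_logs = np.exp(fc_log + 0.5 * s2)   # bias-corrected back-transform
print(f"levels forecast: {fc_levels:.3f}, log-based forecast: {fc_from_logs:.3f}")
```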