26 results for Sheaf of differential operators
Abstract:
When composing stock portfolios, managers frequently choose among hundreds of stocks. The stocks' risk properties are analyzed with statistical tools, and managers try to combine these to meet the investors' risk profiles. A recently developed tool for performing such optimization is called full-scale optimization (FSO). This methodology is very flexible with respect to investor preferences, but because of computational limitations it has until now been infeasible to use when many stocks are considered. We apply the artificial intelligence technique of differential evolution to solve FSO-type stock selection problems involving 97 assets. Differential evolution finds the optimal solutions by self-learning from randomly drawn candidate solutions. We show that this search technique makes large-scale problems computationally feasible and that the solutions retrieved are stable. The study also lends further merit to the FSO technique, as it shows that the solutions suit investor risk profiles better than portfolios retrieved from traditional methods.
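A minimal sketch of how differential evolution (DE/rand/1/bin) can search for a full-scale optimal portfolio. The power utility, parameter values, and simulated return data below are illustrative assumptions, not the study's actual objective or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: T return observations for N assets (not the study's data).
T, N = 500, 97
returns = rng.normal(0.0005, 0.01, size=(T, N))

def expected_utility(weights, returns, gamma=5.0):
    """FSO-style objective: mean power utility over the empirical return distribution."""
    port = 1.0 + returns @ weights          # gross portfolio return per period
    return np.mean(port ** (1 - gamma) / (1 - gamma))

def normalise(w):
    """Project a candidate onto long-only weights summing to one."""
    w = np.clip(w, 0.0, None)
    s = w.sum()
    return w / s if s > 0 else np.full_like(w, 1.0 / len(w))

# Differential evolution (DE/rand/1/bin) over portfolio weights.
pop_size, F, CR, generations = 60, 0.6, 0.9, 300
pop = np.array([normalise(rng.random(N)) for _ in range(pop_size)])
fitness = np.array([expected_utility(w, returns) for w in pop])

for _ in range(generations):
    for i in range(pop_size):
        idx = [j for j in range(pop_size) if j != i]
        a, b, c = pop[rng.choice(idx, 3, replace=False)]
        mutant = a + F * (b - c)                       # differential mutation
        cross = rng.random(N) < CR                     # binomial crossover mask
        trial = normalise(np.where(cross, mutant, pop[i]))
        f_trial = expected_utility(trial, returns)
        if f_trial > fitness[i]:                       # greedy selection
            pop[i], fitness[i] = trial, f_trial

best = pop[np.argmax(fitness)]
print("best expected utility:", fitness.max())
```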
Abstract:
We studied the effects of the composition of impregnating solution and heat treatment conditions on the activity of catalytic systems for the low-temperature oxidation of CO obtained by the impregnation of Busofit carbon-fiber cloth with aqueous solutions of palladium, copper, and iron salts. The formation of an active phase in the synthesized catalysts at different stages of their preparation was examined with the use of differential thermal and thermogravimetric analyses, X-ray diffraction analysis, X-ray photoelectron spectroscopy, and elemental spectral analysis. The catalytic system prepared by the impregnation of electrochemically treated Busofit with the solutions of PdCl2, FeCl3, CuBr2, and Cu(NO3)2 and activated under optimum conditions ensured 100% CO conversion under a respiratory regime at both low (0.03%) and high (0.5%) carbon monoxide contents of air. It was found that the activation of a catalytic system at elevated temperatures (170-180°C) leads to the conversion of Pd(II) into Pd(I), which was predominantly localized in a near-surface layer. The promoting action of copper nitrate consists in the formation of a crystalline phase of the rhombic atacamite Cu2Cl(OH)3. The catalyst surface is finally formed under the conditions of a catalytic reaction, when a joint Pd(I)-Cu(I) active site is formed. © 2014 Pleiades Publishing, Ltd.
Abstract:
Background—The molecular mechanisms underlying similarities and differences between physiological and pathological left ventricular hypertrophy (LVH) are of intense interest. Most previous work involved targeted analysis of individual signaling pathways or screening of transcriptomic profiles. We developed a network biology approach using genomic and proteomic data to study the molecular patterns that distinguish pathological and physiological LVH. Methods and Results—A network-based analysis using graph theory methods was undertaken on 127 genome-wide expression arrays of in vivo murine LVH. This revealed phenotype-specific pathological and physiological gene coexpression networks. Despite >1650 common genes in the 2 networks, network structure is significantly different. This is largely because of rewiring of genes that are differentially coexpressed in the 2 networks; this novel concept of differential wiring was further validated experimentally. Functional analysis of the rewired network revealed several distinct cellular pathways and gene sets. Deeper exploration was undertaken by targeted proteomic analysis of mitochondrial, myofilament, and extracellular subproteomes in pathological LVH. A notable finding was that mRNA–protein correlation was greater at the cellular pathway level than for individual loci. Conclusions—This first combined gene network and proteomic analysis of LVH reveals novel insights into the integrated pathomechanisms that distinguish pathological versus physiological phenotypes. In particular, we identify differential gene wiring as a major distinguishing feature of these phenotypes. This approach provides a platform for the investigation of potentially novel pathways in LVH and offers a freely accessible protocol (http://sites.google.com/site/cardionetworks) for similar analyses in other cardiovascular diseases.
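A hedged sketch of the differential-wiring idea described above: build a coexpression network for each phenotype by thresholding gene–gene correlations, then rank genes by how much their connectivity changes between the two networks. The data, threshold, and scoring rule are illustrative placeholders, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative expression matrices (samples x genes) for two phenotypes.
n_genes = 200
patho = rng.normal(size=(60, n_genes))
physio = rng.normal(size=(67, n_genes))

def coexpression_network(expr, threshold=0.5):
    """Adjacency matrix: 1 where |Pearson correlation| exceeds the threshold."""
    corr = np.corrcoef(expr, rowvar=False)
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    return adj

adj_patho = coexpression_network(patho)
adj_physio = coexpression_network(physio)

# Differential wiring score: change in a gene's connectivity between the networks.
rewiring = np.abs(adj_patho.sum(axis=0) - adj_physio.sum(axis=0))
top_rewired = np.argsort(rewiring)[::-1][:20]
print("most differentially wired gene indices:", top_rewired)
```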
Abstract:
The objective of this study was to investigate the effects of circularity, comorbidity, prevalence and presentation variation on the accuracy of differential diagnoses made in optometric primary care using a modified form of naïve Bayesian sequential analysis. No such investigation has ever been reported before. Data were collected for 1422 cases seen over one year. Positive test outcomes were recorded for case history (ethnicity, age, symptoms and ocular and medical history) and clinical signs in relation to each diagnosis. For this reason, only positive likelihood ratios were used for this modified form of Bayesian analysis that was carried out with Laplacian correction and Chi-square filtration. Accuracy was expressed as the percentage of cases for which the diagnoses made by the clinician appeared at the top of a list generated by Bayesian analysis. Preliminary analyses were carried out on 10 diagnoses and 15 test outcomes. Accuracy of 100% was achieved in the absence of presentation variation but dropped by 6% when variation existed. Circularity artificially elevated accuracy by 0.5%. Surprisingly, removal of Chi-square filtering increased accuracy by 0.4%. Decision tree analysis showed that accuracy was influenced primarily by prevalence followed by presentation variation and comorbidity. Analysis of 35 diagnoses and 105 test outcomes followed. This explored the use of positive likelihood ratios, derived from the case history, to recommend signs to look for. Accuracy of 72% was achieved when all clinical signs were entered. The drop in accuracy, compared to the preliminary analysis, was attributed to the fact that some diagnoses lacked strong diagnostic signs; the accuracy increased by 1% when only recommended signs were entered. Chi-square filtering improved recommended test selection. Decision tree analysis showed that accuracy was again influenced primarily by prevalence, followed by comorbidity and presentation variation. Future work will explore the use of likelihood ratios based on positive and negative test findings prior to considering naïve Bayesian analysis as a form of artificial intelligence in optometric practice.
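A minimal sketch of ranking differential diagnoses with positive likelihood ratios and a Laplacian (add-one) correction, in the spirit of the modified naïve Bayesian analysis described above. The diagnoses, findings, counts and rates below are invented for illustration, and the scores are unnormalised naïve-Bayes-style products.

```python
import numpy as np

# Illustrative counts (invented): how often each positive finding was recorded
# with each diagnosis, and how often the finding is positive when the diagnosis is absent.
diagnoses = ["dry eye", "allergic conjunctivitis", "glaucoma suspect"]
prevalence = np.array([0.30, 0.15, 0.05])            # prior probabilities

findings = ["itching", "raised IOP", "reduced TBUT"]
positives = np.array([[10, 2, 60],                    # positives[d, f]: cases of
                      [55, 1, 8],                     # diagnosis d with finding f
                      [3, 40, 2]])
cases_per_dx = np.array([80, 70, 50])
false_pos = np.array([0.10, 0.02, 0.12])              # P(finding+ | diagnosis absent)

def rank_diagnoses(observed_positive):
    """Score each diagnosis: prevalence times the product of positive likelihood ratios."""
    scores = prevalence.astype(float)
    for d in range(len(diagnoses)):
        for f in observed_positive:
            # Laplacian correction avoids zero sensitivities for unseen combinations.
            sens = (positives[d, f] + 1) / (cases_per_dx[d] + 2)
            lr_pos = sens / false_pos[f]              # positive likelihood ratio
            scores[d] *= lr_pos
    order = np.argsort(scores)[::-1]
    return [(diagnoses[i], scores[i]) for i in order]

# Example: the clinician records itching and reduced tear break-up time.
print(rank_diagnoses(observed_positive=[0, 2]))
```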
Abstract:
The Generative Topographic Mapping (GTM) algorithm of Bishop et al. (1997) has been introduced as a principled alternative to the Self-Organizing Map (SOM). As well as avoiding a number of deficiencies in the SOM, the GTM algorithm has the key property that the smoothness properties of the model are decoupled from the reference vectors, and are described by a continuous mapping from a lower-dimensional latent space into the data space. Magnification factors, which are approximated by the difference between code-book vectors in SOMs, can therefore be evaluated for the GTM model as continuous functions of the latent variables using the techniques of differential geometry. They play an important role in data visualization by highlighting the boundaries between data clusters, and are illustrated here for both a toy data set and a problem involving the identification of crab species from morphological data.
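For reference, the magnification factor of a smooth latent-to-data mapping y(x) can be written in terms of its Jacobian; the standard statement below is consistent with the differential-geometry treatment referred to above, with dA an infinitesimal latent-space area element and dA' its image in the data space.

```latex
% Jacobian of the latent-to-data mapping:  J_{kj} = \partial y_k / \partial x_j
g_{ij} = \sum_{k} \frac{\partial y_k}{\partial x_i}\,\frac{\partial y_k}{\partial x_j}
       = \left(J^{\top} J\right)_{ij},
\qquad
\frac{dA'}{dA} = \sqrt{\det g} = \sqrt{\det\!\left(J^{\top} J\right)}
```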
Abstract:
Magnification factors specify the extent to which the area of a small patch of the latent (or 'feature') space of a topographic mapping is magnified on projection to the data space, and are of considerable interest in both neuro-biological and data analysis contexts. Previous attempts to consider magnification factors for the self-organizing map (SOM) algorithm have been hindered because the mapping is only defined at discrete points (given by the reference vectors). In this paper we consider the batch version of SOM, for which a continuous mapping can be defined, as well as the Generative Topographic Mapping (GTM) algorithm of Bishop et al. (1997) which has been introduced as a probabilistic formulation of the SOM. We show how the techniques of differential geometry can be used to determine magnification factors as continuous functions of the latent space coordinates. The results are illustrated here using a problem involving the identification of crab species from morphological data.
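A sketch of computing magnification factors numerically for a GTM-like RBF mapping, using a finite-difference Jacobian and the sqrt(det(J^T J)) formula quoted above. The mapping, basis centres, and weights here are random placeholders rather than a trained model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder GTM-style mapping: 2-D latent space -> D-dimensional data space
# via Gaussian RBF basis functions and a (here random, normally trained) weight matrix.
centres = rng.uniform(-1, 1, size=(16, 2))    # RBF centres in latent space
sigma = 0.4
W = rng.normal(size=(16, 5))                   # basis-to-data weights (D = 5)

def mapping(x):
    """y(x): latent point (2,) -> data-space point (D,)."""
    phi = np.exp(-np.sum((centres - x) ** 2, axis=1) / (2 * sigma ** 2))
    return phi @ W

def magnification_factor(x, eps=1e-5):
    """sqrt(det(J^T J)) with a central finite-difference Jacobian."""
    J = np.empty((W.shape[1], 2))
    for j in range(2):
        dx = np.zeros(2)
        dx[j] = eps
        J[:, j] = (mapping(x + dx) - mapping(x - dx)) / (2 * eps)
    return np.sqrt(np.linalg.det(J.T @ J))

# Evaluate over a latent grid, as done for visualisation plots.
grid = np.linspace(-1, 1, 20)
mf = np.array([[magnification_factor(np.array([u, v])) for u in grid] for v in grid])
print(mf.shape, mf.min(), mf.max())
```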
Abstract:
Motion discontinuities can signal object boundaries where few or no other cues, such as luminance, colour, or texture, are available. Hence, motion-defined contours are an ecologically important counterpart to luminance contours. We developed a novel motion-defined Gabor stimulus to investigate the nature of neural operators analysing visual motion fields in order to draw parallels with known luminance operators. Luminance-defined Gabors have been successfully used to discern the spatial-extent and spatial-frequency specificity of possible visual contour detectors. We now extend these studies into the motion domain. We define a stimulus using limited-lifetime moving dots whose velocity is described over 2-D space by a Gabor pattern surrounded by randomly moving dots. Participants were asked to determine whether the orientation of the Gabor pattern (and hence of the motion contours) was vertical or horizontal in a 2AFC task, and the proportion of correct responses was recorded. We found that with practice participants became highly proficient at this task, able in certain cases to reach 90% accuracy with only 12 limited-lifetime dots. However, for both practised and novice participants we found that the ability to detect a single boundary saturates with the size of the Gaussian envelope of the Gabor at approximately 5 deg full-width at half-height. At this optimal size we then varied spatial frequency and found that performance was best at the lowest measured spatial frequency (0.1 cycle deg⁻¹) and declined steadily at higher spatial frequencies, suggesting that motion contour detectors may be specifically tuned to a single, isolated edge.
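A rough sketch of how dot velocities could be assigned from a 2-D Gabor profile to build a motion-defined Gabor with limited-lifetime dots, as described above. All parameter values are arbitrary placeholders rather than the experimental settings, and the surrounding randomly moving dots are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stimulus parameters (placeholders, in degrees of visual angle).
n_dots = 12
field = 10.0           # stimulus field is field x field deg
sigma = 5.0 / 2.355    # Gaussian envelope: 5 deg full-width at half-height
spat_freq = 0.1        # carrier spatial frequency, cycles per degree
gain = 4.0             # peak dot speed, deg/s
lifetime = 5           # frames a dot lives before being replotted

def gabor_speed(x, y, vertical=True):
    """Signed horizontal dot speed defined by a Gabor over space.
    A vertical carrier gives vertically oriented motion contours."""
    carrier = np.cos(2 * np.pi * spat_freq * (x if vertical else y))
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return gain * envelope * carrier

# One trial's frame updates for limited-lifetime dots.
pos = rng.uniform(-field / 2, field / 2, size=(n_dots, 2))
age = rng.integers(0, lifetime, size=n_dots)
dt = 1 / 60.0
for frame in range(60):
    pos[:, 0] += gabor_speed(pos[:, 0], pos[:, 1]) * dt    # drift along x
    age += 1
    expired = age >= lifetime                               # replot dead dots
    pos[expired] = rng.uniform(-field / 2, field / 2, size=(expired.sum(), 2))
    age[expired] = 0
```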
Abstract:
This thesis applies a hierarchical latent trait model system to a large quantity of data. The motivation for it was the lack of viable approaches to analyse High Throughput Screening datasets, which may include thousands of data points with high dimensions. High Throughput Screening (HTS) is an important tool in the pharmaceutical industry for discovering leads which can be optimised and further developed into candidate drugs. Since the development of new robotic technologies, the ability to test the activities of compounds has considerably increased in recent years. Traditional methods, looking at tables and graphical plots for analysing relationships between measured activities and the structure of compounds, have not been feasible when facing a large HTS dataset. Instead, data visualisation provides a method for analysing such large datasets, especially with high dimensions. So far, a few visualisation techniques for drug design have been developed, but most of them just cope with several properties of compounds at one time. We believe that a latent variable model with a non-linear mapping from the latent space to the data space is a preferred choice for visualising a complex high-dimensional data set. As a type of latent variable model, the latent trait model (LTM) can deal with either continuous data or discrete data, which makes it particularly useful in this domain. In addition, with the aid of differential geometry, we can gain insight into the distribution of the data from magnification factor and curvature plots. Rather than obtaining the useful information just from a single plot, a hierarchical LTM arranges a set of LTMs and their corresponding plots in a tree structure. We model the whole data set with an LTM at the top level, which is broken down into clusters at deeper levels of the hierarchy. In this manner, refined visualisation plots can be displayed at deeper levels and sub-clusters may be found. The hierarchy of LTMs is trained using the expectation-maximisation (EM) algorithm to maximise its likelihood with respect to the data sample. Training proceeds interactively in a recursive fashion (top-down). The user subjectively identifies interesting regions on the visualisation plot that they would like to model in greater detail. At each stage of hierarchical LTM construction, the EM algorithm alternates between the E- and M-step. Another problem that can occur when visualising a large data set is that there may be significant overlaps of data clusters. It is very difficult for the user to judge where the centres of regions of interest should be put. We address this problem by employing the minimum message length technique, which can help the user to decide the optimal structure of the model. In this thesis we also demonstrate the applicability of the hierarchy of latent trait models in the field of document data mining.
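A compressed sketch of the EM alternation described above, written for a Gaussian GTM-style latent variable model rather than the thesis's discrete-noise latent trait model; grid sizes, priors, and data are placeholders. Each iteration performs the E-step (responsibilities of latent points for data points) and the M-step (regularised weighted least squares for the mapping weights, then the noise update).

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 10))                       # placeholder data (N x D)
N, D = X.shape

# Latent grid and RBF basis (K latent points, M basis functions + bias).
grid = np.linspace(-1, 1, 10)
Z = np.array([[u, v] for v in grid for u in grid])   # K x 2 latent grid points
centres = np.array([[u, v] for v in np.linspace(-1, 1, 4)
                           for u in np.linspace(-1, 1, 4)])
sigma = 0.5
Phi = np.exp(-((Z[:, None, :] - centres[None, :, :]) ** 2).sum(-1) / (2 * sigma**2))
Phi = np.hstack([Phi, np.ones((len(Z), 1))])         # K x M design matrix

W = rng.normal(scale=0.1, size=(Phi.shape[1], D))    # basis-to-data weights
beta = 1.0                                           # inverse noise variance
alpha = 1e-3                                         # weight regulariser

for iteration in range(30):
    # E-step: responsibilities of each latent point for each data point.
    Y = Phi @ W                                                  # K x D projections
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)          # N x K distances
    logR = -0.5 * beta * d2
    logR -= logR.max(axis=1, keepdims=True)
    R = np.exp(logR)
    R /= R.sum(axis=1, keepdims=True)

    # M-step: weighted regularised least squares for W, then update beta.
    G = np.diag(R.sum(axis=0))
    A = Phi.T @ G @ Phi + (alpha / beta) * np.eye(Phi.shape[1])
    W = np.linalg.solve(A, Phi.T @ R.T @ X)
    beta = (N * D) / (R * ((X[:, None, :] - (Phi @ W)[None, :, :]) ** 2).sum(-1)).sum()
```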
Abstract:
This thesis presents research within empirical financial economics with focus on liquidity and portfolio optimisation in the stock market. The discussion on liquidity is focused on measurement issues, including TAQ data processing and the measurement of systematic liquidity factors. Furthermore, a framework for treatment of the two topics in combination is provided. The liquidity part of the thesis gives a conceptual background to liquidity and discusses several different approaches to liquidity measurement. It contributes to liquidity measurement by providing detailed guidelines on the data processing needed for applying TAQ data to liquidity research. The main focus, however, is the derivation of systematic liquidity factors. The principal component approach to systematic liquidity measurement is refined by the introduction of moving and expanding estimation windows, allowing for time-varying liquidity co-variances between stocks. Under several liquidity specifications, this improves the ability to explain stock liquidity and returns, as compared to static window PCA and market average approximations of systematic liquidity. The highest ability to explain stock returns is obtained when using inventory cost as a liquidity measure and a moving window PCA as the systematic liquidity derivation technique. Systematic factors of this setting also have a strong ability to explain cross-sectional liquidity variation. Portfolio optimisation in the full-scale optimisation (FSO) framework is tested in two empirical studies. These contribute to the assessment of FSO by expanding the applicability to stock indexes and individual stocks, by considering a wide selection of utility function specifications, and by showing explicitly how the full-scale optimum can be identified using either grid search or the heuristic search algorithm of differential evolution. The studies show that, relative to mean-variance portfolios, FSO performs well in these settings and that the computational expense can be mitigated dramatically by application of differential evolution.
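A small sketch of extracting a systematic liquidity factor by PCA over a moving estimation window, in the spirit of the moving-window refinement described above. The liquidity panel is simulated, the window length is arbitrary, and the factor is taken as the projection of the day's liquidity cross-section onto the first principal component re-estimated each window.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated liquidity panel: T days x N stocks (e.g. spreads or inventory costs).
T, N, window = 750, 50, 250
common = rng.normal(size=(T, 1))                       # latent systematic component
liquidity = 0.7 * common + rng.normal(scale=0.5, size=(T, N))

systematic = np.full(T, np.nan)
for t in range(window, T):
    panel = liquidity[t - window:t]                    # moving estimation window
    panel = (panel - panel.mean(0)) / panel.std(0)     # standardise each stock
    cov = np.cov(panel, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    pc1 = eigvec[:, -1]                                # first principal component
    systematic[t] = liquidity[t] @ pc1                 # today's factor realisation

print("systematic liquidity factor, last 5 days:", systematic[-5:])
```

An expanding-window variant simply replaces `liquidity[t - window:t]` with `liquidity[:t]`, and the time-varying loadings come from re-estimating `pc1` at every step.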
Abstract:
A complex Ginzburg-Landau equation subjected to local and global time-delay feedback terms is considered. In particular, multiple oscillatory solutions and their properties are studied. We present novel results regarding the disappearance of limit-cycle solutions and derive analytical criteria for frequency degeneration, amplitude degeneration, and frequency extrema. Furthermore, we discuss the influence of the phase shift parameter and show analytically that the stabilization of the steady state and the decay of all oscillations (amplitude death) cannot occur for global feedback only. Finally, we explain the onset of traveling wave patterns close to the regime of amplitude death.
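A 1-D numerical sketch of a complex Ginzburg-Landau equation with combined local and global time-delay feedback. The feedback form (a phase-shifted sum of the delayed local field and the delayed spatial average), the coefficients, and the explicit-Euler discretisation are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(6)

# Grid and CGLE coefficients (illustrative values).
L, nx, dt, steps = 100.0, 256, 0.02, 20000
dx = L / nx
b, c = 0.5, -1.2                       # linear and nonlinear dispersion
mu_local, mu_global = 0.2, 0.2         # local and global feedback strengths
xi = 0.8                               # feedback phase shift
tau = 2.0                              # delay time
delay_steps = int(round(tau / dt))

A = 0.01 * (rng.normal(size=nx) + 1j * rng.normal(size=nx))
history = [A.copy() for _ in range(delay_steps + 1)]   # delay buffer for A(t - tau)

def laplacian(field):
    """Second derivative with periodic boundary conditions."""
    return (np.roll(field, -1) - 2 * field + np.roll(field, 1)) / dx**2

for step in range(steps):
    A_delayed = history[0]
    # Assumed feedback: phase-shifted sum of local and spatially averaged delayed fields.
    feedback = np.exp(1j * xi) * (mu_local * A_delayed + mu_global * A_delayed.mean())
    dAdt = A + (1 + 1j * b) * laplacian(A) - (1 + 1j * c) * np.abs(A)**2 * A + feedback
    A = A + dt * dAdt                              # explicit Euler step
    history.append(A.copy())
    history.pop(0)

print("final mean amplitude:", np.abs(A).mean())
```

Amplitude death would show up as the mean amplitude decaying toward zero; traveling waves appear as a persistent phase gradient across the domain.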
Abstract:
We consider the process of opinion formation in a society of interacting agents, where there is a set B of socially accepted rules. In this scenario, we observed that agents, represented by simple feed-forward, adaptive neural networks, may have a conservative attitude (mostly in agreement with B) or a liberal attitude (mostly in agreement with neighboring agents) depending on how much their opinions are influenced by their peers. The topology of the network representing the interaction of the society's members is determined by a graph, where the agents' properties are defined over the vertexes and the inter-agent interactions are defined over the bonds. The adaptability of the agents allows us to model the formation of opinions as an online learning process, where agents learn continuously as new information becomes available to the whole society. Through the application of statistical mechanics techniques we deduced a set of differential equations describing the dynamics of the system. We observed that by slowly varying the average peer influence in such a way that the agents' attitude changes from conservative to liberal and back, the average social opinion develops a hysteresis cycle. Such hysteretic behavior disappears when the variance of the social influence distribution is large enough. In all the cases studied, the change from conservative to liberal behavior is characterized by the emergence of conservative clusters, i.e., a close-knit set of society members that follow a leader who agrees with the social status quo when the rule B is challenged.
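A toy agent-based sketch, not the paper's statistical-mechanics model: perceptron-like agents learn online from a stream of public issues, with each agent's learning target blending the stance dictated by the socially accepted rule B and the stance of its neighbours. Slowly raising and then lowering the peer-influence weight probes for hysteresis in the average opinion. The ring topology, update rule, and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

n_agents, dim, lr = 100, 20, 0.05
B = rng.normal(size=dim); B /= np.linalg.norm(B)      # socially accepted rule
W = rng.normal(size=(n_agents, dim))                  # agents' weight vectors
W /= np.linalg.norm(W, axis=1, keepdims=True)
# Ring neighbourhood: each agent listens to its two nearest neighbours.
neighbours = [((i - 1) % n_agents, (i + 1) % n_agents) for i in range(n_agents)]

def step(peer_weight):
    """One online-learning step on a new public issue; returns the average opinion."""
    global W
    issue = rng.normal(size=dim)
    opinions = np.sign(W @ issue)                     # each agent's current stance
    rule = np.sign(B @ issue)                         # stance dictated by rule B
    for i in range(n_agents):
        peer = np.sign(opinions[list(neighbours[i])].sum() + 1e-9)
        target = (1 - peer_weight) * rule + peer_weight * peer
        # Perceptron-style update toward the blended target.
        W[i] += lr * (target - opinions[i]) * issue
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    return opinions.mean()

# Slowly raise then lower peer influence and record the average opinion.
sweep = np.concatenate([np.linspace(0, 1, 200), np.linspace(1, 0, 200)])
trace = [np.mean([step(p) for _ in range(20)]) for p in sweep]
print("average opinion at start/turn/end:", trace[0], trace[199], trace[-1])
```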