15 results for "merging"

in Aston University Research Archive


Relevance: 20.00%

Abstract:

In this paper we propose algorithms for combining and ranking answers from distributed heterogeneous data sources in the context of a multi-ontology Question Answering task. Our proposal includes a merging algorithm that aggregates, combines and filters ontology-based search results and three different ranking algorithms that sort the final answers according to different criteria such as popularity, confidence and semantic interpretation of results. An experimental evaluation on a large-scale corpus indicates improvements in the quality of the search results with respect to a scenario where the merging and ranking algorithms were not applied. These collective methods for merging and ranking make it possible to answer questions whose answers are distributed across ontologies while, at the same time, filtering irrelevant answers, fusing similar answers together, and eliciting the most accurate answer(s) to a question.
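As an illustration only (the paper's actual algorithms operate over ontology-based search results and richer semantics), a minimal sketch of fusing similar answers from several sources and ranking the merged pool by popularity or confidence might look like this; all source names, labels and scores here are hypothetical:

```python
from collections import defaultdict

def merge_answers(result_lists):
    """Fuse similar answers (here: case-insensitive label match) from
    several sources into one candidate pool."""
    pool = defaultdict(list)  # canonical label -> list of (source, confidence)
    for source, answers in result_lists.items():
        for label, confidence in answers:
            pool[label.strip().lower()].append((source, confidence))
    return pool

def rank_by_popularity(pool):
    # More supporting sources -> higher rank.
    return sorted(pool, key=lambda a: len(pool[a]), reverse=True)

def rank_by_confidence(pool):
    # Highest single-source confidence wins.
    return sorted(pool, key=lambda a: max(c for _, c in pool[a]), reverse=True)

results = {
    "onto_A": [("Paris", 0.9), ("Lyon", 0.4)],
    "onto_B": [("paris", 0.7)],
}
pool = merge_answers(results)
print(rank_by_popularity(pool)[0])  # prints "paris" (supported by two sources)
```

Filtering irrelevant answers would add a threshold step before ranking; the paper's semantic-interpretation ranking has no simple analogue here.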

Relevance: 20.00%

Abstract:

Measurement of lung ventilation is one of the most reliable techniques in diagnosing pulmonary diseases. The time-consuming and bias-prone traditional methods using hyperpolarized 3He and 1H magnetic resonance images have recently been improved by an automated technique based on 'multiple active contour evolution'. This method involves a simultaneous evolution of multiple initial conditions, called 'snakes', eventually leading to their 'merging', and is entirely independent of the shapes and sizes of snakes or other parametric details. The objective of this paper is to show, through a theoretical analysis, that the functional dynamics of merging as depicted in the active contour method has a direct analogue in statistical physics, and this explains its 'universality'. We show that the multiple active contour method has a universal scaling behaviour akin to that of classical nucleation in two spatial dimensions. We prove our point by comparing the numerically evaluated exponents with an equivalent thermodynamic model. © IOP Publishing Ltd and Deutsche Physikalische Gesellschaft.

Relevance: 20.00%

Abstract:

This study suggests a novel application of Inverse Data Envelopment Analysis (InvDEA) in strategic decision making about mergers and acquisitions in banking. The conventional DEA assesses the efficiency of banks based on the information gathered about the quantities of inputs used to realize the observed level of outputs produced. The decision maker of a banking unit willing to merge with or acquire another banking unit needs to decide on the input and/or output levels if an efficiency target is set for the new banking unit. In this paper, a new InvDEA-based approach is developed to suggest the required level of the inputs and outputs for the merged bank to reach a predetermined efficiency target. This study illustrates the novelty of the proposed approach through the case of a bank considering merging with or acquiring one of its competitors to achieve synergies and a higher level of efficiency. A real data set of 42 banking units in Gulf Cooperation Council countries is used to show the practicality of the proposed approach.
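For readers unfamiliar with conventional DEA (the forward model that InvDEA inverts), the input-oriented CCR efficiency of one unit can be obtained from a small linear program in multiplier form. The sketch below uses SciPy and entirely invented bank data; it is not the paper's InvDEA procedure:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of unit k (multiplier form):
    maximise u.y_k subject to v.x_k = 1 and u.y_j - v.x_j <= 0 for all j.
    X: (n_units, n_inputs), Y: (n_units, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[k], np.zeros(m)])          # linprog minimises, so negate
    A_ub = np.hstack([Y, -X])                          # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[k]])[None]   # v.x_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=(0, None), method="highs")
    return -res.fun

# Toy data: 4 banks, 2 inputs (staff, branches), 1 output (loans).
X = np.array([[20., 10.], [40., 15.], [30., 8.], [25., 12.]])
Y = np.array([[100.], [130.], [140.], [90.]])
scores = [ccr_efficiency(X, Y, k) for k in range(4)]  # each in (0, 1]
```

InvDEA would instead fix a target efficiency for the merged unit and solve for admissible input/output levels.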

Relevance: 10.00%

Abstract:

The chemical industry in China is facing fierce competition and exposure to market forces as a result of changes in the country's economic policy. The Chinese government has applied administrative actions rather than simply relying on market forces to address the changing dynamics. It has attempted to privatise state-owned chemical enterprises (SOCEs) by corporatisation, coupled with industrial restructuring by merging individual state-owned enterprises into groups. Based on a quantitative survey in combination with case studies of two Chinese chemical enterprises, this paper concludes that, in this industry, building competences is more effective than privatisation and restructuring for improving performance.

Relevance: 10.00%

Abstract:

Background: The binocular Esterman visual field test (EVFT) is the current visual field test for driving in the UK. Merging of monocular field tests (Integrated Visual Field, IVF) has been proposed as an alternative for glaucoma patients. Aims: To examine the level of agreement between the EVFT and IVF for patients with binocular paracentral scotomata, caused by either ophthalmological or neurological conditions, and to compare outcomes with useful field of view (UFOV) performance, a test of visual attention thought to be important in driving. Methods: 60 patients with binocular paracentral scotomata but normal visual acuity (VA) were recruited prospectively. Subjects completed and were classified as “pass” or “fail” for the EVFT, IVF and UFOV. Results: Good agreement occurred between the EVFT and IVF in classifying subjects as “pass” or “fail” (kappa = 0.84). Classifications disagreed for four subjects with paracentral scotomata of neurological origin (three “passed” IVF yet “failed” EVFT). Mean UFOV scores did not differ between those who “passed” and those who “failed” both visual field tests (p = 0.11). Agreement between the visual field tests and UFOV was limited (EVFT kappa = 0.22, IVF kappa = 0.32). Conclusions: Although the IVF and EVFT agree well in classifying visual fields with regard to legal fitness to drive in the UK, the IVF “passes” some individuals currently classed as unfit to drive due to paracentral scotomata of non-glaucomatous origin. The suitability of the UFOV for assessing crash risk in those with visual field loss is questionable.
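Cohen's kappa, the agreement statistic quoted above, corrects raw pass/fail agreement for the agreement expected by chance alone. A minimal sketch with made-up classifications (not the study's data):

```python
def cohens_kappa(a, b):
    """Agreement between two pass/fail classifications, corrected for chance:
    kappa = (p_observed - p_chance) / (1 - p_chance)."""
    assert len(a) == len(b)
    n = len(a)
    labels = sorted(set(a) | set(b))
    p_o = sum(x == y for x, y in zip(a, b)) / n                      # observed
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)   # chance
    return (p_o - p_e) / (1 - p_e)

# Hypothetical pass/fail calls for 10 subjects on two field tests.
evft = ["pass"] * 6 + ["fail"] * 4
ivf  = ["pass"] * 7 + ["fail"] * 3
print(round(cohens_kappa(evft, ivf), 2))  # → 0.78
```

A kappa of 0.84, as reported for EVFT vs IVF, is conventionally read as "almost perfect" agreement; the 0.22–0.32 values against UFOV are only "fair".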

Relevance: 10.00%

Abstract:

Today, the data available to tackle many scientific challenges is vast in quantity and diverse in nature. The exploration of heterogeneous information spaces requires suitable mining algorithms as well as effective visual interfaces. Most existing systems concentrate either on mining algorithms or on visualization techniques. Though visual methods developed in information visualization have been helpful, improved understanding of a complex, large, high-dimensional dataset requires an effective projection of such a dataset onto a lower-dimensional (2D or 3D) manifold. This paper introduces a flexible visual data mining framework which combines advanced projection algorithms developed in the machine learning domain and visual techniques developed in the information visualization domain. The framework follows Shneiderman’s mantra to provide an effective user interface. The advantage of such an interface is that the user is directly involved in the data mining process. We integrate principled projection methods, such as Generative Topographic Mapping (GTM) and Hierarchical GTM (HGTM), with powerful visual techniques, such as magnification factors, directional curvatures, parallel coordinates, billboarding, and user interaction facilities, to provide an integrated visual data mining framework. Results on a real-life high-dimensional dataset from the chemoinformatics domain are also reported and discussed. Projection results of GTM are analytically compared with the projection results from other traditional projection methods, and it is also shown that the HGTM algorithm provides additional value for large datasets. The computational complexity of these algorithms is discussed to demonstrate their suitability for the visual data mining framework.

Relevance: 10.00%

Abstract:

This paper assesses the impact of regional technological diversification on the emergence of new innovators across EU regions. Integrating analyses from the regional economics, economic geography and technological change literatures, we explore the role that the regional embeddedness of actors characterised by diverse technological competencies may have in fostering novel and sustained interactions leading to new technological combinations. In particular, we test whether greater technological diversification improves regional ‘combinatorial’ opportunities, leading to the emergence of new innovators. The analysis is based on panel data obtained by merging regional economic data from Eurostat and patent data from the CRIOS-PATSTAT database over the period 1997–2006, covering 178 regions across 10 EU countries. Accounting for different measures of economic and innovative activity at the NUTS2 level, our findings suggest that the regional co-location of diverse technological competencies contributes to the entry of new innovators, thereby shaping technological change and industry dynamics. Thus, this paper brings to the fore a better understanding of the relationship between regional diversity and technological change.
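Mechanically, the merge of Eurostat regional data with CRIOS-PATSTAT patent data described above is a key-based join on region and year. A toy pandas sketch, with invented column names and figures:

```python
import pandas as pd

# Hypothetical fragments of the two sources, keyed by NUTS2 region and year.
econ = pd.DataFrame({
    "nuts2":  ["DE21", "DE21", "ITC4"],
    "year":   [1997, 1998, 1997],
    "gdp_pc": [25.1, 25.9, 22.3],
})
patents = pd.DataFrame({
    "nuts2": ["DE21", "ITC4", "ITC4"],
    "year":  [1997, 1997, 1998],
    "new_innovators": [14, 9, 11],
})

# Inner join keeps only region-year cells present in both sources.
panel = econ.merge(patents, on=["nuts2", "year"], how="inner")
print(len(panel))  # → 2
```

An outer join (`how="outer"`) would instead keep unmatched region-years with missing values, a choice that matters for unbalanced panels like the one analysed here.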

Relevance: 10.00%

Abstract:

Biological experiments often produce enormous amounts of data, which are usually analyzed by data clustering. Cluster analysis refers to statistical methods that are used to assign data with similar properties into several smaller, more meaningful groups. Two commonly used techniques are introduced in the following section: principal component analysis (PCA) and hierarchical clustering. PCA calculates the variance between variables and groups them into a few uncorrelated groups or principal components (PCs) that are orthogonal to each other. Hierarchical clustering is carried out by separating data into many clusters and merging similar clusters together. Here, we use an example of human leukocyte antigen (HLA) supertype classification to demonstrate the usage of the two methods. Two programs, Generating Optimal Linear Partial Least Square Estimations (GOLPE) and Sybyl, are used for PCA and hierarchical clustering, respectively. However, the reader should bear in mind that the methods have been incorporated into other software as well, such as SIMCA, statistiXL, and R.
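Both techniques are widely available outside the named packages. A short sketch using NumPy for PCA (via SVD of the centred data) and SciPy for agglomerative clustering, on synthetic two-group data rather than the chapter's HLA dataset:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Toy stand-in for biological data: two groups of 5 samples, 8 variables each.
data = np.vstack([rng.normal(0, 1, (5, 8)), rng.normal(4, 1, (5, 8))])

# PCA via SVD of the centred matrix: rows of Vt are the orthogonal PCs.
centred = data - data.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
pc_scores = centred @ Vt[:2].T      # project samples onto the first two PCs

# Agglomerative clustering: start from singletons, repeatedly merge the two
# most similar clusters (average linkage), then cut the tree at k=2 clusters.
labels = fcluster(linkage(data, method="average"), t=2, criterion="maxclust")
```

The `fcluster` cut recovers the two planted groups here; on real data the number of clusters is usually read off the dendrogram instead of being fixed in advance.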

Relevance: 10.00%

Abstract:

The UK has a relatively low ratio of business R&D to GDP (the BERD ratio) compared to other leading economies. There was also a small decline in the UK’s BERD ratio in the 1990s, whereas other leading economies experienced small rises. The relatively low BERD ratio cannot be explained solely by sectoral or industry-level differences between the UK and other countries. There is, therefore, considerable interest in understanding the firm-level determinants of investment in R&D. This report was commissioned by the DTI to analyse the link between R&D and productivity for a sample of firms derived from merging the ONS’s Business Research and Development Database (BERD) and the Annual Respondents Database (ARD). The analysis estimates the private rates of return to R&D, and not the social rates of return, since it is the private returns that should drive firms’ decisions. A key objective of this research is to analyse the productivity of R&D in small and medium-sized enterprises (SMEs). The analysis is intended to allow comparisons to the results in Rogers (2005), which uses publicly available data on R&D in medium to large UK firms in the 1990s.

Relevance: 10.00%

Abstract:

Ant Colony Optimisation algorithms mimic the way ants use pheromones for marking paths to important locations. Pheromone traces are followed and reinforced by other ants, but also evaporate over time. As a consequence, optimal paths attract more pheromone, whilst the less useful paths fade away. In the Multiple Pheromone Ant Clustering Algorithm (MPACA), ants detect features of objects represented as nodes within graph space. Each node has one or more ants assigned to each feature. Ants attempt to locate nodes with matching feature values, depositing pheromone traces on the way. This use of multiple pheromone values is a key innovation. Ants record other ant encounters, keeping a record of the features and colony membership of ants. The recorded values determine when ants should combine their features to look for conjunctions and whether they should merge into colonies. This ability to detect and deposit pheromone representative of feature combinations, and the resulting colony formation, renders the algorithm a powerful clustering tool. The MPACA operates as follows: (i) initially each node has ants assigned to each feature; (ii) ants roam the graph space searching for nodes with matching features; (iii) when departing matching nodes, ants deposit pheromones to inform other ants that the path goes to a node with the associated feature values; (iv) ant feature encounters are counted each time an ant arrives at a node; (v) if the feature encounters exceed a threshold value, feature combination occurs; (vi) a similar mechanism is used for colony merging. The model varies from traditional ACO in that: (i) a modified pheromone-driven movement mechanism is used; (ii) ants learn feature combinations and deposit multiple pheromone scents accordingly; (iii) ants merge into colonies, the basis of cluster formation. The MPACA is evaluated over synthetic and real-world datasets and its performance compares favourably with alternative approaches.
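The reinforce-and-evaporate dynamic described at the start of this abstract (not the full MPACA, whose multiple pheromones, feature combination and colony merging are far richer) can be sketched in a few lines; the path qualities and constants below are invented for illustration:

```python
import random

random.seed(1)
EVAPORATION = 0.1   # fraction of pheromone lost per time step
DEPOSIT = 1.0       # pheromone an ant leaves on a path it found useful

# Two alternative paths to the same goal; path 0 is the more rewarding one.
pheromone = [1.0, 1.0]
quality = [0.9, 0.6]  # probability an ant finds the chosen path worth marking

for step in range(200):
    for ant in range(10):
        # Ants choose paths in proportion to pheromone, then deposit on success.
        p0 = pheromone[0] / sum(pheromone)
        path = 0 if random.random() < p0 else 1
        if random.random() < quality[path]:
            pheromone[path] += DEPOSIT
    # Evaporation: unreinforced trails fade away.
    pheromone = [(1 - EVAPORATION) * p for p in pheromone]

# The better path accumulates pheromone; the less useful one fades.
```

The positive feedback (more pheromone → more ants → more deposits) concentrates the trail on the better path, which is the same mechanism the MPACA exploits per feature to form clusters.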

Relevance: 10.00%

Abstract:

Besides their well-described use as delivery systems for water-soluble drugs, liposomes have the ability to act as a solubilizing agent for drugs with low aqueous solubility. However, a key limitation in exploiting liposome technology is the availability of scalable, low-cost production methods for the preparation of liposomes. Here we describe a new method, using microfluidics, to prepare liposomal solubilising systems which can incorporate low-solubility drugs (in this case propofol). The setup, based on a chaotic advection micromixer, showed high drug loading (41 mol%) of propofol as well as the ability to manufacture vesicles at prescribed sizes (between 50 and 450 nm) in a high-throughput setting. Our results demonstrate the ability to merge liposome manufacturing and drug encapsulation in a single process step, leading to an overall reduced process time. These studies emphasise the flexibility and ease of applying lab-on-a-chip microfluidics for the solubilisation of poorly water-soluble drugs.

Relevance: 10.00%

Abstract:

In linear communication channels, spectral components (modes) defined by the Fourier transform of the signal propagate without interactions with each other. In certain nonlinear channels, such as the one modelled by the classical nonlinear Schrödinger equation, there are nonlinear modes (the nonlinear signal spectrum) that also propagate without interacting with each other and without corresponding nonlinear cross talk, effectively, in a linear manner. Here, we describe in a constructive way how to introduce such nonlinear modes for a given input signal. We investigate the performance of the nonlinear inverse synthesis (NIS) method, in which the information is encoded directly onto the continuous part of the nonlinear signal spectrum. This transmission technique, combined with the appropriate distributed Raman amplification, can provide an effective eigenvalue division multiplexing with high spectral efficiency, thanks to highly suppressed channel cross talk. The proposed NIS approach can be integrated with any modulation format. Here, we demonstrate numerically the feasibility of merging the NIS technique in a burst mode with high spectral efficiency methods, such as orthogonal frequency division multiplexing and Nyquist pulse shaping with advanced modulation formats (e.g., QPSK, 16QAM, and 64QAM), showing a performance improvement of up to 4.5 dB, which is comparable to results achievable with multi-step per-span digital back propagation.
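For reference, the master model invoked here, in one common normalized (focusing, lossless) form, is the nonlinear Schrödinger equation; note that normalization conventions vary across the fibre-optics literature:

```latex
i\,\frac{\partial q}{\partial z} + \frac{1}{2}\,\frac{\partial^2 q}{\partial t^2} + |q|^2 q = 0
```

where $q(z,t)$ is the complex signal envelope, $z$ the propagation distance and $t$ the retarded time. The nonlinear Fourier transform of $q$ decomposes it into a discrete (solitonic) spectrum and a continuous spectrum; the NIS method encodes the data onto the latter, whose evolution along $z$ is trivially linear.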

Relevance: 10.00%

Abstract:

Two classes of software that are notoriously difficult to develop on their own are rapidly merging into one. This will affect every key service that we rely upon in modern society, yet a successful merge is unlikely to be achievable using software development techniques specific to either class. This paper explains the growing demand for software capable of both self-adaptation and high integrity, and advocates the use of a collection of "@runtime" techniques for its development, operation and management. We summarise early research into the development of such techniques, and discuss the remaining work required to overcome the great challenge of self-adaptive high-integrity software. © 2011 ACM.

Relevance: 10.00%

Abstract:

This thesis studies survival analysis techniques dealing with censoring to produce predictive tools that predict the risk of endovascular aortic aneurysm repair (EVAR) re-intervention. Censoring indicates that some patients do not complete follow-up, so their outcome class is unknown. Methods dealing with censoring have drawbacks and cannot handle the high censoring of the two EVAR datasets collected. Therefore, this thesis presents a new solution to high censoring by modifying an approach that was incapable of differentiating between risk groups of aortic complications. Feature selection (FS) becomes complicated with censoring. Most survival FS methods depend on Cox's model; however, machine learning classifiers (MLCs) are preferred. Few methods have adopted MLCs to perform survival FS, and they cannot be used with high censoring. This thesis proposes two FS methods which use MLCs to evaluate features. The two FS methods use the new solution to deal with censoring. They combine factor analysis with greedy stepwise FS search, which allows eliminated features to re-enter the FS process. The first FS method searches for the best neural network configuration and subset of features. The second approach combines support vector machines, neural networks, and K-nearest-neighbour classifiers using simple and weighted majority voting to construct a multiple classifier system (MCS) for improving the performance of individual classifiers. It presents a new hybrid FS process by using the MCS as a wrapper method and merging it with the iterated feature ranking filter method to further reduce the features. The proposed techniques outperformed FS methods based on Cox's model, such as the Akaike and Bayesian information criteria and the least absolute shrinkage and selection operator, in the log-rank test's p-values, sensitivity, and concordance. This demonstrates that the proposed techniques are more powerful in correctly predicting the risk of re-intervention. 
Consequently, they enable doctors to set an appropriate future observation plan for each patient.
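The simple and weighted majority voting used to build the MCS can be sketched generically; the votes and weights below are hypothetical, standing in for the thesis's SVM, neural network and k-NN outputs:

```python
import numpy as np

def weighted_majority_vote(predictions, weights):
    """Combine binary class votes from several classifiers.
    predictions: (n_classifiers, n_samples) array of 0/1 votes;
    weights: per-classifier reliability, e.g. validation accuracy.
    With equal weights this reduces to simple majority voting."""
    predictions = np.asarray(predictions, dtype=float)
    weights = np.asarray(weights, dtype=float)[:, None]
    support = (weights * predictions).sum(axis=0) / weights.sum()
    return (support >= 0.5).astype(int)

# Hypothetical votes of three classifiers on 4 patients
# (1 = high re-intervention risk), weighted by validation accuracy.
votes = [[1, 0, 1, 0],
         [1, 1, 0, 0],
         [0, 1, 1, 1]]
acc = [0.8, 0.7, 0.6]
print(weighted_majority_vote(votes, acc))  # → [1 1 1 0]
```

Used as a wrapper, such an ensemble scores candidate feature subsets by its combined predictive performance rather than any single classifier's.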

Relevance: 10.00%

Abstract:

Purpose - This paper aims to provide evidence to suggest that private social and environmental reporting (i.e. one-on-one meetings between institutional investors and investees on social and environmental issues) is beginning to merge with private financial reporting and that, as a result, integrated private reporting is emerging. Design/methodology/approach - 19 FTSE100 companies and 20 UK institutional investors were interviewed to discover trends in private integrated reporting and to gauge whether private reporting is genuinely becoming integrated. The emergence of integrated private reporting was interpreted through the lens of institutional logics and framed as a merging of two hitherto separate and possibly rival institutional logics. Findings - It was found that specialist socially responsible investment managers are starting to attend private financial reporting meetings, while mainstream fund managers are starting to attend private meetings on environmental, social and governance (ESG) issues. Further, senior company directors are becoming increasingly conversant with ESG issues. Research limitations/implications - The findings were interpreted as two possible scenarios: either a genuine hybridisation is occurring in UK institutional investment, such that integrated private reporting is emerging, or the financial logic is absorbing and effectively neutralising the responsible investment logic. Practical implications - These findings provide evidence of emergent integrated private reporting, which is useful to both the corporate and institutional investment communities as they plan their engagement meetings. Originality/value - No study has hitherto examined private social and environmental reporting through interview research from the perspective of emergent integrated private reporting. This is the first paper to discuss integrated reporting in the private reporting context.