37 results for ElGamal, CZK, Multiple discrete logarithm assumption, Extended linear algebra


Relevance:

30.00%

Publisher:

Abstract:

In this work, the angular distributions for elastic and inelastic scattering of fast neutrons in fusion reactor materials have been studied. Lithium and lead are likely to be common components of fusion reactor wall configuration designs. The measurements were performed using an associated-particle time-of-flight technique. The 14 MeV and 14.44 MeV neutrons were produced by the T(d,n)4He reaction, with deuterons being accelerated in a 150 keV SAMES type J accelerator at Aston and in the 3 MeV DYNAMITRON at the Joint Radiation Centre, Birmingham, respectively. The associated alpha-particles and fast neutrons were detected by means of a plastic scintillator mounted on a fast focused photomultiplier tube. The samples used were extended flat plates of thicknesses up to 0.9 mean free paths for lithium and 1.562 mean free paths for lead. The differential elastic scattering cross-sections were measured for 14 MeV neutrons for various thicknesses of lithium and lead in the angular range from zero to 90°. In addition, the angular distributions of elastically scattered 14.44 MeV neutrons from lithium samples were studied in the same angular range. Inelastic scattering to the 4.63 MeV state in 7Li and to the 2.6 MeV and 4.1 MeV states in 208Pb has been measured. The results are compared to ENDF/B-IV data files and to previous measurements. For the lead samples, the differential neutron scattering cross-sections for discrete 3 MeV ranges and the angular distributions were measured. The increase in effective cross-section due to multiple scattering effects as the sample thickness increased was found to be predicted by the empirical relation ....... A good fit to the experimental data was obtained using the universal constant ............ The differential elastic scattering cross-section data for thin samples of lithium and lead were analysed in terms of optical model calculations using the computer code RAROMP. Parameter search procedures produced good fits to the cross-sections. For the case of thick samples of lithium and lead, the measured angular distributions of the scattered neutrons were compared to the predictions of the continuous slowing down model.

Relevance:

30.00%

Publisher:

Abstract:

The main theme of research of this project concerns the study of neural networks to control uncertain and non-linear control systems. This involves the control of continuous-time, discrete-time, hybrid and stochastic systems with input, state or output constraints while ensuring good performance. A great part of this project is devoted to opening the frontiers between several mathematical and engineering approaches in order to tackle complex but very common non-linear control problems. The objectives are: 1. To design and develop procedures for neural-network-enhanced self-tuning adaptive non-linear control systems; 2. To design, as a general procedure, a neural network generalised minimum variance self-tuning controller for non-linear dynamic plants (integration of neural network mapping with generalised minimum variance self-tuning controller strategies); 3. To develop a software package to evaluate control system performance using Matlab, Simulink and the Neural Network toolbox. An adaptive control algorithm utilising a recurrent network as a model of a partially unknown non-linear plant with unmeasurable state is proposed. It appears that structured recurrent neural networks can provide conveniently parameterised dynamic models for many non-linear systems for use in adaptive control. Properties of static neural networks which enabled successful design of stable adaptive control in the state feedback case are also identified. A survey of the existing results is presented which puts them in a systematic framework, showing their relation to classical self-tuning adaptive control and the application of neural control to SISO/MIMO systems. Simulation results demonstrate that the self-tuning design methods may be practically applicable to a reasonably large class of unknown linear and non-linear dynamic control systems.
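
The adaptive identification idea can be illustrated with a minimal sketch: a small NARX-style network with one hidden layer is trained online, one sample at a time, to predict the next output of a non-linear plant. This is only an illustration under assumed toy settings (the plant, network size, excitation signal and learning rate below are invented), not the generalised minimum variance neural controller developed in the project.

    # Online identification of an assumed toy non-linear SISO plant with a
    # small NARX-style neural network; illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)

    def plant(y, u):
        # Hypothetical "unknown" plant, used only to generate data.
        return 0.6 * np.sin(y) + 0.3 * y / (1.0 + y**2) + u

    # One-hidden-layer network mapping [y(t), u(t)] -> y(t+1).
    W1 = 0.1 * rng.standard_normal((8, 2)); b1 = np.zeros(8)
    W2 = 0.1 * rng.standard_normal(8);      b2 = 0.0
    lr = 0.02

    y = 0.0
    for t in range(5000):
        u = np.sin(0.05 * t) + 0.1 * rng.standard_normal()  # excitation input
        x = np.array([y, u])
        h = np.tanh(W1 @ x + b1)
        y_hat = W2 @ h + b2                                  # model prediction
        y_next = plant(y, u)
        e = y_hat - y_next                                   # prediction error
        # Adaptive gradient-descent update, one sample at a time.
        dh = e * W2 * (1.0 - h**2)
        W1 -= lr * np.outer(dh, x); b1 -= lr * dh
        W2 -= lr * e * h;           b2 -= lr * e
        y = y_next

    print("final one-step prediction error:", abs(e))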

Relevance:

30.00%

Publisher:

Abstract:

This thesis applies a hierarchical latent trait model system to a large quantity of data. The motivation for it was the lack of viable approaches for analysing High Throughput Screening datasets, which may include thousands of high-dimensional data points. High Throughput Screening (HTS) is an important tool in the pharmaceutical industry for discovering leads which can be optimised and further developed into candidate drugs. Since the development of new robotic technologies, the ability to test the activities of compounds has considerably increased in recent years. Traditional methods, looking at tables and graphical plots for analysing relationships between measured activities and the structure of compounds, have not been feasible when facing a large HTS dataset. Instead, data visualisation provides a method for analysing such large datasets, especially with high dimensions. So far, a few visualisation techniques for drug design have been developed, but most of them can cope with only several properties of compounds at a time. We believe that a latent trait model (LTM) with a non-linear mapping from the latent space to the data space is a preferred choice for visualising a complex high-dimensional data set. As a type of latent variable model, the latent trait model can deal with either continuous data or discrete data, which makes it particularly useful in this domain. In addition, with the aid of differential geometry, we can visualise the distribution of data from magnification factor and curvature plots. Rather than obtaining the useful information just from a single plot, a hierarchical LTM arranges a set of LTMs and their corresponding plots in a tree structure. We model the whole data set with an LTM at the top level, which is broken down into clusters at deeper levels of the hierarchy. In this manner, refined visualisation plots can be displayed in deeper levels and sub-clusters may be found. The hierarchy of LTMs is trained using the expectation-maximisation (EM) algorithm to maximise its likelihood with respect to the data sample. Training proceeds interactively in a recursive (top-down) fashion: the user subjectively identifies interesting regions on the visualisation plot that they would like to model in greater detail. At each stage of hierarchical LTM construction, the EM algorithm alternates between the E-step and the M-step. Another problem that can occur when visualising a large data set is that there may be significant overlaps of data clusters, which makes it very difficult for the user to judge where the centres of regions of interest should be put. We address this problem by employing the minimum message length technique, which can help the user to decide the optimal structure of the model. In this thesis we also demonstrate the applicability of the hierarchy of latent trait models in the field of document data mining.
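
The training procedure hinges on the E-step/M-step alternation mentioned above. A minimal sketch of that alternation is given below for a two-component one-dimensional Gaussian mixture; this is a stand-in chosen for brevity, not the latent trait model itself, and the data and starting values are invented.

    # EM for a 1-D two-component Gaussian mixture (illustrative stand-in).
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 300)])

    weight = np.array([0.5, 0.5])
    mu = np.array([-1.0, 1.0])
    var = np.array([1.0, 1.0])
    for _ in range(100):
        # E-step: posterior responsibility of each component for each point.
        dens = np.exp(-(x[:, None] - mu)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = weight * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means and variances.
        nk = resp.sum(axis=0)
        weight = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu)**2).sum(axis=0) / nk

    print("weights", weight, "means", mu, "variances", var)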

Relevance:

30.00%

Publisher:

Abstract:

A distinct feature of several recent models of contrast masking is that detecting mechanisms are divisively inhibited by a broadly tuned ‘gain pool’ of narrow-band spatial pattern mechanisms. The contrast gain control provided by this ‘cross-channel’ architecture achieves contrast normalisation of early pattern mechanisms, which is important for keeping them within the non-saturating part of their biological operating characteristic. These models superseded earlier ‘within-channel’ models, which had supposed that masking arose from direct stimulation of the detecting mechanism by the mask. To reveal the extent of masking, I measured the levels produced with large ranges of pattern spatial relationships that have not been explored before. Substantial interactions between channels tuned to different orientations and spatial frequencies were found. Differences in the masking levels produced with single and multiple component mask patterns provided insights into the summation rules within the gain pool. A widely used cross-channel masking model was tested on these data and was found to perform poorly. The model was developed further, and a version in which linear summation was allowed between all components within the gain pool, with the exception of the self-suppressing route, typically provided the best account of the data. Subsequently, an adaptation paradigm was used to probe the processes underlying pooled responses in masking. This delivered less insight into the pooling than the other studies, and areas were identified that require investigation for a new unifying model of masking and adaptation. In further experiments, levels of cross-channel masking were found to be greatly influenced by the spatio-temporal tuning of the channels involved. Old masking experiments and ideas relying on within-channel models were re-evaluated in terms of contemporary cross-channel models (e.g. estimations of channel bandwidths from orientation masking functions), and this led to different conclusions from those originally arrived at. The investigation of effects with spatio-temporally superimposed patterns is focussed upon throughout this work, though it is shown how these enquiries might be extended to investigate effects across spatial and temporal position.
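
The cross-channel architecture referred to above can be summarised by a divisive gain-control response in which a detecting mechanism's excitation is divided by a weighted sum of mask components. The sketch below is a generic model of this kind with invented exponents, weights and constants; it is not the fitted model from this work.

    # Generic cross-channel divisive gain-control response (illustrative).
    import numpy as np

    def response(c_test, mask_contrasts, mask_weights, p=2.4, q=2.0, Z=0.01):
        # Excitation divided by a weighted 'gain pool' of suppressive inputs.
        gain_pool = Z + sum(w * c**q for w, c in zip(mask_weights, mask_contrasts))
        return c_test**p / gain_pool

    def detection_threshold(mask_contrasts, mask_weights, k=0.02):
        # Smallest test contrast whose response reaches the criterion k
        # (simple grid search over contrast).
        for c_test in np.logspace(-4, 0, 2000):
            if response(c_test, mask_contrasts, mask_weights) >= k:
                return c_test
        return np.nan

    # Threshold elevation produced by a single cross-channel mask component.
    for mask_c in [0.0, 0.05, 0.1, 0.2, 0.4]:
        print(mask_c, detection_threshold([mask_c], [1.0]))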

Relevance:

30.00%

Publisher:

Abstract:

In previous statnotes, the application of correlation and regression methods to the analysis of two variables (X, Y) was described. These methods can be used to determine whether there is a linear relationship between the two variables and whether that relationship is positive or negative, to test the significance of the linear relationship, and to obtain an equation relating Y to X. This Statnote extends the methods of linear correlation and regression to situations where there are two or more X variables, i.e., 'multiple linear regression'.
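
As a concrete illustration of fitting Y from two or more X variables, the sketch below estimates the intercept, the partial regression coefficients and R² by ordinary least squares; the data are simulated purely for illustration.

    # Multiple linear regression of Y on X1 and X2 by ordinary least squares.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 50
    X1 = rng.uniform(0, 10, n)
    X2 = rng.uniform(0, 5, n)
    Y = 1.5 + 2.0 * X1 - 0.7 * X2 + rng.normal(0, 1, n)

    # Design matrix with an intercept column; coefficients by least squares.
    A = np.column_stack([np.ones(n), X1, X2])
    coef, _, _, _ = np.linalg.lstsq(A, Y, rcond=None)
    Y_hat = A @ coef
    r_squared = 1 - np.sum((Y - Y_hat)**2) / np.sum((Y - Y.mean())**2)

    print("intercept and partial slopes:", coef)
    print("R^2:", r_squared)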

Relevance:

30.00%

Publisher:

Abstract:

The traditional method of classifying neurodegenerative diseases is based on the original clinico-pathological concept supported by 'consensus' criteria and data from molecular pathological studies. This review discusses, first, current problems in classification resulting from the coexistence of different classificatory schemes, the presence of disease heterogeneity and multiple pathologies, the use of 'signature' brain lesions in diagnosis, and the existence of pathological processes common to different diseases. Second, three models of neurodegenerative disease are proposed: (1) that distinct diseases exist ('discrete' model), (2) that relatively distinct diseases exist but exhibit overlapping features ('overlap' model), and (3) that distinct diseases do not exist and neurodegenerative disease is a 'continuum' in which there is continuous variation in clinical/pathological features from one case to another ('continuum' model). Third, to distinguish between the models, the distribution of the most important molecular 'signature' lesions across the different diseases is reviewed. Such lesions often have poor 'fidelity', i.e., they are not unique to individual disorders but are distributed across many diseases, consistent with the overlap or continuum models. Fourth, the question of whether the current classificatory system should be rejected is considered, and three alternatives are proposed, viz., objective classification, classification for convenience (a 'dissection'), or analysis as a continuum.

Relevance:

30.00%

Publisher:

Abstract:

Data envelopment analysis (DEA), as introduced by Charnes, Cooper, and Rhodes (1978), is a linear programming technique that has been widely used to evaluate the relative efficiency of a set of homogeneous decision making units (DMUs). In many real applications, the input-output variables cannot be precisely measured. This is particularly important in assessing the efficiency of DMUs using DEA, since the efficiency scores of inefficient DMUs are very sensitive to possible data errors. Hence, several approaches have been proposed to deal with imprecise data. Perhaps the most popular fuzzy DEA model is based on the α-cut. One drawback of the α-cut approach is that it cannot include all information about uncertainty. This paper aims to introduce an alternative linear programming model that can include some uncertainty information from the intervals within the α-cut approach. We introduce the concept of a "local α-level" to develop a multi-objective linear programming model to measure the efficiency of DMUs under uncertainty. An example is given to illustrate the use of this method.
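
For reference, the crisp (non-fuzzy) input-oriented CCR model that fuzzy DEA extends can be solved DMU-by-DMU as a small linear programme. The sketch below uses invented data and shows only that baseline, not the local α-level formulation introduced in the paper.

    # Crisp input-oriented CCR DEA, multiplier form, solved per DMU.
    import numpy as np
    from scipy.optimize import linprog

    # Rows = DMUs; X holds inputs, Y holds outputs (made-up numbers).
    X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])
    Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])
    n, m, s = X.shape[0], X.shape[1], Y.shape[1]

    for o in range(n):
        # Decision variables z = [u (output weights), v (input weights)].
        c = np.concatenate([-Y[o], np.zeros(m)])           # maximise u'y_o
        A_eq = np.concatenate([np.zeros(s), X[o]])[None]   # v'x_o = 1
        A_ub = np.hstack([Y, -X])                          # u'y_j - v'x_j <= 0
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (s + m))
        print(f"DMU {o + 1}: efficiency = {-res.fun:.3f}")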

Relevance:

30.00%

Publisher:

Abstract:

An inverse problem is considered where the structure of multiple sound-soft planar obstacles is to be determined given the direction of the incoming acoustic field and knowledge of the corresponding total field on a curve located outside the obstacles. A local uniqueness result is given for this inverse problem suggesting that the reconstruction can be achieved by a single incident wave. A numerical procedure based on the concept of the topological derivative of an associated cost functional is used to produce images of the obstacles. No a priori assumption about the number of obstacles present is needed. Numerical results are included showing that accurate reconstructions can be obtained and that the proposed method is capable of finding both the shapes and the number of obstacles with one or a few incident waves.

Relevance:

30.00%

Publisher:

Abstract:

When a query is passed to multiple search engines, each search engine returns a ranked list of documents. Researchers have demonstrated that combining results, in the form of a "metasearch engine", produces a significant improvement in coverage and search effectiveness. This paper proposes a linear programming mathematical model for optimizing the ranked list result of a given group of Web search engines for an issued query. An application with a numerical illustration shows the advantages of the proposed method. © 2011 Elsevier Ltd. All rights reserved.
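
For contrast with the linear programming model proposed in the paper, a much simpler way of combining ranked lists is a weighted Borda-style fusion, sketched below with invented engine names, weights and result lists.

    # Weighted Borda-count fusion of several engines' ranked lists.
    from collections import defaultdict

    def fuse(ranked_lists, weights, depth=10):
        # A document ranked r by an engine earns weight * (depth - r) points.
        scores = defaultdict(float)
        for engine, docs in ranked_lists.items():
            for rank, doc in enumerate(docs[:depth]):
                scores[doc] += weights[engine] * (depth - rank)
        return sorted(scores, key=scores.get, reverse=True)

    ranked_lists = {
        "engineA": ["d1", "d3", "d2", "d5"],
        "engineB": ["d2", "d1", "d4"],
        "engineC": ["d3", "d2", "d1", "d6"],
    }
    weights = {"engineA": 1.0, "engineB": 0.8, "engineC": 0.6}
    print(fuse(ranked_lists, weights))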

Relevance:

30.00%

Publisher:

Abstract:

The Multiple Pheromone Ant Clustering Algorithm (MPACA) models the collective behaviour of ants to find clusters in data and to assign objects to the most appropriate class. It is an ant colony optimisation approach that uses pheromones to mark paths linking objects that are similar and potentially members of the same cluster or class. Its novelty is in the way it uses separate pheromones for each descriptive attribute of the object rather than a single pheromone representing the whole object. Ants that encounter other ants frequently enough can combine the attribute values they are detecting, which enables the MPACA to learn influential variable interactions. This paper applies the model to real-world data from two domains. One is logistics, focusing on resource allocation rather than the more traditional vehicle-routing problem. The other is mental-health risk assessment. The task for the MPACA in each domain was to predict class membership where the classes for the logistics domain were the levels of demand on haulage company resources and the mental-health classes were levels of suicide risk. Results on these noisy real-world data were promising, demonstrating the ability of the MPACA to find patterns in the data with accuracy comparable to more traditional linear regression models. © 2013 Polish Information Processing Society.

Relevance:

30.00%

Publisher:

Abstract:

Purpose – The purpose of this research is to develop a holistic approach that maximizes the customer service level while minimizing the logistics cost by using an integrated multiple criteria decision making (MCDM) method for the contemporary transshipment problem. Unlike the prevalent optimization techniques, this paper proposes an integrated approach which considers both quantitative and qualitative factors in order to maximize the benefits of service deliverers and customers under uncertain environments.

Design/methodology/approach – This paper proposes a fuzzy-based integer linear programming model, based on the existing literature and validated with an example case. The model integrates the developed fuzzy modification of the analytic hierarchy process (FAHP) and solves the multi-criteria transshipment problem.

Findings – This paper provides several novel insights about how to transform a company from a cost-based model to a service-dominated model by using an integrated MCDM method. It suggests that the contemporary customer-driven supply chain retains and increases its competitiveness from two aspects: optimizing the cost and providing the best service simultaneously.

Research limitations/implications – This research used one illustrative industry case to exemplify the developed method. Considering the generalization of the research findings and the complexity of the transshipment service network, more cases across multiple industries are necessary to further enhance the validity of the research output.

Practical implications – The paper includes implications for the evaluation and selection of transshipment service suppliers, the construction of an optimal transshipment network, as well as managing the network.

Originality/value – The major advantages of this generic approach are that both quantitative and qualitative factors under a fuzzy environment are considered simultaneously and that the viewpoints of both service deliverers and customers are focused on. Therefore, it is believed to be useful and applicable for transshipment service network design.

Relevance:

30.00%

Publisher:

Abstract:

The linear stability of flow past two circular cylinders in a side-by-side arrangement is investigated theoretically, numerically and experimentally under the assumption of a two-dimensional flow field, in order to explore the origin of in-phase and antiphase oscillatory flows. Steady symmetric flow is realized at a small Reynolds number, but becomes unstable above a critical Reynolds number though the solution corresponding to the flow still satisfies the basic equations irrespective of the magnitude of the Reynolds number. We obtained the solution numerically and investigated its linear stability. We found that there are two kinds of unstable modes, i.e., antisymmetric and symmetric modes, which lead to in-phase and antiphase oscillatory flows, respectively. We determined the critical Reynolds numbers for the two modes and evaluated the critical distance at which the most unstable disturbance changes from the antisymmetric to the symmetric mode, or vice versa. ©2005 The Physical Society of Japan.

Relevance:

30.00%

Publisher:

Abstract:

With its low-power and flexible networking capabilities, IEEE 802.15.4 has been widely regarded as a strong candidate communication technology for wireless sensor networks (WSNs). It is expected that, with an increasing number of deployments of 802.15.4-based WSNs, multiple WSNs could coexist with full or partial overlap in residential or enterprise areas. As WSNs are usually deployed without coordination, communication can suffer significant degradation under the 802.15.4 channel access scheme, which has a large impact on system performance. In this thesis we are motivated to investigate the effectiveness of 802.15.4 networks supporting WSN applications in various environments, especially when hidden terminals are present due to the uncoordinated coexistence problem. Both analytical models and system-level simulators are developed to analyse the performance of the random access scheme specified by the IEEE 802.15.4 medium access control (MAC) standard for several network scenarios. The first part of the thesis investigates the effectiveness of a single 802.15.4 network supporting WSN applications. A Markov chain based analytic model is applied to model the MAC behaviour of the IEEE 802.15.4 standard, and a discrete event simulator is also developed to analyse the performance and verify the proposed analytical model. It is observed that 802.15.4 networks can adequately support most WSN applications with their various functionalities. After the investigation of a single network, the uncoordinated coexistence problem of multiple 802.15.4 networks deployed with fully or partially overlapping communication ranges is investigated in the next part of the thesis. Both non-sleep and sleep modes are investigated under different channel conditions by analytic and simulation methods to obtain a comprehensive performance evaluation. It is found that the uncoordinated coexistence problem can significantly degrade the performance of 802.15.4 networks, which is then unlikely to satisfy the QoS requirements of many WSN applications. The proposed analytic model is validated by simulations and could be used to obtain optimal parameter settings before WSN deployments in order to eliminate the interference risks.

Relevance:

30.00%

Publisher:

Abstract:

We have investigated how optimal coding for neural systems changes with the time available for decoding. Optimization was in terms of maximizing information transmission. We have estimated the parameters for Poisson neurons that optimize Shannon transinformation under the assumption of rate coding. We observed a hierarchy of phase transitions from binary coding, for small decoding times, toward discrete (M-ary) coding with two, three and more quantization levels for larger decoding times. We postulate that the presence of subpopulations with specific neural characteristics could be a signature of an optimal population coding scheme, and we use the mammalian auditory system as an example.
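
The quantity being maximized can be made concrete with a small sketch: the Shannon transinformation between an equiprobable set of M firing-rate levels and the Poisson spike count observed in a decoding window T. The rates, window lengths and level counts below are invented; the paper's optimization over these parameters is not reproduced.

    # Transinformation of an M-level Poisson rate code over a window T.
    import numpy as np
    from scipy.stats import poisson

    def transinformation(rates, T, n_max=200):
        # I(level; spike count) in bits, assuming equiprobable rate levels.
        eps = 1e-300
        counts = np.arange(n_max)
        p_n_given_m = np.array([poisson.pmf(counts, r * T) for r in rates])
        p_n = p_n_given_m.mean(axis=0)
        terms = p_n_given_m * np.log2(np.maximum(p_n_given_m, eps) /
                                      np.maximum(p_n, eps))
        return terms.sum() / len(rates)

    for T in [0.005, 0.02, 0.1]:                          # decoding time (s)
        two = transinformation([5.0, 80.0], T)            # binary code
        three = transinformation([5.0, 40.0, 80.0], T)    # ternary code
        print(f"T={T:.3f}s  2 levels: {two:.3f} bits  3 levels: {three:.3f} bits")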

Relevance:

30.00%

Publisher:

Abstract:

In the contemporary customer-driven supply chain, maximization of customer service plays an equally important role as minimization of costs for a company to retain and increase its competitiveness. This article develops a multiple-criteria optimization approach, combining the analytic hierarchy process (AHP) and an integer linear programming (ILP) model, to aid the design of an optimal logistics distribution network. The proposed approach outperforms traditional cost-based optimization techniques because it considers both quantitative and qualitative factors and also aims at maximizing the benefits of the deliverer and the customers. In the approach, the AHP is used to determine the relative importance weightings, or priorities, of alternative warehouses with respect to some critical customer-oriented criteria. The results of the AHP prioritization are utilized as the input of the ILP model, the objective of which is to select the best warehouses at the lowest possible cost. In this article, two commercial packages are used: Expert Choice and LINDO.
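
The AHP stage of such an approach can be illustrated with a short sketch that derives priority weights for candidate warehouses from a pairwise comparison matrix via its principal eigenvector and checks consistency. The 3x3 comparison matrix below is invented, and the subsequent ILP warehouse-selection model is not reproduced here.

    # AHP priority weights from a pairwise comparison matrix (Saaty scale).
    import numpy as np

    # A[i, j] = how strongly warehouse i is preferred to warehouse j.
    A = np.array([[1.0,   3.0,   5.0],
                  [1/3.0, 1.0,   2.0],
                  [1/5.0, 1/2.0, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                  # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                 # normalised priority weights

    # Consistency ratio (CR < 0.1 is conventionally acceptable); RI for n = 3.
    n, RI = A.shape[0], 0.58
    CI = (eigvals[k].real - n) / (n - 1)
    print("weights:", np.round(w, 3), " CR:", round(CI / RI, 3))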