830 results for Multiport Network Model
Abstract:
Neural networks have emerged as the topic of the day. The spectrum of their applications is wide, ranging from ECG noise filtering to seismic data analysis and from elementary particle detection to electronic music composition. The focal point of the proposed work is the application of a massively parallel connectionist network model to the detection of a sonar target. This task is segmented into: (i) generation of training patterns from sea noise containing the radiated noise of a target, for teaching the network; (ii) selection of a suitable network topology and learning algorithm; and (iii) training of the network and its subsequent testing, in which the network detects, in unknown patterns applied to it, the presence of the features it has already learned. A three-layer perceptron using backpropagation learning is initially subjected to recursive training with example patterns (derived from sea ambient noise with and without the radiated noise of a target). On every presentation, the error in the output of the network is propagated back, and the weights and the bias associated with each neuron in the network are modified in proportion to this error measure. During this iterative process, the network converges and extracts the target features, which become encoded in its generalized weights and biases. In every unknown pattern that the converged network subsequently confronts, it searches for the features already learned and outputs an indication of their presence or absence. This capability for target detection is exhibited by the response of the network to various test patterns presented to it. Three network topologies are tried with two variants of backpropagation learning, and the performance of each combination is subsequently graded.
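The abstract describes the standard backpropagation cycle: a forward pass, the output error propagated back, and weights and biases adjusted in proportion to that error. A minimal sketch of such a loop for a three-layer perceptron follows; the layer sizes, learning rate and synthetic patterns are illustrative assumptions, not the paper's configuration.

    # Minimal three-layer perceptron trained with backpropagation.
    # Sizes, learning rate and the synthetic data are placeholders.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    n_in, n_hid, n_out = 16, 8, 1          # assumed layer sizes
    W1 = rng.normal(scale=0.1, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
    W2 = rng.normal(scale=0.1, size=(n_hid, n_out)); b2 = np.zeros(n_out)

    # Synthetic stand-ins for "sea noise with / without a target" patterns.
    X = rng.normal(size=(200, n_in))
    y = (X[:, :4].sum(axis=1) > 0).astype(float).reshape(-1, 1)

    lr = 0.5
    for epoch in range(1000):
        h = sigmoid(X @ W1 + b1)           # forward pass, hidden layer
        out = sigmoid(h @ W2 + b2)         # forward pass, output layer
        # Propagate the output error back and adjust weights and biases
        # in proportion to it, as the abstract describes.
        d_out = (out - y) * out * (1 - out)
        d_hid = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out / len(X);  b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_hid / len(X);  b1 -= lr * d_hid.mean(axis=0)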
Abstract:
Diagnosis of Hridroga (cardiac disorders) in Ayurveda requires the combination of many different types of data, including personal details, patient symptoms, patient histories, general examination results, Ashtavidha pareeksha results, etc. Computer-assisted decision support systems must be able to combine these data types into a seamless system. Intelligent agents, an approach that has been used chiefly in business applications, are applied here to medical diagnosis. This paper presents a multi-agent system named "Distributed Ayurvedic Diagnosis and Therapy System for Hridroga using Agents" (DADTSHUA) and describes the architecture of the DADTSHUA model. The system uses mobile agents and an ontology for passing data through the network, so transport delay can be minimized. It will be very helpful to beginning physicians in eliminating ambiguity in diagnosis and therapy. The system is implemented using the Java Agent DEvelopment framework (JADE), a Java-compliant mobile agent platform from TILab.
Abstract:
In our study we use a kernel-based classification technique, Support Vector Machine Regression, for predicting the melting point of drug-like compounds in terms of topological descriptors, topological charge indices, connectivity indices and 2D autocorrelations. The machine learning model was designed, trained and tested using a dataset of 100 compounds, and it was found that an SVMReg model with an RBF kernel could predict the melting point with a mean absolute error of 15.5854 and a root mean squared error of 19.7576.
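As a hedged illustration of the pipeline the abstract outlines (support vector regression with an RBF kernel, scored by MAE and RMSE), the sketch below uses scikit-learn; the descriptor matrix, target values and hyperparameters are invented stand-ins for the study's 100-compound dataset.

    # SVR with an RBF kernel, evaluated by MAE and RMSE.
    # Features and targets below are synthetic placeholders.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error, mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 12))                    # 12 stand-in descriptors
    y = 150 + 40 * X[:, 0] + rng.normal(scale=10, size=100)  # melting points

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = SVR(kernel="rbf", C=10.0, epsilon=1.0).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print("MAE :", mean_absolute_error(y_te, pred))
    print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)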
Abstract:
Genetic programming is known to provide good solutions for many problems, such as the evolution of network protocols and distributed algorithms. In such cases it is most likely a hardwired module of a design framework that assists the engineer in optimizing specific aspects of the system to be developed, providing its results in a fixed format through an internal interface. In this paper we show how the utility of genetic programming can be increased remarkably by isolating it as a component and integrating it into the model-driven software development process. Our genetic programming framework produces XMI-encoded UML models that can easily be loaded into widely available modeling tools, which in turn possess code generation as well as additional analysis and test capabilities. We use the evolution of a distributed election algorithm as an example to illustrate how genetic programming can be combined with model-driven development. This example clearly illustrates an advantage of our approach: the generation of source code in different programming languages.
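The paper's framework evolves XMI-encoded UML models, which is beyond a short sketch; as a generic illustration of the evolve-evaluate-select cycle genetic programming rests on, the toy below evolves arithmetic expression trees for a symbolic-regression task. Everything in it (representation, operators, parameters) is an assumption for illustration only.

    # Tiny tree-based genetic programming loop (toy symbolic regression).
    import operator
    import random

    OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

    def random_tree(depth=3):
        # Leaves are the variable "x" or a small constant; inner nodes are ops.
        if depth == 0 or random.random() < 0.3:
            return "x" if random.random() < 0.5 else random.randint(-2, 2)
        return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

    def evaluate(tree, x):
        if tree == "x":
            return x
        if isinstance(tree, int):
            return tree
        op, left, right = tree
        return OPS[op](evaluate(left, x), evaluate(right, x))

    def fitness(tree):
        # Squared error against the target x**2 + x on sample points.
        return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

    def mutate(tree):
        if not isinstance(tree, tuple) or random.random() < 0.2:
            return random_tree(2)          # replace subtree with a random one
        return (tree[0], mutate(tree[1]), mutate(tree[2]))

    population = [random_tree() for _ in range(100)]
    for generation in range(30):
        population.sort(key=fitness)       # best first
        survivors = population[:20]        # truncation selection
        population = survivors + [mutate(random.choice(survivors)) for _ in range(80)]
    best = min(population, key=fitness)
    print(best, fitness(best))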
Abstract:
Most logistics network design models assume exogenous customer demand that is independent of the service time or level. This paper examines the benefits of segmenting demand according to the lead-time sensitivity of customers. To capture lead-time sensitivity in the network design model, we use a facility grouping method to ensure that the different demand classes are satisfied on time. In addition, we perform a series of computational experiments to develop a set of managerial insights for the network design decision-making process.
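The abstract does not give the formulation, but a lead-time-constrained facility location model of the general kind it describes can be sketched as follows (all notation is assumed here, not taken from the paper): open facilities j at fixed cost f_j and assign each customer i's class-k demand d_ik only to facilities in the group G_k(i) able to serve i within the lead time that class k tolerates.

    \begin{align*}
    \min_{x,y}\quad & \sum_{j} f_j\, y_j \;+\; \sum_{i}\sum_{k}\sum_{j} c_{ij}\, d_{ik}\, x_{ikj} \\
    \text{s.t.}\quad & \sum_{j \in G_k(i)} x_{ikj} = 1 \qquad \forall i,k
        \quad \text{(class $k$ is served on time)} \\
    & x_{ikj} \le y_j \qquad \forall i,k,j, \qquad
      y_j \in \{0,1\}, \quad x_{ikj} \ge 0 .
    \end{align*}

The grouping G_k(i) stands in for the paper's facility grouping method: tighter lead-time classes see smaller groups of admissible facilities.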
Abstract:
This article studies the static pricing problem of a network service provider who has a fixed capacity and faces different types of customers (classes). Each type of customer can have its own capacity constraint, but it is assumed that all classes have the same resource requirement. The provider must decide a static price for each class. The customer types are characterized by their arrival process, with a price-dependent arrival rate, and by the random time they remain in the system. Many real-life situations fit this framework, for example an Internet provider or a call center, but the problem was originally conceived for a company that sells phone cards and needs to set the price per minute for each destination. Our goal is to characterize the optimal static prices that maximize the provider's revenue. We note that the model presented here, with some slight modifications and additional assumptions, can be used in cases where the objective is to maximize social welfare.
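As a hedged sketch of a problem of this general shape (notation assumed, not the paper's): with a price-dependent arrival rate \lambda_k(p_k) for class k, a mean sojourn time E[S_k], shared capacity C and optional per-class caps C_k, a per-unit-time static price p_k yields the revenue-rate problem

    \begin{align*}
    \max_{p_1,\dots,p_n}\quad & \sum_{k=1}^{n} p_k\, \lambda_k(p_k)\, \mathbb{E}[S_k] \\
    \text{s.t.}\quad & \sum_{k=1}^{n} \lambda_k(p_k)\, \mathbb{E}[S_k] \le C,
    \qquad \lambda_k(p_k)\, \mathbb{E}[S_k] \le C_k \quad \forall k .
    \end{align*}

Charging per unit of time in the system matches the phone-card example (a price per minute per destination); whether the paper treats capacity as a hard constraint or through blocking is not stated in the abstract.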
Abstract:
Our goal in this paper is to assess the reliability and validity of egocentered network data using multilevel analysis (Muthen, 1989; Hox, 1993) under the multitrait-multimethod approach. The confirmatory factor analysis model for multitrait-multimethod data (Werts & Linn, 1970; Andrews, 1984) is used for our analyses. In this study we reanalyse part of the data of another study (Kogovšek et al., 2002) done on a representative sample of the inhabitants of Ljubljana. The traits used in our article are the name interpreters. We consider egocentered network data as hierarchical; therefore a multilevel analysis is required. We use Muthen's partial maximum likelihood approach, called the pseudobalanced solution (Muthen, 1989, 1990, 1994), which produces estimates close to maximum likelihood for large ego sample sizes (Hox & Mass, 2001). Several analyses are done to compare this multilevel analysis with classic methods of analysis, such as those in Kogovšek et al. (2002), who analysed the data only at the group (ego) level, considering averages over all alters within each ego. We show that some of the results obtained by classic methods are biased and that multilevel analysis provides more detailed information that considerably enriches the interpretation of the reliability and validity of hierarchical data. Within-ego and between-ego reliabilities, validities and other related quality measures are defined, computed and interpreted.
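For reference, the confirmatory factor analysis model for multitrait-multimethod data cited above decomposes each observed measure into a trait factor, a method factor and error (the notation here is generic, not the authors'):

    y_{tm} = \lambda_{tm} T_t + \gamma_{tm} M_m + e_{tm},

so reliability and validity can be read off the estimated loadings and error variances; in the multilevel variant this decomposition is applied within egos (alter level) and between egos separately.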
Abstract:
This paper presents a study of connection availability in GMPLS over optical transport networks (OTN), taking into account different network topologies. Two basic path protection schemes are considered and compared with the no-protection case. The selected topologies are heterogeneous in geographic coverage, network diameter, link lengths and average node degree. Connection availability is also computed from the reliability data of physical components using a well-known network availability model. Results show several correspondences between suitable path protection schemes and network topology characteristics.
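The abstract does not spell out the availability model; a common textbook treatment, sketched below, takes a path as components in series and 1+1 path protection as two assumed-independent disjoint paths in parallel. The MTBF/MTTR figures and path lengths are made-up placeholders, not the paper's reliability data.

    # Series/parallel steady-state availability arithmetic (illustrative).
    def availability(mtbf_hours: float, mttr_hours: float) -> float:
        """Steady-state availability of one component."""
        return mtbf_hours / (mtbf_hours + mttr_hours)

    def path_availability(component_availabilities) -> float:
        """A path works only if every component on it works (series)."""
        a = 1.0
        for ai in component_availabilities:
            a *= ai
        return a

    # Unprotected connection: one working path of 4 components.
    working = path_availability([availability(5e5, 12)] * 4)

    # 1+1 protection: the connection fails only if both disjoint paths fail.
    backup = path_availability([availability(5e5, 12)] * 6)   # longer backup
    protected = 1 - (1 - working) * (1 - backup)
    print(f"unprotected: {working:.6f}  protected: {protected:.9f}")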
Abstract:
This file contains the results of the application of the model in (4.6) and (4.8).
Abstract:
Research proposal for the SB.TV case study: a new dimension to the brand model for Web 2.0 success.
Abstract:
Speaker(s): Prof. Steffen Staab. Organiser: Dr Tim Chown. Time: 23/05/2014, 10:30-11:30. Location: B53/4025.
Abstract: The Web is constructed based on our experiences in a multitude of modalities: text, networks, images and physical locations are some examples. Understanding the Web requires that we can model these modalities as they appear on the Web. In this talk I will show some examples of how we model text, hyperlink networks and physical-social systems in order to improve our understanding and use of the Web.
Abstract:
Abstract 1: Social networks such as Twitter are often used for disseminating and collecting information during natural disasters, and the potential for their use in disaster management has been acknowledged. However, a more nuanced understanding of the communications that take place on social networks is required to integrate this information more effectively into disaster management processes. The type and value of information shared should be assessed, determining the benefits and issues, with credibility and reliability as known concerns. Mapping tweets onto the modelled stages of a disaster can be a useful evaluation for determining the benefits and drawbacks of using data from social networks, such as Twitter, in disaster management. A thematic analysis of tweets' content, language and tone during the UK storms and floods of 2013/14 was conducted. Manual scripting was used to determine the official sequence of events and to classify the stages of the disaster into the phases of the Disaster Management Lifecycle, producing a timeline. Twenty-five topics discussed on Twitter emerged, and three key types of tweets, based on language and tone, were identified. The timeline represents the events of the disaster, according to the Met Office reports, classified into B. Faulkner's Disaster Management Lifecycle framework. Context is provided when observing the analysed tweets against the timeline, which illustrates a potential basis and benefit for mapping tweets into the Disaster Management Lifecycle phases. Comparing the number of tweets submitted in each month with the timeline suggests that users tweet more as an event heightens and persists; furthermore, users generally express greater emotion and urgency in their tweets. This paper concludes that thematic analysis of content on social networks, such as Twitter, can be useful in gaining additional perspectives for disaster management. It demonstrates that mapping tweets into the phases of a Disaster Management Lifecycle model can have benefits in the recovery phase, not just the response phase, to potentially improve future policies and activities.
Abstract 2: The current execution of privacy policies, as a mode of communicating information to users, is unsatisfactory. Social networking sites (SNS) exemplify this issue, attracting growing concerns regarding their use of personal data and its effect on user privacy. This demonstrates the need for more informative policies. However, SNS lack the incentives required to improve policies, which is exacerbated by the difficulty of creating a policy that is both concise and compliant. Standardization addresses many of these issues, providing benefits for users and SNS, although it is only possible if policies share attributes which can be standardized. This investigation used thematic analysis and cross-document structure theory to assess the similarity of attributes between the privacy policies (as available in August 2014) of the six most frequently visited SNS globally. Using the Jaccard similarity coefficient, two types of attribute were measured: the clauses used by SNS and the coverage of forty recommendations made by the UK Information Commissioner's Office. Analysis showed that whilst similarity in the clauses used was low, similarity in the recommendations covered was high, indicating that SNS use different clauses but convey similar information. The analysis also showed that the low similarity in clauses was largely due to differences in semantics, elaboration and functionality between SNS. This paper therefore proposes that the policies of SNS already share attributes, indicating the feasibility of standardization, and five recommendations are made to begin facilitating this, based on the findings of the investigation.
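The Jaccard similarity coefficient used in Abstract 2 is the standard set measure J(A, B) = |A ∩ B| / |A ∪ B|; a minimal sketch of the attribute comparison follows, with invented clause labels standing in for the policies studied.

    # Jaccard similarity over attribute sets (clause labels are invented).
    def jaccard(a: set, b: set) -> float:
        if not a and not b:
            return 1.0          # two empty sets: conventionally identical
        return len(a & b) / len(a | b)

    policy_a = {"data collection", "third-party sharing", "cookies", "retention"}
    policy_b = {"data collection", "cookies", "advertising"}
    print(f"clause similarity: {jaccard(policy_a, policy_b):.2f}")   # 0.40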
Abstract:
This paper uses a two-sided market model of hospital competition to study the implications of different remuneration schemes on the physicians' side. The two-sided market approach is characterized by the concept of common network externality (CNE) introduced by Bardey et al. (2010). This type of externality occurs when both sides value, possibly with different intensities, the same network externality. We explicitly introduce effort exerted by doctors: by increasing the number of medical acts (which involves a costly effort) the doctor can increase the quality of service offered to patients (over and above the level implied by the CNE). We first consider pure salary, capitation or fee-for-service schemes. Then we study schemes that mix fee-for-service with either salary or capitation payments. We show that salary schemes (either pure or in combination with fee-for-service) are more patient-friendly than (pure or mixed) capitation schemes. This comparison is exactly reversed on the providers' side. Quite surprisingly, patients always lose when a fee-for-service scheme is introduced (pure or mixed). This is true even though fee-for-service is the only way to induce the providers to exert effort, and it holds whatever the patients' valuation of this effort. In other words, the increase in quality brought about by fee-for-service is more than compensated by the increase in fees faced by patients.
Abstract:
We develop a model in which two insurers and two health care providers compete for a fixed mass of policyholders. Insurers compete in premiums and offer coverage against the financial consequences of health risk. They have the possibility of signing agreements with providers to establish a health care network. Providers, partially altruistic, are horizontally differentiated with respect to their physical address; they choose health care quality and compete in price. First, we show that policyholders are better off under competition between conventional insurers than under competition between integrated insurers (Managed Care Organizations). Second, we reveal that competition between a conventional insurer and a Managed Care Organization (MCO) leads to an equilibrium similar to that of competition between two MCOs characterized by different objectives, i.e. private versus mutual. Third, we point out that the providers' ex ante horizontal differentiation leads to an exclusionary equilibrium in which each insurer selects one distinct provider. This result is in sharp contrast with frameworks that introduce the concept of option value to model the (ex post) horizontal differentiation between providers.