920 results for NETWORK MODELS
Abstract:
Bayesian network classifiers are a powerful machine learning tool. In order to evaluate the expressive power of these models, we compute families of polynomials that sign-represent decision functions induced by Bayesian network classifiers. We prove that those families are linear combinations of products of Lagrange basis polynomials. In the absence of V-structures in the predictor sub-graph, we are also able to prove that this family of polynomials characterizes the specific classifier considered. We then use this representation to bound the number of decision functions representable by Bayesian network classifiers with a given structure.
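For orientation, the two notions this abstract relies on can be written out as follows; the notation is ours, not necessarily the paper's. A polynomial p sign-represents a decision function f over the predictors if they agree in sign on every configuration, and the Lagrange basis polynomial attached to the j-th value of a discrete variable is the unique polynomial of matching degree that is 1 at that value and 0 at the others.

```latex
% Sketch of the standard definitions, in illustrative notation.
% Sign-representation of a decision function f by a polynomial p:
\[
  f(\mathbf{x}) \;=\; \operatorname{sign}\bigl(p(\mathbf{x})\bigr)
  \quad \text{for every configuration } \mathbf{x} \text{ of the predictors.}
\]
% Lagrange basis polynomial for the value x_j of a variable with domain {x_1, ..., x_r}:
\[
  \ell_j(x) \;=\; \prod_{\substack{k=1 \\ k \neq j}}^{r} \frac{x - x_k}{x_j - x_k},
  \qquad \ell_j(x_k) = \delta_{jk}.
\]
```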
Abstract:
Short-term load forecasting of power systems has been a classic problem for a long time. Not only has it been researched extensively and intensively, but a variety of forecasting methods has also been proposed. This thesis outlines some aspects and functions of smart meters. It also presents different policies and current statuses, as well as future projects and objectives, of smart grid (SG) development in several countries. The thesis then compares the main aspects of the latest smart meter products from different companies. Lastly, three types of prediction models are established in MATLAB to emulate the functions of the smart grid in short-term load forecasting, and their results are compared and analyzed in terms of accuracy. In this thesis, additional variables such as dew point temperature are used in the neural network model to achieve higher accuracy and better short-term load forecasting results.
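The thesis implements its prediction models in MATLAB; purely as an illustration of the kind of model described (a neural network regressor fed with weather variables such as dew point temperature), a minimal Python sketch on synthetic data might look like this. All variable names and figures are hypothetical.

```python
# Illustrative sketch only: the thesis builds its models in MATLAB; this shows the
# same idea in Python with scikit-learn, using synthetic load and weather data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
hour_of_day = hours % 24
dry_bulb = 15 + 10 * np.sin(2 * np.pi * hours / (24 * 365)) + rng.normal(0, 2, hours.size)
dew_point = dry_bulb - rng.uniform(1, 6, hours.size)      # the extra weather variable
load = 500 + 30 * np.sin(2 * np.pi * hour_of_day / 24) + 5 * dry_bulb + rng.normal(0, 10, hours.size)

X = np.column_stack([hour_of_day, dry_bulb, dew_point])
split = int(0.8 * len(hours))

# A small feed-forward network; scaling the inputs helps convergence.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
model.fit(X[:split], load[:split])

# Mean absolute percentage error, a common accuracy measure for load forecasts.
mape = np.mean(np.abs(model.predict(X[split:]) - load[split:]) / load[split:]) * 100
print(f"MAPE: {mape:.2f}%")
```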
Abstract:
Society today is completely dependent on computer networks, the Internet and distributed systems, which place at our disposal the services necessary to perform our daily tasks. Subconsciously, we rely increasingly on network management systems. These systems allow us, in general, to maintain, manage, configure, scale, adapt, modify, edit, protect, and enhance the main distributed systems. Their role is secondary, unknown and transparent to the users, yet they provide the support necessary to maintain the distributed systems whose services we use every day. If we do not consider network management systems during the development stage of distributed systems, there can be serious consequences, or even total failures, in the development of the distributed system. It is necessary, therefore, to consider the management of the systems within the design of distributed systems and to systematise their design to minimise the impact of network management on distributed systems projects. In this paper, we present a framework that enables the systematic design of network management systems. To accomplish this goal, formal modelling tools are used to model, in sequence, different proposed views of the same problem. These views cover all the aspects involved in the system: starting from process definitions, they identify those responsible, define the agents involved, and propose a deployment on a distributed architecture that is both feasible and appropriate.
Abstract:
In this paper, the model of an innovative monitoring network is proposed, involving properly connected nodes, to develop an Information and Communication Technology (ICT) solution for the preventive maintenance of historical centres based on early warnings. It is well known that the protection of historical centres generally proceeds from large-scale monitoring to local monitoring, and it could be supported by a single ICT solution. More specifically, the model of a virtually organized monitoring system could enable the implementation of automated analyses presenting various alert levels. An adequate ICT tool would make it possible to define a monitoring network for shared processing of data and results. Thus, a possible retrofit solution could be planned for pilot cases shared among the nodes of the network, on the basis of a suitable procedure using a retrofit catalogue. The final objective is to provide a model of an innovative tool to identify hazards, damage and possible retrofit solutions for historical centres, ensuring easy early-warning support for stakeholders. The action could proactively target the needs and requirements of users, such as decision makers responsible for damage mitigation and the safeguarding of cultural heritage assets.
Abstract:
In this study, we utilise a novel approach to segment the ventricular system in a series of high-resolution T1-weighted MR images. We present a fast reconstruction method for the brain ventricles. The method is based on processing brain sections and placing a fixed number of landmarks on those sections to reconstruct the 3D surface of the ventricles. Automated landmark extraction is accomplished with a self-organising network, the growing neural gas (GNG), which is able to topographically map the low dimensionality of the network to the high dimensionality of the contour manifold without requiring a priori knowledge of the structure of the input space. Moreover, our GNG landmark method is tolerant to noise and eliminates outliers. Our method accelerates the classical surface reconstruction and filtering processes. The proposed method offers higher accuracy than methods of similar efficiency, such as Voxel Grid.
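As an aside, a compact sketch of the standard growing neural gas algorithm (not the authors' implementation; parameters and the toy contour are illustrative) shows how the landmark nodes adapt to an input contour.

```python
# Minimal GNG sketch: nodes grow and settle on the input contour, yielding a
# fixed set of landmark positions once growth stops.  Illustrative parameters.
import numpy as np

def gng(points, max_nodes=50, lam=100, eps_b=0.05, eps_n=0.006,
        alpha=0.5, d=0.995, max_age=50, n_iter=10000, seed=0):
    rng = np.random.default_rng(seed)
    nodes = [points[rng.integers(len(points))].copy() for _ in range(2)]
    error = [0.0, 0.0]
    edges = {}  # (i, j) with i < j -> age

    def neighbors(i):
        out = []
        for a, b in edges:
            if a == i:
                out.append(b)
            elif b == i:
                out.append(a)
        return out

    for t in range(1, n_iter + 1):
        x = points[rng.integers(len(points))]
        dists = [np.linalg.norm(x - w) for w in nodes]
        s1, s2 = np.argsort(dists)[:2]

        # Age the winner's edges, accumulate its error, move it and its neighbours.
        for e in list(edges):
            if s1 in e:
                edges[e] += 1
        error[s1] += dists[s1] ** 2
        nodes[s1] += eps_b * (x - nodes[s1])
        for j in neighbors(s1):
            nodes[j] += eps_n * (x - nodes[j])

        # Refresh (or create) the edge between the two closest nodes, drop old edges.
        edges[tuple(sorted((int(s1), int(s2))))] = 0
        edges = {e: a for e, a in edges.items() if a <= max_age}

        # Periodically insert a node between the highest-error node and its
        # highest-error neighbour (isolated nodes are kept, for simplicity).
        if t % lam == 0 and len(nodes) < max_nodes:
            q = int(np.argmax(error))
            nbrs = neighbors(q)
            if nbrs:
                f = int(max(nbrs, key=lambda j: error[j]))
                r = len(nodes)
                nodes.append(0.5 * (nodes[q] + nodes[f]))
                edges.pop(tuple(sorted((q, f))), None)
                edges[tuple(sorted((q, r)))] = 0
                edges[tuple(sorted((f, r)))] = 0
                error[q] *= alpha
                error[f] *= alpha
                error.append(error[q])
        error = [e * d for e in error]

    return np.array(nodes), edges

# Toy example: landmarks settling on a noisy circle, a stand-in for one contour.
theta = np.linspace(0, 2 * np.pi, 500)
contour = np.c_[np.cos(theta), np.sin(theta)] + 0.02 * np.random.default_rng(1).normal(size=(500, 2))
landmarks, topology = gng(contour)
print(landmarks.shape)  # (max_nodes, 2) once the network has finished growing
```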
Abstract:
Society, as we know it today, is completely dependent on computer networks, the Internet and distributed systems, which place at our disposal the services necessary to perform our daily tasks. Moreover, and unconsciously, all of these services and distributed systems require network management systems. These systems allow us, in general, to maintain, manage, configure, scale, adapt, modify, edit, protect or improve the main distributed systems. Their role is secondary, unknown and transparent to the users, yet they provide the support necessary to maintain the distributed systems whose services we use every day. If we do not consider network management systems during the development stage of the main distributed systems, there can be serious consequences, or even total failures, in the development of the distributed systems. It is necessary, therefore, to consider the management of the systems within the design of distributed systems and to systematize their conception to minimize the impact of network management within distributed systems projects. In this paper, we present a method for formalizing the conceptual modelling involved in the design of a network management system through the use of formal modelling tools, which makes it possible, starting from the definition of processes, to identify those responsible for them. Finally, we propose a use case in which a conceptual model of a network intrusion detection system is designed.
Abstract:
In this work, we propose a new methodology for the large-scale optimization and process integration of complex chemical processes that have been simulated using modular chemical process simulators. Units with significant numerical noise or large CPU times are substituted by surrogate models based on Kriging interpolation. Using a degree-of-freedom analysis, some of those units can be aggregated into a single unit to reduce the complexity of the resulting model. As a result, we solve a hybrid simulation-optimization model formed by units in the original flowsheet, Kriging models, and explicit equations. We present a case study of the optimization of a sour water stripping plant in which we simultaneously consider economics, heat integration and environmental impact using the ReCiPe indicator, which incorporates recent advances in Life Cycle Assessment (LCA). The optimization strategy guarantees convergence to a local optimum within the tolerance of the numerical noise.
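A minimal sketch of the surrogate idea, assuming a hypothetical expensive unit and scikit-learn's Gaussian-process (Kriging) regressor rather than the authors' own Kriging code:

```python
# Replace a noisy / expensive simulator unit with a Kriging surrogate fitted to a
# handful of flowsheet evaluations, then optimize over the cheap surrogate.
# The "expensive_unit" function and variable bounds are hypothetical.
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_unit(x):               # stand-in for a rigorous simulator call
    return np.sin(3 * x[0]) + 0.5 * x[1] ** 2

# Sample the simulator at a few design points.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0, size=(25, 2))
y = np.array([expensive_unit(x) for x in X])

# Fit the Kriging (Gaussian-process) surrogate.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X, y)

# Optimize the surrogate instead of the simulator (local, gradient-based).
res = minimize(lambda x: gp.predict(x.reshape(1, -1))[0],
               x0=[1.0, 1.0], bounds=[(0.0, 2.0), (0.0, 2.0)])
print(res.x, res.fun)
```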
Abstract:
Background: Diabetes is a major cause of morbidity and mortality worldwide, and Portugal is the European country with the highest prevalence of the disease. While diabetes prevalence data are updated annually in Portugal, the General Practitioners' (GP) Sentinel Network represents the only data source on diabetes incidence. This study describes the trends in diabetes incidence between 1992 and 2015 and estimates projections of future incidence rates in Portugal until 2024. Methods: An ecological time-series study was conducted using data from the GP Sentinel Network between 1992 and 2015. Family doctors reported all new cases of diabetes in their patient lists. Annual trends, as well as future incidence rates (until 2024), were estimated through Poisson regression models, stratified by sex and age group. Incidence rate projections were adjusted to the distribution of the resident Portuguese population given Statistics Portugal projections. Results: The average increase in the diabetes incidence rate was 4.29% (95% CI 3.80–4.80) per year over the study period. Until 1998–2000 the annual incidence rate was higher in women, and from 1998–2000 to 2013–2015 it turned out to be higher in men. The incidence rate projected for 2022–2024 was 972.77/10⁵ inhabitants overall, and 846.74/10⁵ and 1114.42/10⁵, respectively, in women and men. Conclusions: This is the first study in Portugal to estimate diabetes incidence rate projections. The reported projections are disturbing but seem realistic if past trends continue. Effective public health policies will need to be undertaken to mitigate this alarming future scenario.
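A minimal sketch of the kind of Poisson regression trend model described, with synthetic counts and population figures in place of the Sentinel Network data:

```python
# Poisson regression of annual new cases on calendar year, with the population at
# risk as exposure, so exp(slope) - 1 is the average yearly change in the incidence
# rate.  All numbers below are synthetic placeholders, not the study's data.
import numpy as np
import statsmodels.api as sm

years = np.arange(1992, 2016)
population = np.full(years.size, 1.8e6)                    # illustrative person-years
rng = np.random.default_rng(1)
cases = rng.poisson(4e-3 * np.exp(0.042 * (years - 1992)) * population)

X = sm.add_constant(years - 1992)
fit = sm.GLM(cases, X, family=sm.families.Poisson(), exposure=population).fit()

annual_increase = np.exp(fit.params[1]) - 1
print(f"Average annual increase: {annual_increase:.2%}")

# Projected rate for 2024, expressed per 10^5 inhabitants.
rate_2024 = np.exp(fit.params[0] + fit.params[1] * (2024 - 1992)) * 1e5
print(f"Projected incidence rate in 2024: {rate_2024:.1f} per 100,000")
```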
Abstract:
Transportation Department, Office of University Research, Washington, D.C.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Orthotopic liver retransplantation (re-OLT) is highly controversial. The objectives of this study were to determine the validity of a recently developed United Network for Organ Sharing (UNOS) multivariate model using an independent cohort of patients undergoing re-OLT outside the United States, to determine whether incorporation of other variables that were incomplete in the UNOS registry would provide additional prognostic information, to develop new models combining data sets from both cohorts, and to evaluate the validity of the Model for End-Stage Liver Disease (MELD) in patients undergoing re-OLT. Two hundred eighty-one adult patients undergoing re-OLT (between 1986 and 1999) at 6 foreign transplant centers comprised the validation cohort. We found good agreement between actual survival and predicted survival in the validation cohort; 1-year patient survival rates in the low-, intermediate-, and high-risk groups (as assigned by the original UNOS model) were 72%, 68%, and 36%, respectively (P < .0001). In the patients for whom the international normalized ratio (INR) of prothrombin time was available, MELD correlated with outcome following re-OLT; the median MELD scores for patients surviving at least 90 days and for those dying within 90 days were 20.75 and 25.9, respectively (P = .004). Utilizing both patient cohorts (n = 979), a new model, based on recipient age, total serum bilirubin, creatinine, and interval to re-OLT, was constructed (whole-model χ² = 105, P < .0001). Using the c-statistic with 30-day, 90-day, 1-year, and 3-year mortality as the end points, the areas under the receiver operating characteristic (ROC) curves for 4 different models were compared. In conclusion, prospective validation and use of these models as adjuncts to clinical decision making in the management of patients being considered for re-OLT are warranted.
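For reference, the c-statistic used to compare the models is the area under the ROC curve: the probability that a randomly chosen patient who died is assigned a higher predicted risk than a randomly chosen survivor. A toy computation with made-up risk scores, not the study's data:

```python
# Illustrative c-statistic (ROC AUC) calculation with hypothetical values.
import numpy as np
from sklearn.metrics import roc_auc_score

died_90d   = np.array([0, 0, 1, 0, 1, 1, 0, 1])          # 1 = death within 90 days
risk_score = np.array([14, 18, 27, 20, 25, 31, 16, 22])  # e.g. a MELD-like score

c_statistic = roc_auc_score(died_90d, risk_score)
print(f"c-statistic: {c_statistic:.2f}")
```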
Abstract:
A test of the ability of a probabilistic neural network to classify deposits into types on the basis of deposit tonnage and average Cu, Mo, Ag, Au, Zn, and Pb grades is conducted. The purpose is to examine whether this type of system might serve as a basis for integrating geoscience information available in large mineral databases to classify sites by deposit type. Benefits of proper classification of many sites in large regions are relatively rapid identification of terranes permissive for deposit types and recognition of specific sites perhaps worthy of exploring further. Total tonnages and average grades of 1,137 well-explored deposits identified in published grade and tonnage models representing 13 deposit types were used to train and test the network. Tonnages were transformed by logarithms and grades by square roots to reduce the effects of skewness. All values were scaled by subtracting the variable's mean and dividing by its standard deviation. Half of the deposits were selected randomly to be used in training the probabilistic neural network and the other half were used for independent testing. Tests were performed with a probabilistic neural network employing a Gaussian kernel and separate sigma weights for each class (type) and each variable (grade or tonnage). Deposit types were selected to challenge the neural network. For many types, tonnages or average grades are significantly different from other types, but individual deposits may plot in the grade and tonnage space of more than one type. Porphyry Cu, porphyry Cu-Au, and porphyry Cu-Mo types have similar tonnages and relatively small differences in grades. Redbed Cu deposits typically have tonnages that could be confused with porphyry Cu deposits, and they also contain Cu and, in some situations, Ag. Cyprus and kuroko massive sulfide types have about the same tonnages and Cu, Zn, Ag, and Au grades. Polymetallic vein, sedimentary exhalative Zn-Pb, and Zn-Pb skarn types contain many of the same metals. Sediment-hosted Au, Comstock Au-Ag, and low-sulfide Au-quartz vein types are principally Au deposits with differing amounts of Ag. Given the intent to test the neural network under the most difficult conditions, an overall 75% agreement between the experts and the neural network is considered excellent. Among the largest classification errors are skarn Zn-Pb and Cyprus massive sulfide deposits classed by the neural network as kuroko massive sulfides (24% and 63% error, respectively). Another large error is the classification of 92% of porphyry Cu-Mo deposits as porphyry Cu deposits. Most of the larger classification errors involve 25 or fewer training deposits, suggesting that some errors might be the result of small sample size. About 91% of the gold deposit types were classed properly, and 98% of porphyry Cu deposits were classed as some type of porphyry Cu deposit. An experienced economic geologist would not make many of the classification errors that were made by the neural network, because the geologic settings of deposits would be used to reduce errors. In a separate test, the probabilistic neural network correctly classed 93% of 336 deposits in eight deposit types when trained with the presence or absence of 58 minerals and six generalized rock types. The overall success rate of the probabilistic neural network when trained on tonnage and average grades would probably be more than 90% with additional information on the presence of a few rock types.
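A minimal sketch of a probabilistic neural network with a Gaussian kernel, simplified to one bandwidth per variable (the study also used separate sigma weights per class) and run on synthetic data rather than the grade and tonnage models:

```python
# PNN sketch: each class gets a Gaussian kernel-density estimate built from its
# training deposits, and a new deposit is assigned to the class with the highest
# density.  Synthetic, illustrative data; bandwidths are per variable only.
import numpy as np

def pnn_predict(X_train, y_train, X_new, sigma):
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        # Gaussian kernel with a separate bandwidth for each variable.
        diff = (X_new[:, None, :] - Xc[None, :, :]) / sigma
        scores.append(np.exp(-0.5 * (diff ** 2).sum(axis=2)).mean(axis=1))
    return classes[np.argmax(np.array(scores), axis=0)]

# Toy data: one (log-)tonnage plus six (square-root) grade variables, standardised
# as described in the abstract.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 7))
y = rng.integers(0, 13, size=200)              # 13 deposit types
X = (X - X.mean(axis=0)) / X.std(axis=0)

sigma = np.full(X.shape[1], 0.5)               # one bandwidth per variable
pred = pnn_predict(X[:100], y[:100], X[100:], sigma)
print((pred == y[100:]).mean())
```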
Abstract:
Bistability and switching are two important aspects of the genetic regulatory network of λ phage. Positive and negative feedbacks are key regulatory mechanisms in this network. By the introduction of threshold values, the developmental pathway of λ phage is divided into different stages. If the protein level reaches a threshold value, positive or negative feedback becomes effective and regulates the process of development. Using this regulatory mechanism, we present a quantitative model that realizes the bistability and switching of λ phage based on experimental data. This model gives descriptions of the decisive mechanisms for the different pathways in induction. A stochastic model is also introduced to describe the statistical properties of switching in induction. A stochastic degradation rate is used to represent intrinsic noise in induction, switching the system from the lysogenic pathway to the lytic pathway. The approach in this paper represents an attempt to describe the regulatory mechanism of a genetic regulatory network under the influence of intrinsic noise within the framework of continuous models. (C) 2003 Elsevier Ltd. All rights reserved.
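A generic sketch of the model class the abstract describes, in our own notation rather than the paper's: production with threshold-gated feedback, degradation, and a noisy degradation rate for the stochastic version.

```latex
% Generic continuous model with threshold-gated feedback: x is the protein level,
% H the Heaviside step switching the feedback on once x crosses the threshold
% theta, and the degradation rate gamma is perturbed by noise xi(t) in the
% stochastic version, which can flip the system between lysogenic and lytic branches.
\[
  \frac{dx}{dt} \;=\; k_0 \;+\; k_1\, H\!\left(x-\theta\right)
  \;-\; \bigl(\gamma + \xi(t)\bigr)\, x .
\]
```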
Abstract:
Consider a network of unreliable links, modelling, for example, a communication network. Estimating the reliability of the network, expressed as the probability that certain nodes in the network are connected, is a computationally difficult task. In this paper we study how the Cross-Entropy method can be used to obtain more efficient network reliability estimation procedures. Three estimation techniques are considered: Crude Monte Carlo and the more sophisticated Permutation Monte Carlo and Merge Process. We show that the Cross-Entropy method yields a speed-up over all three techniques.
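A minimal sketch of the Crude Monte Carlo baseline against which the Cross-Entropy method is compared, using an illustrative graph and link reliabilities:

```python
# Crude Monte Carlo: sample link states independently with their working
# probabilities and count the fraction of samples in which the terminal nodes
# stay connected.  Graph, terminals and reliabilities below are illustrative.
import random
import networkx as nx

edges = [(0, 1, 0.9), (1, 2, 0.9), (0, 2, 0.8), (2, 3, 0.95), (1, 3, 0.85)]
terminals = (0, 3)

def crude_mc_reliability(edges, terminals, n_samples=100_000, seed=0):
    rng = random.Random(seed)
    connected = 0
    for _ in range(n_samples):
        g = nx.Graph()
        g.add_nodes_from(terminals)
        g.add_edges_from((u, v) for u, v, p in edges if rng.random() < p)
        if nx.has_path(g, *terminals):
            connected += 1
    return connected / n_samples

print(crude_mc_reliability(edges, terminals))
```

The variance of this estimator explodes for highly reliable networks, which is where the more sophisticated estimators studied in the paper pay off.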