875 results for Distribution network reconfiguration problem
Abstract:
Identifying groundwater contributions to baseflow forms an essential part of surface water body characterisation. The Gortinlieve catchment (5 km²) comprises a headwater stream network of the Carrigans River, itself a tributary of the River Foyle, NW Ireland. The bedrock comprises poorly productive metasediments that are characterised by fracture porosity. We present the findings of a multi-disciplinary study that integrates new hydrochemical and mineralogical investigations with existing hydraulic, geophysical and structural data to identify the scales of groundwater flow and the nature of groundwater/bedrock interaction (chemical denudation). At the catchment scale, the development of deep weathering profiles is controlled by NE-SW regional-scale fracture zones associated with mountain building during the Grampian orogeny. In-situ chemical denudation of mineral phases is controlled by micro- to meso-scale fractures related to Alpine compression during Palaeocene to Oligocene times. The alteration of primary muscovite, chlorite (clinochlore) and albite along the surfaces of these small-scale fractures has resulted in the precipitation of illite, montmorillonite and illite/montmorillonite clay admixtures. The interconnected but discontinuous nature of these small-scale structures highlights the role of larger-scale faults and fissures in the supply and transportation of weathering solutions to/from the sites of mineral weathering. The dissolution of primary mineral phases releases the major ions Mg, Ca and HCO3, which subsequently form the chemical makeup of groundwaters. Borehole groundwater and stream baseflow hydrochemical data are used to constrain the depths of the groundwater flow pathways influencing the chemistry of surface waters throughout the stream profile. The results show that it is predominantly the lower part of the catchment, which receives inputs from catchment/regional-scale groundwater flow, that contributes to the maintenance of annual baseflow levels. This study identifies the importance of deep groundwater in maintaining annual baseflow levels in poorly productive bedrock systems.
Abstract:
Credal nets are probabilistic graphical models which extend Bayesian nets to cope with sets of distributions. An algorithm for approximate credal network updating is presented. The problem in its general formulation is a multilinear optimization task, which can be linearized by an appropriate rule for fixing all the local models apart from those of a single variable. This simple idea can be iterated and quickly leads to accurate inferences. A transformation is also derived to reduce decision making in credal networks based on the maximality criterion to updating. The decision task is proved to have the same complexity as standard inference, being NP^PP-complete for general credal nets and NP-complete for polytrees. Similar results are derived for the E-admissibility criterion. Numerical experiments confirm the good performance of the method.
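To make the linearization idea concrete, here is a minimal sketch (not the authors' implementation) of coordinate-wise optimization in a toy two-node credal net A → B with binary variables, where each credal set is given as an explicit vertex list and all numbers are illustrative. Fixing all local models except one variable's reduces the step to picking the best vertex of the remaining credal set; iterating yields an inner approximation of the lower posterior.

```python
# Sketch: coordinate-wise vertex search for the lower posterior P(A=1|B=1)
# in a toy credal net A -> B. Illustrative only; vertex sets are made up.
from itertools import product

# Vertices of K(A): candidate marginals P(A).
K_A = [(0.3, 0.7), (0.5, 0.5)]
# Vertices of K(B|A=a) for a in {0, 1}: candidate conditionals P(B|a).
K_B = {0: [(0.2, 0.8), (0.4, 0.6)], 1: [(0.6, 0.4), (0.9, 0.1)]}

def posterior(pA, pB0, pB1, a=1, b=1):
    """P(A=a | B=b) for one fixed choice of local models."""
    joint = {(i, j): pA[i] * (pB0 if i == 0 else pB1)[j]
             for i, j in product(range(2), range(2))}
    return joint[(a, b)] / (joint[(0, b)] + joint[(1, b)])

# Fix all local models but one, optimize that one over its (finitely
# many) vertices, and iterate until no further improvement.
pA, pB0, pB1 = K_A[0], K_B[0][0], K_B[1][0]
best = posterior(pA, pB0, pB1)
improved = True
while improved:
    improved = False
    for cand in K_A:
        if posterior(cand, pB0, pB1) < best:
            best, pA, improved = posterior(cand, pB0, pB1), cand, True
    for cand in K_B[0]:
        if posterior(pA, cand, pB1) < best:
            best, pB0, improved = posterior(pA, cand, pB1), cand, True
    for cand in K_B[1]:
        if posterior(pA, pB0, cand) < best:
            best, pB1, improved = posterior(pA, pB0, cand), cand, True

print(f"inner approximation of the lower posterior P(A=1|B=1): {best:.4f}")
```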
Abstract:
Retrospective clinical datasets are often characterized by a relatively small sample size and a large amount of missing data. In this case, a common way of handling the missingness consists in discarding from the analysis patients with missing covariates, further reducing the sample size. Alternatively, if the mechanism that generated the missing data allows, incomplete data can be imputed on the basis of the observed data, avoiding the reduction of the sample size and allowing methods designed for complete data to be applied later on. Moreover, methodologies for data imputation might depend on the particular purpose and might achieve better results by considering specific characteristics of the domain. The problem of missing data treatment is studied in the context of survival tree analysis for the estimation of a prognostic patient stratification. Survival tree methods usually address this problem by using surrogate splits, that is, splitting rules that use other variables yielding similar results to the original ones. Instead, our methodology consists in modeling the dependencies among the clinical variables with a Bayesian network, which is then used to perform data imputation, thus allowing the survival tree to be applied to the completed dataset. The Bayesian network is learned directly from the incomplete data using a structural expectation–maximization (EM) procedure in which the maximization step is performed with an exact anytime method, so that the only source of approximation is due to the EM formulation itself. On both simulated and real data, our proposed methodology usually outperformed several existing methods for data imputation, and the imputation so obtained improved the stratification estimated by the survival tree (especially with respect to using surrogate splits).
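As a rough illustration of the imputation idea only, the sketch below runs plain EM on a toy two-node net X → Y with binary values and a fixed structure (unlike the paper, which learns the structure itself via structural EM), then completes the missing entries from the fitted model. All data and parameter names are made up.

```python
# Sketch: EM-based imputation with a fixed Bayesian network X -> Y.
# Toy data; y may be None (missing). Illustrative only.
data = [(0, 0), (0, 0), (0, 1), (1, 1), (1, 1), (1, None), (0, None), (1, 0)]

# Initial CPT guess: P(Y=1 | X=x).
p_y1 = {0: 0.5, 1: 0.5}

for _ in range(20):  # EM iterations
    # E-step: expected counts, spreading each missing y over both values.
    counts = {x: [1.0, 1.0] for x in (0, 1)}  # Laplace-smoothed [n_y0, n_y1]
    for x, y in data:
        if y is None:
            counts[x][1] += p_y1[x]
            counts[x][0] += 1.0 - p_y1[x]
        else:
            counts[x][y] += 1.0
    # M-step: re-estimate the CPT from the expected counts.
    p_y1 = {x: counts[x][1] / (counts[x][0] + counts[x][1]) for x in (0, 1)}

# Impute each missing y by its most probable value given x, yielding a
# completed dataset on which a survival tree (or any method) can be run.
completed = [(x, y if y is not None else int(p_y1[x] >= 0.5)) for x, y in data]
print(p_y1, completed)
```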
Abstract:
This paper presents new results for the (partial) maximum a posteriori (MAP) problem in Bayesian networks, which is the problem of querying the most probable state configuration of some of the network variables given evidence. First, it is demonstrated that the problem remains hard even in networks with very simple topology, such as binary polytrees and simple trees (including the Naive Bayes structure). These proofs extend previous complexity results for the problem. Inapproximability results are also derived for the case of trees if the number of states per variable is not bounded. Although the problem is shown to be hard and inapproximable even in very simple scenarios, a new exact algorithm is described that is empirically fast in networks of bounded treewidth and bounded number of states per variable. The same algorithm is used as the basis of a fully polynomial-time approximation scheme for MAP under such assumptions. Approximation schemes were generally thought to be impossible for this problem, but we show otherwise for classes of networks that are important in practice. The algorithms are extensively tested using some well-known networks as well as randomly generated cases to show their effectiveness.
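The following toy example illustrates what makes partial MAP distinctive: the non-MAP variables must be summed out before maximizing, so max and sum do not commute. It enumerates a three-node chain by brute force; the paper's exact algorithm is of course far more refined, and all numbers here are illustrative.

```python
# Sketch: partial MAP by enumeration in a tiny chain A -> B -> C.
# Query: argmax_a P(A=a, C=1), with B summed out. Illustrative only.
pA = [0.6, 0.4]                    # P(A)
pB = [[0.7, 0.3], [0.2, 0.8]]      # P(B | A)
pC = [[0.9, 0.1], [0.3, 0.7]]      # P(C | B)

evidence_c = 1
scores = {}
for a in range(2):
    # Sum out the non-MAP variable B; this marginalization is what makes
    # MAP harder than plain MPE, where everything is maximized.
    scores[a] = sum(pA[a] * pB[a][b] * pC[b][evidence_c] for b in range(2))

map_a = max(scores, key=scores.get)
print(f"argmax_a P(A=a, C=1) = {map_a}, score {scores[map_a]:.4f}")
```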
Abstract:
A number of neural networks can be formulated as linear-in-the-parameters models. Training such networks can be transformed into a model selection problem where a compact model is selected from all the candidates using subset selection algorithms. Forward selection methods are popular fast subset selection approaches. However, they may only produce suboptimal models and can be trapped in a local minimum. More recently, a two-stage fast recursive algorithm (TSFRA) combining forward selection and backward model refinement has been proposed to improve the compactness and generalization performance of the model. This paper proposes unified two-stage orthogonal least squares methods instead of the fast recursive-based methods. In contrast to the TSFRA, this paper derives a new simplified relationship between the forward and the backward stages to avoid repetitive computations, using the inherent orthogonal properties of the least squares methods. Furthermore, a new term-exchanging scheme for backward model refinement is introduced to reduce the computational demand. Finally, given the error reduction ratio criterion, effective and efficient forward and backward subset selection procedures are proposed. Extensive examples are presented to demonstrate the improved model compactness achieved by the proposed technique in comparison with some popular methods.
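A minimal sketch of the forward stage under the error reduction ratio (ERR) criterion is shown below, using classical Gram-Schmidt orthogonalization on synthetic data; the backward refinement and term-exchange scheme of the proposed two-stage method are omitted, and all names are illustrative.

```python
# Sketch: forward subset selection with the error reduction ratio (ERR).
# Synthetic data; only the forward stage is shown.
import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal((200, 10))          # candidate regressor matrix
y = 2.0 * P[:, 3] - 1.5 * P[:, 7] + 0.1 * rng.standard_normal(200)

selected, Q = [], []                        # chosen indices, orthogonal basis
sigma = y @ y
for _ in range(4):
    best_err, best_j, best_w = -1.0, None, None
    for j in range(P.shape[1]):
        if j in selected:
            continue
        w = P[:, j].copy()
        for q in Q:                         # orthogonalize against chosen terms
            w -= (q @ w) / (q @ q) * q
        g = (w @ y) / (w @ w)
        err = g * g * (w @ w) / sigma       # error reduction ratio of term j
        if err > best_err:
            best_err, best_j, best_w = err, j, w
    if best_err < 1e-4:                     # negligible contribution: stop
        break
    selected.append(best_j)
    Q.append(best_w)

print("selected terms:", selected)          # expect columns 3 and 7 first
```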
Abstract:
A novel model-based principal component analysis (PCA) method is proposed in this paper for wide-area power system monitoring, aiming to tackle one of the critical drawbacks of conventional PCA, i.e. its incapability to handle non-Gaussian distributed variables. It is a significant extension of the original PCA method, which has already been shown to outperform traditional methods like rate-of-change-of-frequency (ROCOF). The ROCOF method is quick at processing local information, but its threshold is difficult to determine and nuisance tripping may easily occur. The proposed model-based PCA method uses a radial basis function neural network (RBFNN) model to handle the nonlinearity in the data set and thus address the non-Gaussian issue, before the PCA method is used for islanding detection. To build an effective RBFNN model, this paper first uses a fast input selection method to remove insignificant neural inputs. Next, a heuristic optimization technique, namely Teaching-Learning-Based Optimization (TLBO), is adopted to tune the nonlinear parameters in the RBF neurons to build the optimized model. The novel RBFNN-based PCA monitoring scheme is then employed for wide-area monitoring using the residuals between the model outputs and the real PMU measurements. Experimental results confirm the efficiency and effectiveness of the proposed method in monitoring a suite of process variables with different distribution characteristics, showing that the proposed RBFNN PCA method is a reliable scheme and an effective extension of the linear PCA method.
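As a hedged illustration of the monitoring step only, the sketch below fits PCA to residuals collected under normal operation and flags new residual vectors whose squared prediction error (SPE) exceeds an empirical control limit. The RBFNN model and its TLBO tuning are not reproduced; the residuals are simulated stand-ins.

```python
# Sketch: residual-based PCA monitoring with the SPE (Q) statistic.
# Residuals are simulated; in the paper they come from an RBFNN model.
import numpy as np

rng = np.random.default_rng(1)
normal_res = rng.standard_normal((500, 6)) * 0.1     # normal-operation residuals
test_res = rng.standard_normal((5, 6)) * 0.1
test_res[2] += 1.0                                   # injected disturbance

# Fit PCA on the normal-operation residuals.
mu = normal_res.mean(axis=0)
_, s, Vt = np.linalg.svd(normal_res - mu, full_matrices=False)
Pk = Vt[:3].T                                        # retain 3 components

def spe(r):
    """Squared prediction error of residual vector r in the PCA model."""
    e = (r - mu) - Pk @ (Pk.T @ (r - mu))            # part outside the subspace
    return float(e @ e)

# Control limit: a simple empirical 99th percentile of the training SPE.
limit = np.percentile([spe(r) for r in normal_res], 99)
for i, r in enumerate(test_res):
    print(i, "ALARM" if spe(r) > limit else "ok", round(spe(r), 4))
```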
Abstract:
This paper investigates the identification and output tracking control of a class of Hammerstein systems through a wireless network within an integrated framework, where the statistical characteristics of the wireless network are modelled using the inverse Gaussian cumulative distribution function. In the proposed framework, a new networked identification algorithm is proposed to compensate for the influence of the wireless network delays so as to acquire a more precise Hammerstein system model. Then, the identified model together with a model-based approach is used to design an output tracking controller. Mean square stability conditions are given using linear matrix inequalities (LMIs), and the optimal controller gains can be obtained by solving the corresponding optimization problem expressed using LMIs. Illustrative numerical simulation examples are given to demonstrate the effectiveness of the proposed method.
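For orientation, a minimal sketch of plain Hammerstein identification by over-parameterized least squares follows: a polynomial static nonlinearity feeding first-order linear dynamics, with the nonlinear and linear coefficients estimated jointly. The paper's network-delay compensation is not modeled, and all signals are synthetic.

```python
# Sketch: Hammerstein identification via over-parameterized least squares.
# y(t) = a*y(t-1) + f(u(t-1)) + noise, with f polynomial. Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
N = 500
u = rng.uniform(-1, 1, N)
f = lambda v: v + 0.5 * v**2            # "unknown" static nonlinearity
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.8 * y[t - 1] + f(u[t - 1]) + 0.01 * rng.standard_normal()

# Regressors: y(t-1) and powers of u(t-1); estimating their coefficients
# jointly is the usual over-parameterization trick for Hammerstein models.
Phi = np.column_stack([y[:-1], u[:-1], u[:-1] ** 2])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print("estimated [a, b1, b2]:", np.round(theta, 3))  # expect ~[0.8, 1.0, 0.5]
```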
Abstract:
This chapter focuses on the relationship between improvisation and indeterminacy. We discuss the two practices by referring to play theory and game studies, and situate them in recent network music performance. We will develop a parallel with game theory in which indeterminacy is seen as a way of articulating situations where structural decisions are left to the discernment of the performers, and discuss improvisation as a method of play. The improvisation-indeterminacy relationship is discussed in the context of network music performance, which employs digital networks in the exchange of data between performers and hence relies on topological structures with varying degrees of openness and flexibility. Artists such as Max Neuhaus and The League of Automatic Music Composers initiated the development of a multitude of practices and technologies exploring the network as an environment for music making. Even though the technologies behind “the network” have shifted dramatically since Neuhaus’ use of radio in the 1960s, a preoccupation with the distribution and sharing of artistic agency has remained at the centre of networked practices. Golo Föllmer, after undertaking an extensive review of network music initiatives, produced a typology that comprises categories as diverse as remix lists, sound toys, real/virtual space installations and network performances. For Föllmer, “the term ‘Net music’ comprises all formal and stylistic kinds of music upon which the specifics of electronic networks leave considerable traces, whereby the electronic networks strongly influence the process of musical production, the musical aesthetic, or the way music is received” (2005: 185).
Abstract:
A periodic monitoring of the pavement condition facilitates a cost-effective distribution of the resources available for maintenance of the road infrastructure network. The task can be accurately carried out using profilometers, but such an approach is generally expensive. This paper presents a method to collect information on the road profile via accelerometers mounted in a fleet of non-specialist vehicles, such as police cars, that are in use for other purposes. It proposes an optimisation algorithm, based on Cross Entropy theory, to predict road irregularities. The Cross Entropy algorithm estimates the height of the road irregularities from vehicle accelerations at each point in time. To test the algorithm, the crossing of a half-car roll model is simulated over a range of road profiles to obtain accelerations of the vehicle sprung and unsprung masses. Then, the simulated vehicle accelerations are used as input in an iterative procedure that searches for the best solution to the inverse problem of finding the road irregularities. In each iteration, a sample of road profiles is generated, and an objective function defined as the sum of squares of the differences between the ‘measured’ and predicted accelerations is minimized until convergence is reached. The reconstructed profile is classified according to ISO and IRI recommendations and compared to its original class. Results demonstrate that the approach is feasible and that a good estimate of the short-wavelength features of the road profile can be obtained, despite the variability between the vehicles used to collect the data.
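The core cross-entropy loop can be sketched as follows: sample candidate profiles from a Gaussian, score them against the 'measured' accelerations, refit the sampler to the elite fraction, and repeat. The vehicle response below is a crude stand-in (second differences of the profile) rather than the half-car roll model used in the paper, and all numbers are illustrative.

```python
# Sketch: cross-entropy search for a road profile matching "measured"
# accelerations. The vehicle model is a deliberate simplification.
import numpy as np

rng = np.random.default_rng(3)
n = 20
true_profile = np.cumsum(rng.normal(0, 0.01, n))        # "road" heights (m)
measured = np.diff(true_profile, 2)                     # proxy accelerations

def objective(profile):
    # Sum of squared differences between predicted and measured signals.
    return np.sum((np.diff(profile, 2) - measured) ** 2)

mu, sigma = np.zeros(n), np.full(n, 0.05)
for it in range(100):
    samples = rng.normal(mu, sigma, size=(200, n))      # candidate profiles
    scores = np.array([objective(s) for s in samples])
    elite = samples[np.argsort(scores)[:20]]            # keep the best 10%
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    if scores.min() < 1e-10:                            # converged
        break

print("objective at the fitted mean profile:", objective(mu))
```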
Abstract:
We investigate the cell coverage optimization problem for the massive multiple-input multiple-output (MIMO) uplink. By deploying tilt-adjustable antenna arrays at the base stations, cell coverage optimization becomes a promising technique that is able to strike a compromise between covering cell-edge users and suppressing pilot contamination. We formulate a detailed description of this optimization problem by maximizing the cell throughput, which is shown to be mainly determined by the user distribution within several key geometrical regions. Then, the formulated problem is applied to different example scenarios: for a network with hexagonal cells and uniformly distributed users, we derive an analytical lower bound of the ergodic throughput in the objective cell, based on which it is shown that the optimal choice of cell coverage should ensure that the coverage of different cells does not overlap; for a more generic network with sector-shaped cells and non-uniformly distributed users, we propose an analytical approximation of the ergodic throughput. After that, a practical coverage optimization algorithm is proposed, where the optimal solution can easily be obtained through a simple one-dimensional line search within a confined searching region. Our numerical results show that the proposed coverage optimization method is able to greatly increase the system throughput in macrocells for massive MIMO uplink transmission, compared with traditional schemes where the cell coverage is fixed.
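A one-dimensional line search of the kind described can be sketched with golden-section search over a confined interval. The throughput surrogate below is a made-up unimodal function standing in for the paper's analytical approximation; only the search mechanics are the point.

```python
# Sketch: golden-section search over a confined searching region for the
# coverage radius maximizing a throughput surrogate. Surrogate is made up.
import math

def throughput(r):
    # Hypothetical trade-off: wider coverage serves more users but
    # increases pilot contamination from neighbouring cells.
    return r * math.exp(-(r / 0.6) ** 2)

lo, hi = 0.0, 1.0                      # confined searching region
phi = (math.sqrt(5) - 1) / 2
a, b = hi - phi * (hi - lo), lo + phi * (hi - lo)
while hi - lo > 1e-6:
    if throughput(a) < throughput(b):  # maximum lies in [a, hi]
        lo, a = a, b
        b = lo + phi * (hi - lo)
    else:                              # maximum lies in [lo, b]
        hi, b = b, a
        a = hi - phi * (hi - lo)

print(f"optimal normalized coverage radius ~ {0.5 * (lo + hi):.4f}")
```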
Abstract:
An algorithm for approximate credal network updating is presented. The problem in its general formulation is a multilinear optimization task, which can be linearized by an appropriate rule for fixing all the local models apart from those of a single variable. This simple idea can be iterated and quickly leads to very accurate inferences. The approach can also be specialized to classification with credal networks based on the maximality criterion. A complexity analysis for both the problem and the algorithm is reported, together with numerical experiments, which confirm the good performance of the method. While the inner approximation produced by the algorithm gives rise to a classifier which might return a subset of the optimal class set, preliminary empirical results suggest that the accuracy of the optimal class set is seldom affected by the approximate probabilities.
Abstract:
In his essay Anti-Object, Kengo Kuma proposes that architecture cannot and should not be understood as an object alone but instead always as a series of networks and connections, relationships within space and through form. Some of these relationships are tangible, others are invisible. Stan Allen and James Corner have also called for an architecture that is more performative and operative – ‘less concerned with what buildings look like and more concerned with what they do’ – as a means of effecting a more intimate and promiscuous relationship between infrastructure, urbanism and buildings. According to Allen, this expanding field offers a reclamation of some of the areas ceded by architecture following disciplinary specialization:
‘Territory, communication and speed are properly infrastructural problems and architecture as a discipline has developed specific technical means to deal with these variables. Mapping, projection, calculation, notation and visualization are among architecture’s traditional tools for operating at the very large scale’.
The motorway may not look like it – partly because we are no longer accustomed to thinking of it as such – but it is a site for and of architecture, a territory where architecture can be critical and active. If the limits of the discipline have narrowed, then one of the functions of a school of architecture must be to attempt to occupy those areas of the built environment where architecture is no longer present, or has yet to reach. If this is a project about the reclamation of a landscape, it is also a challenge to some of the boundaries that surround architecture and often confine it, as Kuma suggests, to the appreciation of isolated objects.
M:NI 2014-15
We tend to think of the motorway as a thing or an object, something that has a singular function. Historically this is how it has been seen, with engineers designing bridges and embankments and suchlike with zeal … These objects, like the M3 Urban Motorway, Belfast’s own Westway, are beautiful of course, but they have caused considerable damage to the city they were inflicted upon.
Actually, it’s the fact that we have seen the motorway as a solid object that has caused this problem. The motorway actually is a fluid and dynamic thing, and it should be seen as such: in fact it’s not an organ at all but actually tissue – something that connects rather than is. Once we start to see the motorway as tissue, it opens up new propositions about what the motorway is, is used for and does. This new dynamic and connective view unlocks the stasis of the motorway as edifice, and allows adaptation to happen: adaptation to old contexts that were ignored by the planners, and adaptation to new contexts that have arisen because of or in spite of our best efforts.
Motorways as tissue are more than just infrastructures: they are landscapes. These landscapes can be seen as surfaces on which flows take place, not only of cars, buses and lorries, but also of the globalized goods carried and the lifestyles and mobilities enabled. Here the infinite speed of urban change of thought transcends the declared speed limit [70 mph] of the motorway, in that a consignment of bananas can cause soil erosion in Ecuador, or the delivery of a new iPhone can unlock connections and ideas the world over.
So what is this new landscape to be like? It may be a parallax-shifting, cognitive looking glass; a drone-scape of energy transformation; a collective farm; or maybe part of a hospital. But what’s for sure is that it is never fixed nor static: it pulses like a heartbeat through that most bland of landscapes, the countryside. It transmits forces like a Caribbean hurricane creating surf on an Atlantic storm beach: alien forces that mutate and re-form these places, screaming into new, unclear and unintended futures.
And this future is clear: the future is urban. In this small rural country, motorways as tissue have made the whole of it – countryside, mountain, sea and town – into one singular, homogeneous and hyper-connected generic city.
Goodbye, place. Hello, surface!
Abstract:
A new heuristic based on the Nawaz–Enscore–Ham (NEH) algorithm is proposed in this paper for solving the permutation flowshop scheduling problem. A new priority rule is proposed that accounts for the average, mean absolute deviation, skewness and kurtosis of the processing times, in order to fully describe their distribution. A new tie-breaking rule is also introduced to achieve effective job insertion with the objective of minimizing both makespan and machine idle time. Statistical tests show the better solution quality of the proposed algorithm compared to existing benchmark heuristics.
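For reference, a minimal sketch of the underlying NEH insertion scheme follows, using the classical priority rule (descending total processing time) rather than the moments-based priority and tie-breaking rules proposed in the paper; the processing-time matrix is illustrative.

```python
# Sketch: the NEH heuristic for permutation flowshop scheduling.
# Jobs are ordered by a priority rule, then inserted one at a time at
# the position giving the smallest makespan. Data are illustrative.
import numpy as np

p = np.array([[3, 5, 2], [6, 1, 4], [4, 4, 4], [2, 6, 3]])  # jobs x machines

def makespan(seq):
    m = p.shape[1]
    c = np.zeros(m)                    # completion time of last job per machine
    for j in seq:
        c[0] += p[j, 0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j, k]
    return c[-1]

order = np.argsort(-p.sum(axis=1))     # priority: largest total time first
seq = [order[0]]
for j in order[1:]:                    # insert each job at its best position
    seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
              key=makespan)
print("sequence:", seq, "makespan:", makespan(seq))
```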
Abstract:
The branch-and-cut algorithm is one of the most efficient exact approaches for solving mixed integer programs. It combines the advantages of a pure branch-and-bound approach with a cutting-plane scheme: at each node of the search tree, the linear programming relaxation of the problem is computed and then improved by the use of cuts, i.e. by the inclusion of valid inequalities. The selection of the strongest cuts is crucial for their effective use in branch-and-cut. In this thesis, we focus on the derivation and use of cutting planes to solve general mixed integer problems, and in particular inventory problems combined with other problems such as distribution, supplier selection and vehicle routing. To this end, we first consider substructures (relaxations) of such problems, obtained by a coherent loss of information. The polyhedral structure of these simpler mixed integer sets is studied to derive strong valid inequalities. Finally, those strong inequalities are included in cutting-plane algorithms to solve the general mixed integer problems. We study three mixed integer sets in this dissertation. The first two arise as subproblems of the lot-sizing with supplier selection, network design and vendor-managed inventory routing problems. These sets are variants of the well-known single-node fixed-charge network set where a binary or integer variable is associated with the node. The third set occurs as a subproblem of mixed integer sets where incompatibility between binary variables is considered. We generate families of valid inequalities for these sets, identify classes of facet-defining inequalities, and discuss the separation problems associated with the inequalities. Cutting-plane frameworks are then implemented to solve some mixed integer programs, and preliminary computational experiments are presented in this direction.
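The basic cutting-plane loop that branch-and-cut embeds at every node can be sketched as follows: solve the LP relaxation, separate a violated valid inequality, add it and resolve. The sketch below uses cover inequalities for a single knapsack row with a greedy (heuristic) separation; the instance is made up, and exact separation is itself an optimization problem, as noted above.

```python
# Sketch: a cutting-plane loop adding cover inequalities for a 0-1
# knapsack relaxation. Greedy separation; instance is illustrative.
import numpy as np
from scipy.optimize import linprog

c = np.array([10, 9, 8, 7])            # profits (to maximize)
a = np.array([6, 5, 4, 3])             # weights
b = 10.0                               # capacity

A_ub, b_ub = [a.tolist()], [b]
for it in range(10):
    res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 4,
                  method="highs")      # LP relaxation with current cuts
    x = res.x
    # Greedy separation: build a cover C from items with the largest x*,
    # then check whether sum_{i in C} x_i <= |C| - 1 is violated.
    C, w = [], 0.0
    for i in np.argsort(-x):
        C.append(i)
        w += a[i]
        if w > b:
            break
    if w <= b or x[C].sum() <= len(C) - 1 + 1e-7:
        break                          # no violated cover found: stop
    row = np.zeros(4)
    row[C] = 1.0                       # cover cut: sum_{i in C} x_i <= |C| - 1
    A_ub.append(row.tolist())
    b_ub.append(len(C) - 1.0)

print("LP value after cuts:", c @ x, "x =", np.round(x, 3))
```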