975 results for Constrained network mapping


Relevance:

30.00%

Publisher:

Abstract:

Palaeoclimates across Europe for 6000 y BP were estimated from pollen data using the modern pollen analogue technique constrained with lake-level data. The constraint consists of restricting the set of modern pollen samples considered as analogues of the fossil samples to those locations where the implied change in annual precipitation minus evapotranspiration (P–E) is consistent with the regional change in moisture balance as indicated by lakes. An artificial neural network was used for the spatial interpolation of lake-level changes to the pollen sites, and for mapping palaeoclimate anomalies. The climate variables reconstructed were mean temperature of the coldest month (Tc), growing degree days above 5 °C (GDD), moisture availability expressed as the ratio of actual to equilibrium evapotranspiration (α), and P–E. The constraint improved the spatial coherency of the reconstructed palaeoclimate anomalies, especially for P–E. The reconstructions indicate clear spatial and seasonal patterns of Holocene climate change, which can provide a quantitative benchmark for the evaluation of palaeoclimate model simulations. Winter temperatures (Tc) were 1–3 K greater than present in the far N and NE of Europe, but 2–4 K less than present in the Mediterranean region. Summer warmth (GDD) was greater than present in NW Europe (by 400–800 K day at the highest elevations) and in the Alps, but >400 K day less than present at lower elevations in S Europe. P–E was 50–250 mm less than present in NW Europe and the Alps, but α was 10–15% greater than present in S Europe and P–E was 50–200 mm greater than present in S and E Europe.
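
To make the constraint concrete, here is a minimal, illustrative sketch (not the paper's implementation) of a constrained analogue search: modern samples whose implied P–E anomaly falls outside the lake-level-derived range are excluded before the closest analogues are chosen. The squared chord distance, the array shapes and the `fossil_pe_range` bounds are assumptions for illustration; the neural-network interpolation step is omitted.

```python
import numpy as np

def constrained_analogues(fossil_spectrum, modern_spectra, modern_pe,
                          fossil_pe_range, k=5):
    """Select the k closest modern pollen analogues for one fossil sample,
    keeping only candidates whose implied P-E change falls inside the
    range indicated by the lake-level record (the 'constraint')."""
    lo, hi = fossil_pe_range
    # Constraint step: drop modern samples inconsistent with lake-level evidence.
    ok = (modern_pe >= lo) & (modern_pe <= hi)
    candidates = modern_spectra[ok]
    # Squared chord distance, a common dissimilarity for pollen percentages.
    d = np.sum((np.sqrt(candidates) - np.sqrt(fossil_spectrum)) ** 2, axis=1)
    order = np.argsort(d)[:k]
    return np.flatnonzero(ok)[order], d[order]

# Toy usage with random data (illustrative only).
rng = np.random.default_rng(0)
modern = rng.dirichlet(np.ones(10), size=200)   # 200 modern pollen spectra
fossil = rng.dirichlet(np.ones(10))             # one fossil spectrum
pe_change = rng.normal(0, 100, size=200)        # implied P-E anomaly (mm)
idx, dist = constrained_analogues(fossil, modern, pe_change, (-50, 150))
print(idx, dist)
```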

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Social networks are common in digital health. A new stream of research is beginning to investigate the mechanisms of digital health social networks (DHSNs): how they are structured, how they function, and how their growth can be nurtured and managed. DHSNs increase in value when additional content is added, and the structure of networks may resemble the characteristics of power laws. Power laws are contrary to traditional Gaussian averages in that they demonstrate correlated phenomena. OBJECTIVES: The objective of this study is to investigate whether the distribution frequency in four DHSNs can be characterized as following a power law. A second objective is to describe the method used to determine the comparison. METHODS: Data from four DHSNs—Alcohol Help Center (AHC), Depression Center (DC), Panic Center (PC), and Stop Smoking Center (SSC)—were compared to power law distributions. To assist future researchers and managers, the 5-step methodology used to analyze and compare datasets is described. RESULTS: All four DHSNs were found to have right-skewed distributions, indicating the data were not normally distributed. When power trend lines were added to each frequency distribution, R² values indicated that, to a very high degree, the variance in post frequencies can be explained by actor rank (AHC .962, DC .975, PC .969, SSC .95). Spearman correlations provided further indication of the strength and statistical significance of the relationship (AHC .987, DC .967, PC .983, SSC .993; P<.001). CONCLUSIONS: This is the first study to investigate power distributions across multiple DHSNs, each addressing a unique condition. The results indicate that despite vast differences in theme, content, and length of existence, DHSNs follow the properties of power laws. The structure of DHSNs is important as it gives researchers and managers insight into the nature and mechanisms of network functionality. The 5-step process undertaken to compare actor contribution patterns can be replicated in networks managed by other organizations, and we conjecture that patterns observed in this study could be found in other DHSNs. Future research should analyze network growth over time and examine the characteristics and survival rates of superusers.
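
As an illustration of the kind of comparison described (not the authors' exact 5-step procedure), a power trend line can be fitted to the rank-ordered post frequencies on log-log axes, with R² and a Spearman correlation summarising the fit; the function name and the synthetic Pareto data below are assumptions.

```python
import numpy as np
from scipy import stats

def power_law_fit(post_counts):
    """Rank actors by post count and fit a power trend line
    (freq ~ c * rank^b) via linear regression on log-log axes."""
    freq = np.sort(np.asarray(post_counts, dtype=float))[::-1]  # descending post counts
    rank = np.arange(1, len(freq) + 1)
    slope, intercept, r, _, _ = stats.linregress(np.log(rank), np.log(freq))
    rho, p = stats.spearmanr(rank, freq)
    return {"exponent": slope, "r_squared": r ** 2, "spearman_rho": rho, "p": p}

# Illustrative usage on synthetic, heavily right-skewed data.
rng = np.random.default_rng(1)
counts = np.round(rng.pareto(1.5, size=500) + 1)
print(power_law_fit(counts))
```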

Relevance:

30.00%

Publisher:

Abstract:

Deakin University, along with the CRC for Coastal Zone, Estuary and Waterway Management, the Glenelg Hopkins CMA and the Marine & Coastal Community Network, has formed a partnership to map the benthic habitats at 14 sites across approximately 5% of Victorian State waters. The project is funded through the Federal Government by the Natural Heritage Trust and brings together expertise from universities, government agencies and private enterprise. We will be using hydro-acoustic sonar technologies, towed video cameras and remotely operated vehicles to collect information on substrate types and bathymetry, from which habitat maps will be derived. The coastal fringe of Victoria encompasses rich and diverse ecosystems which support a range of human uses, including commercial and recreational fisheries, whale watching, navigation, aquaculture and gas development. The Deakin-led initiative will map from the 10-metre contour (safe ship passage) to the three-nautical-mile mark for selected regions and will provide a geospatial framework for managing and gaining a better understanding of the near-shore marine environment. Research products will be used for management, educational and research purposes over the coming years.

Relevance:

30.00%

Publisher:

Abstract:

Mismatches in boundaries between natural ecosystems and land governance units often complicate an ecosystem approach to management and conservation. For example, information used to guide management, such as vegetation maps, may not be available or consistent across entire ecosystems. This study was undertaken within a single biogeographic region (the Murray Mallee) spanning three Australian states. Existing vegetation maps could not be used as vegetation classifications differed between states. Our aim was to describe and map ‘tree mallee’ vegetation consistently across a 104,000 km² area of this region. Hierarchical cluster analyses, incorporating floristic data from 713 sites, were employed to identify distinct vegetation types. Neural network classification models were used to map these vegetation types across the region, with additional data from 634 validation sites providing a measure of map accuracy. Four distinct vegetation types were recognised: Triodia Mallee, Heathy Mallee, Chenopod Mallee and Shrubby Mallee. Neural network models predicted the occurrence of three of them with 79% accuracy. Validation results showed that map accuracy was 67% (kappa = 0.42) when using independent data. The framework employed provides a simple approach to describing and mapping vegetation consistently across broad spatial extents. Specific outcomes include: (1) a system of vegetation classification suitable for use across this biogeographic region; (2) a consistent vegetation map to inform land-use planning and biodiversity management at local and regional scales; and (3) a quantification of map accuracy using independent data. This approach is applicable to other regions facing similar challenges associated with integrating vegetation data across jurisdictional boundaries.
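
The describe-then-map workflow (hierarchical clustering of floristic data followed by a neural-network classifier validated on independent sites) could be sketched roughly as below. The synthetic arrays, Ward linkage, predictor set and `MLPClassifier` settings are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(2)

# Floristic data: species abundances at surveyed sites (synthetic stand-in).
floristics = rng.random((713, 40))
# Hierarchical cluster analysis -> a small number of vegetation types.
tree = linkage(floristics, method="ward")
veg_type = fcluster(tree, t=4, criterion="maxclust")

# Environmental/spectral predictors used to map the types spatially.
predictors = rng.random((713, 8))
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(predictors, veg_type)

# Independent validation sites provide the map-accuracy estimate.
val_X, val_y = rng.random((634, 8)), rng.integers(1, 5, size=634)
pred = net.predict(val_X)
print(accuracy_score(val_y, pred), cohen_kappa_score(val_y, pred))
```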

Relevance:

30.00%

Publisher:

Abstract:

Network traffic classification is an essential component of network management and security systems. To address the limitations of traditional port-based and payload-based methods, recent studies have focused on alternative approaches. One promising direction is applying machine learning techniques to classify traffic flows based on packet- and flow-level statistics. In particular, previous papers have illustrated that clustering can achieve high accuracy and discover unknown application classes. In this work, we present a novel semi-supervised learning method using constrained clustering algorithms. The motivation is that in the network domain a lot of background information is available in addition to the data instances themselves. For example, we might know that flows ƒ1 and ƒ2 are using the same application protocol because they are visiting the same host address at the same port simultaneously. In this case, ƒ1 and ƒ2 should ideally be grouped into the same cluster. Therefore, we describe these correlations in the form of pair-wise must-link constraints and incorporate them in the process of clustering. We have applied three constrained variants of the K-Means algorithm, which perform hard or soft constraint satisfaction and metric learning from constraints. A number of real-world traffic traces have been used to show the availability of constraints and to test the proposed approach. The experimental results indicate that by incorporating constraints in the course of clustering, the overall accuracy and cluster purity can be significantly improved.
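
A minimal sketch of hard must-link satisfaction in K-Means is shown below: must-linked flows are merged into groups (via union-find) and each group is assigned to a cluster as a unit. This only illustrates the spirit of one of the three constrained variants; the function name, the toy flow features and the absence of cannot-link or metric-learning handling are simplifying assumptions.

```python
import numpy as np

def must_link_kmeans(X, must_links, k, n_iter=50, seed=0):
    """K-Means variant with hard must-link satisfaction: flows joined by a
    must-link constraint are merged into a group and the whole group is
    always assigned to the same cluster (no cannot-link handling here)."""
    n = len(X)
    # Union-find to merge must-linked flows into groups.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a, b in must_links:
        parent[find(a)] = find(b)
    group = np.array([find(i) for i in range(n)])
    group_ids = np.unique(group)
    group_means = np.array([X[group == g].mean(axis=0) for g in group_ids])

    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(n, k, replace=False)]
    for _ in range(n_iter):
        # Assign each constraint group (not each flow) to its nearest centroid.
        d = ((group_means[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        g_assign = d.argmin(axis=1)
        labels = np.empty(n, dtype=int)
        for g, a in zip(group_ids, g_assign):
            labels[group == g] = a
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = X[labels == c].mean(axis=0)
    return labels

# Toy flows described by statistics; flows 0 and 1 must share a cluster.
rng = np.random.default_rng(3)
flows = rng.random((100, 5))
print(must_link_kmeans(flows, must_links=[(0, 1)], k=3))
```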

Relevance:

30.00%

Publisher:

Abstract:

To build service-oriented applications in a wireless sensor network (WSN), workflows can be used to compose a set of atomic services and execute the corresponding pre-designed processes. In general, WSN applications rely closely on sensor data, which are usually inaccurate or even incomplete in the resource-constrained WSN. Erroneous sensor data will then affect the execution of atomic services and, in turn, the workflows, which form an important part of the bottom-up dynamics of WSN applications. To alleviate this issue, it is necessary to manage workflows hierarchically. However, hierarchical workflow management remains an open and challenging problem. In this paper, by adopting the Bloom filter as an effective connection between the sensor node layer and the upper application layer, a hierarchical workflow management approach is proposed to ensure the QoS of workflow-based WSN applications. A case study and experimental evaluations demonstrate the capability of the proposed approach.
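
For readers unfamiliar with the data structure, a minimal Bloom filter sketch is given below; the class name, sizing, hash construction and the idea of marking suspect readings are illustrative assumptions, not the paper's specific design.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: a compact, lossy set-membership structure a
    sensor node could use to summarise which data items (e.g., suspect
    readings) the upper workflow layer should be aware of."""
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, bytearray(size)

    def _positions(self, item):
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

# Sensor layer marks a suspect reading; application layer checks before running a service.
bf = BloomFilter()
bf.add("node17:temperature")
print("node17:temperature" in bf, "node03:humidity" in bf)
```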

Relevance:

30.00%

Publisher:

Abstract:

Recently, effective connectivity studies have gained significant attention in the neuroscience community, as electroencephalography (EEG) data with high time resolution can give us a wider understanding of the information flow within the brain. Among the tools used in effective connectivity analysis, Granger Causality (GC) has found a prominent place. GC analysis based on strictly causal multivariate autoregressive (MVAR) models does not account for instantaneous interactions among the sources. If instantaneous interactions are present, GC based on a strictly causal MVAR model will lead to erroneous conclusions about the underlying information flow. Thus, the work presented in this paper applies an extended MVAR (eMVAR) model that accounts for zero-lag interactions. We propose a constrained adaptive Kalman filter (CAKF) approach for eMVAR model identification and demonstrate that this approach performs better than adaptive estimation based on short-time windowing when applied to information flow analysis.
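
As a simplified illustration of adaptive MVAR identification with a Kalman filter (strictly causal, without the zero-lag terms or the constraints of the proposed CAKF), the coefficients can be tracked as a random-walk state; the dimensions, noise parameters and toy data below are assumptions.

```python
import numpy as np

def adaptive_mvar_kalman(y, p=2, q=1e-4, r=1e-2):
    """Track time-varying MVAR coefficients with a Kalman filter, treating the
    vectorised coefficient matrices as a random-walk state. This sketches the
    general adaptive-estimation idea only; the paper's constrained filter for
    the extended (zero-lag) MVAR model adds instantaneous terms and constraints."""
    n_t, m = y.shape
    d = m * m * p                          # number of coefficients
    x = np.zeros(d)                        # state: flattened [A_1 ... A_p]
    P = np.eye(d)
    coeffs = np.zeros((n_t, d))
    for t in range(p, n_t):
        # Regressor: the p most recent samples, shared across output channels.
        h = y[t - p:t][::-1].ravel()                  # shape (m*p,)
        H = np.kron(np.eye(m), h)                     # shape (m, d)
        P = P + q * np.eye(d)                         # random-walk prediction
        S = H @ P @ H.T + r * np.eye(m)
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (y[t] - H @ x)
        P = (np.eye(d) - K @ H) @ P
        coeffs[t] = x
    return coeffs

rng = np.random.default_rng(4)
signal = rng.standard_normal((500, 3))    # 3-channel toy EEG-like data
print(adaptive_mvar_kalman(signal).shape)
```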

Relevance:

30.00%

Publisher:

Abstract:

Commuting to work is one of the most important and regular routines of urban transportation. From a geographic perspective, the length of people's commute is influenced, to some degree, by the spatial separation of their home and workplace and by the transport infrastructure. The rise of car ownership in Australia has been accompanied by a considerable decrease in public transport use. Increased personal mobility has fuelled the trend of decentralised housing development, mostly without clear planning for local employment or alternative means of transportation. As a result, the urban pattern of regional Australia is formed by a complex network of a multitude of small towns, scattered over relatively large areas, which are strongly dependent on, and polarised by, a few medium and large cities. Such a hierarchical and dispersed geographical structure implies significant carbon dioxide emissions from transportation. The transport sector accounts for 14% of Australia's net greenhouse gas emissions, and without further policy action these emissions are projected to continue to increase. The aim of this paper is to demonstrate the importance of incorporating urban climate understanding and knowledge into urban planning processes in order to develop cities that are more sustainable. A GIS-based gravity model is employed to examine the travel patterns related to hierarchical and geographical urban region networks, and the derived total carbon emissions, using the Greater Geelong region as a case study. The new challenges presented by climate change bring with them opportunities. In order to fully reach the very challenging targets of carbon reduction in Australia, an integrated and strategic vision for urban and regional planning is necessary.
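
A generic gravity-model sketch of the kind of calculation involved (not the paper's GIS-based model or its calibrated parameters) is shown below; the scaling constant, distance exponent, per-kilometre emission factor and the toy three-town data are assumptions.

```python
import numpy as np

def gravity_flows(population, employment, distance_km, beta=2.0, k=1e-6):
    """Doubly-unconstrained gravity model: commuter flow from town i to centre j
    proportional to origin population and destination jobs, decaying with
    distance^beta; k is a scaling constant that would be calibrated to data."""
    T = k * np.outer(population, employment) / np.power(distance_km, beta)
    np.fill_diagonal(T, 0.0)               # ignore within-town trips in this sketch
    return T

def commute_emissions(flows, distance_km, kg_co2_per_km=0.25):
    """Total daily CO2 (kg) if every modelled trip were made by car."""
    return float(np.sum(flows * distance_km) * kg_co2_per_km)

# Toy three-town example (numbers are illustrative, not Geelong data).
pop = np.array([25000, 8000, 3000])
jobs = np.array([40000, 2000, 500])
dist = np.array([[1, 30, 55], [30, 1, 40], [55, 40, 1]], float)
flows = gravity_flows(pop, jobs, dist)
print(commute_emissions(flows, dist))
```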

Relevance:

30.00%

Publisher:

Abstract:

Electrical power systems are evolving from today's centralized bulk systems to more decentralized systems. The penetration of renewable energies, such as wind and solar power, significantly increases the level of uncertainty in power systems. Accurate load forecasting becomes more complex, yet more important, for the management of power systems. Traditional methods for generating point forecasts of load demands cannot properly handle uncertainties in system operations. To quantify potential uncertainties associated with forecasts, this paper implements a neural network (NN)-based method for the construction of prediction intervals (PIs). A newly introduced method, called lower upper bound estimation (LUBE), is applied and extended to develop PIs using NN models. A new problem formulation is proposed, which translates the primary multiobjective problem into a constrained single-objective problem. Compared with the cost function, this new formulation is closer to the primary problem and has fewer parameters. Particle swarm optimization (PSO) integrated with the mutation operator is used to solve the problem. Electrical demands from Singapore and New South Wales (Australia), as well as wind power generation from Capital Wind Farm, are used to validate the PSO-based LUBE method. Comparative results show that the proposed method can construct higher-quality PIs for load and wind power generation forecasts in a short time.
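
A minimal sketch of the LUBE-style network structure is shown below: a single hidden layer and two outputs interpreted as the lower and upper bounds of the PI, parameterised by a flat weight vector of the kind a PSO search would tune. The layer size, activation and weight-packing scheme are illustrative assumptions.

```python
import numpy as np

def lube_forward(x, w, n_in, n_hidden):
    """Forward pass of a LUBE-style network: one hidden layer, two outputs
    interpreted as the lower and upper bounds of the prediction interval.
    The flat weight vector w is what a PSO search would optimise."""
    i = 0
    W1 = w[i:i + n_in * n_hidden].reshape(n_hidden, n_in); i += n_in * n_hidden
    b1 = w[i:i + n_hidden]; i += n_hidden
    W2 = w[i:i + 2 * n_hidden].reshape(2, n_hidden); i += 2 * n_hidden
    b2 = w[i:i + 2]
    h = np.tanh(W1 @ x + b1)
    lower, upper = W2 @ h + b2
    return min(lower, upper), max(lower, upper)   # keep bounds ordered

n_in, n_hidden = 6, 8
n_w = n_in * n_hidden + n_hidden + 2 * n_hidden + 2
rng = np.random.default_rng(5)
print(lube_forward(rng.random(n_in), rng.standard_normal(n_w), n_in, n_hidden))
```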

Relevance:

30.00%

Publisher:

Abstract:

Short-term load forecasting (STLF) is of great importance for the control and scheduling of electrical power systems. The uncertainty of power systems increases due to the random nature of climate and the penetration of renewable energies such as wind and solar power. Traditional methods for generating point forecasts of load demands cannot properly handle uncertainties in datasets. To quantify these potential uncertainties associated with forecasts, this paper implements a neural network (NN)-based method for the construction of prediction intervals (PIs). A newly proposed method, called lower upper bound estimation (LUBE), is applied to develop PIs using NN models. The primary multi-objective problem is first transformed into a constrained single-objective problem. This new problem formulation is closer to the original problem and has fewer parameters than the cost function. Particle swarm optimization (PSO) integrated with the mutation operator is used to solve the problem. Two case studies based on historical load datasets from Singapore and New South Wales (Australia) are used to validate the PSO-based LUBE method. The results demonstrate that the proposed method can construct high-quality PIs for load forecasting applications.
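
One common way to write such a constrained single-objective formulation (generic notation, not necessarily this paper's exact symbols) is:

```latex
\min_{\mathbf{w}}\ \mathrm{PINAW}(\mathbf{w}) = \frac{1}{nR}\sum_{i=1}^{n}\bigl(U_i(\mathbf{w})-L_i(\mathbf{w})\bigr)
\qquad \text{s.t.} \qquad
\mathrm{PICP}(\mathbf{w}) = \frac{1}{n}\sum_{i=1}^{n}\mathbf{1}\{L_i(\mathbf{w}) \le y_i \le U_i(\mathbf{w})\} \ \ge\ 1-\alpha
```

where w denotes the NN weights, L_i and U_i the interval bounds for sample i, n the number of samples, R the range of the target values (so the width is normalised), and 1−α the nominal confidence level.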

Relevance:

30.00%

Publisher:

Abstract:

The complexity and level of uncertainty present in the operation of power systems have grown significantly due to the penetration of renewable resources. These complexities warrant the need for advanced methods for load forecasting and for quantifying the uncertainties associated with forecasts. The objective of this study is to develop a framework for probabilistic forecasting of electricity load demands. The proposed probabilistic framework allows the analyst to construct PIs (prediction intervals) for uncertainty quantification. A newly introduced method, called LUBE (lower upper bound estimation), is applied and extended to develop PIs using NN (neural network) models. The primary problem of constructing intervals is first formulated as a constrained single-objective problem. The sharpness of PIs is treated as the key objective and their calibration is considered as the constraint. PSO (particle swarm optimization) enhanced by the mutation operator is then used to optimally tune NN parameters subject to constraints set on the quality of PIs. Historical load datasets from Singapore, Ottawa (Canada) and Texas (USA) are used to examine the performance of the proposed PSO-based LUBE method. According to the obtained results, the proposed probabilistic forecasting method generates well-calibrated and informative PIs. Furthermore, comparative results demonstrate that the proposed PI construction method greatly outperforms three widely used benchmark methods.
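
The two PI qualities mentioned, calibration (coverage) and sharpness (width), are commonly scored as PICP and PINAW; a small sketch follows, with a penalised fitness of the kind a PSO search could minimise. The penalty weight and the synthetic data are assumptions.

```python
import numpy as np

def pi_quality(lower, upper, y, alpha=0.1, penalty=50.0):
    """Score a set of prediction intervals: coverage (PICP), normalised average
    width (PINAW), and a penalised fitness that a PSO search could minimise
    when the coverage constraint PICP >= 1 - alpha is violated."""
    lower, upper, y = map(np.asarray, (lower, upper, y))
    picp = np.mean((y >= lower) & (y <= upper))
    pinaw = np.mean(upper - lower) / (y.max() - y.min())
    fitness = pinaw + penalty * max(0.0, (1 - alpha) - picp)   # constraint penalty
    return picp, pinaw, fitness

# Illustrative check on synthetic load data.
rng = np.random.default_rng(6)
y = 1000 + 100 * rng.standard_normal(200)
lo = y - 180 + 30 * rng.standard_normal(200)
hi = y + 180 + 30 * rng.standard_normal(200)
print(pi_quality(lo, hi, y))
```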

Relevance:

30.00%

Publisher:

Abstract:

In the early 2000s, Information Systems researchers in Australia began to emphasise socio-technical approaches to the adoption of technological innovations. The ‘essentialist' approaches to adoption (for example, Innovation Diffusion or TAM) suggest that an essence is largely responsible for the rate of adoption (Tatnall, 2011) or that a newly introduced technology may spark innovation. The socio-technical factors in implementing an innovation are often overlooked by researchers and hospitals. Innovation Translation is an approach that holds that any innovation needs to be customised and translated into context before it can be adopted. Equally, Actor-Network Theory (ANT) is an approach that embraces the differences in technical and human factors and socio-professional aspects in a non-deterministic manner. The research reported in this paper is an attempt to combine the two approaches in an effective manner, to visualise the socio-technical factors in RFID technology adoption in an Australian hospital. The investigation demonstrates RFID technology translation in an Australian hospital using a case approach (Yin, 2009). Data were collected through focus groups and interviews and analysed with document analysis and concept mapping techniques. The data were then reconstructed in a ‘movie script' format, with Acts and Scenes funnelled into an ANT-informed abstraction at the end of each Act. The information visualisation at the end of each Act, using an ANT-informed lens, reveals the renegotiation and improvement of network relationships between the participants involved, including people (nurses, patient care orderlies, management staff) and non-human participants such as equipment and technology. The paper addresses current gaps in the literature regarding socio-technical approaches to technology adoption within the Australian healthcare context, which is transitioning from the non-integrated, nearly technophobic hospitals of the last decade to a tech-savvy, integrated era. More importantly, the ANT visualisation addresses one of the criticisms of ANT, namely its insufficiency in explaining how relationships between participants form and change over the course of events in relationship networks (Greenhalgh & Stones, 2010).

Relevance:

30.00%

Publisher:

Abstract:

Statistics-based Internet traffic classification using machine learning techniques has attracted extensive research interest lately, because of the increasing ineffectiveness of traditional port-based and payload-based approaches. In particular, unsupervised learning, that is, traffic clustering, is very important in real-life applications, where labeled training data are difficult to obtain and new patterns keep emerging. Although previous studies have applied some classic clustering algorithms such as K-Means and EM for the task, the quality of resultant traffic clusters was far from satisfactory. In order to improve the accuracy of traffic clustering, we propose a constrained clustering scheme that makes decisions with consideration of some background information in addition to the observed traffic statistics. Specifically, we make use of equivalence set constraints indicating that particular sets of flows are using the same application layer protocols, which can be efficiently inferred from packet headers according to the background knowledge of TCP/IP networking. We model the observed data and constraints using Gaussian mixture density and adapt an approximate algorithm for the maximum likelihood estimation of model parameters. Moreover, we study the effects of unsupervised feature discretization on traffic clustering by using a fundamental binning method. A number of real-world Internet traffic traces have been used in our evaluation, and the results show that the proposed approach not only improves the quality of traffic clusters in terms of overall accuracy and per-class metrics, but also speeds up the convergence.
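
A small sketch of how equivalence sets might be inferred from packet headers is given below: flows sharing a destination address and port are grouped, and the groups are expanded into pair-wise must-link constraints. The dictionary-based flow representation and field names are assumptions made for illustration.

```python
from collections import defaultdict
from itertools import combinations

def equivalence_sets(flows):
    """Infer must-link equivalence sets from flow headers: flows that contact
    the same destination address and port are assumed to carry the same
    application-layer protocol (background TCP/IP knowledge)."""
    groups = defaultdict(list)
    for idx, f in enumerate(flows):
        groups[(f["dst_ip"], f["dst_port"])].append(idx)
    return [g for g in groups.values() if len(g) > 1]

def must_link_pairs(eq_sets):
    """Expand equivalence sets into pair-wise must-link constraints."""
    return [pair for s in eq_sets for pair in combinations(s, 2)]

flows = [
    {"dst_ip": "10.0.0.5", "dst_port": 443},
    {"dst_ip": "10.0.0.5", "dst_port": 443},
    {"dst_ip": "10.0.0.9", "dst_port": 25},
]
sets_ = equivalence_sets(flows)
print(sets_, must_link_pairs(sets_))
```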

Relevance:

30.00%

Publisher:

Abstract:

Networks of marine protected areas (MPAs) are being adopted globally to protect ecosystems and supplement fisheries management. The state of California recently implemented a coast-wide network of MPAs, a statewide seafloor mapping program, and ecological characterizations of species and ecosystems targeted for protection by the network. The main goals of this study were to use these data to evaluate how well seafloor features, as proxies for habitats, are represented and replicated across an MPA network and how well ecological surveys representatively sampled fish habitats inside MPAs and adjacent reference sites. Seafloor data were classified into broad substrate categories (rock and sediment) and finer scale geomorphic classifications standard to marine classification schemes using surface analyses (slope, ruggedness, etc.) done on the digital elevation model derived from multibeam bathymetry data. These classifications were then used to evaluate the representation and replication of seafloor structure within the MPAs and across the ecological surveys. Both the broad substrate categories and the finer scale geomorphic features were proportionately represented for many of the classes, with deviations of 1-6% and 0-7%, respectively. Within MPAs, however, representation of seafloor features differed markedly from original estimates, with differences ranging up to 28%. Seafloor structure in the biological monitoring design had mismatches between sampling in the MPAs and their corresponding reference sites, and some seafloor structure classes were missed entirely. The geomorphic variables derived from multibeam bathymetry data for these analyses are known determinants of the distribution and abundance of marine species and of coastal marine biodiversity. Thus, analyses like those performed in this study can be a valuable initial method for evaluating and predicting the conservation value of MPAs across a regional network.
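
As a rough illustration of the surface analyses mentioned (not the study's GIS workflow), slope and a terrain ruggedness index can be derived from a bathymetric DEM with finite differences and neighbourhood statistics; the cell size, wrap-around edge handling and the rock/sediment thresholds below are assumptions.

```python
import numpy as np

def slope_and_ruggedness(dem, cell_size=2.0):
    """Derive two standard geomorphic surfaces from a bathymetric DEM:
    slope (degrees, from central differences) and a terrain ruggedness
    index (mean absolute elevation difference to the 8 neighbours)."""
    dzdx = (np.roll(dem, -1, axis=1) - np.roll(dem, 1, axis=1)) / (2 * cell_size)
    dzdy = (np.roll(dem, -1, axis=0) - np.roll(dem, 1, axis=0)) / (2 * cell_size)
    slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

    tri = np.zeros_like(dem)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di or dj:
                tri += np.abs(np.roll(np.roll(dem, di, axis=0), dj, axis=1) - dem)
    tri /= 8.0
    return slope, tri

# Classify cells into broad rock/sediment proxies with simple thresholds.
rng = np.random.default_rng(7)
bathy = -30 + rng.standard_normal((100, 100)).cumsum(axis=0)
slope, tri = slope_and_ruggedness(bathy)
is_rock = (slope > 5) | (tri > 0.5)       # illustrative thresholds only
print(is_rock.mean())
```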

Relevance:

30.00%

Publisher:

Abstract:

We discuss the development and performance of a low-power sensor node (hardware, software and algorithms) that autonomously controls the sampling interval of a suite of sensors based on local state estimates and future predictions of water flow. The problem is motivated by the need to accurately reconstruct abrupt state changes in urban watersheds and stormwater systems. Presently, the detection of these events is limited by the temporal resolution of sensor data. It is often infeasible, however, to increase measurement frequency due to energy and sampling constraints. This is particularly true for real-time water quality measurements, where sampling frequency is limited by reagent availability, sensor power consumption, and, in the case of automated samplers, the number of available sample containers. These constraints pose a significant barrier to the ubiquitous and cost effective instrumentation of large hydraulic and hydrologic systems. Each of our sensor nodes is equipped with a low-power microcontroller and a wireless module to take advantage of urban cellular coverage. The node persistently updates a local, embedded model of flow conditions while IP-connectivity permits each node to continually query public weather servers for hourly precipitation forecasts. The sampling frequency is then adjusted to increase the likelihood of capturing abrupt changes in a sensor signal, such as the rise in the hydrograph – an event that is often difficult to capture through traditional sampling techniques. Our architecture forms an embedded processing chain, leveraging local computational resources to assess uncertainty by analyzing data as it is collected. A network is presently being deployed in an urban watershed in Michigan and initial results indicate that the system accurately reconstructs signals of interest while significantly reducing energy consumption and the use of sampling resources. We also expand our analysis by discussing the role of this approach for the efficient real-time measurement of stormwater systems.
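
A toy sketch of the adaptive-sampling idea (not the deployed node's algorithm) is shown below: the next sampling interval shrinks as the forecast rain probability or the locally estimated flow rises. The urgency heuristic, bounds and constants are assumptions.

```python
import math

def next_sampling_interval(forecast_rain_prob, flow_rate_lps,
                           min_interval_s=60, max_interval_s=3600):
    """Pick the next sensor sampling interval: sample faster when the hourly
    precipitation forecast or the locally estimated flow suggests an abrupt
    change (a rising hydrograph) is likely, slower in dry, steady conditions."""
    urgency = max(forecast_rain_prob, min(flow_rate_lps / 100.0, 1.0))
    interval = max_interval_s * math.exp(-3.0 * urgency)   # exponential speed-up
    return int(max(min_interval_s, interval))

# Dry forecast, low flow -> sparse sampling; storm forecast -> dense sampling.
print(next_sampling_interval(0.05, 2.0), next_sampling_interval(0.9, 60.0))
```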