139 results for PACS: computer networks and techniques
Abstract:
Efficient and effective urban management systems for Ubiquitous Eco Cities require intelligent and integrated management mechanisms. This integration brings together economic, socio-cultural and urban development with a well-orchestrated, transparent and open decision-making mechanism and the necessary infrastructure and technologies. In Ubiquitous Eco Cities, telecommunication technologies play an important role in monitoring and managing activities over wired, wireless or fibre-optic networks. In particular, technology convergence creates new ways in which information and telecommunication technologies are used, and has formed the backbone of urban management systems. The 21st century is an era of information convergence, in which people are able to access a variety of services, including internet and location-based services, through multi-functional devices such as mobile phones, and this convergence provides opportunities for the management of Ubiquitous Eco Cities. This research paper discusses recent developments in telecommunication networks and trends in convergence technologies, their implications for the management of Ubiquitous Eco Cities, and how this technological shift is likely to improve the quality of life and place of residents, workers and visitors. The paper also introduces recent approaches to urban management, such as intelligent urban management systems, that are suitable for Ubiquitous Eco Cities.
Abstract:
The ability to forecast machinery failure is vital to reducing maintenance costs, operation downtime and safety hazards. Recent advances in condition monitoring technologies have given rise to a number of prognostic models for forecasting machinery health based on condition data. Although these models have aided the advancement of the discipline, they have made only a limited contribution to developing an effective machinery health prognostic system. The literature review indicates that there is not yet a prognostic model that directly models and fully utilises suspended condition histories (which are very common in practice, since organisations rarely allow their assets to run to failure); that effectively integrates population characteristics into prognostics for longer-range prediction in a probabilistic sense; that deduces the non-linear relationship between measured condition data and actual asset health; and that involves minimal assumptions and requirements. This work presents a novel approach to addressing these challenges. The proposed model consists of a feed-forward neural network whose training targets are asset survival probabilities estimated using a variation of the Kaplan-Meier estimator and a degradation-based failure probability density estimator. The adapted Kaplan-Meier estimator is able to model the actual survival status of individual failed units and estimate the survival probability of individual suspended units. The degradation-based failure probability density estimator, on the other hand, extracts population characteristics and computes conditional reliability from available condition histories instead of from reliability data. The estimated survival probability and the relevant condition histories are presented, respectively, as the "training target" and "training input" to the neural network. The trained network can estimate the future survival curve of a unit when a series of condition indices is input.
Although the proposed concept may be applied to the prognosis of various machine components, rolling element bearings were chosen as the research object because rolling element bearing failure is one of the foremost causes of machinery breakdown. Computer-simulated and industry case study data were used to compare the prognostic performance of the proposed model with four control models, namely: two feed-forward neural networks with the same training function and structure as the proposed model but neglecting suspended histories; a time series prediction recurrent neural network; and a traditional Weibull distribution model. The results support the assertion that the proposed model performs better than the other four models and that it produces adaptive prediction outputs with a useful representation of survival probabilities. This work presents a compelling concept for non-parametric data-driven prognosis, and for utilising available asset condition information more fully and accurately. It demonstrates that machinery health can indeed be forecast. The proposed prognostic technique, together with ongoing advances in sensors and data-fusion techniques, and increasingly comprehensive databases of asset condition data, holds the promise of increased asset availability, maintenance cost effectiveness, operational safety and, ultimately, organisational competitiveness.
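The training targets described above come from an adapted Kaplan-Meier estimator. As a hedged illustration (the adaptation itself is not detailed in the abstract), the standard Kaplan-Meier estimate already accounts for suspended (censored) histories through the at-risk counts; the toy durations below are illustrative:

```python
import numpy as np

def kaplan_meier(durations, observed):
    """Standard Kaplan-Meier survival estimate.

    durations: time to failure (observed=1) or suspension (observed=0).
    Suspended units contribute to the at-risk count up to their
    suspension time but are never counted as failures.
    """
    durations = np.asarray(durations, dtype=float)
    observed = np.asarray(observed, dtype=bool)
    times = np.unique(durations[observed])          # distinct failure times
    surv, s = [], 1.0
    for t in times:
        at_risk = np.sum(durations >= t)            # units still under observation
        deaths = np.sum((durations == t) & observed)
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return times, np.array(surv)

# Toy condition-monitoring histories: three failures, two suspensions.
times, surv = kaplan_meier([5, 7, 7, 9, 12], [1, 1, 0, 1, 0])
# surv holds S(t) at each failure time; probabilities of this kind would
# serve as the "training targets" for the feed-forward network.
```

Note that the suspended unit at time 7 lowers no survival step itself, yet inflates the at-risk count at that time, which is exactly the information a run-to-failure-only model would discard.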
Abstract:
This position paper examines the development of a dedicated service aggregator role in business networks. We predict that these intermediaries will soon emerge in service ecosystems and add value through the application of dedicated domain knowledge in the process of creating new, innovative services or service bundles based on the aggregation, composition, integration or orchestration of existing services procured from different service providers in the service ecosystem. We discuss the general foundations of service aggregators and present Fourth-Party Logistics Providers as a real-world example of emerging business service aggregators. We also point out demands for future research, e.g. into governance models, risk management tools, service portfolio management approaches and service bundling techniques, to better understand the core determinants of the competitiveness and success of service aggregators.
Abstract:
There are many interactive media systems, including computer games and media art works, in which it is desirable for music to vary in response to changes in the environment. In this paper we will outline a range of algorithmic techniques that enable music to adapt to such changes, taking into account the need for the music to vary in its expressiveness or mood while remaining coherent and recognisable. We will discuss the approaches which we have arrived at after experience in a range of adaptive music systems over recent years, and draw upon these experiences to inform discussion of relevant considerations and to illustrate the techniques and their effect.
Abstract:
Cultural objects are increasingly generated and stored in digital form, yet effective methods for their indexing and retrieval remain an important area of research. The main problem arises from the disconnection between the content-based indexing approach used by computer scientists and the description-based approach used by information scientists. There is also a lack of representational schemes that allow the alignment of semantics and context with keywords and low-level features that can be automatically extracted from the content of these cultural objects. This paper presents an integrated approach to address these problems, taking advantage of both computer science and information science approaches. We first discuss the requirements from a number of perspectives: users, content providers, content managers and technical systems. We then present an overview of our system architecture and describe the techniques underlying the major components of the system. These include: automatic object category detection; user-driven tagging; metadata transformation and augmentation; and an expression language for digital cultural objects. In addition, we discuss our experience in testing and evaluating some existing collections, analyse the difficulties encountered and propose ways to address them.
Abstract:
As a consequence of the increased incidence of collaborative arrangements between firms, the competitive environment characterising many industries has undergone profound change. It is suggested that rivalry is not necessarily enacted by individual firms according to the traditional mechanisms of direct confrontation in factor and product markets, but rather through collaborative orchestration between a number of participants or network members. Strategic networks are recognised as sets of firms within an industry that exhibit denser strategic linkages among themselves than with other firms within the same industry. On this basis, strategic networks are determined according to evidence of strategic alliances between the firms comprising the industry; a single strategic network thus represents a group of firms closely linked by collaborative ties. Arguably, the collective outcome of these strategic relationships engineered between firms suggests that the collaborative benefits attributed to interorganisational relationships require closer examination with respect to their propensity to influence rivalry in intraindustry environments. Derived largely from the social sciences, network theory allows for the micro and macro examination of the opportunities and constraints inherent in the structure of relationships in strategic networks, establishing a relational approach upon which the conduct and performance of firms can be more fully understood. Research to date has yet to empirically investigate the relationship between strategic networks and rivalry. The limited research completed using a network rationale to investigate competitive patterns in contemporary industry environments has been characterised by a failure to directly measure rivalry. Further, this prior research has typically embedded investigation in industry settings dominated by technological or regulatory imperatives, such as the microprocessor and airline industries.
These industries, due to the presence of such imperatives, are arguably more inclined to support the realisation of network rivalry, through subscription to prescribed technological standards (e.g., the microprocessor industry) or by being bound by regulatory constraints dictating operation within particular market segments (the airline industry). To counter these weaknesses, the proposition guiding this research – Are patterns of rivalry predicted by strategic network membership? – is embedded in the United States Light Vehicles Industry, an industry not dominated by technological or regulatory imperatives. Further, rivalry is directly measured and utilised in the research, distinguishing this investigation from prior research efforts. The timeframe of investigation is 1993–1999, with all research data derived from secondary sources. Strategic networks were defined within the United States Light Vehicles Industry based on evidence of horizontal strategic relationships between the firms comprising the industry. The measure of rivalry used to directly ascertain the competitive patterns of industry participants was derived from the traditional Herfindahl Index, modified to account for patterns of rivalry observed at the market segment level. Statistical analyses of the strategic network and rivalry constructs found little evidence to support the contention of network rivalry; indeed, greater levels of rivalry were observed between firms comprising the same strategic network than between firms participating in opposing network structures. Based on these results, patterns of rivalry evidenced in the United States Light Vehicles Industry over the period 1993–1999 were not found to be predicted by strategic network membership. The findings generated by this research are in contrast to current theorising in the strategic network and rivalry realm. In this respect, these findings are surprising.
The relevance of industry type, in conjunction with the prevailing network methodology, provides the basis upon which these findings are contemplated. Overall, this study raises important questions about the relevance of the network rivalry rationale, establishing a fruitful avenue for further research.
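The traditional Herfindahl Index the rivalry measure starts from is simply the sum of squared market shares. The abstract does not specify the segment-level modification, so the following is a plain sketch of the classical index with illustrative shares:

```python
def herfindahl(shares):
    """Traditional Herfindahl index: the sum of squared market shares
    (shares expressed as fractions summing to 1). Higher values mean
    greater concentration and, classically, weaker rivalry."""
    return sum(s * s for s in shares)

# Illustrative four-firm market: shares of 40%, 30%, 20% and 10%.
h = herfindahl([0.4, 0.3, 0.2, 0.1])   # 0.16 + 0.09 + 0.04 + 0.01 = 0.30
```

A segment-level variant might compute this index per market segment and aggregate the results, but that weighting scheme is an assumption here; the study's actual modification is not given in the abstract.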
Abstract:
Effective management of groundwater requires stakeholders to have a realistic conceptual understanding of groundwater systems and hydrological processes. However, groundwater data can be complex, confusing and often difficult for people to comprehend. A powerful way to communicate an understanding of groundwater processes, complex subsurface geology and their relationships is to use visualisation techniques to create 3D conceptual groundwater models. In addition, the ability to animate, interrogate and interact with 3D models can encourage a higher level of understanding than static images alone. While an increasing number of software tools are available for developing and visualising groundwater conceptual models, these packages are often very expensive and, owing to their complexity, not readily accessible to most people. The Groundwater Visualisation System (GVS) is a software framework that can be used to develop groundwater visualisation tools aimed specifically at non-technical computer users and those who are not groundwater domain experts. A primary aim of GVS is to provide management support for agencies and to enhance community understanding.
Abstract:
Monitoring Internet traffic is critical to acquiring a good understanding of threats to computer and network security and to designing efficient computer security systems. Researchers and network administrators have applied several approaches to monitoring traffic for malicious content. These techniques include monitoring network components, aggregating IDS alerts, and monitoring unused IP address spaces. Another method for monitoring and analyzing malicious traffic, which has been widely tried and accepted, is the use of honeypots. Honeypots are very valuable security resources for gathering artefacts associated with a variety of Internet attack activities. As honeypots run no production services, any contact with them is considered potentially malicious or suspicious by definition. This unique characteristic of the honeypot reduces the amount of collected traffic and makes it a more valuable source of information than other existing techniques. Currently, there is insufficient research in the honeypot data analysis field. To date, most of the work on honeypots has been devoted to designing new honeypots or optimizing current ones. Approaches for analyzing data collected from honeypots, especially low-interaction honeypots, are presently immature: analysis techniques are manual and focus mainly on identifying existing attacks. This research addresses the need for more advanced techniques for analyzing Internet traffic data collected from low-interaction honeypots. We believe that characterizing honeypot traffic will improve the security of networks and, if the honeypot data is handled in time, give early warning of new vulnerabilities or outbreaks of new automated malicious code, such as worms.
The outcomes of this research include:
• identification of repeated use of attack tools and attack processes, by grouping activities that exhibit similar packet inter-arrival time distributions using the cliquing algorithm;
• application of principal component analysis to detect the structure of attackers' activities present in low-interaction honeypots and to visualize attackers' behaviors;
• detection of new attacks in low-interaction honeypot traffic through the use of the principal components' residual space and the square prediction error statistic;
• real-time detection of new attacks using recursive principal component analysis;
• a proof-of-concept implementation for honeypot traffic analysis and real-time monitoring.
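The residual-space detection idea above can be sketched in a few lines. This is a generic illustration on synthetic two-dimensional "traffic features", not the thesis's implementation: normal traffic is modelled by its leading principal components, and the square prediction error (SPE) measures how far a new observation falls outside that subspace.

```python
import numpy as np

def fit_pca(X, k):
    """Fit PCA on 'normal' traffic features; keep k principal components."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    P = Vt[:k].T                      # d x k loading matrix
    return mu, P

def spe(x, mu, P):
    """Square prediction error: squared norm of x's projection onto the
    residual subspace, i.e. everything the k components fail to explain."""
    xc = x - mu
    resid = xc - P @ (P.T @ xc)
    return float(resid @ resid)

# Synthetic example: normal traffic lies near a 1-D subspace; a new
# attack pattern deviates from it and yields a much larger SPE.
rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))
X = np.hstack([t, 2 * t]) + 0.01 * rng.normal(size=(200, 2))
mu, P = fit_pca(X, k=1)
normal_spe = spe(np.array([1.0, 2.0]), mu, P)    # on the normal subspace
attack_spe = spe(np.array([2.0, -1.0]), mu, P)   # off the normal subspace
```

In practice the SPE would be compared against a control limit, and the recursive PCA variant mentioned above would update `mu` and `P` as traffic arrives; both details are omitted in this sketch.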
Abstract:
This paper investigates a wireless sensor network deployment for monitoring water quality, e.g. salinity and the level of the underground water table, in a remote tropical area of northern Australia. Our goal is to collect real-time water quality measurements together with the amount of water being pumped out of the area, and to investigate the impacts of current irrigation practice on the environment, in particular underground water salinisation. This is a challenging task featuring wide geographic coverage (the mean transmission range between nodes is more than 800 metres), highly variable radio propagation, high end-to-end packet delivery rate requirements, and hostile deployment environments. We have designed, implemented and deployed a sensor network system that has been collecting water quality and flow measurements, e.g., water flow rate and water flow ticks, for over one month. The preliminary results show that sensor networks are a promising solution for deploying a sustainable irrigation system, e.g., maximising the amount of water pumped out of an area with minimum impact on water quality.
Abstract:
This paper presents research being conducted by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) with the aim of investigating the use of wireless sensor networks for automated livestock monitoring and control. It is difficult to achieve practical and reliable cattle monitoring with current conventional technologies due to challenges such as the large grazing areas of cattle, long data sampling periods, and constantly varying physical environments. Wireless sensor networks bring a new level of possibilities into this area, with the potential for greatly increased spatial and temporal resolution of measurement data. CSIRO has created a wireless sensor platform for animal behaviour monitoring with which we are able to observe and collect information about animals without significantly interfering with them. Based on such monitoring information, we can successfully identify each animal's behaviour and activities.
Abstract:
This paper describes new wireless sensor hardware developed for pastoral and environmental applications. Our early experiments with Mote hardware inspired us to develop our own devices with improved radio range, solar power capability, mechanical and electrical robustness, and unique combinations of sensors. Here we describe the design and evolution of a small family of devices: a radio/processor board, a soil moisture sensor interface, and a single-board multi-sensor unit for animal tracking experiments.
Abstract:
ElectricCOW is a network, animal behaviour and agent simulator designed to allow detailed simulation of an ad-hoc network model built from small mote-like devices called flecks. Detailed modelling of radio communications, cattle behaviour, and sensor and actuator networks allows a closed-loop environment in which the network can influence the behaviour of its mobile platforms.
Abstract:
Computer profiling is the automated forensic examination of a computer system in order to provide a human investigator with a characterisation of the activities that have taken place on that system. As part of this process, the logical components of the computer system – components such as users, files and applications – are enumerated, and the relationships between them discovered and reported. This information is enriched with traces of historical activity drawn from system logs and from evidence of events found in the computer file system. A potential problem with the use of such information is that some of it may be inconsistent and contradictory, thus compromising its value. This work examines the impact of temporal inconsistency in such information and discusses two types of temporal inconsistency that may arise – inconsistency arising from the normal errant behaviour of a computer system, and inconsistency arising from deliberate tampering by a suspect – together with techniques for dealing with inconsistencies of the latter kind. We examine the impact of deliberate tampering through experiments conducted with prototype computer profiling software. Based on the results of these experiments, we discuss techniques that can be employed in computer profiling to deal with such temporal inconsistencies.
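As a simple illustration of the kind of temporal inconsistency involved (this is an illustrative heuristic, not the prototype profiling software described above), a check for implausible file-timestamp orderings might look like:

```python
from datetime import datetime

def timestamp_inconsistencies(file_records):
    """Flag file timestamp relations that do not normally occur.

    Each record: (path, created, modified, accessed) as datetimes.
    A modification time earlier than the creation time is a common
    sign of timestamp tampering, although it can also arise from
    normal system behaviour (e.g. file copies), which is why the work
    above distinguishes the two kinds of inconsistency.
    """
    issues = []
    for path, created, modified, accessed in file_records:
        if modified < created:
            issues.append((path, "modified before created"))
        if accessed < created:
            issues.append((path, "accessed before created"))
    return issues

# Hypothetical evidence: one file with a backdated modification time.
records = [
    ("report.doc",
     datetime(2009, 3, 1, 10, 0),
     datetime(2009, 2, 1, 9, 0),    # modified a month before creation
     datetime(2009, 3, 2, 8, 0)),
]
```

Calling `timestamp_inconsistencies(records)` flags `report.doc`; a real profiling tool would cross-check such findings against log-derived event traces rather than rely on timestamps alone.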
Abstract:
Wireless Multi-media Sensor Networks (WMSNs) have become increasingly popular in recent years, driven in part by the increasing commoditization of small, low-cost CMOS sensors. As such, the challenge of automatically calibrating these types of camera nodes has become an important research problem, especially when large numbers of such devices are deployed. This paper presents a method for automatically calibrating a wireless camera node with the ability to rotate around one axis. The method involves capturing images as the camera is rotated and computing the homographies between the images. The camera parameters, including focal length, principal point and the angle and axis of rotation, can then be recovered from two or more homographies. The homography computation algorithm is designed to deal with the limited resources of the wireless sensor and to minimize energy consumption. In this paper, a modified RANdom SAmple Consensus (RANSAC) algorithm is proposed to increase the efficiency and reliability of the calibration procedure.
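The modified RANSAC algorithm itself is not reproduced in the abstract; the sketch below shows textbook RANSAC homography estimation via the direct linear transform (DLT), on assumed synthetic point correspondences, to illustrate the procedure being optimised:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform (DLT): estimate the 3x3 homography H
    mapping src points to dst points from >= 4 correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)          # null-space vector of the system
    return H / H[2, 2]

def reprojection_error(H, src, dst):
    """Per-point Euclidean distance between H*src and dst."""
    proj = np.hstack([src, np.ones((len(src), 1))]) @ H.T
    proj = proj[:, :2] / proj[:, 2:3]
    return np.linalg.norm(proj - dst, axis=1)

def ransac_homography(src, dst, iters=200, thresh=2.0, seed=0):
    """Plain RANSAC: fit H on random 4-point samples and keep the
    hypothesis with the most inliers. The paper's modification, tuned
    for the sensor node's limited resources, is not detailed here."""
    rng = np.random.default_rng(seed)
    best_H, best_count = None, 0
    for _ in range(iters):
        idx = rng.choice(len(src), size=4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        count = int(np.sum(reprojection_error(H, src, dst) < thresh))
        if count > best_count:
            best_H, best_count = H, count
    return best_H, best_count
```

For a rotating camera, `H` between two views takes the form K R K^-1, which is what allows the focal length, principal point and rotation parameters to be recovered from two or more such homographies.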
Abstract:
Wireless network technologies, such as IEEE 802.11 based wireless local area networks (WLANs), have been adopted in wireless networked control systems (WNCS) for real-time applications. Distributed real-time control requires the underlying networks to deliver (soft) real-time performance for real-time traffic. However, IEEE 802.11 networks are not designed for WNCS applications: they neither inherently provide quality-of-service (QoS) support, nor explicitly consider the characteristics of the real-time traffic on networked control systems (NCS), i.e., periodic round-trip traffic. Therefore, the adoption of 802.11 networks in real-time WNCSs poses challenging problems for network design and performance analysis. Theoretical methodologies are yet to be developed for computing the best achievable WNCS network performance under the constraints of real-time control requirements. Focusing on IEEE 802.11 distributed coordination function (DCF) based WNCSs, this paper analyses several important NCS network performance indices, such as throughput capacity, round-trip time and packet loss ratio, under the periodic round-trip traffic pattern, a unique feature of typical NCSs. Considering periodic round-trip traffic, an analytical model based on Markov chain theory is developed for deriving these performance indices under a critical real-time traffic condition, at which the real-time performance constraints are marginally satisfied. Case studies are also carried out to validate the theoretical development.
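The paper's Markov-chain analysis targets periodic round-trip traffic and is not reproduced here. As a related, hedged illustration of this family of models, the classic Bianchi saturation analysis of 802.11 DCF (an assumption for this sketch, not the paper's model) couples two equations that can be solved by damped fixed-point iteration:

```python
def dcf_fixed_point(n, W=32, m=5, iters=2000):
    """Solve the classic Bianchi saturation model of 802.11 DCF for n
    stations: tau is the per-slot transmit probability, p the
    conditional collision probability, W the minimum contention window
    and m the number of backoff stages. Damping avoids oscillation
    between the two coupled equations.
    NOTE: this assumes saturated one-way traffic; the paper's model
    instead considers periodic round-trip NCS traffic."""
    tau = 0.1
    p = 0.0
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)        # collision seen by a sender
        tau_next = (2.0 * (1.0 - 2.0 * p)) / (
            (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m)
        )
        tau = 0.5 * tau + 0.5 * tau_next        # damped update
    return tau, p
```

Once `tau` and `p` are known, saturation throughput, delay and loss follow from the slot-type probabilities; a model for the critical real-time traffic condition would replace the saturation assumption with the periodic round-trip arrival pattern.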