901 results for Networks partner techniques
Abstract:
The landscape of early childhood education and care is changing. Governments world-wide are assuming increasing authority in relation to child-rearing in the years before school entry, beyond the traditional role in assisting parents to do the best they can by their children. As part of a social agenda aimed at forming citizens well prepared to play an active part in a globalised knowledge economy, the idea of ‘early learning’ expresses the necessity of engaging caregivers right from the start of children’s lives. Nichols, Rowsell, Rainbird, and Nixon investigate this trend over three years, in two countries, and three contrasting regions, by setting themselves the task of tracing every service and agent offering resources under the banner of early learning. Far from a dry catalogue, the study involves in-depth ethnographic research in fascinating spaces such as a church-run centre for African refugee women and children, a state-of-the-art community library and an Australian country town. Included is an unprecedented inventory of an entire suburban mall. Richly visually documented, the study employs emerging methods such as Google-mapping to trace the travels of actual parents as they search for particular resources. Each chapter features a context investigated in this large, international study: the library, the mall, the clinic, and the church. The author team unravels new spaces and new networks at work in early childhood literacy and development.
Abstract:
The ability of bridge deterioration models to predict future condition provides significant advantages in improving the effectiveness of maintenance decisions. This paper proposes a novel model using Dynamic Bayesian Networks (DBNs) for predicting the condition of bridge elements. The proposed model improves prediction results by handling deterioration dependencies among different bridge elements, the lack of full inspection histories, and joint consideration of both maintenance actions and environmental effects. With Bayesian updating capability, different types of data and information can be utilised as inputs, and expert knowledge can serve as a starting point when data are insufficient. The proposed model establishes a flexible basis for bridge system deterioration modelling, so that other models and Bayesian approaches can be further developed on one platform. A steel bridge main girder was chosen to validate the proposed model.
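A minimal sketch of the kind of discrete-state Bayesian updating such a deterioration model performs is shown below. The condition states, transition matrices, observation model and inspection history are illustrative assumptions, not values taken from the paper.

    # Sketch: discrete-state deterioration prediction with Bayesian updating.
    # All numbers (states, transition/observation matrices) are illustrative.
    import numpy as np

    STATES = ["good", "fair", "poor", "severe"]        # hypothetical condition states

    # Yearly transition matrices: rows = current state, cols = next state.
    T_no_maintenance = np.array([
        [0.85, 0.13, 0.02, 0.00],
        [0.00, 0.80, 0.17, 0.03],
        [0.00, 0.00, 0.75, 0.25],
        [0.00, 0.00, 0.00, 1.00],
    ])
    T_maintenance = np.array([                          # maintenance shifts mass back
        [0.98, 0.02, 0.00, 0.00],
        [0.60, 0.38, 0.02, 0.00],
        [0.10, 0.60, 0.28, 0.02],
        [0.00, 0.20, 0.50, 0.30],
    ])

    # Inspection (observation) model: P(recorded state | true state).
    O = np.array([
        [0.90, 0.10, 0.00, 0.00],
        [0.08, 0.84, 0.08, 0.00],
        [0.00, 0.10, 0.82, 0.08],
        [0.00, 0.00, 0.15, 0.85],
    ])

    def predict(belief, maintained):
        """One-year prediction step of the forward (filtering) recursion."""
        T = T_maintenance if maintained else T_no_maintenance
        return belief @ T

    def update(belief, observed_state):
        """Bayesian update with an inspection record (index into STATES)."""
        posterior = belief * O[:, observed_state]
        return posterior / posterior.sum()

    # Expert-elicited prior for a girder, then a partial inspection history:
    # None means no inspection that year, so only the prediction step applies.
    belief = np.array([0.7, 0.25, 0.05, 0.0])
    history = [(False, None), (False, 1), (True, None), (False, 2)]
    for maintained, inspection in history:
        belief = predict(belief, maintained)
        if inspection is not None:
            belief = update(belief, inspection)
    print(dict(zip(STATES, belief.round(3))))

Missing inspection years simply propagate the prior forward, which is how the model copes with incomplete inspection histories.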
Abstract:
A Cooperative Collision Warning System (CCWS) is an active safety technology for road vehicles that can potentially reduce traffic accidents. It provides a driver with situational awareness and early warnings of any possible collisions through an on-board unit. CCWS is still under active research, and one of the important technical problems is safety message dissemination. Safety messages are disseminated in a high-speed mobile environment using wireless communication technology such as Dedicated Short Range Communication (DSRC). The wireless communication in CCWS has a limited bandwidth and can become unreliable when used inefficiently, particularly given the dynamic nature of road traffic conditions. Unreliable communication may significantly reduce the performance of CCWS in preventing collisions. There are two types of safety messages: Routine Safety Messages (RSMs) and Event Safety Messages (ESMs). An RSM contains the up-to-date state of a vehicle, and it must be disseminated repeatedly to its neighbouring vehicles. An ESM is a warning message that must be sent to all the endangered vehicles. Existing RSM and ESM dissemination schemes are inefficient, unscalable, and unable to give priority to the vehicles in the most danger. Thus, this study investigates more efficient and scalable RSM and ESM dissemination schemes that can make use of the context information generated from a particular traffic scenario. The study therefore tackles three technical research problems: vehicular traffic scenario modelling and context information generation, context-aware RSM dissemination, and context-aware ESM dissemination. The most relevant context information in CCWS is the information about possible collisions among vehicles given the current vehicular traffic situation. To generate this context information, this study investigates techniques to model interactions among multiple vehicles based on their up-to-date motion states obtained via RSMs. To date, there is no existing model that can represent interactions among multiple vehicles in a specific region and at a particular time. The major outcome from the first problem is a new interaction graph model that can be used to easily identify the endangered vehicles and their danger severity. By identifying the endangered vehicles, RSM and ESM dissemination can be optimised while improving safety at the same time. The new model enables the development of context-aware RSM and ESM dissemination schemes. To disseminate RSMs efficiently, this study investigates a context-aware dissemination scheme that can optimise the RSM dissemination rate to improve safety at various vehicle densities. The major outcome from the second problem is a context-aware RSM dissemination protocol. The context-aware protocol can adaptively adjust the dissemination rate based on an estimated channel load and the danger severity of vehicle interactions given by the interaction graph model. Unlike existing RSM dissemination schemes, the proposed adaptive scheme can reduce channel congestion and improve safety by prioritising vehicles that are most likely to crash with other vehicles. The proposed RSM protocol has been implemented and evaluated by simulation. The simulation results have shown that the proposed RSM protocol outperforms existing protocols in terms of efficiency, scalability and safety.
To disseminate ESMs efficiently, this study investigates a context-aware ESM dissemination scheme that can reduce unnecessary transmissions and deliver ESMs to endangered vehicles as fast as possible. The major outcome from the third problem is a context-aware ESM dissemination protocol that uses a multicast routing strategy. Existing ESM protocols use broadcast routing, which is inefficient because ESMs may be sent to a large number of vehicles in the area. Using multicast routing improves efficiency because ESMs are sent only to the endangered vehicles, which can be identified using the interaction graph model. The proposed ESM protocol has been implemented and evaluated by simulation. The simulation results have shown that the proposed ESM protocol is more effective than existing ESM protocols at preventing potential accidents. The context model and the RSM and ESM dissemination protocols can be implemented in any CCWS development to improve the communication and safety performance of CCWS. In effect, the outcomes contribute to the realisation of CCWS that will ultimately improve road safety and save lives.
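To make the idea concrete, the sketch below builds an interaction graph from vehicle motion states and derives a danger-weighted, congestion-aware RSM rate. The time-to-collision formula, thresholds and rate bounds are assumptions for illustration, not the parameters of the protocols described above.

    # Sketch: interaction graph from routine safety messages, plus an adaptive,
    # danger-weighted RSM rate. All thresholds and constants are illustrative.
    import itertools
    import math

    class Vehicle:
        def __init__(self, vid, x, y, vx, vy):
            self.vid, self.x, self.y, self.vx, self.vy = vid, x, y, vx, vy

    def time_to_collision(a, b):
        """Simplified planar TTC: time of closest approach, counted only if
        that closest approach is dangerously near."""
        rx, ry = b.x - a.x, b.y - a.y
        vx, vy = b.vx - a.vx, b.vy - a.vy
        speed2 = vx * vx + vy * vy
        if speed2 < 1e-9:
            return math.inf
        t = -(rx * vx + ry * vy) / speed2          # time of closest approach
        if t <= 0:
            return math.inf
        dx, dy = rx + vx * t, ry + vy * t
        return t if math.hypot(dx, dy) < 4.0 else math.inf   # 4 m danger radius

    def interaction_graph(vehicles, ttc_threshold=8.0):
        """Edges connect vehicle pairs whose TTC is below the threshold;
        the edge weight (danger severity) grows as TTC shrinks."""
        edges = {}
        for a, b in itertools.combinations(vehicles, 2):
            ttc = time_to_collision(a, b)
            if ttc < ttc_threshold:
                edges[(a.vid, b.vid)] = 1.0 - ttc / ttc_threshold
        return edges

    def rsm_rate(vehicle_id, edges, channel_load,
                 base_hz=2.0, max_hz=10.0, load_limit=0.6):
        """Endangered vehicles send faster; the rate is scaled back when the
        estimated channel load approaches a limit."""
        severity = max((w for e, w in edges.items() if vehicle_id in e), default=0.0)
        rate = base_hz + severity * (max_hz - base_hz)
        if channel_load > load_limit:
            rate *= load_limit / channel_load
        return rate

    vehicles = [Vehicle("A", 0, 0, 25, 0), Vehicle("B", 60, 0, 15, 0),
                Vehicle("C", 0, 200, 20, 0)]
    edges = interaction_graph(vehicles)
    for v in vehicles:
        print(v.vid, round(rsm_rate(v.vid, edges, channel_load=0.7), 2), "Hz")

Vehicles with at least one incident edge are the "endangered" set, which is also the natural multicast group for ESM delivery.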
Abstract:
Voltage drop and rise at network peak and off-peak periods, along with voltage unbalance, are the major power quality problems in low voltage distribution networks. Utilities usually adjust transformer tap changers as a solution to voltage drop, and try to distribute loads equally across phases to address voltage unbalance. On the other hand, ever increasing energy demand, along with the need for cost reduction and higher reliability, is driving modern power systems towards Distributed Generation (DG) units, in the form of small rooftop photovoltaic cells (PV), Plug-in Electric Vehicles (PEVs) or Micro Grids (MGs). Rooftop PVs, typically with power levels ranging from 1-5 kW, are installed by householders and are gaining popularity due to their financial benefits. PEVs will also soon emerge in residential distribution networks; they behave as large residential loads when being charged, while later generations are expected to support the network as small DG units that transfer the energy stored in their batteries back into the grid. Furthermore, the MG, which is a cluster of loads and several DG units such as diesel generators, PVs, fuel cells and batteries, has recently been introduced to distribution networks. Voltage unbalance in the network can increase due to uncertainties in the connection points of PVs and PEVs, their nominal capacities and their times of operation. Therefore, it is of high interest to investigate voltage unbalance in these networks as a result of MG, PV and PEV integration into low voltage networks. In addition, the network might experience non-standard voltage drop due to high penetration of PEVs being charged at night, or non-standard voltage rise due to high penetration of PVs and PEVs feeding electricity back into the grid during off-peak periods. In this thesis, a voltage unbalance sensitivity analysis and stochastic evaluation is carried out for PVs installed by householders, considering their installation point, nominal capacity and penetration level as different uncertainties. A similar analysis is carried out for PEV penetration in the network operating in two different modes: grid-to-vehicle and vehicle-to-grid. Furthermore, conventional methods for improving voltage unbalance within these networks are discussed. This is followed by proposing new and efficient methods for voltage profile improvement at network peak and off-peak periods and for voltage unbalance reduction. In addition, voltage unbalance reduction is investigated for MGs, and new improvement methods are proposed and applied to the MG test bed planned to be established at Queensland University of Technology (QUT). MATLAB and PSCAD/EMTDC simulation software packages are used for verification of the analyses and proposals.
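For reference, the quantity such sensitivity and stochastic studies typically evaluate is the voltage unbalance factor, the ratio of negative- to positive-sequence voltage obtained from symmetrical components. The sketch below computes it for made-up phase voltages; the example phasors are illustrative, not results from the thesis.

    # Sketch: voltage unbalance factor (VUF) from symmetrical components.
    # Sample phasors are illustrative only.
    import cmath

    A = cmath.exp(2j * cmath.pi / 3)   # 120-degree rotation operator

    def vuf_percent(va, vb, vc):
        """Return |V_negative| / |V_positive| * 100 for three phase phasors."""
        v_pos = (va + A * vb + A**2 * vc) / 3
        v_neg = (va + A**2 * vb + A * vc) / 3
        return 100.0 * abs(v_neg) / abs(v_pos)

    # A single-phase rooftop PV exporting on phase a can lift that phase's
    # voltage while the other phases stay put, raising the VUF:
    va = cmath.rect(245.0, 0.0)                       # boosted by PV export
    vb = cmath.rect(230.0, -2 * cmath.pi / 3)
    vc = cmath.rect(230.0, 2 * cmath.pi / 3)
    print(f"VUF = {vuf_percent(va, vb, vc):.2f} %")   # compare against a statutory limit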
Abstract:
In order to support intelligent transportation system (ITS) road safety applications such as collision avoidance, lane departure warnings and lane keeping, Global Navigation Satellite System (GNSS) based vehicle positioning systems have to provide lane-level (0.5 to 1 m) or even in-lane-level (0.1 to 0.3 m) accurate and reliable positioning information to vehicle users. However, current vehicle navigation systems equipped with a single frequency GPS receiver can only provide road-level accuracy of 5-10 metres. The positioning accuracy can be improved to sub-metre level or better with augmented GNSS techniques such as Real Time Kinematic (RTK) and Precise Point Positioning (PPP), which have traditionally been used in land surveying or in slowly moving environments. In these techniques, GNSS correction data generated from a local, regional or global network of GNSS ground stations are broadcast to users via various communication data links, mostly 3G cellular networks and communication satellites. This research aimed to investigate the performance of precise positioning systems when operating in high mobility environments. This involves evaluating the performance of both RTK and PPP techniques using: i) a state-of-the-art dual frequency GPS receiver; and ii) a low-cost single frequency GNSS receiver. Additionally, this research evaluates the effectiveness of several operational strategies in reducing the load on data communication networks due to correction data transmission, which may be problematic for future wide-area ITS service deployment. These strategies include the use of different data transmission protocols, different correction data format standards, and correction data transmission at less frequent intervals. A series of field experiments were designed and conducted for each research task. Firstly, the performance of the RTK and PPP techniques was evaluated in both static and kinematic (highway driving at speeds exceeding 80 km/h) experiments. RTK solutions achieved an RMS precision of 0.09 to 0.2 m in the static tests and 0.2 to 0.3 m in the kinematic tests, while PPP achieved 0.5 to 1.5 m in the static tests and 1 to 1.8 m in the kinematic tests using the RTKlib software. These RMS precision values could be further improved if better RTK and PPP algorithms were adopted. The test results also showed that RTK may be more suitable for lane-level accuracy vehicle positioning. The professional grade (dual frequency) and mass-market grade (single frequency) GNSS receivers were tested for their RTK performance in static and kinematic modes. The analysis has shown that mass-market grade receivers provide good solution continuity, although their overall positioning accuracy is worse than that of professional grade receivers. In an attempt to reduce the load on the data communication network, we first evaluated the use of different correction data format standards, namely the RTCM version 2.x and RTCM version 3.0 formats. A 24-hour transmission test was conducted to compare network throughput. The results have shown that a 66% reduction in network throughput can be achieved by using the newer RTCM version 3.0 format compared with the older RTCM version 2.x format. Secondly, experiments were conducted to examine the use of two data transmission protocols, TCP and UDP, for correction data transmission through the Telstra 3G cellular network.
The performance of each transmission method was analysed in terms of packet transmission latency, packet dropout, packet throughput, packet retransmission rate, etc. The overall network throughput and latency of UDP data transmission are 76.5% and 83.6% of those of TCP data transmission, while the overall accuracy of the positioning solutions remains at the same level. Additionally, due to the nature of UDP transmission, it was found that 0.17% of UDP packets were lost during the kinematic tests, but this loss does not lead to a significant reduction in the quality of the positioning results. The experimental results from the static and kinematic field tests have also shown that mobile network communication may be blocked for a couple of seconds, but the positioning solutions can be kept at the required accuracy level by appropriate setting of the Age of Differential. Finally, we investigated the effects of using less frequent correction data (transmitted at 1, 5, 10, 15, 20, 30 and 60 second intervals) on the precise positioning system. As the time interval increases, the percentage of ambiguity-fixed solutions gradually decreases, while the positioning error increases from 0.1 to 0.5 m. The results showed that the position accuracy can still be kept at the in-lane level (0.1 to 0.3 m) when correction data are transmitted at intervals of up to 20 seconds.
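A minimal sketch of the correction-age bookkeeping these experiments rely on is given below: a UDP listener timestamps each incoming correction packet and flags when the age of differential exceeds a configurable limit, so the rover can decide whether a fix still meets the lane-level requirement. The port number, age limit and packet handling are illustrative assumptions, not the actual experimental setup.

    # Sketch: track the "age of differential" of a UDP correction stream.
    # Port, age limit and handling are assumptions for illustration.
    import socket
    import time

    MAX_AGE_S = 20.0     # tolerate corrections up to ~20 s old (cf. results above)

    def monitor_corrections(host="0.0.0.0", port=2101):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind((host, port))
        sock.settimeout(1.0)
        last_correction = None                      # time of the last packet received
        while True:
            try:
                packet, _addr = sock.recvfrom(4096) # raw correction bytes; parsing omitted
                last_correction = time.monotonic()
                print(f"correction packet, {len(packet)} bytes")
            except socket.timeout:
                pass                                # packet loss or network blockage
            if last_correction is not None:
                age = time.monotonic() - last_correction
                if age > MAX_AGE_S:
                    print(f"age of differential {age:.1f} s exceeds limit; "
                          "treat solution as degraded")

    if __name__ == "__main__":
        monitor_corrections()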
Abstract:
There are several popular soil moisture measurement methods today, such as time domain reflectometry, electromagnetic (EM) wave, electrical and acoustic methods. Significant research has been dedicated to developing measurement methods based on these concepts, especially to achieve non-invasiveness. The EM wave method provides an advantage because it is non-invasive and does not require probes to be pushed into or buried in the soil. However, some EM methods, such as satellite- or UAV (Unmanned Aerial Vehicle) based sensors, are too complex, expensive and insufficiently portable for Wireless Sensor Network applications. This research proposes a method for detecting changes in soil moisture using soil-reflected electromagnetic (SREM) waves from Wireless Sensor Networks (WSNs). Studies have shown that different levels of soil moisture affect the soil's dielectric properties, such as relative permittivity and conductivity, and in turn change its reflection coefficients. The SREM wave method uses a transmitter adjacent to a WSN node whose sole purpose is to transmit wireless signals that are reflected by the soil. The strength of the reflected signal, which is determined by the soil's reflection coefficients, is used to differentiate levels of soil moisture. The novelty of this method comes from using WSN communication signals to estimate soil moisture without the need for external sensors or invasive equipment. The method is non-invasive, low cost and simple to set up. Three locations in Brisbane, Australia, were chosen as experimental sites. The soil at these locations contains 10-20% clay according to the Australian Soil Resource Information System. Six approximate levels of soil moisture (8, 10, 13, 15, 18 and 20%) were measured at each location, with each measurement consisting of 200 data points. In total, 3600 measurements were completed in this research, which is sufficient to achieve the research objective of assessing and proving the concept of the SREM wave method. These results are compared with reference data from a similar soil type to validate the concept. A fourth-degree polynomial analysis is used to generate an equation that estimates soil moisture from the received signal strength recorded using the SREM wave method.
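The calibration step described above can be sketched as a simple polynomial fit of received signal strength against reference soil moisture, which is then inverted to estimate moisture from new readings. The sample data points below are fabricated for illustration and are not the thesis measurements.

    # Sketch: fourth-degree polynomial calibration of RSSI vs soil moisture.
    # Calibration pairs below are illustrative only.
    import numpy as np

    # (reference moisture %, mean RSSI in dBm) calibration pairs
    moisture = np.array([8.0, 10.0, 13.0, 15.0, 18.0, 20.0])
    rssi     = np.array([-61.0, -63.5, -66.0, -67.2, -69.8, -71.5])

    # Fit moisture as a 4th-degree polynomial of RSSI, as in the SREM method.
    coeffs = np.polyfit(rssi, moisture, deg=4)
    estimate = np.poly1d(coeffs)

    new_rssi = -68.0
    print(f"estimated soil moisture at {new_rssi} dBm: {estimate(new_rssi):.1f} %")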
Abstract:
Since the late twentieth century, there has been a shift away from delivery of infrastructure, including road networks, exclusively by the state. Subsequently, a range of alternative delivery models, including governance networks, has emerged. However, little is known about how connections between these networks and their stakeholders are created, managed or sustained. Using an analytical framework based on a synthesis of theories of network and stakeholder management, three cases in road infrastructure in Queensland, Australia are examined. The paper finds that although network management can be used to facilitate stakeholder engagement, such activities in the three cases are mainly focused within the core network of those most directly involved with delivery of the infrastructure, often to the exclusion of other stakeholder groups.
Abstract:
This paper investigates the use of the dimensionality-reduction techniques weighted linear discriminant analysis (WLDA) and weighted median Fisher discriminant analysis (WMFD) before probabilistic linear discriminant analysis (PLDA) modeling, for the purpose of improving speaker verification performance in the presence of high inter-session variability. Recently it was shown that WLDA techniques can provide improvement over traditional linear discriminant analysis (LDA) for channel compensation in i-vector based speaker verification systems. We show in this paper that the speaker-discriminative information available in the distances between pairs of speakers clustered in the development i-vector space can also be exploited in heavy-tailed PLDA modeling by applying the weighted discriminant approaches prior to PLDA modeling. Based upon the results presented within this paper using the NIST 2008 Speaker Recognition Evaluation dataset, we believe that WLDA and WMFD projections before PLDA modeling can provide an improved approach when compared to uncompensated PLDA modeling for i-vector based speaker verification systems.
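The sketch below illustrates the general idea of a weighted-LDA projection of i-vectors prior to PLDA: the between-speaker scatter is re-weighted by a function of the distance between speaker means, so close (easily confused) speaker pairs dominate the projection. The weighting function, dimensions and toy data are illustrative assumptions, not the exact formulation evaluated in the paper.

    # Sketch: weighted LDA (WLDA) projection of i-vectors before PLDA training.
    # Weighting function and dimensions are illustrative assumptions.
    import numpy as np

    def wlda_projection(ivectors, labels, out_dim):
        """ivectors: (N, D) array, labels: length-N speaker ids."""
        labels = np.array(labels)
        speakers = sorted(set(labels))
        dim = ivectors.shape[1]
        means = {s: ivectors[labels == s].mean(axis=0) for s in speakers}

        # Within-speaker scatter.
        Sw = np.zeros((dim, dim))
        for s in speakers:
            X = ivectors[labels == s] - means[s]
            Sw += X.T @ X

        # Weighted between-speaker scatter: weight ~ 1/d^2 for each speaker pair,
        # emphasising pairs that lie close together in the development space.
        Sb = np.zeros((dim, dim))
        for i, si in enumerate(speakers):
            for sj in speakers[i + 1:]:
                d = means[si] - means[sj]
                w = 1.0 / max(float(np.dot(d, d)), 1e-6)
                Sb += w * np.outer(d, d)

        # Solve the generalised eigenproblem Sb v = lambda Sw v.
        eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
        order = np.argsort(eigvals.real)[::-1]
        return eigvecs[:, order[:out_dim]].real      # projection matrix (D, out_dim)

    # Toy usage with random "i-vectors"; a real system would use development data.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 20))
    y = [f"spk{i % 6}" for i in range(60)]
    W = wlda_projection(X, y, out_dim=5)
    projected = X @ W                                # input to PLDA training
    print(projected.shape)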
Abstract:
Navigational collisions are one of the major safety concerns for many seaports. Continuing growth of shipping traffic, in both number and size, is likely to result in an increased number of traffic movements, which consequently could result in a higher risk of collisions in these restricted waters. This continually increasing safety concern warrants a comprehensive technique for modeling collision risk in port waters, particularly for modeling the probability of collision events and the associated consequences (i.e., injuries and fatalities). A number of techniques have been utilized for modeling the risk qualitatively, semi-quantitatively and quantitatively. These traditional techniques mostly rely on historical collision data, often in conjunction with expert judgments. However, they are hampered by several shortcomings, such as the randomness and rarity of collision occurrences, which lead to insufficient collision counts for sound statistical analysis, an insufficient explanation of collision causation, and a reactive approach to safety. A promising alternative approach that overcomes these shortcomings is the navigational traffic conflict technique (NTCT), which uses traffic conflicts as an alternative to collisions for modeling the probability of collision events quantitatively. This article explores the existing techniques for modeling collision risk in port waters. In particular, it identifies the advantages and limitations of the traditional techniques and highlights the potential of the NTCT in overcoming these limitations. In view of the principles of the NTCT, a structured method for managing collision risk is proposed. This risk management method allows safety analysts to diagnose safety deficiencies in a proactive manner, and consequently has great potential for managing collision risk in a fast, reliable and efficient manner.
Abstract:
This chapter reviews common barriers to community engagement for Latino youth and suggests ways to move beyond those barriers by empowering them to communicate their experiences, address the challenges they face, and develop recommendations for making their community more youth-friendly. As a case study, this chapter describes a program called Youth FACE IT (Youth Fostering Active Community Engagement for Integration and Transformation) in Boulder County, Colorado. The program enables Latino youth to engage in critical dialogue and participate in a community-based initiative. The chapter concludes by explaining specific strategies that planners can use to support active community engagement and develop a future generation of planners and engaged community members that reflects emerging demographics.
Abstract:
In this paper we consider the variable-order time fractional diffusion equation. We adopt the Coimbra variable order (VO) time fractional operator, which defines a consistent method for VO differentiation of physical variables. The Coimbra variable order fractional operator can also be viewed as a Caputo-type definition. Although this definition is arguably the most appropriate, having fundamental characteristics that are desirable for physical modeling, numerical methods for fractional partial differential equations using this definition have not yet appeared in the literature. Here an approximate scheme is first proposed. The stability, convergence and solvability of this numerical scheme are discussed via the technique of Fourier analysis. Numerical examples are provided to show that the numerical method is computationally efficient. Crown Copyright © 2012 Published by Elsevier Inc. All rights reserved.
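For orientation, one commonly quoted form of the Coimbra variable-order operator and the resulting VO time-fractional diffusion equation is sketched below; the notation is assumed for illustration and is not reproduced from the paper.

    % Commonly quoted form of the Coimbra VO operator, for 0 < alpha(t) < 1:
    \[
      {}^{C}D_{t}^{\alpha(t)} u(x,t)
        = \frac{1}{\Gamma\!\bigl(1-\alpha(t)\bigr)}
          \int_{0}^{t} (t-\sigma)^{-\alpha(t)}\,
          \frac{\partial u(x,\sigma)}{\partial \sigma}\, d\sigma
          + \frac{\bigl(u(x,0^{+})-u(x,0^{-})\bigr)\, t^{-\alpha(t)}}
                 {\Gamma\!\bigl(1-\alpha(t)\bigr)} ,
    \]
    % and the VO time-fractional diffusion equation it enters:
    \[
      {}^{C}D_{t}^{\alpha(t)} u(x,t)
        = \kappa \,\frac{\partial^{2} u(x,t)}{\partial x^{2}} + f(x,t),
      \qquad 0 < \alpha(t) < 1 .
    \]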
Abstract:
Monitoring the natural environment is increasingly important as habitat degradation and climate change reduce the world's biodiversity. We have developed software tools and applications to assist ecologists with the collection and analysis of acoustic data at large spatial and temporal scales. One of our key objectives is automated animal call recognition, and our approach has three novel attributes. First, we work with raw environmental audio, contaminated by noise and artefacts and containing calls that vary greatly in volume depending on the animal's proximity to the microphone. Second, initial experimentation suggested that no single recognizer could deal with the enormous variety of calls. Therefore, we developed a toolbox of generic recognizers to extract invariant features for each call type. Third, many species are cryptic and offer little data with which to train a recognizer. Many popular machine learning methods require large volumes of training and validation data and considerable time and expertise to prepare. Consequently, we adopt bootstrap techniques that can be initiated with little data and refined subsequently. In this paper, we describe our recognition tools and present results for real ecological problems.
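As an illustration of what one "generic recognizer" in such a toolbox might look like, the sketch below runs a band-limited energy detector over a spectrogram of raw field audio, normalising against the background noise level so that overall volume differences matter less. The frequency band, thresholds and frame sizes are assumptions for the sketch, not parameters from the paper.

    # Sketch: band-limited energy detector over a spectrogram of field audio.
    # Band, SNR threshold and frame sizes are illustrative assumptions.
    import numpy as np
    from scipy.signal import spectrogram

    def detect_calls(audio, fs, f_low=1500.0, f_high=3500.0, snr_db=8.0):
        """Return (start_s, end_s) segments where in-band energy rises well
        above the per-recording background estimate."""
        f, t, S = spectrogram(audio, fs=fs, nperseg=512, noverlap=256)
        band = (f >= f_low) & (f <= f_high)
        energy = S[band].sum(axis=0)
        noise_floor = np.median(energy)                 # robust background estimate
        active = 10 * np.log10(energy / (noise_floor + 1e-12)) > snr_db

        segments, start = [], None
        for i, on in enumerate(active):
            if on and start is None:
                start = t[i]
            elif not on and start is not None:
                segments.append((start, t[i]))
                start = None
        if start is not None:
            segments.append((start, t[-1]))
        return segments

    # Toy usage: a synthetic 2.5 kHz "call" embedded in noise.
    fs = 22050
    t = np.arange(0, 5.0, 1 / fs)
    audio = 0.05 * np.random.randn(t.size)
    audio[fs:2 * fs] += 0.5 * np.sin(2 * np.pi * 2500 * t[fs:2 * fs])
    print(detect_calls(audio, fs))

A detector like this can be bootstrapped from very few examples, since only a frequency band and a threshold need to be chosen initially and then refined as more calls are found.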
Abstract:
Privacy is an important component of freedom and plays a key role in protecting fundamental human rights. It is becoming increasingly difficult to ignore the fact that without appropriate levels of privacy, a person's rights are diminished. Users want to protect their privacy, particularly in "privacy invasive" areas such as social networks. However, social network users seldom know how to protect their own privacy through online mechanisms. What is required is an emerging concept that provides users with legitimate control over their own personal information, whilst preserving and maintaining the advantages of engaging with online services such as social networks. This paper reviews "Privacy by Design (PbD)" and shows how it applies to diverse privacy areas. Such an approach will move towards mitigating many of the privacy issues in online information systems and can be a potential pathway for protecting users' personal information. The research has also posed many questions in need of further investigation for different open source distributed social networks. Findings from this research will lead to a novel distributed architecture that provides more transparent and accountable privacy for the users of online information systems.
Abstract:
Conducting research into crime and criminal justice carries unique challenges. This Handbook focuses on the application of 'methods' to address the core substantive questions that currently motivate contemporary criminological research. It maps a canon of methods that are more elaborated than in most other fields of social science, and the intellectual terrain of research problems with which criminologists are routinely confronted. Drawing on exemplary studies, chapters in each section illustrate the techniques (qualitative and quantitative) that are commonly applied in empirical studies, as well as the logic of criminological enquiry. Organized into five sections, each prefaced by an editorial introduction, the Handbook covers:
• Crime and Criminals
• Contextualizing Crimes in Space and Time: Networks, Communities and Culture
• Perceptual Dimensions of Crime
• Criminal Justice Systems: Organizations and Institutions
• Preventing Crime and Improving Justice
Edited by leaders in the field of criminological research, and with contributions from internationally renowned experts, The SAGE Handbook of Criminological Research Methods is set to become the definitive resource for postgraduates, researchers and academics in criminology, criminal justice, policing, law, and sociology.
Abstract:
The popularity of Bayesian Network modelling of complex domains using expert elicitation has raised the question of how one might validate such a model, given that no objective dataset exists for it. Past attempts at delineating a set of tests for establishing confidence in an entirely expert-elicited model have focused on single types of validity stemming from individual sources of uncertainty within the model. This paper seeks to extend the frameworks proposed by earlier researchers by drawing upon other disciplines where measuring latent variables is also an issue. We demonstrate that even in cases where no data exist at all, there is a broad range of validity tests that can be used to establish confidence in the validity of a Bayesian Belief Network.
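One example of a data-free validity test is a sensitivity check on the elicited parameters themselves: perturb each expert-supplied probability and record how much a posterior of interest shifts, flagging parameters whose perturbation swings the conclusions most. The toy two-node network, node names and probabilities below are illustrative assumptions, not content from the paper.

    # Sketch: sensitivity analysis of an expert-elicited two-node network.
    # Node names and probabilities are illustrative only.
    import numpy as np

    # Expert-elicited parameters: P(Cause) and P(Effect | Cause).
    p_cause = 0.3
    p_effect_given = {True: 0.8, False: 0.1}

    def posterior_cause_given_effect(p_c, p_e_given):
        """P(Cause | Effect observed) by direct enumeration (Bayes' rule)."""
        joint_true = p_c * p_e_given[True]
        joint_false = (1 - p_c) * p_e_given[False]
        return joint_true / (joint_true + joint_false)

    baseline = posterior_cause_given_effect(p_cause, p_effect_given)
    print(f"baseline P(Cause | Effect) = {baseline:.3f}")

    # Perturb each elicited probability by +/- 0.05 and report the swing in the
    # posterior; parameters producing large swings deserve closer expert review.
    for name, target in [("P(Cause)", "cause"),
                         ("P(Effect|Cause)", True),
                         ("P(Effect|~Cause)", False)]:
        swings = []
        for delta in (-0.05, 0.05):
            p_c = p_cause
            p_eg = dict(p_effect_given)
            if target == "cause":
                p_c = float(np.clip(p_cause + delta, 0.0, 1.0))
            else:
                p_eg[target] = float(np.clip(p_eg[target] + delta, 0.0, 1.0))
            swings.append(posterior_cause_given_effect(p_c, p_eg) - baseline)
        print(f"{name}: posterior swing {min(swings):+.3f} to {max(swings):+.3f}")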