576 results for locations


Relevance: 10.00%

Abstract:

Distributed Wireless Smart Camera (DWSC) networks are a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs has great potential for growth, with practical applications spanning domains such as security surveillance and health care, it suffers from severe constraints. In addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large-scale deployments. This dissertation has developed a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, with consideration of the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks and of determining optimal camera configurations. Addressing the first problem, multi-object tracking and localisation, requires solving a large array of sub-problems. The sub-problems discussed in this dissertation are calibration of internal parameters, multi-camera calibration for localisation, and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be invented to accommodate the various constraints introduced by the DWSC platform. A technique has been developed for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements. Camera internal parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum set of two images taken by the camera, provided that the axis of rotation between the two images passes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. For object localisation, a novel approach has been developed for the calibration of a network of non-overlapping DWSCs in terms of their ground-plane homographies, which can then be used for localising objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image-plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network. In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours determined using a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path against locally generated descriptions using a probability maximisation process. The second problem, camera placement, emerges naturally when these pervasive devices are put into real use. The locations, orientations, lens types and so on of the cameras must be chosen so that the utility of the network is maximised (e.g. maximum coverage) while user requirements are met.
To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to effectively solve the problem.
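
As an illustration of the ground-plane calibration step described above, the sketch below estimates a camera's image-plane-to-ground-plane homography from a handful of robot position broadcasts and then maps a new detection to global coordinates. It is a minimal numpy-only example assuming at least four robot observations in general position; the function names and the direct linear transform formulation are illustrative, not the dissertation's implementation.

```python
import numpy as np

def estimate_homography(image_pts, ground_pts):
    """Estimate the 3x3 homography H mapping image-plane points to ground-plane
    points via the direct linear transform (DLT). Needs >= 4 correspondences."""
    rows = []
    for (u, v), (x, y) in zip(image_pts, ground_pts):
        rows.append([-u, -v, -1, 0, 0, 0, u * x, v * x, x])
        rows.append([0, 0, 0, -u, -v, -1, u * y, v * y, y])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)          # null-space vector of the DLT system
    return H / H[2, 2]

def image_to_ground(H, u, v):
    """Map a detection at pixel (u, v) to global ground-plane coordinates."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

# Synthetic check: the robot is seen at four image locations whose broadcast
# global positions are known; a fifth detection is then localised.
image_pts = [(0, 0), (640, 0), (640, 480), (0, 480)]
ground_pts = [(10.0, 5.0), (14.0, 5.0), (14.0, 8.0), (10.0, 8.0)]
H = estimate_homography(image_pts, ground_pts)
print(image_to_ground(H, 320, 240))   # roughly (12.0, 6.5)
```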

Relevance: 10.00%

Abstract:

This paper presents a rigorous and reliable analytical procedure using finite element (FE) techniques to study the blast response of a laminated glass (LG) panel and predict the failure of its components. The 1st principal stress (σ11) is used as the failure criterion for the glass, and the von Mises stress (σv) is used for the interlayer and sealant joints. The results from the FE analysis for mid-span deflection, energy absorption and the stresses at critical locations of the glass, interlayer and structural sealant are presented in the paper. These results compared well with those obtained from a free-field blast test reported in the literature. The tensile strength (T) of the glass has a significant influence on the behaviour of the LG panel and should be treated carefully in the analysis. The glass panes absorb about 80% of the blast energy for the blast load considered, and this should be minimised in the design.
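
For readers unfamiliar with the two failure criteria, the sketch below computes the 1st principal stress and the von Mises stress from a stress tensor and compares them against limits. The limit values and the example stress state are hypothetical, chosen only to show the checks, not values from the paper.

```python
import numpy as np

def principal_and_von_mises(sigma):
    """Return (maximum principal stress, von Mises stress) for a 3x3 stress tensor."""
    sigma = np.asarray(sigma, dtype=float)
    principal = np.linalg.eigvalsh(sigma)            # eigenvalues in ascending order
    s11 = principal[-1]                              # 1st (maximum) principal stress
    dev = sigma - np.trace(sigma) / 3.0 * np.eye(3)  # deviatoric part
    s_vm = np.sqrt(1.5 * np.sum(dev * dev))          # von Mises equivalent stress
    return s11, s_vm

# Hypothetical limits (MPa) for illustration only -- not values from the paper.
GLASS_TENSILE_STRENGTH = 80.0
INTERLAYER_VM_LIMIT = 28.0

stress = [[60.0, 5.0, 0.0],
          [5.0, 20.0, 0.0],
          [0.0, 0.0, 0.0]]
s11, s_vm = principal_and_von_mises(stress)
print(f"sigma_11 = {s11:.1f} MPa -> glass exceeds T: {s11 > GLASS_TENSILE_STRENGTH}")
print(f"sigma_vM = {s_vm:.1f} MPa -> interlayer exceeds limit: {s_vm > INTERLAYER_VM_LIMIT}")
```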

Relevance: 10.00%

Abstract:

This thesis developed semi-parametric regression models for estimating the spatio-temporal distribution of outdoor airborne ultrafine particle number concentration (PNC). The models developed incorporate multivariate penalised splines, random walks and autoregressive errors in order to estimate non-linear functions of space, time and other covariates. The models were applied to data from the "Ultrafine Particles from Traffic Emissions and Children's Health" project in Brisbane, Australia, and to longitudinal measurements of air quality in Helsinki, Finland. The spline and random walk aspects of the models reveal how the daily trend in PNC changes over the year in Helsinki, and the similarities and differences in the daily and weekly trends across multiple primary schools in Brisbane. Midday peaks in PNC at Brisbane locations are attributed to new particle formation events at the Port of Brisbane and Brisbane Airport.
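
As a rough illustration of the penalised spline component, the sketch below fits a penalised cubic regression spline (truncated power basis with a ridge penalty on the knot coefficients) to a synthetic diurnal trend. This is a simple univariate stand-in for the multivariate penalised splines used in the thesis; the data, knot placement and smoothing parameter are invented for the example.

```python
import numpy as np

def fit_penalised_spline(x, y, knots, lam):
    """Penalised cubic regression spline (truncated power basis) with a ridge
    penalty of weight lam applied only to the knot coefficients."""
    def basis(t):
        return np.column_stack([np.ones_like(t), t, t**2, t**3] +
                                [np.clip(t - k, 0.0, None)**3 for k in knots])
    X = basis(x)
    D = np.zeros(X.shape[1])
    D[4:] = 1.0                                   # penalise only the knot terms
    beta = np.linalg.solve(X.T @ X + lam * np.diag(D), X.T @ y)
    return lambda xnew: basis(xnew) @ beta

# Toy example: a smooth diurnal-style PNC trend with noise (synthetic data).
rng = np.random.default_rng(0)
hours = np.linspace(0, 24, 200)
pnc = 3.0 + 1.5 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.3, hours.size)
trend = fit_penalised_spline(hours, pnc, knots=np.linspace(1, 23, 10), lam=1.0)
print(trend(np.array([6.0, 12.0, 18.0])))         # estimated trend at three hours
```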

Relevance: 10.00%

Abstract:

Most large cities around the world are undergoing rapid transport sector development to cater for increased urbanisation. Consequently, the issues of mobility, access equity, congestion, operational safety and, above all, environmental sustainability are becoming increasingly crucial in transport planning and policy making. The popular response to these issues has been demand management, through improvement of motorised public transport (MPT) modes (bus, train, tram) and non-motorised transport (NMT) modes (walking, cycling), and through improved fuel technology. Relatively little attention has, however, been given to another readily available and highly sustainable component of the urban transport system: non-motorised public transport (NMPT), such as the pedicab, which operates on a commercial basis and serves as an NMT taxi. The pedicab has a long-standing history in many Asian cities, a relatively stable presence in Latin America, and is re-emerging and expanding in Europe, North America and Australia. Consensus at policy level on the apparent benefits, costs and management approach for NMPT integration has often been a major transport planning problem. Within this context, this research attempts to provide a more complete analysis of the current existence rationale and possible future, or otherwise, of NMPT as a regular public transport system. The analytical process is divided into three major stages. Stage 1 reviews the status and role of NMPT as regular public transport on a global scale, in both developing and developed cities. The review establishes the strong ongoing and potential future role of NMPT in major developing cities. Stage 2 narrows the status review down to a case study city in a developing country in order to facilitate a deeper role review and status analysis of the mode. Dhaka, the capital city of Bangladesh, was chosen due to the magnitude of its NMPT presence. The review and analysis reveal the multisectoral and dominant role of NMPT in catering for the travel needs of Dhaka transport users. The review also indicates ad hoc, disintegrated policy planning in the management of NMPT and the need for a planning framework to facilitate balanced integration between NMPT and motorised transport (MT) in future. Stage 3 develops an integrated, multimodal planning framework (IMPF), based on a four-step planning process: defining the purpose and scope of the planning exercise, determining current deficiencies and preferred characteristics for the proposed IMPF, selecting suitable techniques to address the deficiencies and needs of the transport network while laying out the IMPF, and finally developing a delivery plan for the IMPF based on a selected layout technique and integration approach. The output of the exercise is a planning instrument (decision tool) that can be used to assign a road hierarchy in order to allocate appropriate traffic to the appropriate network type, particularly to facilitate the operational balance between MT and NMT. The instrument is based on a partial-restriction approach to MT and NMT, structured on the notion of a functional hierarchy, and distributes/prioritises MT and NMT so that the functional needs of each network category are best complemented. The planning instrument based on these processes and principles offers a six-level road hierarchy, with a different composition of network-governing attributes and modal priority, for the current Dhaka transport network, in order to facilitate efficient integration of NMT with MT.
A case study application of the instrument on a small transport network in Dhaka also demonstrates the utility, flexibility and adoptability of the instrument in logically allocating corridors to particular positions in the road hierarchy. Although the tool is useful in enabling a balanced distribution of NMPT and MT at different network levels, further investigation with reference to detailed modal variations, scales and locations of a network is required to further generalise the framework application.

Relevance: 10.00%

Abstract:

Internationally, Industry-School Partnerships (ISPs) are a ubiquitous government approach for enabling school-to-work transitions. Significant benefits of ISPs for centralised bureaucracies seeking to address common educational problems include: i) cost reduction; ii) supply to geographically dispersed locations; and iii) industry access to innovative education solutions. In Queensland, there exists a government-led ISP, the Gateway to Industry Schools Program. Under this initiative is the Queensland Minerals and Energy Academy, a lead industry organisation for 34 schools and 17 multinational sponsor companies. Acquiring an understanding of this strategic ISP is critical, given the current resources industry boom and the workforce skills shortage experienced in Australia. This review paper adopts Ecological Systems Theory as a lens to understand the inner workings of ISPs. Acknowledging that ISPs will remain a key feature of government policy, this paper seeks to further illuminate the role of ISPs in transitioning young people from school to the working world.

Relevance: 10.00%

Abstract:

This research paper examines the potential of neighbourhood centres to generate and enhance social capital through their programs, activities, membership associations and community engagement. Social capital is a complex concept involving elements of norms, networks, and trust and is generally seen as enhancing community cohesion and the ability to attain common goals (outlined in more detail in Section 3). The aim of this research project is to describe the nature of social capital formation in terms of development and change in norms, networks and trust within the context of the operations of neighbourhood centres in three Queensland locations (i.e., Sherwood, Kingston/Slacks Creek, and Maleny). The study was prompted by surprisingly little research into how neighbourhood centres and their clients contribute to the development of social capital. Considering the large volume of research on the role of community organisations in building social capital, it is remarkable that perhaps the most obvious organisation with 'social capitalist' intentions has received so little attention (apart from Bullen and Onyx, 2005). Indeed, ostensibly, neighbourhood centres are all about social capital.

Relevance: 10.00%

Abstract:

Aim: Collisions between trains and pedestrians are the most likely to result in severe injuries and fatalities when compared with other types of rail crossing accidents. Currently, there is a growing emphasis on developing effective interventions designed to reduce the prevalence of train–pedestrian collisions. This paper reviews what is currently known about the personal and environmental factors that contribute to train–pedestrian collisions, particularly among high-risk groups. Method: Electronic databases were searched for studies, published up until June 2012, that reported on the prevalence and characteristics of pedestrian accidents at railway crossings. Results: Males, school children and older pedestrians (and those with disabilities) are disproportionately represented in fatality databases. However, a main theme to emerge is that little is known about the origins of train–pedestrian collisions (especially compared with train–vehicle collisions), in particular whether collisions result from deliberate violations or from decisional errors. This limits the corresponding development of effective, targeted interventions for high-risk groups and crossing locations. Finally, it remains unclear what combination of surveillance, deterrence-based and education-focused campaigns is required to produce lasting reductions in train–pedestrian fatality rates. This paper provides direction for future research into the personal and environmental origins of collisions, as well as the development of interventions that aim to attract pedestrians' attention and ensure crossing rules are respected.

Relevance: 10.00%

Abstract:

In biology, we frequently observe different species existing within the same environment. For example, there are many cell types in a tumour, or different animal species may occupy a given habitat. In modelling interactions between such species, we often make use of the mean field approximation, whereby spatial correlations between the locations of individuals are neglected. Whilst this approximation holds in certain situations, this is not always the case, and care must be taken to ensure the mean field approximation is only used in appropriate settings. In circumstances where the mean field approximation is unsuitable, we need to include information on the spatial distributions of individuals, which is not a simple task. In this paper we provide a method that overcomes many of the failures of the mean field approximation for an on-lattice volume-excluding birth-death-movement process with multiple species. We explicitly take into account spatial information on the distribution of individuals by including partial differential equation descriptions of lattice site occupancy correlations. We demonstrate how to derive these equations for the multi-species case, and show results specific to a two-species problem. We compare averaged discrete results to both the mean field approximation and our improved method, which incorporates spatial correlations. We note that the mean field approximation fails dramatically in some cases, predicting very different behaviour from that seen upon averaging multiple realisations of the discrete system. In contrast, our improved method provides excellent agreement with the averaged discrete behaviour in all cases, thus providing a more reliable modelling framework. Furthermore, our method is tractable as the resulting partial differential equations can be solved efficiently using standard numerical techniques.
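
For concreteness, the sketch below integrates the mean-field equations for a two-species volume-excluding birth-death process, which is exactly the approximation the paper shows can fail when spatial correlations matter. The rate values are arbitrary, and the paper's correlation-corrected PDE model is not reproduced here.

```python
import numpy as np

def mean_field_two_species(c0, birth, death, dt=0.01, t_end=50.0):
    """Forward-Euler integration of the mean-field ODEs
        dC_i/dt = p_i * C_i * (1 - C_1 - C_2) - d_i * C_i
    for a two-species volume-excluding birth/death process. Correlations
    between lattice sites are ignored entirely in this approximation."""
    c = np.array(c0, dtype=float)
    birth, death = np.asarray(birth, float), np.asarray(death, float)
    traj = [c.copy()]
    for _ in range(int(t_end / dt)):
        crowding = 1.0 - c.sum()           # fraction of empty sites (mean field)
        c = c + dt * (birth * c * crowding - death * c)
        traj.append(c.copy())
    return np.array(traj)

densities = mean_field_two_species(c0=[0.05, 0.05], birth=[1.0, 0.5], death=[0.1, 0.05])
print(densities[-1])                        # long-time mean-field densities
```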

Relevance: 10.00%

Abstract:

The early warning based on real-time prediction of rain-induced instability of natural residual slopes helps to minimise human casualties from such slope failures. Slope instability prediction is complicated, as it is influenced by many factors, including soil properties, soil behaviour, slope geometry, and the location and size of deep cracks in the slope. These deep cracks can facilitate rainwater infiltration into the deep soil layers and reduce the unsaturated shear strength of residual soil. Subsequently, a slip surface can form, triggering a landslide even in partially saturated soil slopes. Although past research has shown the effects of surface cracks on soil stability, research examining the influence of deep cracks on soil stability is very limited. This study aimed to develop methodologies for predicting the real-time rain-induced instability of natural residual soil slopes with deep cracks. The results can be used to warn against potential rain-induced slope failures. The literature review conducted on rain-induced slope instability of unsaturated residual soil associated with soil cracks reveals that only limited studies have been done in the following areas related to this topic:
- Methods for detecting deep cracks in residual soil slopes.
- Practical application of unsaturated soil theory in slope stability analysis.
- Mechanistic methods for real-time prediction of rain-induced residual soil slope instability in critical slopes with deep cracks.
Two natural residual soil slopes at Jombok Village, Ngantang City, Indonesia, which are located near a residential area, were investigated to obtain the parameters required for the stability analysis of the slope. A survey first identified all related field geometrical information, including slopes, roads, rivers, buildings, and the boundaries of the slope. Second, the electrical resistivity tomography (ERT) method was used on the slope to identify the location and geometrical characteristics of deep cracks. The two ERT array models employed in this research are dipole-dipole and azimuthal. Next, bore-hole tests were conducted at different locations in the slope to identify soil layers and to collect undisturbed soil samples for laboratory measurement of the soil parameters required for the stability analysis. At the same bore-hole locations, Standard Penetration Tests (SPT) were undertaken. Undisturbed soil samples taken from the bore-holes were tested in a laboratory to determine the variation of the following soil properties with depth:
- Classification and physical properties such as grain size distribution, Atterberg limits, water content, dry density and specific gravity.
- Saturated and unsaturated shear strength properties, using a direct shear apparatus.
- Soil water characteristic curves (SWCC), using the filter paper method.
- Saturated hydraulic conductivity.
The following three methods were used to detect and simulate the location and orientation of cracks in the investigated slope:
(1) The electrical resistivity distribution of the sub-soil obtained from ERT.
(2) The profile of classification and physical properties of the soil, based on laboratory testing of soil samples collected from bore-holes and visual observations of the cracks on the slope surface.
(3) The results of stress distribution obtained from 2D dynamic analysis of the slope using QUAKE/W software, together with the laboratory-measured soil parameters and earthquake records of the area.
It was assumed that the deep crack in the slope under investigation was generated by earthquakes. A good agreement was obtained when comparing the location and orientation of the cracks detected by Method-1 and Method-2. However, the cracks simulated in Method-3 were not in good agreement with the output of Method-1 and Method-2. This may have been due to the material properties used and the assumptions made for the analysis. From Method-1 and Method-2, it can be concluded that the ERT method can be used to detect the location and orientation of a crack in a soil slope, when the ERT is conducted in very dry or very wet soil conditions. In this study, the cracks detected by the ERT were used for the stability analysis of the slope. The stability of the slope was determined using the factor of safety (FOS) of a critical slip surface obtained by SLOPE/W using the limit equilibrium method. Pore-water pressure values for the stability analysis were obtained by coupling with a transient seepage analysis of the slope performed using the finite element based software SEEP/W. A parametric study conducted on the stability of the investigated slope revealed that the existence of deep cracks and their location in the soil slope are critical for its stability. The following two steps are proposed to predict the rain-induced instability of a residual soil slope with cracks.
(a) Step-1: The transient stability analysis of the slope is conducted from the date of the investigation (initial conditions are based on the investigation) to the current date, using measured rainfall data. The stability analyses are then continued for the next 12 months using the predicted annual rainfall, based on the previous five years' rainfall data for the area.
(b) Step-2: The stability of the slope is calculated in real time using real-time measured rainfall. In this calculation, rainfall is predicted for the next hour or 24 hours, and the stability of the slope is calculated one hour or 24 hours in advance using real-time rainfall data.
If the Step-1 analysis shows critical stability for the forthcoming year, it is recommended that Step-2 be used for more accurate warning against future failure of the slope. In this research, the results of applying Step-1 to an investigated slope (Slope-1) showed that its stability was not approaching a critical value for the year 2012 (until 31st December 2012) and, therefore, the application of Step-2 was not necessary for that year. A case study (Slope-2) was used to verify the applicability of the complete proposed predictive method. A landslide event occurred at Slope-2 on 31st October 2010. The transient seepage and stability analyses of the slope, using data obtained from field tests such as bore-hole, SPT, ERT and laboratory tests, were conducted on 12th June 2010 following Step-1, and found that the slope was in a critical condition on that date. It was then shown that the application of Step-2 could have predicted this failure with sufficient warning time.
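
As a simplified illustration of how rising pore-water pressure erodes stability, the sketch below evaluates an infinite-slope factor of safety for increasing pore pressures. The thesis itself uses SLOPE/W limit-equilibrium analysis on a critical slip surface with SEEP/W pore pressures; this infinite-slope stand-in and its parameter values are purely illustrative.

```python
import math

def infinite_slope_fos(c_eff, phi_eff_deg, gamma, depth, beta_deg, pore_pressure):
    """Factor of safety of an infinite slope (a much simpler stand-in for the
    limit-equilibrium analysis used in the thesis).
    c_eff [kPa], phi_eff_deg [deg], gamma [kN/m^3], depth to slip plane [m],
    slope angle beta_deg [deg], pore pressure u on the slip plane [kPa]."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_eff_deg)
    normal_stress = gamma * depth * math.cos(beta) ** 2
    shear_stress = gamma * depth * math.sin(beta) * math.cos(beta)
    resistance = c_eff + (normal_stress - pore_pressure) * math.tan(phi)
    return resistance / shear_stress

# Illustrative values only: FOS drops as rainfall infiltration raises pore pressure.
for u in (0.0, 10.0, 20.0, 30.0):
    fos = infinite_slope_fos(c_eff=5.0, phi_eff_deg=30.0, gamma=18.0,
                             depth=3.0, beta_deg=35.0, pore_pressure=u)
    print(f"u = {u:4.1f} kPa -> FOS = {fos:.2f}")
```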

Relevance: 10.00%

Abstract:

We report the mechanical properties of different two-dimensional carbon heterojunctions (HJs) made from graphene and various stable graphene allotropes, including α-, β-, γ- and 6612-graphyne (GY), and graphdiyne (GDY). It is found that all HJs exhibit brittle behaviour except the one with α-GY, which instead shows a hardening process due to the formation of triple carbon rings. This hardening process greatly defers the failure of the structure. Yielding of the HJs is usually initiated at the interface between graphene and the graphene allotrope, and monoatomic carbon rings are normally formed after yielding. Varying the location of the graphene (either in the middle or at the two ends of the HJs) yields similar mechanical properties, suggesting that the location of the graphene allotropes has an insignificant impact. In contrast, changing the types and percentages of the graphene allotropes gives the HJs vastly different mechanical properties. In general, as the graphene percentage increases, the yield strain decreases and the effective Young's modulus increases, while the yield stress appears insensitive to the graphene percentage. This study provides a fundamental understanding of the tensile properties of the heterojunctions that is crucial for the design and engineering of their mechanical properties, in order to facilitate their emerging future applications in nanoscale devices such as flexible/stretchable electronics.
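
To make the reported quantities concrete, the sketch below extracts an effective Young's modulus and a yield point from a tensile stress-strain record, as one might do with output from a molecular dynamics pull of a heterojunction. The curve is synthetic and the elastic-limit cutoff is an assumption, not the paper's procedure.

```python
import numpy as np

def effective_modulus_and_yield(strain, stress, elastic_limit=0.02):
    """Estimate the effective Young's modulus (slope of the small-strain region)
    and the yield point (peak stress) from a tensile stress-strain record."""
    strain, stress = np.asarray(strain), np.asarray(stress)
    mask = strain <= elastic_limit
    modulus = np.polyfit(strain[mask], stress[mask], 1)[0]   # same units as stress
    i_yield = int(np.argmax(stress))
    return modulus, strain[i_yield], stress[i_yield]

# Synthetic curve for illustration (not data from the paper): linear rise, then softening.
eps = np.linspace(0.0, 0.20, 200)
sig = np.where(eps < 0.12, 800.0 * eps, 800.0 * 0.12 * np.exp(-(eps - 0.12) * 20))
E, eps_y, sig_y = effective_modulus_and_yield(eps, sig)
print(f"E ~ {E:.0f} GPa, yield strain ~ {eps_y:.2f}, yield stress ~ {sig_y:.0f} GPa")
```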

Relevance: 10.00%

Abstract:

Wide-Area Measurement Systems (WAMS) provide the opportunity to utilise remote signals from different locations for the enhancement of power system stability. This paper focuses on the use of remote measurements as supplementary signals for off-centre Static Var Compensators (SVCs) to damp inter-area oscillations. A combination of the participation factor and residue methods is used to select the most effective stabilising signal. The speed difference of two generators from separate areas is identified as the best stabilising signal and is used as a supplementary input to the lead-lag controller of the SVCs. Time delays of remote measurements and control signals are considered. The Wide-Area Damping Controller (WADC) is implemented in the Matlab Simulink framework and tested under different operating conditions. Simulation results reveal that the proposed WADC improves the dynamic characteristics of the system significantly.
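
A conventional supplementary damping controller of the kind described here combines a washout filter with lead-lag stages acting on the remote speed-difference signal. The sketch below builds such a transfer function and simulates its response to a decaying inter-area oscillation; the gain, time constants and 0.5 Hz mode are assumed values rather than the paper's tuned design, and the measurement delay is omitted for brevity.

```python
import numpy as np
from scipy import signal

def wadc_lead_lag(K=10.0, Tw=10.0, T1=0.25, T2=0.05, stages=2):
    """Washout + lead-lag supplementary damping controller (illustrative structure):
        G(s) = K * (s*Tw / (1 + s*Tw)) * ((1 + s*T1) / (1 + s*T2))**stages
    """
    num, den = np.array([K * Tw, 0.0]), np.array([Tw, 1.0])   # washout stage
    for _ in range(stages):                                   # lead-lag stages
        num = np.polymul(num, [T1, 1.0])
        den = np.polymul(den, [T2, 1.0])
    return signal.TransferFunction(num, den)

G = wadc_lead_lag()
t = np.linspace(0.0, 5.0, 1000)
speed_diff = 0.01 * np.exp(-0.2 * t) * np.sin(2 * np.pi * 0.5 * t)  # 0.5 Hz inter-area mode
_, u_svc, _ = signal.lsim(G, U=speed_diff, T=t)                     # SVC supplementary input
print(u_svc[:5])
```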

Relevance: 10.00%

Abstract:

Do different brains forming a specific memory allocate the same groups of neurons to encode it? One way to test this question is to map neurons encoding the same memory and quantitatively compare their locations across individual brains. In a previous study, we used this strategy to uncover a common topography of neurons in the dorsolateral amygdala (LAd) that expressed a learning-induced and plasticity-related kinase (p42/44 mitogen-activated protein kinase; pMAPK), following auditory Pavlovian fear conditioning. In this series of experiments, we extend our initial findings to ask to what extent this functional topography depends upon intrinsic neuronal structure. We first showed that the majority (87 %) of pMAPK expression in the lateral amygdala was restricted to principal-type neurons. Next, we verified a neuroanatomical reference point for amygdala alignment using in vivo magnetic resonance imaging and in vitro morphometrics. We then determined that the topography of neurons encoding auditory fear conditioning was not exclusively governed by principal neuron cytoarchitecture. These data suggest that functional patterning of neurons undergoing plasticity in the amygdala following Pavlovian fear conditioning is specific to memory formation itself. Further, the spatial allocation of activated neurons in the LAd was specific to cued (auditory), but not contextual, fear conditioning. Spatial analyses conducted at another coronal plane revealed another spatial map unique to fear conditioning, providing additional evidence that the functional topography of fear memory storing cells in the LAd is non-random and stable. Overall, these data provide evidence for a spatial organizing principle governing the functional allocation of fear memory in the amygdala.

Relevance: 10.00%

Abstract:

Issue addressed: Although increases in cycling in Brisbane are encouraging, bicycle mode share to work in the state of Queensland remains low. The aim of this qualitative study was to draw upon the lived experiences of Queensland cyclists to understand the main motivators for utility cycling (cycling as a means to get to and from places) and compare motivators between utility cyclists (those who cycle for utility as well as for recreation) and non-utility cyclists (those who cycle only for recreation). Methods: For an online survey, members of a bicycle group (831 utility cyclists and 931 non-utility cyclists, aged 18-90 years) were asked to describe, unprompted, what would motivate them to engage in utility cycling (more often). Responses were coded into themes within four levels of an ecological model. Results: Within an ecological model, built environment influences on motivation were grouped according to whether they related to appeal (safety), convenience (accessibility) or attractiveness (more amenities) and included adequate infrastructure for short trips, bikeway connectivity, end-of-trip facilities at public locations and easy and safe bicycle access to destinations outside of cities. A key social-cultural influence related to improved interactions among different road users. Conclusions: The built and social-cultural environments need to be more supportive of utility cycling before even current utility and non-utility cyclists will be motivated to engage (more often) in utility cycling. So what?: Additional government strategies and more and better infrastructure that support utility cycling beyond commuter cycling may encourage a utility cycling culture.

Relevance: 10.00%

Abstract:

Crashes that occur on motorways contribute to a significant proportion (40-50%) of non-recurrent motorway congestion. Hence, reducing the frequency of crashes assists in addressing congestion issues (Meyer, 2008). Analysing traffic conditions and discovering risky traffic trends and patterns are essential foundations of crash likelihood estimation studies and still require more attention and investigation. In this paper we show, through data mining techniques, that there is a relationship between pre-crash traffic flow patterns and crash occurrence on motorways, compare these patterns with normal traffic trends, and argue that this knowledge has the potential to improve the accuracy of existing crash likelihood estimation models and to open the path for new development approaches. The data for the analysis were extracted from records collected between 2007 and 2009 on the Shibuya and Shinjuku lines of the Tokyo Metropolitan Expressway in Japan. The dataset includes a total of 824 rear-end and sideswipe crashes that have been matched with their corresponding traffic flow data using an incident detection algorithm. Traffic trends (traffic speed time series) revealed that crashes can be clustered with regard to the dominant traffic patterns prior to the crash occurrence. The K-Means clustering algorithm was applied to determine the dominant pre-crash traffic patterns. In the first phase of this research, traffic regimes were identified by analysing crash and normal traffic situations using half an hour of speed data at locations upstream of the crashes. The second phase then investigated different combinations of speed risk indicators to distinguish crashes from normal traffic situations more precisely. Five major trends were found in the first phase for both high-risk and normal conditions, and the traffic regimes differed in their speed trends. Moreover, the second phase shows that the spatiotemporal difference of speed is the best risk indicator among the different combinations of speed-related risk indicators considered. Based on these findings, crash likelihood estimation models can be fine-tuned to increase the accuracy of estimations and minimise false alarms.
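
As a minimal sketch of the clustering step, the code below groups 30-minute upstream speed profiles into five regimes with K-Means and computes a simple speed-drop indicator. The data are synthetic stand-ins for the expressway records, and the indicator is only a temporal proxy for the spatiotemporal speed difference discussed above.

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row: 30 minutes of upstream speed readings (km/h) before a crash or a
# matched normal period -- synthetic here, standing in for the expressway data.
rng = np.random.default_rng(42)
free_flow = 80 + rng.normal(0, 3, (100, 30))
breakdown = np.linspace(80, 30, 30) + rng.normal(0, 5, (100, 30))
speed_profiles = np.vstack([free_flow, breakdown])

# Group pre-crash/normal speed trends into dominant traffic regimes.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(speed_profiles)
print(np.bincount(kmeans.labels_))          # regime sizes

# A simple risk indicator: mean speed drop over the final 5 minutes of the window.
speed_drop = speed_profiles[:, :-5].mean(axis=1) - speed_profiles[:, -5:].mean(axis=1)
print(speed_drop[:3], speed_drop[-3:])      # small for free flow, large before breakdown
```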

Relevance: 10.00%

Abstract:

In many bridges, vertical displacements are among the most relevant parameters for structural health monitoring in both the short and long terms. Bridge managers around the globe are always looking for a simple way to measure the vertical displacements of bridges; however, it is difficult to carry out such measurements. On the other hand, in recent years, with the advancement of fibre-optic technologies, fibre Bragg grating (FBG) sensors have become more commonly used in structural health monitoring due to their outstanding advantages, including multiplexing capability, immunity to electromagnetic interference, and high resolution and accuracy. For these reasons, a methodology for measuring the vertical displacements of bridges using FBG sensors is proposed. The methodology includes two approaches: one is based on curvature measurements, while the other utilises inclination measurements from successfully developed FBG tilt sensors. A series of simulation tests of a full-scale bridge was conducted, showing that both approaches can be implemented to measure the vertical displacements of bridges with various support conditions, varying stiffness along the spans, and without any prior knowledge of the loading. A static beam test with increasing loads at the mid-span and a beam test with different loading locations were conducted to measure vertical displacements using FBG strain sensors and tilt sensors. The results show that both approaches can successfully measure vertical displacements.
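
The curvature-based approach rests on the fact that deflection is the double integral of curvature, which paired FBG strain readings can provide at discrete points along the span. The sketch below recovers the deflected shape of a simply supported span from sampled curvatures; the beam properties and sensor spacing are hypothetical, and the boundary-condition handling is simplified relative to the proposed methodology.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def deflection_from_curvature(x, curvature):
    """Recover vertical displacement from discrete curvature measurements by
    double integration, assuming a simply supported span (w = 0 at both ends).
    Curvature can come from paired FBG strains: kappa = (eps_bot - eps_top) / h."""
    slope = cumulative_trapezoid(curvature, x, initial=0.0)
    w = cumulative_trapezoid(slope, x, initial=0.0)
    # Enforce zero displacement at both supports by removing the linear part.
    w -= w[0] + (w[-1] - w[0]) * (x - x[0]) / (x[-1] - x[0])
    return w

# Illustration: uniformly loaded simply supported beam, curvature = -M / (E*I).
L, EI, q = 20.0, 5.0e9, 10.0e3                      # m, N*m^2, N/m (hypothetical)
x = np.linspace(0.0, L, 21)                         # 21 "sensor" locations
moment = q * x * (L - x) / 2.0
w = deflection_from_curvature(x, -moment / EI)      # sagging -> negative deflection
print(f"midspan deflection ~ {w[len(x)//2]*1000:.1f} mm "
      f"(theory: {-5*q*L**4/(384*EI)*1000:.1f} mm)")
```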