970 results for sandy locations


Relevance: 10.00%

Abstract:

Early warning based on real-time prediction of rain-induced instability of natural residual slopes helps to minimise human casualties from such slope failures. Slope instability prediction is complicated, as it is influenced by many factors, including soil properties, soil behaviour, slope geometry, and the location and size of deep cracks in the slope. Deep cracks can channel rainwater into the deep soil layers and reduce the unsaturated shear strength of residual soil; a slip surface can subsequently form, triggering a landslide even in partially saturated soil slopes. Although past research has shown the effects of surface cracks on soil stability, research examining the influence of deep cracks is very limited. This study aimed to develop methodologies for predicting the real-time rain-induced instability of natural residual soil slopes with deep cracks, so that the results can be used to warn against potential rain-induced slope failures. The literature review on rain-induced instability of unsaturated residual soil slopes with cracks revealed that only limited work has been done in the following areas:

- methods for detecting deep cracks in residual soil slopes;
- practical application of unsaturated soil theory in slope stability analysis;
- mechanistic methods for real-time prediction of rain-induced instability of critical residual soil slopes with deep cracks.

Two natural residual soil slopes at Jombok Village, Ngantang City, Indonesia, located near a residential area, were investigated to obtain the parameters required for stability analysis. A survey first recorded the relevant field geometry, including the slope, roads, rivers, buildings and slope boundaries. Second, electrical resistivity tomography (ERT) was applied to identify the location and geometrical characteristics of deep cracks, using two array models: dipole-dipole and azimuthal. Next, boreholes were drilled at different locations on the slope to identify the soil layers and to collect undisturbed samples for laboratory measurement of the soil parameters required for stability analysis; Standard Penetration Tests (SPT) were undertaken at the same locations. The undisturbed samples were tested to determine the variation of the following soil properties with depth:

- classification and physical properties, such as grain size distribution, Atterberg limits, water content, dry density and specific gravity;
- saturated and unsaturated shear strength, using a direct shear apparatus;
- soil water characteristic curves (SWCC), using the filter paper method;
- saturated hydraulic conductivity.

Three methods were used to detect and simulate the location and orientation of cracks in the investigated slope: (1) the electrical resistivity distribution of the sub-soil obtained from ERT; (2) the profiles of classification and physical properties of the soil, based on laboratory testing of borehole samples and visual observation of cracks on the slope surface; and (3) the stress distribution obtained from 2D dynamic analysis of the slope in QUAKE/W, using the laboratory-measured soil parameters and earthquake records of the area. It was assumed that the deep crack in the slope under investigation was generated by earthquakes. Good agreement was obtained between the location and orientation of the cracks detected by Method-1 and Method-2. However, the cracks simulated by Method-3 did not agree well with the output of Method-1 and Method-2, which may be due to the material properties used and the assumptions made in the analysis. From Method-1 and Method-2 it can be concluded that ERT can detect the location and orientation of a crack in a soil slope when conducted in very dry or very wet soil conditions. In this study, the cracks detected by ERT were used for the stability analysis of the slope. The stability of the slope was expressed as the factor of safety (FOS) of the critical slip surface, obtained in SLOPE/W using the limit equilibrium method; pore-water pressures for the stability analysis were obtained by coupling a transient seepage analysis of the slope in the finite-element software SEEP/W. A parametric study revealed that the existence of deep cracks, and their location in the soil slope, are critical to its stability. The following two steps are proposed to predict the rain-induced instability of a residual soil slope with cracks:

(a) Step-1: a transient stability analysis of the slope is conducted from the date of the investigation (which sets the initial conditions) to the current date using measured rainfall data, and is then continued for the next 12 months using annual rainfall predicted from the previous five years' rainfall data for the area.

(b) Step-2: the stability of the slope is calculated in real time using real-time measured rainfall; rainfall is predicted for the next hour or 24 hours, and the stability of the slope is calculated one hour or 24 hours in advance. If the Step-1 analysis shows critical stability for the forthcoming year, Step-2 is recommended for more accurate warning against future failure of the slope.

In this research, applying Step-1 to an investigated slope (Slope-1) showed that its stability was not approaching a critical value for 2012 (up to 31 December 2012), so Step-2 was not required for that year. A case study (Slope-2) was used to verify the applicability of the complete predictive method. A landslide occurred at Slope-2 on 31 October 2010. Transient seepage and stability analyses of the slope, using data from borehole, SPT and ERT field tests and from laboratory testing, were conducted following Step-1 on 12 June 2010 and found the slope to be in a critical condition on that date. It was then shown that applying Step-2 could have predicted this failure with sufficient warning time.
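
As a hedged illustration of the quantity both steps monitor: the factor of safety compares available shear strength with the driving shear stress on a slip surface, and falls as rain raises pore-water pressure. The sketch below uses a textbook infinite-slope, Mohr-Coulomb formulation with hypothetical parameter values; the study itself computes FOS from SLOPE/W limit-equilibrium searches fed by SEEP/W transient pore pressures, not from this closed form.

```python
import numpy as np

def infinite_slope_fos(c, phi_deg, gamma, depth, beta_deg, pore_pressure):
    """Factor of safety of an infinite slope (effective-stress Mohr-Coulomb).

    c              effective cohesion (kPa)
    phi_deg        effective friction angle (degrees)
    gamma          unit weight of soil (kN/m^3)
    depth          depth of the slip surface (m)
    beta_deg       slope angle (degrees)
    pore_pressure  pore-water pressure on the slip surface (kPa)
    """
    beta, phi = np.radians(beta_deg), np.radians(phi_deg)
    sigma_n = gamma * depth * np.cos(beta) ** 2        # normal stress on slip plane
    tau = gamma * depth * np.sin(beta) * np.cos(beta)  # driving shear stress
    strength = c + (sigma_n - pore_pressure) * np.tan(phi)
    return strength / tau

# Rising pore pressure during a rain event drives the FOS towards 1.0 (failure).
for u in (0.0, 10.0, 20.0, 30.0):
    print(f"u = {u:4.1f} kPa -> FOS = {infinite_slope_fos(15, 30, 18, 3, 35, u):.2f}")
```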

Relevance: 10.00%

Abstract:

We report the mechanical properties of different two-dimensional carbon heterojunctions (HJs) made from graphene and various stable graphene allotropes, including α-, β-, γ- and 6612-graphyne (GY), and graphdiyne (GDY). All HJs exhibit brittle behaviour except the one with α-GY, which instead shows a hardening process due to the formation of triple carbon rings; this hardening greatly defers failure of the structure. Yielding of the HJs is usually initiated at the interface between graphene and the graphene allotrope, and monoatomic carbon rings normally form after yielding. Varying the location of the graphene (either in the middle or at the two ends of the HJ) produces similar mechanical properties, suggesting that the location of the graphene allotropes has an insignificant impact. In contrast, changing the type and percentage of the graphene allotrope gives the HJs vastly different mechanical properties. In general, with increasing graphene percentage, the yield strain decreases and the effective Young's modulus increases, while the yield stress appears insensitive to the graphene percentage. This study provides a fundamental understanding of the tensile properties of the heterojunctions that is crucial for the design and engineering of their mechanical properties, in order to facilitate their emerging future applications in nanoscale devices, such as flexible/stretchable electronics.
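
As a hedged aside on how the quantities above are typically extracted from a simulated tensile test: the effective Young's modulus is the slope of the initial linear region of the stress-strain curve, and the yield point is taken at the stress peak before softening. The curve below is synthetic, not data from the study.

```python
import numpy as np

# Synthetic stress-strain curve standing in for a tensile simulation (hypothetical)
strain = np.linspace(0.0, 0.25, 200)
stress = 300.0 * strain - 900.0 * strain**2  # GPa

# Effective Young's modulus: slope of the initial linear region
linear = strain < 0.02
modulus = np.polyfit(strain[linear], stress[linear], 1)[0]

# Yield point taken at the stress maximum before softening/failure
i_yield = np.argmax(stress)
print(f"E ~ {modulus:.0f} GPa, yield strain ~ {strain[i_yield]:.3f}, "
      f"yield stress ~ {stress[i_yield]:.1f} GPa")
```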

Relevance: 10.00%

Abstract:

Wide-Area Measurement Systems (WAMS) provide the opportunity to utilise remote signals from different locations to enhance power system stability. This paper focuses on the implementation of remote measurements as supplementary signals for off-center Static Var Compensators (SVCs) to damp inter-area oscillations. A combination of the participation factor and residue methods is used to select the most effective stabilizing signal. The speed difference of two generators from separate areas is identified as the best stabilizing signal and is used as a supplementary input to the lead-lag controller of the SVCs. Time delays of remote measurements and control signals are considered. The Wide-Area Damping Controller (WADC) is implemented in the Matlab Simulink framework and tested under different operating conditions. Simulation results reveal that the proposed WADC improves the dynamic characteristics of the system significantly.
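
A damping controller of the kind described is conventionally a gain with a washout stage followed by lead-lag compensation. The minimal sketch below builds such a transfer function and reads off its phase lead near a typical inter-area frequency; all gains and time constants are hypothetical placeholders, as the paper tunes its controller via the residue method rather than with these values.

```python
import numpy as np
from scipy import signal

# Washout + lead-lag stage: K * (s*Tw/(1 + s*Tw)) * ((1 + s*T1)/(1 + s*T2)),
# driven by a generator speed-difference signal (all constants hypothetical)
K, Tw, T1, T2 = 10.0, 10.0, 0.5, 0.1

washout = signal.TransferFunction([Tw, 0.0], [Tw, 1.0])
leadlag = signal.TransferFunction([T1, 1.0], [T2, 1.0])

# Cascade the two stages by multiplying numerator and denominator polynomials
num = K * np.polymul(washout.num, leadlag.num)
den = np.polymul(washout.den, leadlag.den)
wadc = signal.TransferFunction(num, den)

# Phase lead near the inter-area mode (~0.5 Hz) is what damps the oscillation
w, mag, phase = signal.bode(wadc, w=np.logspace(-1, 2, 200))
print(f"phase lead at 3.14 rad/s (~0.5 Hz): {np.interp(3.14, w, phase):.1f} deg")
```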

Relevance: 10.00%

Abstract:

Do different brains forming a specific memory allocate the same groups of neurons to encode it? One way to test this question is to map neurons encoding the same memory and quantitatively compare their locations across individual brains. In a previous study, we used this strategy to uncover a common topography of neurons in the dorsolateral amygdala (LAd) that expressed a learning-induced and plasticity-related kinase (p42/44 mitogen-activated protein kinase; pMAPK) following auditory Pavlovian fear conditioning. In this series of experiments, we extend our initial findings to ask to what extent this functional topography depends upon intrinsic neuronal structure. We first showed that the majority (87%) of pMAPK expression in the lateral amygdala was restricted to principal-type neurons. Next, we verified a neuroanatomical reference point for amygdala alignment using in vivo magnetic resonance imaging and in vitro morphometrics. We then determined that the topography of neurons encoding auditory fear conditioning was not exclusively governed by principal neuron cytoarchitecture. These data suggest that the functional patterning of neurons undergoing plasticity in the amygdala following Pavlovian fear conditioning is specific to memory formation itself. Further, the spatial allocation of activated neurons in the LAd was specific to cued (auditory), but not contextual, fear conditioning. Spatial analyses conducted at a second coronal plane revealed a further map unique to fear conditioning, providing additional evidence that the functional topography of fear memory storing cells in the LAd is non-random and stable. Overall, these data provide evidence for a spatial organizing principle governing the functional allocation of fear memory in the amygdala.
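
If a worked example helps make "quantitatively compare their locations across individual brains" concrete, the hedged sketch below bins labelled-neuron coordinates from aligned sections into density maps and correlates the maps between animals. The coordinates and binning scheme are assumptions of this illustration only; the study's actual alignment relies on an MRI-verified anatomical reference point and its own spatial statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

def density_map(xy, bins=10, extent=(0.0, 1.0)):
    """2D histogram of labelled-neuron coordinates within an aligned section."""
    h, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=bins,
                             range=[extent, extent])
    return h.ravel()

# Two synthetic 'brains' sharing a hotspot, plus a spatially uniform control
brain_a = rng.normal(0.3, 0.08, (200, 2))
brain_b = rng.normal(0.3, 0.08, (200, 2))
control = rng.uniform(0.0, 1.0, (200, 2))

r_ab = np.corrcoef(density_map(brain_a), density_map(brain_b))[0, 1]
r_ac = np.corrcoef(density_map(brain_a), density_map(control))[0, 1]
print(f"shared topography r = {r_ab:.2f}, versus control r = {r_ac:.2f}")
```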

Relevance: 10.00%

Abstract:

Issue addressed: Although increases in cycling in Brisbane are encouraging, bicycle mode share to work in the state of Queensland remains low. The aim of this qualitative study was to draw upon the lived experiences of Queensland cyclists to understand the main motivators for utility cycling (cycling as a means to get to and from places) and compare motivators between utility cyclists (those who cycle for utility as well as for recreation) and non-utility cyclists (those who cycle only for recreation). Methods: For an online survey, members of a bicycle group (831 utility cyclists and 931 non-utility cyclists, aged 18-90 years) were asked to describe, unprompted, what would motivate them to engage in utility cycling (more often). Responses were coded into themes within four levels of an ecological model. Results: Within an ecological model, built environment influences on motivation were grouped according to whether they related to appeal (safety), convenience (accessibility) or attractiveness (more amenities) and included adequate infrastructure for short trips, bikeway connectivity, end-of-trip facilities at public locations and easy and safe bicycle access to destinations outside of cities. A key social-cultural influence related to improved interactions among different road users. Conclusions: The built and social-cultural environments need to be more supportive of utility cycling before even current utility and non-utility cyclists will be motivated to engage (more often) in utility cycling. So what?: Additional government strategies and more and better infrastructure that support utility cycling beyond commuter cycling may encourage a utility cycling culture.

Relevance: 10.00%

Abstract:

Crashes that occur on motorways contribute to a significant proportion (40-50%) of non-recurrent motorway congestion; hence, reducing the frequency of crashes helps address congestion issues (Meyer, 2008). Analysing traffic conditions and discovering risky traffic trends and patterns are essential foundations of crash likelihood estimation studies and still require more attention and investigation. In this paper we show, through data mining techniques, that there is a relationship between pre-crash traffic flow patterns and crash occurrence on motorways, compare these patterns with normal traffic trends, and argue that this knowledge has the potential to improve the accuracy of existing crash likelihood estimation models and to open the path for new development approaches. The data for the analysis were extracted from records collected between 2007 and 2009 on the Shibuya and Shinjuku lines of the Tokyo Metropolitan Expressway in Japan. The dataset includes a total of 824 rear-end and sideswipe crashes, each matched with its corresponding traffic flow data using an incident detection algorithm. Traffic trends (traffic speed time series) revealed that crashes can be clustered with regard to the dominant traffic patterns prior to their occurrence, and the K-Means clustering algorithm was applied to determine these dominant pre-crash patterns. In the first phase of this research, traffic regimes were identified by analysing crash and normal traffic situations using half an hour of speed data at locations upstream of the crashes. The second phase then investigated different combinations of speed-based risk indicators to distinguish crashes from normal traffic situations more precisely. Five major trends were found in the first phase for both high-risk and normal conditions, and the identified traffic regimes differed in their speed trends. Moreover, the second phase showed that the spatiotemporal difference of speed is the best risk indicator among the combinations of speed-related indicators examined. Based on these findings, crash likelihood estimation models can be fine-tuned to increase the accuracy of estimations and minimise false alarms.
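
A minimal sketch of the clustering step, assuming fixed-length speed time series are used directly as feature vectors; the profiles below are synthetic stand-ins for the 30-minute upstream speed records described above, not expressway data.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic 30-minute speed profiles (one reading per minute, km/h) standing in
# for upstream pre-crash and normal-traffic records
free_flow = 90 + rng.normal(0, 3, (50, 30))
congested = 35 + rng.normal(0, 5, (50, 30))
breakdown = np.linspace(85, 30, 30) + rng.normal(0, 5, (50, 30))  # speed collapse
profiles = np.vstack([free_flow, congested, breakdown])

# Cluster the time series to recover dominant traffic regimes
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(profiles)
for k, centre in enumerate(km.cluster_centers_):
    print(f"regime {k}: starts ~{centre[0]:.0f} km/h, ends ~{centre[-1]:.0f} km/h")
```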

Relevance: 10.00%

Abstract:

In many bridges, vertical displacements are among the most relevant parameters for structural health monitoring in both the short and long term. Bridge managers around the globe are always looking for a simple way to measure the vertical displacements of bridges, yet such measurements remain difficult to carry out. On the other hand, in recent years, with the advancement of fibre-optic technologies, fibre Bragg grating (FBG) sensors have become more common in structural health monitoring due to their outstanding advantages, including multiplexing capability, immunity to electromagnetic interference, and high resolution and accuracy. For these reasons, a methodology for measuring the vertical displacements of bridges using FBG sensors is proposed. The methodology includes two approaches: one based on curvature measurements, the other utilising inclination measurements from purpose-developed FBG tilt sensors. A series of simulation tests of a full-scale bridge shows that both approaches can measure the vertical displacements of bridges with various support conditions and varying stiffness along the spans, without any prior knowledge of the loading. A static beam test with increasing loads at mid-span and a beam test with different loading locations were then conducted to measure vertical displacements using FBG strain sensors and tilt sensors. The results show that both approaches can successfully measure vertical displacements.
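
The curvature approach rests on the beam relation v''(x) = κ(x): integrating measured curvature twice along the span and enforcing the support conditions recovers the deflected shape. A minimal sketch under simply-supported assumptions; the curvature profile below comes from a textbook uniform-load case rather than measured FBG strains.

```python
import numpy as np

# Simply supported beam of span L under a uniform load: kappa(x) = M(x)/EI
L, EI, w_load = 20.0, 5.0e6, 10.0          # m, kN*m^2, kN/m (all hypothetical)
x = np.linspace(0.0, L, 201)
kappa = w_load * x * (L - x) / (2.0 * EI)  # curvature for this load case

# Integrate curvature twice (trapezoidal rule) to get slope, then deflection
slope = np.concatenate(([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(x))))
defl = np.concatenate(([0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(x))))

# Enforce the second support condition v(L) = 0 by removing the linear trend
defl -= defl[-1] * x / L

mid = len(x) // 2
print(f"mid-span deflection ~ {abs(defl[mid]) * 1000:.2f} mm "
      f"(closed form: {5 * w_load * L**4 / (384 * EI) * 1000:.2f} mm)")
```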

Relevance: 10.00%

Abstract:

Despite the existence of air quality guidelines in Australia and New Zealand, concentrations of particulate matter have exceeded these guidelines on several occasions. To identify the sources of particulate matter, examine their contributions to air quality in specific areas and estimate the most likely locations of the sources, a growing number of source apportionment studies have been conducted. This paper provides an overview of the locations of these studies and the salient features of their results, and offers some perspectives for the improvement of future receptor modelling of air quality in these countries. The review revealed that, because of its advantages over alternative models, Positive Matrix Factorisation (PMF) was the most commonly applied model. Although the sources identified differed between studies, some general trends were observed. While biomass burning was a common problem in both countries, the characteristics of this source varied from one location to another. In New Zealand, domestic heating was the highest contributor to particle levels on days when the guidelines were exceeded; forest back-burning was a concern in Brisbane, while marine aerosol was a major source in most studies. Secondary sulphate, traffic emissions, industrial emissions and re-suspended soil were also identified as important sources. Some studies also incorporated unique species, for example volatile organic compounds, and particle size distributions, with results that have significant ramifications for the improvement of air quality. Overall, the application of source apportionment models provided useful information that can assist the design of epidemiological studies and refine air pollution reduction strategies in Australia and New Zealand.
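
For readers unfamiliar with receptor modelling: PMF factorises a samples-by-species concentration matrix into non-negative source contributions and source profiles. The sketch below uses scikit-learn's NMF as a simplified stand-in on synthetic data; true PMF additionally weights residuals by measurement uncertainty, which NMF does not.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)

# Synthetic data: 100 samples x 8 chemical species from 3 hypothetical sources
profiles = rng.dirichlet(np.ones(8), size=3)        # source chemical fingerprints
contributions = rng.gamma(2.0, 1.0, size=(100, 3))  # source strengths per sample
X = contributions @ profiles + rng.uniform(0, 0.01, (100, 8))

model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)   # estimated source contributions (samples x sources)
F = model.components_        # estimated source profiles (sources x species)
print("recovered source profiles (rows normalised to sum to 1):")
print(F / F.sum(axis=1, keepdims=True))
```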

Relevance: 10.00%

Abstract:

In some parts of Australia, people wanting to learn to ride a motorcycle are required to complete an off-road training course before they are allowed to practise on the road. In the state of Queensland, they are only required to pass a short multiple-choice road rules knowledge test. This paper describes an analysis of police-reported crashes involving learner riders in Queensland, undertaken as part of research investigating whether pre-learner training is needed and, if so, the issues that should be addressed in training. The crashes of learner riders and other riders were compared to identify whether there are particular situations or locations in which learner motorcyclists are over-involved in crashes, which could then be targeted in the pre-learner package. The analyses were undertaken separately for riders aged under 25 (330 crashes) and those aged 25 and over (237 crashes) to provide some insight into whether age or riding inexperience is the more important factor, and thus to indicate whether there is merit in having different licensing or training approaches for younger and older learner riders. Given that the average age of learner riders was 33 years, under 25 was chosen to provide a sufficiently large sample of younger riders. Learner riders appeared to be involved in more severe crashes and to be more often at fault than fully licensed riders, but this may reflect problems in reporting rather than real differences. Compared to open licence holders, both younger and older learner riders had relatively more crashes in low speed zones and relatively fewer in high speed zones. Riders aged under 25 had elevated percentages of night-time crashes and fewer single-unit crashes (potentially involving rider error only), regardless of the type of licence held. The contributing factors that were more prevalent in crashes of learner riders than of open licence holders were inexperience (37.2% versus 0.5%), inattention (21.5% versus 15.6%), alcohol or drugs (12.0% versus 5.1%) and drink riding (9.9% versus 3.1%). The pattern of contributing factors was generally similar for younger and older learner riders, although younger learners were (not surprisingly) more likely to have inexperience coded as a contributing factor (49.7% versus 19.8%). Some of the differences between the crashes of learner riders and fully licensed riders appear to reflect relatively more riding in urban areas by learners, rather than increased risks relating to inexperience. The analysis of contributing factors in learner rider crashes suggests that hazard perception and risk management (in terms of speed, alcohol and drugs) should be included in a pre-learner program. Currently, most learner riders in Queensland complete pre-licence training and become licensed within one month of obtaining their learner permit. If the introduction of pre-learner training required the learner permit to be held for a minimum duration, the immediate effect might be more learners riding (and crashing). Thus, it is important to consider how training and licensing initiatives work together to improve the safety of new riders (and how this can be evaluated).

Relevance: 10.00%

Abstract:

There is a continuing need to improve safety at Railway Level Crossings (RLX), particularly those that do not have gates and lights regulating traffic flow. A number of Intelligent Transport System (ITS) interventions have been proposed to improve drivers' awareness and reduce errors in detecting and responding appropriately at level crossings. However, as with other technologies, successful implementation, and ultimately effectiveness, rests on the acceptance of the technology by the end user. In the current research, four focus groups (n=38) were held with drivers in metropolitan and regional locations in Queensland to examine their perceptions of potential in-vehicle and road-based ITS interventions to improve safety at RLX. The findings imply that further development of the ITS interventions, in particular the design and related promotion of the final product, must consider ease of use, usefulness and relative cost.

Relevance: 10.00%

Abstract:

This article discusses the situation of income support claimants in Australia, constructed as faulty citizens and flawed welfare subjects. Many are on the receiving end of complex, multi-layered forms of surveillance aimed at securing socially responsible and compliant behaviours. In Australia, as in other Western countries, neoliberal economic regimes with their harsh and often repressive treatment of welfare recipients operate in tandem with a burgeoning and costly arsenal of CCTV and other surveillance and governance assemblages. Through a program of 'Income Management', initially targeting (mainly) Indigenous welfare recipients in Australia's Northern Territory, the BasicsCard (administered by Centrelink on behalf of the Australian Federal Government's Department of Human Services) is one example of this welfare surveillance. The scheme operates by 'quarantining' a percentage of a claimant's welfare entitlements to be spent by way of the BasicsCard on 'approved' items only. The BasicsCard scheme raises significant questions about whether it is possible to encourage people to take responsibility for themselves if they no longer have real control over the most important aspects of their lives. Some Indigenous communities have resisted the BasicsCard, criticising it because the imposition of income management leads to a loss of trust, dignity and individual agency. Further, income management of individuals by the welfare state contradicts the purported aim that they become less 'welfare dependent' and more 'self-reliant'. In highlighting issues around compulsory income management, this paper contributes to the largely under-discussed area of income management and welfare surveillance, with its propensity for function creep, garnering large volumes of data on BasicsCard users' approved (and declined) purchasing decisions, complete with dates, amounts, times and locations.

Relevance: 10.00%

Abstract:

Purpose: The purpose of this study was to identify retrospectively the predictors of implant survival when a flapless protocol was used in two private dental practices. Materials and Methods: The collected data were first computer-searched to identify the patients; a hand search of patient records was then carried out to identify all flapless implants consecutively inserted over the previous 10 years. The information gathered on candidate predictors included age, sex, periodontal and peri-implantitis status, smoking, details of the implants inserted, implant locations, placement time after extraction, use of simultaneous guided hard and soft tissue regeneration procedures, loading protocols, type of prosthesis, and treatment outcomes (implant survival and complications). Excluded were any implants that required flaps or simultaneous guided hard and soft tissue regeneration procedures, and implants narrower than 3.25 mm. Results: A total of 1,241 implants had been placed in 472 patients. Life table analysis indicated cumulative 5-year and 10-year implant survival rates of 97.9% and 96.5%, respectively. Most of the failed implants had been placed in the posterior maxilla (54%) and in type 4 bone (74.0%), and 55.0% of failed implants had been placed in smokers. Conclusion: Flapless dental implant surgery can yield an implant survival rate comparable to that reported in other studies using traditional flap techniques.
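
The cumulative survival figures come from life table (actuarial) analysis: survival over each follow-up interval is estimated with withdrawals counted as half-exposed, and the interval rates are multiplied together. A minimal sketch with hypothetical failure and withdrawal counts (the abstract does not report the study's interval-level data):

```python
# Actuarial life-table estimate of cumulative implant survival.
# Hypothetical yearly counts of failures and withdrawals (lost to follow-up
# or not yet due for recall) over a 10-year observation window.
intervals = [  # (failures, withdrawals) per year
    (6, 40), (4, 55), (3, 70), (2, 80), (2, 90),
    (1, 100), (1, 110), (1, 120), (0, 130), (1, 140),
]

at_risk = 1241      # implants placed at time zero
cumulative = 1.0
for year, (failed, withdrawn) in enumerate(intervals, start=1):
    effective = at_risk - withdrawn / 2.0  # withdrawals exposed for half the interval
    cumulative *= 1.0 - failed / effective
    print(f"year {year:2d}: cumulative survival = {cumulative:.3f}")
    at_risk -= failed + withdrawn
```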

Relevance: 10.00%

Abstract:

Carrying capacity assessments model a population's potential self-sufficiency. A crucial first step in developing such models is to examine the basic resource-based parameters defining the population's production and consumption habits. These parameters include basic human needs such as food, water, shelter and energy, together with climatic, environmental and behavioural characteristics. Each of these parameters imparts land-usage requirements in different ways and to varying degrees, so their incorporation into carrying capacity modelling also differs. Given that the availability and values of production parameters may differ between locations, no two carrying capacity models are likely to be exactly alike. However, the essential parameters themselves can remain consistent, so one example, the Carrying Capacity Dashboard, is offered as a case study to highlight one way in which these parameters are utilised. While examples exist of findings made with carrying capacity assessment models, guidelines for replicating such studies at other regions and scales have to date largely been overlooked. This paper addresses this shortcoming by describing a process for the inclusion and calibration of the most important resource-based parameters in a way that could be repeated elsewhere; a simplified illustration of the underlying arithmetic follows.
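
A minimal sketch of that arithmetic, assuming a closed population fed, watered and powered entirely from local land; every demand value below is hypothetical rather than taken from the Carrying Capacity Dashboard.

```python
# Per-capita annual land requirements (hectares/person), all hypothetical
demands = {
    "cropping": 0.20,         # staple food production
    "grazing": 0.35,          # animal products
    "water_catchment": 0.10,  # potable and irrigation supply
    "energy": 0.15,           # biomass/renewable energy footprint
}

def carrying_capacity(available_ha: float, demands: dict) -> float:
    """People supportable if every listed demand must be met from local land."""
    per_capita = sum(demands.values())
    return available_ha / per_capita

region_ha = 250_000.0
print(f"per-capita footprint: {sum(demands.values()):.2f} ha")
print(f"estimated carrying capacity: {carrying_capacity(region_ha, demands):,.0f} people")
```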

Relevance: 10.00%

Abstract:

The liberalization of international trade and foreign direct investment through multilateral, regional and bilateral agreements has had profound implications for the structure and nature of food systems, and therefore for the availability, nutritional quality, accessibility, price and promotion of foods in different locations. Public health attention has only relatively recently turned to the links between trade and investment agreements, diets and health, and there is currently no systematic monitoring of this area. This paper reviews the available evidence on the links between trade agreements, food environments and diets from an obesity and non-communicable disease (NCD) perspective. Based on the key issues identified through the review, the paper outlines an approach for monitoring the potential impact of trade agreements on food environments and obesity/NCD risks. The proposed monitoring approach encompasses a set of guiding principles, recommended procedures for data collection and analysis, and quantifiable 'minimal', 'expanded' and 'optimal' measurement indicators to be tailored to national priorities, capacity and resources. Formal risk assessment processes for existing and evolving trade and investment agreements, focused on their impacts on food environments, will help inform the development of healthy trade policy, strengthen domestic nutrition and health policy space and ultimately protect population nutrition.

Relevance: 10.00%

Abstract:

Spatial organisation of proteins according to their function plays an important role in the specificity of their molecular interactions. Emerging proteomics methods seek to assign proteins to sub-cellular locations by partial separation of organelles and computational analysis of protein abundance distributions among partially separated fractions. Such methods permit simultaneous analysis of unpurified organelles and promise proteome-wide localisation in scenarios wherein perturbation may prompt dynamic re-distribution. Resolving organelles that display similar behaviour during a protocol designed to provide partial enrichment represents a possible shortcoming. We employ the Localisation of Organelle Proteins by Isotope Tagging (LOPIT) organelle proteomics platform to demonstrate that combining information from distinct separations of the same material can improve organelle resolution and the assignment of proteins to sub-cellular locations. Two previously published experiments, whose distinct gradients are alone unable to fully resolve six known protein-organelle groupings, are subjected to a rigorous analysis to assess protein-organelle association via a contemporary pattern recognition algorithm. Upon straightforward combination of single-gradient data, we observe significant improvement in protein-organelle association via both a non-linear support vector machine algorithm and partial least-squares discriminant analysis. The outcome yields suggestions for further improvements to present organelle proteomics platforms, and a robust analytical methodology via which to associate proteins with sub-cellular organelles.
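
A hedged sketch of the central move, assuming each protein is described by its abundance distribution across gradient fractions: concatenating profiles from two gradients into one feature vector can separate organelle classes that either gradient alone confounds. The data below are synthetic, and the published analysis applies a support vector machine and PLS-DA to real LOPIT fraction profiles rather than this toy setup.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

def profiles(centre, n, fractions=8):
    """Synthetic abundance distributions across gradient fractions,
    peaking at an organelle-specific fraction."""
    x = np.arange(fractions)
    p = np.exp(-0.5 * ((x - centre) / 1.5) ** 2) + rng.normal(0, 0.08, (n, fractions))
    return p / p.sum(axis=1, keepdims=True)

n = 60  # marker proteins per organelle class
# Two organelles that co-migrate on gradient 1 but separate on gradient 2
g1 = np.vstack([profiles(2.0, n), profiles(2.1, n)])
g2 = np.vstack([profiles(1.0, n), profiles(6.0, n)])
y = np.repeat([0, 1], n)

clf = SVC(kernel="rbf", gamma="scale")
for name, X in [("gradient 1 only", g1), ("gradients combined", np.hstack([g1, g2]))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: cross-validated accuracy = {acc:.2f}")
```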