923 results for accelerometer, randomness check
Abstract:
As part of vital infrastructure and transportation networks, bridge structures must function safely at all times. Bridges are designed to have a long life span, yet at any point in time some bridges are aged. Given the rapidly growing demand for heavy and fast inter-city passages and the continuous increase in freight transportation, the ageing of bridge structures requires diligence from bridge owners to ensure that the infrastructure remains healthy at reasonable cost. In recent decades, a new technique, structural health monitoring (SHM), has emerged to meet this challenge. Within this new engineering discipline, structural modal identification and damage detection form a vital component. As witnessed by an increasing number of publications, changes in vibration characteristics have been widely and deeply investigated as a means of assessing structural damage. Although a number of publications have addressed the feasibility of various methods through experimental verification, few have focused on steel truss bridges. Finding a feasible vibration-based damage indicator for steel truss bridges, and solving the practical difficulties of modal identification in support of damage detection, motivated this research project. This research aimed to derive an innovative method to assess structural damage in steel truss bridges. First, it proposed a new damage indicator that relies on optimising the correlation between theoretical and measured modal strain energy. The optimisation is powered by a newly proposed multilayer genetic algorithm. In addition, a selection criterion for damage-sensitive modes was studied to achieve more efficient and accurate damage detection results.
Second, to support the proposed damage indicator, the research studied the application of two state-of-the-art modal identification techniques while considering several practical difficulties: limited instrumentation, the influence of environmental noise, the difficulties of finite element model updating, and the data selection problem in output-only modal identification methods. Numerical (on a planar truss model) and experimental (on a laboratory through-truss bridge) verifications proved the effectiveness and feasibility of the proposed damage detection scheme. The modal strain energy-based indicator was found to be sensitive to damage in steel truss bridges even with incomplete measurement, demonstrating the indicator's potential for practical application to steel truss bridges. Lastly, the achievements and limitations of this study, and the lessons learnt from the modal analysis, are summarised.
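The abstract names a "multilayer genetic algorithm" but does not define it. As a loose illustration of the general idea only, a basic single-layer GA that searches for element-wise stiffness reductions maximising the correlation between "measured" and model-predicted modal strain energy (MSE) might look like this; the element count, baseline MSE values, damage scenario and GA settings are all invented:

```python
import random

N_ELEMS = 6

def predicted_mse(damage):
    # Toy stand-in for a finite element model: each element's MSE drops
    # in proportion to its stiffness reduction.
    baseline = [1.0, 0.8, 1.2, 0.9, 1.1, 0.7]
    return [b * (1.0 - d) for b, d in zip(baseline, damage)]

def correlation(x, y):
    # Pearson correlation coefficient between two MSE vectors.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def run_ga(measured, pop=40, gens=60, seed=1):
    rng = random.Random(seed)
    fitness = lambda ind: correlation(measured, predicted_mse(ind))
    population = [[rng.random() * 0.5 for _ in range(N_ELEMS)]
                  for _ in range(pop)]
    for _ in range(gens):
        elite = sorted(population, key=fitness, reverse=True)[:pop // 2]
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, N_ELEMS)        # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(N_ELEMS)             # point mutation
            child[i] = min(0.99, max(0.0, child[i] + rng.gauss(0, 0.05)))
            children.append(child)
        population = elite + children              # elitist replacement
    return max(population, key=fitness)

# Simulated "measurement": 30% stiffness loss in element 2.
measured = predicted_mse([0.0, 0.0, 0.3, 0.0, 0.0, 0.0])
best = run_ga(measured)
print(f"best-candidate correlation: "
      f"{correlation(measured, predicted_mse(best)):.3f}")
```

Because the elite half is carried over unchanged each generation, the best fitness never decreases, which is one simple way to keep such a search stable.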
Abstract:
The issue of ‘rigour vs. relevance’ in IS research has generated an intense, heated debate for over a decade. It is possible to identify, however, only a limited number of contributions on how to increase the relevance of IS research without compromising its rigour. Based on a lifecycle view of IS research, we propose the notion of ‘reality checks’ in order to review IS research outcomes in the light of actual industry demands. We assume that five barriers impact the efficient transfer of IS research outcomes; they are lack of awareness, lack of understandability, lack of relevance, lack of timeliness, and lack of applicability. In seeking to understand the effect of these barriers on the transfer of mature IS research into practice, we used focus groups. We chose DeLone and McLean’s IS success model as our stimulus because it is one of the more widely researched areas of IS.
Abstract:
The outcomes of a two-pronged 'real-world' learning project, which aimed to expand the views of pre-service teachers about learning, pedagogy and diversity, will be discussed in this paper. Seventy-two fourth-year and 22 first-year students, enrolled in a Bachelor of Education degree in Queensland, Australia, were engaged in community sites outside of university lectures, and separate from their practicum. Using Butin's conceptual framework for service learning, we show evidence that this approach can enable pre-service teachers to see new realities about the dilemmas and ambiguities of performing as learners and as teachers. We contend that when such 'real-world' experiences have different foci at different times in their four-year degree, pre-service teachers have more opportunities to develop sophisticated understandings of pedagogy in diverse contexts for diverse learners.
Abstract:
Many factors have the potential to influence human health. These factors need to be monitored to maintain health. As is the case with human health, construction projects have a number of critical factors that can facilitate a broad evaluation of project health. In order to use these factors as an indication of health, they need to be assessed. This assessment can help to achieve desired outcomes for the project. This paper discusses the approach of assessing Critical Success Factors (CSFs) using Key Performance Indicators (KPIs) to ascertain the immediate health of a construction project. This approach is applicable to all phases of construction projects and many construction procurement methods. KPIs have been benchmarked on the basis of industry standards and historical data. The robustness of the KPIs to assess the immediate health of a project has been validated using Australian and international case studies.
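The CSF/KPI assessment idea can be sketched in a few lines. This is an invented illustration, not the paper's benchmarked KPIs: the indicator names, benchmark values and weights below are assumptions, chosen only to show how weighted KPI-versus-benchmark ratios might roll up into a single health score.

```python
# KPI name -> (benchmark, higher_is_better, weight); values are invented.
BENCHMARKS = {
    "cost_performance_index":     (1.0, True,  0.4),
    "schedule_performance_index": (1.0, True,  0.4),
    "incident_rate_per_month":    (2.0, False, 0.2),
}

def health_score(kpis):
    """Return a 0-1 health score: each KPI scores 1 when it meets its
    benchmark, proportionally less otherwise, combined by weight."""
    total = 0.0
    for name, (bench, higher_better, weight) in BENCHMARKS.items():
        value = kpis[name]
        ratio = value / bench if higher_better else bench / max(value, 1e-9)
        total += weight * min(ratio, 1.0)  # exceeding the benchmark caps at 1
    return total

project = {"cost_performance_index": 0.9,
           "schedule_performance_index": 1.05,
           "incident_rate_per_month": 4.0}
print(round(health_score(project), 2))   # 0.86
```

In practice the benchmarks would come from the industry standards and historical data the abstract mentions, not hard-coded constants.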
Abstract:
We examine the use of randomness extraction and expansion in key agreement (KA) protocols to generate uniformly random keys in the standard model. Although existing works provide the basic theorems necessary, they lack details or examples of appropriate cryptographic primitives and/or parameter sizes. This has led to the large amount of min-entropy needed in the (non-uniform) shared secret being overlooked in proposals and efficiency comparisons of KA protocols. We therefore summarize existing work in the area and examine the security levels achieved with the use of various extractors and expanders for particular parameter sizes. The tables presented herein show that the shared secret needs a min-entropy of at least 292 bits (and even more with more realistic assumptions) to achieve an overall security level of 80 bits using the extractors and expanders we consider. The tables may be used to find the min-entropy required for various security levels and assumptions. We also find that when using the short exponent theorems of Gennaro et al., the short exponents may need to be much longer than they suggested.
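To make the extract-then-expand structure concrete, here is a generic sketch in the style of HKDF (RFC 5869). It is not one of the specific extractors/expanders or parameter choices analysed in the paper; the salt, info label and stand-in secret are invented.

```python
import hmac
import hashlib

def extract(salt: bytes, shared_secret: bytes) -> bytes:
    # Randomness extraction: condense a high-min-entropy but non-uniform
    # shared secret into a short pseudorandom key, using HMAC-SHA256.
    return hmac.new(salt, shared_secret, hashlib.sha256).digest()

def expand(prk: bytes, info: bytes, length: int) -> bytes:
    # Randomness expansion: stretch the extracted key to `length` bytes
    # (the HKDF-Expand feedback construction).
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# A toy stand-in for a Diffie-Hellman shared secret (non-uniform as bytes):
secret = (1234567890123456789).to_bytes(32, "big")
session_key = expand(extract(b"public-salt", secret), b"session-key", 16)
print(len(session_key))   # 16
```

The paper's point is about the *input* to the first step: if the shared secret's min-entropy falls below the stated bounds (at least 292 bits for 80-bit security under the constructions considered), no choice of extractor rescues the output's uniformity.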
Abstract:
BIM (Building Information Modelling) is an approach that involves applying and maintaining an integral digital representation of all building information for different phases of the project lifecycle. This paper presents an analysis of the current state of BIM in the industry and a re-assessment of its role and potential contribution in the near future, given the apparent slow rate of adoption by the industry. The paper analyses the readiness of the building industry with respect to the product, processes and people to present an argument on where the expectations from BIM and its adoption may have been misplaced. This paper reports on the findings from: (1) a critical review of latest BIM literature and commercial applications, and (2) workshops with focus groups on changing work-practice, role of technology, current perceptions and expectations of BIM.
Abstract:
The study described in this paper developed a model of animal movement, which explicitly recognised each individual as the central unit of measure. The model was developed by learning from a real dataset that measured and calculated, for individual cows in a herd, their linear and angular positions and directional and angular speeds. Two learning algorithms were implemented: a Hidden Markov model (HMM) and a long-term prediction algorithm. It is shown that a HMM can be used to describe the animal's movement and state transition behaviour within several “stay” areas where cows remained for long periods. Model parameters were estimated for hidden behaviour states such as relocating, foraging and bedding. For cows’ movement between the “stay” areas a long-term prediction algorithm was implemented. By combining these two algorithms it was possible to develop a successful model, which achieved similar results to the animal behaviour data collected. This modelling methodology could easily be applied to interactions of other animal species.
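A toy version of the first component can make the HMM structure concrete. This is not the fitted model from the study: the three hidden behaviour states are taken from the abstract, but the discretised speed observations, the transition and emission probabilities, and the use of Viterbi decoding are invented for illustration.

```python
import math

STATES = ["bedding", "foraging", "relocating"]

# Invented parameters; in the study these were estimated from herd data.
START = {"bedding": 0.5, "foraging": 0.3, "relocating": 0.2}
TRANS = {"bedding":    {"bedding": 0.8,  "foraging": 0.15, "relocating": 0.05},
         "foraging":   {"bedding": 0.1,  "foraging": 0.8,  "relocating": 0.1},
         "relocating": {"bedding": 0.1,  "foraging": 0.3,  "relocating": 0.6}}
EMIT = {"bedding":    {"still": 0.8,  "slow": 0.15, "fast": 0.05},
        "foraging":   {"still": 0.2,  "slow": 0.7,  "fast": 0.1},
        "relocating": {"still": 0.05, "slow": 0.25, "fast": 0.7}}

def viterbi(observations):
    """Most likely hidden behaviour sequence (log-space Viterbi)."""
    v = [{s: math.log(START[s]) + math.log(EMIT[s][observations[0]])
          for s in STATES}]
    back = []
    for obs in observations[1:]:
        col, ptr = {}, {}
        for s in STATES:
            prev, score = max(
                ((p, v[-1][p] + math.log(TRANS[p][s])) for p in STATES),
                key=lambda t: t[1])
            col[s] = score + math.log(EMIT[s][obs])
            ptr[s] = prev
        v.append(col)
        back.append(ptr)
    path = [max(STATES, key=lambda s: v[-1][s])]
    for ptr in reversed(back):          # backtrack through the pointers
        path.append(ptr[path[-1]])
    return path[::-1]

print(viterbi(["still", "still", "slow", "slow", "fast", "fast"]))
```

With these probabilities the decoder maps the still/slow/fast run onto bedding, then foraging, then relocating, mirroring the "stay area" versus movement distinction described in the abstract.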
Abstract:
To accommodate the growth in air travellers at airports worldwide, it is important to simulate and understand passenger flows in order to predict future capacity constraints and levels of service. We discuss the ability of agent-based models to capture complicated pedestrian movement in built environments. In this paper we propose advanced passenger traits to enable more detailed modelling of behaviors in terminal buildings, particularly in the departure hall around the check-in facilities. To demonstrate the concepts, we perform a series of passenger agent simulations in a virtual airport terminal. In doing so, we generate a spatial distribution of passengers within the departure hall across ancillary facilities such as cafes, information kiosks and phone booths, as well as common check-in facilities, and observe the effects this has on passenger check-in and departure hall dwell times, and on facility utilization.
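A minimal agent-based sketch can show the kind of trait-driven behaviour the abstract describes; it is not the paper's model. Each passenger agent arrives in the departure hall and, if it has ample time before departure, detours via a discretionary facility (a cafe) before joining a single check-in desk queue. All parameters below are invented.

```python
import random

random.seed(7)
CHECKIN_SERVICE_MIN = 2                    # minutes of service per passenger

def simulate(n_passengers=50):
    arrivals = []
    for _ in range(n_passengers):
        arrival = random.uniform(0, 120)   # minutes after the hall opens
        slack = random.uniform(30, 180)    # minutes until departure
        if slack > 90:
            arrival += 15                  # relaxed agents visit the cafe first
        arrivals.append(arrival)
    arrivals.sort()                        # FIFO service at the single desk
    desk_free_at, dwell = 0.0, []
    for arrival in arrivals:
        start = max(arrival, desk_free_at)
        desk_free_at = start + CHECKIN_SERVICE_MIN
        dwell.append(desk_free_at - arrival)   # queueing + service time
    return sum(dwell) / len(dwell)

print(f"mean check-in dwell: {simulate():.1f} min")
```

Richer passenger traits (group travel, bag counts, kiosk versus desk choice) would slot into the per-agent branch, which is the modelling flexibility agent-based approaches offer over aggregate flow models.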
Abstract:
High levels of sitting have been linked with poor health outcomes. Previously, a pragmatic MTI accelerometer cut-point (100 counts·min⁻¹) has been used to estimate sitting, but data on the accuracy of this cut-point are unavailable. PURPOSE: To ascertain whether the 100 counts·min⁻¹ cut-point accurately isolates sitting from standing activities. METHODS: Participants fitted with an MTI accelerometer were observed performing a range of sitting, standing, light and moderate activities. One-minute-epoch MTI data were matched to the observed activities, then re-categorised as either sitting or not using the 100 counts·min⁻¹ cut-point. Self-reported demographics and current physical activity data were collected. Generalised estimating equation (GEE) analyses for repeated measures with a binary logistic model, corrected for age, gender and BMI, were conducted to ascertain the odds of the MTI data being misclassified. RESULTS: Data were from 26 healthy subjects (8 men; 50% aged <25 years; mean (SD) BMI 22.7 (3.8) kg/m²). The mode of both the MTI sitting and standing data was 0 counts·min⁻¹, with 46% of sitting activities and 21% of standing activities recording 0 counts·min⁻¹. The GEE was unable to accurately isolate sitting from standing activities using the 100 counts·min⁻¹ cut-point, since all sitting activities were incorrectly predicted as standing (p=0.05). To further explore the sensitivity of MTI data for delineating sitting from standing, the upper 95% confidence limit of the mean for the sitting activities (46 counts·min⁻¹) was used to re-categorise the data; this resulted in the GEE correctly classifying 49% of sitting and 69% of standing activities. Using the 100 counts·min⁻¹ cut-point, the data were re-categorised into a combined 'sit/stand' category and tested against other light activities: 88% of sit/stand and 87% of light activities were accurately predicted. Using Freedson's moderate cut-point of 1952 counts·min⁻¹, the GEE accurately predicted 97% of light vs. 90% of moderate activities.
CONCLUSION: The distributions of MTI-recorded sitting and standing data overlap considerably; as such, the 100 counts·min⁻¹ cut-point did not accurately isolate sitting from static standing activities. The 100 counts·min⁻¹ cut-point more accurately predicted sit/stand vs. other movement-oriented activities.
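The cut-point rule examined above is simple to express as code. The two thresholds (100 and 1952 counts per minute) come from the abstract; the epoch values are invented. The labels reflect the abstract's finding: because sitting and standing count distributions overlap heavily, epochs below 100 can only be labelled "sit/stand", not "sitting".

```python
SEDENTARY_CUTPOINT = 100      # counts per minute
MODERATE_CUTPOINT = 1952      # Freedson's moderate threshold

def classify_epoch(counts_per_min):
    # Classify a one-minute accelerometer epoch by its count total.
    if counts_per_min < SEDENTARY_CUTPOINT:
        return "sit/stand"
    if counts_per_min < MODERATE_CUTPOINT:
        return "light"
    return "moderate+"

epochs = [0, 0, 46, 120, 800, 2100]       # invented one-minute epochs
print([classify_epoch(c) for c in epochs])
# ['sit/stand', 'sit/stand', 'sit/stand', 'light', 'light', 'moderate+']
```

Separating true sitting from standing would require an additional signal, such as the inclinometer data later devices provide, rather than a lower count threshold.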
Abstract:
Data quality has become a major concern for organisations. The rapid growth in the size and technology of databases and data warehouses has brought significant advantages in accessing, storing, and retrieving information. At the same time, rapid data throughput and heterogeneous access create great challenges for maintaining high data quality. Yet, despite the importance of data quality, the literature has usually reduced data quality to detecting and correcting poor data such as outliers and incomplete or inaccurate values. As a result, organisations are unable to assess data quality efficiently and effectively. An accurate and proper data quality assessment method will enable users to benchmark their systems and monitor their improvement. This paper introduces a granule-mining approach for measuring the degree of randomness in erroneous data, which will enable decision makers to conduct accurate quality assessments and locate the most severely affected data, thereby providing an accurate estimate of the human and financial resources required for quality improvement tasks.
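The abstract does not define its granule-mining method, so the following is only a loose sketch of the general idea it gestures at: partition records into granules (here, by a source attribute) and rank granules by error density so the most severely affected data can be located first. The records, attributes and validity rule below are invented.

```python
from collections import defaultdict

records = [
    {"source": "web", "age": 34},
    {"source": "web", "age": -5},     # invalid
    {"source": "web", "age": 210},    # invalid
    {"source": "crm", "age": 41},
    {"source": "crm", "age": 29},
    {"source": "etl", "age": None},   # missing
]

def is_error(rec):
    # A toy validity rule for the invented data.
    age = rec["age"]
    return age is None or not (0 <= age <= 120)

def error_density_by_granule(records, key):
    """Rank granules (groups sharing a value of `key`) by error proportion."""
    counts = defaultdict(lambda: [0, 0])     # granule -> [errors, total]
    for rec in records:
        g = rec[key]
        counts[g][1] += 1
        counts[g][0] += is_error(rec)
    return sorted(((err / tot, g) for g, (err, tot) in counts.items()),
                  reverse=True)

print(error_density_by_granule(records, "source"))
```

Ranking by density rather than raw error count is what lets a small but thoroughly broken granule (here the "etl" feed) surface ahead of a larger one with scattered errors.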