368 results for Broadly-based assessment
Abstract:
Monitoring pedestrian and cyclist movement is an important area of research in transport, crowd safety, urban design and human behaviour assessment. Media Access Control (MAC) address data has recently been used as a potential source of information for extracting features of people's movement. MAC addresses are unique identifiers of the WiFi and Bluetooth wireless interfaces in smart electronic devices such as mobile phones, laptops and tablets. The unique number of each WiFi and Bluetooth MAC address can be captured and stored by MAC address scanners. MAC address data therefore allows for unannounced, non-participatory tracking of people. The use of MAC data for tracking people has recently focused on applications in mass events, shopping centres, airports, train stations, etc. For travel-time estimation, setting up a scanner with a high antenna gain is usually recommended on highways and main roads to track vehicle movements, whereas high gains can have drawbacks in the case of pedestrians and cyclists. Pedestrians and cyclists mainly move in built-up districts and along city pathways, where there is significant noise from other, fixed WiFi and Bluetooth devices. High antenna gains cover wide areas, which results in scanning more samples from pedestrians' and cyclists' MAC devices; however, anomalies (such as fixed devices) may also be captured, increasing the complexity and processing time of the data analysis. On the other hand, low-gain antennas produce fewer anomalies in the data, but at the cost of a lower overall sample size of pedestrian and cyclist data. This paper studies the effect of antenna characteristics on MAC address data in terms of travel-time estimation for pedestrians and cyclists. The results of the empirical case study compare the effects of low and high antenna gains in order to suggest an optimal set-up for increasing the accuracy of pedestrian and cyclist travel-time estimates.
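A minimal sketch of how detections from two MAC address scanners could be turned into travel-time estimates is given below; this is an illustration only, not the procedure used in the paper, and the scanner sites, dwell-time threshold and detection records are hypothetical.

```python
# Minimal sketch (not the paper's method): estimating pedestrian/cyclist travel
# times from MAC detections at two hypothetical scanner sites, after discarding
# addresses that dwell too long at one site (likely fixed WiFi/Bluetooth devices).
from collections import defaultdict

# (mac, site, timestamp_seconds) detection records -- illustrative data only
detections = [
    ("AA:BB:CC:00:00:01", "A", 0), ("AA:BB:CC:00:00:01", "B", 180),
    ("AA:BB:CC:00:00:02", "A", 10), ("AA:BB:CC:00:00:02", "A", 3600),  # fixed device
]

MAX_DWELL = 900  # assumed threshold (s): longer single-site presence => anomaly

def travel_times(records, origin="A", destination="B"):
    by_mac = defaultdict(lambda: defaultdict(list))
    for mac, site, t in records:
        by_mac[mac][site].append(t)
    times = {}
    for mac, sites in by_mac.items():
        # discard likely fixed devices: long dwell at a single site
        if any(max(ts) - min(ts) > MAX_DWELL for ts in sites.values()):
            continue
        if origin in sites and destination in sites:
            dt = min(sites[destination]) - max(sites[origin])
            if dt > 0:
                times[mac] = dt
    return times

print(travel_times(detections))  # {'AA:BB:CC:00:00:01': 180}
```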
Abstract:
Objective: To develop a height- and weight-based equation to estimate total body water (TBW) in Sri Lankan children. Methods: A cross-sectional descriptive study was conducted involving healthy children aged 5–15 years. Height and weight were measured. TBW was assessed using the isotope dilution method (D2O) and fat-free mass (FFM) was calculated. Multiple regression analysis was used to develop the prediction equation, which was validated using the PRESS statistical technique. Height, weight and sex code (male = 1; female = 0) were used as predictor variables. Results: This study provides a height and weight equation for the prediction of TBW in Sri Lankan children. To the best of our knowledge, there are no published height-weight prediction equations validated on South Asian populations. Conclusion: The results of this study need to be confirmed by further studies on other closely related populations using multicomponent body composition methods.
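For illustration, a regression of the form described (TBW predicted from height, weight and a sex code, checked with a PRESS statistic) might be fitted as sketched below; all data values and the resulting coefficients are fabricated placeholders, not the equation developed in the study.

```python
# Minimal sketch of the kind of model described: TBW predicted from height,
# weight and sex by multiple linear regression, with a PRESS-style
# leave-one-out check. All data values here are fabricated placeholders.
import numpy as np

# columns: height (cm), weight (kg), sex (male=1, female=0); target: TBW (kg, D2O)
X_raw = np.array([[120, 23, 1], [131, 28, 0], [142, 35, 1],
                  [150, 41, 0], [158, 48, 1], [165, 55, 0]], dtype=float)
tbw = np.array([13.5, 15.8, 19.9, 22.4, 27.1, 29.8])

X = np.column_stack([np.ones(len(tbw)), X_raw])       # add intercept
beta, *_ = np.linalg.lstsq(X, tbw, rcond=None)        # OLS coefficients

# PRESS: sum of squared leave-one-out prediction errors
press = 0.0
for i in range(len(tbw)):
    keep = np.arange(len(tbw)) != i
    b, *_ = np.linalg.lstsq(X[keep], tbw[keep], rcond=None)
    press += (tbw[i] - X[i] @ b) ** 2

print("coefficients (intercept, height, weight, sex):", beta.round(3))
print("PRESS:", round(press, 3))
```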
Abstract:
There is limited evidence evaluating the performance and use of photographic and image-based dietary records among adults with chronic disease. This study evaluated the performance of a mobile phone image-based dietary record, the Nutricam Dietary Assessment Method (NuDAM), in adults with type 2 diabetes mellitus (T2DM). Criterion validity was determined by comparing energy intake (EI) with total energy expenditure (TEE) measured by the doubly-labelled water technique. Relative validity was established by comparison with a weighed food record (WFR). Inter-rater reliability was assessed by comparing estimates of intake from three dietitians. Ten adults (6 males; age 61.2 ± 6.9 years; BMI 31.0 ± 4.5 kg/m2) participated. Compared to TEE, mean EI was under-reported using both methods, with mean EI:TEE ratios of 0.76 ± 0.20 for the NuDAM and 0.76 ± 0.17 for the WFR. There were moderate to high correlations between the NuDAM and WFR for energy (r = 0.57), carbohydrate (r = 0.63, p < 0.05), protein (r = 0.78, p < 0.01) and alcohol (rs = 0.85, p < 0.01), with a weaker relationship for fat (r = 0.24). Agreement between dietitians for nutrient intake from the 3-day NuDAM (ICC = 0.77-0.99) was marginally lower than for the 3-day WFR (ICC = 0.82-0.99). All subjects preferred using the NuDAM and were willing to use it again for longer recording periods.
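The headline validity statistics reported above (EI:TEE ratios and between-method correlations) can be illustrated with the short sketch below, computed on fabricated numbers rather than the study data.

```python
# Illustrative sketch of the validity statistics reported above, computed on
# fabricated numbers: the EI:TEE ratio for each method and the Pearson
# correlation between energy estimates from the two methods.
import numpy as np

tee = np.array([10.1, 11.4, 9.6, 12.0, 10.8])            # MJ/day, doubly-labelled water
ei_nudam = np.array([7.8, 8.9, 7.0, 9.3, 8.1])           # MJ/day, image-based record
ei_wfr = np.array([7.5, 9.1, 7.2, 9.0, 8.4])             # MJ/day, weighed food record

for name, ei in (("NuDAM", ei_nudam), ("WFR", ei_wfr)):
    ratio = ei / tee
    print(f"{name} EI:TEE = {ratio.mean():.2f} ± {ratio.std(ddof=1):.2f}")

r = np.corrcoef(ei_nudam, ei_wfr)[0, 1]                   # method agreement for energy
print(f"Pearson r (NuDAM vs WFR energy): {r:.2f}")
```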
Abstract:
This paper proposes new metrics and a performance-assessment framework for vision-based weed and fruit detection and classification algorithms. In order to compare algorithms, and to decide which one to use for a particular application, it is necessary to take into account that the performance obtained in a series of tests is subject to uncertainty. Such characterisation of uncertainty does not appear to be captured by the performance metrics currently reported in the literature. Therefore, we pose the problem as a general problem of scientific inference, which arises out of incomplete information, and propose as a metric of performance the (posterior) predictive probabilities that the algorithms will provide a correct outcome for target and background detection. We detail the framework, Bayesian in nature, through which these predictive probabilities can be obtained. As an illustrative example, we apply the framework to the assessment of the performance of four algorithms that could potentially be used in the detection of capsicums (peppers).
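A minimal sketch of the underlying Bayesian idea is shown below; under a Beta(1, 1) prior, and assuming for illustration that the framework reduces to counting correct outcomes per algorithm, the posterior predictive probability of a correct outcome on the next object is (k + 1)/(n + 2). This is not the authors' exact framework, only the simplest instance of it.

```python
# Minimal sketch of the Bayesian idea described above (not the authors' exact
# framework): with a Beta(1, 1) prior on an algorithm's per-object success
# probability and k correct outcomes in n trials, the posterior predictive
# probability that the next detection is correct is (k + 1) / (n + 2).
def posterior_predictive(correct, trials, alpha=1.0, beta=1.0):
    """P(next outcome correct | data) under a Beta-Binomial model."""
    return (alpha + correct) / (alpha + beta + trials)

# hypothetical test results for four detection algorithms
results = {"algo_A": (46, 50), "algo_B": (44, 50), "algo_C": (39, 50), "algo_D": (48, 50)}
for name, (k, n) in results.items():
    print(f"{name}: predictive P(correct) = {posterior_predictive(k, n):.3f}")
```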
Abstract:
A profluorescent nitroxide possessing an isoindoline nitroxide moiety linked to a perylene fluorophore was developed to monitor radical-mediated degradation of melamine-formaldehyde crosslinked polyester coil coatings in an industry-standard accelerated weathering tester. Trapping of polyester-derived radicals (most likely C-radicals) generated during polymer degradation leads to fluorescent closed-shell alkoxyamines, which were used to obtain time-dependent degradation profiles and thereby assess the relative stability of different polyesters towards weathering. The nitroxide probe couples excellent thermal stability and satisfactory photostability with high sensitivity, and enables detection of free radical damage in polyesters under conditions that mimic exposure to the environment on a time scale of hours rather than the months or years required by other testing methods. There are indications that the profluorescent nitroxide undergoes partial photo-degradation in the absence of polymer-derived radicals. Unexpectedly, it was also found that UV-induced fragmentation of the NO–C bond in the closed-shell alkoxyamines leads to regeneration of the profluorescent nitroxide and the respective C-radical. The maximum fluorescence intensity that can be achieved with a given probe concentration is therefore determined not only by the amount of polyester radicals formed during accelerated weathering, but also by the light-driven side reactions of the profluorescent nitroxide and the corresponding alkoxyamine radical-trapping products. Studies to determine the optimum probe concentration in the polymer matrix revealed that aggregation and re-absorption effects lowered the fluorescence intensity at higher concentrations of the profluorescent nitroxide, while probe concentrations low enough to avoid these effects were not sufficient to trap the amount of polyester radicals formed upon weathering. The optimized experimental conditions were used to assess the impact of temperature and UV irradiance on polymer degradation during accelerated weathering.
Abstract:
Anatomically precontoured plates are commonly used to treat periarticular fractures. A well-fitting plate can be used as a tool for anatomical reduction of the fractured bone. Recent studies have highlighted that some plates fit poorly for many patients due to considerable shape variations between bones of the same anatomical site. While it is impossible to design one shape that fits all, it is also burdensome for manufacturers and hospitals to produce, store and manage multiple plate shapes without the certainty of utilization by a patient population. In this study, we investigated the number of shapes required for maximum fit within a given dataset, and whether they could be obtained by manually deforming the original plate. A distal medial tibial plate was automatically positioned on 45 individual tibiae, and the optimal deformation was determined iteratively using finite element analysis simulation. Within the studied dataset, we found that: (i) 89% fit could be achieved with four shapes, (ii) 100% fit was impossible through mechanical deformation, and (iii) the deformations required to obtain the four plate shapes were safe for the stainless steel plate for further clinical use. The proposed framework is easily transferable to other orthopaedic plates.
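As a loose illustration of the shape-selection question (how few template shapes cover a dataset), the sketch below applies a greedy coverage pass to a fabricated fit table; the study itself used iterative finite element simulation rather than this simplification.

```python
# Illustrative sketch only (the paper used iterative finite element simulation):
# given a hypothetical, precomputed table of which candidate plate shape fits
# which bone, a greedy set-cover pass shows how few shapes are needed to cover
# most of a dataset.
def choose_shapes(fits, max_shapes=4):
    """fits: dict shape_name -> set of bone indices it fits acceptably."""
    covered, chosen = set(), []
    for _ in range(max_shapes):
        best = max(fits, key=lambda s: len(fits[s] - covered))
        gain = fits[best] - covered
        if not gain:
            break
        chosen.append(best)
        covered |= gain
    return chosen, covered

# fabricated fit data for 10 tibiae and 5 candidate shapes
fits = {"S1": {0, 1, 2, 3}, "S2": {3, 4, 5}, "S3": {5, 6, 7},
        "S4": {7, 8}, "S5": {2, 4, 9}}
chosen, covered = choose_shapes(fits)
print(chosen, f"{len(covered)}/10 bones fitted")
```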
Abstract:
This paper describes a concept for a collision avoidance system for ships based on model predictive control. A finite set of alternative control behaviors is generated by varying two parameters: offsets to the guidance course angle commanded to the autopilot, and changes to the propulsion command ranging from nominal speed to full reverse. Using simulated predictions of the trajectories of the obstacles and the ship, compliance with the Convention on the International Regulations for Preventing Collisions at Sea and the collision hazards associated with each of the alternative control behaviors are evaluated on a finite prediction horizon, and the optimal control behavior is selected. Robustness to sensing error, predicted obstacle behavior, and environmental conditions can be ensured by evaluating multiple scenarios for each control behavior. The method is conceptually and computationally simple, yet quite versatile, as it can account for the dynamics of the ship, the dynamics of the steering and propulsion system, forces due to wind and ocean current, and any number of obstacles. Simulations show that the method is effective and can manage complex scenarios with multiple dynamic obstacles and uncertainty associated with sensors and predictions.
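The selection step can be pictured with the simplified sketch below: each candidate pair of course offset and propulsion (speed) factor is simulated over a short horizon against a predicted obstacle trajectory, scored with a toy hazard-plus-deviation cost, and the cheapest behavior is chosen. The candidate values, cost terms and straight-line predictions are assumptions made for illustration, not the paper's implementation.

```python
# Minimal sketch of the selection step described above, under many simplifying
# assumptions (straight-line predictions, a single obstacle, a toy cost): each
# candidate (course offset, speed factor) pair is simulated over a horizon and
# the lowest-cost behavior is chosen.
import math

COURSE_OFFSETS = [-90, -45, -15, 0, 15, 45, 90]      # degrees relative to guidance course
SPEED_FACTORS = [1.0, 0.5, 0.0, -0.5]                # nominal speed ... full reverse

def predict(pos, heading_deg, speed, horizon=60, dt=1.0):
    x, y = pos
    vx = speed * math.cos(math.radians(heading_deg))
    vy = speed * math.sin(math.radians(heading_deg))
    return [(x + vx * t * dt, y + vy * t * dt) for t in range(horizon)]

def cost(own_traj, obs_traj, offset, speed_factor, safe_dist=50.0):
    d_min = min(math.dist(p, q) for p, q in zip(own_traj, obs_traj))
    hazard = max(0.0, safe_dist - d_min) ** 2         # penalize near misses
    deviation = 0.1 * abs(offset) + 5.0 * (1.0 - speed_factor)  # prefer small changes
    return hazard + deviation

own_pos, own_course, own_speed = (0.0, 0.0), 0.0, 5.0
obstacle = predict((300.0, -150.0), 90.0, 3.0)        # crossing obstacle, predicted track

best = min(
    ((off, sf) for off in COURSE_OFFSETS for sf in SPEED_FACTORS),
    key=lambda c: cost(predict(own_pos, own_course + c[0], own_speed * c[1]),
                       obstacle, c[0], c[1]),
)
print("selected course offset, speed factor:", best)
```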
Abstract:
The historical challenge of environmental impact assessment (EIA) has been to predict project-based impacts accurately. Both EIA legislation and the practice of EIA have evolved over the last three decades in Canada, and the development of the discipline and science of environmental assessment has improved how we apply environmental assessment to complex projects. The practice of environmental assessment integrates the social and natural sciences and relies on an eclectic knowledge base from a wide range of sources. EIA methods and tools provide a means to structure and integrate knowledge in order to evaluate and predict environmental impacts.

This chapter provides a brief overview of how impacts are identified and predicted. How do we determine what aspects of the natural and social environment will be affected when a mine is excavated? How does the practitioner determine the range of potential impacts, assess whether they are significant, and predict the consequences? There are no standard answers to these questions, but there are established methods to provide a foundation for scoping and predicting the potential impacts of a project.

Of course, the community and publics play an important role in this process, and this will be discussed in subsequent chapters. In the first part of this chapter, we deal with impact identification, which involves applying scoping to critical issues and determining impact significance, baseline ecosystem evaluation techniques, and how to communicate environmental impacts. In the second part of the chapter, we discuss the prediction of impacts in relation to the complexity of the environment, ecological risk assessment, and modelling.
Abstract:
Developing an effective impact evaluation framework, managing and conducting rigorous impact evaluations, and developing a strong research and evaluation culture within development communication organisations present many challenges. This is especially so when both the community and the organisational context are continually changing and the outcomes of programs are complex and difficult to clearly identify.

This paper presents a case study from a research project being conducted from 2007-2010 that aims to address these challenges and issues, entitled Assessing Communication for Social Change: A New Agenda in Impact Assessment. Building on previous development communication projects which used ethnographic action research, this project is developing, trialling and rigorously evaluating a participatory impact assessment methodology for assessing the social change impacts of community radio programs in Nepal. The project is a collaboration between Equal Access – Nepal (EAN), Equal Access – International, local stakeholders and listeners, a network of trained community researchers, and a research team from two Australian universities. A key element of the project is the establishment of an organisational culture within EAN that values and supports the impact assessment process being developed, which is based on continuous action learning and improvement. The paper describes the situation related to monitoring and evaluation (M&E) and impact assessment before the project began, in which EAN was often reliant on time-bound studies and ‘success stories’ derived from listener letters and feedback. We then outline the various strategies used in an effort to develop stronger and more effective impact assessment and M&E systems, and the gradual changes that have occurred to date. These changes include a greater understanding of the value of adopting a participatory, holistic, evidence-based approach to impact assessment. We also critically review the many challenges experienced in this process, including:
• Tension between the pressure from donors to ‘prove’ impacts and the adoption of a bottom-up, participatory approach based on ‘improving’ programs in ways that meet community needs and aspirations.
• Resistance from the content teams to changing their existing M&E practices and to the perceived complexity of the approach.
• Lack of meaningful connection between the M&E and content teams.
• Human resource problems and lack of capacity in analysing qualitative data and reporting results.
• Contextual challenges, including extreme poverty, wide cultural and linguistic diversity, poor transport and communications infrastructure, and political instability.
• A general lack of acceptance of the importance of evaluation within Nepal, where problems tend to be accepted as fate or ‘natural’ rather than investigated.
Abstract:
The use of collaborative assignments for assessment is a risky undertaking for students and course designers. Yet the benefits, in terms of core learning outcomes, competencies, collaborative sense-making and student involvement, suggest that the effort is worthwhile. Formal descriptions and rules do little to ameliorate students' perception of risk and increased anxiety (Ryan, 2007). BEB100 Introducing Professional Learning is a faculty-wide foundation unit with over 1300 students from 19 disciplines across the Faculty of the Built Environment and Engineering (“BEE”) at the Queensland University of Technology (“QUT”), Brisbane, Australia. ‘Finding order in chaos’ outlines the approach and its justification, assessment criteria, learning resources, teamwork tools, tutorial management, communication strategies, 2007-09 Student Learning Experience Survey results, annual improvements, findings and outcomes.
Abstract:
Many contemporary currents in applied linguistics have favored discourse studies within assessment; there have been calls for cross-fertilization with other areas within applied linguistics, critiques of the positivist tradition within language testing research, and the growing impact of Conversation Analysis (CA) and sociocultural theory. This chapter focuses on the resulting increase in discourse-based studies of oral proficiency assessment techniques. These studies initially focused on the traditional oral proficiency interview but have since been extended to new test formats, including paired and group interaction. We discuss the research carried out on a number of factors in the assessment setting, including the role of the interlocutor, candidate, and rater, and the impact of tasks, task performance conditions, and rating criteria. Recent research has also concentrated more specifically on the assessment of pragmatic competence and on the applications of technology within the assessment of spoken language, including the comparability of semidirect and direct methods for such assessment and the use of computer corpora.
Abstract:
Vibration-based damage identification methods examine changes in primary modal parameters or in quantities derived from modal parameters. As one method may have advantages over another under some circumstances, a multi-criteria approach is proposed. Case studies are conducted separately on beam, plate and plate-on-beam structures. Using numerically simulated modal data obtained through finite element analysis software, indices based on flexibility and strain energy changes before and after damage are computed and used for the assessment of the state of structural health. Results show that the proposed multi-criteria method is effective in identifying damage in these structures.
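A compact sketch of a modal-flexibility-based index of the kind referred to above is given below, using fabricated two-mode data; it is illustrative only and not the specific algorithm evaluated in the case studies.

```python
# Sketch of a modal-flexibility damage index: the flexibility matrix is
# assembled from mass-normalised mode shapes, and the change in its column
# maxima before and after damage points to the damaged region.
# All modal data here are fabricated for illustration.
import numpy as np

def flexibility(freqs_hz, mode_shapes):
    """F = sum_i (1 / omega_i^2) * phi_i phi_i^T, with modes as columns of mode_shapes."""
    omegas = 2 * np.pi * np.asarray(freqs_hz)
    F = np.zeros((mode_shapes.shape[0],) * 2)
    for w, phi in zip(omegas, mode_shapes.T):
        F += np.outer(phi, phi) / w**2
    return F

phi_intact = np.array([[0.5, 1.0], [1.0, 0.3], [0.8, -0.9]])   # 3 DOFs x 2 modes
phi_damaged = np.array([[0.5, 1.0], [1.1, 0.35], [0.8, -0.9]])
f_intact, f_damaged = [12.0, 34.0], [11.4, 33.1]               # natural frequencies (Hz)

delta = flexibility(f_damaged, phi_damaged) - flexibility(f_intact, phi_intact)
index = np.max(np.abs(delta), axis=0)                          # per-DOF flexibility change
print("suspected damage near DOF", int(np.argmax(index)), index.round(6))
```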
Abstract:
This paper uses dynamic computer simulation techniques to apply a vibration-based, multi-criteria procedure for damage assessment in a multiple-girder composite bridge. In addition to changes in natural frequencies, this multi-criteria procedure incorporates two methods, namely the modal flexibility method and the modal strain energy method. Using numerically simulated modal data obtained through finite element analysis software, indices based on changes in modal flexibility and modal strain energy before and after damage are obtained and used for the assessment of the structural health state. The feasibility and capability of the approach are demonstrated through numerical studies of the proposed structure with six damage scenarios. It is concluded that the modal strain energy method is well suited to application on multiple-girder composite bridges, as evidenced through the example treated in this paper.
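For the second criterion, a similarly simplified sketch of a modal strain energy index is shown below, approximating element strain energy from mode-shape curvature on fabricated single-mode data; again this illustrates the general idea rather than the procedure applied to the bridge model.

```python
# Sketch of a modal strain energy damage indicator (fabricated, single-mode
# data): element strain energy is approximated from the curvature (second
# difference) of the mode shape, and the damaged/intact ratio per element
# serves as the damage indicator.
import numpy as np

def element_strain_energy(mode_shape, elem_len=1.0):
    """Approximate per-element strain energy fractions from mode-shape curvature."""
    curvature = np.diff(mode_shape, n=2) / elem_len**2
    energy = curvature**2
    return energy / energy.sum()        # normalised fractional energies

phi_intact = np.array([0.00, 0.31, 0.59, 0.81, 0.95, 1.00, 0.95, 0.81, 0.59, 0.31, 0.00])
phi_damaged = phi_intact.copy()
phi_damaged[4:7] *= 1.03                # slightly altered shape near midspan (damage)

ratio = element_strain_energy(phi_damaged) / element_strain_energy(phi_intact)
print("element with highest MSE ratio:", int(np.argmax(ratio)), ratio.round(3))
```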
Abstract:
The service-orientation paradigm has not only become prevalent in the software systems domain in recent years, but is also increasingly applied at the business level to restructure organisational capabilities. In this paper, we present the results of an extensive literature review of 30 approaches related to service identification and analysis in both domains. Based on the consolidation of a superset of comparison criteria for service-oriented methodologies found in the related literature, we compare and evaluate the different characteristics of service engineering methods with a focus on service analysis. Although close business-IT alignment is regarded as one of the core beneficial promises of service-orientation, our analysis suggests that there is a lack of a unified, comprehensive methodology for service identification and analysis that integrates and addresses both domains. Thus, we discuss how our results can inform directions for future research in this area.
Abstract:
Realistic estimates of short- and long-term (strategic) budgets for maintenance and rehabilitation in road asset management should consider the stochastic characteristics of asset conditions across the road network, so that the overall variability of road asset condition data is taken into account. Probability theory has been used for assessing life-cycle costs for bridge infrastructure by Kong and Frangopol (2003), Zayed et al. (2002), Liu and Frangopol (2004), Noortwijk and Frangopol (2004), and Novick (1993). Salem et al. (2003) cited the importance of the collection and analysis of existing data on total costs for all life-cycle phases of existing infrastructure, including bridges, roads, etc., and the use of realistic methods for calculating the probable useful life of this infrastructure. Zayed et al. (2002) reported conflicting results in life-cycle cost analysis using deterministic and stochastic methods. Frangopol et al. (2001) suggested that additional research was required to develop better life-cycle models and tools to quantify the risks and benefits associated with infrastructure. It is evident from the review of the literature that there is very limited information on methodologies that use the stochastic characteristics of asset condition data for assessing budgets/costs for road maintenance and rehabilitation (Abaza 2002; Salem et al. 2003; Zhao et al. 2004). Given this limited information in the research literature, this report describes and summarises the methodologies presented in each publication and also suggests a methodology for the current research project funded under the Cooperative Research Centre for Construction Innovation (CRC CI), project no. 2003-029-C.
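As a purely illustrative sketch of how stochastic condition data can feed a budget estimate (not a method taken from the report or the cited studies), a Monte Carlo simulation over hypothetical road segments is shown below.

```python
# Purely illustrative sketch: a Monte Carlo view of how stochastic asset-condition
# data can feed a maintenance budget estimate, by sampling pavement condition for
# each road segment and costing treatments whenever condition falls below an
# intervention threshold. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
N_SEGMENTS, N_SIMULATIONS, YEARS = 200, 5000, 10
THRESHOLD, UNIT_COST = 4.0, 120_000            # condition trigger, $ per treatment

def simulate_budget():
    condition = rng.normal(7.0, 1.0, N_SEGMENTS)          # initial condition index (0-10)
    total = 0.0
    for _ in range(YEARS):
        condition -= rng.gamma(shape=2.0, scale=0.15, size=N_SEGMENTS)  # stochastic deterioration
        needs_work = condition < THRESHOLD
        total += needs_work.sum() * UNIT_COST
        condition[needs_work] = 8.5                        # rehabilitation resets condition
    return total

budgets = np.array([simulate_budget() for _ in range(N_SIMULATIONS)])
print(f"10-year budget: mean ${budgets.mean():,.0f}, 90th percentile ${np.percentile(budgets, 90):,.0f}")
```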