937 results for area-based matching
Abstract:
This paper proposes a new approach for state estimation of the angles and frequencies of equivalent areas in large power systems with synchronized phasor measurement units. After defining coherent generators and their corresponding areas, generators are aggregated and system reduction is performed in each area of the interconnected power system. The structure of the reduced system is obtained from the characteristics of the reduced linear model and measurement data to form the non-linear model of the reduced system. A Kalman estimator is then designed for the reduced system to provide an equivalent dynamic system state estimation using the synchronized phasor measurement data. The approach is simulated on two test systems to evaluate its feasibility.
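For illustration, the following is a minimal sketch of a discrete-time Kalman estimator of the general kind described above, tracking an equivalent area's angle and frequency deviation from noisy PMU angle measurements. The state-space matrices, noise covariances and measurement stream are illustrative assumptions, not the reduced models used in the paper.

```python
import numpy as np

# Minimal sketch (not the authors' implementation): a discrete-time Kalman
# filter tracking an equivalent area's angle and frequency deviation from
# noisy PMU angle measurements. All model matrices below are assumptions.
dt = 0.02                                  # PMU reporting interval (s), assumed
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition: [angle, freq. deviation]
H = np.array([[1.0, 0.0]])                 # PMUs observe the angle only (assumed)
Q = np.diag([1e-6, 1e-4])                  # process noise covariance (assumed)
R = np.array([[1e-3]])                     # measurement noise covariance (assumed)

x = np.zeros(2)                            # initial state estimate
P = np.eye(2)                              # initial estimate covariance

def kalman_step(x, P, z):
    """One predict/update cycle for a single angle measurement z (rad)."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    y = z - H @ x_pred                     # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + (K @ y).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Feed a synthetic stream of noisy angle measurements
rng = np.random.default_rng(0)
angles = 0.1 + 0.05 * np.sin(0.5 * np.arange(200) * dt) + 0.03 * rng.standard_normal(200)
for z in angles:
    x, P = kalman_step(x, P, np.array([z]))
print("estimated angle and frequency deviation:", x)
```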
Abstract:
This paper addresses the development of an ingenious decision support system (iDSS) based on the methodology of survey instruments and the identification, using statistical analysis, of significant variables to be used in the iDSS. A survey was undertaken with pregnant women, and a factorial experimental design was chosen to determine the sample size. Variables with good reliability in any one of the statistical techniques, such as Chi-square, Cronbach’s α and Classification Tree, were incorporated in the iDSS. The ingenious decision support system was implemented with Visual Basic as the front end and Microsoft SQL Server as the back end. Outcomes of the ingenious decision support system include advice on Symptoms, Diet and Exercise for pregnant women.
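As a hedged illustration of the variable-screening step, the sketch below applies a chi-square test of association and computes Cronbach's alpha, two of the reliability checks named above; the contingency counts and Likert responses are synthetic placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)

# Chi-square: association between a candidate variable and an outcome category
contingency = np.array([[30, 10],          # hypothetical counts only
                        [12, 28]])
chi2, p_value, dof, _ = chi2_contingency(contingency)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of scores."""
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    k = items.shape[1]
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Five Likert-style items answered by 40 respondents (synthetic)
responses = rng.integers(1, 6, size=(40, 5)).astype(float)
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```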
Abstract:
Universities often struggle to satisfy students’ need for feedback. This is an area where student satisfaction with courses of study can be low. Yet it is clear that one of the properties of good teaching is giving the highest quality feedback on student work. The term ‘feedback’ though is most commonly associated with summative assessment given by a teacher after work is completed. The student can often be a passive participant in the process. This paper looks at the implementation of a web based interactive scenario completed by students prior to summative assessment. It requires students to participate actively to develop and improve their legal problem solving skills. Traditional delivery of legal education focuses on print and an instructor who conveys the meaning of the written word to students. Today, mixed modes of teaching are often preferred and they can provide enhanced opportunities for feeding forward with greater emphasis on what students do. Web based activities allow for flexible delivery; they are accessible off campus, at a time that suits the student and may be completed by students at their own pace. This paper reports on an online interactive activity which provides valuable formative feedback necessary to allow for successful completion of a final problem solving assignment. It focuses on how the online activity feeds forward and contributes to the development of legal problem solving skills. Introduction to Law is a unit designed and introduced for completion by undergraduate students from faculties other than law but is focused most particularly on students enrolled in the Bachelor of Entertainment Industries degree, a joint initiative of the faculties of Creative Industries, Business and Law at the Queensland University of Technology in Australia. The final (and major) assessment for the unit is an assignment requiring students to explain the legal consequences of particular scenarios. A number of cost effective web based interactive scenarios have been developed to support the unit’s classroom activities. The tool commences with instruction on problem solving method. Students then view the stimulus which is a narrative produced in the form of a music video clip. A series of questions are posed which guide students through the process and they can compare their responses with sample answers provided. The activity clarifies the problem solving method and expectations for the summative assessment and allows students to practise the skill. The paper reports on the approach to teaching and learning taken in the unit including the design process and implementation of the activity. It includes an evaluation of the activity with respect to its effectiveness as a tool to feed forward and reflects on the implications for the teaching of law in higher education.
Abstract:
BACKGROUND: There is evidence that children's decisions to smoke are influenced by family and friends. OBJECTIVES: To assess the effectiveness of interventions to help family members to strengthen non-smoking attitudes and promote non-smoking by children and other family members. SEARCH STRATEGY: We searched 14 electronic bibliographic databases, including the Cochrane Tobacco Addiction Group specialized register, MEDLINE, EMBASE, PsycINFO and CINAHL. We also searched unpublished material, and the reference lists of key articles. We performed both free-text Internet searches and targeted searches of appropriate websites, and we hand-searched key journals not available electronically. We also consulted authors and experts in the field. The most recent search was performed in July 2006. SELECTION CRITERIA: Randomized controlled trials (RCTs) of interventions with children (aged 5-12) or adolescents (aged 13-18) and family members to deter the use of tobacco. The primary outcome was the effect of the intervention on the smoking status of children who reported no use of tobacco at baseline. Included trials had to report outcomes measured at least six months from the start of the intervention. DATA COLLECTION AND ANALYSIS: We reviewed all potentially relevant citations and retrieved the full text to determine whether the study was an RCT and matched our inclusion criteria. Two authors independently extracted study data and assessed them for methodological quality. The studies were too limited in number and quality to undertake a formal meta-analysis, and we present a narrative synthesis. MAIN RESULTS: We identified 19 RCTs of family interventions to prevent smoking. We identified five RCTs in Category 1 (minimal risk of bias on all counts); nine in Category 2 (a risk of bias in one or more areas); and five in Category 3 (risks of bias in design and execution such that reliable conclusions cannot be drawn from the study). Considering the fourteen Category 1 and 2 studies together: (1) four of the nine that tested a family intervention against a control group had significant positive effects, but one showed significant negative effects; (2) one of the five RCTs that tested a family intervention against a school intervention had significant positive effects; (3) none of the six that compared the incremental effects of a family plus a school programme to a school programme alone had significant positive effects; (4) the one RCT that tested a family tobacco intervention against a family non-tobacco safety intervention showed no effects; and (5) the one trial that used general risk reduction interventions found the group which received the parent and teen interventions had less smoking than the one that received only the teen intervention (there was no tobacco intervention but tobacco outcomes were measured). For the included trials, the amount of implementer training and the fidelity of implementation are related to positive outcomes, but the number of sessions is not. AUTHORS' CONCLUSIONS: Some well-executed RCTs show family interventions may prevent adolescent smoking, but RCTs which were less well executed had mostly neutral or negative results. There is thus a need for well-designed and executed RCTs in this area.
Abstract:
There have been many improvements in Australian engineering education since the 1990s. However, given the recent drive for assuring the achievement of identified academic standards, more progress needs to be made, particularly in the area of evidence-based assessment. This paper reports on initiatives gathered from the literature and engineering academics in the USA, through an Australian National Teaching Fellowship program. The program aims to establish a process to help academics in designing and implementing evidence-based assessments that meet the needs of not only students and the staff that teach them, but also industry as well as accreditation bodies. The paper also examines the kinds and levels of support necessary for engineering academics, especially early career ones, to help meet the expectations of the current drive for assured quality and standards of both research and teaching. Academics are experiencing competing demands on their time and energy with very high expectations in research performance and increased teaching responsibilities, although many are researchers who have not had much pedagogic training. Based on the literature and investigation of relevant initiatives in the USA, we conducted interviews with several identified experts and change agents who have wrought effective academic cultural change within their institutions and beyond. These reveal that assuring the standards and quality of student learning outcomes through evidence-based assessments cannot be appropriately addressed without also addressing the issue of pedagogic training for academic staff. To be sustainable, such training needs to be complemented by a culture of on-going mentoring support from senior academics, formalised through the university administration, so that mentors are afforded resources, time, and appropriate recognition.
Abstract:
Reliable pollutant build-up prediction plays a critical role in the accuracy of urban stormwater quality modelling outcomes. However, water quality data collection is resource demanding compared to streamflow data monitoring, where a greater quantity of data is generally available. Consequently, available water quality data sets span only relatively short time scales unlike water quantity data. Therefore, the ability to take due consideration of the variability associated with pollutant processes and natural phenomena is constrained. This in turn gives rise to uncertainty in the modelling outcomes as research has shown that pollutant loadings on catchment surfaces and rainfall within an area can vary considerably over space and time scales. Therefore, the assessment of model uncertainty is an essential element of informed decision making in urban stormwater management. This paper presents the application of a range of regression approaches, namely ordinary least squares regression, weighted least squares regression and Bayesian weighted least squares regression, for the estimation of uncertainty associated with pollutant build-up prediction using limited data sets. The study outcomes confirmed that the use of ordinary least squares regression with fixed model inputs and limited observational data may not provide realistic estimates. The stochastic nature of the dependent and independent variables needs to be taken into consideration in pollutant build-up prediction. It was found that the use of the Bayesian approach along with the Monte Carlo simulation technique provides a powerful tool, which attempts to make the best use of the available knowledge in the prediction and thereby presents a practical solution to counteract the limitations which are otherwise imposed on water quality modelling.
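The following sketch illustrates, under stated assumptions, the flavour of the compared approaches: an assumed power-law build-up model is fitted by ordinary and weighted least squares, and residual resampling via Monte Carlo is used to express prediction uncertainty. The model form, data points and weights are hypothetical and are not the paper's Bayesian formulation.

```python
import numpy as np

# Sketch with assumed data: fit a commonly used build-up form B = a * D^b
# (D = antecedent dry days) in log space by OLS and WLS, then use Monte Carlo
# resampling of residuals to express prediction uncertainty.
rng = np.random.default_rng(2)

dry_days = np.array([1, 2, 3, 5, 7, 10, 14], dtype=float)      # hypothetical
buildup = np.array([0.8, 1.3, 1.6, 2.1, 2.4, 2.9, 3.3])        # g/m^2, hypothetical

X = np.column_stack([np.ones_like(dry_days), np.log(dry_days)])
y = np.log(buildup)

# Ordinary least squares
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Weighted least squares (weights are an assumption, e.g. measurement precision)
w = np.array([1, 1, 2, 2, 2, 3, 3], dtype=float)
W = np.diag(w)
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Monte Carlo: resample residuals to obtain a distribution of predictions
resid = y - X @ beta_ols
preds = []
for _ in range(2000):
    y_star = X @ beta_ols + rng.choice(resid, size=len(y), replace=True)
    b_star, *_ = np.linalg.lstsq(X, y_star, rcond=None)
    preds.append(np.exp(b_star[0] + b_star[1] * np.log(9.0)))   # build-up at 9 dry days
lo, hi = np.percentile(preds, [5, 95])
print("OLS coefficients:", beta_ols, "WLS coefficients:", beta_wls)
print(f"90% interval for build-up after 9 dry days: [{lo:.2f}, {hi:.2f}] g/m^2")
```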
Abstract:
Changing environments present a number of challenges to mobile robots, one of the most significant being mapping and localisation. This problem is particularly significant in vision-based systems where illumination and weather changes can cause feature-based techniques to fail. In many applications only sections of an environment undergo extreme perceptual change. Some range-based sensor mapping approaches exploit this property by combining occasional place recognition with the assumption that odometry is accurate over short periods of time. In this paper, we develop this idea in the visual domain, by using occasional vision-driven loop closures to infer loop closures in nearby locations where visual recognition is difficult due to extreme change. We demonstrate successful map creation in an environment in which change is significant but constrained to one area, and where both the vanilla CAT-Graph and a Sum of Absolute Differences matcher fail; we use the described techniques to link dissimilar images from matching locations, and test the robustness of the system against false inferences.
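As an illustration of the baseline matcher mentioned above, the sketch below implements a simple Sum of Absolute Differences comparison between downsampled, normalised grey images; it is not the CAT-Graph system itself, and the images are synthetic.

```python
import numpy as np

def normalise(img: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance normalisation of a small grey image."""
    return (img - img.mean()) / (img.std() + 1e-8)

def sad_score(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Sum of Absolute Differences; lower scores indicate more similar images."""
    return float(np.abs(normalise(img_a) - normalise(img_b)).mean())

def best_match(query: np.ndarray, database: list[np.ndarray]) -> int:
    """Index of the database image with the smallest SAD distance."""
    scores = [sad_score(query, ref) for ref in database]
    return int(np.argmin(scores))

# Synthetic example: match a noisy copy of database image 3
rng = np.random.default_rng(3)
database = [rng.random((32, 64)) for _ in range(10)]
query = database[3] + 0.1 * rng.standard_normal((32, 64))
print("best match index:", best_match(query, database))
```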
Abstract:
The dynamic capabilities view (DCV) focuses on renewal of firms’ strategic knowledge resources so as to sustain competitive advantage within turbulent markets. Within the context of the DCV, the focus of knowledge management (KM) is to develop the knowledge management capability (KMC) through deploying knowledge governance mechanisms that are conducive to facilitating knowledge processes so as to produce superior business performance over time. The essence of KM performance evaluation is to assess how well the KMC is configured with knowledge governance mechanisms and processes that enable a firm to achieve superior performance through matching its knowledge base with market needs. However, little research has been undertaken to evaluate KM performance from the DCV perspective. This study employed a survey study design and adopted hypothesis-testing approaches to develop a capability-based KM evaluation framework (CKMEF) that upholds the basic assertions of the DCV. Under the governance of the framework, a KM index (KMI) and a KM maturity model (KMMM) were derived not only to indicate the extent to which a firm’s KM implementations fulfill its strategic objectives and to identify the evolutionary phase of its KMC, but also to benchmark the KMC within the research population. The research design ensured that the evaluation framework and instruments have statistical significance and good generalizability to be applied in the research population, namely construction firms operating in the dynamic Hong Kong construction market. The study demonstrated the feasibility of quantitatively evaluating the development of the KMC and revealing the performance heterogeneity associated with that development.
Abstract:
This work has led to the development of empirical mathematical models to quantitatively predict changes in the morphology of an osteocyte-like cell line (MLO-Y4) in culture. MLO-Y4 cells were cultured at low density and the changes in morphology recorded over 11 hours. Cell area and three dimensionless shape features, namely aspect ratio, circularity and solidity, were then determined using widely accepted image analysis software (ImageJ). Based on the data obtained from the imaging analysis, mathematical models were developed using the non-linear regression method. The developed mathematical models accurately predict the morphology of MLO-Y4 cells for different culture times and can, therefore, be used as a reference model for analyzing MLO-Y4 cell morphology changes within various biological/mechanical studies, as necessary.
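A minimal sketch of the general fitting procedure is shown below, assuming a saturating-exponential model form and synthetic area measurements purely for demonstration; the authors' actual fitted models and data are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating_growth(t, a, b, c):
    """Feature value approaching a plateau (a + b) as culture time increases."""
    return a + b * (1 - np.exp(-c * t))

hours = np.linspace(0, 11, 12)                               # culture time (h)
area = np.array([410, 455, 500, 540, 565, 590, 605, 615,
                 625, 630, 634, 637], dtype=float)           # um^2, synthetic values

# Non-linear regression of the assumed model against the synthetic data
params, covariance = curve_fit(saturating_growth, hours, area, p0=(400, 250, 0.3))
a, b, c = params
print(f"fitted model: area(t) = {a:.1f} + {b:.1f} * (1 - exp(-{c:.3f} t))")
print("predicted area at t = 8 h:", saturating_growth(8.0, *params))
```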
Abstract:
This paper reports on the new literacy demands in the middle years of schooling project, in which the affordances of place-based pedagogy are being explored through teacher inquiries and classroom-based design experiments. The school is located within a large-scale urban renewal project in which houses are being demolished and families relocated. The original school buildings have recently been demolished and replaced by a large ‘superschool’ which serves a bigger student population from a wider area. Drawing on both quantitative and qualitative data, the teachers reported that the language and literacy learning of the students involved in the project (including a majority of students learning English as a second language) exceeded their expectations. The project provided the motivation for students to develop their oral language repertoires, by involving them in processes such as conducting interviews with adults for their oral histories, questioning the project manager in regular meetings, and reporting to their peers and the wider community at school assemblies. At the same time, students’ written and multimodal documentation of changes in the neighbourhood and the school grounds extended their literate and semiotic repertoires as they produced books, reports, films, PowerPoint presentations, visual designs and models of structures.
Abstract:
In recent years, sparse representation based classification (SRC) has received much attention in face recognition with multiple training samples of each subject. However, it cannot be easily applied to a recognition task with insufficient training samples under uncontrolled environments. On the other hand, cohort normalization, as a way of measuring the degradation effect under challenging environments in relation to a pool of cohort samples, has been widely used in the area of biometric authentication. In this paper, for the first time, we introduce cohort normalization to SRC-based face recognition with insufficient training samples. Specifically, a user-specific cohort set is selected to normalize the raw residual, which is obtained from comparing the test sample with its sparse representations corresponding to the gallery subject, using polynomial regression. Experimental results on the AR and FERET databases show that cohort normalization can bring SRC much robustness against various forms of degradation factors for undersampled face recognition.
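The sketch below gives a hypothetical rendering of the normalisation step: a raw residual is adjusted using a polynomial fitted to a subject's residuals over a cohort set. The residual values are synthetic, and a real pipeline would obtain them from the sparse-coding stage, which is not shown here.

```python
import numpy as np

def cohort_normalise(raw_residual: float, cohort_residuals: np.ndarray,
                     degree: int = 2) -> float:
    """Normalise a raw residual relative to a polynomial trend fitted over
    the subject's sorted cohort residuals (an illustrative formulation)."""
    sorted_scores = np.sort(cohort_residuals)
    ranks = np.arange(len(sorted_scores), dtype=float)
    coeffs = np.polyfit(ranks, sorted_scores, degree)   # polynomial regression
    trend = np.polyval(coeffs, ranks)
    # Express the raw residual relative to the fitted cohort trend
    return float((raw_residual - trend.mean()) / (trend.std() + 1e-8))

rng = np.random.default_rng(4)
cohort = 0.6 + 0.1 * rng.standard_normal(50)   # synthetic residuals vs. cohort samples
print("normalised residual:", cohort_normalise(0.35, cohort))
```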
Abstract:
IEEE 802.11 based wireless local area networks (WLANs) are being increasingly deployed for soft real-time control applications. However, they do not provide quality-of-service (QoS) differentiation to meet the requirements of periodic real-time traffic flows, a unique feature of real-time control systems. This problem becomes evident particularly when the network is under congested conditions. Addressing this problem, a media access control (MAC) scheme, QoS-dif, is proposed in this paper to enable QoS differentiation in IEEE 802.11 networks for different types of periodic real-time traffic flows. It extends the IEEE 802.11e Enhanced Distributed Channel Access (EDCA) by introducing a QoS differentiation method to deal with different types of periodic traffic that have different QoS requirements for real-time control applications. The effectiveness of the proposed QoS-dif scheme is demonstrated through comparisons with the IEEE 802.11e EDCA mechanism.
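As a toy illustration of contention-window-based QoS differentiation in the EDCA spirit (not the proposed QoS-dif scheme), the sketch below shows how a traffic class drawing backoffs from a smaller contention window tends to win channel access; all parameters are assumed.

```python
import random

# Toy model: each class draws a random backoff; the smallest backoff wins the
# contention round. Contention window sizes are illustrative assumptions.
random.seed(5)

CLASSES = {
    "high_priority_periodic": {"cw_min": 7},
    "low_priority_periodic": {"cw_min": 31},
}

def contend(rounds: int = 10000) -> dict:
    wins = {name: 0 for name in CLASSES}
    for _ in range(rounds):
        backoffs = {name: random.randint(0, cfg["cw_min"])
                    for name, cfg in CLASSES.items()}
        # Ties are resolved in dict order here, a simplification of real backoff rules
        winner = min(backoffs, key=backoffs.get)
        wins[winner] += 1
    return wins

print(contend())   # the class with the smaller contention window wins most rounds
```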
Abstract:
This paper provides a new general approach for defining coherent generators in power systems based on their coherency in low-frequency inter-area modes. Instead of a single fault, the disturbance is considered to be distributed throughout the network by applying random load changes, a random-walk representation of real loads, and coherent generators are obtained by spectrum analysis of the generators' velocity variations. In order to find the coherent areas and their borders in the interconnected network, non-generating buses are assigned to each group of coherent generators using similar coherency detection techniques. The method is evaluated on two test systems, and coherent generators and areas are obtained for different operating points to provide a more accurate grouping approach that is valid across a range of realistic operating points of the system.
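A simplified sketch of spectrum-based coherency grouping is given below: generators whose band-limited speed spectra are highly correlated are grouped together. The speed signals, frequency band and correlation threshold are assumptions for demonstration only, not the paper's detection technique.

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(0, 20, 0.01)

# Synthetic speeds: generators 0-1 share a 0.5 Hz mode, generators 2-3 share 0.8 Hz
speeds = np.vstack([
    np.sin(2 * np.pi * 0.5 * t) + 0.05 * rng.standard_normal(t.size),
    np.sin(2 * np.pi * 0.5 * t + 0.1) + 0.05 * rng.standard_normal(t.size),
    np.sin(2 * np.pi * 0.8 * t) + 0.05 * rng.standard_normal(t.size),
    np.sin(2 * np.pi * 0.8 * t + 0.2) + 0.05 * rng.standard_normal(t.size),
])

# Keep only the low-frequency band (0.1-1.0 Hz) of each magnitude spectrum
freqs = np.fft.rfftfreq(t.size, d=0.01)
band = (freqs > 0.1) & (freqs < 1.0)
spectra = np.abs(np.fft.rfft(speeds, axis=1))[:, band]

# Generators with highly correlated band-limited spectra are grouped together
corr = np.corrcoef(spectra)
threshold = 0.9
groups = [sorted(np.flatnonzero(corr[i] > threshold)) for i in range(len(speeds))]
print(np.round(corr, 2))
print("coherent group per generator:", groups)
```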
Abstract:
In the decision-making of multi-area ATC (Available Transfer Capacity) in an electricity market environment, the existing resources of the transmission network should be optimally dispatched and employed in a coordinated way, on the premise that secure system operation is maintained and the associated risk is controllable. Non-sequential Monte Carlo simulation is used to determine the ATC probability density distribution of specified areas under the influence of several uncertainty factors; based on this, a coordinated probabilistic optimal decision-making model with the maximal risk benefit as its objective is developed for multi-area ATC. The NSGA-II is applied to calculate the ATC of each area, considering the risk cost caused by relevant uncertainty factors and the synchronous coordination among areas. The essential characteristics of the developed model and the employed algorithm are illustrated with the IEEE 118-bus test system. Simulation results show that the risk of multi-area ATC decision-making is influenced by the uncertainties in power system operation and the relative importance degrees of the different areas.
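The sketch below illustrates, with assumed numbers, a non-sequential Monte Carlo estimate of a single area's ATC distribution under uncertain line availability and scheduled transfer; it is a reduced one-tie-line illustration, not the IEEE 118-bus study or the NSGA-II optimisation reported above.

```python
import numpy as np

rng = np.random.default_rng(7)
samples = 20000

tie_line_limit = 500.0                             # MW, assumed thermal limit
line_available = rng.random(samples) < 0.98        # simple forced-outage model (assumed)
base_transfer = rng.normal(300.0, 40.0, samples)   # MW already scheduled (assumed uncertain)

# ATC sample: remaining headroom when the tie line is in service, zero otherwise
atc = np.where(line_available, np.clip(tie_line_limit - base_transfer, 0, None), 0.0)

print(f"mean ATC: {atc.mean():.1f} MW")
print(f"5th/95th percentiles: {np.percentile(atc, [5, 95])}")
print(f"probability ATC < 150 MW: {(atc < 150).mean():.3f}")
```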
Abstract:
In this paper we use the SeqSLAM algorithm to address the question of how little, and what quality of, visual information is needed to localize along a familiar route. We conduct a comprehensive investigation of place recognition performance on seven datasets while varying image resolution (primarily 1 to 512 pixel images), pixel bit depth, field of view, motion blur, image compression and matching sequence length. Results confirm that place recognition using single images or short image sequences is poor, but improves to match or exceed current benchmarks as the matching sequence length increases. We then present place recognition results from two experiments where low-quality imagery is directly caused by sensor limitations; in one, place recognition is achieved along an unlit mountain road by using noisy, long-exposure blurred images, and in the other, two single pixel light sensors are used to localize in an indoor environment. We also show failure modes caused by pose variance and sequence aliasing, and discuss ways in which they may be overcome. By showing how place recognition along a route is feasible even with severely degraded image sequences, we hope to provoke a re-examination of how we develop and test future localization and mapping systems.
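A simplified sketch of the sequence-matching idea behind SeqSLAM (not the full algorithm) is shown below: single-image differences over tiny synthetic images are noisy, but summing differences along a candidate alignment of a short image sequence recovers the correct match. Image sizes, noise level and sequence length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
route = [rng.random((4, 8)) for _ in range(100)]                 # reference traverse
query = [route[i] + 0.4 * rng.standard_normal((4, 8)) for i in range(60, 70)]

def diff(a, b):
    """Normalised absolute difference between two tiny grey images."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return np.abs(a - b).mean()

# Difference matrix: rows = query frames, columns = reference frames
D = np.array([[diff(q, r) for r in route] for q in query])

# Single-image matching vs. sequence matching (constant-velocity alignment)
single_best = int(np.argmin(D[0]))
seq_scores = [D[np.arange(len(query)), start + np.arange(len(query))].sum()
              for start in range(len(route) - len(query))]
sequence_best = int(np.argmin(seq_scores))
print("single-image match:", single_best)
print("sequence match:", sequence_best)   # the sequence match should recover index 60
```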