989 results for A-not-B error


Relevance: 30.00%

Abstract:

The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm, including a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and straightforward quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by computing the average tracking model of these coefficients for the stochastic gradient lattice algorithm. The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using these analytical results, we establish a new property of adaptive lattice filters: the polynomial-order-reducing property, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Two examples show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that with this technique a better probability of detection is obtained for the reduced-order phase signals than with the traditional energy detector. It is also empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficient approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated, and we show that this technique achieves a lower mean square error for the estimated frequencies at high signal-to-noise ratios than the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signal, i.e., impulsive autoregressive processes with alpha-stable distributions.
The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which gives desirable results for finite-variance input signals (such as frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes (because it uses the minimum mean-square error criterion). To deal with such problems, the minimum dispersion criterion, fractional lower order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean p-norm lattice algorithm and its normalized version, are proposed for lattice filters based on the fractional lower order moments. Simulation results show that the proposed algorithms achieve faster convergence in parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness than many other algorithms. We also discuss the effect of the impulsiveness of stable processes on the misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated only through extensive computer simulations.
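
To make the lattice recursions concrete, the sketch below implements a least-mean p-norm stochastic gradient lattice filter for real-valued signals. The abstract does not give the exact update equations, normalization or step size, so the per-stage cost (|f|^p + |b|^p), the parameter values and the function name are illustrative assumptions, not the thesis's algorithm verbatim.

```python
import numpy as np

def lmp_lattice(x, num_stages=4, mu=0.01, p=1.5):
    """Sketch of a least-mean p-norm adaptive lattice filter.

    x          : real-valued input signal (1-D array)
    num_stages : lattice order
    mu         : stochastic gradient step size
    p          : fractional lower-order exponent (choose 1 < p < alpha)
    Returns the trajectory of the reflection coefficients.
    """
    k = np.zeros(num_stages)           # reflection coefficients
    b_prev = np.zeros(num_stages + 1)  # backward errors delayed one sample
    history = []

    def grad(e):                       # d|e|^p/de = p * |e|^(p-1) * sign(e)
        return p * np.sign(e) * np.abs(e) ** (p - 1)

    for n in range(len(x)):
        f = np.empty(num_stages + 1)
        b = np.empty(num_stages + 1)
        f[0] = b[0] = x[n]
        for m in range(num_stages):
            # order-recursive forward/backward prediction errors
            f[m + 1] = f[m] + k[m] * b_prev[m]
            b[m + 1] = b_prev[m] + k[m] * f[m]
            # stochastic gradient step on the dispersion-like cost
            k[m] -= mu * (grad(f[m + 1]) * b_prev[m] + grad(b[m + 1]) * f[m])
        b_prev = b.copy()
        history.append(k.copy())
    return np.array(history)
```

With p = 2 this update collapses to the ordinary stochastic gradient lattice algorithm; choosing p below the stable characteristic exponent alpha is what keeps the gradient finite for impulsive inputs.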

Relevance: 30.00%

Abstract:

This thesis is a problematisation of the teaching of art to young children. To problematise a domain of social endeavour is, in Michel Foucault's terms, to ask how we come to believe that "something ... can and must be thought" (Foucault, 1985:7). The aim is to document what counts (i.e., what is sayable, thinkable, feelable) as proper art teaching in Queensland at this point of historical time. In this sense, the thesis is a departure from more recognisable research on 'more effective' teaching, including critical studies of art teaching and early childhood teaching. It treats 'good teaching' as an effect of moral training made possible through disciplinary discourses organised around certain epistemic rules at a particular place and time. There are four key tasks accomplished within the thesis. The first is to describe an event which is not easily resolved by means of orthodox theories or explanations, either liberal-humanist or critical ones. The second is to indicate how poststructuralist understandings of the self and social practice enable fresh engagements with uneasy pedagogical moments. The third is the documentation of an empirical investigation into texts generated by early childhood teachers, artists and parents about what constitutes 'good practice' in art teaching. Twenty-two participants produced text to tell and re-tell the meaning of 'proper' art education, from different subject positions. Rather than attempting to capture 'typical' representations of art education in the early years, a pool of 'exemplary' teachers, artists and parents was chosen, using "purposeful sampling", and from this pool, three videos were filmed and later discussed by the audience of participants. The fourth task involves developing a means of analysing these texts in such a way as to allow a 're-description' of the field of art teaching, by attempting to foreground the epistemic rules through which such teacher-generated texts come to count as true, i.e., as propriety in art pedagogy. This analysis drew on Donna Haraway's (1995) understanding of 'ironic' categorisation to hold the tensions within the propositions inside the categories of analysis, rather than setting these up as discursive oppositions. The analysis is therefore ironic in the sense that Richard Rorty (1989) understands the term to apply to social scientific research. Three 'ironic' categories were argued to inform the discursive construction of 'proper' art teaching. It is argued that a teacher should (a) Teach without teaching; (b) Manufacture the natural; and (c) Train for creativity. These ironic categories work to undo modernist assumptions about theory/practice gaps and finding a 'balance' between oppositional binary terms. They were produced through a discourse-theoretical reading of the texts generated by the participants in the study, texts that these same individuals use as a means of discipline and self-training as they work to teach properly. In arguing the usefulness of such approaches to empirical data analysis, the thesis challenges early childhood research in arts education in relation to its capacity to deal with ambiguity and to acknowledge contradiction in the work of teachers and in their explanations for what they do. It works as a challenge at a range of levels: at the level of theorising, of method and of analysis.
In opening up thinking about normalised categories, and questioning traditional Western philosophy and the grand narratives of early childhood art pedagogy, it makes a space for re-thinking art pedagogy as "a game of truth and error" (Foucault, 1985). In doing so, it opens up a space for thinking how art education might be otherwise.

Relevance: 30.00%

Abstract:

BACKGROUND AND PURPOSE It has been proposed that BRL37344, SR58611 and CGP12177 activate β3-adrenoceptors in human atrium to increase contractility and L-type Ca2+ current (ICa-L). β3-adrenoceptor agonists are potentially beneficial for the treatment of a variety of diseases, but concomitant cardiostimulation would be potentially harmful. It has also been proposed that (-)-CGP12177 activates the low-affinity binding site of the β1-adrenoceptor in human atrium. We therefore used BRL37344, SR58611 and (-)-CGP12177 with selective β-adrenoceptor subtype antagonists to clarify the cardiostimulant β-adrenoceptor subtypes in human atrium. EXPERIMENTAL APPROACH Human right atrium was obtained from patients without heart failure undergoing coronary artery bypass or valve surgery. Cardiomyocytes were prepared to test the effects of BRL37344, SR58611 and CGP12177 on ICa-L. Contractile effects were determined on right atrial trabeculae. KEY RESULTS BRL37344 increased force, and this increase was antagonized by blockade of β1- and β2-adrenoceptors but not by blockade of β3-adrenoceptors with the β3-adrenoceptor-selective antagonist L-748,337 (1 μM). The β3-adrenoceptor agonist SR58611 (1 nM–10 μM) did not affect atrial force. BRL37344 and SR58611 did not increase ICa-L at 37°C, but did at 24°C; this increase was prevented by L-748,337. (-)-CGP12177 increased force and ICa-L at both 24°C and 37°C; these effects were prevented by (-)-bupranolol (1–10 μM) but not by L-748,337. CONCLUSIONS AND IMPLICATIONS We conclude that the inotropic responses to BRL37344 are mediated through β1- and β2-adrenoceptors. The inotropic and ICa-L responses to (-)-CGP12177 are mediated through the low-affinity site (β1L) of the β1-adrenoceptor. β3-adrenoceptor-mediated increases in ICa-L are restricted to low temperatures. Human atrial β3-adrenoceptors do not change contractility or ICa-L at physiological temperature.

Relevance: 30.00%

Abstract:

As order dependencies between process tasks can get complex, it is easy to make mistakes in process model design, especially behavioral ones such as deadlocks. Notions such as soundness formalize behavioral errors, and tools exist that can identify such errors. However, these tools do not provide assistance with correcting the process models. Error correction can be very challenging, as the intentions of the process modeler are not known and there may be many ways in which an error can be corrected. We present a novel technique for automatic error correction in process models based on simulated annealing. Via this technique, a number of process model alternatives are identified that resolve one or more errors in the original model. The technique is implemented and validated on a sample of industrial process models. The tests show that at least one sound solution can be found for each input model and that the response times are short.
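
The abstract does not spell out the cost function, the edit operations or the cooling schedule, so the sketch below shows only the generic simulated-annealing skeleton such a technique can be built on; errors and perturb are hypothetical callbacks (e.g. a soundness checker and a random model edit), not the paper's implementation.

```python
import math
import random

def anneal_model(model, errors, perturb, temp=1.0, cooling=0.95, steps=500):
    """Generic simulated-annealing skeleton for model repair.

    model   : initial (unsound) process model
    errors  : callback returning the number of behavioural errors in a model
    perturb : callback returning a randomly edited copy of a model
    """
    current, current_cost = model, errors(model)
    best, best_cost = current, current_cost
    for _ in range(steps):
        candidate = perturb(current)       # e.g. add/remove/rewire an edge
        cost = errors(candidate)
        # always accept improvements; accept deteriorations with a
        # probability that shrinks as the temperature cools
        accept = cost < current_cost or random.random() < math.exp(
            (current_cost - cost) / max(temp, 1e-9))
        if accept:
            current, current_cost = candidate, cost
        if current_cost < best_cost:
            best, best_cost = current, current_cost
        temp *= cooling
    return best
```

Running the loop several times from different seeds would naturally yield multiple alternative repaired models, in line with the alternatives the abstract mentions.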

Relevance: 30.00%

Abstract:

Without the virtually free services of nature like clean air and water, humans would not last long. Natural systems can be incorporated in existing urban structures or spaces to add public amenity, mitigate the heat island effect, reduce pollution, add oxygen, and ensure water, electricity and food security in urban areas. There are many eco-solutions that could radically reduce resource consumption and pollution and even provide surplus ecosystem services in the built environment at little or no operational cost, if adequately supported by design. This is the second part of a two-part paper that explains what eco-services are, then provides examples of how design can generate natural as well as social capital. Using examples of actual and notional solutions, both papers set out to challenge designers to 'think again', and invent ways of creating net positive environmental gains through built environment design.

Relevance: 30.00%

Abstract:

Commentary on: Carey JV. Literature review: should antipyretic therapies routinely be administered to patients with [corrected] fever? J Clin Nurs 2010;19:2377–93.

Relevance: 30.00%

Abstract:

Violence in nightclubs is a serious problem that has prompted the Australian government to launch multimillion-dollar drinking campaigns. Research on nightclub violence has focused on identifying contributing social and environmental factors, with many studies concentrating on specific issues ranging from financial standpoints, such as effective target marketing strategies, to the legal obligations of supplying alcohol and abiding by regulatory conditions. Moreover, existing research suggests that there is no single factor that directly affects the rate of violence in licensed venues. As detailed in the review paper of Koleczko and Garcia Hansen (2011), there is little research about the physical environment of nightclubs and which specific design properties can be used to set design standards that improve nightclub environments and reduce patron violence. The current study seeks to address this omission by reporting on a series of interviews with participants from the management and design domains. Both featured case studies are located in Fortitude Valley, a Mecca for party-goers and the busiest nightclub district in Queensland. The results and analysis support the conclusion that a number of elements of the physical environment influence elevated patron aggression and assault.

Relevance: 30.00%

Abstract:

In this paper, we present a method for the recovery of position and absolute attitude (including pitch, roll and yaw) using a novel fusion of monocular Visual Odometry and GPS measurements in a similar manner to a classic loosely-coupled GPS/INS error state navigation filter. The proposed filter does not require additional restrictions or assumptions such as platform-specific dynamics, map-matching, feature-tracking, visual loop-closing, gravity vector or additional sensors such as an IMU or magnetic compass. An observability analysis of the proposed filter is performed, showing that the scale factor, position and attitude errors are fully observable under acceleration that is non-parallel to the velocity vector in the navigation frame. The observability properties of the proposed filter are demonstrated using numerical simulations. We conclude the article with an implementation of the proposed filter using real flight data collected from a Cessna 172 equipped with a downwards-looking camera and GPS, showing the feasibility of the algorithm in real-world conditions.
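
As an illustration of the loosely-coupled structure, the sketch below fuses up-to-scale visual odometry increments with GPS position fixes in a small Kalman filter whose state is position plus the monocular scale factor. The actual filter in the paper also estimates full attitude and uses an error-state formulation; the names and noise values here are assumptions chosen only to show the predict/update pattern and how the GPS residual corrects the scale estimate.

```python
import numpy as np

def fuse_vo_gps(vo_deltas, gps_fixes, q=1e-3, r=2.0):
    """Toy loosely-coupled GPS/VO filter. State x = [px, py, pz, s],
    where s is the unknown monocular scale factor."""
    x = np.array([*gps_fixes[0], 1.0])            # start at first fix, scale 1
    P = np.diag([r, r, r, 10.0])                  # large initial scale variance
    Q = q * np.eye(4)                             # process noise
    R = r * np.eye(3)                             # GPS measurement noise
    H = np.hstack([np.eye(3), np.zeros((3, 1))])  # GPS observes position only
    track = []
    for d, z in zip(vo_deltas, gps_fixes[1:]):
        # predict: integrate the up-to-scale VO increment
        x[:3] += x[3] * d
        F = np.eye(4)
        F[:3, 3] = d                              # position is sensitive to scale
        P = F @ P @ F.T + Q
        # update: correct position and scale with the GPS fix
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        track.append(x.copy())
    return np.array(track)
```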

Relevance: 30.00%

Abstract:

Background: Few studies have specifically investigated the functional effects of uncorrected astigmatism on measures of reading fluency. This information is important to provide evidence for the development of clinical guidelines for the correction of astigmatism. Methods: Participants included 30 visually normal young adults (mean age 21.7 ± 3.4 years). Distance and near visual acuity and reading fluency were assessed with optimal spectacle correction (baseline) and for two levels of astigmatism, 1.00DC and 2.00DC, at two axes (90° and 180°) to induce both against-the-rule (ATR) and with-the-rule (WTR) astigmatism. Reading and eye movement fluency were assessed using standardized clinical measures, including the test of Discrete Reading Rate (DRR) and the Developmental Eye Movement (DEM) test, and by recording eye movement patterns with the Visagraph (III) during reading for comprehension. Results: Both distance and near acuity were significantly decreased compared to baseline for all of the astigmatic lens conditions (p < 0.001). Reading speed with the DRR for N16 print size was significantly reduced for the 2.00DC ATR condition (a reduction of 10%), while for smaller text sizes reading speed was reduced by up to 24% for the 1.00DC ATR condition and for the 2.00DC condition in both axis directions (p < 0.05). For the DEM, sub-test completion speeds were significantly impaired, with the 2.00DC condition affecting both vertical and horizontal times and the 1.00DC ATR condition affecting only horizontal times (p < 0.05). Visagraph reading eye movements were not significantly affected by the induced astigmatism. Conclusions: Induced astigmatism impaired performance on selected tests of reading fluency, with ATR astigmatism having significantly greater effects on performance than WTR astigmatism, even for relatively small amounts of astigmatic blur of 1.00DC. These findings have implications for the minimal prescribing criteria for astigmatic refractive errors.

Relevance: 30.00%

Abstract:

The quality of conceptual business process models is highly relevant for the design of corresponding information systems. In particular, a precise measurement of model characteristics can be beneficial from a business perspective, helping to save costs thanks to early error detection. This is just as true from a software engineering point of view, where models facilitate stakeholder communication and software system design. Research has investigated several proposed measures for business process models, from a rather correlational perspective. This is helpful for understanding, for example, size and complexity as general driving forces of error probability. Yet design decisions usually have to build on thresholds which can reliably indicate that a certain counter-action has to be taken. This cannot be achieved only by providing measures; it requires a systematic identification of effective and meaningful thresholds. In this paper, we derive thresholds for a set of structural measures for predicting errors in conceptual process models. To this end, we use a collection of 2,000 business process models from practice as a means of determining thresholds, applying an adaptation of the ROC curves method. Furthermore, an extensive validation of the derived thresholds was conducted using 429 EPC models from an Australian financial institution. Finally, significant thresholds were adapted to refine existing modeling guidelines in a quantitative way.
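
To illustrate how a threshold can be derived from a model collection, the sketch below applies a standard ROC cutoff rule (Youden's J statistic) to a single structural measure. The paper uses its own adaptation of the ROC curves method, so both the cutoff rule and the toy numbers here are assumptions for illustration.

```python
import numpy as np
from sklearn.metrics import roc_curve

def derive_threshold(measure_values, has_error):
    """Threshold for one structural measure that best separates
    error-prone models, via Youden's J = sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(has_error, measure_values)
    return thresholds[np.argmax(tpr - fpr)]

# hypothetical data: model sizes and whether each model contained an error
sizes = np.array([12, 25, 48, 60, 75, 90, 110, 130])
errors = np.array([0, 0, 0, 1, 0, 1, 1, 1])
print(derive_threshold(sizes, errors))  # size above which to flag a model
```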

Relevance: 30.00%

Abstract:

Background/aims: Access to appropriate health care following an acute cardiac event is important for positive outcomes. The aim of the Cardiac ARIA index was to derive an objective, comparable, geographic measure reflecting access to cardiac services across Australia. Methods: Geographic Information Systems (GIS) were used to model a numeric-alpha index based on acute management from the onset of symptoms to return to the community. Acute time frames were calculated to include time for an ambulance to arrive, assess and load the patient, and travel to the facility by road at 40–80 kph. Results: The acute phase of the index was modelled into five categories: 1 [24/7 percutaneous cardiac intervention (PCI) ≤1 h]; 2 [24/7 PCI 1–3 h, and PCI less than an additional hour beyond the nearest accident and emergency room (A&E)]; 3 [nearest A&E ≤3 h (no 24/7 PCI within an extra hour)]; 4 [nearest A&E 3–12 h (no 24/7 PCI within an extra hour)]; 5 [nearest A&E 12–24 h (no 24/7 PCI within an extra hour)]. Discharge care was modelled into three categories based on time to a cardiac rehabilitation program, retail pharmacy, pathology services, hospital, GP or remote clinic: (A) all services ≤30 min; (B) >30 min and ≤60 min; (C) >60 min. Examples of the index indicate that the majority of population locations within capital cities were category 1A, Alice Springs and Byron Bay were 3A, and the Northern Territory town of Maningrida had minimal access to cardiac services with an index ranking of 5C. Conclusion: The Cardiac ARIA index provides an invaluable tool to inform appropriate strategies for the use of scarce cardiac resources.
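
The category definitions translate almost directly into a lookup rule. The sketch below re-implements them for a single location; in the real index the travel times come from the GIS road-network model, and the handling of locations beyond the listed ranges is our assumption.

```python
def cardiac_aria(pci_hrs, ae_hrs, slowest_service_mins):
    """Illustrative re-implementation of the Cardiac ARIA categories.

    pci_hrs              : road time to nearest 24/7 PCI facility (hours)
    ae_hrs               : road time to nearest A&E (hours)
    slowest_service_mins : slowest of the discharge services (minutes)
    """
    if pci_hrs <= 1:
        acute = "1"
    elif pci_hrs <= 3 and pci_hrs - ae_hrs <= 1:
        acute = "2"                 # PCI at most an hour beyond the A&E
    elif ae_hrs <= 3:
        acute = "3"
    elif ae_hrs <= 12:
        acute = "4"
    else:
        acute = "5"                 # 12-24 h; >24 h also mapped here (assumed)
    if slowest_service_mins <= 30:
        discharge = "A"
    elif slowest_service_mins <= 60:
        discharge = "B"
    else:
        discharge = "C"
    return acute + discharge

print(cardiac_aria(0.5, 0.3, 20))   # capital-city profile -> "1A"
```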

Relevance: 30.00%

Abstract:

Accurate and efficient thermal-infrared (IR) camera calibration is important for advancing computer vision research within the thermal modality. This paper presents an approach for geometrically calibrating individual and multiple cameras in both the thermal and visible modalities. The proposed technique can be used to correct for lens distortion and to simultaneously reference both visible and thermal-IR cameras to a single coordinate frame. The most popular existing approach for the geometric calibration of thermal cameras uses a printed chessboard heated by a flood lamp, and is comparatively inaccurate and difficult to execute. Additionally, software toolkits provided for calibration are either unsuitable for this task or require substantial manual intervention. A new geometric mask with high thermal contrast, which requires no flood lamp, is presented as an alternative calibration pattern. Calibration points on the pattern are then accurately located using a clustering-based algorithm which utilizes the maximally stable extremal region detector. This algorithm is integrated into an automatic end-to-end system for calibrating single or multiple cameras. The evaluation shows that using the proposed mask achieves a mean reprojection error up to 78% lower than that obtained with a heated chessboard. The effectiveness of the approach is further demonstrated by using it to calibrate two multiple-camera, multiple-modality setups. Source code and binaries for the developed software are provided on the project Web site.
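
A heavily simplified sketch of such a pipeline is given below, using OpenCV's MSER detector to locate high-contrast mask features and the stock calibrateCamera routine for the intrinsics. The paper's clustering-based point localisation and automatic correspondence ordering are replaced here by a naive count check, so this illustrates the structure only, not the released toolkit.

```python
import cv2
import numpy as np

def detect_mask_points(gray):
    """Centroids of MSER regions as candidate calibration points.
    (The paper clusters MSER output to localise points robustly;
    taking raw centroids is a simplification.)"""
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)
    return np.array([r.mean(axis=0) for r in regions], dtype=np.float32)

def calibrate(image_files, pattern_points):
    """Intrinsics from images of the geometric mask.
    pattern_points : (M, 3) known 3-D feature locations on the mask."""
    obj_pts, img_pts, size = [], [], None
    for path in image_files:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        size = gray.shape[::-1]
        pts = detect_mask_points(gray)
        if len(pts) == len(pattern_points):  # naive: assumes same ordering
            obj_pts.append(pattern_points.astype(np.float32))
            img_pts.append(pts.reshape(-1, 1, 2))
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size,
                                             None, None)
    return rms, K, dist                      # reprojection error, intrinsics
```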

Relevance: 30.00%

Abstract:

Nutrition interventions in the form of both self-management education and individualised diet therapy are considered essential for the long-term management of type 2 diabetes mellitus (T2DM). The measurement of diet is essential to inform, support and evaluate nutrition interventions in the management of T2DM. Barriers inherent within health care settings and systems limit ongoing access to personnel and resources, while traditional prospective methods of assessing diet are burdensome for the individual and often result in changes in typical intake to facilitate recording. This thesis investigated the inclusion of information and communication technologies (ICT) to overcome limitations of current approaches to the nutritional management of T2DM, in particular the development, trial and evaluation of the Nutricam dietary assessment method (NuDAM), consisting of a mobile phone photo/voice application to assess nutrient intake in a free-living environment with older adults with T2DM.

Study 1: Effectiveness of an automated telephone system in promoting change in dietary intake among adults with T2DM. The effectiveness of an automated telephone system, Telephone-Linked Care (TLC) Diabetes, designed to deliver self-management education was evaluated in terms of promoting dietary change in adults with T2DM and sub-optimal glycaemic control. In this secondary data analysis, independent of the larger randomised controlled trial, complete data were available for 95 adults (59 male; mean age (±SD) = 56.8±8.1 years; mean BMI (±SD) = 34.2±7.0 kg/m2). The treatment effect showed reductions in total fat of 1.4% and saturated fat of 0.9% of energy intake, in body weight of 0.7 kg, and in waist circumference of 2.0 cm. In addition, a significant increase in the nutrition self-efficacy score of 1.3 (p < 0.05) was observed in the TLC group compared to the control group. The modest trends observed in this study indicate that the TLC Diabetes system does support the adoption of positive nutrition behaviours as a result of diabetes self-management education; however, caution must be applied in interpreting the results due to the inherent limitations of the dietary assessment method used. The decision to use a closed-list FFQ with known bias may have influenced the accuracy of reported dietary intake in this instance. This study provided an example of the methodological challenges of measuring changes in absolute diet using an FFQ, and reaffirmed the need for novel prospective assessment methods capable of capturing natural variance in usual intakes.

Study 2: The development and trial of the NuDAM recording protocol. The feasibility of the Nutricam mobile phone photo/voice dietary record was evaluated in 10 adults with T2DM (6 male; age = 64.7±3.8 years; BMI = 33.9±7.0 kg/m2). Intake was recorded over a 3-day period using both Nutricam and a written estimated food record (EFR). Compared to the EFR, the Nutricam device was found to be acceptable among subjects; however, energy intake was under-recorded using Nutricam (-0.6±0.8 MJ/day; p < 0.05). Beverages and snacks were the items most frequently not recorded using Nutricam, but forgotten meals contributed the greatest difference in energy intake between records. In addition, the quality of the dietary data recorded using Nutricam was unacceptable for just under one-third of entries. It was concluded that an additional mechanism was necessary to complement the dietary information collected via Nutricam.
Modifications were made to the method to allow for clarification of Nutricam entries and probing for forgotten foods during a brief phone call to the subject the following morning. The revised recording protocol was evaluated in Study 4.

Study 3: The development and trial of the NuDAM analysis protocol. Part A explored the effect of the type of portion size estimation aid (PSEA) on the error associated with quantifying four portions of 15 single food items contained in photographs. Seventeen dietetic students (1 male; age = 24.7±9.1 years; BMI = 21.1±1.9 kg/m2) estimated all food portions on two occasions: without aids and with aids (food models or reference food photographs). Overall, the use of a PSEA significantly reduced the mean (±SD) group error between estimates compared to no aid (-2.5±11.5% vs. 19.0±28.8%; p < 0.05). The type of PSEA (i.e. food models vs. reference food photographs) did not have a notable effect on the group estimation error (-6.7±14.9% vs. 1.4±5.9%, respectively; p = 0.321). This exploratory study provided evidence that the use of aids in general, rather than their type, was more effective in reducing estimation error. The findings guided the development of the Dietary Estimation and Assessment Tool (DEAT) for use in the analysis of the Nutricam dietary record.

Part B evaluated the effect of the DEAT on the error associated with quantifying two 3-day Nutricam dietary records in a sample of 29 dietetic students (2 males; age = 23.3±5.1 years; BMI = 20.6±1.9 kg/m2). Subjects were randomised into two groups: Group A and Group B. For Record 1, use of the DEAT (Group A) resulted in a smaller error compared to estimations made without the tool (Group B) (17.7±15.8%/day vs. 34.0±22.6%/day, respectively; p = 0.331). In comparison, all subjects used the DEAT to estimate Record 2, with the resultant error similar between Groups A and B (21.2±19.2%/day vs. 25.8±13.6%/day, respectively; p = 0.377). In general, the moderate estimation error associated with quantifying food items did not translate into clinically significant differences in the nutrient profile of the Nutricam dietary records; only amorphous foods were notably over-estimated in energy content without the use of the DEAT (57 kJ/day vs. 274 kJ/day; p < 0.001). A large proportion (89.6%) of the group found the DEAT helpful when quantifying food items contained in the Nutricam dietary records. Use of the DEAT reduced quantification error, minimising any potential effect on the estimation of energy and macronutrient intake.

Study 4: Evaluation of the NuDAM. The accuracy and inter-rater reliability of the NuDAM in assessing energy and macronutrient intake was evaluated in a sample of 10 adults (6 males; age = 61.2±6.9 years; BMI = 31.0±4.5 kg/m2). Intake recorded using both the NuDAM and a weighed food record (WFR) was coded by three dietitians and compared with an objective measure of total energy expenditure (TEE) obtained using the doubly labelled water technique. At the group level, energy intake (EI) was under-reported to a similar extent using both methods: the ratio of EI:TEE was 0.76±0.20 for the NuDAM and 0.76±0.17 for the WFR. At the individual level, four subjects reported implausible levels of energy intake using the WFR, compared to three using the NuDAM. Overall, moderate to high correlation coefficients (r = 0.57-0.85) were found across energy and macronutrients, except fat (r = 0.24), between the two dietary measures.
High agreement was observed between dietitians for estimates of energy and macronutrients derived from both the NuDAM (ICC = 0.77-0.99; p < 0.001) and the WFR (ICC = 0.82-0.99; p < 0.001). All subjects preferred using the NuDAM over the WFR to record intake and were willing to use the novel method again over longer recording periods.

This research program explored two novel approaches which utilised distinct technologies to aid in the nutritional management of adults with T2DM. In particular, this thesis makes a significant contribution to the evidence base surrounding the use of PhRs through the development, trial and evaluation of a novel mobile phone photo/voice dietary record. The NuDAM is an extremely promising advancement in the nutritional management of individuals with diabetes and other chronic conditions. Future applications lie in integrating the NuDAM with other technologies to facilitate practice across the remaining stages of the nutrition care process.

Relevance: 30.00%

Abstract:

For facial expression recognition systems to be applicable in the real world, they need to be able to detect and track a previously unseen person's face and its facial movements accurately in realistic environments. A highly plausible solution involves performing a "dense" form of alignment, where 60-70 fiducial facial points are tracked with high accuracy. The problem is that, in practice, this type of dense alignment had so far been impossible to achieve in a generic sense, mainly due to poor reliability and robustness. Instead, many expression detection methods have opted for a "coarse" form of face alignment, followed by an application of a biologically inspired appearance descriptor such as the histogram of oriented gradients (HOG) or Gabor magnitudes. Encouragingly, recent advances in a number of dense alignment algorithms have demonstrated both high reliability and accuracy for unseen subjects [e.g., constrained local models (CLMs)]. This begs the question: aside from countering illumination variation, what do these appearance descriptors do that standard pixel representations do not? In this paper, we show that, when close-to-perfect alignment is obtained, there is no real benefit in employing these different appearance-based representations (under consistent illumination conditions). In fact, when misalignment does occur, we show that these appearance descriptors do work well by encoding robustness to alignment error. For this work, we compared two popular methods for dense alignment (subject-dependent active appearance models versus subject-independent CLMs) on the task of action-unit detection. These comparisons were conducted through a battery of experiments across various publicly available data sets (i.e., CK+, Pain, M3, and GEMEP-FERA). We also report our performance in the recent 2011 Facial Expression Recognition and Analysis Challenge for the subject-independent task.
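
For concreteness, the snippet below computes the kind of HOG appearance descriptor discussed above for an aligned face patch, using scikit-image; the parameter values are common defaults rather than the paper's exact settings. The block-wise local normalisation in the final step is what encodes the robustness to illumination and small alignment error that the paper analyses.

```python
import numpy as np
from skimage.feature import hog

def appearance_descriptor(face_patch):
    """HOG descriptor of an aligned (e.g. 64x64) grayscale face patch."""
    return hog(face_patch,
               orientations=8,            # gradient orientation bins
               pixels_per_cell=(8, 8),    # local pooling region
               cells_per_block=(2, 2),    # blocks for local normalisation
               block_norm='L2-Hys')

patch = np.random.rand(64, 64)            # stand-in for an aligned face
print(appearance_descriptor(patch).shape) # one fixed-length feature vector
```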

Relevance: 30.00%

Abstract:

Increased use of powered two-wheelers (PTWs) often underlies increases in the number of reported crashes, promoting research into PTW safety. PTW riders are overrepresented in crash and injury statistics relative to exposure and, as such, are considered vulnerable road users. PTW use has increased substantially over the last decade in many developed countries. One such country is Australia, where moped and scooter use has increased at a faster rate than motorcycle use in recent years. Increased moped use is particularly evident in the State of Queensland, which is one of four Australian jurisdictions where moped riding is permitted for car licence holders and a motorcycle licence is not required. A moped is commonly a small motor scooter and is limited to a maximum design speed of 50 km/h and a maximum engine cylinder capacity of 50 cubic centimetres. Scooters exceeding either of these specifications are classed as motorcycles in all Australian jurisdictions. While an extensive body of knowledge exists on motorcycle safety, some of which is relevant to moped and scooter safety, the latter PTW types have received comparatively little focused research attention. Much of the research on moped safety to date has been conducted in Europe, where mopeds have been popular since the mid-20th century, while some studies have also been conducted in the United States. This research is of limited relevance to Australia due to socio-cultural, economic, regulatory and environmental differences. Moreover, while some studies have compared motorcycles to mopeds in terms of safety, no research to date has specifically examined the differences and similarities between mopeds and larger scooters, or between larger scooters and motorcycles. To address the need for a better understanding of moped and scooter use and safety, the current program of research involved three complementary studies designed to achieve the following aims: (1) develop better knowledge and understanding of moped and scooter usage trends and patterns; and (2) determine the factors leading to differences in moped, scooter and motorcycle safety. Study 1 involved six-monthly observations of PTW types in inner-city parking areas of Queensland's capital city, Brisbane, to monitor and quantify the types of PTW in use over a two-year period. Study 2 involved an analysis of Queensland PTW crash and registration data, primarily comparing the police-reported crash involvement of mopeds, scooters and motorcycles over a five-year period (N = 7,347). Study 3 employed both qualitative and quantitative methods to examine moped and scooter usage in two components: (a) four focus group discussions with Brisbane-based Queensland moped and scooter riders (N = 23); and (b) a state-wide survey of Queensland moped and scooter riders (N = 192). Study 1 found that of the PTW types parked in inner-city Brisbane over the study period (N = 2,642), more than one third (36.1%) were mopeds or larger scooters. The number of PTWs observed increased at each six-monthly phase, but there were no significant changes in the proportions of PTW types observed across study phases. There were no significant differences in the proportions or numbers of PTW types observed by season. Study 2 revealed some important differences between mopeds, scooters and motorcycles in terms of safety and usage through analysis of crash and registration data. All Queensland PTW registrations doubled between 2001 and 2009, but there was an almost fifteen-fold increase in moped registrations.
Mopeds subsequently increased as a proportion of Queensland registered PTWs from 1.2 percent to 8.8 percent over this nine-year period. Moped and scooter crashes increased at a faster rate than motorcycle crashes over the five-year study period from July 2003 to June 2008, reflecting their relatively greater increase in usage. Crash rates per 10,000 registrations for the study period were only slightly higher for mopeds (133.4) than for motorcycles and scooters combined (124.8), but estimated crash rates per million vehicle kilometres travelled were higher for mopeds (6.3) than for motorcycles and scooters (1.7). While the number of crashes increased for each PTW type over the study period, the rate of crashes per 10,000 registrations declined by 40 percent for mopeds compared with 22 percent for motorcycles and scooters combined. Moped and scooter crashes were generally less severe than motorcycle crashes, and this was related to the particular crash characteristics of the PTW types rather than to the PTW types themselves. Compared to motorcycle and moped crashes, scooter crashes were less likely to be single-vehicle crashes, to involve a speeding or impaired rider, to involve poor road conditions, or to be attributed to rider error. Scooter and moped crashes were more likely than motorcycle crashes to occur on weekdays, in lower speed zones and at intersections. Scooter riders were older on average (39) than moped (32) and motorcycle (35) riders, while moped riders were more likely to be female (36%) than scooter (22%) or motorcycle (7%) riders. The licence characteristics of scooter and motorcycle riders were similar, with moped riders more likely to be licensed outside of Queensland and less likely to hold a full or open licence. The PTW type could not be identified in 15 percent of all cases, indicating a need for more complete recording of vehicle details in the registration data. The focus groups in Study 3a and the survey in Study 3b suggested that moped and scooter riders are a heterogeneous population in terms of demographic characteristics, riding experience, and knowledge and attitudes regarding safety and risk. The self-reported crash involvement of Study 3b respondents suggests that most moped and scooter crashes result in no injury or minor injury and are not reported to police. Study 3 provided some explanation for the differences observed in Study 2 between mopeds and scooters in terms of crash involvement. On the whole, scooter riders were older, more experienced, and more likely to have undertaken rider training and to value rider training programs. Scooter riders were also more likely to use protective clothing and to seek out safety-related information. This research has some important practical implications regarding moped and scooter use and safety. While mopeds and scooters are generally similar in terms of usage, and their usage has increased, scooter riders appear to be safer than moped riders due to some combination of superior skills and safer riding behaviour. It is reasonable to expect that mopeds and scooters will remain popular in Queensland in future and that their usage may further increase, along with that of motorcycles. Future policy and planning should consider potential options for encouraging moped riders to acquire better riding skills and greater safety awareness. While rider training and licensing appears an obvious potential countermeasure, the effectiveness of rider training has not been established and other options should also be strongly considered.
Such options might include rider education and safety promotion, while interventions could also target other road users and urban infrastructure. Future research is warranted in regard to moped and scooter safety, particularly where the use of those PTWs has increased substantially from low levels. Research could address areas such as rider training and licensing (including program evaluations), the need for more detailed and reliable data (particularly crash and exposure data), protective clothing use, risks associated with lane splitting and filtering, and tourist use of mopeds. Some of this research would likely be relevant to motorcycle use and safety, as well as that of mopeds and scooters.