Abstract:
Experimentally obtained Mg-SiO smokes were studied by analytical electron microscopy using the same samples that had been previously characterized by repeated infrared spectroscopy. Analytical electron microscopy shows that unannealed smokes contain some degree of microcrystallinity, which increases with increased annealing for up to 30 hr. An SiO2 polymorph (tridymite) and MgO may form contemporaneously as a result of growth of forsterite (Mg2SiO4) microcrystallites in the initially nonstoichiometric smokes. After 4 hr annealing, forsterite and tridymite react to form enstatite (MgSiO3). We suggest that infrared spectroscopy and X-ray diffraction analysis should be complemented by detailed analytical electron microscopy to detect budding crystallinity in vapor-phase condensates.
Abstract:
The aim of children's vision screenings is to detect visual problems that are common in this age category through valid and reliable tests. Nevertheless, the cost-effectiveness of paediatric vision screenings, the nature of the tests included in the screening batteries and the ideal screening age have been the cause of much debate in Australia and worldwide. Therefore, the purpose of this review is to report on the current practice of children's vision screenings in Australia and other countries, as well as to evaluate the evidence for and against the provision of such screenings. This was undertaken through a detailed investigation of peer-reviewed publications on this topic. The current review demonstrates that there is no agreed vision screening protocol for children in Australia. This appears to be a result of the lack of strong evidence supporting the benefit of such screenings. While amblyopia, strabismus and, to a lesser extent, refractive error are targeted by many screening programs during pre-school and at school entry, there is less agreement regarding the value of screening for other visual conditions, such as binocular vision disorders, ocular health problems and refractive errors that are less likely to reduce distance visual acuity. In addition, in Australia, little agreement exists on the frequency and coverage of screening programs across states and territories, and the screening programs that are offered are ad hoc and poorly documented. Australian children stand to benefit from improved cohesion and communication between jurisdictions and health professionals to enable equitable provision of validated vision screening services that have the best chance of early detection and intervention for a range of paediatric visual problems.
Abstract:
Circulating tumour cells (CTCs) have attracted much recent interest in cancer research as a potential biomarker and as a means of studying the process of metastasis. It has long been understood that metastasis is a hallmark of malignancy, and conceptual theories on the basis of metastasis from the nineteenth century foretold the existence of a tumour "seed" capable of establishing discrete tumours in the "soil" of distant organs. This prescient "seed and soil" hypothesis accurately predicted the existence of CTCs: microscopic tumour fragments in the blood, at least some of which are capable of forming metastases. However, it is only in recent years that reliable, reproducible methods of CTC detection and analysis have been developed. To date, the majority of studies have employed the CellSearch™ system (Veridex LLC), an immunomagnetic purification method. Other promising techniques include microfluidic filters, isolation of tumour cells by size using microporous polycarbonate filters, and flow cytometry-based approaches. While many challenges still exist, the detection of CTCs in blood is becoming increasingly feasible, giving rise to some tantalizing questions about the use of CTCs as a potential biomarker. CTC enumeration has been used to guide prognosis in patients with metastatic disease and to act as a surrogate marker for disease response during therapy. Other possible uses for CTC detection include prognostication in early-stage patients, identifying patients requiring adjuvant therapy, and surveillance for the detection of relapsing disease. Another exciting possible use for CTC detection assays is the molecular and genetic characterization of CTCs to act as a "liquid biopsy" representative of the primary tumour. Indeed, it has already been demonstrated that it is possible to detect HER2, KRAS and EGFR mutation status in breast, colon and lung cancer CTCs respectively. In the course of this review, we shall discuss the biology of CTCs and their role in metastasis, the most commonly used techniques for their detection, and the evidence to date of their clinical utility, with particular reference to lung cancer.
Abstract:
This paper proposes a technique that supports process participants in making risk-informed decisions, with the aim of reducing process risks. Risk reduction involves decreasing the likelihood of a process fault occurring, as well as its severity. Given a process exposed to risks, e.g. a financial process exposed to a risk of reputation loss, we enact this process and, whenever a process participant needs to provide input to the process, e.g. by selecting the next task to execute or by filling out a form, we prompt the participant with the expected risk that a given fault will occur for the particular input. These risks are predicted by traversing decision trees generated from the logs of past process executions, considering process data, involved resources, task durations and contextual information such as task frequencies. The approach has been implemented in the YAWL system and its effectiveness evaluated. The results show that the process instances executed in the tests complete with substantially fewer faults and with lower fault severities when the recommendations provided by our technique are taken into account.
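To make the prediction step concrete, here is a minimal sketch, assuming a log of completed process instances with a recorded fault outcome: a decision tree is trained on past executions and then queried for the fault probability of a candidate input. It uses scikit-learn rather than the authors' YAWL implementation, and all feature names and values are hypothetical.

```python
# Minimal sketch: learn fault risk from past process executions with a
# decision tree, then score a participant's candidate input.
# Feature names and values are hypothetical, not the YAWL schema.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Each row is one completed process instance from the execution log.
log = pd.DataFrame({
    "next_task":     [0, 1, 0, 1, 1, 0],              # encoded task choice
    "resource":      [2, 0, 1, 2, 0, 1],              # encoded participant
    "task_duration": [5.0, 9.5, 4.2, 8.1, 7.7, 3.9],  # hours
    "fault":         [0, 1, 0, 1, 1, 0],              # fault occurred?
})

features = ["next_task", "resource", "task_duration"]
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(log[features].to_numpy(), log["fault"])

# At run time, before the participant commits an input, show them the
# predicted probability that this input leads to a fault.
candidate = [[1, 0, 8.0]]  # hypothetical input under consideration
risk = tree.predict_proba(candidate)[0][1]
print(f"expected fault risk: {risk:.2f}")
```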
Abstract:
Amongst the most prominent uses of Twitter at present is its role in the discussion of widely televised events: Twitter’s own statistics for 2011, for example, list major entertainment spectacles (the MTV Music Awards, the BET Awards) and sports matches (the UEFA Champions League final, the FIFA Women’s World Cup final) amongst the events generating the most tweets per second during the year (Twitter, 2011). User activities during such televised events constitute a specific, unique category of Twitter use, which differs clearly from the other major events that generate a high rate of tweets per second (such as crises and breaking news, from the Japanese earthquake and tsunami to the death of Steve Jobs), as preliminary research has shown. During such major media events, by contrast, Twitter is used predominantly as a technology of fandom: it serves in the first place as a backchannel to television and other streaming audiovisual media, enabling users to offer their own running commentary on the universally shared media text of the event broadcast as it unfolds live. Centrally, this communion of fans around the shared text is facilitated by the use of Twitter hashtags – unifying textual markers which are now often promoted to prospective audiences by the broadcasters well in advance of the live event itself. This paper examines the use of Twitter as a technology for the expression of shared fandom in the context of a major, internationally televised annual media event: the Eurovision Song Contest. It constitutes a highly publicised, highly choreographed media spectacle whose eventual outcomes are unknown ahead of time, and it attracts a diverse international audience. Our analysis draws on comprehensive datasets for the ‘official’ event hashtags, #eurovision, #esc, and #sbseurovision. Using innovative methods which combine qualitative and quantitative approaches to the analysis of Twitter datasets containing several hundred thousand tweets, we examine overall patterns of participation to discover how audiences express their fandom throughout the event. Minute-by-minute tracking of Twitter activity during the live broadcasts enables us to identify the most resonant moments during each event; we also examine the networks of interaction between participants to detect thematically or geographically determined clusters of interaction, and to identify the most visible and influential participants in each network. Such analysis is able to provide a unique insight into the use of Twitter as a technology for fandom and for what cultural studies research calls ‘audiencing’: the public performance of belonging to the distributed audience for a shared media event. Our work thus contributes to the examination of fandom practices led by Henry Jenkins (2006) and other scholars, and points to Twitter as an important new medium facilitating the connection and communion of such fans.
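As an illustration of the network analysis described above, the following sketch builds an interaction graph from @-mention pairs, detects clusters with community detection, and ranks participants by centrality. It uses networkx; the usernames and edges are invented and are not drawn from the study's hashtag datasets.

```python
# Sketch of the network-analysis step: build an interaction graph from
# @-mentions, detect clusters, and rank participants by centrality.
# Usernames and edges are invented, not the study's data.
import networkx as nx

# (author, mentioned_user) pairs extracted from tweets (hypothetical).
mentions = [
    ("fan_a", "fan_b"), ("fan_a", "fan_c"), ("fan_b", "fan_c"),
    ("fan_d", "fan_e"), ("fan_e", "fan_f"), ("fan_f", "fan_d"),
    ("fan_c", "fan_d"),  # a bridge between the two groups
]

G = nx.Graph()
G.add_edges_from(mentions)

# Thematic or geographic clusters approximated by modularity communities.
communities = nx.algorithms.community.greedy_modularity_communities(G)
print([sorted(c) for c in communities])

# Visibility/influence approximated here by degree centrality.
ranked = sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])
print(ranked[:3])
```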
Abstract:
This paper examines the case of a procurement auction for a single project, in which the breakdown of the winning bid into its component items determines the value of payments subsequently made to the bidder as the work progresses. Unbalanced bidding, or bid skewing, involves the uneven distribution of mark-up among the component items in such a way as to derive increased benefit for the unbalancer without involving any change in the total bid. One form of unbalanced bidding, termed Front Loading (FL), is thought to be widespread in practice. This involves overpricing the work items that occur early in the project and underpricing the work items that occur later in the project in order to enhance the bidder's cash flow. Naturally, auctioneers attempt to protect themselves from the effects of unbalancing—typically reserving the right to reject a bid that has been detected as unbalanced. As a result, models have been developed both to unbalance bids and to detect unbalanced bids, but virtually nothing is known of their use, success or otherwise. This is of particular concern for the detection methods as, without testing, there is no way of knowing the extent to which unbalanced bids remain undetected or balanced bids are falsely detected as unbalanced. This paper reports on a simulation study aimed at demonstrating the likely effects of unbalanced bid detection models in a deterministic environment involving FL unbalancing in a Texas DOT detection setting, in which bids are deemed to be unbalanced if an item exceeds a maximum (or fails to reach a minimum) ‘cut-off’ value determined by the Texas method. A proportion of bids are automatically and maximally unbalanced over a long series of simulated contract projects, and the profits and detection rates of both the balancers and unbalancers are compared. The results show that, as expected, balanced bids are often incorrectly detected as unbalanced, with the rate of (mis)detection increasing with the proportion of FL bidders in the auction. It is also shown that, while the profit for balanced bidders remains the same irrespective of the number of FL bidders involved, the FL bidder's profit increases with the proportion of FL bidders present in the auction. Sensitivity tests show the results to be generally robust, with (mis)detection rates increasing further when there are fewer bidders in the auction and when more data are averaged to determine the baseline value, but being smaller or larger with increased cut-off values and increased cost and estimate variability, depending on the number of FL bidders involved. The FL bidder's expected benefit from unbalancing, on the other hand, increases when there are fewer bidders in the auction. It also increases when the cut-off rate and discount rate are increased, when there is less variability in the costs and their estimates, and when less data are used in setting the baseline values.
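The mechanics of Front Loading and cut-off detection can be sketched as follows, assuming a simple band around a baseline item price; the shift fraction, band limits and prices are illustrative stand-ins, not the Texas DOT parameters.

```python
# Toy sketch of Front Loading (FL) and cut-off detection. Early items
# are overpriced and late items underpriced so the total bid is
# unchanged; a bid is flagged as unbalanced if any item price falls
# outside a cut-off band around a baseline. All numbers illustrative.

def front_load(items, shift=0.3):
    """Move a fraction of the later items' value onto the earlier ones."""
    n = len(items)
    early, late = items[: n // 2], items[n // 2 :]
    moved = sum(late) * shift
    early = [p + moved / len(early) for p in early]
    late = [p - moved / len(late) for p in late]
    return early + late  # same total as the input

def is_unbalanced(bid, baseline, lo=0.75, hi=1.25):
    """Flag the bid if any item lies outside the cut-off band."""
    return any(not (lo * b <= p <= hi * b) for p, b in zip(bid, baseline))

baseline = [100.0, 100.0, 100.0, 100.0]   # averaged item estimates
balanced = [105.0, 95.0, 102.0, 98.0]
skewed = front_load(list(baseline))

print(sum(balanced), sum(skewed))          # identical totals: 400.0 400.0
print(is_unbalanced(balanced, baseline))   # False
print(is_unbalanced(skewed, baseline))     # True: 130 exceeds the 125 cut-off
```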
Abstract:
The rapid increase in the deployment of CCTV systems has led to a greater demand for algorithms that are able to process incoming video feeds. These algorithms are designed to extract information of interest for human operators. During the past several years, there has been a large effort to detect abnormal activities through computer vision techniques. Typically, the problem is formulated as a novelty detection task, where the system is trained on normal data and is required to detect events which do not fit the learned ‘normal’ model. Many researchers have tried various sets of features to train different learning models to detect abnormal behaviour in video footage. In this work we propose using a Semi-2D Hidden Markov Model (HMM) to model the normal activities of people. Outliers of the model with insufficient likelihood are identified as abnormal activities. Our Semi-2D HMM is designed to model both the temporal and spatial causalities of crowd behaviour by assuming that the current state of the Hidden Markov Model depends not only on the previous state in the temporal direction, but also on the previous states of adjacent spatial locations. Two different HMMs are trained to model the vertical and horizontal spatial causal information. Location features, flow features and optical flow textures are used as the features for the model. The proposed approach is evaluated using the publicly available UCSD datasets, and we demonstrate improved performance compared to other state-of-the-art methods.
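The novelty-detection formulation can be sketched as follows: train an HMM on features of normal behaviour and flag sequences whose likelihood falls below a threshold. This uses a plain temporal Gaussian HMM from hmmlearn rather than the proposed Semi-2D HMM, and the features and threshold are illustrative stand-ins.

```python
# Simplified sketch of the novelty-detection formulation: train an HMM
# on "normal" feature sequences, then flag test sequences whose
# likelihood under the model is too low. Plain temporal Gaussian HMM,
# not the paper's Semi-2D HMM; features and threshold are stand-ins.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

# Stand-in for per-frame location/flow features of normal behaviour.
normal_features = rng.normal(loc=0.0, scale=1.0, size=(500, 2))

model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
model.fit(normal_features)

def is_abnormal(seq, threshold=-6.0):
    """Flag a sequence whose average per-frame log-likelihood is low."""
    return model.score(seq) / len(seq) < threshold

usual = rng.normal(0.0, 1.0, size=(50, 2))
odd = rng.normal(8.0, 1.0, size=(50, 2))     # e.g. sudden fast motion
print(is_abnormal(usual), is_abnormal(odd))  # expected: False True
```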
Abstract:
Spatio-temporal interest points are the most popular feature representation in the field of action recognition. A variety of methods have been proposed to detect and describe local patches in video, with several techniques reporting state-of-the-art performance for action recognition. However, the reported results are obtained under different experimental settings with different datasets, making it difficult to compare the various approaches. As a result, we seek to comprehensively evaluate state-of-the-art spatio-temporal features under a common evaluation framework with popular benchmark datasets (KTH, Weizmann) and more challenging datasets such as Hollywood2. The purpose of this work is to provide guidance for researchers when selecting features for different applications with different environmental conditions. In this work we evaluate four popular descriptors (HOG, HOF, HOG/HOF, HOG3D) using a bag-of-visual-features representation and Support Vector Machines (SVMs) for classification. Moreover, we provide an in-depth analysis of local feature descriptors and optimize the codebook sizes for different datasets with different descriptors. In this paper, we demonstrate that motion-based features offer better performance than those that rely solely on spatial information, while features that combine both types of data are more consistent across a variety of conditions, but typically require a larger codebook for optimal performance.
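The evaluation pipeline described above can be sketched as follows: local descriptors are quantized against a k-means codebook, each clip becomes a histogram of visual words, and an SVM performs classification. Random arrays stand in for HOG/HOF-style descriptors, and the sizes are illustrative.

```python
# Sketch of the bag-of-visual-features pipeline: quantize local
# descriptors with a k-means codebook, represent each clip as a
# histogram of visual words, classify with an SVM. Random arrays
# stand in for HOG/HOF/HOG3D descriptors; sizes are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
codebook_size = 16

# Local descriptors of three training clips, one action class each.
train_descriptors = [rng.normal(c, 1.0, size=(30, 8)) for c in (0, 3, 6)]
labels = [0, 1, 2]

# Learn the codebook on the pooled descriptors of all training clips.
kmeans = KMeans(n_clusters=codebook_size, n_init=10, random_state=0)
kmeans.fit(np.vstack(train_descriptors))

def bow_histogram(descriptors):
    """Quantize descriptors and normalize for varying clip lengths."""
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=codebook_size).astype(float)
    return hist / hist.sum()

X = np.array([bow_histogram(d) for d in train_descriptors])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict([bow_histogram(rng.normal(3, 1.0, size=(30, 8)))]))
```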
Abstract:
Background: Understanding the spatial distribution of suicide can inform the planning, implementation and evaluation of suicide prevention activity. This study explored spatial clusters of suicide in Australia, and investigated likely socio-demographic determinants of these clusters. Methods: National suicide and population data at a statistical local area (SLA) level were obtained from the Australian Bureau of Statistics for the period 1999 to 2003. Standardised mortality ratios (SMRs) were calculated at the SLA level, and Geographic Information System (GIS) techniques were applied to investigate the geographical distribution of suicides and detect clusters of high risk in Australia. Results: Male suicide incidence was relatively high in the northeast of Australia and in parts of the east coast, central and southeast inland, compared with the national average. Among the total male population and males aged 15 to 34, Mornington Shire contained the whole or a part of the primary high-risk cluster for suicide, followed by the Bathurst-Melville area, one of the secondary clusters in the north coastal area of the Northern Territory. Other secondary clusters changed with the selection of cluster radius and age group. For males aged 35 to 54 years, only one cluster, in the east of the country, was identified. There was only one significant female suicide cluster, near Melbourne; other SLAs had very few female suicide cases and were not identified as clusters. Male suicide clusters had a higher proportion of Indigenous population and a lower median socio-economic index for areas (SEIFA) than the national average, but their shapes changed with the selection of the maximum cluster radius setting. Conclusion: This study found high-risk suicide clusters at the SLA level in Australia, which appeared to be associated with lower median socio-economic status and a higher proportion of Indigenous population. Future suicide prevention programs should focus on these high-risk areas.
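The SMR underlying the cluster analysis is simply observed deaths divided by the deaths expected if national age-specific rates applied to the area's population. A minimal sketch, with invented counts and rates:

```python
# Minimal sketch of a standardised mortality ratio (SMR) for one SLA:
# observed deaths divided by the deaths expected if national
# age-specific rates applied to the area's population.
# All counts and rates below are invented for illustration.

national_rate = {"15-34": 0.00020, "35-54": 0.00025, "55+": 0.00015}
sla_population = {"15-34": 12000, "35-54": 9000, "55+": 6000}
observed_deaths = 9

expected = sum(national_rate[a] * sla_population[a] for a in national_rate)
smr = observed_deaths / expected
print(f"expected: {expected:.2f}, SMR: {smr:.2f}")  # SMR > 1: elevated risk
```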
Abstract:
Purpose: Virally mediated head and neck cancers (VMHNC) often present with nodal involvement and are generally considered radioresponsive, resulting in the need for a re-planning CT during radiotherapy (RT) in a subset of patients. We sought to identify a high-risk group, based on nodal size, to be evaluated in a future prospective adaptive RT trial. Methodology: Between 2005 and 2010, 121 patients with virally mediated, node-positive nasopharyngeal (EBV-positive) or oropharyngeal (HPV-positive) cancers receiving curative-intent RT were reviewed. Patients were analysed based on the maximum size of the dominant node, with a view to grouping them into varying risk categories for the need for re-planning. The frequency and timing of the re-planning scans were also evaluated. Results: Sixteen nasopharyngeal and 105 oropharyngeal tumours were reviewed. Twenty-five (21%) patients underwent a re-planning CT at a median of 22 (range, 0-29) fractions, with 1 patient requiring re-planning prior to the commencement of treatment. Based on the analysis, patients were subsequently placed into 3 groups: ≤35 mm (Group 1), 36-45 mm (Group 2), ≥46 mm (Group 3). Re-planning CTs were performed in 8/68 (11.8%) of Group 1, 4/28 (14.3%) of Group 2, and 13/25 (52%) of Group 3. The sample size did not allow statistical analysis to detect a significant difference, or to exclude a lack of difference, between the 3 groups. Conclusion: In this series, patients with VMHNC and nodal size ≥46 mm appear to be a high-risk group for the need for re-planning during a course of definitive radiotherapy. This finding will now be tested in a prospective adaptive RT study.
Abstract:
Advances in algorithms for approximate sampling from a multivariate target function have led to solutions to challenging statistical inference problems that would otherwise not be considered by the applied scientist. Such sampling algorithms are particularly relevant to Bayesian statistics, since the target function is the posterior distribution of the unobservables given the observables. In this thesis we develop, adapt and apply Bayesian algorithms, whilst addressing substantive applied problems in biology and medicine as well as other applications. For an increasing number of high-impact research problems, the primary models of interest are often sufficiently complex that the likelihood function is computationally intractable. Rather than discard these models in favour of inferior alternatives, a class of Bayesian "likelihood-free" techniques (often termed approximate Bayesian computation (ABC)) has emerged in the last few years, which avoids direct likelihood computation through repeated sampling of data from the model and comparison of observed and simulated summary statistics. In Part I of this thesis we utilise sequential Monte Carlo (SMC) methodology to develop new algorithms for ABC that are more efficient in terms of the number of model simulations required and are almost black-box, since very little algorithmic tuning is required. In addition, we address the issue of deriving appropriate summary statistics for use within ABC via a goodness-of-fit statistic and indirect inference. Another important problem in statistics is the design of experiments: that is, how one should select the values of the controllable variables in order to achieve some design goal. The presence of parameter and/or model uncertainty is a computational obstacle when designing experiments, and can lead to inefficient designs if not accounted for correctly. The Bayesian framework accommodates such uncertainties in a coherent way. If the amount of uncertainty is substantial, it can be of interest to perform adaptive designs in order to accrue information and make better decisions about future design points. This is of particular interest if the data can be collected sequentially. In a sense, the current posterior distribution becomes the new prior distribution for the next design decision. Part II of this thesis creates new algorithms for Bayesian sequential design that accommodate parameter and model uncertainty using SMC. The algorithms are substantially faster than previous approaches, allowing the simulation properties of various design utilities to be investigated in a more timely manner. Furthermore, the approach offers convenient estimation of Bayesian utilities and other quantities that are particularly relevant in the presence of model uncertainty. Finally, Part III of this thesis tackles a substantive medical problem. A neurological disorder known as motor neuron disease (MND) progressively causes motor neurons to lose the ability to innervate the muscle fibres, causing the muscles to eventually waste away. When this occurs the motor unit effectively ‘dies’. There is no cure for MND, and fatality often results from a lack of muscle strength to breathe. The prognosis for many forms of MND (particularly amyotrophic lateral sclerosis (ALS)) is particularly poor, with patients usually surviving only a small number of years after the initial onset of disease. Measuring the progress of diseases of the motor units, such as ALS, is a challenge for clinical neurologists.
Motor unit number estimation (MUNE) is an attempt to directly assess underlying motor unit loss, rather than relying on indirect techniques such as muscle strength assessment, which is generally unable to detect progression due to the body’s natural attempts at compensation. Part III of this thesis builds upon a previous Bayesian technique, which developed a sophisticated statistical model that takes into account physiological information about motor unit activation and various sources of uncertainty. More specifically, we develop a more reliable MUNE method by applying marginalisation over latent variables in order to improve the performance of a previously developed reversible jump Markov chain Monte Carlo sampler. We make other subtle changes to the model and algorithm to improve the robustness of the approach.
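The likelihood-free idea behind the ABC methods of Part I can be sketched in its most basic, rejection-sampling form: draw parameters from the prior, simulate data from the model, and keep draws whose summary statistic lands close to the observed one. This is plain rejection ABC with invented data, not the thesis's more efficient SMC-based algorithms.

```python
# Sketch of basic rejection ABC: accept prior draws whose simulated
# summary statistic is within a tolerance of the observed summary.
# Data, prior and tolerance are invented for illustration; the thesis
# develops more efficient SMC-based variants of this idea.
import numpy as np

rng = np.random.default_rng(0)

observed = rng.normal(loc=2.0, scale=1.0, size=100)
obs_summary = observed.mean()                 # summary statistic

def simulate(theta, n=100):
    """Model simulator: data given the parameter theta."""
    return rng.normal(loc=theta, scale=1.0, size=n)

accepted = []
for _ in range(20000):
    theta = rng.uniform(-5, 5)                # draw from the prior
    sim_summary = simulate(theta).mean()
    if abs(sim_summary - obs_summary) < 0.1:  # tolerance epsilon
        accepted.append(theta)

# Accepted draws approximate the posterior of theta given the data.
print(len(accepted), np.mean(accepted))
```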
Abstract:
Intelligent Transport Systems (ITS) represent the infrastructure for ubiquitous computing in the car. They encompass a) all kinds of sensing technologies within vehicles as well as road infrastructure, b) wireless communication protocols for the sensed information to be exchanged between vehicles (V2V) and between vehicles and infrastructure (V2I), and c) appropriate intelligent algorithms and computational technologies that process these real-time streams of information. As such, ITS can be considered a game changer. It provides the fundamental basis of new, innovative concepts and applications, similar to the Internet itself. The information sensed or gathered within or around the vehicle has led to a variety of context-aware in-vehicle technologies within the car. A simple example is the Anti-lock Braking System (ABS), which releases the brakes when sensors detect that the wheels are locked. We refer to this type of context awareness as vehicle/technology awareness. V2V and V2I communication, often summarized as V2X, enables the exchange and sharing of sensed information amongst cars. As a result, the vehicle/technology awareness horizon of each individual car is expanded beyond its observable surroundings, paving the way to technologically enhance such already advanced systems. In this chapter, we draw attention to those application areas of sensing and V2X technologies where the human (driver), the human’s behavior and hence the psychological perspective play a more pivotal role. The focal points of our project are illustrated in Figure 1: in all areas, the vehicle first (1) gathers or senses information about the driver. Rather than limiting the use of such information to vehicle/technology awareness, we see great potential for applications in which this sensed information is then (2) fed back to the driver for increased self-awareness. In addition, by using V2V technologies, it can also be (3) passed to surrounding drivers for increased social awareness, or (4), pushed even further, into the cloud, where it is collected and visualized for an increased, collective urban awareness within the urban community at large, which includes all city dwellers.
Abstract:
Bulk and size-fractionated kaolinites from seven localities in Australia, as well as the Clay Minerals Society Source Clays KGa-1 and KGa-2 from Georgia, have been studied by X-ray diffraction (XRD), laser scattering, and electron microscopy in order to understand the variation of particle characteristics across a range of environments and to correlate specific particle characteristics with intercalation behavior. All kaolinites have been intercalated with N-methylformamide (NMF) after pretreatment with hydrazine hydrate, and the relative efficiency of intercalation has been determined using XRD. Intercalate yields of kaolinite:NMF are consistently low for bulk samples that have a high proportion of small-sized particles (i.e., <0.5 µm) and for biphased kaolinites with a high percentage (>60%) of low-defect phase. In general, particle size appears to be a more significant controlling factor than defect distribution in determining the relative yield of kaolinite:NMF intercalate.
Abstract:
Detailed analytical electron microscope (AEM) studies of yellow whiskers produced by chemical vapor deposition (CVD) show that two basic types of whiskers are produced at low temperatures (between 1200°C and 1400°C) and low boron-to-carbon gas ratios. Both whisker types show planar microstructures, such as twin planes and stacking faults, oriented parallel to, or at a rhombohedral angle to, the growth direction. For both whisker types, the presence of droplet-like terminations containing both Si and Ni indicates that the growth process during CVD is via a vapor-liquid-solid (VLS) mechanism.