Abstract:
Sustainability Declarations were introduced by the Queensland State Government on 1 January 2010 as a compulsory measure for all dwelling sales. The purpose of this policy decision was to improve the relevance of sustainability in the home ownership decision-making process. This paper assesses the initial impact of this initiative over its first year in operation. In partnership with the Real Estate Institute of Queensland, real estate agents and salespeople in Queensland were surveyed to determine what impact the Sustainability Declaration has had on home buyer decision making. The level of compliance by the real estate industry was also reviewed. These preliminary findings indicate a high level of compliance from the real estate industry; however, results confirm that sustainability is yet to become a criterion of relevance to the majority of home buyers in Queensland. The Sustainability Declarations are a first step in raising home owners' awareness of the importance of sustainability in housing. Further monitoring of this impact will be carried out over time.
Abstract:
Background: International data on child maltreatment are largely derived from child protection agencies, and predominantly report only substantiated cases of child maltreatment. This approach underestimates the incidence of maltreatment and makes inter-jurisdictional comparisons difficult. There has been a growing recognition of the importance of health professionals in identifying, documenting and reporting suspected child maltreatment. This study aimed to describe the issues around case identification using coded morbidity data, outline methods for selecting and grouping relevant codes, and illustrate patterns of maltreatment identified. Methods: A comprehensive review of the ICD-10-AM classification system was undertaken, including review of index terms, a free text search of tabular volumes, and a review of coding standards pertaining to child maltreatment coding. Identified codes were further categorised into maltreatment types including physical abuse, sexual abuse, emotional or psychological abuse, and neglect. Using these code groupings, one year of Australian hospitalisation data for children under 18 years of age was examined to quantify the proportion of patients identified and to explore the characteristics of cases assigned maltreatment-related codes. Results: Less than 0.5% of children hospitalised in Australia between 2005 and 2006 had a maltreatment code assigned; almost 4% of children with a principal diagnosis of a mental and behavioural disorder, and over 1% of children with an injury or poisoning as the principal diagnosis, had a maltreatment code assigned. The patterns of children assigned definitive T74 codes varied by sex and age group. For males selected as having a maltreatment-related presentation, physical abuse was most commonly coded (62.6% of maltreatment cases), while for females selected as having a maltreatment-related presentation, sexual abuse was the most commonly assigned form of maltreatment (52.9% of maltreatment cases). Conclusion: This study has demonstrated that hospital data could provide valuable information for routine monitoring and surveillance of child maltreatment, even in the absence of population-based linked data sources. With national and international calls for a public health response to child maltreatment, better understanding of, investment in and utilisation of our core national routinely collected data sources will enhance the evidence base needed to support an appropriate response to children at risk.
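As an illustration of the code-grouping step described above, the following hedged Python sketch groups hospital records by ICD-10-AM maltreatment codes and tabulates flagged cases by sex and maltreatment type. The group-to-code mapping shown (beyond the T74 block mentioned in the abstract) and the toy records are assumptions for demonstration, not the study's actual code lists or data.

```python
# Hypothetical sketch: group maltreatment-related ICD-10-AM codes by type and
# tabulate flagged hospitalisations. The code lists below are illustrative
# placeholders, not the study's full groupings.
import pandas as pd

MALTREATMENT_GROUPS = {
    "physical_abuse": ["T74.1"],   # illustrative; the study derived fuller lists
    "sexual_abuse":   ["T74.2"],
    "psychological":  ["T74.3"],
    "neglect":        ["T74.0"],
}

def assign_group(diagnosis_codes):
    """Return the first maltreatment group whose codes appear in a record, else None."""
    for group, codes in MALTREATMENT_GROUPS.items():
        if any(dx.startswith(tuple(codes)) for dx in diagnosis_codes):
            return group
    return None

# `records` stands in for one row per hospitalisation with its ICD-10-AM codes.
records = pd.DataFrame({
    "sex": ["M", "F", "M"],
    "age_group": ["0-4", "10-14", "15-17"],
    "codes": [["S06.0", "T74.1"], ["T74.2"], ["J18.9"]],
})
records["maltreatment"] = records["codes"].apply(assign_group)
flagged = records.dropna(subset=["maltreatment"])
print(flagged.groupby(["sex", "maltreatment"]).size())
```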
Abstract:
While unlicensed driving does not play a direct causative role in road crashes, it represents a major problem for road safety. A particular subgroup of concern is those offenders who continue to drive after having their licence disqualified for drink driving. Surveys of disqualified drivers suggest that driving among this group is relatively common. Method: This paper reports findings from an analysis of the driving records of over 545,000 Queensland drivers who experienced a licence sanction between January 2003 and December 2008. The sample included drivers who were disqualified by a court (e.g., for drink driving); those whose licence had been suspended administratively (e.g., for accumulation of demerit points); and those who were placed on a restricted licence. Results: Overall, 95,461 of the drivers in the sample were disqualified from driving for a drink driving offence. During the period, these drivers were issued with a total of 2,644,619 traffic infringements, with approximately 12% (n = 8,095) convicted of a further drink driving offence while disqualified. Other traffic offences detected during this period included unlicensed driving (18%), driving an unregistered vehicle (27%), speeding (21%), dangerous driving (36%), mobile phone use (35%), non-restraint use (32%), and other moving violations (23%). Offending behaviour was more common among men than women. Conclusions: While licence disqualification has previously been shown to be a relatively effective sanction for managing the behaviour of drink driving offenders, the results of the current study highlight that it is a far from perfect tool, since many offenders continue to commit both drink driving and other traffic offences while disqualified. As such, this study highlights the ongoing need to enhance the detection of disqualified and unlicensed driving in order to deter this behaviour.
Abstract:
We have used a scanning tunneling microscope to manipulate heteroleptic phthalocyaninato, naphthalocyaninato, porphyrinato double-decker molecules at the liquid/solid interface between 1-phenyloctane solvent and graphite. We employed nano-grafting of phthalocyanines with eight octyl chains to place these molecules into a matrix of heteroleptic double-decker molecules; the overlayer structure is epitaxial on graphite. We have also used nano-grafting to place double-decker molecules in matrices of single-layer phthalocyanines with octyl chains. Rectangular scans with a scanning tunneling microscope at low bias voltage removed the adsorbed double-decker molecular layer and substituted the double-decker molecules with bilayer-stacked phthalocyanines from the phenyloctane solution. Single heteroleptic double-decker molecules with lutetium sandwiched between naphthalocyanine and octaethylporphyrin were decomposed with voltage pulses from the probe tip; the top octaethylporphyrin ligand was removed and the bottom naphthalocyanine ligand remained on the surface. A domain of decomposed molecules was formed within the double-decker molecular domain, and the boundary of the decomposed molecular domain self-cured to become rectangular. We demonstrated a molecular “sliding block puzzle” with cascades of double-decker molecules on the graphite surface.
Abstract:
In recent years there has been widespread interest in patterns, perhaps provoked by a realisation that they constitute a fundamental brain activity and underpin many artificial intelligence systems. Theorised concepts of spatial patterns including scale, proportion, and symmetry, as well as social and psychological understandings, are being revived through digital/parametric means of visualisation and production. The effect of pattern as an ornamental device has also changed from applied styling to mediated dynamic effect. The interior has also seen patterned motifs applied to wall coverings, linen, furniture and artefacts with the effect of enhancing aesthetic appreciation, or in some cases causing psychological and/or perceptual distress (Rodemann 1999). While much of this work concerns a repeating array of surface treatment, Philip Ball’s The Self-Made Tapestry: Pattern Formation in Nature (1999) suggests a number of ways that patterns are present at the macro and micro level, both in their formation and disposition. Unlike the conventional notion of a pattern being the regular repetition of a motif (geometrical or pictorial), he suggests that in nature patterns are not necessarily restricted to a repeating array of identical units, but also include those that are similar rather than identical (Ball 1999, 9). From his observations Ball argues that they need not necessarily all be the same size, but do share similar features that we recognise as typical. Examples include self-organized patterns on a grand scale such as sand dunes, or fractal networks caused by rivers on hills and mountains, through to patterns of flow observed in both scientific experiments and the drawings of Leonardo da Vinci.
Abstract:
The present paper proposes a technical analysis method for extracting information about movement patterning in studies of motor control, based on a cluster analysis of movement kinematics. In a tutorial fashion, data from three different experiments are presented to exemplify and validate the technical method. When applied to three different basketball-shooting techniques, the method clearly distinguished between the different patterns. When applied to a cyclical wrist supination-pronation task, the cluster analysis provided the same results as an analysis using the conventional discrete relative phase measure. Finally, when analyzing throwing performance constrained by distance to target, the method grouped movement patterns together according to throwing distance. In conclusion, the proposed technical method provides a valuable tool to improve understanding of coordination and control in different movement models, including multiarticular actions.
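A minimal sketch of the kind of cluster analysis described, assuming trials are represented as resampled joint-angle time series. The toy data, the Euclidean distance metric, and Ward linkage are illustrative choices, not necessarily those used in the paper.

```python
# Minimal sketch (not the paper's implementation): cluster trials by their
# joint-angle time series to group similar movement patterns.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

# Toy data: 12 trials x 100 time samples of a normalised joint angle,
# drawn from two underlying "techniques" plus noise.
t = np.linspace(0, 1, 100)
pattern_a = np.sin(np.pi * t)
pattern_b = np.sin(2 * np.pi * t)
trials = np.vstack([pattern_a + 0.05 * rng.standard_normal(100) for _ in range(6)] +
                   [pattern_b + 0.05 * rng.standard_normal(100) for _ in range(6)])

# Hierarchical clustering on pairwise distances between whole trajectories.
distances = pdist(trials, metric="euclidean")
tree = linkage(distances, method="ward")
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)  # trials generated from the same technique should share a label
```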
Abstract:
The terms ‘literacy’ and ‘technology’ remain highly contentious within the field of education. What is meant by ‘literacy’ and the methods used to measure it vary quite markedly in educational and historical contexts across the world. Similarly, while there is a shared concern to research the potential impact of new information and communication technologies (ICTs) on patterns of teaching and learning, there are major discrepancies about which aspects and uses of these technologies should be incorporated into formal learning environments and how this can be accomplished. While government policy makers tend to regard ICTs in relation to ideas of ‘smartness’, efficiency, and the ‘knowledge’ (or ‘new’) economy, educators and educational researchers promote them as offering new tools for learning and critical thinking and the development of new literacies and socio-cultural identities. This clearly has ramifications for the ways literacy is taught and conceptualised throughout the years of schooling, K-12. Outside school, meanwhile, students engage with ICTs on another level entirely, as tools for the maintenance of social networks, for leisure, and for learning and participating in the cultures of their peers. Whatever the differences in perspective, it remains the case that a society’s dominant understandings about literacy and technology will have significant implications for the development of school curriculum.
Abstract:
Secondary fracture healing in long bones leads to the successive formation of intricate patterns of tissues in the newly formed callus. The main aim of this work was to quantitatively describe the topology of these tissue patterns at different stages of the healing process and to generate averaged images of tissue distribution. This averaging procedure was based on stained histological sections (2, 3, 6, and 9 weeks post-operatively) of 64 sheep with a 3 mm tibial mid-shaft osteotomy, stabilized either with a rigid or a semi-rigid external fixator. Before averaging, histological images were sorted for topology according to six identified tissue patterns. The averaged images were obtained for both fixation types and the lateral and medial side separately. For each case, the result of the averaging procedure was a collection of six images characterizing quantitatively the progression of the healing process. In addition, quantified descriptions of the newly formed cartilage and the bone area fractions (BA/TA) of the bony callus are presented. For all cases, a linear increase in the BA/TA of the bony callus was observed. The slope was greatest in the case of the most rigid stabilization and lowest in the case of the least stiff. This topological description of the progression of bone healing will allow quantitative validation (or falsification) of current mechano-biological theories.
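The following is a hedged sketch of the averaging and quantification steps, assuming sections are available as aligned, labelled tissue maps. The label scheme, image sizes, and data are placeholders, and the linear BA/TA trend is fitted with an ordinary least-squares line.

```python
# Illustrative sketch (not the study's pipeline): average aligned tissue maps per
# healing stage and fit a linear trend to the bone area fraction (BA/TA) over time.
import numpy as np

rng = np.random.default_rng(1)

def bone_area_fraction(tissue_map):
    """BA/TA: fraction of callus pixels labelled as bone (toy labels: 0=background, 1=bone, 2=cartilage)."""
    callus = tissue_map > 0
    return (tissue_map == 1).sum() / callus.sum()

weeks = np.array([2, 3, 6, 9])
averaged_images = {}
mean_fraction = []
for week in weeks:
    # Stand-in for the aligned, topology-sorted sections of one fixation group.
    sections = [rng.integers(0, 3, size=(64, 64)) for _ in range(8)]
    averaged_images[week] = np.mean(sections, axis=0)  # averaged tissue distribution
    mean_fraction.append(np.mean([bone_area_fraction(s) for s in sections]))

slope, intercept = np.polyfit(weeks, mean_fraction, deg=1)
print(f"BA/TA trend: {slope:.3f} per week (intercept {intercept:.3f})")
```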
Abstract:
The Texas Transportation Commission (“the Commission”) is responsible for planning and making policies for the location, construction, and maintenance of a comprehensive system of highways and public roads in Texas. In order for the Commission to carry out its legislative mandate, the Texas Constitution requires that most revenue generated by motor vehicle registration fees and motor fuel taxes be used for constructing and maintaining public roadways and other designated purposes. The Texas Department of Transportation (TxDOT) assists the Commission in executing state transportation policy. It is the responsibility of the legislature to appropriate money for TxDOT’s operation and maintenance expenses. All money authorized to be appropriated for TxDOT’s operations must come from the State Highway Fund (also known as Fund 6, Fund 006, or Fund 0006). The Commission can then use the balance in the fund to fulfill its responsibilities. However, the value of the revenue received in Fund 6 is not keeping pace with growing demand for transportation infrastructure in Texas. Additionally, diversion of revenue to nontransportation uses now exceeds $600 million per year. As shown in Figure 1.1, revenues and expenditures of the State Highway Fund per vehicle mile traveled (VMT) in Texas have remained almost flat since 1993. In the meantime, construction cost inflation has gone up more than 100%, effectively halving the value of expenditure.
Abstract:
This research report documents work conducted by the Center for Transportation (CTR) at The University of Texas at Austin in analyzing the Joint Analysis using the Combined Knowledge (J.A.C.K.) program. This program was developed by the Texas Department of Transportation (TxDOT) to make projections of revenues and expenditures. This research effort was to span from September 2008 to August 2009, but the bulk of the work was completed and presented by December 2008. J.A.C.K. was subsequently renamed TRENDS, but for consistency with the scope of work, the original name is used throughout this report.
Abstract:
This article applies social network analysis techniques to a case study of police corruption in order to produce findings which will assist in corruption prevention and investigation. Police corruption is commonly studied, but sophisticated analytical tools are rarely engaged to add rigour to the field of study. This article analyses the ‘First Joke’, a systemic and long-lasting corruption network in the Queensland Police Force, a state police agency in Australia. It uses the data obtained from a commission of inquiry which exposed the network and develops hypotheses as to the nature of the network’s structure based on the existing literature on dark networks and criminal networks. These hypotheses are tested by entering the data into UCINET and analysing the outcomes through the social network analysis measures of average path distance, centrality, and density. The conclusions reached show that the network has characteristics not predicted by the literature.
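The measures named above can be reproduced in a few lines. The sketch below uses networkx rather than UCINET (which the article used) and a toy edge list rather than the inquiry data, purely to illustrate what average path distance, density, and centrality computations look like.

```python
# Hedged sketch of the reported measures, computed with networkx on a toy
# covert-network edge list (illustrative only; not the 'First Joke' data).
import networkx as nx

edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]  # illustrative
G = nx.Graph(edges)

print("density:", nx.density(G))
print("average path distance:", nx.average_shortest_path_length(G))
print("degree centrality:", nx.degree_centrality(G))
print("betweenness centrality:", nx.betweenness_centrality(G))
```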
Abstract:
Traditionally, transport disadvantage has been identified using accessibility analysis, although the effectiveness of the accessibility planning approach to improving access to goods and services is not known. This paper undertakes a comparative assessment of measures of mobility, accessibility, and participation used to identify transport disadvantage using the concept of activity spaces. Seven-day activity-travel diary data for 89 individuals were collected from two case study areas located in rural Northern Ireland. A spatial analysis was conducted to select the case study areas using criteria derived from the literature. The criteria are related to the levels of area accessibility and area mobility, which are known to influence the nature of transport disadvantage. Using the activity-travel diary data, individuals' weekly as well as day-to-day variations in activity-travel patterns were visualised. A model was developed using the ArcGIS ModelBuilder tool and was run to derive scores related to individual levels of mobility, accessibility, and participation in activities from the geovisualisation. Using these scores, a multiple regression analysis was conducted to identify patterns of transport disadvantage. This study found a positive association between mobility and accessibility, between mobility and participation, and between accessibility and participation in activities. However, area accessibility and area mobility were found to have little impact on individual mobility, accessibility, and participation in activities. Income vis-à-vis car ownership was found to have a significant impact on individual levels of mobility and accessibility, whereas participation in activities was found to be a function of individual levels of income and their occupational status.
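A hedged sketch of the regression step: modelling individual participation scores as a function of mobility, accessibility, income, and car ownership with ordinary least squares. The variable names, sample data, and statsmodels implementation are assumptions for illustration, not the study's actual model specification.

```python
# Illustrative regression of participation on mobility, accessibility, income
# and car ownership; data are synthetic stand-ins for the diary-derived scores.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 89  # the study analysed 89 activity-travel diaries

df = pd.DataFrame({
    "mobility":      rng.normal(0, 1, n),
    "accessibility": rng.normal(0, 1, n),
    "income":        rng.normal(0, 1, n),
    "car_owner":     rng.integers(0, 2, n),
})
df["participation"] = (0.4 * df["mobility"] + 0.3 * df["accessibility"]
                       + 0.5 * df["income"] + rng.normal(0, 0.5, n))

X = sm.add_constant(df[["mobility", "accessibility", "income", "car_owner"]])
model = sm.OLS(df["participation"], X).fit()
print(model.summary())
```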
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence, no “gold standard” test is currently available to assess the tear film integrity. Therefore, improving techniques for the assessment of tear film quality is of clinical significance and the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern. In contrast, when the tear film surface presents irregularities, the pattern also becomes irregular due to light scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film. However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing the tear film dynamics. A set of novel routines has been purposely developed to quantify the changes of the reflected pattern and to extract a time series estimate of the TFSQ from the video recording. The routine extracts from each frame of the video recording a maximized area of analysis, within which a metric of TFSQ is calculated. Initially, two metrics, based on Gabor filter and Gaussian gradient-based techniques, were used to quantify the consistency of the pattern’s local orientation as a measure of TFSQ. These metrics have helped to demonstrate the applicability of HSV to assess the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle, but systematic, degradation of tear film surface quality in the inter-blink interval in contact lens wear. It was also able to clearly show a difference between bare eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on the TFSQ. Subsequently, a larger clinical study was conducted to perform a comparison between HSV and two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, the HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while the LSI appeared to be the most sensitive method for analyzing the tear build-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV. The DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique, identified during the former clinical study, was its lack of sensitivity in quantifying the build-up/formation phase of the tear film cycle. For that reason, an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis was transformed from Cartesian to polar coordinates, converting the concentric-circle pattern into an image of quasi-straight lines from which a block statistic was extracted. This metric showed better sensitivity under low pattern disturbance and improved the performance of the ROC curves. Additionally, a theoretical study, based on ray-tracing techniques and topographical models of the tear film, was undertaken to fully comprehend the HSV measurement and the instrument’s potential limitations. Of special interest was the assessment of the instrument’s sensitivity to subtle topographic changes. The theoretical simulations have helped to provide some understanding of the tear film dynamics; for instance, the model extracted for the build-up phase has provided some insight into the dynamics during this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series have been reported in this thesis. Over the years, different functions have been used to model the time series as well as to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques for modeling the tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to considerations for selecting the appropriate model order to ensure that the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques have been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could be a useful clinical tool to assess tear film surface quality in the future.
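To make the polar-transform/block-processing idea concrete, the sketch below unwraps a synthetic Placido ring image into polar coordinates and scores local pattern disturbance with a simple per-block statistic. The choice of standard deviation as the block statistic, the grid sizes, and the synthetic frame are all assumptions; the thesis's actual metric may differ.

```python
# Hedged sketch of a block-statistics TFSQ metric: unwrap the ring image from
# Cartesian to polar coordinates (rings become quasi-straight lines), then
# summarise local variability block by block.
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(image, center, n_radii=128, n_angles=256):
    """Resample an image onto a (radius, angle) grid around `center`."""
    cy, cx = center
    max_r = min(cy, cx, image.shape[0] - cy, image.shape[1] - cx) - 1
    radii = np.linspace(0, max_r, n_radii)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    r, a = np.meshgrid(radii, angles, indexing="ij")
    rows = cy + r * np.sin(a)
    cols = cx + r * np.cos(a)
    return map_coordinates(image, [rows, cols], order=1)

def block_metric(polar_image, block=(16, 16)):
    """Mean of per-block standard deviations (an assumed disturbance score)."""
    h, w = polar_image.shape
    h -= h % block[0]
    w -= w % block[1]
    blocks = polar_image[:h, :w].reshape(h // block[0], block[0], w // block[1], block[1])
    return blocks.std(axis=(1, 3)).mean()

# Toy frame: concentric rings with a little noise, as reflected by a smooth tear film.
y, x = np.mgrid[0:256, 0:256]
rings = (np.sin(0.4 * np.hypot(y - 128, x - 128)) > 0).astype(float)
frame = rings + 0.05 * np.random.default_rng(0).standard_normal(rings.shape)

tfsq = block_metric(to_polar(frame, center=(128, 128)))
print(f"TFSQ block statistic: {tfsq:.3f}")
```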
Abstract:
It is a big challenge to guarantee the quality of discovered relevance features in text documents for describing user preferences because of the large number of terms, patterns, and noise. Most existing popular text mining and classification methods have adopted term-based approaches. However, they have all suffered from the problems of polysemy and synonymy. Over the years, people have often held the hypothesis that pattern-based methods should perform better than term-based ones in describing user preferences, but many experiments do not support this hypothesis. The innovative technique presented in this paper makes a breakthrough in addressing this difficulty. This technique discovers both positive and negative patterns in text documents as higher-level features in order to accurately weight low-level features (terms) based on their specificity and their distributions in the higher-level features. Substantial experiments using this technique on Reuters Corpus Volume 1 and TREC topics show that the proposed approach significantly outperforms both the state-of-the-art term-based methods underpinned by Okapi BM25, Rocchio or Support Vector Machine and pattern-based methods on precision, recall and F measures.
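A deliberately simplified sketch of the core idea: mine frequent term patterns from the positive (relevant) documents and deploy their support back onto individual terms, so terms are weighted by the patterns they occur in rather than by raw frequency alone. The toy documents, the restriction to pairwise patterns, and the normalisation by pattern length are illustrative assumptions; the paper's method additionally mines negative patterns and uses them to revise the weights.

```python
# Simplified "pattern deploying" illustration, not the paper's algorithm.
from collections import Counter
from itertools import combinations

positive_docs = [                      # toy relevant documents (sets of terms)
    {"mining", "pattern", "text"},
    {"mining", "pattern", "feature"},
    {"pattern", "feature", "text"},
]
min_support = 2

# Mine frequent patterns (here only term pairs, for brevity).
pair_counts = Counter()
for doc in positive_docs:
    for pair in combinations(sorted(doc), 2):
        pair_counts[pair] += 1
frequent_patterns = {p: c for p, c in pair_counts.items() if c >= min_support}

# Deploy pattern support onto terms: a term's weight is the summed support of
# the frequent patterns it belongs to, normalised by pattern length.
term_weights = Counter()
for pattern, support in frequent_patterns.items():
    for term in pattern:
        term_weights[term] += support / len(pattern)

print(frequent_patterns)
print(term_weights.most_common())
```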
Abstract:
In this paper we present pyktree, an implementation of the K-tree algorithm in the Python programming language. The K-tree algorithm provides highly balanced search trees for vector quantization that scale up to very large data sets. Pyktree is highly modular and well suited for rapid prototyping of novel distance measures and centroid representations. It is easy to install and provides a Python package for library use as well as command-line tools.
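As a conceptual illustration only (this is not pyktree's API), the sketch below builds a two-level tree of k-means centroids and quantises a query vector by descending to the nearest centroid at each level, which is the basic idea behind a K-tree's scalable vector quantization.

```python
# Conceptual two-level centroid tree; illustrative assumption, not pyktree code.
import numpy as np

rng = np.random.default_rng(0)

def kmeans(points, k, iters=20):
    """Plain Lloyd's k-means; returns centroids and point assignments."""
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centroids) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

data = rng.standard_normal((1000, 8))
top_centroids, top_labels = kmeans(data, k=4)                        # root level
leaves = [kmeans(data[top_labels == j], k=4)[0] for j in range(4)]   # leaf level

def quantise(vector):
    """Descend the two-level tree and return the nearest leaf centroid."""
    j = np.argmin(((top_centroids - vector) ** 2).sum(-1))
    leaf = leaves[j]
    return leaf[np.argmin(((leaf - vector) ** 2).sum(-1))]

print(quantise(rng.standard_normal(8)))
```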